Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking and atomic updates from Ingo Molnar:
"Main changes in this cycle are:

- Extend atomic primitives with coherent logic op primitives
(atomic_{or,and,xor}()) and deprecate the old partial APIs
(atomic_{set,clear}_mask())

The old ops were incoherent: they had incompatible signatures across
architectures and incomplete support. Now every architecture
supports the primitives consistently (by Peter Zijlstra)

- Generic support for 'relaxed atomics':

- _acquire/release/relaxed() flavours of xchg(), cmpxchg() and {add,sub}_return()
- atomic_read_acquire()
- atomic_set_release()

This came out of porting qrwlock code to arm64 (by Will Deacon)

- Clean up the fragile static_key APIs that were causing repeat bugs,
by introducing a new one:

DEFINE_STATIC_KEY_TRUE(name);
DEFINE_STATIC_KEY_FALSE(name);

which define keys of two distinct types with an initial true or
false value.

Then allow:

static_branch_likely()
static_branch_unlikely()

to take a key of either type and emit the right instruction for the
case. To be able to know the 'type' of the static key we encode it
in the jump entry (by Peter Zijlstra)

- Static key self-tests (by Jason Baron)

- qrwlock optimizations (by Waiman Long)

- small futex enhancements (by Davidlohr Bueso)

- ... and misc other changes"

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (63 commits)
jump_label/x86: Work around asm build bug on older/backported GCCs
locking, ARM, atomics: Define our SMP atomics in terms of _relaxed() operations
locking, include/llist: Use linux/atomic.h instead of asm/cmpxchg.h
locking/qrwlock: Make use of _{acquire|release|relaxed}() atomics
locking/qrwlock: Implement queue_write_unlock() using smp_store_release()
locking/lockref: Remove homebrew cmpxchg64_relaxed() macro definition
locking, asm-generic: Add _{relaxed|acquire|release}() variants for 'atomic_long_t'
locking, asm-generic: Rework atomic-long.h to avoid bulk code duplication
locking/atomics: Add _{acquire|release|relaxed}() variants of some atomic operations
locking, compiler.h: Cast away attributes in the WRITE_ONCE() magic
locking/static_keys: Make verify_keys() static
jump label, locking/static_keys: Update docs
locking/static_keys: Provide a selftest
jump_label: Provide a self-test
s390/uaccess, locking/static_keys: employ static_branch_likely()
x86, tsc, locking/static_keys: Employ static_branch_likely()
locking/static_keys: Add selftest
locking/static_keys: Add a new static_key interface
locking/static_keys: Rework update logic
locking/static_keys: Add static_key_{en,dis}able() helpers
...

+2428 -3588
+3 -1
Documentation/atomic_ops.txt
··· 266 266 atomic_cmpxchg will only satisfy its atomicity semantics as long as all 267 267 other accesses of *v are performed through atomic_xxx operations. 268 268 269 - atomic_cmpxchg must provide explicit memory barriers around the operation. 269 + atomic_cmpxchg must provide explicit memory barriers around the operation, 270 + although if the comparison fails then no memory ordering guarantees are 271 + required. 270 272 271 273 The semantics for atomic_cmpxchg are the same as those defined for 'cas' 272 274 below.
+11
Documentation/fault-injection/fault-injection.txt
··· 15 15 16 16 injects page allocation failures. (alloc_pages(), get_free_pages(), ...) 17 17 18 + o fail_futex 19 + 20 + injects futex deadlock and uaddr fault errors. 21 + 18 22 o fail_make_request 19 23 20 24 injects disk IO errors on devices permitted by setting ··· 117 113 specifies the minimum page allocation order to be injected 118 114 failures. 119 115 116 + - /sys/kernel/debug/fail_futex/ignore-private: 117 + 118 + Format: { 'Y' | 'N' } 119 + default is 'N', setting it to 'Y' will disable failure injections 120 + when dealing with private (address space) futexes. 121 + 120 122 o Boot option 121 123 122 124 In order to inject faults while debugfs is not available (early boot time), ··· 131 121 failslab= 132 122 fail_page_alloc= 133 123 fail_make_request= 124 + fail_futex= 134 125 mmc_core.fail_request=<interval>,<probability>,<space>,<times> 135 126 136 127 How to add new fault injection capability
+3 -3
Documentation/memory-barriers.txt
··· 2327 2327 explicit lock operations, described later). These include: 2328 2328 2329 2329 xchg(); 2330 - cmpxchg(); 2331 2330 atomic_xchg(); atomic_long_xchg(); 2332 - atomic_cmpxchg(); atomic_long_cmpxchg(); 2333 2331 atomic_inc_return(); atomic_long_inc_return(); 2334 2332 atomic_dec_return(); atomic_long_dec_return(); 2335 2333 atomic_add_return(); atomic_long_add_return(); ··· 2340 2342 test_and_clear_bit(); 2341 2343 test_and_change_bit(); 2342 2344 2343 - /* when succeeds (returns 1) */ 2345 + /* when succeeds */ 2346 + cmpxchg(); 2347 + atomic_cmpxchg(); atomic_long_cmpxchg(); 2344 2348 atomic_add_unless(); atomic_long_add_unless(); 2345 2349 2346 2350 These are used for such things as implementing ACQUIRE-class and RELEASE-class
+53 -48
Documentation/static-keys.txt
··· 1 1 Static Keys 2 2 ----------- 3 3 4 - By: Jason Baron <jbaron@redhat.com> 4 + DEPRECATED API: 5 + 6 + The use of 'struct static_key' directly is now DEPRECATED. In addition 7 + static_key_{true,false}() is also DEPRECATED. IE DO NOT use the following: 8 + 9 + struct static_key false = STATIC_KEY_INIT_FALSE; 10 + struct static_key true = STATIC_KEY_INIT_TRUE; 11 + static_key_true() 12 + static_key_false() 13 + 14 + The updated API replacements are: 15 + 16 + DEFINE_STATIC_KEY_TRUE(key); 17 + DEFINE_STATIC_KEY_FALSE(key); 18 + static_branch_likely() 19 + static_branch_unlikely() 5 20 6 21 0) Abstract 7 22 ··· 24 9 performance-sensitive fast-path kernel code, via a GCC feature and a code 25 10 patching technique. A quick example: 26 11 27 - struct static_key key = STATIC_KEY_INIT_FALSE; 12 + DEFINE_STATIC_KEY_FALSE(key); 28 13 29 14 ... 30 15 31 - if (static_key_false(&key)) 16 + if (static_branch_unlikely(&key)) 32 17 do unlikely code 33 18 else 34 19 do likely code 35 20 36 21 ... 37 - static_key_slow_inc(); 22 + static_branch_enable(&key); 38 23 ... 39 - static_key_slow_dec(); 24 + static_branch_disable(&key); 40 25 ... 41 26 42 - The static_key_false() branch will be generated into the code with as little 27 + The static_branch_unlikely() branch will be generated into the code with as little 43 28 impact to the likely code path as possible. 44 29 45 30 ··· 71 56 72 57 For example, if we have a simple branch that is disabled by default: 73 58 74 - if (static_key_false(&key)) 59 + if (static_branch_unlikely(&key)) 75 60 printk("I am the true branch\n"); 76 61 77 62 Thus, by default the 'printk' will not be emitted. 
And the code generated will ··· 90 75 91 76 In order to make use of this optimization you must first define a key: 92 77 93 - struct static_key key; 94 - 95 - Which is initialized as: 96 - 97 - struct static_key key = STATIC_KEY_INIT_TRUE; 78 + DEFINE_STATIC_KEY_TRUE(key); 98 79 99 80 or: 100 81 101 - struct static_key key = STATIC_KEY_INIT_FALSE; 82 + DEFINE_STATIC_KEY_FALSE(key); 102 83 103 - If the key is not initialized, it is default false. The 'struct static_key', 104 - must be a 'global'. That is, it can't be allocated on the stack or dynamically 84 + 85 + The key must be global, that is, it can't be allocated on the stack or dynamically 105 86 allocated at run-time. 106 87 107 88 The key is then used in code as: 108 89 109 - if (static_key_false(&key)) 90 + if (static_branch_unlikely(&key)) 110 91 do unlikely code 111 92 else 112 93 do likely code 113 94 114 95 Or: 115 96 116 - if (static_key_true(&key)) 97 + if (static_branch_likely(&key)) 117 98 do likely code 118 99 else 119 100 do unlikely code 120 101 121 - A key that is initialized via 'STATIC_KEY_INIT_FALSE', must be used in a 122 - 'static_key_false()' construct. Likewise, a key initialized via 123 - 'STATIC_KEY_INIT_TRUE' must be used in a 'static_key_true()' construct. A 124 - single key can be used in many branches, but all the branches must match the 125 - way that the key has been initialized. 102 + Keys defined via DEFINE_STATIC_KEY_TRUE() or DEFINE_STATIC_KEY_FALSE() may 103 + be used in either static_branch_likely() or static_branch_unlikely() 104 + statements. 126 105 127 - The branch(es) can then be switched via: 106 + Branch(es) can be set true via: 128 107 129 - static_key_slow_inc(&key); 108 + static_branch_enable(&key); 109 + 110 + or false via: 111 + 112 + static_branch_disable(&key); 113 + 114 + The branch(es) can then be switched via reference counts: 115 + 116 + static_branch_inc(&key); 130 117 
131 - static_key_slow_dec(&key); 118 + static_branch_dec(&key); 132 119 133 - Thus, 'static_key_slow_inc()' means 'make the branch true', and 134 - 'static_key_slow_dec()' means 'make the branch false' with appropriate 120 + Thus, 'static_branch_inc()' means 'make the branch true', and 121 + 'static_branch_dec()' means 'make the branch false' with appropriate 135 122 reference counting. For example, if the key is initialized true, a 136 - static_key_slow_dec(), will switch the branch to false. And a subsequent 137 - static_key_slow_inc(), will change the branch back to true. Likewise, if the 138 - key is initialized false, a 'static_key_slow_inc()', will change the branch to 139 - true. And then a 'static_key_slow_dec()', will again make the branch false. 140 - 141 - An example usage in the kernel is the implementation of tracepoints: 142 - 143 - static inline void trace_##name(proto) \ 144 - { \ 145 - if (static_key_false(&__tracepoint_##name.key)) \ 146 - __DO_TRACE(&__tracepoint_##name, \ 147 - TP_PROTO(data_proto), \ 148 - TP_ARGS(data_args), \ 149 - TP_CONDITION(cond)); \ 150 - } 151 - 152 - Tracepoints are disabled by default, and can be placed in performance critical 153 - pieces of the kernel. Thus, by using a static key, the tracepoints can have 154 - absolutely minimal impact when not in use. 123 + static_branch_dec(), will switch the branch to false. And a subsequent 124 + static_branch_inc(), will change the branch back to true. Likewise, if the 125 + key is initialized false, a 'static_branch_inc()', will change the branch to 126 + true. And then a 'static_branch_dec()', will again make the branch false. 
155 127 156 128 157 129 4) Architecture level code patching interface, 'jump labels' ··· 152 150 153 151 * #define JUMP_LABEL_NOP_SIZE, see: arch/x86/include/asm/jump_label.h 154 152 155 - * __always_inline bool arch_static_branch(struct static_key *key), see: 153 + * __always_inline bool arch_static_branch(struct static_key *key, bool branch), see: 156 154 arch/x86/include/asm/jump_label.h 155 + 156 + * __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch), 157 + see: arch/x86/include/asm/jump_label.h 157 158 158 159 * void arch_jump_label_transform(struct jump_entry *entry, enum jump_label_type type), 159 160 see: arch/x86/kernel/jump_label.c ··· 178 173 { 179 174 int pid; 180 175 181 - + if (static_key_false(&key)) 176 + + if (static_branch_unlikely(&key)) 182 177 + printk("I am the true branch\n"); 183 178 184 179 rcu_read_lock();
+6
arch/Kconfig
··· 71 71 ( On 32-bit x86, the necessary options added to the compiler 72 72 flags may increase the size of the kernel slightly. ) 73 73 74 + config STATIC_KEYS_SELFTEST 75 + bool "Static key selftest" 76 + depends on JUMP_LABEL 77 + help 78 + Boot time self-test of the branch patching code. 79 + 74 80 config OPTPROBES 75 81 def_bool y 76 82 depends on KPROBES && HAVE_OPTPROBES
+27 -15
arch/alpha/include/asm/atomic.h
··· 29 29 * branch back to restart the operation. 30 30 */ 31 31 32 - #define ATOMIC_OP(op) \ 32 + #define ATOMIC_OP(op, asm_op) \ 33 33 static __inline__ void atomic_##op(int i, atomic_t * v) \ 34 34 { \ 35 35 unsigned long temp; \ 36 36 __asm__ __volatile__( \ 37 37 "1: ldl_l %0,%1\n" \ 38 - " " #op "l %0,%2,%0\n" \ 38 + " " #asm_op " %0,%2,%0\n" \ 39 39 " stl_c %0,%1\n" \ 40 40 " beq %0,2f\n" \ 41 41 ".subsection 2\n" \ ··· 45 45 :"Ir" (i), "m" (v->counter)); \ 46 46 } \ 47 47 48 - #define ATOMIC_OP_RETURN(op) \ 48 + #define ATOMIC_OP_RETURN(op, asm_op) \ 49 49 static inline int atomic_##op##_return(int i, atomic_t *v) \ 50 50 { \ 51 51 long temp, result; \ 52 52 smp_mb(); \ 53 53 __asm__ __volatile__( \ 54 54 "1: ldl_l %0,%1\n" \ 55 - " " #op "l %0,%3,%2\n" \ 56 - " " #op "l %0,%3,%0\n" \ 55 + " " #asm_op " %0,%3,%2\n" \ 56 + " " #asm_op " %0,%3,%0\n" \ 57 57 " stl_c %0,%1\n" \ 58 58 " beq %0,2f\n" \ 59 59 ".subsection 2\n" \ ··· 65 65 return result; \ 66 66 } 67 67 68 - #define ATOMIC64_OP(op) \ 68 + #define ATOMIC64_OP(op, asm_op) \ 69 69 static __inline__ void atomic64_##op(long i, atomic64_t * v) \ 70 70 { \ 71 71 unsigned long temp; \ 72 72 __asm__ __volatile__( \ 73 73 "1: ldq_l %0,%1\n" \ 74 - " " #op "q %0,%2,%0\n" \ 74 + " " #asm_op " %0,%2,%0\n" \ 75 75 " stq_c %0,%1\n" \ 76 76 " beq %0,2f\n" \ 77 77 ".subsection 2\n" \ ··· 81 81 :"Ir" (i), "m" (v->counter)); \ 82 82 } \ 83 83 84 - #define ATOMIC64_OP_RETURN(op) \ 84 + #define ATOMIC64_OP_RETURN(op, asm_op) \ 85 85 static __inline__ long atomic64_##op##_return(long i, atomic64_t * v) \ 86 86 { \ 87 87 long temp, result; \ 88 88 smp_mb(); \ 89 89 __asm__ __volatile__( \ 90 90 "1: ldq_l %0,%1\n" \ 91 - " " #op "q %0,%3,%2\n" \ 92 - " " #op "q %0,%3,%0\n" \ 91 + " " #asm_op " %0,%3,%2\n" \ 92 + " " #asm_op " %0,%3,%0\n" \ 93 93 " stq_c %0,%1\n" \ 94 94 " beq %0,2f\n" \ 95 95 ".subsection 2\n" \ ··· 101 101 return result; \ 102 102 } 103 103 104 - #define ATOMIC_OPS(opg) \ 105 - ATOMIC_OP(opg) \ 106 - 
ATOMIC_OP_RETURN(opg) \ 107 - ATOMIC64_OP(opg) \ 108 - ATOMIC64_OP_RETURN(opg) 104 + #define ATOMIC_OPS(op) \ 105 + ATOMIC_OP(op, op##l) \ 106 + ATOMIC_OP_RETURN(op, op##l) \ 107 + ATOMIC64_OP(op, op##q) \ 108 + ATOMIC64_OP_RETURN(op, op##q) 109 109 110 110 ATOMIC_OPS(add) 111 111 ATOMIC_OPS(sub) 112 + 113 + #define atomic_andnot atomic_andnot 114 + #define atomic64_andnot atomic64_andnot 115 + 116 + ATOMIC_OP(and, and) 117 + ATOMIC_OP(andnot, bic) 118 + ATOMIC_OP(or, bis) 119 + ATOMIC_OP(xor, xor) 120 + ATOMIC64_OP(and, and) 121 + ATOMIC64_OP(andnot, bic) 122 + ATOMIC64_OP(or, bis) 123 + ATOMIC64_OP(xor, xor) 112 124 113 125 #undef ATOMIC_OPS 114 126 #undef ATOMIC64_OP_RETURN
+6 -2
arch/arc/include/asm/atomic.h
··· 172 172 173 173 ATOMIC_OPS(add, +=, add) 174 174 ATOMIC_OPS(sub, -=, sub) 175 - ATOMIC_OP(and, &=, and) 176 175 177 - #define atomic_clear_mask(mask, v) atomic_and(~(mask), (v)) 176 + #define atomic_andnot atomic_andnot 177 + 178 + ATOMIC_OP(and, &=, and) 179 + ATOMIC_OP(andnot, &= ~, bic) 180 + ATOMIC_OP(or, |=, or) 181 + ATOMIC_OP(xor, ^=, xor) 178 182 179 183 #undef ATOMIC_OPS 180 184 #undef ATOMIC_OP_RETURN
+30 -21
arch/arm/include/asm/atomic.h
··· 57 57 } \ 58 58 59 59 #define ATOMIC_OP_RETURN(op, c_op, asm_op) \ 60 - static inline int atomic_##op##_return(int i, atomic_t *v) \ 60 + static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \ 61 61 { \ 62 62 unsigned long tmp; \ 63 63 int result; \ 64 64 \ 65 - smp_mb(); \ 66 65 prefetchw(&v->counter); \ 67 66 \ 68 67 __asm__ __volatile__("@ atomic_" #op "_return\n" \ ··· 74 75 : "r" (&v->counter), "Ir" (i) \ 75 76 : "cc"); \ 76 77 \ 77 - smp_mb(); \ 78 - \ 79 78 return result; \ 80 79 } 81 80 82 - static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new) 81 + #define atomic_add_return_relaxed atomic_add_return_relaxed 82 + #define atomic_sub_return_relaxed atomic_sub_return_relaxed 83 + 84 + static inline int atomic_cmpxchg_relaxed(atomic_t *ptr, int old, int new) 83 85 { 84 86 int oldval; 85 87 unsigned long res; 86 88 87 - smp_mb(); 88 89 prefetchw(&ptr->counter); 89 90 90 91 do { ··· 98 99 : "cc"); 99 100 } while (res); 100 101 101 - smp_mb(); 102 - 103 102 return oldval; 104 103 } 104 + #define atomic_cmpxchg_relaxed atomic_cmpxchg_relaxed 105 105 106 106 static inline int __atomic_add_unless(atomic_t *v, int a, int u) 107 107 { ··· 191 193 192 194 ATOMIC_OPS(add, +=, add) 193 195 ATOMIC_OPS(sub, -=, sub) 196 + 197 + #define atomic_andnot atomic_andnot 198 + 199 + ATOMIC_OP(and, &=, and) 200 + ATOMIC_OP(andnot, &= ~, bic) 201 + ATOMIC_OP(or, |=, orr) 202 + ATOMIC_OP(xor, ^=, eor) 194 203 195 204 #undef ATOMIC_OPS 196 205 #undef ATOMIC_OP_RETURN ··· 295 290 } \ 296 291 297 292 #define ATOMIC64_OP_RETURN(op, op1, op2) \ 298 - static inline long long atomic64_##op##_return(long long i, atomic64_t *v) \ 293 + static inline long long \ 294 + atomic64_##op##_return_relaxed(long long i, atomic64_t *v) \ 299 295 { \ 300 296 long long result; \ 301 297 unsigned long tmp; \ 302 298 \ 303 - smp_mb(); \ 304 299 prefetchw(&v->counter); \ 305 300 \ 306 301 __asm__ __volatile__("@ atomic64_" #op "_return\n" \ ··· 314 309 : "r" (&v->counter), "r" 
(i) \ 315 310 : "cc"); \ 316 311 \ 317 - smp_mb(); \ 318 - \ 319 312 return result; \ 320 313 } 321 314 ··· 324 321 ATOMIC64_OPS(add, adds, adc) 325 322 ATOMIC64_OPS(sub, subs, sbc) 326 323 324 + #define atomic64_add_return_relaxed atomic64_add_return_relaxed 325 + #define atomic64_sub_return_relaxed atomic64_sub_return_relaxed 326 + 327 + #define atomic64_andnot atomic64_andnot 328 + 329 + ATOMIC64_OP(and, and, and) 330 + ATOMIC64_OP(andnot, bic, bic) 331 + ATOMIC64_OP(or, orr, orr) 332 + ATOMIC64_OP(xor, eor, eor) 333 + 327 334 #undef ATOMIC64_OPS 328 335 #undef ATOMIC64_OP_RETURN 329 336 #undef ATOMIC64_OP 330 337 331 - static inline long long atomic64_cmpxchg(atomic64_t *ptr, long long old, 332 - long long new) 338 + static inline long long 339 + atomic64_cmpxchg_relaxed(atomic64_t *ptr, long long old, long long new) 333 340 { 334 341 long long oldval; 335 342 unsigned long res; 336 343 337 - smp_mb(); 338 344 prefetchw(&ptr->counter); 339 345 340 346 do { ··· 358 346 : "cc"); 359 347 } while (res); 360 348 361 - smp_mb(); 362 - 363 349 return oldval; 364 350 } 351 + #define atomic64_cmpxchg_relaxed atomic64_cmpxchg_relaxed 365 352 366 - static inline long long atomic64_xchg(atomic64_t *ptr, long long new) 353 + static inline long long atomic64_xchg_relaxed(atomic64_t *ptr, long long new) 367 354 { 368 355 long long result; 369 356 unsigned long tmp; 370 357 371 - smp_mb(); 372 358 prefetchw(&ptr->counter); 373 359 374 360 __asm__ __volatile__("@ atomic64_xchg\n" ··· 378 368 : "r" (&ptr->counter), "r" (new) 379 369 : "cc"); 380 370 381 - smp_mb(); 382 - 383 371 return result; 384 372 } 373 + #define atomic64_xchg_relaxed atomic64_xchg_relaxed 385 374 386 375 static inline long long atomic64_dec_if_positive(atomic64_t *v) 387 376 {
+2 -2
arch/arm/include/asm/barrier.h
··· 67 67 do { \ 68 68 compiletime_assert_atomic_type(*p); \ 69 69 smp_mb(); \ 70 - ACCESS_ONCE(*p) = (v); \ 70 + WRITE_ONCE(*p, v); \ 71 71 } while (0) 72 72 73 73 #define smp_load_acquire(p) \ 74 74 ({ \ 75 - typeof(*p) ___p1 = ACCESS_ONCE(*p); \ 75 + typeof(*p) ___p1 = READ_ONCE(*p); \ 76 76 compiletime_assert_atomic_type(*p); \ 77 77 smp_mb(); \ 78 78 ___p1; \
+8 -39
arch/arm/include/asm/cmpxchg.h
··· 35 35 unsigned int tmp; 36 36 #endif 37 37 38 - smp_mb(); 39 38 prefetchw((const void *)ptr); 40 39 41 40 switch (size) { ··· 97 98 __bad_xchg(ptr, size), ret = 0; 98 99 break; 99 100 } 100 - smp_mb(); 101 101 102 102 return ret; 103 103 } 104 104 105 - #define xchg(ptr, x) ({ \ 105 + #define xchg_relaxed(ptr, x) ({ \ 106 106 (__typeof__(*(ptr)))__xchg((unsigned long)(x), (ptr), \ 107 107 sizeof(*(ptr))); \ 108 108 }) ··· 114 116 #ifdef CONFIG_SMP 115 117 #error "SMP is not supported on this platform" 116 118 #endif 119 + 120 + #define xchg xchg_relaxed 117 121 118 122 /* 119 123 * cmpxchg_local and cmpxchg64_local are atomic wrt current CPU. Always make ··· 194 194 return oldval; 195 195 } 196 196 197 - static inline unsigned long __cmpxchg_mb(volatile void *ptr, unsigned long old, 198 - unsigned long new, int size) 199 - { 200 - unsigned long ret; 201 - 202 - smp_mb(); 203 - ret = __cmpxchg(ptr, old, new, size); 204 - smp_mb(); 205 - 206 - return ret; 207 - } 208 - 209 - #define cmpxchg(ptr,o,n) ({ \ 210 - (__typeof__(*(ptr)))__cmpxchg_mb((ptr), \ 211 - (unsigned long)(o), \ 212 - (unsigned long)(n), \ 213 - sizeof(*(ptr))); \ 197 + #define cmpxchg_relaxed(ptr,o,n) ({ \ 198 + (__typeof__(*(ptr)))__cmpxchg((ptr), \ 199 + (unsigned long)(o), \ 200 + (unsigned long)(n), \ 201 + sizeof(*(ptr))); \ 214 202 }) 215 203 216 204 static inline unsigned long __cmpxchg_local(volatile void *ptr, ··· 260 272 }) 261 273 262 274 #define cmpxchg64_local(ptr, o, n) cmpxchg64_relaxed((ptr), (o), (n)) 263 - 264 - static inline unsigned long long __cmpxchg64_mb(unsigned long long *ptr, 265 - unsigned long long old, 266 - unsigned long long new) 267 - { 268 - unsigned long long ret; 269 - 270 - smp_mb(); 271 - ret = __cmpxchg64(ptr, old, new); 272 - smp_mb(); 273 - 274 - return ret; 275 - } 276 - 277 - #define cmpxchg64(ptr, o, n) ({ \ 278 - (__typeof__(*(ptr)))__cmpxchg64_mb((ptr), \ 279 - (unsigned long long)(o), \ 280 - (unsigned long long)(n)); \ 281 - }) 282 275 283 276 
#endif /* __LINUX_ARM_ARCH__ >= 6 */ 284 277
+18 -9
arch/arm/include/asm/jump_label.h
··· 4 4 #ifndef __ASSEMBLY__ 5 5 6 6 #include <linux/types.h> 7 + #include <asm/unified.h> 7 8 8 9 #define JUMP_LABEL_NOP_SIZE 4 9 10 10 - #ifdef CONFIG_THUMB2_KERNEL 11 - #define JUMP_LABEL_NOP "nop.w" 12 - #else 13 - #define JUMP_LABEL_NOP "nop" 14 - #endif 15 - 16 - static __always_inline bool arch_static_branch(struct static_key *key) 11 + static __always_inline bool arch_static_branch(struct static_key *key, bool branch) 17 12 { 18 13 asm_volatile_goto("1:\n\t" 19 - JUMP_LABEL_NOP "\n\t" 14 + WASM(nop) "\n\t" 20 15 ".pushsection __jump_table, \"aw\"\n\t" 21 16 ".word 1b, %l[l_yes], %c0\n\t" 22 17 ".popsection\n\t" 23 - : : "i" (key) : : l_yes); 18 + : : "i" (&((char *)key)[branch]) : : l_yes); 19 + 20 + return false; 21 + l_yes: 22 + return true; 23 + } 24 + 25 + static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch) 26 + { 27 + asm_volatile_goto("1:\n\t" 28 + WASM(b) " %l[l_yes]\n\t" 29 + ".pushsection __jump_table, \"aw\"\n\t" 30 + ".word 1b, %l[l_yes], %c0\n\t" 31 + ".popsection\n\t" 32 + : : "i" (&((char *)key)[branch]) : : l_yes); 24 33 25 34 return false; 26 35 l_yes:
+1 -1
arch/arm/kernel/jump_label.c
··· 12 12 void *addr = (void *)entry->code; 13 13 unsigned int insn; 14 14 15 - if (type == JUMP_LABEL_ENABLE) 15 + if (type == JUMP_LABEL_JMP) 16 16 insn = arm_gen_branch(entry->code, entry->target); 17 17 else 18 18 insn = arm_gen_nop();
+14
arch/arm64/include/asm/atomic.h
··· 85 85 ATOMIC_OPS(add, add) 86 86 ATOMIC_OPS(sub, sub) 87 87 88 + #define atomic_andnot atomic_andnot 89 + 90 + ATOMIC_OP(and, and) 91 + ATOMIC_OP(andnot, bic) 92 + ATOMIC_OP(or, orr) 93 + ATOMIC_OP(xor, eor) 94 + 88 95 #undef ATOMIC_OPS 89 96 #undef ATOMIC_OP_RETURN 90 97 #undef ATOMIC_OP ··· 189 182 190 183 ATOMIC64_OPS(add, add) 191 184 ATOMIC64_OPS(sub, sub) 185 + 186 + #define atomic64_andnot atomic64_andnot 187 + 188 + ATOMIC64_OP(and, and) 189 + ATOMIC64_OP(andnot, bic) 190 + ATOMIC64_OP(or, orr) 191 + ATOMIC64_OP(xor, eor) 192 192 193 193 #undef ATOMIC64_OPS 194 194 #undef ATOMIC64_OP_RETURN
+2 -2
arch/arm64/include/asm/barrier.h
··· 44 44 do { \ 45 45 compiletime_assert_atomic_type(*p); \ 46 46 barrier(); \ 47 - ACCESS_ONCE(*p) = (v); \ 47 + WRITE_ONCE(*p, v); \ 48 48 } while (0) 49 49 50 50 #define smp_load_acquire(p) \ 51 51 ({ \ 52 - typeof(*p) ___p1 = ACCESS_ONCE(*p); \ 52 + typeof(*p) ___p1 = READ_ONCE(*p); \ 53 53 compiletime_assert_atomic_type(*p); \ 54 54 barrier(); \ 55 55 ___p1; \
+16 -2
arch/arm64/include/asm/jump_label.h
··· 26 26 27 27 #define JUMP_LABEL_NOP_SIZE AARCH64_INSN_SIZE 28 28 29 - static __always_inline bool arch_static_branch(struct static_key *key) 29 + static __always_inline bool arch_static_branch(struct static_key *key, bool branch) 30 30 { 31 31 asm goto("1: nop\n\t" 32 32 ".pushsection __jump_table, \"aw\"\n\t" 33 33 ".align 3\n\t" 34 34 ".quad 1b, %l[l_yes], %c0\n\t" 35 35 ".popsection\n\t" 36 - : : "i"(key) : : l_yes); 36 + : : "i"(&((char *)key)[branch]) : : l_yes); 37 + 38 + return false; 39 + l_yes: 40 + return true; 41 + } 42 + 43 + static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch) 44 + { 45 + asm goto("1: b %l[l_yes]\n\t" 46 + ".pushsection __jump_table, \"aw\"\n\t" 47 + ".align 3\n\t" 48 + ".quad 1b, %l[l_yes], %c0\n\t" 49 + ".popsection\n\t" 50 + : : "i"(&((char *)key)[branch]) : : l_yes); 37 51 38 52 return false; 39 53 l_yes:
+1 -1
arch/arm64/kernel/jump_label.c
··· 28 28 void *addr = (void *)entry->code; 29 29 u32 insn; 30 30 31 - if (type == JUMP_LABEL_ENABLE) { 31 + if (type == JUMP_LABEL_JMP) { 32 32 insn = aarch64_insn_gen_branch_imm(entry->code, 33 33 entry->target, 34 34 AARCH64_INSN_BRANCH_NOLINK);
+12
arch/avr32/include/asm/atomic.h
··· 44 44 ATOMIC_OP_RETURN(sub, sub, rKs21) 45 45 ATOMIC_OP_RETURN(add, add, r) 46 46 47 + #define ATOMIC_OP(op, asm_op) \ 48 + ATOMIC_OP_RETURN(op, asm_op, r) \ 49 + static inline void atomic_##op(int i, atomic_t *v) \ 50 + { \ 51 + (void)__atomic_##op##_return(i, v); \ 52 + } 53 + 54 + ATOMIC_OP(and, and) 55 + ATOMIC_OP(or, or) 56 + ATOMIC_OP(xor, eor) 57 + 58 + #undef ATOMIC_OP 47 59 #undef ATOMIC_OP_RETURN 48 60 49 61 /*
+9 -7
arch/blackfin/include/asm/atomic.h
··· 16 16 #include <linux/types.h> 17 17 18 18 asmlinkage int __raw_uncached_fetch_asm(const volatile int *ptr); 19 - asmlinkage int __raw_atomic_update_asm(volatile int *ptr, int value); 20 - asmlinkage int __raw_atomic_clear_asm(volatile int *ptr, int value); 21 - asmlinkage int __raw_atomic_set_asm(volatile int *ptr, int value); 19 + asmlinkage int __raw_atomic_add_asm(volatile int *ptr, int value); 20 + 21 + asmlinkage int __raw_atomic_and_asm(volatile int *ptr, int value); 22 + asmlinkage int __raw_atomic_or_asm(volatile int *ptr, int value); 22 23 asmlinkage int __raw_atomic_xor_asm(volatile int *ptr, int value); 23 24 asmlinkage int __raw_atomic_test_asm(const volatile int *ptr, int value); 24 25 25 26 #define atomic_read(v) __raw_uncached_fetch_asm(&(v)->counter) 26 27 27 - #define atomic_add_return(i, v) __raw_atomic_update_asm(&(v)->counter, i) 28 - #define atomic_sub_return(i, v) __raw_atomic_update_asm(&(v)->counter, -(i)) 28 + #define atomic_add_return(i, v) __raw_atomic_add_asm(&(v)->counter, i) 29 + #define atomic_sub_return(i, v) __raw_atomic_add_asm(&(v)->counter, -(i)) 29 30 30 - #define atomic_clear_mask(m, v) __raw_atomic_clear_asm(&(v)->counter, m) 31 - #define atomic_set_mask(m, v) __raw_atomic_set_asm(&(v)->counter, m) 31 + #define atomic_or(i, v) (void)__raw_atomic_or_asm(&(v)->counter, i) 32 + #define atomic_and(i, v) (void)__raw_atomic_and_asm(&(v)->counter, i) 33 + #define atomic_xor(i, v) (void)__raw_atomic_xor_asm(&(v)->counter, i) 32 34 33 35 #endif 34 36
+4 -3
arch/blackfin/kernel/bfin_ksyms.c
··· 83 83 EXPORT_SYMBOL(insl_16); 84 84 85 85 #ifdef CONFIG_SMP 86 - EXPORT_SYMBOL(__raw_atomic_update_asm); 87 - EXPORT_SYMBOL(__raw_atomic_clear_asm); 88 - EXPORT_SYMBOL(__raw_atomic_set_asm); 86 + EXPORT_SYMBOL(__raw_atomic_add_asm); 87 + EXPORT_SYMBOL(__raw_atomic_and_asm); 88 + EXPORT_SYMBOL(__raw_atomic_or_asm); 89 89 EXPORT_SYMBOL(__raw_atomic_xor_asm); 90 90 EXPORT_SYMBOL(__raw_atomic_test_asm); 91 + 91 92 EXPORT_SYMBOL(__raw_xchg_1_asm); 92 93 EXPORT_SYMBOL(__raw_xchg_2_asm); 93 94 EXPORT_SYMBOL(__raw_xchg_4_asm);
+15 -15
arch/blackfin/mach-bf561/atomic.S
··· 587 587 * r0 = ptr 588 588 * r1 = value 589 589 * 590 - * Add a signed value to a 32bit word and return the new value atomically. 590 + * ADD a signed value to a 32bit word and return the new value atomically. 591 591 * Clobbers: r3:0, p1:0 592 592 */ 593 - ENTRY(___raw_atomic_update_asm) 593 + ENTRY(___raw_atomic_add_asm) 594 594 p1 = r0; 595 595 r3 = r1; 596 596 [--sp] = rets; ··· 603 603 r0 = r3; 604 604 rets = [sp++]; 605 605 rts; 606 - ENDPROC(___raw_atomic_update_asm) 606 + ENDPROC(___raw_atomic_add_asm) 607 607 608 608 /* 609 609 * r0 = ptr 610 610 * r1 = mask 611 611 * 612 - * Clear the mask bits from a 32bit word and return the old 32bit value 612 + * AND the mask bits from a 32bit word and return the old 32bit value 613 613 * atomically. 614 614 * Clobbers: r3:0, p1:0 615 615 */ 616 - ENTRY(___raw_atomic_clear_asm) 616 + ENTRY(___raw_atomic_and_asm) 617 617 p1 = r0; 618 - r3 = ~r1; 618 + r3 = r1; 619 619 [--sp] = rets; 620 620 call _get_core_lock; 621 621 r2 = [p1]; ··· 627 627 r0 = r3; 628 628 rets = [sp++]; 629 629 rts; 630 - ENDPROC(___raw_atomic_clear_asm) 630 + ENDPROC(___raw_atomic_and_asm) 631 631 632 632 /* 633 633 * r0 = ptr 634 634 * r1 = mask 635 635 * 636 - * Set the mask bits into a 32bit word and return the old 32bit value 636 + * OR the mask bits into a 32bit word and return the old 32bit value 637 637 * atomically. 
638 638 * Clobbers: r3:0, p1:0 639 639 */ 640 - ENTRY(___raw_atomic_set_asm) 640 + ENTRY(___raw_atomic_or_asm) 641 641 p1 = r0; 642 642 r3 = r1; 643 643 [--sp] = rets; ··· 651 651 r0 = r3; 652 652 rets = [sp++]; 653 653 rts; 654 - ENDPROC(___raw_atomic_set_asm) 654 + ENDPROC(___raw_atomic_or_asm) 655 655 656 656 /* 657 657 * r0 = ptr ··· 787 787 r2 = r1; 788 788 r1 = 1; 789 789 r1 <<= r2; 790 - jump ___raw_atomic_set_asm 790 + jump ___raw_atomic_or_asm 791 791 ENDPROC(___raw_bit_set_asm) 792 792 793 793 /* ··· 798 798 * Clobbers: r3:0, p1:0 799 799 */ 800 800 ENTRY(___raw_bit_clear_asm) 801 - r2 = r1; 802 - r1 = 1; 803 - r1 <<= r2; 804 - jump ___raw_atomic_clear_asm 801 + r2 = 1; 802 + r2 <<= r1; 803 + r1 = ~r2; 804 + jump ___raw_atomic_and_asm 805 805 ENDPROC(___raw_bit_clear_asm) 806 806 807 807 /*
+1 -1
arch/blackfin/mach-common/smp.c
··· 195 195 local_irq_save(flags); 196 196 for_each_cpu(cpu, cpumask) { 197 197 bfin_ipi_data = &per_cpu(bfin_ipi, cpu); 198 - atomic_set_mask((1 << msg), &bfin_ipi_data->bits); 198 + atomic_or((1 << msg), &bfin_ipi_data->bits); 199 199 atomic_inc(&bfin_ipi_data->count); 200 200 } 201 201 local_irq_restore(flags);
+55 -54
arch/frv/include/asm/atomic.h
···
 #define _ASM_ATOMIC_H
 
 #include <linux/types.h>
-#include <asm/spr-regs.h>
 #include <asm/cmpxchg.h>
 #include <asm/barrier.h>
 
 #ifdef CONFIG_SMP
 #error not SMP safe
 #endif
+
+#include <asm/atomic_defs.h>
 
 /*
  * Atomic operations that C can't guarantee us. Useful for
···
 #define atomic_read(v)	ACCESS_ONCE((v)->counter)
 #define atomic_set(v, i) (((v)->counter) = (i))
 
-#ifndef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
+static inline int atomic_inc_return(atomic_t *v)
+{
+	return __atomic_add_return(1, &v->counter);
+}
+
+static inline int atomic_dec_return(atomic_t *v)
+{
+	return __atomic_sub_return(1, &v->counter);
+}
+
 static inline int atomic_add_return(int i, atomic_t *v)
 {
-	unsigned long val;
-
-	asm("0: \n"
-	    " orcc gr0,gr0,gr0,icc3 \n"	/* set ICC3.Z */
-	    " ckeq icc3,cc7 \n"
-	    " ld.p %M0,%1 \n"		/* LD.P/ORCR must be atomic */
-	    " orcr cc7,cc7,cc3 \n"	/* set CC3 to true */
-	    " add%I2 %1,%2,%1 \n"
-	    " cst.p %1,%M0 ,cc3,#1 \n"
-	    " corcc gr29,gr29,gr0 ,cc3,#1 \n"	/* clear ICC3.Z if store happens */
-	    " beq icc3,#0,0b \n"
-	    : "+U"(v->counter), "=&r"(val)
-	    : "NPr"(i)
-	    : "memory", "cc7", "cc3", "icc3"
-	    );
-
-	return val;
+	return __atomic_add_return(i, &v->counter);
 }
 
 static inline int atomic_sub_return(int i, atomic_t *v)
 {
-	unsigned long val;
-
-	asm("0: \n"
-	    " orcc gr0,gr0,gr0,icc3 \n"	/* set ICC3.Z */
-	    " ckeq icc3,cc7 \n"
-	    " ld.p %M0,%1 \n"		/* LD.P/ORCR must be atomic */
-	    " orcr cc7,cc7,cc3 \n"	/* set CC3 to true */
-	    " sub%I2 %1,%2,%1 \n"
-	    " cst.p %1,%M0 ,cc3,#1 \n"
-	    " corcc gr29,gr29,gr0 ,cc3,#1 \n"	/* clear ICC3.Z if store happens */
-	    " beq icc3,#0,0b \n"
-	    : "+U"(v->counter), "=&r"(val)
-	    : "NPr"(i)
-	    : "memory", "cc7", "cc3", "icc3"
-	    );
-
-	return val;
+	return __atomic_sub_return(i, &v->counter);
 }
-
-#else
-
-extern int atomic_add_return(int i, atomic_t *v);
-extern int atomic_sub_return(int i, atomic_t *v);
-
-#endif
 
 static inline int atomic_add_negative(int i, atomic_t *v)
 {
···
 static inline void atomic_inc(atomic_t *v)
 {
-	atomic_add_return(1, v);
+	atomic_inc_return(v);
 }
 
 static inline void atomic_dec(atomic_t *v)
 {
-	atomic_sub_return(1, v);
+	atomic_dec_return(v);
 }
-
-#define atomic_dec_return(v)	atomic_sub_return(1, (v))
-#define atomic_inc_return(v)	atomic_add_return(1, (v))
 
 #define atomic_sub_and_test(i,v) (atomic_sub_return((i), (v)) == 0)
 #define atomic_dec_and_test(v)	(atomic_sub_return(1, (v)) == 0)
···
  * 64-bit atomic ops
  */
 typedef struct {
-	volatile long long counter;
+	long long counter;
 } atomic64_t;
 
 #define ATOMIC64_INIT(i)	{ (i) }
 
-static inline long long atomic64_read(atomic64_t *v)
+static inline long long atomic64_read(const atomic64_t *v)
 {
 	long long counter;
 
 	asm("ldd%I1 %M1,%0"
 	    : "=e"(counter)
 	    : "m"(v->counter));
+
 	return counter;
 }
 
···
 	    : "e"(i));
 }
 
-extern long long atomic64_inc_return(atomic64_t *v);
-extern long long atomic64_dec_return(atomic64_t *v);
-extern long long atomic64_add_return(long long i, atomic64_t *v);
-extern long long atomic64_sub_return(long long i, atomic64_t *v);
+static inline long long atomic64_inc_return(atomic64_t *v)
+{
+	return __atomic64_add_return(1, &v->counter);
+}
+
+static inline long long atomic64_dec_return(atomic64_t *v)
+{
+	return __atomic64_sub_return(1, &v->counter);
+}
+
+static inline long long atomic64_add_return(long long i, atomic64_t *v)
+{
+	return __atomic64_add_return(i, &v->counter);
+}
+
+static inline long long atomic64_sub_return(long long i, atomic64_t *v)
+{
+	return __atomic64_sub_return(i, &v->counter);
+}
 
 static inline long long atomic64_add_negative(long long i, atomic64_t *v)
 {
···
 #define atomic64_dec_and_test(v) (atomic64_dec_return((v)) == 0)
 #define atomic64_inc_and_test(v) (atomic64_inc_return((v)) == 0)
 
+
 #define atomic_cmpxchg(v, old, new) (cmpxchg(&(v)->counter, old, new))
 #define atomic_xchg(v, new)	(xchg(&(v)->counter, new))
 #define atomic64_cmpxchg(v, old, new) (__cmpxchg_64(old, new, &(v)->counter))
···
 	return c;
 }
 
+#define ATOMIC_OP(op) \
+static inline void atomic_##op(int i, atomic_t *v) \
+{ \
+	(void)__atomic32_fetch_##op(i, &v->counter); \
+} \
+ \
+static inline void atomic64_##op(long long i, atomic64_t *v) \
+{ \
+	(void)__atomic64_fetch_##op(i, &v->counter); \
+}
+
+ATOMIC_OP(or)
+ATOMIC_OP(and)
+ATOMIC_OP(xor)
+
+#undef ATOMIC_OP
 
 #endif /* _ASM_ATOMIC_H */
+172
arch/frv/include/asm/atomic_defs.h
···
+
+#include <asm/spr-regs.h>
+
+#ifdef __ATOMIC_LIB__
+
+#ifdef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
+
+#define ATOMIC_QUALS
+#define ATOMIC_EXPORT(x)	EXPORT_SYMBOL(x)
+
+#else /* !OUTOFLINE && LIB */
+
+#define ATOMIC_OP_RETURN(op)
+#define ATOMIC_FETCH_OP(op)
+
+#endif /* OUTOFLINE */
+
+#else /* !__ATOMIC_LIB__ */
+
+#define ATOMIC_EXPORT(x)
+
+#ifdef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
+
+#define ATOMIC_OP_RETURN(op) \
+extern int __atomic_##op##_return(int i, int *v); \
+extern long long __atomic64_##op##_return(long long i, long long *v);
+
+#define ATOMIC_FETCH_OP(op) \
+extern int __atomic32_fetch_##op(int i, int *v); \
+extern long long __atomic64_fetch_##op(long long i, long long *v);
+
+#else /* !OUTOFLINE && !LIB */
+
+#define ATOMIC_QUALS	static inline
+
+#endif /* OUTOFLINE */
+#endif /* __ATOMIC_LIB__ */
+
+
+/*
+ * Note on the 64 bit inline asm variants...
+ *
+ * CSTD is a conditional instruction and needs a constrained memory reference.
+ * Normally 'U' provides the correct constraints for conditional instructions
+ * and this is used for the 32 bit version, however 'U' does not appear to work
+ * for 64 bit values (gcc-4.9)
+ *
+ * The exact constraint is that conditional instructions cannot deal with an
+ * immediate displacement in the memory reference, so what we do is we read the
+ * address through a volatile cast into a local variable in order to insure we
+ * _have_ to compute the correct address without displacement. This allows us
+ * to use the regular 'm' for the memory address.
+ *
+ * Furthermore, the %Ln operand, which prints the low word register (r+1),
+ * really only works for registers, this means we cannot allow immediate values
+ * for the 64 bit versions -- like we do for the 32 bit ones.
+ *
+ */
+
+#ifndef ATOMIC_OP_RETURN
+#define ATOMIC_OP_RETURN(op) \
+ATOMIC_QUALS int __atomic_##op##_return(int i, int *v) \
+{ \
+	int val; \
+ \
+	asm volatile( \
+	    "0: \n" \
+	    " orcc gr0,gr0,gr0,icc3 \n" \
+	    " ckeq icc3,cc7 \n" \
+	    " ld.p %M0,%1 \n" \
+	    " orcr cc7,cc7,cc3 \n" \
+	    " "#op"%I2 %1,%2,%1 \n" \
+	    " cst.p %1,%M0 ,cc3,#1 \n" \
+	    " corcc gr29,gr29,gr0 ,cc3,#1 \n" \
+	    " beq icc3,#0,0b \n" \
+	    : "+U"(*v), "=&r"(val) \
+	    : "NPr"(i) \
+	    : "memory", "cc7", "cc3", "icc3" \
+	    ); \
+ \
+	return val; \
+} \
+ATOMIC_EXPORT(__atomic_##op##_return); \
+ \
+ATOMIC_QUALS long long __atomic64_##op##_return(long long i, long long *v) \
+{ \
+	long long *__v = READ_ONCE(v); \
+	long long val; \
+ \
+	asm volatile( \
+	    "0: \n" \
+	    " orcc gr0,gr0,gr0,icc3 \n" \
+	    " ckeq icc3,cc7 \n" \
+	    " ldd.p %M0,%1 \n" \
+	    " orcr cc7,cc7,cc3 \n" \
+	    " "#op"cc %L1,%L2,%L1,icc0 \n" \
+	    " "#op"x %1,%2,%1,icc0 \n" \
+	    " cstd.p %1,%M0 ,cc3,#1 \n" \
+	    " corcc gr29,gr29,gr0 ,cc3,#1 \n" \
+	    " beq icc3,#0,0b \n" \
+	    : "+m"(*__v), "=&e"(val) \
+	    : "e"(i) \
+	    : "memory", "cc7", "cc3", "icc0", "icc3" \
+	    ); \
+ \
+	return val; \
+} \
+ATOMIC_EXPORT(__atomic64_##op##_return);
+#endif
+
+#ifndef ATOMIC_FETCH_OP
+#define ATOMIC_FETCH_OP(op) \
+ATOMIC_QUALS int __atomic32_fetch_##op(int i, int *v) \
+{ \
+	int old, tmp; \
+ \
+	asm volatile( \
+	    "0: \n" \
+	    " orcc gr0,gr0,gr0,icc3 \n" \
+	    " ckeq icc3,cc7 \n" \
+	    " ld.p %M0,%1 \n" \
+	    " orcr cc7,cc7,cc3 \n" \
+	    " "#op"%I3 %1,%3,%2 \n" \
+	    " cst.p %2,%M0 ,cc3,#1 \n" \
+	    " corcc gr29,gr29,gr0 ,cc3,#1 \n" \
+	    " beq icc3,#0,0b \n" \
+	    : "+U"(*v), "=&r"(old), "=r"(tmp) \
+	    : "NPr"(i) \
+	    : "memory", "cc7", "cc3", "icc3" \
+	    ); \
+ \
+	return old; \
+} \
+ATOMIC_EXPORT(__atomic32_fetch_##op); \
+ \
+ATOMIC_QUALS long long __atomic64_fetch_##op(long long i, long long *v) \
+{ \
+	long long *__v = READ_ONCE(v); \
+	long long old, tmp; \
+ \
+	asm volatile( \
+	    "0: \n" \
+	    " orcc gr0,gr0,gr0,icc3 \n" \
+	    " ckeq icc3,cc7 \n" \
+	    " ldd.p %M0,%1 \n" \
+	    " orcr cc7,cc7,cc3 \n" \
+	    " "#op" %L1,%L3,%L2 \n" \
+	    " "#op" %1,%3,%2 \n" \
+	    " cstd.p %2,%M0 ,cc3,#1 \n" \
+	    " corcc gr29,gr29,gr0 ,cc3,#1 \n" \
+	    " beq icc3,#0,0b \n" \
+	    : "+m"(*__v), "=&e"(old), "=e"(tmp) \
+	    : "e"(i) \
+	    : "memory", "cc7", "cc3", "icc3" \
+	    ); \
+ \
+	return old; \
+} \
+ATOMIC_EXPORT(__atomic64_fetch_##op);
+#endif
+
+ATOMIC_FETCH_OP(or)
+ATOMIC_FETCH_OP(and)
+ATOMIC_FETCH_OP(xor)
+
+ATOMIC_OP_RETURN(add)
+ATOMIC_OP_RETURN(sub)
+
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_OP_RETURN
+#undef ATOMIC_QUALS
+#undef ATOMIC_EXPORT
+10 -89
arch/frv/include/asm/bitops.h
···
 
 #include <asm-generic/bitops/ffz.h>
 
-#ifndef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
-static inline
-unsigned long atomic_test_and_ANDNOT_mask(unsigned long mask, volatile unsigned long *v)
-{
-	unsigned long old, tmp;
-
-	asm volatile(
-	    "0: \n"
-	    " orcc gr0,gr0,gr0,icc3 \n"	/* set ICC3.Z */
-	    " ckeq icc3,cc7 \n"
-	    " ld.p %M0,%1 \n"		/* LD.P/ORCR are atomic */
-	    " orcr cc7,cc7,cc3 \n"	/* set CC3 to true */
-	    " and%I3 %1,%3,%2 \n"
-	    " cst.p %2,%M0 ,cc3,#1 \n"	/* if store happens... */
-	    " corcc gr29,gr29,gr0 ,cc3,#1 \n"	/* ... clear ICC3.Z */
-	    " beq icc3,#0,0b \n"
-	    : "+U"(*v), "=&r"(old), "=r"(tmp)
-	    : "NPr"(~mask)
-	    : "memory", "cc7", "cc3", "icc3"
-	    );
-
-	return old;
-}
-
-static inline
-unsigned long atomic_test_and_OR_mask(unsigned long mask, volatile unsigned long *v)
-{
-	unsigned long old, tmp;
-
-	asm volatile(
-	    "0: \n"
-	    " orcc gr0,gr0,gr0,icc3 \n"	/* set ICC3.Z */
-	    " ckeq icc3,cc7 \n"
-	    " ld.p %M0,%1 \n"		/* LD.P/ORCR are atomic */
-	    " orcr cc7,cc7,cc3 \n"	/* set CC3 to true */
-	    " or%I3 %1,%3,%2 \n"
-	    " cst.p %2,%M0 ,cc3,#1 \n"	/* if store happens... */
-	    " corcc gr29,gr29,gr0 ,cc3,#1 \n"	/* ... clear ICC3.Z */
-	    " beq icc3,#0,0b \n"
-	    : "+U"(*v), "=&r"(old), "=r"(tmp)
-	    : "NPr"(mask)
-	    : "memory", "cc7", "cc3", "icc3"
-	    );
-
-	return old;
-}
-
-static inline
-unsigned long atomic_test_and_XOR_mask(unsigned long mask, volatile unsigned long *v)
-{
-	unsigned long old, tmp;
-
-	asm volatile(
-	    "0: \n"
-	    " orcc gr0,gr0,gr0,icc3 \n"	/* set ICC3.Z */
-	    " ckeq icc3,cc7 \n"
-	    " ld.p %M0,%1 \n"		/* LD.P/ORCR are atomic */
-	    " orcr cc7,cc7,cc3 \n"	/* set CC3 to true */
-	    " xor%I3 %1,%3,%2 \n"
-	    " cst.p %2,%M0 ,cc3,#1 \n"	/* if store happens... */
-	    " corcc gr29,gr29,gr0 ,cc3,#1 \n"	/* ... clear ICC3.Z */
-	    " beq icc3,#0,0b \n"
-	    : "+U"(*v), "=&r"(old), "=r"(tmp)
-	    : "NPr"(mask)
-	    : "memory", "cc7", "cc3", "icc3"
-	    );
-
-	return old;
-}
-
-#else
-
-extern unsigned long atomic_test_and_ANDNOT_mask(unsigned long mask, volatile unsigned long *v);
-extern unsigned long atomic_test_and_OR_mask(unsigned long mask, volatile unsigned long *v);
-extern unsigned long atomic_test_and_XOR_mask(unsigned long mask, volatile unsigned long *v);
-
-#endif
-
-#define atomic_clear_mask(mask, v)	atomic_test_and_ANDNOT_mask((mask), (v))
-#define atomic_set_mask(mask, v)	atomic_test_and_OR_mask((mask), (v))
+#include <asm/atomic.h>
 
 static inline int test_and_clear_bit(unsigned long nr, volatile void *addr)
 {
-	volatile unsigned long *ptr = addr;
-	unsigned long mask = 1UL << (nr & 31);
+	unsigned int *ptr = (void *)addr;
+	unsigned int mask = 1UL << (nr & 31);
 	ptr += nr >> 5;
-	return (atomic_test_and_ANDNOT_mask(mask, ptr) & mask) != 0;
+	return (__atomic32_fetch_and(~mask, ptr) & mask) != 0;
 }
 
 static inline int test_and_set_bit(unsigned long nr, volatile void *addr)
 {
-	volatile unsigned long *ptr = addr;
-	unsigned long mask = 1UL << (nr & 31);
+	unsigned int *ptr = (void *)addr;
+	unsigned int mask = 1UL << (nr & 31);
 	ptr += nr >> 5;
-	return (atomic_test_and_OR_mask(mask, ptr) & mask) != 0;
+	return (__atomic32_fetch_or(mask, ptr) & mask) != 0;
 }
 
 static inline int test_and_change_bit(unsigned long nr, volatile void *addr)
 {
-	volatile unsigned long *ptr = addr;
-	unsigned long mask = 1UL << (nr & 31);
+	unsigned int *ptr = (void *)addr;
+	unsigned int mask = 1UL << (nr & 31);
 	ptr += nr >> 5;
-	return (atomic_test_and_XOR_mask(mask, ptr) & mask) != 0;
+	return (__atomic32_fetch_xor(mask, ptr) & mask) != 0;
 }
 
 static inline void clear_bit(unsigned long nr, volatile void *addr)
+3 -3
arch/frv/kernel/dma.c
···
 
 static DEFINE_RWLOCK(frv_dma_channels_lock);
 
-unsigned long frv_dma_inprogress;
+unsigned int frv_dma_inprogress;
 
 #define frv_clear_dma_inprogress(channel) \
-	atomic_clear_mask(1 << (channel), &frv_dma_inprogress);
+	(void)__atomic32_fetch_and(~(1 << (channel)), &frv_dma_inprogress);
 
 #define frv_set_dma_inprogress(channel) \
-	atomic_set_mask(1 << (channel), &frv_dma_inprogress);
+	(void)__atomic32_fetch_or(1 << (channel), &frv_dma_inprogress);
 
 /*****************************************************************************/
 /*
-5
arch/frv/kernel/frv_ksyms.c
···
 EXPORT_SYMBOL(__insl_ns);
 
 #ifdef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
-EXPORT_SYMBOL(atomic_test_and_ANDNOT_mask);
-EXPORT_SYMBOL(atomic_test_and_OR_mask);
-EXPORT_SYMBOL(atomic_test_and_XOR_mask);
-EXPORT_SYMBOL(atomic_add_return);
-EXPORT_SYMBOL(atomic_sub_return);
 EXPORT_SYMBOL(__xchg_32);
 EXPORT_SYMBOL(__cmpxchg_32);
 #endif
+1 -1
arch/frv/lib/Makefile
···
 lib-y := \
 	__ashldi3.o __lshrdi3.o __muldi3.o __ashrdi3.o __negdi2.o __ucmpdi2.o \
 	checksum.o memcpy.o memset.o atomic-ops.o atomic64-ops.o \
-	outsl_ns.o outsl_sw.o insl_ns.o insl_sw.o cache.o
+	outsl_ns.o outsl_sw.o insl_ns.o insl_sw.o cache.o atomic-lib.o
+7
arch/frv/lib/atomic-lib.c
···
+
+#include <linux/export.h>
+#include <asm/atomic.h>
+
+#define __ATOMIC_LIB__
+
+#include <asm/atomic_defs.h>
-110
arch/frv/lib/atomic-ops.S
···
 
 ###############################################################################
 #
-# unsigned long atomic_test_and_ANDNOT_mask(unsigned long mask, volatile unsigned long *v);
-#
-###############################################################################
-	.globl atomic_test_and_ANDNOT_mask
-	.type atomic_test_and_ANDNOT_mask,@function
-atomic_test_and_ANDNOT_mask:
-	not.p gr8,gr10
-0:
-	orcc gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq icc3,cc7
-	ld.p @(gr9,gr0),gr8		/* LD.P/ORCR must be atomic */
-	orcr cc7,cc7,cc3		/* set CC3 to true */
-	and gr8,gr10,gr11
-	cst.p gr11,@(gr9,gr0) ,cc3,#1
-	corcc gr29,gr29,gr0 ,cc3,#1	/* clear ICC3.Z if store happens */
-	beq icc3,#0,0b
-	bralr
-
-	.size atomic_test_and_ANDNOT_mask, .-atomic_test_and_ANDNOT_mask
-
-###############################################################################
-#
-# unsigned long atomic_test_and_OR_mask(unsigned long mask, volatile unsigned long *v);
-#
-###############################################################################
-	.globl atomic_test_and_OR_mask
-	.type atomic_test_and_OR_mask,@function
-atomic_test_and_OR_mask:
-	or.p gr8,gr8,gr10
-0:
-	orcc gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq icc3,cc7
-	ld.p @(gr9,gr0),gr8		/* LD.P/ORCR must be atomic */
-	orcr cc7,cc7,cc3		/* set CC3 to true */
-	or gr8,gr10,gr11
-	cst.p gr11,@(gr9,gr0) ,cc3,#1
-	corcc gr29,gr29,gr0 ,cc3,#1	/* clear ICC3.Z if store happens */
-	beq icc3,#0,0b
-	bralr
-
-	.size atomic_test_and_OR_mask, .-atomic_test_and_OR_mask
-
-###############################################################################
-#
-# unsigned long atomic_test_and_XOR_mask(unsigned long mask, volatile unsigned long *v);
-#
-###############################################################################
-	.globl atomic_test_and_XOR_mask
-	.type atomic_test_and_XOR_mask,@function
-atomic_test_and_XOR_mask:
-	or.p gr8,gr8,gr10
-0:
-	orcc gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq icc3,cc7
-	ld.p @(gr9,gr0),gr8		/* LD.P/ORCR must be atomic */
-	orcr cc7,cc7,cc3		/* set CC3 to true */
-	xor gr8,gr10,gr11
-	cst.p gr11,@(gr9,gr0) ,cc3,#1
-	corcc gr29,gr29,gr0 ,cc3,#1	/* clear ICC3.Z if store happens */
-	beq icc3,#0,0b
-	bralr
-
-	.size atomic_test_and_XOR_mask, .-atomic_test_and_XOR_mask
-
-###############################################################################
-#
-# int atomic_add_return(int i, atomic_t *v)
-#
-###############################################################################
-	.globl atomic_add_return
-	.type atomic_add_return,@function
-atomic_add_return:
-	or.p gr8,gr8,gr10
-0:
-	orcc gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq icc3,cc7
-	ld.p @(gr9,gr0),gr8		/* LD.P/ORCR must be atomic */
-	orcr cc7,cc7,cc3		/* set CC3 to true */
-	add gr8,gr10,gr8
-	cst.p gr8,@(gr9,gr0) ,cc3,#1
-	corcc gr29,gr29,gr0 ,cc3,#1	/* clear ICC3.Z if store happens */
-	beq icc3,#0,0b
-	bralr
-
-	.size atomic_add_return, .-atomic_add_return
-
-###############################################################################
-#
-# int atomic_sub_return(int i, atomic_t *v)
-#
-###############################################################################
-	.globl atomic_sub_return
-	.type atomic_sub_return,@function
-atomic_sub_return:
-	or.p gr8,gr8,gr10
-0:
-	orcc gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq icc3,cc7
-	ld.p @(gr9,gr0),gr8		/* LD.P/ORCR must be atomic */
-	orcr cc7,cc7,cc3		/* set CC3 to true */
-	sub gr8,gr10,gr8
-	cst.p gr8,@(gr9,gr0) ,cc3,#1
-	corcc gr29,gr29,gr0 ,cc3,#1	/* clear ICC3.Z if store happens */
-	beq icc3,#0,0b
-	bralr
-
-	.size atomic_sub_return, .-atomic_sub_return
-
-###############################################################################
-#
 # uint32_t __xchg_32(uint32_t i, uint32_t *v)
 #
 ###############################################################################
-94
arch/frv/lib/atomic64-ops.S
···
 
 ###############################################################################
 #
-# long long atomic64_inc_return(atomic64_t *v)
-#
-###############################################################################
-	.globl atomic64_inc_return
-	.type atomic64_inc_return,@function
-atomic64_inc_return:
-	or.p gr8,gr8,gr10
-0:
-	orcc gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq icc3,cc7
-	ldd.p @(gr10,gr0),gr8		/* LDD.P/ORCR must be atomic */
-	orcr cc7,cc7,cc3		/* set CC3 to true */
-	addicc gr9,#1,gr9,icc0
-	addxi gr8,#0,gr8,icc0
-	cstd.p gr8,@(gr10,gr0) ,cc3,#1
-	corcc gr29,gr29,gr0 ,cc3,#1	/* clear ICC3.Z if store happens */
-	beq icc3,#0,0b
-	bralr
-
-	.size atomic64_inc_return, .-atomic64_inc_return
-
-###############################################################################
-#
-# long long atomic64_dec_return(atomic64_t *v)
-#
-###############################################################################
-	.globl atomic64_dec_return
-	.type atomic64_dec_return,@function
-atomic64_dec_return:
-	or.p gr8,gr8,gr10
-0:
-	orcc gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq icc3,cc7
-	ldd.p @(gr10,gr0),gr8		/* LDD.P/ORCR must be atomic */
-	orcr cc7,cc7,cc3		/* set CC3 to true */
-	subicc gr9,#1,gr9,icc0
-	subxi gr8,#0,gr8,icc0
-	cstd.p gr8,@(gr10,gr0) ,cc3,#1
-	corcc gr29,gr29,gr0 ,cc3,#1	/* clear ICC3.Z if store happens */
-	beq icc3,#0,0b
-	bralr
-
-	.size atomic64_dec_return, .-atomic64_dec_return
-
-###############################################################################
-#
-# long long atomic64_add_return(long long i, atomic64_t *v)
-#
-###############################################################################
-	.globl atomic64_add_return
-	.type atomic64_add_return,@function
-atomic64_add_return:
-	or.p gr8,gr8,gr4
-	or gr9,gr9,gr5
-0:
-	orcc gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq icc3,cc7
-	ldd.p @(gr10,gr0),gr8		/* LDD.P/ORCR must be atomic */
-	orcr cc7,cc7,cc3		/* set CC3 to true */
-	addcc gr9,gr5,gr9,icc0
-	addx gr8,gr4,gr8,icc0
-	cstd.p gr8,@(gr10,gr0) ,cc3,#1
-	corcc gr29,gr29,gr0 ,cc3,#1	/* clear ICC3.Z if store happens */
-	beq icc3,#0,0b
-	bralr
-
-	.size atomic64_add_return, .-atomic64_add_return
-
-###############################################################################
-#
-# long long atomic64_sub_return(long long i, atomic64_t *v)
-#
-###############################################################################
-	.globl atomic64_sub_return
-	.type atomic64_sub_return,@function
-atomic64_sub_return:
-	or.p gr8,gr8,gr4
-	or gr9,gr9,gr5
-0:
-	orcc gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq icc3,cc7
-	ldd.p @(gr10,gr0),gr8		/* LDD.P/ORCR must be atomic */
-	orcr cc7,cc7,cc3		/* set CC3 to true */
-	subcc gr9,gr5,gr9,icc0
-	subx gr8,gr4,gr8,icc0
-	cstd.p gr8,@(gr10,gr0) ,cc3,#1
-	corcc gr29,gr29,gr0 ,cc3,#1	/* clear ICC3.Z if store happens */
-	beq icc3,#0,0b
-	bralr
-
-	.size atomic64_sub_return, .-atomic64_sub_return
-
-###############################################################################
-#
 # uint64_t __xchg_64(uint64_t i, uint64_t *v)
 #
 ###############################################################################
+39 -106
arch/h8300/include/asm/atomic.h
···
 
 #include <linux/kernel.h>
 
-static inline int atomic_add_return(int i, atomic_t *v)
-{
-	h8300flags flags;
-	int ret;
-
-	flags = arch_local_irq_save();
-	ret = v->counter += i;
-	arch_local_irq_restore(flags);
-	return ret;
+#define ATOMIC_OP_RETURN(op, c_op) \
+static inline int atomic_##op##_return(int i, atomic_t *v) \
+{ \
+	h8300flags flags; \
+	int ret; \
+ \
+	flags = arch_local_irq_save(); \
+	ret = v->counter c_op i; \
+	arch_local_irq_restore(flags); \
+	return ret; \
 }
 
-#define atomic_add(i, v) atomic_add_return(i, v)
+#define ATOMIC_OP(op, c_op) \
+static inline void atomic_##op(int i, atomic_t *v) \
+{ \
+	h8300flags flags; \
+ \
+	flags = arch_local_irq_save(); \
+	v->counter c_op i; \
+	arch_local_irq_restore(flags); \
+}
+
+ATOMIC_OP_RETURN(add, +=)
+ATOMIC_OP_RETURN(sub, -=)
+
+ATOMIC_OP(and, &=)
+ATOMIC_OP(or, |=)
+ATOMIC_OP(xor, ^=)
+
+#undef ATOMIC_OP_RETURN
+#undef ATOMIC_OP
+
+#define atomic_add(i, v)		(void)atomic_add_return(i, v)
 #define atomic_add_negative(a, v)	(atomic_add_return((a), (v)) < 0)
 
-static inline int atomic_sub_return(int i, atomic_t *v)
-{
-	h8300flags flags;
-	int ret;
+#define atomic_sub(i, v)		(void)atomic_sub_return(i, v)
+#define atomic_sub_and_test(i, v)	(atomic_sub_return(i, v) == 0)
 
-	flags = arch_local_irq_save();
-	ret = v->counter -= i;
-	arch_local_irq_restore(flags);
-	return ret;
-}
+#define atomic_inc_return(v)		atomic_add_return(1, v)
+#define atomic_dec_return(v)		atomic_sub_return(1, v)
 
-#define atomic_sub(i, v) atomic_sub_return(i, v)
-#define atomic_sub_and_test(i, v)	(atomic_sub_return(i, v) == 0)
+#define atomic_inc(v)			(void)atomic_inc_return(v)
+#define atomic_inc_and_test(v)		(atomic_inc_return(v) == 0)
 
-static inline int atomic_inc_return(atomic_t *v)
-{
-	h8300flags flags;
-	int ret;
-
-	flags = arch_local_irq_save();
-	v->counter++;
-	ret = v->counter;
-	arch_local_irq_restore(flags);
-	return ret;
-}
-
-#define atomic_inc(v) atomic_inc_return(v)
-
-/*
- * atomic_inc_and_test - increment and test
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
-#define atomic_inc_and_test(v)	(atomic_inc_return(v) == 0)
-
-static inline int atomic_dec_return(atomic_t *v)
-{
-	h8300flags flags;
-	int ret;
-
-	flags = arch_local_irq_save();
-	--v->counter;
-	ret = v->counter;
-	arch_local_irq_restore(flags);
-	return ret;
-}
-
-#define atomic_dec(v) atomic_dec_return(v)
-
-static inline int atomic_dec_and_test(atomic_t *v)
-{
-	h8300flags flags;
-	int ret;
-
-	flags = arch_local_irq_save();
-	--v->counter;
-	ret = v->counter;
-	arch_local_irq_restore(flags);
-	return ret == 0;
-}
+#define atomic_dec(v)			(void)atomic_dec_return(v)
+#define atomic_dec_and_test(v)		(atomic_dec_return(v) == 0)
 
 static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 {
···
 	arch_local_irq_restore(flags);
 	return ret;
 }
-
-static inline void atomic_clear_mask(unsigned long mask, unsigned long *v)
-{
-	unsigned char ccr;
-	unsigned long tmp;
-
-	__asm__ __volatile__("stc ccr,%w3\n\t"
-			     "orc #0x80,ccr\n\t"
-			     "mov.l %0,%1\n\t"
-			     "and.l %2,%1\n\t"
-			     "mov.l %1,%0\n\t"
-			     "ldc %w3,ccr"
-			     : "=m"(*v), "=r"(tmp)
-			     : "g"(~(mask)), "r"(ccr));
-}
-
-static inline void atomic_set_mask(unsigned long mask, unsigned long *v)
-{
-	unsigned char ccr;
-	unsigned long tmp;
-
-	__asm__ __volatile__("stc ccr,%w3\n\t"
-			     "orc #0x80,ccr\n\t"
-			     "mov.l %0,%1\n\t"
-			     "or.l %2,%1\n\t"
-			     "mov.l %1,%0\n\t"
-			     "ldc %w3,ccr"
-			     : "=m"(*v), "=r"(tmp)
-			     : "g"(~(mask)), "r"(ccr));
-}
-
-/* Atomic operations are already serializing */
-#define smp_mb__before_atomic_dec()	barrier()
-#define smp_mb__after_atomic_dec()	barrier()
-#define smp_mb__before_atomic_inc()	barrier()
-#define smp_mb__after_atomic_inc()	barrier()
 
 #endif /* __ARCH_H8300_ATOMIC __ */
+4
arch/hexagon/include/asm/atomic.h
···
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+ATOMIC_OP(and)
+ATOMIC_OP(or)
+ATOMIC_OP(xor)
+
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
+20 -4
arch/ia64/include/asm/atomic.h
···
 ATOMIC_OP(add, +)
 ATOMIC_OP(sub, -)
 
-#undef ATOMIC_OP
-
 #define atomic_add_return(i,v) \
 ({ \
 	int __ia64_aar_i = (i); \
···
 		: ia64_atomic_sub(__ia64_asr_i, v); \
 })
 
+ATOMIC_OP(and, &)
+ATOMIC_OP(or, |)
+ATOMIC_OP(xor, ^)
+
+#define atomic_and(i,v)	(void)ia64_atomic_and(i,v)
+#define atomic_or(i,v)	(void)ia64_atomic_or(i,v)
+#define atomic_xor(i,v)	(void)ia64_atomic_xor(i,v)
+
+#undef ATOMIC_OP
+
 #define ATOMIC64_OP(op, c_op) \
 static __inline__ long \
 ia64_atomic64_##op (__s64 i, atomic64_t *v) \
···
 
 ATOMIC64_OP(add, +)
 ATOMIC64_OP(sub, -)
-
-#undef ATOMIC64_OP
 
 #define atomic64_add_return(i,v) \
 ({ \
···
 	? ia64_fetch_and_add(-__ia64_asr_i, &(v)->counter) \
 	: ia64_atomic64_sub(__ia64_asr_i, v); \
 })
+
+ATOMIC64_OP(and, &)
+ATOMIC64_OP(or, |)
+ATOMIC64_OP(xor, ^)
+
+#define atomic64_and(i,v)	(void)ia64_atomic64_and(i,v)
+#define atomic64_or(i,v)	(void)ia64_atomic64_or(i,v)
+#define atomic64_xor(i,v)	(void)ia64_atomic64_xor(i,v)
+
+#undef ATOMIC64_OP
 
 #define atomic_cmpxchg(v, old, new) (cmpxchg(&((v)->counter), old, new))
 #define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+2 -2
arch/ia64/include/asm/barrier.h
···
 do { \
 	compiletime_assert_atomic_type(*p); \
 	barrier(); \
-	ACCESS_ONCE(*p) = (v); \
+	WRITE_ONCE(*p, v); \
 } while (0)
 
 #define smp_load_acquire(p) \
 ({ \
-	typeof(*p) ___p1 = ACCESS_ONCE(*p); \
+	typeof(*p) ___p1 = READ_ONCE(*p); \
 	compiletime_assert_atomic_type(*p); \
 	barrier(); \
 	___p1; \
+4 -41
arch/m32r/include/asm/atomic.h
···
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+ATOMIC_OP(and)
+ATOMIC_OP(or)
+ATOMIC_OP(xor)
+
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
···
 		c = old;
 	}
 	return c;
-}
-
-
-static __inline__ void atomic_clear_mask(unsigned long mask, atomic_t *addr)
-{
-	unsigned long flags;
-	unsigned long tmp;
-
-	local_irq_save(flags);
-	__asm__ __volatile__ (
-		"# atomic_clear_mask \n\t"
-		DCACHE_CLEAR("%0", "r5", "%1")
-		M32R_LOCK" %0, @%1; \n\t"
-		"and %0, %2; \n\t"
-		M32R_UNLOCK" %0, @%1; \n\t"
-		: "=&r" (tmp)
-		: "r" (addr), "r" (~mask)
-		: "memory"
-		__ATOMIC_CLOBBER
-	);
-	local_irq_restore(flags);
-}
-
-static __inline__ void atomic_set_mask(unsigned long mask, atomic_t *addr)
-{
-	unsigned long flags;
-	unsigned long tmp;
-
-	local_irq_save(flags);
-	__asm__ __volatile__ (
-		"# atomic_set_mask \n\t"
-		DCACHE_CLEAR("%0", "r5", "%1")
-		M32R_LOCK" %0, @%1; \n\t"
-		"or %0, %2; \n\t"
-		M32R_UNLOCK" %0, @%1; \n\t"
-		: "=&r" (tmp)
-		: "r" (addr), "r" (mask)
-		: "memory"
-		__ATOMIC_CLOBBER
-	);
-	local_irq_restore(flags);
 }
 
 #endif	/* _ASM_M32R_ATOMIC_H */
+2 -2
arch/m32r/kernel/smp.c
···
 	cpumask_clear_cpu(smp_processor_id(), &cpumask);
 	spin_lock(&flushcache_lock);
 	mask=cpumask_bits(&cpumask);
-	atomic_set_mask(*mask, (atomic_t *)&flushcache_cpumask);
+	atomic_or(*mask, (atomic_t *)&flushcache_cpumask);
 	send_IPI_mask(&cpumask, INVALIDATE_CACHE_IPI, 0);
 	_flush_cache_copyback_all();
 	while (flushcache_cpumask)
···
 	flush_vma = vma;
 	flush_va = va;
 	mask=cpumask_bits(&cpumask);
-	atomic_set_mask(*mask, (atomic_t *)&flush_cpumask);
+	atomic_or(*mask, (atomic_t *)&flush_cpumask);
 
 	/*
 	 * We have to send the IPI only to
+4 -10
arch/m68k/include/asm/atomic.h
···
 ATOMIC_OPS(add, +=, add)
 ATOMIC_OPS(sub, -=, sub)
 
+ATOMIC_OP(and, &=, and)
+ATOMIC_OP(or, |=, or)
+ATOMIC_OP(xor, ^=, eor)
+
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
···
 		: "=d" (c), "+m" (*v)
 		: ASM_DI (i));
 	return c != 0;
-}
-
-static inline void atomic_clear_mask(unsigned long mask, unsigned long *v)
-{
-	__asm__ __volatile__("andl %1,%0" : "+m" (*v) : ASM_DI (~(mask)));
-}
-
-static inline void atomic_set_mask(unsigned long mask, unsigned long *v)
-{
-	__asm__ __volatile__("orl %1,%0" : "+m" (*v) : ASM_DI (mask));
 }
 
 static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u)
+4 -34
arch/metag/include/asm/atomic_lnkget.h
···
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+ATOMIC_OP(and)
+ATOMIC_OP(or)
+ATOMIC_OP(xor)
+
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
-
-static inline void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-	int temp;
-
-	asm volatile (
-		"1:	LNKGETD %0, [%1]\n"
-		"	AND	%0, %0, %2\n"
-		"	LNKSETD	[%1] %0\n"
-		"	DEFR	%0, TXSTAT\n"
-		"	ANDT	%0, %0, #HI(0x3f000000)\n"
-		"	CMPT	%0, #HI(0x02000000)\n"
-		"	BNZ	1b\n"
-		: "=&d" (temp)
-		: "da" (&v->counter), "bd" (~mask)
-		: "cc");
-}
-
-static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-	int temp;
-
-	asm volatile (
-		"1:	LNKGETD %0, [%1]\n"
-		"	OR	%0, %0, %2\n"
-		"	LNKSETD	[%1], %0\n"
-		"	DEFR	%0, TXSTAT\n"
-		"	ANDT	%0, %0, #HI(0x3f000000)\n"
-		"	CMPT	%0, #HI(0x02000000)\n"
-		"	BNZ	1b\n"
-		: "=&d" (temp)
-		: "da" (&v->counter), "bd" (mask)
-		: "cc");
-}
 
 static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 {
+3 -20
arch/metag/include/asm/atomic_lock1.h
··· 68 68 69 69 ATOMIC_OPS(add, +=) 70 70 ATOMIC_OPS(sub, -=) 71 + ATOMIC_OP(and, &=) 72 + ATOMIC_OP(or, |=) 73 + ATOMIC_OP(xor, ^=) 71 74 72 75 #undef ATOMIC_OPS 73 76 #undef ATOMIC_OP_RETURN 74 77 #undef ATOMIC_OP 75 - 76 - static inline void atomic_clear_mask(unsigned int mask, atomic_t *v) 77 - { 78 - unsigned long flags; 79 - 80 - __global_lock1(flags); 81 - fence(); 82 - v->counter &= ~mask; 83 - __global_unlock1(flags); 84 - } 85 - 86 - static inline void atomic_set_mask(unsigned int mask, atomic_t *v) 87 - { 88 - unsigned long flags; 89 - 90 - __global_lock1(flags); 91 - fence(); 92 - v->counter |= mask; 93 - __global_unlock1(flags); 94 - } 95 78 96 79 static inline int atomic_cmpxchg(atomic_t *v, int old, int new) 97 80 {
+2 -2
arch/metag/include/asm/barrier.h
··· 90 90 do { \ 91 91 compiletime_assert_atomic_type(*p); \ 92 92 smp_mb(); \ 93 - ACCESS_ONCE(*p) = (v); \ 93 + WRITE_ONCE(*p, v); \ 94 94 } while (0) 95 95 96 96 #define smp_load_acquire(p) \ 97 97 ({ \ 98 - typeof(*p) ___p1 = ACCESS_ONCE(*p); \ 98 + typeof(*p) ___p1 = READ_ONCE(*p); \ 99 99 compiletime_assert_atomic_type(*p); \ 100 100 smp_mb(); \ 101 101 ___p1; \
+7
arch/mips/include/asm/atomic.h
··· 137 137 ATOMIC_OPS(add, +=, addu) 138 138 ATOMIC_OPS(sub, -=, subu) 139 139 140 + ATOMIC_OP(and, &=, and) 141 + ATOMIC_OP(or, |=, or) 142 + ATOMIC_OP(xor, ^=, xor) 143 + 140 144 #undef ATOMIC_OPS 141 145 #undef ATOMIC_OP_RETURN 142 146 #undef ATOMIC_OP ··· 420 416 421 417 ATOMIC64_OPS(add, +=, daddu) 422 418 ATOMIC64_OPS(sub, -=, dsubu) 419 + ATOMIC64_OP(and, &=, and) 420 + ATOMIC64_OP(or, |=, or) 421 + ATOMIC64_OP(xor, ^=, xor) 423 422 424 423 #undef ATOMIC64_OPS 425 424 #undef ATOMIC64_OP_RETURN
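Every ATOMIC_OP()/ATOMIC64_OP() instantiation in these hunks feeds the same generator macro each architecture already uses for add/sub. As an illustration only (the `model_` names are ours, not kernel code), this is roughly what a UP-style expansion of `ATOMIC_OP(or, |=)` produces — compare arch/sh/include/asm/atomic-irq.h further down, which wraps the same body in raw_local_irq_save()/restore():

```c
/* Illustrative model of the ATOMIC_OP() generator pattern (not kernel code).
 * Real arch versions add interrupt masking, LL/SC loops or lock prefixes. */
typedef struct { int counter; } model_atomic_t;

#define MODEL_ATOMIC_OP(op, c_op)                               \
static inline void model_atomic_##op(int i, model_atomic_t *v)  \
{                                                               \
        v->counter c_op i;  /* the arch-specific body goes here */ \
}

MODEL_ATOMIC_OP(and, &=)
MODEL_ATOMIC_OP(or, |=)
MODEL_ATOMIC_OP(xor, ^=)

#undef MODEL_ATOMIC_OP
```

The macro is why adding three coherent logic ops across ~20 architectures is a three-line diff per file.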
+2 -2
arch/mips/include/asm/barrier.h
··· 133 133 do { \ 134 134 compiletime_assert_atomic_type(*p); \ 135 135 smp_mb(); \ 136 - ACCESS_ONCE(*p) = (v); \ 136 + WRITE_ONCE(*p, v); \ 137 137 } while (0) 138 138 139 139 #define smp_load_acquire(p) \ 140 140 ({ \ 141 - typeof(*p) ___p1 = ACCESS_ONCE(*p); \ 141 + typeof(*p) ___p1 = READ_ONCE(*p); \ 142 142 compiletime_assert_atomic_type(*p); \ 143 143 smp_mb(); \ 144 144 ___p1; \
+17 -2
arch/mips/include/asm/jump_label.h
··· 26 26 #define NOP_INSN "nop" 27 27 #endif 28 28 29 - static __always_inline bool arch_static_branch(struct static_key *key) 29 + static __always_inline bool arch_static_branch(struct static_key *key, bool branch) 30 30 { 31 31 asm_volatile_goto("1:\t" NOP_INSN "\n\t" 32 32 "nop\n\t" 33 33 ".pushsection __jump_table, \"aw\"\n\t" 34 34 WORD_INSN " 1b, %l[l_yes], %0\n\t" 35 35 ".popsection\n\t" 36 - : : "i" (key) : : l_yes); 36 + : : "i" (&((char *)key)[branch]) : : l_yes); 37 + 38 + return false; 39 + l_yes: 40 + return true; 41 + } 42 + 43 + static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch) 44 + { 45 + asm_volatile_goto("1:\tj %l[l_yes]\n\t" 46 + "nop\n\t" 47 + ".pushsection __jump_table, \"aw\"\n\t" 48 + WORD_INSN " 1b, %l[l_yes], %0\n\t" 49 + ".popsection\n\t" 50 + : : "i" (&((char *)key)[branch]) : : l_yes); 51 + 37 52 return false; 38 53 l_yes: 39 54 return true;
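The `&((char *)key)[branch]` operand above is what lets static_branch_likely()/static_branch_unlikely() know a key's type: keys are word-aligned, so the branch's default direction is encoded in bit 0 of the key address recorded in __jump_table, and the jump_label core masks it back out. A hypothetical userspace model of the encode/decode (`model_` names are ours):

```c
#include <stdint.h>
#include <stdbool.h>

struct model_key { long enabled; };  /* word-aligned, so bit 0 is free */

/* Same expression as the asm operand: key address + 0 or + 1. */
static inline void *model_encode(struct model_key *key, bool branch)
{
        return &((char *)key)[branch];
}

/* The jump_label core recovers both halves by masking bit 0. */
static inline struct model_key *model_entry_key(void *entry)
{
        return (struct model_key *)((uintptr_t)entry & ~1UL);
}

static inline bool model_entry_branch(void *entry)
{
        return (uintptr_t)entry & 1UL;
}
```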
+1 -1
arch/mips/kernel/jump_label.c
··· 51 51 /* Target must have the right alignment and ISA must be preserved. */ 52 52 BUG_ON((e->target & J_ALIGN_MASK) != J_ISA_BIT); 53 53 54 - if (type == JUMP_LABEL_ENABLE) { 54 + if (type == JUMP_LABEL_JMP) { 55 55 insn.j_format.opcode = J_ISA_BIT ? mm_j32_op : j_op; 56 56 insn.j_format.target = e->target >> J_RANGE_SHIFT; 57 57 } else {
+4 -67
arch/mn10300/include/asm/atomic.h
··· 89 89 ATOMIC_OPS(add) 90 90 ATOMIC_OPS(sub) 91 91 92 + ATOMIC_OP(and) 93 + ATOMIC_OP(or) 94 + ATOMIC_OP(xor) 95 + 92 96 #undef ATOMIC_OPS 93 97 #undef ATOMIC_OP_RETURN 94 98 #undef ATOMIC_OP ··· 130 126 131 127 #define atomic_xchg(ptr, v) (xchg(&(ptr)->counter, (v))) 132 128 #define atomic_cmpxchg(v, old, new) (cmpxchg(&((v)->counter), (old), (new))) 133 - 134 - /** 135 - * atomic_clear_mask - Atomically clear bits in memory 136 - * @mask: Mask of the bits to be cleared 137 - * @v: pointer to word in memory 138 - * 139 - * Atomically clears the bits set in mask from the memory word specified. 140 - */ 141 - static inline void atomic_clear_mask(unsigned long mask, unsigned long *addr) 142 - { 143 - #ifdef CONFIG_SMP 144 - int status; 145 - 146 - asm volatile( 147 - "1: mov %3,(_AAR,%2) \n" 148 - " mov (_ADR,%2),%0 \n" 149 - " and %4,%0 \n" 150 - " mov %0,(_ADR,%2) \n" 151 - " mov (_ADR,%2),%0 \n" /* flush */ 152 - " mov (_ASR,%2),%0 \n" 153 - " or %0,%0 \n" 154 - " bne 1b \n" 155 - : "=&r"(status), "=m"(*addr) 156 - : "a"(ATOMIC_OPS_BASE_ADDR), "r"(addr), "r"(~mask) 157 - : "memory", "cc"); 158 - #else 159 - unsigned long flags; 160 - 161 - mask = ~mask; 162 - flags = arch_local_cli_save(); 163 - *addr &= mask; 164 - arch_local_irq_restore(flags); 165 - #endif 166 - } 167 - 168 - /** 169 - * atomic_set_mask - Atomically set bits in memory 170 - * @mask: Mask of the bits to be set 171 - * @v: pointer to word in memory 172 - * 173 - * Atomically sets the bits set in mask from the memory word specified. 174 - */ 175 - static inline void atomic_set_mask(unsigned long mask, unsigned long *addr) 176 - { 177 - #ifdef CONFIG_SMP 178 - int status; 179 - 180 - asm volatile( 181 - "1: mov %3,(_AAR,%2) \n" 182 - " mov (_ADR,%2),%0 \n" 183 - " or %4,%0 \n" 184 - " mov %0,(_ADR,%2) \n" 185 - " mov (_ADR,%2),%0 \n" /* flush */ 186 - " mov (_ASR,%2),%0 \n" 187 - " or %0,%0 \n" 188 - " bne 1b \n" 189 - : "=&r"(status), "=m"(*addr) 190 - : "a"(ATOMIC_OPS_BASE_ADDR), "r"(addr), "r"(mask) 191 - : "memory", "cc"); 192 - #else 193 - unsigned long flags; 194 - 195 - flags = arch_local_cli_save(); 196 - *addr |= mask; 197 - arch_local_irq_restore(flags); 198 - #endif 199 - } 200 129 201 130 #endif /* __KERNEL__ */ 202 131 #endif /* CONFIG_SMP */
+1 -1
arch/mn10300/mm/tlb-smp.c
··· 119 119 flush_mm = mm; 120 120 flush_va = va; 121 121 #if NR_CPUS <= BITS_PER_LONG 122 - atomic_set_mask(cpumask.bits[0], &flush_cpumask.bits[0]); 122 + atomic_or(cpumask.bits[0], (atomic_t *)&flush_cpumask.bits[0]); 123 123 #else 124 124 #error Not supported. 125 125 #endif
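The conversions in the hunks above follow one mechanical rule: `atomic_set_mask(mask, v)` becomes `atomic_or(mask, v)` and `atomic_clear_mask(mask, v)` becomes `atomic_andnot(mask, v)`, with the operand now consistently an `atomic_t *`. A hypothetical userspace model of the two new primitives using C11 `<stdatomic.h>` (the kernel versions are arch-specific and return void; these return the resulting value purely so the behaviour can be checked):

```c
#include <stdatomic.h>

/* Model of atomic_or(): set the bits in mask, as atomic_set_mask() did. */
unsigned int model_atomic_or(unsigned int mask, atomic_uint *v)
{
        return atomic_fetch_or(v, mask) | mask;
}

/* Model of atomic_andnot(): clear the bits in mask, as atomic_clear_mask()
 * did — note the caller no longer inverts the mask itself. */
unsigned int model_atomic_andnot(unsigned int mask, atomic_uint *v)
{
        return atomic_fetch_and(v, ~mask) & ~mask;
}
```

The signature cleanup is the point of the series: the old ops took `unsigned long *` on some architectures and `atomic_t *` on others.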
-1
arch/parisc/configs/c8000_defconfig
··· 242 242 CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y 243 243 CONFIG_PANIC_ON_OOPS=y 244 244 CONFIG_DEBUG_RT_MUTEXES=y 245 - CONFIG_RT_MUTEX_TESTER=y 246 245 CONFIG_PROVE_RCU_DELAY=y 247 246 CONFIG_DEBUG_BLOCK_EXT_DEVT=y 248 247 CONFIG_LATENCYTOP=y
-1
arch/parisc/configs/generic-32bit_defconfig
··· 295 295 CONFIG_DETECT_HUNG_TASK=y 296 296 CONFIG_TIMER_STATS=y 297 297 CONFIG_DEBUG_RT_MUTEXES=y 298 - CONFIG_RT_MUTEX_TESTER=y 299 298 CONFIG_DEBUG_SPINLOCK=y 300 299 CONFIG_DEBUG_MUTEXES=y 301 300 CONFIG_RCU_CPU_STALL_INFO=y
+7
arch/parisc/include/asm/atomic.h
··· 126 126 ATOMIC_OPS(add, +=) 127 127 ATOMIC_OPS(sub, -=) 128 128 129 + ATOMIC_OP(and, &=) 130 + ATOMIC_OP(or, |=) 131 + ATOMIC_OP(xor, ^=) 132 + 129 133 #undef ATOMIC_OPS 130 134 #undef ATOMIC_OP_RETURN 131 135 #undef ATOMIC_OP ··· 189 185 190 186 ATOMIC64_OPS(add, +=) 191 187 ATOMIC64_OPS(sub, -=) 188 + ATOMIC64_OP(and, &=) 189 + ATOMIC64_OP(or, |=) 190 + ATOMIC64_OP(xor, ^=) 192 191 193 192 #undef ATOMIC64_OPS 194 193 #undef ATOMIC64_OP_RETURN
+7
arch/powerpc/include/asm/atomic.h
··· 67 67 ATOMIC_OPS(add, add) 68 68 ATOMIC_OPS(sub, subf) 69 69 70 + ATOMIC_OP(and, and) 71 + ATOMIC_OP(or, or) 72 + ATOMIC_OP(xor, xor) 73 + 70 74 #undef ATOMIC_OPS 71 75 #undef ATOMIC_OP_RETURN 72 76 #undef ATOMIC_OP ··· 308 304 309 305 ATOMIC64_OPS(add, add) 310 306 ATOMIC64_OPS(sub, subf) 307 + ATOMIC64_OP(and, and) 308 + ATOMIC64_OP(or, or) 309 + ATOMIC64_OP(xor, xor) 311 310 312 311 #undef ATOMIC64_OPS 313 312 #undef ATOMIC64_OP_RETURN
+2 -2
arch/powerpc/include/asm/barrier.h
··· 76 76 do { \ 77 77 compiletime_assert_atomic_type(*p); \ 78 78 smp_lwsync(); \ 79 - ACCESS_ONCE(*p) = (v); \ 79 + WRITE_ONCE(*p, v); \ 80 80 } while (0) 81 81 82 82 #define smp_load_acquire(p) \ 83 83 ({ \ 84 - typeof(*p) ___p1 = ACCESS_ONCE(*p); \ 84 + typeof(*p) ___p1 = READ_ONCE(*p); \ 85 85 compiletime_assert_atomic_type(*p); \ 86 86 smp_lwsync(); \ 87 87 ___p1; \
+17 -2
arch/powerpc/include/asm/jump_label.h
··· 18 18 #define JUMP_ENTRY_TYPE stringify_in_c(FTR_ENTRY_LONG) 19 19 #define JUMP_LABEL_NOP_SIZE 4 20 20 21 - static __always_inline bool arch_static_branch(struct static_key *key) 21 + static __always_inline bool arch_static_branch(struct static_key *key, bool branch) 22 22 { 23 23 asm_volatile_goto("1:\n\t" 24 24 "nop\n\t" 25 25 ".pushsection __jump_table, \"aw\"\n\t" 26 26 JUMP_ENTRY_TYPE "1b, %l[l_yes], %c0\n\t" 27 27 ".popsection \n\t" 28 - : : "i" (key) : : l_yes); 28 + : : "i" (&((char *)key)[branch]) : : l_yes); 29 + 30 + return false; 31 + l_yes: 32 + return true; 33 + } 34 + 35 + static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch) 36 + { 37 + asm_volatile_goto("1:\n\t" 38 + "b %l[l_yes]\n\t" 39 + ".pushsection __jump_table, \"aw\"\n\t" 40 + JUMP_ENTRY_TYPE "1b, %l[l_yes], %c0\n\t" 41 + ".popsection \n\t" 42 + : : "i" (&((char *)key)[branch]) : : l_yes); 43 + 29 44 return false; 30 45 l_yes: 31 46 return true;
+1 -1
arch/powerpc/kernel/jump_label.c
··· 17 17 { 18 18 u32 *addr = (u32 *)(unsigned long)entry->code; 19 19 20 - if (type == JUMP_LABEL_ENABLE) 20 + if (type == JUMP_LABEL_JMP) 21 21 patch_branch(addr, entry->target, 0); 22 22 else 23 23 patch_instruction(addr, PPC_INST_NOP);
-19
arch/powerpc/kernel/misc_32.S
··· 596 596 b 2b 597 597 598 598 /* 599 - * void atomic_clear_mask(atomic_t mask, atomic_t *addr) 600 - * void atomic_set_mask(atomic_t mask, atomic_t *addr); 601 - */ 602 - _GLOBAL(atomic_clear_mask) 603 - 10: lwarx r5,0,r4 604 - andc r5,r5,r3 605 - PPC405_ERR77(0,r4) 606 - stwcx. r5,0,r4 607 - bne- 10b 608 - blr 609 - _GLOBAL(atomic_set_mask) 610 - 10: lwarx r5,0,r4 611 - or r5,r5,r3 612 - PPC405_ERR77(0,r4) 613 - stwcx. r5,0,r4 614 - bne- 10b 615 - blr 616 - 617 - /* 618 599 * Extended precision shifts. 619 600 * 620 601 * Updated to be valid for shift counts from 0 to 63 inclusive.
+24 -17
arch/s390/include/asm/atomic.h
··· 27 27 #define __ATOMIC_OR "lao" 28 28 #define __ATOMIC_AND "lan" 29 29 #define __ATOMIC_ADD "laa" 30 + #define __ATOMIC_XOR "lax" 30 31 #define __ATOMIC_BARRIER "bcr 14,0\n" 31 32 32 33 #define __ATOMIC_LOOP(ptr, op_val, op_string, __barrier) \ ··· 50 49 #define __ATOMIC_OR "or" 51 50 #define __ATOMIC_AND "nr" 52 51 #define __ATOMIC_ADD "ar" 52 + #define __ATOMIC_XOR "xr" 53 53 #define __ATOMIC_BARRIER "\n" 54 54 55 55 #define __ATOMIC_LOOP(ptr, op_val, op_string, __barrier) \ ··· 120 118 #define atomic_dec_return(_v) atomic_sub_return(1, _v) 121 119 #define atomic_dec_and_test(_v) (atomic_sub_return(1, _v) == 0) 122 120 123 - static inline void atomic_clear_mask(unsigned int mask, atomic_t *v) 124 - { 125 - __ATOMIC_LOOP(v, ~mask, __ATOMIC_AND, __ATOMIC_NO_BARRIER); 121 + #define ATOMIC_OP(op, OP) \ 122 + static inline void atomic_##op(int i, atomic_t *v) \ 123 + { \ 124 + __ATOMIC_LOOP(v, i, __ATOMIC_##OP, __ATOMIC_NO_BARRIER); \ 126 125 } 127 126 128 - static inline void atomic_set_mask(unsigned int mask, atomic_t *v) 129 - { 130 - __ATOMIC_LOOP(v, mask, __ATOMIC_OR, __ATOMIC_NO_BARRIER); 131 - } 127 + ATOMIC_OP(and, AND) 128 + ATOMIC_OP(or, OR) 129 + ATOMIC_OP(xor, XOR) 130 + 131 + #undef ATOMIC_OP 132 132 133 133 #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 134 134 ··· 171 167 #define __ATOMIC64_OR "laog" 172 168 #define __ATOMIC64_AND "lang" 173 169 #define __ATOMIC64_ADD "laag" 170 + #define __ATOMIC64_XOR "laxg" 174 171 #define __ATOMIC64_BARRIER "bcr 14,0\n" 175 172 176 173 #define __ATOMIC64_LOOP(ptr, op_val, op_string, __barrier) \ ··· 194 189 #define __ATOMIC64_OR "ogr" 195 190 #define __ATOMIC64_AND "ngr" 196 191 #define __ATOMIC64_ADD "agr" 192 + #define __ATOMIC64_XOR "xgr" 197 193 #define __ATOMIC64_BARRIER "\n" 198 194 199 195 #define __ATOMIC64_LOOP(ptr, op_val, op_string, __barrier) \ ··· 253 247 __ATOMIC64_LOOP(v, i, __ATOMIC64_ADD, __ATOMIC64_NO_BARRIER); 254 248 } 255 249 256 - static inline void atomic64_clear_mask(unsigned long mask, atomic64_t *v) 257 - { 258 - __ATOMIC64_LOOP(v, ~mask, __ATOMIC64_AND, __ATOMIC64_NO_BARRIER); 259 - } 260 - 261 - static inline void atomic64_set_mask(unsigned long mask, atomic64_t *v) 262 - { 263 - __ATOMIC64_LOOP(v, mask, __ATOMIC64_OR, __ATOMIC64_NO_BARRIER); 264 - } 265 - 266 250 #define atomic64_xchg(v, new) (xchg(&((v)->counter), new)) 267 251 268 252 static inline long long atomic64_cmpxchg(atomic64_t *v, ··· 266 270 return old; 267 271 } 268 272 273 + #define ATOMIC64_OP(op, OP) \ 274 + static inline void atomic64_##op(long i, atomic64_t *v) \ 275 + { \ 276 + __ATOMIC64_LOOP(v, i, __ATOMIC64_##OP, __ATOMIC64_NO_BARRIER); \ 277 + } 278 + 279 + ATOMIC64_OP(and, AND) 280 + ATOMIC64_OP(or, OR) 281 + ATOMIC64_OP(xor, XOR) 282 + 283 + #undef ATOMIC64_OP 269 284 #undef __ATOMIC64_LOOP 270 285 271 286 static inline int atomic64_add_unless(atomic64_t *v, long long i, long long u)
+2 -2
arch/s390/include/asm/barrier.h
··· 42 42 do { \ 43 43 compiletime_assert_atomic_type(*p); \ 44 44 barrier(); \ 45 - ACCESS_ONCE(*p) = (v); \ 45 + WRITE_ONCE(*p, v); \ 46 46 } while (0) 47 47 48 48 #define smp_load_acquire(p) \ 49 49 ({ \ 50 - typeof(*p) ___p1 = ACCESS_ONCE(*p); \ 50 + typeof(*p) ___p1 = READ_ONCE(*p); \ 51 51 compiletime_assert_atomic_type(*p); \ 52 52 barrier(); \ 53 53 ___p1; \
+17 -2
arch/s390/include/asm/jump_label.h
··· 12 12 * We use a brcl 0,2 instruction for jump labels at compile time so it 13 13 * can be easily distinguished from a hotpatch generated instruction. 14 14 */ 15 - static __always_inline bool arch_static_branch(struct static_key *key) 15 + static __always_inline bool arch_static_branch(struct static_key *key, bool branch) 16 16 { 17 17 asm_volatile_goto("0: brcl 0,"__stringify(JUMP_LABEL_NOP_OFFSET)"\n" 18 18 ".pushsection __jump_table, \"aw\"\n" 19 19 ".balign 8\n" 20 20 ".quad 0b, %l[label], %0\n" 21 21 ".popsection\n" 22 - : : "X" (key) : : label); 22 + : : "X" (&((char *)key)[branch]) : : label); 23 + 24 + return false; 25 + label: 26 + return true; 27 + } 28 + 29 + static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch) 30 + { 31 + asm_volatile_goto("0: brcl 15, %l[label]\n" 32 + ".pushsection __jump_table, \"aw\"\n" 33 + ".balign 8\n" 34 + ".quad 0b, %l[label], %0\n" 35 + ".popsection\n" 36 + : : "X" (&((char *)key)[branch]) : : label); 37 + 23 38 return false; 24 39 label: 25 40 return true;
+1 -1
arch/s390/kernel/jump_label.c
··· 61 61 { 62 62 struct insn old, new; 63 63 64 - if (type == JUMP_LABEL_ENABLE) { 64 + if (type == JUMP_LABEL_JMP) { 65 65 jump_label_make_nop(entry, &old); 66 66 jump_label_make_branch(entry, &new); 67 67 } else {
+2 -2
arch/s390/kernel/time.c
··· 378 378 * increase the "sequence" counter to avoid the race of an 379 379 * etr event and the complete recovery against get_sync_clock. 380 380 */ 381 - atomic_clear_mask(0x80000000, sw_ptr); 381 + atomic_andnot(0x80000000, sw_ptr); 382 382 atomic_inc(sw_ptr); 383 383 } 384 384 ··· 389 389 static void enable_sync_clock(void) 390 390 { 391 391 atomic_t *sw_ptr = this_cpu_ptr(&clock_sync_word); 392 - atomic_set_mask(0x80000000, sw_ptr); 392 + atomic_or(0x80000000, sw_ptr); 393 393 } 394 394 395 395 /*
+15 -15
arch/s390/kvm/interrupt.c
··· 173 173 174 174 static void __set_cpu_idle(struct kvm_vcpu *vcpu) 175 175 { 176 - atomic_set_mask(CPUSTAT_WAIT, &vcpu->arch.sie_block->cpuflags); 176 + atomic_or(CPUSTAT_WAIT, &vcpu->arch.sie_block->cpuflags); 177 177 set_bit(vcpu->vcpu_id, vcpu->arch.local_int.float_int->idle_mask); 178 178 } 179 179 180 180 static void __unset_cpu_idle(struct kvm_vcpu *vcpu) 181 181 { 182 - atomic_clear_mask(CPUSTAT_WAIT, &vcpu->arch.sie_block->cpuflags); 182 + atomic_andnot(CPUSTAT_WAIT, &vcpu->arch.sie_block->cpuflags); 183 183 clear_bit(vcpu->vcpu_id, vcpu->arch.local_int.float_int->idle_mask); 184 184 } 185 185 186 186 static void __reset_intercept_indicators(struct kvm_vcpu *vcpu) 187 187 { 188 - atomic_clear_mask(CPUSTAT_IO_INT | CPUSTAT_EXT_INT | CPUSTAT_STOP_INT, 189 - &vcpu->arch.sie_block->cpuflags); 188 + atomic_andnot(CPUSTAT_IO_INT | CPUSTAT_EXT_INT | CPUSTAT_STOP_INT, 189 + &vcpu->arch.sie_block->cpuflags); 190 190 vcpu->arch.sie_block->lctl = 0x0000; 191 191 vcpu->arch.sie_block->ictl &= ~(ICTL_LPSW | ICTL_STCTL | ICTL_PINT); 192 192 ··· 199 199 200 200 static void __set_cpuflag(struct kvm_vcpu *vcpu, u32 flag) 201 201 { 202 - atomic_set_mask(flag, &vcpu->arch.sie_block->cpuflags); 202 + atomic_or(flag, &vcpu->arch.sie_block->cpuflags); 203 203 } 204 204 205 205 static void set_intercept_indicators_io(struct kvm_vcpu *vcpu) ··· 928 928 spin_unlock(&li->lock); 929 929 930 930 /* clear pending external calls set by sigp interpretation facility */ 931 - atomic_clear_mask(CPUSTAT_ECALL_PEND, li->cpuflags); 931 + atomic_andnot(CPUSTAT_ECALL_PEND, li->cpuflags); 932 932 vcpu->kvm->arch.sca->cpu[vcpu->vcpu_id].sigp_ctrl = 0; 933 933 } 934 934 ··· 1026 1026 1027 1027 li->irq.ext = irq->u.ext; 1028 1028 set_bit(IRQ_PEND_PFAULT_INIT, &li->pending_irqs); 1029 - atomic_set_mask(CPUSTAT_EXT_INT, li->cpuflags); 1029 + atomic_or(CPUSTAT_EXT_INT, li->cpuflags); 1030 1030 return 0; 1031 1031 } 1032 1032 ··· 1041 1041 /* another external call is pending */ 1042 1042 return -EBUSY; 1043 1043 } 1044 - atomic_set_mask(CPUSTAT_ECALL_PEND, &vcpu->arch.sie_block->cpuflags); 1044 + atomic_or(CPUSTAT_ECALL_PEND, &vcpu->arch.sie_block->cpuflags); 1045 1045 return 0; 1046 1046 } 1047 1047 ··· 1067 1067 if (test_and_set_bit(IRQ_PEND_EXT_EXTERNAL, &li->pending_irqs)) 1068 1068 return -EBUSY; 1069 1069 *extcall = irq->u.extcall; 1070 - atomic_set_mask(CPUSTAT_EXT_INT, li->cpuflags); 1070 + atomic_or(CPUSTAT_EXT_INT, li->cpuflags); 1071 1071 return 0; 1072 1072 } 1073 1073 ··· 1139 1139 1140 1140 set_bit(irq->u.emerg.code, li->sigp_emerg_pending); 1141 1141 set_bit(IRQ_PEND_EXT_EMERGENCY, &li->pending_irqs); 1142 - atomic_set_mask(CPUSTAT_EXT_INT, li->cpuflags); 1142 + atomic_or(CPUSTAT_EXT_INT, li->cpuflags); 1143 1143 return 0; 1144 1144 } 1145 1145 ··· 1183 1183 0, 0); 1184 1184 1185 1185 set_bit(IRQ_PEND_EXT_CLOCK_COMP, &li->pending_irqs); 1186 - atomic_set_mask(CPUSTAT_EXT_INT, li->cpuflags); 1186 + atomic_or(CPUSTAT_EXT_INT, li->cpuflags); 1187 1187 return 0; 1188 1188 } 1189 1189 ··· 1196 1196 0, 0); 1197 1197 1198 1198 set_bit(IRQ_PEND_EXT_CPU_TIMER, &li->pending_irqs); 1199 - atomic_set_mask(CPUSTAT_EXT_INT, li->cpuflags); 1199 + atomic_or(CPUSTAT_EXT_INT, li->cpuflags); 1200 1200 return 0; 1201 1201 } 1202 1202 ··· 1375 1375 spin_lock(&li->lock); 1376 1376 switch (type) { 1377 1377 case KVM_S390_MCHK: 1378 - atomic_set_mask(CPUSTAT_STOP_INT, li->cpuflags); 1378 + atomic_or(CPUSTAT_STOP_INT, li->cpuflags); 1379 1379 break; 1380 1380 case KVM_S390_INT_IO_MIN...KVM_S390_INT_IO_MAX: 1381 - atomic_set_mask(CPUSTAT_IO_INT, li->cpuflags); 1381 + atomic_or(CPUSTAT_IO_INT, li->cpuflags); 1382 1382 break; 1383 1383 default: 1384 - atomic_set_mask(CPUSTAT_EXT_INT, li->cpuflags); 1384 + atomic_or(CPUSTAT_EXT_INT, li->cpuflags); 1385 1385 break; 1386 1386 } 1387 1387 spin_unlock(&li->lock);
+16 -16
arch/s390/kvm/kvm-s390.c
··· 1333 1333 save_access_regs(vcpu->arch.host_acrs); 1334 1334 restore_access_regs(vcpu->run->s.regs.acrs); 1335 1335 gmap_enable(vcpu->arch.gmap); 1336 - atomic_set_mask(CPUSTAT_RUNNING, &vcpu->arch.sie_block->cpuflags); 1336 + atomic_or(CPUSTAT_RUNNING, &vcpu->arch.sie_block->cpuflags); 1337 1337 } 1338 1338 1339 1339 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) 1340 1340 { 1341 - atomic_clear_mask(CPUSTAT_RUNNING, &vcpu->arch.sie_block->cpuflags); 1341 + atomic_andnot(CPUSTAT_RUNNING, &vcpu->arch.sie_block->cpuflags); 1342 1342 gmap_disable(vcpu->arch.gmap); 1343 1343 1344 1344 save_fpu_regs(); ··· 1443 1443 CPUSTAT_STOPPED); 1444 1444 1445 1445 if (test_kvm_facility(vcpu->kvm, 78)) 1446 - atomic_set_mask(CPUSTAT_GED2, &vcpu->arch.sie_block->cpuflags); 1446 + atomic_or(CPUSTAT_GED2, &vcpu->arch.sie_block->cpuflags); 1447 1447 else if (test_kvm_facility(vcpu->kvm, 8)) 1448 - atomic_set_mask(CPUSTAT_GED, &vcpu->arch.sie_block->cpuflags); 1448 + atomic_or(CPUSTAT_GED, &vcpu->arch.sie_block->cpuflags); 1449 1449 1450 1450 kvm_s390_vcpu_setup_model(vcpu); 1451 1451 ··· 1557 1557 1558 1558 void kvm_s390_vcpu_block(struct kvm_vcpu *vcpu) 1559 1559 { 1560 - atomic_set_mask(PROG_BLOCK_SIE, &vcpu->arch.sie_block->prog20); 1560 + atomic_or(PROG_BLOCK_SIE, &vcpu->arch.sie_block->prog20); 1561 1561 exit_sie(vcpu); 1562 1562 } 1563 1563 1564 1564 void kvm_s390_vcpu_unblock(struct kvm_vcpu *vcpu) 1565 1565 { 1566 - atomic_clear_mask(PROG_BLOCK_SIE, &vcpu->arch.sie_block->prog20); 1566 + atomic_andnot(PROG_BLOCK_SIE, &vcpu->arch.sie_block->prog20); 1567 1567 } 1568 1568 1569 1569 static void kvm_s390_vcpu_request(struct kvm_vcpu *vcpu) 1570 1570 { 1571 - atomic_set_mask(PROG_REQUEST, &vcpu->arch.sie_block->prog20); 1571 + atomic_or(PROG_REQUEST, &vcpu->arch.sie_block->prog20); 1572 1572 exit_sie(vcpu); 1573 1573 } 1574 1574 1575 1575 static void kvm_s390_vcpu_request_handled(struct kvm_vcpu *vcpu) 1576 1576 { 1577 - atomic_clear_mask(PROG_REQUEST, &vcpu->arch.sie_block->prog20); 1577 + atomic_andnot(PROG_REQUEST, &vcpu->arch.sie_block->prog20); 1578 1578 } 1579 1579 1580 1580 /* ··· 1583 1583 * return immediately. */ 1584 1584 void exit_sie(struct kvm_vcpu *vcpu) 1585 1585 { 1586 - atomic_set_mask(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags); 1586 + atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags); 1587 1587 while (vcpu->arch.sie_block->prog0c & PROG_IN_SIE) 1588 1588 cpu_relax(); 1589 1589 } ··· 1807 1807 if (dbg->control & KVM_GUESTDBG_ENABLE) { 1808 1808 vcpu->guest_debug = dbg->control; 1809 1809 /* enforce guest PER */ 1810 - atomic_set_mask(CPUSTAT_P, &vcpu->arch.sie_block->cpuflags); 1810 + atomic_or(CPUSTAT_P, &vcpu->arch.sie_block->cpuflags); 1811 1811 1812 1812 if (dbg->control & KVM_GUESTDBG_USE_HW_BP) 1813 1813 rc = kvm_s390_import_bp_data(vcpu, dbg); 1814 1814 } else { 1815 - atomic_clear_mask(CPUSTAT_P, &vcpu->arch.sie_block->cpuflags); 1815 + atomic_andnot(CPUSTAT_P, &vcpu->arch.sie_block->cpuflags); 1816 1816 vcpu->arch.guestdbg.last_bp = 0; 1817 1817 } 1818 1818 1819 1819 if (rc) { 1820 1820 vcpu->guest_debug = 0; 1821 1821 kvm_s390_clear_bp_data(vcpu); 1822 - atomic_clear_mask(CPUSTAT_P, &vcpu->arch.sie_block->cpuflags); 1822 + atomic_andnot(CPUSTAT_P, &vcpu->arch.sie_block->cpuflags); 1823 1823 } 1824 1824 1825 1825 return rc; ··· 1894 1894 if (kvm_check_request(KVM_REQ_ENABLE_IBS, vcpu)) { 1895 1895 if (!ibs_enabled(vcpu)) { 1896 1896 trace_kvm_s390_enable_disable_ibs(vcpu->vcpu_id, 1); 1897 - atomic_set_mask(CPUSTAT_IBS, 1897 + atomic_or(CPUSTAT_IBS, 1898 1898 &vcpu->arch.sie_block->cpuflags); 1899 1899 } 1900 1900 goto retry; ··· 1903 1903 if (kvm_check_request(KVM_REQ_DISABLE_IBS, vcpu)) { 1904 1904 if (ibs_enabled(vcpu)) { 1905 1905 trace_kvm_s390_enable_disable_ibs(vcpu->vcpu_id, 0); 1906 - atomic_clear_mask(CPUSTAT_IBS, 1906 + atomic_andnot(CPUSTAT_IBS, 1907 1907 &vcpu->arch.sie_block->cpuflags); 1908 1908 } 1909 1909 goto retry; ··· 2419 2419 __disable_ibs_on_all_vcpus(vcpu->kvm); 2420 2420 } 2421 2421 2422 - atomic_clear_mask(CPUSTAT_STOPPED, &vcpu->arch.sie_block->cpuflags); 2422 + atomic_andnot(CPUSTAT_STOPPED, &vcpu->arch.sie_block->cpuflags); 2423 2423 /* 2424 2424 * Another VCPU might have used IBS while we were offline. 2425 2425 * Let's play safe and flush the VCPU at startup. ··· 2445 2445 /* SIGP STOP and SIGP STOP AND STORE STATUS has been fully processed */ 2446 2446 kvm_s390_clear_stop_irq(vcpu); 2447 2447 2448 - atomic_set_mask(CPUSTAT_STOPPED, &vcpu->arch.sie_block->cpuflags); 2448 + atomic_or(CPUSTAT_STOPPED, &vcpu->arch.sie_block->cpuflags); 2449 2449 __disable_ibs_on_vcpu(vcpu); 2450 2450 2451 2451 for (i = 0; i < online_vcpus; i++) {
+6 -6
arch/s390/lib/uaccess.c
··· 15 15 #include <asm/mmu_context.h> 16 16 #include <asm/facility.h> 17 17 18 - static struct static_key have_mvcos = STATIC_KEY_INIT_FALSE; 18 + static DEFINE_STATIC_KEY_FALSE(have_mvcos); 19 19 20 20 static inline unsigned long copy_from_user_mvcos(void *x, const void __user *ptr, 21 21 unsigned long size) ··· 104 104 105 105 unsigned long __copy_from_user(void *to, const void __user *from, unsigned long n) 106 106 { 107 - if (static_key_false(&have_mvcos)) 107 + if (static_branch_likely(&have_mvcos)) 108 108 return copy_from_user_mvcos(to, from, n); 109 109 return copy_from_user_mvcp(to, from, n); 110 110 } ··· 177 177 178 178 unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n) 179 179 { 180 - if (static_key_false(&have_mvcos)) 180 + if (static_branch_likely(&have_mvcos)) 181 181 return copy_to_user_mvcos(to, from, n); 182 182 return copy_to_user_mvcs(to, from, n); 183 183 } ··· 240 240 241 241 unsigned long __copy_in_user(void __user *to, const void __user *from, unsigned long n) 242 242 { 243 - if (static_key_false(&have_mvcos)) 243 + if (static_branch_likely(&have_mvcos)) 244 244 return copy_in_user_mvcos(to, from, n); 245 245 return copy_in_user_mvc(to, from, n); 246 246 } ··· 312 312 313 313 unsigned long __clear_user(void __user *to, unsigned long size) 314 314 { 315 - if (static_key_false(&have_mvcos)) 315 + if (static_branch_likely(&have_mvcos)) 316 316 return clear_user_mvcos(to, size); 317 317 return clear_user_xc(to, size); 318 318 } ··· 373 373 static int __init uaccess_init(void) 374 374 { 375 375 if (test_facility(27)) 376 - static_key_slow_inc(&have_mvcos); 376 + static_branch_enable(&have_mvcos); 377 377 return 0; 378 378 } 379 379 early_initcall(uaccess_init);
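The uaccess.c hunk above is a complete migration to the new static-key API from this pull: DEFINE_STATIC_KEY_FALSE() plus static_branch_likely()/static_branch_enable() replace the old STATIC_KEY_INIT_FALSE, static_key_false() and static_key_slow_inc() trio. A plain-C model of the semantics (the `model_` names are ours; the real implementation patches the nop/jump at the branch site rather than reading a flag at runtime):

```c
#include <stdbool.h>

/* Model of a key defined with DEFINE_STATIC_KEY_FALSE(): starts disabled. */
struct model_static_key_false { bool enabled; };

#define MODEL_DEFINE_STATIC_KEY_FALSE(name) \
        struct model_static_key_false name = { .enabled = false }

/* Model of static_branch_likely(): the kernel emits a patched branch here. */
static inline bool model_static_branch_likely(struct model_static_key_false *k)
{
        return k->enabled;
}

/* Model of static_branch_enable(): the kernel rewrites the jump in place. */
static inline void model_static_branch_enable(struct model_static_key_false *k)
{
        k->enabled = true;
}
```

The point of the new interface is that the key's type carries its initial value, so callers can no longer pair the wrong test with the wrong key — the repeat-bug pattern the pull-request text describes.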
+4 -39
arch/sh/include/asm/atomic-grb.h
··· 48 48 ATOMIC_OPS(add) 49 49 ATOMIC_OPS(sub) 50 50 51 + ATOMIC_OP(and) 52 + ATOMIC_OP(or) 53 + ATOMIC_OP(xor) 54 + 51 55 #undef ATOMIC_OPS 52 56 #undef ATOMIC_OP_RETURN 53 57 #undef ATOMIC_OP 54 - 55 - static inline void atomic_clear_mask(unsigned int mask, atomic_t *v) 56 - { 57 - int tmp; 58 - unsigned int _mask = ~mask; 59 - 60 - __asm__ __volatile__ ( 61 - " .align 2 \n\t" 62 - " mova 1f, r0 \n\t" /* r0 = end point */ 63 - " mov r15, r1 \n\t" /* r1 = saved sp */ 64 - " mov #-6, r15 \n\t" /* LOGIN: r15 = size */ 65 - " mov.l @%1, %0 \n\t" /* load old value */ 66 - " and %2, %0 \n\t" /* add */ 67 - " mov.l %0, @%1 \n\t" /* store new value */ 68 - "1: mov r1, r15 \n\t" /* LOGOUT */ 69 - : "=&r" (tmp), 70 - "+r" (v) 71 - : "r" (_mask) 72 - : "memory" , "r0", "r1"); 73 - } 74 - 75 - static inline void atomic_set_mask(unsigned int mask, atomic_t *v) 76 - { 77 - int tmp; 78 - 79 - __asm__ __volatile__ ( 80 - " .align 2 \n\t" 81 - " mova 1f, r0 \n\t" /* r0 = end point */ 82 - " mov r15, r1 \n\t" /* r1 = saved sp */ 83 - " mov #-6, r15 \n\t" /* LOGIN: r15 = size */ 84 - " mov.l @%1, %0 \n\t" /* load old value */ 85 - " or %2, %0 \n\t" /* or */ 86 - " mov.l %0, @%1 \n\t" /* store new value */ 87 - "1: mov r1, r15 \n\t" /* LOGOUT */ 88 - : "=&r" (tmp), 89 - "+r" (v) 90 - : "r" (mask) 91 - : "memory" , "r0", "r1"); 92 - } 93 58 94 59 #endif /* __ASM_SH_ATOMIC_GRB_H */
+3 -18
arch/sh/include/asm/atomic-irq.h
··· 37 37 38 38 ATOMIC_OPS(add, +=) 39 39 ATOMIC_OPS(sub, -=) 40 + ATOMIC_OP(and, &=) 41 + ATOMIC_OP(or, |=) 42 + ATOMIC_OP(xor, ^=) 40 43 41 44 #undef ATOMIC_OPS 42 45 #undef ATOMIC_OP_RETURN 43 46 #undef ATOMIC_OP 44 - 45 - static inline void atomic_clear_mask(unsigned int mask, atomic_t *v) 46 - { 47 - unsigned long flags; 48 - 49 - raw_local_irq_save(flags); 50 - v->counter &= ~mask; 51 - raw_local_irq_restore(flags); 52 - } 53 - 54 - static inline void atomic_set_mask(unsigned int mask, atomic_t *v) 55 - { 56 - unsigned long flags; 57 - 58 - raw_local_irq_save(flags); 59 - v->counter |= mask; 60 - raw_local_irq_restore(flags); 61 - } 62 47 63 48 #endif /* __ASM_SH_ATOMIC_IRQ_H */
+3 -28
arch/sh/include/asm/atomic-llsc.h
··· 52 52 53 53 ATOMIC_OPS(add) 54 54 ATOMIC_OPS(sub) 55 + ATOMIC_OP(and) 56 + ATOMIC_OP(or) 57 + ATOMIC_OP(xor) 55 58 56 59 #undef ATOMIC_OPS 57 60 #undef ATOMIC_OP_RETURN 58 61 #undef ATOMIC_OP 59 - 60 - static inline void atomic_clear_mask(unsigned int mask, atomic_t *v) 61 - { 62 - unsigned long tmp; 63 - 64 - __asm__ __volatile__ ( 65 - "1: movli.l @%2, %0 ! atomic_clear_mask \n" 66 - " and %1, %0 \n" 67 - " movco.l %0, @%2 \n" 68 - " bf 1b \n" 69 - : "=&z" (tmp) 70 - : "r" (~mask), "r" (&v->counter) 71 - : "t"); 72 - } 73 - 74 - static inline void atomic_set_mask(unsigned int mask, atomic_t *v) 75 - { 76 - unsigned long tmp; 77 - 78 - __asm__ __volatile__ ( 79 - "1: movli.l @%2, %0 ! atomic_set_mask \n" 80 - " or %1, %0 \n" 81 - " movco.l %0, @%2 \n" 82 - " bf 1b \n" 83 - : "=&z" (tmp) 84 - : "r" (mask), "r" (&v->counter) 85 - : "t"); 86 - } 87 62 88 63 #endif /* __ASM_SH_ATOMIC_LLSC_H */
+3 -1
arch/sparc/include/asm/atomic_32.h
··· 17 17 #include <asm/barrier.h> 18 18 #include <asm-generic/atomic64.h> 19 19 20 - 21 20 #define ATOMIC_INIT(i) { (i) } 22 21 23 22 int atomic_add_return(int, atomic_t *); 23 + void atomic_and(int, atomic_t *); 24 + void atomic_or(int, atomic_t *); 25 + void atomic_xor(int, atomic_t *); 24 26 int atomic_cmpxchg(atomic_t *, int, int); 25 27 int atomic_xchg(atomic_t *, int); 26 28 int __atomic_add_unless(atomic_t *, int, int);
+4
arch/sparc/include/asm/atomic_64.h
··· 33 33 ATOMIC_OPS(add) 34 34 ATOMIC_OPS(sub) 35 35 36 + ATOMIC_OP(and) 37 + ATOMIC_OP(or) 38 + ATOMIC_OP(xor) 39 + 36 40 #undef ATOMIC_OPS 37 41 #undef ATOMIC_OP_RETURN 38 42 #undef ATOMIC_OP
+2 -2
arch/sparc/include/asm/barrier_64.h
··· 60 60 do { \ 61 61 compiletime_assert_atomic_type(*p); \ 62 62 barrier(); \ 63 - ACCESS_ONCE(*p) = (v); \ 63 + WRITE_ONCE(*p, v); \ 64 64 } while (0) 65 65 66 66 #define smp_load_acquire(p) \ 67 67 ({ \ 68 - typeof(*p) ___p1 = ACCESS_ONCE(*p); \ 68 + typeof(*p) ___p1 = READ_ONCE(*p); \ 69 69 compiletime_assert_atomic_type(*p); \ 70 70 barrier(); \ 71 71 ___p1; \
+26 -9
arch/sparc/include/asm/jump_label.h
··· 7 7 8 8 #define JUMP_LABEL_NOP_SIZE 4 9 9 10 - static __always_inline bool arch_static_branch(struct static_key *key) 10 + static __always_inline bool arch_static_branch(struct static_key *key, bool branch) 11 11 { 12 - asm_volatile_goto("1:\n\t" 13 - "nop\n\t" 14 - "nop\n\t" 15 - ".pushsection __jump_table, \"aw\"\n\t" 16 - ".align 4\n\t" 17 - ".word 1b, %l[l_yes], %c0\n\t" 18 - ".popsection \n\t" 19 - : : "i" (key) : : l_yes); 12 + asm_volatile_goto("1:\n\t" 13 + "nop\n\t" 14 + "nop\n\t" 15 + ".pushsection __jump_table, \"aw\"\n\t" 16 + ".align 4\n\t" 17 + ".word 1b, %l[l_yes], %c0\n\t" 18 + ".popsection \n\t" 19 + : : "i" (&((char *)key)[branch]) : : l_yes); 20 + 21 + return false; 22 + l_yes: 23 + return true; 24 + } 25 + 26 + static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch) 27 + { 28 + asm_volatile_goto("1:\n\t" 29 + "b %l[l_yes]\n\t" 30 + "nop\n\t" 31 + ".pushsection __jump_table, \"aw\"\n\t" 32 + ".align 4\n\t" 33 + ".word 1b, %l[l_yes], %c0\n\t" 34 + ".popsection \n\t" 35 + : : "i" (&((char *)key)[branch]) : : l_yes); 36 + 20 37 return false; 21 38 l_yes: 22 39 return true;
+1 -1
arch/sparc/kernel/jump_label.c
··· 16 16 u32 val; 17 17 u32 *insn = (u32 *) (unsigned long) entry->code; 18 18 19 - if (type == JUMP_LABEL_ENABLE) { 19 + if (type == JUMP_LABEL_JMP) { 20 20 s32 off = (s32)entry->target - (s32)entry->code; 21 21 22 22 #ifdef CONFIG_SPARC64
+19 -3
arch/sparc/lib/atomic32.c
··· 27 27 28 28 #endif /* SMP */ 29 29 30 - #define ATOMIC_OP(op, cop) \ 30 + #define ATOMIC_OP_RETURN(op, c_op) \ 31 31 int atomic_##op##_return(int i, atomic_t *v) \ 32 32 { \ 33 33 int ret; \ 34 34 unsigned long flags; \ 35 35 spin_lock_irqsave(ATOMIC_HASH(v), flags); \ 36 36 \ 37 - ret = (v->counter cop i); \ 37 + ret = (v->counter c_op i); \ 38 38 \ 39 39 spin_unlock_irqrestore(ATOMIC_HASH(v), flags); \ 40 40 return ret; \ 41 41 } \ 42 42 EXPORT_SYMBOL(atomic_##op##_return); 43 43 44 - ATOMIC_OP(add, +=) 44 + #define ATOMIC_OP(op, c_op) \ 45 + void atomic_##op(int i, atomic_t *v) \ 46 + { \ 47 + unsigned long flags; \ 48 + spin_lock_irqsave(ATOMIC_HASH(v), flags); \ 49 + \ 50 + v->counter c_op i; \ 51 + \ 52 + spin_unlock_irqrestore(ATOMIC_HASH(v), flags); \ 53 + } \ 54 + EXPORT_SYMBOL(atomic_##op); 45 55 56 + ATOMIC_OP_RETURN(add, +=) 57 + ATOMIC_OP(and, &=) 58 + ATOMIC_OP(or, |=) 59 + ATOMIC_OP(xor, ^=) 60 + 61 + #undef ATOMIC_OP_RETURN 46 62 #undef ATOMIC_OP 47 63 48 64 int atomic_xchg(atomic_t *v, int new)
+6
arch/sparc/lib/atomic_64.S
··· 47 47 48 48 ATOMIC_OPS(add) 49 49 ATOMIC_OPS(sub) 50 + ATOMIC_OP(and) 51 + ATOMIC_OP(or) 52 + ATOMIC_OP(xor) 50 53 51 54 #undef ATOMIC_OPS 52 55 #undef ATOMIC_OP_RETURN ··· 87 84 88 85 ATOMIC64_OPS(add) 89 86 ATOMIC64_OPS(sub) 87 + ATOMIC64_OP(and) 88 + ATOMIC64_OP(or) 89 + ATOMIC64_OP(xor) 90 90 91 91 #undef ATOMIC64_OPS 92 92 #undef ATOMIC64_OP_RETURN
+3
arch/sparc/lib/ksyms.c
··· 111 111 112 112 ATOMIC_OPS(add) 113 113 ATOMIC_OPS(sub) 114 + ATOMIC_OP(and) 115 + ATOMIC_OP(or) 116 + ATOMIC_OP(xor) 114 117 115 118 #undef ATOMIC_OPS 116 119 #undef ATOMIC_OP_RETURN
+28
arch/tile/include/asm/atomic_32.h
··· 34 34 _atomic_xchg_add(&v->counter, i); 35 35 } 36 36 37 + #define ATOMIC_OP(op) \ 38 + unsigned long _atomic_##op(volatile unsigned long *p, unsigned long mask); \ 39 + static inline void atomic_##op(int i, atomic_t *v) \ 40 + { \ 41 + _atomic_##op((unsigned long *)&v->counter, i); \ 42 + } 43 + 44 + ATOMIC_OP(and) 45 + ATOMIC_OP(or) 46 + ATOMIC_OP(xor) 47 + 48 + #undef ATOMIC_OP 49 + 37 50 /** 38 51 * atomic_add_return - add integer and return 39 52 * @v: pointer of type atomic_t ··· 125 112 { 126 113 _atomic64_xchg_add(&v->counter, i); 127 114 } 115 + 116 + #define ATOMIC64_OP(op) \ 117 + long long _atomic64_##op(long long *v, long long n); \ 118 + static inline void atomic64_##op(long long i, atomic64_t *v) \ 119 + { \ 120 + _atomic64_##op(&v->counter, i); \ 121 + } 122 + 123 + ATOMIC64_OP(and) 124 + ATOMIC64_OP(or) 125 + ATOMIC64_OP(xor) 128 126 129 127 /** 130 128 * atomic64_add_return - add integer and return ··· 249 225 extern struct __get_user __atomic_xchg_add_unless(volatile int *p, 250 226 int *lock, int o, int n); 251 227 extern struct __get_user __atomic_or(volatile int *p, int *lock, int n); 228 + extern struct __get_user __atomic_and(volatile int *p, int *lock, int n); 252 229 extern struct __get_user __atomic_andn(volatile int *p, int *lock, int n); 253 230 extern struct __get_user __atomic_xor(volatile int *p, int *lock, int n); 254 231 extern long long __atomic64_cmpxchg(volatile long long *p, int *lock, ··· 259 234 long long n); 260 235 extern long long __atomic64_xchg_add_unless(volatile long long *p, 261 236 int *lock, long long o, long long n); 237 + extern long long __atomic64_and(volatile long long *p, int *lock, long long n); 238 + extern long long __atomic64_or(volatile long long *p, int *lock, long long n); 239 + extern long long __atomic64_xor(volatile long long *p, int *lock, long long n); 262 240 263 241 /* Return failure from the atomic wrappers. */ 264 242 struct __get_user __atomic_bad_address(int __user *addr);
+40
arch/tile/include/asm/atomic_64.h
··· 58 58 return oldval; 59 59 } 60 60 61 + static inline void atomic_and(int i, atomic_t *v) 62 + { 63 + __insn_fetchand4((void *)&v->counter, i); 64 + } 65 + 66 + static inline void atomic_or(int i, atomic_t *v) 67 + { 68 + __insn_fetchor4((void *)&v->counter, i); 69 + } 70 + 71 + static inline void atomic_xor(int i, atomic_t *v) 72 + { 73 + int guess, oldval = v->counter; 74 + do { 75 + guess = oldval; 76 + __insn_mtspr(SPR_CMPEXCH_VALUE, guess); 77 + oldval = __insn_cmpexch4(&v->counter, guess ^ i); 78 + } while (guess != oldval); 79 + } 80 + 61 81 /* Now the true 64-bit operations. */ 62 82 63 83 #define ATOMIC64_INIT(i) { (i) } ··· 109 89 oldval = cmpxchg(&v->counter, guess, guess + a); 110 90 } while (guess != oldval); 111 91 return oldval != u; 92 + } 93 + 94 + static inline void atomic64_and(long i, atomic64_t *v) 95 + { 96 + __insn_fetchand((void *)&v->counter, i); 97 + } 98 + 99 + static inline void atomic64_or(long i, atomic64_t *v) 100 + { 101 + __insn_fetchor((void *)&v->counter, i); 102 + } 103 + 104 + static inline void atomic64_xor(long i, atomic64_t *v) 105 + { 106 + long guess, oldval = v->counter; 107 + do { 108 + guess = oldval; 109 + __insn_mtspr(SPR_CMPEXCH_VALUE, guess); 110 + oldval = __insn_cmpexch(&v->counter, guess ^ i); 111 + } while (guess != oldval); 112 112 } 113 113 114 114 #define atomic64_sub_return(i, v) atomic64_add_return(-(i), (v))
+23
arch/tile/lib/atomic_32.c
··· 94 94 } 95 95 EXPORT_SYMBOL(_atomic_or); 96 96 97 + unsigned long _atomic_and(volatile unsigned long *p, unsigned long mask) 98 + { 99 + return __atomic_and((int *)p, __atomic_setup(p), mask).val; 100 + } 101 + EXPORT_SYMBOL(_atomic_and); 102 + 97 103 unsigned long _atomic_andn(volatile unsigned long *p, unsigned long mask) 98 104 { 99 105 return __atomic_andn((int *)p, __atomic_setup(p), mask).val; ··· 142 136 } 143 137 EXPORT_SYMBOL(_atomic64_cmpxchg); 144 138 139 + long long _atomic64_and(long long *v, long long n) 140 + { 141 + return __atomic64_and(v, __atomic_setup(v), n); 142 + } 143 + EXPORT_SYMBOL(_atomic64_and); 144 + 145 + long long _atomic64_or(long long *v, long long n) 146 + { 147 + return __atomic64_or(v, __atomic_setup(v), n); 148 + } 149 + EXPORT_SYMBOL(_atomic64_or); 150 + 151 + long long _atomic64_xor(long long *v, long long n) 152 + { 153 + return __atomic64_xor(v, __atomic_setup(v), n); 154 + } 155 + EXPORT_SYMBOL(_atomic64_xor); 145 156 146 157 /* 147 158 * If any of the atomic or futex routines hit a bad address (not in
+4
arch/tile/lib/atomic_asm_32.S
··· 178 178 atomic_op _xchg_add_unless, 32, \ 179 179 "sne r26, r22, r2; { bbns r26, 3f; add r24, r22, r3 }" 180 180 atomic_op _or, 32, "or r24, r22, r2" 181 + atomic_op _and, 32, "and r24, r22, r2" 181 182 atomic_op _andn, 32, "nor r2, r2, zero; and r24, r22, r2" 182 183 atomic_op _xor, 32, "xor r24, r22, r2" 183 184 ··· 192 191 { bbns r26, 3f; add r24, r22, r4 }; \ 193 192 { bbns r27, 3f; add r25, r23, r5 }; \ 194 193 slt_u r26, r24, r22; add r25, r25, r26" 194 + atomic_op 64_or, 64, "{ or r24, r22, r2; or r25, r23, r3 }" 195 + atomic_op 64_and, 64, "{ and r24, r22, r2; and r25, r23, r3 }" 196 + atomic_op 64_xor, 64, "{ xor r24, r22, r2; xor r25, r23, r3 }" 195 197 196 198 jrp lr /* happy backtracer */ 197 199
+15 -10
arch/x86/include/asm/atomic.h
··· 182 182 return xchg(&v->counter, new); 183 183 } 184 184 185 + #define ATOMIC_OP(op) \ 186 + static inline void atomic_##op(int i, atomic_t *v) \ 187 + { \ 188 + asm volatile(LOCK_PREFIX #op"l %1,%0" \ 189 + : "+m" (v->counter) \ 190 + : "ir" (i) \ 191 + : "memory"); \ 192 + } 193 + 194 + ATOMIC_OP(and) 195 + ATOMIC_OP(or) 196 + ATOMIC_OP(xor) 197 + 198 + #undef ATOMIC_OP 199 + 185 200 /** 186 201 * __atomic_add_unless - add unless the number is already a given value 187 202 * @v: pointer of type atomic_t ··· 233 218 asm(LOCK_PREFIX "addw $1, %0" : "+m" (*v)); 234 219 return *v; 235 220 } 236 - 237 - /* These are x86-specific, used by some header files */ 238 - #define atomic_clear_mask(mask, addr) \ 239 - asm volatile(LOCK_PREFIX "andl %0,%1" \ 240 - : : "r" (~(mask)), "m" (*(addr)) : "memory") 241 - 242 - #define atomic_set_mask(mask, addr) \ 243 - asm volatile(LOCK_PREFIX "orl %0,%1" \ 244 - : : "r" ((unsigned)(mask)), "m" (*(addr)) \ 245 - : "memory") 246 221 247 222 #ifdef CONFIG_X86_32 248 223 # include <asm/atomic64_32.h>
+14
arch/x86/include/asm/atomic64_32.h
··· 313 313 #undef alternative_atomic64 314 314 #undef __alternative_atomic64 315 315 316 + #define ATOMIC64_OP(op, c_op) \ 317 + static inline void atomic64_##op(long long i, atomic64_t *v) \ 318 + { \ 319 + long long old, c = 0; \ 320 + while ((old = atomic64_cmpxchg(v, c, c c_op i)) != c) \ 321 + c = old; \ 322 + } 323 + 324 + ATOMIC64_OP(and, &) 325 + ATOMIC64_OP(or, |) 326 + ATOMIC64_OP(xor, ^) 327 + 328 + #undef ATOMIC64_OP 329 + 316 330 #endif /* _ASM_X86_ATOMIC64_32_H */
+15
arch/x86/include/asm/atomic64_64.h
··· 220 220 return dec; 221 221 } 222 222 223 + #define ATOMIC64_OP(op) \ 224 + static inline void atomic64_##op(long i, atomic64_t *v) \ 225 + { \ 226 + asm volatile(LOCK_PREFIX #op"q %1,%0" \ 227 + : "+m" (v->counter) \ 228 + : "er" (i) \ 229 + : "memory"); \ 230 + } 231 + 232 + ATOMIC64_OP(and) 233 + ATOMIC64_OP(or) 234 + ATOMIC64_OP(xor) 235 + 236 + #undef ATOMIC64_OP 237 + 223 238 #endif /* _ASM_X86_ATOMIC64_64_H */
+4 -4
arch/x86/include/asm/barrier.h
··· 57 57 do { \ 58 58 compiletime_assert_atomic_type(*p); \ 59 59 smp_mb(); \ 60 - ACCESS_ONCE(*p) = (v); \ 60 + WRITE_ONCE(*p, v); \ 61 61 } while (0) 62 62 63 63 #define smp_load_acquire(p) \ 64 64 ({ \ 65 - typeof(*p) ___p1 = ACCESS_ONCE(*p); \ 65 + typeof(*p) ___p1 = READ_ONCE(*p); \ 66 66 compiletime_assert_atomic_type(*p); \ 67 67 smp_mb(); \ 68 68 ___p1; \ ··· 74 74 do { \ 75 75 compiletime_assert_atomic_type(*p); \ 76 76 barrier(); \ 77 - ACCESS_ONCE(*p) = (v); \ 77 + WRITE_ONCE(*p, v); \ 78 78 } while (0) 79 79 80 80 #define smp_load_acquire(p) \ 81 81 ({ \ 82 - typeof(*p) ___p1 = ACCESS_ONCE(*p); \ 82 + typeof(*p) ___p1 = READ_ONCE(*p); \ 83 83 compiletime_assert_atomic_type(*p); \ 84 84 barrier(); \ 85 85 ___p1; \
+20 -3
arch/x86/include/asm/jump_label.h
··· 16 16 # define STATIC_KEY_INIT_NOP GENERIC_NOP5_ATOMIC 17 17 #endif 18 18 19 - static __always_inline bool arch_static_branch(struct static_key *key) 19 + static __always_inline bool arch_static_branch(struct static_key *key, bool branch) 20 20 { 21 21 asm_volatile_goto("1:" 22 22 ".byte " __stringify(STATIC_KEY_INIT_NOP) "\n\t" 23 23 ".pushsection __jump_table, \"aw\" \n\t" 24 24 _ASM_ALIGN "\n\t" 25 - _ASM_PTR "1b, %l[l_yes], %c0 \n\t" 25 + _ASM_PTR "1b, %l[l_yes], %c0 + %c1 \n\t" 26 26 ".popsection \n\t" 27 - : : "i" (key) : : l_yes); 27 + : : "i" (key), "i" (branch) : : l_yes); 28 + 29 + return false; 30 + l_yes: 31 + return true; 32 + } 33 + 34 + static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch) 35 + { 36 + asm_volatile_goto("1:" 37 + ".byte 0xe9\n\t .long %l[l_yes] - 2f\n\t" 38 + "2:\n\t" 39 + ".pushsection __jump_table, \"aw\" \n\t" 40 + _ASM_ALIGN "\n\t" 41 + _ASM_PTR "1b, %l[l_yes], %c0 + %c1 \n\t" 42 + ".popsection \n\t" 43 + : : "i" (key), "i" (branch) : : l_yes); 44 + 28 45 return false; 29 46 l_yes: 30 47 return true;
-10
arch/x86/include/asm/qrwlock.h
··· 2 2 #define _ASM_X86_QRWLOCK_H 3 3 4 4 #include <asm-generic/qrwlock_types.h> 5 - 6 - #ifndef CONFIG_X86_PPRO_FENCE 7 - #define queue_write_unlock queue_write_unlock 8 - static inline void queue_write_unlock(struct qrwlock *lock) 9 - { 10 - barrier(); 11 - ACCESS_ONCE(*(u8 *)&lock->cnts) = 0; 12 - } 13 - #endif 14 - 15 5 #include <asm-generic/qrwlock.h> 16 6 17 7 #endif /* _ASM_X86_QRWLOCK_H */
+1 -1
arch/x86/kernel/jump_label.c
··· 45 45 const unsigned char default_nop[] = { STATIC_KEY_INIT_NOP }; 46 46 const unsigned char *ideal_nop = ideal_nops[NOP_ATOMIC5]; 47 47 48 - if (type == JUMP_LABEL_ENABLE) { 48 + if (type == JUMP_LABEL_JMP) { 49 49 if (init) { 50 50 /* 51 51 * Jump label is enabled for the first time.
+10 -12
arch/x86/kernel/tsc.c
··· 38 38 erroneous rdtsc usage on !cpu_has_tsc processors */ 39 39 static int __read_mostly tsc_disabled = -1; 40 40 41 - static struct static_key __use_tsc = STATIC_KEY_INIT; 41 + static DEFINE_STATIC_KEY_FALSE(__use_tsc); 42 42 43 43 int tsc_clocksource_reliable; 44 44 ··· 274 274 */ 275 275 u64 native_sched_clock(void) 276 276 { 277 - u64 tsc_now; 277 + if (static_branch_likely(&__use_tsc)) { 278 + u64 tsc_now = rdtsc(); 279 + 280 + /* return the value in ns */ 281 + return cycles_2_ns(tsc_now); 282 + } 278 283 279 284 /* 280 285 * Fall back to jiffies if there's no TSC available: ··· 289 284 * very important for it to be as fast as the platform 290 285 * can achieve it. ) 291 286 */ 292 - if (!static_key_false(&__use_tsc)) { 293 - /* No locking but a rare wrong value is not a big deal: */ 294 - return (jiffies_64 - INITIAL_JIFFIES) * (1000000000 / HZ); 295 - } 296 287 297 - /* read the Time Stamp Counter: */ 298 - tsc_now = rdtsc(); 299 - 300 - /* return the value in ns */ 301 - return cycles_2_ns(tsc_now); 288 + /* No locking but a rare wrong value is not a big deal: */ 289 + return (jiffies_64 - INITIAL_JIFFIES) * (1000000000 / HZ); 302 290 } 303 291 304 292 /* ··· 1210 1212 /* now allow native_sched_clock() to use rdtsc */ 1211 1213 1212 1214 tsc_disabled = 0; 1213 - static_key_slow_inc(&__use_tsc); 1215 + static_branch_enable(&__use_tsc); 1214 1216 1215 1217 if (!no_sched_irq_time) 1216 1218 enable_sched_clock_irqtime();
-1
arch/xtensa/configs/iss_defconfig
··· 616 616 # CONFIG_SLUB_DEBUG_ON is not set 617 617 # CONFIG_SLUB_STATS is not set 618 618 # CONFIG_DEBUG_RT_MUTEXES is not set 619 - # CONFIG_RT_MUTEX_TESTER is not set 620 619 # CONFIG_DEBUG_SPINLOCK is not set 621 620 # CONFIG_DEBUG_MUTEXES is not set 622 621 # CONFIG_DEBUG_SPINLOCK_SLEEP is not set
+4 -69
arch/xtensa/include/asm/atomic.h
··· 145 145 ATOMIC_OPS(add) 146 146 ATOMIC_OPS(sub) 147 147 148 + ATOMIC_OP(and) 149 + ATOMIC_OP(or) 150 + ATOMIC_OP(xor) 151 + 148 152 #undef ATOMIC_OPS 149 153 #undef ATOMIC_OP_RETURN 150 154 #undef ATOMIC_OP ··· 252 248 c = old; 253 249 } 254 250 return c; 255 - } 256 - 257 - 258 - static inline void atomic_clear_mask(unsigned int mask, atomic_t *v) 259 - { 260 - #if XCHAL_HAVE_S32C1I 261 - unsigned long tmp; 262 - int result; 263 - 264 - __asm__ __volatile__( 265 - "1: l32i %1, %3, 0\n" 266 - " wsr %1, scompare1\n" 267 - " and %0, %1, %2\n" 268 - " s32c1i %0, %3, 0\n" 269 - " bne %0, %1, 1b\n" 270 - : "=&a" (result), "=&a" (tmp) 271 - : "a" (~mask), "a" (v) 272 - : "memory" 273 - ); 274 - #else 275 - unsigned int all_f = -1; 276 - unsigned int vval; 277 - 278 - __asm__ __volatile__( 279 - " rsil a15,"__stringify(TOPLEVEL)"\n" 280 - " l32i %0, %2, 0\n" 281 - " xor %1, %4, %3\n" 282 - " and %0, %0, %4\n" 283 - " s32i %0, %2, 0\n" 284 - " wsr a15, ps\n" 285 - " rsync\n" 286 - : "=&a" (vval), "=a" (mask) 287 - : "a" (v), "a" (all_f), "1" (mask) 288 - : "a15", "memory" 289 - ); 290 - #endif 291 - } 292 - 293 - static inline void atomic_set_mask(unsigned int mask, atomic_t *v) 294 - { 295 - #if XCHAL_HAVE_S32C1I 296 - unsigned long tmp; 297 - int result; 298 - 299 - __asm__ __volatile__( 300 - "1: l32i %1, %3, 0\n" 301 - " wsr %1, scompare1\n" 302 - " or %0, %1, %2\n" 303 - " s32c1i %0, %3, 0\n" 304 - " bne %0, %1, 1b\n" 305 - : "=&a" (result), "=&a" (tmp) 306 - : "a" (mask), "a" (v) 307 - : "memory" 308 - ); 309 - #else 310 - unsigned int vval; 311 - 312 - __asm__ __volatile__( 313 - " rsil a15,"__stringify(TOPLEVEL)"\n" 314 - " l32i %0, %2, 0\n" 315 - " or %0, %0, %1\n" 316 - " s32i %0, %2, 0\n" 317 - " wsr a15, ps\n" 318 - " rsync\n" 319 - : "=&a" (vval) 320 - : "a" (mask), "a" (v) 321 - : "a15", "memory" 322 - ); 323 - #endif 324 251 } 325 252 326 253 #endif /* __KERNEL__ */
+1 -1
drivers/gpu/drm/i915/i915_drv.c
··· 748 748 mutex_lock(&dev->struct_mutex); 749 749 if (i915_gem_init_hw(dev)) { 750 750 DRM_ERROR("failed to re-initialize GPU, declaring wedged!\n"); 751 - atomic_set_mask(I915_WEDGED, &dev_priv->gpu_error.reset_counter); 751 + atomic_or(I915_WEDGED, &dev_priv->gpu_error.reset_counter); 752 752 } 753 753 mutex_unlock(&dev->struct_mutex); 754 754
+1 -1
drivers/gpu/drm/i915/i915_gem.c
··· 5091 5091 * for all other failure, such as an allocation failure, bail. 5092 5092 */ 5093 5093 DRM_ERROR("Failed to initialize GPU, declaring it wedged\n"); 5094 - atomic_set_mask(I915_WEDGED, &dev_priv->gpu_error.reset_counter); 5094 + atomic_or(I915_WEDGED, &dev_priv->gpu_error.reset_counter); 5095 5095 ret = 0; 5096 5096 } 5097 5097
+2 -2
drivers/gpu/drm/i915/i915_irq.c
··· 2446 2446 kobject_uevent_env(&dev->primary->kdev->kobj, 2447 2447 KOBJ_CHANGE, reset_done_event); 2448 2448 } else { 2449 - atomic_set_mask(I915_WEDGED, &error->reset_counter); 2449 + atomic_or(I915_WEDGED, &error->reset_counter); 2450 2450 } 2451 2451 2452 2452 /* ··· 2574 2574 i915_report_and_clear_eir(dev); 2575 2575 2576 2576 if (wedged) { 2577 - atomic_set_mask(I915_RESET_IN_PROGRESS_FLAG, 2577 + atomic_or(I915_RESET_IN_PROGRESS_FLAG, 2578 2578 &dev_priv->gpu_error.reset_counter); 2579 2579 2580 2580 /*
+1 -1
drivers/s390/scsi/zfcp_aux.c
··· 529 529 list_add_tail(&port->list, &adapter->port_list); 530 530 write_unlock_irq(&adapter->port_list_lock); 531 531 532 - atomic_set_mask(status | ZFCP_STATUS_COMMON_RUNNING, &port->status); 532 + atomic_or(status | ZFCP_STATUS_COMMON_RUNNING, &port->status); 533 533 534 534 return port; 535 535
+31 -31
drivers/s390/scsi/zfcp_erp.c
··· 190 190 if (!(act_status & ZFCP_STATUS_ERP_NO_REF)) 191 191 if (scsi_device_get(sdev)) 192 192 return NULL; 193 - atomic_set_mask(ZFCP_STATUS_COMMON_ERP_INUSE, 193 + atomic_or(ZFCP_STATUS_COMMON_ERP_INUSE, 194 194 &zfcp_sdev->status); 195 195 erp_action = &zfcp_sdev->erp_action; 196 196 memset(erp_action, 0, sizeof(struct zfcp_erp_action)); ··· 206 206 if (!get_device(&port->dev)) 207 207 return NULL; 208 208 zfcp_erp_action_dismiss_port(port); 209 - atomic_set_mask(ZFCP_STATUS_COMMON_ERP_INUSE, &port->status); 209 + atomic_or(ZFCP_STATUS_COMMON_ERP_INUSE, &port->status); 210 210 erp_action = &port->erp_action; 211 211 memset(erp_action, 0, sizeof(struct zfcp_erp_action)); 212 212 erp_action->port = port; ··· 217 217 case ZFCP_ERP_ACTION_REOPEN_ADAPTER: 218 218 kref_get(&adapter->ref); 219 219 zfcp_erp_action_dismiss_adapter(adapter); 220 - atomic_set_mask(ZFCP_STATUS_COMMON_ERP_INUSE, &adapter->status); 220 + atomic_or(ZFCP_STATUS_COMMON_ERP_INUSE, &adapter->status); 221 221 erp_action = &adapter->erp_action; 222 222 memset(erp_action, 0, sizeof(struct zfcp_erp_action)); 223 223 if (!(atomic_read(&adapter->status) & ··· 254 254 act = zfcp_erp_setup_act(need, act_status, adapter, port, sdev); 255 255 if (!act) 256 256 goto out; 257 - atomic_set_mask(ZFCP_STATUS_ADAPTER_ERP_PENDING, &adapter->status); 257 + atomic_or(ZFCP_STATUS_ADAPTER_ERP_PENDING, &adapter->status); 258 258 ++adapter->erp_total_count; 259 259 list_add_tail(&act->list, &adapter->erp_ready_head); 260 260 wake_up(&adapter->erp_ready_wq); ··· 486 486 { 487 487 if (status_change_set(ZFCP_STATUS_COMMON_UNBLOCKED, &adapter->status)) 488 488 zfcp_dbf_rec_run("eraubl1", &adapter->erp_action); 489 - atomic_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &adapter->status); 489 + atomic_or(ZFCP_STATUS_COMMON_UNBLOCKED, &adapter->status); 490 490 } 491 491 492 492 static void zfcp_erp_port_unblock(struct zfcp_port *port) 493 493 { 494 494 if (status_change_set(ZFCP_STATUS_COMMON_UNBLOCKED, &port->status)) 495 495 zfcp_dbf_rec_run("erpubl1", &port->erp_action); 496 - atomic_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &port->status); 496 + atomic_or(ZFCP_STATUS_COMMON_UNBLOCKED, &port->status); 497 497 } 498 498 499 499 static void zfcp_erp_lun_unblock(struct scsi_device *sdev) ··· 502 502 503 503 if (status_change_set(ZFCP_STATUS_COMMON_UNBLOCKED, &zfcp_sdev->status)) 504 504 zfcp_dbf_rec_run("erlubl1", &sdev_to_zfcp(sdev)->erp_action); 505 - atomic_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &zfcp_sdev->status); 505 + atomic_or(ZFCP_STATUS_COMMON_UNBLOCKED, &zfcp_sdev->status); 506 506 } 507 507 508 508 static void zfcp_erp_action_to_running(struct zfcp_erp_action *erp_action) ··· 642 642 read_lock_irqsave(&adapter->erp_lock, flags); 643 643 if (list_empty(&adapter->erp_ready_head) && 644 644 list_empty(&adapter->erp_running_head)) { 645 - atomic_clear_mask(ZFCP_STATUS_ADAPTER_ERP_PENDING, 645 + atomic_andnot(ZFCP_STATUS_ADAPTER_ERP_PENDING, 646 646 &adapter->status); 647 647 wake_up(&adapter->erp_done_wqh); 648 648 } ··· 665 665 int sleep = 1; 666 666 struct zfcp_adapter *adapter = erp_action->adapter; 667 667 668 - atomic_clear_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK, &adapter->status); 668 + atomic_andnot(ZFCP_STATUS_ADAPTER_XCONFIG_OK, &adapter->status); 669 669 670 670 for (retries = 7; retries; retries--) { 671 - atomic_clear_mask(ZFCP_STATUS_ADAPTER_HOST_CON_INIT, 671 + atomic_andnot(ZFCP_STATUS_ADAPTER_HOST_CON_INIT, 672 672 &adapter->status); 673 673 write_lock_irq(&adapter->erp_lock); 674 674 zfcp_erp_action_to_running(erp_action); 675 675 write_unlock_irq(&adapter->erp_lock); 676 676 if (zfcp_fsf_exchange_config_data(erp_action)) { 677 - atomic_clear_mask(ZFCP_STATUS_ADAPTER_HOST_CON_INIT, 677 + atomic_andnot(ZFCP_STATUS_ADAPTER_HOST_CON_INIT, 678 678 &adapter->status); 679 679 return ZFCP_ERP_FAILED; 680 680 } ··· 692 692 sleep *= 2; 693 693 } 694 694 695 - atomic_clear_mask(ZFCP_STATUS_ADAPTER_HOST_CON_INIT, 695 + atomic_andnot(ZFCP_STATUS_ADAPTER_HOST_CON_INIT, 696 696 &adapter->status); 697 697 698 698 if (!(atomic_read(&adapter->status) & ZFCP_STATUS_ADAPTER_XCONFIG_OK)) ··· 764 764 /* all ports and LUNs are closed */ 765 765 zfcp_erp_clear_adapter_status(adapter, ZFCP_STATUS_COMMON_OPEN); 766 766 767 - atomic_clear_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK | 767 + atomic_andnot(ZFCP_STATUS_ADAPTER_XCONFIG_OK | 768 768 ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, &adapter->status); 769 769 } 770 770 ··· 773 773 struct zfcp_adapter *adapter = act->adapter; 774 774 775 775 if (zfcp_qdio_open(adapter->qdio)) { 776 - atomic_clear_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK | 776 + atomic_andnot(ZFCP_STATUS_ADAPTER_XCONFIG_OK | 777 777 ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, 778 778 &adapter->status); 779 779 return ZFCP_ERP_FAILED; ··· 784 784 return ZFCP_ERP_FAILED; 785 785 } 786 786 787 - atomic_set_mask(ZFCP_STATUS_COMMON_OPEN, &adapter->status); 787 + atomic_or(ZFCP_STATUS_COMMON_OPEN, &adapter->status); 788 788 789 789 return ZFCP_ERP_SUCCEEDED; 790 790 } ··· 948 948 { 949 949 struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev); 950 950 951 - atomic_clear_mask(ZFCP_STATUS_COMMON_ACCESS_DENIED, 951 + atomic_andnot(ZFCP_STATUS_COMMON_ACCESS_DENIED, 952 952 &zfcp_sdev->status); 953 953 } 954 954 ··· 1187 1187 switch (erp_action->action) { 1188 1188 case ZFCP_ERP_ACTION_REOPEN_LUN: 1189 1189 zfcp_sdev = sdev_to_zfcp(erp_action->sdev); 1190 - atomic_clear_mask(ZFCP_STATUS_COMMON_ERP_INUSE, 1190 + atomic_andnot(ZFCP_STATUS_COMMON_ERP_INUSE, 1191 1191 &zfcp_sdev->status); 1192 1192 break; 1193 1193 1194 1194 case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED: 1195 1195 case ZFCP_ERP_ACTION_REOPEN_PORT: 1196 - atomic_clear_mask(ZFCP_STATUS_COMMON_ERP_INUSE, 1196 + atomic_andnot(ZFCP_STATUS_COMMON_ERP_INUSE, 1197 1197 &erp_action->port->status); 1198 1198 break; 1199 1199 1200 1200 case ZFCP_ERP_ACTION_REOPEN_ADAPTER: 1201 - atomic_clear_mask(ZFCP_STATUS_COMMON_ERP_INUSE, 1201 + atomic_andnot(ZFCP_STATUS_COMMON_ERP_INUSE, 1202 1202 &erp_action->adapter->status); 1203 1203 break; 1204 1204 } ··· 1422 1422 unsigned long flags; 1423 1423 u32 common_mask = mask & ZFCP_COMMON_FLAGS; 1424 1424 1425 - atomic_set_mask(mask, &adapter->status); 1425 + atomic_or(mask, &adapter->status); 1426 1426 1427 1427 if (!common_mask) 1428 1428 return; 1429 1429 1430 1430 read_lock_irqsave(&adapter->port_list_lock, flags); 1431 1431 list_for_each_entry(port, &adapter->port_list, list) 1432 - atomic_set_mask(common_mask, &port->status); 1432 + atomic_or(common_mask, &port->status); 1433 1433 read_unlock_irqrestore(&adapter->port_list_lock, flags); 1434 1434 1435 1435 spin_lock_irqsave(adapter->scsi_host->host_lock, flags); 1436 1436 __shost_for_each_device(sdev, adapter->scsi_host) 1437 - atomic_set_mask(common_mask, &sdev_to_zfcp(sdev)->status); 1437 + atomic_or(common_mask, &sdev_to_zfcp(sdev)->status); 1438 1438 spin_unlock_irqrestore(adapter->scsi_host->host_lock, flags); 1439 1439 } ··· 1453 1453 u32 common_mask = mask & ZFCP_COMMON_FLAGS; 1454 1454 u32 clear_counter = mask & ZFCP_STATUS_COMMON_ERP_FAILED; 1455 1455 1456 - atomic_clear_mask(mask, &adapter->status); 1456 + atomic_andnot(mask, &adapter->status); 1457 1457 1458 1458 if (!common_mask) 1459 1459 return; ··· 1463 1463 1464 1464 read_lock_irqsave(&adapter->port_list_lock, flags); 1465 1465 list_for_each_entry(port, &adapter->port_list, list) { 1466 - atomic_clear_mask(common_mask, &port->status); 1466 + atomic_andnot(common_mask, &port->status); 1467 1467 if (clear_counter) 1468 1468 atomic_set(&port->erp_counter, 0); 1469 1469 } ··· 1471 1471 1472 1472 spin_lock_irqsave(adapter->scsi_host->host_lock, flags); 1473 1473 __shost_for_each_device(sdev, adapter->scsi_host) { 1474 - atomic_clear_mask(common_mask, &sdev_to_zfcp(sdev)->status); 1474 + atomic_andnot(common_mask, &sdev_to_zfcp(sdev)->status); 1475 1475 if (clear_counter) 1476 1476 atomic_set(&sdev_to_zfcp(sdev)->erp_counter, 0); 1477 1477 } ··· 1491 1491 u32 common_mask = mask & ZFCP_COMMON_FLAGS; 1492 1492 unsigned long flags; 1493 1493 1494 - atomic_set_mask(mask, &port->status); 1494 + atomic_or(mask, &port->status); 1495 1495 1496 1496 if (!common_mask) 1497 1497 return; ··· 1499 1499 spin_lock_irqsave(port->adapter->scsi_host->host_lock, flags); 1500 1500 __shost_for_each_device(sdev, port->adapter->scsi_host) 1501 1501 if (sdev_to_zfcp(sdev)->port == port) 1502 - atomic_set_mask(common_mask, 1502 + atomic_or(common_mask, 1503 1503 &sdev_to_zfcp(sdev)->status); 1504 1504 spin_unlock_irqrestore(port->adapter->scsi_host->host_lock, flags); 1505 1505 } ··· 1518 1518 u32 clear_counter = mask & ZFCP_STATUS_COMMON_ERP_FAILED; 1519 1519 unsigned long flags; 1520 1520 1521 - atomic_clear_mask(mask, &port->status); 1521 + atomic_andnot(mask, &port->status); 1522 1522 1523 1523 if (!common_mask) 1524 1524 return; ··· 1529 1529 spin_lock_irqsave(port->adapter->scsi_host->host_lock, flags); 1530 1530 __shost_for_each_device(sdev, port->adapter->scsi_host) 1531 1531 if (sdev_to_zfcp(sdev)->port == port) { 1532 - atomic_clear_mask(common_mask, 1532 + atomic_andnot(common_mask, 1533 1533 &sdev_to_zfcp(sdev)->status); 1534 1534 if (clear_counter) 1535 1535 atomic_set(&sdev_to_zfcp(sdev)->erp_counter, 0); ··· 1546 1546 { 1547 1547 struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev); 1548 1548 1549 - atomic_set_mask(mask, &zfcp_sdev->status); 1549 + atomic_or(mask, &zfcp_sdev->status); 1550 1550 } 1551 1551 1552 1552 /** ··· 1558 1558 { 1559 1559 struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev); 1560 1560 1561 - atomic_clear_mask(mask, &zfcp_sdev->status); 1561 + atomic_andnot(mask, &zfcp_sdev->status); 1562 1562 1563 1563 if (mask & ZFCP_STATUS_COMMON_ERP_FAILED) 1564 1564 atomic_set(&zfcp_sdev->erp_counter, 0);
+4 -4
drivers/s390/scsi/zfcp_fc.c
··· 508 508 /* port is good, unblock rport without going through erp */ 509 509 zfcp_scsi_schedule_rport_register(port); 510 510 out: 511 - atomic_clear_mask(ZFCP_STATUS_PORT_LINK_TEST, &port->status); 511 + atomic_andnot(ZFCP_STATUS_PORT_LINK_TEST, &port->status); 512 512 put_device(&port->dev); 513 513 kmem_cache_free(zfcp_fc_req_cache, fc_req); 514 514 } ··· 564 564 if (atomic_read(&port->status) & ZFCP_STATUS_PORT_LINK_TEST) 565 565 goto out; 566 566 567 - atomic_set_mask(ZFCP_STATUS_PORT_LINK_TEST, &port->status); 567 + atomic_or(ZFCP_STATUS_PORT_LINK_TEST, &port->status); 568 568 569 569 retval = zfcp_fc_adisc(port); 570 570 if (retval == 0) 571 571 return; 572 572 573 573 /* send of ADISC was not possible */ 574 - atomic_clear_mask(ZFCP_STATUS_PORT_LINK_TEST, &port->status); 574 + atomic_andnot(ZFCP_STATUS_PORT_LINK_TEST, &port->status); 575 575 zfcp_erp_port_forced_reopen(port, 0, "fcltwk1"); 576 576 577 577 out: ··· 640 640 if (!(atomic_read(&port->status) & ZFCP_STATUS_COMMON_NOESC)) 641 641 return; 642 642 643 - atomic_clear_mask(ZFCP_STATUS_COMMON_NOESC, &port->status); 643 + atomic_andnot(ZFCP_STATUS_COMMON_NOESC, &port->status); 644 644 645 645 if ((port->supported_classes != 0) || 646 646 !list_empty(&port->unit_list))
+13 -13
drivers/s390/scsi/zfcp_fsf.c
··· 114 114 if (atomic_read(&adapter->status) & ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED) 115 115 return; 116 116 117 - atomic_set_mask(ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, &adapter->status); 117 + atomic_or(ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, &adapter->status); 118 118 119 119 zfcp_scsi_schedule_rports_block(adapter); 120 120 ··· 345 345 zfcp_erp_adapter_shutdown(adapter, 0, "fspse_3"); 346 346 break; 347 347 case FSF_PROT_HOST_CONNECTION_INITIALIZING: 348 - atomic_set_mask(ZFCP_STATUS_ADAPTER_HOST_CON_INIT, 348 + atomic_or(ZFCP_STATUS_ADAPTER_HOST_CON_INIT, 349 349 &adapter->status); 350 350 break; 351 351 case FSF_PROT_DUPLICATE_REQUEST_ID: ··· 554 554 zfcp_erp_adapter_shutdown(adapter, 0, "fsecdh1"); 555 555 return; 556 556 } 557 - atomic_set_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK, 557 + atomic_or(ZFCP_STATUS_ADAPTER_XCONFIG_OK, 558 558 &adapter->status); 559 559 break; 560 560 case FSF_EXCHANGE_CONFIG_DATA_INCOMPLETE: ··· 567 567 568 568 /* avoids adapter shutdown to be able to recognize 569 569 * events such as LINK UP */ 570 - atomic_set_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK, 570 + atomic_or(ZFCP_STATUS_ADAPTER_XCONFIG_OK, 571 571 &adapter->status); 572 572 zfcp_fsf_link_down_info_eval(req, 573 573 &qtcb->header.fsf_status_qual.link_down_info); ··· 1394 1394 break; 1395 1395 case FSF_GOOD: 1396 1396 port->handle = header->port_handle; 1397 - atomic_set_mask(ZFCP_STATUS_COMMON_OPEN | 1397 + atomic_or(ZFCP_STATUS_COMMON_OPEN | 1398 1398 ZFCP_STATUS_PORT_PHYS_OPEN, &port->status); 1399 - atomic_clear_mask(ZFCP_STATUS_COMMON_ACCESS_BOXED, 1399 + atomic_andnot(ZFCP_STATUS_COMMON_ACCESS_BOXED, 1400 1400 &port->status); 1401 1401 /* check whether D_ID has changed during open */ 1402 1402 /* ··· 1677 1677 case FSF_PORT_BOXED: 1678 1678 /* can't use generic zfcp_erp_modify_port_status because 1679 1679 * ZFCP_STATUS_COMMON_OPEN must not be reset for the port */ 1680 - atomic_clear_mask(ZFCP_STATUS_PORT_PHYS_OPEN, &port->status); 1680 + atomic_andnot(ZFCP_STATUS_PORT_PHYS_OPEN, &port->status); 1681 1681 shost_for_each_device(sdev, port->adapter->scsi_host) 1682 1682 if (sdev_to_zfcp(sdev)->port == port) 1683 - atomic_clear_mask(ZFCP_STATUS_COMMON_OPEN, 1683 + atomic_andnot(ZFCP_STATUS_COMMON_OPEN, 1684 1684 &sdev_to_zfcp(sdev)->status); 1685 1685 zfcp_erp_set_port_status(port, ZFCP_STATUS_COMMON_ACCESS_BOXED); 1686 1686 zfcp_erp_port_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED, ··· 1700 1700 /* can't use generic zfcp_erp_modify_port_status because 1701 1701 * ZFCP_STATUS_COMMON_OPEN must not be reset for the port 1702 1702 */ 1703 - atomic_clear_mask(ZFCP_STATUS_PORT_PHYS_OPEN, &port->status); 1703 + atomic_andnot(ZFCP_STATUS_PORT_PHYS_OPEN, &port->status); 1704 1704 shost_for_each_device(sdev, port->adapter->scsi_host) 1705 1705 if (sdev_to_zfcp(sdev)->port == port) 1706 - atomic_clear_mask(ZFCP_STATUS_COMMON_OPEN, 1706 + atomic_andnot(ZFCP_STATUS_COMMON_OPEN, 1707 1707 &sdev_to_zfcp(sdev)->status); 1708 1708 break; 1709 1709 } ··· 1766 1766 1767 1767 zfcp_sdev = sdev_to_zfcp(sdev); 1768 1768 1769 - atomic_clear_mask(ZFCP_STATUS_COMMON_ACCESS_DENIED | 1769 + atomic_andnot(ZFCP_STATUS_COMMON_ACCESS_DENIED | 1770 1770 ZFCP_STATUS_COMMON_ACCESS_BOXED, 1771 1771 &zfcp_sdev->status); 1772 1772 ··· 1822 1822 1823 1823 case FSF_GOOD: 1824 1824 zfcp_sdev->lun_handle = header->lun_handle; 1825 - atomic_set_mask(ZFCP_STATUS_COMMON_OPEN, &zfcp_sdev->status); 1825 + atomic_or(ZFCP_STATUS_COMMON_OPEN, &zfcp_sdev->status); 1826 1826 break; 1827 1827 } 1828 1828 } ··· 1913 1913 } 1914 1914 break; 1915 1915 case FSF_GOOD: 1916 - atomic_clear_mask(ZFCP_STATUS_COMMON_OPEN, &zfcp_sdev->status); 1916 + atomic_andnot(ZFCP_STATUS_COMMON_OPEN, &zfcp_sdev->status); 1917 1917 break; 1918 1918 } 1919 1919 }
+7 -7
drivers/s390/scsi/zfcp_qdio.c
··· 349 349 350 350 /* clear QDIOUP flag, thus do_QDIO is not called during qdio_shutdown */ 351 351 spin_lock_irq(&qdio->req_q_lock); 352 - atomic_clear_mask(ZFCP_STATUS_ADAPTER_QDIOUP, &adapter->status); 352 + atomic_andnot(ZFCP_STATUS_ADAPTER_QDIOUP, &adapter->status); 353 353 spin_unlock_irq(&qdio->req_q_lock); 354 354 355 355 wake_up(&qdio->req_q_wq); ··· 384 384 if (atomic_read(&adapter->status) & ZFCP_STATUS_ADAPTER_QDIOUP) 385 385 return -EIO; 386 386 387 - atomic_clear_mask(ZFCP_STATUS_ADAPTER_SIOSL_ISSUED, 387 + atomic_andnot(ZFCP_STATUS_ADAPTER_SIOSL_ISSUED, 388 388 &qdio->adapter->status); 389 389 390 390 zfcp_qdio_setup_init_data(&init_data, qdio); ··· 396 396 goto failed_qdio; 397 397 398 398 if (ssqd.qdioac2 & CHSC_AC2_DATA_DIV_ENABLED) 399 - atomic_set_mask(ZFCP_STATUS_ADAPTER_DATA_DIV_ENABLED, 399 + atomic_or(ZFCP_STATUS_ADAPTER_DATA_DIV_ENABLED, 400 400 &qdio->adapter->status); 401 401 402 402 if (ssqd.qdioac2 & CHSC_AC2_MULTI_BUFFER_ENABLED) { 403 - atomic_set_mask(ZFCP_STATUS_ADAPTER_MB_ACT, &adapter->status); 403 + atomic_or(ZFCP_STATUS_ADAPTER_MB_ACT, &adapter->status); 404 404 qdio->max_sbale_per_sbal = QDIO_MAX_ELEMENTS_PER_BUFFER; 405 405 } else { 406 - atomic_clear_mask(ZFCP_STATUS_ADAPTER_MB_ACT, &adapter->status); 406 + atomic_andnot(ZFCP_STATUS_ADAPTER_MB_ACT, &adapter->status); 407 407 qdio->max_sbale_per_sbal = QDIO_MAX_ELEMENTS_PER_BUFFER - 1; 408 408 } ··· 427 427 /* set index of first available SBALS / number of available SBALS */ 428 428 qdio->req_q_idx = 0; 429 429 atomic_set(&qdio->req_q_free, QDIO_MAX_BUFFERS_PER_Q); 430 - atomic_set_mask(ZFCP_STATUS_ADAPTER_QDIOUP, &qdio->adapter->status); 430 + atomic_or(ZFCP_STATUS_ADAPTER_QDIOUP, &qdio->adapter->status); 431 431 432 432 if (adapter->scsi_host) { 433 433 adapter->scsi_host->sg_tablesize = qdio->max_sbale_per_req; ··· 499 499 500 500 rc = ccw_device_siosl(adapter->ccw_device); 501 501 if (!rc) 502 - atomic_or(ZFCP_STATUS_ADAPTER_SIOSL_ISSUED, 503 503 &adapter->status); 504 504 }
+88 -165
include/asm-generic/atomic-long.h
···
 typedef atomic64_t atomic_long_t;
 
 #define ATOMIC_LONG_INIT(i)	ATOMIC64_INIT(i)
+#define ATOMIC_LONG_PFX(x)	atomic64 ## x
 
-static inline long atomic_long_read(atomic_long_t *l)
-{
-	atomic64_t *v = (atomic64_t *)l;
-
-	return (long)atomic64_read(v);
-}
-
-static inline void atomic_long_set(atomic_long_t *l, long i)
-{
-	atomic64_t *v = (atomic64_t *)l;
-
-	atomic64_set(v, i);
-}
-
-static inline void atomic_long_inc(atomic_long_t *l)
-{
-	atomic64_t *v = (atomic64_t *)l;
-
-	atomic64_inc(v);
-}
-
-static inline void atomic_long_dec(atomic_long_t *l)
-{
-	atomic64_t *v = (atomic64_t *)l;
-
-	atomic64_dec(v);
-}
-
-static inline void atomic_long_add(long i, atomic_long_t *l)
-{
-	atomic64_t *v = (atomic64_t *)l;
-
-	atomic64_add(i, v);
-}
-
-static inline void atomic_long_sub(long i, atomic_long_t *l)
-{
-	atomic64_t *v = (atomic64_t *)l;
-
-	atomic64_sub(i, v);
-}
-
-static inline int atomic_long_sub_and_test(long i, atomic_long_t *l)
-{
-	atomic64_t *v = (atomic64_t *)l;
-
-	return atomic64_sub_and_test(i, v);
-}
-
-static inline int atomic_long_dec_and_test(atomic_long_t *l)
-{
-	atomic64_t *v = (atomic64_t *)l;
-
-	return atomic64_dec_and_test(v);
-}
-
-static inline int atomic_long_inc_and_test(atomic_long_t *l)
-{
-	atomic64_t *v = (atomic64_t *)l;
-
-	return atomic64_inc_and_test(v);
-}
-
-static inline int atomic_long_add_negative(long i, atomic_long_t *l)
-{
-	atomic64_t *v = (atomic64_t *)l;
-
-	return atomic64_add_negative(i, v);
-}
-
-static inline long atomic_long_add_return(long i, atomic_long_t *l)
-{
-	atomic64_t *v = (atomic64_t *)l;
-
-	return (long)atomic64_add_return(i, v);
-}
-
-static inline long atomic_long_sub_return(long i, atomic_long_t *l)
-{
-	atomic64_t *v = (atomic64_t *)l;
-
-	return (long)atomic64_sub_return(i, v);
-}
-
-static inline long atomic_long_inc_return(atomic_long_t *l)
-{
-	atomic64_t *v = (atomic64_t *)l;
-
-	return (long)atomic64_inc_return(v);
-}
-
-static inline long atomic_long_dec_return(atomic_long_t *l)
-{
-	atomic64_t *v = (atomic64_t *)l;
-
-	return (long)atomic64_dec_return(v);
-}
-
-static inline long atomic_long_add_unless(atomic_long_t *l, long a, long u)
-{
-	atomic64_t *v = (atomic64_t *)l;
-
-	return (long)atomic64_add_unless(v, a, u);
-}
-
-#define atomic_long_inc_not_zero(l)	atomic64_inc_not_zero((atomic64_t *)(l))
-
-#define atomic_long_cmpxchg(l, old, new) \
-	(atomic64_cmpxchg((atomic64_t *)(l), (old), (new)))
-#define atomic_long_xchg(v, new) \
-	(atomic64_xchg((atomic64_t *)(v), (new)))
-
-#else  /*  BITS_PER_LONG == 64  */
+#else
 
 typedef atomic_t atomic_long_t;
 
 #define ATOMIC_LONG_INIT(i)	ATOMIC_INIT(i)
-static inline long atomic_long_read(atomic_long_t *l)
-{
-	atomic_t *v = (atomic_t *)l;
+#define ATOMIC_LONG_PFX(x)	atomic ## x
 
-	return (long)atomic_read(v);
+#endif
+
+#define ATOMIC_LONG_READ_OP(mo)						\
+static inline long atomic_long_read##mo(atomic_long_t *l)		\
+{									\
+	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;		\
+									\
+	return (long)ATOMIC_LONG_PFX(_read##mo)(v);			\
 }
+ATOMIC_LONG_READ_OP()
+ATOMIC_LONG_READ_OP(_acquire)
 
-static inline void atomic_long_set(atomic_long_t *l, long i)
-{
-	atomic_t *v = (atomic_t *)l;
+#undef ATOMIC_LONG_READ_OP
 
-	atomic_set(v, i);
+#define ATOMIC_LONG_SET_OP(mo)						\
+static inline void atomic_long_set##mo(atomic_long_t *l, long i)	\
+{									\
+	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;		\
+									\
+	ATOMIC_LONG_PFX(_set##mo)(v, i);				\
 }
+ATOMIC_LONG_SET_OP()
+ATOMIC_LONG_SET_OP(_release)
+
+#undef ATOMIC_LONG_SET_OP
+
+#define ATOMIC_LONG_ADD_SUB_OP(op, mo)					\
+static inline long							\
+atomic_long_##op##_return##mo(long i, atomic_long_t *l)			\
+{									\
+	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;		\
+									\
+	return (long)ATOMIC_LONG_PFX(_##op##_return##mo)(i, v);		\
+}
+ATOMIC_LONG_ADD_SUB_OP(add,)
+ATOMIC_LONG_ADD_SUB_OP(add, _relaxed)
+ATOMIC_LONG_ADD_SUB_OP(add, _acquire)
+ATOMIC_LONG_ADD_SUB_OP(add, _release)
+ATOMIC_LONG_ADD_SUB_OP(sub,)
+ATOMIC_LONG_ADD_SUB_OP(sub, _relaxed)
+ATOMIC_LONG_ADD_SUB_OP(sub, _acquire)
+ATOMIC_LONG_ADD_SUB_OP(sub, _release)
+
+#undef ATOMIC_LONG_ADD_SUB_OP
+
+#define atomic_long_cmpxchg_relaxed(l, old, new) \
+	(ATOMIC_LONG_PFX(_cmpxchg_relaxed)((ATOMIC_LONG_PFX(_t) *)(l), \
+					   (old), (new)))
+#define atomic_long_cmpxchg_acquire(l, old, new) \
+	(ATOMIC_LONG_PFX(_cmpxchg_acquire)((ATOMIC_LONG_PFX(_t) *)(l), \
+					   (old), (new)))
+#define atomic_long_cmpxchg_release(l, old, new) \
+	(ATOMIC_LONG_PFX(_cmpxchg_release)((ATOMIC_LONG_PFX(_t) *)(l), \
+					   (old), (new)))
+#define atomic_long_cmpxchg(l, old, new) \
+	(ATOMIC_LONG_PFX(_cmpxchg)((ATOMIC_LONG_PFX(_t) *)(l), (old), (new)))
+
+#define atomic_long_xchg_relaxed(v, new) \
+	(ATOMIC_LONG_PFX(_xchg_relaxed)((ATOMIC_LONG_PFX(_t) *)(v), (new)))
+#define atomic_long_xchg_acquire(v, new) \
+	(ATOMIC_LONG_PFX(_xchg_acquire)((ATOMIC_LONG_PFX(_t) *)(v), (new)))
+#define atomic_long_xchg_release(v, new) \
+	(ATOMIC_LONG_PFX(_xchg_release)((ATOMIC_LONG_PFX(_t) *)(v), (new)))
+#define atomic_long_xchg(v, new) \
+	(ATOMIC_LONG_PFX(_xchg)((ATOMIC_LONG_PFX(_t) *)(v), (new)))
 
 static inline void atomic_long_inc(atomic_long_t *l)
 {
-	atomic_t *v = (atomic_t *)l;
+	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
 
-	atomic_inc(v);
+	ATOMIC_LONG_PFX(_inc)(v);
 }
 
 static inline void atomic_long_dec(atomic_long_t *l)
 {
-	atomic_t *v = (atomic_t *)l;
+	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
 
-	atomic_dec(v);
+	ATOMIC_LONG_PFX(_dec)(v);
 }
 
 static inline void atomic_long_add(long i, atomic_long_t *l)
 {
-	atomic_t *v = (atomic_t *)l;
+	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
 
-	atomic_add(i, v);
+	ATOMIC_LONG_PFX(_add)(i, v);
 }
 
 static inline void atomic_long_sub(long i, atomic_long_t *l)
 {
-	atomic_t *v = (atomic_t *)l;
+	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
 
-	atomic_sub(i, v);
+	ATOMIC_LONG_PFX(_sub)(i, v);
 }
 
 static inline int atomic_long_sub_and_test(long i, atomic_long_t *l)
 {
-	atomic_t *v = (atomic_t *)l;
+	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
 
-	return atomic_sub_and_test(i, v);
+	return ATOMIC_LONG_PFX(_sub_and_test)(i, v);
 }
 
 static inline int atomic_long_dec_and_test(atomic_long_t *l)
 {
-	atomic_t *v = (atomic_t *)l;
+	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
 
-	return atomic_dec_and_test(v);
+	return ATOMIC_LONG_PFX(_dec_and_test)(v);
 }
 
 static inline int atomic_long_inc_and_test(atomic_long_t *l)
 {
-	atomic_t *v = (atomic_t *)l;
+	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
 
-	return atomic_inc_and_test(v);
+	return ATOMIC_LONG_PFX(_inc_and_test)(v);
 }
 
 static inline int atomic_long_add_negative(long i, atomic_long_t *l)
 {
-	atomic_t *v = (atomic_t *)l;
+	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
 
-	return atomic_add_negative(i, v);
-}
-
-static inline long atomic_long_add_return(long i, atomic_long_t *l)
-{
-	atomic_t *v = (atomic_t *)l;
-
-	return (long)atomic_add_return(i, v);
-}
-
-static inline long atomic_long_sub_return(long i, atomic_long_t *l)
-{
-	atomic_t *v = (atomic_t *)l;
-
-	return (long)atomic_sub_return(i, v);
+	return ATOMIC_LONG_PFX(_add_negative)(i, v);
 }
 
 static inline long atomic_long_inc_return(atomic_long_t *l)
 {
-	atomic_t *v = (atomic_t *)l;
+	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
 
-	return (long)atomic_inc_return(v);
+	return (long)ATOMIC_LONG_PFX(_inc_return)(v);
 }
 
 static inline long atomic_long_dec_return(atomic_long_t *l)
 {
-	atomic_t *v = (atomic_t *)l;
+	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
 
-	return (long)atomic_dec_return(v);
+	return (long)ATOMIC_LONG_PFX(_dec_return)(v);
 }
 
 static inline long atomic_long_add_unless(atomic_long_t *l, long a, long u)
 {
-	atomic_t *v = (atomic_t *)l;
+	ATOMIC_LONG_PFX(_t) *v = (ATOMIC_LONG_PFX(_t) *)l;
 
-	return (long)atomic_add_unless(v, a, u);
+	return (long)ATOMIC_LONG_PFX(_add_unless)(v, a, u);
 }
 
-#define atomic_long_inc_not_zero(l)	atomic_inc_not_zero((atomic_t *)(l))
-
-#define atomic_long_cmpxchg(l, old, new) \
-	(atomic_cmpxchg((atomic_t *)(l), (old), (new)))
-#define atomic_long_xchg(v, new) \
-	(atomic_xchg((atomic_t *)(v), (new)))
-
-#endif  /*  BITS_PER_LONG == 64  */
+#define atomic_long_inc_not_zero(l) \
+	ATOMIC_LONG_PFX(_inc_not_zero)((ATOMIC_LONG_PFX(_t) *)(l))
 
 #endif /* _ASM_GENERIC_ATOMIC_LONG_H */
+6 -5
include/asm-generic/atomic.h
···
 ATOMIC_OP_RETURN(sub, -)
 #endif
 
-#ifndef atomic_clear_mask
+#ifndef atomic_and
 ATOMIC_OP(and, &)
-#define atomic_clear_mask(i, v) atomic_and(~(i), (v))
 #endif
 
-#ifndef atomic_set_mask
-#define CONFIG_ARCH_HAS_ATOMIC_OR
+#ifndef atomic_or
 ATOMIC_OP(or, |)
-#define atomic_set_mask(i, v)	atomic_or((i), (v))
+#endif
+
+#ifndef atomic_xor
+ATOMIC_OP(xor, ^)
 #endif
 
 #undef ATOMIC_OP_RETURN
+4
include/asm-generic/atomic64.h
···
 ATOMIC64_OPS(add)
 ATOMIC64_OPS(sub)
 
+ATOMIC64_OP(and)
+ATOMIC64_OP(or)
+ATOMIC64_OP(xor)
+
 #undef ATOMIC64_OPS
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
+2 -2
include/asm-generic/barrier.h
···
 do {									\
 	compiletime_assert_atomic_type(*p);				\
 	smp_mb();							\
-	ACCESS_ONCE(*p) = (v);						\
+	WRITE_ONCE(*p, v);						\
 } while (0)
 
 #define smp_load_acquire(p)						\
 ({									\
-	typeof(*p) ___p1 = ACCESS_ONCE(*p);				\
+	typeof(*p) ___p1 = READ_ONCE(*p);				\
 	compiletime_assert_atomic_type(*p);				\
 	smp_mb();							\
 	___p1;								\
+35 -43
include/asm-generic/qrwlock.h
···
 /*
  * External function declarations
  */
-extern void queue_read_lock_slowpath(struct qrwlock *lock);
-extern void queue_write_lock_slowpath(struct qrwlock *lock);
+extern void queued_read_lock_slowpath(struct qrwlock *lock, u32 cnts);
+extern void queued_write_lock_slowpath(struct qrwlock *lock);
 
 /**
- * queue_read_can_lock - would read_trylock() succeed?
+ * queued_read_can_lock - would read_trylock() succeed?
  * @lock: Pointer to queue rwlock structure
  */
-static inline int queue_read_can_lock(struct qrwlock *lock)
+static inline int queued_read_can_lock(struct qrwlock *lock)
 {
 	return !(atomic_read(&lock->cnts) & _QW_WMASK);
 }
 
 /**
- * queue_write_can_lock - would write_trylock() succeed?
+ * queued_write_can_lock - would write_trylock() succeed?
  * @lock: Pointer to queue rwlock structure
  */
-static inline int queue_write_can_lock(struct qrwlock *lock)
+static inline int queued_write_can_lock(struct qrwlock *lock)
 {
 	return !atomic_read(&lock->cnts);
 }
 
 /**
- * queue_read_trylock - try to acquire read lock of a queue rwlock
+ * queued_read_trylock - try to acquire read lock of a queue rwlock
  * @lock : Pointer to queue rwlock structure
  * Return: 1 if lock acquired, 0 if failed
  */
-static inline int queue_read_trylock(struct qrwlock *lock)
+static inline int queued_read_trylock(struct qrwlock *lock)
 {
 	u32 cnts;
 
 	cnts = atomic_read(&lock->cnts);
 	if (likely(!(cnts & _QW_WMASK))) {
-		cnts = (u32)atomic_add_return(_QR_BIAS, &lock->cnts);
+		cnts = (u32)atomic_add_return_acquire(_QR_BIAS, &lock->cnts);
 		if (likely(!(cnts & _QW_WMASK)))
 			return 1;
 		atomic_sub(_QR_BIAS, &lock->cnts);
···
 }
 
 /**
- * queue_write_trylock - try to acquire write lock of a queue rwlock
+ * queued_write_trylock - try to acquire write lock of a queue rwlock
  * @lock : Pointer to queue rwlock structure
  * Return: 1 if lock acquired, 0 if failed
  */
-static inline int queue_write_trylock(struct qrwlock *lock)
+static inline int queued_write_trylock(struct qrwlock *lock)
 {
 	u32 cnts;
 
···
 	if (unlikely(cnts))
 		return 0;
 
-	return likely(atomic_cmpxchg(&lock->cnts,
-				     cnts, cnts | _QW_LOCKED) == cnts);
+	return likely(atomic_cmpxchg_acquire(&lock->cnts,
+					     cnts, cnts | _QW_LOCKED) == cnts);
 }
 /**
- * queue_read_lock - acquire read lock of a queue rwlock
+ * queued_read_lock - acquire read lock of a queue rwlock
  * @lock: Pointer to queue rwlock structure
  */
-static inline void queue_read_lock(struct qrwlock *lock)
+static inline void queued_read_lock(struct qrwlock *lock)
 {
 	u32 cnts;
 
-	cnts = atomic_add_return(_QR_BIAS, &lock->cnts);
+	cnts = atomic_add_return_acquire(_QR_BIAS, &lock->cnts);
 	if (likely(!(cnts & _QW_WMASK)))
 		return;
 
 	/* The slowpath will decrement the reader count, if necessary. */
-	queue_read_lock_slowpath(lock);
+	queued_read_lock_slowpath(lock, cnts);
 }
 
 /**
- * queue_write_lock - acquire write lock of a queue rwlock
+ * queued_write_lock - acquire write lock of a queue rwlock
  * @lock : Pointer to queue rwlock structure
  */
-static inline void queue_write_lock(struct qrwlock *lock)
+static inline void queued_write_lock(struct qrwlock *lock)
 {
 	/* Optimize for the unfair lock case where the fair flag is 0. */
-	if (atomic_cmpxchg(&lock->cnts, 0, _QW_LOCKED) == 0)
+	if (atomic_cmpxchg_acquire(&lock->cnts, 0, _QW_LOCKED) == 0)
 		return;
 
-	queue_write_lock_slowpath(lock);
+	queued_write_lock_slowpath(lock);
 }
 
 /**
- * queue_read_unlock - release read lock of a queue rwlock
+ * queued_read_unlock - release read lock of a queue rwlock
  * @lock : Pointer to queue rwlock structure
  */
-static inline void queue_read_unlock(struct qrwlock *lock)
+static inline void queued_read_unlock(struct qrwlock *lock)
 {
 	/*
 	 * Atomically decrement the reader count
 	 */
-	smp_mb__before_atomic();
-	atomic_sub(_QR_BIAS, &lock->cnts);
+	(void)atomic_sub_return_release(_QR_BIAS, &lock->cnts);
 }
 
-#ifndef queue_write_unlock
 /**
- * queue_write_unlock - release write lock of a queue rwlock
+ * queued_write_unlock - release write lock of a queue rwlock
  * @lock : Pointer to queue rwlock structure
  */
-static inline void queue_write_unlock(struct qrwlock *lock)
+static inline void queued_write_unlock(struct qrwlock *lock)
 {
-	/*
-	 * If the writer field is atomic, it can be cleared directly.
-	 * Otherwise, an atomic subtraction will be used to clear it.
-	 */
-	smp_mb__before_atomic();
-	atomic_sub(_QW_LOCKED, &lock->cnts);
+	smp_store_release((u8 *)&lock->cnts, 0);
 }
-#endif
 
 /*
  * Remapping rwlock architecture specific functions to the corresponding
  * queue rwlock functions.
  */
-#define arch_read_can_lock(l)	queue_read_can_lock(l)
-#define arch_write_can_lock(l)	queue_write_can_lock(l)
-#define arch_read_lock(l)	queue_read_lock(l)
-#define arch_write_lock(l)	queue_write_lock(l)
-#define arch_read_trylock(l)	queue_read_trylock(l)
-#define arch_write_trylock(l)	queue_write_trylock(l)
-#define arch_read_unlock(l)	queue_read_unlock(l)
-#define arch_write_unlock(l)	queue_write_unlock(l)
+#define arch_read_can_lock(l)	queued_read_can_lock(l)
+#define arch_write_can_lock(l)	queued_write_can_lock(l)
+#define arch_read_lock(l)	queued_read_lock(l)
+#define arch_write_lock(l)	queued_write_lock(l)
+#define arch_read_trylock(l)	queued_read_trylock(l)
+#define arch_write_trylock(l)	queued_write_trylock(l)
+#define arch_read_unlock(l)	queued_read_unlock(l)
+#define arch_write_unlock(l)	queued_write_unlock(l)
 
 #endif /* __ASM_GENERIC_QRWLOCK_H */
+348 -13
include/linux/atomic.h
···
 #ifndef _LINUX_ATOMIC_H
 #define _LINUX_ATOMIC_H
 #include <asm/atomic.h>
+#include <asm/barrier.h>
+
+/*
+ * Relaxed variants of xchg, cmpxchg and some atomic operations.
+ *
+ * We support four variants:
+ *
+ * - Fully ordered: The default implementation, no suffix required.
+ * - Acquire: Provides ACQUIRE semantics, _acquire suffix.
+ * - Release: Provides RELEASE semantics, _release suffix.
+ * - Relaxed: No ordering guarantees, _relaxed suffix.
+ *
+ * For compound atomics performing both a load and a store, ACQUIRE
+ * semantics apply only to the load and RELEASE semantics only to the
+ * store portion of the operation. Note that a failed cmpxchg_acquire
+ * does -not- imply any memory ordering constraints.
+ *
+ * See Documentation/memory-barriers.txt for ACQUIRE/RELEASE definitions.
+ */
+
+#ifndef atomic_read_acquire
+#define  atomic_read_acquire(v)		smp_load_acquire(&(v)->counter)
+#endif
+
+#ifndef atomic_set_release
+#define  atomic_set_release(v, i)	smp_store_release(&(v)->counter, (i))
+#endif
+
+/*
+ * The idea here is to build acquire/release variants by adding explicit
+ * barriers on top of the relaxed variant. In the case where the relaxed
+ * variant is already fully ordered, no additional barriers are needed.
+ */
+#define __atomic_op_acquire(op, args...)				\
+({									\
+	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
+	smp_mb__after_atomic();						\
+	__ret;								\
+})
+
+#define __atomic_op_release(op, args...)				\
+({									\
+	smp_mb__before_atomic();					\
+	op##_relaxed(args);						\
+})
+
+#define __atomic_op_fence(op, args...)					\
+({									\
+	typeof(op##_relaxed(args)) __ret;				\
+	smp_mb__before_atomic();					\
+	__ret = op##_relaxed(args);					\
+	smp_mb__after_atomic();						\
+	__ret;								\
+})
+
+/* atomic_add_return_relaxed */
+#ifndef atomic_add_return_relaxed
+#define  atomic_add_return_relaxed	atomic_add_return
+#define  atomic_add_return_acquire	atomic_add_return
+#define  atomic_add_return_release	atomic_add_return
+
+#else /* atomic_add_return_relaxed */
+
+#ifndef atomic_add_return_acquire
+#define  atomic_add_return_acquire(...)					\
+	__atomic_op_acquire(atomic_add_return, __VA_ARGS__)
+#endif
+
+#ifndef atomic_add_return_release
+#define  atomic_add_return_release(...)					\
+	__atomic_op_release(atomic_add_return, __VA_ARGS__)
+#endif
+
+#ifndef atomic_add_return
+#define  atomic_add_return(...)						\
+	__atomic_op_fence(atomic_add_return, __VA_ARGS__)
+#endif
+#endif /* atomic_add_return_relaxed */
+
+/* atomic_sub_return_relaxed */
+#ifndef atomic_sub_return_relaxed
+#define  atomic_sub_return_relaxed	atomic_sub_return
+#define  atomic_sub_return_acquire	atomic_sub_return
+#define  atomic_sub_return_release	atomic_sub_return
+
+#else /* atomic_sub_return_relaxed */
+
+#ifndef atomic_sub_return_acquire
+#define  atomic_sub_return_acquire(...)					\
+	__atomic_op_acquire(atomic_sub_return, __VA_ARGS__)
+#endif
+
+#ifndef atomic_sub_return_release
+#define  atomic_sub_return_release(...)					\
+	__atomic_op_release(atomic_sub_return, __VA_ARGS__)
+#endif
+
+#ifndef atomic_sub_return
+#define  atomic_sub_return(...)						\
+	__atomic_op_fence(atomic_sub_return, __VA_ARGS__)
+#endif
+#endif /* atomic_sub_return_relaxed */
+
+/* atomic_xchg_relaxed */
+#ifndef atomic_xchg_relaxed
+#define  atomic_xchg_relaxed		atomic_xchg
+#define  atomic_xchg_acquire		atomic_xchg
+#define  atomic_xchg_release		atomic_xchg
+
+#else /* atomic_xchg_relaxed */
+
+#ifndef atomic_xchg_acquire
+#define  atomic_xchg_acquire(...)					\
+	__atomic_op_acquire(atomic_xchg, __VA_ARGS__)
+#endif
+
+#ifndef atomic_xchg_release
+#define  atomic_xchg_release(...)					\
+	__atomic_op_release(atomic_xchg, __VA_ARGS__)
+#endif
+
+#ifndef atomic_xchg
+#define  atomic_xchg(...)						\
+	__atomic_op_fence(atomic_xchg, __VA_ARGS__)
+#endif
+#endif /* atomic_xchg_relaxed */
+
+/* atomic_cmpxchg_relaxed */
+#ifndef atomic_cmpxchg_relaxed
+#define  atomic_cmpxchg_relaxed		atomic_cmpxchg
+#define  atomic_cmpxchg_acquire		atomic_cmpxchg
+#define  atomic_cmpxchg_release		atomic_cmpxchg
+
+#else /* atomic_cmpxchg_relaxed */
+
+#ifndef atomic_cmpxchg_acquire
+#define  atomic_cmpxchg_acquire(...)					\
+	__atomic_op_acquire(atomic_cmpxchg, __VA_ARGS__)
+#endif
+
+#ifndef atomic_cmpxchg_release
+#define  atomic_cmpxchg_release(...)					\
+	__atomic_op_release(atomic_cmpxchg, __VA_ARGS__)
+#endif
+
+#ifndef atomic_cmpxchg
+#define  atomic_cmpxchg(...)						\
+	__atomic_op_fence(atomic_cmpxchg, __VA_ARGS__)
+#endif
+#endif /* atomic_cmpxchg_relaxed */
+
+#ifndef atomic64_read_acquire
+#define  atomic64_read_acquire(v)	smp_load_acquire(&(v)->counter)
+#endif
+
+#ifndef atomic64_set_release
+#define  atomic64_set_release(v, i)	smp_store_release(&(v)->counter, (i))
+#endif
+
+/* atomic64_add_return_relaxed */
+#ifndef atomic64_add_return_relaxed
+#define  atomic64_add_return_relaxed	atomic64_add_return
+#define  atomic64_add_return_acquire	atomic64_add_return
+#define  atomic64_add_return_release	atomic64_add_return
+
+#else /* atomic64_add_return_relaxed */
+
+#ifndef atomic64_add_return_acquire
+#define  atomic64_add_return_acquire(...)				\
+	__atomic_op_acquire(atomic64_add_return, __VA_ARGS__)
+#endif
+
+#ifndef atomic64_add_return_release
+#define  atomic64_add_return_release(...)				\
+	__atomic_op_release(atomic64_add_return, __VA_ARGS__)
+#endif
+
+#ifndef atomic64_add_return
+#define  atomic64_add_return(...)					\
+	__atomic_op_fence(atomic64_add_return, __VA_ARGS__)
+#endif
+#endif /* atomic64_add_return_relaxed */
+
+/* atomic64_sub_return_relaxed */
+#ifndef atomic64_sub_return_relaxed
+#define  atomic64_sub_return_relaxed	atomic64_sub_return
+#define  atomic64_sub_return_acquire	atomic64_sub_return
+#define  atomic64_sub_return_release	atomic64_sub_return
+
+#else /* atomic64_sub_return_relaxed */
+
+#ifndef atomic64_sub_return_acquire
+#define  atomic64_sub_return_acquire(...)				\
+	__atomic_op_acquire(atomic64_sub_return, __VA_ARGS__)
+#endif
+
+#ifndef atomic64_sub_return_release
+#define  atomic64_sub_return_release(...)				\
+	__atomic_op_release(atomic64_sub_return, __VA_ARGS__)
+#endif
+
+#ifndef atomic64_sub_return
+#define  atomic64_sub_return(...)					\
+	__atomic_op_fence(atomic64_sub_return, __VA_ARGS__)
+#endif
+#endif /* atomic64_sub_return_relaxed */
+
+/* atomic64_xchg_relaxed */
+#ifndef atomic64_xchg_relaxed
+#define  atomic64_xchg_relaxed		atomic64_xchg
+#define  atomic64_xchg_acquire		atomic64_xchg
+#define  atomic64_xchg_release		atomic64_xchg
+
+#else /* atomic64_xchg_relaxed */
+
+#ifndef atomic64_xchg_acquire
+#define  atomic64_xchg_acquire(...)					\
+	__atomic_op_acquire(atomic64_xchg, __VA_ARGS__)
+#endif
+
+#ifndef atomic64_xchg_release
+#define  atomic64_xchg_release(...)					\
+	__atomic_op_release(atomic64_xchg, __VA_ARGS__)
+#endif
+
+#ifndef atomic64_xchg
+#define  atomic64_xchg(...)						\
+	__atomic_op_fence(atomic64_xchg, __VA_ARGS__)
+#endif
+#endif /* atomic64_xchg_relaxed */
+
+/* atomic64_cmpxchg_relaxed */
+#ifndef atomic64_cmpxchg_relaxed
+#define  atomic64_cmpxchg_relaxed	atomic64_cmpxchg
+#define  atomic64_cmpxchg_acquire	atomic64_cmpxchg
+#define  atomic64_cmpxchg_release	atomic64_cmpxchg
+
+#else /* atomic64_cmpxchg_relaxed */
+
+#ifndef atomic64_cmpxchg_acquire
+#define  atomic64_cmpxchg_acquire(...)					\
+	__atomic_op_acquire(atomic64_cmpxchg, __VA_ARGS__)
+#endif
+
+#ifndef atomic64_cmpxchg_release
+#define  atomic64_cmpxchg_release(...)					\
+	__atomic_op_release(atomic64_cmpxchg, __VA_ARGS__)
+#endif
+
+#ifndef atomic64_cmpxchg
+#define  atomic64_cmpxchg(...)						\
+	__atomic_op_fence(atomic64_cmpxchg, __VA_ARGS__)
+#endif
+#endif /* atomic64_cmpxchg_relaxed */
+
+/* cmpxchg_relaxed */
+#ifndef cmpxchg_relaxed
+#define  cmpxchg_relaxed		cmpxchg
+#define  cmpxchg_acquire		cmpxchg
+#define  cmpxchg_release		cmpxchg
+
+#else /* cmpxchg_relaxed */
+
+#ifndef cmpxchg_acquire
+#define  cmpxchg_acquire(...)						\
+	__atomic_op_acquire(cmpxchg, __VA_ARGS__)
+#endif
+
+#ifndef cmpxchg_release
+#define  cmpxchg_release(...)						\
+	__atomic_op_release(cmpxchg, __VA_ARGS__)
+#endif
+
+#ifndef cmpxchg
+#define  cmpxchg(...)							\
+	__atomic_op_fence(cmpxchg, __VA_ARGS__)
+#endif
+#endif /* cmpxchg_relaxed */
+
+/* cmpxchg64_relaxed */
+#ifndef cmpxchg64_relaxed
+#define  cmpxchg64_relaxed		cmpxchg64
+#define  cmpxchg64_acquire		cmpxchg64
+#define  cmpxchg64_release		cmpxchg64
+
+#else /* cmpxchg64_relaxed */
+
+#ifndef cmpxchg64_acquire
+#define  cmpxchg64_acquire(...)						\
+	__atomic_op_acquire(cmpxchg64, __VA_ARGS__)
+#endif
+
+#ifndef cmpxchg64_release
+#define  cmpxchg64_release(...)						\
+	__atomic_op_release(cmpxchg64, __VA_ARGS__)
+#endif
+
+#ifndef cmpxchg64
+#define  cmpxchg64(...)							\
+	__atomic_op_fence(cmpxchg64, __VA_ARGS__)
+#endif
+#endif /* cmpxchg64_relaxed */
+
+/* xchg_relaxed */
+#ifndef xchg_relaxed
+#define  xchg_relaxed			xchg
+#define  xchg_acquire			xchg
+#define  xchg_release			xchg
+
+#else /* xchg_relaxed */
+
+#ifndef xchg_acquire
+#define  xchg_acquire(...)		__atomic_op_acquire(xchg, __VA_ARGS__)
+#endif
+
+#ifndef xchg_release
+#define  xchg_release(...)		__atomic_op_release(xchg, __VA_ARGS__)
+#endif
+
+#ifndef xchg
+#define  xchg(...)			__atomic_op_fence(xchg, __VA_ARGS__)
+#endif
+#endif /* xchg_relaxed */
 
 /**
  * atomic_add_unless - add unless the number is already a given value
···
 #ifndef atomic_inc_not_zero
 #define atomic_inc_not_zero(v)		atomic_add_unless((v), 1, 0)
 #endif
+
+#ifndef atomic_andnot
+static inline void atomic_andnot(int i, atomic_t *v)
+{
+	atomic_and(~i, v);
+}
+#endif
+
+static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_andnot(mask, v);
+}
+
+static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_or(mask, v);
+}
 
 /**
  * atomic_inc_not_zero_hint - increment if not null
···
 }
 #endif
 
-#ifndef CONFIG_ARCH_HAS_ATOMIC_OR
-static inline void atomic_or(int i, atomic_t *v)
-{
-	int old;
-	int new;
-
-	do {
-		old = atomic_read(v);
-		new = old | i;
-	} while (atomic_cmpxchg(v, old, new) != old);
-}
-#endif /* #ifndef CONFIG_ARCH_HAS_ATOMIC_OR */
-
 #include <asm-generic/atomic-long.h>
 #ifdef CONFIG_GENERIC_ATOMIC64
 #include <asm-generic/atomic64.h>
 #endif
+
+#ifndef atomic64_andnot
+static inline void atomic64_andnot(long long i, atomic64_t *v)
+{
+	atomic64_and(~i, v);
+}
+#endif
+
 #endif /* _LINUX_ATOMIC_H */
+6 -1
include/linux/compiler.h
···
 ({ union { typeof(x) __val; char __c[1]; } __u; __read_once_size(&(x), __u.__c, sizeof(x)); __u.__val; })
 
 #define WRITE_ONCE(x, val) \
-	({ union { typeof(x) __val; char __c[1]; } __u = { .__val = (val) }; __write_once_size(&(x), __u.__c, sizeof(x)); __u.__val; })
+({							\
+	union { typeof(x) __val; char __c[1]; } __u =	\
+		{ .__val = (__force typeof(x)) (val) };	\
+	__write_once_size(&(x), __u.__c, sizeof(x));	\
+	__u.__val;					\
+})
 
 /**
  * READ_ONCE_CTRL - Read a value heading a control dependency
+210 -51
include/linux/jump_label.h
···
  * Copyright (C) 2009-2012 Jason Baron <jbaron@redhat.com>
  * Copyright (C) 2011-2012 Peter Zijlstra <pzijlstr@redhat.com>
  *
+ * DEPRECATED API:
+ *
+ * The direct use of 'struct static_key' is now DEPRECATED. In addition
+ * static_key_{true,false}() is also DEPRECATED. IE DO NOT use the following:
+ *
+ * struct static_key false = STATIC_KEY_INIT_FALSE;
+ * struct static_key true = STATIC_KEY_INIT_TRUE;
+ * static_key_true()
+ * static_key_false()
+ *
+ * The updated API replacements are:
+ *
+ * DEFINE_STATIC_KEY_TRUE(key);
+ * DEFINE_STATIC_KEY_FALSE(key);
+ * static_branch_likely()
+ * static_branch_unlikely()
+ *
  * Jump labels provide an interface to generate dynamic branches using
- * self-modifying code. Assuming toolchain and architecture support, the result
- * of a "if (static_key_false(&key))" statement is an unconditional branch (which
- * defaults to false - and the true block is placed out of line).
+ * self-modifying code. Assuming toolchain and architecture support, if we
+ * define a "key" that is initially false via "DEFINE_STATIC_KEY_FALSE(key)",
+ * an "if (static_branch_unlikely(&key))" statement is an unconditional branch
+ * (which defaults to false - and the true block is placed out of line).
+ * Similarly, we can define an initially true key via
+ * "DEFINE_STATIC_KEY_TRUE(key)", and use it in the same
+ * "if (static_branch_unlikely(&key))", in which case we will generate an
+ * unconditional branch to the out-of-line true branch. Keys that are
+ * initially true or false can be used in both static_branch_unlikely()
+ * and static_branch_likely() statements.
  *
- * However at runtime we can change the branch target using
- * static_key_slow_{inc,dec}(). These function as a 'reference' count on the key
- * object, and for as long as there are references all branches referring to
- * that particular key will point to the (out of line) true block.
+ * At runtime we can change the branch target by setting the key
+ * to true via a call to static_branch_enable(), or false using
+ * static_branch_disable(). If the direction of the branch is switched by
+ * these calls then we run-time modify the branch target via a
+ * no-op -> jump or jump -> no-op conversion. For example, for an
+ * initially false key that is used in an "if (static_branch_unlikely(&key))"
+ * statement, setting the key to true requires us to patch in a jump
+ * to the out-of-line true branch.
  *
- * Since this relies on modifying code, the static_key_slow_{inc,dec}() functions
+ * In addition to static_branch_{enable,disable}, we can also reference count
+ * the key or branch direction via static_branch_{inc,dec}. Thus,
+ * static_branch_inc() can be thought of as a 'make more true' and
+ * static_branch_dec() as a 'make more false'. Once a key is managed via
+ * static_branch_{inc,dec}(), use only that interface for it; do not mix it
+ * with static_branch_{enable,disable}() on the same key.
+ *
+ * Since this relies on modifying code, the branch modifying functions
  * must be considered absolute slow paths (machine wide synchronization etc.).
  * OTOH, since the affected branches are unconditional, their runtime overhead
  * will be absolutely minimal, esp. in the default (off) case where the total
···
  * cause significant performance degradation. Struct static_key_deferred and
  * static_key_slow_dec_deferred() provide for this.
  *
- * Lacking toolchain and or architecture support, jump labels fall back to a simple
- * conditional branch.
+ * Lacking toolchain and/or architecture support, static keys fall back to a
+ * simple conditional branch.
  *
- * struct static_key my_key = STATIC_KEY_INIT_TRUE;
- *
- * if (static_key_true(&my_key)) {
- * }
- *
- * will result in the true case being in-line and starts the key with a single
- * reference. Mixing static_key_true() and static_key_false() on the same key is not
- * allowed.
- *
- * Not initializing the key (static data is initialized to 0s anyway) is the
- * same as using STATIC_KEY_INIT_FALSE.
+ * Additional babbling in: Documentation/static-keys.txt
  */
 
 #if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL)
···
 #ifndef __ASSEMBLY__
 
 enum jump_label_type {
-	JUMP_LABEL_DISABLE = 0,
-	JUMP_LABEL_ENABLE,
+	JUMP_LABEL_NOP = 0,
+	JUMP_LABEL_JMP,
 };
 
 struct module;
···
 
 #ifdef HAVE_JUMP_LABEL
 
-#define JUMP_LABEL_TYPE_FALSE_BRANCH	0UL
-#define JUMP_LABEL_TYPE_TRUE_BRANCH	1UL
-#define JUMP_LABEL_TYPE_MASK		1UL
-
-static
-inline struct jump_entry *jump_label_get_entries(struct static_key *key)
-{
-	return (struct jump_entry *)((unsigned long)key->entries
-						& ~JUMP_LABEL_TYPE_MASK);
-}
-
-static inline bool jump_label_get_branch_default(struct static_key *key)
-{
-	if (((unsigned long)key->entries & JUMP_LABEL_TYPE_MASK) ==
-	    JUMP_LABEL_TYPE_TRUE_BRANCH)
-		return true;
-	return false;
-}
+#define JUMP_TYPE_FALSE	0UL
+#define JUMP_TYPE_TRUE	1UL
+#define JUMP_TYPE_MASK	1UL
 
 static __always_inline bool static_key_false(struct static_key *key)
 {
-	return arch_static_branch(key);
+	return arch_static_branch(key, false);
 }
 
 static __always_inline bool static_key_true(struct static_key *key)
 {
-	return !static_key_false(key);
+	return !arch_static_branch(key, true);
 }
 
 extern struct jump_entry __start___jump_table[];
···
 extern void static_key_slow_dec(struct static_key *key);
 extern void jump_label_apply_nops(struct module *mod);
 
-#define STATIC_KEY_INIT_TRUE ((struct static_key)		\
+#define STATIC_KEY_INIT_TRUE					\
 	{ .enabled = ATOMIC_INIT(1),				\
-	  .entries = (void *)JUMP_LABEL_TYPE_TRUE_BRANCH })
-#define STATIC_KEY_INIT_FALSE ((struct static_key)		\
+	  .entries = (void *)JUMP_TYPE_TRUE }
+#define STATIC_KEY_INIT_FALSE					\
 	{ .enabled = ATOMIC_INIT(0),				\
-	  .entries = (void *)JUMP_LABEL_TYPE_FALSE_BRANCH })
+	  .entries = (void *)JUMP_TYPE_FALSE }
 
 #else  /* !HAVE_JUMP_LABEL */
 
···
 	return 0;
 }
 
-#define STATIC_KEY_INIT_TRUE ((struct static_key) \
-		{ .enabled = ATOMIC_INIT(1) })
-#define STATIC_KEY_INIT_FALSE ((struct static_key) \
-		{ .enabled = ATOMIC_INIT(0) })
+#define STATIC_KEY_INIT_TRUE	{ .enabled = ATOMIC_INIT(1) }
+#define STATIC_KEY_INIT_FALSE	{ .enabled = ATOMIC_INIT(0) }
 
 #endif	/* HAVE_JUMP_LABEL */
 
···
 {
 	return static_key_count(key) > 0;
 }
+
+static inline void static_key_enable(struct static_key *key)
+{
+	int count = static_key_count(key);
+
+	WARN_ON_ONCE(count < 0 || count > 1);
+
+	if (!count)
+		static_key_slow_inc(key);
+}
+
+static inline void static_key_disable(struct static_key *key)
+{
+	int count = static_key_count(key);
+
+	WARN_ON_ONCE(count < 0 || count > 1);
+
+	if (count)
+		static_key_slow_dec(key);
+}
+
+/* -------------------------------------------------------------------------- */
+
+/*
+ * Two type wrappers around static_key, such that we can use compile time
+ * type differentiation to emit the right code.
+ *
+ * All the below code is macros in order to play type games.
243 + */ 244 + 245 + struct static_key_true { 246 + struct static_key key; 247 + }; 248 + 249 + struct static_key_false { 250 + struct static_key key; 251 + }; 252 + 253 + #define STATIC_KEY_TRUE_INIT (struct static_key_true) { .key = STATIC_KEY_INIT_TRUE, } 254 + #define STATIC_KEY_FALSE_INIT (struct static_key_false){ .key = STATIC_KEY_INIT_FALSE, } 255 + 256 + #define DEFINE_STATIC_KEY_TRUE(name) \ 257 + struct static_key_true name = STATIC_KEY_TRUE_INIT 258 + 259 + #define DEFINE_STATIC_KEY_FALSE(name) \ 260 + struct static_key_false name = STATIC_KEY_FALSE_INIT 261 + 262 + #ifdef HAVE_JUMP_LABEL 263 + 264 + /* 265 + * Combine the right initial value (type) with the right branch order 266 + * to generate the desired result. 267 + * 268 + * 269 + * type\branch| likely (1) | unlikely (0) 270 + * -----------+-----------------------+------------------ 271 + * | | 272 + * true (1) | ... | ... 273 + * | NOP | JMP L 274 + * | <br-stmts> | 1: ... 275 + * | L: ... | 276 + * | | 277 + * | | L: <br-stmts> 278 + * | | jmp 1b 279 + * | | 280 + * -----------+-----------------------+------------------ 281 + * | | 282 + * false (0) | ... | ... 283 + * | JMP L | NOP 284 + * | <br-stmts> | 1: ... 285 + * | L: ... | 286 + * | | 287 + * | | L: <br-stmts> 288 + * | | jmp 1b 289 + * | | 290 + * -----------+-----------------------+------------------ 291 + * 292 + * The initial value is encoded in the LSB of static_key::entries, 293 + * type: 0 = false, 1 = true. 294 + * 295 + * The branch type is encoded in the LSB of jump_entry::key, 296 + * branch: 0 = unlikely, 1 = likely. 
297 + * 298 + * This gives the following logic table: 299 + * 300 + * enabled type branch instuction 301 + * -----------------------------+----------- 302 + * 0 0 0 | NOP 303 + * 0 0 1 | JMP 304 + * 0 1 0 | NOP 305 + * 0 1 1 | JMP 306 + * 307 + * 1 0 0 | JMP 308 + * 1 0 1 | NOP 309 + * 1 1 0 | JMP 310 + * 1 1 1 | NOP 311 + * 312 + * Which gives the following functions: 313 + * 314 + * dynamic: instruction = enabled ^ branch 315 + * static: instruction = type ^ branch 316 + * 317 + * See jump_label_type() / jump_label_init_type(). 318 + */ 319 + 320 + extern bool ____wrong_branch_error(void); 321 + 322 + #define static_branch_likely(x) \ 323 + ({ \ 324 + bool branch; \ 325 + if (__builtin_types_compatible_p(typeof(*x), struct static_key_true)) \ 326 + branch = !arch_static_branch(&(x)->key, true); \ 327 + else if (__builtin_types_compatible_p(typeof(*x), struct static_key_false)) \ 328 + branch = !arch_static_branch_jump(&(x)->key, true); \ 329 + else \ 330 + branch = ____wrong_branch_error(); \ 331 + branch; \ 332 + }) 333 + 334 + #define static_branch_unlikely(x) \ 335 + ({ \ 336 + bool branch; \ 337 + if (__builtin_types_compatible_p(typeof(*x), struct static_key_true)) \ 338 + branch = arch_static_branch_jump(&(x)->key, false); \ 339 + else if (__builtin_types_compatible_p(typeof(*x), struct static_key_false)) \ 340 + branch = arch_static_branch(&(x)->key, false); \ 341 + else \ 342 + branch = ____wrong_branch_error(); \ 343 + branch; \ 344 + }) 345 + 346 + #else /* !HAVE_JUMP_LABEL */ 347 + 348 + #define static_branch_likely(x) likely(static_key_enabled(&(x)->key)) 349 + #define static_branch_unlikely(x) unlikely(static_key_enabled(&(x)->key)) 350 + 351 + #endif /* HAVE_JUMP_LABEL */ 352 + 353 + /* 354 + * Advanced usage; refcount, branch is enabled when: count != 0 355 + */ 356 + 357 + #define static_branch_inc(x) static_key_slow_inc(&(x)->key) 358 + #define static_branch_dec(x) static_key_slow_dec(&(x)->key) 359 + 360 + /* 361 + * Normal usage; boolean 
enable/disable. 362 + */ 363 + 364 + #define static_branch_enable(x) static_key_enable(&(x)->key) 365 + #define static_branch_disable(x) static_key_disable(&(x)->key) 223 366 224 367 #endif /* _LINUX_JUMP_LABEL_H */ 225 368
+1 -1
include/linux/llist.h
··· 55 55 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 56 56 */ 57 57 58 + #include <linux/atomic.h> 58 59 #include <linux/kernel.h> 59 - #include <asm/cmpxchg.h> 60 60 61 61 struct llist_head { 62 62 struct llist_node *first;
+96 -4
kernel/futex.c
··· 64 64 #include <linux/hugetlb.h> 65 65 #include <linux/freezer.h> 66 66 #include <linux/bootmem.h> 67 + #include <linux/fault-inject.h> 67 68 68 69 #include <asm/futex.h> 69 70 ··· 259 258 260 259 static struct futex_hash_bucket *futex_queues; 261 260 261 + /* 262 + * Fault injections for futexes. 263 + */ 264 + #ifdef CONFIG_FAIL_FUTEX 265 + 266 + static struct { 267 + struct fault_attr attr; 268 + 269 + u32 ignore_private; 270 + } fail_futex = { 271 + .attr = FAULT_ATTR_INITIALIZER, 272 + .ignore_private = 0, 273 + }; 274 + 275 + static int __init setup_fail_futex(char *str) 276 + { 277 + return setup_fault_attr(&fail_futex.attr, str); 278 + } 279 + __setup("fail_futex=", setup_fail_futex); 280 + 281 + static bool should_fail_futex(bool fshared) 282 + { 283 + if (fail_futex.ignore_private && !fshared) 284 + return false; 285 + 286 + return should_fail(&fail_futex.attr, 1); 287 + } 288 + 289 + #ifdef CONFIG_FAULT_INJECTION_DEBUG_FS 290 + 291 + static int __init fail_futex_debugfs(void) 292 + { 293 + umode_t mode = S_IFREG | S_IRUSR | S_IWUSR; 294 + struct dentry *dir; 295 + 296 + dir = fault_create_debugfs_attr("fail_futex", NULL, 297 + &fail_futex.attr); 298 + if (IS_ERR(dir)) 299 + return PTR_ERR(dir); 300 + 301 + if (!debugfs_create_bool("ignore-private", mode, dir, 302 + &fail_futex.ignore_private)) { 303 + debugfs_remove_recursive(dir); 304 + return -ENOMEM; 305 + } 306 + 307 + return 0; 308 + } 309 + 310 + late_initcall(fail_futex_debugfs); 311 + 312 + #endif /* CONFIG_FAULT_INJECTION_DEBUG_FS */ 313 + 314 + #else 315 + static inline bool should_fail_futex(bool fshared) 316 + { 317 + return false; 318 + } 319 + #endif /* CONFIG_FAIL_FUTEX */ 320 + 262 321 static inline void futex_get_mm(union futex_key *key) 263 322 { 264 323 atomic_inc(&key->private.mm->mm_count); ··· 474 413 if (unlikely(!access_ok(rw, uaddr, sizeof(u32)))) 475 414 return -EFAULT; 476 415 416 + if (unlikely(should_fail_futex(fshared))) 417 + return -EFAULT; 418 + 477 419 /* 478 420 * 
PROCESS_PRIVATE futexes are fast. 479 421 * As the mm cannot disappear under us and the 'key' only needs ··· 492 428 } 493 429 494 430 again: 431 + /* Ignore any VERIFY_READ mapping (futex common case) */ 432 + if (unlikely(should_fail_futex(fshared))) 433 + return -EFAULT; 434 + 495 435 err = get_user_pages_fast(address, 1, 1, &page); 496 436 /* 497 437 * If write access is not required (eg. FUTEX_WAIT), try ··· 584 516 * A RO anonymous page will never change and thus doesn't make 585 517 * sense for futex operations. 586 518 */ 587 - if (ro) { 519 + if (unlikely(should_fail_futex(fshared)) || ro) { 588 520 err = -EFAULT; 589 521 goto out; 590 522 } ··· 1042 974 { 1043 975 u32 uninitialized_var(curval); 1044 976 977 + if (unlikely(should_fail_futex(true))) 978 + return -EFAULT; 979 + 1045 980 if (unlikely(cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))) 1046 981 return -EFAULT; 1047 982 ··· 1086 1015 if (get_futex_value_locked(&uval, uaddr)) 1087 1016 return -EFAULT; 1088 1017 1018 + if (unlikely(should_fail_futex(true))) 1019 + return -EFAULT; 1020 + 1089 1021 /* 1090 1022 * Detect deadlocks. 1091 1023 */ 1092 1024 if ((unlikely((uval & FUTEX_TID_MASK) == vpid))) 1025 + return -EDEADLK; 1026 + 1027 + if ((unlikely(should_fail_futex(true)))) 1093 1028 return -EDEADLK; 1094 1029 1095 1030 /* ··· 1231 1154 * owner died bit, because we are the owner. 1232 1155 */ 1233 1156 newval = FUTEX_WAITERS | task_pid_vnr(new_owner); 1157 + 1158 + if (unlikely(should_fail_futex(true))) 1159 + ret = -EFAULT; 1234 1160 1235 1161 if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)) 1236 1162 ret = -EFAULT; ··· 1535 1455 int ret, vpid; 1536 1456 1537 1457 if (get_futex_value_locked(&curval, pifutex)) 1458 + return -EFAULT; 1459 + 1460 + if (unlikely(should_fail_futex(true))) 1538 1461 return -EFAULT; 1539 1462 1540 1463 /* ··· 2351 2268 /* 2352 2269 * Userspace tried a 0 -> TID atomic transition of the futex value 2353 2270 * and failed. 
The kernel side here does the whole locking operation: 2354 - * if there are waiters then it will block, it does PI, etc. (Due to 2355 - * races the kernel might see a 0 value of the futex too.) 2271 + * if there are waiters then it will block as a consequence of relying 2272 + * on rt-mutexes, it does PI, etc. (Due to races the kernel might see 2273 + * a 0 value of the futex too.). 2274 + * 2275 + * Also serves as futex trylock_pi()'ing, and due semantics. 2356 2276 */ 2357 2277 static int futex_lock_pi(u32 __user *uaddr, unsigned int flags, 2358 2278 ktime_t *time, int trylock) ··· 2386 2300 2387 2301 ret = futex_lock_pi_atomic(uaddr, hb, &q.key, &q.pi_state, current, 0); 2388 2302 if (unlikely(ret)) { 2303 + /* 2304 + * Atomic work succeeded and we got the lock, 2305 + * or failed. Either way, we do _not_ block. 2306 + */ 2389 2307 switch (ret) { 2390 2308 case 1: 2391 2309 /* We got the lock. */ ··· 2620 2530 * futex_wait_requeue_pi() - Wait on uaddr and take uaddr2 2621 2531 * @uaddr: the futex we initially wait on (non-pi) 2622 2532 * @flags: futex flags (FLAGS_SHARED, FLAGS_CLOCKRT, etc.), they must be 2623 - * the same type, no requeueing from private to shared, etc. 2533 + * the same type, no requeueing from private to shared, etc. 2624 2534 * @val: the expected value of uaddr 2625 2535 * @abs_time: absolute timeout 2626 2536 * @bitset: 32 bit wakeup bitset set by userspace, defaults to all ··· 3095 3005 if (utime && (cmd == FUTEX_WAIT || cmd == FUTEX_LOCK_PI || 3096 3006 cmd == FUTEX_WAIT_BITSET || 3097 3007 cmd == FUTEX_WAIT_REQUEUE_PI)) { 3008 + if (unlikely(should_fail_futex(!(op & FUTEX_PRIVATE_FLAG)))) 3009 + return -EFAULT; 3098 3010 if (copy_from_user(&ts, utime, sizeof(ts)) != 0) 3099 3011 return -EFAULT; 3100 3012 if (!timespec_valid(&ts))
+109 -49
kernel/jump_label.c
··· 54 54 sort(start, size, sizeof(struct jump_entry), jump_label_cmp, NULL); 55 55 } 56 56 57 - static void jump_label_update(struct static_key *key, int enable); 57 + static void jump_label_update(struct static_key *key); 58 58 59 59 void static_key_slow_inc(struct static_key *key) 60 60 { ··· 63 63 return; 64 64 65 65 jump_label_lock(); 66 - if (atomic_read(&key->enabled) == 0) { 67 - if (!jump_label_get_branch_default(key)) 68 - jump_label_update(key, JUMP_LABEL_ENABLE); 69 - else 70 - jump_label_update(key, JUMP_LABEL_DISABLE); 71 - } 72 - atomic_inc(&key->enabled); 66 + if (atomic_inc_return(&key->enabled) == 1) 67 + jump_label_update(key); 73 68 jump_label_unlock(); 74 69 } 75 70 EXPORT_SYMBOL_GPL(static_key_slow_inc); ··· 82 87 atomic_inc(&key->enabled); 83 88 schedule_delayed_work(work, rate_limit); 84 89 } else { 85 - if (!jump_label_get_branch_default(key)) 86 - jump_label_update(key, JUMP_LABEL_DISABLE); 87 - else 88 - jump_label_update(key, JUMP_LABEL_ENABLE); 90 + jump_label_update(key); 89 91 } 90 92 jump_label_unlock(); 91 93 } ··· 141 149 return 0; 142 150 } 143 151 144 - /* 152 + /* 145 153 * Update code which is definitely not currently executing. 
146 154 * Architectures which need heavyweight synchronization to modify 147 155 * running code can override this to make the non-live update case ··· 150 158 void __weak __init_or_module arch_jump_label_transform_static(struct jump_entry *entry, 151 159 enum jump_label_type type) 152 160 { 153 - arch_jump_label_transform(entry, type); 161 + arch_jump_label_transform(entry, type); 162 + } 163 + 164 + static inline struct jump_entry *static_key_entries(struct static_key *key) 165 + { 166 + return (struct jump_entry *)((unsigned long)key->entries & ~JUMP_TYPE_MASK); 167 + } 168 + 169 + static inline bool static_key_type(struct static_key *key) 170 + { 171 + return (unsigned long)key->entries & JUMP_TYPE_MASK; 172 + } 173 + 174 + static inline struct static_key *jump_entry_key(struct jump_entry *entry) 175 + { 176 + return (struct static_key *)((unsigned long)entry->key & ~1UL); 177 + } 178 + 179 + static bool jump_entry_branch(struct jump_entry *entry) 180 + { 181 + return (unsigned long)entry->key & 1UL; 182 + } 183 + 184 + static enum jump_label_type jump_label_type(struct jump_entry *entry) 185 + { 186 + struct static_key *key = jump_entry_key(entry); 187 + bool enabled = static_key_enabled(key); 188 + bool branch = jump_entry_branch(entry); 189 + 190 + /* See the comment in linux/jump_label.h */ 191 + return enabled ^ branch; 154 192 } 155 193 156 194 static void __jump_label_update(struct static_key *key, 157 195 struct jump_entry *entry, 158 - struct jump_entry *stop, int enable) 196 + struct jump_entry *stop) 159 197 { 160 - for (; (entry < stop) && 161 - (entry->key == (jump_label_t)(unsigned long)key); 162 - entry++) { 198 + for (; (entry < stop) && (jump_entry_key(entry) == key); entry++) { 163 199 /* 164 200 * entry->code set to 0 invalidates module init text sections 165 201 * kernel_text_address() verifies we are not in core kernel 166 202 * init code, see jump_label_invalidate_module_init(). 
167 203 */ 168 204 if (entry->code && kernel_text_address(entry->code)) 169 - arch_jump_label_transform(entry, enable); 205 + arch_jump_label_transform(entry, jump_label_type(entry)); 170 206 } 171 - } 172 - 173 - static enum jump_label_type jump_label_type(struct static_key *key) 174 - { 175 - bool true_branch = jump_label_get_branch_default(key); 176 - bool state = static_key_enabled(key); 177 - 178 - if ((!true_branch && state) || (true_branch && !state)) 179 - return JUMP_LABEL_ENABLE; 180 - 181 - return JUMP_LABEL_DISABLE; 182 207 } 183 208 184 209 void __init jump_label_init(void) ··· 211 202 for (iter = iter_start; iter < iter_stop; iter++) { 212 203 struct static_key *iterk; 213 204 214 - iterk = (struct static_key *)(unsigned long)iter->key; 215 - arch_jump_label_transform_static(iter, jump_label_type(iterk)); 205 + /* rewrite NOPs */ 206 + if (jump_label_type(iter) == JUMP_LABEL_NOP) 207 + arch_jump_label_transform_static(iter, JUMP_LABEL_NOP); 208 + 209 + iterk = jump_entry_key(iter); 216 210 if (iterk == key) 217 211 continue; 218 212 ··· 233 221 } 234 222 235 223 #ifdef CONFIG_MODULES 224 + 225 + static enum jump_label_type jump_label_init_type(struct jump_entry *entry) 226 + { 227 + struct static_key *key = jump_entry_key(entry); 228 + bool type = static_key_type(key); 229 + bool branch = jump_entry_branch(entry); 230 + 231 + /* See the comment in linux/jump_label.h */ 232 + return type ^ branch; 233 + } 236 234 237 235 struct static_key_mod { 238 236 struct static_key_mod *next; ··· 265 243 start, end); 266 244 } 267 245 268 - static void __jump_label_mod_update(struct static_key *key, int enable) 246 + static void __jump_label_mod_update(struct static_key *key) 269 247 { 270 - struct static_key_mod *mod = key->next; 248 + struct static_key_mod *mod; 271 249 272 - while (mod) { 250 + for (mod = key->next; mod; mod = mod->next) { 273 251 struct module *m = mod->mod; 274 252 275 253 __jump_label_update(key, mod->entries, 276 - m->jump_entries + 
m->num_jump_entries, 277 - enable); 278 - mod = mod->next; 254 + m->jump_entries + m->num_jump_entries); 279 255 } 280 256 } 281 257 ··· 296 276 return; 297 277 298 278 for (iter = iter_start; iter < iter_stop; iter++) { 299 - arch_jump_label_transform_static(iter, JUMP_LABEL_DISABLE); 279 + /* Only write NOPs for arch_branch_static(). */ 280 + if (jump_label_init_type(iter) == JUMP_LABEL_NOP) 281 + arch_jump_label_transform_static(iter, JUMP_LABEL_NOP); 300 282 } 301 283 } 302 284 ··· 319 297 for (iter = iter_start; iter < iter_stop; iter++) { 320 298 struct static_key *iterk; 321 299 322 - iterk = (struct static_key *)(unsigned long)iter->key; 300 + iterk = jump_entry_key(iter); 323 301 if (iterk == key) 324 302 continue; 325 303 ··· 340 318 jlm->next = key->next; 341 319 key->next = jlm; 342 320 343 - if (jump_label_type(key) == JUMP_LABEL_ENABLE) 344 - __jump_label_update(key, iter, iter_stop, JUMP_LABEL_ENABLE); 321 + /* Only update if we've changed from our initial state */ 322 + if (jump_label_type(iter) != jump_label_init_type(iter)) 323 + __jump_label_update(key, iter, iter_stop); 345 324 } 346 325 347 326 return 0; ··· 357 334 struct static_key_mod *jlm, **prev; 358 335 359 336 for (iter = iter_start; iter < iter_stop; iter++) { 360 - if (iter->key == (jump_label_t)(unsigned long)key) 337 + if (jump_entry_key(iter) == key) 361 338 continue; 362 339 363 - key = (struct static_key *)(unsigned long)iter->key; 340 + key = jump_entry_key(iter); 364 341 365 342 if (within_module(iter->key, mod)) 366 343 continue; ··· 462 439 return ret; 463 440 } 464 441 465 - static void jump_label_update(struct static_key *key, int enable) 442 + static void jump_label_update(struct static_key *key) 466 443 { 467 444 struct jump_entry *stop = __stop___jump_table; 468 - struct jump_entry *entry = jump_label_get_entries(key); 445 + struct jump_entry *entry = static_key_entries(key); 469 446 #ifdef CONFIG_MODULES 470 447 struct module *mod; 471 448 472 - 
__jump_label_mod_update(key, enable); 449 + __jump_label_mod_update(key); 473 450 474 451 preempt_disable(); 475 452 mod = __module_address((unsigned long)key); ··· 479 456 #endif 480 457 /* if there are no users, entry can be NULL */ 481 458 if (entry) 482 - __jump_label_update(key, entry, stop, enable); 459 + __jump_label_update(key, entry, stop); 483 460 } 484 461 485 - #endif 462 + #ifdef CONFIG_STATIC_KEYS_SELFTEST 463 + static DEFINE_STATIC_KEY_TRUE(sk_true); 464 + static DEFINE_STATIC_KEY_FALSE(sk_false); 465 + 466 + static __init int jump_label_test(void) 467 + { 468 + int i; 469 + 470 + for (i = 0; i < 2; i++) { 471 + WARN_ON(static_key_enabled(&sk_true.key) != true); 472 + WARN_ON(static_key_enabled(&sk_false.key) != false); 473 + 474 + WARN_ON(!static_branch_likely(&sk_true)); 475 + WARN_ON(!static_branch_unlikely(&sk_true)); 476 + WARN_ON(static_branch_likely(&sk_false)); 477 + WARN_ON(static_branch_unlikely(&sk_false)); 478 + 479 + static_branch_disable(&sk_true); 480 + static_branch_enable(&sk_false); 481 + 482 + WARN_ON(static_key_enabled(&sk_true.key) == true); 483 + WARN_ON(static_key_enabled(&sk_false.key) == false); 484 + 485 + WARN_ON(static_branch_likely(&sk_true)); 486 + WARN_ON(static_branch_unlikely(&sk_true)); 487 + WARN_ON(!static_branch_likely(&sk_false)); 488 + WARN_ON(!static_branch_unlikely(&sk_false)); 489 + 490 + static_branch_enable(&sk_true); 491 + static_branch_disable(&sk_false); 492 + } 493 + 494 + return 0; 495 + } 496 + late_initcall(jump_label_test); 497 + #endif /* STATIC_KEYS_SELFTEST */ 498 + 499 + #endif /* HAVE_JUMP_LABEL */
-1
kernel/locking/Makefile
··· 20 20 obj-$(CONFIG_QUEUED_SPINLOCKS) += qspinlock.o 21 21 obj-$(CONFIG_RT_MUTEXES) += rtmutex.o 22 22 obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o 23 - obj-$(CONFIG_RT_MUTEX_TESTER) += rtmutex-tester.o 24 23 obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock.o 25 24 obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock_debug.o 26 25 obj-$(CONFIG_RWSEM_GENERIC_SPINLOCK) += rwsem-spinlock.o
+22 -25
kernel/locking/qrwlock.c
··· 55 55 { 56 56 while ((cnts & _QW_WMASK) == _QW_LOCKED) { 57 57 cpu_relax_lowlatency(); 58 - cnts = smp_load_acquire((u32 *)&lock->cnts); 58 + cnts = atomic_read_acquire(&lock->cnts); 59 59 } 60 60 } 61 61 62 62 /** 63 - * queue_read_lock_slowpath - acquire read lock of a queue rwlock 63 + * queued_read_lock_slowpath - acquire read lock of a queue rwlock 64 64 * @lock: Pointer to queue rwlock structure 65 + * @cnts: Current qrwlock lock value 65 66 */ 66 - void queue_read_lock_slowpath(struct qrwlock *lock) 67 + void queued_read_lock_slowpath(struct qrwlock *lock, u32 cnts) 67 68 { 68 - u32 cnts; 69 - 70 69 /* 71 70 * Readers come here when they cannot get the lock without waiting 72 71 */ 73 72 if (unlikely(in_interrupt())) { 74 73 /* 75 - * Readers in interrupt context will spin until the lock is 76 - * available without waiting in the queue. 74 + * Readers in interrupt context will get the lock immediately 75 + * if the writer is just waiting (not holding the lock yet). 76 + * The rspin_until_writer_unlock() function returns immediately 77 + * in this case. Otherwise, they will spin (with ACQUIRE 78 + * semantics) until the lock is available without waiting in 79 + * the queue. 77 80 */ 78 - cnts = smp_load_acquire((u32 *)&lock->cnts); 79 81 rspin_until_writer_unlock(lock, cnts); 80 82 return; 81 83 } ··· 89 87 arch_spin_lock(&lock->lock); 90 88 91 89 /* 92 - * At the head of the wait queue now, wait until the writer state 93 - * goes to 0 and then try to increment the reader count and get 94 - * the lock. It is possible that an incoming writer may steal the 95 - * lock in the interim, so it is necessary to check the writer byte 96 - * to make sure that the write lock isn't taken. 90 + * The ACQUIRE semantics of the following spinning code ensure 91 + * that accesses can't leak upwards out of our subsequent critical 92 + * section in the case that the lock is currently held for write. 
97 93 */ 98 - while (atomic_read(&lock->cnts) & _QW_WMASK) 99 - cpu_relax_lowlatency(); 100 - 101 - cnts = atomic_add_return(_QR_BIAS, &lock->cnts) - _QR_BIAS; 94 + cnts = atomic_add_return_acquire(_QR_BIAS, &lock->cnts) - _QR_BIAS; 102 95 rspin_until_writer_unlock(lock, cnts); 103 96 104 97 /* ··· 101 104 */ 102 105 arch_spin_unlock(&lock->lock); 103 106 } 104 - EXPORT_SYMBOL(queue_read_lock_slowpath); 107 + EXPORT_SYMBOL(queued_read_lock_slowpath); 105 108 106 109 /** 107 - * queue_write_lock_slowpath - acquire write lock of a queue rwlock 110 + * queued_write_lock_slowpath - acquire write lock of a queue rwlock 108 111 * @lock : Pointer to queue rwlock structure 109 112 */ 110 - void queue_write_lock_slowpath(struct qrwlock *lock) 113 + void queued_write_lock_slowpath(struct qrwlock *lock) 111 114 { 112 115 u32 cnts; 113 116 ··· 116 119 117 120 /* Try to acquire the lock directly if no reader is present */ 118 121 if (!atomic_read(&lock->cnts) && 119 - (atomic_cmpxchg(&lock->cnts, 0, _QW_LOCKED) == 0)) 122 + (atomic_cmpxchg_acquire(&lock->cnts, 0, _QW_LOCKED) == 0)) 120 123 goto unlock; 121 124 122 125 /* ··· 127 130 struct __qrwlock *l = (struct __qrwlock *)lock; 128 131 129 132 if (!READ_ONCE(l->wmode) && 130 - (cmpxchg(&l->wmode, 0, _QW_WAITING) == 0)) 133 + (cmpxchg_relaxed(&l->wmode, 0, _QW_WAITING) == 0)) 131 134 break; 132 135 133 136 cpu_relax_lowlatency(); ··· 137 140 for (;;) { 138 141 cnts = atomic_read(&lock->cnts); 139 142 if ((cnts == _QW_WAITING) && 140 - (atomic_cmpxchg(&lock->cnts, _QW_WAITING, 141 - _QW_LOCKED) == _QW_WAITING)) 143 + (atomic_cmpxchg_acquire(&lock->cnts, _QW_WAITING, 144 + _QW_LOCKED) == _QW_WAITING)) 142 145 break; 143 146 144 147 cpu_relax_lowlatency(); ··· 146 149 unlock: 147 150 arch_spin_unlock(&lock->lock); 148 151 } 149 - EXPORT_SYMBOL(queue_write_lock_slowpath); 152 + EXPORT_SYMBOL(queued_write_lock_slowpath);
+3 -3
kernel/locking/qspinlock.c
··· 239 239 240 240 static __always_inline void __pv_init_node(struct mcs_spinlock *node) { } 241 241 static __always_inline void __pv_wait_node(struct mcs_spinlock *node) { } 242 - static __always_inline void __pv_kick_node(struct mcs_spinlock *node) { } 243 - 242 + static __always_inline void __pv_kick_node(struct qspinlock *lock, 243 + struct mcs_spinlock *node) { } 244 244 static __always_inline void __pv_wait_head(struct qspinlock *lock, 245 245 struct mcs_spinlock *node) { } 246 246 ··· 440 440 cpu_relax(); 441 441 442 442 arch_mcs_spin_unlock_contended(&next->locked); 443 - pv_kick_node(next); 443 + pv_kick_node(lock, next); 444 444 445 445 release: 446 446 /*
+74 -30
kernel/locking/qspinlock_paravirt.h
··· 22 22 23 23 #define _Q_SLOW_VAL (3U << _Q_LOCKED_OFFSET) 24 24 25 + /* 26 + * Queue node uses: vcpu_running & vcpu_halted. 27 + * Queue head uses: vcpu_running & vcpu_hashed. 28 + */ 25 29 enum vcpu_state { 26 30 vcpu_running = 0, 27 - vcpu_halted, 31 + vcpu_halted, /* Used only in pv_wait_node */ 32 + vcpu_hashed, /* = pv_hash'ed + vcpu_halted */ 28 33 }; 29 34 30 35 struct pv_node { ··· 158 153 159 154 /* 160 155 * Wait for node->locked to become true, halt the vcpu after a short spin. 161 - * pv_kick_node() is used to wake the vcpu again. 156 + * pv_kick_node() is used to set _Q_SLOW_VAL and fill in hash table on its 157 + * behalf. 162 158 */ 163 159 static void pv_wait_node(struct mcs_spinlock *node) 164 160 { ··· 178 172 * 179 173 * [S] pn->state = vcpu_halted [S] next->locked = 1 180 174 * MB MB 181 - * [L] pn->locked [RmW] pn->state = vcpu_running 175 + * [L] pn->locked [RmW] pn->state = vcpu_hashed 182 176 * 183 - * Matches the xchg() from pv_kick_node(). 177 + * Matches the cmpxchg() from pv_kick_node(). 184 178 */ 185 179 smp_store_mb(pn->state, vcpu_halted); 186 180 ··· 188 182 pv_wait(&pn->state, vcpu_halted); 189 183 190 184 /* 191 - * Reset the vCPU state to avoid unncessary CPU kicking 185 + * If pv_kick_node() changed us to vcpu_hashed, retain that value 186 + * so that pv_wait_head() knows to not also try to hash this lock. 192 187 */ 193 - WRITE_ONCE(pn->state, vcpu_running); 188 + cmpxchg(&pn->state, vcpu_halted, vcpu_running); 194 189 195 190 /* 196 191 * If the locked flag is still not set after wakeup, it is a ··· 201 194 * MCS lock will be released soon. 202 195 */ 203 196 } 197 + 204 198 /* 205 199 * By now our node->locked should be 1 and our caller will not actually 206 200 * spin-wait for it. We do however rely on our caller to do a ··· 210 202 } 211 203 212 204 /* 213 - * Called after setting next->locked = 1, used to wake those stuck in 214 - * pv_wait_node(). 205 + * Called after setting next->locked = 1 when we're the lock owner. 
+  *
+  * Instead of waking the waiters stuck in pv_wait_node() advance their state such
+  * that they're waiting in pv_wait_head(), this avoids a wake/sleep cycle.
   */
- static void pv_kick_node(struct mcs_spinlock *node)
+ static void pv_kick_node(struct qspinlock *lock, struct mcs_spinlock *node)
  {
  	struct pv_node *pn = (struct pv_node *)node;
+ 	struct __qspinlock *l = (void *)lock;
 
  	/*
- 	 * Note that because node->locked is already set, this actual
- 	 * mcs_spinlock entry could be re-used already.
+ 	 * If the vCPU is indeed halted, advance its state to match that of
+ 	 * pv_wait_node(). If OTOH this fails, the vCPU was running and will
+ 	 * observe its next->locked value and advance itself.
  	 *
- 	 * This should be fine however, kicking people for no reason is
- 	 * harmless.
- 	 *
- 	 * See the comment in pv_wait_node().
+ 	 * Matches with smp_store_mb() and cmpxchg() in pv_wait_node()
  	 */
- 	if (xchg(&pn->state, vcpu_running) == vcpu_halted)
- 		pv_kick(pn->cpu);
+ 	if (cmpxchg(&pn->state, vcpu_halted, vcpu_hashed) != vcpu_halted)
+ 		return;
+ 
+ 	/*
+ 	 * Put the lock into the hash table and set the _Q_SLOW_VAL.
+ 	 *
+ 	 * As this is the same vCPU that will check the _Q_SLOW_VAL value and
+ 	 * the hash table later on at unlock time, no atomic instruction is
+ 	 * needed.
+ 	 */
+ 	WRITE_ONCE(l->locked, _Q_SLOW_VAL);
+ 	(void)pv_hash(lock, pn);
  }
 
  /*
···
  	struct qspinlock **lp = NULL;
  	int loop;
 
+ 	/*
+ 	 * If pv_kick_node() already advanced our state, we don't need to
+ 	 * insert ourselves into the hash table anymore.
+ 	 */
+ 	if (READ_ONCE(pn->state) == vcpu_hashed)
+ 		lp = (struct qspinlock **)1;
+ 
  	for (;;) {
  		for (loop = SPIN_THRESHOLD; loop; loop--) {
  			if (!READ_ONCE(l->locked))
···
  			cpu_relax();
  		}
 
- 		WRITE_ONCE(pn->state, vcpu_halted);
  		if (!lp) { /* ONCE */
+ 			WRITE_ONCE(pn->state, vcpu_hashed);
  			lp = pv_hash(lock, pn);
+ 
  			/*
- 			 * lp must be set before setting _Q_SLOW_VAL
+ 			 * We must hash before setting _Q_SLOW_VAL, such that
+ 			 * when we observe _Q_SLOW_VAL in __pv_queued_spin_unlock()
+ 			 * we'll be sure to be able to observe our hash entry.
  			 *
- 			 * [S] lp = lock                [RmW] l = l->locked = 0
- 			 *     MB                             MB
- 			 * [S] l->locked = _Q_SLOW_VAL  [L]   lp
+ 			 *   [S] pn->state
+ 			 *   [S] <hash>                 [Rmw] l->locked == _Q_SLOW_VAL
+ 			 *       MB                           RMB
+ 			 *   [RmW] l->locked = _Q_SLOW_VAL    [L] <unhash>
+ 			 *                                    [L] pn->state
  			 *
- 			 * Matches the cmpxchg() in __pv_queued_spin_unlock().
+ 			 * Matches the smp_rmb() in __pv_queued_spin_unlock().
  			 */
  			if (!cmpxchg(&l->locked, _Q_LOCKED_VAL, _Q_SLOW_VAL)) {
  				/*
···
  {
  	struct __qspinlock *l = (void *)lock;
  	struct pv_node *node;
- 	u8 lockval = cmpxchg(&l->locked, _Q_LOCKED_VAL, 0);
+ 	u8 locked;
 
  	/*
  	 * We must not unlock if SLOW, because in that case we must first
  	 * unhash. Otherwise it would be possible to have multiple @lock
  	 * entries, which would be BAD.
  	 */
- 	if (likely(lockval == _Q_LOCKED_VAL))
+ 	locked = cmpxchg(&l->locked, _Q_LOCKED_VAL, 0);
+ 	if (likely(locked == _Q_LOCKED_VAL))
  		return;
 
- 	if (unlikely(lockval != _Q_SLOW_VAL)) {
- 		if (debug_locks_silent)
- 			return;
- 		WARN(1, "pvqspinlock: lock %p has corrupted value 0x%x!\n", lock, atomic_read(&lock->val));
+ 	if (unlikely(locked != _Q_SLOW_VAL)) {
+ 		WARN(!debug_locks_silent,
+ 		     "pvqspinlock: lock 0x%lx has corrupted value 0x%x!\n",
+ 		     (unsigned long)lock, atomic_read(&lock->val));
  		return;
  	}
+ 
+ 	/*
+ 	 * A failed cmpxchg doesn't provide any memory-ordering guarantees,
+ 	 * so we need a barrier to order the read of the node data in
+ 	 * pv_unhash *after* we've read the lock being _Q_SLOW_VAL.
+ 	 *
+ 	 * Matches the cmpxchg() in pv_wait_head() setting _Q_SLOW_VAL.
+ 	 */
+ 	smp_rmb();
 
  	/*
  	 * Since the above failed to release, this must be the SLOW path.
···
  	/*
  	 * At this point the memory pointed at by lock can be freed/reused,
  	 * however we can still use the pv_node to kick the CPU.
+ 	 * The other vCPU may not really be halted, but kicking an active
+ 	 * vCPU is harmless other than the additional latency in completing
+ 	 * the unlock.
  	 */
- 	if (READ_ONCE(node->state) == vcpu_halted)
+ 	if (READ_ONCE(node->state) == vcpu_hashed)
  		pv_kick(node->cpu);
  }
  /*
-420
kernel/locking/rtmutex-tester.c
···
- /*
-  * RT-Mutex-tester: scriptable tester for rt mutexes
-  *
-  * started by Thomas Gleixner:
-  *
-  *  Copyright (C) 2006, Timesys Corp., Thomas Gleixner <tglx@timesys.com>
-  *
-  */
- #include <linux/device.h>
- #include <linux/kthread.h>
- #include <linux/export.h>
- #include <linux/sched.h>
- #include <linux/sched/rt.h>
- #include <linux/spinlock.h>
- #include <linux/timer.h>
- #include <linux/freezer.h>
- #include <linux/stat.h>
- 
- #include "rtmutex.h"
- 
- #define MAX_RT_TEST_THREADS	8
- #define MAX_RT_TEST_MUTEXES	8
- 
- static spinlock_t rttest_lock;
- static atomic_t rttest_event;
- 
- struct test_thread_data {
- 	int			opcode;
- 	int			opdata;
- 	int			mutexes[MAX_RT_TEST_MUTEXES];
- 	int			event;
- 	struct device		dev;
- };
- 
- static struct test_thread_data thread_data[MAX_RT_TEST_THREADS];
- static struct task_struct *threads[MAX_RT_TEST_THREADS];
- static struct rt_mutex mutexes[MAX_RT_TEST_MUTEXES];
- 
- enum test_opcodes {
- 	RTTEST_NOP = 0,
- 	RTTEST_SCHEDOT,		/* 1 Sched other, data = nice */
- 	RTTEST_SCHEDRT,		/* 2 Sched fifo, data = prio */
- 	RTTEST_LOCK,		/* 3 Lock uninterruptible, data = lockindex */
- 	RTTEST_LOCKNOWAIT,	/* 4 Lock uninterruptible no wait in wakeup, data = lockindex */
- 	RTTEST_LOCKINT,		/* 5 Lock interruptible, data = lockindex */
- 	RTTEST_LOCKINTNOWAIT,	/* 6 Lock interruptible no wait in wakeup, data = lockindex */
- 	RTTEST_LOCKCONT,	/* 7 Continue locking after the wakeup delay */
- 	RTTEST_UNLOCK,		/* 8 Unlock, data = lockindex */
- 	/* 9, 10 - reserved for BKL commemoration */
- 	RTTEST_SIGNAL = 11,	/* 11 Signal other test thread, data = thread id */
- 	RTTEST_RESETEVENT = 98,	/* 98 Reset event counter */
- 	RTTEST_RESET = 99,	/* 99 Reset all pending operations */
- };
- 
- static int handle_op(struct test_thread_data *td, int lockwakeup)
- {
- 	int i, id, ret = -EINVAL;
- 
- 	switch(td->opcode) {
- 
- 	case RTTEST_NOP:
- 		return 0;
- 
- 	case RTTEST_LOCKCONT:
- 		td->mutexes[td->opdata] = 1;
- 		td->event = atomic_add_return(1, &rttest_event);
- 		return 0;
- 
- 	case RTTEST_RESET:
- 		for (i = 0; i < MAX_RT_TEST_MUTEXES; i++) {
- 			if (td->mutexes[i] == 4) {
- 				rt_mutex_unlock(&mutexes[i]);
- 				td->mutexes[i] = 0;
- 			}
- 		}
- 		return 0;
- 
- 	case RTTEST_RESETEVENT:
- 		atomic_set(&rttest_event, 0);
- 		return 0;
- 
- 	default:
- 		if (lockwakeup)
- 			return ret;
- 	}
- 
- 	switch(td->opcode) {
- 
- 	case RTTEST_LOCK:
- 	case RTTEST_LOCKNOWAIT:
- 		id = td->opdata;
- 		if (id < 0 || id >= MAX_RT_TEST_MUTEXES)
- 			return ret;
- 
- 		td->mutexes[id] = 1;
- 		td->event = atomic_add_return(1, &rttest_event);
- 		rt_mutex_lock(&mutexes[id]);
- 		td->event = atomic_add_return(1, &rttest_event);
- 		td->mutexes[id] = 4;
- 		return 0;
- 
- 	case RTTEST_LOCKINT:
- 	case RTTEST_LOCKINTNOWAIT:
- 		id = td->opdata;
- 		if (id < 0 || id >= MAX_RT_TEST_MUTEXES)
- 			return ret;
- 
- 		td->mutexes[id] = 1;
- 		td->event = atomic_add_return(1, &rttest_event);
- 		ret = rt_mutex_lock_interruptible(&mutexes[id], 0);
- 		td->event = atomic_add_return(1, &rttest_event);
- 		td->mutexes[id] = ret ? 0 : 4;
- 		return ret ? -EINTR : 0;
- 
- 	case RTTEST_UNLOCK:
- 		id = td->opdata;
- 		if (id < 0 || id >= MAX_RT_TEST_MUTEXES || td->mutexes[id] != 4)
- 			return ret;
- 
- 		td->event = atomic_add_return(1, &rttest_event);
- 		rt_mutex_unlock(&mutexes[id]);
- 		td->event = atomic_add_return(1, &rttest_event);
- 		td->mutexes[id] = 0;
- 		return 0;
- 
- 	default:
- 		break;
- 	}
- 	return ret;
- }
- 
- /*
-  * Schedule replacement for rtsem_down(). Only called for threads with
-  * PF_MUTEX_TESTER set.
-  *
-  * This allows us to have finegrained control over the event flow.
-  *
-  */
- void schedule_rt_mutex_test(struct rt_mutex *mutex)
- {
- 	int tid, op, dat;
- 	struct test_thread_data *td;
- 
- 	/* We have to lookup the task */
- 	for (tid = 0; tid < MAX_RT_TEST_THREADS; tid++) {
- 		if (threads[tid] == current)
- 			break;
- 	}
- 
- 	BUG_ON(tid == MAX_RT_TEST_THREADS);
- 
- 	td = &thread_data[tid];
- 
- 	op = td->opcode;
- 	dat = td->opdata;
- 
- 	switch (op) {
- 	case RTTEST_LOCK:
- 	case RTTEST_LOCKINT:
- 	case RTTEST_LOCKNOWAIT:
- 	case RTTEST_LOCKINTNOWAIT:
- 		if (mutex != &mutexes[dat])
- 			break;
- 
- 		if (td->mutexes[dat] != 1)
- 			break;
- 
- 		td->mutexes[dat] = 2;
- 		td->event = atomic_add_return(1, &rttest_event);
- 		break;
- 
- 	default:
- 		break;
- 	}
- 
- 	schedule();
- 
- 
- 	switch (op) {
- 	case RTTEST_LOCK:
- 	case RTTEST_LOCKINT:
- 		if (mutex != &mutexes[dat])
- 			return;
- 
- 		if (td->mutexes[dat] != 2)
- 			return;
- 
- 		td->mutexes[dat] = 3;
- 		td->event = atomic_add_return(1, &rttest_event);
- 		break;
- 
- 	case RTTEST_LOCKNOWAIT:
- 	case RTTEST_LOCKINTNOWAIT:
- 		if (mutex != &mutexes[dat])
- 			return;
- 
- 		if (td->mutexes[dat] != 2)
- 			return;
- 
- 		td->mutexes[dat] = 1;
- 		td->event = atomic_add_return(1, &rttest_event);
- 		return;
- 
- 	default:
- 		return;
- 	}
- 
- 	td->opcode = 0;
- 
- 	for (;;) {
- 		set_current_state(TASK_INTERRUPTIBLE);
- 
- 		if (td->opcode > 0) {
- 			int ret;
- 
- 			set_current_state(TASK_RUNNING);
- 			ret = handle_op(td, 1);
- 			set_current_state(TASK_INTERRUPTIBLE);
- 			if (td->opcode == RTTEST_LOCKCONT)
- 				break;
- 			td->opcode = ret;
- 		}
- 
- 		/* Wait for the next command to be executed */
- 		schedule();
- 	}
- 
- 	/* Restore previous command and data */
- 	td->opcode = op;
- 	td->opdata = dat;
- }
- 
- 
- static int test_func(void *data)
- {
- 	struct test_thread_data *td = data;
- 	int ret;
- 
- 	current->flags |= PF_MUTEX_TESTER;
- 	set_freezable();
- 	allow_signal(SIGHUP);
- 
- 	for(;;) {
- 
- 		set_current_state(TASK_INTERRUPTIBLE);
- 
- 		if (td->opcode > 0) {
- 			set_current_state(TASK_RUNNING);
- 			ret = handle_op(td, 0);
- 			set_current_state(TASK_INTERRUPTIBLE);
- 			td->opcode = ret;
- 		}
- 
- 		/* Wait for the next command to be executed */
- 		schedule();
- 		try_to_freeze();
- 
- 		if (signal_pending(current))
- 			flush_signals(current);
- 
- 		if(kthread_should_stop())
- 			break;
- 	}
- 	return 0;
- }
- 
- /**
-  * sysfs_test_command - interface for test commands
-  * @dev:	thread reference
-  * @buf:	command for actual step
-  * @count:	length of buffer
-  *
-  * command syntax:
-  *
-  * opcode:data
-  */
- static ssize_t sysfs_test_command(struct device *dev, struct device_attribute *attr,
- 				  const char *buf, size_t count)
- {
- 	struct sched_param schedpar;
- 	struct test_thread_data *td;
- 	char cmdbuf[32];
- 	int op, dat, tid, ret;
- 
- 	td = container_of(dev, struct test_thread_data, dev);
- 	tid = td->dev.id;
- 
- 	/* strings from sysfs write are not 0 terminated! */
- 	if (count >= sizeof(cmdbuf))
- 		return -EINVAL;
- 
- 	/* strip of \n: */
- 	if (buf[count-1] == '\n')
- 		count--;
- 	if (count < 1)
- 		return -EINVAL;
- 
- 	memcpy(cmdbuf, buf, count);
- 	cmdbuf[count] = 0;
- 
- 	if (sscanf(cmdbuf, "%d:%d", &op, &dat) != 2)
- 		return -EINVAL;
- 
- 	switch (op) {
- 	case RTTEST_SCHEDOT:
- 		schedpar.sched_priority = 0;
- 		ret = sched_setscheduler(threads[tid], SCHED_NORMAL, &schedpar);
- 		if (ret)
- 			return ret;
- 		set_user_nice(current, 0);
- 		break;
- 
- 	case RTTEST_SCHEDRT:
- 		schedpar.sched_priority = dat;
- 		ret = sched_setscheduler(threads[tid], SCHED_FIFO, &schedpar);
- 		if (ret)
- 			return ret;
- 		break;
- 
- 	case RTTEST_SIGNAL:
- 		send_sig(SIGHUP, threads[tid], 0);
- 		break;
- 
- 	default:
- 		if (td->opcode > 0)
- 			return -EBUSY;
- 		td->opdata = dat;
- 		td->opcode = op;
- 		wake_up_process(threads[tid]);
- 	}
- 
- 	return count;
- }
- 
- /**
-  * sysfs_test_status - sysfs interface for rt tester
-  * @dev:	thread to query
-  * @buf:	char buffer to be filled with thread status info
-  */
- static ssize_t sysfs_test_status(struct device *dev, struct device_attribute *attr,
- 				 char *buf)
- {
- 	struct test_thread_data *td;
- 	struct task_struct *tsk;
- 	char *curr = buf;
- 	int i;
- 
- 	td = container_of(dev, struct test_thread_data, dev);
- 	tsk = threads[td->dev.id];
- 
- 	spin_lock(&rttest_lock);
- 
- 	curr += sprintf(curr,
- 		"O: %4d, E:%8d, S: 0x%08lx, P: %4d, N: %4d, B: %p, M:",
- 		td->opcode, td->event, tsk->state,
- 			(MAX_RT_PRIO - 1) - tsk->prio,
- 			(MAX_RT_PRIO - 1) - tsk->normal_prio,
- 		tsk->pi_blocked_on);
- 
- 	for (i = MAX_RT_TEST_MUTEXES - 1; i >= 0; i--)
- 		curr += sprintf(curr, "%d", td->mutexes[i]);
- 
- 	spin_unlock(&rttest_lock);
- 
- 	curr += sprintf(curr, ", T: %p, R: %p\n", tsk,
- 			mutexes[td->dev.id].owner);
- 
- 	return curr - buf;
- }
- 
- static DEVICE_ATTR(status, S_IRUSR, sysfs_test_status, NULL);
- static DEVICE_ATTR(command, S_IWUSR, NULL, sysfs_test_command);
- 
- static struct bus_type rttest_subsys = {
- 	.name = "rttest",
- 	.dev_name = "rttest",
- };
- 
- static int init_test_thread(int id)
- {
- 	thread_data[id].dev.bus = &rttest_subsys;
- 	thread_data[id].dev.id = id;
- 
- 	threads[id] = kthread_run(test_func, &thread_data[id], "rt-test-%d", id);
- 	if (IS_ERR(threads[id]))
- 		return PTR_ERR(threads[id]);
- 
- 	return device_register(&thread_data[id].dev);
- }
- 
- static int init_rttest(void)
- {
- 	int ret, i;
- 
- 	spin_lock_init(&rttest_lock);
- 
- 	for (i = 0; i < MAX_RT_TEST_MUTEXES; i++)
- 		rt_mutex_init(&mutexes[i]);
- 
- 	ret = subsys_system_register(&rttest_subsys, NULL);
- 	if (ret)
- 		return ret;
- 
- 	for (i = 0; i < MAX_RT_TEST_THREADS; i++) {
- 		ret = init_test_thread(i);
- 		if (ret)
- 			break;
- 		ret = device_create_file(&thread_data[i].dev, &dev_attr_status);
- 		if (ret)
- 			break;
- 		ret = device_create_file(&thread_data[i].dev, &dev_attr_command);
- 		if (ret)
- 			break;
- 	}
- 
- 	printk("Initializing RT-Tester: %s\n", ret ? "Failed" : "OK" );
- 
- 	return ret;
- }
- 
- device_initcall(init_rttest);
+1 -1
kernel/locking/rtmutex.c
···
  
  	debug_rt_mutex_print_deadlock(waiter);
  
- 	schedule_rt_mutex(lock);
+ 	schedule();
  
  	raw_spin_lock(&lock->wait_lock);
  	set_current_state(state);
-22
kernel/locking/rtmutex_common.h
···
  #include <linux/rtmutex.h>
  
  /*
-  * The rtmutex in kernel tester is independent of rtmutex debugging. We
-  * call schedule_rt_mutex_test() instead of schedule() for the tasks which
-  * belong to the tester. That way we can delay the wakeup path of those
-  * threads to provoke lock stealing and testing of complex boosting scenarios.
-  */
- #ifdef CONFIG_RT_MUTEX_TESTER
- 
- extern void schedule_rt_mutex_test(struct rt_mutex *lock);
- 
- #define schedule_rt_mutex(_lock)				\
-   do {								\
- 	if (!(current->flags & PF_MUTEX_TESTER))		\
- 		schedule();					\
- 	else							\
- 		schedule_rt_mutex_test(_lock);			\
-   } while (0)
- 
- #else
- # define schedule_rt_mutex(_lock)			schedule()
- #endif
- 
- /*
   * This is the control structure for tasks blocked on a rt_mutex,
   * which is allocated on the kernel stack on of the blocked task.
   *
+2 -4
kernel/sched/core.c
···
  
  static void sched_feat_disable(int i)
  {
- 	if (static_key_enabled(&sched_feat_keys[i]))
- 		static_key_slow_dec(&sched_feat_keys[i]);
+ 	static_key_disable(&sched_feat_keys[i]);
  }
  
  static void sched_feat_enable(int i)
  {
- 	if (!static_key_enabled(&sched_feat_keys[i]))
- 		static_key_slow_inc(&sched_feat_keys[i]);
+ 	static_key_enable(&sched_feat_keys[i]);
  }
  #else
  static void sched_feat_disable(int i) { };
+16 -6
lib/Kconfig.debug
···
  	  This allows rt mutex semantics violations and rt mutex related
  	  deadlocks (lockups) to be detected and reported automatically.
  
- config RT_MUTEX_TESTER
- 	bool "Built-in scriptable tester for rt-mutexes"
- 	depends on DEBUG_KERNEL && RT_MUTEXES && BROKEN
- 	help
- 	  This option enables a rt-mutex tester.
- 
  config DEBUG_SPINLOCK
  	bool "Spinlock and rw-lock debugging: basic checks"
  	depends on DEBUG_KERNEL
···
  	  and to test how the mmc host driver handles retries from
  	  the block device.
  
+ config FAIL_FUTEX
+ 	bool "Fault-injection capability for futexes"
+ 	select DEBUG_FS
+ 	depends on FAULT_INJECTION && FUTEX
+ 	help
+ 	  Provide fault-injection capability for futexes.
+ 
  config FAULT_INJECTION_DEBUG_FS
  	bool "Debugfs entries for fault-injection capabilities"
  	depends on FAULT_INJECTION && SYSFS && DEBUG_FS
···
  	  ...
  	  memtest=17, mean do 17 test patterns.
  	  If you are unsure how to answer this question, answer N.
+ 
+ config TEST_STATIC_KEYS
+ 	tristate "Test static keys"
+ 	default n
+ 	depends on m
+ 	help
+ 	  Test the static key interfaces.
+ 
+ 	  If unsure, say N.
  
  source "samples/Kconfig"
  
+2
lib/Makefile
···
  obj-$(CONFIG_TEST_LKM) += test_module.o
  obj-$(CONFIG_TEST_RHASHTABLE) += test_rhashtable.o
  obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
+ obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_keys.o
+ obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_key_base.o
  
  ifeq ($(CONFIG_DEBUG_KOBJECT),y)
  CFLAGS_kobject.o += -DDEBUG
+3
lib/atomic64.c
···
  
  ATOMIC64_OPS(add, +=)
  ATOMIC64_OPS(sub, -=)
+ ATOMIC64_OP(and, &=)
+ ATOMIC64_OP(or, |=)
+ ATOMIC64_OP(xor, ^=)
  
  #undef ATOMIC64_OPS
  #undef ATOMIC64_OP_RETURN
+47 -21
lib/atomic64_test.c
···
  #include <linux/kernel.h>
  #include <linux/atomic.h>
  
+ #define TEST(bit, op, c_op, val)				\
+ do {								\
+ 	atomic##bit##_set(&v, v0);				\
+ 	r = v0;							\
+ 	atomic##bit##_##op(val, &v);				\
+ 	r c_op val;						\
+ 	WARN(atomic##bit##_read(&v) != r, "%Lx != %Lx\n",	\
+ 		(unsigned long long)atomic##bit##_read(&v),	\
+ 		(unsigned long long)r);				\
+ } while (0)
+ 
+ static __init void test_atomic(void)
+ {
+ 	int v0 = 0xaaa31337;
+ 	int v1 = 0xdeadbeef;
+ 	int onestwos = 0x11112222;
+ 	int one = 1;
+ 
+ 	atomic_t v;
+ 	int r;
+ 
+ 	TEST(, add, +=, onestwos);
+ 	TEST(, add, +=, -one);
+ 	TEST(, sub, -=, onestwos);
+ 	TEST(, sub, -=, -one);
+ 	TEST(, or, |=, v1);
+ 	TEST(, and, &=, v1);
+ 	TEST(, xor, ^=, v1);
+ 	TEST(, andnot, &= ~, v1);
+ }
+ 
  #define INIT(c) do { atomic64_set(&v, c); r = c; } while (0)
- static __init int test_atomic64(void)
+ static __init void test_atomic64(void)
  {
  	long long v0 = 0xaaa31337c001d00dLL;
  	long long v1 = 0xdeadbeefdeafcafeLL;
···
  	BUG_ON(v.counter != r);
  	BUG_ON(atomic64_read(&v) != r);
  
- 	INIT(v0);
- 	atomic64_add(onestwos, &v);
- 	r += onestwos;
- 	BUG_ON(v.counter != r);
- 
- 	INIT(v0);
- 	atomic64_add(-one, &v);
- 	r += -one;
- 	BUG_ON(v.counter != r);
+ 	TEST(64, add, +=, onestwos);
+ 	TEST(64, add, +=, -one);
+ 	TEST(64, sub, -=, onestwos);
+ 	TEST(64, sub, -=, -one);
+ 	TEST(64, or, |=, v1);
+ 	TEST(64, and, &=, v1);
+ 	TEST(64, xor, ^=, v1);
+ 	TEST(64, andnot, &= ~, v1);
  
  	INIT(v0);
  	r += onestwos;
···
  	INIT(v0);
  	r += -one;
  	BUG_ON(atomic64_add_return(-one, &v) != r);
  	BUG_ON(v.counter != r);
  
- 	INIT(v0);
- 	atomic64_sub(onestwos, &v);
- 	r -= onestwos;
- 	BUG_ON(v.counter != r);
- 
- 	INIT(v0);
- 	atomic64_sub(-one, &v);
- 	r -= -one;
- 	BUG_ON(v.counter != r);
  
  	INIT(v0);
···
  	BUG_ON(!atomic64_inc_not_zero(&v));
  	r += one;
  	BUG_ON(v.counter != r);
+ }
+ 
+ static __init int test_atomics(void)
+ {
+ 	test_atomic();
+ 	test_atomic64();
  
  #ifdef CONFIG_X86
  	pr_info("passed for %s platform %s CX8 and %s SSE\n",
···
  	return 0;
  }
  
- core_initcall(test_atomic64);
+ core_initcall(test_atomics);
-8
lib/lockref.c
···
  
  #if USE_CMPXCHG_LOCKREF
  
  /*
-  * Allow weakly-ordered memory architectures to provide barrier-less
-  * cmpxchg semantics for lockref updates.
-  */
- #ifndef cmpxchg64_relaxed
- # define cmpxchg64_relaxed cmpxchg64
- #endif
- 
- /*
   * Note that the "cmpxchg()" reloads the "old" value for the
   * failure case.
   */
+68
lib/test_static_key_base.c
···
+ /*
+  * Kernel module for testing static keys.
+  *
+  * Copyright 2015 Akamai Technologies Inc. All Rights Reserved
+  *
+  * Authors:
+  *      Jason Baron <jbaron@akamai.com>
+  *
+  * This software is licensed under the terms of the GNU General Public
+  * License version 2, as published by the Free Software Foundation, and
+  * may be copied, distributed, and modified under those terms.
+  *
+  * This program is distributed in the hope that it will be useful,
+  * but WITHOUT ANY WARRANTY; without even the implied warranty of
+  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  * GNU General Public License for more details.
+  */
+ 
+ #include <linux/module.h>
+ #include <linux/jump_label.h>
+ 
+ /* old keys */
+ struct static_key base_old_true_key = STATIC_KEY_INIT_TRUE;
+ EXPORT_SYMBOL_GPL(base_old_true_key);
+ struct static_key base_inv_old_true_key = STATIC_KEY_INIT_TRUE;
+ EXPORT_SYMBOL_GPL(base_inv_old_true_key);
+ struct static_key base_old_false_key = STATIC_KEY_INIT_FALSE;
+ EXPORT_SYMBOL_GPL(base_old_false_key);
+ struct static_key base_inv_old_false_key = STATIC_KEY_INIT_FALSE;
+ EXPORT_SYMBOL_GPL(base_inv_old_false_key);
+ 
+ /* new keys */
+ DEFINE_STATIC_KEY_TRUE(base_true_key);
+ EXPORT_SYMBOL_GPL(base_true_key);
+ DEFINE_STATIC_KEY_TRUE(base_inv_true_key);
+ EXPORT_SYMBOL_GPL(base_inv_true_key);
+ DEFINE_STATIC_KEY_FALSE(base_false_key);
+ EXPORT_SYMBOL_GPL(base_false_key);
+ DEFINE_STATIC_KEY_FALSE(base_inv_false_key);
+ EXPORT_SYMBOL_GPL(base_inv_false_key);
+ 
+ static void invert_key(struct static_key *key)
+ {
+ 	if (static_key_enabled(key))
+ 		static_key_disable(key);
+ 	else
+ 		static_key_enable(key);
+ }
+ 
+ static int __init test_static_key_base_init(void)
+ {
+ 	invert_key(&base_inv_old_true_key);
+ 	invert_key(&base_inv_old_false_key);
+ 	invert_key(&base_inv_true_key.key);
+ 	invert_key(&base_inv_false_key.key);
+ 
+ 	return 0;
+ }
+ 
+ static void __exit test_static_key_base_exit(void)
+ {
+ }
+ 
+ module_init(test_static_key_base_init);
+ module_exit(test_static_key_base_exit);
+ 
+ MODULE_AUTHOR("Jason Baron <jbaron@akamai.com>");
+ MODULE_LICENSE("GPL");
+225
lib/test_static_keys.c
···
+ /*
+  * Kernel module for testing static keys.
+  *
+  * Copyright 2015 Akamai Technologies Inc. All Rights Reserved
+  *
+  * Authors:
+  *      Jason Baron <jbaron@akamai.com>
+  *
+  * This software is licensed under the terms of the GNU General Public
+  * License version 2, as published by the Free Software Foundation, and
+  * may be copied, distributed, and modified under those terms.
+  *
+  * This program is distributed in the hope that it will be useful,
+  * but WITHOUT ANY WARRANTY; without even the implied warranty of
+  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  * GNU General Public License for more details.
+  */
+ 
+ #include <linux/module.h>
+ #include <linux/jump_label.h>
+ 
+ /* old keys */
+ struct static_key old_true_key = STATIC_KEY_INIT_TRUE;
+ struct static_key old_false_key = STATIC_KEY_INIT_FALSE;
+ 
+ /* new api */
+ DEFINE_STATIC_KEY_TRUE(true_key);
+ DEFINE_STATIC_KEY_FALSE(false_key);
+ 
+ /* external */
+ extern struct static_key base_old_true_key;
+ extern struct static_key base_inv_old_true_key;
+ extern struct static_key base_old_false_key;
+ extern struct static_key base_inv_old_false_key;
+ 
+ /* new api */
+ extern struct static_key_true base_true_key;
+ extern struct static_key_true base_inv_true_key;
+ extern struct static_key_false base_false_key;
+ extern struct static_key_false base_inv_false_key;
+ 
+ 
+ struct test_key {
+ 	bool			init_state;
+ 	struct static_key	*key;
+ 	bool			(*test_key)(void);
+ };
+ 
+ #define test_key_func(key, branch)	\
+ 	({bool func(void) { return branch(key); } func;	})
+ 
+ static void invert_key(struct static_key *key)
+ {
+ 	if (static_key_enabled(key))
+ 		static_key_disable(key);
+ 	else
+ 		static_key_enable(key);
+ }
+ 
+ static void invert_keys(struct test_key *keys, int size)
+ {
+ 	struct static_key *previous = NULL;
+ 	int i;
+ 
+ 	for (i = 0; i < size; i++) {
+ 		if (previous != keys[i].key) {
+ 			invert_key(keys[i].key);
+ 			previous = keys[i].key;
+ 		}
+ 	}
+ }
+ 
+ static int verify_keys(struct test_key *keys, int size, bool invert)
+ {
+ 	int i;
+ 	bool ret, init;
+ 
+ 	for (i = 0; i < size; i++) {
+ 		ret = static_key_enabled(keys[i].key);
+ 		init = keys[i].init_state;
+ 		if (ret != (invert ? !init : init))
+ 			return -EINVAL;
+ 		ret = keys[i].test_key();
+ 		if (static_key_enabled(keys[i].key)) {
+ 			if (!ret)
+ 				return -EINVAL;
+ 		} else {
+ 			if (ret)
+ 				return -EINVAL;
+ 		}
+ 	}
+ 	return 0;
+ }
+ 
+ static int __init test_static_key_init(void)
+ {
+ 	int ret;
+ 	int size;
+ 
+ 	struct test_key static_key_tests[] = {
+ 		/* internal keys - old keys */
+ 		{
+ 			.init_state	= true,
+ 			.key		= &old_true_key,
+ 			.test_key	= test_key_func(&old_true_key, static_key_true),
+ 		},
+ 		{
+ 			.init_state	= false,
+ 			.key		= &old_false_key,
+ 			.test_key	= test_key_func(&old_false_key, static_key_false),
+ 		},
+ 		/* internal keys - new keys */
+ 		{
+ 			.init_state	= true,
+ 			.key		= &true_key.key,
+ 			.test_key	= test_key_func(&true_key, static_branch_likely),
+ 		},
+ 		{
+ 			.init_state	= true,
+ 			.key		= &true_key.key,
+ 			.test_key	= test_key_func(&true_key, static_branch_unlikely),
+ 		},
+ 		{
+ 			.init_state	= false,
+ 			.key		= &false_key.key,
+ 			.test_key	= test_key_func(&false_key, static_branch_likely),
+ 		},
+ 		{
+ 			.init_state	= false,
+ 			.key		= &false_key.key,
+ 			.test_key	= test_key_func(&false_key, static_branch_unlikely),
+ 		},
+ 		/* external keys - old keys */
+ 		{
+ 			.init_state	= true,
+ 			.key		= &base_old_true_key,
+ 			.test_key	= test_key_func(&base_old_true_key, static_key_true),
+ 		},
+ 		{
+ 			.init_state	= false,
+ 			.key		= &base_inv_old_true_key,
+ 			.test_key	= test_key_func(&base_inv_old_true_key, static_key_true),
+ 		},
+ 		{
+ 			.init_state	= false,
+ 			.key		= &base_old_false_key,
+ 			.test_key	= test_key_func(&base_old_false_key, static_key_false),
+ 		},
+ 		{
+ 			.init_state	= true,
+ 			.key		= &base_inv_old_false_key,
+ 			.test_key	= test_key_func(&base_inv_old_false_key, static_key_false),
+ 		},
+ 		/* external keys - new keys */
+ 		{
+ 			.init_state	= true,
+ 			.key		= &base_true_key.key,
+ 			.test_key	= test_key_func(&base_true_key, static_branch_likely),
+ 		},
+ 		{
+ 			.init_state	= true,
+ 			.key		= &base_true_key.key,
+ 			.test_key	= test_key_func(&base_true_key, static_branch_unlikely),
+ 		},
+ 		{
+ 			.init_state	= false,
+ 			.key		= &base_inv_true_key.key,
+ 			.test_key	= test_key_func(&base_inv_true_key, static_branch_likely),
+ 		},
+ 		{
+ 			.init_state	= false,
+ 			.key		= &base_inv_true_key.key,
+ 			.test_key	= test_key_func(&base_inv_true_key, static_branch_unlikely),
+ 		},
+ 		{
+ 			.init_state	= false,
+ 			.key		= &base_false_key.key,
+ 			.test_key	= test_key_func(&base_false_key, static_branch_likely),
+ 		},
+ 		{
+ 			.init_state	= false,
+ 			.key		= &base_false_key.key,
+ 			.test_key	= test_key_func(&base_false_key, static_branch_unlikely),
+ 		},
+ 		{
+ 			.init_state	= true,
+ 			.key		= &base_inv_false_key.key,
+ 			.test_key	= test_key_func(&base_inv_false_key, static_branch_likely),
+ 		},
+ 		{
+ 			.init_state	= true,
+ 			.key		= &base_inv_false_key.key,
+ 			.test_key	= test_key_func(&base_inv_false_key, static_branch_unlikely),
+ 		},
+ 	};
+ 
+ 	size = ARRAY_SIZE(static_key_tests);
+ 
+ 	ret = verify_keys(static_key_tests, size, false);
+ 	if (ret)
+ 		goto out;
+ 
+ 	invert_keys(static_key_tests, size);
+ 	ret = verify_keys(static_key_tests, size, true);
+ 	if (ret)
+ 		goto out;
+ 
+ 	invert_keys(static_key_tests, size);
+ 	ret = verify_keys(static_key_tests, size, false);
+ 	if (ret)
+ 		goto out;
+ 	return 0;
+ out:
+ 	return ret;
+ }
+ 
+ static void __exit test_static_key_exit(void)
+ {
+ }
+ 
+ module_init(test_static_key_init);
+ module_exit(test_static_key_exit);
+ 
+ MODULE_AUTHOR("Jason Baron <jbaron@akamai.com>");
+ MODULE_LICENSE("GPL");
-21
scripts/rt-tester/check-all.sh
···
- 
- 
- function testit ()
- {
-  printf "%-30s: " $1
-  ./rt-tester.py $1 | grep Pass
- }
- 
- testit t2-l1-2rt-sameprio.tst
- testit t2-l1-pi.tst
- testit t2-l1-signal.tst
- #testit t2-l2-2rt-deadlock.tst
- testit t3-l1-pi-1rt.tst
- testit t3-l1-pi-2rt.tst
- testit t3-l1-pi-3rt.tst
- testit t3-l1-pi-signal.tst
- testit t3-l1-pi-steal.tst
- testit t3-l2-pi.tst
- testit t4-l2-pi-deboost.tst
- testit t5-l4-pi-boost-deboost.tst
- testit t5-l4-pi-boost-deboost-setsched.tst
-218
scripts/rt-tester/rt-tester.py
··· 1 - #!/usr/bin/python 2 - # 3 - # rt-mutex tester 4 - # 5 - # (C) 2006 Thomas Gleixner <tglx@linutronix.de> 6 - # 7 - # This program is free software; you can redistribute it and/or modify 8 - # it under the terms of the GNU General Public License version 2 as 9 - # published by the Free Software Foundation. 10 - # 11 - import os 12 - import sys 13 - import getopt 14 - import shutil 15 - import string 16 - 17 - # Globals 18 - quiet = 0 19 - test = 0 20 - comments = 0 21 - 22 - sysfsprefix = "/sys/devices/system/rttest/rttest" 23 - statusfile = "/status" 24 - commandfile = "/command" 25 - 26 - # Command opcodes 27 - cmd_opcodes = { 28 - "schedother" : "1", 29 - "schedfifo" : "2", 30 - "lock" : "3", 31 - "locknowait" : "4", 32 - "lockint" : "5", 33 - "lockintnowait" : "6", 34 - "lockcont" : "7", 35 - "unlock" : "8", 36 - "signal" : "11", 37 - "resetevent" : "98", 38 - "reset" : "99", 39 - } 40 - 41 - test_opcodes = { 42 - "prioeq" : ["P" , "eq" , None], 43 - "priolt" : ["P" , "lt" , None], 44 - "priogt" : ["P" , "gt" , None], 45 - "nprioeq" : ["N" , "eq" , None], 46 - "npriolt" : ["N" , "lt" , None], 47 - "npriogt" : ["N" , "gt" , None], 48 - "unlocked" : ["M" , "eq" , 0], 49 - "trylock" : ["M" , "eq" , 1], 50 - "blocked" : ["M" , "eq" , 2], 51 - "blockedwake" : ["M" , "eq" , 3], 52 - "locked" : ["M" , "eq" , 4], 53 - "opcodeeq" : ["O" , "eq" , None], 54 - "opcodelt" : ["O" , "lt" , None], 55 - "opcodegt" : ["O" , "gt" , None], 56 - "eventeq" : ["E" , "eq" , None], 57 - "eventlt" : ["E" , "lt" , None], 58 - "eventgt" : ["E" , "gt" , None], 59 - } 60 - 61 - # Print usage information 62 - def usage(): 63 - print "rt-tester.py <-c -h -q -t> <testfile>" 64 - print " -c display comments after first command" 65 - print " -h help" 66 - print " -q quiet mode" 67 - print " -t test mode (syntax check)" 68 - print " testfile: read test specification from testfile" 69 - print " otherwise from stdin" 70 - return 71 - 72 - # Print progress when not in quiet mode 73 - def 
progress(str):
    if not quiet:
        print str

# Analyse a status value
def analyse(val, top, arg):

    intval = int(val)

    if top[0] == "M":
        intval = intval / (10 ** int(arg))
        intval = intval % 10
        argval = top[2]
    elif top[0] == "O":
        argval = int(cmd_opcodes.get(arg, arg))
    else:
        argval = int(arg)

    # progress("%d %s %d" %(intval, top[1], argval))

    if top[1] == "eq" and intval == argval:
        return 1
    if top[1] == "lt" and intval < argval:
        return 1
    if top[1] == "gt" and intval > argval:
        return 1
    return 0

# Parse the commandline
try:
    (options, arguments) = getopt.getopt(sys.argv[1:],'chqt')
except getopt.GetoptError, ex:
    usage()
    sys.exit(1)

# Parse commandline options
for option, value in options:
    if option == "-c":
        comments = 1
    elif option == "-q":
        quiet = 1
    elif option == "-t":
        test = 1
    elif option == '-h':
        usage()
        sys.exit(0)

# Select the input source
if arguments:
    try:
        fd = open(arguments[0])
    except Exception,ex:
        sys.stderr.write("File not found %s\n" %(arguments[0]))
        sys.exit(1)
else:
    fd = sys.stdin

linenr = 0

# Read the test patterns
while 1:

    linenr = linenr + 1
    line = fd.readline()
    if not len(line):
        break

    line = line.strip()
    parts = line.split(":")

    if not parts or len(parts) < 1:
        continue

    if len(parts[0]) == 0:
        continue

    if parts[0].startswith("#"):
        if comments > 1:
            progress(line)
        continue

    if comments == 1:
        comments = 2

    progress(line)

    cmd = parts[0].strip().lower()
    opc = parts[1].strip().lower()
    tid = parts[2].strip()
    dat = parts[3].strip()

    try:
        # Test or wait for a status value
        if cmd == "t" or cmd == "w":
            testop = test_opcodes[opc]

            fname = "%s%s%s" %(sysfsprefix, tid, statusfile)
            if test:
                print fname
                continue

            while 1:
                query = 1
                fsta = open(fname, 'r')
                status = fsta.readline().strip()
                fsta.close()
                stat = status.split(",")
                for s in stat:
                    s = s.strip()
                    if s.startswith(testop[0]):
                        # Separate status value
                        val = s[2:].strip()
                        query = analyse(val, testop, dat)
                        break
                if query or cmd == "t":
                    break

                progress(" " + status)

            if not query:
                sys.stderr.write("Test failed in line %d\n" %(linenr))
                sys.exit(1)

        # Issue a command to the tester
        elif cmd == "c":
            cmdnr = cmd_opcodes[opc]
            # Build command string and sys filename
            cmdstr = "%s:%s" %(cmdnr, dat)
            fname = "%s%s%s" %(sysfsprefix, tid, commandfile)
            if test:
                print fname
                continue
            fcmd = open(fname, 'w')
            fcmd.write(cmdstr)
            fcmd.close()

    except Exception,ex:
        sys.stderr.write(str(ex))
        sys.stderr.write("\nSyntax error in line %d\n" %(linenr))
        if not test:
            fd.close()
        sys.exit(1)

# Normal exit pass
print "Pass"
sys.exit(0)
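The comparison step performed by analyse() above can be sketched in a few lines of Python 3 (names here are illustrative, not from the kernel tree): one status field is compared against a test opcode's operator ("eq"/"lt"/"gt") and its argument.

```python
# Hypothetical sketch of the analyse() comparison step, assuming the
# simple case where the argument is a plain integer (no "M"/"O" handling).
import operator

OPS = {"eq": operator.eq, "lt": operator.lt, "gt": operator.gt}

def compare_status(val, op, argval):
    # Returns True when the status value satisfies the requested relation.
    return OPS[op](int(val), int(argval))

print(compare_status("80", "eq", "80"))  # a prioeq-style check -> True
```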
-94
scripts/rt-tester/t2-l1-2rt-sameprio.tst
#
# RT-Mutex test
#
# Op: C(ommand)/T(est)/W(ait)
# |  opcode
# |  |     threadid: 0-7
# |  |     |  opcode argument
# |  |     |  |
# C: lock: 0: 0
#
# Commands
#
# opcode            opcode argument
# schedother        nice value
# schedfifo         priority
# lock              lock nr (0-7)
# locknowait        lock nr (0-7)
# lockint           lock nr (0-7)
# lockintnowait     lock nr (0-7)
# lockcont          lock nr (0-7)
# unlock            lock nr (0-7)
# signal            0
# reset             0
# resetevent        0
#
# Tests / Wait
#
# opcode            opcode argument
#
# prioeq            priority
# priolt            priority
# priogt            priority
# nprioeq           normal priority
# npriolt           normal priority
# npriogt           normal priority
# locked            lock nr (0-7)
# blocked           lock nr (0-7)
# blockedwake       lock nr (0-7)
# unlocked          lock nr (0-7)
# opcodeeq          command opcode or number
# opcodelt          number
# opcodegt          number
# eventeq           number
# eventgt           number
# eventlt           number

#
# 2 threads 1 lock
#
C: resetevent: 0: 0
W: opcodeeq: 0: 0

# Set schedulers
C: schedfifo: 0: 80
C: schedfifo: 1: 80

# T0 lock L0
C: locknowait: 0: 0
C: locknowait: 1: 0
W: locked: 0: 0
W: blocked: 1: 0
T: prioeq: 0: 80

# T0 unlock L0
C: unlock: 0: 0
W: locked: 1: 0

# Verify T0
W: unlocked: 0: 0
T: prioeq: 0: 80

# Unlock
C: unlock: 1: 0
W: unlocked: 1: 0

# T1,T0 lock L0
C: locknowait: 1: 0
C: locknowait: 0: 0
W: locked: 1: 0
W: blocked: 0: 0
T: prioeq: 1: 80

# T1 unlock L0
C: unlock: 1: 0
W: locked: 0: 0

# Verify T1
W: unlocked: 1: 0
T: prioeq: 1: 80

# Unlock and exit
C: unlock: 0: 0
W: unlocked: 0: 0
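Each non-comment line in the format documented above splits into four colon-separated fields: op, opcode, thread id, and argument. A minimal Python 3 sketch (parse_line is a hypothetical helper, not part of rt-tester.py) shows the split:

```python
# Hypothetical helper: split one .tst line into (op, opcode, thread id, arg).
def parse_line(line):
    parts = [p.strip() for p in line.split(":")]
    cmd, opc, tid, dat = parts[0].lower(), parts[1].lower(), int(parts[2]), parts[3]
    return cmd, opc, tid, dat

print(parse_line("C: schedfifo: 0: 80"))  # ('c', 'schedfifo', 0, '80')
```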
-77
scripts/rt-tester/t2-l1-pi.tst
#
# 2 threads 1 lock with priority inversion
#
C: resetevent: 0: 0
W: opcodeeq: 0: 0

# Set schedulers
C: schedother: 0: 0
C: schedfifo: 1: 80

# T0 lock L0
C: locknowait: 0: 0
W: locked: 0: 0

# T1 lock L0
C: locknowait: 1: 0
W: blocked: 1: 0
T: prioeq: 0: 80

# T0 unlock L0
C: unlock: 0: 0
W: locked: 1: 0

# Verify T1
W: unlocked: 0: 0
T: priolt: 0: 1

# Unlock and exit
C: unlock: 1: 0
W: unlocked: 1: 0
-72
scripts/rt-tester/t2-l1-signal.tst
#
# 2 threads 1 lock with priority inversion
#
C: resetevent: 0: 0
W: opcodeeq: 0: 0

# Set schedulers
C: schedother: 0: 0
C: schedother: 1: 0

# T0 lock L0
C: locknowait: 0: 0
W: locked: 0: 0

# T1 lock L0
C: lockintnowait: 1: 0
W: blocked: 1: 0

# Interrupt T1
C: signal: 1: 0
W: unlocked: 1: 0
T: opcodeeq: 1: -4

# Unlock and exit
C: unlock: 0: 0
W: unlocked: 0: 0
-84
scripts/rt-tester/t2-l2-2rt-deadlock.tst
#
# 2 threads 2 lock
#
C: resetevent: 0: 0
W: opcodeeq: 0: 0

# Set schedulers
C: schedfifo: 0: 80
C: schedfifo: 1: 80

# T0 lock L0
C: locknowait: 0: 0
W: locked: 0: 0

# T1 lock L1
C: locknowait: 1: 1
W: locked: 1: 1

# T0 lock L1
C: lockintnowait: 0: 1
W: blocked: 0: 1

# T1 lock L0
C: lockintnowait: 1: 0
W: blocked: 1: 0

# Make deadlock go away
C: signal: 1: 0
W: unlocked: 1: 0
C: signal: 0: 0
W: unlocked: 0: 1

# Unlock and exit
C: unlock: 0: 0
W: unlocked: 0: 0
C: unlock: 1: 1
W: unlocked: 1: 1
-87
scripts/rt-tester/t3-l1-pi-1rt.tst
#
# rt-mutex test
#
# Op: C(ommand)/T(est)/W(ait)
# |  opcode
# |  |     threadid: 0-7
# |  |     |  opcode argument
# |  |     |  |
# C: lock: 0: 0
#
# Commands
#
# opcode            opcode argument
# schedother        nice value
# schedfifo         priority
# lock              lock nr (0-7)
# locknowait        lock nr (0-7)
# lockint           lock nr (0-7)
# lockintnowait     lock nr (0-7)
# lockcont          lock nr (0-7)
# unlock            lock nr (0-7)
# signal            thread to signal (0-7)
# reset             0
# resetevent        0
#
# Tests / Wait
#
# opcode            opcode argument
#
# prioeq            priority
# priolt            priority
# priogt            priority
# nprioeq           normal priority
# npriolt           normal priority
# npriogt           normal priority
# locked            lock nr (0-7)
# blocked           lock nr (0-7)
# blockedwake       lock nr (0-7)
# unlocked          lock nr (0-7)
# opcodeeq          command opcode or number
# opcodelt          number
# opcodegt          number
# eventeq           number
# eventgt           number
# eventlt           number

#
# 3 threads 1 lock PI
#
C: resetevent: 0: 0
W: opcodeeq: 0: 0

# Set schedulers
C: schedother: 0: 0
C: schedother: 1: 0
C: schedfifo: 2: 82

# T0 lock L0
C: locknowait: 0: 0
W: locked: 0: 0

# T1 lock L0
C: locknowait: 1: 0
W: blocked: 1: 0
T: priolt: 0: 1

# T2 lock L0
C: locknowait: 2: 0
W: blocked: 2: 0
T: prioeq: 0: 82

# T0 unlock L0
C: unlock: 0: 0

# Wait until T2 got the lock
W: locked: 2: 0
W: unlocked: 0: 0
T: priolt: 0: 1

# T2 unlock L0
C: unlock: 2: 0

W: unlocked: 2: 0
W: locked: 1: 0

C: unlock: 1: 0
W: unlocked: 1: 0
-88
scripts/rt-tester/t3-l1-pi-2rt.tst
#
# 3 threads 1 lock PI
#
C: resetevent: 0: 0
W: opcodeeq: 0: 0

# Set schedulers
C: schedother: 0: 0
C: schedfifo: 1: 81
C: schedfifo: 2: 82

# T0 lock L0
C: locknowait: 0: 0
W: locked: 0: 0

# T1 lock L0
C: locknowait: 1: 0
W: blocked: 1: 0
T: prioeq: 0: 81

# T2 lock L0
C: locknowait: 2: 0
W: blocked: 2: 0
T: prioeq: 0: 82
T: prioeq: 1: 81

# T0 unlock L0
C: unlock: 0: 0

# Wait until T2 got the lock
W: locked: 2: 0
W: unlocked: 0: 0
T: priolt: 0: 1

# T2 unlock L0
C: unlock: 2: 0

W: unlocked: 2: 0
W: locked: 1: 0

C: unlock: 1: 0
W: unlocked: 1: 0
-87
scripts/rt-tester/t3-l1-pi-3rt.tst
#
# 3 threads 1 lock PI
#
C: resetevent: 0: 0
W: opcodeeq: 0: 0

# Set schedulers
C: schedfifo: 0: 80
C: schedfifo: 1: 81
C: schedfifo: 2: 82

# T0 lock L0
C: locknowait: 0: 0
W: locked: 0: 0

# T1 lock L0
C: locknowait: 1: 0
W: blocked: 1: 0
T: prioeq: 0: 81

# T2 lock L0
C: locknowait: 2: 0
W: blocked: 2: 0
T: prioeq: 0: 82

# T0 unlock L0
C: unlock: 0: 0

# Wait until T2 got the lock
W: locked: 2: 0
W: unlocked: 0: 0
T: prioeq: 0: 80

# T2 unlock L0
C: unlock: 2: 0

W: locked: 1: 0
W: unlocked: 2: 0

C: unlock: 1: 0
W: unlocked: 1: 0
-93
scripts/rt-tester/t3-l1-pi-signal.tst
# Reset event counter
C: resetevent: 0: 0
W: opcodeeq: 0: 0

# Set priorities
C: schedother: 0: 0
C: schedfifo: 1: 80
C: schedfifo: 2: 81

# T0 lock L0
C: lock: 0: 0
W: locked: 0: 0

# T1 lock L0, no wait in the wakeup path
C: locknowait: 1: 0
W: blocked: 1: 0
T: prioeq: 0: 80
T: prioeq: 1: 80

# T2 lock L0 interruptible, no wait in the wakeup path
C: lockintnowait: 2: 0
W: blocked: 2: 0
T: prioeq: 0: 81
T: prioeq: 1: 80

# Interrupt T2
C: signal: 2: 2
W: unlocked: 2: 0
T: prioeq: 1: 80
T: prioeq: 0: 80

T: locked: 0: 0
T: blocked: 1: 0

# T0 unlock L0
C: unlock: 0: 0

# Wait until T1 has locked L0 and exit
W: locked: 1: 0
W: unlocked: 0: 0
T: priolt: 0: 1

C: unlock: 1: 0
W: unlocked: 1: 0
-91
scripts/rt-tester/t3-l1-pi-steal.tst
#
# 3 threads 1 lock PI steal pending ownership
#
C: resetevent: 0: 0
W: opcodeeq: 0: 0

# Set schedulers
C: schedother: 0: 0
C: schedfifo: 1: 80
C: schedfifo: 2: 81

# T0 lock L0
C: lock: 0: 0
W: locked: 0: 0

# T1 lock L0
C: lock: 1: 0
W: blocked: 1: 0
T: prioeq: 0: 80

# T0 unlock L0
C: unlock: 0: 0

# Wait until T1 is in the wakeup loop
W: blockedwake: 1: 0
T: priolt: 0: 1

# T2 lock L0
C: lock: 2: 0
# T1 leave wakeup loop
C: lockcont: 1: 0

# T2 must have the lock and T1 must be blocked
W: locked: 2: 0
W: blocked: 1: 0

# T2 unlock L0
C: unlock: 2: 0

# Wait until T1 is in the wakeup loop and let it run
W: blockedwake: 1: 0
C: lockcont: 1: 0
W: locked: 1: 0
C: unlock: 1: 0
W: unlocked: 1: 0
-87
scripts/rt-tester/t3-l2-pi.tst
#
# 3 threads 2 lock PI
#
C: resetevent: 0: 0
W: opcodeeq: 0: 0

# Set schedulers
C: schedother: 0: 0
C: schedother: 1: 0
C: schedfifo: 2: 82

# T0 lock L0
C: locknowait: 0: 0
W: locked: 0: 0

# T1 lock L0
C: locknowait: 1: 0
W: blocked: 1: 0
T: priolt: 0: 1

# T2 lock L0
C: locknowait: 2: 0
W: blocked: 2: 0
T: prioeq: 0: 82

# T0 unlock L0
C: unlock: 0: 0

# Wait until T2 got the lock
W: locked: 2: 0
W: unlocked: 0: 0
T: priolt: 0: 1

# T2 unlock L0
C: unlock: 2: 0

W: unlocked: 2: 0
W: locked: 1: 0

C: unlock: 1: 0
W: unlocked: 1: 0
-118
scripts/rt-tester/t4-l2-pi-deboost.tst
#
# 4 threads 2 lock PI
#
C: resetevent: 0: 0
W: opcodeeq: 0: 0

# Set schedulers
C: schedother: 0: 0
C: schedother: 1: 0
C: schedfifo: 2: 82
C: schedfifo: 3: 83

# T0 lock L0
C: locknowait: 0: 0
W: locked: 0: 0

# T1 lock L1
C: locknowait: 1: 1
W: locked: 1: 1

# T3 lock L0
C: lockintnowait: 3: 0
W: blocked: 3: 0
T: prioeq: 0: 83

# T0 lock L1
C: lock: 0: 1
W: blocked: 0: 1
T: prioeq: 1: 83

# T1 unlock L1
C: unlock: 1: 1

# Wait until T0 is in the wakeup code
W: blockedwake: 0: 1

# Verify that T1 is unboosted
W: unlocked: 1: 1
T: priolt: 1: 1

# T2 lock L1 (T0 is boosted and pending owner !)
C: locknowait: 2: 1
W: blocked: 2: 1
T: prioeq: 0: 83

# Interrupt T3 and wait until T3 returned
C: signal: 3: 0
W: unlocked: 3: 0

# Verify prio of T0 (still pending owner,
# but T2 is enqueued due to the previous boost by T3)
T: prioeq: 0: 82

# Let T0 continue
C: lockcont: 0: 1
W: locked: 0: 1

# Unlock L1 and let T2 get L1
C: unlock: 0: 1
W: locked: 2: 1

# Verify that T0 is unboosted
W: unlocked: 0: 1
T: priolt: 0: 1

# Unlock everything and exit
C: unlock: 2: 1
W: unlocked: 2: 1

C: unlock: 0: 0
W: unlocked: 0: 0
-178
scripts/rt-tester/t5-l4-pi-boost-deboost-setsched.tst
#
# 5 threads 4 lock PI - modify priority of blocked threads
#
C: resetevent: 0: 0
W: opcodeeq: 0: 0

# Set schedulers
C: schedother: 0: 0
C: schedfifo: 1: 81
C: schedfifo: 2: 82
C: schedfifo: 3: 83
C: schedfifo: 4: 84

# T0 lock L0
C: locknowait: 0: 0
W: locked: 0: 0

# T1 lock L1
C: locknowait: 1: 1
W: locked: 1: 1

# T1 lock L0
C: lockintnowait: 1: 0
W: blocked: 1: 0
T: prioeq: 0: 81

# T2 lock L2
C: locknowait: 2: 2
W: locked: 2: 2

# T2 lock L1
C: lockintnowait: 2: 1
W: blocked: 2: 1
T: prioeq: 0: 82
T: prioeq: 1: 82

# T3 lock L3
C: locknowait: 3: 3
W: locked: 3: 3

# T3 lock L2
C: lockintnowait: 3: 2
W: blocked: 3: 2
T: prioeq: 0: 83
T: prioeq: 1: 83
T: prioeq: 2: 83

# T4 lock L3
C: lockintnowait: 4: 3
W: blocked: 4: 3
T: prioeq: 0: 84
T: prioeq: 1: 84
T: prioeq: 2: 84
T: prioeq: 3: 84

# Reduce prio of T4
C: schedfifo: 4: 80
T: prioeq: 0: 83
T: prioeq: 1: 83
T: prioeq: 2: 83
T: prioeq: 3: 83
T: prioeq: 4: 80

# Increase prio of T4
C: schedfifo: 4: 84
T: prioeq: 0: 84
T: prioeq: 1: 84
T: prioeq: 2: 84
T: prioeq: 3: 84
T: prioeq: 4: 84

# Reduce prio of T3
C: schedfifo: 3: 80
T: prioeq: 0: 84
T: prioeq: 1: 84
T: prioeq: 2: 84
T: prioeq: 3: 84
T: prioeq: 4: 84

# Increase prio of T3
C: schedfifo: 3: 85
T: prioeq: 0: 85
T: prioeq: 1: 85
T: prioeq: 2: 85
T: prioeq: 3: 85
T: prioeq: 4: 84

# Reduce prio of T3
C: schedfifo: 3: 83
T: prioeq: 0: 84
T: prioeq: 1: 84
T: prioeq: 2: 84
T: prioeq: 3: 84
T: prioeq: 4: 84

# Signal T4
C: signal: 4: 0
W: unlocked: 4: 3
T: prioeq: 0: 83
T: prioeq: 1: 83
T: prioeq: 2: 83
T: prioeq: 3: 83

# Signal T3
C: signal: 3: 0
W: unlocked: 3: 2
T: prioeq: 0: 82
T: prioeq: 1: 82
T: prioeq: 2: 82

# Signal T2
C: signal: 2: 0
W: unlocked: 2: 1
T: prioeq: 0: 81
T: prioeq: 1: 81

# Signal T1
C: signal: 1: 0
W: unlocked: 1: 0
T: priolt: 0: 1

# Unlock and exit
C: unlock: 3: 3
C: unlock: 2: 2
C: unlock: 1: 1
C: unlock: 0: 0

W: unlocked: 3: 3
W: unlocked: 2: 2
W: unlocked: 1: 1
W: unlocked: 0: 0
-138
scripts/rt-tester/t5-l4-pi-boost-deboost.tst
#
# 5 threads 4 lock PI
#
C: resetevent: 0: 0
W: opcodeeq: 0: 0

# Set schedulers
C: schedother: 0: 0
C: schedfifo: 1: 81
C: schedfifo: 2: 82
C: schedfifo: 3: 83
C: schedfifo: 4: 84

# T0 lock L0
C: locknowait: 0: 0
W: locked: 0: 0

# T1 lock L1
C: locknowait: 1: 1
W: locked: 1: 1

# T1 lock L0
C: lockintnowait: 1: 0
W: blocked: 1: 0
T: prioeq: 0: 81

# T2 lock L2
C: locknowait: 2: 2
W: locked: 2: 2

# T2 lock L1
C: lockintnowait: 2: 1
W: blocked: 2: 1
T: prioeq: 0: 82
T: prioeq: 1: 82

# T3 lock L3
C: locknowait: 3: 3
W: locked: 3: 3

# T3 lock L2
C: lockintnowait: 3: 2
W: blocked: 3: 2
T: prioeq: 0: 83
T: prioeq: 1: 83
T: prioeq: 2: 83

# T4 lock L3
C: lockintnowait: 4: 3
W: blocked: 4: 3
T: prioeq: 0: 84
T: prioeq: 1: 84
T: prioeq: 2: 84
T: prioeq: 3: 84

# Signal T4
C: signal: 4: 0
W: unlocked: 4: 3
T: prioeq: 0: 83
T: prioeq: 1: 83
T: prioeq: 2: 83
T: prioeq: 3: 83

# Signal T3
C: signal: 3: 0
W: unlocked: 3: 2
T: prioeq: 0: 82
T: prioeq: 1: 82
T: prioeq: 2: 82

# Signal T2
C: signal: 2: 0
W: unlocked: 2: 1
T: prioeq: 0: 81
T: prioeq: 1: 81

# Signal T1
C: signal: 1: 0
W: unlocked: 1: 0
T: priolt: 0: 1

# Unlock and exit
C: unlock: 3: 3
C: unlock: 2: 2
C: unlock: 1: 1
C: unlock: 0: 0

W: unlocked: 3: 3
W: unlocked: 2: 2
W: unlocked: 1: 1
W: unlocked: 0: 0
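The boost chain exercised above (T1 blocked on T0's lock, T2 on T1's, and so on, each boost propagating down to T0) can be modelled in a few lines. This is a toy sketch of priority inheritance, not kernel code; all names are illustrative:

```python
# Toy PI model: an owner's effective priority is the max of its own base
# priority and the effective priorities of all threads blocked on locks it
# holds, propagated transitively along the blocking chain.
def effective_prio(thread, owners, waiters, base):
    """owners: lock -> owning thread; waiters: lock -> blocked threads;
    base: thread -> base priority (higher number = higher priority)."""
    prio = base[thread]
    for lock, owner in owners.items():
        if owner == thread:
            for w in waiters.get(lock, []):
                prio = max(prio, effective_prio(w, owners, waiters, base))
    return prio

# The chain built above: T0 owns L0, T1 owns L1 and blocks on L0, T2 owns
# L2 and blocks on L1, T3 owns L3 and blocks on L2, T4 blocks on L3.
owners = {"L0": 0, "L1": 1, "L2": 2, "L3": 3}
waiters = {"L0": [1], "L1": [2], "L2": [3], "L3": [4]}
base = {0: 0, 1: 81, 2: 82, 3: 83, 4: 84}
print(effective_prio(0, owners, waiters, base))  # 84, matching "T: prioeq: 0: 84"
```

Removing T4 from the waiters of L3 (the "Signal T4" step) drops the whole chain back to 83, which is exactly what the subsequent prioeq checks assert.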
+1
tools/testing/selftests/Makefile
TARGETS += timers
endif
TARGETS += user
TARGETS += jumplabel
TARGETS += vm
TARGETS += x86
#Please keep the TARGETS list alphabetically sorted
+8
tools/testing/selftests/static_keys/Makefile
# Makefile for static keys selftests

# No binaries, but make sure arg-less "make" doesn't trigger "run_tests"
all:

TEST_PROGS := test_static_keys.sh

include ../lib.mk
+16
tools/testing/selftests/static_keys/test_static_keys.sh
#!/bin/sh
# Runs static keys kernel module tests

if /sbin/modprobe -q test_static_key_base; then
	if /sbin/modprobe -q test_static_keys; then
		echo "static_key: ok"
		/sbin/modprobe -q -r test_static_keys
		/sbin/modprobe -q -r test_static_key_base
	else
		echo "static_keys: [FAIL]"
		/sbin/modprobe -q -r test_static_key_base
	fi
else
	echo "static_key: [FAIL]"
	exit 1
fi