Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

documentation: Clarify memory-barrier semantics of atomic operations

All value-returning atomic read-modify-write operations must provide full
memory-barrier semantics on both sides of the operation. This commit
clarifies the documentation to make it clear that these memory-barrier
semantics are provided by the operations themselves, not by their callers.

Reported-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

+23 -22
Documentation/atomic_ops.txt
--- a/Documentation/atomic_ops.txt
+++ b/Documentation/atomic_ops.txt
@@ -201,11 +201,11 @@
 atomic_t and return the new counter value after the operation is
 performed.
 
-Unlike the above routines, it is required that explicit memory
-barriers are performed before and after the operation. It must be
-done such that all memory operations before and after the atomic
-operation calls are strongly ordered with respect to the atomic
-operation itself.
+Unlike the above routines, it is required that these primitives
+include explicit memory barriers that are performed before and after
+the operation. It must be done such that all memory operations before
+and after the atomic operation calls are strongly ordered with respect
+to the atomic operation itself.
 
 For example, it should behave as if a smp_mb() call existed both
 before and after the atomic operation.
@@ -233,21 +233,21 @@
 given atomic counter. They return a boolean indicating whether the
 resulting counter value was zero or not.
 
-It requires explicit memory barrier semantics around the operation as
-above.
+Again, these primitives provide explicit memory barrier semantics around
+the atomic operation.
 
 	int atomic_sub_and_test(int i, atomic_t *v);
 
 This is identical to atomic_dec_and_test() except that an explicit
-decrement is given instead of the implicit "1". It requires explicit
-memory barrier semantics around the operation.
+decrement is given instead of the implicit "1". This primitive must
+provide explicit memory barrier semantics around the operation.
 
 	int atomic_add_negative(int i, atomic_t *v);
 
-The given increment is added to the given atomic counter value. A
-boolean is return which indicates whether the resulting counter value
-is negative. It requires explicit memory barrier semantics around the
-operation.
+The given increment is added to the given atomic counter value. A boolean
+is return which indicates whether the resulting counter value is negative.
+This primitive must provide explicit memory barrier semantics around
+the operation.
 
 Then:
 
@@ -257,7 +257,7 @@
 the given new value. It returns the old value that the atomic variable v had
 just before the operation.
 
-atomic_xchg requires explicit memory barriers around the operation.
+atomic_xchg must provide explicit memory barriers around the operation.
 
 	int atomic_cmpxchg(atomic_t *v, int old, int new);
 
@@ -266,7 +266,7 @@
 atomic_cmpxchg will only satisfy its atomicity semantics as long as all
 other accesses of *v are performed through atomic_xxx operations.
 
-atomic_cmpxchg requires explicit memory barriers around the operation.
+atomic_cmpxchg must provide explicit memory barriers around the operation.
 
 The semantics for atomic_cmpxchg are the same as those defined for 'cas'
 below.
@@ -279,8 +279,8 @@
 returns non zero. If v is equal to u then it returns zero. This is done as
 an atomic operation.
 
-atomic_add_unless requires explicit memory barriers around the operation
-unless it fails (returns 0).
+atomic_add_unless must provide explicit memory barriers around the
+operation unless it fails (returns 0).
 
 atomic_inc_not_zero, equivalent to atomic_add_unless(v, 1, 0)
 
@@ -460,9 +460,9 @@
 like this occur as well.
 
 These routines, like the atomic_t counter operations returning values,
-require explicit memory barrier semantics around their execution. All
-memory operations before the atomic bit operation call must be made
-visible globally before the atomic bit operation is made visible.
+must provide explicit memory barrier semantics around their execution.
+All memory operations before the atomic bit operation call must be
+made visible globally before the atomic bit operation is made visible.
 Likewise, the atomic bit operation must be visible globally before any
 subsequent memory operation is made visible. For example:
 
@@ -536,8 +536,9 @@
 These non-atomic variants also do not require any special memory
 barrier semantics.
 
-The routines xchg() and cmpxchg() need the same exact memory barriers
-as the atomic and bit operations returning values.
+The routines xchg() and cmpxchg() must provide the same exact
+memory-barrier semantics as the atomic and bit operations returning
+values.
 
 Spinlocks and rwlocks have memory barrier expectations as well.
 The rule to follow is simple: