                Semantics and Behavior of Atomic and
                         Bitmask Operations

                          David S. Miller

        This document is intended to serve as a guide to Linux port
maintainers on how to implement atomic counter, bitops, and spinlock
interfaces properly.

        The atomic_t type should be defined as a signed integer.
Also, it should be made opaque such that any kind of cast to a normal
C integer type will fail.  Something like the following should
suffice:

        typedef struct { volatile int counter; } atomic_t;

Historically, counter has been declared volatile.  This is now discouraged.
See Documentation/volatile-considered-harmful.txt for the complete rationale.

local_t is very similar to atomic_t.  If the counter is per CPU and only
updated by one CPU, local_t is probably more appropriate.  Please see
Documentation/local_ops.txt for the semantics of local_t.

The first operations to implement for atomic_t's are the initializers and
plain reads.

        #define ATOMIC_INIT(i)          { (i) }
        #define atomic_set(v, i)        ((v)->counter = (i))

The first macro is used in definitions, such as:

static atomic_t my_counter = ATOMIC_INIT(1);

The initializer is atomic in that the return values of the atomic operations
are guaranteed to be correct reflecting the initialized value if the
initializer is used before runtime.  If the initializer is used at runtime, a
proper implicit or explicit read memory barrier is needed before reading the
value with atomic_read from another thread.

The second interface can be used at runtime, as in:

        struct foo { atomic_t counter; };
        ...

        struct foo *k;

        k = kmalloc(sizeof(*k), GFP_KERNEL);
        if (!k)
                return -ENOMEM;
        atomic_set(&k->counter, 0);

The setting is atomic in that the return values of the atomic operations by
all threads are guaranteed to be correct reflecting either the value that has
been set with this operation or set with another operation.  A proper implicit
or explicit memory barrier is needed before the value set with the operation
is guaranteed to be readable with atomic_read from another thread.

Next, we have:

        #define atomic_read(v)  ((v)->counter)

which simply reads the counter value currently visible to the calling thread.
The read is atomic in that the return value is guaranteed to be one of the
values initialized or modified with the interface operations if a proper
implicit or explicit memory barrier is used after possible runtime
initialization by any other thread and the value is modified only with the
interface operations.  atomic_read does not guarantee that the runtime
initialization by any other thread is visible yet, so the user of the
interface must take care of that with a proper implicit or explicit memory
barrier.

*** WARNING: atomic_read() and atomic_set() DO NOT IMPLY BARRIERS! ***

Some architectures may choose to use the volatile keyword, barriers, or inline
assembly to guarantee some degree of immediacy for atomic_read() and
atomic_set().  This is not uniformly guaranteed, and may change in the future,
so all users of atomic_t should treat atomic_read() and atomic_set() as simple
C statements that may be reordered or optimized away entirely by the compiler
or processor, and explicitly invoke the appropriate compiler and/or memory
barrier for each use case.  Failure to do so will result in code that may
suddenly break when used with different architectures or compiler
optimizations, or even changes in unrelated code which changes how the
compiler optimizes the section accessing atomic_t variables.

*** YOU HAVE BEEN WARNED! ***
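As an illustration of the above warning, here is a minimal sketch of a
producer/consumer pairing around a flag stored in an atomic_t.  The obj
structure, its fields, and the compute_data()/use_data() helpers are
hypothetical, and smp_wmb()/smp_rmb() are just one possible choice of
explicit barriers:

        /* Producer: publish obj->data, then set the flag. */
        obj->data = compute_data();
        smp_wmb();                      /* order the data store before the flag store */
        atomic_set(&obj->ready, 1);

        /* Consumer: test the flag, then read obj->data. */
        if (atomic_read(&obj->ready)) {
                smp_rmb();              /* order the flag load before the data load */
                use_data(obj->data);
        }

Without the explicit barriers, nothing prevents the compiler or the cpu from
letting the flag store become visible before the data store, or from hoisting
the data load above the flag load.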
Now, we move onto the atomic operation interfaces typically implemented with
the help of assembly code.

        void atomic_add(int i, atomic_t *v);
        void atomic_sub(int i, atomic_t *v);
        void atomic_inc(atomic_t *v);
        void atomic_dec(atomic_t *v);

These four routines add and subtract integral values to/from the given
atomic_t value.  The first two routines pass explicit integers by
which to make the adjustment, whereas the latter two use an implicit
adjustment value of "1".

One very important aspect of these routines is that they DO NOT
require any explicit memory barriers.  They need only perform the
atomic_t counter update in an SMP safe manner.

Next, we have:

        int atomic_inc_return(atomic_t *v);
        int atomic_dec_return(atomic_t *v);

These routines add 1 and subtract 1, respectively, from the given
atomic_t and return the new counter value after the operation is
performed.

Unlike the above routines, it is required that explicit memory
barriers are performed before and after the operation.  It must be
done such that all memory operations before and after the atomic
operation calls are strongly ordered with respect to the atomic
operation itself.

For example, it should behave as if a smp_mb() call existed both
before and after the atomic operation.

If the atomic instructions used in an implementation provide explicit
memory barrier semantics which satisfy the above requirements, that is
fine as well.

Let's move on:

        int atomic_add_return(int i, atomic_t *v);
        int atomic_sub_return(int i, atomic_t *v);

These behave just like atomic_{inc,dec}_return() except that an
explicit counter adjustment is given instead of the implicit "1".
This means that like atomic_{inc,dec}_return(), the memory barrier
semantics are required.
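A common use of the value-returning variants is handing out unique
identifiers.  The following sketch (the next_id counter and allocate_id()
helper are hypothetical) relies on atomic_inc_return() so that no two racing
callers can observe the same return value:

        static atomic_t next_id = ATOMIC_INIT(0);

        int allocate_id(void)
        {
                /* Each caller receives a distinct, monotonically
                 * increasing value, even when racing on SMP.
                 */
                return atomic_inc_return(&next_id);
        }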
Next:

        int atomic_inc_and_test(atomic_t *v);
        int atomic_dec_and_test(atomic_t *v);

These two routines increment and decrement by 1, respectively, the
given atomic counter.  They return a boolean indicating whether the
resulting counter value was zero or not.

These require explicit memory barrier semantics around the operation as
above.

        int atomic_sub_and_test(int i, atomic_t *v);

This is identical to atomic_dec_and_test() except that an explicit
decrement is given instead of the implicit "1".  It requires explicit
memory barrier semantics around the operation.

        int atomic_add_negative(int i, atomic_t *v);

The given increment is added to the given atomic counter value.  A
boolean is returned which indicates whether the resulting counter value
is negative.  It requires explicit memory barrier semantics around the
operation.

Then:

        int atomic_xchg(atomic_t *v, int new);

This performs an atomic exchange operation on the atomic variable v, setting
the given new value.  It returns the old value that the atomic variable v had
just before the operation.

        int atomic_cmpxchg(atomic_t *v, int old, int new);

This performs an atomic compare exchange operation on the atomic value v,
with the given old and new values.  Like all atomic_xxx operations,
atomic_cmpxchg will only satisfy its atomicity semantics as long as all
other accesses of *v are performed through atomic_xxx operations.

atomic_cmpxchg requires explicit memory barriers around the operation.

The semantics for atomic_cmpxchg are the same as those defined for 'cas'
below.

Finally:

        int atomic_add_unless(atomic_t *v, int a, int u);

If the atomic value v is not equal to u, this function adds a to v, and
returns non zero.  If v is equal to u then it returns zero.  This is done as
an atomic operation.

atomic_add_unless requires explicit memory barriers around the operation.

atomic_inc_not_zero, equivalent to atomic_add_unless(v, 1, 0), increments
the counter only if it is currently non-zero.
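To make the connection with atomic_cmpxchg() concrete, an
architecture-neutral atomic_add_unless() could be sketched as the
compare-and-exchange loop below.  This is an illustrative sketch, not the
implementation used by any particular port:

        static inline int atomic_add_unless(atomic_t *v, int a, int u)
        {
                int c, old;

                c = atomic_read(v);
                for (;;) {
                        if (unlikely(c == u))
                                break;          /* value is u, do not add */
                        old = atomic_cmpxchg(v, c, c + a);
                        if (likely(old == c))
                                break;          /* the add succeeded */
                        c = old;                /* lost a race, retry with new value */
                }
                return c != u;
        }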
If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces are
defined which accomplish this:

        void smp_mb__before_atomic_dec(void);
        void smp_mb__after_atomic_dec(void);
        void smp_mb__before_atomic_inc(void);
        void smp_mb__after_atomic_inc(void);

For example, smp_mb__before_atomic_dec() can be used like so:

        obj->dead = 1;
        smp_mb__before_atomic_dec();
        atomic_dec(&obj->ref_count);

It makes sure that all memory operations preceding the atomic_dec()
call are strongly ordered with respect to the atomic counter
operation.  In the above example, it guarantees that the assignment of
"1" to obj->dead will be globally visible to other cpus before the
atomic counter decrement.

Without the explicit smp_mb__before_atomic_dec() call, the
implementation could legally allow the atomic counter update to become
visible to other cpus before the "obj->dead = 1;" assignment.

The other three interfaces listed are used to provide explicit
ordering with respect to memory operations after an atomic_dec() call
(smp_mb__after_atomic_dec()) and around atomic_inc() calls
(smp_mb__{before,after}_atomic_inc()).

A missing memory barrier in the cases where they are required by the
atomic_t implementation above can have disastrous results.  Here is
an example, which follows a pattern occurring frequently in the Linux
kernel.  It is the use of atomic counters to implement reference
counting, and it works such that once the counter falls to zero it can
be guaranteed that no other entity can be accessing the object:

static void obj_list_add(struct obj *obj)
{
        obj->active = 1;
        list_add(&obj->list);
}

static void obj_list_del(struct obj *obj)
{
        list_del(&obj->list);
        obj->active = 0;
}

static void obj_destroy(struct obj *obj)
{
        BUG_ON(obj->active);
        kfree(obj);
}

struct obj *obj_list_peek(struct list_head *head)
{
        if (!list_empty(head)) {
                struct obj *obj;

                obj = list_entry(head->next, struct obj, list);
                atomic_inc(&obj->refcnt);
                return obj;
        }
        return NULL;
}

void obj_poke(void)
{
        struct obj *obj;

        spin_lock(&global_list_lock);
        obj = obj_list_peek(&global_list);
        spin_unlock(&global_list_lock);

        if (obj) {
                obj->ops->poke(obj);
                if (atomic_dec_and_test(&obj->refcnt))
                        obj_destroy(obj);
        }
}

void obj_timeout(struct obj *obj)
{
        spin_lock(&global_list_lock);
        obj_list_del(obj);
        spin_unlock(&global_list_lock);

        if (atomic_dec_and_test(&obj->refcnt))
                obj_destroy(obj);
}

(This is a simplification of the ARP queue management in the
 generic neighbour discovery code of the networking.  Olaf Kirch
 found a bug wrt. memory barriers in kfree_skb() that exposed
 the atomic_t memory barrier requirements quite clearly.)

Given the above scheme, it must be the case that the obj->active
update done by the obj list deletion be visible to other processors
before the atomic counter decrement is performed.

Otherwise, the counter could fall to zero, yet obj->active would still
be set, thus triggering the assertion in obj_destroy().  The error
sequence looks like this:

        cpu 0                           cpu 1
        obj_poke()                      obj_timeout()
        obj = obj_list_peek();
        ... gains ref to obj, refcnt=2
                                        obj_list_del(obj);
                                        obj->active = 0 ...
                                        ... visibility delayed ...
        atomic_dec_and_test()
        ... refcnt drops to 1 ...
                                        atomic_dec_and_test()
                                        ... refcount drops to 0 ...
                                        obj_destroy()
        BUG() triggers since obj->active
        still seen as one
                                        obj->active update visibility occurs

With the memory barrier semantics required of the atomic_t operations
which return values, the above sequence of memory visibility can never
happen.  Specifically, in the above case the atomic_dec_and_test()
counter decrement would not become globally visible until the
obj->active update does.

As a historical note, 32-bit Sparc used to only allow usage of
24 bits of its atomic_t type.  This was because it used 8 bits
as a spinlock for SMP safety.  Sparc32 lacked a "compare and swap"
type instruction.  However, 32-bit Sparc has since been moved over
to a "hash table of spinlocks" scheme, that allows the full 32-bit
counter to be realized.  Essentially, an array of spinlocks is
indexed into based upon the address of the atomic_t being operated
on, and that lock protects the atomic operation.  Parisc uses the
same scheme.

Another note is that the atomic_t operations returning values are
extremely slow on an old 386.

We will now cover the atomic bitmask operations.  You will find that
their SMP and memory barrier semantics are similar in shape and scope
to the atomic_t ops above.
Native atomic bit operations are defined to operate on objects aligned
to the size of an "unsigned long" C data type, and are at least of that
size.  The endianness of the bits within each "unsigned long" is the
native endianness of the cpu.

        void set_bit(unsigned long nr, volatile unsigned long *addr);
        void clear_bit(unsigned long nr, volatile unsigned long *addr);
        void change_bit(unsigned long nr, volatile unsigned long *addr);

These routines set, clear, and change, respectively, the bit number
indicated by "nr" on the bit mask pointed to by "addr".

They must execute atomically, yet there are no implicit memory barrier
semantics required of these interfaces.

        int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
        int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
        int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

Like the above, except that these routines return a boolean which
indicates whether the changed bit was set _BEFORE_ the atomic bit
operation.

WARNING! It is incredibly important that the value be a boolean,
ie. "0" or "1".  Do not try to be fancy and save a few instructions by
declaring the above to return "long" and just returning something like
"old_val & mask" because that will not work.

For one thing, this return value gets truncated to int in many code
paths using these interfaces, so on 64-bit if the bit is set in the
upper 32-bits then testers will never see that.

One great example of where this problem crops up is the thread_info
flag operations.  Routines such as test_and_set_ti_thread_flag() chop
the return value into an int.  There are other places where things
like this occur as well.
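To see why returning "old_val & mask" is broken, consider this illustrative
fragment (the variable names are hypothetical).  On a 64-bit machine the only
set bit of the returned word lies above bit 31, so any caller that stores or
compares the result as an int sees zero:

        unsigned long flags = 1UL << 40;        /* bit 40 already set */
        long raw;
        int seen;

        /* A "fancy" implementation might return the masked word: */
        raw = flags & (1UL << 40);              /* 0x10000000000, non-zero */

        /* But many callers store or test the result as an int: */
        seen = raw;                             /* truncated to 0 on 64-bit! */
        if (!seen) {
                /* the caller wrongly concludes the bit was clear */
        }

Returning a strict boolean ("0" or "1") avoids this class of bug entirely.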
These routines, like the atomic_t counter operations returning values,
require explicit memory barrier semantics around their execution.  All
memory operations before the atomic bit operation call must be made
visible globally before the atomic bit operation is made visible.
Likewise, the atomic bit operation must be visible globally before any
subsequent memory operation is made visible.  For example:

        obj->dead = 1;
        if (test_and_set_bit(0, &obj->flags))
                /* ... */;
        obj->killed = 1;

The implementation of test_and_set_bit() must guarantee that
"obj->dead = 1;" is visible to cpus before the atomic memory operation
done by test_and_set_bit() becomes visible.  Likewise, the atomic
memory operation done by test_and_set_bit() must become visible before
"obj->killed = 1;" is visible.

Finally there is the basic operation:

        int test_bit(unsigned long nr, __const__ volatile unsigned long *addr);

Which returns a boolean indicating if bit "nr" is set in the bitmask
pointed to by "addr".

If explicit memory barriers are required around clear_bit() (which
does not return a value, and thus does not need to provide memory
barrier semantics), two interfaces are provided:

        void smp_mb__before_clear_bit(void);
        void smp_mb__after_clear_bit(void);

They are used as follows, and are akin to their atomic_t operation
brothers:

        /* All memory operations before this call will
         * be globally visible before the clear_bit().
         */
        smp_mb__before_clear_bit();
        clear_bit( ... );

        /* The clear_bit() will be visible before all
         * subsequent memory operations.
         */
        smp_mb__after_clear_bit();

There are two special bitops with lock barrier semantics (acquire/release,
same as spinlocks).  These operate in the same way as their non-_lock/unlock
postfixed variants, except that they are to provide acquire/release semantics,
respectively.  This means they can be used for bit_spin_trylock and
bit_spin_unlock type operations without specifying any more barriers.

        int test_and_set_bit_lock(unsigned long nr, unsigned long *addr);
        void clear_bit_unlock(unsigned long nr, unsigned long *addr);
        void __clear_bit_unlock(unsigned long nr, unsigned long *addr);

The __clear_bit_unlock version is non-atomic, however it still implements
unlock barrier semantics.  This can be useful if the lock itself is protecting
the other bits in the word.
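For illustration, a bit-spinlock style critical section built directly on
these primitives might look roughly like the sketch below.  The obj
structure, its flags word, and the OBJ_LOCK_BIT number are hypothetical; real
code would normally use the bit_spin_lock()/bit_spin_unlock() helpers instead
of open-coding this:

        #define OBJ_LOCK_BIT    0

        /* Acquire: spin until we are the one who set the bit.
         * test_and_set_bit_lock() provides acquire semantics, so the
         * critical section cannot be reordered before the lock.
         */
        while (test_and_set_bit_lock(OBJ_LOCK_BIT, &obj->flags))
                cpu_relax();

        /* ... critical section, may touch the other bits in obj->flags ... */

        /* Release: clear_bit_unlock() provides release semantics, so the
         * critical section cannot leak past the unlock.
         */
        clear_bit_unlock(OBJ_LOCK_BIT, &obj->flags);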
Finally, there are non-atomic versions of the bitmask operations
provided.  They are used in contexts where some other higher-level SMP
locking scheme is being used to protect the bitmask, and thus less
expensive non-atomic operations may be used in the implementation.
They have names similar to the above bitmask operation interfaces,
except that two underscores are prefixed to the interface name.

        void __set_bit(unsigned long nr, volatile unsigned long *addr);
        void __clear_bit(unsigned long nr, volatile unsigned long *addr);
        void __change_bit(unsigned long nr, volatile unsigned long *addr);
        int __test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
        int __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
        int __test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

These non-atomic variants also do not require any special memory
barrier semantics.

The routines xchg() and cmpxchg() need the same exact memory barriers
as the atomic and bit operations returning values.

Spinlocks and rwlocks have memory barrier expectations as well.
The rule to follow is simple:

1) When acquiring a lock, the implementation must make it globally
   visible before any subsequent memory operation.

2) When releasing a lock, the implementation must make it such that
   all previous memory operations are globally visible before the
   lock release.

Which finally brings us to _atomic_dec_and_lock().  There is an
architecture-neutral version implemented in lib/dec_and_lock.c,
but most platforms will wish to optimize this in assembler.

        int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);

Atomically decrement the given counter, and if it will drop to zero
atomically acquire the given spinlock and perform the decrement
of the counter to zero.  If it does not drop to zero, do nothing
with the spinlock.

It is actually pretty simple to get the memory barrier correct.
Simply satisfy the spinlock grab requirements, which is to make
sure the spinlock operation is globally visible before any
subsequent memory operation.

We can demonstrate this operation more clearly if we define
an abstract atomic operation:

        long cas(long *mem, long old, long new);

"cas" stands for "compare and swap".  It atomically:

1) Compares "old" with the value currently at "mem".
2) If they are equal, "new" is written to "mem".
3) Regardless, the current value at "mem" is returned.

As an example usage, here is what an atomic counter update
might look like:

void example_atomic_inc(long *counter)
{
        long old, new, ret;

        while (1) {
                old = *counter;
                new = old + 1;

                ret = cas(counter, old, new);
                if (ret == old)
                        break;
        }
}

Let's use cas() in order to build a pseudo-C atomic_dec_and_lock():

int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
{
        long old, new, ret;
        int went_to_zero;

        went_to_zero = 0;
        while (1) {
                old = atomic_read(atomic);
                new = old - 1;
                if (new == 0) {
                        went_to_zero = 1;
                        spin_lock(lock);
                }
                ret = cas(atomic, old, new);
                if (ret == old)
                        break;
                if (went_to_zero) {
                        spin_unlock(lock);
                        went_to_zero = 0;
                }
        }

        return went_to_zero;
}

Now, as far as memory barriers go, as long as spin_lock()
strictly orders all subsequent memory operations (including
the cas()) with respect to itself, things will be fine.

Said another way, _atomic_dec_and_lock() must guarantee that
a counter dropping to zero is never made visible before the
spinlock being acquired.

Note that this also means that for the case where the counter
is not dropping to zero, there are no memory ordering
requirements.