	smp_mb__before_atomic_dec();
	atomic_dec(&obj->ref_count);

It makes sure that all memory operations preceding the atomic_dec()
call are strongly ordered with respect to the atomic counter
operation.  In the above example, it guarantees that the assignment of
"1" to obj->dead will be globally visible to other cpus before the
atomic counter decrement.

Without the explicit smp_mb__before_atomic_dec() call, the
implementation could legally make the atomic counter update visible
to other cpus before the "obj->dead = 1;" assignment.

[...]

(smp_mb__{before,after}_atomic_inc()).

A missing memory barrier in the cases where they are required by the
atomic_t implementation above can have disastrous results.  Here is
an example, which follows a pattern occurring frequently in the Linux
kernel.  It is the use of atomic counters to implement reference
counting, and it works such that once the counter falls to zero it can
be guaranteed that no other entity can be accessing the object:

static void obj_list_add(struct obj *obj)
{

[...]

size.  The endianness of the bits within each "unsigned long" is the
native endianness of the cpu.

	void set_bit(unsigned long nr, volatile unsigned long *addr);
	void clear_bit(unsigned long nr, volatile unsigned long *addr);
	void change_bit(unsigned long nr, volatile unsigned long *addr);

These routines set, clear, and change, respectively, the bit number
indicated by "nr" in the bit mask pointed to by "addr".

They must execute atomically, yet there are no implicit memory barrier
semantics required of these interfaces.

	int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

Like the above, except that these routines return a boolean which
indicates whether the changed bit was set _BEFORE_ the atomic bit

[...]

		/* ... */;
	obj->killed = 1;

The implementation of test_and_set_bit() must guarantee that
"obj->dead = 1;" is visible to cpus before the atomic memory operation
done by test_and_set_bit() becomes visible.  Likewise, the atomic
memory operation done by test_and_set_bit() must become visible before

[...]

strictly orders all subsequent memory operations (including
the cas()) with respect to itself, things will be fine.

Said another way, _atomic_dec_and_lock() must guarantee that
a counter dropping to zero is never made visible before the
spinlock is acquired.