Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Spelling fixes for Documentation/atomic_ops.txt

Spelling and typo fixes for Documentation/atomic_ops.txt

Signed-off-by: Adrian Bunk <bunk@stusta.de>

Authored by Michael Hayes, committed by Adrian Bunk
a0ebb3ff 0ecbf4b5

+14 -14
Documentation/atomic_ops.txt
--- a/Documentation/atomic_ops.txt
+++ b/Documentation/atomic_ops.txt
@@ -157,12 +157,12 @@
 	smp_mb__before_atomic_dec();
 	atomic_dec(&obj->ref_count);
 
-It makes sure that all memory operations preceeding the atomic_dec()
+It makes sure that all memory operations preceding the atomic_dec()
 call are strongly ordered with respect to the atomic counter
-operation. In the above example, it guarentees that the assignment of
+operation. In the above example, it guarantees that the assignment of
 "1" to obj->dead will be globally visible to other cpus before the
 atomic counter decrement.
 
-Without the explicitl smp_mb__before_atomic_dec() call, the
+Without the explicit smp_mb__before_atomic_dec() call, the
 implementation could legally allow the atomic counter update visible
 to other cpus before the "obj->dead = 1;" assignment.
@@ -173,11 +173,11 @@
 (smp_mb__{before,after}_atomic_inc()).
 
 A missing memory barrier in the cases where they are required by the
-atomic_t implementation above can have disasterous results. Here is
-an example, which follows a pattern occuring frequently in the Linux
+atomic_t implementation above can have disastrous results. Here is
+an example, which follows a pattern occurring frequently in the Linux
 kernel. It is the use of atomic counters to implement reference
 counting, and it works such that once the counter falls to zero it can
-be guarenteed that no other entity can be accessing the object:
+be guaranteed that no other entity can be accessing the object:
 
 	static void obj_list_add(struct obj *obj)
 	{
@@ -291,9 +291,9 @@
 size. The endianness of the bits within each "unsigned long" are the
 native endianness of the cpu.
 
-	void set_bit(unsigned long nr, volatils unsigned long *addr);
-	void clear_bit(unsigned long nr, volatils unsigned long *addr);
-	void change_bit(unsigned long nr, volatils unsigned long *addr);
+	void set_bit(unsigned long nr, volatile unsigned long *addr);
+	void clear_bit(unsigned long nr, volatile unsigned long *addr);
+	void change_bit(unsigned long nr, volatile unsigned long *addr);
 
 These routines set, clear, and change, respectively, the bit number
 indicated by "nr" on the bit mask pointed to by "ADDR".
@@ -301,9 +301,9 @@
 They must execute atomically, yet there are no implicit memory barrier
 semantics required of these interfaces.
 
-	int test_and_set_bit(unsigned long nr, volatils unsigned long *addr);
-	int test_and_clear_bit(unsigned long nr, volatils unsigned long *addr);
-	int test_and_change_bit(unsigned long nr, volatils unsigned long *addr);
+	int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
+	int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
+	int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);
 
 Like the above, except that these routines return a boolean which
 indicates whether the changed bit was set _BEFORE_ the atomic bit
@@ -335,7 +335,7 @@
 	/* ... */;
 	obj->killed = 1;
 
-The implementation of test_and_set_bit() must guarentee that
+The implementation of test_and_set_bit() must guarantee that
 "obj->dead = 1;" is visible to cpus before the atomic memory operation
 done by test_and_set_bit() becomes visible. Likewise, the atomic
 memory operation done by test_and_set_bit() must become visible before
@@ -474,7 +474,7 @@
 strictly orders all subsequent memory operations (including
 the cas()) with respect to itself, things will be fine.
 
-Said another way, _atomic_dec_and_lock() must guarentee that
+Said another way, _atomic_dec_and_lock() must guarantee that
 a counter dropping to zero is never made visible before the
 spinlock being acquired.