[IA64] Slim-down __clear_bit_unlock

- I removed the unnecessary barrier() from __clear_bit_unlock().
  ia64_st4_rel_nta() ensures that all prior modifications are globally
  visible before the bit is seen to be off.
- I remodeled __clear_bit() after __set_bit() and __change_bit().
- I corrected some comments stating that a memory barrier is provided,
  when in reality only the acquisition side of the memory barrier is.
- I corrected some comments, e.g. test_and_clear_bit() was talking
  about a "bit to set".

Signed-off-by: Zoltan Menyhart, <Zoltan.Menyhart@bull.net>
Acked-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>

authored by Zoltan Menyhart and committed by Tony Luck 5302ac50 97075c4b

+28 -22
include/asm-ia64/bitops.h
···
 }
 
 /**
- * __clear_bit_unlock - Non-atomically clear a bit with release
+ * __clear_bit_unlock - Non-atomically clears a bit in memory with release
+ * @nr: Bit to clear
+ * @addr: Address to start counting from
  *
- * This is like clear_bit_unlock, but the implementation uses a store
+ * Similarly to clear_bit_unlock, the implementation uses a store
  * with release semantics. See also __raw_spin_unlock().
  */
 static __inline__ void
-__clear_bit_unlock(int nr, volatile void *addr)
+__clear_bit_unlock(int nr, void *addr)
 {
-	__u32 mask, new;
-	volatile __u32 *m;
+	__u32 * const m = (__u32 *) addr + (nr >> 5);
+	__u32 const new = *m & ~(1 << (nr & 31));
 
-	m = (volatile __u32 *)addr + (nr >> 5);
-	mask = ~(1 << (nr & 31));
-	new = *m & mask;
-	barrier();
 	ia64_st4_rel_nta(m, new);
 }
 
 /**
  * __clear_bit - Clears a bit in memory (non-atomic version)
+ * @nr: the bit to clear
+ * @addr: the address to start counting from
+ *
+ * Unlike clear_bit(), this function is non-atomic and may be reordered.
+ * If it's called on the same region of memory simultaneously, the effect
+ * may be that only one operation succeeds.
  */
 static __inline__ void
 __clear_bit (int nr, volatile void *addr)
 {
-	volatile __u32 *p = (__u32 *) addr + (nr >> 5);
-	__u32 m = 1 << (nr & 31);
-	*p &= ~m;
+	*((__u32 *) addr + (nr >> 5)) &= ~(1 << (nr & 31));
 }
 
 /**
  * change_bit - Toggle a bit in memory
- * @nr: Bit to clear
+ * @nr: Bit to toggle
  * @addr: Address to start counting from
  *
  * change_bit() is atomic and may not be reordered.
···
 
 /**
  * __change_bit - Toggle a bit in memory
- * @nr: the bit to set
+ * @nr: the bit to toggle
  * @addr: the address to start counting from
  *
  * Unlike change_bit(), this function is non-atomic and may be reordered.
···
  * @addr: Address to count from
  *
  * This operation is atomic and cannot be reordered.
- * It also implies a memory barrier.
+ * It also implies the acquisition side of the memory barrier.
  */
 static __inline__ int
 test_and_set_bit (int nr, volatile void *addr)
···
 
 /**
  * test_and_clear_bit - Clear a bit and return its old value
- * @nr: Bit to set
+ * @nr: Bit to clear
  * @addr: Address to count from
  *
  * This operation is atomic and cannot be reordered.
- * It also implies a memory barrier.
+ * It also implies the acquisition side of the memory barrier.
  */
 static __inline__ int
 test_and_clear_bit (int nr, volatile void *addr)
···
 
 /**
  * __test_and_clear_bit - Clear a bit and return its old value
- * @nr: Bit to set
+ * @nr: Bit to clear
  * @addr: Address to count from
  *
  * This operation is non-atomic and can be reordered.
···
 
 /**
  * test_and_change_bit - Change a bit and return its old value
- * @nr: Bit to set
+ * @nr: Bit to change
  * @addr: Address to count from
  *
  * This operation is atomic and cannot be reordered.
- * It also implies a memory barrier.
+ * It also implies the acquisition side of the memory barrier.
  */
 static __inline__ int
 test_and_change_bit (int nr, volatile void *addr)
···
 	return (old & bit) != 0;
 }
 
-/*
- * WARNING: non atomic version.
+/**
+ * __test_and_change_bit - Change a bit and return its old value
+ * @nr: Bit to change
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
  */
 static __inline__ int
 __test_and_change_bit (int nr, void *addr)