Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

bitops: introduce lock ops

Introduce test_and_set_bit_lock / clear_bit_unlock bitops with lock semantics.
Convert all architectures to use the generic implementation.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: David Howells <dhowells@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Bryan Wu <bryan.wu@analog.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: Greg Ungerer <gerg@uclinux.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Matthew Wilcox <willy@debian.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
Cc: Richard Curnow <rc@rc0.org.uk>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Cc: Miles Bader <uclinux-v850@lsi.nec.co.jp>
Cc: Andi Kleen <ak@muc.de>
Cc: Chris Zankel <chris@zankel.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Nick Piggin, committed by Linus Torvalds

+96 -2
+14
Documentation/atomic_ops.txt
@@ -418,6 +418,20 @@
  */
 	smp_mb__after_clear_bit();
 
+There are two special bitops with lock barrier semantics (acquire/release,
+same as spinlocks). These operate in the same way as their non-_lock/unlock
+postfixed variants, except that they are to provide acquire/release semantics,
+respectively. This means they can be used for bit_spin_trylock and
+bit_spin_unlock type operations without specifying any more barriers.
+
+	int test_and_set_bit_lock(unsigned long nr, unsigned long *addr);
+	void clear_bit_unlock(unsigned long nr, unsigned long *addr);
+	void __clear_bit_unlock(unsigned long nr, unsigned long *addr);
+
+The __clear_bit_unlock version is non-atomic, however it still implements
+unlock barrier semantics. This can be useful if the lock itself is protecting
+the other bits in the word.
+
 Finally, there are non-atomic versions of the bitmask operations
 provided. They are used in contexts where some other higher-level SMP
 locking scheme is being used to protect the bitmask, and thus less
+12 -2
Documentation/memory-barriers.txt
@@ -1479,7 +1479,8 @@
 
 Any atomic operation that modifies some state in memory and returns information
 about the state (old or new) implies an SMP-conditional general memory barrier
-(smp_mb()) on each side of the actual operation.  These include:
+(smp_mb()) on each side of the actual operation (with the exception of
+explicit lock operations, described later).  These include:
 
 	xchg();
 	cmpxchg();
@@ -1537,9 +1536,18 @@
 do need memory barriers as a lock primitive generally has to do things in a
 specific order.
 
-
 Basically, each usage case has to be carefully considered as to whether memory
 barriers are needed or not.
+
+The following operations are special locking primitives:
+
+	test_and_set_bit_lock();
+	clear_bit_unlock();
+	__clear_bit_unlock();
+
+These implement LOCK-class and UNLOCK-class operations. These should be used in
+preference to other operations when implementing locking primitives, because
+their implementations can be optimised on many architectures.
 
 [!] Note that special memory barrier primitives are available for these
 situations because on some CPUs the atomic instructions used imply full memory
+1
include/asm-alpha/bitops.h
@@ -367,6 +367,7 @@
 #else
 #include <asm-generic/bitops/hweight.h>
 #endif
+#include <asm-generic/bitops/lock.h>
 
 #endif /* __KERNEL__ */
 
+1
include/asm-arm/bitops.h
@@ -286,6 +286,7 @@
 
 #include <asm-generic/bitops/sched.h>
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 
 /*
  * Ext2 is defined to use little-endian byte ordering.
+1
include/asm-avr32/bitops.h
@@ -288,6 +288,7 @@
 #include <asm-generic/bitops/fls64.h>
 #include <asm-generic/bitops/sched.h>
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 
 #include <asm-generic/bitops/ext2-non-atomic.h>
 #include <asm-generic/bitops/ext2-atomic.h>
+1
include/asm-blackfin/bitops.h
@@ -199,6 +199,7 @@
 
 #include <asm-generic/bitops/find.h>
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 
 #include <asm-generic/bitops/ext2-atomic.h>
 #include <asm-generic/bitops/ext2-non-atomic.h>
+1
include/asm-cris/bitops.h
@@ -154,6 +154,7 @@
 #include <asm-generic/bitops/fls64.h>
 #include <asm-generic/bitops/hweight.h>
 #include <asm-generic/bitops/find.h>
+#include <asm-generic/bitops/lock.h>
 
 #include <asm-generic/bitops/ext2-non-atomic.h>
 
+1
include/asm-frv/bitops.h
@@ -302,6 +302,7 @@
 
 #include <asm-generic/bitops/sched.h>
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 
 #include <asm-generic/bitops/ext2-non-atomic.h>
 
+1
include/asm-generic/bitops.h
@@ -22,6 +22,7 @@
 #include <asm-generic/bitops/sched.h>
 #include <asm-generic/bitops/ffs.h>
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 
 #include <asm-generic/bitops/ext2-non-atomic.h>
 #include <asm-generic/bitops/ext2-atomic.h>
+45
include/asm-generic/bitops/lock.h
@@ -0,0 +1,45 @@
+#ifndef _ASM_GENERIC_BITOPS_LOCK_H_
+#define _ASM_GENERIC_BITOPS_LOCK_H_
+
+/**
+ * test_and_set_bit_lock - Set a bit and return its old value, for lock
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is atomic and provides acquire barrier semantics.
+ * It can be used to implement bit locks.
+ */
+#define test_and_set_bit_lock(nr, addr)	test_and_set_bit(nr, addr)
+
+/**
+ * clear_bit_unlock - Clear a bit in memory, for unlock
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This operation is atomic and provides release barrier semantics.
+ */
+#define clear_bit_unlock(nr, addr)	\
+do {					\
+	smp_mb__before_clear_bit();	\
+	clear_bit(nr, addr);		\
+} while (0)
+
+/**
+ * __clear_bit_unlock - Clear a bit in memory, for unlock
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This operation is like clear_bit_unlock, however it is not atomic.
+ * It does provide release barrier semantics so it can be used to unlock
+ * a bit lock, however it would only be used if no other CPU can modify
+ * any bits in the memory until the lock is released (a good example is
+ * if the bit lock itself protects access to the other bits in the word).
+ */
+#define __clear_bit_unlock(nr, addr)	\
+do {					\
+	smp_mb();			\
+	__clear_bit(nr, addr);		\
+} while (0)
+
+#endif /* _ASM_GENERIC_BITOPS_LOCK_H_ */
+
+1
include/asm-h8300/bitops.h
@@ -194,6 +194,7 @@
 #include <asm-generic/bitops/find.h>
 #include <asm-generic/bitops/sched.h>
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 #include <asm-generic/bitops/ext2-non-atomic.h>
 #include <asm-generic/bitops/ext2-atomic.h>
 #include <asm-generic/bitops/minix.h>
+2
include/asm-ia64/bitops.h
@@ -371,6 +371,8 @@
 #define hweight16(x)	(unsigned int) hweight64((x) & 0xfffful)
 #define hweight8(x)	(unsigned int) hweight64((x) & 0xfful)
 
+#include <asm-generic/bitops/lock.h>
+
 #endif /* __KERNEL__ */
 
 #include <asm-generic/bitops/find.h>
+1
include/asm-m32r/bitops.h
@@ -255,6 +255,7 @@
 #include <asm-generic/bitops/find.h>
 #include <asm-generic/bitops/ffs.h>
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 
 #endif /* __KERNEL__ */
 
+1
include/asm-m68k/bitops.h
@@ -314,6 +314,7 @@
 #include <asm-generic/bitops/fls64.h>
 #include <asm-generic/bitops/sched.h>
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 
 /* Bitmap functions for the minix filesystem */
 
+1
include/asm-m68knommu/bitops.h
@@ -160,6 +160,7 @@
 
 #include <asm-generic/bitops/find.h>
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 
 static __inline__ int ext2_set_bit(int nr, volatile void * addr)
 {
+1
include/asm-mips/bitops.h
@@ -556,6 +556,7 @@
 
 #include <asm-generic/bitops/sched.h>
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 #include <asm-generic/bitops/ext2-non-atomic.h>
 #include <asm-generic/bitops/ext2-atomic.h>
 #include <asm-generic/bitops/minix.h>
+1
include/asm-parisc/bitops.h
@@ -208,6 +208,7 @@
 
 #include <asm-generic/bitops/fls64.h>
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 #include <asm-generic/bitops/sched.h>
 
 #endif /* __KERNEL__ */
+1
include/asm-powerpc/bitops.h
@@ -266,6 +266,7 @@
 #include <asm-generic/bitops/fls64.h>
 
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 
 #define find_first_zero_bit(addr, size) find_next_zero_bit((addr), (size), 0)
 unsigned long find_next_zero_bit(const unsigned long *addr,
+1
include/asm-s390/bitops.h
@@ -746,6 +746,7 @@
 #include <asm-generic/bitops/fls64.h>
 
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 
 /*
  * ATTENTION: intel byte ordering convention for ext2 and minix !!
+1
include/asm-sh/bitops.h
@@ -137,6 +137,7 @@
 #include <asm-generic/bitops/find.h>
 #include <asm-generic/bitops/ffs.h>
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 #include <asm-generic/bitops/sched.h>
 #include <asm-generic/bitops/ext2-non-atomic.h>
 #include <asm-generic/bitops/ext2-atomic.h>
+1
include/asm-sh64/bitops.h
@@ -136,6 +136,7 @@
 #include <asm-generic/bitops/__ffs.h>
 #include <asm-generic/bitops/find.h>
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 #include <asm-generic/bitops/sched.h>
 #include <asm-generic/bitops/ffs.h>
 #include <asm-generic/bitops/ext2-non-atomic.h>
+1
include/asm-sparc/bitops.h
@@ -96,6 +96,7 @@
 #include <asm-generic/bitops/fls.h>
 #include <asm-generic/bitops/fls64.h>
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 #include <asm-generic/bitops/find.h>
 #include <asm-generic/bitops/ext2-non-atomic.h>
 #include <asm-generic/bitops/ext2-atomic.h>
+1
include/asm-sparc64/bitops.h
@@ -81,6 +81,7 @@
 #include <asm-generic/bitops/hweight.h>
 
 #endif
+#include <asm-generic/bitops/lock.h>
 #endif /* __KERNEL__ */
 
 #include <asm-generic/bitops/find.h>
+1
include/asm-v850/bitops.h
@@ -145,6 +145,7 @@
 #include <asm-generic/bitops/find.h>
 #include <asm-generic/bitops/sched.h>
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 
 #include <asm-generic/bitops/ext2-non-atomic.h>
 #define ext2_set_bit_atomic(l,n,a) test_and_set_bit(n,a)
+1
include/asm-x86/bitops_32.h
@@ -402,6 +402,7 @@
 }
 
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 
 #endif /* __KERNEL__ */
 
+1
include/asm-x86/bitops_64.h
@@ -408,6 +408,7 @@
 #define ARCH_HAS_FAST_MULTIPLIER 1
 
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 
 #endif /* __KERNEL__ */
 
+1
include/asm-xtensa/bitops.h
@@ -108,6 +108,7 @@
 #endif
 
 #include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
 #include <asm-generic/bitops/sched.h>
 #include <asm-generic/bitops/minix.h>
 