Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

arch,ia64: Convert smp_mb__*()

ia64 atomic ops are full barriers; implement the new
smp_mb__{before,after}_atomic().

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/n/tip-hyp7yj68cmqz1nqbfpr541ca@git.kernel.org
Cc: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-ia64@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Authored by Peter Zijlstra, committed by Ingo Molnar
0cd64efb 94cf42f8

 arch/ia64/include/asm/atomic.h  | 7 +------
 arch/ia64/include/asm/barrier.h | 3 +++
 arch/ia64/include/asm/bitops.h  | 6 ++----
 3 files changed, 6 insertions(+), 10 deletions(-)

arch/ia64/include/asm/atomic.h
@@ -15,6 +15,7 @@
 #include <linux/types.h>
 
 #include <asm/intrinsics.h>
+#include <asm/barrier.h>
 
 
 #define ATOMIC_INIT(i)		{ (i) }
@@ -208,11 +209,5 @@
 #define atomic64_sub(i,v)	atomic64_sub_return((i), (v))
 #define atomic64_inc(v)		atomic64_add(1, (v))
 #define atomic64_dec(v)		atomic64_sub(1, (v))
-
-/* Atomic operations are already serializing */
-#define smp_mb__before_atomic_dec()	barrier()
-#define smp_mb__after_atomic_dec()	barrier()
-#define smp_mb__before_atomic_inc()	barrier()
-#define smp_mb__after_atomic_inc()	barrier()
 
 #endif /* _ASM_IA64_ATOMIC_H */
arch/ia64/include/asm/barrier.h
@@ -55,6 +55,9 @@
 
 #endif
 
+#define smp_mb__before_atomic()	barrier()
+#define smp_mb__after_atomic()	barrier()
+
 /*
  * IA64 GCC turns volatile stores into st.rel and volatile loads into ld.acq no
  * need for asm trickery!
arch/ia64/include/asm/bitops.h
@@ -16,6 +16,7 @@
 #include <linux/compiler.h>
 #include <linux/types.h>
 #include <asm/intrinsics.h>
+#include <asm/barrier.h>
 
 /**
  * set_bit - Atomically set a bit in memory
@@ -66,9 +67,6 @@
 	*((__u32 *) addr + (nr >> 5)) |= (1 << (nr & 31));
 }
 
-#define smp_mb__before_clear_bit()	barrier();
-#define smp_mb__after_clear_bit()	barrier();
-
 /**
  * clear_bit - Clears a bit in memory
  * @nr: Bit to clear
@@ -75,7 +73,7 @@
  *
  * clear_bit() is atomic and may not be reordered.  However, it does
  * not contain a memory barrier, so if it is used for locking purposes,
- * you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
+ * you should call smp_mb__before_atomic() and/or smp_mb__after_atomic()
  * in order to ensure changes are visible on other processors.
  */
 static __inline__ void