
arm64: spinlock: serialise spin_unlock_wait against concurrent lockers

Boqun Feng reported a rather nasty ordering issue with spin_unlock_wait
on architectures implementing spin_lock with LL/SC sequences and acquire
semantics:

| CPU 1                     CPU 2                     CPU 3
| ==================        ====================      ==============
|                                                     spin_unlock(&lock);
|                           spin_lock(&lock):
|                             r1 = *lock; // r1 == 0;
|                           o = READ_ONCE(object); // reordered here
| object = NULL;
| smp_mb();
| spin_unlock_wait(&lock);
|                           *lock = 1;
| smp_mb();
| o->dead = true;
|                           if (o) // true
|                             BUG_ON(o->dead); // true!!

The crux of the problem is that spin_unlock_wait(&lock) can return on
CPU 1 whilst CPU 2 is in the process of taking the lock. This can be
resolved by upgrading spin_unlock_wait to a LOCK operation, forcing it
to serialise against a concurrent locker and giving it acquire semantics
in the process (although it is not at all clear whether this is needed -
different callers seem to assume different things about the barrier
semantics and architectures are similarly disjoint in their
implementations of the macro).
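
For illustration only, here is a minimal sketch of the kind of caller that
gets bitten by this. The names (obj, obj_lock, obj_reader(), obj_teardown())
are hypothetical and do not come from the patch or from any in-tree user:

/* Hypothetical caller pattern; all identifiers are made up for illustration. */
#include <linux/spinlock.h>
#include <linux/bug.h>

struct obj {
        bool dead;
};

static DEFINE_SPINLOCK(obj_lock);
static struct obj *object;

/* CPU 2: with an LL/SC acquire-only spin_lock, the load of 'object' can be
 * satisfied before the store that takes the lock becomes visible elsewhere. */
static void obj_reader(void)
{
        struct obj *o;

        spin_lock(&obj_lock);
        o = READ_ONCE(object);
        if (o)
                BUG_ON(o->dead);
        spin_unlock(&obj_lock);
}

/* CPU 1: relies on spin_unlock_wait() ordering against obj_reader()'s
 * critical section before the object is marked dead. */
static void obj_teardown(struct obj *o)
{
        object = NULL;
        smp_mb();
        spin_unlock_wait(&obj_lock);
        smp_mb();
        o->dead = true;
}

With the old arm64 spin_unlock_wait (a plain is-locked poll), obj_teardown()
can fire the BUG_ON in obj_reader() exactly as in the diagram above.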

This patch implements spin_unlock_wait using an LL/SC sequence with
acquire semantics on arm64. For v8.1 systems with the LSE atomics, the
exclusive writeback is omitted, since the spin_lock operation is
indivisible and no intermediate state can be observed.
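
As a rough guide to what the new sequence does (this is a user-space sketch,
not the kernel code), the model below assumes the arm64 ticket lock packs the
owner ticket into the low 16 bits of a 32-bit word and the next ticket into
the high 16 bits, so the lock is free exactly when the two halves are equal;
that equality test is what the "eor %w1, %w0, %w0, ror #16" in the diff below
computes. The compare-and-swap of the unchanged value stands in for the stxr
write-back on the LL/SC path:

#include <stdatomic.h>
#include <stdint.h>

/* Sketch only: lock word = owner (low 16 bits) | next ticket (high 16 bits). */
static void spin_unlock_wait_sketch(_Atomic uint32_t *lock)
{
        uint32_t v;

        for (;;) {
                /* ldaxr: load-acquire of the whole lock word */
                v = atomic_load_explicit(lock, memory_order_acquire);

                /* zero result of eor/ror #16 <=> owner == next (unlocked) */
                if ((uint16_t)v != (uint16_t)(v >> 16))
                        continue;       /* still held; the real code waits in wfe */

                /* stxr of the unchanged value: fails if a concurrent locker
                 * wrote the word since our load, forcing another pass and so
                 * serialising against that locker. */
                if (atomic_compare_exchange_weak_explicit(lock, &v, v,
                                memory_order_relaxed, memory_order_relaxed))
                        return;
        }
}

On v8.1 systems the LSE spin_lock is a single atomic instruction, so no
half-taken intermediate state can be observed and the write-back (the CAS in
the sketch) is unnecessary, which is why the LSE side of the alternative in
the diff is just two nops.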

Signed-off-by: Will Deacon <will.deacon@arm.com>

+21 -2
arch/arm64/include/asm/spinlock.h
@@ -26,9 +26,28 @@
  * The memory barriers are implicit with the load-acquire and store-release
  * instructions.
  */
+static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
+{
+        unsigned int tmp;
+        arch_spinlock_t lockval;
 
-#define arch_spin_unlock_wait(lock) \
-        do { while (arch_spin_is_locked(lock)) cpu_relax(); } while (0)
+        asm volatile(
+"        sevl\n"
+"1:        wfe\n"
+"2:        ldaxr        %w0, %2\n"
+"        eor        %w1, %w0, %w0, ror #16\n"
+"        cbnz        %w1, 1b\n"
+        ARM64_LSE_ATOMIC_INSN(
+        /* LL/SC */
+"        stxr        %w1, %w0, %2\n"
+"        cbnz        %w1, 2b\n", /* Serialise against any concurrent lockers */
+        /* LSE atomics */
+"        nop\n"
+"        nop\n")
+        : "=&r" (lockval), "=&r" (tmp), "+Q" (*lock)
+        :
+        : "memory");
+}
 
 #define arch_spin_lock_flags(lock, flags)        arch_spin_lock(lock)
 