Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

lib: atomic64: Initialize locks statically to fix early users

The atomic64 library uses a handful of static spin locks to implement
atomic 64-bit operations on architectures without support for atomic
64-bit instructions.
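The general technique can be sketched in user space: guard a plain 64-bit counter with a small test-and-set spinlock so the read-modify-write becomes atomic with respect to other users of the same lock. This is an illustrative sketch, not the kernel's code; the `my_` names and the GCC `__sync` builtins are stand-ins for the kernel's raw spinlock API.

```c
/* Sketch of emulating a 64-bit atomic op with a spinlock, the
 * approach lib/atomic64.c takes on machines without 64-bit
 * atomic instructions. Illustrative names, not the kernel API. */
#include <stdint.h>

typedef struct {
	int64_t counter;
} my_atomic64_t;

/* One global lock stands in for the kernel's hashed lock array. */
static volatile int my_lock;	/* 0 = unlocked */

static void my_spin_lock(volatile int *l)
{
	while (__sync_lock_test_and_set(l, 1))
		;	/* spin until the previous holder releases */
}

static void my_spin_unlock(volatile int *l)
{
	__sync_lock_release(l);
}

static int64_t my_atomic64_add_return(int64_t a, my_atomic64_t *v)
{
	int64_t ret;

	my_spin_lock(&my_lock);
	v->counter += a;
	ret = v->counter;
	my_spin_unlock(&my_lock);
	return ret;
}
```

Every atomic64 operation takes the lock, operates on the plain 64-bit value, and releases it, so the combination behaves atomically as long as all accessors go through the lock.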

Unfortunately, the spinlocks are initialized in a pure initcall and that
is too late for the vfs namespace code which wants to use atomic64
operations before the initcall is run.

This became a problem as of commit 8823c079ba71: "vfs: Add setns support
for the mount namespace".

This leads to BUG messages such as:

BUG: spinlock bad magic on CPU#0, swapper/0/0
lock: atomic64_lock+0x240/0x400, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
do_raw_spin_lock+0x158/0x198
_raw_spin_lock_irqsave+0x4c/0x58
atomic64_add_return+0x30/0x5c
alloc_mnt_ns.clone.14+0x44/0xac
create_mnt_ns+0xc/0x54
mnt_init+0x120/0x1d4
vfs_caches_init+0xe0/0x10c
start_kernel+0x29c/0x300

which appear early in boot when spinlock debugging is enabled.

Fix this by initializing the spinlocks statically at compile time.
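The fix leans on GCC's designated range initializer extension, `[first ... last] = value`, which fills every slot of an array at compile time so no runtime initcall is needed. A small stand-alone illustration, with the raw spinlock replaced by a plain struct member and an assumed "unlocked" sentinel value:

```c
/* GCC range-designated initializer: initialize all NR_SLOTS
 * entries statically, the same construct the fix applies to
 * atomic64_lock[]. The struct and sentinel are illustrative. */
#define NR_SLOTS 4

static struct {
	int lock;	/* stands in for raw_spinlock_t */
	char pad[60];	/* stands in for the cache-line padding */
} slots[NR_SLOTS] = {
	[0 ... (NR_SLOTS - 1)] = { .lock = 1 },	/* 1 = "unlocked" */
};
```

Because the initialization happens at compile time, the array is valid from the very first instruction of the kernel, which is what the early vfs namespace code requires.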

Reported-and-tested-by: Vaibhav Bedia <vaibhav.bedia@ti.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

authored by Stephen Boyd, committed by Linus Torvalds
fcc16882 787314c3
+5 -12
lib/atomic64.c
--- a/lib/atomic64.c
+++ b/lib/atomic64.c
@@ -31,7 +31,11 @@
 static union {
 	raw_spinlock_t lock;
 	char pad[L1_CACHE_BYTES];
-} atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp;
+} atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp = {
+	[0 ... (NR_LOCKS - 1)] = {
+		.lock = __RAW_SPIN_LOCK_UNLOCKED(atomic64_lock.lock),
+	},
+};
 
 static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
 {
@@ -177,14 +173,3 @@
 	return ret;
 }
 EXPORT_SYMBOL(atomic64_add_unless);
-
-static int init_atomic64_lock(void)
-{
-	int i;
-
-	for (i = 0; i < NR_LOCKS; ++i)
-		raw_spin_lock_init(&atomic64_lock[i].lock);
-	return 0;
-}
-
-pure_initcall(init_atomic64_lock);
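The unchanged `lock_addr()` helper (its body is elided in the hunk above) maps each atomic64_t's address to one of the NR_LOCKS locks, spreading unrelated variables over different locks to cut contention. A sketch of the idea, not the kernel's exact hash: discard the low address bits that are identical for aligned objects, then mask down to a power-of-two lock count.

```c
/* Sketch of hashing a variable's address to a lock slot. The
 * shift amount and hash are illustrative; the kernel uses its
 * own pointer hash. NR_LOCKS must be a power of two here. */
#include <stdint.h>

#define NR_LOCKS 16

static int locks[NR_LOCKS];

static int *lock_for(const void *v)
{
	uintptr_t addr = (uintptr_t)v;

	/* >> 6 drops bits shared by objects in one 64-byte line */
	return &locks[(addr >> 6) & (NR_LOCKS - 1)];
}
```

The same address always hashes to the same lock, so two CPUs operating on one atomic64_t serialize on one lock, while operations on distant variables usually proceed in parallel.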