Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

rcu/nocb: Make local rcu_nocb_lock_irqsave() safe against concurrent deoffloading

rcu_nocb_lock_irqsave() can be preempted between the call to
rcu_segcblist_is_offloaded() and the actual locking. This matters now
that rcu_core() is preemptible on PREEMPT_RT and the (de-)offloading
process can interrupt the softirq or the rcuc kthread.

As a result, we may locklessly call into code that requires the nocb lock to
be held. In practice this is a problem when we accelerate callbacks on
rcu_core().

Simply disabling interrupts before (instead of after) checking the NOCB
offload state fixes the issue.

Reported-and-tested-by: Valentin Schneider <valentin.schneider@arm.com>
Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Neeraj Upadhyay <neeraju@codeaurora.org>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

Authored by Frederic Weisbecker, committed by Paul E. McKenney
118e0d4a 614ddad1

+10 -6
kernel/rcu/tree.h
@@ -447,12 +447,16 @@
 static void rcu_lockdep_assert_cblist_protected(struct rcu_data *rdp);
 #ifdef CONFIG_RCU_NOCB_CPU
 static void __init rcu_organize_nocb_kthreads(void);
-#define rcu_nocb_lock_irqsave(rdp, flags)				\
-do {									\
-	if (!rcu_segcblist_is_offloaded(&(rdp)->cblist))		\
-		local_irq_save(flags);					\
-	else								\
-		raw_spin_lock_irqsave(&(rdp)->nocb_lock, (flags));	\
+
+/*
+ * Disable IRQs before checking offloaded state so that local
+ * locking is safe against concurrent de-offloading.
+ */
+#define rcu_nocb_lock_irqsave(rdp, flags)			\
+do {								\
+	local_irq_save(flags);					\
+	if (rcu_segcblist_is_offloaded(&(rdp)->cblist))		\
+		raw_spin_lock(&(rdp)->nocb_lock);		\
 } while (0)
 #else /* #ifdef CONFIG_RCU_NOCB_CPU */
 #define rcu_nocb_lock_irqsave(rdp, flags) local_irq_save(flags)