Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

net: Add lockdep asserts to ____napi_schedule().

____napi_schedule() needs to be invoked with interrupts disabled because of
__raise_softirq_irqoff() (so the per-CPU softirq list is not corrupted).
____napi_schedule() also needs to be invoked from interrupt context so that
the raised softirq is processed when the interrupt context is left.

Add lockdep asserts for both conditions.
Since this is the second place that needs the irq/softirq check, provide a
generic lockdep_assert_softirq_will_run() which is used by both callers.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>

authored by Sebastian Andrzej Siewior, committed by David S. Miller
fbd9a2ce d96657dc

+11 -1

include/linux/lockdep.h  +7

@@ -329,5 +329,11 @@
 
 #define lockdep_assert_none_held_once()	\
 	lockdep_assert_once(!current->lockdep_depth)
+/*
+ * Ensure that softirq is handled within the callchain and not delayed and
+ * handled by chance.
+ */
+#define lockdep_assert_softirq_will_run()	\
+	lockdep_assert_once(hardirq_count() | softirq_count())
 
 #define lockdep_recursing(tsk)	((tsk)->lockdep_recursion)
 
@@ -420,6 +414,7 @@
 #define lockdep_assert_held_read(l)	do { (void)(l); } while (0)
 #define lockdep_assert_held_once(l)	do { (void)(l); } while (0)
 #define lockdep_assert_none_held_once()	do { } while (0)
+#define lockdep_assert_softirq_will_run()	do { } while (0)
 
 #define lockdep_recursing(tsk)		(0)
net/core/dev.c  +4 -1

@@ -4265,6 +4265,9 @@
 {
 	struct task_struct *thread;
 
+	lockdep_assert_softirq_will_run();
+	lockdep_assert_irqs_disabled();
+
 	if (test_bit(NAPI_STATE_THREADED, &napi->state)) {
 		/* Paired with smp_mb__before_atomic() in
 		 * napi_enable()/dev_set_threaded().
@@ -4875,7 +4872,7 @@
 {
 	int ret;
 
-	lockdep_assert_once(hardirq_count() | softirq_count());
+	lockdep_assert_softirq_will_run();
 
 	trace_netif_rx_entry(skb);
 	ret = netif_rx_internal(skb);