
net: gro_cells: Use nested-BH locking for gro_cell

The gro_cell data structure is a per-CPU variable and relies on disabled
BH for its locking. Since local_bh_disable() does not provide per-CPU
locking on PREEMPT_RT, this data structure requires explicit locking.

Add a local_lock_t to the data structure and use
local_lock_nested_bh() for locking. This change adds only lockdep
coverage and does not alter the functional behaviour for !PREEMPT_RT.

Reported-by: syzbot+8715dd783e9b0bef43b1@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/68c6c3b1.050a0220.2ff435.0382.GAE@google.com/
Fixes: 3253cb49cbad ("softirq: Allow to drop the softirq-BKL lock on PREEMPT_RT")
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://patch.msgid.link/20251009094338.j1jyKfjR@linutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

net/core/gro_cells.c
···
 struct gro_cell {
 	struct sk_buff_head	napi_skbs;
 	struct napi_struct	napi;
+	local_lock_t		bh_lock;
 };
 
 int gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
 {
 	struct net_device *dev = skb->dev;
+	bool have_bh_lock = false;
 	struct gro_cell *cell;
 	int res;
···
 		goto unlock;
 	}
 
+	local_lock_nested_bh(&gcells->cells->bh_lock);
+	have_bh_lock = true;
 	cell = this_cpu_ptr(gcells->cells);
 
 	if (skb_queue_len(&cell->napi_skbs) > READ_ONCE(net_hotdata.max_backlog)) {
···
 	__skb_queue_tail(&cell->napi_skbs, skb);
 	if (skb_queue_len(&cell->napi_skbs) == 1)
 		napi_schedule(&cell->napi);
+
+	if (have_bh_lock)
+		local_unlock_nested_bh(&gcells->cells->bh_lock);
 
 	res = NET_RX_SUCCESS;
···
 	struct sk_buff *skb;
 	int work_done = 0;
 
+	__local_lock_nested_bh(&cell->bh_lock);
 	while (work_done < budget) {
 		skb = __skb_dequeue(&cell->napi_skbs);
 		if (!skb)
···
 
 	if (work_done < budget)
 		napi_complete_done(napi, work_done);
+	__local_unlock_nested_bh(&cell->bh_lock);
 	return work_done;
 }
···
 		struct gro_cell *cell = per_cpu_ptr(gcells->cells, i);
 
 		__skb_queue_head_init(&cell->napi_skbs);
+		local_lock_init(&cell->bh_lock);
 
 		set_bit(NAPI_STATE_NO_BUSY_POLL, &cell->napi.state);