Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

staging: wfx: annotate nested gc_list vs tx queue locking

Lockdep complains about recursive locking because it cannot distinguish
between the two locked skb_queues. Annotate the nested lock and avoid a
redundant bh_disable/enable on the inner lock.

[...]
insmod/815 is trying to acquire lock:
cb7d6418 (&(&list->lock)->rlock){+...}, at: wfx_tx_queues_clear+0xfc/0x198 [wfx]

but task is already holding lock:
cb7d61f4 (&(&list->lock)->rlock){+...}, at: wfx_tx_queues_clear+0xa0/0x198 [wfx]

[...]
Possible unsafe locking scenario:

CPU0
----
lock(&(&list->lock)->rlock);
lock(&(&list->lock)->rlock);

Cc: stable@vger.kernel.org
Fixes: 9bca45f3d692 ("staging: wfx: allow to send 802.11 frames")
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Link: https://lore.kernel.org/r/5e30397af95854b4a7deea073b730c00229f42ba.1581416843.git.mirq-linux@rere.qmqm.pl
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Authored by Michał Mirosław, committed by Greg Kroah-Hartman
e2525a95 4033714d

+8 -8
drivers/staging/wfx/queue.c
--- a/drivers/staging/wfx/queue.c
+++ b/drivers/staging/wfx/queue.c
@@ -130,11 +130,11 @@
 	spin_lock_bh(&queue->queue.lock);
 	while ((item = __skb_dequeue(&queue->queue)) != NULL)
 		skb_queue_head(gc_list, item);
-	spin_lock_bh(&stats->pending.lock);
+	spin_lock_nested(&stats->pending.lock, 1);
 	for (i = 0; i < ARRAY_SIZE(stats->link_map_cache); ++i) {
 		stats->link_map_cache[i] -= queue->link_map_cache[i];
 		queue->link_map_cache[i] = 0;
 	}
-	spin_unlock_bh(&stats->pending.lock);
+	spin_unlock(&stats->pending.lock);
 	spin_unlock_bh(&queue->queue.lock);
 }
@@ -207,8 +207,8 @@
 
 	++queue->link_map_cache[tx_priv->link_id];
 
-	spin_lock_bh(&stats->pending.lock);
+	spin_lock_nested(&stats->pending.lock, 1);
 	++stats->link_map_cache[tx_priv->link_id];
-	spin_unlock_bh(&stats->pending.lock);
+	spin_unlock(&stats->pending.lock);
 	spin_unlock_bh(&queue->queue.lock);
 }
@@ -237,11 +237,11 @@
 		__skb_unlink(skb, &queue->queue);
 		--queue->link_map_cache[tx_priv->link_id];
 
-		spin_lock_bh(&stats->pending.lock);
+		spin_lock_nested(&stats->pending.lock, 1);
 		__skb_queue_tail(&stats->pending, skb);
 		if (!--stats->link_map_cache[tx_priv->link_id])
 			wakeup_stats = true;
-		spin_unlock_bh(&stats->pending.lock);
+		spin_unlock(&stats->pending.lock);
 	}
 	spin_unlock_bh(&queue->queue.lock);
 	if (wakeup_stats)
@@ -259,10 +259,10 @@
 	spin_lock_bh(&queue->queue.lock);
 	++queue->link_map_cache[tx_priv->link_id];
 
-	spin_lock_bh(&stats->pending.lock);
+	spin_lock_nested(&stats->pending.lock, 1);
 	++stats->link_map_cache[tx_priv->link_id];
 	__skb_unlink(skb, &stats->pending);
-	spin_unlock_bh(&stats->pending.lock);
+	spin_unlock(&stats->pending.lock);
 	__skb_queue_tail(&queue->queue, skb);
 	spin_unlock_bh(&queue->queue.lock);
 	return 0;