Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

net: fix races in netdev_tx_sent_queue()/dev_watchdog()

Some workloads hit the infamous dev_watchdog() message:

"NETDEV WATCHDOG: eth0 (xxxx): transmit queue XX timed out"

It seems possible to hit this even for perfectly normal
BQL-enabled drivers:

1) Assume a TX queue was idle for more than dev->watchdog_timeo
(5 seconds unless changed by the driver)

2) Assume a big packet is sent, exceeding current BQL limit.

3) Driver ndo_start_xmit() puts the packet in TX ring,
and netdev_tx_sent_queue() is called.

4) __QUEUE_STATE_STACK_XOFF could be set from netdev_tx_sent_queue()
before txq->trans_start has been written.

5) txq->trans_start is written later, from netdev_start_xmit():

	if (rc == NETDEV_TX_OK)
		txq_trans_update(txq);
dev_watchdog(), running on another CPU, could read the old
txq->trans_start and then see __QUEUE_STATE_STACK_XOFF, because step 5)
had not happened yet.

To solve the issue, write txq->trans_start right before each XOFF bit
is set:

- __QUEUE_STATE_DRV_XOFF from netif_tx_stop_queue()
- __QUEUE_STATE_STACK_XOFF from netdev_tx_sent_queue()

From dev_watchdog(), we have to read txq->state before txq->trans_start.

Add memory barriers to enforce correct ordering.

In the future, we could avoid writing over txq->trans_start for normal
operations, and rename this field to txq->xoff_start_time.

Fixes: bec251bc8b6a ("net: no longer stop all TX queues in dev_watchdog()")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://patch.msgid.link/20241015194118.3951657-1-edumazet@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

Authored by Eric Dumazet, committed by Paolo Abeni
95ecba62 47dd5447

Diffstat: +19 -1

include/linux/netdevice.h | +12

@@ static __always_inline void netif_tx_stop_queue(struct netdev_queue *dev_queue)
 {
+	/* Paired with READ_ONCE() from dev_watchdog() */
+	WRITE_ONCE(dev_queue->trans_start, jiffies);
+
+	/* This barrier is paired with smp_mb() from dev_watchdog() */
+	smp_mb__before_atomic();
+
 	/* Must be an atomic op see netif_txq_try_stop() */
 	set_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state);
 }
@@ netdev_tx_sent_queue
 	if (likely(dql_avail(&dev_queue->dql) >= 0))
 		return;
+
+	/* Paired with READ_ONCE() from dev_watchdog() */
+	WRITE_ONCE(dev_queue->trans_start, jiffies);
+
+	/* This barrier is paired with smp_mb() from dev_watchdog() */
+	smp_mb__before_atomic();

 	set_bit(__QUEUE_STATE_STACK_XOFF, &dev_queue->state);
net/sched/sch_generic.c | +7 -1

@@ dev_watchdog
 		struct netdev_queue *txq;

 		txq = netdev_get_tx_queue(dev, i);
-		trans_start = READ_ONCE(txq->trans_start);
 		if (!netif_xmit_stopped(txq))
 			continue;
+
+		/* Paired with WRITE_ONCE() + smp_mb...() in
+		 * netdev_tx_sent_queue() and netif_tx_stop_queue().
+		 */
+		smp_mb();
+		trans_start = READ_ONCE(txq->trans_start);
+
 		if (time_after(jiffies, trans_start + dev->watchdog_timeo)) {
 			timedout_ms = jiffies_to_msecs(jiffies - trans_start);
 			atomic_long_inc(&txq->trans_timeout);