Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

net: wireless: intel: iwlwifi: fix GRO_NORMAL packet stalling

Commit 6570bc79c0df ("net: core: use listified Rx for GRO_NORMAL in
napi_gro_receive()") applied batched GRO_NORMAL packet processing to
all napi_gro_receive() users, including mac80211-based drivers.

However, this change has led to a regression in the iwlwifi driver
[1][2]: NAPI users are required to call napi_complete_done() or
napi_complete() at the end of every polling iteration, whilst iwlwifi
doesn't use NAPI scheduling at all and just calls napi_gro_flush().
In that particular case, packets which have not yet been flushed from
napi->rx_list stall there until at least the next Rx cycle.

Fix this by manually flushing the list in the iwlwifi driver right
before the napi_gro_flush() call, to mimic napi_complete() logic.

I prefer to open-code gro_normal_list() rather than export it, for 2
reasons:
* to discourage any new drivers from using it together with
napi_gro_flush(), as that is a *really* bad way to use NAPI and should
be avoided;
* to keep gro_normal_list() static and not lose any compiler
optimizations.

I also don't add a "Fixes:" tag, as the mentioned commit was only a
trigger that exposed an improper usage of NAPI in this particular
driver.

[1] https://lore.kernel.org/netdev/PSXP216MB04388962C411CD0B17A86F47804A0@PSXP216MB0438.KORP216.PROD.OUTLOOK.COM
[2] https://bugzilla.kernel.org/show_bug.cgi?id=205647

Signed-off-by: Alexander Lobakin <alobakin@dlink.ru>
Acked-by: Luca Coelho <luciano.coelho@intel.com>
Reported-by: Nicholas Johnson <nicholas.johnson-opensource@outlook.com.au>
Tested-by: Nicholas Johnson <nicholas.johnson-opensource@outlook.com.au>
Reviewed-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Authored by Alexander Lobakin, committed by David S. Miller
(commit b167191e, parent a02e3991)

drivers/net/wireless/intel/iwlwifi/pcie/rx.c (+11 -2)
--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
@@ -1421,6 +1421,7 @@ static void iwl_pcie_rx_handle(struct iwl_trans *trans, int queue)
 {
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+	struct napi_struct *napi;
 	struct iwl_rxq *rxq;
 	u32 r, i, count = 0;
 	bool emergency = false;
@@ -1527,8 +1526,16 @@ static void iwl_pcie_rx_handle(struct iwl_trans *trans, int queue)
 	if (unlikely(emergency && count))
 		iwl_pcie_rxq_alloc_rbs(trans, GFP_ATOMIC, rxq);
 
-	if (rxq->napi.poll)
-		napi_gro_flush(&rxq->napi, false);
+	napi = &rxq->napi;
+	if (napi->poll) {
+		if (napi->rx_count) {
+			netif_receive_skb_list(&napi->rx_list);
+			INIT_LIST_HEAD(&napi->rx_list);
+			napi->rx_count = 0;
+		}
+
+		napi_gro_flush(napi, false);
+	}
 
 	iwl_pcie_rxq_restock(trans, rxq);
 }