
Merge tag 'net-6.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from IPsec, netfilter and Bluetooth.

Nothing really stands out, but as usual there's a slight concentration
of fixes for issues introduced in the last two weeks before the merge
window, and for driver bugs from 6.13, which tend to be discovered upon
wider distribution.

Current release - regressions:

- net: revert RTNL changes in unregister_netdevice_many_notify()

- Bluetooth: fix possible infinite recursion of btusb_reset

- eth: adjust locking in some old drivers which protect their state
with spinlocks, to avoid sleeping in atomic context; the core now
protects netdev state with a mutex

Previous releases - regressions:

- eth:
- mlx5e: make sure we pass node ID, not CPU ID to kvzalloc_node()
- bgmac: reduce max frame size to support just 1500 bytes; the
jumbo frame support would previously cause OOB writes, but now
fails outright

- mptcp: blackhole only if 1st SYN retrans w/o MPC is accepted, avoid
false detection of MPTCP blackholing

Previous releases - always broken:

- mptcp: handle fastopen disconnect correctly

- xfrm:
- make sure skb->sk is a full sock before accessing its fields
- fix taking a lock with preempt disabled for RT kernels

- usb: ipheth: improve safety of packet metadata parsing; prevent
potential OOB accesses

- eth: renesas: fix missing rtnl lock in suspend/resume path"

* tag 'net-6.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (88 commits)
MAINTAINERS: add Neal to TCP maintainers
net: revert RTNL changes in unregister_netdevice_many_notify()
net: hsr: fix fill_frame_info() regression vs VLAN packets
doc: mptcp: sysctl: blackhole_timeout is per-netns
mptcp: blackhole only if 1st SYN retrans w/o MPC is accepted
netfilter: nf_tables: reject mismatching sum of field_len with set key length
net: sh_eth: Fix missing rtnl lock in suspend/resume path
net: ravb: Fix missing rtnl lock in suspend/resume path
selftests/net: Add test for loading devbound XDP program in generic mode
net: xdp: Disallow attaching device-bound programs in generic mode
tcp: correct handling of extreme memory squeeze
bgmac: reduce max frame size to support just MTU 1500
vsock/test: Add test for connect() retries
vsock/test: Add test for UAF due to socket unbinding
vsock/test: Introduce vsock_connect_fd()
vsock/test: Introduce vsock_bind()
vsock: Allow retrying on connect() failure
vsock: Keep the binding until socket destruction
Bluetooth: L2CAP: accept zero as a special value for MTU auto-selection
Bluetooth: btnxpuart: Fix glitches seen in dual A2DP streaming
...

+877 -420
+9
Documentation/ABI/stable/sysfs-class-bluetooth
···
+What:		/sys/class/bluetooth/hci<index>/reset
+Date:		14-Jan-2025
+KernelVersion:	6.13
+Contact:	linux-bluetooth@vger.kernel.org
+Description:	This write-only attribute allows users to trigger the vendor reset
+		method on the Bluetooth device when arbitrary data is written.
+		The reset may or may not be done through the device transport
+		(e.g., UART/USB), and can also be done through an out-of-band
+		approach such as GPIO.
+4 -4
Documentation/devicetree/bindings/net/qcom,ethqos.yaml
···
   oneOf:
     - items:
         - enum:
-            - qcom,qcs8300-ethqos
-          - const: qcom,sa8775p-ethqos
+            - qcom,qcs615-ethqos
+          - const: qcom,qcs404-ethqos
     - items:
         - enum:
-            - qcom,qcs615-ethqos
-          - const: qcom,sm8150-ethqos
+            - qcom,qcs8300-ethqos
+          - const: qcom,sa8775p-ethqos
     - enum:
         - qcom,qcs404-ethqos
         - qcom,sa8775p-ethqos
+2 -2
Documentation/networking/can.rst
···
 
 The CAN_RAW socket can set multiple CAN identifier specific filters that
 lead to multiple filters in the af_can.c filter processing. These filters
-are indenpendent from each other which leads to logical OR'ed filters when
+are independent from each other which leads to logical OR'ed filters when
 applied (see :ref:`socketcan-rawfilter`).
 
-This socket option joines the given CAN filters in the way that only CAN
+This socket option joins the given CAN filters in the way that only CAN
 frames are passed to user space that matched *all* given CAN filters. The
 semantic for the applied filters is therefore changed to a logical AND.
 
+1 -1
Documentation/networking/mptcp-sysctl.rst
···
 	MPTCP is re-enabled and will reset to the initial value when the
 	blackhole issue goes away.
 
-	0 to disable the blackhole detection.
+	0 to disable the blackhole detection. This is a per-namespace sysctl.
 
 	Default: 3600
 
+1 -1
Documentation/networking/napi.rst
···
 than ``gro_flush_timeout`` as ``irq-suspend-timeout`` should suspend IRQs for
 the duration of one userland processing cycle.
 
-While it is not stricly necessary to use ``napi_defer_hard_irqs`` and
+While it is not strictly necessary to use ``napi_defer_hard_irqs`` and
 ``gro_flush_timeout`` to use IRQ suspension, their use is strongly
 recommended.
 
+3
MAINTAINERS
···
 W:	http://www.bluez.org/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth.git
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next.git
+F:	Documentation/ABI/stable/sysfs-class-bluetooth
 F:	include/net/bluetooth/
 F:	net/bluetooth/
···
 
 NCSI LIBRARY
 M:	Samuel Mendoza-Jonas <sam@mendozajonas.com>
+R:	Paul Fertser <fercerpav@gmail.com>
 S:	Maintained
 F:	net/ncsi/
···
 
 NETWORKING [TCP]
 M:	Eric Dumazet <edumazet@google.com>
+M:	Neal Cardwell <ncardwell@google.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	Documentation/networking/net_cachelines/tcp_sock.rst
+1 -2
drivers/bluetooth/btnxpuart.c
···
 
 	while ((skb = nxp_dequeue(nxpdev))) {
 		len = serdev_device_write_buf(serdev, skb->data, skb->len);
-		serdev_device_wait_until_sent(serdev, 0);
 		hdev->stat.byte_tx += len;
 
 		skb_pull(skb, len);
 		if (skb->len > 0) {
 			skb_queue_head(&nxpdev->txq, skb);
-			break;
+			continue;
 		}
 
 		switch (hci_skb_pkt_type(skb)) {
+7 -5
drivers/bluetooth/btusb.c
···
 	struct btusb_data *data;
 	int err;
 
-	if (hdev->reset) {
-		hdev->reset(hdev);
-		return;
-	}
-
 	data = hci_get_drvdata(hdev);
 	/* This is not an unbalanced PM reference since the device will reset */
 	err = usb_autopm_get_interface(data->intf);
···
 	struct btmtk_data *btmtk_data = hci_get_priv(data->hdev);
 	int err;
 
+	/*
+	 * The function usb_driver_claim_interface() is documented to need
+	 * locks held if it's not called from a probe routine. The code here
+	 * is called from the hci_power_on workqueue, so grab the lock.
+	 */
+	device_lock(&btmtk_data->isopkt_intf->dev);
 	err = usb_driver_claim_interface(&btusb_driver,
 					 btmtk_data->isopkt_intf, data);
+	device_unlock(&btmtk_data->isopkt_intf->dev);
 	if (err < 0) {
 		btmtk_data->isopkt_intf = NULL;
 		bt_dev_err(data->hdev, "Failed to claim iso interface");
+10 -9
drivers/net/bonding/bond_main.c
···
 					 NETIF_F_HIGHDMA | NETIF_F_LRO)
 
 #define BOND_ENC_FEATURES	(NETIF_F_HW_CSUM | NETIF_F_SG | \
-				 NETIF_F_RXCSUM | NETIF_F_GSO_SOFTWARE)
+				 NETIF_F_RXCSUM | NETIF_F_GSO_SOFTWARE | \
+				 NETIF_F_GSO_PARTIAL)
 
 #define BOND_MPLS_FEATURES	(NETIF_F_HW_CSUM | NETIF_F_SG | \
 				 NETIF_F_GSO_SOFTWARE)
 
+#define BOND_GSO_PARTIAL_FEATURES	(NETIF_F_GSO_ESP)
+
 
 static void bond_compute_features(struct bonding *bond)
 {
+	netdev_features_t gso_partial_features = BOND_GSO_PARTIAL_FEATURES;
 	unsigned int dst_release_flag = IFF_XMIT_DST_RELEASE |
 					IFF_XMIT_DST_RELEASE_PERM;
-	netdev_features_t gso_partial_features = NETIF_F_GSO_ESP;
 	netdev_features_t vlan_features = BOND_VLAN_FEATURES;
 	netdev_features_t enc_features = BOND_ENC_FEATURES;
 #ifdef CONFIG_XFRM_OFFLOAD
···
 						  BOND_XFRM_FEATURES);
 #endif /* CONFIG_XFRM_OFFLOAD */
 
-		if (slave->dev->hw_enc_features & NETIF_F_GSO_PARTIAL)
-			gso_partial_features &= slave->dev->gso_partial_features;
+		gso_partial_features = netdev_increment_features(gso_partial_features,
+								 slave->dev->gso_partial_features,
+								 BOND_GSO_PARTIAL_FEATURES);
 
 		mpls_features = netdev_increment_features(mpls_features,
 							  slave->dev->mpls_features,
···
 	}
 	bond_dev->hard_header_len = max_hard_header_len;
 
-	if (gso_partial_features & NETIF_F_GSO_ESP)
-		bond_dev->gso_partial_features |= NETIF_F_GSO_ESP;
-	else
-		bond_dev->gso_partial_features &= ~NETIF_F_GSO_ESP;
-
 done:
+	bond_dev->gso_partial_features = gso_partial_features;
 	bond_dev->vlan_features = vlan_features;
 	bond_dev->hw_enc_features = enc_features | NETIF_F_GSO_ENCAP_ALL |
 				    NETIF_F_HW_VLAN_CTAG_TX |
···
 	bond_dev->hw_features |= NETIF_F_GSO_ENCAP_ALL;
 	bond_dev->features |= bond_dev->hw_features;
 	bond_dev->features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX;
+	bond_dev->features |= NETIF_F_GSO_PARTIAL;
 #ifdef CONFIG_XFRM_OFFLOAD
 	bond_dev->hw_features |= BOND_XFRM_FEATURES;
 	/* Only enable XFRM features if this is an active-backup config */
+1 -2
drivers/net/ethernet/broadcom/bgmac.h
···
 #define BGMAC_RX_FRAME_OFFSET			30		/* There are 2 unused bytes between header and real data */
 #define BGMAC_RX_BUF_OFFSET			(NET_SKB_PAD + NET_IP_ALIGN - \
 						 BGMAC_RX_FRAME_OFFSET)
-/* Jumbo frame size with FCS */
-#define BGMAC_RX_MAX_FRAME_SIZE			9724
+#define BGMAC_RX_MAX_FRAME_SIZE			1536
 #define BGMAC_RX_BUF_SIZE			(BGMAC_RX_FRAME_OFFSET + BGMAC_RX_MAX_FRAME_SIZE)
 #define BGMAC_RX_ALLOC_SIZE			(SKB_DATA_ALIGN(BGMAC_RX_BUF_SIZE + BGMAC_RX_BUF_OFFSET) + \
 						 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
+31 -4
drivers/net/ethernet/broadcom/tg3.c
···
 
 	for (i = 0; i < tp->irq_cnt; i++) {
 		tnapi = &tp->napi[i];
-		napi_enable(&tnapi->napi);
+		napi_enable_locked(&tnapi->napi);
 		if (tnapi->tx_buffers) {
 			netif_queue_set_napi(tp->dev, txq_idx,
 					     NETDEV_QUEUE_TYPE_TX,
···
 	int i;
 
 	for (i = 0; i < tp->irq_cnt; i++) {
-		netif_napi_add(tp->dev, &tp->napi[i].napi,
-			       i ? tg3_poll_msix : tg3_poll);
-		netif_napi_set_irq(&tp->napi[i].napi, tp->napi[i].irq_vec);
+		netif_napi_add_locked(tp->dev, &tp->napi[i].napi,
+				      i ? tg3_poll_msix : tg3_poll);
+		netif_napi_set_irq_locked(&tp->napi[i].napi,
+					  tp->napi[i].irq_vec);
 	}
 }
···
 static int tg3_restart_hw(struct tg3 *tp, bool reset_phy)
 	__releases(tp->lock)
 	__acquires(tp->lock)
+	__releases(tp->dev->lock)
+	__acquires(tp->dev->lock)
 {
 	int err;
···
 		tg3_timer_stop(tp);
 		tp->irq_sync = 0;
 		tg3_napi_enable(tp);
+		netdev_unlock(tp->dev);
 		dev_close(tp->dev);
+		netdev_lock(tp->dev);
 		tg3_full_lock(tp, 0);
 	}
 	return err;
···
 
 	tg3_netif_stop(tp);
 
+	netdev_lock(tp->dev);
 	tg3_full_lock(tp, 1);
 
 	if (tg3_flag(tp, TX_RECOVERY_PENDING)) {
···
 		 * call cancel_work_sync() and wait forever.
 		 */
 		tg3_flag_clear(tp, RESET_TASK_PENDING);
+		netdev_unlock(tp->dev);
 		dev_close(tp->dev);
 		goto out;
 	}
 
 	tg3_netif_start(tp);
 	tg3_full_unlock(tp);
+	netdev_unlock(tp->dev);
 	tg3_phy_start(tp);
 	tg3_flag_clear(tp, RESET_TASK_PENDING);
 out:
···
 	if (err)
 		goto out_ints_fini;
 
+	netdev_lock(dev);
 	tg3_napi_init(tp);
 
 	tg3_napi_enable(tp);
+	netdev_unlock(dev);
 
 	for (i = 0; i < tp->irq_cnt; i++) {
 		err = tg3_request_irq(tp, i);
···
 		irq_sync = 1;
 	}
 
+	netdev_lock(dev);
 	tg3_full_lock(tp, irq_sync);
 
 	tp->rx_pending = ering->rx_pending;
···
 	}
 
 	tg3_full_unlock(tp);
+	netdev_unlock(dev);
 
 	if (irq_sync && !err)
 		tg3_phy_start(tp);
···
 		irq_sync = 1;
 	}
 
+	netdev_lock(dev);
 	tg3_full_lock(tp, irq_sync);
 
 	if (epause->autoneg)
···
 		}
 
 		tg3_full_unlock(tp);
+		netdev_unlock(dev);
 	}
 
 	tp->phy_flags |= TG3_PHYFLG_USER_CONFIGURED;
···
 			data[TG3_INTERRUPT_TEST] = 1;
 		}
 
+		netdev_lock(dev);
 		tg3_full_lock(tp, 0);
 
 		tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
···
 		}
 
 		tg3_full_unlock(tp);
+		netdev_unlock(dev);
 
 		if (irq_sync && !err2)
 			tg3_phy_start(tp);
···
 
 	tg3_set_mtu(dev, tp, new_mtu);
 
+	netdev_lock(dev);
 	tg3_full_lock(tp, 1);
 
 	tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
···
 	tg3_netif_start(tp);
 
 	tg3_full_unlock(tp);
+	netdev_unlock(dev);
 
 	if (!err)
 		tg3_phy_start(tp);
···
 
 	netif_device_attach(dev);
 
+	netdev_lock(dev);
 	tg3_full_lock(tp, 0);
 
 	tg3_ape_driver_state_change(tp, RESET_KIND_INIT);
···
 
 out:
 	tg3_full_unlock(tp);
+	netdev_unlock(dev);
 
 	if (!err)
 		tg3_phy_start(tp);
···
 done:
 	if (state == pci_channel_io_perm_failure) {
 		if (netdev) {
+			netdev_lock(netdev);
 			tg3_napi_enable(tp);
+			netdev_unlock(netdev);
 			dev_close(netdev);
 		}
 		err = PCI_ERS_RESULT_DISCONNECT;
···
 
 done:
 	if (rc != PCI_ERS_RESULT_RECOVERED && netdev && netif_running(netdev)) {
+		netdev_lock(netdev);
 		tg3_napi_enable(tp);
+		netdev_unlock(netdev);
 		dev_close(netdev);
 	}
 	rtnl_unlock();
···
 	if (!netdev || !netif_running(netdev))
 		goto done;
 
+	netdev_lock(netdev);
 	tg3_full_lock(tp, 0);
 	tg3_ape_driver_state_change(tp, RESET_KIND_INIT);
 	tg3_flag_set(tp, INIT_COMPLETE);
 	err = tg3_restart_hw(tp, true);
 	if (err) {
 		tg3_full_unlock(tp);
+		netdev_unlock(netdev);
 		netdev_err(netdev, "Cannot restart hardware after reset.\n");
 		goto done;
 	}
···
 	tg3_netif_start(tp);
 
 	tg3_full_unlock(tp);
+	netdev_unlock(netdev);
 
 	tg3_phy_start(tp);
 
+2 -1
drivers/net/ethernet/davicom/dm9000.c
···
 
 	unregister_netdev(ndev);
 	dm9000_release_board(pdev, dm);
-	free_netdev(ndev);		/* free device structure */
 	if (dm->power_supply)
 		regulator_disable(dm->power_supply);
+
+	free_netdev(ndev);		/* free device structure */
 
 	dev_dbg(&pdev->dev, "released and freed device\n");
 }
+30 -1
drivers/net/ethernet/freescale/fec_main.c
···
 	struct fec_enet_private *fep = netdev_priv(ndev);
 	int hdr_len, total_len, data_left;
 	struct bufdesc *bdp = txq->bd.cur;
+	struct bufdesc *tmp_bdp;
+	struct bufdesc_ex *ebdp;
 	struct tso_t tso;
 	unsigned int index = 0;
 	int ret;
···
 	return 0;
 
 err_release:
-	/* TODO: Release all used data descriptors for TSO */
+	/* Release all used data descriptors for TSO */
+	tmp_bdp = txq->bd.cur;
+
+	while (tmp_bdp != bdp) {
+		/* Unmap data buffers */
+		if (tmp_bdp->cbd_bufaddr &&
+		    !IS_TSO_HEADER(txq, fec32_to_cpu(tmp_bdp->cbd_bufaddr)))
+			dma_unmap_single(&fep->pdev->dev,
+					 fec32_to_cpu(tmp_bdp->cbd_bufaddr),
+					 fec16_to_cpu(tmp_bdp->cbd_datlen),
+					 DMA_TO_DEVICE);
+
+		/* Clear standard buffer descriptor fields */
+		tmp_bdp->cbd_sc = 0;
+		tmp_bdp->cbd_datlen = 0;
+		tmp_bdp->cbd_bufaddr = 0;
+
+		/* Handle extended descriptor if enabled */
+		if (fep->bufdesc_ex) {
+			ebdp = (struct bufdesc_ex *)tmp_bdp;
+			ebdp->cbd_esc = 0;
+		}
+
+		tmp_bdp = fec_enet_get_nextdesc(tmp_bdp, &txq->bd);
+	}
+
+	dev_kfree_skb_any(skb);
+
 	return ret;
 }
+15
drivers/net/ethernet/hisilicon/hns3/hnae3.c
···
  */
 static DEFINE_MUTEX(hnae3_common_lock);
 
+/* ensure the drivers being unloaded one by one */
+static DEFINE_MUTEX(hnae3_unload_lock);
+
+void hnae3_acquire_unload_lock(void)
+{
+	mutex_lock(&hnae3_unload_lock);
+}
+EXPORT_SYMBOL(hnae3_acquire_unload_lock);
+
+void hnae3_release_unload_lock(void)
+{
+	mutex_unlock(&hnae3_unload_lock);
+}
+EXPORT_SYMBOL(hnae3_release_unload_lock);
+
 static bool hnae3_client_match(enum hnae3_client_type client_type)
 {
 	if (client_type == HNAE3_CLIENT_KNIC ||
+2
drivers/net/ethernet/hisilicon/hns3/hnae3.h
···
 void hnae3_set_client_init_flag(struct hnae3_client *client,
 				struct hnae3_ae_dev *ae_dev,
 				unsigned int inited);
+void hnae3_acquire_unload_lock(void);
+void hnae3_release_unload_lock(void);
 #endif
+2
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
···
  */
 static void __exit hns3_exit_module(void)
 {
+	hnae3_acquire_unload_lock();
 	pci_unregister_driver(&hns3_driver);
 	hnae3_unregister_client(&client);
 	hns3_dbg_unregister_debugfs();
+	hnae3_release_unload_lock();
 }
 module_exit(hns3_exit_module);
 
+2
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
···
 
 static void __exit hclge_exit(void)
 {
+	hnae3_acquire_unload_lock();
 	hnae3_unregister_ae_algo_prepare(&ae_algo);
 	hnae3_unregister_ae_algo(&ae_algo);
 	destroy_workqueue(hclge_wq);
+	hnae3_release_unload_lock();
 }
 module_init(hclge_init);
 module_exit(hclge_exit);
+2
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
···
 
 static void __exit hclgevf_exit(void)
 {
+	hnae3_acquire_unload_lock();
 	hnae3_unregister_ae_algo(&ae_algovf);
 	destroy_workqueue(hclgevf_wq);
+	hnae3_release_unload_lock();
 }
 module_init(hclgevf_init);
 module_exit(hclgevf_exit);
+17 -2
drivers/net/ethernet/intel/iavf/iavf_main.c
···
 		f->state = IAVF_VLAN_ADD;
 		adapter->num_vlan_filters++;
 		iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_ADD_VLAN_FILTER);
+	} else if (f->state == IAVF_VLAN_REMOVE) {
+		/* IAVF_VLAN_REMOVE means that VLAN wasn't yet removed.
+		 * We can safely only change the state here.
+		 */
+		f->state = IAVF_VLAN_ACTIVE;
 	}
 
 clearout:
···
 
 	f = iavf_find_vlan(adapter, vlan);
 	if (f) {
-		f->state = IAVF_VLAN_REMOVE;
-		iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_DEL_VLAN_FILTER);
+		/* IAVF_ADD_VLAN means that VLAN wasn't even added yet.
+		 * Remove it from the list.
+		 */
+		if (f->state == IAVF_VLAN_ADD) {
+			list_del(&f->list);
+			kfree(f);
+			adapter->num_vlan_filters--;
+		} else {
+			f->state = IAVF_VLAN_REMOVE;
+			iavf_schedule_aq_request(adapter,
+						 IAVF_FLAG_AQ_DEL_VLAN_FILTER);
+		}
 	}
 
 	spin_unlock_bh(&adapter->mac_vlan_list_lock);
-1
drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
···
 #define ICE_AQC_RX_EQU_POST1 (0x12 << ICE_AQC_RX_EQU_SHIFT)
 #define ICE_AQC_RX_EQU_BFLF (0x13 << ICE_AQC_RX_EQU_SHIFT)
 #define ICE_AQC_RX_EQU_BFHF (0x14 << ICE_AQC_RX_EQU_SHIFT)
-#define ICE_AQC_RX_EQU_DRATE (0x15 << ICE_AQC_RX_EQU_SHIFT)
 #define ICE_AQC_RX_EQU_CTLE_GAINHF (0x20 << ICE_AQC_RX_EQU_SHIFT)
 #define ICE_AQC_RX_EQU_CTLE_GAINLF (0x21 << ICE_AQC_RX_EQU_SHIFT)
 #define ICE_AQC_RX_EQU_CTLE_GAINDC (0x22 << ICE_AQC_RX_EQU_SHIFT)
-1
drivers/net/ethernet/intel/ice/ice_ethtool.c
···
 	{ ICE_AQC_RX_EQU_POST1, rx, &ptr->rx_equ_post1 },
 	{ ICE_AQC_RX_EQU_BFLF, rx, &ptr->rx_equ_bflf },
 	{ ICE_AQC_RX_EQU_BFHF, rx, &ptr->rx_equ_bfhf },
-	{ ICE_AQC_RX_EQU_DRATE, rx, &ptr->rx_equ_drate },
 	{ ICE_AQC_RX_EQU_CTLE_GAINHF, rx, &ptr->rx_equ_ctle_gainhf },
 	{ ICE_AQC_RX_EQU_CTLE_GAINLF, rx, &ptr->rx_equ_ctle_gainlf },
 	{ ICE_AQC_RX_EQU_CTLE_GAINDC, rx, &ptr->rx_equ_ctle_gaindc },
-1
drivers/net/ethernet/intel/ice/ice_ethtool.h
···
 	int rx_equ_post1;
 	int rx_equ_bflf;
 	int rx_equ_bfhf;
-	int rx_equ_drate;
 	int rx_equ_ctle_gainhf;
 	int rx_equ_ctle_gainlf;
 	int rx_equ_ctle_gaindc;
+2 -4
drivers/net/ethernet/intel/ice/ice_parser.h
···
 /*** ICE_SID_RXPARSER_BOOST_TCAM and ICE_SID_LBL_RXPARSER_TMEM sections ***/
 #define ICE_BST_TCAM_TABLE_SIZE		256
 #define ICE_BST_TCAM_KEY_SIZE		20
-#define ICE_BST_KEY_TCAM_SIZE		19
 
 /* Boost TCAM item */
 struct ice_bst_tcam_item {
···
 #define ICE_PARSER_GPR_NUM	128
 #define ICE_PARSER_FLG_NUM	64
 #define ICE_PARSER_ERR_NUM	16
-#define ICE_BST_KEY_SIZE	10
 #define ICE_MARKER_ID_SIZE	9
 #define ICE_MARKER_MAX_SIZE	\
 	(ICE_MARKER_ID_SIZE * BITS_PER_BYTE - 1)
···
 	u8 pkt_buf[ICE_PARSER_MAX_PKT_LEN + ICE_PARSER_PKT_REV];
 	u16 pkt_len;
 	u16 po;
-	u8 bst_key[ICE_BST_KEY_SIZE];
+	u8 bst_key[ICE_BST_TCAM_KEY_SIZE];
 	struct ice_pg_cam_key pg_key;
+	u8 pg_prio;
 	struct ice_alu *alu0;
 	struct ice_alu *alu1;
 	struct ice_alu *alu2;
 	struct ice_pg_cam_action *action;
-	u8 pg_prio;
 	struct ice_gpr_pu pu;
 	u8 markers[ICE_MARKER_ID_SIZE];
 	bool protocols[ICE_PO_PAIR_SIZE];
+5 -7
drivers/net/ethernet/intel/ice/ice_parser_rt.c
···
 	else
 		key[idd] = imem->b_kb.prio;
 
-	idd = ICE_BST_KEY_TCAM_SIZE - 1;
+	idd = ICE_BST_TCAM_KEY_SIZE - 2;
 	for (i = idd; i >= 0; i--) {
 		int j;
 
 		j = ho + idd - i;
 		if (j < ICE_PARSER_MAX_PKT_LEN)
-			key[i] = rt->pkt_buf[ho + idd - i];
+			key[i] = rt->pkt_buf[j];
 		else
 			key[i] = 0;
 	}
 
-	ice_debug(rt->psr->hw, ICE_DBG_PARSER, "Generated Boost TCAM Key:\n");
-	ice_debug(rt->psr->hw, ICE_DBG_PARSER, "%02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
-		  key[0], key[1], key[2], key[3], key[4],
-		  key[5], key[6], key[7], key[8], key[9]);
-	ice_debug(rt->psr->hw, ICE_DBG_PARSER, "\n");
+	ice_debug_array_w_prefix(rt->psr->hw, ICE_DBG_PARSER,
+				 KBUILD_MODNAME ": Generated Boost TCAM Key",
+				 key, ICE_BST_TCAM_KEY_SIZE);
 }
 
 static u16 ice_bit_rev_u16(u16 v, int len)
+6
drivers/net/ethernet/intel/idpf/idpf_controlq.c
···
 		if (!(le16_to_cpu(desc->flags) & IDPF_CTLQ_FLAG_DD))
 			break;
 
+		/* Ensure no other fields are read until DD flag is checked */
+		dma_rmb();
+
 		/* strip off FW internal code */
 		desc_err = le16_to_cpu(desc->ret_val) & 0xff;
···
 
 		if (!(flags & IDPF_CTLQ_FLAG_DD))
 			break;
+
+		/* Ensure no other fields are read until DD flag is checked */
+		dma_rmb();
 
 		q_msg[i].vmvf_type = (flags &
 				      (IDPF_CTLQ_FLAG_FTYPE_VM |
+10 -5
drivers/net/ethernet/intel/idpf/idpf_main.c
···
 	pci_set_master(pdev);
 	pci_set_drvdata(pdev, adapter);
 
-	adapter->init_wq = alloc_workqueue("%s-%s-init", 0, 0,
+	adapter->init_wq = alloc_workqueue("%s-%s-init",
+					   WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
 					   dev_driver_string(dev),
 					   dev_name(dev));
 	if (!adapter->init_wq) {
···
 		goto err_free;
 	}
 
-	adapter->serv_wq = alloc_workqueue("%s-%s-service", 0, 0,
+	adapter->serv_wq = alloc_workqueue("%s-%s-service",
+					   WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
 					   dev_driver_string(dev),
 					   dev_name(dev));
 	if (!adapter->serv_wq) {
···
 		goto err_serv_wq_alloc;
 	}
 
-	adapter->mbx_wq = alloc_workqueue("%s-%s-mbx", 0, 0,
+	adapter->mbx_wq = alloc_workqueue("%s-%s-mbx",
+					  WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
 					  dev_driver_string(dev),
 					  dev_name(dev));
 	if (!adapter->mbx_wq) {
···
 		goto err_mbx_wq_alloc;
 	}
 
-	adapter->stats_wq = alloc_workqueue("%s-%s-stats", 0, 0,
+	adapter->stats_wq = alloc_workqueue("%s-%s-stats",
+					    WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
 					    dev_driver_string(dev),
 					    dev_name(dev));
 	if (!adapter->stats_wq) {
···
 		goto err_stats_wq_alloc;
 	}
 
-	adapter->vc_event_wq = alloc_workqueue("%s-%s-vc_event", 0, 0,
+	adapter->vc_event_wq = alloc_workqueue("%s-%s-vc_event",
+					       WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
 					       dev_driver_string(dev),
 					       dev_name(dev));
 	if (!adapter->vc_event_wq) {
+19 -6
drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
···
 		retval = -ENXIO;
 		goto only_unlock;
 	case IDPF_VC_XN_WAITING:
-		dev_notice_ratelimited(&adapter->pdev->dev, "Transaction timed-out (op %d, %dms)\n",
-				       params->vc_op, params->timeout_ms);
+		dev_notice_ratelimited(&adapter->pdev->dev,
+				       "Transaction timed-out (op:%d cookie:%04x vc_op:%d salt:%02x timeout:%dms)\n",
+				       params->vc_op, cookie, xn->vc_op,
+				       xn->salt, params->timeout_ms);
 		retval = -ETIME;
 		break;
 	case IDPF_VC_XN_COMPLETED_SUCCESS:
···
 		return -EINVAL;
 	}
 	xn = &adapter->vcxn_mngr->ring[xn_idx];
+	idpf_vc_xn_lock(xn);
 	salt = FIELD_GET(IDPF_VC_XN_SALT_M, msg_info);
 	if (xn->salt != salt) {
-		dev_err_ratelimited(&adapter->pdev->dev, "Transaction salt does not match (%02x != %02x)\n",
-				    xn->salt, salt);
+		dev_err_ratelimited(&adapter->pdev->dev, "Transaction salt does not match (exp:%d@%02x(%d) != got:%d@%02x)\n",
+				    xn->vc_op, xn->salt, xn->state,
+				    ctlq_msg->cookie.mbx.chnl_opcode, salt);
+		idpf_vc_xn_unlock(xn);
 		return -EINVAL;
 	}
 
-	idpf_vc_xn_lock(xn);
 	switch (xn->state) {
 	case IDPF_VC_XN_WAITING:
 		/* success */
···
  */
 void idpf_vc_core_deinit(struct idpf_adapter *adapter)
 {
+	bool remove_in_prog;
+
 	if (!test_bit(IDPF_VC_CORE_INIT, adapter->flags))
 		return;
 
+	/* Avoid transaction timeouts when called during reset */
+	remove_in_prog = test_bit(IDPF_REMOVE_IN_PROG, adapter->flags);
+	if (!remove_in_prog)
+		idpf_vc_xn_shutdown(adapter->vcxn_mngr);
+
 	idpf_deinit_task(adapter);
 	idpf_intr_rel(adapter);
-	idpf_vc_xn_shutdown(adapter->vcxn_mngr);
+
+	if (remove_in_prog)
+		idpf_vc_xn_shutdown(adapter->vcxn_mngr);
 
 	cancel_delayed_work_sync(&adapter->serv_task);
 	cancel_delayed_work_sync(&adapter->mbx_task);
+1
drivers/net/ethernet/marvell/mvneta.c
···
 	 */
 	if (pp->is_stopped) {
 		spin_unlock(&pp->lock);
+		netdev_unlock(port->napi.dev);
 		return 0;
 	}
 	netif_tx_stop_all_queues(pp->dev);
+2 -2
drivers/net/ethernet/mediatek/airoha_eth.c
···
 #define REG_GDM3_FWD_CFG		GDM3_BASE
 #define GDM3_PAD_EN_MASK		BIT(28)
 
-#define REG_GDM4_FWD_CFG		(GDM4_BASE + 0x100)
+#define REG_GDM4_FWD_CFG		GDM4_BASE
 #define GDM4_PAD_EN_MASK		BIT(28)
 #define GDM4_SPORT_OFFSET0_MASK		GENMASK(11, 8)
 
-#define REG_GDM4_SRC_PORT_SET		(GDM4_BASE + 0x33c)
+#define REG_GDM4_SRC_PORT_SET		(GDM4_BASE + 0x23c)
 #define GDM4_SPORT_OFF2_MASK		GENMASK(19, 16)
 #define GDM4_SPORT_OFF1_MASK		GENMASK(15, 12)
 #define GDM4_SPORT_OFF0_MASK		GENMASK(11, 8)
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
···
 	struct mlx5e_xdpsq *xdpsq;
 	int err;
 
-	xdpsq = kvzalloc_node(sizeof(*xdpsq), GFP_KERNEL, c->cpu);
+	xdpsq = kvzalloc_node(sizeof(*xdpsq), GFP_KERNEL, cpu_to_node(c->cpu));
 	if (!xdpsq)
 		return ERR_PTR(-ENOMEM);
+10 -22
drivers/net/ethernet/nvidia/forcedeth.c
···
 	}
 }
 
-static void nv_napi_enable(struct net_device *dev)
-{
-	struct fe_priv *np = get_nvpriv(dev);
-
-	napi_enable(&np->napi);
-}
-
-static void nv_napi_disable(struct net_device *dev)
-{
-	struct fe_priv *np = get_nvpriv(dev);
-
-	napi_disable(&np->napi);
-}
-
 #define MII_READ	(-1)
 /* mii_rw: read/write a register on the PHY.
  *
···
 	 * Changing the MTU is a rare event, it shouldn't matter.
 	 */
 	nv_disable_irq(dev);
-	nv_napi_disable(dev);
+	napi_disable(&np->napi);
 	netif_tx_lock_bh(dev);
 	netif_addr_lock(dev);
 	spin_lock(&np->lock);
···
 		spin_unlock(&np->lock);
 		netif_addr_unlock(dev);
 		netif_tx_unlock_bh(dev);
-		nv_napi_enable(dev);
+		napi_enable(&np->napi);
 		nv_enable_irq(dev);
 	}
 	return 0;
···
 
 	if (netif_running(dev)) {
 		nv_disable_irq(dev);
-		nv_napi_disable(dev);
+		napi_disable(&np->napi);
 		netif_tx_lock_bh(dev);
 		netif_addr_lock(dev);
 		spin_lock(&np->lock);
···
 		spin_unlock(&np->lock);
 		netif_addr_unlock(dev);
 		netif_tx_unlock_bh(dev);
-		nv_napi_enable(dev);
+		napi_enable(&np->napi);
 		nv_enable_irq(dev);
 	}
 	return 0;
···
 	if (test->flags & ETH_TEST_FL_OFFLINE) {
 		if (netif_running(dev)) {
 			netif_stop_queue(dev);
-			nv_napi_disable(dev);
+			napi_disable(&np->napi);
 			netif_tx_lock_bh(dev);
 			netif_addr_lock(dev);
 			spin_lock_irq(&np->lock);
···
 			/* restart rx engine */
 			nv_start_rxtx(dev);
 			netif_start_queue(dev);
-			nv_napi_enable(dev);
+			napi_enable(&np->napi);
 			nv_enable_hw_interrupts(dev, np->irqmask);
 		}
 	}
···
 	/* ask for interrupts */
 	nv_enable_hw_interrupts(dev, np->irqmask);
 
+	netdev_lock(dev);
 	spin_lock_irq(&np->lock);
 	writel(NVREG_MCASTADDRA_FORCE, base + NvRegMulticastAddrA);
 	writel(0, base + NvRegMulticastAddrB);
···
 	ret = nv_update_linkspeed(dev);
 	nv_start_rxtx(dev);
 	netif_start_queue(dev);
-	nv_napi_enable(dev);
+	napi_enable_locked(&np->napi);
 
 	if (ret) {
 		netif_carrier_on(dev);
···
 			  round_jiffies(jiffies + STATS_INTERVAL));
 
 	spin_unlock_irq(&np->lock);
+	netdev_unlock(dev);
 
 	/* If the loopback feature was set while the device was down, make sure
 	 * that it's set correctly now.
···
 	spin_lock_irq(&np->lock);
 	np->in_shutdown = 1;
 	spin_unlock_irq(&np->lock);
-	nv_napi_disable(dev);
+	napi_disable(&np->napi);
 	synchronize_irq(np->pci_dev->irq);
 
 	del_timer_sync(&np->oom_kick);
+3 -1
drivers/net/ethernet/realtek/8139too.c
··· 1684 1684 if (tmp8 & CmdTxEnb) 1685 1685 RTL_W8 (ChipCmd, CmdRxEnb); 1686 1686 1687 + netdev_lock(dev); 1687 1688 spin_lock_bh(&tp->rx_lock); 1688 1689 /* Disable interrupts by clearing the interrupt mask. */ 1689 1690 RTL_W16 (IntrMask, 0x0000); ··· 1695 1694 spin_unlock_irq(&tp->lock); 1696 1695 1697 1696 /* ...and finally, reset everything */ 1698 - napi_enable(&tp->napi); 1697 + napi_enable_locked(&tp->napi); 1699 1698 rtl8139_hw_start(dev); 1700 1699 netif_wake_queue(dev); 1701 1700 1702 1701 spin_unlock_bh(&tp->rx_lock); 1702 + netdev_unlock(dev); 1703 1703 } 1704 1704 1705 1705 static void rtl8139_tx_timeout(struct net_device *dev, unsigned int txqueue)
+14 -8
drivers/net/ethernet/renesas/ravb_main.c
··· 3217 3217 3218 3218 netif_device_detach(ndev); 3219 3219 3220 - if (priv->wol_enabled) 3221 - return ravb_wol_setup(ndev); 3220 + rtnl_lock(); 3221 + if (priv->wol_enabled) { 3222 + ret = ravb_wol_setup(ndev); 3223 + rtnl_unlock(); 3224 + return ret; 3225 + } 3222 3226 3223 3227 ret = ravb_close(ndev); 3228 + rtnl_unlock(); 3224 3229 if (ret) 3225 3230 return ret; 3226 3231 ··· 3250 3245 if (!netif_running(ndev)) 3251 3246 return 0; 3252 3247 3248 + rtnl_lock(); 3253 3249 /* If WoL is enabled restore the interface. */ 3254 - if (priv->wol_enabled) { 3250 + if (priv->wol_enabled) 3255 3251 ret = ravb_wol_restore(ndev); 3256 - if (ret) 3257 - return ret; 3258 - } else { 3252 + else 3259 3253 ret = pm_runtime_force_resume(dev); 3260 - if (ret) 3261 - return ret; 3254 + if (ret) { 3255 + rtnl_unlock(); 3256 + return ret; 3262 3257 } 3263 3258 3264 3259 /* Reopening the interface will restore the device to the working state. */ 3265 3260 ret = ravb_open(ndev); 3261 + rtnl_unlock(); 3266 3262 if (ret < 0) 3267 3263 goto out_rpm_put; 3268 3264
+4
drivers/net/ethernet/renesas/sh_eth.c
··· 3494 3494 3495 3495 netif_device_detach(ndev); 3496 3496 3497 + rtnl_lock(); 3497 3498 if (mdp->wol_enabled) 3498 3499 ret = sh_eth_wol_setup(ndev); 3499 3500 else 3500 3501 ret = sh_eth_close(ndev); 3502 + rtnl_unlock(); 3501 3503 3502 3504 return ret; 3503 3505 } ··· 3513 3511 if (!netif_running(ndev)) 3514 3512 return 0; 3515 3513 3514 + rtnl_lock(); 3516 3515 if (mdp->wol_enabled) 3517 3516 ret = sh_eth_wol_restore(ndev); 3518 3517 else 3519 3518 ret = sh_eth_open(ndev); 3519 + rtnl_unlock(); 3520 3520 3521 3521 if (ret < 0) 3522 3522 return ret;
+44 -13
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 2424 2424 u32 chan = 0; 2425 2425 u8 qmode = 0; 2426 2426 2427 - if (rxfifosz == 0) 2428 - rxfifosz = priv->dma_cap.rx_fifo_size; 2429 - if (txfifosz == 0) 2430 - txfifosz = priv->dma_cap.tx_fifo_size; 2431 - 2432 2427 /* Split up the shared Tx/Rx FIFO memory on DW QoS Eth and DW XGMAC */ 2433 2428 if (priv->plat->has_gmac4 || priv->plat->has_xgmac) { 2434 2429 rxfifosz /= rx_channels_count; ··· 2891 2896 u32 tx_channels_count = priv->plat->tx_queues_to_use; 2892 2897 int rxfifosz = priv->plat->rx_fifo_size; 2893 2898 int txfifosz = priv->plat->tx_fifo_size; 2894 - 2895 - if (rxfifosz == 0) 2896 - rxfifosz = priv->dma_cap.rx_fifo_size; 2897 - if (txfifosz == 0) 2898 - txfifosz = priv->dma_cap.tx_fifo_size; 2899 2899 2900 2900 /* Adjust for real per queue fifo size */ 2901 2901 rxfifosz /= rx_channels_count; ··· 5868 5878 const int mtu = new_mtu; 5869 5879 int ret; 5870 5880 5871 - if (txfifosz == 0) 5872 - txfifosz = priv->dma_cap.tx_fifo_size; 5873 - 5874 5881 txfifosz /= priv->plat->tx_queues_to_use; 5875 5882 5876 5883 if (stmmac_xdp_is_enabled(priv) && new_mtu > ETH_DATA_LEN) { ··· 7203 7216 7204 7217 if (priv->dma_cap.tsoen) 7205 7218 dev_info(priv->device, "TSO supported\n"); 7219 + 7220 + if (priv->dma_cap.number_rx_queues && 7221 + priv->plat->rx_queues_to_use > priv->dma_cap.number_rx_queues) { 7222 + dev_warn(priv->device, 7223 + "Number of Rx queues (%u) exceeds dma capability\n", 7224 + priv->plat->rx_queues_to_use); 7225 + priv->plat->rx_queues_to_use = priv->dma_cap.number_rx_queues; 7226 + } 7227 + if (priv->dma_cap.number_tx_queues && 7228 + priv->plat->tx_queues_to_use > priv->dma_cap.number_tx_queues) { 7229 + dev_warn(priv->device, 7230 + "Number of Tx queues (%u) exceeds dma capability\n", 7231 + priv->plat->tx_queues_to_use); 7232 + priv->plat->tx_queues_to_use = priv->dma_cap.number_tx_queues; 7233 + } 7234 + 7235 + if (!priv->plat->rx_fifo_size) { 7236 + if (priv->dma_cap.rx_fifo_size) { 7237 + priv->plat->rx_fifo_size = 
priv->dma_cap.rx_fifo_size; 7238 + } else { 7239 + dev_err(priv->device, "Can't specify Rx FIFO size\n"); 7240 + return -ENODEV; 7241 + } 7242 + } else if (priv->dma_cap.rx_fifo_size && 7243 + priv->plat->rx_fifo_size > priv->dma_cap.rx_fifo_size) { 7244 + dev_warn(priv->device, 7245 + "Rx FIFO size (%u) exceeds dma capability\n", 7246 + priv->plat->rx_fifo_size); 7247 + priv->plat->rx_fifo_size = priv->dma_cap.rx_fifo_size; 7248 + } 7249 + if (!priv->plat->tx_fifo_size) { 7250 + if (priv->dma_cap.tx_fifo_size) { 7251 + priv->plat->tx_fifo_size = priv->dma_cap.tx_fifo_size; 7252 + } else { 7253 + dev_err(priv->device, "Can't specify Tx FIFO size\n"); 7254 + return -ENODEV; 7255 + } 7256 + } else if (priv->dma_cap.tx_fifo_size && 7257 + priv->plat->tx_fifo_size > priv->dma_cap.tx_fifo_size) { 7258 + dev_warn(priv->device, 7259 + "Tx FIFO size (%u) exceeds dma capability\n", 7260 + priv->plat->tx_fifo_size); 7261 + priv->plat->tx_fifo_size = priv->dma_cap.tx_fifo_size; 7262 + } 7206 7263 7207 7264 priv->hw->vlan_fail_q_en = 7208 7265 (priv->plat->flags & STMMAC_FLAG_VLAN_FAIL_Q_EN);
+9 -1
drivers/net/ethernet/sun/niu.c
··· 6086 6086 int i; 6087 6087 6088 6088 for (i = 0; i < np->num_ldg; i++) 6089 - napi_enable(&np->ldg[i].napi); 6089 + napi_enable_locked(&np->ldg[i].napi); 6090 6090 } 6091 6091 6092 6092 static void niu_disable_napi(struct niu *np) ··· 6116 6116 if (err) 6117 6117 goto out_free_channels; 6118 6118 6119 + netdev_lock(dev); 6119 6120 niu_enable_napi(np); 6121 + netdev_unlock(dev); 6120 6122 6121 6123 spin_lock_irq(&np->lock); 6122 6124 ··· 6523 6521 6524 6522 niu_reset_buffers(np); 6525 6523 6524 + netdev_lock(np->dev); 6526 6525 spin_lock_irqsave(&np->lock, flags); 6527 6526 6528 6527 err = niu_init_hw(np); ··· 6534 6531 } 6535 6532 6536 6533 spin_unlock_irqrestore(&np->lock, flags); 6534 + netdev_unlock(np->dev); 6537 6535 } 6538 6536 6539 6537 static void niu_tx_timeout(struct net_device *dev, unsigned int txqueue) ··· 6765 6761 6766 6762 niu_free_channels(np); 6767 6763 6764 + netdev_lock(dev); 6768 6765 niu_enable_napi(np); 6766 + netdev_unlock(dev); 6769 6767 6770 6768 err = niu_alloc_channels(np); 6771 6769 if (err) ··· 9943 9937 9944 9938 spin_lock_irqsave(&np->lock, flags); 9945 9939 9940 + netdev_lock(dev); 9946 9941 err = niu_init_hw(np); 9947 9942 if (!err) { 9948 9943 np->timer.expires = jiffies + HZ; ··· 9952 9945 } 9953 9946 9954 9947 spin_unlock_irqrestore(&np->lock, flags); 9948 + netdev_unlock(dev); 9955 9949 9956 9950 return err; 9957 9951 }
+10 -1
drivers/net/ethernet/via/via-rhine.c
··· 1568 1568 if (rp->quirks & rqMgmt) 1569 1569 rhine_init_cam_filter(dev); 1570 1570 1571 - napi_enable(&rp->napi); 1571 + napi_enable_locked(&rp->napi); 1572 1572 1573 1573 iowrite16(RHINE_EVENT & 0xffff, ioaddr + IntrEnable); 1574 1574 ··· 1696 1696 rhine_power_init(dev); 1697 1697 rhine_chip_reset(dev); 1698 1698 rhine_task_enable(rp); 1699 + 1700 + netdev_lock(dev); 1699 1701 init_registers(dev); 1702 + netdev_unlock(dev); 1700 1703 1701 1704 netif_dbg(rp, ifup, dev, "%s() Done - status %04x MII status: %04x\n", 1702 1705 __func__, ioread16(ioaddr + ChipCmd), ··· 1730 1727 1731 1728 napi_disable(&rp->napi); 1732 1729 netif_tx_disable(dev); 1730 + 1731 + netdev_lock(dev); 1733 1732 spin_lock_bh(&rp->lock); 1734 1733 1735 1734 /* clear all descriptors */ ··· 1745 1740 init_registers(dev); 1746 1741 1747 1742 spin_unlock_bh(&rp->lock); 1743 + netdev_unlock(dev); 1748 1744 1749 1745 netif_trans_update(dev); /* prevent tx timeout */ 1750 1746 dev->stats.tx_errors++; ··· 2547 2541 alloc_tbufs(dev); 2548 2542 rhine_reset_rbufs(rp); 2549 2543 rhine_task_enable(rp); 2544 + 2545 + netdev_lock(dev); 2550 2546 spin_lock_bh(&rp->lock); 2551 2547 init_registers(dev); 2552 2548 spin_unlock_bh(&rp->lock); 2549 + netdev_unlock(dev); 2553 2550 2554 2551 netif_device_attach(dev); 2555 2552
+1 -1
drivers/net/netdevsim/ethtool.c
··· 74 74 memcpy(ring, &ns->ethtool.ring, sizeof(ns->ethtool.ring)); 75 75 kernel_ring->hds_thresh_max = NSIM_HDS_THRESHOLD_MAX; 76 76 77 - if (kernel_ring->tcp_data_split == ETHTOOL_TCP_DATA_SPLIT_UNKNOWN) 77 + if (dev->cfg->hds_config == ETHTOOL_TCP_DATA_SPLIT_UNKNOWN) 78 78 kernel_ring->tcp_data_split = ETHTOOL_TCP_DATA_SPLIT_ENABLED; 79 79 } 80 80
+1
drivers/net/netdevsim/netdevsim.h
··· 134 134 u32 sleep; 135 135 u32 __ports[2][NSIM_UDP_TUNNEL_N_PORTS]; 136 136 u32 (*ports)[NSIM_UDP_TUNNEL_N_PORTS]; 137 + struct dentry *ddir; 137 138 struct debugfs_u32_array dfs_ports[2]; 138 139 } udp_ports; 139 140
+14 -9
drivers/net/netdevsim/udp_tunnels.c
··· 112 112 struct net_device *dev = file->private_data; 113 113 struct netdevsim *ns = netdev_priv(dev); 114 114 115 - memset(ns->udp_ports.ports, 0, sizeof(ns->udp_ports.__ports)); 116 115 rtnl_lock(); 117 - udp_tunnel_nic_reset_ntf(dev); 116 + if (dev->reg_state == NETREG_REGISTERED) { 117 + memset(ns->udp_ports.ports, 0, sizeof(ns->udp_ports.__ports)); 118 + udp_tunnel_nic_reset_ntf(dev); 119 + } 118 120 rtnl_unlock(); 119 121 120 122 return count; ··· 146 144 else 147 145 ns->udp_ports.ports = nsim_dev->udp_ports.__ports; 148 146 149 - debugfs_create_u32("udp_ports_inject_error", 0600, 150 - ns->nsim_dev_port->ddir, 147 + ns->udp_ports.ddir = debugfs_create_dir("udp_ports", 148 + ns->nsim_dev_port->ddir); 149 + 150 + debugfs_create_u32("inject_error", 0600, ns->udp_ports.ddir, 151 151 &ns->udp_ports.inject_error); 152 152 153 153 ns->udp_ports.dfs_ports[0].array = ns->udp_ports.ports[0]; 154 154 ns->udp_ports.dfs_ports[0].n_elements = NSIM_UDP_TUNNEL_N_PORTS; 155 - debugfs_create_u32_array("udp_ports_table0", 0400, 156 - ns->nsim_dev_port->ddir, 155 + debugfs_create_u32_array("table0", 0400, ns->udp_ports.ddir, 157 156 &ns->udp_ports.dfs_ports[0]); 158 157 159 158 ns->udp_ports.dfs_ports[1].array = ns->udp_ports.ports[1]; 160 159 ns->udp_ports.dfs_ports[1].n_elements = NSIM_UDP_TUNNEL_N_PORTS; 161 - debugfs_create_u32_array("udp_ports_table1", 0400, 162 - ns->nsim_dev_port->ddir, 160 + debugfs_create_u32_array("table1", 0400, ns->udp_ports.ddir, 163 161 &ns->udp_ports.dfs_ports[1]); 164 162 165 - debugfs_create_file("udp_ports_reset", 0200, ns->nsim_dev_port->ddir, 163 + debugfs_create_file("reset", 0200, ns->udp_ports.ddir, 166 164 dev, &nsim_udp_tunnels_info_reset_fops); 167 165 168 166 /* Note: it's not normal to allocate the info struct like this! 
··· 198 196 199 197 void nsim_udp_tunnels_info_destroy(struct net_device *dev) 200 198 { 199 + struct netdevsim *ns = netdev_priv(dev); 200 + 201 + debugfs_remove_recursive(ns->udp_ports.ddir); 201 202 kfree(dev->udp_tunnel_nic_info); 202 203 dev->udp_tunnel_nic_info = NULL; 203 204 }
+26 -7
drivers/net/phy/marvell-88q2xxx.c
··· 95 95 96 96 #define MDIO_MMD_PCS_MV_TDR_OFF_CUTOFF 65246 97 97 98 + struct mv88q2xxx_priv { 99 + bool enable_temp; 100 + }; 101 + 98 102 struct mmd_val { 99 103 int devad; 100 104 u32 regnum; ··· 714 710 715 711 static int mv88q2xxx_hwmon_probe(struct phy_device *phydev) 716 712 { 713 + struct mv88q2xxx_priv *priv = phydev->priv; 717 714 struct device *dev = &phydev->mdio.dev; 718 715 struct device *hwmon; 719 716 char *hwmon_name; 720 - int ret; 721 717 722 - /* Enable temperature sense */ 723 - ret = phy_modify_mmd(phydev, MDIO_MMD_PCS, MDIO_MMD_PCS_MV_TEMP_SENSOR2, 724 - MDIO_MMD_PCS_MV_TEMP_SENSOR2_DIS_MASK, 0); 725 - if (ret < 0) 726 - return ret; 727 - 718 + priv->enable_temp = true; 728 719 hwmon_name = devm_hwmon_sanitize_name(dev, dev_name(dev)); 729 720 if (IS_ERR(hwmon_name)) 730 721 return PTR_ERR(hwmon_name); ··· 742 743 743 744 static int mv88q2xxx_probe(struct phy_device *phydev) 744 745 { 746 + struct mv88q2xxx_priv *priv; 747 + 748 + priv = devm_kzalloc(&phydev->mdio.dev, sizeof(*priv), GFP_KERNEL); 749 + if (!priv) 750 + return -ENOMEM; 751 + 752 + phydev->priv = priv; 753 + 745 754 return mv88q2xxx_hwmon_probe(phydev); 746 755 } 747 756 ··· 817 810 818 811 static int mv88q222x_config_init(struct phy_device *phydev) 819 812 { 813 + struct mv88q2xxx_priv *priv = phydev->priv; 814 + int ret; 815 + 816 + /* Enable temperature sense */ 817 + if (priv->enable_temp) { 818 + ret = phy_modify_mmd(phydev, MDIO_MMD_PCS, 819 + MDIO_MMD_PCS_MV_TEMP_SENSOR2, 820 + MDIO_MMD_PCS_MV_TEMP_SENSOR2_DIS_MASK, 0); 821 + if (ret < 0) 822 + return ret; 823 + } 824 + 820 825 if (phydev->c45_ids.device_ids[MDIO_MMD_PMAPMD] == PHY_ID_88Q2220_REVB0) 821 826 return mv88q222x_revb0_config_init(phydev); 822 827 else
+2
drivers/net/phy/nxp-c45-tja11xx.c
··· 1297 1297 if (ret) 1298 1298 return ret; 1299 1299 1300 + usleep_range(2000, 2050); 1301 + 1300 1302 return phy_read_mmd_poll_timeout(phydev, MDIO_MMD_VEND1, 1301 1303 VEND1_DEVICE_CONTROL, ret, 1302 1304 !(ret & DEVICE_CONTROL_RESET), 20000,
+45 -24
drivers/net/usb/ipheth.c
··· 61 61 #define IPHETH_USBINTF_PROTO 1 62 62 63 63 #define IPHETH_IP_ALIGN 2 /* padding at front of URB */ 64 - #define IPHETH_NCM_HEADER_SIZE (12 + 96) /* NCMH + NCM0 */ 64 + /* On iOS devices, NCM headers in RX have a fixed size regardless of DPE count: 65 + * - NTH16 (NCMH): 12 bytes, as per CDC NCM 1.0 spec 66 + * - NDP16 (NCM0): 96 bytes, of which 67 + * - NDP16 fixed header: 8 bytes 68 + * - maximum of 22 DPEs (21 datagrams + trailer), 4 bytes each 69 + */ 70 + #define IPHETH_NDP16_MAX_DPE 22 71 + #define IPHETH_NDP16_HEADER_SIZE (sizeof(struct usb_cdc_ncm_ndp16) + \ 72 + IPHETH_NDP16_MAX_DPE * \ 73 + sizeof(struct usb_cdc_ncm_dpe16)) 74 + #define IPHETH_NCM_HEADER_SIZE (sizeof(struct usb_cdc_ncm_nth16) + \ 75 + IPHETH_NDP16_HEADER_SIZE) 65 76 #define IPHETH_TX_BUF_SIZE ETH_FRAME_LEN 66 77 #define IPHETH_RX_BUF_SIZE_LEGACY (IPHETH_IP_ALIGN + ETH_FRAME_LEN) 67 78 #define IPHETH_RX_BUF_SIZE_NCM 65536 ··· 218 207 return ipheth_consume_skb(buf, len, dev); 219 208 } 220 209 210 + /* In "NCM mode", the iOS device encapsulates RX (phone->computer) traffic 211 + * in NCM Transfer Blocks (similarly to CDC NCM). However, unlike reverse 212 + * tethering (handled by the `cdc_ncm` driver), regular tethering is not 213 + * compliant with the CDC NCM spec, as the device is missing the necessary 214 + * descriptors, and TX (computer->phone) traffic is not encapsulated 215 + * at all. Thus `ipheth` implements a very limited subset of the spec with 216 + * the sole purpose of parsing RX URBs. 
217 + */ 221 218 static int ipheth_rcvbulk_callback_ncm(struct urb *urb) 222 219 { 223 220 struct usb_cdc_ncm_nth16 *ncmh; 224 221 struct usb_cdc_ncm_ndp16 *ncm0; 225 222 struct usb_cdc_ncm_dpe16 *dpe; 226 223 struct ipheth_device *dev; 224 + u16 dg_idx, dg_len; 227 225 int retval = -EINVAL; 228 226 char *buf; 229 - int len; 230 227 231 228 dev = urb->context; 232 229 ··· 245 226 246 227 ncmh = urb->transfer_buffer; 247 228 if (ncmh->dwSignature != cpu_to_le32(USB_CDC_NCM_NTH16_SIGN) || 248 - le16_to_cpu(ncmh->wNdpIndex) >= urb->actual_length) { 249 - dev->net->stats.rx_errors++; 250 - return retval; 251 - } 229 + /* On iOS, NDP16 directly follows NTH16 */ 230 + ncmh->wNdpIndex != cpu_to_le16(sizeof(struct usb_cdc_ncm_nth16))) 231 + goto rx_error; 252 232 253 - ncm0 = urb->transfer_buffer + le16_to_cpu(ncmh->wNdpIndex); 254 - if (ncm0->dwSignature != cpu_to_le32(USB_CDC_NCM_NDP16_NOCRC_SIGN) || 255 - le16_to_cpu(ncmh->wHeaderLength) + le16_to_cpu(ncm0->wLength) >= 256 - urb->actual_length) { 257 - dev->net->stats.rx_errors++; 258 - return retval; 259 - } 233 + ncm0 = urb->transfer_buffer + sizeof(struct usb_cdc_ncm_nth16); 234 + if (ncm0->dwSignature != cpu_to_le32(USB_CDC_NCM_NDP16_NOCRC_SIGN)) 235 + goto rx_error; 260 236 261 237 dpe = ncm0->dpe16; 262 - while (le16_to_cpu(dpe->wDatagramIndex) != 0 && 263 - le16_to_cpu(dpe->wDatagramLength) != 0) { 264 - if (le16_to_cpu(dpe->wDatagramIndex) >= urb->actual_length || 265 - le16_to_cpu(dpe->wDatagramIndex) + 266 - le16_to_cpu(dpe->wDatagramLength) > urb->actual_length) { 238 + for (int dpe_i = 0; dpe_i < IPHETH_NDP16_MAX_DPE; ++dpe_i, ++dpe) { 239 + dg_idx = le16_to_cpu(dpe->wDatagramIndex); 240 + dg_len = le16_to_cpu(dpe->wDatagramLength); 241 + 242 + /* Null DPE must be present after last datagram pointer entry 243 + * (3.3.1 USB CDC NCM spec v1.0) 244 + */ 245 + if (dg_idx == 0 && dg_len == 0) 246 + return 0; 247 + 248 + if (dg_idx < IPHETH_NCM_HEADER_SIZE || 249 + dg_idx >= urb->actual_length || 250 + dg_len > 
urb->actual_length - dg_idx) { 267 251 dev->net->stats.rx_length_errors++; 268 252 return retval; 269 253 } 270 254 271 - buf = urb->transfer_buffer + le16_to_cpu(dpe->wDatagramIndex); 272 - len = le16_to_cpu(dpe->wDatagramLength); 255 + buf = urb->transfer_buffer + dg_idx; 273 256 274 - retval = ipheth_consume_skb(buf, len, dev); 257 + retval = ipheth_consume_skb(buf, dg_len, dev); 275 258 if (retval != 0) 276 259 return retval; 277 - 278 - dpe++; 279 260 } 280 261 281 - return 0; 262 + rx_error: 263 + dev->net->stats.rx_errors++; 264 + return retval; 282 265 } 283 266 284 267 static void ipheth_rcvbulk_callback(struct urb *urb)
+22
drivers/net/usb/rtl8150.c
··· 71 71 #define MSR_SPEED (1<<3) 72 72 #define MSR_LINK (1<<2) 73 73 74 + /* USB endpoints */ 75 + enum rtl8150_usb_ep { 76 + RTL8150_USB_EP_CONTROL = 0, 77 + RTL8150_USB_EP_BULK_IN = 1, 78 + RTL8150_USB_EP_BULK_OUT = 2, 79 + RTL8150_USB_EP_INT_IN = 3, 80 + }; 81 + 74 82 /* Interrupt pipe data */ 75 83 #define INT_TSR 0x00 76 84 #define INT_RSR 0x01 ··· 875 867 struct usb_device *udev = interface_to_usbdev(intf); 876 868 rtl8150_t *dev; 877 869 struct net_device *netdev; 870 + static const u8 bulk_ep_addr[] = { 871 + RTL8150_USB_EP_BULK_IN | USB_DIR_IN, 872 + RTL8150_USB_EP_BULK_OUT | USB_DIR_OUT, 873 + 0}; 874 + static const u8 int_ep_addr[] = { 875 + RTL8150_USB_EP_INT_IN | USB_DIR_IN, 876 + 0}; 878 877 879 878 netdev = alloc_etherdev(sizeof(rtl8150_t)); 880 879 if (!netdev) ··· 893 878 if (!dev->intr_buff) { 894 879 free_netdev(netdev); 895 880 return -ENOMEM; 881 + } 882 + 883 + /* Verify that all required endpoints are present */ 884 + if (!usb_check_bulk_endpoints(intf, bulk_ep_addr) || 885 + !usb_check_int_endpoints(intf, int_ep_addr)) { 886 + dev_err(&intf->dev, "couldn't find required endpoints\n"); 887 + goto out; 896 888 } 897 889 898 890 tasklet_setup(&dev->tl, rx_fixup);
+5
drivers/net/vxlan/vxlan_vnifilter.c
··· 411 411 struct tunnel_msg *tmsg; 412 412 struct net_device *dev; 413 413 414 + if (cb->nlh->nlmsg_len < nlmsg_msg_size(sizeof(struct tunnel_msg))) { 415 + NL_SET_ERR_MSG(cb->extack, "Invalid msg length"); 416 + return -EINVAL; 417 + } 418 + 414 419 tmsg = nlmsg_data(cb->nlh); 415 420 416 421 if (tmsg->flags & ~TUNNEL_MSG_VALID_USER_FLAGS) {
+4 -5
drivers/net/wireless/mediatek/mt76/mt7603/mac.c
··· 1479 1479 tasklet_enable(&dev->mt76.pre_tbtt_tasklet); 1480 1480 mt7603_beacon_set_timer(dev, -1, beacon_int); 1481 1481 1482 - local_bh_disable(); 1483 1482 napi_enable(&dev->mt76.tx_napi); 1484 - napi_schedule(&dev->mt76.tx_napi); 1485 - 1486 1483 napi_enable(&dev->mt76.napi[0]); 1487 - napi_schedule(&dev->mt76.napi[0]); 1488 - 1489 1484 napi_enable(&dev->mt76.napi[1]); 1485 + 1486 + local_bh_disable(); 1487 + napi_schedule(&dev->mt76.tx_napi); 1488 + napi_schedule(&dev->mt76.napi[0]); 1490 1489 napi_schedule(&dev->mt76.napi[1]); 1491 1490 local_bh_enable(); 1492 1491
+6 -2
drivers/net/wireless/mediatek/mt76/mt7615/pci.c
··· 164 164 dev_err(mdev->dev, "PDMA engine must be reinitialized\n"); 165 165 166 166 mt76_worker_enable(&mdev->tx_worker); 167 - local_bh_disable(); 167 + 168 168 mt76_for_each_q_rx(mdev, i) { 169 169 napi_enable(&mdev->napi[i]); 170 - napi_schedule(&mdev->napi[i]); 171 170 } 172 171 napi_enable(&mdev->tx_napi); 172 + 173 + local_bh_disable(); 174 + mt76_for_each_q_rx(mdev, i) { 175 + napi_schedule(&mdev->napi[i]); 176 + } 173 177 napi_schedule(&mdev->tx_napi); 174 178 local_bh_enable(); 175 179
+5 -3
drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c
··· 262 262 263 263 mt76_worker_enable(&dev->mt76.tx_worker); 264 264 265 - local_bh_disable(); 266 265 napi_enable(&dev->mt76.tx_napi); 267 - napi_schedule(&dev->mt76.tx_napi); 268 - 269 266 mt76_for_each_q_rx(&dev->mt76, i) { 270 267 napi_enable(&dev->mt76.napi[i]); 268 + } 269 + 270 + local_bh_disable(); 271 + napi_schedule(&dev->mt76.tx_napi); 272 + mt76_for_each_q_rx(&dev->mt76, i) { 271 273 napi_schedule(&dev->mt76.napi[i]); 272 274 } 273 275 local_bh_enable();
+5 -3
drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
··· 282 282 283 283 mt76_worker_enable(&mdev->tx_worker); 284 284 285 - local_bh_disable(); 286 285 mt76_for_each_q_rx(mdev, i) { 287 286 mt76_queue_rx_reset(dev, i); 288 287 napi_enable(&mdev->napi[i]); 288 + } 289 + napi_enable(&mdev->tx_napi); 290 + 291 + local_bh_disable(); 292 + mt76_for_each_q_rx(mdev, i) { 289 293 napi_schedule(&mdev->napi[i]); 290 294 } 291 - 292 - napi_enable(&mdev->tx_napi); 293 295 napi_schedule(&mdev->tx_napi); 294 296 local_bh_enable(); 295 297
+5 -3
drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c
··· 504 504 mt76_worker_enable(&dev->mt76.tx_worker); 505 505 tasklet_enable(&dev->mt76.pre_tbtt_tasklet); 506 506 507 - local_bh_disable(); 508 507 napi_enable(&dev->mt76.tx_napi); 509 - napi_schedule(&dev->mt76.tx_napi); 510 - 511 508 mt76_for_each_q_rx(&dev->mt76, i) { 512 509 napi_enable(&dev->mt76.napi[i]); 510 + } 511 + 512 + local_bh_disable(); 513 + napi_schedule(&dev->mt76.tx_napi); 514 + mt76_for_each_q_rx(&dev->mt76, i) { 513 515 napi_schedule(&dev->mt76.napi[i]); 514 516 } 515 517 local_bh_enable();
+5 -2
drivers/net/wireless/mediatek/mt76/mt76x2/pci.c
··· 151 151 152 152 mt76_worker_enable(&mdev->tx_worker); 153 153 154 - local_bh_disable(); 155 154 mt76_for_each_q_rx(mdev, i) { 156 155 napi_enable(&mdev->napi[i]); 157 - napi_schedule(&mdev->napi[i]); 158 156 } 159 157 napi_enable(&mdev->tx_napi); 158 + 159 + local_bh_disable(); 160 + mt76_for_each_q_rx(mdev, i) { 161 + napi_schedule(&mdev->napi[i]); 162 + } 160 163 napi_schedule(&mdev->tx_napi); 161 164 local_bh_enable(); 162 165
+13 -4
drivers/net/wireless/mediatek/mt76/mt7915/mac.c
··· 1356 1356 1357 1357 mt7915_dma_reset(dev, true); 1358 1358 1359 - local_bh_disable(); 1360 1359 mt76_for_each_q_rx(mdev, i) { 1361 1360 if (mdev->q_rx[i].ndesc) { 1362 1361 napi_enable(&dev->mt76.napi[i]); 1362 + } 1363 + } 1364 + 1365 + local_bh_disable(); 1366 + mt76_for_each_q_rx(mdev, i) { 1367 + if (mdev->q_rx[i].ndesc) { 1363 1368 napi_schedule(&dev->mt76.napi[i]); 1364 1369 } 1365 1370 } ··· 1424 1419 if (phy2) 1425 1420 clear_bit(MT76_RESET, &phy2->mt76->state); 1426 1421 1427 - local_bh_disable(); 1428 1422 napi_enable(&dev->mt76.tx_napi); 1423 + 1424 + local_bh_disable(); 1429 1425 napi_schedule(&dev->mt76.tx_napi); 1430 1426 local_bh_enable(); 1431 1427 ··· 1576 1570 if (phy2) 1577 1571 clear_bit(MT76_RESET, &phy2->mt76->state); 1578 1572 1579 - local_bh_disable(); 1580 1573 mt76_for_each_q_rx(&dev->mt76, i) { 1581 1574 napi_enable(&dev->mt76.napi[i]); 1575 + } 1576 + 1577 + local_bh_disable(); 1578 + mt76_for_each_q_rx(&dev->mt76, i) { 1582 1579 napi_schedule(&dev->mt76.napi[i]); 1583 1580 } 1584 1581 local_bh_enable(); ··· 1590 1581 1591 1582 mt76_worker_enable(&dev->mt76.tx_worker); 1592 1583 1593 - local_bh_disable(); 1594 1584 napi_enable(&dev->mt76.tx_napi); 1585 + local_bh_disable(); 1595 1586 napi_schedule(&dev->mt76.tx_napi); 1596 1587 local_bh_enable(); 1597 1588
+5 -2
drivers/net/wireless/mediatek/mt76/mt7921/pci.c
··· 523 523 524 524 mt76_worker_enable(&mdev->tx_worker); 525 525 526 - local_bh_disable(); 527 526 mt76_for_each_q_rx(mdev, i) { 528 527 napi_enable(&mdev->napi[i]); 529 - napi_schedule(&mdev->napi[i]); 530 528 } 531 529 napi_enable(&mdev->tx_napi); 530 + 531 + local_bh_disable(); 532 + mt76_for_each_q_rx(mdev, i) { 533 + napi_schedule(&mdev->napi[i]); 534 + } 532 535 napi_schedule(&mdev->tx_napi); 533 536 local_bh_enable(); 534 537
+5 -2
drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c
··· 81 81 82 82 mt792x_wpdma_reset(dev, true); 83 83 84 - local_bh_disable(); 85 84 mt76_for_each_q_rx(&dev->mt76, i) { 86 85 napi_enable(&dev->mt76.napi[i]); 86 + } 87 + 88 + local_bh_disable(); 89 + mt76_for_each_q_rx(&dev->mt76, i) { 87 90 napi_schedule(&dev->mt76.napi[i]); 88 91 } 89 92 local_bh_enable(); ··· 118 115 err = __mt7921_start(&dev->phy); 119 116 out: 120 117 121 - local_bh_disable(); 122 118 napi_enable(&dev->mt76.tx_napi); 119 + local_bh_disable(); 123 120 napi_schedule(&dev->mt76.tx_napi); 124 121 local_bh_enable(); 125 122
+5 -2
drivers/net/wireless/mediatek/mt76/mt7925/pci.c
··· 556 556 557 557 mt76_worker_enable(&mdev->tx_worker); 558 558 559 - local_bh_disable(); 560 559 mt76_for_each_q_rx(mdev, i) { 561 560 napi_enable(&mdev->napi[i]); 562 - napi_schedule(&mdev->napi[i]); 563 561 } 564 562 napi_enable(&mdev->tx_napi); 563 + 564 + local_bh_disable(); 565 + mt76_for_each_q_rx(mdev, i) { 566 + napi_schedule(&mdev->napi[i]); 567 + } 565 568 napi_schedule(&mdev->tx_napi); 566 569 local_bh_enable(); 567 570
+5 -2
drivers/net/wireless/mediatek/mt76/mt7925/pci_mac.c
··· 101 101 102 102 mt792x_wpdma_reset(dev, true); 103 103 104 - local_bh_disable(); 105 104 mt76_for_each_q_rx(&dev->mt76, i) { 106 105 napi_enable(&dev->mt76.napi[i]); 107 - napi_schedule(&dev->mt76.napi[i]); 108 106 } 109 107 napi_enable(&dev->mt76.tx_napi); 108 + 109 + local_bh_disable(); 110 + mt76_for_each_q_rx(&dev->mt76, i) { 111 + napi_schedule(&dev->mt76.napi[i]); 112 + } 110 113 napi_schedule(&dev->mt76.tx_napi); 111 114 local_bh_enable(); 112 115
+6 -6
drivers/net/wireless/mediatek/mt76/mt7996/mac.c
··· 1695 1695 1696 1696 mt7996_dma_reset(dev, true); 1697 1697 1698 - local_bh_disable(); 1699 1698 mt76_for_each_q_rx(mdev, i) { 1700 1699 if (mtk_wed_device_active(&dev->mt76.mmio.wed) && 1701 1700 mt76_queue_is_wed_rro(&mdev->q_rx[i])) ··· 1702 1703 1703 1704 if (mdev->q_rx[i].ndesc) { 1704 1705 napi_enable(&dev->mt76.napi[i]); 1706 + local_bh_disable(); 1705 1707 napi_schedule(&dev->mt76.napi[i]); 1708 + local_bh_enable(); 1706 1709 } 1707 1710 } 1708 - local_bh_enable(); 1709 1711 clear_bit(MT76_MCU_RESET, &dev->mphy.state); 1710 1712 clear_bit(MT76_STATE_MCU_RUNNING, &dev->mphy.state); 1711 1713 ··· 1764 1764 if (phy3) 1765 1765 clear_bit(MT76_RESET, &phy3->mt76->state); 1766 1766 1767 - local_bh_disable(); 1768 1767 napi_enable(&dev->mt76.tx_napi); 1768 + local_bh_disable(); 1769 1769 napi_schedule(&dev->mt76.tx_napi); 1770 1770 local_bh_enable(); 1771 1771 ··· 1958 1958 if (phy3) 1959 1959 clear_bit(MT76_RESET, &phy3->mt76->state); 1960 1960 1961 - local_bh_disable(); 1962 1961 mt76_for_each_q_rx(&dev->mt76, i) { 1963 1962 if (mtk_wed_device_active(&dev->mt76.mmio.wed) && 1964 1963 mt76_queue_is_wed_rro(&dev->mt76.q_rx[i])) 1965 1964 continue; 1966 1965 1967 1966 napi_enable(&dev->mt76.napi[i]); 1967 + local_bh_disable(); 1968 1968 napi_schedule(&dev->mt76.napi[i]); 1969 + local_bh_enable(); 1969 1970 } 1970 - local_bh_enable(); 1971 1971 1972 1972 tasklet_schedule(&dev->mt76.irq_tasklet); 1973 1973 1974 1974 mt76_worker_enable(&dev->mt76.tx_worker); 1975 1975 1976 - local_bh_disable(); 1977 1976 napi_enable(&dev->mt76.tx_napi); 1977 + local_bh_disable(); 1978 1978 napi_schedule(&dev->mt76.tx_napi); 1979 1979 local_bh_enable(); 1980 1980
+4
drivers/ptp/ptp_chardev.c
··· 4 4 * 5 5 * Copyright (C) 2010 OMICRON electronics GmbH 6 6 */ 7 + #include <linux/compat.h> 7 8 #include <linux/module.h> 8 9 #include <linux/posix-clock.h> 9 10 #include <linux/poll.h> ··· 176 175 struct ptp_pin_desc pd; 177 176 struct timespec64 ts; 178 177 int enable, err = 0; 178 + 179 + if (in_compat_syscall() && cmd != PTP_ENABLE_PPS && cmd != PTP_ENABLE_PPS2) 180 + arg = (unsigned long)compat_ptr(arg); 179 181 180 182 tsevq = pccontext->private_clkdata; 181 183
+8
drivers/ptp/ptp_clock.c
··· 217 217 return info->gettime64(info, ts); 218 218 } 219 219 220 + static int ptp_enable(struct ptp_clock_info *ptp, struct ptp_clock_request *request, int on) 221 + { 222 + return -EOPNOTSUPP; 223 + } 224 + 220 225 static void ptp_aux_kworker(struct kthread_work *work) 221 226 { 222 227 struct ptp_clock *ptp = container_of(work, struct ptp_clock, ··· 298 293 if (ptp->info->getcrosststamp) 299 294 ptp->info->getcrosscycles = ptp->info->getcrosststamp; 300 295 } 296 + 297 + if (!ptp->info->enable) 298 + ptp->info->enable = ptp_enable; 301 299 302 300 if (ptp->info->do_aux_work) { 303 301 kthread_init_delayed_work(&ptp->aux_work, ptp_aux_kworker);
+2 -2
include/linux/netdevice.h
··· 1085 1085 * 1086 1086 * int (*ndo_do_ioctl)(struct net_device *dev, struct ifreq *ifr, int cmd); 1087 1087 * Old-style ioctl entry point. This is used internally by the 1088 - * appletalk and ieee802154 subsystems but is no longer called by 1089 - * the device ioctl handler. 1088 + * ieee802154 subsystem but is no longer called by the device 1089 + * ioctl handler. 1090 1090 * 1091 1091 * int (*ndo_siocbond)(struct net_device *dev, struct ifreq *ifr, int cmd); 1092 1092 * Used by the bonding driver for its device specific ioctls:
-1
include/net/page_pool/types.h
··· 237 237 struct { 238 238 struct hlist_node list; 239 239 u64 detach_time; 240 - u32 napi_id; 241 240 u32 id; 242 241 } user; 243 242 };
+13 -3
include/net/xfrm.h
··· 1268 1268 1269 1269 if (xo) { 1270 1270 x = xfrm_input_state(skb); 1271 - if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET) 1272 - return (xo->flags & CRYPTO_DONE) && 1273 - (xo->status & CRYPTO_SUCCESS); 1271 + if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET) { 1272 + bool check = (xo->flags & CRYPTO_DONE) && 1273 + (xo->status & CRYPTO_SUCCESS); 1274 + 1275 + /* The packets here are plain ones and secpath was 1276 + * needed to indicate that hardware already handled 1277 + * them and there is no need to do anything in addition. 1278 + * 1279 + * Consume secpath which was set by drivers. 1280 + */ 1281 + secpath_reset(skb); 1282 + return check; 1283 + } 1274 1284 } 1275 1285 1276 1286 return __xfrm_check_nopolicy(net, skb, dir) ||
+2 -2
net/bluetooth/l2cap_sock.c
··· 710 710 { 711 711 switch (chan->scid) { 712 712 case L2CAP_CID_ATT: 713 - if (mtu < L2CAP_LE_MIN_MTU) 713 + if (mtu && mtu < L2CAP_LE_MIN_MTU) 714 714 return false; 715 715 break; 716 716 717 717 default: 718 - if (mtu < L2CAP_DEFAULT_MIN_MTU) 718 + if (mtu && mtu < L2CAP_DEFAULT_MIN_MTU) 719 719 return false; 720 720 } 721 721
net/core/dev.c (+8 -31)

@@ -6708 +6708 @@
 static void __napi_hash_add_with_id(struct napi_struct *napi,
                                     unsigned int napi_id)
 {
-    napi->napi_id = napi_id;
+    WRITE_ONCE(napi->napi_id, napi_id);
     hlist_add_head_rcu(&napi->napi_hash_node,
                        &napi_hash[napi->napi_id % HASH_SIZE(napi_hash)]);
 }
@@ -9924 +9924 @@
         NL_SET_ERR_MSG(extack, "Program bound to different device");
         return -EINVAL;
     }
+    if (bpf_prog_is_dev_bound(new_prog->aux) && mode == XDP_MODE_SKB) {
+        NL_SET_ERR_MSG(extack, "Can't attach device-bound programs in generic mode");
+        return -EINVAL;
+    }
     if (new_prog->expected_attach_type == BPF_XDP_DEVMAP) {
         NL_SET_ERR_MSG(extack, "BPF_XDP_DEVMAP programs can not be attached to a device");
         return -EINVAL;
@@ -10264 +10260 @@
 #endif
 }

-static void rtnl_drop_if_cleanup_net(void)
-{
-    if (from_cleanup_net())
-        __rtnl_unlock();
-}
-
-static void rtnl_acquire_if_cleanup_net(void)
-{
-    if (from_cleanup_net())
-        rtnl_lock();
-}
-
 /* Delayed registration/unregisteration */
 LIST_HEAD(net_todo_list);
-static LIST_HEAD(net_todo_list_for_cleanup_net);
-
-/* TODO: net_todo_list/net_todo_list_for_cleanup_net should probably
- * be provided by callers, instead of being static, rtnl protected.
- */
-static struct list_head *todo_list(void)
-{
-    return from_cleanup_net() ? &net_todo_list_for_cleanup_net :
-                                &net_todo_list;
-}
-
 DECLARE_WAIT_QUEUE_HEAD(netdev_unregistering_wq);
 atomic_t dev_unreg_count = ATOMIC_INIT(0);

 static void net_set_todo(struct net_device *dev)
 {
-    list_add_tail(&dev->todo_list, todo_list());
+    list_add_tail(&dev->todo_list, &net_todo_list);
 }

 static netdev_features_t netdev_sync_upper_features(struct net_device *lower,
@@ -11121 +11140 @@
 #endif

     /* Snapshot list, allow later requests */
-    list_replace_init(todo_list(), &list);
+    list_replace_init(&net_todo_list, &list);

     __rtnl_unlock();

@@ -11766 +11785 @@
         WRITE_ONCE(dev->reg_state, NETREG_UNREGISTERING);
         netdev_unlock(dev);
     }
-
-    rtnl_drop_if_cleanup_net();
     flush_all_backlogs();
+
     synchronize_net();
-    rtnl_acquire_if_cleanup_net();

     list_for_each_entry(dev, head, unreg_list) {
         struct sk_buff *skb = NULL;
@@ -11828 +11849 @@
 #endif
     }

-    rtnl_drop_if_cleanup_net();
     synchronize_net();
-    rtnl_acquire_if_cleanup_net();

     list_for_each_entry(dev, head, unreg_list) {
         netdev_put(dev, &dev->dev_registered_tracker);
net/core/page_pool.c (+2)

@@ -1146 +1146 @@
     WARN_ON(!test_bit(NAPI_STATE_SCHED, &pool->p.napi->state));
     WARN_ON(READ_ONCE(pool->p.napi->list_owner) != -1);

+    mutex_lock(&page_pools_lock);
     WRITE_ONCE(pool->p.napi, NULL);
+    mutex_unlock(&page_pools_lock);
 }
 EXPORT_SYMBOL(page_pool_disable_direct_recycling);

net/core/page_pool_priv.h (+2)

@@ -7 +7 @@

 #include "netmem_priv.h"

+extern struct mutex page_pools_lock;
+
 s32 page_pool_inflight(const struct page_pool *pool, bool strict);

 int page_pool_list(struct page_pool *pool);
net/core/page_pool_user.c (+9 -6)

@@ -3 +3 @@
 #include <linux/mutex.h>
 #include <linux/netdevice.h>
 #include <linux/xarray.h>
+#include <net/busy_poll.h>
 #include <net/net_debug.h>
 #include <net/netdev_rx_queue.h>
 #include <net/page_pool/helpers.h>
@@ -15 +14 @@
 #include "netdev-genl-gen.h"

 static DEFINE_XARRAY_FLAGS(page_pools, XA_FLAGS_ALLOC1);
-/* Protects: page_pools, netdevice->page_pools, pool->slow.netdev, pool->user.
+/* Protects: page_pools, netdevice->page_pools, pool->p.napi, pool->slow.netdev,
+ * pool->user.
  * Ordering: inside rtnl_lock
  */
-static DEFINE_MUTEX(page_pools_lock);
+DEFINE_MUTEX(page_pools_lock);

 /* Page pools are only reachable from user space (via netlink) if they are
  * linked to a netdev at creation time. Following page pool "visibility"
@@ -218 +216 @@
 {
     struct net_devmem_dmabuf_binding *binding = pool->mp_priv;
     size_t inflight, refsz;
+    unsigned int napi_id;
     void *hdr;

     hdr = genlmsg_iput(rsp, info);
@@ -232 +229 @@
         nla_put_u32(rsp, NETDEV_A_PAGE_POOL_IFINDEX,
                     pool->slow.netdev->ifindex))
         goto err_cancel;
-    if (pool->user.napi_id &&
-        nla_put_uint(rsp, NETDEV_A_PAGE_POOL_NAPI_ID, pool->user.napi_id))
+
+    napi_id = pool->p.napi ? READ_ONCE(pool->p.napi->napi_id) : 0;
+    if (napi_id >= MIN_NAPI_ID &&
+        nla_put_uint(rsp, NETDEV_A_PAGE_POOL_NAPI_ID, napi_id))
         goto err_cancel;

     inflight = page_pool_inflight(pool, false);
@@ -324 +319 @@
     if (pool->slow.netdev) {
         hlist_add_head(&pool->user.list,
                        &pool->slow.netdev->page_pools);
-        pool->user.napi_id = pool->p.napi ? pool->p.napi->napi_id : 0;
-
         netdev_nl_page_pool_event(pool, NETDEV_CMD_PAGE_POOL_ADD_NTF);
     }

net/ethtool/ioctl.c (+1 -1)

@@ -998 +998 @@
         ethtool_get_flow_spec_ring(info.fs.ring_cookie))
         return -EINVAL;

-    if (ops->get_rxfh) {
+    if (cmd == ETHTOOL_SRXFH && ops->get_rxfh) {
         struct ethtool_rxfh_param rxfh = {};

         rc = ops->get_rxfh(dev, &rxfh);
net/hsr/hsr_forward.c (+5 -2)

@@ -700 +700 @@
         frame->is_vlan = true;

     if (frame->is_vlan) {
-        if (skb->mac_len < offsetofend(struct hsr_vlan_ethhdr, vlanhdr))
+        /* Note: skb->mac_len might be wrong here. */
+        if (!pskb_may_pull(skb,
+                           skb_mac_offset(skb) +
+                           offsetofend(struct hsr_vlan_ethhdr, vlanhdr)))
             return -EINVAL;
-        vlan_hdr = (struct hsr_vlan_ethhdr *)ethhdr;
+        vlan_hdr = (struct hsr_vlan_ethhdr *)skb_mac_header(skb);
         proto = vlan_hdr->vlanhdr.h_vlan_encapsulated_proto;
     }

net/ipv4/esp4.c (+1 -1)

@@ -279 +279 @@
         x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP)
         esp_output_tail_tcp(x, skb);
     else
-        xfrm_output_resume(skb->sk, skb, err);
+        xfrm_output_resume(skb_to_full_sk(skb), skb, err);
     }
 }

net/ipv4/ipmr_base.c (-3)

@@ -330 +330 @@
     list_for_each_entry(mfc, &mrt->mfc_unres_queue, list) {
         if (e < s_e)
             goto next_entry2;
-        if (filter->dev &&
-            !mr_mfc_uses_dev(mrt, mfc, filter->dev))
-            goto next_entry2;

         err = fill(mrt, skb, NETLINK_CB(cb->skb).portid,
                    cb->nlh->nlmsg_seq, mfc, RTM_NEWROUTE, flags);
net/ipv4/tcp_output.c (+6 -3)

@@ -265 +265 @@
     u32 cur_win, new_win;

     /* Make the window 0 if we failed to queue the data because we
-     * are out of memory. The window is temporary, so we don't store
-     * it on the socket.
+     * are out of memory.
      */
-    if (unlikely(inet_csk(sk)->icsk_ack.pending & ICSK_ACK_NOMEM))
+    if (unlikely(inet_csk(sk)->icsk_ack.pending & ICSK_ACK_NOMEM)) {
+        tp->pred_flags = 0;
+        tp->rcv_wnd = 0;
+        tp->rcv_wup = tp->rcv_nxt;
         return 0;
+    }

     cur_win = tcp_receive_window(tp);
     new_win = __tcp_select_window(sk);
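The arithmetic behind this tcp_output.c fix can be sketched outside the kernel: the advertised receive window is derived from `rcv_wup + rcv_wnd - rcv_nxt`, so if the forced zero window is not written back into `rcv_wnd`/`rcv_wup`, later computations still see the old state and can advertise space the peer was just told does not exist. The helper below is a simplified Python model of `tcp_receive_window()`, not the kernel function itself:

```python
def tcp_receive_window(rcv_wup, rcv_wnd, rcv_nxt):
    """Simplified model: window still open relative to the last
    advertisement (rcv_wup = left edge when rcv_wnd was announced)."""
    win = rcv_wup + rcv_wnd - rcv_nxt
    return max(win, 0)

# Stale state: the socket once advertised a 4000-byte window at wup=1000.
# Without recording the shrink, later calls still compute a large window.
assert tcp_receive_window(rcv_wup=1000, rcv_wnd=4000, rcv_nxt=2000) == 3000

# With the fix, rcv_wnd=0 and rcv_wup=rcv_nxt are stored when the zero
# window is sent, so subsequent computations agree with the advertisement.
assert tcp_receive_window(rcv_wup=2000, rcv_wnd=0, rcv_nxt=2000) == 0
```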
net/ipv6/esp6.c (+1 -1)

@@ -315 +315 @@
         x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP)
         esp_output_tail_tcp(x, skb);
     else
-        xfrm_output_resume(skb->sk, skb, err);
+        xfrm_output_resume(skb_to_full_sk(skb), skb, err);
     }
 }

net/ipv6/xfrm6_output.c (+2 -2)

@@ -82 +82 @@

     toobig = skb->len > mtu && !skb_is_gso(skb);

-    if (toobig && xfrm6_local_dontfrag(skb->sk)) {
+    if (toobig && xfrm6_local_dontfrag(sk)) {
         xfrm6_local_rxpmtu(skb, mtu);
         kfree_skb(skb);
         return -EMSGSIZE;
     } else if (toobig && xfrm6_noneed_fragment(skb)) {
         skb->ignore_df = 1;
         goto skip_frag;
-    } else if (!skb->ignore_df && toobig && skb->sk) {
+    } else if (!skb->ignore_df && toobig && sk) {
         xfrm_local_error(skb, mtu);
         kfree_skb(skb);
         return -EMSGSIZE;
net/mptcp/ctrl.c (+2 -2)

@@ -418 +418 @@
             MPTCP_INC_STATS(net, MPTCP_MIB_MPCAPABLEACTIVEDROP);
             subflow->mpc_drop = 1;
             mptcp_subflow_early_fallback(mptcp_sk(subflow->conn), subflow);
-        } else {
-            subflow->mpc_drop = 0;
         }
+    } else if (ssk->sk_state == TCP_SYN_SENT) {
+        subflow->mpc_drop = 0;
     }
 }

net/mptcp/options.c (+5 -8)

@@ -108 +108 @@
         mp_opt->suboptions |= OPTION_MPTCP_DSS;
         mp_opt->use_map = 1;
         mp_opt->mpc_map = 1;
-        mp_opt->use_ack = 0;
         mp_opt->data_len = get_unaligned_be16(ptr);
         ptr += 2;
     }
@@ -156 +157 @@
         pr_debug("DSS\n");
         ptr++;

-        /* we must clear 'mpc_map' be able to detect MP_CAPABLE
-         * map vs DSS map in mptcp_incoming_options(), and reconstruct
-         * map info accordingly
-         */
-        mp_opt->mpc_map = 0;
         flags = (*ptr++) & MPTCP_DSS_FLAG_MASK;
         mp_opt->data_fin = (flags & MPTCP_DSS_DATA_FIN) != 0;
         mp_opt->dsn64 = (flags & MPTCP_DSS_DSN64) != 0;
@@ -363 +369 @@
     const unsigned char *ptr;
     int length;

-    /* initialize option status */
-    mp_opt->suboptions = 0;
+    /* Ensure that casting the whole status to u32 is efficient and safe */
+    BUILD_BUG_ON(sizeof_field(struct mptcp_options_received, status) != sizeof(u32));
+    BUILD_BUG_ON(!IS_ALIGNED(offsetof(struct mptcp_options_received, status),
+                             sizeof(u32)));
+    *(u32 *)&mp_opt->status = 0;

     length = (th->doff * 4) - sizeof(struct tcphdr);
     ptr = (const unsigned char *)(th + 1);
net/mptcp/pm_netlink.c (+2 -1)

@@ -2020 +2020 @@
         return -EINVAL;
     }
     if ((addr.flags & MPTCP_PM_ADDR_FLAG_FULLMESH) &&
-        (entry->flags & MPTCP_PM_ADDR_FLAG_SIGNAL)) {
+        (entry->flags & (MPTCP_PM_ADDR_FLAG_SIGNAL |
+                         MPTCP_PM_ADDR_FLAG_IMPLICIT))) {
         spin_unlock_bh(&pernet->lock);
         GENL_SET_ERR_MSG(info, "invalid addr flags");
         return -EINVAL;
net/mptcp/protocol.c (+3 -1)

@@ -1767 +1767 @@
          * see mptcp_disconnect().
          * Attempt it again outside the problematic scope.
          */
-        if (!mptcp_disconnect(sk, 0))
+        if (!mptcp_disconnect(sk, 0)) {
+            sk->sk_disconnects++;
             sk->sk_socket->state = SS_UNCONNECTED;
+        }
     }
     inet_clear_bit(DEFER_CONNECT, sk);

net/mptcp/protocol.h (+16 -14)

@@ -149 +149 @@
     u32 subflow_seq;
     u16 data_len;
     __sum16 csum;
-    u16 suboptions;
+    struct_group(status,
+        u16 suboptions;
+        u16 use_map:1,
+            dsn64:1,
+            data_fin:1,
+            use_ack:1,
+            ack64:1,
+            mpc_map:1,
+            reset_reason:4,
+            reset_transient:1,
+            echo:1,
+            backup:1,
+            deny_join_id0:1,
+            __unused:2;
+    );
+    u8 join_id;
     u32 token;
     u32 nonce;
-    u16 use_map:1,
-        dsn64:1,
-        data_fin:1,
-        use_ack:1,
-        ack64:1,
-        mpc_map:1,
-        reset_reason:4,
-        reset_transient:1,
-        echo:1,
-        backup:1,
-        deny_join_id0:1,
-        __unused:2;
-    u8 join_id;
     u64 thmac;
     u8 hmac[MPTCPOPT_HMAC_LEN];
     struct mptcp_addr_info addr;
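The point of gathering these fields into `struct_group(status, ...)` is that the parser can reset all per-packet option state with a single 32-bit store, which the `BUILD_BUG_ON` checks in the options.c hunk pin down (the group must be exactly one aligned `u32`). A userspace analog of that layout, using `ctypes` rather than the kernel macro, shows the same idea:

```python
import ctypes

class Status(ctypes.Structure):
    """Rough analog of struct_group(status, ...): a u16 suboptions
    bitmap plus sixteen bits of parse flags, 4 bytes in total."""
    _fields_ = [
        ("suboptions", ctypes.c_uint16),
        ("use_map", ctypes.c_uint16, 1),
        ("dsn64", ctypes.c_uint16, 1),
        ("data_fin", ctypes.c_uint16, 1),
        ("use_ack", ctypes.c_uint16, 1),
        ("ack64", ctypes.c_uint16, 1),
        ("mpc_map", ctypes.c_uint16, 1),
        ("reset_reason", ctypes.c_uint16, 4),
        ("reset_transient", ctypes.c_uint16, 1),
        ("echo", ctypes.c_uint16, 1),
        ("backup", ctypes.c_uint16, 1),
        ("deny_join_id0", ctypes.c_uint16, 1),
        ("unused", ctypes.c_uint16, 2),
    ]

# The group is exactly one u32 wide, mirroring the BUILD_BUG_ON checks.
assert ctypes.sizeof(Status) == 4

# Clearing the whole group is then one 4-byte store instead of
# per-field assignments scattered across the parser.
s = Status(suboptions=0x3, mpc_map=1, use_ack=1)
ctypes.memset(ctypes.byref(s), 0, ctypes.sizeof(s))
assert (s.suboptions, s.mpc_map, s.use_ack) == (0, 0, 0)
```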
net/ncsi/ncsi-manage.c (+7 -6)

@@ -1385 +1385 @@
         nd->state = ncsi_dev_state_probe_package;
         break;
     case ncsi_dev_state_probe_package:
+        if (ndp->package_probe_id >= 8) {
+            /* Last package probed, finishing */
+            ndp->flags |= NCSI_DEV_PROBED;
+            break;
+        }
+
         ndp->pending_req_num = 1;

         nca.type = NCSI_PKT_CMD_SP;
@@ -1507 +1501 @@
         if (ret)
             goto error;

-        /* Probe next package */
+        /* Probe next package after receiving response */
         ndp->package_probe_id++;
-        if (ndp->package_probe_id >= 8) {
-            /* Probe finished */
-            ndp->flags |= NCSI_DEV_PROBED;
-            break;
-        }
         nd->state = ncsi_dev_state_probe_package;
         ndp->active_package = NULL;
         break;
net/ncsi/ncsi-rsp.c (+8 -10)

@@ -1089 +1089 @@
 static int ncsi_rsp_handler_gmcma(struct ncsi_request *nr)
 {
     struct ncsi_dev_priv *ndp = nr->ndp;
+    struct sockaddr *saddr = &ndp->pending_mac;
     struct net_device *ndev = ndp->ndev.dev;
     struct ncsi_rsp_gmcma_pkt *rsp;
-    struct sockaddr saddr;
-    int ret = -1;
     int i;

     rsp = (struct ncsi_rsp_gmcma_pkt *)skb_network_header(nr->rsp);
-    saddr.sa_family = ndev->type;
     ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE;

     netdev_info(ndev, "NCSI: Received %d provisioned MAC addresses\n",
@@ -1106 +1108 @@
                 rsp->addresses[i][4], rsp->addresses[i][5]);
     }

+    saddr->sa_family = ndev->type;
     for (i = 0; i < rsp->address_count; i++) {
-        memcpy(saddr.sa_data, &rsp->addresses[i], ETH_ALEN);
-        ret = ndev->netdev_ops->ndo_set_mac_address(ndev, &saddr);
-        if (ret < 0) {
+        if (!is_valid_ether_addr(rsp->addresses[i])) {
             netdev_warn(ndev, "NCSI: Unable to assign %pM to device\n",
-                        saddr.sa_data);
+                        rsp->addresses[i]);
             continue;
         }
-        netdev_warn(ndev, "NCSI: Set MAC address to %pM\n", saddr.sa_data);
+        memcpy(saddr->sa_data, rsp->addresses[i], ETH_ALEN);
+        netdev_warn(ndev, "NCSI: Will set MAC address to %pM\n", saddr->sa_data);
         break;
     }

-    ndp->gma_flag = ret == 0;
-    return ret;
+    ndp->gma_flag = 1;
+    return 0;
 }

 static struct ncsi_rsp_handler {
net/netfilter/nf_tables_api.c (+4 -4)

@@ -5078 +5078 @@
 static int nft_set_desc_concat(struct nft_set_desc *desc,
                                const struct nlattr *nla)
 {
-    u32 num_regs = 0, key_num_regs = 0;
+    u32 len = 0, num_regs;
     struct nlattr *attr;
     int rem, err, i;
@@ -5092 +5092 @@
     }

     for (i = 0; i < desc->field_count; i++)
-        num_regs += DIV_ROUND_UP(desc->field_len[i], sizeof(u32));
+        len += round_up(desc->field_len[i], sizeof(u32));

-    key_num_regs = DIV_ROUND_UP(desc->klen, sizeof(u32));
-    if (key_num_regs != num_regs)
+    if (len != desc->klen)
         return -EINVAL;

+    num_regs = DIV_ROUND_UP(desc->klen, sizeof(u32));
     if (num_regs > NFT_REG32_COUNT)
         return -E2BIG;

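Why comparing 32-bit register counts was too weak: every field length from 1 to 4 bytes maps to the same single register, so a key length that rounds to the right number of registers but does not equal the sum of the padded field lengths slipped through. A small Python sketch of the two checks (simplified models, not the kernel code) makes the gap concrete:

```python
def old_check(field_lens, klen):
    # Pre-fix logic: compare 32-bit register counts only.
    num_regs = sum((f + 3) // 4 for f in field_lens)
    key_num_regs = (klen + 3) // 4
    return key_num_regs == num_regs

def new_check(field_lens, klen):
    # Post-fix logic: each field is padded to a 4-byte boundary and the
    # padded sum must equal the set key length exactly.
    padded = sum((f + 3) // 4 * 4 for f in field_lens)
    return padded == klen

# A 5-byte field occupies 8 bytes (two registers) in a concatenated key.
# klen = 6 also rounds up to two registers, so the old check accepted it...
assert old_check([5], 6) is True
# ...while the fixed check requires klen == 8 exactly.
assert new_check([5], 6) is False
assert new_check([5], 8) is True
```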
net/nfc/nci/hci.c (+2)

@@ -542 +542 @@

     pr_debug("pipe created=%d\n", pipe);

+    if (pipe >= NCI_HCI_MAX_PIPES)
+        pipe = NCI_HCI_INVALID_PIPE;
     return pipe;
 }

net/rose/rose_timer.c (+15)

@@ -122 +122 @@
     struct rose_sock *rose = rose_sk(sk);

     bh_lock_sock(sk);
+    if (sock_owned_by_user(sk)) {
+        sk_reset_timer(sk, &sk->sk_timer, jiffies + HZ/20);
+        goto out;
+    }
     switch (rose->state) {
     case ROSE_STATE_0:
         /* Magic here: If we listen() and a new link dies before it
@@ -156 +152 @@
     }

     rose_start_heartbeat(sk);
+out:
     bh_unlock_sock(sk);
     sock_put(sk);
 }
@@ -167 +162 @@
     struct sock *sk = &rose->sock;

     bh_lock_sock(sk);
+    if (sock_owned_by_user(sk)) {
+        sk_reset_timer(sk, &rose->timer, jiffies + HZ/20);
+        goto out;
+    }
     switch (rose->state) {
     case ROSE_STATE_1:  /* T1 */
     case ROSE_STATE_4:  /* T2 */
@@ -191 +182 @@
         }
         break;
     }
+out:
     bh_unlock_sock(sk);
     sock_put(sk);
 }
@@ -202 +192 @@
     struct sock *sk = &rose->sock;

     bh_lock_sock(sk);
+    if (sock_owned_by_user(sk)) {
+        sk_reset_timer(sk, &rose->idletimer, jiffies + HZ/20);
+        goto out;
+    }
     rose_clear_queues(sk);

     rose_write_internal(sk, ROSE_CLEAR_REQUEST);
@@ -221 +207 @@
         sk->sk_state_change(sk);
         sock_set_flag(sk, SOCK_DEAD);
     }
+out:
     bh_unlock_sock(sk);
     sock_put(sk);
 }
net/rxrpc/peer_event.c (+8 -8)

@@ -246 +246 @@
     bool use;
     int slot;

-    spin_lock(&rxnet->peer_hash_lock);
+    spin_lock_bh(&rxnet->peer_hash_lock);

     while (!list_empty(collector)) {
         peer = list_entry(collector->next,
@@ -257 +257 @@
             continue;

         use = __rxrpc_use_local(peer->local, rxrpc_local_use_peer_keepalive);
-        spin_unlock(&rxnet->peer_hash_lock);
+        spin_unlock_bh(&rxnet->peer_hash_lock);

         if (use) {
             keepalive_at = peer->last_tx_at + RXRPC_KEEPALIVE_TIME;
@@ -277 +277 @@
              */
             slot += cursor;
             slot &= mask;
-            spin_lock(&rxnet->peer_hash_lock);
+            spin_lock_bh(&rxnet->peer_hash_lock);
             list_add_tail(&peer->keepalive_link,
                           &rxnet->peer_keepalive[slot & mask]);
-            spin_unlock(&rxnet->peer_hash_lock);
+            spin_unlock_bh(&rxnet->peer_hash_lock);
             rxrpc_unuse_local(peer->local, rxrpc_local_unuse_peer_keepalive);
         }
         rxrpc_put_peer(peer, rxrpc_peer_put_keepalive);
-        spin_lock(&rxnet->peer_hash_lock);
+        spin_lock_bh(&rxnet->peer_hash_lock);
     }

-    spin_unlock(&rxnet->peer_hash_lock);
+    spin_unlock_bh(&rxnet->peer_hash_lock);
 }

 /*
@@ -317 +317 @@
      * second; the bucket at cursor + 1 goes at now + 1s and so
      * on...
      */
-    spin_lock(&rxnet->peer_hash_lock);
+    spin_lock_bh(&rxnet->peer_hash_lock);
     list_splice_init(&rxnet->peer_keepalive_new, &collector);

     stop = cursor + ARRAY_SIZE(rxnet->peer_keepalive);
@@ -329 +329 @@
     }

     base = now;
-    spin_unlock(&rxnet->peer_hash_lock);
+    spin_unlock_bh(&rxnet->peer_hash_lock);

     rxnet->peer_keepalive_base = base;
     rxnet->peer_keepalive_cursor = cursor;
net/rxrpc/peer_object.c (+6 -6)

@@ -325 +325 @@
     hash_key = rxrpc_peer_hash_key(local, &peer->srx);
     rxrpc_init_peer(local, peer, hash_key);

-    spin_lock(&rxnet->peer_hash_lock);
+    spin_lock_bh(&rxnet->peer_hash_lock);
     hash_add_rcu(rxnet->peer_hash, &peer->hash_link, hash_key);
     list_add_tail(&peer->keepalive_link, &rxnet->peer_keepalive_new);
-    spin_unlock(&rxnet->peer_hash_lock);
+    spin_unlock_bh(&rxnet->peer_hash_lock);
 }

 /*
@@ -360 +360 @@
         return NULL;
     }

-    spin_lock(&rxnet->peer_hash_lock);
+    spin_lock_bh(&rxnet->peer_hash_lock);

     /* Need to check that we aren't racing with someone else */
     peer = __rxrpc_lookup_peer_rcu(local, srx, hash_key);
@@ -373 +373 @@
                       &rxnet->peer_keepalive_new);
     }

-    spin_unlock(&rxnet->peer_hash_lock);
+    spin_unlock_bh(&rxnet->peer_hash_lock);

     if (peer)
         rxrpc_free_peer(candidate);
@@ -423 +423 @@

     ASSERT(hlist_empty(&peer->error_targets));

-    spin_lock(&rxnet->peer_hash_lock);
+    spin_lock_bh(&rxnet->peer_hash_lock);
     hash_del_rcu(&peer->hash_link);
     list_del_init(&peer->keepalive_link);
-    spin_unlock(&rxnet->peer_hash_lock);
+    spin_unlock_bh(&rxnet->peer_hash_lock);

     rxrpc_free_peer(peer);
 }
net/sched/sch_ets.c (+2)

@@ -91 +91 @@
 {
     struct ets_sched *q = qdisc_priv(sch);

+    if (arg == 0 || arg > q->nbands)
+        return NULL;
     return &q->classes[arg - 1];
 }

net/vmw_vsock/af_vsock.c (+11 -2)

@@ -337 +337 @@

 void vsock_remove_sock(struct vsock_sock *vsk)
 {
-    vsock_remove_bound(vsk);
+    /* Transport reassignment must not remove the binding. */
+    if (sock_flag(sk_vsock(vsk), SOCK_DEAD))
+        vsock_remove_bound(vsk);
+
     vsock_remove_connected(vsk);
 }
 EXPORT_SYMBOL_GPL(vsock_remove_sock);
@@ -824 +821 @@
      */
     lock_sock_nested(sk, level);

+    sock_orphan(sk);
+
     if (vsk->transport)
         vsk->transport->release(vsk);
     else if (sock_type_connectible(sk->sk_type))
         vsock_remove_sock(vsk);

-    sock_orphan(sk);
     sk->sk_shutdown = SHUTDOWN_MASK;

     skb_queue_purge(&sk->sk_receive_queue);
@@ -1522 +1518 @@
         err = transport->connect(vsk);
         if (err < 0)
             goto out;
+
+        /* sk_err might have been set as a result of an earlier
+         * (failed) connect attempt.
+         */
+        sk->sk_err = 0;

         /* Mark sock as connecting and set the error code to in
          * progress in case this is a non-blocking connect.
net/xfrm/xfrm_interface_core.c (+1 -1)

@@ -506 +506 @@
     skb_dst_set(skb, dst);
     skb->dev = tdev;

-    err = dst_output(xi->net, skb->sk, skb);
+    err = dst_output(xi->net, skb_to_full_sk(skb), skb);
     if (net_xmit_eval(err) == 0) {
         dev_sw_netstats_tx_add(dev, 1, length);
     } else {
net/xfrm/xfrm_output.c (+4 -3)

@@ -802 +802 @@
         !skb_gso_validate_network_len(skb, ip_skb_dst_mtu(skb->sk, skb)))) {
         skb->protocol = htons(ETH_P_IP);

-        if (skb->sk)
+        if (skb->sk && sk_fullsock(skb->sk))
             xfrm_local_error(skb, mtu);
         else
             icmp_send(skb, ICMP_DEST_UNREACH,
@@ -838 +838 @@
 {
     int mtu, ret = 0;
     struct dst_entry *dst = skb_dst(skb);
+    struct sock *sk = skb_to_full_sk(skb);

     if (skb->ignore_df)
         goto out;
@@ -853 +852 @@
     skb->dev = dst->dev;
     skb->protocol = htons(ETH_P_IPV6);

-    if (xfrm6_local_dontfrag(skb->sk))
+    if (xfrm6_local_dontfrag(sk))
         ipv6_stub->xfrm6_local_rxpmtu(skb, mtu);
-    else if (skb->sk)
+    else if (sk)
         xfrm_local_error(skb, mtu);
     else
         icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
net/xfrm/xfrm_policy.c (+1 -1)

@@ -2964 +2964 @@
     skb_dst_drop(skb);
     skb_dst_set(skb, dst);

-    dst_output(net, skb->sk, skb);
+    dst_output(net, skb_to_full_sk(skb), skb);
 }

 out:
net/xfrm/xfrm_replay.c (+6 -4)

@@ -714 +714 @@
         oseq += skb_shinfo(skb)->gso_segs;
     }

-    if (unlikely(xo->seq.low < replay_esn->oseq)) {
-        XFRM_SKB_CB(skb)->seq.output.hi = ++oseq_hi;
-        xo->seq.hi = oseq_hi;
-        replay_esn->oseq_hi = oseq_hi;
+    if (unlikely(oseq < replay_esn->oseq)) {
+        replay_esn->oseq_hi = ++oseq_hi;
+        if (xo->seq.low < replay_esn->oseq) {
+            XFRM_SKB_CB(skb)->seq.output.hi = oseq_hi;
+            xo->seq.hi = oseq_hi;
+        }
         if (replay_esn->oseq_hi == 0) {
             replay_esn->oseq--;
             replay_esn->oseq_hi--;
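The condition being fixed here is wrap detection for 64-bit extended sequence numbers (ESN) whose low 32 bits are handed out per GSO segment: the wrap must be detected on the newly allocated `oseq`, not on the skb's first sequence number. A deliberately simplified Python model of that allocation step (this is an illustration of the wrap arithmetic, not the kernel's `xfrm_replay_overflow_offload_esn()`):

```python
MASK32 = 0xffffffff

def alloc_gso_seq(oseq, oseq_hi, nsegs):
    """Allocate `nsegs` low-32-bit sequence numbers starting after
    `oseq`; bump the high half when the low word wraps around zero."""
    seq_low = (oseq + 1) & MASK32       # first sequence used by this skb
    new_oseq = (oseq + nsegs) & MASK32  # last sequence used by this skb
    new_hi = oseq_hi
    if new_oseq < oseq:                 # low half wrapped inside this skb
        new_hi = (new_hi + 1) & MASK32
    return seq_low, new_oseq, new_hi

# Two GSO segments straddling the 32-bit boundary: the stored low
# counter wraps to 0 and the high half must be incremented.
low, oseq, hi = alloc_gso_seq(0xfffffffe, 0, 2)
assert low == 0xffffffff
assert oseq == 0
assert hi == 1
```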
net/xfrm/xfrm_state.c (+71 -22)

@@ -34 +34 @@

 #define xfrm_state_deref_prot(table, net) \
     rcu_dereference_protected((table), lockdep_is_held(&(net)->xfrm.xfrm_state_lock))
+#define xfrm_state_deref_check(table, net) \
+    rcu_dereference_check((table), lockdep_is_held(&(net)->xfrm.xfrm_state_lock))

 static void xfrm_state_gc_task(struct work_struct *work);

@@ -64 +62 @@
                                          u32 reqid,
                                          unsigned short family)
 {
+    lockdep_assert_held(&net->xfrm.xfrm_state_lock);
+
     return __xfrm_dst_hash(daddr, saddr, reqid, family, net->xfrm.state_hmask);
 }

@@ -74 +70 @@
                                          const xfrm_address_t *saddr,
                                          unsigned short family)
 {
+    lockdep_assert_held(&net->xfrm.xfrm_state_lock);
+
     return __xfrm_src_hash(daddr, saddr, family, net->xfrm.state_hmask);
 }

@@ -83 +77 @@
 xfrm_spi_hash(struct net *net, const xfrm_address_t *daddr,
               __be32 spi, u8 proto, unsigned short family)
 {
+    lockdep_assert_held(&net->xfrm.xfrm_state_lock);
+
     return __xfrm_spi_hash(daddr, spi, proto, family, net->xfrm.state_hmask);
 }

 static unsigned int xfrm_seq_hash(struct net *net, u32 seq)
 {
+    lockdep_assert_held(&net->xfrm.xfrm_state_lock);
+
     return __xfrm_seq_hash(seq, net->xfrm.state_hmask);
 }

@@ -1118 +1108 @@
         x->props.family = tmpl->encap_family;
 }

-static struct xfrm_state *__xfrm_state_lookup_all(struct net *net, u32 mark,
+struct xfrm_hash_state_ptrs {
+    const struct hlist_head *bydst;
+    const struct hlist_head *bysrc;
+    const struct hlist_head *byspi;
+    unsigned int hmask;
+};
+
+static void xfrm_hash_ptrs_get(const struct net *net, struct xfrm_hash_state_ptrs *ptrs)
+{
+    unsigned int sequence;
+
+    do {
+        sequence = read_seqcount_begin(&net->xfrm.xfrm_state_hash_generation);
+
+        ptrs->bydst = xfrm_state_deref_check(net->xfrm.state_bydst, net);
+        ptrs->bysrc = xfrm_state_deref_check(net->xfrm.state_bysrc, net);
+        ptrs->byspi = xfrm_state_deref_check(net->xfrm.state_byspi, net);
+        ptrs->hmask = net->xfrm.state_hmask;
+    } while (read_seqcount_retry(&net->xfrm.xfrm_state_hash_generation, sequence));
+}
+
+static struct xfrm_state *__xfrm_state_lookup_all(const struct xfrm_hash_state_ptrs *state_ptrs,
+                                                  u32 mark,
                                                   const xfrm_address_t *daddr,
                                                   __be32 spi, u8 proto,
                                                   unsigned short family,
                                                   struct xfrm_dev_offload *xdo)
 {
-    unsigned int h = xfrm_spi_hash(net, daddr, spi, proto, family);
+    unsigned int h = __xfrm_spi_hash(daddr, spi, proto, family, state_ptrs->hmask);
     struct xfrm_state *x;

-    hlist_for_each_entry_rcu(x, net->xfrm.state_byspi + h, byspi) {
+    hlist_for_each_entry_rcu(x, state_ptrs->byspi + h, byspi) {
 #ifdef CONFIG_XFRM_OFFLOAD
         if (xdo->type == XFRM_DEV_OFFLOAD_PACKET) {
             if (x->xso.type != XFRM_DEV_OFFLOAD_PACKET)
@@ -1183 +1151 @@
     return NULL;
 }

-static struct xfrm_state *__xfrm_state_lookup(struct net *net, u32 mark,
+static struct xfrm_state *__xfrm_state_lookup(const struct xfrm_hash_state_ptrs *state_ptrs,
+                                              u32 mark,
                                               const xfrm_address_t *daddr,
                                               __be32 spi, u8 proto,
                                               unsigned short family)
 {
-    unsigned int h = xfrm_spi_hash(net, daddr, spi, proto, family);
+    unsigned int h = __xfrm_spi_hash(daddr, spi, proto, family, state_ptrs->hmask);
     struct xfrm_state *x;

-    hlist_for_each_entry_rcu(x, net->xfrm.state_byspi + h, byspi) {
+    hlist_for_each_entry_rcu(x, state_ptrs->byspi + h, byspi) {
         if (x->props.family != family ||
             x->id.spi != spi ||
             x->id.proto != proto ||
@@ -1214 +1181 @@
                                   __be32 spi, u8 proto,
                                   unsigned short family)
 {
+    struct xfrm_hash_state_ptrs state_ptrs;
     struct hlist_head *state_cache_input;
     struct xfrm_state *x = NULL;
-    int cpu = get_cpu();

-    state_cache_input = per_cpu_ptr(net->xfrm.state_cache_input, cpu);
+    state_cache_input = raw_cpu_ptr(net->xfrm.state_cache_input);

     rcu_read_lock();
     hlist_for_each_entry_rcu(x, state_cache_input, state_cache_input) {
@@ -1235 +1202 @@
         goto out;
     }

-    x = __xfrm_state_lookup(net, mark, daddr, spi, proto, family);
+    xfrm_hash_ptrs_get(net, &state_ptrs);
+
+    x = __xfrm_state_lookup(&state_ptrs, mark, daddr, spi, proto, family);

     if (x && x->km.state == XFRM_STATE_VALID) {
         spin_lock_bh(&net->xfrm.xfrm_state_lock);
@@ -1252 +1217 @@

 out:
     rcu_read_unlock();
-    put_cpu();
     return x;
 }
 EXPORT_SYMBOL(xfrm_input_state_lookup);

-static struct xfrm_state *__xfrm_state_lookup_byaddr(struct net *net, u32 mark,
+static struct xfrm_state *__xfrm_state_lookup_byaddr(const struct xfrm_hash_state_ptrs *state_ptrs,
+                                                     u32 mark,
                                                      const xfrm_address_t *daddr,
                                                      const xfrm_address_t *saddr,
                                                      u8 proto, unsigned short family)
 {
-    unsigned int h = xfrm_src_hash(net, daddr, saddr, family);
+    unsigned int h = __xfrm_src_hash(daddr, saddr, family, state_ptrs->hmask);
     struct xfrm_state *x;

-    hlist_for_each_entry_rcu(x, net->xfrm.state_bysrc + h, bysrc) {
+    hlist_for_each_entry_rcu(x, state_ptrs->bysrc + h, bysrc) {
         if (x->props.family != family ||
             x->id.proto != proto ||
             !xfrm_addr_equal(&x->id.daddr, daddr, family) ||
@@ -1285 +1250 @@
 static inline struct xfrm_state *
 __xfrm_state_locate(struct xfrm_state *x, int use_spi, int family)
 {
+    struct xfrm_hash_state_ptrs state_ptrs;
     struct net *net = xs_net(x);
     u32 mark = x->mark.v & x->mark.m;

+    xfrm_hash_ptrs_get(net, &state_ptrs);
+
     if (use_spi)
-        return __xfrm_state_lookup(net, mark, &x->id.daddr,
+        return __xfrm_state_lookup(&state_ptrs, mark, &x->id.daddr,
                                    x->id.spi, x->id.proto, family);
     else
-        return __xfrm_state_lookup_byaddr(net, mark,
+        return __xfrm_state_lookup_byaddr(&state_ptrs, mark,
                                           &x->id.daddr,
                                           &x->props.saddr,
                                           x->id.proto, family);
@@ -1369 +1331 @@
                 unsigned short family, u32 if_id)
 {
     static xfrm_address_t saddr_wildcard = { };
+    struct xfrm_hash_state_ptrs state_ptrs;
     struct net *net = xp_net(pol);
     unsigned int h, h_wildcard;
     struct xfrm_state *x, *x0, *to_put;
@@ -1434 +1395 @@
     else if (acquire_in_progress) /* XXX: acquire_in_progress should not happen */
         WARN_ON(1);

-    h = xfrm_dst_hash(net, daddr, saddr, tmpl->reqid, encap_family);
-    hlist_for_each_entry_rcu(x, net->xfrm.state_bydst + h, bydst) {
+    xfrm_hash_ptrs_get(net, &state_ptrs);
+
+    h = __xfrm_dst_hash(daddr, saddr, tmpl->reqid, encap_family, state_ptrs.hmask);
+    hlist_for_each_entry_rcu(x, state_ptrs.bydst + h, bydst) {
 #ifdef CONFIG_XFRM_OFFLOAD
         if (pol->xdo.type == XFRM_DEV_OFFLOAD_PACKET) {
             if (x->xso.type != XFRM_DEV_OFFLOAD_PACKET)
@@ -1470 +1429 @@
     if (best || acquire_in_progress)
         goto found;

-    h_wildcard = xfrm_dst_hash(net, daddr, &saddr_wildcard, tmpl->reqid, encap_family);
-    hlist_for_each_entry_rcu(x, net->xfrm.state_bydst + h_wildcard, bydst) {
+    h_wildcard = __xfrm_dst_hash(daddr, &saddr_wildcard, tmpl->reqid,
+                                 encap_family, state_ptrs.hmask);
+    hlist_for_each_entry_rcu(x, state_ptrs.bydst + h_wildcard, bydst) {
 #ifdef CONFIG_XFRM_OFFLOAD
         if (pol->xdo.type == XFRM_DEV_OFFLOAD_PACKET) {
             if (x->xso.type != XFRM_DEV_OFFLOAD_PACKET)
@@ -1510 +1468 @@

     if (!x && !error && !acquire_in_progress) {
         if (tmpl->id.spi &&
-            (x0 = __xfrm_state_lookup_all(net, mark, daddr,
+            (x0 = __xfrm_state_lookup_all(&state_ptrs, mark, daddr,
                                           tmpl->id.spi, tmpl->id.proto,
                                           encap_family,
                                           &pol->xdo)) != NULL) {
@@ -2295 +2253 @@
 xfrm_state_lookup(struct net *net, u32 mark, const xfrm_address_t *daddr, __be32 spi,
                   u8 proto, unsigned short family)
 {
+    struct xfrm_hash_state_ptrs state_ptrs;
     struct xfrm_state *x;

     rcu_read_lock();
-    x = __xfrm_state_lookup(net, mark, daddr, spi, proto, family);
+    xfrm_hash_ptrs_get(net, &state_ptrs);
+
+    x = __xfrm_state_lookup(&state_ptrs, mark, daddr, spi, proto, family);
     rcu_read_unlock();
     return x;
 }
@@ -2312 +2267 @@
                const xfrm_address_t *daddr, const xfrm_address_t *saddr,
                u8 proto, unsigned short family)
 {
+    struct xfrm_hash_state_ptrs state_ptrs;
     struct xfrm_state *x;

     spin_lock_bh(&net->xfrm.xfrm_state_lock);
-    x = __xfrm_state_lookup_byaddr(net, mark, daddr, saddr, proto, family);
+
+    xfrm_hash_ptrs_get(net, &state_ptrs);
+
+    x = __xfrm_state_lookup_byaddr(&state_ptrs, mark, daddr, saddr, proto, family);
     spin_unlock_bh(&net->xfrm.xfrm_state_lock);
     return x;
 }
tools/net/ynl/lib/ynl.c (+1 -1)

@@ -95 +95 @@

     ynl_attr_for_each_payload(start, data_len, attr) {
         astart_off = (char *)attr - (char *)start;
-        aend_off = astart_off + ynl_attr_data_len(attr);
+        aend_off = (char *)ynl_attr_data_end(attr) - (char *)start;
         if (aend_off <= off)
             continue;

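The off-by-a-header in the ynl.c hunk: a netlink attribute's payload starts after its 4-byte header, so an end offset computed as "attribute start + payload length" lands 4 bytes short of where the payload actually ends. A small Python sketch of the two offset computations (assuming, as in libnetlink, a 4-byte `struct nlattr` header):

```python
NLA_HDRLEN = 4  # sizeof(struct nlattr): u16 nla_len + u16 nla_type

def attr_end_off_buggy(astart_off, payload_len):
    # Pre-fix: payload length added to the *attribute* start,
    # silently dropping the attribute header.
    return astart_off + payload_len

def attr_end_off_fixed(astart_off, payload_len):
    # Post-fix: end of payload measured from the payload start,
    # i.e. attribute start + header + payload length.
    return astart_off + NLA_HDRLEN + payload_len

# An attribute at offset 16 with an 8-byte payload really ends at 28.
assert attr_end_off_buggy(16, 8) == 24   # 4 bytes short
assert attr_end_off_fixed(16, 8) == 28
```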
+8 -8
tools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh
···
 }
 
 function check_table {
-	local path=$NSIM_DEV_DFS/ports/$port/udp_ports_table$1
+	local path=$NSIM_DEV_DFS/ports/$port/udp_ports/table$1
 	local -n expected=$2
 	local last=$3
 
···
 }
 
 function print_table {
-	local path=$NSIM_DEV_DFS/ports/$port/udp_ports_table$1
+	local path=$NSIM_DEV_DFS/ports/$port/udp_ports/table$1
 	read -a have < $path
 
 	tree $NSIM_DEV_DFS/
···
 NSIM_NETDEV=`get_netdev_name old_netdevs`
 ip link set dev $NSIM_NETDEV up
 
-echo 110 > $NSIM_DEV_DFS/ports/$port/udp_ports_inject_error
+echo 110 > $NSIM_DEV_DFS/ports/$port/udp_ports/inject_error
 
 msg="1 - create VxLANs v6"
 exp0=( 0 0 0 0 )
···
 new_geneve gnv0 20000
 
 msg="2 - destroy GENEVE"
-echo 2 > $NSIM_DEV_DFS/ports/$port/udp_ports_inject_error
+echo 2 > $NSIM_DEV_DFS/ports/$port/udp_ports/inject_error
 exp1=( `mke 20000 2` 0 0 0 )
 del_dev gnv0
 
···
 msg="create VxLANs v4"
 new_vxlan vxlan0 10000 $NSIM_NETDEV
 
-echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports_reset
+echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports/reset
 check_tables
 
 msg="NIC device goes down"
···
 fi
 check_tables
 
-echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports_reset
+echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports/reset
 check_tables
 
 msg="NIC device goes up again"
···
 del_dev vxlan0
 check_tables
 
-echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports_reset
+echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports/reset
 check_tables
 
 msg="destroy NIC"
···
 exp0=( `mke 10000 1` `mke 10004 1` 0 `mke 10003 1` )
 del_dev vxlan2
 
-echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports_reset
+echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports/reset
 check_tables
 
 msg="tunnels destroyed 2"
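The hunks above all track a single netdevsim change: the per-port UDP tunnel debugfs files moved from flat `udp_ports_*` names into a `udp_ports/` subdirectory. A minimal sketch of the rename; the debugfs root, port number, and table index below are illustrative assumptions, not values taken from the test:

```shell
# Old vs. new locations of the netdevsim UDP-port knobs touched above.
# NSIM_DEV_DFS, port and the table index are made up for illustration.
NSIM_DEV_DFS=/sys/kernel/debug/netdevsim/netdevsim0
port=0
for knob in table1 inject_error reset; do
    old="$NSIM_DEV_DFS/ports/$port/udp_ports_$knob"
    new="$NSIM_DEV_DFS/ports/$port/udp_ports/$knob"
    echo "$old -> $new"
done
```

Only the path template changes; the semantics of each knob (table contents, error injection, reset) stay the same.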
+13 -1
tools/testing/selftests/net/bpf_offload.py
···
     raise Exception("Time out waiting for map counts to stabilize want %d, have %d" % (expected, nmaps))
 
 def bpftool_prog_load(sample, file_name, maps=[], prog_type="xdp", dev=None,
-                      fail=True, include_stderr=False):
+                      fail=True, include_stderr=False, dev_bind=None):
     args = "prog load %s %s" % (os.path.join(bpf_test_dir, sample), file_name)
     if prog_type is not None:
         args += " type " + prog_type
     if dev is not None:
         args += " dev " + dev
+    elif dev_bind is not None:
+        args += " xdpmeta_dev " + dev_bind
     if len(maps):
         args += " map " + " map ".join(maps)
 
···
     fail(ret == 0, "attached offloaded XDP program to drv")
     check_extack(err, "Using offloaded program without HW_MODE flag is not supported.", args)
     rm("/sys/fs/bpf/offload")
+    sim.wait_for_flush()
+
+    bpftool_prog_load("sample_ret0.bpf.o", "/sys/fs/bpf/devbound",
+                      dev_bind=sim['ifname'])
+    devbound = bpf_pinned("/sys/fs/bpf/devbound")
+    start_test("Test dev-bound program in generic mode...")
+    ret, _, err = sim.set_xdp(devbound, "generic", fail=False, include_stderr=True)
+    fail(ret == 0, "devbound program in generic mode allowed")
+    check_extack(err, "Can't attach device-bound programs in generic mode.", args)
+    rm("/sys/fs/bpf/devbound")
     sim.wait_for_flush()
 
     start_test("Test XDP load failure...")
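The new `dev_bind=` parameter above makes the helper emit bpftool's `xdpmeta_dev` keyword (load a device-bound but not offloaded program) instead of `dev`, with `dev` taking precedence when both are set. A sketch of the argument string the helper ends up building; the object name, pin path, and interface below are hypothetical, and nothing is actually executed:

```shell
# Rebuild the argument string bpftool_prog_load() produces when
# called with dev_bind=<ifname>; the values are made up for show.
sample="sample_ret0.bpf.o"
pin="/sys/fs/bpf/devbound"
dev=""            # offload path: "dev" still wins when set
dev_bind="eth0"
args="prog load $sample $pin type xdp"
if [ -n "$dev" ]; then
    args="$args dev $dev"
elif [ -n "$dev_bind" ]; then
    args="$args xdpmeta_dev $dev_bind"
fi
echo "bpftool $args"
```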
+1 -1
tools/testing/selftests/net/lib/Makefile
···
 # SPDX-License-Identifier: GPL-2.0
 
-CFLAGS = -Wall -Wl,--no-as-needed -O2 -g
+CFLAGS += -Wall -Wl,--no-as-needed -O2 -g
 CFLAGS += -I../../../../../usr/include/ $(KHDR_INCLUDES)
 # Additional include paths needed by kselftest.h
 CFLAGS += -I../../
+1 -1
tools/testing/selftests/net/mptcp/Makefile
···
 
 top_srcdir = ../../../../..
 
-CFLAGS = -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
+CFLAGS += -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
 
 TEST_PROGS := mptcp_connect.sh pm_netlink.sh mptcp_join.sh diag.sh \
 	      simult_flows.sh mptcp_sockopt.sh userspace_pm.sh
+1 -1
tools/testing/selftests/net/openvswitch/Makefile
···
 
 top_srcdir = ../../../../..
 
-CFLAGS = -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
+CFLAGS += -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
 
 TEST_PROGS := openvswitch.sh
 
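The Makefile hunks above share one rationale: `CFLAGS =` silently discarded flags already present in the variable (for example ones inherited from the environment of a cross-compile or CI setup), whereas `+=` appends to them. A throwaway GNU make demonstration of the difference; the `-DFOO` flag and temp-file approach are invented for the demo:

```shell
# "+=" appends to CFLAGS inherited from the environment; "=" would
# clobber them. Requires GNU make.
demo_mk=$(mktemp)
printf 'CFLAGS += -O2\nall:\n\t@echo $(CFLAGS)\n' > "$demo_mk"
out=$(CFLAGS=-DFOO make -s -f "$demo_mk")
echo "$out"
rm -f "$demo_mk"
```

With `CFLAGS = -O2` in the makefile the environment's `-DFOO` would vanish from the output; with `+=` both flags survive. (Note that a command-line override like `make CFLAGS=...` still beats `+=`; only environment-origin values are appended to.)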
+3 -1
tools/testing/selftests/net/packetdrill/ksft_runner.sh
···
 # xfail tests that are known flaky with dbg config, not fixable.
 # still run them for coverage (and expect 100% pass without dbg).
 declare -ar xfail_list=(
+	"tcp_eor_no-coalesce-retrans.pkt"
 	"tcp_fast_recovery_prr-ss.*.pkt"
+	"tcp_slow_start_slow-start-after-win-update.pkt"
 	"tcp_timestamping.*.pkt"
 	"tcp_user_timeout_user-timeout-probe.pkt"
 	"tcp_zerocopy_epoll_.*.pkt"
-	"tcp_tcp_info_tcp-info-*-limited.pkt"
+	"tcp_tcp_info_tcp-info-.*-limited.pkt"
 )
 readonly xfail_regex="^($(printf '%s|' "${xfail_list[@]}"))$"
 [[ "$script" =~ ${xfail_regex} ]] && failfunc=ktap_test_xfail
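The last hunk above is a pattern fix, not just a new entry: the xfail list is matched as a POSIX extended regular expression (via `=~`), where `*` repeats only the preceding atom, so the old `tcp-info-*-limited` matched a run of hyphens rather than an arbitrary middle segment. A standalone check of both patterns using `grep -E` (same ERE dialect); the script name below is illustrative of the family being matched:

```shell
# ERE semantics: "-*" = zero or more hyphens; "-.*" = hyphen, then anything.
script="tcp_tcp_info_tcp-info-rwnd-limited.pkt"
old='^tcp_tcp_info_tcp-info-*-limited\.pkt$'
new='^tcp_tcp_info_tcp-info-.*-limited\.pkt$'
old_result=$(printf '%s\n' "$script" | grep -Eq "$old" && echo match || echo "no match")
new_result=$(printf '%s\n' "$script" | grep -Eq "$new" && echo match || echo "no match")
echo "old pattern: $old_result"
echo "new pattern: $new_result"
```

The old pattern fails to match ("rwnd" is not a run of hyphens); the corrected `.*` pattern matches.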
+34 -54
tools/testing/vsock/util.c
···
 	close(epollfd);
 }
 
-/* Bind to <bind_port>, connect to <cid, port> and return the file descriptor. */
-int vsock_bind_connect(unsigned int cid, unsigned int port, unsigned int bind_port, int type)
+/* Create socket <type>, bind to <cid, port> and return the file descriptor. */
+int vsock_bind(unsigned int cid, unsigned int port, int type)
 {
-	struct sockaddr_vm sa_client = {
-		.svm_family = AF_VSOCK,
-		.svm_cid = VMADDR_CID_ANY,
-		.svm_port = bind_port,
-	};
-	struct sockaddr_vm sa_server = {
+	struct sockaddr_vm sa = {
 		.svm_family = AF_VSOCK,
 		.svm_cid = cid,
 		.svm_port = port,
 	};
+	int fd;
 
-	int client_fd, ret;
-
-	client_fd = socket(AF_VSOCK, type, 0);
-	if (client_fd < 0) {
+	fd = socket(AF_VSOCK, type, 0);
+	if (fd < 0) {
 		perror("socket");
 		exit(EXIT_FAILURE);
 	}
 
-	if (bind(client_fd, (struct sockaddr *)&sa_client, sizeof(sa_client))) {
+	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa))) {
 		perror("bind");
 		exit(EXIT_FAILURE);
 	}
 
+	return fd;
+}
+
+int vsock_connect_fd(int fd, unsigned int cid, unsigned int port)
+{
+	struct sockaddr_vm sa = {
+		.svm_family = AF_VSOCK,
+		.svm_cid = cid,
+		.svm_port = port,
+	};
+	int ret;
+
 	timeout_begin(TIMEOUT);
 	do {
-		ret = connect(client_fd, (struct sockaddr *)&sa_server, sizeof(sa_server));
+		ret = connect(fd, (struct sockaddr *)&sa, sizeof(sa));
 		timeout_check("connect");
 	} while (ret < 0 && errno == EINTR);
 	timeout_end();
 
-	if (ret < 0) {
+	return ret;
+}
+
+/* Bind to <bind_port>, connect to <cid, port> and return the file descriptor. */
+int vsock_bind_connect(unsigned int cid, unsigned int port, unsigned int bind_port, int type)
+{
+	int client_fd;
+
+	client_fd = vsock_bind(VMADDR_CID_ANY, bind_port, type);
+
+	if (vsock_connect_fd(client_fd, cid, port)) {
 		perror("connect");
 		exit(EXIT_FAILURE);
 	}
···
 /* Connect to <cid, port> and return the file descriptor. */
 int vsock_connect(unsigned int cid, unsigned int port, int type)
 {
-	union {
-		struct sockaddr sa;
-		struct sockaddr_vm svm;
-	} addr = {
-		.svm = {
-			.svm_family = AF_VSOCK,
-			.svm_port = port,
-			.svm_cid = cid,
-		},
-	};
-	int ret;
 	int fd;
 
 	control_expectln("LISTENING");
···
 		exit(EXIT_FAILURE);
 	}
 
-	timeout_begin(TIMEOUT);
-	do {
-		ret = connect(fd, &addr.sa, sizeof(addr.svm));
-		timeout_check("connect");
-	} while (ret < 0 && errno == EINTR);
-	timeout_end();
-
-	if (ret < 0) {
+	if (vsock_connect_fd(fd, cid, port)) {
 		int old_errno = errno;
 
 		close(fd);
 		fd = -1;
 		errno = old_errno;
 	}
+
 	return fd;
 }
 
···
 /* Listen on <cid, port> and return the file descriptor. */
 static int vsock_listen(unsigned int cid, unsigned int port, int type)
 {
-	union {
-		struct sockaddr sa;
-		struct sockaddr_vm svm;
-	} addr = {
-		.svm = {
-			.svm_family = AF_VSOCK,
-			.svm_port = port,
-			.svm_cid = cid,
-		},
-	};
 	int fd;
 
-	fd = socket(AF_VSOCK, type, 0);
-	if (fd < 0) {
-		perror("socket");
-		exit(EXIT_FAILURE);
-	}
-
-	if (bind(fd, &addr.sa, sizeof(addr.svm)) < 0) {
-		perror("bind");
-		exit(EXIT_FAILURE);
-	}
+	fd = vsock_bind(cid, port, type);
 
 	if (listen(fd, 1) < 0) {
 		perror("listen");
+2
tools/testing/vsock/util.h
···
 void init_signals(void);
 unsigned int parse_cid(const char *str);
 unsigned int parse_port(const char *str);
+int vsock_connect_fd(int fd, unsigned int cid, unsigned int port);
 int vsock_connect(unsigned int cid, unsigned int port, int type);
 int vsock_accept(unsigned int cid, unsigned int port,
 		 struct sockaddr_vm *clientaddrp, int type);
 int vsock_stream_connect(unsigned int cid, unsigned int port);
+int vsock_bind(unsigned int cid, unsigned int port, int type);
 int vsock_bind_connect(unsigned int cid, unsigned int port,
 		       unsigned int bind_port, int type);
 int vsock_seqpacket_connect(unsigned int cid, unsigned int port);
+106 -16
tools/testing/vsock/vsock_test.c
···
 
 static void test_stream_bind_only_server(const struct test_opts *opts)
 {
-	union {
-		struct sockaddr sa;
-		struct sockaddr_vm svm;
-	} addr = {
-		.svm = {
-			.svm_family = AF_VSOCK,
-			.svm_port = opts->peer_port,
-			.svm_cid = VMADDR_CID_ANY,
-		},
-	};
 	int fd;
 
-	fd = socket(AF_VSOCK, SOCK_STREAM, 0);
-
-	if (bind(fd, &addr.sa, sizeof(addr.svm)) < 0) {
-		perror("bind");
-		exit(EXIT_FAILURE);
-	}
+	fd = vsock_bind(VMADDR_CID_ANY, opts->peer_port, SOCK_STREAM);
 
 	/* Notify the client that the server is ready */
 	control_writeln("BIND");
···
 	close(fd);
 }
 
+#define MAX_PORT_RETRIES	24	/* net/vmw_vsock/af_vsock.c */
+
+/* Test attempts to trigger a transport release for an unbound socket. This can
+ * lead to a reference count mishandling.
+ */
+static void test_stream_transport_uaf_client(const struct test_opts *opts)
+{
+	int sockets[MAX_PORT_RETRIES];
+	struct sockaddr_vm addr;
+	int fd, i, alen;
+
+	fd = vsock_bind(VMADDR_CID_ANY, VMADDR_PORT_ANY, SOCK_STREAM);
+
+	alen = sizeof(addr);
+	if (getsockname(fd, (struct sockaddr *)&addr, &alen)) {
+		perror("getsockname");
+		exit(EXIT_FAILURE);
+	}
+
+	for (i = 0; i < MAX_PORT_RETRIES; ++i)
+		sockets[i] = vsock_bind(VMADDR_CID_ANY, ++addr.svm_port,
+					SOCK_STREAM);
+
+	close(fd);
+	fd = socket(AF_VSOCK, SOCK_STREAM, 0);
+	if (fd < 0) {
+		perror("socket");
+		exit(EXIT_FAILURE);
+	}
+
+	if (!vsock_connect_fd(fd, addr.svm_cid, addr.svm_port)) {
+		perror("Unexpected connect() #1 success");
+		exit(EXIT_FAILURE);
+	}
+
+	/* Vulnerable system may crash now. */
+	if (!vsock_connect_fd(fd, VMADDR_CID_HOST, VMADDR_PORT_ANY)) {
+		perror("Unexpected connect() #2 success");
+		exit(EXIT_FAILURE);
+	}
+
+	close(fd);
+	while (i--)
+		close(sockets[i]);
+
+	control_writeln("DONE");
+}
+
+static void test_stream_transport_uaf_server(const struct test_opts *opts)
+{
+	control_expectln("DONE");
+}
+
+static void test_stream_connect_retry_client(const struct test_opts *opts)
+{
+	int fd;
+
+	fd = socket(AF_VSOCK, SOCK_STREAM, 0);
+	if (fd < 0) {
+		perror("socket");
+		exit(EXIT_FAILURE);
+	}
+
+	if (!vsock_connect_fd(fd, opts->peer_cid, opts->peer_port)) {
+		fprintf(stderr, "Unexpected connect() #1 success\n");
+		exit(EXIT_FAILURE);
+	}
+
+	control_writeln("LISTEN");
+	control_expectln("LISTENING");
+
+	if (vsock_connect_fd(fd, opts->peer_cid, opts->peer_port)) {
+		perror("connect() #2");
+		exit(EXIT_FAILURE);
+	}
+
+	close(fd);
+}
+
+static void test_stream_connect_retry_server(const struct test_opts *opts)
+{
+	int fd;
+
+	control_expectln("LISTEN");
+
+	fd = vsock_stream_accept(VMADDR_CID_ANY, opts->peer_port, NULL);
+	if (fd < 0) {
+		perror("accept");
+		exit(EXIT_FAILURE);
+	}
+
+	vsock_wait_remote_close(fd);
+	close(fd);
+}
+
 static struct test_case test_cases[] = {
 	{
 		.name = "SOCK_STREAM connection reset",
···
 		.name = "SOCK_STREAM MSG_ZEROCOPY leak completion skb",
 		.run_client = test_stream_msgzcopy_leak_zcskb_client,
 		.run_server = test_stream_msgzcopy_leak_zcskb_server,
+	},
+	{
+		.name = "SOCK_STREAM transport release use-after-free",
+		.run_client = test_stream_transport_uaf_client,
+		.run_server = test_stream_transport_uaf_server,
+	},
+	{
+		.name = "SOCK_STREAM retry failed connect()",
+		.run_client = test_stream_connect_retry_client,
+		.run_server = test_stream_connect_retry_server,
 	},
 	{},
 };