Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-6.11-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
"Including fixes from wireless, bluetooth, BPF and netfilter.

Current release - regressions:

- core: drop bad gso csum_start and offset in virtio_net_hdr

- wifi: mt76: fix null pointer access in mt792x_mac_link_bss_remove

- eth: tun: add missing bpf_net_ctx_clear() in do_xdp_generic()

- phy: aquantia: only poll GLOBAL_CFG regs on aqr113, aqr113c and
aqr115c

Current release - new code bugs:

- smc: prevent UAF in inet_create()

- bluetooth: btmtk: fix kernel crash when entering btmtk_usb_suspend

- eth: bnxt: reject unsupported hash functions

Previous releases - regressions:

- sched: act_ct: take care of padding in struct zones_ht_key

- netfilter: fix null-ptr-deref in iptable_nat_table_init().

- tcp: adjust clamping window for applications specifying SO_RCVBUF

Previous releases - always broken:

- ethtool: rss: small fixes to spec and GET

- mptcp:
- fix signal endpoint re-add
- pm: fix backup support in signal endpoints

- wifi: ath12k: fix soft lockup on suspend

- eth: bnxt_en: fix RSS logic in __bnxt_reserve_rings()

- eth: ice: fix AF_XDP ZC timeout and concurrency issues

- eth: mlx5:
- fix missing lock on sync reset reload
- fix error handling in irq_pool_request_irq"

* tag 'net-6.11-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (76 commits)
mptcp: fix duplicate data handling
mptcp: fix bad RCVPRUNED mib accounting
ipv6: fix ndisc_is_useropt() handling for PIO
igc: Fix double reset adapter triggered from a single taprio cmd
net: MAINTAINERS: Demote Qualcomm IPA to "maintained"
net: wan: fsl_qmc_hdlc: Discard received CRC
net: wan: fsl_qmc_hdlc: Convert carrier_lock spinlock to a mutex
net/mlx5e: Add a check for the return value from mlx5_port_set_eth_ptys
net/mlx5e: Fix CT entry update leaks of modify header context
net/mlx5e: Require mlx5 tc classifier action support for IPsec prio capability
net/mlx5: Fix missing lock on sync reset reload
net/mlx5: Lag, don't use the hardcoded value of the first port
net/mlx5: DR, Fix 'stack guard page was hit' error in dr_rule
net/mlx5: Fix error handling in irq_pool_request_irq
net/mlx5: Always drain health in shutdown callback
net: Add skbuff.h to MAINTAINERS
r8169: don't increment tx_dropped in case of NETDEV_TX_BUSY
netfilter: iptables: Fix potential null-ptr-deref in ip6table_nat_table_init().
netfilter: iptables: Fix null-ptr-deref in iptable_nat_table_init().
net: drop bad gso csum_start and offset in virtio_net_hdr
...

Overall diffstat: +695 -304
Documentation/netlink/specs/ethtool.yaml | +1 -1
···
       request:
         attributes:
           - header
+          - context
       reply:
         attributes:
           - header
···
           - indir
           - hkey
           - input_xfrm
-      dump: *rss-get-op
     -
       name: plca-get-cfg
       doc: Get PLCA params.
Documentation/networking/ethtool-netlink.rst | +1
···

 ===================================== ====== ==========================
 ``ETHTOOL_A_RSS_HEADER``              nested reply header
+``ETHTOOL_A_RSS_CONTEXT``             u32    context number
 ``ETHTOOL_A_RSS_HFUNC``               u32    RSS hash func
 ``ETHTOOL_A_RSS_INDIR``              binary  Indir table bytes
 ``ETHTOOL_A_RSS_HKEY``               binary  Hash key bytes
MAINTAINERS | +2 -1
···
 F:	include/linux/indirect_call_wrapper.h
 F:	include/linux/net.h
 F:	include/linux/netdevice.h
+F:	include/linux/skbuff.h
 F:	include/net/
 F:	include/uapi/linux/in.h
 F:	include/uapi/linux/net.h
···
 QCOM IPA DRIVER
 M:	Alex Elder <elder@kernel.org>
 L:	netdev@vger.kernel.org
-S:	Supported
+S:	Maintained
 F:	drivers/net/ipa/

 QEMU MACHINE EMULATOR AND VIRTUALIZER SUPPORT
drivers/bluetooth/Kconfig | +2
···
 config BT_MTKSDIO
 	tristate "MediaTek HCI SDIO driver"
 	depends on MMC
+	depends on USB || !BT_HCIBTUSB_MTK
 	select BT_MTK
 	help
 	  MediaTek Bluetooth HCI SDIO driver.
···
 config BT_MTKUART
 	tristate "MediaTek HCI UART driver"
 	depends on SERIAL_DEV_BUS
+	depends on USB || !BT_HCIBTUSB_MTK
 	select BT_MTK
 	help
 	  MediaTek Bluetooth HCI UART driver.
drivers/bluetooth/btintel.c | +3
···
 		btintel_set_dsm_reset_method(hdev, &ver_tlv);

 		err = btintel_bootloader_setup_tlv(hdev, &ver_tlv);
+		if (err)
+			goto exit_error;
+
 		btintel_register_devcoredump_support(hdev);
 		btintel_print_fseq_info(hdev);
 		break;
drivers/bluetooth/btmtk.c | +4 -1
···
 }
 EXPORT_SYMBOL_GPL(btmtk_process_coredump);

+#if IS_ENABLED(CONFIG_BT_HCIBTUSB_MTK)
 static void btmtk_usb_wmt_recv(struct urb *urb)
 {
 	struct hci_dev *hdev = urb->context;
···
 	struct btmtk_data *btmtk_data = hci_get_priv(hdev);

 	/* Stop urb anchor for iso data transmission */
-	usb_kill_anchored_urbs(&btmtk_data->isopkt_anchor);
+	if (test_bit(BTMTK_ISOPKT_RUNNING, &btmtk_data->flags))
+		usb_kill_anchored_urbs(&btmtk_data->isopkt_anchor);

 	return 0;
 }
···
 	return 0;
 }
 EXPORT_SYMBOL_GPL(btmtk_usb_shutdown);
+#endif

 MODULE_AUTHOR("Sean Wang <sean.wang@mediatek.com>");
 MODULE_AUTHOR("Mark Chen <mark-yw.chen@mediatek.com>");
drivers/net/ethernet/broadcom/bnxt/bnxt.c | +4 -2
···
 static int __bnxt_reserve_rings(struct bnxt *bp)
 {
 	struct bnxt_hw_rings hwr = {0};
+	int rx_rings, old_rx_rings, rc;
 	int cp = bp->cp_nr_rings;
-	int rx_rings, rc;
 	int ulp_msix = 0;
 	bool sh = false;
 	int tx_cp;
···
 	hwr.grp = bp->rx_nr_rings;
 	hwr.rss_ctx = bnxt_get_total_rss_ctxs(bp, &hwr);
 	hwr.stat = bnxt_get_func_stat_ctxs(bp);
+	old_rx_rings = bp->hw_resc.resv_rx_rings;

 	rc = bnxt_hwrm_reserve_rings(bp, &hwr);
 	if (rc)
···
 	if (!bnxt_rings_ok(bp, &hwr))
 		return -ENOMEM;

-	if (!netif_is_rxfh_configured(bp->dev))
+	if (old_rx_rings != bp->hw_resc.resv_rx_rings &&
+	    !netif_is_rxfh_configured(bp->dev))
 		bnxt_set_dflt_rss_indir_tbl(bp, NULL);

 	if (!bnxt_ulp_registered(bp->edev) && BNXT_NEW_RM(bp)) {
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c | +12 -2
···
 }

 static int bnxt_rxfh_context_check(struct bnxt *bp,
+				   const struct ethtool_rxfh_param *rxfh,
 				   struct netlink_ext_ack *extack)
 {
+	if (rxfh->hfunc && rxfh->hfunc != ETH_RSS_HASH_TOP) {
+		NL_SET_ERR_MSG_MOD(extack, "RSS hash function not supported");
+		return -EOPNOTSUPP;
+	}
+
 	if (!BNXT_SUPPORTS_MULTI_RSS_CTX(bp)) {
 		NL_SET_ERR_MSG_MOD(extack, "RSS contexts not supported");
 		return -EOPNOTSUPP;
···
 	struct bnxt_vnic_info *vnic;
 	int rc;

-	rc = bnxt_rxfh_context_check(bp, extack);
+	rc = bnxt_rxfh_context_check(bp, rxfh, extack);
 	if (rc)
 		return rc;
···
 	if (rc)
 		goto out;

+	/* Populate defaults in the context */
 	bnxt_set_dflt_rss_indir_tbl(bp, ctx);
+	ctx->hfunc = ETH_RSS_HASH_TOP;
 	memcpy(vnic->rss_hash_key, bp->rss_hash_key, HW_HASH_KEY_SIZE);
+	memcpy(ethtool_rxfh_context_key(ctx),
+	       bp->rss_hash_key, HW_HASH_KEY_SIZE);

 	rc = bnxt_hwrm_vnic_alloc(bp, vnic, 0, bp->rx_nr_rings);
 	if (rc) {
···
 	struct bnxt_rss_ctx *rss_ctx;
 	int rc;

-	rc = bnxt_rxfh_context_check(bp, extack);
+	rc = bnxt_rxfh_context_check(bp, rxfh, extack);
 	if (rc)
 		return rc;
drivers/net/ethernet/intel/ice/ice.h | +5 -6
···
 }

 /**
- * ice_xsk_pool - get XSK buffer pool bound to a ring
+ * ice_rx_xsk_pool - assign XSK buff pool to Rx ring
  * @ring: Rx ring to use
  *
- * Returns a pointer to xsk_buff_pool structure if there is a buffer pool
- * present, NULL otherwise.
+ * Sets XSK buff pool pointer on Rx ring.
  */
-static inline struct xsk_buff_pool *ice_xsk_pool(struct ice_rx_ring *ring)
+static inline void ice_rx_xsk_pool(struct ice_rx_ring *ring)
 {
 	struct ice_vsi *vsi = ring->vsi;
 	u16 qid = ring->q_index;

-	return ice_get_xp_from_qid(vsi, qid);
+	WRITE_ONCE(ring->xsk_pool, ice_get_xp_from_qid(vsi, qid));
 }

 /**
···
 	if (!ring)
 		return;

-	ring->xsk_pool = ice_get_xp_from_qid(vsi, qid);
+	WRITE_ONCE(ring->xsk_pool, ice_get_xp_from_qid(vsi, qid));
 }

 /**
drivers/net/ethernet/intel/ice/ice_base.c | +2 -2
···
 		return err;
 	}

-	ring->xsk_pool = ice_xsk_pool(ring);
+	ice_rx_xsk_pool(ring);
 	if (ring->xsk_pool) {
 		xdp_rxq_info_unreg(&ring->xdp_rxq);
···
 		return 0;
 	}

-	ok = ice_alloc_rx_bufs_zc(ring, num_bufs);
+	ok = ice_alloc_rx_bufs_zc(ring, ring->xsk_pool, num_bufs);
 	if (!ok) {
 		u16 pf_q = ring->vsi->rxq_map[ring->q_index];
drivers/net/ethernet/intel/ice/ice_main.c | +1 -1
···
 	ice_for_each_rxq(vsi, i) {
 		struct ice_rx_ring *rx_ring = vsi->rx_rings[i];

-		if (rx_ring->xsk_pool)
+		if (READ_ONCE(rx_ring->xsk_pool))
 			napi_schedule(&rx_ring->q_vector->napi);
 	}
 }
drivers/net/ethernet/intel/ice/ice_txrx.c | +6 -4
···
 	if (rx_ring->vsi->type == ICE_VSI_PF)
 		if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
 			xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
-	rx_ring->xdp_prog = NULL;
+	WRITE_ONCE(rx_ring->xdp_prog, NULL);
 	if (rx_ring->xsk_pool) {
 		kfree(rx_ring->xdp_buf);
 		rx_ring->xdp_buf = NULL;
···
 	 * budget and be more aggressive about cleaning up the Tx descriptors.
 	 */
 	ice_for_each_tx_ring(tx_ring, q_vector->tx) {
+		struct xsk_buff_pool *xsk_pool = READ_ONCE(tx_ring->xsk_pool);
 		bool wd;

-		if (tx_ring->xsk_pool)
-			wd = ice_xmit_zc(tx_ring);
+		if (xsk_pool)
+			wd = ice_xmit_zc(tx_ring, xsk_pool);
 		else if (ice_ring_is_xdp(tx_ring))
 			wd = true;
 		else
···
 		budget_per_ring = budget;

 	ice_for_each_rx_ring(rx_ring, q_vector->rx) {
+		struct xsk_buff_pool *xsk_pool = READ_ONCE(rx_ring->xsk_pool);
 		int cleaned;

 		/* A dedicated path for zero-copy allows making a single
 		 * ice_clean_rx_irq function and makes the codebase cleaner.
 		 */
 		cleaned = rx_ring->xsk_pool ?
-			  ice_clean_rx_irq_zc(rx_ring, budget_per_ring) :
+			  ice_clean_rx_irq_zc(rx_ring, xsk_pool, budget_per_ring) :
 			  ice_clean_rx_irq(rx_ring, budget_per_ring);
 		work_done += cleaned;
 		/* if we clean as many as budgeted, we must not be done */
drivers/net/ethernet/intel/ice/ice_xsk.c | +111 -73
···
 static void ice_qp_clean_rings(struct ice_vsi *vsi, u16 q_idx)
 {
 	ice_clean_tx_ring(vsi->tx_rings[q_idx]);
-	if (ice_is_xdp_ena_vsi(vsi)) {
-		synchronize_rcu();
+	if (ice_is_xdp_ena_vsi(vsi))
 		ice_clean_tx_ring(vsi->xdp_rings[q_idx]);
-	}
 	ice_clean_rx_ring(vsi->rx_rings[q_idx]);
 }
···
 /**
  * ice_qvec_cfg_msix - Enable IRQ for given queue vector
  * @vsi: the VSI that contains queue vector
  * @q_vector: queue vector
+ * @qid: queue index
  */
 static void
-ice_qvec_cfg_msix(struct ice_vsi *vsi, struct ice_q_vector *q_vector)
+ice_qvec_cfg_msix(struct ice_vsi *vsi, struct ice_q_vector *q_vector, u16 qid)
 {
 	u16 reg_idx = q_vector->reg_idx;
 	struct ice_pf *pf = vsi->back;
 	struct ice_hw *hw = &pf->hw;
-	struct ice_tx_ring *tx_ring;
-	struct ice_rx_ring *rx_ring;
+	int q, _qid = qid;

 	ice_cfg_itr(hw, q_vector);

-	ice_for_each_tx_ring(tx_ring, q_vector->tx)
-		ice_cfg_txq_interrupt(vsi, tx_ring->reg_idx, reg_idx,
-				      q_vector->tx.itr_idx);
+	for (q = 0; q < q_vector->num_ring_tx; q++) {
+		ice_cfg_txq_interrupt(vsi, _qid, reg_idx, q_vector->tx.itr_idx);
+		_qid++;
+	}

-	ice_for_each_rx_ring(rx_ring, q_vector->rx)
-		ice_cfg_rxq_interrupt(vsi, rx_ring->reg_idx, reg_idx,
-				      q_vector->rx.itr_idx);
+	_qid = qid;
+
+	for (q = 0; q < q_vector->num_ring_rx; q++) {
+		ice_cfg_rxq_interrupt(vsi, _qid, reg_idx, q_vector->rx.itr_idx);
+		_qid++;
+	}

 	ice_flush(hw);
 }
···
 	struct ice_tx_ring *tx_ring;
 	struct ice_rx_ring *rx_ring;
 	int timeout = 50;
+	int fail = 0;
 	int err;

 	if (q_idx >= vsi->num_rxq || q_idx >= vsi->num_txq)
···
 		usleep_range(1000, 2000);
 	}

+	synchronize_net();
+	netif_carrier_off(vsi->netdev);
+	netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
+
 	ice_qvec_dis_irq(vsi, rx_ring, q_vector);
 	ice_qvec_toggle_napi(vsi, q_vector, false);

-	netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
-
 	ice_fill_txq_meta(vsi, tx_ring, &txq_meta);
 	err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, tx_ring, &txq_meta);
-	if (err)
-		return err;
+	if (!fail)
+		fail = err;
 	if (ice_is_xdp_ena_vsi(vsi)) {
 		struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_idx];

 		ice_fill_txq_meta(vsi, xdp_ring, &txq_meta);
 		err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, xdp_ring,
 					   &txq_meta);
-		if (err)
-			return err;
+		if (!fail)
+			fail = err;
 	}
-	err = ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, true);
-	if (err)
-		return err;

+	ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, false);
 	ice_qp_clean_rings(vsi, q_idx);
 	ice_qp_reset_stats(vsi, q_idx);

-	return 0;
+	return fail;
 }

 /**
···
 static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
 {
 	struct ice_q_vector *q_vector;
+	int fail = 0;
+	bool link_up;
 	int err;

 	err = ice_vsi_cfg_single_txq(vsi, vsi->tx_rings, q_idx);
-	if (err)
-		return err;
+	if (!fail)
+		fail = err;

 	if (ice_is_xdp_ena_vsi(vsi)) {
 		struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_idx];

 		err = ice_vsi_cfg_single_txq(vsi, vsi->xdp_rings, q_idx);
-		if (err)
-			return err;
+		if (!fail)
+			fail = err;
 		ice_set_ring_xdp(xdp_ring);
 		ice_tx_xsk_pool(vsi, q_idx);
 	}

 	err = ice_vsi_cfg_single_rxq(vsi, q_idx);
-	if (err)
-		return err;
+	if (!fail)
+		fail = err;

 	q_vector = vsi->rx_rings[q_idx]->q_vector;
-	ice_qvec_cfg_msix(vsi, q_vector);
+	ice_qvec_cfg_msix(vsi, q_vector, q_idx);

 	err = ice_vsi_ctrl_one_rx_ring(vsi, true, q_idx, true);
-	if (err)
-		return err;
+	if (!fail)
+		fail = err;

 	ice_qvec_toggle_napi(vsi, q_vector, true);
 	ice_qvec_ena_irq(vsi, q_vector);

-	netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
+	/* make sure NAPI sees updated ice_{t,x}_ring::xsk_pool */
+	synchronize_net();
+	ice_get_link_status(vsi->port_info, &link_up);
+	if (link_up) {
+		netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
+		netif_carrier_on(vsi->netdev);
+	}
 	clear_bit(ICE_CFG_BUSY, vsi->state);

-	return 0;
+	return fail;
 }
···
 /**
  * __ice_alloc_rx_bufs_zc - allocate a number of Rx buffers
  * @rx_ring: Rx ring
+ * @xsk_pool: XSK buffer pool to pick buffers to be filled by HW
  * @count: The number of buffers to allocate
  *
  * Place the @count of descriptors onto Rx ring. Handle the ring wrap
···
  *
  * Returns true if all allocations were successful, false if any fail.
  */
-static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
+static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring,
+				   struct xsk_buff_pool *xsk_pool, u16 count)
 {
 	u32 nb_buffs_extra = 0, nb_buffs = 0;
 	union ice_32b_rx_flex_desc *rx_desc;
···
 	xdp = ice_xdp_buf(rx_ring, ntu);

 	if (ntu + count >= rx_ring->count) {
-		nb_buffs_extra = ice_fill_rx_descs(rx_ring->xsk_pool, xdp,
-						   rx_desc,
+		nb_buffs_extra = ice_fill_rx_descs(xsk_pool, xdp, rx_desc,
 						   rx_ring->count - ntu);
 		if (nb_buffs_extra != rx_ring->count - ntu) {
 			ntu += nb_buffs_extra;
···
 		ice_release_rx_desc(rx_ring, 0);
 	}

-	nb_buffs = ice_fill_rx_descs(rx_ring->xsk_pool, xdp, rx_desc, count);
+	nb_buffs = ice_fill_rx_descs(xsk_pool, xdp, rx_desc, count);

 	ntu += nb_buffs;
 	if (ntu == rx_ring->count)
···
 /**
  * ice_alloc_rx_bufs_zc - allocate a number of Rx buffers
  * @rx_ring: Rx ring
+ * @xsk_pool: XSK buffer pool to pick buffers to be filled by HW
  * @count: The number of buffers to allocate
  *
  * Wrapper for internal allocation routine; figure out how many tail
···
  *
  * Returns true if all calls to internal alloc routine succeeded
  */
-bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
+bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring,
+			  struct xsk_buff_pool *xsk_pool, u16 count)
 {
 	u16 rx_thresh = ICE_RING_QUARTER(rx_ring);
 	u16 leftover, i, tail_bumps;
···
 	leftover = count - (tail_bumps * rx_thresh);

 	for (i = 0; i < tail_bumps; i++)
-		if (!__ice_alloc_rx_bufs_zc(rx_ring, rx_thresh))
+		if (!__ice_alloc_rx_bufs_zc(rx_ring, xsk_pool, rx_thresh))
 			return false;
-	return __ice_alloc_rx_bufs_zc(rx_ring, leftover);
+	return __ice_alloc_rx_bufs_zc(rx_ring, xsk_pool, leftover);
 }
···
 /**
  * ice_clean_xdp_irq_zc - produce AF_XDP descriptors to CQ
  * @xdp_ring: XDP Tx ring
+ * @xsk_pool: AF_XDP buffer pool pointer
  */
-static u32 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring)
+static u32 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring,
+				struct xsk_buff_pool *xsk_pool)
 {
 	u16 ntc = xdp_ring->next_to_clean;
 	struct ice_tx_desc *tx_desc;
···
 	if (xdp_ring->next_to_clean >= cnt)
 		xdp_ring->next_to_clean -= cnt;
 	if (xsk_frames)
-		xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames);
+		xsk_tx_completed(xsk_pool, xsk_frames);

 	return completed_frames;
 }
···
 * ice_xmit_xdp_tx_zc - AF_XDP ZC handler for XDP_TX
  * @xdp: XDP buffer to xmit
  * @xdp_ring: XDP ring to produce descriptor onto
+ * @xsk_pool: AF_XDP buffer pool pointer
  *
  * note that this function works directly on xdp_buff, no need to convert
  * it to xdp_frame. xdp_buff pointer is stored to ice_tx_buf so that cleaning
···
  * was not enough space on XDP ring
  */
 static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp,
-			      struct ice_tx_ring *xdp_ring)
+			      struct ice_tx_ring *xdp_ring,
+			      struct xsk_buff_pool *xsk_pool)
 {
 	struct skb_shared_info *sinfo = NULL;
 	u32 size = xdp->data_end - xdp->data;
···
 	free_space = ICE_DESC_UNUSED(xdp_ring);
 	if (free_space < ICE_RING_QUARTER(xdp_ring))
-		free_space += ice_clean_xdp_irq_zc(xdp_ring);
+		free_space += ice_clean_xdp_irq_zc(xdp_ring, xsk_pool);

 	if (unlikely(!free_space))
 		goto busy;
···
 		dma_addr_t dma;

 		dma = xsk_buff_xdp_get_dma(xdp);
-		xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, size);
+		xsk_buff_raw_dma_sync_for_device(xsk_pool, dma, size);

 		tx_buf->xdp = xdp;
 		tx_buf->type = ICE_TX_BUF_XSK_TX;
···
  * @xdp: xdp_buff used as input to the XDP program
  * @xdp_prog: XDP program to run
  * @xdp_ring: ring to be used for XDP_TX action
+ * @xsk_pool: AF_XDP buffer pool pointer
  *
  * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
  */
 static int
 ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
-	       struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring)
+	       struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring,
+	       struct xsk_buff_pool *xsk_pool)
 {
 	int err, result = ICE_XDP_PASS;
 	u32 act;
···
 		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
 		if (!err)
 			return ICE_XDP_REDIR;
-		if (xsk_uses_need_wakeup(rx_ring->xsk_pool) && err == -ENOBUFS)
+		if (xsk_uses_need_wakeup(xsk_pool) && err == -ENOBUFS)
 			result = ICE_XDP_EXIT;
 		else
 			result = ICE_XDP_CONSUMED;
···
 	case XDP_PASS:
 		break;
 	case XDP_TX:
-		result = ice_xmit_xdp_tx_zc(xdp, xdp_ring);
+		result = ice_xmit_xdp_tx_zc(xdp, xdp_ring, xsk_pool);
 		if (result == ICE_XDP_CONSUMED)
 			goto out_failure;
 		break;
···
 /**
  * ice_clean_rx_irq_zc - consumes packets from the hardware ring
  * @rx_ring: AF_XDP Rx ring
+ * @xsk_pool: AF_XDP buffer pool pointer
  * @budget: NAPI budget
  *
  * Returns number of processed packets on success, remaining budget on failure.
  */
-int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
+int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring,
+			struct xsk_buff_pool *xsk_pool,
+			int budget)
 {
 	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
-	struct xsk_buff_pool *xsk_pool = rx_ring->xsk_pool;
 	u32 ntc = rx_ring->next_to_clean;
 	u32 ntu = rx_ring->next_to_use;
 	struct xdp_buff *first = NULL;
···
 		if (ice_is_non_eop(rx_ring, rx_desc))
 			continue;

-		xdp_res = ice_run_xdp_zc(rx_ring, first, xdp_prog, xdp_ring);
+		xdp_res = ice_run_xdp_zc(rx_ring, first, xdp_prog, xdp_ring,
+					 xsk_pool);
 		if (likely(xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR))) {
 			xdp_xmit |= xdp_res;
 		} else if (xdp_res == ICE_XDP_EXIT) {
···
 	rx_ring->next_to_clean = ntc;
 	entries_to_alloc = ICE_RX_DESC_UNUSED(rx_ring);
 	if (entries_to_alloc > ICE_RING_QUARTER(rx_ring))
-		failure |= !ice_alloc_rx_bufs_zc(rx_ring, entries_to_alloc);
+		failure |= !ice_alloc_rx_bufs_zc(rx_ring, xsk_pool,
+						 entries_to_alloc);

 	ice_finalize_xdp_rx(xdp_ring, xdp_xmit, 0);
 	ice_update_rx_ring_stats(rx_ring, total_rx_packets, total_rx_bytes);
···
 /**
  * ice_xmit_pkt - produce a single HW Tx descriptor out of AF_XDP descriptor
  * @xdp_ring: XDP ring to produce the HW Tx descriptor on
+ * @xsk_pool: XSK buffer pool to pick buffers to be consumed by HW
  * @desc: AF_XDP descriptor to pull the DMA address and length from
  * @total_bytes: bytes accumulator that will be used for stats update
  */
-static void ice_xmit_pkt(struct ice_tx_ring *xdp_ring, struct xdp_desc *desc,
+static void ice_xmit_pkt(struct ice_tx_ring *xdp_ring,
+			 struct xsk_buff_pool *xsk_pool, struct xdp_desc *desc,
 			 unsigned int *total_bytes)
 {
 	struct ice_tx_desc *tx_desc;
 	dma_addr_t dma;

-	dma = xsk_buff_raw_get_dma(xdp_ring->xsk_pool, desc->addr);
-	xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, desc->len);
+	dma = xsk_buff_raw_get_dma(xsk_pool, desc->addr);
+	xsk_buff_raw_dma_sync_for_device(xsk_pool, dma, desc->len);

 	tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_to_use++);
 	tx_desc->buf_addr = cpu_to_le64(dma);
···
 /**
  * ice_xmit_pkt_batch - produce a batch of HW Tx descriptors out of AF_XDP descriptors
  * @xdp_ring: XDP ring to produce the HW Tx descriptors on
+ * @xsk_pool: XSK buffer pool to pick buffers to be consumed by HW
  * @descs: AF_XDP descriptors to pull the DMA addresses and lengths from
  * @total_bytes: bytes accumulator that will be used for stats update
  */
-static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs,
+static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring,
+			       struct xsk_buff_pool *xsk_pool,
+			       struct xdp_desc *descs,
 			       unsigned int *total_bytes)
 {
 	u16 ntu = xdp_ring->next_to_use;
···
 	loop_unrolled_for(i = 0; i < PKTS_PER_BATCH; i++) {
 		dma_addr_t dma;

-		dma = xsk_buff_raw_get_dma(xdp_ring->xsk_pool, descs[i].addr);
-		xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, descs[i].len);
+		dma = xsk_buff_raw_get_dma(xsk_pool, descs[i].addr);
+		xsk_buff_raw_dma_sync_for_device(xsk_pool, dma, descs[i].len);

 		tx_desc = ICE_TX_DESC(xdp_ring, ntu++);
 		tx_desc->buf_addr = cpu_to_le64(dma);
···
 /**
  * ice_fill_tx_hw_ring - produce the number of Tx descriptors onto ring
  * @xdp_ring: XDP ring to produce the HW Tx descriptors on
+ * @xsk_pool: XSK buffer pool to pick buffers to be consumed by HW
  * @descs: AF_XDP descriptors to pull the DMA addresses and lengths from
  * @nb_pkts: count of packets to be send
  * @total_bytes: bytes accumulator that will be used for stats update
  */
-static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs,
-				u32 nb_pkts, unsigned int *total_bytes)
+static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring,
+				struct xsk_buff_pool *xsk_pool,
+				struct xdp_desc *descs, u32 nb_pkts,
+				unsigned int *total_bytes)
 {
 	u32 batched, leftover, i;

 	batched = ALIGN_DOWN(nb_pkts, PKTS_PER_BATCH);
 	leftover = nb_pkts & (PKTS_PER_BATCH - 1);
 	for (i = 0; i < batched; i += PKTS_PER_BATCH)
-		ice_xmit_pkt_batch(xdp_ring, &descs[i], total_bytes);
+		ice_xmit_pkt_batch(xdp_ring, xsk_pool, &descs[i], total_bytes);
 	for (; i < batched + leftover; i++)
-		ice_xmit_pkt(xdp_ring, &descs[i], total_bytes);
+		ice_xmit_pkt(xdp_ring, xsk_pool, &descs[i], total_bytes);
 }

 /**
  * ice_xmit_zc - take entries from XSK Tx ring and place them onto HW Tx ring
  * @xdp_ring: XDP ring to produce the HW Tx descriptors on
+ * @xsk_pool: AF_XDP buffer pool pointer
  *
  * Returns true if there is no more work that needs to be done, false otherwise
  */
-bool ice_xmit_zc(struct ice_tx_ring *xdp_ring)
+bool ice_xmit_zc(struct ice_tx_ring *xdp_ring, struct xsk_buff_pool *xsk_pool)
 {
-	struct xdp_desc *descs = xdp_ring->xsk_pool->tx_descs;
+	struct xdp_desc *descs = xsk_pool->tx_descs;
 	u32 nb_pkts, nb_processed = 0;
 	unsigned int total_bytes = 0;
 	int budget;

-	ice_clean_xdp_irq_zc(xdp_ring);
+	ice_clean_xdp_irq_zc(xdp_ring, xsk_pool);
+
+	if (!netif_carrier_ok(xdp_ring->vsi->netdev) ||
+	    !netif_running(xdp_ring->vsi->netdev))
+		return true;

 	budget = ICE_DESC_UNUSED(xdp_ring);
 	budget = min_t(u16, budget, ICE_RING_QUARTER(xdp_ring));

-	nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, budget);
+	nb_pkts = xsk_tx_peek_release_desc_batch(xsk_pool, budget);
 	if (!nb_pkts)
 		return true;

 	if (xdp_ring->next_to_use + nb_pkts >= xdp_ring->count) {
 		nb_processed = xdp_ring->count - xdp_ring->next_to_use;
-		ice_fill_tx_hw_ring(xdp_ring, descs, nb_processed, &total_bytes);
+		ice_fill_tx_hw_ring(xdp_ring, xsk_pool, descs, nb_processed,
+				    &total_bytes);
 		xdp_ring->next_to_use = 0;
 	}

-	ice_fill_tx_hw_ring(xdp_ring, &descs[nb_processed], nb_pkts - nb_processed,
-			    &total_bytes);
+	ice_fill_tx_hw_ring(xdp_ring, xsk_pool, &descs[nb_processed],
+			    nb_pkts - nb_processed, &total_bytes);

 	ice_set_rs_bit(xdp_ring);
 	ice_xdp_ring_update_tail(xdp_ring);
 	ice_update_tx_ring_stats(xdp_ring, nb_pkts, total_bytes);

-	if (xsk_uses_need_wakeup(xdp_ring->xsk_pool))
-		xsk_set_tx_need_wakeup(xdp_ring->xsk_pool);
+	if (xsk_uses_need_wakeup(xsk_pool))
+		xsk_set_tx_need_wakeup(xsk_pool);

 	return nb_pkts < budget;
 }
···
 	struct ice_vsi *vsi = np->vsi;
 	struct ice_tx_ring *ring;

-	if (test_bit(ICE_VSI_DOWN, vsi->state))
+	if (test_bit(ICE_VSI_DOWN, vsi->state) || !netif_carrier_ok(netdev))
 		return -ENETDOWN;

 	if (!ice_is_xdp_ena_vsi(vsi))
···
 	ring = vsi->rx_rings[queue_id]->xdp_ring;

-	if (!ring->xsk_pool)
+	if (!READ_ONCE(ring->xsk_pool))
 		return -EINVAL;

 	/* The idea here is that if NAPI is running, mark a miss, so
drivers/net/ethernet/intel/ice/ice_xsk.h | +10 -4
···
 #ifdef CONFIG_XDP_SOCKETS
 int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool,
 		       u16 qid);
-int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget);
+int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring,
+			struct xsk_buff_pool *xsk_pool,
+			int budget);
 int ice_xsk_wakeup(struct net_device *netdev, u32 queue_id, u32 flags);
-bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count);
+bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring,
+			  struct xsk_buff_pool *xsk_pool, u16 count);
 bool ice_xsk_any_rx_ring_ena(struct ice_vsi *vsi);
 void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring);
 void ice_xsk_clean_xdp_ring(struct ice_tx_ring *xdp_ring);
-bool ice_xmit_zc(struct ice_tx_ring *xdp_ring);
+bool ice_xmit_zc(struct ice_tx_ring *xdp_ring, struct xsk_buff_pool *xsk_pool);
 int ice_realloc_zc_buf(struct ice_vsi *vsi, bool zc);
 #else
-static inline bool ice_xmit_zc(struct ice_tx_ring __always_unused *xdp_ring)
+static inline bool ice_xmit_zc(struct ice_tx_ring __always_unused *xdp_ring,
+			       struct xsk_buff_pool __always_unused *xsk_pool)
 {
 	return false;
 }
···
 static inline int
 ice_clean_rx_irq_zc(struct ice_rx_ring __always_unused *rx_ring,
+		    struct xsk_buff_pool __always_unused *xsk_pool,
		    int __always_unused budget)
 {
 	return 0;
···
 static inline bool
 ice_alloc_rx_bufs_zc(struct ice_rx_ring __always_unused *rx_ring,
+		     struct xsk_buff_pool __always_unused *xsk_pool,
 		     u16 __always_unused count)
 {
 	return false;
drivers/net/ethernet/intel/igc/igc_main.c | +17 -16
···
 	size_t n;
 	int i;

-	switch (qopt->cmd) {
-	case TAPRIO_CMD_REPLACE:
-		break;
-	case TAPRIO_CMD_DESTROY:
-		return igc_tsn_clear_schedule(adapter);
-	case TAPRIO_CMD_STATS:
-		igc_taprio_stats(adapter->netdev, &qopt->stats);
-		return 0;
-	case TAPRIO_CMD_QUEUE_STATS:
-		igc_taprio_queue_stats(adapter->netdev, &qopt->queue_stats);
-		return 0;
-	default:
-		return -EOPNOTSUPP;
-	}
-
 	if (qopt->base_time < 0)
 		return -ERANGE;
···
 	if (hw->mac.type != igc_i225)
 		return -EOPNOTSUPP;

-	err = igc_save_qbv_schedule(adapter, qopt);
+	switch (qopt->cmd) {
+	case TAPRIO_CMD_REPLACE:
+		err = igc_save_qbv_schedule(adapter, qopt);
+		break;
+	case TAPRIO_CMD_DESTROY:
+		err = igc_tsn_clear_schedule(adapter);
+		break;
+	case TAPRIO_CMD_STATS:
+		igc_taprio_stats(adapter->netdev, &qopt->stats);
+		return 0;
+	case TAPRIO_CMD_QUEUE_STATS:
+		igc_taprio_queue_stats(adapter->netdev, &qopt->queue_stats);
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
+
 	if (err)
 		return err;
+3 -3
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
···
 static void mvpp2_bm_pool_update_priv_fc(struct mvpp2 *priv, bool en)
 {
 	struct mvpp2_port *port;
-	int i;
+	int i, j;
 
 	for (i = 0; i < priv->port_count; i++) {
 		port = priv->port_list[i];
 		if (port->priv->percpu_pools) {
-			for (i = 0; i < port->nrxqs; i++)
-				mvpp2_bm_pool_update_fc(port, &port->priv->bm_pools[i],
+			for (j = 0; j < port->nrxqs; j++)
+				mvpp2_bm_pool_update_fc(port, &port->priv->bm_pools[j],
 							port->tx_fc & en);
 		} else {
 			mvpp2_bm_pool_update_fc(port, port->pool_long, port->tx_fc & en);
+1
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
···
 	mlx5_tc_ct_entry_destroy_mod_hdr(ct_priv, zone_rule->attr, mh);
 	mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id);
 err_mod_hdr:
+	*attr = *old_attr;
 	kfree(old_attr);
 err_attr:
 	kvfree(spec);
+4 -3
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
···
 	    MLX5_CAP_FLOWTABLE_NIC_RX(mdev, decap))
 		caps |= MLX5_IPSEC_CAP_PACKET_OFFLOAD;
 
-	if ((MLX5_CAP_FLOWTABLE_NIC_TX(mdev, ignore_flow_level) &&
-	     MLX5_CAP_FLOWTABLE_NIC_RX(mdev, ignore_flow_level)) ||
-	    MLX5_CAP_ESW_FLOWTABLE_FDB(mdev, ignore_flow_level))
+	if (IS_ENABLED(CONFIG_MLX5_CLS_ACT) &&
+	    ((MLX5_CAP_FLOWTABLE_NIC_TX(mdev, ignore_flow_level) &&
+	      MLX5_CAP_FLOWTABLE_NIC_RX(mdev, ignore_flow_level)) ||
+	     MLX5_CAP_ESW_FLOWTABLE_FDB(mdev, ignore_flow_level)))
 		caps |= MLX5_IPSEC_CAP_PRIO;
 
 	if (MLX5_CAP_FLOWTABLE_NIC_TX(mdev,
+6 -1
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
···
 	if (!an_changes && link_modes == eproto.admin)
 		goto out;
 
-	mlx5_port_set_eth_ptys(mdev, an_disable, link_modes, ext);
+	err = mlx5_port_set_eth_ptys(mdev, an_disable, link_modes, ext);
+	if (err) {
+		netdev_err(priv->netdev, "%s: failed to set ptys reg: %d\n", __func__, err);
+		goto out;
+	}
+
 	mlx5_toggle_port_link(mdev);
 
 out:
+4 -1
drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
···
 static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev, bool unloaded)
 {
 	struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
+	struct devlink *devlink = priv_to_devlink(dev);
 
 	/* if this is the driver that initiated the fw reset, devlink completed the reload */
 	if (test_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags)) {
···
 			mlx5_core_err(dev, "reset reload flow aborted, PCI reads still not working\n");
 		else
 			mlx5_load_one(dev, true);
-		devlink_remote_reload_actions_performed(priv_to_devlink(dev), 0,
+		devl_lock(devlink);
+		devlink_remote_reload_actions_performed(devlink, 0,
 							BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT) |
 							BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE));
+		devl_unlock(devlink);
 	}
 }
+7 -3
drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c
···
 irq_pool_request_irq(struct mlx5_irq_pool *pool, struct irq_affinity_desc *af_desc)
 {
 	struct irq_affinity_desc auto_desc = {};
+	struct mlx5_irq *irq;
 	u32 irq_index;
 	int err;
···
 		else
 			cpu_get(pool, cpumask_first(&af_desc->mask));
 	}
-	return mlx5_irq_alloc(pool, irq_index,
-			      cpumask_empty(&auto_desc.mask) ? af_desc : &auto_desc,
-			      NULL);
+	irq = mlx5_irq_alloc(pool, irq_index,
+			     cpumask_empty(&auto_desc.mask) ? af_desc : &auto_desc,
+			     NULL);
+	if (IS_ERR(irq))
+		xa_erase(&pool->irqs, irq_index);
+	return irq;
 }
 
 /* Looking for the IRQ with the smallest refcount that fits req_mask.
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
···
 		goto unlock;
 
 	for (i = 0; i < ldev->ports; i++) {
-		if (ldev->pf[MLX5_LAG_P1].netdev == slave) {
+		if (ldev->pf[i].netdev == slave) {
 			port = i;
 			break;
 		}
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/main.c
···
 	/* Panic tear down fw command will stop the PCI bus communication
 	 * with the HCA, so the health poll is no longer needed.
 	 */
-	mlx5_drain_health_wq(dev);
 	mlx5_stop_health_poll(dev, false);
 
 	ret = mlx5_cmd_fast_teardown_hca(dev);
···
 
 	mlx5_core_info(dev, "Shutdown was called\n");
 	set_bit(MLX5_BREAK_FW_WAIT, &dev->intf_state);
+	mlx5_drain_health_wq(dev);
 	err = mlx5_try_fast_unload(dev);
 	if (err)
 		mlx5_unload_one(dev, false);
+1
drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c
···
 	struct mlx5_core_dev *mdev = sf_dev->mdev;
 
 	set_bit(MLX5_BREAK_FW_WAIT, &mdev->intf_state);
+	mlx5_drain_health_wq(mdev);
 	mlx5_unload_one(mdev, false);
 }
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c
···
 /* don't try to optimize STE allocation if the stack is too constaraining */
 #define DR_RULE_MAX_STES_OPTIMIZED 0
 #else
-#define DR_RULE_MAX_STES_OPTIMIZED 5
+#define DR_RULE_MAX_STES_OPTIMIZED 2
 #endif
 #define DR_RULE_MAX_STE_CHAIN_OPTIMIZED (DR_RULE_MAX_STES_OPTIMIZED + DR_ACTION_MAX_STES)
+1 -1
drivers/net/ethernet/meta/Kconfig
···
 config FBNIC
 	tristate "Meta Platforms Host Network Interface"
 	depends on X86_64 || COMPILE_TEST
-	depends on S390=n
+	depends on !S390
 	depends on MAX_SKB_FRAGS < 22
 	depends on PCI_MSI
 	select PHYLINK
+2 -6
drivers/net/ethernet/realtek/r8169_main.c
···
 	if (unlikely(!rtl_tx_slots_avail(tp))) {
 		if (net_ratelimit())
 			netdev_err(dev, "BUG! Tx Ring full when queue awake!\n");
-		goto err_stop_0;
+		netif_stop_queue(dev);
+		return NETDEV_TX_BUSY;
 	}
 
 	opts[1] = rtl8169_tx_vlan_tag(skb);
···
 	dev_kfree_skb_any(skb);
 	dev->stats.tx_dropped++;
 	return NETDEV_TX_OK;
-
-err_stop_0:
-	netif_stop_queue(dev);
-	dev->stats.tx_dropped++;
-	return NETDEV_TX_BUSY;
 }
 
 static unsigned int rtl_last_frag_len(struct sk_buff *skb)
+1 -1
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
···
 			 ~(XAE_OPTION_TXEN | XAE_OPTION_RXEN));
 	axienet_set_mac_address(ndev, NULL);
 	axienet_set_multicast_list(ndev);
-	axienet_setoptions(ndev, lp->options);
 	napi_enable(&lp->napi_rx);
 	napi_enable(&lp->napi_tx);
+	axienet_setoptions(ndev, lp->options);
 }
 
 /**
+21 -8
drivers/net/phy/aquantia/aquantia_main.c
···
 	unsigned long *possible = phydev->possible_interfaces;
 	unsigned int serdes_mode, rate_adapt;
 	phy_interface_t interface;
-	int i, val, ret;
-
-	ret = phy_read_mmd_poll_timeout(phydev, MDIO_MMD_VEND1,
-					VEND1_GLOBAL_CFG_10M, val, val != 0,
-					1000, 100000, false);
-	if (ret)
-		return ret;
+	int i, val;
 
 	/* Walk the media-speed configuration registers to determine which
 	 * host-side serdes modes may be used by the PHY depending on the
···
 	return 0;
 }
 
+static int aqr113c_fill_interface_modes(struct phy_device *phydev)
+{
+	int val, ret;
+
+	/* It's been observed on some models that - when coming out of suspend
+	 * - the FW signals that the PHY is ready but the GLOBAL_CFG registers
+	 * continue on returning zeroes for some time. Let's poll the 100M
+	 * register until it returns a real value as both 113c and 115c support
+	 * this mode.
+	 */
+	ret = phy_read_mmd_poll_timeout(phydev, MDIO_MMD_VEND1,
+					VEND1_GLOBAL_CFG_100M, val, val != 0,
+					1000, 100000, false);
+	if (ret)
+		return ret;
+
+	return aqr107_fill_interface_modes(phydev);
+}
+
 static int aqr113c_config_init(struct phy_device *phydev)
 {
 	int ret;
···
 	if (ret)
 		return ret;
 
-	return aqr107_fill_interface_modes(phydev);
+	return aqr113c_fill_interface_modes(phydev);
 }
 
 static int aqr107_probe(struct phy_device *phydev)
+19 -15
drivers/net/phy/micrel.c
···
 	const struct device *dev_walker;
 	int ret;
 
+	phydev->mdix_ctrl = ETH_TP_MDI_AUTO;
+
 	dev_walker = &phydev->mdio.dev;
 	do {
 		of_node = dev_walker->of_node;
···
 #define MII_KSZ9131_AUTO_MDIX			0x1C
 #define MII_KSZ9131_AUTO_MDI_SET		BIT(7)
 #define MII_KSZ9131_AUTO_MDIX_SWAP_OFF		BIT(6)
+#define MII_KSZ9131_DIG_AXAN_STS		0x14
+#define MII_KSZ9131_DIG_AXAN_STS_LINK_DET	BIT(14)
+#define MII_KSZ9131_DIG_AXAN_STS_A_SELECT	BIT(12)
 
 static int ksz9131_mdix_update(struct phy_device *phydev)
 {
 	int ret;
 
-	ret = phy_read(phydev, MII_KSZ9131_AUTO_MDIX);
-	if (ret < 0)
-		return ret;
-
-	if (ret & MII_KSZ9131_AUTO_MDIX_SWAP_OFF) {
-		if (ret & MII_KSZ9131_AUTO_MDI_SET)
-			phydev->mdix_ctrl = ETH_TP_MDI;
-		else
-			phydev->mdix_ctrl = ETH_TP_MDI_X;
+	if (phydev->mdix_ctrl != ETH_TP_MDI_AUTO) {
+		phydev->mdix = phydev->mdix_ctrl;
 	} else {
-		phydev->mdix_ctrl = ETH_TP_MDI_AUTO;
-	}
+		ret = phy_read(phydev, MII_KSZ9131_DIG_AXAN_STS);
+		if (ret < 0)
+			return ret;
 
-	if (ret & MII_KSZ9131_AUTO_MDI_SET)
-		phydev->mdix = ETH_TP_MDI;
-	else
-		phydev->mdix = ETH_TP_MDI_X;
+		if (ret & MII_KSZ9131_DIG_AXAN_STS_LINK_DET) {
+			if (ret & MII_KSZ9131_DIG_AXAN_STS_A_SELECT)
+				phydev->mdix = ETH_TP_MDI;
+			else
+				phydev->mdix = ETH_TP_MDI_X;
+		} else {
+			phydev->mdix = ETH_TP_MDI_INVALID;
+		}
+	}
 
 	return 0;
 }
+7
drivers/net/phy/realtek.c
···
 		.handle_interrupt = genphy_handle_interrupt_no_ack,
 		.suspend	= genphy_suspend,
 		.resume		= genphy_resume,
+	}, {
+		PHY_ID_MATCH_EXACT(0x001cc960),
+		.name		= "RTL8366S Gigabit Ethernet",
+		.suspend	= genphy_suspend,
+		.resume		= genphy_resume,
+		.read_mmd	= genphy_read_mmd_unsupported,
+		.write_mmd	= genphy_write_mmd_unsupported,
 	},
 };
+9 -2
drivers/net/usb/sr9700.c
···
 	struct usbnet *dev = netdev_priv(netdev);
 	__le16 res;
 	int rc = 0;
+	int err;
 
 	if (phy_id) {
 		netdev_dbg(netdev, "Only internal phy supported\n");
···
 	if (loc == MII_BMSR) {
 		u8 value;
 
-		sr_read_reg(dev, SR_NSR, &value);
+		err = sr_read_reg(dev, SR_NSR, &value);
+		if (err < 0)
+			return err;
+
 		if (value & NSR_LINKST)
 			rc = 1;
 	}
-	sr_share_read_word(dev, 1, loc, &res);
+	err = sr_share_read_word(dev, 1, loc, &res);
+	if (err < 0)
+		return err;
+
 	if (rc == 1)
 		res = le16_to_cpu(res) | BMSR_LSTATUS;
 	else
+23 -10
drivers/net/wan/fsl_qmc_hdlc.c
···
 #include <linux/hdlc.h>
 #include <linux/mod_devicetable.h>
 #include <linux/module.h>
+#include <linux/mutex.h>
 #include <linux/platform_device.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
···
 	struct qmc_chan *qmc_chan;
 	struct net_device *netdev;
 	struct framer *framer;
-	spinlock_t carrier_lock; /* Protect carrier detection */
+	struct mutex carrier_lock; /* Protect carrier detection */
 	struct notifier_block nb;
 	bool is_crc32;
 	spinlock_t tx_lock; /* Protect tx descriptors */
···
 	if (!qmc_hdlc->framer)
 		return 0;
 
-	guard(spinlock_irqsave)(&qmc_hdlc->carrier_lock);
+	guard(mutex)(&qmc_hdlc->carrier_lock);
 
 	ret = framer_get_status(qmc_hdlc->framer, &framer_status);
 	if (ret) {
···
 	struct qmc_hdlc_desc *desc = context;
 	struct net_device *netdev;
 	struct qmc_hdlc *qmc_hdlc;
+	size_t crc_size;
 	int ret;
 
 	netdev = desc->netdev;
···
 		if (flags & QMC_RX_FLAG_HDLC_CRC) /* CRC error */
 			netdev->stats.rx_crc_errors++;
 		kfree_skb(desc->skb);
-	} else {
-		netdev->stats.rx_packets++;
-		netdev->stats.rx_bytes += length;
-
-		skb_put(desc->skb, length);
-		desc->skb->protocol = hdlc_type_trans(desc->skb, netdev);
-		netif_rx(desc->skb);
+		goto re_queue;
 	}
 
+	/* Discard the CRC */
+	crc_size = qmc_hdlc->is_crc32 ? 4 : 2;
+	if (length < crc_size) {
+		netdev->stats.rx_length_errors++;
+		kfree_skb(desc->skb);
+		goto re_queue;
+	}
+	length -= crc_size;
+
+	netdev->stats.rx_packets++;
+	netdev->stats.rx_bytes += length;
+
+	skb_put(desc->skb, length);
+	desc->skb->protocol = hdlc_type_trans(desc->skb, netdev);
+	netif_rx(desc->skb);
+
+re_queue:
 	/* Re-queue a transfer using the same descriptor */
 	ret = qmc_hdlc_recv_queue(qmc_hdlc, desc, desc->dma_size);
 	if (ret) {
···
 
 	qmc_hdlc->dev = dev;
 	spin_lock_init(&qmc_hdlc->tx_lock);
-	spin_lock_init(&qmc_hdlc->carrier_lock);
+	mutex_init(&qmc_hdlc->carrier_lock);
 
 	qmc_hdlc->qmc_chan = devm_qmc_chan_get_bychild(dev, dev->of_node);
 	if (IS_ERR(qmc_hdlc->qmc_chan))
+2 -1
drivers/net/wireless/ath/ath12k/pci.c
···
 {
 	int i;
 
-	clear_bit(ATH12K_FLAG_EXT_IRQ_ENABLED, &ab->dev_flags);
+	if (!test_and_clear_bit(ATH12K_FLAG_EXT_IRQ_ENABLED, &ab->dev_flags))
+		return;
 
 	for (i = 0; i < ATH12K_EXT_IRQ_GRP_NUM_MAX; i++) {
 		struct ath12k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
+4 -4
drivers/net/wireless/ath/ath12k/wow.c
···
 	struct ath12k *ar = arvif->ar;
 	unsigned long wow_mask = 0;
 	int pattern_id = 0;
-	int ret, i;
+	int ret, i, j;
 
 	/* Setup requested WOW features */
 	switch (arvif->vdev_type) {
···
 			       eth_pattern->pattern_len);
 
 			/* convert bitmask to bytemask */
-			for (i = 0; i < eth_pattern->pattern_len; i++)
-				if (eth_pattern->mask[i / 8] & BIT(i % 8))
-					new_pattern.bytemask[i] = 0xff;
+			for (j = 0; j < eth_pattern->pattern_len; j++)
+				if (eth_pattern->mask[j / 8] & BIT(j % 8))
+					new_pattern.bytemask[j] = 0xff;
 
 			new_pattern.pattern_len = eth_pattern->pattern_len;
 			new_pattern.pkt_offset = eth_pattern->pkt_offset;
+1
drivers/net/wireless/mediatek/mt76/mt7921/main.c
···
 
 	mvif->bss_conf.mt76.omac_idx = mvif->bss_conf.mt76.idx;
 	mvif->phy = phy;
+	mvif->bss_conf.vif = mvif;
 	mvif->bss_conf.mt76.band_idx = 0;
 	mvif->bss_conf.mt76.wmm_idx = mvif->bss_conf.mt76.idx % MT76_CONNAC_MAX_WMM_SETS;
+5 -11
include/linux/virtio_net.h
···
 	unsigned int thlen = 0;
 	unsigned int p_off = 0;
 	unsigned int ip_proto;
-	u64 ret, remainder, gso_size;
 
 	if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
 		switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
···
 		u32 start = __virtio16_to_cpu(little_endian, hdr->csum_start);
 		u32 off = __virtio16_to_cpu(little_endian, hdr->csum_offset);
 		u32 needed = start + max_t(u32, thlen, off + sizeof(__sum16));
-
-		if (hdr->gso_size) {
-			gso_size = __virtio16_to_cpu(little_endian, hdr->gso_size);
-			ret = div64_u64_rem(skb->len, gso_size, &remainder);
-			if (!(ret && (hdr->gso_size > needed) &&
-			      ((remainder > needed) || (remainder == 0)))) {
-				return -EINVAL;
-			}
-			skb_shinfo(skb)->tx_flags |= SKBFL_SHARED_FRAG;
-		}
 
 		if (!pskb_may_pull(skb, needed))
 			return -EINVAL;
···
 			if (skb->len - p_off > gso_size * UDP_MAX_SEGMENTS)
 				return -EINVAL;
 			if (gso_type != SKB_GSO_UDP_L4)
+				return -EINVAL;
+			break;
+		case SKB_GSO_TCPV4:
+		case SKB_GSO_TCPV6:
+			if (skb->csum_offset != offsetof(struct tcphdr, check))
 				return -EINVAL;
 			break;
 		}
+1 -1
include/trace/events/mptcp.h
···
 		struct sock *ssk;
 
 		__entry->active = mptcp_subflow_active(subflow);
-		__entry->backup = subflow->backup;
+		__entry->backup = subflow->backup || subflow->request_bkup;
 
 		if (subflow->tcp_sock && sk_fullsock(subflow->tcp_sock))
 			__entry->free = sk_stream_memory_free(subflow->tcp_sock);
-7
net/bluetooth/hci_core.c
···
 	case DISCOVERY_STARTING:
 		break;
 	case DISCOVERY_FINDING:
-		/* If discovery was not started then it was initiated by the
-		 * MGMT interface so no MGMT event shall be generated either
-		 */
-		if (old_state != DISCOVERY_STARTING) {
-			hdev->discovery.state = old_state;
-			return;
-		}
 		mgmt_discovering(hdev, 1);
 		break;
 	case DISCOVERY_RESOLVING:
+3 -2
net/bluetooth/hci_event.c
···
 	switch (enable) {
 	case LE_SCAN_ENABLE:
 		hci_dev_set_flag(hdev, HCI_LE_SCAN);
-		if (hdev->le_scan_type == LE_SCAN_ACTIVE)
+		if (hdev->le_scan_type == LE_SCAN_ACTIVE) {
 			clear_pending_adv_report(hdev);
-		hci_discovery_set_state(hdev, DISCOVERY_FINDING);
+			hci_discovery_set_state(hdev, DISCOVERY_FINDING);
+		}
 		break;
 
 	case LE_SCAN_DISABLE:
+21
net/bluetooth/hci_sync.c
···
 	 */
 	filter_policy = hci_update_accept_list_sync(hdev);
 
+	/* If suspended and filter_policy set to 0x00 (no acceptlist) then
+	 * passive scanning cannot be started since that would require the host
+	 * to be woken up to process the reports.
+	 */
+	if (hdev->suspended && !filter_policy) {
+		/* Check if accept list is empty then there is no need to scan
+		 * while suspended.
+		 */
+		if (list_empty(&hdev->le_accept_list))
+			return 0;
+
+		/* If there are devices is the accept_list that means some
+		 * devices could not be programmed which in non-suspended case
+		 * means filter_policy needs to be set to 0x00 so the host needs
+		 * to filter, but since this is treating suspended case we
+		 * can ignore device needing host to filter to allow devices in
+		 * the acceptlist to be able to wakeup the system.
+		 */
+		filter_policy = 0x01;
+	}
+
 	/* When the controller is using random resolvable addresses and
 	 * with that having LE privacy enabled, then controllers with
 	 * Extended Scanner Filter Policies support can now enable support
+1
net/core/dev.c
···
 			bpf_net_ctx_clear(bpf_net_ctx);
 			return XDP_DROP;
 		}
+		bpf_net_ctx_clear(bpf_net_ctx);
 	}
 	return XDP_PASS;
 out_redir:
+1 -1
net/core/rtnetlink.c
···
 	if (ifm->ifi_index > 0)
 		dev = __dev_get_by_index(tgt_net, ifm->ifi_index);
 	else if (tb[IFLA_IFNAME] || tb[IFLA_ALT_IFNAME])
-		dev = rtnl_dev_get(net, tb);
+		dev = rtnl_dev_get(tgt_net, tb);
 	else if (tb[IFLA_GROUP])
 		err = rtnl_group_dellink(tgt_net, nla_get_u32(tb[IFLA_GROUP]));
 	else
+33 -10
net/ethtool/ioctl.c
···
 	u32 rss_cfg_offset = offsetof(struct ethtool_rxfh, rss_config[0]);
 	const struct ethtool_ops *ops = dev->ethtool_ops;
 	u32 dev_indir_size = 0, dev_key_size = 0, i;
+	u32 user_indir_len = 0, indir_bytes = 0;
 	struct ethtool_rxfh_param rxfh_dev = {};
 	struct ethtool_rxfh_context *ctx = NULL;
 	struct netlink_ext_ack *extack = NULL;
 	struct ethtool_rxnfc rx_rings;
 	struct ethtool_rxfh rxfh;
 	bool locked = false; /* dev->ethtool->rss_lock taken */
-	u32 indir_bytes = 0;
 	bool create = false;
 	u8 *rss_config;
 	int ret;
···
 	    rxfh.input_xfrm == RXH_XFRM_NO_CHANGE))
 		return -EINVAL;
 
-	if (rxfh.indir_size != ETH_RXFH_INDIR_NO_CHANGE)
-		indir_bytes = dev_indir_size * sizeof(rxfh_dev.indir[0]);
+	indir_bytes = dev_indir_size * sizeof(rxfh_dev.indir[0]);
 
-	rss_config = kzalloc(indir_bytes + rxfh.key_size, GFP_USER);
+	rss_config = kzalloc(indir_bytes + dev_key_size, GFP_USER);
 	if (!rss_config)
 		return -ENOMEM;
···
 	 */
 	if (rxfh.indir_size &&
 	    rxfh.indir_size != ETH_RXFH_INDIR_NO_CHANGE) {
+		user_indir_len = indir_bytes;
 		rxfh_dev.indir = (u32 *)rss_config;
 		rxfh_dev.indir_size = dev_indir_size;
 		ret = ethtool_copy_validate_indir(rxfh_dev.indir,
···
 		rxfh_dev.key_size = dev_key_size;
 		rxfh_dev.key = rss_config + indir_bytes;
 		if (copy_from_user(rxfh_dev.key,
-				   useraddr + rss_cfg_offset + indir_bytes,
+				   useraddr + rss_cfg_offset + user_indir_len,
 				   rxfh.key_size)) {
 			ret = -EFAULT;
 			goto out;
···
 	rxfh_dev.input_xfrm = rxfh.input_xfrm;
 
 	if (rxfh.rss_context && ops->create_rxfh_context) {
-		if (create)
+		if (create) {
 			ret = ops->create_rxfh_context(dev, ctx, &rxfh_dev,
 						       extack);
-		else if (rxfh_dev.rss_delete)
+			/* Make sure driver populates defaults */
+			WARN_ON_ONCE(!ret && !rxfh_dev.key &&
+				     !memchr_inv(ethtool_rxfh_context_key(ctx),
+						 0, ctx->key_size));
+		} else if (rxfh_dev.rss_delete) {
 			ret = ops->remove_rxfh_context(dev, ctx,
 						       rxfh.rss_context,
 						       extack);
-		else
+		} else {
 			ret = ops->modify_rxfh_context(dev, ctx, &rxfh_dev,
 						       extack);
+		}
 	} else {
 		ret = ops->set_rxfh(dev, &rxfh_dev, extack);
 	}
···
 			kfree(ctx);
 			goto out;
 		}
+
+		/* Fetch the defaults for the old API, in the new API drivers
+		 * should write defaults into ctx themselves.
+		 */
+		rxfh_dev.indir = (u32 *)rss_config;
+		rxfh_dev.indir_size = dev_indir_size;
+
+		rxfh_dev.key = rss_config + indir_bytes;
+		rxfh_dev.key_size = dev_key_size;
+
+		ret = ops->get_rxfh(dev, &rxfh_dev);
+		if (WARN_ON(ret)) {
+			xa_erase(&dev->ethtool->rss_ctx, rxfh.rss_context);
+			kfree(ctx);
+			goto out;
+		}
 	}
 	if (rxfh_dev.rss_delete) {
 		WARN_ON(xa_erase(&dev->ethtool->rss_ctx, rxfh.rss_context) != ctx);
···
 		if (rxfh_dev.indir) {
 			for (i = 0; i < dev_indir_size; i++)
 				ethtool_rxfh_context_indir(ctx)[i] = rxfh_dev.indir[i];
-			ctx->indir_configured = 1;
+			ctx->indir_configured =
+				rxfh.indir_size &&
+				rxfh.indir_size != ETH_RXFH_INDIR_NO_CHANGE;
 		}
 		if (rxfh_dev.key) {
 			memcpy(ethtool_rxfh_context_key(ctx), rxfh_dev.key,
 			       dev_key_size);
-			ctx->key_configured = 1;
+			ctx->key_configured = !!rxfh.key_size;
 		}
 		if (rxfh_dev.hfunc != ETH_RSS_HASH_NO_CHANGE)
 			ctx->hfunc = rxfh_dev.hfunc;
+7 -1
net/ethtool/rss.c
··· 111 111 const struct rss_reply_data *data = RSS_REPDATA(reply_base); 112 112 int len; 113 113 114 - len = nla_total_size(sizeof(u32)) + /* _RSS_HFUNC */ 114 + len = nla_total_size(sizeof(u32)) + /* _RSS_CONTEXT */ 115 + nla_total_size(sizeof(u32)) + /* _RSS_HFUNC */ 115 116 nla_total_size(sizeof(u32)) + /* _RSS_INPUT_XFRM */ 116 117 nla_total_size(sizeof(u32) * data->indir_size) + /* _RSS_INDIR */ 117 118 nla_total_size(data->hkey_size); /* _RSS_HKEY */ ··· 125 124 const struct ethnl_reply_data *reply_base) 126 125 { 127 126 const struct rss_reply_data *data = RSS_REPDATA(reply_base); 127 + struct rss_req_info *request = RSS_REQINFO(req_base); 128 + 129 + if (request->rss_context && 130 + nla_put_u32(skb, ETHTOOL_A_RSS_CONTEXT, request->rss_context)) 131 + return -EMSGSIZE; 128 132 129 133 if ((data->hfunc && 130 134 nla_put_u32(skb, ETHTOOL_A_RSS_HFUNC, data->hfunc)) ||
+10 -8
net/ipv4/netfilter/iptable_nat.c
···
 
 static int __init iptable_nat_init(void)
 {
-	int ret = xt_register_template(&nf_nat_ipv4_table,
-				       iptable_nat_table_init);
+	int ret;
 
+	/* net->gen->ptr[iptable_nat_net_id] must be allocated
+	 * before calling iptable_nat_table_init().
+	 */
+	ret = register_pernet_subsys(&iptable_nat_net_ops);
 	if (ret < 0)
 		return ret;
 
-	ret = register_pernet_subsys(&iptable_nat_net_ops);
-	if (ret < 0) {
-		xt_unregister_template(&nf_nat_ipv4_table);
-		return ret;
-	}
+	ret = xt_register_template(&nf_nat_ipv4_table,
+				   iptable_nat_table_init);
+	if (ret < 0)
+		unregister_pernet_subsys(&iptable_nat_net_ops);
 
 	return ret;
 }
 
 static void __exit iptable_nat_exit(void)
 {
-	unregister_pernet_subsys(&iptable_nat_net_ops);
 	xt_unregister_template(&nf_nat_ipv4_table);
+	unregister_pernet_subsys(&iptable_nat_net_ops);
 }
 
 module_init(iptable_nat_init);
+16 -7
net/ipv4/tcp_input.c
···
 	 * <prev RTT . ><current RTT .. ><next RTT .... >
 	 */
 
-	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf) &&
-	    !(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) {
+	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf)) {
 		u64 rcvwin, grow;
 		int rcvbuf;
···
 
 		rcvbuf = min_t(u64, tcp_space_from_win(sk, rcvwin),
 			       READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2]));
-		if (rcvbuf > sk->sk_rcvbuf) {
-			WRITE_ONCE(sk->sk_rcvbuf, rcvbuf);
+		if (!(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) {
+			if (rcvbuf > sk->sk_rcvbuf) {
+				WRITE_ONCE(sk->sk_rcvbuf, rcvbuf);
 
-			/* Make the window clamp follow along. */
-			WRITE_ONCE(tp->window_clamp,
-				   tcp_win_from_space(sk, rcvbuf));
+				/* Make the window clamp follow along. */
+				WRITE_ONCE(tp->window_clamp,
+					   tcp_win_from_space(sk, rcvbuf));
+			}
+		} else {
+			/* Make the window clamp follow along while being bounded
+			 * by SO_RCVBUF.
+			 */
+			int clamp = tcp_win_from_space(sk, min(rcvbuf, sk->sk_rcvbuf));
+
+			if (clamp > tp->window_clamp)
+				WRITE_ONCE(tp->window_clamp, clamp);
 		}
 	}
 	tp->rcvq_space.space = copied;
+3
net/ipv4/tcp_offload.c
···
 	if (thlen < sizeof(*th))
 		goto out;
 
+	if (unlikely(skb_checksum_start(skb) != skb_transport_header(skb)))
+		goto out;
+
 	if (!pskb_may_pull(skb, thlen))
 		goto out;
+4
net/ipv4/udp_offload.c
···
 	if (gso_skb->len <= sizeof(*uh) + mss)
 		return ERR_PTR(-EINVAL);
 
+	if (unlikely(skb_checksum_start(gso_skb) !=
+		     skb_transport_header(gso_skb)))
+		return ERR_PTR(-EINVAL);
+
 	if (skb_gso_ok(gso_skb, features | NETIF_F_GSO_ROBUST)) {
 		/* Packet is from an untrusted source, reset gso_segs. */
 		skb_shinfo(gso_skb)->gso_segs = DIV_ROUND_UP(gso_skb->len - sizeof(*uh),
+18 -16
net/ipv6/ndisc.c
···
 		return NULL;
 	memset(ndopts, 0, sizeof(*ndopts));
 	while (opt_len) {
+		bool unknown = false;
 		int l;
 		if (opt_len < sizeof(struct nd_opt_hdr))
 			return NULL;
···
 			break;
 #endif
 		default:
-			if (ndisc_is_useropt(dev, nd_opt)) {
-				ndopts->nd_useropts_end = nd_opt;
-				if (!ndopts->nd_useropts)
-					ndopts->nd_useropts = nd_opt;
-			} else {
-				/*
-				 * Unknown options must be silently ignored,
-				 * to accommodate future extension to the
-				 * protocol.
-				 */
-				ND_PRINTK(2, notice,
-					  "%s: ignored unsupported option; type=%d, len=%d\n",
-					  __func__,
-					  nd_opt->nd_opt_type,
-					  nd_opt->nd_opt_len);
-			}
+			unknown = true;
+		}
+		if (ndisc_is_useropt(dev, nd_opt)) {
+			ndopts->nd_useropts_end = nd_opt;
+			if (!ndopts->nd_useropts)
+				ndopts->nd_useropts = nd_opt;
+		} else if (unknown) {
+			/*
+			 * Unknown options must be silently ignored,
+			 * to accommodate future extension to the
+			 * protocol.
+			 */
+			ND_PRINTK(2, notice,
+				  "%s: ignored unsupported option; type=%d, len=%d\n",
+				  __func__,
+				  nd_opt->nd_opt_type,
+				  nd_opt->nd_opt_len);
 		}
 next_opt:
 		opt_len -= l;
+9 -5
net/ipv6/netfilter/ip6table_nat.c
···
 
 static int __init ip6table_nat_init(void)
 {
-	int ret = xt_register_template(&nf_nat_ipv6_table,
-				       ip6table_nat_table_init);
+	int ret;
 
+	/* net->gen->ptr[ip6table_nat_net_id] must be allocated
+	 * before calling ip6t_nat_register_lookups().
+	 */
+	ret = register_pernet_subsys(&ip6table_nat_net_ops);
 	if (ret < 0)
 		return ret;
 
-	ret = register_pernet_subsys(&ip6table_nat_net_ops);
+	ret = xt_register_template(&nf_nat_ipv6_table,
+				   ip6table_nat_table_init);
 	if (ret)
-		xt_unregister_template(&nf_nat_ipv6_table);
+		unregister_pernet_subsys(&ip6table_nat_net_ops);
 
 	return ret;
 }
 
 static void __exit ip6table_nat_exit(void)
 {
-	unregister_pernet_subsys(&ip6table_nat_net_ops);
 	xt_unregister_template(&nf_nat_ipv6_table);
+	unregister_pernet_subsys(&ip6table_nat_net_ops);
 }
 
 module_init(ip6table_nat_init);
+2 -2
net/iucv/af_iucv.c
···
 	struct iucv_sock *iucv = iucv_sk(sk);
 	struct iucv_path *path = iucv->path;
 
-	if (iucv->path) {
-		iucv->path = NULL;
+	/* Whoever resets the path pointer, must sever and free it. */
+	if (xchg(&iucv->path, NULL)) {
 		if (with_user_data) {
 			low_nmcpy(user_data, iucv->src_name);
 			high_nmcpy(user_data, iucv->dst_name);
+5 -2
net/mac80211/cfg.c
···
 
 	/* apply all changes now - no failures allowed */
 
-	if (monitor_sdata)
+	if (monitor_sdata && ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF))
 		ieee80211_set_mu_mimo_follow(monitor_sdata, params);
 
 	if (params->flags) {
···
 	sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
 
 	if (sdata->vif.type == NL80211_IFTYPE_MONITOR) {
+		if (!ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF))
+			return -EOPNOTSUPP;
+
 		sdata = wiphy_dereference(local->hw.wiphy,
 					  local->monitor_sdata);
 		if (!sdata)
···
 	if (has_monitor) {
 		sdata = wiphy_dereference(local->hw.wiphy,
 					  local->monitor_sdata);
-		if (sdata) {
+		if (sdata && ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF)) {
 			sdata->deflink.user_power_level = local->user_power_level;
 			if (txp_type != sdata->vif.bss_conf.txpower_type)
 				update_txp_type = true;
+3 -2
net/mac80211/tx.c
···
 			break;
 		}
 		sdata = rcu_dereference(local->monitor_sdata);
-		if (sdata) {
+		if (sdata && ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF)) {
 			vif = &sdata->vif;
 			info->hw_queue =
 				vif->hw_queue[skb_get_queue_mapping(skb)];
···
 			break;
 		}
 		tx.sdata = rcu_dereference(local->monitor_sdata);
-		if (tx.sdata) {
+		if (tx.sdata &&
+		    ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF)) {
 			vif = &tx.sdata->vif;
 			info->hw_queue =
 				vif->hw_queue[skb_get_queue_mapping(skb)];
+1 -1
net/mac80211/util.c
···
 	sdata = rcu_dereference_check(local->monitor_sdata,
 				      lockdep_is_held(&local->iflist_mtx) ||
 				      lockdep_is_held(&local->hw.wiphy->mtx));
-	if (sdata &&
+	if (sdata && ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF) &&
 	    (iter_flags & IEEE80211_IFACE_ITER_RESUME_ALL || !active_only ||
 	     sdata->flags & IEEE80211_SDATA_IN_DRIVER))
 		iterator(data, sdata->vif.addr, &sdata->vif);
+2
net/mptcp/mib.c
···
 	SNMP_MIB_ITEM("MPTCPRetrans", MPTCP_MIB_RETRANSSEGS),
 	SNMP_MIB_ITEM("MPJoinNoTokenFound", MPTCP_MIB_JOINNOTOKEN),
 	SNMP_MIB_ITEM("MPJoinSynRx", MPTCP_MIB_JOINSYNRX),
+	SNMP_MIB_ITEM("MPJoinSynBackupRx", MPTCP_MIB_JOINSYNBACKUPRX),
 	SNMP_MIB_ITEM("MPJoinSynAckRx", MPTCP_MIB_JOINSYNACKRX),
+	SNMP_MIB_ITEM("MPJoinSynAckBackupRx", MPTCP_MIB_JOINSYNACKBACKUPRX),
 	SNMP_MIB_ITEM("MPJoinSynAckHMacFailure", MPTCP_MIB_JOINSYNACKMAC),
 	SNMP_MIB_ITEM("MPJoinAckRx", MPTCP_MIB_JOINACKRX),
 	SNMP_MIB_ITEM("MPJoinAckHMacFailure", MPTCP_MIB_JOINACKMAC),
+2
net/mptcp/mib.h
···
 	MPTCP_MIB_RETRANSSEGS,		/* Segments retransmitted at the MPTCP-level */
 	MPTCP_MIB_JOINNOTOKEN,		/* Received MP_JOIN but the token was not found */
 	MPTCP_MIB_JOINSYNRX,		/* Received a SYN + MP_JOIN */
+	MPTCP_MIB_JOINSYNBACKUPRX,	/* Received a SYN + MP_JOIN + backup flag */
 	MPTCP_MIB_JOINSYNACKRX,		/* Received a SYN/ACK + MP_JOIN */
+	MPTCP_MIB_JOINSYNACKBACKUPRX,	/* Received a SYN/ACK + MP_JOIN + backup flag */
 	MPTCP_MIB_JOINSYNACKMAC,	/* HMAC was wrong on SYN/ACK + MP_JOIN */
 	MPTCP_MIB_JOINACKRX,		/* Received an ACK + MP_JOIN */
 	MPTCP_MIB_JOINACKMAC,		/* HMAC was wrong on ACK + MP_JOIN */
+1 -1
net/mptcp/options.c
···
 		return true;
 	} else if (subflow_req->mp_join) {
 		opts->suboptions = OPTION_MPTCP_MPJ_SYNACK;
-		opts->backup = subflow_req->backup;
+		opts->backup = subflow_req->request_bkup;
 		opts->join_id = subflow_req->local_id;
 		opts->thmac = subflow_req->thmac;
 		opts->nonce = subflow_req->local_nonce;
+12
net/mptcp/pm.c
···
 	return mptcp_pm_nl_get_local_id(msk, &skc_local);
 }
 
+bool mptcp_pm_is_backup(struct mptcp_sock *msk, struct sock_common *skc)
+{
+	struct mptcp_addr_info skc_local;
+
+	mptcp_local_address((struct sock_common *)skc, &skc_local);
+
+	if (mptcp_pm_is_userspace(msk))
+		return mptcp_userspace_pm_is_backup(msk, &skc_local);
+
+	return mptcp_pm_nl_is_backup(msk, &skc_local);
+}
+
 int mptcp_pm_get_flags_and_ifindex_by_id(struct mptcp_sock *msk, unsigned int id,
 					 u8 *flags, int *ifindex)
 {
+37 -9
net/mptcp/pm_netlink.c
···
 	slow = lock_sock_fast(ssk);
 	if (prio) {
 		subflow->send_mp_prio = 1;
-		subflow->backup = backup;
 		subflow->request_bkup = backup;
 	}
 
···
 	return ret;
 }
 
+bool mptcp_pm_nl_is_backup(struct mptcp_sock *msk, struct mptcp_addr_info *skc)
+{
+	struct pm_nl_pernet *pernet = pm_nl_get_pernet_from_msk(msk);
+	struct mptcp_pm_addr_entry *entry;
+	bool backup = false;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(entry, &pernet->local_addr_list, list) {
+		if (mptcp_addresses_equal(&entry->addr, skc, entry->addr.port)) {
+			backup = !!(entry->flags & MPTCP_PM_ADDR_FLAG_BACKUP);
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return backup;
+}
+
 #define MPTCP_PM_CMD_GRP_OFFSET	0
 #define MPTCP_PM_EV_GRP_OFFSET	1
···
 	ret = remove_anno_list_by_saddr(msk, addr);
 	if (ret || force) {
 		spin_lock_bh(&msk->pm.lock);
+		msk->pm.add_addr_signaled -= ret;
 		mptcp_pm_remove_addr(msk, &list);
 		spin_unlock_bh(&msk->pm.lock);
 	}
···
 {
 	struct mptcp_rm_list alist = { .nr = 0 };
 	struct mptcp_pm_addr_entry *entry;
+	int anno_nr = 0;
 
 	list_for_each_entry(entry, rm_list, list) {
-		if ((remove_anno_list_by_saddr(msk, &entry->addr) ||
-		     lookup_subflow_by_saddr(&msk->conn_list, &entry->addr)) &&
-		    alist.nr < MPTCP_RM_IDS_MAX)
-			alist.ids[alist.nr++] = entry->addr.id;
+		if (alist.nr >= MPTCP_RM_IDS_MAX)
+			break;
+
+		/* only delete if either announced or matching a subflow */
+		if (remove_anno_list_by_saddr(msk, &entry->addr))
+			anno_nr++;
+		else if (!lookup_subflow_by_saddr(&msk->conn_list,
+						  &entry->addr))
+			continue;
+
+		alist.ids[alist.nr++] = entry->addr.id;
 	}
 
 	if (alist.nr) {
 		spin_lock_bh(&msk->pm.lock);
+		msk->pm.add_addr_signaled -= anno_nr;
 		mptcp_pm_remove_addr(msk, &alist);
 		spin_unlock_bh(&msk->pm.lock);
 	}
···
 	struct mptcp_pm_addr_entry *entry;
 
 	list_for_each_entry(entry, rm_list, list) {
-		if (lookup_subflow_by_saddr(&msk->conn_list, &entry->addr) &&
-		    slist.nr < MPTCP_RM_IDS_MAX)
+		if (slist.nr < MPTCP_RM_IDS_MAX &&
+		    lookup_subflow_by_saddr(&msk->conn_list, &entry->addr))
 			slist.ids[slist.nr++] = entry->addr.id;
 
-		if (remove_anno_list_by_saddr(msk, &entry->addr) &&
-		    alist.nr < MPTCP_RM_IDS_MAX)
+		if (alist.nr < MPTCP_RM_IDS_MAX &&
+		    remove_anno_list_by_saddr(msk, &entry->addr))
 			alist.ids[alist.nr++] = entry->addr.id;
 	}
 
 	if (alist.nr) {
 		spin_lock_bh(&msk->pm.lock);
+		msk->pm.add_addr_signaled -= alist.nr;
 		mptcp_pm_remove_addr(msk, &alist);
 		spin_unlock_bh(&msk->pm.lock);
 	}
+18
net/mptcp/pm_userspace.c
···
 	return mptcp_userspace_pm_append_new_local_addr(msk, &new_entry, true);
 }
 
+bool mptcp_userspace_pm_is_backup(struct mptcp_sock *msk,
+				  struct mptcp_addr_info *skc)
+{
+	struct mptcp_pm_addr_entry *entry;
+	bool backup = false;
+
+	spin_lock_bh(&msk->pm.lock);
+	list_for_each_entry(entry, &msk->pm.userspace_pm_local_addr_list, list) {
+		if (mptcp_addresses_equal(&entry->addr, skc, false)) {
+			backup = !!(entry->flags & MPTCP_PM_ADDR_FLAG_BACKUP);
+			break;
+		}
+	}
+	spin_unlock_bh(&msk->pm.lock);
+
+	return backup;
+}
+
 int mptcp_pm_nl_announce_doit(struct sk_buff *skb, struct genl_info *info)
 {
 	struct nlattr *token = info->attrs[MPTCP_PM_ATTR_TOKEN];
+10 -8
net/mptcp/protocol.c
···
 	skb_orphan(skb);
 
 	/* try to fetch required memory from subflow */
-	if (!mptcp_rmem_schedule(sk, ssk, skb->truesize))
+	if (!mptcp_rmem_schedule(sk, ssk, skb->truesize)) {
+		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RCVPRUNED);
 		goto drop;
+	}
 
 	has_rxtstamp = TCP_SKB_CB(skb)->has_rxtstamp;
···
 		sk_rbuf = ssk_rbuf;
 
 	/* over limit? can't append more skbs to msk, Also, no need to wake-up*/
-	if (__mptcp_rmem(sk) > sk_rbuf) {
-		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RCVPRUNED);
+	if (__mptcp_rmem(sk) > sk_rbuf)
 		return;
-	}
 
 	/* Wake-up the reader only for in-sequence data */
 	mptcp_data_lock(sk);
···
 	}
 
 	mptcp_for_each_subflow(msk, subflow) {
+		bool backup = subflow->backup || subflow->request_bkup;
+
 		trace_mptcp_subflow_get_send(subflow);
 		ssk = mptcp_subflow_tcp_sock(subflow);
 		if (!mptcp_subflow_active(subflow))
 			continue;
 
 		tout = max(tout, mptcp_timeout_from_subflow(subflow));
-		nr_active += !subflow->backup;
+		nr_active += !backup;
 		pace = subflow->avg_pacing_rate;
 		if (unlikely(!pace)) {
 			/* init pacing rate from socket */
···
 		}
 
 		linger_time = div_u64((u64)READ_ONCE(ssk->sk_wmem_queued) << 32, pace);
-		if (linger_time < send_info[subflow->backup].linger_time) {
-			send_info[subflow->backup].ssk = ssk;
-			send_info[subflow->backup].linger_time = linger_time;
+		if (linger_time < send_info[backup].linger_time) {
+			send_info[backup].ssk = ssk;
+			send_info[backup].linger_time = linger_time;
 		}
 	}
 	__mptcp_set_timeout(sk, tout);
+4
net/mptcp/protocol.h
···
 	u16	mp_capable : 1,
 		mp_join : 1,
 		backup : 1,
+		request_bkup : 1,
 		csum_reqd : 1,
 		allow_join_id0 : 1;
 	u8	local_id;
···
 int mptcp_pm_get_local_id(struct mptcp_sock *msk, struct sock_common *skc);
 int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct mptcp_addr_info *skc);
 int mptcp_userspace_pm_get_local_id(struct mptcp_sock *msk, struct mptcp_addr_info *skc);
+bool mptcp_pm_is_backup(struct mptcp_sock *msk, struct sock_common *skc);
+bool mptcp_pm_nl_is_backup(struct mptcp_sock *msk, struct mptcp_addr_info *skc);
+bool mptcp_userspace_pm_is_backup(struct mptcp_sock *msk, struct mptcp_addr_info *skc);
 int mptcp_pm_dump_addr(struct sk_buff *msg, struct netlink_callback *cb);
 int mptcp_pm_nl_dump_addr(struct sk_buff *msg,
 			  struct netlink_callback *cb);
+22 -4
net/mptcp/subflow.c
···
 		return NULL;
 	}
 	subflow_req->local_id = local_id;
+	subflow_req->request_bkup = mptcp_pm_is_backup(msk, (struct sock_common *)req);
 
 	return msk;
 }
···
 		return 0;
 	} else if (opt_mp_join) {
 		SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINSYNRX);
+
+		if (mp_opt.backup)
+			SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINSYNBACKUPRX);
 	}
 
 	if (opt_mp_capable && listener->request_mptcp) {
···
 		subflow->mp_join = 1;
 		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_JOINSYNACKRX);
 
+		if (subflow->backup)
+			MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_JOINSYNACKBACKUPRX);
+
 		if (subflow_use_different_dport(msk, sk)) {
 			pr_debug("synack inet_dport=%d %d",
 				 ntohs(inet_sk(sk)->inet_dport),
···
 		return err;
 
 	subflow_set_local_id(subflow, err);
+	subflow->request_bkup = mptcp_pm_is_backup(msk, (struct sock_common *)sk);
+
 	return 0;
 }
···
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 	bool fin = TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN;
-	u32 incr;
+	struct tcp_sock *tp = tcp_sk(ssk);
+	u32 offset, incr, avail_len;
 
-	incr = limit >= skb->len ? skb->len + fin : limit;
-
-	pr_debug("discarding=%d len=%d seq=%d", incr, skb->len,
-		 subflow->map_subflow_seq);
+	offset = tp->copied_seq - TCP_SKB_CB(skb)->seq;
+	if (WARN_ON_ONCE(offset > skb->len))
+		goto out;
+
+	avail_len = skb->len - offset;
+	incr = limit >= avail_len ? avail_len + fin : limit;
+
+	pr_debug("discarding=%d len=%d offset=%d seq=%d", incr, skb->len,
+		 offset, subflow->map_subflow_seq);
 	MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_DUPDATA);
 	tcp_sk(ssk)->copied_seq += incr;
+
+out:
 	if (!before(tcp_sk(ssk)->copied_seq, TCP_SKB_CB(skb)->end_seq))
 		sk_eat_skb(ssk, skb);
 	if (mptcp_subflow_get_map_offset(subflow) >= subflow->map_data_len)
···
 	new_ctx->fully_established = 1;
 	new_ctx->remote_key_valid = 1;
 	new_ctx->backup = subflow_req->backup;
+	new_ctx->request_bkup = subflow_req->request_bkup;
 	WRITE_ONCE(new_ctx->remote_id, subflow_req->remote_id);
 	new_ctx->token = subflow_req->token;
 	new_ctx->thmac = subflow_req->thmac;
+3 -1
net/sched/act_ct.c
···
 struct zones_ht_key {
 	struct net *net;
 	u16 zone;
+	/* Note : pad[] must be the last field. */
+	u8 pad[];
 };
 
 struct tcf_ct_flow_table {
···
 static const struct rhashtable_params zones_params = {
 	.head_offset = offsetof(struct tcf_ct_flow_table, node),
 	.key_offset = offsetof(struct tcf_ct_flow_table, key),
-	.key_len = sizeof_field(struct tcf_ct_flow_table, key),
+	.key_len = offsetof(struct zones_ht_key, pad),
 	.automatic_shrinking = true,
 };
 
+4 -3
net/smc/af_smc.c
···
 
 	rc = sock_create_kern(net, family, SOCK_STREAM, IPPROTO_TCP,
 			      &smc->clcsock);
-	if (rc) {
-		sk_common_release(sk);
+	if (rc)
 		return rc;
-	}
 
 	/* smc_clcsock_release() does not wait smc->clcsock->sk's
 	 * destruction; its sk_state might not be TCP_CLOSE after
···
 		smc->clcsock = clcsock;
 	else
 		rc = smc_create_clcsk(net, sk, family);
+
+	if (rc)
+		sk_common_release(sk);
 out:
 	return rc;
 }
+8 -3
net/wireless/scan.c
···
 			       struct ieee80211_mgmt *mgmt, size_t len,
 			       gfp_t gfp)
 {
-	size_t min_hdr_len = offsetof(struct ieee80211_mgmt,
-				      u.probe_resp.variable);
+	size_t min_hdr_len;
 	struct ieee80211_ext *ext = NULL;
 	enum cfg80211_bss_frame_type ftype;
 	u16 beacon_interval;
···
 
 	if (ieee80211_is_s1g_beacon(mgmt->frame_control)) {
 		ext = (void *) mgmt;
-		min_hdr_len = offsetof(struct ieee80211_ext, u.s1g_beacon);
 		if (ieee80211_is_s1g_short_beacon(mgmt->frame_control))
 			min_hdr_len = offsetof(struct ieee80211_ext,
 					       u.s1g_short_beacon.variable);
+		else
+			min_hdr_len = offsetof(struct ieee80211_ext,
+					       u.s1g_beacon.variable);
+	} else {
+		/* same for beacons */
+		min_hdr_len = offsetof(struct ieee80211_mgmt,
+				       u.probe_resp.variable);
 	}
 
 	if (WARN_ON(len < min_hdr_len))
+1
net/wireless/sme.c
···
 			cfg80211_hold_bss(
 				bss_from_pub(params->links[link].bss));
 		ev->cr.links[link].bss = params->links[link].bss;
+		ev->cr.links[link].status = params->links[link].status;
 
 		if (params->links[link].addr) {
 			ev->cr.links[link].addr = next;
+1 -1
tools/testing/selftests/bpf/Makefile
···
 # Make sure we are able to include and link libbpf against c++.
 $(OUTPUT)/test_cpp: test_cpp.cpp $(OUTPUT)/test_core_extern.skel.h $(BPFOBJ)
 	$(call msg,CXX,,$@)
-	$(Q)$(CXX) $(CFLAGS) $(filter %.a %.o %.cpp,$^) $(LDLIBS) -o $@
+	$(Q)$(CXX) $(subst -D_GNU_SOURCE=,,$(CFLAGS)) $(filter %.a %.o %.cpp,$^) $(LDLIBS) -o $@
 
 # Benchmark runner
 $(OUTPUT)/bench_%.o: benchs/bench_%.c bench.h $(BPFOBJ)
+1 -1
tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
···
 	struct uprobe_syscall_executed *skel;
 	int pid, status, err, go[2], c;
 
-	if (ASSERT_OK(pipe(go), "pipe"))
+	if (!ASSERT_OK(pipe(go), "pipe"))
 		return;
 
 	skel = uprobe_syscall_executed__open_and_load();
+34 -3
tools/testing/selftests/drivers/net/hw/rss_ctx.py
···
     return [random.randint(0, 255) for _ in range(length)]
 
 
+def _rss_key_check(cfg, data=None, context=0):
+    if data is None:
+        data = get_rss(cfg, context=context)
+    if 'rss-hash-key' not in data:
+        return
+    non_zero = [x for x in data['rss-hash-key'] if x != 0]
+    ksft_eq(bool(non_zero), True, comment=f"RSS key is all zero {data['rss-hash-key']}")
+
+
 def get_rss(cfg, context=0):
     return ethtool(f"-x {cfg.ifname} context {context}", json=True)[0]
 
···
 def test_rss_key_indir(cfg):
     """Test basics like updating the main RSS key and indirection table."""
 
-    if len(_get_rx_cnts(cfg)) < 2:
-        KsftSkipEx("Device has only one queue (or doesn't support queue stats)")
+    qcnt = len(_get_rx_cnts(cfg))
+    if qcnt < 3:
+        KsftSkipEx("Device has fewer than 3 queues (or doesn't support queue stats)")
 
     data = get_rss(cfg)
     want_keys = ['rss-hash-key', 'rss-hash-function', 'rss-indirection-table']
···
         if not data[k]:
             raise KsftFailEx(f"ethtool results empty for '{k}': {data[k]}")
 
+    _rss_key_check(cfg, data=data)
     key_len = len(data['rss-hash-key'])
 
     # Set the key
···
     data = get_rss(cfg)
     ksft_eq(key, data['rss-hash-key'])
 
+    # Set the indirection table and the key together
+    key = _rss_key_rand(key_len)
+    ethtool(f"-X {cfg.ifname} equal 3 hkey " + _rss_key_str(key))
+    reset_indir = defer(ethtool, f"-X {cfg.ifname} default")
+
+    data = get_rss(cfg)
+    _rss_key_check(cfg, data=data)
+    ksft_eq(0, min(data['rss-indirection-table']))
+    ksft_eq(2, max(data['rss-indirection-table']))
+
+    # Reset indirection table and set the key
+    key = _rss_key_rand(key_len)
+    ethtool(f"-X {cfg.ifname} default hkey " + _rss_key_str(key))
+    data = get_rss(cfg)
+    _rss_key_check(cfg, data=data)
+    ksft_eq(0, min(data['rss-indirection-table']))
+    ksft_eq(qcnt - 1, max(data['rss-indirection-table']))
+
     # Set the indirection table
     ethtool(f"-X {cfg.ifname} equal 2")
-    reset_indir = defer(ethtool, f"-X {cfg.ifname} default")
     data = get_rss(cfg)
     ksft_eq(0, min(data['rss-indirection-table']))
     ksft_eq(1, max(data['rss-indirection-table']))
···
         ctx_cnt = i
         break
 
+    _rss_key_check(cfg, context=ctx_id)
+
     if not create_with_cfg:
         ethtool(f"-X {cfg.ifname} context {ctx_id} {want_cfg}")
+        _rss_key_check(cfg, context=ctx_id)
 
     # Sanity check the context we just created
     data = get_rss(cfg, ctx_id)
+4 -4
tools/testing/selftests/net/mptcp/mptcp_connect.c
···
 		return 1;
 	}
 
-	if (--cfg_repeat > 0) {
-		if (cfg_input)
-			close(fd);
+	if (cfg_input)
+		close(fd);
+
+	if (--cfg_repeat > 0)
 		goto again;
-	}
 
 	return 0;
 }
+88 -15
tools/testing/selftests/net/mptcp/mptcp_join.sh
···
 	done
 
 	if [ -z "${id}" ]; then
-		test_fail "bad test - missing endpoint id"
+		fail_test "bad test - missing endpoint id"
 		return
 	fi
···
 {
 	local mp_prio_nr_tx=$1
 	local mp_prio_nr_rx=$2
+	local mpj_syn=$3
+	local mpj_syn_ack=$4
 	local count
 
 	print_check "ptx"
···
 		print_skip
 	elif [ "$count" != "$mp_prio_nr_rx" ]; then
 		fail_test "got $count MP_PRIO[s] RX expected $mp_prio_nr_rx"
+	else
+		print_ok
+	fi
+
+	print_check "syn backup"
+	count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtMPJoinSynBackupRx")
+	if [ -z "$count" ]; then
+		print_skip
+	elif [ "$count" != "$mpj_syn" ]; then
+		fail_test "got $count JOIN[s] syn with Backup expected $mpj_syn"
+	else
+		print_ok
+	fi
+
+	print_check "synack backup"
+	count=$(mptcp_lib_get_counter ${ns2} "MPTcpExtMPJoinSynAckBackupRx")
+	if [ -z "$count" ]; then
+		print_skip
+	elif [ "$count" != "$mpj_syn_ack" ]; then
+		fail_test "got $count JOIN[s] synack with Backup expected $mpj_syn_ack"
 	else
 		print_ok
 	fi
···
 		sflags=nobackup speed=slow \
 			run_tests $ns1 $ns2 10.0.1.1
 		chk_join_nr 1 1 1
-		chk_prio_nr 0 1
+		chk_prio_nr 0 1 1 0
 	fi
 
 	# single address, backup
 	if reset "single address, backup" &&
+	   continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+		pm_nl_set_limits $ns1 0 1
+		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal,backup
+		pm_nl_set_limits $ns2 1 1
+		sflags=nobackup speed=slow \
+			run_tests $ns1 $ns2 10.0.1.1
+		chk_join_nr 1 1 1
+		chk_add_nr 1 1
+		chk_prio_nr 1 0 0 1
+	fi
+
+	# single address, switch to backup
+	if reset "single address, switch to backup" &&
 	   continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
 		pm_nl_set_limits $ns1 0 1
 		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
···
 			run_tests $ns1 $ns2 10.0.1.1
 		chk_join_nr 1 1 1
 		chk_add_nr 1 1
-		chk_prio_nr 1 1
+		chk_prio_nr 1 1 0 0
 	fi
 
 	# single address with port, backup
 	if reset "single address with port, backup" &&
 	   continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
 		pm_nl_set_limits $ns1 0 1
-		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal port 10100
+		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal,backup port 10100
 		pm_nl_set_limits $ns2 1 1
-		sflags=backup speed=slow \
+		sflags=nobackup speed=slow \
 			run_tests $ns1 $ns2 10.0.1.1
 		chk_join_nr 1 1 1
 		chk_add_nr 1 1
-		chk_prio_nr 1 1
+		chk_prio_nr 1 0 0 1
 	fi
 
 	if reset "mpc backup" &&
···
 		speed=slow \
 			run_tests $ns1 $ns2 10.0.1.1
 		chk_join_nr 0 0 0
-		chk_prio_nr 0 1
+		chk_prio_nr 0 1 0 0
 	fi
 
 	if reset "mpc backup both sides" &&
 	   continue_if mptcp_lib_kallsyms_doesnt_have "T mptcp_subflow_send_ack$"; then
-		pm_nl_add_endpoint $ns1 10.0.1.1 flags subflow,backup
+		pm_nl_set_limits $ns1 0 2
+		pm_nl_set_limits $ns2 1 2
+		pm_nl_add_endpoint $ns1 10.0.1.1 flags signal,backup
 		pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow,backup
+
+		# 10.0.2.2 (non-backup) -> 10.0.1.1 (backup)
+		pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow
+		# 10.0.1.2 (backup) -> 10.0.2.1 (non-backup)
+		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+		ip -net "$ns2" route add 10.0.2.1 via 10.0.1.1 dev ns2eth1 # force this path
+
 		speed=slow \
 			run_tests $ns1 $ns2 10.0.1.1
-		chk_join_nr 0 0 0
-		chk_prio_nr 1 1
+		chk_join_nr 2 2 2
+		chk_prio_nr 1 1 1 1
 	fi
 
 	if reset "mpc switch to backup" &&
···
 		sflags=backup speed=slow \
 			run_tests $ns1 $ns2 10.0.1.1
 		chk_join_nr 0 0 0
-		chk_prio_nr 0 1
+		chk_prio_nr 0 1 0 0
 	fi
 
 	if reset "mpc switch to backup both sides" &&
···
 		sflags=backup speed=slow \
 			run_tests $ns1 $ns2 10.0.1.1
 		chk_join_nr 0 0 0
-		chk_prio_nr 1 1
+		chk_prio_nr 1 1 0 0
 	fi
 }
···
 	addr_nr_ns2=1 sflags=backup,fullmesh speed=slow \
 		run_tests $ns1 $ns2 10.0.1.1
 	chk_join_nr 2 2 2
-	chk_prio_nr 0 1
+	chk_prio_nr 0 1 1 0
 	chk_rm_nr 0 1
 fi
···
 	sflags=nobackup,nofullmesh speed=slow \
 		run_tests $ns1 $ns2 10.0.1.1
 	chk_join_nr 2 2 2
-	chk_prio_nr 0 1
+	chk_prio_nr 0 1 1 0
 	chk_rm_nr 0 1
 fi
···
 	sflags=backup speed=slow \
 		run_tests $ns1 $ns2 10.0.1.1
 	chk_join_nr 1 1 0
-	chk_prio_nr 0 0
+	chk_prio_nr 0 0 0 0
 fi
 
 # userspace pm type prevents rm_addr
···
 	chk_mptcp_info subflows 1 subflows 1
 	mptcp_lib_kill_wait $tests_pid
 fi
+
+# remove and re-add
+if reset "delete re-add signal" &&
+   mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+	pm_nl_set_limits $ns1 1 1
+	pm_nl_set_limits $ns2 1 1
+	pm_nl_add_endpoint $ns1 10.0.2.1 id 1 flags signal
+	test_linkfail=4 speed=20 \
+		run_tests $ns1 $ns2 10.0.1.1 &
+	local tests_pid=$!
+
+	wait_mpj $ns2
+	pm_nl_check_endpoint "creation" \
+		$ns1 10.0.2.1 id 1 flags signal
+	chk_subflow_nr "before delete" 2
+	chk_mptcp_info subflows 1 subflows 1
+
+	pm_nl_del_endpoint $ns1 1 10.0.2.1
+	sleep 0.5
+	chk_subflow_nr "after delete" 1
+	chk_mptcp_info subflows 0 subflows 0
+
+	pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+	wait_mpj $ns2
+	chk_subflow_nr "after re-add" 2
+	chk_mptcp_info subflows 1 subflows 1
+	mptcp_lib_kill_wait $tests_pid
+fi
+
 }
 
 # [$1: error message]