Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from David Miller:

1) Use netif_rx_ni() when necessary in batman-adv stack, from Jussi
Kivilinna.

2) Fix loss of RTT samples in rxrpc, from David Howells.

3) Memory leak in hns_nic_dev_probe(), from Dinghao Liu.

4) ravb module cannot be unloaded, fix from Yuusuke Ashizuka.

5) We disable BH for too long in sctp_get_port_local(), add a
cond_resched() here as well, from Xin Long.

6) Fix memory leak in st95hf_in_send_cmd, from Dinghao Liu.

7) Out of bound access in bpf_raw_tp_link_fill_link_info(), from
Yonghong Song.

8) Missing of_node_put() in mt7530 DSA driver, from Sumera
Priyadarsini.

9) Fix crash in bnxt_fw_reset_task(), from Michael Chan.

10) Fix geneve tunnel checksumming bug in hns3, from Yi Li.

11) Memory leak in rxkad_verify_response, from Dinghao Liu.

12) In tipc, don't use smp_processor_id() in preemptible context. From
Tuong Lien.

13) Fix signedness issue in mlx4 memory allocation, from Shung-Hsi Yu.

14) Missing clk_disable_unprepare() in gemini driver, from Dan Carpenter.

15) Fix ABI mismatch between driver and firmware in nfp, from Louis
Peens.

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (110 commits)
net/smc: fix sock refcounting in case of termination
net/smc: reset sndbuf_desc if freed
net/smc: set rx_off for SMCR explicitly
net/smc: fix toleration of fake add_link messages
tg3: Fix soft lockup when tg3_reset_task() fails.
doc: net: dsa: Fix typo in config code sample
net: dp83867: Fix WoL SecureOn password
nfp: flower: fix ABI mismatch between driver and firmware
tipc: fix shutdown() of connectionless socket
ipv6: Fix sysctl max for fib_multipath_hash_policy
drivers/net/wan/hdlc: Change the default of hard_header_len to 0
net: gemini: Fix another missing clk_disable_unprepare() in probe
net: bcmgenet: fix mask check in bcmgenet_validate_flow()
amd-xgbe: Add support for new port mode
net: usb: dm9601: Add USB ID of Keenetic Plus DSL
vhost: fix typo in error message
net: ethernet: mlx4: Fix memory allocation in mlx4_buddy_init()
pktgen: fix error message with wrong function name
net: ethernet: ti: am65-cpsw: fix rmii 100Mbit link mode
cxgb4: fix thermal zone device registration
...

+1072 -603
+1 -1
Documentation/devicetree/bindings/net/dsa/dsa.txt
··· 1 1 Distributed Switch Architecture Device Tree Bindings 2 2 ---------------------------------------------------- 3 3 4 - See Documentation/devicetree/bindings/net/dsa/dsa.yaml for the documenation. 4 + See Documentation/devicetree/bindings/net/dsa/dsa.yaml for the documentation.
+1 -1
Documentation/networking/dsa/configuration.rst
··· 180 180 181 181 # bring up the slave interfaces 182 182 ip link set lan1 up 183 - ip link set lan1 up 183 + ip link set lan2 up 184 184 ip link set lan3 up 185 185 186 186 # create bridge
+16 -1
MAINTAINERS
··· 3389 3389 L: netdev@vger.kernel.org 3390 3390 L: openwrt-devel@lists.openwrt.org (subscribers-only) 3391 3391 S: Supported 3392 + F: Documentation/devicetree/bindings/net/dsa/b53.txt 3392 3393 F: drivers/net/dsa/b53/* 3393 3394 F: include/linux/platform_data/b53.h 3394 3395 ··· 3575 3574 S: Maintained 3576 3575 F: drivers/phy/broadcom/phy-brcm-usb* 3577 3576 3577 + BROADCOM ETHERNET PHY DRIVERS 3578 + M: Florian Fainelli <f.fainelli@gmail.com> 3579 + L: bcm-kernel-feedback-list@broadcom.com 3580 + L: netdev@vger.kernel.org 3581 + S: Supported 3582 + F: Documentation/devicetree/bindings/net/broadcom-bcm87xx.txt 3583 + F: drivers/net/phy/bcm*.[ch] 3584 + F: drivers/net/phy/broadcom.c 3585 + F: include/linux/brcmphy.h 3586 + 3578 3587 BROADCOM GENET ETHERNET DRIVER 3579 3588 M: Doug Berger <opendmb@gmail.com> 3580 3589 M: Florian Fainelli <f.fainelli@gmail.com> 3581 3590 L: bcm-kernel-feedback-list@broadcom.com 3582 3591 L: netdev@vger.kernel.org 3583 3592 S: Supported 3593 + F: Documentation/devicetree/bindings/net/brcm,bcmgenet.txt 3594 + F: Documentation/devicetree/bindings/net/brcm,unimac-mdio.txt 3584 3595 F: drivers/net/ethernet/broadcom/genet/ 3596 + F: drivers/net/mdio/mdio-bcm-unimac.c 3597 + F: include/linux/platform_data/bcmgenet.h 3598 + F: include/linux/platform_data/mdio-bcm-unimac.h 3585 3599 3586 3600 BROADCOM IPROC ARM ARCHITECTURE 3587 3601 M: Ray Jui <rjui@broadcom.com> ··· 6512 6496 6513 6497 ETHERNET PHY LIBRARY 6514 6498 M: Andrew Lunn <andrew@lunn.ch> 6515 - M: Florian Fainelli <f.fainelli@gmail.com> 6516 6499 M: Heiner Kallweit <hkallweit1@gmail.com> 6517 6500 R: Russell King <linux@armlinux.org.uk> 6518 6501 L: netdev@vger.kernel.org
+1
drivers/atm/firestream.c
··· 998 998 error = make_rate (pcr, r, &tmc0, NULL); 999 999 if (error) { 1000 1000 kfree(tc); 1001 + kfree(vcc); 1001 1002 return error; 1002 1003 } 1003 1004 }
+5 -2
drivers/net/dsa/mt7530.c
··· 1326 1326 1327 1327 if (phy_node->parent == priv->dev->of_node->parent) { 1328 1328 ret = of_get_phy_mode(mac_np, &interface); 1329 - if (ret && ret != -ENODEV) 1329 + if (ret && ret != -ENODEV) { 1330 + of_node_put(mac_np); 1330 1331 return ret; 1332 + } 1331 1333 id = of_mdio_parse_addr(ds->dev, phy_node); 1332 1334 if (id == 0) 1333 1335 priv->p5_intf_sel = P5_INTF_SEL_PHY_P0; 1334 1336 if (id == 4) 1335 1337 priv->p5_intf_sel = P5_INTF_SEL_PHY_P4; 1336 1338 } 1339 + of_node_put(mac_np); 1337 1340 of_node_put(phy_node); 1338 1341 break; 1339 1342 } ··· 1504 1501 phylink_set(mask, 100baseT_Full); 1505 1502 1506 1503 if (state->interface != PHY_INTERFACE_MODE_MII) { 1507 - phylink_set(mask, 1000baseT_Half); 1504 + /* This switch only supports 1G full-duplex. */ 1508 1505 phylink_set(mask, 1000baseT_Full); 1509 1506 if (port == 5) 1510 1507 phylink_set(mask, 1000baseX_Full);
+1
drivers/net/dsa/ocelot/felix.c
··· 400 400 if (err < 0) { 401 401 dev_err(dev, "Unsupported PHY mode %s on port %d\n", 402 402 phy_modes(phy_mode), port); 403 + of_node_put(child); 403 404 return err; 404 405 } 405 406
+1 -1
drivers/net/dsa/sja1105/sja1105_main.c
··· 3415 3415 3416 3416 sja1105_unpack(prod_id, &part_no, 19, 4, SJA1105_SIZE_DEVICE_ID); 3417 3417 3418 - for (match = sja1105_dt_ids; match->compatible; match++) { 3418 + for (match = sja1105_dt_ids; match->compatible[0]; match++) { 3419 3419 const struct sja1105_info *info = match->data; 3420 3420 3421 3421 /* Is what's been probed in our match table at all? */
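The sja1105 change above works because `compatible` in `struct of_device_id` is an embedded char array, so `match->compatible` is never NULL and the old loop never terminated at the sentinel entry. A minimal userspace sketch (toy struct and names, not the kernel's) of why the check must test the first byte:

```c
#include <assert.h>

/* Toy stand-in for struct of_device_id: "compatible" is an embedded
 * char array, not a pointer, so its address always compares non-NULL. */
struct toy_device_id {
	char compatible[32];
	int data;
};

static const struct toy_device_id toy_ids[] = {
	{ .compatible = "vendor,chip-a", .data = 1 },
	{ .compatible = "vendor,chip-b", .data = 2 },
	{ /* sentinel: compatible is all zeroes */ },
};

static int count_entries(const struct toy_device_id *ids)
{
	int n = 0;

	/* Testing ids->compatible itself would loop past the sentinel
	 * forever; test the first byte of the string instead. */
	for (; ids->compatible[0]; ids++)
		n++;
	return n;
}
```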
+13
drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
··· 166 166 XGBE_PORT_MODE_10GBASE_T, 167 167 XGBE_PORT_MODE_10GBASE_R, 168 168 XGBE_PORT_MODE_SFP, 169 + XGBE_PORT_MODE_BACKPLANE_NO_AUTONEG, 169 170 XGBE_PORT_MODE_MAX, 170 171 }; 171 172 ··· 1635 1634 if (ad_reg & 0x80) { 1636 1635 switch (phy_data->port_mode) { 1637 1636 case XGBE_PORT_MODE_BACKPLANE: 1637 + case XGBE_PORT_MODE_BACKPLANE_NO_AUTONEG: 1638 1638 mode = XGBE_MODE_KR; 1639 1639 break; 1640 1640 default: ··· 1645 1643 } else if (ad_reg & 0x20) { 1646 1644 switch (phy_data->port_mode) { 1647 1645 case XGBE_PORT_MODE_BACKPLANE: 1646 + case XGBE_PORT_MODE_BACKPLANE_NO_AUTONEG: 1648 1647 mode = XGBE_MODE_KX_1000; 1649 1648 break; 1650 1649 case XGBE_PORT_MODE_1000BASE_X: ··· 1785 1782 1786 1783 switch (phy_data->port_mode) { 1787 1784 case XGBE_PORT_MODE_BACKPLANE: 1785 + case XGBE_PORT_MODE_BACKPLANE_NO_AUTONEG: 1788 1786 XGBE_SET_ADV(dlks, 10000baseKR_Full); 1789 1787 break; 1790 1788 case XGBE_PORT_MODE_BACKPLANE_2500: ··· 1878 1874 switch (phy_data->port_mode) { 1879 1875 case XGBE_PORT_MODE_BACKPLANE: 1880 1876 return XGBE_AN_MODE_CL73; 1877 + case XGBE_PORT_MODE_BACKPLANE_NO_AUTONEG: 1881 1878 case XGBE_PORT_MODE_BACKPLANE_2500: 1882 1879 return XGBE_AN_MODE_NONE; 1883 1880 case XGBE_PORT_MODE_1000BASE_T: ··· 2161 2156 2162 2157 switch (phy_data->port_mode) { 2163 2158 case XGBE_PORT_MODE_BACKPLANE: 2159 + case XGBE_PORT_MODE_BACKPLANE_NO_AUTONEG: 2164 2160 return xgbe_phy_switch_bp_mode(pdata); 2165 2161 case XGBE_PORT_MODE_BACKPLANE_2500: 2166 2162 return xgbe_phy_switch_bp_2500_mode(pdata); ··· 2257 2251 2258 2252 switch (phy_data->port_mode) { 2259 2253 case XGBE_PORT_MODE_BACKPLANE: 2254 + case XGBE_PORT_MODE_BACKPLANE_NO_AUTONEG: 2260 2255 return xgbe_phy_get_bp_mode(speed); 2261 2256 case XGBE_PORT_MODE_BACKPLANE_2500: 2262 2257 return xgbe_phy_get_bp_2500_mode(speed); ··· 2433 2426 2434 2427 switch (phy_data->port_mode) { 2435 2428 case XGBE_PORT_MODE_BACKPLANE: 2429 + case XGBE_PORT_MODE_BACKPLANE_NO_AUTONEG: 2436 2430 return 
xgbe_phy_use_bp_mode(pdata, mode); 2437 2431 case XGBE_PORT_MODE_BACKPLANE_2500: 2438 2432 return xgbe_phy_use_bp_2500_mode(pdata, mode); ··· 2523 2515 2524 2516 switch (phy_data->port_mode) { 2525 2517 case XGBE_PORT_MODE_BACKPLANE: 2518 + case XGBE_PORT_MODE_BACKPLANE_NO_AUTONEG: 2526 2519 return xgbe_phy_valid_speed_bp_mode(speed); 2527 2520 case XGBE_PORT_MODE_BACKPLANE_2500: 2528 2521 return xgbe_phy_valid_speed_bp_2500_mode(speed); ··· 2801 2792 2802 2793 switch (phy_data->port_mode) { 2803 2794 case XGBE_PORT_MODE_BACKPLANE: 2795 + case XGBE_PORT_MODE_BACKPLANE_NO_AUTONEG: 2804 2796 if ((phy_data->port_speeds & XGBE_PHY_PORT_SPEED_1000) || 2805 2797 (phy_data->port_speeds & XGBE_PHY_PORT_SPEED_10000)) 2806 2798 return false; ··· 2854 2844 2855 2845 switch (phy_data->port_mode) { 2856 2846 case XGBE_PORT_MODE_BACKPLANE: 2847 + case XGBE_PORT_MODE_BACKPLANE_NO_AUTONEG: 2857 2848 case XGBE_PORT_MODE_BACKPLANE_2500: 2858 2849 if (phy_data->conn_type == XGBE_CONN_TYPE_BACKPLANE) 2859 2850 return false; ··· 3171 3160 /* Backplane support */ 3172 3161 case XGBE_PORT_MODE_BACKPLANE: 3173 3162 XGBE_SET_SUP(lks, Autoneg); 3163 + fallthrough; 3164 + case XGBE_PORT_MODE_BACKPLANE_NO_AUTONEG: 3174 3165 XGBE_SET_SUP(lks, Pause); 3175 3166 XGBE_SET_SUP(lks, Asym_Pause); 3176 3167 XGBE_SET_SUP(lks, Backplane);
+1
drivers/net/ethernet/arc/emac_mdio.c
··· 153 153 if (IS_ERR(data->reset_gpio)) { 154 154 error = PTR_ERR(data->reset_gpio); 155 155 dev_err(priv->dev, "Failed to request gpio: %d\n", error); 156 + mdiobus_free(bus); 156 157 return error; 157 158 } 158 159
+4 -2
drivers/net/ethernet/broadcom/bcmsysport.c
··· 2491 2491 priv->tx_rings = devm_kcalloc(&pdev->dev, txq, 2492 2492 sizeof(struct bcm_sysport_tx_ring), 2493 2493 GFP_KERNEL); 2494 - if (!priv->tx_rings) 2495 - return -ENOMEM; 2494 + if (!priv->tx_rings) { 2495 + ret = -ENOMEM; 2496 + goto err_free_netdev; 2497 + } 2496 2498 2497 2499 priv->is_lite = params->is_lite; 2498 2500 priv->num_rx_desc_words = params->num_rx_desc_words;
+59 -31
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 1141 1141 1142 1142 static void bnxt_queue_fw_reset_work(struct bnxt *bp, unsigned long delay) 1143 1143 { 1144 + if (!(test_bit(BNXT_STATE_IN_FW_RESET, &bp->state))) 1145 + return; 1146 + 1144 1147 if (BNXT_PF(bp)) 1145 1148 queue_delayed_work(bnxt_pf_wq, &bp->fw_reset_task, delay); 1146 1149 else ··· 1160 1157 1161 1158 static void bnxt_cancel_sp_work(struct bnxt *bp) 1162 1159 { 1163 - if (BNXT_PF(bp)) 1160 + if (BNXT_PF(bp)) { 1164 1161 flush_workqueue(bnxt_pf_wq); 1165 - else 1162 + } else { 1166 1163 cancel_work_sync(&bp->sp_task); 1164 + cancel_delayed_work_sync(&bp->fw_reset_task); 1165 + } 1167 1166 } 1168 1167 1169 1168 static void bnxt_sched_reset(struct bnxt *bp, struct bnxt_rx_ring_info *rxr) ··· 6107 6102 return cp + ulp_stat; 6108 6103 } 6109 6104 6105 + /* Check if a default RSS map needs to be setup. This function is only 6106 + * used on older firmware that does not require reserving RX rings. 6107 + */ 6108 + static void bnxt_check_rss_tbl_no_rmgr(struct bnxt *bp) 6109 + { 6110 + struct bnxt_hw_resc *hw_resc = &bp->hw_resc; 6111 + 6112 + /* The RSS map is valid for RX rings set to resv_rx_rings */ 6113 + if (hw_resc->resv_rx_rings != bp->rx_nr_rings) { 6114 + hw_resc->resv_rx_rings = bp->rx_nr_rings; 6115 + if (!netif_is_rxfh_configured(bp->dev)) 6116 + bnxt_set_dflt_rss_indir_tbl(bp); 6117 + } 6118 + } 6119 + 6110 6120 static bool bnxt_need_reserve_rings(struct bnxt *bp) 6111 6121 { 6112 6122 struct bnxt_hw_resc *hw_resc = &bp->hw_resc; ··· 6130 6110 int rx = bp->rx_nr_rings, stat; 6131 6111 int vnic = 1, grp = rx; 6132 6112 6133 - if (bp->hwrm_spec_code < 0x10601) 6134 - return false; 6135 - 6136 - if (hw_resc->resv_tx_rings != bp->tx_nr_rings) 6113 + if (hw_resc->resv_tx_rings != bp->tx_nr_rings && 6114 + bp->hwrm_spec_code >= 0x10601) 6137 6115 return true; 6138 6116 6117 + /* Old firmware does not need RX ring reservations but we still 6118 + * need to setup a default RSS map when needed. 
With new firmware 6119 + * we go through RX ring reservations first and then set up the 6120 + * RSS map for the successfully reserved RX rings when needed. 6121 + */ 6122 + if (!BNXT_NEW_RM(bp)) { 6123 + bnxt_check_rss_tbl_no_rmgr(bp); 6124 + return false; 6125 + } 6139 6126 if ((bp->flags & BNXT_FLAG_RFS) && !(bp->flags & BNXT_FLAG_CHIP_P5)) 6140 6127 vnic = rx + 1; 6141 6128 if (bp->flags & BNXT_FLAG_AGG_RINGS) 6142 6129 rx <<= 1; 6143 6130 stat = bnxt_get_func_stat_ctxs(bp); 6144 - if (BNXT_NEW_RM(bp) && 6145 - (hw_resc->resv_rx_rings != rx || hw_resc->resv_cp_rings != cp || 6146 - hw_resc->resv_vnics != vnic || hw_resc->resv_stat_ctxs != stat || 6147 - (hw_resc->resv_hw_ring_grps != grp && 6148 - !(bp->flags & BNXT_FLAG_CHIP_P5)))) 6131 + if (hw_resc->resv_rx_rings != rx || hw_resc->resv_cp_rings != cp || 6132 + hw_resc->resv_vnics != vnic || hw_resc->resv_stat_ctxs != stat || 6133 + (hw_resc->resv_hw_ring_grps != grp && 6134 + !(bp->flags & BNXT_FLAG_CHIP_P5))) 6149 6135 return true; 6150 6136 if ((bp->flags & BNXT_FLAG_CHIP_P5) && BNXT_PF(bp) && 6151 6137 hw_resc->resv_irqs != nq) ··· 6239 6213 6240 6214 if (!tx || !rx || !cp || !grp || !vnic || !stat) 6241 6215 return -ENOMEM; 6216 + 6217 + if (!netif_is_rxfh_configured(bp->dev)) 6218 + bnxt_set_dflt_rss_indir_tbl(bp); 6242 6219 6243 6220 return rc; 6244 6221 } ··· 8524 8495 rc = bnxt_init_int_mode(bp); 8525 8496 bnxt_ulp_irq_restart(bp, rc); 8526 8497 } 8527 - if (!netif_is_rxfh_configured(bp->dev)) 8528 - bnxt_set_dflt_rss_indir_tbl(bp); 8529 - 8530 8498 if (rc) { 8531 8499 netdev_err(bp->dev, "ring reservation/IRQ init failure rc: %d\n", rc); 8532 8500 return rc; ··· 9310 9284 struct hwrm_temp_monitor_query_input req = {0}; 9311 9285 struct hwrm_temp_monitor_query_output *resp; 9312 9286 struct bnxt *bp = dev_get_drvdata(dev); 9313 - u32 temp = 0; 9287 + u32 len = 0; 9314 9288 9315 9289 resp = bp->hwrm_cmd_resp_addr; 9316 9290 bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_TEMP_MONITOR_QUERY, -1, -1); 9317 9291 
mutex_lock(&bp->hwrm_cmd_lock); 9318 - if (!_hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT)) 9319 - temp = resp->temp * 1000; /* display millidegree */ 9292 + if (!_hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT)) 9293 + len = sprintf(buf, "%u\n", resp->temp * 1000); /* display millidegree */ 9320 9294 mutex_unlock(&bp->hwrm_cmd_lock); 9321 9295 9322 - return sprintf(buf, "%u\n", temp); 9296 + if (len) 9297 + return len; 9298 + 9299 + return sprintf(buf, "unknown\n"); 9323 9300 } 9324 9301 static SENSOR_DEVICE_ATTR(temp1_input, 0444, bnxt_show_temp, NULL, 0); 9325 9302 ··· 9504 9475 } 9505 9476 } 9506 9477 9507 - bnxt_enable_napi(bp); 9508 - bnxt_debug_dev_init(bp); 9509 - 9510 9478 rc = bnxt_init_nic(bp, irq_re_init); 9511 9479 if (rc) { 9512 9480 netdev_err(bp->dev, "bnxt_init_nic err: %x\n", rc); 9513 - goto open_err; 9481 + goto open_err_irq; 9514 9482 } 9483 + 9484 + bnxt_enable_napi(bp); 9485 + bnxt_debug_dev_init(bp); 9515 9486 9516 9487 if (link_re_init) { 9517 9488 mutex_lock(&bp->link_lock); ··· 9542 9513 if (BNXT_PF(bp)) 9543 9514 bnxt_vf_reps_open(bp); 9544 9515 return 0; 9545 - 9546 - open_err: 9547 - bnxt_debug_dev_exit(bp); 9548 - bnxt_disable_napi(bp); 9549 9516 9550 9517 open_err_irq: 9551 9518 bnxt_del_napi(bp); ··· 11786 11761 unregister_netdev(dev); 11787 11762 bnxt_dl_unregister(bp); 11788 11763 bnxt_shutdown_tc(bp); 11764 + clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state); 11789 11765 bnxt_cancel_sp_work(bp); 11790 11766 bp->sp_event = 0; 11791 11767 ··· 12226 12200 if (BNXT_CHIP_P5(bp)) 12227 12201 bp->flags |= BNXT_FLAG_CHIP_P5; 12228 12202 12203 + rc = bnxt_alloc_rss_indir_tbl(bp); 12204 + if (rc) 12205 + goto init_err_pci_clean; 12206 + 12229 12207 rc = bnxt_fw_init_one_p2(bp); 12230 12208 if (rc) 12231 12209 goto init_err_pci_clean; ··· 12334 12304 */ 12335 12305 bp->tx_nr_rings_per_tc = bp->tx_nr_rings; 12336 12306 12337 - rc = bnxt_alloc_rss_indir_tbl(bp); 12338 - if (rc) 12339 - goto init_err_pci_clean; 12340 
- bnxt_set_dflt_rss_indir_tbl(bp); 12341 - 12342 12307 if (BNXT_PF(bp)) { 12343 12308 if (!bnxt_pf_wq) { 12344 12309 bnxt_pf_wq = ··· 12364 12339 (long)pci_resource_start(pdev, 0), dev->dev_addr); 12365 12340 pcie_print_link_status(pdev); 12366 12341 12342 + pci_save_state(pdev); 12367 12343 return 0; 12368 12344 12369 12345 init_err_cleanup: ··· 12562 12536 "Cannot re-enable PCI device after reset.\n"); 12563 12537 } else { 12564 12538 pci_set_master(pdev); 12539 + pci_restore_state(pdev); 12540 + pci_save_state(pdev); 12565 12541 12566 12542 err = bnxt_hwrm_func_reset(bp); 12567 12543 if (!err) {
+6 -10
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
··· 472 472 static int bnxt_get_num_ring_stats(struct bnxt *bp) 473 473 { 474 474 int rx, tx, cmn; 475 - bool sh = false; 476 - 477 - if (bp->flags & BNXT_FLAG_SHARED_RINGS) 478 - sh = true; 479 475 480 476 rx = NUM_RING_RX_HW_STATS + NUM_RING_RX_SW_STATS + 481 477 bnxt_get_num_tpa_ring_stats(bp); 482 478 tx = NUM_RING_TX_HW_STATS; 483 479 cmn = NUM_RING_CMN_SW_STATS; 484 - if (sh) 485 - return (rx + tx + cmn) * bp->cp_nr_rings; 486 - else 487 - return rx * bp->rx_nr_rings + tx * bp->tx_nr_rings + 488 - cmn * bp->cp_nr_rings; 480 + return rx * bp->rx_nr_rings + tx * bp->tx_nr_rings + 481 + cmn * bp->cp_nr_rings; 489 482 } 490 483 491 484 static int bnxt_get_num_stats(struct bnxt *bp) ··· 799 806 int max_tx_sch_inputs; 800 807 801 808 /* Get the most up-to-date max_tx_sch_inputs. */ 802 - if (BNXT_NEW_RM(bp)) 809 + if (netif_running(dev) && BNXT_NEW_RM(bp)) 803 810 bnxt_hwrm_func_resc_qcaps(bp, false); 804 811 max_tx_sch_inputs = hw_resc->max_tx_sch_inputs; 805 812 ··· 2315 2322 rc = nvm_get_dir_info(dev, &dir_entries, &entry_length); 2316 2323 if (rc != 0) 2317 2324 return rc; 2325 + 2326 + if (!dir_entries || !entry_length) 2327 + return -EIO; 2318 2328 2319 2329 /* Insert 2 bytes of directory info (count and size of entries) */ 2320 2330 if (len < 2)
+1 -1
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 1364 1364 case ETHER_FLOW: 1365 1365 eth_mask = &cmd->fs.m_u.ether_spec; 1366 1366 /* don't allow mask which isn't valid */ 1367 - if (VALIDATE_MASK(eth_mask->h_source) || 1367 + if (VALIDATE_MASK(eth_mask->h_dest) || 1368 1368 VALIDATE_MASK(eth_mask->h_source) || 1369 1369 VALIDATE_MASK(eth_mask->h_proto)) { 1370 1370 netdev_err(dev, "rxnfc: Unsupported mask\n");
+13 -4
drivers/net/ethernet/broadcom/tg3.c
··· 7221 7221 7222 7222 static inline void tg3_reset_task_cancel(struct tg3 *tp) 7223 7223 { 7224 - cancel_work_sync(&tp->reset_task); 7225 - tg3_flag_clear(tp, RESET_TASK_PENDING); 7224 + if (test_and_clear_bit(TG3_FLAG_RESET_TASK_PENDING, tp->tg3_flags)) 7225 + cancel_work_sync(&tp->reset_task); 7226 7226 tg3_flag_clear(tp, TX_RECOVERY_PENDING); 7227 7227 } 7228 7228 ··· 11209 11209 11210 11210 tg3_halt(tp, RESET_KIND_SHUTDOWN, 0); 11211 11211 err = tg3_init_hw(tp, true); 11212 - if (err) 11212 + if (err) { 11213 + tg3_full_unlock(tp); 11214 + tp->irq_sync = 0; 11215 + tg3_napi_enable(tp); 11216 + /* Clear this flag so that tg3_reset_task_cancel() will not 11217 + * call cancel_work_sync() and wait forever. 11218 + */ 11219 + tg3_flag_clear(tp, RESET_TASK_PENDING); 11220 + dev_close(tp->dev); 11213 11221 goto out; 11222 + } 11214 11223 11215 11224 tg3_netif_start(tp); 11216 11225 11217 - out: 11218 11226 tg3_full_unlock(tp); 11219 11227 11220 11228 if (!err) 11221 11229 tg3_phy_start(tp); 11222 11230 11223 11231 tg3_flag_clear(tp, RESET_TASK_PENDING); 11232 + out: 11224 11233 rtnl_unlock(); 11225 11234 } 11226 11235
+6 -2
drivers/net/ethernet/chelsio/cxgb4/cxgb4_thermal.c
··· 62 62 int cxgb4_thermal_init(struct adapter *adap) 63 63 { 64 64 struct ch_thermal *ch_thermal = &adap->ch_thermal; 65 + char ch_tz_name[THERMAL_NAME_LENGTH]; 65 66 int num_trip = CXGB4_NUM_TRIPS; 66 67 u32 param, val; 67 68 int ret; ··· 83 82 ch_thermal->trip_type = THERMAL_TRIP_CRITICAL; 84 83 } 85 84 86 - ch_thermal->tzdev = thermal_zone_device_register("cxgb4", num_trip, 85 + snprintf(ch_tz_name, sizeof(ch_tz_name), "cxgb4_%s", adap->name); 86 + ch_thermal->tzdev = thermal_zone_device_register(ch_tz_name, num_trip, 87 87 0, adap, 88 88 &cxgb4_thermal_ops, 89 89 NULL, 0, 0); ··· 107 105 108 106 int cxgb4_thermal_remove(struct adapter *adap) 109 107 { 110 - if (adap->ch_thermal.tzdev) 108 + if (adap->ch_thermal.tzdev) { 111 109 thermal_zone_device_unregister(adap->ch_thermal.tzdev); 110 + adap->ch_thermal.tzdev = NULL; 111 + } 112 112 return 0; 113 113 }
+18 -18
drivers/net/ethernet/cortina/gemini.c
··· 2446 2446 port->reset = devm_reset_control_get_exclusive(dev, NULL); 2447 2447 if (IS_ERR(port->reset)) { 2448 2448 dev_err(dev, "no reset\n"); 2449 - clk_disable_unprepare(port->pclk); 2450 - return PTR_ERR(port->reset); 2449 + ret = PTR_ERR(port->reset); 2450 + goto unprepare; 2451 2451 } 2452 2452 reset_control_reset(port->reset); 2453 2453 usleep_range(100, 500); ··· 2502 2502 IRQF_SHARED, 2503 2503 port_names[port->id], 2504 2504 port); 2505 - if (ret) { 2506 - clk_disable_unprepare(port->pclk); 2507 - return ret; 2508 - } 2505 + if (ret) 2506 + goto unprepare; 2509 2507 2510 2508 ret = register_netdev(netdev); 2511 - if (!ret) { 2512 - netdev_info(netdev, 2513 - "irq %d, DMA @ 0x%pap, GMAC @ 0x%pap\n", 2514 - port->irq, &dmares->start, 2515 - &gmacres->start); 2516 - ret = gmac_setup_phy(netdev); 2517 - if (ret) 2518 - netdev_info(netdev, 2519 - "PHY init failed, deferring to ifup time\n"); 2520 - return 0; 2521 - } 2509 + if (ret) 2510 + goto unprepare; 2522 2511 2523 - port->netdev = NULL; 2512 + netdev_info(netdev, 2513 + "irq %d, DMA @ 0x%pap, GMAC @ 0x%pap\n", 2514 + port->irq, &dmares->start, 2515 + &gmacres->start); 2516 + ret = gmac_setup_phy(netdev); 2517 + if (ret) 2518 + netdev_info(netdev, 2519 + "PHY init failed, deferring to ifup time\n"); 2520 + return 0; 2521 + 2522 + unprepare: 2523 + clk_disable_unprepare(port->pclk); 2524 2524 return ret; 2525 2525 } 2526 2526
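The gemini fix above consolidates the duplicated `clk_disable_unprepare()` calls into one `goto unprepare` label, the usual kernel unwind idiom. A compressed userspace sketch of the pattern (the resource and failure steps are illustrative stand-ins, not the driver's):

```c
#include <assert.h>

static int cleanups;	/* counts how often the resource was released */

static int probe_sketch(int fail_step)
{
	int ret = 0;

	/* ... stands in for clk_prepare_enable(): from here on, every
	 * error path must undo this exactly once ... */

	if (fail_step == 1) {		/* e.g. reset controller missing */
		ret = -1;
		goto unprepare;
	}
	if (fail_step == 2) {		/* e.g. request_irq() failed */
		ret = -1;
		goto unprepare;
	}
	return 0;			/* success: resource stays enabled */

unprepare:
	cleanups++;			/* stands in for clk_disable_unprepare() */
	return ret;
}
```

Funneling every late failure through one label keeps the cleanup in a single place, so adding a new error path cannot silently skip the release.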
+6 -3
drivers/net/ethernet/hisilicon/hns/hns_enet.c
··· 2282 2282 priv->enet_ver = AE_VERSION_1; 2283 2283 else if (acpi_dev_found(hns_enet_acpi_match[1].id)) 2284 2284 priv->enet_ver = AE_VERSION_2; 2285 - else 2286 - return -ENXIO; 2285 + else { 2286 + ret = -ENXIO; 2287 + goto out_read_prop_fail; 2288 + } 2287 2289 2288 2290 /* try to find port-idx-in-ae first */ 2289 2291 ret = acpi_node_get_property_reference(dev->fwnode, ··· 2301 2299 priv->fwnode = args.fwnode; 2302 2300 } else { 2303 2301 dev_err(dev, "cannot read cfg data from OF or acpi\n"); 2304 - return -ENXIO; 2302 + ret = -ENXIO; 2303 + goto out_read_prop_fail; 2305 2304 } 2306 2305 2307 2306 ret = device_property_read_u32(dev, "port-idx-in-ae", &port_id);
+4 -2
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
··· 21 21 #include <net/pkt_cls.h> 22 22 #include <net/tcp.h> 23 23 #include <net/vxlan.h> 24 + #include <net/geneve.h> 24 25 25 26 #include "hnae3.h" 26 27 #include "hns3_enet.h" ··· 781 780 * and it is udp packet, which has a dest port as the IANA assigned. 782 781 * the hardware is expected to do the checksum offload, but the 783 782 * hardware will not do the checksum offload when udp dest port is 784 - * 4789. 783 + * 4789 or 6081. 785 784 */ 786 785 static bool hns3_tunnel_csum_bug(struct sk_buff *skb) 787 786 { ··· 790 789 l4.hdr = skb_transport_header(skb); 791 790 792 791 if (!(!skb->encapsulation && 793 - l4.udp->dest == htons(IANA_VXLAN_UDP_PORT))) 792 + (l4.udp->dest == htons(IANA_VXLAN_UDP_PORT) || 793 + l4.udp->dest == htons(GENEVE_UDP_PORT)))) 794 794 return false; 795 795 796 796 skb_checksum_help(skb);
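The hns3 change extends the checksum workaround from the VXLAN well-known UDP destination port (4789) to also cover GENEVE (6081). A hedged standalone sketch of just the port test, with the port constants taken from the diff; header fields are big-endian, hence the `htons()` on both sides:

```c
#include <arpa/inet.h>
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define IANA_VXLAN_UDP_PORT	4789
#define GENEVE_UDP_PORT		6081

/* Returns true when a UDP destination port (network byte order) is one
 * of the tunnel ports the hardware mishandles per the commit above. */
static bool is_known_tunnel_port(uint16_t dest_be)
{
	return dest_be == htons(IANA_VXLAN_UDP_PORT) ||
	       dest_be == htons(GENEVE_UDP_PORT);
}
```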
+14 -1
drivers/net/ethernet/ibm/ibmvnic.c
··· 479 479 int i, j, rc; 480 480 u64 *size_array; 481 481 482 + if (!adapter->rx_pool) 483 + return -1; 484 + 482 485 size_array = (u64 *)((u8 *)(adapter->login_rsp_buf) + 483 486 be32_to_cpu(adapter->login_rsp_buf->off_rxadd_buff_size)); 484 487 ··· 651 648 { 652 649 int tx_scrqs; 653 650 int i, rc; 651 + 652 + if (!adapter->tx_pool) 653 + return -1; 654 654 655 655 tx_scrqs = be32_to_cpu(adapter->login_rsp_buf->num_txsubm_subcrqs); 656 656 for (i = 0; i < tx_scrqs; i++) { ··· 2017 2011 adapter->req_rx_add_entries_per_subcrq != 2018 2012 old_num_rx_slots || 2019 2013 adapter->req_tx_entries_per_subcrq != 2020 - old_num_tx_slots) { 2014 + old_num_tx_slots || 2015 + !adapter->rx_pool || 2016 + !adapter->tso_pool || 2017 + !adapter->tx_pool) { 2021 2018 release_rx_pools(adapter); 2022 2019 release_tx_pools(adapter); 2023 2020 release_napi(adapter); ··· 2033 2024 } else { 2034 2025 rc = reset_tx_pools(adapter); 2035 2026 if (rc) 2027 + netdev_dbg(adapter->netdev, "reset tx pools failed (%d)\n", 2028 + rc); 2036 2029 goto out; 2037 2030 2038 2031 rc = reset_rx_pools(adapter); 2039 2032 if (rc) 2033 + netdev_dbg(adapter->netdev, "reset rx pools failed (%d)\n", 2034 + rc); 2040 2035 goto out; 2041 2036 } 2042 2037 ibmvnic_disable_irqs(adapter);
+1 -1
drivers/net/ethernet/mellanox/mlx4/mr.c
··· 114 114 goto err_out; 115 115 116 116 for (i = 0; i <= buddy->max_order; ++i) { 117 - s = BITS_TO_LONGS(1 << (buddy->max_order - i)); 117 + s = BITS_TO_LONGS(1UL << (buddy->max_order - i)); 118 118 buddy->bits[i] = kvmalloc_array(s, sizeof(long), GFP_KERNEL | __GFP_ZERO); 119 119 if (!buddy->bits[i]) 120 120 goto err_out_free;
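The one-character mlx4 fix above matters because `1 << n` is evaluated in 32-bit signed `int`: for `n >= 31` the shift overflows (undefined behavior) before the result ever reaches `BITS_TO_LONGS()`. Widening the constant to `unsigned long` keeps the arithmetic well-defined on 64-bit targets. A small sketch under that assumption (function name is illustrative):

```c
#include <assert.h>

/* With a plain "1 <<", a shift count of 31 or more overflows signed int.
 * The 1UL constant promotes the whole expression to unsigned long, which
 * is 64 bits on LP64 Linux, so large orders stay well-defined. */
static unsigned long entries_for_order(unsigned int max_order, unsigned int i)
{
	return 1UL << (max_order - i);
}
```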
+2
drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
··· 61 61 * @flags: options part of the request 62 62 * @tun_info.ipv6: dest IPv6 address of active route 63 63 * @tun_info.egress_port: port the encapsulated packet egressed 64 + * @tun_info.extra: reserved for future use 64 65 * @tun_info: tunnels that have sent traffic in reported period 65 66 */ 66 67 struct nfp_tun_active_tuns_v6 { ··· 71 70 struct route_ip_info_v6 { 72 71 struct in6_addr ipv6; 73 72 __be32 egress_port; 73 + __be32 extra[2]; 74 74 } tun_info[]; 75 75 }; 76 76
+3 -10
drivers/net/ethernet/pensando/ionic/ionic_txrx.c
··· 496 496 struct ionic_cq *txcq; 497 497 u32 rx_work_done = 0; 498 498 u32 tx_work_done = 0; 499 - u32 work_done = 0; 500 499 u32 flags = 0; 501 - bool unmask; 502 500 503 501 lif = rxcq->bound_q->lif; 504 502 idev = &lif->ionic->idev; ··· 510 512 if (rx_work_done) 511 513 ionic_rx_fill_cb(rxcq->bound_q); 512 514 513 - unmask = (rx_work_done < budget) && (tx_work_done < lif->tx_budget); 514 - 515 - if (unmask && napi_complete_done(napi, rx_work_done)) { 515 + if (rx_work_done < budget && napi_complete_done(napi, rx_work_done)) { 516 516 flags |= IONIC_INTR_CRED_UNMASK; 517 517 DEBUG_STATS_INTR_REARM(rxcq->bound_intr); 518 - work_done = rx_work_done; 519 - } else { 520 - work_done = budget; 521 518 } 522 519 523 - if (work_done || flags) { 520 + if (rx_work_done || flags) { 524 521 flags |= IONIC_INTR_CRED_RESET_COALESCE; 525 522 ionic_intr_credits(idev->intr_ctrl, rxcq->bound_intr->index, 526 523 tx_work_done + rx_work_done, flags); ··· 524 531 DEBUG_STATS_NAPI_POLL(qcq, rx_work_done); 525 532 DEBUG_STATS_NAPI_POLL(qcq, tx_work_done); 526 533 527 - return work_done; 534 + return rx_work_done; 528 535 } 529 536 530 537 static dma_addr_t ionic_tx_map_single(struct ionic_queue *q,
+55 -55
drivers/net/ethernet/renesas/ravb_main.c
··· 1342 1342 return error; 1343 1343 } 1344 1344 1345 + /* MDIO bus init function */ 1346 + static int ravb_mdio_init(struct ravb_private *priv) 1347 + { 1348 + struct platform_device *pdev = priv->pdev; 1349 + struct device *dev = &pdev->dev; 1350 + int error; 1351 + 1352 + /* Bitbang init */ 1353 + priv->mdiobb.ops = &bb_ops; 1354 + 1355 + /* MII controller setting */ 1356 + priv->mii_bus = alloc_mdio_bitbang(&priv->mdiobb); 1357 + if (!priv->mii_bus) 1358 + return -ENOMEM; 1359 + 1360 + /* Hook up MII support for ethtool */ 1361 + priv->mii_bus->name = "ravb_mii"; 1362 + priv->mii_bus->parent = dev; 1363 + snprintf(priv->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x", 1364 + pdev->name, pdev->id); 1365 + 1366 + /* Register MDIO bus */ 1367 + error = of_mdiobus_register(priv->mii_bus, dev->of_node); 1368 + if (error) 1369 + goto out_free_bus; 1370 + 1371 + return 0; 1372 + 1373 + out_free_bus: 1374 + free_mdio_bitbang(priv->mii_bus); 1375 + return error; 1376 + } 1377 + 1378 + /* MDIO bus release function */ 1379 + static int ravb_mdio_release(struct ravb_private *priv) 1380 + { 1381 + /* Unregister mdio bus */ 1382 + mdiobus_unregister(priv->mii_bus); 1383 + 1384 + /* Free bitbang info */ 1385 + free_mdio_bitbang(priv->mii_bus); 1386 + 1387 + return 0; 1388 + } 1389 + 1345 1390 /* Network device open function for Ethernet AVB */ 1346 1391 static int ravb_open(struct net_device *ndev) 1347 1392 { ··· 1394 1349 struct platform_device *pdev = priv->pdev; 1395 1350 struct device *dev = &pdev->dev; 1396 1351 int error; 1352 + 1353 + /* MDIO bus init */ 1354 + error = ravb_mdio_init(priv); 1355 + if (error) { 1356 + netdev_err(ndev, "failed to initialize MDIO\n"); 1357 + return error; 1358 + } 1397 1359 1398 1360 napi_enable(&priv->napi[RAVB_BE]); 1399 1361 napi_enable(&priv->napi[RAVB_NC]); ··· 1479 1427 out_napi_off: 1480 1428 napi_disable(&priv->napi[RAVB_NC]); 1481 1429 napi_disable(&priv->napi[RAVB_BE]); 1430 + ravb_mdio_release(priv); 1482 1431 return error; 1483 1432 } 
1484 1433 ··· 1789 1736 ravb_ring_free(ndev, RAVB_BE); 1790 1737 ravb_ring_free(ndev, RAVB_NC); 1791 1738 1739 + ravb_mdio_release(priv); 1740 + 1792 1741 return 0; 1793 1742 } 1794 1743 ··· 1941 1886 .ndo_set_mac_address = eth_mac_addr, 1942 1887 .ndo_set_features = ravb_set_features, 1943 1888 }; 1944 - 1945 - /* MDIO bus init function */ 1946 - static int ravb_mdio_init(struct ravb_private *priv) 1947 - { 1948 - struct platform_device *pdev = priv->pdev; 1949 - struct device *dev = &pdev->dev; 1950 - int error; 1951 - 1952 - /* Bitbang init */ 1953 - priv->mdiobb.ops = &bb_ops; 1954 - 1955 - /* MII controller setting */ 1956 - priv->mii_bus = alloc_mdio_bitbang(&priv->mdiobb); 1957 - if (!priv->mii_bus) 1958 - return -ENOMEM; 1959 - 1960 - /* Hook up MII support for ethtool */ 1961 - priv->mii_bus->name = "ravb_mii"; 1962 - priv->mii_bus->parent = dev; 1963 - snprintf(priv->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x", 1964 - pdev->name, pdev->id); 1965 - 1966 - /* Register MDIO bus */ 1967 - error = of_mdiobus_register(priv->mii_bus, dev->of_node); 1968 - if (error) 1969 - goto out_free_bus; 1970 - 1971 - return 0; 1972 - 1973 - out_free_bus: 1974 - free_mdio_bitbang(priv->mii_bus); 1975 - return error; 1976 - } 1977 - 1978 - /* MDIO bus release function */ 1979 - static int ravb_mdio_release(struct ravb_private *priv) 1980 - { 1981 - /* Unregister mdio bus */ 1982 - mdiobus_unregister(priv->mii_bus); 1983 - 1984 - /* Free bitbang info */ 1985 - free_mdio_bitbang(priv->mii_bus); 1986 - 1987 - return 0; 1988 - } 1989 1889 1990 1890 static const struct of_device_id ravb_match_table[] = { 1991 1891 { .compatible = "renesas,etheravb-r8a7790", .data = (void *)RCAR_GEN2 }, ··· 2184 2174 eth_hw_addr_random(ndev); 2185 2175 } 2186 2176 2187 - /* MDIO bus init */ 2188 - error = ravb_mdio_init(priv); 2189 - if (error) { 2190 - dev_err(&pdev->dev, "failed to initialize MDIO\n"); 2191 - goto out_dma_free; 2192 - } 2193 - 2194 2177 netif_napi_add(ndev, &priv->napi[RAVB_BE], 
ravb_poll, 64); 2195 2178 netif_napi_add(ndev, &priv->napi[RAVB_NC], ravb_poll, 64); 2196 2179 ··· 2205 2202 out_napi_del: 2206 2203 netif_napi_del(&priv->napi[RAVB_NC]); 2207 2204 netif_napi_del(&priv->napi[RAVB_BE]); 2208 - ravb_mdio_release(priv); 2209 - out_dma_free: 2210 2205 dma_free_coherent(ndev->dev.parent, priv->desc_bat_size, priv->desc_bat, 2211 2206 priv->desc_bat_dma); 2212 2207 ··· 2236 2235 unregister_netdev(ndev); 2237 2236 netif_napi_del(&priv->napi[RAVB_NC]); 2238 2237 netif_napi_del(&priv->napi[RAVB_BE]); 2239 - ravb_mdio_release(priv); 2240 2238 pm_runtime_disable(&pdev->dev); 2241 2239 free_netdev(ndev); 2242 2240 platform_set_drvdata(pdev, NULL);
+4 -4
drivers/net/ethernet/sfc/ef100_rx.c
···
36 36 	return PREFIX_FIELD(prefix, RSS_HASH_VALID);
37 37 }
38 38 
39 -static bool check_fcs(struct efx_channel *channel, u32 *prefix)
39 +static bool ef100_has_fcs_error(struct efx_channel *channel, u32 *prefix)
40 40 {
41 41 	u16 rxclass;
42 42 	u8 l2status;
···
46 46 
47 47 	if (likely(l2status == ESE_GZ_RH_HCLASS_L2_STATUS_OK))
48 48 		/* Everything is ok */
49 -		return 0;
49 +		return false;
50 50 
51 51 	if (l2status == ESE_GZ_RH_HCLASS_L2_STATUS_FCS_ERR)
52 52 		channel->n_rx_eth_crc_err++;
53 -	return 1;
53 +	return true;
54 54 }
55 55 
56 56 void __ef100_rx_packet(struct efx_channel *channel)
···
63 63 
64 64 	prefix = (u32 *)(eh - ESE_GZ_RX_PKT_PREFIX_LEN);
65 65 
66 -	if (check_fcs(channel, prefix) &&
66 +	if (ef100_has_fcs_error(channel, prefix) &&
67 67 	    unlikely(!(efx->net_dev->features & NETIF_F_RXALL)))
68 68 		goto out;
69 69 
+2
drivers/net/ethernet/ti/am65-cpsw-nuss.c
···
174 174 	if (phy->speed == 10 && phy_interface_is_rgmii(phy))
175 175 		/* Can be used with in band mode only */
176 176 		mac_control |= CPSW_SL_CTL_EXT_EN;
177 +	if (phy->speed == 100 && phy->interface == PHY_INTERFACE_MODE_RMII)
178 +		mac_control |= CPSW_SL_CTL_IFCTL_A;
177 179 	if (phy->duplex)
178 180 		mac_control |= CPSW_SL_CTL_FULLDUPLEX;
179 181 
+1 -1
drivers/net/ethernet/ti/cpsw.c
···
1116 1116 				   HOST_PORT_NUM, ALE_VLAN, vid);
1117 1117 	ret |= cpsw_ale_del_mcast(cpsw->ale, priv->ndev->broadcast,
1118 1118 				  0, ALE_VLAN, vid);
1119 -	ret |= cpsw_ale_flush_multicast(cpsw->ale, 0, vid);
1119 +	ret |= cpsw_ale_flush_multicast(cpsw->ale, ALE_PORT_HOST, vid);
1120 1120 err:
1121 1121 	pm_runtime_put(cpsw->dev);
1122 1122 	return ret;
+22 -7
drivers/net/ethernet/ti/cpsw_new.c
···
1032 1032 		return ret;
1033 1033 	}
1034 1034 
1035 +	/* reset the return code as pm_runtime_get_sync() can return
1036 +	 * non zero values as well.
1037 +	 */
1038 +	ret = 0;
1035 1039 	for (i = 0; i < cpsw->data.slaves; i++) {
1036 1040 		if (cpsw->slaves[i].ndev &&
1037 -		    vid == cpsw->slaves[i].port_vlan)
1041 +		    vid == cpsw->slaves[i].port_vlan) {
1042 +			ret = -EINVAL;
1038 1043 			goto err;
1044 +		}
1039 1045 	}
1040 1046 
1041 1047 	dev_dbg(priv->dev, "removing vlanid %d from vlan filter\n", vid);
1042 -	cpsw_ale_del_vlan(cpsw->ale, vid, 0);
1043 -	cpsw_ale_del_ucast(cpsw->ale, priv->mac_addr,
1044 -			   HOST_PORT_NUM, ALE_VLAN, vid);
1045 -	cpsw_ale_del_mcast(cpsw->ale, priv->ndev->broadcast,
1046 -			   0, ALE_VLAN, vid);
1047 -	cpsw_ale_flush_multicast(cpsw->ale, 0, vid);
1048 +	ret = cpsw_ale_del_vlan(cpsw->ale, vid, 0);
1049 +	if (ret)
1050 +		dev_err(priv->dev, "cpsw_ale_del_vlan() failed: ret %d\n", ret);
1051 +	ret = cpsw_ale_del_ucast(cpsw->ale, priv->mac_addr,
1052 +				 HOST_PORT_NUM, ALE_VLAN, vid);
1053 +	if (ret)
1054 +		dev_err(priv->dev, "cpsw_ale_del_ucast() failed: ret %d\n",
1055 +			ret);
1056 +	ret = cpsw_ale_del_mcast(cpsw->ale, priv->ndev->broadcast,
1057 +				 0, ALE_VLAN, vid);
1058 +	if (ret)
1059 +		dev_err(priv->dev, "cpsw_ale_del_mcast failed. ret %d\n",
1060 +			ret);
1061 +	cpsw_ale_flush_multicast(cpsw->ale, ALE_PORT_HOST, vid);
1062 +	ret = 0;
1048 1063 err:
1049 1064 	pm_runtime_put(cpsw->dev);
1050 1065 	return ret;
+1
drivers/net/gtp.c
···
1179 1179 		goto nlmsg_failure;
1180 1180 
1181 1181 	if (nla_put_u32(skb, GTPA_VERSION, pctx->gtp_version) ||
1182 +	    nla_put_u32(skb, GTPA_LINK, pctx->dev->ifindex) ||
1182 1183 	    nla_put_be32(skb, GTPA_PEER_ADDRESS, pctx->peer_addr_ip4.s_addr) ||
1183 1184 	    nla_put_be32(skb, GTPA_MS_ADDRESS, pctx->ms_addr_ip4.s_addr))
1184 1185 		goto nla_put_failure;
+2 -2
drivers/net/phy/dp83867.c
···
215 215 	if (wol->wolopts & WAKE_MAGICSECURE) {
216 216 		phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_RXFSOP1,
217 217 			      (wol->sopass[1] << 8) | wol->sopass[0]);
218 -		phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_RXFSOP1,
218 +		phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_RXFSOP2,
219 219 			      (wol->sopass[3] << 8) | wol->sopass[2]);
220 -		phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_RXFSOP1,
220 +		phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_RXFSOP3,
221 221 			      (wol->sopass[5] << 8) | wol->sopass[4]);
222 222 
223 223 		val_rxcfg |= DP83867_WOL_SEC_EN;
+6 -6
drivers/net/phy/dp83869.c
···
427 427 		return ret;
428 428 
429 429 	val = phy_read_mmd(phydev, DP83869_DEVADDR, DP83869_RGMIICTL);
430 -	val &= ~(DP83869_RGMII_TX_CLK_DELAY_EN |
431 -		 DP83869_RGMII_RX_CLK_DELAY_EN);
430 +	val |= (DP83869_RGMII_TX_CLK_DELAY_EN |
431 +		DP83869_RGMII_RX_CLK_DELAY_EN);
432 432 
433 433 	if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID)
434 -		val |= (DP83869_RGMII_TX_CLK_DELAY_EN |
435 -			DP83869_RGMII_RX_CLK_DELAY_EN);
434 +		val &= ~(DP83869_RGMII_TX_CLK_DELAY_EN |
435 +			 DP83869_RGMII_RX_CLK_DELAY_EN);
436 436 
437 437 	if (phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID)
438 -		val |= DP83869_RGMII_TX_CLK_DELAY_EN;
438 +		val &= ~DP83869_RGMII_TX_CLK_DELAY_EN;
439 439 
440 440 	if (phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID)
441 -		val |= DP83869_RGMII_RX_CLK_DELAY_EN;
441 +		val &= ~DP83869_RGMII_RX_CLK_DELAY_EN;
442 442 
443 443 	ret = phy_write_mmd(phydev, DP83869_DEVADDR, DP83869_RGMIICTL,
444 444 			    val);
+1
drivers/net/usb/Kconfig
···
252 252 config USB_NET_CDC_NCM
253 253 	tristate "CDC NCM support"
254 254 	depends on USB_USBNET
255 +	select USB_NET_CDCETHER
255 256 	default y
256 257 	help
257 258 	  This driver provides support for CDC NCM (Network Control Model
+1 -1
drivers/net/usb/asix_common.c
···
296 296 
297 297 	netdev_dbg(dev->net, "asix_get_phy_addr()\n");
298 298 
299 -	if (ret < 0) {
299 +	if (ret < 2) {
300 300 		netdev_err(dev->net, "Error reading PHYID register: %02x\n", ret);
301 301 		goto out;
302 302 	}
+4
drivers/net/usb/dm9601.c
···
625 625 	USB_DEVICE(0x0a46, 0x1269),	/* DM9621A USB to Fast Ethernet Adapter */
626 626 	.driver_info = (unsigned long)&dm9601_info,
627 627 	},
628 +	{
629 +	USB_DEVICE(0x0586, 0x3427),	/* ZyXEL Keenetic Plus DSL xDSL modem */
630 +	.driver_info = (unsigned long)&dm9601_info,
631 +	},
628 632 	{},	// END
629 633 };
+1 -1
drivers/net/wan/hdlc.c
···
229 229 	dev->min_mtu = 68;
230 230 	dev->max_mtu = HDLC_MAX_MTU;
231 231 	dev->type = ARPHRD_RAWHDLC;
232 -	dev->hard_header_len = 16;
232 +	dev->hard_header_len = 0;
233 233 	dev->needed_headroom = 0;
234 234 	dev->addr_len = 0;
235 235 	dev->header_ops = &hdlc_null_ops;
+1
drivers/net/wan/hdlc_cisco.c
···
370 370 	memcpy(&state(hdlc)->settings, &new_settings, size);
371 371 	spin_lock_init(&state(hdlc)->lock);
372 372 	dev->header_ops = &cisco_header_ops;
373 +	dev->hard_header_len = sizeof(struct hdlc_header);
373 374 	dev->type = ARPHRD_CISCO;
374 375 	call_netdevice_notifiers(NETDEV_POST_TYPE_CHANGE, dev);
375 376 	netif_dormant_on(dev);
+3
drivers/net/wan/lapbether.c
···
210 210 
211 211 	skb->dev = dev = lapbeth->ethdev;
212 212 
213 +	skb_reset_network_header(skb);
214 +
213 215 	dev_hard_header(skb, dev, ETH_P_DEC, bcast_addr, NULL, 0);
214 216 
215 217 	dev_queue_xmit(skb);
···
342 340 	 */
343 341 	ndev->needed_headroom = -1 + 3 + 2 + dev->hard_header_len
344 342 			      + dev->needed_headroom;
343 +	ndev->needed_tailroom = dev->needed_tailroom;
345 344 
346 345 	lapbeth = netdev_priv(ndev);
347 346 	lapbeth->axdev = ndev;
+1 -1
drivers/nfc/st95hf/core.c
···
966 966 	rc = down_killable(&stcontext->exchange_lock);
967 967 	if (rc) {
968 968 		WARN(1, "Semaphore is not found up in st95hf_in_send_cmd\n");
969 -		return rc;
969 +		goto free_skb_resp;
970 970 	}
971 971 
972 972 	rc = st95hf_spi_send(&stcontext->spicontext, skb->data,
+1 -1
drivers/vhost/vhost.c
···
2537 2537 	if (!vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX)) {
2538 2538 		r = vhost_update_used_flags(vq);
2539 2539 		if (r)
2540 -			vq_err(vq, "Failed to enable notification at %p: %d\n",
2540 +			vq_err(vq, "Failed to disable notification at %p: %d\n",
2541 2541 			       &vq->used->flags, r);
2542 2542 	}
2543 2543 }
+2 -2
fs/afs/fs_probe.c
···
161 161 		}
162 162 	}
163 163 
164 -	rtt_us = rxrpc_kernel_get_srtt(call->net->socket, call->rxcall);
165 -	if (rtt_us < server->probe.rtt) {
164 +	if (rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us) &&
165 +	    rtt_us < server->probe.rtt) {
166 166 		server->probe.rtt = rtt_us;
167 167 		server->rtt = rtt_us;
168 168 		alist->preferred = index;
+8 -6
fs/afs/internal.h
···
401 401 #define AFS_VLSERVER_FL_PROBED	0		/* The VL server has been probed */
402 402 #define AFS_VLSERVER_FL_PROBING	1		/* VL server is being probed */
403 403 #define AFS_VLSERVER_FL_IS_YFS	2		/* Server is YFS not AFS */
404 +#define AFS_VLSERVER_FL_RESPONDING 3		/* VL server is responding */
404 405 	rwlock_t		lock;		/* Lock on addresses */
405 406 	atomic_t		usage;
407 +	unsigned int		rtt;		/* Server's current RTT in uS */
406 408 
407 409 	/* Probe state */
408 410 	wait_queue_head_t	probe_wq;
409 411 	atomic_t		probe_outstanding;
410 412 	spinlock_t		probe_lock;
411 413 	struct {
412 -		unsigned int	rtt;		/* RTT as ktime/64 */
414 +		unsigned int	rtt;		/* RTT in uS */
413 415 		u32		abort_code;
414 416 		short		error;
415 -		bool		have_result;
416 -		bool		responded:1;
417 -		bool		is_yfs:1;
418 -		bool		not_yfs:1;
419 -		bool		local_failure:1;
417 +		unsigned short	flags;
418 +#define AFS_VLSERVER_PROBE_RESPONDED	0x01	/* At least once response (may be abort) */
419 +#define AFS_VLSERVER_PROBE_IS_YFS	0x02	/* The peer appears to be YFS */
420 +#define AFS_VLSERVER_PROBE_NOT_YFS	0x04	/* The peer appears not to be YFS */
421 +#define AFS_VLSERVER_PROBE_LOCAL_FAILURE 0x08	/* A local failure prevented a probe */
420 422 	} probe;
421 423 
422 424 	u16			port;
+5
fs/afs/proc.c
···
310 310 			   alist->preferred == i ? '>' : '-',
311 311 			   &alist->addrs[i].transport);
312 312 	}
313 +	seq_printf(m, " info: fl=%lx rtt=%d\n", vlserver->flags, vlserver->rtt);
314 +	seq_printf(m, " probe: fl=%x e=%d ac=%d out=%d\n",
315 +		   vlserver->probe.flags, vlserver->probe.error,
316 +		   vlserver->probe.abort_code,
317 +		   atomic_read(&vlserver->probe_outstanding));
313 318 	return 0;
314 319 }
+1
fs/afs/vl_list.c
···
21 21 		rwlock_init(&vlserver->lock);
22 22 		init_waitqueue_head(&vlserver->probe_wq);
23 23 		spin_lock_init(&vlserver->probe_lock);
24 +		vlserver->rtt = UINT_MAX;
24 25 		vlserver->name_len = name_len;
25 26 		vlserver->port = port;
26 27 		memcpy(vlserver->name, name, name_len);
+52 -32
fs/afs/vl_probe.c
··· 11 11 #include "internal.h" 12 12 #include "protocol_yfs.h" 13 13 14 - static bool afs_vl_probe_done(struct afs_vlserver *server) 15 - { 16 - if (!atomic_dec_and_test(&server->probe_outstanding)) 17 - return false; 18 14 19 - wake_up_var(&server->probe_outstanding); 15 + /* 16 + * Handle the completion of a set of probes. 17 + */ 18 + static void afs_finished_vl_probe(struct afs_vlserver *server) 19 + { 20 + if (!(server->probe.flags & AFS_VLSERVER_PROBE_RESPONDED)) { 21 + server->rtt = UINT_MAX; 22 + clear_bit(AFS_VLSERVER_FL_RESPONDING, &server->flags); 23 + } 24 + 20 25 clear_bit_unlock(AFS_VLSERVER_FL_PROBING, &server->flags); 21 26 wake_up_bit(&server->flags, AFS_VLSERVER_FL_PROBING); 22 - return true; 27 + } 28 + 29 + /* 30 + * Handle the completion of a probe RPC call. 31 + */ 32 + static void afs_done_one_vl_probe(struct afs_vlserver *server, bool wake_up) 33 + { 34 + if (atomic_dec_and_test(&server->probe_outstanding)) { 35 + afs_finished_vl_probe(server); 36 + wake_up = true; 37 + } 38 + 39 + if (wake_up) 40 + wake_up_all(&server->probe_wq); 23 41 } 24 42 25 43 /* ··· 63 45 server->probe.error = 0; 64 46 goto responded; 65 47 case -ECONNABORTED: 66 - if (!server->probe.responded) { 48 + if (!(server->probe.flags & AFS_VLSERVER_PROBE_RESPONDED)) { 67 49 server->probe.abort_code = call->abort_code; 68 50 server->probe.error = ret; 69 51 } 70 52 goto responded; 71 53 case -ENOMEM: 72 54 case -ENONET: 73 - server->probe.local_failure = true; 74 - afs_io_error(call, afs_io_error_vl_probe_fail); 55 + case -EKEYEXPIRED: 56 + case -EKEYREVOKED: 57 + case -EKEYREJECTED: 58 + server->probe.flags |= AFS_VLSERVER_PROBE_LOCAL_FAILURE; 59 + if (server->probe.error == 0) 60 + server->probe.error = ret; 61 + trace_afs_io_error(call->debug_id, ret, afs_io_error_vl_probe_fail); 75 62 goto out; 76 63 case -ECONNRESET: /* Responded, but call expired. 
*/ 77 64 case -ERFKILL: ··· 90 67 default: 91 68 clear_bit(index, &alist->responded); 92 69 set_bit(index, &alist->failed); 93 - if (!server->probe.responded && 70 + if (!(server->probe.flags & AFS_VLSERVER_PROBE_RESPONDED) && 94 71 (server->probe.error == 0 || 95 72 server->probe.error == -ETIMEDOUT || 96 73 server->probe.error == -ETIME)) 97 74 server->probe.error = ret; 98 - afs_io_error(call, afs_io_error_vl_probe_fail); 75 + trace_afs_io_error(call->debug_id, ret, afs_io_error_vl_probe_fail); 99 76 goto out; 100 77 } 101 78 ··· 104 81 clear_bit(index, &alist->failed); 105 82 106 83 if (call->service_id == YFS_VL_SERVICE) { 107 - server->probe.is_yfs = true; 84 + server->probe.flags |= AFS_VLSERVER_PROBE_IS_YFS; 108 85 set_bit(AFS_VLSERVER_FL_IS_YFS, &server->flags); 109 86 alist->addrs[index].srx_service = call->service_id; 110 87 } else { 111 - server->probe.not_yfs = true; 112 - if (!server->probe.is_yfs) { 88 + server->probe.flags |= AFS_VLSERVER_PROBE_NOT_YFS; 89 + if (!(server->probe.flags & AFS_VLSERVER_PROBE_IS_YFS)) { 113 90 clear_bit(AFS_VLSERVER_FL_IS_YFS, &server->flags); 114 91 alist->addrs[index].srx_service = call->service_id; 115 92 } 116 93 } 117 94 118 - rtt_us = rxrpc_kernel_get_srtt(call->net->socket, call->rxcall); 119 - if (rtt_us < server->probe.rtt) { 95 + if (rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us) && 96 + rtt_us < server->probe.rtt) { 120 97 server->probe.rtt = rtt_us; 98 + server->rtt = rtt_us; 121 99 alist->preferred = index; 122 - have_result = true; 123 100 } 124 101 125 102 smp_wmb(); /* Set rtt before responded. 
*/ 126 - server->probe.responded = true; 103 + server->probe.flags |= AFS_VLSERVER_PROBE_RESPONDED; 127 104 set_bit(AFS_VLSERVER_FL_PROBED, &server->flags); 105 + set_bit(AFS_VLSERVER_FL_RESPONDING, &server->flags); 106 + have_result = true; 128 107 out: 129 108 spin_unlock(&server->probe_lock); 130 109 131 110 _debug("probe [%u][%u] %pISpc rtt=%u ret=%d", 132 111 server_index, index, &alist->addrs[index].transport, rtt_us, ret); 133 112 134 - have_result |= afs_vl_probe_done(server); 135 - if (have_result) { 136 - server->probe.have_result = true; 137 - wake_up_var(&server->probe.have_result); 138 - wake_up_all(&server->probe_wq); 139 - } 113 + afs_done_one_vl_probe(server, have_result); 140 114 } 141 115 142 116 /* ··· 171 151 in_progress = true; 172 152 } else { 173 153 afs_prioritise_error(_e, PTR_ERR(call), ac.abort_code); 154 + afs_done_one_vl_probe(server, false); 174 155 } 175 156 } 176 157 177 - if (!in_progress) 178 - afs_vl_probe_done(server); 179 158 return in_progress; 180 159 } 181 160 ··· 212 193 { 213 194 struct wait_queue_entry *waits; 214 195 struct afs_vlserver *server; 215 - unsigned int rtt = UINT_MAX; 196 + unsigned int rtt = UINT_MAX, rtt_s; 216 197 bool have_responders = false; 217 198 int pref = -1, i; 218 199 ··· 224 205 server = vllist->servers[i].server; 225 206 if (!test_bit(AFS_VLSERVER_FL_PROBING, &server->flags)) 226 207 __clear_bit(i, &untried); 227 - if (server->probe.responded) 208 + if (server->probe.flags & AFS_VLSERVER_PROBE_RESPONDED) 228 209 have_responders = true; 229 210 } 230 211 } ··· 250 231 for (i = 0; i < vllist->nr_servers; i++) { 251 232 if (test_bit(i, &untried)) { 252 233 server = vllist->servers[i].server; 253 - if (server->probe.responded) 234 + if (server->probe.flags & AFS_VLSERVER_PROBE_RESPONDED) 254 235 goto stop; 255 236 if (test_bit(AFS_VLSERVER_FL_PROBING, &server->flags)) 256 237 still_probing = true; ··· 268 249 for (i = 0; i < vllist->nr_servers; i++) { 269 250 if (test_bit(i, &untried)) { 270 251 
server = vllist->servers[i].server; 271 - if (server->probe.responded && 272 - server->probe.rtt < rtt) { 252 + rtt_s = READ_ONCE(server->rtt); 253 + if (test_bit(AFS_VLSERVER_FL_RESPONDING, &server->flags) && 254 + rtt_s < rtt) { 273 255 pref = i; 274 - rtt = server->probe.rtt; 256 + rtt = rtt_s; 275 257 } 276 258 277 259 remove_wait_queue(&server->probe_wq, &waits[i]);
+6 -1
fs/afs/vl_rotate.c
···
192 192 	for (i = 0; i < vc->server_list->nr_servers; i++) {
193 193 		struct afs_vlserver *s = vc->server_list->servers[i].server;
194 194 
195 -		if (!test_bit(i, &vc->untried) || !s->probe.responded)
195 +		if (!test_bit(i, &vc->untried) ||
196 +		    !test_bit(AFS_VLSERVER_FL_RESPONDING, &s->flags))
196 197 			continue;
197 198 		if (s->probe.rtt < rtt) {
198 199 			vc->index = i;
···
263 262 	for (i = 0; i < vc->server_list->nr_servers; i++) {
264 263 		struct afs_vlserver *s = vc->server_list->servers[i].server;
265 264 
265 +		if (test_bit(AFS_VLSERVER_FL_RESPONDING, &s->flags))
266 +			e.responded = true;
266 267 		afs_prioritise_error(&e, READ_ONCE(s->probe.error),
267 268 				     s->probe.abort_code);
268 269 	}
270 +
271 +	error = e.error;
269 272 
270 273 failed_set_error:
271 274 	vc->error = error;
+2
include/linux/netfilter/nf_conntrack_sctp.h
···
9 9 	enum sctp_conntrack state;
10 10 
11 11 	__be32 vtag[IP_CT_DIR_MAX];
12 +	u8 last_dir;
13 +	u8 flags;
12 14 };
13 15 
14 16 #endif /* _NF_CONNTRACK_SCTP_H */
+1 -2
include/linux/netfilter/nfnetlink.h
···
43 43 int nfnetlink_send(struct sk_buff *skb, struct net *net, u32 portid,
44 44 		   unsigned int group, int echo, gfp_t flags);
45 45 int nfnetlink_set_err(struct net *net, u32 portid, u32 group, int error);
46 -int nfnetlink_unicast(struct sk_buff *skb, struct net *net, u32 portid,
47 -		      int flags);
46 +int nfnetlink_unicast(struct sk_buff *skb, struct net *net, u32 portid);
48 47 
49 48 static inline u16 nfnl_msg_type(u8 subsys, u8 msg_type)
50 49 {
+11 -2
include/linux/skbuff.h
···
71 71  *	NETIF_F_IPV6_CSUM - Driver (device) is only able to checksum plain
72 72  *			    TCP or UDP packets over IPv6. These are specifically
73 73  *			    unencapsulated packets of the form IPv6|TCP or
74 -  *			    IPv4|UDP where the Next Header field in the IPv6
74 +  *			    IPv6|UDP where the Next Header field in the IPv6
75 75  *			    header is either TCP or UDP. IPv6 extension headers
76 76  *			    are not supported with this feature. This feature
77 77  *			    cannot be set in features for a device with
···
1056 1056 void kfree_skb_list(struct sk_buff *segs);
1057 1057 void skb_dump(const char *level, const struct sk_buff *skb, bool full_pkt);
1058 1058 void skb_tx_error(struct sk_buff *skb);
1059 +
1060 +#ifdef CONFIG_TRACEPOINTS
1059 1061 void consume_skb(struct sk_buff *skb);
1062 +#else
1063 +static inline void consume_skb(struct sk_buff *skb)
1064 +{
1065 +	return kfree_skb(skb);
1066 +}
1067 +#endif
1068 +
1060 1069 void __consume_stateless_skb(struct sk_buff *skb);
1061 1070 void __kfree_skb(struct sk_buff *skb);
1062 1071 extern struct kmem_cache *skbuff_head_cache;
···
2667 2658  *
2668 2659  * Using max(32, L1_CACHE_BYTES) makes sense (especially with RPS)
2669 2660  * to reduce average number of cache lines per packet.
2670 -  * get_rps_cpus() for example only access one 64 bytes aligned block :
2661 +  * get_rps_cpu() for example only access one 64 bytes aligned block :
2671 2662  * NET_IP_ALIGN(2) + ethernet_header(14) + IP_header(20/40) + ports(8)
2672 2663  */
2673 2664 #ifndef NET_SKB_PAD
+1 -1
include/net/af_rxrpc.h
···
59 59 void rxrpc_kernel_end_call(struct socket *, struct rxrpc_call *);
60 60 void rxrpc_kernel_get_peer(struct socket *, struct rxrpc_call *,
61 61 			   struct sockaddr_rxrpc *);
62 -u32 rxrpc_kernel_get_srtt(struct socket *, struct rxrpc_call *);
62 +bool rxrpc_kernel_get_srtt(struct socket *, struct rxrpc_call *, u32 *);
63 63 int rxrpc_kernel_charge_accept(struct socket *, rxrpc_notify_rx_t,
64 64 			       rxrpc_user_attach_call_t, unsigned long, gfp_t,
65 65 			       unsigned int);
+1 -1
include/net/ndisc.h
···
494 494 
495 495 #ifdef CONFIG_SYSCTL
496 496 int ndisc_ifinfo_sysctl_change(struct ctl_table *ctl, int write,
497 -			       void __user *buffer, size_t *lenp, loff_t *ppos);
497 +			       void *buffer, size_t *lenp, loff_t *ppos);
498 498 int ndisc_ifinfo_sysctl_strategy(struct ctl_table *ctl,
499 499 				 void __user *oldval, size_t __user *oldlenp,
500 500 				 void __user *newval, size_t newlen);
+2
include/net/netfilter/nf_tables.h
···
143 143 static inline void nft_data_copy(u32 *dst, const struct nft_data *src,
144 144 				 unsigned int len)
145 145 {
146 +	if (len % NFT_REG32_SIZE)
147 +		dst[len / NFT_REG32_SIZE] = 0;
146 148 	memcpy(dst, src, len);
147 149 }
+22 -5
include/trace/events/rxrpc.h
··· 138 138 }; 139 139 140 140 enum rxrpc_rtt_tx_trace { 141 + rxrpc_rtt_tx_cancel, 141 142 rxrpc_rtt_tx_data, 143 + rxrpc_rtt_tx_no_slot, 142 144 rxrpc_rtt_tx_ping, 143 145 }; 144 146 145 147 enum rxrpc_rtt_rx_trace { 148 + rxrpc_rtt_rx_cancel, 149 + rxrpc_rtt_rx_lost, 150 + rxrpc_rtt_rx_obsolete, 146 151 rxrpc_rtt_rx_ping_response, 147 152 rxrpc_rtt_rx_requested_ack, 148 153 }; ··· 344 339 E_(rxrpc_recvmsg_wait, "WAIT") 345 340 346 341 #define rxrpc_rtt_tx_traces \ 342 + EM(rxrpc_rtt_tx_cancel, "CNCE") \ 347 343 EM(rxrpc_rtt_tx_data, "DATA") \ 344 + EM(rxrpc_rtt_tx_no_slot, "FULL") \ 348 345 E_(rxrpc_rtt_tx_ping, "PING") 349 346 350 347 #define rxrpc_rtt_rx_traces \ 348 + EM(rxrpc_rtt_rx_cancel, "CNCL") \ 349 + EM(rxrpc_rtt_rx_obsolete, "OBSL") \ 350 + EM(rxrpc_rtt_rx_lost, "LOST") \ 351 351 EM(rxrpc_rtt_rx_ping_response, "PONG") \ 352 352 E_(rxrpc_rtt_rx_requested_ack, "RACK") 353 353 ··· 1097 1087 1098 1088 TRACE_EVENT(rxrpc_rtt_tx, 1099 1089 TP_PROTO(struct rxrpc_call *call, enum rxrpc_rtt_tx_trace why, 1100 - rxrpc_serial_t send_serial), 1090 + int slot, rxrpc_serial_t send_serial), 1101 1091 1102 - TP_ARGS(call, why, send_serial), 1092 + TP_ARGS(call, why, slot, send_serial), 1103 1093 1104 1094 TP_STRUCT__entry( 1105 1095 __field(unsigned int, call ) 1106 1096 __field(enum rxrpc_rtt_tx_trace, why ) 1097 + __field(int, slot ) 1107 1098 __field(rxrpc_serial_t, send_serial ) 1108 1099 ), 1109 1100 1110 1101 TP_fast_assign( 1111 1102 __entry->call = call->debug_id; 1112 1103 __entry->why = why; 1104 + __entry->slot = slot; 1113 1105 __entry->send_serial = send_serial; 1114 1106 ), 1115 1107 1116 - TP_printk("c=%08x %s sr=%08x", 1108 + TP_printk("c=%08x [%d] %s sr=%08x", 1117 1109 __entry->call, 1110 + __entry->slot, 1118 1111 __print_symbolic(__entry->why, rxrpc_rtt_tx_traces), 1119 1112 __entry->send_serial) 1120 1113 ); 1121 1114 1122 1115 TRACE_EVENT(rxrpc_rtt_rx, 1123 1116 TP_PROTO(struct rxrpc_call *call, enum rxrpc_rtt_rx_trace why, 1117 + int slot, 1124 
1118 rxrpc_serial_t send_serial, rxrpc_serial_t resp_serial, 1125 1119 u32 rtt, u32 rto), 1126 1120 1127 - TP_ARGS(call, why, send_serial, resp_serial, rtt, rto), 1121 + TP_ARGS(call, why, slot, send_serial, resp_serial, rtt, rto), 1128 1122 1129 1123 TP_STRUCT__entry( 1130 1124 __field(unsigned int, call ) 1131 1125 __field(enum rxrpc_rtt_rx_trace, why ) 1126 + __field(int, slot ) 1132 1127 __field(rxrpc_serial_t, send_serial ) 1133 1128 __field(rxrpc_serial_t, resp_serial ) 1134 1129 __field(u32, rtt ) ··· 1143 1128 TP_fast_assign( 1144 1129 __entry->call = call->debug_id; 1145 1130 __entry->why = why; 1131 + __entry->slot = slot; 1146 1132 __entry->send_serial = send_serial; 1147 1133 __entry->resp_serial = resp_serial; 1148 1134 __entry->rtt = rtt; 1149 1135 __entry->rto = rto; 1150 1136 ), 1151 1137 1152 - TP_printk("c=%08x %s sr=%08x rr=%08x rtt=%u rto=%u", 1138 + TP_printk("c=%08x [%d] %s sr=%08x rr=%08x rtt=%u rto=%u", 1153 1139 __entry->call, 1140 + __entry->slot, 1154 1141 __print_symbolic(__entry->why, rxrpc_rtt_rx_traces), 1155 1142 __entry->send_serial, 1156 1143 __entry->resp_serial,
+1 -1
include/uapi/linux/in.h
···
135 135  * this socket to prevent accepting spoofed ones.
136 136  */
137 137 #define IP_PMTUDISC_INTERFACE		4
138 -/* weaker version of IP_PMTUDISC_INTERFACE, which allos packets to get
138 +/* weaker version of IP_PMTUDISC_INTERFACE, which allows packets to get
139 139  * fragmented if they exeed the interface mtu
140 140  */
141 141 #define IP_PMTUDISC_OMIT		5
+1 -1
include/uapi/linux/netfilter/nf_tables.h
···
133 133  * @NFTA_LIST_ELEM: list element (NLA_NESTED)
134 134  */
135 135 enum nft_list_attributes {
136 -	NFTA_LIST_UNPEC,
136 +	NFTA_LIST_UNSPEC,
137 137 	NFTA_LIST_ELEM,
138 138 	__NFTA_LIST_MAX
139 139 };
+1 -1
kernel/bpf/syscall.c
···
2634 2634 	u32 ulen = info->raw_tracepoint.tp_name_len;
2635 2635 	size_t tp_len = strlen(tp_name);
2636 2636 
2637 -	if (ulen && !ubuf)
2637 +	if (!ulen ^ !ubuf)
2638 2638 		return -EINVAL;
2639 2639 
2640 2640 	info->raw_tracepoint.tp_name_len = tp_len + 1;
+1 -2
kernel/sysctl.c
···
204 204 
205 205 #if defined(CONFIG_BPF_SYSCALL) && defined(CONFIG_SYSCTL)
206 206 static int bpf_stats_handler(struct ctl_table *table, int write,
207 -			     void __user *buffer, size_t *lenp,
208 -			     loff_t *ppos)
207 +			     void *buffer, size_t *lenp, loff_t *ppos)
209 208 {
210 209 	struct static_key *key = (struct static_key *)table->data;
211 210 	static int saved_val;
+6 -5
net/batman-adv/bat_v_ogm.c
···
881 881 		   ntohl(ogm_packet->seqno), ogm_throughput, ogm_packet->ttl,
882 882 		   ogm_packet->version, ntohs(ogm_packet->tvlv_len));
883 883 
884 +	if (batadv_is_my_mac(bat_priv, ogm_packet->orig)) {
885 +		batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
886 +			   "Drop packet: originator packet from ourself\n");
887 +		return;
888 +	}
889 +
884 890 	/* If the throughput metric is 0, immediately drop the packet. No need
885 891 	 * to create orig_node / neigh_node for an unusable route.
886 892 	 */
···
1012 1006 		goto free_skb;
1013 1007 
1014 1008 	if (batadv_is_my_mac(bat_priv, ethhdr->h_source))
1015 -		goto free_skb;
1016 -
1017 -	ogm_packet = (struct batadv_ogm2_packet *)skb->data;
1018 -
1019 -	if (batadv_is_my_mac(bat_priv, ogm_packet->orig))
1020 1009 		goto free_skb;
1021 1010 
1022 1011 	batadv_inc_counter(bat_priv, BATADV_CNT_MGMT_RX);
+4 -1
net/batman-adv/bridge_loop_avoidance.c
···
437 437 	batadv_add_counter(bat_priv, BATADV_CNT_RX_BYTES,
438 438 			   skb->len + ETH_HLEN);
439 439 
440 -	netif_rx(skb);
440 +	if (in_interrupt())
441 +		netif_rx(skb);
442 +	else
443 +		netif_rx_ni(skb);
441 444 out:
442 445 	if (primary_if)
443 446 		batadv_hardif_put(primary_if);
+4 -2
net/batman-adv/gateway_client.c
···
703 703 
704 704 	chaddr_offset = *header_len + BATADV_DHCP_CHADDR_OFFSET;
705 705 	/* store the client address if the message is going to a client */
706 -	if (ret == BATADV_DHCP_TO_CLIENT &&
707 -	    pskb_may_pull(skb, chaddr_offset + ETH_ALEN)) {
706 +	if (ret == BATADV_DHCP_TO_CLIENT) {
707 +		if (!pskb_may_pull(skb, chaddr_offset + ETH_ALEN))
708 +			return BATADV_DHCP_NO;
709 +
708 710 		/* check if the DHCP packet carries an Ethernet DHCP */
709 711 		p = skb->data + *header_len + BATADV_DHCP_HTYPE_OFFSET;
710 712 		if (*p != BATADV_DHCP_HTYPE_ETHERNET)
+2 -2
net/caif/cfrfml.c
···
116 116 	if (segmented) {
117 117 		if (rfml->incomplete_frm == NULL) {
118 118 			/* Initial Segment */
119 -			if (cfpkt_peek_head(pkt, rfml->seghead, 6) < 0)
119 +			if (cfpkt_peek_head(pkt, rfml->seghead, 6) != 0)
120 120 				goto out;
121 121 
122 122 			rfml->pdu_size = get_unaligned_le16(rfml->seghead+4);
···
233 233 	if (cfpkt_getlen(pkt) > rfml->fragment_size + RFM_HEAD_SIZE)
234 234 		err = cfpkt_peek_head(pkt, head, 6);
235 235 
236 -	if (err < 0)
236 +	if (err != 0)
237 237 		goto out;
238 238 
239 239 	while (cfpkt_getlen(frontpkt) > rfml->fragment_size + RFM_HEAD_SIZE) {
+2 -1
net/core/dev.c
···
6612 6612 		netdev_err_once(dev, "%s() called with weight %d\n", __func__,
6613 6613 				weight);
6614 6614 	napi->weight = weight;
6615 -	list_add(&napi->dev_list, &dev->napi_list);
6616 6615 	napi->dev = dev;
6617 6616 #ifdef CONFIG_NETPOLL
6618 6617 	napi->poll_owner = -1;
6619 6618 #endif
6620 6619 	set_bit(NAPI_STATE_SCHED, &napi->state);
6620 +	set_bit(NAPI_STATE_NPSVC, &napi->state);
6621 +	list_add_rcu(&napi->dev_list, &dev->napi_list);
6621 6622 	napi_hash_add(napi);
6622 6623 }
6623 6624 EXPORT_SYMBOL(netif_napi_add);
+1 -1
net/core/netpoll.c
···
162 162 	struct napi_struct *napi;
163 163 	int cpu = smp_processor_id();
164 164 
165 -	list_for_each_entry(napi, &dev->napi_list, dev_list) {
165 +	list_for_each_entry_rcu(napi, &dev->napi_list, dev_list) {
166 166 		if (cmpxchg(&napi->poll_owner, -1, cpu) == -1) {
167 167 			poll_one_napi(napi);
168 168 			smp_store_release(&napi->poll_owner, -1);
+1 -1
net/core/pktgen.c
···
3699 3699 					   cpu_to_node(cpu),
3700 3700 					   "kpktgend_%d", cpu);
3701 3701 	if (IS_ERR(p)) {
3702 -		pr_err("kernel_thread() failed for cpu %d\n", t->cpu);
3702 +		pr_err("kthread_create_on_node() failed for cpu %d\n", t->cpu);
3703 3703 		list_del(&t->th_list);
3704 3704 		kfree(t);
3705 3705 		return PTR_ERR(p);
+2
net/core/skbuff.c
···
820 820 }
821 821 EXPORT_SYMBOL(skb_tx_error);
822 822 
823 +#ifdef CONFIG_TRACEPOINTS
823 824 /**
824 825  * consume_skb - free an skbuff
825 826  * @skb: buffer to free
···
838 837 	__kfree_skb(skb);
839 838 }
840 839 EXPORT_SYMBOL(consume_skb);
840 +#endif
841 841 
842 842 /**
843 843  * consume_stateless_skb - free an skbuff, assuming it is stateless
+1 -1
net/core/sock.c
···
3254 3254 		sk->sk_prot->destroy(sk);
3255 3255 
3256 3256 	/*
3257 -	 * Observation: when sock_common_release is called, processes have
3257 +	 * Observation: when sk_common_release is called, processes have
3258 3258 	 * no access to socket. But net still has.
3259 3259 	 * Step one, detach it from networking:
3260 3260 	 *
+2 -1
net/ipv4/fib_trie.c
···
2121 2121 		struct hlist_head *head = &net->ipv4.fib_table_hash[h];
2122 2122 		struct fib_table *tb;
2123 2123 
2124 -		hlist_for_each_entry_rcu(tb, head, tb_hlist)
2124 +		hlist_for_each_entry_rcu(tb, head, tb_hlist,
2125 +					 lockdep_rtnl_is_held())
2125 2126 			__fib_info_notify_update(net, tb, info);
2126 2127 	}
2127 2128 }
+1 -1
net/ipv4/netfilter/nf_nat_pptp.c
···
3 3  * nf_nat_pptp.c
4 4  *
5 5  * NAT support for PPTP (Point to Point Tunneling Protocol).
6 -  * PPTP is a a protocol for creating virtual private networks.
6 +  * PPTP is a protocol for creating virtual private networks.
7 7  * It is a specification defined by Microsoft and some vendors
8 8  * working with Microsoft. PPTP is built on top of a modified
9 9  * version of the Internet Generic Routing Encapsulation Protocol.
+1 -1
net/ipv4/raw.c
···
610 610 	} else if (!ipc.oif) {
611 611 		ipc.oif = inet->uc_index;
612 612 	} else if (ipv4_is_lbcast(daddr) && inet->uc_index) {
613 -		/* oif is set, packet is to local broadcast and
613 +		/* oif is set, packet is to local broadcast
614 614 		 * and uc_index is set. oif is most likely set
615 615 		 * by sk_bound_dev_if. If uc_index != oif check if the
616 616 		 * oif is an L3 master and uc_index is an L3 slave.
+2 -1
net/ipv6/sysctl_net_ipv6.c
···
21 21 #include <net/calipso.h>
22 22 #endif
23 23 
24 +static int two = 2;
24 25 static int flowlabel_reflect_max = 0x7;
25 26 static int auto_flowlabels_min;
26 27 static int auto_flowlabels_max = IP6_AUTO_FLOW_LABEL_MAX;
···
151 150 		.mode		= 0644,
152 151 		.proc_handler	= proc_rt6_multipath_hash_policy,
153 152 		.extra1		= SYSCTL_ZERO,
154 -		.extra2		= SYSCTL_ONE,
153 +		.extra2		= &two,
155 154 	},
156 155 	{
157 156 		.procname	= "seg6_flowlabel",
+1 -1
net/l3mdev/l3mdev.c
···
154 154 EXPORT_SYMBOL_GPL(l3mdev_master_upper_ifindex_by_index_rcu);
155 155 
156 156 /**
157 -  * l3mdev_fib_table - get FIB table id associated with an L3
157 +  * l3mdev_fib_table_rcu - get FIB table id associated with an L3
158 158  * master interface
159 159  * @dev: targeted interface
160 160  */
+145 -55
net/mac80211/airtime.c
···
 	return duration;
 }
 
-u32 ieee80211_calc_rx_airtime(struct ieee80211_hw *hw,
-			      struct ieee80211_rx_status *status,
-			      int len)
+static u32 ieee80211_get_rate_duration(struct ieee80211_hw *hw,
+				       struct ieee80211_rx_status *status,
+				       u32 *overhead)
 {
-	struct ieee80211_supported_band *sband;
-	const struct ieee80211_rate *rate;
 	bool sgi = status->enc_flags & RX_ENC_FLAG_SHORT_GI;
-	bool sp = status->enc_flags & RX_ENC_FLAG_SHORTPRE;
 	int bw, streams;
 	int group, idx;
 	u32 duration;
-	bool cck;
 
 	switch (status->bw) {
 	case RATE_INFO_BW_20:
···
 	}
 
 	switch (status->encoding) {
-	case RX_ENC_LEGACY:
-		if (WARN_ON_ONCE(status->band > NL80211_BAND_5GHZ))
-			return 0;
-
-		sband = hw->wiphy->bands[status->band];
-		if (!sband || status->rate_idx >= sband->n_bitrates)
-			return 0;
-
-		rate = &sband->bitrates[status->rate_idx];
-		cck = rate->flags & IEEE80211_RATE_MANDATORY_B;
-
-		return ieee80211_calc_legacy_rate_duration(rate->bitrate, sp,
-							   cck, len);
-
 	case RX_ENC_VHT:
 		streams = status->nss;
 		idx = status->rate_idx;
···
 	duration = airtime_mcs_groups[group].duration[idx];
 	duration <<= airtime_mcs_groups[group].shift;
+	*overhead = 36 + (streams << 2);
+
+	return duration;
+}
+
+
+u32 ieee80211_calc_rx_airtime(struct ieee80211_hw *hw,
+			      struct ieee80211_rx_status *status,
+			      int len)
+{
+	struct ieee80211_supported_band *sband;
+	u32 duration, overhead = 0;
+
+	if (status->encoding == RX_ENC_LEGACY) {
+		const struct ieee80211_rate *rate;
+		bool sp = status->enc_flags & RX_ENC_FLAG_SHORTPRE;
+		bool cck;
+
+		if (WARN_ON_ONCE(status->band > NL80211_BAND_5GHZ))
+			return 0;
+
+		sband = hw->wiphy->bands[status->band];
+		if (!sband || status->rate_idx >= sband->n_bitrates)
+			return 0;
+
+		rate = &sband->bitrates[status->rate_idx];
+		cck = rate->flags & IEEE80211_RATE_MANDATORY_B;
+
+		return ieee80211_calc_legacy_rate_duration(rate->bitrate, sp,
+							   cck, len);
+	}
+
+	duration = ieee80211_get_rate_duration(hw, status, &overhead);
+	if (!duration)
+		return 0;
+
 	duration *= len;
 	duration /= AVG_PKT_SIZE;
 	duration /= 1024;
 
-	duration += 36 + (streams << 2);
-
-	return duration;
+	return duration + overhead;
 }
 EXPORT_SYMBOL_GPL(ieee80211_calc_rx_airtime);
 
-static u32 ieee80211_calc_tx_airtime_rate(struct ieee80211_hw *hw,
-					  struct ieee80211_tx_rate *rate,
-					  u8 band, int len)
+static bool ieee80211_fill_rate_info(struct ieee80211_hw *hw,
+				     struct ieee80211_rx_status *stat, u8 band,
+				     struct rate_info *ri)
 {
-	struct ieee80211_rx_status stat = {
-		.band = band,
-	};
+	struct ieee80211_supported_band *sband = hw->wiphy->bands[band];
+	int i;
 
-	if (rate->idx < 0 || !rate->count)
+	if (!ri || !sband)
+		return false;
+
+	stat->bw = ri->bw;
+	stat->nss = ri->nss;
+	stat->rate_idx = ri->mcs;
+
+	if (ri->flags & RATE_INFO_FLAGS_HE_MCS)
+		stat->encoding = RX_ENC_HE;
+	else if (ri->flags & RATE_INFO_FLAGS_VHT_MCS)
+		stat->encoding = RX_ENC_VHT;
+	else if (ri->flags & RATE_INFO_FLAGS_MCS)
+		stat->encoding = RX_ENC_HT;
+	else
+		stat->encoding = RX_ENC_LEGACY;
+
+	if (ri->flags & RATE_INFO_FLAGS_SHORT_GI)
+		stat->enc_flags |= RX_ENC_FLAG_SHORT_GI;
+
+	stat->he_gi = ri->he_gi;
+
+	if (stat->encoding != RX_ENC_LEGACY)
+		return true;
+
+	stat->rate_idx = 0;
+	for (i = 0; i < sband->n_bitrates; i++) {
+		if (ri->legacy != sband->bitrates[i].bitrate)
+			continue;
+
+		stat->rate_idx = i;
+		return true;
+	}
+
+	return false;
+}
+
+static int ieee80211_fill_rx_status(struct ieee80211_rx_status *stat,
+				    struct ieee80211_hw *hw,
+				    struct ieee80211_tx_rate *rate,
+				    struct rate_info *ri, u8 band, int len)
+{
+	memset(stat, 0, sizeof(*stat));
+	stat->band = band;
+
+	if (ieee80211_fill_rate_info(hw, stat, band, ri))
 		return 0;
 
+	if (rate->idx < 0 || !rate->count)
+		return -1;
+
 	if (rate->flags & IEEE80211_TX_RC_80_MHZ_WIDTH)
-		stat.bw = RATE_INFO_BW_80;
+		stat->bw = RATE_INFO_BW_80;
 	else if (rate->flags & IEEE80211_TX_RC_40_MHZ_WIDTH)
-		stat.bw = RATE_INFO_BW_40;
+		stat->bw = RATE_INFO_BW_40;
 	else
-		stat.bw = RATE_INFO_BW_20;
+		stat->bw = RATE_INFO_BW_20;
 
-	stat.enc_flags = 0;
+	stat->enc_flags = 0;
 	if (rate->flags & IEEE80211_TX_RC_USE_SHORT_PREAMBLE)
-		stat.enc_flags |= RX_ENC_FLAG_SHORTPRE;
+		stat->enc_flags |= RX_ENC_FLAG_SHORTPRE;
 	if (rate->flags & IEEE80211_TX_RC_SHORT_GI)
-		stat.enc_flags |= RX_ENC_FLAG_SHORT_GI;
+		stat->enc_flags |= RX_ENC_FLAG_SHORT_GI;
 
-	stat.rate_idx = rate->idx;
+	stat->rate_idx = rate->idx;
 	if (rate->flags & IEEE80211_TX_RC_VHT_MCS) {
-		stat.encoding = RX_ENC_VHT;
-		stat.rate_idx = ieee80211_rate_get_vht_mcs(rate);
-		stat.nss = ieee80211_rate_get_vht_nss(rate);
+		stat->encoding = RX_ENC_VHT;
+		stat->rate_idx = ieee80211_rate_get_vht_mcs(rate);
+		stat->nss = ieee80211_rate_get_vht_nss(rate);
 	} else if (rate->flags & IEEE80211_TX_RC_MCS) {
-		stat.encoding = RX_ENC_HT;
+		stat->encoding = RX_ENC_HT;
 	} else {
-		stat.encoding = RX_ENC_LEGACY;
+		stat->encoding = RX_ENC_LEGACY;
 	}
+
+	return 0;
+}
+
+static u32 ieee80211_calc_tx_airtime_rate(struct ieee80211_hw *hw,
+					  struct ieee80211_tx_rate *rate,
+					  struct rate_info *ri,
+					  u8 band, int len)
+{
+	struct ieee80211_rx_status stat;
+
+	if (ieee80211_fill_rx_status(&stat, hw, rate, ri, band, len))
+		return 0;
 
 	return ieee80211_calc_rx_airtime(hw, &stat, len);
 }
···
 		struct ieee80211_tx_rate *rate = &info->status.rates[i];
 		u32 cur_duration;
 
-		cur_duration = ieee80211_calc_tx_airtime_rate(hw, rate,
+		cur_duration = ieee80211_calc_tx_airtime_rate(hw, rate, NULL,
 							      info->band, len);
 		if (!cur_duration)
 			break;
···
 	if (pubsta) {
 		struct sta_info *sta = container_of(pubsta, struct sta_info,
 						    sta);
+		struct ieee80211_rx_status stat;
 		struct ieee80211_tx_rate *rate = &sta->tx_stats.last_rate;
-		u32 airtime;
+		struct rate_info *ri = &sta->tx_stats.last_rate_info;
+		u32 duration, overhead;
+		u8 agg_shift;
 
-		if (!(rate->flags & (IEEE80211_TX_RC_VHT_MCS |
-				     IEEE80211_TX_RC_MCS)))
-			ampdu = false;
+		if (ieee80211_fill_rx_status(&stat, hw, rate, ri, band, len))
+			return 0;
 
+		if (stat.encoding == RX_ENC_LEGACY || !ampdu)
+			return ieee80211_calc_rx_airtime(hw, &stat, len);
+
+		duration = ieee80211_get_rate_duration(hw, &stat, &overhead);
 		/*
 		 * Assume that HT/VHT transmission on any AC except VO will
 		 * use aggregation. Since we don't have reliable reporting
-		 * of aggregation length, assume an average of 16.
+		 * of aggregation length, assume an average size based on the
+		 * tx rate.
 		 * This will not be very accurate, but much better than simply
-		 * assuming un-aggregated tx.
+		 * assuming un-aggregated tx in all cases.
 		 */
-		airtime = ieee80211_calc_tx_airtime_rate(hw, rate, band,
-							 ampdu ? len * 16 : len);
-		if (ampdu)
-			airtime /= 16;
+		if (duration > 400) /* <= VHT20 MCS2 1S */
+			agg_shift = 1;
+		else if (duration > 250) /* <= VHT20 MCS3 1S or MCS1 2S */
+			agg_shift = 2;
+		else if (duration > 150) /* <= VHT20 MCS5 1S or MCS3 2S */
+			agg_shift = 3;
+		else
+			agg_shift = 4;
 
-		return airtime;
+		duration *= len;
+		duration /= AVG_PKT_SIZE;
+		duration /= 1024;
+
+		return duration + (overhead >> agg_shift);
 	}
 
 	if (!conf)
+3 -2
net/mac80211/sta_info.h
···
  * @status_stats.retry_failed: # of frames that failed after retry
  * @status_stats.retry_count: # of retries attempted
  * @status_stats.lost_packets: # of lost packets
- * @status_stats.last_tdls_pkt_time: timestamp of last TDLS packet
+ * @status_stats.last_pkt_time: timestamp of last ACKed packet
  * @status_stats.msdu_retries: # of MSDU retries
  * @status_stats.msdu_failed: # of failed MSDUs
  * @status_stats.last_ack: last ack timestamp (jiffies)
···
 		unsigned long filtered;
 		unsigned long retry_failed, retry_count;
 		unsigned int lost_packets;
-		unsigned long last_tdls_pkt_time;
+		unsigned long last_pkt_time;
 		u64 msdu_retries[IEEE80211_NUM_TIDS + 1];
 		u64 msdu_failed[IEEE80211_NUM_TIDS + 1];
 		unsigned long last_ack;
···
 		u64 packets[IEEE80211_NUM_ACS];
 		u64 bytes[IEEE80211_NUM_ACS];
 		struct ieee80211_tx_rate last_rate;
+		struct rate_info last_rate_info;
 		u64 msdu[IEEE80211_NUM_TIDS + 1];
 	} tx_stats;
 	u16 tid_seq[IEEE80211_QOS_CTL_TID_MASK + 1];
+23 -20
net/mac80211/status.c
···
  *  - current throughput (higher value for higher tpt)?
  */
 #define STA_LOST_PKT_THRESHOLD	50
+#define STA_LOST_PKT_TIME	HZ		/* 1 sec since last ACK */
 #define STA_LOST_TDLS_PKT_THRESHOLD	10
 #define STA_LOST_TDLS_PKT_TIME	(10*HZ)	/* 10secs since last ACK */
 
 static void ieee80211_lost_packet(struct sta_info *sta,
 				  struct ieee80211_tx_info *info)
 {
+	unsigned long pkt_time = STA_LOST_PKT_TIME;
+	unsigned int pkt_thr = STA_LOST_PKT_THRESHOLD;
+
 	/* If driver relies on its own algorithm for station kickout, skip
 	 * mac80211 packet loss mechanism.
 	 */
···
 		return;
 
 	sta->status_stats.lost_packets++;
-	if (!sta->sta.tdls &&
-	    sta->status_stats.lost_packets < STA_LOST_PKT_THRESHOLD)
-		return;
+	if (sta->sta.tdls) {
+		pkt_time = STA_LOST_TDLS_PKT_TIME;
+		pkt_thr = STA_LOST_PKT_THRESHOLD;
+	}
 
 	/*
 	 * If we're in TDLS mode, make sure that all STA_LOST_TDLS_PKT_THRESHOLD
 	 * of the last packets were lost, and that no ACK was received in the
 	 * last STA_LOST_TDLS_PKT_TIME ms, before triggering the CQM packet-loss
 	 * mechanism.
+	 * For non-TDLS, use STA_LOST_PKT_THRESHOLD and STA_LOST_PKT_TIME
 	 */
-	if (sta->sta.tdls &&
-	    (sta->status_stats.lost_packets < STA_LOST_TDLS_PKT_THRESHOLD ||
-	     time_before(jiffies,
-			 sta->status_stats.last_tdls_pkt_time +
-			 STA_LOST_TDLS_PKT_TIME)))
+	if (sta->status_stats.lost_packets < pkt_thr ||
+	    !time_after(jiffies, sta->status_stats.last_pkt_time + pkt_time))
 		return;
 
 	cfg80211_cqm_pktloss_notify(sta->sdata->dev, sta->sta.addr,
···
 				sta->status_stats.lost_packets = 0;
 
 			/* Track when last TDLS packet was ACKed */
-			if (test_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH))
-				sta->status_stats.last_tdls_pkt_time =
-					jiffies;
+			sta->status_stats.last_pkt_time = jiffies;
 		} else if (noack_success) {
 			/* nothing to do here, do not account as lost */
 		} else {
···
 	struct ieee80211_tx_info *info = status->info;
 	struct ieee80211_sta *pubsta = status->sta;
 	struct ieee80211_supported_band *sband;
+	struct sta_info *sta;
 	int retry_count;
 	bool acked, noack_success;
+
+	if (pubsta) {
+		sta = container_of(pubsta, struct sta_info, sta);
+
+		if (status->rate)
+			sta->tx_stats.last_rate_info = *status->rate;
+	}
 
 	if (status->skb)
 		return __ieee80211_tx_status(hw, status);
···
 	noack_success = !!(info->flags & IEEE80211_TX_STAT_NOACK_TRANSMITTED);
 
 	if (pubsta) {
-		struct sta_info *sta;
-
-		sta = container_of(pubsta, struct sta_info, sta);
-
 		if (!acked && !noack_success)
 			sta->status_stats.retry_failed++;
 		sta->status_stats.retry_count += retry_count;
···
 			if (sta->status_stats.lost_packets)
 				sta->status_stats.lost_packets = 0;
 
-			/* Track when last TDLS packet was ACKed */
-			if (test_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH))
-				sta->status_stats.last_tdls_pkt_time = jiffies;
+			/* Track when last packet was ACKed */
+			sta->status_stats.last_pkt_time = jiffies;
 		} else if (test_sta_flag(sta, WLAN_STA_PS_STA)) {
 			return;
 		} else if (noack_success) {
···
 		if (sta->status_stats.lost_packets)
 			sta->status_stats.lost_packets = 0;
 
-		if (test_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH))
-			sta->status_stats.last_tdls_pkt_time = jiffies;
+		sta->status_stats.last_pkt_time = jiffies;
 	} else {
 		ieee80211_lost_packet(sta, info);
 	}
+1 -2
net/mptcp/protocol.c
···
 		goto out;
 	}
 
-wait_for_sndbuf:
 	__mptcp_flush_join_list(msk);
 	ssk = mptcp_subflow_get_send(msk);
 	while (!sk_stream_memory_free(sk) ||
···
 			 */
 			mptcp_set_timeout(sk, ssk);
 			release_sock(ssk);
-			goto wait_for_sndbuf;
+			goto restart;
 		}
 	}
+1 -1
net/netfilter/nf_conntrack_pptp.c
···
 // SPDX-License-Identifier: GPL-2.0-only
 /*
  * Connection tracking support for PPTP (Point to Point Tunneling Protocol).
- * PPTP is a a protocol for creating virtual private networks.
+ * PPTP is a protocol for creating virtual private networks.
  * It is a specification defined by Microsoft and some vendors
  * working with Microsoft.  PPTP is built on top of a modified
  * version of the Internet Generic Routing Encapsulation Protocol.
+35 -4
net/netfilter/nf_conntrack_proto_sctp.c
···
 	[SCTP_CONNTRACK_HEARTBEAT_ACKED]	= 210 SECS,
 };
 
+#define SCTP_FLAG_HEARTBEAT_VTAG_FAILED	1
+
 #define sNO SCTP_CONNTRACK_NONE
 #define sCL SCTP_CONNTRACK_CLOSED
 #define sCW SCTP_CONNTRACK_COOKIE_WAIT
···
 	u_int32_t offset, count;
 	unsigned int *timeouts;
 	unsigned long map[256 / sizeof(unsigned long)] = { 0 };
+	bool ignore = false;
 
 	if (sctp_error(skb, dataoff, state))
 		return -NF_ACCEPT;
···
 			/* Sec 8.5.1 (D) */
 			if (sh->vtag != ct->proto.sctp.vtag[dir])
 				goto out_unlock;
-		} else if (sch->type == SCTP_CID_HEARTBEAT ||
-			   sch->type == SCTP_CID_HEARTBEAT_ACK) {
+		} else if (sch->type == SCTP_CID_HEARTBEAT) {
+			if (ct->proto.sctp.vtag[dir] == 0) {
+				pr_debug("Setting %d vtag %x for dir %d\n", sch->type, sh->vtag, dir);
+				ct->proto.sctp.vtag[dir] = sh->vtag;
+			} else if (sh->vtag != ct->proto.sctp.vtag[dir]) {
+				if (test_bit(SCTP_CID_DATA, map) || ignore)
+					goto out_unlock;
+
+				ct->proto.sctp.flags |= SCTP_FLAG_HEARTBEAT_VTAG_FAILED;
+				ct->proto.sctp.last_dir = dir;
+				ignore = true;
+				continue;
+			} else if (ct->proto.sctp.flags & SCTP_FLAG_HEARTBEAT_VTAG_FAILED) {
+				ct->proto.sctp.flags &= ~SCTP_FLAG_HEARTBEAT_VTAG_FAILED;
+			}
+		} else if (sch->type == SCTP_CID_HEARTBEAT_ACK) {
 			if (ct->proto.sctp.vtag[dir] == 0) {
 				pr_debug("Setting vtag %x for dir %d\n",
 					 sh->vtag, dir);
 				ct->proto.sctp.vtag[dir] = sh->vtag;
 			} else if (sh->vtag != ct->proto.sctp.vtag[dir]) {
-				pr_debug("Verification tag check failed\n");
-				goto out_unlock;
+				if (test_bit(SCTP_CID_DATA, map) || ignore)
+					goto out_unlock;
+
+				if ((ct->proto.sctp.flags & SCTP_FLAG_HEARTBEAT_VTAG_FAILED) == 0 ||
+				    ct->proto.sctp.last_dir == dir)
+					goto out_unlock;
+
+				ct->proto.sctp.flags &= ~SCTP_FLAG_HEARTBEAT_VTAG_FAILED;
+				ct->proto.sctp.vtag[dir] = sh->vtag;
+				ct->proto.sctp.vtag[!dir] = 0;
+			} else if (ct->proto.sctp.flags & SCTP_FLAG_HEARTBEAT_VTAG_FAILED) {
+				ct->proto.sctp.flags &= ~SCTP_FLAG_HEARTBEAT_VTAG_FAILED;
 			}
 		}
···
 		nf_conntrack_event_cache(IPCT_PROTOINFO, ct);
 	}
 	spin_unlock_bh(&ct->lock);
+
+	/* allow but do not refresh timeout */
+	if (ignore)
+		return NF_ACCEPT;
 
 	timeouts = nf_ct_timeout_lookup(ct);
 	if (!timeouts)
+1 -1
net/netfilter/nf_conntrack_proto_tcp.c
···
 	    && (old_state == TCP_CONNTRACK_SYN_RECV
 		|| old_state == TCP_CONNTRACK_ESTABLISHED)
 	    && new_state == TCP_CONNTRACK_ESTABLISHED) {
-		/* Set ASSURED if we see see valid ack in ESTABLISHED
+		/* Set ASSURED if we see valid ack in ESTABLISHED
 		   after SYN_RECV or a valid answer for a picked up
 		   connection. */
 		set_bit(IPS_ASSURED_BIT, &ct->status);
+10 -16
net/netfilter/nf_conntrack_proto_udp.c
···
 	return false;
 }
 
-static void nf_conntrack_udp_refresh_unreplied(struct nf_conn *ct,
-					       struct sk_buff *skb,
-					       enum ip_conntrack_info ctinfo,
-					       u32 extra_jiffies)
-{
-	if (unlikely(ctinfo == IP_CT_ESTABLISHED_REPLY &&
-		     ct->status & IPS_NAT_CLASH))
-		nf_ct_kill(ct);
-	else
-		nf_ct_refresh_acct(ct, ctinfo, skb, extra_jiffies);
-}
-
 /* Returns verdict for packet, and may modify conntracktype */
 int nf_conntrack_udp_packet(struct nf_conn *ct,
 			    struct sk_buff *skb,
···
 		nf_ct_refresh_acct(ct, ctinfo, skb, extra);
 
+		/* never set ASSURED for IPS_NAT_CLASH, they time out soon */
+		if (unlikely((ct->status & IPS_NAT_CLASH)))
+			return NF_ACCEPT;
+
 		/* Also, more likely to be important, and not a probe */
 		if (!test_and_set_bit(IPS_ASSURED_BIT, &ct->status))
 			nf_conntrack_event_cache(IPCT_ASSURED, ct);
 	} else {
-		nf_conntrack_udp_refresh_unreplied(ct, skb, ctinfo,
-						   timeouts[UDP_CT_UNREPLIED]);
+		nf_ct_refresh_acct(ct, ctinfo, skb, timeouts[UDP_CT_UNREPLIED]);
 	}
 	return NF_ACCEPT;
 }
···
 	if (test_bit(IPS_SEEN_REPLY_BIT, &ct->status)) {
 		nf_ct_refresh_acct(ct, ctinfo, skb,
 				   timeouts[UDP_CT_REPLIED]);
+
+		if (unlikely((ct->status & IPS_NAT_CLASH)))
+			return NF_ACCEPT;
+
 		/* Also, more likely to be important, and not a probe */
 		if (!test_and_set_bit(IPS_ASSURED_BIT, &ct->status))
 			nf_conntrack_event_cache(IPCT_ASSURED, ct);
 	} else {
-		nf_conntrack_udp_refresh_unreplied(ct, skb, ctinfo,
-						   timeouts[UDP_CT_UNREPLIED]);
+		nf_ct_refresh_acct(ct, ctinfo, skb, timeouts[UDP_CT_UNREPLIED]);
 	}
 	return NF_ACCEPT;
 }
+31 -33
net/netfilter/nf_tables_api.c
···
 					nlh->nlmsg_seq, NFT_MSG_NEWTABLE, 0,
 					family, table);
 	if (err < 0)
-		goto err;
+		goto err_fill_table_info;
 
-	return nlmsg_unicast(nlsk, skb2, NETLINK_CB(skb).portid);
+	return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid);
 
-err:
+err_fill_table_info:
 	kfree_skb(skb2);
 	return err;
 }
···
 					nlh->nlmsg_seq, NFT_MSG_NEWCHAIN, 0,
 					family, table, chain);
 	if (err < 0)
-		goto err;
+		goto err_fill_chain_info;
 
-	return nlmsg_unicast(nlsk, skb2, NETLINK_CB(skb).portid);
+	return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid);
 
-err:
+err_fill_chain_info:
 	kfree_skb(skb2);
 	return err;
 }
···
 				       nlh->nlmsg_seq, NFT_MSG_NEWRULE, 0,
 				       family, table, chain, rule, NULL);
 	if (err < 0)
-		goto err;
+		goto err_fill_rule_info;
 
-	return nlmsg_unicast(nlsk, skb2, NETLINK_CB(skb).portid);
+	return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid);
 
-err:
+err_fill_rule_info:
 	kfree_skb(skb2);
 	return err;
 }
···
 		goto nla_put_failure;
 	}
 
-	if (nla_put(skb, NFTA_SET_USERDATA, set->udlen, set->udata))
+	if (set->udata &&
+	    nla_put(skb, NFTA_SET_USERDATA, set->udlen, set->udata))
 		goto nla_put_failure;
 
 	nest = nla_nest_start_noflag(skb, NFTA_SET_DESC);
···
 	err = nf_tables_fill_set(skb2, &ctx, set, NFT_MSG_NEWSET, 0);
 	if (err < 0)
-		goto err;
+		goto err_fill_set_info;
 
-	return nlmsg_unicast(nlsk, skb2, NETLINK_CB(skb).portid);
+	return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid);
 
-err:
+err_fill_set_info:
 	kfree_skb(skb2);
 	return err;
 }
···
 	err = -ENOMEM;
 	skb = nlmsg_new(NLMSG_GOODSIZE, GFP_ATOMIC);
 	if (skb == NULL)
-		goto err1;
+		return err;
 
 	err = nf_tables_fill_setelem_info(skb, ctx, ctx->seq, ctx->portid,
 					  NFT_MSG_NEWSETELEM, 0, set, &elem);
 	if (err < 0)
-		goto err2;
+		goto err_fill_setelem;
 
-	err = nfnetlink_unicast(skb, ctx->net, ctx->portid, MSG_DONTWAIT);
-	/* This avoids a loop in nfnetlink. */
-	if (err < 0)
-		goto err1;
+	return nfnetlink_unicast(skb, ctx->net, ctx->portid);
 
-	return 0;
-err2:
+err_fill_setelem:
 	kfree_skb(skb);
-err1:
-	/* this avoids a loop in nfnetlink. */
-	return err == -EAGAIN ? -ENOBUFS : err;
+	return err;
 }
 
 /* called with rcu_read_lock held */
···
 				      nlh->nlmsg_seq, NFT_MSG_NEWOBJ, 0,
 				      family, table, obj, reset);
 	if (err < 0)
-		goto err;
+		goto err_fill_obj_info;
 
-	return nlmsg_unicast(nlsk, skb2, NETLINK_CB(skb).portid);
-err:
+	return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid);
+
+err_fill_obj_info:
 	kfree_skb(skb2);
 	return err;
 }
···
 					    NFT_MSG_NEWFLOWTABLE, 0, family,
 					    flowtable, &flowtable->hook_list);
 	if (err < 0)
-		goto err;
+		goto err_fill_flowtable_info;
 
-	return nlmsg_unicast(nlsk, skb2, NETLINK_CB(skb).portid);
-err:
+	return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid);
+
+err_fill_flowtable_info:
 	kfree_skb(skb2);
 	return err;
 }
···
 	err = nf_tables_fill_gen_info(skb2, net, NETLINK_CB(skb).portid,
 				      nlh->nlmsg_seq);
 	if (err < 0)
-		goto err;
+		goto err_fill_gen_info;
 
-	return nlmsg_unicast(nlsk, skb2, NETLINK_CB(skb).portid);
-err:
+	return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid);
+
+err_fill_gen_info:
 	kfree_skb(skb2);
 	return err;
 }
+8 -3
net/netfilter/nfnetlink.c
···
 }
 EXPORT_SYMBOL_GPL(nfnetlink_set_err);
 
-int nfnetlink_unicast(struct sk_buff *skb, struct net *net, u32 portid,
-		      int flags)
+int nfnetlink_unicast(struct sk_buff *skb, struct net *net, u32 portid)
 {
-	return netlink_unicast(net->nfnl, skb, portid, flags);
+	int err;
+
+	err = nlmsg_unicast(net->nfnl, skb, portid);
+	if (err == -EAGAIN)
+		err = -ENOBUFS;
+
+	return err;
 }
 EXPORT_SYMBOL_GPL(nfnetlink_unicast);
+1 -1
net/netfilter/nft_flow_offload.c
···
 	}
 
 	if (nf_ct_ext_exist(ct, NF_CT_EXT_HELPER) ||
-	    ct->status & IPS_SEQ_ADJUST)
+	    ct->status & (IPS_SEQ_ADJUST | IPS_NAT_CLASH))
 		goto out;
 
 	if (!nf_ct_is_confirmed(ct))
+3 -1
net/netfilter/nft_payload.c
···
 	u32 *dest = &regs->data[priv->dreg];
 	int offset;
 
-	dest[priv->len / NFT_REG32_SIZE] = 0;
+	if (priv->len % NFT_REG32_SIZE)
+		dest[priv->len / NFT_REG32_SIZE] = 0;
+
 	switch (priv->base) {
 	case NFT_PAYLOAD_LL_HEADER:
 		if (!skb_mac_header_was_set(skb))
+47 -10
net/netfilter/nft_set_rbtree.c
···
 			       struct nft_rbtree_elem *new,
 			       struct nft_set_ext **ext)
 {
+	bool overlap = false, dup_end_left = false, dup_end_right = false;
 	struct nft_rbtree *priv = nft_set_priv(set);
 	u8 genmask = nft_genmask_next(net);
 	struct nft_rbtree_elem *rbe;
 	struct rb_node *parent, **p;
-	bool overlap = false;
 	int d;
 
 	/* Detect overlaps as we descend the tree. Set the flag in these cases:
···
 	 *
 	 * b1. _ _ __>|  !_ _ __|  (insert end before existing start)
 	 * b2. _ _ ___|  !_ _ _>|  (insert end after existing start)
-	 * b3. _ _ ___! >|_ _ __|  (insert start after existing end)
+	 * b3. _ _ ___! >|_ _ __|  (insert start after existing end, as a leaf)
+	 *            '--' no nodes falling in this range
+	 * b4.          >|_ _   !  (insert start before existing start)
 	 *
 	 * Case a3. resolves to b3.:
 	 * - if the inserted start element is the leftmost, because the '0'
 	 *   element in the tree serves as end element
-	 * - otherwise, if an existing end is found. Note that end elements are
-	 *   always inserted after corresponding start elements.
+	 * - otherwise, if an existing end is found immediately to the left. If
+	 *   there are existing nodes in between, we need to further descend the
+	 *   tree before we can conclude the new start isn't causing an overlap
+	 *
+	 * or to b4., which, preceded by a3., means we already traversed one or
+	 * more existing intervals entirely, from the right.
 	 *
 	 * For a new, rightmost pair of elements, we'll hit cases b3. and b2.,
 	 * in that order.
 	 *
 	 * The flag is also cleared in two special cases:
 	 *
-	 * b4. |__ _ _!|<_ _ _   (insert start right before existing end)
-	 * b5. |__ _ >|!__ _ _   (insert end right after existing start)
+	 * b5. |__ _ _!|<_ _ _   (insert start right before existing end)
+	 * b6. |__ _ >|!__ _ _   (insert end right after existing start)
 	 *
 	 * which always happen as last step and imply that no further
 	 * overlapping is possible.
+	 *
+	 * Another special case comes from the fact that start elements matching
+	 * an already existing start element are allowed: insertion is not
+	 * performed but we return -EEXIST in that case, and the error will be
+	 * cleared by the caller if NLM_F_EXCL is not present in the request.
+	 * This way, request for insertion of an exact overlap isn't reported as
+	 * error to userspace if not desired.
+	 *
+	 * However, if the existing start matches a pre-existing start, but the
+	 * end element doesn't match the corresponding pre-existing end element,
+	 * we need to report a partial overlap. This is a local condition that
+	 * can be noticed without need for a tracking flag, by checking for a
+	 * local duplicated end for a corresponding start, from left and right,
+	 * separately.
 	 */
 
 	parent = NULL;
···
 			if (nft_rbtree_interval_start(new)) {
 				if (nft_rbtree_interval_end(rbe) &&
 				    nft_set_elem_active(&rbe->ext, genmask) &&
-				    !nft_set_elem_expired(&rbe->ext))
+				    !nft_set_elem_expired(&rbe->ext) && !*p)
 					overlap = false;
 			} else {
+				if (dup_end_left && !*p)
+					return -ENOTEMPTY;
+
 				overlap = nft_rbtree_interval_end(rbe) &&
 					  nft_set_elem_active(&rbe->ext,
 							      genmask) &&
 					  !nft_set_elem_expired(&rbe->ext);
+
+				if (overlap) {
+					dup_end_right = true;
+					continue;
+				}
 			}
 		} else if (d > 0) {
 			p = &parent->rb_right;
 
 			if (nft_rbtree_interval_end(new)) {
+				if (dup_end_right && !*p)
+					return -ENOTEMPTY;
+
 				overlap = nft_rbtree_interval_end(rbe) &&
 					  nft_set_elem_active(&rbe->ext,
 							      genmask) &&
 					  !nft_set_elem_expired(&rbe->ext);
-			} else if (nft_rbtree_interval_end(rbe) &&
-				   nft_set_elem_active(&rbe->ext, genmask) &&
+
+				if (overlap) {
+					dup_end_left = true;
+					continue;
+				}
+			} else if (nft_set_elem_active(&rbe->ext, genmask) &&
 				   !nft_set_elem_expired(&rbe->ext)) {
-				overlap = true;
+				overlap = nft_rbtree_interval_end(rbe);
 			}
 		} else {
 			if (nft_rbtree_interval_end(rbe) &&
···
 				p = &parent->rb_left;
 			}
 		}
+
+		dup_end_left = dup_end_right = false;
 	}
 
 	if (overlap)
+1 -1
net/netfilter/xt_recent.c
···
 	struct recent_table *t;
 
 	/* recent_net_exit() is called before recent_mt_destroy(). Make sure
-	 * that the parent xt_recent proc entry is is empty before trying to
+	 * that the parent xt_recent proc entry is empty before trying to
 	 * remove it.
 	 */
 	spin_lock_bh(&recent_lock);
+33 -32
net/netlabel/netlabel_domainhash.c
···
 			kfree(netlbl_domhsh_addr6_entry(iter6));
 		}
 #endif /* IPv6 */
+		kfree(ptr->def.addrsel);
 	}
 	kfree(ptr->domain);
 	kfree(ptr);
···
 			goto add_return;
 	}
 #endif /* IPv6 */
+		/* cleanup the new entry since we've moved everything over */
+		netlbl_domhsh_free_entry(&entry->rcu);
 	} else
 		ret_val = -EINVAL;
···
 {
 	int ret_val = 0;
 	struct audit_buffer *audit_buf;
+	struct netlbl_af4list *iter4;
+	struct netlbl_domaddr4_map *map4;
+#if IS_ENABLED(CONFIG_IPV6)
+	struct netlbl_af6list *iter6;
+	struct netlbl_domaddr6_map *map6;
+#endif /* IPv6 */
 
 	if (entry == NULL)
 		return -ENOENT;
···
 		ret_val = -ENOENT;
 	spin_unlock(&netlbl_domhsh_lock);
 
+	if (ret_val)
+		return ret_val;
+
 	audit_buf = netlbl_audit_start_common(AUDIT_MAC_MAP_DEL, audit_info);
 	if (audit_buf != NULL) {
 		audit_log_format(audit_buf,
···
 		audit_log_end(audit_buf);
 	}
 
-	if (ret_val == 0) {
-		struct netlbl_af4list *iter4;
-		struct netlbl_domaddr4_map *map4;
-#if IS_ENABLED(CONFIG_IPV6)
-		struct netlbl_af6list *iter6;
-		struct netlbl_domaddr6_map *map6;
-#endif /* IPv6 */
-
-		switch (entry->def.type) {
-		case NETLBL_NLTYPE_ADDRSELECT:
-			netlbl_af4list_foreach_rcu(iter4,
-						   &entry->def.addrsel->list4) {
-				map4 = netlbl_domhsh_addr4_entry(iter4);
-				cipso_v4_doi_putdef(map4->def.cipso);
-			}
-#if IS_ENABLED(CONFIG_IPV6)
-			netlbl_af6list_foreach_rcu(iter6,
-						   &entry->def.addrsel->list6) {
-				map6 = netlbl_domhsh_addr6_entry(iter6);
-				calipso_doi_putdef(map6->def.calipso);
-			}
-#endif /* IPv6 */
-			break;
-		case NETLBL_NLTYPE_CIPSOV4:
-			cipso_v4_doi_putdef(entry->def.cipso);
-			break;
-#if IS_ENABLED(CONFIG_IPV6)
-		case NETLBL_NLTYPE_CALIPSO:
-			calipso_doi_putdef(entry->def.calipso);
-			break;
-#endif /* IPv6 */
+	switch (entry->def.type) {
+	case NETLBL_NLTYPE_ADDRSELECT:
+		netlbl_af4list_foreach_rcu(iter4, &entry->def.addrsel->list4) {
+			map4 = netlbl_domhsh_addr4_entry(iter4);
+			cipso_v4_doi_putdef(map4->def.cipso);
 		}
-		call_rcu(&entry->rcu, netlbl_domhsh_free_entry);
+#if IS_ENABLED(CONFIG_IPV6)
+		netlbl_af6list_foreach_rcu(iter6, &entry->def.addrsel->list6) {
+			map6 = netlbl_domhsh_addr6_entry(iter6);
+			calipso_doi_putdef(map6->def.calipso);
+		}
+#endif /* IPv6 */
+		break;
+	case NETLBL_NLTYPE_CIPSOV4:
+		cipso_v4_doi_putdef(entry->def.cipso);
+		break;
+#if IS_ENABLED(CONFIG_IPV6)
+	case NETLBL_NLTYPE_CALIPSO:
+		calipso_doi_putdef(entry->def.calipso);
+		break;
+#endif /* IPv6 */
 	}
+	call_rcu(&entry->rcu, netlbl_domhsh_free_entry);
 
 	return ret_val;
 }
+1 -1
net/netlink/af_netlink.c
···
 {
 	struct netlink_sock *nlk = nlk_sk(sk);
 
-	if (skb_queue_empty(&sk->sk_receive_queue))
+	if (skb_queue_empty_lockless(&sk->sk_receive_queue))
 		clear_bit(NETLINK_S_CONGESTED, &nlk->state);
 	if (!test_bit(NETLINK_S_CONGESTED, &nlk->state))
 		wake_up_interruptible(&nlk->wait);
+8 -5
net/rxrpc/ar-internal.h
···
 	RXRPC_CALL_RX_LAST,		/* Received the last packet (at rxtx_top) */
 	RXRPC_CALL_TX_LAST,		/* Last packet in Tx buffer (at rxtx_top) */
 	RXRPC_CALL_SEND_PING,		/* A ping will need to be sent */
-	RXRPC_CALL_PINGING,		/* Ping in process */
 	RXRPC_CALL_RETRANS_TIMEOUT,	/* Retransmission due to timeout occurred */
 	RXRPC_CALL_BEGAN_RX_TIMER,	/* We began the expect_rx_by timer */
 	RXRPC_CALL_RX_HEARD,		/* The peer responded at least once to this call */
···
 	rxrpc_seq_t		ackr_consumed;	/* Highest packet shown consumed */
 	rxrpc_seq_t		ackr_seen;	/* Highest packet shown seen */
 
-	/* ping management */
-	rxrpc_serial_t		ping_serial;	/* Last ping sent */
-	ktime_t			ping_time;	/* Time last ping sent */
+	/* RTT management */
+	rxrpc_serial_t		rtt_serial[4];	/* Serial number of DATA or PING sent */
+	ktime_t			rtt_sent_at[4];	/* Time packet sent */
+	unsigned long		rtt_avail;	/* Mask of available slots in bits 0-3,
+						 * Mask of pending samples in 8-11 */
+#define RXRPC_CALL_RTT_AVAIL_MASK	0xf
+#define RXRPC_CALL_RTT_PEND_SHIFT	8
 
 	/* transmission-phase ACK management */
 	ktime_t			acks_latest_ts;	/* Timestamp of latest ACK received */
···
 /*
  * rtt.c
  */
-void rxrpc_peer_add_rtt(struct rxrpc_call *, enum rxrpc_rtt_rx_trace,
+void rxrpc_peer_add_rtt(struct rxrpc_call *, enum rxrpc_rtt_rx_trace, int,
 			rxrpc_serial_t, rxrpc_serial_t, ktime_t, ktime_t);
 unsigned long rxrpc_get_rto_backoff(struct rxrpc_peer *, bool);
 void rxrpc_peer_init_rtt(struct rxrpc_peer *);
+1
net/rxrpc/call_object.c
···
         call->cong_ssthresh = RXRPC_RXTX_BUFF_SIZE - 1;
 
         call->rxnet = rxnet;
+        call->rtt_avail = RXRPC_CALL_RTT_AVAIL_MASK;
         atomic_inc(&rxnet->nr_calls);
         return call;
 
+70 -53
net/rxrpc/input.c
···
 }
 
 /*
- * Process a requested ACK.
+ * See if there's a cached RTT probe to complete.
  */
-static void rxrpc_input_requested_ack(struct rxrpc_call *call,
-                                      ktime_t resp_time,
-                                      rxrpc_serial_t orig_serial,
-                                      rxrpc_serial_t ack_serial)
+static void rxrpc_complete_rtt_probe(struct rxrpc_call *call,
+                                     ktime_t resp_time,
+                                     rxrpc_serial_t acked_serial,
+                                     rxrpc_serial_t ack_serial,
+                                     enum rxrpc_rtt_rx_trace type)
 {
-        struct rxrpc_skb_priv *sp;
-        struct sk_buff *skb;
+        rxrpc_serial_t orig_serial;
+        unsigned long avail;
         ktime_t sent_at;
-        int ix;
+        bool matched = false;
+        int i;
 
-        for (ix = 0; ix < RXRPC_RXTX_BUFF_SIZE; ix++) {
-                skb = call->rxtx_buffer[ix];
-                if (!skb)
+        avail = READ_ONCE(call->rtt_avail);
+        smp_rmb(); /* Read avail bits before accessing data. */
+
+        for (i = 0; i < ARRAY_SIZE(call->rtt_serial); i++) {
+                if (!test_bit(i + RXRPC_CALL_RTT_PEND_SHIFT, &avail))
                         continue;
 
-                sent_at = skb->tstamp;
-                smp_rmb(); /* Read timestamp before serial. */
-                sp = rxrpc_skb(skb);
-                if (sp->hdr.serial != orig_serial)
-                        continue;
-                goto found;
+                sent_at = call->rtt_sent_at[i];
+                orig_serial = call->rtt_serial[i];
+
+                if (orig_serial == acked_serial) {
+                        clear_bit(i + RXRPC_CALL_RTT_PEND_SHIFT, &call->rtt_avail);
+                        smp_mb(); /* Read data before setting avail bit */
+                        set_bit(i, &call->rtt_avail);
+                        if (type != rxrpc_rtt_rx_cancel)
+                                rxrpc_peer_add_rtt(call, type, i, acked_serial, ack_serial,
+                                                   sent_at, resp_time);
+                        else
+                                trace_rxrpc_rtt_rx(call, rxrpc_rtt_rx_cancel, i,
+                                                   orig_serial, acked_serial, 0, 0);
+                        matched = true;
+                }
+
+                /* If a later serial is being acked, then mark this slot as
+                 * being available.
+                 */
+                if (after(acked_serial, orig_serial)) {
+                        trace_rxrpc_rtt_rx(call, rxrpc_rtt_rx_obsolete, i,
+                                           orig_serial, acked_serial, 0, 0);
+                        clear_bit(i + RXRPC_CALL_RTT_PEND_SHIFT, &call->rtt_avail);
+                        smp_wmb();
+                        set_bit(i, &call->rtt_avail);
+                }
         }
 
-        return;
-
-found:
-        rxrpc_peer_add_rtt(call, rxrpc_rtt_rx_requested_ack,
-                           orig_serial, ack_serial, sent_at, resp_time);
+        if (!matched)
+                trace_rxrpc_rtt_rx(call, rxrpc_rtt_rx_lost, 9, 0, acked_serial, 0, 0);
 }
 
 /*
···
  */
 static void rxrpc_input_ping_response(struct rxrpc_call *call,
                                       ktime_t resp_time,
-                                      rxrpc_serial_t orig_serial,
+                                      rxrpc_serial_t acked_serial,
                                       rxrpc_serial_t ack_serial)
 {
-        rxrpc_serial_t ping_serial;
-        ktime_t ping_time;
-
-        ping_time = call->ping_time;
-        smp_rmb();
-        ping_serial = READ_ONCE(call->ping_serial);
-
-        if (orig_serial == call->acks_lost_ping)
+        if (acked_serial == call->acks_lost_ping)
                 rxrpc_input_check_for_lost_ack(call);
-
-        if (before(orig_serial, ping_serial) ||
-            !test_and_clear_bit(RXRPC_CALL_PINGING, &call->flags))
-                return;
-        if (after(orig_serial, ping_serial))
-                return;
-
-        rxrpc_peer_add_rtt(call, rxrpc_rtt_rx_ping_response,
-                           orig_serial, ack_serial, ping_time, resp_time);
 }
 
 /*
···
                 struct rxrpc_ackinfo info;
                 u8 acks[RXRPC_MAXACKS];
         } buf;
-        rxrpc_serial_t acked_serial;
+        rxrpc_serial_t ack_serial, acked_serial;
         rxrpc_seq_t first_soft_ack, hard_ack, prev_pkt;
         int nr_acks, offset, ioffset;
···
         }
         offset += sizeof(buf.ack);
 
+        ack_serial = sp->hdr.serial;
         acked_serial = ntohl(buf.ack.serial);
         first_soft_ack = ntohl(buf.ack.firstPacket);
         prev_pkt = ntohl(buf.ack.previousPacket);
···
         summary.ack_reason = (buf.ack.reason < RXRPC_ACK__INVALID ?
                               buf.ack.reason : RXRPC_ACK__INVALID);
 
-        trace_rxrpc_rx_ack(call, sp->hdr.serial, acked_serial,
+        trace_rxrpc_rx_ack(call, ack_serial, acked_serial,
                            first_soft_ack, prev_pkt,
                            summary.ack_reason, nr_acks);
 
-        if (buf.ack.reason == RXRPC_ACK_PING_RESPONSE)
+        switch (buf.ack.reason) {
+        case RXRPC_ACK_PING_RESPONSE:
                 rxrpc_input_ping_response(call, skb->tstamp, acked_serial,
-                                          sp->hdr.serial);
-        if (buf.ack.reason == RXRPC_ACK_REQUESTED)
-                rxrpc_input_requested_ack(call, skb->tstamp, acked_serial,
-                                          sp->hdr.serial);
+                                          ack_serial);
+                rxrpc_complete_rtt_probe(call, skb->tstamp, acked_serial, ack_serial,
+                                         rxrpc_rtt_rx_ping_response);
+                break;
+        case RXRPC_ACK_REQUESTED:
+                rxrpc_complete_rtt_probe(call, skb->tstamp, acked_serial, ack_serial,
+                                         rxrpc_rtt_rx_requested_ack);
+                break;
+        default:
+                if (acked_serial != 0)
+                        rxrpc_complete_rtt_probe(call, skb->tstamp, acked_serial, ack_serial,
+                                                 rxrpc_rtt_rx_cancel);
+                break;
+        }
 
         if (buf.ack.reason == RXRPC_ACK_PING) {
-                _proto("Rx ACK %%%u PING Request", sp->hdr.serial);
+                _proto("Rx ACK %%%u PING Request", ack_serial);
                 rxrpc_propose_ACK(call, RXRPC_ACK_PING_RESPONSE,
-                                  sp->hdr.serial, true, true,
+                                  ack_serial, true, true,
                                   rxrpc_propose_ack_respond_to_ping);
         } else if (sp->hdr.flags & RXRPC_REQUEST_ACK) {
                 rxrpc_propose_ACK(call, RXRPC_ACK_REQUESTED,
-                                  sp->hdr.serial, true, true,
+                                  ack_serial, true, true,
                                   rxrpc_propose_ack_respond_to_ack);
         }
 
         /* Discard any out-of-order or duplicate ACKs (outside lock). */
         if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) {
-                trace_rxrpc_rx_discard_ack(call->debug_id, sp->hdr.serial,
+                trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial,
                                            first_soft_ack, call->ackr_first_seq,
                                            prev_pkt, call->ackr_prev_seq);
                 return;
···
         /* Discard any out-of-order or duplicate ACKs (inside lock). */
         if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) {
-                trace_rxrpc_rx_discard_ack(call->debug_id, sp->hdr.serial,
+                trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial,
                                            first_soft_ack, call->ackr_first_seq,
                                            prev_pkt, call->ackr_prev_seq);
                 goto out;
···
             RXRPC_TX_ANNO_LAST &&
             summary.nr_acks == call->tx_top - hard_ack &&
             rxrpc_is_client_call(call))
-                rxrpc_propose_ACK(call, RXRPC_ACK_PING, sp->hdr.serial,
+                rxrpc_propose_ACK(call, RXRPC_ACK_PING, ack_serial,
                                   false, true,
                                   rxrpc_propose_ack_ping_for_lost_reply);
+61 -21
net/rxrpc/output.c
···
 }
 
 /*
+ * Record the beginning of an RTT probe.
+ */
+static int rxrpc_begin_rtt_probe(struct rxrpc_call *call, rxrpc_serial_t serial,
+                                 enum rxrpc_rtt_tx_trace why)
+{
+        unsigned long avail = call->rtt_avail;
+        int rtt_slot = 9;
+
+        if (!(avail & RXRPC_CALL_RTT_AVAIL_MASK))
+                goto no_slot;
+
+        rtt_slot = __ffs(avail & RXRPC_CALL_RTT_AVAIL_MASK);
+        if (!test_and_clear_bit(rtt_slot, &call->rtt_avail))
+                goto no_slot;
+
+        call->rtt_serial[rtt_slot] = serial;
+        call->rtt_sent_at[rtt_slot] = ktime_get_real();
+        smp_wmb(); /* Write data before avail bit */
+        set_bit(rtt_slot + RXRPC_CALL_RTT_PEND_SHIFT, &call->rtt_avail);
+
+        trace_rxrpc_rtt_tx(call, why, rtt_slot, serial);
+        return rtt_slot;
+
+no_slot:
+        trace_rxrpc_rtt_tx(call, rxrpc_rtt_tx_no_slot, rtt_slot, serial);
+        return -1;
+}
+
+/*
+ * Cancel an RTT probe.
+ */
+static void rxrpc_cancel_rtt_probe(struct rxrpc_call *call,
+                                   rxrpc_serial_t serial, int rtt_slot)
+{
+        if (rtt_slot != -1) {
+                clear_bit(rtt_slot + RXRPC_CALL_RTT_PEND_SHIFT, &call->rtt_avail);
+                smp_wmb(); /* Clear pending bit before setting slot */
+                set_bit(rtt_slot, &call->rtt_avail);
+                trace_rxrpc_rtt_tx(call, rxrpc_rtt_tx_cancel, rtt_slot, serial);
+        }
+}
+
+/*
  * Send an ACK call packet.
  */
 int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
···
         rxrpc_serial_t serial;
         rxrpc_seq_t hard_ack, top;
         size_t len, n;
-        int ret;
+        int ret, rtt_slot = -1;
         u8 reason;
 
         if (test_bit(RXRPC_CALL_DISCONNECTED, &call->flags))
···
         if (_serial)
                 *_serial = serial;
 
-        if (ping) {
-                call->ping_serial = serial;
-                smp_wmb();
-                /* We need to stick a time in before we send the packet in case
-                 * the reply gets back before kernel_sendmsg() completes - but
-                 * asking UDP to send the packet can take a relatively long
-                 * time.
-                 */
-                call->ping_time = ktime_get_real();
-                set_bit(RXRPC_CALL_PINGING, &call->flags);
-                trace_rxrpc_rtt_tx(call, rxrpc_rtt_tx_ping, serial);
-        }
+        if (ping)
+                rtt_slot = rxrpc_begin_rtt_probe(call, serial, rxrpc_rtt_tx_ping);
 
         ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len);
         conn->params.peer->last_tx_at = ktime_get_seconds();
···
 
         if (call->state < RXRPC_CALL_COMPLETE) {
                 if (ret < 0) {
-                        if (ping)
-                                clear_bit(RXRPC_CALL_PINGING, &call->flags);
+                        rxrpc_cancel_rtt_probe(call, serial, rtt_slot);
                         rxrpc_propose_ACK(call, pkt->ack.reason,
                                           ntohl(pkt->ack.serial),
                                           false, true,
···
         struct kvec iov[2];
         rxrpc_serial_t serial;
         size_t len;
-        int ret;
+        int ret, rtt_slot = -1;
 
         _enter(",{%d}", skb->len);
 
···
         sp->hdr.serial = serial;
         smp_wmb(); /* Set serial before timestamp */
         skb->tstamp = ktime_get_real();
+        if (whdr.flags & RXRPC_REQUEST_ACK)
+                rtt_slot = rxrpc_begin_rtt_probe(call, serial, rxrpc_rtt_tx_data);
 
         /* send the packet by UDP
          * - returns -EMSGSIZE if UDP would have to fragment the packet
···
         conn->params.peer->last_tx_at = ktime_get_seconds();
 
         up_read(&conn->params.local->defrag_sem);
-        if (ret < 0)
+        if (ret < 0) {
+                rxrpc_cancel_rtt_probe(call, serial, rtt_slot);
                 trace_rxrpc_tx_fail(call->debug_id, serial, ret,
                                     rxrpc_tx_point_call_data_nofrag);
-        else
+        } else {
                 trace_rxrpc_tx_packet(call->debug_id, &whdr,
                                       rxrpc_tx_point_call_data_nofrag);
+        }
+
         rxrpc_tx_backoff(call, ret);
         if (ret == -EMSGSIZE)
                 goto send_fragmentable;
···
         if (ret >= 0) {
                 if (whdr.flags & RXRPC_REQUEST_ACK) {
                         call->peer->rtt_last_req = skb->tstamp;
-                        trace_rxrpc_rtt_tx(call, rxrpc_rtt_tx_data, serial);
                         if (call->peer->rtt_count > 1) {
                                 unsigned long nowj = jiffies, ack_lost_at;
 
···
         sp->hdr.serial = serial;
         smp_wmb(); /* Set serial before timestamp */
         skb->tstamp = ktime_get_real();
+        if (whdr.flags & RXRPC_REQUEST_ACK)
+                rtt_slot = rxrpc_begin_rtt_probe(call, serial, rxrpc_rtt_tx_data);
 
         switch (conn->params.local->srx.transport.family) {
         case AF_INET6:
···
                 BUG();
         }
 
-        if (ret < 0)
+        if (ret < 0) {
+                rxrpc_cancel_rtt_probe(call, serial, rtt_slot);
                 trace_rxrpc_tx_fail(call->debug_id, serial, ret,
                                     rxrpc_tx_point_call_data_frag);
-        else
+        } else {
                 trace_rxrpc_tx_packet(call->debug_id, &whdr,
                                       rxrpc_tx_point_call_data_frag);
+        }
         rxrpc_tx_backoff(call, ret);
 
         up_write(&conn->params.local->defrag_sem);
+13 -3
net/rxrpc/peer_object.c
···
  * rxrpc_kernel_get_srtt - Get a call's peer smoothed RTT
  * @sock: The socket on which the call is in progress.
  * @call: The call to query
+ * @_srtt: Where to store the SRTT value.
  *
- * Get the call's peer smoothed RTT.
+ * Get the call's peer smoothed RTT in uS.
  */
-u32 rxrpc_kernel_get_srtt(struct socket *sock, struct rxrpc_call *call)
+bool rxrpc_kernel_get_srtt(struct socket *sock, struct rxrpc_call *call,
+                           u32 *_srtt)
 {
-        return call->peer->srtt_us >> 3;
+        struct rxrpc_peer *peer = call->peer;
+
+        if (peer->rtt_count == 0) {
+                *_srtt = 1000000; /* 1S */
+                return false;
+        }
+
+        *_srtt = call->peer->srtt_us >> 3;
+        return true;
 }
 EXPORT_SYMBOL(rxrpc_kernel_get_srtt);
+2 -1
net/rxrpc/rtt.c
···
  * exclusive access to the peer RTT data.
  */
 void rxrpc_peer_add_rtt(struct rxrpc_call *call, enum rxrpc_rtt_rx_trace why,
+                        int rtt_slot,
                         rxrpc_serial_t send_serial, rxrpc_serial_t resp_serial,
                         ktime_t send_time, ktime_t resp_time)
 {
···
         peer->rtt_count++;
         spin_unlock(&peer->rtt_input_lock);
 
-        trace_rxrpc_rtt_rx(call, why, send_serial, resp_serial,
+        trace_rxrpc_rtt_rx(call, why, rtt_slot, send_serial, resp_serial,
                            peer->srtt_us >> 3, peer->rto_j);
 }
+2 -1
net/rxrpc/rxkad.c
···
         ret = -ENOMEM;
         ticket = kmalloc(ticket_len, GFP_NOFS);
         if (!ticket)
-                goto temporary_error;
+                goto temporary_error_free_resp;
 
         eproto = tracepoint_string("rxkad_tkt_short");
         abort_code = RXKADPACKETSHORT;
···
 
 temporary_error_free_ticket:
         kfree(ticket);
+temporary_error_free_resp:
         kfree(response);
 temporary_error:
         /* Ignore the response packet if we got a temporary error such as
+4 -16
net/sched/sch_red.c
···
                                FLOW_BLOCK_BINDER_TYPE_RED_EARLY_DROP,
                                tb[TCA_RED_EARLY_DROP_BLOCK], extack);
         if (err)
-                goto err_early_drop_init;
+                return err;
 
-        err = tcf_qevent_init(&q->qe_mark, sch,
-                              FLOW_BLOCK_BINDER_TYPE_RED_MARK,
-                              tb[TCA_RED_MARK_BLOCK], extack);
-        if (err)
-                goto err_mark_init;
-
-        return 0;
-
-err_mark_init:
-        tcf_qevent_destroy(&q->qe_early_drop, sch);
-err_early_drop_init:
-        del_timer_sync(&q->adapt_timer);
-        red_offload(sch, false);
-        qdisc_put(q->qdisc);
-        return err;
+        return tcf_qevent_init(&q->qe_mark, sch,
+                               FLOW_BLOCK_BINDER_TYPE_RED_MARK,
+                               tb[TCA_RED_MARK_BLOCK], extack);
 }
 
 static int red_change(struct Qdisc *sch, struct nlattr *opt,
+24 -6
net/sched/sch_taprio.c
···
         spin_unlock(&q->current_entry_lock);
 }
 
-static void taprio_sched_to_offload(struct taprio_sched *q,
+static u32 tc_map_to_queue_mask(struct net_device *dev, u32 tc_mask)
+{
+        u32 i, queue_mask = 0;
+
+        for (i = 0; i < dev->num_tc; i++) {
+                u32 offset, count;
+
+                if (!(tc_mask & BIT(i)))
+                        continue;
+
+                offset = dev->tc_to_txq[i].offset;
+                count = dev->tc_to_txq[i].count;
+
+                queue_mask |= GENMASK(offset + count - 1, offset);
+        }
+
+        return queue_mask;
+}
+
+static void taprio_sched_to_offload(struct net_device *dev,
                                     struct sched_gate_list *sched,
-                                    const struct tc_mqprio_qopt *mqprio,
                                     struct tc_taprio_qopt_offload *offload)
 {
         struct sched_entry *entry;
···
 
                 e->command = entry->command;
                 e->interval = entry->interval;
-                e->gate_mask = entry->gate_mask;
+                e->gate_mask = tc_map_to_queue_mask(dev, entry->gate_mask);
+
                 i++;
         }
 
···
 }
 
 static int taprio_enable_offload(struct net_device *dev,
-                                 struct tc_mqprio_qopt *mqprio,
                                  struct taprio_sched *q,
                                  struct sched_gate_list *sched,
                                  struct netlink_ext_ack *extack)
···
                 return -ENOMEM;
         }
         offload->enable = 1;
-        taprio_sched_to_offload(q, sched, mqprio, offload);
+        taprio_sched_to_offload(dev, sched, offload);
 
         err = ops->ndo_setup_tc(dev, TC_SETUP_QDISC_TAPRIO, offload);
         if (err < 0) {
···
         }
 
         if (FULL_OFFLOAD_IS_ENABLED(q->flags))
-                err = taprio_enable_offload(dev, mqprio, q, new_admin, extack);
+                err = taprio_enable_offload(dev, q, new_admin, extack);
         else
                 err = taprio_disable_offload(dev, q, extack);
         if (err)
+6 -10
net/sctp/socket.c
···
 
         pr_debug("%s: begins, snum:%d\n", __func__, snum);
 
-        local_bh_disable();
-
         if (snum == 0) {
                 /* Search for an available port. */
                 int low, high, remaining, index;
···
                                 continue;
                         index = sctp_phashfn(net, rover);
                         head = &sctp_port_hashtable[index];
-                        spin_lock(&head->lock);
+                        spin_lock_bh(&head->lock);
                         sctp_for_each_hentry(pp, &head->chain)
                                 if ((pp->port == rover) &&
                                     net_eq(net, pp->net))
                                         goto next;
                         break;
                 next:
-                        spin_unlock(&head->lock);
+                        spin_unlock_bh(&head->lock);
+                        cond_resched();
                 } while (--remaining > 0);
 
                 /* Exhausted local port range during search? */
                 ret = 1;
                 if (remaining <= 0)
-                        goto fail;
+                        return ret;
 
                 /* OK, here is the one we will use.  HEAD (the port
                  * hash table list entry) is non-NULL and we hold it's
···
                  * port iterator, pp being NULL.
                  */
                 head = &sctp_port_hashtable[sctp_phashfn(net, snum)];
-                spin_lock(&head->lock);
+                spin_lock_bh(&head->lock);
                 sctp_for_each_hentry(pp, &head->chain) {
                         if ((pp->port == snum) && net_eq(pp->net, net))
                                 goto pp_found;
···
         ret = 0;
 
 fail_unlock:
-        spin_unlock(&head->lock);
-
-fail:
-        local_bh_enable();
+        spin_unlock_bh(&head->lock);
         return ret;
 }
+8 -7
net/smc/smc_close.c
···
         cancel_work_sync(&smc->conn.close_work);
         cancel_delayed_work_sync(&smc->conn.tx_work);
         lock_sock(sk);
-        sk->sk_state = SMC_CLOSED;
 }
 
 /* terminate smc socket abnormally - active abort
···
         }
         switch (sk->sk_state) {
         case SMC_ACTIVE:
-                sk->sk_state = SMC_PEERABORTWAIT;
-                smc_close_cancel_work(smc);
-                sk->sk_state = SMC_CLOSED;
-                sock_put(sk); /* passive closing */
-                break;
         case SMC_APPCLOSEWAIT1:
         case SMC_APPCLOSEWAIT2:
+                sk->sk_state = SMC_PEERABORTWAIT;
                 smc_close_cancel_work(smc);
+                if (sk->sk_state != SMC_PEERABORTWAIT)
+                        break;
                 sk->sk_state = SMC_CLOSED;
-                sock_put(sk); /* postponed passive closing */
+                sock_put(sk); /* (postponed) passive closing */
                 break;
         case SMC_PEERCLOSEWAIT1:
         case SMC_PEERCLOSEWAIT2:
         case SMC_PEERFINCLOSEWAIT:
                 sk->sk_state = SMC_PEERABORTWAIT;
                 smc_close_cancel_work(smc);
+                if (sk->sk_state != SMC_PEERABORTWAIT)
+                        break;
                 sk->sk_state = SMC_CLOSED;
                 smc_conn_free(&smc->conn);
                 release_clcsock = true;
···
         case SMC_APPFINCLOSEWAIT:
                 sk->sk_state = SMC_PEERABORTWAIT;
                 smc_close_cancel_work(smc);
+                if (sk->sk_state != SMC_PEERABORTWAIT)
+                        break;
                 sk->sk_state = SMC_CLOSED;
                 smc_conn_free(&smc->conn);
                 release_clcsock = true;
+3
net/smc/smc_core.c
···
         if (ini->is_smcd) {
                 conn->rx_off = sizeof(struct smcd_cdc_msg);
                 smcd_cdc_rx_init(conn); /* init tasklet for this conn */
+        } else {
+                conn->rx_off = 0;
         }
 #ifndef KERNEL_HAS_ATOMIC64
         spin_lock_init(&conn->acurs_lock);
···
                 list_del(&smc->conn.sndbuf_desc->list);
                 mutex_unlock(&smc->conn.lgr->sndbufs_lock);
                 smc_buf_free(smc->conn.lgr, false, smc->conn.sndbuf_desc);
+                smc->conn.sndbuf_desc = NULL;
         }
         return rc;
 }
+14 -1
net/smc/smc_llc.c
···
         struct smc_init_info ini;
         int lnk_idx, rc = 0;
 
+        if (!llc->qp_mtu)
+                goto out_reject;
+
         ini.vlan_id = lgr->vlan_id;
         smc_pnet_find_alt_roce(lgr, &ini, link->smcibdev);
         if (!memcmp(llc->sender_gid, link->peer_gid, SMC_GID_SIZE) &&
···
         kfree(qentry);
 }
 
+static bool smc_llc_is_empty_llc_message(union smc_llc_msg *llc)
+{
+        int i;
+
+        for (i = 0; i < ARRAY_SIZE(llc->raw.data); i++)
+                if (llc->raw.data[i])
+                        return false;
+        return true;
+}
+
 static bool smc_llc_is_local_add_link(union smc_llc_msg *llc)
 {
         if (llc->raw.hdr.common.type == SMC_LLC_ADD_LINK &&
-            !llc->add_link.qp_mtu && !llc->add_link.link_num)
+            smc_llc_is_empty_llc_message(llc))
                 return true;
         return false;
 }
+2 -2
net/socket.c
···
 EXPORT_SYMBOL(kernel_getsockname);
 
 /**
- *	kernel_peername - get the address which the socket is connected (kernel space)
+ *	kernel_getpeername - get the address which the socket is connected (kernel space)
  *	@sock: socket
  *	@addr: address holder
  *
···
 EXPORT_SYMBOL(kernel_sendpage_locked);
 
 /**
- *	kernel_shutdown - shut down part of a full-duplex connection (kernel space)
+ *	kernel_sock_shutdown - shut down part of a full-duplex connection (kernel space)
  *	@sock: socket
  *	@how: connection part
  *
+9 -3
net/tipc/crypto.c
···
         if (aead->cloned) {
                 tipc_aead_put(aead->cloned);
         } else {
-                head = *this_cpu_ptr(aead->tfm_entry);
+                head = *get_cpu_ptr(aead->tfm_entry);
+                put_cpu_ptr(aead->tfm_entry);
                 list_for_each_entry_safe(tfm_entry, tmp, &head->list, list) {
                         crypto_free_aead(tfm_entry->tfm);
                         list_del(&tfm_entry->list);
···
  */
 static struct crypto_aead *tipc_aead_tfm_next(struct tipc_aead *aead)
 {
-        struct tipc_tfm **tfm_entry = this_cpu_ptr(aead->tfm_entry);
+        struct tipc_tfm **tfm_entry;
+        struct crypto_aead *tfm;
 
+        tfm_entry = get_cpu_ptr(aead->tfm_entry);
         *tfm_entry = list_next_entry(*tfm_entry, list);
-        return (*tfm_entry)->tfm;
+        tfm = (*tfm_entry)->tfm;
+        put_cpu_ptr(tfm_entry);
+
+        return tfm;
 }
 
 /**
+6 -3
net/tipc/socket.c
···
 
         trace_tipc_sk_shutdown(sk, NULL, TIPC_DUMP_ALL, " ");
         __tipc_shutdown(sock, TIPC_CONN_SHUTDOWN);
-        sk->sk_shutdown = SEND_SHUTDOWN;
+        if (tipc_sk_type_connectionless(sk))
+                sk->sk_shutdown = SHUTDOWN_MASK;
+        else
+                sk->sk_shutdown = SEND_SHUTDOWN;
 
         if (sk->sk_state == TIPC_DISCONNECTING) {
                 /* Discard any unreceived messages */
                 __skb_queue_purge(&sk->sk_receive_queue);
 
-                /* Wake up anyone sleeping in poll */
-                sk->sk_state_change(sk);
                 res = 0;
         } else {
                 res = -ENOTCONN;
         }
+        /* Wake up anyone sleeping in poll. */
+        sk->sk_state_change(sk);
 
         release_sock(sk);
         return res;
+11 -4
net/wireless/chan.c
···
  */
 
 #include <linux/export.h>
+#include <linux/bitfield.h>
 #include <net/cfg80211.h>
 #include "core.h"
 #include "rdev-ops.h"
···
         struct ieee80211_sta_vht_cap *vht_cap;
         struct ieee80211_edmg *edmg_cap;
         u32 width, control_freq, cap;
+        bool support_80_80 = false;
 
         if (WARN_ON(!cfg80211_chandef_valid(chandef)))
                 return false;
···
                         return false;
                 break;
         case NL80211_CHAN_WIDTH_80P80:
-                cap = vht_cap->cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK;
-                if (chandef->chan->band != NL80211_BAND_6GHZ &&
-                    cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ)
+                cap = vht_cap->cap;
+                support_80_80 =
+                        (cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ) ||
+                        (cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ &&
+                         cap & IEEE80211_VHT_CAP_EXT_NSS_BW_MASK) ||
+                        u32_get_bits(cap, IEEE80211_VHT_CAP_EXT_NSS_BW_MASK) > 1;
+                if (chandef->chan->band != NL80211_BAND_6GHZ && !support_80_80)
                         return false;
                 fallthrough;
         case NL80211_CHAN_WIDTH_80:
···
                         return false;
                 cap = vht_cap->cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK;
                 if (cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ &&
-                    cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ)
+                    cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ &&
+                    !(vht_cap->cap & IEEE80211_VHT_CAP_EXT_NSS_BW_MASK))
                         return false;
                 break;
         default:
+1 -1
net/wireless/nl80211.c
···
 
         if (info->attrs[NL80211_ATTR_HE_6GHZ_CAPABILITY])
                 params.he_6ghz_capa =
-                        nla_data(info->attrs[NL80211_ATTR_HE_CAPABILITY]);
+                        nla_data(info->attrs[NL80211_ATTR_HE_6GHZ_CAPABILITY]);
 
         if (info->attrs[NL80211_ATTR_AIRTIME_WEIGHT])
                 params.airtime_weight =
+3
net/wireless/reg.c
···
         if (WARN_ON(!alpha2))
                 return -EINVAL;
 
+        if (!is_world_regdom(alpha2) && !is_an_alpha2(alpha2))
+                return -EINVAL;
+
         request = kzalloc(sizeof(struct regulatory_request), GFP_KERNEL);
         if (!request)
                 return -ENOMEM;
+5 -3
net/wireless/util.c
···
                 return (freq - 2407) / 5;
         else if (freq >= 4910 && freq <= 4980)
                 return (freq - 4000) / 5;
-        else if (freq < 5945)
+        else if (freq < 5925)
                 return (freq - 5000) / 5;
+        else if (freq == 5935)
+                return 2;
         else if (freq <= 45000) /* DMG band lower limit */
-                /* see 802.11ax D4.1 27.3.22.2 */
-                return (freq - 5940) / 5;
+                /* see 802.11ax D6.1 27.3.22.2 */
+                return (freq - 5950) / 5;
         else if (freq >= 58320 && freq <= 70200)
                 return (freq - 56160) / 2160;
         else
+2
tools/testing/selftests/bpf/test_maps.c
···
         pid_t pid[tasks];
         int i;
 
+        fflush(stdout);
+
         for (i = 0; i < tasks; i++) {
                 pid[i] = fork();
                 if (pid[i] == 0) {
+3 -1
tools/testing/selftests/bpf/test_progs.c
···
         if (!flavor)
                 return 0;
         flavor++;
-        fprintf(stdout, "Switching to flavor '%s' subdirectory...\n", flavor);
+        if (env.verbosity > VERBOSE_NONE)
+                fprintf(stdout, "Switching to flavor '%s' subdirectory...\n", flavor);
+
         return chdir(flavor);
 }
 
+37 -30
tools/testing/selftests/netfilter/nft_flowtable.sh
···
 # result in fragmentation and/or PMTU discovery.
 #
 # You can check with different Orgininator/Link/Responder MTU eg:
-# sh nft_flowtable.sh -o1000 -l500 -r100
+# nft_flowtable.sh -o8000 -l1500 -r2000
 #
 
···
 log_netns=$(sysctl -n net.netfilter.nf_log_all_netns)
 
 checktool (){
-	$1 > /dev/null 2>&1
-	if [ $? -ne 0 ];then
+	if ! $1 > /dev/null 2>&1; then
 		echo "SKIP: Could not $2"
 		exit $ksft_skip
 	fi
···
 lmtu=1500
 rmtu=2000
 
+usage(){
+	echo "nft_flowtable.sh [OPTIONS]"
+	echo
+	echo "MTU options"
+	echo "   -o originator"
+	echo "   -l link"
+	echo "   -r responder"
+	exit 1
+}
+
 while getopts "o:l:r:" o
 do
 	case $o in
 		o) omtu=$OPTARG;;
 		l) lmtu=$OPTARG;;
 		r) rmtu=$OPTARG;;
+		*) usage;;
 	esac
 done
 
-ip -net nsr1 link set veth0 mtu $omtu
+if ! ip -net nsr1 link set veth0 mtu $omtu; then
+	exit 1
+fi
+
 ip -net ns1 link set eth0 mtu $omtu
 
-ip -net nsr2 link set veth1 mtu $rmtu
+if ! ip -net nsr2 link set veth1 mtu $rmtu; then
+	exit 1
+fi
+
 ip -net ns2 link set eth0 mtu $rmtu
 
 # transfer-net between nsr1 and nsr2.
···
   ip -net ns$i route add default via 10.0.$i.1
   ip -net ns$i addr add dead:$i::99/64 dev eth0
   ip -net ns$i route add default via dead:$i::1
-  ip netns exec ns$i sysctl net.ipv4.tcp_no_metrics_save=1 > /dev/null
+  if ! ip netns exec ns$i sysctl net.ipv4.tcp_no_metrics_save=1 > /dev/null; then
+	echo "ERROR: Check Originator/Responder values (problem during address addition)"
+	exit 1
+  fi
 
   # don't set ip DF bit for first two tests
   ip netns exec ns$i sysctl net.ipv4.ip_no_pmtu_disc=1 > /dev/null
···
 fi
 
 # test basic connectivity
-ip netns exec ns1 ping -c 1 -q 10.0.2.99 > /dev/null
-if [ $? -ne 0 ];then
+if ! ip netns exec ns1 ping -c 1 -q 10.0.2.99 > /dev/null; then
   echo "ERROR: ns1 cannot reach ns2" 1>&2
   bash
   exit 1
 fi
 
-ip netns exec ns2 ping -c 1 -q 10.0.1.99 > /dev/null
-if [ $? -ne 0 ];then
+if ! ip netns exec ns2 ping -c 1 -q 10.0.1.99 > /dev/null; then
   echo "ERROR: ns2 cannot reach ns1" 1>&2
   exit 1
 fi
···
 make_file()
 {
 	name=$1
-	who=$2
 
 	SIZE=$((RANDOM % (1024 * 8)))
 	TSIZE=$((SIZE * 1024))
···
 	out=$2
 	what=$3
 
-	cmp "$in" "$out" > /dev/null 2>&1
-	if [ $? -ne 0 ] ;then
+	if ! cmp "$in" "$out" > /dev/null 2>&1; then
 		echo "FAIL: file mismatch for $what" 1>&2
 		ls -l "$in"
 		ls -l "$out"
···
 
 	wait
 
-	check_transfer "$ns1in" "$ns2out" "ns1 -> ns2"
-	if [ $? -ne 0 ];then
+	if ! check_transfer "$ns1in" "$ns2out" "ns1 -> ns2"; then
 		lret=1
 	fi
 
-	check_transfer "$ns2in" "$ns1out" "ns1 <- ns2"
-	if [ $? -ne 0 ];then
+	if ! check_transfer "$ns2in" "$ns1out" "ns1 <- ns2"; then
 		lret=1
 	fi
 
···
 	return $lret
 }
 
-make_file "$ns1in" "ns1"
-make_file "$ns2in" "ns2"
+make_file "$ns1in"
+make_file "$ns2in"
 
 # First test:
 # No PMTU discovery, nsr1 is expected to fragment packets from ns1 to ns2 as needed.
-test_tcp_forwarding ns1 ns2
-if [ $? -eq 0 ] ;then
+if test_tcp_forwarding ns1 ns2; then
 	echo "PASS: flow offloaded for ns1/ns2"
 else
 	echo "FAIL: flow offload for ns1/ns2:" 1>&2
···
 }
 EOF
 
-test_tcp_forwarding_nat ns1 ns2
-
-if [ $? -eq 0 ] ;then
+if test_tcp_forwarding_nat ns1 ns2; then
 	echo "PASS: flow offloaded for ns1/ns2 with NAT"
 else
 	echo "FAIL: flow offload for ns1/ns2 with NAT" 1>&2
···
 # Same as second test, but with PMTU discovery enabled.
 handle=$(ip netns exec nsr1 nft -a list table inet filter | grep something-to-grep-for | cut -d \# -f 2)
 
-ip netns exec nsr1 nft delete rule inet filter forward $handle
-if [ $? -ne 0 ] ;then
+if ! ip netns exec nsr1 nft delete rule inet filter forward $handle; then
 	echo "FAIL: Could not delete large-packet accept rule"
 	exit 1
 fi
···
 ip netns exec ns1 sysctl net.ipv4.ip_no_pmtu_disc=0 > /dev/null
 ip netns exec ns2 sysctl net.ipv4.ip_no_pmtu_disc=0 > /dev/null
 
-test_tcp_forwarding_nat ns1 ns2
-if [ $? -eq 0 ] ;then
+if test_tcp_forwarding_nat ns1 ns2; then
 	echo "PASS: flow offloaded for ns1/ns2 with NAT and pmtu discovery"
 else
 	echo "FAIL: flow offload for ns1/ns2 with NAT and pmtu discovery" 1>&2
···
 ip -net ns2 route add default via 10.0.2.1
 ip -net ns2 route add default via dead:2::1
 
-test_tcp_forwarding ns1 ns2
-if [ $? -eq 0 ] ;then
+if test_tcp_forwarding ns1 ns2; then
 	echo "PASS: ipsec tunnel mode for ns1/ns2"
 else
 	echo "FAIL: ipsec tunnel mode for ns1/ns2"