
Merge tag 'net-6.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from netfilter, bluetooth and bpf.

Fairly usual collection of driver and core fixes. The large selftest
accompanying one of the fixes is also becoming a common occurrence.

Current release - regressions:

- ipv6: fix infinite recursion in fib6_dump_done()

- net/rds: fix possible null-deref in newly added error path

Current release - new code bugs:

- net: do not consume a full cacheline for system_page_pool

- bpf: fix bpf_arena-related file descriptor leaks in the verifier

- drv: ice: fix freeing uninitialized pointers, fixing misuse of the
newfangled __free() auto-cleanup

Previous releases - regressions:

- x86/bpf: fix the BPF JIT with retbleed=stuff

- xen-netfront: add missing skb_mark_for_recycle, fix page pool
accounting leaks, revealed by recently added explicit warning

- tcp: fix bind() regression for v6-only wildcard and v4-mapped-v6
non-wildcard addresses

- Bluetooth:
- replace "hci_qca: Set BDA quirk bit if fwnode exists in DT" with
better workarounds to un-break some buggy Qualcomm devices
- set the connection as encrypted before it is fully established,
fixing re-connection to some headsets which use a slightly unusual
sequence of messages

- mptcp:
- prevent BPF accessing lowat from a subflow socket
- don't account accept() of non-MPC client as fallback to TCP

- drv: mana: fix Rx DMA datasize and skb_over_panic

- drv: i40e: fix VF MAC filter removal

Previous releases - always broken:

- gro: various fixes related to UDP tunnels - netns crossing
problems, incorrect checksum conversions, and incorrect packet
transformations which may lead to panics

- bpf: support deferring bpf_link dealloc to after RCU grace period

- nf_tables:
- release batch on table validation from abort path
- release mutex after nft_gc_seq_end from abort path
- flush pending destroy work before exit_net release

- drv: r8169: skip DASH fw status checks when DASH is disabled"

* tag 'net-6.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (81 commits)
netfilter: validate user input for expected length
net/sched: act_skbmod: prevent kernel-infoleak
net: usb: ax88179_178a: avoid the interface always configured as random address
net: dsa: sja1105: Fix parameters order in sja1110_pcs_mdio_write_c45()
net: ravb: Always update error counters
net: ravb: Always process TX descriptor ring
netfilter: nf_tables: discard table flag update with pending basechain deletion
netfilter: nf_tables: Fix potential data-race in __nft_flowtable_type_get()
netfilter: nf_tables: reject new basechain after table flag update
netfilter: nf_tables: flush pending destroy work before exit_net release
netfilter: nf_tables: release mutex after nft_gc_seq_end from abort path
netfilter: nf_tables: release batch on table validation from abort path
Revert "tg3: Remove residual error handling in tg3_suspend"
tg3: Remove residual error handling in tg3_suspend
net: mana: Fix Rx DMA datasize and skb_over_panic
net/sched: fix lockdep splat in qdisc_tree_reduce_backlog()
net: phy: micrel: lan8814: Fix when enabling/disabling 1-step timestamping
net: stmmac: fix rx queue priority assignment
net: txgbe: fix i2c dev name cannot match clkdev
net: fec: Set mac_managed_pm during probe
...

+1620 -419
+4
Documentation/devicetree/bindings/net/bluetooth/qualcomm-bluetooth.yaml
···
 
   local-bd-address: true
 
+  qcom,local-bd-address-broken:
+    type: boolean
+    description:
+      boot firmware is incorrectly passing the address in big-endian order
 
 required:
   - compatible
+1
Documentation/networking/devlink/index.rst
···
    devlink-selftests
    devlink-trap
    devlink-linecard
+   devlink-eswitch-attr
 
 Driver-specific documentation
 -----------------------------
+1
Documentation/networking/representors.rst
···
 .. SPDX-License-Identifier: GPL-2.0
+.. _representors:
 
 =============================
 Network Function Representors
+2 -1
MAINTAINERS
···
 
 MELLANOX ETHERNET DRIVER (mlx5e)
 M:	Saeed Mahameed <saeedm@nvidia.com>
+M:	Tariq Toukan <tariqt@nvidia.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 W:	http://www.mellanox.com
···
 MELLANOX MLX5 core VPI driver
 M:	Saeed Mahameed <saeedm@nvidia.com>
 M:	Leon Romanovsky <leonro@nvidia.com>
+M:	Tariq Toukan <tariqt@nvidia.com>
 L:	netdev@vger.kernel.org
 L:	linux-rdma@vger.kernel.org
 S:	Supported
···
 F:	drivers/scsi/vmw_pvscsi.h
 
 VMWARE VIRTUAL PTP CLOCK DRIVER
-M:	Jeff Sipek <jsipek@vmware.com>
 R:	Ajay Kaher <akaher@vmware.com>
 R:	Alexey Makhalov <amakhalov@vmware.com>
 R:	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
+2
arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
···
 		vddrf-supply = <&pp1300_l2c>;
 		vddch0-supply = <&pp3300_l10c>;
 		max-speed = <3200000>;
+
+		qcom,local-bd-address-broken;
 	};
 };
 
+2 -2
arch/x86/include/asm/alternative.h
···
 extern void callthunks_patch_module_calls(struct callthunk_sites *sites,
 					  struct module *mod);
 extern void *callthunks_translate_call_dest(void *dest);
-extern int x86_call_depth_emit_accounting(u8 **pprog, void *func);
+extern int x86_call_depth_emit_accounting(u8 **pprog, void *func, void *ip);
 #else
 static __always_inline void callthunks_patch_builtin_calls(void) {}
 static __always_inline void
···
 	return dest;
 }
 static __always_inline int x86_call_depth_emit_accounting(u8 **pprog,
-							  void *func)
+							  void *func, void *ip)
 {
 	return 0;
 }
+2 -2
arch/x86/kernel/callthunks.c
···
 	return !bcmp(pad, insn_buff, tmpl_size);
 }
 
-int x86_call_depth_emit_accounting(u8 **pprog, void *func)
+int x86_call_depth_emit_accounting(u8 **pprog, void *func, void *ip)
 {
 	unsigned int tmpl_size = SKL_TMPL_SIZE;
 	u8 insn_buff[MAX_PATCH_LEN];
···
 		return 0;
 
 	memcpy(insn_buff, skl_call_thunk_template, tmpl_size);
-	apply_relocation(insn_buff, tmpl_size, *pprog,
+	apply_relocation(insn_buff, tmpl_size, ip,
 			 skl_call_thunk_template, tmpl_size);
 
 	memcpy(*pprog, insn_buff, tmpl_size);
+8 -11
arch/x86/net/bpf_jit_comp.c
···
 static int emit_rsb_call(u8 **pprog, void *func, void *ip)
 {
 	OPTIMIZER_HIDE_VAR(func);
-	x86_call_depth_emit_accounting(pprog, func);
+	ip += x86_call_depth_emit_accounting(pprog, func, ip);
 	return emit_patch(pprog, func, ip, 0xE8);
 }
···
 
 			/* call */
 		case BPF_JMP | BPF_CALL: {
-			int offs;
+			u8 *ip = image + addrs[i - 1];
 
 			func = (u8 *) __bpf_call_base + imm32;
 			if (tail_call_reachable) {
 				RESTORE_TAIL_CALL_CNT(bpf_prog->aux->stack_depth);
-				if (!imm32)
-					return -EINVAL;
-				offs = 7 + x86_call_depth_emit_accounting(&prog, func);
-			} else {
-				if (!imm32)
-					return -EINVAL;
-				offs = x86_call_depth_emit_accounting(&prog, func);
+				ip += 7;
 			}
-			if (emit_call(&prog, func, image + addrs[i - 1] + offs))
+			if (!imm32)
+				return -EINVAL;
+			ip += x86_call_depth_emit_accounting(&prog, func, ip);
+			if (emit_call(&prog, func, ip))
 				return -EINVAL;
 			break;
 		}
···
 		 * Direct-call fentry stub, as such it needs accounting for the
 		 * __fentry__ call.
 		 */
-		x86_call_depth_emit_accounting(&prog, NULL);
+		x86_call_depth_emit_accounting(&prog, NULL, image);
 	}
 	EMIT1(0x55);		 /* push rbp */
 	EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
+6 -2
drivers/bluetooth/btqca.c
···
 
 int qca_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr)
 {
+	bdaddr_t bdaddr_swapped;
 	struct sk_buff *skb;
 	int err;
 
-	skb = __hci_cmd_sync_ev(hdev, EDL_WRITE_BD_ADDR_OPCODE, 6, bdaddr,
-				HCI_EV_VENDOR, HCI_INIT_TIMEOUT);
+	baswap(&bdaddr_swapped, bdaddr);
+
+	skb = __hci_cmd_sync_ev(hdev, EDL_WRITE_BD_ADDR_OPCODE, 6,
+				&bdaddr_swapped, HCI_EV_VENDOR,
+				HCI_INIT_TIMEOUT);
 	if (IS_ERR(skb)) {
 		err = PTR_ERR(skb);
 		bt_dev_err(hdev, "QCA Change address cmd failed (%d)", err);
+9 -10
drivers/bluetooth/hci_qca.c
···
  *
  *  Copyright (C) 2007 Texas Instruments, Inc.
  *  Copyright (c) 2010, 2012, 2018 The Linux Foundation. All rights reserved.
- *  Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved.
  *
  *  Acknowledgements:
  *  This file is based on hci_ll.c, which was...
···
 	struct qca_power *bt_power;
 	u32 init_speed;
 	u32 oper_speed;
+	bool bdaddr_property_broken;
 	const char *firmware_name;
 };
···
 	const char *firmware_name = qca_get_firmware_name(hu);
 	int ret;
 	struct qca_btsoc_version ver;
+	struct qca_serdev *qcadev;
 	const char *soc_name;
 
 	ret = qca_check_speeds(hu);
···
 	case QCA_WCN6750:
 	case QCA_WCN6855:
 	case QCA_WCN7850:
+		set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
 
-		/* Set BDA quirk bit for reading BDA value from fwnode property
-		 * only if that property exist in DT.
-		 */
-		if (fwnode_property_present(dev_fwnode(hdev->dev.parent), "local-bd-address")) {
-			set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
-			bt_dev_info(hdev, "setting quirk bit to read BDA from fwnode later");
-		} else {
-			bt_dev_dbg(hdev, "local-bd-address` is not present in the devicetree so not setting quirk bit for BDA");
-		}
+		qcadev = serdev_device_get_drvdata(hu->serdev);
+		if (qcadev->bdaddr_property_broken)
+			set_bit(HCI_QUIRK_BDADDR_PROPERTY_BROKEN, &hdev->quirks);
 
 		hci_set_aosp_capable(hdev);
···
 					       &qcadev->oper_speed);
 		if (!qcadev->oper_speed)
 			BT_DBG("UART will pick default operating speed");
+
+		qcadev->bdaddr_property_broken = device_property_read_bool(&serdev->dev,
+					"qcom,local-bd-address-broken");
 
 		if (data)
 			qcadev->btsoc_type = data->soc_type;
+5 -1
drivers/net/dsa/mv88e6xxx/chip.c
···
 		.family = MV88E6XXX_FAMILY_6250,
 		.name = "Marvell 88E6020",
 		.num_databases = 64,
-		.num_ports = 4,
+		/* Ports 2-4 are not routed to pins
+		 * => usable ports 0, 1, 5, 6
+		 */
+		.num_ports = 7,
 		.num_internal_phys = 2,
+		.invalid_port_mask = BIT(2) | BIT(3) | BIT(4),
 		.max_vid = 4095,
 		.port_base_addr = 0x8,
 		.phy_base_addr = 0x0,
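The 88E6020 fix above keeps all seven ports in the register map and instead marks the unrouted ports 2-4 via a bitmask. A minimal, self-contained sketch of that pattern (stand-in helper names, not the mv88e6xxx driver's actual API):

```c
#include <stdint.h>

#define BIT(n) (1u << (n))

/* Hypothetical mirror of the 88E6020 settings from the patch:
 * 7 ports exist in the register map, but ports 2-4 have no pins. */
static const uint32_t invalid_port_mask = BIT(2) | BIT(3) | BIT(4);
static const int num_ports = 7;

/* Returns 1 if the port index is usable, 0 otherwise. */
int port_is_usable(int port)
{
	return port >= 0 && port < num_ports &&
	       !(invalid_port_mask & BIT(port));
}
```

Ports 0, 1, 5 and 6 pass the check; 2-4 are rejected even though they are below num_ports, which is exactly why num_ports alone was the wrong knob to encode the gap.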
+1 -1
drivers/net/dsa/sja1105/sja1105_mdio.c
···
 	return tmp & 0xffff;
 }
 
-int sja1110_pcs_mdio_write_c45(struct mii_bus *bus, int phy, int reg, int mmd,
+int sja1110_pcs_mdio_write_c45(struct mii_bus *bus, int phy, int mmd, int reg,
 			       u16 val)
 {
 	struct sja1105_mdio_private *mdio_priv = bus->priv;
+12 -4
drivers/net/ethernet/broadcom/genet/bcmgenet.c
···
 }
 
 /* Returns a reusable dma control register value */
-static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv)
+static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv, bool flush_rx)
 {
 	unsigned int i;
 	u32 reg;
···
 	bcmgenet_umac_writel(priv, 1, UMAC_TX_FLUSH);
 	udelay(10);
 	bcmgenet_umac_writel(priv, 0, UMAC_TX_FLUSH);
+
+	if (flush_rx) {
+		reg = bcmgenet_rbuf_ctrl_get(priv);
+		bcmgenet_rbuf_ctrl_set(priv, reg | BIT(0));
+		udelay(10);
+		bcmgenet_rbuf_ctrl_set(priv, reg);
+		udelay(10);
+	}
 
 	return dma_ctrl;
 }
···
 
 	bcmgenet_set_hw_addr(priv, dev->dev_addr);
 
-	/* Disable RX/TX DMA and flush TX queues */
-	dma_ctrl = bcmgenet_dma_disable(priv);
+	/* Disable RX/TX DMA and flush TX and RX queues */
+	dma_ctrl = bcmgenet_dma_disable(priv, true);
 
 	/* Reinitialize TDMA and RDMA and SW housekeeping */
 	ret = bcmgenet_init_dma(priv);
···
 		bcmgenet_hfb_create_rxnfc_filter(priv, rule);
 
 	/* Disable RX/TX DMA and flush TX queues */
-	dma_ctrl = bcmgenet_dma_disable(priv);
+	dma_ctrl = bcmgenet_dma_disable(priv, false);
 
 	/* Reinitialize TDMA and RDMA and SW housekeeping */
 	ret = bcmgenet_init_dma(priv);
+9 -2
drivers/net/ethernet/freescale/fec_main.c
···
 	fep->link = 0;
 	fep->full_duplex = 0;
 
-	phy_dev->mac_managed_pm = true;
-
 	phy_attached_info(phy_dev);
 
 	return 0;
···
 	struct net_device *ndev = platform_get_drvdata(pdev);
 	struct fec_enet_private *fep = netdev_priv(ndev);
 	bool suppress_preamble = false;
+	struct phy_device *phydev;
 	struct device_node *node;
 	int err = -ENXIO;
 	u32 mii_speed, holdtime;
 	u32 bus_freq;
+	int addr;
 
 	/*
 	 * The i.MX28 dual fec interfaces are not equal.
···
 	if (err)
 		goto err_out_free_mdiobus;
 	of_node_put(node);
+
+	/* find all the PHY devices on the bus and set mac_managed_pm to true */
+	for (addr = 0; addr < PHY_MAX_ADDR; addr++) {
+		phydev = mdiobus_get_phy(fep->mii_bus, addr);
+		if (phydev)
+			phydev->mac_managed_pm = true;
+	}
 
 	mii_cnt++;
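The fec fix above moves the mac_managed_pm flag from connect time to probe time by walking every possible MDIO address and marking each PHY that is present. The scan pattern can be sketched with a stand-in bus (all names here hypothetical, not the kernel's phylib API):

```c
#include <stddef.h>
#include <stdbool.h>

#define PHY_MAX_ADDR 32

struct phy { bool mac_managed_pm; };

/* Stand-in bus: one slot per MDIO address, NULL where no PHY answered */
struct mii_bus { struct phy *phys[PHY_MAX_ADDR]; };

struct phy *bus_get_phy(struct mii_bus *bus, int addr)
{
	return (addr >= 0 && addr < PHY_MAX_ADDR) ? bus->phys[addr] : NULL;
}

/* Mirror of the fix's loop: visit every address, mark each present PHY,
 * return how many were marked. */
int mark_mac_managed_pm(struct mii_bus *bus)
{
	int marked = 0;

	for (int addr = 0; addr < PHY_MAX_ADDR; addr++) {
		struct phy *phy = bus_get_phy(bus, addr);

		if (phy) {
			phy->mac_managed_pm = true;
			marked++;
		}
	}
	return marked;
}
```

Doing this once per bus at probe time covers every PHY, whereas the old connect-time assignment only covered the PHY actually attached, and too late for the PM core's first suspend decision.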
+2
drivers/net/ethernet/intel/e1000e/hw.h
···
 	u32 id;
 	u32 reset_delay_us;	/* in usec */
 	u32 revision;
+	u32 retry_count;
 
 	enum e1000_media_type media_type;
···
 	bool polarity_correction;
 	bool speed_downgraded;
 	bool autoneg_wait_to_complete;
+	bool retry_enabled;
 };
 
 struct e1000_nvm_info {
+26 -12
drivers/net/ethernet/intel/e1000e/ich8lan.c
···
 	if (hw->mac.type >= e1000_pch_lpt) {
 		/* Only unforce SMBus if ME is not active */
 		if (!(er32(FWSM) & E1000_ICH_FWSM_FW_VALID)) {
+			/* Switching PHY interface always returns MDI error
+			 * so disable retry mechanism to avoid wasting time
+			 */
+			e1000e_disable_phy_retry(hw);
+
 			/* Unforce SMBus mode in PHY */
 			e1e_rphy_locked(hw, CV_SMB_CTRL, &phy_reg);
 			phy_reg &= ~CV_SMB_CTRL_FORCE_SMBUS;
 			e1e_wphy_locked(hw, CV_SMB_CTRL, phy_reg);
+
+			e1000e_enable_phy_retry(hw);
 
 			/* Unforce SMBus mode in MAC */
 			mac_reg = er32(CTRL_EXT);
···
 		goto out;
 	}
 
+	/* There is no guarantee that the PHY is accessible at this time
+	 * so disable retry mechanism to avoid wasting time
+	 */
+	e1000e_disable_phy_retry(hw);
+
 	/* The MAC-PHY interconnect may be in SMBus mode.  If the PHY is
 	 * inaccessible and resetting the PHY is not blocked, toggle the
 	 * LANPHYPC Value bit to force the interconnect to PCIe mode.
···
 		break;
 	}
 
+	e1000e_enable_phy_retry(hw);
+
 	hw->phy.ops.release(hw);
 	if (!ret_val) {
···
 	phy->autoneg_mask = AUTONEG_ADVERTISE_SPEED_DEFAULT;
 
 	phy->id = e1000_phy_unknown;
+
+	if (hw->mac.type == e1000_pch_mtp) {
+		phy->retry_count = 2;
+		e1000e_enable_phy_retry(hw);
+	}
 
 	ret_val = e1000_init_phy_workarounds_pchlan(hw);
 	if (ret_val)
···
 	if (ret_val)
 		goto out;
 
-	/* Force SMBus mode in PHY */
-	ret_val = e1000_read_phy_reg_hv_locked(hw, CV_SMB_CTRL, &phy_reg);
-	if (ret_val)
-		goto release;
-	phy_reg |= CV_SMB_CTRL_FORCE_SMBUS;
-	e1000_write_phy_reg_hv_locked(hw, CV_SMB_CTRL, phy_reg);
-
-	/* Force SMBus mode in MAC */
-	mac_reg = er32(CTRL_EXT);
-	mac_reg |= E1000_CTRL_EXT_FORCE_SMBUS;
-	ew32(CTRL_EXT, mac_reg);
-
 	/* Si workaround for ULP entry flow on i127/rev6 h/w.  Enable
 	 * LPLU and disable Gig speed when entering ULP
 	 */
···
 	/* Toggle LANPHYPC Value bit */
 	e1000_toggle_lanphypc_pch_lpt(hw);
 
+	/* Switching PHY interface always returns MDI error
+	 * so disable retry mechanism to avoid wasting time
+	 */
+	e1000e_disable_phy_retry(hw);
+
 	/* Unforce SMBus mode in PHY */
 	ret_val = e1000_read_phy_reg_hv_locked(hw, CV_SMB_CTRL, &phy_reg);
 	if (ret_val) {
···
 	}
 	phy_reg &= ~CV_SMB_CTRL_FORCE_SMBUS;
 	e1000_write_phy_reg_hv_locked(hw, CV_SMB_CTRL, phy_reg);
+
+	e1000e_enable_phy_retry(hw);
 
 	/* Unforce SMBus mode in MAC */
 	mac_reg = er32(CTRL_EXT);
+18
drivers/net/ethernet/intel/e1000e/netdev.c
···
 	struct e1000_hw *hw = &adapter->hw;
 	u32 ctrl, ctrl_ext, rctl, status, wufc;
 	int retval = 0;
+	u16 smb_ctrl;
 
 	/* Runtime suspend should only enable wakeup for link changes */
 	if (runtime)
···
 			if (retval)
 				return retval;
 		}
+
+		/* Force SMBUS to allow WOL */
+		/* Switching PHY interface always returns MDI error
+		 * so disable retry mechanism to avoid wasting time
+		 */
+		e1000e_disable_phy_retry(hw);
+
+		e1e_rphy(hw, CV_SMB_CTRL, &smb_ctrl);
+		smb_ctrl |= CV_SMB_CTRL_FORCE_SMBUS;
+		e1e_wphy(hw, CV_SMB_CTRL, smb_ctrl);
+
+		e1000e_enable_phy_retry(hw);
+
+		/* Force SMBus mode in MAC */
+		ctrl_ext = er32(CTRL_EXT);
+		ctrl_ext |= E1000_CTRL_EXT_FORCE_SMBUS;
+		ew32(CTRL_EXT, ctrl_ext);
 	}
 
 	/* Ensure that the appropriate bits are set in LPI_CTRL
+114 -70
drivers/net/ethernet/intel/e1000e/phy.c
···
 	return e1e_wphy(hw, M88E1000_PHY_GEN_CONTROL, 0);
 }
 
+void e1000e_disable_phy_retry(struct e1000_hw *hw)
+{
+	hw->phy.retry_enabled = false;
+}
+
+void e1000e_enable_phy_retry(struct e1000_hw *hw)
+{
+	hw->phy.retry_enabled = true;
+}
+
 /**
  *  e1000e_read_phy_reg_mdic - Read MDI control register
  *  @hw: pointer to the HW structure
···
 **/
 s32 e1000e_read_phy_reg_mdic(struct e1000_hw *hw, u32 offset, u16 *data)
 {
+	u32 i, mdic = 0, retry_counter, retry_max;
 	struct e1000_phy_info *phy = &hw->phy;
-	u32 i, mdic = 0;
+	bool success;
 
 	if (offset > MAX_PHY_REG_ADDRESS) {
 		e_dbg("PHY Address %d is out of range\n", offset);
 		return -E1000_ERR_PARAM;
 	}
 
+	retry_max = phy->retry_enabled ? phy->retry_count : 0;
+
 	/* Set up Op-code, Phy Address, and register offset in the MDI
 	 * Control register.  The MAC will take care of interfacing with the
 	 * PHY to retrieve the desired data.
 	 */
-	mdic = ((offset << E1000_MDIC_REG_SHIFT) |
-		(phy->addr << E1000_MDIC_PHY_SHIFT) |
-		(E1000_MDIC_OP_READ));
-
-	ew32(MDIC, mdic);
-
-	/* Poll the ready bit to see if the MDI read completed
-	 * Increasing the time out as testing showed failures with
-	 * the lower time out
-	 */
-	for (i = 0; i < (E1000_GEN_POLL_TIMEOUT * 3); i++) {
-		udelay(50);
-		mdic = er32(MDIC);
-		if (mdic & E1000_MDIC_READY)
-			break;
-	}
-	if (!(mdic & E1000_MDIC_READY)) {
-		e_dbg("MDI Read PHY Reg Address %d did not complete\n", offset);
-		return -E1000_ERR_PHY;
-	}
-	if (mdic & E1000_MDIC_ERROR) {
-		e_dbg("MDI Read PHY Reg Address %d Error\n", offset);
-		return -E1000_ERR_PHY;
-	}
-	if (FIELD_GET(E1000_MDIC_REG_MASK, mdic) != offset) {
-		e_dbg("MDI Read offset error - requested %d, returned %d\n",
-		      offset, FIELD_GET(E1000_MDIC_REG_MASK, mdic));
-		return -E1000_ERR_PHY;
-	}
-	*data = (u16)mdic;
-
-	/* Allow some time after each MDIC transaction to avoid
-	 * reading duplicate data in the next MDIC transaction.
-	 */
-	if (hw->mac.type == e1000_pch2lan)
-		udelay(100);
-	return 0;
+	for (retry_counter = 0; retry_counter <= retry_max; retry_counter++) {
+		success = true;
+
+		mdic = ((offset << E1000_MDIC_REG_SHIFT) |
+			(phy->addr << E1000_MDIC_PHY_SHIFT) |
+			(E1000_MDIC_OP_READ));
+
+		ew32(MDIC, mdic);
+
+		/* Poll the ready bit to see if the MDI read completed
+		 * Increasing the time out as testing showed failures with
+		 * the lower time out
+		 */
+		for (i = 0; i < (E1000_GEN_POLL_TIMEOUT * 3); i++) {
+			usleep_range(50, 60);
+			mdic = er32(MDIC);
+			if (mdic & E1000_MDIC_READY)
+				break;
+		}
+		if (!(mdic & E1000_MDIC_READY)) {
+			e_dbg("MDI Read PHY Reg Address %d did not complete\n",
+			      offset);
+			success = false;
+		}
+		if (mdic & E1000_MDIC_ERROR) {
+			e_dbg("MDI Read PHY Reg Address %d Error\n", offset);
+			success = false;
+		}
+		if (FIELD_GET(E1000_MDIC_REG_MASK, mdic) != offset) {
+			e_dbg("MDI Read offset error - requested %d, returned %d\n",
+			      offset, FIELD_GET(E1000_MDIC_REG_MASK, mdic));
+			success = false;
+		}
+
+		/* Allow some time after each MDIC transaction to avoid
+		 * reading duplicate data in the next MDIC transaction.
+		 */
+		if (hw->mac.type == e1000_pch2lan)
+			usleep_range(100, 150);
+
+		if (success) {
+			*data = (u16)mdic;
+			return 0;
+		}
+
+		if (retry_counter != retry_max) {
+			e_dbg("Perform retry on PHY transaction...\n");
+			mdelay(10);
+		}
+	}
+
+	return -E1000_ERR_PHY;
 }
 
 /**
···
 **/
 s32 e1000e_write_phy_reg_mdic(struct e1000_hw *hw, u32 offset, u16 data)
 {
+	u32 i, mdic = 0, retry_counter, retry_max;
 	struct e1000_phy_info *phy = &hw->phy;
-	u32 i, mdic = 0;
+	bool success;
 
 	if (offset > MAX_PHY_REG_ADDRESS) {
 		e_dbg("PHY Address %d is out of range\n", offset);
 		return -E1000_ERR_PARAM;
 	}
 
+	retry_max = phy->retry_enabled ? phy->retry_count : 0;
+
 	/* Set up Op-code, Phy Address, and register offset in the MDI
 	 * Control register.  The MAC will take care of interfacing with the
 	 * PHY to retrieve the desired data.
 	 */
-	mdic = (((u32)data) |
-		(offset << E1000_MDIC_REG_SHIFT) |
-		(phy->addr << E1000_MDIC_PHY_SHIFT) |
-		(E1000_MDIC_OP_WRITE));
-
-	ew32(MDIC, mdic);
-
-	/* Poll the ready bit to see if the MDI read completed
-	 * Increasing the time out as testing showed failures with
-	 * the lower time out
-	 */
-	for (i = 0; i < (E1000_GEN_POLL_TIMEOUT * 3); i++) {
-		udelay(50);
-		mdic = er32(MDIC);
-		if (mdic & E1000_MDIC_READY)
-			break;
-	}
-	if (!(mdic & E1000_MDIC_READY)) {
-		e_dbg("MDI Write PHY Reg Address %d did not complete\n", offset);
-		return -E1000_ERR_PHY;
-	}
-	if (mdic & E1000_MDIC_ERROR) {
-		e_dbg("MDI Write PHY Red Address %d Error\n", offset);
-		return -E1000_ERR_PHY;
-	}
-	if (FIELD_GET(E1000_MDIC_REG_MASK, mdic) != offset) {
-		e_dbg("MDI Write offset error - requested %d, returned %d\n",
-		      offset, FIELD_GET(E1000_MDIC_REG_MASK, mdic));
-		return -E1000_ERR_PHY;
+	for (retry_counter = 0; retry_counter <= retry_max; retry_counter++) {
+		success = true;
+
+		mdic = (((u32)data) |
+			(offset << E1000_MDIC_REG_SHIFT) |
+			(phy->addr << E1000_MDIC_PHY_SHIFT) |
+			(E1000_MDIC_OP_WRITE));
+
+		ew32(MDIC, mdic);
+
+		/* Poll the ready bit to see if the MDI read completed
+		 * Increasing the time out as testing showed failures with
+		 * the lower time out
+		 */
+		for (i = 0; i < (E1000_GEN_POLL_TIMEOUT * 3); i++) {
+			usleep_range(50, 60);
+			mdic = er32(MDIC);
+			if (mdic & E1000_MDIC_READY)
+				break;
+		}
+		if (!(mdic & E1000_MDIC_READY)) {
+			e_dbg("MDI Write PHY Reg Address %d did not complete\n",
+			      offset);
+			success = false;
+		}
+		if (mdic & E1000_MDIC_ERROR) {
+			e_dbg("MDI Write PHY Reg Address %d Error\n", offset);
+			success = false;
+		}
+		if (FIELD_GET(E1000_MDIC_REG_MASK, mdic) != offset) {
+			e_dbg("MDI Write offset error - requested %d, returned %d\n",
+			      offset, FIELD_GET(E1000_MDIC_REG_MASK, mdic));
+			success = false;
+		}
+
+		/* Allow some time after each MDIC transaction to avoid
+		 * reading duplicate data in the next MDIC transaction.
+		 */
+		if (hw->mac.type == e1000_pch2lan)
+			usleep_range(100, 150);
+
+		if (success)
+			return 0;
+
+		if (retry_counter != retry_max) {
+			e_dbg("Perform retry on PHY transaction...\n");
+			mdelay(10);
+		}
 	}
 
-	/* Allow some time after each MDIC transaction to avoid
-	 * reading duplicate data in the next MDIC transaction.
-	 */
-	if (hw->mac.type == e1000_pch2lan)
-		udelay(100);
-
-	return 0;
+	return -E1000_ERR_PHY;
 }
 
 /**
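The MDIC rework above wraps each transaction in a bounded retry loop where `retry_max` is zero unless retries were explicitly enabled, so the body always runs at least once. The control flow, reduced to a generic sketch with a simulated transaction (all names here hypothetical, not the e1000e API):

```c
#include <stdbool.h>

/* Simulated transaction: fails the first `fail_first` calls,
 * succeeds afterwards. Counter is global so tests can inspect it. */
int attempts_made;

bool try_transaction(int fail_first)
{
	return ++attempts_made > fail_first;
}

/* Bounded retry, mirroring retry_max = retry_enabled ? retry_count : 0:
 * one mandatory attempt plus up to retry_max retries. */
int do_with_retry(int retry_max, int fail_first)
{
	for (int retry = 0; retry <= retry_max; retry++) {
		if (try_transaction(fail_first))
			return 0;	/* success */
		/* on failure, loop around until retry_max is exhausted */
	}
	return -1;	/* all attempts failed */
}
```

With `retry_max = 0` the behavior is identical to the pre-patch code (single attempt), which is why the driver can blanket-disable retries around operations that are known to fail, such as switching the PHY interface to SMBus.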
+2
drivers/net/ethernet/intel/e1000e/phy.h
···
 s32 e1000e_write_phy_reg_bm2(struct e1000_hw *hw, u32 offset, u16 data);
 void e1000_power_up_phy_copper(struct e1000_hw *hw);
 void e1000_power_down_phy_copper(struct e1000_hw *hw);
+void e1000e_disable_phy_retry(struct e1000_hw *hw);
+void e1000e_enable_phy_retry(struct e1000_hw *hw);
 s32 e1000e_read_phy_reg_mdic(struct e1000_hw *hw, u32 offset, u16 *data);
 s32 e1000e_write_phy_reg_mdic(struct e1000_hw *hw, u32 offset, u16 data);
 s32 e1000_read_phy_reg_hv(struct e1000_hw *hw, u32 offset, u16 *data);
+1
drivers/net/ethernet/intel/i40e/i40e.h
···
 	struct rcu_head rcu;	/* to avoid race with update stats on free */
 	char name[I40E_INT_NAME_STR_LEN];
 	bool arm_wb_state;
+	bool in_busy_poll;
 	int irq_num;		/* IRQ assigned to this q_vector */
 } ____cacheline_internodealigned_in_smp;
 
+11 -2
drivers/net/ethernet/intel/i40e/i40e_main.c
···
 	int bkt;
 	int cnt = 0;
 
-	hash_for_each_safe(vsi->mac_filter_hash, bkt, h, f, hlist)
-		++cnt;
+	hash_for_each_safe(vsi->mac_filter_hash, bkt, h, f, hlist) {
+		if (f->state == I40E_FILTER_NEW ||
+		    f->state == I40E_FILTER_ACTIVE)
+			++cnt;
+	}
 
 	return cnt;
 }
···
 		wr32(hw, I40E_PFINT_ITRN(I40E_TX_ITR, vector - 1),
 		     q_vector->tx.target_itr >> 1);
 		q_vector->tx.current_itr = q_vector->tx.target_itr;
+
+		/* Set ITR for software interrupts triggered after exiting
+		 * busy-loop polling.
+		 */
+		wr32(hw, I40E_PFINT_ITRN(I40E_SW_ITR, vector - 1),
+		     I40E_ITR_20K);
 
 		wr32(hw, I40E_PFINT_RATEN(vector - 1),
 		     i40e_intrl_usec_to_reg(vsi->int_rate_limit));
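The counting fix above stops treating filters in removal or failed states as occupants of hardware slots: only NEW and ACTIVE entries are counted. As a sketch with stand-in types (not the driver's hash-table structures):

```c
enum filter_state {
	FILTER_NEW,	/* queued for addition */
	FILTER_ACTIVE,	/* programmed in hardware */
	FILTER_REMOVE,	/* queued for deletion */
	FILTER_FAILED	/* addition failed */
};

struct filter { enum filter_state state; };

/* Count only filters that do (or will) occupy a hardware slot,
 * mirroring the i40e_count_filters change. */
int count_active_filters(const struct filter *f, int n)
{
	int cnt = 0;

	for (int i = 0; i < n; i++)
		if (f[i].state == FILTER_NEW || f[i].state == FILTER_ACTIVE)
			cnt++;
	return cnt;
}
```

Counting every entry regardless of state, as the old code did, overstates occupancy and can make capacity checks reject filters that would actually fit.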
+3
drivers/net/ethernet/intel/i40e/i40e_register.h
···
 #define I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT 3
 #define I40E_PFINT_DYN_CTLN_ITR_INDX_MASK I40E_MASK(0x3, I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT)
 #define I40E_PFINT_DYN_CTLN_INTERVAL_SHIFT 5
+#define I40E_PFINT_DYN_CTLN_INTERVAL_MASK I40E_MASK(0xFFF, I40E_PFINT_DYN_CTLN_INTERVAL_SHIFT)
 #define I40E_PFINT_DYN_CTLN_SW_ITR_INDX_ENA_SHIFT 24
 #define I40E_PFINT_DYN_CTLN_SW_ITR_INDX_ENA_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTLN_SW_ITR_INDX_ENA_SHIFT)
+#define I40E_PFINT_DYN_CTLN_SW_ITR_INDX_SHIFT 25
+#define I40E_PFINT_DYN_CTLN_SW_ITR_INDX_MASK I40E_MASK(0x3, I40E_PFINT_DYN_CTLN_SW_ITR_INDX_SHIFT)
 #define I40E_PFINT_ICR0 0x00038780 /* Reset: CORER */
 #define I40E_PFINT_ICR0_INTEVENT_SHIFT 0
 #define I40E_PFINT_ICR0_INTEVENT_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_INTEVENT_SHIFT)
+61 -21
drivers/net/ethernet/intel/i40e/i40e_txrx.c
···
 	return failure ? budget : (int)total_rx_packets;
 }
 
-static inline u32 i40e_buildreg_itr(const int type, u16 itr)
+/**
+ * i40e_buildreg_itr - build a value for writing to I40E_PFINT_DYN_CTLN register
+ * @itr_idx: interrupt throttling index
+ * @interval: interrupt throttling interval value in usecs
+ * @force_swint: force software interrupt
+ *
+ * The function builds a value for I40E_PFINT_DYN_CTLN register that
+ * is used to update interrupt throttling interval for specified ITR index
+ * and optionally enforces a software interrupt. If the @itr_idx is equal
+ * to I40E_ITR_NONE then no interval change is applied and only @force_swint
+ * parameter is taken into account. If the interval change and enforced
+ * software interrupt are not requested then the built value just enables
+ * appropriate vector interrupt.
+ **/
+static u32 i40e_buildreg_itr(enum i40e_dyn_idx itr_idx, u16 interval,
+			     bool force_swint)
 {
 	u32 val;
···
 	 * an event in the PBA anyway so we need to rely on the automask
 	 * to hold pending events for us until the interrupt is re-enabled
 	 *
-	 * The itr value is reported in microseconds, and the register
-	 * value is recorded in 2 microsecond units. For this reason we
-	 * only need to shift by the interval shift - 1 instead of the
-	 * full value.
+	 * We have to shift the given value as it is reported in microseconds
+	 * and the register value is recorded in 2 microsecond units.
 	 */
-	itr &= I40E_ITR_MASK;
+	interval >>= 1;
 
+	/* 1. Enable vector interrupt
+	 * 2. Update the interval for the specified ITR index
+	 *    (I40E_ITR_NONE in the register is used to indicate that
+	 *    no interval update is requested)
+	 */
 	val = I40E_PFINT_DYN_CTLN_INTENA_MASK |
-	      (type << I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT) |
-	      (itr << (I40E_PFINT_DYN_CTLN_INTERVAL_SHIFT - 1));
+	      FIELD_PREP(I40E_PFINT_DYN_CTLN_ITR_INDX_MASK, itr_idx) |
+	      FIELD_PREP(I40E_PFINT_DYN_CTLN_INTERVAL_MASK, interval);
+
+	/* 3. Enforce software interrupt trigger if requested
+	 *    (These software interrupts rate is limited by ITR2 that is
+	 *    set to 20K interrupts per second)
+	 */
+	if (force_swint)
+		val |= I40E_PFINT_DYN_CTLN_SWINT_TRIG_MASK |
+		       I40E_PFINT_DYN_CTLN_SW_ITR_INDX_ENA_MASK |
+		       FIELD_PREP(I40E_PFINT_DYN_CTLN_SW_ITR_INDX_MASK,
+				  I40E_SW_ITR);
 
 	return val;
 }
-
-/* a small macro to shorten up some long lines */
-#define INTREG I40E_PFINT_DYN_CTLN
 
 /* The act of updating the ITR will cause it to immediately trigger. In order
  * to prevent this from throwing off adaptive update statistics we defer the
···
 static inline void i40e_update_enable_itr(struct i40e_vsi *vsi,
 					  struct i40e_q_vector *q_vector)
 {
+	enum i40e_dyn_idx itr_idx = I40E_ITR_NONE;
 	struct i40e_hw *hw = &vsi->back->hw;
-	u32 intval;
+	u16 interval = 0;
+	u32 itr_val;
 
 	/* If we don't have MSIX, then we only need to re-enable icr0 */
 	if (!test_bit(I40E_FLAG_MSIX_ENA, vsi->back->flags)) {
···
 	 */
 	if (q_vector->rx.target_itr < q_vector->rx.current_itr) {
 		/* Rx ITR needs to be reduced, this is highest priority */
-		intval = i40e_buildreg_itr(I40E_RX_ITR,
-					   q_vector->rx.target_itr);
+		itr_idx = I40E_RX_ITR;
+		interval = q_vector->rx.target_itr;
 		q_vector->rx.current_itr = q_vector->rx.target_itr;
 		q_vector->itr_countdown = ITR_COUNTDOWN_START;
 	} else if ((q_vector->tx.target_itr < q_vector->tx.current_itr) ||
···
 		/* Tx ITR needs to be reduced, this is second priority
 		 * Tx ITR needs to be increased more than Rx, fourth priority
 		 */
-		intval = i40e_buildreg_itr(I40E_TX_ITR,
-					   q_vector->tx.target_itr);
+		itr_idx = I40E_TX_ITR;
+		interval = q_vector->tx.target_itr;
 		q_vector->tx.current_itr = q_vector->tx.target_itr;
 		q_vector->itr_countdown = ITR_COUNTDOWN_START;
 	} else if (q_vector->rx.current_itr != q_vector->rx.target_itr) {
 		/* Rx ITR needs to be increased, third priority */
-		intval = i40e_buildreg_itr(I40E_RX_ITR,
-					   q_vector->rx.target_itr);
+		itr_idx = I40E_RX_ITR;
+		interval = q_vector->rx.target_itr;
 		q_vector->rx.current_itr = q_vector->rx.target_itr;
 		q_vector->itr_countdown = ITR_COUNTDOWN_START;
 	} else {
 		/* No ITR update, lowest priority */
-		intval = i40e_buildreg_itr(I40E_ITR_NONE, 0);
 		if (q_vector->itr_countdown)
 			q_vector->itr_countdown--;
 	}
 
-	if (!test_bit(__I40E_VSI_DOWN, vsi->state))
-		wr32(hw, INTREG(q_vector->reg_idx), intval);
+	/* Do not update interrupt control register if VSI is down */
+	if (test_bit(__I40E_VSI_DOWN, vsi->state))
+		return;
+
+	/* Update ITR interval if necessary and enforce software interrupt
+	 * if we are exiting busy poll.
+	 */
+	if (q_vector->in_busy_poll) {
+		itr_val = i40e_buildreg_itr(itr_idx, interval, true);
+		q_vector->in_busy_poll = false;
+	} else {
+		itr_val = i40e_buildreg_itr(itr_idx, interval, false);
+	}
+	wr32(hw, I40E_PFINT_DYN_CTLN(q_vector->reg_idx), itr_val);
 }
···
 	 */
 	if (likely(napi_complete_done(napi, work_done)))
 		i40e_update_enable_itr(vsi, q_vector);
+	else
+		q_vector->in_busy_poll = true;
 
 	return min(work_done, budget - 1);
 }
+1
drivers/net/ethernet/intel/i40e/i40e_txrx.h
···
 /* these are indexes into ITRN registers */
 #define I40E_RX_ITR	I40E_IDX_ITR0
 #define I40E_TX_ITR	I40E_IDX_ITR1
+#define I40E_SW_ITR	I40E_IDX_ITR2

 /* Supported RSS offloads */
 #define I40E_DEFAULT_RSS_HENA ( \
+22 -23
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
···
 {
	struct i40e_hw *hw = &pf->hw;
	struct i40e_vf *vf;
-	int i, v;
	u32 reg;
+	int i;

	/* If we don't have any VFs, then there is nothing to reset */
	if (!pf->num_alloc_vfs)
···
		return false;

	/* Begin reset on all VFs at once */
-	for (v = 0; v < pf->num_alloc_vfs; v++) {
-		vf = &pf->vf[v];
+	for (vf = &pf->vf[0]; vf < &pf->vf[pf->num_alloc_vfs]; ++vf) {
		/* If VF is being reset no need to trigger reset again */
		if (!test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
-			i40e_trigger_vf_reset(&pf->vf[v], flr);
+			i40e_trigger_vf_reset(vf, flr);
	}

	/* HW requires some time to make sure it can flush the FIFO for a VF
···
	 * the VFs using a simple iterator that increments once that VF has
	 * finished resetting.
	 */
-	for (i = 0, v = 0; i < 10 && v < pf->num_alloc_vfs; i++) {
+	for (i = 0, vf = &pf->vf[0]; i < 10 && vf < &pf->vf[pf->num_alloc_vfs]; ++i) {
		usleep_range(10000, 20000);

		/* Check each VF in sequence, beginning with the VF to fail
		 * the previous check.
		 */
-		while (v < pf->num_alloc_vfs) {
-			vf = &pf->vf[v];
+		while (vf < &pf->vf[pf->num_alloc_vfs]) {
			if (!test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states)) {
				reg = rd32(hw, I40E_VPGEN_VFRSTAT(vf->vf_id));
				if (!(reg & I40E_VPGEN_VFRSTAT_VFRD_MASK))
···
			/* If the current VF has finished resetting, move on
			 * to the next VF in sequence.
			 */
-			v++;
+			++vf;
		}
	}
···
	/* Display a warning if at least one VF didn't manage to reset in
	 * time, but continue on with the operation.
	 */
-	if (v < pf->num_alloc_vfs)
+	if (vf < &pf->vf[pf->num_alloc_vfs])
		dev_err(&pf->pdev->dev, "VF reset check timeout on VF %d\n",
-			pf->vf[v].vf_id);
+			vf->vf_id);
	usleep_range(10000, 20000);

	/* Begin disabling all the rings associated with VFs, but do not wait
	 * between each VF.
	 */
-	for (v = 0; v < pf->num_alloc_vfs; v++) {
+	for (vf = &pf->vf[0]; vf < &pf->vf[pf->num_alloc_vfs]; ++vf) {
		/* On initial reset, we don't have any queues to disable */
-		if (pf->vf[v].lan_vsi_idx == 0)
+		if (vf->lan_vsi_idx == 0)
			continue;

		/* If VF is reset in another thread just continue */
		if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
			continue;

-		i40e_vsi_stop_rings_no_wait(pf->vsi[pf->vf[v].lan_vsi_idx]);
+		i40e_vsi_stop_rings_no_wait(pf->vsi[vf->lan_vsi_idx]);
	}

	/* Now that we've notified HW to disable all of the VF rings, wait
	 * until they finish.
	 */
-	for (v = 0; v < pf->num_alloc_vfs; v++) {
+	for (vf = &pf->vf[0]; vf < &pf->vf[pf->num_alloc_vfs]; ++vf) {
		/* On initial reset, we don't have any queues to disable */
-		if (pf->vf[v].lan_vsi_idx == 0)
+		if (vf->lan_vsi_idx == 0)
			continue;

		/* If VF is reset in another thread just continue */
		if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
			continue;

-		i40e_vsi_wait_queues_disabled(pf->vsi[pf->vf[v].lan_vsi_idx]);
+		i40e_vsi_wait_queues_disabled(pf->vsi[vf->lan_vsi_idx]);
	}

	/* Hw may need up to 50ms to finish disabling the RX queues. We
···
	mdelay(50);

	/* Finish the reset on each VF */
-	for (v = 0; v < pf->num_alloc_vfs; v++) {
+	for (vf = &pf->vf[0]; vf < &pf->vf[pf->num_alloc_vfs]; ++vf) {
		/* If VF is reset in another thread just continue */
		if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
			continue;

-		i40e_cleanup_reset_vf(&pf->vf[v]);
+		i40e_cleanup_reset_vf(vf);
	}

	i40e_flush(hw);
···
	/* Allow to delete VF primary MAC only if it was not set
	 * administratively by PF or if VF is trusted.
	 */
-	if (ether_addr_equal(addr, vf->default_lan_addr.addr) &&
-	    i40e_can_vf_change_mac(vf))
-		was_unimac_deleted = true;
-	else
-		continue;
+	if (ether_addr_equal(addr, vf->default_lan_addr.addr)) {
+		if (i40e_can_vf_change_mac(vf))
+			was_unimac_deleted = true;
+		else
+			continue;
+	}

	if (i40e_del_mac_filter(vsi, al->list[i].addr)) {
		ret = -EINVAL;
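The i40e change above replaces the integer index `v` with a pointer cursor that walks `pf->vf[]` directly, so the timeout warning can report `vf->vf_id` without re-indexing. A minimal userspace sketch of the same iteration shape (the `struct vf` here is a hypothetical stand-in, not the driver's type):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for struct i40e_vf, with only the field we need. */
struct vf {
	int vf_id;
};

/* Walk the array with a pointer cursor, mirroring the
 * for (vf = &pf->vf[0]; vf < &pf->vf[n]; ++vf) shape in the diff. */
static int sum_vf_ids(struct vf *vfs, size_t n)
{
	int sum = 0;
	struct vf *vf;

	for (vf = &vfs[0]; vf < &vfs[n]; ++vf)
		sum += vf->vf_id;	/* no vfs[v] re-indexing needed */
	return sum;
}
```

The end-pointer `&vfs[n]` is one past the last element, which C explicitly permits for comparison, so an empty array (n == 0) falls through the loop safely.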
+5 -5
drivers/net/ethernet/intel/ice/ice_common.c
···
 */
 int ice_init_hw(struct ice_hw *hw)
 {
-	struct ice_aqc_get_phy_caps_data *pcaps __free(kfree);
-	void *mac_buf __free(kfree);
+	struct ice_aqc_get_phy_caps_data *pcaps __free(kfree) = NULL;
+	void *mac_buf __free(kfree) = NULL;
	u16 mac_buf_len;
	int status;
···
		return status;

	if (li->link_info & ICE_AQ_MEDIA_AVAILABLE) {
-		struct ice_aqc_get_phy_caps_data *pcaps __free(kfree);
+		struct ice_aqc_get_phy_caps_data *pcaps __free(kfree) = NULL;

		pcaps = kzalloc(sizeof(*pcaps), GFP_KERNEL);
		if (!pcaps)
···
 int
 ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
 {
-	struct ice_aqc_get_phy_caps_data *pcaps __free(kfree);
+	struct ice_aqc_get_phy_caps_data *pcaps __free(kfree) = NULL;
	struct ice_aqc_set_phy_cfg_data cfg = { 0 };
	struct ice_hw *hw;
	int status;
···
 ice_cfg_phy_fec(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg,
		 enum ice_fec_mode fec)
 {
-	struct ice_aqc_get_phy_caps_data *pcaps __free(kfree);
+	struct ice_aqc_get_phy_caps_data *pcaps __free(kfree) = NULL;
	struct ice_hw *hw;
	int status;
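The ice fixes above all add `= NULL` to `__free(kfree)` variables: the kernel's scoped-cleanup helper frees the pointer when it goes out of scope, so an early `return` before the first assignment would otherwise pass an uninitialized (garbage) pointer to `kfree()`. A userspace model of the same hazard, using the compiler `cleanup` attribute that `__free()` is built on (`auto_free` and the counter are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdlib.h>

static int frees;	/* counts real (non-NULL) frees, for the test */

static void free_and_count(void *p)
{
	void **slot = p;

	if (*slot)
		frees++;
	free(*slot);	/* free(NULL) is a defined no-op */
}

/* Userspace stand-in for the kernel's __free(kfree). */
#define auto_free __attribute__((cleanup(free_and_count)))

static int early_return_path(int fail_early)
{
	auto_free void *buf = NULL;	/* the fix: NULL before any early return */

	if (fail_early)
		return -1;	/* cleanup sees NULL: safe no-op */

	buf = malloc(16);
	return buf ? 0 : -1;	/* cleanup frees the allocation */
}
```

Without the `= NULL`, the `fail_early` path would hand whatever happened to be on the stack to the cleanup handler, which is exactly the bug class these hunks close.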
+1 -1
drivers/net/ethernet/intel/ice/ice_ethtool.c
···
	struct ice_netdev_priv *np = netdev_priv(netdev);
	struct ice_vsi *orig_vsi = np->vsi, *test_vsi;
	struct ice_pf *pf = orig_vsi->back;
+	u8 *tx_frame __free(kfree) = NULL;
	u8 broadcast[ETH_ALEN], ret = 0;
	int num_frames, valid_frames;
	struct ice_tx_ring *tx_ring;
	struct ice_rx_ring *rx_ring;
-	u8 *tx_frame __free(kfree);
	int i;

	netdev_info(netdev, "loopback test\n");
+8 -10
drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c
···
	struct ice_vsi_vlan_ops *vlan_ops;
	struct ice_pf *pf = vsi->back;

+	/* setup inner VLAN ops */
+	vlan_ops = &vsi->inner_vlan_ops;
+
	if (ice_is_dvm_ena(&pf->hw)) {
-		vlan_ops = &vsi->outer_vlan_ops;
-
-		/* setup outer VLAN ops */
-		vlan_ops->set_port_vlan = ice_vsi_set_outer_port_vlan;
-		vlan_ops->clear_port_vlan = ice_vsi_clear_outer_port_vlan;
-
-		/* setup inner VLAN ops */
-		vlan_ops = &vsi->inner_vlan_ops;
		vlan_ops->add_vlan = noop_vlan_arg;
		vlan_ops->del_vlan = noop_vlan_arg;
		vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping;
		vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping;
		vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion;
		vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion;
-	} else {
-		vlan_ops = &vsi->inner_vlan_ops;

+		/* setup outer VLAN ops */
+		vlan_ops = &vsi->outer_vlan_ops;
+		vlan_ops->set_port_vlan = ice_vsi_set_outer_port_vlan;
+		vlan_ops->clear_port_vlan = ice_vsi_clear_outer_port_vlan;
+	} else {
		vlan_ops->set_port_vlan = ice_vsi_set_inner_port_vlan;
		vlan_ops->clear_port_vlan = ice_vsi_clear_inner_port_vlan;
	}
+2 -2
drivers/net/ethernet/intel/idpf/idpf_txrx.c
···
	rx_ptype = le16_get_bits(rx_desc->ptype_err_fflags0,
				 VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M);

+	skb->protocol = eth_type_trans(skb, rxq->vport->netdev);
+
	decoded = rxq->vport->rx_ptype_lkup[rx_ptype];
	/* If we don't know the ptype we can't do anything else with it. Just
	 * pass it up the stack as-is.
···
	/* process RSS/hash */
	idpf_rx_hash(rxq, skb, rx_desc, &decoded);
-
-	skb->protocol = eth_type_trans(skb, rxq->vport->netdev);

	if (le16_get_bits(rx_desc->hdrlen_flags,
			  VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M))
+2
drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
···
			continue;
		lmac_bmap = cgx_get_lmac_bmap(rvu_cgx_pdata(cgx, rvu));
		for_each_set_bit(iter, &lmac_bmap, rvu->hw->lmac_per_cgx) {
+			if (iter >= MAX_LMAC_COUNT)
+				continue;
			lmac = cgx_get_lmacid(rvu_cgx_pdata(cgx, rvu),
					      iter);
			rvu->pf2cgxlmac_map[pf] = cgxlmac_id_to_bmap(cgx, lmac);
+1 -1
drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
···
	struct npc_coalesced_kpu_prfl *img_data = NULL;
	int i = 0, rc = -EINVAL;
	void __iomem *kpu_prfl_addr;
-	u16 offset;
+	u32 offset;

	img_data = (struct npc_coalesced_kpu_prfl __force *)rvu->kpu_prfl_addr;
	if (le64_to_cpu(img_data->signature) == KPU_SIGN &&
+1 -1
drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
···
	 * mcam entries are enabled to receive the packets. Hence disable the
	 * packet I/O.
	 */
-	if (err == EIO)
+	if (err == -EIO)
		goto err_disable_rxtx;
	else if (err)
		goto err_tx_stop_queues;
+8 -2
drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
···
 #include <linux/module.h>
 #include <linux/phy.h>
 #include <linux/platform_device.h>
+#include <linux/rtnetlink.h>
 #include <linux/skbuff.h>

 #include "mlxbf_gige.h"
···
 {
	struct mlxbf_gige *priv = platform_get_drvdata(pdev);

-	writeq(0, priv->base + MLXBF_GIGE_INT_EN);
-	mlxbf_gige_clean_port(priv);
+	rtnl_lock();
+	netif_device_detach(priv->netdev);
+
+	if (netif_running(priv->netdev))
+		dev_close(priv->netdev);
+
+	rtnl_unlock();
 }

 static const struct acpi_device_id __maybe_unused mlxbf_gige_acpi_match[] = {
+1 -1
drivers/net/ethernet/microsoft/mana/mana_en.c
···
	*alloc_size = mtu + MANA_RXBUF_PAD + *headroom;

-	*datasize = ALIGN(mtu + ETH_HLEN, MANA_RX_DATA_ALIGN);
+	*datasize = mtu + ETH_HLEN;
 }

 static int mana_pre_alloc_rxbufs(struct mana_port_context *mpc, int new_mtu)
+36 -4
drivers/net/ethernet/realtek/r8169_main.c
···
	RTL_W8(tp, IBCR0, RTL_R8(tp, IBCR0) & ~0x01);
 }

+static void rtl_dash_loop_wait(struct rtl8169_private *tp,
+			       const struct rtl_cond *c,
+			       unsigned long usecs, int n, bool high)
+{
+	if (!tp->dash_enabled)
+		return;
+	rtl_loop_wait(tp, c, usecs, n, high);
+}
+
+static void rtl_dash_loop_wait_high(struct rtl8169_private *tp,
+				    const struct rtl_cond *c,
+				    unsigned long d, int n)
+{
+	rtl_dash_loop_wait(tp, c, d, n, true);
+}
+
+static void rtl_dash_loop_wait_low(struct rtl8169_private *tp,
+				   const struct rtl_cond *c,
+				   unsigned long d, int n)
+{
+	rtl_dash_loop_wait(tp, c, d, n, false);
+}
+
 static void rtl8168dp_driver_start(struct rtl8169_private *tp)
 {
	r8168dp_oob_notify(tp, OOB_CMD_DRIVER_START);
-	rtl_loop_wait_high(tp, &rtl_dp_ocp_read_cond, 10000, 10);
+	rtl_dash_loop_wait_high(tp, &rtl_dp_ocp_read_cond, 10000, 10);
 }

 static void rtl8168ep_driver_start(struct rtl8169_private *tp)
 {
	r8168ep_ocp_write(tp, 0x01, 0x180, OOB_CMD_DRIVER_START);
	r8168ep_ocp_write(tp, 0x01, 0x30, r8168ep_ocp_read(tp, 0x30) | 0x01);
-	rtl_loop_wait_high(tp, &rtl_ep_ocp_read_cond, 10000, 30);
+	rtl_dash_loop_wait_high(tp, &rtl_ep_ocp_read_cond, 10000, 30);
 }

 static void rtl8168_driver_start(struct rtl8169_private *tp)
···
 static void rtl8168dp_driver_stop(struct rtl8169_private *tp)
 {
	r8168dp_oob_notify(tp, OOB_CMD_DRIVER_STOP);
-	rtl_loop_wait_low(tp, &rtl_dp_ocp_read_cond, 10000, 10);
+	rtl_dash_loop_wait_low(tp, &rtl_dp_ocp_read_cond, 10000, 10);
 }

 static void rtl8168ep_driver_stop(struct rtl8169_private *tp)
 {
	rtl8168ep_stop_cmac(tp);
	r8168ep_ocp_write(tp, 0x01, 0x180, OOB_CMD_DRIVER_STOP);
	r8168ep_ocp_write(tp, 0x01, 0x30, r8168ep_ocp_read(tp, 0x30) | 0x01);
-	rtl_loop_wait_low(tp, &rtl_ep_ocp_read_cond, 10000, 10);
+	rtl_dash_loop_wait_low(tp, &rtl_ep_ocp_read_cond, 10000, 10);
 }

 static void rtl8168_driver_stop(struct rtl8169_private *tp)
···
	struct pci_dev *pdev = tp->pci_dev;
	struct mii_bus *new_bus;
	int ret;
+
+	/* On some boards with this chip version the BIOS is buggy and misses
+	 * to reset the PHY page selector. This results in the PHY ID read
+	 * accessing registers on a different page, returning a more or
+	 * less random value. Fix this by resetting the page selector first.
+	 */
+	if (tp->mac_version == RTL_GIGA_MAC_VER_25 ||
+	    tp->mac_version == RTL_GIGA_MAC_VER_26)
+		r8169_mdio_write(tp, 0x1f, 0);

	new_bus = devm_mdiobus_alloc(&pdev->dev);
	if (!new_bus)
+14 -10
drivers/net/ethernet/renesas/ravb_main.c
···
	int q = napi - priv->napi;
	int mask = BIT(q);
	int quota = budget;
+	bool unmask;

	/* Processing RX Descriptor Ring */
	/* Clear RX interrupt */
	ravb_write(ndev, ~(mask | RIS0_RESERVED), RIS0);
-	if (ravb_rx(ndev, &quota, q))
-		goto out;
+	unmask = !ravb_rx(ndev, &quota, q);

	/* Processing TX Descriptor Ring */
	spin_lock_irqsave(&priv->lock, flags);
···
	ravb_tx_free(ndev, q, true);
	netif_wake_subqueue(ndev, q);
	spin_unlock_irqrestore(&priv->lock, flags);
+
+	/* Receive error message handling */
+	priv->rx_over_errors = priv->stats[RAVB_BE].rx_over_errors;
+	if (info->nc_queues)
+		priv->rx_over_errors += priv->stats[RAVB_NC].rx_over_errors;
+	if (priv->rx_over_errors != ndev->stats.rx_over_errors)
+		ndev->stats.rx_over_errors = priv->rx_over_errors;
+	if (priv->rx_fifo_errors != ndev->stats.rx_fifo_errors)
+		ndev->stats.rx_fifo_errors = priv->rx_fifo_errors;
+
+	if (!unmask)
+		goto out;

	napi_complete(napi);
···
	}
	spin_unlock_irqrestore(&priv->lock, flags);

-	/* Receive error message handling */
-	priv->rx_over_errors = priv->stats[RAVB_BE].rx_over_errors;
-	if (info->nc_queues)
-		priv->rx_over_errors += priv->stats[RAVB_NC].rx_over_errors;
-	if (priv->rx_over_errors != ndev->stats.rx_over_errors)
-		ndev->stats.rx_over_errors = priv->rx_over_errors;
-	if (priv->rx_fifo_errors != ndev->stats.rx_fifo_errors)
-		ndev->stats.rx_fifo_errors = priv->rx_fifo_errors;

 out:
	return budget - quota;
 }
+31 -9
drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
···
			       u32 prio, u32 queue)
 {
	void __iomem *ioaddr = hw->pcsr;
-	u32 base_register;
-	u32 value;
+	u32 clear_mask = 0;
+	u32 ctrl2, ctrl3;
+	int i;

-	base_register = (queue < 4) ? GMAC_RXQ_CTRL2 : GMAC_RXQ_CTRL3;
-	if (queue >= 4)
+	ctrl2 = readl(ioaddr + GMAC_RXQ_CTRL2);
+	ctrl3 = readl(ioaddr + GMAC_RXQ_CTRL3);
+
+	/* The software must ensure that the same priority
+	 * is not mapped to multiple Rx queues
+	 */
+	for (i = 0; i < 4; i++)
+		clear_mask |= ((prio << GMAC_RXQCTRL_PSRQX_SHIFT(i)) &
+						GMAC_RXQCTRL_PSRQX_MASK(i));
+
+	ctrl2 &= ~clear_mask;
+	ctrl3 &= ~clear_mask;
+
+	/* First assign new priorities to a queue, then
+	 * clear them from others queues
+	 */
+	if (queue < 4) {
+		ctrl2 |= (prio << GMAC_RXQCTRL_PSRQX_SHIFT(queue)) &
+						GMAC_RXQCTRL_PSRQX_MASK(queue);
+
+		writel(ctrl2, ioaddr + GMAC_RXQ_CTRL2);
+		writel(ctrl3, ioaddr + GMAC_RXQ_CTRL3);
+	} else {
		queue -= 4;

-	value = readl(ioaddr + base_register);
-
-	value &= ~GMAC_RXQCTRL_PSRQX_MASK(queue);
-	value |= (prio << GMAC_RXQCTRL_PSRQX_SHIFT(queue)) &
+		ctrl3 |= (prio << GMAC_RXQCTRL_PSRQX_SHIFT(queue)) &
						GMAC_RXQCTRL_PSRQX_MASK(queue);
-	writel(value, ioaddr + base_register);
+
+		writel(ctrl3, ioaddr + GMAC_RXQ_CTRL3);
+		writel(ctrl2, ioaddr + GMAC_RXQ_CTRL2);
+	}
 }

 static void dwmac4_tx_queue_priority(struct mac_device_info *hw,
+31 -7
drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
···
				  u32 queue)
 {
	void __iomem *ioaddr = hw->pcsr;
-	u32 value, reg;
+	u32 clear_mask = 0;
+	u32 ctrl2, ctrl3;
+	int i;

-	reg = (queue < 4) ? XGMAC_RXQ_CTRL2 : XGMAC_RXQ_CTRL3;
-	if (queue >= 4)
+	ctrl2 = readl(ioaddr + XGMAC_RXQ_CTRL2);
+	ctrl3 = readl(ioaddr + XGMAC_RXQ_CTRL3);
+
+	/* The software must ensure that the same priority
+	 * is not mapped to multiple Rx queues
+	 */
+	for (i = 0; i < 4; i++)
+		clear_mask |= ((prio << XGMAC_PSRQ_SHIFT(i)) &
+						XGMAC_PSRQ(i));
+
+	ctrl2 &= ~clear_mask;
+	ctrl3 &= ~clear_mask;
+
+	/* First assign new priorities to a queue, then
+	 * clear them from others queues
+	 */
+	if (queue < 4) {
+		ctrl2 |= (prio << XGMAC_PSRQ_SHIFT(queue)) &
+						XGMAC_PSRQ(queue);
+
+		writel(ctrl2, ioaddr + XGMAC_RXQ_CTRL2);
+		writel(ctrl3, ioaddr + XGMAC_RXQ_CTRL3);
+	} else {
		queue -= 4;

-	value = readl(ioaddr + reg);
-	value &= ~XGMAC_PSRQ(queue);
-	value |= (prio << XGMAC_PSRQ_SHIFT(queue)) & XGMAC_PSRQ(queue);
+		ctrl3 |= (prio << XGMAC_PSRQ_SHIFT(queue)) &
+						XGMAC_PSRQ(queue);

-	writel(value, ioaddr + reg);
+		writel(ctrl3, ioaddr + XGMAC_RXQ_CTRL3);
+		writel(ctrl2, ioaddr + XGMAC_RXQ_CTRL2);
+	}
 }

 static void dwxgmac2_tx_queue_prio(struct mac_device_info *hw, u32 prio,
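Both stmmac hunks fix the same bug: the old code set a priority on the target queue but never cleared it from the other queues' PSRQ fields, so one priority could end up mapped to two Rx queues. A pure-logic sketch of the corrected masking (register layout modeled as four 8-bit fields per 32-bit word, following the shift/mask macros in the diff; the helper itself is illustrative, not driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Four 8-bit priority-bitmap fields per 32-bit CTRL register. */
#define PSRQ_SHIFT(q)	((q) * 8)
#define PSRQ_MASK(q)	(0xffu << PSRQ_SHIFT(q))

/* Clear `prio` bits from every field of both registers, then set them only
 * on the target queue, so a priority never maps to two Rx queues at once. */
static void rx_queue_prio(uint32_t *ctrl2, uint32_t *ctrl3,
			  uint32_t prio, unsigned int queue)
{
	uint32_t clear = 0;
	int i;

	for (i = 0; i < 4; i++)
		clear |= (prio << PSRQ_SHIFT(i)) & PSRQ_MASK(i);

	*ctrl2 &= ~clear;	/* drop old mapping wherever it was */
	*ctrl3 &= ~clear;

	if (queue < 4)
		*ctrl2 |= (prio << PSRQ_SHIFT(queue)) & PSRQ_MASK(queue);
	else
		*ctrl3 |= (prio << PSRQ_SHIFT(queue - 4)) & PSRQ_MASK(queue - 4);
}
```

Remapping the same priority to a different queue now atomically removes the stale entry, which is what the driver fix guarantees (plus a write ordering that installs the new mapping before removing the old one).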
+5 -3
drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
···
 #include "txgbe_phy.h"
 #include "txgbe_hw.h"

+#define TXGBE_I2C_CLK_DEV_NAME "i2c_dw"
+
 static int txgbe_swnodes_register(struct txgbe *txgbe)
 {
	struct txgbe_nodes *nodes = &txgbe->nodes;
···
	char clk_name[32];
	struct clk *clk;

-	snprintf(clk_name, sizeof(clk_name), "i2c_dw.%d",
-		 pci_dev_id(pdev));
+	snprintf(clk_name, sizeof(clk_name), "%s.%d",
+		 TXGBE_I2C_CLK_DEV_NAME, pci_dev_id(pdev));

	clk = clk_register_fixed_rate(NULL, clk_name, NULL, 0, 156250000);
	if (IS_ERR(clk))
···
	info.parent = &pdev->dev;
	info.fwnode = software_node_fwnode(txgbe->nodes.group[SWNODE_I2C]);
-	info.name = "i2c_designware";
+	info.name = TXGBE_I2C_CLK_DEV_NAME;
	info.id = pci_dev_id(pdev);

	info.res = &DEFINE_RES_IRQ(pdev->irq);
+24 -7
drivers/net/phy/micrel.c
···
	struct lan8814_ptp_rx_ts *rx_ts, *tmp;
	int txcfg = 0, rxcfg = 0;
	int pkt_ts_enable;
+	int tx_mod;

	ptp_priv->hwts_tx_type = config->tx_type;
	ptp_priv->rx_filter = config->rx_filter;
···
	lanphy_write_page_reg(ptp_priv->phydev, 5, PTP_RX_TIMESTAMP_EN, pkt_ts_enable);
	lanphy_write_page_reg(ptp_priv->phydev, 5, PTP_TX_TIMESTAMP_EN, pkt_ts_enable);

-	if (ptp_priv->hwts_tx_type == HWTSTAMP_TX_ONESTEP_SYNC)
+	tx_mod = lanphy_read_page_reg(ptp_priv->phydev, 5, PTP_TX_MOD);
+	if (ptp_priv->hwts_tx_type == HWTSTAMP_TX_ONESTEP_SYNC) {
		lanphy_write_page_reg(ptp_priv->phydev, 5, PTP_TX_MOD,
-				      PTP_TX_MOD_TX_PTP_SYNC_TS_INSERT_);
+				      tx_mod | PTP_TX_MOD_TX_PTP_SYNC_TS_INSERT_);
+	} else if (ptp_priv->hwts_tx_type == HWTSTAMP_TX_ON) {
+		lanphy_write_page_reg(ptp_priv->phydev, 5, PTP_TX_MOD,
+				      tx_mod & ~PTP_TX_MOD_TX_PTP_SYNC_TS_INSERT_);
+	}

	if (config->rx_filter != HWTSTAMP_FILTER_NONE)
		lan8814_config_ts_intr(ptp_priv->phydev, true);
···
	}
 }

-static void lan8814_get_sig_rx(struct sk_buff *skb, u16 *sig)
+static bool lan8814_get_sig_rx(struct sk_buff *skb, u16 *sig)
 {
	struct ptp_header *ptp_header;
	u32 type;
···
	ptp_header = ptp_parse_header(skb, type);
	skb_pull_inline(skb, ETH_HLEN);

+	if (!ptp_header)
+		return false;
+
	*sig = (__force u16)(ntohs(ptp_header->sequence_id));
+	return true;
 }

 static bool lan8814_match_rx_skb(struct kszphy_ptp_priv *ptp_priv,
···
	bool ret = false;
	u16 skb_sig;

-	lan8814_get_sig_rx(skb, &skb_sig);
+	if (!lan8814_get_sig_rx(skb, &skb_sig))
+		return ret;

	/* Iterate over all RX timestamps and match it with the received skbs */
	spin_lock_irqsave(&ptp_priv->rx_ts_lock, flags);
···
	return 0;
 }

-static void lan8814_get_sig_tx(struct sk_buff *skb, u16 *sig)
+static bool lan8814_get_sig_tx(struct sk_buff *skb, u16 *sig)
 {
	struct ptp_header *ptp_header;
	u32 type;
···
	type = ptp_classify_raw(skb);
	ptp_header = ptp_parse_header(skb, type);

+	if (!ptp_header)
+		return false;
+
	*sig = (__force u16)(ntohs(ptp_header->sequence_id));
+	return true;
 }

 static void lan8814_match_tx_skb(struct kszphy_ptp_priv *ptp_priv,
···
	spin_lock_irqsave(&ptp_priv->tx_queue.lock, flags);
	skb_queue_walk_safe(&ptp_priv->tx_queue, skb, skb_tmp) {
-		lan8814_get_sig_tx(skb, &skb_sig);
+		if (!lan8814_get_sig_tx(skb, &skb_sig))
+			continue;

		if (memcmp(&skb_sig, &seq_id, sizeof(seq_id)))
			continue;
···
	spin_lock_irqsave(&ptp_priv->rx_queue.lock, flags);
	skb_queue_walk_safe(&ptp_priv->rx_queue, skb, skb_tmp) {
-		lan8814_get_sig_rx(skb, &skb_sig);
+		if (!lan8814_get_sig_rx(skb, &skb_sig))
+			continue;

		if (memcmp(&skb_sig, &rx_ts->seq_id, sizeof(rx_ts->seq_id)))
			continue;
+2
drivers/net/usb/ax88179_178a.c
···
	if (is_valid_ether_addr(mac)) {
		eth_hw_addr_set(dev->net, mac);
+		if (!is_local_ether_addr(mac))
+			dev->net->addr_assign_type = NET_ADDR_PERM;
	} else {
		netdev_info(dev->net, "invalid MAC address, using random\n");
		eth_hw_addr_random(dev->net);
+1
drivers/net/xen-netfront.c
···
		return NULL;
	}
	skb_add_rx_frag(skb, 0, page, 0, 0, PAGE_SIZE);
+	skb_mark_for_recycle(skb);

	/* Align ip header to a 16 bytes boundary */
	skb_reserve(skb, NET_IP_ALIGN);
+15 -1
include/linux/bpf.h
···
	enum bpf_link_type type;
	const struct bpf_link_ops *ops;
	struct bpf_prog *prog;
-	struct work_struct work;
+	/* rcu is used before freeing, work can be used to schedule that
+	 * RCU-based freeing before that, so they never overlap
+	 */
+	union {
+		struct rcu_head rcu;
+		struct work_struct work;
+	};
 };

 struct bpf_link_ops {
	void (*release)(struct bpf_link *link);
+	/* deallocate link resources callback, called without RCU grace period
+	 * waiting
+	 */
	void (*dealloc)(struct bpf_link *link);
+	/* deallocate link resources callback, called after RCU grace period;
+	 * if underlying BPF program is sleepable we go through tasks trace
+	 * RCU GP and then "classic" RCU GP
+	 */
+	void (*dealloc_deferred)(struct bpf_link *link);
	int (*detach)(struct bpf_link *link);
	int (*update_prog)(struct bpf_link *link, struct bpf_prog *new_prog,
			   struct bpf_prog *old_prog);
+28
include/linux/udp.h
···
	}
 }

+DECLARE_STATIC_KEY_FALSE(udp_encap_needed_key);
+#if IS_ENABLED(CONFIG_IPV6)
+DECLARE_STATIC_KEY_FALSE(udpv6_encap_needed_key);
+#endif
+
+static inline bool udp_encap_needed(void)
+{
+	if (static_branch_unlikely(&udp_encap_needed_key))
+		return true;
+
+#if IS_ENABLED(CONFIG_IPV6)
+	if (static_branch_unlikely(&udpv6_encap_needed_key))
+		return true;
+#endif
+
+	return false;
+}
+
 static inline bool udp_unexpected_gso(struct sock *sk, struct sk_buff *skb)
 {
	if (!skb_is_gso(skb))
···
	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST &&
	    !udp_test_bit(ACCEPT_FRAGLIST, sk))
		return true;

+	/* GSO packets lacking the SKB_GSO_UDP_TUNNEL/_CSUM bits might still
+	 * land in a tunnel as the socket check in udp_gro_receive cannot be
+	 * foolproof.
+	 */
+	if (udp_encap_needed() &&
+	    READ_ONCE(udp_sk(sk)->encap_rcv) &&
+	    !(skb_shinfo(skb)->gso_type &
+	      (SKB_GSO_UDP_TUNNEL | SKB_GSO_UDP_TUNNEL_CSUM)))
+		return true;

	return false;
+9
include/net/bluetooth/hci.h
···
	 */
	HCI_QUIRK_USE_BDADDR_PROPERTY,

+	/* When this quirk is set, the Bluetooth Device Address provided by
+	 * the 'local-bd-address' fwnode property is incorrectly specified in
+	 * big-endian order.
+	 *
+	 * This quirk can be set before hci_register_dev is called or
+	 * during the hdev->setup vendor callback.
+	 */
+	HCI_QUIRK_BDADDR_PROPERTY_BROKEN,
+
	/* When this quirk is set, the duplicate filtering during
	 * scanning is based on Bluetooth devices addresses. To allow
	 * RSSI based updates, restart scanning if needed.
-1
include/net/mana/mana.h
···
 #define COMP_ENTRY_SIZE 64

 #define RX_BUFFERS_PER_QUEUE 512
-#define MANA_RX_DATA_ALIGN 64

 #define MAX_SEND_BUFFERS_PER_QUEUE 256
+32 -3
kernel/bpf/syscall.c
···
	atomic64_inc(&link->refcnt);
 }

+static void bpf_link_defer_dealloc_rcu_gp(struct rcu_head *rcu)
+{
+	struct bpf_link *link = container_of(rcu, struct bpf_link, rcu);
+
+	/* free bpf_link and its containing memory */
+	link->ops->dealloc_deferred(link);
+}
+
+static void bpf_link_defer_dealloc_mult_rcu_gp(struct rcu_head *rcu)
+{
+	if (rcu_trace_implies_rcu_gp())
+		bpf_link_defer_dealloc_rcu_gp(rcu);
+	else
+		call_rcu(rcu, bpf_link_defer_dealloc_rcu_gp);
+}
+
 /* bpf_link_free is guaranteed to be called from process context */
 static void bpf_link_free(struct bpf_link *link)
 {
+	bool sleepable = false;
+
	bpf_link_free_id(link->id);
	if (link->prog) {
+		sleepable = link->prog->sleepable;
		/* detach BPF program, clean up used resources */
		link->ops->release(link);
		bpf_prog_put(link->prog);
	}
-	/* free bpf_link and its containing memory */
-	link->ops->dealloc(link);
+	if (link->ops->dealloc_deferred) {
+		/* schedule BPF link deallocation; if underlying BPF program
+		 * is sleepable, we need to first wait for RCU tasks trace
+		 * sync, then go through "classic" RCU grace period
+		 */
+		if (sleepable)
+			call_rcu_tasks_trace(&link->rcu, bpf_link_defer_dealloc_mult_rcu_gp);
+		else
+			call_rcu(&link->rcu, bpf_link_defer_dealloc_rcu_gp);
+	}
+	if (link->ops->dealloc)
+		link->ops->dealloc(link);
 }

 static void bpf_link_put_deferred(struct work_struct *work)
···
 static const struct bpf_link_ops bpf_raw_tp_link_lops = {
	.release = bpf_raw_tp_link_release,
-	.dealloc = bpf_raw_tp_link_dealloc,
+	.dealloc_deferred = bpf_raw_tp_link_dealloc,
	.show_fdinfo = bpf_raw_tp_link_show_fdinfo,
	.fill_link_info = bpf_raw_tp_link_fill_link_info,
 };
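The dispatch in `bpf_link_free()` above picks one of three freeing paths: immediate `dealloc`, one RCU grace period for `dealloc_deferred`, or tasks-trace RCU followed by classic RCU when the program is sleepable. A toy model of just that decision, with an illustrative enum and flags struct (not kernel API), to make the precedence explicit:

```c
#include <assert.h>

/* Illustrative model of the three freeing paths in bpf_link_free(). */
enum free_path {
	FREE_IMMEDIATE,			/* legacy ->dealloc */
	FREE_AFTER_RCU,			/* ->dealloc_deferred, one RCU GP */
	FREE_AFTER_TASKS_TRACE_RCU,	/* sleepable: tasks-trace GP, then RCU GP */
};

struct link_ops_model {
	int has_dealloc;
	int has_dealloc_deferred;
};

static enum free_path pick_free_path(const struct link_ops_model *ops,
				     int sleepable)
{
	/* dealloc_deferred takes precedence: the link may still be reachable
	 * by RCU-protected readers, so freeing must wait for a grace period */
	if (ops->has_dealloc_deferred)
		return sleepable ? FREE_AFTER_TASKS_TRACE_RCU : FREE_AFTER_RCU;
	return FREE_IMMEDIATE;
}
```

The sleepable case chains two grace-period flavors because sleepable BPF programs are protected by RCU tasks trace, while non-sleepable readers use classic RCU; only after both can the memory be reused.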
+3
kernel/bpf/verifier.c
···
	}
	if (!env->prog->jit_requested) {
		verbose(env, "JIT is required to use arena\n");
+		fdput(f);
		return -EOPNOTSUPP;
	}
	if (!bpf_jit_supports_arena()) {
		verbose(env, "JIT doesn't support arena\n");
+		fdput(f);
		return -EOPNOTSUPP;
	}
	env->prog->aux->arena = (void *)map;
	if (!bpf_arena_get_user_vm_start(env->prog->aux->arena)) {
		verbose(env, "arena's user address must be set via map_extra or mmap()\n");
+		fdput(f);
		return -EINVAL;
	}
 }
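The verifier fix above plugs three file-descriptor leaks by adding `fdput(f)` to each early return. The classic way to avoid this whole class of bug is a single exit label so the reference is dropped on every path exactly once; a minimal userspace sketch (the counter and error codes here stand in for the fd reference and kernel errnos, for illustration only):

```c
#include <assert.h>

static int refs;	/* stands in for the held file reference */

static void get_ref(void) { refs++; }
static void put_ref(void) { refs--; }

/* Validate a "map" under a held reference; every exit funnels through one
 * label so the reference cannot leak on any error path. */
static int check_map(int jit_ok, int arena_ok)
{
	int err = 0;

	get_ref();			/* like fdget() on the map fd */
	if (!jit_ok) {
		err = -95;		/* -EOPNOTSUPP */
		goto out;
	}
	if (!arena_ok) {
		err = -22;		/* -EINVAL */
		goto out;
	}
out:
	put_ref();			/* released on every path */
	return err;
}
```

With scattered `return`s instead of the label, each new error check is a fresh opportunity to forget the release, which is exactly how the three leaks in the original hunk crept in.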
+5 -5
kernel/trace/bpf_trace.c
···
 static const struct bpf_link_ops bpf_kprobe_multi_link_lops = {
	.release = bpf_kprobe_multi_link_release,
-	.dealloc = bpf_kprobe_multi_link_dealloc,
+	.dealloc_deferred = bpf_kprobe_multi_link_dealloc,
	.fill_link_info = bpf_kprobe_multi_link_fill_link_info,
 };
···
	umulti_link = container_of(link, struct bpf_uprobe_multi_link, link);
	bpf_uprobe_unregister(&umulti_link->path, umulti_link->uprobes, umulti_link->cnt);
+	if (umulti_link->task)
+		put_task_struct(umulti_link->task);
+	path_put(&umulti_link->path);
 }

 static void bpf_uprobe_multi_link_dealloc(struct bpf_link *link)
 {
	struct bpf_uprobe_multi_link *umulti_link;

	umulti_link = container_of(link, struct bpf_uprobe_multi_link, link);
-	if (umulti_link->task)
-		put_task_struct(umulti_link->task);
-	path_put(&umulti_link->path);
	kvfree(umulti_link->uprobes);
	kfree(umulti_link);
 }
···
 static const struct bpf_link_ops bpf_uprobe_multi_link_lops = {
	.release = bpf_uprobe_multi_link_release,
-	.dealloc = bpf_uprobe_multi_link_dealloc,
+	.dealloc_deferred = bpf_uprobe_multi_link_dealloc,
	.fill_link_info = bpf_uprobe_multi_link_fill_link_info,
 };
+1 -1
net/ax25/ax25_dev.c
···
 	spin_lock_bh(&ax25_dev_lock);
 
 #ifdef CONFIG_AX25_DAMA_SLAVE
-	ax25_ds_del_timer(ax25_dev);
+	timer_shutdown_sync(&ax25_dev->dama.slave_timer);
 #endif
 
 	/*
+3 -3
net/bluetooth/hci_core.c
···
 	cancel_delayed_work_sync(&hdev->ncmd_timer);
 	atomic_set(&hdev->cmd_cnt, 1);
 
-	hci_cmd_sync_cancel_sync(hdev, -err);
+	hci_cmd_sync_cancel_sync(hdev, err);
 }
 
 /* Suspend HCI device */
···
 		return 0;
 
 	/* Cancel potentially blocking sync operation before suspend */
-	hci_cancel_cmd_sync(hdev, -EHOSTDOWN);
+	hci_cancel_cmd_sync(hdev, EHOSTDOWN);
 
 	hci_req_sync_lock(hdev);
 	ret = hci_suspend_sync(hdev);
···
 
 	err = hci_send_frame(hdev, skb);
 	if (err < 0) {
-		hci_cmd_sync_cancel_sync(hdev, err);
+		hci_cmd_sync_cancel_sync(hdev, -err);
 		return;
 	}
+40 -24
net/bluetooth/hci_debugfs.c
···
 {
 	struct hci_dev *hdev = data;
 
-	if (val == 0 || val > hdev->conn_info_max_age)
-		return -EINVAL;
-
 	hci_dev_lock(hdev);
+	if (val == 0 || val > hdev->conn_info_max_age) {
+		hci_dev_unlock(hdev);
+		return -EINVAL;
+	}
+
 	hdev->conn_info_min_age = val;
 	hci_dev_unlock(hdev);
···
 {
 	struct hci_dev *hdev = data;
 
-	if (val == 0 || val < hdev->conn_info_min_age)
-		return -EINVAL;
-
 	hci_dev_lock(hdev);
+	if (val == 0 || val < hdev->conn_info_min_age) {
+		hci_dev_unlock(hdev);
+		return -EINVAL;
+	}
+
 	hdev->conn_info_max_age = val;
 	hci_dev_unlock(hdev);
···
 {
 	struct hci_dev *hdev = data;
 
-	if (val == 0 || val % 2 || val > hdev->sniff_max_interval)
-		return -EINVAL;
-
 	hci_dev_lock(hdev);
+	if (val == 0 || val % 2 || val > hdev->sniff_max_interval) {
+		hci_dev_unlock(hdev);
+		return -EINVAL;
+	}
+
 	hdev->sniff_min_interval = val;
 	hci_dev_unlock(hdev);
···
 {
 	struct hci_dev *hdev = data;
 
-	if (val == 0 || val % 2 || val < hdev->sniff_min_interval)
-		return -EINVAL;
-
 	hci_dev_lock(hdev);
+	if (val == 0 || val % 2 || val < hdev->sniff_min_interval) {
+		hci_dev_unlock(hdev);
+		return -EINVAL;
+	}
+
 	hdev->sniff_max_interval = val;
 	hci_dev_unlock(hdev);
···
 {
 	struct hci_dev *hdev = data;
 
-	if (val < 0x0006 || val > 0x0c80 || val > hdev->le_conn_max_interval)
-		return -EINVAL;
-
 	hci_dev_lock(hdev);
+	if (val < 0x0006 || val > 0x0c80 || val > hdev->le_conn_max_interval) {
+		hci_dev_unlock(hdev);
+		return -EINVAL;
+	}
+
 	hdev->le_conn_min_interval = val;
 	hci_dev_unlock(hdev);
···
 {
 	struct hci_dev *hdev = data;
 
-	if (val < 0x0006 || val > 0x0c80 || val < hdev->le_conn_min_interval)
-		return -EINVAL;
-
 	hci_dev_lock(hdev);
+	if (val < 0x0006 || val > 0x0c80 || val < hdev->le_conn_min_interval) {
+		hci_dev_unlock(hdev);
+		return -EINVAL;
+	}
+
 	hdev->le_conn_max_interval = val;
 	hci_dev_unlock(hdev);
···
 {
 	struct hci_dev *hdev = data;
 
-	if (val < 0x0020 || val > 0x4000 || val > hdev->le_adv_max_interval)
-		return -EINVAL;
-
 	hci_dev_lock(hdev);
+	if (val < 0x0020 || val > 0x4000 || val > hdev->le_adv_max_interval) {
+		hci_dev_unlock(hdev);
+		return -EINVAL;
+	}
+
 	hdev->le_adv_min_interval = val;
 	hci_dev_unlock(hdev);
···
 {
 	struct hci_dev *hdev = data;
 
-	if (val < 0x0020 || val > 0x4000 || val < hdev->le_adv_min_interval)
-		return -EINVAL;
-
 	hci_dev_lock(hdev);
+	if (val < 0x0020 || val > 0x4000 || val < hdev->le_adv_min_interval) {
+		hci_dev_unlock(hdev);
+		return -EINVAL;
+	}
+
 	hdev->le_adv_max_interval = val;
 	hci_dev_unlock(hdev);
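Each hunk in the hci_debugfs fix moves the bounds check inside the lock: checking a paired field (e.g. conn_info_max_age) before taking hci_dev_lock() leaves a window where a concurrent writer can invalidate the check. A minimal userspace sketch of the pattern, using pthreads in place of hci_dev_lock() (struct and function names here are illustrative, not the kernel API):

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>

/* Toy device with a min/max pair that must stay consistent. */
struct toy_dev {
	pthread_mutex_t lock;
	unsigned int min_age, max_age;
};

int set_min_age(struct toy_dev *d, unsigned int val)
{
	pthread_mutex_lock(&d->lock);
	/* Validate under the same lock that protects max_age, so a
	 * racing set_max_age() cannot invalidate the check. */
	if (val == 0 || val > d->max_age) {
		pthread_mutex_unlock(&d->lock);
		return -EINVAL;
	}
	d->min_age = val;
	pthread_mutex_unlock(&d->lock);
	return 0;
}
```

The cost is a lock/unlock even on the error path, which is acceptable for a debugfs setter.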
+25
net/bluetooth/hci_event.c
···
 	if (test_bit(HCI_ENCRYPT, &hdev->flags))
 		set_bit(HCI_CONN_ENCRYPT, &conn->flags);
 
+	/* "Link key request" completed ahead of "connect request" completes */
+	if (ev->encr_mode == 1 && !test_bit(HCI_CONN_ENCRYPT, &conn->flags) &&
+	    ev->link_type == ACL_LINK) {
+		struct link_key *key;
+		struct hci_cp_read_enc_key_size cp;
+
+		key = hci_find_link_key(hdev, &ev->bdaddr);
+		if (key) {
+			set_bit(HCI_CONN_ENCRYPT, &conn->flags);
+
+			if (!(hdev->commands[20] & 0x10)) {
+				conn->enc_key_size = HCI_LINK_KEY_SIZE;
+			} else {
+				cp.handle = cpu_to_le16(conn->handle);
+				if (hci_send_cmd(hdev, HCI_OP_READ_ENC_KEY_SIZE,
+						 sizeof(cp), &cp)) {
+					bt_dev_err(hdev, "sending read key size failed");
+					conn->enc_key_size = HCI_LINK_KEY_SIZE;
+				}
+			}
+
+			hci_encrypt_cfm(conn, ev->status);
+		}
+	}
+
 	/* Get remote features */
 	if (conn->type == ACL_LINK) {
 		struct hci_cp_read_remote_features cp;
+8 -2
net/bluetooth/hci_sync.c
···
 	bt_dev_dbg(hdev, "err 0x%2.2x", err);
 
 	if (hdev->req_status == HCI_REQ_PEND) {
-		hdev->req_result = err;
+		/* req_result is __u32 so error must be positive to be properly
+		 * propagated.
+		 */
+		hdev->req_result = err < 0 ? -err : err;
 		hdev->req_status = HCI_REQ_CANCELED;
 
 		wake_up_interruptible(&hdev->req_wait_q);
···
 	if (ret < 0 || !bacmp(&ba, BDADDR_ANY))
 		return;
 
-	bacpy(&hdev->public_addr, &ba);
+	if (test_bit(HCI_QUIRK_BDADDR_PROPERTY_BROKEN, &hdev->quirks))
+		baswap(&hdev->public_addr, &ba);
+	else
+		bacpy(&hdev->public_addr, &ba);
 }
 
 struct hci_init_stage {
+6
net/bridge/netfilter/ebtables.c
···
 	struct ebt_table_info *newinfo;
 	struct ebt_replace tmp;
 
+	if (len < sizeof(tmp))
+		return -EINVAL;
 	if (copy_from_sockptr(&tmp, arg, sizeof(tmp)) != 0)
 		return -EFAULT;
 
···
 {
 	struct ebt_replace hlp;
 
+	if (len < sizeof(hlp))
+		return -EINVAL;
 	if (copy_from_sockptr(&hlp, arg, sizeof(hlp)))
 		return -EFAULT;
 
···
 {
 	struct compat_ebt_replace hlp;
 
+	if (len < sizeof(hlp))
+		return -EINVAL;
 	if (copy_from_sockptr(&hlp, arg, sizeof(hlp)))
 		return -EFAULT;
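The ebtables hunks (and the matching arp_tables/ip_tables/ip6_tables ones further down) all add the same guard: verify the user-supplied length covers the fixed-size header before copying it, so the kernel never reads past the user buffer. A minimal sketch, with memcpy() standing in for copy_from_sockptr() (struct and function names here are illustrative):

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Toy fixed-size header, like struct ebt_replace. */
struct replace_hdr {
	char name[32];
	unsigned int entries_size;
};

int parse_replace(const void *arg, size_t len, struct replace_hdr *out)
{
	/* The added guard: the caller's buffer must cover the header. */
	if (len < sizeof(*out))
		return -EINVAL;
	memcpy(out, arg, sizeof(*out));	/* stand-in for copy_from_sockptr() */
	return 0;
}
```

Without the check, a short setsockopt() buffer would let the fixed-size copy read adjacent memory.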
+1 -1
net/core/dev.c
···
  * PP consumers must pay attention to run APIs in the appropriate context
  * (e.g. NAPI context).
  */
-static DEFINE_PER_CPU_ALIGNED(struct page_pool *, system_page_pool);
+static DEFINE_PER_CPU(struct page_pool *, system_page_pool);
 
 #ifdef CONFIG_LOCKDEP
 /*
+2 -1
net/core/gro.c
···
 	}
 
 merge:
-	/* sk owenrship - if any - completely transferred to the aggregated packet */
+	/* sk ownership - if any - completely transferred to the aggregated packet */
 	skb->destructor = NULL;
+	skb->sk = NULL;
 	delta_truesize = skb->truesize;
 	if (offset > headlen) {
 		unsigned int eat = offset - headlen;
+6
net/core/sock_map.c
···
 	struct sock *sk;
 	int err = 0;
 
+	if (irqs_disabled())
+		return -EOPNOTSUPP; /* locks here are hardirq-unsafe */
+
 	spin_lock_bh(&stab->lock);
 	sk = *psk;
 	if (!sk_test || sk_test == sk)
···
 	struct bpf_shtab_bucket *bucket;
 	struct bpf_shtab_elem *elem;
 	int ret = -ENOENT;
+
+	if (irqs_disabled())
+		return -EOPNOTSUPP; /* locks here are hardirq-unsafe */
 
 	hash = sock_hash_bucket_hash(key, key_size);
 	bucket = sock_hash_select_bucket(htab, hash);
+6 -7
net/hsr/hsr_device.c
···
 {
 	struct hsr_priv *hsr;
 	struct hsr_port *port;
-	char designation;
+	const char *designation = NULL;
 
 	hsr = netdev_priv(dev);
-	designation = '\0';
 
 	hsr_for_each_port(hsr, port) {
 		if (port->type == HSR_PT_MASTER)
 			continue;
 		switch (port->type) {
 		case HSR_PT_SLAVE_A:
-			designation = 'A';
+			designation = "Slave A";
 			break;
 		case HSR_PT_SLAVE_B:
-			designation = 'B';
+			designation = "Slave B";
 			break;
 		default:
-			designation = '?';
+			designation = "Unknown";
 		}
 		if (!is_slave_up(port->dev))
-			netdev_warn(dev, "Slave %c (%s) is not up; please bring it up to get a fully working HSR network\n",
+			netdev_warn(dev, "%s (%s) is not up; please bring it up to get a fully working HSR network\n",
 				    designation, port->dev->name);
 	}
 
-	if (designation == '\0')
+	if (!designation)
 		netdev_warn(dev, "No slave devices configured\n");
 
 	return 0;
+20 -10
net/ipv4/inet_connection_sock.c
···
 			  kuid_t sk_uid, bool relax,
 			  bool reuseport_cb_ok, bool reuseport_ok)
 {
-	if (sk->sk_family == AF_INET && ipv6_only_sock(sk2))
-		return false;
+	if (ipv6_only_sock(sk2)) {
+		if (sk->sk_family == AF_INET)
+			return false;
+
+#if IS_ENABLED(CONFIG_IPV6)
+		if (ipv6_addr_v4mapped(&sk->sk_v6_rcv_saddr))
+			return false;
+#endif
+	}
 
 	return inet_bind_conflict(sk, sk2, sk_uid, relax,
 				  reuseport_cb_ok, reuseport_ok);
···
 	struct sock_reuseport *reuseport_cb;
 	struct inet_bind_hashbucket *head2;
 	struct inet_bind2_bucket *tb2;
+	bool conflict = false;
 	bool reuseport_cb_ok;
 
 	rcu_read_lock();
···
 
 	spin_lock(&head2->lock);
 
-	inet_bind_bucket_for_each(tb2, &head2->chain)
-		if (inet_bind2_bucket_match_addr_any(tb2, net, port, l3mdev, sk))
-			break;
+	inet_bind_bucket_for_each(tb2, &head2->chain) {
+		if (!inet_bind2_bucket_match_addr_any(tb2, net, port, l3mdev, sk))
+			continue;
 
-	if (tb2 && inet_bhash2_conflict(sk, tb2, uid, relax, reuseport_cb_ok,
-					reuseport_ok)) {
-		spin_unlock(&head2->lock);
-		return true;
+		if (!inet_bhash2_conflict(sk, tb2, uid, relax, reuseport_cb_ok, reuseport_ok))
+			continue;
+
+		conflict = true;
+		break;
 	}
 
 	spin_unlock(&head2->lock);
-	return false;
+
+	return conflict;
 }
 
 /*
+5
net/ipv4/ip_gre.c
···
 					  tpi->flags | TUNNEL_NO_KEY,
 					  iph->saddr, iph->daddr, 0);
 	} else {
+		if (unlikely(!pskb_may_pull(skb,
+					    gre_hdr_len + sizeof(*ershdr))))
+			return PACKET_REJECT;
+
 		ershdr = (struct erspan_base_hdr *)(skb->data + gre_hdr_len);
 		ver = ershdr->ver;
+		iph = ip_hdr(skb);
 		tunnel = ip_tunnel_lookup(itn, skb->dev->ifindex,
 					  tpi->flags | TUNNEL_KEY,
 					  iph->saddr, iph->daddr, tpi->key);
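Note the extra `iph = ip_hdr(skb);` in the hunk above: pskb_may_pull() may reallocate the packet's data buffer, so any pointer computed before the call is stale afterwards and must be re-derived. A toy model of that hazard, where a "pull" relocates the buffer (names here are illustrative, not the skb API):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy packet: a heap buffer that may move on "pull". */
struct toy_skb {
	unsigned char *data;
	size_t len;
};

/* Stand-in for pskb_may_pull(): rejects short packets, and on success
 * relocates the data buffer, invalidating earlier pointers into it. */
int toy_pull(struct toy_skb *skb, size_t need)
{
	unsigned char *n;

	if (skb->len < need)
		return 0;
	n = malloc(skb->len);		/* relocation: data moves */
	memcpy(n, skb->data, skb->len);
	free(skb->data);
	skb->data = n;
	return 1;
}
```

After a successful toy_pull(), a header pointer saved earlier points into freed memory; correct code re-reads through skb->data, just as the patch re-reads ip_hdr(skb).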
+4
net/ipv4/netfilter/arp_tables.c
···
 	void *loc_cpu_entry;
 	struct arpt_entry *iter;
 
+	if (len < sizeof(tmp))
+		return -EINVAL;
 	if (copy_from_sockptr(&tmp, arg, sizeof(tmp)) != 0)
 		return -EFAULT;
 
···
 	void *loc_cpu_entry;
 	struct arpt_entry *iter;
 
+	if (len < sizeof(tmp))
+		return -EINVAL;
 	if (copy_from_sockptr(&tmp, arg, sizeof(tmp)) != 0)
 		return -EFAULT;
+4
net/ipv4/netfilter/ip_tables.c
···
 	void *loc_cpu_entry;
 	struct ipt_entry *iter;
 
+	if (len < sizeof(tmp))
+		return -EINVAL;
 	if (copy_from_sockptr(&tmp, arg, sizeof(tmp)) != 0)
 		return -EFAULT;
 
···
 	void *loc_cpu_entry;
 	struct ipt_entry *iter;
 
+	if (len < sizeof(tmp))
+		return -EINVAL;
 	if (copy_from_sockptr(&tmp, arg, sizeof(tmp)) != 0)
 		return -EFAULT;
+7
net/ipv4/udp.c
···
 }
 
 DEFINE_STATIC_KEY_FALSE(udp_encap_needed_key);
+EXPORT_SYMBOL(udp_encap_needed_key);
+
+#if IS_ENABLED(CONFIG_IPV6)
+DEFINE_STATIC_KEY_FALSE(udpv6_encap_needed_key);
+EXPORT_SYMBOL(udpv6_encap_needed_key);
+#endif
+
 void udp_encap_enable(void)
 {
 	static_branch_inc(&udp_encap_needed_key);
+13 -10
net/ipv4/udp_offload.c
···
 	NAPI_GRO_CB(p)->count++;
 	p->data_len += skb->len;
 
-	/* sk owenrship - if any - completely transferred to the aggregated packet */
+	/* sk ownership - if any - completely transferred to the aggregated packet */
 	skb->destructor = NULL;
+	skb->sk = NULL;
 	p->truesize += skb->truesize;
 	p->len += skb->len;
 
···
 	unsigned int off = skb_gro_offset(skb);
 	int flush = 1;
 
-	/* we can do L4 aggregation only if the packet can't land in a tunnel
-	 * otherwise we could corrupt the inner stream
+	/* We can do L4 aggregation only if the packet can't land in a tunnel
+	 * otherwise we could corrupt the inner stream. Detecting such packets
+	 * cannot be foolproof and the aggregation might still happen in some
+	 * cases. Such packets should be caught in udp_unexpected_gso later.
 	 */
 	NAPI_GRO_CB(skb)->is_flist = 0;
 	if (!sk || !udp_sk(sk)->gro_receive) {
+		/* If the packet was locally encapsulated in a UDP tunnel that
+		 * wasn't detected above, do not GRO.
+		 */
+		if (skb->encapsulation)
+			goto out;
+
 		if (skb->dev->features & NETIF_F_GRO_FRAGLIST)
 			NAPI_GRO_CB(skb)->is_flist = sk ? !udp_test_bit(GRO_ENABLED, sk) : 1;
 
···
 	skb_shinfo(skb)->gso_type |= (SKB_GSO_FRAGLIST|SKB_GSO_UDP_L4);
 	skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count;
 
-	if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
-		if (skb->csum_level < SKB_MAX_CSUM_LEVEL)
-			skb->csum_level++;
-	} else {
-		skb->ip_summed = CHECKSUM_UNNECESSARY;
-		skb->csum_level = 0;
-	}
+	__skb_incr_checksum_unnecessary(skb);
 
 	return 0;
 }
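The last hunk above (and the matching one in ipv6/udp_offload.c) replaces an open-coded block with the kernel's __skb_incr_checksum_unnecessary() helper. The logic being factored out bumps the "checksum already verified" nesting level, resetting it when the packet was not yet marked verified. A standalone sketch of that logic, with a toy struct in place of the real sk_buff (constants follow the kernel's, the struct is a stand-in):

```c
#include <assert.h>

#define CHECKSUM_NONE        0
#define CHECKSUM_UNNECESSARY 1
#define MAX_CSUM_LEVEL       3	/* mirrors SKB_MAX_CSUM_LEVEL */

struct toy_skb {
	int ip_summed;
	int csum_level;	/* how many nested checksums were verified */
};

void incr_checksum_unnecessary(struct toy_skb *skb)
{
	if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
		/* One more (inner) checksum is known-good: deepen, capped. */
		if (skb->csum_level < MAX_CSUM_LEVEL)
			skb->csum_level++;
	} else {
		/* First verified checksum on this packet. */
		skb->ip_summed = CHECKSUM_UNNECESSARY;
		skb->csum_level = 0;
	}
}
```

Centralizing this in one helper is what made the two UDP offload call sites trivially consistent.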
+7 -7
net/ipv6/ip6_fib.c
···
 	if (!w) {
 		/* New dump:
 		 *
-		 * 1. hook callback destructor.
-		 */
-		cb->args[3] = (long)cb->done;
-		cb->done = fib6_dump_done;
-
-		/*
-		 * 2. allocate and initialize walker.
+		 * 1. allocate and initialize walker.
 		 */
 		w = kzalloc(sizeof(*w), GFP_ATOMIC);
 		if (!w)
 			return -ENOMEM;
 		w->func = fib6_dump_node;
 		cb->args[2] = (long)w;
+
+		/* 2. hook callback destructor.
+		 */
+		cb->args[3] = (long)cb->done;
+		cb->done = fib6_dump_done;
+
 	}
 
 	arg.skb = skb;
+3
net/ipv6/ip6_gre.c
···
 	struct ip6_tnl *tunnel;
 	u8 ver;
 
+	if (unlikely(!pskb_may_pull(skb, sizeof(*ershdr))))
+		return PACKET_REJECT;
+
 	ipv6h = ipv6_hdr(skb);
 	ershdr = (struct erspan_base_hdr *)skb->data;
 	ver = ershdr->ver;
+4
net/ipv6/netfilter/ip6_tables.c
···
 	void *loc_cpu_entry;
 	struct ip6t_entry *iter;
 
+	if (len < sizeof(tmp))
+		return -EINVAL;
 	if (copy_from_sockptr(&tmp, arg, sizeof(tmp)) != 0)
 		return -EFAULT;
 
···
 	void *loc_cpu_entry;
 	struct ip6t_entry *iter;
 
+	if (len < sizeof(tmp))
+		return -EINVAL;
 	if (copy_from_sockptr(&tmp, arg, sizeof(tmp)) != 0)
 		return -EFAULT;
+1 -1
net/ipv6/udp.c
···
 		goto try_again;
 	}
 
-DEFINE_STATIC_KEY_FALSE(udpv6_encap_needed_key);
+DECLARE_STATIC_KEY_FALSE(udpv6_encap_needed_key);
 void udpv6_encap_enable(void)
 {
 	static_branch_inc(&udpv6_encap_needed_key);
+1 -7
net/ipv6/udp_offload.c
···
 	skb_shinfo(skb)->gso_type |= (SKB_GSO_FRAGLIST|SKB_GSO_UDP_L4);
 	skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count;
 
-	if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
-		if (skb->csum_level < SKB_MAX_CSUM_LEVEL)
-			skb->csum_level++;
-	} else {
-		skb->ip_summed = CHECKSUM_UNNECESSARY;
-		skb->csum_level = 0;
-	}
+	__skb_incr_checksum_unnecessary(skb);
 
 	return 0;
 }
-2
net/mptcp/protocol.c
···
 			mptcp_set_state(newsk, TCP_CLOSE);
 		}
 	} else {
-		MPTCP_INC_STATS(sock_net(ssk),
-				MPTCP_MIB_MPCAPABLEPASSIVEFALLBACK);
 tcpfallback:
 		newsk->sk_kern_sock = kern;
 		lock_sock(newsk);
+4
net/mptcp/sockopt.c
···
 	struct mptcp_subflow_context *subflow;
 	int space, cap;
 
+	/* bpf can land here with a wrong sk type */
+	if (sk->sk_protocol == IPPROTO_TCP)
+		return -EINVAL;
+
 	if (sk->sk_userlocks & SOCK_RCVBUF_LOCK)
 		cap = sk->sk_rcvbuf >> 1;
 	else
+2
net/mptcp/subflow.c
···
 	return child;
 
 fallback:
+	if (fallback)
+		SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_MPCAPABLEPASSIVEFALLBACK);
 	mptcp_subflow_drop_ctx(child);
 	return child;
 }
+34 -16
net/netfilter/nf_tables_api.c
···
 		return true;
 
 	list_for_each_entry(trans, &nft_net->commit_list, list) {
-		if ((trans->msg_type == NFT_MSG_NEWCHAIN ||
-		     trans->msg_type == NFT_MSG_DELCHAIN) &&
-		    trans->ctx.table == ctx->table &&
-		    nft_trans_chain_update(trans))
+		if (trans->ctx.table == ctx->table &&
+		    ((trans->msg_type == NFT_MSG_NEWCHAIN &&
+		      nft_trans_chain_update(trans)) ||
+		     (trans->msg_type == NFT_MSG_DELCHAIN &&
+		      nft_is_base_chain(trans->ctx.chain))))
 			return true;
 	}
 
···
 	if (nla[NFTA_CHAIN_HOOK]) {
 		struct nft_stats __percpu *stats = NULL;
 		struct nft_chain_hook hook = {};
+
+		if (table->flags & __NFT_TABLE_F_UPDATE)
+			return -EINVAL;
 
 		if (flags & NFT_CHAIN_BINDING)
 			return -EOPNOTSUPP;
···
 	return err;
 }
 
+/* call under rcu_read_lock */
 static const struct nf_flowtable_type *__nft_flowtable_type_get(u8 family)
 {
 	const struct nf_flowtable_type *type;
 
-	list_for_each_entry(type, &nf_tables_flowtables, list) {
+	list_for_each_entry_rcu(type, &nf_tables_flowtables, list) {
 		if (family == type->family)
 			return type;
 	}
···
 {
 	const struct nf_flowtable_type *type;
 
+	rcu_read_lock();
 	type = __nft_flowtable_type_get(family);
-	if (type != NULL && try_module_get(type->owner))
+	if (type != NULL && try_module_get(type->owner)) {
+		rcu_read_unlock();
 		return type;
+	}
+	rcu_read_unlock();
 
 	lockdep_nfnl_nft_mutex_not_held();
 #ifdef CONFIG_MODULES
···
 	struct nft_trans *trans, *next;
 	LIST_HEAD(set_update_list);
 	struct nft_trans_elem *te;
+	int err = 0;
 
 	if (action == NFNL_ABORT_VALIDATE &&
 	    nf_tables_validate(net) < 0)
-		return -EAGAIN;
+		err = -EAGAIN;
 
 	list_for_each_entry_safe_reverse(trans, next, &nft_net->commit_list,
 					 list) {
···
 		nf_tables_abort_release(trans);
 	}
 
-	if (action == NFNL_ABORT_AUTOLOAD)
-		nf_tables_module_autoload(net);
-	else
-		nf_tables_module_autoload_cleanup(net);
-
-	return 0;
+	return err;
 }
 
 static int nf_tables_abort(struct net *net, struct sk_buff *skb,
···
 	gc_seq = nft_gc_seq_begin(nft_net);
 	ret = __nf_tables_abort(net, action);
 	nft_gc_seq_end(nft_net, gc_seq);
+
+	WARN_ON_ONCE(!list_empty(&nft_net->commit_list));
+
+	/* module autoload needs to happen after GC sequence update because it
+	 * temporarily releases and grabs mutex again.
+	 */
+	if (action == NFNL_ABORT_AUTOLOAD)
+		nf_tables_module_autoload(net);
+	else
+		nf_tables_module_autoload_cleanup(net);
+
 	mutex_unlock(&nft_net->commit_mutex);
 
 	return ret;
···
 
 	gc_seq = nft_gc_seq_begin(nft_net);
 
-	if (!list_empty(&nft_net->commit_list) ||
-	    !list_empty(&nft_net->module_list))
-		__nf_tables_abort(net, NFNL_ABORT_NONE);
+	WARN_ON_ONCE(!list_empty(&nft_net->commit_list));
+
+	if (!list_empty(&nft_net->module_list))
+		nf_tables_module_autoload_cleanup(net);
 
 	__nft_release_tables(net);
···
 	unregister_netdevice_notifier(&nf_tables_flowtable_notifier);
 	nft_chain_filter_fini();
 	nft_chain_route_fini();
+	nf_tables_trans_destroy_flush_work();
 	unregister_pernet_subsys(&nf_tables_net_ops);
 	cancel_work_sync(&trans_gc_work);
 	cancel_work_sync(&trans_destroy_work);
+1 -1
net/rds/rdma.c
···
 		}
 		ret = PTR_ERR(trans_private);
 		/* Trigger connection so that its ready for the next retry */
-		if (ret == -ENODEV)
+		if (ret == -ENODEV && cp)
 			rds_conn_connect_if_down(cp->cp_conn);
 		goto out;
 	}
+5 -5
net/sched/act_skbmod.c
···
 	struct tcf_skbmod *d = to_skbmod(a);
 	unsigned char *b = skb_tail_pointer(skb);
 	struct tcf_skbmod_params *p;
-	struct tc_skbmod opt = {
-		.index = d->tcf_index,
-		.refcnt = refcount_read(&d->tcf_refcnt) - ref,
-		.bindcnt = atomic_read(&d->tcf_bindcnt) - bind,
-	};
+	struct tc_skbmod opt;
 	struct tcf_t t;
 
+	memset(&opt, 0, sizeof(opt));
+	opt.index = d->tcf_index;
+	opt.refcnt = refcount_read(&d->tcf_refcnt) - ref,
+	opt.bindcnt = atomic_read(&d->tcf_bindcnt) - bind;
 	spin_lock_bh(&d->tcf_lock);
 	opt.action = d->tcf_action;
 	p = rcu_dereference_protected(d->skbmod_p,
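The act_skbmod hunk trades a partial designated initializer for an explicit memset(): an initializer is not required to zero a struct's padding bytes, so copying the whole struct to user space can leak stack contents through the padding holes, while memset() clears every byte. A sketch of the safe pattern with a struct that has an explicit hole (names are illustrative; on strict-C terms padding after member stores is unspecified, but memset()-then-assign is the accepted kernel idiom and holds on common compilers):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy struct with a padding hole between 'a' and 'b' on typical ABIs. */
struct with_hole {
	char a;		/* followed by padding up to alignof(long) */
	long b;
};

/* Safe pattern from the patch: zero the whole struct, padding included,
 * before filling the members that will be copied out. */
void fill_safely(struct with_hole *opt)
{
	memset(opt, 0, sizeof(*opt));
	opt->a = 1;
	opt->b = 2;
}
```

A KMSAN report on the copy-to-user path is typically what flags the unsafe variant.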
+1 -1
net/sched/sch_api.c
···
 		notify = !sch->q.qlen && !WARN_ON_ONCE(!n &&
 						       !qdisc_is_offloaded);
 		/* TODO: perform the search on a per txq basis */
-		sch = qdisc_lookup(qdisc_dev(sch), TC_H_MAJ(parentid));
+		sch = qdisc_lookup_rcu(qdisc_dev(sch), TC_H_MAJ(parentid));
 		if (sch == NULL) {
 			WARN_ON_ONCE(parentid != TC_H_ROOT);
 			break;
+2 -1
net/vmw_vsock/virtio_transport.c
···
 		if (!skb)
 			break;
 
-		virtio_transport_deliver_tap_pkt(skb);
 		reply = virtio_vsock_skb_reply(skb);
 		sgs = vsock->out_sgs;
 		sg_init_one(sgs[out_sg], virtio_vsock_hdr(skb),
···
 			virtio_vsock_skb_queue_head(&vsock->send_pkt_queue, skb);
 			break;
 		}
+
+		virtio_transport_deliver_tap_pkt(skb);
 
 		if (reply) {
 			struct virtqueue *rx_vq = vsock->vqs[VSOCK_VQ_RX];
+2
tools/include/linux/btf_ids.h
···
 #ifndef _LINUX_BTF_IDS_H
 #define _LINUX_BTF_IDS_H
 
+#include <linux/types.h> /* for u32 */
+
 struct btf_id_set {
 	u32 cnt;
 	u32 ids[];
+721 -72
tools/testing/selftests/net/bind_wildcard.c
···
 
 #include "../kselftest_harness.h"
 
-struct in6_addr in6addr_v4mapped_any = {
+static const __u32 in4addr_any = INADDR_ANY;
+static const __u32 in4addr_loopback = INADDR_LOOPBACK;
+static const struct in6_addr in6addr_v4mapped_any = {
 	.s6_addr = {
 		0, 0, 0, 0,
 		0, 0, 0, 0,
···
 		0, 0, 0, 0
 	}
 };
-
-struct in6_addr in6addr_v4mapped_loopback = {
+static const struct in6_addr in6addr_v4mapped_loopback = {
 	.s6_addr = {
 		0, 0, 0, 0,
 		0, 0, 0, 0,
···
 	}
 };
 
+#define NR_SOCKETS 8
+
 FIXTURE(bind_wildcard)
 {
-	struct sockaddr_in addr4;
-	struct sockaddr_in6 addr6;
+	int fd[NR_SOCKETS];
+	socklen_t addrlen[NR_SOCKETS];
+	union {
+		struct sockaddr addr;
+		struct sockaddr_in addr4;
+		struct sockaddr_in6 addr6;
+	} addr[NR_SOCKETS];
 };
 
 FIXTURE_VARIANT(bind_wildcard)
 {
-	const __u32 addr4_const;
-	const struct in6_addr *addr6_const;
-	int expected_errno;
+	sa_family_t family[2];
+	const void *addr[2];
+	bool ipv6_only[2];
+
+	/* 6 bind() calls below follow two bind() for the defined 2 addresses:
+	 *
+	 *   0.0.0.0
+	 *   127.0.0.1
+	 *   ::
+	 *   ::1
+	 *   ::ffff:0.0.0.0
+	 *   ::ffff:127.0.0.1
+	 */
+	int expected_errno[NR_SOCKETS];
+	int expected_reuse_errno[NR_SOCKETS];
 };
 
+/* (IPv4, IPv4) */
+FIXTURE_VARIANT_ADD(bind_wildcard, v4_any_v4_local)
+{
+	.family = {AF_INET, AF_INET},
+	.addr = {&in4addr_any, &in4addr_loopback},
+	.expected_errno = {0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, 0,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, 0,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+FIXTURE_VARIANT_ADD(bind_wildcard, v4_local_v4_any)
+{
+	.family = {AF_INET, AF_INET},
+	.addr = {&in4addr_loopback, &in4addr_any},
+	.expected_errno = {0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, 0,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, 0,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+/* (IPv4, IPv6) */
 FIXTURE_VARIANT_ADD(bind_wildcard, v4_any_v6_any)
 {
-	.addr4_const = INADDR_ANY,
-	.addr6_const = &in6addr_any,
-	.expected_errno = EADDRINUSE,
+	.family = {AF_INET, AF_INET6},
+	.addr = {&in4addr_any, &in6addr_any},
+	.expected_errno = {0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, 0,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+FIXTURE_VARIANT_ADD(bind_wildcard, v4_any_v6_any_only)
+{
+	.family = {AF_INET, AF_INET6},
+	.addr = {&in4addr_any, &in6addr_any},
+	.ipv6_only = {false, true},
+	.expected_errno = {0, 0,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE},
 };
 
 FIXTURE_VARIANT_ADD(bind_wildcard, v4_any_v6_local)
 {
-	.addr4_const = INADDR_ANY,
-	.addr6_const = &in6addr_loopback,
-	.expected_errno = 0,
+	.family = {AF_INET, AF_INET6},
+	.addr = {&in4addr_any, &in6addr_loopback},
+	.expected_errno = {0, 0,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE},
 };
 
 FIXTURE_VARIANT_ADD(bind_wildcard, v4_any_v6_v4mapped_any)
 {
-	.addr4_const = INADDR_ANY,
-	.addr6_const = &in6addr_v4mapped_any,
-	.expected_errno = EADDRINUSE,
+	.family = {AF_INET, AF_INET6},
+	.addr = {&in4addr_any, &in6addr_v4mapped_any},
+	.expected_errno = {0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, 0,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, 0,
+				 EADDRINUSE, EADDRINUSE},
 };
 
 FIXTURE_VARIANT_ADD(bind_wildcard, v4_any_v6_v4mapped_local)
 {
-	.addr4_const = INADDR_ANY,
-	.addr6_const = &in6addr_v4mapped_loopback,
-	.expected_errno = EADDRINUSE,
+	.family = {AF_INET, AF_INET6},
+	.addr = {&in4addr_any, &in6addr_v4mapped_loopback},
+	.expected_errno = {0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, 0,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, 0,
+				 EADDRINUSE, EADDRINUSE},
 };
 
 FIXTURE_VARIANT_ADD(bind_wildcard, v4_local_v6_any)
 {
-	.addr4_const = INADDR_LOOPBACK,
-	.addr6_const = &in6addr_any,
-	.expected_errno = EADDRINUSE,
+	.family = {AF_INET, AF_INET6},
+	.addr = {&in4addr_loopback, &in6addr_any},
+	.expected_errno = {0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, 0,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+FIXTURE_VARIANT_ADD(bind_wildcard, v4_local_v6_any_only)
+{
+	.family = {AF_INET, AF_INET6},
+	.addr = {&in4addr_loopback, &in6addr_any},
+	.ipv6_only = {false, true},
+	.expected_errno = {0, 0,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE},
 };
 
 FIXTURE_VARIANT_ADD(bind_wildcard, v4_local_v6_local)
 {
-	.addr4_const = INADDR_LOOPBACK,
-	.addr6_const = &in6addr_loopback,
-	.expected_errno = 0,
+	.family = {AF_INET, AF_INET6},
+	.addr = {&in4addr_loopback, &in6addr_loopback},
+	.expected_errno = {0, 0,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE},
 };
 
 FIXTURE_VARIANT_ADD(bind_wildcard, v4_local_v6_v4mapped_any)
 {
-	.addr4_const = INADDR_LOOPBACK,
-	.addr6_const = &in6addr_v4mapped_any,
-	.expected_errno = EADDRINUSE,
+	.family = {AF_INET, AF_INET6},
+	.addr = {&in4addr_loopback, &in6addr_v4mapped_any},
+	.expected_errno = {0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, 0,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, 0,
+				 EADDRINUSE, EADDRINUSE},
 };
 
 FIXTURE_VARIANT_ADD(bind_wildcard, v4_local_v6_v4mapped_local)
 {
-	.addr4_const = INADDR_LOOPBACK,
-	.addr6_const = &in6addr_v4mapped_loopback,
-	.expected_errno = EADDRINUSE,
+	.family = {AF_INET, AF_INET6},
+	.addr = {&in4addr_loopback, &in6addr_v4mapped_loopback},
+	.expected_errno = {0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, 0,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, 0,
+				 EADDRINUSE, EADDRINUSE},
 };
+
+/* (IPv6, IPv4) */
+FIXTURE_VARIANT_ADD(bind_wildcard, v6_any_v4_any)
+{
+	.family = {AF_INET6, AF_INET},
+	.addr = {&in6addr_any, &in4addr_any},
+	.expected_errno = {0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+FIXTURE_VARIANT_ADD(bind_wildcard, v6_any_only_v4_any)
+{
+	.family = {AF_INET6, AF_INET},
+	.addr = {&in6addr_any, &in4addr_any},
+	.ipv6_only = {true, false},
+	.expected_errno = {0, 0,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+FIXTURE_VARIANT_ADD(bind_wildcard, v6_any_v4_local)
+{
+	.family = {AF_INET6, AF_INET},
+	.addr = {&in6addr_any, &in4addr_loopback},
+	.expected_errno = {0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+FIXTURE_VARIANT_ADD(bind_wildcard, v6_any_only_v4_local)
+{
+	.family = {AF_INET6, AF_INET},
+	.addr = {&in6addr_any, &in4addr_loopback},
+	.ipv6_only = {true, false},
+	.expected_errno = {0, 0,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+FIXTURE_VARIANT_ADD(bind_wildcard, v6_local_v4_any)
+{
+	.family = {AF_INET6, AF_INET},
+	.addr = {&in6addr_loopback, &in4addr_any},
+	.expected_errno = {0, 0,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+FIXTURE_VARIANT_ADD(bind_wildcard, v6_local_v4_local)
+{
+	.family = {AF_INET6, AF_INET},
+	.addr = {&in6addr_loopback, &in4addr_loopback},
+	.expected_errno = {0, 0,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+FIXTURE_VARIANT_ADD(bind_wildcard, v6_v4mapped_any_v4_any)
+{
+	.family = {AF_INET6, AF_INET},
+	.addr = {&in6addr_v4mapped_any, &in4addr_any},
+	.expected_errno = {0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, 0,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, 0,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+FIXTURE_VARIANT_ADD(bind_wildcard, v6_v4mapped_any_v4_local)
+{
+	.family = {AF_INET6, AF_INET},
+	.addr = {&in6addr_v4mapped_any, &in4addr_loopback},
+	.expected_errno = {0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, 0,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, 0,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+FIXTURE_VARIANT_ADD(bind_wildcard, v6_v4mapped_local_v4_any)
+{
+	.family = {AF_INET6, AF_INET},
+	.addr = {&in6addr_v4mapped_loopback, &in4addr_any},
+	.expected_errno = {0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, 0,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, 0,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+FIXTURE_VARIANT_ADD(bind_wildcard, v6_v4mapped_local_v4_local)
+{
+	.family = {AF_INET6, AF_INET},
+	.addr = {&in6addr_v4mapped_loopback, &in4addr_loopback},
+	.expected_errno = {0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, 0,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, 0,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+/* (IPv6, IPv6) */
+FIXTURE_VARIANT_ADD(bind_wildcard, v6_any_v6_any)
+{
+	.family = {AF_INET6, AF_INET6},
+	.addr = {&in6addr_any, &in6addr_any},
+	.expected_errno = {0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+FIXTURE_VARIANT_ADD(bind_wildcard, v6_any_only_v6_any)
+{
+	.family = {AF_INET6, AF_INET6},
+	.addr = {&in6addr_any, &in6addr_any},
+	.ipv6_only = {true, false},
+	.expected_errno = {0, EADDRINUSE,
+			   0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+FIXTURE_VARIANT_ADD(bind_wildcard, v6_any_v6_any_only)
+{
+	.family = {AF_INET6, AF_INET6},
+	.addr = {&in6addr_any, &in6addr_any},
+	.ipv6_only = {false, true},
+	.expected_errno = {0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+FIXTURE_VARIANT_ADD(bind_wildcard, v6_any_only_v6_any_only)
+{
+	.family = {AF_INET6, AF_INET6},
+	.addr = {&in6addr_any, &in6addr_any},
+	.ipv6_only = {true, true},
+	.expected_errno = {0, EADDRINUSE,
+			   0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 0, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE},
+};
+
+FIXTURE_VARIANT_ADD(bind_wildcard, v6_any_v6_local)
+{
+	.family = {AF_INET6, AF_INET6},
+	.addr = {&in6addr_any, &in6addr_loopback},
+	.expected_errno = {0, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE,
+			   EADDRINUSE, EADDRINUSE},
+	.expected_reuse_errno = {0, 0,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE,
+				 EADDRINUSE, EADDRINUSE},
+};
+
FIXTURE_VARIANT_ADD(bind_wildcard, v6_any_only_v6_local) 449 + { 450 + .family = {AF_INET6, AF_INET6}, 451 + .addr = {&in6addr_any, &in6addr_loopback}, 452 + .ipv6_only = {true, false}, 453 + .expected_errno = {0, EADDRINUSE, 454 + 0, EADDRINUSE, 455 + EADDRINUSE, EADDRINUSE, 456 + EADDRINUSE, EADDRINUSE}, 457 + .expected_reuse_errno = {0, 0, 458 + 0, EADDRINUSE, 459 + EADDRINUSE, EADDRINUSE, 460 + EADDRINUSE, EADDRINUSE}, 461 + }; 462 + 463 + FIXTURE_VARIANT_ADD(bind_wildcard, v6_any_v6_v4mapped_any) 464 + { 465 + .family = {AF_INET6, AF_INET6}, 466 + .addr = {&in6addr_any, &in6addr_v4mapped_any}, 467 + .expected_errno = {0, EADDRINUSE, 468 + EADDRINUSE, EADDRINUSE, 469 + EADDRINUSE, EADDRINUSE, 470 + EADDRINUSE, EADDRINUSE}, 471 + .expected_reuse_errno = {0, 0, 472 + EADDRINUSE, EADDRINUSE, 473 + EADDRINUSE, EADDRINUSE, 474 + EADDRINUSE, EADDRINUSE}, 475 + }; 476 + 477 + FIXTURE_VARIANT_ADD(bind_wildcard, v6_any_only_v6_v4mapped_any) 478 + { 479 + .family = {AF_INET6, AF_INET6}, 480 + .addr = {&in6addr_any, &in6addr_v4mapped_any}, 481 + .ipv6_only = {true, false}, 482 + .expected_errno = {0, 0, 483 + EADDRINUSE, EADDRINUSE, 484 + EADDRINUSE, EADDRINUSE, 485 + EADDRINUSE, EADDRINUSE}, 486 + .expected_reuse_errno = {0, 0, 487 + EADDRINUSE, EADDRINUSE, 488 + EADDRINUSE, EADDRINUSE, 489 + EADDRINUSE, EADDRINUSE}, 490 + }; 491 + 492 + FIXTURE_VARIANT_ADD(bind_wildcard, v6_any_v6_v4mapped_local) 493 + { 494 + .family = {AF_INET6, AF_INET6}, 495 + .addr = {&in6addr_any, &in6addr_v4mapped_loopback}, 496 + .expected_errno = {0, EADDRINUSE, 497 + EADDRINUSE, EADDRINUSE, 498 + EADDRINUSE, EADDRINUSE, 499 + EADDRINUSE, EADDRINUSE}, 500 + .expected_reuse_errno = {0, 0, 501 + EADDRINUSE, EADDRINUSE, 502 + EADDRINUSE, EADDRINUSE, 503 + EADDRINUSE, EADDRINUSE}, 504 + }; 505 + 506 + FIXTURE_VARIANT_ADD(bind_wildcard, v6_any_only_v6_v4mapped_local) 507 + { 508 + .family = {AF_INET6, AF_INET6}, 509 + .addr = {&in6addr_any, &in6addr_v4mapped_loopback}, 510 + .ipv6_only = {true, 
false}, 511 + .expected_errno = {0, 0, 512 + EADDRINUSE, EADDRINUSE, 513 + EADDRINUSE, EADDRINUSE, 514 + EADDRINUSE, EADDRINUSE}, 515 + .expected_reuse_errno = {0, 0, 516 + EADDRINUSE, EADDRINUSE, 517 + EADDRINUSE, EADDRINUSE, 518 + EADDRINUSE, EADDRINUSE}, 519 + }; 520 + 521 + FIXTURE_VARIANT_ADD(bind_wildcard, v6_local_v6_any) 522 + { 523 + .family = {AF_INET6, AF_INET6}, 524 + .addr = {&in6addr_loopback, &in6addr_any}, 525 + .expected_errno = {0, EADDRINUSE, 526 + 0, EADDRINUSE, 527 + EADDRINUSE, EADDRINUSE, 528 + EADDRINUSE, EADDRINUSE}, 529 + .expected_reuse_errno = {0, 0, 530 + EADDRINUSE, EADDRINUSE, 531 + EADDRINUSE, EADDRINUSE, 532 + EADDRINUSE, EADDRINUSE}, 533 + }; 534 + 535 + FIXTURE_VARIANT_ADD(bind_wildcard, v6_local_v6_any_only) 536 + { 537 + .family = {AF_INET6, AF_INET6}, 538 + .addr = {&in6addr_loopback, &in6addr_any}, 539 + .ipv6_only = {false, true}, 540 + .expected_errno = {0, EADDRINUSE, 541 + 0, EADDRINUSE, 542 + EADDRINUSE, EADDRINUSE, 543 + EADDRINUSE, EADDRINUSE}, 544 + .expected_reuse_errno = {0, 0, 545 + 0, EADDRINUSE, 546 + EADDRINUSE, EADDRINUSE, 547 + EADDRINUSE, EADDRINUSE}, 548 + }; 549 + 550 + FIXTURE_VARIANT_ADD(bind_wildcard, v6_local_v6_v4mapped_any) 551 + { 552 + .family = {AF_INET6, AF_INET6}, 553 + .addr = {&in6addr_loopback, &in6addr_v4mapped_any}, 554 + .expected_errno = {0, 0, 555 + EADDRINUSE, EADDRINUSE, 556 + EADDRINUSE, EADDRINUSE, 557 + EADDRINUSE, EADDRINUSE}, 558 + .expected_reuse_errno = {0, 0, 559 + EADDRINUSE, EADDRINUSE, 560 + EADDRINUSE, EADDRINUSE, 561 + EADDRINUSE, EADDRINUSE}, 562 + }; 563 + 564 + FIXTURE_VARIANT_ADD(bind_wildcard, v6_local_v6_v4mapped_local) 565 + { 566 + .family = {AF_INET6, AF_INET6}, 567 + .addr = {&in6addr_loopback, &in6addr_v4mapped_loopback}, 568 + .expected_errno = {0, 0, 569 + EADDRINUSE, EADDRINUSE, 570 + EADDRINUSE, EADDRINUSE, 571 + EADDRINUSE, EADDRINUSE}, 572 + .expected_reuse_errno = {0, 0, 573 + EADDRINUSE, EADDRINUSE, 574 + EADDRINUSE, EADDRINUSE, 575 + EADDRINUSE, 
EADDRINUSE}, 576 + }; 577 + 578 + FIXTURE_VARIANT_ADD(bind_wildcard, v6_v4mapped_any_v6_any) 579 + { 580 + .family = {AF_INET6, AF_INET6}, 581 + .addr = {&in6addr_v4mapped_any, &in6addr_any}, 582 + .expected_errno = {0, EADDRINUSE, 583 + EADDRINUSE, EADDRINUSE, 584 + EADDRINUSE, 0, 585 + EADDRINUSE, EADDRINUSE}, 586 + .expected_reuse_errno = {0, 0, 587 + EADDRINUSE, EADDRINUSE, 588 + EADDRINUSE, EADDRINUSE, 589 + EADDRINUSE, EADDRINUSE}, 590 + }; 591 + 592 + FIXTURE_VARIANT_ADD(bind_wildcard, v6_v4mapped_any_v6_any_only) 593 + { 594 + .family = {AF_INET6, AF_INET6}, 595 + .addr = {&in6addr_v4mapped_any, &in6addr_any}, 596 + .ipv6_only = {false, true}, 597 + .expected_errno = {0, 0, 598 + EADDRINUSE, EADDRINUSE, 599 + EADDRINUSE, EADDRINUSE, 600 + EADDRINUSE, EADDRINUSE}, 601 + .expected_reuse_errno = {0, 0, 602 + EADDRINUSE, EADDRINUSE, 603 + EADDRINUSE, EADDRINUSE, 604 + EADDRINUSE, EADDRINUSE}, 605 + }; 606 + 607 + FIXTURE_VARIANT_ADD(bind_wildcard, v6_v4mapped_any_v6_local) 608 + { 609 + .family = {AF_INET6, AF_INET6}, 610 + .addr = {&in6addr_v4mapped_any, &in6addr_loopback}, 611 + .expected_errno = {0, 0, 612 + EADDRINUSE, EADDRINUSE, 613 + EADDRINUSE, EADDRINUSE, 614 + EADDRINUSE, EADDRINUSE}, 615 + .expected_reuse_errno = {0, 0, 616 + EADDRINUSE, EADDRINUSE, 617 + EADDRINUSE, EADDRINUSE, 618 + EADDRINUSE, EADDRINUSE}, 619 + }; 620 + 621 + FIXTURE_VARIANT_ADD(bind_wildcard, v6_v4mapped_any_v6_v4mapped_local) 622 + { 623 + .family = {AF_INET6, AF_INET6}, 624 + .addr = {&in6addr_v4mapped_any, &in6addr_v4mapped_loopback}, 625 + .expected_errno = {0, EADDRINUSE, 626 + EADDRINUSE, EADDRINUSE, 627 + EADDRINUSE, 0, 628 + EADDRINUSE, EADDRINUSE}, 629 + .expected_reuse_errno = {0, 0, 630 + EADDRINUSE, EADDRINUSE, 631 + EADDRINUSE, 0, 632 + EADDRINUSE, EADDRINUSE}, 633 + }; 634 + 635 + FIXTURE_VARIANT_ADD(bind_wildcard, v6_v4mapped_loopback_v6_any) 636 + { 637 + .family = {AF_INET6, AF_INET6}, 638 + .addr = {&in6addr_v4mapped_loopback, &in6addr_any}, 639 + 
.expected_errno = {0, EADDRINUSE, 640 + EADDRINUSE, EADDRINUSE, 641 + EADDRINUSE, 0, 642 + EADDRINUSE, EADDRINUSE}, 643 + .expected_reuse_errno = {0, 0, 644 + EADDRINUSE, EADDRINUSE, 645 + EADDRINUSE, EADDRINUSE, 646 + EADDRINUSE, EADDRINUSE}, 647 + }; 648 + 649 + FIXTURE_VARIANT_ADD(bind_wildcard, v6_v4mapped_loopback_v6_any_only) 650 + { 651 + .family = {AF_INET6, AF_INET6}, 652 + .addr = {&in6addr_v4mapped_loopback, &in6addr_any}, 653 + .ipv6_only = {false, true}, 654 + .expected_errno = {0, 0, 655 + EADDRINUSE, EADDRINUSE, 656 + EADDRINUSE, EADDRINUSE, 657 + EADDRINUSE, EADDRINUSE}, 658 + .expected_reuse_errno = {0, 0, 659 + EADDRINUSE, EADDRINUSE, 660 + EADDRINUSE, EADDRINUSE, 661 + EADDRINUSE, EADDRINUSE}, 662 + }; 663 + 664 + FIXTURE_VARIANT_ADD(bind_wildcard, v6_v4mapped_loopback_v6_local) 665 + { 666 + .family = {AF_INET6, AF_INET6}, 667 + .addr = {&in6addr_v4mapped_loopback, &in6addr_loopback}, 668 + .expected_errno = {0, 0, 669 + EADDRINUSE, EADDRINUSE, 670 + EADDRINUSE, EADDRINUSE, 671 + EADDRINUSE, EADDRINUSE}, 672 + .expected_reuse_errno = {0, 0, 673 + EADDRINUSE, EADDRINUSE, 674 + EADDRINUSE, EADDRINUSE, 675 + EADDRINUSE, EADDRINUSE}, 676 + }; 677 + 678 + FIXTURE_VARIANT_ADD(bind_wildcard, v6_v4mapped_loopback_v6_v4mapped_any) 679 + { 680 + .family = {AF_INET6, AF_INET6}, 681 + .addr = {&in6addr_v4mapped_loopback, &in6addr_v4mapped_any}, 682 + .expected_errno = {0, EADDRINUSE, 683 + EADDRINUSE, EADDRINUSE, 684 + EADDRINUSE, 0, 685 + EADDRINUSE, EADDRINUSE}, 686 + .expected_reuse_errno = {0, 0, 687 + EADDRINUSE, EADDRINUSE, 688 + EADDRINUSE, 0, 689 + EADDRINUSE, EADDRINUSE}, 690 + }; 691 + 692 + static void setup_addr(FIXTURE_DATA(bind_wildcard) *self, int i, 693 + int family, const void *addr_const) 694 + { 695 + if (family == AF_INET) { 696 + struct sockaddr_in *addr4 = &self->addr[i].addr4; 697 + const __u32 *addr4_const = addr_const; 698 + 699 + addr4->sin_family = AF_INET; 700 + addr4->sin_port = htons(0); 701 + addr4->sin_addr.s_addr = 
htonl(*addr4_const); 702 + 703 + self->addrlen[i] = sizeof(struct sockaddr_in); 704 + } else { 705 + struct sockaddr_in6 *addr6 = &self->addr[i].addr6; 706 + const struct in6_addr *addr6_const = addr_const; 707 + 708 + addr6->sin6_family = AF_INET6; 709 + addr6->sin6_port = htons(0); 710 + addr6->sin6_addr = *addr6_const; 711 + 712 + self->addrlen[i] = sizeof(struct sockaddr_in6); 713 + } 714 + } 96 715 97 716 FIXTURE_SETUP(bind_wildcard) 98 717 { 99 - self->addr4.sin_family = AF_INET; 100 - self->addr4.sin_port = htons(0); 101 - self->addr4.sin_addr.s_addr = htonl(variant->addr4_const); 718 + setup_addr(self, 0, variant->family[0], variant->addr[0]); 719 + setup_addr(self, 1, variant->family[1], variant->addr[1]); 102 720 103 - self->addr6.sin6_family = AF_INET6; 104 - self->addr6.sin6_port = htons(0); 105 - self->addr6.sin6_addr = *variant->addr6_const; 721 + setup_addr(self, 2, AF_INET, &in4addr_any); 722 + setup_addr(self, 3, AF_INET, &in4addr_loopback); 723 + 724 + setup_addr(self, 4, AF_INET6, &in6addr_any); 725 + setup_addr(self, 5, AF_INET6, &in6addr_loopback); 726 + setup_addr(self, 6, AF_INET6, &in6addr_v4mapped_any); 727 + setup_addr(self, 7, AF_INET6, &in6addr_v4mapped_loopback); 106 728 } 107 729 108 730 FIXTURE_TEARDOWN(bind_wildcard) 109 731 { 732 + int i; 733 + 734 + for (i = 0; i < NR_SOCKETS; i++) 735 + close(self->fd[i]); 110 736 } 111 737 112 - void bind_sockets(struct __test_metadata *_metadata, 113 - FIXTURE_DATA(bind_wildcard) *self, 114 - int expected_errno, 115 - struct sockaddr *addr1, socklen_t addrlen1, 116 - struct sockaddr *addr2, socklen_t addrlen2) 738 + void bind_socket(struct __test_metadata *_metadata, 739 + FIXTURE_DATA(bind_wildcard) *self, 740 + const FIXTURE_VARIANT(bind_wildcard) *variant, 741 + int i, int reuse) 117 742 { 118 - int fd[2]; 119 743 int ret; 120 744 121 - fd[0] = socket(addr1->sa_family, SOCK_STREAM, 0); 122 - ASSERT_GT(fd[0], 0); 745 + self->fd[i] = socket(self->addr[i].addr.sa_family, SOCK_STREAM, 0); 746 + 
ASSERT_GT(self->fd[i], 0); 123 747 124 - ret = bind(fd[0], addr1, addrlen1); 125 - ASSERT_EQ(ret, 0); 126 - 127 - ret = getsockname(fd[0], addr1, &addrlen1); 128 - ASSERT_EQ(ret, 0); 129 - 130 - ((struct sockaddr_in *)addr2)->sin_port = ((struct sockaddr_in *)addr1)->sin_port; 131 - 132 - fd[1] = socket(addr2->sa_family, SOCK_STREAM, 0); 133 - ASSERT_GT(fd[1], 0); 134 - 135 - ret = bind(fd[1], addr2, addrlen2); 136 - if (expected_errno) { 137 - ASSERT_EQ(ret, -1); 138 - ASSERT_EQ(errno, expected_errno); 139 - } else { 748 + if (i < 2 && variant->ipv6_only[i]) { 749 + ret = setsockopt(self->fd[i], SOL_IPV6, IPV6_V6ONLY, &(int){1}, sizeof(int)); 140 750 ASSERT_EQ(ret, 0); 141 751 } 142 752 143 - close(fd[1]); 144 - close(fd[0]); 753 + if (i < 2 && reuse) { 754 + ret = setsockopt(self->fd[i], SOL_SOCKET, reuse, &(int){1}, sizeof(int)); 755 + ASSERT_EQ(ret, 0); 756 + } 757 + 758 + self->addr[i].addr4.sin_port = self->addr[0].addr4.sin_port; 759 + 760 + ret = bind(self->fd[i], &self->addr[i].addr, self->addrlen[i]); 761 + 762 + if (reuse) { 763 + if (variant->expected_reuse_errno[i]) { 764 + ASSERT_EQ(ret, -1); 765 + ASSERT_EQ(errno, variant->expected_reuse_errno[i]); 766 + } else { 767 + ASSERT_EQ(ret, 0); 768 + } 769 + } else { 770 + if (variant->expected_errno[i]) { 771 + ASSERT_EQ(ret, -1); 772 + ASSERT_EQ(errno, variant->expected_errno[i]); 773 + } else { 774 + ASSERT_EQ(ret, 0); 775 + } 776 + } 777 + 778 + if (i == 0) { 779 + ret = getsockname(self->fd[0], &self->addr[0].addr, &self->addrlen[0]); 780 + ASSERT_EQ(ret, 0); 781 + } 145 782 } 146 783 147 - TEST_F(bind_wildcard, v4_v6) 784 + TEST_F(bind_wildcard, plain) 148 785 { 149 - bind_sockets(_metadata, self, variant->expected_errno, 150 - (struct sockaddr *)&self->addr4, sizeof(self->addr4), 151 - (struct sockaddr *)&self->addr6, sizeof(self->addr6)); 786 + int i; 787 + 788 + for (i = 0; i < NR_SOCKETS; i++) 789 + bind_socket(_metadata, self, variant, i, 0); 152 790 } 153 791 154 - TEST_F(bind_wildcard, v6_v4) 
792 + TEST_F(bind_wildcard, reuseaddr) 155 793 { 156 - bind_sockets(_metadata, self, variant->expected_errno, 157 - (struct sockaddr *)&self->addr6, sizeof(self->addr6), 158 - (struct sockaddr *)&self->addr4, sizeof(self->addr4)); 794 + int i; 795 + 796 + for (i = 0; i < NR_SOCKETS; i++) 797 + bind_socket(_metadata, self, variant, i, SO_REUSEADDR); 798 + } 799 + 800 + TEST_F(bind_wildcard, reuseport) 801 + { 802 + int i; 803 + 804 + for (i = 0; i < NR_SOCKETS; i++) 805 + bind_socket(_metadata, self, variant, i, SO_REUSEPORT); 159 806 } 160 807 161 808 TEST_HARNESS_MAIN
tools/testing/selftests/net/mptcp/mptcp_connect.sh (+9)
···
 	local stat_cookierx_last
 	local stat_csum_err_s
 	local stat_csum_err_c
+	local stat_tcpfb_last_l
 	stat_synrx_last_l=$(mptcp_lib_get_counter "${listener_ns}" "MPTcpExtMPCapableSYNRX")
 	stat_ackrx_last_l=$(mptcp_lib_get_counter "${listener_ns}" "MPTcpExtMPCapableACKRX")
 	stat_cookietx_last=$(mptcp_lib_get_counter "${listener_ns}" "TcpExtSyncookiesSent")
 	stat_cookierx_last=$(mptcp_lib_get_counter "${listener_ns}" "TcpExtSyncookiesRecv")
 	stat_csum_err_s=$(mptcp_lib_get_counter "${listener_ns}" "MPTcpExtDataCsumErr")
 	stat_csum_err_c=$(mptcp_lib_get_counter "${connector_ns}" "MPTcpExtDataCsumErr")
+	stat_tcpfb_last_l=$(mptcp_lib_get_counter "${listener_ns}" "MPTcpExtMPCapableFallbackACK")
 
 	timeout ${timeout_test} \
 		ip netns exec ${listener_ns} \
···
 	local stat_cookietx_now
 	local stat_cookierx_now
 	local stat_ooo_now
+	local stat_tcpfb_now_l
 	stat_synrx_now_l=$(mptcp_lib_get_counter "${listener_ns}" "MPTcpExtMPCapableSYNRX")
 	stat_ackrx_now_l=$(mptcp_lib_get_counter "${listener_ns}" "MPTcpExtMPCapableACKRX")
 	stat_cookietx_now=$(mptcp_lib_get_counter "${listener_ns}" "TcpExtSyncookiesSent")
 	stat_cookierx_now=$(mptcp_lib_get_counter "${listener_ns}" "TcpExtSyncookiesRecv")
 	stat_ooo_now=$(mptcp_lib_get_counter "${listener_ns}" "TcpExtTCPOFOQueue")
+	stat_tcpfb_now_l=$(mptcp_lib_get_counter "${listener_ns}" "MPTcpExtMPCapableFallbackACK")
 
 	expect_synrx=$((stat_synrx_last_l))
 	expect_ackrx=$((stat_ackrx_last_l))
···
 		mptcp_lib_pr_fail "client got ${csum_err_c_nr} data checksum error[s]"
 		retc=1
 	fi
+	fi
+
+	if [ ${stat_ooo_now} -eq 0 ] && [ ${stat_tcpfb_last_l} -ne ${stat_tcpfb_now_l} ]; then
+		mptcp_lib_pr_fail "unexpected fallback to TCP"
+		rets=1
 	fi
 
 	if [ $cookies -eq 2 ];then
tools/testing/selftests/net/mptcp/mptcp_join.sh (+3 -1)
···
 		[ -n "$_flags" ]; flags="flags $_flags"
 		shift
 	elif [ $1 = "dev" ]; then
-		[ -n "$2" ]; dev="dev $1"
+		[ -n "$2" ]; dev="dev $2"
 		shift
 	elif [ $1 = "id" ]; then
 		_id=$2
···
 	local tests_pid=$!
 
 	wait_mpj $ns2
+	pm_nl_check_endpoint "creation" \
+		$ns2 10.0.2.2 id 2 flags subflow dev ns2eth2
 	chk_subflow_nr "before delete" 2
 	chk_mptcp_info subflows 1 subflows 1
tools/testing/selftests/net/reuseaddr_conflict.c (+1 -1)
···
 	fd1 = open_port(0, 1);
 	if (fd1 >= 0)
 		error(1, 0, "Was allowed to create an ipv4 reuseport on an already bound non-reuseport socket with no ipv6");
-	fprintf(stderr, "Success");
+	fprintf(stderr, "Success\n");
 	return 0;
 }
tools/testing/selftests/net/udpgro_fwd.sh (+2 -8)
···
 	create_vxlan_pair
 	ip netns exec $NS_DST ethtool -K veth$DST generic-receive-offload on
 	ip netns exec $NS_DST ethtool -K veth$DST rx-gro-list on
-	run_test "GRO frag list over UDP tunnel" $OL_NET$DST 1 1
+	run_test "GRO frag list over UDP tunnel" $OL_NET$DST 10 10
 	cleanup
 
 	# use NAT to circumvent GRO FWD check
···
 	# load arp cache before running the test to reduce the amount of
 	# stray traffic on top of the UDP tunnel
 	ip netns exec $NS_SRC $PING -q -c 1 $OL_NET$DST_NAT >/dev/null
-	run_test "GRO fwd over UDP tunnel" $OL_NET$DST_NAT 1 1 $OL_NET$DST
-	cleanup
-
-	create_vxlan_pair
-	run_bench "UDP tunnel fwd perf" $OL_NET$DST
-	ip netns exec $NS_DST ethtool -K veth$DST rx-udp-gro-forwarding on
-	run_bench "UDP tunnel GRO fwd perf" $OL_NET$DST
+	run_test "GRO fwd over UDP tunnel" $OL_NET$DST_NAT 10 10 $OL_NET$DST
 	cleanup
 done