Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-5.12-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Networking fixes for 5.12-rc7, including fixes from can, ipsec,
mac80211, wireless, and bpf trees.

No scary regressions here or in the works, but small fixes for 5.12
changes keep coming.

Current release - regressions:

- virtio: do not pull payload in skb->head

- virtio: ensure mac header is set in virtio_net_hdr_to_skb()

- Revert "net: correct sk_acceptq_is_full()"

- mptcp: revert "mptcp: provide subflow aware release function"

- ethernet: lan743x: fix ethernet frame cutoff issue

- dsa: fix type was not set for devlink port

- ethtool: remove link_mode param and derive link params from driver

- sched: htb: fix null pointer dereference on a null new_q

- wireless: iwlwifi: Fix softirq/hardirq disabling in
iwl_pcie_enqueue_hcmd()

- wireless: iwlwifi: fw: fix notification wait locking

- wireless: brcmfmac: p2p: Fix deadlock introduced by avoiding the
rtnl dependency

Current release - new code bugs:

- napi: fix hangup on napi_disable for threaded napi

- bpf: take module reference for trampoline in module

- wireless: mt76: mt7921: fix airtime reporting and related tx hangs

- wireless: iwlwifi: mvm: rfi: don't lock mvm->mutex when sending
config command

Previous releases - regressions:

- rfkill: revert back to old userspace API by default

- nfc: fix infinite loop, refcount & memory leaks in LLCP sockets

- let skb_orphan_partial wake-up waiters

- xfrm/compat: Cleanup WARN()s that can be user-triggered

- vxlan, geneve: do not modify the shared tunnel info when PMTU
triggers an ICMP reply

- can: fix msg_namelen values depending on CAN_REQUIRED_SIZE

- can: uapi: mark union inside struct can_frame packed

- sched: cls: fix action overwrite reference counting

- sched: cls: fix err handler in tcf_action_init()

- ethernet: mlxsw: fix ECN marking in tunnel decapsulation

- ethernet: nfp: Fix a use after free in nfp_bpf_ctrl_msg_rx

- ethernet: i40e: fix receiving of single packets in xsk zero-copy
mode

- ethernet: cxgb4: avoid collecting SGE_QBASE regs during traffic

Previous releases - always broken:

- bpf: Refuse non-O_RDWR flags in BPF_OBJ_GET

- bpf: Refcount task stack in bpf_get_task_stack

- bpf, x86: Validate computation of branch displacements

- ieee802154: fix many similar syzbot-found bugs
- fix NULL dereferences in netlink attribute handling
- reject unsupported operations on monitor interfaces
- fix error handling in llsec_key_alloc()

- xfrm: make ipv4 pmtu check honor ip header df

- xfrm: make hash generation lock per network namespace

- xfrm: esp: delete NETIF_F_SCTP_CRC bit from features for esp
offload

- ethtool: fix incorrect datatype in set_eee ops

- xdp: fix xdp_return_frame() kernel BUG throw for page_pool memory
model

- openvswitch: fix send of uninitialized stack memory in ct limit
reply

Misc:

- udp: add get handling for UDP_GRO sockopt"

* tag 'net-5.12-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (182 commits)
net: fix hangup on napi_disable for threaded napi
net: hns3: Trivial spell fix in hns3 driver
lan743x: fix ethernet frame cutoff issue
net: ipv6: check for validity before dereferencing cfg->fc_nlinfo.nlh
net: dsa: lantiq_gswip: Configure all remaining GSWIP_MII_CFG bits
net: dsa: lantiq_gswip: Don't use PHY auto polling
net: sched: sch_teql: fix null-pointer dereference
ipv6: report errors for iftoken via netlink extack
net: sched: fix err handler in tcf_action_init()
net: sched: fix action overwrite reference counting
Revert "net: sched: bump refcount for new action in ACT replace mode"
ice: fix memory leak of aRFS after resuming from suspend
i40e: Fix sparse warning: missing error code 'err'
i40e: Fix sparse error: 'vsi->netdev' could be null
i40e: Fix sparse error: uninitialized symbol 'ring'
i40e: Fix sparse errors in i40e_txrx.c
i40e: Fix parameters in aq_get_phy_register()
nl80211: fix beacon head validation
bpf, x86: Validate computation of branch displacements for x86-32
bpf, x86: Validate computation of branch displacements for x86-64
...

+1871 -714
+1 -1
Documentation/devicetree/bindings/net/brcm,bcm4908-enet.yaml
··· 32 32 - interrupts 33 33 - interrupt-names 34 34 35 - additionalProperties: false 35 + unevaluatedProperties: false 36 36 37 37 examples: 38 38 - |
+1 -1
Documentation/devicetree/bindings/net/ethernet-controller.yaml
··· 49 49 description: 50 50 Reference to an nvmem node for the MAC address 51 51 52 - nvmem-cells-names: 52 + nvmem-cell-names: 53 53 const: mac-address 54 54 55 55 phy-connection-type:
+94 -2
Documentation/devicetree/bindings/net/micrel-ksz90x1.txt
··· 65 65 step is 60ps. The default value is the neutral setting, so setting 66 66 rxc-skew-ps=<0> actually results in -900 picoseconds adjustment. 67 67 68 + The KSZ9031 hardware supports a range of skew values from negative to 69 + positive, where the specific range is property dependent. All values 70 + specified in the devicetree are offset by the minimum value so they 71 + can be represented as positive integers in the devicetree since it's 72 + difficult to represent a negative number in the devictree. 73 + 74 + The following 5-bit values table apply to rxc-skew-ps and txc-skew-ps. 75 + 76 + Pad Skew Value Delay (ps) Devicetree Value 77 + ------------------------------------------------------ 78 + 0_0000 -900ps 0 79 + 0_0001 -840ps 60 80 + 0_0010 -780ps 120 81 + 0_0011 -720ps 180 82 + 0_0100 -660ps 240 83 + 0_0101 -600ps 300 84 + 0_0110 -540ps 360 85 + 0_0111 -480ps 420 86 + 0_1000 -420ps 480 87 + 0_1001 -360ps 540 88 + 0_1010 -300ps 600 89 + 0_1011 -240ps 660 90 + 0_1100 -180ps 720 91 + 0_1101 -120ps 780 92 + 0_1110 -60ps 840 93 + 0_1111 0ps 900 94 + 1_0000 60ps 960 95 + 1_0001 120ps 1020 96 + 1_0010 180ps 1080 97 + 1_0011 240ps 1140 98 + 1_0100 300ps 1200 99 + 1_0101 360ps 1260 100 + 1_0110 420ps 1320 101 + 1_0111 480ps 1380 102 + 1_1000 540ps 1440 103 + 1_1001 600ps 1500 104 + 1_1010 660ps 1560 105 + 1_1011 720ps 1620 106 + 1_1100 780ps 1680 107 + 1_1101 840ps 1740 108 + 1_1110 900ps 1800 109 + 1_1111 960ps 1860 110 + 111 + The following 4-bit values table apply to the txdX-skew-ps, rxdX-skew-ps 112 + data pads, and the rxdv-skew-ps, txen-skew-ps control pads. 
113 + 114 + Pad Skew Value Delay (ps) Devicetree Value 115 + ------------------------------------------------------ 116 + 0000 -420ps 0 117 + 0001 -360ps 60 118 + 0010 -300ps 120 119 + 0011 -240ps 180 120 + 0100 -180ps 240 121 + 0101 -120ps 300 122 + 0110 -60ps 360 123 + 0111 0ps 420 124 + 1000 60ps 480 125 + 1001 120ps 540 126 + 1010 180ps 600 127 + 1011 240ps 660 128 + 1100 300ps 720 129 + 1101 360ps 780 130 + 1110 420ps 840 131 + 1111 480ps 900 132 + 68 133 Optional properties: 69 134 70 135 Maximum value of 1860, default value 900: ··· 185 120 186 121 Examples: 187 122 123 + /* Attach to an Ethernet device with autodetected PHY */ 124 + &enet { 125 + rxc-skew-ps = <1800>; 126 + rxdv-skew-ps = <0>; 127 + txc-skew-ps = <1800>; 128 + txen-skew-ps = <0>; 129 + status = "okay"; 130 + }; 131 + 132 + /* Attach to an explicitly-specified PHY */ 188 133 mdio { 189 134 phy0: ethernet-phy@0 { 190 - rxc-skew-ps = <3000>; 135 + rxc-skew-ps = <1800>; 191 136 rxdv-skew-ps = <0>; 192 - txc-skew-ps = <3000>; 137 + txc-skew-ps = <1800>; 193 138 txen-skew-ps = <0>; 194 139 reg = <0>; 195 140 }; ··· 208 133 phy = <&phy0>; 209 134 phy-mode = "rgmii-id"; 210 135 }; 136 + 137 + References 138 + 139 + Micrel ksz9021rl/rn Data Sheet, Revision 1.2. Dated 2/13/2014. 140 + http://www.micrel.com/_PDF/Ethernet/datasheets/ksz9021rl-rn_ds.pdf 141 + 142 + Micrel ksz9031rnx Data Sheet, Revision 2.1. Dated 11/20/2014. 143 + http://www.micrel.com/_PDF/Ethernet/datasheets/KSZ9031RNX.pdf 144 + 145 + Notes: 146 + 147 + Note that a previous version of the Micrel ksz9021rl/rn Data Sheet 148 + was missing extended register 106 (transmit data pad skews), and 149 + incorrectly specified the ps per step as 200ps/step instead of 150 + 120ps/step. The latest update to this document reflects the latest 151 + revision of the Micrel specification even though usage in the kernel 152 + still reflects that incorrect document.
+5 -5
Documentation/networking/ethtool-netlink.rst
··· 976 976 977 977 978 978 PAUSE_GET 979 - ============ 979 + ========= 980 980 981 - Gets channel counts like ``ETHTOOL_GPAUSE`` ioctl request. 981 + Gets pause frame settings like ``ETHTOOL_GPAUSEPARAM`` ioctl request. 982 982 983 983 Request contents: 984 984 ··· 1007 1007 Each member has a corresponding attribute defined. 1008 1008 1009 1009 PAUSE_SET 1010 - ============ 1010 + ========= 1011 1011 1012 1012 Sets pause parameters like ``ETHTOOL_GPAUSEPARAM`` ioctl request. 1013 1013 ··· 1024 1024 EEE_GET 1025 1025 ======= 1026 1026 1027 - Gets channel counts like ``ETHTOOL_GEEE`` ioctl request. 1027 + Gets Energy Efficient Ethernet settings like ``ETHTOOL_GEEE`` ioctl request. 1028 1028 1029 1029 Request contents: 1030 1030 ··· 1054 1054 EEE_SET 1055 1055 ======= 1056 1056 1057 - Sets pause parameters like ``ETHTOOL_GEEEPARAM`` ioctl request. 1057 + Sets Energy Efficient Ethernet parameters like ``ETHTOOL_SEEE`` ioctl request. 1058 1058 1059 1059 Request contents: 1060 1060
+8
MAINTAINERS
··· 14850 14850 S: Maintained 14851 14851 F: drivers/iommu/arm/arm-smmu/qcom_iommu.c 14852 14852 14853 + QUALCOMM IPC ROUTER (QRTR) DRIVER 14854 + M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 14855 + L: linux-arm-msm@vger.kernel.org 14856 + S: Maintained 14857 + F: include/trace/events/qrtr.h 14858 + F: include/uapi/linux/qrtr.h 14859 + F: net/qrtr/ 14860 + 14853 14861 QUALCOMM IPCC MAILBOX DRIVER 14854 14862 M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 14855 14863 L: linux-arm-msm@vger.kernel.org
+10 -1
arch/x86/net/bpf_jit_comp.c
··· 1689 1689 } 1690 1690 1691 1691 if (image) { 1692 - if (unlikely(proglen + ilen > oldproglen)) { 1692 + /* 1693 + * When populating the image, assert that: 1694 + * 1695 + * i) We do not write beyond the allocated space, and 1696 + * ii) addrs[i] did not change from the prior run, in order 1697 + * to validate assumptions made for computing branch 1698 + * displacements. 1699 + */ 1700 + if (unlikely(proglen + ilen > oldproglen || 1701 + proglen + ilen != addrs[i])) { 1693 1702 pr_err("bpf_jit: fatal error\n"); 1694 1703 return -EFAULT; 1695 1704 }
+10 -1
arch/x86/net/bpf_jit_comp32.c
··· 2276 2276 } 2277 2277 2278 2278 if (image) { 2279 - if (unlikely(proglen + ilen > oldproglen)) { 2279 + /* 2280 + * When populating the image, assert that: 2281 + * 2282 + * i) We do not write beyond the allocated space, and 2283 + * ii) addrs[i] did not change from the prior run, in order 2284 + * to validate assumptions made for computing branch 2285 + * displacements. 2286 + */ 2287 + if (unlikely(proglen + ilen > oldproglen || 2288 + proglen + ilen != addrs[i])) { 2280 2289 pr_err("bpf_jit: fatal error\n"); 2281 2290 return -EFAULT; 2282 2291 }
+18 -6
drivers/net/can/spi/mcp251x.c
··· 314 314 return ret; 315 315 } 316 316 317 + static int mcp251x_spi_write(struct spi_device *spi, int len) 318 + { 319 + struct mcp251x_priv *priv = spi_get_drvdata(spi); 320 + int ret; 321 + 322 + ret = spi_write(spi, priv->spi_tx_buf, len); 323 + if (ret) 324 + dev_err(&spi->dev, "spi write failed: ret = %d\n", ret); 325 + 326 + return ret; 327 + } 328 + 317 329 static u8 mcp251x_read_reg(struct spi_device *spi, u8 reg) 318 330 { 319 331 struct mcp251x_priv *priv = spi_get_drvdata(spi); ··· 373 361 priv->spi_tx_buf[1] = reg; 374 362 priv->spi_tx_buf[2] = val; 375 363 376 - mcp251x_spi_trans(spi, 3); 364 + mcp251x_spi_write(spi, 3); 377 365 } 378 366 379 367 static void mcp251x_write_2regs(struct spi_device *spi, u8 reg, u8 v1, u8 v2) ··· 385 373 priv->spi_tx_buf[2] = v1; 386 374 priv->spi_tx_buf[3] = v2; 387 375 388 - mcp251x_spi_trans(spi, 4); 376 + mcp251x_spi_write(spi, 4); 389 377 } 390 378 391 379 static void mcp251x_write_bits(struct spi_device *spi, u8 reg, ··· 398 386 priv->spi_tx_buf[2] = mask; 399 387 priv->spi_tx_buf[3] = val; 400 388 401 - mcp251x_spi_trans(spi, 4); 389 + mcp251x_spi_write(spi, 4); 402 390 } 403 391 404 392 static u8 mcp251x_read_stat(struct spi_device *spi) ··· 630 618 buf[i]); 631 619 } else { 632 620 memcpy(priv->spi_tx_buf, buf, TXBDAT_OFF + len); 633 - mcp251x_spi_trans(spi, TXBDAT_OFF + len); 621 + mcp251x_spi_write(spi, TXBDAT_OFF + len); 634 622 } 635 623 } 636 624 ··· 662 650 663 651 /* use INSTRUCTION_RTS, to avoid "repeated frame problem" */ 664 652 priv->spi_tx_buf[0] = INSTRUCTION_RTS(1 << tx_buf_idx); 665 - mcp251x_spi_trans(priv->spi, 1); 653 + mcp251x_spi_write(priv->spi, 1); 666 654 } 667 655 668 656 static void mcp251x_hw_rx_frame(struct spi_device *spi, u8 *buf, ··· 900 888 mdelay(MCP251X_OST_DELAY_MS); 901 889 902 890 priv->spi_tx_buf[0] = INSTRUCTION_RESET; 903 - ret = mcp251x_spi_trans(spi, 1); 891 + ret = mcp251x_spi_write(spi, 1); 904 892 if (ret) 905 893 return ret; 906 894
+5 -1
drivers/net/can/usb/peak_usb/pcan_usb_core.c
··· 857 857 if (dev->adapter->dev_set_bus) { 858 858 err = dev->adapter->dev_set_bus(dev, 0); 859 859 if (err) 860 - goto lbl_unregister_candev; 860 + goto adap_dev_free; 861 861 } 862 862 863 863 /* get device number early */ ··· 868 868 peak_usb_adapter->name, ctrl_idx, dev->device_number); 869 869 870 870 return 0; 871 + 872 + adap_dev_free: 873 + if (dev->adapter->dev_free) 874 + dev->adapter->dev_free(dev); 871 875 872 876 lbl_unregister_candev: 873 877 unregister_candev(netdev);
+172 -21
drivers/net/dsa/lantiq_gswip.c
··· 93 93 94 94 /* GSWIP MII Registers */ 95 95 #define GSWIP_MII_CFGp(p) (0x2 * (p)) 96 + #define GSWIP_MII_CFG_RESET BIT(15) 96 97 #define GSWIP_MII_CFG_EN BIT(14) 98 + #define GSWIP_MII_CFG_ISOLATE BIT(13) 97 99 #define GSWIP_MII_CFG_LDCLKDIS BIT(12) 100 + #define GSWIP_MII_CFG_RGMII_IBS BIT(8) 101 + #define GSWIP_MII_CFG_RMII_CLK BIT(7) 98 102 #define GSWIP_MII_CFG_MODE_MIIP 0x0 99 103 #define GSWIP_MII_CFG_MODE_MIIM 0x1 100 104 #define GSWIP_MII_CFG_MODE_RMIIP 0x2 ··· 194 190 #define GSWIP_PCE_DEFPVID(p) (0x486 + ((p) * 0xA)) 195 191 196 192 #define GSWIP_MAC_FLEN 0x8C5 193 + #define GSWIP_MAC_CTRL_0p(p) (0x903 + ((p) * 0xC)) 194 + #define GSWIP_MAC_CTRL_0_PADEN BIT(8) 195 + #define GSWIP_MAC_CTRL_0_FCS_EN BIT(7) 196 + #define GSWIP_MAC_CTRL_0_FCON_MASK 0x0070 197 + #define GSWIP_MAC_CTRL_0_FCON_AUTO 0x0000 198 + #define GSWIP_MAC_CTRL_0_FCON_RX 0x0010 199 + #define GSWIP_MAC_CTRL_0_FCON_TX 0x0020 200 + #define GSWIP_MAC_CTRL_0_FCON_RXTX 0x0030 201 + #define GSWIP_MAC_CTRL_0_FCON_NONE 0x0040 202 + #define GSWIP_MAC_CTRL_0_FDUP_MASK 0x000C 203 + #define GSWIP_MAC_CTRL_0_FDUP_AUTO 0x0000 204 + #define GSWIP_MAC_CTRL_0_FDUP_EN 0x0004 205 + #define GSWIP_MAC_CTRL_0_FDUP_DIS 0x000C 206 + #define GSWIP_MAC_CTRL_0_GMII_MASK 0x0003 207 + #define GSWIP_MAC_CTRL_0_GMII_AUTO 0x0000 208 + #define GSWIP_MAC_CTRL_0_GMII_MII 0x0001 209 + #define GSWIP_MAC_CTRL_0_GMII_RGMII 0x0002 197 210 #define GSWIP_MAC_CTRL_2p(p) (0x905 + ((p) * 0xC)) 198 211 #define GSWIP_MAC_CTRL_2_MLEN BIT(3) /* Maximum Untagged Frame Lnegth */ 199 212 ··· 674 653 GSWIP_SDMA_PCTRLp(port)); 675 654 676 655 if (!dsa_is_cpu_port(ds, port)) { 677 - u32 macconf = GSWIP_MDIO_PHY_LINK_AUTO | 678 - GSWIP_MDIO_PHY_SPEED_AUTO | 679 - GSWIP_MDIO_PHY_FDUP_AUTO | 680 - GSWIP_MDIO_PHY_FCONTX_AUTO | 681 - GSWIP_MDIO_PHY_FCONRX_AUTO | 682 - (phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK); 656 + u32 mdio_phy = 0; 683 657 684 - gswip_mdio_w(priv, macconf, GSWIP_MDIO_PHYp(port)); 685 - /* Activate MDIO auto polling */ 686 
- gswip_mdio_mask(priv, 0, BIT(port), GSWIP_MDIO_MDC_CFG0); 658 + if (phydev) 659 + mdio_phy = phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK; 660 + 661 + gswip_mdio_mask(priv, GSWIP_MDIO_PHY_ADDR_MASK, mdio_phy, 662 + GSWIP_MDIO_PHYp(port)); 687 663 } 688 664 689 665 return 0; ··· 692 674 693 675 if (!dsa_is_user_port(ds, port)) 694 676 return; 695 - 696 - if (!dsa_is_cpu_port(ds, port)) { 697 - gswip_mdio_mask(priv, GSWIP_MDIO_PHY_LINK_DOWN, 698 - GSWIP_MDIO_PHY_LINK_MASK, 699 - GSWIP_MDIO_PHYp(port)); 700 - /* Deactivate MDIO auto polling */ 701 - gswip_mdio_mask(priv, BIT(port), 0, GSWIP_MDIO_MDC_CFG0); 702 - } 703 677 704 678 gswip_switch_mask(priv, GSWIP_FDMA_PCTRL_EN, 0, 705 679 GSWIP_FDMA_PCTRLp(port)); ··· 804 794 gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP2); 805 795 gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP3); 806 796 807 - /* disable PHY auto polling */ 797 + /* Deactivate MDIO PHY auto polling. Some PHYs as the AR8030 have an 798 + * interoperability problem with this auto polling mechanism because 799 + * their status registers think that the link is in a different state 800 + * than it actually is. For the AR8030 it has the BMSR_ESTATEN bit set 801 + * as well as ESTATUS_1000_TFULL and ESTATUS_1000_XFULL. This makes the 802 + * auto polling state machine consider the link being negotiated with 803 + * 1Gbit/s. Since the PHY itself is a Fast Ethernet RMII PHY this leads 804 + * to the switch port being completely dead (RX and TX are both not 805 + * working). 806 + * Also with various other PHY / port combinations (PHY11G GPHY, PHY22F 807 + * GPHY, external RGMII PEF7071/7072) any traffic would stop. Sometimes 808 + * it would work fine for a few minutes to hours and then stop, on 809 + * other device it would no traffic could be sent or received at all. 810 + * Testing shows that when PHY auto polling is disabled these problems 811 + * go away. 
812 + */ 808 813 gswip_mdio_w(priv, 0x0, GSWIP_MDIO_MDC_CFG0); 814 + 809 815 /* Configure the MDIO Clock 2.5 MHz */ 810 816 gswip_mdio_mask(priv, 0xff, 0x09, GSWIP_MDIO_MDC_CFG1); 811 817 812 - /* Disable the xMII link */ 818 + /* Disable the xMII interface and clear it's isolation bit */ 813 819 for (i = 0; i < priv->hw_info->max_ports; i++) 814 - gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, i); 820 + gswip_mii_mask_cfg(priv, 821 + GSWIP_MII_CFG_EN | GSWIP_MII_CFG_ISOLATE, 822 + 0, i); 815 823 816 824 /* enable special tag insertion on cpu port */ 817 825 gswip_switch_mask(priv, 0, GSWIP_FDMA_PCTRL_STEN, ··· 1478 1450 return; 1479 1451 } 1480 1452 1453 + static void gswip_port_set_link(struct gswip_priv *priv, int port, bool link) 1454 + { 1455 + u32 mdio_phy; 1456 + 1457 + if (link) 1458 + mdio_phy = GSWIP_MDIO_PHY_LINK_UP; 1459 + else 1460 + mdio_phy = GSWIP_MDIO_PHY_LINK_DOWN; 1461 + 1462 + gswip_mdio_mask(priv, GSWIP_MDIO_PHY_LINK_MASK, mdio_phy, 1463 + GSWIP_MDIO_PHYp(port)); 1464 + } 1465 + 1466 + static void gswip_port_set_speed(struct gswip_priv *priv, int port, int speed, 1467 + phy_interface_t interface) 1468 + { 1469 + u32 mdio_phy = 0, mii_cfg = 0, mac_ctrl_0 = 0; 1470 + 1471 + switch (speed) { 1472 + case SPEED_10: 1473 + mdio_phy = GSWIP_MDIO_PHY_SPEED_M10; 1474 + 1475 + if (interface == PHY_INTERFACE_MODE_RMII) 1476 + mii_cfg = GSWIP_MII_CFG_RATE_M50; 1477 + else 1478 + mii_cfg = GSWIP_MII_CFG_RATE_M2P5; 1479 + 1480 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_MII; 1481 + break; 1482 + 1483 + case SPEED_100: 1484 + mdio_phy = GSWIP_MDIO_PHY_SPEED_M100; 1485 + 1486 + if (interface == PHY_INTERFACE_MODE_RMII) 1487 + mii_cfg = GSWIP_MII_CFG_RATE_M50; 1488 + else 1489 + mii_cfg = GSWIP_MII_CFG_RATE_M25; 1490 + 1491 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_MII; 1492 + break; 1493 + 1494 + case SPEED_1000: 1495 + mdio_phy = GSWIP_MDIO_PHY_SPEED_G1; 1496 + 1497 + mii_cfg = GSWIP_MII_CFG_RATE_M125; 1498 + 1499 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_RGMII; 1500 + break; 
1501 + } 1502 + 1503 + gswip_mdio_mask(priv, GSWIP_MDIO_PHY_SPEED_MASK, mdio_phy, 1504 + GSWIP_MDIO_PHYp(port)); 1505 + gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_RATE_MASK, mii_cfg, port); 1506 + gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_GMII_MASK, mac_ctrl_0, 1507 + GSWIP_MAC_CTRL_0p(port)); 1508 + } 1509 + 1510 + static void gswip_port_set_duplex(struct gswip_priv *priv, int port, int duplex) 1511 + { 1512 + u32 mac_ctrl_0, mdio_phy; 1513 + 1514 + if (duplex == DUPLEX_FULL) { 1515 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FDUP_EN; 1516 + mdio_phy = GSWIP_MDIO_PHY_FDUP_EN; 1517 + } else { 1518 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FDUP_DIS; 1519 + mdio_phy = GSWIP_MDIO_PHY_FDUP_DIS; 1520 + } 1521 + 1522 + gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_FDUP_MASK, mac_ctrl_0, 1523 + GSWIP_MAC_CTRL_0p(port)); 1524 + gswip_mdio_mask(priv, GSWIP_MDIO_PHY_FDUP_MASK, mdio_phy, 1525 + GSWIP_MDIO_PHYp(port)); 1526 + } 1527 + 1528 + static void gswip_port_set_pause(struct gswip_priv *priv, int port, 1529 + bool tx_pause, bool rx_pause) 1530 + { 1531 + u32 mac_ctrl_0, mdio_phy; 1532 + 1533 + if (tx_pause && rx_pause) { 1534 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_RXTX; 1535 + mdio_phy = GSWIP_MDIO_PHY_FCONTX_EN | 1536 + GSWIP_MDIO_PHY_FCONRX_EN; 1537 + } else if (tx_pause) { 1538 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_TX; 1539 + mdio_phy = GSWIP_MDIO_PHY_FCONTX_EN | 1540 + GSWIP_MDIO_PHY_FCONRX_DIS; 1541 + } else if (rx_pause) { 1542 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_RX; 1543 + mdio_phy = GSWIP_MDIO_PHY_FCONTX_DIS | 1544 + GSWIP_MDIO_PHY_FCONRX_EN; 1545 + } else { 1546 + mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_NONE; 1547 + mdio_phy = GSWIP_MDIO_PHY_FCONTX_DIS | 1548 + GSWIP_MDIO_PHY_FCONRX_DIS; 1549 + } 1550 + 1551 + gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_FCON_MASK, 1552 + mac_ctrl_0, GSWIP_MAC_CTRL_0p(port)); 1553 + gswip_mdio_mask(priv, 1554 + GSWIP_MDIO_PHY_FCONTX_MASK | 1555 + GSWIP_MDIO_PHY_FCONRX_MASK, 1556 + mdio_phy, GSWIP_MDIO_PHYp(port)); 1557 + } 1558 + 1481 1559 static void 
gswip_phylink_mac_config(struct dsa_switch *ds, int port, 1482 1560 unsigned int mode, 1483 1561 const struct phylink_link_state *state) ··· 1603 1469 break; 1604 1470 case PHY_INTERFACE_MODE_RMII: 1605 1471 miicfg |= GSWIP_MII_CFG_MODE_RMIIM; 1472 + 1473 + /* Configure the RMII clock as output: */ 1474 + miicfg |= GSWIP_MII_CFG_RMII_CLK; 1606 1475 break; 1607 1476 case PHY_INTERFACE_MODE_RGMII: 1608 1477 case PHY_INTERFACE_MODE_RGMII_ID: ··· 1618 1481 "Unsupported interface: %d\n", state->interface); 1619 1482 return; 1620 1483 } 1621 - gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_MODE_MASK, miicfg, port); 1484 + 1485 + gswip_mii_mask_cfg(priv, 1486 + GSWIP_MII_CFG_MODE_MASK | GSWIP_MII_CFG_RMII_CLK | 1487 + GSWIP_MII_CFG_RGMII_IBS | GSWIP_MII_CFG_LDCLKDIS, 1488 + miicfg, port); 1622 1489 1623 1490 switch (state->interface) { 1624 1491 case PHY_INTERFACE_MODE_RGMII_ID: ··· 1647 1506 struct gswip_priv *priv = ds->priv; 1648 1507 1649 1508 gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, port); 1509 + 1510 + if (!dsa_is_cpu_port(ds, port)) 1511 + gswip_port_set_link(priv, port, false); 1650 1512 } 1651 1513 1652 1514 static void gswip_phylink_mac_link_up(struct dsa_switch *ds, int port, ··· 1660 1516 bool tx_pause, bool rx_pause) 1661 1517 { 1662 1518 struct gswip_priv *priv = ds->priv; 1519 + 1520 + if (!dsa_is_cpu_port(ds, port)) { 1521 + gswip_port_set_link(priv, port, true); 1522 + gswip_port_set_speed(priv, port, speed, interface); 1523 + gswip_port_set_duplex(priv, port, duplex); 1524 + gswip_port_set_pause(priv, port, tx_pause, rx_pause); 1525 + } 1663 1526 1664 1527 gswip_mii_mask_cfg(priv, 0, GSWIP_MII_CFG_EN, port); 1665 1528 }
+3 -2
drivers/net/ethernet/amd/pcnet32.c
··· 1534 1534 } 1535 1535 pci_set_master(pdev); 1536 1536 1537 - ioaddr = pci_resource_start(pdev, 0); 1538 - if (!ioaddr) { 1537 + if (!pci_resource_len(pdev, 0)) { 1539 1538 if (pcnet32_debug & NETIF_MSG_PROBE) 1540 1539 pr_err("card has no PCI IO resources, aborting\n"); 1541 1540 err = -ENODEV; ··· 1547 1548 pr_err("architecture does not support 32bit PCI busmaster DMA\n"); 1548 1549 goto err_disable_dev; 1549 1550 } 1551 + 1552 + ioaddr = pci_resource_start(pdev, 0); 1550 1553 if (!request_region(ioaddr, PCNET32_TOTAL_SIZE, "pcnet32_probe_pci")) { 1551 1554 if (pcnet32_debug & NETIF_MSG_PROBE) 1552 1555 pr_err("io address range already allocated\n");
+3 -3
drivers/net/ethernet/amd/xgbe/xgbe.h
··· 180 180 #define XGBE_DMA_SYS_AWCR 0x30303030 181 181 182 182 /* DMA cache settings - PCI device */ 183 - #define XGBE_DMA_PCI_ARCR 0x00000003 184 - #define XGBE_DMA_PCI_AWCR 0x13131313 185 - #define XGBE_DMA_PCI_AWARCR 0x00000313 183 + #define XGBE_DMA_PCI_ARCR 0x000f0f0f 184 + #define XGBE_DMA_PCI_AWCR 0x0f0f0f0f 185 + #define XGBE_DMA_PCI_AWARCR 0x00000f0f 186 186 187 187 /* DMA channel interrupt modes */ 188 188 #define XGBE_IRQ_MODE_EDGE 0
+1
drivers/net/ethernet/broadcom/bcm4908_enet.c
··· 172 172 173 173 err_free_buf_descs: 174 174 dma_free_coherent(dev, size, ring->cpu_addr, ring->dma_addr); 175 + ring->cpu_addr = NULL; 175 176 return -ENOMEM; 176 177 } 177 178
+7
drivers/net/ethernet/cadence/macb_main.c
··· 3239 3239 bool cmp_b = false; 3240 3240 bool cmp_c = false; 3241 3241 3242 + if (!macb_is_gem(bp)) 3243 + return; 3244 + 3242 3245 tp4sp_v = &(fs->h_u.tcp_ip4_spec); 3243 3246 tp4sp_m = &(fs->m_u.tcp_ip4_spec); 3244 3247 ··· 3610 3607 { 3611 3608 struct net_device *netdev = bp->dev; 3612 3609 netdev_features_t features = netdev->features; 3610 + struct ethtool_rx_fs_item *item; 3613 3611 3614 3612 /* TX checksum offload */ 3615 3613 macb_set_txcsum_feature(bp, features); ··· 3619 3615 macb_set_rxcsum_feature(bp, features); 3620 3616 3621 3617 /* RX Flow Filters */ 3618 + list_for_each_entry(item, &bp->rx_fs_list.list, list) 3619 + gem_prog_cmp_regs(bp, &item->fs); 3620 + 3622 3621 macb_set_rxflow_feature(bp, features); 3623 3622 } 3624 3623
+19 -4
drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
··· 1794 1794 struct cudbg_buffer temp_buff = { 0 }; 1795 1795 struct sge_qbase_reg_field *sge_qbase; 1796 1796 struct ireg_buf *ch_sge_dbg; 1797 + u8 padap_running = 0; 1797 1798 int i, rc; 1799 + u32 size; 1798 1800 1799 - rc = cudbg_get_buff(pdbg_init, dbg_buff, 1800 - sizeof(*ch_sge_dbg) * 2 + sizeof(*sge_qbase), 1801 - &temp_buff); 1801 + /* Accessing SGE_QBASE_MAP[0-3] and SGE_QBASE_INDEX regs can 1802 + * lead to SGE missing doorbells under heavy traffic. So, only 1803 + * collect them when adapter is idle. 1804 + */ 1805 + for_each_port(padap, i) { 1806 + padap_running = netif_running(padap->port[i]); 1807 + if (padap_running) 1808 + break; 1809 + } 1810 + 1811 + size = sizeof(*ch_sge_dbg) * 2; 1812 + if (!padap_running) 1813 + size += sizeof(*sge_qbase); 1814 + 1815 + rc = cudbg_get_buff(pdbg_init, dbg_buff, size, &temp_buff); 1802 1816 if (rc) 1803 1817 return rc; 1804 1818 ··· 1834 1820 ch_sge_dbg++; 1835 1821 } 1836 1822 1837 - if (CHELSIO_CHIP_VERSION(padap->params.chip) > CHELSIO_T5) { 1823 + if (CHELSIO_CHIP_VERSION(padap->params.chip) > CHELSIO_T5 && 1824 + !padap_running) { 1838 1825 sge_qbase = (struct sge_qbase_reg_field *)ch_sge_dbg; 1839 1826 /* 1 addr reg SGE_QBASE_INDEX and 4 data reg 1840 1827 * SGE_QBASE_MAP[0-3]
+2 -1
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
··· 2090 2090 0x1190, 0x1194, 2091 2091 0x11a0, 0x11a4, 2092 2092 0x11b0, 0x11b4, 2093 - 0x11fc, 0x1274, 2093 + 0x11fc, 0x123c, 2094 + 0x1254, 0x1274, 2094 2095 0x1280, 0x133c, 2095 2096 0x1800, 0x18fc, 2096 2097 0x3000, 0x302c,
+5 -1
drivers/net/ethernet/freescale/gianfar.c
··· 363 363 364 364 static int gfar_set_mac_addr(struct net_device *dev, void *p) 365 365 { 366 - eth_mac_addr(dev, p); 366 + int ret; 367 + 368 + ret = eth_mac_addr(dev, p); 369 + if (ret) 370 + return ret; 367 371 368 372 gfar_set_mac_for_addr(dev, 0, dev->dev_addr); 369 373
+4 -5
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 3966 3966 * normalcy is to reset. 3967 3967 * 2. A new reset request from the stack due to timeout 3968 3968 * 3969 - * For the first case,error event might not have ae handle available. 3970 3969 * check if this is a new reset request and we are not here just because 3971 3970 * last reset attempt did not succeed and watchdog hit us again. We will 3972 3971 * know this if last reset request did not occur very recently (watchdog ··· 3975 3976 * want to make sure we throttle the reset request. Therefore, we will 3976 3977 * not allow it again before 3*HZ times. 3977 3978 */ 3978 - if (!handle) 3979 - handle = &hdev->vport[0].nic; 3980 3979 3981 3980 if (time_before(jiffies, (hdev->last_reset_time + 3982 3981 HCLGE_RESET_INTERVAL))) { 3983 3982 mod_timer(&hdev->reset_timer, jiffies + HCLGE_RESET_INTERVAL); 3984 3983 return; 3985 - } else if (hdev->default_reset_request) { 3984 + } 3985 + 3986 + if (hdev->default_reset_request) { 3986 3987 hdev->reset_level = 3987 3988 hclge_get_reset_level(ae_dev, 3988 3989 &hdev->default_reset_request); ··· 11210 11211 if (ret) 11211 11212 return ret; 11212 11213 11213 - /* RSS indirection table has been configuared by user */ 11214 + /* RSS indirection table has been configured by user */ 11214 11215 if (rxfh_configured) 11215 11216 goto out; 11216 11217
+4 -4
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
··· 2193 2193 2194 2194 if (test_and_clear_bit(HCLGEVF_RESET_PENDING, 2195 2195 &hdev->reset_state)) { 2196 - /* PF has initmated that it is about to reset the hardware. 2196 + /* PF has intimated that it is about to reset the hardware. 2197 2197 * We now have to poll & check if hardware has actually 2198 2198 * completed the reset sequence. On hardware reset completion, 2199 2199 * VF needs to reset the client and ae device. ··· 2624 2624 { 2625 2625 struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 2626 2626 2627 + clear_bit(HCLGEVF_STATE_DOWN, &hdev->state); 2628 + 2627 2629 hclgevf_reset_tqp_stats(handle); 2628 2630 2629 2631 hclgevf_request_link_info(hdev); 2630 2632 2631 2633 hclgevf_update_link_mode(hdev); 2632 - 2633 - clear_bit(HCLGEVF_STATE_DOWN, &hdev->state); 2634 2634 2635 2635 return 0; 2636 2636 } ··· 3497 3497 if (ret) 3498 3498 return ret; 3499 3499 3500 - /* RSS indirection table has been configuared by user */ 3500 + /* RSS indirection table has been configured by user */ 3501 3501 if (rxfh_configured) 3502 3502 goto out; 3503 3503
+1
drivers/net/ethernet/intel/i40e/i40e.h
··· 142 142 __I40E_VIRTCHNL_OP_PENDING, 143 143 __I40E_RECOVERY_MODE, 144 144 __I40E_VF_RESETS_DISABLED, /* disable resets during i40e_remove */ 145 + __I40E_VFS_RELEASING, 145 146 /* This must be last as it determines the size of the BITMAP */ 146 147 __I40E_STATE_SIZE__, 147 148 };
+3
drivers/net/ethernet/intel/i40e/i40e_debugfs.c
··· 578 578 case RING_TYPE_XDP: 579 579 ring = kmemdup(vsi->xdp_rings[ring_id], sizeof(*ring), GFP_KERNEL); 580 580 break; 581 + default: 582 + ring = NULL; 583 + break; 581 584 } 582 585 if (!ring) 583 586 return;
+48 -7
drivers/net/ethernet/intel/i40e/i40e_ethtool.c
···
     I40E_STAT(struct i40e_vsi, _name, _stat)
 #define I40E_VEB_STAT(_name, _stat) \
     I40E_STAT(struct i40e_veb, _name, _stat)
+#define I40E_VEB_TC_STAT(_name, _stat) \
+    I40E_STAT(struct i40e_cp_veb_tc_stats, _name, _stat)
 #define I40E_PFC_STAT(_name, _stat) \
     I40E_STAT(struct i40e_pfc_stats, _name, _stat)
 #define I40E_QUEUE_STAT(_name, _stat) \
···
     I40E_VEB_STAT("veb.rx_unknown_protocol", stats.rx_unknown_protocol),
 };
 
+struct i40e_cp_veb_tc_stats {
+    u64 tc_rx_packets;
+    u64 tc_rx_bytes;
+    u64 tc_tx_packets;
+    u64 tc_tx_bytes;
+};
+
 static const struct i40e_stats i40e_gstrings_veb_tc_stats[] = {
-    I40E_VEB_STAT("veb.tc_%u_tx_packets", tc_stats.tc_tx_packets),
-    I40E_VEB_STAT("veb.tc_%u_tx_bytes", tc_stats.tc_tx_bytes),
-    I40E_VEB_STAT("veb.tc_%u_rx_packets", tc_stats.tc_rx_packets),
-    I40E_VEB_STAT("veb.tc_%u_rx_bytes", tc_stats.tc_rx_bytes),
+    I40E_VEB_TC_STAT("veb.tc_%u_tx_packets", tc_tx_packets),
+    I40E_VEB_TC_STAT("veb.tc_%u_tx_bytes", tc_tx_bytes),
+    I40E_VEB_TC_STAT("veb.tc_%u_rx_packets", tc_rx_packets),
+    I40E_VEB_TC_STAT("veb.tc_%u_rx_bytes", tc_rx_bytes),
 };
 
 static const struct i40e_stats i40e_gstrings_misc_stats[] = {
···
 
     /* Set flow control settings */
     ethtool_link_ksettings_add_link_mode(ks, supported, Pause);
+    ethtool_link_ksettings_add_link_mode(ks, supported, Asym_Pause);
 
     switch (hw->fc.requested_mode) {
     case I40E_FC_FULL:
···
 }
 
 /**
+ * i40e_get_veb_tc_stats - copy VEB TC statistics to formatted structure
+ * @tc: the TC statistics in VEB structure (veb->tc_stats)
+ * @i: the index of traffic class in (veb->tc_stats) structure to copy
+ *
+ * Copy VEB TC statistics from structure of arrays (veb->tc_stats) to
+ * one dimensional structure i40e_cp_veb_tc_stats.
+ * Produce formatted i40e_cp_veb_tc_stats structure of the VEB TC
+ * statistics for the given TC.
+ **/
+static struct i40e_cp_veb_tc_stats
+i40e_get_veb_tc_stats(struct i40e_veb_tc_stats *tc, unsigned int i)
+{
+    struct i40e_cp_veb_tc_stats veb_tc = {
+        .tc_rx_packets = tc->tc_rx_packets[i],
+        .tc_rx_bytes = tc->tc_rx_bytes[i],
+        .tc_tx_packets = tc->tc_tx_packets[i],
+        .tc_tx_bytes = tc->tc_tx_bytes[i],
+    };
+
+    return veb_tc;
+}
+
+/**
  * i40e_get_pfc_stats - copy HW PFC statistics to formatted structure
  * @pf: the PF device structure
  * @i: the priority value to copy
···
                    i40e_gstrings_veb_stats);
 
     for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
-        i40e_add_ethtool_stats(&data, veb_stats ? veb : NULL,
-                       i40e_gstrings_veb_tc_stats);
+        if (veb_stats) {
+            struct i40e_cp_veb_tc_stats veb_tc =
+                i40e_get_veb_tc_stats(&veb->tc_stats, i);
+
+            i40e_add_ethtool_stats(&data, &veb_tc,
+                           i40e_gstrings_veb_tc_stats);
+        } else {
+            i40e_add_ethtool_stats(&data, NULL,
+                           i40e_gstrings_veb_tc_stats);
+        }
 
     i40e_add_ethtool_stats(&data, pf, i40e_gstrings_stats);
···
 
     status = i40e_aq_get_phy_register(hw,
                 I40E_AQ_PHY_REG_ACCESS_EXTERNAL_MODULE,
-                true, addr, offset, &value, NULL);
+                addr, true, offset, &value, NULL);
     if (status)
         return -EIO;
     data[i] = value;
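The hunk above copies per-TC counters out of a structure-of-arrays (`veb->tc_stats`) into a flat per-TC struct, so the generic ethtool stats walker can use fixed member offsets. A minimal userspace sketch of the same structure-of-arrays to array-of-structures transform; all names here are illustrative, not the driver's:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_TC 8

/* Structure of arrays: one array per counter, indexed by TC. */
struct tc_stats_soa {
    uint64_t rx_packets[MAX_TC];
    uint64_t tx_packets[MAX_TC];
};

/* Flat copy for a single TC, usable with offsetof()-based formatters. */
struct tc_stats_one {
    uint64_t rx_packets;
    uint64_t tx_packets;
};

/* Gather the i-th entry of every array into one flat struct. */
static struct tc_stats_one tc_stats_for(const struct tc_stats_soa *s,
                                        unsigned int i)
{
    struct tc_stats_one one = {
        .rx_packets = s->rx_packets[i],
        .tx_packets = s->tx_packets[i],
    };
    return one;
}
```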
drivers/net/ethernet/intel/i40e/i40e_main.c (+16 -14)
···
              i40e_stat_str(hw, aq_ret),
              i40e_aq_str(hw, hw->aq.asq_last_status));
     } else {
-        dev_info(&pf->pdev->dev, "%s is %s allmulti mode.\n",
-             vsi->netdev->name,
+        dev_info(&pf->pdev->dev, "%s allmulti mode.\n",
              cur_multipromisc ? "entering" : "leaving");
     }
 }
···
            set_bit(__I40E_CLIENT_SERVICE_REQUESTED, pf->state);
            set_bit(__I40E_CLIENT_L2_CHANGE, pf->state);
        }
-   /* registers are set, lets apply */
-   if (pf->hw_features & I40E_HW_USE_SET_LLDP_MIB)
-       ret = i40e_hw_set_dcb_config(pf, new_cfg);
+       /* registers are set, lets apply */
+       if (pf->hw_features & I40E_HW_USE_SET_LLDP_MIB)
+           ret = i40e_hw_set_dcb_config(pf, new_cfg);
    }
···
         goto end_core_reset;
     }
 
-    if (!lock_acquired)
-        rtnl_lock();
-    ret = i40e_setup_pf_switch(pf, reinit);
-    if (ret)
-        goto end_unlock;
-
 #ifdef CONFIG_I40E_DCB
     /* Enable FW to write a default DCB config on link-up
      * unless I40E_FLAG_TC_MQPRIO was enabled or DCB
···
             i40e_aq_set_dcb_parameters(hw, false, NULL);
             dev_warn(&pf->pdev->dev,
                  "DCB is not supported for X710-T*L 2.5/5G speeds\n");
-                pf->flags &= ~I40E_FLAG_DCB_CAPABLE;
+            pf->flags &= ~I40E_FLAG_DCB_CAPABLE;
         } else {
             i40e_aq_set_dcb_parameters(hw, true, NULL);
             ret = i40e_init_pf_dcb(pf);
···
     }
 
 #endif /* CONFIG_I40E_DCB */
+    if (!lock_acquired)
+        rtnl_lock();
+    ret = i40e_setup_pf_switch(pf, reinit);
+    if (ret)
+        goto end_unlock;
 
     /* The driver only wants link up/down and module qualification
      * reports from firmware. Note the negative logic.
···
      * in order to register the netdev
      */
     v_idx = i40e_vsi_mem_alloc(pf, I40E_VSI_MAIN);
-    if (v_idx < 0)
+    if (v_idx < 0) {
+        err = v_idx;
         goto err_switch_setup;
+    }
     pf->lan_vsi = v_idx;
     vsi = pf->vsi[v_idx];
-    if (!vsi)
+    if (!vsi) {
+        err = -EFAULT;
         goto err_switch_setup;
+    }
     vsi->alloc_queue_pairs = 1;
     err = i40e_config_netdev(vsi);
     if (err)
drivers/net/ethernet/intel/i40e/i40e_txrx.c (+5 -7)
···
  * @rx_ring: Rx ring being processed
  * @xdp: XDP buffer containing the frame
  **/
-static struct sk_buff *i40e_run_xdp(struct i40e_ring *rx_ring,
-                    struct xdp_buff *xdp)
+static int i40e_run_xdp(struct i40e_ring *rx_ring, struct xdp_buff *xdp)
 {
     int err, result = I40E_XDP_PASS;
     struct i40e_ring *xdp_ring;
···
     }
 xdp_out:
     rcu_read_unlock();
-    return ERR_PTR(-result);
+    return result;
 }
 
 /**
···
     unsigned int xdp_xmit = 0;
     bool failure = false;
     struct xdp_buff xdp;
+    int xdp_res = 0;
 
 #if (PAGE_SIZE < 8192)
     frame_sz = i40e_rx_frame_truesize(rx_ring, 0);
···
             /* At larger PAGE_SIZE, frame_sz depend on len size */
             xdp.frame_sz = i40e_rx_frame_truesize(rx_ring, size);
 #endif
-            skb = i40e_run_xdp(rx_ring, &xdp);
+            xdp_res = i40e_run_xdp(rx_ring, &xdp);
         }
 
-        if (IS_ERR(skb)) {
-            unsigned int xdp_res = -PTR_ERR(skb);
-
+        if (xdp_res) {
             if (xdp_res & (I40E_XDP_TX | I40E_XDP_REDIR)) {
                 xdp_xmit |= xdp_res;
                 i40e_rx_buffer_flip(rx_ring, rx_buffer, size);
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c (+9)
···
  **/
 static inline void i40e_vc_disable_vf(struct i40e_vf *vf)
 {
+    struct i40e_pf *pf = vf->pf;
     int i;
 
     i40e_vc_notify_vf_reset(vf);
···
      * ensure a reset.
      */
     for (i = 0; i < 20; i++) {
+        /* If PF is in VFs releasing state reset VF is impossible,
+         * so leave it.
+         */
+        if (test_bit(__I40E_VFS_RELEASING, pf->state))
+            return;
         if (i40e_reset_vf(vf, false))
             return;
         usleep_range(10000, 20000);
···
 
     if (!pf->vf)
         return;
+
+    set_bit(__I40E_VFS_RELEASING, pf->state);
     while (test_and_set_bit(__I40E_VF_DISABLE, pf->state))
         usleep_range(1000, 2000);
···
         }
     }
     clear_bit(__I40E_VF_DISABLE, pf->state);
+    clear_bit(__I40E_VFS_RELEASING, pf->state);
 }
 
 #ifdef CONFIG_PCI_IOV
drivers/net/ethernet/intel/i40e/i40e_xsk.c (+2 -2)
···
 
     nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, descs, budget);
     if (!nb_pkts)
-        return false;
+        return true;
 
     if (xdp_ring->next_to_use + nb_pkts >= xdp_ring->count) {
         nb_processed = xdp_ring->count - xdp_ring->next_to_use;
···
 
     i40e_update_tx_stats(xdp_ring, nb_pkts, total_bytes);
 
-    return true;
+    return nb_pkts < budget;
 }
 
 /**
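The i40e_xsk.c fix above inverts the return value so that "used fewer descriptors than the budget" means the queue is drained and polling can stop. A hedged userspace sketch of that budget convention (the function and names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical cleaner: process at most `budget` pending entries.
 * Returns true when work is complete (nothing pending, or fewer than
 * budget entries consumed); false means the budget was exhausted and
 * the caller should poll again.
 */
static bool clean_tx_ring(int pending, int budget, int *cleaned)
{
    int n = pending < budget ? pending : budget;

    *cleaned = n;
    if (n == 0)
        return true;    /* nothing to do: report "done", not failure */
    return n < budget;  /* full budget used => more work may remain */
}
```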
drivers/net/ethernet/intel/ice/ice.h (+2 -2)
···
     __ICE_NEEDS_RESTART,
     __ICE_PREPARED_FOR_RESET,  /* set by driver when prepared */
     __ICE_RESET_OICR_RECV,     /* set by driver after rcv reset OICR */
-    __ICE_DCBNL_DEVRESET,      /* set by dcbnl devreset */
     __ICE_PFR_REQ,             /* set by driver and peers */
     __ICE_CORER_REQ,           /* set by driver and peers */
     __ICE_GLOBR_REQ,           /* set by driver and peers */
···
 void ice_print_link_msg(struct ice_vsi *vsi, bool isup);
 const char *ice_stat_str(enum ice_status stat_err);
 const char *ice_aq_str(enum ice_aq_err aq_err);
-bool ice_is_wol_supported(struct ice_pf *pf);
+bool ice_is_wol_supported(struct ice_hw *hw);
 int
 ice_fdir_write_fltr(struct ice_pf *pf, struct ice_fdir_fltr *input, bool add,
             bool is_tun);
···
 int ice_aq_wait_for_event(struct ice_pf *pf, u16 opcode, unsigned long timeout,
               struct ice_rq_event_info *event);
 int ice_open(struct net_device *netdev);
+int ice_open_internal(struct net_device *netdev);
 int ice_stop(struct net_device *netdev);
 void ice_service_task_schedule(struct ice_pf *pf);
drivers/net/ethernet/intel/ice/ice_common.c (+1 -1)
···
 
     if (!data) {
         data = devm_kcalloc(ice_hw_to_dev(hw),
-                    sizeof(*data),
                     ICE_AQC_FW_LOG_ID_MAX,
+                    sizeof(*data),
                     GFP_KERNEL);
         if (!data)
             return ICE_ERR_NO_MEMORY;
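The ice_common.c one-liner restores the (count, size) argument order that `kcalloc`-family allocators inherit from C's `calloc`. A userspace illustration of the convention being fixed:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* calloc (and the kernel's kcalloc/devm_kcalloc) take (count, size):
 * count elements of size bytes each, zero-initialized. Passing them
 * reversed yields the same byte count but defeats the allocator's
 * intended count*size overflow semantics and misleads readers, which
 * is what the swap above corrects.
 */
static uint32_t *alloc_table(size_t nr_entries)
{
    return calloc(nr_entries, sizeof(uint32_t)); /* count first, then size */
}
```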
drivers/net/ethernet/intel/ice/ice_controlq.h (+2 -2)
···
     ICE_CTL_Q_MAILBOX,
 };
 
-/* Control Queue timeout settings - max delay 250ms */
-#define ICE_CTL_Q_SQ_CMD_TIMEOUT    2500  /* Count 2500 times */
+/* Control Queue timeout settings - max delay 1s */
+#define ICE_CTL_Q_SQ_CMD_TIMEOUT    10000 /* Count 10000 times */
 #define ICE_CTL_Q_SQ_CMD_USEC       100   /* Check every 100usec */
 #define ICE_CTL_Q_ADMIN_INIT_TIMEOUT    10    /* Count 10 times */
 #define ICE_CTL_Q_ADMIN_INIT_MSEC   100   /* Check every 100msec */
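The new constants give a maximum delay of 10000 polls at 100 µs each, i.e. 1 s (the old 2500 × 100 µs was 250 ms). A quick check of that arithmetic, using the values from the hunk above:

```c
#include <assert.h>

/* Poll count times per-poll interval gives the worst-case wait. */
#define SQ_CMD_TIMEOUT 10000 /* iterations */
#define SQ_CMD_USEC    100   /* microseconds per iteration */

static long max_delay_usec(void)
{
    return (long)SQ_CMD_TIMEOUT * SQ_CMD_USEC;
}
```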
drivers/net/ethernet/intel/ice/ice_dcb.c (+29 -9)
···
 /**
  * ice_cee_to_dcb_cfg
  * @cee_cfg: pointer to CEE configuration struct
- * @dcbcfg: DCB configuration struct
+ * @pi: port information structure
  *
  * Convert CEE configuration from firmware to DCB configuration
  */
 static void
 ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
-           struct ice_dcbx_cfg *dcbcfg)
+           struct ice_port_info *pi)
 {
     u32 status, tlv_status = le32_to_cpu(cee_cfg->tlv_status);
     u32 ice_aqc_cee_status_mask, ice_aqc_cee_status_shift;
+    u8 i, j, err, sync, oper, app_index, ice_app_sel_type;
     u16 app_prio = le16_to_cpu(cee_cfg->oper_app_prio);
-    u8 i, err, sync, oper, app_index, ice_app_sel_type;
     u16 ice_aqc_cee_app_mask, ice_aqc_cee_app_shift;
+    struct ice_dcbx_cfg *cmp_dcbcfg, *dcbcfg;
     u16 ice_app_prot_id_type;
 
-    /* CEE PG data to ETS config */
+    dcbcfg = &pi->qos_cfg.local_dcbx_cfg;
+    dcbcfg->dcbx_mode = ICE_DCBX_MODE_CEE;
+    dcbcfg->tlv_status = tlv_status;
+
+    /* CEE PG data */
     dcbcfg->etscfg.maxtcs = cee_cfg->oper_num_tc;
 
     /* Note that the FW creates the oper_prio_tc nibbles reversed
···
         }
     }
 
-    /* CEE PFC data to ETS config */
+    /* CEE PFC data */
     dcbcfg->pfc.pfcena = cee_cfg->oper_pfc_en;
     dcbcfg->pfc.pfccap = ICE_MAX_TRAFFIC_CLASS;
+
+    /* CEE APP TLV data */
+    if (dcbcfg->app_mode == ICE_DCBX_APPS_NON_WILLING)
+        cmp_dcbcfg = &pi->qos_cfg.desired_dcbx_cfg;
+    else
+        cmp_dcbcfg = &pi->qos_cfg.remote_dcbx_cfg;
 
     app_index = 0;
     for (i = 0; i < 3; i++) {
···
             ice_aqc_cee_app_shift = ICE_AQC_CEE_APP_ISCSI_S;
             ice_app_sel_type = ICE_APP_SEL_TCPIP;
             ice_app_prot_id_type = ICE_APP_PROT_ID_ISCSI;
+
+            for (j = 0; j < cmp_dcbcfg->numapps; j++) {
+                u16 prot_id = cmp_dcbcfg->app[j].prot_id;
+                u8 sel = cmp_dcbcfg->app[j].selector;
+
+                if (sel == ICE_APP_SEL_TCPIP &&
+                    (prot_id == ICE_APP_PROT_ID_ISCSI ||
+                     prot_id == ICE_APP_PROT_ID_ISCSI_860)) {
+                    ice_app_prot_id_type = prot_id;
+                    break;
+                }
+            }
         } else {
             /* FIP APP */
             ice_aqc_cee_status_mask = ICE_AQC_CEE_FIP_STATUS_M;
···
     ret = ice_aq_get_cee_dcb_cfg(pi->hw, &cee_cfg, NULL);
     if (!ret) {
         /* CEE mode */
-        dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg;
-        dcbx_cfg->dcbx_mode = ICE_DCBX_MODE_CEE;
-        dcbx_cfg->tlv_status = le32_to_cpu(cee_cfg.tlv_status);
-        ice_cee_to_dcb_cfg(&cee_cfg, dcbx_cfg);
         ret = ice_get_ieee_or_cee_dcb_cfg(pi, ICE_DCBX_MODE_CEE);
+        ice_cee_to_dcb_cfg(&cee_cfg, pi);
     } else if (pi->hw->adminq.sq_last_status == ICE_AQ_RC_ENOENT) {
         /* CEE mode not enabled try querying IEEE data */
         dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg;
drivers/net/ethernet/intel/ice/ice_dcb_nl.c (-2)
···
     while (ice_is_reset_in_progress(pf->state))
         usleep_range(1000, 2000);
 
-    set_bit(__ICE_DCBNL_DEVRESET, pf->state);
     dev_close(netdev);
     netdev_state_change(netdev);
     dev_open(netdev, NULL);
     netdev_state_change(netdev);
-    clear_bit(__ICE_DCBNL_DEVRESET, pf->state);
 }
 
 /**
drivers/net/ethernet/intel/ice/ice_ethtool.c (+2 -2)
···
         netdev_warn(netdev, "Wake on LAN is not supported on this interface!\n");
 
     /* Get WoL settings based on the HW capability */
-    if (ice_is_wol_supported(pf)) {
+    if (ice_is_wol_supported(&pf->hw)) {
         wol->supported = WAKE_MAGIC;
         wol->wolopts = pf->wol_ena ? WAKE_MAGIC : 0;
     } else {
···
     struct ice_vsi *vsi = np->vsi;
     struct ice_pf *pf = vsi->back;
 
-    if (vsi->type != ICE_VSI_PF || !ice_is_wol_supported(pf))
+    if (vsi->type != ICE_VSI_PF || !ice_is_wol_supported(&pf->hw))
         return -EOPNOTSUPP;
 
     /* only magic packet is supported */
drivers/net/ethernet/intel/ice/ice_lib.c (+2 -3)
···
     if (!locked)
         rtnl_lock();
 
-    err = ice_open(vsi->netdev);
+    err = ice_open_internal(vsi->netdev);
 
     if (!locked)
         rtnl_unlock();
···
     if (!locked)
         rtnl_lock();
 
-    ice_stop(vsi->netdev);
+    ice_vsi_close(vsi);
 
     if (!locked)
         rtnl_unlock();
···
 bool ice_is_reset_in_progress(unsigned long *state)
 {
     return test_bit(__ICE_RESET_OICR_RECV, state) ||
-           test_bit(__ICE_DCBNL_DEVRESET, state) ||
            test_bit(__ICE_PFR_REQ, state) ||
            test_bit(__ICE_CORER_REQ, state) ||
            test_bit(__ICE_GLOBR_REQ, state);
drivers/net/ethernet/intel/ice/ice_main.c (+39 -14)
···
 }
 
 /**
- * ice_is_wol_supported - get NVM state of WoL
- * @pf: board private structure
+ * ice_is_wol_supported - check if WoL is supported
+ * @hw: pointer to hardware info
  *
  * Check if WoL is supported based on the HW configuration.
  * Returns true if NVM supports and enables WoL for this port, false otherwise
  */
-bool ice_is_wol_supported(struct ice_pf *pf)
+bool ice_is_wol_supported(struct ice_hw *hw)
 {
-    struct ice_hw *hw = &pf->hw;
     u16 wol_ctrl;
 
     /* A bit set to 1 in the NVM Software Reserved Word 2 (WoL control
···
     if (ice_read_sr_word(hw, ICE_SR_NVM_WOL_CFG, &wol_ctrl))
         return false;
 
-    return !(BIT(hw->pf_id) & wol_ctrl);
+    return !(BIT(hw->port_info->lport) & wol_ctrl);
 }
 
 /**
···
         goto err_send_version_unroll;
     }
 
+    /* not a fatal error if this fails */
     err = ice_init_nvm_phy_type(pf->hw.port_info);
-    if (err) {
+    if (err)
         dev_err(dev, "ice_init_nvm_phy_type failed: %d\n", err);
-        goto err_send_version_unroll;
-    }
 
+    /* not a fatal error if this fails */
     err = ice_update_link_info(pf->hw.port_info);
-    if (err) {
+    if (err)
         dev_err(dev, "ice_update_link_info failed: %d\n", err);
-        goto err_send_version_unroll;
-    }
 
     ice_init_link_dflt_override(pf->hw.port_info);
 
     /* if media available, initialize PHY settings */
     if (pf->hw.port_info->phy.link_info.link_info &
         ICE_AQ_MEDIA_AVAILABLE) {
+        /* not a fatal error if this fails */
         err = ice_init_phy_user_cfg(pf->hw.port_info);
-        if (err) {
+        if (err)
             dev_err(dev, "ice_init_phy_user_cfg failed: %d\n", err);
-            goto err_send_version_unroll;
-        }
 
         if (!test_bit(ICE_FLAG_LINK_DOWN_ON_CLOSE_ENA, pf->flags)) {
             struct ice_vsi *vsi = ice_get_main_vsi(pf);
···
             continue;
         ice_vsi_free_q_vectors(pf->vsi[v]);
     }
+    ice_free_cpu_rx_rmap(ice_get_main_vsi(pf));
     ice_clear_interrupt_scheme(pf);
 
     pci_save_state(pdev);
···
 int ice_open(struct net_device *netdev)
 {
     struct ice_netdev_priv *np = netdev_priv(netdev);
+    struct ice_pf *pf = np->vsi->back;
+
+    if (ice_is_reset_in_progress(pf->state)) {
+        netdev_err(netdev, "can't open net device while reset is in progress");
+        return -EBUSY;
+    }
+
+    return ice_open_internal(netdev);
+}
+
+/**
+ * ice_open_internal - Called when a network interface becomes active
+ * @netdev: network interface device structure
+ *
+ * Internal ice_open implementation. Should not be used directly except for ice_open and reset
+ * handling routine
+ *
+ * Returns 0 on success, negative value on failure
+ */
+int ice_open_internal(struct net_device *netdev)
+{
+    struct ice_netdev_priv *np = netdev_priv(netdev);
     struct ice_vsi *vsi = np->vsi;
     struct ice_pf *pf = vsi->back;
     struct ice_port_info *pi;
···
 {
     struct ice_netdev_priv *np = netdev_priv(netdev);
     struct ice_vsi *vsi = np->vsi;
+    struct ice_pf *pf = vsi->back;
+
+    if (ice_is_reset_in_progress(pf->state)) {
+        netdev_err(netdev, "can't stop net device while reset is in progress");
+        return -EBUSY;
+    }
 
     ice_vsi_close(vsi);
drivers/net/ethernet/intel/ice/ice_switch.c (+9 -6)
···
         ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
                     vsi_list_id);
 
+        if (!m_entry->vsi_list_info)
+            return ICE_ERR_NO_MEMORY;
+
         /* If this entry was large action then the large action needs
          * to be updated to point to FWD to VSI list
          */
···
     return ((fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI &&
          fm_entry->fltr_info.vsi_handle == vsi_handle) ||
         (fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI_LIST &&
+         fm_entry->vsi_list_info &&
          (test_bit(vsi_handle, fm_entry->vsi_list_info->vsi_map))));
 }
···
         return ICE_ERR_PARAM;
 
     list_for_each_entry(fm_entry, lkup_list_head, list_entry) {
-        struct ice_fltr_info *fi;
-
-        fi = &fm_entry->fltr_info;
-        if (!fi || !ice_vsi_uses_fltr(fm_entry, vsi_handle))
+        if (!ice_vsi_uses_fltr(fm_entry, vsi_handle))
             continue;
 
         status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle,
-                            vsi_list_head, fi);
+                            vsi_list_head,
+                            &fm_entry->fltr_info);
         if (status)
             return status;
     }
···
                       &remove_list_head);
     mutex_unlock(rule_lock);
     if (status)
-        return;
+        goto free_fltr_list;
 
     switch (lkup) {
     case ICE_SW_LKUP_MAC:
···
         break;
     }
 
+free_fltr_list:
     list_for_each_entry_safe(fm_entry, tmp, &remove_list_head, list_entry) {
         list_del(&fm_entry->list_entry);
         devm_kfree(ice_hw_to_dev(hw), fm_entry);
drivers/net/ethernet/intel/ice/ice_type.h (+1)
···
 #define ICE_TLV_STATUS_ERR    0x4
 #define ICE_APP_PROT_ID_FCOE  0x8906
 #define ICE_APP_PROT_ID_ISCSI 0x0cbc
+#define ICE_APP_PROT_ID_ISCSI_860 0x035c
 #define ICE_APP_PROT_ID_FIP   0x8914
 #define ICE_APP_SEL_ETHTYPE   0x1
 #define ICE_APP_SEL_TCPIP     0x2
drivers/net/ethernet/mellanox/mlx5/core/dev.c (+2 -2)
···
 }
 
 enum {
-    MLX5_INTERFACE_PROTOCOL_ETH_REP,
     MLX5_INTERFACE_PROTOCOL_ETH,
+    MLX5_INTERFACE_PROTOCOL_ETH_REP,
 
+    MLX5_INTERFACE_PROTOCOL_IB,
     MLX5_INTERFACE_PROTOCOL_IB_REP,
     MLX5_INTERFACE_PROTOCOL_MPIB,
-    MLX5_INTERFACE_PROTOCOL_IB,
 
     MLX5_INTERFACE_PROTOCOL_VNET,
 };
drivers/net/ethernet/mellanox/mlx5/core/en.h (+1)
···
     struct mlx5_wq_cyc wq;
     void __iomem *uar_map;
     u32 sqn;
+    u16 reserved_room;
     unsigned long state;
 
     /* control path */
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c (+29 -7)
···
 }
 
 static int
+mlx5_get_label_mapping(struct mlx5_tc_ct_priv *ct_priv,
+               u32 *labels, u32 *id)
+{
+    if (!memchr_inv(labels, 0, sizeof(u32) * 4)) {
+        *id = 0;
+        return 0;
+    }
+
+    if (mapping_add(ct_priv->labels_mapping, labels, id))
+        return -EOPNOTSUPP;
+
+    return 0;
+}
+
+static void
+mlx5_put_label_mapping(struct mlx5_tc_ct_priv *ct_priv, u32 id)
+{
+    if (id)
+        mapping_remove(ct_priv->labels_mapping, id);
+}
+
+static int
 mlx5_tc_ct_rule_to_tuple(struct mlx5_ct_tuple *tuple, struct flow_rule *rule)
 {
     struct flow_match_control control;
···
     mlx5_tc_rule_delete(netdev_priv(ct_priv->netdev), zone_rule->rule, attr);
     mlx5e_mod_hdr_detach(ct_priv->dev,
                  ct_priv->mod_hdr_tbl, zone_rule->mh);
-    mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id);
+    mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id);
     kfree(attr);
 }
···
     if (!meta)
         return -EOPNOTSUPP;
 
-    err = mapping_add(ct_priv->labels_mapping, meta->ct_metadata.labels,
-              &attr->ct_attr.ct_labels_id);
+    err = mlx5_get_label_mapping(ct_priv, meta->ct_metadata.labels,
+                     &attr->ct_attr.ct_labels_id);
     if (err)
         return -EOPNOTSUPP;
     if (nat) {
···
 
 err_mapping:
     dealloc_mod_hdr_actions(&mod_acts);
-    mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id);
+    mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id);
     return err;
 }
···
 err_rule:
     mlx5e_mod_hdr_detach(ct_priv->dev,
                  ct_priv->mod_hdr_tbl, zone_rule->mh);
-    mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id);
+    mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id);
 err_mod_hdr:
     kfree(attr);
 err_attr:
···
     if (!priv || !ct_attr->ct_labels_id)
         return;
 
-    mapping_remove(priv->labels_mapping, ct_attr->ct_labels_id);
+    mlx5_put_label_mapping(priv, ct_attr->ct_labels_id);
 }
 
 int
···
         ct_labels[1] = key->ct_labels[1] & mask->ct_labels[1];
         ct_labels[2] = key->ct_labels[2] & mask->ct_labels[2];
         ct_labels[3] = key->ct_labels[3] & mask->ct_labels[3];
-        if (mapping_add(priv->labels_mapping, ct_labels, &ct_attr->ct_labels_id))
+        if (mlx5_get_label_mapping(priv, ct_labels, &ct_attr->ct_labels_id))
             return -EOPNOTSUPP;
         mlx5e_tc_match_to_reg_match(spec, LABELS_TO_REG, ct_attr->ct_labels_id,
                         MLX5_CT_LABELS_MASK);
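The new `mlx5_get_label_mapping` helper uses the kernel's `memchr_inv(buf, 0, len)` to detect an all-zero ct_labels key and map it to id 0 without consuming a mapping entry. `memchr_inv` is kernel-only; a portable stand-in for the same "all bytes zero?" test:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace equivalent of the memchr_inv(buf, 0, len) == NULL test
 * above: true iff every byte in the buffer is zero.
 */
static bool all_zero(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    size_t i;

    for (i = 0; i < len; i++)
        if (p[i])
            return false;
    return true;
}
```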
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.h (+10)
···
     MLX5E_TC_TUNNEL_TYPE_MPLSOUDP,
 };
 
+struct mlx5e_encap_key {
+    const struct ip_tunnel_key *ip_tun_key;
+    struct mlx5e_tc_tunnel *tc_tunnel;
+};
+
 struct mlx5e_tc_tunnel {
     int tunnel_type;
     enum mlx5_flow_match_level match_level;
···
                 struct flow_cls_offload *f,
                 void *headers_c,
                 void *headers_v);
+    bool (*encap_info_equal)(struct mlx5e_encap_key *a,
+                 struct mlx5e_encap_key *b);
 };
 
 extern struct mlx5e_tc_tunnel vxlan_tunnel;
···
                   struct flow_cls_offload *f,
                   void *headers_c,
                   void *headers_v);
+
+bool mlx5e_tc_tun_encap_info_equal_generic(struct mlx5e_encap_key *a,
+                       struct mlx5e_encap_key *b);
 
 #endif /* CONFIG_MLX5_ESWITCH */
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c (+9 -14)
···
     mlx5e_decap_dealloc(priv, d);
 }
 
-struct encap_key {
-    const struct ip_tunnel_key *ip_tun_key;
-    struct mlx5e_tc_tunnel *tc_tunnel;
-};
-
-static int cmp_encap_info(struct encap_key *a,
-              struct encap_key *b)
+bool mlx5e_tc_tun_encap_info_equal_generic(struct mlx5e_encap_key *a,
+                       struct mlx5e_encap_key *b)
 {
-    return memcmp(a->ip_tun_key, b->ip_tun_key, sizeof(*a->ip_tun_key)) ||
-           a->tc_tunnel->tunnel_type != b->tc_tunnel->tunnel_type;
+    return memcmp(a->ip_tun_key, b->ip_tun_key, sizeof(*a->ip_tun_key)) == 0 &&
+           a->tc_tunnel->tunnel_type == b->tc_tunnel->tunnel_type;
 }
 
 static int cmp_decap_info(struct mlx5e_decap_key *a,
···
     return memcmp(&a->key, &b->key, sizeof(b->key));
 }
 
-static int hash_encap_info(struct encap_key *key)
+static int hash_encap_info(struct mlx5e_encap_key *key)
 {
     return jhash(key->ip_tun_key, sizeof(*key->ip_tun_key),
              key->tc_tunnel->tunnel_type);
···
 }
 
 static struct mlx5e_encap_entry *
-mlx5e_encap_get(struct mlx5e_priv *priv, struct encap_key *key,
+mlx5e_encap_get(struct mlx5e_priv *priv, struct mlx5e_encap_key *key,
         uintptr_t hash_key)
 {
     struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+    struct mlx5e_encap_key e_key;
     struct mlx5e_encap_entry *e;
-    struct encap_key e_key;
 
     hash_for_each_possible_rcu(esw->offloads.encap_tbl, e,
                    encap_hlist, hash_key) {
         e_key.ip_tun_key = &e->tun_info->key;
         e_key.tc_tunnel = e->tunnel;
-        if (!cmp_encap_info(&e_key, key) &&
+        if (e->tunnel->encap_info_equal(&e_key, key) &&
             mlx5e_encap_take(e))
             return e;
     }
···
     struct mlx5_flow_attr *attr = flow->attr;
     const struct ip_tunnel_info *tun_info;
     unsigned long tbl_time_before = 0;
-    struct encap_key key;
     struct mlx5e_encap_entry *e;
+    struct mlx5e_encap_key key;
     bool entry_created = false;
     unsigned short family;
     uintptr_t hash_key;
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_geneve.c (+29)
···
     return mlx5e_tc_tun_parse_geneve_options(priv, spec, f);
 }
 
+static bool mlx5e_tc_tun_encap_info_equal_geneve(struct mlx5e_encap_key *a,
+                         struct mlx5e_encap_key *b)
+{
+    struct ip_tunnel_info *a_info;
+    struct ip_tunnel_info *b_info;
+    bool a_has_opts, b_has_opts;
+
+    if (!mlx5e_tc_tun_encap_info_equal_generic(a, b))
+        return false;
+
+    a_has_opts = !!(a->ip_tun_key->tun_flags & TUNNEL_GENEVE_OPT);
+    b_has_opts = !!(b->ip_tun_key->tun_flags & TUNNEL_GENEVE_OPT);
+
+    /* keys are equal when both don't have any options attached */
+    if (!a_has_opts && !b_has_opts)
+        return true;
+
+    if (a_has_opts != b_has_opts)
+        return false;
+
+    /* geneve options stored in memory next to ip_tunnel_info struct */
+    a_info = container_of(a->ip_tun_key, struct ip_tunnel_info, key);
+    b_info = container_of(b->ip_tun_key, struct ip_tunnel_info, key);
+
+    return a_info->options_len == b_info->options_len &&
+           memcmp(a_info + 1, b_info + 1, a_info->options_len) == 0;
+}
+
 struct mlx5e_tc_tunnel geneve_tunnel = {
     .tunnel_type = MLX5E_TC_TUNNEL_TYPE_GENEVE,
     .match_level = MLX5_MATCH_L4,
···
     .generate_ip_tun_hdr = mlx5e_gen_ip_tunnel_header_geneve,
     .parse_udp_ports = mlx5e_tc_tun_parse_udp_ports_geneve,
     .parse_tunnel = mlx5e_tc_tun_parse_geneve,
+    .encap_info_equal = mlx5e_tc_tun_encap_info_equal_geneve,
 };
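The geneve comparator above recovers the enclosing `ip_tunnel_info` from its embedded `key` member via `container_of`, then compares the options stored immediately after the struct. A self-contained sketch of that pointer arithmetic, with a minimal `container_of` and illustrative struct names (not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal container_of, as used in the hunk above to go from a pointer
 * to an embedded member back to its enclosing struct.
 */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct tun_key { int id; };

struct tun_info {
    int options_len;
    struct tun_key key; /* embedded member */
};

static struct tun_info *info_from_key(struct tun_key *k)
{
    return container_of(k, struct tun_info, key);
}
```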
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_gre.c (+1)
···
     .generate_ip_tun_hdr = mlx5e_gen_ip_tunnel_header_gretap,
     .parse_udp_ports = NULL,
     .parse_tunnel = mlx5e_tc_tun_parse_gretap,
+    .encap_info_equal = mlx5e_tc_tun_encap_info_equal_generic,
 };
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_mplsoudp.c (+1)
···
     .generate_ip_tun_hdr = generate_ip_tun_hdr,
     .parse_udp_ports = parse_udp_ports,
     .parse_tunnel = parse_tunnel,
+    .encap_info_equal = mlx5e_tc_tun_encap_info_equal_generic,
 };
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c (+1)
···
     .generate_ip_tun_hdr = mlx5e_gen_ip_tunnel_header_vxlan,
     .parse_udp_ports = mlx5e_tc_tun_parse_udp_ports_vxlan,
     .parse_tunnel = mlx5e_tc_tun_parse_vxlan,
+    .encap_info_equal = mlx5e_tc_tun_encap_info_equal_generic,
 };
drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h (+6)
···
     return wqe_size * 2 - 1;
 }
 
+static inline bool mlx5e_icosq_can_post_wqe(struct mlx5e_icosq *sq, u16 wqe_size)
+{
+    u16 room = sq->reserved_room + mlx5e_stop_room_for_wqe(wqe_size);
+
+    return mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, room);
+}
 #endif
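The new `mlx5e_icosq_can_post_wqe` helper checks free space in a cyclic work queue using free-running producer (`pc`) and consumer (`cc`) counters, with `reserved_room` subtracted from the usable space. A hedged sketch of that counter arithmetic; the function and parameters are illustrative, not the driver's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Free-running 16-bit producer/consumer counters over a ring of `size`
 * slots: occupancy is pc - cc, computed modulo 2^16 so wraparound is
 * handled by unsigned arithmetic. Room exists when the free slots
 * cover `need` (which, as in the hunk above, may already include a
 * reserved-room margin).
 */
static bool ring_has_room(uint16_t cc, uint16_t pc, uint16_t size,
                          uint16_t need)
{
    uint16_t used = (uint16_t)(pc - cc); /* wraps correctly */

    return (uint16_t)(size - used) >= need;
}
```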
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c (+19 -21)
···
     struct tls12_crypto_info_aes_gcm_128 crypto_info;
     struct accel_rule rule;
     struct sock *sk;
-    struct mlx5e_rq_stats *stats;
+    struct mlx5e_rq_stats *rq_stats;
+    struct mlx5e_tls_sw_stats *sw_stats;
     struct completion add_ctx;
     u32 tirn;
     u32 key_id;
···
 {
     struct mlx5e_set_tls_static_params_wqe *wqe;
     struct mlx5e_icosq_wqe_info wi;
-    u16 pi, num_wqebbs, room;
+    u16 pi, num_wqebbs;
 
     num_wqebbs = MLX5E_TLS_SET_STATIC_PARAMS_WQEBBS;
-    room = mlx5e_stop_room_for_wqe(num_wqebbs);
-    if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, room)))
+    if (unlikely(!mlx5e_icosq_can_post_wqe(sq, num_wqebbs)))
         return ERR_PTR(-ENOSPC);
 
     pi = mlx5e_icosq_get_next_pi(sq, num_wqebbs);
···
 {
     struct mlx5e_set_tls_progress_params_wqe *wqe;
     struct mlx5e_icosq_wqe_info wi;
-    u16 pi, num_wqebbs, room;
+    u16 pi, num_wqebbs;
 
     num_wqebbs = MLX5E_TLS_SET_PROGRESS_PARAMS_WQEBBS;
-    room = mlx5e_stop_room_for_wqe(num_wqebbs);
-    if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, room)))
+    if (unlikely(!mlx5e_icosq_can_post_wqe(sq, num_wqebbs)))
         return ERR_PTR(-ENOSPC);
 
     pi = mlx5e_icosq_get_next_pi(sq, num_wqebbs);
···
     return err;
 
 err_out:
-    priv_rx->stats->tls_resync_req_skip++;
+    priv_rx->rq_stats->tls_resync_req_skip++;
     err = PTR_ERR(cseg);
     complete(&priv_rx->add_ctx);
     goto unlock;
···
 
     buf->priv_rx = priv_rx;
 
-    BUILD_BUG_ON(MLX5E_KTLS_GET_PROGRESS_WQEBBS != 1);
-
     spin_lock_bh(&sq->channel->async_icosq_lock);
 
-    if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, 1))) {
+    if (unlikely(!mlx5e_icosq_can_post_wqe(sq, MLX5E_KTLS_GET_PROGRESS_WQEBBS))) {
         spin_unlock_bh(&sq->channel->async_icosq_lock);
         err = -ENOSPC;
         goto err_dma_unmap;
     }
 
-    pi = mlx5e_icosq_get_next_pi(sq, 1);
+    pi = mlx5e_icosq_get_next_pi(sq, MLX5E_KTLS_GET_PROGRESS_WQEBBS);
     wqe = MLX5E_TLS_FETCH_GET_PROGRESS_PARAMS_WQE(sq, pi);
 
 #define GET_PSV_DS_CNT (DIV_ROUND_UP(sizeof(*wqe), MLX5_SEND_WQE_DS))
···
 
     wi = (struct mlx5e_icosq_wqe_info) {
         .wqe_type = MLX5E_ICOSQ_WQE_GET_PSV_TLS,
-        .num_wqebbs = 1,
+        .num_wqebbs = MLX5E_KTLS_GET_PROGRESS_WQEBBS,
         .tls_get_params.buf = buf,
     };
     icosq_fill_wi(sq, pi, &wi);
···
 err_free:
     kfree(buf);
 err_out:
-    priv_rx->stats->tls_resync_req_skip++;
+    priv_rx->rq_stats->tls_resync_req_skip++;
     return err;
 }
···
 
     cseg = post_static_params(sq, priv_rx);
     if (IS_ERR(cseg)) {
-        priv_rx->stats->tls_resync_res_skip++;
+        priv_rx->rq_stats->tls_resync_res_skip++;
         err = PTR_ERR(cseg);
         goto unlock;
     }
     /* Do not increment priv_rx refcnt, CQE handling is empty */
     mlx5e_notify_hw(&sq->wq, sq->pc, sq->uar_map, cseg);
-    priv_rx->stats->tls_resync_res_ok++;
+    priv_rx->rq_stats->tls_resync_res_ok++;
 unlock:
     spin_unlock_bh(&c->async_icosq_lock);
···
     auth_state = MLX5_GET(tls_progress_params, ctx, auth_state);
     if (tracker_state != MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_TRACKING ||
         auth_state != MLX5E_TLS_PROGRESS_PARAMS_AUTH_STATE_NO_OFFLOAD) {
-        priv_rx->stats->tls_resync_req_skip++;
+        priv_rx->rq_stats->tls_resync_req_skip++;
         goto out;
     }
 
     hw_seq = MLX5_GET(tls_progress_params, ctx, hw_resync_tcp_sn);
     tls_offload_rx_resync_async_request_end(priv_rx->sk, cpu_to_be32(hw_seq));
-    priv_rx->stats->tls_resync_req_end++;
+    priv_rx->rq_stats->tls_resync_req_end++;
 out:
     mlx5e_ktls_priv_rx_put(priv_rx);
     dma_unmap_single(dev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE);
···
     priv_rx->rxq = rxq;
     priv_rx->sk = sk;
 
-    priv_rx->stats = &priv->channel_stats[rxq].rq;
+    priv_rx->rq_stats = &priv->channel_stats[rxq].rq;
+    priv_rx->sw_stats = &priv->tls->sw_stats;
     mlx5e_set_ktls_rx_priv_ctx(tls_ctx, priv_rx);
 
     rqtn = priv->direct_tir[rxq].rqt.rqtn;
···
     if (err)
         goto err_post_wqes;
 
-    priv_rx->stats->tls_ctx++;
+    atomic64_inc(&priv_rx->sw_stats->rx_tls_ctx);
 
     return 0;
···
     if (cancel_work_sync(&resync->work))
         mlx5e_ktls_priv_rx_put(priv_rx);
 
-    priv_rx->stats->tls_del++;
+    atomic64_inc(&priv_rx->sw_stats->rx_tls_del);
     if (priv_rx->rule.rule)
         mlx5e_accel_fs_del_sk(priv_rx->rule.rule);
+4 -1
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
···
 // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 // Copyright (c) 2019 Mellanox Technologies.
 
+#include "en_accel/tls.h"
 #include "en_accel/ktls_txrx.h"
 #include "en_accel/ktls_utils.h"
 
···
 struct mlx5e_ktls_offload_context_tx {
     struct tls_offload_context_tx *tx_ctx;
     struct tls12_crypto_info_aes_gcm_128 crypto_info;
+    struct mlx5e_tls_sw_stats *sw_stats;
     u32 expected_seq;
     u32 tisn;
     u32 key_id;
···
     if (err)
         goto err_create_key;
 
+    priv_tx->sw_stats = &priv->tls->sw_stats;
     priv_tx->expected_seq = start_offload_tcp_sn;
     priv_tx->crypto_info =
         *(struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
···
         goto err_create_tis;
 
     priv_tx->ctx_post_pending = true;
+    atomic64_inc(&priv_tx->sw_stats->tx_tls_ctx);
 
     return 0;
 
···
 
     if (unlikely(mlx5e_ktls_tx_offload_test_and_clear_pending(priv_tx))) {
         mlx5e_ktls_tx_post_param_wqes(sq, priv_tx, false, false);
-        stats->tls_ctx++;
     }
 
     seq = ntohl(tcp_hdr(skb)->seq);
+3
drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.h
···
 #include "en.h"
 
 struct mlx5e_tls_sw_stats {
+    atomic64_t tx_tls_ctx;
     atomic64_t tx_tls_drop_metadata;
     atomic64_t tx_tls_drop_resync_alloc;
     atomic64_t tx_tls_drop_no_sync_data;
     atomic64_t tx_tls_drop_bypass_required;
+    atomic64_t rx_tls_ctx;
+    atomic64_t rx_tls_del;
     atomic64_t rx_tls_drop_resync_request;
     atomic64_t rx_tls_resync_request;
     atomic64_t rx_tls_resync_reply;
+30 -19
drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_stats.c
···
     { MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, tx_tls_drop_bypass_required) },
 };
 
+static const struct counter_desc mlx5e_ktls_sw_stats_desc[] = {
+    { MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, tx_tls_ctx) },
+    { MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, rx_tls_ctx) },
+    { MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, rx_tls_del) },
+};
+
 #define MLX5E_READ_CTR_ATOMIC64(ptr, dsc, i) \
     atomic64_read((atomic64_t *)((char *)(ptr) + (dsc)[i].offset))
 
-#define NUM_TLS_SW_COUNTERS ARRAY_SIZE(mlx5e_tls_sw_stats_desc)
-
-static bool is_tls_atomic_stats(struct mlx5e_priv *priv)
+static const struct counter_desc *get_tls_atomic_stats(struct mlx5e_priv *priv)
 {
-    return priv->tls && !mlx5_accel_is_ktls_device(priv->mdev);
+    if (!priv->tls)
+        return NULL;
+    if (mlx5_accel_is_ktls_device(priv->mdev))
+        return mlx5e_ktls_sw_stats_desc;
+    return mlx5e_tls_sw_stats_desc;
 }
 
 int mlx5e_tls_get_count(struct mlx5e_priv *priv)
 {
-    if (!is_tls_atomic_stats(priv))
+    if (!priv->tls)
         return 0;
-
-    return NUM_TLS_SW_COUNTERS;
+    if (mlx5_accel_is_ktls_device(priv->mdev))
+        return ARRAY_SIZE(mlx5e_ktls_sw_stats_desc);
+    return ARRAY_SIZE(mlx5e_tls_sw_stats_desc);
 }
 
 int mlx5e_tls_get_strings(struct mlx5e_priv *priv, uint8_t *data)
 {
-    unsigned int i, idx = 0;
+    const struct counter_desc *stats_desc;
+    unsigned int i, n, idx = 0;
 
-    if (!is_tls_atomic_stats(priv))
-        return 0;
+    stats_desc = get_tls_atomic_stats(priv);
+    n = mlx5e_tls_get_count(priv);
 
-    for (i = 0; i < NUM_TLS_SW_COUNTERS; i++)
+    for (i = 0; i < n; i++)
         strcpy(data + (idx++) * ETH_GSTRING_LEN,
-               mlx5e_tls_sw_stats_desc[i].format);
+               stats_desc[i].format);
 
-    return NUM_TLS_SW_COUNTERS;
+    return n;
 }
 
 int mlx5e_tls_get_stats(struct mlx5e_priv *priv, u64 *data)
 {
-    int i, idx = 0;
+    const struct counter_desc *stats_desc;
+    unsigned int i, n, idx = 0;
 
-    if (!is_tls_atomic_stats(priv))
-        return 0;
+    stats_desc = get_tls_atomic_stats(priv);
+    n = mlx5e_tls_get_count(priv);
 
-    for (i = 0; i < NUM_TLS_SW_COUNTERS; i++)
+    for (i = 0; i < n; i++)
         data[idx++] =
             MLX5E_READ_CTR_ATOMIC64(&priv->tls->sw_stats,
-                                    mlx5e_tls_sw_stats_desc, i);
+                                    stats_desc, i);
 
-    return NUM_TLS_SW_COUNTERS;
+    return n;
 }
+11 -11
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
···
     return 0;
 }
 
-static void ptys2ethtool_supported_advertised_port(struct ethtool_link_ksettings *link_ksettings,
-                                                   u32 eth_proto_cap,
-                                                   u8 connector_type, bool ext)
+static void ptys2ethtool_supported_advertised_port(struct mlx5_core_dev *mdev,
+                                                   struct ethtool_link_ksettings *link_ksettings,
+                                                   u32 eth_proto_cap, u8 connector_type)
 {
-    if ((!connector_type && !ext) || connector_type >= MLX5E_CONNECTOR_TYPE_NUMBER) {
+    if (!MLX5_CAP_PCAM_FEATURE(mdev, ptys_connector_type)) {
         if (eth_proto_cap & (MLX5E_PROT_MASK(MLX5E_10GBASE_CR)
                            | MLX5E_PROT_MASK(MLX5E_10GBASE_SR)
                            | MLX5E_PROT_MASK(MLX5E_40GBASE_CR4)
···
     [MLX5E_PORT_OTHER] = PORT_OTHER,
 };
 
-static u8 get_connector_port(u32 eth_proto, u8 connector_type, bool ext)
+static u8 get_connector_port(struct mlx5_core_dev *mdev, u32 eth_proto, u8 connector_type)
 {
-    if ((connector_type || ext) && connector_type < MLX5E_CONNECTOR_TYPE_NUMBER)
+    if (MLX5_CAP_PCAM_FEATURE(mdev, ptys_connector_type))
         return ptys2connector_type[connector_type];
 
     if (eth_proto &
···
                    data_rate_oper, link_ksettings);
 
     eth_proto_oper = eth_proto_oper ? eth_proto_oper : eth_proto_cap;
-
-    link_ksettings->base.port = get_connector_port(eth_proto_oper,
-                                                   connector_type, ext);
-    ptys2ethtool_supported_advertised_port(link_ksettings, eth_proto_admin,
-                                           connector_type, ext);
+    connector_type = connector_type < MLX5E_CONNECTOR_TYPE_NUMBER ?
+                     connector_type : MLX5E_PORT_UNKNOWN;
+    link_ksettings->base.port = get_connector_port(mdev, eth_proto_oper, connector_type);
+    ptys2ethtool_supported_advertised_port(mdev, link_ksettings, eth_proto_admin,
+                                           connector_type);
     get_lp_advertising(mdev, eth_proto_lp, link_ksettings);
 
     if (an_status == MLX5_AN_COMPLETE)
+20 -1
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
···
 
     sq->channel = c;
     sq->uar_map = mdev->mlx5e_res.bfreg.map;
+    sq->reserved_room = param->stop_room;
 
     param->wq.db_numa_node = cpu_to_node(c->cpu);
     err = mlx5_wq_cyc_create(mdev, &param->wq, sqc_wq, wq, &sq->wq_ctrl);
···
     mlx5e_build_ico_cq_param(priv, log_wq_size, &param->cqp);
 }
 
+static void mlx5e_build_async_icosq_param(struct mlx5e_priv *priv,
+                                          struct mlx5e_params *params,
+                                          u8 log_wq_size,
+                                          struct mlx5e_sq_param *param)
+{
+    void *sqc = param->sqc;
+    void *wq = MLX5_ADDR_OF(sqc, sqc, wq);
+
+    mlx5e_build_sq_param_common(priv, param);
+
+    /* async_icosq is used by XSK only if xdp_prog is active */
+    if (params->xdp_prog)
+        param->stop_room = mlx5e_stop_room_for_wqe(1); /* for XSK NOP */
+    MLX5_SET(sqc, sqc, reg_umr, MLX5_CAP_ETH(priv->mdev, reg_umr_sq));
+    MLX5_SET(wq, wq, log_wq_sz, log_wq_size);
+    mlx5e_build_ico_cq_param(priv, log_wq_size, &param->cqp);
+}
+
 void mlx5e_build_xdpsq_param(struct mlx5e_priv *priv,
                              struct mlx5e_params *params,
                              struct mlx5e_sq_param *param)
···
     mlx5e_build_sq_param(priv, params, &cparam->txq_sq);
     mlx5e_build_xdpsq_param(priv, params, &cparam->xdp_sq);
     mlx5e_build_icosq_param(priv, icosq_log_wq_sz, &cparam->icosq);
-    mlx5e_build_icosq_param(priv, async_icosq_log_wq_sz, &cparam->async_icosq);
+    mlx5e_build_async_icosq_param(priv, params, async_icosq_log_wq_sz, &cparam->async_icosq);
 }
 
 int mlx5e_open_channels(struct mlx5e_priv *priv,
+3 -2
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
···
 
     mlx5e_rep_tc_enable(priv);
 
-    mlx5_modify_vport_admin_state(mdev, MLX5_VPORT_STATE_OP_MOD_UPLINK,
-                                  0, 0, MLX5_VPORT_ADMIN_STATE_AUTO);
+    if (MLX5_CAP_GEN(mdev, uplink_follow))
+        mlx5_modify_vport_admin_state(mdev, MLX5_VPORT_STATE_OP_MOD_UPLINK,
+                                      0, 0, MLX5_VPORT_ADMIN_STATE_AUTO);
     mlx5_lag_add(mdev, netdev);
     priv->events_nb.notifier_call = uplink_rep_async_event;
     mlx5_notifier_register(mdev, &priv->events_nb);
-10
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
···
 #ifdef CONFIG_MLX5_EN_TLS
     { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_encrypted_packets) },
     { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_encrypted_bytes) },
-    { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_ctx) },
     { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_ooo) },
     { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_packets) },
     { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_bytes) },
···
 #ifdef CONFIG_MLX5_EN_TLS
     { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_decrypted_packets) },
     { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_decrypted_bytes) },
-    { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_ctx) },
-    { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_del) },
     { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_resync_req_pkt) },
     { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_resync_req_start) },
     { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_resync_req_end) },
···
 #ifdef CONFIG_MLX5_EN_TLS
     s->rx_tls_decrypted_packets += rq_stats->tls_decrypted_packets;
     s->rx_tls_decrypted_bytes += rq_stats->tls_decrypted_bytes;
-    s->rx_tls_ctx += rq_stats->tls_ctx;
-    s->rx_tls_del += rq_stats->tls_del;
     s->rx_tls_resync_req_pkt += rq_stats->tls_resync_req_pkt;
     s->rx_tls_resync_req_start += rq_stats->tls_resync_req_start;
     s->rx_tls_resync_req_end += rq_stats->tls_resync_req_end;
···
 #ifdef CONFIG_MLX5_EN_TLS
     s->tx_tls_encrypted_packets += sq_stats->tls_encrypted_packets;
     s->tx_tls_encrypted_bytes += sq_stats->tls_encrypted_bytes;
-    s->tx_tls_ctx += sq_stats->tls_ctx;
     s->tx_tls_ooo += sq_stats->tls_ooo;
     s->tx_tls_dump_bytes += sq_stats->tls_dump_bytes;
     s->tx_tls_dump_packets += sq_stats->tls_dump_packets;
···
 #ifdef CONFIG_MLX5_EN_TLS
     { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_decrypted_packets) },
     { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_decrypted_bytes) },
-    { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_ctx) },
-    { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_del) },
     { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_resync_req_pkt) },
     { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_resync_req_start) },
     { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_resync_req_end) },
···
 #ifdef CONFIG_MLX5_EN_TLS
     { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_packets) },
     { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_bytes) },
-    { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_ctx) },
     { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_ooo) },
     { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_dump_packets) },
     { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_dump_bytes) },
···
 #ifdef CONFIG_MLX5_EN_TLS
     { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_packets) },
     { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_bytes) },
-    { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_ctx) },
     { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_ooo) },
     { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_dump_packets) },
     { MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_dump_bytes) },
-6
drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
···
 #ifdef CONFIG_MLX5_EN_TLS
     u64 tx_tls_encrypted_packets;
     u64 tx_tls_encrypted_bytes;
-    u64 tx_tls_ctx;
     u64 tx_tls_ooo;
     u64 tx_tls_dump_packets;
     u64 tx_tls_dump_bytes;
···
 
     u64 rx_tls_decrypted_packets;
     u64 rx_tls_decrypted_bytes;
-    u64 rx_tls_ctx;
-    u64 rx_tls_del;
     u64 rx_tls_resync_req_pkt;
     u64 rx_tls_resync_req_start;
     u64 rx_tls_resync_req_end;
···
 #ifdef CONFIG_MLX5_EN_TLS
     u64 tls_decrypted_packets;
     u64 tls_decrypted_bytes;
-    u64 tls_ctx;
-    u64 tls_del;
     u64 tls_resync_req_pkt;
     u64 tls_resync_req_start;
     u64 tls_resync_req_end;
···
 #ifdef CONFIG_MLX5_EN_TLS
     u64 tls_encrypted_packets;
     u64 tls_encrypted_bytes;
-    u64 tls_ctx;
     u64 tls_ooo;
     u64 tls_dump_packets;
     u64 tls_dump_bytes;
+12 -1
drivers/net/ethernet/mellanox/mlx5/core/eq.c
···
     mutex_unlock(&table->lock);
 }
 
+#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING
+#define MLX5_MAX_ASYNC_EQS 4
+#else
+#define MLX5_MAX_ASYNC_EQS 3
+#endif
+
 int mlx5_eq_table_create(struct mlx5_core_dev *dev)
 {
     struct mlx5_eq_table *eq_table = dev->priv.eq_table;
+    int num_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ?
+                  MLX5_CAP_GEN(dev, max_num_eqs) :
+                  1 << MLX5_CAP_GEN(dev, log_max_eq);
     int err;
 
     eq_table->num_comp_eqs =
-        mlx5_irq_get_num_comp(eq_table->irq_table);
+        min_t(int,
+              mlx5_irq_get_num_comp(eq_table->irq_table),
+              num_eqs - MLX5_MAX_ASYNC_EQS);
 
     err = create_async_eqs(dev);
     if (err) {
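The eq.c hunk above caps the number of completion EQs so that the event queues the driver always needs for asynchronous events remain available. A standalone sketch of that arithmetic (function name and parameters are illustrative, not the driver's):

```c
#include <assert.h>

/* Completion EQs get whatever is left after reserving the async EQs
 * (4 when on-demand paging is compiled in, 3 otherwise), but never
 * more than the number of completion IRQs available. */
static int num_comp_eqs(int num_comp_irqs, int max_eqs, int max_async_eqs)
{
    int budget = max_eqs - max_async_eqs;

    return num_comp_irqs < budget ? num_comp_irqs : budget;
}
```

With a device exposing only 16 EQs, 64 IRQs no longer oversubscribe the EQ table: only 12 completion EQs are created.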
+5 -5
drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c
···
 err_ethertype:
     kfree(rule);
 out:
-    kfree(rule_spec);
+    kvfree(rule_spec);
     return err;
 }
···
     e->recirc_cnt = 0;
 
 out:
-    kfree(in);
+    kvfree(in);
     return err;
 }
···
 
     spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
     if (!spec) {
-        kfree(in);
+        kvfree(in);
         return -ENOMEM;
     }
···
     }
 
 err_out:
-    kfree(spec);
-    kfree(in);
+    kvfree(spec);
+    kvfree(in);
     return err;
 }
 
+38 -28
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
···
     return i;
 }
 
+static bool
+esw_src_port_rewrite_supported(struct mlx5_eswitch *esw)
+{
+    return MLX5_CAP_GEN(esw->dev, reg_c_preserve) &&
+           mlx5_eswitch_vport_match_metadata_enabled(esw) &&
+           MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level);
+}
+
 static int
 esw_setup_dests(struct mlx5_flow_destination *dest,
                 struct mlx5_flow_act *flow_act,
···
     int err = 0;
 
     if (!mlx5_eswitch_termtbl_required(esw, attr, flow_act, spec) &&
-        MLX5_CAP_GEN(esw_attr->in_mdev, reg_c_preserve) &&
-        mlx5_eswitch_vport_match_metadata_enabled(esw) &&
-        MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level))
+        esw_src_port_rewrite_supported(esw))
         attr->flags |= MLX5_ESW_ATTR_FLAG_SRC_REWRITE;
 
     if (attr->dest_ft) {
···
     }
     esw->fdb_table.offloads.send_to_vport_grp = g;
 
-    /* meta send to vport */
-    memset(flow_group_in, 0, inlen);
-    MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
-             MLX5_MATCH_MISC_PARAMETERS_2);
+    if (esw_src_port_rewrite_supported(esw)) {
+        /* meta send to vport */
+        memset(flow_group_in, 0, inlen);
+        MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
+                 MLX5_MATCH_MISC_PARAMETERS_2);
 
-    match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
+        match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
 
-    MLX5_SET(fte_match_param, match_criteria,
-             misc_parameters_2.metadata_reg_c_0, mlx5_eswitch_get_vport_metadata_mask());
-    MLX5_SET(fte_match_param, match_criteria,
-             misc_parameters_2.metadata_reg_c_1, ESW_TUN_MASK);
+        MLX5_SET(fte_match_param, match_criteria,
+                 misc_parameters_2.metadata_reg_c_0,
+                 mlx5_eswitch_get_vport_metadata_mask());
+        MLX5_SET(fte_match_param, match_criteria,
+                 misc_parameters_2.metadata_reg_c_1, ESW_TUN_MASK);
 
-    num_vfs = esw->esw_funcs.num_vfs;
-    if (num_vfs) {
-        MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix);
-        MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, ix + num_vfs - 1);
-        ix += num_vfs;
+        num_vfs = esw->esw_funcs.num_vfs;
+        if (num_vfs) {
+            MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix);
+            MLX5_SET(create_flow_group_in, flow_group_in,
+                     end_flow_index, ix + num_vfs - 1);
+            ix += num_vfs;
 
-        g = mlx5_create_flow_group(fdb, flow_group_in);
-        if (IS_ERR(g)) {
-            err = PTR_ERR(g);
-            esw_warn(dev, "Failed to create send-to-vport meta flow group err(%d)\n",
-                     err);
-            goto send_vport_meta_err;
+            g = mlx5_create_flow_group(fdb, flow_group_in);
+            if (IS_ERR(g)) {
+                err = PTR_ERR(g);
+                esw_warn(dev, "Failed to create send-to-vport meta flow group err(%d)\n",
+                         err);
+                goto send_vport_meta_err;
+            }
+            esw->fdb_table.offloads.send_to_vport_meta_grp = g;
+
+            err = mlx5_eswitch_add_send_to_vport_meta_rules(esw);
+            if (err)
+                goto meta_rule_err;
         }
-        esw->fdb_table.offloads.send_to_vport_meta_grp = g;
-
-        err = mlx5_eswitch_add_send_to_vport_meta_rules(esw);
-        if (err)
-            goto meta_rule_err;
     }
 
     if (MLX5_CAP_ESW(esw->dev, merged_eswitch)) {
+15
drivers/net/ethernet/mellanox/mlxsw/spectrum.h
···
 #include <net/red.h>
 #include <net/vxlan.h>
 #include <net/flow_offload.h>
+#include <net/inet_ecn.h>
 
 #include "port.h"
 #include "core.h"
···
                           u32 *p_eth_proto_oper);
     u32 (*ptys_proto_cap_masked_get)(u32 eth_proto_cap);
 };
+
+static inline u8 mlxsw_sp_tunnel_ecn_decap(u8 outer_ecn, u8 inner_ecn,
+                                           bool *trap_en)
+{
+    bool set_ce = false;
+
+    *trap_en = !!__INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce);
+    if (set_ce)
+        return INET_ECN_CE;
+    else if (outer_ecn == INET_ECN_ECT_1 && inner_ecn == INET_ECN_ECT_0)
+        return INET_ECN_ECT_1;
+    else
+        return inner_ecn;
+}
 
 static inline struct net_device *
 mlxsw_sp_bridge_vxlan_dev_find(struct net_device *br_dev)
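The new `mlxsw_sp_tunnel_ecn_decap()` helper above centralizes the RFC 6040 decapsulation rules for both IPIP and NVE tunnels. A standalone sketch of the same mapping, with the ECN codepoints redefined locally rather than pulled from `<net/inet_ecn.h>` (the function name and `trap` flag are illustrative):

```c
#include <assert.h>

/* The two ECN bits of the IP header (same values as the kernel's
 * INET_ECN_* constants, redefined to keep this sketch self-contained). */
enum { ECN_NOT_ECT = 0, ECN_ECT_1 = 1, ECN_ECT_0 = 2, ECN_CE = 3 };

/* RFC 6040 decapsulation: combine outer (tunnel) and inner ECN fields.
 * Returns the decapsulated codepoint; *trap is set when the combination
 * is invalid (ECN-capable outer over a not-ECN-capable inner), which the
 * mlxsw fix hands to software instead of forwarding silently. */
static unsigned char ecn_decap(unsigned char outer, unsigned char inner,
                               int *trap)
{
    *trap = (inner == ECN_NOT_ECT && outer != ECN_NOT_ECT);
    if (outer == ECN_CE && inner != ECN_NOT_ECT)
        return ECN_CE;              /* congestion marking survives decap */
    if (outer == ECN_ECT_1 && inner == ECN_ECT_0)
        return ECN_ECT_1;           /* ECT(1) outer overrides ECT(0) inner */
    return inner;
}
```

The ECT(1)-over-ECT(0) case is exactly what the one-line `else if` in the new helper adds over the old open-coded version in spectrum_ipip.c and spectrum_nve.c.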
+14 -5
drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
···
                                     u32 ptys_eth_proto,
                                     struct ethtool_link_ksettings *cmd)
 {
+    struct mlxsw_sp1_port_link_mode link;
     int i;
 
-    cmd->link_mode = -1;
+    cmd->base.speed = SPEED_UNKNOWN;
+    cmd->base.duplex = DUPLEX_UNKNOWN;
+    cmd->lanes = 0;
 
     if (!carrier_ok)
         return;
 
     for (i = 0; i < MLXSW_SP1_PORT_LINK_MODE_LEN; i++) {
-        if (ptys_eth_proto & mlxsw_sp1_port_link_mode[i].mask)
-            cmd->link_mode = mlxsw_sp1_port_link_mode[i].mask_ethtool;
+        if (ptys_eth_proto & mlxsw_sp1_port_link_mode[i].mask) {
+            link = mlxsw_sp1_port_link_mode[i];
+            ethtool_params_from_link_mode(cmd,
+                                          link.mask_ethtool);
+        }
     }
 }
···
     struct mlxsw_sp2_port_link_mode link;
     int i;
 
-    cmd->link_mode = -1;
+    cmd->base.speed = SPEED_UNKNOWN;
+    cmd->base.duplex = DUPLEX_UNKNOWN;
+    cmd->lanes = 0;
 
     if (!carrier_ok)
         return;
 
     for (i = 0; i < MLXSW_SP2_PORT_LINK_MODE_LEN; i++) {
         if (ptys_eth_proto & mlxsw_sp2_port_link_mode[i].mask) {
             link = mlxsw_sp2_port_link_mode[i];
-            cmd->link_mode = link.mask_ethtool[1];
+            ethtool_params_from_link_mode(cmd,
+                                          link.mask_ethtool[1]);
         }
     }
 }
+3 -4
drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c
···
                                          u8 inner_ecn, u8 outer_ecn)
 {
     char tidem_pl[MLXSW_REG_TIDEM_LEN];
-    bool trap_en, set_ce = false;
     u8 new_inner_ecn;
+    bool trap_en;
 
-    trap_en = __INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce);
-    new_inner_ecn = set_ce ? INET_ECN_CE : inner_ecn;
-
+    new_inner_ecn = mlxsw_sp_tunnel_ecn_decap(outer_ecn, inner_ecn,
+                                              &trap_en);
     mlxsw_reg_tidem_pack(tidem_pl, outer_ecn, inner_ecn, new_inner_ecn,
                          trap_en, trap_en ? MLXSW_TRAP_ID_DECAP_ECN0 : 0);
     return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(tidem), tidem_pl);
+3 -4
drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c
···
                                          u8 inner_ecn, u8 outer_ecn)
 {
     char tndem_pl[MLXSW_REG_TNDEM_LEN];
-    bool trap_en, set_ce = false;
     u8 new_inner_ecn;
+    bool trap_en;
 
-    trap_en = !!__INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce);
-    new_inner_ecn = set_ce ? INET_ECN_CE : inner_ecn;
-
+    new_inner_ecn = mlxsw_sp_tunnel_ecn_decap(outer_ecn, inner_ecn,
+                                              &trap_en);
     mlxsw_reg_tndem_pack(tndem_pl, outer_ecn, inner_ecn, new_inner_ecn,
                          trap_en, trap_en ? MLXSW_TRAP_ID_DECAP_ECN0 : 0);
     return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(tndem), tndem_pl);
+4 -4
drivers/net/ethernet/microchip/lan743x_main.c
···
     }
 
     mac_rx &= ~(MAC_RX_MAX_SIZE_MASK_);
-    mac_rx |= (((new_mtu + ETH_HLEN + 4) << MAC_RX_MAX_SIZE_SHIFT_) &
-              MAC_RX_MAX_SIZE_MASK_);
+    mac_rx |= (((new_mtu + ETH_HLEN + ETH_FCS_LEN)
+              << MAC_RX_MAX_SIZE_SHIFT_) & MAC_RX_MAX_SIZE_MASK_);
     lan743x_csr_write(adapter, MAC_RX, mac_rx);
 
     if (enabled) {
···
     struct sk_buff *skb;
     dma_addr_t dma_ptr;
 
-    buffer_length = netdev->mtu + ETH_HLEN + 4 + RX_HEAD_PADDING;
+    buffer_length = netdev->mtu + ETH_HLEN + ETH_FCS_LEN + RX_HEAD_PADDING;
 
     descriptor = &rx->ring_cpu_ptr[index];
     buffer_info = &rx->buffer_info[index];
···
         dev_kfree_skb_irq(skb);
         return NULL;
     }
-    frame_length = max_t(int, 0, frame_length - RX_HEAD_PADDING - 4);
+    frame_length = max_t(int, 0, frame_length - ETH_FCS_LEN);
     if (skb->len > frame_length) {
         skb->tail -= skb->len - frame_length;
         skb->len = frame_length;
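The lan743x hunks above replace the magic `4` with `ETH_FCS_LEN` and stop subtracting the head padding from the received length, which was cutting the tail off full-MTU frames. A standalone sketch of the two size computations (macro values match the kernel's `ETH_HLEN`/`ETH_FCS_LEN`; `RX_HEAD_PADDING` is the driver's own constant, its value here is illustrative):

```c
#include <assert.h>

#define ETH_HLEN        14  /* Ethernet header */
#define ETH_FCS_LEN      4  /* frame check sequence */
#define RX_HEAD_PADDING  2  /* driver's alignment pad (illustrative value) */

/* RX buffers must hold the MTU plus header, the FCS the hardware
 * writes into the buffer, and the alignment pad. */
static int rx_buffer_length(int mtu)
{
    return mtu + ETH_HLEN + ETH_FCS_LEN + RX_HEAD_PADDING;
}

/* The fix strips only the FCS from the hardware-reported length;
 * subtracting the head padding as well truncated valid payload. */
static int rx_frame_length(int hw_length)
{
    int len = hw_length - ETH_FCS_LEN;

    return len > 0 ? len : 0;
}
```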
+1 -1
drivers/net/ethernet/myricom/myri10ge/myri10ge.c
···
             dev_kfree_skb_any(curr);
             if (segs != NULL) {
                 curr = segs;
-                segs = segs->next;
+                segs = next;
                 curr->next = NULL;
                 dev_kfree_skb_any(segs);
             }
+1
drivers/net/ethernet/netronome/nfp/bpf/cmsg.c
···
             dev_consume_skb_any(skb);
         else
             dev_kfree_skb_any(skb);
+        return;
     }
 
     nfp_ccm_rx(&bpf->ccm, skb);
+8
drivers/net/ethernet/netronome/nfp/flower/main.h
···
  * @qos_rate_limiters:  Current active qos rate limiters
  * @qos_stats_lock:     Lock on qos stats updates
  * @pre_tun_rule_cnt:   Number of pre-tunnel rules offloaded
+ * @merge_table:        Hash table to store merged flows
  */
 struct nfp_flower_priv {
     struct nfp_app *app;
···
     unsigned int qos_rate_limiters;
     spinlock_t qos_stats_lock; /* Protect the qos stats */
     int pre_tun_rule_cnt;
+    struct rhashtable merge_table;
 };
 
 /**
···
 };
 
 extern const struct rhashtable_params nfp_flower_table_params;
+extern const struct rhashtable_params merge_table_params;
+
+struct nfp_merge_info {
+    u64 parent_ctx;
+    struct rhash_head ht_node;
+};
 
 struct nfp_fl_stats_frame {
     __be32 stats_con_id;
+15 -1
drivers/net/ethernet/netronome/nfp/flower/metadata.c
···
     .automatic_shrinking = true,
 };
 
+const struct rhashtable_params merge_table_params = {
+    .key_offset  = offsetof(struct nfp_merge_info, parent_ctx),
+    .head_offset = offsetof(struct nfp_merge_info, ht_node),
+    .key_len     = sizeof(u64),
+};
+
 int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count,
                              unsigned int host_num_mems)
 {
···
     if (err)
         goto err_free_flow_table;
 
+    err = rhashtable_init(&priv->merge_table, &merge_table_params);
+    if (err)
+        goto err_free_stats_ctx_table;
+
     get_random_bytes(&priv->mask_id_seed, sizeof(priv->mask_id_seed));
 
     /* Init ring buffer and unallocated mask_ids. */
···
         kmalloc_array(NFP_FLOWER_MASK_ENTRY_RS,
                       NFP_FLOWER_MASK_ELEMENT_RS, GFP_KERNEL);
     if (!priv->mask_ids.mask_id_free_list.buf)
-        goto err_free_stats_ctx_table;
+        goto err_free_merge_table;
 
     priv->mask_ids.init_unallocated = NFP_FLOWER_MASK_ENTRY_RS - 1;
 
···
     kfree(priv->mask_ids.last_used);
 err_free_mask_id:
     kfree(priv->mask_ids.mask_id_free_list.buf);
+err_free_merge_table:
+    rhashtable_destroy(&priv->merge_table);
 err_free_stats_ctx_table:
     rhashtable_destroy(&priv->stats_ctx_table);
 err_free_flow_table:
···
     rhashtable_free_and_destroy(&priv->flow_table,
                                 nfp_check_rhashtable_empty, NULL);
     rhashtable_free_and_destroy(&priv->stats_ctx_table,
+                                nfp_check_rhashtable_empty, NULL);
+    rhashtable_free_and_destroy(&priv->merge_table,
                                 nfp_check_rhashtable_empty, NULL);
     kvfree(priv->stats);
     kfree(priv->mask_ids.mask_id_free_list.buf);
+46 -2
drivers/net/ethernet/netronome/nfp/flower/offload.c
···
     struct netlink_ext_ack *extack = NULL;
     struct nfp_fl_payload *merge_flow;
     struct nfp_fl_key_ls merge_key_ls;
+    struct nfp_merge_info *merge_info;
+    u64 parent_ctx = 0;
     int err;
 
     ASSERT_RTNL();
···
         nfp_flower_is_merge_flow(sub_flow1) ||
         nfp_flower_is_merge_flow(sub_flow2))
         return -EINVAL;
+
+    /* check if the two flows are already merged */
+    parent_ctx = (u64)(be32_to_cpu(sub_flow1->meta.host_ctx_id)) << 32;
+    parent_ctx |= (u64)(be32_to_cpu(sub_flow2->meta.host_ctx_id));
+    if (rhashtable_lookup_fast(&priv->merge_table,
+                               &parent_ctx, merge_table_params)) {
+        nfp_flower_cmsg_warn(app, "The two flows are already merged.\n");
+        return 0;
+    }
 
     err = nfp_flower_can_merge(sub_flow1, sub_flow2);
     if (err)
···
     if (err)
         goto err_release_metadata;
 
+    merge_info = kmalloc(sizeof(*merge_info), GFP_KERNEL);
+    if (!merge_info) {
+        err = -ENOMEM;
+        goto err_remove_rhash;
+    }
+    merge_info->parent_ctx = parent_ctx;
+    err = rhashtable_insert_fast(&priv->merge_table, &merge_info->ht_node,
+                                 merge_table_params);
+    if (err)
+        goto err_destroy_merge_info;
+
     err = nfp_flower_xmit_flow(app, merge_flow,
                                NFP_FLOWER_CMSG_TYPE_FLOW_MOD);
     if (err)
-        goto err_remove_rhash;
+        goto err_remove_merge_info;
 
     merge_flow->in_hw = true;
     sub_flow1->in_hw = false;
 
     return 0;
 
+err_remove_merge_info:
+    WARN_ON_ONCE(rhashtable_remove_fast(&priv->merge_table,
+                                        &merge_info->ht_node,
+                                        merge_table_params));
+err_destroy_merge_info:
+    kfree(merge_info);
 err_remove_rhash:
     WARN_ON_ONCE(rhashtable_remove_fast(&priv->flow_table,
                                         &merge_flow->fl_node,
···
 {
     struct nfp_flower_priv *priv = app->priv;
     struct nfp_fl_payload_link *link, *temp;
+    struct nfp_merge_info *merge_info;
     struct nfp_fl_payload *origin;
+    u64 parent_ctx = 0;
     bool mod = false;
     int err;
 
···
 err_free_links:
     /* Clean any links connected with the merged flow. */
     list_for_each_entry_safe(link, temp, &merge_flow->linked_flows,
-                             merge_flow.list)
+                             merge_flow.list) {
+        u32 ctx_id = be32_to_cpu(link->sub_flow.flow->meta.host_ctx_id);
+
+        parent_ctx = (parent_ctx << 32) | (u64)(ctx_id);
         nfp_flower_unlink_flow(link);
+    }
+
+    merge_info = rhashtable_lookup_fast(&priv->merge_table,
+                                        &parent_ctx,
+                                        merge_table_params);
+    if (merge_info) {
+        WARN_ON_ONCE(rhashtable_remove_fast(&priv->merge_table,
+                                            &merge_info->ht_node,
+                                            merge_table_params));
+        kfree(merge_info);
+    }
 
     kfree(merge_flow->action_data);
     kfree(merge_flow->mask_data);
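The nfp flower hunks above key the new merge table on a 64-bit value built from the two sub-flows' 32-bit host context IDs, first flow in the high word and second in the low word. A standalone sketch of that packing (function name is illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Build the merge-table key from two host context IDs.  Order matters:
 * merging (A, B) and merging (B, A) produce different keys, matching
 * the direction-sensitive lookup the driver performs. */
static uint64_t merge_parent_ctx(uint32_t ctx_id1, uint32_t ctx_id2)
{
    return ((uint64_t)ctx_id1 << 32) | (uint64_t)ctx_id2;
}
```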
+12
drivers/net/ethernet/xilinx/xilinx_axienet.h
···
     return axienet_ior(lp, XAE_MDIO_MCR_OFFSET);
 }
 
+static inline void axienet_lock_mii(struct axienet_local *lp)
+{
+    if (lp->mii_bus)
+        mutex_lock(&lp->mii_bus->mdio_lock);
+}
+
+static inline void axienet_unlock_mii(struct axienet_local *lp)
+{
+    if (lp->mii_bus)
+        mutex_unlock(&lp->mii_bus->mdio_lock);
+}
+
 /**
  * axienet_iow - Memory mapped Axi Ethernet register write
  * @lp: Pointer to axienet local structure
+6 -6
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
···
      * including the MDIO. MDIO must be disabled before resetting.
      * Hold MDIO bus lock to avoid MDIO accesses during the reset.
      */
-    mutex_lock(&lp->mii_bus->mdio_lock);
+    axienet_lock_mii(lp);
     ret = axienet_device_reset(ndev);
-    mutex_unlock(&lp->mii_bus->mdio_lock);
+    axienet_unlock_mii(lp);
 
     ret = phylink_of_phy_connect(lp->phylink, lp->dev->of_node, 0);
     if (ret) {
···
     }
 
     /* Do a reset to ensure DMA is really stopped */
-    mutex_lock(&lp->mii_bus->mdio_lock);
+    axienet_lock_mii(lp);
     __axienet_device_reset(lp);
-    mutex_unlock(&lp->mii_bus->mdio_lock);
+    axienet_unlock_mii(lp);
 
     cancel_work_sync(&lp->dma_err_task);
 
···
      * including the MDIO. MDIO must be disabled before resetting.
      * Hold MDIO bus lock to avoid MDIO accesses during the reset.
      */
-    mutex_lock(&lp->mii_bus->mdio_lock);
+    axienet_lock_mii(lp);
     __axienet_device_reset(lp);
-    mutex_unlock(&lp->mii_bus->mdio_lock);
+    axienet_unlock_mii(lp);
 
     for (i = 0; i < lp->tx_bd_num; i++) {
         cur_p = &lp->tx_bd_v[i];
+20 -4
drivers/net/geneve.c
···
 
     info = skb_tunnel_info(skb);
     if (info) {
-        info->key.u.ipv4.dst = fl4.saddr;
-        info->key.u.ipv4.src = fl4.daddr;
+        struct ip_tunnel_info *unclone;
+
+        unclone = skb_tunnel_info_unclone(skb);
+        if (unlikely(!unclone)) {
+            dst_release(&rt->dst);
+            return -ENOMEM;
+        }
+
+        unclone->key.u.ipv4.dst = fl4.saddr;
+        unclone->key.u.ipv4.src = fl4.daddr;
     }
 
     if (!pskb_may_pull(skb, ETH_HLEN)) {
···
     struct ip_tunnel_info *info = skb_tunnel_info(skb);
 
     if (info) {
-        info->key.u.ipv6.dst = fl6.saddr;
-        info->key.u.ipv6.src = fl6.daddr;
+        struct ip_tunnel_info *unclone;
+
+        unclone = skb_tunnel_info_unclone(skb);
+        if (unlikely(!unclone)) {
+            dst_release(dst);
+            return -ENOMEM;
+        }
+
+        unclone->key.u.ipv6.dst = fl6.saddr;
+        unclone->key.u.ipv6.src = fl6.daddr;
     }
 
     if (!pskb_may_pull(skb, ETH_HLEN)) {
+1
drivers/net/ieee802154/atusb.c
··· 365 365 return -ENOMEM; 366 366 } 367 367 usb_anchor_urb(urb, &atusb->idle_urbs); 368 + usb_free_urb(urb); 368 369 n--; 369 370 } 370 371 return 0;
+10 -3
drivers/net/phy/bcm-phy-lib.c
··· 369 369 370 370 int bcm_phy_set_eee(struct phy_device *phydev, bool enable) 371 371 { 372 - int val; 372 + int val, mask = 0; 373 373 374 374 /* Enable EEE at PHY level */ 375 375 val = phy_read_mmd(phydev, MDIO_MMD_AN, BRCM_CL45VEN_EEE_CONTROL); ··· 388 388 if (val < 0) 389 389 return val; 390 390 391 + if (linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, 392 + phydev->supported)) 393 + mask |= MDIO_EEE_1000T; 394 + if (linkmode_test_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, 395 + phydev->supported)) 396 + mask |= MDIO_EEE_100TX; 397 + 391 398 if (enable) 392 - val |= (MDIO_EEE_100TX | MDIO_EEE_1000T); 399 + val |= mask; 393 400 else 394 - val &= ~(MDIO_EEE_100TX | MDIO_EEE_1000T); 401 + val &= ~mask; 395 402 396 403 phy_write_mmd(phydev, MDIO_MMD_AN, BCM_CL45VEN_EEE_ADV, (u32)val); 397 404
+48
drivers/net/tun.c
··· 69 69 #include <linux/bpf.h> 70 70 #include <linux/bpf_trace.h> 71 71 #include <linux/mutex.h> 72 + #include <linux/ieee802154.h> 73 + #include <linux/if_ltalk.h> 74 + #include <uapi/linux/if_fddi.h> 75 + #include <uapi/linux/if_hippi.h> 76 + #include <uapi/linux/if_fc.h> 77 + #include <net/ax25.h> 78 + #include <net/rose.h> 79 + #include <net/6lowpan.h> 72 80 73 81 #include <linux/uaccess.h> 74 82 #include <linux/proc_fs.h> ··· 2927 2919 return __tun_set_ebpf(tun, prog_p, prog); 2928 2920 } 2929 2921 2922 + /* Return correct value for tun->dev->addr_len based on tun->dev->type. */ 2923 + static unsigned char tun_get_addr_len(unsigned short type) 2924 + { 2925 + switch (type) { 2926 + case ARPHRD_IP6GRE: 2927 + case ARPHRD_TUNNEL6: 2928 + return sizeof(struct in6_addr); 2929 + case ARPHRD_IPGRE: 2930 + case ARPHRD_TUNNEL: 2931 + case ARPHRD_SIT: 2932 + return 4; 2933 + case ARPHRD_ETHER: 2934 + return ETH_ALEN; 2935 + case ARPHRD_IEEE802154: 2936 + case ARPHRD_IEEE802154_MONITOR: 2937 + return IEEE802154_EXTENDED_ADDR_LEN; 2938 + case ARPHRD_PHONET_PIPE: 2939 + case ARPHRD_PPP: 2940 + case ARPHRD_NONE: 2941 + return 0; 2942 + case ARPHRD_6LOWPAN: 2943 + return EUI64_ADDR_LEN; 2944 + case ARPHRD_FDDI: 2945 + return FDDI_K_ALEN; 2946 + case ARPHRD_HIPPI: 2947 + return HIPPI_ALEN; 2948 + case ARPHRD_IEEE802: 2949 + return FC_ALEN; 2950 + case ARPHRD_ROSE: 2951 + return ROSE_ADDR_LEN; 2952 + case ARPHRD_NETROM: 2953 + return AX25_ADDR_LEN; 2954 + case ARPHRD_LOCALTLK: 2955 + return LTALK_ALEN; 2956 + default: 2957 + return 0; 2958 + } 2959 + } 2960 + 2930 2961 static long __tun_chr_ioctl(struct file *file, unsigned int cmd, 2931 2962 unsigned long arg, int ifreq_len) 2932 2963 { ··· 3129 3082 break; 3130 3083 } 3131 3084 tun->dev->type = (int) arg; 3085 + tun->dev->addr_len = tun_get_addr_len(tun->dev->type); 3132 3086 netif_info(tun, drv, tun->dev, "linktype set to %d\n", 3133 3087 tun->dev->type); 3134 3088 call_netdevice_notifiers(NETDEV_POST_TYPE_CHANGE,
+12 -21
drivers/net/usb/hso.c
··· 611 611 return serial; 612 612 } 613 613 614 - static int get_free_serial_index(void) 614 + static int obtain_minor(struct hso_serial *serial) 615 615 { 616 616 int index; 617 617 unsigned long flags; ··· 619 619 spin_lock_irqsave(&serial_table_lock, flags); 620 620 for (index = 0; index < HSO_SERIAL_TTY_MINORS; index++) { 621 621 if (serial_table[index] == NULL) { 622 + serial_table[index] = serial->parent; 623 + serial->minor = index; 622 624 spin_unlock_irqrestore(&serial_table_lock, flags); 623 - return index; 625 + return 0; 624 626 } 625 627 } 626 628 spin_unlock_irqrestore(&serial_table_lock, flags); ··· 631 629 return -1; 632 630 } 633 631 634 - static void set_serial_by_index(unsigned index, struct hso_serial *serial) 632 + static void release_minor(struct hso_serial *serial) 635 633 { 636 634 unsigned long flags; 637 635 638 636 spin_lock_irqsave(&serial_table_lock, flags); 639 - if (serial) 640 - serial_table[index] = serial->parent; 641 - else 642 - serial_table[index] = NULL; 637 + serial_table[serial->minor] = NULL; 643 638 spin_unlock_irqrestore(&serial_table_lock, flags); 644 639 } 645 640 ··· 2229 2230 static void hso_serial_tty_unregister(struct hso_serial *serial) 2230 2231 { 2231 2232 tty_unregister_device(tty_drv, serial->minor); 2233 + release_minor(serial); 2232 2234 } 2233 2235 2234 2236 static void hso_serial_common_free(struct hso_serial *serial) ··· 2253 2253 static int hso_serial_common_create(struct hso_serial *serial, int num_urbs, 2254 2254 int rx_size, int tx_size) 2255 2255 { 2256 - int minor; 2257 2256 int i; 2258 2257 2259 2258 tty_port_init(&serial->port); 2260 2259 2261 - minor = get_free_serial_index(); 2262 - if (minor < 0) 2260 + if (obtain_minor(serial)) 2263 2261 goto exit2; 2264 2262 2265 2263 /* register our minor number */ 2266 2264 serial->parent->dev = tty_port_register_device_attr(&serial->port, 2267 - tty_drv, minor, &serial->parent->interface->dev, 2265 + tty_drv, serial->minor, &serial->parent->interface->dev, 
2268 2266 serial->parent, hso_serial_dev_groups); 2269 - if (IS_ERR(serial->parent->dev)) 2267 + if (IS_ERR(serial->parent->dev)) { 2268 + release_minor(serial); 2270 2269 goto exit2; 2270 + } 2271 2271 2272 - /* fill in specific data for later use */ 2273 - serial->minor = minor; 2274 2272 serial->magic = HSO_SERIAL_MAGIC; 2275 2273 spin_lock_init(&serial->serial_lock); 2276 2274 serial->num_rx_urbs = num_urbs; ··· 2665 2667 2666 2668 serial->write_data = hso_std_serial_write_data; 2667 2669 2668 - /* and record this serial */ 2669 - set_serial_by_index(serial->minor, serial); 2670 - 2671 2670 /* setup the proc dirs and files if needed */ 2672 2671 hso_log_port(hso_dev); 2673 2672 ··· 2720 2725 mutex_lock(&serial->shared_int->shared_int_lock); 2721 2726 serial->shared_int->ref_count++; 2722 2727 mutex_unlock(&serial->shared_int->shared_int_lock); 2723 - 2724 - /* and record this serial */ 2725 - set_serial_by_index(serial->minor, serial); 2726 2728 2727 2729 /* setup the proc dirs and files if needed */ 2728 2730 hso_log_port(hso_dev); ··· 3105 3113 cancel_work_sync(&serial_table[i]->async_get_intf); 3106 3114 hso_serial_tty_unregister(serial); 3107 3115 kref_put(&serial_table[i]->ref, hso_serial_ref_free); 3108 - set_serial_by_index(i, NULL); 3109 3116 } 3110 3117 } 3111 3118
+7 -3
drivers/net/virtio_net.c
··· 406 406 offset += hdr_padded_len; 407 407 p += hdr_padded_len; 408 408 409 - copy = len; 410 - if (copy > skb_tailroom(skb)) 411 - copy = skb_tailroom(skb); 409 + /* Copy all frame if it fits skb->head, otherwise 410 + * we let virtio_net_hdr_to_skb() and GRO pull headers as needed. 411 + */ 412 + if (len <= skb_tailroom(skb)) 413 + copy = len; 414 + else 415 + copy = ETH_HLEN + metasize; 412 416 skb_put_data(skb, p, copy); 413 417 414 418 if (metasize) {
+14 -4
drivers/net/vxlan.c
··· 2725 2725 goto tx_error; 2726 2726 } else if (err) { 2727 2727 if (info) { 2728 + struct ip_tunnel_info *unclone; 2728 2729 struct in_addr src, dst; 2730 + 2731 + unclone = skb_tunnel_info_unclone(skb); 2732 + if (unlikely(!unclone)) 2733 + goto tx_error; 2729 2734 2730 2735 src = remote_ip.sin.sin_addr; 2731 2736 dst = local_ip.sin.sin_addr; 2732 - info->key.u.ipv4.src = src.s_addr; 2733 - info->key.u.ipv4.dst = dst.s_addr; 2737 + unclone->key.u.ipv4.src = src.s_addr; 2738 + unclone->key.u.ipv4.dst = dst.s_addr; 2734 2739 } 2735 2740 vxlan_encap_bypass(skb, vxlan, vxlan, vni, false); 2736 2741 dst_release(ndst); ··· 2786 2781 goto tx_error; 2787 2782 } else if (err) { 2788 2783 if (info) { 2784 + struct ip_tunnel_info *unclone; 2789 2785 struct in6_addr src, dst; 2786 + 2787 + unclone = skb_tunnel_info_unclone(skb); 2788 + if (unlikely(!unclone)) 2789 + goto tx_error; 2790 2790 2791 2791 src = remote_ip.sin6.sin6_addr; 2792 2792 dst = local_ip.sin6.sin6_addr; 2793 - info->key.u.ipv6.src = src; 2794 - info->key.u.ipv6.dst = dst; 2793 + unclone->key.u.ipv6.src = src; 2794 + unclone->key.u.ipv6.dst = dst; 2795 2795 } 2796 2796 2797 2797 vxlan_encap_bypass(skb, vxlan, vxlan, vni, false);
+3 -2
drivers/net/wan/hdlc_fr.c
··· 415 415 416 416 if (pad > 0) { /* Pad the frame with zeros */ 417 417 if (__skb_pad(skb, pad, false)) 418 - goto drop; 418 + goto out; 419 419 skb_put(skb, pad); 420 420 } 421 421 } ··· 448 448 return NETDEV_TX_OK; 449 449 450 450 drop: 451 - dev->stats.tx_dropped++; 452 451 kfree_skb(skb); 452 + out: 453 + dev->stats.tx_dropped++; 453 454 return NETDEV_TX_OK; 454 455 } 455 456
+1 -1
drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
··· 2439 2439 vif = ifp->vif; 2440 2440 cfg = wdev_to_cfg(&vif->wdev); 2441 2441 cfg->p2p.bss_idx[P2PAPI_BSSCFG_DEVICE].vif = NULL; 2442 - if (locked) { 2442 + if (!locked) { 2443 2443 rtnl_lock(); 2444 2444 wiphy_lock(cfg->wiphy); 2445 2445 cfg80211_unregister_wdev(&vif->wdev);
+5 -5
drivers/net/wireless/intel/iwlwifi/fw/notif-wait.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 - * Copyright (C) 2005-2014 Intel Corporation 3 + * Copyright (C) 2005-2014, 2021 Intel Corporation 4 4 * Copyright (C) 2015-2017 Intel Deutschland GmbH 5 5 */ 6 6 #include <linux/sched.h> ··· 26 26 if (!list_empty(&notif_wait->notif_waits)) { 27 27 struct iwl_notification_wait *w; 28 28 29 - spin_lock(&notif_wait->notif_wait_lock); 29 + spin_lock_bh(&notif_wait->notif_wait_lock); 30 30 list_for_each_entry(w, &notif_wait->notif_waits, list) { 31 31 int i; 32 32 bool found = false; ··· 59 59 triggered = true; 60 60 } 61 61 } 62 - spin_unlock(&notif_wait->notif_wait_lock); 62 + spin_unlock_bh(&notif_wait->notif_wait_lock); 63 63 } 64 64 65 65 return triggered; ··· 70 70 { 71 71 struct iwl_notification_wait *wait_entry; 72 72 73 - spin_lock(&notif_wait->notif_wait_lock); 73 + spin_lock_bh(&notif_wait->notif_wait_lock); 74 74 list_for_each_entry(wait_entry, &notif_wait->notif_waits, list) 75 75 wait_entry->aborted = true; 76 - spin_unlock(&notif_wait->notif_wait_lock); 76 + spin_unlock_bh(&notif_wait->notif_wait_lock); 77 77 78 78 wake_up_all(&notif_wait->notif_waitq); 79 79 }
+1
drivers/net/wireless/intel/iwlwifi/iwl-config.h
··· 414 414 #define IWL_CFG_MAC_TYPE_QNJ 0x36 415 415 #define IWL_CFG_MAC_TYPE_SO 0x37 416 416 #define IWL_CFG_MAC_TYPE_SNJ 0x42 417 + #define IWL_CFG_MAC_TYPE_SOF 0x43 417 418 #define IWL_CFG_MAC_TYPE_MA 0x44 418 419 419 420 #define IWL_CFG_RF_TYPE_TH 0x105
+1 -1
drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
··· 232 232 REG_CAPA_V2_MCS_9_ALLOWED = BIT(6), 233 233 REG_CAPA_V2_WEATHER_DISABLED = BIT(7), 234 234 REG_CAPA_V2_40MHZ_ALLOWED = BIT(8), 235 - REG_CAPA_V2_11AX_DISABLED = BIT(13), 235 + REG_CAPA_V2_11AX_DISABLED = BIT(10), 236 236 }; 237 237 238 238 /*
+5 -2
drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
··· 1786 1786 return -EINVAL; 1787 1787 1788 1788 /* value zero triggers re-sending the default table to the device */ 1789 - if (!op_id) 1789 + if (!op_id) { 1790 + mutex_lock(&mvm->mutex); 1790 1791 ret = iwl_rfi_send_config_cmd(mvm, NULL); 1791 - else 1792 + mutex_unlock(&mvm->mutex); 1793 + } else { 1792 1794 ret = -EOPNOTSUPP; /* in the future a new table will be added */ 1795 + } 1793 1796 1794 1797 return ret ?: count; 1795 1798 }
+3 -3
drivers/net/wireless/intel/iwlwifi/mvm/rfi.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 - * Copyright (C) 2020 Intel Corporation 3 + * Copyright (C) 2020 - 2021 Intel Corporation 4 4 */ 5 5 6 6 #include "mvm.h" ··· 66 66 if (!fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_RFIM_SUPPORT)) 67 67 return -EOPNOTSUPP; 68 68 69 + lockdep_assert_held(&mvm->mutex); 70 + 69 71 /* in case no table is passed, use the default one */ 70 72 if (!rfi_table) { 71 73 memcpy(cmd.table, iwl_rfi_table, sizeof(cmd.table)); ··· 77 75 cmd.oem = 1; 78 76 } 79 77 80 - mutex_lock(&mvm->mutex); 81 78 ret = iwl_mvm_send_cmd(mvm, &hcmd); 82 - mutex_unlock(&mvm->mutex); 83 79 84 80 if (ret) 85 81 IWL_ERR(mvm, "Failed to send RFI config cmd %d\n", ret);
+12 -5
drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
··· 272 272 rx_status->chain_signal[2] = S8_MIN; 273 273 } 274 274 275 - static int iwl_mvm_rx_mgmt_crypto(struct ieee80211_sta *sta, 276 - struct ieee80211_hdr *hdr, 277 - struct iwl_rx_mpdu_desc *desc, 278 - u32 status) 275 + static int iwl_mvm_rx_mgmt_prot(struct ieee80211_sta *sta, 276 + struct ieee80211_hdr *hdr, 277 + struct iwl_rx_mpdu_desc *desc, 278 + u32 status) 279 279 { 280 280 struct iwl_mvm_sta *mvmsta; 281 281 struct iwl_mvm_vif *mvmvif; ··· 284 284 struct ieee80211_key_conf *key; 285 285 u32 len = le16_to_cpu(desc->mpdu_len); 286 286 const u8 *frame = (void *)hdr; 287 + 288 + if ((status & IWL_RX_MPDU_STATUS_SEC_MASK) == IWL_RX_MPDU_STATUS_SEC_NONE) 289 + return 0; 287 290 288 291 /* 289 292 * For non-beacon, we don't really care. But beacons may ··· 359 356 IWL_RX_MPDU_STATUS_SEC_UNKNOWN && !mvm->monitor_on) 360 357 return -1; 361 358 359 + if (unlikely(ieee80211_is_mgmt(hdr->frame_control) && 360 + !ieee80211_has_protected(hdr->frame_control))) 361 + return iwl_mvm_rx_mgmt_prot(sta, hdr, desc, status); 362 + 362 363 if (!ieee80211_has_protected(hdr->frame_control) || 363 364 (status & IWL_RX_MPDU_STATUS_SEC_MASK) == 364 365 IWL_RX_MPDU_STATUS_SEC_NONE) ··· 418 411 stats->flag |= RX_FLAG_DECRYPTED; 419 412 return 0; 420 413 case RX_MPDU_RES_STATUS_SEC_CMAC_GMAC_ENC: 421 - return iwl_mvm_rx_mgmt_crypto(sta, hdr, desc, status); 414 + break; 422 415 default: 423 416 /* 424 417 * Sometimes we can get frames that were not decrypted
+1 -30
drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 - * Copyright (C) 2018-2020 Intel Corporation 3 + * Copyright (C) 2018-2021 Intel Corporation 4 4 */ 5 5 #include "iwl-trans.h" 6 6 #include "iwl-fh.h" ··· 75 75 const struct fw_img *fw) 76 76 { 77 77 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 78 - u32 ltr_val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ | 79 - u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 80 - CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) | 81 - u32_encode_bits(250, 82 - CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) | 83 - CSR_LTR_LONG_VAL_AD_SNOOP_REQ | 84 - u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 85 - CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) | 86 - u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL); 87 78 struct iwl_context_info_gen3 *ctxt_info_gen3; 88 79 struct iwl_prph_scratch *prph_scratch; 89 80 struct iwl_prph_scratch_ctrl_cfg *prph_sc_ctrl; ··· 207 216 208 217 iwl_set_bit(trans, CSR_CTXT_INFO_BOOT_CTRL, 209 218 CSR_AUTO_FUNC_BOOT_ENA); 210 - 211 - /* 212 - * To workaround hardware latency issues during the boot process, 213 - * initialize the LTR to ~250 usec (see ltr_val above). 214 - * The firmware initializes this again later (to a smaller value). 215 - */ 216 - if ((trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210 || 217 - trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) && 218 - !trans->trans_cfg->integrated) { 219 - iwl_write32(trans, CSR_LTR_LONG_VAL_AD, ltr_val); 220 - } else if (trans->trans_cfg->integrated && 221 - trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) { 222 - iwl_write_prph(trans, HPM_MAC_LTR_CSR, HPM_MAC_LRT_ENABLE_ALL); 223 - iwl_write_prph(trans, HPM_UMAC_LTR, ltr_val); 224 - } 225 - 226 - if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) 227 - iwl_write_umac_prph(trans, UREG_CPU_INIT_RUN, 1); 228 - else 229 - iwl_set_bit(trans, CSR_GP_CNTRL, CSR_AUTO_FUNC_INIT); 230 219 231 220 return 0; 232 221
+1 -2
drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 3 * Copyright (C) 2017 Intel Deutschland GmbH 4 - * Copyright (C) 2018-2020 Intel Corporation 4 + * Copyright (C) 2018-2021 Intel Corporation 5 5 */ 6 6 #include "iwl-trans.h" 7 7 #include "iwl-fh.h" ··· 240 240 241 241 /* kick FW self load */ 242 242 iwl_write64(trans, CSR_CTXT_INFO_BA, trans_pcie->ctxt_info_dma_addr); 243 - iwl_write_prph(trans, UREG_CPU_INIT_RUN, 1); 244 243 245 244 /* Context info will be released upon alive or failure to get one */ 246 245
+26 -1
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 592 592 IWL_DEV_INFO(0x4DF0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0, NULL), 593 593 IWL_DEV_INFO(0x4DF0, 0x2074, iwl_ax201_cfg_qu_hr, NULL), 594 594 IWL_DEV_INFO(0x4DF0, 0x4070, iwl_ax201_cfg_qu_hr, NULL), 595 + IWL_DEV_INFO(0x4DF0, 0x6074, iwl_ax201_cfg_qu_hr, NULL), 595 596 596 597 /* So with HR */ 597 598 IWL_DEV_INFO(0x2725, 0x0090, iwlax211_2ax_cfg_so_gf_a0, NULL), ··· 1041 1040 IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY, 1042 1041 IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, 1043 1042 IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB, 1044 - iwl_cfg_so_a0_hr_a0, iwl_ax201_name) 1043 + iwl_cfg_so_a0_hr_a0, iwl_ax201_name), 1044 + 1045 + /* So-F with Hr */ 1046 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1047 + IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY, 1048 + IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, 1049 + IWL_CFG_NO_160, IWL_CFG_ANY, IWL_CFG_NO_CDB, 1050 + iwl_cfg_so_a0_hr_a0, iwl_ax203_name), 1051 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1052 + IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY, 1053 + IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY, 1054 + IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB, 1055 + iwl_cfg_so_a0_hr_a0, iwl_ax101_name), 1056 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1057 + IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY, 1058 + IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, 1059 + IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB, 1060 + iwl_cfg_so_a0_hr_a0, iwl_ax201_name), 1061 + 1062 + /* So-F with Gf */ 1063 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1064 + IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY, 1065 + IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, 1066 + IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB, 1067 + iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_name), 1045 1068 1046 1069 #endif /* CONFIG_IWLMVM */ 1047 1070 };
+35
drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
··· 266 266 mutex_unlock(&trans_pcie->mutex); 267 267 } 268 268 269 + static void iwl_pcie_set_ltr(struct iwl_trans *trans) 270 + { 271 + u32 ltr_val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ | 272 + u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 273 + CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) | 274 + u32_encode_bits(250, 275 + CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) | 276 + CSR_LTR_LONG_VAL_AD_SNOOP_REQ | 277 + u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 278 + CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) | 279 + u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL); 280 + 281 + /* 282 + * To workaround hardware latency issues during the boot process, 283 + * initialize the LTR to ~250 usec (see ltr_val above). 284 + * The firmware initializes this again later (to a smaller value). 285 + */ 286 + if ((trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210 || 287 + trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) && 288 + !trans->trans_cfg->integrated) { 289 + iwl_write32(trans, CSR_LTR_LONG_VAL_AD, ltr_val); 290 + } else if (trans->trans_cfg->integrated && 291 + trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) { 292 + iwl_write_prph(trans, HPM_MAC_LTR_CSR, HPM_MAC_LRT_ENABLE_ALL); 293 + iwl_write_prph(trans, HPM_UMAC_LTR, ltr_val); 294 + } 295 + } 296 + 269 297 int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans, 270 298 const struct fw_img *fw, bool run_in_rfkill) 271 299 { ··· 359 331 ret = iwl_pcie_ctxt_info_init(trans, fw); 360 332 if (ret) 361 333 goto out; 334 + 335 + iwl_pcie_set_ltr(trans); 336 + 337 + if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) 338 + iwl_write_umac_prph(trans, UREG_CPU_INIT_RUN, 1); 339 + else 340 + iwl_write_prph(trans, UREG_CPU_INIT_RUN, 1); 362 341 363 342 /* re-check RF-Kill state since we may have missed the interrupt */ 364 343 hw_rfkill = iwl_pcie_check_hw_rf_kill(trans);
+4 -3
drivers/net/wireless/intel/iwlwifi/pcie/tx.c
··· 928 928 u32 cmd_pos; 929 929 const u8 *cmddata[IWL_MAX_CMD_TBS_PER_TFD]; 930 930 u16 cmdlen[IWL_MAX_CMD_TBS_PER_TFD]; 931 + unsigned long flags; 931 932 932 933 if (WARN(!trans->wide_cmd_header && 933 934 group_id > IWL_ALWAYS_LONG_GROUP, ··· 1012 1011 goto free_dup_buf; 1013 1012 } 1014 1013 1015 - spin_lock_bh(&txq->lock); 1014 + spin_lock_irqsave(&txq->lock, flags); 1016 1015 1017 1016 if (iwl_txq_space(trans, txq) < ((cmd->flags & CMD_ASYNC) ? 2 : 1)) { 1018 - spin_unlock_bh(&txq->lock); 1017 + spin_unlock_irqrestore(&txq->lock, flags); 1019 1018 1020 1019 IWL_ERR(trans, "No space in command queue\n"); 1021 1020 iwl_op_mode_cmd_queue_full(trans->op_mode); ··· 1175 1174 unlock_reg: 1176 1175 spin_unlock(&trans_pcie->reg_lock); 1177 1176 out: 1178 - spin_unlock_bh(&txq->lock); 1177 + spin_unlock_irqrestore(&txq->lock, flags); 1179 1178 free_dup_buf: 1180 1179 if (idx < 0) 1181 1180 kfree(dup_buf);
+2 -2
drivers/net/wireless/mediatek/mt76/mt7921/regs.h
··· 135 135 136 136 #define MT_WTBLON_TOP_BASE 0x34000 137 137 #define MT_WTBLON_TOP(ofs) (MT_WTBLON_TOP_BASE + (ofs)) 138 - #define MT_WTBLON_TOP_WDUCR MT_WTBLON_TOP(0x0) 138 + #define MT_WTBLON_TOP_WDUCR MT_WTBLON_TOP(0x200) 139 139 #define MT_WTBLON_TOP_WDUCR_GROUP GENMASK(2, 0) 140 140 141 - #define MT_WTBL_UPDATE MT_WTBLON_TOP(0x030) 141 + #define MT_WTBL_UPDATE MT_WTBLON_TOP(0x230) 142 142 #define MT_WTBL_UPDATE_WLAN_IDX GENMASK(9, 0) 143 143 #define MT_WTBL_UPDATE_ADM_COUNT_CLEAR BIT(12) 144 144 #define MT_WTBL_UPDATE_BUSY BIT(31)
+3 -2
drivers/net/wireless/virt_wifi.c
··· 12 12 #include <net/cfg80211.h> 13 13 #include <net/rtnetlink.h> 14 14 #include <linux/etherdevice.h> 15 + #include <linux/math64.h> 15 16 #include <linux/module.h> 16 17 17 18 static struct wiphy *common_wiphy; ··· 169 168 scan_result.work); 170 169 struct wiphy *wiphy = priv_to_wiphy(priv); 171 170 struct cfg80211_scan_info scan_info = { .aborted = false }; 171 + u64 tsf = div_u64(ktime_get_boottime_ns(), 1000); 172 172 173 173 informed_bss = cfg80211_inform_bss(wiphy, &channel_5ghz, 174 174 CFG80211_BSS_FTYPE_PRESP, 175 - fake_router_bssid, 176 - ktime_get_boottime_ns(), 175 + fake_router_bssid, tsf, 177 176 WLAN_CAPABILITY_ESS, 0, 178 177 (void *)&ssid, sizeof(ssid), 179 178 DBM_TO_MBM(-50), GFP_KERNEL);
-2
include/linux/avf/virtchnl.h
··· 476 476 u16 vsi_id; 477 477 u16 key_len; 478 478 u8 key[1]; /* RSS hash key, packed bytes */ 479 - u8 pad[1]; 480 479 }; 481 480 482 481 VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key); ··· 484 485 u16 vsi_id; 485 486 u16 lut_entries; 486 487 u8 lut[1]; /* RSS lookup table */ 487 - u8 pad[1]; 488 488 }; 489 489 490 490 VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
+2
include/linux/bpf.h
··· 40 40 struct bpf_local_storage_map; 41 41 struct kobject; 42 42 struct mem_cgroup; 43 + struct module; 43 44 44 45 extern struct idr btf_idr; 45 46 extern spinlock_t btf_idr_lock; ··· 624 623 /* Executable image of trampoline */ 625 624 struct bpf_tramp_image *cur_image; 626 625 u64 selector; 626 + struct module *mod; 627 627 }; 628 628 629 629 struct bpf_attach_target_info {
+16 -6
include/linux/ethtool.h
··· 87 87 int ethtool_op_get_ts_info(struct net_device *dev, struct ethtool_ts_info *eti); 88 88 89 89 90 - /** 91 - * struct ethtool_link_ext_state_info - link extended state and substate. 92 - */ 90 + /* Link extended state and substate. */ 93 91 struct ethtool_link_ext_state_info { 94 92 enum ethtool_link_ext_state link_ext_state; 95 93 union { ··· 127 129 __ETHTOOL_DECLARE_LINK_MODE_MASK(lp_advertising); 128 130 } link_modes; 129 131 u32 lanes; 130 - enum ethtool_link_mode_bit_indices link_mode; 131 132 }; 132 133 133 134 /** ··· 289 292 * do not attach ext_substate attribute to netlink message). If link_ext_state 290 293 * and link_ext_substate are unknown, return -ENODATA. If not implemented, 291 294 * link_ext_state and link_ext_substate will not be sent to userspace. 295 + * @get_eeprom_len: Read range of EEPROM addresses for validation of 296 + * @get_eeprom and @set_eeprom requests. 297 + * Returns 0 if device does not support EEPROM access. 292 298 * @get_eeprom: Read data from the device EEPROM. 293 299 * Should fill in the magic field. Don't need to check len for zero 294 300 * or wraparound. Fill in the data argument with the eeprom values ··· 384 384 * @get_module_eeprom: Get the eeprom information from the plug-in module 385 385 * @get_eee: Get Energy-Efficient (EEE) supported and status. 386 386 * @set_eee: Set EEE status (enable/disable) as well as LPI timers. 387 + * @get_tunable: Read the value of a driver / device tunable. 388 + * @set_tunable: Set the value of a driver / device tunable. 387 389 * @get_per_queue_coalesce: Get interrupt coalescing parameters per queue. 388 390 * It must check that the given queue number is valid. If neither a RX nor 389 391 * a TX queue has this number, return -EINVAL. If only a RX queue or a TX ··· 549 547 * @get_sset_count: Get number of strings that @get_strings will write. 
550 548 * @get_strings: Return a set of strings that describe the requested objects 551 549 * @get_stats: Return extended statistics about the PHY device. 552 - * @start_cable_test - Start a cable test 553 - * @start_cable_test_tdr - Start a Time Domain Reflectometry cable test 550 + * @start_cable_test: Start a cable test 551 + * @start_cable_test_tdr: Start a Time Domain Reflectometry cable test 554 552 * 555 553 * All operations are optional (i.e. the function pointer may be set to %NULL) 556 554 * and callers must take this into account. Callers must hold the RTNL lock. ··· 573 571 */ 574 572 void ethtool_set_ethtool_phy_ops(const struct ethtool_phy_ops *ops); 575 573 574 + /* 575 + * ethtool_params_from_link_mode - Derive link parameters from a given link mode 576 + * @link_ksettings: Link parameters to be derived from the link mode 577 + * @link_mode: Link mode 578 + */ 579 + void 580 + ethtool_params_from_link_mode(struct ethtool_link_ksettings *link_ksettings, 581 + enum ethtool_link_mode_bit_indices link_mode); 576 582 #endif /* _LINUX_ETHTOOL_H */
+6 -4
include/linux/mlx5/mlx5_ifc.h
··· 437 437 u8 reserved_at_60[0x18]; 438 438 u8 log_max_ft_num[0x8]; 439 439 440 - u8 reserved_at_80[0x18]; 440 + u8 reserved_at_80[0x10]; 441 + u8 log_max_flow_counter[0x8]; 441 442 u8 log_max_destination[0x8]; 442 443 443 - u8 log_max_flow_counter[0x8]; 444 - u8 reserved_at_a8[0x10]; 444 + u8 reserved_at_a0[0x18]; 445 445 u8 log_max_flow[0x8]; 446 446 447 447 u8 reserved_at_c0[0x40]; ··· 8835 8835 8836 8836 u8 fec_override_admin_100g_2x[0x10]; 8837 8837 u8 fec_override_admin_50g_1x[0x10]; 8838 + 8839 + u8 reserved_at_140[0x140]; 8838 8840 }; 8839 8841 8840 8842 struct mlx5_ifc_ppcnt_reg_bits { ··· 10200 10198 10201 10199 struct mlx5_ifc_bufferx_reg_bits buffer[10]; 10202 10200 10203 - u8 reserved_at_2e0[0x40]; 10201 + u8 reserved_at_2e0[0x80]; 10204 10202 }; 10205 10203 10206 10204 struct mlx5_ifc_qtct_reg_bits {
+6 -1
include/linux/skmsg.h
··· 349 349 static inline void sk_psock_restore_proto(struct sock *sk, 350 350 struct sk_psock *psock) 351 351 { 352 - sk->sk_prot->unhash = psock->saved_unhash; 353 352 if (inet_csk_has_ulp(sk)) { 353 + /* TLS does not have an unhash proto in SW cases, but we need 354 + * to ensure we stop using the sock_map unhash routine because 355 + * the associated psock is being removed. So use the original 356 + * unhash handler. 357 + */ 358 + WRITE_ONCE(sk->sk_prot->unhash, psock->saved_unhash); 354 359 tcp_update_ulp(sk, psock->sk_proto, psock->saved_write_space); 355 360 } else { 356 361 sk->sk_write_space = psock->saved_write_space;
+11 -5
include/linux/virtio_net.h
··· 62 62 return -EINVAL; 63 63 } 64 64 65 + skb_reset_mac_header(skb); 66 + 65 67 if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) { 66 - u16 start = __virtio16_to_cpu(little_endian, hdr->csum_start); 67 - u16 off = __virtio16_to_cpu(little_endian, hdr->csum_offset); 68 + u32 start = __virtio16_to_cpu(little_endian, hdr->csum_start); 69 + u32 off = __virtio16_to_cpu(little_endian, hdr->csum_offset); 70 + u32 needed = start + max_t(u32, thlen, off + sizeof(__sum16)); 71 + 72 + if (!pskb_may_pull(skb, needed)) 73 + return -EINVAL; 68 74 69 75 if (!skb_partial_csum_set(skb, start, off)) 70 76 return -EINVAL; 71 77 72 78 p_off = skb_transport_offset(skb) + thlen; 73 - if (p_off > skb_headlen(skb)) 79 + if (!pskb_may_pull(skb, p_off)) 74 80 return -EINVAL; 75 81 } else { 76 82 /* gso packets without NEEDS_CSUM do not set transport_offset. ··· 106 100 } 107 101 108 102 p_off = keys.control.thoff + thlen; 109 - if (p_off > skb_headlen(skb) || 103 + if (!pskb_may_pull(skb, p_off) || 110 104 keys.basic.ip_proto != ip_proto) 111 105 return -EINVAL; 112 106 113 107 skb_set_transport_header(skb, keys.control.thoff); 114 108 } else if (gso_type) { 115 109 p_off = thlen; 116 - if (p_off > skb_headlen(skb)) 110 + if (!pskb_may_pull(skb, p_off)) 117 111 return -EINVAL; 118 112 } 119 113 }
+4 -8
include/net/act_api.h
··· 170 170 void tcf_idr_cleanup(struct tc_action_net *tn, u32 index); 171 171 int tcf_idr_check_alloc(struct tc_action_net *tn, u32 *index, 172 172 struct tc_action **a, int bind); 173 - int __tcf_idr_release(struct tc_action *a, bool bind, bool strict); 174 - 175 - static inline int tcf_idr_release(struct tc_action *a, bool bind) 176 - { 177 - return __tcf_idr_release(a, bind, false); 178 - } 173 + int tcf_idr_release(struct tc_action *a, bool bind); 179 174 180 175 int tcf_register_action(struct tc_action_ops *a, struct pernet_operations *ops); 181 176 int tcf_unregister_action(struct tc_action_ops *a, ··· 180 185 int nr_actions, struct tcf_result *res); 181 186 int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla, 182 187 struct nlattr *est, char *name, int ovr, int bind, 183 - struct tc_action *actions[], size_t *attr_size, 188 + struct tc_action *actions[], int init_res[], size_t *attr_size, 184 189 bool rtnl_held, struct netlink_ext_ack *extack); 185 190 struct tc_action_ops *tc_action_load_ops(char *name, struct nlattr *nla, 186 191 bool rtnl_held, ··· 188 193 struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp, 189 194 struct nlattr *nla, struct nlattr *est, 190 195 char *name, int ovr, int bind, 191 - struct tc_action_ops *ops, bool rtnl_held, 196 + struct tc_action_ops *a_o, int *init_res, 197 + bool rtnl_held, 192 198 struct netlink_ext_ack *extack); 193 199 int tcf_action_dump(struct sk_buff *skb, struct tc_action *actions[], int bind, 194 200 int ref, bool terse);
+3 -1
include/net/netns/xfrm.h
··· 72 72 #if IS_ENABLED(CONFIG_IPV6) 73 73 struct dst_ops xfrm6_dst_ops; 74 74 #endif 75 - spinlock_t xfrm_state_lock; 75 + spinlock_t xfrm_state_lock; 76 + seqcount_spinlock_t xfrm_state_hash_generation; 77 + 76 78 spinlock_t xfrm_policy_lock; 77 79 struct mutex xfrm_cfg_mutex; 78 80 };
+2 -2
include/net/red.h
··· 171 171 static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog, 172 172 u8 Scell_log, u8 *stab) 173 173 { 174 - if (fls(qth_min) + Wlog > 32) 174 + if (fls(qth_min) + Wlog >= 32) 175 175 return false; 176 - if (fls(qth_max) + Wlog > 32) 176 + if (fls(qth_max) + Wlog >= 32) 177 177 return false; 178 178 if (Scell_log >= 32) 179 179 return false;
+2 -2
include/net/rtnetlink.h
··· 147 147 int (*validate_link_af)(const struct net_device *dev, 148 148 const struct nlattr *attr); 149 149 int (*set_link_af)(struct net_device *dev, 150 - const struct nlattr *attr); 151 - 150 + const struct nlattr *attr, 151 + struct netlink_ext_ack *extack); 152 152 int (*fill_stats_af)(struct sk_buff *skb, 153 153 const struct net_device *dev); 154 154 size_t (*get_stats_af_size)(const struct net_device *dev);
+14 -1
include/net/sock.h
··· 934 934 WRITE_ONCE(sk->sk_ack_backlog, sk->sk_ack_backlog + 1); 935 935 } 936 936 937 + /* Note: If you think the test should be: 938 + * return READ_ONCE(sk->sk_ack_backlog) >= READ_ONCE(sk->sk_max_ack_backlog); 939 + * Then please take a look at commit 64a146513f8f ("[NET]: Revert incorrect accept queue backlog changes.") 940 + */ 937 941 static inline bool sk_acceptq_is_full(const struct sock *sk) 938 942 { 939 - return READ_ONCE(sk->sk_ack_backlog) >= READ_ONCE(sk->sk_max_ack_backlog); 943 + return READ_ONCE(sk->sk_ack_backlog) > READ_ONCE(sk->sk_max_ack_backlog); 940 944 } 941 945 942 946 /* ··· 2223 2219 skb->destructor = sock_rfree; 2224 2220 atomic_add(skb->truesize, &sk->sk_rmem_alloc); 2225 2221 sk_mem_charge(sk, skb->truesize); 2222 + } 2223 + 2224 + static inline void skb_set_owner_sk_safe(struct sk_buff *skb, struct sock *sk) 2225 + { 2226 + if (sk && refcount_inc_not_zero(&sk->sk_refcnt)) { 2227 + skb_orphan(skb); 2228 + skb->destructor = sock_efree; 2229 + skb->sk = sk; 2230 + } 2226 2231 } 2227 2232 2228 2233 void sk_reset_timer(struct sock *sk, struct timer_list *timer,
include/net/xfrm.h (+2 -2)
···
 		return __xfrm_policy_check(sk, ndir, skb, family);
 
 	return	(!net->xfrm.policy_count[dir] && !secpath_exists(skb)) ||
-		(skb_dst(skb)->flags & DST_NOPOLICY) ||
+		(skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) ||
 		__xfrm_policy_check(sk, ndir, skb, family);
 }
 
···
 int xfrm_trans_queue(struct sk_buff *skb,
 		     int (*finish)(struct net *, struct sock *,
 				   struct sk_buff *));
-int xfrm_output_resume(struct sk_buff *skb, int err);
+int xfrm_output_resume(struct sock *sk, struct sk_buff *skb, int err);
 int xfrm_output(struct sock *sk, struct sk_buff *skb);
 
 #if IS_ENABLED(CONFIG_NET_PKTGEN)
include/uapi/linux/can.h (+1 -1)
···
 	 */
 	__u8 len;
 	__u8 can_dlc; /* deprecated */
-	};
+	} __attribute__((packed)); /* disable padding added in some ABIs */
 	__u8 __pad; /* padding */
 	__u8 __res0; /* reserved / padding */
 	__u8 len8_dlc; /* optional DLC for 8 byte payload length (9 .. 15) */
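The packed attribute pins the anonymous union to exactly one byte on every ABI, so the fields after it keep the offsets the rest of the UAPI expects. A hedged layout model (`struct frame_model` is an illustrative stand-in, not the real `struct can_frame`):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative layout mirroring the fix: the packed union of two u8
 * members must occupy exactly one byte, so __pad lands at offset 5 and
 * the 8-byte-aligned payload at offset 8 regardless of ABI padding
 * rules for unions. */
struct frame_model {
	uint32_t can_id;
	union {
		uint8_t len;
		uint8_t can_dlc; /* deprecated */
	} __attribute__((packed));
	uint8_t __pad;
	uint8_t __res0;
	uint8_t len8_dlc;
	uint8_t data[8] __attribute__((aligned(8)));
};
```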
include/uapi/linux/ethtool.h (+33 -21)
···
  * have the same layout for 32-bit and 64-bit userland.
  */
 
+/* Note on reserved space.
+ * Reserved fields must not be accessed directly by user space because
+ * they may be replaced by a different field in the future. They must
+ * be initialized to zero before making the request, e.g. via memset
+ * of the entire structure or implicitly by not being set in a structure
+ * initializer.
+ */
+
 /**
  * struct ethtool_cmd - DEPRECATED, link control and status
  * This structure is DEPRECATED, please use struct ethtool_link_settings.
···
  *	and other link features that the link partner advertised
  *	through autonegotiation; 0 if unknown or not applicable.
  *	Read-only.
+ * @reserved: Reserved for future use; see the note on reserved space.
  *
  * The link speed in Mbps is split between @speed and @speed_hi.  Use
  * the ethtool_cmd_speed() and ethtool_cmd_speed_set() functions to
···
  * @bus_info: Device bus address.  This should match the dev_name()
  *	string for the underlying bus device, if there is one.  May be
  *	an empty string.
+ * @reserved2: Reserved for future use; see the note on reserved space.
  * @n_priv_flags: Number of flags valid for %ETHTOOL_GPFLAGS and
  *	%ETHTOOL_SPFLAGS commands; also the number of strings in the
  *	%ETH_SS_PRIV_FLAGS set
···
  * @tx_lpi_timer: Time in microseconds the interface delays prior to asserting
  *	its tx lpi (after reaching 'idle' state). Effective only when eee
  *	was negotiated and tx_lpi_enabled was set.
+ * @reserved: Reserved for future use; see the note on reserved space.
  */
 struct ethtool_eee {
 	__u32	cmd;
···
  * @cmd: %ETHTOOL_GMODULEINFO
  * @type: Standard the module information conforms to %ETH_MODULE_SFF_xxxx
  * @eeprom_len: Length of the eeprom
+ * @reserved: Reserved for future use; see the note on reserved space.
  *
  * This structure is used to return the information to
  * properly size memory for a subsequent call to %ETHTOOL_GMODULEEEPROM.
···
 	__u32	tx_pause;
 };
 
-/**
- * enum ethtool_link_ext_state - link extended state
- */
+/* Link extended state */
 enum ethtool_link_ext_state {
 	ETHTOOL_LINK_EXT_STATE_AUTONEG,
 	ETHTOOL_LINK_EXT_STATE_LINK_TRAINING_FAILURE,
···
 	ETHTOOL_LINK_EXT_STATE_OVERHEAT,
 };
 
-/**
- * enum ethtool_link_ext_substate_autoneg - more information in addition to
- * ETHTOOL_LINK_EXT_STATE_AUTONEG.
- */
+/* More information in addition to ETHTOOL_LINK_EXT_STATE_AUTONEG. */
 enum ethtool_link_ext_substate_autoneg {
 	ETHTOOL_LINK_EXT_SUBSTATE_AN_NO_PARTNER_DETECTED = 1,
 	ETHTOOL_LINK_EXT_SUBSTATE_AN_ACK_NOT_RECEIVED,
···
 	ETHTOOL_LINK_EXT_SUBSTATE_AN_NO_HCD,
 };
 
-/**
- * enum ethtool_link_ext_substate_link_training - more information in addition to
- * ETHTOOL_LINK_EXT_STATE_LINK_TRAINING_FAILURE.
+/* More information in addition to ETHTOOL_LINK_EXT_STATE_LINK_TRAINING_FAILURE.
  */
 enum ethtool_link_ext_substate_link_training {
 	ETHTOOL_LINK_EXT_SUBSTATE_LT_KR_FRAME_LOCK_NOT_ACQUIRED = 1,
···
 	ETHTOOL_LINK_EXT_SUBSTATE_LT_REMOTE_FAULT,
 };
 
-/**
- * enum ethtool_link_ext_substate_logical_mismatch - more information in addition
- * to ETHTOOL_LINK_EXT_STATE_LINK_LOGICAL_MISMATCH.
+/* More information in addition to ETHTOOL_LINK_EXT_STATE_LINK_LOGICAL_MISMATCH.
  */
 enum ethtool_link_ext_substate_link_logical_mismatch {
 	ETHTOOL_LINK_EXT_SUBSTATE_LLM_PCS_DID_NOT_ACQUIRE_BLOCK_LOCK = 1,
···
 	ETHTOOL_LINK_EXT_SUBSTATE_LLM_RS_FEC_IS_NOT_LOCKED,
 };
 
-/**
- * enum ethtool_link_ext_substate_bad_signal_integrity - more information in
- * addition to ETHTOOL_LINK_EXT_STATE_BAD_SIGNAL_INTEGRITY.
+/* More information in addition to ETHTOOL_LINK_EXT_STATE_BAD_SIGNAL_INTEGRITY.
  */
 enum ethtool_link_ext_substate_bad_signal_integrity {
 	ETHTOOL_LINK_EXT_SUBSTATE_BSI_LARGE_NUMBER_OF_PHYSICAL_ERRORS = 1,
 	ETHTOOL_LINK_EXT_SUBSTATE_BSI_UNSUPPORTED_RATE,
 };
 
-/**
- * enum ethtool_link_ext_substate_cable_issue - more information in
- * addition to ETHTOOL_LINK_EXT_STATE_CABLE_ISSUE.
- */
+/* More information in addition to ETHTOOL_LINK_EXT_STATE_CABLE_ISSUE. */
 enum ethtool_link_ext_substate_cable_issue {
 	ETHTOOL_LINK_EXT_SUBSTATE_CI_UNSUPPORTED_CABLE = 1,
 	ETHTOOL_LINK_EXT_SUBSTATE_CI_CABLE_TEST_FAILURE,
···
  *	now deprecated
  * @ETH_SS_FEATURES: Device feature names
  * @ETH_SS_RSS_HASH_FUNCS: RSS hush function names
+ * @ETH_SS_TUNABLES: tunable names
  * @ETH_SS_PHY_STATS: Statistic names, for use with %ETHTOOL_GPHYSTATS
  * @ETH_SS_PHY_TUNABLES: PHY tunable names
  * @ETH_SS_LINK_MODES: link mode names
···
  * @ETH_SS_TS_TX_TYPES: timestamping Tx types
  * @ETH_SS_TS_RX_FILTERS: timestamping Rx filters
  * @ETH_SS_UDP_TUNNEL_TYPES: UDP tunnel types
+ *
+ * @ETH_SS_COUNT: number of defined string sets
  */
 enum ethtool_stringset {
 	ETH_SS_TEST		= 0,
···
 /**
  * struct ethtool_sset_info - string set information
  * @cmd: Command number = %ETHTOOL_GSSET_INFO
+ * @reserved: Reserved for future use; see the note on reserved space.
  * @sset_mask: On entry, a bitmask of string sets to query, with bits
  *	numbered according to &enum ethtool_stringset.  On return, a
  *	bitmask of those string sets queried that are supported.
···
  * @flags: A bitmask of flags from &enum ethtool_test_flags.  Some
  *	flags may be set by the user on entry; others may be set by
  *	the driver on return.
+ * @reserved: Reserved for future use; see the note on reserved space.
  * @len: On return, the number of test results
  * @data: Array of test results
  *
···
  * @vlan_etype: VLAN EtherType
  * @vlan_tci: VLAN tag control information
  * @data: user defined data
+ * @padding: Reserved for future use; see the note on reserved space.
  *
  * Note, @vlan_etype, @vlan_tci, and @data are only valid if %FLOW_EXT
  * is set in &struct ethtool_rx_flow_spec @flow_type.
···
  *	hardware hash key.
  * @hfunc: Defines the current RSS hash function used by HW (or to be set to).
  *	Valid values are one of the %ETH_RSS_HASH_*.
- * @rsvd:	Reserved for future extensions.
+ * @rsvd8: Reserved for future use; see the note on reserved space.
+ * @rsvd32: Reserved for future use; see the note on reserved space.
  * @rss_config: RX ring/queue index for each hash value i.e., indirection table
  *	of @indir_size __u32 elements, followed by hash key of @key_size
  *	bytes.
···
  * @so_timestamping: bit mask of the sum of the supported SO_TIMESTAMPING flags
  * @phc_index: device index of the associated PHC, or -1 if there is none
  * @tx_types: bit mask of the supported hwtstamp_tx_types enumeration values
+ * @tx_reserved: Reserved for future use; see the note on reserved space.
  * @rx_filters: bit mask of the supported hwtstamp_rx_filters enumeration values
+ * @rx_reserved: Reserved for future use; see the note on reserved space.
  *
  * The bits in the 'tx_types' and 'rx_filters' fields correspond to
  * the 'hwtstamp_tx_types' and 'hwtstamp_rx_filters' enumeration values,
···
  *	autonegotiation; 0 if unknown or not applicable.  Read-only.
  * @transceiver: Used to distinguish different possible PHY types,
  *	reported consistently by PHYLIB.  Read-only.
+ * @master_slave_cfg: Master/slave port mode.
+ * @master_slave_state: Master/slave port state.
+ * @reserved: Reserved for future use; see the note on reserved space.
+ * @reserved1: Reserved for future use; see the note on reserved space.
+ * @link_mode_masks: Variable length bitmaps.
  *
  * If autonegotiation is disabled, the speed and @duplex represent the
  * fixed link mode and are writable if the driver supports multiple
include/uapi/linux/rfkill.h (+69 -13)
···
  * @op: operation code
  * @hard: hard state (0/1)
  * @soft: soft state (0/1)
- * @hard_block_reasons: valid if hard is set. One or several reasons from
- *	&enum rfkill_hard_block_reasons.
  *
  * Structure used for userspace communication on /dev/rfkill,
  * used for events from the kernel and control to the kernel.
···
 	__u8  op;
 	__u8  soft;
 	__u8  hard;
+} __attribute__((packed));
+
+/**
+ * struct rfkill_event_ext - events for userspace on /dev/rfkill
+ * @idx: index of dev rfkill
+ * @type: type of the rfkill struct
+ * @op: operation code
+ * @hard: hard state (0/1)
+ * @soft: soft state (0/1)
+ * @hard_block_reasons: valid if hard is set. One or several reasons from
+ *	&enum rfkill_hard_block_reasons.
+ *
+ * Structure used for userspace communication on /dev/rfkill,
+ * used for events from the kernel and control to the kernel.
+ *
+ * See the extensibility docs below.
+ */
+struct rfkill_event_ext {
+	__u32 idx;
+	__u8  type;
+	__u8  op;
+	__u8  soft;
+	__u8  hard;
+
+	/*
+	 * older kernels will accept/send only up to this point,
+	 * and if extended further up to any chunk marked below
+	 */
+
 	__u8  hard_block_reasons;
 } __attribute__((packed));
 
-/*
- * We are planning to be backward and forward compatible with changes
- * to the event struct, by adding new, optional, members at the end.
- * When reading an event (whether the kernel from userspace or vice
- * versa) we need to accept anything that's at least as large as the
- * version 1 event size, but might be able to accept other sizes in
- * the future.
+/**
+ * DOC: Extensibility
  *
- * One exception is the kernel -- we already have two event sizes in
- * that we've made the 'hard' member optional since our only option
- * is to ignore it anyway.
+ * Originally, we had planned to allow backward and forward compatible
+ * changes by just adding fields at the end of the structure that are
+ * then not reported on older kernels on read(), and not written to by
+ * older kernels on write(), with the kernel reporting the size it did
+ * accept as the result.
+ *
+ * This would have allowed userspace to detect on read() and write()
+ * which kernel structure version it was dealing with, and if was just
+ * recompiled it would have gotten the new fields, but obviously not
+ * accessed them, but things should've continued to work.
+ *
+ * Unfortunately, while actually exercising this mechanism to add the
+ * hard block reasons field, we found that userspace (notably systemd)
+ * did all kinds of fun things not in line with this scheme:
+ *
+ * 1. treat the (expected) short writes as an error;
+ * 2. ask to read sizeof(struct rfkill_event) but then compare the
+ *    actual return value to RFKILL_EVENT_SIZE_V1 and treat any
+ *    mismatch as an error.
+ *
+ * As a consequence, just recompiling with a new struct version caused
+ * things to no longer work correctly on old and new kernels.
+ *
+ * Hence, we've rolled back &struct rfkill_event to the original version
+ * and added &struct rfkill_event_ext. This effectively reverts to the
+ * old behaviour for all userspace, unless it explicitly opts in to the
+ * rules outlined here by using the new &struct rfkill_event_ext.
+ *
+ * Userspace using &struct rfkill_event_ext must adhere to the following
+ * rules
+ *
+ * 1. accept short writes, optionally using them to detect that it's
+ *    running on an older kernel;
+ * 2. accept short reads, knowing that this means it's running on an
+ *    older kernel;
+ * 3. treat reads that are as long as requested as acceptable, not
+ *    checking against RFKILL_EVENT_SIZE_V1 or such.
  */
-#define RFKILL_EVENT_SIZE_V1	8
+#define RFKILL_EVENT_SIZE_V1	sizeof(struct rfkill_event)
 
 /* ioctl for turning off rfkill-input (if present) */
 #define RFKILL_IOC_MAGIC	'R'
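The size rules above can be modeled in user space. This is a hedged sketch: the two structs are illustrative stand-ins mirroring the UAPI field order, and `rfkill_read_len_ok` is a hypothetical helper implementing rules 2 and 3, not part of the kernel interface:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative models of the two event layouts (field order follows the
 * UAPI header; these are not the real structs). */
struct rfkill_event_v1 {	/* 8 bytes: the original rfkill_event */
	uint32_t idx;
	uint8_t type, op, soft, hard;
} __attribute__((packed));

struct rfkill_event_ext_model {	/* 9 bytes: v1 plus hard_block_reasons */
	uint32_t idx;
	uint8_t type, op, soft, hard;
	uint8_t hard_block_reasons;
} __attribute__((packed));

/* Rules 2 and 3 for readers of the extended struct: any read covering at
 * least the v1 event is acceptable; a short read only means the kernel is
 * older and lacks the extended fields. */
static int rfkill_read_len_ok(size_t n)
{
	return n >= sizeof(struct rfkill_event_v1);
}
```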
kernel/bpf/disasm.c (+1 -1)
···
 	[BPF_ADD >> 4]  = "add",
 	[BPF_AND >> 4]  = "and",
 	[BPF_OR  >> 4]  = "or",
-	[BPF_XOR >> 4]  = "or",
+	[BPF_XOR >> 4]  = "xor",
 };
 
 static const char *const bpf_ldst_string[] = {
kernel/bpf/inode.c (+2 -2)
···
 		return PTR_ERR(raw);
 
 	if (type == BPF_TYPE_PROG)
-		ret = bpf_prog_new_fd(raw);
+		ret = (f_flags != O_RDWR) ? -EINVAL : bpf_prog_new_fd(raw);
 	else if (type == BPF_TYPE_MAP)
 		ret = bpf_map_new_fd(raw, f_flags);
 	else if (type == BPF_TYPE_LINK)
-		ret = bpf_link_new_fd(raw);
+		ret = (f_flags != O_RDWR) ? -EINVAL : bpf_link_new_fd(raw);
 	else
 		return -ENOENT;
kernel/bpf/stackmap.c (+10 -2)
···
 BPF_CALL_4(bpf_get_task_stack, struct task_struct *, task, void *, buf,
 	   u32, size, u64, flags)
 {
-	struct pt_regs *regs = task_pt_regs(task);
+	struct pt_regs *regs;
+	long res;
 
-	return __bpf_get_stack(regs, task, NULL, buf, size, flags);
+	if (!try_get_task_stack(task))
+		return -EFAULT;
+
+	regs = task_pt_regs(task);
+	res = __bpf_get_stack(regs, task, NULL, buf, size, flags);
+	put_task_stack(task);
+
+	return res;
 }
 
 BTF_ID_LIST_SINGLE(bpf_get_task_stack_btf_ids, struct, task_struct)
kernel/bpf/trampoline.c (+30)
···
 #include <linux/btf.h>
 #include <linux/rcupdate_trace.h>
 #include <linux/rcupdate_wait.h>
+#include <linux/module.h>
 
 /* dummy _ops. The verifier will operate on target program's ops. */
 const struct bpf_verifier_ops bpf_extension_verifier_ops = {
···
 	return tr;
 }
 
+static int bpf_trampoline_module_get(struct bpf_trampoline *tr)
+{
+	struct module *mod;
+	int err = 0;
+
+	preempt_disable();
+	mod = __module_text_address((unsigned long) tr->func.addr);
+	if (mod && !try_module_get(mod))
+		err = -ENOENT;
+	preempt_enable();
+	tr->mod = mod;
+	return err;
+}
+
+static void bpf_trampoline_module_put(struct bpf_trampoline *tr)
+{
+	module_put(tr->mod);
+	tr->mod = NULL;
+}
+
 static int is_ftrace_location(void *ip)
 {
 	long addr;
···
 		ret = unregister_ftrace_direct((long)ip, (long)old_addr);
 	else
 		ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, old_addr, NULL);
+
+	if (!ret)
+		bpf_trampoline_module_put(tr);
 	return ret;
 }
 
···
 		return ret;
 	tr->func.ftrace_managed = ret;
 
+	if (bpf_trampoline_module_get(tr))
+		return -ENOENT;
+
 	if (tr->func.ftrace_managed)
 		ret = register_ftrace_direct((long)ip, (long)new_addr);
 	else
 		ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, NULL, new_addr);
+
+	if (ret)
+		bpf_trampoline_module_put(tr);
 	return ret;
 }
kernel/bpf/verifier.c (+5)
···
 	u32 btf_id, member_idx;
 	const char *mname;
 
+	if (!prog->gpl_compatible) {
+		verbose(env, "struct ops programs must have a GPL compatible license\n");
+		return -EINVAL;
+	}
+
 	btf_id = prog->aux->attach_btf_id;
 	st_ops = bpf_struct_ops_find(btf_id);
 	if (!st_ops) {
net/batman-adv/translation-table.c (+2)
···
 	hlist_for_each_entry(vlan, &orig_node->vlan_list, list) {
 		tt_vlan->vid = htons(vlan->vid);
 		tt_vlan->crc = htonl(vlan->tt.crc);
+		tt_vlan->reserved = 0;
 
 		tt_vlan++;
 	}
···
 
 		tt_vlan->vid = htons(vlan->vid);
 		tt_vlan->crc = htonl(vlan->tt.crc);
+		tt_vlan->reserved = 0;
 
 		tt_vlan++;
 	}
net/can/bcm.c (+6 -4)
···
 MODULE_AUTHOR("Oliver Hartkopp <oliver.hartkopp@volkswagen.de>");
 MODULE_ALIAS("can-proto-2");
 
+#define BCM_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_ifindex)
+
 /*
  * easy access to the first 64 bit of can(fd)_frame payload. cp->data is
  * 64 bit aligned so the offset has to be multiples of 8 which is ensured
···
 		/* no bound device as default => check msg_name */
 		DECLARE_SOCKADDR(struct sockaddr_can *, addr, msg->msg_name);
 
-		if (msg->msg_namelen < CAN_REQUIRED_SIZE(*addr, can_ifindex))
+		if (msg->msg_namelen < BCM_MIN_NAMELEN)
 			return -EINVAL;
 
 		if (addr->can_family != AF_CAN)
···
 	struct net *net = sock_net(sk);
 	int ret = 0;
 
-	if (len < CAN_REQUIRED_SIZE(*addr, can_ifindex))
+	if (len < BCM_MIN_NAMELEN)
 		return -EINVAL;
 
 	lock_sock(sk);
···
 	sock_recv_ts_and_drops(msg, sk, skb);
 
 	if (msg->msg_name) {
-		__sockaddr_check_size(sizeof(struct sockaddr_can));
-		msg->msg_namelen = sizeof(struct sockaddr_can);
+		__sockaddr_check_size(BCM_MIN_NAMELEN);
+		msg->msg_namelen = BCM_MIN_NAMELEN;
 		memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
 	}
net/can/isotp.c (+7 -4)
···
 MODULE_AUTHOR("Oliver Hartkopp <socketcan@hartkopp.net>");
 MODULE_ALIAS("can-proto-6");
 
+#define ISOTP_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_addr.tp)
+
 #define SINGLE_MASK(id) (((id) & CAN_EFF_FLAG) ? \
 			 (CAN_EFF_MASK | CAN_EFF_FLAG | CAN_RTR_FLAG) : \
 			 (CAN_SFF_MASK | CAN_EFF_FLAG | CAN_RTR_FLAG))
···
 	sock_recv_timestamp(msg, sk, skb);
 
 	if (msg->msg_name) {
-		msg->msg_namelen = sizeof(struct sockaddr_can);
+		__sockaddr_check_size(ISOTP_MIN_NAMELEN);
+		msg->msg_namelen = ISOTP_MIN_NAMELEN;
 		memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
 	}
···
 	int notify_enetdown = 0;
 	int do_rx_reg = 1;
 
-	if (len < CAN_REQUIRED_SIZE(struct sockaddr_can, can_addr.tp))
+	if (len < ISOTP_MIN_NAMELEN)
 		return -EINVAL;
 
 	/* do not register frame reception for functional addressing */
···
 	if (peer)
 		return -EOPNOTSUPP;
 
-	memset(addr, 0, sizeof(*addr));
+	memset(addr, 0, ISOTP_MIN_NAMELEN);
 	addr->can_family = AF_CAN;
 	addr->can_ifindex = so->ifindex;
 	addr->can_addr.tp.rx_id = so->rxid;
 	addr->can_addr.tp.tx_id = so->txid;
 
-	return sizeof(*addr);
+	return ISOTP_MIN_NAMELEN;
 }
 
 static int isotp_setsockopt(struct socket *sock, int level, int optname,
net/can/raw.c (+8 -6)
···
 MODULE_AUTHOR("Urs Thuermann <urs.thuermann@volkswagen.de>");
 MODULE_ALIAS("can-proto-1");
 
+#define RAW_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_ifindex)
+
 #define MASK_ALL 0
 
 /* A raw socket has a list of can_filters attached to it, each receiving
···
 	int err = 0;
 	int notify_enetdown = 0;
 
-	if (len < CAN_REQUIRED_SIZE(*addr, can_ifindex))
+	if (len < RAW_MIN_NAMELEN)
 		return -EINVAL;
 	if (addr->can_family != AF_CAN)
 		return -EINVAL;
···
 	if (peer)
 		return -EOPNOTSUPP;
 
-	memset(addr, 0, sizeof(*addr));
+	memset(addr, 0, RAW_MIN_NAMELEN);
 	addr->can_family = AF_CAN;
 	addr->can_ifindex = ro->ifindex;
 
-	return sizeof(*addr);
+	return RAW_MIN_NAMELEN;
 }
 
 static int raw_setsockopt(struct socket *sock, int level, int optname,
···
 	if (msg->msg_name) {
 		DECLARE_SOCKADDR(struct sockaddr_can *, addr, msg->msg_name);
 
-		if (msg->msg_namelen < CAN_REQUIRED_SIZE(*addr, can_ifindex))
+		if (msg->msg_namelen < RAW_MIN_NAMELEN)
 			return -EINVAL;
 
 		if (addr->can_family != AF_CAN)
···
 	sock_recv_ts_and_drops(msg, sk, skb);
 
 	if (msg->msg_name) {
-		__sockaddr_check_size(sizeof(struct sockaddr_can));
-		msg->msg_namelen = sizeof(struct sockaddr_can);
+		__sockaddr_check_size(RAW_MIN_NAMELEN);
+		msg->msg_namelen = RAW_MIN_NAMELEN;
 		memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
 	}
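The three `*_MIN_NAMELEN` macros above all expand a `CAN_REQUIRED_SIZE()`-style bound: the minimum address length is the offset of the last member a protocol actually uses, plus that member's size. A hedged sketch of the idea (`REQUIRED_SIZE` and `struct sockaddr_model` are illustrative stand-ins, not the UAPI macro or `struct sockaddr_can`):

```c
#include <stddef.h>
#include <stdint.h>

/* Minimum length needed to cover a struct up to and including 'member',
 * so trailing members a protocol does not use need not be supplied by
 * the caller. */
#define REQUIRED_SIZE(type, member) \
	(offsetof(type, member) + sizeof(((type *)0)->member))

/* Simplified stand-in for struct sockaddr_can. */
struct sockaddr_model {
	uint16_t can_family;
	int32_t can_ifindex;	/* aligned to offset 4 */
	union {
		struct { uint32_t rx_id, tx_id; } tp;
		struct { uint32_t addr; } j1939;
	} can_addr;
};
```

With this layout, a raw/BCM-style protocol only needs the first 8 bytes (through `can_ifindex`), while an ISOTP-style one needs the address union as well.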
net/core/dev.c (+2 -1)
···
 
 	set_current_state(TASK_INTERRUPTIBLE);
 
-	while (!kthread_should_stop() && !napi_disable_pending(napi)) {
+	while (!kthread_should_stop()) {
 		/* Testing SCHED_THREADED bit here to make sure the current
 		 * kthread owns this napi and could poll on this napi.
 		 * Testing SCHED bit is not enough because SCHED bit might be
···
 		set_current_state(TASK_INTERRUPTIBLE);
 	}
 	__set_current_state(TASK_RUNNING);
+
 	return -1;
 }
net/core/neighbour.c (+1 -1)
···
 		 * we can reinject the packet there.
 		 */
 		n2 = NULL;
-		if (dst) {
+		if (dst && dst->obsolete != DST_OBSOLETE_DEAD) {
 			n2 = dst_neigh_lookup_skb(dst, skb);
 			if (n2)
 				n1 = n2;
net/core/rtnetlink.c (+1 -1)
···
 
 			BUG_ON(!(af_ops = rtnl_af_lookup(nla_type(af))));
 
-			err = af_ops->set_link_af(dev, af);
+			err = af_ops->set_link_af(dev, af, extack);
 			if (err < 0) {
 				rcu_read_unlock();
 				goto errout;
net/core/skmsg.c (+5 -7)
···
 	if (unlikely(!msg))
 		return -EAGAIN;
 	sk_msg_init(msg);
+	skb_set_owner_r(skb, sk);
 	return sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
 }
 
···
 {
 	switch (verdict) {
 	case __SK_REDIRECT:
-		skb_set_owner_r(skb, sk);
 		sk_psock_skb_redirect(skb);
 		break;
 	case __SK_PASS:
···
 	rcu_read_lock();
 	prog = READ_ONCE(psock->progs.skb_verdict);
 	if (likely(prog)) {
-		/* We skip full set_owner_r here because if we do a SK_PASS
-		 * or SK_DROP we can skip skb memory accounting and use the
-		 * TLS context.
-		 */
 		skb->sk = psock->sk;
 		tcp_skb_bpf_redirect_clear(skb);
 		ret = sk_psock_bpf_run(psock, prog, skb);
···
 		kfree_skb(skb);
 		goto out;
 	}
-	skb_set_owner_r(skb, sk);
 	prog = READ_ONCE(psock->progs.skb_verdict);
 	if (likely(prog)) {
+		skb->sk = sk;
 		tcp_skb_bpf_redirect_clear(skb);
 		ret = sk_psock_bpf_run(psock, prog, skb);
 		ret = sk_psock_map_verd(ret, tcp_skb_bpf_redirect_fetch(skb));
+		skb->sk = NULL;
 	}
 	sk_psock_verdict_apply(psock, skb, ret);
 out:
···
 		kfree_skb(skb);
 		goto out;
 	}
-	skb_set_owner_r(skb, sk);
 	prog = READ_ONCE(psock->progs.skb_verdict);
 	if (likely(prog)) {
+		skb->sk = sk;
 		tcp_skb_bpf_redirect_clear(skb);
 		ret = sk_psock_bpf_run(psock, prog, skb);
 		ret = sk_psock_map_verd(ret, tcp_skb_bpf_redirect_fetch(skb));
+		skb->sk = NULL;
 	}
 	sk_psock_verdict_apply(psock, skb, ret);
 out:
net/core/sock.c (+3 -9)
···
 	if (skb_is_tcp_pure_ack(skb))
 		return;
 
-	if (can_skb_orphan_partial(skb)) {
-		struct sock *sk = skb->sk;
-
-		if (refcount_inc_not_zero(&sk->sk_refcnt)) {
-			WARN_ON(refcount_sub_and_test(skb->truesize, &sk->sk_wmem_alloc));
-			skb->destructor = sock_efree;
-		}
-	} else {
+	if (can_skb_orphan_partial(skb))
+		skb_set_owner_sk_safe(skb, skb->sk);
+	else
 		skb_orphan(skb);
-	}
 }
 EXPORT_SYMBOL(skb_orphan_partial);
net/core/xdp.c (+2 -1)
···
 		/* mem->id is valid, checked in xdp_rxq_info_reg_mem_model() */
 		xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
 		page = virt_to_head_page(data);
-		napi_direct &= !xdp_return_frame_no_direct();
+		if (napi_direct && xdp_return_frame_no_direct())
+			napi_direct = false;
 		page_pool_put_full_page(xa->page_pool, page, napi_direct);
 		rcu_read_unlock();
 		break;
net/dsa/dsa2.c (+7 -1)
···
 
 	list_for_each_entry(dp, &dst->ports, list) {
 		err = dsa_port_setup(dp);
-		if (err)
+		if (err) {
+			dsa_port_devlink_teardown(dp);
+			dp->type = DSA_PORT_TYPE_UNUSED;
+			err = dsa_port_devlink_setup(dp);
+			if (err)
+				goto teardown;
 			continue;
+		}
 	}
 
 	return 0;
net/dsa/switch.c (+9 -6)
···
 	bool unset_vlan_filtering = br_vlan_enabled(info->br);
 	struct dsa_switch_tree *dst = ds->dst;
 	struct netlink_ext_ack extack = {0};
-	int err, i;
+	int err, port;
 
 	if (dst->index == info->tree_index && ds->index == info->sw_index &&
 	    ds->ops->port_bridge_join)
···
 	 * it. That is a good thing, because that lets us handle it and also
 	 * handle the case where the switch's vlan_filtering setting is global
 	 * (not per port). When that happens, the correct moment to trigger the
-	 * vlan_filtering callback is only when the last port left this bridge.
+	 * vlan_filtering callback is only when the last port leaves the last
+	 * VLAN-aware bridge.
 	 */
 	if (unset_vlan_filtering && ds->vlan_filtering_is_global) {
-		for (i = 0; i < ds->num_ports; i++) {
-			if (i == info->port)
-				continue;
-			if (dsa_to_port(ds, i)->bridge_dev == info->br) {
+		for (port = 0; port < ds->num_ports; port++) {
+			struct net_device *bridge_dev;
+
+			bridge_dev = dsa_to_port(ds, port)->bridge_dev;
+
+			if (bridge_dev && br_vlan_enabled(bridge_dev)) {
 				unset_vlan_filtering = false;
 				break;
 			}
net/ethtool/common.c (+17)
···
 	__DEFINE_LINK_MODE_PARAMS(10000, KR, Full),
 	[ETHTOOL_LINK_MODE_10000baseR_FEC_BIT] = {
 		.speed = SPEED_10000,
+		.lanes = 1,
 		.duplex = DUPLEX_FULL,
 	},
 	__DEFINE_LINK_MODE_PARAMS(20000, MLD2, Full),
···
 	rtnl_unlock();
 }
 EXPORT_SYMBOL_GPL(ethtool_set_ethtool_phy_ops);
+
+void
+ethtool_params_from_link_mode(struct ethtool_link_ksettings *link_ksettings,
+			      enum ethtool_link_mode_bit_indices link_mode)
+{
+	const struct link_mode_info *link_info;
+
+	if (WARN_ON_ONCE(link_mode >= __ETHTOOL_LINK_MODE_MASK_NBITS))
+		return;
+
+	link_info = &link_mode_params[link_mode];
+	link_ksettings->base.speed = link_info->speed;
+	link_ksettings->lanes = link_info->lanes;
+	link_ksettings->base.duplex = link_info->duplex;
+}
+EXPORT_SYMBOL_GPL(ethtool_params_from_link_mode);
net/ethtool/eee.c (+2 -2)
···
 	ethnl_update_bool32(&eee.eee_enabled, tb[ETHTOOL_A_EEE_ENABLED], &mod);
 	ethnl_update_bool32(&eee.tx_lpi_enabled,
 			    tb[ETHTOOL_A_EEE_TX_LPI_ENABLED], &mod);
-	ethnl_update_bool32(&eee.tx_lpi_timer, tb[ETHTOOL_A_EEE_TX_LPI_TIMER],
-			    &mod);
+	ethnl_update_u32(&eee.tx_lpi_timer, tb[ETHTOOL_A_EEE_TX_LPI_TIMER],
+			 &mod);
 	ret = 0;
 	if (!mod)
 		goto out_ops;
net/ethtool/ioctl.c (+1 -17)
···
 int __ethtool_get_link_ksettings(struct net_device *dev,
 				 struct ethtool_link_ksettings *link_ksettings)
 {
-	const struct link_mode_info *link_info;
-	int err;
-
 	ASSERT_RTNL();
 
 	if (!dev->ethtool_ops->get_link_ksettings)
 		return -EOPNOTSUPP;
 
 	memset(link_ksettings, 0, sizeof(*link_ksettings));
-
-	link_ksettings->link_mode = -1;
-	err = dev->ethtool_ops->get_link_ksettings(dev, link_ksettings);
-	if (err)
-		return err;
-
-	if (link_ksettings->link_mode != -1) {
-		link_info = &link_mode_params[link_ksettings->link_mode];
-		link_ksettings->base.speed = link_info->speed;
-		link_ksettings->lanes = link_info->lanes;
-		link_ksettings->base.duplex = link_info->duplex;
-	}
-
-	return 0;
+	return dev->ethtool_ops->get_link_ksettings(dev, link_ksettings);
 }
 EXPORT_SYMBOL(__ethtool_get_link_ksettings);
net/hsr/hsr_device.c (+1)
···
 	master = hsr_port_get_hsr(hsr, HSR_PT_MASTER);
 	if (master) {
 		skb->dev = master->dev;
+		skb_reset_mac_header(skb);
 		hsr_forward_skb(skb, master);
 	} else {
 		atomic_long_inc(&dev->tx_dropped);
net/hsr/hsr_forward.c (-6)
···
 {
 	struct hsr_frame_info frame;
 
-	if (skb_mac_header(skb) != skb->data) {
-		WARN_ONCE(1, "%s:%d: Malformed frame (port_src %s)\n",
-			  __FILE__, __LINE__, port->dev->name);
-		goto out_drop;
-	}
-
 	if (fill_frame_info(&frame, skb, port) < 0)
 		goto out_drop;
net/ieee802154/nl-mac.c (+4 -3)
···
 	desc->mode = nla_get_u8(info->attrs[IEEE802154_ATTR_LLSEC_KEY_MODE]);
 
 	if (desc->mode == IEEE802154_SCF_KEY_IMPLICIT) {
-		if (!info->attrs[IEEE802154_ATTR_PAN_ID] &&
-		    !(info->attrs[IEEE802154_ATTR_SHORT_ADDR] ||
-		      info->attrs[IEEE802154_ATTR_HW_ADDR]))
+		if (!info->attrs[IEEE802154_ATTR_PAN_ID])
 			return -EINVAL;
 
 		desc->device_addr.pan_id = nla_get_shortaddr(info->attrs[IEEE802154_ATTR_PAN_ID]);
···
 			desc->device_addr.mode = IEEE802154_ADDR_SHORT;
 			desc->device_addr.short_addr = nla_get_shortaddr(info->attrs[IEEE802154_ATTR_SHORT_ADDR]);
 		} else {
+			if (!info->attrs[IEEE802154_ATTR_HW_ADDR])
+				return -EINVAL;
+
 			desc->device_addr.mode = IEEE802154_ADDR_LONG;
 			desc->device_addr.extended_addr = nla_get_hwaddr(info->attrs[IEEE802154_ATTR_HW_ADDR]);
 		}
net/ieee802154/nl802154.c (+60 -8)
···
 		goto nla_put_failure;
 
 #ifdef CONFIG_IEEE802154_NL802154_EXPERIMENTAL
+	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+		goto out;
+
 	if (nl802154_get_llsec_params(msg, rdev, wpan_dev) < 0)
 		goto nla_put_failure;
+
+out:
 #endif /* CONFIG_IEEE802154_NL802154_EXPERIMENTAL */
 
 	genlmsg_end(msg, hdr);
···
 	u32 changed = 0;
 	int ret;
 
+	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+		return -EOPNOTSUPP;
+
 	if (info->attrs[NL802154_ATTR_SEC_ENABLED]) {
 		u8 enabled;
···
 	if (err)
 		return err;
 
+	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) {
+		err = skb->len;
+		goto out_err;
+	}
+
 	if (!wpan_dev->netdev) {
 		err = -EINVAL;
 		goto out_err;
···
 	struct ieee802154_llsec_key_id id = { };
 	u32 commands[NL802154_CMD_FRAME_NR_IDS / 32] = { };
 
-	if (nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
+	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+		return -EOPNOTSUPP;
+
+	if (!info->attrs[NL802154_ATTR_SEC_KEY] ||
+	    nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
 		return -EINVAL;
 
 	if (!attrs[NL802154_KEY_ATTR_USAGE_FRAMES] ||
···
 	struct nlattr *attrs[NL802154_KEY_ATTR_MAX + 1];
 	struct ieee802154_llsec_key_id id;
 
-	if (nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
+	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+		return -EOPNOTSUPP;
+
+	if (!info->attrs[NL802154_ATTR_SEC_KEY] ||
+	    nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
 		return -EINVAL;
 
 	if (ieee802154_llsec_parse_key_id(attrs[NL802154_KEY_ATTR_ID], &id) < 0)
···
 	err = nl802154_prepare_wpan_dev_dump(skb, cb, &rdev, &wpan_dev);
 	if (err)
 		return err;
+
+	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) {
+		err = skb->len;
+		goto out_err;
+	}
 
 	if (!wpan_dev->netdev) {
 		err = -EINVAL;
···
 	struct wpan_dev *wpan_dev = dev->ieee802154_ptr;
 	struct ieee802154_llsec_device dev_desc;
 
+	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+		return -EOPNOTSUPP;
+
 	if (ieee802154_llsec_parse_device(info->attrs[NL802154_ATTR_SEC_DEVICE],
 					  &dev_desc) < 0)
 		return -EINVAL;
···
 	struct nlattr *attrs[NL802154_DEV_ATTR_MAX + 1];
 	__le64 extended_addr;
 
-	if (nla_parse_nested_deprecated(attrs, NL802154_DEV_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVICE], nl802154_dev_policy, info->extack))
+	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+		return -EOPNOTSUPP;
+
+	if (!info->attrs[NL802154_ATTR_SEC_DEVICE] ||
+	    nla_parse_nested_deprecated(attrs, NL802154_DEV_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVICE], nl802154_dev_policy, info->extack))
 		return -EINVAL;
 
 	if (!attrs[NL802154_DEV_ATTR_EXTENDED_ADDR])
···
 	if (err)
 		return err;
 
+	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) {
+		err = skb->len;
+		goto out_err;
+	}
+
 	if (!wpan_dev->netdev) {
 		err = -EINVAL;
 		goto out_err;
···
 	struct ieee802154_llsec_device_key key;
 	__le64 extended_addr;
 
+	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+		return -EOPNOTSUPP;
+
 	if (!info->attrs[NL802154_ATTR_SEC_DEVKEY] ||
 	    nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack) < 0)
 		return -EINVAL;
···
 	struct ieee802154_llsec_device_key key;
 	__le64 extended_addr;
 
-	if (nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack))
+	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+		return -EOPNOTSUPP;
+
+	if (!info->attrs[NL802154_ATTR_SEC_DEVKEY] ||
+	    nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack))
 		return -EINVAL;
 
 	if (!attrs[NL802154_DEVKEY_ATTR_EXTENDED_ADDR])
···
 	err = nl802154_prepare_wpan_dev_dump(skb, cb, &rdev, &wpan_dev);
 	if (err)
 		return err;
+
+	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR) {
+		err = skb->len;
+		goto out_err;
+	}
 
 	if (!wpan_dev->netdev) {
 		err = -EINVAL;
···
 	struct wpan_dev *wpan_dev = dev->ieee802154_ptr;
 	struct ieee802154_llsec_seclevel sl;
 
+	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+		return -EOPNOTSUPP;
+
 	if (llsec_parse_seclevel(info->attrs[NL802154_ATTR_SEC_LEVEL],
 				 &sl) < 0)
 		return -EINVAL;
···
 	struct wpan_dev *wpan_dev = dev->ieee802154_ptr;
 	struct ieee802154_llsec_seclevel sl;
 
+	if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+		return -EOPNOTSUPP;
+
 	if (!info->attrs[NL802154_ATTR_SEC_LEVEL] ||
 	    llsec_parse_seclevel(info->attrs[NL802154_ATTR_SEC_LEVEL],
 				 &sl) < 0)
···
 #define NL802154_FLAG_NEED_NETDEV	0x02
 #define NL802154_FLAG_NEED_RTNL		0x04
 #define NL802154_FLAG_CHECK_NETDEV_UP	0x08
 #define NL802154_FLAG_NEED_NETDEV_UP
(NL802154_FLAG_NEED_NETDEV |\ 2158 - NL802154_FLAG_CHECK_NETDEV_UP) 2159 2101 #define NL802154_FLAG_NEED_WPAN_DEV 0x10 2160 - #define NL802154_FLAG_NEED_WPAN_DEV_UP (NL802154_FLAG_NEED_WPAN_DEV |\ 2161 - NL802154_FLAG_CHECK_NETDEV_UP) 2162 2102 2163 2103 static int nl802154_pre_doit(const struct genl_ops *ops, struct sk_buff *skb, 2164 2104 struct genl_info *info)
+1 -1
net/ipv4/ah4.c
··· 141 141 } 142 142 143 143 kfree(AH_SKB_CB(skb)->tmp); 144 - xfrm_output_resume(skb, err); 144 + xfrm_output_resume(skb->sk, skb, err); 145 145 } 146 146 147 147 static int ah_output(struct xfrm_state *x, struct sk_buff *skb)
+2 -1
net/ipv4/devinet.c
··· 1978 1978 return 0; 1979 1979 } 1980 1980 1981 - static int inet_set_link_af(struct net_device *dev, const struct nlattr *nla) 1981 + static int inet_set_link_af(struct net_device *dev, const struct nlattr *nla, 1982 + struct netlink_ext_ack *extack) 1982 1983 { 1983 1984 struct in_device *in_dev = __in_dev_get_rcu(dev); 1984 1985 struct nlattr *a, *tb[IFLA_INET_MAX+1];
+1 -1
net/ipv4/esp4.c
··· 279 279 x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP) 280 280 esp_output_tail_tcp(x, skb); 281 281 else 282 - xfrm_output_resume(skb, err); 282 + xfrm_output_resume(skb->sk, skb, err); 283 283 } 284 284 } 285 285
+14 -3
net/ipv4/esp4_offload.c
··· 217 217 218 218 if ((!(skb->dev->gso_partial_features & NETIF_F_HW_ESP) && 219 219 !(features & NETIF_F_HW_ESP)) || x->xso.dev != skb->dev) 220 - esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK); 220 + esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK | 221 + NETIF_F_SCTP_CRC); 221 222 else if (!(features & NETIF_F_HW_ESP_TX_CSUM) && 222 223 !(skb->dev->gso_partial_features & NETIF_F_HW_ESP_TX_CSUM)) 223 - esp_features = features & ~NETIF_F_CSUM_MASK; 224 + esp_features = features & ~(NETIF_F_CSUM_MASK | 225 + NETIF_F_SCTP_CRC); 224 226 225 227 xo->flags |= XFRM_GSO_SEGMENT; 226 228 ··· 314 312 ip_hdr(skb)->tot_len = htons(skb->len); 315 313 ip_send_check(ip_hdr(skb)); 316 314 317 - if (hw_offload) 315 + if (hw_offload) { 316 + if (!skb_ext_add(skb, SKB_EXT_SEC_PATH)) 317 + return -ENOMEM; 318 + 319 + xo = xfrm_offload(skb); 320 + if (!xo) 321 + return -EINVAL; 322 + 323 + xo->flags |= XFRM_XMIT; 318 324 return 0; 325 + } 319 326 320 327 err = esp_output_tail(x, skb, &esp); 321 328 if (err)
+4 -2
net/ipv4/ip_vti.c
··· 218 218 } 219 219 220 220 if (dst->flags & DST_XFRM_QUEUE) 221 - goto queued; 221 + goto xmit; 222 222 223 223 if (!vti_state_check(dst->xfrm, parms->iph.daddr, parms->iph.saddr)) { 224 224 dev->stats.tx_carrier_errors++; ··· 238 238 if (skb->len > mtu) { 239 239 skb_dst_update_pmtu_no_confirm(skb, mtu); 240 240 if (skb->protocol == htons(ETH_P_IP)) { 241 + if (!(ip_hdr(skb)->frag_off & htons(IP_DF))) 242 + goto xmit; 241 243 icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, 242 244 htonl(mtu)); 243 245 } else { ··· 253 251 goto tx_error; 254 252 } 255 253 256 - queued: 254 + xmit: 257 255 skb_scrub_packet(skb, !net_eq(tunnel->net, dev_net(dev))); 258 256 skb_dst_set(skb, dst); 259 257 skb->dev = skb_dst(skb)->dev;
+4
net/ipv4/udp.c
··· 2754 2754 val = up->gso_size; 2755 2755 break; 2756 2756 2757 + case UDP_GRO: 2758 + val = up->gro_enabled; 2759 + break; 2760 + 2757 2761 /* The following two cannot be changed on UDP sockets, the return is 2758 2762 * always 0 (which corresponds to the full checksum coverage of UDP). */ 2759 2763 case UDPLITE_SEND_CSCOV:
+26 -6
net/ipv6/addrconf.c
··· 5669 5669 return 0; 5670 5670 } 5671 5671 5672 - static int inet6_set_iftoken(struct inet6_dev *idev, struct in6_addr *token) 5672 + static int inet6_set_iftoken(struct inet6_dev *idev, struct in6_addr *token, 5673 + struct netlink_ext_ack *extack) 5673 5674 { 5674 5675 struct inet6_ifaddr *ifp; 5675 5676 struct net_device *dev = idev->dev; ··· 5681 5680 5682 5681 if (!token) 5683 5682 return -EINVAL; 5684 - if (dev->flags & (IFF_LOOPBACK | IFF_NOARP)) 5683 + 5684 + if (dev->flags & IFF_LOOPBACK) { 5685 + NL_SET_ERR_MSG_MOD(extack, "Device is loopback"); 5685 5686 return -EINVAL; 5686 - if (!ipv6_accept_ra(idev)) 5687 + } 5688 + 5689 + if (dev->flags & IFF_NOARP) { 5690 + NL_SET_ERR_MSG_MOD(extack, 5691 + "Device does not do neighbour discovery"); 5687 5692 return -EINVAL; 5688 - if (idev->cnf.rtr_solicits == 0) 5693 + } 5694 + 5695 + if (!ipv6_accept_ra(idev)) { 5696 + NL_SET_ERR_MSG_MOD(extack, 5697 + "Router advertisement is disabled on device"); 5689 5698 return -EINVAL; 5699 + } 5700 + 5701 + if (idev->cnf.rtr_solicits == 0) { 5702 + NL_SET_ERR_MSG(extack, 5703 + "Router solicitation is disabled on device"); 5704 + return -EINVAL; 5705 + } 5690 5706 5691 5707 write_lock_bh(&idev->lock); 5692 5708 ··· 5811 5793 return 0; 5812 5794 } 5813 5795 5814 - static int inet6_set_link_af(struct net_device *dev, const struct nlattr *nla) 5796 + static int inet6_set_link_af(struct net_device *dev, const struct nlattr *nla, 5797 + struct netlink_ext_ack *extack) 5815 5798 { 5816 5799 struct inet6_dev *idev = __in6_dev_get(dev); 5817 5800 struct nlattr *tb[IFLA_INET6_MAX + 1]; ··· 5825 5806 BUG(); 5826 5807 5827 5808 if (tb[IFLA_INET6_TOKEN]) { 5828 - err = inet6_set_iftoken(idev, nla_data(tb[IFLA_INET6_TOKEN])); 5809 + err = inet6_set_iftoken(idev, nla_data(tb[IFLA_INET6_TOKEN]), 5810 + extack); 5829 5811 if (err) 5830 5812 return err; 5831 5813 }
+1 -1
net/ipv6/ah6.c
··· 316 316 } 317 317 318 318 kfree(AH_SKB_CB(skb)->tmp); 319 - xfrm_output_resume(skb, err); 319 + xfrm_output_resume(skb->sk, skb, err); 320 320 } 321 321 322 322 static int ah6_output(struct xfrm_state *x, struct sk_buff *skb)
+1 -1
net/ipv6/esp6.c
··· 314 314 x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP) 315 315 esp_output_tail_tcp(x, skb); 316 316 else 317 - xfrm_output_resume(skb, err); 317 + xfrm_output_resume(skb->sk, skb, err); 318 318 } 319 319 } 320 320
+14 -3
net/ipv6/esp6_offload.c
··· 254 254 skb->encap_hdr_csum = 1; 255 255 256 256 if (!(features & NETIF_F_HW_ESP) || x->xso.dev != skb->dev) 257 - esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK); 257 + esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK | 258 + NETIF_F_SCTP_CRC); 258 259 else if (!(features & NETIF_F_HW_ESP_TX_CSUM)) 259 - esp_features = features & ~NETIF_F_CSUM_MASK; 260 + esp_features = features & ~(NETIF_F_CSUM_MASK | 261 + NETIF_F_SCTP_CRC); 260 262 261 263 xo->flags |= XFRM_GSO_SEGMENT; 262 264 ··· 348 346 349 347 ipv6_hdr(skb)->payload_len = htons(len); 350 348 351 - if (hw_offload) 349 + if (hw_offload) { 350 + if (!skb_ext_add(skb, SKB_EXT_SEC_PATH)) 351 + return -ENOMEM; 352 + 353 + xo = xfrm_offload(skb); 354 + if (!xo) 355 + return -EINVAL; 356 + 357 + xo->flags |= XFRM_XMIT; 352 358 return 0; 359 + } 353 360 354 361 err = esp6_output_tail(x, skb, &esp); 355 362 if (err)
+4 -2
net/ipv6/ip6_vti.c
··· 494 494 } 495 495 496 496 if (dst->flags & DST_XFRM_QUEUE) 497 - goto queued; 497 + goto xmit; 498 498 499 499 x = dst->xfrm; 500 500 if (!vti6_state_check(x, &t->parms.raddr, &t->parms.laddr)) ··· 523 523 524 524 icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu); 525 525 } else { 526 + if (!(ip_hdr(skb)->frag_off & htons(IP_DF))) 527 + goto xmit; 526 528 icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, 527 529 htonl(mtu)); 528 530 } ··· 533 531 goto tx_err_dst_release; 534 532 } 535 533 536 - queued: 534 + xmit: 537 535 skb_scrub_packet(skb, !net_eq(t->net, dev_net(dev))); 538 536 skb_dst_set(skb, dst); 539 537 skb->dev = skb_dst(skb)->dev;
+1 -1
net/ipv6/raw.c
··· 298 298 */ 299 299 v4addr = LOOPBACK4_IPV6; 300 300 if (!(addr_type & IPV6_ADDR_MULTICAST) && 301 - !sock_net(sk)->ipv6.sysctl.ip_nonlocal_bind) { 301 + !ipv6_can_nonlocal_bind(sock_net(sk), inet)) { 302 302 err = -EADDRNOTAVAIL; 303 303 if (!ipv6_chk_addr(sock_net(sk), &addr->sin6_addr, 304 304 dev, 0)) {
+5 -3
net/ipv6/route.c
··· 5209 5209 * nexthops have been replaced by first new, the rest should 5210 5210 * be added to it. 5211 5211 */ 5212 - cfg->fc_nlinfo.nlh->nlmsg_flags &= ~(NLM_F_EXCL | 5213 - NLM_F_REPLACE); 5214 - cfg->fc_nlinfo.nlh->nlmsg_flags |= NLM_F_CREATE; 5212 + if (cfg->fc_nlinfo.nlh) { 5213 + cfg->fc_nlinfo.nlh->nlmsg_flags &= ~(NLM_F_EXCL | 5214 + NLM_F_REPLACE); 5215 + cfg->fc_nlinfo.nlh->nlmsg_flags |= NLM_F_CREATE; 5216 + } 5215 5217 nhn++; 5216 5218 } 5217 5219
+3 -1
net/mac80211/cfg.c
··· 1788 1788 } 1789 1789 1790 1790 if (sta->sdata->vif.type == NL80211_IFTYPE_AP_VLAN && 1791 - sta->sdata->u.vlan.sta) 1791 + sta->sdata->u.vlan.sta) { 1792 + ieee80211_clear_fast_rx(sta); 1792 1793 RCU_INIT_POINTER(sta->sdata->u.vlan.sta, NULL); 1794 + } 1793 1795 1794 1796 if (test_sta_flag(sta, WLAN_STA_AUTHORIZED)) 1795 1797 ieee80211_vif_dec_num_mcast(sta->sdata);
+4 -1
net/mac80211/mlme.c
··· 4707 4707 timeout = sta->rx_stats.last_rx; 4708 4708 timeout += IEEE80211_CONNECTION_IDLE_TIME; 4709 4709 4710 - if (time_is_before_jiffies(timeout)) { 4710 + /* If timeout is after now, then update timer to fire at 4711 + * the later date, but do not actually probe at this time. 4712 + */ 4713 + if (time_is_after_jiffies(timeout)) { 4711 4714 mod_timer(&ifmgd->conn_mon_timer, round_jiffies_up(timeout)); 4712 4715 return; 4713 4716 }
+1 -1
net/mac80211/tx.c
··· 3573 3573 test_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txqi->flags)) 3574 3574 goto out; 3575 3575 3576 - if (vif->txqs_stopped[ieee80211_ac_from_tid(txq->tid)]) { 3576 + if (vif->txqs_stopped[txq->ac]) { 3577 3577 set_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txqi->flags); 3578 3578 goto out; 3579 3579 }
+1 -1
net/mac802154/llsec.c
··· 152 152 crypto_free_sync_skcipher(key->tfm0); 153 153 err_tfm: 154 154 for (i = 0; i < ARRAY_SIZE(key->tfm); i++) 155 - if (key->tfm[i]) 155 + if (!IS_ERR_OR_NULL(key->tfm[i])) 156 156 crypto_free_aead(key->tfm[i]); 157 157 158 158 kfree_sensitive(key);
+47 -53
net/mptcp/protocol.c
··· 11 11 #include <linux/netdevice.h> 12 12 #include <linux/sched/signal.h> 13 13 #include <linux/atomic.h> 14 - #include <linux/igmp.h> 15 14 #include <net/sock.h> 16 15 #include <net/inet_common.h> 17 16 #include <net/inet_hashtables.h> ··· 19 20 #include <net/tcp_states.h> 20 21 #if IS_ENABLED(CONFIG_MPTCP_IPV6) 21 22 #include <net/transp_v6.h> 22 - #include <net/addrconf.h> 23 23 #endif 24 24 #include <net/mptcp.h> 25 25 #include <net/xfrm.h> ··· 2876 2878 return ret; 2877 2879 } 2878 2880 2881 + static bool mptcp_unsupported(int level, int optname) 2882 + { 2883 + if (level == SOL_IP) { 2884 + switch (optname) { 2885 + case IP_ADD_MEMBERSHIP: 2886 + case IP_ADD_SOURCE_MEMBERSHIP: 2887 + case IP_DROP_MEMBERSHIP: 2888 + case IP_DROP_SOURCE_MEMBERSHIP: 2889 + case IP_BLOCK_SOURCE: 2890 + case IP_UNBLOCK_SOURCE: 2891 + case MCAST_JOIN_GROUP: 2892 + case MCAST_LEAVE_GROUP: 2893 + case MCAST_JOIN_SOURCE_GROUP: 2894 + case MCAST_LEAVE_SOURCE_GROUP: 2895 + case MCAST_BLOCK_SOURCE: 2896 + case MCAST_UNBLOCK_SOURCE: 2897 + case MCAST_MSFILTER: 2898 + return true; 2899 + } 2900 + return false; 2901 + } 2902 + if (level == SOL_IPV6) { 2903 + switch (optname) { 2904 + case IPV6_ADDRFORM: 2905 + case IPV6_ADD_MEMBERSHIP: 2906 + case IPV6_DROP_MEMBERSHIP: 2907 + case IPV6_JOIN_ANYCAST: 2908 + case IPV6_LEAVE_ANYCAST: 2909 + case MCAST_JOIN_GROUP: 2910 + case MCAST_LEAVE_GROUP: 2911 + case MCAST_JOIN_SOURCE_GROUP: 2912 + case MCAST_LEAVE_SOURCE_GROUP: 2913 + case MCAST_BLOCK_SOURCE: 2914 + case MCAST_UNBLOCK_SOURCE: 2915 + case MCAST_MSFILTER: 2916 + return true; 2917 + } 2918 + return false; 2919 + } 2920 + return false; 2921 + } 2922 + 2879 2923 static int mptcp_setsockopt(struct sock *sk, int level, int optname, 2880 2924 sockptr_t optval, unsigned int optlen) 2881 2925 { ··· 2925 2885 struct sock *ssk; 2926 2886 2927 2887 pr_debug("msk=%p", msk); 2888 + 2889 + if (mptcp_unsupported(level, optname)) 2890 + return -ENOPROTOOPT; 2928 2891 2929 2892 if (level == SOL_SOCKET) 
2930 2893 return mptcp_setsockopt_sol_socket(msk, optname, optval, optlen); ··· 3462 3419 return mask; 3463 3420 } 3464 3421 3465 - static int mptcp_release(struct socket *sock) 3466 - { 3467 - struct mptcp_subflow_context *subflow; 3468 - struct sock *sk = sock->sk; 3469 - struct mptcp_sock *msk; 3470 - 3471 - if (!sk) 3472 - return 0; 3473 - 3474 - lock_sock(sk); 3475 - 3476 - msk = mptcp_sk(sk); 3477 - 3478 - mptcp_for_each_subflow(msk, subflow) { 3479 - struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 3480 - 3481 - ip_mc_drop_socket(ssk); 3482 - } 3483 - 3484 - release_sock(sk); 3485 - 3486 - return inet_release(sock); 3487 - } 3488 - 3489 3422 static const struct proto_ops mptcp_stream_ops = { 3490 3423 .family = PF_INET, 3491 3424 .owner = THIS_MODULE, 3492 - .release = mptcp_release, 3425 + .release = inet_release, 3493 3426 .bind = mptcp_bind, 3494 3427 .connect = mptcp_stream_connect, 3495 3428 .socketpair = sock_no_socketpair, ··· 3557 3538 } 3558 3539 3559 3540 #if IS_ENABLED(CONFIG_MPTCP_IPV6) 3560 - static int mptcp6_release(struct socket *sock) 3561 - { 3562 - struct mptcp_subflow_context *subflow; 3563 - struct mptcp_sock *msk; 3564 - struct sock *sk = sock->sk; 3565 - 3566 - if (!sk) 3567 - return 0; 3568 - 3569 - lock_sock(sk); 3570 - 3571 - msk = mptcp_sk(sk); 3572 - 3573 - mptcp_for_each_subflow(msk, subflow) { 3574 - struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 3575 - 3576 - ip_mc_drop_socket(ssk); 3577 - ipv6_sock_mc_close(ssk); 3578 - ipv6_sock_ac_close(ssk); 3579 - } 3580 - 3581 - release_sock(sk); 3582 - return inet6_release(sock); 3583 - } 3584 - 3585 3541 static const struct proto_ops mptcp_v6_stream_ops = { 3586 3542 .family = PF_INET6, 3587 3543 .owner = THIS_MODULE, 3588 - .release = mptcp6_release, 3544 + .release = inet6_release, 3589 3545 .bind = mptcp_bind, 3590 3546 .connect = mptcp_stream_connect, 3591 3547 .socketpair = sock_no_socketpair,
+13 -7
net/ncsi/ncsi-manage.c
··· 105 105 monitor_state = nc->monitor.state; 106 106 spin_unlock_irqrestore(&nc->lock, flags); 107 107 108 - if (!enabled || chained) { 109 - ncsi_stop_channel_monitor(nc); 110 - return; 111 - } 108 + if (!enabled) 109 + return; /* expected race disabling timer */ 110 + if (WARN_ON_ONCE(chained)) 111 + goto bad_state; 112 + 112 113 if (state != NCSI_CHANNEL_INACTIVE && 113 114 state != NCSI_CHANNEL_ACTIVE) { 114 - ncsi_stop_channel_monitor(nc); 115 + bad_state: 116 + netdev_warn(ndp->ndev.dev, 117 + "Bad NCSI monitor state channel %d 0x%x %s queue\n", 118 + nc->id, state, chained ? "on" : "off"); 119 + spin_lock_irqsave(&nc->lock, flags); 120 + nc->monitor.enabled = false; 121 + spin_unlock_irqrestore(&nc->lock, flags); 115 122 return; 116 123 } 117 124 ··· 143 136 ncsi_report_link(ndp, true); 144 137 ndp->flags |= NCSI_DEV_RESHUFFLE; 145 138 146 - ncsi_stop_channel_monitor(nc); 147 - 148 139 ncm = &nc->modes[NCSI_MODE_LINK]; 149 140 spin_lock_irqsave(&nc->lock, flags); 141 + nc->monitor.enabled = false; 150 142 nc->state = NCSI_CHANNEL_INVISIBLE; 151 143 ncm->data[2] &= ~0x1; 152 144 spin_unlock_irqrestore(&nc->lock, flags);
+10
net/nfc/llcp_sock.c
··· 108 108 llcp_sock->service_name_len, 109 109 GFP_KERNEL); 110 110 if (!llcp_sock->service_name) { 111 + nfc_llcp_local_put(llcp_sock->local); 111 112 ret = -ENOMEM; 112 113 goto put_dev; 113 114 } 114 115 llcp_sock->ssap = nfc_llcp_get_sdp_ssap(local, llcp_sock); 115 116 if (llcp_sock->ssap == LLCP_SAP_MAX) { 117 + nfc_llcp_local_put(llcp_sock->local); 116 118 kfree(llcp_sock->service_name); 117 119 llcp_sock->service_name = NULL; 118 120 ret = -EADDRINUSE; ··· 673 671 ret = -EISCONN; 674 672 goto error; 675 673 } 674 + if (sk->sk_state == LLCP_CONNECTING) { 675 + ret = -EINPROGRESS; 676 + goto error; 677 + } 676 678 677 679 dev = nfc_get_device(addr->dev_idx); 678 680 if (dev == NULL) { ··· 708 702 llcp_sock->local = nfc_llcp_local_get(local); 709 703 llcp_sock->ssap = nfc_llcp_get_local_ssap(local); 710 704 if (llcp_sock->ssap == LLCP_SAP_MAX) { 705 + nfc_llcp_local_put(llcp_sock->local); 711 706 ret = -ENOMEM; 712 707 goto put_dev; 713 708 } ··· 750 743 751 744 sock_unlink: 752 745 nfc_llcp_sock_unlink(&local->connecting_sockets, sk); 746 + kfree(llcp_sock->service_name); 747 + llcp_sock->service_name = NULL; 753 748 754 749 sock_llcp_release: 755 750 nfc_llcp_put_ssap(local, llcp_sock->ssap); 751 + nfc_llcp_local_put(llcp_sock->local); 756 752 757 753 put_dev: 758 754 nfc_put_device(dev);
+4 -4
net/openvswitch/conntrack.c
··· 2034 2034 static int ovs_ct_limit_get_default_limit(struct ovs_ct_limit_info *info, 2035 2035 struct sk_buff *reply) 2036 2036 { 2037 - struct ovs_zone_limit zone_limit; 2038 - 2039 - zone_limit.zone_id = OVS_ZONE_LIMIT_DEFAULT_ZONE; 2040 - zone_limit.limit = info->default_limit; 2037 + struct ovs_zone_limit zone_limit = { 2038 + .zone_id = OVS_ZONE_LIMIT_DEFAULT_ZONE, 2039 + .limit = info->default_limit, 2040 + }; 2041 2041 2042 2042 return nla_put_nohdr(reply, sizeof(zone_limit), &zone_limit); 2043 2043 }
+4 -1
net/qrtr/qrtr.c
··· 271 271 flow = kzalloc(sizeof(*flow), GFP_KERNEL); 272 272 if (flow) { 273 273 init_waitqueue_head(&flow->resume_tx); 274 - radix_tree_insert(&node->qrtr_tx_flow, key, flow); 274 + if (radix_tree_insert(&node->qrtr_tx_flow, key, flow)) { 275 + kfree(flow); 276 + flow = NULL; 277 + } 275 278 } 276 279 } 277 280 mutex_unlock(&node->qrtr_tx_lock);
+3 -1
net/rds/message.c
··· 180 180 rds_message_purge(rm); 181 181 182 182 kfree(rm); 183 + rm = NULL; 183 184 } 184 185 } 185 186 EXPORT_SYMBOL_GPL(rds_message_put); ··· 348 347 rm->data.op_nents = DIV_ROUND_UP(total_len, PAGE_SIZE); 349 348 rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs); 350 349 if (IS_ERR(rm->data.op_sg)) { 350 + void *err = ERR_CAST(rm->data.op_sg); 351 351 rds_message_put(rm); 352 - return ERR_CAST(rm->data.op_sg); 352 + return err; 353 353 } 354 354 355 355 for (i = 0; i < rm->data.op_nents; ++i) {
+1 -1
net/rds/send.c
··· 665 665 unlock_and_drop: 666 666 spin_unlock_irqrestore(&rm->m_rs_lock, flags); 667 667 rds_message_put(rm); 668 - if (was_on_sock) 668 + if (was_on_sock && rm) 669 669 rds_message_put(rm); 670 670 } 671 671
+4 -3
net/rfkill/core.c
··· 69 69 70 70 struct rfkill_int_event { 71 71 struct list_head list; 72 - struct rfkill_event ev; 72 + struct rfkill_event_ext ev; 73 73 }; 74 74 75 75 struct rfkill_data { ··· 253 253 } 254 254 #endif /* CONFIG_RFKILL_LEDS */ 255 255 256 - static void rfkill_fill_event(struct rfkill_event *ev, struct rfkill *rfkill, 256 + static void rfkill_fill_event(struct rfkill_event_ext *ev, 257 + struct rfkill *rfkill, 257 258 enum rfkill_operation op) 258 259 { 259 260 unsigned long flags; ··· 1238 1237 size_t count, loff_t *pos) 1239 1238 { 1240 1239 struct rfkill *rfkill; 1241 - struct rfkill_event ev; 1240 + struct rfkill_event_ext ev; 1242 1241 int ret; 1243 1242 1244 1243 /* we don't need the 'hard' variable but accept it */
+31 -17
net/sched/act_api.c
··· 158 158 return 0; 159 159 } 160 160 161 - int __tcf_idr_release(struct tc_action *p, bool bind, bool strict) 161 + static int __tcf_idr_release(struct tc_action *p, bool bind, bool strict) 162 162 { 163 163 int ret = 0; 164 164 ··· 184 184 185 185 return ret; 186 186 } 187 - EXPORT_SYMBOL(__tcf_idr_release); 187 + 188 + int tcf_idr_release(struct tc_action *a, bool bind) 189 + { 190 + const struct tc_action_ops *ops = a->ops; 191 + int ret; 192 + 193 + ret = __tcf_idr_release(a, bind, false); 194 + if (ret == ACT_P_DELETED) 195 + module_put(ops->owner); 196 + return ret; 197 + } 198 + EXPORT_SYMBOL(tcf_idr_release); 188 199 189 200 static size_t tcf_action_shared_attrs_size(const struct tc_action *act) 190 201 { ··· 504 493 } 505 494 506 495 p->idrinfo = idrinfo; 496 + __module_get(ops->owner); 507 497 p->ops = ops; 508 498 *a = p; 509 499 return 0; ··· 1004 992 struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp, 1005 993 struct nlattr *nla, struct nlattr *est, 1006 994 char *name, int ovr, int bind, 1007 - struct tc_action_ops *a_o, bool rtnl_held, 995 + struct tc_action_ops *a_o, int *init_res, 996 + bool rtnl_held, 1008 997 struct netlink_ext_ack *extack) 1009 998 { 1010 999 struct nla_bitfield32 flags = { 0, 0 }; ··· 1041 1028 } 1042 1029 if (err < 0) 1043 1030 goto err_out; 1031 + *init_res = err; 1044 1032 1045 1033 if (!name && tb[TCA_ACT_COOKIE]) 1046 1034 tcf_set_action_cookie(&a->act_cookie, cookie); 1047 1035 1048 1036 if (!name) 1049 1037 a->hw_stats = hw_stats; 1050 - 1051 - /* module count goes up only when brand new policy is created 1052 - * if it exists and is only bound to in a_o->init() then 1053 - * ACT_P_CREATED is not returned (a zero is). 
1054 - */ 1055 - if (err != ACT_P_CREATED) 1056 - module_put(a_o->owner); 1057 1038 1058 1039 return a; 1059 1040 ··· 1063 1056 1064 1057 int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla, 1065 1058 struct nlattr *est, char *name, int ovr, int bind, 1066 - struct tc_action *actions[], size_t *attr_size, 1059 + struct tc_action *actions[], int init_res[], size_t *attr_size, 1067 1060 bool rtnl_held, struct netlink_ext_ack *extack) 1068 1061 { 1069 1062 struct tc_action_ops *ops[TCA_ACT_MAX_PRIO] = {}; ··· 1091 1084 1092 1085 for (i = 1; i <= TCA_ACT_MAX_PRIO && tb[i]; i++) { 1093 1086 act = tcf_action_init_1(net, tp, tb[i], est, name, ovr, bind, 1094 - ops[i - 1], rtnl_held, extack); 1087 + ops[i - 1], &init_res[i - 1], rtnl_held, 1088 + extack); 1095 1089 if (IS_ERR(act)) { 1096 1090 err = PTR_ERR(act); 1097 1091 goto err; ··· 1108 1100 tcf_idr_insert_many(actions); 1109 1101 1110 1102 *attr_size = tcf_action_full_attrs_size(sz); 1111 - return i - 1; 1103 + err = i - 1; 1104 + goto err_mod; 1112 1105 1113 1106 err: 1114 1107 tcf_action_destroy(actions, bind); ··· 1506 1497 struct netlink_ext_ack *extack) 1507 1498 { 1508 1499 size_t attr_size = 0; 1509 - int loop, ret; 1500 + int loop, ret, i; 1510 1501 struct tc_action *actions[TCA_ACT_MAX_PRIO] = {}; 1502 + int init_res[TCA_ACT_MAX_PRIO] = {}; 1511 1503 1512 1504 for (loop = 0; loop < 10; loop++) { 1513 1505 ret = tcf_action_init(net, NULL, nla, NULL, NULL, ovr, 0, 1514 - actions, &attr_size, true, extack); 1506 + actions, init_res, &attr_size, true, extack); 1515 1507 if (ret != -EAGAIN) 1516 1508 break; 1517 1509 } ··· 1520 1510 if (ret < 0) 1521 1511 return ret; 1522 1512 ret = tcf_add_notify(net, n, actions, portid, attr_size, extack); 1523 - if (ovr) 1524 - tcf_action_put_many(actions); 1513 + 1514 + /* only put existing actions */ 1515 + for (i = 0; i < TCA_ACT_MAX_PRIO; i++) 1516 + if (init_res[i] == ACT_P_CREATED) 1517 + actions[i] = NULL; 1518 + tcf_action_put_many(actions); 
1525 1519 1526 1520 return ret; 1527 1521 }
+8 -8
net/sched/cls_api.c
··· 646 646 struct net_device *dev = block_cb->indr.dev; 647 647 struct Qdisc *sch = block_cb->indr.sch; 648 648 struct netlink_ext_ack extack = {}; 649 - struct flow_block_offload bo; 649 + struct flow_block_offload bo = {}; 650 650 651 651 tcf_block_offload_init(&bo, dev, sch, FLOW_BLOCK_UNBIND, 652 652 block_cb->indr.binder_type, ··· 3040 3040 { 3041 3041 #ifdef CONFIG_NET_CLS_ACT 3042 3042 { 3043 + int init_res[TCA_ACT_MAX_PRIO] = {}; 3043 3044 struct tc_action *act; 3044 3045 size_t attr_size = 0; 3045 3046 ··· 3052 3051 return PTR_ERR(a_o); 3053 3052 act = tcf_action_init_1(net, tp, tb[exts->police], 3054 3053 rate_tlv, "police", ovr, 3055 - TCA_ACT_BIND, a_o, rtnl_held, 3056 - extack); 3057 - if (IS_ERR(act)) { 3058 - module_put(a_o->owner); 3054 + TCA_ACT_BIND, a_o, init_res, 3055 + rtnl_held, extack); 3056 + module_put(a_o->owner); 3057 + if (IS_ERR(act)) 3059 3058 return PTR_ERR(act); 3060 - } 3061 3059 3062 3060 act->type = exts->type = TCA_OLD_COMPAT; 3063 3061 exts->actions[0] = act; ··· 3067 3067 3068 3068 err = tcf_action_init(net, tp, tb[exts->action], 3069 3069 rate_tlv, NULL, ovr, TCA_ACT_BIND, 3070 - exts->actions, &attr_size, 3071 - rtnl_held, extack); 3070 + exts->actions, init_res, 3071 + &attr_size, rtnl_held, extack); 3072 3072 if (err < 0) 3073 3073 return err; 3074 3074 exts->nr_actions = err;
+3 -2
net/sched/sch_htb.c
··· 1675 1675 cl->parent->common.classid, 1676 1676 NULL); 1677 1677 if (q->offload) { 1678 - if (new_q) 1678 + if (new_q) { 1679 1679 htb_set_lockdep_class_child(new_q); 1680 - htb_parent_to_leaf_offload(sch, dev_queue, new_q); 1680 + htb_parent_to_leaf_offload(sch, dev_queue, new_q); 1681 + } 1681 1682 } 1682 1683 } 1683 1684
+3
net/sched/sch_teql.c
··· 134 134 struct teql_sched_data *dat = qdisc_priv(sch); 135 135 struct teql_master *master = dat->m; 136 136 137 + if (!master) 138 + return; 139 + 137 140 prev = master->slaves; 138 141 if (prev) { 139 142 do {
+3 -4
net/sctp/ipv6.c
··· 664 664 if (!(type & IPV6_ADDR_UNICAST)) 665 665 return 0; 666 666 667 - return sp->inet.freebind || net->ipv6.sysctl.ip_nonlocal_bind || 668 - ipv6_chk_addr(net, in6, NULL, 0); 667 + return ipv6_can_nonlocal_bind(net, &sp->inet) || 668 + ipv6_chk_addr(net, in6, NULL, 0); 669 669 } 670 670 671 671 /* This function checks if the address is a valid address to be used for ··· 954 954 net = sock_net(&opt->inet.sk); 955 955 rcu_read_lock(); 956 956 dev = dev_get_by_index_rcu(net, addr->v6.sin6_scope_id); 957 - if (!dev || !(opt->inet.freebind || 958 - net->ipv6.sysctl.ip_nonlocal_bind || 957 + if (!dev || !(ipv6_can_nonlocal_bind(net, &opt->inet) || 959 958 ipv6_chk_addr(net, &addr->v6.sin6_addr, 960 959 dev, 0))) { 961 960 rcu_read_unlock();
+3 -3
net/tipc/bearer.h
··· 154 154 * care of initializing all other fields. 155 155 */ 156 156 struct tipc_bearer { 157 - void __rcu *media_ptr; /* initalized by media */ 158 - u32 mtu; /* initalized by media */ 159 - struct tipc_media_addr addr; /* initalized by media */ 157 + void __rcu *media_ptr; /* initialized by media */ 158 + u32 mtu; /* initialized by media */ 159 + struct tipc_media_addr addr; /* initialized by media */ 160 160 char name[TIPC_MAX_BEARER_NAME]; 161 161 struct tipc_media *media; 162 162 struct tipc_media_addr bcast_addr;
+2 -1
net/tipc/crypto.c
··· 1941 1941 goto rcv; 1942 1942 if (tipc_aead_clone(&tmp, aead) < 0) 1943 1943 goto rcv; 1944 + WARN_ON(!refcount_inc_not_zero(&tmp->refcnt)); 1944 1945 if (tipc_crypto_key_attach(rx, tmp, ehdr->tx_key, false) < 0) { 1945 1946 tipc_aead_free(&tmp->rcu); 1946 1947 goto rcv; 1947 1948 } 1948 1949 tipc_aead_put(aead); 1949 - aead = tipc_aead_get(tmp); 1950 + aead = tmp; 1950 1951 } 1951 1952 1952 1953 if (unlikely(err)) {
+1 -1
net/tipc/net.c
··· 89 89 * - A spin lock to protect the registry of kernel/driver users (reg.c) 90 90 * - A global spin_lock (tipc_port_lock), which only task is to ensure 91 91 * consistency where more than one port is involved in an operation, 92 - * i.e., whe a port is part of a linked list of ports. 92 + * i.e., when a port is part of a linked list of ports. 93 93 * There are two such lists; 'port_list', which is used for management, 94 94 * and 'wait_list', which is used to queue ports during congestion. 95 95 *
+1 -1
net/tipc/node.c
··· 1734 1734 } 1735 1735 1736 1736 /* tipc_node_xmit_skb(): send single buffer to destination 1737 - * Buffers sent via this functon are generally TIPC_SYSTEM_IMPORTANCE 1737 + * Buffers sent via this function are generally TIPC_SYSTEM_IMPORTANCE 1738 1738 * messages, which will not be rejected 1739 1739 * The only exception is datagram messages rerouted after secondary 1740 1740 * lookup, which are rare and safe to dispose of anyway.
+1 -1
net/tipc/socket.c
··· 1265 1265 spin_lock_bh(&inputq->lock); 1266 1266 if (skb_peek(arrvq) == skb) { 1267 1267 skb_queue_splice_tail_init(&tmpq, inputq); 1268 - kfree_skb(__skb_dequeue(arrvq)); 1268 + __skb_dequeue(arrvq); 1269 1269 } 1270 1270 spin_unlock_bh(&inputq->lock); 1271 1271 __skb_queue_purge(&tmpq);
+7 -3
net/wireless/nl80211.c
··· 5 5 * Copyright 2006-2010 Johannes Berg <johannes@sipsolutions.net> 6 6 * Copyright 2013-2014 Intel Mobile Communications GmbH 7 7 * Copyright 2015-2017 Intel Deutschland GmbH 8 - * Copyright (C) 2018-2020 Intel Corporation 8 + * Copyright (C) 2018-2021 Intel Corporation 9 9 */ 10 10 11 11 #include <linux/if.h> ··· 229 229 unsigned int len = nla_len(attr); 230 230 const struct element *elem; 231 231 const struct ieee80211_mgmt *mgmt = (void *)data; 232 - bool s1g_bcn = ieee80211_is_s1g_beacon(mgmt->frame_control); 233 232 unsigned int fixedlen, hdrlen; 233 + bool s1g_bcn; 234 234 235 + if (len < offsetofend(typeof(*mgmt), frame_control)) 236 + goto err; 237 + 238 + s1g_bcn = ieee80211_is_s1g_beacon(mgmt->frame_control); 235 239 if (s1g_bcn) { 236 240 fixedlen = offsetof(struct ieee80211_ext, 237 241 u.s1g_beacon.variable); ··· 5489 5485 rdev, info->attrs[NL80211_ATTR_UNSOL_BCAST_PROBE_RESP], 5490 5486 &params); 5491 5487 if (err) 5492 - return err; 5488 + goto out; 5493 5489 } 5494 5490 5495 5491 nl80211_calculate_ap_params(&params);
+8 -6
net/wireless/scan.c
··· 2352 2352 return NULL; 2353 2353 2354 2354 if (ext) { 2355 - struct ieee80211_s1g_bcn_compat_ie *compat; 2356 - u8 *ie; 2355 + const struct ieee80211_s1g_bcn_compat_ie *compat; 2356 + const struct element *elem; 2357 2357 2358 - ie = (void *)cfg80211_find_ie(WLAN_EID_S1G_BCN_COMPAT, 2359 - variable, ielen); 2360 - if (!ie) 2358 + elem = cfg80211_find_elem(WLAN_EID_S1G_BCN_COMPAT, 2359 + variable, ielen); 2360 + if (!elem) 2361 2361 return NULL; 2362 - compat = (void *)(ie + 2); 2362 + if (elem->datalen < sizeof(*compat)) 2363 + return NULL; 2364 + compat = (void *)elem->data; 2363 2365 bssid = ext->u.s1g_beacon.sa; 2364 2366 capability = le16_to_cpu(compat->compat_info); 2365 2367 beacon_int = le16_to_cpu(compat->beacon_int);
+1 -1
net/wireless/sme.c
··· 529 529 cfg80211_sme_free(wdev); 530 530 } 531 531 532 - if (WARN_ON(wdev->conn)) 532 + if (wdev->conn) 533 533 return -EINPROGRESS; 534 534 535 535 wdev->conn = kzalloc(sizeof(*wdev->conn), GFP_KERNEL);
+9 -3
net/xfrm/xfrm_compat.c
··· 216 216 case XFRM_MSG_GETSADINFO: 217 217 case XFRM_MSG_GETSPDINFO: 218 218 default: 219 - WARN_ONCE(1, "unsupported nlmsg_type %d", nlh_src->nlmsg_type); 219 + pr_warn_once("unsupported nlmsg_type %d\n", nlh_src->nlmsg_type); 220 220 return ERR_PTR(-EOPNOTSUPP); 221 221 } 222 222 ··· 277 277 return xfrm_nla_cpy(dst, src, nla_len(src)); 278 278 default: 279 279 BUILD_BUG_ON(XFRMA_MAX != XFRMA_IF_ID); 280 - WARN_ONCE(1, "unsupported nla_type %d", src->nla_type); 280 + pr_warn_once("unsupported nla_type %d\n", src->nla_type); 281 281 return -EOPNOTSUPP; 282 282 } 283 283 } ··· 315 315 struct sk_buff *new = NULL; 316 316 int err; 317 317 318 - if (WARN_ON_ONCE(type >= ARRAY_SIZE(xfrm_msg_min))) 318 + if (type >= ARRAY_SIZE(xfrm_msg_min)) { 319 + pr_warn_once("unsupported nlmsg_type %d\n", nlh_src->nlmsg_type); 319 320 return -EOPNOTSUPP; 321 + } 320 322 321 323 if (skb_shinfo(skb)->frag_list == NULL) { 322 324 new = alloc_skb(skb->len + skb_tailroom(skb), GFP_ATOMIC); ··· 380 378 struct nlmsghdr *nlmsg = dst; 381 379 struct nlattr *nla; 382 380 381 + /* xfrm_user_rcv_msg_compat() relies on the fact that 32-bit messages 382 + * are the same len or shorter than 64-bit ones. 383 + * 32-bit translation that is bigger than 64-bit original is unexpected. 384 + */ 383 385 if (WARN_ON_ONCE(copy_len > payload)) 384 386 copy_len = payload; 385 387
-2
net/xfrm/xfrm_device.c
··· 134 134 return skb; 135 135 } 136 136 137 - xo->flags |= XFRM_XMIT; 138 - 139 137 if (skb_is_gso(skb) && unlikely(x->xso.dev != dev)) { 140 138 struct sk_buff *segs; 141 139
+3
net/xfrm/xfrm_interface.c
··· 306 306 307 307 icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu); 308 308 } else { 309 + if (!(ip_hdr(skb)->frag_off & htons(IP_DF))) 310 + goto xmit; 309 311 icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, 310 312 htonl(mtu)); 311 313 } ··· 316 314 return -EMSGSIZE; 317 315 } 318 316 317 + xmit: 319 318 xfrmi_scrub_packet(skb, !net_eq(xi->net, dev_net(dev))); 320 319 skb_dst_set(skb, dst); 321 320 skb->dev = tdev;
+18 -5
net/xfrm/xfrm_output.c
··· 503 503 return err; 504 504 } 505 505 506 - int xfrm_output_resume(struct sk_buff *skb, int err) 506 + int xfrm_output_resume(struct sock *sk, struct sk_buff *skb, int err) 507 507 { 508 508 struct net *net = xs_net(skb_dst(skb)->xfrm); 509 509 510 510 while (likely((err = xfrm_output_one(skb, err)) == 0)) { 511 511 nf_reset_ct(skb); 512 512 513 - err = skb_dst(skb)->ops->local_out(net, skb->sk, skb); 513 + err = skb_dst(skb)->ops->local_out(net, sk, skb); 514 514 if (unlikely(err != 1)) 515 515 goto out; 516 516 517 517 if (!skb_dst(skb)->xfrm) 518 - return dst_output(net, skb->sk, skb); 518 + return dst_output(net, sk, skb); 519 519 520 520 err = nf_hook(skb_dst(skb)->ops->family, 521 - NF_INET_POST_ROUTING, net, skb->sk, skb, 521 + NF_INET_POST_ROUTING, net, sk, skb, 522 522 NULL, skb_dst(skb)->dev, xfrm_output2); 523 523 if (unlikely(err != 1)) 524 524 goto out; ··· 534 534 535 535 static int xfrm_output2(struct net *net, struct sock *sk, struct sk_buff *skb) 536 536 { 537 - return xfrm_output_resume(skb, 1); 537 + return xfrm_output_resume(sk, skb, 1); 538 538 } 539 539 540 540 static int xfrm_output_gso(struct net *net, struct sock *sk, struct sk_buff *skb) ··· 660 660 { 661 661 int err; 662 662 663 + if (x->outer_mode.encap == XFRM_MODE_BEET && 664 + ip_is_fragment(ip_hdr(skb))) { 665 + net_warn_ratelimited("BEET mode doesn't support inner IPv4 fragments\n"); 666 + return -EAFNOSUPPORT; 667 + } 668 + 663 669 err = xfrm4_tunnel_check_size(skb); 664 670 if (err) 665 671 return err; ··· 711 705 static int xfrm6_extract_output(struct xfrm_state *x, struct sk_buff *skb) 712 706 { 713 707 #if IS_ENABLED(CONFIG_IPV6) 708 + unsigned int ptr = 0; 714 709 int err; 710 + 711 + if (x->outer_mode.encap == XFRM_MODE_BEET && 712 + ipv6_find_hdr(skb, &ptr, NEXTHDR_FRAGMENT, NULL, NULL) >= 0) { 713 + net_warn_ratelimited("BEET mode doesn't support inner IPv6 fragments\n"); 714 + return -EAFNOSUPPORT; 715 + } 715 716 716 717 err = xfrm6_tunnel_check_size(skb); 717 718 if (err)
+6 -5
net/xfrm/xfrm_state.c
··· 44 44 */ 45 45 46 46 static unsigned int xfrm_state_hashmax __read_mostly = 1 * 1024 * 1024; 47 - static __read_mostly seqcount_t xfrm_state_hash_generation = SEQCNT_ZERO(xfrm_state_hash_generation); 48 47 static struct kmem_cache *xfrm_state_cache __ro_after_init; 49 48 50 49 static DECLARE_WORK(xfrm_state_gc_work, xfrm_state_gc_task); ··· 139 140 } 140 141 141 142 spin_lock_bh(&net->xfrm.xfrm_state_lock); 142 - write_seqcount_begin(&xfrm_state_hash_generation); 143 + write_seqcount_begin(&net->xfrm.xfrm_state_hash_generation); 143 144 144 145 nhashmask = (nsize / sizeof(struct hlist_head)) - 1U; 145 146 odst = xfrm_state_deref_prot(net->xfrm.state_bydst, net); ··· 155 156 rcu_assign_pointer(net->xfrm.state_byspi, nspi); 156 157 net->xfrm.state_hmask = nhashmask; 157 158 158 - write_seqcount_end(&xfrm_state_hash_generation); 159 + write_seqcount_end(&net->xfrm.xfrm_state_hash_generation); 159 160 spin_unlock_bh(&net->xfrm.xfrm_state_lock); 160 161 161 162 osize = (ohashmask + 1) * sizeof(struct hlist_head); ··· 1062 1063 1063 1064 to_put = NULL; 1064 1065 1065 - sequence = read_seqcount_begin(&xfrm_state_hash_generation); 1066 + sequence = read_seqcount_begin(&net->xfrm.xfrm_state_hash_generation); 1066 1067 1067 1068 rcu_read_lock(); 1068 1069 h = xfrm_dst_hash(net, daddr, saddr, tmpl->reqid, encap_family); ··· 1175 1176 if (to_put) 1176 1177 xfrm_state_put(to_put); 1177 1178 1178 - if (read_seqcount_retry(&xfrm_state_hash_generation, sequence)) { 1179 + if (read_seqcount_retry(&net->xfrm.xfrm_state_hash_generation, sequence)) { 1179 1180 *err = -EAGAIN; 1180 1181 if (x) { 1181 1182 xfrm_state_put(x); ··· 2665 2666 net->xfrm.state_num = 0; 2666 2667 INIT_WORK(&net->xfrm.state_hash_work, xfrm_hash_resize); 2667 2668 spin_lock_init(&net->xfrm.xfrm_state_lock); 2669 + seqcount_spinlock_init(&net->xfrm.xfrm_state_hash_generation, 2670 + &net->xfrm.xfrm_state_lock); 2668 2671 return 0; 2669 2672 2670 2673 out_byspi:
+1 -1
tools/lib/bpf/ringbuf.c
··· 227 227 if ((len & BPF_RINGBUF_DISCARD_BIT) == 0) { 228 228 sample = (void *)len_ptr + BPF_RINGBUF_HDR_SZ; 229 229 err = r->sample_cb(r->ctx, sample, len); 230 - if (err) { 230 + if (err < 0) { 231 231 /* update consumer pos and bail out */ 232 232 smp_store_release(r->consumer_pos, 233 233 cons_pos);
+37 -20
tools/lib/bpf/xsk.c
··· 59 59 int fd; 60 60 int refcount; 61 61 struct list_head ctx_list; 62 + bool rx_ring_setup_done; 63 + bool tx_ring_setup_done; 62 64 }; 63 65 64 66 struct xsk_ctx { ··· 745 743 return NULL; 746 744 } 747 745 748 - static void xsk_put_ctx(struct xsk_ctx *ctx) 746 + static void xsk_put_ctx(struct xsk_ctx *ctx, bool unmap) 749 747 { 750 748 struct xsk_umem *umem = ctx->umem; 751 749 struct xdp_mmap_offsets off; 752 750 int err; 753 751 754 - if (--ctx->refcount == 0) { 755 - err = xsk_get_mmap_offsets(umem->fd, &off); 756 - if (!err) { 757 - munmap(ctx->fill->ring - off.fr.desc, 758 - off.fr.desc + umem->config.fill_size * 759 - sizeof(__u64)); 760 - munmap(ctx->comp->ring - off.cr.desc, 761 - off.cr.desc + umem->config.comp_size * 762 - sizeof(__u64)); 763 - } 752 + if (--ctx->refcount) 753 + return; 764 754 765 - list_del(&ctx->list); 766 - free(ctx); 767 - } 755 + if (!unmap) 756 + goto out_free; 757 + 758 + err = xsk_get_mmap_offsets(umem->fd, &off); 759 + if (err) 760 + goto out_free; 761 + 762 + munmap(ctx->fill->ring - off.fr.desc, off.fr.desc + umem->config.fill_size * 763 + sizeof(__u64)); 764 + munmap(ctx->comp->ring - off.cr.desc, off.cr.desc + umem->config.comp_size * 765 + sizeof(__u64)); 766 + 767 + out_free: 768 + list_del(&ctx->list); 769 + free(ctx); 768 770 } 769 771 770 772 static struct xsk_ctx *xsk_create_ctx(struct xsk_socket *xsk, ··· 803 797 memcpy(ctx->ifname, ifname, IFNAMSIZ - 1); 804 798 ctx->ifname[IFNAMSIZ - 1] = '\0'; 805 799 806 - umem->fill_save = NULL; 807 - umem->comp_save = NULL; 808 800 ctx->fill = fill; 809 801 ctx->comp = comp; 810 802 list_add(&ctx->list, &umem->ctx_list); ··· 858 854 struct xsk_socket *xsk; 859 855 struct xsk_ctx *ctx; 860 856 int err, ifindex; 857 + bool unmap = umem->fill_save != fill; 858 + bool rx_setup_done = false, tx_setup_done = false; 861 859 862 860 if (!umem || !xsk_ptr || !(rx || tx)) 863 861 return -EFAULT; ··· 887 881 } 888 882 } else { 889 883 xsk->fd = umem->fd; 884 + rx_setup_done = umem->rx_ring_setup_done; 885 + tx_setup_done = umem->tx_ring_setup_done; 890 886 } 891 887 892 888 ctx = xsk_get_ctx(umem, ifindex, queue_id); ··· 907 899 } 908 900 xsk->ctx = ctx; 909 901 910 - if (rx) { 902 + if (rx && !rx_setup_done) { 911 903 err = setsockopt(xsk->fd, SOL_XDP, XDP_RX_RING, 912 904 &xsk->config.rx_size, 913 905 sizeof(xsk->config.rx_size)); ··· 915 907 err = -errno; 916 908 goto out_put_ctx; 917 909 } 910 + if (xsk->fd == umem->fd) 911 + umem->rx_ring_setup_done = true; 918 912 } 919 - if (tx) { 913 + if (tx && !tx_setup_done) { 920 914 err = setsockopt(xsk->fd, SOL_XDP, XDP_TX_RING, 921 915 &xsk->config.tx_size, 922 916 sizeof(xsk->config.tx_size)); ··· 926 916 err = -errno; 927 917 goto out_put_ctx; 928 918 } 919 + if (xsk->fd == umem->fd) 920 + umem->tx_ring_setup_done = true; 929 921 } 930 922 931 923 err = xsk_get_mmap_offsets(xsk->fd, &off); ··· 1006 994 } 1007 995 1008 996 *xsk_ptr = xsk; 997 + umem->fill_save = NULL; 998 + umem->comp_save = NULL; 1009 999 return 0; 1010 1000 1011 1001 out_mmap_tx: ··· 1019 1005 munmap(rx_map, off.rx.desc + 1020 1006 xsk->config.rx_size * sizeof(struct xdp_desc)); 1021 1007 out_put_ctx: 1022 - xsk_put_ctx(ctx); 1008 + xsk_put_ctx(ctx, unmap); 1023 1009 out_socket: 1024 1010 if (--umem->refcount) 1025 1011 close(xsk->fd); ··· 1033 1019 struct xsk_ring_cons *rx, struct xsk_ring_prod *tx, 1034 1020 const struct xsk_socket_config *usr_config) 1035 1021 { 1022 + if (!umem) 1023 + return -EFAULT; 1024 + 1036 1025 return xsk_socket__create_shared(xsk_ptr, ifname, queue_id, umem, 1037 1026 rx, tx, umem->fill_save, 1038 1027 umem->comp_save, usr_config); ··· 1085 1068 } 1086 1069 } 1087 1070 1088 - xsk_put_ctx(ctx); 1071 + xsk_put_ctx(ctx, true); 1090 1073 1091 1074 umem->refcount--; 1092 1075 /* Do not close an fd that also has an associated umem connected
+44
tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c
··· 6 6 #include <test_progs.h> 7 7 #include "bpf_dctcp.skel.h" 8 8 #include "bpf_cubic.skel.h" 9 + #include "bpf_tcp_nogpl.skel.h" 9 10 10 11 #define min(a, b) ((a) < (b) ? (a) : (b)) 11 12 ··· 228 227 bpf_dctcp__destroy(dctcp_skel); 229 228 } 230 229 230 + static char *err_str; 231 + static bool found; 232 + 233 + static int libbpf_debug_print(enum libbpf_print_level level, 234 + const char *format, va_list args) 235 + { 236 + char *log_buf; 237 + 238 + if (level != LIBBPF_WARN || 239 + strcmp(format, "libbpf: \n%s\n")) { 240 + vprintf(format, args); 241 + return 0; 242 + } 243 + 244 + log_buf = va_arg(args, char *); 245 + if (!log_buf) 246 + goto out; 247 + if (err_str && strstr(log_buf, err_str) != NULL) 248 + found = true; 249 + out: 250 + printf(format, log_buf); 251 + return 0; 252 + } 253 + 254 + static void test_invalid_license(void) 255 + { 256 + libbpf_print_fn_t old_print_fn; 257 + struct bpf_tcp_nogpl *skel; 258 + 259 + err_str = "struct ops programs must have a GPL compatible license"; 260 + found = false; 261 + old_print_fn = libbpf_set_print(libbpf_debug_print); 262 + 263 + skel = bpf_tcp_nogpl__open_and_load(); 264 + ASSERT_NULL(skel, "bpf_tcp_nogpl"); 265 + ASSERT_EQ(found, true, "expected_err_msg"); 266 + 267 + bpf_tcp_nogpl__destroy(skel); 268 + libbpf_set_print(old_print_fn); 269 + } 270 + 231 271 void test_bpf_tcp_ca(void) 232 272 { 233 273 if (test__start_subtest("dctcp")) 234 274 test_dctcp(); 235 275 if (test__start_subtest("cubic")) 236 276 test_cubic(); 277 + if (test__start_subtest("invalid_license")) 278 + test_invalid_license(); 237 279 }
+19
tools/testing/selftests/bpf/progs/bpf_tcp_nogpl.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/bpf.h> 4 + #include <linux/types.h> 5 + #include <bpf/bpf_helpers.h> 6 + #include <bpf/bpf_tracing.h> 7 + #include "bpf_tcp_helpers.h" 8 + 9 + char _license[] SEC("license") = "X"; 10 + 11 + void BPF_STRUCT_OPS(nogpltcp_init, struct sock *sk) 12 + { 13 + } 14 + 15 + SEC(".struct_ops") 16 + struct tcp_congestion_ops bpf_nogpltcp = { 17 + .init = (void *)nogpltcp_init, 18 + .name = "bpf_nogpltcp", 19 + };
+12 -1
tools/testing/selftests/net/forwarding/vxlan_bridge_1d.sh
··· 657 657 { 658 658 # In accordance with INET_ECN_decapsulate() 659 659 __test_ecn_decap 00 00 0x00 660 + __test_ecn_decap 00 01 0x00 661 + __test_ecn_decap 00 02 0x00 662 + # 00 03 is tested in test_ecn_decap_error() 663 + __test_ecn_decap 01 00 0x01 660 664 __test_ecn_decap 01 01 0x01 661 - __test_ecn_decap 02 01 0x01 665 + __test_ecn_decap 01 02 0x01 662 666 __test_ecn_decap 01 03 0x03 667 + __test_ecn_decap 02 00 0x02 668 + __test_ecn_decap 02 01 0x01 669 + __test_ecn_decap 02 02 0x02 663 670 __test_ecn_decap 02 03 0x03 671 + __test_ecn_decap 03 00 0x03 672 + __test_ecn_decap 03 01 0x03 673 + __test_ecn_decap 03 02 0x03 674 + __test_ecn_decap 03 03 0x03 664 675 test_ecn_decap_error 665 676 } 666 677