Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from bpf and netfilter.

Current release - new code bugs:

- af_packet: make sure to pull the MAC header, avoid skb panic in GSO

- ptp_clockmatrix: fix inverted logic in is_single_shot()

- netfilter: flowtable: fix missing FLOWI_FLAG_ANYSRC flag

- dt-bindings: net: adin: fix adi,phy-output-clock description syntax

- wifi: iwlwifi: pcie: rename CAUSE macro, avoid MIPS build warning

Previous releases - regressions:

- Revert "net: af_key: add check for pfkey_broadcast in function
pfkey_process"

- tcp: fix tcp_mtup_probe_success vs wrong snd_cwnd

- nf_tables: disallow non-stateful expression in sets earlier

- nft_limit: clone packet limits' cost value

- nf_tables: double hook unregistration in netns path

- ping6: fix ping -6 with interface name

Previous releases - always broken:

- sched: fix memory barriers to prevent skbs from getting stuck in
lockless qdiscs

- neigh: set lower cap for neigh_managed_work rearming, avoid
constantly scheduling the probe work

- bpf: fix probe read error on big endian in ___bpf_prog_run()

- amt: memory leak and error handling fixes

Misc:

- ipv6: expand & rename accept_unsolicited_na to accept_untracked_na"

* tag 'net-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (80 commits)
net/af_packet: make sure to pull mac header
net: add debug info to __skb_pull()
net: CONFIG_DEBUG_NET depends on CONFIG_NET
stmmac: intel: Add RPL-P PCI ID
net: stmmac: use dev_err_probe() for reporting mdio bus registration failure
tipc: check attribute length for bearer name
ice: fix access-beyond-end in the switch code
nfp: remove padding in nfp_nfdk_tx_desc
ax25: Fix ax25 session cleanup problems
net: usb: qmi_wwan: Add support for Cinterion MV31 with new baseline
sfc/siena: fix wrong tx channel offset with efx_separate_tx_channels
sfc/siena: fix considering that all channels have TX queues
socket: Don't use u8 type in uapi socket.h
net/sched: act_api: fix error code in tcf_ct_flow_table_fill_tuple_ipv6()
net: ping6: Fix ping -6 with interface name
macsec: fix UAF bug for real_dev
octeontx2-af: fix error code in is_valid_offset()
wifi: mac80211: fix use-after-free in chanctx code
bonding: guard ns_targets by CONFIG_IPV6
tcp: tcp_rtx_synack() can be called from process context
...

+665 -611
+3 -2
Documentation/devicetree/bindings/net/adi,adin.yaml
···
 title: Analog Devices ADIN1200/ADIN1300 PHY

 maintainers:
-  - Alexandru Ardelean <alexandru.ardelean@analog.com>
+  - Alexandru Tachici <alexandru.tachici@analog.com>

 description: |
   Bindings for Analog Devices Industrial Ethernet PHYs
···
       default: 8

   adi,phy-output-clock:
-    description: Select clock output on GP_CLK pin. Two clocks are available:
+    description: |
+      Select clock output on GP_CLK pin. Two clocks are available:
       A 25MHz reference and a free-running 125MHz.
       The phy can alternatively automatically switch between the reference and
       the 125MHz clocks based on its internal state.
+9 -14
Documentation/networking/ip-sysctl.rst
···

    By default this is turned off.

-accept_unsolicited_na - BOOLEAN
-    Add a new neighbour cache entry in STALE state for routers on receiving an
-    unsolicited neighbour advertisement with target link-layer address option
-    specified. This is as per router-side behavior documented in RFC9131.
-    This has lower precedence than drop_unsolicited_na.
+accept_untracked_na - BOOLEAN
+    Add a new neighbour cache entry in STALE state for routers on receiving a
+    neighbour advertisement (either solicited or unsolicited) with target
+    link-layer address option specified if no neighbour entry is already
+    present for the advertised IPv6 address. Without this knob, NAs received
+    for untracked addresses (absent in neighbour cache) are silently ignored.

-    ==== ====== ====== ==============================================
-    drop accept fwding behaviour
-    ---- ------ ------ ----------------------------------------------
-    1    X      X      Drop NA packet and don't pass up the stack
-    0    0      X      Pass NA packet up the stack, don't update NC
-    0    1      0      Pass NA packet up the stack, don't update NC
-    0    1      1      Pass NA packet up the stack, and add a STALE
-                       NC entry
-    ==== ====== ====== ==============================================
+    This is as per router-side behaviour documented in RFC9131.
+
+    This has lower precedence than drop_unsolicited_na.

    This will optimize the return path for the initial off-link communication
    that is initiated by a directly connected host, by ensuring that
+3 -3
drivers/net/amt.c
···
     "AMT_MSG_MEMBERSHIP_QUERY",
     "AMT_MSG_MEMBERSHIP_UPDATE",
     "AMT_MSG_MULTICAST_DATA",
-    "AMT_MSG_TEARDOWM",
+    "AMT_MSG_TEARDOWN",
 };

 static char *action_str[] = {
···
         }
     }

-    return false;
+    return true;

 report:
     iph = ip_hdr(skb);
···
     amt = rcu_dereference_sk_user_data(sk);
     if (!amt) {
         err = true;
-        goto out;
+        goto drop;
     }

     skb->dev = amt->dev;
+2
drivers/net/bonding/bond_main.c
···
     strscpy_pad(params->primary, primary, sizeof(params->primary));

     memcpy(params->arp_targets, arp_target, sizeof(arp_target));
+#if IS_ENABLED(CONFIG_IPV6)
     memset(params->ns_targets, 0, sizeof(struct in6_addr) * BOND_MAX_NS_TARGETS);
+#endif

     return 0;
 }
-5
drivers/net/bonding/bond_netlink.c
···

             addr6 = nla_get_in6_addr(attr);

-            if (ipv6_addr_type(&addr6) & IPV6_ADDR_LINKLOCAL) {
-                NL_SET_ERR_MSG(extack, "Invalid IPv6 addr6");
-                return -EINVAL;
-            }
-
             bond_opt_initextra(&newval, &addr6, sizeof(addr6));
             err = __bond_opt_set(bond, BOND_OPT_NS_TARGETS,
                          &newval);
+6 -4
drivers/net/bonding/bond_options.c
···
 static int bond_option_arp_ip_target_rem(struct bonding *bond, __be32 target);
 static int bond_option_arp_ip_targets_set(struct bonding *bond,
                       const struct bond_opt_value *newval);
-#if IS_ENABLED(CONFIG_IPV6)
 static int bond_option_ns_ip6_targets_set(struct bonding *bond,
                       const struct bond_opt_value *newval);
-#endif
 static int bond_option_arp_validate_set(struct bonding *bond,
                     const struct bond_opt_value *newval);
 static int bond_option_arp_all_targets_set(struct bonding *bond,
···
         .flags = BOND_OPTFLAG_RAWVAL,
         .set = bond_option_arp_ip_targets_set
     },
-#if IS_ENABLED(CONFIG_IPV6)
     [BOND_OPT_NS_TARGETS] = {
         .id = BOND_OPT_NS_TARGETS,
         .name = "ns_ip6_target",
···
         .flags = BOND_OPTFLAG_RAWVAL,
         .set = bond_option_ns_ip6_targets_set
     },
-#endif
     [BOND_OPT_DOWNDELAY] = {
         .id = BOND_OPT_DOWNDELAY,
         .name = "downdelay",
···
         _bond_options_ns_ip6_target_set(bond, index, target, jiffies);

     return 0;
+}
+#else
+static int bond_option_ns_ip6_targets_set(struct bonding *bond,
+                      const struct bond_opt_value *newval)
+{
+    return -EPERM;
 }
 #endif
+15
drivers/net/bonding/bond_procfs.c
···
                 printed = 1;
             }
             seq_printf(seq, "\n");
+
+#if IS_ENABLED(CONFIG_IPV6)
+            printed = 0;
+            seq_printf(seq, "NS IPv6 target/s (xx::xx form):");
+
+            for (i = 0; (i < BOND_MAX_NS_TARGETS); i++) {
+                if (ipv6_addr_any(&bond->params.ns_targets[i]))
+                    break;
+                if (printed)
+                    seq_printf(seq, ",");
+                seq_printf(seq, " %pI6c", &bond->params.ns_targets[i]);
+                printed = 1;
+            }
+            seq_printf(seq, "\n");
+#endif
         }

         if (BOND_MODE(bond) == BOND_MODE_8023AD) {
+1
drivers/net/dsa/mv88e6xxx/chip.c
···
      */
     child = of_get_child_by_name(np, "mdio");
     err = mv88e6xxx_mdio_register(chip, child, false);
+    of_node_put(child);
     if (err)
         return err;

+3 -28
drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
···
-/* Copyright 2008 - 2016 Freescale Semiconductor Inc.
+// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0-or-later
+/*
+ * Copyright 2008 - 2016 Freescale Semiconductor Inc.
  * Copyright 2020 NXP
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in the
- *       documentation and/or other materials provided with the distribution.
- *     * Neither the name of Freescale Semiconductor nor the
- *       names of its contributors may be used to endorse or promote products
- *       derived from this software without specific prior written permission.
- *
- * ALTERNATIVELY, this software may be distributed under the terms of the
- * GNU General Public License ("GPL") as published by the Free Software
- * Foundation, either version 2 of that License or (at your option) any
- * later version.
- *
- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
- * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */

 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+3 -28
drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
···
-/* Copyright 2008 - 2016 Freescale Semiconductor Inc.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in the
- *       documentation and/or other materials provided with the distribution.
- *     * Neither the name of Freescale Semiconductor nor the
- *       names of its contributors may be used to endorse or promote products
- *       derived from this software without specific prior written permission.
- *
- * ALTERNATIVELY, this software may be distributed under the terms of the
- * GNU General Public License ("GPL") as published by the Free Software
- * Foundation, either version 2 of that License or (at your option) any
- * later version.
- *
- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
- * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0-or-later */
+/*
+ * Copyright 2008 - 2016 Freescale Semiconductor Inc.
  */

 #ifndef __DPAA_H
+3 -29
drivers/net/ethernet/freescale/dpaa/dpaa_eth_sysfs.c
···
-/* Copyright 2008-2016 Freescale Semiconductor Inc.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in the
- *       documentation and/or other materials provided with the distribution.
- *     * Neither the name of Freescale Semiconductor nor the
- *       names of its contributors may be used to endorse or promote products
- *       derived from this software without specific prior written permission.
- *
- *
- * ALTERNATIVELY, this software may be distributed under the terms of the
- * GNU General Public License ("GPL") as published by the Free Software
- * Foundation, either version 2 of that License or (at your option) any
- * later version.
- *
- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
- * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0-or-later
+/*
+ * Copyright 2008 - 2016 Freescale Semiconductor Inc.
  */

 #include <linux/init.h>
+3 -29
drivers/net/ethernet/freescale/dpaa/dpaa_eth_trace.h
···
-/* Copyright 2013-2015 Freescale Semiconductor Inc.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in the
- *       documentation and/or other materials provided with the distribution.
- *     * Neither the name of Freescale Semiconductor nor the
- *       names of its contributors may be used to endorse or promote products
- *       derived from this software without specific prior written permission.
- *
- *
- * ALTERNATIVELY, this software may be distributed under the terms of the
- * GNU General Public License ("GPL") as published by the Free Software
- * Foundation, either version 2 of that License or (at your option) any
- * later version.
- *
- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
- * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0-or-later */
+/*
+ * Copyright 2013-2015 Freescale Semiconductor Inc.
  */

 #undef TRACE_SYSTEM
+3 -29
drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
···
-/* Copyright 2008-2016 Freescale Semiconductor, Inc.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in the
- *       documentation and/or other materials provided with the distribution.
- *     * Neither the name of Freescale Semiconductor nor the
- *       names of its contributors may be used to endorse or promote products
- *       derived from this software without specific prior written permission.
- *
- *
- * ALTERNATIVELY, this software may be distributed under the terms of the
- * GNU General Public License ("GPL") as published by the Free Software
- * Foundation, either version 2 of that License or (at your option) any
- * later version.
- *
- * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
- * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0-or-later
+/*
+ * Copyright 2008 - 2016 Freescale Semiconductor Inc.
  */

 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+2 -2
drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c
···
     return 0;

 err_mdiobus_reg:
-    pci_release_mem_regions(pdev);
+    pci_release_region(pdev, 0);
 err_pci_mem_reg:
     pci_disable_device(pdev);
 err_pci_enable:
···
     mdiobus_unregister(bus);
     mdio_priv = bus->priv;
     iounmap(mdio_priv->hw->port);
-    pci_release_mem_regions(pdev);
+    pci_release_region(pdev, 0);
     pci_disable_device(pdev);
 }
-5
drivers/net/ethernet/intel/ice/Makefile
···
 ice-$(CONFIG_RFS_ACCEL) += ice_arfs.o
 ice-$(CONFIG_XDP_SOCKETS) += ice_xsk.o
 ice-$(CONFIG_ICE_SWITCHDEV) += ice_eswitch.o
-
-# FIXME: temporarily silence -Warray-bounds on non W=1+ builds
-ifndef KBUILD_EXTRA_WARN
-CFLAGS_ice_switch.o += -Wno-array-bounds
-endif
+26 -32
drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
···
     __le32 addr_low;
 };

+/* Add switch rule response:
+ * Content of return buffer is same as the input buffer. The status field and
+ * LUT index are updated as part of the response
+ */
+struct ice_aqc_sw_rules_elem_hdr {
+    __le16 type; /* Switch rule type, one of T_... */
+#define ICE_AQC_SW_RULES_T_LKUP_RX          0x0
+#define ICE_AQC_SW_RULES_T_LKUP_TX          0x1
+#define ICE_AQC_SW_RULES_T_LG_ACT           0x2
+#define ICE_AQC_SW_RULES_T_VSI_LIST_SET     0x3
+#define ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR   0x4
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_SET   0x5
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR 0x6
+    __le16 status;
+} __packed __aligned(sizeof(__le16));
+
 /* Add/Update/Get/Remove lookup Rx/Tx command/response entry
  * This structures describes the lookup rules and associated actions. "index"
  * is returned as part of a response to a successful Add command, and can be
  * used to identify the rule for Update/Get/Remove commands.
  */
 struct ice_sw_rule_lkup_rx_tx {
+    struct ice_aqc_sw_rules_elem_hdr hdr;
+
     __le16 recipe_id;
 #define ICE_SW_RECIPE_LOGICAL_PORT_FWD 10
     /* Source port for LOOKUP_RX and source VSI in case of LOOKUP_TX */
···
      * lookup-type
      */
     __le16 hdr_len;
-    u8 hdr[];
-};
+    u8 hdr_data[];
+} __packed __aligned(sizeof(__le16));

 /* Add/Update/Remove large action command/response entry
  * "index" is returned as part of a response to a successful Add command, and
  * can be used to identify the action for Update/Get/Remove commands.
  */
 struct ice_sw_rule_lg_act {
+    struct ice_aqc_sw_rules_elem_hdr hdr;
+
     __le16 index; /* Index in large action table */
     __le16 size;
     /* Max number of large actions */
···
 #define ICE_LG_ACT_STAT_COUNT_S 3
 #define ICE_LG_ACT_STAT_COUNT_M (0x7F << ICE_LG_ACT_STAT_COUNT_S)
     __le32 act[]; /* array of size for actions */
-};
+} __packed __aligned(sizeof(__le16));

 /* Add/Update/Remove VSI list command/response entry
  * "index" is returned as part of a response to a successful Add command, and
  * can be used to identify the VSI list for Update/Get/Remove commands.
  */
 struct ice_sw_rule_vsi_list {
+    struct ice_aqc_sw_rules_elem_hdr hdr;
+
     __le16 index; /* Index of VSI/Prune list */
     __le16 number_vsi;
     __le16 vsi[]; /* Array of number_vsi VSI numbers */
-};
-
-/* Query VSI list command/response entry */
-struct ice_sw_rule_vsi_list_query {
-    __le16 index;
-    DECLARE_BITMAP(vsi_list, ICE_MAX_VSI);
-} __packed;
-
-/* Add switch rule response:
- * Content of return buffer is same as the input buffer. The status field and
- * LUT index are updated as part of the response
- */
-struct ice_aqc_sw_rules_elem {
-    __le16 type; /* Switch rule type, one of T_... */
-#define ICE_AQC_SW_RULES_T_LKUP_RX          0x0
-#define ICE_AQC_SW_RULES_T_LKUP_TX          0x1
-#define ICE_AQC_SW_RULES_T_LG_ACT           0x2
-#define ICE_AQC_SW_RULES_T_VSI_LIST_SET     0x3
-#define ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR   0x4
-#define ICE_AQC_SW_RULES_T_PRUNE_LIST_SET   0x5
-#define ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR 0x6
-    __le16 status;
-    union {
-        struct ice_sw_rule_lkup_rx_tx lkup_tx_rx;
-        struct ice_sw_rule_lg_act lg_act;
-        struct ice_sw_rule_vsi_list vsi_list;
-        struct ice_sw_rule_vsi_list_query vsi_list_query;
-    } __packed pdata;
-};
+} __packed __aligned(sizeof(__le16));

 /* Query PFC Mode (direct 0x0302)
  * Set PFC Mode (direct 0x0303)
+89 -99
drivers/net/ethernet/intel/ice/ice_switch.c
··· 1282 1282 ICE_PKT_PROFILE(tcp, 0), 1283 1283 }; 1284 1284 1285 - #define ICE_SW_RULE_RX_TX_ETH_HDR_SIZE \ 1286 - (offsetof(struct ice_aqc_sw_rules_elem, pdata.lkup_tx_rx.hdr) + \ 1287 - (DUMMY_ETH_HDR_LEN * \ 1288 - sizeof(((struct ice_sw_rule_lkup_rx_tx *)0)->hdr[0]))) 1289 - #define ICE_SW_RULE_RX_TX_NO_HDR_SIZE \ 1290 - (offsetof(struct ice_aqc_sw_rules_elem, pdata.lkup_tx_rx.hdr)) 1291 - #define ICE_SW_RULE_LG_ACT_SIZE(n) \ 1292 - (offsetof(struct ice_aqc_sw_rules_elem, pdata.lg_act.act) + \ 1293 - ((n) * sizeof(((struct ice_sw_rule_lg_act *)0)->act[0]))) 1294 - #define ICE_SW_RULE_VSI_LIST_SIZE(n) \ 1295 - (offsetof(struct ice_aqc_sw_rules_elem, pdata.vsi_list.vsi) + \ 1296 - ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi[0]))) 1285 + #define ICE_SW_RULE_RX_TX_HDR_SIZE(s, l) struct_size((s), hdr_data, (l)) 1286 + #define ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s) \ 1287 + ICE_SW_RULE_RX_TX_HDR_SIZE((s), DUMMY_ETH_HDR_LEN) 1288 + #define ICE_SW_RULE_RX_TX_NO_HDR_SIZE(s) \ 1289 + ICE_SW_RULE_RX_TX_HDR_SIZE((s), 0) 1290 + #define ICE_SW_RULE_LG_ACT_SIZE(s, n) struct_size((s), act, (n)) 1291 + #define ICE_SW_RULE_VSI_LIST_SIZE(s, n) struct_size((s), vsi, (n)) 1297 1292 1298 1293 /* this is a recipe to profile association bitmap */ 1299 1294 static DECLARE_BITMAP(recipe_to_profile[ICE_MAX_NUM_RECIPES], ··· 2371 2376 */ 2372 2377 static void 2373 2378 ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info, 2374 - struct ice_aqc_sw_rules_elem *s_rule, enum ice_adminq_opc opc) 2379 + struct ice_sw_rule_lkup_rx_tx *s_rule, 2380 + enum ice_adminq_opc opc) 2375 2381 { 2376 2382 u16 vlan_id = ICE_MAX_VLAN_ID + 1; 2377 2383 u16 vlan_tpid = ETH_P_8021Q; ··· 2384 2388 u8 q_rgn; 2385 2389 2386 2390 if (opc == ice_aqc_opc_remove_sw_rules) { 2387 - s_rule->pdata.lkup_tx_rx.act = 0; 2388 - s_rule->pdata.lkup_tx_rx.index = 2389 - cpu_to_le16(f_info->fltr_rule_id); 2390 - s_rule->pdata.lkup_tx_rx.hdr_len = 0; 2391 + s_rule->act = 0; 2392 + s_rule->index = 
cpu_to_le16(f_info->fltr_rule_id); 2393 + s_rule->hdr_len = 0; 2391 2394 return; 2392 2395 } 2393 2396 2394 2397 eth_hdr_sz = sizeof(dummy_eth_header); 2395 - eth_hdr = s_rule->pdata.lkup_tx_rx.hdr; 2398 + eth_hdr = s_rule->hdr_data; 2396 2399 2397 2400 /* initialize the ether header with a dummy header */ 2398 2401 memcpy(eth_hdr, dummy_eth_header, eth_hdr_sz); ··· 2476 2481 break; 2477 2482 } 2478 2483 2479 - s_rule->type = (f_info->flag & ICE_FLTR_RX) ? 2484 + s_rule->hdr.type = (f_info->flag & ICE_FLTR_RX) ? 2480 2485 cpu_to_le16(ICE_AQC_SW_RULES_T_LKUP_RX) : 2481 2486 cpu_to_le16(ICE_AQC_SW_RULES_T_LKUP_TX); 2482 2487 2483 2488 /* Recipe set depending on lookup type */ 2484 - s_rule->pdata.lkup_tx_rx.recipe_id = cpu_to_le16(f_info->lkup_type); 2485 - s_rule->pdata.lkup_tx_rx.src = cpu_to_le16(f_info->src); 2486 - s_rule->pdata.lkup_tx_rx.act = cpu_to_le32(act); 2489 + s_rule->recipe_id = cpu_to_le16(f_info->lkup_type); 2490 + s_rule->src = cpu_to_le16(f_info->src); 2491 + s_rule->act = cpu_to_le32(act); 2487 2492 2488 2493 if (daddr) 2489 2494 ether_addr_copy(eth_hdr + ICE_ETH_DA_OFFSET, daddr); ··· 2497 2502 2498 2503 /* Create the switch rule with the final dummy Ethernet header */ 2499 2504 if (opc != ice_aqc_opc_update_sw_rules) 2500 - s_rule->pdata.lkup_tx_rx.hdr_len = cpu_to_le16(eth_hdr_sz); 2505 + s_rule->hdr_len = cpu_to_le16(eth_hdr_sz); 2501 2506 } 2502 2507 2503 2508 /** ··· 2514 2519 ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent, 2515 2520 u16 sw_marker, u16 l_id) 2516 2521 { 2517 - struct ice_aqc_sw_rules_elem *lg_act, *rx_tx; 2522 + struct ice_sw_rule_lkup_rx_tx *rx_tx; 2523 + struct ice_sw_rule_lg_act *lg_act; 2518 2524 /* For software marker we need 3 large actions 2519 2525 * 1. FWD action: FWD TO VSI or VSI LIST 2520 2526 * 2. GENERIC VALUE action to hold the profile ID ··· 2536 2540 * 1. Large Action 2537 2541 * 2. 
Look up Tx Rx 2538 2542 */ 2539 - lg_act_size = (u16)ICE_SW_RULE_LG_ACT_SIZE(num_lg_acts); 2540 - rules_size = lg_act_size + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE; 2543 + lg_act_size = (u16)ICE_SW_RULE_LG_ACT_SIZE(lg_act, num_lg_acts); 2544 + rules_size = lg_act_size + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(rx_tx); 2541 2545 lg_act = devm_kzalloc(ice_hw_to_dev(hw), rules_size, GFP_KERNEL); 2542 2546 if (!lg_act) 2543 2547 return -ENOMEM; 2544 2548 2545 - rx_tx = (struct ice_aqc_sw_rules_elem *)((u8 *)lg_act + lg_act_size); 2549 + rx_tx = (typeof(rx_tx))((u8 *)lg_act + lg_act_size); 2546 2550 2547 2551 /* Fill in the first switch rule i.e. large action */ 2548 - lg_act->type = cpu_to_le16(ICE_AQC_SW_RULES_T_LG_ACT); 2549 - lg_act->pdata.lg_act.index = cpu_to_le16(l_id); 2550 - lg_act->pdata.lg_act.size = cpu_to_le16(num_lg_acts); 2552 + lg_act->hdr.type = cpu_to_le16(ICE_AQC_SW_RULES_T_LG_ACT); 2553 + lg_act->index = cpu_to_le16(l_id); 2554 + lg_act->size = cpu_to_le16(num_lg_acts); 2551 2555 2552 2556 /* First action VSI forwarding or VSI list forwarding depending on how 2553 2557 * many VSIs ··· 2559 2563 act |= (id << ICE_LG_ACT_VSI_LIST_ID_S) & ICE_LG_ACT_VSI_LIST_ID_M; 2560 2564 if (m_ent->vsi_count > 1) 2561 2565 act |= ICE_LG_ACT_VSI_LIST; 2562 - lg_act->pdata.lg_act.act[0] = cpu_to_le32(act); 2566 + lg_act->act[0] = cpu_to_le32(act); 2563 2567 2564 2568 /* Second action descriptor type */ 2565 2569 act = ICE_LG_ACT_GENERIC; 2566 2570 2567 2571 act |= (1 << ICE_LG_ACT_GENERIC_VALUE_S) & ICE_LG_ACT_GENERIC_VALUE_M; 2568 - lg_act->pdata.lg_act.act[1] = cpu_to_le32(act); 2572 + lg_act->act[1] = cpu_to_le32(act); 2569 2573 2570 2574 act = (ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX << 2571 2575 ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_OFFSET_M; ··· 2575 2579 act |= (sw_marker << ICE_LG_ACT_GENERIC_VALUE_S) & 2576 2580 ICE_LG_ACT_GENERIC_VALUE_M; 2577 2581 2578 - lg_act->pdata.lg_act.act[2] = cpu_to_le32(act); 2582 + lg_act->act[2] = cpu_to_le32(act); 2579 2583 2580 2584 
/* call the fill switch rule to fill the lookup Tx Rx structure */ 2581 2585 ice_fill_sw_rule(hw, &m_ent->fltr_info, rx_tx, 2582 2586 ice_aqc_opc_update_sw_rules); 2583 2587 2584 2588 /* Update the action to point to the large action ID */ 2585 - rx_tx->pdata.lkup_tx_rx.act = 2586 - cpu_to_le32(ICE_SINGLE_ACT_PTR | 2587 - ((l_id << ICE_SINGLE_ACT_PTR_VAL_S) & 2588 - ICE_SINGLE_ACT_PTR_VAL_M)); 2589 + rx_tx->act = cpu_to_le32(ICE_SINGLE_ACT_PTR | 2590 + ((l_id << ICE_SINGLE_ACT_PTR_VAL_S) & 2591 + ICE_SINGLE_ACT_PTR_VAL_M)); 2589 2592 2590 2593 /* Use the filter rule ID of the previously created rule with single 2591 2594 * act. Once the update happens, hardware will treat this as large 2592 2595 * action 2593 2596 */ 2594 - rx_tx->pdata.lkup_tx_rx.index = 2595 - cpu_to_le16(m_ent->fltr_info.fltr_rule_id); 2597 + rx_tx->index = cpu_to_le16(m_ent->fltr_info.fltr_rule_id); 2596 2598 2597 2599 status = ice_aq_sw_rules(hw, lg_act, rules_size, 2, 2598 2600 ice_aqc_opc_update_sw_rules, NULL); ··· 2652 2658 u16 vsi_list_id, bool remove, enum ice_adminq_opc opc, 2653 2659 enum ice_sw_lkup_type lkup_type) 2654 2660 { 2655 - struct ice_aqc_sw_rules_elem *s_rule; 2661 + struct ice_sw_rule_vsi_list *s_rule; 2656 2662 u16 s_rule_size; 2657 2663 u16 rule_type; 2658 2664 int status; ··· 2675 2681 else 2676 2682 return -EINVAL; 2677 2683 2678 - s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(num_vsi); 2684 + s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(s_rule, num_vsi); 2679 2685 s_rule = devm_kzalloc(ice_hw_to_dev(hw), s_rule_size, GFP_KERNEL); 2680 2686 if (!s_rule) 2681 2687 return -ENOMEM; ··· 2685 2691 goto exit; 2686 2692 } 2687 2693 /* AQ call requires hw_vsi_id(s) */ 2688 - s_rule->pdata.vsi_list.vsi[i] = 2694 + s_rule->vsi[i] = 2689 2695 cpu_to_le16(ice_get_hw_vsi_num(hw, vsi_handle_arr[i])); 2690 2696 } 2691 2697 2692 - s_rule->type = cpu_to_le16(rule_type); 2693 - s_rule->pdata.vsi_list.number_vsi = cpu_to_le16(num_vsi); 2694 - s_rule->pdata.vsi_list.index = 
cpu_to_le16(vsi_list_id); 2698 + s_rule->hdr.type = cpu_to_le16(rule_type); 2699 + s_rule->number_vsi = cpu_to_le16(num_vsi); 2700 + s_rule->index = cpu_to_le16(vsi_list_id); 2695 2701 2696 2702 status = ice_aq_sw_rules(hw, s_rule, s_rule_size, 1, opc, NULL); 2697 2703 ··· 2739 2745 struct ice_fltr_list_entry *f_entry) 2740 2746 { 2741 2747 struct ice_fltr_mgmt_list_entry *fm_entry; 2742 - struct ice_aqc_sw_rules_elem *s_rule; 2748 + struct ice_sw_rule_lkup_rx_tx *s_rule; 2743 2749 enum ice_sw_lkup_type l_type; 2744 2750 struct ice_sw_recipe *recp; 2745 2751 int status; 2746 2752 2747 2753 s_rule = devm_kzalloc(ice_hw_to_dev(hw), 2748 - ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, GFP_KERNEL); 2754 + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s_rule), 2755 + GFP_KERNEL); 2749 2756 if (!s_rule) 2750 2757 return -ENOMEM; 2751 2758 fm_entry = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*fm_entry), ··· 2767 2772 ice_fill_sw_rule(hw, &fm_entry->fltr_info, s_rule, 2768 2773 ice_aqc_opc_add_sw_rules); 2769 2774 2770 - status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1, 2775 + status = ice_aq_sw_rules(hw, s_rule, 2776 + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s_rule), 1, 2771 2777 ice_aqc_opc_add_sw_rules, NULL); 2772 2778 if (status) { 2773 2779 devm_kfree(ice_hw_to_dev(hw), fm_entry); 2774 2780 goto ice_create_pkt_fwd_rule_exit; 2775 2781 } 2776 2782 2777 - f_entry->fltr_info.fltr_rule_id = 2778 - le16_to_cpu(s_rule->pdata.lkup_tx_rx.index); 2779 - fm_entry->fltr_info.fltr_rule_id = 2780 - le16_to_cpu(s_rule->pdata.lkup_tx_rx.index); 2783 + f_entry->fltr_info.fltr_rule_id = le16_to_cpu(s_rule->index); 2784 + fm_entry->fltr_info.fltr_rule_id = le16_to_cpu(s_rule->index); 2781 2785 2782 2786 /* The book keeping entries will get removed when base driver 2783 2787 * calls remove filter AQ command ··· 2801 2807 static int 2802 2808 ice_update_pkt_fwd_rule(struct ice_hw *hw, struct ice_fltr_info *f_info) 2803 2809 { 2804 - struct ice_aqc_sw_rules_elem *s_rule; 2810 + struct 
ice_sw_rule_lkup_rx_tx *s_rule; 2805 2811 int status; 2806 2812 2807 2813 s_rule = devm_kzalloc(ice_hw_to_dev(hw), 2808 - ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, GFP_KERNEL); 2814 + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s_rule), 2815 + GFP_KERNEL); 2809 2816 if (!s_rule) 2810 2817 return -ENOMEM; 2811 2818 2812 2819 ice_fill_sw_rule(hw, f_info, s_rule, ice_aqc_opc_update_sw_rules); 2813 2820 2814 - s_rule->pdata.lkup_tx_rx.index = cpu_to_le16(f_info->fltr_rule_id); 2821 + s_rule->index = cpu_to_le16(f_info->fltr_rule_id); 2815 2822 2816 2823 /* Update switch rule with new rule set to forward VSI list */ 2817 - status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1, 2824 + status = ice_aq_sw_rules(hw, s_rule, 2825 + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s_rule), 1, 2818 2826 ice_aqc_opc_update_sw_rules, NULL); 2819 2827 2820 2828 devm_kfree(ice_hw_to_dev(hw), s_rule); ··· 3100 3104 ice_remove_vsi_list_rule(struct ice_hw *hw, u16 vsi_list_id, 3101 3105 enum ice_sw_lkup_type lkup_type) 3102 3106 { 3103 - struct ice_aqc_sw_rules_elem *s_rule; 3107 + struct ice_sw_rule_vsi_list *s_rule; 3104 3108 u16 s_rule_size; 3105 3109 int status; 3106 3110 3107 - s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(0); 3111 + s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(s_rule, 0); 3108 3112 s_rule = devm_kzalloc(ice_hw_to_dev(hw), s_rule_size, GFP_KERNEL); 3109 3113 if (!s_rule) 3110 3114 return -ENOMEM; 3111 3115 3112 - s_rule->type = cpu_to_le16(ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR); 3113 - s_rule->pdata.vsi_list.index = cpu_to_le16(vsi_list_id); 3116 + s_rule->hdr.type = cpu_to_le16(ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR); 3117 + s_rule->index = cpu_to_le16(vsi_list_id); 3114 3118 3115 3119 /* Free the vsi_list resource that we allocated. It is assumed that the 3116 3120 * list is empty at this point. 
··· 3270 3274 3271 3275 if (remove_rule) { 3272 3276 /* Remove the lookup rule */ 3273 - struct ice_aqc_sw_rules_elem *s_rule; 3277 + struct ice_sw_rule_lkup_rx_tx *s_rule; 3274 3278 3275 3279 s_rule = devm_kzalloc(ice_hw_to_dev(hw), 3276 - ICE_SW_RULE_RX_TX_NO_HDR_SIZE, 3280 + ICE_SW_RULE_RX_TX_NO_HDR_SIZE(s_rule), 3277 3281 GFP_KERNEL); 3278 3282 if (!s_rule) { 3279 3283 status = -ENOMEM; ··· 3284 3288 ice_aqc_opc_remove_sw_rules); 3285 3289 3286 3290 status = ice_aq_sw_rules(hw, s_rule, 3287 - ICE_SW_RULE_RX_TX_NO_HDR_SIZE, 1, 3288 - ice_aqc_opc_remove_sw_rules, NULL); 3291 + ICE_SW_RULE_RX_TX_NO_HDR_SIZE(s_rule), 3292 + 1, ice_aqc_opc_remove_sw_rules, NULL); 3289 3293 3290 3294 /* Remove a book keeping from the list */ 3291 3295 devm_kfree(ice_hw_to_dev(hw), s_rule); ··· 3433 3437 */ 3434 3438 int ice_add_mac(struct ice_hw *hw, struct list_head *m_list) 3435 3439 { 3436 - struct ice_aqc_sw_rules_elem *s_rule, *r_iter; 3440 + struct ice_sw_rule_lkup_rx_tx *s_rule, *r_iter; 3437 3441 struct ice_fltr_list_entry *m_list_itr; 3438 3442 struct list_head *rule_head; 3439 3443 u16 total_elem_left, s_rule_size; ··· 3497 3501 rule_head = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rules; 3498 3502 3499 3503 /* Allocate switch rule buffer for the bulk update for unicast */ 3500 - s_rule_size = ICE_SW_RULE_RX_TX_ETH_HDR_SIZE; 3504 + s_rule_size = ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s_rule); 3501 3505 s_rule = devm_kcalloc(ice_hw_to_dev(hw), num_unicast, s_rule_size, 3502 3506 GFP_KERNEL); 3503 3507 if (!s_rule) { ··· 3513 3517 if (is_unicast_ether_addr(mac_addr)) { 3514 3518 ice_fill_sw_rule(hw, &m_list_itr->fltr_info, r_iter, 3515 3519 ice_aqc_opc_add_sw_rules); 3516 - r_iter = (struct ice_aqc_sw_rules_elem *) 3517 - ((u8 *)r_iter + s_rule_size); 3520 + r_iter = (typeof(s_rule))((u8 *)r_iter + s_rule_size); 3518 3521 } 3519 3522 } 3520 3523 ··· 3522 3527 /* Call AQ switch rule in AQ_MAX chunk */ 3523 3528 for (total_elem_left = num_unicast; total_elem_left > 0; 3524 3529 
total_elem_left -= elem_sent) { 3525 - struct ice_aqc_sw_rules_elem *entry = r_iter; 3530 + struct ice_sw_rule_lkup_rx_tx *entry = r_iter; 3526 3531 3527 3532 elem_sent = min_t(u8, total_elem_left, 3528 3533 (ICE_AQ_MAX_BUF_LEN / s_rule_size)); ··· 3531 3536 NULL); 3532 3537 if (status) 3533 3538 goto ice_add_mac_exit; 3534 - r_iter = (struct ice_aqc_sw_rules_elem *) 3539 + r_iter = (typeof(s_rule)) 3535 3540 ((u8 *)r_iter + (elem_sent * s_rule_size)); 3536 3541 } 3537 3542 ··· 3543 3548 struct ice_fltr_mgmt_list_entry *fm_entry; 3544 3549 3545 3550 if (is_unicast_ether_addr(mac_addr)) { 3546 - f_info->fltr_rule_id = 3547 - le16_to_cpu(r_iter->pdata.lkup_tx_rx.index); 3551 + f_info->fltr_rule_id = le16_to_cpu(r_iter->index); 3548 3552 f_info->fltr_act = ICE_FWD_TO_VSI; 3549 3553 /* Create an entry to track this MAC address */ 3550 3554 fm_entry = devm_kzalloc(ice_hw_to_dev(hw), ··· 3559 3565 */ 3560 3566 3561 3567 list_add(&fm_entry->list_entry, rule_head); 3562 - r_iter = (struct ice_aqc_sw_rules_elem *) 3563 - ((u8 *)r_iter + s_rule_size); 3568 + r_iter = (typeof(s_rule))((u8 *)r_iter + s_rule_size); 3564 3569 } 3565 3570 } 3566 3571 ··· 3858 3865 */ 3859 3866 int ice_cfg_dflt_vsi(struct ice_hw *hw, u16 vsi_handle, bool set, u8 direction) 3860 3867 { 3861 - struct ice_aqc_sw_rules_elem *s_rule; 3868 + struct ice_sw_rule_lkup_rx_tx *s_rule; 3862 3869 struct ice_fltr_info f_info; 3863 3870 enum ice_adminq_opc opcode; 3864 3871 u16 s_rule_size; ··· 3869 3876 return -EINVAL; 3870 3877 hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle); 3871 3878 3872 - s_rule_size = set ? ICE_SW_RULE_RX_TX_ETH_HDR_SIZE : 3873 - ICE_SW_RULE_RX_TX_NO_HDR_SIZE; 3879 + s_rule_size = set ? 
ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s_rule) : 3880 + ICE_SW_RULE_RX_TX_NO_HDR_SIZE(s_rule); 3874 3881 3875 3882 s_rule = devm_kzalloc(ice_hw_to_dev(hw), s_rule_size, GFP_KERNEL); 3876 3883 if (!s_rule) ··· 3908 3915 if (status || !(f_info.flag & ICE_FLTR_TX_RX)) 3909 3916 goto out; 3910 3917 if (set) { 3911 - u16 index = le16_to_cpu(s_rule->pdata.lkup_tx_rx.index); 3918 + u16 index = le16_to_cpu(s_rule->index); 3912 3919 3913 3920 if (f_info.flag & ICE_FLTR_TX) { 3914 3921 hw->port_info->dflt_tx_vsi_num = hw_vsi_id; ··· 5634 5641 */ 5635 5642 static int 5636 5643 ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, 5637 - struct ice_aqc_sw_rules_elem *s_rule, 5644 + struct ice_sw_rule_lkup_rx_tx *s_rule, 5638 5645 const struct ice_dummy_pkt_profile *profile) 5639 5646 { 5640 5647 u8 *pkt; ··· 5643 5650 /* Start with a packet with a pre-defined/dummy content. Then, fill 5644 5651 * in the header values to be looked up or matched. 5645 5652 */ 5646 - pkt = s_rule->pdata.lkup_tx_rx.hdr; 5653 + pkt = s_rule->hdr_data; 5647 5654 5648 5655 memcpy(pkt, profile->pkt, profile->pkt_len); 5649 5656 ··· 5733 5740 } 5734 5741 } 5735 5742 5736 - s_rule->pdata.lkup_tx_rx.hdr_len = cpu_to_le16(profile->pkt_len); 5743 + s_rule->hdr_len = cpu_to_le16(profile->pkt_len); 5737 5744 5738 5745 return 0; 5739 5746 } ··· 5956 5963 struct ice_rule_query_data *added_entry) 5957 5964 { 5958 5965 struct ice_adv_fltr_mgmt_list_entry *m_entry, *adv_fltr = NULL; 5959 - struct ice_aqc_sw_rules_elem *s_rule = NULL; 5966 + struct ice_sw_rule_lkup_rx_tx *s_rule = NULL; 5960 5967 const struct ice_dummy_pkt_profile *profile; 5961 5968 u16 rid = 0, i, rule_buf_sz, vsi_handle; 5962 5969 struct list_head *rule_head; ··· 6033 6040 } 6034 6041 return status; 6035 6042 } 6036 - rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + profile->pkt_len; 6043 + rule_buf_sz = ICE_SW_RULE_RX_TX_HDR_SIZE(s_rule, profile->pkt_len); 6037 6044 s_rule = kzalloc(rule_buf_sz, GFP_KERNEL); 6038 6045 if (!s_rule) 
6039 6046 return -ENOMEM; ··· 6082 6089 * by caller) 6083 6090 */ 6084 6091 if (rinfo->rx) { 6085 - s_rule->type = cpu_to_le16(ICE_AQC_SW_RULES_T_LKUP_RX); 6086 - s_rule->pdata.lkup_tx_rx.src = 6087 - cpu_to_le16(hw->port_info->lport); 6092 + s_rule->hdr.type = cpu_to_le16(ICE_AQC_SW_RULES_T_LKUP_RX); 6093 + s_rule->src = cpu_to_le16(hw->port_info->lport); 6088 6094 } else { 6089 - s_rule->type = cpu_to_le16(ICE_AQC_SW_RULES_T_LKUP_TX); 6090 - s_rule->pdata.lkup_tx_rx.src = cpu_to_le16(rinfo->sw_act.src); 6095 + s_rule->hdr.type = cpu_to_le16(ICE_AQC_SW_RULES_T_LKUP_TX); 6096 + s_rule->src = cpu_to_le16(rinfo->sw_act.src); 6091 6097 } 6092 6098 6093 - s_rule->pdata.lkup_tx_rx.recipe_id = cpu_to_le16(rid); 6094 - s_rule->pdata.lkup_tx_rx.act = cpu_to_le32(act); 6099 + s_rule->recipe_id = cpu_to_le16(rid); 6100 + s_rule->act = cpu_to_le32(act); 6095 6101 6096 6102 status = ice_fill_adv_dummy_packet(lkups, lkups_cnt, s_rule, profile); 6097 6103 if (status) ··· 6099 6107 if (rinfo->tun_type != ICE_NON_TUN && 6100 6108 rinfo->tun_type != ICE_SW_TUN_AND_NON_TUN) { 6101 6109 status = ice_fill_adv_packet_tun(hw, rinfo->tun_type, 6102 - s_rule->pdata.lkup_tx_rx.hdr, 6110 + s_rule->hdr_data, 6103 6111 profile->offsets); 6104 6112 if (status) 6105 6113 goto err_ice_add_adv_rule; ··· 6127 6135 6128 6136 adv_fltr->lkups_cnt = lkups_cnt; 6129 6137 adv_fltr->rule_info = *rinfo; 6130 - adv_fltr->rule_info.fltr_rule_id = 6131 - le16_to_cpu(s_rule->pdata.lkup_tx_rx.index); 6138 + adv_fltr->rule_info.fltr_rule_id = le16_to_cpu(s_rule->index); 6132 6139 sw = hw->switch_info; 6133 6140 sw->recp_list[rid].adv_rule = true; 6134 6141 rule_head = &sw->recp_list[rid].filt_rules; ··· 6375 6384 } 6376 6385 mutex_unlock(rule_lock); 6377 6386 if (remove_rule) { 6378 - struct ice_aqc_sw_rules_elem *s_rule; 6387 + struct ice_sw_rule_lkup_rx_tx *s_rule; 6379 6388 u16 rule_buf_sz; 6380 6389 6381 - rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE; 6390 + rule_buf_sz = 
ICE_SW_RULE_RX_TX_NO_HDR_SIZE(s_rule); 6382 6391 s_rule = kzalloc(rule_buf_sz, GFP_KERNEL); 6383 6392 if (!s_rule) 6384 6393 return -ENOMEM; 6385 - s_rule->pdata.lkup_tx_rx.act = 0; 6386 - s_rule->pdata.lkup_tx_rx.index = 6387 - cpu_to_le16(list_elem->rule_info.fltr_rule_id); 6388 - s_rule->pdata.lkup_tx_rx.hdr_len = 0; 6394 + s_rule->act = 0; 6395 + s_rule->index = cpu_to_le16(list_elem->rule_info.fltr_rule_id); 6396 + s_rule->hdr_len = 0; 6389 6397 status = ice_aq_sw_rules(hw, (struct ice_aqc_sw_rules *)s_rule, 6390 6398 rule_buf_sz, 1, 6391 6399 ice_aqc_opc_remove_sw_rules, NULL);
-3
drivers/net/ethernet/intel/ice/ice_switch.h
···
 #define ICE_PROFID_IPV6_GTPU_TEID		46
 #define ICE_PROFID_IPV6_GTPU_IPV6_TCP_INNER	70
 
-#define ICE_SW_RULE_RX_TX_NO_HDR_SIZE \
-	(offsetof(struct ice_aqc_sw_rules_elem, pdata.lkup_tx_rx.hdr))
-
 /* VSI context structure for add/get/update/free operations */
 struct ice_vsi_ctx {
 	u16 vsi_num;
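The fixed-size constant deleted above is replaced throughout the diff by macros that take the rule pointer, so the size is computed from the concrete rule type, whose packet-header bytes live in a trailing flexible array. A standalone sketch of that sizing pattern; the struct layout and the `RULE_SIZE` macro here are illustrative, not the exact ice driver definitions:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative rule header and lookup-rule layout: fixed fields first,
 * then a flexible array member holding the dummy packet header bytes. */
struct rule_hdr {
	uint16_t type;
};

struct lkup_rx_tx_rule {
	struct rule_hdr hdr;
	uint16_t recipe_id;
	uint16_t src;
	uint32_t act;
	uint16_t index;
	uint16_t hdr_len;
	uint8_t hdr_data[];	/* flexible array: packet header follows */
};

/* Size of one rule carrying hdr_sz bytes of packet header: offset of
 * the flexible array plus the payload length. */
#define RULE_SIZE(hdr_sz) \
	(offsetof(struct lkup_rx_tx_rule, hdr_data) + (size_t)(hdr_sz))

size_t rule_size(size_t hdr_sz)
{
	return RULE_SIZE(hdr_sz);
}
```

A header-less rule (`hdr_sz == 0`) then costs exactly the fixed prefix, which is what the old `..._NO_HDR_SIZE` constant hard-coded against the union type.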
+1 -1
drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
···
 
 	blkaddr = validate_and_get_cpt_blkaddr(req->blkaddr);
 	if (blkaddr < 0)
-		return blkaddr;
+		return false;
 
 	/* Registers that can be accessed from PF/VF */
 	if ((offset & 0xFF000) == CPT_AF_LFX_CTL(0) ||
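The one-line fix above matters because the surrounding function returns `bool`: returning a negative `blkaddr` converts to `true`, accidentally approving the register access. A minimal demonstration of the hazard (function names are made up for the example):

```c
#include <stdbool.h>

/* Why "return blkaddr;" is wrong in a bool function: any nonzero
 * value, including a negative error code, converts to true. */
static bool buggy_check(int blkaddr)
{
	if (blkaddr < 0)
		return blkaddr;	/* -ENODEV etc. becomes true! */
	return true;
}

static bool fixed_check(int blkaddr)
{
	if (blkaddr < 0)
		return false;	/* explicitly reject an invalid blkaddr */
	return true;
}
```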
+3
drivers/net/ethernet/mediatek/mtk_eth_soc.c
···
 	struct ethtool_rx_flow_spec *fsp =
 		(struct ethtool_rx_flow_spec *)&cmd->fs;
 
+	if (fsp->location >= ARRAY_SIZE(mac->hwlro_ip))
+		return -EINVAL;
+
 	/* only tcp dst ipv4 is meaningful, others are meaningless */
 	fsp->flow_type = TCP_V4_FLOW;
 	fsp->h_u.tcp_ip4_spec.ip4dst = ntohl(mac->hwlro_ip[fsp->location]);
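The check added above is the standard pattern of validating a user-supplied index against the array size before indexing. A standalone sketch; the table size, function name, and the literal `-22` for `-EINVAL` are stand-ins for the kernel context:

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_LRO_IP 4	/* stands in for ARRAY_SIZE(mac->hwlro_ip) */

/* Reject an out-of-range user-controlled location before reading the
 * table, mirroring the fsp->location check in the fix above. */
int get_lro_ip(const uint32_t table[MAX_LRO_IP], size_t location,
	       uint32_t *out)
{
	if (location >= MAX_LRO_IP)
		return -22;	/* -EINVAL */
	*out = table[location];
	return 0;
}
```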
+23 -11
drivers/net/ethernet/mellanox/mlx5/core/dev.c
···
 	return 1;
 }
 
+static void *pci_get_other_drvdata(struct device *this, struct device *other)
+{
+	if (this->driver != other->driver)
+		return NULL;
+
+	return pci_get_drvdata(to_pci_dev(other));
+}
+
 static int next_phys_dev(struct device *dev, const void *data)
 {
-	struct mlx5_adev *madev = container_of(dev, struct mlx5_adev, adev.dev);
-	struct mlx5_core_dev *mdev = madev->mdev;
+	struct mlx5_core_dev *mdev, *this = (struct mlx5_core_dev *)data;
+
+	mdev = pci_get_other_drvdata(this->device, dev);
+	if (!mdev)
+		return 0;
 
 	return _next_phys_dev(mdev, data);
 }
 
 static int next_phys_dev_lag(struct device *dev, const void *data)
 {
-	struct mlx5_adev *madev = container_of(dev, struct mlx5_adev, adev.dev);
-	struct mlx5_core_dev *mdev = madev->mdev;
+	struct mlx5_core_dev *mdev, *this = (struct mlx5_core_dev *)data;
+
+	mdev = pci_get_other_drvdata(this->device, dev);
+	if (!mdev)
+		return 0;
 
 	if (!MLX5_CAP_GEN(mdev, vport_group_manager) ||
 	    !MLX5_CAP_GEN(mdev, lag_master) ||
···
 static struct mlx5_core_dev *mlx5_get_next_dev(struct mlx5_core_dev *dev,
 					       int (*match)(struct device *dev, const void *data))
 {
-	struct auxiliary_device *adev;
-	struct mlx5_adev *madev;
+	struct device *next;
 
 	if (!mlx5_core_is_pf(dev))
 		return NULL;
 
-	adev = auxiliary_find_device(NULL, dev, match);
-	if (!adev)
+	next = bus_find_device(&pci_bus_type, NULL, dev, match);
+	if (!next)
 		return NULL;
 
-	madev = container_of(adev, struct mlx5_adev, adev);
-	put_device(&adev->dev);
-	return madev->mdev;
+	put_device(next);
+	return pci_get_drvdata(to_pci_dev(next));
 }
 
 /* Must be called with intf_mutex held */
+4
drivers/net/ethernet/mellanox/mlx5/core/en.h
···
 	u8  wq_type;
 	u32 rqn;
 	struct mlx5_core_dev *mdev;
+	struct mlx5e_channel *channel;
 	u32  umr_mkey;
 	struct mlx5e_dma_info wqe_overflow;
···
 
 int mlx5e_open_locked(struct net_device *netdev);
 int mlx5e_close_locked(struct net_device *netdev);
+
+void mlx5e_trigger_napi_icosq(struct mlx5e_channel *c);
+void mlx5e_trigger_napi_sched(struct napi_struct *napi);
 
 int mlx5e_open_channels(struct mlx5e_priv *priv,
 			struct mlx5e_channels *chs);
+2
drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
···
 enum {
 	MLX5E_TC_FT_LEVEL = 0,
 	MLX5E_TC_TTC_FT_LEVEL,
+	MLX5E_TC_MISS_LEVEL,
 };
 
 struct mlx5e_tc_table {
···
 	 */
 	struct mutex t_lock;
 	struct mlx5_flow_table *t;
+	struct mlx5_flow_table *miss_t;
 	struct mlx5_fs_chains *chains;
 	struct mlx5e_post_act *post_act;
 
+1
drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
···
 	if (test_bit(MLX5E_PTP_STATE_RX, c->state)) {
 		mlx5e_ptp_rx_set_fs(c->priv);
 		mlx5e_activate_rq(&c->rq);
+		mlx5e_trigger_napi_sched(&c->napi);
 	}
 }
+6
drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
···
 		xskrq->stats->recover++;
 	}
 
+	mlx5e_trigger_napi_icosq(icosq->channel);
+
 	mutex_unlock(&icosq->channel->icosq_recovery_lock);
 
 	return 0;
···
 	clear_bit(MLX5E_RQ_STATE_RECOVERING, &rq->state);
 	mlx5e_activate_rq(rq);
 	rq->stats->recover++;
+	if (rq->channel)
+		mlx5e_trigger_napi_icosq(rq->channel);
+	else
+		mlx5e_trigger_napi_sched(rq->cq.napi);
 	return 0;
 out:
 	clear_bit(MLX5E_RQ_STATE_RECOVERING, &rq->state);
+11 -8
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
···
 			     struct mlx5_flow_attr *attr,
 			     struct flow_rule *flow_rule,
 			     struct mlx5e_mod_hdr_handle **mh,
-			     u8 zone_restore_id, bool nat)
+			     u8 zone_restore_id, bool nat_table, bool has_nat)
 {
 	DECLARE_MOD_HDR_ACTS_ACTIONS(actions_arr, MLX5_CT_MIN_MOD_ACTS);
 	DECLARE_MOD_HDR_ACTS(mod_acts, actions_arr);
···
 					  &attr->ct_attr.ct_labels_id);
 	if (err)
 		return -EOPNOTSUPP;
-	if (nat) {
-		err = mlx5_tc_ct_entry_create_nat(ct_priv, flow_rule,
-						  &mod_acts);
-		if (err)
-			goto err_mapping;
+	if (nat_table) {
+		if (has_nat) {
+			err = mlx5_tc_ct_entry_create_nat(ct_priv, flow_rule, &mod_acts);
+			if (err)
+				goto err_mapping;
+		}
 
 		ct_state |= MLX5_CT_STATE_NAT_BIT;
 	}
···
 	if (err)
 		goto err_mapping;
 
-	if (nat) {
+	if (nat_table && has_nat) {
 		attr->modify_hdr = mlx5_modify_header_alloc(ct_priv->dev, ct_priv->ns_type,
 							    mod_acts.num_actions,
 							    mod_acts.actions);
···
 
 	err = mlx5_tc_ct_entry_create_mod_hdr(ct_priv, attr, flow_rule,
 					      &zone_rule->mh,
-					      zone_restore_id, nat);
+					      zone_restore_id,
+					      nat,
+					      mlx5_tc_ct_entry_has_nat(entry));
 	if (err) {
 		ct_dbg("Failed to create ct entry mod hdr");
 		goto err_mod_hdr;
+1
drivers/net/ethernet/mellanox/mlx5/core/en/trap.c
···
 {
 	napi_enable(&trap->napi);
 	mlx5e_activate_rq(&trap->rq);
+	mlx5e_trigger_napi_sched(&trap->napi);
 }
 
 void mlx5e_deactivate_trap(struct mlx5e_priv *priv)
+1
drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c
···
 		goto err_remove_pool;
 
 	mlx5e_activate_xsk(c);
+	mlx5e_trigger_napi_icosq(c);
 
 	/* Don't wait for WQEs, because the newer xdpsock sample doesn't provide
 	 * any Fill Ring entries at the setup stage.
+1 -4
drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
···
 	rq->clock     = &mdev->clock;
 	rq->icosq     = &c->icosq;
 	rq->ix        = c->ix;
+	rq->channel   = c;
 	rq->mdev      = mdev;
 	rq->hw_mtu    = MLX5E_SW2HW_MTU(params, params->sw_mtu);
 	rq->xdpsq     = &c->rq_xdpsq;
···
 	mlx5e_reporter_icosq_resume_recovery(c);
 
 	/* TX queue is created active. */
-
-	spin_lock_bh(&c->async_icosq_lock);
-	mlx5e_trigger_irq(&c->async_icosq);
-	spin_unlock_bh(&c->async_icosq_lock);
 }
 
 void mlx5e_deactivate_xsk(struct mlx5e_channel *c)
+22 -7
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
···
 	rq->clock     = &mdev->clock;
 	rq->icosq     = &c->icosq;
 	rq->ix        = c->ix;
+	rq->channel   = c;
 	rq->mdev      = mdev;
 	rq->hw_mtu    = MLX5E_SW2HW_MTU(params, params->sw_mtu);
 	rq->xdpsq     = &c->rq_xdpsq;
···
 void mlx5e_activate_rq(struct mlx5e_rq *rq)
 {
 	set_bit(MLX5E_RQ_STATE_ENABLED, &rq->state);
-	if (rq->icosq) {
-		mlx5e_trigger_irq(rq->icosq);
-	} else {
-		local_bh_disable();
-		napi_schedule(rq->cq.napi);
-		local_bh_enable();
-	}
 }
 
 void mlx5e_deactivate_rq(struct mlx5e_rq *rq)
···
 	return 0;
 }
 
+void mlx5e_trigger_napi_icosq(struct mlx5e_channel *c)
+{
+	spin_lock_bh(&c->async_icosq_lock);
+	mlx5e_trigger_irq(&c->async_icosq);
+	spin_unlock_bh(&c->async_icosq_lock);
+}
+
+void mlx5e_trigger_napi_sched(struct napi_struct *napi)
+{
+	local_bh_disable();
+	napi_schedule(napi);
+	local_bh_enable();
+}
+
 static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
 			      struct mlx5e_params *params,
 			      struct mlx5e_channel_param *cparam,
···
 
 	if (test_bit(MLX5E_CHANNEL_STATE_XSK, c->state))
 		mlx5e_activate_xsk(c);
+
+	mlx5e_trigger_napi_icosq(c);
 }
 
 static void mlx5e_deactivate_channel(struct mlx5e_channel *c)
···
 
 unlock:
 	mutex_unlock(&priv->state_lock);
+
+	/* Need to fix some features. */
+	if (!err)
+		netdev_update_features(netdev);
+
 	return err;
 }
+36 -2
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
···
 	return tc_tbl_size;
 }
 
+static int mlx5e_tc_nic_create_miss_table(struct mlx5e_priv *priv)
+{
+	struct mlx5_flow_table **ft = &priv->fs.tc.miss_t;
+	struct mlx5_flow_table_attr ft_attr = {};
+	struct mlx5_flow_namespace *ns;
+	int err = 0;
+
+	ft_attr.max_fte = 1;
+	ft_attr.autogroup.max_num_groups = 1;
+	ft_attr.level = MLX5E_TC_MISS_LEVEL;
+	ft_attr.prio = 0;
+	ns = mlx5_get_flow_namespace(priv->mdev, MLX5_FLOW_NAMESPACE_KERNEL);
+
+	*ft = mlx5_create_auto_grouped_flow_table(ns, &ft_attr);
+	if (IS_ERR(*ft)) {
+		err = PTR_ERR(*ft);
+		netdev_err(priv->netdev, "failed to create tc nic miss table err=%d\n", err);
+	}
+
+	return err;
+}
+
+static void mlx5e_tc_nic_destroy_miss_table(struct mlx5e_priv *priv)
+{
+	mlx5_destroy_flow_table(priv->fs.tc.miss_t);
+}
+
 int mlx5e_tc_nic_init(struct mlx5e_priv *priv)
 {
 	struct mlx5e_tc_table *tc = &priv->fs.tc;
···
 	}
 	tc->mapping = chains_mapping;
 
+	err = mlx5e_tc_nic_create_miss_table(priv);
+	if (err)
+		goto err_chains;
+
 	if (MLX5_CAP_FLOWTABLE_NIC_RX(priv->mdev, ignore_flow_level))
 		attr.flags = MLX5_CHAINS_AND_PRIOS_SUPPORTED |
 			MLX5_CHAINS_IGNORE_FLOW_LEVEL_SUPPORTED;
 	attr.ns = MLX5_FLOW_NAMESPACE_KERNEL;
 	attr.max_ft_sz = mlx5e_tc_nic_get_ft_size(dev);
 	attr.max_grp_num = MLX5E_TC_TABLE_NUM_GROUPS;
-	attr.default_ft = mlx5e_vlan_get_flowtable(priv->fs.vlan);
+	attr.default_ft = priv->fs.tc.miss_t;
 	attr.mapping = chains_mapping;
 
 	tc->chains = mlx5_chains_create(dev, &attr);
 	if (IS_ERR(tc->chains)) {
 		err = PTR_ERR(tc->chains);
-		goto err_chains;
+		goto err_miss;
 	}
 
 	tc->post_act = mlx5e_tc_post_act_init(priv, tc->chains, MLX5_FLOW_NAMESPACE_KERNEL);
···
 	mlx5_tc_ct_clean(tc->ct);
 	mlx5e_tc_post_act_destroy(tc->post_act);
 	mlx5_chains_destroy(tc->chains);
+err_miss:
+	mlx5e_tc_nic_destroy_miss_table(priv);
 err_chains:
 	mapping_destroy(chains_mapping);
 err_mapping:
···
 	mlx5e_tc_post_act_destroy(tc->post_act);
 	mapping_destroy(tc->mapping);
 	mlx5_chains_destroy(tc->chains);
+	mlx5e_tc_nic_destroy_miss_table(priv);
 }
 
 int mlx5e_tc_ht_init(struct rhashtable *tc_ht)
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
···
 #define KERNEL_MIN_LEVEL (KERNEL_NIC_PRIO_NUM_LEVELS + 1)
 
 #define KERNEL_NIC_TC_NUM_PRIOS  1
-#define KERNEL_NIC_TC_NUM_LEVELS 2
+#define KERNEL_NIC_TC_NUM_LEVELS 3
 
 #define ANCHOR_NUM_LEVELS 1
 #define ANCHOR_NUM_PRIOS 1
+4 -5
drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
···
 	err = mlx5dr_table_set_miss_action(ft->fs_dr_table.dr_table, action);
 	if (err && action) {
 		err = mlx5dr_action_destroy(action);
-		if (err) {
-			action = NULL;
-			mlx5_core_err(ns->dev, "Failed to destroy action (%d)\n",
-				      err);
-		}
+		if (err)
+			mlx5_core_err(ns->dev,
+				      "Failed to destroy action (%d)\n", err);
+		action = NULL;
 	}
 	ft->fs_dr_table.miss_action = action;
 	if (old_miss_action) {
+22 -10
drivers/net/ethernet/microchip/lan743x_main.c
···
 	if (!phydev)
 		goto return_error;
 
-	ret = phy_connect_direct(netdev, phydev,
-				 lan743x_phy_link_status_change,
-				 PHY_INTERFACE_MODE_GMII);
+	if (adapter->is_pci11x1x)
+		ret = phy_connect_direct(netdev, phydev,
+					 lan743x_phy_link_status_change,
+					 PHY_INTERFACE_MODE_RGMII);
+	else
+		ret = phy_connect_direct(netdev, phydev,
+					 lan743x_phy_link_status_change,
+					 PHY_INTERFACE_MODE_GMII);
 	if (ret)
 		goto return_error;
 }
···
 			lan743x_csr_write(adapter, SGMII_CTL, sgmii_ctl);
 			netif_dbg(adapter, drv, adapter->netdev,
 				  "SGMII operation\n");
+			adapter->mdiobus->probe_capabilities = MDIOBUS_C22_C45;
+			adapter->mdiobus->read = lan743x_mdiobus_c45_read;
+			adapter->mdiobus->write = lan743x_mdiobus_c45_write;
+			adapter->mdiobus->name = "lan743x-mdiobus-c45";
+			netif_dbg(adapter, drv, adapter->netdev,
+				  "lan743x-mdiobus-c45\n");
 		} else {
 			sgmii_ctl = lan743x_csr_read(adapter, SGMII_CTL);
 			sgmii_ctl &= ~SGMII_CTL_SGMII_ENABLE_;
 			sgmii_ctl |= SGMII_CTL_SGMII_POWER_DN_;
 			lan743x_csr_write(adapter, SGMII_CTL, sgmii_ctl);
 			netif_dbg(adapter, drv, adapter->netdev,
-				  "(R)GMII operation\n");
+				  "RGMII operation\n");
+			// Only C22 support when RGMII I/F
+			adapter->mdiobus->probe_capabilities = MDIOBUS_C22;
+			adapter->mdiobus->read = lan743x_mdiobus_read;
+			adapter->mdiobus->write = lan743x_mdiobus_write;
+			adapter->mdiobus->name = "lan743x-mdiobus";
+			netif_dbg(adapter, drv, adapter->netdev,
+				  "lan743x-mdiobus\n");
 		}
-
-		adapter->mdiobus->probe_capabilities = MDIOBUS_C22_C45;
-		adapter->mdiobus->read = lan743x_mdiobus_c45_read;
-		adapter->mdiobus->write = lan743x_mdiobus_c45_write;
-		adapter->mdiobus->name = "lan743x-mdiobus-c45";
-		netif_dbg(adapter, drv, adapter->netdev,
-			  "lan743x-mdiobus-c45\n");
 	} else {
 		adapter->mdiobus->read = lan743x_mdiobus_read;
 		adapter->mdiobus->write = lan743x_mdiobus_write;
+7 -2
drivers/net/ethernet/microchip/lan966x/lan966x_main.c
···
 		lan966x->ports[p]->fwnode = fwnode_handle_get(portnp);
 
 		serdes = devm_of_phy_get(lan966x->dev, to_of_node(portnp), NULL);
-		if (!IS_ERR(serdes))
-			lan966x->ports[p]->serdes = serdes;
+		if (PTR_ERR(serdes) == -ENODEV)
+			serdes = NULL;
+		if (IS_ERR(serdes)) {
+			err = PTR_ERR(serdes);
+			goto cleanup_ports;
+		}
+		lan966x->ports[p]->serdes = serdes;
 
 		lan966x_port_init(lan966x->ports[p]);
 	}
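The reworked error handling above distinguishes "no serdes present" (`-ENODEV`, acceptable, stored as NULL) from real failures, which are now propagated instead of silently ignored. A standalone sketch of that optional-resource pattern, with simplified stand-ins for the kernel's ERR_PTR helpers (the real ones live in `include/linux/err.h`):

```c
#include <stddef.h>

/* Simplified stand-ins for the kernel's ERR_PTR/PTR_ERR/IS_ERR, so the
 * pattern can run standalone; the real helpers are in linux/err.h. */
#define MAX_ERRNO 4095
static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)
{
	return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}
#define ENODEV 19

/* An absent resource (-ENODEV) is acceptable and maps to NULL;
 * any other error pointer is propagated to the caller. */
int attach_serdes(void *serdes, void **slot)
{
	if (PTR_ERR(serdes) == -ENODEV)
		serdes = NULL;
	if (IS_ERR(serdes))
		return (int)PTR_ERR(serdes);
	*slot = serdes;
	return 0;
}
```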
+6 -6
drivers/net/ethernet/netronome/nfp/nfdk/dp.c
···
 		FIELD_PREP(NFDK_DESC_TX_TYPE_HEAD, type);
 
 	txd->dma_len_type = cpu_to_le16(dlen_type);
-	nfp_desc_set_dma_addr(txd, dma_addr);
+	nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
 
 	/* starts at bit 0 */
 	BUILD_BUG_ON(!(NFDK_DESC_TX_DMA_LEN_HEAD & 1));
···
 		dlen_type = FIELD_PREP(NFDK_DESC_TX_DMA_LEN, dma_len);
 
 		txd->dma_len_type = cpu_to_le16(dlen_type);
-		nfp_desc_set_dma_addr(txd, dma_addr);
+		nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
 
 		dma_len -= dlen_type;
 		dma_addr += dlen_type + 1;
···
 		FIELD_PREP(NFDK_DESC_TX_TYPE_HEAD, type);
 
 	txd->dma_len_type = cpu_to_le16(dlen_type);
-	nfp_desc_set_dma_addr(txd, dma_addr);
+	nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
 
 	tmp_dlen = dlen_type & NFDK_DESC_TX_DMA_LEN_HEAD;
 	dma_len -= tmp_dlen;
···
 		dma_len -= 1;
 		dlen_type = FIELD_PREP(NFDK_DESC_TX_DMA_LEN, dma_len);
 		txd->dma_len_type = cpu_to_le16(dlen_type);
-		nfp_desc_set_dma_addr(txd, dma_addr);
+		nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
 
 		dlen_type &= NFDK_DESC_TX_DMA_LEN;
 		dma_len -= dlen_type;
···
 		FIELD_PREP(NFDK_DESC_TX_TYPE_HEAD, type);
 
 	txd->dma_len_type = cpu_to_le16(dlen_type);
-	nfp_desc_set_dma_addr(txd, dma_addr);
+	nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
 
 	tmp_dlen = dlen_type & NFDK_DESC_TX_DMA_LEN_HEAD;
 	dma_len -= tmp_dlen;
···
 		dma_len -= 1;
 		dlen_type = FIELD_PREP(NFDK_DESC_TX_DMA_LEN, dma_len);
 		txd->dma_len_type = cpu_to_le16(dlen_type);
-		nfp_desc_set_dma_addr(txd, dma_addr);
+		nfp_nfdk_tx_desc_set_dma_addr(txd, dma_addr);
 
 		dlen_type &= NFDK_DESC_TX_DMA_LEN;
 		dma_len -= dlen_type;
+1 -2
drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h
···
 struct nfp_nfdk_tx_desc {
 	union {
 		struct {
-			u8 dma_addr_hi;  /* High bits of host buf address */
-			u8 padding;  /* Must be zero */
+			__le16 dma_addr_hi; /* High bits of host buf address */
 			__le16 dma_len_type; /* Length to DMA for this desc */
 			__le32 dma_addr_lo; /* Low 32bit of host buf addr */
 		};
+10 -1
drivers/net/ethernet/netronome/nfp/nfp_net.h
···
 /* Convenience macro for writing dma address into RX/TX descriptors */
 #define nfp_desc_set_dma_addr(desc, dma_addr)				\
 	do {								\
-		__typeof(desc) __d = (desc);				\
+		__typeof__(desc) __d = (desc);				\
 		dma_addr_t __addr = (dma_addr);				\
 									\
 		__d->dma_addr_lo = cpu_to_le32(lower_32_bits(__addr));	\
 		__d->dma_addr_hi = upper_32_bits(__addr) & 0xff;	\
+	} while (0)
+
+#define nfp_nfdk_tx_desc_set_dma_addr(desc, dma_addr)			\
+	do {								\
+		__typeof__(desc) __d = (desc);				\
+		dma_addr_t __addr = (dma_addr);				\
+									\
+		__d->dma_addr_hi = cpu_to_le16(upper_32_bits(__addr) & 0xff); \
+		__d->dma_addr_lo = cpu_to_le32(lower_32_bits(__addr));	\
 	} while (0)
 
 /**
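The new macro above packs the upper address bits into a 16-bit little-endian field instead of a bare `u8` plus a padding byte, matching the widened `dma_addr_hi` in `nfp_nfdk_tx_desc`. A host-endian sketch of the round trip; struct and function names are illustrative and the `cpu_to_le*` conversions are elided for portability:

```c
#include <stdint.h>

/* Host-endian model of the fixed nfdk TX descriptor packing: the high
 * bits field is now 16 bits wide, absorbing the old padding byte. */
struct tx_desc {
	uint16_t dma_addr_hi;	/* bits 32..39 of the host buffer address */
	uint16_t dma_len_type;
	uint32_t dma_addr_lo;	/* low 32 bits of the address */
};

void set_dma_addr(struct tx_desc *d, uint64_t addr)
{
	d->dma_addr_hi = (uint16_t)((addr >> 32) & 0xff);
	d->dma_addr_lo = (uint32_t)(addr & 0xffffffffu);
}

uint64_t get_dma_addr(const struct tx_desc *d)
{
	return ((uint64_t)d->dma_addr_hi << 32) | d->dma_addr_lo;
}
```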
+2 -2
drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
···
 
 	/* Init to unknowns */
 	ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE);
-	ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);
-	ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
 	cmd->base.port = PORT_OTHER;
 	cmd->base.speed = SPEED_UNKNOWN;
 	cmd->base.duplex = DUPLEX_UNKNOWN;
···
 	port = nfp_port_from_netdev(netdev);
 	eth_port = nfp_port_get_eth_port(port);
 	if (eth_port) {
+		ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);
+		ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
 		cmd->base.autoneg = eth_port->aneg != NFP_ANEG_DISABLED ?
 			AUTONEG_ENABLE : AUTONEG_DISABLE;
 		nfp_net_set_fec_link_mode(eth_port, cmd);
+2 -4
drivers/net/ethernet/sfc/efx_channels.c
···
 		efx->n_channels = 1;
 		efx->n_rx_channels = 1;
 		efx->n_tx_channels = 1;
+		efx->tx_channel_offset = 0;
 		efx->n_xdp_channels = 0;
 		efx->xdp_channel_offset = efx->n_channels;
 		rc = pci_enable_msi(efx->pci_dev);
···
 		efx->n_channels = 1 + (efx_separate_tx_channels ? 1 : 0);
 		efx->n_rx_channels = 1;
 		efx->n_tx_channels = 1;
+		efx->tx_channel_offset = 1;
 		efx->n_xdp_channels = 0;
 		efx->xdp_channel_offset = efx->n_channels;
 		efx->legacy_irq = efx->pci_dev->irq;
···
 {
 	struct efx_channel *channel;
 	int rc;
-
-	efx->tx_channel_offset =
-		efx_separate_tx_channels ?
-		efx->n_channels - efx->n_tx_channels : 0;
 
 	if (efx->xdp_tx_queue_count) {
 		EFX_WARN_ON_PARANOID(efx->xdp_tx_queues);
+1 -1
drivers/net/ethernet/sfc/net_driver.h
···
 
 static inline bool efx_channel_has_tx_queues(struct efx_channel *channel)
 {
-	return true;
+	return channel && channel->channel >= channel->efx->tx_channel_offset;
 }
 
 static inline unsigned int efx_channel_num_tx_queues(struct efx_channel *channel)
+2 -4
drivers/net/ethernet/sfc/siena/efx_channels.c
···
 		efx->n_channels = 1;
 		efx->n_rx_channels = 1;
 		efx->n_tx_channels = 1;
+		efx->tx_channel_offset = 0;
 		efx->n_xdp_channels = 0;
 		efx->xdp_channel_offset = efx->n_channels;
 		rc = pci_enable_msi(efx->pci_dev);
···
 		efx->n_channels = 1 + (efx_siena_separate_tx_channels ? 1 : 0);
 		efx->n_rx_channels = 1;
 		efx->n_tx_channels = 1;
+		efx->tx_channel_offset = 1;
 		efx->n_xdp_channels = 0;
 		efx->xdp_channel_offset = efx->n_channels;
 		efx->legacy_irq = efx->pci_dev->irq;
···
 {
 	struct efx_channel *channel;
 	int rc;
-
-	efx->tx_channel_offset =
-		efx_siena_separate_tx_channels ?
-		efx->n_channels - efx->n_tx_channels : 0;
 
 	if (efx->xdp_tx_queue_count) {
 		EFX_WARN_ON_PARANOID(efx->xdp_tx_queues);
+1 -1
drivers/net/ethernet/sfc/siena/net_driver.h
···
 
 static inline bool efx_channel_has_tx_queues(struct efx_channel *channel)
 {
-	return true;
+	return channel && channel->channel >= channel->efx->tx_channel_offset;
 }
 
 static inline unsigned int efx_channel_num_tx_queues(struct efx_channel *channel)
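With the sfc and siena fixes above, whether a channel owns TX queues depends on `tx_channel_offset` rather than being unconditionally true, which is what made `efx_separate_tx_channels` place TX traffic on the wrong channel. A simplified model of the predicate; the struct and field names mirror the driver but are stand-ins:

```c
#include <stdbool.h>

/* Simplified channel layout: with separate TX channels, channel 0 is
 * RX-only and TX queues start at tx_channel_offset. */
struct efx_layout {
	unsigned int n_channels;
	unsigned int tx_channel_offset;
};

bool channel_has_tx_queues(const struct efx_layout *efx, unsigned int channel)
{
	return channel < efx->n_channels &&
	       channel >= efx->tx_channel_offset;
}
```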
+2
drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
@@ -1161,6 +1161,7 @@
 #define PCI_DEVICE_ID_INTEL_ADLS_SGMII1G_0	0x7aac
 #define PCI_DEVICE_ID_INTEL_ADLS_SGMII1G_1	0x7aad
 #define PCI_DEVICE_ID_INTEL_ADLN_SGMII1G	0x54ac
+#define PCI_DEVICE_ID_INTEL_RPLP_SGMII1G	0x51ac
 
 static const struct pci_device_id intel_eth_pci_id_table[] = {
 	{ PCI_DEVICE_DATA(INTEL, QUARK, &quark_info) },
@@ -1180,6 +1179,7 @@
 	{ PCI_DEVICE_DATA(INTEL, ADLS_SGMII1G_0, &adls_sgmii1g_phy0_info) },
 	{ PCI_DEVICE_DATA(INTEL, ADLS_SGMII1G_1, &adls_sgmii1g_phy1_info) },
 	{ PCI_DEVICE_DATA(INTEL, ADLN_SGMII1G, &tgl_sgmii1g_phy0_info) },
+	{ PCI_DEVICE_DATA(INTEL, RPLP_SGMII1G, &tgl_sgmii1g_phy0_info) },
 	{}
 };
 MODULE_DEVICE_TABLE(pci, intel_eth_pci_id_table);
+3 -3
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -7129,9 +7129,9 @@
 		/* MDIO bus Registration */
 		ret = stmmac_mdio_register(ndev);
 		if (ret < 0) {
-			dev_err(priv->device,
-				"%s: MDIO bus (id: %d) registration failed",
-				__func__, priv->plat->bus_id);
+			dev_err_probe(priv->device, ret,
+				      "%s: MDIO bus (id: %d) registration failed\n",
+				      __func__, priv->plat->bus_id);
 			goto error_mdio_register;
 		}
 	}
+1 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
@@ -482,7 +482,7 @@
 
 	err = of_mdiobus_register(new_bus, mdio_node);
 	if (err != 0) {
-		dev_err(dev, "Cannot register the MDIO bus\n");
+		dev_err_probe(dev, err, "Cannot register the MDIO bus\n");
 		goto bus_register_fail;
 	}
 
+6 -2
drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -9,6 +9,7 @@
 #include <linux/etherdevice.h>
 #include <linux/if_vlan.h>
 #include <linux/interrupt.h>
+#include <linux/irqdomain.h>
 #include <linux/kernel.h>
 #include <linux/kmemleak.h>
 #include <linux/module.h>
@@ -1789,6 +1788,7 @@
 	if (IS_ERR(cpts)) {
 		int ret = PTR_ERR(cpts);
 
+		of_node_put(node);
 		if (ret == -EOPNOTSUPP) {
 			dev_info(dev, "cpts disabled\n");
 			return 0;
@@ -1983,7 +1981,9 @@
 
 	phy_interface_set_rgmii(port->slave.phylink_config.supported_interfaces);
 
-	phylink = phylink_create(&port->slave.phylink_config, dev->fwnode, port->slave.phy_if,
+	phylink = phylink_create(&port->slave.phylink_config,
+				 of_node_to_fwnode(port->slave.phy_node),
+				 port->slave.phy_if,
 				 &am65_cpsw_phylink_mac_ops);
 	if (IS_ERR(phylink))
 		return PTR_ERR(phylink);
@@ -2666,9 +2662,9 @@
 	if (!node)
 		return -ENOENT;
 	common->port_num = of_get_child_count(node);
+	of_node_put(node);
 	if (common->port_num < 1 || common->port_num > AM65_CPSW_MAX_PORTS)
 		return -ENOENT;
-	of_node_put(node);
 
 	common->rx_flow_id_base = -1;
 	init_completion(&common->tdown_complete);
+3 -6
drivers/net/ipa/ipa_endpoint.c
@@ -1095,7 +1095,7 @@
 
 	ret = gsi_trans_page_add(trans, page, len, offset);
 	if (ret)
-		__free_pages(page, get_order(buffer_size));
+		put_page(page);
 	else
 		trans->data = page;	/* transaction owns page now */
 
@@ -1418,11 +1418,8 @@
 	} else {
 		struct page *page = trans->data;
 
-		if (page) {
-			u32 buffer_size = endpoint->config.rx.buffer_size;
-
-			__free_pages(page, get_order(buffer_size));
-		}
+		if (page)
+			put_page(page);
 	}
 }
 
+7
drivers/net/macsec.c
@@ -99,6 +99,7 @@
  * struct macsec_dev - private data
  * @secy: SecY config
  * @real_dev: pointer to underlying netdevice
+ * @dev_tracker: refcount tracker for @real_dev reference
  * @stats: MACsec device stats
  * @secys: linked list of SecY's on the underlying device
  * @gro_cells: pointer to the Generic Receive Offload cell
@@ -108,6 +107,7 @@
 struct macsec_dev {
 	struct macsec_secy secy;
 	struct net_device *real_dev;
+	netdevice_tracker dev_tracker;
 	struct pcpu_secy_stats __percpu *stats;
 	struct list_head secys;
 	struct gro_cells gro_cells;
@@ -3461,6 +3459,9 @@
 	if (is_zero_ether_addr(dev->broadcast))
 		memcpy(dev->broadcast, real_dev->broadcast, dev->addr_len);
 
+	/* Get macsec's reference to real_dev */
+	dev_hold_track(real_dev, &macsec->dev_tracker, GFP_KERNEL);
+
 	return 0;
 }
 
@@ -3709,6 +3704,8 @@
 	free_percpu(macsec->stats);
 	free_percpu(macsec->secy.tx_sc.stats);
 
+	/* Get rid of the macsec's reference to real_dev */
+	dev_put_track(macsec->real_dev, &macsec->dev_tracker);
 }
 
 static void macsec_setup(struct net_device *dev)
+22 -11
drivers/net/phy/at803x.c
@@ -433,20 +433,21 @@
 static int at803x_set_wol(struct phy_device *phydev,
 			  struct ethtool_wolinfo *wol)
 {
-	struct net_device *ndev = phydev->attached_dev;
-	const u8 *mac;
 	int ret, irq_enabled;
-	unsigned int i;
-	static const unsigned int offsets[] = {
-		AT803X_LOC_MAC_ADDR_32_47_OFFSET,
-		AT803X_LOC_MAC_ADDR_16_31_OFFSET,
-		AT803X_LOC_MAC_ADDR_0_15_OFFSET,
-	};
-
-	if (!ndev)
-		return -ENODEV;
 
 	if (wol->wolopts & WAKE_MAGIC) {
+		struct net_device *ndev = phydev->attached_dev;
+		const u8 *mac;
+		unsigned int i;
+		static const unsigned int offsets[] = {
+			AT803X_LOC_MAC_ADDR_32_47_OFFSET,
+			AT803X_LOC_MAC_ADDR_16_31_OFFSET,
+			AT803X_LOC_MAC_ADDR_0_15_OFFSET,
+		};
+
+		if (!ndev)
+			return -ENODEV;
+
 		mac = (const u8 *) ndev->dev_addr;
 
 		if (!is_valid_ether_addr(mac))
@@ -858,6 +857,9 @@
 	if (phydev->drv->phy_id == ATH8031_PHY_ID) {
 		int ccr = phy_read(phydev, AT803X_REG_CHIP_CONFIG);
 		int mode_cfg;
+		struct ethtool_wolinfo wol = {
+			.wolopts = 0,
+		};
 
 		if (ccr < 0)
 			goto err;
@@ -875,6 +871,13 @@
 		case AT803X_MODE_CFG_FX100_RGMII_75OHM:
 			priv->is_fiber = true;
 			break;
+		}
+
+		/* Disable WOL by default */
+		ret = at803x_set_wol(phydev, &wol);
+		if (ret < 0) {
+			phydev_err(phydev, "failed to disable WOL on probe: %d\n", ret);
+			goto err;
 		}
 	}
 
+3 -3
drivers/net/phy/fixed_phy.c
@@ -180,7 +180,7 @@
 		if (fp->link_gpiod)
 			gpiod_put(fp->link_gpiod);
 		kfree(fp);
-		ida_simple_remove(&phy_fixed_ida, phy_addr);
+		ida_free(&phy_fixed_ida, phy_addr);
 		return;
 	}
 }
@@ -244,13 +244,13 @@
 	}
 
 	/* Get the next available PHY address, up to PHY_MAX_ADDR */
-	phy_addr = ida_simple_get(&phy_fixed_ida, 0, PHY_MAX_ADDR, GFP_KERNEL);
+	phy_addr = ida_alloc_max(&phy_fixed_ida, PHY_MAX_ADDR - 1, GFP_KERNEL);
 	if (phy_addr < 0)
 		return ERR_PTR(phy_addr);
 
 	ret = fixed_phy_add_gpiod(irq, phy_addr, status, gpiod);
 	if (ret < 0) {
-		ida_simple_remove(&phy_fixed_ida, phy_addr);
+		ida_free(&phy_fixed_ida, phy_addr);
 		return ERR_PTR(ret);
 	}
 
+2
drivers/net/usb/qmi_wwan.c
@@ -1366,6 +1366,7 @@
 	{QMI_FIXED_INTF(0x1bc7, 0x1200, 5)},	/* Telit LE920 */
 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1201, 2)},	/* Telit LE920, LE920A4 */
 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1230, 2)},	/* Telit LE910Cx */
+	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1250, 0)},	/* Telit LE910Cx */
 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1260, 2)},	/* Telit LE910Cx */
 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1261, 2)},	/* Telit LE910Cx */
 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1900, 1)},	/* Telit LN940 series */
@@ -1389,6 +1388,7 @@
 	{QMI_FIXED_INTF(0x1e2d, 0x0083, 4)},	/* Cinterion PHxx,PXxx (1 RmNet + USB Audio)*/
 	{QMI_QUIRK_SET_DTR(0x1e2d, 0x00b0, 4)},	/* Cinterion CLS8 */
 	{QMI_FIXED_INTF(0x1e2d, 0x00b7, 0)},	/* Cinterion MV31 RmNet */
+	{QMI_FIXED_INTF(0x1e2d, 0x00b9, 0)},	/* Cinterion MV31 RmNet based on new baseline */
 	{QMI_FIXED_INTF(0x413c, 0x81a2, 8)},	/* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card */
 	{QMI_FIXED_INTF(0x413c, 0x81a3, 8)},	/* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card */
 	{QMI_FIXED_INTF(0x413c, 0x81a4, 8)},	/* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */
+17 -17
drivers/net/wireless/intel/iwlwifi/pcie/trans.c
@@ -1090,7 +1090,7 @@
 	u8 addr;
 };
 
-#define CAUSE(reg, mask)	\
+#define IWL_CAUSE(reg, mask)	\
 	{			\
 		.mask_reg = reg,	\
 		.bit = ilog2(mask),	\
@@ -1101,28 +1101,28 @@
 	}
 
 static const struct iwl_causes_list causes_list_common[] = {
-	CAUSE(CSR_MSIX_FH_INT_MASK_AD, MSIX_FH_INT_CAUSES_D2S_CH0_NUM),
-	CAUSE(CSR_MSIX_FH_INT_MASK_AD, MSIX_FH_INT_CAUSES_D2S_CH1_NUM),
-	CAUSE(CSR_MSIX_FH_INT_MASK_AD, MSIX_FH_INT_CAUSES_S2D),
-	CAUSE(CSR_MSIX_FH_INT_MASK_AD, MSIX_FH_INT_CAUSES_FH_ERR),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_ALIVE),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_WAKEUP),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_RESET_DONE),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_CT_KILL),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_RF_KILL),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_PERIODIC),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_SCD),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_FH_TX),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_HW_ERR),
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_HAP),
+	IWL_CAUSE(CSR_MSIX_FH_INT_MASK_AD, MSIX_FH_INT_CAUSES_D2S_CH0_NUM),
+	IWL_CAUSE(CSR_MSIX_FH_INT_MASK_AD, MSIX_FH_INT_CAUSES_D2S_CH1_NUM),
+	IWL_CAUSE(CSR_MSIX_FH_INT_MASK_AD, MSIX_FH_INT_CAUSES_S2D),
+	IWL_CAUSE(CSR_MSIX_FH_INT_MASK_AD, MSIX_FH_INT_CAUSES_FH_ERR),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_ALIVE),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_WAKEUP),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_RESET_DONE),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_CT_KILL),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_RF_KILL),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_PERIODIC),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_SCD),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_FH_TX),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_HW_ERR),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_HAP),
 };
 
 static const struct iwl_causes_list causes_list_pre_bz[] = {
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_SW_ERR),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_SW_ERR),
 };
 
 static const struct iwl_causes_list causes_list_bz[] = {
-	CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_SW_ERR_BZ),
+	IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_SW_ERR_BZ),
 };
 
 static void iwl_pcie_map_list(struct iwl_trans *trans,
+1 -3
drivers/net/wireless/marvell/libertas/cfg.c
@@ -1053,7 +1053,6 @@
  */
 #define LBS_ASSOC_MAX_CMD_SIZE                     \
 	(sizeof(struct cmd_ds_802_11_associate)    \
-	 - 512 /* cmd_ds_802_11_associate.iebuf */ \
 	 + LBS_MAX_SSID_TLV_SIZE                   \
 	 + LBS_MAX_CHANNEL_TLV_SIZE                \
 	 + LBS_MAX_CF_PARAM_TLV_SIZE               \
@@ -1129,8 +1130,7 @@
 	if (sme->ie && sme->ie_len)
 		pos += lbs_add_wpa_tlv(pos, sme->ie, sme->ie_len);
 
-	len = (sizeof(*cmd) - sizeof(cmd->iebuf)) +
-		(u16)(pos - (u8 *) &cmd->iebuf);
+	len = sizeof(*cmd) + (u16)(pos - (u8 *) &cmd->iebuf);
 	cmd->hdr.size = cpu_to_le16(len);
 
 	lbs_deb_hex(LBS_DEB_ASSOC, "ASSOC_CMD", (u8 *) cmd,
+4 -2
drivers/net/wireless/marvell/libertas/host.h
@@ -528,7 +528,8 @@
 	__le16 listeninterval;
 	__le16 bcnperiod;
 	u8 dtimperiod;
-	u8 iebuf[512];    /* Enough for required and most optional IEs */
+	/* 512 permitted - enough for required and most optional IEs */
+	u8 iebuf[];
 } __packed;
 
 struct cmd_ds_802_11_associate_response {
@@ -538,7 +537,8 @@
 	__le16 capability;
 	__le16 statuscode;
 	__le16 aid;
-	u8 iebuf[512];
+	/* max 512 */
+	u8 iebuf[];
 } __packed;
 
 struct cmd_ds_802_11_set_wep {
+10
drivers/net/wireless/realtek/rtw88/fw.c
@@ -1602,6 +1602,16 @@
 	return ret;
 }
 
+void rtw_fw_update_beacon_work(struct work_struct *work)
+{
+	struct rtw_dev *rtwdev = container_of(work, struct rtw_dev,
+					      update_beacon_work);
+
+	mutex_lock(&rtwdev->mutex);
+	rtw_fw_download_rsvd_page(rtwdev);
+	mutex_unlock(&rtwdev->mutex);
+}
+
 static void rtw_fw_read_fifo_page(struct rtw_dev *rtwdev, u32 offset, u32 size,
 				  u32 *buf, u32 residue, u16 start_pg)
 {
+1
drivers/net/wireless/realtek/rtw88/fw.h
@@ -809,6 +809,7 @@
 void rtw_add_rsvd_page_sta(struct rtw_dev *rtwdev,
 			   struct rtw_vif *rtwvif);
 int rtw_fw_download_rsvd_page(struct rtw_dev *rtwdev);
+void rtw_fw_update_beacon_work(struct work_struct *work);
 void rtw_send_rsvd_page_h2c(struct rtw_dev *rtwdev);
 int rtw_dump_drv_rsvd_page(struct rtw_dev *rtwdev,
 			   u32 offset, u32 size, u32 *buf);
+1 -3
drivers/net/wireless/realtek/rtw88/mac80211.c
@@ -493,9 +493,7 @@
 {
 	struct rtw_dev *rtwdev = hw->priv;
 
-	mutex_lock(&rtwdev->mutex);
-	rtw_fw_download_rsvd_page(rtwdev);
-	mutex_unlock(&rtwdev->mutex);
+	ieee80211_queue_work(hw, &rtwdev->update_beacon_work);
 
 	return 0;
 }
+2
drivers/net/wireless/realtek/rtw88/main.c
@@ -1442,6 +1442,7 @@
 	mutex_unlock(&rtwdev->mutex);
 
 	cancel_work_sync(&rtwdev->c2h_work);
+	cancel_work_sync(&rtwdev->update_beacon_work);
 	cancel_delayed_work_sync(&rtwdev->watch_dog_work);
 	cancel_delayed_work_sync(&coex->bt_relink_work);
 	cancel_delayed_work_sync(&coex->bt_reenable_work);
@@ -1999,6 +1998,7 @@
 	INIT_WORK(&rtwdev->c2h_work, rtw_c2h_work);
 	INIT_WORK(&rtwdev->ips_work, rtw_ips_work);
 	INIT_WORK(&rtwdev->fw_recovery_work, rtw_fw_recovery_work);
+	INIT_WORK(&rtwdev->update_beacon_work, rtw_fw_update_beacon_work);
 	INIT_WORK(&rtwdev->ba_work, rtw_txq_ba_work);
 	skb_queue_head_init(&rtwdev->c2h_queue);
 	skb_queue_head_init(&rtwdev->coex.queue);
+1
drivers/net/wireless/realtek/rtw88/main.h
@@ -2008,6 +2008,7 @@
 	struct work_struct c2h_work;
 	struct work_struct ips_work;
 	struct work_struct fw_recovery_work;
+	struct work_struct update_beacon_work;
 
 	/* used to protect txqs list */
 	spinlock_t txq_lock;
+1 -1
drivers/net/xen-netback/netback.c
@@ -828,7 +828,7 @@
 			break;
 		}
 
-		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
+		work_to_do = XEN_RING_NR_UNCONSUMED_REQUESTS(&queue->tx);
 		if (!work_to_do)
 			break;
 
+1 -1
drivers/ptp/ptp_clockmatrix.c
@@ -267,7 +267,7 @@
 static bool is_single_shot(u8 mask)
 {
 	/* Treat single bit ToD masks as continuous trigger */
-	return mask <= 8 && is_power_of_2(mask);
+	return !(mask <= 8 && is_power_of_2(mask));
 }
 
 static int idtcm_extts_enable(struct idtcm_channel *channel,
+1 -1
include/linux/ipv6.h
@@ -61,7 +61,7 @@
 	__s32		suppress_frag_ndisc;
 	__s32		accept_ra_mtu;
 	__s32		drop_unsolicited_na;
-	__s32		accept_unsolicited_na;
+	__s32		accept_untracked_na;
 	struct ipv6_stable_secret {
 		bool initialized;
 		struct in6_addr secret;
+2 -3
include/linux/mlx5/mlx5_ifc.h
@@ -5176,12 +5176,11 @@
 
 	u8         syndrome[0x20];
 
-	u8         reserved_at_40[0x20];
-	u8         ece[0x20];
+	u8         reserved_at_40[0x40];
 
 	u8         opt_param_mask[0x20];
 
-	u8         reserved_at_a0[0x20];
+	u8         ece[0x20];
 
 	struct mlx5_ifc_qpc_bits qpc;
 
+8 -1
include/linux/skbuff.h
@@ -2696,7 +2696,14 @@
 static inline void *__skb_pull(struct sk_buff *skb, unsigned int len)
 {
 	skb->len -= len;
-	BUG_ON(skb->len < skb->data_len);
+	if (unlikely(skb->len < skb->data_len)) {
+#if defined(CONFIG_DEBUG_NET)
+		skb->len += len;
+		pr_err("__skb_pull(len=%u)\n", len);
+		skb_dump(KERN_ERR, skb, false);
+#endif
+		BUG();
+	}
 	return skb->data += len;
 }
 
+1 -1
include/net/amt.h
@@ -15,7 +15,7 @@
 	AMT_MSG_MEMBERSHIP_QUERY,
 	AMT_MSG_MEMBERSHIP_UPDATE,
 	AMT_MSG_MULTICAST_DATA,
-	AMT_MSG_TEARDOWM,
+	AMT_MSG_TEARDOWN,
 	__AMT_MSG_MAX,
 };
 
+1
include/net/ax25.h
@@ -228,6 +228,7 @@
 	ax25_dama_info		dama;
 #endif
 	refcount_t		refcount;
+	bool device_up;
 } ax25_dev;
 
 typedef struct ax25_cb {
+6
include/net/bonding.h
@@ -149,7 +149,9 @@
 	struct reciprocal_value reciprocal_packets_per_slave;
 	u16 ad_actor_sys_prio;
 	u16 ad_user_port_key;
+#if IS_ENABLED(CONFIG_IPV6)
 	struct in6_addr ns_targets[BOND_MAX_NS_TARGETS];
+#endif
 
 	/* 2 bytes of padding : see ether_addr_equal_64bits() */
 	u8 ad_actor_system[ETH_ALEN + 2];
@@ -505,12 +503,14 @@
 	return !ipv4_is_lbcast(addr) && !ipv4_is_zeronet(addr);
 }
 
+#if IS_ENABLED(CONFIG_IPV6)
 static inline int bond_is_ip6_target_ok(struct in6_addr *addr)
 {
 	return !ipv6_addr_any(addr) &&
 	       !ipv6_addr_loopback(addr) &&
	       !ipv6_addr_is_multicast(addr);
 }
+#endif
 
 /* Get the oldest arp which we've received on this slave for bond's
  * arp_targets.
@@ -750,6 +746,7 @@
 	return -1;
 }
 
+#if IS_ENABLED(CONFIG_IPV6)
 static inline int bond_get_targets_ip6(struct in6_addr *targets, struct in6_addr *ip)
 {
 	int i;
@@ -763,6 +758,7 @@
 
 	return -1;
 }
+#endif
 
 /* exported from bond_main.c */
 extern unsigned int bond_net_id;
+6 -1
include/net/netfilter/nf_conntrack_core.h
@@ -58,8 +58,13 @@
 	int ret = NF_ACCEPT;
 
 	if (ct) {
-		if (!nf_ct_is_confirmed(ct))
+		if (!nf_ct_is_confirmed(ct)) {
 			ret = __nf_conntrack_confirm(skb);
+
+			if (ret == NF_ACCEPT)
+				ct = (struct nf_conn *)skb_nfct(skb);
+		}
+
 		if (ret == NF_ACCEPT && nf_ct_ecache_exist(ct))
 			nf_ct_deliver_cached_events(ct);
 	}
+14 -28
include/net/sch_generic.h
@@ -187,37 +187,17 @@
 	if (spin_trylock(&qdisc->seqlock))
 		return true;
 
-	/* Paired with smp_mb__after_atomic() to make sure
-	 * STATE_MISSED checking is synchronized with clearing
-	 * in pfifo_fast_dequeue().
+	/* No need to insist if the MISSED flag was already set.
+	 * Note that test_and_set_bit() also gives us memory ordering
+	 * guarantees wrt potential earlier enqueue() and below
+	 * spin_trylock(), both of which are necessary to prevent races
 	 */
-	smp_mb__before_atomic();
-
-	/* If the MISSED flag is set, it means other thread has
-	 * set the MISSED flag before second spin_trylock(), so
-	 * we can return false here to avoid multi cpus doing
-	 * the set_bit() and second spin_trylock() concurrently.
-	 */
-	if (test_bit(__QDISC_STATE_MISSED, &qdisc->state))
+	if (test_and_set_bit(__QDISC_STATE_MISSED, &qdisc->state))
 		return false;
 
-	/* Set the MISSED flag before the second spin_trylock(),
-	 * if the second spin_trylock() return false, it means
-	 * other cpu holding the lock will do dequeuing for us
-	 * or it will see the MISSED flag set after releasing
-	 * lock and reschedule the net_tx_action() to do the
-	 * dequeuing.
-	 */
-	set_bit(__QDISC_STATE_MISSED, &qdisc->state);
-
-	/* spin_trylock() only has load-acquire semantic, so use
-	 * smp_mb__after_atomic() to ensure STATE_MISSED is set
-	 * before doing the second spin_trylock().
-	 */
-	smp_mb__after_atomic();
-
-	/* Retry again in case other CPU may not see the new flag
-	 * after it releases the lock at the end of qdisc_run_end().
+	/* Try to take the lock again to make sure that we will either
+	 * grab it or the CPU that still has it will see MISSED set
+	 * when testing it in qdisc_run_end()
 	 */
 	return spin_trylock(&qdisc->seqlock);
 }
@@ -228,6 +208,12 @@
 {
 	if (qdisc->flags & TCQ_F_NOLOCK) {
 		spin_unlock(&qdisc->seqlock);
+
+		/* spin_unlock() only has store-release semantic. The unlock
+		 * and test_bit() ordering is a store-load ordering, so a full
+		 * memory barrier is needed here.
+		 */
+		smp_mb();
 
 		if (unlikely(test_bit(__QDISC_STATE_MISSED,
 				      &qdisc->state)))
+1 -1
include/uapi/linux/ipv6.h
@@ -194,7 +194,7 @@
 	DEVCONF_IOAM6_ID,
 	DEVCONF_IOAM6_ID_WIDE,
 	DEVCONF_NDISC_EVICT_NOCARRIER,
-	DEVCONF_ACCEPT_UNSOLICITED_NA,
+	DEVCONF_ACCEPT_UNTRACKED_NA,
 	DEVCONF_MAX
 };
 
+1 -1
include/uapi/linux/socket.h
@@ -31,7 +31,7 @@
 
 #define SOCK_BUF_LOCK_MASK (SOCK_SNDBUF_LOCK | SOCK_RCVBUF_LOCK)
 
-#define SOCK_TXREHASH_DEFAULT	((u8)-1)
+#define SOCK_TXREHASH_DEFAULT	255
 #define SOCK_TXREHASH_DISABLED	0
 #define SOCK_TXREHASH_ENABLED	1
 
+5 -9
kernel/bpf/core.c
@@ -1953,6 +1953,11 @@
 	CONT;							\
 	LDX_MEM_##SIZEOP:					\
 		DST = *(SIZE *)(unsigned long) (SRC + insn->off);	\
+		CONT;						\
+	LDX_PROBE_MEM_##SIZEOP:					\
+		bpf_probe_read_kernel(&DST, sizeof(SIZE),	\
+				      (const void *)(long) (SRC + insn->off));	\
+		DST = *((SIZE *)&DST);				\
 		CONT;
 
 	LDST(B,   u8)
@@ -1960,15 +1965,6 @@
 	LDST(W,  u32)
 	LDST(DW, u64)
 #undef LDST
-#define LDX_PROBE(SIZEOP, SIZE)					\
-	LDX_PROBE_MEM_##SIZEOP:					\
-		bpf_probe_read_kernel(&DST, SIZE, (const void *)(long) (SRC + insn->off));	\
-		CONT;
-	LDX_PROBE(B,  1)
-	LDX_PROBE(H,  2)
-	LDX_PROBE(W,  4)
-	LDX_PROBE(DW, 8)
-#undef LDX_PROBE
 
 #define ATOMIC_ALU_OP(BOP, KOP)					\
 		case BOP:					\
+1 -1
net/Kconfig.debug
@@ -20,7 +20,7 @@
 
 config DEBUG_NET
 	bool "Add generic networking debug"
-	depends on DEBUG_KERNEL
+	depends on DEBUG_KERNEL && NET
 	help
 	  Enable extra sanity checks in networking.
 	  This is mostly used by fuzzers, but is safe to select.
+17 -10
net/ax25/af_ax25.c
@@ -62,12 +62,12 @@
  */
 static void ax25_cb_del(ax25_cb *ax25)
 {
+	spin_lock_bh(&ax25_list_lock);
 	if (!hlist_unhashed(&ax25->ax25_node)) {
-		spin_lock_bh(&ax25_list_lock);
 		hlist_del_init(&ax25->ax25_node);
-		spin_unlock_bh(&ax25_list_lock);
 		ax25_cb_put(ax25);
 	}
+	spin_unlock_bh(&ax25_list_lock);
 }
 
 /*
@@ -81,6 +81,7 @@
 
 	if ((ax25_dev = ax25_dev_ax25dev(dev)) == NULL)
 		return;
+	ax25_dev->device_up = false;
 
 	spin_lock_bh(&ax25_list_lock);
 again:
@@ -92,6 +91,7 @@
 			spin_unlock_bh(&ax25_list_lock);
 			ax25_disconnect(s, ENETUNREACH);
 			s->ax25_dev = NULL;
+			ax25_cb_del(s);
 			spin_lock_bh(&ax25_list_lock);
 			goto again;
 		}
@@ -105,6 +103,7 @@
 			dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker);
 			ax25_dev_put(ax25_dev);
 		}
+		ax25_cb_del(s);
 		release_sock(sk);
 		spin_lock_bh(&ax25_list_lock);
 		sock_put(sk);
@@ -998,9 +995,11 @@
 	if (sk->sk_type == SOCK_SEQPACKET) {
 		switch (ax25->state) {
 		case AX25_STATE_0:
-			release_sock(sk);
-			ax25_disconnect(ax25, 0);
-			lock_sock(sk);
+			if (!sock_flag(ax25->sk, SOCK_DEAD)) {
+				release_sock(sk);
+				ax25_disconnect(ax25, 0);
+				lock_sock(sk);
+			}
 			ax25_destroy_socket(ax25);
 			break;
 
@@ -1058,11 +1053,13 @@
 		ax25_destroy_socket(ax25);
 	}
 	if (ax25_dev) {
-		del_timer_sync(&ax25->timer);
-		del_timer_sync(&ax25->t1timer);
-		del_timer_sync(&ax25->t2timer);
-		del_timer_sync(&ax25->t3timer);
-		del_timer_sync(&ax25->idletimer);
+		if (!ax25_dev->device_up) {
+			del_timer_sync(&ax25->timer);
+			del_timer_sync(&ax25->t1timer);
+			del_timer_sync(&ax25->t2timer);
+			del_timer_sync(&ax25->t3timer);
+			del_timer_sync(&ax25->idletimer);
+		}
 		dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker);
 		ax25_dev_put(ax25_dev);
 	}
+1
net/ax25/ax25_dev.c
@@ -62,6 +62,7 @@
 	ax25_dev->dev     = dev;
 	dev_hold_track(dev, &ax25_dev->dev_tracker, GFP_ATOMIC);
 	ax25_dev->forward = NULL;
+	ax25_dev->device_up = true;
 
 	ax25_dev->values[AX25_VALUES_IPDEFMODE] = AX25_DEF_IPDEFMODE;
 	ax25_dev->values[AX25_VALUES_AXDEFMODE] = AX25_DEF_AXDEFMODE;
+1 -1
net/ax25/ax25_subr.c
@@ -268,7 +268,7 @@
 		del_timer_sync(&ax25->t3timer);
 		del_timer_sync(&ax25->idletimer);
 	} else {
-		if (!ax25->sk || !sock_flag(ax25->sk, SOCK_DESTROY))
+		if (ax25->sk && !sock_flag(ax25->sk, SOCK_DESTROY))
 			ax25_stop_heartbeat(ax25);
 		ax25_stop_t1timer(ax25);
 		ax25_stop_t2timer(ax25);
+1 -1
net/core/neighbour.c
@@ -1579,7 +1579,7 @@
 	list_for_each_entry(neigh, &tbl->managed_list, managed_list)
 		neigh_event_send_probe(neigh, NULL, false);
 	queue_delayed_work(system_power_efficient_wq, &tbl->managed_work,
-			   NEIGH_VAR(&tbl->parms, DELAY_PROBE_TIME));
+			   max(NEIGH_VAR(&tbl->parms, DELAY_PROBE_TIME), HZ));
 	write_unlock_bh(&tbl->lock);
 }
 
+7 -4
net/ipv4/tcp_input.c
@@ -2706,12 +2706,15 @@
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct inet_connection_sock *icsk = inet_csk(sk);
+	u64 val;
 
-	/* FIXME: breaks with very large cwnd */
 	tp->prior_ssthresh = tcp_current_ssthresh(sk);
-	tcp_snd_cwnd_set(tp, tcp_snd_cwnd(tp) *
-			 tcp_mss_to_mtu(sk, tp->mss_cache) /
-			 icsk->icsk_mtup.probe_size);
+
+	val = (u64)tcp_snd_cwnd(tp) * tcp_mss_to_mtu(sk, tp->mss_cache);
+	do_div(val, icsk->icsk_mtup.probe_size);
+	DEBUG_NET_WARN_ON_ONCE((u32)val != val);
+	tcp_snd_cwnd_set(tp, max_t(u32, 1U, val));
+
 	tp->snd_cwnd_cnt = 0;
 	tp->snd_cwnd_stamp = tcp_jiffies32;
 	tp->snd_ssthresh = tcp_current_ssthresh(sk);
+2 -2
net/ipv4/tcp_ipv4.c
@@ -1207,8 +1207,8 @@
 	key->l3index = l3index;
 	key->flags = flags;
 	memcpy(&key->addr, addr,
-	       (family == AF_INET6) ? sizeof(struct in6_addr) :
-				      sizeof(struct in_addr));
+	       (IS_ENABLED(CONFIG_IPV6) && family == AF_INET6) ? sizeof(struct in6_addr) :
+								 sizeof(struct in_addr));
 	hlist_add_head_rcu(&key->node, &md5sig->head);
 	return 0;
 }
+2 -2
net/ipv4/tcp_output.c
@@ -4115,8 +4115,8 @@
 	res = af_ops->send_synack(sk, NULL, &fl, req, NULL, TCP_SYNACK_NORMAL,
 				  NULL);
 	if (!res) {
-		__TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS);
-		__NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSYNRETRANS);
+		TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS);
+		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSYNRETRANS);
 		if (unlikely(tcp_passive_fastopen(sk)))
 			tcp_sk(sk)->total_retrans++;
 		trace_tcp_retransmit_synack(sk, req);
+3 -3
net/ipv6/addrconf.c
@@ -5586,7 +5586,7 @@
 	array[DEVCONF_IOAM6_ID] = cnf->ioam6_id;
 	array[DEVCONF_IOAM6_ID_WIDE] = cnf->ioam6_id_wide;
 	array[DEVCONF_NDISC_EVICT_NOCARRIER] = cnf->ndisc_evict_nocarrier;
-	array[DEVCONF_ACCEPT_UNSOLICITED_NA] = cnf->accept_unsolicited_na;
+	array[DEVCONF_ACCEPT_UNTRACKED_NA] = cnf->accept_untracked_na;
 }
 
 static inline size_t inet6_ifla6_size(void)
@@ -7038,8 +7038,8 @@
 		.extra2		= (void *)SYSCTL_ONE,
 	},
 	{
-		.procname	= "accept_unsolicited_na",
-		.data		= &ipv6_devconf.accept_unsolicited_na,
+		.procname	= "accept_untracked_na",
+		.data		= &ipv6_devconf.accept_untracked_na,
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
+25 -17
net/ipv6/ndisc.c
@@ -979,7 +979,7 @@
 	struct inet6_dev *idev = __in6_dev_get(dev);
 	struct inet6_ifaddr *ifp;
 	struct neighbour *neigh;
-	bool create_neigh;
+	u8 new_state;
 
 	if (skb->len < sizeof(struct nd_msg)) {
 		ND_PRINTK(2, warn, "NA: packet too short\n");
@@ -1000,7 +1000,7 @@
 	/* For some 802.11 wireless deployments (and possibly other networks),
 	 * there will be a NA proxy and unsolicitd packets are attacks
 	 * and thus should not be accepted.
-	 * drop_unsolicited_na takes precedence over accept_unsolicited_na
+	 * drop_unsolicited_na takes precedence over accept_untracked_na
 	 */
 	if (!msg->icmph.icmp6_solicited && idev &&
 	    idev->cnf.drop_unsolicited_na)
@@ -1041,25 +1041,33 @@
 		in6_ifa_put(ifp);
 		return;
 	}
+
+	neigh = neigh_lookup(&nd_tbl, &msg->target, dev);
+
 	/* RFC 9131 updates original Neighbour Discovery RFC 4861.
-	 * An unsolicited NA can now create a neighbour cache entry
-	 * on routers if it has Target LL Address option.
+	 * NAs with Target LL Address option without a corresponding
+	 * entry in the neighbour cache can now create a STALE neighbour
+	 * cache entry on routers.
 	 *
-	 * drop   accept  fwding  behaviour
-	 * ----   ------  ------  ----------------------------------------------
-	 *    1        X       X  Drop NA packet and don't pass up the stack
-	 *    0        0       X  Pass NA packet up the stack, don't update NC
-	 *    0        1       0  Pass NA packet up the stack, don't update NC
-	 *    0        1       1  Pass NA packet up the stack, and add a STALE
-	 *                        NC entry
+	 *   entry  accept  fwding  solicited  behaviour
	 * -------  ------  ------  ---------  ----------------------
	 * present       X       X          0  Set state to STALE
	 * present       X       X          1  Set state to REACHABLE
	 *  absent       0       X          X  Do nothing
	 *  absent       1       0          X  Do nothing
	 *  absent       1       1          X  Add a new STALE entry
+	 *
 	 * Note that we don't do a (daddr == all-routers-mcast) check.
 	 */
-	create_neigh = !msg->icmph.icmp6_solicited && lladdr &&
-		       idev && idev->cnf.forwarding &&
-		       idev->cnf.accept_unsolicited_na;
-	neigh = __neigh_lookup(&nd_tbl, &msg->target, dev, create_neigh);
+	new_state = msg->icmph.icmp6_solicited ? NUD_REACHABLE : NUD_STALE;
+	if (!neigh && lladdr &&
+	    idev && idev->cnf.forwarding &&
+	    idev->cnf.accept_untracked_na) {
+		neigh = neigh_create(&nd_tbl, &msg->target, dev);
+		new_state = NUD_STALE;
+	}
 
-	if (neigh) {
+	if (neigh && !IS_ERR(neigh)) {
 		u8 old_flags = neigh->flags;
 		struct net *net = dev_net(dev);
@@ -1087,7 +1079,7 @@
 	}
 
 	ndisc_update(dev, neigh, lladdr,
-		     msg->icmph.icmp6_solicited ? NUD_REACHABLE : NUD_STALE,
+		     new_state,
 		     NEIGH_UPDATE_F_WEAK_OVERRIDE|
 		     (msg->icmph.icmp6_override ? NEIGH_UPDATE_F_OVERRIDE : 0)|
 		     NEIGH_UPDATE_F_OVERRIDE_ISROUTER|
+4 -4
net/ipv6/ping.c
@@ -101,6 +101,9 @@
 	ipc6.sockc.tsflags = sk->sk_tsflags;
 	ipc6.sockc.mark = sk->sk_mark;
 
+	memset(&fl6, 0, sizeof(fl6));
+	fl6.flowi6_oif = oif;
+
 	if (msg->msg_controllen) {
 		struct ipv6_txoptions opt = {};
 
@@ -115,17 +112,13 @@
 			return err;
 
 		/* Changes to txoptions and flow info are not implemented, yet.
-		 * Drop the options, fl6 is wiped below.
+		 * Drop the options.
 		 */
 		ipc6.opt = NULL;
 	}
 
-	memset(&fl6, 0, sizeof(fl6));
-
 	fl6.flowi6_proto = IPPROTO_ICMPV6;
 	fl6.saddr = np->saddr;
 	fl6.daddr = *daddr;
-	fl6.flowi6_oif = oif;
 	fl6.flowi6_mark = ipc6.sockc.mark;
 	fl6.flowi6_uid = sk->sk_uid;
 	fl6.fl6_icmp_type = user_icmph.icmp6_type;
+6 -4
net/key/af_key.c
@@ -2826,10 +2826,12 @@
 	void *ext_hdrs[SADB_EXT_MAX];
 	int err;
 
-	err = pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL,
-			      BROADCAST_PROMISC_ONLY, NULL, sock_net(sk));
-	if (err)
-		return err;
+	/* Non-zero return value of pfkey_broadcast() does not always signal
+	 * an error and even on an actual error we may still want to process
+	 * the message so rather ignore the return value.
+	 */
+	pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL,
+			BROADCAST_PROMISC_ONLY, NULL, sock_net(sk));
 
 	memset(ext_hdrs, 0, sizeof(ext_hdrs));
 	err = parse_exthdrs(skb, hdr, ext_hdrs);
+2 -5
net/mac80211/chan.c
···
 
 	if (new_ctx->replace_state == IEEE80211_CHANCTX_REPLACE_NONE) {
 		if (old_ctx)
-			err = ieee80211_vif_use_reserved_reassign(sdata);
-		else
-			err = ieee80211_vif_use_reserved_assign(sdata);
+			return ieee80211_vif_use_reserved_reassign(sdata);
 
-		if (err)
-			return err;
+		return ieee80211_vif_use_reserved_assign(sdata);
 	}
 
 	/*
+77 -29
net/netfilter/nf_tables_api.c
···
 }
 
 static void nft_netdev_unregister_hooks(struct net *net,
-					struct list_head *hook_list)
+					struct list_head *hook_list,
+					bool release_netdev)
 {
-	struct nft_hook *hook;
+	struct nft_hook *hook, *next;
 
-	list_for_each_entry(hook, hook_list, list)
+	list_for_each_entry_safe(hook, next, hook_list, list) {
 		nf_unregister_net_hook(net, &hook->ops);
+		if (release_netdev) {
+			list_del(&hook->list);
+			kfree_rcu(hook, rcu);
+		}
+	}
 }
 
 static int nf_tables_register_hook(struct net *net,
···
 	return nf_register_net_hook(net, &basechain->ops);
 }
 
-static void nf_tables_unregister_hook(struct net *net,
-				      const struct nft_table *table,
-				      struct nft_chain *chain)
+static void __nf_tables_unregister_hook(struct net *net,
+					const struct nft_table *table,
+					struct nft_chain *chain,
+					bool release_netdev)
 {
 	struct nft_base_chain *basechain;
 	const struct nf_hook_ops *ops;
···
 		return basechain->type->ops_unregister(net, ops);
 
 	if (nft_base_chain_netdev(table->family, basechain->ops.hooknum))
-		nft_netdev_unregister_hooks(net, &basechain->hook_list);
+		nft_netdev_unregister_hooks(net, &basechain->hook_list,
+					    release_netdev);
 	else
 		nf_unregister_net_hook(net, &basechain->ops);
+}
+
+static void nf_tables_unregister_hook(struct net *net,
+				      const struct nft_table *table,
+				      struct nft_chain *chain)
+{
+	return __nf_tables_unregister_hook(net, table, chain, false);
 }
 
 static void nft_trans_commit_list_add_tail(struct net *net, struct nft_trans *trans)
···
 
 	err = nf_tables_expr_parse(ctx, nla, &expr_info);
 	if (err < 0)
-		goto err1;
+		goto err_expr_parse;
+
+	err = -EOPNOTSUPP;
+	if (!(expr_info.ops->type->flags & NFT_EXPR_STATEFUL))
+		goto err_expr_stateful;
 
 	err = -ENOMEM;
 	expr = kzalloc(expr_info.ops->size, GFP_KERNEL_ACCOUNT);
 	if (expr == NULL)
-		goto err2;
+		goto err_expr_stateful;
 
 	err = nf_tables_newexpr(ctx, &expr_info, expr);
 	if (err < 0)
-		goto err3;
+		goto err_expr_new;
 
 	return expr;
-err3:
+err_expr_new:
 	kfree(expr);
-err2:
+err_expr_stateful:
 	owner = expr_info.ops->type->owner;
 	if (expr_info.ops->type->release_ops)
 		expr_info.ops->type->release_ops(expr_info.ops);
 
 	module_put(owner);
-err1:
+err_expr_parse:
 	return ERR_PTR(err);
 }
···
 	u32 len;
 	int err;
 
+	if (desc->field_count >= ARRAY_SIZE(desc->field_len))
+		return -E2BIG;
+
 	err = nla_parse_nested_deprecated(tb, NFTA_SET_FIELD_MAX, attr,
 					  nft_concat_policy, NULL);
 	if (err < 0)
···
 		return -EINVAL;
 
 	len = ntohl(nla_get_be32(tb[NFTA_SET_FIELD_LEN]));
-
-	if (len * BITS_PER_BYTE / 32 > NFT_REG32_COUNT)
-		return -E2BIG;
+	if (!len || len > U8_MAX)
+		return -EINVAL;
 
 	desc->field_len[desc->field_count++] = len;
 
···
 				const struct nlattr *nla)
 {
 	struct nlattr *attr;
-	int rem, err;
+	u32 num_regs = 0;
+	int rem, err, i;
 
 	nla_for_each_nested(attr, nla, rem) {
 		if (nla_type(attr) != NFTA_LIST_ELEM)
···
 		if (err < 0)
 			return err;
 	}
+
+	for (i = 0; i < desc->field_count; i++)
+		num_regs += DIV_ROUND_UP(desc->field_len[i], sizeof(u32));
+
+	if (num_regs > NFT_REG32_COUNT)
+		return -E2BIG;
 
 	return 0;
 }
···
 
 	nla_for_each_nested(attr, nla[NFTA_SET_ELEM_LIST_ELEMENTS], rem) {
 		err = nft_get_set_elem(&ctx, set, attr);
-		if (err < 0)
+		if (err < 0) {
+			NL_SET_BAD_ATTR(extack, attr);
 			break;
+		}
 	}
 
 	return err;
···
 		return expr;
 
 	err = -EOPNOTSUPP;
-	if (!(expr->ops->type->flags & NFT_EXPR_STATEFUL))
-		goto err_set_elem_expr;
-
 	if (expr->ops->type->flags & NFT_EXPR_GC) {
 		if (set->flags & NFT_SET_TIMEOUT)
 			goto err_set_elem_expr;
···
 
 	nla_for_each_nested(attr, nla[NFTA_SET_ELEM_LIST_ELEMENTS], rem) {
 		err = nft_add_set_elem(&ctx, set, attr, info->nlh->nlmsg_flags);
-		if (err < 0)
+		if (err < 0) {
+			NL_SET_BAD_ATTR(extack, attr);
 			return err;
+		}
 	}
 
 	if (nft_net->validate_state == NFT_VALIDATE_DO)
···
 
 	nla_for_each_nested(attr, nla[NFTA_SET_ELEM_LIST_ELEMENTS], rem) {
 		err = nft_del_setelem(&ctx, set, attr);
-		if (err < 0)
+		if (err < 0) {
+			NL_SET_BAD_ATTR(extack, attr);
 			break;
+		}
 	}
 	return err;
 }
···
 					    FLOW_BLOCK_UNBIND);
 }
 
+static void __nft_unregister_flowtable_net_hooks(struct net *net,
+						 struct list_head *hook_list,
+						 bool release_netdev)
+{
+	struct nft_hook *hook, *next;
+
+	list_for_each_entry_safe(hook, next, hook_list, list) {
+		nf_unregister_net_hook(net, &hook->ops);
+		if (release_netdev) {
+			list_del(&hook->list);
+			kfree_rcu(hook);
+		}
+	}
+}
+
 static void nft_unregister_flowtable_net_hooks(struct net *net,
 					       struct list_head *hook_list)
 {
-	struct nft_hook *hook;
-
-	list_for_each_entry(hook, hook_list, list)
-		nf_unregister_net_hook(net, &hook->ops);
+	__nft_unregister_flowtable_net_hooks(net, hook_list, false);
 }
 
 static int nft_register_flowtable_net_hooks(struct net *net,
···
 	struct nft_chain *chain;
 
 	list_for_each_entry(chain, &table->chains, list)
-		nf_tables_unregister_hook(net, table, chain);
+		__nf_tables_unregister_hook(net, table, chain, true);
 	list_for_each_entry(flowtable, &table->flowtables, list)
-		nft_unregister_flowtable_net_hooks(net, &flowtable->hook_list);
+		__nft_unregister_flowtable_net_hooks(net, &flowtable->hook_list,
+						     true);
 }
 
 static void __nft_release_hooks(struct net *net)
···
 
 static void __net_exit nf_tables_pre_exit_net(struct net *net)
 {
+	struct nftables_pernet *nft_net = nft_pernet(net);
+
+	mutex_lock(&nft_net->commit_mutex);
 	__nft_release_hooks(net);
+	mutex_unlock(&nft_net->commit_mutex);
 }
 
 static void __net_exit nf_tables_exit_net(struct net *net)
+5 -19
net/netfilter/nfnetlink.c
···
 static unsigned int nfnetlink_pernet_id __read_mostly;
 
 struct nfnl_net {
-	unsigned int ctnetlink_listeners;
 	struct sock *nfnl;
 };
 
···
 
 #ifdef CONFIG_NF_CONNTRACK_EVENTS
 	if (type == NFNL_SUBSYS_CTNETLINK) {
-		struct nfnl_net *nfnlnet = nfnl_pernet(net);
-
 		nfnl_lock(NFNL_SUBSYS_CTNETLINK);
-
-		if (WARN_ON_ONCE(nfnlnet->ctnetlink_listeners == UINT_MAX)) {
-			nfnl_unlock(NFNL_SUBSYS_CTNETLINK);
-			return -EOVERFLOW;
-		}
-
-		nfnlnet->ctnetlink_listeners++;
-		if (nfnlnet->ctnetlink_listeners == 1)
-			WRITE_ONCE(net->ct.ctnetlink_has_listener, true);
+		WRITE_ONCE(net->ct.ctnetlink_has_listener, true);
 		nfnl_unlock(NFNL_SUBSYS_CTNETLINK);
 	}
 #endif
···
 static void nfnetlink_unbind(struct net *net, int group)
 {
 #ifdef CONFIG_NF_CONNTRACK_EVENTS
-	int type = nfnl_group2type[group];
+	if (group <= NFNLGRP_NONE || group > NFNLGRP_MAX)
+		return;
 
-	if (type == NFNL_SUBSYS_CTNETLINK) {
-		struct nfnl_net *nfnlnet = nfnl_pernet(net);
-
+	if (nfnl_group2type[group] == NFNL_SUBSYS_CTNETLINK) {
 		nfnl_lock(NFNL_SUBSYS_CTNETLINK);
-		WARN_ON_ONCE(nfnlnet->ctnetlink_listeners == 0);
-		nfnlnet->ctnetlink_listeners--;
-		if (nfnlnet->ctnetlink_listeners == 0)
+		if (!nfnetlink_has_listeners(net, group))
 			WRITE_ONCE(net->ct.ctnetlink_has_listener, false);
 		nfnl_unlock(NFNL_SUBSYS_CTNETLINK);
 	}
+4 -2
net/netfilter/nft_flow_offload.c
···
 	switch (nft_pf(pkt)) {
 	case NFPROTO_IPV4:
 		fl.u.ip4.daddr = ct->tuplehash[dir].tuple.src.u3.ip;
-		fl.u.ip4.saddr = ct->tuplehash[dir].tuple.dst.u3.ip;
+		fl.u.ip4.saddr = ct->tuplehash[!dir].tuple.src.u3.ip;
 		fl.u.ip4.flowi4_oif = nft_in(pkt)->ifindex;
 		fl.u.ip4.flowi4_iif = this_dst->dev->ifindex;
 		fl.u.ip4.flowi4_tos = RT_TOS(ip_hdr(pkt->skb)->tos);
 		fl.u.ip4.flowi4_mark = pkt->skb->mark;
+		fl.u.ip4.flowi4_flags = FLOWI_FLAG_ANYSRC;
 		break;
 	case NFPROTO_IPV6:
 		fl.u.ip6.daddr = ct->tuplehash[dir].tuple.src.u3.in6;
-		fl.u.ip6.saddr = ct->tuplehash[dir].tuple.dst.u3.in6;
+		fl.u.ip6.saddr = ct->tuplehash[!dir].tuple.src.u3.in6;
 		fl.u.ip6.flowi6_oif = nft_in(pkt)->ifindex;
 		fl.u.ip6.flowi6_iif = this_dst->dev->ifindex;
 		fl.u.ip6.flowlabel = ip6_flowinfo(ipv6_hdr(pkt->skb));
 		fl.u.ip6.flowi6_mark = pkt->skb->mark;
+		fl.u.ip6.flowi6_flags = FLOWI_FLAG_ANYSRC;
 		break;
 	}
 
+2
net/netfilter/nft_limit.c
···
 	struct nft_limit_priv_pkts *priv_dst = nft_expr_priv(dst);
 	struct nft_limit_priv_pkts *priv_src = nft_expr_priv(src);
 
+	priv_dst->cost = priv_src->cost;
+
 	return nft_limit_clone(&priv_dst->limit, &priv_src->limit);
 }
 
+2 -2
net/nfc/core.c
···
 		kfree(se);
 	}
 
-	ida_simple_remove(&nfc_index_ida, dev->idx);
+	ida_free(&nfc_index_ida, dev->idx);
 
 	kfree(dev);
 }
···
 	if (!dev)
 		return NULL;
 
-	rc = ida_simple_get(&nfc_index_ida, 0, 0, GFP_KERNEL);
+	rc = ida_alloc(&nfc_index_ida, GFP_KERNEL);
 	if (rc < 0)
 		goto err_free_dev;
 	dev->idx = rc;
+4 -2
net/packet/af_packet.c
···
 	/* Move network header to the right position for VLAN tagged packets */
 	if (likely(skb->dev->type == ARPHRD_ETHER) &&
 	    eth_type_vlan(skb->protocol) &&
-	    __vlan_get_protocol(skb, skb->protocol, &depth) != 0)
-		skb_set_network_header(skb, depth);
+	    __vlan_get_protocol(skb, skb->protocol, &depth) != 0) {
+		if (pskb_may_pull(skb, depth))
+			skb_set_network_header(skb, depth);
+	}
 
 	skb_probe_transport_header(skb);
 }
+1 -1
net/sched/act_ct.c
···
 		break;
 #endif
 	default:
-		return -1;
+		return false;
 	}
 
 	if (ip6h->hop_limit <= 1)
+1
net/smc/af_smc.c
···
 
 not_found:
 	ini->smcr_version &= ~SMC_V2;
+	ini->smcrv2.ib_dev_v2 = NULL;
 	ini->check_smcrv2 = false;
 }
 
+1 -1
net/smc/smc_cdc.c
···
 		/* abnormal termination */
 		if (!rc)
 			smc_wr_tx_put_slot(link,
-					   (struct smc_wr_tx_pend_priv *)pend);
+					   (struct smc_wr_tx_pend_priv *)(*pend));
 		rc = -EPIPE;
 	}
 	return rc;
+1 -2
net/tipc/bearer.c
···
 	u32 i;
 
 	if (!bearer_name_validate(name, &b_names)) {
-		errstr = "illegal name";
 		NL_SET_ERR_MSG(extack, "Illegal name");
-		goto rejected;
+		return res;
 	}
 
 	if (prio > TIPC_MAX_LINK_PRI && prio != TIPC_MEDIA_LINK_PRI) {
+2 -1
net/xfrm/xfrm_output.c
···
  */
 static int xfrm4_tunnel_encap_add(struct xfrm_state *x, struct sk_buff *skb)
 {
+	bool small_ipv6 = (skb->protocol == htons(ETH_P_IPV6)) && (skb->len <= IPV6_MIN_MTU);
 	struct dst_entry *dst = skb_dst(skb);
 	struct iphdr *top_iph;
 	int flags;
···
 	if (flags & XFRM_STATE_NOECN)
 		IP_ECN_clear(top_iph);
 
-	top_iph->frag_off = (flags & XFRM_STATE_NOPMTUDISC) ?
+	top_iph->frag_off = (flags & XFRM_STATE_NOPMTUDISC) || small_ipv6 ?
 			    0 : (XFRM_MODE_SKB_CB(skb)->frag_off & htons(IP_DF));
 
 	top_iph->ttl = ip4_dst_hoplimit(xfrm_dst_child(dst));
+1 -1
tools/testing/selftests/bpf/progs/test_stacktrace_build_id.c
···
 	__type(value, stack_trace_t);
 } stack_amap SEC(".maps");
 
-SEC("kprobe/urandom_read")
+SEC("kprobe/urandom_read_iter")
 int oncpu(struct pt_regs *args)
 {
 	__u32 max_len = sizeof(struct bpf_stack_build_id)
+11 -12
tools/testing/selftests/net/ndisc_unsolicited_na_test.sh
···
 #!/bin/bash
 # SPDX-License-Identifier: GPL-2.0
 
-# This test is for the accept_unsolicited_na feature to
+# This test is for the accept_untracked_na feature to
 # enable RFC9131 behaviour. The following is the test-matrix.
 # drop   accept  fwding  behaviour
 # ----   ------  ------  ----------------------------------------------
-#    1        X       X  Drop NA packet and don't pass up the stack
-#    0        0       X  Pass NA packet up the stack, don't update NC
-#    0        1       0  Pass NA packet up the stack, don't update NC
-#    0        1       1  Pass NA packet up the stack, and add a STALE
-#                        NC entry
+#    1        X       X  Don't update NC
+#    0        0       X  Don't update NC
+#    0        1       0  Don't update NC
+#    0        1       1  Add a STALE NC entry
 
 ret=0
 # Kselftest framework requirement - SKIP code is 4.
···
 	set -e
 
 	local drop_unsolicited_na=$1
-	local accept_unsolicited_na=$2
+	local accept_untracked_na=$2
 	local forwarding=$3
 
 	# Setup two namespaces and a veth tunnel across them.
···
 	${IP_ROUTER_EXEC} sysctl -qw \
 		${ROUTER_CONF}.drop_unsolicited_na=${drop_unsolicited_na}
 	${IP_ROUTER_EXEC} sysctl -qw \
-		${ROUTER_CONF}.accept_unsolicited_na=${accept_unsolicited_na}
+		${ROUTER_CONF}.accept_untracked_na=${accept_untracked_na}
 	${IP_ROUTER_EXEC} sysctl -qw ${ROUTER_CONF}.disable_ipv6=0
 	${IP_ROUTER} addr add ${ROUTER_ADDR_WITH_MASK} dev ${ROUTER_INTF}
 
···
 
 verify_ndisc() {
 	local drop_unsolicited_na=$1
-	local accept_unsolicited_na=$2
+	local accept_untracked_na=$2
 	local forwarding=$3
 
 	neigh_show_output=$(${IP_ROUTER} neigh show \
 		to ${HOST_ADDR} dev ${ROUTER_INTF} nud stale)
 	if [ ${drop_unsolicited_na} -eq 0 ] && \
-	   [ ${accept_unsolicited_na} -eq 1 ] && \
+	   [ ${accept_untracked_na} -eq 1 ] && \
 	   [ ${forwarding} -eq 1 ]; then
 		# Neighbour entry expected to be present for 011 case
 		[[ ${neigh_show_output} ]]
···
 	test_unsolicited_na_common $1 $2 $3
 	test_msg=("test_unsolicited_na: "
 		"drop_unsolicited_na=$1 "
-		"accept_unsolicited_na=$2 "
+		"accept_untracked_na=$2 "
 		"forwarding=$3")
 	log_test $? 0 "${test_msg[*]}"
 	cleanup
 }
 
 test_unsolicited_na_combinations() {
-	# Args: drop_unsolicited_na accept_unsolicited_na forwarding
+	# Args: drop_unsolicited_na accept_untracked_na forwarding
 
 	# Expect entry
 	test_unsolicited_na_combination 0 1 1
+2
tools/testing/selftests/net/psock_snd.c
··· 389 389 error(1, errno, "ip link set mtu"); 390 390 if (system("ip addr add dev lo 172.17.0.1/24")) 391 391 error(1, errno, "ip addr add"); 392 + if (system("sysctl -w net.ipv4.conf.lo.accept_local=1")) 393 + error(1, errno, "sysctl lo.accept_local"); 392 394 393 395 run_test(); 394 396