
Merge tag 'net-5.10-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Networking fixes for 5.10-rc7, including fixes from bpf, netfilter,
wireless drivers, wireless mesh and can.

Current release - regressions:

- mt76: usb: fix crash on device removal

Current release - always broken:

- xsk: Fix umem cleanup from wrong context in socket destruct

Previous release - regressions:

- net: ip6_gre: set dev->hard_header_len when using header_ops

- ipv4: Fix TOS mask in inet_rtm_getroute()

- net, xsk: Avoid taking multiple skbuff references

Previous release - always broken:

- net/x25: prevent a couple of overflows

- netfilter: ipset: prevent uninit-value in hash_ip6_add

- geneve: pull IP header before ECN decapsulation

- mpls: ensure LSE is pullable in TC and openvswitch paths

- vxlan: respect needed_headroom of lower device

- batman-adv: Consider fragmentation for needed packet headroom

- can: drivers: don't count arbitration loss as an error

- netfilter: bridge: reset skb->pkt_type after POST_ROUTING traversal

- inet_ecn: Fix endianness of checksum update when setting ECT(1)

- ibmvnic: fix various corner cases around reset handling

- net/mlx5: fix rejecting unsupported Connect-X6DX SW steering

- net/mlx5: Enforce HW TX csum offload with kTLS"

* tag 'net-5.10-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (62 commits)
net/mlx5: DR, Proper handling of unsupported Connect-X6DX SW steering
net/mlx5e: kTLS, Enforce HW TX csum offload with kTLS
net: mlx5e: fix fs_tcp.c build when IPV6 is not enabled
net/mlx5: Fix wrong address reclaim when command interface is down
net/sched: act_mpls: ensure LSE is pullable before reading it
net: openvswitch: ensure LSE is pullable before reading it
net: skbuff: ensure LSE is pullable before decrementing the MPLS ttl
net: mvpp2: Fix error return code in mvpp2_open()
chelsio/chtls: fix a double free in chtls_setkey()
rtw88: debug: Fix uninitialized memory in debugfs code
vxlan: fix error return code in __vxlan_dev_create()
net: pasemi: fix error return code in pasemi_mac_open()
cxgb3: fix error return code in t3_sge_alloc_qset()
net/x25: prevent a couple of overflows
dpaa_eth: copy timestamp fields to new skb in A-050385 workaround
net: ip6_gre: set dev->hard_header_len when using header_ops
mt76: usb: fix crash on device removal
iwlwifi: pcie: add some missing entries for AX210
iwlwifi: pcie: invert values of NO_160 device config entries
iwlwifi: pcie: add one missing entry for AX210
...

+496 -199
+1 -1
Documentation/devicetree/bindings/net/can/tcan4x5x.txt
···
33 33      spi-max-frequency = <10000000>;
34 34      bosch,mram-cfg = <0x0 0 0 32 0 0 1 1>;
35 35      interrupt-parent = <&gpio1>;
36 -       interrupts = <14 GPIO_ACTIVE_LOW>;
36 +       interrupts = <14 IRQ_TYPE_LEVEL_LOW>;
37 37      device-state-gpios = <&gpio3 21 GPIO_ACTIVE_HIGH>;
38 38      device-wake-gpios = <&gpio1 15 GPIO_ACTIVE_HIGH>;
39 39      reset-gpios = <&gpio1 27 GPIO_ACTIVE_HIGH>;
+1 -1
Documentation/devicetree/bindings/net/nfc/nxp-nci.txt
···
25 25      clock-frequency = <100000>;
26 26
27 27      interrupt-parent = <&gpio1>;
28 -       interrupts = <29 GPIO_ACTIVE_HIGH>;
28 +       interrupts = <29 IRQ_TYPE_LEVEL_HIGH>;
29 29
30 30      enable-gpios = <&gpio0 30 GPIO_ACTIVE_HIGH>;
31 31      firmware-gpios = <&gpio0 31 GPIO_ACTIVE_HIGH>;
+1 -1
Documentation/devicetree/bindings/net/nfc/pn544.txt
···
25 25      clock-frequency = <400000>;
26 26
27 27      interrupt-parent = <&gpio1>;
28 -       interrupts = <17 GPIO_ACTIVE_HIGH>;
28 +       interrupts = <17 IRQ_TYPE_LEVEL_HIGH>;
29 29
30 30      enable-gpios = <&gpio3 21 GPIO_ACTIVE_HIGH>;
31 31      firmware-gpios = <&gpio3 19 GPIO_ACTIVE_HIGH>;
+21 -5
MAINTAINERS
···
3357 3357  F: arch/x86/net/
3358 3358  X: arch/x86/net/bpf_jit_comp32.c
3359 3359
3360 +     BPF LSM (Security Audit and Enforcement using BPF)
3361 +     M: KP Singh <kpsingh@chromium.org>
3362 +     R: Florent Revest <revest@chromium.org>
3363 +     R: Brendan Jackman <jackmanb@chromium.org>
3364 +     L: bpf@vger.kernel.org
3365 +     S: Maintained
3366 +     F: Documentation/bpf/bpf_lsm.rst
3367 +     F: include/linux/bpf_lsm.h
3368 +     F: kernel/bpf/bpf_lsm.c
3369 +     F: security/bpf/
3370 +
3360 3371  BROADCOM B44 10/100 ETHERNET DRIVER
3361 3372  M: Michael Chan <michael.chan@broadcom.com>
3362 3373  L: netdev@vger.kernel.org
···
9080 9069  F: drivers/net/wireless/intel/iwlegacy/
9081 9070
9082 9071  INTEL WIRELESS WIFI LINK (iwlwifi)
9083 -     M: Johannes Berg <johannes.berg@intel.com>
9084 -     M: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
9085 9072  M: Luca Coelho <luciano.coelho@intel.com>
9086 -     M: Intel Linux Wireless <linuxwifi@intel.com>
9087 9073  L: linux-wireless@vger.kernel.org
9088 9074  S: Supported
9089 9075  W: https://wireless.wiki.kernel.org/en/users/drivers/iwlwifi
···
19122 19114  L: bpf@vger.kernel.org
19123 19115  S: Supported
19124 19116  F: include/net/xdp.h
19117 +      F: include/net/xdp_priv.h
19125 19118  F: include/trace/events/xdp.h
19126 19119  F: kernel/bpf/cpumap.c
19127 19120  F: kernel/bpf/devmap.c
19128 19121  F: net/core/xdp.c
19129 -      N: xdp
19130 -      K: xdp
19122 +      F: samples/bpf/xdp*
19123 +      F: tools/testing/selftests/bpf/*xdp*
19124 +      F: tools/testing/selftests/bpf/*/*xdp*
19125 +      F: drivers/net/ethernet/*/*/*/*/*xdp*
19126 +      F: drivers/net/ethernet/*/*/*xdp*
19127 +      K: (?:\b|_)xdp(?:\b|_)
19131 19128
19132 19129  XDP SOCKETS (AF_XDP)
19133 19130  M: Björn Töpel <bjorn.topel@intel.com>
···
19141 19128  L: netdev@vger.kernel.org
19142 19129  L: bpf@vger.kernel.org
19143 19130  S: Maintained
19131 +      F: Documentation/networking/af_xdp.rst
19144 19132  F: include/net/xdp_sock*
19145 19133  F: include/net/xsk_buff_pool.h
19146 19134  F: include/uapi/linux/if_xdp.h
19135 +      F: include/uapi/linux/xdp_diag.h
19136 +      F: include/net/netns/xdp.h
19147 19137  F: net/xdp/
19148 19138  F: samples/bpf/xdpsock*
19149 19139  F: tools/lib/bpf/xsk*
+14 -4
drivers/net/can/c_can/c_can.c
···
1295 1295             time_after(time_out, jiffies))
1296 1296          cpu_relax();
1297 1297
1298 -        if (time_after(jiffies, time_out))
1299 -            return -ETIMEDOUT;
1298 +        if (time_after(jiffies, time_out)) {
1299 +            ret = -ETIMEDOUT;
1300 +            goto err_out;
1301 +        }
1300 1302
1301 1303      ret = c_can_start(dev);
1302 -        if (!ret)
1303 -            c_can_irq_control(priv, true);
1304 +        if (ret)
1305 +            goto err_out;
1306 +
1307 +        c_can_irq_control(priv, true);
1308 +
1309 +        return 0;
1310 +
1311 +    err_out:
1312 +        c_can_reset_ram(priv, false);
1313 +        c_can_pm_runtime_put_sync(priv);
1304 1314
1305 1315      return ret;
1306 1316  }
+3 -1
drivers/net/can/kvaser_pciefd.c
···
692 692          return err;
693 693
694 694      err = kvaser_pciefd_bus_on(can);
695 -        if (err)
695 +        if (err) {
696 +            close_candev(netdev);
696 697          return err;
698 +        }
697 699
698 700      return 0;
699 701  }
+3 -8
drivers/net/can/m_can/tcan4x5x.c
···
489 489      spi->bits_per_word = 32;
490 490      ret = spi_setup(spi);
491 491      if (ret)
492 -            goto out_clk;
492 +            goto out_m_can_class_free_dev;
493 493
494 494      priv->regmap = devm_regmap_init(&spi->dev, &tcan4x5x_bus,
495 495                                      &spi->dev, &tcan4x5x_regmap);
496 496      if (IS_ERR(priv->regmap)) {
497 497          ret = PTR_ERR(priv->regmap);
498 -            goto out_clk;
498 +            goto out_m_can_class_free_dev;
499 499      }
500 500
501 501      ret = tcan4x5x_power_enable(priv->power, 1);
502 502      if (ret)
503 -            goto out_clk;
503 +            goto out_m_can_class_free_dev;
504 504
505 505      ret = tcan4x5x_parse_config(mcan_class);
506 506      if (ret)
···
519 519
520 520  out_power:
521 521      tcan4x5x_power_enable(priv->power, 0);
522 -    out_clk:
523 -        if (!IS_ERR(mcan_class->cclk)) {
524 -            clk_disable_unprepare(mcan_class->cclk);
525 -            clk_disable_unprepare(mcan_class->hclk);
526 -        }
527 522  out_m_can_class_free_dev:
528 523      m_can_class_free_dev(mcan_class->net);
529 524      dev_err(&spi->dev, "Probe failed, err=%d\n", ret);
-1
drivers/net/can/sja1000/sja1000.c
···
474 474          netdev_dbg(dev, "arbitration lost interrupt\n");
475 475          alc = priv->read_reg(priv, SJA1000_ALC);
476 476          priv->can.can_stats.arbitration_lost++;
477 -            stats->tx_errors++;
478 477          cf->can_id |= CAN_ERR_LOSTARB;
479 478          cf->data[0] = alc & 0x1f;
480 479      }
-1
drivers/net/can/sun4i_can.c
···
604 604          netdev_dbg(dev, "arbitration lost interrupt\n");
605 605          alc = readl(priv->base + SUN4I_REG_STA_ADDR);
606 606          priv->can.can_stats.arbitration_lost++;
607 -            stats->tx_errors++;
608 607          if (likely(skb)) {
609 608              cf->can_id |= CAN_ERR_LOSTARB;
610 609              cf->data[0] = (alc >> 8) & 0x1f;
+1
drivers/net/ethernet/broadcom/Kconfig
···
88 88  config CNIC
89 89      tristate "QLogic CNIC support"
90 90      depends on PCI && (IPV6 || IPV6=n)
91 +       depends on MMU
91 92      select BNX2
92 93      select UIO
93 94      help
+1
drivers/net/ethernet/chelsio/cxgb3/sge.c
···
3175 3175                            GFP_KERNEL | __GFP_COMP);
3176 3176      if (!avail) {
3177 3177          CH_ALERT(adapter, "free list queue 0 initialization failed\n");
3178 +            ret = -ENOMEM;
3178 3179          goto err;
3179 3180      }
3180 3181      if (avail < q->fl[0].size)
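The cxgb3 fix above, like the mvpp2, pasemi and vxlan ones further down, is an instance of the same bug class: an error path is reached through goto while the function's return variable still holds 0, so the caller sees success for a failed open. A minimal standalone sketch of the pattern and the fix (userspace C; all names hypothetical):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static void *alloc_ring(void)
{
    return NULL;                    /* simulate allocation failure */
}

static int dev_open(void)
{
    int ret = 0;                    /* still 0 if we jump to err below */
    void *ring;

    ring = alloc_ring();
    if (!ring) {
        ret = -ENOMEM;              /* the fix: pick an errno first */
        goto err;
    }

    /* ... more setup would go here ... */
    free(ring);
    return 0;

err:
    return ret;                     /* without the assignment: "success" */
}

int main(void)
{
    printf("dev_open() = %d\n", dev_open());    /* prints -12 (-ENOMEM) */
    return 0;
}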
+1
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
···
1206 1206      sk_setup_caps(newsk, dst);
1207 1207      ctx = tls_get_ctx(lsk);
1208 1208      newsk->sk_destruct = ctx->sk_destruct;
1209 +          newsk->sk_prot_creator = lsk->sk_prot_creator;
1209 1210      csk->sk = newsk;
1210 1211      csk->passive_reap_next = oreq;
1211 1212      csk->tx_chan = cxgb4_port_chan(ndev);
+1
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_hw.c
···
391 391      csk->wr_unacked += DIV_ROUND_UP(len, 16);
392 392      enqueue_wr(csk, skb);
393 393      cxgb4_ofld_send(csk->egress_dev, skb);
394 +        skb = NULL;
394 395
395 396      chtls_set_scmd(csk);
396 397      /* Clear quiesce for Rx key */
+9 -1
drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
···
2120 2120      skb_copy_header(new_skb, skb);
2121 2121      new_skb->dev = skb->dev;
2122 2122
2123 +          /* Copy relevant timestamp info from the old skb to the new */
2124 +          if (priv->tx_tstamp) {
2125 +              skb_shinfo(new_skb)->tx_flags = skb_shinfo(skb)->tx_flags;
2126 +              skb_shinfo(new_skb)->hwtstamps = skb_shinfo(skb)->hwtstamps;
2127 +              skb_shinfo(new_skb)->tskey = skb_shinfo(skb)->tskey;
2128 +              if (skb->sk)
2129 +                  skb_set_owner_w(new_skb, skb->sk);
2130 +          }
2131 +
2123 2132      /* We move the headroom when we align it so we have to reset the
2124 2133       * network and transport header offsets relative to the new data
2125 2134       * pointer. The checksum offload relies on these offsets.
···
2136 2127      skb_set_network_header(new_skb, skb_network_offset(skb));
2137 2128      skb_set_transport_header(new_skb, skb_transport_offset(skb));
2138 2129
2139 -        /* TODO: does timestamping need the result in the old skb? */
2140 2130      dev_kfree_skb(skb);
2141 2131      *s = new_skb;
+122 -68
drivers/net/ethernet/ibm/ibmvnic.c
···
834 834  static int ibmvnic_login(struct net_device *netdev)
835 835  {
836 836      struct ibmvnic_adapter *adapter = netdev_priv(netdev);
837 -        unsigned long timeout = msecs_to_jiffies(30000);
837 +        unsigned long timeout = msecs_to_jiffies(20000);
838 838      int retry_count = 0;
839 839      int retries = 10;
840 840      bool retry;
···
850 850      adapter->init_done_rc = 0;
851 851      reinit_completion(&adapter->init_done);
852 852      rc = send_login(adapter);
853 -        if (rc) {
854 -            netdev_warn(netdev, "Unable to login\n");
853 +        if (rc)
855 854          return rc;
856 -        }
857 855
858 856      if (!wait_for_completion_timeout(&adapter->init_done,
859 857                                       timeout)) {
···
938 940  static int set_link_state(struct ibmvnic_adapter *adapter, u8 link_state)
939 941  {
940 942      struct net_device *netdev = adapter->netdev;
941 -        unsigned long timeout = msecs_to_jiffies(30000);
943 +        unsigned long timeout = msecs_to_jiffies(20000);
942 944      union ibmvnic_crq crq;
943 945      bool resend;
944 946      int rc;
···
1855 1857      if (reset_state == VNIC_OPEN) {
1856 1858          rc = __ibmvnic_close(netdev);
1857 1859          if (rc)
1858 -                return rc;
1860 +                goto out;
1859 1861      }
1860 1862
1861 1863      release_resources(adapter);
···
1873 1875      }
1874 1876
1875 1877      rc = ibmvnic_reset_init(adapter, true);
1876 -        if (rc)
1877 -            return IBMVNIC_INIT_FAILED;
1878 +        if (rc) {
1879 +            rc = IBMVNIC_INIT_FAILED;
1880 +            goto out;
1881 +        }
1878 1882
1879 1883      /* If the adapter was in PROBE state prior to the reset,
1880 1884       * exit here.
1881 1885       */
1882 1886      if (reset_state == VNIC_PROBED)
1883 -            return 0;
1887 +            goto out;
1884 1888
1885 1889      rc = ibmvnic_login(netdev);
1886 1890      if (rc) {
1887 -            adapter->state = reset_state;
1888 -            return rc;
1891 +            goto out;
1889 1892      }
1890 1893
1891 1894      rc = init_resources(adapter);
1892 1895      if (rc)
1893 -            return rc;
1896 +            goto out;
1894 1897
1895 1898      ibmvnic_disable_irqs(adapter);
1896 1899
···
1901 1902          return 0;
1902 1903
1903 1904      rc = __ibmvnic_open(netdev);
1904 -        if (rc)
1905 -            return IBMVNIC_OPEN_FAILED;
1905 +        if (rc) {
1906 +            rc = IBMVNIC_OPEN_FAILED;
1907 +            goto out;
1908 +        }
1906 1909
1907 1910      /* refresh device's multicast list */
1908 1911      ibmvnic_set_multi(netdev);
···
1913 1912      for (i = 0; i < adapter->req_rx_queues; i++)
1914 1913          napi_schedule(&adapter->napi[i]);
1915 1914
1916 -        return 0;
1915 +    out:
1916 +        if (rc)
1917 +            adapter->state = reset_state;
1918 +        return rc;
1917 1919  }
1918 1920
1919 1921  /**
···
2019 2015
2020 2016      rc = ibmvnic_login(netdev);
2021 2017      if (rc) {
2022 -            adapter->state = reset_state;
2023 2018          goto out;
2024 2019      }
2025 2020
···
2086 2083          rc = 0;
2087 2084
2088 2085  out:
2086 +        /* restore the adapter state if reset failed */
2087 +        if (rc)
2088 +            adapter->state = reset_state;
2089 2089      rtnl_unlock();
2090 2090
2091 2091      return rc;
···
2121 2115      if (rc) {
2122 2116          netdev_err(adapter->netdev,
2123 2117                     "Couldn't initialize crq. rc=%d\n", rc);
2124 -            return rc;
2118 +            goto out;
2125 2119      }
2126 2120
2127 2121      rc = ibmvnic_reset_init(adapter, false);
2128 2122      if (rc)
2129 -            return rc;
2123 +            goto out;
2130 2124
2131 2125      /* If the adapter was in PROBE state prior to the reset,
2132 2126       * exit here.
2133 2127       */
2134 2128      if (reset_state == VNIC_PROBED)
2135 -            return 0;
2129 +            goto out;
2136 2130
2137 2131      rc = ibmvnic_login(netdev);
2138 -        if (rc) {
2139 -            adapter->state = VNIC_PROBED;
2140 -            return 0;
2141 -        }
2132 +        if (rc)
2133 +            goto out;
2142 2134
2143 2135      rc = init_resources(adapter);
2144 2136      if (rc)
2145 -            return rc;
2137 +            goto out;
2146 2138
2147 2139      ibmvnic_disable_irqs(adapter);
2148 2140      adapter->state = VNIC_CLOSED;
2149 2141
2150 2142      if (reset_state == VNIC_CLOSED)
2151 -            return 0;
2143 +            goto out;
2152 2144
2153 2145      rc = __ibmvnic_open(netdev);
2154 -        if (rc)
2155 -            return IBMVNIC_OPEN_FAILED;
2146 +        if (rc) {
2147 +            rc = IBMVNIC_OPEN_FAILED;
2148 +            goto out;
2149 +        }
2156 2150
2157 2151      call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, netdev);
2158 2152      call_netdevice_notifiers(NETDEV_RESEND_IGMP, netdev);
2159 -
2160 -        return 0;
2153 +    out:
2154 +        /* restore adapter state if reset failed */
2155 +        if (rc)
2156 +            adapter->state = reset_state;
2157 +        return rc;
2161 2158  }
2162 2159
2163 2160  static struct ibmvnic_rwi *get_next_rwi(struct ibmvnic_adapter *adapter)
···
2180 2171
2181 2172      spin_unlock_irqrestore(&adapter->rwi_lock, flags);
2182 2173      return rwi;
2183 -    }
2184 -
2185 -    static void free_all_rwi(struct ibmvnic_adapter *adapter)
2186 -    {
2187 -        struct ibmvnic_rwi *rwi;
2188 -
2189 -        rwi = get_next_rwi(adapter);
2190 -        while (rwi) {
2191 -            kfree(rwi);
2192 -            rwi = get_next_rwi(adapter);
2193 -        }
2194 2174  }
2195 2175
2196 2176  static void __ibmvnic_reset(struct work_struct *work)
···
2239 2241              rc = do_hard_reset(adapter, rwi, reset_state);
2240 2242              rtnl_unlock();
2241 2243          }
2244 +            if (rc) {
2245 +                /* give backing device time to settle down */
2246 +                netdev_dbg(adapter->netdev,
2247 +                           "[S:%d] Hard reset failed, waiting 60 secs\n",
2248 +                           adapter->state);
2249 +                set_current_state(TASK_UNINTERRUPTIBLE);
2250 +                schedule_timeout(60 * HZ);
2251 +            }
2242 2252      } else if (!(rwi->reset_reason == VNIC_RESET_FATAL &&
2243 2253                   adapter->from_passive_init)) {
2244 2254          rc = do_reset(adapter, rwi, reset_state);
2245 2255      }
2246 2256      kfree(rwi);
2247 -        if (rc == IBMVNIC_OPEN_FAILED) {
2248 -            if (list_empty(&adapter->rwi_list))
2249 -                adapter->state = VNIC_CLOSED;
2250 -            else
2251 -                adapter->state = reset_state;
2252 -            rc = 0;
2253 -        } else if (rc && rc != IBMVNIC_INIT_FAILED &&
2254 -                   !adapter->force_reset_recovery)
2255 -            break;
2257 +        adapter->last_reset_time = jiffies;
2258 +
2259 +        if (rc)
2260 +            netdev_dbg(adapter->netdev, "Reset failed, rc=%d\n", rc);
2256 2261
2257 2262      rwi = get_next_rwi(adapter);
2258 2263
···
2267 2266      if (adapter->wait_for_reset) {
2268 2267          adapter->reset_done_rc = rc;
2269 2268          complete(&adapter->reset_done);
2270 -        }
2271 -
2272 -        if (rc) {
2273 -            netdev_dbg(adapter->netdev, "Reset failed\n");
2274 -            free_all_rwi(adapter);
2275 2269      }
2276 2270
2277 2271      clear_bit_unlock(0, &adapter->resetting);
···
2356 2360                     "Adapter is resetting, skip timeout reset\n");
2357 2361          return;
2358 2362      }
2359 -
2363 +        /* No queuing up reset until at least 5 seconds (default watchdog val)
2364 +         * after last reset
2365 +         */
2366 +        if (time_before(jiffies, (adapter->last_reset_time + dev->watchdog_timeo))) {
2367 +            netdev_dbg(dev, "Not yet time to tx timeout.\n");
2368 +            return;
2369 +        }
2360 2370      ibmvnic_reset(adapter, VNIC_RESET_TIMEOUT);
2361 2371  }
2362 2372
···
2404 2402
2405 2403          if (!pending_scrq(adapter, adapter->rx_scrq[scrq_num]))
2406 2404              break;
2405 +            /* The queue entry at the current index is peeked at above
2406 +             * to determine that there is a valid descriptor awaiting
2407 +             * processing. We want to be sure that the current slot
2408 +             * holds a valid descriptor before reading its contents.
2409 +             */
2410 +            dma_rmb();
2407 2411          next = ibmvnic_next_scrq(adapter, adapter->rx_scrq[scrq_num]);
2408 2412          rx_buff =
2409 2413              (struct ibmvnic_rx_buff *)be64_to_cpu(next->
···
2868 2860  {
2869 2861      int rc;
2870 2862
2863 +        if (!scrq) {
2864 +            netdev_dbg(adapter->netdev,
2865 +                       "Invalid scrq reset. irq (%d) or msgs (%p).\n",
2866 +                       scrq->irq, scrq->msgs);
2867 +            return -EINVAL;
2868 +        }
2869 +
2871 2870      if (scrq->irq) {
2872 2871          free_irq(scrq->irq, scrq);
2873 2872          irq_dispose_mapping(scrq->irq);
2874 2873          scrq->irq = 0;
2875 2874      }
2876 -
2877 -        memset(scrq->msgs, 0, 4 * PAGE_SIZE);
2878 -        atomic_set(&scrq->used, 0);
2879 -        scrq->cur = 0;
2875 +        if (scrq->msgs) {
2876 +            memset(scrq->msgs, 0, 4 * PAGE_SIZE);
2877 +            atomic_set(&scrq->used, 0);
2878 +            scrq->cur = 0;
2879 +        } else {
2880 +            netdev_dbg(adapter->netdev, "Invalid scrq reset\n");
2881 +            return -EINVAL;
2882 +        }
2880 2883
2881 2884      rc = h_reg_sub_crq(adapter->vdev->unit_address, scrq->msg_token,
2882 2885                         4 * PAGE_SIZE, &scrq->crq_num, &scrq->hw_irq);
···
3119 3100      unsigned int pool = scrq->pool_index;
3120 3101      int num_entries = 0;
3121 3102
3103 +        /* The queue entry at the current index is peeked at above
3104 +         * to determine that there is a valid descriptor awaiting
3105 +         * processing. We want to be sure that the current slot
3106 +         * holds a valid descriptor before reading its contents.
3107 +         */
3108 +        dma_rmb();
3109 +
3122 3110      next = ibmvnic_next_scrq(adapter, scrq);
3123 3111      for (i = 0; i < next->tx_comp.num_comps; i++) {
3124 -            if (next->tx_comp.rcs[i]) {
3112 +            if (next->tx_comp.rcs[i])
3125 3113              dev_err(dev, "tx error %x\n",
3126 3114                      next->tx_comp.rcs[i]);
3127 -                continue;
3128 -            }
3129 3115          index = be32_to_cpu(next->tx_comp.correlators[i]);
3130 3116          if (index & IBMVNIC_TSO_POOL_MASK) {
3131 3117              tx_pool = &adapter->tso_pool[pool];
···
3524 3500      }
3525 3501      spin_unlock_irqrestore(&scrq->lock, flags);
3526 3502
3503 +        /* Ensure that the entire buffer descriptor has been
3504 +         * loaded before reading its contents
3505 +         */
3506 +        dma_rmb();
3507 +
3527 3508      return entry;
3528 3509  }
···
3750 3721      struct ibmvnic_login_rsp_buffer *login_rsp_buffer;
3751 3722      struct ibmvnic_login_buffer *login_buffer;
3752 3723      struct device *dev = &adapter->vdev->dev;
3724 +        struct vnic_login_client_data *vlcd;
3753 3725      dma_addr_t rsp_buffer_token;
3754 3726      dma_addr_t buffer_token;
3755 3727      size_t rsp_buffer_size;
3756 3728      union ibmvnic_crq crq;
3729 +        int client_data_len;
3757 3730      size_t buffer_size;
3758 3731      __be64 *tx_list_p;
3759 3732      __be64 *rx_list_p;
3760 -        int client_data_len;
3761 -        struct vnic_login_client_data *vlcd;
3733 +        int rc;
3762 3734      int i;
3763 3735
3764 3736      if (!adapter->tx_scrq || !adapter->rx_scrq) {
···
3863 3833      crq.login.cmd = LOGIN;
3864 3834      crq.login.ioba = cpu_to_be32(buffer_token);
3865 3835      crq.login.len = cpu_to_be32(buffer_size);
3866 -        ibmvnic_send_crq(adapter, &crq);
3836 +
3837 +        adapter->login_pending = true;
3838 +        rc = ibmvnic_send_crq(adapter, &crq);
3839 +        if (rc) {
3840 +            adapter->login_pending = false;
3841 +            netdev_err(adapter->netdev, "Failed to send login, rc=%d\n", rc);
3842 +            goto buf_rsp_map_failed;
3843 +        }
3867 3844
3868 3845      return 0;
3869 3846
3870 3847  buf_rsp_map_failed:
3871 3848      kfree(login_rsp_buffer);
3849 +        adapter->login_rsp_buf = NULL;
3872 3850  buf_rsp_alloc_failed:
3873 3851      dma_unmap_single(dev, buffer_token, buffer_size, DMA_TO_DEVICE);
3874 3852  buf_map_failed:
3875 3853      kfree(login_buffer);
3854 +        adapter->login_buf = NULL;
3876 3855  buf_alloc_failed:
3877 3856      return -1;
3878 3857  }
···
4424 4385      u64 *size_array;
4425 4386      int i;
4426 4387
4388 +        /* CHECK: Test/set of login_pending does not need to be atomic
4389 +         * because only ibmvnic_tasklet tests/clears this.
4390 +         */
4391 +        if (!adapter->login_pending) {
4392 +            netdev_warn(netdev, "Ignoring unexpected login response\n");
4393 +            return 0;
4394 +        }
4395 +        adapter->login_pending = false;
4396 +
4427 4397      dma_unmap_single(dev, adapter->login_buf_token, adapter->login_buf_sz,
4428 4398                       DMA_TO_DEVICE);
4429 4399      dma_unmap_single(dev, adapter->login_rsp_buf_token,
···
4462 4414          adapter->req_rx_add_queues !=
4463 4415          be32_to_cpu(login_rsp->num_rxadd_subcrqs))) {
4464 4416          dev_err(dev, "FATAL: Inconsistent login and login rsp\n");
4465 -            ibmvnic_remove(adapter->vdev);
4417 +            ibmvnic_reset(adapter, VNIC_RESET_FATAL);
4466 4418          return -EIO;
4467 4419      }
4468 4420      size_array = (u64 *)((u8 *)(adapter->login_rsp_buf) +
···
4804 4756      case IBMVNIC_CRQ_INIT:
4805 4757          dev_info(dev, "Partner initialized\n");
4806 4758          adapter->from_passive_init = true;
4759 +            /* Discard any stale login responses from prev reset.
4760 +             * CHECK: should we clear even on INIT_COMPLETE?
4761 +             */
4762 +            adapter->login_pending = false;
4763 +
4807 4764          if (!completion_done(&adapter->init_done)) {
4808 4765              complete(&adapter->init_done);
4809 4766              adapter->init_done_rc = -EIO;
···
5146 5093  static int ibmvnic_reset_init(struct ibmvnic_adapter *adapter, bool reset)
5147 5094  {
5148 5095      struct device *dev = &adapter->vdev->dev;
5149 -        unsigned long timeout = msecs_to_jiffies(30000);
5096 +        unsigned long timeout = msecs_to_jiffies(20000);
5150 5097      u64 old_num_rx_queues, old_num_tx_queues;
5151 5098      int rc;
···
5241 5188      dev_set_drvdata(&dev->dev, netdev);
5242 5189      adapter->vdev = dev;
5243 5190      adapter->netdev = netdev;
5191 +        adapter->login_pending = false;
5244 5192
5245 5193      ether_addr_copy(adapter->mac_addr, mac_addr_p);
5246 5194      ether_addr_copy(netdev->dev_addr, adapter->mac_addr);
···
5305 5251      adapter->state = VNIC_PROBED;
5306 5252
5307 5253      adapter->wait_for_reset = false;
5308 -
5254 +        adapter->last_reset_time = jiffies;
5309 5255      return 0;
5310 5256
5311 5257  ibmvnic_register_fail:
+3
drivers/net/ethernet/ibm/ibmvnic.h
···
1086 1086      struct delayed_work ibmvnic_delayed_reset;
1087 1087      unsigned long resetting;
1088 1088      bool napi_enabled, from_passive_init;
1089 +        bool login_pending;
1090 +        /* last device reset time */
1091 +        unsigned long last_reset_time;
1089 1092
1090 1093      bool failover_pending;
1091 1094      bool force_reset_recovery;
+1
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
···
4426 4426      if (!valid) {
4427 4427          netdev_err(port->dev,
4428 4428                     "invalid configuration: no dt or link IRQ");
4429 +            err = -ENOENT;
4429 4430          goto err_free_irq;
4430 4431      }
4431 4432
+2
drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c
···
44 44                      outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4);
45 45  }
46 46
47 +   #if IS_ENABLED(CONFIG_IPV6)
47 48  static void accel_fs_tcp_set_ipv6_flow(struct mlx5_flow_spec *spec, struct sock *sk)
48 49  {
49 50      MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.ip_protocol);
···
64 63                         outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
65 64             0xff, 16);
66 65  }
66 +   #endif
67 67
68 68  void mlx5e_accel_fs_del_sk(struct mlx5_flow_handle *rule)
69 69  {
+15 -7
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
···
161 161  }
162 162
163 163  static inline void
164 -    mlx5e_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg)
164 +    mlx5e_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb,
165 +                                struct mlx5e_accel_tx_state *accel,
166 +                                struct mlx5_wqe_eth_seg *eseg)
165 167  {
166 168      if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
167 169          eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM;
···
175 173              eseg->cs_flags |= MLX5_ETH_WQE_L4_CSUM;
176 174              sq->stats->csum_partial++;
177 175          }
176 +    #ifdef CONFIG_MLX5_EN_TLS
177 +        } else if (unlikely(accel && accel->tls.tls_tisn)) {
178 +            eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM | MLX5_ETH_WQE_L4_CSUM;
179 +            sq->stats->csum_partial++;
180 +    #endif
178 181      } else if (unlikely(eseg->flow_table_metadata & cpu_to_be32(MLX5_ETH_WQE_FT_META_IPSEC))) {
179 182          ipsec_txwqe_build_eseg_csum(sq, skb, eseg);
180 183
···
614 607  }
615 608
616 609  static bool mlx5e_txwqe_build_eseg(struct mlx5e_priv *priv, struct mlx5e_txqsq *sq,
617 -                                       struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg)
610 +                                       struct sk_buff *skb, struct mlx5e_accel_tx_state *accel,
611 +                                       struct mlx5_wqe_eth_seg *eseg)
618 612  {
619 613      if (unlikely(!mlx5e_accel_tx_eseg(priv, skb, eseg)))
620 614          return false;
621 615
622 -        mlx5e_txwqe_build_eseg_csum(sq, skb, eseg);
616 +        mlx5e_txwqe_build_eseg_csum(sq, skb, accel, eseg);
623 617
624 618      return true;
625 619  }
···
647 639      if (mlx5e_tx_skb_supports_mpwqe(skb, &attr)) {
648 640          struct mlx5_wqe_eth_seg eseg = {};
649 641
650 -            if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &eseg)))
642 +            if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &accel, &eseg)))
651 643              return NETDEV_TX_OK;
652 644
653 645          mlx5e_sq_xmit_mpwqe(sq, skb, &eseg, netdev_xmit_more());
···
664 656      /* May update the WQE, but may not post other WQEs. */
665 657      mlx5e_accel_tx_finish(sq, wqe, &accel,
666 658                            (struct mlx5_wqe_inline_seg *)(wqe->data + wqe_attr.ds_cnt_inl));
667 -        if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &wqe->eth)))
659 +        if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &accel, &wqe->eth)))
668 660          return NETDEV_TX_OK;
669 661
670 662      mlx5e_sq_xmit_wqe(sq, skb, &attr, &wqe_attr, wqe, pi, netdev_xmit_more());
···
683 675      mlx5e_sq_calc_wqe_attr(skb, &attr, &wqe_attr);
684 676      pi = mlx5e_txqsq_get_next_pi(sq, wqe_attr.num_wqebbs);
685 677      wqe = MLX5E_TX_FETCH_WQE(sq, pi);
686 -        mlx5e_txwqe_build_eseg_csum(sq, skb, &wqe->eth);
678 +        mlx5e_txwqe_build_eseg_csum(sq, skb, NULL, &wqe->eth);
687 679      mlx5e_sq_xmit_wqe(sq, skb, &attr, &wqe_attr, wqe, pi, xmit_more);
688 680  }
···
952 944
953 945      mlx5i_txwqe_build_datagram(av, dqpn, dqkey, datagram);
954 946
955 -        mlx5e_txwqe_build_eseg_csum(sq, skb, eseg);
947 +        mlx5e_txwqe_build_eseg_csum(sq, skb, NULL, eseg);
956 948
957 949      eseg->mss = attr.mss;
958 950
+19 -2
drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
···
422 422                        npages, ec_function, func_id);
423 423  }
424 424
425 +    static u32 fwp_fill_manage_pages_out(struct fw_page *fwp, u32 *out, u32 index,
426 +                                         u32 npages)
427 +    {
428 +        u32 pages_set = 0;
429 +        unsigned int n;
430 +
431 +        for_each_clear_bit(n, &fwp->bitmask, MLX5_NUM_4K_IN_PAGE) {
432 +            MLX5_ARRAY_SET64(manage_pages_out, out, pas, index + pages_set,
433 +                             fwp->addr + (n * MLX5_ADAPTER_PAGE_SIZE));
434 +            pages_set++;
435 +
436 +            if (!--npages)
437 +                break;
438 +        }
439 +
440 +        return pages_set;
441 +    }
442 +
425 443  static int reclaim_pages_cmd(struct mlx5_core_dev *dev,
426 444                               u32 *in, int in_size, u32 *out, int out_size)
427 445  {
···
466 448          fwp = rb_entry(p, struct fw_page, rb_node);
467 449          p = rb_next(p);
468 450
469 -            MLX5_ARRAY_SET64(manage_pages_out, out, pas, i, fwp->addr);
470 -            i++;
451 +            i += fwp_fill_manage_pages_out(fwp, out, i, npages - i);
471 452      }
472 453
473 454      MLX5_SET(manage_pages_out, out, output_num_entries, i);
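For context on the reclaim fix above: firmware pages are handed out in 4 KiB chunks tracked by a per-page bitmask, and a clear bit marks a chunk that may be returned. A standalone sketch of the clear-bit walk that fwp_fill_manage_pages_out() performs (userspace C; the bitmask value, base address and chunk count are made up for illustration):

#include <stdio.h>

#define NUM_4K_IN_PAGE 16               /* illustrative chunk count */
#define CHUNK_SIZE 4096UL

int main(void)
{
    unsigned long bitmask = 0xff0fUL;   /* bits 4..7 clear = free chunks */
    unsigned long base = 0x100000UL;    /* made-up fwp->addr */
    unsigned int n, npages = 3;         /* reclaim at most 3 chunks */

    /* open-coded for_each_clear_bit(n, &bitmask, NUM_4K_IN_PAGE) */
    for (n = 0; n < NUM_4K_IN_PAGE; n++) {
        if (bitmask & (1UL << n))       /* set bit = chunk still in use */
            continue;
        printf("reclaim 4K chunk at 0x%lx\n", base + n * CHUNK_SIZE);
        if (!--npages)
            break;
    }
    return 0;
}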
+1
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c
···
92 92      caps->eswitch_manager = MLX5_CAP_GEN(mdev, eswitch_manager);
93 93      caps->gvmi = MLX5_CAP_GEN(mdev, vhca_id);
94 94      caps->flex_protocols = MLX5_CAP_GEN(mdev, flex_parser_protocols);
95 +        caps->sw_format_ver = MLX5_CAP_GEN(mdev, steering_format_version);
95 96
96 97      if (mlx5dr_matcher_supp_flex_parser_icmp_v4(caps)) {
97 98          caps->flex_parser_id_icmp_dw0 = MLX5_CAP_GEN(mdev, flex_parser_id_icmp_dw0);
+5
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
···
223 223      if (ret)
224 224          return ret;
225 225
226 +        if (dmn->info.caps.sw_format_ver != MLX5_STEERING_FORMAT_CONNECTX_5) {
227 +            mlx5dr_err(dmn, "SW steering is not supported on this device\n");
228 +            return -EOPNOTSUPP;
229 +        }
230 +
226 231      ret = dr_domain_query_fdb_caps(mdev, dmn);
227 232      if (ret)
228 233          return ret;
+1
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
···
625 625      u8 max_ft_level;
626 626      u16 roce_min_src_udp;
627 627      u8 num_esw_ports;
628 +        u8 sw_format_ver;
628 629      bool eswitch_manager;
629 630      bool rx_sw_owner;
630 631      bool tx_sw_owner;
+6 -2
drivers/net/ethernet/pasemi/pasemi_mac.c
···
1078 1078
1079 1079      mac->tx = pasemi_mac_setup_tx_resources(dev);
1080 1080
1081 -        if (!mac->tx)
1081 +        if (!mac->tx) {
1082 +            ret = -ENOMEM;
1082 1083          goto out_tx_ring;
1084 +        }
1083 1085
1084 1086      /* We might already have allocated rings in case mtu was changed
1085 1087       * before interface was brought up.
1086 1088       */
1087 1089      if (dev->mtu > 1500 && !mac->num_cs) {
1088 1090          pasemi_mac_setup_csrings(mac);
1089 -            if (!mac->num_cs)
1091 +            if (!mac->num_cs) {
1092 +                ret = -ENOMEM;
1090 1093              goto out_tx_ring;
1094 +            }
1091 1095      }
1092 1096
1093 1097      /* Zero out rmon counters */
+16 -4
drivers/net/geneve.c
···
257 257      skb_dst_set(skb, &tun_dst->dst);
258 258
259 259      /* Ignore packet loops (and multicast echo) */
260 -        if (ether_addr_equal(eth_hdr(skb)->h_source, geneve->dev->dev_addr)) {
261 -            geneve->dev->stats.rx_errors++;
262 -            goto drop;
263 -        }
260 +        if (ether_addr_equal(eth_hdr(skb)->h_source, geneve->dev->dev_addr))
261 +            goto rx_error;
264 262
263 +        switch (skb_protocol(skb, true)) {
264 +        case htons(ETH_P_IP):
265 +            if (!pskb_may_pull(skb, sizeof(struct iphdr)))
266 +                goto rx_error;
267 +            break;
268 +        case htons(ETH_P_IPV6):
269 +            if (!pskb_may_pull(skb, sizeof(struct ipv6hdr)))
270 +                goto rx_error;
271 +            break;
272 +        default:
273 +            goto rx_error;
274 +        }
265 275      oiph = skb_network_header(skb);
266 276      skb_reset_network_header(skb);
267 277
···
308 298      dev_sw_netstats_rx_add(geneve->dev, len);
309 299
310 300      return;
301 +    rx_error:
302 +        geneve->dev->stats.rx_errors++;
311 303  drop:
312 304      /* Consume bad packet */
313 305      kfree_skb(skb);
+6 -1
drivers/net/vxlan.c
···
3798 3798      dev->gso_max_segs = lowerdev->gso_max_segs;
3799 3799
3800 3800      needed_headroom = lowerdev->hard_header_len;
3801 +        needed_headroom += lowerdev->needed_headroom;
3802 +
3803 +        dev->needed_tailroom = lowerdev->needed_tailroom;
3801 3804
3802 3805      max_mtu = lowerdev->mtu - (use_ipv6 ? VXLAN6_HEADROOM :
3803 3806                                 VXLAN_HEADROOM);
···
3880 3877
3881 3878      if (dst->remote_ifindex) {
3882 3879          remote_dev = __dev_get_by_index(net, dst->remote_ifindex);
3883 -            if (!remote_dev)
3880 +            if (!remote_dev) {
3881 +                err = -ENODEV;
3884 3882              goto errout;
3883 +            }
3885 3884
3886 3885          err = netdev_upper_dev_link(remote_dev, dev, extack);
3887 3886          if (err)
+2 -2
drivers/net/wireless/intel/iwlwifi/iwl-config.h
···
491 491  #define IWL_CFG_RF_ID_HR    0x7
492 492  #define IWL_CFG_RF_ID_HR1   0x4
493 493
494 -    #define IWL_CFG_NO_160      0x0
495 -    #define IWL_CFG_160         0x1
494 +    #define IWL_CFG_NO_160      0x1
495 +    #define IWL_CFG_160         0x0
496 496
497 497  #define IWL_CFG_CORES_BT        0x0
498 498  #define IWL_CFG_CORES_BT_GNSS   0x5
+6
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
···
536 536
537 537      {IWL_PCI_DEVICE(0x2725, 0x0090, iwlax211_2ax_cfg_so_gf_a0)},
538 538      {IWL_PCI_DEVICE(0x2725, 0x0020, iwlax210_2ax_cfg_ty_gf_a0)},
539 +        {IWL_PCI_DEVICE(0x2725, 0x0024, iwlax210_2ax_cfg_ty_gf_a0)},
539 540      {IWL_PCI_DEVICE(0x2725, 0x0310, iwlax210_2ax_cfg_ty_gf_a0)},
540 541      {IWL_PCI_DEVICE(0x2725, 0x0510, iwlax210_2ax_cfg_ty_gf_a0)},
541 542      {IWL_PCI_DEVICE(0x2725, 0x0A10, iwlax210_2ax_cfg_ty_gf_a0)},
543 +        {IWL_PCI_DEVICE(0x2725, 0xE020, iwlax210_2ax_cfg_ty_gf_a0)},
544 +        {IWL_PCI_DEVICE(0x2725, 0xE024, iwlax210_2ax_cfg_ty_gf_a0)},
545 +        {IWL_PCI_DEVICE(0x2725, 0x4020, iwlax210_2ax_cfg_ty_gf_a0)},
546 +        {IWL_PCI_DEVICE(0x2725, 0x6020, iwlax210_2ax_cfg_ty_gf_a0)},
547 +        {IWL_PCI_DEVICE(0x2725, 0x6024, iwlax210_2ax_cfg_ty_gf_a0)},
542 548      {IWL_PCI_DEVICE(0x2725, 0x00B0, iwlax411_2ax_cfg_sosnj_gf4_a0)},
543 549      {IWL_PCI_DEVICE(0x2726, 0x0070, iwlax201_cfg_snj_hr_b0)},
544 550      {IWL_PCI_DEVICE(0x2726, 0x0074, iwlax201_cfg_snj_hr_b0)},
+9 -8
drivers/net/wireless/mediatek/mt76/usb.c
···
1020 1020  {
1021 1021      int ret;
1022 1022
1023 -        mt76_worker_disable(&dev->tx_worker);
1024 -
1025 1023      ret = wait_event_timeout(dev->tx_wait, !mt76_has_tx_pending(&dev->phy),
1026 1024                               HZ / 5);
1027 1025      if (!ret) {
···
1038 1040                  usb_kill_urb(q->entry[j].urb);
1039 1041          }
1040 1042
1043 +        mt76_worker_disable(&dev->tx_worker);
1044 +
1041 1045      /* On device removal we maight queue skb's, but mt76u_tx_kick()
1042 1046       * will fail to submit urb, cleanup those skb's manually.
1043 1047       */
···
1048 1048          if (!q)
1049 1049              continue;
1050 1050
1051 -            entry = q->entry[q->tail];
1052 -            q->entry[q->tail].done = false;
1053 -
1054 -            mt76_queue_tx_complete(dev, q, &entry);
1051 +            while (q->queued > 0) {
1052 +                entry = q->entry[q->tail];
1053 +                q->entry[q->tail].done = false;
1054 +                mt76_queue_tx_complete(dev, q, &entry);
1055 +            }
1055 1056      }
1057 +
1058 +        mt76_worker_enable(&dev->tx_worker);
1056 1059  }
1057 1060
1058 1061      cancel_work_sync(&dev->usb.stat_work);
1059 1062      clear_bit(MT76_READING_STATS, &dev->phy.state);
1060 -
1061 -        mt76_worker_enable(&dev->tx_worker);
1062 1063
1063 1064      mt76_tx_status_check(dev, NULL, true);
1064 1065  }
+2
drivers/net/wireless/realtek/rtw88/debug.c
···
147 147  {
148 148      int tmp_len;
149 149
150 +        memset(tmp, 0, size);
151 +
150 152      if (count < num)
151 153          return -EFAULT;
152 154
+8 -1
include/linux/mlx5/mlx5_ifc.h
···
1223 1223
1224 1224  #define MLX5_FC_BULK_NUM_FCS(fc_enum) (MLX5_FC_BULK_SIZE_FACTOR * (fc_enum))
1225 1225
1226 +    enum {
1227 +        MLX5_STEERING_FORMAT_CONNECTX_5   = 0,
1228 +        MLX5_STEERING_FORMAT_CONNECTX_6DX = 1,
1229 +    };
1230 +
1226 1231  struct mlx5_ifc_cmd_hca_cap_bits {
1227 1232      u8 reserved_at_0[0x30];
1228 1233      u8 vhca_id[0x10];
···
1526 1521
1527 1522      u8 general_obj_types[0x40];
1528 1523
1529 -        u8 reserved_at_440[0x20];
1524 +        u8 reserved_at_440[0x4];
1525 +        u8 steering_format_version[0x4];
1526 +        u8 create_qp_start_hint[0x18];
1530 1527
1531 1528      u8 reserved_at_460[0x3];
1532 1529      u8 log_max_uctx[0x5];
+13 -1
include/linux/netdevice.h
···
2813 2813                         struct net_device *sb_dev);
2814 2814  u16 dev_pick_tx_cpu_id(struct net_device *dev, struct sk_buff *skb,
2815 2815                         struct net_device *sb_dev);
2816 +
2816 2817  int dev_queue_xmit(struct sk_buff *skb);
2817 2818  int dev_queue_xmit_accel(struct sk_buff *skb, struct net_device *sb_dev);
2818 -    int dev_direct_xmit(struct sk_buff *skb, u16 queue_id);
2819 +    int __dev_direct_xmit(struct sk_buff *skb, u16 queue_id);
2820 +
2821 +    static inline int dev_direct_xmit(struct sk_buff *skb, u16 queue_id)
2822 +    {
2823 +        int ret;
2824 +
2825 +        ret = __dev_direct_xmit(skb, queue_id);
2826 +        if (!dev_xmit_complete(ret))
2827 +            kfree_skb(skb);
2828 +        return ret;
2829 +    }
2830 +
2819 2831  int register_netdevice(struct net_device *dev);
2820 2832  void unregister_netdevice_queue(struct net_device *dev, struct list_head *head);
2821 2833  void unregister_netdevice_many(struct list_head *head);
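The refactor above splits transmission from skb ownership: __dev_direct_xmit() never frees the skb, the inline wrapper keeps the old dev_direct_xmit() behaviour of freeing it on failure, and AF_XDP (see net/xdp/xsk.c below) calls the double-underscore variant directly so it no longer needs an extra refcount to keep the skb alive. A toy sketch of that ownership split (userspace C, illustrative names):

#include <stdio.h>
#include <stdlib.h>

struct buf { int id; };

/* Low-level send: never frees the buffer, only reports the outcome. */
static int __send_direct(struct buf *b)
{
    return b->id % 2 ? -1 : 0;      /* fake: odd ids fail to send */
}

/* Convenience wrapper: owns (frees) the buffer when the send failed;
 * on success the "device" side is responsible for completing it. */
static int send_direct(struct buf *b)
{
    int ret = __send_direct(b);

    if (ret)
        free(b);
    return ret;
}

int main(void)
{
    struct buf *ok = malloc(sizeof(*ok));
    struct buf *bad = malloc(sizeof(*bad));

    ok->id = 2;
    bad->id = 3;
    printf("ok: %d\n", send_direct(ok));    /* success path */
    free(ok);                               /* demo stand-in for completion */
    printf("bad: %d\n", send_direct(bad));  /* wrapper already freed it */
    return 0;
}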
+1 -1
include/net/inet_ecn.h
···
107 107      if ((iph->tos & INET_ECN_MASK) != INET_ECN_ECT_0)
108 108          return 0;
109 109
110 -        check += (__force u16)htons(0x100);
110 +        check += (__force u16)htons(0x1);
111 111
112 112      iph->check = (__force __sum16)(check + (check>=0xFFFF));
113 113      iph->tos ^= INET_ECN_MASK;
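Why htons(0x1) and not htons(0x100): tos is the low-order byte of the first big-endian 16-bit word of the IPv4 header (version/IHL occupy the high byte), and rewriting ECT(0) (0b10) to ECT(1) (0b01) lowers that word by exactly 1, so the ones-complement checksum must absorb a delta of 1 in network byte order. A standalone sketch (userspace C, sample header values made up) that checks the incremental update against a full RFC 1071 recomputation:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint16_t rfc1071_csum(const uint8_t *buf, size_t len)
{
    uint32_t sum = 0;
    size_t i;

    for (i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)buf[i] << 8 | buf[i + 1];
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

int main(void)
{
    /* made-up IPv4 header; tos = 0x02 = ECT(0), checksum in bytes 10..11 */
    uint8_t iph[20] = { 0x45, 0x02, 0x00, 0x54, 0x1c, 0x46,
                        0x40, 0x00, 0x40, 0x11 };
    uint16_t old = rfc1071_csum(iph, sizeof(iph));
    uint8_t tmp[20];
    uint32_t check;

    iph[10] = old >> 8;
    iph[11] = old & 0xff;

    /* incremental update as in IP_ECN_set_ect1(): the first word dropped
     * by 1, so the stored complement goes up by 1 (network byte order) */
    check = (uint32_t)(iph[10] << 8 | iph[11]);
    check += 0x1;                   /* htons(0x1), viewed as big-endian */
    check += (check >= 0xffff);     /* ones-complement end-around carry */
    iph[10] = (check >> 8) & 0xff;
    iph[11] = check & 0xff;
    iph[1] ^= 0x3;                  /* INET_ECN_MASK: ECT(0) -> ECT(1) */

    /* a full recomputation over the rewritten header must agree */
    memcpy(tmp, iph, sizeof(tmp));
    tmp[10] = tmp[11] = 0;
    printf("incremental 0x%04x, recomputed 0x%04x\n",
           iph[10] << 8 | iph[11], rfc1071_csum(tmp, sizeof(tmp)));
    return 0;
}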
+7
include/net/netfilter/nf_tables_offload.h
···
37 37
38 38  struct nft_flow_key {
39 39      struct flow_dissector_key_basic basic;
40 +        struct flow_dissector_key_control control;
40 41      union {
41 42          struct flow_dissector_key_ipv4_addrs ipv4;
42 43          struct flow_dissector_key_ipv6_addrs ipv6;
···
63 62
64 63  #define NFT_OFFLOAD_F_ACTION (1 << 0)
65 64
66 +   void nft_flow_rule_set_addr_type(struct nft_flow_rule *flow,
67 +                                    enum flow_dissector_key_id addr_type);
68 +
66 69  struct nft_rule;
67 70  struct nft_flow_rule *nft_flow_rule_create(struct net *net, const struct nft_rule *rule);
68 71  void nft_flow_rule_destroy(struct nft_flow_rule *flow);
···
78 74          offsetof(struct nft_flow_key, __base.__field);          \
79 75      (__reg)->len = __len;                                       \
80 76      (__reg)->key = __key;                                       \
81 77
78 +   #define NFT_OFFLOAD_MATCH_EXACT(__key, __base, __field, __len, __reg) \
79 +       NFT_OFFLOAD_MATCH(__key, __base, __field, __len, __reg)           \
81 80      memset(&(__reg)->mask, 0xff, (__reg)->len);
+1
include/net/xdp_sock.h
···
31 31      struct page **pgs;
32 32      int id;
33 33      struct list_head xsk_dma_list;
34 +        struct work_struct work;
34 35  };
35 36
36 37  struct xsk_map {
+16 -10
net/batman-adv/fragmentation.c
···
391 391
392 392  /**
393 393   * batadv_frag_create() - create a fragment from skb
394 +     * @net_dev: outgoing device for fragment
394 395   * @skb: skb to create fragment from
395 396   * @frag_head: header to use in new fragment
396 397   * @fragment_size: size of new fragment
···
402 401   *
403 402   * Return: the new fragment, NULL on error.
404 403   */
405 -    static struct sk_buff *batadv_frag_create(struct sk_buff *skb,
404 +    static struct sk_buff *batadv_frag_create(struct net_device *net_dev,
405 +                                              struct sk_buff *skb,
406 406                                              struct batadv_frag_packet *frag_head,
407 407                                              unsigned int fragment_size)
408 408  {
409 +        unsigned int ll_reserved = LL_RESERVED_SPACE(net_dev);
410 +        unsigned int tailroom = net_dev->needed_tailroom;
409 411      struct sk_buff *skb_fragment;
410 412      unsigned int header_size = sizeof(*frag_head);
411 413      unsigned int mtu = fragment_size + header_size;
412 414
413 -        skb_fragment = netdev_alloc_skb(NULL, mtu + ETH_HLEN);
415 +        skb_fragment = dev_alloc_skb(ll_reserved + mtu + tailroom);
414 416      if (!skb_fragment)
415 417          goto err;
416 418
417 419      skb_fragment->priority = skb->priority;
418 420
419 421      /* Eat the last mtu-bytes of the skb */
420 -        skb_reserve(skb_fragment, header_size + ETH_HLEN);
422 +        skb_reserve(skb_fragment, ll_reserved + header_size);
421 423      skb_split(skb, skb_fragment, skb->len - fragment_size);
422 424
423 425      /* Add the header */
···
443 439                                  struct batadv_orig_node *orig_node,
444 440                                  struct batadv_neigh_node *neigh_node)
445 441  {
442 +        struct net_device *net_dev = neigh_node->if_incoming->net_dev;
446 443      struct batadv_priv *bat_priv;
447 444      struct batadv_hard_iface *primary_if = NULL;
448 445      struct batadv_frag_packet frag_header;
449 446      struct sk_buff *skb_fragment;
450 -        unsigned int mtu = neigh_node->if_incoming->net_dev->mtu;
447 +        unsigned int mtu = net_dev->mtu;
451 448      unsigned int header_size = sizeof(frag_header);
452 449      unsigned int max_fragment_size, num_fragments;
453 450      int ret;
···
508 503          goto put_primary_if;
509 504      }
510 505
511 -        skb_fragment = batadv_frag_create(skb, &frag_header,
506 +        skb_fragment = batadv_frag_create(net_dev, skb, &frag_header,
512 507                                          max_fragment_size);
513 508      if (!skb_fragment) {
514 509          ret = -ENOMEM;
···
527 522          frag_header.no++;
528 523      }
529 524
530 -        /* Make room for the fragment header. */
531 -        if (batadv_skb_head_push(skb, header_size) < 0 ||
532 -            pskb_expand_head(skb, header_size + ETH_HLEN, 0, GFP_ATOMIC) < 0) {
533 -            ret = -ENOMEM;
525 +        /* make sure that there is at least enough head for the fragmentation
526 +         * and ethernet headers
527 +         */
528 +        ret = skb_cow_head(skb, ETH_HLEN + header_size);
529 +        if (ret < 0)
534 530          goto put_primary_if;
535 -        }
536 531
532 +        skb_push(skb, header_size);
537 533      memcpy(skb->data, &frag_header, header_size);
538 534
539 535      /* Send the last fragment */
+3
net/batman-adv/hard-interface.c
···
554 554      needed_headroom = lower_headroom + (lower_header_len - ETH_HLEN);
555 555      needed_headroom += batadv_max_header_len();
556 556
557 +        /* fragmentation headers don't strip the unicast/... header */
558 +        needed_headroom += sizeof(struct batadv_frag_packet);
559 +
557 560      soft_iface->needed_headroom = needed_headroom;
558 561      soft_iface->needed_tailroom = lower_tailroom;
559 562  }
+5 -2
net/bridge/br_netfilter_hooks.c
···
735 735      mtu_reserved = nf_bridge_mtu_reduction(skb);
736 736      mtu = skb->dev->mtu;
737 737
738 +        if (nf_bridge->pkt_otherhost) {
739 +            skb->pkt_type = PACKET_OTHERHOST;
740 +            nf_bridge->pkt_otherhost = false;
741 +        }
742 +
738 743      if (nf_bridge->frag_max_size && nf_bridge->frag_max_size < mtu)
739 744          mtu = nf_bridge->frag_max_size;
740 745
···
840 835      else
841 836          return NF_ACCEPT;
842 837
843 -        /* We assume any code from br_dev_queue_push_xmit onwards doesn't care
844 -         * about the value of skb->pkt_type. */
845 838      if (skb->pkt_type == PACKET_OTHERHOST) {
846 839          skb->pkt_type = PACKET_HOST;
847 840          nf_bridge->pkt_otherhost = true;
+2 -6
net/core/dev.c
···
4180 4180  }
4181 4181  EXPORT_SYMBOL(dev_queue_xmit_accel);
4182 4182
4183 -    int dev_direct_xmit(struct sk_buff *skb, u16 queue_id)
4183 +    int __dev_direct_xmit(struct sk_buff *skb, u16 queue_id)
4184 4184  {
4185 4185      struct net_device *dev = skb->dev;
4186 4186      struct sk_buff *orig_skb = skb;
···
4210 4210      dev_xmit_recursion_dec();
4211 4211
4212 4212      local_bh_enable();
4213 -
4214 -        if (!dev_xmit_complete(ret))
4215 -            kfree_skb(skb);
4216 -
4217 4213      return ret;
4218 4214  drop:
4219 4215      atomic_long_inc(&dev->tx_dropped);
4220 4216      kfree_skb_list(skb);
4221 4217      return NET_XMIT_DROP;
4222 4218  }
4223 -    EXPORT_SYMBOL(dev_direct_xmit);
4219 +    EXPORT_SYMBOL(__dev_direct_xmit);
4224 4220
4225 4221  /*************************************************************************
4226 4222   *            Receiver routines
+3
net/core/skbuff.c
···
5786 5786      if (unlikely(!eth_p_mpls(skb->protocol)))
5787 5787          return -EINVAL;
5788 5788
5789 +        if (!pskb_may_pull(skb, skb_network_offset(skb) + MPLS_HLEN))
5790 +            return -ENOMEM;
5791 +
5789 5792      lse = be32_to_cpu(mpls_hdr(skb)->label_stack_entry);
5790 5793      ttl = (lse & MPLS_LS_TTL_MASK) >> MPLS_LS_TTL_SHIFT;
5791 5794      if (!--ttl)
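The three MPLS fixes in this pull (skbuff, openvswitch, act_mpls) all guard the same read: mpls_hdr() dereferences the 4-byte label stack entry in the linear skb area, so the header must be pulled first or a crafted short packet reads past the buffer. A userspace analogue of the check (a plain length test standing in for pskb_may_pull(); the packet bytes are made up):

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MPLS_HLEN 4

/* Refuse to parse the 32-bit label stack entry unless the buffer
 * actually contains it. */
static int read_mpls_ttl(const uint8_t *pkt, size_t len, size_t net_off)
{
    uint32_t lse;

    if (len < net_off + MPLS_HLEN)
        return -1;                  /* truncated: do not touch the LSE */

    memcpy(&lse, pkt + net_off, MPLS_HLEN);
    return ntohl(lse) & 0xff;       /* TTL is the low byte of the LSE */
}

int main(void)
{
    /* 14-byte fake Ethernet header, then an LSE: label 20, S=1, TTL 64 */
    uint8_t pkt[18] = { [14] = 0x00, 0x01, 0x41, 0x40 };

    printf("ttl   = %d\n", read_mpls_ttl(pkt, sizeof(pkt), 14));
    printf("short = %d\n", read_mpls_ttl(pkt, 16, 14));
    return 0;
}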
+4 -3
net/ipv4/route.c
···
3222 3222
3223 3223      fl4.daddr = dst;
3224 3224      fl4.saddr = src;
3225 -        fl4.flowi4_tos = rtm->rtm_tos;
3225 +        fl4.flowi4_tos = rtm->rtm_tos & IPTOS_RT_MASK;
3226 3226      fl4.flowi4_oif = tb[RTA_OIF] ? nla_get_u32(tb[RTA_OIF]) : 0;
3227 3227      fl4.flowi4_mark = mark;
3228 3228      fl4.flowi4_uid = uid;
···
3246 3246          fl4.flowi4_iif = iif; /* for rt_fill_info */
3247 3247          skb->dev = dev;
3248 3248          skb->mark = mark;
3249 -            err = ip_route_input_rcu(skb, dst, src, rtm->rtm_tos,
3250 -                                     dev, &res);
3249 +            err = ip_route_input_rcu(skb, dst, src,
3250 +                                     rtm->rtm_tos & IPTOS_RT_MASK, dev,
3251 +                                     &res);
3251 3252
3252 3253          rt = skb_rtable(skb);
3253 3254          if (err == 0 && rt->dst.error)
+13 -3
net/ipv6/ip6_gre.c
···
1133 1133          return;
1134 1134
1135 1135      if (rt->dst.dev) {
1136 -            dev->needed_headroom = rt->dst.dev->hard_header_len +
1137 -                                   t_hlen;
1136 +            unsigned short dst_len = rt->dst.dev->hard_header_len +
1137 +                                     t_hlen;
1138 +
1139 +            if (t->dev->header_ops)
1140 +                dev->hard_header_len = dst_len;
1141 +            else
1142 +                dev->needed_headroom = dst_len;
1138 1143
1139 1144          if (set_mtu) {
1140 1145              dev->mtu = rt->dst.dev->mtu - t_hlen;
···
1164 1159      tunnel->hlen = tunnel->tun_hlen + tunnel->encap_hlen;
1165 1160
1166 1161      t_hlen = tunnel->hlen + sizeof(struct ipv6hdr);
1167 -        tunnel->dev->needed_headroom = LL_MAX_HEADER + t_hlen;
1162 +
1163 +        if (tunnel->dev->header_ops)
1164 +            tunnel->dev->hard_header_len = LL_MAX_HEADER + t_hlen;
1165 +        else
1166 +            tunnel->dev->needed_headroom = LL_MAX_HEADER + t_hlen;
1167 +
1168 1168      return t_hlen;
1169 1169  }
+1 -2
net/netfilter/ipset/ip_set_core.c
···
271 271
272 272  static const struct nla_policy ipaddr_policy[IPSET_ATTR_IPADDR_MAX + 1] = {
273 273      [IPSET_ATTR_IPADDR_IPV4] = { .type = NLA_U32 },
274 -        [IPSET_ATTR_IPADDR_IPV6] = { .type = NLA_BINARY,
275 -                                     .len = sizeof(struct in6_addr) },
274 +        [IPSET_ATTR_IPADDR_IPV6] = NLA_POLICY_EXACT_LEN(sizeof(struct in6_addr)),
276 275  };
277 276
278 277  int
+25 -6
net/netfilter/ipvs/ip_vs_ctl.c
···
4167 4167
4168 4168      spin_lock_init(&ipvs->tot_stats.lock);
4169 4169
4170 -        proc_create_net("ip_vs", 0, ipvs->net->proc_net, &ip_vs_info_seq_ops,
4171 -                        sizeof(struct ip_vs_iter));
4172 -        proc_create_net_single("ip_vs_stats", 0, ipvs->net->proc_net,
4173 -                               ip_vs_stats_show, NULL);
4174 -        proc_create_net_single("ip_vs_stats_percpu", 0, ipvs->net->proc_net,
4175 -                               ip_vs_stats_percpu_show, NULL);
4170 +    #ifdef CONFIG_PROC_FS
4171 +        if (!proc_create_net("ip_vs", 0, ipvs->net->proc_net,
4172 +                             &ip_vs_info_seq_ops, sizeof(struct ip_vs_iter)))
4173 +            goto err_vs;
4174 +        if (!proc_create_net_single("ip_vs_stats", 0, ipvs->net->proc_net,
4175 +                                    ip_vs_stats_show, NULL))
4176 +            goto err_stats;
4177 +        if (!proc_create_net_single("ip_vs_stats_percpu", 0,
4178 +                                    ipvs->net->proc_net,
4179 +                                    ip_vs_stats_percpu_show, NULL))
4180 +            goto err_percpu;
4181 +    #endif
4176 4182
4177 4183      if (ip_vs_control_net_init_sysctl(ipvs))
4178 4184          goto err;
···
4186 4180      return 0;
4187 4181
4188 4182  err:
4183 +    #ifdef CONFIG_PROC_FS
4184 +        remove_proc_entry("ip_vs_stats_percpu", ipvs->net->proc_net);
4185 +
4186 +    err_percpu:
4187 +        remove_proc_entry("ip_vs_stats", ipvs->net->proc_net);
4188 +
4189 +    err_stats:
4190 +        remove_proc_entry("ip_vs", ipvs->net->proc_net);
4191 +
4192 +    err_vs:
4193 +    #endif
4189 4194      free_percpu(ipvs->tot_stats.cpustats);
4190 4195      return -ENOMEM;
4191 4196  }
···
4205 4188  {
4206 4189      ip_vs_trash_cleanup(ipvs);
4207 4190      ip_vs_control_net_cleanup_sysctl(ipvs);
4191 +    #ifdef CONFIG_PROC_FS
4208 4192      remove_proc_entry("ip_vs_stats_percpu", ipvs->net->proc_net);
4209 4193      remove_proc_entry("ip_vs_stats", ipvs->net->proc_net);
4210 4194      remove_proc_entry("ip_vs", ipvs->net->proc_net);
4195 +    #endif
4211 4196      free_percpu(ipvs->tot_stats.cpustats);
4212 4197  }
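The ipvs fix uses the kernel's usual unwind-ladder idiom: each successfully created proc entry gets a label that removes it, and a later failure jumps into the ladder so everything created so far is torn down in reverse order. A toy standalone version of the shape (userspace C, illustrative names):

#include <stdio.h>
#include <stdlib.h>

static int init_three(int fail_at)
{
    void *vs, *stats, *percpu;

    vs = fail_at == 1 ? NULL : malloc(1);
    if (!vs)
        goto err_vs;
    stats = fail_at == 2 ? NULL : malloc(1);
    if (!stats)
        goto err_stats;
    percpu = fail_at == 3 ? NULL : malloc(1);
    if (!percpu)
        goto err_percpu;
    /* success: the caller now owns all three resources */
    free(percpu); free(stats); free(vs);    /* demo-only cleanup */
    return 0;

err_percpu:
    free(stats);    /* undo step 2, then fall through to undo step 1 */
err_stats:
    free(vs);
err_vs:
    return -1;
}

int main(void)
{
    printf("fail_at=3 -> %d (steps 1 and 2 unwound)\n", init_three(3));
    return 0;
}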
+2 -1
net/netfilter/nf_tables_api.c
···
619 619  static void lockdep_nfnl_nft_mutex_not_held(void)
620 620  {
621 621  #ifdef CONFIG_PROVE_LOCKING
622 -        WARN_ON_ONCE(lockdep_nfnl_is_held(NFNL_SUBSYS_NFTABLES));
622 +        if (debug_locks)
623 +            WARN_ON_ONCE(lockdep_nfnl_is_held(NFNL_SUBSYS_NFTABLES));
623 624  #endif
624 625  }
625 626
+17
net/netfilter/nf_tables_offload.c
···
28 28      return flow;
29 29  }
30 30
31 +    void nft_flow_rule_set_addr_type(struct nft_flow_rule *flow,
32 +                                     enum flow_dissector_key_id addr_type)
33 +    {
34 +        struct nft_flow_match *match = &flow->match;
35 +        struct nft_flow_key *mask = &match->mask;
36 +        struct nft_flow_key *key = &match->key;
37 +
38 +        if (match->dissector.used_keys & BIT(FLOW_DISSECTOR_KEY_CONTROL))
39 +            return;
40 +
41 +        key->control.addr_type = addr_type;
42 +        mask->control.addr_type = 0xffff;
43 +        match->dissector.used_keys |= BIT(FLOW_DISSECTOR_KEY_CONTROL);
44 +        match->dissector.offset[FLOW_DISSECTOR_KEY_CONTROL] =
45 +            offsetof(struct nft_flow_key, control);
46 +    }
47 +
31 48  struct nft_flow_rule *nft_flow_rule_create(struct net *net,
32 49                                             const struct nft_rule *rule)
33 50  {
+4 -4
net/netfilter/nft_cmp.c
···
123 123      u8 *mask = (u8 *)&flow->match.mask;
124 124      u8 *key = (u8 *)&flow->match.key;
125 125
126 -        if (priv->op != NFT_CMP_EQ || reg->len != priv->len)
126 +        if (priv->op != NFT_CMP_EQ || priv->len > reg->len)
127 127          return -EOPNOTSUPP;
128 128
129 -        memcpy(key + reg->offset, &priv->data, priv->len);
130 -        memcpy(mask + reg->offset, &reg->mask, priv->len);
129 +        memcpy(key + reg->offset, &priv->data, reg->len);
130 +        memcpy(mask + reg->offset, &reg->mask, reg->len);
131 131
132 132      flow->match.dissector.used_keys |= BIT(reg->key);
133 133      flow->match.dissector.offset[reg->key] = reg->base_offset;
···
137 137          nft_reg_load16(priv->data.data) != ARPHRD_ETHER)
138 138          return -EOPNOTSUPP;
139 139
140 -        nft_offload_update_dependency(ctx, &priv->data, priv->len);
140 +        nft_offload_update_dependency(ctx, &priv->data, reg->len);
141 141
142 142      return 0;
143 143  }
+8 -8
net/netfilter/nft_meta.c
···
724 724
725 725      switch (priv->key) {
726 726      case NFT_META_PROTOCOL:
727 -            NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_BASIC, basic, n_proto,
728 -                              sizeof(__u16), reg);
727 +            NFT_OFFLOAD_MATCH_EXACT(FLOW_DISSECTOR_KEY_BASIC, basic, n_proto,
728 +                                    sizeof(__u16), reg);
729 729          nft_offload_set_dependency(ctx, NFT_OFFLOAD_DEP_NETWORK);
730 730          break;
731 731      case NFT_META_L4PROTO:
732 -            NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_BASIC, basic, ip_proto,
733 -                              sizeof(__u8), reg);
732 +            NFT_OFFLOAD_MATCH_EXACT(FLOW_DISSECTOR_KEY_BASIC, basic, ip_proto,
733 +                                    sizeof(__u8), reg);
734 734          nft_offload_set_dependency(ctx, NFT_OFFLOAD_DEP_TRANSPORT);
735 735          break;
736 736      case NFT_META_IIF:
737 -            NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_META, meta,
738 -                              ingress_ifindex, sizeof(__u32), reg);
737 +            NFT_OFFLOAD_MATCH_EXACT(FLOW_DISSECTOR_KEY_META, meta,
738 +                                    ingress_ifindex, sizeof(__u32), reg);
739 739          break;
740 740      case NFT_META_IIFTYPE:
741 -            NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_META, meta,
742 -                              ingress_iftype, sizeof(__u16), reg);
741 +            NFT_OFFLOAD_MATCH_EXACT(FLOW_DISSECTOR_KEY_META, meta,
742 +                                    ingress_iftype, sizeof(__u16), reg);
743 743          break;
744 744      default:
745 745          return -EOPNOTSUPP;
+53 -17
net/netfilter/nft_payload.c
···
165 165      return -1;
166 166  }
167 167
168 +    static bool nft_payload_offload_mask(struct nft_offload_reg *reg,
169 +                                         u32 priv_len, u32 field_len)
170 +    {
171 +        unsigned int remainder, delta, k;
172 +        struct nft_data mask = {};
173 +        __be32 remainder_mask;
174 +
175 +        if (priv_len == field_len) {
176 +            memset(&reg->mask, 0xff, priv_len);
177 +            return true;
178 +        } else if (priv_len > field_len) {
179 +            return false;
180 +        }
181 +
182 +        memset(&mask, 0xff, field_len);
183 +        remainder = priv_len % sizeof(u32);
184 +        if (remainder) {
185 +            k = priv_len / sizeof(u32);
186 +            delta = field_len - priv_len;
187 +            remainder_mask = htonl(~((1 << (delta * BITS_PER_BYTE)) - 1));
188 +            mask.data[k] = (__force u32)remainder_mask;
189 +        }
190 +
191 +        memcpy(&reg->mask, &mask, field_len);
192 +
193 +        return true;
194 +    }
195 +
168 196  static int nft_payload_offload_ll(struct nft_offload_ctx *ctx,
169 197                                    struct nft_flow_rule *flow,
170 198                                    const struct nft_payload *priv)
···
201 173
202 174      switch (priv->offset) {
203 175      case offsetof(struct ethhdr, h_source):
204 -            if (priv->len != ETH_ALEN)
176 +            if (!nft_payload_offload_mask(reg, priv->len, ETH_ALEN))
205 177              return -EOPNOTSUPP;
206 178
207 179          NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_ETH_ADDRS, eth_addrs,
208 180                            src, ETH_ALEN, reg);
209 181          break;
210 182      case offsetof(struct ethhdr, h_dest):
211 -            if (priv->len != ETH_ALEN)
183 +            if (!nft_payload_offload_mask(reg, priv->len, ETH_ALEN))
212 184              return -EOPNOTSUPP;
213 185
214 186          NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_ETH_ADDRS, eth_addrs,
215 187                            dst, ETH_ALEN, reg);
216 188          break;
217 189      case offsetof(struct ethhdr, h_proto):
218 -            if (priv->len != sizeof(__be16))
190 +            if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
219 191              return -EOPNOTSUPP;
220 192
221 193          NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_BASIC, basic,
···
223 195          nft_offload_set_dependency(ctx, NFT_OFFLOAD_DEP_NETWORK);
224 196          break;
225 197      case offsetof(struct vlan_ethhdr, h_vlan_TCI):
226 -            if (priv->len != sizeof(__be16))
198 +            if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
227 199              return -EOPNOTSUPP;
228 200
229 201          NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_VLAN, vlan,
230 202                            vlan_tci, sizeof(__be16), reg);
231 203          break;
232 204      case offsetof(struct vlan_ethhdr, h_vlan_encapsulated_proto):
233 -            if (priv->len != sizeof(__be16))
205 +            if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
234 206              return -EOPNOTSUPP;
235 207
236 208          NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_VLAN, vlan,
···
238 210          nft_offload_set_dependency(ctx, NFT_OFFLOAD_DEP_NETWORK);
239 211          break;
240 212      case offsetof(struct vlan_ethhdr, h_vlan_TCI) + sizeof(struct vlan_hdr):
241 -            if (priv->len != sizeof(__be16))
213 +            if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
242 214              return -EOPNOTSUPP;
243 215
244 216          NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_CVLAN, vlan,
···
246 218          break;
247 219      case offsetof(struct vlan_ethhdr, h_vlan_encapsulated_proto) +
248 220           sizeof(struct vlan_hdr):
249 -            if (priv->len != sizeof(__be16))
221 +            if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
250 222              return -EOPNOTSUPP;
251 223
252 224          NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_CVLAN, vlan,
···
267 239
268 240      switch (priv->offset) {
269 241      case offsetof(struct iphdr, saddr):
270 -            if (priv->len != sizeof(struct in_addr))
242 +            if (!nft_payload_offload_mask(reg, priv->len,
243 +                                          sizeof(struct in_addr)))
271 244              return -EOPNOTSUPP;
272 245
273 246          NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_IPV4_ADDRS, ipv4, src,
274 247                            sizeof(struct in_addr), reg);
248 +            nft_flow_rule_set_addr_type(flow, FLOW_DISSECTOR_KEY_IPV4_ADDRS);
275 249          break;
276 250      case offsetof(struct iphdr, daddr):
277 -            if (priv->len != sizeof(struct in_addr))
251 +            if (!nft_payload_offload_mask(reg, priv->len,
252 +                                          sizeof(struct in_addr)))
278 253              return -EOPNOTSUPP;
279 254
280 255          NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_IPV4_ADDRS, ipv4, dst,
281 256                            sizeof(struct in_addr), reg);
257 +            nft_flow_rule_set_addr_type(flow, FLOW_DISSECTOR_KEY_IPV4_ADDRS);
282 258          break;
283 259      case offsetof(struct iphdr, protocol):
284 -            if (priv->len != sizeof(__u8))
260 +            if (!nft_payload_offload_mask(reg, priv->len, sizeof(__u8)))
285 261              return -EOPNOTSUPP;
286 262
287 263          NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_BASIC, basic, ip_proto,
···
307 275
308 276      switch (priv->offset) {
309 277      case offsetof(struct ipv6hdr, saddr):
310 -            if (priv->len != sizeof(struct in6_addr))
278 +            if (!nft_payload_offload_mask(reg, priv->len,
279 +                                          sizeof(struct in6_addr)))
311 280              return -EOPNOTSUPP;
312 281
313 282          NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_IPV6_ADDRS, ipv6, src,
314 283                            sizeof(struct in6_addr), reg);
284 +            nft_flow_rule_set_addr_type(flow, FLOW_DISSECTOR_KEY_IPV6_ADDRS);
315 285          break;
316 286      case offsetof(struct ipv6hdr, daddr):
317 -            if (priv->len != sizeof(struct in6_addr))
287 +            if (!nft_payload_offload_mask(reg, priv->len,
288 +                                          sizeof(struct in6_addr)))
318 289              return -EOPNOTSUPP;
319 290
320 291          NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_IPV6_ADDRS, ipv6, dst,
321 292                            sizeof(struct in6_addr), reg);
293 +            nft_flow_rule_set_addr_type(flow, FLOW_DISSECTOR_KEY_IPV6_ADDRS);
322 294          break;
323 295      case offsetof(struct ipv6hdr, nexthdr):
324 -            if (priv->len != sizeof(__u8))
296 +            if (!nft_payload_offload_mask(reg, priv->len, sizeof(__u8)))
325 297              return -EOPNOTSUPP;
326 298
327 299          NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_BASIC, basic, ip_proto,
···
367 331
368 332      switch (priv->offset) {
369 333      case offsetof(struct tcphdr, source):
370 -            if (priv->len != sizeof(__be16))
334 +            if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
371 335              return -EOPNOTSUPP;
372 336
373 337          NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_PORTS, tp, src,
374 338                            sizeof(__be16), reg);
375 339          break;
376 340      case offsetof(struct tcphdr, dest):
377 -            if (priv->len != sizeof(__be16))
341 +            if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
378 342              return -EOPNOTSUPP;
379 343
380 344          NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_PORTS, tp, dst,
···
395 359
396 360      switch (priv->offset) {
397 361      case offsetof(struct udphdr, source):
398 -            if (priv->len != sizeof(__be16))
362 +            if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
399 363              return -EOPNOTSUPP;
400 364
401 365          NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_PORTS, tp, src,
402 366                            sizeof(__be16), reg);
403 367          break;
404 368      case offsetof(struct udphdr, dest):
405 -            if (priv->len != sizeof(__be16))
369 +            if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
406 370              return -EOPNOTSUPP;
407 371
408 372          NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_PORTS, tp, dst,
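The key arithmetic in nft_payload_offload_mask() above is the remainder mask: when the match length is shorter than the field, the final 32-bit word of the mask keeps only the leading (network-order) bytes. A standalone check of that expression for a 3-byte match against a 4-byte field (userspace C):

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

#define BITS_PER_BYTE 8

int main(void)
{
    uint32_t priv_len = 3, field_len = 4;   /* match 3 of 4 bytes */
    uint32_t delta = field_len - priv_len;
    uint32_t remainder_mask =
        htonl(~((1u << (delta * BITS_PER_BYTE)) - 1));
    const uint8_t *b = (const uint8_t *)&remainder_mask;

    /* memory order == wire order after htonl(): prints ff ff ff 00 */
    printf("mask bytes: %02x %02x %02x %02x\n", b[0], b[1], b[2], b[3]);
    return 0;
}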
+3
net/openvswitch/actions.c
···
199 199      __be32 lse;
200 200      int err;
201 201
202 +        if (!pskb_may_pull(skb, skb_network_offset(skb) + MPLS_HLEN))
203 +            return -ENOMEM;
204 +
202 205      stack = mpls_hdr(skb);
203 206      lse = OVS_MASKED(stack->label_stack_entry, *mpls_lse, *mask);
204 207      err = skb_mpls_update_lse(skb, lse);
+3
net/sched/act_mpls.c
···
105 105              goto drop;
106 106          break;
107 107      case TCA_MPLS_ACT_MODIFY:
108 +            if (!pskb_may_pull(skb,
109 +                               skb_network_offset(skb) + MPLS_HLEN))
110 +                goto drop;
108 111          new_lse = tcf_mpls_get_lse(mpls_hdr(skb), p, false);
109 112          if (skb_mpls_update_lse(skb, new_lse))
110 113              goto drop;
+2
net/tipc/node.c
···
2182 2182          else if (prop == TIPC_NLA_PROP_MTU)
2183 2183              tipc_link_set_mtu(e->link, b->mtu);
2184 2184      }
2185 +        /* Update MTU for node link entry */
2186 +        e->mtu = tipc_link_mss(e->link);
2185 2187      tipc_node_write_unlock(n);
2186 2188      tipc_bearer_xmit(net, bearer_id, &xmitq, &e->maddr, NULL);
2187 2189  }
+4 -2
net/x25/af_x25.c
···
681 681      int len, i, rc = 0;
682 682
683 683      if (addr_len != sizeof(struct sockaddr_x25) ||
684 -            addr->sx25_family != AF_X25) {
684 +            addr->sx25_family != AF_X25 ||
685 +            strnlen(addr->sx25_addr.x25_addr, X25_ADDR_LEN) == X25_ADDR_LEN) {
685 686          rc = -EINVAL;
686 687          goto out;
687 688      }
···
776 775
777 776      rc = -EINVAL;
778 777      if (addr_len != sizeof(struct sockaddr_x25) ||
779 -            addr->sx25_family != AF_X25)
778 +            addr->sx25_family != AF_X25 ||
779 +            strnlen(addr->sx25_addr.x25_addr, X25_ADDR_LEN) == X25_ADDR_LEN)
780 780          goto out;
781 781
782 782      rc = -ENETUNREACH;
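The x25 overflow fix rejects addresses that fill the whole fixed-size field without a NUL terminator, since later code treats x25_addr as a C string. A standalone sketch of the strnlen() check (userspace C, sample addresses made up):

#include <stdio.h>
#include <string.h>

#define X25_ADDR_LEN 16

/* An address copied from userspace is only safe to use as a C string if
 * a NUL appears within the fixed 16-byte field; strnlen() returning
 * X25_ADDR_LEN means there is no terminator at all. */
static int addr_ok(const char addr[X25_ADDR_LEN])
{
    return strnlen(addr, X25_ADDR_LEN) < X25_ADDR_LEN;
}

int main(void)
{
    char good[X25_ADDR_LEN] = "2041234567";     /* NUL-padded */
    char bad[X25_ADDR_LEN];

    memset(bad, '7', sizeof(bad));              /* no NUL anywhere */
    printf("good: %d, bad: %d\n", addr_ok(good), addr_ok(bad));
    return 0;
}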
+16 -3
net/xdp/xdp_umem.c
···
66 66      kfree(umem);
67 67  }
68 68
69 +    static void xdp_umem_release_deferred(struct work_struct *work)
70 +    {
71 +        struct xdp_umem *umem = container_of(work, struct xdp_umem, work);
72 +
73 +        xdp_umem_release(umem);
74 +    }
75 +
69 76  void xdp_get_umem(struct xdp_umem *umem)
70 77  {
71 78      refcount_inc(&umem->users);
72 79  }
73 80
74 -    void xdp_put_umem(struct xdp_umem *umem)
81 +    void xdp_put_umem(struct xdp_umem *umem, bool defer_cleanup)
75 82  {
76 83      if (!umem)
77 84          return;
78 85
79 -        if (refcount_dec_and_test(&umem->users))
80 -            xdp_umem_release(umem);
86 +        if (refcount_dec_and_test(&umem->users)) {
87 +            if (defer_cleanup) {
88 +                INIT_WORK(&umem->work, xdp_umem_release_deferred);
89 +                schedule_work(&umem->work);
90 +            } else {
91 +                xdp_umem_release(umem);
92 +            }
93 +        }
81 94  }
82 95
83 96  static int xdp_umem_pin_pages(struct xdp_umem *umem, unsigned long address)
+1 -1
net/xdp/xdp_umem.h
···
 9  9  #include <net/xdp_sock_drv.h>
10 10
11 11  void xdp_get_umem(struct xdp_umem *umem);
12 -    void xdp_put_umem(struct xdp_umem *umem);
12 +    void xdp_put_umem(struct xdp_umem *umem, bool defer_cleanup);
13 13  struct xdp_umem *xdp_umem_create(struct xdp_umem_reg *mr);
14 14
15 15  #endif /* XDP_UMEM_H_ */
+2 -8
net/xdp/xsk.c
···
411 411      skb_shinfo(skb)->destructor_arg = (void *)(long)desc.addr;
412 412      skb->destructor = xsk_destruct_skb;
413 413
414 -        /* Hinder dev_direct_xmit from freeing the packet and
415 -         * therefore completing it in the destructor
416 -         */
417 -        refcount_inc(&skb->users);
418 -        err = dev_direct_xmit(skb, xs->queue_id);
414 +        err = __dev_direct_xmit(skb, xs->queue_id);
419 415      if (err == NETDEV_TX_BUSY) {
420 416          /* Tell user-space to retry the send */
421 417          skb->destructor = sock_wfree;
···
425 429      /* Ignore NET_XMIT_CN as packet might have been sent */
426 430      if (err == NET_XMIT_DROP) {
427 431          /* SKB completed but not sent */
428 -            kfree_skb(skb);
429 432          err = -EBUSY;
430 433          goto out;
431 434      }
432 435
433 -        consume_skb(skb);
434 436      sent_frame = true;
435 437  }
···
1141 1147          return;
1142 1148
1143 1149      if (!xp_put_pool(xs->pool))
1144 -            xdp_put_umem(xs->umem);
1150 +            xdp_put_umem(xs->umem, !xs->pool);
1145 1151
1146 1152      sk_refcnt_debug_dec(sk);
1147 1153  }
+4 -2
net/xdp/xsk_buff_pool.c
···
185 185  err_unreg_pool:
186 186      if (!force_zc)
187 187          err = 0; /* fallback to copy mode */
188 -        if (err)
188 +        if (err) {
189 189          xsk_clear_pool_at_qid(netdev, queue_id);
190 +            dev_put(netdev);
191 +        }
190 192      return err;
191 193  }
···
244 242          pool->cq = NULL;
245 243      }
246 244
247 -        xdp_put_umem(pool->umem);
245 +        xdp_put_umem(pool->umem, false);
248 246      xp_destroy(pool);
249 247  }
+1
tools/bpf/bpftool/btf.c
···
693 693      obj_node = calloc(1, sizeof(*obj_node));
694 694      if (!obj_node) {
695 695          p_err("failed to allocate memory: %s", strerror(errno));
696 +            err = -ENOMEM;
696 697          goto err_free;
697 698      }
698 699
+1
tools/testing/selftests/tc-testing/config
···
59 59  CONFIG_NET_IFE_SKBTCINDEX=m
60 60  CONFIG_NET_SCH_FIFO=y
61 61  CONFIG_NET_SCH_ETS=m
62 +    CONFIG_NET_SCH_RED=m
62 63
63 64  #
64 65  ## Network testing