Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from David Miller:

1) Fix cfg80211 deadlock, from Johannes Berg.

2) RXRPC fails to send notifications, from David Howells.

3) MPTCP RM_ADDR parsing has an off-by-one pointer error, fix from
Geliang Tang.
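
   The bug class, sketched on a hypothetical kind/length/value option
   layout (illustrative only, not the actual MPTCP parser): indexing the
   option one byte too early makes the code read the length field where
   it expects the address id.

    #include <linux/types.h>

    /* Hypothetical option layout: kind (1 byte), length (1 byte),
     * address id (1 byte).  Illustration of the bug class only.
     */
    static u8 rm_addr_id_buggy(const u8 *opt)
    {
            return opt[1];  /* off by one: this is the length field */
    }

    static u8 rm_addr_id_fixed(const u8 *opt)
    {
            return opt[2];  /* the address id actually lives here */
    }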

4) Fix crash when using MSG_PEEK with sockmap, from Anny Hu.

5) The ucc_geth driver needs __netdev_watchdog_up exported, from
Valentin Longchamp.

6) Fix hashtable memory leak in dccp, from Wang Hai.

7) Fix how nexthops are marked as FDB nexthops, from David Ahern.

8) Fix mptcp races between shutdown and recvmsg, from Paolo Abeni.

9) Fix crashes in tipc_disc_rcv(), from Tuong Lien.

10) Fix link speed reporting in iavf driver, from Brett Creeley.

11) When a channel is used for XSK and then reused again later for XSK,
we forget to clear out the relevant data structures in mlx5 which
causes all kinds of problems. Fix from Maxim Mikityanskiy.
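
   The shape of the fix, as a sketch (it assumes the driver's struct
   mlx5e_channel with xskrq/xsksq members; the helper name and exact
   placement are illustrative, not the exact driver code): once a
   channel's XSK queues are torn down, zero the per-channel XSK state so
   a later XSK setup on the same channel starts clean rather than
   inheriting stale queue state.

    #include <linux/string.h>

    /* Illustrative teardown step: after closing the XSK RQ/SQ and their
     * CQs, wipe the per-channel XSK structures so the channel can be
     * safely reused for XSK later.
     */
    static void example_close_xsk_state(struct mlx5e_channel *c)
    {
            memset(&c->xskrq, 0, sizeof(c->xskrq));
            memset(&c->xsksq, 0, sizeof(c->xsksq));
    }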

12) Fix memory leak in genetlink, from Cong Wang.

13) Disallow sockmap attachments to UDP sockets; it simply won't work.
From Lorenz Bauer.
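
   A sketch of the kind of guard this amounts to (illustrative, not the
   exact sockmap code): refuse any socket that is not a TCP stream
   socket before it can be paired with sockmap parser/verdict programs.

    #include <linux/errno.h>
    #include <linux/net.h>
    #include <linux/in.h>
    #include <net/sock.h>

    /* Illustrative compatibility check for a sockmap update path. */
    static int example_sockmap_sk_supported(const struct sock *sk)
    {
            if (sk->sk_type != SOCK_STREAM ||
                sk->sk_protocol != IPPROTO_TCP)
                    return -EOPNOTSUPP;
            return 0;
    }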

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (83 commits)
net: ethernet: ti: ale: fix allmulti for nu type ale
net: ethernet: ti: am65-cpsw-nuss: fix ale parameters init
net: atm: Remove the error message according to the atomic context
bpf: Undo internal BPF_PROBE_MEM in BPF insns dump
libbpf: Support pre-initializing .bss global variables
tools/bpftool: Fix skeleton codegen
bpf: Fix memlock accounting for sock_hash
bpf: sockmap: Don't attach programs to UDP sockets
bpf: tcp: Recv() should return 0 when the peer socket is closed
ibmvnic: Flush existing work items before device removal
genetlink: clean up family attributes allocations
net: ipa: header pad field only valid for AP->modem endpoint
net: ipa: program upper nibbles of sequencer type
net: ipa: fix modem LAN RX endpoint id
net: ipa: program metadata mask differently
ionic: add pcie_print_link_status
rxrpc: Fix race between incoming ACK parser and retransmitter
net/mlx5: E-Switch, Fix some error pointer dereferences
net/mlx5: Don't fail driver on failure to create debugfs
net/mlx5e: CT: Fix ipv6 nat header rewrite actions
...

Changed files (+1348 -651):

 +1        Documentation/networking/devlink/index.rst
 +49       Documentation/networking/devlink/sja1105.rst
 +4 -2     Documentation/networking/dsa/sja1105.rst
 +6 -6     Documentation/networking/ethtool-netlink.rst
 +1 -1     Documentation/networking/mac80211-injection.rst
 +3 -3     Documentation/networking/regulatory.rst
 +59 -22   drivers/crypto/chelsio/chcr_algo.c
 -2        drivers/net/bonding/bond_main.c
 -2        drivers/net/bonding/bond_options.c
 +4 -1     drivers/net/ethernet/cadence/macb_main.c
 +3 -2     drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
 +3        drivers/net/ethernet/ibm/ibmvnic.c
 +18       drivers/net/ethernet/intel/iavf/iavf.h
 +24 -13   drivers/net/ethernet/intel/iavf/iavf_ethtool.c
 +41 -26   drivers/net/ethernet/intel/iavf/iavf_main.c
 +6 -6     drivers/net/ethernet/intel/iavf/iavf_txrx.c
 +88 -18   drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
 +13       drivers/net/ethernet/marvell/mvneta.c
 -2        drivers/net/ethernet/mellanox/mlx5/core/devlink.c
 +10 -10   drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
 +4        drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
 +22 -19   drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
 +4 -2     drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
 +11 -3    drivers/net/ethernet/mellanox/mlx5/core/health.c
 +21 -21   drivers/net/ethernet/mellanox/mlx5/core/main.c
 +1 -1     drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
 -2        drivers/net/ethernet/pensando/ionic/ionic.h
 +1 -6     drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
 -4        drivers/net/ethernet/pensando/ionic/ionic_devlink.c
 +2 -15    drivers/net/ethernet/pensando/ionic/ionic_lif.c
 +1 -1     drivers/net/ethernet/ti/am65-cpsw-nuss.c
 +40 -9    drivers/net/ethernet/ti/cpsw_ale.c
 +2        drivers/net/hamradio/bpqether.c
 +1 -1     drivers/net/ipa/ipa_data-sc7180.c
 +58 -39   drivers/net/ipa/ipa_endpoint.c
 +2        drivers/net/ipa/ipa_reg.h
 +5        drivers/net/macsec.c
 +11 -2    drivers/net/macvlan.c
 +5 -9     drivers/net/vxlan.c
 +3        drivers/net/wireless/intersil/hostap/hostap_hw.c
 +8 -4     include/linux/netdevice.h
 +3 -2     include/net/cfg80211.h
 -24       ...
include/net/flow_offload.h
··· 542 542 struct flow_block_offload *bo, 543 543 void (*cleanup)(struct flow_block_cb *block_cb)); 544 544 545 - typedef void flow_indr_block_cmd_t(struct net_device *dev, 546 - flow_indr_block_bind_cb_t *cb, void *cb_priv, 547 - enum flow_block_command command); 548 - 549 - int __flow_indr_block_cb_register(struct net_device *dev, void *cb_priv, 550 - flow_indr_block_bind_cb_t *cb, 551 - void *cb_ident); 552 - 553 - void __flow_indr_block_cb_unregister(struct net_device *dev, 554 - flow_indr_block_bind_cb_t *cb, 555 - void *cb_ident); 556 - 557 - int flow_indr_block_cb_register(struct net_device *dev, void *cb_priv, 558 - flow_indr_block_bind_cb_t *cb, void *cb_ident); 559 - 560 - void flow_indr_block_cb_unregister(struct net_device *dev, 561 - flow_indr_block_bind_cb_t *cb, 562 - void *cb_ident); 563 - 564 - void flow_indr_block_call(struct net_device *dev, 565 - struct flow_block_offload *bo, 566 - enum flow_block_command command, 567 - enum tc_setup_type type); 568 - 569 545 #endif /* _NET_FLOW_OFFLOAD_H */
+6
include/net/inet_hashtables.h
··· 185 185 186 186 int inet_ehash_locks_alloc(struct inet_hashinfo *hashinfo); 187 187 188 + static inline void inet_hashinfo2_free_mod(struct inet_hashinfo *h) 189 + { 190 + kfree(h->lhash2); 191 + h->lhash2 = NULL; 192 + } 193 + 188 194 static inline void inet_ehash_locks_free(struct inet_hashinfo *hashinfo) 189 195 { 190 196 kvfree(hashinfo->ehash_locks);
+27 -1
include/net/nexthop.h
··· 76 76 struct nh_group *spare; /* spare group for removals */ 77 77 u16 num_nh; 78 78 bool mpath; 79 + bool fdb_nh; 79 80 bool has_v4; 80 81 struct nh_grp_entry nh_entries[]; 81 82 }; ··· 94 93 u8 protocol; /* app managing this nh */ 95 94 u8 nh_flags; 96 95 bool is_group; 97 - bool is_fdb_nh; 98 96 99 97 refcount_t refcnt; 100 98 struct rcu_head rcu; ··· 134 134 const struct nexthop *nh2) 135 135 { 136 136 return nh1 == nh2; 137 + } 138 + 139 + static inline bool nexthop_is_fdb(const struct nexthop *nh) 140 + { 141 + if (nh->is_group) { 142 + const struct nh_group *nh_grp; 143 + 144 + nh_grp = rcu_dereference_rtnl(nh->nh_grp); 145 + return nh_grp->fdb_nh; 146 + } else { 147 + const struct nh_info *nhi; 148 + 149 + nhi = rcu_dereference_rtnl(nh->nh_info); 150 + return nhi->fdb_nh; 151 + } 152 + } 153 + 154 + static inline bool nexthop_has_v4(const struct nexthop *nh) 155 + { 156 + if (nh->is_group) { 157 + struct nh_group *nh_grp; 158 + 159 + nh_grp = rcu_dereference_rtnl(nh->nh_grp); 160 + return nh_grp->has_v4; 161 + } 162 + return false; 137 163 } 138 164 139 165 static inline bool nexthop_is_multipath(const struct nexthop *nh)
+13
include/uapi/linux/bpf.h
··· 3761 3761 __u32 egress_ifindex; /* txq->dev->ifindex */ 3762 3762 }; 3763 3763 3764 + /* DEVMAP map-value layout 3765 + * 3766 + * The struct data-layout of map-value is a configuration interface. 3767 + * New members can only be added to the end of this structure. 3768 + */ 3769 + struct bpf_devmap_val { 3770 + __u32 ifindex; /* device index */ 3771 + union { 3772 + int fd; /* prog fd on map write */ 3773 + __u32 id; /* prog id on map read */ 3774 + } bpf_prog; 3775 + }; 3776 + 3764 3777 enum sk_action { 3765 3778 SK_DROP = 0, 3766 3779 SK_PASS,
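The new devmap value layout above lets a map entry carry an optional XDP program to run on the egress device. Below is a minimal userspace sketch, not part of this merge, of populating such an entry under a libbpf environment; the helper name devmap_set_slot and the map_fd/key/ifindex/xdp_prog_fd parameters are hypothetical and only illustrate the layout, with fd <= 0 meaning "no program" per the devmap.c check later in this series:

    #include <bpf/bpf.h>          /* bpf_map_update_elem() */
    #include <linux/bpf.h>        /* struct bpf_devmap_val, BPF_ANY */

    /* Hypothetical helper: bind an egress ifindex (and optionally an XDP
     * program fd) to slot 'key' of a BPF_MAP_TYPE_DEVMAP map created with
     * value_size == sizeof(struct bpf_devmap_val).
     */
    static int devmap_set_slot(int map_fd, __u32 key, __u32 ifindex, int xdp_prog_fd)
    {
            struct bpf_devmap_val val = {
                    .ifindex = ifindex,           /* egress device index */
                    .bpf_prog.fd = xdp_prog_fd,   /* > 0 attaches a program; read back as .id */
            };

            return bpf_map_update_elem(map_fd, &key, &val, BPF_ANY);
    }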
+1 -1
include/uapi/linux/nl80211.h
··· 794 794 * various triggers. These triggers can be configured through this 795 795 * command with the %NL80211_ATTR_WOWLAN_TRIGGERS attribute. For 796 796 * more background information, see 797 - * http://wireless.kernel.org/en/users/Documentation/WoWLAN. 797 + * https://wireless.wiki.kernel.org/en/users/Documentation/WoWLAN. 798 798 * The @NL80211_CMD_SET_WOWLAN command can also be used as a notification 799 799 * from the driver reporting the wakeup reason. In this case, the 800 800 * @NL80211_ATTR_WOWLAN_TRIGGERS attribute will contain the reason
+1 -1
kernel/bpf/cgroup.c
··· 378 378 } 379 379 380 380 list_for_each_entry(pl, progs, node) { 381 - if (prog && pl->prog == prog) 381 + if (prog && pl->prog == prog && prog != replace_prog) 382 382 /* disallow attaching the same prog twice */ 383 383 return ERR_PTR(-EINVAL); 384 384 if (link && pl->link == link)
+5 -13
kernel/bpf/devmap.c
··· 60 60 unsigned int count; 61 61 }; 62 62 63 - /* DEVMAP values */ 64 - struct bpf_devmap_val { 65 - u32 ifindex; /* device index */ 66 - union { 67 - int fd; /* prog fd on map write */ 68 - u32 id; /* prog id on map read */ 69 - } bpf_prog; 70 - }; 71 - 72 63 struct bpf_dtab_netdev { 73 64 struct net_device *dev; /* must be first member, due to tracepoint */ 74 65 struct hlist_node index_hlist; ··· 470 479 struct xdp_txq_info txq = { .dev = dev }; 471 480 u32 act; 472 481 482 + xdp_set_data_meta_invalid(xdp); 473 483 xdp->txq = &txq; 474 484 475 485 act = bpf_prog_run_xdp(xdp_prog, xdp); ··· 610 618 if (!dev->dev) 611 619 goto err_out; 612 620 613 - if (val->bpf_prog.fd >= 0) { 621 + if (val->bpf_prog.fd > 0) { 614 622 prog = bpf_prog_get_type_dev(val->bpf_prog.fd, 615 623 BPF_PROG_TYPE_XDP, false); 616 624 if (IS_ERR(prog)) ··· 644 652 void *key, void *value, u64 map_flags) 645 653 { 646 654 struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map); 647 - struct bpf_devmap_val val = { .bpf_prog.fd = -1 }; 648 655 struct bpf_dtab_netdev *dev, *old_dev; 656 + struct bpf_devmap_val val = {}; 649 657 u32 i = *(u32 *)key; 650 658 651 659 if (unlikely(map_flags > BPF_EXIST)) ··· 661 669 if (!val.ifindex) { 662 670 dev = NULL; 663 671 /* can not specify fd if ifindex is 0 */ 664 - if (val.bpf_prog.fd != -1) 672 + if (val.bpf_prog.fd > 0) 665 673 return -EINVAL; 666 674 } else { 667 675 dev = __dev_map_alloc_node(net, dtab, &val, i); ··· 691 699 void *key, void *value, u64 map_flags) 692 700 { 693 701 struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map); 694 - struct bpf_devmap_val val = { .bpf_prog.fd = -1 }; 695 702 struct bpf_dtab_netdev *dev, *old_dev; 703 + struct bpf_devmap_val val = {}; 696 704 u32 idx = *(u32 *)key; 697 705 unsigned long flags; 698 706 int err = -EEXIST;
+12 -5
kernel/bpf/syscall.c
··· 3145 3145 struct bpf_insn *insns; 3146 3146 u32 off, type; 3147 3147 u64 imm; 3148 + u8 code; 3148 3149 int i; 3149 3150 3150 3151 insns = kmemdup(prog->insnsi, bpf_prog_insn_size(prog), ··· 3154 3153 return insns; 3155 3154 3156 3155 for (i = 0; i < prog->len; i++) { 3157 - if (insns[i].code == (BPF_JMP | BPF_TAIL_CALL)) { 3156 + code = insns[i].code; 3157 + 3158 + if (code == (BPF_JMP | BPF_TAIL_CALL)) { 3158 3159 insns[i].code = BPF_JMP | BPF_CALL; 3159 3160 insns[i].imm = BPF_FUNC_tail_call; 3160 3161 /* fall-through */ 3161 3162 } 3162 - if (insns[i].code == (BPF_JMP | BPF_CALL) || 3163 - insns[i].code == (BPF_JMP | BPF_CALL_ARGS)) { 3164 - if (insns[i].code == (BPF_JMP | BPF_CALL_ARGS)) 3163 + if (code == (BPF_JMP | BPF_CALL) || 3164 + code == (BPF_JMP | BPF_CALL_ARGS)) { 3165 + if (code == (BPF_JMP | BPF_CALL_ARGS)) 3165 3166 insns[i].code = BPF_JMP | BPF_CALL; 3166 3167 if (!bpf_dump_raw_ok()) 3167 3168 insns[i].imm = 0; 3168 3169 continue; 3169 3170 } 3171 + if (BPF_CLASS(code) == BPF_LDX && BPF_MODE(code) == BPF_PROBE_MEM) { 3172 + insns[i].code = BPF_LDX | BPF_SIZE(code) | BPF_MEM; 3173 + continue; 3174 + } 3170 3175 3171 - if (insns[i].code != (BPF_LD | BPF_IMM | BPF_DW)) 3176 + if (code != (BPF_LD | BPF_IMM | BPF_DW)) 3172 3177 continue; 3173 3178 3174 3179 imm = ((u64)insns[i + 1].imm << 32) | (u32)insns[i].imm;
+1 -1
kernel/bpf/verifier.c
··· 7552 7552 const struct btf *btf; 7553 7553 void __user *urecord; 7554 7554 u32 prev_offset = 0; 7555 - int ret = 0; 7555 + int ret = -ENOMEM; 7556 7556 7557 7557 nfuncs = attr->func_info_cnt; 7558 7558 if (!nfuncs)
+1 -1
kernel/trace/trace_kprobe.c
··· 1643 1643 if (perf_type_tracepoint) 1644 1644 tk = find_trace_kprobe(pevent, group); 1645 1645 else 1646 - tk = event->tp_event->data; 1646 + tk = trace_kprobe_primary_from_call(event->tp_event); 1647 1647 if (!tk) 1648 1648 return -EINVAL; 1649 1649
+1 -1
kernel/trace/trace_uprobe.c
··· 1412 1412 if (perf_type_tracepoint) 1413 1413 tu = find_probe_event(pevent, group); 1414 1414 else 1415 - tu = event->tp_event->data; 1415 + tu = trace_uprobe_primary_from_call(event->tp_event); 1416 1416 if (!tu) 1417 1417 return -EINVAL; 1418 1418
+6 -2
net/8021q/vlan_dev.c
··· 494 494 * separate class since they always nest. 495 495 */ 496 496 static struct lock_class_key vlan_netdev_xmit_lock_key; 497 + static struct lock_class_key vlan_netdev_addr_lock_key; 497 498 498 499 static void vlan_dev_set_lockdep_one(struct net_device *dev, 499 500 struct netdev_queue *txq, ··· 503 502 lockdep_set_class(&txq->_xmit_lock, &vlan_netdev_xmit_lock_key); 504 503 } 505 504 506 - static void vlan_dev_set_lockdep_class(struct net_device *dev) 505 + static void vlan_dev_set_lockdep_class(struct net_device *dev, int subclass) 507 506 { 507 + lockdep_set_class_and_subclass(&dev->addr_list_lock, 508 + &vlan_netdev_addr_lock_key, 509 + subclass); 508 510 netdev_for_each_tx_queue(dev, vlan_dev_set_lockdep_one, NULL); 509 511 } 510 512 ··· 601 597 602 598 SET_NETDEV_DEVTYPE(dev, &vlan_type); 603 599 604 - vlan_dev_set_lockdep_class(dev); 600 + vlan_dev_set_lockdep_class(dev, dev->lower_level); 605 601 606 602 vlan->vlan_pcpu_stats = netdev_alloc_pcpu_stats(struct vlan_pcpu_stats); 607 603 if (!vlan->vlan_pcpu_stats)
+1 -3
net/atm/lec.c
··· 1536 1536 struct lec_arp_table *to_return; 1537 1537 1538 1538 to_return = kzalloc(sizeof(struct lec_arp_table), GFP_ATOMIC); 1539 - if (!to_return) { 1540 - pr_info("LEC: Arp entry kmalloc failed\n"); 1539 + if (!to_return) 1541 1540 return NULL; 1542 - } 1543 1541 ether_addr_copy(to_return->mac_addr, mac_addr); 1544 1542 INIT_HLIST_NODE(&to_return->next); 1545 1543 timer_setup(&to_return->timer, lec_arp_expire_arp, 0);
+2
net/batman-adv/soft-interface.c
··· 745 745 * separate class since they always nest. 746 746 */ 747 747 static struct lock_class_key batadv_netdev_xmit_lock_key; 748 + static struct lock_class_key batadv_netdev_addr_lock_key; 748 749 749 750 /** 750 751 * batadv_set_lockdep_class_one() - Set lockdep class for a single tx queue ··· 766 765 */ 767 766 static void batadv_set_lockdep_class(struct net_device *dev) 768 767 { 768 + lockdep_set_class(&dev->addr_list_lock, &batadv_netdev_addr_lock_key); 769 769 netdev_for_each_tx_queue(dev, batadv_set_lockdep_class_one, NULL); 770 770 } 771 771
+8
net/bridge/br_device.c
··· 105 105 return NETDEV_TX_OK; 106 106 } 107 107 108 + static struct lock_class_key bridge_netdev_addr_lock_key; 109 + 110 + static void br_set_lockdep_class(struct net_device *dev) 111 + { 112 + lockdep_set_class(&dev->addr_list_lock, &bridge_netdev_addr_lock_key); 113 + } 114 + 108 115 static int br_dev_init(struct net_device *dev) 109 116 { 110 117 struct net_bridge *br = netdev_priv(dev); ··· 150 143 br_fdb_hash_fini(br); 151 144 } 152 145 146 + br_set_lockdep_class(dev); 153 147 return err; 154 148 } 155 149
+16 -14
net/core/dev.c
··· 439 439 "_xmit_IEEE802154", "_xmit_VOID", "_xmit_NONE"}; 440 440 441 441 static struct lock_class_key netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)]; 442 + static struct lock_class_key netdev_addr_lock_key[ARRAY_SIZE(netdev_lock_type)]; 442 443 443 444 static inline unsigned short netdev_lock_pos(unsigned short dev_type) 444 445 { ··· 461 460 lockdep_set_class_and_name(lock, &netdev_xmit_lock_key[i], 462 461 netdev_lock_name[i]); 463 462 } 463 + 464 + static inline void netdev_set_addr_lockdep_class(struct net_device *dev) 465 + { 466 + int i; 467 + 468 + i = netdev_lock_pos(dev->type); 469 + lockdep_set_class_and_name(&dev->addr_list_lock, 470 + &netdev_addr_lock_key[i], 471 + netdev_lock_name[i]); 472 + } 464 473 #else 465 474 static inline void netdev_set_xmit_lockdep_class(spinlock_t *lock, 466 475 unsigned short dev_type) 476 + { 477 + } 478 + 479 + static inline void netdev_set_addr_lockdep_class(struct net_device *dev) 467 480 { 468 481 } 469 482 #endif ··· 9388 9373 } 9389 9374 EXPORT_SYMBOL(netif_tx_stop_all_queues); 9390 9375 9391 - void netdev_update_lockdep_key(struct net_device *dev) 9392 - { 9393 - lockdep_unregister_key(&dev->addr_list_lock_key); 9394 - lockdep_register_key(&dev->addr_list_lock_key); 9395 - 9396 - lockdep_set_class(&dev->addr_list_lock, &dev->addr_list_lock_key); 9397 - } 9398 - EXPORT_SYMBOL(netdev_update_lockdep_key); 9399 - 9400 9376 /** 9401 9377 * register_netdevice - register a network device 9402 9378 * @dev: device to register ··· 9426 9420 return ret; 9427 9421 9428 9422 spin_lock_init(&dev->addr_list_lock); 9429 - lockdep_set_class(&dev->addr_list_lock, &dev->addr_list_lock_key); 9423 + netdev_set_addr_lockdep_class(dev); 9430 9424 9431 9425 ret = dev_get_valid_name(net, dev, dev->name); 9432 9426 if (ret < 0) ··· 9945 9939 9946 9940 dev_net_set(dev, &init_net); 9947 9941 9948 - lockdep_register_key(&dev->addr_list_lock_key); 9949 - 9950 9942 dev->gso_max_size = GSO_MAX_SIZE; 9951 9943 dev->gso_max_segs = GSO_MAX_SEGS; 9952 9944 dev->upper_level = 1; ··· 10031 10027 dev->pcpu_refcnt = NULL; 10032 10028 free_percpu(dev->xdp_bulkq); 10033 10029 dev->xdp_bulkq = NULL; 10034 - 10035 - lockdep_unregister_key(&dev->addr_list_lock_key); 10036 10030 10037 10031 /* Compatibility with error handling in drivers */ 10038 10032 if (dev->reg_state == NETREG_UNINITIALIZED) {
+6 -6
net/core/dev_addr_lists.c
··· 637 637 if (to->addr_len != from->addr_len) 638 638 return -EINVAL; 639 639 640 - netif_addr_lock(to); 640 + netif_addr_lock_nested(to); 641 641 err = __hw_addr_sync(&to->uc, &from->uc, to->addr_len); 642 642 if (!err) 643 643 __dev_set_rx_mode(to); ··· 667 667 if (to->addr_len != from->addr_len) 668 668 return -EINVAL; 669 669 670 - netif_addr_lock(to); 670 + netif_addr_lock_nested(to); 671 671 err = __hw_addr_sync_multiple(&to->uc, &from->uc, to->addr_len); 672 672 if (!err) 673 673 __dev_set_rx_mode(to); ··· 691 691 return; 692 692 693 693 netif_addr_lock_bh(from); 694 - netif_addr_lock(to); 694 + netif_addr_lock_nested(to); 695 695 __hw_addr_unsync(&to->uc, &from->uc, to->addr_len); 696 696 __dev_set_rx_mode(to); 697 697 netif_addr_unlock(to); ··· 858 858 if (to->addr_len != from->addr_len) 859 859 return -EINVAL; 860 860 861 - netif_addr_lock(to); 861 + netif_addr_lock_nested(to); 862 862 err = __hw_addr_sync(&to->mc, &from->mc, to->addr_len); 863 863 if (!err) 864 864 __dev_set_rx_mode(to); ··· 888 888 if (to->addr_len != from->addr_len) 889 889 return -EINVAL; 890 890 891 - netif_addr_lock(to); 891 + netif_addr_lock_nested(to); 892 892 err = __hw_addr_sync_multiple(&to->mc, &from->mc, to->addr_len); 893 893 if (!err) 894 894 __dev_set_rx_mode(to); ··· 912 912 return; 913 913 914 914 netif_addr_lock_bh(from); 915 - netif_addr_lock(to); 915 + netif_addr_lock_nested(to); 916 916 __hw_addr_unsync(&to->mc, &from->mc, to->addr_len); 917 917 __dev_set_rx_mode(to); 918 918 netif_addr_unlock(to);
+9 -10
net/core/filter.c
··· 1755 1755 u32, offset, void *, to, u32, len, u32, start_header) 1756 1756 { 1757 1757 u8 *end = skb_tail_pointer(skb); 1758 - u8 *net = skb_network_header(skb); 1759 - u8 *mac = skb_mac_header(skb); 1760 - u8 *ptr; 1758 + u8 *start, *ptr; 1761 1759 1762 - if (unlikely(offset > 0xffff || len > (end - mac))) 1760 + if (unlikely(offset > 0xffff)) 1763 1761 goto err_clear; 1764 1762 1765 1763 switch (start_header) { 1766 1764 case BPF_HDR_START_MAC: 1767 - ptr = mac + offset; 1765 + if (unlikely(!skb_mac_header_was_set(skb))) 1766 + goto err_clear; 1767 + start = skb_mac_header(skb); 1768 1768 break; 1769 1769 case BPF_HDR_START_NET: 1770 - ptr = net + offset; 1770 + start = skb_network_header(skb); 1771 1771 break; 1772 1772 default: 1773 1773 goto err_clear; 1774 1774 } 1775 1775 1776 - if (likely(ptr >= mac && ptr + len <= end)) { 1776 + ptr = start + offset; 1777 + 1778 + if (likely(ptr + len <= end)) { 1777 1779 memcpy(to, ptr, len); 1778 1780 return 0; 1779 1781 } ··· 4342 4340 } 4343 4341 break; 4344 4342 case SO_BINDTODEVICE: 4345 - ret = -ENOPROTOOPT; 4346 - #ifdef CONFIG_NETDEVICES 4347 4343 optlen = min_t(long, optlen, IFNAMSIZ - 1); 4348 4344 strncpy(devname, optval, optlen); 4349 4345 devname[optlen] = 0; ··· 4360 4360 dev_put(dev); 4361 4361 } 4362 4362 ret = sock_bindtoindex(sk, ifindex, false); 4363 - #endif 4364 4363 break; 4365 4364 default: 4366 4365 ret = -EINVAL;
-1
net/core/rtnetlink.c
··· 2462 2462 err = ops->ndo_del_slave(upper_dev, dev); 2463 2463 if (err) 2464 2464 return err; 2465 - netdev_update_lockdep_key(dev); 2466 2465 } else { 2467 2466 return -EOPNOTSUPP; 2468 2467 }
+32 -6
net/core/sock_map.c
··· 424 424 return 0; 425 425 } 426 426 427 - static bool sock_map_redirect_allowed(const struct sock *sk) 428 - { 429 - return sk->sk_state != TCP_LISTEN; 430 - } 427 + static bool sock_map_redirect_allowed(const struct sock *sk); 431 428 432 429 static int sock_map_update_common(struct bpf_map *map, u32 idx, 433 430 struct sock *sk, u64 flags) ··· 503 506 { 504 507 return sk->sk_type == SOCK_DGRAM && 505 508 sk->sk_protocol == IPPROTO_UDP; 509 + } 510 + 511 + static bool sock_map_redirect_allowed(const struct sock *sk) 512 + { 513 + return sk_is_tcp(sk) && sk->sk_state != TCP_LISTEN; 506 514 } 507 515 508 516 static bool sock_map_sk_is_suitable(const struct sock *sk) ··· 991 989 err = -EINVAL; 992 990 goto free_htab; 993 991 } 992 + err = bpf_map_charge_init(&htab->map.memory, cost); 993 + if (err) 994 + goto free_htab; 994 995 995 996 htab->buckets = bpf_map_area_alloc(htab->buckets_num * 996 997 sizeof(struct bpf_htab_bucket), 997 998 htab->map.numa_node); 998 999 if (!htab->buckets) { 1000 + bpf_map_charge_finish(&htab->map.memory); 999 1001 err = -ENOMEM; 1000 1002 goto free_htab; 1001 1003 } ··· 1019 1013 { 1020 1014 struct bpf_htab *htab = container_of(map, struct bpf_htab, map); 1021 1015 struct bpf_htab_bucket *bucket; 1016 + struct hlist_head unlink_list; 1022 1017 struct bpf_htab_elem *elem; 1023 1018 struct hlist_node *node; 1024 1019 int i; ··· 1031 1024 synchronize_rcu(); 1032 1025 for (i = 0; i < htab->buckets_num; i++) { 1033 1026 bucket = sock_hash_select_bucket(htab, i); 1034 - hlist_for_each_entry_safe(elem, node, &bucket->head, node) { 1035 - hlist_del_rcu(&elem->node); 1027 + 1028 + /* We are racing with sock_hash_delete_from_link to 1029 + * enter the spin-lock critical section. Every socket on 1030 + * the list is still linked to sockhash. Since link 1031 + * exists, psock exists and holds a ref to socket. That 1032 + * lets us to grab a socket ref too. 1033 + */ 1034 + raw_spin_lock_bh(&bucket->lock); 1035 + hlist_for_each_entry(elem, &bucket->head, node) 1036 + sock_hold(elem->sk); 1037 + hlist_move_list(&bucket->head, &unlink_list); 1038 + raw_spin_unlock_bh(&bucket->lock); 1039 + 1040 + /* Process removed entries out of atomic context to 1041 + * block for socket lock before deleting the psock's 1042 + * link to sockhash. 1043 + */ 1044 + hlist_for_each_entry_safe(elem, node, &unlink_list, node) { 1045 + hlist_del(&elem->node); 1036 1046 lock_sock(elem->sk); 1037 1047 rcu_read_lock(); 1038 1048 sock_map_unref(elem->sk, elem); 1039 1049 rcu_read_unlock(); 1040 1050 release_sock(elem->sk); 1051 + sock_put(elem->sk); 1052 + sock_hash_free_elem(htab, elem); 1041 1053 } 1042 1054 } 1043 1055
+5 -2
net/dccp/proto.c
··· 1139 1139 inet_hashinfo_init(&dccp_hashinfo); 1140 1140 rc = inet_hashinfo2_init_mod(&dccp_hashinfo); 1141 1141 if (rc) 1142 - goto out_fail; 1142 + goto out_free_percpu; 1143 1143 rc = -ENOBUFS; 1144 1144 dccp_hashinfo.bind_bucket_cachep = 1145 1145 kmem_cache_create("dccp_bind_bucket", 1146 1146 sizeof(struct inet_bind_bucket), 0, 1147 1147 SLAB_HWCACHE_ALIGN, NULL); 1148 1148 if (!dccp_hashinfo.bind_bucket_cachep) 1149 - goto out_free_percpu; 1149 + goto out_free_hashinfo2; 1150 1150 1151 1151 /* 1152 1152 * Size and allocate the main established and bind bucket ··· 1242 1242 free_pages((unsigned long)dccp_hashinfo.ehash, ehash_order); 1243 1243 out_free_bind_bucket_cachep: 1244 1244 kmem_cache_destroy(dccp_hashinfo.bind_bucket_cachep); 1245 + out_free_hashinfo2: 1246 + inet_hashinfo2_free_mod(&dccp_hashinfo); 1245 1247 out_free_percpu: 1246 1248 percpu_counter_destroy(&dccp_orphan_count); 1247 1249 out_fail: ··· 1267 1265 kmem_cache_destroy(dccp_hashinfo.bind_bucket_cachep); 1268 1266 dccp_ackvec_exit(); 1269 1267 dccp_sysctl_exit(); 1268 + inet_hashinfo2_free_mod(&dccp_hashinfo); 1270 1269 percpu_counter_destroy(&dccp_orphan_count); 1271 1270 } 1272 1271
+4
net/dsa/master.c
··· 327 327 rtnl_unlock(); 328 328 } 329 329 330 + static struct lock_class_key dsa_master_addr_list_lock_key; 331 + 330 332 int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp) 331 333 { 332 334 int ret; ··· 347 345 wmb(); 348 346 349 347 dev->dsa_ptr = cpu_dp; 348 + lockdep_set_class(&dev->addr_list_lock, 349 + &dsa_master_addr_list_lock_key); 350 350 ret = dsa_master_ethtool_setup(dev); 351 351 if (ret) 352 352 return ret;
+49 -33
net/ipv4/nexthop.c
··· 247 247 if (nla_put_u32(skb, NHA_ID, nh->id)) 248 248 goto nla_put_failure; 249 249 250 - if (nh->is_fdb_nh && nla_put_flag(skb, NHA_FDB)) 251 - goto nla_put_failure; 252 - 253 250 if (nh->is_group) { 254 251 struct nh_group *nhg = rtnl_dereference(nh->nh_grp); 255 252 253 + if (nhg->fdb_nh && nla_put_flag(skb, NHA_FDB)) 254 + goto nla_put_failure; 256 255 if (nla_put_nh_group(skb, nhg)) 257 256 goto nla_put_failure; 258 257 goto out; ··· 263 264 if (nla_put_flag(skb, NHA_BLACKHOLE)) 264 265 goto nla_put_failure; 265 266 goto out; 266 - } else if (!nh->is_fdb_nh) { 267 + } else if (nhi->fdb_nh) { 268 + if (nla_put_flag(skb, NHA_FDB)) 269 + goto nla_put_failure; 270 + } else { 267 271 const struct net_device *dev; 268 272 269 273 dev = nhi->fib_nhc.nhc_dev; ··· 387 385 } 388 386 389 387 static bool valid_group_nh(struct nexthop *nh, unsigned int npaths, 390 - struct netlink_ext_ack *extack) 388 + bool *is_fdb, struct netlink_ext_ack *extack) 391 389 { 392 390 if (nh->is_group) { 393 391 struct nh_group *nhg = rtnl_dereference(nh->nh_grp); ··· 400 398 "Multipath group can not be a nexthop within a group"); 401 399 return false; 402 400 } 401 + *is_fdb = nhg->fdb_nh; 403 402 } else { 404 403 struct nh_info *nhi = rtnl_dereference(nh->nh_info); 405 404 ··· 409 406 "Blackhole nexthop can not be used in a group with more than 1 path"); 410 407 return false; 411 408 } 409 + *is_fdb = nhi->fdb_nh; 412 410 } 413 411 414 412 return true; ··· 420 416 { 421 417 struct nh_info *nhi; 422 418 423 - if (!nh->is_fdb_nh) { 419 + nhi = rtnl_dereference(nh->nh_info); 420 + 421 + if (!nhi->fdb_nh) { 424 422 NL_SET_ERR_MSG(extack, "FDB nexthop group can only have fdb nexthops"); 425 423 return -EINVAL; 426 424 } 427 425 428 - nhi = rtnl_dereference(nh->nh_info); 429 426 if (*nh_family == AF_UNSPEC) { 430 427 *nh_family = nhi->family; 431 428 } else if (*nh_family != nhi->family) { ··· 478 473 nhg = nla_data(tb[NHA_GROUP]); 479 474 for (i = 0; i < len; ++i) { 480 475 struct nexthop *nh; 476 + bool is_fdb_nh; 481 477 482 478 nh = nexthop_find_by_id(net, nhg[i].id); 483 479 if (!nh) { 484 480 NL_SET_ERR_MSG(extack, "Invalid nexthop id"); 485 481 return -EINVAL; 486 482 } 487 - if (!valid_group_nh(nh, len, extack)) 483 + if (!valid_group_nh(nh, len, &is_fdb_nh, extack)) 488 484 return -EINVAL; 489 485 490 486 if (nhg_fdb && nh_check_attr_fdb_group(nh, &nh_family, extack)) 491 487 return -EINVAL; 492 488 493 - if (!nhg_fdb && nh->is_fdb_nh) { 489 + if (!nhg_fdb && is_fdb_nh) { 494 490 NL_SET_ERR_MSG(extack, "Non FDB nexthop group cannot have fdb nexthops"); 495 491 return -EINVAL; 496 492 } ··· 559 553 if (hash > atomic_read(&nhge->upper_bound)) 560 554 continue; 561 555 562 - if (nhge->nh->is_fdb_nh) 556 + nhi = rcu_dereference(nhge->nh->nh_info); 557 + if (nhi->fdb_nh) 563 558 return nhge->nh; 564 559 565 560 /* nexthops always check if it is good and does 566 561 * not rely on a sysctl for this behavior 567 562 */ 568 - nhi = rcu_dereference(nhge->nh->nh_info); 569 563 switch (nhi->family) { 570 564 case AF_INET: 571 565 if (ipv4_good_nh(&nhi->fib_nh)) ··· 630 624 struct netlink_ext_ack *extack) 631 625 { 632 626 struct nh_info *nhi; 633 - 634 - if (nh->is_fdb_nh) { 635 - NL_SET_ERR_MSG(extack, "Route cannot point to a fdb nexthop"); 636 - return -EINVAL; 637 - } 627 + bool is_fdb_nh; 638 628 639 629 /* fib6_src is unique to a fib6_info and limits the ability to cache 640 630 * routes in fib6_nh within a nexthop that is potentially shared ··· 647 645 nhg = rtnl_dereference(nh->nh_grp); 648 646 if 
(nhg->has_v4) 649 647 goto no_v4_nh; 648 + is_fdb_nh = nhg->fdb_nh; 650 649 } else { 651 650 nhi = rtnl_dereference(nh->nh_info); 652 651 if (nhi->family == AF_INET) 653 652 goto no_v4_nh; 653 + is_fdb_nh = nhi->fdb_nh; 654 + } 655 + 656 + if (is_fdb_nh) { 657 + NL_SET_ERR_MSG(extack, "Route cannot point to a fdb nexthop"); 658 + return -EINVAL; 654 659 } 655 660 656 661 return 0; ··· 686 677 return fib6_check_nexthop(new, NULL, extack); 687 678 } 688 679 689 - static int nexthop_check_scope(struct nexthop *nh, u8 scope, 680 + static int nexthop_check_scope(struct nh_info *nhi, u8 scope, 690 681 struct netlink_ext_ack *extack) 691 682 { 692 - struct nh_info *nhi; 693 - 694 - nhi = rtnl_dereference(nh->nh_info); 695 683 if (scope == RT_SCOPE_HOST && nhi->fib_nhc.nhc_gw_family) { 696 684 NL_SET_ERR_MSG(extack, 697 685 "Route with host scope can not have a gateway"); ··· 710 704 int fib_check_nexthop(struct nexthop *nh, u8 scope, 711 705 struct netlink_ext_ack *extack) 712 706 { 707 + struct nh_info *nhi; 713 708 int err = 0; 714 - 715 - if (nh->is_fdb_nh) { 716 - NL_SET_ERR_MSG(extack, "Route cannot point to a fdb nexthop"); 717 - err = -EINVAL; 718 - goto out; 719 - } 720 709 721 710 if (nh->is_group) { 722 711 struct nh_group *nhg; 712 + 713 + nhg = rtnl_dereference(nh->nh_grp); 714 + if (nhg->fdb_nh) { 715 + NL_SET_ERR_MSG(extack, "Route cannot point to a fdb nexthop"); 716 + err = -EINVAL; 717 + goto out; 718 + } 723 719 724 720 if (scope == RT_SCOPE_HOST) { 725 721 NL_SET_ERR_MSG(extack, "Route with host scope can not have multiple nexthops"); ··· 729 721 goto out; 730 722 } 731 723 732 - nhg = rtnl_dereference(nh->nh_grp); 733 724 /* all nexthops in a group have the same scope */ 734 - err = nexthop_check_scope(nhg->nh_entries[0].nh, scope, extack); 725 + nhi = rtnl_dereference(nhg->nh_entries[0].nh->nh_info); 726 + err = nexthop_check_scope(nhi, scope, extack); 735 727 } else { 736 - err = nexthop_check_scope(nh, scope, extack); 728 + nhi = rtnl_dereference(nh->nh_info); 729 + if (nhi->fdb_nh) { 730 + NL_SET_ERR_MSG(extack, "Route cannot point to a fdb nexthop"); 731 + err = -EINVAL; 732 + goto out; 733 + } 734 + err = nexthop_check_scope(nhi, scope, extack); 737 735 } 736 + 738 737 out: 739 738 return err; 740 739 } ··· 802 787 803 788 newg->has_v4 = nhg->has_v4; 804 789 newg->mpath = nhg->mpath; 790 + newg->fdb_nh = nhg->fdb_nh; 805 791 newg->num_nh = nhg->num_nh; 806 792 807 793 /* copy old entries to new except the one getting removed */ ··· 1232 1216 } 1233 1217 1234 1218 if (cfg->nh_fdb) 1235 - nh->is_fdb_nh = 1; 1219 + nhg->fdb_nh = 1; 1236 1220 1237 1221 rcu_assign_pointer(nh->nh_grp, nhg); 1238 1222 ··· 1271 1255 goto out; 1272 1256 } 1273 1257 1274 - if (nh->is_fdb_nh) 1258 + if (nhi->fdb_nh) 1275 1259 goto out; 1276 1260 1277 1261 /* sets nh_dev if successful */ ··· 1342 1326 nhi->fib_nhc.nhc_scope = RT_SCOPE_LINK; 1343 1327 1344 1328 if (cfg->nh_fdb) 1345 - nh->is_fdb_nh = 1; 1329 + nhi->fdb_nh = 1; 1346 1330 1347 1331 if (cfg->nh_blackhole) { 1348 1332 nhi->reject_nh = 1; ··· 1365 1349 } 1366 1350 1367 1351 /* add the entry to the device based hash */ 1368 - if (!nh->is_fdb_nh) 1352 + if (!nhi->fdb_nh) 1369 1353 nexthop_devhash_add(net, nhi); 1370 1354 1371 1355 rcu_assign_pointer(nh->nh_info, nhi);
+63 -7
net/ipv4/tcp.c
··· 1742 1742 } 1743 1743 EXPORT_SYMBOL(tcp_mmap); 1744 1744 1745 + static int tcp_zerocopy_vm_insert_batch(struct vm_area_struct *vma, 1746 + struct page **pages, 1747 + unsigned long pages_to_map, 1748 + unsigned long *insert_addr, 1749 + u32 *length_with_pending, 1750 + u32 *seq, 1751 + struct tcp_zerocopy_receive *zc) 1752 + { 1753 + unsigned long pages_remaining = pages_to_map; 1754 + int bytes_mapped; 1755 + int ret; 1756 + 1757 + ret = vm_insert_pages(vma, *insert_addr, pages, &pages_remaining); 1758 + bytes_mapped = PAGE_SIZE * (pages_to_map - pages_remaining); 1759 + /* Even if vm_insert_pages fails, it may have partially succeeded in 1760 + * mapping (some but not all of the pages). 1761 + */ 1762 + *seq += bytes_mapped; 1763 + *insert_addr += bytes_mapped; 1764 + if (ret) { 1765 + /* But if vm_insert_pages did fail, we have to unroll some state 1766 + * we speculatively touched before. 1767 + */ 1768 + const int bytes_not_mapped = PAGE_SIZE * pages_remaining; 1769 + *length_with_pending -= bytes_not_mapped; 1770 + zc->recv_skip_hint += bytes_not_mapped; 1771 + } 1772 + return ret; 1773 + } 1774 + 1745 1775 static int tcp_zerocopy_receive(struct sock *sk, 1746 1776 struct tcp_zerocopy_receive *zc) 1747 1777 { 1748 1778 unsigned long address = (unsigned long)zc->address; 1749 1779 u32 length = 0, seq, offset, zap_len; 1780 + #define PAGE_BATCH_SIZE 8 1781 + struct page *pages[PAGE_BATCH_SIZE]; 1750 1782 const skb_frag_t *frags = NULL; 1751 1783 struct vm_area_struct *vma; 1752 1784 struct sk_buff *skb = NULL; 1785 + unsigned long pg_idx = 0; 1786 + unsigned long curr_addr; 1753 1787 struct tcp_sock *tp; 1754 1788 int inq; 1755 1789 int ret; ··· 1796 1762 1797 1763 sock_rps_record_flow(sk); 1798 1764 1765 + tp = tcp_sk(sk); 1766 + 1799 1767 mmap_read_lock(current->mm); 1800 1768 1801 1769 vma = find_vma(current->mm, address); ··· 1807 1771 } 1808 1772 zc->length = min_t(unsigned long, zc->length, vma->vm_end - address); 1809 1773 1810 - tp = tcp_sk(sk); 1811 1774 seq = tp->copied_seq; 1812 1775 inq = tcp_inq(sk); 1813 1776 zc->length = min_t(u32, zc->length, inq); ··· 1818 1783 zc->recv_skip_hint = zc->length; 1819 1784 } 1820 1785 ret = 0; 1786 + curr_addr = address; 1821 1787 while (length + PAGE_SIZE <= zc->length) { 1822 1788 if (zc->recv_skip_hint < PAGE_SIZE) { 1789 + /* If we're here, finish the current batch. 
*/ 1790 + if (pg_idx) { 1791 + ret = tcp_zerocopy_vm_insert_batch(vma, pages, 1792 + pg_idx, 1793 + &curr_addr, 1794 + &length, 1795 + &seq, zc); 1796 + if (ret) 1797 + goto out; 1798 + pg_idx = 0; 1799 + } 1823 1800 if (skb) { 1824 1801 if (zc->recv_skip_hint > 0) 1825 1802 break; ··· 1840 1793 } else { 1841 1794 skb = tcp_recv_skb(sk, seq, &offset); 1842 1795 } 1843 - 1844 1796 zc->recv_skip_hint = skb->len - offset; 1845 1797 offset -= skb_headlen(skb); 1846 1798 if ((int)offset < 0 || skb_has_frag_list(skb)) ··· 1863 1817 zc->recv_skip_hint -= remaining; 1864 1818 break; 1865 1819 } 1866 - ret = vm_insert_page(vma, address + length, 1867 - skb_frag_page(frags)); 1868 - if (ret) 1869 - break; 1820 + pages[pg_idx] = skb_frag_page(frags); 1821 + pg_idx++; 1870 1822 length += PAGE_SIZE; 1871 - seq += PAGE_SIZE; 1872 1823 zc->recv_skip_hint -= PAGE_SIZE; 1873 1824 frags++; 1825 + if (pg_idx == PAGE_BATCH_SIZE) { 1826 + ret = tcp_zerocopy_vm_insert_batch(vma, pages, pg_idx, 1827 + &curr_addr, &length, 1828 + &seq, zc); 1829 + if (ret) 1830 + goto out; 1831 + pg_idx = 0; 1832 + } 1833 + } 1834 + if (pg_idx) { 1835 + ret = tcp_zerocopy_vm_insert_batch(vma, pages, pg_idx, 1836 + &curr_addr, &length, &seq, 1837 + zc); 1874 1838 } 1875 1839 out: 1876 1840 mmap_read_unlock(current->mm);
+6
net/ipv4/tcp_bpf.c
··· 64 64 } while (i != msg_rx->sg.end); 65 65 66 66 if (unlikely(peek)) { 67 + if (msg_rx == list_last_entry(&psock->ingress_msg, 68 + struct sk_msg, list)) 69 + break; 67 70 msg_rx = list_next_entry(msg_rx, list); 68 71 continue; 69 72 } ··· 244 241 { 245 242 DEFINE_WAIT_FUNC(wait, woken_wake_function); 246 243 int ret = 0; 244 + 245 + if (sk->sk_shutdown & RCV_SHUTDOWN) 246 + return 1; 247 247 248 248 if (!timeo) 249 249 return ret;
+2
net/mac80211/mlme.c
··· 167 167 ret = IEEE80211_STA_DISABLE_HT | 168 168 IEEE80211_STA_DISABLE_VHT | 169 169 IEEE80211_STA_DISABLE_HE; 170 + else 171 + ret = 0; 170 172 vht_chandef = *chandef; 171 173 goto out; 172 174 }
+1 -1
net/mac80211/rx.c
··· 4694 4694 * rate_idx is MCS index, which can be [0-76] 4695 4695 * as documented on: 4696 4696 * 4697 - * http://wireless.kernel.org/en/developers/Documentation/ieee80211/802.11n 4697 + * https://wireless.wiki.kernel.org/en/developers/Documentation/ieee80211/802.11n 4698 4698 * 4699 4699 * Anything else would be some sort of driver or 4700 4700 * hardware error. The driver should catch hardware
+2
net/mptcp/options.c
··· 273 273 if (opsize != TCPOLEN_MPTCP_RM_ADDR_BASE) 274 274 break; 275 275 276 + ptr++; 277 + 276 278 mp_opt->rm_addr = 1; 277 279 mp_opt->rm_id = *ptr++; 278 280 pr_debug("RM_ADDR: id=%d", mp_opt->rm_id);
+24 -21
net/mptcp/protocol.c
··· 374 374 sock_hold(sk); 375 375 } 376 376 377 + static void mptcp_check_for_eof(struct mptcp_sock *msk) 378 + { 379 + struct mptcp_subflow_context *subflow; 380 + struct sock *sk = (struct sock *)msk; 381 + int receivers = 0; 382 + 383 + mptcp_for_each_subflow(msk, subflow) 384 + receivers += !subflow->rx_eof; 385 + 386 + if (!receivers && !(sk->sk_shutdown & RCV_SHUTDOWN)) { 387 + /* hopefully temporary hack: propagate shutdown status 388 + * to msk, when all subflows agree on it 389 + */ 390 + sk->sk_shutdown |= RCV_SHUTDOWN; 391 + 392 + smp_mb__before_atomic(); /* SHUTDOWN must be visible first */ 393 + set_bit(MPTCP_DATA_READY, &msk->flags); 394 + sk->sk_data_ready(sk); 395 + } 396 + } 397 + 377 398 static void mptcp_stop_timer(struct sock *sk) 378 399 { 379 400 struct inet_connection_sock *icsk = inet_csk(sk); ··· 1032 1011 break; 1033 1012 } 1034 1013 1014 + if (test_and_clear_bit(MPTCP_WORK_EOF, &msk->flags)) 1015 + mptcp_check_for_eof(msk); 1016 + 1035 1017 if (sk->sk_shutdown & RCV_SHUTDOWN) 1036 1018 break; 1037 1019 ··· 1170 1146 static unsigned int mptcp_sync_mss(struct sock *sk, u32 pmtu) 1171 1147 { 1172 1148 return 0; 1173 - } 1174 - 1175 - static void mptcp_check_for_eof(struct mptcp_sock *msk) 1176 - { 1177 - struct mptcp_subflow_context *subflow; 1178 - struct sock *sk = (struct sock *)msk; 1179 - int receivers = 0; 1180 - 1181 - mptcp_for_each_subflow(msk, subflow) 1182 - receivers += !subflow->rx_eof; 1183 - 1184 - if (!receivers && !(sk->sk_shutdown & RCV_SHUTDOWN)) { 1185 - /* hopefully temporary hack: propagate shutdown status 1186 - * to msk, when all subflows agree on it 1187 - */ 1188 - sk->sk_shutdown |= RCV_SHUTDOWN; 1189 - 1190 - smp_mb__before_atomic(); /* SHUTDOWN must be visible first */ 1191 - set_bit(MPTCP_DATA_READY, &msk->flags); 1192 - sk->sk_data_ready(sk); 1193 - } 1194 1149 } 1195 1150 1196 1151 static void mptcp_worker(struct work_struct *work)
+1
net/mptcp/subflow.c
··· 393 393 sock_orphan(sk); 394 394 } 395 395 396 + mptcp_token_destroy(mptcp_sk(sk)->token); 396 397 inet_sock_destruct(sk); 397 398 } 398 399
+12 -16
net/netlink/genetlink.c
··· 474 474 struct netlink_ext_ack *extack, 475 475 const struct genl_ops *ops, 476 476 int hdrlen, 477 - enum genl_validate_flags no_strict_flag, 478 - bool parallel) 477 + enum genl_validate_flags no_strict_flag) 479 478 { 480 479 enum netlink_validation validate = ops->validate & no_strict_flag ? 481 480 NL_VALIDATE_LIBERAL : ··· 485 486 if (!family->maxattr) 486 487 return NULL; 487 488 488 - if (parallel) { 489 + if (family->parallel_ops) { 489 490 attrbuf = kmalloc_array(family->maxattr + 1, 490 491 sizeof(struct nlattr *), GFP_KERNEL); 491 492 if (!attrbuf) ··· 497 498 err = __nlmsg_parse(nlh, hdrlen, attrbuf, family->maxattr, 498 499 family->policy, validate, extack); 499 500 if (err) { 500 - if (parallel) 501 + if (family->parallel_ops) 501 502 kfree(attrbuf); 502 503 return ERR_PTR(err); 503 504 } ··· 505 506 } 506 507 507 508 static void genl_family_rcv_msg_attrs_free(const struct genl_family *family, 508 - struct nlattr **attrbuf, 509 - bool parallel) 509 + struct nlattr **attrbuf) 510 510 { 511 - if (parallel) 511 + if (family->parallel_ops) 512 512 kfree(attrbuf); 513 513 } 514 514 ··· 535 537 536 538 attrs = genl_family_rcv_msg_attrs_parse(ctx->family, ctx->nlh, ctx->extack, 537 539 ops, ctx->hdrlen, 538 - GENL_DONT_VALIDATE_DUMP_STRICT, 539 - true); 540 + GENL_DONT_VALIDATE_DUMP_STRICT); 540 541 if (IS_ERR(attrs)) 541 542 return PTR_ERR(attrs); 542 543 543 544 no_attrs: 544 545 info = genl_dumpit_info_alloc(); 545 546 if (!info) { 546 - kfree(attrs); 547 + genl_family_rcv_msg_attrs_free(ctx->family, attrs); 547 548 return -ENOMEM; 548 549 } 549 550 info->family = ctx->family; ··· 559 562 } 560 563 561 564 if (rc) { 562 - kfree(attrs); 565 + genl_family_rcv_msg_attrs_free(info->family, info->attrs); 563 566 genl_dumpit_info_free(info); 564 567 cb->data = NULL; 565 568 } ··· 588 591 rc = ops->done(cb); 589 592 genl_unlock(); 590 593 } 591 - genl_family_rcv_msg_attrs_free(info->family, info->attrs, false); 594 + genl_family_rcv_msg_attrs_free(info->family, info->attrs); 592 595 genl_dumpit_info_free(info); 593 596 return rc; 594 597 } ··· 601 604 602 605 if (ops->done) 603 606 rc = ops->done(cb); 604 - genl_family_rcv_msg_attrs_free(info->family, info->attrs, true); 607 + genl_family_rcv_msg_attrs_free(info->family, info->attrs); 605 608 genl_dumpit_info_free(info); 606 609 return rc; 607 610 } ··· 668 671 669 672 attrbuf = genl_family_rcv_msg_attrs_parse(family, nlh, extack, 670 673 ops, hdrlen, 671 - GENL_DONT_VALIDATE_STRICT, 672 - family->parallel_ops); 674 + GENL_DONT_VALIDATE_STRICT); 673 675 if (IS_ERR(attrbuf)) 674 676 return PTR_ERR(attrbuf); 675 677 ··· 694 698 family->post_doit(ops, skb, &info); 695 699 696 700 out: 697 - genl_family_rcv_msg_attrs_free(family, attrbuf, family->parallel_ops); 701 + genl_family_rcv_msg_attrs_free(family, attrbuf); 698 702 699 703 return err; 700 704 }
+2
net/netrom/af_netrom.c
··· 70 70 * separate class since they always nest. 71 71 */ 72 72 static struct lock_class_key nr_netdev_xmit_lock_key; 73 + static struct lock_class_key nr_netdev_addr_lock_key; 73 74 74 75 static void nr_set_lockdep_one(struct net_device *dev, 75 76 struct netdev_queue *txq, ··· 81 80 82 81 static void nr_set_lockdep_key(struct net_device *dev) 83 82 { 83 + lockdep_set_class(&dev->addr_list_lock, &nr_netdev_addr_lock_key); 84 84 netdev_for_each_tx_queue(dev, nr_set_lockdep_one, NULL); 85 85 } 86 86
+2
net/rose/af_rose.c
··· 71 71 * separate class since they always nest. 72 72 */ 73 73 static struct lock_class_key rose_netdev_xmit_lock_key; 74 + static struct lock_class_key rose_netdev_addr_lock_key; 74 75 75 76 static void rose_set_lockdep_one(struct net_device *dev, 76 77 struct netdev_queue *txq, ··· 82 81 83 82 static void rose_set_lockdep_key(struct net_device *dev) 84 83 { 84 + lockdep_set_class(&dev->addr_list_lock, &rose_netdev_addr_lock_key); 85 85 netdev_for_each_tx_queue(dev, rose_set_lockdep_one, NULL); 86 86 } 87 87
+25 -94
net/rxrpc/ar-internal.h
··· 810 810 } 811 811 812 812 /* 813 - * Transition a call to the complete state. 814 - */ 815 - static inline bool __rxrpc_set_call_completion(struct rxrpc_call *call, 816 - enum rxrpc_call_completion compl, 817 - u32 abort_code, 818 - int error) 819 - { 820 - if (call->state < RXRPC_CALL_COMPLETE) { 821 - call->abort_code = abort_code; 822 - call->error = error; 823 - call->completion = compl, 824 - call->state = RXRPC_CALL_COMPLETE; 825 - trace_rxrpc_call_complete(call); 826 - wake_up(&call->waitq); 827 - return true; 828 - } 829 - return false; 830 - } 831 - 832 - static inline bool rxrpc_set_call_completion(struct rxrpc_call *call, 833 - enum rxrpc_call_completion compl, 834 - u32 abort_code, 835 - int error) 836 - { 837 - bool ret; 838 - 839 - write_lock_bh(&call->state_lock); 840 - ret = __rxrpc_set_call_completion(call, compl, abort_code, error); 841 - write_unlock_bh(&call->state_lock); 842 - return ret; 843 - } 844 - 845 - /* 846 - * Record that a call successfully completed. 847 - */ 848 - static inline bool __rxrpc_call_completed(struct rxrpc_call *call) 849 - { 850 - return __rxrpc_set_call_completion(call, RXRPC_CALL_SUCCEEDED, 0, 0); 851 - } 852 - 853 - static inline bool rxrpc_call_completed(struct rxrpc_call *call) 854 - { 855 - bool ret; 856 - 857 - write_lock_bh(&call->state_lock); 858 - ret = __rxrpc_call_completed(call); 859 - write_unlock_bh(&call->state_lock); 860 - return ret; 861 - } 862 - 863 - /* 864 - * Record that a call is locally aborted. 865 - */ 866 - static inline bool __rxrpc_abort_call(const char *why, struct rxrpc_call *call, 867 - rxrpc_seq_t seq, 868 - u32 abort_code, int error) 869 - { 870 - trace_rxrpc_abort(call->debug_id, why, call->cid, call->call_id, seq, 871 - abort_code, error); 872 - return __rxrpc_set_call_completion(call, RXRPC_CALL_LOCALLY_ABORTED, 873 - abort_code, error); 874 - } 875 - 876 - static inline bool rxrpc_abort_call(const char *why, struct rxrpc_call *call, 877 - rxrpc_seq_t seq, u32 abort_code, int error) 878 - { 879 - bool ret; 880 - 881 - write_lock_bh(&call->state_lock); 882 - ret = __rxrpc_abort_call(why, call, seq, abort_code, error); 883 - write_unlock_bh(&call->state_lock); 884 - return ret; 885 - } 886 - 887 - /* 888 - * Abort a call due to a protocol error. 
889 - */ 890 - static inline bool __rxrpc_abort_eproto(struct rxrpc_call *call, 891 - struct sk_buff *skb, 892 - const char *eproto_why, 893 - const char *why, 894 - u32 abort_code) 895 - { 896 - struct rxrpc_skb_priv *sp = rxrpc_skb(skb); 897 - 898 - trace_rxrpc_rx_eproto(call, sp->hdr.serial, eproto_why); 899 - return rxrpc_abort_call(why, call, sp->hdr.seq, abort_code, -EPROTO); 900 - } 901 - 902 - #define rxrpc_abort_eproto(call, skb, eproto_why, abort_why, abort_code) \ 903 - __rxrpc_abort_eproto((call), (skb), tracepoint_string(eproto_why), \ 904 - (abort_why), (abort_code)) 905 - 906 - /* 907 813 * conn_client.c 908 814 */ 909 815 extern unsigned int rxrpc_max_client_connections; ··· 1007 1101 * recvmsg.c 1008 1102 */ 1009 1103 void rxrpc_notify_socket(struct rxrpc_call *); 1104 + bool __rxrpc_set_call_completion(struct rxrpc_call *, enum rxrpc_call_completion, u32, int); 1105 + bool rxrpc_set_call_completion(struct rxrpc_call *, enum rxrpc_call_completion, u32, int); 1106 + bool __rxrpc_call_completed(struct rxrpc_call *); 1107 + bool rxrpc_call_completed(struct rxrpc_call *); 1108 + bool __rxrpc_abort_call(const char *, struct rxrpc_call *, rxrpc_seq_t, u32, int); 1109 + bool rxrpc_abort_call(const char *, struct rxrpc_call *, rxrpc_seq_t, u32, int); 1010 1110 int rxrpc_recvmsg(struct socket *, struct msghdr *, size_t, int); 1111 + 1112 + /* 1113 + * Abort a call due to a protocol error. 1114 + */ 1115 + static inline bool __rxrpc_abort_eproto(struct rxrpc_call *call, 1116 + struct sk_buff *skb, 1117 + const char *eproto_why, 1118 + const char *why, 1119 + u32 abort_code) 1120 + { 1121 + struct rxrpc_skb_priv *sp = rxrpc_skb(skb); 1122 + 1123 + trace_rxrpc_rx_eproto(call, sp->hdr.serial, eproto_why); 1124 + return rxrpc_abort_call(why, call, sp->hdr.seq, abort_code, -EPROTO); 1125 + } 1126 + 1127 + #define rxrpc_abort_eproto(call, skb, eproto_why, abort_why, abort_code) \ 1128 + __rxrpc_abort_eproto((call), (skb), tracepoint_string(eproto_why), \ 1129 + (abort_why), (abort_code)) 1011 1130 1012 1131 /* 1013 1132 * rtt.c
+11 -19
net/rxrpc/call_event.c
··· 248 248 if (anno_type != RXRPC_TX_ANNO_RETRANS) 249 249 continue; 250 250 251 + /* We need to reset the retransmission state, but we need to do 252 + * so before we drop the lock as a new ACK/NAK may come in and 253 + * confuse things 254 + */ 255 + annotation &= ~RXRPC_TX_ANNO_MASK; 256 + annotation |= RXRPC_TX_ANNO_RESENT; 257 + call->rxtx_annotations[ix] = annotation; 258 + 251 259 skb = call->rxtx_buffer[ix]; 260 + if (!skb) 261 + continue; 262 + 252 263 rxrpc_get_skb(skb, rxrpc_skb_got); 253 264 spin_unlock_bh(&call->lock); 254 265 ··· 273 262 274 263 rxrpc_free_skb(skb, rxrpc_skb_freed); 275 264 spin_lock_bh(&call->lock); 276 - 277 - /* We need to clear the retransmit state, but there are two 278 - * things we need to be aware of: A new ACK/NAK might have been 279 - * received and the packet might have been hard-ACK'd (in which 280 - * case it will no longer be in the buffer). 281 - */ 282 - if (after(seq, call->tx_hard_ack)) { 283 - annotation = call->rxtx_annotations[ix]; 284 - anno_type = annotation & RXRPC_TX_ANNO_MASK; 285 - if (anno_type == RXRPC_TX_ANNO_RETRANS || 286 - anno_type == RXRPC_TX_ANNO_NAK) { 287 - annotation &= ~RXRPC_TX_ANNO_MASK; 288 - annotation |= RXRPC_TX_ANNO_UNACK; 289 - } 290 - annotation |= RXRPC_TX_ANNO_RESENT; 291 - call->rxtx_annotations[ix] = annotation; 292 - } 293 - 294 265 if (after(call->tx_hard_ack, seq)) 295 266 seq = call->tx_hard_ack; 296 267 } ··· 313 320 314 321 if (call->state == RXRPC_CALL_COMPLETE) { 315 322 del_timer_sync(&call->timer); 316 - rxrpc_notify_socket(call); 317 323 goto out_put; 318 324 } 319 325
+3 -4
net/rxrpc/conn_event.c
··· 173 173 else 174 174 trace_rxrpc_rx_abort(call, serial, 175 175 conn->abort_code); 176 - if (rxrpc_set_call_completion(call, compl, 177 - conn->abort_code, 178 - conn->error)) 179 - rxrpc_notify_socket(call); 176 + rxrpc_set_call_completion(call, compl, 177 + conn->abort_code, 178 + conn->error); 180 179 } 181 180 } 182 181
+2 -5
net/rxrpc/input.c
··· 275 275 276 276 case RXRPC_CALL_SERVER_AWAIT_ACK: 277 277 __rxrpc_call_completed(call); 278 - rxrpc_notify_socket(call); 279 278 state = call->state; 280 279 break; 281 280 ··· 1012 1013 1013 1014 _proto("Rx ABORT %%%u { %x }", sp->hdr.serial, abort_code); 1014 1015 1015 - if (rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED, 1016 - abort_code, -ECONNABORTED)) 1017 - rxrpc_notify_socket(call); 1016 + rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED, 1017 + abort_code, -ECONNABORTED); 1018 1018 } 1019 1019 1020 1020 /* ··· 1100 1102 spin_lock(&rx->incoming_lock); 1101 1103 __rxrpc_disconnect_call(conn, call); 1102 1104 spin_unlock(&rx->incoming_lock); 1103 - rxrpc_notify_socket(call); 1104 1105 } 1105 1106 1106 1107 /*
+1 -3
net/rxrpc/peer_event.c
··· 292 292 293 293 hlist_for_each_entry_rcu(call, &peer->error_targets, error_link) { 294 294 rxrpc_see_call(call); 295 - if (call->state < RXRPC_CALL_COMPLETE && 296 - rxrpc_set_call_completion(call, compl, 0, -error)) 297 - rxrpc_notify_socket(call); 295 + rxrpc_set_call_completion(call, compl, 0, -error); 298 296 } 299 297 } 300 298
+79
net/rxrpc/recvmsg.c
··· 59 59 } 60 60 61 61 /* 62 + * Transition a call to the complete state. 63 + */ 64 + bool __rxrpc_set_call_completion(struct rxrpc_call *call, 65 + enum rxrpc_call_completion compl, 66 + u32 abort_code, 67 + int error) 68 + { 69 + if (call->state < RXRPC_CALL_COMPLETE) { 70 + call->abort_code = abort_code; 71 + call->error = error; 72 + call->completion = compl, 73 + call->state = RXRPC_CALL_COMPLETE; 74 + trace_rxrpc_call_complete(call); 75 + wake_up(&call->waitq); 76 + rxrpc_notify_socket(call); 77 + return true; 78 + } 79 + return false; 80 + } 81 + 82 + bool rxrpc_set_call_completion(struct rxrpc_call *call, 83 + enum rxrpc_call_completion compl, 84 + u32 abort_code, 85 + int error) 86 + { 87 + bool ret = false; 88 + 89 + if (call->state < RXRPC_CALL_COMPLETE) { 90 + write_lock_bh(&call->state_lock); 91 + ret = __rxrpc_set_call_completion(call, compl, abort_code, error); 92 + write_unlock_bh(&call->state_lock); 93 + } 94 + return ret; 95 + } 96 + 97 + /* 98 + * Record that a call successfully completed. 99 + */ 100 + bool __rxrpc_call_completed(struct rxrpc_call *call) 101 + { 102 + return __rxrpc_set_call_completion(call, RXRPC_CALL_SUCCEEDED, 0, 0); 103 + } 104 + 105 + bool rxrpc_call_completed(struct rxrpc_call *call) 106 + { 107 + bool ret = false; 108 + 109 + if (call->state < RXRPC_CALL_COMPLETE) { 110 + write_lock_bh(&call->state_lock); 111 + ret = __rxrpc_call_completed(call); 112 + write_unlock_bh(&call->state_lock); 113 + } 114 + return ret; 115 + } 116 + 117 + /* 118 + * Record that a call is locally aborted. 119 + */ 120 + bool __rxrpc_abort_call(const char *why, struct rxrpc_call *call, 121 + rxrpc_seq_t seq, u32 abort_code, int error) 122 + { 123 + trace_rxrpc_abort(call->debug_id, why, call->cid, call->call_id, seq, 124 + abort_code, error); 125 + return __rxrpc_set_call_completion(call, RXRPC_CALL_LOCALLY_ABORTED, 126 + abort_code, error); 127 + } 128 + 129 + bool rxrpc_abort_call(const char *why, struct rxrpc_call *call, 130 + rxrpc_seq_t seq, u32 abort_code, int error) 131 + { 132 + bool ret; 133 + 134 + write_lock_bh(&call->state_lock); 135 + ret = __rxrpc_abort_call(why, call, seq, abort_code, error); 136 + write_unlock_bh(&call->state_lock); 137 + return ret; 138 + } 139 + 140 + /* 62 141 * Pass a call terminating message to userspace. 63 142 */ 64 143 static int rxrpc_recvmsg_term(struct rxrpc_call *call, struct msghdr *msg)
+1 -3
net/rxrpc/sendmsg.c
··· 261 261 case -ENETUNREACH: 262 262 case -EHOSTUNREACH: 263 263 case -ECONNREFUSED: 264 - rxrpc_set_call_completion(call, 265 - RXRPC_CALL_LOCAL_ERROR, 264 + rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR, 266 265 0, ret); 267 - rxrpc_notify_socket(call); 268 266 goto out; 269 267 } 270 268 _debug("need instant resend %d", ret);
+1
net/sched/sch_generic.c
··· 464 464 dev_hold(dev); 465 465 } 466 466 } 467 + EXPORT_SYMBOL_GPL(__netdev_watchdog_up); 467 468 468 469 static void dev_watchdog_up(struct net_device *dev) 469 470 {
+1 -1
net/tipc/bearer.c
··· 316 316 b->domain = disc_domain; 317 317 b->net_plane = bearer_id + 'A'; 318 318 b->priority = prio; 319 - test_and_set_bit_lock(0, &b->up); 320 319 refcount_set(&b->refcnt, 1); 321 320 322 321 res = tipc_disc_create(net, b, &b->bcast_addr, &skb); ··· 325 326 goto rejected; 326 327 } 327 328 329 + test_and_set_bit_lock(0, &b->up); 328 330 rcu_assign_pointer(tn->bearer_list[bearer_id], b); 329 331 if (skb) 330 332 tipc_bearer_xmit_skb(net, bearer_id, skb, &b->bcast_addr);
+2 -2
net/tipc/msg.c
··· 238 238 hdr = buf_msg(skb); 239 239 curr = msg_blocks(hdr); 240 240 mlen = msg_size(hdr); 241 - cpy = min_t(int, rem, mss - mlen); 241 + cpy = min_t(size_t, rem, mss - mlen); 242 242 if (cpy != copy_from_iter(skb->data + mlen, cpy, &m->msg_iter)) 243 243 return -EFAULT; 244 244 msg_set_size(hdr, mlen + cpy); 245 245 skb_put(skb, cpy); 246 246 rem -= cpy; 247 247 total += msg_blocks(hdr) - curr; 248 - } while (rem); 248 + } while (rem > 0); 249 249 return total - accounted; 250 250 } 251 251
+2 -1
net/tipc/socket.c
··· 1574 1574 break; 1575 1575 send = min_t(size_t, dlen - sent, TIPC_MAX_USER_MSG_SIZE); 1576 1576 blocks = tsk->snd_backlog; 1577 - if (tsk->oneway++ >= tsk->nagle_start && send <= maxnagle) { 1577 + if (tsk->oneway++ >= tsk->nagle_start && maxnagle && 1578 + send <= maxnagle) { 1578 1579 rc = tipc_msg_append(hdr, m, send, maxnagle, txq); 1579 1580 if (unlikely(rc < 0)) 1580 1581 break;
+1 -1
net/wireless/Kconfig
··· 31 31 32 32 For more information refer to documentation on the wireless wiki: 33 33 34 - http://wireless.kernel.org/en/developers/Documentation/cfg80211 34 + https://wireless.wiki.kernel.org/en/developers/Documentation/cfg80211 35 35 36 36 When built as a module it will be called cfg80211. 37 37
+3 -3
net/wireless/core.c
··· 497 497 INIT_WORK(&rdev->propagate_radar_detect_wk, 498 498 cfg80211_propagate_radar_detect_wk); 499 499 INIT_WORK(&rdev->propagate_cac_done_wk, cfg80211_propagate_cac_done_wk); 500 + INIT_WORK(&rdev->mgmt_registrations_update_wk, 501 + cfg80211_mgmt_registrations_update_wk); 500 502 501 503 #ifdef CONFIG_CFG80211_DEFAULT_PS 502 504 rdev->wiphy.flags |= WIPHY_FLAG_PS_ON_BY_DEFAULT; ··· 1049 1047 flush_work(&rdev->sched_scan_stop_wk); 1050 1048 flush_work(&rdev->propagate_radar_detect_wk); 1051 1049 flush_work(&rdev->propagate_cac_done_wk); 1050 + flush_work(&rdev->mgmt_registrations_update_wk); 1052 1051 1053 1052 #ifdef CONFIG_PM 1054 1053 if (rdev->wiphy.wowlan_config && rdev->ops->set_wakeup) ··· 1111 1108 rdev->devlist_generation++; 1112 1109 1113 1110 cfg80211_mlme_purge_registrations(wdev); 1114 - flush_work(&wdev->mgmt_registrations_update_wk); 1115 1111 1116 1112 switch (wdev->iftype) { 1117 1113 case NL80211_IFTYPE_P2P_DEVICE: ··· 1255 1253 spin_lock_init(&wdev->event_lock); 1256 1254 INIT_LIST_HEAD(&wdev->mgmt_registrations); 1257 1255 spin_lock_init(&wdev->mgmt_registrations_lock); 1258 - INIT_WORK(&wdev->mgmt_registrations_update_wk, 1259 - cfg80211_mgmt_registrations_update_wk); 1260 1256 INIT_LIST_HEAD(&wdev->pmsr_list); 1261 1257 spin_lock_init(&wdev->pmsr_lock); 1262 1258 INIT_WORK(&wdev->pmsr_free_wk, cfg80211_pmsr_free_wk);
+2
net/wireless/core.h
··· 99 99 struct cfg80211_chan_def cac_done_chandef; 100 100 struct work_struct propagate_cac_done_wk; 101 101 102 + struct work_struct mgmt_registrations_update_wk; 103 + 102 104 /* must be last because of the way we do wiphy_priv(), 103 105 * and it should at least be aligned to NETDEV_ALIGN */ 104 106 struct wiphy wiphy __aligned(NETDEV_ALIGN);
+21 -5
net/wireless/mlme.c
··· 440 440 441 441 ASSERT_RTNL(); 442 442 443 + spin_lock_bh(&wdev->mgmt_registrations_lock); 444 + if (!wdev->mgmt_registrations_need_update) { 445 + spin_unlock_bh(&wdev->mgmt_registrations_lock); 446 + return; 447 + } 448 + 443 449 rcu_read_lock(); 444 450 list_for_each_entry_rcu(tmp, &rdev->wiphy.wdev_list, list) { 445 - list_for_each_entry_rcu(reg, &tmp->mgmt_registrations, list) { 451 + list_for_each_entry(reg, &tmp->mgmt_registrations, list) { 446 452 u32 mask = BIT(le16_to_cpu(reg->frame_type) >> 4); 447 453 u32 mcast_mask = 0; 448 454 ··· 466 460 } 467 461 rcu_read_unlock(); 468 462 463 + wdev->mgmt_registrations_need_update = 0; 464 + spin_unlock_bh(&wdev->mgmt_registrations_lock); 465 + 469 466 rdev_update_mgmt_frame_registrations(rdev, wdev, &upd); 470 467 } 471 468 472 469 void cfg80211_mgmt_registrations_update_wk(struct work_struct *wk) 473 470 { 474 - struct wireless_dev *wdev = container_of(wk, struct wireless_dev, 475 - mgmt_registrations_update_wk); 471 + struct cfg80211_registered_device *rdev; 472 + struct wireless_dev *wdev; 473 + 474 + rdev = container_of(wk, struct cfg80211_registered_device, 475 + mgmt_registrations_update_wk); 476 476 477 477 rtnl_lock(); 478 - cfg80211_mgmt_registrations_update(wdev); 478 + list_for_each_entry(wdev, &rdev->wiphy.wdev_list, list) 479 + cfg80211_mgmt_registrations_update(wdev); 479 480 rtnl_unlock(); 480 481 } 481 482 ··· 570 557 nreg->multicast_rx = multicast_rx; 571 558 list_add(&nreg->list, &wdev->mgmt_registrations); 572 559 } 560 + wdev->mgmt_registrations_need_update = 1; 573 561 spin_unlock_bh(&wdev->mgmt_registrations_lock); 574 562 575 563 cfg80211_mgmt_registrations_update(wdev); ··· 599 585 list_del(&reg->list); 600 586 kfree(reg); 601 587 602 - schedule_work(&wdev->mgmt_registrations_update_wk); 588 + wdev->mgmt_registrations_need_update = 1; 589 + schedule_work(&rdev->mgmt_registrations_update_wk); 603 590 } 604 591 605 592 spin_unlock_bh(&wdev->mgmt_registrations_lock); ··· 623 608 list_del(&reg->list); 624 609 kfree(reg); 625 610 } 611 + wdev->mgmt_registrations_need_update = 1; 626 612 spin_unlock_bh(&wdev->mgmt_registrations_lock); 627 613 628 614 cfg80211_mgmt_registrations_update(wdev);
+1 -3
net/xdp/xsk.c
··· 352 352 353 353 len = desc.len; 354 354 skb = sock_alloc_send_skb(sk, len, 1, &err); 355 - if (unlikely(!skb)) { 356 - err = -EAGAIN; 355 + if (unlikely(!skb)) 357 356 goto out; 358 - } 359 357 360 358 skb_put(skb, len); 361 359 addr = desc.addr;
-1
tools/bpf/Makefile
··· 3 3 4 4 prefix ?= /usr/local 5 5 6 - CC = gcc 7 6 LEX = flex 8 7 YACC = bison 9 8 MAKE = make
+6 -5
tools/bpf/bpftool/gen.c
··· 200 200 return err; 201 201 } 202 202 203 - static int codegen(const char *template, ...) 203 + static void codegen(const char *template, ...) 204 204 { 205 205 const char *src, *end; 206 206 int skip_tabs = 0, n; ··· 211 211 n = strlen(template); 212 212 s = malloc(n + 1); 213 213 if (!s) 214 - return -ENOMEM; 214 + exit(-1); 215 215 src = template; 216 216 dst = s; 217 217 ··· 224 224 } else { 225 225 p_err("unrecognized character at pos %td in template '%s'", 226 226 src - template - 1, template); 227 - return -EINVAL; 227 + free(s); 228 + exit(-1); 228 229 } 229 230 } 230 231 ··· 235 234 if (*src != '\t') { 236 235 p_err("not enough tabs at pos %td in template '%s'", 237 236 src - template - 1, template); 238 - return -EINVAL; 237 + free(s); 238 + exit(-1); 239 239 } 240 240 } 241 241 /* trim trailing whitespace */ ··· 257 255 va_end(args); 258 256 259 257 free(s); 260 - return n; 261 258 } 262 259 263 260 static int do_skeleton(int argc, char **argv)
+13
tools/include/uapi/linux/bpf.h
··· 3761 3761 __u32 egress_ifindex; /* txq->dev->ifindex */ 3762 3762 }; 3763 3763 3764 + /* DEVMAP map-value layout 3765 + * 3766 + * The struct data-layout of map-value is a configuration interface. 3767 + * New members can only be added to the end of this structure. 3768 + */ 3769 + struct bpf_devmap_val { 3770 + __u32 ifindex; /* device index */ 3771 + union { 3772 + int fd; /* prog fd on map write */ 3773 + __u32 id; /* prog id on map read */ 3774 + } bpf_prog; 3775 + }; 3776 + 3764 3777 enum sk_action { 3765 3778 SK_DROP = 0, 3766 3779 SK_PASS,
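The struct added above is the user-visible value layout for BPF_MAP_TYPE_DEVMAP entries: on update, user space supplies a device ifindex plus an optional program fd, and on lookup the kernel reports the attached program's id instead. Purely as an illustration (not part of this patch; the map and program fds below are hypothetical), a user-space update through libbpf could look roughly like this:

    #include <bpf/bpf.h>
    #include <linux/bpf.h>

    /* devmap_fd: fd of a BPF_MAP_TYPE_DEVMAP whose value_size is
     * sizeof(struct bpf_devmap_val); xdp_prog_fd: fd of a loaded XDP
     * program to run on the redirected frame (both are placeholders).
     */
    static int devmap_set_slot(int devmap_fd, __u32 slot,
                               __u32 ifindex, int xdp_prog_fd)
    {
            struct bpf_devmap_val val = {
                    .ifindex = ifindex,
                    .bpf_prog.fd = xdp_prog_fd, /* fd on write, id on read */
            };

            return bpf_map_update_elem(devmap_fd, &slot, &val, BPF_ANY);
    }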
+24 -9
tools/lib/bpf/btf_dump.c
··· 1137 1137 } 1138 1138 } 1139 1139 1140 + static void btf_dump_drop_mods(struct btf_dump *d, struct id_stack *decl_stack) 1141 + { 1142 + const struct btf_type *t; 1143 + __u32 id; 1144 + 1145 + while (decl_stack->cnt) { 1146 + id = decl_stack->ids[decl_stack->cnt - 1]; 1147 + t = btf__type_by_id(d->btf, id); 1148 + if (!btf_is_mod(t)) 1149 + return; 1150 + decl_stack->cnt--; 1151 + } 1152 + } 1153 + 1140 1154 static void btf_dump_emit_name(const struct btf_dump *d, 1141 1155 const char *name, bool last_was_ptr) 1142 1156 { ··· 1249 1235 * a const/volatile modifier for array, so we are 1250 1236 * going to silently skip them here. 1251 1237 */ 1252 - while (decls->cnt) { 1253 - next_id = decls->ids[decls->cnt - 1]; 1254 - next_t = btf__type_by_id(d->btf, next_id); 1255 - if (btf_is_mod(next_t)) 1256 - decls->cnt--; 1257 - else 1258 - break; 1259 - } 1238 + btf_dump_drop_mods(d, decls); 1260 1239 1261 1240 if (decls->cnt == 0) { 1262 1241 btf_dump_emit_name(d, fname, last_was_ptr); ··· 1277 1270 __u16 vlen = btf_vlen(t); 1278 1271 int i; 1279 1272 1280 - btf_dump_emit_mods(d, decls); 1273 + /* 1274 + * GCC emits extra volatile qualifier for 1275 + * __attribute__((noreturn)) function pointers. Clang 1276 + * doesn't do it. It's a GCC quirk for backwards 1277 + * compatibility with code written for GCC <2.5. So, 1278 + * similarly to extra qualifiers for array, just drop 1279 + * them, instead of handling them. 1280 + */ 1281 + btf_dump_drop_mods(d, decls); 1281 1282 if (decls->cnt) { 1282 1283 btf_dump_printf(d, " ("); 1283 1284 btf_dump_emit_type_chain(d, decls, fname, lvl);
+3 -4
tools/lib/bpf/hashmap.h
··· 10 10 11 11 #include <stdbool.h> 12 12 #include <stddef.h> 13 - #ifdef __GLIBC__ 14 - #include <bits/wordsize.h> 15 - #else 16 - #include <bits/reg.h> 13 + #include <limits.h> 14 + #ifndef __WORDSIZE 15 + #define __WORDSIZE (__SIZEOF_LONG__ * 8) 17 16 #endif 18 17 19 18 static inline size_t hash_bits(size_t h, int bits)
-4
tools/lib/bpf/libbpf.c
··· 3564 3564 char *cp, errmsg[STRERR_BUFSIZE]; 3565 3565 int err, zero = 0; 3566 3566 3567 - /* kernel already zero-initializes .bss map. */ 3568 - if (map_type == LIBBPF_MAP_BSS) 3569 - return 0; 3570 - 3571 3567 err = bpf_map_update_elem(map->fd, &zero, map->mmaped, 0); 3572 3568 if (err) { 3573 3569 err = -errno;
+7
tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c
··· 230 230 "prog_replace", "errno=%d\n", errno)) 231 231 goto err; 232 232 233 + /* replace program with itself */ 234 + attach_opts.replace_prog_fd = allow_prog[6]; 235 + if (CHECK(bpf_prog_attach_xattr(allow_prog[6], cg1, 236 + BPF_CGROUP_INET_EGRESS, &attach_opts), 237 + "prog_replace", "errno=%d\n", errno)) 238 + goto err; 239 + 233 240 value = 0; 234 241 CHECK_FAIL(bpf_map_update_elem(map_fd, &key, &value, 0)); 235 242 CHECK_FAIL(system(PING_CMD));
+71
tools/testing/selftests/bpf/prog_tests/load_bytes_relative.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + 3 + /* 4 + * Copyright 2020 Google LLC. 5 + */ 6 + 7 + #include <test_progs.h> 8 + #include <network_helpers.h> 9 + 10 + void test_load_bytes_relative(void) 11 + { 12 + int server_fd, cgroup_fd, prog_fd, map_fd, client_fd; 13 + int err; 14 + struct bpf_object *obj; 15 + struct bpf_program *prog; 16 + struct bpf_map *test_result; 17 + __u32 duration = 0; 18 + 19 + __u32 map_key = 0; 20 + __u32 map_value = 0; 21 + 22 + cgroup_fd = test__join_cgroup("/load_bytes_relative"); 23 + if (CHECK_FAIL(cgroup_fd < 0)) 24 + return; 25 + 26 + server_fd = start_server(AF_INET, SOCK_STREAM); 27 + if (CHECK_FAIL(server_fd < 0)) 28 + goto close_cgroup_fd; 29 + 30 + err = bpf_prog_load("./load_bytes_relative.o", BPF_PROG_TYPE_CGROUP_SKB, 31 + &obj, &prog_fd); 32 + if (CHECK_FAIL(err)) 33 + goto close_server_fd; 34 + 35 + test_result = bpf_object__find_map_by_name(obj, "test_result"); 36 + if (CHECK_FAIL(!test_result)) 37 + goto close_bpf_object; 38 + 39 + map_fd = bpf_map__fd(test_result); 40 + if (map_fd < 0) 41 + goto close_bpf_object; 42 + 43 + prog = bpf_object__find_program_by_name(obj, "load_bytes_relative"); 44 + if (CHECK_FAIL(!prog)) 45 + goto close_bpf_object; 46 + 47 + err = bpf_prog_attach(prog_fd, cgroup_fd, BPF_CGROUP_INET_EGRESS, 48 + BPF_F_ALLOW_MULTI); 49 + if (CHECK_FAIL(err)) 50 + goto close_bpf_object; 51 + 52 + client_fd = connect_to_fd(AF_INET, SOCK_STREAM, server_fd); 53 + if (CHECK_FAIL(client_fd < 0)) 54 + goto close_bpf_object; 55 + close(client_fd); 56 + 57 + err = bpf_map_lookup_elem(map_fd, &map_key, &map_value); 58 + if (CHECK_FAIL(err)) 59 + goto close_bpf_object; 60 + 61 + CHECK(map_value != 1, "bpf", "bpf program returned failure"); 62 + 63 + close_bpf_object: 64 + bpf_object__close(obj); 65 + 66 + close_server_fd: 67 + close(server_fd); 68 + 69 + close_cgroup_fd: 70 + close(cgroup_fd); 71 + }
+35 -7
tools/testing/selftests/bpf/prog_tests/ringbuf.c
··· 25 25 char comm[16]; 26 26 }; 27 27 28 - static volatile int sample_cnt; 28 + static int sample_cnt; 29 + 30 + static void atomic_inc(int *cnt) 31 + { 32 + __atomic_add_fetch(cnt, 1, __ATOMIC_SEQ_CST); 33 + } 34 + 35 + static int atomic_xchg(int *cnt, int val) 36 + { 37 + return __atomic_exchange_n(cnt, val, __ATOMIC_SEQ_CST); 38 + } 29 39 30 40 static int process_sample(void *ctx, void *data, size_t len) 31 41 { 32 42 struct sample *s = data; 33 43 34 - sample_cnt++; 44 + atomic_inc(&sample_cnt); 35 45 36 46 switch (s->seq) { 37 47 case 0: ··· 86 76 const size_t rec_sz = BPF_RINGBUF_HDR_SZ + sizeof(struct sample); 87 77 pthread_t thread; 88 78 long bg_ret = -1; 89 - int err; 79 + int err, cnt; 90 80 91 81 skel = test_ringbuf__open_and_load(); 92 82 if (CHECK(!skel, "skel_open_load", "skeleton open&load failed\n")) ··· 126 116 /* -EDONE is used as an indicator that we are done */ 127 117 if (CHECK(err != -EDONE, "err_done", "done err: %d\n", err)) 128 118 goto cleanup; 119 + cnt = atomic_xchg(&sample_cnt, 0); 120 + CHECK(cnt != 2, "cnt", "exp %d samples, got %d\n", 2, cnt); 129 121 130 122 /* we expect extra polling to return nothing */ 131 123 err = ring_buffer__poll(ringbuf, 0); 132 124 if (CHECK(err != 0, "extra_samples", "poll result: %d\n", err)) 133 125 goto cleanup; 126 + cnt = atomic_xchg(&sample_cnt, 0); 127 + CHECK(cnt != 0, "cnt", "exp %d samples, got %d\n", 0, cnt); 134 128 135 129 CHECK(skel->bss->dropped != 0, "err_dropped", "exp %ld, got %ld\n", 136 130 0L, skel->bss->dropped); ··· 150 136 3L * rec_sz, skel->bss->cons_pos); 151 137 err = ring_buffer__poll(ringbuf, -1); 152 138 CHECK(err <= 0, "poll_err", "err %d\n", err); 139 + cnt = atomic_xchg(&sample_cnt, 0); 140 + CHECK(cnt != 2, "cnt", "exp %d samples, got %d\n", 2, cnt); 153 141 154 142 /* start poll in background w/ long timeout */ 155 143 err = pthread_create(&thread, NULL, poll_thread, (void *)(long)10000); ··· 180 164 2L, skel->bss->total); 181 165 CHECK(skel->bss->discarded != 1, "err_discarded", "exp %ld, got %ld\n", 182 166 1L, skel->bss->discarded); 167 + cnt = atomic_xchg(&sample_cnt, 0); 168 + CHECK(cnt != 0, "cnt", "exp %d samples, got %d\n", 0, cnt); 183 169 184 170 /* clear flags to return to "adaptive" notification mode */ 185 171 skel->bss->flags = 0; ··· 196 178 if (CHECK(err != EBUSY, "try_join", "err %d\n", err)) 197 179 goto cleanup; 198 180 181 + /* still no samples, because consumer is behind */ 182 + cnt = atomic_xchg(&sample_cnt, 0); 183 + CHECK(cnt != 0, "cnt", "exp %d samples, got %d\n", 0, cnt); 184 + 185 + skel->bss->dropped = 0; 186 + skel->bss->total = 0; 187 + skel->bss->discarded = 0; 188 + 189 + skel->bss->value = 333; 190 + syscall(__NR_getpgid); 199 191 /* now force notifications */ 200 192 skel->bss->flags = BPF_RB_FORCE_WAKEUP; 201 - sample_cnt = 0; 202 - trigger_samples(); 193 + skel->bss->value = 777; 194 + syscall(__NR_getpgid); 203 195 204 196 /* now we should get a pending notification */ 205 197 usleep(50000); ··· 221 193 goto cleanup; 222 194 223 195 /* 3 rounds, 2 samples each */ 224 - CHECK(sample_cnt != 6, "wrong_sample_cnt", 225 - "expected to see %d samples, got %d\n", 6, sample_cnt); 196 + cnt = atomic_xchg(&sample_cnt, 0); 197 + CHECK(cnt != 6, "cnt", "exp %d samples, got %d\n", 6, cnt); 226 198 227 199 /* BPF side did everything right */ 228 200 CHECK(skel->bss->dropped != 0, "err_dropped", "exp %ld, got %ld\n",
+40 -5
tools/testing/selftests/bpf/prog_tests/skeleton.c
··· 15 15 int duration = 0, err; 16 16 struct test_skeleton* skel; 17 17 struct test_skeleton__bss *bss; 18 + struct test_skeleton__data *data; 19 + struct test_skeleton__rodata *rodata; 18 20 struct test_skeleton__kconfig *kcfg; 19 21 20 22 skel = test_skeleton__open(); ··· 26 24 if (CHECK(skel->kconfig, "skel_kconfig", "kconfig is mmaped()!\n")) 27 25 goto cleanup; 28 26 27 + bss = skel->bss; 28 + data = skel->data; 29 + rodata = skel->rodata; 30 + 31 + /* validate values are pre-initialized correctly */ 32 + CHECK(data->in1 != -1, "in1", "got %d != exp %d\n", data->in1, -1); 33 + CHECK(data->out1 != -1, "out1", "got %d != exp %d\n", data->out1, -1); 34 + CHECK(data->in2 != -1, "in2", "got %lld != exp %lld\n", data->in2, -1LL); 35 + CHECK(data->out2 != -1, "out2", "got %lld != exp %lld\n", data->out2, -1LL); 36 + 37 + CHECK(bss->in3 != 0, "in3", "got %d != exp %d\n", bss->in3, 0); 38 + CHECK(bss->out3 != 0, "out3", "got %d != exp %d\n", bss->out3, 0); 39 + CHECK(bss->in4 != 0, "in4", "got %lld != exp %lld\n", bss->in4, 0LL); 40 + CHECK(bss->out4 != 0, "out4", "got %lld != exp %lld\n", bss->out4, 0LL); 41 + 42 + CHECK(rodata->in6 != 0, "in6", "got %d != exp %d\n", rodata->in6, 0); 43 + CHECK(bss->out6 != 0, "out6", "got %d != exp %d\n", bss->out6, 0); 44 + 45 + /* validate we can pre-setup global variables, even in .bss */ 46 + data->in1 = 10; 47 + data->in2 = 11; 48 + bss->in3 = 12; 49 + bss->in4 = 13; 50 + rodata->in6 = 14; 51 + 29 52 err = test_skeleton__load(skel); 30 53 if (CHECK(err, "skel_load", "failed to load skeleton: %d\n", err)) 31 54 goto cleanup; 32 55 33 - bss = skel->bss; 34 - bss->in1 = 1; 35 - bss->in2 = 2; 56 + /* validate pre-setup values are still there */ 57 + CHECK(data->in1 != 10, "in1", "got %d != exp %d\n", data->in1, 10); 58 + CHECK(data->in2 != 11, "in2", "got %lld != exp %lld\n", data->in2, 11LL); 59 + CHECK(bss->in3 != 12, "in3", "got %d != exp %d\n", bss->in3, 12); 60 + CHECK(bss->in4 != 13, "in4", "got %lld != exp %lld\n", bss->in4, 13LL); 61 + CHECK(rodata->in6 != 14, "in6", "got %d != exp %d\n", rodata->in6, 14); 62 + 63 + /* now set new values and attach to get them into outX variables */ 64 + data->in1 = 1; 65 + data->in2 = 2; 36 66 bss->in3 = 3; 37 67 bss->in4 = 4; 38 68 bss->in5.a = 5; ··· 78 44 /* trigger tracepoint */ 79 45 usleep(1); 80 46 81 - CHECK(bss->out1 != 1, "res1", "got %d != exp %d\n", bss->out1, 1); 82 - CHECK(bss->out2 != 2, "res2", "got %lld != exp %d\n", bss->out2, 2); 47 + CHECK(data->out1 != 1, "res1", "got %d != exp %d\n", data->out1, 1); 48 + CHECK(data->out2 != 2, "res2", "got %lld != exp %d\n", data->out2, 2); 83 49 CHECK(bss->out3 != 3, "res3", "got %d != exp %d\n", (int)bss->out3, 3); 84 50 CHECK(bss->out4 != 4, "res4", "got %lld != exp %d\n", bss->out4, 4); 85 51 CHECK(bss->handler_out5.a != 5, "res5", "got %d != exp %d\n", 86 52 bss->handler_out5.a, 5); 87 53 CHECK(bss->handler_out5.b != 6, "res6", "got %lld != exp %d\n", 88 54 bss->handler_out5.b, 6); 55 + CHECK(bss->out6 != 14, "res7", "got %d != exp %d\n", bss->out6, 14); 89 56 90 57 CHECK(bss->bpf_syscall != kcfg->CONFIG_BPF_SYSCALL, "ext1", 91 58 "got %d != exp %d\n", bss->bpf_syscall, kcfg->CONFIG_BPF_SYSCALL);
-8
tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c
··· 8 8 9 9 #define IFINDEX_LO 1 10 10 11 - struct bpf_devmap_val { 12 - u32 ifindex; /* device index */ 13 - union { 14 - int fd; /* prog fd on map write */ 15 - u32 id; /* prog id on map read */ 16 - } bpf_prog; 17 - }; 18 - 19 11 void test_xdp_with_devmap_helpers(void) 20 12 { 21 13 struct test_xdp_with_devmap_helpers *skel;
+48
tools/testing/selftests/bpf/progs/load_bytes_relative.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + 3 + /* 4 + * Copyright 2020 Google LLC. 5 + */ 6 + 7 + #include <errno.h> 8 + #include <linux/bpf.h> 9 + #include <linux/if_ether.h> 10 + #include <linux/ip.h> 11 + #include <bpf/bpf_helpers.h> 12 + 13 + struct { 14 + __uint(type, BPF_MAP_TYPE_ARRAY); 15 + __uint(max_entries, 1); 16 + __type(key, __u32); 17 + __type(value, __u32); 18 + } test_result SEC(".maps"); 19 + 20 + SEC("cgroup_skb/egress") 21 + int load_bytes_relative(struct __sk_buff *skb) 22 + { 23 + struct ethhdr eth; 24 + struct iphdr iph; 25 + 26 + __u32 map_key = 0; 27 + __u32 test_passed = 0; 28 + 29 + /* MAC header is not set by the time cgroup_skb/egress triggers */ 30 + if (bpf_skb_load_bytes_relative(skb, 0, &eth, sizeof(eth), 31 + BPF_HDR_START_MAC) != -EFAULT) 32 + goto fail; 33 + 34 + if (bpf_skb_load_bytes_relative(skb, 0, &iph, sizeof(iph), 35 + BPF_HDR_START_NET)) 36 + goto fail; 37 + 38 + if (bpf_skb_load_bytes_relative(skb, 0xffff, &iph, sizeof(iph), 39 + BPF_HDR_START_NET) != -EFAULT) 40 + goto fail; 41 + 42 + test_passed = 1; 43 + 44 + fail: 45 + bpf_map_update_elem(&test_result, &map_key, &test_passed, BPF_ANY); 46 + 47 + return 1; 48 + }
+15 -4
tools/testing/selftests/bpf/progs/test_skeleton.c
··· 10 10 long long b; 11 11 } __attribute__((packed)); 12 12 13 - int in1 = 0; 14 - long long in2 = 0; 13 + /* .data section */ 14 + int in1 = -1; 15 + long long in2 = -1; 16 + 17 + /* .bss section */ 15 18 char in3 = '\0'; 16 19 long long in4 __attribute__((aligned(64))) = 0; 17 20 struct s in5 = {}; 18 21 19 - long long out2 = 0; 22 + /* .rodata section */ 23 + const volatile int in6 = 0; 24 + 25 + /* .data section */ 26 + int out1 = -1; 27 + long long out2 = -1; 28 + 29 + /* .bss section */ 20 30 char out3 = 0; 21 31 long long out4 = 0; 22 - int out1 = 0; 32 + int out6 = 0; 23 33 24 34 extern bool CONFIG_BPF_SYSCALL __kconfig; 25 35 extern int LINUX_KERNEL_VERSION __kconfig; ··· 46 36 out3 = in3; 47 37 out4 = in4; 48 38 out5 = in5; 39 + out6 = in6; 49 40 50 41 bpf_syscall = CONFIG_BPF_SYSCALL; 51 42 kern_ver = LINUX_KERNEL_VERSION;
+1 -1
tools/testing/selftests/bpf/progs/test_xdp_devmap_helpers.c
··· 2 2 /* fails to load without expected_attach_type = BPF_XDP_DEVMAP 3 3 * because of access to egress_ifindex 4 4 */ 5 - #include "vmlinux.h" 5 + #include <linux/bpf.h> 6 6 #include <bpf/bpf_helpers.h> 7 7 8 8 SEC("xdp_dm_log")
+1 -2
tools/testing/selftests/bpf/progs/test_xdp_with_devmap_helpers.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - 3 - #include "vmlinux.h" 2 + #include <linux/bpf.h> 4 3 #include <bpf/bpf_helpers.h> 5 4 6 5 struct {
+1
tools/testing/selftests/net/rxtimestamp.c
··· 115 115 { "tcp", no_argument, 0, 't' }, 116 116 { "udp", no_argument, 0, 'u' }, 117 117 { "ip", no_argument, 0, 'i' }, 118 + { NULL, 0, NULL, 0 }, 118 119 }; 119 120 120 121 static int next_port = 19999;
+8 -2
tools/testing/selftests/net/timestamping.c
··· 313 313 int val; 314 314 socklen_t len; 315 315 struct timeval next; 316 + size_t if_len; 316 317 317 318 if (argc < 2) 318 319 usage(0); 319 320 interface = argv[1]; 321 + if_len = strlen(interface); 322 + if (if_len >= IFNAMSIZ) { 323 + printf("interface name exceeds IFNAMSIZ\n"); 324 + exit(1); 325 + } 320 326 321 327 for (i = 2; i < argc; i++) { 322 328 if (!strcasecmp(argv[i], "SO_TIMESTAMP")) ··· 356 350 bail("socket"); 357 351 358 352 memset(&device, 0, sizeof(device)); 359 - strncpy(device.ifr_name, interface, sizeof(device.ifr_name)); 353 + memcpy(device.ifr_name, interface, if_len + 1); 360 354 if (ioctl(sock, SIOCGIFADDR, &device) < 0) 361 355 bail("getting interface IP address"); 362 356 363 357 memset(&hwtstamp, 0, sizeof(hwtstamp)); 364 - strncpy(hwtstamp.ifr_name, interface, sizeof(hwtstamp.ifr_name)); 358 + memcpy(hwtstamp.ifr_name, interface, if_len + 1); 365 359 hwtstamp.ifr_data = (void *)&hwconfig; 366 360 memset(&hwconfig, 0, sizeof(hwconfig)); 367 361 hwconfig.tx_type =
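The replaced strncpy() calls could leave ifr_name without a terminating NUL when the interface name fills the whole IFNAMSIZ buffer (newer GCC warns about exactly that with -Wstringop-truncation); checking the length up front and copying if_len + 1 bytes brings the terminator along explicitly. A small self-contained sketch of the same idea (the helper name is made up for illustration):

    #include <string.h>
    #include <net/if.h> /* IFNAMSIZ */

    /* Copy an interface name into an ifreq-style buffer, rejecting
     * anything that would not fit including the terminating NUL.
     */
    static int copy_ifname(char dst[IFNAMSIZ], const char *src)
    {
            size_t len = strlen(src);

            if (len >= IFNAMSIZ)
                    return -1;         /* would be silently truncated */
            memcpy(dst, src, len + 1); /* copy the NUL as well */
            return 0;
    }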
+58
tools/testing/selftests/net/tls.c
··· 213 213 EXPECT_EQ(recv(self->cfd, buf, st.st_size, MSG_WAITALL), st.st_size); 214 214 } 215 215 216 + static void chunked_sendfile(struct __test_metadata *_metadata, 217 + struct _test_data_tls *self, 218 + uint16_t chunk_size, 219 + uint16_t extra_payload_size) 220 + { 221 + char buf[TLS_PAYLOAD_MAX_LEN]; 222 + uint16_t test_payload_size; 223 + int size = 0; 224 + int ret; 225 + char filename[] = "/tmp/mytemp.XXXXXX"; 226 + int fd = mkstemp(filename); 227 + off_t offset = 0; 228 + 229 + unlink(filename); 230 + ASSERT_GE(fd, 0); 231 + EXPECT_GE(chunk_size, 1); 232 + test_payload_size = chunk_size + extra_payload_size; 233 + ASSERT_GE(TLS_PAYLOAD_MAX_LEN, test_payload_size); 234 + memset(buf, 1, test_payload_size); 235 + size = write(fd, buf, test_payload_size); 236 + EXPECT_EQ(size, test_payload_size); 237 + fsync(fd); 238 + 239 + while (size > 0) { 240 + ret = sendfile(self->fd, fd, &offset, chunk_size); 241 + EXPECT_GE(ret, 0); 242 + size -= ret; 243 + } 244 + 245 + EXPECT_EQ(recv(self->cfd, buf, test_payload_size, MSG_WAITALL), 246 + test_payload_size); 247 + 248 + close(fd); 249 + } 250 + 251 + TEST_F(tls, multi_chunk_sendfile) 252 + { 253 + chunked_sendfile(_metadata, self, 4096, 4096); 254 + chunked_sendfile(_metadata, self, 4096, 0); 255 + chunked_sendfile(_metadata, self, 4096, 1); 256 + chunked_sendfile(_metadata, self, 4096, 2048); 257 + chunked_sendfile(_metadata, self, 8192, 2048); 258 + chunked_sendfile(_metadata, self, 4096, 8192); 259 + chunked_sendfile(_metadata, self, 8192, 4096); 260 + chunked_sendfile(_metadata, self, 12288, 1024); 261 + chunked_sendfile(_metadata, self, 12288, 2000); 262 + chunked_sendfile(_metadata, self, 15360, 100); 263 + chunked_sendfile(_metadata, self, 15360, 300); 264 + chunked_sendfile(_metadata, self, 1, 4096); 265 + chunked_sendfile(_metadata, self, 2048, 4096); 266 + chunked_sendfile(_metadata, self, 2048, 8192); 267 + chunked_sendfile(_metadata, self, 4096, 8192); 268 + chunked_sendfile(_metadata, self, 1024, 12288); 269 + chunked_sendfile(_metadata, self, 2000, 12288); 270 + chunked_sendfile(_metadata, self, 100, 15360); 271 + chunked_sendfile(_metadata, self, 300, 15360); 272 + } 273 + 216 274 TEST_F(tls, recv_max) 217 275 { 218 276 unsigned int send_len = TLS_PAYLOAD_MAX_LEN;