
Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:

- fix failure to add bond interfaces to a bridge, the offload-handling
code was too defensive there and recent refactoring unearthed that.
Users complained (Ido)

- fix unnecessarily reflecting ECN bits within TOS values / QoS marking
in TCP ACK and reset packets (Wei)

- fix a deadlock with bpf iterator. Hopefully we're in the clear on
this front now... (Yonghong)

- BPF fix for clobbering r2 in bpf_gen_ld_abs (Daniel)

- fix AQL on mt76 devices with FW rate control and fix a couple of AQL
issues in mac80211 code (Felix)

- fix authentication issue with mwifiex (Maximilian)

- WiFi connectivity fix: revert IGTK support in ti/wlcore (Mauro)

- fix exception handling for multipath routes via same device (David
Ahern)

- switch back to a BH spin lock flavor for nsid_lock: there are paths
which do require the BH context protection (Taehee)

- fix interrupt / queue / NAPI handling in the lantiq driver (Hauke)

- fix ife module load deadlock (Cong)

- make an adjustment to netlink reply message type for code added in
this release (the sole change touching uAPI here) (Michal)

- a number of fixes for small NXP and Microchip switches (Vladimir)

[ Pull request acked by David: "you can expect more of this in the
future as I try to delegate more things to Jakub" ]

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (167 commits)
net: mscc: ocelot: fix some key offsets for IP4_TCP_UDP VCAP IS2 entries
net: dsa: seville: fix some key offsets for IP4_TCP_UDP VCAP IS2 entries
net: dsa: felix: fix some key offsets for IP4_TCP_UDP VCAP IS2 entries
inet_diag: validate INET_DIAG_REQ_PROTOCOL attribute
net: bridge: br_vlan_get_pvid_rcu() should dereference the VLAN group under RCU
net: Update MAINTAINERS for MediaTek switch driver
net/mlx5e: mlx5e_fec_in_caps() returns a boolean
net/mlx5e: kTLS, Avoid kzalloc(GFP_KERNEL) under spinlock
net/mlx5e: kTLS, Fix leak on resync error flow
net/mlx5e: kTLS, Add missing dma_unmap in RX resync
net/mlx5e: kTLS, Fix napi sync and possible use-after-free
net/mlx5e: TLS, Do not expose FPGA TLS counter if not supported
net/mlx5e: Fix using wrong stats_grps in mlx5e_update_ndo_stats()
net/mlx5e: Fix multicast counter not up-to-date in "ip -s"
net/mlx5e: Fix endianness when calculating pedit mask first bit
net/mlx5e: Enable adding peer miss rules only if merged eswitch is supported
net/mlx5e: CT: Fix freeing ct_label mapping
net/mlx5e: Fix memory leak of tunnel info when rule under multipath not ready
net/mlx5e: Use synchronize_rcu to sync with NAPI
net/mlx5e: Use RCU to protect rq->xdp_prog
...

+1709 -828
+1 -4
Documentation/bpf/ringbuf.rst
··· 182 182 already committed. It is thus possible for slow producers to temporarily hold 183 183 off submitted records, that were reserved later. 184 184 185 - Reservation/commit/consumer protocol is verified by litmus tests in 186 - Documentation/litmus_tests/bpf-rb/_. 187 - 188 185 One interesting implementation bit, that significantly simplifies (and thus 189 186 speeds up as well) implementation of both producers and consumers is how data 190 187 area is mapped twice contiguously back-to-back in the virtual memory. This ··· 197 200 being available after commit only if consumer has already caught up right up to 198 201 the record being committed. If not, consumer still has to catch up and thus 199 202 will see new data anyways without needing an extra poll notification. 200 - Benchmarks (see tools/testing/selftests/bpf/benchs/bench_ringbuf.c_) show that 203 + Benchmarks (see tools/testing/selftests/bpf/benchs/bench_ringbufs.c) show that 201 204 this allows to achieve a very high throughput without having to resort to 202 205 tricks like "notify only every Nth sample", which are necessary with perf 203 206 buffer. For extreme cases, when BPF program wants more manual control of
+3
Documentation/networking/ethtool-netlink.rst
··· 206 206 ``ETHTOOL_MSG_TSINFO_GET`` get timestamping info 207 207 ``ETHTOOL_MSG_CABLE_TEST_ACT`` action start cable test 208 208 ``ETHTOOL_MSG_CABLE_TEST_TDR_ACT`` action start raw TDR cable test 209 + ``ETHTOOL_MSG_TUNNEL_INFO_GET`` get tunnel offload info 209 210 ===================================== ================================ 210 211 211 212 Kernel to userspace: ··· 240 239 ``ETHTOOL_MSG_TSINFO_GET_REPLY`` timestamping info 241 240 ``ETHTOOL_MSG_CABLE_TEST_NTF`` Cable test results 242 241 ``ETHTOOL_MSG_CABLE_TEST_TDR_NTF`` Cable test TDR results 242 + ``ETHTOOL_MSG_TUNNEL_INFO_GET_REPLY`` tunnel offload info 243 243 ===================================== ================================= 244 244 245 245 ``GET`` requests are sent by userspace applications to retrieve device ··· 1365 1363 ``ETHTOOL_SFECPARAM`` n/a 1366 1364 n/a ''ETHTOOL_MSG_CABLE_TEST_ACT'' 1367 1365 n/a ''ETHTOOL_MSG_CABLE_TEST_TDR_ACT'' 1366 + n/a ``ETHTOOL_MSG_TUNNEL_INFO_GET`` 1368 1367 =================================== =====================================
+6 -9
MAINTAINERS
··· 4408 4408 F: fs/configfs/ 4409 4409 F: include/linux/configfs.h 4410 4410 4411 - CONNECTOR 4412 - M: Evgeniy Polyakov <zbr@ioremap.net> 4413 - L: netdev@vger.kernel.org 4414 - S: Maintained 4415 - F: drivers/connector/ 4416 - 4417 4411 CONSOLE SUBSYSTEM 4418 4412 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 4419 4413 S: Supported ··· 8323 8329 F: drivers/pci/hotplug/rpaphp* 8324 8330 8325 8331 IBM Power SRIOV Virtual NIC Device Driver 8326 - M: Thomas Falcon <tlfalcon@linux.ibm.com> 8327 - M: John Allen <jallen@linux.ibm.com> 8332 + M: Dany Madden <drt@linux.ibm.com> 8333 + M: Lijun Pan <ljp@linux.ibm.com> 8334 + M: Sukadev Bhattiprolu <sukadev@linux.ibm.com> 8328 8335 L: netdev@vger.kernel.org 8329 8336 S: Supported 8330 8337 F: drivers/net/ethernet/ibm/ibmvnic.* ··· 8339 8344 F: arch/powerpc/platforms/powernv/vas* 8340 8345 8341 8346 IBM Power Virtual Ethernet Device Driver 8342 - M: Thomas Falcon <tlfalcon@linux.ibm.com> 8347 + M: Cristobal Forno <cforno12@linux.ibm.com> 8343 8348 L: netdev@vger.kernel.org 8344 8349 S: Supported 8345 8350 F: drivers/net/ethernet/ibm/ibmveth.* ··· 11037 11042 11038 11043 MEDIATEK SWITCH DRIVER 11039 11044 M: Sean Wang <sean.wang@mediatek.com> 11045 + M: Landen Chao <Landen.Chao@mediatek.com> 11040 11046 L: netdev@vger.kernel.org 11041 11047 S: Maintained 11042 11048 F: drivers/net/dsa/mt7530.* ··· 12051 12055 T: git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git 12052 12056 T: git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git 12053 12057 F: Documentation/devicetree/bindings/net/ 12058 + F: drivers/connector/ 12054 12059 F: drivers/net/ 12055 12060 F: include/linux/etherdevice.h 12056 12061 F: include/linux/fcdevice.h
+1 -1
arch/arm/boot/dts/at91-sama5d2_icp.dts
··· 116 116 switch0: ksz8563@0 { 117 117 compatible = "microchip,ksz8563"; 118 118 reg = <0>; 119 - phy-mode = "mii"; 120 119 reset-gpios = <&pioA PIN_PD4 GPIO_ACTIVE_LOW>; 121 120 122 121 spi-max-frequency = <500000>; ··· 139 140 reg = <2>; 140 141 label = "cpu"; 141 142 ethernet = <&macb0>; 143 + phy-mode = "mii"; 142 144 fixed-link { 143 145 speed = <100>; 144 146 full-duplex;
+1 -1
drivers/atm/eni.c
··· 2224 2224 2225 2225 rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32)); 2226 2226 if (rc < 0) 2227 - goto out; 2227 + goto err_disable; 2228 2228 2229 2229 rc = -ENOMEM; 2230 2230 eni_dev = kmalloc(sizeof(struct eni_dev), GFP_KERNEL);
+14 -6
drivers/net/dsa/microchip/ksz8795.c
··· 932 932 ksz_port_cfg(dev, port, P_PRIO_CTRL, PORT_802_1P_ENABLE, true); 933 933 934 934 if (cpu_port) { 935 + if (!p->interface && dev->compat_interface) { 936 + dev_warn(dev->dev, 937 + "Using legacy switch \"phy-mode\" property, because it is missing on port %d node. " 938 + "Please update your device tree.\n", 939 + port); 940 + p->interface = dev->compat_interface; 941 + } 942 + 935 943 /* Configure MII interface for proper network communication. */ 936 944 ksz_read8(dev, REG_PORT_5_CTRL_6, &data8); 937 945 data8 &= ~PORT_INTERFACE_TYPE; 938 946 data8 &= ~PORT_GMII_1GPS_MODE; 939 - switch (dev->interface) { 947 + switch (p->interface) { 940 948 case PHY_INTERFACE_MODE_MII: 941 949 p->phydev.speed = SPEED_100; 942 950 break; ··· 960 952 default: 961 953 data8 &= ~PORT_RGMII_ID_IN_ENABLE; 962 954 data8 &= ~PORT_RGMII_ID_OUT_ENABLE; 963 - if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID || 964 - dev->interface == PHY_INTERFACE_MODE_RGMII_RXID) 955 + if (p->interface == PHY_INTERFACE_MODE_RGMII_ID || 956 + p->interface == PHY_INTERFACE_MODE_RGMII_RXID) 965 957 data8 |= PORT_RGMII_ID_IN_ENABLE; 966 - if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID || 967 - dev->interface == PHY_INTERFACE_MODE_RGMII_TXID) 958 + if (p->interface == PHY_INTERFACE_MODE_RGMII_ID || 959 + p->interface == PHY_INTERFACE_MODE_RGMII_TXID) 968 960 data8 |= PORT_RGMII_ID_OUT_ENABLE; 969 961 data8 |= PORT_GMII_1GPS_MODE; 970 962 data8 |= PORT_INTERFACE_RGMII; ··· 1260 1252 } 1261 1253 1262 1254 /* set the real number of ports */ 1263 - dev->ds->num_ports = dev->port_cnt; 1255 + dev->ds->num_ports = dev->port_cnt + 1; 1264 1256 1265 1257 return 0; 1266 1258 }
+19 -10
drivers/net/dsa/microchip/ksz9477.c
··· 1208 1208 1209 1209 /* configure MAC to 1G & RGMII mode */ 1210 1210 ksz_pread8(dev, port, REG_PORT_XMII_CTRL_1, &data8); 1211 - switch (dev->interface) { 1211 + switch (p->interface) { 1212 1212 case PHY_INTERFACE_MODE_MII: 1213 1213 ksz9477_set_xmii(dev, 0, &data8); 1214 1214 ksz9477_set_gbit(dev, false, &data8); ··· 1229 1229 ksz9477_set_gbit(dev, true, &data8); 1230 1230 data8 &= ~PORT_RGMII_ID_IG_ENABLE; 1231 1231 data8 &= ~PORT_RGMII_ID_EG_ENABLE; 1232 - if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID || 1233 - dev->interface == PHY_INTERFACE_MODE_RGMII_RXID) 1232 + if (p->interface == PHY_INTERFACE_MODE_RGMII_ID || 1233 + p->interface == PHY_INTERFACE_MODE_RGMII_RXID) 1234 1234 data8 |= PORT_RGMII_ID_IG_ENABLE; 1235 - if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID || 1236 - dev->interface == PHY_INTERFACE_MODE_RGMII_TXID) 1235 + if (p->interface == PHY_INTERFACE_MODE_RGMII_ID || 1236 + p->interface == PHY_INTERFACE_MODE_RGMII_TXID) 1237 1237 data8 |= PORT_RGMII_ID_EG_ENABLE; 1238 1238 p->phydev.speed = SPEED_1000; 1239 1239 break; ··· 1269 1269 dev->cpu_port = i; 1270 1270 dev->host_mask = (1 << dev->cpu_port); 1271 1271 dev->port_mask |= dev->host_mask; 1272 + p = &dev->ports[i]; 1272 1273 1273 1274 /* Read from XMII register to determine host port 1274 1275 * interface. If set specifically in device tree 1275 1276 * note the difference to help debugging. 1276 1277 */ 1277 1278 interface = ksz9477_get_interface(dev, i); 1278 - if (!dev->interface) 1279 - dev->interface = interface; 1280 - if (interface && interface != dev->interface) 1279 + if (!p->interface) { 1280 + if (dev->compat_interface) { 1281 + dev_warn(dev->dev, 1282 + "Using legacy switch \"phy-mode\" property, because it is missing on port %d node. " 1283 + "Please update your device tree.\n", 1284 + i); 1285 + p->interface = dev->compat_interface; 1286 + } else { 1287 + p->interface = interface; 1288 + } 1289 + } 1290 + if (interface && interface != p->interface) 1281 1291 dev_info(dev->dev, 1282 1292 "use %s instead of %s\n", 1283 1293 phy_modes(p->interface), 1284 1294 phy_modes(interface)); 1285 1295 1286 1296 /* enable cpu port */ 1287 1297 ksz9477_port_setup(dev, i, true); 1288 - p = &dev->ports[dev->cpu_port]; 1289 1298 p->vid_member = dev->port_mask; 1290 1299 p->on = 1; 1291 1300 }
+12 -1
drivers/net/dsa/microchip/ksz_common.c
··· 388 388 const struct ksz_dev_ops *ops) 389 389 { 390 390 phy_interface_t interface; 391 + struct device_node *port; 392 + unsigned int port_num; 391 393 int ret; 392 394 393 395 if (dev->pdata) ··· 423 421 /* Host port interface will be self detected, or specifically set in 424 422 * device tree. 425 423 */ 424 + for (port_num = 0; port_num < dev->port_cnt; ++port_num) 425 + dev->ports[port_num].interface = PHY_INTERFACE_MODE_NA; 426 426 if (dev->dev->of_node) { 427 427 ret = of_get_phy_mode(dev->dev->of_node, &interface); 428 428 if (ret == 0) 429 - dev->interface = interface; 429 + dev->compat_interface = interface; 430 + for_each_available_child_of_node(dev->dev->of_node, port) { 431 + if (of_property_read_u32(port, "reg", &port_num)) 432 + continue; 433 + if (port_num >= dev->port_cnt) 434 + return -EINVAL; 435 + of_get_phy_mode(port, &dev->ports[port_num].interface); 436 + } 430 437 dev->synclko_125 = of_property_read_bool(dev->dev->of_node, 431 438 "microchip,synclko-125"); 432 439 }
+2 -1
drivers/net/dsa/microchip/ksz_common.h
··· 39 39 u32 freeze:1; /* MIB counter freeze is enabled */ 40 40 41 41 struct ksz_port_mib mib; 42 + phy_interface_t interface; 42 43 }; 43 44 44 45 struct ksz_device { ··· 73 72 int mib_cnt; 74 73 int mib_port_cnt; 75 74 int last_port; /* ports after that not used */ 76 - phy_interface_t interface; 75 + phy_interface_t compat_interface; 77 76 u32 regs_size; 78 77 bool phy_errata_9477; 79 78 bool synclko_125;
+7 -1
drivers/net/dsa/ocelot/felix.c
··· 585 585 if (err) 586 586 return err; 587 587 588 - ocelot_init(ocelot); 588 + err = ocelot_init(ocelot); 589 + if (err) 590 + return err; 591 + 589 592 if (ocelot->ptp) { 590 593 err = ocelot_init_timestamp(ocelot, &ocelot_ptp_clock_info); 591 594 if (err) { ··· 643 640 { 644 641 struct ocelot *ocelot = ds->priv; 645 642 struct felix *felix = ocelot_to_felix(ocelot); 643 + int port; 646 644 647 645 if (felix->info->mdio_bus_free) 648 646 felix->info->mdio_bus_free(ocelot); 649 647 648 + for (port = 0; port < ocelot->num_phys_ports; port++) 649 + ocelot_deinit_port(ocelot, port); 650 650 ocelot_deinit_timestamp(ocelot); 651 651 /* stop workqueue thread */ 652 652 ocelot_deinit(ocelot);
+8 -8
drivers/net/dsa/ocelot/felix_vsc9959.c
··· 645 645 [VCAP_IS2_HK_DIP_EQ_SIP] = {118, 1}, 646 646 /* IP4_TCP_UDP (TYPE=100) */ 647 647 [VCAP_IS2_HK_TCP] = {119, 1}, 648 - [VCAP_IS2_HK_L4_SPORT] = {120, 16}, 649 - [VCAP_IS2_HK_L4_DPORT] = {136, 16}, 648 + [VCAP_IS2_HK_L4_DPORT] = {120, 16}, 649 + [VCAP_IS2_HK_L4_SPORT] = {136, 16}, 650 650 [VCAP_IS2_HK_L4_RNG] = {152, 8}, 651 651 [VCAP_IS2_HK_L4_SPORT_EQ_DPORT] = {160, 1}, 652 652 [VCAP_IS2_HK_L4_SEQUENCE_EQ0] = {161, 1}, 653 - [VCAP_IS2_HK_L4_URG] = {162, 1}, 654 - [VCAP_IS2_HK_L4_ACK] = {163, 1}, 655 - [VCAP_IS2_HK_L4_PSH] = {164, 1}, 656 - [VCAP_IS2_HK_L4_RST] = {165, 1}, 657 - [VCAP_IS2_HK_L4_SYN] = {166, 1}, 658 - [VCAP_IS2_HK_L4_FIN] = {167, 1}, 653 + [VCAP_IS2_HK_L4_FIN] = {162, 1}, 654 + [VCAP_IS2_HK_L4_SYN] = {163, 1}, 655 + [VCAP_IS2_HK_L4_RST] = {164, 1}, 656 + [VCAP_IS2_HK_L4_PSH] = {165, 1}, 657 + [VCAP_IS2_HK_L4_ACK] = {166, 1}, 658 + [VCAP_IS2_HK_L4_URG] = {167, 1}, 659 659 [VCAP_IS2_HK_L4_1588_DOM] = {168, 8}, 660 660 [VCAP_IS2_HK_L4_1588_VER] = {176, 4}, 661 661 /* IP4_OTHER (TYPE=101) */
+9 -9
drivers/net/dsa/ocelot/seville_vsc9953.c
··· 659 659 [VCAP_IS2_HK_DIP_EQ_SIP] = {122, 1}, 660 660 /* IP4_TCP_UDP (TYPE=100) */ 661 661 [VCAP_IS2_HK_TCP] = {123, 1}, 662 - [VCAP_IS2_HK_L4_SPORT] = {124, 16}, 663 - [VCAP_IS2_HK_L4_DPORT] = {140, 16}, 662 + [VCAP_IS2_HK_L4_DPORT] = {124, 16}, 663 + [VCAP_IS2_HK_L4_SPORT] = {140, 16}, 664 664 [VCAP_IS2_HK_L4_RNG] = {156, 8}, 665 665 [VCAP_IS2_HK_L4_SPORT_EQ_DPORT] = {164, 1}, 666 666 [VCAP_IS2_HK_L4_SEQUENCE_EQ0] = {165, 1}, 667 - [VCAP_IS2_HK_L4_URG] = {166, 1}, 668 - [VCAP_IS2_HK_L4_ACK] = {167, 1}, 669 - [VCAP_IS2_HK_L4_PSH] = {168, 1}, 670 - [VCAP_IS2_HK_L4_RST] = {169, 1}, 671 - [VCAP_IS2_HK_L4_SYN] = {170, 1}, 672 - [VCAP_IS2_HK_L4_FIN] = {171, 1}, 667 + [VCAP_IS2_HK_L4_FIN] = {166, 1}, 668 + [VCAP_IS2_HK_L4_SYN] = {167, 1}, 669 + [VCAP_IS2_HK_L4_RST] = {168, 1}, 670 + [VCAP_IS2_HK_L4_PSH] = {169, 1}, 671 + [VCAP_IS2_HK_L4_ACK] = {170, 1}, 672 + [VCAP_IS2_HK_L4_URG] = {171, 1}, 673 673 /* IP4_OTHER (TYPE=101) */ 674 674 [VCAP_IS2_HK_IP4_L3_PROTO] = {123, 8}, 675 675 [VCAP_IS2_HK_L3_PAYLOAD] = {131, 56}, ··· 1008 1008 .vcap_is2_keys = vsc9953_vcap_is2_keys, 1009 1009 .vcap_is2_actions = vsc9953_vcap_is2_actions, 1010 1010 .vcap = vsc9953_vcap_props, 1011 - .shared_queue_sz = 128 * 1024, 1011 + .shared_queue_sz = 2048 * 1024, 1012 1012 .num_mact_rows = 2048, 1013 1013 .num_ports = 10, 1014 1014 .mdio_bus_alloc = vsc9953_mdio_bus_alloc,
+13 -7
drivers/net/dsa/rtl8366.c
··· 452 452 return ret; 453 453 454 454 if (vid == vlanmc.vid) { 455 - /* clear VLAN member configurations */ 456 - vlanmc.vid = 0; 457 - vlanmc.priority = 0; 458 - vlanmc.member = 0; 459 - vlanmc.untag = 0; 460 - vlanmc.fid = 0; 461 - 455 + /* Remove this port from the VLAN */ 456 + vlanmc.member &= ~BIT(port); 457 + vlanmc.untag &= ~BIT(port); 458 + /* 459 + * If no ports are members of this VLAN 460 + * anymore then clear the whole member 461 + * config so it can be reused. 462 + */ 463 + if (!vlanmc.member && vlanmc.untag) { 464 + vlanmc.vid = 0; 465 + vlanmc.priority = 0; 466 + vlanmc.fid = 0; 467 + } 462 468 ret = smi->ops->set_vlan_mc(smi, i, &vlanmc); 463 469 if (ret) { 464 470 dev_err(smi->dev,
+27 -16
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 3782 3782 return -EOPNOTSUPP; 3783 3783 3784 3784 bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_QSTATS_EXT, -1, -1); 3785 + req.fid = cpu_to_le16(0xffff); 3785 3786 req.flags = FUNC_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK; 3786 3787 mutex_lock(&bp->hwrm_cmd_lock); 3787 3788 rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); ··· 3853 3852 tx_masks = stats->hw_masks; 3854 3853 tx_count = sizeof(struct tx_port_stats_ext) / 8; 3855 3854 3856 - flags = FUNC_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK; 3855 + flags = PORT_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK; 3857 3856 rc = bnxt_hwrm_port_qstats_ext(bp, flags); 3858 3857 if (rc) { 3859 3858 mask = (1ULL << 40) - 1; ··· 4306 4305 u32 bar_offset = BNXT_GRCPF_REG_CHIMP_COMM; 4307 4306 u16 dst = BNXT_HWRM_CHNL_CHIMP; 4308 4307 4309 - if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state)) 4308 + if (BNXT_NO_FW_ACCESS(bp)) 4310 4309 return -EBUSY; 4311 4310 4312 4311 if (msg_len > BNXT_HWRM_MAX_REQ_LEN) { ··· 5724 5723 struct hwrm_ring_free_output *resp = bp->hwrm_cmd_resp_addr; 5725 5724 u16 error_code; 5726 5725 5727 - if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state)) 5726 + if (BNXT_NO_FW_ACCESS(bp)) 5728 5727 return 0; 5729 5728 bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_RING_FREE, cmpl_ring_id, -1); ··· 7818 7817 7819 7818 if (set_tpa) 7820 7819 tpa_flags = bp->flags & BNXT_FLAG_TPA; 7821 - else if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state)) 7820 + else if (BNXT_NO_FW_ACCESS(bp)) 7822 7821 return 0; 7823 7822 for (i = 0; i < bp->nr_vnics; i++) { 7824 7823 rc = bnxt_hwrm_vnic_set_tpa(bp, i, tpa_flags); ··· 9312 9311 struct hwrm_temp_monitor_query_output *resp; 9313 9312 struct bnxt *bp = dev_get_drvdata(dev); 9314 9313 u32 len = 0; 9314 + int rc; 9315 9315 9316 9316 resp = bp->hwrm_cmd_resp_addr; 9317 9317 bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_TEMP_MONITOR_QUERY, -1, -1); 9318 9318 mutex_lock(&bp->hwrm_cmd_lock); 9319 - if (!_hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT)) 9319 + rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); 9320 + if (!rc) 9320 9321 len = sprintf(buf, "%u\n", resp->temp * 1000); /* display millidegree */ 9321 9322 mutex_unlock(&bp->hwrm_cmd_lock); 9322 - 9323 - if (len) 9324 - return len; 9325 - 9326 - return sprintf(buf, "unknown\n"); 9323 + return rc ?: len; 9327 9324 } 9328 9325 static SENSOR_DEVICE_ATTR(temp1_input, 0444, bnxt_show_temp, NULL, 0); 9329 9326 ··· 9341 9342 9342 9343 static void bnxt_hwmon_open(struct bnxt *bp) 9343 9344 { 9345 + struct hwrm_temp_monitor_query_input req = {0}; 9344 9346 struct pci_dev *pdev = bp->pdev; 9347 + int rc; 9348 + 9349 + bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_TEMP_MONITOR_QUERY, -1, -1); 9350 + rc = hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); 9351 + if (rc == -EACCES || rc == -EOPNOTSUPP) { 9352 + bnxt_hwmon_close(bp); 9353 + return; 9354 + } 9345 9355 9346 9356 if (bp->hwmon_dev) 9347 9357 return; ··· 11787 11779 if (BNXT_PF(bp)) 11788 11780 bnxt_sriov_disable(bp); 11789 11781 11782 + clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state); 11783 + bnxt_cancel_sp_work(bp); 11784 + bp->sp_event = 0; 11785 + 11790 11786 bnxt_dl_fw_reporters_destroy(bp, true); 11791 11787 if (BNXT_PF(bp)) 11792 11788 devlink_port_type_clear(&bp->dl_port); ··· 11798 11786 unregister_netdev(dev); 11799 11787 bnxt_dl_unregister(bp); 11800 11788 bnxt_shutdown_tc(bp); 11801 - clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state); 11802 - bnxt_cancel_sp_work(bp); 11803 - bp->sp_event = 0; 11804 11789 11805 11790 bnxt_clear_int_mode(bp); 11806 11791 bnxt_hwrm_func_drv_unrgtr(bp); ··· 12098 12089 static void bnxt_vpd_read_info(struct bnxt *bp) 12099 12090 { 12100 12091 struct pci_dev *pdev = bp->pdev; 12101 - int i, len, pos, ro_size; 12092 + int i, len, pos, ro_size, size; 12102 12093 ssize_t vpd_size; 12103 12094 u8 *vpd_data; 12104 12095 ··· 12133 12124 if (len + pos > vpd_size) 12134 12125 goto read_sn; 12135 12126 12136 - strlcpy(bp->board_partno, &vpd_data[pos], min(len, BNXT_VPD_FLD_LEN)); 12127 + size = min(len, BNXT_VPD_FLD_LEN - 1); 12128 + memcpy(bp->board_partno, &vpd_data[pos], size); 12137 12129 12138 12130 read_sn: 12139 12131 pos = pci_vpd_find_info_keyword(vpd_data, i, ro_size, ··· 12147 12137 if (len + pos > vpd_size) 12148 12138 goto exit; 12149 12139 12150 - strlcpy(bp->board_serialno, &vpd_data[pos], min(len, BNXT_VPD_FLD_LEN)); 12140 + size = min(len, BNXT_VPD_FLD_LEN - 1); 12141 + memcpy(bp->board_serialno, &vpd_data[pos], size); 12151 12142 exit: 12152 12143 kfree(vpd_data); 12153 12144 }
+4
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 1737 1737 #define BNXT_STATE_FW_FATAL_COND 6 1738 1738 #define BNXT_STATE_DRV_REGISTERED 7 1739 1739 1740 + #define BNXT_NO_FW_ACCESS(bp) \ 1741 + (test_bit(BNXT_STATE_FW_FATAL_COND, &(bp)->state) || \ 1742 + pci_channel_offline((bp)->pdev)) 1743 + 1740 1744 struct bnxt_irq *irq_tbl; 1741 1745 int total_irqs; 1742 1746 u8 mac_addr[ETH_ALEN];
+23 -11
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
··· 1322 1322 struct bnxt *bp = netdev_priv(dev); 1323 1323 int reg_len; 1324 1324 1325 + if (!BNXT_PF(bp)) 1326 + return -EOPNOTSUPP; 1327 + 1325 1328 reg_len = BNXT_PXP_REG_LEN; 1326 1329 1327 1330 if (bp->fw_cap & BNXT_FW_CAP_PCIE_STATS_SUPPORTED) ··· 1791 1788 if (!BNXT_PHY_CFG_ABLE(bp)) 1792 1789 return -EOPNOTSUPP; 1793 1790 1791 + mutex_lock(&bp->link_lock); 1794 1792 if (epause->autoneg) { 1795 - if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) 1796 - return -EINVAL; 1793 + if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) { 1794 + rc = -EINVAL; 1795 + goto pause_exit; 1796 + } 1797 1797 1798 1798 link_info->autoneg |= BNXT_AUTONEG_FLOW_CTRL; 1799 1799 if (bp->hwrm_spec_code >= 0x10201) ··· 1817 1811 if (epause->tx_pause) 1818 1812 link_info->req_flow_ctrl |= BNXT_LINK_PAUSE_TX; 1819 1813 1820 - if (netif_running(dev)) { 1821 - mutex_lock(&bp->link_lock); 1814 + if (netif_running(dev)) 1822 1815 rc = bnxt_hwrm_set_pause(bp); 1823 - mutex_unlock(&bp->link_lock); 1824 - } 1816 + 1817 + pause_exit: 1818 + mutex_unlock(&bp->link_lock); 1825 1819 return rc; 1826 1820 } 1827 1821 ··· 2558 2552 struct bnxt *bp = netdev_priv(dev); 2559 2553 struct ethtool_eee *eee = &bp->eee; 2560 2554 struct bnxt_link_info *link_info = &bp->link_info; 2561 - u32 advertising = 2562 - _bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0); 2555 + u32 advertising; 2563 2556 int rc = 0; 2564 2557 2565 2558 if (!BNXT_PHY_CFG_ABLE(bp)) ··· 2567 2562 if (!(bp->flags & BNXT_FLAG_EEE_CAP)) 2568 2563 return -EOPNOTSUPP; 2569 2564 2565 + mutex_lock(&bp->link_lock); 2566 + advertising = _bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0); 2570 2567 if (!edata->eee_enabled) 2571 2568 goto eee_ok; 2572 2569 2573 2570 if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) { 2574 2571 netdev_warn(dev, "EEE requires autoneg\n"); 2575 - return -EINVAL; 2572 + rc = -EINVAL; 2573 + goto eee_exit; 2576 2574 } 2577 2575 if (edata->tx_lpi_enabled) { 2578 2576 if (bp->lpi_tmr_hi && (edata->tx_lpi_timer > bp->lpi_tmr_hi || 2579 2577 edata->tx_lpi_timer < bp->lpi_tmr_lo)) { 2580 2578 netdev_warn(dev, "Valid LPI timer range is %d and %d microsecs\n", 2581 2579 bp->lpi_tmr_lo, bp->lpi_tmr_hi); 2582 - return -EINVAL; 2580 + rc = -EINVAL; 2581 + goto eee_exit; 2583 2581 } else if (!bp->lpi_tmr_hi) { 2584 2582 edata->tx_lpi_timer = eee->tx_lpi_timer; 2585 2583 } ··· 2593 2584 } else if (edata->advertised & ~advertising) { 2594 2585 netdev_warn(dev, "EEE advertised %x must be a subset of autoneg advertised speeds %x\n", 2595 2586 edata->advertised, advertising); 2596 - return -EINVAL; 2587 + rc = -EINVAL; 2588 + goto eee_exit; 2597 2589 } 2598 2590 2599 2591 eee->advertised = edata->advertised; ··· 2606 2596 if (netif_running(dev)) 2607 2597 rc = bnxt_hwrm_set_link_setting(bp, false, true); 2608 2598 2599 + eee_exit: 2600 + mutex_unlock(&bp->link_lock); 2609 2601 return rc; 2610 2602 } 2611 2603
+1 -2
drivers/net/ethernet/cadence/macb_main.c
··· 647 647 ctrl |= GEM_BIT(GBE); 648 648 } 649 649 650 - /* We do not support MLO_PAUSE_RX yet */ 651 - if (tx_pause) 650 + if (rx_pause) 652 651 ctrl |= MACB_BIT(PAE); 653 652 654 653 macb_set_tx_clk(bp->tx_clk, speed, ndev);
+6 -3
drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
··· 1911 1911 static int configure_filter_tcb(struct adapter *adap, unsigned int tid, 1912 1912 struct filter_entry *f) 1913 1913 { 1914 - if (f->fs.hitcnts) 1914 + if (f->fs.hitcnts) { 1915 1915 set_tcb_field(adap, f, tid, TCB_TIMESTAMP_W, 1916 - TCB_TIMESTAMP_V(TCB_TIMESTAMP_M) | 1916 + TCB_TIMESTAMP_V(TCB_TIMESTAMP_M), 1917 + TCB_TIMESTAMP_V(0ULL), 1918 + 1); 1919 + set_tcb_field(adap, f, tid, TCB_RTT_TS_RECENT_AGE_W, 1917 1920 TCB_RTT_TS_RECENT_AGE_V(TCB_RTT_TS_RECENT_AGE_M), 1918 - TCB_TIMESTAMP_V(0ULL) | 1919 1921 TCB_RTT_TS_RECENT_AGE_V(0ULL), 1920 1922 1); 1923 + } 1921 1924 1922 1925 if (f->fs.newdmac) 1923 1926 set_tcb_tflag(adap, f, tid, TF_CCTRL_ECE_S, 1,
+1 -1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_mps.c
··· 229 229 { 230 230 struct mps_entries_ref *mps_entry, *tmp; 231 231 232 - if (!list_empty(&adap->mps_ref)) 232 + if (list_empty(&adap->mps_ref)) 233 233 return; 234 234 235 235 spin_lock(&adap->mps_ref_lock);
+1 -1
drivers/net/ethernet/dec/tulip/de2104x.c
··· 85 85 #define DSL CONFIG_DE2104X_DSL 86 86 #endif 87 87 88 - #define DE_RX_RING_SIZE 64 88 + #define DE_RX_RING_SIZE 128 89 89 #define DE_TX_RING_SIZE 64 90 90 #define DE_RING_BYTES \ 91 91 ((sizeof(struct de_desc) * DE_RX_RING_SIZE) + \
+2 -2
drivers/net/ethernet/freescale/dpaa2/dpmac-cmd.h
··· 66 66 }; 67 67 68 68 struct dpmac_rsp_get_counter { 69 - u64 pad; 70 - u64 counter; 69 + __le64 pad; 70 + __le64 counter; 71 71 }; 72 72 73 73 #endif /* _FSL_DPMAC_CMD_H */
+1 -1
drivers/net/ethernet/freescale/enetc/enetc_pf.c
··· 1053 1053 1054 1054 err_reg_netdev: 1055 1055 enetc_teardown_serdes(priv); 1056 - enetc_mdio_remove(pf); 1057 1056 enetc_free_msix(priv); 1058 1057 err_alloc_msix: 1059 1058 enetc_free_si_resources(priv); ··· 1060 1061 si->ndev = NULL; 1061 1062 free_netdev(ndev); 1062 1063 err_alloc_netdev: 1064 + enetc_mdio_remove(pf); 1063 1065 enetc_of_put_phy(pf); 1064 1066 err_map_pf_space: 1065 1067 enetc_pci_remove(pdev);
+2 -2
drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c
··· 334 334 * bit6-11 for ppe0-5 335 335 * bit12-17 for roce0-5 336 336 * bit18-19 for com/dfx 337 - * @enable: false - request reset , true - drop reset 337 + * @dereset: false - request reset , true - drop reset 338 338 */ 339 339 static void 340 340 hns_dsaf_srst_chns(struct dsaf_device *dsaf_dev, u32 msk, bool dereset) ··· 357 357 * bit6-11 for ppe0-5 358 358 * bit12-17 for roce0-5 359 359 * bit18-19 for com/dfx 360 - * @enable: false - request reset , true - drop reset 360 + * @dereset: false - request reset , true - drop reset 361 361 */ 362 362 static void 363 363 hns_dsaf_srst_chns_acpi(struct dsaf_device *dsaf_dev, u32 msk, bool dereset)
+20 -20
drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
··· 463 463 464 464 /** 465 465 * nic_run_loopback_test - run loopback test 466 - * @nic_dev: net device 467 - * @loopback_type: loopback type 466 + * @ndev: net device 467 + * @loop_mode: loopback mode 468 468 */ 469 469 static int __lb_run_test(struct net_device *ndev, 470 470 enum hnae_loop loop_mode) ··· 572 572 573 573 /** 574 574 * hns_nic_self_test - self test 575 - * @dev: net device 575 + * @ndev: net device 576 576 * @eth_test: test cmd 577 577 * @data: test result 578 578 */ ··· 633 633 634 634 /** 635 635 * hns_nic_get_drvinfo - get net driver info 636 - * @dev: net device 636 + * @net_dev: net device 637 637 * @drvinfo: driver info 638 638 */ 639 639 static void hns_nic_get_drvinfo(struct net_device *net_dev, ··· 658 658 659 659 /** 660 660 * hns_get_ringparam - get ring parameter 661 - * @dev: net device 661 + * @net_dev: net device 662 662 * @param: ethtool parameter 663 663 */ 664 664 static void hns_get_ringparam(struct net_device *net_dev, ··· 683 683 684 684 /** 685 685 * hns_get_pauseparam - get pause parameter 686 - * @dev: net device 686 + * @net_dev: net device 687 687 * @param: pause parameter 688 688 */ 689 689 static void hns_get_pauseparam(struct net_device *net_dev, ··· 701 701 702 702 /** 703 703 * hns_set_pauseparam - set pause parameter 704 - * @dev: net device 704 + * @net_dev: net device 705 705 * @param: pause parameter 706 706 * 707 707 * Return 0 on success, negative on failure ··· 725 725 726 726 /** 727 727 * hns_get_coalesce - get coalesce info. 728 - * @dev: net device 728 + * @net_dev: net device 729 729 * @ec: coalesce info. 730 730 * 731 731 * Return 0 on success, negative on failure. ··· 769 769 770 770 /** 771 771 * hns_set_coalesce - set coalesce info. 772 - * @dev: net device 772 + * @net_dev: net device 773 773 * @ec: coalesce info. 774 774 * 775 775 * Return 0 on success, negative on failure. ··· 808 808 809 809 /** 810 810 * hns_get_channels - get channel info. 811 - * @dev: net device 811 + * @net_dev: net device 812 812 * @ch: channel info. 813 813 */ 814 814 static void ··· 825 825 826 826 /** 827 827 * get_ethtool_stats - get detail statistics. 828 - * @dev: net device 828 + * @netdev: net device 829 829 * @stats: statistics info. 830 830 * @data: statistics data. 831 831 */ ··· 883 883 884 884 /** 885 885 * get_strings: Return a set of strings that describe the requested objects 886 - * @dev: net device 887 - * @stats: string set ID. 886 + * @netdev: net device 887 + * @stringset: string set ID. 888 888 * @data: objects data. 889 889 */ 890 890 static void hns_get_strings(struct net_device *netdev, u32 stringset, u8 *data) ··· 972 972 973 973 /** 974 974 * nic_get_sset_count - get string set count witch returned by nic_get_strings. 975 - * @dev: net device 975 + * @netdev: net device 976 976 * @stringset: string set index, 0: self test string; 1: statistics string. 977 977 * 978 978 * Return string set count. ··· 1006 1006 1007 1007 /** 1008 1008 * hns_phy_led_set - set phy LED status. 1009 - * @dev: net device 1009 + * @netdev: net device 1010 1010 * @value: LED state. 1011 1011 * 1012 1012 * Return 0 on success, negative on failure. ··· 1028 1028 1029 1029 /** 1030 1030 * nic_set_phys_id - set phy identify LED. 1031 - * @dev: net device 1031 + * @netdev: net device 1032 1032 * @state: LED state. 1033 1033 * 1034 1034 * Return 0 on success, negative on failure. ··· 1104 1104 1105 1105 /** 1106 1106 * hns_get_regs - get net device register 1107 - * @dev: net device 1107 + * @net_dev: net device 1108 1108 * @cmd: ethtool cmd 1109 - * @date: register data 1109 + * @data: register data 1110 1110 */ 1111 1111 static void hns_get_regs(struct net_device *net_dev, struct ethtool_regs *cmd, 1112 1112 void *data) ··· 1126 1126 1127 1127 /** 1128 1128 * nic_get_regs_len - get total register len. 1129 - * @dev: net device 1129 + * @net_dev: net device 1130 1130 * 1131 1131 * Return total register len. 1132 1132 */ ··· 1151 1151 1152 1152 /** 1153 1153 * hns_nic_nway_reset - nway reset 1154 - * @dev: net device 1154 + * @netdev: net device 1155 1155 * 1156 1156 * Return 0 on success, negative on failure 1157 1157 */
+4
drivers/net/ethernet/huawei/hinic/hinic_ethtool.c
··· 1654 1654 } 1655 1655 1656 1656 netif_carrier_off(netdev); 1657 + netif_tx_disable(netdev); 1657 1658 1658 1659 err = do_lp_test(nic_dev, eth_test->flags, LP_DEFAULT_TIME, 1659 1660 &test_index); ··· 1663 1662 data[test_index] = 1; 1664 1663 } 1665 1664 1665 + netif_tx_wake_all_queues(netdev); 1666 + 1666 1667 err = hinic_port_link_state(nic_dev, &link_state); 1667 1668 if (!err && link_state == HINIC_LINK_STATE_UP) 1668 1669 netif_carrier_on(netdev); 1670 + 1669 1671 } 1670 1672 1671 1673 static int hinic_set_phys_id(struct net_device *netdev,
+15 -5
drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
··· 47 47 48 48 #define MGMT_MSG_TIMEOUT 5000 49 49 50 + #define SET_FUNC_PORT_MBOX_TIMEOUT 30000 51 + 50 52 #define SET_FUNC_PORT_MGMT_TIMEOUT 25000 53 + 54 + #define UPDATE_FW_MGMT_TIMEOUT 20000 51 55 52 56 #define mgmt_to_pfhwdev(pf_mgmt) \ 53 57 container_of(pf_mgmt, struct hinic_pfhwdev, pf_to_mgmt) ··· 365 361 return -EINVAL; 366 362 } 367 363 368 - if (cmd == HINIC_PORT_CMD_SET_FUNC_STATE) 369 - timeout = SET_FUNC_PORT_MGMT_TIMEOUT; 364 + if (HINIC_IS_VF(hwif)) { 365 + if (cmd == HINIC_PORT_CMD_SET_FUNC_STATE) 366 + timeout = SET_FUNC_PORT_MBOX_TIMEOUT; 370 367 371 - if (HINIC_IS_VF(hwif)) 372 368 return hinic_mbox_to_pf(pf_to_mgmt->hwdev, mod, cmd, buf_in, 373 - in_size, buf_out, out_size, 0); 374 - else 369 + in_size, buf_out, out_size, timeout); 370 + } else { 371 + if (cmd == HINIC_PORT_CMD_SET_FUNC_STATE) 372 + timeout = SET_FUNC_PORT_MGMT_TIMEOUT; 373 + else if (cmd == HINIC_PORT_CMD_UPDATE_FW) 374 + timeout = UPDATE_FW_MGMT_TIMEOUT; 375 + 375 376 return msg_to_mgmt_sync(pf_to_mgmt, mod, cmd, buf_in, in_size, 376 377 buf_out, out_size, MGMT_DIRECT_SEND, 377 378 MSG_NOT_RESP, timeout); 379 + } 378 380 } 379 381 380 382 static void recv_mgmt_msg_work_handler(struct work_struct *work)
+24
drivers/net/ethernet/huawei/hinic/hinic_main.c
··· 174 174 return err; 175 175 } 176 176 177 + static void enable_txqs_napi(struct hinic_dev *nic_dev) 178 + { 179 + int num_txqs = hinic_hwdev_num_qps(nic_dev->hwdev); 180 + int i; 181 + 182 + for (i = 0; i < num_txqs; i++) 183 + napi_enable(&nic_dev->txqs[i].napi); 184 + } 185 + 186 + static void disable_txqs_napi(struct hinic_dev *nic_dev) 187 + { 188 + int num_txqs = hinic_hwdev_num_qps(nic_dev->hwdev); 189 + int i; 190 + 191 + for (i = 0; i < num_txqs; i++) 192 + napi_disable(&nic_dev->txqs[i].napi); 193 + } 194 + 177 195 /** 178 196 * free_txqs - Free the Logical Tx Queues of specific NIC device 179 197 * @nic_dev: the specific NIC device ··· 418 400 goto err_create_txqs; 419 401 } 420 402 403 + enable_txqs_napi(nic_dev); 404 + 421 405 err = create_rxqs(nic_dev); 422 406 if (err) { 423 407 netif_err(nic_dev, drv, netdev, ··· 504 484 } 505 485 506 486 err_create_rxqs: 487 + disable_txqs_napi(nic_dev); 507 488 free_txqs(nic_dev); 508 489 509 490 err_create_txqs: ··· 517 496 { 518 497 struct hinic_dev *nic_dev = netdev_priv(netdev); 519 498 unsigned int flags; 499 + 500 + /* Disable txq napi first to avoid rewaking txq in free_tx_poll */ 501 + disable_txqs_napi(nic_dev); 520 502 521 503 down(&nic_dev->mgmt_lock); 522 504
+14 -7
drivers/net/ethernet/huawei/hinic/hinic_rx.c
··· 543 543 if (err) { 544 544 netif_err(nic_dev, drv, rxq->netdev, 545 545 "Failed to set RX interrupt coalescing attribute\n"); 546 - rx_del_napi(rxq); 547 - return err; 546 + goto err_req_irq; 548 547 } 549 548 550 549 err = request_irq(rq->irq, rx_irq, 0, rxq->irq_name, rxq); 551 - if (err) { 552 - rx_del_napi(rxq); 553 - return err; 554 - } 550 + if (err) 551 + goto err_req_irq; 555 552 556 553 cpumask_set_cpu(qp->q_id % num_online_cpus(), &rq->affinity_mask); 557 - return irq_set_affinity_hint(rq->irq, &rq->affinity_mask); 554 + err = irq_set_affinity_hint(rq->irq, &rq->affinity_mask); 555 + if (err) 556 + goto err_irq_affinity; 557 + 558 + return 0; 559 + 560 + err_irq_affinity: 561 + free_irq(rq->irq, rxq); 562 + err_req_irq: 563 + rx_del_napi(rxq); 564 + return err; 558 565 } 559 566 560 567 static void rx_free_irq(struct hinic_rxq *rxq)
+6 -18
drivers/net/ethernet/huawei/hinic/hinic_tx.c
··· 717 717 netdev_txq = netdev_get_tx_queue(txq->netdev, qp->q_id); 718 718 719 719 __netif_tx_lock(netdev_txq, smp_processor_id()); 720 - 721 - netif_wake_subqueue(nic_dev->netdev, qp->q_id); 720 + if (!netif_testing(nic_dev->netdev)) 721 + netif_wake_subqueue(nic_dev->netdev, qp->q_id); 722 722 723 723 __netif_tx_unlock(netdev_txq); 724 724 ··· 743 743 } 744 744 745 745 return budget; 746 - } 747 - 748 - static void tx_napi_add(struct hinic_txq *txq, int weight) 749 - { 750 - netif_napi_add(txq->netdev, &txq->napi, free_tx_poll, weight); 751 - napi_enable(&txq->napi); 752 - } 753 - 754 - static void tx_napi_del(struct hinic_txq *txq) 755 - { 756 - napi_disable(&txq->napi); 757 - netif_napi_del(&txq->napi); 758 746 } 759 747 760 748 static irqreturn_t tx_irq(int irq, void *data) ··· 778 790 779 791 qp = container_of(sq, struct hinic_qp, sq); 780 792 781 - tx_napi_add(txq, nic_dev->tx_weight); 793 + netif_napi_add(txq->netdev, &txq->napi, free_tx_poll, nic_dev->tx_weight); 782 794 783 795 hinic_hwdev_msix_set(nic_dev->hwdev, sq->msix_entry, 784 796 TX_IRQ_NO_PENDING, TX_IRQ_NO_COALESC, ··· 795 807 if (err) { 796 808 netif_err(nic_dev, drv, txq->netdev, 797 809 "Failed to set TX interrupt coalescing attribute\n"); 798 - tx_napi_del(txq); 810 + netif_napi_del(&txq->napi); 799 811 return err; 800 812 } 801 813 802 814 err = request_irq(sq->irq, tx_irq, 0, txq->irq_name, txq); 803 815 if (err) { 804 816 dev_err(&pdev->dev, "Failed to request Tx irq\n"); 805 - tx_napi_del(txq); 817 + netif_napi_del(&txq->napi); 806 818 return err; 807 819 } 808 820 ··· 814 826 struct hinic_sq *sq = txq->sq; 815 827 816 828 free_irq(sq->irq, txq); 817 - tx_napi_del(txq); 829 + netif_napi_del(&txq->napi); 818 830 } 819 831 820 832 /**
+4 -2
drivers/net/ethernet/ibm/ibmvnic.c
··· 2032 2032 2033 2033 } else { 2034 2034 rc = reset_tx_pools(adapter); 2035 - if (rc) 2035 + if (rc) { 2036 2036 netdev_dbg(adapter->netdev, "reset tx pools failed (%d)\n", 2037 2037 rc); 2038 2038 goto out; 2039 + } 2039 2040 2040 2041 rc = reset_rx_pools(adapter); 2041 - if (rc) 2042 + if (rc) { 2042 2043 netdev_dbg(adapter->netdev, "reset rx pools failed (%d)\n", 2043 2044 rc); 2044 2045 goto out; 2046 + } 2045 2047 } 2046 2048 ibmvnic_disable_irqs(adapter); 2047 2049 }
+16 -6
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
··· 1115 1115 static int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi) 1116 1116 { 1117 1117 struct i40e_mac_filter *f; 1118 - int num_vlans = 0, bkt; 1118 + u16 num_vlans = 0, bkt; 1119 1119 1120 1120 hash_for_each(vsi->mac_filter_hash, bkt, f, hlist) { 1121 1121 if (f->vlan >= 0 && f->vlan <= I40E_MAX_VLANID) ··· 1134 1134 * 1135 1135 * Called to get number of VLANs and VLAN list present in mac_filter_hash. 1136 1136 **/ 1137 - static void i40e_get_vlan_list_sync(struct i40e_vsi *vsi, int *num_vlans, 1138 - s16 **vlan_list) 1137 + static void i40e_get_vlan_list_sync(struct i40e_vsi *vsi, u16 *num_vlans, 1138 + s16 **vlan_list) 1139 1139 { 1140 1140 struct i40e_mac_filter *f; 1141 1141 int i = 0; ··· 1169 1169 **/ 1170 1170 static i40e_status 1171 1171 i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable, 1172 - bool unicast_enable, s16 *vl, int num_vlans) 1172 + bool unicast_enable, s16 *vl, u16 num_vlans) 1173 1173 { 1174 + i40e_status aq_ret, aq_tmp = 0; 1174 1175 struct i40e_pf *pf = vf->pf; 1175 1176 struct i40e_hw *hw = &pf->hw; 1176 - i40e_status aq_ret; 1177 1177 int i; 1178 1178 1179 1179 /* No VLAN to set promisc on, set on VSI */ ··· 1222 1222 vf->vf_id, 1223 1223 i40e_stat_str(&pf->hw, aq_ret), 1224 1224 i40e_aq_str(&pf->hw, aq_err)); 1225 + 1226 + if (!aq_tmp) 1227 + aq_tmp = aq_ret; 1225 1228 } 1226 1229 1227 1230 aq_ret = i40e_aq_set_vsi_uc_promisc_on_vlan(hw, seid, ··· 1238 1235 vf->vf_id, 1239 1236 i40e_stat_str(&pf->hw, aq_ret), 1240 1237 i40e_aq_str(&pf->hw, aq_err)); 1238 + 1239 + if (!aq_tmp) 1240 + aq_tmp = aq_ret; 1241 1241 } 1242 1242 } 1243 + 1244 + if (aq_tmp) 1245 + aq_ret = aq_tmp; 1246 + 1243 1247 return aq_ret; 1244 1248 } 1245 1249 ··· 1268 1258 i40e_status aq_ret = I40E_SUCCESS; 1269 1259 struct i40e_pf *pf = vf->pf; 1270 1260 struct i40e_vsi *vsi; 1271 - int num_vlans; 1261 + u16 num_vlans; 1272 1262 s16 *vl; 1273 1263 1274 1264 vsi = i40e_find_vsi_from_id(pf, vsi_id);
+8 -12
drivers/net/ethernet/intel/igc/igc.h
··· 299 299 #define IGC_RX_HDR_LEN IGC_RXBUFFER_256 300 300 301 301 /* Transmit and receive latency (for PTP timestamps) */ 302 - /* FIXME: These values were estimated using the ones that i225 has as 303 - * basis, they seem to provide good numbers with ptp4l/phc2sys, but we 304 - * need to confirm them. 305 - */ 306 - #define IGC_I225_TX_LATENCY_10 9542 307 - #define IGC_I225_TX_LATENCY_100 1024 308 - #define IGC_I225_TX_LATENCY_1000 178 309 - #define IGC_I225_TX_LATENCY_2500 64 310 - #define IGC_I225_RX_LATENCY_10 20662 311 - #define IGC_I225_RX_LATENCY_100 2213 312 - #define IGC_I225_RX_LATENCY_1000 448 313 - #define IGC_I225_RX_LATENCY_2500 160 302 + #define IGC_I225_TX_LATENCY_10 240 303 + #define IGC_I225_TX_LATENCY_100 58 304 + #define IGC_I225_TX_LATENCY_1000 80 305 + #define IGC_I225_TX_LATENCY_2500 1325 306 + #define IGC_I225_RX_LATENCY_10 6450 307 + #define IGC_I225_RX_LATENCY_100 185 308 + #define IGC_I225_RX_LATENCY_1000 300 309 + #define IGC_I225_RX_LATENCY_2500 1485 314 310 315 311 /* RX and TX descriptor control thresholds. 316 312 * PTHRESH - MAC will consider prefetch if it has fewer than this number of
+19
drivers/net/ethernet/intel/igc/igc_ptp.c
··· 364 364 struct sk_buff *skb = adapter->ptp_tx_skb; 365 365 struct skb_shared_hwtstamps shhwtstamps; 366 366 struct igc_hw *hw = &adapter->hw; 367 + int adjust = 0; 367 368 u64 regval; 368 369 369 370 if (WARN_ON_ONCE(!skb)) ··· 373 372 regval = rd32(IGC_TXSTMPL); 374 373 regval |= (u64)rd32(IGC_TXSTMPH) << 32; 375 374 igc_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval); 375 + 376 + switch (adapter->link_speed) { 377 + case SPEED_10: 378 + adjust = IGC_I225_TX_LATENCY_10; 379 + break; 380 + case SPEED_100: 381 + adjust = IGC_I225_TX_LATENCY_100; 382 + break; 383 + case SPEED_1000: 384 + adjust = IGC_I225_TX_LATENCY_1000; 385 + break; 386 + case SPEED_2500: 387 + adjust = IGC_I225_TX_LATENCY_2500; 388 + break; 389 + } 390 + 391 + shhwtstamps.hwtstamp = 392 + ktime_add_ns(shhwtstamps.hwtstamp, adjust); 376 393 377 394 /* Clear the lock early before calling skb_tstamp_tx so that 378 395 * applications are not woken up before the lock bit is clear. We use
+13 -8
drivers/net/ethernet/lantiq_xrx200.c
··· 230 230 } 231 231 232 232 if (rx < budget) { 233 - napi_complete(&ch->napi); 234 - ltq_dma_enable_irq(&ch->dma); 233 + if (napi_complete_done(&ch->napi, rx)) 234 + ltq_dma_enable_irq(&ch->dma); 235 235 } 236 236 237 237 return rx; ··· 268 268 net_dev->stats.tx_bytes += bytes; 269 269 netdev_completed_queue(ch->priv->net_dev, pkts, bytes); 270 270 271 + if (netif_queue_stopped(net_dev)) 272 + netif_wake_queue(net_dev); 273 + 271 274 if (pkts < budget) { 272 - napi_complete(&ch->napi); 273 - ltq_dma_enable_irq(&ch->dma); 275 + if (napi_complete_done(&ch->napi, pkts)) 276 + ltq_dma_enable_irq(&ch->dma); 274 277 } 275 278 276 279 return pkts; ··· 345 342 { 346 343 struct xrx200_chan *ch = ptr; 347 344 348 - ltq_dma_disable_irq(&ch->dma); 349 - ltq_dma_ack_irq(&ch->dma); 345 + if (napi_schedule_prep(&ch->napi)) { 346 + __napi_schedule(&ch->napi); 347 + ltq_dma_disable_irq(&ch->dma); 348 + } 350 349 351 - napi_schedule(&ch->napi); 350 + ltq_dma_ack_irq(&ch->dma); 352 351 353 352 return IRQ_HANDLED; 354 353 } ··· 504 499 505 500 /* setup NAPI */ 506 501 netif_napi_add(net_dev, &priv->chan_rx.napi, xrx200_poll_rx, 32); 507 - netif_napi_add(net_dev, &priv->chan_tx.napi, xrx200_tx_housekeeping, 32); 502 + netif_tx_napi_add(net_dev, &priv->chan_tx.napi, xrx200_tx_housekeeping, 32); 508 503 509 504 platform_set_drvdata(pdev, priv); 510 505
+7 -3
drivers/net/ethernet/marvell/mvneta.c
··· 2029 2029 struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); 2030 2030 int i; 2031 2031 2032 - page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data), 2033 - sync_len, napi); 2034 2032 for (i = 0; i < sinfo->nr_frags; i++) 2035 2033 page_pool_put_full_page(rxq->page_pool, 2036 2034 skb_frag_page(&sinfo->frags[i]), napi); 2035 + page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data), 2036 + sync_len, napi); 2037 2037 } 2038 2038 2039 2039 static int ··· 2383 2383 mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf, 2384 2384 &size, page, &ps); 2385 2385 } else { 2386 - if (unlikely(!xdp_buf.data_hard_start)) 2386 + if (unlikely(!xdp_buf.data_hard_start)) { 2387 + rx_desc->buf_phys_addr = 0; 2388 + page_pool_put_full_page(rxq->page_pool, page, 2389 + true); 2387 2390 continue; 2391 + } 2388 2392 2389 2393 mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, &xdp_buf, 2390 2394 &size, page);
+1 -2
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 600 600 struct dim dim; /* Dynamic Interrupt Moderation */ 601 601 602 602 /* XDP */ 603 - struct bpf_prog *xdp_prog; 603 + struct bpf_prog __rcu *xdp_prog; 604 604 struct mlx5e_xdpsq *xdpsq; 605 605 DECLARE_BITMAP(flags, 8); 606 606 struct page_pool *page_pool; ··· 1005 1005 void mlx5e_update_carrier(struct mlx5e_priv *priv); 1006 1006 int mlx5e_close(struct net_device *netdev); 1007 1007 int mlx5e_open(struct net_device *netdev); 1008 - void mlx5e_update_ndo_stats(struct mlx5e_priv *priv); 1009 1008 1010 1009 void mlx5e_queue_update_stats(struct mlx5e_priv *priv); 1011 1010 int mlx5e_bits_invert(unsigned long a, int size);
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en/monitor_stats.c
··· 51 51 monitor_counters_work); 52 52 53 53 mutex_lock(&priv->state_lock); 54 - mlx5e_update_ndo_stats(priv); 54 + mlx5e_stats_update_ndo_stats(priv); 55 55 mutex_unlock(&priv->state_lock); 56 56 mlx5e_monitor_counter_arm(priv); 57 57 }
+2 -5
drivers/net/ethernet/mellanox/mlx5/core/en/port.c
··· 490 490 int err; 491 491 int i; 492 492 493 - if (!MLX5_CAP_GEN(dev, pcam_reg)) 494 - return -EOPNOTSUPP; 495 - 496 - if (!MLX5_CAP_PCAM_REG(dev, pplm)) 497 - return -EOPNOTSUPP; 493 + if (!MLX5_CAP_GEN(dev, pcam_reg) || !MLX5_CAP_PCAM_REG(dev, pplm)) 494 + return false; 498 495 499 496 MLX5_SET(pplm_reg, in, local_port, 1); 500 497 err = mlx5_core_access_reg(dev, in, sz, out, sz, MLX5_REG_PPLM, 0, 0);
+16 -5
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
··· 699 699 err_rule: 700 700 mlx5e_mod_hdr_detach(ct_priv->esw->dev, 701 701 &esw->offloads.mod_hdr, zone_rule->mh); 702 + mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id); 702 703 err_mod_hdr: 703 704 kfree(spec); 704 705 return err; ··· 959 958 return 0; 960 959 } 961 960 961 + void mlx5_tc_ct_match_del(struct mlx5e_priv *priv, struct mlx5_ct_attr *ct_attr) 962 + { 963 + struct mlx5_tc_ct_priv *ct_priv = mlx5_tc_ct_get_ct_priv(priv); 964 + 965 + if (!ct_priv || !ct_attr->ct_labels_id) 966 + return; 967 + 968 + mapping_remove(ct_priv->labels_mapping, ct_attr->ct_labels_id); 969 + } 970 + 962 971 int 963 - mlx5_tc_ct_parse_match(struct mlx5e_priv *priv, 964 - struct mlx5_flow_spec *spec, 965 - struct flow_cls_offload *f, 966 - struct mlx5_ct_attr *ct_attr, 967 - struct netlink_ext_ack *extack) 972 + mlx5_tc_ct_match_add(struct mlx5e_priv *priv, 973 + struct mlx5_flow_spec *spec, 974 + struct flow_cls_offload *f, 975 + struct mlx5_ct_attr *ct_attr, 976 + struct netlink_ext_ack *extack) 968 977 { 969 978 struct mlx5_tc_ct_priv *ct_priv = mlx5_tc_ct_get_ct_priv(priv); 970 979 struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+16 -10
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
··· 87 87 void 88 88 mlx5_tc_ct_clean(struct mlx5_rep_uplink_priv *uplink_priv); 89 89 90 + void 91 + mlx5_tc_ct_match_del(struct mlx5e_priv *priv, struct mlx5_ct_attr *ct_attr); 92 + 90 93 int 91 - mlx5_tc_ct_parse_match(struct mlx5e_priv *priv, 92 - struct mlx5_flow_spec *spec, 93 - struct flow_cls_offload *f, 94 - struct mlx5_ct_attr *ct_attr, 95 - struct netlink_ext_ack *extack); 94 + mlx5_tc_ct_match_add(struct mlx5e_priv *priv, 95 + struct mlx5_flow_spec *spec, 96 + struct flow_cls_offload *f, 97 + struct mlx5_ct_attr *ct_attr, 98 + struct netlink_ext_ack *extack); 96 99 int 97 100 mlx5_tc_ct_add_no_trk_match(struct mlx5e_priv *priv, 98 101 struct mlx5_flow_spec *spec); ··· 133 130 { 134 131 } 135 132 133 + static inline void 134 + mlx5_tc_ct_match_del(struct mlx5e_priv *priv, struct mlx5_ct_attr *ct_attr) {} 135 + 136 136 static inline int 137 - mlx5_tc_ct_parse_match(struct mlx5e_priv *priv, 138 - struct mlx5_flow_spec *spec, 139 - struct flow_cls_offload *f, 140 - struct mlx5_ct_attr *ct_attr, 141 - struct netlink_ext_ack *extack) 137 + mlx5_tc_ct_match_add(struct mlx5e_priv *priv, 138 + struct mlx5_flow_spec *spec, 139 + struct flow_cls_offload *f, 140 + struct mlx5_ct_attr *ct_attr, 141 + struct netlink_ext_ack *extack) 142 142 { 143 143 struct flow_rule *rule = flow_cls_offload_flow_rule(f); 144 144
+5
drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
··· 20 20 }; 21 21 22 22 /* General */ 23 + static inline bool mlx5e_skb_is_multicast(struct sk_buff *skb) 24 + { 25 + return skb->pkt_type == PACKET_MULTICAST || skb->pkt_type == PACKET_BROADCAST; 26 + } 27 + 23 28 void mlx5e_trigger_irq(struct mlx5e_icosq *sq); 24 29 void mlx5e_completion_event(struct mlx5_core_cq *mcq, struct mlx5_eqe *eqe); 25 30 void mlx5e_cq_error_event(struct mlx5_core_cq *mcq, enum mlx5_event event);
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
··· 122 122 bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di, 123 123 u32 *len, struct xdp_buff *xdp) 124 124 { 125 - struct bpf_prog *prog = READ_ONCE(rq->xdp_prog); 125 + struct bpf_prog *prog = rcu_dereference(rq->xdp_prog); 126 126 u32 act; 127 127 int err; 128 128
+2 -12
drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
··· 31 31 { 32 32 struct xdp_buff *xdp = wi->umr.dma_info[page_idx].xsk; 33 33 u32 cqe_bcnt32 = cqe_bcnt; 34 - bool consumed; 35 34 36 35 /* Check packet size. Note LRO doesn't use linear SKB */ 37 36 if (unlikely(cqe_bcnt > rq->hw_mtu)) { ··· 50 51 xsk_buff_dma_sync_for_cpu(xdp); 51 52 prefetch(xdp->data); 52 53 53 - rcu_read_lock(); 54 - consumed = mlx5e_xdp_handle(rq, NULL, &cqe_bcnt32, xdp); 55 - rcu_read_unlock(); 56 - 57 54 /* Possible flows: 58 55 * - XDP_REDIRECT to XSKMAP: 59 56 * The page is owned by the userspace from now. ··· 65 70 * allocated first from the Reuse Ring, so it has enough space. 66 71 */ 67 72 68 - if (likely(consumed)) { 73 + if (likely(mlx5e_xdp_handle(rq, NULL, &cqe_bcnt32, xdp))) { 69 74 if (likely(__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))) 70 75 __set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */ 71 76 return NULL; /* page/packet was consumed by XDP */ ··· 83 88 u32 cqe_bcnt) 84 89 { 85 90 struct xdp_buff *xdp = wi->di->xsk; 86 - bool consumed; 87 91 88 92 /* wi->offset is not used in this function, because xdp->data and the 89 93 * DMA address point directly to the necessary place. Furthermore, the ··· 101 107 return NULL; 102 108 } 103 109 104 - rcu_read_lock(); 105 - consumed = mlx5e_xdp_handle(rq, NULL, &cqe_bcnt, xdp); 106 - rcu_read_unlock(); 107 - 108 - if (likely(consumed)) 110 + if (likely(mlx5e_xdp_handle(rq, NULL, &cqe_bcnt, xdp))) 109 111 return NULL; /* page/packet was consumed by XDP */ 110 112 111 113 /* XDP_PASS: copy the data from the UMEM to a new SKB. The frame reuse
+1 -2
drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
··· 106 106 void mlx5e_close_xsk(struct mlx5e_channel *c) 107 107 { 108 108 clear_bit(MLX5E_CHANNEL_STATE_XSK, c->state); 109 - napi_synchronize(&c->napi); 110 - synchronize_rcu(); /* Sync with the XSK wakeup. */ 109 + synchronize_rcu(); /* Sync with the XSK wakeup and with NAPI. */ 111 110 112 111 mlx5e_close_rq(&c->xskrq); 113 112 mlx5e_close_cq(&c->xskrq.cq);
+22 -21
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
··· 234 234 235 235 /* Re-sync */ 236 236 /* Runs in work context */ 237 - static struct mlx5_wqe_ctrl_seg * 237 + static int 238 238 resync_post_get_progress_params(struct mlx5e_icosq *sq, 239 239 struct mlx5e_ktls_offload_context_rx *priv_rx) 240 240 { ··· 258 258 PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE); 259 259 if (unlikely(dma_mapping_error(pdev, buf->dma_addr))) { 260 260 err = -ENOMEM; 261 - goto err_out; 261 + goto err_free; 262 262 } 263 263 264 264 buf->priv_rx = priv_rx; 265 265 266 266 BUILD_BUG_ON(MLX5E_KTLS_GET_PROGRESS_WQEBBS != 1); 267 + 268 + spin_lock(&sq->channel->async_icosq_lock); 269 + 267 270 if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, 1))) { 271 + spin_unlock(&sq->channel->async_icosq_lock); 268 272 err = -ENOSPC; 269 - goto err_out; 273 + goto err_dma_unmap; 270 274 } 271 275 272 276 pi = mlx5e_icosq_get_next_pi(sq, 1); ··· 298 294 }; 299 295 icosq_fill_wi(sq, pi, &wi); 300 296 sq->pc++; 297 + mlx5e_notify_hw(&sq->wq, sq->pc, sq->uar_map, cseg); 298 + spin_unlock(&sq->channel->async_icosq_lock); 301 299 302 - return cseg; 300 + return 0; 303 301 302 + err_dma_unmap: 303 + dma_unmap_single(pdev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE); 304 + err_free: 305 + kfree(buf); 304 306 err_out: 305 307 priv_rx->stats->tls_resync_req_skip++; 306 - return ERR_PTR(err); 308 + return err; 307 309 } 308 310 309 311 /* Function is called with elevated refcount. 
··· 319 309 { 320 310 struct mlx5e_ktls_offload_context_rx *priv_rx; 321 311 struct mlx5e_ktls_rx_resync_ctx *resync; 322 - struct mlx5_wqe_ctrl_seg *cseg; 323 312 struct mlx5e_channel *c; 324 313 struct mlx5e_icosq *sq; 325 - struct mlx5_wq_cyc *wq; 326 314 327 315 resync = container_of(work, struct mlx5e_ktls_rx_resync_ctx, work); 328 316 priv_rx = container_of(resync, struct mlx5e_ktls_offload_context_rx, resync); ··· 332 324 333 325 c = resync->priv->channels.c[priv_rx->rxq]; 334 326 sq = &c->async_icosq; 335 - wq = &sq->wq; 336 327 337 - spin_lock(&c->async_icosq_lock); 338 - 339 - cseg = resync_post_get_progress_params(sq, priv_rx); 340 - if (IS_ERR(cseg)) { 328 + if (resync_post_get_progress_params(sq, priv_rx)) 341 329 refcount_dec(&resync->refcnt); 342 - goto unlock; 343 - } 344 - mlx5e_notify_hw(wq, sq->pc, sq->uar_map, cseg); 345 - unlock: 346 - spin_unlock(&c->async_icosq_lock); 347 330 } 348 331 349 332 static void resync_init(struct mlx5e_ktls_rx_resync_ctx *resync, ··· 385 386 struct mlx5e_ktls_offload_context_rx *priv_rx; 386 387 struct mlx5e_ktls_rx_resync_ctx *resync; 387 388 u8 tracker_state, auth_state, *ctx; 389 + struct device *dev; 388 390 u32 hw_seq; 389 391 390 392 priv_rx = buf->priv_rx; 391 393 resync = &priv_rx->resync; 392 - 394 + dev = resync->priv->mdev->device; 393 395 if (unlikely(test_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags))) 394 396 goto out; 395 397 396 - dma_sync_single_for_cpu(resync->priv->mdev->device, buf->dma_addr, 397 - PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE); 398 + dma_sync_single_for_cpu(dev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE, 399 + DMA_FROM_DEVICE); 398 400 399 401 ctx = buf->progress.ctx; 400 402 tracker_state = MLX5_GET(tls_progress_params, ctx, record_tracker_state); ··· 411 411 priv_rx->stats->tls_resync_req_end++; 412 412 out: 413 413 refcount_dec(&resync->refcnt); 414 + dma_unmap_single(dev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE); 414 415 kfree(buf); 415 416 } 416 417 
··· 660 659 priv_rx = mlx5e_get_ktls_rx_priv_ctx(tls_ctx); 661 660 set_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags); 662 661 mlx5e_set_ktls_rx_priv_ctx(tls_ctx, NULL); 663 - napi_synchronize(&priv->channels.c[priv_rx->rxq]->napi); 662 + synchronize_rcu(); /* Sync with NAPI */ 664 663 if (!cancel_work_sync(&priv_rx->rule.work)) 665 664 /* completion is needed, as the priv_rx in the add flow 666 665 * is maintained on the wqe info (wi), not on the socket.
+8 -4
drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_stats.c
··· 35 35 #include <net/sock.h> 36 36 37 37 #include "en.h" 38 - #include "accel/tls.h" 39 38 #include "fpga/sdk.h" 40 39 #include "en_accel/tls.h" 41 40 ··· 50 51 51 52 #define NUM_TLS_SW_COUNTERS ARRAY_SIZE(mlx5e_tls_sw_stats_desc) 52 53 54 + static bool is_tls_atomic_stats(struct mlx5e_priv *priv) 55 + { 56 + return priv->tls && !mlx5_accel_is_ktls_device(priv->mdev); 57 + } 58 + 53 59 int mlx5e_tls_get_count(struct mlx5e_priv *priv) 54 60 { 55 - if (!priv->tls) 61 + if (!is_tls_atomic_stats(priv)) 56 62 return 0; 57 63 58 64 return NUM_TLS_SW_COUNTERS; ··· 67 63 { 68 64 unsigned int i, idx = 0; 69 65 70 - if (!priv->tls) 66 + if (!is_tls_atomic_stats(priv)) 71 67 return 0; 72 68 73 69 for (i = 0; i < NUM_TLS_SW_COUNTERS; i++) ··· 81 77 { 82 78 int i, idx = 0; 83 79 84 - if (!priv->tls) 80 + if (!is_tls_atomic_stats(priv)) 85 81 return 0; 86 82 87 83 for (i = 0; i < NUM_TLS_SW_COUNTERS; i++)
+31 -54
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 158 158 mutex_unlock(&priv->state_lock); 159 159 } 160 160 161 - void mlx5e_update_ndo_stats(struct mlx5e_priv *priv) 162 - { 163 - int i; 164 - 165 - for (i = mlx5e_nic_stats_grps_num(priv) - 1; i >= 0; i--) 166 - if (mlx5e_nic_stats_grps[i]->update_stats_mask & 167 - MLX5E_NDO_UPDATE_STATS) 168 - mlx5e_nic_stats_grps[i]->update_stats(priv); 169 - } 170 - 171 161 static void mlx5e_update_stats_work(struct work_struct *work) 172 162 { 173 163 struct mlx5e_priv *priv = container_of(work, struct mlx5e_priv, ··· 389 399 390 400 if (params->xdp_prog) 391 401 bpf_prog_inc(params->xdp_prog); 392 - rq->xdp_prog = params->xdp_prog; 402 + RCU_INIT_POINTER(rq->xdp_prog, params->xdp_prog); 393 403 394 404 rq_xdp_ix = rq->ix; 395 405 if (xsk) ··· 398 408 if (err < 0) 399 409 goto err_rq_wq_destroy; 400 410 401 - rq->buff.map_dir = rq->xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE; 411 + rq->buff.map_dir = params->xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE; 402 412 rq->buff.headroom = mlx5e_get_rq_headroom(mdev, params, xsk); 403 413 pool_size = 1 << params->log_rq_mtu_frames; 404 414 ··· 554 564 } 555 565 556 566 err_rq_wq_destroy: 557 - if (rq->xdp_prog) 558 - bpf_prog_put(rq->xdp_prog); 567 + if (params->xdp_prog) 568 + bpf_prog_put(params->xdp_prog); 559 569 xdp_rxq_info_unreg(&rq->xdp_rxq); 560 570 page_pool_destroy(rq->page_pool); 561 571 mlx5_wq_destroy(&rq->wq_ctrl); ··· 565 575 566 576 static void mlx5e_free_rq(struct mlx5e_rq *rq) 567 577 { 578 + struct mlx5e_channel *c = rq->channel; 579 + struct bpf_prog *old_prog = NULL; 568 580 int i; 569 581 570 - if (rq->xdp_prog) 571 - bpf_prog_put(rq->xdp_prog); 582 + /* drop_rq has neither channel nor xdp_prog. 
*/ 583 + if (c) 584 + old_prog = rcu_dereference_protected(rq->xdp_prog, 585 + lockdep_is_held(&c->priv->state_lock)); 586 + if (old_prog) 587 + bpf_prog_put(old_prog); 572 588 573 589 switch (rq->wq_type) { 574 590 case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ: ··· 863 867 void mlx5e_deactivate_rq(struct mlx5e_rq *rq) 864 868 { 865 869 clear_bit(MLX5E_RQ_STATE_ENABLED, &rq->state); 866 - napi_synchronize(&rq->channel->napi); /* prevent mlx5e_post_rx_wqes */ 870 + synchronize_rcu(); /* Sync with NAPI to prevent mlx5e_post_rx_wqes. */ 867 871 } 868 872 869 873 void mlx5e_close_rq(struct mlx5e_rq *rq) ··· 1308 1312 1309 1313 static void mlx5e_deactivate_txqsq(struct mlx5e_txqsq *sq) 1310 1314 { 1311 - struct mlx5e_channel *c = sq->channel; 1312 1315 struct mlx5_wq_cyc *wq = &sq->wq; 1313 1316 1314 1317 clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state); 1315 - /* prevent netif_tx_wake_queue */ 1316 - napi_synchronize(&c->napi); 1318 + synchronize_rcu(); /* Sync with NAPI to prevent netif_tx_wake_queue. */ 1317 1319 1318 1320 mlx5e_tx_disable_queue(sq->txq); 1319 1321 ··· 1386 1392 1387 1393 void mlx5e_deactivate_icosq(struct mlx5e_icosq *icosq) 1388 1394 { 1389 - struct mlx5e_channel *c = icosq->channel; 1390 - 1391 1395 clear_bit(MLX5E_SQ_STATE_ENABLED, &icosq->state); 1392 - napi_synchronize(&c->napi); 1396 + synchronize_rcu(); /* Sync with NAPI. */ 1393 1397 } 1394 1398 1395 1399 void mlx5e_close_icosq(struct mlx5e_icosq *sq) ··· 1466 1474 struct mlx5e_channel *c = sq->channel; 1467 1475 1468 1476 clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state); 1469 - napi_synchronize(&c->napi); 1477 + synchronize_rcu(); /* Sync with NAPI. 
*/ 1470 1478 1471 1479 mlx5e_destroy_sq(c->mdev, sq->sqn); 1472 1480 mlx5e_free_xdpsq_descs(sq); ··· 3559 3567 3560 3568 s->rx_packets += rq_stats->packets + xskrq_stats->packets; 3561 3569 s->rx_bytes += rq_stats->bytes + xskrq_stats->bytes; 3570 + s->multicast += rq_stats->mcast_packets + xskrq_stats->mcast_packets; 3562 3571 3563 3572 for (j = 0; j < priv->max_opened_tc; j++) { 3564 3573 struct mlx5e_sq_stats *sq_stats = &channel_stats->sq[j]; ··· 3575 3582 mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats) 3576 3583 { 3577 3584 struct mlx5e_priv *priv = netdev_priv(dev); 3578 - struct mlx5e_vport_stats *vstats = &priv->stats.vport; 3579 3585 struct mlx5e_pport_stats *pstats = &priv->stats.pport; 3580 3586 3581 3587 /* In switchdev mode, monitor counters doesn't monitor ··· 3609 3617 stats->rx_errors = stats->rx_length_errors + stats->rx_crc_errors + 3610 3618 stats->rx_frame_errors; 3611 3619 stats->tx_errors = stats->tx_aborted_errors + stats->tx_carrier_errors; 3612 - 3613 - /* vport multicast also counts packets that are dropped due to steering 3614 - * or rx out of buffer 3615 - */ 3616 - stats->multicast = 3617 - VPORT_COUNTER_GET(vstats, received_eth_multicast.packets); 3618 3620 } 3619 3621 3620 3622 static void mlx5e_set_rx_mode(struct net_device *dev) ··· 4316 4330 return 0; 4317 4331 } 4318 4332 4333 + static void mlx5e_rq_replace_xdp_prog(struct mlx5e_rq *rq, struct bpf_prog *prog) 4334 + { 4335 + struct bpf_prog *old_prog; 4336 + 4337 + old_prog = rcu_replace_pointer(rq->xdp_prog, prog, 4338 + lockdep_is_held(&rq->channel->priv->state_lock)); 4339 + if (old_prog) 4340 + bpf_prog_put(old_prog); 4341 + } 4342 + 4319 4343 static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog) 4320 4344 { 4321 4345 struct mlx5e_priv *priv = netdev_priv(netdev); ··· 4384 4388 */ 4385 4389 for (i = 0; i < priv->channels.num; i++) { 4386 4390 struct mlx5e_channel *c = priv->channels.c[i]; 4387 - bool xsk_open = 
test_bit(MLX5E_CHANNEL_STATE_XSK, c->state); 4388 4391 4389 - clear_bit(MLX5E_RQ_STATE_ENABLED, &c->rq.state); 4390 - if (xsk_open) 4391 - clear_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state); 4392 - napi_synchronize(&c->napi); 4393 - /* prevent mlx5e_poll_rx_cq from accessing rq->xdp_prog */ 4394 - 4395 - old_prog = xchg(&c->rq.xdp_prog, prog); 4396 - if (old_prog) 4397 - bpf_prog_put(old_prog); 4398 - 4399 - if (xsk_open) { 4400 - old_prog = xchg(&c->xskrq.xdp_prog, prog); 4401 - if (old_prog) 4402 - bpf_prog_put(old_prog); 4403 - } 4404 - 4405 - set_bit(MLX5E_RQ_STATE_ENABLED, &c->rq.state); 4406 - if (xsk_open) 4407 - set_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state); 4408 - /* napi_schedule in case we have missed anything */ 4409 - napi_schedule(&c->napi); 4392 + mlx5e_rq_replace_xdp_prog(&c->rq, prog); 4393 + if (test_bit(MLX5E_CHANNEL_STATE_XSK, c->state)) 4394 + mlx5e_rq_replace_xdp_prog(&c->xskrq, prog); 4410 4395 } 4411 4396 4412 4397 unlock: ··· 5177 5200 .enable = mlx5e_nic_enable, 5178 5201 .disable = mlx5e_nic_disable, 5179 5202 .update_rx = mlx5e_update_nic_rx, 5180 - .update_stats = mlx5e_update_ndo_stats, 5203 + .update_stats = mlx5e_stats_update_ndo_stats, 5181 5204 .update_carrier = mlx5e_update_carrier, 5182 5205 .rx_handlers = &mlx5e_rx_handlers_nic, 5183 5206 .max_tc = MLX5E_MAX_NUM_TC,
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 1171 1171 .cleanup_tx = mlx5e_cleanup_rep_tx, 1172 1172 .enable = mlx5e_rep_enable, 1173 1173 .update_rx = mlx5e_update_rep_rx, 1174 - .update_stats = mlx5e_update_ndo_stats, 1174 + .update_stats = mlx5e_stats_update_ndo_stats, 1175 1175 .rx_handlers = &mlx5e_rx_handlers_rep, 1176 1176 .max_tc = 1, 1177 1177 .rq_groups = MLX5E_NUM_RQ_GROUPS(REGULAR), ··· 1189 1189 .enable = mlx5e_uplink_rep_enable, 1190 1190 .disable = mlx5e_uplink_rep_disable, 1191 1191 .update_rx = mlx5e_update_rep_rx, 1192 - .update_stats = mlx5e_update_ndo_stats, 1192 + .update_stats = mlx5e_stats_update_ndo_stats, 1193 1193 .update_carrier = mlx5e_update_carrier, 1194 1194 .rx_handlers = &mlx5e_rx_handlers_rep, 1195 1195 .max_tc = MLX5E_MAX_NUM_TC,
+6 -10
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
···
  #include "en/xsk/rx.h"
  #include "en/health.h"
  #include "en/params.h"
+ #include "en/txrx.h"

  static struct sk_buff *
  mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
···
      mlx5e_enable_ecn(rq, skb);

      skb->protocol = eth_type_trans(skb, netdev);
+
+     if (unlikely(mlx5e_skb_is_multicast(skb)))
+         stats->mcast_packets++;
  }

  static inline void mlx5e_complete_rx_cqe(struct mlx5e_rq *rq,
···
      struct xdp_buff xdp;
      struct sk_buff *skb;
      void *va, *data;
-     bool consumed;
      u32 frag_size;

      va = page_address(di->page) + wi->offset;
···
      prefetchw(va); /* xdp_frame data area */
      prefetch(data);

-     rcu_read_lock();
      mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
-     consumed = mlx5e_xdp_handle(rq, di, &cqe_bcnt, &xdp);
-     rcu_read_unlock();
-     if (consumed)
+     if (mlx5e_xdp_handle(rq, di, &cqe_bcnt, &xdp))
          return NULL; /* page/packet was consumed by XDP */

      rx_headroom = xdp.data - xdp.data_hard_start;
···
      struct sk_buff *skb;
      void *va, *data;
      u32 frag_size;
-     bool consumed;

      /* Check packet size. Note LRO doesn't use linear SKB */
      if (unlikely(cqe_bcnt > rq->hw_mtu)) {
···
      prefetchw(va); /* xdp_frame data area */
      prefetch(data);

-     rcu_read_lock();
      mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt32, &xdp);
-     consumed = mlx5e_xdp_handle(rq, di, &cqe_bcnt32, &xdp);
-     rcu_read_unlock();
-     if (consumed) {
+     if (mlx5e_xdp_handle(rq, di, &cqe_bcnt32, &xdp)) {
          if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
              __set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
          return NULL; /* page/packet was consumed by XDP */
+12
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
···
      return total;
  }

+ void mlx5e_stats_update_ndo_stats(struct mlx5e_priv *priv)
+ {
+     mlx5e_stats_grp_t *stats_grps = priv->profile->stats_grps;
+     const unsigned int num_stats_grps = stats_grps_num(priv);
+     int i;
+
+     for (i = num_stats_grps - 1; i >= 0; i--)
+         if (stats_grps[i]->update_stats &&
+             stats_grps[i]->update_stats_mask & MLX5E_NDO_UPDATE_STATS)
+             stats_grps[i]->update_stats(priv);
+ }
+
  void mlx5e_stats_update(struct mlx5e_priv *priv)
  {
      mlx5e_stats_grp_t *stats_grps = priv->profile->stats_grps;
+3
drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
···
  void mlx5e_stats_update(struct mlx5e_priv *priv);
  void mlx5e_stats_fill(struct mlx5e_priv *priv, u64 *data, int idx);
  void mlx5e_stats_fill_strings(struct mlx5e_priv *priv, u8 *data);
+ void mlx5e_stats_update_ndo_stats(struct mlx5e_priv *priv);

  /* Concrete NIC Stats */
···
      u64 tx_nop;
      u64 rx_lro_packets;
      u64 rx_lro_bytes;
+     u64 rx_mcast_packets;
      u64 rx_ecn_mark;
      u64 rx_removed_vlan_packets;
      u64 rx_csum_unnecessary;
···
      u64 csum_none;
      u64 lro_packets;
      u64 lro_bytes;
+     u64 mcast_packets;
      u64 ecn_mark;
      u64 removed_vlan_packets;
      u64 xdp_drop;
+26 -19
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
···
      mlx5e_put_flow_tunnel_id(flow);

-     if (flow_flag_test(flow, NOT_READY)) {
+     if (flow_flag_test(flow, NOT_READY))
          remove_unready_flow(flow);
-         kvfree(attr->parse_attr);
-         return;
-     }

      if (mlx5e_is_offloaded_flow(flow)) {
          if (flow_flag_test(flow, SLOW))
···
              kfree(attr->parse_attr->tun_info[out_index]);
      }
      kvfree(attr->parse_attr);
+
+     mlx5_tc_ct_match_del(priv, &flow->esw_attr->ct_attr);

      if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR)
          mlx5e_detach_mod_hdr(priv, flow);
···
      OFFLOAD(UDP_DPORT, 16, U16_MAX, udp.dest, 0, udp_dport),
  };

+ static unsigned long mask_to_le(unsigned long mask, int size)
+ {
+     __be32 mask_be32;
+     __be16 mask_be16;
+
+     if (size == 32) {
+         mask_be32 = (__force __be32)(mask);
+         mask = (__force unsigned long)cpu_to_le32(be32_to_cpu(mask_be32));
+     } else if (size == 16) {
+         mask_be32 = (__force __be32)(mask);
+         mask_be16 = *(__be16 *)&mask_be32;
+         mask = (__force unsigned long)cpu_to_le16(be16_to_cpu(mask_be16));
+     }
+
+     return mask;
+ }
  static int offload_pedit_fields(struct mlx5e_priv *priv,
                                  int namespace,
                                  struct pedit_headers_action *hdrs,
···
      u32 *s_masks_p, *a_masks_p, s_mask, a_mask;
      struct mlx5e_tc_mod_hdr_acts *mod_acts;
      struct mlx5_fields *f;
-     unsigned long mask;
-     __be32 mask_be32;
-     __be16 mask_be16;
+     unsigned long mask, field_mask;
      int err;
      u8 cmd;
···
          if (skip)
              continue;

-         if (f->field_bsize == 32) {
-             mask_be32 = (__force __be32)(mask);
-             mask = (__force unsigned long)cpu_to_le32(be32_to_cpu(mask_be32));
-         } else if (f->field_bsize == 16) {
-             mask_be32 = (__force __be32)(mask);
-             mask_be16 = *(__be16 *)&mask_be32;
-             mask = (__force unsigned long)cpu_to_le16(be16_to_cpu(mask_be16));
-         }
+         mask = mask_to_le(mask, f->field_bsize);

          first = find_first_bit(&mask, f->field_bsize);
          next_z = find_next_zero_bit(&mask, f->field_bsize, first);
···
          if (cmd == MLX5_ACTION_TYPE_SET) {
              int start;

+             field_mask = mask_to_le(f->field_mask, f->field_bsize);
+
              /* if field is bit sized it can start not from first bit */
-             start = find_first_bit((unsigned long *)&f->field_mask,
-                                    f->field_bsize);
+             start = find_first_bit(&field_mask, f->field_bsize);

              MLX5_SET(set_action_in, action, offset, first - start);
              /* length is num of bits to be written, zero means length of 32 */
···
          goto err_free;

      /* actions validation depends on parsing the ct matches first */
-     err = mlx5_tc_ct_parse_match(priv, &parse_attr->spec, f,
-                                  &flow->esw_attr->ct_attr, extack);
+     err = mlx5_tc_ct_match_add(priv, &parse_attr->spec, f,
+                                &flow->esw_attr->ct_attr, extack);
      if (err)
          goto err_free;
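The `mask_to_le()` helper factored out above converts a big-endian field mask to little-endian before bit scanning, because `find_first_bit()` walks an `unsigned long` from bit 0 upward. A rough userspace sketch of the same conversion; the byte-swap helpers and the function name are ours, and it assumes a little-endian host, where the kernel's `cpu_to_le*()` is a no-op:

```c
#include <stdint.h>

/* Stand-ins for be32_to_cpu()/be16_to_cpu() on a little-endian host. */
static uint32_t swap32(uint32_t v)
{
    return ((v & 0x000000ffu) << 24) | ((v & 0x0000ff00u) << 8) |
           ((v & 0x00ff0000u) >> 8) | ((v & 0xff000000u) >> 24);
}

static uint16_t swap16(uint16_t v)
{
    return (uint16_t)((v << 8) | (v >> 8));
}

/* Sketch of mask_to_le(): bring a network-order mask into host bit
 * order so a linear bit scan finds the field's first set bit. */
unsigned long mask_to_le_sketch(unsigned long mask, int size)
{
    if (size == 32)
        mask = swap32((uint32_t)mask);
    else if (size == 16)
        mask = swap16((uint16_t)mask);
    return mask;
}
```

The bug being fixed is visible in the last hunk: the old code scanned `f->field_mask` without this conversion, so the computed start bit was wrong for big-endian field masks; the fix runs the same conversion on both the packet mask and the field mask.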
+13 -4
drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
···
      struct mlx5e_xdpsq *xsksq = &c->xsksq;
      struct mlx5e_rq *xskrq = &c->xskrq;
      struct mlx5e_rq *rq = &c->rq;
-     bool xsk_open = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
      bool aff_change = false;
      bool busy_xsk = false;
      bool busy = false;
      int work_done = 0;
+     bool xsk_open;
      int i;
+
+     rcu_read_lock();
+
+     xsk_open = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state);

      ch_stats->poll++;
···
      busy |= busy_xsk;

      if (busy) {
-         if (likely(mlx5e_channel_no_affinity_change(c)))
-             return budget;
+         if (likely(mlx5e_channel_no_affinity_change(c))) {
+             work_done = budget;
+             goto out;
+         }
          ch_stats->aff_change++;
          aff_change = true;
          if (budget && work_done == budget)
···
      }

      if (unlikely(!napi_complete_done(napi, work_done)))
-         return work_done;
+         goto out;

      ch_stats->arm++;
···
          mlx5e_trigger_irq(&c->icosq);
          ch_stats->force_irq++;
      }
+
+ out:
+     rcu_read_unlock();

      return work_done;
  }
+30 -26
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
···
      }
      esw->fdb_table.offloads.send_to_vport_grp = g;

-     /* create peer esw miss group */
-     memset(flow_group_in, 0, inlen);
+     if (MLX5_CAP_ESW(esw->dev, merged_eswitch)) {
+         /* create peer esw miss group */
+         memset(flow_group_in, 0, inlen);

-     esw_set_flow_group_source_port(esw, flow_group_in);
+         esw_set_flow_group_source_port(esw, flow_group_in);

-     if (!mlx5_eswitch_vport_match_metadata_enabled(esw)) {
-         match_criteria = MLX5_ADDR_OF(create_flow_group_in,
-                                       flow_group_in,
-                                       match_criteria);
+         if (!mlx5_eswitch_vport_match_metadata_enabled(esw)) {
+             match_criteria = MLX5_ADDR_OF(create_flow_group_in,
+                                           flow_group_in,
+                                           match_criteria);

-         MLX5_SET_TO_ONES(fte_match_param, match_criteria,
-                          misc_parameters.source_eswitch_owner_vhca_id);
+             MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+                              misc_parameters.source_eswitch_owner_vhca_id);

-         MLX5_SET(create_flow_group_in, flow_group_in,
-                  source_eswitch_owner_vhca_id_valid, 1);
+             MLX5_SET(create_flow_group_in, flow_group_in,
+                      source_eswitch_owner_vhca_id_valid, 1);
+         }
+
+         MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix);
+         MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index,
+                  ix + esw->total_vports - 1);
+         ix += esw->total_vports;
+
+         g = mlx5_create_flow_group(fdb, flow_group_in);
+         if (IS_ERR(g)) {
+             err = PTR_ERR(g);
+             esw_warn(dev, "Failed to create peer miss flow group err(%d)\n", err);
+             goto peer_miss_err;
+         }
+         esw->fdb_table.offloads.peer_miss_grp = g;
      }
-
-     MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix);
-     MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index,
-              ix + esw->total_vports - 1);
-     ix += esw->total_vports;
-
-     g = mlx5_create_flow_group(fdb, flow_group_in);
-     if (IS_ERR(g)) {
-         err = PTR_ERR(g);
-         esw_warn(dev, "Failed to create peer miss flow group err(%d)\n", err);
-         goto peer_miss_err;
-     }
-     esw->fdb_table.offloads.peer_miss_grp = g;

      /* create miss group */
      memset(flow_group_in, 0, inlen);
···
  miss_rule_err:
      mlx5_destroy_flow_group(esw->fdb_table.offloads.miss_grp);
  miss_err:
-     mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
+     if (MLX5_CAP_ESW(esw->dev, merged_eswitch))
+         mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
  peer_miss_err:
      mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp);
  send_vport_err:
···
      mlx5_del_flow_rules(esw->fdb_table.offloads.miss_rule_multi);
      mlx5_del_flow_rules(esw->fdb_table.offloads.miss_rule_uni);
      mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp);
-     mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
+     if (MLX5_CAP_ESW(esw->dev, merged_eswitch))
+         mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
      mlx5_destroy_flow_group(esw->fdb_table.offloads.miss_grp);

      mlx5_esw_chains_destroy(esw);
+4 -4
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
···
      fte->action = *flow_act;
      fte->flow_context = spec->flow_context;

-     tree_init_node(&fte->node, NULL, del_sw_fte);
+     tree_init_node(&fte->node, del_hw_fte, del_sw_fte);

      return fte;
  }
···
          up_write_ref_node(&g->node, false);
          rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte);
          up_write_ref_node(&fte->node, false);
-         tree_put_node(&fte->node, false);
          return rule;
      }
      rule = ERR_PTR(-ENOENT);
···
      up_write_ref_node(&g->node, false);
      rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte);
      up_write_ref_node(&fte->node, false);
-     tree_put_node(&fte->node, false);
      tree_put_node(&g->node, false);
      return rule;
···
          up_write_ref_node(&fte->node, false);
      } else {
          del_hw_fte(&fte->node);
-         up_write(&fte->node.lock);
+         /* Avoid double call to del_hw_fte */
+         fte->node.del_hw_func = NULL;
+         up_write_ref_node(&fte->node, false);
          tree_put_node(&fte->node, false);
      }
      kfree(handle);
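The fs_core change above clears `node.del_hw_func` after calling `del_hw_fte()` by hand, so the generic teardown path cannot invoke the hardware delete a second time. A minimal sketch of that pattern; all type and function names here are illustrative stand-ins, not the mlx5 ones:

```c
#include <stddef.h>

/* A node carries an optional hardware-delete hook, mirroring the
 * del_hw_func callback installed by tree_init_node() above. */
struct fte_node {
    void (*del_hw_func)(struct fte_node *);
    int hw_del_count; /* counts hardware deletes, to show exactly one runs */
};

static void del_hw(struct fte_node *n)
{
    n->hw_del_count++;
}

/* Generic teardown: runs the hook only if it is still installed. */
static void teardown(struct fte_node *n)
{
    if (n->del_hw_func)
        n->del_hw_func(n);
}

void delete_rule(struct fte_node *n)
{
    del_hw(n);             /* explicit early hardware delete */
    n->del_hw_func = NULL; /* avoid double call from teardown */
    teardown(n);
}
```

Clearing the pointer is cheaper than adding state flags: the teardown code needs no knowledge of whether an early delete happened.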
+15 -9
drivers/net/ethernet/mscc/ocelot.c
···

      if (ocelot->ptp && shinfo->tx_flags & SKBTX_HW_TSTAMP &&
          ocelot_port->ptp_cmd == IFH_REW_OP_TWO_STEP_PTP) {
+         spin_lock(&ocelot_port->ts_id_lock);
+
          shinfo->tx_flags |= SKBTX_IN_PROGRESS;
          /* Store timestamp ID in cb[0] of sk_buff */
-         skb->cb[0] = ocelot_port->ts_id % 4;
+         skb->cb[0] = ocelot_port->ts_id;
+         ocelot_port->ts_id = (ocelot_port->ts_id + 1) % 4;
          skb_queue_tail(&ocelot_port->tx_skbs, skb);
+
+         spin_unlock(&ocelot_port->ts_id_lock);
          return 0;
      }
      return -ENODATA;
···
      struct ocelot_port *ocelot_port = ocelot->ports[port];

      skb_queue_head_init(&ocelot_port->tx_skbs);
+     spin_lock_init(&ocelot_port->ts_id_lock);

      /* Basic L2 initialization */
···
  void ocelot_deinit(struct ocelot *ocelot)
  {
-     struct ocelot_port *port;
-     int i;
-
      cancel_delayed_work(&ocelot->stats_work);
      destroy_workqueue(ocelot->stats_queue);
      mutex_destroy(&ocelot->stats_lock);
-
-     for (i = 0; i < ocelot->num_phys_ports; i++) {
-         port = ocelot->ports[i];
-         skb_queue_purge(&port->tx_skbs);
-     }
  }
  EXPORT_SYMBOL(ocelot_deinit);
+
+ void ocelot_deinit_port(struct ocelot *ocelot, int port)
+ {
+     struct ocelot_port *ocelot_port = ocelot->ports[port];
+
+     skb_queue_purge(&ocelot_port->tx_skbs);
+ }
+ EXPORT_SYMBOL(ocelot_deinit_port);

  MODULE_LICENSE("Dual MIT/GPL");
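The ocelot fix above serializes allocation of the 2-bit PTP timestamp ID under a spinlock, so the ID stored in the queued skb and the counter advance happen as one step and concurrent transmits can no longer observe the same ID. In a userspace sketch, an atomic counter gives the same reserve-then-advance semantics (the kernel patch uses a spinlock because the ID is also stored into the skb under the same critical section); names here are ours:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Monotonic sequence; each caller atomically reserves one value. */
static atomic_uint ts_seq;

/* Sketch of the ts_id reservation: fetch-and-advance in one atomic
 * step, then reduce to the 2-bit ID space the hardware supports
 * (four outstanding two-step PTP timestamps). */
uint8_t ts_next_id(void)
{
    return (uint8_t)(atomic_fetch_add(&ts_seq, 1) % 4);
}
```

The old code computed `ts_id % 4` at use time and incremented `ts_id` later in the xmit path, leaving a window where two packets could be tagged with the same ID.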
+6 -6
drivers/net/ethernet/mscc/ocelot_net.c
···
      u8 grp = 0; /* Send everything on CPU group 0 */
      unsigned int i, count, last;
      int port = priv->chip_port;
+     bool do_tstamp;

      val = ocelot_read(ocelot, QS_INJ_STATUS);
      if (!(val & QS_INJ_STATUS_FIFO_RDY(BIT(grp))) ||
···
          info.vid = skb_vlan_tag_get(skb);

      /* Check if timestamping is needed */
+     do_tstamp = (ocelot_port_add_txtstamp_skb(ocelot_port, skb) == 0);
+
      if (ocelot->ptp && shinfo->tx_flags & SKBTX_HW_TSTAMP) {
          info.rew_op = ocelot_port->ptp_cmd;
          if (ocelot_port->ptp_cmd == IFH_REW_OP_TWO_STEP_PTP)
-             info.rew_op |= (ocelot_port->ts_id % 4) << 3;
+             info.rew_op |= skb->cb[0] << 3;
      }

      ocelot_gen_ifh(ifh, &info);
···
      dev->stats.tx_packets++;
      dev->stats.tx_bytes += skb->len;

-     if (!ocelot_port_add_txtstamp_skb(ocelot_port, skb)) {
-         ocelot_port->ts_id++;
-         return NETDEV_TX_OK;
-     }
+     if (!do_tstamp)
+         dev_kfree_skb_any(skb);

-     dev_kfree_skb_any(skb);
      return NETDEV_TX_OK;
  }
+145 -104
drivers/net/ethernet/mscc/ocelot_vsc7514.c
···
      [VCAP_IS2_HK_DIP_EQ_SIP] = {123, 1},
      /* IP4_TCP_UDP (TYPE=100) */
      [VCAP_IS2_HK_TCP] = {124, 1},
-     [VCAP_IS2_HK_L4_SPORT] = {125, 16},
-     [VCAP_IS2_HK_L4_DPORT] = {141, 16},
+     [VCAP_IS2_HK_L4_DPORT] = {125, 16},
+     [VCAP_IS2_HK_L4_SPORT] = {141, 16},
      [VCAP_IS2_HK_L4_RNG] = {157, 8},
      [VCAP_IS2_HK_L4_SPORT_EQ_DPORT] = {165, 1},
      [VCAP_IS2_HK_L4_SEQUENCE_EQ0] = {166, 1},
-     [VCAP_IS2_HK_L4_URG] = {167, 1},
-     [VCAP_IS2_HK_L4_ACK] = {168, 1},
-     [VCAP_IS2_HK_L4_PSH] = {169, 1},
-     [VCAP_IS2_HK_L4_RST] = {170, 1},
-     [VCAP_IS2_HK_L4_SYN] = {171, 1},
-     [VCAP_IS2_HK_L4_FIN] = {172, 1},
+     [VCAP_IS2_HK_L4_FIN] = {167, 1},
+     [VCAP_IS2_HK_L4_SYN] = {168, 1},
+     [VCAP_IS2_HK_L4_RST] = {169, 1},
+     [VCAP_IS2_HK_L4_PSH] = {170, 1},
+     [VCAP_IS2_HK_L4_ACK] = {171, 1},
+     [VCAP_IS2_HK_L4_URG] = {172, 1},
      [VCAP_IS2_HK_L4_1588_DOM] = {173, 8},
      [VCAP_IS2_HK_L4_1588_VER] = {181, 4},
      /* IP4_OTHER (TYPE=101) */
···
      .enable = ocelot_ptp_enable,
  };

+ static void mscc_ocelot_release_ports(struct ocelot *ocelot)
+ {
+     int port;
+
+     for (port = 0; port < ocelot->num_phys_ports; port++) {
+         struct ocelot_port_private *priv;
+         struct ocelot_port *ocelot_port;
+
+         ocelot_port = ocelot->ports[port];
+         if (!ocelot_port)
+             continue;
+
+         ocelot_deinit_port(ocelot, port);
+
+         priv = container_of(ocelot_port, struct ocelot_port_private,
+                             port);
+
+         unregister_netdev(priv->dev);
+         free_netdev(priv->dev);
+     }
+ }
+
+ static int mscc_ocelot_init_ports(struct platform_device *pdev,
+                                   struct device_node *ports)
+ {
+     struct ocelot *ocelot = platform_get_drvdata(pdev);
+     struct device_node *portnp;
+     int err;
+
+     ocelot->ports = devm_kcalloc(ocelot->dev, ocelot->num_phys_ports,
+                                  sizeof(struct ocelot_port *), GFP_KERNEL);
+     if (!ocelot->ports)
+         return -ENOMEM;
+
+     /* No NPI port */
+     ocelot_configure_cpu(ocelot, -1, OCELOT_TAG_PREFIX_NONE,
+                          OCELOT_TAG_PREFIX_NONE);
+
+     for_each_available_child_of_node(ports, portnp) {
+         struct ocelot_port_private *priv;
+         struct ocelot_port *ocelot_port;
+         struct device_node *phy_node;
+         phy_interface_t phy_mode;
+         struct phy_device *phy;
+         struct regmap *target;
+         struct resource *res;
+         struct phy *serdes;
+         char res_name[8];
+         u32 port;
+
+         if (of_property_read_u32(portnp, "reg", &port))
+             continue;
+
+         snprintf(res_name, sizeof(res_name), "port%d", port);
+
+         res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+                                            res_name);
+         target = ocelot_regmap_init(ocelot, res);
+         if (IS_ERR(target))
+             continue;
+
+         phy_node = of_parse_phandle(portnp, "phy-handle", 0);
+         if (!phy_node)
+             continue;
+
+         phy = of_phy_find_device(phy_node);
+         of_node_put(phy_node);
+         if (!phy)
+             continue;
+
+         err = ocelot_probe_port(ocelot, port, target, phy);
+         if (err) {
+             of_node_put(portnp);
+             return err;
+         }
+
+         ocelot_port = ocelot->ports[port];
+         priv = container_of(ocelot_port, struct ocelot_port_private,
+                             port);
+
+         of_get_phy_mode(portnp, &phy_mode);
+
+         ocelot_port->phy_mode = phy_mode;
+
+         switch (ocelot_port->phy_mode) {
+         case PHY_INTERFACE_MODE_NA:
+             continue;
+         case PHY_INTERFACE_MODE_SGMII:
+             break;
+         case PHY_INTERFACE_MODE_QSGMII:
+             /* Ensure clock signals and speed is set on all
+              * QSGMII links
+              */
+             ocelot_port_writel(ocelot_port,
+                                DEV_CLOCK_CFG_LINK_SPEED
+                                (OCELOT_SPEED_1000),
+                                DEV_CLOCK_CFG);
+             break;
+         default:
+             dev_err(ocelot->dev,
+                     "invalid phy mode for port%d, (Q)SGMII only\n",
+                     port);
+             of_node_put(portnp);
+             return -EINVAL;
+         }
+
+         serdes = devm_of_phy_get(ocelot->dev, portnp, NULL);
+         if (IS_ERR(serdes)) {
+             err = PTR_ERR(serdes);
+             if (err == -EPROBE_DEFER)
+                 dev_dbg(ocelot->dev, "deferring probe\n");
+             else
+                 dev_err(ocelot->dev,
+                         "missing SerDes phys for port%d\n",
+                         port);
+
+             of_node_put(portnp);
+             return err;
+         }
+
+         priv->serdes = serdes;
+     }
+
+     return 0;
+ }
+
  static int mscc_ocelot_probe(struct platform_device *pdev)
  {
      struct device_node *np = pdev->dev.of_node;
-     struct device_node *ports, *portnp;
      int err, irq_xtr, irq_ptp_rdy;
+     struct device_node *ports;
      struct ocelot *ocelot;
      struct regmap *hsio;
      unsigned int i;
···
      ports = of_get_child_by_name(np, "ethernet-ports");
      if (!ports) {
-         dev_err(&pdev->dev, "no ethernet-ports child node found\n");
+         dev_err(ocelot->dev, "no ethernet-ports child node found\n");
          return -ENODEV;
      }

      ocelot->num_phys_ports = of_get_child_count(ports);

-     ocelot->ports = devm_kcalloc(&pdev->dev, ocelot->num_phys_ports,
-                                  sizeof(struct ocelot_port *), GFP_KERNEL);
-
      ocelot->vcap_is2_keys = vsc7514_vcap_is2_keys;
      ocelot->vcap_is2_actions = vsc7514_vcap_is2_actions;
      ocelot->vcap = vsc7514_vcap_props;

-     ocelot_init(ocelot);
+     err = ocelot_init(ocelot);
+     if (err)
+         goto out_put_ports;
+
+     err = mscc_ocelot_init_ports(pdev, ports);
+     if (err)
+         goto out_put_ports;
+
      if (ocelot->ptp) {
          err = ocelot_init_timestamp(ocelot, &ocelot_ptp_clock_info);
          if (err) {
···
                  "Timestamp initialization failed\n");
              ocelot->ptp = 0;
          }
-     }
-
-     /* No NPI port */
-     ocelot_configure_cpu(ocelot, -1, OCELOT_TAG_PREFIX_NONE,
-                          OCELOT_TAG_PREFIX_NONE);
-
-     for_each_available_child_of_node(ports, portnp) {
-         struct ocelot_port_private *priv;
-         struct ocelot_port *ocelot_port;
-         struct device_node *phy_node;
-         phy_interface_t phy_mode;
-         struct phy_device *phy;
-         struct regmap *target;
-         struct resource *res;
-         struct phy *serdes;
-         char res_name[8];
-         u32 port;
-
-         if (of_property_read_u32(portnp, "reg", &port))
-             continue;
-
-         snprintf(res_name, sizeof(res_name), "port%d", port);
-
-         res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
-                                            res_name);
-         target = ocelot_regmap_init(ocelot, res);
-         if (IS_ERR(target))
-             continue;
-
-         phy_node = of_parse_phandle(portnp, "phy-handle", 0);
-         if (!phy_node)
-             continue;
-
-         phy = of_phy_find_device(phy_node);
-         of_node_put(phy_node);
-         if (!phy)
-             continue;
-
-         err = ocelot_probe_port(ocelot, port, target, phy);
-         if (err) {
-             of_node_put(portnp);
-             goto out_put_ports;
-         }
-
-         ocelot_port = ocelot->ports[port];
-         priv = container_of(ocelot_port, struct ocelot_port_private,
-                             port);
-
-         of_get_phy_mode(portnp, &phy_mode);
-
-         ocelot_port->phy_mode = phy_mode;
-
-         switch (ocelot_port->phy_mode) {
-         case PHY_INTERFACE_MODE_NA:
-             continue;
-         case PHY_INTERFACE_MODE_SGMII:
-             break;
-         case PHY_INTERFACE_MODE_QSGMII:
-             /* Ensure clock signals and speed is set on all
-              * QSGMII links
-              */
-             ocelot_port_writel(ocelot_port,
-                                DEV_CLOCK_CFG_LINK_SPEED
-                                (OCELOT_SPEED_1000),
-                                DEV_CLOCK_CFG);
-             break;
-         default:
-             dev_err(ocelot->dev,
-                     "invalid phy mode for port%d, (Q)SGMII only\n",
-                     port);
-             of_node_put(portnp);
-             err = -EINVAL;
-             goto out_put_ports;
-         }
-
-         serdes = devm_of_phy_get(ocelot->dev, portnp, NULL);
-         if (IS_ERR(serdes)) {
-             err = PTR_ERR(serdes);
-             if (err == -EPROBE_DEFER)
-                 dev_dbg(ocelot->dev, "deferring probe\n");
-             else
-                 dev_err(ocelot->dev,
-                         "missing SerDes phys for port%d\n",
-                         port);
-
-             of_node_put(portnp);
-             goto out_put_ports;
-         }
-
-         priv->serdes = serdes;
      }

      register_netdevice_notifier(&ocelot_netdevice_nb);
···
      struct ocelot *ocelot = platform_get_drvdata(pdev);

      ocelot_deinit_timestamp(ocelot);
+     mscc_ocelot_release_ports(ocelot);
      ocelot_deinit(ocelot);
      unregister_switchdev_blocking_notifier(&ocelot_switchdev_blocking_nb);
      unregister_switchdev_notifier(&ocelot_switchdev_nb);
+2 -2
drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
···
      struct nfp_eth_table_port *eth_port;
      struct nfp_port *port;

-     param->active_fec = ETHTOOL_FEC_NONE_BIT;
-     param->fec = ETHTOOL_FEC_NONE_BIT;
+     param->active_fec = ETHTOOL_FEC_NONE;
+     param->fec = ETHTOOL_FEC_NONE;

      port = nfp_port_from_netdev(netdev);
      eth_port = nfp_port_get_eth_port(port);
+10 -1
drivers/net/ethernet/qlogic/qed/qed_dev.c
···
          cdev->mf_bits = BIT(QED_MF_LLH_MAC_CLSS) |
                          BIT(QED_MF_LLH_PROTO_CLSS) |
                          BIT(QED_MF_LL2_NON_UNICAST) |
-                         BIT(QED_MF_INTER_PF_SWITCH);
+                         BIT(QED_MF_INTER_PF_SWITCH) |
+                         BIT(QED_MF_DISABLE_ARFS);
          break;
      case NVM_CFG1_GLOB_MF_MODE_DEFAULT:
          cdev->mf_bits = BIT(QED_MF_LLH_MAC_CLSS) |
···
          DP_INFO(p_hwfn, "Multi function mode is 0x%lx\n",
                  cdev->mf_bits);
+
+         /* In CMT the PF is unknown when the GFS block processes the
+          * packet. Therefore cannot use searcher as it has a per PF
+          * database, and thus ARFS must be disabled.
+          *
+          */
+         if (QED_IS_CMT(cdev))
+             cdev->mf_bits |= BIT(QED_MF_DISABLE_ARFS);
      }

      DP_INFO(p_hwfn, "Multi function mode is 0x%lx\n",
+3
drivers/net/ethernet/qlogic/qed/qed_l2.c
···
                    struct qed_ptt *p_ptt,
                    struct qed_arfs_config_params *p_cfg_params)
  {
+     if (test_bit(QED_MF_DISABLE_ARFS, &p_hwfn->cdev->mf_bits))
+         return;
+
      if (p_cfg_params->mode != QED_FILTER_CONFIG_MODE_DISABLE) {
          qed_gft_config(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
                         p_cfg_params->tcp,
+2
drivers/net/ethernet/qlogic/qed/qed_main.c
···
      dev_info->fw_eng = FW_ENGINEERING_VERSION;
      dev_info->b_inter_pf_switch = test_bit(QED_MF_INTER_PF_SWITCH,
                                             &cdev->mf_bits);
+     if (!test_bit(QED_MF_DISABLE_ARFS, &cdev->mf_bits))
+         dev_info->b_arfs_capable = true;
      dev_info->tx_switching = true;

      if (hw_info->b_wol_support == QED_WOL_SUPPORT_PME)
+1
drivers/net/ethernet/qlogic/qed/qed_sriov.c
···
          p_ramrod->personality = PERSONALITY_ETH;
          break;
      case QED_PCI_ETH_ROCE:
+     case QED_PCI_ETH_IWARP:
          p_ramrod->personality = PERSONALITY_RDMA_AND_ETH;
          break;
      default:
+3
drivers/net/ethernet/qlogic/qede/qede_filter.c
···
  {
      int i;

+     if (!edev->dev_info.common.b_arfs_capable)
+         return -EINVAL;
+
      edev->arfs = vzalloc(sizeof(*edev->arfs));
      if (!edev->arfs)
          return -ENOMEM;
+5 -6
drivers/net/ethernet/qlogic/qede/qede_main.c
···
                    NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
                    NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_HW_TC;

-     if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1)
+     if (edev->dev_info.common.b_arfs_capable)
          hw_features |= NETIF_F_NTUPLE;

      if (edev->dev_info.common.vxlan_enable ||
···
      qede_vlan_mark_nonconfigured(edev);
      edev->ops->fastpath_stop(edev->cdev);

-     if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1) {
+     if (edev->dev_info.common.b_arfs_capable) {
          qede_poll_for_freeing_arfs_filters(edev);
          qede_free_arfs(edev);
      }
···
      if (rc)
          goto err2;

-     if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1) {
-         rc = qede_alloc_arfs(edev);
-         if (rc)
-             DP_NOTICE(edev, "aRFS memory allocation failed\n");
+     if (qede_alloc_arfs(edev)) {
+         edev->ndev->features &= ~NETIF_F_NTUPLE;
+         edev->dev_info.common.b_arfs_capable = false;
      }

      qede_napi_add_enable(edev);
+1
drivers/net/ethernet/sfc/ef100.c
···
      if (fcw.offset > pci_resource_len(efx->pci_dev, fcw.bar) - ESE_GZ_FCW_LEN) {
          netif_err(efx, probe, efx->net_dev,
                    "Func control window overruns BAR\n");
+         rc = -EIO;
          goto fail;
      }
+53
drivers/net/ethernet/ti/cpsw_new.c
···
  #include <linux/phy.h>
  #include <linux/phy/phy.h>
  #include <linux/delay.h>
+ #include <linux/pinctrl/consumer.h>
  #include <linux/pm_runtime.h>
  #include <linux/gpio/consumer.h>
  #include <linux/of.h>
···
      return 0;
  }

+ static int __maybe_unused cpsw_suspend(struct device *dev)
+ {
+     struct cpsw_common *cpsw = dev_get_drvdata(dev);
+     int i;
+
+     rtnl_lock();
+
+     for (i = 0; i < cpsw->data.slaves; i++) {
+         struct net_device *ndev = cpsw->slaves[i].ndev;
+
+         if (!(ndev && netif_running(ndev)))
+             continue;
+
+         cpsw_ndo_stop(ndev);
+     }
+
+     rtnl_unlock();
+
+     /* Select sleep pin state */
+     pinctrl_pm_select_sleep_state(dev);
+
+     return 0;
+ }
+
+ static int __maybe_unused cpsw_resume(struct device *dev)
+ {
+     struct cpsw_common *cpsw = dev_get_drvdata(dev);
+     int i;
+
+     /* Select default pin state */
+     pinctrl_pm_select_default_state(dev);
+
+     /* shut up ASSERT_RTNL() warning in netif_set_real_num_tx/rx_queues */
+     rtnl_lock();
+
+     for (i = 0; i < cpsw->data.slaves; i++) {
+         struct net_device *ndev = cpsw->slaves[i].ndev;
+
+         if (!(ndev && netif_running(ndev)))
+             continue;
+
+         cpsw_ndo_open(ndev);
+     }
+
+     rtnl_unlock();
+
+     return 0;
+ }
+
+ static SIMPLE_DEV_PM_OPS(cpsw_pm_ops, cpsw_suspend, cpsw_resume);
+
  static struct platform_driver cpsw_driver = {
      .driver = {
          .name = "cpsw-switch",
+         .pm = &cpsw_pm_ops,
          .of_match_table = cpsw_of_mtable,
      },
      .probe = cpsw_probe,
+29 -12
drivers/net/geneve.c
··· 777 777 struct net_device *dev, 778 778 struct geneve_sock *gs4, 779 779 struct flowi4 *fl4, 780 - const struct ip_tunnel_info *info) 780 + const struct ip_tunnel_info *info, 781 + __be16 dport, __be16 sport) 781 782 { 782 783 bool use_cache = ip_tunnel_dst_cache_usable(skb, info); 783 784 struct geneve_dev *geneve = netdev_priv(dev); ··· 794 793 fl4->flowi4_proto = IPPROTO_UDP; 795 794 fl4->daddr = info->key.u.ipv4.dst; 796 795 fl4->saddr = info->key.u.ipv4.src; 796 + fl4->fl4_dport = dport; 797 + fl4->fl4_sport = sport; 797 798 798 799 tos = info->key.tos; 799 800 if ((tos == 1) && !geneve->cfg.collect_md) { ··· 830 827 struct net_device *dev, 831 828 struct geneve_sock *gs6, 832 829 struct flowi6 *fl6, 833 - const struct ip_tunnel_info *info) 830 + const struct ip_tunnel_info *info, 831 + __be16 dport, __be16 sport) 834 832 { 835 833 bool use_cache = ip_tunnel_dst_cache_usable(skb, info); 836 834 struct geneve_dev *geneve = netdev_priv(dev); ··· 847 843 fl6->flowi6_proto = IPPROTO_UDP; 848 844 fl6->daddr = info->key.u.ipv6.dst; 849 845 fl6->saddr = info->key.u.ipv6.src; 846 + fl6->fl6_dport = dport; 847 + fl6->fl6_sport = sport; 848 + 850 849 prio = info->key.tos; 851 850 if ((prio == 1) && !geneve->cfg.collect_md) { 852 851 prio = ip_tunnel_get_dsfield(ip_hdr(skb), skb); ··· 896 889 __be16 sport; 897 890 int err; 898 891 899 - rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info); 892 + sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true); 893 + rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info, 894 + geneve->cfg.info.key.tp_dst, sport); 900 895 if (IS_ERR(rt)) 901 896 return PTR_ERR(rt); 902 897 ··· 928 919 return -EMSGSIZE; 929 920 } 930 921 931 - sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true); 932 922 if (geneve->cfg.collect_md) { 933 923 tos = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb); 934 924 ttl = key->ttl; ··· 982 974 __be16 sport; 983 975 int err; 984 976 985 - dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info); 977 + 
sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true); 978 + dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info, 979 + geneve->cfg.info.key.tp_dst, sport); 986 980 if (IS_ERR(dst)) 987 981 return PTR_ERR(dst); 988 982 ··· 1013 1003 return -EMSGSIZE; 1014 1004 } 1015 1005 1016 - sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true); 1017 1006 if (geneve->cfg.collect_md) { 1018 1007 prio = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb); 1019 1008 ttl = key->ttl; ··· 1094 1085 { 1095 1086 struct ip_tunnel_info *info = skb_tunnel_info(skb); 1096 1087 struct geneve_dev *geneve = netdev_priv(dev); 1088 + __be16 sport; 1097 1089 1098 1090 if (ip_tunnel_info_af(info) == AF_INET) { 1099 1091 struct rtable *rt; 1100 1092 struct flowi4 fl4; 1101 - struct geneve_sock *gs4 = rcu_dereference(geneve->sock4); 1102 1093 1103 - rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info); 1094 + struct geneve_sock *gs4 = rcu_dereference(geneve->sock4); 1095 + sport = udp_flow_src_port(geneve->net, skb, 1096 + 1, USHRT_MAX, true); 1097 + 1098 + rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info, 1099 + geneve->cfg.info.key.tp_dst, sport); 1104 1100 if (IS_ERR(rt)) 1105 1101 return PTR_ERR(rt); 1106 1102 ··· 1115 1101 } else if (ip_tunnel_info_af(info) == AF_INET6) { 1116 1102 struct dst_entry *dst; 1117 1103 struct flowi6 fl6; 1118 - struct geneve_sock *gs6 = rcu_dereference(geneve->sock6); 1119 1104 1120 - dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info); 1105 + struct geneve_sock *gs6 = rcu_dereference(geneve->sock6); 1106 + sport = udp_flow_src_port(geneve->net, skb, 1107 + 1, USHRT_MAX, true); 1108 + 1109 + dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info, 1110 + geneve->cfg.info.key.tp_dst, sport); 1121 1111 if (IS_ERR(dst)) 1122 1112 return PTR_ERR(dst); 1123 1113 ··· 1132 1114 return -EINVAL; 1133 1115 } 1134 1116 1135 - info->key.tp_src = udp_flow_src_port(geneve->net, skb, 1136 - 1, USHRT_MAX, true); 1117 + info->key.tp_src = sport; 1137 1118 info->key.tp_dst = 
geneve->cfg.info.key.tp_dst; 1138 1119 return 0; 1139 1120 }
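The geneve hunks above compute the UDP source port once, up front, and feed it into the flow key before the route lookup, so routing sees the tunnel's real 4-tuple. A minimal sketch of hash-to-port-range mapping in the spirit of udp_flow_src_port() (function name and hashing are hypothetical, not the kernel's implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for udp_flow_src_port(): map a per-flow hash
 * onto the inclusive port range [min, max], so the same flow always
 * yields the same UDP source port (stable ECMP/RSS placement). */
static uint16_t flow_src_port(uint32_t flow_hash, uint16_t min, uint16_t max)
{
	uint32_t range = (uint32_t)max - min + 1;

	return (uint16_t)(min + (flow_hash % range));
}
```

Because the port is pure function of the flow hash, computing it before the route lookup (as the fix does) and reusing it for the actual encapsulation keeps the two consistent.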
+7
drivers/net/hyperv/hyperv_net.h
··· 847 847 848 848 #define NETVSC_XDP_HDRM 256 849 849 850 + #define NETVSC_XFER_HEADER_SIZE(rng_cnt) \ 851 + (offsetof(struct vmtransfer_page_packet_header, ranges) + \ 852 + (rng_cnt) * sizeof(struct vmtransfer_page_range)) 853 + 850 854 struct multi_send_data { 851 855 struct sk_buff *skb; /* skb containing the pkt */ 852 856 struct hv_netvsc_packet *pkt; /* netvsc pkt pending */ ··· 977 973 u32 vf_alloc; 978 974 /* Serial number of the VF to team with */ 979 975 u32 vf_serial; 976 + 977 + /* Is the current data path through the VF NIC? */ 978 + bool data_path_is_vf; 980 979 981 980 /* Used to temporarily save the config info across hibernation */ 982 981 struct netvsc_device_info *saved_netvsc_dev_info;
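The NETVSC_XFER_HEADER_SIZE() macro added above sizes a variable-length header with offsetof() rather than sizeof(), because sizeof() never counts a flexible array member and may include trailing padding. A self-contained sketch of the same idiom, with toy struct names (not the Hyper-V ones):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy variable-length header modelled on the vmtransfer_page packet:
 * a fixed prefix followed by rng_cnt ranges (types hypothetical). */
struct xfer_range {
	uint32_t byte_offset;
	uint32_t byte_count;
};

struct xfer_header {
	uint16_t type;
	uint16_t range_cnt;
	struct xfer_range ranges[];	/* flexible array member */
};

/* Wire size of a header carrying rng_cnt ranges: the fixed prefix up
 * to the flexible array, plus the array payload itself. */
#define XFER_HEADER_SIZE(rng_cnt) \
	(offsetof(struct xfer_header, ranges) + \
	 (size_t)(rng_cnt) * sizeof(struct xfer_range))
```

The receive path then validates `XFER_HEADER_SIZE(count) <= packet_length` before touching `ranges[i]`, which is exactly the range_cnt check the netvsc diff adds.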
+111 -13
drivers/net/hyperv/netvsc.c
··· 388 388 net_device->recv_section_size = resp->sections[0].sub_alloc_size; 389 389 net_device->recv_section_cnt = resp->sections[0].num_sub_allocs; 390 390 391 + /* Ensure buffer will not overflow */ 392 + if (net_device->recv_section_size < NETVSC_MTU_MIN || (u64)net_device->recv_section_size * 393 + (u64)net_device->recv_section_cnt > (u64)buf_size) { 394 + netdev_err(ndev, "invalid recv_section_size %u\n", 395 + net_device->recv_section_size); 396 + ret = -EINVAL; 397 + goto cleanup; 398 + } 399 + 391 400 /* Setup receive completion ring. 392 401 * Add 1 to the recv_section_cnt because at least one entry in a 393 402 * ring buffer has to be empty. ··· 469 460 /* Parse the response */ 470 461 net_device->send_section_size = init_packet->msg. 471 462 v1_msg.send_send_buf_complete.section_size; 463 + if (net_device->send_section_size < NETVSC_MTU_MIN) { 464 + netdev_err(ndev, "invalid send_section_size %u\n", 465 + net_device->send_section_size); 466 + ret = -EINVAL; 467 + goto cleanup; 468 + } 472 469 473 470 /* Section count is simply the size divided by the section size. 
*/ 474 471 net_device->send_section_cnt = buf_size / net_device->send_section_size; ··· 746 731 int budget) 747 732 { 748 733 const struct nvsp_message *nvsp_packet = hv_pkt_data(desc); 734 + u32 msglen = hv_pkt_datalen(desc); 735 + 736 + /* Ensure packet is big enough to read header fields */ 737 + if (msglen < sizeof(struct nvsp_message_header)) { 738 + netdev_err(ndev, "nvsp_message length too small: %u\n", msglen); 739 + return; 740 + } 749 741 750 742 switch (nvsp_packet->hdr.msg_type) { 751 743 case NVSP_MSG_TYPE_INIT_COMPLETE: 744 + if (msglen < sizeof(struct nvsp_message_header) + 745 + sizeof(struct nvsp_message_init_complete)) { 746 + netdev_err(ndev, "nvsp_msg length too small: %u\n", 747 + msglen); 748 + return; 749 + } 750 + fallthrough; 751 + 752 752 case NVSP_MSG1_TYPE_SEND_RECV_BUF_COMPLETE: 753 + if (msglen < sizeof(struct nvsp_message_header) + 754 + sizeof(struct nvsp_1_message_send_receive_buffer_complete)) { 755 + netdev_err(ndev, "nvsp_msg1 length too small: %u\n", 756 + msglen); 757 + return; 758 + } 759 + fallthrough; 760 + 753 761 case NVSP_MSG1_TYPE_SEND_SEND_BUF_COMPLETE: 762 + if (msglen < sizeof(struct nvsp_message_header) + 763 + sizeof(struct nvsp_1_message_send_send_buffer_complete)) { 764 + netdev_err(ndev, "nvsp_msg1 length too small: %u\n", 765 + msglen); 766 + return; 767 + } 768 + fallthrough; 769 + 754 770 case NVSP_MSG5_TYPE_SUBCHANNEL: 771 + if (msglen < sizeof(struct nvsp_message_header) + 772 + sizeof(struct nvsp_5_subchannel_complete)) { 773 + netdev_err(ndev, "nvsp_msg5 length too small: %u\n", 774 + msglen); 775 + return; 776 + } 755 777 /* Copy the response back */ 756 778 memcpy(&net_device->channel_init_pkt, nvsp_packet, 757 779 sizeof(struct nvsp_message)); ··· 1169 1117 static int netvsc_receive(struct net_device *ndev, 1170 1118 struct netvsc_device *net_device, 1171 1119 struct netvsc_channel *nvchan, 1172 - const struct vmpacket_descriptor *desc, 1173 - const struct nvsp_message *nvsp) 1120 + const struct 
vmpacket_descriptor *desc) 1174 1121 { 1175 1122 struct net_device_context *net_device_ctx = netdev_priv(ndev); 1176 1123 struct vmbus_channel *channel = nvchan->channel; 1177 1124 const struct vmtransfer_page_packet_header *vmxferpage_packet 1178 1125 = container_of(desc, const struct vmtransfer_page_packet_header, d); 1126 + const struct nvsp_message *nvsp = hv_pkt_data(desc); 1127 + u32 msglen = hv_pkt_datalen(desc); 1179 1128 u16 q_idx = channel->offermsg.offer.sub_channel_index; 1180 1129 char *recv_buf = net_device->recv_buf; 1181 1130 u32 status = NVSP_STAT_SUCCESS; 1182 1131 int i; 1183 1132 int count = 0; 1184 1133 1134 + /* Ensure packet is big enough to read header fields */ 1135 + if (msglen < sizeof(struct nvsp_message_header)) { 1136 + netif_err(net_device_ctx, rx_err, ndev, 1137 + "invalid nvsp header, length too small: %u\n", 1138 + msglen); 1139 + return 0; 1140 + } 1141 + 1185 1142 /* Make sure this is a valid nvsp packet */ 1186 1143 if (unlikely(nvsp->hdr.msg_type != NVSP_MSG1_TYPE_SEND_RNDIS_PKT)) { 1187 1144 netif_err(net_device_ctx, rx_err, ndev, 1188 1145 "Unknown nvsp packet type received %u\n", 1189 1146 nvsp->hdr.msg_type); 1147 + return 0; 1148 + } 1149 + 1150 + /* Validate xfer page pkt header */ 1151 + if ((desc->offset8 << 3) < sizeof(struct vmtransfer_page_packet_header)) { 1152 + netif_err(net_device_ctx, rx_err, ndev, 1153 + "Invalid xfer page pkt, offset too small: %u\n", 1154 + desc->offset8 << 3); 1190 1155 return 0; 1191 1156 } 1192 1157 ··· 1217 1148 1218 1149 count = vmxferpage_packet->range_cnt; 1219 1150 1151 + /* Check count for a valid value */ 1152 + if (NETVSC_XFER_HEADER_SIZE(count) > desc->offset8 << 3) { 1153 + netif_err(net_device_ctx, rx_err, ndev, 1154 + "Range count is not valid: %d\n", 1155 + count); 1156 + return 0; 1157 + } 1158 + 1220 1159 /* Each range represents 1 RNDIS pkt that contains 1 ethernet frame */ 1221 1160 for (i = 0; i < count; i++) { 1222 1161 u32 offset = 
vmxferpage_packet->ranges[i].byte_offset; ··· 1232 1155 void *data; 1233 1156 int ret; 1234 1157 1235 - if (unlikely(offset + buflen > net_device->recv_buf_size)) { 1158 + if (unlikely(offset > net_device->recv_buf_size || 1159 + buflen > net_device->recv_buf_size - offset)) { 1236 1160 nvchan->rsc.cnt = 0; 1237 1161 status = NVSP_STAT_FAIL; 1238 1162 netif_err(net_device_ctx, rx_err, ndev, ··· 1272 1194 u32 count, offset, *tab; 1273 1195 int i; 1274 1196 1197 + /* Ensure packet is big enough to read send_table fields */ 1198 + if (msglen < sizeof(struct nvsp_message_header) + 1199 + sizeof(struct nvsp_5_send_indirect_table)) { 1200 + netdev_err(ndev, "nvsp_v5_msg length too small: %u\n", msglen); 1201 + return; 1202 + } 1203 + 1275 1204 count = nvmsg->msg.v5_msg.send_table.count; 1276 1205 offset = nvmsg->msg.v5_msg.send_table.offset; 1277 1206 ··· 1310 1225 } 1311 1226 1312 1227 static void netvsc_send_vf(struct net_device *ndev, 1313 - const struct nvsp_message *nvmsg) 1228 + const struct nvsp_message *nvmsg, 1229 + u32 msglen) 1314 1230 { 1315 1231 struct net_device_context *net_device_ctx = netdev_priv(ndev); 1232 + 1233 + /* Ensure packet is big enough to read its fields */ 1234 + if (msglen < sizeof(struct nvsp_message_header) + 1235 + sizeof(struct nvsp_4_send_vf_association)) { 1236 + netdev_err(ndev, "nvsp_v4_msg length too small: %u\n", msglen); 1237 + return; 1238 + } 1316 1239 1317 1240 net_device_ctx->vf_alloc = nvmsg->msg.v4_msg.vf_assoc.allocated; 1318 1241 net_device_ctx->vf_serial = nvmsg->msg.v4_msg.vf_assoc.serial; ··· 1331 1238 1332 1239 static void netvsc_receive_inband(struct net_device *ndev, 1333 1240 struct netvsc_device *nvscdev, 1334 - const struct nvsp_message *nvmsg, 1335 - u32 msglen) 1241 + const struct vmpacket_descriptor *desc) 1336 1242 { 1243 + const struct nvsp_message *nvmsg = hv_pkt_data(desc); 1244 + u32 msglen = hv_pkt_datalen(desc); 1245 + 1246 + /* Ensure packet is big enough to read header fields */ 1247 + if (msglen < 
sizeof(struct nvsp_message_header)) { 1248 + netdev_err(ndev, "inband nvsp_message length too small: %u\n", msglen); 1249 + return; 1250 + } 1251 + 1337 1252 switch (nvmsg->hdr.msg_type) { 1338 1253 case NVSP_MSG5_TYPE_SEND_INDIRECTION_TABLE: 1339 1254 netvsc_send_table(ndev, nvscdev, nvmsg, msglen); 1340 1255 break; 1341 1256 1342 1257 case NVSP_MSG4_TYPE_SEND_VF_ASSOCIATION: 1343 - netvsc_send_vf(ndev, nvmsg); 1258 + netvsc_send_vf(ndev, nvmsg, msglen); 1344 1259 break; 1345 1260 } 1346 1261 } ··· 1362 1261 { 1363 1262 struct vmbus_channel *channel = nvchan->channel; 1364 1263 const struct nvsp_message *nvmsg = hv_pkt_data(desc); 1365 - u32 msglen = hv_pkt_datalen(desc); 1366 1264 1367 1265 trace_nvsp_recv(ndev, channel, nvmsg); 1368 1266 1369 1267 switch (desc->type) { 1370 1268 case VM_PKT_COMP: 1371 - netvsc_send_completion(ndev, net_device, channel, 1372 - desc, budget); 1269 + netvsc_send_completion(ndev, net_device, channel, desc, budget); 1373 1270 break; 1374 1271 1375 1272 case VM_PKT_DATA_USING_XFER_PAGES: 1376 - return netvsc_receive(ndev, net_device, nvchan, 1377 - desc, nvmsg); 1273 + return netvsc_receive(ndev, net_device, nvchan, desc); 1378 1274 break; 1379 1275 1380 1276 case VM_PKT_DATA_INBAND: 1381 - netvsc_receive_inband(ndev, net_device, nvmsg, msglen); 1277 + netvsc_receive_inband(ndev, net_device, desc); 1382 1278 break; 1383 1279 1384 1280 default:
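The netvsc hunks above follow one pattern throughout: verify that the received length covers the header before reading the type, then verify it covers the per-type body before reading any body field. A minimal sketch of that two-step validation, with hypothetical message structs (not the NVSP layouts):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy message layout: a type word followed by a per-type body. */
struct msg_header { uint32_t msg_type; };
struct msg_init_complete { uint32_t status; };

/* Never dereference a field the received length cannot cover:
 * header first, then the body for the specific type. */
static int parse_msg(const uint8_t *buf, size_t len, uint32_t *status_out)
{
	struct msg_header hdr;

	if (len < sizeof(hdr))
		return -1;			/* cannot even read the header */
	memcpy(&hdr, buf, sizeof(hdr));

	if (hdr.msg_type == 1) {		/* "init complete" */
		struct msg_init_complete body;

		if (len < sizeof(hdr) + sizeof(body))
			return -1;		/* truncated body */
		memcpy(&body, buf + sizeof(hdr), sizeof(body));
		*status_out = body.status;
		return 0;
	}
	return -1;				/* unknown type */
}

/* Build a well-formed 8-byte type-1 message and parse it back. */
static int demo_roundtrip(uint32_t status)
{
	uint8_t buf[8] = {0};
	uint32_t type = 1, out = 0;

	memcpy(buf, &type, sizeof(type));
	memcpy(buf + 4, &status, sizeof(status));
	if (parse_msg(buf, sizeof(buf), &out) != 0)
		return -1;
	return (int)out;
}

/* A 6-byte buffer is long enough for the header but not the body. */
static int demo_truncated(void)
{
	uint8_t buf[6] = {1, 0, 0, 0, 0, 0};
	uint32_t out = 0;

	return parse_msg(buf, sizeof(buf), &out);
}
```

The fallthrough chain in netvsc_send_completion() is the same idea applied per message type, with each case adding its own body-size check before the shared copy.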
+29 -6
drivers/net/hyperv/netvsc_drv.c
··· 748 748 struct netvsc_reconfig *event; 749 749 unsigned long flags; 750 750 751 + /* Ensure the packet is big enough to access its fields */ 752 + if (resp->msg_len - RNDIS_HEADER_SIZE < sizeof(struct rndis_indicate_status)) { 753 + netdev_err(net, "invalid rndis_indicate_status packet, len: %u\n", 754 + resp->msg_len); 755 + return; 756 + } 757 + 751 758 /* Update the physical link speed when changing to another vSwitch */ 752 759 if (indicate->status == RNDIS_STATUS_LINK_SPEED_CHANGE) { 753 760 u32 speed; ··· 2373 2366 return NOTIFY_OK; 2374 2367 } 2375 2368 2376 - /* VF up/down change detected, schedule to change data path */ 2369 + /* Change the data path when VF UP/DOWN/CHANGE are detected. 2370 + * 2371 + * Typically a UP or DOWN event is followed by a CHANGE event, so 2372 + * net_device_ctx->data_path_is_vf is used to cache the current data path 2373 + * to avoid the duplicate call of netvsc_switch_datapath() and the duplicate 2374 + * message. 2375 + * 2376 + * During hibernation, if a VF NIC driver (e.g. mlx5) preserves the network 2377 + * interface, there is only the CHANGE event and no UP or DOWN event. 
2378 + */ 2377 2379 static int netvsc_vf_changed(struct net_device *vf_netdev) 2378 2380 { 2379 2381 struct net_device_context *net_device_ctx; ··· 2398 2382 netvsc_dev = rtnl_dereference(net_device_ctx->nvdev); 2399 2383 if (!netvsc_dev) 2400 2384 return NOTIFY_DONE; 2385 + 2386 + if (net_device_ctx->data_path_is_vf == vf_is_up) 2387 + return NOTIFY_OK; 2388 + net_device_ctx->data_path_is_vf = vf_is_up; 2401 2389 2402 2390 netvsc_switch_datapath(ndev, vf_is_up); 2403 2391 netdev_info(ndev, "Data path switched %s VF: %s\n", ··· 2607 2587 static int netvsc_suspend(struct hv_device *dev) 2608 2588 { 2609 2589 struct net_device_context *ndev_ctx; 2610 - struct net_device *vf_netdev, *net; 2611 2590 struct netvsc_device *nvdev; 2591 + struct net_device *net; 2612 2592 int ret; 2613 2593 2614 2594 net = hv_get_drvdata(dev); ··· 2623 2603 ret = -ENODEV; 2624 2604 goto out; 2625 2605 } 2626 - 2627 - vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev); 2628 - if (vf_netdev) 2629 - netvsc_unregister_vf(vf_netdev); 2630 2606 2631 2607 /* Save the current config info */ 2632 2608 ndev_ctx->saved_netvsc_dev_info = netvsc_devinfo_get(nvdev); ··· 2644 2628 rtnl_lock(); 2645 2629 2646 2630 net_device_ctx = netdev_priv(net); 2631 + 2632 + /* Reset the data path to the netvsc NIC before re-opening the vmbus 2633 + * channel. Later netvsc_netdev_event() will switch the data path to 2634 + * the VF upon the UP or CHANGE event. 2635 + */ 2636 + net_device_ctx->data_path_is_vf = false; 2647 2637 device_info = net_device_ctx->saved_netvsc_dev_info; 2648 2638 2649 2639 ret = netvsc_attach(net, device_info); ··· 2717 2695 return netvsc_unregister_vf(event_dev); 2718 2696 case NETDEV_UP: 2719 2697 case NETDEV_DOWN: 2698 + case NETDEV_CHANGE: 2720 2699 return netvsc_vf_changed(event_dev); 2721 2700 default: 2722 2701 return NOTIFY_DONE;
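The data_path_is_vf cache introduced above exists to deduplicate events: a NETDEV_UP is typically followed by a NETDEV_CHANGE, and only the first of the pair should actually switch the data path. A toy model of that dedup (counter stands in for the real switch; not the netvsc code):

```c
#include <assert.h>
#include <stdbool.h>

/* Cache the current data path and only act when it really changes. */
struct path_state {
	bool data_path_is_vf;
	int switches;		/* how many real switches happened */
};

static void vf_changed(struct path_state *st, bool vf_is_up)
{
	if (st->data_path_is_vf == vf_is_up)
		return;		/* duplicate UP/CHANGE event: ignore */
	st->data_path_is_vf = vf_is_up;
	st->switches++;		/* stands in for netvsc_switch_datapath() */
}

/* Replay UP, CHANGE, DOWN, CHANGE and count real switches. */
static int demo_replay(void)
{
	struct path_state st = { false, 0 };

	vf_changed(&st, true);	/* NETDEV_UP */
	vf_changed(&st, true);	/* NETDEV_CHANGE after UP: deduped */
	vf_changed(&st, false);	/* NETDEV_DOWN */
	vf_changed(&st, false);	/* NETDEV_CHANGE after DOWN: deduped */
	return st.switches;
}
```

The hibernation comment in the diff explains why CHANGE alone must also work: resume resets the cached flag to false, so the next CHANGE event (with no preceding UP) still flips the path to the VF.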
+66 -7
drivers/net/hyperv/rndis_filter.c
··· 275 275 return; 276 276 } 277 277 278 + /* Ensure the packet is big enough to read req_id. Req_id is the 1st 279 + * field in any request/response message, so the payload should have at 280 + * least sizeof(u32) bytes 281 + */ 282 + if (resp->msg_len - RNDIS_HEADER_SIZE < sizeof(u32)) { 283 + netdev_err(ndev, "rndis msg_len too small: %u\n", 284 + resp->msg_len); 285 + return; 286 + } 287 + 278 288 spin_lock_irqsave(&dev->request_lock, flags); 279 289 list_for_each_entry(request, &dev->req_list, list_ent) { 280 290 /* ··· 341 331 * Get the Per-Packet-Info with the specified type 342 332 * return NULL if not found. 343 333 */ 344 - static inline void *rndis_get_ppi(struct rndis_packet *rpkt, 345 - u32 type, u8 internal) 334 + static inline void *rndis_get_ppi(struct net_device *ndev, 335 + struct rndis_packet *rpkt, 336 + u32 rpkt_len, u32 type, u8 internal) 346 337 { 347 338 struct rndis_per_packet_info *ppi; 348 339 int len; ··· 351 340 if (rpkt->per_pkt_info_offset == 0) 352 341 return NULL; 353 342 343 + /* Validate info_offset and info_len */ 344 + if (rpkt->per_pkt_info_offset < sizeof(struct rndis_packet) || 345 + rpkt->per_pkt_info_offset > rpkt_len) { 346 + netdev_err(ndev, "Invalid per_pkt_info_offset: %u\n", 347 + rpkt->per_pkt_info_offset); 348 + return NULL; 349 + } 350 + 351 + if (rpkt->per_pkt_info_len > rpkt_len - rpkt->per_pkt_info_offset) { 352 + netdev_err(ndev, "Invalid per_pkt_info_len: %u\n", 353 + rpkt->per_pkt_info_len); 354 + return NULL; 355 + } 356 + 354 357 ppi = (struct rndis_per_packet_info *)((ulong)rpkt + 355 358 rpkt->per_pkt_info_offset); 356 359 len = rpkt->per_pkt_info_len; 357 360 358 361 while (len > 0) { 362 + /* Validate ppi_offset and ppi_size */ 363 + if (ppi->size > len) { 364 + netdev_err(ndev, "Invalid ppi size: %u\n", ppi->size); 365 + continue; 366 + } 367 + 368 + if (ppi->ppi_offset >= ppi->size) { 369 + netdev_err(ndev, "Invalid ppi_offset: %u\n", ppi->ppi_offset); 370 + continue; 371 + } 372 + 359 373 if 
(ppi->type == type && ppi->internal == internal) 360 374 return (void *)((ulong)ppi + ppi->ppi_offset); 361 375 len -= ppi->size; ··· 424 388 const struct ndis_pkt_8021q_info *vlan; 425 389 const struct rndis_pktinfo_id *pktinfo_id; 426 390 const u32 *hash_info; 427 - u32 data_offset; 391 + u32 data_offset, rpkt_len; 428 392 void *data; 429 393 bool rsc_more = false; 430 394 int ret; 431 395 396 + /* Ensure data_buflen is big enough to read header fields */ 397 + if (data_buflen < RNDIS_HEADER_SIZE + sizeof(struct rndis_packet)) { 398 + netdev_err(ndev, "invalid rndis pkt, data_buflen too small: %u\n", 399 + data_buflen); 400 + return NVSP_STAT_FAIL; 401 + } 402 + 403 + /* Validate rndis_pkt offset */ 404 + if (rndis_pkt->data_offset >= data_buflen - RNDIS_HEADER_SIZE) { 405 + netdev_err(ndev, "invalid rndis packet offset: %u\n", 406 + rndis_pkt->data_offset); 407 + return NVSP_STAT_FAIL; 408 + } 409 + 432 410 /* Remove the rndis header and pass it back up the stack */ 433 411 data_offset = RNDIS_HEADER_SIZE + rndis_pkt->data_offset; 434 412 413 + rpkt_len = data_buflen - RNDIS_HEADER_SIZE; 435 414 data_buflen -= data_offset; 436 415 437 416 /* ··· 461 410 return NVSP_STAT_FAIL; 462 411 } 463 412 464 - vlan = rndis_get_ppi(rndis_pkt, IEEE_8021Q_INFO, 0); 413 + vlan = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, IEEE_8021Q_INFO, 0); 465 414 466 - csum_info = rndis_get_ppi(rndis_pkt, TCPIP_CHKSUM_PKTINFO, 0); 415 + csum_info = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, TCPIP_CHKSUM_PKTINFO, 0); 467 416 468 - hash_info = rndis_get_ppi(rndis_pkt, NBL_HASH_VALUE, 0); 417 + hash_info = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, NBL_HASH_VALUE, 0); 469 418 470 - pktinfo_id = rndis_get_ppi(rndis_pkt, RNDIS_PKTINFO_ID, 1); 419 + pktinfo_id = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, RNDIS_PKTINFO_ID, 1); 471 420 472 421 data = (void *)msg + data_offset; 473 422 ··· 524 473 525 474 if (netif_msg_rx_status(net_device_ctx)) 526 475 dump_rndis_message(ndev, rndis_msg); 476 + 477 + /* 
Validate incoming rndis_message packet */ 478 + if (buflen < RNDIS_HEADER_SIZE || rndis_msg->msg_len < RNDIS_HEADER_SIZE || 479 + buflen < rndis_msg->msg_len) { 480 + netdev_err(ndev, "Invalid rndis_msg (buflen: %u, msg_len: %u)\n", 481 + buflen, rndis_msg->msg_len); 482 + return NVSP_STAT_FAIL; 483 + } 527 484 528 485 switch (rndis_msg->ndis_msg_type) { 529 486 case RNDIS_MSG_PACKET:
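The rndis_get_ppi() changes above are a bounds-checked walk over a list of self-sizing records. The essential invariants are: never read a record header that the buffer cannot hold, reject any record whose claimed size is smaller than its header or larger than the remaining bytes, and always advance by a positive amount so the loop cannot spin. A correct-by-construction sketch of that pattern (record layout and names hypothetical, not the RNDIS PPI format):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy per-packet-info record: each entry carries its own size and
 * type; entries are packed back to back in one buffer. */
struct ppi {
	uint16_t size;		/* total entry size, header included */
	uint16_t type;
	uint8_t data[];
};

/* Find the entry of the given type in a `len`-byte PPI area.
 * Returns the byte offset of its payload, or -1 on miss/malformed. */
static long find_ppi(const uint8_t *buf, size_t len, uint16_t type)
{
	size_t off = 0;

	while (off + sizeof(struct ppi) <= len) {
		struct ppi hdr;

		memcpy(&hdr, buf + off, sizeof(hdr));
		if (hdr.size < sizeof(hdr) || hdr.size > len - off)
			return -1;	/* malformed entry: stop */
		if (hdr.type == type)
			return (long)(off + sizeof(hdr));
		off += hdr.size;	/* always advances: no spin */
	}
	return -1;
}

/* Two well-formed entries: (size 6, type 1) then (size 4, type 2). */
static long demo_find(uint16_t type)
{
	uint8_t buf[10] = {0};
	uint16_t v;

	v = 6; memcpy(buf + 0, &v, 2);
	v = 1; memcpy(buf + 2, &v, 2);
	v = 4; memcpy(buf + 6, &v, 2);
	v = 2; memcpy(buf + 8, &v, 2);
	return find_ppi(buf, sizeof(buf), type);
}

/* One entry claiming size 0 must be rejected, not looped on. */
static long demo_bad(void)
{
	uint8_t buf[4] = {0};

	return find_ppi(buf, sizeof(buf), 1);
}
```

Note this sketch bails out on a malformed entry rather than skipping it; skipping without advancing the cursor would turn a bad `size` field into an infinite loop.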
+3 -1
drivers/net/ieee802154/adf7242.c
··· 882 882 int ret; 883 883 u8 lqi, len_u8, *data; 884 884 885 - adf7242_read_reg(lp, 0, &len_u8); 885 + ret = adf7242_read_reg(lp, 0, &len_u8); 886 + if (ret) 887 + return ret; 886 888 887 889 len = len_u8; 888 890
+1
drivers/net/ieee802154/ca8210.c
··· 2925 2925 ); 2926 2926 if (!priv->irq_workqueue) { 2927 2927 dev_crit(&priv->spi->dev, "alloc of irq_workqueue failed!\n"); 2928 + destroy_workqueue(priv->mlme_workqueue); 2928 2929 return -ENOMEM; 2929 2930 } 2930 2931
+2 -2
drivers/net/ipa/ipa_table.c
··· 521 521 val = ioread32(endpoint->ipa->reg_virt + offset); 522 522 523 523 /* Zero all filter-related fields, preserving the rest */ 524 - u32_replace_bits(val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL); 524 + u32p_replace_bits(&val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL); 525 525 526 526 iowrite32(val, endpoint->ipa->reg_virt + offset); 527 527 } ··· 573 573 val = ioread32(ipa->reg_virt + offset); 574 574 575 575 /* Zero all route-related fields, preserving the rest */ 576 - u32_replace_bits(val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL); 576 + u32p_replace_bits(&val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL); 577 577 578 578 iowrite32(val, ipa->reg_virt + offset); 579 579 }
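The ipa_table fix above swaps u32_replace_bits() for u32p_replace_bits(): the first returns the updated word, the second updates it through a pointer, so calling the by-value form and discarding the result silently left the register image unchanged. A simplified pair of helpers illustrating the distinction (these take a pre-shifted field, unlike the kernel's bitfield helpers):

```c
#include <assert.h>
#include <stdint.h>

/* By-value: caller must use the return value or the update is lost. */
static uint32_t u32_set_field(uint32_t val, uint32_t field, uint32_t mask)
{
	return (val & ~mask) | (field & mask);
}

/* By-pointer: updates the word in place, no result to drop. */
static void u32p_set_field(uint32_t *val, uint32_t field, uint32_t mask)
{
	*val = u32_set_field(*val, field, mask);
}

/* Zero the masked field of `val` in place and return the result,
 * mirroring the "zero all filter-related fields" hunk above. */
static uint32_t demo_clear(uint32_t val, uint32_t mask)
{
	u32p_set_field(&val, 0, mask);
	return val;
}
```

With `__must_check` (or `[[nodiscard]]`) on the by-value variant, the original bug would have been a compile-time warning rather than a silent no-op.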
+1 -1
drivers/net/phy/phy.c
··· 996 996 { 997 997 struct net_device *dev = phydev->attached_dev; 998 998 999 - if (!phy_is_started(phydev)) { 999 + if (!phy_is_started(phydev) && phydev->state != PHY_DOWN) { 1000 1000 WARN(1, "called from state %s\n", 1001 1001 phy_state_to_str(phydev->state)); 1002 1002 return;
+6 -5
drivers/net/phy/phy_device.c
··· 1143 1143 if (ret < 0) 1144 1144 return ret; 1145 1145 1146 - ret = phy_disable_interrupts(phydev); 1147 - if (ret) 1148 - return ret; 1149 - 1150 1146 if (phydev->drv->config_init) 1151 1147 ret = phydev->drv->config_init(phydev); 1152 1148 ··· 1419 1423 if (err) 1420 1424 goto error; 1421 1425 1426 + err = phy_disable_interrupts(phydev); 1427 + if (err) 1428 + return err; 1429 + 1422 1430 phy_resume(phydev); 1423 1431 phy_led_triggers_register(phydev); 1424 1432 ··· 1682 1682 1683 1683 phy_led_triggers_unregister(phydev); 1684 1684 1685 - module_put(phydev->mdio.dev.driver->owner); 1685 + if (phydev->mdio.dev.driver) 1686 + module_put(phydev->mdio.dev.driver->owner); 1686 1687 1687 1688 /* If the device had no specific driver before (i.e. - it 1688 1689 * was using the generic driver), we unbind the device
+1 -1
drivers/net/usb/rndis_host.c
··· 201 201 dev_dbg(&info->control->dev, 202 202 "rndis response error, code %d\n", retval); 203 203 } 204 - msleep(20); 204 + msleep(40); 205 205 } 206 206 dev_dbg(&info->control->dev, "rndis response timeout\n"); 207 207 return -ETIMEDOUT;
+1
drivers/net/wan/hdlc_cisco.c
··· 118 118 skb_put(skb, sizeof(struct cisco_packet)); 119 119 skb->priority = TC_PRIO_CONTROL; 120 120 skb->dev = dev; 121 + skb->protocol = htons(ETH_P_HDLC); 121 122 skb_reset_network_header(skb); 122 123 123 124 dev_queue_xmit(skb);
+5 -1
drivers/net/wan/hdlc_fr.c
··· 433 433 if (pvc->state.fecn) /* TX Congestion counter */ 434 434 dev->stats.tx_compressed++; 435 435 skb->dev = pvc->frad; 436 + skb->protocol = htons(ETH_P_HDLC); 437 + skb_reset_network_header(skb); 436 438 dev_queue_xmit(skb); 437 439 return NETDEV_TX_OK; 438 440 } ··· 557 555 skb_put(skb, i); 558 556 skb->priority = TC_PRIO_CONTROL; 559 557 skb->dev = dev; 558 + skb->protocol = htons(ETH_P_HDLC); 560 559 skb_reset_network_header(skb); 561 560 562 561 dev_queue_xmit(skb); ··· 1044 1041 { 1045 1042 dev->type = ARPHRD_DLCI; 1046 1043 dev->flags = IFF_POINTOPOINT; 1047 - dev->hard_header_len = 10; 1044 + dev->hard_header_len = 0; 1048 1045 dev->addr_len = 2; 1049 1046 netif_keep_dst(dev); 1050 1047 } ··· 1096 1093 dev->mtu = HDLC_MAX_MTU; 1097 1094 dev->min_mtu = 68; 1098 1095 dev->max_mtu = HDLC_MAX_MTU; 1096 + dev->needed_headroom = 10; 1099 1097 dev->priv_flags |= IFF_NO_QUEUE; 1100 1098 dev->ml_priv = pvc; 1101 1099
+12 -5
drivers/net/wan/hdlc_ppp.c
··· 251 251 252 252 skb->priority = TC_PRIO_CONTROL; 253 253 skb->dev = dev; 254 + skb->protocol = htons(ETH_P_HDLC); 254 255 skb_reset_network_header(skb); 255 256 skb_queue_tail(&tx_queue, skb); 256 257 } ··· 384 383 } 385 384 386 385 for (opt = data; len; len -= opt[1], opt += opt[1]) { 387 - if (len < 2 || len < opt[1]) { 388 - dev->stats.rx_errors++; 389 - kfree(out); 390 - return; /* bad packet, drop silently */ 391 - } 386 + if (len < 2 || opt[1] < 2 || len < opt[1]) 387 + goto err_out; 392 388 393 389 if (pid == PID_LCP) 394 390 switch (opt[0]) { ··· 393 395 continue; /* MRU always OK and > 1500 bytes? */ 394 396 395 397 case LCP_OPTION_ACCM: /* async control character map */ 398 + if (opt[1] < sizeof(valid_accm)) 399 + goto err_out; 396 400 if (!memcmp(opt, valid_accm, 397 401 sizeof(valid_accm))) 398 402 continue; ··· 406 406 } 407 407 break; 408 408 case LCP_OPTION_MAGIC: 409 + if (len < 6) 410 + goto err_out; 409 411 if (opt[1] != 6 || (!opt[2] && !opt[3] && 410 412 !opt[4] && !opt[5])) 411 413 break; /* reject invalid magic number */ ··· 425 423 else 426 424 ppp_cp_event(dev, pid, RCR_GOOD, CP_CONF_ACK, id, req_len, data); 427 425 426 + kfree(out); 427 + return; 428 + 429 + err_out: 430 + dev->stats.rx_errors++; 428 431 kfree(out); 429 432 } 430 433
+2 -2
drivers/net/wan/lapbether.c
··· 198 198 struct net_device *dev; 199 199 int size = skb->len; 200 200 201 - skb->protocol = htons(ETH_P_X25); 202 - 203 201 ptr = skb_push(skb, 2); 204 202 205 203 *ptr++ = size % 256; ··· 207 209 ndev->stats.tx_bytes += size; 208 210 209 211 skb->dev = dev = lapbeth->ethdev; 212 + 213 + skb->protocol = htons(ETH_P_DEC); 210 214 211 215 skb_reset_network_header(skb); 212 216
+1 -4
drivers/net/wireguard/noise.c
··· 87 87 88 88 void wg_noise_handshake_clear(struct noise_handshake *handshake) 89 89 { 90 + down_write(&handshake->lock); 90 91 wg_index_hashtable_remove( 91 92 handshake->entry.peer->device->index_hashtable, 92 93 &handshake->entry); 93 - down_write(&handshake->lock); 94 94 handshake_zero(handshake); 95 95 up_write(&handshake->lock); 96 - wg_index_hashtable_remove( 97 - handshake->entry.peer->device->index_hashtable, 98 - &handshake->entry); 99 96 } 100 97 101 98 static struct noise_keypair *keypair_create(struct wg_peer *peer)
+8 -3
drivers/net/wireguard/peerlookup.c
··· 167 167 struct index_hashtable_entry *old, 168 168 struct index_hashtable_entry *new) 169 169 { 170 - if (unlikely(hlist_unhashed(&old->index_hash))) 171 - return false; 170 + bool ret; 171 + 172 172 spin_lock_bh(&table->lock); 173 + ret = !hlist_unhashed(&old->index_hash); 174 + if (unlikely(!ret)) 175 + goto out; 176 + 173 177 new->index = old->index; 174 178 hlist_replace_rcu(&old->index_hash, &new->index_hash); 175 179 ··· 184 180 * simply gets dropped, which isn't terrible. 185 181 */ 186 182 INIT_HLIST_NODE(&old->index_hash); 183 + out: 187 184 spin_unlock_bh(&table->lock); 188 - return true; 185 + return ret; 189 186 } 190 187 191 188 void wg_index_hashtable_remove(struct index_hashtable *table,
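The wireguard peerlookup fix above moves the "is the old entry still hashed?" test inside the table lock: checked outside, a concurrent remove could win the race between the check and the replace. A single-threaded sketch of check-under-lock, using a C11 atomic_flag spinlock in place of the kernel's spin_lock_bh() (all names hypothetical):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct entry { bool hashed; int index; };
struct table { atomic_flag lock; };

static void table_lock(struct table *t)
{
	while (atomic_flag_test_and_set(&t->lock))
		;			/* spin until acquired */
}

static void table_unlock(struct table *t)
{
	atomic_flag_clear(&t->lock);
}

/* Replace `old` with `new` in the table, but only if `old` is still
 * hashed -- and decide that *after* taking the lock. */
static bool table_replace(struct table *t, struct entry *old,
			  struct entry *new)
{
	bool ret;

	table_lock(t);
	ret = old->hashed;		/* re-checked under the lock */
	if (ret) {
		new->index = old->index;	/* transfer the slot */
		new->hashed = true;
		old->hashed = false;
	}
	table_unlock(t);
	return ret;
}

/* Demo: the replace succeeds only while `old` is hashed. */
static int demo_replace(bool old_hashed)
{
	struct table t = { ATOMIC_FLAG_INIT };
	struct entry old = { old_hashed, 42 };
	struct entry new = { false, 0 };

	if (!table_replace(&t, &old, &new))
		return -1;
	return new.index;
}
```

The companion noise.c hunk is the same discipline from the other side: take the handshake lock before removing the index-table entry, so readers never observe a half-cleared handshake.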
+9 -3
drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
··· 664 664 /* To check if there's window offered */ 665 665 static bool data_ok(struct brcmf_sdio *bus) 666 666 { 667 - /* Reserve TXCTL_CREDITS credits for txctl */ 668 - return (bus->tx_max - bus->tx_seq) > TXCTL_CREDITS && 669 - ((bus->tx_max - bus->tx_seq) & 0x80) == 0; 667 + u8 tx_rsv = 0; 668 + 669 + /* Reserve TXCTL_CREDITS credits for txctl when it is ready to send */ 670 + if (bus->ctrl_frame_stat) 671 + tx_rsv = TXCTL_CREDITS; 672 + 673 + return (bus->tx_max - bus->tx_seq - tx_rsv) != 0 && 674 + ((bus->tx_max - bus->tx_seq - tx_rsv) & 0x80) == 0; 675 + 670 676 } 671 677 672 678 /* To check if there's window offered */
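The brcmfmac data_ok() fix above only reserves TXCTL_CREDITS while a control frame is actually queued, and the window math works on wrapping 8-bit sequence numbers, with bit 0x80 flagging an underflowed (invalid) window. A standalone version of that arithmetic (credit count of 2 is an assumption, not the driver's TXCTL_CREDITS value):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* tx_max and tx_seq are 8-bit sequence numbers that wrap modulo 256.
 * The free window is their difference minus any reserved credits;
 * a set 0x80 bit means the subtraction underflowed, i.e. no window. */
static bool data_ok(uint8_t tx_max, uint8_t tx_seq, bool ctrl_pending)
{
	uint8_t rsv = ctrl_pending ? 2 : 0;	/* credits for txctl */
	uint8_t win = (uint8_t)(tx_max - tx_seq - rsv);

	return win != 0 && (win & 0x80) == 0;
}
```

Reserving unconditionally (the old behaviour) would report "no window" for data even when no control frame needed the credits, which is the stall the fix addresses.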
+1 -1
drivers/net/wireless/marvell/mwifiex/fw.h
··· 954 954 struct mwifiex_aes_param { 955 955 u8 pn[WPA_PN_SIZE]; 956 956 __le16 key_len; 957 - u8 key[WLAN_KEY_LEN_CCMP]; 957 + u8 key[WLAN_KEY_LEN_CCMP_256]; 958 958 } __packed; 959 959 960 960 struct mwifiex_wapi_param {
+2 -2
drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c
··· 619 619 key_v2 = &resp->params.key_material_v2; 620 620 621 621 len = le16_to_cpu(key_v2->key_param_set.key_params.aes.key_len); 622 - if (len > WLAN_KEY_LEN_CCMP) 622 + if (len > sizeof(key_v2->key_param_set.key_params.aes.key)) 623 623 return -EINVAL; 624 624 625 625 if (le16_to_cpu(key_v2->action) == HostCmd_ACT_GEN_SET) { ··· 635 635 return 0; 636 636 637 637 memset(priv->aes_key_v2.key_param_set.key_params.aes.key, 0, 638 - WLAN_KEY_LEN_CCMP); 638 + sizeof(key_v2->key_param_set.key_params.aes.key)); 639 639 priv->aes_key_v2.key_param_set.key_params.aes.key_len = 640 640 cpu_to_le16(len); 641 641 memcpy(priv->aes_key_v2.key_param_set.key_params.aes.key,
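The mwifiex hunks above bound the copied key length by `sizeof()` of the destination buffer itself rather than by a cipher constant that can drift out of sync with the struct (here, WLAN_KEY_LEN_CCMP vs the widened CCMP-256 buffer). A toy version of that clamp (struct layout hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy key container: 32 bytes covers a 256-bit key. */
struct key_param {
	uint16_t key_len;
	uint8_t key[32];
};

/* Bound the copy by the destination itself; reject oversized input
 * instead of truncating it silently. */
static int store_key(struct key_param *dst, const uint8_t *src, size_t len)
{
	if (len > sizeof(dst->key))
		return -1;
	memset(dst->key, 0, sizeof(dst->key));
	memcpy(dst->key, src, len);
	dst->key_len = (uint16_t)len;
	return 0;
}

/* Store a key of the given length and report the stored length. */
static int demo_store(size_t len)
{
	struct key_param p;
	uint8_t src[64] = {0xaa};

	if (store_key(&p, src, len) != 0)
		return -1;
	return p.key_len;
}
```

Tying the bound to `sizeof(dst->key)` means a future buffer resize automatically updates the check, which is exactly why the diff replaces the named constant.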
+2 -1
drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
··· 2128 2128 sizeof(dev->mt76.hw->wiphy->fw_version), 2129 2129 "%.10s-%.15s", hdr->fw_ver, hdr->build_date); 2130 2130 2131 - if (!strncmp(hdr->fw_ver, "2.0", sizeof(hdr->fw_ver))) { 2131 + if (!is_mt7615(&dev->mt76) && 2132 + !strncmp(hdr->fw_ver, "2.0", sizeof(hdr->fw_ver))) { 2132 2133 dev->fw_ver = MT7615_FIRMWARE_V2; 2133 2134 dev->mcu_ops = &sta_update_ops; 2134 2135 } else {
+6 -2
drivers/net/wireless/mediatek/mt76/mt7915/init.c
··· 699 699 spin_lock_bh(&dev->token_lock); 700 700 idr_for_each_entry(&dev->token, txwi, id) { 701 701 mt7915_txp_skb_unmap(&dev->mt76, txwi); 702 - if (txwi->skb) 703 - dev_kfree_skb_any(txwi->skb); 702 + if (txwi->skb) { 703 + struct ieee80211_hw *hw; 704 + 705 + hw = mt76_tx_status_get_hw(&dev->mt76, txwi->skb); 706 + ieee80211_free_txskb(hw, txwi->skb); 707 + } 704 708 mt76_put_txwi(&dev->mt76, txwi); 705 709 } 706 710 spin_unlock_bh(&dev->token_lock);
+1 -1
drivers/net/wireless/mediatek/mt76/mt7915/mac.c
··· 841 841 if (sta || !(info->flags & IEEE80211_TX_CTL_NO_ACK)) 842 842 mt7915_tx_status(sta, hw, info, NULL); 843 843 844 - dev_kfree_skb(skb); 844 + ieee80211_free_txskb(hw, skb); 845 845 } 846 846 847 847 void mt7915_txp_skb_unmap(struct mt76_dev *dev,
-1
drivers/net/wireless/ti/wlcore/cmd.h
··· 458 458 KEY_TKIP = 2, 459 459 KEY_AES = 3, 460 460 KEY_GEM = 4, 461 - KEY_IGTK = 5, 462 461 }; 463 462 464 463 struct wl1271_cmd_set_keys {
-4
drivers/net/wireless/ti/wlcore/main.c
··· 3559 3559 case WL1271_CIPHER_SUITE_GEM: 3560 3560 key_type = KEY_GEM; 3561 3561 break; 3562 - case WLAN_CIPHER_SUITE_AES_CMAC: 3563 - key_type = KEY_IGTK; 3564 - break; 3565 3562 default: 3566 3563 wl1271_error("Unknown key algo 0x%x", key_conf->cipher); 3567 3564 ··· 6228 6231 WLAN_CIPHER_SUITE_TKIP, 6229 6232 WLAN_CIPHER_SUITE_CCMP, 6230 6233 WL1271_CIPHER_SUITE_GEM, 6231 - WLAN_CIPHER_SUITE_AES_CMAC, 6232 6234 }; 6233 6235 6234 6236 /* The tx descriptor buffer */
+1 -1
drivers/s390/net/qeth_l2_main.c
··· 284 284 285 285 if (card->state == CARD_STATE_SOFTSETUP) { 286 286 qeth_clear_ipacmd_list(card); 287 - qeth_drain_output_queues(card); 288 287 card->state = CARD_STATE_DOWN; 289 288 } 290 289 291 290 qeth_qdio_clear_card(card, 0); 291 + qeth_drain_output_queues(card); 292 292 qeth_clear_working_pool_list(card); 293 293 flush_workqueue(card->event_wq); 294 294 qeth_flush_local_addrs(card);
+1 -1
drivers/s390/net/qeth_l3_main.c
··· 1168 1168 if (card->state == CARD_STATE_SOFTSETUP) { 1169 1169 qeth_l3_clear_ip_htable(card, 1); 1170 1170 qeth_clear_ipacmd_list(card); 1171 - qeth_drain_output_queues(card); 1172 1171 card->state = CARD_STATE_DOWN; 1173 1172 } 1174 1173 1175 1174 qeth_qdio_clear_card(card, 0); 1175 + qeth_drain_output_queues(card); 1176 1176 qeth_clear_working_pool_list(card); 1177 1177 flush_workqueue(card->event_wq); 1178 1178 qeth_flush_local_addrs(card);
+1 -1
include/linux/netdev_features.h
··· 193 193 #define NETIF_F_GSO_MASK (__NETIF_F_BIT(NETIF_F_GSO_LAST + 1) - \ 194 194 __NETIF_F_BIT(NETIF_F_GSO_SHIFT)) 195 195 196 - /* List of IP checksum features. Note that NETIF_F_ HW_CSUM should not be 196 + /* List of IP checksum features. Note that NETIF_F_HW_CSUM should not be 197 197 * set in features when NETIF_F_IP_CSUM or NETIF_F_IPV6_CSUM are set-- 198 198 * this would be contradictory 199 199 */
+2
include/linux/netdevice.h
··· 1784 1784 * the watchdog (see dev_watchdog()) 1785 1785 * @watchdog_timer: List of timers 1786 1786 * 1787 + * @proto_down_reason: reason a netdev interface is held down 1787 1788 * @pcpu_refcnt: Number of references to this device 1788 1789 * @todo_list: Delayed register/unregister 1789 1790 * @link_watch_list: XXX: need comments on this one ··· 1849 1848 * @udp_tunnel_nic_info: static structure describing the UDP tunnel 1850 1849 * offload capabilities of the device 1851 1850 * @udp_tunnel_nic: UDP tunnel offload state 1851 + * @xdp_state: stores info on attached XDP BPF programs 1852 1852 * 1853 1853 * FIXME: cleanup struct net_device such that network protocol info 1854 1854 * moves out.
+1
include/linux/qed/qed_if.h
··· 623 623 #define QED_MFW_VERSION_3_OFFSET 24 624 624 625 625 u32 flash_size; 626 + bool b_arfs_capable; 626 627 bool b_inter_pf_switch; 627 628 bool tx_switching; 628 629 bool rdma_supported;
+4 -3
include/linux/skbuff.h
··· 3223 3223 * is untouched. Otherwise it is extended. Returns zero on 3224 3224 * success. The skb is freed on error if @free_on_error is true. 3225 3225 */ 3226 - static inline int __skb_put_padto(struct sk_buff *skb, unsigned int len, 3227 - bool free_on_error) 3226 + static inline int __must_check __skb_put_padto(struct sk_buff *skb, 3227 + unsigned int len, 3228 + bool free_on_error) 3228 3229 { 3229 3230 unsigned int size = skb->len; 3230 3231 ··· 3248 3247 * is untouched. Otherwise it is extended. Returns zero on 3249 3248 * success. The skb is freed on error. 3250 3249 */ 3251 - static inline int skb_put_padto(struct sk_buff *skb, unsigned int len) 3250 + static inline int __must_check skb_put_padto(struct sk_buff *skb, unsigned int len) 3252 3251 { 3253 3252 return __skb_put_padto(skb, len, true); 3254 3253 }
+1
include/net/flow.h
··· 116 116 fl4->saddr = saddr; 117 117 fl4->fl4_dport = dport; 118 118 fl4->fl4_sport = sport; 119 + fl4->flowi4_multipath_hash = 0; 119 120 } 120 121 121 122 /* Reset some input parameters after previous lookup */
-2
include/net/netlink.h
··· 726 726 * @hdrlen: length of family specific header 727 727 * @tb: destination array with maxtype+1 elements 728 728 * @maxtype: maximum attribute type to be expected 729 - * @validate: validation strictness 730 729 * @extack: extended ACK report struct 731 730 * 732 731 * See nla_parse() ··· 823 824 * @len: length of attribute stream 824 825 * @maxtype: maximum attribute type to be expected 825 826 * @policy: validation policy 826 - * @validate: validation strictness 827 827 * @extack: extended ACK report struct 828 828 * 829 829 * Validates all attributes in the specified attribute stream against the
+1
include/net/netns/nftables.h
··· 8 8 struct list_head tables; 9 9 struct list_head commit_list; 10 10 struct list_head module_list; 11 + struct list_head notify_list; 11 12 struct mutex commit_mutex; 12 13 unsigned int base_seq; 13 14 u8 gencursor;
+5 -3
include/net/sctp/structs.h
··· 226 226 data_ready_signalled:1; 227 227 228 228 atomic_t pd_mode; 229 + 230 + /* Fields after this point will be skipped on copies, like on accept 231 + * and peeloff operations 232 + */ 233 + 229 234 /* Receive to here while partial delivery is in effect. */ 230 235 struct sk_buff_head pd_lobby; 231 236 232 - /* These must be the last fields, as they will skipped on copies, 233 - * like on accept and peeloff operations 234 - */ 235 237 struct list_head auto_asconf_list; 236 238 int do_auto_asconf; 237 239 };
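The sctp hunk above reorders struct fields so that everything meant to survive accept()/peeloff copies sits before a documented boundary, and everything after it is skipped. That contract is typically enforced with an offsetof()-bounded copy; a sketch with hypothetical field names (not the sctp_ulpq layout):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Fields before `private_list` are inherited on copies; fields at and
 * after it (queues, lists) must stay with the original object. */
struct ulpq_like {
	uint32_t shared_a;
	uint32_t shared_b;
	uint32_t private_list;	/* first field NOT to copy */
	uint32_t private_flag;
};

/* Copy only the shared prefix, leaving dst's private tail untouched. */
static void copy_shared_prefix(struct ulpq_like *dst,
			       const struct ulpq_like *src)
{
	memcpy(dst, src, offsetof(struct ulpq_like, private_list));
}

/* Copy {1,2,3,4} over a zeroed dst and read back one field. */
static uint32_t demo_copy(int field)
{
	struct ulpq_like src = { 1, 2, 3, 4 };
	struct ulpq_like dst = { 0, 0, 0, 0 };

	copy_shared_prefix(&dst, &src);
	switch (field) {
	case 0: return dst.shared_a;
	case 1: return dst.shared_b;
	case 2: return dst.private_list;
	default: return dst.private_flag;
	}
}
```

With this layout, moving a field into or out of the copied region is a one-line struct edit, and the comment boundary in the diff documents which side each field belongs on.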
+3
include/net/vxlan.h
···
121 121 #define VXLAN_GBP_POLICY_APPLIED (BIT(3) << 16)
122 122 #define VXLAN_GBP_ID_MASK (0xFFFF)
123 123
124 + #define VXLAN_GBP_MASK (VXLAN_GBP_DONT_LEARN | VXLAN_GBP_POLICY_APPLIED | \
125 + VXLAN_GBP_ID_MASK)
126 +
124 127 /*
125 128 * VXLAN Generic Protocol Extension (VXLAN_F_GPE):
126 129 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+2
include/soc/mscc/ocelot.h
···
566 566 u8 ptp_cmd;
567 567 struct sk_buff_head tx_skbs;
568 568 u8 ts_id;
569 + spinlock_t ts_id_lock;
569 570
570 571 phy_interface_t phy_mode;
571 572
···
678 677 int ocelot_init(struct ocelot *ocelot);
679 678 void ocelot_deinit(struct ocelot *ocelot);
680 679 void ocelot_init_port(struct ocelot *ocelot, int port);
680 + void ocelot_deinit_port(struct ocelot *ocelot, int port);
681 681
682 682 /* DSA callbacks */
683 683 void ocelot_port_enable(struct ocelot *ocelot, int port,
+1
include/uapi/linux/ethtool_netlink.h
···
79 79 ETHTOOL_MSG_TSINFO_GET_REPLY,
80 80 ETHTOOL_MSG_CABLE_TEST_NTF,
81 81 ETHTOOL_MSG_CABLE_TEST_TDR_NTF,
82 + ETHTOOL_MSG_TUNNEL_INFO_GET_REPLY,
82 83
83 84 /* add new constants above here */
84 85 __ETHTOOL_MSG_KERNEL_CNT,
+4 -11
kernel/bpf/hashtab.c
··· 1622 1622 struct bpf_map *map; 1623 1623 struct bpf_htab *htab; 1624 1624 void *percpu_value_buf; // non-zero means percpu hash 1625 - unsigned long flags; 1626 1625 u32 bucket_id; 1627 1626 u32 skip_elems; 1628 1627 }; ··· 1631 1632 struct htab_elem *prev_elem) 1632 1633 { 1633 1634 const struct bpf_htab *htab = info->htab; 1634 - unsigned long flags = info->flags; 1635 1635 u32 skip_elems = info->skip_elems; 1636 1636 u32 bucket_id = info->bucket_id; 1637 1637 struct hlist_nulls_head *head; ··· 1654 1656 1655 1657 /* not found, unlock and go to the next bucket */ 1656 1658 b = &htab->buckets[bucket_id++]; 1657 - htab_unlock_bucket(htab, b, flags); 1659 + rcu_read_unlock(); 1658 1660 skip_elems = 0; 1659 1661 } 1660 1662 1661 1663 for (i = bucket_id; i < htab->n_buckets; i++) { 1662 1664 b = &htab->buckets[i]; 1663 - flags = htab_lock_bucket(htab, b); 1665 + rcu_read_lock(); 1664 1666 1665 1667 count = 0; 1666 1668 head = &b->head; 1667 1669 hlist_nulls_for_each_entry_rcu(elem, n, head, hash_node) { 1668 1670 if (count >= skip_elems) { 1669 - info->flags = flags; 1670 1671 info->bucket_id = i; 1671 1672 info->skip_elems = count; 1672 1673 return elem; ··· 1673 1676 count++; 1674 1677 } 1675 1678 1676 - htab_unlock_bucket(htab, b, flags); 1679 + rcu_read_unlock(); 1677 1680 skip_elems = 0; 1678 1681 } 1679 1682 ··· 1751 1754 1752 1755 static void bpf_hash_map_seq_stop(struct seq_file *seq, void *v) 1753 1756 { 1754 - struct bpf_iter_seq_hash_map_info *info = seq->private; 1755 - 1756 1757 if (!v) 1757 1758 (void)__bpf_hash_map_seq_show(seq, NULL); 1758 1759 else 1759 - htab_unlock_bucket(info->htab, 1760 - &info->htab->buckets[info->bucket_id], 1761 - info->flags); 1760 + rcu_read_unlock(); 1762 1761 } 1763 1762 1764 1763 static int bpf_iter_init_hash_map(void *priv_data,
+3 -1
kernel/bpf/inode.c
···
226 226 else
227 227 prev_key = key;
228 228
229 + rcu_read_lock();
229 230 if (map->ops->map_get_next_key(map, prev_key, key)) {
230 231 map_iter(m)->done = true;
231 - return NULL;
232 + key = NULL;
232 233 }
234 + rcu_read_unlock();
233 235 return key;
234 236 }
235 237
+1 -1
lib/test_rhashtable.c
···
434 434 } else {
435 435 if (WARN(err != -ENOENT, "removed non-existent element, error %d not %d",
436 436 err, -ENOENT))
437 - continue;
437 + continue;
438 438 }
439 439 }
440 440
+117 -28
net/batman-adv/bridge_loop_avoidance.c
··· 25 25 #include <linux/lockdep.h> 26 26 #include <linux/netdevice.h> 27 27 #include <linux/netlink.h> 28 + #include <linux/preempt.h> 28 29 #include <linux/rculist.h> 29 30 #include <linux/rcupdate.h> 30 31 #include <linux/seq_file.h> ··· 84 83 */ 85 84 static inline u32 batadv_choose_backbone_gw(const void *data, u32 size) 86 85 { 87 - const struct batadv_bla_claim *claim = (struct batadv_bla_claim *)data; 86 + const struct batadv_bla_backbone_gw *gw; 88 87 u32 hash = 0; 89 88 90 - hash = jhash(&claim->addr, sizeof(claim->addr), hash); 91 - hash = jhash(&claim->vid, sizeof(claim->vid), hash); 89 + gw = (struct batadv_bla_backbone_gw *)data; 90 + hash = jhash(&gw->orig, sizeof(gw->orig), hash); 91 + hash = jhash(&gw->vid, sizeof(gw->vid), hash); 92 92 93 93 return hash % size; 94 94 } ··· 1581 1579 } 1582 1580 1583 1581 /** 1584 - * batadv_bla_check_bcast_duplist() - Check if a frame is in the broadcast dup. 1582 + * batadv_bla_check_duplist() - Check if a frame is in the broadcast dup. 1585 1583 * @bat_priv: the bat priv with all the soft interface information 1586 - * @skb: contains the bcast_packet to be checked 1584 + * @skb: contains the multicast packet to be checked 1585 + * @payload_ptr: pointer to position inside the head buffer of the skb 1586 + * marking the start of the data to be CRC'ed 1587 + * @orig: originator mac address, NULL if unknown 1587 1588 * 1588 - * check if it is on our broadcast list. Another gateway might 1589 - * have sent the same packet because it is connected to the same backbone, 1590 - * so we have to remove this duplicate. 1589 + * Check if it is on our broadcast list. Another gateway might have sent the 1590 + * same packet because it is connected to the same backbone, so we have to 1591 + * remove this duplicate. 1591 1592 * 1592 1593 * This is performed by checking the CRC, which will tell us 1593 1594 * with a good chance that it is the same packet. 
If it is furthermore ··· 1599 1594 * 1600 1595 * Return: true if a packet is in the duplicate list, false otherwise. 1601 1596 */ 1602 - bool batadv_bla_check_bcast_duplist(struct batadv_priv *bat_priv, 1603 - struct sk_buff *skb) 1597 + static bool batadv_bla_check_duplist(struct batadv_priv *bat_priv, 1598 + struct sk_buff *skb, u8 *payload_ptr, 1599 + const u8 *orig) 1604 1600 { 1605 - int i, curr; 1606 - __be32 crc; 1607 - struct batadv_bcast_packet *bcast_packet; 1608 1601 struct batadv_bcast_duplist_entry *entry; 1609 1602 bool ret = false; 1610 - 1611 - bcast_packet = (struct batadv_bcast_packet *)skb->data; 1603 + int i, curr; 1604 + __be32 crc; 1612 1605 1613 1606 /* calculate the crc ... */ 1614 - crc = batadv_skb_crc32(skb, (u8 *)(bcast_packet + 1)); 1607 + crc = batadv_skb_crc32(skb, payload_ptr); 1615 1608 1616 1609 spin_lock_bh(&bat_priv->bla.bcast_duplist_lock); 1617 1610 ··· 1628 1625 if (entry->crc != crc) 1629 1626 continue; 1630 1627 1631 - if (batadv_compare_eth(entry->orig, bcast_packet->orig)) 1632 - continue; 1628 + /* are the originators both known and not anonymous? */ 1629 + if (orig && !is_zero_ether_addr(orig) && 1630 + !is_zero_ether_addr(entry->orig)) { 1631 + /* If known, check if the new frame came from 1632 + * the same originator: 1633 + * We are safe to take identical frames from the 1634 + * same orig, if known, as multiplications in 1635 + * the mesh are detected via the (orig, seqno) pair. 1636 + * So we can be a bit more liberal here and allow 1637 + * identical frames from the same orig which the source 1638 + * host might have sent multiple times on purpose. 1639 + */ 1640 + if (batadv_compare_eth(entry->orig, orig)) 1641 + continue; 1642 + } 1633 1643 1634 1644 /* this entry seems to match: same crc, not too old, 1635 1645 * and from another gw. therefore return true to forbid it. 
··· 1658 1642 entry = &bat_priv->bla.bcast_duplist[curr]; 1659 1643 entry->crc = crc; 1660 1644 entry->entrytime = jiffies; 1661 - ether_addr_copy(entry->orig, bcast_packet->orig); 1645 + 1646 + /* known originator */ 1647 + if (orig) 1648 + ether_addr_copy(entry->orig, orig); 1649 + /* anonymous originator */ 1650 + else 1651 + eth_zero_addr(entry->orig); 1652 + 1662 1653 bat_priv->bla.bcast_duplist_curr = curr; 1663 1654 1664 1655 out: 1665 1656 spin_unlock_bh(&bat_priv->bla.bcast_duplist_lock); 1666 1657 1667 1658 return ret; 1659 + } 1660 + 1661 + /** 1662 + * batadv_bla_check_ucast_duplist() - Check if a frame is in the broadcast dup. 1663 + * @bat_priv: the bat priv with all the soft interface information 1664 + * @skb: contains the multicast packet to be checked, decapsulated from a 1665 + * unicast_packet 1666 + * 1667 + * Check if it is on our broadcast list. Another gateway might have sent the 1668 + * same packet because it is connected to the same backbone, so we have to 1669 + * remove this duplicate. 1670 + * 1671 + * Return: true if a packet is in the duplicate list, false otherwise. 1672 + */ 1673 + static bool batadv_bla_check_ucast_duplist(struct batadv_priv *bat_priv, 1674 + struct sk_buff *skb) 1675 + { 1676 + return batadv_bla_check_duplist(bat_priv, skb, (u8 *)skb->data, NULL); 1677 + } 1678 + 1679 + /** 1680 + * batadv_bla_check_bcast_duplist() - Check if a frame is in the broadcast dup. 1681 + * @bat_priv: the bat priv with all the soft interface information 1682 + * @skb: contains the bcast_packet to be checked 1683 + * 1684 + * Check if it is on our broadcast list. Another gateway might have sent the 1685 + * same packet because it is connected to the same backbone, so we have to 1686 + * remove this duplicate. 1687 + * 1688 + * Return: true if a packet is in the duplicate list, false otherwise. 
1689 + */ 1690 + bool batadv_bla_check_bcast_duplist(struct batadv_priv *bat_priv, 1691 + struct sk_buff *skb) 1692 + { 1693 + struct batadv_bcast_packet *bcast_packet; 1694 + u8 *payload_ptr; 1695 + 1696 + bcast_packet = (struct batadv_bcast_packet *)skb->data; 1697 + payload_ptr = (u8 *)(bcast_packet + 1); 1698 + 1699 + return batadv_bla_check_duplist(bat_priv, skb, payload_ptr, 1700 + bcast_packet->orig); 1668 1701 } 1669 1702 1670 1703 /** ··· 1877 1812 * @bat_priv: the bat priv with all the soft interface information 1878 1813 * @skb: the frame to be checked 1879 1814 * @vid: the VLAN ID of the frame 1880 - * @is_bcast: the packet came in a broadcast packet type. 1815 + * @packet_type: the batman packet type this frame came in 1881 1816 * 1882 1817 * batadv_bla_rx avoidance checks if: 1883 1818 * * we have to race for a claim ··· 1889 1824 * further process the skb. 1890 1825 */ 1891 1826 bool batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb, 1892 - unsigned short vid, bool is_bcast) 1827 + unsigned short vid, int packet_type) 1893 1828 { 1894 1829 struct batadv_bla_backbone_gw *backbone_gw; 1895 1830 struct ethhdr *ethhdr; ··· 1911 1846 goto handled; 1912 1847 1913 1848 if (unlikely(atomic_read(&bat_priv->bla.num_requests))) 1914 - /* don't allow broadcasts while requests are in flight */ 1915 - if (is_multicast_ether_addr(ethhdr->h_dest) && is_bcast) 1916 - goto handled; 1849 + /* don't allow multicast packets while requests are in flight */ 1850 + if (is_multicast_ether_addr(ethhdr->h_dest)) 1851 + /* Both broadcast flooding or multicast-via-unicasts 1852 + * delivery might send to multiple backbone gateways 1853 + * sharing the same LAN and therefore need to coordinate 1854 + * which backbone gateway forwards into the LAN, 1855 + * by claiming the payload source address. 1856 + * 1857 + * Broadcast flooding and multicast-via-unicasts 1858 + * delivery use the following two batman packet types. 
1859 + * Note: explicitly exclude BATADV_UNICAST_4ADDR, 1860 + * as the DHCP gateway feature will send explicitly 1861 + * to only one BLA gateway, so the claiming process 1862 + * should be avoided there. 1863 + */ 1864 + if (packet_type == BATADV_BCAST || 1865 + packet_type == BATADV_UNICAST) 1866 + goto handled; 1867 + 1868 + /* potential duplicates from foreign BLA backbone gateways via 1869 + * multicast-in-unicast packets 1870 + */ 1871 + if (is_multicast_ether_addr(ethhdr->h_dest) && 1872 + packet_type == BATADV_UNICAST && 1873 + batadv_bla_check_ucast_duplist(bat_priv, skb)) 1874 + goto handled; 1917 1875 1918 1876 ether_addr_copy(search_claim.addr, ethhdr->h_source); 1919 1877 search_claim.vid = vid; ··· 1971 1883 goto allow; 1972 1884 } 1973 1885 1974 - /* if it is a broadcast ... */ 1975 - if (is_multicast_ether_addr(ethhdr->h_dest) && is_bcast) { 1886 + /* if it is a multicast ... */ 1887 + if (is_multicast_ether_addr(ethhdr->h_dest) && 1888 + (packet_type == BATADV_BCAST || packet_type == BATADV_UNICAST)) { 1976 1889 /* ... drop it. the responsible gateway is in charge. 1977 1890 * 1978 - * We need to check is_bcast because with the gateway 1891 + * We need to check packet type because with the gateway 1979 1892 * feature, broadcasts (like DHCP requests) may be sent 1980 - * using a unicast packet type. 1893 + * using a unicast 4 address packet type. See comment above. 1981 1894 */ 1982 1895 goto handled; 1983 1896 } else {
+2 -2
net/batman-adv/bridge_loop_avoidance.h
···
35 35
36 36 #ifdef CONFIG_BATMAN_ADV_BLA
37 37 bool batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb,
38 - unsigned short vid, bool is_bcast);
38 + unsigned short vid, int packet_type);
39 39 bool batadv_bla_tx(struct batadv_priv *bat_priv, struct sk_buff *skb,
40 40 unsigned short vid);
41 41 bool batadv_bla_is_backbone_gw(struct sk_buff *skb,
···
66 66
67 67 static inline bool batadv_bla_rx(struct batadv_priv *bat_priv,
68 68 struct sk_buff *skb, unsigned short vid,
69 - bool is_bcast)
69 + int packet_type)
70 70 {
71 71 return false;
72 72 }
+36 -10
net/batman-adv/multicast.c
··· 51 51 #include <uapi/linux/batadv_packet.h> 52 52 #include <uapi/linux/batman_adv.h> 53 53 54 + #include "bridge_loop_avoidance.h" 54 55 #include "hard-interface.h" 55 56 #include "hash.h" 56 57 #include "log.h" ··· 1436 1435 } 1437 1436 1438 1437 /** 1438 + * batadv_mcast_forw_send_orig() - send a multicast packet to an originator 1439 + * @bat_priv: the bat priv with all the soft interface information 1440 + * @skb: the multicast packet to send 1441 + * @vid: the vlan identifier 1442 + * @orig_node: the originator to send the packet to 1443 + * 1444 + * Return: NET_XMIT_DROP in case of error or NET_XMIT_SUCCESS otherwise. 1445 + */ 1446 + int batadv_mcast_forw_send_orig(struct batadv_priv *bat_priv, 1447 + struct sk_buff *skb, 1448 + unsigned short vid, 1449 + struct batadv_orig_node *orig_node) 1450 + { 1451 + /* Avoid sending multicast-in-unicast packets to other BLA 1452 + * gateways - they already got the frame from the LAN side 1453 + * we share with them. 1454 + * TODO: Refactor to take BLA into account earlier, to avoid 1455 + * reducing the mcast_fanout count. 
1456 + */ 1457 + if (batadv_bla_is_backbone_gw_orig(bat_priv, orig_node->orig, vid)) { 1458 + dev_kfree_skb(skb); 1459 + return NET_XMIT_SUCCESS; 1460 + } 1461 + 1462 + return batadv_send_skb_unicast(bat_priv, skb, BATADV_UNICAST, 0, 1463 + orig_node, vid); 1464 + } 1465 + 1466 + /** 1439 1467 * batadv_mcast_forw_tt() - forwards a packet to multicast listeners 1440 1468 * @bat_priv: the bat priv with all the soft interface information 1441 1469 * @skb: the multicast packet to transmit ··· 1501 1471 break; 1502 1472 } 1503 1473 1504 - batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0, 1505 - orig_entry->orig_node, vid); 1474 + batadv_mcast_forw_send_orig(bat_priv, newskb, vid, 1475 + orig_entry->orig_node); 1506 1476 } 1507 1477 rcu_read_unlock(); 1508 1478 ··· 1543 1513 break; 1544 1514 } 1545 1515 1546 - batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0, 1547 - orig_node, vid); 1516 + batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node); 1548 1517 } 1549 1518 rcu_read_unlock(); 1550 1519 return ret; ··· 1580 1551 break; 1581 1552 } 1582 1553 1583 - batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0, 1584 - orig_node, vid); 1554 + batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node); 1585 1555 } 1586 1556 rcu_read_unlock(); 1587 1557 return ret; ··· 1646 1618 break; 1647 1619 } 1648 1620 1649 - batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0, 1650 - orig_node, vid); 1621 + batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node); 1651 1622 } 1652 1623 rcu_read_unlock(); 1653 1624 return ret; ··· 1683 1656 break; 1684 1657 } 1685 1658 1686 - batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0, 1687 - orig_node, vid); 1659 + batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node); 1688 1660 } 1689 1661 rcu_read_unlock(); 1690 1662 return ret;
+15
net/batman-adv/multicast.h
··· 46 46 batadv_mcast_forw_mode(struct batadv_priv *bat_priv, struct sk_buff *skb, 47 47 struct batadv_orig_node **mcast_single_orig); 48 48 49 + int batadv_mcast_forw_send_orig(struct batadv_priv *bat_priv, 50 + struct sk_buff *skb, 51 + unsigned short vid, 52 + struct batadv_orig_node *orig_node); 53 + 49 54 int batadv_mcast_forw_send(struct batadv_priv *bat_priv, struct sk_buff *skb, 50 55 unsigned short vid); 51 56 ··· 74 69 struct batadv_orig_node **mcast_single_orig) 75 70 { 76 71 return BATADV_FORW_ALL; 72 + } 73 + 74 + static inline int 75 + batadv_mcast_forw_send_orig(struct batadv_priv *bat_priv, 76 + struct sk_buff *skb, 77 + unsigned short vid, 78 + struct batadv_orig_node *orig_node) 79 + { 80 + kfree_skb(skb); 81 + return NET_XMIT_DROP; 77 82 } 78 83 79 84 static inline int
+4
net/batman-adv/routing.c
···
826 826 vid = batadv_get_vid(skb, hdr_len);
827 827 ethhdr = (struct ethhdr *)(skb->data + hdr_len);
828 828
829 + /* do not reroute multicast frames in a unicast header */
830 + if (is_multicast_ether_addr(ethhdr->h_dest))
831 + return true;
832 +
829 833 /* check if the destination client was served by this node and it is now
830 834 * roaming. In this case, it means that the node has got a ROAM_ADV
831 835 * message and that it knows the new destination in the mesh to re-route
+5 -6
net/batman-adv/soft-interface.c
··· 364 364 goto dropped; 365 365 ret = batadv_send_skb_via_gw(bat_priv, skb, vid); 366 366 } else if (mcast_single_orig) { 367 - ret = batadv_send_skb_unicast(bat_priv, skb, 368 - BATADV_UNICAST, 0, 369 - mcast_single_orig, vid); 367 + ret = batadv_mcast_forw_send_orig(bat_priv, skb, vid, 368 + mcast_single_orig); 370 369 } else if (forw_mode == BATADV_FORW_SOME) { 371 370 ret = batadv_mcast_forw_send(bat_priv, skb, vid); 372 371 } else { ··· 424 425 struct vlan_ethhdr *vhdr; 425 426 struct ethhdr *ethhdr; 426 427 unsigned short vid; 427 - bool is_bcast; 428 + int packet_type; 428 429 429 430 batadv_bcast_packet = (struct batadv_bcast_packet *)skb->data; 430 - is_bcast = (batadv_bcast_packet->packet_type == BATADV_BCAST); 431 + packet_type = batadv_bcast_packet->packet_type; 431 432 432 433 skb_pull_rcsum(skb, hdr_size); 433 434 skb_reset_mac_header(skb); ··· 470 471 /* Let the bridge loop avoidance check the packet. If will 471 472 * not handle it, we can safely push it up. 472 473 */ 473 - if (batadv_bla_rx(bat_priv, skb, vid, is_bcast)) 474 + if (batadv_bla_rx(bat_priv, skb, vid, packet_type)) 474 475 goto out; 475 476 476 477 if (orig_node)
+17 -10
net/bridge/br_vlan.c
··· 1288 1288 } 1289 1289 } 1290 1290 1291 - static int __br_vlan_get_pvid(const struct net_device *dev, 1292 - struct net_bridge_port *p, u16 *p_pvid) 1291 + int br_vlan_get_pvid(const struct net_device *dev, u16 *p_pvid) 1293 1292 { 1294 1293 struct net_bridge_vlan_group *vg; 1294 + struct net_bridge_port *p; 1295 1295 1296 + ASSERT_RTNL(); 1297 + p = br_port_get_check_rtnl(dev); 1296 1298 if (p) 1297 1299 vg = nbp_vlan_group(p); 1298 1300 else if (netif_is_bridge_master(dev)) ··· 1305 1303 *p_pvid = br_get_pvid(vg); 1306 1304 return 0; 1307 1305 } 1308 - 1309 - int br_vlan_get_pvid(const struct net_device *dev, u16 *p_pvid) 1310 - { 1311 - ASSERT_RTNL(); 1312 - 1313 - return __br_vlan_get_pvid(dev, br_port_get_check_rtnl(dev), p_pvid); 1314 - } 1315 1306 EXPORT_SYMBOL_GPL(br_vlan_get_pvid); 1316 1307 1317 1308 int br_vlan_get_pvid_rcu(const struct net_device *dev, u16 *p_pvid) 1318 1309 { 1319 - return __br_vlan_get_pvid(dev, br_port_get_check_rcu(dev), p_pvid); 1310 + struct net_bridge_vlan_group *vg; 1311 + struct net_bridge_port *p; 1312 + 1313 + p = br_port_get_check_rcu(dev); 1314 + if (p) 1315 + vg = nbp_vlan_group_rcu(p); 1316 + else if (netif_is_bridge_master(dev)) 1317 + vg = br_vlan_group_rcu(netdev_priv(dev)); 1318 + else 1319 + return -EINVAL; 1320 + 1321 + *p_pvid = br_get_pvid(vg); 1322 + return 0; 1320 1323 } 1321 1324 EXPORT_SYMBOL_GPL(br_vlan_get_pvid_rcu); 1322 1325
+1 -1
net/core/dev.c
···
8647 8647 if (!first.id_len)
8648 8648 first = *ppid;
8649 8649 else if (memcmp(&first, ppid, sizeof(*ppid)))
8650 - return -ENODATA;
8650 + return -EOPNOTSUPP;
8651 8651 }
8652 8652
8653 8653 return err;
+1 -1
net/core/dst.c
···
144 144
145 145 /* Operations to mark dst as DEAD and clean up the net device referenced
146 146 * by dst:
147 - * 1. put the dst under loopback interface and discard all tx/rx packets
147 + * 1. put the dst under blackhole interface and discard all tx/rx packets
148 148 * on this route.
149 149 * 2. release the net_device
150 150 * This function should be called when removing routes from the fib tree
+1 -1
net/core/fib_rules.c
···
16 16 #include <net/ip_tunnels.h>
17 17 #include <linux/indirect_call_wrapper.h>
18 18
19 - #ifdef CONFIG_IPV6_MULTIPLE_TABLES
19 + #if defined(CONFIG_IPV6) && defined(CONFIG_IPV6_MULTIPLE_TABLES)
20 20 #ifdef CONFIG_IP_MULTIPLE_TABLES
21 21 #define INDIRECT_CALL_MT(f, f2, f1, ...) \
22 22 INDIRECT_CALL_INET(f, f2, f1, __VA_ARGS__)
+10 -9
net/core/filter.c
··· 4838 4838 fl4.saddr = params->ipv4_src; 4839 4839 fl4.fl4_sport = params->sport; 4840 4840 fl4.fl4_dport = params->dport; 4841 + fl4.flowi4_multipath_hash = 0; 4841 4842 4842 4843 if (flags & BPF_FIB_LOOKUP_DIRECT) { 4843 4844 u32 tbid = l3mdev_fib_table_rcu(dev) ? : RT_TABLE_MAIN; ··· 7066 7065 bool indirect = BPF_MODE(orig->code) == BPF_IND; 7067 7066 struct bpf_insn *insn = insn_buf; 7068 7067 7069 - /* We're guaranteed here that CTX is in R6. */ 7070 - *insn++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_CTX); 7071 7068 if (!indirect) { 7072 7069 *insn++ = BPF_MOV64_IMM(BPF_REG_2, orig->imm); 7073 7070 } else { ··· 7073 7074 if (orig->imm) 7074 7075 *insn++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, orig->imm); 7075 7076 } 7077 + /* We're guaranteed here that CTX is in R6. */ 7078 + *insn++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_CTX); 7076 7079 7077 7080 switch (BPF_SIZE(orig->code)) { 7078 7081 case BPF_B: ··· 9523 9522 * trigger an explicit type generation here. 9524 9523 */ 9525 9524 BTF_TYPE_EMIT(struct tcp6_sock); 9526 - if (sk_fullsock(sk) && sk->sk_protocol == IPPROTO_TCP && 9525 + if (sk && sk_fullsock(sk) && sk->sk_protocol == IPPROTO_TCP && 9527 9526 sk->sk_family == AF_INET6) 9528 9527 return (unsigned long)sk; 9529 9528 ··· 9541 9540 9542 9541 BPF_CALL_1(bpf_skc_to_tcp_sock, struct sock *, sk) 9543 9542 { 9544 - if (sk_fullsock(sk) && sk->sk_protocol == IPPROTO_TCP) 9543 + if (sk && sk_fullsock(sk) && sk->sk_protocol == IPPROTO_TCP) 9545 9544 return (unsigned long)sk; 9546 9545 9547 9546 return (unsigned long)NULL; ··· 9559 9558 BPF_CALL_1(bpf_skc_to_tcp_timewait_sock, struct sock *, sk) 9560 9559 { 9561 9560 #ifdef CONFIG_INET 9562 - if (sk->sk_prot == &tcp_prot && sk->sk_state == TCP_TIME_WAIT) 9561 + if (sk && sk->sk_prot == &tcp_prot && sk->sk_state == TCP_TIME_WAIT) 9563 9562 return (unsigned long)sk; 9564 9563 #endif 9565 9564 9566 9565 #if IS_BUILTIN(CONFIG_IPV6) 9567 - if (sk->sk_prot == &tcpv6_prot && sk->sk_state == TCP_TIME_WAIT) 9566 + if (sk && 
sk->sk_prot == &tcpv6_prot && sk->sk_state == TCP_TIME_WAIT) 9568 9567 return (unsigned long)sk; 9569 9568 #endif 9570 9569 ··· 9583 9582 BPF_CALL_1(bpf_skc_to_tcp_request_sock, struct sock *, sk) 9584 9583 { 9585 9584 #ifdef CONFIG_INET 9586 - if (sk->sk_prot == &tcp_prot && sk->sk_state == TCP_NEW_SYN_RECV) 9585 + if (sk && sk->sk_prot == &tcp_prot && sk->sk_state == TCP_NEW_SYN_RECV) 9587 9586 return (unsigned long)sk; 9588 9587 #endif 9589 9588 9590 9589 #if IS_BUILTIN(CONFIG_IPV6) 9591 - if (sk->sk_prot == &tcpv6_prot && sk->sk_state == TCP_NEW_SYN_RECV) 9590 + if (sk && sk->sk_prot == &tcpv6_prot && sk->sk_state == TCP_NEW_SYN_RECV) 9592 9591 return (unsigned long)sk; 9593 9592 #endif 9594 9593 ··· 9610 9609 * trigger an explicit type generation here. 9611 9610 */ 9612 9611 BTF_TYPE_EMIT(struct udp6_sock); 9613 - if (sk_fullsock(sk) && sk->sk_protocol == IPPROTO_UDP && 9612 + if (sk && sk_fullsock(sk) && sk->sk_protocol == IPPROTO_UDP && 9614 9613 sk->sk_type == SOCK_DGRAM && sk->sk_family == AF_INET6) 9615 9614 return (unsigned long)sk; 9616 9615
+11 -11
net/core/net_namespace.c
··· 251 251 if (refcount_read(&net->count) == 0) 252 252 return NETNSA_NSID_NOT_ASSIGNED; 253 253 254 - spin_lock(&net->nsid_lock); 254 + spin_lock_bh(&net->nsid_lock); 255 255 id = __peernet2id(net, peer); 256 256 if (id >= 0) { 257 - spin_unlock(&net->nsid_lock); 257 + spin_unlock_bh(&net->nsid_lock); 258 258 return id; 259 259 } 260 260 ··· 264 264 * just been idr_remove()'d from there in cleanup_net(). 265 265 */ 266 266 if (!maybe_get_net(peer)) { 267 - spin_unlock(&net->nsid_lock); 267 + spin_unlock_bh(&net->nsid_lock); 268 268 return NETNSA_NSID_NOT_ASSIGNED; 269 269 } 270 270 271 271 id = alloc_netid(net, peer, -1); 272 - spin_unlock(&net->nsid_lock); 272 + spin_unlock_bh(&net->nsid_lock); 273 273 274 274 put_net(peer); 275 275 if (id < 0) ··· 534 534 for_each_net(tmp) { 535 535 int id; 536 536 537 - spin_lock(&tmp->nsid_lock); 537 + spin_lock_bh(&tmp->nsid_lock); 538 538 id = __peernet2id(tmp, net); 539 539 if (id >= 0) 540 540 idr_remove(&tmp->netns_ids, id); 541 - spin_unlock(&tmp->nsid_lock); 541 + spin_unlock_bh(&tmp->nsid_lock); 542 542 if (id >= 0) 543 543 rtnl_net_notifyid(tmp, RTM_DELNSID, id, 0, NULL, 544 544 GFP_KERNEL); 545 545 if (tmp == last) 546 546 break; 547 547 } 548 - spin_lock(&net->nsid_lock); 548 + spin_lock_bh(&net->nsid_lock); 549 549 idr_destroy(&net->netns_ids); 550 - spin_unlock(&net->nsid_lock); 550 + spin_unlock_bh(&net->nsid_lock); 551 551 } 552 552 553 553 static LLIST_HEAD(cleanup_list); ··· 760 760 return PTR_ERR(peer); 761 761 } 762 762 763 - spin_lock(&net->nsid_lock); 763 + spin_lock_bh(&net->nsid_lock); 764 764 if (__peernet2id(net, peer) >= 0) { 765 - spin_unlock(&net->nsid_lock); 765 + spin_unlock_bh(&net->nsid_lock); 766 766 err = -EEXIST; 767 767 NL_SET_BAD_ATTR(extack, nla); 768 768 NL_SET_ERR_MSG(extack, ··· 771 771 } 772 772 773 773 err = alloc_netid(net, peer, nsid); 774 - spin_unlock(&net->nsid_lock); 774 + spin_unlock_bh(&net->nsid_lock); 775 775 if (err >= 0) { 776 776 rtnl_net_notifyid(net, RTM_NEWNSID, err, 
NETLINK_CB(skb).portid, 777 777 nlh, GFP_KERNEL);
+8
net/dcb/dcbnl.c
···
1426 1426 {
1427 1427 const struct dcbnl_rtnl_ops *ops = netdev->dcbnl_ops;
1428 1428 struct nlattr *ieee[DCB_ATTR_IEEE_MAX + 1];
1429 + int prio;
1429 1430 int err;
1430 1431
1431 1432 if (!ops)
···
1475 1474 if (ieee[DCB_ATTR_DCB_BUFFER] && ops->dcbnl_setbuffer) {
1476 1475 struct dcbnl_buffer *buffer =
1477 1476 nla_data(ieee[DCB_ATTR_DCB_BUFFER]);
1477 +
1478 + for (prio = 0; prio < ARRAY_SIZE(buffer->prio2buffer); prio++) {
1479 + if (buffer->prio2buffer[prio] >= DCBX_MAX_BUFFERS) {
1480 + err = -EINVAL;
1481 + goto err;
1482 + }
1483 + }
1478 1484
1479 1485 err = ops->dcbnl_setbuffer(netdev, buffer);
1480 1486 if (err)
+16 -2
net/dsa/slave.c
··· 1799 1799 1800 1800 dsa_slave_notify(slave_dev, DSA_PORT_REGISTER); 1801 1801 1802 - ret = register_netdev(slave_dev); 1802 + rtnl_lock(); 1803 + 1804 + ret = register_netdevice(slave_dev); 1803 1805 if (ret) { 1804 1806 netdev_err(master, "error %d registering interface %s\n", 1805 1807 ret, slave_dev->name); 1808 + rtnl_unlock(); 1806 1809 goto out_phy; 1807 1810 } 1808 1811 1812 + ret = netdev_upper_dev_link(master, slave_dev, NULL); 1813 + 1814 + rtnl_unlock(); 1815 + 1816 + if (ret) 1817 + goto out_unregister; 1818 + 1809 1819 return 0; 1810 1820 1821 + out_unregister: 1822 + unregister_netdev(slave_dev); 1811 1823 out_phy: 1812 1824 rtnl_lock(); 1813 1825 phylink_disconnect_phy(p->dp->pl); ··· 1836 1824 1837 1825 void dsa_slave_destroy(struct net_device *slave_dev) 1838 1826 { 1827 + struct net_device *master = dsa_slave_to_master(slave_dev); 1839 1828 struct dsa_port *dp = dsa_slave_to_port(slave_dev); 1840 1829 struct dsa_slave_priv *p = netdev_priv(slave_dev); 1841 1830 1842 1831 netif_carrier_off(slave_dev); 1843 1832 rtnl_lock(); 1833 + netdev_upper_dev_unlink(master, slave_dev); 1834 + unregister_netdevice(slave_dev); 1844 1835 phylink_disconnect_phy(dp->pl); 1845 1836 rtnl_unlock(); 1846 1837 1847 1838 dsa_slave_notify(slave_dev, DSA_PORT_UNREGISTER); 1848 - unregister_netdev(slave_dev); 1849 1839 phylink_destroy(dp->pl); 1850 1840 gro_cells_destroy(&p->gcells); 1851 1841 free_percpu(p->stats64);
+7 -4
net/dsa/tag_ocelot.c
···
160 160 packing(injection, &qos_class, 19, 17, OCELOT_TAG_LEN, PACK, 0);
161 161
162 162 if (ocelot->ptp && (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) {
163 + struct sk_buff *clone = DSA_SKB_CB(skb)->clone;
164 +
163 165 rew_op = ocelot_port->ptp_cmd;
164 - if (ocelot_port->ptp_cmd == IFH_REW_OP_TWO_STEP_PTP) {
165 - rew_op |= (ocelot_port->ts_id % 4) << 3;
166 - ocelot_port->ts_id++;
167 - }
166 + /* Retrieve timestamp ID populated inside skb->cb[0] of the
167 + * clone by ocelot_port_add_txtstamp_skb
168 + */
169 + if (ocelot_port->ptp_cmd == IFH_REW_OP_TWO_STEP_PTP)
170 + rew_op |= clone->cb[0] << 3;
168 171
169 172 packing(injection, &rew_op, 125, 117, OCELOT_TAG_LEN, PACK, 0);
170 173 }
+2 -2
net/ethtool/tunnels.c
···
200 200 reply_len = ret + ethnl_reply_header_size();
201 201
202 202 rskb = ethnl_reply_init(reply_len, req_info.dev,
203 - ETHTOOL_MSG_TUNNEL_INFO_GET,
203 + ETHTOOL_MSG_TUNNEL_INFO_GET_REPLY,
204 204 ETHTOOL_A_TUNNEL_INFO_HEADER,
205 205 info, &reply_payload);
206 206 if (!rskb) {
···
273 273 goto cont;
274 274
275 275 ehdr = ethnl_dump_put(skb, cb,
276 - ETHTOOL_MSG_TUNNEL_INFO_GET);
276 + ETHTOOL_MSG_TUNNEL_INFO_GET_REPLY);
277 277 if (!ehdr) {
278 278 ret = -EMSGSIZE;
279 279 goto out;
+3 -3
net/hsr/hsr_netlink.c
···
76 76 proto = nla_get_u8(data[IFLA_HSR_PROTOCOL]);
77 77
78 78 if (proto >= HSR_PROTOCOL_MAX) {
79 - NL_SET_ERR_MSG_MOD(extack, "Unsupported protocol\n");
79 + NL_SET_ERR_MSG_MOD(extack, "Unsupported protocol");
80 80 return -EINVAL;
81 81 }
82 82
···
84 84 proto_version = HSR_V0;
85 85 } else {
86 86 if (proto == HSR_PROTOCOL_PRP) {
87 - NL_SET_ERR_MSG_MOD(extack, "PRP version unsupported\n");
87 + NL_SET_ERR_MSG_MOD(extack, "PRP version unsupported");
88 88 return -EINVAL;
89 89 }
90 90
91 91 proto_version = nla_get_u8(data[IFLA_HSR_VERSION]);
92 92 if (proto_version > HSR_V1) {
93 93 NL_SET_ERR_MSG_MOD(extack,
94 - "Only HSR version 0/1 supported\n");
94 + "Only HSR version 0/1 supported");
95 95 return -EINVAL;
96 96 }
97 97 }
+1
net/ipv4/fib_frontend.c
···
362 362 fl4.flowi4_tun_key.tun_id = 0;
363 363 fl4.flowi4_flags = 0;
364 364 fl4.flowi4_uid = sock_net_uid(net, NULL);
365 + fl4.flowi4_multipath_hash = 0;
365 366
366 367 no_addr = idev->ifa_list == NULL;
367 368
+15 -5
net/ipv4/inet_diag.c
··· 186 186 } 187 187 EXPORT_SYMBOL_GPL(inet_diag_msg_attrs_fill); 188 188 189 - static void inet_diag_parse_attrs(const struct nlmsghdr *nlh, int hdrlen, 190 - struct nlattr **req_nlas) 189 + static int inet_diag_parse_attrs(const struct nlmsghdr *nlh, int hdrlen, 190 + struct nlattr **req_nlas) 191 191 { 192 192 struct nlattr *nla; 193 193 int remaining; ··· 195 195 nlmsg_for_each_attr(nla, nlh, hdrlen, remaining) { 196 196 int type = nla_type(nla); 197 197 198 + if (type == INET_DIAG_REQ_PROTOCOL && nla_len(nla) != sizeof(u32)) 199 + return -EINVAL; 200 + 198 201 if (type < __INET_DIAG_REQ_MAX) 199 202 req_nlas[type] = nla; 200 203 } 204 + return 0; 201 205 } 202 206 203 207 static int inet_diag_get_protocol(const struct inet_diag_req_v2 *req, ··· 578 574 int err, protocol; 579 575 580 576 memset(&dump_data, 0, sizeof(dump_data)); 581 - inet_diag_parse_attrs(nlh, hdrlen, dump_data.req_nlas); 577 + err = inet_diag_parse_attrs(nlh, hdrlen, dump_data.req_nlas); 578 + if (err) 579 + return err; 580 + 582 581 protocol = inet_diag_get_protocol(req, &dump_data); 583 582 584 583 handler = inet_diag_lock_handler(protocol); ··· 1187 1180 if (!cb_data) 1188 1181 return -ENOMEM; 1189 1182 1190 - inet_diag_parse_attrs(nlh, hdrlen, cb_data->req_nlas); 1191 - 1183 + err = inet_diag_parse_attrs(nlh, hdrlen, cb_data->req_nlas); 1184 + if (err) { 1185 + kfree(cb_data); 1186 + return err; 1187 + } 1192 1188 nla = cb_data->inet_diag_nla_bc; 1193 1189 if (nla) { 1194 1190 err = inet_diag_bc_audit(nla, skb);
+2 -1
net/ipv4/ip_output.c
··· 74 74 #include <net/icmp.h> 75 75 #include <net/checksum.h> 76 76 #include <net/inetpeer.h> 77 + #include <net/inet_ecn.h> 77 78 #include <net/lwtunnel.h> 78 79 #include <linux/bpf-cgroup.h> 79 80 #include <linux/igmp.h> ··· 1704 1703 if (IS_ERR(rt)) 1705 1704 return; 1706 1705 1707 - inet_sk(sk)->tos = arg->tos; 1706 + inet_sk(sk)->tos = arg->tos & ~INET_ECN_MASK; 1708 1707 1709 1708 sk->sk_protocol = ip_hdr(skb)->protocol; 1710 1709 sk->sk_bound_dev_if = arg->bound_dev_if;
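The ip_output.c change stops TCP reset/ACK replies from echoing the peer's ECN bits inside the TOS byte. A minimal sketch of the masking; INET_ECN_MASK is the low two bits of the TOS byte (value 3, as in include/net/inet_ecn.h):

```c
#include <assert.h>
#include <stdint.h>

#define INET_ECN_MASK 3  /* low two bits of TOS carry ECN */

/* Mirrors `inet_sk(sk)->tos = arg->tos & ~INET_ECN_MASK;`: the reply keeps
 * the DSCP part of the received TOS but never reflects the ECN codepoint. */
static uint8_t reply_tos(uint8_t received_tos)
{
	return received_tos & ~INET_ECN_MASK;
}
```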
+1
net/ipv4/ip_tunnel_core.c
··· 554 554 555 555 attr = tb[LWTUNNEL_IP_OPT_VXLAN_GBP]; 556 556 md->gbp = nla_get_u32(attr); 557 + md->gbp &= VXLAN_GBP_MASK; 557 558 info->key.tun_flags |= TUNNEL_VXLAN_OPT; 558 559 } 559 560
+9 -5
net/ipv4/route.c
··· 786 786 neigh_event_send(n, NULL); 787 787 } else { 788 788 if (fib_lookup(net, fl4, &res, 0) == 0) { 789 - struct fib_nh_common *nhc = FIB_RES_NHC(res); 789 + struct fib_nh_common *nhc; 790 790 791 + fib_select_path(net, &res, fl4, skb); 792 + nhc = FIB_RES_NHC(res); 791 793 update_or_create_fnhe(nhc, fl4->daddr, new_gw, 792 794 0, false, 793 795 jiffies + ip_rt_gc_timeout); ··· 1015 1013 static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu) 1016 1014 { 1017 1015 struct dst_entry *dst = &rt->dst; 1016 + struct net *net = dev_net(dst->dev); 1018 1017 u32 old_mtu = ipv4_mtu(dst); 1019 1018 struct fib_result res; 1020 1019 bool lock = false; ··· 1036 1033 return; 1037 1034 1038 1035 rcu_read_lock(); 1039 - if (fib_lookup(dev_net(dst->dev), fl4, &res, 0) == 0) { 1040 - struct fib_nh_common *nhc = FIB_RES_NHC(res); 1036 + if (fib_lookup(net, fl4, &res, 0) == 0) { 1037 + struct fib_nh_common *nhc; 1041 1038 1039 + fib_select_path(net, &res, fl4, NULL); 1040 + nhc = FIB_RES_NHC(res); 1042 1041 update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock, 1043 1042 jiffies + ip_rt_mtu_expires); 1044 1043 } ··· 2152 2147 fl4.daddr = daddr; 2153 2148 fl4.saddr = saddr; 2154 2149 fl4.flowi4_uid = sock_net_uid(net, NULL); 2150 + fl4.flowi4_multipath_hash = 0; 2155 2151 2156 2152 if (fib4_rules_early_flow_dissect(net, skb, &fl4, &_flkeys)) { 2157 2153 flkeys = &_flkeys; ··· 2673 2667 fib_select_path(net, res, fl4, skb); 2674 2668 2675 2669 dev_out = FIB_RES_DEV(*res); 2676 - fl4->flowi4_oif = dev_out->ifindex; 2677 - 2678 2670 2679 2671 make_route: 2680 2672 rth = __mkroute_output(res, fl4, orig_oif, dev_out, flags);
+1
net/ipv6/Kconfig
··· 303 303 config IPV6_SEG6_HMAC 304 304 bool "IPv6: Segment Routing HMAC support" 305 305 depends on IPV6 306 + select CRYPTO 306 307 select CRYPTO_HMAC 307 308 select CRYPTO_SHA1 308 309 select CRYPTO_SHA256
+9 -4
net/ipv6/ip6_fib.c
··· 1993 1993 /* Need to own table->tb6_lock */ 1994 1994 int fib6_del(struct fib6_info *rt, struct nl_info *info) 1995 1995 { 1996 - struct fib6_node *fn = rcu_dereference_protected(rt->fib6_node, 1997 - lockdep_is_held(&rt->fib6_table->tb6_lock)); 1998 - struct fib6_table *table = rt->fib6_table; 1999 1996 struct net *net = info->nl_net; 2000 1997 struct fib6_info __rcu **rtp; 2001 1998 struct fib6_info __rcu **rtp_next; 1999 + struct fib6_table *table; 2000 + struct fib6_node *fn; 2002 2001 2003 - if (!fn || rt == net->ipv6.fib6_null_entry) 2002 + if (rt == net->ipv6.fib6_null_entry) 2003 + return -ENOENT; 2004 + 2005 + table = rt->fib6_table; 2006 + fn = rcu_dereference_protected(rt->fib6_node, 2007 + lockdep_is_held(&table->tb6_lock)); 2008 + if (!fn) 2004 2009 return -ENOENT; 2005 2010 2006 2011 WARN_ON(!(fn->fn_flags & RTN_RTINFO));
+1 -1
net/ipv6/route.c
··· 4202 4202 .fc_nlinfo.nl_net = net, 4203 4203 }; 4204 4204 4205 - cfg.fc_table = l3mdev_fib_table(dev) ? : RT6_TABLE_INFO, 4205 + cfg.fc_table = l3mdev_fib_table(dev) ? : RT6_TABLE_INFO; 4206 4206 cfg.fc_dst = *prefix; 4207 4207 cfg.fc_gateway = *gwaddr; 4208 4208
+14 -6
net/mac80211/airtime.c
··· 560 560 if (rate->idx < 0 || !rate->count) 561 561 return -1; 562 562 563 - if (rate->flags & IEEE80211_TX_RC_80_MHZ_WIDTH) 563 + if (rate->flags & IEEE80211_TX_RC_160_MHZ_WIDTH) 564 + stat->bw = RATE_INFO_BW_160; 565 + else if (rate->flags & IEEE80211_TX_RC_80_MHZ_WIDTH) 564 566 stat->bw = RATE_INFO_BW_80; 565 567 else if (rate->flags & IEEE80211_TX_RC_40_MHZ_WIDTH) 566 568 stat->bw = RATE_INFO_BW_40; ··· 670 668 * This will not be very accurate, but much better than simply 671 669 * assuming un-aggregated tx in all cases. 672 670 */ 673 - if (duration > 400) /* <= VHT20 MCS2 1S */ 671 + if (duration > 400 * 1024) /* <= VHT20 MCS2 1S */ 674 672 agg_shift = 1; 675 - else if (duration > 250) /* <= VHT20 MCS3 1S or MCS1 2S */ 673 + else if (duration > 250 * 1024) /* <= VHT20 MCS3 1S or MCS1 2S */ 676 674 agg_shift = 2; 677 - else if (duration > 150) /* <= VHT20 MCS5 1S or MCS3 2S */ 675 + else if (duration > 150 * 1024) /* <= VHT20 MCS5 1S or MCS2 2S */ 678 676 agg_shift = 3; 679 - else 677 + else if (duration > 70 * 1024) /* <= VHT20 MCS5 2S */ 680 678 agg_shift = 4; 679 + else if (stat.encoding != RX_ENC_HE || 680 + duration > 20 * 1024) /* <= HE40 MCS6 2S */ 681 + agg_shift = 5; 682 + else 683 + agg_shift = 6; 681 684 682 685 duration *= len; 683 686 duration /= AVG_PKT_SIZE; 684 687 duration /= 1024; 688 + duration += (overhead >> agg_shift); 685 689 686 - return duration + (overhead >> agg_shift); 690 + return max_t(u32, duration, 4); 687 691 } 688 692 689 693 if (!conf)
+2 -1
net/mac80211/mlme.c
··· 4861 4861 struct ieee80211_supported_band *sband; 4862 4862 struct cfg80211_chan_def chandef; 4863 4863 bool is_6ghz = cbss->channel->band == NL80211_BAND_6GHZ; 4864 + bool is_5ghz = cbss->channel->band == NL80211_BAND_5GHZ; 4864 4865 struct ieee80211_bss *bss = (void *)cbss->priv; 4865 4866 int ret; 4866 4867 u32 i; ··· 4880 4879 ifmgd->flags |= IEEE80211_STA_DISABLE_HE; 4881 4880 } 4882 4881 4883 - if (!sband->vht_cap.vht_supported && !is_6ghz) { 4882 + if (!sband->vht_cap.vht_supported && is_5ghz) { 4884 4883 ifmgd->flags |= IEEE80211_STA_DISABLE_VHT; 4885 4884 ifmgd->flags |= IEEE80211_STA_DISABLE_HE; 4886 4885 }
+2 -1
net/mac80211/rx.c
··· 451 451 else if (status->bw == RATE_INFO_BW_5) 452 452 channel_flags |= IEEE80211_CHAN_QUARTER; 453 453 454 - if (status->band == NL80211_BAND_5GHZ) 454 + if (status->band == NL80211_BAND_5GHZ || 455 + status->band == NL80211_BAND_6GHZ) 455 456 channel_flags |= IEEE80211_CHAN_OFDM | IEEE80211_CHAN_5GHZ; 456 457 else if (status->encoding != RX_ENC_LEGACY) 457 458 channel_flags |= IEEE80211_CHAN_DYN | IEEE80211_CHAN_2GHZ;
+4 -3
net/mac80211/util.c
··· 3353 3353 he_chandef.center_freq1 = 3354 3354 ieee80211_channel_to_frequency(he_6ghz_oper->ccfs0, 3355 3355 NL80211_BAND_6GHZ); 3356 - he_chandef.center_freq2 = 3357 - ieee80211_channel_to_frequency(he_6ghz_oper->ccfs1, 3358 - NL80211_BAND_6GHZ); 3356 + if (support_80_80 || support_160) 3357 + he_chandef.center_freq2 = 3358 + ieee80211_channel_to_frequency(he_6ghz_oper->ccfs1, 3359 + NL80211_BAND_6GHZ); 3359 3360 } 3360 3361 3361 3362 if (!cfg80211_chandef_valid(&he_chandef)) {
+4 -4
net/mac80211/vht.c
··· 168 168 /* take some capabilities as-is */ 169 169 cap_info = le32_to_cpu(vht_cap_ie->vht_cap_info); 170 170 vht_cap->cap = cap_info; 171 - vht_cap->cap &= IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_3895 | 172 - IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_7991 | 173 - IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454 | 174 - IEEE80211_VHT_CAP_RXLDPC | 171 + vht_cap->cap &= IEEE80211_VHT_CAP_RXLDPC | 175 172 IEEE80211_VHT_CAP_VHT_TXOP_PS | 176 173 IEEE80211_VHT_CAP_HTC_VHT | 177 174 IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK | ··· 176 179 IEEE80211_VHT_CAP_VHT_LINK_ADAPTATION_VHT_MRQ_MFB | 177 180 IEEE80211_VHT_CAP_RX_ANTENNA_PATTERN | 178 181 IEEE80211_VHT_CAP_TX_ANTENNA_PATTERN; 182 + 183 + vht_cap->cap |= min_t(u32, cap_info & IEEE80211_VHT_CAP_MAX_MPDU_MASK, 184 + own_cap.cap & IEEE80211_VHT_CAP_MAX_MPDU_MASK); 179 185 180 186 /* and some based on our own capabilities */ 181 187 switch (own_cap.cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK) {
+5 -3
net/mac802154/tx.c
··· 34 34 if (res) 35 35 goto err_tx; 36 36 37 - ieee802154_xmit_complete(&local->hw, skb, false); 38 - 39 37 dev->stats.tx_packets++; 40 38 dev->stats.tx_bytes += skb->len; 39 + 40 + ieee802154_xmit_complete(&local->hw, skb, false); 41 41 42 42 return; 43 43 ··· 78 78 79 79 /* async is priority, otherwise sync is fallback */ 80 80 if (local->ops->xmit_async) { 81 + unsigned int len = skb->len; 82 + 81 83 ret = drv_xmit_async(local, skb); 82 84 if (ret) { 83 85 ieee802154_wake_queue(&local->hw); ··· 87 85 } 88 86 89 87 dev->stats.tx_packets++; 90 - dev->stats.tx_bytes += skb->len; 88 + dev->stats.tx_bytes += len; 91 89 } else { 92 90 local->tx_skb = skb; 93 91 queue_work(local->workqueue, &local->tx_work);
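Both mac802154 hunks fix the same ownership pattern: once the skb is handed to the transmit path it may be consumed and freed, so `skb->len` must be read beforehand, and completion is signalled only after the stats are updated. A toy user-space model of that rule; `struct buf`, `xmit_async()` and `send_buf()` are illustrative, not kernel API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Once a buffer is handed to the async transmit path, the caller no longer
 * owns it; any field needed for accounting must be cached up front. */
struct buf {
	size_t len;
	char *data;
};

static size_t tx_bytes; /* stands in for dev->stats.tx_bytes */

static struct buf *make_buf(size_t len)
{
	struct buf *b = malloc(sizeof(*b));
	b->len = len;
	b->data = malloc(len);
	return b;
}

static int xmit_async(struct buf *b)
{
	free(b->data);  /* the async path owns and releases the buffer */
	free(b);
	return 0;
}

static int send_buf(struct buf *b)
{
	size_t len = b->len;     /* cache before the ownership transfer */
	int ret = xmit_async(b); /* b must not be dereferenced past here */

	if (ret == 0)
		tx_bytes += len;
	return ret;
}
```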
+16 -3
net/mptcp/pm_netlink.c
··· 66 66 return a->port == b->port; 67 67 } 68 68 69 + static bool address_zero(const struct mptcp_addr_info *addr) 70 + { 71 + struct mptcp_addr_info zero; 72 + 73 + memset(&zero, 0, sizeof(zero)); 74 + zero.family = addr->family; 75 + 76 + return addresses_equal(addr, &zero, false); 77 + } 78 + 69 79 static void local_address(const struct sock_common *skc, 70 80 struct mptcp_addr_info *addr) 71 81 { ··· 181 171 182 172 static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk) 183 173 { 174 + struct mptcp_addr_info remote = { 0 }; 184 175 struct sock *sk = (struct sock *)msk; 185 176 struct mptcp_pm_addr_entry *local; 186 - struct mptcp_addr_info remote; 187 177 struct pm_nl_pernet *pernet; 188 178 189 179 pernet = net_generic(sock_net((struct sock *)msk), pm_nl_pernet_id); ··· 333 323 * addr 334 324 */ 335 325 local_address((struct sock_common *)msk, &msk_local); 336 - local_address((struct sock_common *)msk, &skc_local); 326 + local_address((struct sock_common *)skc, &skc_local); 337 327 if (addresses_equal(&msk_local, &skc_local, false)) 328 + return 0; 329 + 330 + if (address_zero(&skc_local)) 338 331 return 0; 339 332 340 333 pernet = net_generic(sock_net((struct sock *)msk), pm_nl_pernet_id); ··· 354 341 return ret; 355 342 356 343 /* address not found, add to local list */ 357 - entry = kmalloc(sizeof(*entry), GFP_KERNEL); 344 + entry = kmalloc(sizeof(*entry), GFP_ATOMIC); 358 345 if (!entry) 359 346 return -ENOMEM; 360 347
+5 -2
net/mptcp/subflow.c
··· 1063 1063 struct mptcp_sock *msk = mptcp_sk(sk); 1064 1064 struct mptcp_subflow_context *subflow; 1065 1065 struct sockaddr_storage addr; 1066 + int remote_id = remote->id; 1066 1067 int local_id = loc->id; 1067 1068 struct socket *sf; 1068 1069 struct sock *ssk; ··· 1108 1107 goto failed; 1109 1108 1110 1109 mptcp_crypto_key_sha(subflow->remote_key, &remote_token, NULL); 1111 - pr_debug("msk=%p remote_token=%u local_id=%d", msk, remote_token, 1112 - local_id); 1110 + pr_debug("msk=%p remote_token=%u local_id=%d remote_id=%d", msk, 1111 + remote_token, local_id, remote_id); 1113 1112 subflow->remote_token = remote_token; 1114 1113 subflow->local_id = local_id; 1114 + subflow->remote_id = remote_id; 1115 1115 subflow->request_join = 1; 1116 1116 subflow->request_bkup = 1; 1117 1117 mptcp_info2sockaddr(remote, &addr); ··· 1349 1347 new_ctx->fully_established = 1; 1350 1348 new_ctx->backup = subflow_req->backup; 1351 1349 new_ctx->local_id = subflow_req->local_id; 1350 + new_ctx->remote_id = subflow_req->remote_id; 1352 1351 new_ctx->token = subflow_req->token; 1353 1352 new_ctx->thmac = subflow_req->thmac; 1354 1353 }
+5 -17
net/netfilter/nf_conntrack_netlink.c
··· 851 851 } 852 852 853 853 struct ctnetlink_filter { 854 - u_int32_t cta_flags; 855 854 u8 family; 856 855 857 856 u_int32_t orig_flags; ··· 905 906 struct nf_conntrack_zone *zone, 906 907 u_int32_t flags); 907 908 908 - /* applied on filters */ 909 - #define CTA_FILTER_F_CTA_MARK (1 << 0) 910 - #define CTA_FILTER_F_CTA_MARK_MASK (1 << 1) 911 - 912 909 static struct ctnetlink_filter * 913 910 ctnetlink_alloc_filter(const struct nlattr * const cda[], u8 family) 914 911 { ··· 925 930 #ifdef CONFIG_NF_CONNTRACK_MARK 926 931 if (cda[CTA_MARK]) { 927 932 filter->mark.val = ntohl(nla_get_be32(cda[CTA_MARK])); 928 - filter->cta_flags |= CTA_FILTER_FLAG(CTA_MARK); 929 - 930 - if (cda[CTA_MARK_MASK]) { 933 + if (cda[CTA_MARK_MASK]) 931 934 filter->mark.mask = ntohl(nla_get_be32(cda[CTA_MARK_MASK])); 932 - filter->cta_flags |= CTA_FILTER_FLAG(CTA_MARK_MASK); 933 - } else { 935 + else 934 936 filter->mark.mask = 0xffffffff; 935 - } 936 937 } else if (cda[CTA_MARK_MASK]) { 937 938 err = -EINVAL; 938 939 goto err_filter; ··· 1108 1117 } 1109 1118 1110 1119 #ifdef CONFIG_NF_CONNTRACK_MARK 1111 - if ((filter->cta_flags & CTA_FILTER_FLAG(CTA_MARK_MASK)) && 1112 - (ct->mark & filter->mark.mask) != filter->mark.val) 1113 - goto ignore_entry; 1114 - else if ((filter->cta_flags & CTA_FILTER_FLAG(CTA_MARK)) && 1115 - ct->mark != filter->mark.val) 1120 + if ((ct->mark & filter->mark.mask) != filter->mark.val) 1116 1121 goto ignore_entry; 1117 1122 #endif 1118 1123 ··· 1391 1404 if (err < 0) 1392 1405 return err; 1393 1406 1394 - 1407 + if (l3num != NFPROTO_IPV4 && l3num != NFPROTO_IPV6) 1408 + return -EOPNOTSUPP; 1395 1409 tuple->src.l3num = l3num; 1396 1410 1397 1411 if (flags & CTA_FILTER_FLAG(CTA_IP_DST) ||
+2
net/netfilter/nf_conntrack_proto.c
··· 565 565 int err; 566 566 567 567 err = nf_ct_netns_do_get(net, NFPROTO_IPV4); 568 + #if IS_ENABLED(CONFIG_IPV6) 568 569 if (err < 0) 569 570 goto err1; 570 571 err = nf_ct_netns_do_get(net, NFPROTO_IPV6); ··· 576 575 err2: 577 576 nf_ct_netns_put(net, NFPROTO_IPV4); 578 577 err1: 578 + #endif 579 579 return err; 580 580 } 581 581
+57 -13
net/netfilter/nf_tables_api.c
··· 684 684 return -1; 685 685 } 686 686 687 + struct nftnl_skb_parms { 688 + bool report; 689 + }; 690 + #define NFT_CB(skb) (*(struct nftnl_skb_parms*)&((skb)->cb)) 691 + 692 + static void nft_notify_enqueue(struct sk_buff *skb, bool report, 693 + struct list_head *notify_list) 694 + { 695 + NFT_CB(skb).report = report; 696 + list_add_tail(&skb->list, notify_list); 697 + } 698 + 687 699 static void nf_tables_table_notify(const struct nft_ctx *ctx, int event) 688 700 { 689 701 struct sk_buff *skb; ··· 727 715 goto err; 728 716 } 729 717 730 - nfnetlink_send(skb, ctx->net, ctx->portid, NFNLGRP_NFTABLES, 731 - ctx->report, GFP_KERNEL); 718 + nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list); 732 719 return; 733 720 err: 734 721 nfnetlink_set_err(ctx->net, ctx->portid, NFNLGRP_NFTABLES, -ENOBUFS); ··· 1479 1468 goto err; 1480 1469 } 1481 1470 1482 - nfnetlink_send(skb, ctx->net, ctx->portid, NFNLGRP_NFTABLES, 1483 - ctx->report, GFP_KERNEL); 1471 + nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list); 1484 1472 return; 1485 1473 err: 1486 1474 nfnetlink_set_err(ctx->net, ctx->portid, NFNLGRP_NFTABLES, -ENOBUFS); ··· 2817 2807 goto err; 2818 2808 } 2819 2809 2820 - nfnetlink_send(skb, ctx->net, ctx->portid, NFNLGRP_NFTABLES, 2821 - ctx->report, GFP_KERNEL); 2810 + nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list); 2822 2811 return; 2823 2812 err: 2824 2813 nfnetlink_set_err(ctx->net, ctx->portid, NFNLGRP_NFTABLES, -ENOBUFS); ··· 3846 3837 goto err; 3847 3838 } 3848 3839 3849 - nfnetlink_send(skb, ctx->net, portid, NFNLGRP_NFTABLES, ctx->report, 3850 - gfp_flags); 3840 + nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list); 3851 3841 return; 3852 3842 err: 3853 3843 nfnetlink_set_err(ctx->net, portid, NFNLGRP_NFTABLES, -ENOBUFS); ··· 4967 4959 goto err; 4968 4960 } 4969 4961 4970 - nfnetlink_send(skb, net, portid, NFNLGRP_NFTABLES, ctx->report, 4971 - GFP_KERNEL); 4962 + nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list); 4972 4963 return; 4973 4964 err: 4974 4965 nfnetlink_set_err(net, portid, NFNLGRP_NFTABLES, -ENOBUFS); ··· 6282 6275 goto err; 6283 6276 } 6284 6277 6285 - nfnetlink_send(skb, net, portid, NFNLGRP_NFTABLES, report, gfp); 6278 + nft_notify_enqueue(skb, report, &net->nft.notify_list); 6286 6279 return; 6287 6280 err: 6288 6281 nfnetlink_set_err(net, portid, NFNLGRP_NFTABLES, -ENOBUFS); ··· 7092 7085 goto err; 7093 7086 } 7094 7087 7095 - nfnetlink_send(skb, ctx->net, ctx->portid, NFNLGRP_NFTABLES, 7096 - ctx->report, GFP_KERNEL); 7088 + nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list); 7097 7089 return; 7098 7090 err: 7099 7091 nfnetlink_set_err(ctx->net, ctx->portid, NFNLGRP_NFTABLES, -ENOBUFS); ··· 7701 7695 mutex_unlock(&net->nft.commit_mutex); 7702 7696 } 7703 7697 7698 + static void nft_commit_notify(struct net *net, u32 portid) 7699 + { 7700 + struct sk_buff *batch_skb = NULL, *nskb, *skb; 7701 + unsigned char *data; 7702 + int len; 7703 + 7704 + list_for_each_entry_safe(skb, nskb, &net->nft.notify_list, list) { 7705 + if (!batch_skb) { 7706 + new_batch: 7707 + batch_skb = skb; 7708 + len = NLMSG_GOODSIZE - skb->len; 7709 + list_del(&skb->list); 7710 + continue; 7711 + } 7712 + len -= skb->len; 7713 + if (len > 0 && NFT_CB(skb).report == NFT_CB(batch_skb).report) { 7714 + data = skb_put(batch_skb, skb->len); 7715 + memcpy(data, skb->data, skb->len); 7716 + list_del(&skb->list); 7717 + kfree_skb(skb); 7718 + continue; 7719 + } 7720 + nfnetlink_send(batch_skb, net, portid, NFNLGRP_NFTABLES, 7721 + NFT_CB(batch_skb).report, GFP_KERNEL); 7722 + goto new_batch; 7723 + } 7724 + 7725 + if (batch_skb) { 7726 + nfnetlink_send(batch_skb, net, portid, NFNLGRP_NFTABLES, 7727 + NFT_CB(batch_skb).report, GFP_KERNEL); 7728 + } 7729 + 7730 + WARN_ON_ONCE(!list_empty(&net->nft.notify_list)); 7731 + } 7732 + 7704 7733 static int nf_tables_commit(struct net *net, struct sk_buff *skb) 7705 7734 { 7706 7735 struct nft_trans *trans, *next; ··· 7938 7897 } 7939 7898 } 7940 7899 7900 + nft_commit_notify(net, NETLINK_CB(skb).portid); 7941 7901 nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN); 7942 7902 nf_tables_commit_release(net); 7943 7903 ··· 8763 8721 INIT_LIST_HEAD(&net->nft.tables); 8764 8722 INIT_LIST_HEAD(&net->nft.commit_list); 8765 8723 INIT_LIST_HEAD(&net->nft.module_list); 8724 + INIT_LIST_HEAD(&net->nft.notify_list); 8766 8725 mutex_init(&net->nft.commit_mutex); 8767 8726 net->nft.base_seq = 1; 8768 8727 net->nft.validate_state = NFT_VALIDATE_SKIP; ··· 8780 8737 mutex_unlock(&net->nft.commit_mutex); 8781 8738 WARN_ON_ONCE(!list_empty(&net->nft.tables)); 8782 8739 WARN_ON_ONCE(!list_empty(&net->nft.module_list)); 8740 + WARN_ON_ONCE(!list_empty(&net->nft.notify_list)); 8783 8741 } 8784 8742 8785 8743 static struct pernet_operations nf_tables_net_ops = {
+2 -2
net/netfilter/nft_meta.c
··· 147 147 148 148 switch (key) { 149 149 case NFT_META_SKUID: 150 - *dest = from_kuid_munged(&init_user_ns, 150 + *dest = from_kuid_munged(sock_net(sk)->user_ns, 151 151 sock->file->f_cred->fsuid); 152 152 break; 153 153 case NFT_META_SKGID: 154 - *dest = from_kgid_munged(&init_user_ns, 154 + *dest = from_kgid_munged(sock_net(sk)->user_ns, 155 155 sock->file->f_cred->fsgid); 156 156 break; 157 157 default:
+11 -10
net/qrtr/qrtr.c
··· 332 332 { 333 333 struct qrtr_hdr_v1 *hdr; 334 334 size_t len = skb->len; 335 - int rc = -ENODEV; 336 - int confirm_rx; 335 + int rc, confirm_rx; 337 336 338 337 confirm_rx = qrtr_tx_wait(node, to->sq_node, to->sq_port, type); 339 338 if (confirm_rx < 0) { ··· 356 357 hdr->size = cpu_to_le32(len); 357 358 hdr->confirm_rx = !!confirm_rx; 358 359 359 - skb_put_padto(skb, ALIGN(len, 4) + sizeof(*hdr)); 360 + rc = skb_put_padto(skb, ALIGN(len, 4) + sizeof(*hdr)); 360 361 361 - mutex_lock(&node->ep_lock); 362 - if (node->ep) 363 - rc = node->ep->xmit(node->ep, skb); 364 - else 365 - kfree_skb(skb); 366 - mutex_unlock(&node->ep_lock); 367 - 362 + if (!rc) { 363 + mutex_lock(&node->ep_lock); 364 + rc = -ENODEV; 365 + if (node->ep) 366 + rc = node->ep->xmit(node->ep, skb); 367 + else 368 + kfree_skb(skb); 369 + mutex_unlock(&node->ep_lock); 370 + } 368 371 /* Need to ensure that a subsequent message carries the otherwise lost 369 372 * confirm_rx flag if we dropped this one */ 370 373 if (rc && confirm_rx)
+34 -10
net/sched/act_ife.c
··· 436 436 kfree_rcu(p, rcu); 437 437 } 438 438 439 + static int load_metalist(struct nlattr **tb, bool rtnl_held) 440 + { 441 + int i; 442 + 443 + for (i = 1; i < max_metacnt; i++) { 444 + if (tb[i]) { 445 + void *val = nla_data(tb[i]); 446 + int len = nla_len(tb[i]); 447 + int rc; 448 + 449 + rc = load_metaops_and_vet(i, val, len, rtnl_held); 450 + if (rc != 0) 451 + return rc; 452 + } 453 + } 454 + 455 + return 0; 456 + } 457 + 439 458 static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb, 440 459 bool exists, bool rtnl_held) 441 460 { ··· 467 448 if (tb[i]) { 468 449 val = nla_data(tb[i]); 469 450 len = nla_len(tb[i]); 470 - 471 - rc = load_metaops_and_vet(i, val, len, rtnl_held); 472 - if (rc != 0) 473 - return rc; 474 451 475 452 rc = add_metainfo(ife, i, val, len, exists); 476 453 if (rc) ··· 523 508 p = kzalloc(sizeof(*p), GFP_KERNEL); 524 509 if (!p) 525 510 return -ENOMEM; 511 + 512 + if (tb[TCA_IFE_METALST]) { 513 + err = nla_parse_nested_deprecated(tb2, IFE_META_MAX, 514 + tb[TCA_IFE_METALST], NULL, 515 + NULL); 516 + if (err) { 517 + kfree(p); 518 + return err; 519 + } 520 + err = load_metalist(tb2, rtnl_held); 521 + if (err) { 522 + kfree(p); 523 + return err; 524 + } 525 + } 526 526 527 527 index = parm->index; 528 528 err = tcf_idr_check_alloc(tn, &index, a, bind); ··· 600 570 } 601 571 602 572 if (tb[TCA_IFE_METALST]) { 603 - err = nla_parse_nested_deprecated(tb2, IFE_META_MAX, 604 - tb[TCA_IFE_METALST], NULL, 605 - NULL); 606 - if (err) 607 - goto metadata_parse_err; 608 573 err = populate_metalist(ife, tb2, exists, rtnl_held); 609 574 if (err) 610 575 goto metadata_parse_err; 611 - 612 576 } else { 613 577 /* if no passed metadata allow list or passed allow-all 614 578 * then here we process by adding as many supported metadatum
+1
net/sched/act_tunnel_key.c
··· 156 156 struct vxlan_metadata *md = dst; 157 157 158 158 md->gbp = nla_get_u32(tb[TCA_TUNNEL_KEY_ENC_OPT_VXLAN_GBP]); 159 + md->gbp &= VXLAN_GBP_MASK; 159 160 } 160 161 161 162 return sizeof(struct vxlan_metadata);
+4 -1
net/sched/cls_flower.c
··· 1175 1175 return -EINVAL; 1176 1176 } 1177 1177 1178 - if (tb[TCA_FLOWER_KEY_ENC_OPT_VXLAN_GBP]) 1178 + if (tb[TCA_FLOWER_KEY_ENC_OPT_VXLAN_GBP]) { 1179 1179 md->gbp = nla_get_u32(tb[TCA_FLOWER_KEY_ENC_OPT_VXLAN_GBP]); 1180 + md->gbp &= VXLAN_GBP_MASK; 1181 + } 1180 1182 1181 1183 return sizeof(*md); 1182 1184 } ··· 1223 1221 } 1224 1222 if (tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_INDEX]) { 1225 1223 nla = tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_INDEX]; 1224 + memset(&md->u, 0x00, sizeof(md->u)); 1226 1225 md->u.index = nla_get_be32(nla); 1227 1226 } 1228 1227 } else if (md->version == 2) {
+33 -15
net/sched/sch_generic.c
··· 1131 1131 1132 1132 static void qdisc_deactivate(struct Qdisc *qdisc) 1133 1133 { 1134 - bool nolock = qdisc->flags & TCQ_F_NOLOCK; 1135 - 1136 1134 if (qdisc->flags & TCQ_F_BUILTIN) 1137 1135 return; 1138 - if (test_bit(__QDISC_STATE_DEACTIVATED, &qdisc->state)) 1139 - return; 1140 - 1141 - if (nolock) 1142 - spin_lock_bh(&qdisc->seqlock); 1143 - spin_lock_bh(qdisc_lock(qdisc)); 1144 1136 1145 1137 set_bit(__QDISC_STATE_DEACTIVATED, &qdisc->state); 1146 - 1147 - qdisc_reset(qdisc); 1148 - 1149 - spin_unlock_bh(qdisc_lock(qdisc)); 1150 - if (nolock) 1151 - spin_unlock_bh(&qdisc->seqlock); 1152 1138 } 1153 1139 1154 1140 static void dev_deactivate_queue(struct net_device *dev, ··· 1149 1163 qdisc_deactivate(qdisc); 1150 1164 rcu_assign_pointer(dev_queue->qdisc, qdisc_default); 1151 1165 } 1166 + } 1167 + 1168 + static void dev_reset_queue(struct net_device *dev, 1169 + struct netdev_queue *dev_queue, 1170 + void *_unused) 1171 + { 1172 + struct Qdisc *qdisc; 1173 + bool nolock; 1174 + 1175 + qdisc = dev_queue->qdisc_sleeping; 1176 + if (!qdisc) 1177 + return; 1178 + 1179 + nolock = qdisc->flags & TCQ_F_NOLOCK; 1180 + 1181 + if (nolock) 1182 + spin_lock_bh(&qdisc->seqlock); 1183 + spin_lock_bh(qdisc_lock(qdisc)); 1184 + 1185 + qdisc_reset(qdisc); 1186 + 1187 + spin_unlock_bh(qdisc_lock(qdisc)); 1188 + if (nolock) 1189 + spin_unlock_bh(&qdisc->seqlock); 1152 1190 } 1153 1191 1154 1192 static bool some_qdisc_is_busy(struct net_device *dev) ··· 1223 1213 dev_watchdog_down(dev); 1224 1214 } 1225 1215 1226 - /* Wait for outstanding qdisc-less dev_queue_xmit calls. 1216 + /* Wait for outstanding qdisc-less dev_queue_xmit calls or 1217 + * outstanding qdisc enqueuing calls. 1227 1218 * This is avoided if all devices are in dismantle phase : 1228 1219 * Caller will call synchronize_net() for us 1229 1220 */ 1230 1221 synchronize_net(); 1222 + 1223 + list_for_each_entry(dev, head, close_list) { 1224 + netdev_for_each_tx_queue(dev, dev_reset_queue, NULL); 1225 + 1226 + if (dev_ingress_queue(dev)) 1227 + dev_reset_queue(dev, dev_ingress_queue(dev), NULL); 1228 + } 1231 1229 1232 1230 /* Wait for outstanding qdisc_run calls. */ 1233 1231 list_for_each_entry(dev, head, close_list) {
+17 -11
net/sched/sch_taprio.c
··· 777 777 [TCA_TAPRIO_ATTR_TXTIME_DELAY] = { .type = NLA_U32 }, 778 778 }; 779 779 780 - static int fill_sched_entry(struct nlattr **tb, struct sched_entry *entry, 780 + static int fill_sched_entry(struct taprio_sched *q, struct nlattr **tb, 781 + struct sched_entry *entry, 781 782 struct netlink_ext_ack *extack) 782 783 { 784 + int min_duration = length_to_duration(q, ETH_ZLEN); 783 785 u32 interval = 0; 784 786 785 787 if (tb[TCA_TAPRIO_SCHED_ENTRY_CMD]) ··· 796 794 interval = nla_get_u32( 797 795 tb[TCA_TAPRIO_SCHED_ENTRY_INTERVAL]); 798 796 799 - if (interval == 0) { 797 + /* The interval should allow at least the minimum ethernet 798 + * frame to go out. 799 + */ 800 + if (interval < min_duration) { 800 801 NL_SET_ERR_MSG(extack, "Invalid interval for schedule entry"); 801 802 return -EINVAL; 802 803 } ··· 809 804 return 0; 810 805 } 811 806 812 - static int parse_sched_entry(struct nlattr *n, struct sched_entry *entry, 813 - int index, struct netlink_ext_ack *extack) 807 + static int parse_sched_entry(struct taprio_sched *q, struct nlattr *n, 808 + struct sched_entry *entry, int index, 809 + struct netlink_ext_ack *extack) 814 810 { 815 811 struct nlattr *tb[TCA_TAPRIO_SCHED_ENTRY_MAX + 1] = { }; 816 812 int err; ··· 825 819 826 820 entry->index = index; 827 821 828 - return fill_sched_entry(tb, entry, extack); 822 + return fill_sched_entry(q, tb, entry, extack); 829 823 } 830 824 831 - static int parse_sched_list(struct nlattr *list, 825 + static int parse_sched_list(struct taprio_sched *q, struct nlattr *list, 832 826 struct sched_gate_list *sched, 833 827 struct netlink_ext_ack *extack) 834 828 { ··· 853 847 return -ENOMEM; 854 848 } 855 849 856 - err = parse_sched_entry(n, entry, i, extack); 850 + err = parse_sched_entry(q, n, entry, i, extack); 857 851 if (err < 0) { 858 852 kfree(entry); 859 853 return err; ··· 868 862 return i; 869 863 } 870 864 871 - static int parse_taprio_schedule(struct nlattr **tb, 865 + static int parse_taprio_schedule(struct taprio_sched *q, struct nlattr **tb, 872 866 struct sched_gate_list *new, 873 867 struct netlink_ext_ack *extack) 874 868 { ··· 889 883 new->cycle_time = nla_get_s64(tb[TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME]); 890 884 891 885 if (tb[TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST]) 892 - err = parse_sched_list( 893 - tb[TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST], new, extack); 886 + err = parse_sched_list(q, tb[TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST], 887 + new, extack); 894 888 if (err < 0) 895 889 return err; ··· 1479 1473 goto free_sched; 1480 1474 } 1481 1475 1482 - err = parse_taprio_schedule(tb, new_admin, extack); 1476 + err = parse_taprio_schedule(q, tb, new_admin, extack); 1483 1477 if (err < 0) 1484 1478 goto free_sched; 1485 1479
+3 -6
net/sctp/socket.c
··· 9220 9220 static inline void sctp_copy_descendant(struct sock *sk_to, 9221 9221 const struct sock *sk_from) 9222 9222 { 9223 - int ancestor_size = sizeof(struct inet_sock) + 9224 - sizeof(struct sctp_sock) - 9225 - offsetof(struct sctp_sock, pd_lobby); 9223 + size_t ancestor_size = sizeof(struct inet_sock); 9226 9224 9227 - if (sk_from->sk_family == PF_INET6) 9228 - ancestor_size += sizeof(struct ipv6_pinfo); 9229 - 9225 + ancestor_size += sk_from->sk_prot->obj_size; 9226 + ancestor_size -= offsetof(struct sctp_sock, pd_lobby); 9230 9227 __inet_sk_copy_descendant(sk_to, sk_from, ancestor_size); 9231 9228 } 9232 9229
+10 -4
net/tipc/group.c
··· 273 273 return NULL; 274 274 } 275 275 276 - static void tipc_group_add_to_tree(struct tipc_group *grp, 277 - struct tipc_member *m) 276 + static int tipc_group_add_to_tree(struct tipc_group *grp, 277 + struct tipc_member *m) 278 278 { 279 279 u64 nkey, key = (u64)m->node << 32 | m->port; 280 280 struct rb_node **n, *parent = NULL; ··· 291 291 else if (key > nkey) 292 292 n = &(*n)->rb_right; 293 293 else 294 - return; 294 + return -EEXIST; 295 295 } 296 296 rb_link_node(&m->tree_node, parent, n); 297 297 rb_insert_color(&m->tree_node, &grp->members); 298 + return 0; 298 299 } 299 300 300 301 static struct tipc_member *tipc_group_create_member(struct tipc_group *grp, ··· 303 302 u32 instance, int state) 304 303 { 305 304 struct tipc_member *m; 305 + int ret; 306 306 307 307 m = kzalloc(sizeof(*m), GFP_ATOMIC); 308 308 if (!m) ··· 316 314 m->port = port; 317 315 m->instance = instance; 318 316 m->bc_acked = grp->bc_snd_nxt - 1; 317 + ret = tipc_group_add_to_tree(grp, m); 318 + if (ret < 0) { 319 + kfree(m); 320 + return NULL; 321 + } 319 322 grp->member_cnt++; 320 - tipc_group_add_to_tree(grp, m); 321 323 tipc_nlist_add(&grp->dests, m->node); 322 324 m->state = state; 323 325 return m;
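The tipc/group.c fix makes the tree insert report a duplicate (node, port) key with -EEXIST so the caller can free the new member instead of leaving it unlinked. A sketch of the same duplicate check over the packed 64-bit key, using a flat array in place of the kernel rbtree (all names here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Members are keyed by (node << 32 | port), as in tipc_group_add_to_tree(). */
static uint64_t member_key(uint32_t node, uint32_t port)
{
	return (uint64_t)node << 32 | port;
}

static uint64_t members[16];
static int member_cnt;

/* Insert a member key, rejecting duplicates with -EEXIST (-17) so the
 * caller knows it must free the member it just allocated. */
static int add_member(uint32_t node, uint32_t port)
{
	uint64_t key = member_key(node, port);
	int i;

	for (i = 0; i < member_cnt; i++)
		if (members[i] == key)
			return -17;
	members[member_cnt++] = key;
	return 0;
}
```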
+2 -1
net/tipc/link.c
··· 532 532 * tipc_link_bc_create - create new link to be used for broadcast 533 533 * @net: pointer to associated network namespace 534 534 * @mtu: mtu to be used initially if no peers 535 - * @window: send window to be used 535 + * @min_win: minimal send window to be used by link 536 + * @max_win: maximal send window to be used by link 536 537 * @inputq: queue to put messages ready for delivery 537 538 * @namedq: queue to put binding table update messages ready for delivery 538 539 * @link: return value, pointer to put the created link
+2 -1
net/tipc/msg.c
··· 150 150 if (fragid == FIRST_FRAGMENT) { 151 151 if (unlikely(head)) 152 152 goto err; 153 - if (unlikely(skb_unclone(frag, GFP_ATOMIC))) 153 + frag = skb_unshare(frag, GFP_ATOMIC); 154 + if (unlikely(!frag)) 154 155 goto err; 155 156 head = *headbuf = frag; 156 157 *buf = NULL;
+1 -4
net/tipc/socket.c
··· 2771 2771 2772 2772 trace_tipc_sk_shutdown(sk, NULL, TIPC_DUMP_ALL, " "); 2773 2773 __tipc_shutdown(sock, TIPC_CONN_SHUTDOWN); 2774 - if (tipc_sk_type_connectionless(sk)) 2775 - sk->sk_shutdown = SHUTDOWN_MASK; 2776 - else 2777 - sk->sk_shutdown = SEND_SHUTDOWN; 2774 + sk->sk_shutdown = SHUTDOWN_MASK; 2778 2775 2779 2776 if (sk->sk_state == TIPC_DISCONNECTING) { 2780 2777 /* Discard any unreceived messages */
+1
net/wireless/Kconfig
··· 217 217 218 218 config LIB80211_CRYPT_CCMP 219 219 tristate 220 + select CRYPTO 220 221 select CRYPTO_AES 221 222 select CRYPTO_CCM 222 223
+1 -1
net/wireless/util.c
··· 95 95 /* see 802.11ax D6.1 27.3.23.2 */ 96 96 if (chan == 2) 97 97 return MHZ_TO_KHZ(5935); 98 - if (chan <= 253) 98 + if (chan <= 233) 99 99 return MHZ_TO_KHZ(5950 + chan * 5); 100 100 break; 101 101 case NL80211_BAND_60GHZ:
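The cfg80211 change above caps valid 6 GHz channels at 233 rather than 253, since 5950 + 233 * 5 = 7115 MHz is the top of the 6 GHz band. A sketch of the mapping after the fix; `chan6g_to_khz()` is an illustrative helper, not the kernel function:

```c
#include <assert.h>
#include <stdint.h>

#define MHZ_TO_KHZ(freq) ((freq) * 1000)

/* 6 GHz channel-to-frequency mapping as corrected by the diff. */
static uint32_t chan6g_to_khz(int chan)
{
	if (chan == 2)            /* special case, see 802.11ax D6.1 27.3.23.2 */
		return MHZ_TO_KHZ(5935);
	if (chan <= 233)          /* was mistakenly 253 before the fix */
		return MHZ_TO_KHZ(5950 + chan * 5);
	return 0;                 /* not a valid 6 GHz channel */
}
```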
+8 -9
net/xdp/xdp_umem.c
··· 303 303 304 304 static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr) 305 305 { 306 + u32 npgs_rem, chunk_size = mr->chunk_size, headroom = mr->headroom; 306 307 bool unaligned_chunks = mr->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG; 307 - u32 chunk_size = mr->chunk_size, headroom = mr->headroom; 308 308 u64 npgs, addr = mr->addr, size = mr->len; 309 - unsigned int chunks, chunks_per_page; 309 + unsigned int chunks, chunks_rem; 310 310 int err; 311 311 312 312 if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) { ··· 336 336 if ((addr + size) < addr) 337 337 return -EINVAL; 338 338 339 - npgs = size >> PAGE_SHIFT; 339 + npgs = div_u64_rem(size, PAGE_SIZE, &npgs_rem); 340 + if (npgs_rem) 341 + npgs++; 340 342 if (npgs > U32_MAX) 341 343 return -EINVAL; 342 344 343 - chunks = (unsigned int)div_u64(size, chunk_size); 345 + chunks = (unsigned int)div_u64_rem(size, chunk_size, &chunks_rem); 344 346 if (chunks == 0) 345 347 return -EINVAL; 346 348 347 - if (!unaligned_chunks) { 348 - chunks_per_page = PAGE_SIZE / chunk_size; 349 - if (chunks < chunks_per_page || chunks % chunks_per_page) 350 - return -EINVAL; 351 - } 349 + if (!unaligned_chunks && chunks_rem) 350 + return -EINVAL; 352 351 353 352 if (headroom >= chunk_size - XDP_PACKET_HEADROOM) 354 353 return -EINVAL;
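The xdp_umem_reg() rework rounds the page count up with div_u64_rem() instead of truncating, and in aligned mode rejects any UMEM size that is not an exact multiple of the chunk size. A user-space sketch of the same arithmetic with plain 64-bit division; `umem_check()` and the 4 KiB page constant are illustrative assumptions, not the kernel interface:

```c
#include <assert.h>
#include <stdint.h>

/* Returns the page count on success, or -22 (-EINVAL) on a bad geometry. */
static int64_t umem_check(uint64_t size, uint32_t chunk_size, int unaligned)
{
	const uint64_t page_size = 4096; /* assumed PAGE_SIZE */
	uint64_t npgs = size / page_size;

	if (size % page_size)
		npgs++;                  /* a partial trailing page still counts */
	if (npgs > UINT32_MAX)
		return -22;
	if (size / chunk_size == 0)
		return -22;              /* must hold at least one chunk */
	if (!unaligned && (size % chunk_size))
		return -22;              /* aligned mode: exact multiple only */

	return (int64_t)npgs;
}
```

Compared with the old `size >> PAGE_SHIFT`, the round-up means a UMEM whose last chunk straddles into a partial page is now accounted (and pinned) correctly.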
tools/bpf/Makefile (+2 -2)
@@ -38,7 +38,7 @@
 FEATURE_DISPLAY = libbfd disassembler-four-args
 
 check_feat := 1
-NON_CHECK_FEAT_TARGETS := clean bpftool_clean runqslower_clean
+NON_CHECK_FEAT_TARGETS := clean bpftool_clean runqslower_clean resolve_btfids_clean
 ifdef MAKECMDGOALS
 ifeq ($(filter-out $(NON_CHECK_FEAT_TARGETS),$(MAKECMDGOALS)),)
 check_feat := 0
@@ -89,7 +89,7 @@
 $(OUTPUT)bpf_exp.yacc.o: $(OUTPUT)bpf_exp.yacc.c
 $(OUTPUT)bpf_exp.lex.o: $(OUTPUT)bpf_exp.lex.c
 
-clean: bpftool_clean runqslower_clean
+clean: bpftool_clean runqslower_clean resolve_btfids_clean
 	$(call QUIET_CLEAN, bpf-progs)
 	$(Q)$(RM) -r -- $(OUTPUT)*.o $(OUTPUT)bpf_jit_disasm $(OUTPUT)bpf_dbg \
 		$(OUTPUT)bpf_asm $(OUTPUT)bpf_exp.yacc.* $(OUTPUT)bpf_exp.lex.*
tools/bpf/resolve_btfids/Makefile (+1)
@@ -80,6 +80,7 @@
 clean: libsubcmd-clean libbpf-clean fixdep-clean
 	$(call msg,CLEAN,$(BINARY))
 	$(Q)$(RM) -f $(BINARY); \
+	$(RM) -rf $(if $(OUTPUT),$(OUTPUT),.)/feature; \
 	find $(if $(OUTPUT),$(OUTPUT),.) -name \*.o -or -name \*.o.cmd -or -name \*.o.d | xargs $(RM)
 
 tags:
tools/lib/bpf/Makefile (+3 -1)
@@ -59,7 +59,7 @@
 FEATURE_TESTS = libelf libelf-mmap zlib bpf reallocarray
 FEATURE_DISPLAY = libelf zlib bpf
 
-INCLUDES = -I. -I$(srctree)/tools/include -I$(srctree)/tools/arch/$(ARCH)/include/uapi -I$(srctree)/tools/include/uapi
+INCLUDES = -I. -I$(srctree)/tools/include -I$(srctree)/tools/include/uapi
 FEATURE_CHECK_CFLAGS-bpf = $(INCLUDES)
 
 check_feat := 1
@@ -152,6 +152,7 @@
 		awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}' | \
 		sort -u | wc -l)
 VERSIONED_SYM_COUNT = $(shell readelf --dyn-syms --wide $(OUTPUT)libbpf.so | \
+		      awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}' | \
 		      grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 | sort -u | wc -l)
 
 CMD_TARGETS = $(LIB_TARGET) $(PC_FILE)
@@ -219,6 +220,7 @@
 		    awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}'| \
 		    sort -u > $(OUTPUT)libbpf_global_syms.tmp; \
 		readelf --dyn-syms --wide $(OUTPUT)libbpf.so | \
+		    awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}'| \
 		    grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 | \
 		    sort -u > $(OUTPUT)libbpf_versioned_syms.tmp; \
 		diff -u $(OUTPUT)libbpf_global_syms.tmp \
tools/lib/bpf/libbpf.c (+1 -1)
@@ -5203,8 +5203,8 @@
 	int i, j, nrels, new_sz;
 	const struct btf_var_secinfo *vi = NULL;
 	const struct btf_type *sec, *var, *def;
+	struct bpf_map *map = NULL, *targ_map;
 	const struct btf_member *member;
-	struct bpf_map *map, *targ_map;
 	const char *name, *mname;
 	Elf_Data *symbols;
 	unsigned int moff;
tools/testing/selftests/bpf/progs/bpf_iter_bpf_hash_map.c (+15)
@@ -47,7 +47,10 @@
 	__u32 seq_num = ctx->meta->seq_num;
 	struct bpf_map *map = ctx->map;
 	struct key_t *key = ctx->key;
+	struct key_t tmp_key;
 	__u64 *val = ctx->value;
+	__u64 tmp_val = 0;
+	int ret;
 
 	if (in_test_mode) {
 		/* test mode is used by selftests to
@@ -59,6 +62,18 @@
 		 * size.
 		 */
 		if (key == (void *)0 || val == (void *)0)
+			return 0;
+
+		/* update the value and then delete the <key, value> pair.
+		 * it should not impact the existing 'val' which is still
+		 * accessible under rcu.
+		 */
+		__builtin_memcpy(&tmp_key, key, sizeof(struct key_t));
+		ret = bpf_map_update_elem(&hashmap1, &tmp_key, &tmp_val, 0);
+		if (ret)
+			return 0;
+		ret = bpf_map_delete_elem(&hashmap1, &tmp_key);
+		if (ret)
 			return 0;
 
 		key_sum_a += key->a;
tools/testing/selftests/net/rtnetlink.sh (+47)
@@ -1175,6 +1175,51 @@
 	echo "PASS: neigh get"
 }
 
+kci_test_bridge_parent_id()
+{
+	local ret=0
+	sysfsnet=/sys/bus/netdevsim/devices/netdevsim
+	probed=false
+
+	if [ ! -w /sys/bus/netdevsim/new_device ] ; then
+		modprobe -q netdevsim
+		check_err $?
+		if [ $ret -ne 0 ]; then
+			echo "SKIP: bridge_parent_id can't load netdevsim"
+			return $ksft_skip
+		fi
+		probed=true
+	fi
+
+	echo "10 1" > /sys/bus/netdevsim/new_device
+	while [ ! -d ${sysfsnet}10 ] ; do :; done
+	echo "20 1" > /sys/bus/netdevsim/new_device
+	while [ ! -d ${sysfsnet}20 ] ; do :; done
+	udevadm settle
+	dev10=`ls ${sysfsnet}10/net/`
+	dev20=`ls ${sysfsnet}20/net/`
+
+	ip link add name test-bond0 type bond mode 802.3ad
+	ip link set dev $dev10 master test-bond0
+	ip link set dev $dev20 master test-bond0
+	ip link add name test-br0 type bridge
+	ip link set dev test-bond0 master test-br0
+	check_err $?
+
+	# clean up any leftovers
+	ip link del dev test-br0
+	ip link del dev test-bond0
+	echo 20 > /sys/bus/netdevsim/del_device
+	echo 10 > /sys/bus/netdevsim/del_device
+	$probed && rmmod netdevsim
+
+	if [ $ret -ne 0 ]; then
+		echo "FAIL: bridge_parent_id"
+		return 1
+	fi
+	echo "PASS: bridge_parent_id"
+}
+
 kci_test_rtnl()
 {
 	local ret=0
@@ -1223,6 +1268,8 @@
 	kci_test_fdb_get
 	check_err $?
 	kci_test_neigh_get
+	check_err $?
+	kci_test_bridge_parent_id
 	check_err $?
 
 	kci_del_dummy