Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-6.17-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
"Including fixes from CAN, netfilter and wireless.

We have an IPv6 routing regression for which the relevant fix is still a
work in progress; this pull includes a last-minute revert to avoid more
problems.

Current release - new code bugs:

- wifi: nl80211: completely disable per-link stats for now

Previous releases - regressions:

- dev_ioctl: take ops lock in hwtstamp lower paths

- netfilter:
- fix spurious set lookup failures
- fix lockdep splat due to missing annotation

- genetlink: fix genl_bind() invoking bind() after -EPERM

- phy: transfer phy_config_inband() locking responsibility to phylink

- can: xilinx_can: fix use-after-free of transmitted SKB

- hsr: fix lock warnings

- eth:
- igb: fix NULL pointer dereference in ethtool loopback test
- i40e: fix Jumbo Frame support after iPXE boot
- macsec: sync features on RTM_NEWLINK

Previous releases - always broken:

- tunnels: reset the GSO metadata before reusing the skb

- mptcp: make sync_socket_options propagate SOCK_KEEPOPEN

- can: j1939: implement NETDEV_UNREGISTER notification handler

- wifi: ath12k: fix WMI TLV header misalignment"

* tag 'net-6.17-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (47 commits)
Revert "net: usb: asix: ax88772: drop phylink use in PM to avoid MDIO runtime PM wakeups"
hsr: hold rcu and dev lock for hsr_get_port_ndev
hsr: use hsr_for_each_port_rtnl in hsr_port_get_hsr
hsr: use rtnl lock when iterating over ports
wifi: nl80211: completely disable per-link stats for now
net: usb: asix: ax88772: drop phylink use in PM to avoid MDIO runtime PM wakeups
net: ethtool: fix wrong type used in struct kernel_ethtool_ts_info
MAINTAINERS: add Phil as netfilter reviewer
netfilter: nf_tables: restart set lookup on base_seq change
netfilter: nf_tables: make nft_set_do_lookup available unconditionally
netfilter: nf_tables: place base_seq in struct net
netfilter: nft_set_rbtree: continue traversal if element is inactive
netfilter: nft_set_pipapo: don't check genbit from packetpath lookups
netfilter: nft_set_bitmap: fix lockdep splat due to missing annotation
can: rcar_can: rcar_can_resume(): fix s2ram with PSCI
can: xilinx_can: xcan_write_frame(): fix use-after-free of transmitted SKB
can: j1939: j1939_local_ecu_get(): undo increment when j1939_local_ecu_get() fails
can: j1939: j1939_sk_bind(): call j1939_priv_put() immediately when j1939_local_ecu_get() failed
can: j1939: implement NETDEV_UNREGISTER notification handler
selftests: can: enable CONFIG_CAN_VCAN as a module
...

+542 -227
+1 -1
Documentation/netlink/specs/mptcp_pm.yaml
@@ -256,7 +256,7 @@
         type: u32
       -
         name: if-idx
-        type: u32
+        type: s32
       -
         name: reset-reason
         type: u32
+1 -1
Documentation/networking/can.rst
@@ -742,7 +742,7 @@
             struct timeval ival1, ival2; /* count and subsequent interval */
             canid_t can_id;              /* unique can_id for task */
             __u32 nframes;               /* number of can_frames following */
-            struct can_frame frames[0];
+            struct can_frame frames[];
     };

 The aligned payload 'frames' uses the same basic CAN frame structure defined
+4 -4
Documentation/networking/mptcp.rst
@@ -60,10 +60,10 @@
 and the server side that announces additional addresses via the ``ADD_ADDR`` and
 ``REMOVE_ADDR`` options.

-Path managers are controlled by the ``net.mptcp.pm_type`` sysctl knob -- see
-mptcp-sysctl.rst. There are two types: the in-kernel one (type ``0``) where the
-same rules are applied for all the connections (see: ``ip mptcp``) ; and the
-userspace one (type ``1``), controlled by a userspace daemon (i.e. `mptcpd
+Path managers are controlled by the ``net.mptcp.path_manager`` sysctl knob --
+see mptcp-sysctl.rst. There are two types: the in-kernel one (``kernel``) where
+the same rules are applied for all the connections (see: ``ip mptcp``) ; and the
+userspace one (``userspace``), controlled by a userspace daemon (i.e. `mptcpd
 <https://mptcpd.mptcp.dev/>`_) where different rules can be applied for each
 connection. The path managers can be controlled via a Netlink API; see
 netlink_spec/mptcp_pm.rst.
+1
MAINTAINERS
@@ -17479,6 +17479,7 @@
 M:      Pablo Neira Ayuso <pablo@netfilter.org>
 M:      Jozsef Kadlecsik <kadlec@netfilter.org>
 M:      Florian Westphal <fw@strlen.de>
+R:      Phil Sutter <phil@nwl.cc>
 L:      netfilter-devel@vger.kernel.org
 L:      coreteam@netfilter.org
 S:      Maintained
+1 -7
drivers/net/can/rcar/rcar_can.c
@@ -861,7 +861,6 @@
 {
         struct net_device *ndev = dev_get_drvdata(dev);
         struct rcar_can_priv *priv = netdev_priv(ndev);
-        u16 ctlr;
         int err;

         if (!netif_running(ndev))
@@ -873,12 +872,7 @@
                 return err;
         }

-        ctlr = readw(&priv->regs->ctlr);
-        ctlr &= ~RCAR_CAN_CTLR_SLPM;
-        writew(ctlr, &priv->regs->ctlr);
-        ctlr &= ~RCAR_CAN_CTLR_CANM;
-        writew(ctlr, &priv->regs->ctlr);
-        priv->can.state = CAN_STATE_ERROR_ACTIVE;
+        rcar_can_start(ndev);

         netif_device_attach(ndev);
         netif_start_queue(ndev);
+8 -8
drivers/net/can/xilinx_can.c
@@ -690,14 +690,6 @@
                 dlc |= XCAN_DLCR_EDL_MASK;
         }

-        if (!(priv->devtype.flags & XCAN_FLAG_TX_MAILBOXES) &&
-            (priv->devtype.flags & XCAN_FLAG_TXFEMP))
-                can_put_echo_skb(skb, ndev, priv->tx_head % priv->tx_max, 0);
-        else
-                can_put_echo_skb(skb, ndev, 0, 0);
-
-        priv->tx_head++;
-
         priv->write_reg(priv, XCAN_FRAME_ID_OFFSET(frame_offset), id);
         /* If the CAN frame is RTR frame this write triggers transmission
          * (not on CAN FD)
@@ -730,6 +722,14 @@
                                data[1]);
                 }
         }
+
+        if (!(priv->devtype.flags & XCAN_FLAG_TX_MAILBOXES) &&
+            (priv->devtype.flags & XCAN_FLAG_TXFEMP))
+                can_put_echo_skb(skb, ndev, priv->tx_head % priv->tx_max, 0);
+        else
+                can_put_echo_skb(skb, ndev, 0, 0);
+
+        priv->tx_head++;
 }

 /**
+13 -4
drivers/net/dsa/b53/b53_common.c
@@ -1273,9 +1273,15 @@
          */
         ds->untag_vlan_aware_bridge_pvid = true;

-        /* Ageing time is set in seconds */
-        ds->ageing_time_min = 1 * 1000;
-        ds->ageing_time_max = AGE_TIME_MAX * 1000;
+        if (dev->chip_id == BCM53101_DEVICE_ID) {
+                /* BCM53101 uses 0.5 second increments */
+                ds->ageing_time_min = 1 * 500;
+                ds->ageing_time_max = AGE_TIME_MAX * 500;
+        } else {
+                /* Everything else uses 1 second increments */
+                ds->ageing_time_min = 1 * 1000;
+                ds->ageing_time_max = AGE_TIME_MAX * 1000;
+        }

         ret = b53_reset_switch(dev);
         if (ret) {
@@ -2559,7 +2565,10 @@
         else
                 reg = B53_AGING_TIME_CONTROL;

-        atc = DIV_ROUND_CLOSEST(msecs, 1000);
+        if (dev->chip_id == BCM53101_DEVICE_ID)
+                atc = DIV_ROUND_CLOSEST(msecs, 500);
+        else
+                atc = DIV_ROUND_CLOSEST(msecs, 1000);

         if (!is5325(dev) && !is5365(dev))
                 atc |= AGE_CHANGE;
+2 -1
drivers/net/ethernet/freescale/fec_main.c
@@ -2363,7 +2363,8 @@
          */
         phy_dev = of_phy_find_device(fep->phy_node);
         phy_reset_after_clk_enable(phy_dev);
-        put_device(&phy_dev->mdio.dev);
+        if (phy_dev)
+                put_device(&phy_dev->mdio.dev);
         }
 }

+1
drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
@@ -1561,6 +1561,7 @@
 struct i40e_aq_set_mac_config {
         __le16  max_frame_size;
         u8      params;
+#define I40E_AQ_SET_MAC_CONFIG_CRC_EN           BIT(2)
         u8      tx_timer_priority; /* bitmap */
         __le16  tx_timer_value;
         __le16  fc_refresh_threshold;
+34
drivers/net/ethernet/intel/i40e/i40e_common.c
@@ -1190,6 +1190,40 @@
 }

 /**
+ * i40e_aq_set_mac_config - Configure MAC settings
+ * @hw: pointer to the hw struct
+ * @max_frame_size: Maximum Frame Size to be supported by the port
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Set MAC configuration (0x0603). Note that max_frame_size must be greater
+ * than zero.
+ *
+ * Return: 0 on success, or a negative error code on failure.
+ */
+int i40e_aq_set_mac_config(struct i40e_hw *hw, u16 max_frame_size,
+                           struct i40e_asq_cmd_details *cmd_details)
+{
+        struct i40e_aq_set_mac_config *cmd;
+        struct libie_aq_desc desc;
+
+        cmd = libie_aq_raw(&desc);
+
+        if (max_frame_size == 0)
+                return -EINVAL;
+
+        i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_set_mac_config);
+
+        cmd->max_frame_size = cpu_to_le16(max_frame_size);
+        cmd->params = I40E_AQ_SET_MAC_CONFIG_CRC_EN;
+
+#define I40E_AQ_SET_MAC_CONFIG_FC_DEFAULT_THRESHOLD     0x7FFF
+        cmd->fc_refresh_threshold =
+                cpu_to_le16(I40E_AQ_SET_MAC_CONFIG_FC_DEFAULT_THRESHOLD);
+
+        return i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+}
+
+/**
  * i40e_aq_clear_pxe_mode
  * @hw: pointer to the hw struct
  * @cmd_details: pointer to command details structure or NULL
+11 -7
drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -4156,7 +4156,7 @@
                 irq_num = pf->msix_entries[base + vector].vector;
                 irq_set_affinity_notifier(irq_num, NULL);
                 irq_update_affinity_hint(irq_num, NULL);
-                free_irq(irq_num, &vsi->q_vectors[vector]);
+                free_irq(irq_num, vsi->q_vectors[vector]);
         }
         return err;
 }
@@ -16045,13 +16045,17 @@
                 dev_dbg(&pf->pdev->dev, "get supported phy types ret = %pe last_status = %s\n",
                         ERR_PTR(err), libie_aq_str(pf->hw.aq.asq_last_status));

-        /* make sure the MFS hasn't been set lower than the default */
 #define MAX_FRAME_SIZE_DEFAULT 0x2600
-        val = FIELD_GET(I40E_PRTGL_SAH_MFS_MASK,
-                        rd32(&pf->hw, I40E_PRTGL_SAH));
-        if (val < MAX_FRAME_SIZE_DEFAULT)
-                dev_warn(&pdev->dev, "MFS for port %x (%d) has been set below the default (%d)\n",
-                         pf->hw.port, val, MAX_FRAME_SIZE_DEFAULT);
+
+        err = i40e_aq_set_mac_config(hw, MAX_FRAME_SIZE_DEFAULT, NULL);
+        if (err)
+                dev_warn(&pdev->dev, "set mac config ret = %pe last_status = %s\n",
+                         ERR_PTR(err), libie_aq_str(pf->hw.aq.asq_last_status));
+
+        /* Make sure the MFS is set to the expected value */
+        val = rd32(hw, I40E_PRTGL_SAH);
+        FIELD_MODIFY(I40E_PRTGL_SAH_MFS_MASK, &val, MAX_FRAME_SIZE_DEFAULT);
+        wr32(hw, I40E_PRTGL_SAH, val);

         /* Add a filter to drop all Flow control frames from any VSI from being
          * transmitted. By doing so we stop a malicious VF from sending out
+2
drivers/net/ethernet/intel/i40e/i40e_prototype.h
@@ -98,6 +98,8 @@
                               struct i40e_asq_cmd_details *cmd_details);
 int i40e_aq_set_phy_int_mask(struct i40e_hw *hw, u16 mask,
                              struct i40e_asq_cmd_details *cmd_details);
+int i40e_aq_set_mac_config(struct i40e_hw *hw, u16 max_frame_size,
+                           struct i40e_asq_cmd_details *cmd_details);
 int i40e_aq_clear_pxe_mode(struct i40e_hw *hw,
                            struct i40e_asq_cmd_details *cmd_details);
 int i40e_aq_set_link_restart_an(struct i40e_hw *hw,
+1 -4
drivers/net/ethernet/intel/igb/igb_ethtool.c
@@ -2081,11 +2081,8 @@
         } else {
                 dev_info(&adapter->pdev->dev, "online testing starting\n");

-                /* PHY is powered down when interface is down */
-                if (if_running && igb_link_test(adapter, &data[TEST_LINK]))
+                if (igb_link_test(adapter, &data[TEST_LINK]))
                         eth_test->flags |= ETH_TEST_FL_FAILED;
-                else
-                        data[TEST_LINK] = 0;

                 /* Online tests aren't run; pass by default */
                 data[TEST_REG] = 0;
+1 -2
drivers/net/ethernet/intel/igb/igb_main.c
@@ -4453,8 +4453,7 @@
         if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
                 xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
         res = xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev,
-                               rx_ring->queue_index,
-                               rx_ring->q_vector->napi.napi_id);
+                               rx_ring->queue_index, 0);
         if (res < 0) {
                 dev_err(dev, "Failed to register xdp_rxq index %u\n",
                         rx_ring->queue_index);
+14 -6
drivers/net/ethernet/ti/icssg/icssg_prueth.c
@@ -654,7 +654,7 @@

 static int icssg_prueth_hsr_add_mcast(struct net_device *ndev, const u8 *addr)
 {
-        struct net_device *real_dev;
+        struct net_device *real_dev, *port_dev;
         struct prueth_emac *emac;
         u8 vlan_id, i;

@@ -663,11 +663,15 @@

         if (is_hsr_master(real_dev)) {
                 for (i = HSR_PT_SLAVE_A; i < HSR_PT_INTERLINK; i++) {
-                        emac = netdev_priv(hsr_get_port_ndev(real_dev, i));
-                        if (!emac)
+                        port_dev = hsr_get_port_ndev(real_dev, i);
+                        emac = netdev_priv(port_dev);
+                        if (!emac) {
+                                dev_put(port_dev);
                                 return -EINVAL;
+                        }
                         icssg_prueth_hsr_fdb_add_del(emac, addr, vlan_id,
                                                      true);
+                        dev_put(port_dev);
                 }
         } else {
                 emac = netdev_priv(real_dev);
@@ -679,7 +683,7 @@

 static int icssg_prueth_hsr_del_mcast(struct net_device *ndev, const u8 *addr)
 {
-        struct net_device *real_dev;
+        struct net_device *real_dev, *port_dev;
         struct prueth_emac *emac;
         u8 vlan_id, i;

@@ -688,11 +692,15 @@

         if (is_hsr_master(real_dev)) {
                 for (i = HSR_PT_SLAVE_A; i < HSR_PT_INTERLINK; i++) {
-                        emac = netdev_priv(hsr_get_port_ndev(real_dev, i));
-                        if (!emac)
+                        port_dev = hsr_get_port_ndev(real_dev, i);
+                        emac = netdev_priv(port_dev);
+                        if (!emac) {
+                                dev_put(port_dev);
                                 return -EINVAL;
+                        }
                         icssg_prueth_hsr_fdb_add_del(emac, addr, vlan_id,
                                                      false);
+                        dev_put(port_dev);
                 }
         } else {
                 emac = netdev_priv(real_dev);
-4
drivers/net/ethernet/wangxun/libwx/wx_hw.c
@@ -2078,10 +2078,6 @@
 {
         u32 rss_field = 0;

-        /* VT, and RSS do not coexist at the same time */
-        if (test_bit(WX_FLAG_VMDQ_ENABLED, wx->flags))
-                return;
-
         /* Disable indicating checksum in descriptor, enables RSS hash */
         wr32m(wx, WX_PSR_CTL, WX_PSR_CTL_PCSD, WX_PSR_CTL_PCSD);

+1
drivers/net/macsec.c
@@ -4286,6 +4286,7 @@
         if (err < 0)
                 goto del_dev;

+        netdev_update_features(dev);
         netif_stacked_transfer_operstate(real_dev, dev);
         linkwatch_fire_event(dev);

+1 -1
drivers/net/phy/Kconfig
@@ -361,7 +361,7 @@
         tristate "NXP TJA11xx PHYs support"
         depends on HWMON
         help
-          Currently supports the NXP TJA1100 and TJA1101 PHY.
+          Currently supports the NXP TJA1100, TJA1101 and TJA1102 PHYs.

 config NCN26000_PHY
         tristate "Onsemi 10BASE-T1S Ethernet PHY"
+4 -8
drivers/net/phy/phy.c
@@ -1065,22 +1065,18 @@
  */
 int phy_config_inband(struct phy_device *phydev, unsigned int modes)
 {
-        int err;
+        lockdep_assert_held(&phydev->lock);

         if (!!(modes & LINK_INBAND_DISABLE) +
             !!(modes & LINK_INBAND_ENABLE) +
             !!(modes & LINK_INBAND_BYPASS) != 1)
                 return -EINVAL;

-        mutex_lock(&phydev->lock);
         if (!phydev->drv)
-                err = -EIO;
+                return -EIO;
         else if (!phydev->drv->config_inband)
-                err = -EOPNOTSUPP;
-        else
-                err = phydev->drv->config_inband(phydev, modes);
-        mutex_unlock(&phydev->lock);
+                return -EOPNOTSUPP;

-        return err;
+        return phydev->drv->config_inband(phydev, modes);
 }
 EXPORT_SYMBOL(phy_config_inband);
+3 -2
drivers/net/phy/phy_device.c
@@ -287,8 +287,7 @@
         if (phydev->phy_link_change == phy_link_change)
                 return phydev->attached_dev && phydev->adjust_link;

-        /* phydev->phy_link_change is implicitly phylink_phy_change() */
-        return true;
+        return !!phydev->phy_link_change;
 }

 static bool mdio_bus_phy_may_suspend(struct phy_device *phydev)
@@ -1864,6 +1863,8 @@
                 phydev->attached_dev = NULL;
                 phy_link_topo_del_phy(dev, phydev);
         }
+
+        phydev->phy_link_change = NULL;
         phydev->phylink = NULL;

         if (!phydev->is_on_sfp_module)
+25 -3
drivers/net/phy/phylink.c
@@ -67,6 +67,8 @@
         struct timer_list link_poll;

         struct mutex state_mutex;
+        /* Serialize updates to pl->phydev with phylink_resolve() */
+        struct mutex phydev_mutex;
         struct phylink_link_state phy_state;
         unsigned int phy_ib_mode;
         struct work_struct resolve;
@@ -1432,6 +1434,7 @@
 static void phylink_mac_initial_config(struct phylink *pl, bool force_restart)
 {
         struct phylink_link_state link_state;
+        struct phy_device *phy = pl->phydev;

         switch (pl->req_link_an_mode) {
         case MLO_AN_PHY:
@@ -1455,5 +1458,9 @@
                 link_state.link = false;

         phylink_apply_manual_flow(pl, &link_state);
+        if (phy)
+                mutex_lock(&phy->lock);
         phylink_major_config(pl, force_restart, &link_state);
+        if (phy)
+                mutex_unlock(&phy->lock);
 }
@@ -1591,7 +1598,12 @@
         struct phylink_link_state link_state;
         bool mac_config = false;
         bool retrigger = false;
+        struct phy_device *phy;
         bool cur_link_state;

+        mutex_lock(&pl->phydev_mutex);
+        phy = pl->phydev;
+        if (phy)
+                mutex_lock(&phy->lock);
         mutex_lock(&pl->state_mutex);
         cur_link_state = phylink_link_is_up(pl);
@@ -1626,11 +1638,11 @@
         /* If we have a phy, the "up" state is the union of both the
          * PHY and the MAC
          */
-        if (pl->phydev)
+        if (phy)
                 link_state.link &= pl->phy_state.link;

         /* Only update if the PHY link is up */
-        if (pl->phydev && pl->phy_state.link) {
+        if (phy && pl->phy_state.link) {
                 /* If the interface has changed, force a link down
                  * event if the link isn't already down, and re-resolve.
                  */
@@ -1694,6 +1706,9 @@
                         queue_work(system_power_efficient_wq, &pl->resolve);
         }
         mutex_unlock(&pl->state_mutex);
+        if (phy)
+                mutex_unlock(&phy->lock);
+        mutex_unlock(&pl->phydev_mutex);
 }

 static void phylink_run_resolve(struct phylink *pl)
@@ -1829,6 +1844,7 @@
         if (!pl)
                 return ERR_PTR(-ENOMEM);

+        mutex_init(&pl->phydev_mutex);
         mutex_init(&pl->state_mutex);
         INIT_WORK(&pl->resolve, phylink_resolve);
@@ -2089,6 +2105,7 @@
                     dev_name(&phy->mdio.dev), phy->drv->name, irq_str);
         kfree(irq_str);

+        mutex_lock(&pl->phydev_mutex);
         mutex_lock(&phy->lock);
         mutex_lock(&pl->state_mutex);
         pl->phydev = phy;
@@ -2134,6 +2151,7 @@

         mutex_unlock(&pl->state_mutex);
         mutex_unlock(&phy->lock);
+        mutex_unlock(&pl->phydev_mutex);

         phylink_dbg(pl,
                     "phy: %s setting supported %*pb advertising %*pb\n",
@@ -2312,6 +2330,7 @@

         ASSERT_RTNL();

+        mutex_lock(&pl->phydev_mutex);
         phy = pl->phydev;
         if (phy) {
                 mutex_lock(&phy->lock);
@@ -2321,8 +2340,11 @@
                 pl->mac_tx_clk_stop = false;
                 mutex_unlock(&pl->state_mutex);
                 mutex_unlock(&phy->lock);
-                flush_work(&pl->resolve);
+        }
+        mutex_unlock(&pl->phydev_mutex);

+        if (phy) {
+                flush_work(&pl->resolve);
                 phy_disconnect(phy);
         }
 }
+67 -55
drivers/net/wireless/ath/ath12k/mac.c
@@ -4078,6 +4078,62 @@
         return ret;
 }

+static void ath12k_mac_vif_setup_ps(struct ath12k_link_vif *arvif)
+{
+        struct ath12k *ar = arvif->ar;
+        struct ieee80211_vif *vif = arvif->ahvif->vif;
+        struct ieee80211_conf *conf = &ath12k_ar_to_hw(ar)->conf;
+        enum wmi_sta_powersave_param param;
+        struct ieee80211_bss_conf *info;
+        enum wmi_sta_ps_mode psmode;
+        int ret;
+        int timeout;
+        bool enable_ps;
+
+        lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy);
+
+        if (vif->type != NL80211_IFTYPE_STATION)
+                return;
+
+        enable_ps = arvif->ahvif->ps;
+        if (enable_ps) {
+                psmode = WMI_STA_PS_MODE_ENABLED;
+                param = WMI_STA_PS_PARAM_INACTIVITY_TIME;
+
+                timeout = conf->dynamic_ps_timeout;
+                if (timeout == 0) {
+                        info = ath12k_mac_get_link_bss_conf(arvif);
+                        if (!info) {
+                                ath12k_warn(ar->ab, "unable to access bss link conf in setup ps for vif %pM link %u\n",
+                                            vif->addr, arvif->link_id);
+                                return;
+                        }
+
+                        /* firmware doesn't like 0 */
+                        timeout = ieee80211_tu_to_usec(info->beacon_int) / 1000;
+                }
+
+                ret = ath12k_wmi_set_sta_ps_param(ar, arvif->vdev_id, param,
+                                                  timeout);
+                if (ret) {
+                        ath12k_warn(ar->ab, "failed to set inactivity time for vdev %d: %i\n",
+                                    arvif->vdev_id, ret);
+                        return;
+                }
+        } else {
+                psmode = WMI_STA_PS_MODE_DISABLED;
+        }
+
+        ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vdev %d psmode %s\n",
+                   arvif->vdev_id, psmode ? "enable" : "disable");
+
+        ret = ath12k_wmi_pdev_set_ps_mode(ar, arvif->vdev_id, psmode);
+        if (ret)
+                ath12k_warn(ar->ab, "failed to set sta power save mode %d for vdev %d: %d\n",
+                            psmode, arvif->vdev_id, ret);
+}
+
 static void ath12k_mac_op_vif_cfg_changed(struct ieee80211_hw *hw,
                                           struct ieee80211_vif *vif,
                                           u64 changed)
 {
         struct ath12k_vif *ahvif = ath12k_vif_to_ahvif(vif);
         unsigned long links = ahvif->links_map;
+        struct ieee80211_vif_cfg *vif_cfg;
         struct ieee80211_bss_conf *info;
         struct ath12k_link_vif *arvif;
         struct ieee80211_sta *sta;
@@ -4147,61 +4203,27 @@
                         }
                 }
         }
-}

-static void ath12k_mac_vif_setup_ps(struct ath12k_link_vif *arvif)
-{
-        struct ath12k *ar = arvif->ar;
-        struct ieee80211_vif *vif = arvif->ahvif->vif;
-        struct ieee80211_conf *conf = &ath12k_ar_to_hw(ar)->conf;
-        enum wmi_sta_powersave_param param;
-        struct ieee80211_bss_conf *info;
-        enum wmi_sta_ps_mode psmode;
-        int ret;
-        int timeout;
-        bool enable_ps;
+        if (changed & BSS_CHANGED_PS) {
+                links = ahvif->links_map;
+                vif_cfg = &vif->cfg;

-        lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy);
+                for_each_set_bit(link_id, &links, IEEE80211_MLD_MAX_NUM_LINKS) {
+                        arvif = wiphy_dereference(hw->wiphy, ahvif->link[link_id]);
+                        if (!arvif || !arvif->ar)
+                                continue;

-        if (vif->type != NL80211_IFTYPE_STATION)
-                return;
+                        ar = arvif->ar;

-        enable_ps = arvif->ahvif->ps;
-        if (enable_ps) {
-                psmode = WMI_STA_PS_MODE_ENABLED;
-                param = WMI_STA_PS_PARAM_INACTIVITY_TIME;
-
-                timeout = conf->dynamic_ps_timeout;
-                if (timeout == 0) {
-                        info = ath12k_mac_get_link_bss_conf(arvif);
-                        if (!info) {
-                                ath12k_warn(ar->ab, "unable to access bss link conf in setup ps for vif %pM link %u\n",
-                                            vif->addr, arvif->link_id);
-                                return;
+                        if (ar->ab->hw_params->supports_sta_ps) {
+                                ahvif->ps = vif_cfg->ps;
+                                ath12k_mac_vif_setup_ps(arvif);
                         }
-
-                        /* firmware doesn't like 0 */
-                        timeout = ieee80211_tu_to_usec(info->beacon_int) / 1000;
                 }
-
-                ret = ath12k_wmi_set_sta_ps_param(ar, arvif->vdev_id, param,
-                                                  timeout);
-                if (ret) {
-                        ath12k_warn(ar->ab, "failed to set inactivity time for vdev %d: %i\n",
-                                    arvif->vdev_id, ret);
-                        return;
-                }
-        } else {
-                psmode = WMI_STA_PS_MODE_DISABLED;
         }
-
-        ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vdev %d psmode %s\n",
-                   arvif->vdev_id, psmode ? "enable" : "disable");
-
-        ret = ath12k_wmi_pdev_set_ps_mode(ar, arvif->vdev_id, psmode);
-        if (ret)
-                ath12k_warn(ar->ab, "failed to set sta power save mode %d for vdev %d: %d\n",
-                            psmode, arvif->vdev_id, ret);
 }

 static bool ath12k_mac_supports_tpc(struct ath12k *ar, struct ath12k_vif *ahvif,
@@ -4223,7 +4242,6 @@
 {
         struct ath12k_vif *ahvif = arvif->ahvif;
         struct ieee80211_vif *vif = ath12k_ahvif_to_vif(ahvif);
-        struct ieee80211_vif_cfg *vif_cfg = &vif->cfg;
         struct cfg80211_chan_def def;
         u32 param_id, param_value;
         enum nl80211_band band;
@@ -4510,12 +4528,6 @@
         }

         ath12k_mac_fils_discovery(arvif, info);
-
-        if (changed & BSS_CHANGED_PS &&
-            ar->ab->hw_params->supports_sta_ps) {
-                ahvif->ps = vif_cfg->ps;
-                ath12k_mac_vif_setup_ps(arvif);
-        }
 }

 static struct ath12k_vif_cache *ath12k_ahvif_get_link_cache(struct ath12k_vif *ahvif,
+1 -1
drivers/net/wireless/ath/ath12k/wmi.c
@@ -843,7 +843,7 @@
         cmd->tx_params_valid = 0;

         frame_tlv = (struct wmi_tlv *)(skb->data + sizeof(*cmd));
-        frame_tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, buf_len);
+        frame_tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, buf_len_aligned);

         memcpy(frame_tlv->value, frame->data, buf_len);

+13 -13
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
@@ -124,13 +124,13 @@
         {IWL_PCI_DEVICE(0x0082, 0x1304, iwl6005_mac_cfg)},/* low 5GHz active */
         {IWL_PCI_DEVICE(0x0082, 0x1305, iwl6005_mac_cfg)},/* high 5GHz active */

-/* 6x30 Series */
-        {IWL_PCI_DEVICE(0x008A, 0x5305, iwl1000_mac_cfg)},
-        {IWL_PCI_DEVICE(0x008A, 0x5307, iwl1000_mac_cfg)},
-        {IWL_PCI_DEVICE(0x008A, 0x5325, iwl1000_mac_cfg)},
-        {IWL_PCI_DEVICE(0x008A, 0x5327, iwl1000_mac_cfg)},
-        {IWL_PCI_DEVICE(0x008B, 0x5315, iwl1000_mac_cfg)},
-        {IWL_PCI_DEVICE(0x008B, 0x5317, iwl1000_mac_cfg)},
+/* 1030/6x30 Series */
+        {IWL_PCI_DEVICE(0x008A, 0x5305, iwl6030_mac_cfg)},
+        {IWL_PCI_DEVICE(0x008A, 0x5307, iwl6030_mac_cfg)},
+        {IWL_PCI_DEVICE(0x008A, 0x5325, iwl6030_mac_cfg)},
+        {IWL_PCI_DEVICE(0x008A, 0x5327, iwl6030_mac_cfg)},
+        {IWL_PCI_DEVICE(0x008B, 0x5315, iwl6030_mac_cfg)},
+        {IWL_PCI_DEVICE(0x008B, 0x5317, iwl6030_mac_cfg)},
         {IWL_PCI_DEVICE(0x0090, 0x5211, iwl6030_mac_cfg)},
         {IWL_PCI_DEVICE(0x0090, 0x5215, iwl6030_mac_cfg)},
         {IWL_PCI_DEVICE(0x0090, 0x5216, iwl6030_mac_cfg)},
@@ -181,12 +181,12 @@
         {IWL_PCI_DEVICE(0x08AE, 0x1027, iwl1000_mac_cfg)},

 /* 130 Series WiFi */
-        {IWL_PCI_DEVICE(0x0896, 0x5005, iwl1000_mac_cfg)},
-        {IWL_PCI_DEVICE(0x0896, 0x5007, iwl1000_mac_cfg)},
-        {IWL_PCI_DEVICE(0x0897, 0x5015, iwl1000_mac_cfg)},
-        {IWL_PCI_DEVICE(0x0897, 0x5017, iwl1000_mac_cfg)},
-        {IWL_PCI_DEVICE(0x0896, 0x5025, iwl1000_mac_cfg)},
-        {IWL_PCI_DEVICE(0x0896, 0x5027, iwl1000_mac_cfg)},
+        {IWL_PCI_DEVICE(0x0896, 0x5005, iwl6030_mac_cfg)},
+        {IWL_PCI_DEVICE(0x0896, 0x5007, iwl6030_mac_cfg)},
+        {IWL_PCI_DEVICE(0x0897, 0x5015, iwl6030_mac_cfg)},
+        {IWL_PCI_DEVICE(0x0897, 0x5017, iwl6030_mac_cfg)},
+        {IWL_PCI_DEVICE(0x0896, 0x5025, iwl6030_mac_cfg)},
+        {IWL_PCI_DEVICE(0x0896, 0x5027, iwl6030_mac_cfg)},

 /* 2x00 Series */
         {IWL_PCI_DEVICE(0x0890, 0x4022, iwl2000_mac_cfg)},
+3 -1
drivers/net/wireless/virtual/virt_wifi.c
@@ -277,7 +277,9 @@
         priv->is_connected = true;

         /* Schedules an event that acquires the rtnl lock. */
-        cfg80211_connect_result(priv->upperdev, requested_bss, NULL, 0, NULL, 0,
+        cfg80211_connect_result(priv->upperdev,
+                                priv->is_connected ? fake_router_bssid : NULL,
+                                NULL, 0, NULL, 0,
                                 status, GFP_KERNEL);
         netif_carrier_on(priv->upperdev);
 }
+2 -2
include/linux/ethtool.h
@@ -856,8 +856,8 @@
         enum hwtstamp_provider_qualifier phc_qualifier;
         enum hwtstamp_source phc_source;
         int phc_phyindex;
-        enum hwtstamp_tx_types tx_types;
-        enum hwtstamp_rx_filters rx_filters;
+        u32 tx_types;
+        u32 rx_filters;
 };

 /**
-1
include/net/netfilter/nf_tables.h
@@ -1912,7 +1912,6 @@
         struct mutex            commit_mutex;
         u64                     table_handle;
         u64                     tstamp;
-        unsigned int            base_seq;
         unsigned int            gc_seq;
         u8                      validate_state;
         struct work_struct      destroy_work;
+2 -8
include/net/netfilter/nf_tables_core.h
@@ -109,17 +109,11 @@
 const struct nft_set_ext *
 nft_hash_lookup(const struct net *net, const struct nft_set *set,
                 const u32 *key);
+#endif
+
 const struct nft_set_ext *
 nft_set_do_lookup(const struct net *net, const struct nft_set *set,
                   const u32 *key);
-#else
-static inline const struct nft_set_ext *
-nft_set_do_lookup(const struct net *net, const struct nft_set *set,
-                  const u32 *key)
-{
-        return set->ops->lookup(net, set, key);
-}
-#endif

 /* called from nft_pipapo_avx2.c */
 const struct nft_set_ext *
+1
include/net/netns/nftables.h
@@ -3,6 +3,7 @@
 #define _NETNS_NFTABLES_H_

 struct netns_nftables {
+        unsigned int base_seq;
         u8 gencursor;
 };

+7
net/bridge/br.c
@@ -324,6 +324,13 @@
         int err = 0;
         int opt_id;

+        opt_id = find_next_bit(&bitmap, BITS_PER_LONG, BR_BOOLOPT_MAX);
+        if (opt_id != BITS_PER_LONG) {
+                NL_SET_ERR_MSG_FMT_MOD(extack, "Unknown boolean option %d",
+                                       opt_id);
+                return -EINVAL;
+        }
+
         for_each_set_bit(opt_id, &bitmap, BR_BOOLOPT_MAX) {
                 bool on = !!(bm->optval & BIT(opt_id));

+4 -1
net/can/j1939/bus.c
@@ -290,8 +290,11 @@
         if (!ecu)
                 ecu = j1939_ecu_create_locked(priv, name);
         err = PTR_ERR_OR_ZERO(ecu);
-        if (err)
+        if (err) {
+                if (j1939_address_is_unicast(sa))
+                        priv->ents[sa].nusers--;
                 goto done;
+        }

         ecu->nusers++;
         /* TODO: do we care if ecu->addr != sa? */
+1
net/can/j1939/j1939-priv.h
@@ -212,6 +212,7 @@

 /* notify/alert all j1939 sockets bound to ifindex */
 void j1939_sk_netdev_event_netdown(struct j1939_priv *priv);
+void j1939_sk_netdev_event_unregister(struct j1939_priv *priv);
 int j1939_cancel_active_session(struct j1939_priv *priv, struct sock *sk);
 void j1939_tp_init(struct j1939_priv *priv);

+3
net/can/j1939/main.c
@@ -377,6 +377,9 @@
                 j1939_sk_netdev_event_netdown(priv);
                 j1939_ecu_unmap_all(priv);
                 break;
+        case NETDEV_UNREGISTER:
+                j1939_sk_netdev_event_unregister(priv);
+                break;
         }

         j1939_priv_put(priv);
+52
net/can/j1939/socket.c
··· 521 521 ret = j1939_local_ecu_get(priv, jsk->addr.src_name, jsk->addr.sa); 522 522 if (ret) { 523 523 j1939_netdev_stop(priv); 524 + jsk->priv = NULL; 525 + synchronize_rcu(); 526 + j1939_priv_put(priv); 524 527 goto out_release_sock; 525 528 } 526 529 ··· 1301 1298 j1939_sk_queue_drop_all(priv, jsk, error_code); 1302 1299 } 1303 1300 read_unlock_bh(&priv->j1939_socks_lock); 1301 + } 1302 + 1303 + void j1939_sk_netdev_event_unregister(struct j1939_priv *priv) 1304 + { 1305 + struct sock *sk; 1306 + struct j1939_sock *jsk; 1307 + bool wait_rcu = false; 1308 + 1309 + rescan: /* The caller is holding a ref on this "priv" via j1939_priv_get_by_ndev(). */ 1310 + read_lock_bh(&priv->j1939_socks_lock); 1311 + list_for_each_entry(jsk, &priv->j1939_socks, list) { 1312 + /* Skip if j1939_jsk_add() is not called on this socket. */ 1313 + if (!(jsk->state & J1939_SOCK_BOUND)) 1314 + continue; 1315 + sk = &jsk->sk; 1316 + sock_hold(sk); 1317 + read_unlock_bh(&priv->j1939_socks_lock); 1318 + /* Check if j1939_jsk_del() is not yet called on this socket after holding 1319 + * socket's lock, for both j1939_sk_bind() and j1939_sk_release() call 1320 + * j1939_jsk_del() with socket's lock held. 1321 + */ 1322 + lock_sock(sk); 1323 + if (jsk->state & J1939_SOCK_BOUND) { 1324 + /* Neither j1939_sk_bind() nor j1939_sk_release() called j1939_jsk_del(). 1325 + * Make this socket no longer bound, by pretending as if j1939_sk_bind() 1326 + * dropped old references but did not get new references. 1327 + */ 1328 + j1939_jsk_del(priv, jsk); 1329 + j1939_local_ecu_put(priv, jsk->addr.src_name, jsk->addr.sa); 1330 + j1939_netdev_stop(priv); 1331 + /* Call j1939_priv_put() now and prevent j1939_sk_sock_destruct() from 1332 + * calling the corresponding j1939_priv_put(). 1333 + * 1334 + * j1939_sk_sock_destruct() is supposed to call j1939_priv_put() after 1335 + * an RCU grace period. 
But since the caller is holding a ref on this 1336 + * "priv", we can defer synchronize_rcu() until immediately before 1337 + * the caller calls j1939_priv_put(). 1338 + */ 1339 + j1939_priv_put(priv); 1340 + jsk->priv = NULL; 1341 + wait_rcu = true; 1342 + } 1343 + release_sock(sk); 1344 + sock_put(sk); 1345 + goto rescan; 1346 + } 1347 + read_unlock_bh(&priv->j1939_socks_lock); 1348 + if (wait_rcu) 1349 + synchronize_rcu(); 1304 1350 } 1305 1351 1306 1352 static int j1939_sk_no_ioctlcmd(struct socket *sock, unsigned int cmd,
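The unregister handler above walks the socket list under `j1939_socks_lock`, but it must drop that lock before taking each socket's own lock, so after acting on an entry it restarts the walk from the head (the `rescan:` label). A minimal single-threaded C sketch of this unlock-act-rescan pattern; all names here are hypothetical stand-ins, and the lock stubs simply assert that the list lock is never held across the per-entry work:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Single-threaded stand-in for read_lock_bh(&priv->j1939_socks_lock);
 * the asserts check the lock is never taken recursively. */
static int list_lock_held;
static void list_lock(void)   { assert(!list_lock_held); list_lock_held = 1; }
static void list_unlock(void) { assert(list_lock_held);  list_lock_held = 0; }

/* Hypothetical j1939_sock-like entry. */
struct entry {
    struct entry *next;
    bool bound;                 /* J1939_SOCK_BOUND analogue */
    int refcount;
};

static struct entry *entries;
static struct entry pool[3];

void build_demo_list(void)
{
    for (int i = 0; i < 3; i++) {
        pool[i].bound = (i != 1);
        pool[i].refcount = 0;
        pool[i].next = (i < 2) ? &pool[i + 1] : NULL;
    }
    entries = pool;
}

/* Unbind every bound entry.  The list lock cannot be held while the
 * per-entry lock is taken, so after acting on one entry the walk
 * restarts from the head, like the rescan label in the patch. */
int unbind_all(void)
{
    struct entry *e;
    int acted = 0;

rescan:
    list_lock();
    for (e = entries; e; e = e->next) {
        if (!e->bound)
            continue;
        e->refcount++;          /* sock_hold() analogue */
        list_unlock();

        /* Re-check: another path may have unbound the entry after
         * the list lock was dropped (cf. lock_sock() plus the
         * re-test of J1939_SOCK_BOUND in the patch). */
        if (e->bound) {
            e->bound = false;
            acted++;
        }
        e->refcount--;          /* sock_put() analogue */
        goto rescan;            /* the list may have changed */
    }
    list_unlock();
    return acted;
}
```

Each pass either finishes the list or unbinds one entry, so the rescan loop terminates once no bound entries remain.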
+18 -4
net/core/dev_ioctl.c
··· 464 464 if (!netif_device_present(dev)) 465 465 return -ENODEV; 466 466 467 - if (ops->ndo_hwtstamp_get) 468 - return dev_get_hwtstamp_phylib(dev, kernel_cfg); 467 + if (ops->ndo_hwtstamp_get) { 468 + int err; 469 + 470 + netdev_lock_ops(dev); 471 + err = dev_get_hwtstamp_phylib(dev, kernel_cfg); 472 + netdev_unlock_ops(dev); 473 + 474 + return err; 475 + } 469 476 470 477 /* Legacy path: unconverted lower driver */ 471 478 return generic_hwtstamp_ioctl_lower(dev, SIOCGHWTSTAMP, kernel_cfg); ··· 488 481 if (!netif_device_present(dev)) 489 482 return -ENODEV; 490 483 491 - if (ops->ndo_hwtstamp_set) 492 - return dev_set_hwtstamp_phylib(dev, kernel_cfg, extack); 484 + if (ops->ndo_hwtstamp_set) { 485 + int err; 486 + 487 + netdev_lock_ops(dev); 488 + err = dev_set_hwtstamp_phylib(dev, kernel_cfg, extack); 489 + netdev_unlock_ops(dev); 490 + 491 + return err; 492 + } 493 493 494 494 /* Legacy path: unconverted lower driver */ 495 495 return generic_hwtstamp_ioctl_lower(dev, SIOCSHWTSTAMP, kernel_cfg);
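The dev_ioctl fix wraps only the converted (`ndo_hwtstamp_get`/`ndo_hwtstamp_set`) path in the ops lock and propagates the error after unlocking, while the legacy lower-driver path stays untouched. A hedged userspace sketch of that shape, with hypothetical names and a lock stub that asserts balance:

```c
#include <assert.h>
#include <stddef.h>

/* Single-threaded stand-in for netdev_lock_ops()/netdev_unlock_ops();
 * the asserts catch unbalanced locking, which is what the fix adds. */
static int ops_lock_held;
static void ops_lock(void)   { assert(!ops_lock_held); ops_lock_held = 1; }
static void ops_unlock(void) { assert(ops_lock_held);  ops_lock_held = 0; }

/* Hypothetical device: converted drivers provide get_cfg (like
 * ndo_hwtstamp_get); legacy ones fall back to an ioctl-style path. */
struct dev {
    int (*get_cfg)(struct dev *d, int *cfg);
    int legacy_cfg;
};

static int legacy_get(struct dev *d, int *cfg)
{
    *cfg = d->legacy_cfg;       /* generic_hwtstamp_ioctl_lower() analogue */
    return 0;
}

/* After the fix: the converted path runs under the ops lock and the
 * return value is propagated only after unlocking. */
int get_hwtstamp(struct dev *d, int *cfg)
{
    if (d->get_cfg) {
        int err;

        ops_lock();
        err = d->get_cfg(d, cfg);
        ops_unlock();
        return err;
    }
    return legacy_get(d, cfg);
}

/* sample converted callback for the demo */
int demo_get(struct dev *d, int *cfg) { (void)d; *cfg = 42; return 0; }
```

The temporary `err` exists only so the unlock can happen before the return, mirroring the patch.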
+18 -10
net/hsr/hsr_device.c
··· 49 49 50 50 ASSERT_RTNL(); 51 51 52 - hsr_for_each_port(master->hsr, port) { 52 + hsr_for_each_port_rtnl(master->hsr, port) { 53 53 if (port->type != HSR_PT_MASTER && is_slave_up(port->dev)) { 54 54 netif_carrier_on(master->dev); 55 55 return true; ··· 105 105 struct hsr_port *port; 106 106 107 107 mtu_max = ETH_DATA_LEN; 108 - hsr_for_each_port(hsr, port) 108 + hsr_for_each_port_rtnl(hsr, port) 109 109 if (port->type != HSR_PT_MASTER) 110 110 mtu_max = min(port->dev->mtu, mtu_max); 111 111 ··· 139 139 140 140 hsr = netdev_priv(dev); 141 141 142 - hsr_for_each_port(hsr, port) { 142 + hsr_for_each_port_rtnl(hsr, port) { 143 143 if (port->type == HSR_PT_MASTER) 144 144 continue; 145 145 switch (port->type) { ··· 172 172 struct hsr_priv *hsr; 173 173 174 174 hsr = netdev_priv(dev); 175 - hsr_for_each_port(hsr, port) { 175 + hsr_for_each_port_rtnl(hsr, port) { 176 176 if (port->type == HSR_PT_MASTER) 177 177 continue; 178 178 switch (port->type) { ··· 205 205 * may become enabled. 206 206 */ 207 207 features &= ~NETIF_F_ONE_FOR_ALL; 208 - hsr_for_each_port(hsr, port) 208 + hsr_for_each_port_rtnl(hsr, port) 209 209 features = netdev_increment_features(features, 210 210 port->dev->features, 211 211 mask); ··· 226 226 struct hsr_priv *hsr = netdev_priv(dev); 227 227 struct hsr_port *master; 228 228 229 + rcu_read_lock(); 229 230 master = hsr_port_get_hsr(hsr, HSR_PT_MASTER); 230 231 if (master) { 231 232 skb->dev = master->dev; ··· 239 238 dev_core_stats_tx_dropped_inc(dev); 240 239 dev_kfree_skb_any(skb); 241 240 } 241 + rcu_read_unlock(); 242 + 242 243 return NETDEV_TX_OK; 243 244 } 244 245 ··· 487 484 488 485 hsr = netdev_priv(dev); 489 486 490 - hsr_for_each_port(hsr, port) { 487 + hsr_for_each_port_rtnl(hsr, port) { 491 488 if (port->type == HSR_PT_MASTER) 492 489 continue; 493 490 switch (port->type) { ··· 509 506 510 507 hsr = netdev_priv(dev); 511 508 512 - hsr_for_each_port(hsr, port) { 509 + hsr_for_each_port_rtnl(hsr, port) { 513 510 if (port->type == 
HSR_PT_MASTER) 514 511 continue; 515 512 switch (port->type) { ··· 537 534 538 535 hsr = netdev_priv(dev); 539 536 540 - hsr_for_each_port(hsr, port) { 537 + hsr_for_each_port_rtnl(hsr, port) { 541 538 if (port->type == HSR_PT_MASTER || 542 539 port->type == HSR_PT_INTERLINK) 543 540 continue; ··· 583 580 584 581 hsr = netdev_priv(dev); 585 582 586 - hsr_for_each_port(hsr, port) { 583 + hsr_for_each_port_rtnl(hsr, port) { 587 584 switch (port->type) { 588 585 case HSR_PT_SLAVE_A: 589 586 case HSR_PT_SLAVE_B: ··· 675 672 struct hsr_priv *hsr = netdev_priv(ndev); 676 673 struct hsr_port *port; 677 674 675 + rcu_read_lock(); 678 676 hsr_for_each_port(hsr, port) 679 - if (port->type == pt) 677 + if (port->type == pt) { 678 + dev_hold(port->dev); 679 + rcu_read_unlock(); 680 680 return port->dev; 681 + } 682 + rcu_read_unlock(); 681 683 return NULL; 682 684 } 683 685 EXPORT_SYMBOL(hsr_get_port_ndev);
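The `hsr_get_port_ndev()` change takes `dev_hold()` while still inside the RCU read-side section, so the returned device cannot be freed between lookup and use. A minimal single-threaded sketch of "take the reference before leaving the read-side lock"; the names and the lock stub are illustrative only:

```c
#include <assert.h>
#include <stddef.h>

/* rcu_read_lock()/rcu_read_unlock() stand-ins for a sketch. */
static int rcu_held;
static void demo_rcu_lock(void)   { assert(!rcu_held); rcu_held = 1; }
static void demo_rcu_unlock(void) { assert(rcu_held);  rcu_held = 0; }

struct demo_dev  { int refcnt; };
struct demo_port { int type; struct demo_dev *dev; };

#define NPORTS 3
static struct demo_dev  devs[NPORTS];
static struct demo_port ports[NPORTS] = {
    { 0, &devs[0] }, { 1, &devs[1] }, { 2, &devs[2] },
};

/* Like the fixed hsr_get_port_ndev(): grab the reference while the
 * read-side lock still pins the port list, so the pointer handed to
 * the caller cannot go away underneath it. */
struct demo_dev *get_port_dev(int type)
{
    struct demo_dev *ret = NULL;

    demo_rcu_lock();
    for (int i = 0; i < NPORTS; i++) {
        if (ports[i].type == type) {
            ports[i].dev->refcnt++;     /* dev_hold() analogue */
            ret = ports[i].dev;
            break;
        }
    }
    demo_rcu_unlock();
    return ret;
}
```

The caller then owns a reference and must drop it when done, just as users of `hsr_get_port_ndev()` must now `dev_put()`.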
+2 -2
net/hsr/hsr_main.c
··· 22 22 { 23 23 struct hsr_port *port; 24 24 25 - hsr_for_each_port(hsr, port) 25 + hsr_for_each_port_rtnl(hsr, port) 26 26 if (port->type != HSR_PT_MASTER) 27 27 return false; 28 28 return true; ··· 134 134 { 135 135 struct hsr_port *port; 136 136 137 - hsr_for_each_port(hsr, port) 137 + hsr_for_each_port_rtnl(hsr, port) 138 138 if (port->type == pt) 139 139 return port; 140 140 return NULL;
+3
net/hsr/hsr_main.h
··· 224 224 #define hsr_for_each_port(hsr, port) \ 225 225 list_for_each_entry_rcu((port), &(hsr)->ports, port_list) 226 226 227 + #define hsr_for_each_port_rtnl(hsr, port) \ 228 + list_for_each_entry_rcu((port), &(hsr)->ports, port_list, lockdep_rtnl_is_held()) 229 + 227 230 struct hsr_port *hsr_port_get_hsr(struct hsr_priv *hsr, enum hsr_port_type pt); 228 231 229 232 /* Caller must ensure skb is a valid HSR frame */
+6
net/ipv4/ip_tunnel_core.c
··· 206 206 if (!pskb_may_pull(skb, ETH_HLEN + sizeof(struct iphdr))) 207 207 return -EINVAL; 208 208 209 + if (skb_is_gso(skb)) 210 + skb_gso_reset(skb); 211 + 209 212 skb_copy_bits(skb, skb_mac_offset(skb), &eh, ETH_HLEN); 210 213 pskb_pull(skb, ETH_HLEN); 211 214 skb_reset_network_header(skb); ··· 302 299 303 300 if (!pskb_may_pull(skb, ETH_HLEN + sizeof(struct ipv6hdr))) 304 301 return -EINVAL; 302 + 303 + if (skb_is_gso(skb)) 304 + skb_gso_reset(skb); 305 305 306 306 skb_copy_bits(skb, skb_mac_offset(skb), &eh, ETH_HLEN); 307 307 pskb_pull(skb, ETH_HLEN);
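The ip_tunnel_core fix clears stale GSO state before the error path reuses the skb. The general pattern (reset derived metadata before giving a buffer a second life) can be sketched with a hypothetical stand-in structure:

```c
#include <assert.h>

/* Hypothetical buffer carrying GSO-style segmentation metadata.  When
 * a tunnel error path re-parses and reuses the same buffer, the stale
 * fields from its previous use must be cleared first, which is what
 * the added skb_gso_reset() calls do. */
struct demo_skb {
    unsigned short gso_size;
    unsigned short gso_segs;
    unsigned int gso_type;
};

static int demo_is_gso(const struct demo_skb *skb)
{
    return skb->gso_size != 0;          /* skb_is_gso() analogue */
}

static void demo_gso_reset(struct demo_skb *skb)
{
    skb->gso_size = 0;
    skb->gso_segs = 0;
    skb->gso_type = 0;
}

/* Mirror of the fixed error paths: reset before reuse. */
void prepare_for_reuse(struct demo_skb *skb)
{
    if (demo_is_gso(skb))
        demo_gso_reset(skb);
}
```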
+5 -6
net/mptcp/sockopt.c
··· 1532 1532 { 1533 1533 static const unsigned int tx_rx_locks = SOCK_RCVBUF_LOCK | SOCK_SNDBUF_LOCK; 1534 1534 struct sock *sk = (struct sock *)msk; 1535 + bool keep_open; 1535 1536 1536 - if (ssk->sk_prot->keepalive) { 1537 - if (sock_flag(sk, SOCK_KEEPOPEN)) 1538 - ssk->sk_prot->keepalive(ssk, 1); 1539 - else 1540 - ssk->sk_prot->keepalive(ssk, 0); 1541 - } 1537 + keep_open = sock_flag(sk, SOCK_KEEPOPEN); 1538 + if (ssk->sk_prot->keepalive) 1539 + ssk->sk_prot->keepalive(ssk, keep_open); 1540 + sock_valbool_flag(ssk, SOCK_KEEPOPEN, keep_open); 1542 1541 1543 1542 ssk->sk_priority = sk->sk_priority; 1544 1543 ssk->sk_bound_dev_if = sk->sk_bound_dev_if;
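The mptcp sockopt fix reads `SOCK_KEEPOPEN` once and applies it both to the protocol keepalive hook and, crucially, to the subflow's own flag, which the old code never mirrored. A hedged sketch with hypothetical `demo_` names:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sockets: msk is the MPTCP socket, ssk a subflow.  The
 * bug was that only the protocol hook ran; the subflow's own
 * SOCK_KEEPOPEN-style flag was never mirrored. */
struct demo_sock {
    bool keepopen;               /* SOCK_KEEPOPEN analogue */
    bool timer_armed;            /* tcp keepalive timer analogue */
};

static void demo_keepalive(struct demo_sock *sk, bool on)
{
    sk->timer_armed = on;        /* sk_prot->keepalive() analogue */
}

/* Read the parent flag once, then apply it to both the hook and the
 * subflow flag, as the fixed sync_socket_options() now does. */
void sync_keepopen(const struct demo_sock *msk, struct demo_sock *ssk)
{
    bool keep_open = msk->keepopen;

    demo_keepalive(ssk, keep_open);
    ssk->keepopen = keep_open;   /* sock_valbool_flag() analogue */
}
```

Reading the flag into a local also collapses the old if/else pair into one call, which is the readability half of the patch.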
+34 -32
net/netfilter/nf_tables_api.c
··· 1131 1131 return ERR_PTR(-ENOENT); 1132 1132 } 1133 1133 1134 - static __be16 nft_base_seq(const struct net *net) 1134 + static unsigned int nft_base_seq(const struct net *net) 1135 1135 { 1136 - struct nftables_pernet *nft_net = nft_pernet(net); 1136 + return READ_ONCE(net->nft.base_seq); 1137 + } 1137 1138 1138 - return htons(nft_net->base_seq & 0xffff); 1139 + static __be16 nft_base_seq_be16(const struct net *net) 1140 + { 1141 + return htons(nft_base_seq(net) & 0xffff); 1139 1142 } 1140 1143 1141 1144 static const struct nla_policy nft_table_policy[NFTA_TABLE_MAX + 1] = { ··· 1158 1155 1159 1156 nlh = nfnl_msg_put(skb, portid, seq, 1160 1157 nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event), 1161 - flags, family, NFNETLINK_V0, nft_base_seq(net)); 1158 + flags, family, NFNETLINK_V0, nft_base_seq_be16(net)); 1162 1159 if (!nlh) 1163 1160 goto nla_put_failure; 1164 1161 ··· 1251 1248 1252 1249 rcu_read_lock(); 1253 1250 nft_net = nft_pernet(net); 1254 - cb->seq = READ_ONCE(nft_net->base_seq); 1251 + cb->seq = nft_base_seq(net); 1255 1252 1256 1253 list_for_each_entry_rcu(table, &nft_net->tables, list) { 1257 1254 if (family != NFPROTO_UNSPEC && family != table->family) ··· 2033 2030 2034 2031 nlh = nfnl_msg_put(skb, portid, seq, 2035 2032 nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event), 2036 - flags, family, NFNETLINK_V0, nft_base_seq(net)); 2033 + flags, family, NFNETLINK_V0, nft_base_seq_be16(net)); 2037 2034 if (!nlh) 2038 2035 goto nla_put_failure; 2039 2036 ··· 2136 2133 2137 2134 rcu_read_lock(); 2138 2135 nft_net = nft_pernet(net); 2139 - cb->seq = READ_ONCE(nft_net->base_seq); 2136 + cb->seq = nft_base_seq(net); 2140 2137 2141 2138 list_for_each_entry_rcu(table, &nft_net->tables, list) { 2142 2139 if (family != NFPROTO_UNSPEC && family != table->family) ··· 3674 3671 u16 type = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event); 3675 3672 3676 3673 nlh = nfnl_msg_put(skb, portid, seq, type, flags, family, NFNETLINK_V0, 3677 - nft_base_seq(net)); 3674 + 
nft_base_seq_be16(net)); 3678 3675 if (!nlh) 3679 3676 goto nla_put_failure; 3680 3677 ··· 3842 3839 3843 3840 rcu_read_lock(); 3844 3841 nft_net = nft_pernet(net); 3845 - cb->seq = READ_ONCE(nft_net->base_seq); 3842 + cb->seq = nft_base_seq(net); 3846 3843 3847 3844 list_for_each_entry_rcu(table, &nft_net->tables, list) { 3848 3845 if (family != NFPROTO_UNSPEC && family != table->family) ··· 4053 4050 buf = kasprintf(GFP_ATOMIC, "%.*s:%u", 4054 4051 nla_len(nla[NFTA_RULE_TABLE]), 4055 4052 (char *)nla_data(nla[NFTA_RULE_TABLE]), 4056 - nft_net->base_seq); 4053 + nft_base_seq(net)); 4057 4054 audit_log_nfcfg(buf, info->nfmsg->nfgen_family, 1, 4058 4055 AUDIT_NFT_OP_RULE_RESET, GFP_ATOMIC); 4059 4056 kfree(buf); ··· 4890 4887 nlh = nfnl_msg_put(skb, portid, seq, 4891 4888 nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event), 4892 4889 flags, ctx->family, NFNETLINK_V0, 4893 - nft_base_seq(ctx->net)); 4890 + nft_base_seq_be16(ctx->net)); 4894 4891 if (!nlh) 4895 4892 goto nla_put_failure; 4896 4893 ··· 5035 5032 5036 5033 rcu_read_lock(); 5037 5034 nft_net = nft_pernet(net); 5038 - cb->seq = READ_ONCE(nft_net->base_seq); 5035 + cb->seq = nft_base_seq(net); 5039 5036 5040 5037 list_for_each_entry_rcu(table, &nft_net->tables, list) { 5041 5038 if (ctx->family != NFPROTO_UNSPEC && ··· 6212 6209 6213 6210 rcu_read_lock(); 6214 6211 nft_net = nft_pernet(net); 6215 - cb->seq = READ_ONCE(nft_net->base_seq); 6212 + cb->seq = nft_base_seq(net); 6216 6213 6217 6214 list_for_each_entry_rcu(table, &nft_net->tables, list) { 6218 6215 if (dump_ctx->ctx.family != NFPROTO_UNSPEC && ··· 6241 6238 seq = cb->nlh->nlmsg_seq; 6242 6239 6243 6240 nlh = nfnl_msg_put(skb, portid, seq, event, NLM_F_MULTI, 6244 - table->family, NFNETLINK_V0, nft_base_seq(net)); 6241 + table->family, NFNETLINK_V0, nft_base_seq_be16(net)); 6245 6242 if (!nlh) 6246 6243 goto nla_put_failure; 6247 6244 ··· 6334 6331 6335 6332 event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event); 6336 6333 nlh = nfnl_msg_put(skb, portid, seq, 
event, flags, ctx->family, 6337 - NFNETLINK_V0, nft_base_seq(ctx->net)); 6334 + NFNETLINK_V0, nft_base_seq_be16(ctx->net)); 6338 6335 if (!nlh) 6339 6336 goto nla_put_failure; 6340 6337 ··· 6633 6630 } 6634 6631 nelems++; 6635 6632 } 6636 - audit_log_nft_set_reset(dump_ctx.ctx.table, nft_net->base_seq, nelems); 6633 + audit_log_nft_set_reset(dump_ctx.ctx.table, nft_base_seq(info->net), nelems); 6637 6634 6638 6635 out_unlock: 6639 6636 rcu_read_unlock(); ··· 8384 8381 8385 8382 nlh = nfnl_msg_put(skb, portid, seq, 8386 8383 nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event), 8387 - flags, family, NFNETLINK_V0, nft_base_seq(net)); 8384 + flags, family, NFNETLINK_V0, nft_base_seq_be16(net)); 8388 8385 if (!nlh) 8389 8386 goto nla_put_failure; 8390 8387 ··· 8449 8446 8450 8447 rcu_read_lock(); 8451 8448 nft_net = nft_pernet(net); 8452 - cb->seq = READ_ONCE(nft_net->base_seq); 8449 + cb->seq = nft_base_seq(net); 8453 8450 8454 8451 list_for_each_entry_rcu(table, &nft_net->tables, list) { 8455 8452 if (family != NFPROTO_UNSPEC && family != table->family) ··· 8483 8480 idx++; 8484 8481 } 8485 8482 if (ctx->reset && entries) 8486 - audit_log_obj_reset(table, nft_net->base_seq, entries); 8483 + audit_log_obj_reset(table, nft_base_seq(net), entries); 8487 8484 if (rc < 0) 8488 8485 break; 8489 8486 } ··· 8652 8649 buf = kasprintf(GFP_ATOMIC, "%.*s:%u", 8653 8650 nla_len(nla[NFTA_OBJ_TABLE]), 8654 8651 (char *)nla_data(nla[NFTA_OBJ_TABLE]), 8655 - nft_net->base_seq); 8652 + nft_base_seq(net)); 8656 8653 audit_log_nfcfg(buf, info->nfmsg->nfgen_family, 1, 8657 8654 AUDIT_NFT_OP_OBJ_RESET, GFP_ATOMIC); 8658 8655 kfree(buf); ··· 8757 8754 struct nft_object *obj, u32 portid, u32 seq, int event, 8758 8755 u16 flags, int family, int report, gfp_t gfp) 8759 8756 { 8760 - struct nftables_pernet *nft_net = nft_pernet(net); 8761 8757 char *buf = kasprintf(gfp, "%s:%u", 8762 - table->name, nft_net->base_seq); 8758 + table->name, nft_base_seq(net)); 8763 8759 8764 8760 audit_log_nfcfg(buf, 8765 
8761 family, ··· 9444 9442 9445 9443 nlh = nfnl_msg_put(skb, portid, seq, 9446 9444 nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event), 9447 - flags, family, NFNETLINK_V0, nft_base_seq(net)); 9445 + flags, family, NFNETLINK_V0, nft_base_seq_be16(net)); 9448 9446 if (!nlh) 9449 9447 goto nla_put_failure; 9450 9448 ··· 9513 9511 9514 9512 rcu_read_lock(); 9515 9513 nft_net = nft_pernet(net); 9516 - cb->seq = READ_ONCE(nft_net->base_seq); 9514 + cb->seq = nft_base_seq(net); 9517 9515 9518 9516 list_for_each_entry_rcu(table, &nft_net->tables, list) { 9519 9517 if (family != NFPROTO_UNSPEC && family != table->family) ··· 9698 9696 static int nf_tables_fill_gen_info(struct sk_buff *skb, struct net *net, 9699 9697 u32 portid, u32 seq) 9700 9698 { 9701 - struct nftables_pernet *nft_net = nft_pernet(net); 9702 9699 struct nlmsghdr *nlh; 9703 9700 char buf[TASK_COMM_LEN]; 9704 9701 int event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, NFT_MSG_NEWGEN); 9705 9702 9706 9703 nlh = nfnl_msg_put(skb, portid, seq, event, 0, AF_UNSPEC, 9707 - NFNETLINK_V0, nft_base_seq(net)); 9704 + NFNETLINK_V0, nft_base_seq_be16(net)); 9708 9705 if (!nlh) 9709 9706 goto nla_put_failure; 9710 9707 9711 - if (nla_put_be32(skb, NFTA_GEN_ID, htonl(nft_net->base_seq)) || 9708 + if (nla_put_be32(skb, NFTA_GEN_ID, htonl(nft_base_seq(net))) || 9712 9709 nla_put_be32(skb, NFTA_GEN_PROC_PID, htonl(task_pid_nr(current))) || 9713 9710 nla_put_string(skb, NFTA_GEN_PROC_NAME, get_task_comm(buf, current))) 9714 9711 goto nla_put_failure; ··· 10969 10968 * Bump generation counter, invalidate any dump in progress. 10970 10969 * Cannot fail after this point. 
10971 10970 */ 10972 - base_seq = READ_ONCE(nft_net->base_seq); 10971 + base_seq = nft_base_seq(net); 10973 10972 while (++base_seq == 0) 10974 10973 ; 10975 10974 10976 - WRITE_ONCE(nft_net->base_seq, base_seq); 10975 + /* pairs with smp_load_acquire in nft_lookup_eval */ 10976 + smp_store_release(&net->nft.base_seq, base_seq); 10977 10977 10978 10978 gc_seq = nft_gc_seq_begin(nft_net); 10979 10979 ··· 11183 11181 11184 11182 nft_commit_notify(net, NETLINK_CB(skb).portid); 11185 11183 nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN); 11186 - nf_tables_commit_audit_log(&adl, nft_net->base_seq); 11184 + nf_tables_commit_audit_log(&adl, nft_base_seq(net)); 11187 11185 11188 11186 nft_gc_seq_end(nft_net, gc_seq); 11189 11187 nft_net->validate_state = NFT_VALIDATE_SKIP; ··· 11508 11506 mutex_lock(&nft_net->commit_mutex); 11509 11507 nft_net->tstamp = get_jiffies_64(); 11510 11508 11511 - genid_ok = genid == 0 || nft_net->base_seq == genid; 11509 + genid_ok = genid == 0 || nft_base_seq(net) == genid; 11512 11510 if (!genid_ok) 11513 11511 mutex_unlock(&nft_net->commit_mutex); 11514 11512 ··· 12145 12143 INIT_LIST_HEAD(&nft_net->module_list); 12146 12144 INIT_LIST_HEAD(&nft_net->notify_list); 12147 12145 mutex_init(&nft_net->commit_mutex); 12148 - nft_net->base_seq = 1; 12146 + net->nft.base_seq = 1; 12149 12147 nft_net->gc_seq = 0; 12150 12148 nft_net->validate_state = NFT_VALIDATE_SKIP; 12151 12149 INIT_WORK(&nft_net->destroy_work, nf_tables_trans_destroy_work);
+41 -5
net/netfilter/nft_lookup.c
··· 24 24 struct nft_set_binding binding; 25 25 }; 26 26 27 - #ifdef CONFIG_MITIGATION_RETPOLINE 28 - const struct nft_set_ext * 29 - nft_set_do_lookup(const struct net *net, const struct nft_set *set, 30 - const u32 *key) 27 + static const struct nft_set_ext * 28 + __nft_set_do_lookup(const struct net *net, const struct nft_set *set, 29 + const u32 *key) 31 30 { 31 + #ifdef CONFIG_MITIGATION_RETPOLINE 32 32 if (set->ops == &nft_set_hash_fast_type.ops) 33 33 return nft_hash_lookup_fast(net, set, key); 34 34 if (set->ops == &nft_set_hash_type.ops) ··· 51 51 return nft_rbtree_lookup(net, set, key); 52 52 53 53 WARN_ON_ONCE(1); 54 + #endif 54 55 return set->ops->lookup(net, set, key); 55 56 } 57 + 58 + static unsigned int nft_base_seq(const struct net *net) 59 + { 60 + /* pairs with smp_store_release() in nf_tables_commit() */ 61 + return smp_load_acquire(&net->nft.base_seq); 62 + } 63 + 64 + static bool nft_lookup_should_retry(const struct net *net, unsigned int seq) 65 + { 66 + return unlikely(seq != nft_base_seq(net)); 67 + } 68 + 69 + const struct nft_set_ext * 70 + nft_set_do_lookup(const struct net *net, const struct nft_set *set, 71 + const u32 *key) 72 + { 73 + const struct nft_set_ext *ext; 74 + unsigned int base_seq; 75 + 76 + do { 77 + base_seq = nft_base_seq(net); 78 + 79 + ext = __nft_set_do_lookup(net, set, key); 80 + if (ext) 81 + break; 82 + /* No match? There is a small chance that lookup was 83 + * performed in the old generation, but nf_tables_commit() 84 + * already unlinked a (matching) element. 85 + * 86 + * We need to repeat the lookup to make sure that we didn't 87 + * miss a matching element in the new generation. 88 + */ 89 + } while (nft_lookup_should_retry(net, base_seq)); 90 + 91 + return ext; 92 + } 56 93 EXPORT_SYMBOL_GPL(nft_set_do_lookup); 57 - #endif 58 94 59 95 void nft_lookup_eval(const struct nft_expr *expr, 60 96 struct nft_regs *regs,
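The nft_lookup change pairs a release store of `base_seq` in the commit path with an acquire load in the lookup, and retries a failed lookup whenever the generation moved underneath it. The same scheme can be shown with C11 atomics in a self-contained sketch; all names here are illustrative, not the kernel API:

```c
#include <assert.h>
#include <stdatomic.h>

/* Generation-stamped lookup in the spirit of the fix: the writer
 * republishes data and then bumps base_seq with a release store; a
 * reader only trusts a miss if the generation did not move across
 * its lookup. */
static _Atomic unsigned int base_seq = 1;
static _Atomic int published_key = -1;

static unsigned int read_base_seq(void)
{
    /* pairs with the release store in writer_publish() */
    return atomic_load_explicit(&base_seq, memory_order_acquire);
}

void writer_publish(int key)
{
    atomic_store_explicit(&published_key, key, memory_order_relaxed);
    atomic_fetch_add_explicit(&base_seq, 1, memory_order_release);
}

/* Returns 1 on match, 0 on a miss confirmed in a stable generation. */
int lookup(int key)
{
    unsigned int seq;
    int hit;

    do {
        seq = read_base_seq();
        hit = (atomic_load_explicit(&published_key,
                                    memory_order_relaxed) == key);
        if (hit)
            break;
        /* A miss while the generation changed may have raced with an
         * element being unlinked and re-added: retry the lookup. */
    } while (seq != read_base_seq());

    return hit;
}
```

As in the patch, only misses are retried; a hit in either generation is acceptable.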
+2 -1
net/netfilter/nft_set_bitmap.c
··· 226 226 const struct nft_bitmap *priv = nft_set_priv(set); 227 227 struct nft_bitmap_elem *be; 228 228 229 - list_for_each_entry_rcu(be, &priv->list, head) { 229 + list_for_each_entry_rcu(be, &priv->list, head, 230 + lockdep_is_held(&nft_pernet(ctx->net)->commit_mutex)) { 230 231 if (iter->count < iter->skip) 231 232 goto cont; 232 233
+18 -2
net/netfilter/nft_set_pipapo.c
··· 510 510 * 511 511 * This function is called from the data path. It will search for 512 512 * an element matching the given key in the current active copy. 513 + * Unlike other set types, this uses NFT_GENMASK_ANY instead of 514 + * nft_genmask_cur(). 515 + * 516 + * This is because new (future) elements are not reachable from 517 + * priv->match, they get added to priv->clone instead. 518 + * When the commit phase flips the generation bitmask, the 519 + * 'now old' entries are skipped but without the 'now current' 520 + * elements becoming visible. Using nft_genmask_cur() thus creates 521 + * inconsistent state: matching old entries get skipped but the 522 + * newly matching entries are unreachable. 523 + * 524 + * GENMASK will still find the 'now old' entries which ensures consistent 525 + * priv->match view. 526 + * 527 + * nft_pipapo_commit swaps ->clone and ->match shortly after the 528 + * genbit flip. As ->clone doesn't contain the old entries in the first 529 + * place, lookup will only find the now-current ones. 513 530 * 514 531 * Return: ntables API extension pointer or NULL if no match. 515 532 */ ··· 535 518 const u32 *key) 536 519 { 537 520 struct nft_pipapo *priv = nft_set_priv(set); 538 - u8 genmask = nft_genmask_cur(net); 539 521 const struct nft_pipapo_match *m; 540 522 const struct nft_pipapo_elem *e; 541 523 542 524 m = rcu_dereference(priv->match); 543 - e = pipapo_get(m, (const u8 *)key, genmask, get_jiffies_64()); 525 + e = pipapo_get(m, (const u8 *)key, NFT_GENMASK_ANY, get_jiffies_64()); 544 526 545 527 return e ? &e->ext : NULL; 546 528 }
+1 -3
net/netfilter/nft_set_pipapo_avx2.c
··· 1152 1152 struct nft_pipapo *priv = nft_set_priv(set); 1153 1153 const struct nft_set_ext *ext = NULL; 1154 1154 struct nft_pipapo_scratch *scratch; 1155 - u8 genmask = nft_genmask_cur(net); 1156 1155 const struct nft_pipapo_match *m; 1157 1156 const struct nft_pipapo_field *f; 1158 1157 const u8 *rp = (const u8 *)key; ··· 1247 1248 if (last) { 1248 1249 const struct nft_set_ext *e = &f->mt[ret].e->ext; 1249 1250 1250 - if (unlikely(nft_set_elem_expired(e) || 1251 - !nft_set_elem_active(e, genmask))) 1251 + if (unlikely(nft_set_elem_expired(e))) 1252 1252 goto next_match; 1253 1253 1254 1254 ext = e;
+3 -3
net/netfilter/nft_set_rbtree.c
··· 77 77 nft_rbtree_interval_end(rbe) && 78 78 nft_rbtree_interval_start(interval)) 79 79 continue; 80 - interval = rbe; 80 + if (nft_set_elem_active(&rbe->ext, genmask) && 81 + !nft_rbtree_elem_expired(rbe)) 82 + interval = rbe; 81 83 } else if (d > 0) 82 84 parent = rcu_dereference_raw(parent->rb_right); 83 85 else { ··· 104 102 } 105 103 106 104 if (set->flags & NFT_SET_INTERVAL && interval != NULL && 107 - nft_set_elem_active(&interval->ext, genmask) && 108 - !nft_rbtree_elem_expired(interval) && 109 105 nft_rbtree_interval_start(interval)) 110 106 return &interval->ext; 111 107
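The nft_rbtree change moves the active/expired checks to the point where a start-of-interval candidate is remembered, instead of testing only the final candidate after the descent. A flat-array sketch (hypothetical names) of why per-candidate filtering matters: an inactive closer match no longer shadows an earlier valid one:

```c
#include <assert.h>

/* Hypothetical interval entries: a candidate is only remembered if it
 * is active and unexpired at the moment it is considered, mirroring
 * the nft_rbtree change.  Checking only the final candidate would let
 * an inactive entry shadow a valid earlier one. */
struct node {
    int key;
    int active;
    int expired;
};

/* Return the index of the best (largest) valid start <= key, or -1. */
int best_start(const struct node *nodes, int n, int key)
{
    int best = -1;

    for (int i = 0; i < n; i++) {
        if (nodes[i].key > key)
            continue;
        if (!nodes[i].active || nodes[i].expired)
            continue;           /* skip this one, but keep looking */
        if (best < 0 || nodes[i].key > nodes[best].key)
            best = i;
    }
    return best;
}
```

With the entries {10, inactive} and {5, active} and a key of 12, a final-candidate-only check would settle on 10, reject it, and report a miss; filtering per candidate correctly returns the entry at 5.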
+3
net/netlink/genetlink.c
··· 1836 1836 !ns_capable(net->user_ns, CAP_SYS_ADMIN)) 1837 1837 ret = -EPERM; 1838 1838 1839 + if (ret) 1840 + break; 1841 + 1839 1842 if (family->bind) 1840 1843 family->bind(i); 1841 1844
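The genetlink fix is pure control flow: once the capability check sets `-EPERM`, the loop must stop before invoking the family's `bind()` hook. A small hedged sketch of the fixed ordering, with hypothetical names:

```c
#include <assert.h>

/* Hypothetical multicast-group family, mirroring the genl_bind() fix:
 * once the capability check fails, the per-family bind() hook must
 * not run. */
struct family {
    int requires_admin;          /* GENL_UNS_ADMIN_PERM analogue */
    int bound;                   /* did the bind() hook run? */
};

int bind_group(struct family *f, int have_admin)
{
    int ret = 0;

    if (f->requires_admin && !have_admin)
        ret = -1;                /* -EPERM analogue */

    /* The fix: bail out before invoking the hook on failure. */
    if (ret)
        return ret;

    f->bound = 1;                /* family->bind(i) analogue */
    return 0;
}
```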
+7 -6
net/wireless/nl80211.c
··· 7062 7062 u32 seq, int flags, 7063 7063 struct cfg80211_registered_device *rdev, 7064 7064 struct net_device *dev, 7065 - const u8 *mac_addr, struct station_info *sinfo) 7065 + const u8 *mac_addr, struct station_info *sinfo, 7066 + bool link_stats) 7066 7067 { 7067 7068 void *hdr; 7068 7069 struct nlattr *sinfoattr, *bss_param; ··· 7284 7283 goto nla_put_failure; 7285 7284 } 7286 7285 7287 - if (sinfo->valid_links) { 7286 + if (link_stats && sinfo->valid_links) { 7288 7287 links = nla_nest_start(msg, NL80211_ATTR_MLO_LINKS); 7289 7288 if (!links) 7290 7289 goto nla_put_failure; ··· 7575 7574 NETLINK_CB(cb->skb).portid, 7576 7575 cb->nlh->nlmsg_seq, NLM_F_MULTI, 7577 7576 rdev, wdev->netdev, mac_addr, 7578 - &sinfo) < 0) 7577 + &sinfo, false) < 0) 7579 7578 goto out; 7580 7579 7581 7580 sta_idx++; ··· 7636 7635 7637 7636 if (nl80211_send_station(msg, NL80211_CMD_NEW_STATION, 7638 7637 info->snd_portid, info->snd_seq, 0, 7639 - rdev, dev, mac_addr, &sinfo) < 0) { 7638 + rdev, dev, mac_addr, &sinfo, false) < 0) { 7640 7639 nlmsg_free(msg); 7641 7640 return -ENOBUFS; 7642 7641 } ··· 19681 19680 return; 19682 19681 19683 19682 if (nl80211_send_station(msg, NL80211_CMD_NEW_STATION, 0, 0, 0, 19684 - rdev, dev, mac_addr, sinfo) < 0) { 19683 + rdev, dev, mac_addr, sinfo, false) < 0) { 19685 19684 nlmsg_free(msg); 19686 19685 return; 19687 19686 } ··· 19711 19710 } 19712 19711 19713 19712 if (nl80211_send_station(msg, NL80211_CMD_DEL_STATION, 0, 0, 0, 19714 - rdev, dev, mac_addr, sinfo) < 0) { 19713 + rdev, dev, mac_addr, sinfo, false) < 0) { 19715 19714 nlmsg_free(msg); 19716 19715 return; 19717 19716 }
+1
tools/testing/selftests/net/Makefile
··· 116 116 TEST_GEN_FILES += skf_net_off 117 117 TEST_GEN_FILES += tfo 118 118 TEST_PROGS += tfo_passive.sh 119 + TEST_PROGS += broadcast_ether_dst.sh 119 120 TEST_PROGS += broadcast_pmtu.sh 120 121 TEST_PROGS += ipv6_force_forwarding.sh 121 122
+83
tools/testing/selftests/net/broadcast_ether_dst.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # Author: Brett A C Sheffield <bacs@librecast.net> 5 + # Author: Oscar Maes <oscmaes92@gmail.com> 6 + # 7 + # Ensure destination ethernet field is correctly set for 8 + # broadcast packets 9 + 10 + source lib.sh 11 + 12 + CLIENT_IP4="192.168.0.1" 13 + GW_IP4="192.168.0.2" 14 + 15 + setup() { 16 + setup_ns CLIENT_NS SERVER_NS 17 + 18 + ip -net "${SERVER_NS}" link add link1 type veth \ 19 + peer name link0 netns "${CLIENT_NS}" 20 + 21 + ip -net "${CLIENT_NS}" link set link0 up 22 + ip -net "${CLIENT_NS}" addr add "${CLIENT_IP4}"/24 dev link0 23 + 24 + ip -net "${SERVER_NS}" link set link1 up 25 + 26 + ip -net "${CLIENT_NS}" route add default via "${GW_IP4}" 27 + ip netns exec "${CLIENT_NS}" arp -s "${GW_IP4}" 00:11:22:33:44:55 28 + } 29 + 30 + cleanup() { 31 + rm -f "${CAPFILE}" "${OUTPUT}" 32 + ip -net "${SERVER_NS}" link del link1 33 + cleanup_ns "${CLIENT_NS}" "${SERVER_NS}" 34 + } 35 + 36 + test_broadcast_ether_dst() { 37 + local rc=0 38 + CAPFILE=$(mktemp -u cap.XXXXXXXXXX) 39 + OUTPUT=$(mktemp -u out.XXXXXXXXXX) 40 + 41 + echo "Testing ethernet broadcast destination" 42 + 43 + # start tcpdump listening for icmp 44 + # tcpdump will exit after receiving a single packet 45 + # timeout will kill tcpdump if it is still running after 2s 46 + timeout 2s ip netns exec "${CLIENT_NS}" \ 47 + tcpdump -i link0 -c 1 -w "${CAPFILE}" icmp &> "${OUTPUT}" & 48 + pid=$! 
49 + slowwait 1 grep -qs "listening" "${OUTPUT}" 50 + 51 + # send broadcast ping 52 + ip netns exec "${CLIENT_NS}" \ 53 + ping -W0.01 -c1 -b 255.255.255.255 &> /dev/null 54 + 55 + # wait for tcpdump to exit after receiving the packet 56 + wait "${pid}" 57 + 58 + # compare ethernet destination field to ff:ff:ff:ff:ff:ff 59 + ether_dst=$(tcpdump -r "${CAPFILE}" -tnne 2>/dev/null | \ 60 + awk '{sub(/,/,"",$3); print $3}') 61 + if [[ "${ether_dst}" == "ff:ff:ff:ff:ff:ff" ]]; then 62 + echo "[ OK ]" 63 + rc="${ksft_pass}" 64 + else 65 + echo "[FAIL] expected dst ether addr to be ff:ff:ff:ff:ff:ff," \ 66 + "got ${ether_dst}" 67 + rc="${ksft_fail}" 68 + fi 69 + 70 + return "${rc}" 71 + } 72 + 73 + if [ ! -x "$(command -v tcpdump)" ]; then 74 + echo "SKIP: Could not run test without tcpdump tool" 75 + exit "${ksft_skip}" 76 + fi 77 + 78 + trap cleanup EXIT 79 + 80 + setup 81 + test_broadcast_ether_dst 82 + 83 + exit $?
+3
tools/testing/selftests/net/can/config
··· 1 + CONFIG_CAN=m 2 + CONFIG_CAN_DEV=m 3 + CONFIG_CAN_VCAN=m
+1 -1
tools/testing/selftests/net/mptcp/diag.sh
··· 28 28 } 29 29 30 30 # This function is used in the cleanup trap 31 - #shellcheck disable=SC2317 31 + #shellcheck disable=SC2317,SC2329 32 32 cleanup() 33 33 { 34 34 ip netns pids "${ns}" | xargs --no-run-if-empty kill -SIGKILL &>/dev/null
+1 -1
tools/testing/selftests/net/mptcp/mptcp_connect.sh
··· 134 134 TEST_GROUP="" 135 135 136 136 # This function is used in the cleanup trap 137 - #shellcheck disable=SC2317 137 + #shellcheck disable=SC2317,SC2329 138 138 cleanup() 139 139 { 140 140 rm -f "$cin_disconnect"
+1 -1
tools/testing/selftests/net/mptcp/mptcp_join.sh
··· 8 8 9 9 # ShellCheck incorrectly believes that most of the code here is unreachable 10 10 # because it's invoked by variable name, see how the "tests" array is used 11 - #shellcheck disable=SC2317 11 + #shellcheck disable=SC2317,SC2329 12 12 13 13 . "$(dirname "${0}")/mptcp_lib.sh" 14 14
+1 -1
tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
··· 95 95 } 96 96 97 97 # This function is used in the cleanup trap 98 - #shellcheck disable=SC2317 98 + #shellcheck disable=SC2317,SC2329 99 99 cleanup() 100 100 { 101 101 mptcp_lib_ns_exit "${ns1}" "${ns2}" "${ns_sbox}"
+3 -2
tools/testing/selftests/net/mptcp/pm_netlink.sh
··· 32 32 err=$(mktemp) 33 33 34 34 # This function is used in the cleanup trap 35 - #shellcheck disable=SC2317 35 + #shellcheck disable=SC2317,SC2329 36 36 cleanup() 37 37 { 38 38 rm -f "${err}" ··· 70 70 mptcp_lib_pm_nl_format_endpoints "${@}" 71 71 } 72 72 73 + # This function is invoked indirectly 74 + #shellcheck disable=SC2317,SC2329 73 75 get_endpoint() { 74 - # shellcheck disable=SC2317 # invoked indirectly 75 76 mptcp_lib_pm_nl_get_endpoint "${ns1}" "${@}" 76 77 } 77 78
+1 -1
tools/testing/selftests/net/mptcp/simult_flows.sh
··· 35 35 } 36 36 37 37 # This function is used in the cleanup trap 38 - #shellcheck disable=SC2317 38 + #shellcheck disable=SC2317,SC2329 39 39 cleanup() 40 40 { 41 41 rm -f "$cout" "$sout"
+1 -1
tools/testing/selftests/net/mptcp/userspace_pm.sh
··· 94 94 } 95 95 96 96 # This function is used in the cleanup trap 97 - #shellcheck disable=SC2317 97 + #shellcheck disable=SC2317,SC2329 98 98 cleanup() 99 99 { 100 100 print_title "Cleanup"