Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-6.5-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
"Including fixes from wifi, can and netfilter.

Fixes to fixes:

- nf_tables:
- GC transaction race with abort path
- defer gc run if previous batch is still pending

Previous releases - regressions:

- ipv4: fix data-races around inet->inet_id

- phy: fix deadlocking in phy_error() invocation

- mdio: fix C45 read/write protocol

- ipvlan: fix a reference count leak warning in ipvlan_ns_exit()

- ice: fix NULL pointer deref during VF reset

- i40e: fix potential NULL pointer dereferencing of pf->vf in
i40e_sync_vsi_filters()

- tg3: use slab_build_skb() when needed

- mtk_eth_soc: fix NULL pointer on hw reset

Previous releases - always broken:

- core: validate veth and vxcan peer ifindexes

- sched: fix a qdisc modification with ambiguous command request

- devlink: add missing unregister linecard notification

- wifi: mac80211: limit reorder_buf_filtered to avoid UBSAN warning

- batman:
- do not get eth header before batadv_check_management_packet
- fix batadv_v_ogm_aggr_send memory leak

- bonding: fix macvlan over alb bond support

- mlxsw: set time stamp fields also when its type is MIRROR_UTC"

* tag 'net-6.5-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (54 commits)
selftests: bonding: add macvlan over bond testing
selftest: bond: add new topo bond_topo_2d1c.sh
bonding: fix macvlan over alb bond support
rtnetlink: Reject negative ifindexes in RTM_NEWLINK
netfilter: nf_tables: defer gc run if previous batch is still pending
netfilter: nf_tables: fix out of memory error handling
netfilter: nf_tables: use correct lock to protect gc_list
netfilter: nf_tables: GC transaction race with abort path
netfilter: nf_tables: flush pending destroy work before netlink notifier
netfilter: nf_tables: validate all pending tables
ibmveth: Use dcbf rather than dcbfl
i40e: fix potential NULL pointer dereferencing of pf->vf i40e_sync_vsi_filters()
net/sched: fix a qdisc modification with ambiguous command request
igc: Fix the typo in the PTM Control macro
batman-adv: Hold rtnl lock during MTU update via netlink
igb: Avoid starting unnecessary workqueues
can: raw: add missing refcount for memory leak fix
can: isotp: fix support for transmission of SF without flow control
bnx2x: new flag for track HW resource allocation
sfc: allocate a big enough SKB for loopback selftest packet
...

+663 -355
+10 -10
Documentation/ABI/testing/sysfs-class-led-trigger-netdev
···
 		Specifies the duration of the LED blink in milliseconds.
 		Defaults to 50 ms.
 
-		With hw_control ON, the interval value MUST be set to the
+		When offloaded is true, the interval value MUST be set to the
 		default value and cannot be changed.
 		Trying to set any value in this specific mode will return
 		an EINVAL error.
···
 		If set to 1, the LED will blink for the milliseconds specified
 		in interval to signal transmission.
 
-		With hw_control ON, the blink interval is controlled by hardware
-		and won't reflect the value set in interval.
+		When offloaded is true, the blink interval is controlled by
+		hardware and won't reflect the value set in interval.
 
 What:		/sys/class/leds/<led>/rx
 Date:		Dec 2017
···
 		If set to 1, the LED will blink for the milliseconds specified
 		in interval to signal reception.
 
-		With hw_control ON, the blink interval is controlled by hardware
-		and won't reflect the value set in interval.
+		When offloaded is true, the blink interval is controlled by
+		hardware and won't reflect the value set in interval.
 
-What:		/sys/class/leds/<led>/hw_control
+What:		/sys/class/leds/<led>/offloaded
 Date:		Jun 2023
 KernelVersion:	6.5
 Contact:	linux-leds@vger.kernel.org
 Description:
-		Communicate whether the LED trigger modes are driven by hardware
-		or software fallback is used.
+		Communicate whether the LED trigger modes are offloaded to
+		hardware or whether software fallback is used.
 
 		If 0, the LED is using software fallback to blink.
 
-		If 1, the LED is using hardware control to blink and signal the
-		requested modes.
+		If 1, the LED blinking in requested mode is offloaded to
+		hardware.
 
 What:		/sys/class/leds/<led>/link_10
 Date:		Jun 2023
+10
MAINTAINERS
···
 F:	net/netfilter/xt_SECMARK.c
 F:	net/netlabel/
 
+NETWORKING [MACSEC]
+M:	Sabrina Dubroca <sd@queasysnail.net>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	drivers/net/macsec.c
+F:	include/net/macsec.h
+F:	include/uapi/linux/if_macsec.h
+K:	macsec
+K:	\bmdo_
+
 NETWORKING [MPTCP]
 M:	Matthieu Baerts <matthieu.baerts@tessares.net>
 M:	Mat Martineau <martineau@kernel.org>
+4 -4
drivers/leds/trigger/ledtrig-netdev.c
···
 
 static DEVICE_ATTR_RW(interval);
 
-static ssize_t hw_control_show(struct device *dev,
-			       struct device_attribute *attr, char *buf)
+static ssize_t offloaded_show(struct device *dev,
+			      struct device_attribute *attr, char *buf)
 {
 	struct led_netdev_data *trigger_data = led_trigger_get_drvdata(dev);
 
 	return sprintf(buf, "%d\n", trigger_data->hw_control);
 }
 
-static DEVICE_ATTR_RO(hw_control);
+static DEVICE_ATTR_RO(offloaded);
 
 static struct attribute *netdev_trig_attrs[] = {
 	&dev_attr_device_name.attr,
···
 	&dev_attr_rx.attr,
 	&dev_attr_tx.attr,
 	&dev_attr_interval.attr,
-	&dev_attr_hw_control.attr,
+	&dev_attr_offloaded.attr,
 	NULL
 };
 ATTRIBUTE_GROUPS(netdev_trig);
+3 -3
drivers/net/bonding/bond_alb.c
···
 		return NULL;
 	arp = (struct arp_pkt *)skb_network_header(skb);
 
-	/* Don't modify or load balance ARPs that do not originate locally
-	 * (e.g.,arrive via a bridge).
+	/* Don't modify or load balance ARPs that do not originate
+	 * from the bond itself or a VLAN directly above the bond.
 	 */
-	if (!bond_slave_has_mac_rx(bond, arp->mac_src))
+	if (!bond_slave_has_mac_rcu(bond, arp->mac_src))
 		return NULL;
 
 	dev = ip_dev_find(dev_net(bond->dev), arp->ip_src);
+1 -6
drivers/net/can/vxcan.c
···
 
 	nla_peer = data[VXCAN_INFO_PEER];
 	ifmp = nla_data(nla_peer);
-	err = rtnl_nla_parse_ifla(peer_tb,
-				  nla_data(nla_peer) +
-				  sizeof(struct ifinfomsg),
-				  nla_len(nla_peer) -
-				  sizeof(struct ifinfomsg),
-				  NULL);
+	err = rtnl_nla_parse_ifinfomsg(peer_tb, nla_peer, extack);
 	if (err < 0)
 		return err;
 
+4
drivers/net/dsa/mt7530.c
···
 	mt7530_rmw(priv, MT753X_BPC, MT753X_BPDU_PORT_FW_MASK,
 		   MT753X_BPDU_CPU_ONLY);
 
+	/* Trap 802.1X PAE frames to the CPU port(s) */
+	mt7530_rmw(priv, MT753X_BPC, MT753X_PAE_PORT_FW_MASK,
+		   MT753X_PAE_PORT_FW(MT753X_BPDU_CPU_ONLY));
+
 	/* Trap LLDP frames with :0E MAC DA to the CPU port(s) */
 	mt7530_rmw(priv, MT753X_RGAC2, MT753X_R0E_PORT_FW_MASK,
 		   MT753X_R0E_PORT_FW(MT753X_BPDU_CPU_ONLY));
+2
drivers/net/dsa/mt7530.h
···
 /* Registers for BPDU and PAE frame control*/
 #define MT753X_BPC			0x24
 #define MT753X_BPDU_PORT_FW_MASK	GENMASK(2, 0)
+#define MT753X_PAE_PORT_FW_MASK		GENMASK(18, 16)
+#define MT753X_PAE_PORT_FW(x)		FIELD_PREP(MT753X_PAE_PORT_FW_MASK, x)
 
 /* Register for :03 and :0E MAC DA frame control */
 #define MT753X_RGAC2			0x2c
+3
drivers/net/dsa/ocelot/felix_vsc9959.c
···
 	if (gate_len_ns == U64_MAX)
 		return U64_MAX;
 
+	if (gate_len_ns < VSC9959_TAS_MIN_GATE_LEN_NS)
+		return 0;
+
 	return (gate_len_ns - VSC9959_TAS_MIN_GATE_LEN_NS) * PSEC_PER_NSEC;
 }
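The guard added above exists because the subtraction is done on an unsigned 64-bit value: a gate shorter than the minimum would wrap around to a huge picosecond count instead of a small one. A minimal userspace sketch of the before/after arithmetic, using a hypothetical stand-in value for VSC9959_TAS_MIN_GATE_LEN_NS (the real constant differs):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in; the real VSC9959_TAS_MIN_GATE_LEN_NS differs. */
#define MIN_GATE_LEN_NS	33ULL
#define PSEC_PER_NSEC	1000ULL

/* Before the fix: the unsigned subtraction wraps for short gates. */
static uint64_t gate_len_to_ps_buggy(uint64_t gate_len_ns)
{
	return (gate_len_ns - MIN_GATE_LEN_NS) * PSEC_PER_NSEC;
}

/* After the fix: gates shorter than the minimum are clamped to 0. */
static uint64_t gate_len_to_ps_fixed(uint64_t gate_len_ns)
{
	if (gate_len_ns < MIN_GATE_LEN_NS)
		return 0;
	return (gate_len_ns - MIN_GATE_LEN_NS) * PSEC_PER_NSEC;
}
```

For a 10 ns gate the buggy version yields a value near 2^64 picoseconds, while the fixed version correctly reports zero usable gate time.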
+1 -1
drivers/net/ethernet/broadcom/bgmac.c
···
 	int err;
 
 	phy_dev = fixed_phy_register(PHY_POLL, &fphy_status, NULL);
-	if (!phy_dev || IS_ERR(phy_dev)) {
+	if (IS_ERR(phy_dev)) {
 		dev_err(bgmac->dev, "Failed to register fixed PHY device\n");
 		return -ENODEV;
 	}
+2
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
···
 	bool cnic_loaded;
 	struct cnic_eth_dev *(*cnic_probe)(struct net_device *);
 
+	bool nic_stopped;
+
 	/* Flag that indicates that we can start looking for FCoE L2 queue
 	 * completions in the default status block.
 	 */
+13 -8
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
···
 	bnx2x_add_all_napi(bp);
 	DP(NETIF_MSG_IFUP, "napi added\n");
 	bnx2x_napi_enable(bp);
+	bp->nic_stopped = false;
 
 	if (IS_PF(bp)) {
 		/* set pf load just before approaching the MCP */
···
 load_error1:
 	bnx2x_napi_disable(bp);
 	bnx2x_del_all_napi(bp);
+	bp->nic_stopped = true;
 
 	/* clear pf_load status, as it was already set */
 	if (IS_PF(bp))
···
 	if (!CHIP_IS_E1x(bp))
 		bnx2x_pf_disable(bp);
 
-	/* Disable HW interrupts, NAPI */
-	bnx2x_netif_stop(bp, 1);
-	/* Delete all NAPI objects */
-	bnx2x_del_all_napi(bp);
-	if (CNIC_LOADED(bp))
-		bnx2x_del_all_napi_cnic(bp);
-	/* Release IRQs */
-	bnx2x_free_irq(bp);
+	if (!bp->nic_stopped) {
+		/* Disable HW interrupts, NAPI */
+		bnx2x_netif_stop(bp, 1);
+		/* Delete all NAPI objects */
+		bnx2x_del_all_napi(bp);
+		if (CNIC_LOADED(bp))
+			bnx2x_del_all_napi_cnic(bp);
+		/* Release IRQs */
+		bnx2x_free_irq(bp);
+		bp->nic_stopped = true;
+	}
 
 	/* Report UNLOAD_DONE to MCP */
 	bnx2x_send_unload_done(bp, false);
+19 -13
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
···
 		}
 	}
 
-	/* Disable HW interrupts, NAPI */
-	bnx2x_netif_stop(bp, 1);
-	/* Delete all NAPI objects */
-	bnx2x_del_all_napi(bp);
-	if (CNIC_LOADED(bp))
-		bnx2x_del_all_napi_cnic(bp);
+	if (!bp->nic_stopped) {
+		/* Disable HW interrupts, NAPI */
+		bnx2x_netif_stop(bp, 1);
+		/* Delete all NAPI objects */
+		bnx2x_del_all_napi(bp);
+		if (CNIC_LOADED(bp))
+			bnx2x_del_all_napi_cnic(bp);
 
-	/* Release IRQs */
-	bnx2x_free_irq(bp);
+		/* Release IRQs */
+		bnx2x_free_irq(bp);
+		bp->nic_stopped = true;
+	}
 
 	/* Reset the chip, unless PCI function is offline. If we reach this
 	 * point following a PCI error handling, it means device is really
···
 	}
 	bnx2x_drain_tx_queues(bp);
 	bnx2x_send_unload_req(bp, UNLOAD_RECOVERY);
-	bnx2x_netif_stop(bp, 1);
-	bnx2x_del_all_napi(bp);
+	if (!bp->nic_stopped) {
+		bnx2x_netif_stop(bp, 1);
+		bnx2x_del_all_napi(bp);
 
-	if (CNIC_LOADED(bp))
-		bnx2x_del_all_napi_cnic(bp);
+		if (CNIC_LOADED(bp))
+			bnx2x_del_all_napi_cnic(bp);
 
-	bnx2x_free_irq(bp);
+		bnx2x_free_irq(bp);
+		bp->nic_stopped = true;
+	}
 
 	/* Report UNLOAD_DONE to MCP */
 	bnx2x_send_unload_done(bp, true);
+9 -6
drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
···
 	bnx2x_vfpf_finalize(bp, &req->first_tlv);
 
 free_irq:
-	/* Disable HW interrupts, NAPI */
-	bnx2x_netif_stop(bp, 0);
-	/* Delete all NAPI objects */
-	bnx2x_del_all_napi(bp);
+	if (!bp->nic_stopped) {
+		/* Disable HW interrupts, NAPI */
+		bnx2x_netif_stop(bp, 0);
+		/* Delete all NAPI objects */
+		bnx2x_del_all_napi(bp);
 
-	/* Release IRQs */
-	bnx2x_free_irq(bp);
+		/* Release IRQs */
+		bnx2x_free_irq(bp);
+		bp->nic_stopped = true;
+	}
 }
 
 static void bnx2x_leading_vfq_init(struct bnx2x *bp, struct bnx2x_virtf *vf,
+1 -1
drivers/net/ethernet/broadcom/genet/bcmmii.c
···
 	};
 
 	phydev = fixed_phy_register(PHY_POLL, &fphy_status, NULL);
-	if (!phydev || IS_ERR(phydev)) {
+	if (IS_ERR(phydev)) {
 		dev_err(kdev, "failed to register fixed PHY device\n");
 		return -ENODEV;
 	}
+4 -1
drivers/net/ethernet/broadcom/tg3.c
···
 
 		ri->data = NULL;
 
-		skb = build_skb(data, frag_size);
+		if (frag_size)
+			skb = build_skb(data, frag_size);
+		else
+			skb = slab_build_skb(data);
 		if (!skb) {
 			tg3_frag_free(frag_size != 0, data);
 			goto drop_it_no_recycle;
+1 -1
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
···
 	tp->write_seq = snd_isn;
 	tp->snd_nxt = snd_isn;
 	tp->snd_una = snd_isn;
-	inet_sk(sk)->inet_id = get_random_u16();
+	atomic_set(&inet_sk(sk)->inet_id, get_random_u16());
 	assign_rxopt(sk, opt);
 
 	if (tp->rcv_wnd > (RCV_BUFSIZ_M << 10))
+1 -1
drivers/net/ethernet/ibm/ibmveth.c
···
 	unsigned long offset;
 
 	for (offset = 0; offset < length; offset += SMP_CACHE_BYTES)
-		asm("dcbfl %0,%1" :: "b" (addr), "r" (offset));
+		asm("dcbf %0,%1,1" :: "b" (addr), "r" (offset));
 }
 
 /* replenish the buffers for a pool. note that we don't need to
+3 -2
drivers/net/ethernet/intel/i40e/i40e_main.c
···
 		retval = i40e_correct_mac_vlan_filters
 			(vsi, &tmp_add_list, &tmp_del_list,
 			 vlan_filters);
-	else
+	else if (pf->vf)
 		retval = i40e_correct_vf_mac_vlan_filters
 			(vsi, &tmp_add_list, &tmp_del_list,
 			 vlan_filters, pf->vf[vsi->vf_id].trusted);
···
 	}
 
 	/* if the VF is not trusted do not do promisc */
-	if ((vsi->type == I40E_VSI_SRIOV) && !pf->vf[vsi->vf_id].trusted) {
+	if (vsi->type == I40E_VSI_SRIOV && pf->vf &&
+	    !pf->vf[vsi->vf_id].trusted) {
 		clear_bit(__I40E_VSI_OVERFLOW_PROMISC, vsi->state);
 		goto out;
 	}
+2 -1
drivers/net/ethernet/intel/ice/ice_base.c
···
 	/* Receive Packet Data Buffer Size.
 	 * The Packet Data Buffer Size is defined in 128 byte units.
 	 */
-	rlan_ctx.dbuf = ring->rx_buf_len >> ICE_RLAN_CTX_DBUF_S;
+	rlan_ctx.dbuf = DIV_ROUND_UP(ring->rx_buf_len,
+				     BIT_ULL(ICE_RLAN_CTX_DBUF_S));
 
 	/* use 32 byte descriptors */
 	rlan_ctx.dsize = 1;
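The change above swaps a plain right shift for DIV_ROUND_UP when converting a byte length into 128-byte units: the shift silently truncates any buffer length that is not an exact multiple of 128, while DIV_ROUND_UP rounds to the next whole unit. A small standalone sketch of the two conversions (the shift value mirrors ICE_RLAN_CTX_DBUF_S, which is an assumption here):

```c
#include <assert.h>

/* Assumed to match ICE_RLAN_CTX_DBUF_S: sizes are in 128-byte units. */
#define DBUF_SHIFT		7
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/* Old conversion: truncates non-multiples of 128 bytes. */
static unsigned int dbuf_units_truncating(unsigned int rx_buf_len)
{
	return rx_buf_len >> DBUF_SHIFT;
}

/* New conversion: rounds up to the next 128-byte unit. */
static unsigned int dbuf_units_rounded(unsigned int rx_buf_len)
{
	return DIV_ROUND_UP(rx_buf_len, 1u << DBUF_SHIFT);
}
```

For a 3000-byte buffer the truncating form yields 23 units (2944 bytes) while the rounding form yields 24; both agree for exact multiples such as 2048.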
+4 -4
drivers/net/ethernet/intel/ice/ice_sriov.c
···
 	if (!vf)
 		return -EINVAL;
 
-	ret = ice_check_vf_ready_for_reset(vf);
+	ret = ice_check_vf_ready_for_cfg(vf);
 	if (ret)
 		goto out_put_vf;
 
···
 		goto out_put_vf;
 	}
 
-	ret = ice_check_vf_ready_for_reset(vf);
+	ret = ice_check_vf_ready_for_cfg(vf);
 	if (ret)
 		goto out_put_vf;
 
···
 		return -EOPNOTSUPP;
 	}
 
-	ret = ice_check_vf_ready_for_reset(vf);
+	ret = ice_check_vf_ready_for_cfg(vf);
 	if (ret)
 		goto out_put_vf;
 
···
 	if (!vf)
 		return -EINVAL;
 
-	ret = ice_check_vf_ready_for_reset(vf);
+	ret = ice_check_vf_ready_for_cfg(vf);
 	if (ret)
 		goto out_put_vf;
 
+8 -26
drivers/net/ethernet/intel/ice/ice_vf_lib.c
···
 }
 
 /**
- * ice_check_vf_ready_for_reset - check if VF is ready to be reset
- * @vf: VF to check if it's ready to be reset
- *
- * The purpose of this function is to ensure that the VF is not in reset,
- * disabled, and is both initialized and active, thus enabling us to safely
- * initialize another reset.
- */
-int ice_check_vf_ready_for_reset(struct ice_vf *vf)
-{
-	int ret;
-
-	ret = ice_check_vf_ready_for_cfg(vf);
-	if (!ret && !test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states))
-		ret = -EAGAIN;
-
-	return ret;
-}
-
-/**
  * ice_trigger_vf_reset - Reset a VF on HW
  * @vf: pointer to the VF structure
  * @is_vflr: true if VFLR was issued, false if not
···
 		return 0;
 	}
 
+	if (flags & ICE_VF_RESET_LOCK)
+		mutex_lock(&vf->cfg_lock);
+	else
+		lockdep_assert_held(&vf->cfg_lock);
+
 	if (ice_is_vf_disabled(vf)) {
 		vsi = ice_get_vf_vsi(vf);
 		if (!vsi) {
 			dev_dbg(dev, "VF is already removed\n");
-			return -EINVAL;
+			err = -EINVAL;
+			goto out_unlock;
 		}
 		ice_vsi_stop_lan_tx_rings(vsi, ICE_NO_RESET, vf->vf_id);
 
···
 
 		dev_dbg(dev, "VF is already disabled, there is no need for resetting it, telling VM, all is fine %d\n",
 			vf->vf_id);
-		return 0;
+		goto out_unlock;
 	}
-
-	if (flags & ICE_VF_RESET_LOCK)
-		mutex_lock(&vf->cfg_lock);
-	else
-		lockdep_assert_held(&vf->cfg_lock);
 
 	/* Set VF disable bit state here, before triggering reset */
 	set_bit(ICE_VF_STATE_DIS, vf->vf_states);
-1
drivers/net/ethernet/intel/ice/ice_vf_lib.h
···
 struct ice_vsi *ice_get_vf_vsi(struct ice_vf *vf);
 bool ice_is_vf_disabled(struct ice_vf *vf);
 int ice_check_vf_ready_for_cfg(struct ice_vf *vf);
-int ice_check_vf_ready_for_reset(struct ice_vf *vf);
 void ice_set_vf_state_dis(struct ice_vf *vf);
 bool ice_is_any_vf_in_unicast_promisc(struct ice_pf *pf);
 void
-1
drivers/net/ethernet/intel/ice/ice_virtchnl.c
···
 		ice_vc_notify_vf_link_state(vf);
 		break;
 	case VIRTCHNL_OP_RESET_VF:
-		clear_bit(ICE_VF_STATE_ACTIVE, vf->vf_states);
 		ops->reset_vf(vf);
 		break;
 	case VIRTCHNL_OP_ADD_ETH_ADDR:
+12 -12
drivers/net/ethernet/intel/igb/igb_ptp.c
···
 		return;
 	}
 
-	spin_lock_init(&adapter->tmreg_lock);
-	INIT_WORK(&adapter->ptp_tx_work, igb_ptp_tx_work);
-
-	if (adapter->ptp_flags & IGB_PTP_OVERFLOW_CHECK)
-		INIT_DELAYED_WORK(&adapter->ptp_overflow_work,
-				  igb_ptp_overflow_check);
-
-	adapter->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE;
-	adapter->tstamp_config.tx_type = HWTSTAMP_TX_OFF;
-
-	igb_ptp_reset(adapter);
-
 	adapter->ptp_clock = ptp_clock_register(&adapter->ptp_caps,
 						&adapter->pdev->dev);
 	if (IS_ERR(adapter->ptp_clock)) {
···
 		dev_info(&adapter->pdev->dev, "added PHC on %s\n",
 			 adapter->netdev->name);
 		adapter->ptp_flags |= IGB_PTP_ENABLED;
+
+		spin_lock_init(&adapter->tmreg_lock);
+		INIT_WORK(&adapter->ptp_tx_work, igb_ptp_tx_work);
+
+		if (adapter->ptp_flags & IGB_PTP_OVERFLOW_CHECK)
+			INIT_DELAYED_WORK(&adapter->ptp_overflow_work,
+					  igb_ptp_overflow_check);
+
+		adapter->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE;
+		adapter->tstamp_config.tx_type = HWTSTAMP_TX_OFF;
+
+		igb_ptp_reset(adapter);
 	}
 }
+1 -1
drivers/net/ethernet/intel/igc/igc_defines.h
···
 #define IGC_PTM_CTRL_START_NOW		BIT(29) /* Start PTM Now */
 #define IGC_PTM_CTRL_EN			BIT(30) /* Enable PTM */
 #define IGC_PTM_CTRL_TRIG		BIT(31) /* PTM Cycle trigger */
-#define IGC_PTM_CTRL_SHRT_CYC(usec)	(((usec) & 0x2f) << 2)
+#define IGC_PTM_CTRL_SHRT_CYC(usec)	(((usec) & 0x3f) << 2)
 #define IGC_PTM_CTRL_PTM_TO(usec)	(((usec) & 0xff) << 8)
 
 #define IGC_PTM_SHORT_CYC_DEFAULT	10 /* Default Short/interrupted cycle interval */
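The one-character typo fixed above is easy to miss: 0x2f is 0b101111, so the buggy mask silently clears bit 4 of the microsecond value, corrupting any interval with that bit set (16-31, 48-63, ...), while the intended 6-bit field mask is 0x3f. A quick sketch of the two macros:

```c
#include <assert.h>

/* Buggy mask 0x2f = 0b101111 drops bit 4 of the value;
 * the intended 6-bit field mask is 0x3f = 0b111111. */
#define SHRT_CYC_BUGGY(usec)	(((usec) & 0x2f) << 2)
#define SHRT_CYC_FIXED(usec)	(((usec) & 0x3f) << 2)
```

With usec = 16 the buggy macro programs 0 into the field instead of 16; values below 16 (such as the default of 10) happen to be unaffected, which is presumably why the typo survived.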
+2 -1
drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
···
 	if (link < 0)
 		return NIX_AF_ERR_RX_LINK_INVALID;
 
-	nix_find_link_frs(rvu, req, pcifunc);
 
 linkcfg:
+	nix_find_link_frs(rvu, req, pcifunc);
+
 	cfg = rvu_read64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link));
 	cfg = (cfg & ~(0xFFFFULL << 16)) | ((u64)req->maxlen << 16);
 	if (req->update_minlen)
+10 -2
drivers/net/ethernet/mediatek/mtk_wed.c
···
 
 	for (i = 0; i < ARRAY_SIZE(hw_list); i++) {
 		struct mtk_wed_hw *hw = hw_list[i];
-		struct mtk_wed_device *dev = hw->wed_dev;
+		struct mtk_wed_device *dev;
 		int err;
 
+		if (!hw)
+			break;
+
+		dev = hw->wed_dev;
 		if (!dev || !dev->wlan.reset)
 			continue;
 
···
 
 	for (i = 0; i < ARRAY_SIZE(hw_list); i++) {
 		struct mtk_wed_hw *hw = hw_list[i];
-		struct mtk_wed_device *dev = hw->wed_dev;
+		struct mtk_wed_device *dev;
 
+		if (!hw)
+			break;
+
+		dev = hw->wed_dev;
 		if (!dev || !dev->wlan.reset_complete)
 			continue;
 
+2 -2
drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c
···
 	MLXSW_AFK_ELEMENT_INFO_U32(IP_TTL_, 0x18, 0, 8),
 	MLXSW_AFK_ELEMENT_INFO_U32(IP_ECN, 0x18, 9, 2),
 	MLXSW_AFK_ELEMENT_INFO_U32(IP_DSCP, 0x18, 11, 6),
-	MLXSW_AFK_ELEMENT_INFO_U32(VIRT_ROUTER_MSB, 0x18, 17, 3),
-	MLXSW_AFK_ELEMENT_INFO_U32(VIRT_ROUTER_LSB, 0x18, 20, 8),
+	MLXSW_AFK_ELEMENT_INFO_U32(VIRT_ROUTER_MSB, 0x18, 17, 4),
+	MLXSW_AFK_ELEMENT_INFO_U32(VIRT_ROUTER_LSB, 0x18, 21, 8),
 	MLXSW_AFK_ELEMENT_INFO_BUF(SRC_IP_96_127, 0x20, 4),
 	MLXSW_AFK_ELEMENT_INFO_BUF(SRC_IP_64_95, 0x24, 4),
 	MLXSW_AFK_ELEMENT_INFO_BUF(SRC_IP_32_63, 0x28, 4),
+6 -2
drivers/net/ethernet/mellanox/mlxsw/pci.c
···
 				     struct sk_buff *skb,
 				     enum mlxsw_pci_cqe_v cqe_v, char *cqe)
 {
+	u8 ts_type;
+
 	if (cqe_v != MLXSW_PCI_CQE_V2)
 		return;
 
-	if (mlxsw_pci_cqe2_time_stamp_type_get(cqe) !=
-	    MLXSW_PCI_CQE_TIME_STAMP_TYPE_UTC)
+	ts_type = mlxsw_pci_cqe2_time_stamp_type_get(cqe);
+
+	if (ts_type != MLXSW_PCI_CQE_TIME_STAMP_TYPE_UTC &&
+	    ts_type != MLXSW_PCI_CQE_TIME_STAMP_TYPE_MIRROR_UTC)
 		return;
 
 	mlxsw_skb_cb(skb)->cqe_ts.sec = mlxsw_pci_cqe2_time_stamp_sec_get(cqe);
-9
drivers/net/ethernet/mellanox/mlxsw/reg.h
···
  */
 MLXSW_ITEM32_LP(reg, sspr, 0x00, 16, 0x00, 12);
 
-/* reg_sspr_sub_port
- * Virtual port within the physical port.
- * Should be set to 0 when virtual ports are not enabled on the port.
- *
- * Access: RW
- */
-MLXSW_ITEM32(reg, sspr, sub_port, 0x00, 8, 8);
-
 /* reg_sspr_system_port
  * Unique identifier within the stacking domain that represents all the ports
  * that are available in the system (external ports).
···
 	MLXSW_REG_ZERO(sspr, payload);
 	mlxsw_reg_sspr_m_set(payload, 1);
 	mlxsw_reg_sspr_local_port_set(payload, local_port);
-	mlxsw_reg_sspr_sub_port_set(payload, 0);
 	mlxsw_reg_sspr_system_port_set(payload, local_port);
 }
+1 -1
drivers/net/ethernet/mellanox/mlxsw/spectrum2_mr_tcam.c
···
 					       key->vrid, GENMASK(7, 0));
 	mlxsw_sp_acl_rulei_keymask_u32(rulei,
 				       MLXSW_AFK_ELEMENT_VIRT_ROUTER_MSB,
-				       key->vrid >> 8, GENMASK(2, 0));
+				       key->vrid >> 8, GENMASK(3, 0));
 	switch (key->proto) {
 	case MLXSW_SP_L3_PROTO_IPV4:
 		return mlxsw_sp2_mr_tcam_rule_parse4(rulei, key);
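The mask widening above matters because the virtual-router id is split into 8 LSBs plus an MSB part: with only a 3-bit MSB mask the key covers 11 bits (vrids up to 2047), so any vrid of 2048 or more loses its top bit and matches the wrong router. A self-contained round-trip sketch (local GENMASK stand-in for the kernel macro, 32-bit values assumed):

```c
#include <assert.h>

/* Local stand-in for the kernel's GENMASK(), valid for h < 32. */
#define GENMASK(h, l)	((~0u >> (31 - (h))) & (~0u << (l)))

/* Split a virtual-router id into 8 LSBs plus masked MSBs, then rebuild. */
static unsigned int vrid_roundtrip(unsigned int vrid, unsigned int msb_mask)
{
	unsigned int lsb = vrid & GENMASK(7, 0);
	unsigned int msb = (vrid >> 8) & msb_mask;

	return (msb << 8) | lsb;
}
```

With the old 3-bit mask, vrid 3000 (0xBB8) round-trips to 0x3B8; the 4-bit mask preserves the full 12-bit value.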
+2 -2
drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c
···
 
 static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_4[] = {
 	MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_LSB, 0x04, 24, 8),
-	MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_MSB, 0x00, 0, 3),
+	MLXSW_AFK_ELEMENT_INST_EXT_U32(VIRT_ROUTER_MSB, 0x00, 0, 3, 0, true),
 };
 
 static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_0[] = {
···
 
 static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_4b[] = {
 	MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_LSB, 0x04, 13, 8),
-	MLXSW_AFK_ELEMENT_INST_EXT_U32(VIRT_ROUTER_MSB, 0x04, 21, 4, 0, true),
+	MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_MSB, 0x04, 21, 4),
 };
 
 static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_2b[] = {
+1 -1
drivers/net/ethernet/sfc/falcon/selftest.c
···
 	for (i = 0; i < state->packet_count; i++) {
 		/* Allocate an skb, holding an extra reference for
 		 * transmit completion counting */
-		skb = alloc_skb(EF4_LOOPBACK_PAYLOAD_LEN, GFP_KERNEL);
+		skb = alloc_skb(sizeof(state->payload), GFP_KERNEL);
 		if (!skb)
 			return -ENOMEM;
 		state->skbs[i] = skb;
+1 -1
drivers/net/ethernet/sfc/selftest.c
···
 	for (i = 0; i < state->packet_count; i++) {
 		/* Allocate an skb, holding an extra reference for
 		 * transmit completion counting */
-		skb = alloc_skb(EFX_LOOPBACK_PAYLOAD_LEN, GFP_KERNEL);
+		skb = alloc_skb(sizeof(state->payload), GFP_KERNEL);
 		if (!skb)
 			return -ENOMEM;
 		state->skbs[i] = skb;
+1 -1
drivers/net/ethernet/sfc/siena/selftest.c
···
 	for (i = 0; i < state->packet_count; i++) {
 		/* Allocate an skb, holding an extra reference for
 		 * transmit completion counting */
-		skb = alloc_skb(EFX_LOOPBACK_PAYLOAD_LEN, GFP_KERNEL);
+		skb = alloc_skb(sizeof(state->payload), GFP_KERNEL);
 		if (!skb)
 			return -ENOMEM;
 		state->skbs[i] = skb;
+2 -1
drivers/net/ipvlan/ipvlan_main.c
···
 
 		write_pnet(&port->pnet, newnet);
 
-		ipvlan_migrate_l3s_hook(oldnet, newnet);
+		if (port->mode == IPVLAN_MODE_L3S)
+			ipvlan_migrate_l3s_hook(oldnet, newnet);
 		break;
 	}
 	case NETDEV_UNREGISTER:
+2 -2
drivers/net/mdio/mdio-bitbang.c
···
 	struct mdiobb_ctrl *ctrl = bus->priv;
 
 	mdiobb_cmd_addr(ctrl, phy, devad, reg);
-	mdiobb_cmd(ctrl, MDIO_C45_READ, phy, reg);
+	mdiobb_cmd(ctrl, MDIO_C45_READ, phy, devad);
 
 	return mdiobb_read_common(bus, phy);
 }
···
 	struct mdiobb_ctrl *ctrl = bus->priv;
 
 	mdiobb_cmd_addr(ctrl, phy, devad, reg);
-	mdiobb_cmd(ctrl, MDIO_C45_WRITE, phy, reg);
+	mdiobb_cmd(ctrl, MDIO_C45_WRITE, phy, devad);
 
 	return mdiobb_write_common(bus, val);
 }
+7 -4
drivers/net/phy/phy.c
···
 
 static void phy_process_error(struct phy_device *phydev)
 {
-	mutex_lock(&phydev->lock);
+	/* phydev->lock must be held for the state change to be safe */
+	if (!mutex_is_locked(&phydev->lock))
+		phydev_err(phydev, "PHY-device data unsafe context\n");
+
 	phydev->state = PHY_ERROR;
-	mutex_unlock(&phydev->lock);
 
 	phy_trigger_machine(phydev);
 }
···
 			      const void *func, int err)
 {
 	WARN(1, "%pS: returned: %d\n", func, err);
+	mutex_lock(&phydev->lock);
 	phy_process_error(phydev);
+	mutex_unlock(&phydev->lock);
 }
 
 /**
···
 *
 * Moves the PHY to the ERROR state in response to a read
 * or write error, and tells the controller the link is down.
- * Must not be called from interrupt context, or while the
- * phydev->lock is held.
+ * Must be called with phydev->lock held.
 */
 void phy_error(struct phy_device *phydev)
 {
+10
drivers/net/phy/sfp-bus.c
···
 	switch (id->base.extended_cc) {
 	case SFF8024_ECC_UNSPEC:
 		break;
+	case SFF8024_ECC_100G_25GAUI_C2M_AOC:
+		if (br_min <= 28000 && br_max >= 25000) {
+			/* 25GBASE-R, possibly with FEC */
+			__set_bit(PHY_INTERFACE_MODE_25GBASER, interfaces);
+			/* There is currently no link mode for 25000base
+			 * with unspecified range, reuse SR.
+			 */
+			phylink_set(modes, 25000baseSR_Full);
+		}
+		break;
 	case SFF8024_ECC_100GBASE_SR4_25GBASE_SR:
 		phylink_set(modes, 100000baseSR4_Full);
 		phylink_set(modes, 25000baseSR_Full);
+1 -4
drivers/net/veth.c
···
 
 	nla_peer = data[VETH_INFO_PEER];
 	ifmp = nla_data(nla_peer);
-	err = rtnl_nla_parse_ifla(peer_tb,
-				  nla_data(nla_peer) + sizeof(struct ifinfomsg),
-				  nla_len(nla_peer) - sizeof(struct ifinfomsg),
-				  NULL);
+	err = rtnl_nla_parse_ifinfomsg(peer_tb, nla_peer, extack);
 	if (err < 0)
 		return err;
 
+1
drivers/net/wireless/intel/iwlwifi/Kconfig
···
 	tristate "Intel Wireless WiFi MVM Firmware support"
 	select WANT_DEV_COREDUMP
 	depends on MAC80211
+	depends on PTP_1588_CLOCK_OPTIONAL
 	help
 	  This is the driver that supports the MVM firmware. The list
 	  of the devices that use this firmware is available here:
+1 -10
include/net/bonding.h
···
 }
 
 /* Caller must hold rcu_read_lock() for read */
-static inline bool bond_slave_has_mac_rx(struct bonding *bond, const u8 *mac)
+static inline bool bond_slave_has_mac_rcu(struct bonding *bond, const u8 *mac)
 {
 	struct list_head *iter;
 	struct slave *tmp;
-	struct netdev_hw_addr *ha;
 
 	bond_for_each_slave_rcu(bond, tmp, iter)
 		if (ether_addr_equal_64bits(mac, tmp->dev->dev_addr))
 			return true;
-
-	if (netdev_uc_empty(bond->dev))
-		return false;
-
-	netdev_for_each_uc_addr(ha, bond->dev)
-		if (ether_addr_equal_64bits(mac, ha->addr))
-			return true;
-
 	return false;
 }
+1 -1
include/net/inet_sock.h
···
 	__s16			uc_ttl;
 	__u16			cmsg_flags;
 	struct ip_options_rcu __rcu	*inet_opt;
+	atomic_t		inet_id;
 	__be16			inet_sport;
-	__u16			inet_id;
 
 	__u8			tos;
 	__u8			min_ttl;
+13 -2
include/net/ip.h
···
 	 * generator as much as we can.
 	 */
 	if (sk && inet_sk(sk)->inet_daddr) {
-		iph->id = htons(inet_sk(sk)->inet_id);
-		inet_sk(sk)->inet_id += segs;
+		int val;
+
+		/* avoid atomic operations for TCP,
+		 * as we hold socket lock at this point.
+		 */
+		if (sk_is_tcp(sk)) {
+			sock_owned_by_me(sk);
+			val = atomic_read(&inet_sk(sk)->inet_id);
+			atomic_set(&inet_sk(sk)->inet_id, val + segs);
+		} else {
+			val = atomic_add_return(segs, &inet_sk(sk)->inet_id);
+		}
+		iph->id = htons(val);
 		return;
 	}
 	if ((iph->frag_off & htons(IP_DF)) && !skb->ignore_df) {
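The inet_id data-race fix above keeps two update paths: lockless callers (UDP-style) need a single atomic read-modify-write, while TCP callers already hold the socket lock and can use a cheaper plain read and write on the same atomic. A C11 stdatomic userspace sketch of the same idea (names here are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

static atomic_int inet_id;	/* stand-in for inet_sk(sk)->inet_id */

/* Lockless path: one atomic RMW, analogous to atomic_add_return()
 * (which returns the post-increment value). */
static uint16_t ip_id_lockless(int segs)
{
	return (uint16_t)(atomic_fetch_add(&inet_id, segs) + segs);
}

/* Lock-protected path: a plain read + write suffices because the
 * caller's lock (the socket lock, in the kernel) serializes updates. */
static uint16_t ip_id_locked(int segs)
{
	int val = atomic_load_explicit(&inet_id, memory_order_relaxed);

	atomic_store_explicit(&inet_id, val + segs, memory_order_relaxed);
	return (uint16_t)val;
}
```

The split mirrors the kernel comment: the atomic is only strictly needed where no lock already provides mutual exclusion.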
+1
include/net/mac80211.h
···
  * marks frames marked in the bitmap as having been filtered. Afterwards, it
  * checks if any frames in the window starting from @ssn can now be released
  * (in case they were only waiting for frames that were filtered.)
+ * (Only work correctly if @max_rx_aggregation_subframes <= 64 frames)
  */
 void ieee80211_mark_rx_ba_filtered_frames(struct ieee80211_sta *pubsta, u8 tid,
 					  u16 ssn, u64 filtered,
+6
include/net/netfilter/nf_tables.h
···
 	return (void *)set->data;
 }
 
+static inline bool nft_set_gc_is_pending(const struct nft_set *s)
+{
+	return refcount_read(&s->refs) != 1;
+}
+
 static inline struct nft_set *nft_set_container_of(const void *priv)
 {
 	return (void *)priv - offsetof(struct nft_set, data);
···
 	u64			table_handle;
 	unsigned int		base_seq;
 	unsigned int		gc_seq;
+	u8			validate_state;
 };
 
 extern unsigned int nf_tables_net_id;
+2 -2
include/net/rtnetlink.h
···
 int rtnl_configure_link(struct net_device *dev, const struct ifinfomsg *ifm,
			u32 portid, const struct nlmsghdr *nlh);
 
-int rtnl_nla_parse_ifla(struct nlattr **tb, const struct nlattr *head, int len,
-			struct netlink_ext_ack *exterr);
+int rtnl_nla_parse_ifinfomsg(struct nlattr **tb, const struct nlattr *nla_peer,
+			     struct netlink_ext_ack *exterr);
 struct net *rtnl_get_net_ns_capable(struct sock *sk, int netnsid);
 
 #define MODULE_ALIAS_RTNL_LINK(kind) MODULE_ALIAS("rtnl-link-" kind)
+4 -3
include/net/sock.h
··· 1323 1323 /* 1324 1324 * Pressure flag: try to collapse. 1325 1325 * Technical note: it is used by multiple contexts non atomically. 1326 + * Make sure to use READ_ONCE()/WRITE_ONCE() for all reads/writes. 1326 1327 * All the __sk_mem_schedule() is of this nature: accounting 1327 1328 * is strict, actions are advisory and have some latency. 1328 1329 */ ··· 1424 1423 static inline bool sk_under_global_memory_pressure(const struct sock *sk) 1425 1424 { 1426 1425 return sk->sk_prot->memory_pressure && 1427 - !!*sk->sk_prot->memory_pressure; 1426 + !!READ_ONCE(*sk->sk_prot->memory_pressure); 1428 1427 } 1429 1428 1430 1429 static inline bool sk_under_memory_pressure(const struct sock *sk) ··· 1436 1435 mem_cgroup_under_socket_pressure(sk->sk_memcg)) 1437 1436 return true; 1438 1437 1439 - return !!*sk->sk_prot->memory_pressure; 1438 + return !!READ_ONCE(*sk->sk_prot->memory_pressure); 1440 1439 } 1441 1440 1442 1441 static inline long ··· 1513 1512 { 1514 1513 if (!prot->memory_pressure) 1515 1514 return false; 1516 - return !!*prot->memory_pressure; 1515 + return !!READ_ONCE(*prot->memory_pressure); 1517 1516 } 1518 1517 1519 1518
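The sock.h hunks wrap every lockless read of `*memory_pressure` in `READ_ONCE()`. A hedged sketch of why that matters, using GCC/Clang `__typeof__` stand-ins (the `FAKE_` names are invented; the real macros live in the kernel's rwonce.h): forcing the access through a volatile lvalue guarantees exactly one load, so the compiler cannot tear, fuse, or re-load the flag while a writer flips it concurrently.

```c
/* Minimal READ_ONCE/WRITE_ONCE stand-ins: one access through a
 * volatile-qualified lvalue, no tearing or refetching. */
#define FAKE_READ_ONCE(x)     (*(volatile __typeof__(x) *)&(x))
#define FAKE_WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

static unsigned long fake_memory_pressure;

/* Mirrors the shape of sk_under_global_memory_pressure() after the fix. */
static int fake_under_pressure(void)
{
	return !!FAKE_READ_ONCE(fake_memory_pressure);
}
```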
+2 -1
net/batman-adv/bat_v_elp.c
··· 505 505 struct batadv_priv *bat_priv = netdev_priv(if_incoming->soft_iface); 506 506 struct batadv_elp_packet *elp_packet; 507 507 struct batadv_hard_iface *primary_if; 508 - struct ethhdr *ethhdr = (struct ethhdr *)skb_mac_header(skb); 508 + struct ethhdr *ethhdr; 509 509 bool res; 510 510 int ret = NET_RX_DROP; 511 511 ··· 513 513 if (!res) 514 514 goto free_skb; 515 515 516 + ethhdr = eth_hdr(skb); 516 517 if (batadv_is_my_mac(bat_priv, ethhdr->h_source)) 517 518 goto free_skb; 518 519
+5 -2
net/batman-adv/bat_v_ogm.c
··· 123 123 { 124 124 struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface); 125 125 126 - if (hard_iface->if_status != BATADV_IF_ACTIVE) 126 + if (hard_iface->if_status != BATADV_IF_ACTIVE) { 127 + kfree_skb(skb); 127 128 return; 129 + } 128 130 129 131 batadv_inc_counter(bat_priv, BATADV_CNT_MGMT_TX); 130 132 batadv_add_counter(bat_priv, BATADV_CNT_MGMT_TX_BYTES, ··· 987 985 { 988 986 struct batadv_priv *bat_priv = netdev_priv(if_incoming->soft_iface); 989 987 struct batadv_ogm2_packet *ogm_packet; 990 - struct ethhdr *ethhdr = eth_hdr(skb); 988 + struct ethhdr *ethhdr; 991 989 int ogm_offset; 992 990 u8 *packet_pos; 993 991 int ret = NET_RX_DROP; ··· 1001 999 if (!batadv_check_management_packet(skb, if_incoming, BATADV_OGM2_HLEN)) 1002 1000 goto free_skb; 1003 1001 1002 + ethhdr = eth_hdr(skb); 1004 1003 if (batadv_is_my_mac(bat_priv, ethhdr->h_source)) 1005 1004 goto free_skb; 1006 1005
+13 -1
net/batman-adv/hard-interface.c
··· 630 630 */ 631 631 void batadv_update_min_mtu(struct net_device *soft_iface) 632 632 { 633 - soft_iface->mtu = batadv_hardif_min_mtu(soft_iface); 633 + struct batadv_priv *bat_priv = netdev_priv(soft_iface); 634 + int limit_mtu; 635 + int mtu; 636 + 637 + mtu = batadv_hardif_min_mtu(soft_iface); 638 + 639 + if (bat_priv->mtu_set_by_user) 640 + limit_mtu = bat_priv->mtu_set_by_user; 641 + else 642 + limit_mtu = ETH_DATA_LEN; 643 + 644 + mtu = min(mtu, limit_mtu); 645 + dev_set_mtu(soft_iface, mtu); 634 646 635 647 /* Check if the local translate table should be cleaned up to match a 636 648 * new (and smaller) MTU.
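The new `batadv_update_min_mtu()` body clamps the hard-interface minimum by the user-set MTU if one was ever configured, else by the Ethernet default. The decision reduces to a small pure function; this sketch uses invented names (`fake_pick_mtu`, `FAKE_ETH_DATA_LEN`), treating a zero `mtu_set_by_user` as "never set", as the `if/else` above does:

```c
#define FAKE_ETH_DATA_LEN 1500	/* default Ethernet payload MTU */

/* Invented helper mirroring the logic above: the soft interface MTU
 * is min(hard-interface minimum, user-set MTU or Ethernet default). */
static int fake_pick_mtu(int hardif_min_mtu, int mtu_set_by_user)
{
	int limit = mtu_set_by_user ? mtu_set_by_user : FAKE_ETH_DATA_LEN;

	return hardif_min_mtu < limit ? hardif_min_mtu : limit;
}
```

The effect: batman-adv no longer silently raises the MTU above 1500 (or above what the user pinned) just because all slave interfaces could carry more.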
+3
net/batman-adv/netlink.c
··· 495 495 attr = info->attrs[BATADV_ATTR_FRAGMENTATION_ENABLED]; 496 496 497 497 atomic_set(&bat_priv->fragmentation, !!nla_get_u8(attr)); 498 + 499 + rtnl_lock(); 498 500 batadv_update_min_mtu(bat_priv->soft_iface); 501 + rtnl_unlock(); 499 502 } 500 503 501 504 if (info->attrs[BATADV_ATTR_GW_BANDWIDTH_DOWN]) {
+3
net/batman-adv/soft-interface.c
··· 153 153 154 154 static int batadv_interface_change_mtu(struct net_device *dev, int new_mtu) 155 155 { 156 + struct batadv_priv *bat_priv = netdev_priv(dev); 157 + 156 158 /* check ranges */ 157 159 if (new_mtu < 68 || new_mtu > batadv_hardif_min_mtu(dev)) 158 160 return -EINVAL; 159 161 160 162 dev->mtu = new_mtu; 163 + bat_priv->mtu_set_by_user = new_mtu; 161 164 162 165 return 0; 163 166 }
-1
net/batman-adv/translation-table.c
··· 774 774 if (roamed_back) { 775 775 batadv_tt_global_free(bat_priv, tt_global, 776 776 "Roaming canceled"); 777 - tt_global = NULL; 778 777 } else { 779 778 /* The global entry has to be marked as ROAMING and 780 779 * has to be kept for consistency purpose
+6
net/batman-adv/types.h
··· 1547 1547 struct net_device *soft_iface; 1548 1548 1549 1549 /** 1550 + * @mtu_set_by_user: MTU was set once by user 1551 + * protected by rtnl_lock 1552 + */ 1553 + int mtu_set_by_user; 1554 + 1555 + /** 1550 1556 * @bat_counters: mesh internal traffic statistic counters (see 1551 1557 * batadv_counters) 1552 1558 */
+7 -15
net/can/isotp.c
··· 188 188 return (isotp_bc_flags(so) == 0); 189 189 } 190 190 191 - static bool isotp_register_txecho(struct isotp_sock *so) 192 - { 193 - /* all modes but SF_BROADCAST register for tx echo skbs */ 194 - return (isotp_bc_flags(so) != CAN_ISOTP_SF_BROADCAST); 195 - } 196 - 197 191 static enum hrtimer_restart isotp_rx_timer_handler(struct hrtimer *hrtimer) 198 192 { 199 193 struct isotp_sock *so = container_of(hrtimer, struct isotp_sock, ··· 1203 1209 lock_sock(sk); 1204 1210 1205 1211 /* remove current filters & unregister */ 1206 - if (so->bound && isotp_register_txecho(so)) { 1212 + if (so->bound) { 1207 1213 if (so->ifindex) { 1208 1214 struct net_device *dev; 1209 1215 ··· 1326 1332 can_rx_register(net, dev, rx_id, SINGLE_MASK(rx_id), 1327 1333 isotp_rcv, sk, "isotp", sk); 1328 1334 1329 - if (isotp_register_txecho(so)) { 1330 - /* no consecutive frame echo skb in flight */ 1331 - so->cfecho = 0; 1335 + /* no consecutive frame echo skb in flight */ 1336 + so->cfecho = 0; 1332 1337 1333 - /* register for echo skb's */ 1334 - can_rx_register(net, dev, tx_id, SINGLE_MASK(tx_id), 1335 - isotp_rcv_echo, sk, "isotpe", sk); 1336 - } 1338 + /* register for echo skb's */ 1339 + can_rx_register(net, dev, tx_id, SINGLE_MASK(tx_id), 1340 + isotp_rcv_echo, sk, "isotpe", sk); 1337 1341 1338 1342 dev_put(dev); 1339 1343 ··· 1552 1560 case NETDEV_UNREGISTER: 1553 1561 lock_sock(sk); 1554 1562 /* remove current filters & unregister */ 1555 - if (so->bound && isotp_register_txecho(so)) { 1563 + if (so->bound) { 1556 1564 if (isotp_register_rxid(so)) 1557 1565 can_rx_unregister(dev_net(dev), dev, so->rxid, 1558 1566 SINGLE_MASK(so->rxid),
+26 -9
net/can/raw.c
··· 85 85 int bound; 86 86 int ifindex; 87 87 struct net_device *dev; 88 + netdevice_tracker dev_tracker; 88 89 struct list_head notifier; 89 90 int loopback; 90 91 int recv_own_msgs; ··· 286 285 case NETDEV_UNREGISTER: 287 286 lock_sock(sk); 288 287 /* remove current filters & unregister */ 289 - if (ro->bound) 288 + if (ro->bound) { 290 289 raw_disable_allfilters(dev_net(dev), dev, sk); 290 + netdev_put(dev, &ro->dev_tracker); 291 + } 291 292 292 293 if (ro->count > 1) 293 294 kfree(ro->filter); ··· 394 391 395 392 /* remove current filters & unregister */ 396 393 if (ro->bound) { 397 - if (ro->dev) 394 + if (ro->dev) { 398 395 raw_disable_allfilters(dev_net(ro->dev), ro->dev, sk); 399 - else 396 + netdev_put(ro->dev, &ro->dev_tracker); 397 + } else { 400 398 raw_disable_allfilters(sock_net(sk), NULL, sk); 399 + } 401 400 } 402 401 403 402 if (ro->count > 1) ··· 450 445 goto out; 451 446 } 452 447 if (dev->type != ARPHRD_CAN) { 453 - dev_put(dev); 454 448 err = -ENODEV; 455 - goto out; 449 + goto out_put_dev; 456 450 } 451 + 457 452 if (!(dev->flags & IFF_UP)) 458 453 notify_enetdown = 1; 459 454 ··· 461 456 462 457 /* filters set by default/setsockopt */ 463 458 err = raw_enable_allfilters(sock_net(sk), dev, sk); 464 - dev_put(dev); 459 + if (err) 460 + goto out_put_dev; 461 + 465 462 } else { 466 463 ifindex = 0; 467 464 ··· 474 467 if (!err) { 475 468 if (ro->bound) { 476 469 /* unregister old filters */ 477 - if (ro->dev) 470 + if (ro->dev) { 478 471 raw_disable_allfilters(dev_net(ro->dev), 479 472 ro->dev, sk); 480 - else 473 + /* drop reference to old ro->dev */ 474 + netdev_put(ro->dev, &ro->dev_tracker); 475 + } else { 481 476 raw_disable_allfilters(sock_net(sk), NULL, sk); 477 + } 482 478 } 483 479 ro->ifindex = ifindex; 484 480 ro->bound = 1; 481 + /* bind() ok -> hold a reference for new ro->dev */ 485 482 ro->dev = dev; 483 + if (ro->dev) 484 + netdev_hold(ro->dev, &ro->dev_tracker, GFP_KERNEL); 486 485 } 487 486 488 - out: 487 + out_put_dev: 488 + /* remove potential reference from dev_get_by_index() */ 489 + if (dev) 490 + dev_put(dev); 491 + out: 489 492 release_sock(sk); 490 493 rtnl_unlock(); 491 494
+21 -4
net/core/rtnetlink.c
··· 2268 2268 return err; 2269 2269 } 2270 2270 2271 - int rtnl_nla_parse_ifla(struct nlattr **tb, const struct nlattr *head, int len, 2272 - struct netlink_ext_ack *exterr) 2271 + int rtnl_nla_parse_ifinfomsg(struct nlattr **tb, const struct nlattr *nla_peer, 2272 + struct netlink_ext_ack *exterr) 2273 2273 { 2274 - return nla_parse_deprecated(tb, IFLA_MAX, head, len, ifla_policy, 2274 + const struct ifinfomsg *ifmp; 2275 + const struct nlattr *attrs; 2276 + size_t len; 2277 + 2278 + ifmp = nla_data(nla_peer); 2279 + attrs = nla_data(nla_peer) + sizeof(struct ifinfomsg); 2280 + len = nla_len(nla_peer) - sizeof(struct ifinfomsg); 2281 + 2282 + if (ifmp->ifi_index < 0) { 2283 + NL_SET_ERR_MSG_ATTR(exterr, nla_peer, 2284 + "ifindex can't be negative"); 2285 + return -EINVAL; 2286 + } 2287 + 2288 + return nla_parse_deprecated(tb, IFLA_MAX, attrs, len, ifla_policy, 2275 2289 exterr); 2276 2290 } 2277 - EXPORT_SYMBOL(rtnl_nla_parse_ifla); 2291 + EXPORT_SYMBOL(rtnl_nla_parse_ifinfomsg); 2278 2292 2279 2293 struct net *rtnl_link_get_net(struct net *src_net, struct nlattr *tb[]) 2280 2294 { ··· 3561 3547 if (ifm->ifi_index > 0) { 3562 3548 link_specified = true; 3563 3549 dev = __dev_get_by_index(net, ifm->ifi_index); 3550 + } else if (ifm->ifi_index < 0) { 3551 + NL_SET_ERR_MSG(extack, "ifindex can't be negative"); 3552 + return -EINVAL; 3564 3553 } else if (tb[IFLA_IFNAME] || tb[IFLA_ALT_IFNAME]) { 3565 3554 link_specified = true; 3566 3555 dev = rtnl_dev_get(net, tb);
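The rtnetlink fix rejects negative ifindexes both for the peer attribute and for the top-level message. The core check is tiny; this sketch models it with an invented struct (`fake_ifinfomsg`) standing in for the real `struct ifinfomsg`, whose `ifi_index` is a signed int:

```c
#include <errno.h>

/* Invented shape of the check: the peer attribute payload begins
 * with an ifinfomsg whose ifindex is signed; negative values are
 * now rejected before any further attribute parsing. */
struct fake_ifinfomsg {
	int ifi_index;
};

static int fake_parse_peer(const struct fake_ifinfomsg *ifmp)
{
	if (ifmp->ifi_index < 0)
		return -EINVAL;	/* "ifindex can't be negative" */
	return 0;	/* 0 (auto-assign) and positive indexes are fine */
}
```

Before this, a negative peer ifindex from userspace could flow into veth/vxcan link creation unchecked.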
+2 -2
net/dccp/ipv4.c
··· 130 130 inet->inet_daddr, 131 131 inet->inet_sport, 132 132 inet->inet_dport); 133 - inet->inet_id = get_random_u16(); 133 + atomic_set(&inet->inet_id, get_random_u16()); 134 134 135 135 err = dccp_connect(sk); 136 136 rt = NULL; ··· 432 432 RCU_INIT_POINTER(newinet->inet_opt, rcu_dereference(ireq->ireq_opt)); 433 433 newinet->mc_index = inet_iif(skb); 434 434 newinet->mc_ttl = ip_hdr(skb)->ttl; 435 - newinet->inet_id = get_random_u16(); 435 + atomic_set(&newinet->inet_id, get_random_u16()); 436 436 437 437 if (dst == NULL && (dst = inet_csk_route_child_sock(sk, newsk, req)) == NULL) 438 438 goto put_and_exit;
+12 -8
net/dccp/proto.c
··· 315 315 __poll_t dccp_poll(struct file *file, struct socket *sock, 316 316 poll_table *wait) 317 317 { 318 - __poll_t mask; 319 318 struct sock *sk = sock->sk; 319 + __poll_t mask; 320 + u8 shutdown; 321 + int state; 320 322 321 323 sock_poll_wait(file, sock, wait); 322 - if (sk->sk_state == DCCP_LISTEN) 324 + 325 + state = inet_sk_state_load(sk); 326 + if (state == DCCP_LISTEN) 323 327 return inet_csk_listen_poll(sk); 324 328 325 329 /* Socket is not locked. We are protected from async events ··· 332 328 */ 333 329 334 330 mask = 0; 335 - if (sk->sk_err) 331 + if (READ_ONCE(sk->sk_err)) 336 332 mask = EPOLLERR; 333 + shutdown = READ_ONCE(sk->sk_shutdown); 337 334 338 - if (sk->sk_shutdown == SHUTDOWN_MASK || sk->sk_state == DCCP_CLOSED) 335 + if (shutdown == SHUTDOWN_MASK || state == DCCP_CLOSED) 339 336 mask |= EPOLLHUP; 340 - if (sk->sk_shutdown & RCV_SHUTDOWN) 337 + if (shutdown & RCV_SHUTDOWN) 341 338 mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP; 342 339 343 340 /* Connected? */ 344 - if ((1 << sk->sk_state) & ~(DCCPF_REQUESTING | DCCPF_RESPOND)) { 341 + if ((1 << state) & ~(DCCPF_REQUESTING | DCCPF_RESPOND)) { 345 342 if (atomic_read(&sk->sk_rmem_alloc) > 0) 346 343 mask |= EPOLLIN | EPOLLRDNORM; 347 344 348 - if (!(sk->sk_shutdown & SEND_SHUTDOWN)) { 345 + if (!(shutdown & SEND_SHUTDOWN)) { 349 346 if (sk_stream_is_writeable(sk)) { 350 347 mask |= EPOLLOUT | EPOLLWRNORM; 351 348 } else { /* send SIGIO later */ ··· 364 359 } 365 360 return mask; 366 361 } 367 - 368 362 EXPORT_SYMBOL_GPL(dccp_poll); 369 363 370 364 int dccp_ioctl(struct sock *sk, int cmd, int *karg)
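The `dccp_poll()` change snapshots `sk_err`, `sk_shutdown`, and the socket state each exactly once, then computes the mask from the snapshots. The mask computation itself is a pure function of those three values; a simplified sketch (constants and names prefixed `F_` are local stand-ins, with `EPOLL*` values as in epoll headers):

```c
#define F_RCV_SHUTDOWN	1
#define F_SEND_SHUTDOWN	2
#define F_SHUTDOWN_MASK	(F_RCV_SHUTDOWN | F_SEND_SHUTDOWN)
#define F_EPOLLIN	0x001u
#define F_EPOLLERR	0x008u
#define F_EPOLLHUP	0x010u
#define F_EPOLLRDNORM	0x040u
#define F_EPOLLRDHUP	0x2000u

/* Compute a poll mask from values the caller loaded exactly once,
 * so concurrent writers cannot make the branches disagree. */
static unsigned int fake_poll_mask(int err, unsigned char shutdown,
				   int closed)
{
	unsigned int mask = 0;

	if (err)
		mask = F_EPOLLERR;
	if (shutdown == F_SHUTDOWN_MASK || closed)
		mask |= F_EPOLLHUP;
	if (shutdown & F_RCV_SHUTDOWN)
		mask |= F_EPOLLIN | F_EPOLLRDNORM | F_EPOLLRDHUP;
	return mask;
}
```

Re-reading `sk->sk_shutdown` per branch, as the old code did, could yield a mask that no single consistent socket state would produce.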
+3
net/devlink/leftover.c
··· 6704 6704 struct devlink_param_item *param_item; 6705 6705 struct devlink_trap_item *trap_item; 6706 6706 struct devlink_port *devlink_port; 6707 + struct devlink_linecard *linecard; 6707 6708 struct devlink_rate *rate_node; 6708 6709 struct devlink_region *region; 6709 6710 unsigned long port_index; ··· 6733 6732 6734 6733 xa_for_each(&devlink->ports, port_index, devlink_port) 6735 6734 devlink_port_notify(devlink_port, DEVLINK_CMD_PORT_DEL); 6735 + list_for_each_entry_reverse(linecard, &devlink->linecard_list, list) 6736 + devlink_linecard_notify(linecard, DEVLINK_CMD_LINECARD_DEL); 6736 6737 devlink_notify(devlink, DEVLINK_CMD_DEL); 6737 6738 } 6738 6739
+1 -1
net/ipv4/af_inet.c
··· 340 340 else 341 341 inet->pmtudisc = IP_PMTUDISC_WANT; 342 342 343 - inet->inet_id = 0; 343 + atomic_set(&inet->inet_id, 0); 344 344 345 345 sock_init_data(sock, sk); 346 346
+1 -1
net/ipv4/datagram.c
··· 73 73 reuseport_has_conns_set(sk); 74 74 sk->sk_state = TCP_ESTABLISHED; 75 75 sk_set_txhash(sk); 76 - inet->inet_id = get_random_u16(); 76 + atomic_set(&inet->inet_id, get_random_u16()); 77 77 78 78 sk_dst_set(sk, &rt->dst); 79 79 err = 0;
+2 -2
net/ipv4/tcp_ipv4.c
··· 312 312 inet->inet_daddr)); 313 313 } 314 314 315 - inet->inet_id = get_random_u16(); 315 + atomic_set(&inet->inet_id, get_random_u16()); 316 316 317 317 if (tcp_fastopen_defer_connect(sk, &err)) 318 318 return err; ··· 1596 1596 inet_csk(newsk)->icsk_ext_hdr_len = 0; 1597 1597 if (inet_opt) 1598 1598 inet_csk(newsk)->icsk_ext_hdr_len = inet_opt->opt.optlen; 1599 - newinet->inet_id = get_random_u16(); 1599 + atomic_set(&newinet->inet_id, get_random_u16()); 1600 1600 1601 1601 /* Set ToS of the new socket based upon the value of incoming SYN. 1602 1602 * ECT bits are set later in tcp_init_transfer().
+10 -2
net/mac80211/rx.c
··· 1083 1083 struct sk_buff *tail = skb_peek_tail(frames); 1084 1084 struct ieee80211_rx_status *status; 1085 1085 1086 - if (tid_agg_rx->reorder_buf_filtered & BIT_ULL(index)) 1086 + if (tid_agg_rx->reorder_buf_filtered && 1087 + tid_agg_rx->reorder_buf_filtered & BIT_ULL(index)) 1087 1088 return true; 1088 1089 1089 1090 if (!tail) ··· 1125 1124 } 1126 1125 1127 1126 no_frame: 1128 - tid_agg_rx->reorder_buf_filtered &= ~BIT_ULL(index); 1127 + if (tid_agg_rx->reorder_buf_filtered) 1128 + tid_agg_rx->reorder_buf_filtered &= ~BIT_ULL(index); 1129 1129 tid_agg_rx->head_seq_num = ieee80211_sn_inc(tid_agg_rx->head_seq_num); 1130 1130 } 1131 1131 ··· 4266 4264 u16 ssn, u64 filtered, 4267 4265 u16 received_mpdus) 4268 4266 { 4267 + struct ieee80211_local *local; 4269 4268 struct sta_info *sta; 4270 4269 struct tid_ampdu_rx *tid_agg_rx; 4271 4270 struct sk_buff_head frames; ··· 4283 4280 __skb_queue_head_init(&frames); 4284 4281 4285 4282 sta = container_of(pubsta, struct sta_info, sta); 4283 + 4284 + local = sta->sdata->local; 4285 + WARN_ONCE(local->hw.max_rx_aggregation_subframes > 64, 4286 + "RX BA marker can't support max_rx_aggregation_subframes %u > 64\n", 4287 + local->hw.max_rx_aggregation_subframes); 4286 4288 4287 4289 if (!ieee80211_rx_data_set_sta(&rx, sta, -1)) 4288 4290 return;
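The mac80211 hunks exist because `reorder_buf_filtered` is a `u64` bitmap, one bit per reorder slot: `BIT_ULL(index)` with `index >= 64` is undefined, hence the new "<= 64 subframes" limit and the `WARN_ONCE`. A tiny sketch of the capacity guard (names prefixed `FAKE_`/`fake_` are invented):

```c
#include <stdint.h>

#define FAKE_BIT_ULL(n) (1ULL << (n))

/* One bit per reorder slot in a u64: indexes past 63 would shift
 * out of range, which is what the UBSAN warning flagged. */
static int fake_mark_filtered(uint64_t *bitmap, unsigned int index)
{
	if (index >= 64)
		return -1;	/* beyond the bitmap's capacity */
	*bitmap |= FAKE_BIT_ULL(index);
	return 0;
}
```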
+15 -8
net/netfilter/nf_tables_api.c
··· 1373 1373 if (table == NULL) 1374 1374 goto err_kzalloc; 1375 1375 1376 - table->validate_state = NFT_VALIDATE_SKIP; 1376 + table->validate_state = nft_net->validate_state; 1377 1377 table->name = nla_strdup(attr, GFP_KERNEL_ACCOUNT); 1378 1378 if (table->name == NULL) 1379 1379 goto err_strdup; ··· 9051 9051 return -EAGAIN; 9052 9052 9053 9053 nft_validate_state_update(table, NFT_VALIDATE_SKIP); 9054 + break; 9054 9055 } 9055 - 9056 - break; 9057 9056 } 9058 9057 9059 9058 return 0; ··· 9456 9457 struct nft_trans_gc *trans, *next; 9457 9458 LIST_HEAD(trans_gc_list); 9458 9459 9459 - spin_lock(&nf_tables_destroy_list_lock); 9460 + spin_lock(&nf_tables_gc_list_lock); 9460 9461 list_splice_init(&nf_tables_gc_list, &trans_gc_list); 9461 - spin_unlock(&nf_tables_destroy_list_lock); 9462 + spin_unlock(&nf_tables_gc_list_lock); 9462 9463 9463 9464 list_for_each_entry_safe(trans, next, &trans_gc_list, list) { 9464 9465 list_del(&trans->list); ··· 9798 9799 } 9799 9800 9800 9801 /* 0. Validate ruleset, otherwise roll back for error reporting. */ 9801 - if (nf_tables_validate(net) < 0) 9802 + if (nf_tables_validate(net) < 0) { 9803 + nft_net->validate_state = NFT_VALIDATE_DO; 9802 9804 return -EAGAIN; 9805 + } 9803 9806 9804 9807 err = nft_flow_rule_offload_commit(net); 9805 9808 if (err < 0) ··· 10060 10059 nf_tables_commit_audit_log(&adl, nft_net->base_seq); 10061 10060 10062 10061 nft_gc_seq_end(nft_net, gc_seq); 10062 + nft_net->validate_state = NFT_VALIDATE_SKIP; 10063 10063 nf_tables_commit_release(net); 10064 10064 10065 10065 return 0; ··· 10337 10335 enum nfnl_abort_action action) 10338 10336 { 10339 10337 struct nftables_pernet *nft_net = nft_pernet(net); 10340 - int ret = __nf_tables_abort(net, action); 10338 + unsigned int gc_seq; 10339 + int ret; 10341 10340 10341 + gc_seq = nft_gc_seq_begin(nft_net); 10342 + ret = __nf_tables_abort(net, action); 10343 + nft_gc_seq_end(nft_net, gc_seq); 10342 10344 mutex_unlock(&nft_net->commit_mutex); 10343 10345 10344 10346 return ret; ··· 11077 11071 gc_seq = nft_gc_seq_begin(nft_net); 11078 11072 11079 11073 if (!list_empty(&nf_tables_destroy_list)) 11080 - rcu_barrier(); 11074 + nf_tables_trans_destroy_flush_work(); 11081 11075 again: 11082 11076 list_for_each_entry(table, &nft_net->tables, list) { 11083 11077 if (nft_table_has_owner(table) && ··· 11121 11115 mutex_init(&nft_net->commit_mutex); 11122 11116 nft_net->base_seq = 1; 11123 11117 nft_net->gc_seq = 0; 11118 + nft_net->validate_state = NFT_VALIDATE_SKIP; 11124 11119 11125 11120 return 0; 11126 11121 }
+3
net/netfilter/nft_set_hash.c
··· 326 326 nft_net = nft_pernet(net); 327 327 gc_seq = READ_ONCE(nft_net->gc_seq); 328 328 329 + if (nft_set_gc_is_pending(set)) 330 + goto done; 331 + 329 332 gc = nft_trans_gc_alloc(set, gc_seq, GFP_KERNEL); 330 333 if (!gc) 331 334 goto done;
+10 -3
net/netfilter/nft_set_pipapo.c
··· 902 902 static int pipapo_insert(struct nft_pipapo_field *f, const uint8_t *k, 903 903 int mask_bits) 904 904 { 905 - int rule = f->rules++, group, ret, bit_offset = 0; 905 + int rule = f->rules, group, ret, bit_offset = 0; 906 906 907 - ret = pipapo_resize(f, f->rules - 1, f->rules); 907 + ret = pipapo_resize(f, f->rules, f->rules + 1); 908 908 if (ret) 909 909 return ret; 910 + 911 + f->rules++; 910 912 911 913 for (group = 0; group < f->groups; group++) { 912 914 int i, v; ··· 1054 1052 step++; 1055 1053 if (step >= len) { 1056 1054 if (!masks) { 1057 - pipapo_insert(f, base, 0); 1055 + err = pipapo_insert(f, base, 0); 1056 + if (err < 0) 1057 + return err; 1058 1058 masks = 1; 1059 1059 } 1060 1060 goto out; ··· 1238 1234 ret = pipapo_insert(f, start, f->groups * f->bb); 1239 1235 else 1240 1236 ret = pipapo_expand(f, start, end, f->groups * f->bb); 1237 + 1238 + if (ret < 0) 1239 + return ret; 1241 1240 1242 1241 if (f->bsize > bsize_max) 1243 1242 bsize_max = f->bsize;
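The pipapo fix reorders "grow storage" and "bump the rule count": the count must only move once the resize succeeded, so a failed allocation leaves the field consistent. An invented miniature of that resize-then-commit pattern using `realloc`:

```c
#include <errno.h>
#include <stdlib.h>

/* Invented miniature of the pipapo pattern above, not kernel code. */
struct fake_field {
	int rules;
	int *data;
};

static int fake_insert(struct fake_field *f, int key)
{
	/* grow for rules + 1 first ... */
	int *p = realloc(f->data, sizeof(*p) * (size_t)(f->rules + 1));

	if (!p)
		return -ENOMEM;	/* f->rules untouched on failure */
	f->data = p;
	/* ... and only then commit the new count. */
	f->data[f->rules++] = key;
	return 0;
}
```

The pre-fix shape (`rule = f->rules++` before the resize) left `rules` pointing one past the allocated storage whenever `pipapo_resize()` failed.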
+3
net/netfilter/nft_set_rbtree.c
··· 611 611 nft_net = nft_pernet(net); 612 612 gc_seq = READ_ONCE(nft_net->gc_seq); 613 613 614 + if (nft_set_gc_is_pending(set)) 615 + goto done; 616 + 614 617 gc = nft_trans_gc_alloc(set, gc_seq, GFP_KERNEL); 615 618 if (!gc) 616 619 goto done;
+40 -13
net/sched/sch_api.c
··· 1547 1547 return 0; 1548 1548 } 1549 1549 1550 + static bool req_create_or_replace(struct nlmsghdr *n) 1551 + { 1552 + return (n->nlmsg_flags & NLM_F_CREATE && 1553 + n->nlmsg_flags & NLM_F_REPLACE); 1554 + } 1555 + 1556 + static bool req_create_exclusive(struct nlmsghdr *n) 1557 + { 1558 + return (n->nlmsg_flags & NLM_F_CREATE && 1559 + n->nlmsg_flags & NLM_F_EXCL); 1560 + } 1561 + 1562 + static bool req_change(struct nlmsghdr *n) 1563 + { 1564 + return (!(n->nlmsg_flags & NLM_F_CREATE) && 1565 + !(n->nlmsg_flags & NLM_F_REPLACE) && 1566 + !(n->nlmsg_flags & NLM_F_EXCL)); 1567 + } 1568 + 1550 1569 /* 1551 1570 * Create/change qdisc. 1552 1571 */ 1553 - 1554 1572 static int tc_modify_qdisc(struct sk_buff *skb, struct nlmsghdr *n, 1555 1573 struct netlink_ext_ack *extack) 1556 1574 { ··· 1662 1644 * 1663 1645 * We know, that some child q is already 1664 1646 * attached to this parent and have choice: 1665 - * either to change it or to create/graft new one. 1647 + * 1) change it or 2) create/graft new one. 1648 + * If the requested qdisc kind is different 1649 + * than the existing one, then we choose graft. 1650 + * If they are the same then this is "change" 1651 + * operation - just let it fallthrough.. 1666 1652 * 1667 1653 * 1. We are allowed to create/graft only 1668 - * if CREATE and REPLACE flags are set. 1654 + * if the request is explicitly stating 1655 + * "please create if it doesn't exist". 1669 1656 * 1670 - * 2. If EXCL is set, requestor wanted to say, 1671 - * that qdisc tcm_handle is not expected 1657 + * 2. If the request is to exclusive create 1658 + * then the qdisc tcm_handle is not expected 1672 1659 * to exist, so that we choose create/graft too. 1673 1660 * 1674 1661 * 3. The last case is when no flags are set. 1662 + * This will happen when for example tc 1663 + * utility issues a "change" command. 1675 1664 * Alas, it is sort of hole in API, we 1676 1665 * cannot decide what to do unambiguously. 1677 - * For now we select create/graft, if 1678 - * user gave KIND, which does not match existing. 1666 + * For now we select create/graft. 1679 1667 */ 1680 - if ((n->nlmsg_flags & NLM_F_CREATE) && 1681 - (n->nlmsg_flags & NLM_F_REPLACE) && 1682 - ((n->nlmsg_flags & NLM_F_EXCL) || 1683 - (tca[TCA_KIND] && 1684 - nla_strcmp(tca[TCA_KIND], q->ops->id)))) 1685 - goto create_n_graft; 1668 + if (tca[TCA_KIND] && 1669 + nla_strcmp(tca[TCA_KIND], q->ops->id)) { 1670 + if (req_create_or_replace(n) || 1671 + req_create_exclusive(n)) 1672 + goto create_n_graft; 1673 + else if (req_change(n)) 1674 + goto create_n_graft2; 1675 + } 1686 1676 } 1687 1677 } 1688 1678 } else { ··· 1724 1698 NL_SET_ERR_MSG(extack, "Qdisc not found. To create specify NLM_F_CREATE flag"); 1725 1699 return -ENOENT; 1726 1700 } 1701 + create_n_graft2: 1727 1702 if (clid == TC_H_INGRESS) { 1728 1703 if (dev_ingress_queue(dev)) { 1729 1704 q = qdisc_create(dev, dev_ingress_queue(dev),
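The three request classes the sch_api fix introduces are pure predicates over the netlink message flags. A standalone sketch over a bare flags word instead of a `struct nlmsghdr` (the `F_` prefix is local; the numeric values are the ones from `<linux/netlink.h>`):

```c
#include <stdbool.h>

/* Netlink request flags (values as defined in <linux/netlink.h>). */
#define F_NLM_F_REPLACE	0x100u
#define F_NLM_F_EXCL	0x200u
#define F_NLM_F_CREATE	0x400u

/* "Create it, replacing what's there" */
static bool f_create_or_replace(unsigned int flags)
{
	return (flags & F_NLM_F_CREATE) && (flags & F_NLM_F_REPLACE);
}

/* "Create it, and fail if it already exists" */
static bool f_create_exclusive(unsigned int flags)
{
	return (flags & F_NLM_F_CREATE) && (flags & F_NLM_F_EXCL);
}

/* Plain "change" request: none of the three flags set */
static bool f_change(unsigned int flags)
{
	return !(flags & (F_NLM_F_CREATE | F_NLM_F_REPLACE | F_NLM_F_EXCL));
}
```

Naming the flag combinations removes the ambiguity the old single compound condition had when the requested qdisc kind differed from the existing one.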
+2 -2
net/sctp/socket.c
··· 99 99 100 100 static void sctp_enter_memory_pressure(struct sock *sk) 101 101 { 102 - sctp_memory_pressure = 1; 102 + WRITE_ONCE(sctp_memory_pressure, 1); 103 103 } 104 104 105 105 ··· 9479 9479 newinet->inet_rcv_saddr = inet->inet_rcv_saddr; 9480 9480 newinet->inet_dport = htons(asoc->peer.port); 9481 9481 newinet->pmtudisc = inet->pmtudisc; 9482 - newinet->inet_id = get_random_u16(); 9482 + atomic_set(&newinet->inet_id, get_random_u16()); 9483 9483 9484 9484 newinet->uc_ttl = inet->uc_ttl; 9485 9485 newinet->mc_loop = 1;
+3 -1
tools/testing/selftests/drivers/net/bonding/Makefile
··· 9 9 mode-1-recovery-updelay.sh \ 10 10 mode-2-recovery-updelay.sh \ 11 11 bond_options.sh \ 12 - bond-eth-type-change.sh 12 + bond-eth-type-change.sh \ 13 + bond_macvlan.sh 13 14 14 15 TEST_FILES := \ 15 16 lag_lib.sh \ 17 + bond_topo_2d1c.sh \ 16 18 bond_topo_3d1c.sh \ 17 19 net_forwarding_lib.sh 18 20
+2 -2
tools/testing/selftests/drivers/net/bonding/bond-break-lacpdu-tx.sh
··· 57 57 58 58 # add ports 59 59 ip link set fbond master fab-br0 60 - ip link set veth1-bond down master fbond 61 - ip link set veth2-bond down master fbond 60 + ip link set veth1-bond master fbond 61 + ip link set veth2-bond master fbond 62 62 63 63 # bring up 64 64 ip link set veth1-end up
+99
tools/testing/selftests/drivers/net/bonding/bond_macvlan.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # Test macvlan over balance-alb 5 + 6 + lib_dir=$(dirname "$0") 7 + source ${lib_dir}/bond_topo_2d1c.sh 8 + 9 + m1_ns="m1-$(mktemp -u XXXXXX)" 10 + m2_ns="m2-$(mktemp -u XXXXXX)" 11 + m1_ip4="192.0.2.11" 12 + m1_ip6="2001:db8::11" 13 + m2_ip4="192.0.2.12" 14 + m2_ip6="2001:db8::12" 15 + 16 + cleanup() 17 + { 18 + ip -n ${m1_ns} link del macv0 19 + ip netns del ${m1_ns} 20 + ip -n ${m2_ns} link del macv0 21 + ip netns del ${m2_ns} 22 + 23 + client_destroy 24 + server_destroy 25 + gateway_destroy 26 + } 27 + 28 + check_connection() 29 + { 30 + local ns=${1} 31 + local target=${2} 32 + local message=${3:-"macvlan_over_bond"} 33 + RET=0 34 + 35 + 36 + ip netns exec ${ns} ping ${target} -c 4 -i 0.1 &>/dev/null 37 + check_err $? "ping failed" 38 + log_test "$mode: $message" 39 + } 40 + 41 + macvlan_over_bond() 42 + { 43 + local param="$1" 44 + RET=0 45 + 46 + # setup new bond mode 47 + bond_reset "${param}" 48 + 49 + ip -n ${s_ns} link add link bond0 name macv0 type macvlan mode bridge 50 + ip -n ${s_ns} link set macv0 netns ${m1_ns} 51 + ip -n ${m1_ns} link set dev macv0 up 52 + ip -n ${m1_ns} addr add ${m1_ip4}/24 dev macv0 53 + ip -n ${m1_ns} addr add ${m1_ip6}/24 dev macv0 54 + 55 + ip -n ${s_ns} link add link bond0 name macv0 type macvlan mode bridge 56 + ip -n ${s_ns} link set macv0 netns ${m2_ns} 57 + ip -n ${m2_ns} link set dev macv0 up 58 + ip -n ${m2_ns} addr add ${m2_ip4}/24 dev macv0 59 + ip -n ${m2_ns} addr add ${m2_ip6}/24 dev macv0 60 + 61 + sleep 2 62 + 63 + check_connection "${c_ns}" "${s_ip4}" "IPv4: client->server" 64 + check_connection "${c_ns}" "${s_ip6}" "IPv6: client->server" 65 + check_connection "${c_ns}" "${m1_ip4}" "IPv4: client->macvlan_1" 66 + check_connection "${c_ns}" "${m1_ip6}" "IPv6: client->macvlan_1" 67 + check_connection "${c_ns}" "${m2_ip4}" "IPv4: client->macvlan_2" 68 + check_connection "${c_ns}" "${m2_ip6}" "IPv6: client->macvlan_2" 69 + check_connection "${m1_ns}" "${m2_ip4}" "IPv4: macvlan_1->macvlan_2" 70 + check_connection "${m1_ns}" "${m2_ip6}" "IPv6: macvlan_1->macvlan_2" 71 + 72 + 73 + sleep 5 74 + 75 + check_connection "${s_ns}" "${c_ip4}" "IPv4: server->client" 76 + check_connection "${s_ns}" "${c_ip6}" "IPv6: server->client" 77 + check_connection "${m1_ns}" "${c_ip4}" "IPv4: macvlan_1->client" 78 + check_connection "${m1_ns}" "${c_ip6}" "IPv6: macvlan_1->client" 79 + check_connection "${m2_ns}" "${c_ip4}" "IPv4: macvlan_2->client" 80 + check_connection "${m2_ns}" "${c_ip6}" "IPv6: macvlan_2->client" 81 + check_connection "${m2_ns}" "${m1_ip4}" "IPv4: macvlan_2->macvlan_1" 82 + check_connection "${m2_ns}" "${m1_ip6}" "IPv6: macvlan_2->macvlan_1" 83 + 84 + ip -n ${c_ns} neigh flush dev eth0 85 + } 86 + 87 + trap cleanup EXIT 88 + 89 + setup_prepare 90 + ip netns add ${m1_ns} 91 + ip netns add ${m2_ns} 92 + 93 + modes="active-backup balance-tlb balance-alb" 94 + 95 + for mode in $modes; do 96 + macvlan_over_bond "mode $mode" 97 + done 98 + 99 + exit $EXIT_STATUS
-3
tools/testing/selftests/drivers/net/bonding/bond_options.sh
··· 9 9 num_grat_arp 10 10 " 11 11 12 - REQUIRE_MZ=no 13 - NUM_NETIFS=0 14 12 lib_dir=$(dirname "$0") 15 - source ${lib_dir}/net_forwarding_lib.sh 16 13 source ${lib_dir}/bond_topo_3d1c.sh 17 14 18 15 skip_prio()
+158
tools/testing/selftests/drivers/net/bonding/bond_topo_2d1c.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # Topology for Bond mode 1,5,6 testing 5 + # 6 + # +-------------------------+ 7 + # | bond0 | Server 8 + # | + | 192.0.2.1/24 9 + # | eth0 | eth1 | 2001:db8::1/24 10 + # | +---+---+ | 11 + # | | | | 12 + # +-------------------------+ 13 + # | | 14 + # +-------------------------+ 15 + # | | | | 16 + # | +---+-------+---+ | Gateway 17 + # | | br0 | | 192.0.2.254/24 18 + # | +-------+-------+ | 2001:db8::254/24 19 + # | | | 20 + # +-------------------------+ 21 + # | 22 + # +-------------------------+ 23 + # | | | Client 24 + # | + | 192.0.2.10/24 25 + # | eth0 | 2001:db8::10/24 26 + # +-------------------------+ 27 + 28 + REQUIRE_MZ=no 29 + NUM_NETIFS=0 30 + lib_dir=$(dirname "$0") 31 + source ${lib_dir}/net_forwarding_lib.sh 32 + 33 + s_ns="s-$(mktemp -u XXXXXX)" 34 + c_ns="c-$(mktemp -u XXXXXX)" 35 + g_ns="g-$(mktemp -u XXXXXX)" 36 + s_ip4="192.0.2.1" 37 + c_ip4="192.0.2.10" 38 + g_ip4="192.0.2.254" 39 + s_ip6="2001:db8::1" 40 + c_ip6="2001:db8::10" 41 + g_ip6="2001:db8::254" 42 + 43 + gateway_create() 44 + { 45 + ip netns add ${g_ns} 46 + ip -n ${g_ns} link add br0 type bridge 47 + ip -n ${g_ns} link set br0 up 48 + ip -n ${g_ns} addr add ${g_ip4}/24 dev br0 49 + ip -n ${g_ns} addr add ${g_ip6}/24 dev br0 50 + } 51 + 52 + gateway_destroy() 53 + { 54 + ip -n ${g_ns} link del br0 55 + ip netns del ${g_ns} 56 + } 57 + 58 + server_create() 59 + { 60 + ip netns add ${s_ns} 61 + ip -n ${s_ns} link add bond0 type bond mode active-backup miimon 100 62 + 63 + for i in $(seq 0 1); do 64 + ip -n ${s_ns} link add eth${i} type veth peer name s${i} netns ${g_ns} 65 + 66 + ip -n ${g_ns} link set s${i} up 67 + ip -n ${g_ns} link set s${i} master br0 68 + ip -n ${s_ns} link set eth${i} master bond0 69 + 70 + tc -n ${g_ns} qdisc add dev s${i} clsact 71 + done 72 + 73 + ip -n ${s_ns} link set bond0 up 74 + ip -n ${s_ns} addr add ${s_ip4}/24 dev bond0 75 + ip -n ${s_ns} addr add ${s_ip6}/24 dev bond0 76 + sleep 2 77 + } 78 + 79 + # Reset bond with new mode and options 80 + bond_reset() 81 + { 82 + # Count the eth link number in real-time as this function 83 + # maybe called from other topologies. 84 + local link_num=$(ip -n ${s_ns} -br link show | grep -c "^eth") 85 + local param="$1" 86 + link_num=$((link_num -1)) 87 + 88 + ip -n ${s_ns} link set bond0 down 89 + ip -n ${s_ns} link del bond0 90 + 91 + ip -n ${s_ns} link add bond0 type bond $param 92 + for i in $(seq 0 ${link_num}); do 93 + ip -n ${s_ns} link set eth$i master bond0 94 + done 95 + 96 + ip -n ${s_ns} link set bond0 up 97 + ip -n ${s_ns} addr add ${s_ip4}/24 dev bond0 98 + ip -n ${s_ns} addr add ${s_ip6}/24 dev bond0 99 + sleep 2 100 + } 101 + 102 + server_destroy() 103 + { 104 + # Count the eth link number in real-time as this function 105 + # maybe called from other topologies. 106 + local link_num=$(ip -n ${s_ns} -br link show | grep -c "^eth") 107 + link_num=$((link_num -1)) 108 + for i in $(seq 0 ${link_num}); do 109 + ip -n ${s_ns} link del eth${i} 110 + done 111 + ip netns del ${s_ns} 112 + } 113 + 114 + client_create() 115 + { 116 + ip netns add ${c_ns} 117 + ip -n ${c_ns} link add eth0 type veth peer name c0 netns ${g_ns} 118 + 119 + ip -n ${g_ns} link set c0 up 120 + ip -n ${g_ns} link set c0 master br0 121 + 122 + ip -n ${c_ns} link set eth0 up 123 + ip -n ${c_ns} addr add ${c_ip4}/24 dev eth0 124 + ip -n ${c_ns} addr add ${c_ip6}/24 dev eth0 125 + } 126 + 127 + client_destroy() 128 + { 129 + ip -n ${c_ns} link del eth0 130 + ip netns del ${c_ns} 131 + } 132 + 133 + setup_prepare() 134 + { 135 + gateway_create 136 + server_create 137 + client_create 138 + } 139 + 140 + cleanup() 141 + { 142 + pre_cleanup 143 + 144 + client_destroy 145 + server_destroy 146 + gateway_destroy 147 + } 148 + 149 + bond_check_connection() 150 + { 151 + local msg=${1:-"check connection"} 152 + 153 + sleep 2 154 + ip netns exec ${s_ns} ping ${c_ip4} -c5 -i 0.1 &>/dev/null 155 + check_err $? "${msg}: ping failed" 156 + ip netns exec ${s_ns} ping6 ${c_ip6} -c5 -i 0.1 &>/dev/null 157 + check_err $? "${msg}: ping6 failed" 158 + }
+8 -110
tools/testing/selftests/drivers/net/bonding/bond_topo_3d1c.sh
···
 # |  eth0  | 2001:db8::10/24
 # +-------------------------------------+

-s_ns="s-$(mktemp -u XXXXXX)"
-c_ns="c-$(mktemp -u XXXXXX)"
-g_ns="g-$(mktemp -u XXXXXX)"
-s_ip4="192.0.2.1"
-c_ip4="192.0.2.10"
-g_ip4="192.0.2.254"
-s_ip6="2001:db8::1"
-c_ip6="2001:db8::10"
-g_ip6="2001:db8::254"
-
-gateway_create()
-{
-	ip netns add ${g_ns}
-	ip -n ${g_ns} link add br0 type bridge
-	ip -n ${g_ns} link set br0 up
-	ip -n ${g_ns} addr add ${g_ip4}/24 dev br0
-	ip -n ${g_ns} addr add ${g_ip6}/24 dev br0
-}
-
-gateway_destroy()
-{
-	ip -n ${g_ns} link del br0
-	ip netns del ${g_ns}
-}
-
-server_create()
-{
-	ip netns add ${s_ns}
-	ip -n ${s_ns} link add bond0 type bond mode active-backup miimon 100
-
-	for i in $(seq 0 2); do
-		ip -n ${s_ns} link add eth${i} type veth peer name s${i} netns ${g_ns}
-
-		ip -n ${g_ns} link set s${i} up
-		ip -n ${g_ns} link set s${i} master br0
-		ip -n ${s_ns} link set eth${i} master bond0
-
-		tc -n ${g_ns} qdisc add dev s${i} clsact
-	done
-
-	ip -n ${s_ns} link set bond0 up
-	ip -n ${s_ns} addr add ${s_ip4}/24 dev bond0
-	ip -n ${s_ns} addr add ${s_ip6}/24 dev bond0
-	sleep 2
-}
-
-# Reset bond with new mode and options
-bond_reset()
-{
-	local param="$1"
-
-	ip -n ${s_ns} link set bond0 down
-	ip -n ${s_ns} link del bond0
-
-	ip -n ${s_ns} link add bond0 type bond $param
-	for i in $(seq 0 2); do
-		ip -n ${s_ns} link set eth$i master bond0
-	done
-
-	ip -n ${s_ns} link set bond0 up
-	ip -n ${s_ns} addr add ${s_ip4}/24 dev bond0
-	ip -n ${s_ns} addr add ${s_ip6}/24 dev bond0
-	sleep 2
-}
-
-server_destroy()
-{
-	for i in $(seq 0 2); do
-		ip -n ${s_ns} link del eth${i}
-	done
-	ip netns del ${s_ns}
-}
-
-client_create()
-{
-	ip netns add ${c_ns}
-	ip -n ${c_ns} link add eth0 type veth peer name c0 netns ${g_ns}
-
-	ip -n ${g_ns} link set c0 up
-	ip -n ${g_ns} link set c0 master br0
-
-	ip -n ${c_ns} link set eth0 up
-	ip -n ${c_ns} addr add ${c_ip4}/24 dev eth0
-	ip -n ${c_ns} addr add ${c_ip6}/24 dev eth0
-}
-
-client_destroy()
-{
-	ip -n ${c_ns} link del eth0
-	ip netns del ${c_ns}
-}
+source bond_topo_2d1c.sh

 setup_prepare()
 {
 	gateway_create
 	server_create
 	client_create
-}

-cleanup()
-{
-	pre_cleanup
-
-	client_destroy
-	server_destroy
-	gateway_destroy
-}
-
-bond_check_connection()
-{
-	local msg=${1:-"check connection"}
-
-	sleep 2
-	ip netns exec ${s_ns} ping ${c_ip4} -c5 -i 0.1 &>/dev/null
-	check_err $? "${msg}: ping failed"
-	ip netns exec ${s_ns} ping6 ${c_ip6} -c5 -i 0.1 &>/dev/null
-	check_err $? "${msg}: ping6 failed"
+	# Add the extra device as we use 3 down links for bond0
+	local i=2
+	ip -n ${s_ns} link add eth${i} type veth peer name s${i} netns ${g_ns}
+	ip -n ${g_ns} link set s${i} up
+	ip -n ${g_ns} link set s${i} master br0
+	ip -n ${s_ns} link set eth${i} master bond0
+	tc -n ${g_ns} qdisc add dev s${i} clsact
 }
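The deleted prologue (now provided by bond_topo_2d1c.sh) names each namespace with a random suffix so parallel test runs cannot collide. The naming trick is worth noting: `mktemp -u` only generates a unique name, it never creates a file. A minimal sketch of the pattern, runnable without root:

```shell
# Generate collision-resistant namespace names, mirroring the
# s-XXXXXX / c-XXXXXX / g-XXXXXX scheme used by the topo scripts.
# mktemp -u prints a name but does not create anything on disk.
s_ns="s-$(mktemp -u XXXXXX)"
c_ns="c-$(mktemp -u XXXXXX)"
g_ns="g-$(mktemp -u XXXXXX)"
echo "server=${s_ns} client=${c_ns} gateway=${g_ns}"
```

The actual `ip netns add ${s_ns}` calls then operate on these names, so two concurrent invocations of the selftest build disjoint topologies.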
tools/testing/selftests/drivers/net/mlxsw/sharedbuffer.sh (+6, -10)
···
 port_pool_test()
 {
-	local exp_max_occ=288
+	local exp_max_occ=$(devlink_cell_size_get)
 	local max_occ

 	devlink sb occupancy clearmax $DEVLINK_DEV

-	$MZ $h1 -c 1 -p 160 -a $h1mac -b $h2mac -A 192.0.1.1 -B 192.0.1.2 \
+	$MZ $h1 -c 1 -p 10 -a $h1mac -b $h2mac -A 192.0.1.1 -B 192.0.1.2 \
 		-t ip -q

 	devlink sb occupancy snapshot $DEVLINK_DEV
···
 port_tc_ip_test()
 {
-	local exp_max_occ=288
+	local exp_max_occ=$(devlink_cell_size_get)
 	local max_occ

 	devlink sb occupancy clearmax $DEVLINK_DEV

-	$MZ $h1 -c 1 -p 160 -a $h1mac -b $h2mac -A 192.0.1.1 -B 192.0.1.2 \
+	$MZ $h1 -c 1 -p 10 -a $h1mac -b $h2mac -A 192.0.1.1 -B 192.0.1.2 \
 		-t ip -q

 	devlink sb occupancy snapshot $DEVLINK_DEV
···
 port_tc_arp_test()
 {
-	local exp_max_occ=96
+	local exp_max_occ=$(devlink_cell_size_get)
 	local max_occ
-
-	if [[ $MLXSW_CHIP != "mlxsw_spectrum" ]]; then
-		exp_max_occ=144
-	fi

 	devlink sb occupancy clearmax $DEVLINK_DEV

-	$MZ $h1 -c 1 -p 160 -a $h1mac -A 192.0.1.1 -t arp -q
+	$MZ $h1 -c 1 -p 10 -a $h1mac -A 192.0.1.1 -t arp -q

 	devlink sb occupancy snapshot $DEVLINK_DEV
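The change replaces hard-coded, chip-specific byte counts with the cell size queried from the device, and shrinks the test packet so it always fits in a single buffer cell. The underlying arithmetic is that a packet occupies whole cells, so its buffer usage is its size rounded up to a cell multiple. A sketch of that rounding (`occ_cells` is an illustrative helper, not part of the selftest library):

```shell
# Illustrative only: round a packet's buffer usage up to whole cells.
# A device with N-byte cells charges ceil(pkt/N) * N bytes of occupancy.
occ_cells() {
	local pkt_size=$1 cell_size=$2
	echo $(( (pkt_size + cell_size - 1) / cell_size * cell_size ))
}

occ_cells 160 96   # prints 192: the old 160-byte packet spans two 96-byte cells
occ_cells 10 144   # prints 144: a 10-byte packet always fits in one cell
```

This is why a 10-byte payload plus one cell-size expectation works across chips with different cell sizes, while the old fixed 288/96/144 values did not.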
tools/testing/selftests/net/.gitignore (+2)
···
 ipsec
 ipv6_flowlabel
 ipv6_flowlabel_mgr
+log.txt
 msg_zerocopy
 nettest
 psock_fanout
···
 timestamping
 tls
 toeplitz
+tools
 tun
 txring_overwrite
 txtimestamp
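The two new entries keep generated test artifacts out of `git status`. Note that the bare `tools` pattern (no trailing slash) also ignores everything under a `tools` directory. The effect can be sanity-checked in a scratch repository (the file names below are illustrative):

```shell
# Throwaway repo to confirm the two new .gitignore patterns are honored.
repo=$(mktemp -d)
git init -q "$repo"
printf 'log.txt\ntools\n' > "$repo/.gitignore"

# check-ignore exits 0 when the path is ignored.
git -C "$repo" check-ignore -q log.txt       && echo "log.txt ignored"
git -C "$repo" check-ignore -q tools/nettest && echo "tools/ contents ignored"

rm -rf "$repo"
```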