Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git


Merge tag 'net-6.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from bpf and netfilter.

A lot of networking people were at a conference last week, busy
catching COVID, so relatively short PR.

Current release - regressions:

- tcp: process the 3rd ACK with sk_socket for TFO and MPTCP

Current release - new code bugs:

- l2tp: protect session IDR and tunnel session list with one lock,
make sure the state is coherent to avoid a warning

- eth: bnxt_en: update xdp_rxq_info in queue restart logic

- eth: airoha: fix location of the MBI_RX_AGE_SEL_MASK field

Previous releases - regressions:

- xsk: require XDP_UMEM_TX_METADATA_LEN to actuate tx_metadata_len,
the field reuses previously un-validated pad

Previous releases - always broken:

- tap/tun: drop short frames to prevent crashes later in the stack

- eth: ice: add a per-VF limit on number of FDIR filters

- af_unix: disable MSG_OOB handling for sockets in sockmap/sockhash"

* tag 'net-6.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (34 commits)
tun: add missing verification for short frame
tap: add missing verification for short frame
mISDN: Fix a use after free in hfcmulti_tx()
gve: Fix an edge case for TSO skb validity check
bnxt_en: update xdp_rxq_info in queue restart logic
tcp: process the 3rd ACK with sk_socket for TFO/MPTCP
selftests/bpf: Add XDP_UMEM_TX_METADATA_LEN to XSK TX metadata test
xsk: Require XDP_UMEM_TX_METADATA_LEN to actuate tx_metadata_len
bpf: Fix a segment issue when downgrading gso_size
net: mediatek: Fix potential NULL pointer dereference in dummy net_device handling
MAINTAINERS: make Breno the netconsole maintainer
MAINTAINERS: Update bonding entry
net: nexthop: Initialize all fields in dumped nexthops
net: stmmac: Correct byte order of perfect_match
selftests: forwarding: skip if kernel not support setting bridge fdb learning limit
tipc: Return non-zero value from tipc_udp_addr2str() on error
netfilter: nft_set_pipapo_avx2: disable softinterrupts
ice: Fix recipe read procedure
ice: Add a per-VF limit on number of FDIR filters
net: bonding: correctly annotate RCU in bond_should_notify_peers()
...

+319 -122
+2
.mailmap
```diff
 Naoya Horiguchi <nao.horiguchi@gmail.com> <n-horiguchi@ah.jp.nec.com>
 Naoya Horiguchi <nao.horiguchi@gmail.com> <naoya.horiguchi@nec.com>
 Nathan Chancellor <nathan@kernel.org> <natechancellor@gmail.com>
+Naveen N Rao <naveen@kernel.org> <naveen.n.rao@linux.ibm.com>
+Naveen N Rao <naveen@kernel.org> <naveen.n.rao@linux.vnet.ibm.com>
 Neeraj Upadhyay <neeraj.upadhyay@kernel.org> <quic_neeraju@quicinc.com>
 Neeraj Upadhyay <neeraj.upadhyay@kernel.org> <neeraju@codeaurora.org>
 Neil Armstrong <neil.armstrong@linaro.org> <narmstrong@baylibre.com>
```
+10 -6
Documentation/networking/xsk-tx-metadata.rst
```diff
 General Design
 ==============
 
-The headroom for the metadata is reserved via ``tx_metadata_len`` in
-``struct xdp_umem_reg``. The metadata length is therefore the same for
-every socket that shares the same umem. The metadata layout is a fixed UAPI,
-refer to ``union xsk_tx_metadata`` in ``include/uapi/linux/if_xdp.h``.
-Thus, generally, the ``tx_metadata_len`` field above should contain
-``sizeof(union xsk_tx_metadata)``.
+The headroom for the metadata is reserved via ``tx_metadata_len`` and
+``XDP_UMEM_TX_METADATA_LEN`` flag in ``struct xdp_umem_reg``. The metadata
+length is therefore the same for every socket that shares the same umem.
+The metadata layout is a fixed UAPI, refer to ``union xsk_tx_metadata`` in
+``include/uapi/linux/if_xdp.h``. Thus, generally, the ``tx_metadata_len``
+field above should contain ``sizeof(union xsk_tx_metadata)``.
+
+Note that in the original implementation the ``XDP_UMEM_TX_METADATA_LEN``
+flag was not required. Applications might attempt to create a umem
+with a flag first and if it fails, do another attempt without a flag.
 
 The headroom and the metadata itself should be located right before
 ``xdp_desc->addr`` in the umem frame. Within a frame, the metadata
```
+13 -6
MAINTAINERS
```diff
 F:	net/bluetooth/
 
 BONDING DRIVER
-M:	Jay Vosburgh <j.vosburgh@gmail.com>
+M:	Jay Vosburgh <jv@jvosburgh.net>
 M:	Andy Gospodarek <andy@greyhouse.net>
 L:	netdev@vger.kernel.org
-S:	Supported
-W:	http://sourceforge.net/projects/bonding/
+S:	Maintained
 F:	Documentation/networking/bonding.rst
 F:	drivers/net/bonding/
 F:	include/net/bond*
@@ ... @@
 F:	drivers/net/ethernet/netronome/nfp/bpf/
 
 BPF JIT for POWERPC (32-BIT AND 64-BIT)
-M:	Naveen N. Rao <naveen.n.rao@linux.ibm.com>
 M:	Michael Ellerman <mpe@ellerman.id.au>
+M:	Hari Bathini <hbathini@linux.ibm.com>
+M:	Christophe Leroy <christophe.leroy@csgroup.eu>
+R:	Naveen N Rao <naveen@kernel.org>
 L:	bpf@vger.kernel.org
 S:	Supported
 F:	arch/powerpc/net/
@@ ... @@
 F:	scripts/Makefile.kmsan
 
 KPROBES
-M:	Naveen N. Rao <naveen.n.rao@linux.ibm.com>
+M:	Naveen N Rao <naveen@kernel.org>
 M:	Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
 M:	"David S. Miller" <davem@davemloft.net>
 M:	Masami Hiramatsu <mhiramat@kernel.org>
@@ ... @@
 M:	Michael Ellerman <mpe@ellerman.id.au>
 R:	Nicholas Piggin <npiggin@gmail.com>
 R:	Christophe Leroy <christophe.leroy@csgroup.eu>
-R:	Naveen N. Rao <naveen.n.rao@linux.ibm.com>
+R:	Naveen N Rao <naveen@kernel.org>
 L:	linuxppc-dev@lists.ozlabs.org
 S:	Supported
 W:	https://github.com/linuxppc/wiki/wiki
@@ ... @@
 S:	Maintained
 F:	Documentation/devicetree/bindings/hwmon/nuvoton,nct6775.yaml
 F:	drivers/hwmon/nct6775-i2c.c
+
+NETCONSOLE
+M:	Breno Leitao <leitao@debian.org>
+S:	Maintained
+F:	Documentation/networking/netconsole.rst
+F:	drivers/net/netconsole.c
 
 NETDEVSIM
 M:	Jakub Kicinski <kuba@kernel.org>
```
+4 -3
drivers/isdn/hardware/mISDN/hfcmulti.c
```diff
 static void
 hfcmulti_tx(struct hfc_multi *hc, int ch)
 {
-	int i, ii, temp, len = 0;
+	int i, ii, temp, tmp_len, len = 0;
 	int Zspace, z1, z2;	/* must be int for calculation */
 	int Fspace, f1, f2;
 	u_char *d;
@@ ... @@
 		HFC_wait_nodebug(hc);
 	}
 
+	tmp_len = (*sp)->len;
 	dev_kfree_skb(*sp);
 	/* check for next frame */
 	if (bch && get_next_bframe(bch)) {
-		len = (*sp)->len;
+		len = tmp_len;
 		goto next_frame;
 	}
 	if (dch && get_next_dframe(dch)) {
-		len = (*sp)->len;
+		len = tmp_len;
 		goto next_frame;
 	}
```
+2 -5
drivers/net/bonding/bond_main.c
```diff
 	return bestslave;
 }
 
+/* must be called in RCU critical section or with RTNL held */
 static bool bond_should_notify_peers(struct bonding *bond)
 {
-	struct slave *slave;
-
-	rcu_read_lock();
-	slave = rcu_dereference(bond->curr_active_slave);
-	rcu_read_unlock();
+	struct slave *slave = rcu_dereference_rtnl(bond->curr_active_slave);
 
 	if (!slave || !bond->send_peer_notif ||
 	    bond->send_peer_notif %
```
+17
drivers/net/ethernet/broadcom/bnxt/bnxt.c
```diff
 
 	rxr->page_pool->p.napi = NULL;
 	rxr->page_pool = NULL;
+	memset(&rxr->xdp_rxq, 0, sizeof(struct xdp_rxq_info));
 
 	ring = &rxr->rx_ring_struct;
 	rmem = &ring->ring_mem;
@@ ... @@
 	if (rc)
 		return rc;
 
+	rc = xdp_rxq_info_reg(&clone->xdp_rxq, bp->dev, idx, 0);
+	if (rc < 0)
+		goto err_page_pool_destroy;
+
+	rc = xdp_rxq_info_reg_mem_model(&clone->xdp_rxq,
+					MEM_TYPE_PAGE_POOL,
+					clone->page_pool);
+	if (rc)
+		goto err_rxq_info_unreg;
+
 	ring = &clone->rx_ring_struct;
 	rc = bnxt_alloc_ring(bp, &ring->ring_mem);
 	if (rc)
@@ ... @@
 	bnxt_free_ring(bp, &clone->rx_agg_ring_struct.ring_mem);
 err_free_rx_ring:
 	bnxt_free_ring(bp, &clone->rx_ring_struct.ring_mem);
+err_rxq_info_unreg:
+	xdp_rxq_info_unreg(&clone->xdp_rxq);
+err_page_pool_destroy:
 	clone->page_pool->p.napi = NULL;
 	page_pool_destroy(clone->page_pool);
 	clone->page_pool = NULL;
@@ ... @@
 
 	bnxt_free_one_rx_ring(bp, rxr);
 	bnxt_free_one_rx_agg_ring(bp, rxr);
+
+	xdp_rxq_info_unreg(&rxr->xdp_rxq);
 
 	page_pool_destroy(rxr->page_pool);
 	rxr->page_pool = NULL;
@@ ... @@
 	rxr->rx_sw_agg_prod = clone->rx_sw_agg_prod;
 	rxr->rx_next_cons = clone->rx_next_cons;
 	rxr->page_pool = clone->page_pool;
+	rxr->xdp_rxq = clone->xdp_rxq;
 
 	bnxt_copy_rx_ring(bp, rxr, clone);
 
```
+21 -1
drivers/net/ethernet/google/gve/gve_tx_dqo.c
```diff
 	const int header_len = skb_tcp_all_headers(skb);
 	const int gso_size = shinfo->gso_size;
 	int cur_seg_num_bufs;
+	int prev_frag_size;
 	int cur_seg_size;
 	int i;
 
 	cur_seg_size = skb_headlen(skb) - header_len;
+	prev_frag_size = skb_headlen(skb);
 	cur_seg_num_bufs = cur_seg_size > 0;
 
 	for (i = 0; i < shinfo->nr_frags; i++) {
 		if (cur_seg_size >= gso_size) {
 			cur_seg_size %= gso_size;
 			cur_seg_num_bufs = cur_seg_size > 0;
+
+			if (prev_frag_size > GVE_TX_MAX_BUF_SIZE_DQO) {
+				int prev_frag_remain = prev_frag_size %
+					GVE_TX_MAX_BUF_SIZE_DQO;
+
+				/* If the last descriptor of the previous frag
+				 * is less than cur_seg_size, the segment will
+				 * span two descriptors in the previous frag.
+				 * Since max gso size (9728) is less than
+				 * GVE_TX_MAX_BUF_SIZE_DQO, it is impossible
+				 * for the segment to span more than two
+				 * descriptors.
+				 */
+				if (prev_frag_remain &&
+				    cur_seg_size > prev_frag_remain)
+					cur_seg_num_bufs++;
+			}
 		}
 
 		if (unlikely(++cur_seg_num_bufs > max_bufs_per_seg))
 			return false;
 
-		cur_seg_size += skb_frag_size(&shinfo->frags[i]);
+		prev_frag_size = skb_frag_size(&shinfo->frags[i]);
+		cur_seg_size += prev_frag_size;
 	}
 
 	return true;
```
+1 -1
drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
```diff
  *
  * Returns the number of available flow director filters to this VSI
  */
-static int ice_fdir_num_avail_fltr(struct ice_hw *hw, struct ice_vsi *vsi)
+int ice_fdir_num_avail_fltr(struct ice_hw *hw, struct ice_vsi *vsi)
 {
 	u16 vsi_num = ice_get_hw_vsi_num(hw, vsi->idx);
 	u16 num_guar;
```
+3
drivers/net/ethernet/intel/ice/ice_fdir.h
```diff
 	const u8 *tun_pkt;
 };
 
+struct ice_vsi;
+
 int ice_alloc_fd_res_cntr(struct ice_hw *hw, u16 *cntr_id);
 int ice_free_fd_res_cntr(struct ice_hw *hw, u16 cntr_id);
 int ice_alloc_fd_guar_item(struct ice_hw *hw, u16 *cntr_id, u16 num_fltr);
@@ ... @@
 ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input,
 			  u8 *pkt, bool frag, bool tun);
 int ice_get_fdir_cnt_all(struct ice_hw *hw);
+int ice_fdir_num_avail_fltr(struct ice_hw *hw, struct ice_vsi *vsi);
 bool ice_fdir_is_dup_fltr(struct ice_hw *hw, struct ice_fdir_fltr *input);
 bool ice_fdir_has_frag(enum ice_fltr_ptype flow);
 struct ice_fdir_fltr *
```
+4 -4
drivers/net/ethernet/intel/ice/ice_switch.c
```diff
 
 	/* Propagate some data to the recipe database */
 	recps[idx].priority = root_bufs.content.act_ctrl_fwd_priority;
-	recps[idx].need_pass_l2 = root_bufs.content.act_ctrl &
-				  ICE_AQ_RECIPE_ACT_NEED_PASS_L2;
-	recps[idx].allow_pass_l2 = root_bufs.content.act_ctrl &
-				   ICE_AQ_RECIPE_ACT_ALLOW_PASS_L2;
+	recps[idx].need_pass_l2 = !!(root_bufs.content.act_ctrl &
+				     ICE_AQ_RECIPE_ACT_NEED_PASS_L2);
+	recps[idx].allow_pass_l2 = !!(root_bufs.content.act_ctrl &
+				      ICE_AQ_RECIPE_ACT_ALLOW_PASS_L2);
 	bitmap_zero(recps[idx].res_idxs, ICE_MAX_FV_WORDS);
 	if (root_bufs.content.result_indx & ICE_AQ_RECIPE_RESULT_EN) {
 		set_bit(root_bufs.content.result_indx &
```
+16
drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
```diff
 		fdir->fdir_fltr_cnt[flow][0] = 0;
 		fdir->fdir_fltr_cnt[flow][1] = 0;
 	}
+
+	fdir->fdir_fltr_cnt_total = 0;
 }
 
 /**
@@ ... @@
 	resp->status = status;
 	resp->flow_id = conf->flow_id;
 	vf->fdir.fdir_fltr_cnt[conf->input.flow_type][is_tun]++;
+	vf->fdir.fdir_fltr_cnt_total++;
 
 	ret = ice_vc_send_msg_to_vf(vf, ctx->v_opcode, v_ret,
 				    (u8 *)resp, len);
@@ ... @@
 	resp->status = status;
 	ice_vc_fdir_remove_entry(vf, conf, conf->flow_id);
 	vf->fdir.fdir_fltr_cnt[conf->input.flow_type][is_tun]--;
+	vf->fdir.fdir_fltr_cnt_total--;
 
 	ret = ice_vc_send_msg_to_vf(vf, ctx->v_opcode, v_ret,
 				    (u8 *)resp, len);
@@ ... @@
 	struct virtchnl_fdir_add *stat = NULL;
 	struct virtchnl_fdir_fltr_conf *conf;
 	enum virtchnl_status_code v_ret;
+	struct ice_vsi *vf_vsi;
 	struct device *dev;
 	struct ice_pf *pf;
 	int is_tun = 0;
@@ ... @@
 
 	pf = vf->pf;
 	dev = ice_pf_to_dev(pf);
+	vf_vsi = ice_get_vf_vsi(vf);
+
+#define ICE_VF_MAX_FDIR_FILTERS	128
+	if (!ice_fdir_num_avail_fltr(&pf->hw, vf_vsi) ||
+	    vf->fdir.fdir_fltr_cnt_total >= ICE_VF_MAX_FDIR_FILTERS) {
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+		dev_err(dev, "Max number of FDIR filters for VF %d is reached\n",
+			vf->vf_id);
+		goto err_exit;
+	}
+
 	ret = ice_vc_fdir_param_check(vf, fltr->vsi_id);
 	if (ret) {
 		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
```
+1
drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.h
```diff
 struct ice_vf_fdir {
 	u16 fdir_fltr_cnt[ICE_FLTR_PTYPE_MAX][ICE_FD_HW_SEG_MAX];
 	int prof_entry_cnt[ICE_FLTR_PTYPE_MAX][ICE_FD_HW_SEG_MAX];
+	u16 fdir_fltr_cnt_total;
 	struct ice_fd_hw_prof **fdir_prof;
 
 	struct idr fdir_rule_idr;
```
+1 -1
drivers/net/ethernet/mediatek/airoha_eth.c
```diff
 #define REG_FE_GDM_RX_ETH_L1023_CNT_H(_n)	(GDM_BASE(_n) + 0x2fc)
 
 #define REG_GDM2_CHN_RLS		(GDM2_BASE + 0x20)
-#define MBI_RX_AGE_SEL_MASK		GENMASK(18, 17)
+#define MBI_RX_AGE_SEL_MASK		GENMASK(26, 25)
 #define MBI_TX_AGE_SEL_MASK		GENMASK(18, 17)
 
 #define REG_GDM3_FWD_CFG		GDM3_BASE
```
+1 -2
drivers/net/ethernet/mediatek/mtk_eth_soc.c
```diff
 			metadata_dst_free(eth->dsa_meta[i]);
 	}
 
-	free_netdev(eth->dummy_dev);
-
 	return 0;
 }
 
@@ ... @@
 	netif_napi_del(&eth->tx_napi);
 	netif_napi_del(&eth->rx_napi);
 	mtk_cleanup(eth);
+	free_netdev(eth->dummy_dev);
 	mtk_mdio_cleanup(eth);
 }
```
+1 -1
drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
```diff
 }
 
 static void dwmac4_update_vlan_hash(struct mac_device_info *hw, u32 hash,
-				    __le16 perfect_match, bool is_double)
+				    u16 perfect_match, bool is_double)
 {
 	void __iomem *ioaddr = hw->pcsr;
 	u32 value;
```
+1 -1
drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
```diff
 }
 
 static void dwxgmac2_update_vlan_hash(struct mac_device_info *hw, u32 hash,
-				      __le16 perfect_match, bool is_double)
+				      u16 perfect_match, bool is_double)
 {
 	void __iomem *ioaddr = hw->pcsr;
 
```
+1 -1
drivers/net/ethernet/stmicro/stmmac/hwif.h
```diff
 			struct stmmac_rss *cfg, u32 num_rxq);
 	/* VLAN */
 	void (*update_vlan_hash)(struct mac_device_info *hw, u32 hash,
-				 __le16 perfect_match, bool is_double);
+				 u16 perfect_match, bool is_double);
 	void (*enable_vlan)(struct mac_device_info *hw, u32 type);
 	void (*rx_hw_vlan)(struct mac_device_info *hw, struct dma_desc *rx_desc,
 			   struct sk_buff *skb);
```
+2 -2
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
```diff
 static int stmmac_vlan_update(struct stmmac_priv *priv, bool is_double)
 {
 	u32 crc, hash = 0;
-	__le16 pmatch = 0;
+	u16 pmatch = 0;
 	int count = 0;
 	u16 vid = 0;
@@ ... @@
 		if (count > 2) /* VID = 0 always passes filter */
 			return -EOPNOTSUPP;
 
-		pmatch = cpu_to_le16(vid);
+		pmatch = vid;
 		hash = 0;
 	}
```
+5
drivers/net/tap.c
```diff
 	struct sk_buff *skb;
 	int err, depth;
 
+	if (unlikely(xdp->data_end - xdp->data < ETH_HLEN)) {
+		err = -EINVAL;
+		goto err;
+	}
+
 	if (q->flags & IFF_VNET_HDR)
 		vnet_hdr_len = READ_ONCE(q->vnet_hdr_sz);
```
+3
drivers/net/tun.c
```diff
 	bool skb_xdp = false;
 	struct page *page;
 
+	if (unlikely(datasize < ETH_HLEN))
+		return -EINVAL;
+
 	xdp_prog = rcu_dereference(tun->xdp_prog);
 	if (xdp_prog) {
 		if (gso->gso_type) {
```
+4
include/uapi/linux/if_xdp.h
```diff
  */
 #define XDP_UMEM_TX_SW_CSUM		(1 << 1)
 
+/* Request to reserve tx_metadata_len bytes of per-chunk metadata.
+ */
+#define XDP_UMEM_TX_METADATA_LEN	(1 << 2)
+
 struct sockaddr_xdp {
 	__u16 sxdp_family;
 	__u16 sxdp_flags;
```
+12 -14
kernel/events/core.c
```diff
 	bool unregister = type == PERF_BPF_EVENT_PROG_UNLOAD;
 	int i;
 
-	if (prog->aux->func_cnt == 0) {
-		perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF,
-				   (u64)(unsigned long)prog->bpf_func,
-				   prog->jited_len, unregister,
-				   prog->aux->ksym.name);
-	} else {
-		for (i = 0; i < prog->aux->func_cnt; i++) {
-			struct bpf_prog *subprog = prog->aux->func[i];
-
-			perf_event_ksymbol(
-				PERF_RECORD_KSYMBOL_TYPE_BPF,
-				(u64)(unsigned long)subprog->bpf_func,
-				subprog->jited_len, unregister,
-				subprog->aux->ksym.name);
-		}
+	perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF,
+			   (u64)(unsigned long)prog->bpf_func,
+			   prog->jited_len, unregister,
+			   prog->aux->ksym.name);
+
+	for (i = 1; i < prog->aux->func_cnt; i++) {
+		struct bpf_prog *subprog = prog->aux->func[i];
+
+		perf_event_ksymbol(
+			PERF_RECORD_KSYMBOL_TYPE_BPF,
+			(u64)(unsigned long)subprog->bpf_func,
+			subprog->jited_len, unregister,
+			subprog->aux->ksym.name);
 	}
 }
```
+11 -4
net/core/filter.c
```diff
 	if (skb_is_gso(skb)) {
 		struct skb_shared_info *shinfo = skb_shinfo(skb);
 
-		/* Due to header grow, MSS needs to be downgraded. */
-		if (!(flags & BPF_F_ADJ_ROOM_FIXED_GSO))
-			skb_decrease_gso_size(shinfo, len_diff);
-
 		/* Header must be checked, and gso_segs recomputed. */
 		shinfo->gso_type |= gso_type;
 		shinfo->gso_segs = 0;
+
+		/* Due to header growth, MSS needs to be downgraded.
+		 * There is a BUG_ON() when segmenting the frag_list with
+		 * head_frag true, so linearize the skb after downgrading
+		 * the MSS.
+		 */
+		if (!(flags & BPF_F_ADJ_ROOM_FIXED_GSO)) {
+			skb_decrease_gso_size(shinfo, len_diff);
+			if (shinfo->frag_list)
+				return skb_linearize(skb);
+		}
 	}
 
 	return 0;
```
+4 -3
net/ipv4/nexthop.c
```diff
 
 	p = nla_data(nla);
 	for (i = 0; i < nhg->num_nh; ++i) {
-		p->id = nhg->nh_entries[i].nh->id;
-		p->weight = nhg->nh_entries[i].weight - 1;
-		p += 1;
+		*p++ = (struct nexthop_grp) {
+			.id = nhg->nh_entries[i].nh->id,
+			.weight = nhg->nh_entries[i].weight - 1,
+		};
 	}
 
 	if (nhg->resilient && nla_put_nh_group_res(skb, nhg))
```
+1 -1
net/ipv4/route.c
```diff
 	struct flowi4 fl4 = {
 		.daddr = iph->daddr,
 		.saddr = iph->saddr,
-		.flowi4_tos = RT_TOS(iph->tos),
+		.flowi4_tos = iph->tos & IPTOS_RT_MASK,
 		.flowi4_oif = rt->dst.dev->ifindex,
 		.flowi4_iif = skb->dev->ifindex,
 		.flowi4_mark = skb->mark,
```
-3
net/ipv4/tcp_input.c
```diff
 		tcp_fast_path_on(tp);
 		if (sk->sk_shutdown & SEND_SHUTDOWN)
 			tcp_shutdown(sk, SEND_SHUTDOWN);
-
-		if (sk->sk_socket)
-			goto consume;
 		break;
 
 	case TCP_FIN_WAIT1: {
```
+14 -18
net/l2tp/l2tp_core.c
```diff
 	int err;
 
 	spin_lock_bh(&tunnel->list_lock);
+	spin_lock_bh(&pn->l2tp_session_idr_lock);
+
 	if (!tunnel->acpt_newsess) {
 		err = -ENODEV;
-		goto err_tlock;
+		goto out;
 	}
 
 	if (tunnel->version == L2TP_HDR_VER_3) {
 		session_key = session->session_id;
-		spin_lock_bh(&pn->l2tp_session_idr_lock);
 		err = idr_alloc_u32(&pn->l2tp_v3_session_idr, NULL,
 				    &session_key, session_key, GFP_ATOMIC);
 		/* IP encap expects session IDs to be globally unique, while
@@ ... @@
 			err = l2tp_session_collision_add(pn, session,
 							 other_session);
 		}
-		spin_unlock_bh(&pn->l2tp_session_idr_lock);
 	} else {
 		session_key = l2tp_v2_session_key(tunnel->tunnel_id,
 						  session->session_id);
-		spin_lock_bh(&pn->l2tp_session_idr_lock);
 		err = idr_alloc_u32(&pn->l2tp_v2_session_idr, NULL,
 				    &session_key, session_key, GFP_ATOMIC);
-		spin_unlock_bh(&pn->l2tp_session_idr_lock);
 	}
 
 	if (err) {
 		if (err == -ENOSPC)
 			err = -EEXIST;
-		goto err_tlock;
+		goto out;
 	}
 
 	l2tp_tunnel_inc_refcount(tunnel);
-
 	list_add(&session->list, &tunnel->session_list);
-	spin_unlock_bh(&tunnel->list_lock);
 
-	spin_lock_bh(&pn->l2tp_session_idr_lock);
 	if (tunnel->version == L2TP_HDR_VER_3) {
 		if (!other_session)
 			idr_replace(&pn->l2tp_v3_session_idr, session, session_key);
 	} else {
 		idr_replace(&pn->l2tp_v2_session_idr, session, session_key);
 	}
+
+out:
 	spin_unlock_bh(&pn->l2tp_session_idr_lock);
-
-	trace_register_session(session);
-
-	return 0;
-
-err_tlock:
 	spin_unlock_bh(&tunnel->list_lock);
+
+	if (!err)
+		trace_register_session(session);
 
 	return err;
 }
@@ ... @@
 	struct l2tp_net *pn = l2tp_pernet(tunnel->l2tp_net);
 	struct l2tp_session *removed = session;
 
-	/* Remove from the per-tunnel list */
 	spin_lock_bh(&tunnel->list_lock);
+	spin_lock_bh(&pn->l2tp_session_idr_lock);
+
+	/* Remove from the per-tunnel list */
 	list_del_init(&session->list);
-	spin_unlock_bh(&tunnel->list_lock);
 
 	/* Remove from per-net IDR */
-	spin_lock_bh(&pn->l2tp_session_idr_lock);
 	if (tunnel->version == L2TP_HDR_VER_3) {
 		if (hash_hashed(&session->hlist))
 			l2tp_session_collision_del(pn, session);
@@ ... @@
 					     session_key);
 	}
 	WARN_ON_ONCE(removed && removed != session);
+
 	spin_unlock_bh(&pn->l2tp_session_idr_lock);
+	spin_unlock_bh(&tunnel->list_lock);
 
 	synchronize_rcu();
 }
```
+10 -2
net/netfilter/nft_set_pipapo_avx2.c
```diff
 	bool map_index;
 	int i, ret = 0;
 
-	if (unlikely(!irq_fpu_usable()))
-		return nft_pipapo_lookup(net, set, key, ext);
+	local_bh_disable();
+
+	if (unlikely(!irq_fpu_usable())) {
+		bool fallback_res = nft_pipapo_lookup(net, set, key, ext);
+
+		local_bh_enable();
+		return fallback_res;
+	}
 
 	m = rcu_dereference(priv->match);
 
@@ ... @@
 	scratch = *raw_cpu_ptr(m->scratch);
 	if (unlikely(!scratch)) {
 		kernel_fpu_end();
+		local_bh_enable();
 		return false;
 	}
 
@@ ... @@
 	if (i % 2)
 		scratch->map_index = !map_index;
 	kernel_fpu_end();
+	local_bh_enable();
 
 	return ret >= 0;
 }
```
+4 -1
net/tipc/udp_media.c
```diff
 		snprintf(buf, size, "%pI4:%u", &ua->ipv4, ntohs(ua->port));
 	else if (ntohs(ua->proto) == ETH_P_IPV6)
 		snprintf(buf, size, "%pI6:%u", &ua->ipv6, ntohs(ua->port));
-	else
+	else {
 		pr_err("Invalid UDP media address\n");
+		return 1;
+	}
+
 	return 0;
 }
```
+40 -1
net/unix/af_unix.c
```diff
 
 static int unix_stream_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 {
+	struct unix_sock *u = unix_sk(sk);
+	struct sk_buff *skb;
+	int err;
+
 	if (unlikely(READ_ONCE(sk->sk_state) != TCP_ESTABLISHED))
 		return -ENOTCONN;
 
-	return unix_read_skb(sk, recv_actor);
+	mutex_lock(&u->iolock);
+	skb = skb_recv_datagram(sk, MSG_DONTWAIT, &err);
+	mutex_unlock(&u->iolock);
+	if (!skb)
+		return err;
+
+#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
+	if (unlikely(skb == READ_ONCE(u->oob_skb))) {
+		bool drop = false;
+
+		unix_state_lock(sk);
+
+		if (sock_flag(sk, SOCK_DEAD)) {
+			unix_state_unlock(sk);
+			kfree_skb(skb);
+			return -ECONNRESET;
+		}
+
+		spin_lock(&sk->sk_receive_queue.lock);
+		if (likely(skb == u->oob_skb)) {
+			WRITE_ONCE(u->oob_skb, NULL);
+			drop = true;
+		}
+		spin_unlock(&sk->sk_receive_queue.lock);
+
+		unix_state_unlock(sk);
+
+		if (drop) {
+			WARN_ON_ONCE(skb_unref(skb));
+			kfree_skb(skb);
+			return -EAGAIN;
+		}
+	}
+#endif
+
+	return recv_actor(sk, skb);
 }
 
 static int unix_stream_read_generic(struct unix_stream_read_state *state,
```
+3
net/unix/unix_bpf.c
```diff
 	struct sk_psock *psock;
 	int copied;
 
+	if (flags & MSG_OOB)
+		return -EOPNOTSUPP;
+
 	if (!len)
 		return 0;
```
+6 -3
net/xdp/xdp_umem.c
```diff
 #define XDP_UMEM_FLAGS_VALID ( \
 		XDP_UMEM_UNALIGNED_CHUNK_FLAG | \
 		XDP_UMEM_TX_SW_CSUM | \
+		XDP_UMEM_TX_METADATA_LEN | \
 	0)
 
 static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
@@ ... @@
 	if (headroom >= chunk_size - XDP_PACKET_HEADROOM)
 		return -EINVAL;
 
-	if (mr->tx_metadata_len >= 256 || mr->tx_metadata_len % 8)
-		return -EINVAL;
+	if (mr->flags & XDP_UMEM_TX_METADATA_LEN) {
+		if (mr->tx_metadata_len >= 256 || mr->tx_metadata_len % 8)
+			return -EINVAL;
+		umem->tx_metadata_len = mr->tx_metadata_len;
+	}
 
 	umem->size = size;
 	umem->headroom = headroom;
@@ ... @@
 	umem->pgs = NULL;
 	umem->user = NULL;
 	umem->flags = mr->flags;
-	umem->tx_metadata_len = mr->tx_metadata_len;
 
 	INIT_LIST_HEAD(&umem->xsk_dma_list);
 	refcount_set(&umem->users, 1);
```
+1 -1
tools/bpf/bpftool/prog.c
```diff
 " cgroup/connect_unix | cgroup/getpeername4 | cgroup/getpeername6 |\n"
 " cgroup/getpeername_unix | cgroup/getsockname4 | cgroup/getsockname6 |\n"
 " cgroup/getsockname_unix | cgroup/sendmsg4 | cgroup/sendmsg6 |\n"
-" cgroup/sendmsg°unix | cgroup/recvmsg4 | cgroup/recvmsg6 | cgroup/recvmsg_unix |\n"
+" cgroup/sendmsg_unix | cgroup/recvmsg4 | cgroup/recvmsg6 | cgroup/recvmsg_unix |\n"
 " cgroup/getsockopt | cgroup/setsockopt | cgroup/sock_release |\n"
 " struct_ops | fentry | fexit | freplace | sk_lookup }\n"
 " ATTACH_TYPE := { sk_msg_verdict | sk_skb_verdict | sk_skb_stream_verdict |\n"
```
+1 -1
tools/bpf/resolve_btfids/main.c
```diff
 	 * Make sure id is at the beginning of the pairs
 	 * struct, otherwise the below qsort would not work.
 	 */
-	BUILD_BUG_ON(set8->pairs != &set8->pairs[0].id);
+	BUILD_BUG_ON((u32 *)set8->pairs != &set8->pairs[0].id);
 	qsort(set8->pairs, set8->cnt, sizeof(set8->pairs[0]), cmp_id);
 
 	/*
```
+4
tools/include/uapi/linux/if_xdp.h
```diff
  */
 #define XDP_UMEM_TX_SW_CSUM		(1 << 1)
 
+/* Request to reserve tx_metadata_len bytes of per-chunk metadata.
+ */
+#define XDP_UMEM_TX_METADATA_LEN	(1 << 2)
+
 struct sockaddr_xdp {
 	__u16 sxdp_family;
 	__u16 sxdp_flags;
```
+5 -3
tools/lib/bpf/btf_dump.c
```diff
 	 * Clang for BPF target generates func_proto with no
 	 * args as a func_proto with a single void arg (e.g.,
 	 * `int (*f)(void)` vs just `int (*f)()`). We are
-	 * going to pretend there are no args for such case.
+	 * going to emit valid empty args (void) syntax for
+	 * such case. Similarly and conveniently, valid
+	 * no args case can be special-cased here as well.
 	 */
-	if (vlen == 1 && p->type == 0) {
-		btf_dump_printf(d, ")");
+	if (vlen == 0 || (vlen == 1 && p->type == 0)) {
+		btf_dump_printf(d, "void)");
 		return;
 	}
```
-1
tools/testing/selftests/bpf/DENYLIST.aarch64
```diff
 bpf_cookie/multi_kprobe_attach_api # kprobe_multi_link_api_subtest:FAIL:fentry_raw_skel_load unexpected error: -3
 bpf_cookie/multi_kprobe_link_api # kprobe_multi_link_api_subtest:FAIL:fentry_raw_skel_load unexpected error: -3
-fexit_sleep # The test never returns. The remaining tests cannot start.
 kprobe_multi_bench_attach # needs CONFIG_FPROBE
 kprobe_multi_test # needs CONFIG_FPROBE
 module_attach # prog 'kprobe_multi': failed to auto-attach: -95
```
+7 -1
tools/testing/selftests/bpf/prog_tests/fexit_sleep.c
```diff
 }
 
 #define STACK_SIZE (1024 * 1024)
-static char child_stack[STACK_SIZE];
 
 void test_fexit_sleep(void)
 {
 	struct fexit_sleep_lskel *fexit_skel = NULL;
 	int wstatus, duration = 0;
 	pid_t cpid;
+	char *child_stack = NULL;
 	int err, fexit_cnt;
 
 	fexit_skel = fexit_sleep_lskel__open_and_load();
@@ ... @@
 
 	err = fexit_sleep_lskel__attach(fexit_skel);
 	if (CHECK(err, "fexit_attach", "fexit attach failed: %d\n", err))
+		goto cleanup;
+
+	child_stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE |
+			   MAP_ANONYMOUS | MAP_STACK, -1, 0);
+	if (!ASSERT_NEQ(child_stack, MAP_FAILED, "mmap"))
 		goto cleanup;
 
 	cpid = clone(do_sleep, child_stack + STACK_SIZE, CLONE_FILES | SIGCHLD, fexit_skel);
@@ ... @@
 		goto cleanup;
 
 cleanup:
+	munmap(child_stack, STACK_SIZE);
 	fexit_sleep_lskel__destroy(fexit_skel);
 }
```
+59 -26
tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
@@ -29,6 +29,8 @@
 
 #include "sockmap_helpers.h"
 
+#define NO_FLAGS 0
+
 static void test_insert_invalid(struct test_sockmap_listen *skel __always_unused,
 				int family, int sotype, int mapfd)
 {
@@ -1378,7 +1376,8 @@
 
 static void pairs_redir_to_connected(int cli0, int peer0, int cli1, int peer1,
 				     int sock_mapfd, int nop_mapfd,
-				     int verd_mapfd, enum redir_mode mode)
+				     int verd_mapfd, enum redir_mode mode,
+				     int send_flags)
 {
 	const char *log_prefix = redir_mode_str(mode);
 	unsigned int pass;
@@ -1399,12 +1396,11 @@
 		return;
 	}
 
-	n = write(cli1, "a", 1);
-	if (n < 0)
-		FAIL_ERRNO("%s: write", log_prefix);
-	if (n == 0)
-		FAIL("%s: incomplete write", log_prefix);
-	if (n < 1)
+	/* Last byte is OOB data when send_flags has MSG_OOB bit set */
+	n = xsend(cli1, "ab", 2, send_flags);
+	if (n >= 0 && n < 2)
+		FAIL("%s: incomplete send", log_prefix);
+	if (n < 2)
 		return;
 
 	key = SK_PASS;
@@ -1418,6 +1416,25 @@
 		FAIL_ERRNO("%s: recv_timeout", log_prefix);
 	if (n == 0)
 		FAIL("%s: incomplete recv", log_prefix);
+
+	if (send_flags & MSG_OOB) {
+		/* Check that we can't read OOB while in sockmap */
+		errno = 0;
+		n = recv(peer1, &b, 1, MSG_OOB | MSG_DONTWAIT);
+		if (n != -1 || errno != EOPNOTSUPP)
+			FAIL("%s: recv(MSG_OOB): expected EOPNOTSUPP: retval=%d errno=%d",
+			     log_prefix, n, errno);
+
+		/* Remove peer1 from sockmap */
+		xbpf_map_delete_elem(sock_mapfd, &(int){ 1 });
+
+		/* Check that OOB was dropped on redirect */
+		errno = 0;
+		n = recv(peer1, &b, 1, MSG_OOB | MSG_DONTWAIT);
+		if (n != -1 || errno != EINVAL)
+			FAIL("%s: recv(MSG_OOB): expected EINVAL: retval=%d errno=%d",
+			     log_prefix, n, errno);
+	}
 }
 
 static void unix_redir_to_connected(int sotype, int sock_mapfd,
@@ -1453,7 +1432,8 @@
 		goto close0;
 	c1 = sfd[0], p1 = sfd[1];
 
-	pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, -1, verd_mapfd, mode);
+	pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, -1, verd_mapfd,
+				 mode, NO_FLAGS);
 
 	xclose(c1);
 	xclose(p1);
@@ -1744,7 +1722,8 @@
 	if (err)
 		goto close_cli0;
 
-	pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, -1, verd_mapfd, mode);
+	pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, -1, verd_mapfd,
+				 mode, NO_FLAGS);
 
 	xclose(c1);
 	xclose(p1);
@@ -1803,7 +1780,8 @@
 	if (err)
 		goto close;
 
-	pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, -1, verd_mapfd, mode);
+	pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, -1, verd_mapfd,
+				 mode, NO_FLAGS);
 
 	xclose(c1);
 	xclose(p1);
@@ -1839,10 +1815,9 @@
 	xbpf_prog_detach2(verdict, sock_map, BPF_SK_SKB_VERDICT);
 }
 
-static void unix_inet_redir_to_connected(int family, int type,
-					 int sock_mapfd, int nop_mapfd,
-					 int verd_mapfd,
-					 enum redir_mode mode)
+static void unix_inet_redir_to_connected(int family, int type, int sock_mapfd,
+					 int nop_mapfd, int verd_mapfd,
+					 enum redir_mode mode, int send_flags)
 {
 	int c0, c1, p0, p1;
 	int sfd[2];
@@ -1851,19 +1828,18 @@
 	if (err)
 		return;
 
-	if (socketpair(AF_UNIX, SOCK_DGRAM | SOCK_NONBLOCK, 0, sfd))
+	if (socketpair(AF_UNIX, type | SOCK_NONBLOCK, 0, sfd))
 		goto close_cli0;
 	c1 = sfd[0], p1 = sfd[1];
 
-	pairs_redir_to_connected(c0, p0, c1, p1,
-				 sock_mapfd, nop_mapfd, verd_mapfd, mode);
+	pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, nop_mapfd,
+				 verd_mapfd, mode, send_flags);
 
 	xclose(c1);
 	xclose(p1);
 close_cli0:
 	xclose(c0);
 	xclose(p0);
-
 }
 
 static void unix_inet_skb_redir_to_connected(struct test_sockmap_listen *skel,
@@ -1881,31 +1859,42 @@
 	skel->bss->test_ingress = false;
 	unix_inet_redir_to_connected(family, SOCK_DGRAM,
 				     sock_map, -1, verdict_map,
-				     REDIR_EGRESS);
+				     REDIR_EGRESS, NO_FLAGS);
 	unix_inet_redir_to_connected(family, SOCK_DGRAM,
 				     sock_map, -1, verdict_map,
-				     REDIR_EGRESS);
+				     REDIR_EGRESS, NO_FLAGS);
 
 	unix_inet_redir_to_connected(family, SOCK_DGRAM,
 				     sock_map, nop_map, verdict_map,
-				     REDIR_EGRESS);
+				     REDIR_EGRESS, NO_FLAGS);
 	unix_inet_redir_to_connected(family, SOCK_STREAM,
 				     sock_map, nop_map, verdict_map,
-				     REDIR_EGRESS);
+				     REDIR_EGRESS, NO_FLAGS);
+
+	/* MSG_OOB not supported by AF_UNIX SOCK_DGRAM */
+	unix_inet_redir_to_connected(family, SOCK_STREAM,
+				     sock_map, nop_map, verdict_map,
+				     REDIR_EGRESS, MSG_OOB);
+
 	skel->bss->test_ingress = true;
 	unix_inet_redir_to_connected(family, SOCK_DGRAM,
 				     sock_map, -1, verdict_map,
-				     REDIR_INGRESS);
+				     REDIR_INGRESS, NO_FLAGS);
	unix_inet_redir_to_connected(family, SOCK_STREAM,
 				     sock_map, -1, verdict_map,
-				     REDIR_INGRESS);
+				     REDIR_INGRESS, NO_FLAGS);
 
 	unix_inet_redir_to_connected(family, SOCK_DGRAM,
 				     sock_map, nop_map, verdict_map,
-				     REDIR_INGRESS);
+				     REDIR_INGRESS, NO_FLAGS);
 	unix_inet_redir_to_connected(family, SOCK_STREAM,
 				     sock_map, nop_map, verdict_map,
-				     REDIR_INGRESS);
+				     REDIR_INGRESS, NO_FLAGS);
+
+	/* MSG_OOB not supported by AF_UNIX SOCK_DGRAM */
+	unix_inet_redir_to_connected(family, SOCK_STREAM,
+				     sock_map, nop_map, verdict_map,
+				     REDIR_INGRESS, MSG_OOB);
 
 	xbpf_prog_detach2(verdict, sock_map, BPF_SK_SKB_VERDICT);
 }
+2 -1
tools/testing/selftests/bpf/prog_tests/xdp_metadata.c
@@ -68,7 +68,8 @@
 	.fill_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
 	.comp_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
 	.frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE,
-	.flags = XDP_UMEM_UNALIGNED_CHUNK_FLAG | XDP_UMEM_TX_SW_CSUM,
+	.flags = XDP_UMEM_UNALIGNED_CHUNK_FLAG | XDP_UMEM_TX_SW_CSUM |
+		 XDP_UMEM_TX_METADATA_LEN,
 	.tx_metadata_len = sizeof(struct xsk_tx_metadata),
 };
 __u32 idx;
+2 -2
tools/testing/selftests/bpf/progs/btf_dump_test_case_multidim.c
@@ -14,9 +14,9 @@
 
 typedef int *ptr_multiarr_t[7][8][9][10];
 
-typedef int * (*fn_ptr_arr_t[11])();
+typedef int * (*fn_ptr_arr_t[11])(void);
 
-typedef int * (*fn_ptr_multiarr_t[12][13])();
+typedef int * (*fn_ptr_multiarr_t[12][13])(void);
 
 struct root_struct {
 	arr_t _1;
+2 -2
tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
@@ -100,7 +100,7 @@
  * `int -> char *` function and returns pointer to a char. Equivalent:
  *   typedef char * (*fn_input_t)(int);
  *   typedef char * (*fn_output_outer_t)(fn_input_t);
- *   typedef const fn_output_outer_t (* fn_output_inner_t)();
+ *   typedef const fn_output_outer_t (* fn_output_inner_t)(void);
  *   typedef const fn_output_inner_t fn_ptr_arr2_t[5];
  */
 /* ----- START-EXPECTED-OUTPUT ----- */
@@ -127,7 +127,7 @@
 
 typedef char * (*fn_ptr_arr1_t[10])(int **);
 
-typedef char * (* (* const fn_ptr_arr2_t[5])())(char * (*)(int));
+typedef char * (* (* const fn_ptr_arr2_t[5])(void))(char * (*)(int));
 
 struct struct_w_typedefs {
 	int_t a;
+18
tools/testing/selftests/net/forwarding/bridge_fdb_learning_limit.sh
@@ -178,6 +178,22 @@
 	check_err $? "Failed to remove a FDB entry of type ${type}"
 }
 
+check_fdb_n_learned_support()
+{
+	if ! ip link help bridge 2>&1 | grep -q "fdb_max_learned"; then
+		echo "SKIP: iproute2 too old, missing bridge max learned support"
+		exit $ksft_skip
+	fi
+
+	ip link add dev br0 type bridge
+	local learned=$(fdb_get_n_learned)
+	ip link del dev br0
+	if [ "$learned" == "null" ]; then
+		echo "SKIP: kernel too old; bridge fdb_n_learned feature not supported."
+		exit $ksft_skip
+	fi
+}
+
 check_accounting_one_type()
 {
 	local type=$1 is_counted=$2 overrides_learned=$3
@@ -273,6 +289,8 @@
 		check_limit_one_type $type_args
 	done
 }
+
+check_fdb_n_learned_support
 
 trap cleanup EXIT
 