Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-5.19-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
"Including fixes from bpf and netfilter.

Current release - regressions:

- netfilter: cttimeout: fix slab-out-of-bounds read in cttimeout_net_exit

Current release - new code bugs:

- bpf: ftrace: keep address offset in ftrace_lookup_symbols

- bpf: force cookies array to follow symbols sorting

Previous releases - regressions:

- ipv4: ping: fix bind address validity check

- tipc: fix use-after-free read in tipc_named_reinit

- eth: veth: add updating of trans_start

Previous releases - always broken:

- sock: redo the psock vs ULP protection check

- netfilter: nf_dup_netdev: fix skb_under_panic

- bpf: fix request_sock leak in sk lookup helpers

- eth: igb: fix a use-after-free issue in igb_clean_tx_ring

- eth: ice: prohibit improper channel config for DCB

- eth: at803x: fix null pointer dereference on AR9331 phy

- eth: virtio_net: fix xdp_rxq_info bug after suspend/resume

Misc:

- eth: hinic: replace memcpy() with direct assignment"

* tag 'net-5.19-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (47 commits)
net: openvswitch: fix parsing of nw_proto for IPv6 fragments
sock: redo the psock vs ULP protection check
Revert "net/tls: fix tls_sk_proto_close executed repeatedly"
virtio_net: fix xdp_rxq_info bug after suspend/resume
igb: Make DMA faster when CPU is active on the PCIe link
net: dsa: qca8k: reduce mgmt ethernet timeout
net: dsa: qca8k: reset cpu port on MTU change
MAINTAINERS: Add a maintainer for OCP Time Card
hinic: Replace memcpy() with direct assignment
Revert "drivers/net/ethernet/neterion/vxge: Fix a use-after-free bug in vxge-main.c"
net: phy: smsc: Disable Energy Detect Power-Down in interrupt mode
ice: ethtool: Prohibit improper channel config for DCB
ice: ethtool: advertise 1000M speeds properly
ice: Fix switchdev rules book keeping
ice: ignore protocol field in GTP offload
netfilter: nf_dup_netdev: add and use recursion counter
netfilter: nf_dup_netdev: do not push mac header a second time
selftests: netfilter: correct PKTGEN_SCRIPT_PATHS in nft_concat_range.sh
net/tls: fix tls_sk_proto_close executed repeatedly
erspan: do not assume transport header is always set
...

+622 -220
+21 -21
MAINTAINERS
···
 M:	Shubham Bansal <illusionist.neo@gmail.com>
 L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
-S:	Maintained
+S:	Odd Fixes
 F:	arch/arm/net/

 BPF JIT for ARM64
···
 M:	Jakub Kicinski <kuba@kernel.org>
 L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
-S:	Supported
+S:	Odd Fixes
 F:	drivers/net/ethernet/netronome/nfp/bpf/

 BPF JIT for POWERPC (32-BIT AND 64-BIT)
 M:	Naveen N. Rao <naveen.n.rao@linux.ibm.com>
+M:	Michael Ellerman <mpe@ellerman.id.au>
 L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
-S:	Maintained
+S:	Supported
 F:	arch/powerpc/net/

 BPF JIT for RISC-V (32-bit)
···
 M:	Vasily Gorbik <gor@linux.ibm.com>
 L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
-S:	Maintained
+S:	Supported
 F:	arch/s390/net/
 X:	arch/s390/net/pnet.c

···
 M:	David S. Miller <davem@davemloft.net>
 L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
-S:	Maintained
+S:	Odd Fixes
 F:	arch/sparc/net/

 BPF JIT for X86 32-BIT
 M:	Wang YanQing <udknight@gmail.com>
 L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
-S:	Maintained
+S:	Odd Fixes
 F:	arch/x86/net/bpf_jit_comp32.c

 BPF JIT for X86 64-BIT
···
 F:	include/linux/bpf_lsm.h
 F:	kernel/bpf/bpf_lsm.c
 F:	security/bpf/
+
+BPF L7 FRAMEWORK
+M:	John Fastabend <john.fastabend@gmail.com>
+M:	Jakub Sitnicki <jakub@cloudflare.com>
+L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	include/linux/skmsg.h
+F:	net/core/skmsg.c
+F:	net/core/sock_map.c
+F:	net/ipv4/tcp_bpf.c
+F:	net/ipv4/udp_bpf.c
+F:	net/unix/unix_bpf.c

 BPFTOOL
 M:	Quentin Monnet <quentin@isovalent.com>
···
 F:	include/net/l3mdev.h
 F:	net/l3mdev

-L7 BPF FRAMEWORK
-M:	John Fastabend <john.fastabend@gmail.com>
-M:	Daniel Borkmann <daniel@iogearbox.net>
-M:	Jakub Sitnicki <jakub@cloudflare.com>
-L:	netdev@vger.kernel.org
-L:	bpf@vger.kernel.org
-S:	Maintained
-F:	include/linux/skmsg.h
-F:	net/core/skmsg.c
-F:	net/core/sock_map.c
-F:	net/ipv4/tcp_bpf.c
-F:	net/ipv4/udp_bpf.c
-F:	net/unix/unix_bpf.c
-
 LANDLOCK SECURITY MODULE
 M:	Mickaël Salaün <mic@digikod.net>
 L:	linux-security-module@vger.kernel.org
···
 NETWORKING [TLS]
 M:	Boris Pismenny <borisp@nvidia.com>
 M:	John Fastabend <john.fastabend@gmail.com>
-M:	Daniel Borkmann <daniel@iogearbox.net>
 M:	Jakub Kicinski <kuba@kernel.org>
 L:	netdev@vger.kernel.org
 S:	Maintained
···
 OPENCOMPUTE PTP CLOCK DRIVER
 M:	Jonathan Lemon <jonathan.lemon@gmail.com>
+M:	Vadim Fedorenko <vadfed@fb.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/ptp/ptp_ocp.c
+2 -1
arch/x86/net/bpf_jit_comp.c
···
 		case BPF_JMP | BPF_CALL:
 			func = (u8 *) __bpf_call_base + imm32;
 			if (tail_call_reachable) {
+				/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
 				EMIT3_off32(0x48, 0x8B, 0x85,
-					    -(bpf_prog->aux->stack_depth + 8));
+					    -round_up(bpf_prog->aux->stack_depth, 8) - 8);
 				if (!imm32 || emit_call(&prog, func, image + addrs[i - 1] + 7))
 					return -EINVAL;
 			} else {
+3 -1
drivers/net/bonding/bond_main.c
···
 	if (!rtnl_trylock())
 		return;

-	if (should_notify_peers)
+	if (should_notify_peers) {
+		bond->send_peer_notif--;
 		call_netdevice_notifiers(NETDEV_NOTIFY_PEERS,
 					 bond->dev);
+	}
 	if (should_notify_rtnl) {
 		bond_slave_state_notify(bond);
 		bond_slave_link_notify(bond);
+21 -1
drivers/net/dsa/qca8k.c
···
 qca8k_port_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
 {
 	struct qca8k_priv *priv = ds->priv;
+	int ret;

 	/* We have only have a general MTU setting.
 	 * DSA always set the CPU port's MTU to the largest MTU of the slave
···
 	if (!dsa_is_cpu_port(ds, port))
 		return 0;

+	/* To change the MAX_FRAME_SIZE the cpu ports must be off or
+	 * the switch panics.
+	 * Turn off both cpu ports before applying the new value to prevent
+	 * this.
+	 */
+	if (priv->port_enabled_map & BIT(0))
+		qca8k_port_set_status(priv, 0, 0);
+
+	if (priv->port_enabled_map & BIT(6))
+		qca8k_port_set_status(priv, 6, 0);
+
 	/* Include L2 header / FCS length */
-	return qca8k_write(priv, QCA8K_MAX_FRAME_SIZE, new_mtu + ETH_HLEN + ETH_FCS_LEN);
+	ret = qca8k_write(priv, QCA8K_MAX_FRAME_SIZE, new_mtu + ETH_HLEN + ETH_FCS_LEN);
+
+	if (priv->port_enabled_map & BIT(0))
+		qca8k_port_set_status(priv, 0, 1);
+
+	if (priv->port_enabled_map & BIT(6))
+		qca8k_port_set_status(priv, 6, 1);
+
+	return ret;
 }

 static int
+1 -1
drivers/net/dsa/qca8k.h
···
 #define QCA8K_ETHERNET_MDIO_PRIORITY			7
 #define QCA8K_ETHERNET_PHY_PRIORITY			6
-#define QCA8K_ETHERNET_TIMEOUT				100
+#define QCA8K_ETHERNET_TIMEOUT				5

 #define QCA8K_NUM_PORTS					7
 #define QCA8K_NUM_CPU_PORTS				2
+1 -3
drivers/net/ethernet/huawei/hinic/hinic_devlink.c
···
 	for (i = 0; i < fw_image->fw_info.fw_section_cnt; i++) {
 		len += fw_image->fw_section_info[i].fw_section_len;
-		memcpy(&host_image->image_section_info[i],
-		       &fw_image->fw_section_info[i],
-		       sizeof(struct fw_section_info_st));
+		host_image->image_section_info[i] = fw_image->fw_section_info[i];
 	}

 	if (len != fw_image->fw_len ||
+48 -1
drivers/net/ethernet/intel/ice/ice_ethtool.c
···
 }

 /**
+ * ice_set_phy_type_from_speed - set phy_types based on speeds
+ * and advertised modes
+ * @ks: ethtool link ksettings struct
+ * @phy_type_low: pointer to the lower part of phy_type
+ * @phy_type_high: pointer to the higher part of phy_type
+ * @adv_link_speed: targeted link speeds bitmap
+ */
+static void
+ice_set_phy_type_from_speed(const struct ethtool_link_ksettings *ks,
+			    u64 *phy_type_low, u64 *phy_type_high,
+			    u16 adv_link_speed)
+{
+	/* Handle 1000M speed in a special way because ice_update_phy_type
+	 * enables all link modes, but having mixed copper and optical
+	 * standards is not supported.
+	 */
+	adv_link_speed &= ~ICE_AQ_LINK_SPEED_1000MB;
+
+	if (ethtool_link_ksettings_test_link_mode(ks, advertising,
+						  1000baseT_Full))
+		*phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_T |
+				 ICE_PHY_TYPE_LOW_1G_SGMII;
+
+	if (ethtool_link_ksettings_test_link_mode(ks, advertising,
+						  1000baseKX_Full))
+		*phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_KX;
+
+	if (ethtool_link_ksettings_test_link_mode(ks, advertising,
+						  1000baseX_Full))
+		*phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_SX |
+				 ICE_PHY_TYPE_LOW_1000BASE_LX;
+
+	ice_update_phy_type(phy_type_low, phy_type_high, adv_link_speed);
+}
+
+/**
  * ice_set_link_ksettings - Set Speed and Duplex
  * @netdev: network interface device structure
  * @ks: ethtool ksettings
···
 		adv_link_speed = curr_link_speed;

 	/* Convert the advertise link speeds to their corresponded PHY_TYPE */
-	ice_update_phy_type(&phy_type_low, &phy_type_high, adv_link_speed);
+	ice_set_phy_type_from_speed(ks, &phy_type_low, &phy_type_high,
+				    adv_link_speed);

 	if (!autoneg_changed && adv_link_speed == curr_link_speed) {
 		netdev_info(netdev, "Nothing changed, exiting without setting anything.\n");
···
 	new_rx = ch->combined_count + ch->rx_count;
 	new_tx = ch->combined_count + ch->tx_count;

+	if (new_rx < vsi->tc_cfg.numtc) {
+		netdev_err(dev, "Cannot set less Rx channels, than Traffic Classes you have (%u)\n",
+			   vsi->tc_cfg.numtc);
+		return -EINVAL;
+	}
+	if (new_tx < vsi->tc_cfg.numtc) {
+		netdev_err(dev, "Cannot set less Tx channels, than Traffic Classes you have (%u)\n",
+			   vsi->tc_cfg.numtc);
+		return -EINVAL;
+	}
 	if (new_rx > ice_get_max_rxq(pf)) {
 		netdev_err(dev, "Maximum allowed Rx channels is %d\n",
 			   ice_get_max_rxq(pf));
+37 -5
drivers/net/ethernet/intel/ice/ice_lib.c
···
  * @vsi: the VSI being configured
  * @ctxt: VSI context structure
  */
-static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
+static int ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
 {
 	u16 offset = 0, qmap = 0, tx_count = 0, pow = 0;
 	u16 num_txq_per_tc, num_rxq_per_tc;
···
 	else
 		vsi->num_rxq = num_rxq_per_tc;

+	if (vsi->num_rxq > vsi->alloc_rxq) {
+		dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Rx queues (%u), than were allocated (%u)!\n",
+			vsi->num_rxq, vsi->alloc_rxq);
+		return -EINVAL;
+	}
+
 	vsi->num_txq = tx_count;
+	if (vsi->num_txq > vsi->alloc_txq) {
+		dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n",
+			vsi->num_txq, vsi->alloc_txq);
+		return -EINVAL;
+	}

 	if (vsi->type == ICE_VSI_VF && vsi->num_txq != vsi->num_rxq) {
 		dev_dbg(ice_pf_to_dev(vsi->back), "VF VSI should have same number of Tx and Rx queues. Hence making them equal\n");
···
 	 */
 	ctxt->info.q_mapping[0] = cpu_to_le16(vsi->rxq_map[0]);
 	ctxt->info.q_mapping[1] = cpu_to_le16(vsi->num_rxq);
+
+	return 0;
 }

 /**
···
 	if (vsi->type == ICE_VSI_CHNL) {
 		ice_chnl_vsi_setup_q_map(vsi, ctxt);
 	} else {
-		ice_vsi_setup_q_map(vsi, ctxt);
+		ret = ice_vsi_setup_q_map(vsi, ctxt);
+		if (ret)
+			goto out;
+
 		if (!init_vsi) /* means VSI being updated */
 			/* must to indicate which section of VSI context are
 			 * being modified
···
 *
 * Prepares VSI tc_config to have queue configurations based on MQPRIO options.
 */
-static void
+static int
 ice_vsi_setup_q_map_mqprio(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt,
 			   u8 ena_tc)
 {
···
 	/* Set actual Tx/Rx queue pairs */
 	vsi->num_txq = offset + qcount_tx;
+	if (vsi->num_txq > vsi->alloc_txq) {
+		dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n",
+			vsi->num_txq, vsi->alloc_txq);
+		return -EINVAL;
+	}
+
 	vsi->num_rxq = offset + qcount_rx;
+	if (vsi->num_rxq > vsi->alloc_rxq) {
+		dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Rx queues (%u), than were allocated (%u)!\n",
+			vsi->num_rxq, vsi->alloc_rxq);
+		return -EINVAL;
+	}

 	/* Setup queue TC[0].qmap for given VSI context */
 	ctxt->info.tc_mapping[0] = cpu_to_le16(qmap);
···
 	dev_dbg(ice_pf_to_dev(vsi->back), "vsi->num_rxq = %d\n", vsi->num_rxq);
 	dev_dbg(ice_pf_to_dev(vsi->back), "all_numtc %u, all_enatc: 0x%04x, tc_cfg.numtc %u\n",
 		vsi->all_numtc, vsi->all_enatc, vsi->tc_cfg.numtc);
+
+	return 0;
 }

 /**
···
 	if (vsi->type == ICE_VSI_PF &&
 	    test_bit(ICE_FLAG_TC_MQPRIO, pf->flags))
-		ice_vsi_setup_q_map_mqprio(vsi, ctx, ena_tc);
+		ret = ice_vsi_setup_q_map_mqprio(vsi, ctx, ena_tc);
 	else
-		ice_vsi_setup_q_map(vsi, ctx);
+		ret = ice_vsi_setup_q_map(vsi, ctx);
+
+	if (ret)
+		goto out;

 	/* must to indicate which section of VSI context are being modified */
 	ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID);
+4 -1
drivers/net/ethernet/intel/ice/ice_tc_lib.c
···
 	 */
 	fltr->rid = rule_added.rid;
 	fltr->rule_id = rule_added.rule_id;
+	fltr->dest_id = rule_added.vsi_handle;

 exit:
 	kfree(list);
···
 		n_proto_key = ntohs(match.key->n_proto);
 		n_proto_mask = ntohs(match.mask->n_proto);

-		if (n_proto_key == ETH_P_ALL || n_proto_key == 0) {
+		if (n_proto_key == ETH_P_ALL || n_proto_key == 0 ||
+		    fltr->tunnel_type == TNL_GTPU ||
+		    fltr->tunnel_type == TNL_GTPC) {
 			n_proto_key = 0;
 			n_proto_mask = 0;
 		} else {
+10 -9
drivers/net/ethernet/intel/igb/igb_main.c
···
 	while (i != tx_ring->next_to_use) {
 		union e1000_adv_tx_desc *eop_desc, *tx_desc;

-		/* Free all the Tx ring sk_buffs */
-		dev_kfree_skb_any(tx_buffer->skb);
+		/* Free all the Tx ring sk_buffs or xdp frames */
+		if (tx_buffer->type == IGB_TYPE_SKB)
+			dev_kfree_skb_any(tx_buffer->skb);
+		else
+			xdp_return_frame(tx_buffer->xdpf);

 		/* unmap skb header data */
 		dma_unmap_single(tx_ring->dev,
···
 	struct e1000_hw *hw = &adapter->hw;
 	u32 dmac_thr;
 	u16 hwm;
+	u32 reg;

 	if (hw->mac.type > e1000_82580) {
 		if (adapter->flags & IGB_FLAG_DMAC) {
-			u32 reg;
-
 			/* force threshold to 0. */
 			wr32(E1000_DMCTXTH, 0);
···
 			/* Disable BMC-to-OS Watchdog Enable */
 			if (hw->mac.type != e1000_i354)
 				reg &= ~E1000_DMACR_DC_BMC2OSW_EN;
-
 			wr32(E1000_DMACR, reg);

 			/* no lower threshold to disable
···
 			 */
 			wr32(E1000_DMCTXTH, (IGB_MIN_TXPBSIZE -
 			     (IGB_TX_BUF_4096 + adapter->max_frame_size)) >> 6);
+		}

-			/* make low power state decision controlled
-			 * by DMA coal
-			 */
+		if (hw->mac.type >= e1000_i210 ||
+		    (adapter->flags & IGB_FLAG_DMAC)) {
 			reg = rd32(E1000_PCIEMISC);
-			reg &= ~E1000_PCIEMISC_LX_DECISION;
+			reg |= E1000_PCIEMISC_LX_DECISION;
 			wr32(E1000_PCIEMISC, reg);
 		} /* endif adapter->dmac is not disabled */
 	} else if (hw->mac.type == e1000_82580) {
+8 -1
drivers/net/hamradio/6pack.c
···
 	unsigned int		rx_count;
 	unsigned int		rx_count_cooked;
+	spinlock_t		rxlock;

 	int			mtu;		/* Our mtu (to spot changes!) */
 	int			buffsize;	/* Max buffers sizes */
···
 	sp->dev = dev;

 	spin_lock_init(&sp->lock);
+	spin_lock_init(&sp->rxlock);
 	refcount_set(&sp->refcnt, 1);
 	init_completion(&sp->dead);
···
 		sp->led_state = 0x60;
 		/* fill trailing bytes with zeroes */
 		sp->tty->ops->write(sp->tty, &sp->led_state, 1);
+		spin_lock_bh(&sp->rxlock);
 		rest = sp->rx_count;
 		if (rest != 0)
 			for (i = rest; i <= 3; i++)
···
 				sp_bump(sp, 0);
 			}
 			sp->rx_count_cooked = 0;
+			spin_unlock_bh(&sp->rxlock);
 		}
 		break;
 	case SIXP_TX_URUN: printk(KERN_DEBUG "6pack: TX underrun\n");
···
 		decode_prio_command(sp, inbyte);
 	else if ((inbyte & SIXP_STD_CMD_MASK) != 0)
 		decode_std_command(sp, inbyte);
-	else if ((sp->status & SIXP_RX_DCD_MASK) == SIXP_RX_DCD_MASK)
+	else if ((sp->status & SIXP_RX_DCD_MASK) == SIXP_RX_DCD_MASK) {
+		spin_lock_bh(&sp->rxlock);
 		decode_data(sp, inbyte);
+		spin_unlock_bh(&sp->rxlock);
+	}
 }
+14 -1
drivers/net/phy/aquantia_main.c
···
 #define MDIO_AN_VEND_PROV			0xc400
 #define MDIO_AN_VEND_PROV_1000BASET_FULL	BIT(15)
 #define MDIO_AN_VEND_PROV_1000BASET_HALF	BIT(14)
+#define MDIO_AN_VEND_PROV_5000BASET_FULL	BIT(11)
+#define MDIO_AN_VEND_PROV_2500BASET_FULL	BIT(10)
 #define MDIO_AN_VEND_PROV_DOWNSHIFT_EN		BIT(4)
 #define MDIO_AN_VEND_PROV_DOWNSHIFT_MASK	GENMASK(3, 0)
 #define MDIO_AN_VEND_PROV_DOWNSHIFT_DFLT	4
···
 			      phydev->advertising))
 		reg |= MDIO_AN_VEND_PROV_1000BASET_HALF;

+	/* Handle the case when the 2.5G and 5G speeds are not advertised */
+	if (linkmode_test_bit(ETHTOOL_LINK_MODE_2500baseT_Full_BIT,
+			      phydev->advertising))
+		reg |= MDIO_AN_VEND_PROV_2500BASET_FULL;
+
+	if (linkmode_test_bit(ETHTOOL_LINK_MODE_5000baseT_Full_BIT,
+			      phydev->advertising))
+		reg |= MDIO_AN_VEND_PROV_5000BASET_FULL;
+
 	ret = phy_modify_mmd_changed(phydev, MDIO_MMD_AN, MDIO_AN_VEND_PROV,
 				     MDIO_AN_VEND_PROV_1000BASET_HALF |
-				     MDIO_AN_VEND_PROV_1000BASET_FULL, reg);
+				     MDIO_AN_VEND_PROV_1000BASET_FULL |
+				     MDIO_AN_VEND_PROV_2500BASET_FULL |
+				     MDIO_AN_VEND_PROV_5000BASET_FULL, reg);
 	if (ret < 0)
 		return ret;
 	if (ret > 0)
+6
drivers/net/phy/at803x.c
···
 	/* ATHEROS AR9331 */
 	PHY_ID_MATCH_EXACT(ATH9331_PHY_ID),
 	.name			= "Qualcomm Atheros AR9331 built-in PHY",
+	.probe			= at803x_probe,
+	.remove			= at803x_remove,
 	.suspend		= at803x_suspend,
 	.resume			= at803x_resume,
 	.flags			= PHY_POLL_CABLE_TEST,
···
 	/* Qualcomm Atheros QCA9561 */
 	PHY_ID_MATCH_EXACT(QCA9561_PHY_ID),
 	.name			= "Qualcomm Atheros QCA9561 built-in PHY",
+	.probe			= at803x_probe,
+	.remove			= at803x_remove,
 	.suspend		= at803x_suspend,
 	.resume			= at803x_resume,
 	.flags			= PHY_POLL_CABLE_TEST,
···
 	PHY_ID_MATCH_EXACT(QCA8081_PHY_ID),
 	.name			= "Qualcomm QCA8081",
 	.flags			= PHY_POLL_CABLE_TEST,
+	.probe			= at803x_probe,
+	.remove			= at803x_remove,
 	.config_intr		= at803x_config_intr,
 	.handle_interrupt	= at803x_handle_interrupt,
 	.get_tunable		= at803x_get_tunable,
+4 -2
drivers/net/phy/smsc.c
···
 	struct smsc_phy_priv *priv = phydev->priv;
 	int rc;

-	if (!priv->energy_enable)
+	if (!priv->energy_enable || phydev->irq != PHY_POLL)
 		return 0;

 	rc = phy_read(phydev, MII_LAN83C185_CTRL_STATUS);
···
 * response on link pulses to detect presence of plugged Ethernet cable.
 * The Energy Detect Power-Down mode is enabled again in the end of procedure to
 * save approximately 220 mW of power if cable is unplugged.
+ * The workaround is only applicable to poll mode. Energy Detect Power-Down may
+ * not be used in interrupt mode lest link change detection becomes unreliable.
 */
 static int lan87xx_read_status(struct phy_device *phydev)
 {
···
 	int err = genphy_read_status(phydev);

-	if (!phydev->link && priv->energy_enable) {
+	if (!phydev->link && priv->energy_enable && phydev->irq == PHY_POLL) {
 		/* Disable EDPD to wake up PHY */
 		int rc = phy_read(phydev, MII_LAN83C185_CTRL_STATUS);
 		if (rc < 0)
+4
drivers/net/veth.c
···
 static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct veth_priv *rcv_priv, *priv = netdev_priv(dev);
+	struct netdev_queue *queue = NULL;
 	struct veth_rq *rq = NULL;
 	struct net_device *rcv;
 	int length = skb->len;
···
 		rxq = skb_get_queue_mapping(skb);
 		if (rxq < rcv->real_num_rx_queues) {
 			rq = &rcv_priv->rq[rxq];
+			queue = netdev_get_tx_queue(dev, rxq);

 			/* The napi pointer is available when an XDP program is
 			 * attached or when GRO is enabled
···
 	skb_tx_timestamp(skb);
 	if (likely(veth_forward_skb(rcv, skb, rq, use_napi) == NET_RX_SUCCESS)) {
+		if (queue)
+			txq_trans_cond_update(queue);
 		if (!use_napi)
 			dev_lstats_add(dev, length);
 	} else {
+6 -19
drivers/net/virtio_net.c
···
 static void virtnet_freeze_down(struct virtio_device *vdev)
 {
 	struct virtnet_info *vi = vdev->priv;
-	int i;

 	/* Make sure no work handler is accessing the device */
 	flush_work(&vi->config_work);
···
 	netif_tx_lock_bh(vi->dev);
 	netif_device_detach(vi->dev);
 	netif_tx_unlock_bh(vi->dev);
-	cancel_delayed_work_sync(&vi->refill);
-
-	if (netif_running(vi->dev)) {
-		for (i = 0; i < vi->max_queue_pairs; i++) {
-			napi_disable(&vi->rq[i].napi);
-			virtnet_napi_tx_disable(&vi->sq[i].napi);
-		}
-	}
+	if (netif_running(vi->dev))
+		virtnet_close(vi->dev);
 }

 static int init_vqs(struct virtnet_info *vi);
···
 static int virtnet_restore_up(struct virtio_device *vdev)
 {
 	struct virtnet_info *vi = vdev->priv;
-	int err, i;
+	int err;

 	err = init_vqs(vi);
 	if (err)
···
 	virtio_device_ready(vdev);

 	if (netif_running(vi->dev)) {
-		for (i = 0; i < vi->curr_queue_pairs; i++)
-			if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
-				schedule_delayed_work(&vi->refill, 0);
-
-		for (i = 0; i < vi->max_queue_pairs; i++) {
-			virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
-			virtnet_napi_tx_enable(vi, vi->sq[i].vq,
-					       &vi->sq[i].napi);
-		}
+		err = virtnet_open(vi->dev);
+		if (err)
+			return err;
 	}

 	netif_tx_lock_bh(vi->dev);
+5
include/net/inet_sock.h
···
 #define IP_CMSG_CHECKSUM	BIT(7)
 #define IP_CMSG_RECVFRAGSIZE	BIT(8)

+static inline bool sk_is_inet(struct sock *sk)
+{
+	return sk->sk_family == AF_INET || sk->sk_family == AF_INET6;
+}
+
 /**
  * sk_to_full_sk - Access to a full socket
  * @sk: pointer to a socket
+5
kernel/bpf/btf.c
···
 	n = btf_nr_types(btf);
 	for (i = start_id; i < n; i++) {
 		const struct btf_type *t;
+		int chain_limit = 32;
 		u32 cur_id = i;

 		t = btf_type_by_id(btf, i);
···
 		in_tags = btf_type_is_type_tag(t);
 		while (btf_type_is_modifier(t)) {
+			if (!chain_limit--) {
+				btf_verifier_log(env, "Max chain length or cycle detected");
+				return -ELOOP;
+			}
 			if (btf_type_is_type_tag(t)) {
 				if (!in_tags) {
 					btf_verifier_log(env, "Type tags don't precede modifiers");
+50 -20
kernel/trace/bpf_trace.c
···
 	kprobe_multi_link_prog_run(link, entry_ip, regs);
 }

-static int symbols_cmp(const void *a, const void *b)
+static int symbols_cmp_r(const void *a, const void *b, const void *priv)
 {
 	const char **str_a = (const char **) a;
 	const char **str_b = (const char **) b;

 	return strcmp(*str_a, *str_b);
+}
+
+struct multi_symbols_sort {
+	const char **funcs;
+	u64 *cookies;
+};
+
+static void symbols_swap_r(void *a, void *b, int size, const void *priv)
+{
+	const struct multi_symbols_sort *data = priv;
+	const char **name_a = a, **name_b = b;
+
+	swap(*name_a, *name_b);
+
+	/* If defined, swap also related cookies. */
+	if (data->cookies) {
+		u64 *cookie_a, *cookie_b;
+
+		cookie_a = data->cookies + (name_a - data->funcs);
+		cookie_b = data->cookies + (name_b - data->funcs);
+		swap(*cookie_a, *cookie_b);
+	}
 }

 int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
···
 	if (!addrs)
 		return -ENOMEM;

-	if (uaddrs) {
-		if (copy_from_user(addrs, uaddrs, size)) {
-			err = -EFAULT;
-			goto error;
-		}
-	} else {
-		struct user_syms us;
-
-		err = copy_user_syms(&us, usyms, cnt);
-		if (err)
-			goto error;
-
-		sort(us.syms, cnt, sizeof(*us.syms), symbols_cmp, NULL);
-		err = ftrace_lookup_symbols(us.syms, cnt, addrs);
-		free_user_syms(&us);
-		if (err)
-			goto error;
-	}
-
 	ucookies = u64_to_user_ptr(attr->link_create.kprobe_multi.cookies);
 	if (ucookies) {
 		cookies = kvmalloc_array(cnt, sizeof(*addrs), GFP_KERNEL);
···
 			err = -EFAULT;
 			goto error;
 		}
+	}
+
+	if (uaddrs) {
+		if (copy_from_user(addrs, uaddrs, size)) {
+			err = -EFAULT;
+			goto error;
+		}
+	} else {
+		struct multi_symbols_sort data = {
+			.cookies = cookies,
+		};
+		struct user_syms us;
+
+		err = copy_user_syms(&us, usyms, cnt);
+		if (err)
+			goto error;
+
+		if (cookies)
+			data.funcs = us.syms;
+
+		sort_r(us.syms, cnt, sizeof(*us.syms), symbols_cmp_r,
+		       symbols_swap_r, &data);
+
+		err = ftrace_lookup_symbols(us.syms, cnt, addrs);
+		free_user_syms(&us);
+		if (err)
+			goto error;
 	}

 	link = kzalloc(sizeof(*link), GFP_KERNEL);
+11 -2
kernel/trace/ftrace.c
···
 			  struct module *mod, unsigned long addr)
 {
 	struct kallsyms_data *args = data;
+	const char **sym;
+	int idx;

-	if (!bsearch(&name, args->syms, args->cnt, sizeof(*args->syms), symbols_cmp))
+	sym = bsearch(&name, args->syms, args->cnt, sizeof(*args->syms), symbols_cmp);
+	if (!sym)
+		return 0;
+
+	idx = sym - args->syms;
+	if (args->addrs[idx])
 		return 0;

 	addr = ftrace_location(addr);
 	if (!addr)
 		return 0;

-	args->addrs[args->found++] = addr;
+	args->addrs[idx] = addr;
+	args->found++;
 	return args->found == args->cnt ? 1 : 0;
 }
···
 	struct kallsyms_data args;
 	int err;

+	memset(addrs, 0, sizeof(*addrs) * cnt);
 	args.addrs = addrs;
 	args.syms = sorted_syms;
 	args.cnt = cnt;
+9
kernel/trace/rethook.c
···
 	if (unlikely(!handler))
 		return NULL;

+	/*
+	 * This expects the caller will set up a rethook on a function entry.
+	 * When the function returns, the rethook will eventually be reclaimed
+	 * or released in the rethook_recycle() with call_rcu().
+	 * This means the caller must be run in the RCU-availabe context.
+	 */
+	if (unlikely(!rcu_is_watching()))
+		return NULL;
+
 	fn = freelist_try_get(&rh->pool);
 	if (!fn)
 		return NULL;
+15 -10
net/core/dev.c
···
 /* Device list removal
  * caller must respect a RCU grace period before freeing/reusing dev
  */
-static void unlist_netdevice(struct net_device *dev)
+static void unlist_netdevice(struct net_device *dev, bool lock)
 {
 	ASSERT_RTNL();

 	/* Unlink dev from the device chain */
-	write_lock(&dev_base_lock);
+	if (lock)
+		write_lock(&dev_base_lock);
 	list_del_rcu(&dev->dev_list);
 	netdev_name_node_del(dev->name_node);
 	hlist_del_rcu(&dev->index_hlist);
-	write_unlock(&dev_base_lock);
+	if (lock)
+		write_unlock(&dev_base_lock);

 	dev_base_seq_inc(dev_net(dev));
 }
···
 		goto err_uninit;

 	ret = netdev_register_kobject(dev);
-	if (ret) {
-		dev->reg_state = NETREG_UNREGISTERED;
+	write_lock(&dev_base_lock);
+	dev->reg_state = ret ? NETREG_UNREGISTERED : NETREG_REGISTERED;
+	write_unlock(&dev_base_lock);
+	if (ret)
 		goto err_uninit;
-	}
-	dev->reg_state = NETREG_REGISTERED;

 	__netdev_update_features(dev);
···
 			continue;
 		}

+		write_lock(&dev_base_lock);
 		dev->reg_state = NETREG_UNREGISTERED;
+		write_unlock(&dev_base_lock);
 		linkwatch_forget_dev(dev);
 	}
···
 	list_for_each_entry(dev, head, unreg_list) {
 		/* And unlink it from device chain. */
-		unlist_netdevice(dev);
-
+		write_lock(&dev_base_lock);
+		unlist_netdevice(dev, false);
 		dev->reg_state = NETREG_UNREGISTERING;
+		write_unlock(&dev_base_lock);
 	}
 	flush_all_backlogs();
···
 	dev_close(dev);

 	/* And unlink it from device chain */
-	unlist_netdevice(dev);
+	unlist_netdevice(dev, true);

 	synchronize_net();
+28 -6
net/core/filter.c
···
 					   ifindex, proto, netns_id, flags);

 	if (sk) {
-		sk = sk_to_full_sk(sk);
-		if (!sk_fullsock(sk)) {
+		struct sock *sk2 = sk_to_full_sk(sk);
+
+		/* sk_to_full_sk() may return (sk)->rsk_listener, so make sure the original sk
+		 * sock refcnt is decremented to prevent a request_sock leak.
+		 */
+		if (!sk_fullsock(sk2))
+			sk2 = NULL;
+		if (sk2 != sk) {
 			sock_gen_put(sk);
-			return NULL;
+			/* Ensure there is no need to bump sk2 refcnt */
+			if (unlikely(sk2 && !sock_flag(sk2, SOCK_RCU_FREE))) {
+				WARN_ONCE(1, "Found non-RCU, unreferenced socket!");
+				return NULL;
+			}
+			sk = sk2;
 		}
 	}
···
 					 flags);

 	if (sk) {
-		sk = sk_to_full_sk(sk);
-		if (!sk_fullsock(sk)) {
+		struct sock *sk2 = sk_to_full_sk(sk);
+
+		/* sk_to_full_sk() may return (sk)->rsk_listener, so make sure the original sk
+		 * sock refcnt is decremented to prevent a request_sock leak.
+		 */
+		if (!sk_fullsock(sk2))
+			sk2 = NULL;
+		if (sk2 != sk) {
 			sock_gen_put(sk);
-			return NULL;
+			/* Ensure there is no need to bump sk2 refcnt */
+			if (unlikely(sk2 && !sock_flag(sk2, SOCK_RCU_FREE))) {
+				WARN_ONCE(1, "Found non-RCU, unreferenced socket!");
+				return NULL;
+			}
+			sk = sk2;
 		}
 	}
+1
net/core/net-sysfs.c
···
 static const char fmt_ulong[] = "%lu\n";
 static const char fmt_u64[] = "%llu\n";

+/* Caller holds RTNL or dev_base_lock */
 static inline int dev_isalive(const struct net_device *dev)
 {
     return dev->reg_state <= NETREG_REGISTERED;
+5
net/core/skmsg.c
···
     write_lock_bh(&sk->sk_callback_lock);

+    if (sk_is_inet(sk) && inet_csk_has_ulp(sk)) {
+        psock = ERR_PTR(-EINVAL);
+        goto out;
+    }
+
     if (sk->sk_user_data) {
         psock = ERR_PTR(-EBUSY);
         goto out;
+1 -1
net/ethtool/eeprom.c
···
     if (request->page)
         offset = request->page * ETH_MODULE_EEPROM_PAGE_LEN + offset;

-    if (modinfo->type == ETH_MODULE_SFF_8079 &&
+    if (modinfo->type == ETH_MODULE_SFF_8472 &&
         request->i2c_address == 0x51)
         offset += ETH_MODULE_EEPROM_PAGE_LEN * 2;
+10 -5
net/ipv4/ip_gre.c
···
     int tunnel_hlen;
     int version;
     int nhoff;
-    int thoff;

     tun_info = skb_tunnel_info(skb);
     if (unlikely(!tun_info || !(tun_info->mode & IP_TUNNEL_INFO_TX) ||
···
         (ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff))
         truncate = true;

-    thoff = skb_transport_header(skb) - skb_mac_header(skb);
-    if (skb->protocol == htons(ETH_P_IPV6) &&
-        (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff))
-        truncate = true;
+    if (skb->protocol == htons(ETH_P_IPV6)) {
+        int thoff;
+
+        if (skb_transport_header_was_set(skb))
+            thoff = skb_transport_header(skb) - skb_mac_header(skb);
+        else
+            thoff = nhoff + sizeof(struct ipv6hdr);
+        if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)
+            truncate = true;
+    }

     if (version == 1) {
         erspan_build_header(skb, ntohl(tunnel_id_to_key32(key->tun_id)),
+7 -3
net/ipv4/ping.c
···
     pr_debug("ping_check_bind_addr(sk=%p,addr=%pI4,port=%d)\n",
          sk, &addr->sin_addr.s_addr, ntohs(addr->sin_port));

+    if (addr->sin_addr.s_addr == htonl(INADDR_ANY))
+        return 0;
+
     tb_id = l3mdev_fib_table_by_index(net, sk->sk_bound_dev_if) ? : tb_id;
     chk_addr_ret = inet_addr_type_table(net, addr->sin_addr.s_addr, tb_id);

-    if (!inet_addr_valid_or_nonlocal(net, inet_sk(sk),
-                                     addr->sin_addr.s_addr,
-                                     chk_addr_ret))
+    if (chk_addr_ret == RTN_MULTICAST ||
+        chk_addr_ret == RTN_BROADCAST ||
+        (chk_addr_ret != RTN_LOCAL &&
+         !inet_can_nonlocal_bind(net, isk)))
         return -EADDRNOTAVAIL;

 #if IS_ENABLED(CONFIG_IPV6)
-3
net/ipv4/tcp_bpf.c
···
         return 0;
     }

-    if (inet_csk_has_ulp(sk))
-        return -EINVAL;
-
     if (sk->sk_family == AF_INET6) {
         if (tcp_bpf_assert_proto_ops(psock->sk_proto))
             return -EINVAL;
+10 -5
net/ipv6/ip6_gre.c
···
     __be16 proto;
     __u32 mtu;
     int nhoff;
-    int thoff;

     if (!pskb_inet_may_pull(skb))
         goto tx_err;
···
         (ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff))
         truncate = true;

-    thoff = skb_transport_header(skb) - skb_mac_header(skb);
-    if (skb->protocol == htons(ETH_P_IPV6) &&
-        (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff))
-        truncate = true;
+    if (skb->protocol == htons(ETH_P_IPV6)) {
+        int thoff;
+
+        if (skb_transport_header_was_set(skb))
+            thoff = skb_transport_header(skb) - skb_mac_header(skb);
+        else
+            thoff = nhoff + sizeof(struct ipv6hdr);
+        if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)
+            truncate = true;
+    }

     if (skb_cow_head(skb, dev->needed_headroom ?: t->hlen))
         goto tx_err;
+21 -4
net/netfilter/nf_dup_netdev.c
···
 #include <net/netfilter/nf_tables_offload.h>
 #include <net/netfilter/nf_dup_netdev.h>

-static void nf_do_netdev_egress(struct sk_buff *skb, struct net_device *dev)
+#define NF_RECURSION_LIMIT 2
+
+static DEFINE_PER_CPU(u8, nf_dup_skb_recursion);
+
+static void nf_do_netdev_egress(struct sk_buff *skb, struct net_device *dev,
+                                enum nf_dev_hooks hook)
 {
-    if (skb_mac_header_was_set(skb))
+    if (__this_cpu_read(nf_dup_skb_recursion) > NF_RECURSION_LIMIT)
+        goto err;
+
+    if (hook == NF_NETDEV_INGRESS && skb_mac_header_was_set(skb)) {
+        if (skb_cow_head(skb, skb->mac_len))
+            goto err;
+
         skb_push(skb, skb->mac_len);
+    }

     skb->dev = dev;
     skb_clear_tstamp(skb);
+    __this_cpu_inc(nf_dup_skb_recursion);
     dev_queue_xmit(skb);
+    __this_cpu_dec(nf_dup_skb_recursion);
+    return;
+err:
+    kfree_skb(skb);
 }

 void nf_fwd_netdev_egress(const struct nft_pktinfo *pkt, int oif)
···
         return;
     }

-    nf_do_netdev_egress(pkt->skb, dev);
+    nf_do_netdev_egress(pkt->skb, dev, nft_hook(pkt));
 }
 EXPORT_SYMBOL_GPL(nf_fwd_netdev_egress);
···
     skb = skb_clone(pkt->skb, GFP_ATOMIC);
     if (skb)
-        nf_do_netdev_egress(skb, dev);
+        nf_do_netdev_egress(skb, dev, nft_hook(pkt));
 }
 EXPORT_SYMBOL_GPL(nf_dup_netdev_egress);
+2 -11
net/netfilter/nft_meta.c
···
 #include <linux/in.h>
 #include <linux/ip.h>
 #include <linux/ipv6.h>
+#include <linux/random.h>
 #include <linux/smp.h>
 #include <linux/static_key.h>
 #include <net/dst.h>
···
 #define NFT_META_SECS_PER_HOUR 3600
 #define NFT_META_SECS_PER_DAY 86400
 #define NFT_META_DAYS_PER_WEEK 7
-
-static DEFINE_PER_CPU(struct rnd_state, nft_prandom_state);

 static u8 nft_meta_weekday(void)
 {
···
     return true;
 }

-static noinline u32 nft_prandom_u32(void)
-{
-    struct rnd_state *state = this_cpu_ptr(&nft_prandom_state);
-
-    return prandom_u32_state(state);
-}
-
 #ifdef CONFIG_IP_ROUTE_CLASSID
 static noinline bool
 nft_meta_get_eval_rtclassid(const struct sk_buff *skb, u32 *dest)
···
         break;
 #endif
     case NFT_META_PRANDOM:
-        *dest = nft_prandom_u32();
+        *dest = get_random_u32();
         break;
 #ifdef CONFIG_XFRM
     case NFT_META_SECPATH:
···
         len = IFNAMSIZ;
         break;
     case NFT_META_PRANDOM:
-        prandom_init_once(&nft_prandom_state);
         len = sizeof(u32);
         break;
 #ifdef CONFIG_XFRM
+3 -9
net/netfilter/nft_numgen.c
···
 #include <linux/netlink.h>
 #include <linux/netfilter.h>
 #include <linux/netfilter/nf_tables.h>
+#include <linux/random.h>
 #include <linux/static_key.h>
 #include <net/netfilter/nf_tables.h>
 #include <net/netfilter/nf_tables_core.h>
-
-static DEFINE_PER_CPU(struct rnd_state, nft_numgen_prandom_state);

 struct nft_ng_inc {
     u8 dreg;
···
     u32 offset;
 };

-static u32 nft_ng_random_gen(struct nft_ng_random *priv)
+static u32 nft_ng_random_gen(const struct nft_ng_random *priv)
 {
-    struct rnd_state *state = this_cpu_ptr(&nft_numgen_prandom_state);
-
-    return reciprocal_scale(prandom_u32_state(state), priv->modulus) +
-           priv->offset;
+    return reciprocal_scale(get_random_u32(), priv->modulus) + priv->offset;
 }

 static void nft_ng_random_eval(const struct nft_expr *expr,
···
     if (priv->offset + priv->modulus - 1 < priv->offset)
         return -EOVERFLOW;
-
-    prandom_init_once(&nft_numgen_prandom_state);

     return nft_parse_register_store(ctx, tb[NFTA_NG_DREG], &priv->dreg,
                                     NULL, NFT_DATA_VALUE, sizeof(u32));
+1 -1
net/openvswitch/flow.c
···
     if (flags & IP6_FH_F_FRAG) {
         if (frag_off) {
             key->ip.frag = OVS_FRAG_TYPE_LATER;
-            key->ip.proto = nexthdr;
+            key->ip.proto = NEXTHDR_FRAGMENT;
             return 0;
         }
         key->ip.frag = OVS_FRAG_TYPE_FIRST;
+2 -2
net/sched/sch_netem.c
···
     struct tc_netem_rate rate;
     struct tc_netem_slot slot;

-    qopt.latency = min_t(psched_tdiff_t, PSCHED_NS2TICKS(q->latency),
+    qopt.latency = min_t(psched_time_t, PSCHED_NS2TICKS(q->latency),
                          UINT_MAX);
-    qopt.jitter = min_t(psched_tdiff_t, PSCHED_NS2TICKS(q->jitter),
+    qopt.jitter = min_t(psched_time_t, PSCHED_NS2TICKS(q->jitter),
                         UINT_MAX);
     qopt.limit = q->limit;
     qopt.loss = q->loss;
+1 -2
net/tipc/core.c
···
     struct tipc_net *tn = tipc_net(net);

     tipc_detach_loopback(net);
+    tipc_net_stop(net);
     /* Make sure the tipc_net_finalize_work() finished */
     cancel_work_sync(&tn->work);
-    tipc_net_stop(net);
-
     tipc_bcast_stop(net);
     tipc_nametbl_stop(net);
     tipc_sk_rht_destroy(net);
+2
net/tls/tls_main.c
···
 {
     struct tls_context *ctx;

+    WARN_ON_ONCE(sk->sk_prot == p);
+
     ctx = tls_get_ctx(sk);
     if (likely(ctx)) {
         ctx->sk_write_space = write_space;
+9 -7
net/xdp/xsk.c
···
         goto out;
     }

-        skb = xsk_build_skb(xs, &desc);
-        if (IS_ERR(skb)) {
-            err = PTR_ERR(skb);
-            goto out;
-        }
-
         /* This is the backpressure mechanism for the Tx path.
          * Reserve space in the completion queue and only proceed
          * if there is space in it. This avoids having to implement
···
         spin_lock_irqsave(&xs->pool->cq_lock, flags);
         if (xskq_prod_reserve(xs->pool->cq)) {
             spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
-            kfree_skb(skb);
             goto out;
         }
         spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
+
+        skb = xsk_build_skb(xs, &desc);
+        if (IS_ERR(skb)) {
+            err = PTR_ERR(skb);
+            spin_lock_irqsave(&xs->pool->cq_lock, flags);
+            xskq_prod_cancel(xs->pool->cq);
+            spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
+            goto out;
+        }

         err = __dev_direct_xmit(skb, xs->queue_id);
         if (err == NETDEV_TX_BUSY) {
+25 -4
samples/fprobe/fprobe_example.c
···
 #define BACKTRACE_DEPTH 16
 #define MAX_SYMBOL_LEN 4096
 struct fprobe sample_probe;
+static unsigned long nhit;

 static char symbol[MAX_SYMBOL_LEN] = "kernel_clone";
 module_param_string(symbol, symbol, sizeof(symbol), 0644);
···
 module_param_string(nosymbol, nosymbol, sizeof(nosymbol), 0644);
 static bool stackdump = true;
 module_param(stackdump, bool, 0644);
+static bool use_trace = false;
+module_param(use_trace, bool, 0644);

 static void show_backtrace(void)
 {
···
 static void sample_entry_handler(struct fprobe *fp, unsigned long ip, struct pt_regs *regs)
 {
-    pr_info("Enter <%pS> ip = 0x%p\n", (void *)ip, (void *)ip);
+    if (use_trace)
+        /*
+         * This is just an example, no kernel code should call
+         * trace_printk() except when actively debugging.
+         */
+        trace_printk("Enter <%pS> ip = 0x%p\n", (void *)ip, (void *)ip);
+    else
+        pr_info("Enter <%pS> ip = 0x%p\n", (void *)ip, (void *)ip);
+    nhit++;
     if (stackdump)
         show_backtrace();
 }
···
 {
     unsigned long rip = instruction_pointer(regs);

-    pr_info("Return from <%pS> ip = 0x%p to rip = 0x%p (%pS)\n",
-        (void *)ip, (void *)ip, (void *)rip, (void *)rip);
+    if (use_trace)
+        /*
+         * This is just an example, no kernel code should call
+         * trace_printk() except when actively debugging.
+         */
+        trace_printk("Return from <%pS> ip = 0x%p to rip = 0x%p (%pS)\n",
+                     (void *)ip, (void *)ip, (void *)rip, (void *)rip);
+    else
+        pr_info("Return from <%pS> ip = 0x%p to rip = 0x%p (%pS)\n",
+                (void *)ip, (void *)ip, (void *)rip, (void *)rip);
+    nhit++;
     if (stackdump)
         show_backtrace();
 }
···
 {
     unregister_fprobe(&sample_probe);

-    pr_info("fprobe at %s unregistered\n", symbol);
+    pr_info("fprobe at %s unregistered. %ld times hit, %ld times missed\n",
+            symbol, nhit, sample_probe.nmissed);
 }

 module_init(fprobe_init)
+39 -39
tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
···
 })

     GET_ADDR("bpf_fentry_test1", addrs[0]);
-    GET_ADDR("bpf_fentry_test2", addrs[1]);
-    GET_ADDR("bpf_fentry_test3", addrs[2]);
-    GET_ADDR("bpf_fentry_test4", addrs[3]);
-    GET_ADDR("bpf_fentry_test5", addrs[4]);
-    GET_ADDR("bpf_fentry_test6", addrs[5]);
-    GET_ADDR("bpf_fentry_test7", addrs[6]);
+    GET_ADDR("bpf_fentry_test3", addrs[1]);
+    GET_ADDR("bpf_fentry_test4", addrs[2]);
+    GET_ADDR("bpf_fentry_test5", addrs[3]);
+    GET_ADDR("bpf_fentry_test6", addrs[4]);
+    GET_ADDR("bpf_fentry_test7", addrs[5]);
+    GET_ADDR("bpf_fentry_test2", addrs[6]);
     GET_ADDR("bpf_fentry_test8", addrs[7]);

 #undef GET_ADDR

-    cookies[0] = 1;
-    cookies[1] = 2;
-    cookies[2] = 3;
-    cookies[3] = 4;
-    cookies[4] = 5;
-    cookies[5] = 6;
-    cookies[6] = 7;
-    cookies[7] = 8;
+    cookies[0] = 1; /* bpf_fentry_test1 */
+    cookies[1] = 2; /* bpf_fentry_test3 */
+    cookies[2] = 3; /* bpf_fentry_test4 */
+    cookies[3] = 4; /* bpf_fentry_test5 */
+    cookies[4] = 5; /* bpf_fentry_test6 */
+    cookies[5] = 6; /* bpf_fentry_test7 */
+    cookies[6] = 7; /* bpf_fentry_test2 */
+    cookies[7] = 8; /* bpf_fentry_test8 */

     opts.kprobe_multi.addrs = (const unsigned long *) &addrs;
     opts.kprobe_multi.cnt = ARRAY_SIZE(addrs);
···
     if (!ASSERT_GE(link1_fd, 0, "link1_fd"))
         goto cleanup;

-    cookies[0] = 8;
-    cookies[1] = 7;
-    cookies[2] = 6;
-    cookies[3] = 5;
-    cookies[4] = 4;
-    cookies[5] = 3;
-    cookies[6] = 2;
-    cookies[7] = 1;
+    cookies[0] = 8; /* bpf_fentry_test1 */
+    cookies[1] = 7; /* bpf_fentry_test3 */
+    cookies[2] = 6; /* bpf_fentry_test4 */
+    cookies[3] = 5; /* bpf_fentry_test5 */
+    cookies[4] = 4; /* bpf_fentry_test6 */
+    cookies[5] = 3; /* bpf_fentry_test7 */
+    cookies[6] = 2; /* bpf_fentry_test2 */
+    cookies[7] = 1; /* bpf_fentry_test8 */

     opts.kprobe_multi.flags = BPF_F_KPROBE_MULTI_RETURN;
     prog_fd = bpf_program__fd(skel->progs.test_kretprobe);
···
     struct kprobe_multi *skel = NULL;
     const char *syms[8] = {
         "bpf_fentry_test1",
-        "bpf_fentry_test2",
         "bpf_fentry_test3",
         "bpf_fentry_test4",
         "bpf_fentry_test5",
         "bpf_fentry_test6",
         "bpf_fentry_test7",
+        "bpf_fentry_test2",
         "bpf_fentry_test8",
     };
     __u64 cookies[8];
···
     skel->bss->pid = getpid();
     skel->bss->test_cookie = true;

-    cookies[0] = 1;
-    cookies[1] = 2;
-    cookies[2] = 3;
-    cookies[3] = 4;
-    cookies[4] = 5;
-    cookies[5] = 6;
-    cookies[6] = 7;
-    cookies[7] = 8;
+    cookies[0] = 1; /* bpf_fentry_test1 */
+    cookies[1] = 2; /* bpf_fentry_test3 */
+    cookies[2] = 3; /* bpf_fentry_test4 */
+    cookies[3] = 4; /* bpf_fentry_test5 */
+    cookies[4] = 5; /* bpf_fentry_test6 */
+    cookies[5] = 6; /* bpf_fentry_test7 */
+    cookies[6] = 7; /* bpf_fentry_test2 */
+    cookies[7] = 8; /* bpf_fentry_test8 */

     opts.syms = syms;
     opts.cnt = ARRAY_SIZE(syms);
···
     if (!ASSERT_OK_PTR(link1, "bpf_program__attach_kprobe_multi_opts"))
         goto cleanup;

-    cookies[0] = 8;
-    cookies[1] = 7;
-    cookies[2] = 6;
-    cookies[3] = 5;
-    cookies[4] = 4;
-    cookies[5] = 3;
-    cookies[6] = 2;
-    cookies[7] = 1;
+    cookies[0] = 8; /* bpf_fentry_test1 */
+    cookies[1] = 7; /* bpf_fentry_test3 */
+    cookies[2] = 6; /* bpf_fentry_test4 */
+    cookies[3] = 5; /* bpf_fentry_test5 */
+    cookies[4] = 4; /* bpf_fentry_test6 */
+    cookies[5] = 3; /* bpf_fentry_test7 */
+    cookies[6] = 2; /* bpf_fentry_test2 */
+    cookies[7] = 1; /* bpf_fentry_test8 */

     opts.retprobe = true;
+3
tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c
···
         continue;
     if (!strncmp(name, "rcu_", 4))
         continue;
+    if (!strncmp(name, "__ftrace_invalid_address__",
+                 sizeof("__ftrace_invalid_address__") - 1))
+        continue;
     err = hashmap__add(map, name, NULL);
     if (err) {
         free(name);
+55
tools/testing/selftests/bpf/prog_tests/tailcalls.c
···
     bpf_object__close(obj);
 }

+#include "tailcall_bpf2bpf6.skel.h"
+
+/* Tail call counting works even when there is data on stack which is
+ * not aligned to 8 bytes.
+ */
+static void test_tailcall_bpf2bpf_6(void)
+{
+    struct tailcall_bpf2bpf6 *obj;
+    int err, map_fd, prog_fd, main_fd, data_fd, i, val;
+    LIBBPF_OPTS(bpf_test_run_opts, topts,
+        .data_in = &pkt_v4,
+        .data_size_in = sizeof(pkt_v4),
+        .repeat = 1,
+    );
+
+    obj = tailcall_bpf2bpf6__open_and_load();
+    if (!ASSERT_OK_PTR(obj, "open and load"))
+        return;
+
+    main_fd = bpf_program__fd(obj->progs.entry);
+    if (!ASSERT_GE(main_fd, 0, "entry prog fd"))
+        goto out;
+
+    map_fd = bpf_map__fd(obj->maps.jmp_table);
+    if (!ASSERT_GE(map_fd, 0, "jmp_table map fd"))
+        goto out;
+
+    prog_fd = bpf_program__fd(obj->progs.classifier_0);
+    if (!ASSERT_GE(prog_fd, 0, "classifier_0 prog fd"))
+        goto out;
+
+    i = 0;
+    err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY);
+    if (!ASSERT_OK(err, "jmp_table map update"))
+        goto out;
+
+    err = bpf_prog_test_run_opts(main_fd, &topts);
+    ASSERT_OK(err, "entry prog test run");
+    ASSERT_EQ(topts.retval, 0, "tailcall retval");
+
+    data_fd = bpf_map__fd(obj->maps.bss);
+    if (!ASSERT_GE(map_fd, 0, "bss map fd"))
+        goto out;
+
+    i = 0;
+    err = bpf_map_lookup_elem(data_fd, &i, &val);
+    ASSERT_OK(err, "bss map lookup");
+    ASSERT_EQ(val, 1, "done flag is set");
+
+out:
+    tailcall_bpf2bpf6__destroy(obj);
+}
+
 void test_tailcalls(void)
 {
     if (test__start_subtest("tailcall_1"))
···
         test_tailcall_bpf2bpf_4(false);
     if (test__start_subtest("tailcall_bpf2bpf_5"))
         test_tailcall_bpf2bpf_4(true);
+    if (test__start_subtest("tailcall_bpf2bpf_6"))
+        test_tailcall_bpf2bpf_6();
 }
+12 -12
tools/testing/selftests/bpf/progs/kprobe_multi.c
···
     if (is_return) {
         SET(kretprobe_test1_result, &bpf_fentry_test1, 8);
-        SET(kretprobe_test2_result, &bpf_fentry_test2, 7);
-        SET(kretprobe_test3_result, &bpf_fentry_test3, 6);
-        SET(kretprobe_test4_result, &bpf_fentry_test4, 5);
-        SET(kretprobe_test5_result, &bpf_fentry_test5, 4);
-        SET(kretprobe_test6_result, &bpf_fentry_test6, 3);
-        SET(kretprobe_test7_result, &bpf_fentry_test7, 2);
+        SET(kretprobe_test2_result, &bpf_fentry_test2, 2);
+        SET(kretprobe_test3_result, &bpf_fentry_test3, 7);
+        SET(kretprobe_test4_result, &bpf_fentry_test4, 6);
+        SET(kretprobe_test5_result, &bpf_fentry_test5, 5);
+        SET(kretprobe_test6_result, &bpf_fentry_test6, 4);
+        SET(kretprobe_test7_result, &bpf_fentry_test7, 3);
         SET(kretprobe_test8_result, &bpf_fentry_test8, 1);
     } else {
         SET(kprobe_test1_result, &bpf_fentry_test1, 1);
-        SET(kprobe_test2_result, &bpf_fentry_test2, 2);
-        SET(kprobe_test3_result, &bpf_fentry_test3, 3);
-        SET(kprobe_test4_result, &bpf_fentry_test4, 4);
-        SET(kprobe_test5_result, &bpf_fentry_test5, 5);
-        SET(kprobe_test6_result, &bpf_fentry_test6, 6);
-        SET(kprobe_test7_result, &bpf_fentry_test7, 7);
+        SET(kprobe_test2_result, &bpf_fentry_test2, 7);
+        SET(kprobe_test3_result, &bpf_fentry_test3, 2);
+        SET(kprobe_test4_result, &bpf_fentry_test4, 3);
+        SET(kprobe_test5_result, &bpf_fentry_test5, 4);
+        SET(kprobe_test6_result, &bpf_fentry_test6, 5);
+        SET(kprobe_test7_result, &bpf_fentry_test7, 6);
         SET(kprobe_test8_result, &bpf_fentry_test8, 8);
     }
+42
tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c (new file)
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+#define __unused __attribute__((unused))
+
+struct {
+    __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+    __uint(max_entries, 1);
+    __uint(key_size, sizeof(__u32));
+    __uint(value_size, sizeof(__u32));
+} jmp_table SEC(".maps");
+
+int done = 0;
+
+SEC("tc")
+int classifier_0(struct __sk_buff *skb __unused)
+{
+    done = 1;
+    return 0;
+}
+
+static __noinline
+int subprog_tail(struct __sk_buff *skb)
+{
+    /* Don't propagate the constant to the caller */
+    volatile int ret = 1;
+
+    bpf_tail_call_static(skb, &jmp_table, 0);
+    return ret;
+}
+
+SEC("tc")
+int entry(struct __sk_buff *skb)
+{
+    /* Have data on stack which size is not a multiple of 8 */
+    volatile char arr[1] = {};
+
+    return subprog_tail(skb);
+}
+
+char __license[] SEC("license") = "GPL";
+56 -5
tools/testing/selftests/net/fcnal-test.sh
···
 NL_IP=172.17.1.1
 NL_IP6=2001:db8:4::1

+# multicast and broadcast addresses
+MCAST_IP=224.0.0.1
+BCAST_IP=255.255.255.255
+
 MD5_PW=abc123
 MD5_WRONG_PW=abc1234
···
     case "$1" in
     127.0.0.1) echo "loopback";;
     ::1) echo "IPv6 loopback";;
+
+    ${BCAST_IP}) echo "broadcast";;
+    ${MCAST_IP}) echo "multicast";;

     ${NSA_IP}) echo "ns-A IP";;
     ${NSA_IP6}) echo "ns-A IPv6";;
···
     done

     #
-    # raw socket with nonlocal bind
+    # tests for nonlocal bind
     #
     a=${NL_IP}
     log_start
-    run_cmd nettest -s -R -P icmp -f -l ${a} -I ${NSA_DEV} -b
-    log_test_addr ${a} $? 0 "Raw socket bind to nonlocal address after device bind"
+    run_cmd nettest -s -R -f -l ${a} -b
+    log_test_addr ${a} $? 0 "Raw socket bind to nonlocal address"
+
+    log_start
+    run_cmd nettest -s -f -l ${a} -b
+    log_test_addr ${a} $? 0 "TCP socket bind to nonlocal address"
+
+    log_start
+    run_cmd nettest -s -D -P icmp -f -l ${a} -b
+    log_test_addr ${a} $? 0 "ICMP socket bind to nonlocal address"
+
+    #
+    # check that ICMP sockets cannot bind to broadcast and multicast addresses
+    #
+    a=${BCAST_IP}
+    log_start
+    run_cmd nettest -s -D -P icmp -l ${a} -b
+    log_test_addr ${a} $? 1 "ICMP socket bind to broadcast address"
+
+    a=${MCAST_IP}
+    log_start
+    run_cmd nettest -s -D -P icmp -l ${a} -b
+    log_test_addr ${a} $? 1 "ICMP socket bind to multicast address"

     #
     # tcp sockets
···
     log_test_addr ${a} $? 1 "Raw socket bind to out of scope address after VRF bind"

     #
-    # raw socket with nonlocal bind
+    # tests for nonlocal bind
     #
     a=${NL_IP}
     log_start
-    run_cmd nettest -s -R -P icmp -f -l ${a} -I ${VRF} -b
+    run_cmd nettest -s -R -f -l ${a} -I ${VRF} -b
     log_test_addr ${a} $? 0 "Raw socket bind to nonlocal address after VRF bind"
+
+    log_start
+    run_cmd nettest -s -f -l ${a} -I ${VRF} -b
+    log_test_addr ${a} $? 0 "TCP socket bind to nonlocal address after VRF bind"
+
+    log_start
+    run_cmd nettest -s -D -P icmp -f -l ${a} -I ${VRF} -b
+    log_test_addr ${a} $? 0 "ICMP socket bind to nonlocal address after VRF bind"
+
+    #
+    # check that ICMP sockets cannot bind to broadcast and multicast addresses
+    #
+    a=${BCAST_IP}
+    log_start
+    run_cmd nettest -s -D -P icmp -l ${a} -I ${VRF} -b
+    log_test_addr ${a} $? 1 "ICMP socket bind to broadcast address after VRF bind"
+
+    a=${MCAST_IP}
+    log_start
+    run_cmd nettest -s -D -P icmp -l ${a} -I ${VRF} -b
+    log_test_addr ${a} $? 1 "ICMP socket bind to multicast address after VRF bind"

     #
     # tcp sockets
···
     log_subsection "No VRF"
     setup
+    set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null
     ipv4_addr_bind_novrf

     log_subsection "With VRF"
     setup "yes"
+    set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null
     ipv4_addr_bind_vrf
 }
+1 -1
tools/testing/selftests/netfilter/nft_concat_range.sh
···
 # List of possible paths to pktgen script from kernel tree for performance tests
 PKTGEN_SCRIPT_PATHS="
-    ../../../samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh
+    ../../../../samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh
     pktgen/pktgen_bench_xmit_mode_netif_receive.sh"

 # Definition of set types: