Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:
"A decent batch of fixes here. I'd say about half are for problems that
have existed for a while, and half are for new regressions added in
the 4.20 merge window.

1) Fix 10G SFP phy module detection in mvpp2, from Baruch Siach.

2) Revert bogus emac driver change, from Benjamin Herrenschmidt.

3) Handle BPF exported data structure with pointers when building
32-bit userland, from Daniel Borkmann.

4) Memory leak fix in act_police, from Davide Caratti.

5) Check RX checksum offload in RX descriptors properly in aquantia
driver, from Dmitry Bogdanov.

6) SKB unlink fix in various spots, from Edward Cree.

7) ndo_dflt_fdb_dump() only works with ethernet, enforce this, from
Eric Dumazet.

8) Fix FID leak in mlxsw driver, from Ido Schimmel.

9) IOTLB locking fix in vhost, from Jean-Philippe Brucker.

10) Fix SKB truesize accounting in ipv4/ipv6/netfilter frag memory
limits; otherwise namespace exit can hang. From Jiri Wiesner.

11) Address block parsing length fixes in x25 from Martin Schiller.

12) IRQ and ring accounting fixes in bnxt_en, from Michael Chan.

13) For tun interfaces, only iface delete works with rtnl ops, enforce
this by disallowing add. From Nicolas Dichtel.

14) Use after free in liquidio, from Pan Bian.

15) Fix SKB use after passing to netif_receive_skb(), from Prashant
Bhole.

16) Static key accounting and other fixes in XPS from Sabrina Dubroca.

17) Partially initialized flow key passed to ip6_route_output(), from
Shmulik Ladkani.

18) Fix RTNL deadlock during reset in ibmvnic driver, from Thomas
Falcon.

19) Several small TCP fixes (off-by-one on window probe abort, NULL
deref in tail loss probe, SNMP mis-estimations) from Yuchung
Cheng"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (93 commits)
net/sched: cls_flower: Reject duplicated rules also under skip_sw
bnxt_en: Fix _bnxt_get_max_rings() for 57500 chips.
bnxt_en: Fix NQ/CP rings accounting on the new 57500 chips.
bnxt_en: Keep track of reserved IRQs.
bnxt_en: Fix CNP CoS queue regression.
net/mlx4_core: Correctly set PFC param if global pause is turned off.
Revert "net/ibm/emac: wrong bit is used for STA control"
neighbour: Avoid writing before skb->head in neigh_hh_output()
ipv6: Check available headroom in ip6_xmit() even without options
tcp: lack of available data can also cause TSO defer
ipv6: sr: properly initialize flowi6 prior passing to ip6_route_output
mlxsw: spectrum_switchdev: Fix VLAN device deletion via ioctl
mlxsw: spectrum_router: Relax GRE decap matching check
mlxsw: spectrum_switchdev: Avoid leaking FID's reference count
mlxsw: spectrum_nve: Remove easily triggerable warnings
ipv4: ipv6: netfilter: Adjust the frag mem limit when truesize changes
sctp: frag_point sanity check
tcp: fix NULL ref in tail loss probe
tcp: Do not underestimate rwnd_limited
net: use skb_list_del_init() to remove from RX sublists
...

+1226 -369
+1 -1
Documentation/ABI/testing/sysfs-class-net-dsa
··· 1 - What: /sys/class/net/<iface>/tagging 2 Date: August 2018 3 KernelVersion: 4.20 4 Contact: netdev@vger.kernel.org
··· 1 + What: /sys/class/net/<iface>/dsa/tagging 2 Date: August 2018 3 KernelVersion: 4.20 4 Contact: netdev@vger.kernel.org
+7
MAINTAINERS
··· 13890 F: include/linux/raid/ 13891 F: include/uapi/linux/raid/ 13892 13893 SOCIONEXT (SNI) NETSEC NETWORK DRIVER 13894 M: Jassi Brar <jaswinder.singh@linaro.org> 13895 L: netdev@vger.kernel.org
··· 13890 F: include/linux/raid/ 13891 F: include/uapi/linux/raid/ 13892 13893 + SOCIONEXT (SNI) AVE NETWORK DRIVER 13894 + M: Kunihiko Hayashi <hayashi.kunihiko@socionext.com> 13895 + L: netdev@vger.kernel.org 13896 + S: Maintained 13897 + F: drivers/net/ethernet/socionext/sni_ave.c 13898 + F: Documentation/devicetree/bindings/net/socionext,uniphier-ave4.txt 13899 + 13900 SOCIONEXT (SNI) NETSEC NETWORK DRIVER 13901 M: Jassi Brar <jaswinder.singh@linaro.org> 13902 L: netdev@vger.kernel.org
+66
arch/powerpc/net/bpf_jit_comp64.c
··· 891 return 0; 892 } 893 894 struct powerpc64_jit_data { 895 struct bpf_binary_header *header; 896 u32 *addrs; ··· 1038 skip_init_ctx: 1039 code_base = (u32 *)(image + FUNCTION_DESCR_SIZE); 1040 1041 /* Code generation passes 1-2 */ 1042 for (pass = 1; pass < 3; pass++) { 1043 /* Now build the prologue, body code & epilogue for real. */ ··· 1067 proglen - (cgctx.idx * 4), cgctx.seen); 1068 } 1069 1070 if (bpf_jit_enable > 1) 1071 /* 1072 * Note that we output the base address of the code_base
··· 891 return 0; 892 } 893 894 + /* Fix the branch target addresses for subprog calls */ 895 + static int bpf_jit_fixup_subprog_calls(struct bpf_prog *fp, u32 *image, 896 + struct codegen_context *ctx, u32 *addrs) 897 + { 898 + const struct bpf_insn *insn = fp->insnsi; 899 + bool func_addr_fixed; 900 + u64 func_addr; 901 + u32 tmp_idx; 902 + int i, ret; 903 + 904 + for (i = 0; i < fp->len; i++) { 905 + /* 906 + * During the extra pass, only the branch target addresses for 907 + * the subprog calls need to be fixed. All other instructions 908 + * can left untouched. 909 + * 910 + * The JITed image length does not change because we already 911 + * ensure that the JITed instruction sequence for these calls 912 + * are of fixed length by padding them with NOPs. 913 + */ 914 + if (insn[i].code == (BPF_JMP | BPF_CALL) && 915 + insn[i].src_reg == BPF_PSEUDO_CALL) { 916 + ret = bpf_jit_get_func_addr(fp, &insn[i], true, 917 + &func_addr, 918 + &func_addr_fixed); 919 + if (ret < 0) 920 + return ret; 921 + 922 + /* 923 + * Save ctx->idx as this would currently point to the 924 + * end of the JITed image and set it to the offset of 925 + * the instruction sequence corresponding to the 926 + * subprog call temporarily. 927 + */ 928 + tmp_idx = ctx->idx; 929 + ctx->idx = addrs[i] / 4; 930 + bpf_jit_emit_func_call_rel(image, ctx, func_addr); 931 + 932 + /* 933 + * Restore ctx->idx here. This is safe as the length 934 + * of the JITed sequence remains unchanged. 935 + */ 936 + ctx->idx = tmp_idx; 937 + } 938 + } 939 + 940 + return 0; 941 + } 942 + 943 struct powerpc64_jit_data { 944 struct bpf_binary_header *header; 945 u32 *addrs; ··· 989 skip_init_ctx: 990 code_base = (u32 *)(image + FUNCTION_DESCR_SIZE); 991 992 + if (extra_pass) { 993 + /* 994 + * Do not touch the prologue and epilogue as they will remain 995 + * unchanged. Only fix the branch target address for subprog 996 + * calls in the body. 997 + * 998 + * This does not change the offsets and lengths of the subprog 999 + * call instruction sequences and hence, the size of the JITed 1000 + * image as well. 1001 + */ 1002 + bpf_jit_fixup_subprog_calls(fp, code_base, &cgctx, addrs); 1003 + 1004 + /* There is no need to perform the usual passes. */ 1005 + goto skip_codegen_passes; 1006 + } 1007 + 1008 /* Code generation passes 1-2 */ 1009 for (pass = 1; pass < 3; pass++) { 1010 /* Now build the prologue, body code & epilogue for real. */ ··· 1002 proglen - (cgctx.idx * 4), cgctx.seen); 1003 } 1004 1005 + skip_codegen_passes: 1006 if (bpf_jit_enable > 1) 1007 /* 1008 * Note that we output the base address of the code_base
+3
drivers/net/bonding/bond_3ad.c
··· 2086 aggregator->aggregator_identifier); 2087 2088 /* Tell the partner that this port is not suitable for aggregation */ 2089 port->actor_oper_port_state &= ~AD_STATE_AGGREGATION; 2090 __update_lacpdu_from_port(port); 2091 ad_lacpdu_send(port);
··· 2086 aggregator->aggregator_identifier); 2087 2088 /* Tell the partner that this port is not suitable for aggregation */ 2089 + port->actor_oper_port_state &= ~AD_STATE_SYNCHRONIZATION; 2090 + port->actor_oper_port_state &= ~AD_STATE_COLLECTING; 2091 + port->actor_oper_port_state &= ~AD_STATE_DISTRIBUTING; 2092 port->actor_oper_port_state &= ~AD_STATE_AGGREGATION; 2093 __update_lacpdu_from_port(port); 2094 ad_lacpdu_send(port);
+3 -7
drivers/net/dsa/mv88e6060.c
··· 116 /* Reset the switch. */ 117 REG_WRITE(REG_GLOBAL, GLOBAL_ATU_CONTROL, 118 GLOBAL_ATU_CONTROL_SWRESET | 119 - GLOBAL_ATU_CONTROL_ATUSIZE_1024 | 120 - GLOBAL_ATU_CONTROL_ATE_AGE_5MIN); 121 122 /* Wait up to one second for reset to complete. */ 123 timeout = jiffies + 1 * HZ; ··· 141 */ 142 REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL, GLOBAL_CONTROL_MAX_FRAME_1536); 143 144 - /* Enable automatic address learning, set the address 145 - * database size to 1024 entries, and set the default aging 146 - * time to 5 minutes. 147 */ 148 REG_WRITE(REG_GLOBAL, GLOBAL_ATU_CONTROL, 149 - GLOBAL_ATU_CONTROL_ATUSIZE_1024 | 150 - GLOBAL_ATU_CONTROL_ATE_AGE_5MIN); 151 152 return 0; 153 }
··· 116 /* Reset the switch. */ 117 REG_WRITE(REG_GLOBAL, GLOBAL_ATU_CONTROL, 118 GLOBAL_ATU_CONTROL_SWRESET | 119 + GLOBAL_ATU_CONTROL_LEARNDIS); 120 121 /* Wait up to one second for reset to complete. */ 122 timeout = jiffies + 1 * HZ; ··· 142 */ 143 REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL, GLOBAL_CONTROL_MAX_FRAME_1536); 144 145 + /* Disable automatic address learning. 146 */ 147 REG_WRITE(REG_GLOBAL, GLOBAL_ATU_CONTROL, 148 + GLOBAL_ATU_CONTROL_LEARNDIS); 149 150 return 0; 151 }
+1 -1
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
··· 674 675 rx_stat = (0x0000003CU & rxd_wb->status) >> 2; 676 677 - is_rx_check_sum_enabled = (rxd_wb->type) & (0x3U << 19); 678 679 pkt_type = 0xFFU & (rxd_wb->type >> 4); 680
··· 674 675 rx_stat = (0x0000003CU & rxd_wb->status) >> 2; 676 677 + is_rx_check_sum_enabled = (rxd_wb->type >> 19) & 0x3U; 678 679 pkt_type = 0xFFU & (rxd_wb->type >> 4); 680
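The before/after here is a mask-without-shift bug: the old expression left the two status bits at positions 19-20 instead of shifting them down, so any later test of the low bits of the result would never match and checksum offload looked permanently disabled. A minimal userspace sketch of the difference (the field layout is taken from the diff; everything else is illustrative):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Descriptor word with a 2-bit checksum field at bits 19..20,
     * mirroring the rxd_wb->type layout the diff touches. */
    uint32_t type = 1u << 19;             /* hardware set the field's low bit */

    uint32_t buggy = type & (0x3u << 19); /* 0x80000: bits still at 19..20 */
    uint32_t fixed = (type >> 19) & 0x3u; /* 0x1: field shifted down to 0..1 */

    printf("buggy=0x%x bit0=%u\n", (unsigned)buggy, (unsigned)(buggy & 1u));
    printf("fixed=0x%x bit0=%u\n", (unsigned)fixed, (unsigned)(fixed & 1u));
    return 0;
}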
+48 -12
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 5162 cp = le16_to_cpu(resp->alloc_cmpl_rings); 5163 stats = le16_to_cpu(resp->alloc_stat_ctx); 5164 cp = min_t(u16, cp, stats); 5165 if (bp->flags & BNXT_FLAG_CHIP_P5) { 5166 int rx = hw_resc->resv_rx_rings; 5167 int tx = hw_resc->resv_tx_rings; ··· 5176 hw_resc->resv_rx_rings = rx; 5177 hw_resc->resv_tx_rings = tx; 5178 } 5179 - cp = le16_to_cpu(resp->alloc_msix); 5180 hw_resc->resv_hw_ring_grps = rx; 5181 } 5182 hw_resc->resv_cp_rings = cp; ··· 5354 return bnxt_hwrm_reserve_vf_rings(bp, tx, rx, grp, cp, vnic); 5355 } 5356 5357 - static int bnxt_cp_rings_in_use(struct bnxt *bp) 5358 { 5359 int cp = bp->cp_nr_rings; 5360 int ulp_msix, ulp_base; ··· 5369 return cp; 5370 } 5371 5372 static bool bnxt_need_reserve_rings(struct bnxt *bp) 5373 { 5374 struct bnxt_hw_resc *hw_resc = &bp->hw_resc; 5375 int cp = bnxt_cp_rings_in_use(bp); 5376 int rx = bp->rx_nr_rings; 5377 int vnic = 1, grp = rx; 5378 ··· 5400 rx <<= 1; 5401 if (BNXT_NEW_RM(bp) && 5402 (hw_resc->resv_rx_rings != rx || hw_resc->resv_cp_rings != cp || 5403 - hw_resc->resv_vnics != vnic || 5404 (hw_resc->resv_hw_ring_grps != grp && 5405 !(bp->flags & BNXT_FLAG_CHIP_P5)))) 5406 return true; ··· 5410 static int __bnxt_reserve_rings(struct bnxt *bp) 5411 { 5412 struct bnxt_hw_resc *hw_resc = &bp->hw_resc; 5413 - int cp = bnxt_cp_rings_in_use(bp); 5414 int tx = bp->tx_nr_rings; 5415 int rx = bp->rx_nr_rings; 5416 int grp, rx_rings, rc; ··· 5435 tx = hw_resc->resv_tx_rings; 5436 if (BNXT_NEW_RM(bp)) { 5437 rx = hw_resc->resv_rx_rings; 5438 - cp = hw_resc->resv_cp_rings; 5439 grp = hw_resc->resv_hw_ring_grps; 5440 vnic = hw_resc->resv_vnics; 5441 } ··· 6305 return rc; 6306 } 6307 6308 static int bnxt_hwrm_func_qcaps(struct bnxt *bp) 6309 { 6310 int rc; ··· 6314 rc = __bnxt_hwrm_func_qcaps(bp); 6315 if (rc) 6316 return rc; 6317 if (bp->hwrm_spec_code >= 0x10803) { 6318 rc = bnxt_alloc_ctx_mem(bp); 6319 if (rc) ··· 7046 7047 unsigned int bnxt_get_max_func_cp_rings_for_en(struct bnxt *bp) 7048 { 7049 - return bp->hw_resc.max_cp_rings - bnxt_get_ulp_msix_num(bp); 7050 } 7051 7052 static unsigned int bnxt_get_max_func_irqs(struct bnxt *bp) ··· 7073 int total_req = bp->cp_nr_rings + num; 7074 int max_idx, avail_msix; 7075 7076 - max_idx = min_t(int, bp->total_irqs, max_cp); 7077 avail_msix = max_idx - bp->cp_nr_rings; 7078 if (!BNXT_NEW_RM(bp) || avail_msix >= num) 7079 return avail_msix; ··· 7093 if (!BNXT_NEW_RM(bp)) 7094 return bnxt_get_max_func_irqs(bp); 7095 7096 - return bnxt_cp_rings_in_use(bp); 7097 } 7098 7099 static int bnxt_init_msix(struct bnxt *bp) ··· 7821 7822 rc = bnxt_hwrm_func_resc_qcaps(bp, true); 7823 hw_resc->resv_cp_rings = 0; 7824 hw_resc->resv_tx_rings = 0; 7825 hw_resc->resv_rx_rings = 0; 7826 hw_resc->resv_hw_ring_grps = 0; ··· 9827 int *max_cp) 9828 { 9829 struct bnxt_hw_resc *hw_resc = &bp->hw_resc; 9830 - int max_ring_grps = 0; 9831 9832 *max_tx = hw_resc->max_tx_rings; 9833 *max_rx = hw_resc->max_rx_rings; 9834 - *max_cp = min_t(int, bnxt_get_max_func_cp_rings_for_en(bp), 9835 - hw_resc->max_irqs - bnxt_get_ulp_msix_num(bp)); 9836 - *max_cp = min_t(int, *max_cp, hw_resc->max_stat_ctxs); 9837 max_ring_grps = hw_resc->max_hw_ring_grps; 9838 if (BNXT_CHIP_TYPE_NITRO_A0(bp) && BNXT_PF(bp)) { 9839 *max_cp -= 1; ··· 9844 } 9845 if (bp->flags & BNXT_FLAG_AGG_RINGS) 9846 *max_rx >>= 1; 9847 *max_rx = min_t(int, *max_rx, max_ring_grps); 9848 } 9849
··· 5162 cp = le16_to_cpu(resp->alloc_cmpl_rings); 5163 stats = le16_to_cpu(resp->alloc_stat_ctx); 5164 cp = min_t(u16, cp, stats); 5165 + hw_resc->resv_irqs = cp; 5166 if (bp->flags & BNXT_FLAG_CHIP_P5) { 5167 int rx = hw_resc->resv_rx_rings; 5168 int tx = hw_resc->resv_tx_rings; ··· 5175 hw_resc->resv_rx_rings = rx; 5176 hw_resc->resv_tx_rings = tx; 5177 } 5178 + hw_resc->resv_irqs = le16_to_cpu(resp->alloc_msix); 5179 hw_resc->resv_hw_ring_grps = rx; 5180 } 5181 hw_resc->resv_cp_rings = cp; ··· 5353 return bnxt_hwrm_reserve_vf_rings(bp, tx, rx, grp, cp, vnic); 5354 } 5355 5356 + static int bnxt_nq_rings_in_use(struct bnxt *bp) 5357 { 5358 int cp = bp->cp_nr_rings; 5359 int ulp_msix, ulp_base; ··· 5368 return cp; 5369 } 5370 5371 + static int bnxt_cp_rings_in_use(struct bnxt *bp) 5372 + { 5373 + int cp; 5374 + 5375 + if (!(bp->flags & BNXT_FLAG_CHIP_P5)) 5376 + return bnxt_nq_rings_in_use(bp); 5377 + 5378 + cp = bp->tx_nr_rings + bp->rx_nr_rings; 5379 + return cp; 5380 + } 5381 + 5382 static bool bnxt_need_reserve_rings(struct bnxt *bp) 5383 { 5384 struct bnxt_hw_resc *hw_resc = &bp->hw_resc; 5385 int cp = bnxt_cp_rings_in_use(bp); 5386 + int nq = bnxt_nq_rings_in_use(bp); 5387 int rx = bp->rx_nr_rings; 5388 int vnic = 1, grp = rx; 5389 ··· 5387 rx <<= 1; 5388 if (BNXT_NEW_RM(bp) && 5389 (hw_resc->resv_rx_rings != rx || hw_resc->resv_cp_rings != cp || 5390 + hw_resc->resv_irqs < nq || hw_resc->resv_vnics != vnic || 5391 (hw_resc->resv_hw_ring_grps != grp && 5392 !(bp->flags & BNXT_FLAG_CHIP_P5)))) 5393 return true; ··· 5397 static int __bnxt_reserve_rings(struct bnxt *bp) 5398 { 5399 struct bnxt_hw_resc *hw_resc = &bp->hw_resc; 5400 + int cp = bnxt_nq_rings_in_use(bp); 5401 int tx = bp->tx_nr_rings; 5402 int rx = bp->rx_nr_rings; 5403 int grp, rx_rings, rc; ··· 5422 tx = hw_resc->resv_tx_rings; 5423 if (BNXT_NEW_RM(bp)) { 5424 rx = hw_resc->resv_rx_rings; 5425 + cp = hw_resc->resv_irqs; 5426 grp = hw_resc->resv_hw_ring_grps; 5427 vnic = hw_resc->resv_vnics; 5428 } ··· 6292 return rc; 6293 } 6294 6295 + static int bnxt_hwrm_queue_qportcfg(struct bnxt *bp); 6296 + 6297 static int bnxt_hwrm_func_qcaps(struct bnxt *bp) 6298 { 6299 int rc; ··· 6299 rc = __bnxt_hwrm_func_qcaps(bp); 6300 if (rc) 6301 return rc; 6302 + rc = bnxt_hwrm_queue_qportcfg(bp); 6303 + if (rc) { 6304 + netdev_err(bp->dev, "hwrm query qportcfg failure rc: %d\n", rc); 6305 + return rc; 6306 + } 6307 if (bp->hwrm_spec_code >= 0x10803) { 6308 rc = bnxt_alloc_ctx_mem(bp); 6309 if (rc) ··· 7026 7027 unsigned int bnxt_get_max_func_cp_rings_for_en(struct bnxt *bp) 7028 { 7029 + unsigned int cp = bp->hw_resc.max_cp_rings; 7030 + 7031 + if (!(bp->flags & BNXT_FLAG_CHIP_P5)) 7032 + cp -= bnxt_get_ulp_msix_num(bp); 7033 + 7034 + return cp; 7035 } 7036 7037 static unsigned int bnxt_get_max_func_irqs(struct bnxt *bp) ··· 7048 int total_req = bp->cp_nr_rings + num; 7049 int max_idx, avail_msix; 7050 7051 + max_idx = bp->total_irqs; 7052 + if (!(bp->flags & BNXT_FLAG_CHIP_P5)) 7053 + max_idx = min_t(int, bp->total_irqs, max_cp); 7054 avail_msix = max_idx - bp->cp_nr_rings; 7055 if (!BNXT_NEW_RM(bp) || avail_msix >= num) 7056 return avail_msix; ··· 7066 if (!BNXT_NEW_RM(bp)) 7067 return bnxt_get_max_func_irqs(bp); 7068 7069 + return bnxt_nq_rings_in_use(bp); 7070 } 7071 7072 static int bnxt_init_msix(struct bnxt *bp) ··· 7794 7795 rc = bnxt_hwrm_func_resc_qcaps(bp, true); 7796 hw_resc->resv_cp_rings = 0; 7797 + hw_resc->resv_irqs = 0; 7798 hw_resc->resv_tx_rings = 0; 7799 hw_resc->resv_rx_rings = 0; 7800 hw_resc->resv_hw_ring_grps = 0; 
··· 9799 int *max_cp) 9800 { 9801 struct bnxt_hw_resc *hw_resc = &bp->hw_resc; 9802 + int max_ring_grps = 0, max_irq; 9803 9804 *max_tx = hw_resc->max_tx_rings; 9805 *max_rx = hw_resc->max_rx_rings; 9806 + *max_cp = bnxt_get_max_func_cp_rings_for_en(bp); 9807 + max_irq = min_t(int, bnxt_get_max_func_irqs(bp) - 9808 + bnxt_get_ulp_msix_num(bp), 9809 + bnxt_get_max_func_stat_ctxs(bp)); 9810 + if (!(bp->flags & BNXT_FLAG_CHIP_P5)) 9811 + *max_cp = min_t(int, *max_cp, max_irq); 9812 max_ring_grps = hw_resc->max_hw_ring_grps; 9813 if (BNXT_CHIP_TYPE_NITRO_A0(bp) && BNXT_PF(bp)) { 9814 *max_cp -= 1; ··· 9813 } 9814 if (bp->flags & BNXT_FLAG_AGG_RINGS) 9815 *max_rx >>= 1; 9816 + if (bp->flags & BNXT_FLAG_CHIP_P5) { 9817 + bnxt_trim_rings(bp, max_rx, max_tx, *max_cp, false); 9818 + /* On P5 chips, max_cp output param should be available NQs */ 9819 + *max_cp = max_irq; 9820 + } 9821 *max_rx = min_t(int, *max_rx, max_ring_grps); 9822 } 9823
+1
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 928 u16 min_stat_ctxs; 929 u16 max_stat_ctxs; 930 u16 max_irqs; 931 }; 932 933 #if defined(CONFIG_BNXT_SRIOV)
··· 928 u16 min_stat_ctxs; 929 u16 max_stat_ctxs; 930 u16 max_irqs; 931 + u16 resv_irqs; 932 }; 933 934 #if defined(CONFIG_BNXT_SRIOV)
+1 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
··· 168 if (BNXT_NEW_RM(bp)) { 169 struct bnxt_hw_resc *hw_resc = &bp->hw_resc; 170 171 - avail_msix = hw_resc->resv_cp_rings - bp->cp_nr_rings; 172 edev->ulp_tbl[ulp_id].msix_requested = avail_msix; 173 } 174 bnxt_fill_msix_vecs(bp, ent);
··· 168 if (BNXT_NEW_RM(bp)) { 169 struct bnxt_hw_resc *hw_resc = &bp->hw_resc; 170 171 + avail_msix = hw_resc->resv_irqs - bp->cp_nr_rings; 172 edev->ulp_tbl[ulp_id].msix_requested = avail_msix; 173 } 174 bnxt_fill_msix_vecs(bp, ent);
+1 -1
drivers/net/ethernet/cavium/liquidio/lio_ethtool.c
··· 111 "mac_tx_one_collision", 112 "mac_tx_multi_collision", 113 "mac_tx_max_collision_fail", 114 - "mac_tx_max_deferal_fail", 115 "mac_tx_fifo_err", 116 "mac_tx_runts", 117
··· 111 "mac_tx_one_collision", 112 "mac_tx_multi_collision", 113 "mac_tx_max_collision_fail", 114 + "mac_tx_max_deferral_fail", 115 "mac_tx_fifo_err", 116 "mac_tx_runts", 117
+3 -1
drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c
··· 349 struct octeon_soft_command *sc = (struct octeon_soft_command *)buf; 350 struct sk_buff *skb = sc->ctxptr; 351 struct net_device *ndev = skb->dev; 352 353 dma_unmap_single(&oct->pci_dev->dev, sc->dmadptr, 354 sc->datasize, DMA_TO_DEVICE); 355 dev_kfree_skb_any(skb); 356 octeon_free_soft_command(oct, sc); 357 358 - if (octnet_iq_is_full(oct, sc->iq_no)) 359 return; 360 361 if (netif_queue_stopped(ndev))
··· 349 struct octeon_soft_command *sc = (struct octeon_soft_command *)buf; 350 struct sk_buff *skb = sc->ctxptr; 351 struct net_device *ndev = skb->dev; 352 + u32 iq_no; 353 354 dma_unmap_single(&oct->pci_dev->dev, sc->dmadptr, 355 sc->datasize, DMA_TO_DEVICE); 356 dev_kfree_skb_any(skb); 357 + iq_no = sc->iq_no; 358 octeon_free_soft_command(oct, sc); 359 360 + if (octnet_iq_is_full(oct, iq_no)) 361 return; 362 363 if (netif_queue_stopped(ndev))
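The liquidio change is the classic use-after-free fix: sc->iq_no was read after octeon_free_soft_command() released the command, so the fix copies the field to a local first. The same save-before-free pattern, as a self-contained sketch (names hypothetical):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct cmd {
    uint32_t iq_no;   /* field still needed after the command is released */
};

static void free_cmd(struct cmd *c)
{
    free(c);          /* after this, any access through c is use-after-free */
}

int main(void)
{
    struct cmd *sc = malloc(sizeof(*sc));
    if (!sc)
        return 1;
    sc->iq_no = 7;

    /* Copy out what we need *before* handing the object to its free routine. */
    uint32_t iq_no = sc->iq_no;
    free_cmd(sc);

    printf("queue %u may need waking\n", (unsigned)iq_no); /* reads the local copy */
    return 0;
}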
+2 -3
drivers/net/ethernet/freescale/fman/fman.c
··· 2786 if (!muram_node) { 2787 dev_err(&of_dev->dev, "%s: could not find MURAM node\n", 2788 __func__); 2789 - goto fman_node_put; 2790 } 2791 2792 err = of_address_to_resource(muram_node, 0, ··· 2795 of_node_put(muram_node); 2796 dev_err(&of_dev->dev, "%s: of_address_to_resource() = %d\n", 2797 __func__, err); 2798 - goto fman_node_put; 2799 } 2800 2801 of_node_put(muram_node); 2802 - of_node_put(fm_node); 2803 2804 err = devm_request_irq(&of_dev->dev, irq, fman_irq, IRQF_SHARED, 2805 "fman", fman);
··· 2786 if (!muram_node) { 2787 dev_err(&of_dev->dev, "%s: could not find MURAM node\n", 2788 __func__); 2789 + goto fman_free; 2790 } 2791 2792 err = of_address_to_resource(muram_node, 0, ··· 2795 of_node_put(muram_node); 2796 dev_err(&of_dev->dev, "%s: of_address_to_resource() = %d\n", 2797 __func__, err); 2798 + goto fman_free; 2799 } 2800 2801 of_node_put(muram_node); 2802 2803 err = devm_request_irq(&of_dev->dev, irq, fman_irq, IRQF_SHARED, 2804 "fman", fman);
+1 -1
drivers/net/ethernet/ibm/emac/emac.h
··· 231 #define EMAC_STACR_PHYE 0x00004000 232 #define EMAC_STACR_STAC_MASK 0x00003000 233 #define EMAC_STACR_STAC_READ 0x00001000 234 - #define EMAC_STACR_STAC_WRITE 0x00000800 235 #define EMAC_STACR_OPBC_MASK 0x00000C00 236 #define EMAC_STACR_OPBC_50 0x00000000 237 #define EMAC_STACR_OPBC_66 0x00000400
··· 231 #define EMAC_STACR_PHYE 0x00004000 232 #define EMAC_STACR_STAC_MASK 0x00003000 233 #define EMAC_STACR_STAC_READ 0x00001000 234 + #define EMAC_STACR_STAC_WRITE 0x00002000 235 #define EMAC_STACR_OPBC_MASK 0x00000C00 236 #define EMAC_STACR_OPBC_50 0x00000000 237 #define EMAC_STACR_OPBC_66 0x00000400
+1 -1
drivers/net/ethernet/ibm/ibmvnic.c
··· 1859 1860 if (adapter->reset_reason != VNIC_RESET_FAILOVER && 1861 adapter->reset_reason != VNIC_RESET_CHANGE_PARAM) 1862 - netdev_notify_peers(netdev); 1863 1864 netif_carrier_on(netdev); 1865
··· 1859 1860 if (adapter->reset_reason != VNIC_RESET_FAILOVER && 1861 adapter->reset_reason != VNIC_RESET_CHANGE_PARAM) 1862 + call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, netdev); 1863 1864 netif_carrier_on(netdev); 1865
+33 -1
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
··· 4375 unsigned long *supported, 4376 struct phylink_link_state *state) 4377 { 4378 __ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, }; 4379 4380 phylink_set(mask, Autoneg); 4381 phylink_set_port_modes(mask); ··· 4403 4404 switch (state->interface) { 4405 case PHY_INTERFACE_MODE_10GKR: 4406 phylink_set(mask, 10000baseCR_Full); 4407 phylink_set(mask, 10000baseSR_Full); 4408 phylink_set(mask, 10000baseLR_Full); ··· 4412 phylink_set(mask, 10000baseER_Full); 4413 phylink_set(mask, 10000baseKR_Full); 4414 /* Fall-through */ 4415 - default: 4416 phylink_set(mask, 10baseT_Half); 4417 phylink_set(mask, 10baseT_Full); 4418 phylink_set(mask, 100baseT_Half); ··· 4428 phylink_set(mask, 1000baseT_Full); 4429 phylink_set(mask, 1000baseX_Full); 4430 phylink_set(mask, 2500baseX_Full); 4431 } 4432 4433 bitmap_and(supported, supported, mask, __ETHTOOL_LINK_MODE_MASK_NBITS); 4434 bitmap_and(state->advertising, state->advertising, mask, 4435 __ETHTOOL_LINK_MODE_MASK_NBITS); 4436 } 4437 4438 static void mvpp22_xlg_link_state(struct mvpp2_port *port,
··· 4375 unsigned long *supported, 4376 struct phylink_link_state *state) 4377 { 4378 + struct mvpp2_port *port = netdev_priv(dev); 4379 __ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, }; 4380 + 4381 + /* Invalid combinations */ 4382 + switch (state->interface) { 4383 + case PHY_INTERFACE_MODE_10GKR: 4384 + case PHY_INTERFACE_MODE_XAUI: 4385 + if (port->gop_id != 0) 4386 + goto empty_set; 4387 + break; 4388 + case PHY_INTERFACE_MODE_RGMII: 4389 + case PHY_INTERFACE_MODE_RGMII_ID: 4390 + case PHY_INTERFACE_MODE_RGMII_RXID: 4391 + case PHY_INTERFACE_MODE_RGMII_TXID: 4392 + if (port->gop_id == 0) 4393 + goto empty_set; 4394 + break; 4395 + default: 4396 + break; 4397 + } 4398 4399 phylink_set(mask, Autoneg); 4400 phylink_set_port_modes(mask); ··· 4384 4385 switch (state->interface) { 4386 case PHY_INTERFACE_MODE_10GKR: 4387 + case PHY_INTERFACE_MODE_XAUI: 4388 + case PHY_INTERFACE_MODE_NA: 4389 phylink_set(mask, 10000baseCR_Full); 4390 phylink_set(mask, 10000baseSR_Full); 4391 phylink_set(mask, 10000baseLR_Full); ··· 4391 phylink_set(mask, 10000baseER_Full); 4392 phylink_set(mask, 10000baseKR_Full); 4393 /* Fall-through */ 4394 + case PHY_INTERFACE_MODE_RGMII: 4395 + case PHY_INTERFACE_MODE_RGMII_ID: 4396 + case PHY_INTERFACE_MODE_RGMII_RXID: 4397 + case PHY_INTERFACE_MODE_RGMII_TXID: 4398 + case PHY_INTERFACE_MODE_SGMII: 4399 phylink_set(mask, 10baseT_Half); 4400 phylink_set(mask, 10baseT_Full); 4401 phylink_set(mask, 100baseT_Half); ··· 4403 phylink_set(mask, 1000baseT_Full); 4404 phylink_set(mask, 1000baseX_Full); 4405 phylink_set(mask, 2500baseX_Full); 4406 + break; 4407 + default: 4408 + goto empty_set; 4409 } 4410 4411 bitmap_and(supported, supported, mask, __ETHTOOL_LINK_MODE_MASK_NBITS); 4412 bitmap_and(state->advertising, state->advertising, mask, 4413 __ETHTOOL_LINK_MODE_MASK_NBITS); 4414 + return; 4415 + 4416 + empty_set: 4417 + bitmap_zero(supported, __ETHTOOL_LINK_MODE_MASK_NBITS); 4418 } 4419 4420 static void mvpp22_xlg_link_state(struct mvpp2_port *port,
+1 -1
drivers/net/ethernet/mellanox/mlx4/Kconfig
··· 5 config MLX4_EN 6 tristate "Mellanox Technologies 1/10/40Gbit Ethernet support" 7 depends on MAY_USE_DEVLINK 8 - depends on PCI 9 select MLX4_CORE 10 imply PTP_1588_CLOCK 11 ---help---
··· 5 config MLX4_EN 6 tristate "Mellanox Technologies 1/10/40Gbit Ethernet support" 7 depends on MAY_USE_DEVLINK 8 + depends on PCI && NETDEVICES && ETHERNET && INET 9 select MLX4_CORE 10 imply PTP_1588_CLOCK 11 ---help---
+2 -2
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
··· 1084 1085 tx_pause = !!(pause->tx_pause); 1086 rx_pause = !!(pause->rx_pause); 1087 - rx_ppp = priv->prof->rx_ppp && !(tx_pause || rx_pause); 1088 - tx_ppp = priv->prof->tx_ppp && !(tx_pause || rx_pause); 1089 1090 err = mlx4_SET_PORT_general(mdev->dev, priv->port, 1091 priv->rx_skb_size + ETH_FCS_LEN,
··· 1084 1085 tx_pause = !!(pause->tx_pause); 1086 rx_pause = !!(pause->rx_pause); 1087 + rx_ppp = (tx_pause || rx_pause) ? 0 : priv->prof->rx_ppp; 1088 + tx_ppp = (tx_pause || rx_pause) ? 0 : priv->prof->tx_ppp; 1089 1090 err = mlx4_SET_PORT_general(mdev->dev, priv->port, 1091 priv->rx_skb_size + ETH_FCS_LEN,
+2 -2
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 3493 dev->gso_partial_features = NETIF_F_GSO_UDP_TUNNEL_CSUM; 3494 } 3495 3496 - /* MTU range: 46 - hw-specific max */ 3497 - dev->min_mtu = MLX4_EN_MIN_MTU; 3498 dev->max_mtu = priv->max_mtu; 3499 3500 mdev->pndev[port] = dev;
··· 3493 dev->gso_partial_features = NETIF_F_GSO_UDP_TUNNEL_CSUM; 3494 } 3495 3496 + /* MTU range: 68 - hw-specific max */ 3497 + dev->min_mtu = ETH_MIN_MTU; 3498 dev->max_mtu = priv->max_mtu; 3499 3500 mdev->pndev[port] = dev;
-1
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
··· 161 #define MLX4_SELFTEST_LB_MIN_MTU (MLX4_LOOPBACK_TEST_PAYLOAD + NET_IP_ALIGN + \ 162 ETH_HLEN + PREAMBLE_LEN) 163 164 - #define MLX4_EN_MIN_MTU 46 165 /* VLAN_HLEN is added twice,to support skb vlan tagged with multiple 166 * headers. (For example: ETH_P_8021Q and ETH_P_8021AD). 167 */
··· 161 #define MLX4_SELFTEST_LB_MIN_MTU (MLX4_LOOPBACK_TEST_PAYLOAD + NET_IP_ALIGN + \ 162 ETH_HLEN + PREAMBLE_LEN) 163 164 /* VLAN_HLEN is added twice,to support skb vlan tagged with multiple 165 * headers. (For example: ETH_P_8021Q and ETH_P_8021AD). 166 */
+3 -3
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 724 return __get_unaligned_cpu32(fcs_bytes); 725 } 726 727 - static u8 get_ip_proto(struct sk_buff *skb, __be16 proto) 728 { 729 - void *ip_p = skb->data + sizeof(struct ethhdr); 730 731 return (proto == htons(ETH_P_IP)) ? ((struct iphdr *)ip_p)->protocol : 732 ((struct ipv6hdr *)ip_p)->nexthdr; ··· 755 goto csum_unnecessary; 756 757 if (likely(is_last_ethertype_ip(skb, &network_depth, &proto))) { 758 - if (unlikely(get_ip_proto(skb, proto) == IPPROTO_SCTP)) 759 goto csum_unnecessary; 760 761 skb->ip_summed = CHECKSUM_COMPLETE;
··· 724 return __get_unaligned_cpu32(fcs_bytes); 725 } 726 727 + static u8 get_ip_proto(struct sk_buff *skb, int network_depth, __be16 proto) 728 { 729 + void *ip_p = skb->data + network_depth; 730 731 return (proto == htons(ETH_P_IP)) ? ((struct iphdr *)ip_p)->protocol : 732 ((struct ipv6hdr *)ip_p)->nexthdr; ··· 755 goto csum_unnecessary; 756 757 if (likely(is_last_ethertype_ip(skb, &network_depth, &proto))) { 758 + if (unlikely(get_ip_proto(skb, network_depth, proto) == IPPROTO_SCTP)) 759 goto csum_unnecessary; 760 761 skb->ip_summed = CHECKSUM_COMPLETE;
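The mlx5 fix matters for tagged or encapsulated traffic: is_last_ethertype_ip() reports the real network-header depth, and hardcoding sizeof(struct ethhdr) reads the wrong byte whenever a VLAN tag sits between the MAC and IP headers. A toy illustration of why the offset must be the parsed depth rather than the bare Ethernet header length:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ETH_HLEN  14
#define VLAN_HLEN  4

int main(void)
{
    /* 802.1Q-tagged frame: the IPv4 header starts at byte 18, not 14,
     * and its protocol field sits 9 bytes into the IP header. */
    uint8_t frame[64];
    size_t network_depth = ETH_HLEN + VLAN_HLEN;

    memset(frame, 0, sizeof(frame));
    frame[network_depth + 9] = 132;           /* IPPROTO_SCTP */

    uint8_t wrong = frame[ETH_HLEN + 9];      /* lands mid-IP-header: reads 0 */
    uint8_t right = frame[network_depth + 9]; /* the real protocol byte */

    printf("assumed-depth proto=%u, parsed-depth proto=%u\n",
           (unsigned)wrong, (unsigned)right);
    return 0;
}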
+2 -2
drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c
··· 560 561 mc_record = mlxsw_sp_nve_mc_record_find(mc_list, proto, addr, 562 &mc_entry); 563 - if (WARN_ON(!mc_record)) 564 return; 565 566 mlxsw_sp_nve_mc_record_entry_del(mc_record, mc_entry); ··· 647 648 key.fid_index = mlxsw_sp_fid_index(fid); 649 mc_list = mlxsw_sp_nve_mc_list_find(mlxsw_sp, &key); 650 - if (WARN_ON(!mc_list)) 651 return; 652 653 mlxsw_sp_nve_fid_flood_index_clear(fid, mc_list);
··· 560 561 mc_record = mlxsw_sp_nve_mc_record_find(mc_list, proto, addr, 562 &mc_entry); 563 + if (!mc_record) 564 return; 565 566 mlxsw_sp_nve_mc_record_entry_del(mc_record, mc_entry); ··· 647 648 key.fid_index = mlxsw_sp_fid_index(fid); 649 mc_list = mlxsw_sp_nve_mc_list_find(mlxsw_sp, &key); 650 + if (!mc_list) 651 return; 652 653 mlxsw_sp_nve_fid_flood_index_clear(fid, mc_list);
+1 -4
drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
··· 1275 { 1276 u32 ul_tb_id = l3mdev_fib_table(ul_dev) ? : RT_TABLE_MAIN; 1277 enum mlxsw_sp_ipip_type ipipt = ipip_entry->ipipt; 1278 - struct net_device *ipip_ul_dev; 1279 1280 if (mlxsw_sp->router->ipip_ops_arr[ipipt]->ul_proto != ul_proto) 1281 return false; 1282 1283 - ipip_ul_dev = __mlxsw_sp_ipip_netdev_ul_dev_get(ipip_entry->ol_dev); 1284 return mlxsw_sp_ipip_entry_saddr_matches(mlxsw_sp, ul_proto, ul_dip, 1285 - ul_tb_id, ipip_entry) && 1286 - (!ipip_ul_dev || ipip_ul_dev == ul_dev); 1287 } 1288 1289 /* Given decap parameters, find the corresponding IPIP entry. */
··· 1275 { 1276 u32 ul_tb_id = l3mdev_fib_table(ul_dev) ? : RT_TABLE_MAIN; 1277 enum mlxsw_sp_ipip_type ipipt = ipip_entry->ipipt; 1278 1279 if (mlxsw_sp->router->ipip_ops_arr[ipipt]->ul_proto != ul_proto) 1280 return false; 1281 1282 return mlxsw_sp_ipip_entry_saddr_matches(mlxsw_sp, ul_proto, ul_dip, 1283 + ul_tb_id, ipip_entry); 1284 } 1285 1286 /* Given decap parameters, find the corresponding IPIP entry. */
+13 -4
drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
··· 296 mlxsw_sp_bridge_port_should_destroy(const struct mlxsw_sp_bridge_port * 297 bridge_port) 298 { 299 - struct mlxsw_sp *mlxsw_sp = mlxsw_sp_lower_get(bridge_port->dev); 300 301 /* In case ports were pulled from out of a bridged LAG, then 302 * it's possible the reference count isn't zero, yet the bridge ··· 2115 2116 vid = is_vlan_dev(dev) ? vlan_dev_vlan_id(dev) : 1; 2117 mlxsw_sp_port_vlan = mlxsw_sp_port_vlan_find_by_vid(mlxsw_sp_port, vid); 2118 - if (WARN_ON(!mlxsw_sp_port_vlan)) 2119 return; 2120 2121 mlxsw_sp_port_vlan_bridge_leave(mlxsw_sp_port_vlan); ··· 2140 if (!fid) 2141 return -EINVAL; 2142 2143 - if (mlxsw_sp_fid_vni_is_set(fid)) 2144 - return -EINVAL; 2145 2146 err = mlxsw_sp_nve_fid_enable(mlxsw_sp, fid, &params, extack); 2147 if (err) ··· 2157 return 0; 2158 2159 err_nve_fid_enable: 2160 mlxsw_sp_fid_put(fid); 2161 return err; 2162 }
··· 296 mlxsw_sp_bridge_port_should_destroy(const struct mlxsw_sp_bridge_port * 297 bridge_port) 298 { 299 + struct net_device *dev = bridge_port->dev; 300 + struct mlxsw_sp *mlxsw_sp; 301 + 302 + if (is_vlan_dev(dev)) 303 + mlxsw_sp = mlxsw_sp_lower_get(vlan_dev_real_dev(dev)); 304 + else 305 + mlxsw_sp = mlxsw_sp_lower_get(dev); 306 307 /* In case ports were pulled from out of a bridged LAG, then 308 * it's possible the reference count isn't zero, yet the bridge ··· 2109 2110 vid = is_vlan_dev(dev) ? vlan_dev_vlan_id(dev) : 1; 2111 mlxsw_sp_port_vlan = mlxsw_sp_port_vlan_find_by_vid(mlxsw_sp_port, vid); 2112 + if (!mlxsw_sp_port_vlan) 2113 return; 2114 2115 mlxsw_sp_port_vlan_bridge_leave(mlxsw_sp_port_vlan); ··· 2134 if (!fid) 2135 return -EINVAL; 2136 2137 + if (mlxsw_sp_fid_vni_is_set(fid)) { 2138 + err = -EINVAL; 2139 + goto err_vni_exists; 2140 + } 2141 2142 err = mlxsw_sp_nve_fid_enable(mlxsw_sp, fid, &params, extack); 2143 if (err) ··· 2149 return 0; 2150 2151 err_nve_fid_enable: 2152 + err_vni_exists: 2153 mlxsw_sp_fid_put(fid); 2154 return err; 2155 }
+12 -6
drivers/net/ethernet/netronome/nfp/flower/offload.c
··· 476 if (err) 477 goto err_destroy_flow; 478 479 - err = nfp_flower_xmit_flow(netdev, flow_pay, 480 - NFP_FLOWER_CMSG_TYPE_FLOW_ADD); 481 - if (err) 482 - goto err_destroy_flow; 483 - 484 flow_pay->tc_flower_cookie = flow->cookie; 485 err = rhashtable_insert_fast(&priv->flow_table, &flow_pay->fl_node, 486 nfp_flower_table_params); 487 if (err) 488 - goto err_destroy_flow; 489 490 port->tc_offload_cnt++; 491 ··· 494 495 return 0; 496 497 err_destroy_flow: 498 kfree(flow_pay->action_data); 499 kfree(flow_pay->mask_data);
··· 476 if (err) 477 goto err_destroy_flow; 478 479 flow_pay->tc_flower_cookie = flow->cookie; 480 err = rhashtable_insert_fast(&priv->flow_table, &flow_pay->fl_node, 481 nfp_flower_table_params); 482 if (err) 483 + goto err_release_metadata; 484 + 485 + err = nfp_flower_xmit_flow(netdev, flow_pay, 486 + NFP_FLOWER_CMSG_TYPE_FLOW_ADD); 487 + if (err) 488 + goto err_remove_rhash; 489 490 port->tc_offload_cnt++; 491 ··· 494 495 return 0; 496 497 + err_remove_rhash: 498 + WARN_ON_ONCE(rhashtable_remove_fast(&priv->flow_table, 499 + &flow_pay->fl_node, 500 + nfp_flower_table_params)); 501 + err_release_metadata: 502 + nfp_modify_flow_metadata(app, flow_pay); 503 err_destroy_flow: 504 kfree(flow_pay->action_data); 505 kfree(flow_pay->mask_data);
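Besides reordering the table insert ahead of the firmware xmit, the nfp hunk repairs the error path: every step that succeeded before the failing one is now unwound in reverse (rhashtable remove, then metadata release). The shape of that pattern, reduced to a sketch with hypothetical step names:

#include <stdbool.h>
#include <stdio.h>

static bool add_to_table(void)   { puts("table: added");   return true;  }
static void del_from_table(void) { puts("table: removed"); }
static bool send_to_hw(void)     { puts("hw: rejected");   return false; }

/* Two-step setup with reverse-order unwind: on failure, undo exactly
 * the steps that already completed, newest first. */
static int offload_flow(void)
{
    if (!add_to_table())
        return -1;
    if (!send_to_hw())
        goto err_remove_table;
    return 0;

err_remove_table:
    del_from_table();
    return -1;
}

int main(void)
{
    return offload_flow() ? 1 : 0;
}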
+5
drivers/net/ethernet/realtek/8139cp.c
··· 571 struct cp_private *cp; 572 int handled = 0; 573 u16 status; 574 575 if (unlikely(dev == NULL)) 576 return IRQ_NONE; 577 cp = netdev_priv(dev); 578 579 spin_lock(&cp->lock); 580 581 status = cpr16(IntrStatus); 582 if (!status || (status == 0xFFFF))
··· 571 struct cp_private *cp; 572 int handled = 0; 573 u16 status; 574 + u16 mask; 575 576 if (unlikely(dev == NULL)) 577 return IRQ_NONE; 578 cp = netdev_priv(dev); 579 580 spin_lock(&cp->lock); 581 + 582 + mask = cpr16(IntrMask); 583 + if (!mask) 584 + goto out_unlock; 585 586 status = cpr16(IntrStatus); 587 if (!status || (status == 0xFFFF))
+14 -10
drivers/net/ethernet/socionext/sni_ave.c
··· 185 NETIF_MSG_TX_ERR) 186 187 /* Parameter for descriptor */ 188 - #define AVE_NR_TXDESC 32 /* Tx descriptor */ 189 - #define AVE_NR_RXDESC 64 /* Rx descriptor */ 190 191 #define AVE_DESC_OFS_CMDSTS 0 192 #define AVE_DESC_OFS_ADDRL 4 ··· 194 195 /* Parameter for ethernet frame */ 196 #define AVE_MAX_ETHFRAME 1518 197 198 /* Parameter for interrupt */ 199 #define AVE_INTM_COUNT 20 ··· 577 578 skb = priv->rx.desc[entry].skbs; 579 if (!skb) { 580 - skb = netdev_alloc_skb_ip_align(ndev, 581 - AVE_MAX_ETHFRAME); 582 if (!skb) { 583 netdev_err(ndev, "can't allocate skb for Rx\n"); 584 return -ENOMEM; 585 } 586 } 587 588 /* set disable to cmdsts */ ··· 596 * - Rx buffer begins with 2 byte headroom, and data will be put from 597 * (buffer + 2). 598 * To satisfy this, specify the address to put back the buffer 599 - * pointer advanced by NET_IP_ALIGN by netdev_alloc_skb_ip_align(), 600 - * and expand the map size by NET_IP_ALIGN. 601 */ 602 ret = ave_dma_map(ndev, &priv->rx.desc[entry], 603 - skb->data - NET_IP_ALIGN, 604 - AVE_MAX_ETHFRAME + NET_IP_ALIGN, 605 DMA_FROM_DEVICE, &paddr); 606 if (ret) { 607 netdev_err(ndev, "can't map skb for Rx\n"); ··· 1691 pdev->name, pdev->id); 1692 1693 /* Register as a NAPI supported driver */ 1694 - netif_napi_add(ndev, &priv->napi_rx, ave_napi_poll_rx, priv->rx.ndesc); 1695 netif_tx_napi_add(ndev, &priv->napi_tx, ave_napi_poll_tx, 1696 - priv->tx.ndesc); 1697 1698 platform_set_drvdata(pdev, ndev); 1699 ··· 1916 }; 1917 module_platform_driver(ave_driver); 1918 1919 MODULE_DESCRIPTION("Socionext UniPhier AVE ethernet driver"); 1920 MODULE_LICENSE("GPL v2");
··· 185 NETIF_MSG_TX_ERR) 186 187 /* Parameter for descriptor */ 188 + #define AVE_NR_TXDESC 64 /* Tx descriptor */ 189 + #define AVE_NR_RXDESC 256 /* Rx descriptor */ 190 191 #define AVE_DESC_OFS_CMDSTS 0 192 #define AVE_DESC_OFS_ADDRL 4 ··· 194 195 /* Parameter for ethernet frame */ 196 #define AVE_MAX_ETHFRAME 1518 197 + #define AVE_FRAME_HEADROOM 2 198 199 /* Parameter for interrupt */ 200 #define AVE_INTM_COUNT 20 ··· 576 577 skb = priv->rx.desc[entry].skbs; 578 if (!skb) { 579 + skb = netdev_alloc_skb(ndev, AVE_MAX_ETHFRAME); 580 if (!skb) { 581 netdev_err(ndev, "can't allocate skb for Rx\n"); 582 return -ENOMEM; 583 } 584 + skb->data += AVE_FRAME_HEADROOM; 585 + skb->tail += AVE_FRAME_HEADROOM; 586 } 587 588 /* set disable to cmdsts */ ··· 594 * - Rx buffer begins with 2 byte headroom, and data will be put from 595 * (buffer + 2). 596 * To satisfy this, specify the address to put back the buffer 597 + * pointer advanced by AVE_FRAME_HEADROOM, and expand the map size 598 + * by AVE_FRAME_HEADROOM. 599 */ 600 ret = ave_dma_map(ndev, &priv->rx.desc[entry], 601 + skb->data - AVE_FRAME_HEADROOM, 602 + AVE_MAX_ETHFRAME + AVE_FRAME_HEADROOM, 603 DMA_FROM_DEVICE, &paddr); 604 if (ret) { 605 netdev_err(ndev, "can't map skb for Rx\n"); ··· 1689 pdev->name, pdev->id); 1690 1691 /* Register as a NAPI supported driver */ 1692 + netif_napi_add(ndev, &priv->napi_rx, ave_napi_poll_rx, 1693 + NAPI_POLL_WEIGHT); 1694 netif_tx_napi_add(ndev, &priv->napi_tx, ave_napi_poll_tx, 1695 + NAPI_POLL_WEIGHT); 1696 1697 platform_set_drvdata(pdev, ndev); 1698 ··· 1913 }; 1914 module_platform_driver(ave_driver); 1915 1916 + MODULE_AUTHOR("Kunihiko Hayashi <hayashi.kunihiko@socionext.com>"); 1917 MODULE_DESCRIPTION("Socionext UniPhier AVE ethernet driver"); 1918 MODULE_LICENSE("GPL v2");
+13 -10
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 2550 netdev_warn(priv->dev, "PTP init failed\n"); 2551 } 2552 2553 - #ifdef CONFIG_DEBUG_FS 2554 - ret = stmmac_init_fs(dev); 2555 - if (ret < 0) 2556 - netdev_warn(priv->dev, "%s: failed debugFS registration\n", 2557 - __func__); 2558 - #endif 2559 priv->tx_lpi_timer = STMMAC_DEFAULT_TWT_LS; 2560 2561 if (priv->use_riwt) { ··· 2749 stmmac_mac_set(priv, priv->ioaddr, false); 2750 2751 netif_carrier_off(dev); 2752 - 2753 - #ifdef CONFIG_DEBUG_FS 2754 - stmmac_exit_fs(dev); 2755 - #endif 2756 2757 stmmac_release_ptp(priv); 2758 ··· 3889 u32 tx_count = priv->plat->tx_queues_to_use; 3890 u32 queue; 3891 3892 for (queue = 0; queue < rx_count; queue++) { 3893 struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; 3894 ··· 4390 goto error_netdev_register; 4391 } 4392 4393 return ret; 4394 4395 error_netdev_register: ··· 4432 4433 netdev_info(priv->dev, "%s: removing driver", __func__); 4434 4435 stmmac_stop_all_dma(priv); 4436 4437 stmmac_mac_set(priv, priv->ioaddr, false);
··· 2550 netdev_warn(priv->dev, "PTP init failed\n"); 2551 } 2552 2553 priv->tx_lpi_timer = STMMAC_DEFAULT_TWT_LS; 2554 2555 if (priv->use_riwt) { ··· 2755 stmmac_mac_set(priv, priv->ioaddr, false); 2756 2757 netif_carrier_off(dev); 2758 2759 stmmac_release_ptp(priv); 2760 ··· 3899 u32 tx_count = priv->plat->tx_queues_to_use; 3900 u32 queue; 3901 3902 + if ((dev->flags & IFF_UP) == 0) 3903 + return 0; 3904 + 3905 for (queue = 0; queue < rx_count; queue++) { 3906 struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; 3907 ··· 4397 goto error_netdev_register; 4398 } 4399 4400 + #ifdef CONFIG_DEBUG_FS 4401 + ret = stmmac_init_fs(ndev); 4402 + if (ret < 0) 4403 + netdev_warn(priv->dev, "%s: failed debugFS registration\n", 4404 + __func__); 4405 + #endif 4406 + 4407 return ret; 4408 4409 error_netdev_register: ··· 4432 4433 netdev_info(priv->dev, "%s: removing driver", __func__); 4434 4435 + #ifdef CONFIG_DEBUG_FS 4436 + stmmac_exit_fs(ndev); 4437 + #endif 4438 stmmac_stop_all_dma(priv); 4439 4440 stmmac_mac_set(priv, priv->ioaddr, false);
+5 -2
drivers/net/macvlan.c
··· 608 goto hash_add; 609 } 610 611 - err = -EBUSY; 612 if (macvlan_addr_busy(vlan->port, dev->dev_addr)) 613 goto out; 614 ··· 706 } else { 707 /* Rehash and update the device filters */ 708 if (macvlan_addr_busy(vlan->port, addr)) 709 - return -EBUSY; 710 711 if (!macvlan_passthru(port)) { 712 err = dev_uc_add(lowerdev, addr); ··· 746 macvlan_set_addr_change(vlan->port); 747 return dev_set_mac_address(vlan->lowerdev, addr); 748 } 749 750 return macvlan_sync_address(dev, addr->sa_data); 751 }
··· 608 goto hash_add; 609 } 610 611 + err = -EADDRINUSE; 612 if (macvlan_addr_busy(vlan->port, dev->dev_addr)) 613 goto out; 614 ··· 706 } else { 707 /* Rehash and update the device filters */ 708 if (macvlan_addr_busy(vlan->port, addr)) 709 + return -EADDRINUSE; 710 711 if (!macvlan_passthru(port)) { 712 err = dev_uc_add(lowerdev, addr); ··· 746 macvlan_set_addr_change(vlan->port); 747 return dev_set_mac_address(vlan->lowerdev, addr); 748 } 749 + 750 + if (macvlan_addr_busy(vlan->port, addr->sa_data)) 751 + return -EADDRINUSE; 752 753 return macvlan_sync_address(dev, addr->sa_data); 754 }
+8 -11
drivers/net/phy/phy_device.c
··· 1880 1881 static int __set_phy_supported(struct phy_device *phydev, u32 max_speed) 1882 { 1883 - phydev->supported &= ~(PHY_1000BT_FEATURES | PHY_100BT_FEATURES | 1884 - PHY_10BT_FEATURES); 1885 - 1886 switch (max_speed) { 1887 - default: 1888 - return -ENOTSUPP; 1889 - case SPEED_1000: 1890 - phydev->supported |= PHY_1000BT_FEATURES; 1891 /* fall through */ 1892 case SPEED_100: 1893 - phydev->supported |= PHY_100BT_FEATURES; 1894 - /* fall through */ 1895 - case SPEED_10: 1896 - phydev->supported |= PHY_10BT_FEATURES; 1897 } 1898 1899 return 0;
··· 1880 1881 static int __set_phy_supported(struct phy_device *phydev, u32 max_speed) 1882 { 1883 switch (max_speed) { 1884 + case SPEED_10: 1885 + phydev->supported &= ~PHY_100BT_FEATURES; 1886 /* fall through */ 1887 case SPEED_100: 1888 + phydev->supported &= ~PHY_1000BT_FEATURES; 1889 + break; 1890 + case SPEED_1000: 1891 + break; 1892 + default: 1893 + return -ENOTSUPP; 1894 } 1895 1896 return 0;
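The rewritten __set_phy_supported() stops rebuilding the feature mask from scratch: the old code cleared everything and set the 10/100/1000 features back, which could advertise modes the PHY never supported in the first place. The new logic only removes modes above max_speed. A userspace mirror of that switch (the feature bit values are made up):

#include <stdint.h>
#include <stdio.h>

#define PHY_10BT_FEATURES   0x1u
#define PHY_100BT_FEATURES  0x2u
#define PHY_1000BT_FEATURES 0x4u

/* Start from what the PHY already supports and only clear the modes
 * above max_speed, instead of wiping the mask and rebuilding it. */
static int set_supported(uint32_t *supported, int max_speed)
{
    switch (max_speed) {
    case 10:
        *supported &= ~PHY_100BT_FEATURES;
        /* fall through */
    case 100:
        *supported &= ~PHY_1000BT_FEATURES;
        break;
    case 1000:
        break;
    default:
        return -1;
    }
    return 0;
}

int main(void)
{
    /* A PHY that only does 10/100: limiting to 1000 must not add gigabit. */
    uint32_t supported = PHY_10BT_FEATURES | PHY_100BT_FEATURES;

    set_supported(&supported, 1000);
    printf("supported=0x%x (gigabit still absent)\n", (unsigned)supported);
    return 0;
}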
+1 -1
drivers/net/phy/sfp-bus.c
··· 162 /* 1000Base-PX or 1000Base-BX10 */ 163 if ((id->base.e_base_px || id->base.e_base_bx10) && 164 br_min <= 1300 && br_max >= 1200) 165 - phylink_set(support, 1000baseX_Full); 166 167 /* For active or passive cables, select the link modes 168 * based on the bit rates and the cable compliance bytes.
··· 162 /* 1000Base-PX or 1000Base-BX10 */ 163 if ((id->base.e_base_px || id->base.e_base_bx10) && 164 br_min <= 1300 && br_max >= 1200) 165 + phylink_set(modes, 1000baseX_Full); 166 167 /* For active or passive cables, select the link modes 168 * based on the bit rates and the cable compliance bytes.
+5 -4
drivers/net/tun.c
··· 2293 static int tun_validate(struct nlattr *tb[], struct nlattr *data[], 2294 struct netlink_ext_ack *extack) 2295 { 2296 - if (!data) 2297 - return 0; 2298 - return -EINVAL; 2299 } 2300 2301 static size_t tun_get_size(const struct net_device *dev) ··· 2385 struct tun_file *tfile, 2386 struct xdp_buff *xdp, int *flush) 2387 { 2388 struct tun_xdp_hdr *hdr = xdp->data_hard_start; 2389 struct virtio_net_hdr *gso = &hdr->gso; 2390 struct tun_pcpu_stats *stats; ··· 2462 stats = get_cpu_ptr(tun->pcpu_stats); 2463 u64_stats_update_begin(&stats->syncp); 2464 stats->rx_packets++; 2465 - stats->rx_bytes += skb->len; 2466 u64_stats_update_end(&stats->syncp); 2467 put_cpu_ptr(stats); 2468
··· 2293 static int tun_validate(struct nlattr *tb[], struct nlattr *data[], 2294 struct netlink_ext_ack *extack) 2295 { 2296 + NL_SET_ERR_MSG(extack, 2297 + "tun/tap creation via rtnetlink is not supported."); 2298 + return -EOPNOTSUPP; 2299 } 2300 2301 static size_t tun_get_size(const struct net_device *dev) ··· 2385 struct tun_file *tfile, 2386 struct xdp_buff *xdp, int *flush) 2387 { 2388 + unsigned int datasize = xdp->data_end - xdp->data; 2389 struct tun_xdp_hdr *hdr = xdp->data_hard_start; 2390 struct virtio_net_hdr *gso = &hdr->gso; 2391 struct tun_pcpu_stats *stats; ··· 2461 stats = get_cpu_ptr(tun->pcpu_stats); 2462 u64_stats_update_begin(&stats->syncp); 2463 stats->rx_packets++; 2464 + stats->rx_bytes += datasize; 2465 u64_stats_update_end(&stats->syncp); 2466 put_cpu_ptr(stats); 2467
+9 -5
drivers/net/virtio_net.c
··· 365 static struct sk_buff *page_to_skb(struct virtnet_info *vi, 366 struct receive_queue *rq, 367 struct page *page, unsigned int offset, 368 - unsigned int len, unsigned int truesize) 369 { 370 struct sk_buff *skb; 371 struct virtio_net_hdr_mrg_rxbuf *hdr; ··· 388 else 389 hdr_padded_len = sizeof(struct padded_vnet_hdr); 390 391 - memcpy(hdr, p, hdr_len); 392 393 len -= hdr_len; 394 offset += hdr_padded_len; ··· 741 struct virtnet_rq_stats *stats) 742 { 743 struct page *page = buf; 744 - struct sk_buff *skb = page_to_skb(vi, rq, page, 0, len, PAGE_SIZE); 745 746 stats->bytes += len - vi->hdr_len; 747 if (unlikely(!skb)) ··· 845 rcu_read_unlock(); 846 put_page(page); 847 head_skb = page_to_skb(vi, rq, xdp_page, 848 - offset, len, PAGE_SIZE); 849 return head_skb; 850 } 851 break; ··· 902 goto err_skb; 903 } 904 905 - head_skb = page_to_skb(vi, rq, page, offset, len, truesize); 906 curr_skb = head_skb; 907 908 if (unlikely(!curr_skb))
··· 365 static struct sk_buff *page_to_skb(struct virtnet_info *vi, 366 struct receive_queue *rq, 367 struct page *page, unsigned int offset, 368 + unsigned int len, unsigned int truesize, 369 + bool hdr_valid) 370 { 371 struct sk_buff *skb; 372 struct virtio_net_hdr_mrg_rxbuf *hdr; ··· 387 else 388 hdr_padded_len = sizeof(struct padded_vnet_hdr); 389 390 + if (hdr_valid) 391 + memcpy(hdr, p, hdr_len); 392 393 len -= hdr_len; 394 offset += hdr_padded_len; ··· 739 struct virtnet_rq_stats *stats) 740 { 741 struct page *page = buf; 742 + struct sk_buff *skb = page_to_skb(vi, rq, page, 0, len, 743 + PAGE_SIZE, true); 744 745 stats->bytes += len - vi->hdr_len; 746 if (unlikely(!skb)) ··· 842 rcu_read_unlock(); 843 put_page(page); 844 head_skb = page_to_skb(vi, rq, xdp_page, 845 + offset, len, 846 + PAGE_SIZE, false); 847 return head_skb; 848 } 849 break; ··· 898 goto err_skb; 899 } 900 901 + head_skb = page_to_skb(vi, rq, page, offset, len, truesize, !xdp_prog); 902 curr_skb = head_skb; 903 904 if (unlikely(!curr_skb))
+11 -9
drivers/net/wireless/mac80211_hwsim.c
··· 2884 2885 wiphy_ext_feature_set(hw->wiphy, NL80211_EXT_FEATURE_CQM_RSSI_LIST); 2886 2887 err = ieee80211_register_hw(hw); 2888 if (err < 0) { 2889 pr_debug("mac80211_hwsim: ieee80211_register_hw failed (%d)\n", ··· 2911 debugfs_create_file("dfs_simulate_radar", 0222, 2912 data->debugfs, 2913 data, &hwsim_simulate_radar); 2914 - 2915 - tasklet_hrtimer_init(&data->beacon_timer, 2916 - mac80211_hwsim_beacon, 2917 - CLOCK_MONOTONIC, HRTIMER_MODE_ABS); 2918 2919 spin_lock_bh(&hwsim_radio_lock); 2920 err = rhashtable_insert_fast(&hwsim_radios_rht, &data->rht, ··· 3703 if (err) 3704 goto out_unregister_pernet; 3705 3706 hwsim_class = class_create(THIS_MODULE, "mac80211_hwsim"); 3707 if (IS_ERR(hwsim_class)) { 3708 err = PTR_ERR(hwsim_class); 3709 - goto out_unregister_driver; 3710 } 3711 - 3712 - err = hwsim_init_netlink(); 3713 - if (err < 0) 3714 - goto out_unregister_driver; 3715 3716 for (i = 0; i < radios; i++) { 3717 struct hwsim_new_radio_params param = { 0 }; ··· 3818 free_netdev(hwsim_mon); 3819 out_free_radios: 3820 mac80211_hwsim_free(); 3821 out_unregister_driver: 3822 platform_driver_unregister(&mac80211_hwsim_driver); 3823 out_unregister_pernet:
··· 2884 2885 wiphy_ext_feature_set(hw->wiphy, NL80211_EXT_FEATURE_CQM_RSSI_LIST); 2886 2887 + tasklet_hrtimer_init(&data->beacon_timer, 2888 + mac80211_hwsim_beacon, 2889 + CLOCK_MONOTONIC, HRTIMER_MODE_ABS); 2890 + 2891 err = ieee80211_register_hw(hw); 2892 if (err < 0) { 2893 pr_debug("mac80211_hwsim: ieee80211_register_hw failed (%d)\n", ··· 2907 debugfs_create_file("dfs_simulate_radar", 0222, 2908 data->debugfs, 2909 data, &hwsim_simulate_radar); 2910 2911 spin_lock_bh(&hwsim_radio_lock); 2912 err = rhashtable_insert_fast(&hwsim_radios_rht, &data->rht, ··· 3703 if (err) 3704 goto out_unregister_pernet; 3705 3706 + err = hwsim_init_netlink(); 3707 + if (err) 3708 + goto out_unregister_driver; 3709 + 3710 hwsim_class = class_create(THIS_MODULE, "mac80211_hwsim"); 3711 if (IS_ERR(hwsim_class)) { 3712 err = PTR_ERR(hwsim_class); 3713 + goto out_exit_netlink; 3714 } 3715 3716 for (i = 0; i < radios; i++) { 3717 struct hwsim_new_radio_params param = { 0 }; ··· 3818 free_netdev(hwsim_mon); 3819 out_free_radios: 3820 mac80211_hwsim_free(); 3821 + out_exit_netlink: 3822 + hwsim_exit_netlink(); 3823 out_unregister_driver: 3824 platform_driver_unregister(&mac80211_hwsim_driver); 3825 out_unregister_pernet:
-3
drivers/vhost/vhost.c
··· 944 if (msg->iova <= vq_msg->iova && 945 msg->iova + msg->size - 1 >= vq_msg->iova && 946 vq_msg->type == VHOST_IOTLB_MISS) { 947 - mutex_lock(&node->vq->mutex); 948 vhost_poll_queue(&node->vq->poll); 949 - mutex_unlock(&node->vq->mutex); 950 - 951 list_del(&node->node); 952 kfree(node); 953 }
··· 944 if (msg->iova <= vq_msg->iova && 945 msg->iova + msg->size - 1 >= vq_msg->iova && 946 vq_msg->type == VHOST_IOTLB_MISS) { 947 vhost_poll_queue(&node->vq->poll); 948 list_del(&node->node); 949 kfree(node); 950 }
+7
include/linux/filter.h
··· 449 offsetof(TYPE, MEMBER) ... offsetofend(TYPE, MEMBER) - 1 450 #define bpf_ctx_range_till(TYPE, MEMBER1, MEMBER2) \ 451 offsetof(TYPE, MEMBER1) ... offsetofend(TYPE, MEMBER2) - 1 452 453 #define bpf_target_off(TYPE, MEMBER, SIZE, PTR_SIZE) \ 454 ({ \
··· 449 offsetof(TYPE, MEMBER) ... offsetofend(TYPE, MEMBER) - 1 450 #define bpf_ctx_range_till(TYPE, MEMBER1, MEMBER2) \ 451 offsetof(TYPE, MEMBER1) ... offsetofend(TYPE, MEMBER2) - 1 452 + #if BITS_PER_LONG == 64 453 + # define bpf_ctx_range_ptr(TYPE, MEMBER) \ 454 + offsetof(TYPE, MEMBER) ... offsetofend(TYPE, MEMBER) - 1 455 + #else 456 + # define bpf_ctx_range_ptr(TYPE, MEMBER) \ 457 + offsetof(TYPE, MEMBER) ... offsetof(TYPE, MEMBER) + 8 - 1 458 + #endif /* BITS_PER_LONG == 64 */ 459 460 #define bpf_target_off(TYPE, MEMBER, SIZE, PTR_SIZE) \ 461 ({ \
+1 -1
include/linux/sfp.h
··· 224 * 225 * See the SFF-8472 specification and related documents for the definition 226 * of these structure members. This can be obtained from 227 - * ftp://ftp.seagate.com/sff 228 */ 229 struct sfp_eeprom_id { 230 struct sfp_eeprom_base base;
··· 224 * 225 * See the SFF-8472 specification and related documents for the definition 226 * of these structure members. This can be obtained from 227 + * https://www.snia.org/technology-communities/sff/specifications 228 */ 229 struct sfp_eeprom_id { 230 struct sfp_eeprom_base base;
+24 -6
include/net/neighbour.h
··· 454 455 static inline int neigh_hh_output(const struct hh_cache *hh, struct sk_buff *skb) 456 { 457 unsigned int seq; 458 unsigned int hh_len; 459 ··· 462 seq = read_seqbegin(&hh->hh_lock); 463 hh_len = hh->hh_len; 464 if (likely(hh_len <= HH_DATA_MOD)) { 465 - /* this is inlined by gcc */ 466 - memcpy(skb->data - HH_DATA_MOD, hh->hh_data, HH_DATA_MOD); 467 - } else { 468 - unsigned int hh_alen = HH_DATA_ALIGN(hh_len); 469 470 - memcpy(skb->data - hh_alen, hh->hh_data, hh_alen); 471 } 472 } while (read_seqretry(&hh->hh_lock, seq)); 473 474 - skb_push(skb, hh_len); 475 return dev_queue_xmit(skb); 476 } 477
··· 454 455 static inline int neigh_hh_output(const struct hh_cache *hh, struct sk_buff *skb) 456 { 457 + unsigned int hh_alen = 0; 458 unsigned int seq; 459 unsigned int hh_len; 460 ··· 461 seq = read_seqbegin(&hh->hh_lock); 462 hh_len = hh->hh_len; 463 if (likely(hh_len <= HH_DATA_MOD)) { 464 + hh_alen = HH_DATA_MOD; 465 466 + /* skb_push() would proceed silently if we have room for 467 + * the unaligned size but not for the aligned size: 468 + * check headroom explicitly. 469 + */ 470 + if (likely(skb_headroom(skb) >= HH_DATA_MOD)) { 471 + /* this is inlined by gcc */ 472 + memcpy(skb->data - HH_DATA_MOD, hh->hh_data, 473 + HH_DATA_MOD); 474 + } 475 + } else { 476 + hh_alen = HH_DATA_ALIGN(hh_len); 477 + 478 + if (likely(skb_headroom(skb) >= hh_alen)) { 479 + memcpy(skb->data - hh_alen, hh->hh_data, 480 + hh_alen); 481 + } 482 } 483 } while (read_seqretry(&hh->hh_lock, seq)); 484 485 + if (WARN_ON_ONCE(skb_headroom(skb) < hh_alen)) { 486 + kfree_skb(skb); 487 + return NET_XMIT_DROP; 488 + } 489 + 490 + __skb_push(skb, hh_len); 491 return dev_queue_xmit(skb); 492 } 493
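The neigh_hh_output() change guards against writing before the start of the buffer: the cached header is copied at its *aligned* size, so the headroom check has to use hh_alen, and an undersized skb is now dropped instead of silently corrupted. Reduced to a plain-buffer sketch:

#include <stdio.h>
#include <string.h>

#define HEADROOM 8

int main(void)
{
    /* Toy packet: 'data' starts HEADROOM bytes into the buffer, like
     * skb->data sitting above skb->head. */
    unsigned char buf[64];
    unsigned char *data = buf + HEADROOM;
    size_t hh_len  = 14;  /* cached header length (what a bare push checks) */
    size_t hh_alen = 16;  /* aligned copy size (what memcpy actually writes) */

    /* The bug: the copy needs hh_alen bytes of headroom, but only the
     * smaller hh_len was implicitly accounted for. */
    if (HEADROOM < hh_alen) {
        fprintf(stderr, "drop: need %zu bytes of headroom, have %d\n",
                hh_alen, HEADROOM);
        return 1;  /* the fixed code drops rather than scribble before buf */
    }
    memcpy(data - hh_alen, "................", hh_alen);
    (void)hh_len;
    return 0;
}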
+5
include/net/sctp/sctp.h
··· 620 return false; 621 } 622 623 #endif /* __net_sctp_h__ */
··· 620 return false; 621 } 622 623 + static inline __u32 sctp_min_frag_point(struct sctp_sock *sp, __u16 datasize) 624 + { 625 + return sctp_mtu_payload(sp, SCTP_DEFAULT_MINSEGMENT, datasize); 626 + } 627 + 628 #endif /* __net_sctp_h__ */
+2
include/net/sctp/structs.h
··· 2075 2076 __u64 abandoned_unsent[SCTP_PR_INDEX(MAX) + 1]; 2077 __u64 abandoned_sent[SCTP_PR_INDEX(MAX) + 1]; 2078 }; 2079 2080
··· 2075 2076 __u64 abandoned_unsent[SCTP_PR_INDEX(MAX) + 1]; 2077 __u64 abandoned_sent[SCTP_PR_INDEX(MAX) + 1]; 2078 + 2079 + struct rcu_head rcu; 2080 }; 2081 2082
+37 -19
include/uapi/linux/bpf.h
··· 2170 * Return 2171 * 0 on success, or a negative error in case of failure. 2172 * 2173 - * struct bpf_sock *bpf_sk_lookup_tcp(void *ctx, struct bpf_sock_tuple *tuple, u32 tuple_size, u32 netns, u64 flags) 2174 * Description 2175 * Look for TCP socket matching *tuple*, optionally in a child 2176 * network namespace *netns*. The return value must be checked, ··· 2187 * **sizeof**\ (*tuple*\ **->ipv6**) 2188 * Look for an IPv6 socket. 2189 * 2190 - * If the *netns* is zero, then the socket lookup table in the 2191 - * netns associated with the *ctx* will be used. For the TC hooks, 2192 - * this in the netns of the device in the skb. For socket hooks, 2193 - * this in the netns of the socket. If *netns* is non-zero, then 2194 - * it specifies the ID of the netns relative to the netns 2195 - * associated with the *ctx*. 2196 * 2197 * All values for *flags* are reserved for future usage, and must 2198 * be left at zero. ··· 2203 * **CONFIG_NET** configuration option. 2204 * Return 2205 * Pointer to *struct bpf_sock*, or NULL in case of failure. 2206 * 2207 - * struct bpf_sock *bpf_sk_lookup_udp(void *ctx, struct bpf_sock_tuple *tuple, u32 tuple_size, u32 netns, u64 flags) 2208 * Description 2209 * Look for UDP socket matching *tuple*, optionally in a child 2210 * network namespace *netns*. The return value must be checked, ··· 2223 * **sizeof**\ (*tuple*\ **->ipv6**) 2224 * Look for an IPv6 socket. 2225 * 2226 - * If the *netns* is zero, then the socket lookup table in the 2227 - * netns associated with the *ctx* will be used. For the TC hooks, 2228 - * this in the netns of the device in the skb. For socket hooks, 2229 - * this in the netns of the socket. If *netns* is non-zero, then 2230 - * it specifies the ID of the netns relative to the netns 2231 - * associated with the *ctx*. 2232 * 2233 * All values for *flags* are reserved for future usage, and must 2234 * be left at zero. ··· 2239 * **CONFIG_NET** configuration option. 2240 * Return 2241 * Pointer to *struct bpf_sock*, or NULL in case of failure. 2242 * 2243 * int bpf_sk_release(struct bpf_sock *sk) 2244 * Description ··· 2413 /* BPF_FUNC_perf_event_output for sk_buff input context. */ 2414 #define BPF_F_CTXLEN_MASK (0xfffffULL << 32) 2415 2416 /* Mode for BPF_FUNC_skb_adjust_room helper. */ 2417 enum bpf_adj_room_mode { 2418 BPF_ADJ_ROOM_NET, ··· 2432 BPF_LWT_ENCAP_SEG6, 2433 BPF_LWT_ENCAP_SEG6_INLINE 2434 }; 2435 2436 /* user accessible mirror of in-kernel sk_buff. 2437 * new fields can only be added to the end of this structure ··· 2473 /* ... here. */ 2474 2475 __u32 data_meta; 2476 - struct bpf_flow_keys *flow_keys; 2477 }; 2478 2479 struct bpf_tunnel_key { ··· 2589 * be added to the end of this structure 2590 */ 2591 struct sk_msg_md { 2592 - void *data; 2593 - void *data_end; 2594 2595 __u32 family; 2596 __u32 remote_ip4; /* Stored in network byte order */ ··· 2606 * Start of directly accessible data. It begins from 2607 * the tcp/udp header. 2608 */ 2609 - void *data; 2610 - void *data_end; /* End of directly accessible data */ 2611 /* 2612 * Total length of packet (starting from the tcp/udp header). 2613 * Note that the directly accessible bytes (data_end - data)
··· 2170 * Return 2171 * 0 on success, or a negative error in case of failure. 2172 * 2173 + * struct bpf_sock *bpf_sk_lookup_tcp(void *ctx, struct bpf_sock_tuple *tuple, u32 tuple_size, u64 netns, u64 flags) 2174 * Description 2175 * Look for TCP socket matching *tuple*, optionally in a child 2176 * network namespace *netns*. The return value must be checked, ··· 2187 * **sizeof**\ (*tuple*\ **->ipv6**) 2188 * Look for an IPv6 socket. 2189 * 2190 + * If the *netns* is a negative signed 32-bit integer, then the 2191 + * socket lookup table in the netns associated with the *ctx* will 2192 + * be used. For the TC hooks, this is the netns of the device 2193 + * in the skb. For socket hooks, this is the netns of the socket. 2194 + * If *netns* is any other signed 32-bit value greater than or 2195 + * equal to zero then it specifies the ID of the netns relative to 2196 + * the netns associated with the *ctx*. *netns* values beyond the 2197 + * range of 32-bit integers are reserved for future use. 2198 * 2199 * All values for *flags* are reserved for future usage, and must 2200 * be left at zero. ··· 2201 * **CONFIG_NET** configuration option. 2202 * Return 2203 * Pointer to *struct bpf_sock*, or NULL in case of failure. 2204 + * For sockets with reuseport option, the *struct bpf_sock* 2205 + * result is from reuse->socks[] using the hash of the tuple. 2206 * 2207 + * struct bpf_sock *bpf_sk_lookup_udp(void *ctx, struct bpf_sock_tuple *tuple, u32 tuple_size, u64 netns, u64 flags) 2208 * Description 2209 * Look for UDP socket matching *tuple*, optionally in a child 2210 * network namespace *netns*. The return value must be checked, ··· 2219 * **sizeof**\ (*tuple*\ **->ipv6**) 2220 * Look for an IPv6 socket. 2221 * 2222 + * If the *netns* is a negative signed 32-bit integer, then the 2223 + * socket lookup table in the netns associated with the *ctx* will 2224 + * be used. For the TC hooks, this is the netns of the device 2225 + * in the skb. For socket hooks, this is the netns of the socket. 2226 + * If *netns* is any other signed 32-bit value greater than or 2227 + * equal to zero then it specifies the ID of the netns relative to 2228 + * the netns associated with the *ctx*. *netns* values beyond the 2229 + * range of 32-bit integers are reserved for future use. 2230 * 2231 * All values for *flags* are reserved for future usage, and must 2232 * be left at zero. ··· 2233 * **CONFIG_NET** configuration option. 2234 * Return 2235 * Pointer to *struct bpf_sock*, or NULL in case of failure. 2236 + * For sockets with reuseport option, the *struct bpf_sock* 2237 + * result is from reuse->socks[] using the hash of the tuple. 2238 * 2239 * int bpf_sk_release(struct bpf_sock *sk) 2240 * Description ··· 2405 /* BPF_FUNC_perf_event_output for sk_buff input context. */ 2406 #define BPF_F_CTXLEN_MASK (0xfffffULL << 32) 2407 2408 + /* Current network namespace */ 2409 + #define BPF_F_CURRENT_NETNS (-1L) 2410 + 2411 /* Mode for BPF_FUNC_skb_adjust_room helper. */ 2412 enum bpf_adj_room_mode { 2413 BPF_ADJ_ROOM_NET, ··· 2421 BPF_LWT_ENCAP_SEG6, 2422 BPF_LWT_ENCAP_SEG6_INLINE 2423 }; 2424 + 2425 + #define __bpf_md_ptr(type, name) \ 2426 + union { \ 2427 + type name; \ 2428 + __u64 :64; \ 2429 + } __attribute__((aligned(8))) 2430 2431 /* user accessible mirror of in-kernel sk_buff. 2432 * new fields can only be added to the end of this structure ··· 2456 /* ... here. */ 2457 2458 __u32 data_meta; 2459 + __bpf_md_ptr(struct bpf_flow_keys *, flow_keys); 2460 }; 2461 2462 struct bpf_tunnel_key { ··· 2572 * be added to the end of this structure 2573 */ 2574 struct sk_msg_md { 2575 + __bpf_md_ptr(void *, data); 2576 + __bpf_md_ptr(void *, data_end); 2577 2578 __u32 family; 2579 __u32 remote_ip4; /* Stored in network byte order */ ··· 2589 * Start of directly accessible data. It begins from 2590 * the tcp/udp header. 2591 */ 2592 + __bpf_md_ptr(void *, data); 2593 + /* End of directly accessible data */ 2594 + __bpf_md_ptr(void *, data_end); 2595 /* 2596 * Total length of packet (starting from the tcp/udp header). 2597 * Note that the directly accessible bytes (data_end - data)
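The widened *netns* argument pairs with the new BPF_F_CURRENT_NETNS flag above. As a rough illustration (not part of the patch), a tc classifier built against the updated header might look like the sketch below; the helper declarations follow the style of the kernel's BPF selftests, and the section name, address, port and little-endian byte swaps are assumptions of the example:

	#include <linux/bpf.h>

	/* Selftests-style helper declarations (assumed, not from this diff) */
	static struct bpf_sock *(*bpf_sk_lookup_tcp)(void *ctx,
		struct bpf_sock_tuple *tuple, __u32 tuple_size,
		__u64 netns, __u64 flags) = (void *) BPF_FUNC_sk_lookup_tcp;
	static int (*bpf_sk_release)(struct bpf_sock *sk) =
		(void *) BPF_FUNC_sk_release;

	__attribute__((section("classifier"), used))
	int lookup_in_current_netns(struct __sk_buff *skb)
	{
		struct bpf_sock_tuple tuple = {};
		struct bpf_sock *sk;

		tuple.ipv4.daddr = __builtin_bswap32(0x7f000001); /* 127.0.0.1 */
		tuple.ipv4.dport = __builtin_bswap16(80);         /* htons(80) */

		/* BPF_F_CURRENT_NETNS (-1) selects the skb device's netns */
		sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple.ipv4),
				       BPF_F_CURRENT_NETNS, 0);
		if (sk)
			bpf_sk_release(sk); /* verifier requires the release */
		return 0;
	}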
+82
kernel/bpf/btf.c
··· 5 #include <uapi/linux/types.h> 6 #include <linux/seq_file.h> 7 #include <linux/compiler.h> 8 #include <linux/errno.h> 9 #include <linux/slab.h> 10 #include <linux/anon_inodes.h> ··· 425 { 426 return BTF_STR_OFFSET_VALID(offset) && 427 offset < btf->hdr.str_len; 428 } 429 430 static const char *btf_name_by_offset(const struct btf *btf, u32 offset) ··· 1168 return -EINVAL; 1169 } 1170 1171 btf_verifier_log_type(env, t, NULL); 1172 1173 return 0; ··· 1341 return -EINVAL; 1342 } 1343 1344 btf_verifier_log_type(env, t, NULL); 1345 1346 return 0; ··· 1401 btf_verifier_log_basic(env, t, 1402 "meta_left:%u meta_needed:%u", 1403 meta_left, meta_needed); 1404 return -EINVAL; 1405 } 1406 ··· 1586 return -EINVAL; 1587 } 1588 1589 btf_verifier_log_type(env, t, NULL); 1590 1591 last_offset = 0; ··· 1604 return -EINVAL; 1605 } 1606 1607 /* A member cannot be in type void */ 1608 if (!member->type || !BTF_TYPE_ID_VALID(member->type)) { 1609 btf_verifier_log_member(env, t, member, ··· 1797 return -EINVAL; 1798 } 1799 1800 btf_verifier_log_type(env, t, NULL); 1801 1802 for (i = 0; i < nr_enums; i++) { ··· 1812 enums[i].name_off); 1813 return -EINVAL; 1814 } 1815 1816 btf_verifier_log(env, "\t%s val=%d\n", 1817 btf_name_by_offset(btf, enums[i].name_off),
··· 5 #include <uapi/linux/types.h> 6 #include <linux/seq_file.h> 7 #include <linux/compiler.h> 8 + #include <linux/ctype.h> 9 #include <linux/errno.h> 10 #include <linux/slab.h> 11 #include <linux/anon_inodes.h> ··· 424 { 425 return BTF_STR_OFFSET_VALID(offset) && 426 offset < btf->hdr.str_len; 427 + } 428 + 429 + /* Only C-style identifier is permitted. This can be relaxed if 430 + * necessary. 431 + */ 432 + static bool btf_name_valid_identifier(const struct btf *btf, u32 offset) 433 + { 434 + /* offset must be valid */ 435 + const char *src = &btf->strings[offset]; 436 + const char *src_limit; 437 + 438 + if (!isalpha(*src) && *src != '_') 439 + return false; 440 + 441 + /* set a limit on identifier length */ 442 + src_limit = src + KSYM_NAME_LEN; 443 + src++; 444 + while (*src && src < src_limit) { 445 + if (!isalnum(*src) && *src != '_') 446 + return false; 447 + src++; 448 + } 449 + 450 + return !*src; 451 } 452 453 static const char *btf_name_by_offset(const struct btf *btf, u32 offset) ··· 1143 return -EINVAL; 1144 } 1145 1146 + /* typedef type must have a valid name, and other ref types, 1147 + * volatile, const, restrict, should have a null name. 1148 + */ 1149 + if (BTF_INFO_KIND(t->info) == BTF_KIND_TYPEDEF) { 1150 + if (!t->name_off || 1151 + !btf_name_valid_identifier(env->btf, t->name_off)) { 1152 + btf_verifier_log_type(env, t, "Invalid name"); 1153 + return -EINVAL; 1154 + } 1155 + } else { 1156 + if (t->name_off) { 1157 + btf_verifier_log_type(env, t, "Invalid name"); 1158 + return -EINVAL; 1159 + } 1160 + } 1161 + 1162 btf_verifier_log_type(env, t, NULL); 1163 1164 return 0; ··· 1300 return -EINVAL; 1301 } 1302 1303 + /* fwd type must have a valid name */ 1304 + if (!t->name_off || 1305 + !btf_name_valid_identifier(env->btf, t->name_off)) { 1306 + btf_verifier_log_type(env, t, "Invalid name"); 1307 + return -EINVAL; 1308 + } 1309 + 1310 btf_verifier_log_type(env, t, NULL); 1311 1312 return 0; ··· 1353 btf_verifier_log_basic(env, t, 1354 "meta_left:%u meta_needed:%u", 1355 meta_left, meta_needed); 1356 + return -EINVAL; 1357 + } 1358 + 1359 + /* array type should not have a name */ 1360 + if (t->name_off) { 1361 + btf_verifier_log_type(env, t, "Invalid name"); 1362 return -EINVAL; 1363 } 1364 ··· 1532 return -EINVAL; 1533 } 1534 1535 + /* struct type either no name or a valid one */ 1536 + if (t->name_off && 1537 + !btf_name_valid_identifier(env->btf, t->name_off)) { 1538 + btf_verifier_log_type(env, t, "Invalid name"); 1539 + return -EINVAL; 1540 + } 1541 + 1542 btf_verifier_log_type(env, t, NULL); 1543 1544 last_offset = 0; ··· 1543 return -EINVAL; 1544 } 1545 1546 + /* struct member either no name or a valid one */ 1547 + if (member->name_off && 1548 + !btf_name_valid_identifier(btf, member->name_off)) { 1549 + btf_verifier_log_member(env, t, member, "Invalid name"); 1550 + return -EINVAL; 1551 + } 1552 /* A member cannot be in type void */ 1553 if (!member->type || !BTF_TYPE_ID_VALID(member->type)) { 1554 btf_verifier_log_member(env, t, member, ··· 1730 return -EINVAL; 1731 } 1732 1733 + /* enum type either no name or a valid one */ 1734 + if (t->name_off && 1735 + !btf_name_valid_identifier(env->btf, t->name_off)) { 1736 + btf_verifier_log_type(env, t, "Invalid name"); 1737 + return -EINVAL; 1738 + } 1739 + 1740 btf_verifier_log_type(env, t, NULL); 1741 1742 for (i = 0; i < nr_enums; i++) { ··· 1738 enums[i].name_off); 1739 return -EINVAL; 1740 } 1741 + 1742 + /* enum member must have a valid name */ 1743 + if (!enums[i].name_off || 1744 + !btf_name_valid_identifier(btf, enums[i].name_off)) { 1745 + btf_verifier_log_type(env, t, "Invalid name"); 1746 + return -EINVAL; 1747 + } 1748 + 1749 1750 btf_verifier_log(env, "\t%s val=%d\n", 1751 btf_name_by_offset(btf, enums[i].name_off),
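The new check accepts exactly the C identifier shape, capped at KSYM_NAME_LEN. A stand-alone model of the rule, with 128 standing in for KSYM_NAME_LEN purely for illustration:

	#include <ctype.h>
	#include <stdbool.h>

	#define MAX_IDENT_LEN 128 /* stand-in for the kernel's KSYM_NAME_LEN */

	static bool name_valid_identifier(const char *src)
	{
		const char *src_limit = src + MAX_IDENT_LEN;

		if (!isalpha((unsigned char)*src) && *src != '_')
			return false;           /* must start [A-Za-z_] */
		src++;
		while (*src && src < src_limit) {
			if (!isalnum((unsigned char)*src) && *src != '_')
				return false;   /* then [A-Za-z0-9_]* */
			src++;
		}
		return !*src; /* also rejects names that hit the length cap */
	}

Under this rule "pkt_count" passes while "2nd_field" and "a.b" are rejected, which is what lets later BTF consumers treat these strings as printable identifiers.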
+89 -14
kernel/bpf/verifier.c
··· 175 176 #define BPF_COMPLEXITY_LIMIT_INSNS 131072 177 #define BPF_COMPLEXITY_LIMIT_STACK 1024 178 179 #define BPF_MAP_PTR_UNPRIV 1UL 180 #define BPF_MAP_PTR_POISON ((void *)((0xeB9FUL << 1) + \ ··· 3752 } 3753 } 3754 3755 /* Adjusts the register min/max values in the case that the dst_reg is the 3756 * variable register that we are working on, and src_reg is a constant or we're 3757 * simply doing a BPF_K check. ··· 4226 4227 dst_reg = &regs[insn->dst_reg]; 4228 4229 - /* detect if R == 0 where R was initialized to zero earlier */ 4230 - if (BPF_SRC(insn->code) == BPF_K && 4231 - (opcode == BPF_JEQ || opcode == BPF_JNE) && 4232 - dst_reg->type == SCALAR_VALUE && 4233 - tnum_is_const(dst_reg->var_off)) { 4234 - if ((opcode == BPF_JEQ && dst_reg->var_off.value == insn->imm) || 4235 - (opcode == BPF_JNE && dst_reg->var_off.value != insn->imm)) { 4236 - /* if (imm == imm) goto pc+off; 4237 - * only follow the goto, ignore fall-through 4238 - */ 4239 *insn_idx += insn->off; 4240 return 0; 4241 - } else { 4242 - /* if (imm != imm) goto pc+off; 4243 - * only follow fall-through branch, since 4244 * that's where the program will go 4245 */ 4246 return 0; ··· 5048 struct bpf_verifier_state_list *new_sl; 5049 struct bpf_verifier_state_list *sl; 5050 struct bpf_verifier_state *cur = env->cur_state, *new; 5051 - int i, j, err; 5052 5053 sl = env->explored_states[insn_idx]; 5054 if (!sl) ··· 5075 return 1; 5076 } 5077 sl = sl->next; 5078 } 5079 5080 /* there were no equivalent states, remember current one. 5081 * technically the current state is not proven to be safe yet, ··· 5219 } 5220 goto process_bpf_exit; 5221 } 5222 5223 if (need_resched()) 5224 cond_resched();
··· 175 176 #define BPF_COMPLEXITY_LIMIT_INSNS 131072 177 #define BPF_COMPLEXITY_LIMIT_STACK 1024 178 + #define BPF_COMPLEXITY_LIMIT_STATES 64 179 180 #define BPF_MAP_PTR_UNPRIV 1UL 181 #define BPF_MAP_PTR_POISON ((void *)((0xeB9FUL << 1) + \ ··· 3751 } 3752 } 3753 3754 + /* compute branch direction of the expression "if (reg opcode val) goto target;" 3755 + * and return: 3756 + * 1 - branch will be taken and "goto target" will be executed 3757 + * 0 - branch will not be taken and fall-through to next insn 3758 + * -1 - unknown. Example: "if (reg < 5)" is unknown when register value range [0,10] 3759 + */ 3760 + static int is_branch_taken(struct bpf_reg_state *reg, u64 val, u8 opcode) 3761 + { 3762 + if (__is_pointer_value(false, reg)) 3763 + return -1; 3764 + 3765 + switch (opcode) { 3766 + case BPF_JEQ: 3767 + if (tnum_is_const(reg->var_off)) 3768 + return !!tnum_equals_const(reg->var_off, val); 3769 + break; 3770 + case BPF_JNE: 3771 + if (tnum_is_const(reg->var_off)) 3772 + return !tnum_equals_const(reg->var_off, val); 3773 + break; 3774 + case BPF_JGT: 3775 + if (reg->umin_value > val) 3776 + return 1; 3777 + else if (reg->umax_value <= val) 3778 + return 0; 3779 + break; 3780 + case BPF_JSGT: 3781 + if (reg->smin_value > (s64)val) 3782 + return 1; 3783 + else if (reg->smax_value < (s64)val) 3784 + return 0; 3785 + break; 3786 + case BPF_JLT: 3787 + if (reg->umax_value < val) 3788 + return 1; 3789 + else if (reg->umin_value >= val) 3790 + return 0; 3791 + break; 3792 + case BPF_JSLT: 3793 + if (reg->smax_value < (s64)val) 3794 + return 1; 3795 + else if (reg->smin_value >= (s64)val) 3796 + return 0; 3797 + break; 3798 + case BPF_JGE: 3799 + if (reg->umin_value >= val) 3800 + return 1; 3801 + else if (reg->umax_value < val) 3802 + return 0; 3803 + break; 3804 + case BPF_JSGE: 3805 + if (reg->smin_value >= (s64)val) 3806 + return 1; 3807 + else if (reg->smax_value < (s64)val) 3808 + return 0; 3809 + break; 3810 + case BPF_JLE: 3811 + if (reg->umax_value <= val) 3812 + return 1; 3813 + else if (reg->umin_value > val) 3814 + return 0; 3815 + break; 3816 + case BPF_JSLE: 3817 + if (reg->smax_value <= (s64)val) 3818 + return 1; 3819 + else if (reg->smin_value > (s64)val) 3820 + return 0; 3821 + break; 3822 + } 3823 + 3824 + return -1; 3825 + } 3826 + 3827 /* Adjusts the register min/max values in the case that the dst_reg is the 3828 * variable register that we are working on, and src_reg is a constant or we're 3829 * simply doing a BPF_K check. ··· 4152 4153 dst_reg = &regs[insn->dst_reg]; 4154 4155 + if (BPF_SRC(insn->code) == BPF_K) { 4156 + int pred = is_branch_taken(dst_reg, insn->imm, opcode); 4157 + 4158 + if (pred == 1) { 4159 + /* only follow the goto, ignore fall-through */ 4160 *insn_idx += insn->off; 4161 return 0; 4162 + } else if (pred == 0) { 4163 + /* only follow fall-through branch, since 4164 * that's where the program will go 4165 */ 4166 return 0; ··· 4980 struct bpf_verifier_state_list *new_sl; 4981 struct bpf_verifier_state_list *sl; 4982 struct bpf_verifier_state *cur = env->cur_state, *new; 4983 + int i, j, err, states_cnt = 0; 4984 4985 sl = env->explored_states[insn_idx]; 4986 if (!sl) ··· 5007 return 1; 5008 } 5009 sl = sl->next; 5010 + states_cnt++; 5011 } 5012 + 5013 + if (!env->allow_ptr_leaks && states_cnt > BPF_COMPLEXITY_LIMIT_STATES) 5014 + return 0; 5015 5016 /* there were no equivalent states, remember current one. 5017 * technically the current state is not proven to be safe yet, ··· 5147 } 5148 goto process_bpf_exit; 5149 } 5150 + 5151 + if (signal_pending(current)) 5152 + return -EAGAIN; 5153 5154 if (need_resched()) 5155 cond_resched();
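The opcode cases above all follow the same pattern: prune an edge only when the tracked value range decides the comparison outright. Condensed to the one unsigned BPF_JGT case as a stand-alone model (names invented here):

	/* 1 = always taken, 0 = never taken, -1 = unknown: explore both */
	static int branch_taken_jgt(unsigned long long umin,
				    unsigned long long umax,
				    unsigned long long val)
	{
		if (umin > val)
			return 1;  /* whole range above val: only the goto */
		if (umax <= val)
			return 0;  /* whole range at/below val: fall through */
		return -1;         /* ranges overlap: both edges stay live */
	}

With a tracked range of [0,10], "reg > 15" prunes to 0 while "reg > 5" stays unknown, mirroring the [0,10] example in the kernel comment above.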
+15 -6
net/bpf/test_run.c
··· 28 return ret; 29 } 30 31 - static u32 bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat, u32 *time) 32 { 33 struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE] = { 0 }; 34 enum bpf_cgroup_storage_type stype; 35 u64 time_start, time_spent = 0; 36 - u32 ret = 0, i; 37 38 for_each_cgroup_storage_type(stype) { 39 storage[stype] = bpf_cgroup_storage_alloc(prog, stype); ··· 50 repeat = 1; 51 time_start = ktime_get_ns(); 52 for (i = 0; i < repeat; i++) { 53 - ret = bpf_test_run_one(prog, ctx, storage); 54 if (need_resched()) { 55 if (signal_pending(current)) 56 break; ··· 66 for_each_cgroup_storage_type(stype) 67 bpf_cgroup_storage_free(storage[stype]); 68 69 - return ret; 70 } 71 72 static int bpf_test_finish(const union bpf_attr *kattr, ··· 166 __skb_push(skb, hh_len); 167 if (is_direct_pkt_access) 168 bpf_compute_data_pointers(skb); 169 - retval = bpf_test_run(prog, skb, repeat, &duration); 170 if (!is_l2) { 171 if (skb_headroom(skb) < hh_len) { 172 int nhead = HH_DATA_ALIGN(hh_len - skb_headroom(skb)); ··· 218 rxqueue = __netif_get_rx_queue(current->nsproxy->net_ns->loopback_dev, 0); 219 xdp.rxq = &rxqueue->xdp_rxq; 220 221 - retval = bpf_test_run(prog, &xdp, repeat, &duration); 222 if (xdp.data != data + XDP_PACKET_HEADROOM + NET_IP_ALIGN || 223 xdp.data_end != xdp.data + size) 224 size = xdp.data_end - xdp.data; 225 ret = bpf_test_finish(kattr, uattr, xdp.data, size, retval, duration); 226 kfree(data); 227 return ret; 228 }
··· 28 return ret; 29 } 30 31 + static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat, u32 *ret, 32 + u32 *time) 33 { 34 struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE] = { 0 }; 35 enum bpf_cgroup_storage_type stype; 36 u64 time_start, time_spent = 0; 37 + u32 i; 38 39 for_each_cgroup_storage_type(stype) { 40 storage[stype] = bpf_cgroup_storage_alloc(prog, stype); ··· 49 repeat = 1; 50 time_start = ktime_get_ns(); 51 for (i = 0; i < repeat; i++) { 52 + *ret = bpf_test_run_one(prog, ctx, storage); 53 if (need_resched()) { 54 if (signal_pending(current)) 55 break; ··· 65 for_each_cgroup_storage_type(stype) 66 bpf_cgroup_storage_free(storage[stype]); 67 68 + return 0; 69 } 70 71 static int bpf_test_finish(const union bpf_attr *kattr, ··· 165 __skb_push(skb, hh_len); 166 if (is_direct_pkt_access) 167 bpf_compute_data_pointers(skb); 168 + ret = bpf_test_run(prog, skb, repeat, &retval, &duration); 169 + if (ret) { 170 + kfree_skb(skb); 171 + kfree(sk); 172 + return ret; 173 + } 174 if (!is_l2) { 175 if (skb_headroom(skb) < hh_len) { 176 int nhead = HH_DATA_ALIGN(hh_len - skb_headroom(skb)); ··· 212 rxqueue = __netif_get_rx_queue(current->nsproxy->net_ns->loopback_dev, 0); 213 xdp.rxq = &rxqueue->xdp_rxq; 214 215 + ret = bpf_test_run(prog, &xdp, repeat, &retval, &duration); 216 + if (ret) 217 + goto out; 218 if (xdp.data != data + XDP_PACKET_HEADROOM + NET_IP_ALIGN || 219 xdp.data_end != xdp.data + size) 220 size = xdp.data_end - xdp.data; 221 ret = bpf_test_finish(kattr, uattr, xdp.data, size, retval, duration); 222 + out: 223 kfree(data); 224 return ret; 225 }
+35 -30
net/core/dev.c
··· 2175 return active; 2176 } 2177 2178 static void clean_xps_maps(struct net_device *dev, const unsigned long *mask, 2179 struct xps_dev_maps *dev_maps, unsigned int nr_ids, 2180 u16 offset, u16 count, bool is_rxqs_map) ··· 2200 j < nr_ids;) 2201 active |= remove_xps_queue_cpu(dev, dev_maps, j, offset, 2202 count); 2203 - if (!active) { 2204 - if (is_rxqs_map) { 2205 - RCU_INIT_POINTER(dev->xps_rxqs_map, NULL); 2206 - } else { 2207 - RCU_INIT_POINTER(dev->xps_cpus_map, NULL); 2208 2209 - for (i = offset + (count - 1); count--; i--) 2210 - netdev_queue_numa_node_write( 2211 - netdev_get_tx_queue(dev, i), 2212 - NUMA_NO_NODE); 2213 } 2214 - kfree_rcu(dev_maps, rcu); 2215 } 2216 } 2217 ··· 2245 false); 2246 2247 out_no_maps: 2248 - if (static_key_enabled(&xps_rxqs_needed)) 2249 - static_key_slow_dec_cpuslocked(&xps_rxqs_needed); 2250 - 2251 - static_key_slow_dec_cpuslocked(&xps_needed); 2252 mutex_unlock(&xps_map_mutex); 2253 cpus_read_unlock(); 2254 } ··· 2362 if (!new_dev_maps) 2363 goto out_no_new_maps; 2364 2365 - static_key_slow_inc_cpuslocked(&xps_needed); 2366 - if (is_rxqs_map) 2367 - static_key_slow_inc_cpuslocked(&xps_rxqs_needed); 2368 2369 for (j = -1; j = netif_attrmask_next(j, possible_mask, nr_ids), 2370 j < nr_ids;) { ··· 2465 } 2466 2467 /* free map if not active */ 2468 - if (!active) { 2469 - if (is_rxqs_map) 2470 - RCU_INIT_POINTER(dev->xps_rxqs_map, NULL); 2471 - else 2472 - RCU_INIT_POINTER(dev->xps_cpus_map, NULL); 2473 - kfree_rcu(dev_maps, rcu); 2474 - } 2475 2476 out_no_maps: 2477 mutex_unlock(&xps_map_mutex); ··· 5014 struct net_device *orig_dev = skb->dev; 5015 struct packet_type *pt_prev = NULL; 5016 5017 - list_del(&skb->list); 5018 __netif_receive_skb_core(skb, pfmemalloc, &pt_prev); 5019 if (!pt_prev) 5020 continue; ··· 5170 INIT_LIST_HEAD(&sublist); 5171 list_for_each_entry_safe(skb, next, head, list) { 5172 net_timestamp_check(netdev_tstamp_prequeue, skb); 5173 - list_del(&skb->list); 5174 if (!skb_defer_rx_timestamp(skb)) 5175 list_add_tail(&skb->list, &sublist); 5176 } ··· 5181 rcu_read_lock(); 5182 list_for_each_entry_safe(skb, next, head, list) { 5183 xdp_prog = rcu_dereference(skb->dev->xdp_prog); 5184 - list_del(&skb->list); 5185 if (do_xdp_generic(xdp_prog, skb) == XDP_PASS) 5186 list_add_tail(&skb->list, &sublist); 5187 } ··· 5200 5201 if (cpu >= 0) { 5202 /* Will be handled, remove from list */ 5203 - list_del(&skb->list); 5204 enqueue_to_backlog(skb, cpu, &rflow->last_qtail); 5205 } 5206 } ··· 6209 napi->skb = NULL; 6210 napi->poll = poll; 6211 if (weight > NAPI_POLL_WEIGHT) 6212 - pr_err_once("netif_napi_add() called with weight %d on device %s\n", 6213 - weight, dev->name); 6214 napi->weight = weight; 6215 list_add(&napi->dev_list, &dev->napi_list); 6216 napi->dev = dev;
··· 2175 return active; 2176 } 2177 2178 + static void reset_xps_maps(struct net_device *dev, 2179 + struct xps_dev_maps *dev_maps, 2180 + bool is_rxqs_map) 2181 + { 2182 + if (is_rxqs_map) { 2183 + static_key_slow_dec_cpuslocked(&xps_rxqs_needed); 2184 + RCU_INIT_POINTER(dev->xps_rxqs_map, NULL); 2185 + } else { 2186 + RCU_INIT_POINTER(dev->xps_cpus_map, NULL); 2187 + } 2188 + static_key_slow_dec_cpuslocked(&xps_needed); 2189 + kfree_rcu(dev_maps, rcu); 2190 + } 2191 + 2192 static void clean_xps_maps(struct net_device *dev, const unsigned long *mask, 2193 struct xps_dev_maps *dev_maps, unsigned int nr_ids, 2194 u16 offset, u16 count, bool is_rxqs_map) ··· 2186 j < nr_ids;) 2187 active |= remove_xps_queue_cpu(dev, dev_maps, j, offset, 2188 count); 2189 + if (!active) 2190 + reset_xps_maps(dev, dev_maps, is_rxqs_map); 2191 2192 + if (!is_rxqs_map) { 2193 + for (i = offset + (count - 1); count--; i--) { 2194 + netdev_queue_numa_node_write( 2195 + netdev_get_tx_queue(dev, i), 2196 + NUMA_NO_NODE); 2197 } 2198 } 2199 } 2200 ··· 2234 false); 2235 2236 out_no_maps: 2237 mutex_unlock(&xps_map_mutex); 2238 cpus_read_unlock(); 2239 } ··· 2355 if (!new_dev_maps) 2356 goto out_no_new_maps; 2357 2358 + if (!dev_maps) { 2359 + /* Increment static keys at most once per type */ 2360 + static_key_slow_inc_cpuslocked(&xps_needed); 2361 + if (is_rxqs_map) 2362 + static_key_slow_inc_cpuslocked(&xps_rxqs_needed); 2363 + } 2364 2365 for (j = -1; j = netif_attrmask_next(j, possible_mask, nr_ids), 2366 j < nr_ids;) { ··· 2455 } 2456 2457 /* free map if not active */ 2458 + if (!active) 2459 + reset_xps_maps(dev, dev_maps, is_rxqs_map); 2460 2461 out_no_maps: 2462 mutex_unlock(&xps_map_mutex); ··· 5009 struct net_device *orig_dev = skb->dev; 5010 struct packet_type *pt_prev = NULL; 5011 5012 + skb_list_del_init(skb); 5013 __netif_receive_skb_core(skb, pfmemalloc, &pt_prev); 5014 if (!pt_prev) 5015 continue; ··· 5165 INIT_LIST_HEAD(&sublist); 5166 list_for_each_entry_safe(skb, next, head, list) { 5167 net_timestamp_check(netdev_tstamp_prequeue, skb); 5168 + skb_list_del_init(skb); 5169 if (!skb_defer_rx_timestamp(skb)) 5170 list_add_tail(&skb->list, &sublist); 5171 } ··· 5176 rcu_read_lock(); 5177 list_for_each_entry_safe(skb, next, head, list) { 5178 xdp_prog = rcu_dereference(skb->dev->xdp_prog); 5179 + skb_list_del_init(skb); 5180 if (do_xdp_generic(xdp_prog, skb) == XDP_PASS) 5181 list_add_tail(&skb->list, &sublist); 5182 } ··· 5195 5196 if (cpu >= 0) { 5197 /* Will be handled, remove from list */ 5198 + skb_list_del_init(skb); 5199 enqueue_to_backlog(skb, cpu, &rflow->last_qtail); 5200 } 5201 } ··· 6204 napi->skb = NULL; 6205 napi->poll = poll; 6206 if (weight > NAPI_POLL_WEIGHT) 6207 + netdev_err_once(dev, "%s() called with weight %d\n", __func__, 6208 + weight); 6209 napi->weight = weight; 6210 list_add(&napi->dev_list, &dev->napi_list); 6211 napi->dev = dev;
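skb_list_del_init() matters here because plain list_del() leaves skb->next/prev pointing at the list poison values, and an skb handed on to code that treats it as a standalone buffer (for instance kfree_skb() after netif_receive_skb()) can trip over them. Paraphrased as a sketch from this release's include/linux/skbuff.h, the helper is essentially:

	static inline void skb_list_del_init(struct sk_buff *skb)
	{
		__list_del_entry(&skb->list); /* unlink without poisoning */
		skb->next = NULL;             /* look like a lone skb again */
	}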
+14 -13
net/core/filter.c
··· 4890 struct net *net; 4891 4892 family = len == sizeof(tuple->ipv4) ? AF_INET : AF_INET6; 4893 - if (unlikely(family == AF_UNSPEC || netns_id > U32_MAX || flags)) 4894 goto out; 4895 4896 if (skb->dev) 4897 caller_net = dev_net(skb->dev); 4898 else 4899 caller_net = sock_net(skb->sk); 4900 - if (netns_id) { 4901 net = get_net_ns_by_id(caller_net, netns_id); 4902 if (unlikely(!net)) 4903 goto out; 4904 sk = sk_lookup(net, tuple, skb, family, proto); 4905 put_net(net); 4906 - } else { 4907 - net = caller_net; 4908 - sk = sk_lookup(net, tuple, skb, family, proto); 4909 } 4910 4911 if (sk) ··· 5436 if (size != size_default) 5437 return false; 5438 break; 5439 - case bpf_ctx_range(struct __sk_buff, flow_keys): 5440 - if (size != sizeof(struct bpf_flow_keys *)) 5441 return false; 5442 break; 5443 default: ··· 5465 case bpf_ctx_range(struct __sk_buff, data): 5466 case bpf_ctx_range(struct __sk_buff, data_meta): 5467 case bpf_ctx_range(struct __sk_buff, data_end): 5468 - case bpf_ctx_range(struct __sk_buff, flow_keys): 5469 case bpf_ctx_range_till(struct __sk_buff, family, local_port): 5470 return false; 5471 } ··· 5490 switch (off) { 5491 case bpf_ctx_range(struct __sk_buff, tc_classid): 5492 case bpf_ctx_range(struct __sk_buff, data_meta): 5493 - case bpf_ctx_range(struct __sk_buff, flow_keys): 5494 return false; 5495 case bpf_ctx_range(struct __sk_buff, data): 5496 case bpf_ctx_range(struct __sk_buff, data_end): ··· 5531 case bpf_ctx_range(struct __sk_buff, tc_classid): 5532 case bpf_ctx_range_till(struct __sk_buff, family, local_port): 5533 case bpf_ctx_range(struct __sk_buff, data_meta): 5534 - case bpf_ctx_range(struct __sk_buff, flow_keys): 5535 return false; 5536 } 5537 ··· 5757 case bpf_ctx_range(struct __sk_buff, data_end): 5758 info->reg_type = PTR_TO_PACKET_END; 5759 break; 5760 - case bpf_ctx_range(struct __sk_buff, flow_keys): 5761 case bpf_ctx_range_till(struct __sk_buff, family, local_port): 5762 return false; 5763 } ··· 5959 switch (off) { 5960 case bpf_ctx_range(struct __sk_buff, tc_classid): 5961 case bpf_ctx_range(struct __sk_buff, data_meta): 5962 - case bpf_ctx_range(struct __sk_buff, flow_keys): 5963 return false; 5964 } 5965 ··· 6040 case bpf_ctx_range(struct __sk_buff, data_end): 6041 info->reg_type = PTR_TO_PACKET_END; 6042 break; 6043 - case bpf_ctx_range(struct __sk_buff, flow_keys): 6044 info->reg_type = PTR_TO_FLOW_KEYS; 6045 break; 6046 case bpf_ctx_range(struct __sk_buff, tc_classid):
··· 4890 struct net *net; 4891 4892 family = len == sizeof(tuple->ipv4) ? AF_INET : AF_INET6; 4893 + if (unlikely(family == AF_UNSPEC || flags || 4894 + !((s32)netns_id < 0 || netns_id <= S32_MAX))) 4895 goto out; 4896 4897 if (skb->dev) 4898 caller_net = dev_net(skb->dev); 4899 else 4900 caller_net = sock_net(skb->sk); 4901 + if ((s32)netns_id < 0) { 4902 + net = caller_net; 4903 + sk = sk_lookup(net, tuple, skb, family, proto); 4904 + } else { 4905 net = get_net_ns_by_id(caller_net, netns_id); 4906 if (unlikely(!net)) 4907 goto out; 4908 sk = sk_lookup(net, tuple, skb, family, proto); 4909 put_net(net); 4910 } 4911 4912 if (sk) ··· 5435 if (size != size_default) 5436 return false; 5437 break; 5438 + case bpf_ctx_range_ptr(struct __sk_buff, flow_keys): 5439 + if (size != sizeof(__u64)) 5440 return false; 5441 break; 5442 default: ··· 5464 case bpf_ctx_range(struct __sk_buff, data): 5465 case bpf_ctx_range(struct __sk_buff, data_meta): 5466 case bpf_ctx_range(struct __sk_buff, data_end): 5467 + case bpf_ctx_range_ptr(struct __sk_buff, flow_keys): 5468 case bpf_ctx_range_till(struct __sk_buff, family, local_port): 5469 return false; 5470 } ··· 5489 switch (off) { 5490 case bpf_ctx_range(struct __sk_buff, tc_classid): 5491 case bpf_ctx_range(struct __sk_buff, data_meta): 5492 + case bpf_ctx_range_ptr(struct __sk_buff, flow_keys): 5493 return false; 5494 case bpf_ctx_range(struct __sk_buff, data): 5495 case bpf_ctx_range(struct __sk_buff, data_end): ··· 5530 case bpf_ctx_range(struct __sk_buff, tc_classid): 5531 case bpf_ctx_range_till(struct __sk_buff, family, local_port): 5532 case bpf_ctx_range(struct __sk_buff, data_meta): 5533 + case bpf_ctx_range_ptr(struct __sk_buff, flow_keys): 5534 return false; 5535 } 5536 ··· 5756 case bpf_ctx_range(struct __sk_buff, data_end): 5757 info->reg_type = PTR_TO_PACKET_END; 5758 break; 5759 + case bpf_ctx_range_ptr(struct __sk_buff, flow_keys): 5760 case bpf_ctx_range_till(struct __sk_buff, family, local_port): 5761 return false; 5762 } ··· 5958 switch (off) { 5959 case bpf_ctx_range(struct __sk_buff, tc_classid): 5960 case bpf_ctx_range(struct __sk_buff, data_meta): 5961 + case bpf_ctx_range_ptr(struct __sk_buff, flow_keys): 5962 return false; 5963 } 5964 ··· 6039 case bpf_ctx_range(struct __sk_buff, data_end): 6040 info->reg_type = PTR_TO_PACKET_END; 6041 break; 6042 + case bpf_ctx_range_ptr(struct __sk_buff, flow_keys): 6043 info->reg_type = PTR_TO_FLOW_KEYS; 6044 break; 6045 case bpf_ctx_range(struct __sk_buff, tc_classid):
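The new guard partitions the 64-bit *netns* value into three regions. A condensed stand-alone model (stdint types used so the sketch compiles outside the kernel):

	#include <stdint.h>
	#include <stdbool.h>

	static bool netns_id_is_valid(uint64_t netns_id)
	{
		/* low 32 bits negative (BPF_F_CURRENT_NETNS is -1):
		 *     use the caller's own netns;
		 * 0..INT32_MAX: relative ID passed to get_net_ns_by_id();
		 * anything larger: rejected, reserved for future use
		 */
		return (int32_t)netns_id < 0 || netns_id <= INT32_MAX;
	}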
+3
net/core/rtnetlink.c
··· 3800 { 3801 int err; 3802 3803 netif_addr_lock_bh(dev); 3804 err = nlmsg_populate_fdb(skb, cb, dev, idx, &dev->uc); 3805 if (err)
··· 3800 { 3801 int err; 3802 3803 + if (dev->type != ARPHRD_ETHER) 3804 + return -EINVAL; 3805 + 3806 netif_addr_lock_bh(dev); 3807 err = nlmsg_populate_fdb(skb, cb, dev, idx, &dev->uc); 3808 if (err)
+33 -1
net/dsa/master.c
··· 158 cpu_dp->orig_ethtool_ops = NULL; 159 } 160 161 int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp) 162 { 163 /* If we use a tagging format that doesn't have an ethertype 164 * field, make sure that all packets from this point on get 165 * sent to the tag format's receive function. ··· 191 192 dev->dsa_ptr = cpu_dp; 193 194 - return dsa_master_ethtool_setup(dev); 195 } 196 197 void dsa_master_teardown(struct net_device *dev) 198 { 199 dsa_master_ethtool_teardown(dev); 200 201 dev->dsa_ptr = NULL;
··· 158 cpu_dp->orig_ethtool_ops = NULL; 159 } 160 161 + static ssize_t tagging_show(struct device *d, struct device_attribute *attr, 162 + char *buf) 163 + { 164 + struct net_device *dev = to_net_dev(d); 165 + struct dsa_port *cpu_dp = dev->dsa_ptr; 166 + 167 + return sprintf(buf, "%s\n", 168 + dsa_tag_protocol_to_str(cpu_dp->tag_ops)); 169 + } 170 + static DEVICE_ATTR_RO(tagging); 171 + 172 + static struct attribute *dsa_slave_attrs[] = { 173 + &dev_attr_tagging.attr, 174 + NULL 175 + }; 176 + 177 + static const struct attribute_group dsa_group = { 178 + .name = "dsa", 179 + .attrs = dsa_slave_attrs, 180 + }; 181 + 182 int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp) 183 { 184 + int ret; 185 + 186 /* If we use a tagging format that doesn't have an ethertype 187 * field, make sure that all packets from this point on get 188 * sent to the tag format's receive function. ··· 168 169 dev->dsa_ptr = cpu_dp; 170 171 + ret = dsa_master_ethtool_setup(dev); 172 + if (ret) 173 + return ret; 174 + 175 + ret = sysfs_create_group(&dev->dev.kobj, &dsa_group); 176 + if (ret) 177 + dsa_master_ethtool_teardown(dev); 178 + 179 + return ret; 180 } 181 182 void dsa_master_teardown(struct net_device *dev) 183 { 184 + sysfs_remove_group(&dev->dev.kobj, &dsa_group); 185 dsa_master_ethtool_teardown(dev); 186 187 dev->dsa_ptr = NULL;
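With the attribute group now hanging off the DSA master device, the protocol string is readable from user space at /sys/class/net/<iface>/dsa/tagging. A throwaway reader, with the interface name purely illustrative:

	#include <stdio.h>

	int main(void)
	{
		char proto[32];
		FILE *f = fopen("/sys/class/net/eth0/dsa/tagging", "r");

		if (f && fgets(proto, sizeof(proto), f))
			printf("tag protocol: %s", proto); /* e.g. "edsa" */
		if (f)
			fclose(f);
		return 0;
	}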
-28
net/dsa/slave.c
··· 1058 .name = "dsa", 1059 }; 1060 1061 - static ssize_t tagging_show(struct device *d, struct device_attribute *attr, 1062 - char *buf) 1063 - { 1064 - struct net_device *dev = to_net_dev(d); 1065 - struct dsa_port *dp = dsa_slave_to_port(dev); 1066 - 1067 - return sprintf(buf, "%s\n", 1068 - dsa_tag_protocol_to_str(dp->cpu_dp->tag_ops)); 1069 - } 1070 - static DEVICE_ATTR_RO(tagging); 1071 - 1072 - static struct attribute *dsa_slave_attrs[] = { 1073 - &dev_attr_tagging.attr, 1074 - NULL 1075 - }; 1076 - 1077 - static const struct attribute_group dsa_group = { 1078 - .name = "dsa", 1079 - .attrs = dsa_slave_attrs, 1080 - }; 1081 - 1082 static void dsa_slave_phylink_validate(struct net_device *dev, 1083 unsigned long *supported, 1084 struct phylink_link_state *state) ··· 1353 goto out_phy; 1354 } 1355 1356 - ret = sysfs_create_group(&slave_dev->dev.kobj, &dsa_group); 1357 - if (ret) 1358 - goto out_unreg; 1359 - 1360 return 0; 1361 1362 - out_unreg: 1363 - unregister_netdev(slave_dev); 1364 out_phy: 1365 rtnl_lock(); 1366 phylink_disconnect_phy(p->dp->pl); ··· 1378 rtnl_unlock(); 1379 1380 dsa_slave_notify(slave_dev, DSA_PORT_UNREGISTER); 1381 - sysfs_remove_group(&slave_dev->dev.kobj, &dsa_group); 1382 unregister_netdev(slave_dev); 1383 phylink_destroy(dp->pl); 1384 free_percpu(p->stats64);
··· 1058 .name = "dsa", 1059 }; 1060 1061 static void dsa_slave_phylink_validate(struct net_device *dev, 1062 unsigned long *supported, 1063 struct phylink_link_state *state) ··· 1374 goto out_phy; 1375 } 1376 1377 return 0; 1378 1379 out_phy: 1380 rtnl_lock(); 1381 phylink_disconnect_phy(p->dp->pl); ··· 1405 rtnl_unlock(); 1406 1407 dsa_slave_notify(slave_dev, DSA_PORT_UNREGISTER); 1408 unregister_netdev(slave_dev); 1409 phylink_destroy(dp->pl); 1410 free_percpu(p->stats64);
+7
net/ipv4/ip_fragment.c
··· 515 struct rb_node *rbn; 516 int len; 517 int ihlen; 518 int err; 519 u8 ecn; 520 ··· 557 if (len > 65535) 558 goto out_oversize; 559 560 /* Head of list must not be cloned. */ 561 if (skb_unclone(head, GFP_ATOMIC)) 562 goto out_nomem; 563 564 /* If the first fragment is fragmented itself, we split 565 * it to two chunks: the first with data and paged part
··· 515 struct rb_node *rbn; 516 int len; 517 int ihlen; 518 + int delta; 519 int err; 520 u8 ecn; 521 ··· 556 if (len > 65535) 557 goto out_oversize; 558 559 + delta = - head->truesize; 560 + 561 /* Head of list must not be cloned. */ 562 if (skb_unclone(head, GFP_ATOMIC)) 563 goto out_nomem; 564 + 565 + delta += head->truesize; 566 + if (delta) 567 + add_frag_mem_limit(qp->q.net, delta); 568 569 /* If the first fragment is fragmented itself, we split 570 * it to two chunks: the first with data and paged part
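The reason an unaccounted truesize change can hang namespace exit is that the per-netns frag counter must drain back to zero before teardown completes. A toy model of the balance this patch restores (all names invented):

	static long frag_mem; /* stands in for the per-netns counter */

	static void frag_lifetime(long truesize_in, long truesize_after_unclone)
	{
		frag_mem += truesize_in;   /* charged when queued */
		/* the fix: also charge what skb_unclone() grew the head by */
		frag_mem += truesize_after_unclone - truesize_in;
		frag_mem -= truesize_after_unclone; /* credited on free */
		/* without the middle line the counter never returns to 0 */
	}

The same delta pattern is applied to the IPv6 and netfilter reassembly paths below.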
+2 -2
net/ipv4/ip_input.c
··· 547 list_for_each_entry_safe(skb, next, head, list) { 548 struct dst_entry *dst; 549 550 - list_del(&skb->list); 551 /* if ingress device is enslaved to an L3 master device pass the 552 * skb to its handler for processing 553 */ ··· 594 struct net_device *dev = skb->dev; 595 struct net *net = dev_net(dev); 596 597 - list_del(&skb->list); 598 skb = ip_rcv_core(skb, net); 599 if (skb == NULL) 600 continue;
··· 547 list_for_each_entry_safe(skb, next, head, list) { 548 struct dst_entry *dst; 549 550 + skb_list_del_init(skb); 551 /* if ingress device is enslaved to an L3 master device pass the 552 * skb to its handler for processing 553 */ ··· 594 struct net_device *dev = skb->dev; 595 struct net *net = dev_net(dev); 596 597 + skb_list_del_init(skb); 598 skb = ip_rcv_core(skb, net); 599 if (skb == NULL) 600 continue;
+32 -13
net/ipv4/tcp_output.c
··· 1904 * This algorithm is from John Heffner. 1905 */ 1906 static bool tcp_tso_should_defer(struct sock *sk, struct sk_buff *skb, 1907 - bool *is_cwnd_limited, u32 max_segs) 1908 { 1909 const struct inet_connection_sock *icsk = inet_csk(sk); 1910 u32 age, send_win, cong_win, limit, in_flight; 1911 struct tcp_sock *tp = tcp_sk(sk); 1912 struct sk_buff *head; 1913 int win_divisor; 1914 - 1915 - if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) 1916 - goto send_now; 1917 1918 if (icsk->icsk_ca_state >= TCP_CA_Recovery) 1919 goto send_now; ··· 1972 if (age < (tp->srtt_us >> 4)) 1973 goto send_now; 1974 1975 - /* Ok, it looks like it is advisable to defer. */ 1976 1977 - if (cong_win < send_win && cong_win <= skb->len) 1978 - *is_cwnd_limited = true; 1979 1980 return true; 1981 ··· 2372 } else { 2373 if (!push_one && 2374 tcp_tso_should_defer(sk, skb, &is_cwnd_limited, 2375 - max_segs)) 2376 break; 2377 } 2378 ··· 2510 goto rearm_timer; 2511 } 2512 skb = skb_rb_last(&sk->tcp_rtx_queue); 2513 2514 /* At most one outstanding TLP retransmission. */ 2515 if (tp->tlp_high_seq) 2516 - goto rearm_timer; 2517 - 2518 - /* Retransmit last segment. */ 2519 - if (WARN_ON(!skb)) 2520 goto rearm_timer; 2521 2522 if (skb_still_in_host_queue(sk, skb)) ··· 2939 TCP_SKB_CB(skb)->sacked |= TCPCB_EVER_RETRANS; 2940 trace_tcp_retransmit_skb(sk, skb); 2941 } else if (err != -EBUSY) { 2942 - NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPRETRANSFAIL); 2943 } 2944 return err; 2945 }
··· 1904 * This algorithm is from John Heffner. 1905 */ 1906 static bool tcp_tso_should_defer(struct sock *sk, struct sk_buff *skb, 1907 + bool *is_cwnd_limited, 1908 + bool *is_rwnd_limited, 1909 + u32 max_segs) 1910 { 1911 const struct inet_connection_sock *icsk = inet_csk(sk); 1912 u32 age, send_win, cong_win, limit, in_flight; 1913 struct tcp_sock *tp = tcp_sk(sk); 1914 struct sk_buff *head; 1915 int win_divisor; 1916 1917 if (icsk->icsk_ca_state >= TCP_CA_Recovery) 1918 goto send_now; ··· 1973 if (age < (tp->srtt_us >> 4)) 1974 goto send_now; 1975 1976 + /* Ok, it looks like it is advisable to defer. 1977 + * Three cases are tracked : 1978 + * 1) We are cwnd-limited 1979 + * 2) We are rwnd-limited 1980 + * 3) We are application limited. 1981 + */ 1982 + if (cong_win < send_win) { 1983 + if (cong_win <= skb->len) { 1984 + *is_cwnd_limited = true; 1985 + return true; 1986 + } 1987 + } else { 1988 + if (send_win <= skb->len) { 1989 + *is_rwnd_limited = true; 1990 + return true; 1991 + } 1992 + } 1993 1994 + /* If this packet won't get more data, do not wait. */ 1995 + if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) 1996 + goto send_now; 1997 1998 return true; 1999 ··· 2356 } else { 2357 if (!push_one && 2358 tcp_tso_should_defer(sk, skb, &is_cwnd_limited, 2359 + &is_rwnd_limited, max_segs)) 2360 break; 2361 } 2362 ··· 2494 goto rearm_timer; 2495 } 2496 skb = skb_rb_last(&sk->tcp_rtx_queue); 2497 + if (unlikely(!skb)) { 2498 + WARN_ONCE(tp->packets_out, 2499 + "invalid inflight: %u state %u cwnd %u mss %d\n", 2500 + tp->packets_out, sk->sk_state, tp->snd_cwnd, mss); 2501 + inet_csk(sk)->icsk_pending = 0; 2502 + return; 2503 + } 2504 2505 /* At most one outstanding TLP retransmission. */ 2506 if (tp->tlp_high_seq) 2507 goto rearm_timer; 2508 2509 if (skb_still_in_host_queue(sk, skb)) ··· 2920 TCP_SKB_CB(skb)->sacked |= TCPCB_EVER_RETRANS; 2921 trace_tcp_retransmit_skb(sk, skb); 2922 } else if (err != -EBUSY) { 2923 + NET_ADD_STATS(sock_net(sk), LINUX_MIB_TCPRETRANSFAIL, segs); 2924 } 2925 return err; 2926 }
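The deferral decision now records which window is the binding constraint, so rwnd-limited periods are accounted as such instead of being lumped in with application-limited time. The classification, condensed to a pure function (enum and names invented for this sketch; the real code checks in-flight data and timing first):

	enum tso_limit { TSO_APP_LIMITED, TSO_CWND_LIMITED, TSO_RWND_LIMITED };

	static enum tso_limit tso_defer_limit(unsigned int cong_win,
					      unsigned int send_win,
					      unsigned int skb_len)
	{
		if (cong_win < send_win) {
			if (cong_win <= skb_len)
				return TSO_CWND_LIMITED;
		} else {
			if (send_win <= skb_len)
				return TSO_RWND_LIMITED;
		}
		return TSO_APP_LIMITED; /* deferring to grow the skb */
	}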
+5 -5
net/ipv4/tcp_timer.c
··· 378 return; 379 } 380 381 - if (icsk->icsk_probes_out > max_probes) { 382 abort: tcp_write_err(sk); 383 } else { 384 /* Only send another probe if we didn't close things up. */ ··· 484 goto out_reset_timer; 485 } 486 487 if (tcp_write_timeout(sk)) 488 goto out; 489 490 if (icsk->icsk_retransmits == 0) { 491 - int mib_idx; 492 493 if (icsk->icsk_ca_state == TCP_CA_Recovery) { 494 if (tcp_is_sack(tp)) ··· 504 mib_idx = LINUX_MIB_TCPSACKFAILURES; 505 else 506 mib_idx = LINUX_MIB_TCPRENOFAILURES; 507 - } else { 508 - mib_idx = LINUX_MIB_TCPTIMEOUTS; 509 } 510 - __NET_INC_STATS(sock_net(sk), mib_idx); 511 } 512 513 tcp_enter_loss(sk);
··· 378 return; 379 } 380 381 + if (icsk->icsk_probes_out >= max_probes) { 382 abort: tcp_write_err(sk); 383 } else { 384 /* Only send another probe if we didn't close things up. */ ··· 484 goto out_reset_timer; 485 } 486 487 + __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPTIMEOUTS); 488 if (tcp_write_timeout(sk)) 489 goto out; 490 491 if (icsk->icsk_retransmits == 0) { 492 + int mib_idx = 0; 493 494 if (icsk->icsk_ca_state == TCP_CA_Recovery) { 495 if (tcp_is_sack(tp)) ··· 503 mib_idx = LINUX_MIB_TCPSACKFAILURES; 504 else 505 mib_idx = LINUX_MIB_TCPRENOFAILURES; 506 } 507 + if (mib_idx) 508 + __NET_INC_STATS(sock_net(sk), mib_idx); 509 } 510 511 tcp_enter_loss(sk);
+2 -2
net/ipv6/ip6_input.c
··· 95 list_for_each_entry_safe(skb, next, head, list) { 96 struct dst_entry *dst; 97 98 - list_del(&skb->list); 99 /* if ingress device is enslaved to an L3 master device pass the 100 * skb to its handler for processing 101 */ ··· 296 struct net_device *dev = skb->dev; 297 struct net *net = dev_net(dev); 298 299 - list_del(&skb->list); 300 skb = ip6_rcv_core(skb, dev, net); 301 if (skb == NULL) 302 continue;
··· 95 list_for_each_entry_safe(skb, next, head, list) { 96 struct dst_entry *dst; 97 98 + skb_list_del_init(skb); 99 /* if ingress device is enslaved to an L3 master device pass the 100 * skb to its handler for processing 101 */ ··· 296 struct net_device *dev = skb->dev; 297 struct net *net = dev_net(dev); 298 299 + skb_list_del_init(skb); 300 skb = ip6_rcv_core(skb, dev, net); 301 if (skb == NULL) 302 continue;
+21 -21
net/ipv6/ip6_output.c
··· 195 const struct ipv6_pinfo *np = inet6_sk(sk); 196 struct in6_addr *first_hop = &fl6->daddr; 197 struct dst_entry *dst = skb_dst(skb); 198 struct ipv6hdr *hdr; 199 u8 proto = fl6->flowi6_proto; 200 int seg_len = skb->len; 201 int hlimit = -1; 202 u32 mtu; 203 204 - if (opt) { 205 - unsigned int head_room; 206 207 - /* First: exthdrs may take lots of space (~8K for now) 208 - MAX_HEADER is not enough. 209 - */ 210 - head_room = opt->opt_nflen + opt->opt_flen; 211 - seg_len += head_room; 212 - head_room += sizeof(struct ipv6hdr) + LL_RESERVED_SPACE(dst->dev); 213 - 214 - if (skb_headroom(skb) < head_room) { 215 - struct sk_buff *skb2 = skb_realloc_headroom(skb, head_room); 216 - if (!skb2) { 217 - IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)), 218 - IPSTATS_MIB_OUTDISCARDS); 219 - kfree_skb(skb); 220 - return -ENOBUFS; 221 - } 222 - if (skb->sk) 223 - skb_set_owner_w(skb2, skb->sk); 224 - consume_skb(skb); 225 - skb = skb2; 226 } 227 if (opt->opt_flen) 228 ipv6_push_frag_opts(skb, opt, &proto); 229 if (opt->opt_nflen) 230 ipv6_push_nfrag_opts(skb, opt, &proto, &first_hop, 231 &fl6->saddr);
··· 195 const struct ipv6_pinfo *np = inet6_sk(sk); 196 struct in6_addr *first_hop = &fl6->daddr; 197 struct dst_entry *dst = skb_dst(skb); 198 + unsigned int head_room; 199 struct ipv6hdr *hdr; 200 u8 proto = fl6->flowi6_proto; 201 int seg_len = skb->len; 202 int hlimit = -1; 203 u32 mtu; 204 205 + head_room = sizeof(struct ipv6hdr) + LL_RESERVED_SPACE(dst->dev); 206 + if (opt) 207 + head_room += opt->opt_nflen + opt->opt_flen; 208 209 + if (unlikely(skb_headroom(skb) < head_room)) { 210 + struct sk_buff *skb2 = skb_realloc_headroom(skb, head_room); 211 + if (!skb2) { 212 + IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)), 213 + IPSTATS_MIB_OUTDISCARDS); 214 + kfree_skb(skb); 215 + return -ENOBUFS; 216 } 217 + if (skb->sk) 218 + skb_set_owner_w(skb2, skb->sk); 219 + consume_skb(skb); 220 + skb = skb2; 221 + } 222 + 223 + if (opt) { 224 + seg_len += opt->opt_nflen + opt->opt_flen; 225 + 226 if (opt->opt_flen) 227 ipv6_push_frag_opts(skb, opt, &proto); 228 + 229 if (opt->opt_nflen) 230 ipv6_push_nfrag_opts(skb, opt, &proto, &first_hop, 231 &fl6->saddr);
+7 -1
net/ipv6/netfilter/nf_conntrack_reasm.c
··· 341 nf_ct_frag6_reasm(struct frag_queue *fq, struct sk_buff *prev, struct net_device *dev) 342 { 343 struct sk_buff *fp, *head = fq->q.fragments; 344 - int payload_len; 345 u8 ecn; 346 347 inet_frag_kill(&fq->q); ··· 363 return false; 364 } 365 366 /* Head of list must not be cloned. */ 367 if (skb_unclone(head, GFP_ATOMIC)) 368 return false; 369 370 /* If the first fragment is fragmented itself, we split 371 * it to two chunks: the first with data and paged part
··· 341 nf_ct_frag6_reasm(struct frag_queue *fq, struct sk_buff *prev, struct net_device *dev) 342 { 343 struct sk_buff *fp, *head = fq->q.fragments; 344 + int payload_len, delta; 345 u8 ecn; 346 347 inet_frag_kill(&fq->q); ··· 363 return false; 364 } 365 366 + delta = - head->truesize; 367 + 368 /* Head of list must not be cloned. */ 369 if (skb_unclone(head, GFP_ATOMIC)) 370 return false; 371 + 372 + delta += head->truesize; 373 + if (delta) 374 + add_frag_mem_limit(fq->q.net, delta); 375 376 /* If the first fragment is fragmented itself, we split 377 * it to two chunks: the first with data and paged part
+7 -1
net/ipv6/reassembly.c
··· 281 { 282 struct net *net = container_of(fq->q.net, struct net, ipv6.frags); 283 struct sk_buff *fp, *head = fq->q.fragments; 284 - int payload_len; 285 unsigned int nhoff; 286 int sum_truesize; 287 u8 ecn; ··· 322 if (payload_len > IPV6_MAXPLEN) 323 goto out_oversize; 324 325 /* Head of list must not be cloned. */ 326 if (skb_unclone(head, GFP_ATOMIC)) 327 goto out_oom; 328 329 /* If the first fragment is fragmented itself, we split 330 * it to two chunks: the first with data and paged part
··· 281 { 282 struct net *net = container_of(fq->q.net, struct net, ipv6.frags); 283 struct sk_buff *fp, *head = fq->q.fragments; 284 + int payload_len, delta; 285 unsigned int nhoff; 286 int sum_truesize; 287 u8 ecn; ··· 322 if (payload_len > IPV6_MAXPLEN) 323 goto out_oversize; 324 325 + delta = - head->truesize; 326 + 327 /* Head of list must not be cloned. */ 328 if (skb_unclone(head, GFP_ATOMIC)) 329 goto out_oom; 330 + 331 + delta += head->truesize; 332 + if (delta) 333 + add_frag_mem_limit(fq->q.net, delta); 334 335 /* If the first fragment is fragmented itself, we split 336 * it to two chunks: the first with data and paged part
+1
net/ipv6/seg6_iptunnel.c
··· 347 struct ipv6hdr *hdr = ipv6_hdr(skb); 348 struct flowi6 fl6; 349 350 fl6.daddr = hdr->daddr; 351 fl6.saddr = hdr->saddr; 352 fl6.flowlabel = ip6_flowinfo(hdr);
··· 347 struct ipv6hdr *hdr = ipv6_hdr(skb); 348 struct flowi6 fl6; 349 350 + memset(&fl6, 0, sizeof(fl6)); 351 fl6.daddr = hdr->daddr; 352 fl6.saddr = hdr->saddr; 353 fl6.flowlabel = ip6_flowinfo(hdr);
+4 -3
net/mac80211/cfg.c
··· 2891 2892 len = beacon->head_len + beacon->tail_len + beacon->beacon_ies_len + 2893 beacon->proberesp_ies_len + beacon->assocresp_ies_len + 2894 - beacon->probe_resp_len; 2895 2896 new_beacon = kzalloc(sizeof(*new_beacon) + len, GFP_KERNEL); 2897 if (!new_beacon) ··· 2934 memcpy(pos, beacon->probe_resp, beacon->probe_resp_len); 2935 pos += beacon->probe_resp_len; 2936 } 2937 - if (beacon->ftm_responder) 2938 - new_beacon->ftm_responder = beacon->ftm_responder; 2939 if (beacon->lci) { 2940 new_beacon->lci_len = beacon->lci_len; 2941 new_beacon->lci = pos;
··· 2891 2892 len = beacon->head_len + beacon->tail_len + beacon->beacon_ies_len + 2893 beacon->proberesp_ies_len + beacon->assocresp_ies_len + 2894 + beacon->probe_resp_len + beacon->lci_len + beacon->civicloc_len; 2895 2896 new_beacon = kzalloc(sizeof(*new_beacon) + len, GFP_KERNEL); 2897 if (!new_beacon) ··· 2934 memcpy(pos, beacon->probe_resp, beacon->probe_resp_len); 2935 pos += beacon->probe_resp_len; 2936 } 2937 + 2938 + /* might copy -1, meaning no changes requested */ 2939 + new_beacon->ftm_responder = beacon->ftm_responder; 2940 if (beacon->lci) { 2941 new_beacon->lci_len = beacon->lci_len; 2942 new_beacon->lci = pos;
+2
net/mac80211/iface.c
··· 1015 if (local->open_count == 0) 1016 ieee80211_clear_tx_pending(local); 1017 1018 /* 1019 * If the interface goes down while suspended, presumably because 1020 * the device was unplugged and that happens before our resume,
··· 1015 if (local->open_count == 0) 1016 ieee80211_clear_tx_pending(local); 1017 1018 + sdata->vif.bss_conf.beacon_int = 0; 1019 + 1020 /* 1021 * If the interface goes down while suspended, presumably because 1022 * the device was unplugged and that happens before our resume,
+8 -4
net/mac80211/mlme.c
··· 2766 { 2767 struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; 2768 struct sta_info *sta; 2769 2770 sdata_info(sdata, "authenticated\n"); 2771 ifmgd->auth_data->done = true; ··· 2779 sta = sta_info_get(sdata, bssid); 2780 if (!sta) { 2781 WARN_ONCE(1, "%s: STA %pM not found", sdata->name, bssid); 2782 - return false; 2783 } 2784 if (sta_info_move_state(sta, IEEE80211_STA_AUTH)) { 2785 sdata_info(sdata, "failed moving %pM to auth\n", bssid); 2786 - return false; 2787 } 2788 - mutex_unlock(&sdata->local->sta_mtx); 2789 2790 - return true; 2791 } 2792 2793 static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
··· 2766 { 2767 struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; 2768 struct sta_info *sta; 2769 + bool result = true; 2770 2771 sdata_info(sdata, "authenticated\n"); 2772 ifmgd->auth_data->done = true; ··· 2778 sta = sta_info_get(sdata, bssid); 2779 if (!sta) { 2780 WARN_ONCE(1, "%s: STA %pM not found", sdata->name, bssid); 2781 + result = false; 2782 + goto out; 2783 } 2784 if (sta_info_move_state(sta, IEEE80211_STA_AUTH)) { 2785 sdata_info(sdata, "failed moving %pM to auth\n", bssid); 2786 + result = false; 2787 + goto out; 2788 } 2789 2790 + out: 2791 + mutex_unlock(&sdata->local->sta_mtx); 2792 + return result; 2793 } 2794 2795 static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
+3 -2
net/mac80211/rx.c
··· 1403 return RX_CONTINUE; 1404 1405 if (ieee80211_is_ctl(hdr->frame_control) || 1406 ieee80211_is_qos_nullfunc(hdr->frame_control) || 1407 is_multicast_ether_addr(hdr->addr1)) 1408 return RX_CONTINUE; ··· 3064 cfg80211_sta_opmode_change_notify(sdata->dev, 3065 rx->sta->addr, 3066 &sta_opmode, 3067 - GFP_KERNEL); 3068 goto handled; 3069 } 3070 case WLAN_HT_ACTION_NOTIFY_CHANWIDTH: { ··· 3101 cfg80211_sta_opmode_change_notify(sdata->dev, 3102 rx->sta->addr, 3103 &sta_opmode, 3104 - GFP_KERNEL); 3105 goto handled; 3106 } 3107 default:
··· 1403 return RX_CONTINUE; 1404 1405 if (ieee80211_is_ctl(hdr->frame_control) || 1406 + ieee80211_is_nullfunc(hdr->frame_control) || 1407 ieee80211_is_qos_nullfunc(hdr->frame_control) || 1408 is_multicast_ether_addr(hdr->addr1)) 1409 return RX_CONTINUE; ··· 3063 cfg80211_sta_opmode_change_notify(sdata->dev, 3064 rx->sta->addr, 3065 &sta_opmode, 3066 + GFP_ATOMIC); 3067 goto handled; 3068 } 3069 case WLAN_HT_ACTION_NOTIFY_CHANWIDTH: { ··· 3100 cfg80211_sta_opmode_change_notify(sdata->dev, 3101 rx->sta->addr, 3102 &sta_opmode, 3103 + GFP_ATOMIC); 3104 goto handled; 3105 } 3106 default:
+2
net/mac80211/status.c
··· 964 /* Track when last TDLS packet was ACKed */ 965 if (test_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH)) 966 sta->status_stats.last_tdls_pkt_time = jiffies; 967 } else { 968 ieee80211_lost_packet(sta, info); 969 }
··· 964 /* Track when last TDLS packet was ACKed */ 965 if (test_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH)) 966 sta->status_stats.last_tdls_pkt_time = jiffies; 967 + } else if (test_sta_flag(sta, WLAN_STA_PS_STA)) { 968 + return; 969 } else { 970 ieee80211_lost_packet(sta, info); 971 }
+2 -2
net/mac80211/tx.c
··· 439 if (ieee80211_hw_check(&tx->local->hw, QUEUE_CONTROL)) 440 info->hw_queue = tx->sdata->vif.cab_queue; 441 442 - /* no stations in PS mode */ 443 - if (!atomic_read(&ps->num_sta_ps)) 444 return TX_CONTINUE; 445 446 info->flags |= IEEE80211_TX_CTL_SEND_AFTER_DTIM;
··· 439 if (ieee80211_hw_check(&tx->local->hw, QUEUE_CONTROL)) 440 info->hw_queue = tx->sdata->vif.cab_queue; 441 442 + /* no stations in PS mode and no buffered packets */ 443 + if (!atomic_read(&ps->num_sta_ps) && skb_queue_empty(&ps->bc_buf)) 444 return TX_CONTINUE; 445 446 info->flags |= IEEE80211_TX_CTL_SEND_AFTER_DTIM;
+1 -1
net/openvswitch/conntrack.c
··· 1166 &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple); 1167 if (err) { 1168 net_warn_ratelimited("openvswitch: zone: %u " 1169 - "execeeds conntrack limit\n", 1170 info->zone.id); 1171 return err; 1172 }
··· 1166 &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple); 1167 if (err) { 1168 net_warn_ratelimited("openvswitch: zone: %u " 1169 + "exceeds conntrack limit\n", 1170 info->zone.id); 1171 return err; 1172 }
+12 -12
net/sched/act_police.c
··· 85 int ovr, int bind, bool rtnl_held, 86 struct netlink_ext_ack *extack) 87 { 88 - int ret = 0, err; 89 struct nlattr *tb[TCA_POLICE_MAX + 1]; 90 struct tc_police *parm; 91 struct tcf_police *police; ··· 93 struct tc_action_net *tn = net_generic(net, police_net_id); 94 struct tcf_police_params *new; 95 bool exists = false; 96 - int size; 97 98 if (nla == NULL) 99 return -EINVAL; ··· 159 goto failure; 160 } 161 162 new = kzalloc(sizeof(*new), GFP_KERNEL); 163 if (unlikely(!new)) { 164 err = -ENOMEM; ··· 176 } 177 178 /* No failure allowed after this point */ 179 new->tcfp_mtu = parm->mtu; 180 if (!new->tcfp_mtu) { 181 new->tcfp_mtu = ~0; ··· 205 206 if (tb[TCA_POLICE_AVRATE]) 207 new->tcfp_ewma_rate = nla_get_u32(tb[TCA_POLICE_AVRATE]); 208 - 209 - if (tb[TCA_POLICE_RESULT]) { 210 - new->tcfp_result = nla_get_u32(tb[TCA_POLICE_RESULT]); 211 - if (TC_ACT_EXT_CMP(new->tcfp_result, TC_ACT_GOTO_CHAIN)) { 212 - NL_SET_ERR_MSG(extack, 213 - "goto chain not allowed on fallback"); 214 - err = -EINVAL; 215 - goto failure; 216 - } 217 - } 218 219 spin_lock_bh(&police->tcf_lock); 220 spin_lock_bh(&police->tcfp_lock);
··· 85 int ovr, int bind, bool rtnl_held, 86 struct netlink_ext_ack *extack) 87 { 88 + int ret = 0, tcfp_result = TC_ACT_OK, err, size; 89 struct nlattr *tb[TCA_POLICE_MAX + 1]; 90 struct tc_police *parm; 91 struct tcf_police *police; ··· 93 struct tc_action_net *tn = net_generic(net, police_net_id); 94 struct tcf_police_params *new; 95 bool exists = false; 96 97 if (nla == NULL) 98 return -EINVAL; ··· 160 goto failure; 161 } 162 163 + if (tb[TCA_POLICE_RESULT]) { 164 + tcfp_result = nla_get_u32(tb[TCA_POLICE_RESULT]); 165 + if (TC_ACT_EXT_CMP(tcfp_result, TC_ACT_GOTO_CHAIN)) { 166 + NL_SET_ERR_MSG(extack, 167 + "goto chain not allowed on fallback"); 168 + err = -EINVAL; 169 + goto failure; 170 + } 171 + } 172 + 173 new = kzalloc(sizeof(*new), GFP_KERNEL); 174 if (unlikely(!new)) { 175 err = -ENOMEM; ··· 167 } 168 169 /* No failure allowed after this point */ 170 + new->tcfp_result = tcfp_result; 171 new->tcfp_mtu = parm->mtu; 172 if (!new->tcfp_mtu) { 173 new->tcfp_mtu = ~0; ··· 195 196 if (tb[TCA_POLICE_AVRATE]) 197 new->tcfp_ewma_rate = nla_get_u32(tb[TCA_POLICE_AVRATE]); 198 199 spin_lock_bh(&police->tcf_lock); 200 spin_lock_bh(&police->tcfp_lock);
+10 -13
net/sched/cls_flower.c
··· 1238 if (err) 1239 goto errout_idr; 1240 1241 - if (!tc_skip_sw(fnew->flags)) { 1242 - if (!fold && fl_lookup(fnew->mask, &fnew->mkey)) { 1243 - err = -EEXIST; 1244 - goto errout_mask; 1245 - } 1246 - 1247 - err = rhashtable_insert_fast(&fnew->mask->ht, &fnew->ht_node, 1248 - fnew->mask->filter_ht_params); 1249 - if (err) 1250 - goto errout_mask; 1251 } 1252 1253 if (!tc_skip_hw(fnew->flags)) { 1254 err = fl_hw_replace_filter(tp, fnew, extack); ··· 1301 struct cls_fl_head *head = rtnl_dereference(tp->root); 1302 struct cls_fl_filter *f = arg; 1303 1304 - if (!tc_skip_sw(f->flags)) 1305 - rhashtable_remove_fast(&f->mask->ht, &f->ht_node, 1306 - f->mask->filter_ht_params); 1307 __fl_delete(tp, f, extack); 1308 *last = list_empty(&head->masks); 1309 return 0;
··· 1238 if (err) 1239 goto errout_idr; 1240 1241 + if (!fold && fl_lookup(fnew->mask, &fnew->mkey)) { 1242 + err = -EEXIST; 1243 + goto errout_mask; 1244 } 1245 + 1246 + err = rhashtable_insert_fast(&fnew->mask->ht, &fnew->ht_node, 1247 + fnew->mask->filter_ht_params); 1248 + if (err) 1249 + goto errout_mask; 1250 1251 if (!tc_skip_hw(fnew->flags)) { 1252 err = fl_hw_replace_filter(tp, fnew, extack); ··· 1303 struct cls_fl_head *head = rtnl_dereference(tp->root); 1304 struct cls_fl_filter *f = arg; 1305 1306 + rhashtable_remove_fast(&f->mask->ht, &f->ht_node, 1307 + f->mask->filter_ht_params); 1308 __fl_delete(tp, f, extack); 1309 *last = list_empty(&head->masks); 1310 return 0;
+3
net/sched/sch_netem.c
··· 431 int count = 1; 432 int rc = NET_XMIT_SUCCESS; 433 434 /* Random duplication */ 435 if (q->duplicate && q->duplicate >= get_crandom(&q->dup_cor)) 436 ++count;
··· 431 int count = 1; 432 int rc = NET_XMIT_SUCCESS; 433 434 + /* Do not fool qdisc_drop_all() */ 435 + skb->prev = NULL; 436 + 437 /* Random duplication */ 438 if (q->duplicate && q->duplicate >= get_crandom(&q->dup_cor)) 439 ++count;
+5 -4
net/sctp/associola.c
··· 118 asoc->flowlabel = sp->flowlabel; 119 asoc->dscp = sp->dscp; 120 121 - /* Initialize default path MTU. */ 122 - asoc->pathmtu = sp->pathmtu; 123 - 124 /* Set association default SACK delay */ 125 asoc->sackdelay = msecs_to_jiffies(sp->sackdelay); 126 asoc->sackfreq = sp->sackfreq; ··· 248 if (sctp_stream_init(&asoc->stream, asoc->c.sinit_num_ostreams, 249 0, gfp)) 250 goto fail_init; 251 252 /* Assume that peer would support both address types unless we are 253 * told otherwise. ··· 435 436 WARN_ON(atomic_read(&asoc->rmem_alloc)); 437 438 - kfree(asoc); 439 SCTP_DBG_OBJCNT_DEC(assoc); 440 } 441
··· 118 asoc->flowlabel = sp->flowlabel; 119 asoc->dscp = sp->dscp; 120 121 /* Set association default SACK delay */ 122 asoc->sackdelay = msecs_to_jiffies(sp->sackdelay); 123 asoc->sackfreq = sp->sackfreq; ··· 251 if (sctp_stream_init(&asoc->stream, asoc->c.sinit_num_ostreams, 252 0, gfp)) 253 goto fail_init; 254 + 255 + /* Initialize default path MTU. */ 256 + asoc->pathmtu = sp->pathmtu; 257 + sctp_assoc_update_frag_point(asoc); 258 259 /* Assume that peer would support both address types unless we are 260 * told otherwise. ··· 434 435 WARN_ON(atomic_read(&asoc->rmem_alloc)); 436 437 + kfree_rcu(asoc, rcu); 438 SCTP_DBG_OBJCNT_DEC(assoc); 439 } 440
+6
net/sctp/chunk.c
··· 191 * the packet 192 */ 193 max_data = asoc->frag_point; 194 195 /* If the peer requested that we authenticate DATA chunks 196 * we need to account for bundling of the AUTH chunks along with
··· 191 * the packet 192 */ 193 max_data = asoc->frag_point; 194 + if (unlikely(!max_data)) { 195 + max_data = sctp_min_frag_point(sctp_sk(asoc->base.sk), 196 + sctp_datachk_len(&asoc->stream)); 197 + pr_warn_ratelimited("%s: asoc:%p frag_point is zero, forcing max_data to default minimum (%Zu)", 198 + __func__, asoc, max_data); 199 + } 200 201 /* If the peer requested that we authenticate DATA chunks 202 * we need to account for bundling of the AUTH chunks along with
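sctp_min_frag_point() comes from the same series; judging by the socket-option path further below, which swaps sctp_mtu_payload() for it, the helper is essentially this sketch:

	static inline __u16 sctp_min_frag_point(struct sctp_sock *sp,
						__u16 datasize)
	{
		/* smallest usable frag_point: the DATA payload that still
		 * fits in SCTP_DEFAULT_MINSEGMENT (512) bytes of path MTU
		 */
		return sctp_mtu_payload(sp, SCTP_DEFAULT_MINSEGMENT, datasize);
	}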
+3
net/sctp/sm_make_chunk.c
··· 2462 asoc->c.sinit_max_instreams, gfp)) 2463 goto clean_up; 2464 2465 if (!asoc->temp && sctp_assoc_set_id(asoc, gfp)) 2466 goto clean_up; 2467
··· 2462 asoc->c.sinit_max_instreams, gfp)) 2463 goto clean_up; 2464 2465 + /* Update frag_point when stream_interleave may get changed. */ 2466 + sctp_assoc_update_frag_point(asoc); 2467 + 2468 if (!asoc->temp && sctp_assoc_set_id(asoc, gfp)) 2469 goto clean_up; 2470
+1 -2
net/sctp/socket.c
··· 3324 __u16 datasize = asoc ? sctp_datachk_len(&asoc->stream) : 3325 sizeof(struct sctp_data_chunk); 3326 3327 - min_len = sctp_mtu_payload(sp, SCTP_DEFAULT_MINSEGMENT, 3328 - datasize); 3329 max_len = SCTP_MAX_CHUNK_LEN - datasize; 3330 3331 if (val < min_len || val > max_len)
··· 3324 __u16 datasize = asoc ? sctp_datachk_len(&asoc->stream) : 3325 sizeof(struct sctp_data_chunk); 3326 3327 + min_len = sctp_min_frag_point(sp, datasize); 3328 max_len = SCTP_MAX_CHUNK_LEN - datasize; 3329 3330 if (val < min_len || val > max_len)
+2 -2
net/wireless/mlme.c
··· 272 273 p1 = (u8*)(ht_capa); 274 p2 = (u8*)(ht_capa_mask); 275 - for (i = 0; i<sizeof(*ht_capa); i++) 276 p1[i] &= p2[i]; 277 } 278 279 - /* Do a logical ht_capa &= ht_capa_mask. */ 280 void cfg80211_oper_and_vht_capa(struct ieee80211_vht_cap *vht_capa, 281 const struct ieee80211_vht_cap *vht_capa_mask) 282 {
··· 272 273 p1 = (u8*)(ht_capa); 274 p2 = (u8*)(ht_capa_mask); 275 + for (i = 0; i < sizeof(*ht_capa); i++) 276 p1[i] &= p2[i]; 277 } 278 279 + /* Do a logical vht_capa &= vht_capa_mask. */ 280 void cfg80211_oper_and_vht_capa(struct ieee80211_vht_cap *vht_capa, 281 const struct ieee80211_vht_cap *vht_capa_mask) 282 {
+1
net/wireless/nl80211.c
··· 7870 } 7871 7872 memset(&params, 0, sizeof(params)); 7873 7874 if (!info->attrs[NL80211_ATTR_WIPHY_FREQ] || 7875 !info->attrs[NL80211_ATTR_CH_SWITCH_COUNT])
··· 7870 } 7871 7872 memset(&params, 0, sizeof(params)); 7873 + params.beacon_csa.ftm_responder = -1; 7874 7875 if (!info->attrs[NL80211_ATTR_WIPHY_FREQ] || 7876 !info->attrs[NL80211_ATTR_CH_SWITCH_COUNT])
+7 -1
net/wireless/sme.c
··· 642 * All devices must be idle as otherwise if you are actively 643 * scanning some new beacon hints could be learned and would 644 * count as new regulatory hints. 645 */ 646 list_for_each_entry(rdev, &cfg80211_rdev_list, list) { 647 list_for_each_entry(wdev, &rdev->wiphy.wdev_list, list) { 648 wdev_lock(wdev); 649 - if (wdev->conn || wdev->current_bss) 650 is_all_idle = false; 651 wdev_unlock(wdev); 652 } ··· 1175 1176 cfg80211_oper_and_ht_capa(&connect->ht_capa_mask, 1177 rdev->wiphy.ht_capa_mod_mask); 1178 1179 if (connkeys && connkeys->def >= 0) { 1180 int idx;
··· 642 * All devices must be idle as otherwise if you are actively 643 * scanning some new beacon hints could be learned and would 644 * count as new regulatory hints. 645 + * Also if there is any other active beaconing interface we 646 + * need not issue a disconnect hint and reset any info such 647 + * as chan dfs state, etc. 648 */ 649 list_for_each_entry(rdev, &cfg80211_rdev_list, list) { 650 list_for_each_entry(wdev, &rdev->wiphy.wdev_list, list) { 651 wdev_lock(wdev); 652 + if (wdev->conn || wdev->current_bss || 653 + cfg80211_beaconing_iface_active(wdev)) 654 is_all_idle = false; 655 wdev_unlock(wdev); 656 } ··· 1171 1172 cfg80211_oper_and_ht_capa(&connect->ht_capa_mask, 1173 rdev->wiphy.ht_capa_mod_mask); 1174 + cfg80211_oper_and_vht_capa(&connect->vht_capa_mask, 1175 + rdev->wiphy.vht_capa_mod_mask); 1176 1177 if (connkeys && connkeys->def >= 0) { 1178 int idx;
+2
net/wireless/util.c
··· 1421 ies[pos + ext], 1422 ext == 2)) 1423 pos = skip_ie(ies, ielen, pos); 1424 } 1425 } else { 1426 pos = skip_ie(ies, ielen, pos);
··· 1421 ies[pos + ext], 1422 ext == 2)) 1423 pos = skip_ie(ies, ielen, pos); 1424 + else 1425 + break; 1426 } 1427 } else { 1428 pos = skip_ie(ies, ielen, pos);
+11 -7
net/x25/af_x25.c
··· 100 } 101 102 len = *skb->data; 103 - needed = 1 + (len >> 4) + (len & 0x0f); 104 105 if (!pskb_may_pull(skb, needed)) { 106 /* packet is too short to hold the addresses it claims ··· 288 sk_for_each(s, &x25_list) 289 if ((!strcmp(addr->x25_addr, 290 x25_sk(s)->source_addr.x25_addr) || 291 - !strcmp(addr->x25_addr, 292 null_x25_address.x25_addr)) && 293 s->sk_state == TCP_LISTEN) { 294 /* ··· 688 goto out; 689 } 690 691 - len = strlen(addr->sx25_addr.x25_addr); 692 - for (i = 0; i < len; i++) { 693 - if (!isdigit(addr->sx25_addr.x25_addr[i])) { 694 - rc = -EINVAL; 695 - goto out; 696 } 697 } 698
··· 100 } 101 102 len = *skb->data; 103 + needed = 1 + ((len >> 4) + (len & 0x0f) + 1) / 2; 104 105 if (!pskb_may_pull(skb, needed)) { 106 /* packet is too short to hold the addresses it claims ··· 288 sk_for_each(s, &x25_list) 289 if ((!strcmp(addr->x25_addr, 290 x25_sk(s)->source_addr.x25_addr) || 291 + !strcmp(x25_sk(s)->source_addr.x25_addr, 292 null_x25_address.x25_addr)) && 293 s->sk_state == TCP_LISTEN) { 294 /* ··· 688 goto out; 689 } 690 691 + /* check for the null_x25_address */ 692 + if (strcmp(addr->sx25_addr.x25_addr, null_x25_address.x25_addr)) { 693 + 694 + len = strlen(addr->sx25_addr.x25_addr); 695 + for (i = 0; i < len; i++) { 696 + if (!isdigit(addr->sx25_addr.x25_addr[i])) { 697 + rc = -EINVAL; 698 + goto out; 699 + } 700 } 701 } 702
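The corrected length math reflects that X.25 addresses travel in BCD: the first byte's high and low nibbles carry the called and calling digit counts, and two digits pack into each following byte. A small standalone example with illustrative values:

#include <stdio.h>

int main(void)
{
	unsigned char len = 0x75;  /* 7 called digits, 5 calling digits */
	int digits = (len >> 4) + (len & 0x0f);
	int old_needed = 1 + digits;            /* old code: a byte per digit */
	int new_needed = 1 + (digits + 1) / 2;  /* BCD: two digits per byte */

	/* prints: digits=12 old=13 new=7 */
	printf("digits=%d old=%d new=%d\n", digits, old_needed, new_needed);
	return 0;
}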
+9
net/x25/x25_in.c
··· 142 sk->sk_state_change(sk); 143 break; 144 } 145 case X25_CLEAR_REQUEST: 146 if (!pskb_may_pull(skb, X25_STD_MIN_LEN + 2)) 147 goto out_clear;
··· 142 sk->sk_state_change(sk); 143 break; 144 } 145 + case X25_CALL_REQUEST: 146 + /* call collision */ 147 + x25->causediag.cause = 0x01; 148 + x25->causediag.diagnostic = 0x48; 149 + 150 + x25_write_internal(sk, X25_CLEAR_REQUEST); 151 + x25_disconnect(sk, EISCONN, 0x01, 0x48); 152 + break; 153 + 154 case X25_CLEAR_REQUEST: 155 if (!pskb_may_pull(skb, X25_STD_MIN_LEN + 2)) 156 goto out_clear;
+3 -3
tools/bpf/bpftool/btf_dumper.c
··· 32 } 33 34 static int btf_dumper_modifier(const struct btf_dumper *d, __u32 type_id, 35 - const void *data) 36 { 37 int actual_type_id; 38 ··· 40 if (actual_type_id < 0) 41 return actual_type_id; 42 43 - return btf_dumper_do_type(d, actual_type_id, 0, data); 44 } 45 46 static void btf_dumper_enum(const void *data, json_writer_t *jw) ··· 237 case BTF_KIND_VOLATILE: 238 case BTF_KIND_CONST: 239 case BTF_KIND_RESTRICT: 240 - return btf_dumper_modifier(d, type_id, data); 241 default: 242 jsonw_printf(d->jw, "(unsupported-kind"); 243 return -EINVAL;
··· 32 } 33 34 static int btf_dumper_modifier(const struct btf_dumper *d, __u32 type_id, 35 + __u8 bit_offset, const void *data) 36 { 37 int actual_type_id; 38 ··· 40 if (actual_type_id < 0) 41 return actual_type_id; 42 43 + return btf_dumper_do_type(d, actual_type_id, bit_offset, data); 44 } 45 46 static void btf_dumper_enum(const void *data, json_writer_t *jw) ··· 237 case BTF_KIND_VOLATILE: 238 case BTF_KIND_CONST: 239 case BTF_KIND_RESTRICT: 240 + return btf_dumper_modifier(d, type_id, bit_offset, data); 241 default: 242 jsonw_printf(d->jw, "(unsupported-kind"); 243 return -EINVAL;
+37 -19
tools/include/uapi/linux/bpf.h
··· 2170 * Return 2171 * 0 on success, or a negative error in case of failure. 2172 * 2173 - * struct bpf_sock *bpf_sk_lookup_tcp(void *ctx, struct bpf_sock_tuple *tuple, u32 tuple_size, u32 netns, u64 flags) 2174 * Description 2175 * Look for TCP socket matching *tuple*, optionally in a child 2176 * network namespace *netns*. The return value must be checked, ··· 2187 * **sizeof**\ (*tuple*\ **->ipv6**) 2188 * Look for an IPv6 socket. 2189 * 2190 - * If the *netns* is zero, then the socket lookup table in the 2191 - * netns associated with the *ctx* will be used. For the TC hooks, 2192 - * this in the netns of the device in the skb. For socket hooks, 2193 - * this in the netns of the socket. If *netns* is non-zero, then 2194 - * it specifies the ID of the netns relative to the netns 2195 - * associated with the *ctx*. 2196 * 2197 * All values for *flags* are reserved for future usage, and must 2198 * be left at zero. ··· 2203 * **CONFIG_NET** configuration option. 2204 * Return 2205 * Pointer to *struct bpf_sock*, or NULL in case of failure. 2206 * 2207 - * struct bpf_sock *bpf_sk_lookup_udp(void *ctx, struct bpf_sock_tuple *tuple, u32 tuple_size, u32 netns, u64 flags) 2208 * Description 2209 * Look for UDP socket matching *tuple*, optionally in a child 2210 * network namespace *netns*. The return value must be checked, ··· 2223 * **sizeof**\ (*tuple*\ **->ipv6**) 2224 * Look for an IPv6 socket. 2225 * 2226 - * If the *netns* is zero, then the socket lookup table in the 2227 - * netns associated with the *ctx* will be used. For the TC hooks, 2228 - * this in the netns of the device in the skb. For socket hooks, 2229 - * this in the netns of the socket. If *netns* is non-zero, then 2230 - * it specifies the ID of the netns relative to the netns 2231 - * associated with the *ctx*. 2232 * 2233 * All values for *flags* are reserved for future usage, and must 2234 * be left at zero. ··· 2239 * **CONFIG_NET** configuration option. 2240 * Return 2241 * Pointer to *struct bpf_sock*, or NULL in case of failure. 2242 * 2243 * int bpf_sk_release(struct bpf_sock *sk) 2244 * Description ··· 2413 /* BPF_FUNC_perf_event_output for sk_buff input context. */ 2414 #define BPF_F_CTXLEN_MASK (0xfffffULL << 32) 2415 2416 /* Mode for BPF_FUNC_skb_adjust_room helper. */ 2417 enum bpf_adj_room_mode { 2418 BPF_ADJ_ROOM_NET, ··· 2432 BPF_LWT_ENCAP_SEG6, 2433 BPF_LWT_ENCAP_SEG6_INLINE 2434 }; 2435 2436 /* user accessible mirror of in-kernel sk_buff. 2437 * new fields can only be added to the end of this structure ··· 2473 /* ... here. */ 2474 2475 __u32 data_meta; 2476 - struct bpf_flow_keys *flow_keys; 2477 }; 2478 2479 struct bpf_tunnel_key { ··· 2589 * be added to the end of this structure 2590 */ 2591 struct sk_msg_md { 2592 - void *data; 2593 - void *data_end; 2594 2595 __u32 family; 2596 __u32 remote_ip4; /* Stored in network byte order */ ··· 2606 * Start of directly accessible data. It begins from 2607 * the tcp/udp header. 2608 */ 2609 - void *data; 2610 - void *data_end; /* End of directly accessible data */ 2611 /* 2612 * Total length of packet (starting from the tcp/udp header). 2613 * Note that the directly accessible bytes (data_end - data)
··· 2170 * Return 2171 * 0 on success, or a negative error in case of failure. 2172 * 2173 + * struct bpf_sock *bpf_sk_lookup_tcp(void *ctx, struct bpf_sock_tuple *tuple, u32 tuple_size, u64 netns, u64 flags) 2174 * Description 2175 * Look for TCP socket matching *tuple*, optionally in a child 2176 * network namespace *netns*. The return value must be checked, ··· 2187 * **sizeof**\ (*tuple*\ **->ipv6**) 2188 * Look for an IPv6 socket. 2189 * 2190 + * If the *netns* is a negative signed 32-bit integer, then the 2191 + * socket lookup table in the netns associated with the *ctx* will 2192 + * be used. For the TC hooks, this is the netns of the device 2193 + * in the skb. For socket hooks, this is the netns of the socket. 2194 + * If *netns* is any other signed 32-bit value greater than or 2195 + * equal to zero then it specifies the ID of the netns relative to 2196 + * the netns associated with the *ctx*. *netns* values beyond the 2197 + * range of 32-bit integers are reserved for future use. 2198 * 2199 * All values for *flags* are reserved for future usage, and must 2200 * be left at zero. ··· 2201 * **CONFIG_NET** configuration option. 2202 * Return 2203 * Pointer to *struct bpf_sock*, or NULL in case of failure. 2204 + * For sockets with reuseport option, the *struct bpf_sock* 2205 + * result is from reuse->socks[] using the hash of the tuple. 2206 * 2207 + * struct bpf_sock *bpf_sk_lookup_udp(void *ctx, struct bpf_sock_tuple *tuple, u32 tuple_size, u64 netns, u64 flags) 2208 * Description 2209 * Look for UDP socket matching *tuple*, optionally in a child 2210 * network namespace *netns*. The return value must be checked, ··· 2219 * **sizeof**\ (*tuple*\ **->ipv6**) 2220 * Look for an IPv6 socket. 2221 * 2222 + * If the *netns* is a negative signed 32-bit integer, then the 2223 + * socket lookup table in the netns associated with the *ctx* will 2224 + * be used. For the TC hooks, this is the netns of the device 2225 + * in the skb. For socket hooks, this is the netns of the socket. 2226 + * If *netns* is any other signed 32-bit value greater than or 2227 + * equal to zero then it specifies the ID of the netns relative to 2228 + * the netns associated with the *ctx*. *netns* values beyond the 2229 + * range of 32-bit integers are reserved for future use. 2230 * 2231 * All values for *flags* are reserved for future usage, and must 2232 * be left at zero. ··· 2233 * **CONFIG_NET** configuration option. 2234 * Return 2235 * Pointer to *struct bpf_sock*, or NULL in case of failure. 2236 + * For sockets with reuseport option, the *struct bpf_sock* 2237 + * result is from reuse->socks[] using the hash of the tuple. 2238 * 2239 * int bpf_sk_release(struct bpf_sock *sk) 2240 * Description ··· 2405 /* BPF_FUNC_perf_event_output for sk_buff input context. */ 2406 #define BPF_F_CTXLEN_MASK (0xfffffULL << 32) 2407 2408 + /* Current network namespace */ 2409 + #define BPF_F_CURRENT_NETNS (-1L) 2410 + 2411 /* Mode for BPF_FUNC_skb_adjust_room helper. */ 2412 enum bpf_adj_room_mode { 2413 BPF_ADJ_ROOM_NET, ··· 2421 BPF_LWT_ENCAP_SEG6, 2422 BPF_LWT_ENCAP_SEG6_INLINE 2423 }; 2424 + 2425 + #define __bpf_md_ptr(type, name) \ 2426 + union { \ 2427 + type name; \ 2428 + __u64 :64; \ 2429 + } __attribute__((aligned(8))) 2430 2431 /* user accessible mirror of in-kernel sk_buff. 2432 * new fields can only be added to the end of this structure ··· 2456 /* ... here. */ 2457 2458 __u32 data_meta; 2459 + __bpf_md_ptr(struct bpf_flow_keys *, flow_keys); 2460 }; 2461 2462 struct bpf_tunnel_key { ··· 2572 * be added to the end of this structure 2573 */ 2574 struct sk_msg_md { 2575 + __bpf_md_ptr(void *, data); 2576 + __bpf_md_ptr(void *, data_end); 2577 2578 __u32 family; 2579 __u32 remote_ip4; /* Stored in network byte order */ ··· 2589 * Start of directly accessible data. It begins from 2590 * the tcp/udp header. 2591 */ 2592 + __bpf_md_ptr(void *, data); 2593 + /* End of directly accessible data */ 2594 + __bpf_md_ptr(void *, data_end); 2595 /* 2596 * Total length of packet (starting from the tcp/udp header). 2597 * Note that the directly accessible bytes (data_end - data)
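__bpf_md_ptr is the structural fix here: on 32-bit userland a bare pointer member shrinks to 4 bytes and shifts every later field, breaking the context layout the kernel expects. A standalone sketch of the guarantee; the example struct and the assertion are my illustration (with uint64_t standing in for __u64), not part of the patch:

#include <stddef.h>
#include <stdint.h>

#define __bpf_md_ptr(type, name)	\
union {					\
	type name;			\
	uint64_t :64;			\
} __attribute__((aligned(8)))

struct example_md {
	__bpf_md_ptr(void *, data);
	__bpf_md_ptr(void *, data_end);
	uint32_t family;
};

/* Each union occupies 8 aligned bytes whether pointers are 4 or 8 bytes
 * wide, so 'family' lands at offset 16 on 32- and 64-bit userland alike. */
_Static_assert(offsetof(struct example_md, family) == 16, "stable layout");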
+2 -2
tools/testing/selftests/bpf/bpf_helpers.h
··· 154 (void *) BPF_FUNC_skb_ancestor_cgroup_id; 155 static struct bpf_sock *(*bpf_sk_lookup_tcp)(void *ctx, 156 struct bpf_sock_tuple *tuple, 157 - int size, unsigned int netns_id, 158 unsigned long long flags) = 159 (void *) BPF_FUNC_sk_lookup_tcp; 160 static struct bpf_sock *(*bpf_sk_lookup_udp)(void *ctx, 161 struct bpf_sock_tuple *tuple, 162 - int size, unsigned int netns_id, 163 unsigned long long flags) = 164 (void *) BPF_FUNC_sk_lookup_udp; 165 static int (*bpf_sk_release)(struct bpf_sock *sk) =
··· 154 (void *) BPF_FUNC_skb_ancestor_cgroup_id; 155 static struct bpf_sock *(*bpf_sk_lookup_tcp)(void *ctx, 156 struct bpf_sock_tuple *tuple, 157 + int size, unsigned long long netns_id, 158 unsigned long long flags) = 159 (void *) BPF_FUNC_sk_lookup_tcp; 160 static struct bpf_sock *(*bpf_sk_lookup_udp)(void *ctx, 161 struct bpf_sock_tuple *tuple, 162 + int size, unsigned long long netns_id, 163 unsigned long long flags) = 164 (void *) BPF_FUNC_sk_lookup_udp; 165 static int (*bpf_sk_release)(struct bpf_sock *sk) =
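The prototypes widen netns_id from unsigned int to unsigned long long because the new BPF_F_CURRENT_NETNS sentinel is -1: squeezed through a 32-bit parameter it would truncate and no longer match the 64-bit value the kernel-side helper checks. A small standalone illustration of the truncation:

#include <stdio.h>

int main(void)
{
	long long sentinel = -1LL;  /* BPF_F_CURRENT_NETNS */
	unsigned int as_u32 = (unsigned int)sentinel;
	unsigned long long as_u64 = (unsigned long long)sentinel;

	/* prints: u32=0xffffffff u64=0xffffffffffffffff -- only the
	 * 64-bit form round-trips the sentinel intact */
	printf("u32=%#x u64=%#llx\n", as_u32, as_u64);
	return 0;
}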
+368 -7
tools/testing/selftests/bpf/test_btf.c
··· 432 /* const void* */ /* [3] */ 433 BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 2), 434 /* typedef const void * const_void_ptr */ 435 - BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 3), 436 - /* struct A { */ /* [4] */ 437 BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 1), sizeof(void *)), 438 /* const_void_ptr m; */ 439 - BTF_MEMBER_ENC(NAME_TBD, 3, 0), 440 /* } */ 441 BTF_END_RAW, 442 }, ··· 494 BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_CONST, 0, 0), 0), 495 /* const void* */ /* [3] */ 496 BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 2), 497 - /* typedef const void * const_void_ptr */ /* [4] */ 498 - BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 3), 499 - /* const_void_ptr[4] */ /* [5] */ 500 - BTF_TYPE_ARRAY_ENC(3, 1, 4), 501 BTF_END_RAW, 502 }, 503 .str_sec = "\0const_void_ptr", ··· 1292 .err_str = "type != 0", 1293 }, 1294 1295 { 1296 .descr = "arraymap invalid btf key (a bit field)", 1297 .raw_types = {
··· 432 /* const void* */ /* [3] */ 433 BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 2), 434 /* typedef const void * const_void_ptr */ 435 + BTF_TYPEDEF_ENC(NAME_TBD, 3), /* [4] */ 436 + /* struct A { */ /* [5] */ 437 BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 1), sizeof(void *)), 438 /* const_void_ptr m; */ 439 + BTF_MEMBER_ENC(NAME_TBD, 4, 0), 440 /* } */ 441 BTF_END_RAW, 442 }, ··· 494 BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_CONST, 0, 0), 0), 495 /* const void* */ /* [3] */ 496 BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 2), 497 + /* typedef const void * const_void_ptr */ 498 + BTF_TYPEDEF_ENC(NAME_TBD, 3), /* [4] */ 499 + /* const_void_ptr[4] */ 500 + BTF_TYPE_ARRAY_ENC(4, 1, 4), /* [5] */ 501 BTF_END_RAW, 502 }, 503 .str_sec = "\0const_void_ptr", ··· 1292 .err_str = "type != 0", 1293 }, 1294 1295 + { 1296 + .descr = "typedef (invalid name, name_off = 0)", 1297 + .raw_types = { 1298 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1299 + BTF_TYPEDEF_ENC(0, 1), /* [2] */ 1300 + BTF_END_RAW, 1301 + }, 1302 + .str_sec = "\0__int", 1303 + .str_sec_size = sizeof("\0__int"), 1304 + .map_type = BPF_MAP_TYPE_ARRAY, 1305 + .map_name = "typedef_check_btf", 1306 + .key_size = sizeof(int), 1307 + .value_size = sizeof(int), 1308 + .key_type_id = 1, 1309 + .value_type_id = 1, 1310 + .max_entries = 4, 1311 + .btf_load_err = true, 1312 + .err_str = "Invalid name", 1313 + }, 1314 + 1315 + { 1316 + .descr = "typedef (invalid name, invalid identifier)", 1317 + .raw_types = { 1318 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1319 + BTF_TYPEDEF_ENC(NAME_TBD, 1), /* [2] */ 1320 + BTF_END_RAW, 1321 + }, 1322 + .str_sec = "\0__!int", 1323 + .str_sec_size = sizeof("\0__!int"), 1324 + .map_type = BPF_MAP_TYPE_ARRAY, 1325 + .map_name = "typedef_check_btf", 1326 + .key_size = sizeof(int), 1327 + .value_size = sizeof(int), 1328 + .key_type_id = 1, 1329 + .value_type_id = 1, 1330 + .max_entries = 4, 1331 + .btf_load_err = true, 1332 + .err_str = "Invalid name", 1333 + }, 1334 + 1335 + { 1336 + .descr = "ptr type (invalid name, name_off <> 0)", 1337 + .raw_types = { 1338 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1339 + BTF_TYPE_ENC(NAME_TBD, 1340 + BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 1), /* [2] */ 1341 + BTF_END_RAW, 1342 + }, 1343 + .str_sec = "\0__int", 1344 + .str_sec_size = sizeof("\0__int"), 1345 + .map_type = BPF_MAP_TYPE_ARRAY, 1346 + .map_name = "ptr_type_check_btf", 1347 + .key_size = sizeof(int), 1348 + .value_size = sizeof(int), 1349 + .key_type_id = 1, 1350 + .value_type_id = 1, 1351 + .max_entries = 4, 1352 + .btf_load_err = true, 1353 + .err_str = "Invalid name", 1354 + }, 1355 + 1356 + { 1357 + .descr = "volatile type (invalid name, name_off <> 0)", 1358 + .raw_types = { 1359 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1360 + BTF_TYPE_ENC(NAME_TBD, 1361 + BTF_INFO_ENC(BTF_KIND_VOLATILE, 0, 0), 1), /* [2] */ 1362 + BTF_END_RAW, 1363 + }, 1364 + .str_sec = "\0__int", 1365 + .str_sec_size = sizeof("\0__int"), 1366 + .map_type = BPF_MAP_TYPE_ARRAY, 1367 + .map_name = "volatile_type_check_btf", 1368 + .key_size = sizeof(int), 1369 + .value_size = sizeof(int), 1370 + .key_type_id = 1, 1371 + .value_type_id = 1, 1372 + .max_entries = 4, 1373 + .btf_load_err = true, 1374 + .err_str = "Invalid name", 1375 + }, 1376 + 1377 + { 1378 + .descr = "const type (invalid name, name_off <> 0)", 1379 + .raw_types = { 1380 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1381 + BTF_TYPE_ENC(NAME_TBD, 1382 + BTF_INFO_ENC(BTF_KIND_CONST, 0, 0), 1), /* [2] */ 1383 + BTF_END_RAW, 1384 + }, 1385 + .str_sec = "\0__int", 1386 + .str_sec_size = sizeof("\0__int"), 1387 + .map_type = BPF_MAP_TYPE_ARRAY, 1388 + .map_name = "const_type_check_btf", 1389 + .key_size = sizeof(int), 1390 + .value_size = sizeof(int), 1391 + .key_type_id = 1, 1392 + .value_type_id = 1, 1393 + .max_entries = 4, 1394 + .btf_load_err = true, 1395 + .err_str = "Invalid name", 1396 + }, 1397 + 1398 + { 1399 + .descr = "restrict type (invalid name, name_off <> 0)", 1400 + .raw_types = { 1401 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1402 + BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 1), /* [2] */ 1403 + BTF_TYPE_ENC(NAME_TBD, 1404 + BTF_INFO_ENC(BTF_KIND_RESTRICT, 0, 0), 2), /* [3] */ 1405 + BTF_END_RAW, 1406 + }, 1407 + .str_sec = "\0__int", 1408 + .str_sec_size = sizeof("\0__int"), 1409 + .map_type = BPF_MAP_TYPE_ARRAY, 1410 + .map_name = "restrict_type_check_btf", 1411 + .key_size = sizeof(int), 1412 + .value_size = sizeof(int), 1413 + .key_type_id = 1, 1414 + .value_type_id = 1, 1415 + .max_entries = 4, 1416 + .btf_load_err = true, 1417 + .err_str = "Invalid name", 1418 + }, 1419 + 1420 + { 1421 + .descr = "fwd type (invalid name, name_off = 0)", 1422 + .raw_types = { 1423 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1424 + BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_FWD, 0, 0), 0), /* [2] */ 1425 + BTF_END_RAW, 1426 + }, 1427 + .str_sec = "\0__skb", 1428 + .str_sec_size = sizeof("\0__skb"), 1429 + .map_type = BPF_MAP_TYPE_ARRAY, 1430 + .map_name = "fwd_type_check_btf", 1431 + .key_size = sizeof(int), 1432 + .value_size = sizeof(int), 1433 + .key_type_id = 1, 1434 + .value_type_id = 1, 1435 + .max_entries = 4, 1436 + .btf_load_err = true, 1437 + .err_str = "Invalid name", 1438 + }, 1439 + 1440 + { 1441 + .descr = "fwd type (invalid name, invalid identifier)", 1442 + .raw_types = { 1443 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1444 + BTF_TYPE_ENC(NAME_TBD, 1445 + BTF_INFO_ENC(BTF_KIND_FWD, 0, 0), 0), /* [2] */ 1446 + BTF_END_RAW, 1447 + }, 1448 + .str_sec = "\0__!skb", 1449 + .str_sec_size = sizeof("\0__!skb"), 1450 + .map_type = BPF_MAP_TYPE_ARRAY, 1451 + .map_name = "fwd_type_check_btf", 1452 + .key_size = sizeof(int), 1453 + .value_size = sizeof(int), 1454 + .key_type_id = 1, 1455 + .value_type_id = 1, 1456 + .max_entries = 4, 1457 + .btf_load_err = true, 1458 + .err_str = "Invalid name", 1459 + }, 1460 + 1461 + { 1462 + .descr = "array type (invalid name, name_off <> 0)", 1463 + .raw_types = { 1464 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1465 + BTF_TYPE_ENC(NAME_TBD, 1466 + BTF_INFO_ENC(BTF_KIND_ARRAY, 0, 0), 0), /* [2] */ 1467 + BTF_ARRAY_ENC(1, 1, 4), 1468 + BTF_END_RAW, 1469 + }, 1470 + .str_sec = "\0__skb", 1471 + .str_sec_size = sizeof("\0__skb"), 1472 + .map_type = BPF_MAP_TYPE_ARRAY, 1473 + .map_name = "array_type_check_btf", 1474 + .key_size = sizeof(int), 1475 + .value_size = sizeof(int), 1476 + .key_type_id = 1, 1477 + .value_type_id = 1, 1478 + .max_entries = 4, 1479 + .btf_load_err = true, 1480 + .err_str = "Invalid name", 1481 + }, 1482 + 1483 + { 1484 + .descr = "struct type (name_off = 0)", 1485 + .raw_types = { 1486 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1487 + BTF_TYPE_ENC(0, 1488 + BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 1), 4), /* [2] */ 1489 + BTF_MEMBER_ENC(NAME_TBD, 1, 0), 1490 + BTF_END_RAW, 1491 + }, 1492 + .str_sec = "\0A", 1493 + .str_sec_size = sizeof("\0A"), 1494 + .map_type = BPF_MAP_TYPE_ARRAY, 1495 + .map_name = "struct_type_check_btf", 1496 + .key_size = sizeof(int), 1497 + .value_size = sizeof(int), 1498 + .key_type_id = 1, 1499 + .value_type_id = 1, 1500 + .max_entries = 4, 1501 + }, 1502 + 1503 + { 1504 + .descr = "struct type (invalid name, invalid identifier)", 1505 + .raw_types = { 1506 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1507 + BTF_TYPE_ENC(NAME_TBD, 1508 + BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 1), 4), /* [2] */ 1509 + BTF_MEMBER_ENC(NAME_TBD, 1, 0), 1510 + BTF_END_RAW, 1511 + }, 1512 + .str_sec = "\0A!\0B", 1513 + .str_sec_size = sizeof("\0A!\0B"), 1514 + .map_type = BPF_MAP_TYPE_ARRAY, 1515 + .map_name = "struct_type_check_btf", 1516 + .key_size = sizeof(int), 1517 + .value_size = sizeof(int), 1518 + .key_type_id = 1, 1519 + .value_type_id = 1, 1520 + .max_entries = 4, 1521 + .btf_load_err = true, 1522 + .err_str = "Invalid name", 1523 + }, 1524 + 1525 + { 1526 + .descr = "struct member (name_off = 0)", 1527 + .raw_types = { 1528 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1529 + BTF_TYPE_ENC(0, 1530 + BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 1), 4), /* [2] */ 1531 + BTF_MEMBER_ENC(NAME_TBD, 1, 0), 1532 + BTF_END_RAW, 1533 + }, 1534 + .str_sec = "\0A", 1535 + .str_sec_size = sizeof("\0A"), 1536 + .map_type = BPF_MAP_TYPE_ARRAY, 1537 + .map_name = "struct_type_check_btf", 1538 + .key_size = sizeof(int), 1539 + .value_size = sizeof(int), 1540 + .key_type_id = 1, 1541 + .value_type_id = 1, 1542 + .max_entries = 4, 1543 + }, 1544 + 1545 + { 1546 + .descr = "struct member (invalid name, invalid identifier)", 1547 + .raw_types = { 1548 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1549 + BTF_TYPE_ENC(NAME_TBD, 1550 + BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 1), 4), /* [2] */ 1551 + BTF_MEMBER_ENC(NAME_TBD, 1, 0), 1552 + BTF_END_RAW, 1553 + }, 1554 + .str_sec = "\0A\0B*", 1555 + .str_sec_size = sizeof("\0A\0B*"), 1556 + .map_type = BPF_MAP_TYPE_ARRAY, 1557 + .map_name = "struct_type_check_btf", 1558 + .key_size = sizeof(int), 1559 + .value_size = sizeof(int), 1560 + .key_type_id = 1, 1561 + .value_type_id = 1, 1562 + .max_entries = 4, 1563 + .btf_load_err = true, 1564 + .err_str = "Invalid name", 1565 + }, 1566 + 1567 + { 1568 + .descr = "enum type (name_off = 0)", 1569 + .raw_types = { 1570 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1571 + BTF_TYPE_ENC(0, 1572 + BTF_INFO_ENC(BTF_KIND_ENUM, 0, 1), 1573 + sizeof(int)), /* [2] */ 1574 + BTF_ENUM_ENC(NAME_TBD, 0), 1575 + BTF_END_RAW, 1576 + }, 1577 + .str_sec = "\0A\0B", 1578 + .str_sec_size = sizeof("\0A\0B"), 1579 + .map_type = BPF_MAP_TYPE_ARRAY, 1580 + .map_name = "enum_type_check_btf", 1581 + .key_size = sizeof(int), 1582 + .value_size = sizeof(int), 1583 + .key_type_id = 1, 1584 + .value_type_id = 1, 1585 + .max_entries = 4, 1586 + }, 1587 + 1588 + { 1589 + .descr = "enum type (invalid name, invalid identifier)", 1590 + .raw_types = { 1591 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1592 + BTF_TYPE_ENC(NAME_TBD, 1593 + BTF_INFO_ENC(BTF_KIND_ENUM, 0, 1), 1594 + sizeof(int)), /* [2] */ 1595 + BTF_ENUM_ENC(NAME_TBD, 0), 1596 + BTF_END_RAW, 1597 + }, 1598 + .str_sec = "\0A!\0B", 1599 + .str_sec_size = sizeof("\0A!\0B"), 1600 + .map_type = BPF_MAP_TYPE_ARRAY, 1601 + .map_name = "enum_type_check_btf", 1602 + .key_size = sizeof(int), 1603 + .value_size = sizeof(int), 1604 + .key_type_id = 1, 1605 + .value_type_id = 1, 1606 + .max_entries = 4, 1607 + .btf_load_err = true, 1608 + .err_str = "Invalid name", 1609 + }, 1610 + 1611 + { 1612 + .descr = "enum member (invalid name, name_off = 0)", 1613 + .raw_types = { 1614 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1615 + BTF_TYPE_ENC(0, 1616 + BTF_INFO_ENC(BTF_KIND_ENUM, 0, 1), 1617 + sizeof(int)), /* [2] */ 1618 + BTF_ENUM_ENC(0, 0), 1619 + BTF_END_RAW, 1620 + }, 1621 + .str_sec = "", 1622 + .str_sec_size = sizeof(""), 1623 + .map_type = BPF_MAP_TYPE_ARRAY, 1624 + .map_name = "enum_type_check_btf", 1625 + .key_size = sizeof(int), 1626 + .value_size = sizeof(int), 1627 + .key_type_id = 1, 1628 + .value_type_id = 1, 1629 + .max_entries = 4, 1630 + .btf_load_err = true, 1631 + .err_str = "Invalid name", 1632 + }, 1633 + 1634 + { 1635 + .descr = "enum member (invalid name, invalid identifier)", 1636 + .raw_types = { 1637 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 1638 + BTF_TYPE_ENC(0, 1639 + BTF_INFO_ENC(BTF_KIND_ENUM, 0, 1), 1640 + sizeof(int)), /* [2] */ 1641 + BTF_ENUM_ENC(NAME_TBD, 0), 1642 + BTF_END_RAW, 1643 + }, 1644 + .str_sec = "\0A!", 1645 + .str_sec_size = sizeof("\0A!"), 1646 + .map_type = BPF_MAP_TYPE_ARRAY, 1647 + .map_name = "enum_type_check_btf", 1648 + .key_size = sizeof(int), 1649 + .value_size = sizeof(int), 1650 + .key_type_id = 1, 1651 + .value_type_id = 1, 1652 + .max_entries = 4, 1653 + .btf_load_err = true, 1654 + .err_str = "Invalid name", 1655 + }, 1656 { 1657 .descr = "arraymap invalid btf key (a bit field)", 1658 .raw_types = {
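The new cases lean on a BTF_TYPEDEF_ENC convenience macro. Judging from the other encoders used in this file, its shape is roughly the following; treat the exact definition as an assumption rather than a quote from the patch:

/* Assumed shape: a typedef entry carries a name, no vlen/kind_flag,
 * and the id of the type it aliases. */
#define BTF_TYPEDEF_ENC(name, type) \
	BTF_TYPE_ENC(name, BTF_INFO_ENC(BTF_KIND_TYPEDEF, 0, 0), type)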
+9 -9
tools/testing/selftests/bpf/test_sk_lookup_kern.c
··· 72 return TC_ACT_SHOT; 73 74 tuple_len = ipv4 ? sizeof(tuple->ipv4) : sizeof(tuple->ipv6); 75 - sk = bpf_sk_lookup_tcp(skb, tuple, tuple_len, 0, 0); 76 if (sk) 77 bpf_sk_release(sk); 78 return sk ? TC_ACT_OK : TC_ACT_UNSPEC; ··· 84 struct bpf_sock_tuple tuple = {}; 85 struct bpf_sock *sk; 86 87 - sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), 0, 0); 88 if (sk) 89 bpf_sk_release(sk); 90 return 0; ··· 97 struct bpf_sock *sk; 98 __u32 family = 0; 99 100 - sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), 0, 0); 101 if (sk) { 102 bpf_sk_release(sk); 103 family = sk->family; ··· 112 struct bpf_sock *sk; 113 __u32 family; 114 115 - sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), 0, 0); 116 if (sk) { 117 sk += 1; 118 bpf_sk_release(sk); ··· 127 struct bpf_sock *sk; 128 __u32 family; 129 130 - sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), 0, 0); 131 sk += 1; 132 if (sk) 133 bpf_sk_release(sk); ··· 139 { 140 struct bpf_sock_tuple tuple = {}; 141 142 - bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), 0, 0); 143 return 0; 144 } 145 ··· 149 struct bpf_sock_tuple tuple = {}; 150 struct bpf_sock *sk; 151 152 - sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), 0, 0); 153 bpf_sk_release(sk); 154 bpf_sk_release(sk); 155 return 0; ··· 161 struct bpf_sock_tuple tuple = {}; 162 struct bpf_sock *sk; 163 164 - sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), 0, 0); 165 bpf_sk_release(sk); 166 return 0; 167 } ··· 169 void lookup_no_release(struct __sk_buff *skb) 170 { 171 struct bpf_sock_tuple tuple = {}; 172 - bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), 0, 0); 173 } 174 175 SEC("fail_no_release_subcall")
··· 72 return TC_ACT_SHOT; 73 74 tuple_len = ipv4 ? sizeof(tuple->ipv4) : sizeof(tuple->ipv6); 75 + sk = bpf_sk_lookup_tcp(skb, tuple, tuple_len, BPF_F_CURRENT_NETNS, 0); 76 if (sk) 77 bpf_sk_release(sk); 78 return sk ? TC_ACT_OK : TC_ACT_UNSPEC; ··· 84 struct bpf_sock_tuple tuple = {}; 85 struct bpf_sock *sk; 86 87 + sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); 88 if (sk) 89 bpf_sk_release(sk); 90 return 0; ··· 97 struct bpf_sock *sk; 98 __u32 family = 0; 99 100 + sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); 101 if (sk) { 102 bpf_sk_release(sk); 103 family = sk->family; ··· 112 struct bpf_sock *sk; 113 __u32 family; 114 115 + sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); 116 if (sk) { 117 sk += 1; 118 bpf_sk_release(sk); ··· 127 struct bpf_sock *sk; 128 __u32 family; 129 130 + sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); 131 sk += 1; 132 if (sk) 133 bpf_sk_release(sk); ··· 139 { 140 struct bpf_sock_tuple tuple = {}; 141 142 + bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); 143 return 0; 144 } 145 ··· 149 struct bpf_sock_tuple tuple = {}; 150 struct bpf_sock *sk; 151 152 + sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); 153 bpf_sk_release(sk); 154 bpf_sk_release(sk); 155 return 0; ··· 161 struct bpf_sock_tuple tuple = {}; 162 struct bpf_sock *sk; 163 164 + sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); 165 bpf_sk_release(sk); 166 return 0; 167 } ··· 169 void lookup_no_release(struct __sk_buff *skb) 170 { 171 struct bpf_sock_tuple tuple = {}; 172 + bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); 173 } 174 175 SEC("fail_no_release_subcall")
+3 -3
tools/testing/selftests/bpf/test_verifier.c
··· 8576 BPF_JMP_IMM(BPF_JA, 0, 0, -7), 8577 }, 8578 .fixup_map_hash_8b = { 4 }, 8579 - .errstr = "R0 invalid mem access 'inv'", 8580 .result = REJECT, 8581 }, 8582 { ··· 10547 "check deducing bounds from const, 5", 10548 .insns = { 10549 BPF_MOV64_IMM(BPF_REG_0, 0), 10550 - BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1), 10551 BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), 10552 BPF_EXIT_INSN(), 10553 }, ··· 14230 14231 reject_from_alignment = fd_prog < 0 && 14232 (test->flags & F_NEEDS_EFFICIENT_UNALIGNED_ACCESS) && 14233 - strstr(bpf_vlog, "Unknown alignment."); 14234 #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 14235 if (reject_from_alignment) { 14236 printf("FAIL\nFailed due to alignment despite having efficient unaligned access: '%s'!\n",
··· 8576 BPF_JMP_IMM(BPF_JA, 0, 0, -7), 8577 }, 8578 .fixup_map_hash_8b = { 4 }, 8579 + .errstr = "unbounded min value", 8580 .result = REJECT, 8581 }, 8582 { ··· 10547 "check deducing bounds from const, 5", 10548 .insns = { 10549 BPF_MOV64_IMM(BPF_REG_0, 0), 10550 + BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 1, 1), 10551 BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), 10552 BPF_EXIT_INSN(), 10553 }, ··· 14230 14231 reject_from_alignment = fd_prog < 0 && 14232 (test->flags & F_NEEDS_EFFICIENT_UNALIGNED_ACCESS) && 14233 + strstr(bpf_vlog, "misaligned"); 14234 #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 14235 if (reject_from_alignment) { 14236 printf("FAIL\nFailed due to alignment despite having efficient unaligned access: '%s'!\n",