
Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from David Miller:

1) Disable RISCV BPF JIT builds when !MMU, from Björn Töpel.

2) nf_tables leaves dangling pointer after free, fix from Eric Dumazet.

3) Out-of-bounds write in __xsk_rcv_memcpy(), fix from Li RongQing.

4) Adjust icmp6 message source address selection when routes have a
preferred source address set, from Tim Stallard.

5) Be sure to validate HSR protocol version when creating new links,
from Taehee Yoo.

6) CAP_NET_ADMIN should be sufficient to manage l2tp tunnels even in
non-initial namespaces, from Michael Weiß.

7) Missing release firmware call in mlx5, from Eran Ben Elisha.

8) Fix variable type in macsec_changelink(), caught by KASAN. Fix from
Taehee Yoo.

9) Fix pause frame negotiation in marvell phy driver, from Clemens
Gruber.

10) Record RX queue early enough in tun packet paths such that XDP
programs will see the correct RX queue index, from Gilberto Bertin.

11) Fix double unlock in mptcp, from Florian Westphal.

12) Fix offset overflow in ARM bpf JIT, from Luke Nelson.

13) marvell10g needs to soft reset PHY when coming out of low power
mode, from Russell King.

14) Fix MTU setting regression in stmmac for some chip types, from
Florian Fainelli.

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (101 commits)
amd-xgbe: Use __napi_schedule() in BH context
mISDN: make dmril and dmrim static
net: stmmac: dwmac-sunxi: Provide TX and RX fifo sizes
net: dsa: mt7530: fix tagged frames pass-through in VLAN-unaware mode
tipc: fix incorrect increasing of link window
Documentation: Fix tcp_challenge_ack_limit default value
net: tulip: make early_486_chipsets static
dt-bindings: net: ethernet-phy: add desciption for ethernet-phy-id1234.d400
ipv6: remove redundant assignment to variable err
net/rds: Use ERR_PTR for rds_message_alloc_sgs()
net: mscc: ocelot: fix untagged packet drops when enslaving to vlan aware bridge
selftests/bpf: Check for correct program attach/detach in xdp_attach test
libbpf: Fix type of old_fd in bpf_xdp_set_link_opts
libbpf: Always specify expected_attach_type on program load if supported
xsk: Add missing check on user supplied headroom size
mac80211: fix channel switch trigger from unknown mesh peer
mac80211: fix race in ieee80211_register_hw()
net: marvell10g: soft-reset the PHY when coming out of low power
net: marvell10g: report firmware version
net/cxgb4: Check the return from t4_query_params properly
...

+1078 -690
+3
Documentation/devicetree/bindings/net/ethernet-phy.yaml
@@
         bits of a vendor specific ID.
   - items:
       - pattern: "^ethernet-phy-id[a-f0-9]{4}\\.[a-f0-9]{4}$"
+      - const: ethernet-phy-ieee802.3-c22
+  - items:
+      - pattern: "^ethernet-phy-id[a-f0-9]{4}\\.[a-f0-9]{4}$"
       - const: ethernet-phy-ieee802.3-c45
 
   reg:
+2
Documentation/devicetree/bindings/net/fsl-fec.txt
@@
 - fsl,err006687-workaround-present: If present indicates that the system has
   the hardware workaround for ERR006687 applied and does not need a software
   workaround.
+- gpr: phandle of SoC general purpose register mode. Required for wake on LAN
+  on some SoCs
 -interrupt-names:  names of the interrupts listed in interrupts property in
   the same order. The defaults if not specified are
   __Number of interrupts__   __Default__
+1
Documentation/networking/index.rst
@@
    z8530book
    msg_zerocopy
    failover
+   net_dim
    net_failover
    phy
    sfp-phylink
+1 -1
Documentation/networking/ip-sysctl.txt
@@
 tcp_challenge_ack_limit - INTEGER
 	Limits number of Challenge ACK sent per second, as recommended
 	in RFC 5961 (Improving TCP's Robustness to Blind In-Window Attacks)
-	Default: 100
+	Default: 1000
 
 tcp_rx_skb_cache - BOOLEAN
 	Controls a per TCP socket cache of one skb, that might help
+49 -47
Documentation/networking/net_dim.txt → Documentation/networking/net_dim.rst
@@
+======================================================
 Net DIM - Generic Network Dynamic Interrupt Moderation
 ======================================================
 
-Author:
-	Tal Gilboa <talgi@mellanox.com>
+:Author: Tal Gilboa <talgi@mellanox.com>
 
+.. contents:: :depth: 2
 
-Contents
-=========
-
-- Assumptions
-- Introduction
-- The Net DIM Algorithm
-- Registering a Network Device to DIM
-- Example
-
-Part 0: Assumptions
-======================
+Assumptions
+===========
 
 This document assumes the reader has basic knowledge in network drivers
 and in general interrupt moderation.
 
 
-Part I: Introduction
-======================
+Introduction
+============
 
 Dynamic Interrupt Moderation (DIM) (in networking) refers to changing the
 interrupt moderation configuration of a channel in order to optimize packet
@@
 increase bandwidth over reducing interrupt rate.
 
 
-Part II: The Net DIM Algorithm
-===============================
+Net DIM Algorithm
+=================
 
 Each iteration of the Net DIM algorithm follows these steps:
-1. Calculates new data sample.
-2. Compares it to previous sample.
-3. Makes a decision - suggests interrupt moderation configuration fields.
-4. Applies a schedule work function, which applies suggested configuration.
+
+#. Calculates new data sample.
+#. Compares it to previous sample.
+#. Makes a decision - suggests interrupt moderation configuration fields.
+#. Applies a schedule work function, which applies suggested configuration.
 
 The first two steps are straightforward, both the new and the previous data are
 supplied by the driver registered to Net DIM. The previous data is the new data
@@
 under some conditions.
 
 
-Part III: Registering a Network Device to DIM
-==============================================
+Registering a Network Device to DIM
+===================================
 
-Net DIM API exposes the main function net_dim(struct dim *dim,
-struct dim_sample end_sample). This function is the entry point to the Net
+Net DIM API exposes the main function net_dim().
+This function is the entry point to the Net
 DIM algorithm and has to be called every time the driver would like to check if
 it should change interrupt moderation parameters. The driver should provide two
-data structures: struct dim and struct dim_sample. Struct dim
+data structures: :c:type:`struct dim <dim>` and
+:c:type:`struct dim_sample <dim_sample>`. :c:type:`struct dim <dim>`
 describes the state of DIM for a specific object (RX queue, TX queue,
 other queues, etc.). This includes the current selected profile, previous data
 samples, the callback function provided by the driver and more.
-Struct dim_sample describes a data sample, which will be compared to the
-data sample stored in struct dim in order to decide on the algorithm's next
+:c:type:`struct dim_sample <dim_sample>` describes a data sample,
+which will be compared to the data sample stored in :c:type:`struct dim <dim>`
+in order to decide on the algorithm's next
 step. The sample should include bytes, packets and interrupts, measured by
 the driver.
 
@@
 interrupt. Since Net DIM has a built-in moderation and it might decide to skip
 iterations under certain conditions, there is no need to moderate the net_dim()
 calls as well. As mentioned above, the driver needs to provide an object of type
-struct dim to the net_dim() function call. It is advised for each entity
-using Net DIM to hold a struct dim as part of its data structure and use it
-as the main Net DIM API object. The struct dim_sample should hold the latest
+:c:type:`struct dim <dim>` to the net_dim() function call. It is advised for
+each entity using Net DIM to hold a :c:type:`struct dim <dim>` as part of its
+data structure and use it as the main Net DIM API object.
+The :c:type:`struct dim_sample <dim_sample>` should hold the latest
 bytes, packets and interrupts count. No need to perform any calculations, just
 include the raw data.
 
@@
 the proper state in order to move to the next iteration.
 
 
-Part IV: Example
-=================
+Example
+=======
 
 The following code demonstrates how to register a driver to Net DIM. The actual
 usage is not complete but it should make the outline of the usage clear.
 
-my_driver.c:
+.. code-block:: c
 
-#include <linux/dim.h>
+  #include <linux/dim.h>
 
-/* Callback for net DIM to schedule on a decision to change moderation */
-void my_driver_do_dim_work(struct work_struct *work)
-{
+  /* Callback for net DIM to schedule on a decision to change moderation */
+  void my_driver_do_dim_work(struct work_struct *work)
+  {
 	/* Get struct dim from struct work_struct */
 	struct dim *dim = container_of(work, struct dim,
 				       work);
@@
 	/* Signal net DIM work is done and it should move to next iteration */
 	dim->state = DIM_START_MEASURE;
-}
+  }
 
-/* My driver's interrupt handler */
-int my_driver_handle_interrupt(struct my_driver_entity *my_entity, ...)
-{
+  /* My driver's interrupt handler */
+  int my_driver_handle_interrupt(struct my_driver_entity *my_entity, ...)
+  {
 	...
 	/* A struct to hold current measured data */
 	struct dim_sample dim_sample;
@@
 	/* Call net DIM */
 	net_dim(&my_entity->dim, dim_sample);
 	...
-}
+  }
 
-/* My entity's initialization function (my_entity was already allocated) */
-int my_driver_init_my_entity(struct my_driver_entity *my_entity, ...)
-{
+  /* My entity's initialization function (my_entity was already allocated) */
+  int my_driver_init_my_entity(struct my_driver_entity *my_entity, ...)
+  {
 	...
 	/* Initiate struct work_struct with my driver's callback function */
 	INIT_WORK(&my_entity->dim.work, my_driver_do_dim_work);
 	...
-}
+  }
+
+Dynamic Interrupt Moderation (DIM) library API
+==============================================
+
+.. kernel-doc:: include/linux/dim.h
+   :internal:
+1
MAINTAINERS
@@
 S:	Maintained
 F:	include/linux/dim.h
 F:	lib/dim/
+F:	Documentation/networking/net_dim.rst
 
 DZ DECSTATION DZ11 SERIAL DRIVER
 M:	"Maciej W. Rozycki" <macro@linux-mips.org>
+3 -3
arch/arm/boot/dts/imx6qdl.dtsi
@@
 			compatible = "fsl,imx6q-fec";
 			reg = <0x02188000 0x4000>;
 			interrupt-names = "int0", "pps";
-			interrupts-extended =
-				<&intc 0 118 IRQ_TYPE_LEVEL_HIGH>,
-				<&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts = <0 118 IRQ_TYPE_LEVEL_HIGH>,
+				     <0 119 IRQ_TYPE_LEVEL_HIGH>;
 			clocks = <&clks IMX6QDL_CLK_ENET>,
 				 <&clks IMX6QDL_CLK_ENET>,
 				 <&clks IMX6QDL_CLK_ENET_REF>;
 			clock-names = "ipg", "ahb", "ptp";
+			gpr = <&gpr>;
 			status = "disabled";
 		};
 
-1
arch/arm/boot/dts/imx6qp.dtsi
@@
 };
 
 &fec {
-	/delete-property/interrupts-extended;
 	interrupts = <0 118 IRQ_TYPE_LEVEL_HIGH>,
 		     <0 119 IRQ_TYPE_LEVEL_HIGH>;
 };
+34 -18
arch/arm/net/bpf_jit_32.c
@@
 	rd = arm_bpf_get_reg64(dst, tmp, ctx);
 
 	/* Do LSR operation */
-	if (val < 32) {
+	if (val == 0) {
+		/* An immediate value of 0 encodes a shift amount of 32
+		 * for LSR. To shift by 0, don't do anything.
+		 */
+	} else if (val < 32) {
 		emit(ARM_MOV_SI(tmp2[1], rd[1], SRTYPE_LSR, val), ctx);
 		emit(ARM_ORR_SI(rd[1], tmp2[1], rd[0], SRTYPE_ASL, 32 - val), ctx);
 		emit(ARM_MOV_SI(rd[0], rd[0], SRTYPE_LSR, val), ctx);
@@
 	rd = arm_bpf_get_reg64(dst, tmp, ctx);
 
 	/* Do ARSH operation */
-	if (val < 32) {
+	if (val == 0) {
+		/* An immediate value of 0 encodes a shift amount of 32
+		 * for ASR. To shift by 0, don't do anything.
+		 */
+	} else if (val < 32) {
 		emit(ARM_MOV_SI(tmp2[1], rd[1], SRTYPE_LSR, val), ctx);
 		emit(ARM_ORR_SI(rd[1], tmp2[1], rd[0], SRTYPE_ASL, 32 - val), ctx);
 		emit(ARM_MOV_SI(rd[0], rd[0], SRTYPE_ASR, val), ctx);
@@
 	arm_bpf_put_reg32(dst_hi, rd[0], ctx);
 }
 
+static bool is_ldst_imm(s16 off, const u8 size)
+{
+	s16 off_max = 0;
+
+	switch (size) {
+	case BPF_B:
+	case BPF_W:
+		off_max = 0xfff;
+		break;
+	case BPF_H:
+		off_max = 0xff;
+		break;
+	case BPF_DW:
+		/* Need to make sure off+4 does not overflow. */
+		off_max = 0xfff - 4;
+		break;
+	}
+	return -off_max <= off && off <= off_max;
+}
+
 /* *(size *)(dst + off) = src */
 static inline void emit_str_r(const s8 dst, const s8 src[],
-			      s32 off, struct jit_ctx *ctx, const u8 sz){
+			      s16 off, struct jit_ctx *ctx, const u8 sz){
 	const s8 *tmp = bpf2a32[TMP_REG_1];
-	s32 off_max;
 	s8 rd;
 
 	rd = arm_bpf_get_reg32(dst, tmp[1], ctx);
 
-	if (sz == BPF_H)
-		off_max = 0xff;
-	else
-		off_max = 0xfff;
-
-	if (off < 0 || off > off_max) {
+	if (!is_ldst_imm(off, sz)) {
 		emit_a32_mov_i(tmp[0], off, ctx);
 		emit(ARM_ADD_R(tmp[0], tmp[0], rd), ctx);
 		rd = tmp[0];
@@
 
 /* dst = *(size*)(src + off) */
 static inline void emit_ldx_r(const s8 dst[], const s8 src,
-			      s32 off, struct jit_ctx *ctx, const u8 sz){
+			      s16 off, struct jit_ctx *ctx, const u8 sz){
 	const s8 *tmp = bpf2a32[TMP_REG_1];
 	const s8 *rd = is_stacked(dst_lo) ? tmp : dst;
 	s8 rm = src;
-	s32 off_max;
 
-	if (sz == BPF_H)
-		off_max = 0xff;
-	else
-		off_max = 0xfff;
-
-	if (off < 0 || off > off_max) {
+	if (!is_ldst_imm(off, sz)) {
 		emit_a32_mov_i(tmp[0], off, ctx);
 		emit(ARM_ADD_R(tmp[0], tmp[0], src), ctx);
 		rm = tmp[0];
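The is_ldst_imm() helper above centralizes the per-size immediate-offset limits of ARM load/store encodings (12-bit immediates for byte/word, 8-bit for halfword, and room for the off+4 access to a doubleword's high half). A host-side sketch of the same check, with an SZ_* enum standing in for the kernel's BPF_B/BPF_H/BPF_W/BPF_DW size codes, can be exercised directly:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the kernel's BPF_* size codes (illustrative only). */
enum sz { SZ_B, SZ_H, SZ_W, SZ_DW };

/* Same logic as the is_ldst_imm() added in the diff: reject offsets that
 * do not fit the instruction's immediate field, forcing the JIT to fall
 * back to computing the address in a temporary register. */
static bool is_ldst_imm(int16_t off, enum sz size)
{
	int16_t off_max = 0;

	switch (size) {
	case SZ_B:
	case SZ_W:
		off_max = 0xfff;	/* 12-bit immediate */
		break;
	case SZ_H:
		off_max = 0xff;		/* halfword forms only get 8 bits */
		break;
	case SZ_DW:
		/* make sure off + 4 (high word access) still fits */
		off_max = 0xfff - 4;
		break;
	}
	return -off_max <= off && off <= off_max;
}
```

Note that, unlike the old check, negative offsets within range are now accepted, which is the overflow the fix addresses.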
+1 -1
arch/riscv/Kconfig
@@
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_MMIOWB
 	select ARCH_HAS_DEBUG_VIRTUAL
-	select HAVE_EBPF_JIT
+	select HAVE_EBPF_JIT if MMU
 	select EDAC_SUPPORT
 	select ARCH_HAS_GIGANTIC_PAGE
 	select ARCH_HAS_SET_DIRECT_MAP
+32 -17
arch/riscv/net/bpf_jit_comp64.c
@@
 	return -(1L << 31) <= val && val < (1L << 31);
 }
 
+static bool in_auipc_jalr_range(s64 val)
+{
+	/*
+	 * auipc+jalr can reach any signed PC-relative offset in the range
+	 * [-2^31 - 2^11, 2^31 - 2^11).
+	 */
+	return (-(1L << 31) - (1L << 11)) <= val &&
+	       val < ((1L << 31) - (1L << 11));
+}
+
 static void emit_imm(u8 rd, s64 val, struct rv_jit_context *ctx)
 {
 	/* Note that the immediate from the add is sign-extended,
@@
 	*rd = RV_REG_T2;
 }
 
-static void emit_jump_and_link(u8 rd, s64 rvoff, bool force_jalr,
-			       struct rv_jit_context *ctx)
+static int emit_jump_and_link(u8 rd, s64 rvoff, bool force_jalr,
+			      struct rv_jit_context *ctx)
 {
 	s64 upper, lower;
 
 	if (rvoff && is_21b_int(rvoff) && !force_jalr) {
 		emit(rv_jal(rd, rvoff >> 1), ctx);
-		return;
+		return 0;
+	} else if (in_auipc_jalr_range(rvoff)) {
+		upper = (rvoff + (1 << 11)) >> 12;
+		lower = rvoff & 0xfff;
+		emit(rv_auipc(RV_REG_T1, upper), ctx);
+		emit(rv_jalr(rd, RV_REG_T1, lower), ctx);
+		return 0;
 	}
 
-	upper = (rvoff + (1 << 11)) >> 12;
-	lower = rvoff & 0xfff;
-	emit(rv_auipc(RV_REG_T1, upper), ctx);
-	emit(rv_jalr(rd, RV_REG_T1, lower), ctx);
+	pr_err("bpf-jit: target offset 0x%llx is out of range\n", rvoff);
+	return -ERANGE;
 }
 
 static bool is_signed_bpf_cond(u8 cond)
@@
 	s64 off = 0;
 	u64 ip;
 	u8 rd;
+	int ret;
 
 	if (addr && ctx->insns) {
 		ip = (u64)(long)(ctx->insns + ctx->ninsns);
 		off = addr - ip;
-		if (!is_32b_int(off)) {
-			pr_err("bpf-jit: target call addr %pK is out of range\n",
-			       (void *)addr);
-			return -ERANGE;
-		}
 	}
 
-	emit_jump_and_link(RV_REG_RA, off, !fixed, ctx);
+	ret = emit_jump_and_link(RV_REG_RA, off, !fixed, ctx);
+	if (ret)
+		return ret;
 	rd = bpf_to_rv_reg(BPF_REG_0, ctx);
 	emit(rv_addi(rd, RV_REG_A0, 0), ctx);
 	return 0;
@@
 {
 	bool is64 = BPF_CLASS(insn->code) == BPF_ALU64 ||
 		    BPF_CLASS(insn->code) == BPF_JMP;
-	int s, e, rvoff, i = insn - ctx->prog->insnsi;
+	int s, e, rvoff, ret, i = insn - ctx->prog->insnsi;
 	struct bpf_prog_aux *aux = ctx->prog->aux;
 	u8 rd = -1, rs = -1, code = insn->code;
 	s16 off = insn->off;
@@
 	/* JUMP off */
 	case BPF_JMP | BPF_JA:
 		rvoff = rv_offset(i, off, ctx);
-		emit_jump_and_link(RV_REG_ZERO, rvoff, false, ctx);
+		ret = emit_jump_and_link(RV_REG_ZERO, rvoff, false, ctx);
+		if (ret)
+			return ret;
 		break;
 
 	/* IF (dst COND src) JUMP off */
@@
 	case BPF_JMP | BPF_CALL:
 	{
 		bool fixed;
-		int ret;
 		u64 addr;
 
 		mark_call(ctx);
@@
 			break;
 
 		rvoff = epilogue_offset(ctx);
-		emit_jump_and_link(RV_REG_ZERO, rvoff, false, ctx);
+		ret = emit_jump_and_link(RV_REG_ZERO, rvoff, false, ctx);
+		if (ret)
+			return ret;
 		break;
 
 	/* dst = imm64 */
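The range in in_auipc_jalr_range() follows from how the pair composes: auipc contributes a sign-extended upper immediate (bits 31:12, rounded by the +2^11 term) and jalr a sign-extended 12-bit offset. A host-side sketch (not the kernel code, which depends on kernel headers) of the check and the upper/lower split used by the emitter:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* auipc+jalr reach: [-2^31 - 2^11, 2^31 - 2^11), as in the diff above. */
static bool in_auipc_jalr_range(int64_t val)
{
	return (-(INT64_C(1) << 31) - (1 << 11)) <= val &&
	       val < ((INT64_C(1) << 31) - (1 << 11));
}

/* Recompose the offset from the split the JIT emits: auipc takes
 * (rvoff + 2^11) >> 12 as its upper immediate, jalr the low 12 bits,
 * which the hardware sign-extends. (Assumes arithmetic right shift of
 * negative values, as the kernel does.) */
static int64_t auipc_jalr_reconstruct(int64_t rvoff)
{
	int64_t upper = (rvoff + (1 << 11)) >> 12;	/* auipc immediate */
	int64_t lower = rvoff & 0xfff;			/* jalr immediate */

	if (lower >= 0x800)				/* sign-extend 12 bits */
		lower -= 0x1000;
	return upper * 4096 + lower;
}
```

The +2^11 rounding in the upper part exactly cancels the sign extension of the lower 12 bits, so every in-range offset round-trips.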
+2 -2
drivers/isdn/hardware/mISDN/mISDNisar.c
@@
 	}
 }
 
-const char *dmril[] = {"NO SPEED", "1200/75", "NODEF2", "75/1200", "NODEF4",
+static const char *dmril[] = {"NO SPEED", "1200/75", "NODEF2", "75/1200", "NODEF4",
 		       "300", "600", "1200", "2400", "4800", "7200",
 		       "9600nt", "9600t", "12000", "14400", "WRONG"};
-const char *dmrim[] = {"NO MOD", "NO DEF", "V32/V32b", "V22", "V21",
+static const char *dmrim[] = {"NO MOD", "NO DEF", "V32/V32b", "V22", "V21",
 		       "Bell103", "V23", "Bell202", "V17", "V29", "V27ter"};
 
 static void
+12 -91
drivers/net/dsa/mt7530.c
@@
 };
 
 static int
-mt7623_trgmii_write(struct mt7530_priv *priv, u32 reg, u32 val)
-{
-	int ret;
-
-	ret = regmap_write(priv->ethernet, TRGMII_BASE(reg), val);
-	if (ret < 0)
-		dev_err(priv->dev,
-			"failed to priv write register\n");
-	return ret;
-}
-
-static u32
-mt7623_trgmii_read(struct mt7530_priv *priv, u32 reg)
-{
-	int ret;
-	u32 val;
-
-	ret = regmap_read(priv->ethernet, TRGMII_BASE(reg), &val);
-	if (ret < 0) {
-		dev_err(priv->dev,
-			"failed to priv read register\n");
-		return ret;
-	}
-
-	return val;
-}
-
-static void
-mt7623_trgmii_rmw(struct mt7530_priv *priv, u32 reg,
-		  u32 mask, u32 set)
-{
-	u32 val;
-
-	val = mt7623_trgmii_read(priv, reg);
-	val &= ~mask;
-	val |= set;
-	mt7623_trgmii_write(priv, reg, val);
-}
-
-static void
-mt7623_trgmii_set(struct mt7530_priv *priv, u32 reg, u32 val)
-{
-	mt7623_trgmii_rmw(priv, reg, 0, val);
-}
-
-static void
-mt7623_trgmii_clear(struct mt7530_priv *priv, u32 reg, u32 val)
-{
-	mt7623_trgmii_rmw(priv, reg, val, 0);
-}
-
-static int
 core_read_mmd_indirect(struct mt7530_priv *priv, int prtad, int devad)
 {
 	struct mii_bus *bus = priv->bus;
@@
 		for (i = 0 ; i < NUM_TRGMII_CTRL; i++)
 			mt7530_rmw(priv, MT7530_TRGMII_RD(i),
 				   RD_TAP_MASK, RD_TAP(16));
-	else
-		if (priv->id != ID_MT7621)
-			mt7623_trgmii_set(priv, GSW_INTF_MODE,
-					  INTF_MODE_TRGMII);
-
-	return 0;
-}
-
-static int
-mt7623_pad_clk_setup(struct dsa_switch *ds)
-{
-	struct mt7530_priv *priv = ds->priv;
-	int i;
-
-	for (i = 0 ; i < NUM_TRGMII_CTRL; i++)
-		mt7623_trgmii_write(priv, GSW_TRGMII_TD_ODT(i),
-				    TD_DM_DRVP(8) | TD_DM_DRVN(8));
-
-	mt7623_trgmii_set(priv, GSW_TRGMII_RCK_CTRL, RX_RST | RXC_DQSISEL);
-	mt7623_trgmii_clear(priv, GSW_TRGMII_RCK_CTRL, RX_RST);
-
 	return 0;
 }
 
@@
 	 */
 	mt7530_rmw(priv, MT7530_PCR_P(port), PCR_PORT_VLAN_MASK,
 		   MT7530_PORT_MATRIX_MODE);
-	mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK,
-		   VLAN_ATTR(MT7530_VLAN_TRANSPARENT));
+	mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK | PVC_EG_TAG_MASK,
+		   VLAN_ATTR(MT7530_VLAN_TRANSPARENT) |
+		   PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT));
 
 	for (i = 0; i < MT7530_NUM_PORTS; i++) {
 		if (dsa_is_user_port(ds, i) &&
@@
 	if (all_user_ports_removed) {
 		mt7530_write(priv, MT7530_PCR_P(MT7530_CPU_PORT),
 			     PCR_MATRIX(dsa_user_ports(priv->ds)));
-		mt7530_write(priv, MT7530_PVC_P(MT7530_CPU_PORT),
-			     PORT_SPEC_TAG);
+		mt7530_write(priv, MT7530_PVC_P(MT7530_CPU_PORT), PORT_SPEC_TAG
+			     | PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT));
 	}
 }
 
@@
 	/* Set the port as a user port which is to be able to recognize VID
 	 * from incoming packets before fetching entry within the VLAN table.
 	 */
-	mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK,
-		   VLAN_ATTR(MT7530_VLAN_USER));
+	mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK | PVC_EG_TAG_MASK,
+		   VLAN_ATTR(MT7530_VLAN_USER) |
+		   PVC_EG_TAG(MT7530_VLAN_EG_DISABLED));
 }
 
 static void
@@
 	dn = dsa_to_port(ds, MT7530_CPU_PORT)->master->dev.of_node->parent;
 
 	if (priv->id == ID_MT7530) {
-		priv->ethernet = syscon_node_to_regmap(dn);
-		if (IS_ERR(priv->ethernet))
-			return PTR_ERR(priv->ethernet);
-
 		regulator_set_voltage(priv->core_pwr, 1000000, 1000000);
 		ret = regulator_enable(priv->core_pwr);
 		if (ret < 0) {
@@
 			mt7530_cpu_port_enable(priv, i);
 		else
 			mt7530_port_disable(ds, i);
+
+		/* Enable consistent egress tag */
+		mt7530_rmw(priv, MT7530_PVC_P(i), PVC_EG_TAG_MASK,
+			   PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT));
 	}
 
 	/* Setup port 5 */
@@
 
 		/* Setup TX circuit incluing relevant PAD and driving */
 		mt7530_pad_clk_setup(ds, state->interface);
-
-		if (priv->id == ID_MT7530) {
-			/* Setup RX circuit, relevant PAD and driving on the
-			 * host which must be placed after the setup on the
-			 * device side is all finished.
-			 */
-			mt7623_pad_clk_setup(ds);
-		}
 
 		priv->p6_interface = state->interface;
 		break;
+7 -10
drivers/net/dsa/mt7530.h
@@
 /* Register for port vlan control */
 #define MT7530_PVC_P(x)			(0x2010 + ((x) * 0x100))
 #define  PORT_SPEC_TAG			BIT(5)
+#define  PVC_EG_TAG(x)			(((x) & 0x7) << 8)
+#define  PVC_EG_TAG_MASK		PVC_EG_TAG(7)
 #define  VLAN_ATTR(x)			(((x) & 0x3) << 6)
 #define  VLAN_ATTR_MASK			VLAN_ATTR(3)
+
+enum mt7530_vlan_port_eg_tag {
+	MT7530_VLAN_EG_DISABLED = 0,
+	MT7530_VLAN_EG_CONSISTENT = 1,
+};
 
 enum mt7530_vlan_port_attr {
 	MT7530_VLAN_USER = 0,
@@
 
 /* Registers for TRGMII on the both side */
 #define MT7530_TRGMII_RCK_CTRL		0x7a00
-#define GSW_TRGMII_RCK_CTRL		0x300
 #define  RX_RST				BIT(31)
 #define  RXC_DQSISEL			BIT(30)
 #define  DQSI1_TAP_MASK			(0x7f << 8)
@@
 #define  DQSI0_TAP(x)			((x) & 0x7f)
 
 #define MT7530_TRGMII_RCK_RTT		0x7a04
-#define GSW_TRGMII_RCK_RTT		0x304
 #define  DQS1_GATE			BIT(31)
 #define  DQS0_GATE			BIT(30)
 
 #define MT7530_TRGMII_RD(x)		(0x7a10 + (x) * 8)
-#define GSW_TRGMII_RD(x)		(0x310 + (x) * 8)
 #define  BSLIP_EN			BIT(31)
 #define  EDGE_CHK			BIT(30)
 #define  RD_TAP_MASK			0x7f
 #define  RD_TAP(x)			((x) & 0x7f)
 
-#define GSW_TRGMII_TXCTRL		0x340
 #define MT7530_TRGMII_TXCTRL		0x7a40
 #define  TRAIN_TXEN			BIT(31)
 #define  TXC_INV			BIT(30)
 #define  TX_RST				BIT(28)
 
 #define MT7530_TRGMII_TD_ODT(i)		(0x7a54 + 8 * (i))
-#define GSW_TRGMII_TD_ODT(i)		(0x354 + 8 * (i))
 #define  TD_DM_DRVP(x)			((x) & 0xf)
 #define  TD_DM_DRVN(x)			(((x) & 0xf) << 4)
-
-#define GSW_INTF_MODE			0x390
-#define  INTF_MODE_TRGMII		BIT(1)
 
 #define MT7530_TRGMII_TCK_CTRL		0x7a78
 #define  TCK_TAP(x)			(((x) & 0xf) << 8)
@@
  * @ds:			The pointer to the dsa core structure
  * @bus:		The bus used for the device and built-in PHY
  * @rstc:		The pointer to reset control used by MCM
- * @ethernet:		The regmap used for access TRGMII-based registers
  * @core_pwr:		The power supplied into the core
  * @io_pwr:		The power supplied into the I/O
  * @reset:		The descriptor for GPIO line tied to its reset pin
@@
 	struct dsa_switch	*ds;
 	struct mii_bus		*bus;
 	struct reset_control	*rstc;
-	struct regmap		*ethernet;
 	struct regulator	*core_pwr;
 	struct regulator	*io_pwr;
 	struct gpio_desc	*reset;
+3 -2
drivers/net/dsa/mv88e6xxx/chip.c
@@
 	ops = chip->info->ops;
 
 	mv88e6xxx_reg_lock(chip);
-	if (!mv88e6xxx_port_ppu_updates(chip, port) && ops->port_set_link)
+	if ((!mv88e6xxx_port_ppu_updates(chip, port) ||
+	     mode == MLO_AN_FIXED) && ops->port_set_link)
 		err = ops->port_set_link(chip, port, LINK_FORCED_DOWN);
 	mv88e6xxx_reg_unlock(chip);
 
@@
 	ops = chip->info->ops;
 
 	mv88e6xxx_reg_lock(chip);
-	if (!mv88e6xxx_port_ppu_updates(chip, port)) {
+	if (!mv88e6xxx_port_ppu_updates(chip, port) || mode == MLO_AN_FIXED) {
 		/* FIXME: for an automedia port, should we force the link
 		 * down here - what if the link comes up due to "other" media
 		 * while we're bringing the port up, how is the exclusivity
+1 -4
drivers/net/dsa/ocelot/felix.c
@@
 			 const unsigned char *addr, u16 vid)
 {
 	struct ocelot *ocelot = ds->priv;
-	bool vlan_aware;
 
-	vlan_aware = dsa_port_is_vlan_filtering(dsa_to_port(ds, port));
-
-	return ocelot_fdb_add(ocelot, port, addr, vid, vlan_aware);
+	return ocelot_fdb_add(ocelot, port, addr, vid);
 }
 
 static int felix_fdb_del(struct dsa_switch *ds, int port,
+1 -1
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
@@
 			xgbe_disable_rx_tx_ints(pdata);
 
 			/* Turn on polling */
-			__napi_schedule_irqoff(&pdata->napi);
+			__napi_schedule(&pdata->napi);
 		}
 	} else {
 		/* Don't clear Rx/Tx status if doing per channel DMA
+1 -1
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
@@
 		 FW_PARAMS_PARAM_Z_V(FW_PARAMS_PARAM_DEV_PHYFW_VERSION));
 	ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 1,
 			      &param, &val);
-	if (ret < 0)
+	if (ret)
 		return ret;
 	*phy_fw_ver = val;
 	return 0;
+1 -1
drivers/net/ethernet/dec/tulip/tulip_core.c
@@
 #endif
 };
 
-const struct pci_device_id early_486_chipsets[] = {
+static const struct pci_device_id early_486_chipsets[] = {
 	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82424) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_496) },
 	{ },
+7
drivers/net/ethernet/freescale/fec.h
@@
 	struct  sk_buff *rx_skbuff[RX_RING_SIZE];
 };
 
+struct fec_stop_mode_gpr {
+	struct regmap *gpr;
+	u8 reg;
+	u8 bit;
+};
+
 /* The FEC buffer descriptors track the ring buffers.  The rx_bd_base and
  * tx_bd_base always point to the base of the buffer descriptors.  The
  * cur_rx and cur_tx point to the currently available buffer.
@@
 	int	hwts_tx_en;
 	struct	delayed_work time_keep;
 	struct	regulator *reg_phy;
+	struct	fec_stop_mode_gpr stop_gpr;
 
 	unsigned int tx_align;
 	unsigned int rx_align;
+120 -29
drivers/net/ethernet/freescale/fec_main.c
@@
 #include <linux/if_vlan.h>
 #include <linux/pinctrl/consumer.h>
 #include <linux/prefetch.h>
+#include <linux/mfd/syscon.h>
+#include <linux/regmap.h>
 #include <soc/imx/cpuidle.h>
 
 #include <asm/cacheflush.h>
@@
 #define FEC_ENET_OPD_V	0xFFF0
 #define FEC_MDIO_PM_TIMEOUT  100 /* ms */
 
+struct fec_devinfo {
+	u32 quirks;
+	u8 stop_gpr_reg;
+	u8 stop_gpr_bit;
+};
+
+static const struct fec_devinfo fec_imx25_info = {
+	.quirks = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR |
+		  FEC_QUIRK_HAS_FRREG,
+};
+
+static const struct fec_devinfo fec_imx27_info = {
+	.quirks = FEC_QUIRK_MIB_CLEAR | FEC_QUIRK_HAS_FRREG,
+};
+
+static const struct fec_devinfo fec_imx28_info = {
+	.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME |
+		  FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC |
+		  FEC_QUIRK_HAS_FRREG,
+};
+
+static const struct fec_devinfo fec_imx6q_info = {
+	.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
+		  FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
+		  FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR006358 |
+		  FEC_QUIRK_HAS_RACC,
+	.stop_gpr_reg = 0x34,
+	.stop_gpr_bit = 27,
+};
+
+static const struct fec_devinfo fec_mvf600_info = {
+	.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_RACC,
+};
+
+static const struct fec_devinfo fec_imx6x_info = {
+	.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
+		  FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
+		  FEC_QUIRK_HAS_VLAN | FEC_QUIRK_HAS_AVB |
+		  FEC_QUIRK_ERR007885 | FEC_QUIRK_BUG_CAPTURE |
+		  FEC_QUIRK_HAS_RACC | FEC_QUIRK_HAS_COALESCE,
+};
+
+static const struct fec_devinfo fec_imx6ul_info = {
+	.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
+		  FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
+		  FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR007885 |
+		  FEC_QUIRK_BUG_CAPTURE | FEC_QUIRK_HAS_RACC |
+		  FEC_QUIRK_HAS_COALESCE,
+};
+
 static struct platform_device_id fec_devtype[] = {
 	{
 		/* keep it for coldfire */
@@
 		.driver_data = 0,
 	}, {
 		.name = "imx25-fec",
-		.driver_data = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR |
-			       FEC_QUIRK_HAS_FRREG,
+		.driver_data = (kernel_ulong_t)&fec_imx25_info,
 	}, {
 		.name = "imx27-fec",
-		.driver_data = FEC_QUIRK_MIB_CLEAR | FEC_QUIRK_HAS_FRREG,
+		.driver_data = (kernel_ulong_t)&fec_imx27_info,
 	}, {
 		.name = "imx28-fec",
-		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME |
-			       FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC |
-			       FEC_QUIRK_HAS_FRREG,
+		.driver_data = (kernel_ulong_t)&fec_imx28_info,
 	}, {
 		.name = "imx6q-fec",
-		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
-			       FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
-			       FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR006358 |
-			       FEC_QUIRK_HAS_RACC,
+		.driver_data = (kernel_ulong_t)&fec_imx6q_info,
 	}, {
 		.name = "mvf600-fec",
-		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_RACC,
+		.driver_data = (kernel_ulong_t)&fec_mvf600_info,
 	}, {
 		.name = "imx6sx-fec",
-		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
-			       FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
-			       FEC_QUIRK_HAS_VLAN | FEC_QUIRK_HAS_AVB |
-			       FEC_QUIRK_ERR007885 | FEC_QUIRK_BUG_CAPTURE |
-			       FEC_QUIRK_HAS_RACC | FEC_QUIRK_HAS_COALESCE,
+		.driver_data = (kernel_ulong_t)&fec_imx6x_info,
 	}, {
 		.name = "imx6ul-fec",
-		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
-			       FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
-			       FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR007885 |
-			       FEC_QUIRK_BUG_CAPTURE | FEC_QUIRK_HAS_RACC |
-			       FEC_QUIRK_HAS_COALESCE,
+		.driver_data = (kernel_ulong_t)&fec_imx6ul_info,
 	}, {
 		/* sentinel */
 	}
@@
 
 }
 
+static void fec_enet_stop_mode(struct fec_enet_private *fep, bool enabled)
+{
+	struct fec_platform_data *pdata = fep->pdev->dev.platform_data;
+	struct fec_stop_mode_gpr *stop_gpr = &fep->stop_gpr;
+
+	if (stop_gpr->gpr) {
+		if (enabled)
+			regmap_update_bits(stop_gpr->gpr, stop_gpr->reg,
+					   BIT(stop_gpr->bit),
+					   BIT(stop_gpr->bit));
+		else
+			regmap_update_bits(stop_gpr->gpr, stop_gpr->reg,
+					   BIT(stop_gpr->bit), 0);
+	} else if (pdata && pdata->sleep_mode_enable) {
+		pdata->sleep_mode_enable(enabled);
+	}
+}
+
 static void
 fec_stop(struct net_device *ndev)
 {
 	struct fec_enet_private *fep = netdev_priv(ndev);
-	struct fec_platform_data *pdata = fep->pdev->dev.platform_data;
 	u32 rmii_mode = readl(fep->hwp + FEC_R_CNTRL) & (1 << 8);
 	u32 val;
 
@@
 		val = readl(fep->hwp + FEC_ECNTRL);
 		val |= (FEC_ECR_MAGICEN | FEC_ECR_SLEEP);
 		writel(val, fep->hwp + FEC_ECNTRL);
-
-		if (pdata && pdata->sleep_mode_enable)
-			pdata->sleep_mode_enable(true);
+		fec_enet_stop_mode(fep, true);
 	}
 	writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED);
 
@@
 	return irq_cnt;
 }
 
+static int fec_enet_init_stop_mode(struct fec_enet_private *fep,
+				   struct fec_devinfo *dev_info,
+				   struct device_node *np)
+{
+	struct device_node *gpr_np;
+	int ret = 0;
+
+	if (!dev_info)
+		return 0;
+
+	gpr_np = of_parse_phandle(np, "gpr", 0);
+	if (!gpr_np)
+		return 0;
+
+	fep->stop_gpr.gpr = syscon_node_to_regmap(gpr_np);
+	if (IS_ERR(fep->stop_gpr.gpr)) {
+		dev_err(&fep->pdev->dev, "could not find gpr regmap\n");
+		ret = PTR_ERR(fep->stop_gpr.gpr);
+		fep->stop_gpr.gpr = NULL;
+		goto out;
+	}
+
+	fep->stop_gpr.reg = dev_info->stop_gpr_reg;
+	fep->stop_gpr.bit = dev_info->stop_gpr_bit;
+
+out:
+	of_node_put(gpr_np);
+
+	return ret;
+}
+
 static int
 fec_probe(struct platform_device *pdev)
 {
@@
 	int num_rx_qs;
 	char irq_name[8];
 	int irq_cnt;
+	struct fec_devinfo *dev_info;
 
 	fec_enet_get_queue_num(pdev, &num_tx_qs, &num_rx_qs);
 
@@
 	of_id = of_match_device(fec_dt_ids, &pdev->dev);
 	if (of_id)
 		pdev->id_entry = of_id->data;
-	fep->quirks = pdev->id_entry->driver_data;
+	dev_info = (struct fec_devinfo *)pdev->id_entry->driver_data;
+	if (dev_info)
+		fep->quirks = dev_info->quirks;
 
 	fep->netdev = ndev;
 	fep->num_rx_queues = num_rx_qs;
@@
 
 	if (of_get_property(np, "fsl,magic-packet", NULL))
 		fep->wol_flag |= FEC_WOL_HAS_MAGIC_PACKET;
+
+	ret = fec_enet_init_stop_mode(fep, dev_info, np);
+	if (ret)
+		goto failed_stop_mode;
 
 	phy_node = of_parse_phandle(np, "phy-handle", 0);
 	if (!phy_node && of_phy_is_fixed_link(np)) {
@@
 	if (of_phy_is_fixed_link(np))
 		of_phy_deregister_fixed_link(np);
 	of_node_put(phy_node);
+failed_stop_mode:
 failed_phy:
 	dev_id--;
 failed_ioremap:
@@
 {
 	struct net_device *ndev = dev_get_drvdata(dev);
 	struct fec_enet_private *fep = netdev_priv(ndev);
-	struct fec_platform_data *pdata = fep->pdev->dev.platform_data;
 	int ret;
 	int val;
 
@@
 		goto failed_clk;
 	}
 	if (fep->wol_flag & FEC_WOL_FLAG_ENABLE) {
-		if (pdata && pdata->sleep_mode_enable)
-			pdata->sleep_mode_enable(false);
+		fec_enet_stop_mode(fep, false);
+
 		val = readl(fep->hwp + FEC_ECNTRL);
 		val &= ~(FEC_ECR_MAGICEN | FEC_ECR_SLEEP);
 		writel(val, fep->hwp +
FEC_ECNTRL);
+1 -1
drivers/net/ethernet/marvell/mvneta.c
··· 5383 5383 { 5384 5384 int ret; 5385 5385 5386 - ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "net/mvmeta:online", 5386 + ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "net/mvneta:online", 5387 5387 mvneta_cpu_online, 5388 5388 mvneta_cpu_down_prepare); 5389 5389 if (ret < 0)
+23 -1
drivers/net/ethernet/mediatek/mtk_eth_soc.c
··· 65 65 return __raw_readl(eth->base + reg); 66 66 } 67 67 68 + u32 mtk_m32(struct mtk_eth *eth, u32 mask, u32 set, unsigned reg) 69 + { 70 + u32 val; 71 + 72 + val = mtk_r32(eth, reg); 73 + val &= ~mask; 74 + val |= set; 75 + mtk_w32(eth, val, reg); 76 + return reg; 77 + } 78 + 68 79 static int mtk_mdio_busy_wait(struct mtk_eth *eth) 69 80 { 70 81 unsigned long t_start = jiffies; ··· 204 193 struct mtk_mac *mac = container_of(config, struct mtk_mac, 205 194 phylink_config); 206 195 struct mtk_eth *eth = mac->hw; 207 - u32 mcr_cur, mcr_new, sid; 196 + u32 mcr_cur, mcr_new, sid, i; 208 197 int val, ge_mode, err; 209 198 210 199 /* MT76x8 has no hardware settings between for the MAC */ ··· 266 255 PHY_INTERFACE_MODE_TRGMII) 267 256 mtk_gmac0_rgmii_adjust(mac->hw, 268 257 state->speed); 258 + 259 + /* mt7623_pad_clk_setup */ 260 + for (i = 0 ; i < NUM_TRGMII_CTRL; i++) 261 + mtk_w32(mac->hw, 262 + TD_DM_DRVP(8) | TD_DM_DRVN(8), 263 + TRGMII_TD_ODT(i)); 264 + 265 + /* Assert/release MT7623 RXC reset */ 266 + mtk_m32(mac->hw, 0, RXC_RST | RXC_DQSISEL, 267 + TRGMII_RCK_CTRL); 268 + mtk_m32(mac->hw, RXC_RST, 0, TRGMII_RCK_CTRL); 269 269 } 270 270 } 271 271
+8
drivers/net/ethernet/mediatek/mtk_eth_soc.h
··· 352 352 #define DQSI0(x) ((x << 0) & GENMASK(6, 0)) 353 353 #define DQSI1(x) ((x << 8) & GENMASK(14, 8)) 354 354 #define RXCTL_DMWTLAT(x) ((x << 16) & GENMASK(18, 16)) 355 + #define RXC_RST BIT(31) 355 356 #define RXC_DQSISEL BIT(30) 356 357 #define RCK_CTRL_RGMII_1000 (RXC_DQSISEL | RXCTL_DMWTLAT(2) | DQSI1(16)) 357 358 #define RCK_CTRL_RGMII_10_100 RXCTL_DMWTLAT(2) 359 + 360 + #define NUM_TRGMII_CTRL 5 358 361 359 362 /* TRGMII RXC control register */ 360 363 #define TRGMII_TCK_CTRL 0x10340 ··· 365 362 #define TXC_INV BIT(30) 366 363 #define TCK_CTRL_RGMII_1000 TXCTL_DMWTLAT(2) 367 364 #define TCK_CTRL_RGMII_10_100 (TXC_INV | TXCTL_DMWTLAT(2)) 365 + 366 + /* TRGMII TX Drive Strength */ 367 + #define TRGMII_TD_ODT(i) (0x10354 + 8 * (i)) 368 + #define TD_DM_DRVP(x) ((x) & 0xf) 369 + #define TD_DM_DRVN(x) (((x) & 0xf) << 4) 368 370 369 371 /* TRGMII Interface mode register */ 370 372 #define INTF_MODE 0x10390
+4 -1
drivers/net/ethernet/mellanox/mlx5/core/devlink.c
··· 23 23 if (err) 24 24 return err; 25 25 26 - return mlx5_firmware_flash(dev, fw, extack); 26 + err = mlx5_firmware_flash(dev, fw, extack); 27 + release_firmware(fw); 28 + 29 + return err; 27 30 } 28 31 29 32 static u8 mlx5_fw_ver_major(u32 version)
+7 -12
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
··· 67 67 struct nf_flowtable *nf_ft; 68 68 struct mlx5_tc_ct_priv *ct_priv; 69 69 struct rhashtable ct_entries_ht; 70 - struct list_head ct_entries_list; 71 70 }; 72 71 73 72 struct mlx5_ct_entry { 74 - struct list_head list; 75 73 u16 zone; 76 74 struct rhash_head node; 77 75 struct flow_rule *flow_rule; ··· 615 617 if (err) 616 618 goto err_insert; 617 619 618 - list_add(&entry->list, &ft->ct_entries_list); 619 - 620 620 return 0; 621 621 622 622 err_insert: ··· 642 646 WARN_ON(rhashtable_remove_fast(&ft->ct_entries_ht, 643 647 &entry->node, 644 648 cts_ht_params)); 645 - list_del(&entry->list); 646 649 kfree(entry); 647 650 648 651 return 0; ··· 813 818 ft->zone = zone; 814 819 ft->nf_ft = nf_ft; 815 820 ft->ct_priv = ct_priv; 816 - INIT_LIST_HEAD(&ft->ct_entries_list); 817 821 refcount_set(&ft->refcount, 1); 818 822 819 823 err = rhashtable_init(&ft->ct_entries_ht, &cts_ht_params); ··· 841 847 } 842 848 843 849 static void 844 - mlx5_tc_ct_flush_ft(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_ft *ft) 850 + mlx5_tc_ct_flush_ft_entry(void *ptr, void *arg) 845 851 { 846 - struct mlx5_ct_entry *entry; 852 + struct mlx5_tc_ct_priv *ct_priv = arg; 853 + struct mlx5_ct_entry *entry = ptr; 847 854 848 - list_for_each_entry(entry, &ft->ct_entries_list, list) 849 - mlx5_tc_ct_entry_del_rules(ft->ct_priv, entry); 855 + mlx5_tc_ct_entry_del_rules(ct_priv, entry); 850 856 } 851 857 852 858 static void ··· 857 863 858 864 nf_flow_table_offload_del_cb(ft->nf_ft, 859 865 mlx5_tc_ct_block_flow_offload, ft); 860 - mlx5_tc_ct_flush_ft(ct_priv, ft); 861 866 rhashtable_remove_fast(&ct_priv->zone_ht, &ft->node, zone_params); 862 - rhashtable_destroy(&ft->ct_entries_ht); 867 + rhashtable_free_and_destroy(&ft->ct_entries_ht, 868 + mlx5_tc_ct_flush_ft_entry, 869 + ct_priv); 863 870 kfree(ft); 864 871 } 865 872
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 5526 5526 #ifdef CONFIG_MLX5_CORE_EN_DCB 5527 5527 mlx5e_dcbnl_delete_app(priv); 5528 5528 #endif 5529 - mlx5e_devlink_port_unregister(priv); 5530 5529 unregister_netdev(priv->netdev); 5530 + mlx5e_devlink_port_unregister(priv); 5531 5531 mlx5e_detach(mdev, vpriv); 5532 5532 mlx5e_destroy_netdev(priv); 5533 5533 }
+5 -4
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 2050 2050 struct mlx5_eswitch_rep *rep = rpriv->rep; 2051 2051 struct netdev_phys_item_id ppid = {}; 2052 2052 unsigned int dl_port_index = 0; 2053 + u16 pfnum; 2053 2054 2054 2055 if (!is_devlink_port_supported(dev, rpriv)) 2055 2056 return 0; 2056 2057 2057 2058 mlx5e_rep_get_port_parent_id(rpriv->netdev, &ppid); 2059 + pfnum = PCI_FUNC(dev->pdev->devfn); 2058 2060 2059 2061 if (rep->vport == MLX5_VPORT_UPLINK) { 2060 2062 devlink_port_attrs_set(&rpriv->dl_port, 2061 2063 DEVLINK_PORT_FLAVOUR_PHYSICAL, 2062 - PCI_FUNC(dev->pdev->devfn), false, 0, 2064 + pfnum, false, 0, 2063 2065 &ppid.id[0], ppid.id_len); 2064 2066 dl_port_index = vport_to_devlink_port_index(dev, rep->vport); 2065 2067 } else if (rep->vport == MLX5_VPORT_PF) { 2066 2068 devlink_port_attrs_pci_pf_set(&rpriv->dl_port, 2067 2069 &ppid.id[0], ppid.id_len, 2068 - dev->pdev->devfn); 2070 + pfnum); 2069 2071 dl_port_index = rep->vport; 2070 2072 } else if (mlx5_eswitch_is_vf_vport(dev->priv.eswitch, 2071 2073 rpriv->rep->vport)) { 2072 2074 devlink_port_attrs_pci_vf_set(&rpriv->dl_port, 2073 2075 &ppid.id[0], ppid.id_len, 2074 - dev->pdev->devfn, 2075 - rep->vport - 1); 2076 + pfnum, rep->vport - 1); 2076 2077 dl_port_index = vport_to_devlink_port_index(dev, rep->vport); 2077 2078 } 2078 2079
+5 -3
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 1343 1343 if (err) 1344 1344 return err; 1345 1345 1346 - if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) { 1346 + if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR && 1347 + !(attr->ct_attr.ct_action & TCA_CT_ACT_CLEAR)) { 1347 1348 err = mlx5e_attach_mod_hdr(priv, flow, parse_attr); 1348 1349 dealloc_mod_hdr_actions(&parse_attr->mod_hdr_acts); 1349 1350 if (err) ··· 3559 3558 struct mlx5_esw_flow_attr *attr, 3560 3559 u32 *action) 3561 3560 { 3562 - int nest_level = attr->parse_attr->filter_dev->lower_level; 3563 3561 struct flow_action_entry vlan_act = { 3564 3562 .id = FLOW_ACTION_VLAN_POP, 3565 3563 }; 3566 - int err = 0; 3564 + int nest_level, err = 0; 3567 3565 3566 + nest_level = attr->parse_attr->filter_dev->lower_level - 3567 + priv->netdev->lower_level; 3568 3568 while (nest_level--) { 3569 3569 err = parse_tc_vlan_action(priv, &vlan_act, attr, action); 3570 3570 if (err)
-1
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
··· 403 403 MLX5_ESW_ATTR_FLAG_VLAN_HANDLED = BIT(0), 404 404 MLX5_ESW_ATTR_FLAG_SLOW_PATH = BIT(1), 405 405 MLX5_ESW_ATTR_FLAG_NO_IN_PORT = BIT(2), 406 - MLX5_ESW_ATTR_FLAG_HAIRPIN = BIT(3), 407 406 }; 408 407 409 408 struct mlx5_esw_flow_attr {
+3 -9
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 300 300 bool split = !!(attr->split_count); 301 301 struct mlx5_flow_handle *rule; 302 302 struct mlx5_flow_table *fdb; 303 - bool hairpin = false; 304 303 int j, i = 0; 305 304 306 305 if (esw->mode != MLX5_ESWITCH_OFFLOADS) ··· 397 398 goto err_esw_get; 398 399 } 399 400 400 - if (mlx5_eswitch_termtbl_required(esw, attr, &flow_act, spec)) { 401 + if (mlx5_eswitch_termtbl_required(esw, attr, &flow_act, spec)) 401 402 rule = mlx5_eswitch_add_termtbl_rule(esw, fdb, spec, attr, 402 403 &flow_act, dest, i); 403 - hairpin = true; 404 - } else { 404 + else 405 405 rule = mlx5_add_flow_rules(fdb, spec, &flow_act, dest, i); 406 - } 407 406 if (IS_ERR(rule)) 408 407 goto err_add_rule; 409 408 else 410 409 atomic64_inc(&esw->offloads.num_flows); 411 - 412 - if (hairpin) 413 - attr->flags |= MLX5_ESW_ATTR_FLAG_HAIRPIN; 414 410 415 411 return rule; 416 412 ··· 495 501 496 502 mlx5_del_flow_rules(rule); 497 503 498 - if (attr->flags & MLX5_ESW_ATTR_FLAG_HAIRPIN) { 504 + if (!(attr->flags & MLX5_ESW_ATTR_FLAG_SLOW_PATH)) { 499 505 /* unref the term table */ 500 506 for (i = 0; i < MLX5_MAX_FLOW_FWD_VPORTS; i++) { 501 507 if (attr->dests[i].termtbl)
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/health.c
··· 243 243 if (mlx5_get_nic_state(dev) == MLX5_NIC_IFC_DISABLED) 244 244 break; 245 245 246 - cond_resched(); 246 + msleep(20); 247 247 } while (!time_after(jiffies, end)); 248 248 249 249 if (mlx5_get_nic_state(dev) != MLX5_NIC_IFC_DISABLED) {
+56 -54
drivers/net/ethernet/mscc/ocelot.c
··· 183 183 ocelot_write(ocelot, val, ANA_VLANMASK); 184 184 } 185 185 186 - void ocelot_port_vlan_filtering(struct ocelot *ocelot, int port, 187 - bool vlan_aware) 188 - { 189 - struct ocelot_port *ocelot_port = ocelot->ports[port]; 190 - u32 val; 191 - 192 - if (vlan_aware) 193 - val = ANA_PORT_VLAN_CFG_VLAN_AWARE_ENA | 194 - ANA_PORT_VLAN_CFG_VLAN_POP_CNT(1); 195 - else 196 - val = 0; 197 - ocelot_rmw_gix(ocelot, val, 198 - ANA_PORT_VLAN_CFG_VLAN_AWARE_ENA | 199 - ANA_PORT_VLAN_CFG_VLAN_POP_CNT_M, 200 - ANA_PORT_VLAN_CFG, port); 201 - 202 - if (vlan_aware && !ocelot_port->vid) 203 - /* If port is vlan-aware and tagged, drop untagged and priority 204 - * tagged frames. 205 - */ 206 - val = ANA_PORT_DROP_CFG_DROP_UNTAGGED_ENA | 207 - ANA_PORT_DROP_CFG_DROP_PRIO_S_TAGGED_ENA | 208 - ANA_PORT_DROP_CFG_DROP_PRIO_C_TAGGED_ENA; 209 - else 210 - val = 0; 211 - ocelot_rmw_gix(ocelot, val, 212 - ANA_PORT_DROP_CFG_DROP_UNTAGGED_ENA | 213 - ANA_PORT_DROP_CFG_DROP_PRIO_S_TAGGED_ENA | 214 - ANA_PORT_DROP_CFG_DROP_PRIO_C_TAGGED_ENA, 215 - ANA_PORT_DROP_CFG, port); 216 - 217 - if (vlan_aware) { 218 - if (ocelot_port->vid) 219 - /* Tag all frames except when VID == DEFAULT_VLAN */ 220 - val |= REW_TAG_CFG_TAG_CFG(1); 221 - else 222 - /* Tag all frames */ 223 - val |= REW_TAG_CFG_TAG_CFG(3); 224 - } else { 225 - /* Port tagging disabled. 
*/ 226 - val = REW_TAG_CFG_TAG_CFG(0); 227 - } 228 - ocelot_rmw_gix(ocelot, val, 229 - REW_TAG_CFG_TAG_CFG_M, 230 - REW_TAG_CFG, port); 231 - } 232 - EXPORT_SYMBOL(ocelot_port_vlan_filtering); 233 - 234 186 static int ocelot_port_set_native_vlan(struct ocelot *ocelot, int port, 235 187 u16 vid) 236 188 { 237 189 struct ocelot_port *ocelot_port = ocelot->ports[port]; 190 + u32 val = 0; 238 191 239 192 if (ocelot_port->vid != vid) { 240 193 /* Always permit deleting the native VLAN (vid = 0) */ ··· 204 251 REW_PORT_VLAN_CFG_PORT_VID_M, 205 252 REW_PORT_VLAN_CFG, port); 206 253 254 + if (ocelot_port->vlan_aware && !ocelot_port->vid) 255 + /* If port is vlan-aware and tagged, drop untagged and priority 256 + * tagged frames. 257 + */ 258 + val = ANA_PORT_DROP_CFG_DROP_UNTAGGED_ENA | 259 + ANA_PORT_DROP_CFG_DROP_PRIO_S_TAGGED_ENA | 260 + ANA_PORT_DROP_CFG_DROP_PRIO_C_TAGGED_ENA; 261 + ocelot_rmw_gix(ocelot, val, 262 + ANA_PORT_DROP_CFG_DROP_UNTAGGED_ENA | 263 + ANA_PORT_DROP_CFG_DROP_PRIO_S_TAGGED_ENA | 264 + ANA_PORT_DROP_CFG_DROP_PRIO_C_TAGGED_ENA, 265 + ANA_PORT_DROP_CFG, port); 266 + 267 + if (ocelot_port->vlan_aware) { 268 + if (ocelot_port->vid) 269 + /* Tag all frames except when VID == DEFAULT_VLAN */ 270 + val = REW_TAG_CFG_TAG_CFG(1); 271 + else 272 + /* Tag all frames */ 273 + val = REW_TAG_CFG_TAG_CFG(3); 274 + } else { 275 + /* Port tagging disabled. 
*/ 276 + val = REW_TAG_CFG_TAG_CFG(0); 277 + } 278 + ocelot_rmw_gix(ocelot, val, 279 + REW_TAG_CFG_TAG_CFG_M, 280 + REW_TAG_CFG, port); 281 + 207 282 return 0; 208 283 } 284 + 285 + void ocelot_port_vlan_filtering(struct ocelot *ocelot, int port, 286 + bool vlan_aware) 287 + { 288 + struct ocelot_port *ocelot_port = ocelot->ports[port]; 289 + u32 val; 290 + 291 + ocelot_port->vlan_aware = vlan_aware; 292 + 293 + if (vlan_aware) 294 + val = ANA_PORT_VLAN_CFG_VLAN_AWARE_ENA | 295 + ANA_PORT_VLAN_CFG_VLAN_POP_CNT(1); 296 + else 297 + val = 0; 298 + ocelot_rmw_gix(ocelot, val, 299 + ANA_PORT_VLAN_CFG_VLAN_AWARE_ENA | 300 + ANA_PORT_VLAN_CFG_VLAN_POP_CNT_M, 301 + ANA_PORT_VLAN_CFG, port); 302 + 303 + ocelot_port_set_native_vlan(ocelot, port, ocelot_port->vid); 304 + } 305 + EXPORT_SYMBOL(ocelot_port_vlan_filtering); 209 306 210 307 /* Default vlan to clasify for untagged frames (may be zero) */ 211 308 static void ocelot_port_set_pvid(struct ocelot *ocelot, int port, u16 pvid) ··· 876 873 } 877 874 878 875 int ocelot_fdb_add(struct ocelot *ocelot, int port, 879 - const unsigned char *addr, u16 vid, bool vlan_aware) 876 + const unsigned char *addr, u16 vid) 880 877 { 881 878 struct ocelot_port *ocelot_port = ocelot->ports[port]; 882 879 883 880 if (!vid) { 884 - if (!vlan_aware) 881 + if (!ocelot_port->vlan_aware) 885 882 /* If the bridge is not VLAN aware and no VID was 886 883 * provided, set it to pvid to ensure the MAC entry 887 884 * matches incoming untagged packets ··· 908 905 struct ocelot *ocelot = priv->port.ocelot; 909 906 int port = priv->chip_port; 910 907 911 - return ocelot_fdb_add(ocelot, port, addr, vid, priv->vlan_aware); 908 + return ocelot_fdb_add(ocelot, port, addr, vid); 912 909 } 913 910 914 911 int ocelot_fdb_del(struct ocelot *ocelot, int port, ··· 1499 1496 ocelot_port_attr_ageing_set(ocelot, port, attr->u.ageing_time); 1500 1497 break; 1501 1498 case SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING: 1502 - priv->vlan_aware = attr->u.vlan_filtering; 1503 
- ocelot_port_vlan_filtering(ocelot, port, priv->vlan_aware); 1499 + ocelot_port_vlan_filtering(ocelot, port, 1500 + attr->u.vlan_filtering); 1504 1501 break; 1505 1502 case SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED: 1506 1503 ocelot_port_attr_mc_set(ocelot, port, !attr->u.mc_disabled); ··· 1871 1868 } else { 1872 1869 err = ocelot_port_bridge_leave(ocelot, port, 1873 1870 info->upper_dev); 1874 - priv->vlan_aware = false; 1875 1871 } 1876 1872 } 1877 1873 if (netif_is_lag_master(info->upper_dev)) {
-2
drivers/net/ethernet/mscc/ocelot.h
··· 56 56 struct phy_device *phy; 57 57 u8 chip_port; 58 58 59 - u8 vlan_aware; 60 - 61 59 struct phy *serdes; 62 60 63 61 struct ocelot_port_tc tc;
+1 -1
drivers/net/ethernet/neterion/s2io.c
··· 5155 5155 /* read mac entries from CAM */ 5156 5156 static u64 do_s2io_read_unicast_mc(struct s2io_nic *sp, int offset) 5157 5157 { 5158 - u64 tmp64 = 0xffffffffffff0000ULL, val64; 5158 + u64 tmp64, val64; 5159 5159 struct XENA_dev_config __iomem *bar0 = sp->bar0; 5160 5160 5161 5161 /* read mac addr */
+27 -17
drivers/net/ethernet/pensando/ionic/ionic_lif.c
··· 2127 2127 if (lif->registered) 2128 2128 ionic_lif_set_netdev_info(lif); 2129 2129 2130 + ionic_rx_filter_replay(lif); 2131 + 2130 2132 if (netif_running(lif->netdev)) { 2131 2133 err = ionic_txrx_alloc(lif); 2132 2134 if (err) ··· 2208 2206 if (!test_bit(IONIC_LIF_F_FW_RESET, lif->state)) { 2209 2207 cancel_work_sync(&lif->deferred.work); 2210 2208 cancel_work_sync(&lif->tx_timeout_work); 2209 + ionic_rx_filters_deinit(lif); 2211 2210 } 2212 2211 2213 - ionic_rx_filters_deinit(lif); 2214 2212 if (lif->netdev->features & NETIF_F_RXHASH) 2215 2213 ionic_lif_rss_deinit(lif); 2216 2214 ··· 2341 2339 err = ionic_adminq_post_wait(lif, &ctx); 2342 2340 if (err) 2343 2341 return err; 2344 - 2342 + netdev_dbg(lif->netdev, "found initial MAC addr %pM\n", 2343 + ctx.comp.lif_getattr.mac); 2345 2344 if (is_zero_ether_addr(ctx.comp.lif_getattr.mac)) 2346 2345 return 0; 2347 2346 2348 - memcpy(addr.sa_data, ctx.comp.lif_getattr.mac, netdev->addr_len); 2349 - addr.sa_family = AF_INET; 2350 - err = eth_prepare_mac_addr_change(netdev, &addr); 2351 - if (err) { 2352 - netdev_warn(lif->netdev, "ignoring bad MAC addr from NIC %pM - err %d\n", 2353 - addr.sa_data, err); 2354 - return 0; 2347 + if (!ether_addr_equal(ctx.comp.lif_getattr.mac, netdev->dev_addr)) { 2348 + memcpy(addr.sa_data, ctx.comp.lif_getattr.mac, netdev->addr_len); 2349 + addr.sa_family = AF_INET; 2350 + err = eth_prepare_mac_addr_change(netdev, &addr); 2351 + if (err) { 2352 + netdev_warn(lif->netdev, "ignoring bad MAC addr from NIC %pM - err %d\n", 2353 + addr.sa_data, err); 2354 + return 0; 2355 + } 2356 + 2357 + if (!is_zero_ether_addr(netdev->dev_addr)) { 2358 + netdev_dbg(lif->netdev, "deleting station MAC addr %pM\n", 2359 + netdev->dev_addr); 2360 + ionic_lif_addr(lif, netdev->dev_addr, false); 2361 + } 2362 + 2363 + eth_commit_mac_addr_change(netdev, &addr); 2355 2364 } 2356 2365 2357 - netdev_dbg(lif->netdev, "deleting station MAC addr %pM\n", 2358 - netdev->dev_addr); 2359 - ionic_lif_addr(lif, 
netdev->dev_addr, false); 2360 - 2361 - eth_commit_mac_addr_change(netdev, &addr); 2362 2366 netdev_dbg(lif->netdev, "adding station MAC addr %pM\n", 2363 2367 netdev->dev_addr); 2364 2368 ionic_lif_addr(lif, netdev->dev_addr, true); ··· 2429 2421 if (err) 2430 2422 goto err_out_notifyq_deinit; 2431 2423 2432 - err = ionic_rx_filters_init(lif); 2433 - if (err) 2434 - goto err_out_notifyq_deinit; 2424 + if (!test_bit(IONIC_LIF_F_FW_RESET, lif->state)) { 2425 + err = ionic_rx_filters_init(lif); 2426 + if (err) 2427 + goto err_out_notifyq_deinit; 2428 + } 2435 2429 2436 2430 err = ionic_station_set(lif); 2437 2431 if (err)
+42 -9
drivers/net/ethernet/pensando/ionic/ionic_rx_filter.c
··· 2 2 /* Copyright(c) 2017 - 2019 Pensando Systems, Inc */ 3 3 4 4 #include <linux/netdevice.h> 5 + #include <linux/dynamic_debug.h> 5 6 #include <linux/etherdevice.h> 6 7 7 8 #include "ionic.h" ··· 18 17 devm_kfree(dev, f); 19 18 } 20 19 21 - int ionic_rx_filter_del(struct ionic_lif *lif, struct ionic_rx_filter *f) 20 + void ionic_rx_filter_replay(struct ionic_lif *lif) 22 21 { 23 - struct ionic_admin_ctx ctx = { 24 - .work = COMPLETION_INITIALIZER_ONSTACK(ctx.work), 25 - .cmd.rx_filter_del = { 26 - .opcode = IONIC_CMD_RX_FILTER_DEL, 27 - .filter_id = cpu_to_le32(f->filter_id), 28 - }, 29 - }; 22 + struct ionic_rx_filter_add_cmd *ac; 23 + struct ionic_admin_ctx ctx; 24 + struct ionic_rx_filter *f; 25 + struct hlist_head *head; 26 + struct hlist_node *tmp; 27 + unsigned int i; 28 + int err; 30 29 31 - return ionic_adminq_post_wait(lif, &ctx); 30 + ac = &ctx.cmd.rx_filter_add; 31 + 32 + for (i = 0; i < IONIC_RX_FILTER_HLISTS; i++) { 33 + head = &lif->rx_filters.by_id[i]; 34 + hlist_for_each_entry_safe(f, tmp, head, by_id) { 35 + ctx.work = COMPLETION_INITIALIZER_ONSTACK(ctx.work); 36 + memcpy(ac, &f->cmd, sizeof(f->cmd)); 37 + dev_dbg(&lif->netdev->dev, "replay filter command:\n"); 38 + dynamic_hex_dump("cmd ", DUMP_PREFIX_OFFSET, 16, 1, 39 + &ctx.cmd, sizeof(ctx.cmd), true); 40 + 41 + err = ionic_adminq_post_wait(lif, &ctx); 42 + if (err) { 43 + switch (le16_to_cpu(ac->match)) { 44 + case IONIC_RX_FILTER_MATCH_VLAN: 45 + netdev_info(lif->netdev, "Replay failed - %d: vlan %d\n", 46 + err, 47 + le16_to_cpu(ac->vlan.vlan)); 48 + break; 49 + case IONIC_RX_FILTER_MATCH_MAC: 50 + netdev_info(lif->netdev, "Replay failed - %d: mac %pM\n", 51 + err, ac->mac.addr); 52 + break; 53 + case IONIC_RX_FILTER_MATCH_MAC_VLAN: 54 + netdev_info(lif->netdev, "Replay failed - %d: vlan %d mac %pM\n", 55 + err, 56 + le16_to_cpu(ac->vlan.vlan), 57 + ac->mac.addr); 58 + break; 59 + } 60 + } 61 + } 62 + } 32 63 } 33 64 34 65 int ionic_rx_filters_init(struct ionic_lif *lif)
+1 -1
drivers/net/ethernet/pensando/ionic/ionic_rx_filter.h
··· 24 24 }; 25 25 26 26 void ionic_rx_filter_free(struct ionic_lif *lif, struct ionic_rx_filter *f); 27 - int ionic_rx_filter_del(struct ionic_lif *lif, struct ionic_rx_filter *f); 27 + void ionic_rx_filter_replay(struct ionic_lif *lif); 28 28 int ionic_rx_filters_init(struct ionic_lif *lif); 29 29 void ionic_rx_filters_deinit(struct ionic_lif *lif); 30 30 int ionic_rx_filter_save(struct ionic_lif *lif, u32 flow_id, u16 rxq_index,
+2
drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
··· 241 241 switch (phymode) { 242 242 case PHY_INTERFACE_MODE_RGMII: 243 243 case PHY_INTERFACE_MODE_RGMII_ID: 244 + case PHY_INTERFACE_MODE_RGMII_RXID: 245 + case PHY_INTERFACE_MODE_RGMII_TXID: 244 246 *val = SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_RGMII; 245 247 break; 246 248 case PHY_INTERFACE_MODE_MII:
+2
drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c
··· 150 150 plat_dat->init = sun7i_gmac_init; 151 151 plat_dat->exit = sun7i_gmac_exit; 152 152 plat_dat->fix_mac_speed = sun7i_fix_speed; 153 + plat_dat->tx_fifo_size = 4096; 154 + plat_dat->rx_fifo_size = 16384; 153 155 154 156 ret = sun7i_gmac_init(pdev, plat_dat->bsp_priv); 155 157 if (ret)
+3 -3
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 1372 1372 err: 1373 1373 i = devm_add_action(dev, am65_cpsw_nuss_free_tx_chns, common); 1374 1374 if (i) { 1375 - dev_err(dev, "failed to add free_tx_chns action %d", i); 1375 + dev_err(dev, "Failed to add free_tx_chns action %d\n", i); 1376 1376 return i; 1377 1377 } 1378 1378 ··· 1481 1481 err: 1482 1482 i = devm_add_action(dev, am65_cpsw_nuss_free_rx_chns, common); 1483 1483 if (i) { 1484 - dev_err(dev, "failed to add free_rx_chns action %d", i); 1484 + dev_err(dev, "Failed to add free_rx_chns action %d\n", i); 1485 1485 return i; 1486 1486 } 1487 1487 ··· 1691 1691 ret = devm_add_action_or_reset(dev, am65_cpsw_pcpu_stats_free, 1692 1692 ndev_priv->stats); 1693 1693 if (ret) { 1694 - dev_err(dev, "failed to add percpu stat free action %d", ret); 1694 + dev_err(dev, "Failed to add percpu stat free action %d\n", ret); 1695 1695 return ret; 1696 1696 } 1697 1697
+2 -3
drivers/net/ipa/ipa_modem.c
··· 297 297 298 298 ret = ipa_endpoint_modem_exception_reset_all(ipa); 299 299 if (ret) 300 - dev_err(dev, "error %d resetting exception endpoint", 301 - ret); 300 + dev_err(dev, "error %d resetting exception endpoint\n", ret); 302 301 303 302 ipa_endpoint_modem_pause_all(ipa, false); 304 303 305 304 ret = ipa_modem_stop(ipa); 306 305 if (ret) 307 - dev_err(dev, "error %d stopping modem", ret); 306 + dev_err(dev, "error %d stopping modem\n", ret); 308 307 309 308 /* Now prepare for the next modem boot */ 310 309 ret = ipa_mem_zero_modem(ipa);
+1 -1
drivers/net/macsec.c
··· 3809 3809 struct netlink_ext_ack *extack) 3810 3810 { 3811 3811 struct macsec_dev *macsec = macsec_priv(dev); 3812 - struct macsec_tx_sa tx_sc; 3812 + struct macsec_tx_sc tx_sc; 3813 3813 struct macsec_secy secy; 3814 3814 int ret; 3815 3815
+24 -22
drivers/net/phy/marvell.c
··· 1263 1263 int lpa; 1264 1264 int err; 1265 1265 1266 + if (!(status & MII_M1011_PHY_STATUS_RESOLVED)) { 1267 + phydev->link = 0; 1268 + return 0; 1269 + } 1270 + 1271 + if (status & MII_M1011_PHY_STATUS_FULLDUPLEX) 1272 + phydev->duplex = DUPLEX_FULL; 1273 + else 1274 + phydev->duplex = DUPLEX_HALF; 1275 + 1276 + switch (status & MII_M1011_PHY_STATUS_SPD_MASK) { 1277 + case MII_M1011_PHY_STATUS_1000: 1278 + phydev->speed = SPEED_1000; 1279 + break; 1280 + 1281 + case MII_M1011_PHY_STATUS_100: 1282 + phydev->speed = SPEED_100; 1283 + break; 1284 + 1285 + default: 1286 + phydev->speed = SPEED_10; 1287 + break; 1288 + } 1289 + 1266 1290 if (!fiber) { 1267 1291 err = genphy_read_lpa(phydev); 1268 1292 if (err < 0) ··· 1313 1289 phydev->asym_pause = 0; 1314 1290 } 1315 1291 } 1316 - } 1317 - 1318 - if (!(status & MII_M1011_PHY_STATUS_RESOLVED)) 1319 - return 0; 1320 - 1321 - if (status & MII_M1011_PHY_STATUS_FULLDUPLEX) 1322 - phydev->duplex = DUPLEX_FULL; 1323 - else 1324 - phydev->duplex = DUPLEX_HALF; 1325 - 1326 - switch (status & MII_M1011_PHY_STATUS_SPD_MASK) { 1327 - case MII_M1011_PHY_STATUS_1000: 1328 - phydev->speed = SPEED_1000; 1329 - break; 1330 - 1331 - case MII_M1011_PHY_STATUS_100: 1332 - phydev->speed = SPEED_100; 1333 - break; 1334 - 1335 - default: 1336 - phydev->speed = SPEED_10; 1337 - break; 1338 1292 } 1339 1293 1340 1294 return 0;
+33 -3
drivers/net/phy/marvell10g.c
··· 33 33 #define MV_PHY_ALASKA_NBT_QUIRK_REV (MARVELL_PHY_ID_88X3310 | 0xa) 34 34 35 35 enum { 36 + MV_PMA_FW_VER0 = 0xc011, 37 + MV_PMA_FW_VER1 = 0xc012, 36 38 MV_PMA_BOOT = 0xc050, 37 39 MV_PMA_BOOT_FATAL = BIT(0), 38 40 ··· 75 73 76 74 /* Vendor2 MMD registers */ 77 75 MV_V2_PORT_CTRL = 0xf001, 78 - MV_V2_PORT_CTRL_PWRDOWN = 0x0800, 76 + MV_V2_PORT_CTRL_SWRST = BIT(15), 77 + MV_V2_PORT_CTRL_PWRDOWN = BIT(11), 79 78 MV_V2_TEMP_CTRL = 0xf08a, 80 79 MV_V2_TEMP_CTRL_MASK = 0xc000, 81 80 MV_V2_TEMP_CTRL_SAMPLE = 0x0000, ··· 86 83 }; 87 84 88 85 struct mv3310_priv { 86 + u32 firmware_ver; 87 + 89 88 struct device *hwmon_dev; 90 89 char *hwmon_name; 91 90 }; ··· 240 235 241 236 static int mv3310_power_up(struct phy_device *phydev) 242 237 { 243 - return phy_clear_bits_mmd(phydev, MDIO_MMD_VEND2, MV_V2_PORT_CTRL, 244 - MV_V2_PORT_CTRL_PWRDOWN); 238 + struct mv3310_priv *priv = dev_get_drvdata(&phydev->mdio.dev); 239 + int ret; 240 + 241 + ret = phy_clear_bits_mmd(phydev, MDIO_MMD_VEND2, MV_V2_PORT_CTRL, 242 + MV_V2_PORT_CTRL_PWRDOWN); 243 + 244 + if (priv->firmware_ver < 0x00030000) 245 + return ret; 246 + 247 + return phy_set_bits_mmd(phydev, MDIO_MMD_VEND2, MV_V2_PORT_CTRL, 248 + MV_V2_PORT_CTRL_SWRST); 245 249 } 246 250 247 251 static int mv3310_reset(struct phy_device *phydev, u32 unit) ··· 368 354 return -ENOMEM; 369 355 370 356 dev_set_drvdata(&phydev->mdio.dev, priv); 357 + 358 + ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MV_PMA_FW_VER0); 359 + if (ret < 0) 360 + return ret; 361 + 362 + priv->firmware_ver = ret << 16; 363 + 364 + ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MV_PMA_FW_VER1); 365 + if (ret < 0) 366 + return ret; 367 + 368 + priv->firmware_ver |= ret; 369 + 370 + phydev_info(phydev, "Firmware version %u.%u.%u.%u\n", 371 + priv->firmware_ver >> 24, (priv->firmware_ver >> 16) & 255, 372 + (priv->firmware_ver >> 8) & 255, priv->firmware_ver & 255); 371 373 372 374 /* Powering down the port when not in use saves about 600mW */ 373 375 ret = 
mv3310_power_down(phydev);
+1 -1
drivers/net/phy/mdio_bus.c
··· 464 464 465 465 /** 466 466 * mdio_find_bus - Given the name of a mdiobus, find the mii_bus. 467 - * @mdio_bus_np: Pointer to the mii_bus. 467 + * @mdio_name: The name of a mdiobus. 468 468 * 469 469 * Returns a reference to the mii_bus, or NULL if none found. The 470 470 * embedded struct device will have its reference count incremented,
+1 -1
drivers/net/phy/micrel.c
··· 1204 1204 .driver_data = &ksz9021_type, 1205 1205 .probe = kszphy_probe, 1206 1206 .config_init = ksz9131_config_init, 1207 - .read_status = ksz9031_read_status, 1207 + .read_status = genphy_read_status, 1208 1208 .ack_interrupt = kszphy_ack_interrupt, 1209 1209 .config_intr = kszphy_config_intr, 1210 1210 .get_sset_count = kszphy_get_sset_count,
+2 -1
drivers/net/tun.c
··· 1888 1888 1889 1889 skb_reset_network_header(skb); 1890 1890 skb_probe_transport_header(skb); 1891 + skb_record_rx_queue(skb, tfile->queue_index); 1891 1892 1892 1893 if (skb_xdp) { 1893 1894 struct bpf_prog *xdp_prog; ··· 2460 2459 skb->protocol = eth_type_trans(skb, tun->dev); 2461 2460 skb_reset_network_header(skb); 2462 2461 skb_probe_transport_header(skb); 2462 + skb_record_rx_queue(skb, tfile->queue_index); 2463 2463 2464 2464 if (skb_xdp) { 2465 2465 err = do_xdp_generic(xdp_prog, skb); ··· 2472 2470 !tfile->detached) 2473 2471 rxhash = __skb_get_hash_symmetric(skb); 2474 2472 2475 - skb_record_rx_queue(skb, tfile->queue_index); 2476 2473 netif_receive_skb(skb); 2477 2474 2478 2475 /* No need for get_cpu_ptr() here since this function is
+2 -1
drivers/net/wireless/ath/ath11k/thermal.h
··· 36 36 return 0; 37 37 } 38 38 39 - static inline void ath11k_thermal_unregister(struct ath11k *ar) 39 + static inline void ath11k_thermal_unregister(struct ath11k_base *sc) 40 40 { 41 41 } 42 42 43 43 static inline int ath11k_thermal_set_throttling(struct ath11k *ar, u32 throttle_state) 44 44 { 45 + return 0; 45 46 } 46 47 47 48 static inline void ath11k_thermal_event_temperature(struct ath11k *ar,
+9
drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
··· 729 729 return err; 730 730 } 731 731 732 + static netdev_tx_t brcmf_net_mon_start_xmit(struct sk_buff *skb, 733 + struct net_device *ndev) 734 + { 735 + dev_kfree_skb_any(skb); 736 + 737 + return NETDEV_TX_OK; 738 + } 739 + 732 740 static const struct net_device_ops brcmf_netdev_ops_mon = { 733 741 .ndo_open = brcmf_net_mon_open, 734 742 .ndo_stop = brcmf_net_mon_stop, 743 + .ndo_start_xmit = brcmf_net_mon_start_xmit, 735 744 }; 736 745 737 746 int brcmf_net_mon_attach(struct brcmf_if *ifp)
+6 -6
drivers/net/wireless/mac80211_hwsim.c
··· 3669 3669 } 3670 3670 3671 3671 if (info->attrs[HWSIM_ATTR_RADIO_NAME]) { 3672 - hwname = kasprintf(GFP_KERNEL, "%.*s", 3673 - nla_len(info->attrs[HWSIM_ATTR_RADIO_NAME]), 3674 - (char *)nla_data(info->attrs[HWSIM_ATTR_RADIO_NAME])); 3672 + hwname = kstrndup((char *)nla_data(info->attrs[HWSIM_ATTR_RADIO_NAME]), 3673 + nla_len(info->attrs[HWSIM_ATTR_RADIO_NAME]), 3674 + GFP_KERNEL); 3675 3675 if (!hwname) 3676 3676 return -ENOMEM; 3677 3677 param.hwname = hwname; ··· 3691 3691 if (info->attrs[HWSIM_ATTR_RADIO_ID]) { 3692 3692 idx = nla_get_u32(info->attrs[HWSIM_ATTR_RADIO_ID]); 3693 3693 } else if (info->attrs[HWSIM_ATTR_RADIO_NAME]) { 3694 - hwname = kasprintf(GFP_KERNEL, "%.*s", 3695 - nla_len(info->attrs[HWSIM_ATTR_RADIO_NAME]), 3696 - (char *)nla_data(info->attrs[HWSIM_ATTR_RADIO_NAME])); 3694 + hwname = kstrndup((char *)nla_data(info->attrs[HWSIM_ATTR_RADIO_NAME]), 3695 + nla_len(info->attrs[HWSIM_ATTR_RADIO_NAME]), 3696 + GFP_KERNEL); 3697 3697 if (!hwname) 3698 3698 return -ENOMEM; 3699 3699 } else
+3 -8
drivers/net/wireless/realtek/rtw88/pci.c
··· 1338 1338 rtw_pci_link_cfg(rtwdev); 1339 1339 } 1340 1340 1341 - #ifdef CONFIG_PM 1342 - static int rtw_pci_suspend(struct device *dev) 1341 + static int __maybe_unused rtw_pci_suspend(struct device *dev) 1343 1342 { 1344 1343 return 0; 1345 1344 } 1346 1345 1347 - static int rtw_pci_resume(struct device *dev) 1346 + static int __maybe_unused rtw_pci_resume(struct device *dev) 1348 1347 { 1349 1348 return 0; 1350 1349 } 1351 1350 1352 1351 static SIMPLE_DEV_PM_OPS(rtw_pm_ops, rtw_pci_suspend, rtw_pci_resume); 1353 - #define RTW_PM_OPS (&rtw_pm_ops) 1354 - #else 1355 - #define RTW_PM_OPS NULL 1356 - #endif 1357 1352 1358 1353 static int rtw_pci_claim(struct rtw_dev *rtwdev, struct pci_dev *pdev) 1359 1354 { ··· 1577 1582 .id_table = rtw_pci_id_table, 1578 1583 .probe = rtw_pci_probe, 1579 1584 .remove = rtw_pci_remove, 1580 - .driver.pm = RTW_PM_OPS, 1585 + .driver.pm = &rtw_pm_ops, 1581 1586 }; 1582 1587 module_pci_driver(rtw_pci_driver); 1583 1588
+10
include/net/cfg80211.h
··· 905 905 * protocol frames. 906 906 * @control_port_over_nl80211: TRUE if userspace expects to exchange control 907 907 * port frames over NL80211 instead of the network interface. 908 + * @control_port_no_preauth: disables pre-auth rx over the nl80211 control 909 + * port for mac80211 908 910 * @wep_keys: static WEP keys, if not NULL points to an array of 909 911 * CFG80211_MAX_WEP_KEYS WEP keys 910 912 * @wep_tx_key: key index (0..3) of the default TX static WEP key ··· 1224 1222 * @he_capa: HE capabilities of station 1225 1223 * @he_capa_len: the length of the HE capabilities 1226 1224 * @airtime_weight: airtime scheduler weight for this station 1225 + * @txpwr: transmit power for an associated station 1227 1226 */ 1228 1227 struct station_parameters { 1229 1228 const u8 *supported_rates; ··· 4669 4666 * @txq_memory_limit: configuration internal TX queue memory limit 4670 4667 * @txq_quantum: configuration of internal TX queue scheduler quantum 4671 4668 * 4669 + * @tx_queue_len: allow setting transmit queue len for drivers not using 4670 + * wake_tx_queue 4671 + * 4672 4672 * @support_mbssid: can HW support association with nontransmitted AP 4673 4673 * @support_only_he_mbssid: don't parse MBSSID elements if it is not 4674 4674 * HE AP, in order to avoid compatibility issues. ··· 4687 4681 * supported by the driver for each peer 4688 4682 * @tid_config_support.max_retry: maximum supported retry count for 4689 4683 * long/short retry configuration 4684 + * 4685 + * @max_data_retry_count: maximum supported per TID retry count for 4686 + * configuration through the %NL80211_TID_CONFIG_ATTR_RETRY_SHORT and 4687 + * %NL80211_TID_CONFIG_ATTR_RETRY_LONG attributes 4690 4688 */ 4691 4689 struct wiphy { 4692 4690 /* assign these fields before you register the wiphy */
+1
include/net/ip6_route.h
··· 254 254 255 255 return rt->rt6i_flags & RTF_ANYCAST || 256 256 (rt->rt6i_dst.plen < 127 && 257 + !(rt->rt6i_flags & (RTF_GATEWAY | RTF_NONEXTHOP)) && 257 258 ipv6_addr_equal(&rt->rt6i_dst.addr, daddr)); 258 259 } 259 260
+1 -1
include/net/netfilter/nf_tables.h
··· 901 901 { 902 902 struct nft_expr *expr; 903 903 904 - if (nft_set_ext_exists(ext, NFT_SET_EXT_EXPR)) { 904 + if (__nft_set_ext_exists(ext, NFT_SET_EXT_EXPR)) { 905 905 expr = nft_set_ext_expr(ext); 906 906 expr->ops->eval(expr, regs, pkt); 907 907 }
+3 -3
include/net/sock.h
··· 2553 2553 } 2554 2554 2555 2555 /** 2556 - * skb_steal_sock 2557 - * @skb to steal the socket from 2558 - * @refcounted is set to true if the socket is reference-counted 2556 + * skb_steal_sock - steal a socket from an sk_buff 2557 + * @skb: sk_buff to steal the socket from 2558 + * @refcounted: is set to true if the socket is reference-counted 2559 2559 */ 2560 2560 static inline struct sock * 2561 2561 skb_steal_sock(struct sk_buff *skb, bool *refcounted)
+3 -1
include/soc/mscc/ocelot.h
··· 476 476 477 477 void __iomem *regs; 478 478 479 + bool vlan_aware; 480 + 479 481 /* Ingress default VLAN (pvid) */ 480 482 u16 pvid; 481 483 ··· 612 610 int ocelot_fdb_dump(struct ocelot *ocelot, int port, 613 611 dsa_fdb_dump_cb_t *cb, void *data); 614 612 int ocelot_fdb_add(struct ocelot *ocelot, int port, 615 - const unsigned char *addr, u16 vid, bool vlan_aware); 613 + const unsigned char *addr, u16 vid); 616 614 int ocelot_fdb_del(struct ocelot *ocelot, int port, 617 615 const unsigned char *addr, u16 vid); 618 616 int ocelot_vlan_add(struct ocelot *ocelot, int port, u16 vid, bool pvid,
+2
include/uapi/linux/netfilter/nf_tables.h
··· 276 276 * @NFT_SET_TIMEOUT: set uses timeouts 277 277 * @NFT_SET_EVAL: set can be updated from the evaluation path 278 278 * @NFT_SET_OBJECT: set contains stateful objects 279 + * @NFT_SET_CONCAT: set contains a concatenation 279 280 */ 280 281 enum nft_set_flags { 281 282 NFT_SET_ANONYMOUS = 0x1, ··· 286 285 NFT_SET_TIMEOUT = 0x10, 287 286 NFT_SET_EVAL = 0x20, 288 287 NFT_SET_OBJECT = 0x40, 288 + NFT_SET_CONCAT = 0x80, 289 289 }; 290 290 291 291 /**
+1
include/uapi/linux/netfilter/xt_IDLETIMER.h
··· 48 48 49 49 char label[MAX_IDLETIMER_LABEL_SIZE]; 50 50 51 + __u8 send_nl_msg; /* unused: for compatibility with Android */ 51 52 __u8 timer_type; 52 53 53 54 /* for kernel module internal use only */
+1 -1
kernel/bpf/bpf_lru_list.h
··· 30 30 struct bpf_lru_list { 31 31 struct list_head lists[NR_BPF_LRU_LIST_T]; 32 32 unsigned int counts[NR_BPF_LRU_LIST_COUNT]; 33 - /* The next inacitve list rotation starts from here */ 33 + /* The next inactive list rotation starts from here */ 34 34 struct list_head *next_inactive_rotation; 35 35 36 36 raw_spinlock_t lock ____cacheline_aligned_in_smp;
+7 -9
kernel/bpf/syscall.c
··· 586 586 { 587 587 struct bpf_map *map = vma->vm_file->private_data; 588 588 589 - bpf_map_inc_with_uref(map); 590 - 591 - if (vma->vm_flags & VM_WRITE) { 589 + if (vma->vm_flags & VM_MAYWRITE) { 592 590 mutex_lock(&map->freeze_mutex); 593 591 map->writecnt++; 594 592 mutex_unlock(&map->freeze_mutex); ··· 598 600 { 599 601 struct bpf_map *map = vma->vm_file->private_data; 600 602 601 - if (vma->vm_flags & VM_WRITE) { 603 + if (vma->vm_flags & VM_MAYWRITE) { 602 604 mutex_lock(&map->freeze_mutex); 603 605 map->writecnt--; 604 606 mutex_unlock(&map->freeze_mutex); 605 607 } 606 - 607 - bpf_map_put_with_uref(map); 608 608 } 609 609 610 610 static const struct vm_operations_struct bpf_map_default_vmops = { ··· 631 635 /* set default open/close callbacks */ 632 636 vma->vm_ops = &bpf_map_default_vmops; 633 637 vma->vm_private_data = map; 638 + vma->vm_flags &= ~VM_MAYEXEC; 639 + if (!(vma->vm_flags & VM_WRITE)) 640 + /* disallow re-mapping with PROT_WRITE */ 641 + vma->vm_flags &= ~VM_MAYWRITE; 634 642 635 643 err = map->ops->map_mmap(map, vma); 636 644 if (err) 637 645 goto out; 638 646 639 - bpf_map_inc_with_uref(map); 640 - 641 - if (vma->vm_flags & VM_WRITE) 647 + if (vma->vm_flags & VM_MAYWRITE) 642 648 map->writecnt++; 643 649 out: 644 650 mutex_unlock(&map->freeze_mutex);
+1 -2
kernel/bpf/verifier.c
··· 1255 1255 reg->type = SCALAR_VALUE; 1256 1256 reg->var_off = tnum_unknown; 1257 1257 reg->frameno = 0; 1258 - reg->precise = env->subprog_cnt > 1 || !env->allow_ptr_leaks ? 1259 - true : false; 1258 + reg->precise = env->subprog_cnt > 1 || !env->allow_ptr_leaks; 1260 1259 __mark_reg_unbounded(reg); 1261 1260 } 1262 1261
+2
lib/Kconfig.debug
··· 242 242 config DEBUG_INFO_BTF 243 243 bool "Generate BTF typeinfo" 244 244 depends on DEBUG_INFO 245 + depends on !DEBUG_INFO_SPLIT && !DEBUG_INFO_REDUCED 246 + depends on !GCC_PLUGIN_RANDSTRUCT || COMPILE_TEST 245 247 help 246 248 Generate deduplicated BTF type information from DWARF debug info. 247 249 Turning this on expects presence of pahole tool, which will convert
+4 -2
net/core/dev.c
··· 4140 4140 4141 4141 int netdev_tstamp_prequeue __read_mostly = 1; 4142 4142 int netdev_budget __read_mostly = 300; 4143 - unsigned int __read_mostly netdev_budget_usecs = 2000; 4143 + /* Must be at least 2 jiffes to guarantee 1 jiffy timeout */ 4144 + unsigned int __read_mostly netdev_budget_usecs = 2 * USEC_PER_SEC / HZ; 4144 4145 int weight_p __read_mostly = 64; /* old backlog weight */ 4145 4146 int dev_weight_rx_bias __read_mostly = 1; /* bias for backlog weight */ 4146 4147 int dev_weight_tx_bias __read_mostly = 1; /* bias for output_queue quota */ ··· 8667 8666 const struct net_device_ops *ops = dev->netdev_ops; 8668 8667 enum bpf_netdev_command query; 8669 8668 u32 prog_id, expected_id = 0; 8670 - struct bpf_prog *prog = NULL; 8671 8669 bpf_op_t bpf_op, bpf_chk; 8670 + struct bpf_prog *prog; 8672 8671 bool offload; 8673 8672 int err; 8674 8673 ··· 8734 8733 } else { 8735 8734 if (!prog_id) 8736 8735 return 0; 8736 + prog = NULL; 8737 8737 } 8738 8738 8739 8739 err = dev_xdp_install(dev, bpf_op, extack, flags, prog);
+1 -1
net/core/filter.c
··· 5925 5925 return -EOPNOTSUPP; 5926 5926 if (unlikely(dev_net(skb->dev) != sock_net(sk))) 5927 5927 return -ENETUNREACH; 5928 - if (unlikely(sk->sk_reuseport)) 5928 + if (unlikely(sk_fullsock(sk) && sk->sk_reuseport)) 5929 5929 return -ESOCKTNOSUPPORT; 5930 5930 if (sk_is_refcounted(sk) && 5931 5931 unlikely(!refcount_inc_not_zero(&sk->sk_refcnt)))
+1 -1
net/core/net-sysfs.c
··· 80 80 struct net_device *netdev = to_net_dev(dev); 81 81 struct net *net = dev_net(netdev); 82 82 unsigned long new; 83 - int ret = -EINVAL; 83 + int ret; 84 84 85 85 if (!ns_capable(net->user_ns, CAP_NET_ADMIN)) 86 86 return -EPERM;
+1 -1
net/core/sock.c
··· 1872 1872 * as not suitable for copying when cloning. 1873 1873 */ 1874 1874 if (sk_user_data_is_nocopy(newsk)) 1875 - RCU_INIT_POINTER(newsk->sk_user_data, NULL); 1875 + newsk->sk_user_data = NULL; 1876 1876 1877 1877 newsk->sk_err = 0; 1878 1878 newsk->sk_err_soft = 0;
+6 -1
net/dsa/port.c
··· 670 670 { 671 671 struct dsa_switch *ds = dp->ds; 672 672 struct device_node *phy_np; 673 + int port = dp->index; 673 674 674 675 if (!ds->ops->adjust_link) { 675 676 phy_np = of_parse_phandle(dp->dn, "phy-handle", 0); 676 - if (of_phy_is_fixed_link(dp->dn) || phy_np) 677 + if (of_phy_is_fixed_link(dp->dn) || phy_np) { 678 + if (ds->ops->phylink_mac_link_down) 679 + ds->ops->phylink_mac_link_down(ds, port, 680 + MLO_AN_FIXED, PHY_INTERFACE_MODE_NA); 677 681 return dsa_port_phylink_register(dp); 682 + } 678 683 return 0; 679 684 } 680 685
+8 -2
net/hsr/hsr_netlink.c
··· 69 69 else 70 70 multicast_spec = nla_get_u8(data[IFLA_HSR_MULTICAST_SPEC]); 71 71 72 - if (!data[IFLA_HSR_VERSION]) 72 + if (!data[IFLA_HSR_VERSION]) { 73 73 hsr_version = 0; 74 - else 74 + } else { 75 75 hsr_version = nla_get_u8(data[IFLA_HSR_VERSION]); 76 + if (hsr_version > 1) { 77 + NL_SET_ERR_MSG_MOD(extack, 78 + "Only versions 0..1 are supported"); 79 + return -EINVAL; 80 + } 81 + } 76 82 77 83 return hsr_dev_finalize(dev, link, multicast_spec, hsr_version, extack); 78 84 }
+9 -4
net/ipv4/devinet.c
··· 614 614 return NULL; 615 615 } 616 616 617 - static int ip_mc_config(struct sock *sk, bool join, const struct in_ifaddr *ifa) 617 + static int ip_mc_autojoin_config(struct net *net, bool join, 618 + const struct in_ifaddr *ifa) 618 619 { 620 + #if defined(CONFIG_IP_MULTICAST) 619 621 struct ip_mreqn mreq = { 620 622 .imr_multiaddr.s_addr = ifa->ifa_address, 621 623 .imr_ifindex = ifa->ifa_dev->dev->ifindex, 622 624 }; 625 + struct sock *sk = net->ipv4.mc_autojoin_sk; 623 626 int ret; 624 627 625 628 ASSERT_RTNL(); ··· 635 632 release_sock(sk); 636 633 637 634 return ret; 635 + #else 636 + return -EOPNOTSUPP; 637 + #endif 638 638 } 639 639 640 640 static int inet_rtm_deladdr(struct sk_buff *skb, struct nlmsghdr *nlh, ··· 681 675 continue; 682 676 683 677 if (ipv4_is_multicast(ifa->ifa_address)) 684 - ip_mc_config(net->ipv4.mc_autojoin_sk, false, ifa); 678 + ip_mc_autojoin_config(net, false, ifa); 685 679 __inet_del_ifa(in_dev, ifap, 1, nlh, NETLINK_CB(skb).portid); 686 680 return 0; 687 681 } ··· 946 940 */ 947 941 set_ifa_lifetime(ifa, valid_lft, prefered_lft); 948 942 if (ifa->ifa_flags & IFA_F_MCAUTOJOIN) { 949 - int ret = ip_mc_config(net->ipv4.mc_autojoin_sk, 950 - true, ifa); 943 + int ret = ip_mc_autojoin_config(net, true, ifa); 951 944 952 945 if (ret < 0) { 953 946 inet_free_ifa(ifa);
+20 -1
net/ipv6/icmp.c
··· 229 229 return res; 230 230 } 231 231 232 + static bool icmpv6_rt_has_prefsrc(struct sock *sk, u8 type, 233 + struct flowi6 *fl6) 234 + { 235 + struct net *net = sock_net(sk); 236 + struct dst_entry *dst; 237 + bool res = false; 238 + 239 + dst = ip6_route_output(net, sk, fl6); 240 + if (!dst->error) { 241 + struct rt6_info *rt = (struct rt6_info *)dst; 242 + struct in6_addr prefsrc; 243 + 244 + rt6_get_prefsrc(rt, &prefsrc); 245 + res = !ipv6_addr_any(&prefsrc); 246 + } 247 + dst_release(dst); 248 + return res; 249 + } 250 + 232 251 /* 233 252 * an inline helper for the "simple" if statement below 234 253 * checks if parameter problem report is caused by an ··· 546 527 saddr = force_saddr; 547 528 if (saddr) { 548 529 fl6.saddr = *saddr; 549 - } else { 530 + } else if (!icmpv6_rt_has_prefsrc(sk, type, &fl6)) { 550 531 /* select a more meaningful saddr from input if */ 551 532 struct net_device *in_netdev; 552 533
+1 -1
net/ipv6/seg6.c
··· 434 434 435 435 int __init seg6_init(void) 436 436 { 437 - int err = -ENOMEM; 437 + int err; 438 438 439 439 err = genl_register_family(&seg6_genl_family); 440 440 if (err)
+8 -8
net/l2tp/l2tp_netlink.c
··· 920 920 .cmd = L2TP_CMD_TUNNEL_CREATE, 921 921 .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP, 922 922 .doit = l2tp_nl_cmd_tunnel_create, 923 - .flags = GENL_ADMIN_PERM, 923 + .flags = GENL_UNS_ADMIN_PERM, 924 924 }, 925 925 { 926 926 .cmd = L2TP_CMD_TUNNEL_DELETE, 927 927 .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP, 928 928 .doit = l2tp_nl_cmd_tunnel_delete, 929 - .flags = GENL_ADMIN_PERM, 929 + .flags = GENL_UNS_ADMIN_PERM, 930 930 }, 931 931 { 932 932 .cmd = L2TP_CMD_TUNNEL_MODIFY, 933 933 .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP, 934 934 .doit = l2tp_nl_cmd_tunnel_modify, 935 - .flags = GENL_ADMIN_PERM, 935 + .flags = GENL_UNS_ADMIN_PERM, 936 936 }, 937 937 { 938 938 .cmd = L2TP_CMD_TUNNEL_GET, 939 939 .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP, 940 940 .doit = l2tp_nl_cmd_tunnel_get, 941 941 .dumpit = l2tp_nl_cmd_tunnel_dump, 942 - .flags = GENL_ADMIN_PERM, 942 + .flags = GENL_UNS_ADMIN_PERM, 943 943 }, 944 944 { 945 945 .cmd = L2TP_CMD_SESSION_CREATE, 946 946 .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP, 947 947 .doit = l2tp_nl_cmd_session_create, 948 - .flags = GENL_ADMIN_PERM, 948 + .flags = GENL_UNS_ADMIN_PERM, 949 949 }, 950 950 { 951 951 .cmd = L2TP_CMD_SESSION_DELETE, 952 952 .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP, 953 953 .doit = l2tp_nl_cmd_session_delete, 954 - .flags = GENL_ADMIN_PERM, 954 + .flags = GENL_UNS_ADMIN_PERM, 955 955 }, 956 956 { 957 957 .cmd = L2TP_CMD_SESSION_MODIFY, 958 958 .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP, 959 959 .doit = l2tp_nl_cmd_session_modify, 960 - .flags = GENL_ADMIN_PERM, 960 + .flags = GENL_UNS_ADMIN_PERM, 961 961 }, 962 962 { 963 963 .cmd = L2TP_CMD_SESSION_GET, 964 964 .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP, 965 965 .doit = l2tp_nl_cmd_session_get, 966 966 .dumpit = l2tp_nl_cmd_session_dump, 967 - .flags = GENL_ADMIN_PERM, 967 + .flags = GENL_UNS_ADMIN_PERM, 968 968 }, 969 969 }; 970 970
+13 -11
net/mac80211/main.c
··· 1069 1069 local->hw.wiphy->signal_type = CFG80211_SIGNAL_TYPE_UNSPEC; 1070 1070 if (hw->max_signal <= 0) { 1071 1071 result = -EINVAL; 1072 - goto fail_wiphy_register; 1072 + goto fail_workqueue; 1073 1073 } 1074 1074 ··· 1135 1135 1136 1136 result = ieee80211_init_cipher_suites(local); 1137 1137 if (result < 0) 1138 - goto fail_wiphy_register; 1138 + goto fail_workqueue; 1139 1139 1140 1140 if (!local->ops->remain_on_channel) 1141 1141 local->hw.wiphy->max_remain_on_channel_duration = 5000; ··· 1160 1160 } 1161 1161 1162 1162 local->hw.wiphy->max_num_csa_counters = IEEE80211_MAX_CSA_COUNTERS_NUM; 1163 - 1164 - result = wiphy_register(local->hw.wiphy); 1165 - if (result < 0) 1166 - goto fail_wiphy_register; 1167 1163 1168 1164 /* 1169 1165 * We use the number of queues for feature tests (QoS, HT) internally ··· 1213 1217 goto fail_flows; 1214 1218 1215 1219 rtnl_lock(); 1216 - 1217 1220 result = ieee80211_init_rate_ctrl_alg(local, 1218 1221 hw->rate_control_algorithm); 1222 + rtnl_unlock(); 1219 1223 if (result < 0) { 1220 1224 wiphy_debug(local->hw.wiphy, 1221 1225 "Failed to initialize rate control algorithm\n"); ··· 1269 1273 local->sband_allocated |= BIT(band); 1270 1274 } 1271 1275 1276 + result = wiphy_register(local->hw.wiphy); 1277 + if (result < 0) 1278 + goto fail_wiphy_register; 1279 + 1280 + rtnl_lock(); 1281 + 1272 1282 /* add one default STA interface if supported */ 1273 1283 if (local->hw.wiphy->interface_modes & BIT(NL80211_IFTYPE_STATION) && 1274 1284 !ieee80211_hw_check(hw, NO_AUTO_VIF)) { ··· 1314 1312 #if defined(CONFIG_INET) || defined(CONFIG_IPV6) 1315 1313 fail_ifa: 1316 1314 #endif 1315 + wiphy_unregister(local->hw.wiphy); 1316 + fail_wiphy_register: 1317 1317 rtnl_lock(); 1318 1318 rate_control_deinitialize(local); 1319 1319 ieee80211_remove_interfaces(local); 1320 - fail_rate: 1321 1320 rtnl_unlock(); 1321 + fail_rate: 1322 1322 fail_flows: 1323 1323 ieee80211_led_exit(local); 1324 1324 destroy_workqueue(local->workqueue); 1325 1325 fail_workqueue: 1326 - wiphy_unregister(local->hw.wiphy); 1327 - fail_wiphy_register: 1328 1326 if (local->wiphy_ciphers_allocated) 1329 1327 kfree(local->hw.wiphy->cipher_suites); 1330 1328 kfree(local->int_scan_req); ··· 1374 1372 skb_queue_purge(&local->skb_queue_unreliable); 1375 1373 skb_queue_purge(&local->skb_queue_tdls_chsw); 1376 1374 1377 - destroy_workqueue(local->workqueue); 1378 1375 wiphy_unregister(local->hw.wiphy); 1376 + destroy_workqueue(local->workqueue); 1379 1377 ieee80211_led_exit(local); 1380 1378 kfree(local->int_scan_req); 1381 1379 }
+7 -4
net/mac80211/mesh.c
··· 1257 1257 sdata->u.mesh.mshcfg.rssi_threshold < rx_status->signal) 1258 1258 mesh_neighbour_update(sdata, mgmt->sa, &elems, 1259 1259 rx_status); 1260 + 1261 + if (ifmsh->csa_role != IEEE80211_MESH_CSA_ROLE_INIT && 1262 + !sdata->vif.csa_active) 1263 + ieee80211_mesh_process_chnswitch(sdata, &elems, true); 1260 1264 } 1261 1265 1262 1266 if (ifmsh->sync_ops) 1263 1267 ifmsh->sync_ops->rx_bcn_presp(sdata, 1264 1268 stype, mgmt, &elems, rx_status); 1265 - 1266 - if (ifmsh->csa_role != IEEE80211_MESH_CSA_ROLE_INIT && 1267 - !sdata->vif.csa_active) 1268 - ieee80211_mesh_process_chnswitch(sdata, &elems, true); 1269 1269 } 1270 1270 1271 1271 int ieee80211_mesh_finish_csa(struct ieee80211_sub_if_data *sdata) ··· 1372 1372 u.action.u.chan_switch.variable); 1373 1373 ieee802_11_parse_elems(pos, len - baselen, true, &elems, 1374 1374 mgmt->bssid, NULL); 1375 + 1376 + if (!mesh_matches_local(sdata, &elems)) 1377 + return; 1375 1378 1376 1379 ifmsh->chsw_ttl = elems.mesh_chansw_params_ie->mesh_ttl; 1377 1380 if (!--ifmsh->chsw_ttl)
+13 -12
net/mptcp/protocol.c
··· 97 97 if (likely(!__mptcp_needs_tcp_fallback(msk))) 98 98 return NULL; 99 99 100 - if (msk->subflow) { 101 - release_sock((struct sock *)msk); 102 - return msk->subflow; 103 - } 104 - 105 - return NULL; 100 + return msk->subflow; 106 101 } 107 102 108 103 static bool __mptcp_can_create_subflow(const struct mptcp_sock *msk) ··· 729 734 goto out; 730 735 } 731 736 737 + fallback: 732 738 ssock = __mptcp_tcp_fallback(msk); 733 739 if (unlikely(ssock)) { 734 - fallback: 740 + release_sock(sk); 735 741 pr_debug("fallback passthrough"); 736 742 ret = sock_sendmsg(ssock, msg); 737 743 return ret >= 0 ? ret + copied : (copied ? copied : ret); ··· 765 769 if (ret < 0) 766 770 break; 767 771 if (ret == 0 && unlikely(__mptcp_needs_tcp_fallback(msk))) { 772 + /* Can happen for passive sockets: 773 + * 3WHS negotiated MPTCP, but first packet after is 774 + * plain TCP (e.g. due to middlebox filtering unknown 775 + * options). 776 + * 777 + * Fall back to TCP. 778 + */ 768 779 release_sock(ssk); 769 - ssock = __mptcp_tcp_fallback(msk); 770 780 goto fallback; 771 781 } 772 782 ··· 885 883 ssock = __mptcp_tcp_fallback(msk); 886 884 if (unlikely(ssock)) { 887 885 fallback: 886 + release_sock(sk); 888 887 pr_debug("fallback-read subflow=%p", 889 888 mptcp_subflow_ctx(ssock->sk)); 890 889 copied = sock_recvmsg(ssock, msg, flags); ··· 1470 1467 */ 1471 1468 lock_sock(sk); 1472 1469 ssock = __mptcp_tcp_fallback(msk); 1470 + release_sock(sk); 1473 1471 if (ssock) 1474 1472 return tcp_setsockopt(ssock->sk, level, optname, optval, 1475 1473 optlen); 1476 - 1477 - release_sock(sk); 1478 1474 1479 1475 return -EOPNOTSUPP; 1480 1476 } ··· 1494 1492 */ 1495 1493 lock_sock(sk); 1496 1494 ssock = __mptcp_tcp_fallback(msk); 1495 + release_sock(sk); 1497 1496 if (ssock) 1498 1497 return tcp_getsockopt(ssock->sk, level, optname, optval, 1499 1498 option); 1500 - 1501 - release_sock(sk); 1502 1499 1503 1500 return -EOPNOTSUPP; 1504 1501 }
+2 -1
net/netfilter/ipset/ip_set_core.c
··· 86 86 { 87 87 struct ip_set_type *type; 88 88 89 - list_for_each_entry_rcu(type, &ip_set_type_list, list) 89 + list_for_each_entry_rcu(type, &ip_set_type_list, list, 90 + lockdep_is_held(&ip_set_type_mutex)) 90 91 if (STRNCMP(type->name, name) && 91 92 (type->family == family || 92 93 type->family == NFPROTO_UNSPEC) &&
+4 -3
net/netfilter/nf_tables_api.c
··· 3542 3542 continue; 3543 3543 if (!strcmp(set->name, i->name)) { 3544 3544 kfree(set->name); 3545 + set->name = NULL; 3545 3546 return -ENFILE; 3546 3547 } 3547 3548 } ··· 3962 3961 if (flags & ~(NFT_SET_ANONYMOUS | NFT_SET_CONSTANT | 3963 3962 NFT_SET_INTERVAL | NFT_SET_TIMEOUT | 3964 3963 NFT_SET_MAP | NFT_SET_EVAL | 3965 - NFT_SET_OBJECT)) 3966 - return -EINVAL; 3964 + NFT_SET_OBJECT | NFT_SET_CONCAT)) 3965 + return -EOPNOTSUPP; 3967 3966 /* Only one of these operations is supported */ 3968 3967 if ((flags & (NFT_SET_MAP | NFT_SET_OBJECT)) == 3969 3968 (NFT_SET_MAP | NFT_SET_OBJECT)) ··· 4001 4000 objtype = ntohl(nla_get_be32(nla[NFTA_SET_OBJ_TYPE])); 4002 4001 if (objtype == NFT_OBJECT_UNSPEC || 4003 4002 objtype > NFT_OBJECT_MAX) 4004 - return -EINVAL; 4003 + return -EOPNOTSUPP; 4005 4004 } else if (flags & NFT_SET_OBJECT) 4006 4005 return -EINVAL; 4007 4006 else
+7 -5
net/netfilter/nft_lookup.c
··· 29 29 { 30 30 const struct nft_lookup *priv = nft_expr_priv(expr); 31 31 const struct nft_set *set = priv->set; 32 - const struct nft_set_ext *ext; 32 + const struct nft_set_ext *ext = NULL; 33 33 bool found; 34 34 35 35 found = set->ops->lookup(nft_net(pkt), set, &regs->data[priv->sreg], ··· 39 39 return; 40 40 } 41 41 42 - if (set->flags & NFT_SET_MAP) 43 - nft_data_copy(&regs->data[priv->dreg], 44 - nft_set_ext_data(ext), set->dlen); 42 + if (ext) { 43 + if (set->flags & NFT_SET_MAP) 44 + nft_data_copy(&regs->data[priv->dreg], 45 + nft_set_ext_data(ext), set->dlen); 45 46 46 - nft_set_elem_update_expr(ext, regs, pkt); 47 + nft_set_elem_update_expr(ext, regs, pkt); 48 + } 47 49 } 48 50 49 51 static const struct nla_policy nft_lookup_policy[NFTA_LOOKUP_MAX + 1] = {
-1
net/netfilter/nft_set_bitmap.c
··· 81 81 u32 idx, off; 82 82 83 83 nft_bitmap_location(set, key, &idx, &off); 84 - *ext = NULL; 85 84 86 85 return nft_bitmap_active(priv->bitmap, idx, off, genmask); 87 86 }
+11 -12
net/netfilter/nft_set_rbtree.c
··· 218 218 219 219 /* Detect overlaps as we descend the tree. Set the flag in these cases: 220 220 * 221 - * a1. |__ _ _? >|__ _ _ (insert start after existing start) 222 - * a2. _ _ __>| ?_ _ __| (insert end before existing end) 223 - * a3. _ _ ___| ?_ _ _>| (insert end after existing end) 224 - * a4. >|__ _ _ _ _ __| (insert start before existing end) 221 + * a1. _ _ __>| ?_ _ __| (insert end before existing end) 222 + * a2. _ _ ___| ?_ _ _>| (insert end after existing end) 223 + * a3. _ _ ___? >|_ _ __| (insert start before existing end) 225 224 * 226 225 * and clear it later on, as we eventually reach the points indicated by 227 226 * '?' above, in the cases described below. We'll always meet these 228 227 * later, locally, due to tree ordering, and overlaps for the intervals 229 228 * that are the closest together are always evaluated last. 230 229 * 231 - * b1. |__ _ _! >|__ _ _ (insert start after existing end) 232 - * b2. _ _ __>| !_ _ __| (insert end before existing start) 233 - * b3. !_____>| (insert end after existing start) 230 + * b1. _ _ __>| !_ _ __| (insert end before existing start) 231 + * b2. _ _ ___| !_ _ _>| (insert end after existing start) 232 + * b3. _ _ ___! >|_ _ __| (insert start after existing end) 234 233 * 235 - * Case a4. resolves to b1.: 234 + * Case a3. resolves to b3.: 236 235 * - if the inserted start element is the leftmost, because the '0' 237 236 * element in the tree serves as end element 238 237 * - otherwise, if an existing end is found. Note that end elements are 239 238 * always inserted after corresponding start elements. 240 239 * 241 - * For a new, rightmost pair of elements, we'll hit cases b1. and b3., 240 + * For a new, rightmost pair of elements, we'll hit cases b3. and b2., 242 241 * in that order. 243 242 * 244 243 * The flag is also cleared in two special cases: ··· 261 262 p = &parent->rb_left; 262 263 263 264 if (nft_rbtree_interval_start(new)) { 264 - overlap = nft_rbtree_interval_start(rbe) && 265 - nft_set_elem_active(&rbe->ext, 266 - genmask); 265 + if (nft_rbtree_interval_end(rbe) && 266 + nft_set_elem_active(&rbe->ext, genmask)) 267 + overlap = false; 267 268 } else { 268 269 overlap = nft_rbtree_interval_end(rbe) && 269 270 nft_set_elem_active(&rbe->ext,
+3
net/netfilter/xt_IDLETIMER.c
··· 346 346 347 347 pr_debug("checkentry targinfo%s\n", info->label); 348 348 349 + if (info->send_nl_msg) 350 + return -EOPNOTSUPP; 351 + 349 352 ret = idletimer_tg_helper((struct idletimer_tg_info *)info); 350 353 if(ret < 0) 351 354 {
+4 -3
net/qrtr/qrtr.c
··· 906 906 907 907 node = NULL; 908 908 if (addr->sq_node == QRTR_NODE_BCAST) { 909 - enqueue_fn = qrtr_bcast_enqueue; 910 - if (addr->sq_port != QRTR_PORT_CTRL) { 909 + if (addr->sq_port != QRTR_PORT_CTRL && 910 + qrtr_local_nid != QRTR_NODE_BCAST) { 911 911 release_sock(sk); 912 912 return -ENOTCONN; 913 913 } 914 + enqueue_fn = qrtr_bcast_enqueue; 914 915 } else if (addr->sq_node == ipc->us.sq_node) { 915 916 enqueue_fn = qrtr_local_enqueue; 916 917 } else { 917 - enqueue_fn = qrtr_node_enqueue; 918 918 node = qrtr_node_lookup(addr->sq_node); 919 919 if (!node) { 920 920 release_sock(sk); 921 921 return -ECONNRESET; 922 922 } 923 + enqueue_fn = qrtr_node_enqueue; 923 924 } 924 925 925 926 plen = (len + 3) & ~3;
+9 -16
net/rds/message.c
··· 1 1 /* 2 - * Copyright (c) 2006 Oracle. All rights reserved. 2 + * Copyright (c) 2006, 2020 Oracle and/or its affiliates. 3 3 * 4 4 * This software is available to you under a choice of one of two 5 5 * licenses. You may choose to be licensed under the terms of the GNU ··· 162 162 if (rm->rdma.op_active) 163 163 rds_rdma_free_op(&rm->rdma); 164 164 if (rm->rdma.op_rdma_mr) 165 - rds_mr_put(rm->rdma.op_rdma_mr); 165 + kref_put(&rm->rdma.op_rdma_mr->r_kref, __rds_put_mr_final); 166 166 167 167 if (rm->atomic.op_active) 168 168 rds_atomic_free_op(&rm->atomic); 169 169 if (rm->atomic.op_rdma_mr) 170 - rds_mr_put(rm->atomic.op_rdma_mr); 170 + kref_put(&rm->atomic.op_rdma_mr->r_kref, __rds_put_mr_final); 171 171 } 172 172 173 173 void rds_message_put(struct rds_message *rm) ··· 308 308 /* 309 309 * RDS ops use this to grab SG entries from the rm's sg pool. 310 310 */ 311 - struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents, 312 - int *ret) 311 + struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents) 313 312 { 314 313 struct scatterlist *sg_first = (struct scatterlist *) &rm[1]; 315 314 struct scatterlist *sg_ret; 316 315 317 - if (WARN_ON(!ret)) 318 - return NULL; 319 - 320 316 if (nents <= 0) { 321 317 pr_warn("rds: alloc sgs failed! nents <= 0\n"); 322 - *ret = -EINVAL; 323 - return NULL; 318 + return ERR_PTR(-EINVAL); 324 319 } 325 320 326 321 if (rm->m_used_sgs + nents > rm->m_total_sgs) { 327 322 pr_warn("rds: alloc sgs failed! total %d used %d nents %d\n", 328 323 rm->m_total_sgs, rm->m_used_sgs, nents); 329 - *ret = -ENOMEM; 330 - return NULL; 324 + return ERR_PTR(-ENOMEM); 331 325 } 332 326 333 327 sg_ret = &sg_first[rm->m_used_sgs]; ··· 337 343 unsigned int i; 338 344 int num_sgs = DIV_ROUND_UP(total_len, PAGE_SIZE); 339 345 int extra_bytes = num_sgs * sizeof(struct scatterlist); 340 - int ret; 341 346 342 347 rm = rds_message_alloc(extra_bytes, GFP_NOWAIT); 343 348 if (!rm) ··· 345 352 set_bit(RDS_MSG_PAGEVEC, &rm->m_flags); 346 353 rm->m_inc.i_hdr.h_len = cpu_to_be32(total_len); 347 354 rm->data.op_nents = DIV_ROUND_UP(total_len, PAGE_SIZE); 348 - rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs, &ret); 349 - if (!rm->data.op_sg) { 355 + rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs); 356 + if (IS_ERR(rm->data.op_sg)) { 350 357 rds_message_put(rm); 351 - return ERR_PTR(ret); 358 + return ERR_CAST(rm->data.op_sg); 352 359 } 353 360 354 361 for (i = 0; i < rm->data.op_nents; ++i) {
+35 -30
net/rds/rdma.c
··· 1 1 /* 2 - * Copyright (c) 2007, 2017 Oracle and/or its affiliates. All rights reserved. 2 + * Copyright (c) 2007, 2020 Oracle and/or its affiliates. 3 3 * 4 4 * This software is available to you under a choice of one of two 5 5 * licenses. You may choose to be licensed under the terms of the GNU ··· 84 84 if (insert) { 85 85 rb_link_node(&insert->r_rb_node, parent, p); 86 86 rb_insert_color(&insert->r_rb_node, root); 87 - refcount_inc(&insert->r_refcount); 87 + kref_get(&insert->r_kref); 88 88 } 89 89 return NULL; 90 90 } ··· 99 99 unsigned long flags; 100 100 101 101 rdsdebug("RDS: destroy mr key is %x refcnt %u\n", 102 - mr->r_key, refcount_read(&mr->r_refcount)); 103 - 104 - if (test_and_set_bit(RDS_MR_DEAD, &mr->r_state)) 105 - return; 102 + mr->r_key, kref_read(&mr->r_kref)); 106 103 107 104 spin_lock_irqsave(&rs->rs_rdma_lock, flags); 108 105 if (!RB_EMPTY_NODE(&mr->r_rb_node)) ··· 112 115 mr->r_trans->free_mr(trans_private, mr->r_invalidate); 113 116 } 114 117 115 - void __rds_put_mr_final(struct rds_mr *mr) 118 + void __rds_put_mr_final(struct kref *kref) 116 119 { 120 + struct rds_mr *mr = container_of(kref, struct rds_mr, r_kref); 121 + 117 122 rds_destroy_mr(mr); 118 123 kfree(mr); 119 124 } ··· 139 140 rb_erase(&mr->r_rb_node, &rs->rs_rdma_keys); 140 141 RB_CLEAR_NODE(&mr->r_rb_node); 141 142 spin_unlock_irqrestore(&rs->rs_rdma_lock, flags); 142 - rds_destroy_mr(mr); 143 - rds_mr_put(mr); 143 + kref_put(&mr->r_kref, __rds_put_mr_final); 144 144 spin_lock_irqsave(&rs->rs_rdma_lock, flags); 145 145 } 146 146 spin_unlock_irqrestore(&rs->rs_rdma_lock, flags); ··· 240 242 goto out; 241 243 } 242 244 243 - refcount_set(&mr->r_refcount, 1); 245 + kref_init(&mr->r_kref); 244 246 RB_CLEAR_NODE(&mr->r_rb_node); 245 247 mr->r_trans = rs->rs_transport; 246 248 mr->r_sock = rs; ··· 341 343 342 344 rdsdebug("RDS: get_mr key is %x\n", mr->r_key); 343 345 if (mr_ret) { 344 - refcount_inc(&mr->r_refcount); 346 + kref_get(&mr->r_kref); 345 347 *mr_ret = mr; 346 348 } 347 349 ··· 349 351 out: 350 352 kfree(pages); 351 353 if (mr) 352 - rds_mr_put(mr); 354 + kref_put(&mr->r_kref, __rds_put_mr_final); 353 355 return ret; 354 356 } 355 357 ··· 432 434 if (!mr) 433 435 return -EINVAL; 434 436 435 - /* 436 - * call rds_destroy_mr() ourselves so that we're sure it's done by the time 437 - * we return. If we let rds_mr_put() do it it might not happen until 438 - * someone else drops their ref. 439 - */ 440 - rds_destroy_mr(mr); 441 - rds_mr_put(mr); 437 + kref_put(&mr->r_kref, __rds_put_mr_final); 442 438 return 0; 443 439 } 444 440 ··· 456 464 return; 457 465 } 458 466 467 + /* Get a reference so that the MR won't go away before calling 468 + * sync_mr() below. 469 + */ 470 + kref_get(&mr->r_kref); 471 + 472 + /* If it is going to be freed, remove it from the tree now so 473 + * that no other thread can find it and free it. 474 + */ 459 475 if (mr->r_use_once || force) { 460 476 rb_erase(&mr->r_rb_node, &rs->rs_rdma_keys); 461 477 RB_CLEAR_NODE(&mr->r_rb_node); ··· 477 477 if (mr->r_trans->sync_mr) 478 478 mr->r_trans->sync_mr(mr->r_trans_private, DMA_FROM_DEVICE); 479 479 480 + /* Release the reference held above. */ 481 + kref_put(&mr->r_kref, __rds_put_mr_final); 482 + 480 483 /* If the MR was marked as invalidate, this will 481 484 * trigger an async flush. */ 482 - if (zot_me) { 483 - rds_destroy_mr(mr); 484 - rds_mr_put(mr); 485 - } 485 + if (zot_me) 486 + kref_put(&mr->r_kref, __rds_put_mr_final); 486 487 } 487 488 488 489 void rds_rdma_free_op(struct rm_rdma_op *ro) ··· 491 490 unsigned int i; 492 491 493 492 if (ro->op_odp_mr) { 494 - rds_mr_put(ro->op_odp_mr); 493 + kref_put(&ro->op_odp_mr->r_kref, __rds_put_mr_final); 495 494 } else { 496 495 for (i = 0; i < ro->op_nents; i++) { 497 496 struct page *page = sg_page(&ro->op_sg[i]); ··· 665 664 op->op_odp_mr = NULL; 666 665 667 666 WARN_ON(!nr_pages); 668 - op->op_sg = rds_message_alloc_sgs(rm, nr_pages, &ret); 669 - if (!op->op_sg) 667 + op->op_sg = rds_message_alloc_sgs(rm, nr_pages); 668 + if (IS_ERR(op->op_sg)) { 669 + ret = PTR_ERR(op->op_sg); 670 670 goto out_pages; 671 + } 671 672 672 673 if (op->op_notify || op->op_recverr) { 673 674 /* We allocate an uninitialized notifier here, because ··· 733 730 goto out_pages; 734 731 } 735 732 RB_CLEAR_NODE(&local_odp_mr->r_rb_node); 736 - refcount_set(&local_odp_mr->r_refcount, 1); 733 + kref_init(&local_odp_mr->r_kref); 737 734 local_odp_mr->r_trans = rs->rs_transport; 738 735 local_odp_mr->r_sock = rs; 739 736 local_odp_mr->r_trans_private = ··· 830 827 if (!mr) 831 828 err = -EINVAL; /* invalid r_key */ 832 829 else 833 - refcount_inc(&mr->r_refcount); 830 + kref_get(&mr->r_kref); 834 831 spin_unlock_irqrestore(&rs->rs_rdma_lock, flags); 835 832 836 833 if (mr) { ··· 908 905 rm->atomic.op_silent = !!(args->flags & RDS_RDMA_SILENT); 909 906 rm->atomic.op_active = 1; 910 907 rm->atomic.op_recverr = rs->rs_recverr; 911 - rm->atomic.op_sg = rds_message_alloc_sgs(rm, 1, &ret); 912 - if (!rm->atomic.op_sg) 908 + rm->atomic.op_sg = rds_message_alloc_sgs(rm, 1); 909 + if (IS_ERR(rm->atomic.op_sg)) { 910 + ret = PTR_ERR(rm->atomic.op_sg); 913 911 goto err; 912 + } 914 913 915 914 /* verify 8 byte-aligned */ 916 915 if (args->local_addr & 0x7) {
+3 -17
net/rds/rds.h
···
 struct rds_mr {
     struct rb_node        r_rb_node;
-    refcount_t            r_refcount;
+    struct kref           r_kref;
     u32                   r_key;

     /* A copy of the creation flags */
···
     unsigned int          r_invalidate:1;
     unsigned int          r_write:1;

-    /* This is for RDS_MR_DEAD.
-     * It would be nice & consistent to make this part of the above
-     * bit field here, but we need to use test_and_set_bit.
-     */
-    unsigned long         r_state;
     struct rds_sock       *r_sock; /* back pointer to the socket that owns us */
     struct rds_transport  *r_trans;
     void                  *r_trans_private;
 };
-
-/* Flags for mr->r_state */
-#define RDS_MR_DEAD       0

 static inline rds_rdma_cookie_t rds_rdma_make_cookie(u32 r_key, u32 offset)
 {
···
 /* message.c */
 struct rds_message *rds_message_alloc(unsigned int nents, gfp_t gfp);
-struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents,
-                                          int *ret);
+struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents);
 int rds_message_copy_from_user(struct rds_message *rm, struct iov_iter *from,
                                bool zcopy);
 struct rds_message *rds_message_map_pages(unsigned long *page_addrs, unsigned int total_len);
···
 int rds_cmsg_atomic(struct rds_sock *rs, struct rds_message *rm,
                     struct cmsghdr *cmsg);

-void __rds_put_mr_final(struct rds_mr *mr);
-static inline void rds_mr_put(struct rds_mr *mr)
-{
-    if (refcount_dec_and_test(&mr->r_refcount))
-        __rds_put_mr_final(mr);
-}
+void __rds_put_mr_final(struct kref *kref);

 static inline bool rds_destroy_pending(struct rds_connection *conn)
 {
+4 -2
net/rds/send.c
···
     /* Attach data to the rm */
     if (payload_len) {
-        rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs, &ret);
-        if (!rm->data.op_sg)
+        rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs);
+        if (IS_ERR(rm->data.op_sg)) {
+            ret = PTR_ERR(rm->data.op_sg);
             goto out;
+        }
         ret = rds_message_copy_from_user(rm, &msg->msg_iter, zcopy);
         if (ret)
             goto out;
-9
net/rxrpc/local_object.c
···
         goto error;
     }

-    /* we want to set the don't fragment bit */
-    opt = IPV6_PMTUDISC_DO;
-    ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_MTU_DISCOVER,
-                            (char *) &opt, sizeof(opt));
-    if (ret < 0) {
-        _debug("setsockopt failed");
-        goto error;
-    }
-
     /* Fall through and set IPv4 options too otherwise we don't get
      * errors from IPv4 packets sent through the IPv6 socket.
      */
+11 -31
net/rxrpc/output.c
···
     skb->tstamp = ktime_get_real();

     switch (conn->params.local->srx.transport.family) {
+    case AF_INET6:
     case AF_INET:
         opt = IP_PMTUDISC_DONT;
-        ret = kernel_setsockopt(conn->params.local->socket,
-                                SOL_IP, IP_MTU_DISCOVER,
-                                (char *)&opt, sizeof(opt));
-        if (ret == 0) {
-            ret = kernel_sendmsg(conn->params.local->socket, &msg,
-                                 iov, 2, len);
-            conn->params.peer->last_tx_at = ktime_get_seconds();
+        kernel_setsockopt(conn->params.local->socket,
+                          SOL_IP, IP_MTU_DISCOVER,
+                          (char *)&opt, sizeof(opt));
+        ret = kernel_sendmsg(conn->params.local->socket, &msg,
+                             iov, 2, len);
+        conn->params.peer->last_tx_at = ktime_get_seconds();

-            opt = IP_PMTUDISC_DO;
-            kernel_setsockopt(conn->params.local->socket, SOL_IP,
-                              IP_MTU_DISCOVER,
-                              (char *)&opt, sizeof(opt));
-        }
+        opt = IP_PMTUDISC_DO;
+        kernel_setsockopt(conn->params.local->socket,
+                          SOL_IP, IP_MTU_DISCOVER,
+                          (char *)&opt, sizeof(opt));
         break;
-
-#ifdef CONFIG_AF_RXRPC_IPV6
-    case AF_INET6:
-        opt = IPV6_PMTUDISC_DONT;
-        ret = kernel_setsockopt(conn->params.local->socket,
-                                SOL_IPV6, IPV6_MTU_DISCOVER,
-                                (char *)&opt, sizeof(opt));
-        if (ret == 0) {
-            ret = kernel_sendmsg(conn->params.local->socket, &msg,
-                                 iov, 2, len);
-            conn->params.peer->last_tx_at = ktime_get_seconds();
-
-            opt = IPV6_PMTUDISC_DO;
-            kernel_setsockopt(conn->params.local->socket,
-                              SOL_IPV6, IPV6_MTU_DISCOVER,
-                              (char *)&opt, sizeof(opt));
-        }
-        break;
-#endif

     default:
         BUG();
+1
net/sched/cls_api.c
···
         skb_ext_del(skb, TC_SKB_EXT);

         tp = rcu_dereference_bh(fchain->filter_chain);
+        last_executed_chain = fchain->index;
     }

     ret = __tcf_classify(skb, tp, orig_tp, res, compat_mode,
+1 -1
net/tipc/link.c
···
     /* Enter fast recovery */
     if (unlikely(retransmitted)) {
         l->ssthresh = max_t(u16, l->window / 2, 300);
-        l->window = l->ssthresh;
+        l->window = min_t(u16, l->ssthresh, l->window);
         return;
     }
     /* Enter slow start */
+2 -2
net/tls/tls_main.c
···
     TLS_NUM_PROTS,
 };

-static struct proto *saved_tcpv6_prot;
+static const struct proto *saved_tcpv6_prot;
 static DEFINE_MUTEX(tcpv6_prot_mutex);
-static struct proto *saved_tcpv4_prot;
+static const struct proto *saved_tcpv4_prot;
 static DEFINE_MUTEX(tcpv4_prot_mutex);
 static struct proto tls_prots[TLS_NUM_PROTS][TLS_NUM_CONFIG][TLS_NUM_CONFIG];
 static struct proto_ops tls_sw_proto_ops;
+2 -4
net/wireless/nl80211.c
···
     [NL80211_ATTR_HE_CAPABILITY] = { .type = NLA_BINARY,
                                      .len = NL80211_HE_MAX_CAPABILITY_LEN },

-    [NL80211_ATTR_FTM_RESPONDER] = {
-        .type = NLA_NESTED,
-        .validation_data = nl80211_ftm_responder_policy,
-    },
+    [NL80211_ATTR_FTM_RESPONDER] =
+        NLA_POLICY_NESTED(nl80211_ftm_responder_policy),
     [NL80211_ATTR_TIMEOUT] = NLA_POLICY_MIN(NLA_U32, 1),
     [NL80211_ATTR_PEER_MEASUREMENTS] =
         NLA_POLICY_NESTED(nl80211_pmsr_attr_policy),
+2 -3
net/xdp/xdp_umem.c
···
     u32 chunk_size = mr->chunk_size, headroom = mr->headroom;
     unsigned int chunks, chunks_per_page;
     u64 addr = mr->addr, size = mr->len;
-    int size_chk, err;
+    int err;

     if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) {
         /* Strictly speaking we could support this, if:
···
         return -EINVAL;
     }

-    size_chk = chunk_size - headroom - XDP_PACKET_HEADROOM;
-    if (size_chk < 0)
+    if (headroom >= chunk_size - XDP_PACKET_HEADROOM)
         return -EINVAL;

     umem->address = (unsigned long)addr;
+3 -2
net/xdp/xsk.c
···
         u64 page_start = addr & ~(PAGE_SIZE - 1);
         u64 first_len = PAGE_SIZE - (addr - page_start);

-        memcpy(to_buf, from_buf, first_len + metalen);
-        memcpy(next_pg_addr, from_buf + first_len, len - first_len);
+        memcpy(to_buf, from_buf, first_len);
+        memcpy(next_pg_addr, from_buf + first_len,
+               len + metalen - first_len);

         return;
     }
+3 -1
tools/bpf/bpftool/struct_ops.c
···
     err = cmd_select(cmds, argc, argv, do_help);

-    btf__free(btf_vmlinux);
+    if (!IS_ERR(btf_vmlinux))
+        btf__free(btf_vmlinux);
+
     return err;
 }
+82 -44
tools/lib/bpf/libbpf.c
···
     __u32 array_mmap:1;
     /* BTF_FUNC_GLOBAL is supported */
     __u32 btf_func_global:1;
+    /* kernel support for expected_attach_type in BPF_PROG_LOAD */
+    __u32 exp_attach_type:1;
 };

 enum reloc_type {
···
     int sym_off;
 };

+struct bpf_sec_def;
+
+typedef struct bpf_link *(*attach_fn_t)(const struct bpf_sec_def *sec,
+                                        struct bpf_program *prog);
+
+struct bpf_sec_def {
+    const char *sec;
+    size_t len;
+    enum bpf_prog_type prog_type;
+    enum bpf_attach_type expected_attach_type;
+    bool is_exp_attach_type_optional;
+    bool is_attachable;
+    bool is_attach_btf;
+    attach_fn_t attach_fn;
+};
+
 /*
  * bpf_prog should be a better name but it has been used in
  * linux/filter.h.
···
     char *name;
     int prog_ifindex;
     char *section_name;
+    const struct bpf_sec_def *sec_def;
     /* section_name with / replaced by _; makes recursive pinning
      * in bpf_object__pin_programs easier
      */
···
 }

 static int
+bpf_object__probe_exp_attach_type(struct bpf_object *obj)
+{
+    struct bpf_load_program_attr attr;
+    struct bpf_insn insns[] = {
+        BPF_MOV64_IMM(BPF_REG_0, 0),
+        BPF_EXIT_INSN(),
+    };
+    int fd;
+
+    memset(&attr, 0, sizeof(attr));
+    /* use any valid combination of program type and (optional)
+     * non-zero expected attach type (i.e., not a BPF_CGROUP_INET_INGRESS)
+     * to see if kernel supports expected_attach_type field for
+     * BPF_PROG_LOAD command
+     */
+    attr.prog_type = BPF_PROG_TYPE_CGROUP_SOCK;
+    attr.expected_attach_type = BPF_CGROUP_INET_SOCK_CREATE;
+    attr.insns = insns;
+    attr.insns_cnt = ARRAY_SIZE(insns);
+    attr.license = "GPL";
+
+    fd = bpf_load_program_xattr(&attr, NULL, 0);
+    if (fd >= 0) {
+        obj->caps.exp_attach_type = 1;
+        close(fd);
+        return 1;
+    }
+    return 0;
+}
+
+static int
 bpf_object__probe_caps(struct bpf_object *obj)
 {
     int (*probe_fn[])(struct bpf_object *obj) = {
···
         bpf_object__probe_btf_func_global,
         bpf_object__probe_btf_datasec,
         bpf_object__probe_array_mmap,
+        bpf_object__probe_exp_attach_type,
     };
     int i, ret;
···
     memset(&load_attr, 0, sizeof(struct bpf_load_program_attr));
     load_attr.prog_type = prog->type;
-    load_attr.expected_attach_type = prog->expected_attach_type;
+    /* old kernels might not support specifying expected_attach_type */
+    if (!prog->caps->exp_attach_type && prog->sec_def &&
+        prog->sec_def->is_exp_attach_type_optional)
+        load_attr.expected_attach_type = 0;
+    else
+        load_attr.expected_attach_type = prog->expected_attach_type;
     if (prog->caps->name)
         load_attr.name = prog->name;
     load_attr.insns = insns;
···
     return 0;
 }

+static const struct bpf_sec_def *find_sec_def(const char *sec_name);
+
 static struct bpf_object *
 __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
                    const struct bpf_object_open_opts *opts)
···
     bpf_object__elf_finish(obj);

     bpf_object__for_each_program(prog, obj) {
-        enum bpf_prog_type prog_type;
-        enum bpf_attach_type attach_type;
-
-        if (prog->type != BPF_PROG_TYPE_UNSPEC)
-            continue;
-
-        err = libbpf_prog_type_by_name(prog->section_name, &prog_type,
-                                       &attach_type);
-        if (err == -ESRCH)
+        prog->sec_def = find_sec_def(prog->section_name);
+        if (!prog->sec_def)
             /* couldn't guess, but user might manually specify */
             continue;
-        if (err)
-            goto out;

-        bpf_program__set_type(prog, prog_type);
-        bpf_program__set_expected_attach_type(prog, attach_type);
-        if (prog_type == BPF_PROG_TYPE_TRACING ||
-            prog_type == BPF_PROG_TYPE_EXT)
+        bpf_program__set_type(prog, prog->sec_def->prog_type);
+        bpf_program__set_expected_attach_type(prog,
+                prog->sec_def->expected_attach_type);
+
+        if (prog->sec_def->prog_type == BPF_PROG_TYPE_TRACING ||
+            prog->sec_def->prog_type == BPF_PROG_TYPE_EXT)
             prog->attach_prog_fd = OPTS_GET(opts, attach_prog_fd, 0);
     }
···
     prog->expected_attach_type = type;
 }

-#define BPF_PROG_SEC_IMPL(string, ptype, eatype, is_attachable, btf, atype) \
-    { string, sizeof(string) - 1, ptype, eatype, is_attachable, btf, atype }
+#define BPF_PROG_SEC_IMPL(string, ptype, eatype, eatype_optional,   \
+                          attachable, attach_btf)                   \
+    {                                                               \
+        .sec = string,                                              \
+        .len = sizeof(string) - 1,                                  \
+        .prog_type = ptype,                                         \
+        .expected_attach_type = eatype,                             \
+        .is_exp_attach_type_optional = eatype_optional,             \
+        .is_attachable = attachable,                                \
+        .is_attach_btf = attach_btf,                                \
+    }

 /* Programs that can NOT be attached. */
 #define BPF_PROG_SEC(string, ptype) BPF_PROG_SEC_IMPL(string, ptype, 0, 0, 0, 0)

 /* Programs that can be attached. */
 #define BPF_APROG_SEC(string, ptype, atype) \
-    BPF_PROG_SEC_IMPL(string, ptype, 0, 1, 0, atype)
+    BPF_PROG_SEC_IMPL(string, ptype, atype, true, 1, 0)

 /* Programs that must specify expected attach type at load time. */
 #define BPF_EAPROG_SEC(string, ptype, eatype) \
-    BPF_PROG_SEC_IMPL(string, ptype, eatype, 1, 0, eatype)
+    BPF_PROG_SEC_IMPL(string, ptype, eatype, false, 1, 0)

 /* Programs that use BTF to identify attach point */
 #define BPF_PROG_BTF(string, ptype, eatype) \
-    BPF_PROG_SEC_IMPL(string, ptype, eatype, 0, 1, 0)
+    BPF_PROG_SEC_IMPL(string, ptype, eatype, false, 0, 1)

 /* Programs that can be attached but attach type can't be identified by section
  * name. Kept for backward compatibility.
···
         __VA_ARGS__ \
     }

-struct bpf_sec_def;
-
-typedef struct bpf_link *(*attach_fn_t)(const struct bpf_sec_def *sec,
-                                        struct bpf_program *prog);
-
 static struct bpf_link *attach_kprobe(const struct bpf_sec_def *sec,
                                       struct bpf_program *prog);
 static struct bpf_link *attach_tp(const struct bpf_sec_def *sec,
···
                                   struct bpf_program *prog);
 static struct bpf_link *attach_lsm(const struct bpf_sec_def *sec,
                                    struct bpf_program *prog);
-
-struct bpf_sec_def {
-    const char *sec;
-    size_t len;
-    enum bpf_prog_type prog_type;
-    enum bpf_attach_type expected_attach_type;
-    bool is_attachable;
-    bool is_attach_btf;
-    enum bpf_attach_type attach_type;
-    attach_fn_t attach_fn;
-};

 static const struct bpf_sec_def section_defs[] = {
     BPF_PROG_SEC("socket", BPF_PROG_TYPE_SOCKET_FILTER),
···
             continue;
         if (!section_defs[i].is_attachable)
             return -EINVAL;
-        *attach_type = section_defs[i].attach_type;
+        *attach_type = section_defs[i].expected_attach_type;
         return 0;
     }
     pr_debug("failed to guess attach type based on ELF section name '%s'\n", name);
···
 struct bpf_link *
 bpf_program__attach_cgroup(struct bpf_program *prog, int cgroup_fd)
 {
-    const struct bpf_sec_def *sec_def;
     enum bpf_attach_type attach_type;
     char errmsg[STRERR_BUFSIZE];
     struct bpf_link *link;
···
     link->detach = &bpf_link__detach_fd;

     attach_type = bpf_program__get_expected_attach_type(prog);
-    if (!attach_type) {
-        sec_def = find_sec_def(bpf_program__title(prog, false));
-        if (sec_def)
-            attach_type = sec_def->attach_type;
-    }
     link_fd = bpf_link_create(prog_fd, cgroup_fd, attach_type, NULL);
     if (link_fd < 0) {
         link_fd = -errno;
+1 -1
tools/lib/bpf/libbpf.h
···
 struct bpf_xdp_set_link_opts {
     size_t sz;
-    __u32 old_fd;
+    int old_fd;
 };
 #define bpf_xdp_set_link_opts__last_field old_fd
+3 -3
tools/lib/bpf/netlink.c
···
         struct ifinfomsg ifinfo;
         char             attrbuf[64];
     } req;
-    __u32 nl_pid;
+    __u32 nl_pid = 0;

     sock = libbpf_netlink_open(&nl_pid);
     if (sock < 0)
···
 {
     struct xdp_id_md xdp_id = {};
     int sock, ret;
-    __u32 nl_pid;
+    __u32 nl_pid = 0;
     __u32 mask;

     if (flags & ~XDP_FLAGS_MASK || !info_size)
···
 static __u32 get_xdp_id(struct xdp_link_info *info, __u32 flags)
 {
-    if (info->attach_mode != XDP_ATTACHED_MULTI)
+    if (info->attach_mode != XDP_ATTACHED_MULTI && !flags)
         return info->prog_id;
     if (flags & XDP_FLAGS_DRV_MODE)
         return info->drv_prog_id;
     if (flags & XDP_FLAGS_SKB_MODE)
         return info->skb_prog_id;
+60 -2
tools/testing/selftests/bpf/prog_tests/mmap.c
···
     const size_t map_sz = roundup_page(sizeof(struct map_data));
     const int zero = 0, one = 1, two = 2, far = 1500;
     const long page_size = sysconf(_SC_PAGE_SIZE);
-    int err, duration = 0, i, data_map_fd;
+    int err, duration = 0, i, data_map_fd, data_map_id, tmp_fd;
     struct bpf_map *data_map, *bss_map;
     void *bss_mmaped = NULL, *map_mmaped = NULL, *tmp1, *tmp2;
     struct test_mmap__bss *bss_data;
+    struct bpf_map_info map_info;
+    __u32 map_info_sz = sizeof(map_info);
     struct map_data *map_data;
     struct test_mmap *skel;
     __u64 val = 0;
-

     skel = test_mmap__open_and_load();
     if (CHECK(!skel, "skel_open_and_load", "skeleton open/load failed\n"))
···
     data_map = skel->maps.data_map;
     data_map_fd = bpf_map__fd(data_map);

+    /* get map's ID */
+    memset(&map_info, 0, map_info_sz);
+    err = bpf_obj_get_info_by_fd(data_map_fd, &map_info, &map_info_sz);
+    if (CHECK(err, "map_get_info", "failed %d\n", errno))
+        goto cleanup;
+    data_map_id = map_info.id;
+
+    /* mmap BSS map */
     bss_mmaped = mmap(NULL, bss_sz, PROT_READ | PROT_WRITE, MAP_SHARED,
                       bpf_map__fd(bss_map), 0);
     if (CHECK(bss_mmaped == MAP_FAILED, "bss_mmap",
···
               "data_map freeze succeeded: err=%d, errno=%d\n", err, errno))
         goto cleanup;

+    err = mprotect(map_mmaped, map_sz, PROT_READ);
+    if (CHECK(err, "mprotect_ro", "mprotect to r/o failed %d\n", errno))
+        goto cleanup;
+
     /* unmap R/W mapping */
     err = munmap(map_mmaped, map_sz);
     map_mmaped = NULL;
···
         map_mmaped = NULL;
         goto cleanup;
     }
+    err = mprotect(map_mmaped, map_sz, PROT_WRITE);
+    if (CHECK(!err, "mprotect_wr", "mprotect() succeeded unexpectedly!\n"))
+        goto cleanup;
+    err = mprotect(map_mmaped, map_sz, PROT_EXEC);
+    if (CHECK(!err, "mprotect_ex", "mprotect() succeeded unexpectedly!\n"))
+        goto cleanup;
     map_data = map_mmaped;

     /* map/unmap in a loop to test ref counting */
···
     CHECK_FAIL(map_data->val[far] != 3 * 321);

     munmap(tmp2, 4 * page_size);
+
+    tmp1 = mmap(NULL, map_sz, PROT_READ, MAP_SHARED, data_map_fd, 0);
+    if (CHECK(tmp1 == MAP_FAILED, "last_mmap", "failed %d\n", errno))
+        goto cleanup;
+
+    test_mmap__destroy(skel);
+    skel = NULL;
+    CHECK_FAIL(munmap(bss_mmaped, bss_sz));
+    bss_mmaped = NULL;
+    CHECK_FAIL(munmap(map_mmaped, map_sz));
+    map_mmaped = NULL;
+
+    /* map should be still held by active mmap */
+    tmp_fd = bpf_map_get_fd_by_id(data_map_id);
+    if (CHECK(tmp_fd < 0, "get_map_by_id", "failed %d\n", errno)) {
+        munmap(tmp1, map_sz);
+        goto cleanup;
+    }
+    close(tmp_fd);
+
+    /* this should release data map finally */
+    munmap(tmp1, map_sz);
+
+    /* we need to wait for RCU grace period */
+    for (i = 0; i < 10000; i++) {
+        __u32 id = data_map_id - 1;
+        if (bpf_map_get_next_id(id, &id) || id > data_map_id)
+            break;
+        usleep(1);
+    }
+
+    /* should fail to get map FD by non-existing ID */
+    tmp_fd = bpf_map_get_fd_by_id(data_map_id);
+    if (CHECK(tmp_fd >= 0, "get_map_by_id_after",
+              "unexpectedly succeeded %d\n", tmp_fd)) {
+        close(tmp_fd);
+        goto cleanup;
+    }
+
 cleanup:
     if (bss_mmaped)
         CHECK_FAIL(munmap(bss_mmaped, bss_sz));
+27 -15
tools/testing/selftests/bpf/prog_tests/section_names.c
···
     {"lwt_seg6local", {0, BPF_PROG_TYPE_LWT_SEG6LOCAL, 0}, {-EINVAL, 0} },
     {
         "cgroup_skb/ingress",
-        {0, BPF_PROG_TYPE_CGROUP_SKB, 0},
+        {0, BPF_PROG_TYPE_CGROUP_SKB, BPF_CGROUP_INET_INGRESS},
         {0, BPF_CGROUP_INET_INGRESS},
     },
     {
         "cgroup_skb/egress",
-        {0, BPF_PROG_TYPE_CGROUP_SKB, 0},
+        {0, BPF_PROG_TYPE_CGROUP_SKB, BPF_CGROUP_INET_EGRESS},
         {0, BPF_CGROUP_INET_EGRESS},
     },
     {"cgroup/skb", {0, BPF_PROG_TYPE_CGROUP_SKB, 0}, {-EINVAL, 0} },
     {
         "cgroup/sock",
-        {0, BPF_PROG_TYPE_CGROUP_SOCK, 0},
+        {0, BPF_PROG_TYPE_CGROUP_SOCK, BPF_CGROUP_INET_SOCK_CREATE},
         {0, BPF_CGROUP_INET_SOCK_CREATE},
     },
     {
···
     },
     {
         "cgroup/dev",
-        {0, BPF_PROG_TYPE_CGROUP_DEVICE, 0},
+        {0, BPF_PROG_TYPE_CGROUP_DEVICE, BPF_CGROUP_DEVICE},
         {0, BPF_CGROUP_DEVICE},
     },
-    {"sockops", {0, BPF_PROG_TYPE_SOCK_OPS, 0}, {0, BPF_CGROUP_SOCK_OPS} },
+    {
+        "sockops",
+        {0, BPF_PROG_TYPE_SOCK_OPS, BPF_CGROUP_SOCK_OPS},
+        {0, BPF_CGROUP_SOCK_OPS},
+    },
     {
         "sk_skb/stream_parser",
-        {0, BPF_PROG_TYPE_SK_SKB, 0},
+        {0, BPF_PROG_TYPE_SK_SKB, BPF_SK_SKB_STREAM_PARSER},
         {0, BPF_SK_SKB_STREAM_PARSER},
     },
     {
         "sk_skb/stream_verdict",
-        {0, BPF_PROG_TYPE_SK_SKB, 0},
+        {0, BPF_PROG_TYPE_SK_SKB, BPF_SK_SKB_STREAM_VERDICT},
         {0, BPF_SK_SKB_STREAM_VERDICT},
     },
     {"sk_skb", {0, BPF_PROG_TYPE_SK_SKB, 0}, {-EINVAL, 0} },
-    {"sk_msg", {0, BPF_PROG_TYPE_SK_MSG, 0}, {0, BPF_SK_MSG_VERDICT} },
-    {"lirc_mode2", {0, BPF_PROG_TYPE_LIRC_MODE2, 0}, {0, BPF_LIRC_MODE2} },
+    {
+        "sk_msg",
+        {0, BPF_PROG_TYPE_SK_MSG, BPF_SK_MSG_VERDICT},
+        {0, BPF_SK_MSG_VERDICT},
+    },
+    {
+        "lirc_mode2",
+        {0, BPF_PROG_TYPE_LIRC_MODE2, BPF_LIRC_MODE2},
+        {0, BPF_LIRC_MODE2},
+    },
     {
         "flow_dissector",
-        {0, BPF_PROG_TYPE_FLOW_DISSECTOR, 0},
+        {0, BPF_PROG_TYPE_FLOW_DISSECTOR, BPF_FLOW_DISSECTOR},
         {0, BPF_FLOW_DISSECTOR},
     },
     {
···
                                      &expected_attach_type);

     CHECK(rc != test->expected_load.rc, "check_code",
-          "prog: unexpected rc=%d for %s", rc, test->sec_name);
+          "prog: unexpected rc=%d for %s\n", rc, test->sec_name);

     if (rc)
         return;

     CHECK(prog_type != test->expected_load.prog_type, "check_prog_type",
-          "prog: unexpected prog_type=%d for %s",
+          "prog: unexpected prog_type=%d for %s\n",
           prog_type, test->sec_name);

     CHECK(expected_attach_type != test->expected_load.expected_attach_type,
-          "check_attach_type", "prog: unexpected expected_attach_type=%d for %s",
+          "check_attach_type", "prog: unexpected expected_attach_type=%d for %s\n",
           expected_attach_type, test->sec_name);
 }
···
     rc = libbpf_attach_type_by_name(test->sec_name, &attach_type);

     CHECK(rc != test->expected_attach.rc, "check_ret",
-          "attach: unexpected rc=%d for %s", rc, test->sec_name);
+          "attach: unexpected rc=%d for %s\n", rc, test->sec_name);

     if (rc)
         return;

     CHECK(attach_type != test->expected_attach.attach_type,
-          "check_attach_type", "attach: unexpected attach_type=%d for %s",
+          "check_attach_type", "attach: unexpected attach_type=%d for %s\n",
           attach_type, test->sec_name);
 }
+9 -9
tools/testing/selftests/bpf/prog_tests/test_lsm.c
···
 char *CMD_ARGS[] = {"true", NULL};

-int heap_mprotect(void)
+#define GET_PAGE_ADDR(ADDR, PAGE_SIZE)                    \
+    (char *)(((unsigned long) (ADDR + PAGE_SIZE)) & ~(PAGE_SIZE-1))
+
+int stack_mprotect(void)
 {
     void *buf;
     long sz;
···
     if (sz < 0)
         return sz;

-    buf = memalign(sz, 2 * sz);
-    if (buf == NULL)
-        return -ENOMEM;
-
-    ret = mprotect(buf, sz, PROT_READ | PROT_WRITE | PROT_EXEC);
-    free(buf);
+    buf = alloca(sz * 3);
+    ret = mprotect(GET_PAGE_ADDR(buf, sz), sz,
+                   PROT_READ | PROT_WRITE | PROT_EXEC);
     return ret;
 }
···
     skel->bss->monitored_pid = getpid();

-    err = heap_mprotect();
-    if (CHECK(errno != EPERM, "heap_mprotect", "want errno=EPERM, got %d\n",
+    err = stack_mprotect();
+    if (CHECK(errno != EPERM, "stack_mprotect", "want err=EPERM, got %d\n",
               errno))
         goto close_prog;
+29 -1
tools/testing/selftests/bpf/prog_tests/xdp_attach.c
···
 void test_xdp_attach(void)
 {
+    __u32 duration = 0, id1, id2, id0 = 0, len;
     struct bpf_object *obj1, *obj2, *obj3;
     const char *file = "./test_xdp.o";
+    struct bpf_prog_info info = {};
     int err, fd1, fd2, fd3;
-    __u32 duration = 0;
     DECLARE_LIBBPF_OPTS(bpf_xdp_set_link_opts, opts,
                         .old_fd = -1);
+
+    len = sizeof(info);

     err = bpf_prog_load(file, BPF_PROG_TYPE_XDP, &obj1, &fd1);
     if (CHECK_FAIL(err))
         return;
+    err = bpf_obj_get_info_by_fd(fd1, &info, &len);
+    if (CHECK_FAIL(err))
+        goto out_1;
+    id1 = info.id;
+
     err = bpf_prog_load(file, BPF_PROG_TYPE_XDP, &obj2, &fd2);
     if (CHECK_FAIL(err))
         goto out_1;
+
+    memset(&info, 0, sizeof(info));
+    err = bpf_obj_get_info_by_fd(fd2, &info, &len);
+    if (CHECK_FAIL(err))
+        goto out_2;
+    id2 = info.id;
+
     err = bpf_prog_load(file, BPF_PROG_TYPE_XDP, &obj3, &fd3);
     if (CHECK_FAIL(err))
         goto out_2;
···
     err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, fd1, XDP_FLAGS_REPLACE,
                                    &opts);
     if (CHECK(err, "load_ok", "initial load failed"))
+        goto out_close;
+
+    err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0);
+    if (CHECK(err || id0 != id1, "id1_check",
+              "loaded prog id %u != id1 %u, err %d", id0, id1, err))
         goto out_close;

     err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, fd2, XDP_FLAGS_REPLACE,
···
     err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, fd2, 0, &opts);
     if (CHECK(err, "replace_ok", "replace valid old_fd failed"))
         goto out;
+    err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0);
+    if (CHECK(err || id0 != id2, "id2_check",
+              "loaded prog id %u != id2 %u, err %d", id0, id2, err))
+        goto out_close;

     err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, fd3, 0, &opts);
     if (CHECK(!err, "replace_fail", "replace invalid old_fd didn't fail"))
···
     if (CHECK(err, "remove_ok", "remove valid old_fd failed"))
         goto out;

+    err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0);
+    if (CHECK(err || id0 != 0, "unload_check",
+              "loaded prog id %u != 0, err %d", id0, err))
+        goto out_close;
 out:
     bpf_set_link_xdp_fd(IFINDEX_LO, -1, 0);
 out_close:
+68
tools/testing/selftests/bpf/prog_tests/xdp_info.c
···
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/if_link.h>
+#include <test_progs.h>
+
+#define IFINDEX_LO 1
+
+void test_xdp_info(void)
+{
+    __u32 len = sizeof(struct bpf_prog_info), duration = 0, prog_id;
+    const char *file = "./xdp_dummy.o";
+    struct bpf_prog_info info = {};
+    struct bpf_object *obj;
+    int err, prog_fd;
+
+    /* Get prog_id for XDP_ATTACHED_NONE mode */
+
+    err = bpf_get_link_xdp_id(IFINDEX_LO, &prog_id, 0);
+    if (CHECK(err, "get_xdp_none", "errno=%d\n", errno))
+        return;
+    if (CHECK(prog_id, "prog_id_none", "unexpected prog_id=%u\n", prog_id))
+        return;
+
+    err = bpf_get_link_xdp_id(IFINDEX_LO, &prog_id, XDP_FLAGS_SKB_MODE);
+    if (CHECK(err, "get_xdp_none_skb", "errno=%d\n", errno))
+        return;
+    if (CHECK(prog_id, "prog_id_none_skb", "unexpected prog_id=%u\n",
+              prog_id))
+        return;
+
+    /* Setup prog */
+
+    err = bpf_prog_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd);
+    if (CHECK_FAIL(err))
+        return;
+
+    err = bpf_obj_get_info_by_fd(prog_fd, &info, &len);
+    if (CHECK(err, "get_prog_info", "errno=%d\n", errno))
+        goto out_close;
+
+    err = bpf_set_link_xdp_fd(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE);
+    if (CHECK(err, "set_xdp_skb", "errno=%d\n", errno))
+        goto out_close;
+
+    /* Get prog_id for single prog mode */
+
+    err = bpf_get_link_xdp_id(IFINDEX_LO, &prog_id, 0);
+    if (CHECK(err, "get_xdp", "errno=%d\n", errno))
+        goto out;
+    if (CHECK(prog_id != info.id, "prog_id", "prog_id not available\n"))
+        goto out;
+
+    err = bpf_get_link_xdp_id(IFINDEX_LO, &prog_id, XDP_FLAGS_SKB_MODE);
+    if (CHECK(err, "get_xdp_skb", "errno=%d\n", errno))
+        goto out;
+    if (CHECK(prog_id != info.id, "prog_id_skb", "prog_id not available\n"))
+        goto out;
+
+    err = bpf_get_link_xdp_id(IFINDEX_LO, &prog_id, XDP_FLAGS_DRV_MODE);
+    if (CHECK(err, "get_xdp_drv", "errno=%d\n", errno))
+        goto out;
+    if (CHECK(prog_id, "prog_id_drv", "unexpected prog_id=%u\n", prog_id))
+        goto out;
+
+out:
+    bpf_set_link_xdp_fd(IFINDEX_LO, -1, 0);
+out_close:
+    bpf_object__close(obj);
+}
+4 -4
tools/testing/selftests/bpf/progs/lsm.c
···
         return ret;

     __u32 pid = bpf_get_current_pid_tgid() >> 32;
-    int is_heap = 0;
+    int is_stack = 0;

-    is_heap = (vma->vm_start >= vma->vm_mm->start_brk &&
-               vma->vm_end <= vma->vm_mm->brk);
+    is_stack = (vma->vm_start <= vma->vm_mm->start_stack &&
+                vma->vm_end >= vma->vm_mm->start_stack);

-    if (is_heap && monitored_pid == pid) {
+    if (is_stack && monitored_pid == pid) {
         mprotect_count++;
         ret = -EPERM;
     }
+2 -2
tools/testing/selftests/bpf/verifier/bounds.c
···
     .result = REJECT
 },
 {
-    "bounds check mixed 32bit and 64bit arithmatic. test1",
+    "bounds check mixed 32bit and 64bit arithmetic. test1",
     .insns = {
     BPF_MOV64_IMM(BPF_REG_0, 0),
     BPF_MOV64_IMM(BPF_REG_1, -1),
···
     .result = ACCEPT
 },
 {
-    "bounds check mixed 32bit and 64bit arithmatic. test2",
+    "bounds check mixed 32bit and 64bit arithmetic. test2",
     .insns = {
     BPF_MOV64_IMM(BPF_REG_0, 0),
     BPF_MOV64_IMM(BPF_REG_1, -1),
+2 -3
tools/testing/selftests/tc-testing/tdc.py
···
         exit(0)

     if args.list:
-        if args.list:
-            list_test_cases(alltests)
-            exit(0)
+        list_test_cases(alltests)
+        exit(0)

     if len(alltests):
         req_plugins = pm.get_required_plugins(alltests)