Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Trivial conflict in net/core/filter.c, a locally computed
'sdif' is now an argument to the function.

Signed-off-by: David S. Miller <davem@davemloft.net>

+2190 -1065
+4
CREDITS
···
  S: North Little Rock, Arkansas 72115
  S: USA

+ N: Christopher Li
+ E: sparse@chrisli.org
+ D: Sparse maintainer 2009 - 2018
+
  N: Stephan Linz
  E: linz@mazet.de
  E: Stephan.Linz@gmx.de
+41 -11
Documentation/core-api/xarray.rst
···
  new entry and return the previous entry stored at that index. You can
  use :c:func:`xa_erase` instead of calling :c:func:`xa_store` with a
  ``NULL`` entry. There is no difference between an entry that has never
- been stored to and one that has most recently had ``NULL`` stored to it.
+ been stored to, one that has been erased and one that has most recently
+ had ``NULL`` stored to it.

  You can conditionally replace an entry at an index by using
  :c:func:`xa_cmpxchg`. Like :c:func:`cmpxchg`, it will only succeed if
···
  indices. Storing into one index may result in the entry retrieved by
  some, but not all of the other indices changing.

+ Sometimes you need to ensure that a subsequent call to :c:func:`xa_store`
+ will not need to allocate memory. The :c:func:`xa_reserve` function
+ will store a reserved entry at the indicated index. Users of the normal
+ API will see this entry as containing ``NULL``. If you do not need to
+ use the reserved entry, you can call :c:func:`xa_release` to remove the
+ unused entry. If another user has stored to the entry in the meantime,
+ :c:func:`xa_release` will do nothing; if instead you want the entry to
+ become ``NULL``, you should use :c:func:`xa_erase`.
+
+ If all entries in the array are ``NULL``, the :c:func:`xa_empty` function
+ will return ``true``.
+
  Finally, you can remove all entries from an XArray by calling
  :c:func:`xa_destroy`. If the XArray entries are pointers, you may wish
  to free the entries first. You can do this by iterating over all present
  entries in the XArray using the :c:func:`xa_for_each` iterator.

- ID assignment
- -------------
+ Allocating XArrays
+ ------------------
+
+ If you use :c:func:`DEFINE_XARRAY_ALLOC` to define the XArray, or
+ initialise it by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
+ the XArray changes to track whether entries are in use or not.

  You can call :c:func:`xa_alloc` to store the entry at any unused index
  in the XArray. If you need to modify the array from interrupt context,
  you can use :c:func:`xa_alloc_bh` or :c:func:`xa_alloc_irq` to disable
- interrupts while allocating the ID. Unlike :c:func:`xa_store`, allocating
- a ``NULL`` pointer does not delete an entry. Instead it reserves an
- entry like :c:func:`xa_reserve` and you can release it using either
- :c:func:`xa_erase` or :c:func:`xa_release`. To use ID assignment, the
- XArray must be defined with :c:func:`DEFINE_XARRAY_ALLOC`, or initialised
- by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
+ interrupts while allocating the ID.
+
+ Using :c:func:`xa_store`, :c:func:`xa_cmpxchg` or :c:func:`xa_insert`
+ will mark the entry as being allocated. Unlike a normal XArray, storing
+ ``NULL`` will mark the entry as being in use, like :c:func:`xa_reserve`.
+ To free an entry, use :c:func:`xa_erase` (or :c:func:`xa_release` if
+ you only want to free the entry if it's ``NULL``).
+
+ You cannot use ``XA_MARK_0`` with an allocating XArray as this mark
+ is used to track whether an entry is free or not. The other marks are
+ available for your use.

  Memory allocation
  -----------------
···
  Takes xa_lock internally:
   * :c:func:`xa_store`
+  * :c:func:`xa_store_bh`
+  * :c:func:`xa_store_irq`
   * :c:func:`xa_insert`
   * :c:func:`xa_erase`
   * :c:func:`xa_erase_bh`
···
   * :c:func:`xa_alloc`
   * :c:func:`xa_alloc_bh`
   * :c:func:`xa_alloc_irq`
+  * :c:func:`xa_reserve`
+  * :c:func:`xa_reserve_bh`
+  * :c:func:`xa_reserve_irq`
   * :c:func:`xa_destroy`
   * :c:func:`xa_set_mark`
   * :c:func:`xa_clear_mark`
···
   * :c:func:`__xa_erase`
   * :c:func:`__xa_cmpxchg`
   * :c:func:`__xa_alloc`
+  * :c:func:`__xa_reserve`
   * :c:func:`__xa_set_mark`
   * :c:func:`__xa_clear_mark`
···
  using :c:func:`xa_lock_irqsave` in both the interrupt handler and process
  context, or :c:func:`xa_lock_irq` in process context and :c:func:`xa_lock`
  in the interrupt handler. Some of the more common patterns have helper
- functions such as :c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.
+ functions such as :c:func:`xa_store_bh`, :c:func:`xa_store_irq`,
+ :c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.

  Sometimes you need to protect access to the XArray with a mutex because
  that lock sits above another mutex in the locking hierarchy. That does
···
  - :c:func:`xa_is_zero`
  - Zero entries appear as ``NULL`` through the Normal API, but occupy
    an entry in the XArray which can be used to reserve the index for
-   future use.
+   future use. This is used by allocating XArrays for allocated entries
+   which are ``NULL``.

  Other internal entries may be added in the future. As far as possible, they
  will be handled by :c:func:`xas_retry`.
+8 -6
Documentation/devicetree/bindings/spi/spi-uniphier.txt
···
  Required properties:
  - compatible: should be "socionext,uniphier-scssi"
  - reg: address and length of the spi master registers
- - #address-cells: must be <1>, see spi-bus.txt
- - #size-cells: must be <0>, see spi-bus.txt
- - clocks: A phandle to the clock for the device.
- - resets: A phandle to the reset control for the device.
+ - interrupts: a single interrupt specifier
+ - pinctrl-names: should be "default"
+ - pinctrl-0: pin control state for the default mode
+ - clocks: a phandle to the clock for the device
+ - resets: a phandle to the reset control for the device

  Example:

  	spi0: spi@54006000 {
  		compatible = "socionext,uniphier-scssi";
  		reg = <0x54006000 0x100>;
- 		#address-cells = <1>;
- 		#size-cells = <0>;
+ 		interrupts = <0 39 4>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pinctrl_spi0>;
  		clocks = <&peri_clk 11>;
  		resets = <&peri_rst 11>;
  	};
+1 -10
Documentation/input/event-codes.rst
···
  * REL_WHEEL, REL_HWHEEL:

    - These codes are used for vertical and horizontal scroll wheels,
-     respectively. The value is the number of "notches" moved on the wheel, the
-     physical size of which varies by device. For high-resolution wheels (which
-     report multiple events for each notch of movement, or do not have notches)
-     this may be an approximation based on the high-resolution scroll events.
-
- * REL_WHEEL_HI_RES:
-
-   - If a vertical scroll wheel supports high-resolution scrolling, this code
-     will be emitted in addition to REL_WHEEL. The value is the (approximate)
-     distance travelled by the user's finger, in microns.
+     respectively.

  EV_ABS
  ------
+63 -3
MAINTAINERS
···
  T: git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git
  Q: https://patchwork.ozlabs.org/project/netdev/list/?delegate=77147
  S: Supported
- F: arch/x86/net/bpf_jit*
+ F: arch/*/net/*
  F: Documentation/networking/filter.txt
  F: Documentation/bpf/
  F: include/linux/bpf*
···
  F: tools/bpf/
  F: tools/lib/bpf/
  F: tools/testing/selftests/bpf/
+
+ BPF JIT for ARM
+ M: Shubham Bansal <illusionist.neo@gmail.com>
+ L: netdev@vger.kernel.org
+ S: Maintained
+ F: arch/arm/net/
+
+ BPF JIT for ARM64
+ M: Daniel Borkmann <daniel@iogearbox.net>
+ M: Alexei Starovoitov <ast@kernel.org>
+ M: Zi Shen Lim <zlim.lnx@gmail.com>
+ L: netdev@vger.kernel.org
+ S: Supported
+ F: arch/arm64/net/
+
+ BPF JIT for MIPS (32-BIT AND 64-BIT)
+ M: Paul Burton <paul.burton@mips.com>
+ L: netdev@vger.kernel.org
+ S: Maintained
+ F: arch/mips/net/
+
+ BPF JIT for NFP NICs
+ M: Jakub Kicinski <jakub.kicinski@netronome.com>
+ L: netdev@vger.kernel.org
+ S: Supported
+ F: drivers/net/ethernet/netronome/nfp/bpf/
+
+ BPF JIT for POWERPC (32-BIT AND 64-BIT)
+ M: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
+ M: Sandipan Das <sandipan@linux.ibm.com>
+ L: netdev@vger.kernel.org
+ S: Maintained
+ F: arch/powerpc/net/
+
+ BPF JIT for S390
+ M: Martin Schwidefsky <schwidefsky@de.ibm.com>
+ M: Heiko Carstens <heiko.carstens@de.ibm.com>
+ L: netdev@vger.kernel.org
+ S: Maintained
+ F: arch/s390/net/
+ X: arch/s390/net/pnet.c
+
+ BPF JIT for SPARC (32-BIT AND 64-BIT)
+ M: David S. Miller <davem@davemloft.net>
+ L: netdev@vger.kernel.org
+ S: Maintained
+ F: arch/sparc/net/
+
+ BPF JIT for X86 32-BIT
+ M: Wang YanQing <udknight@gmail.com>
+ L: netdev@vger.kernel.org
+ S: Maintained
+ F: arch/x86/net/bpf_jit_comp32.c
+
+ BPF JIT for X86 64-BIT
+ M: Alexei Starovoitov <ast@kernel.org>
+ M: Daniel Borkmann <daniel@iogearbox.net>
+ L: netdev@vger.kernel.org
+ S: Supported
+ F: arch/x86/net/
+ X: arch/x86/net/bpf_jit_comp32.c

  BROADCOM B44 10/100 ETHERNET DRIVER
  M: Michael Chan <michael.chan@broadcom.com>
···
  F: drivers/tty/vcc.c

  SPARSE CHECKER
- M: "Christopher Li" <sparse@chrisli.org>
+ M: "Luc Van Oostenryck" <luc.vanoostenryck@gmail.com>
  L: linux-sparse@vger.kernel.org
  W: https://sparse.wiki.kernel.org/
  T: git git://git.kernel.org/pub/scm/devel/sparse/sparse.git
- T: git git://git.kernel.org/pub/scm/devel/sparse/chrisl/sparse.git
  S: Maintained
  F: include/linux/compiler.h
+2 -2
Makefile
···
  VERSION = 4
  PATCHLEVEL = 20
  SUBLEVEL = 0
- EXTRAVERSION = -rc3
- NAME = "People's Front"
+ EXTRAVERSION = -rc4
+ NAME = Shy Crocodile

  # *DOCUMENTATION*
  # To see a list of typical targets execute "make help"
+17 -9
arch/arm64/net/bpf_jit_comp.c
···
   * >0 - successfully JITed a 16-byte eBPF instruction.
   * <0 - failed to JIT.
   */
- static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
+ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
+ 		      bool extra_pass)
  {
  	const u8 code = insn->code;
  	const u8 dst = bpf2a64[insn->dst_reg];
···
  	case BPF_JMP | BPF_CALL:
  	{
  		const u8 r0 = bpf2a64[BPF_REG_0];
- 		const u64 func = (u64)__bpf_call_base + imm;
+ 		bool func_addr_fixed;
+ 		u64 func_addr;
+ 		int ret;

- 		if (ctx->prog->is_func)
- 			emit_addr_mov_i64(tmp, func, ctx);
+ 		ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass,
+ 					    &func_addr, &func_addr_fixed);
+ 		if (ret < 0)
+ 			return ret;
+ 		if (func_addr_fixed)
+ 			/* We can use optimized emission here. */
+ 			emit_a64_mov_i64(tmp, func_addr, ctx);
  		else
- 			emit_a64_mov_i64(tmp, func, ctx);
+ 			emit_addr_mov_i64(tmp, func_addr, ctx);
  		emit(A64_BLR(tmp), ctx);
  		emit(A64_MOV(1, r0, A64_R(0)), ctx);
  		break;
···
  	return 0;
  }

- static int build_body(struct jit_ctx *ctx)
+ static int build_body(struct jit_ctx *ctx, bool extra_pass)
  {
  	const struct bpf_prog *prog = ctx->prog;
  	int i;
···
  		const struct bpf_insn *insn = &prog->insnsi[i];
  		int ret;

- 		ret = build_insn(insn, ctx);
+ 		ret = build_insn(insn, ctx, extra_pass);
  		if (ret > 0) {
  			i++;
  			if (ctx->image == NULL)
···
  	/* 1. Initial fake pass to compute ctx->idx. */

  	/* Fake pass to fill in ctx->offset. */
- 	if (build_body(&ctx)) {
+ 	if (build_body(&ctx, extra_pass)) {
  		prog = orig_prog;
  		goto out_off;
  	}
···
  	build_prologue(&ctx, was_classic);

- 	if (build_body(&ctx)) {
+ 	if (build_body(&ctx, extra_pass)) {
  		bpf_jit_binary_free(header);
  		prog = orig_prog;
  		goto out_off;
+3 -1
arch/ia64/include/asm/numa.h
···
   */

  extern u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES];
- #define node_distance(from,to) (numa_slit[(from) * MAX_NUMNODES + (to)])
+ #define slit_distance(from,to) (numa_slit[(from) * MAX_NUMNODES + (to)])
+ extern int __node_distance(int from, int to);
+ #define node_distance(from,to) __node_distance(from, to)

  extern int paddr_to_nid(unsigned long paddr);
+3 -3
arch/ia64/kernel/acpi.c
···
  	if (!slit_table) {
  		for (i = 0; i < MAX_NUMNODES; i++)
  			for (j = 0; j < MAX_NUMNODES; j++)
- 				node_distance(i, j) = i == j ? LOCAL_DISTANCE :
- 					REMOTE_DISTANCE;
+ 				slit_distance(i, j) = i == j ?
+ 					LOCAL_DISTANCE : REMOTE_DISTANCE;
  		return;
  	}
···
  			if (!pxm_bit_test(j))
  				continue;
  			node_to = pxm_to_node(j);
- 			node_distance(node_from, node_to) =
+ 			slit_distance(node_from, node_to) =
  				slit_table->entry[i * slit_table->locality_count + j];
  		}
  	}
+6
arch/ia64/mm/numa.c
···
   */
  u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES];

+ int __node_distance(int from, int to)
+ {
+ 	return slit_distance(from, to);
+ }
+ EXPORT_SYMBOL(__node_distance);
+
  /* Identify which cnode a physical address resides on */
  int
  paddr_to_nid(unsigned long paddr)
+1
arch/powerpc/kvm/book3s_hv.c
···
  		ret = kvmhv_enter_nested_guest(vcpu);
  		if (ret == H_INTERRUPT) {
  			kvmppc_set_gpr(vcpu, 3, 0);
+ 			vcpu->arch.hcall_needed = 0;
  			return -EINTR;
  		}
  		break;
+38 -19
arch/powerpc/net/bpf_jit_comp64.c
···
  	PPC_BLR();
  }

- static void bpf_jit_emit_func_call(u32 *image, struct codegen_context *ctx, u64 func)
+ static void bpf_jit_emit_func_call_hlp(u32 *image, struct codegen_context *ctx,
+ 				       u64 func)
+ {
+ #ifdef PPC64_ELF_ABI_v1
+ 	/* func points to the function descriptor */
+ 	PPC_LI64(b2p[TMP_REG_2], func);
+ 	/* Load actual entry point from function descriptor */
+ 	PPC_BPF_LL(b2p[TMP_REG_1], b2p[TMP_REG_2], 0);
+ 	/* ... and move it to LR */
+ 	PPC_MTLR(b2p[TMP_REG_1]);
+ 	/*
+ 	 * Load TOC from function descriptor at offset 8.
+ 	 * We can clobber r2 since we get called through a
+ 	 * function pointer (so caller will save/restore r2)
+ 	 * and since we don't use a TOC ourself.
+ 	 */
+ 	PPC_BPF_LL(2, b2p[TMP_REG_2], 8);
+ #else
+ 	/* We can clobber r12 */
+ 	PPC_FUNC_ADDR(12, func);
+ 	PPC_MTLR(12);
+ #endif
+ 	PPC_BLRL();
+ }
+
+ static void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx,
+ 				       u64 func)
  {
  	unsigned int i, ctx_idx = ctx->idx;
···
  {
  	const struct bpf_insn *insn = fp->insnsi;
  	int flen = fp->len;
- 	int i;
+ 	int i, ret;

  	/* Start of epilogue code - will only be valid 2nd pass onwards */
  	u32 exit_addr = addrs[flen];
···
  		u32 src_reg = b2p[insn[i].src_reg];
  		s16 off = insn[i].off;
  		s32 imm = insn[i].imm;
+ 		bool func_addr_fixed;
+ 		u64 func_addr;
  		u64 imm64;
- 		u8 *func;
  		u32 true_cond;
  		u32 tmp_idx;
···
  		case BPF_JMP | BPF_CALL:
  			ctx->seen |= SEEN_FUNC;

- 			/* bpf function call */
- 			if (insn[i].src_reg == BPF_PSEUDO_CALL)
- 				if (!extra_pass)
- 					func = NULL;
- 				else if (fp->aux->func && off < fp->aux->func_cnt)
- 					/* use the subprog id from the off
- 					 * field to lookup the callee address
- 					 */
- 					func = (u8 *) fp->aux->func[off]->bpf_func;
- 				else
- 					return -EINVAL;
- 			/* kernel helper call */
+ 			ret = bpf_jit_get_func_addr(fp, &insn[i], extra_pass,
+ 						    &func_addr, &func_addr_fixed);
+ 			if (ret < 0)
+ 				return ret;
+
+ 			if (func_addr_fixed)
+ 				bpf_jit_emit_func_call_hlp(image, ctx, func_addr);
  			else
- 				func = (u8 *) __bpf_call_base + imm;
-
- 			bpf_jit_emit_func_call(image, ctx, (u64)func);
-
+ 				bpf_jit_emit_func_call_rel(image, ctx, func_addr);
  			/* move return value from r3 to BPF_REG_0 */
  			PPC_MR(b2p[BPF_REG_0], 3);
  			break;
+68 -29
arch/sparc/net/bpf_jit_comp_64.c
···
  }

  /* Just skip the save instruction and the ctx register move. */
- #define BPF_TAILCALL_PROLOGUE_SKIP	16
+ #define BPF_TAILCALL_PROLOGUE_SKIP	32
  #define BPF_TAILCALL_CNT_SP_OFF	(STACK_BIAS + 128)

  static void build_prologue(struct jit_ctx *ctx)
···
  		const u8 vfp = bpf2sparc[BPF_REG_FP];

  		emit(ADD | IMMED | RS1(FP) | S13(STACK_BIAS) | RD(vfp), ctx);
+ 	} else {
+ 		emit_nop(ctx);
  	}

  	emit_reg_move(I0, O0, ctx);
+ 	emit_reg_move(I1, O1, ctx);
+ 	emit_reg_move(I2, O2, ctx);
+ 	emit_reg_move(I3, O3, ctx);
+ 	emit_reg_move(I4, O4, ctx);
  	/* If you add anything here, adjust BPF_TAILCALL_PROLOGUE_SKIP above. */
  }
···
  	const u8 tmp2 = bpf2sparc[TMP_REG_2];
  	u32 opcode = 0, rs2;

+ 	if (insn->dst_reg == BPF_REG_FP)
+ 		ctx->saw_frame_pointer = true;
+
  	ctx->tmp_2_used = true;
  	emit_loadimm(imm, tmp2, ctx);
···
  	const u8 tmp = bpf2sparc[TMP_REG_1];
  	u32 opcode = 0, rs2;

+ 	if (insn->dst_reg == BPF_REG_FP)
+ 		ctx->saw_frame_pointer = true;
+
  	switch (BPF_SIZE(code)) {
  	case BPF_W:
  		opcode = ST32;
···
  	const u8 tmp2 = bpf2sparc[TMP_REG_2];
  	const u8 tmp3 = bpf2sparc[TMP_REG_3];

+ 	if (insn->dst_reg == BPF_REG_FP)
+ 		ctx->saw_frame_pointer = true;
+
  	ctx->tmp_1_used = true;
  	ctx->tmp_2_used = true;
  	ctx->tmp_3_used = true;
···
  	const u8 tmp = bpf2sparc[TMP_REG_1];
  	const u8 tmp2 = bpf2sparc[TMP_REG_2];
  	const u8 tmp3 = bpf2sparc[TMP_REG_3];
+
+ 	if (insn->dst_reg == BPF_REG_FP)
+ 		ctx->saw_frame_pointer = true;

  	ctx->tmp_1_used = true;
  	ctx->tmp_2_used = true;
···
  	struct bpf_prog *tmp, *orig_prog = prog;
  	struct sparc64_jit_data *jit_data;
  	struct bpf_binary_header *header;
+ 	u32 prev_image_size, image_size;
  	bool tmp_blinded = false;
  	bool extra_pass = false;
  	struct jit_ctx ctx;
- 	u32 image_size;
  	u8 *image_ptr;
- 	int pass;
+ 	int pass, i;

  	if (!prog->jit_requested)
  		return orig_prog;
···
  		header = jit_data->header;
  		extra_pass = true;
  		image_size = sizeof(u32) * ctx.idx;
+ 		prev_image_size = image_size;
+ 		pass = 1;
  		goto skip_init_ctx;
  	}

  	memset(&ctx, 0, sizeof(ctx));
  	ctx.prog = prog;

- 	ctx.offset = kcalloc(prog->len, sizeof(unsigned int), GFP_KERNEL);
+ 	ctx.offset = kmalloc_array(prog->len, sizeof(unsigned int), GFP_KERNEL);
  	if (ctx.offset == NULL) {
  		prog = orig_prog;
  		goto out_off;
  	}

- 	/* Fake pass to detect features used, and get an accurate assessment
- 	 * of what the final image size will be.
+ 	/* Longest sequence emitted is for bswap32, 12 instructions. Pre-cook
+ 	 * the offset array so that we converge faster.
  	 */
- 	if (build_body(&ctx)) {
- 		prog = orig_prog;
- 		goto out_off;
+ 	for (i = 0; i < prog->len; i++)
+ 		ctx.offset[i] = i * (12 * 4);
+
+ 	prev_image_size = ~0U;
+ 	for (pass = 1; pass < 40; pass++) {
+ 		ctx.idx = 0;
+
+ 		build_prologue(&ctx);
+ 		if (build_body(&ctx)) {
+ 			prog = orig_prog;
+ 			goto out_off;
+ 		}
+ 		build_epilogue(&ctx);
+
+ 		if (bpf_jit_enable > 1)
+ 			pr_info("Pass %d: size = %u, seen = [%c%c%c%c%c%c]\n", pass,
+ 				ctx.idx * 4,
+ 				ctx.tmp_1_used ? '1' : ' ',
+ 				ctx.tmp_2_used ? '2' : ' ',
+ 				ctx.tmp_3_used ? '3' : ' ',
+ 				ctx.saw_frame_pointer ? 'F' : ' ',
+ 				ctx.saw_call ? 'C' : ' ',
+ 				ctx.saw_tail_call ? 'T' : ' ');
+
+ 		if (ctx.idx * 4 == prev_image_size)
+ 			break;
+ 		prev_image_size = ctx.idx * 4;
+ 		cond_resched();
  	}
- 	build_prologue(&ctx);
- 	build_epilogue(&ctx);

  	/* Now we know the actual image size. */
  	image_size = sizeof(u32) * ctx.idx;
···
  	ctx.image = (u32 *)image_ptr;
  skip_init_ctx:
- 	for (pass = 1; pass < 3; pass++) {
- 		ctx.idx = 0;
+ 	ctx.idx = 0;

- 		build_prologue(&ctx);
+ 	build_prologue(&ctx);

- 		if (build_body(&ctx)) {
- 			bpf_jit_binary_free(header);
- 			prog = orig_prog;
- 			goto out_off;
- 		}
+ 	if (build_body(&ctx)) {
+ 		bpf_jit_binary_free(header);
+ 		prog = orig_prog;
+ 		goto out_off;
+ 	}

- 		build_epilogue(&ctx);
+ 	build_epilogue(&ctx);

- 		if (bpf_jit_enable > 1)
- 			pr_info("Pass %d: shrink = %d, seen = [%c%c%c%c%c%c]\n", pass,
- 				image_size - (ctx.idx * 4),
- 				ctx.tmp_1_used ? '1' : ' ',
- 				ctx.tmp_2_used ? '2' : ' ',
- 				ctx.tmp_3_used ? '3' : ' ',
- 				ctx.saw_frame_pointer ? 'F' : ' ',
- 				ctx.saw_call ? 'C' : ' ',
- 				ctx.saw_tail_call ? 'T' : ' ');
+ 	if (ctx.idx * 4 != prev_image_size) {
+ 		pr_err("bpf_jit: Failed to converge, prev_size=%u size=%d\n",
+ 		       prev_image_size, ctx.idx * 4);
+ 		bpf_jit_binary_free(header);
+ 		prog = orig_prog;
+ 		goto out_off;
  	}

  	if (bpf_jit_enable > 1)
+2 -1
arch/x86/include/asm/kvm_host.h
···
  	bool (*has_wbinvd_exit)(void);

  	u64 (*read_l1_tsc_offset)(struct kvm_vcpu *vcpu);
- 	void (*write_tsc_offset)(struct kvm_vcpu *vcpu, u64 offset);
+ 	/* Returns actual tsc_offset set in active VMCS */
+ 	u64 (*write_l1_tsc_offset)(struct kvm_vcpu *vcpu, u64 offset);

  	void (*get_exit_info)(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2);
+6 -1
arch/x86/kvm/lapic.c
···
  #define PRIo64 "o"

  /* #define apic_debug(fmt,arg...) printk(KERN_WARNING fmt,##arg) */
- #define apic_debug(fmt, arg...)
+ #define apic_debug(fmt, arg...) do {} while (0)

  /* 14 is the version for Xeon and Pentium 8.4.8*/
  #define APIC_VERSION (0x14UL | ((KVM_APIC_LVT_NUM - 1) << 16))
···
  	rcu_read_lock();
  	map = rcu_dereference(kvm->arch.apic_map);
+
+ 	if (unlikely(!map)) {
+ 		count = -EOPNOTSUPP;
+ 		goto out;
+ 	}

  	if (min > map->max_apic_id)
  		goto out;
+9 -18
arch/x86/kvm/mmu.c
···
  }

  static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa,
- 				    const u8 *new, int *bytes)
+ 				    int *bytes)
  {
- 	u64 gentry;
+ 	u64 gentry = 0;
  	int r;

  	/*
···
  		/* Handle a 32-bit guest writing two halves of a 64-bit gpte */
  		*gpa &= ~(gpa_t)7;
  		*bytes = 8;
- 		r = kvm_vcpu_read_guest(vcpu, *gpa, &gentry, 8);
- 		if (r)
- 			gentry = 0;
- 		new = (const u8 *)&gentry;
  	}

- 	switch (*bytes) {
- 	case 4:
- 		gentry = *(const u32 *)new;
- 		break;
- 	case 8:
- 		gentry = *(const u64 *)new;
- 		break;
- 	default:
- 		gentry = 0;
- 		break;
+ 	if (*bytes == 4 || *bytes == 8) {
+ 		r = kvm_vcpu_read_guest_atomic(vcpu, *gpa, &gentry, *bytes);
+ 		if (r)
+ 			gentry = 0;
  	}

  	return gentry;
···
  	pgprintk("%s: gpa %llx bytes %d\n", __func__, gpa, bytes);

- 	gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, new, &bytes);
-
  	/*
  	 * No need to care whether allocation memory is successful
  	 * or not since pte prefetch is skiped if it does not have
···
  	mmu_topup_memory_caches(vcpu);

  	spin_lock(&vcpu->kvm->mmu_lock);
+
+ 	gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, &bytes);
+
  	++vcpu->kvm->stat.mmu_pte_write;
  	kvm_mmu_audit(vcpu, AUDIT_PRE_PTE_WRITE);
+29 -15
arch/x86/kvm/svm.c
···
  	return vcpu->arch.tsc_offset;
  }

- static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+ static u64 svm_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
  {
  	struct vcpu_svm *svm = to_svm(vcpu);
  	u64 g_tsc_offset = 0;
···
  	svm->vmcb->control.tsc_offset = offset + g_tsc_offset;

  	mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
+ 	return svm->vmcb->control.tsc_offset;
  }

  static void avic_init_vmcb(struct vcpu_svm *svm)
···
  static int avic_init_access_page(struct kvm_vcpu *vcpu)
  {
  	struct kvm *kvm = vcpu->kvm;
- 	int ret;
+ 	int ret = 0;

+ 	mutex_lock(&kvm->slots_lock);
  	if (kvm->arch.apic_access_page_done)
- 		return 0;
+ 		goto out;

- 	ret = x86_set_memory_region(kvm,
- 				    APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
- 				    APIC_DEFAULT_PHYS_BASE,
- 				    PAGE_SIZE);
+ 	ret = __x86_set_memory_region(kvm,
+ 				      APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
+ 				      APIC_DEFAULT_PHYS_BASE,
+ 				      PAGE_SIZE);
  	if (ret)
- 		return ret;
+ 		goto out;

  	kvm->arch.apic_access_page_done = true;
- 	return 0;
+ out:
+ 	mutex_unlock(&kvm->slots_lock);
+ 	return ret;
  }

  static int avic_init_backing_page(struct kvm_vcpu *vcpu)
···
  		return ERR_PTR(err);
  }

+ static void svm_clear_current_vmcb(struct vmcb *vmcb)
+ {
+ 	int i;
+
+ 	for_each_online_cpu(i)
+ 		cmpxchg(&per_cpu(svm_data, i)->current_vmcb, vmcb, NULL);
+ }
+
  static void svm_free_vcpu(struct kvm_vcpu *vcpu)
  {
  	struct vcpu_svm *svm = to_svm(vcpu);
+
+ 	/*
+ 	 * The vmcb page can be recycled, causing a false negative in
+ 	 * svm_vcpu_load(). So, ensure that no logical CPU has this
+ 	 * vmcb page recorded as its current vmcb.
+ 	 */
+ 	svm_clear_current_vmcb(svm->vmcb);

  	__free_page(pfn_to_page(__sme_clr(svm->vmcb_pa) >> PAGE_SHIFT));
  	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
···
  	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
  	kvm_vcpu_uninit(vcpu);
  	kmem_cache_free(kvm_vcpu_cache, svm);
- 	/*
- 	 * The vmcb page can be recycled, causing a false negative in
- 	 * svm_vcpu_load(). So do a full IBPB now.
- 	 */
- 	indirect_branch_prediction_barrier();
  }

  static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
···
  	.has_wbinvd_exit = svm_has_wbinvd_exit,

  	.read_l1_tsc_offset = svm_read_l1_tsc_offset,
- 	.write_tsc_offset = svm_write_tsc_offset,
+ 	.write_l1_tsc_offset = svm_write_l1_tsc_offset,

  	.set_tdp_cr3 = set_tdp_cr3,
+65 -33
arch/x86/kvm/vmx.c
···
   * refer SDM volume 3b section 21.6.13 & 22.1.3.
   */
  static unsigned int ple_gap = KVM_DEFAULT_PLE_GAP;
+ module_param(ple_gap, uint, 0444);

  static unsigned int ple_window = KVM_VMX_DEFAULT_PLE_WINDOW;
  module_param(ple_window, uint, 0444);
···
  	struct shared_msr_entry *guest_msrs;
  	int                   nmsrs;
  	int                   save_nmsrs;
+ 	bool                  guest_msrs_dirty;
  	unsigned long	      host_idt_base;
  #ifdef CONFIG_X86_64
  	u64		      msr_host_kernel_gs_base;
···
  static bool nested_vmx_is_page_fault_vmexit(struct vmcs12 *vmcs12,
  					    u16 error_code);
  static void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu);
- static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
+ static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
  							  u32 msr, int type);

  static DEFINE_PER_CPU(struct vmcs *, vmxarea);
···
  {
  	struct vcpu_vmx *vmx = to_vmx(vcpu);

- 	/* We don't support disabling the feature for simplicity. */
- 	if (vmx->nested.enlightened_vmcs_enabled)
- 		return 0;
-
- 	vmx->nested.enlightened_vmcs_enabled = true;
-
  	/*
  	 * vmcs_version represents the range of supported Enlightened VMCS
  	 * versions: lower 8 bits is the minimal version, higher 8 bits is the
···
  	 */
  	if (vmcs_version)
  		*vmcs_version = (KVM_EVMCS_VERSION << 8) | 1;
+
+ 	/* We don't support disabling the feature for simplicity. */
+ 	if (vmx->nested.enlightened_vmcs_enabled)
+ 		return 0;
+
+ 	vmx->nested.enlightened_vmcs_enabled = true;

  	vmx->nested.msrs.pinbased_ctls_high &= ~EVMCS1_UNSUPPORTED_PINCTRL;
  	vmx->nested.msrs.entry_ctls_high &= ~EVMCS1_UNSUPPORTED_VMENTRY_CTRL;
···
  	vmx->req_immediate_exit = false;

+ 	/*
+ 	 * Note that guest MSRs to be saved/restored can also be changed
+ 	 * when guest state is loaded. This happens when guest transitions
+ 	 * to/from long-mode by setting MSR_EFER.LMA.
+ 	 */
+ 	if (!vmx->loaded_cpu_state || vmx->guest_msrs_dirty) {
+ 		vmx->guest_msrs_dirty = false;
+ 		for (i = 0; i < vmx->save_nmsrs; ++i)
+ 			kvm_set_shared_msr(vmx->guest_msrs[i].index,
+ 					   vmx->guest_msrs[i].data,
+ 					   vmx->guest_msrs[i].mask);
+ 	}
+
  	if (vmx->loaded_cpu_state)
  		return;
···
  		vmcs_writel(HOST_GS_BASE, gs_base);
  		host_state->gs_base = gs_base;
  	}
-
- 	for (i = 0; i < vmx->save_nmsrs; ++i)
- 		kvm_set_shared_msr(vmx->guest_msrs[i].index,
- 				   vmx->guest_msrs[i].data,
- 				   vmx->guest_msrs[i].mask);
  }

  static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx)
···
  		move_msr_up(vmx, index, save_nmsrs++);

  	vmx->save_nmsrs = save_nmsrs;
+ 	vmx->guest_msrs_dirty = true;

  	if (cpu_has_vmx_msr_bitmap())
  		vmx_update_msr_bitmap(&vmx->vcpu);
···
  	return vcpu->arch.tsc_offset;
  }

- /*
-  * writes 'offset' into guest's timestamp counter offset register
-  */
- static void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+ static u64 vmx_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
  {
+ 	u64 active_offset = offset;
  	if (is_guest_mode(vcpu)) {
  		/*
  		 * We're here if L1 chose not to trap WRMSR to TSC. According
···
  		 * set for L2 remains unchanged, and still needs to be added
  		 * to the newly set TSC to get L2's TSC.
  		 */
- 		struct vmcs12 *vmcs12;
- 		/* recalculate vmcs02.TSC_OFFSET: */
- 		vmcs12 = get_vmcs12(vcpu);
- 		vmcs_write64(TSC_OFFSET, offset +
- 			(nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING) ?
- 			 vmcs12->tsc_offset : 0));
+ 		struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+ 		if (nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING))
+ 			active_offset += vmcs12->tsc_offset;
  	} else {
  		trace_kvm_write_tsc_offset(vcpu->vcpu_id,
  					   vmcs_read64(TSC_OFFSET), offset);
- 		vmcs_write64(TSC_OFFSET, offset);
  	}
+
+ 	vmcs_write64(TSC_OFFSET, active_offset);
+ 	return active_offset;
  }

  /*
···
  	spin_unlock(&vmx_vpid_lock);
  }

- static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
+ static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
  							  u32 msr, int type)
  {
  	int f = sizeof(unsigned long);
···
- static void __always_inline vmx_enable_intercept_for_msr(unsigned long *msr_bitmap,
+ static __always_inline void vmx_enable_intercept_for_msr(unsigned long *msr_bitmap,
  							 u32 msr, int type)
  {
  	int f = sizeof(unsigned long);
···
- static void __always_inline vmx_set_intercept_for_msr(unsigned long *msr_bitmap,
+ static __always_inline void vmx_set_intercept_for_msr(unsigned long *msr_bitmap,
  						      u32 msr, int type, bool value)
  {
  	if (value)
···
  	struct vmcs12 *vmcs12 = vmx->nested.cached_vmcs12;
  	struct hv_enlightened_vmcs *evmcs = vmx->nested.hv_evmcs;

- 	vmcs12->hdr.revision_id = evmcs->revision_id;
-
  	/* HV_VMX_ENLIGHTENED_CLEAN_FIELD_NONE */
  	vmcs12->tpr_threshold = evmcs->tpr_threshold;
  	vmcs12->guest_rip = evmcs->guest_rip;
···
  	vmx->nested.hv_evmcs = kmap(vmx->nested.hv_evmcs_page);

- 	if (vmx->nested.hv_evmcs->revision_id != VMCS12_REVISION) {
+ 	/*
+ 	 * Currently, KVM only supports eVMCS version 1
+ 	 * (== KVM_EVMCS_VERSION) and thus we expect guest to set this
+ 	 * value to first u32 field of eVMCS which should specify eVMCS
+ 	 * VersionNumber.
+ 	 *
+ 	 * Guest should be aware of supported eVMCS versions by host by
+ 	 * examining CPUID.0x4000000A.EAX[0:15]. Host userspace VMM is
+ 	 * expected to set this CPUID leaf according to the value
+ 	 * returned in vmcs_version from nested_enable_evmcs().
+ 	 *
+ 	 * However, it turns out that Microsoft Hyper-V fails to comply
+ 	 * to their own invented interface: When Hyper-V use eVMCS, it
+ 	 * just sets first u32 field of eVMCS to revision_id specified
+ 	 * in MSR_IA32_VMX_BASIC. Instead of used eVMCS version number
+ 	 * which is one of the supported versions specified in
+ 	 * CPUID.0x4000000A.EAX[0:15].
+ 	 *
+ 	 * To overcome Hyper-V bug, we accept here either a supported
+ 	 * eVMCS version or VMCS12 revision_id as valid values for first
+ 	 * u32 field of eVMCS.
+ 	 */
+ 	if ((vmx->nested.hv_evmcs->revision_id != KVM_EVMCS_VERSION) &&
+ 	    (vmx->nested.hv_evmcs->revision_id != VMCS12_REVISION)) {
  		nested_release_evmcs(vcpu);
  		return 0;
  	}
···
  		 * present in struct hv_enlightened_vmcs, ...). Make sure there
  		 * are no leftovers.
  		 */
- 		if (from_launch)
- 			memset(vmx->nested.cached_vmcs12, 0,
- 			       sizeof(*vmx->nested.cached_vmcs12));
+ 		if (from_launch) {
+ 			struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+ 			memset(vmcs12, 0, sizeof(*vmcs12));
+ 			vmcs12->hdr.revision_id = VMCS12_REVISION;
+ 		}
  	}
  	return 1;
···
  	.has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,

  	.read_l1_tsc_offset = vmx_read_l1_tsc_offset,
- 	.write_tsc_offset = vmx_write_tsc_offset,
+ 	.write_l1_tsc_offset = vmx_write_l1_tsc_offset,

  	.set_tdp_cr3 = vmx_set_cr3,
+6 -4
arch/x86/kvm/x86.c
··· 1665 1665 1666 1666 static void kvm_vcpu_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset) 1667 1667 { 1668 - kvm_x86_ops->write_tsc_offset(vcpu, offset); 1669 - vcpu->arch.tsc_offset = offset; 1668 + vcpu->arch.tsc_offset = kvm_x86_ops->write_l1_tsc_offset(vcpu, offset); 1670 1669 } 1671 1670 1672 1671 static inline bool kvm_check_tsc_unstable(void) ··· 1793 1794 static inline void adjust_tsc_offset_guest(struct kvm_vcpu *vcpu, 1794 1795 s64 adjustment) 1795 1796 { 1796 - kvm_vcpu_write_tsc_offset(vcpu, vcpu->arch.tsc_offset + adjustment); 1797 + u64 tsc_offset = kvm_x86_ops->read_l1_tsc_offset(vcpu); 1798 + kvm_vcpu_write_tsc_offset(vcpu, tsc_offset + adjustment); 1797 1799 } 1798 1800 1799 1801 static inline void adjust_tsc_offset_host(struct kvm_vcpu *vcpu, s64 adjustment) ··· 6918 6918 clock_pairing.nsec = ts.tv_nsec; 6919 6919 clock_pairing.tsc = kvm_read_l1_tsc(vcpu, cycle); 6920 6920 clock_pairing.flags = 0; 6921 + memset(&clock_pairing.pad, 0, sizeof(clock_pairing.pad)); 6921 6922 6922 6923 ret = 0; 6923 6924 if (kvm_write_guest(vcpu->kvm, paddr, &clock_pairing, ··· 7456 7455 else { 7457 7456 if (vcpu->arch.apicv_active) 7458 7457 kvm_x86_ops->sync_pir_to_irr(vcpu); 7459 - kvm_ioapic_scan_entry(vcpu, vcpu->arch.ioapic_handled_vectors); 7458 + if (ioapic_in_kernel(vcpu->kvm)) 7459 + kvm_ioapic_scan_entry(vcpu, vcpu->arch.ioapic_handled_vectors); 7460 7460 } 7461 7461 7462 7462 if (is_guest_mode(vcpu))
+8 -8
arch/xtensa/kernel/asm-offsets.c
··· 94 94 DEFINE(THREAD_SP, offsetof (struct task_struct, thread.sp)); 95 95 DEFINE(THREAD_CPENABLE, offsetof (struct thread_info, cpenable)); 96 96 #if XTENSA_HAVE_COPROCESSORS 97 - DEFINE(THREAD_XTREGS_CP0, offsetof (struct thread_info, xtregs_cp)); 98 - DEFINE(THREAD_XTREGS_CP1, offsetof (struct thread_info, xtregs_cp)); 99 - DEFINE(THREAD_XTREGS_CP2, offsetof (struct thread_info, xtregs_cp)); 100 - DEFINE(THREAD_XTREGS_CP3, offsetof (struct thread_info, xtregs_cp)); 101 - DEFINE(THREAD_XTREGS_CP4, offsetof (struct thread_info, xtregs_cp)); 102 - DEFINE(THREAD_XTREGS_CP5, offsetof (struct thread_info, xtregs_cp)); 103 - DEFINE(THREAD_XTREGS_CP6, offsetof (struct thread_info, xtregs_cp)); 104 - DEFINE(THREAD_XTREGS_CP7, offsetof (struct thread_info, xtregs_cp)); 97 + DEFINE(THREAD_XTREGS_CP0, offsetof(struct thread_info, xtregs_cp.cp0)); 98 + DEFINE(THREAD_XTREGS_CP1, offsetof(struct thread_info, xtregs_cp.cp1)); 99 + DEFINE(THREAD_XTREGS_CP2, offsetof(struct thread_info, xtregs_cp.cp2)); 100 + DEFINE(THREAD_XTREGS_CP3, offsetof(struct thread_info, xtregs_cp.cp3)); 101 + DEFINE(THREAD_XTREGS_CP4, offsetof(struct thread_info, xtregs_cp.cp4)); 102 + DEFINE(THREAD_XTREGS_CP5, offsetof(struct thread_info, xtregs_cp.cp5)); 103 + DEFINE(THREAD_XTREGS_CP6, offsetof(struct thread_info, xtregs_cp.cp6)); 104 + DEFINE(THREAD_XTREGS_CP7, offsetof(struct thread_info, xtregs_cp.cp7)); 105 105 #endif 106 106 DEFINE(THREAD_XTREGS_USER, offsetof (struct thread_info, xtregs_user)); 107 107 DEFINE(XTREGS_USER_SIZE, sizeof(xtregs_user_t));
+4 -1
arch/xtensa/kernel/process.c
··· 94 94 95 95 void coprocessor_flush_all(struct thread_info *ti) 96 96 { 97 - unsigned long cpenable; 97 + unsigned long cpenable, old_cpenable; 98 98 int i; 99 99 100 100 preempt_disable(); 101 101 102 + RSR_CPENABLE(old_cpenable); 102 103 cpenable = ti->cpenable; 104 + WSR_CPENABLE(cpenable); 103 105 104 106 for (i = 0; i < XCHAL_CP_MAX; i++) { 105 107 if ((cpenable & 1) != 0 && coprocessor_owner[i] == ti) 106 108 coprocessor_flush(ti, i); 107 109 cpenable >>= 1; 108 110 } 111 + WSR_CPENABLE(old_cpenable); 109 112 110 113 preempt_enable(); 111 114 }
+38 -4
arch/xtensa/kernel/ptrace.c
··· 127 127 } 128 128 129 129 130 + #if XTENSA_HAVE_COPROCESSORS 131 + #define CP_OFFSETS(cp) \ 132 + { \ 133 + .elf_xtregs_offset = offsetof(elf_xtregs_t, cp), \ 134 + .ti_offset = offsetof(struct thread_info, xtregs_cp.cp), \ 135 + .sz = sizeof(xtregs_ ## cp ## _t), \ 136 + } 137 + 138 + static const struct { 139 + size_t elf_xtregs_offset; 140 + size_t ti_offset; 141 + size_t sz; 142 + } cp_offsets[] = { 143 + CP_OFFSETS(cp0), 144 + CP_OFFSETS(cp1), 145 + CP_OFFSETS(cp2), 146 + CP_OFFSETS(cp3), 147 + CP_OFFSETS(cp4), 148 + CP_OFFSETS(cp5), 149 + CP_OFFSETS(cp6), 150 + CP_OFFSETS(cp7), 151 + }; 152 + #endif 153 + 130 154 static int ptrace_getxregs(struct task_struct *child, void __user *uregs) 131 155 { 132 156 struct pt_regs *regs = task_pt_regs(child); 133 157 struct thread_info *ti = task_thread_info(child); 134 158 elf_xtregs_t __user *xtregs = uregs; 135 159 int ret = 0; 160 + int i __maybe_unused; 136 161 137 162 if (!access_ok(VERIFY_WRITE, uregs, sizeof(elf_xtregs_t))) 138 163 return -EIO; ··· 165 140 #if XTENSA_HAVE_COPROCESSORS 166 141 /* Flush all coprocessor registers to memory. 
*/ 167 142 coprocessor_flush_all(ti); 168 - ret |= __copy_to_user(&xtregs->cp0, &ti->xtregs_cp, 169 - sizeof(xtregs_coprocessor_t)); 143 + 144 + for (i = 0; i < ARRAY_SIZE(cp_offsets); ++i) 145 + ret |= __copy_to_user((char __user *)xtregs + 146 + cp_offsets[i].elf_xtregs_offset, 147 + (const char *)ti + 148 + cp_offsets[i].ti_offset, 149 + cp_offsets[i].sz); 170 150 #endif 171 151 ret |= __copy_to_user(&xtregs->opt, &regs->xtregs_opt, 172 152 sizeof(xtregs->opt)); ··· 187 157 struct pt_regs *regs = task_pt_regs(child); 188 158 elf_xtregs_t *xtregs = uregs; 189 159 int ret = 0; 160 + int i __maybe_unused; 190 161 191 162 if (!access_ok(VERIFY_READ, uregs, sizeof(elf_xtregs_t))) 192 163 return -EFAULT; ··· 197 166 coprocessor_flush_all(ti); 198 167 coprocessor_release_all(ti); 199 168 200 - ret |= __copy_from_user(&ti->xtregs_cp, &xtregs->cp0, 201 - sizeof(xtregs_coprocessor_t)); 169 + for (i = 0; i < ARRAY_SIZE(cp_offsets); ++i) 170 + ret |= __copy_from_user((char *)ti + cp_offsets[i].ti_offset, 171 + (const char __user *)xtregs + 172 + cp_offsets[i].elf_xtregs_offset, 173 + cp_offsets[i].sz); 202 174 #endif 203 175 ret |= __copy_from_user(&regs->xtregs_opt, &xtregs->opt, 204 176 sizeof(xtregs->opt));
+2 -2
drivers/atm/firestream.c
··· 1410 1410 1411 1411 func_enter (); 1412 1412 1413 - fs_dprintk (FS_DEBUG_INIT, "Inititing queue at %x: %d entries:\n", 1413 + fs_dprintk (FS_DEBUG_INIT, "Initializing queue at %x: %d entries:\n", 1414 1414 queue, nentries); 1415 1415 1416 1416 p = aligned_kmalloc (sz, GFP_KERNEL, 0x10); ··· 1443 1443 { 1444 1444 func_enter (); 1445 1445 1446 - fs_dprintk (FS_DEBUG_INIT, "Inititing free pool at %x:\n", queue); 1446 + fs_dprintk (FS_DEBUG_INIT, "Initializing free pool at %x:\n", queue); 1447 1447 1448 1448 write_fs (dev, FP_CNF(queue), (bufsize * RBFP_RBS) | RBFP_RBSVAL | RBFP_CME); 1449 1449 write_fs (dev, FP_SA(queue), 0);
+8
drivers/hid/hid-ids.h
··· 275 275 276 276 #define USB_VENDOR_ID_CIDC 0x1677 277 277 278 + #define I2C_VENDOR_ID_CIRQUE 0x0488 279 + #define I2C_PRODUCT_ID_CIRQUE_121F 0x121F 280 + 278 281 #define USB_VENDOR_ID_CJTOUCH 0x24b8 279 282 #define USB_DEVICE_ID_CJTOUCH_MULTI_TOUCH_0020 0x0020 280 283 #define USB_DEVICE_ID_CJTOUCH_MULTI_TOUCH_0040 0x0040 ··· 710 707 #define USB_VENDOR_ID_LG 0x1fd2 711 708 #define USB_DEVICE_ID_LG_MULTITOUCH 0x0064 712 709 #define USB_DEVICE_ID_LG_MELFAS_MT 0x6007 710 + #define I2C_DEVICE_ID_LG_8001 0x8001 713 711 714 712 #define USB_VENDOR_ID_LOGITECH 0x046d 715 713 #define USB_DEVICE_ID_LOGITECH_AUDIOHUB 0x0a0e ··· 809 805 #define USB_DEVICE_ID_MS_TYPE_COVER_2 0x07a9 810 806 #define USB_DEVICE_ID_MS_POWER_COVER 0x07da 811 807 #define USB_DEVICE_ID_MS_XBOX_ONE_S_CONTROLLER 0x02fd 808 + #define USB_DEVICE_ID_MS_PIXART_MOUSE 0x00cb 812 809 813 810 #define USB_VENDOR_ID_MOJO 0x8282 814 811 #define USB_DEVICE_ID_RETRO_ADAPTER 0x3201 ··· 1048 1043 #define USB_VENDOR_ID_SYMBOL 0x05e0 1049 1044 #define USB_DEVICE_ID_SYMBOL_SCANNER_1 0x0800 1050 1045 #define USB_DEVICE_ID_SYMBOL_SCANNER_2 0x1300 1046 + #define USB_DEVICE_ID_SYMBOL_SCANNER_3 0x1200 1051 1047 1052 1048 #define USB_VENDOR_ID_SYNAPTICS 0x06cb 1053 1049 #define USB_DEVICE_ID_SYNAPTICS_TP 0x0001 ··· 1210 1204 #define USB_DEVICE_ID_PRIMAX_MOUSE_4D22 0x4d22 1211 1205 #define USB_DEVICE_ID_PRIMAX_KEYBOARD 0x4e05 1212 1206 #define USB_DEVICE_ID_PRIMAX_REZEL 0x4e72 1207 + #define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F 0x4d0f 1208 + #define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22 0x4e22 1213 1209 1214 1210 1215 1211 #define USB_VENDOR_ID_RISO_KAGAKU 0x1294 /* Riso Kagaku Corp. */
+3 -44
drivers/hid/hid-input.c
··· 325 325 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM, 326 326 USB_DEVICE_ID_ELECOM_BM084), 327 327 HID_BATTERY_QUIRK_IGNORE }, 328 + { HID_USB_DEVICE(USB_VENDOR_ID_SYMBOL, 329 + USB_DEVICE_ID_SYMBOL_SCANNER_3), 330 + HID_BATTERY_QUIRK_IGNORE }, 328 331 {} 329 332 }; 330 333 ··· 1841 1838 } 1842 1839 EXPORT_SYMBOL_GPL(hidinput_disconnect); 1843 1840 1844 - /** 1845 - * hid_scroll_counter_handle_scroll() - Send high- and low-resolution scroll 1846 - * events given a high-resolution wheel 1847 - * movement. 1848 - * @counter: a hid_scroll_counter struct describing the wheel. 1849 - * @hi_res_value: the movement of the wheel, in the mouse's high-resolution 1850 - * units. 1851 - * 1852 - * Given a high-resolution movement, this function converts the movement into 1853 - * microns and emits high-resolution scroll events for the input device. It also 1854 - * uses the multiplier from &struct hid_scroll_counter to emit low-resolution 1855 - * scroll events when appropriate for backwards-compatibility with userspace 1856 - * input libraries. 1857 - */ 1858 - void hid_scroll_counter_handle_scroll(struct hid_scroll_counter *counter, 1859 - int hi_res_value) 1860 - { 1861 - int low_res_value, remainder, multiplier; 1862 - 1863 - input_report_rel(counter->dev, REL_WHEEL_HI_RES, 1864 - hi_res_value * counter->microns_per_hi_res_unit); 1865 - 1866 - /* 1867 - * Update the low-res remainder with the high-res value, 1868 - * but reset if the direction has changed. 1869 - */ 1870 - remainder = counter->remainder; 1871 - if ((remainder ^ hi_res_value) < 0) 1872 - remainder = 0; 1873 - remainder += hi_res_value; 1874 - 1875 - /* 1876 - * Then just use the resolution multiplier to see if 1877 - * we should send a low-res (aka regular wheel) event. 
1878 - */ 1879 - multiplier = counter->resolution_multiplier; 1880 - low_res_value = remainder / multiplier; 1881 - remainder -= low_res_value * multiplier; 1882 - counter->remainder = remainder; 1883 - 1884 - if (low_res_value) 1885 - input_report_rel(counter->dev, REL_WHEEL, low_res_value); 1886 - } 1887 - EXPORT_SYMBOL_GPL(hid_scroll_counter_handle_scroll);
+27 -282
drivers/hid/hid-logitech-hidpp.c
··· 64 64 #define HIDPP_QUIRK_NO_HIDINPUT BIT(23) 65 65 #define HIDPP_QUIRK_FORCE_OUTPUT_REPORTS BIT(24) 66 66 #define HIDPP_QUIRK_UNIFYING BIT(25) 67 - #define HIDPP_QUIRK_HI_RES_SCROLL_1P0 BIT(26) 68 - #define HIDPP_QUIRK_HI_RES_SCROLL_X2120 BIT(27) 69 - #define HIDPP_QUIRK_HI_RES_SCROLL_X2121 BIT(28) 70 - 71 - /* Convenience constant to check for any high-res support. */ 72 - #define HIDPP_QUIRK_HI_RES_SCROLL (HIDPP_QUIRK_HI_RES_SCROLL_1P0 | \ 73 - HIDPP_QUIRK_HI_RES_SCROLL_X2120 | \ 74 - HIDPP_QUIRK_HI_RES_SCROLL_X2121) 75 67 76 68 #define HIDPP_QUIRK_DELAYED_INIT HIDPP_QUIRK_NO_HIDINPUT 77 69 ··· 149 157 unsigned long capabilities; 150 158 151 159 struct hidpp_battery battery; 152 - struct hid_scroll_counter vertical_wheel_counter; 153 160 }; 154 161 155 162 /* HID++ 1.0 error codes */ ··· 400 409 #define HIDPP_SET_LONG_REGISTER 0x82 401 410 #define HIDPP_GET_LONG_REGISTER 0x83 402 411 403 - /** 404 - * hidpp10_set_register_bit() - Sets a single bit in a HID++ 1.0 register. 405 - * @hidpp_dev: the device to set the register on. 406 - * @register_address: the address of the register to modify. 407 - * @byte: the byte of the register to modify. Should be less than 3. 408 - * Return: 0 if successful, otherwise a negative error code. 
409 - */ 410 - static int hidpp10_set_register_bit(struct hidpp_device *hidpp_dev, 411 - u8 register_address, u8 byte, u8 bit) 412 + #define HIDPP_REG_GENERAL 0x00 413 + 414 + static int hidpp10_enable_battery_reporting(struct hidpp_device *hidpp_dev) 412 415 { 413 416 struct hidpp_report response; 414 417 int ret; 415 418 u8 params[3] = { 0 }; 416 419 417 420 ret = hidpp_send_rap_command_sync(hidpp_dev, 418 - REPORT_ID_HIDPP_SHORT, 419 - HIDPP_GET_REGISTER, 420 - register_address, 421 - NULL, 0, &response); 421 + REPORT_ID_HIDPP_SHORT, 422 + HIDPP_GET_REGISTER, 423 + HIDPP_REG_GENERAL, 424 + NULL, 0, &response); 422 425 if (ret) 423 426 return ret; 424 427 425 428 memcpy(params, response.rap.params, 3); 426 429 427 - params[byte] |= BIT(bit); 430 + /* Set the battery bit */ 431 + params[0] |= BIT(4); 428 432 429 433 return hidpp_send_rap_command_sync(hidpp_dev, 430 - REPORT_ID_HIDPP_SHORT, 431 - HIDPP_SET_REGISTER, 432 - register_address, 433 - params, 3, &response); 434 - } 435 - 436 - 437 - #define HIDPP_REG_GENERAL 0x00 438 - 439 - static int hidpp10_enable_battery_reporting(struct hidpp_device *hidpp_dev) 440 - { 441 - return hidpp10_set_register_bit(hidpp_dev, HIDPP_REG_GENERAL, 0, 4); 442 - } 443 - 444 - #define HIDPP_REG_FEATURES 0x01 445 - 446 - /* On HID++ 1.0 devices, high-res scroll was called "scrolling acceleration". 
*/ 447 - static int hidpp10_enable_scrolling_acceleration(struct hidpp_device *hidpp_dev) 448 - { 449 - return hidpp10_set_register_bit(hidpp_dev, HIDPP_REG_FEATURES, 0, 6); 434 + REPORT_ID_HIDPP_SHORT, 435 + HIDPP_SET_REGISTER, 436 + HIDPP_REG_GENERAL, 437 + params, 3, &response); 450 438 } 451 439 452 440 #define HIDPP_REG_BATTERY_STATUS 0x07 ··· 1134 1164 } 1135 1165 1136 1166 return ret; 1137 - } 1138 - 1139 - /* -------------------------------------------------------------------------- */ 1140 - /* 0x2120: Hi-resolution scrolling */ 1141 - /* -------------------------------------------------------------------------- */ 1142 - 1143 - #define HIDPP_PAGE_HI_RESOLUTION_SCROLLING 0x2120 1144 - 1145 - #define CMD_HI_RESOLUTION_SCROLLING_SET_HIGHRES_SCROLLING_MODE 0x10 1146 - 1147 - static int hidpp_hrs_set_highres_scrolling_mode(struct hidpp_device *hidpp, 1148 - bool enabled, u8 *multiplier) 1149 - { 1150 - u8 feature_index; 1151 - u8 feature_type; 1152 - int ret; 1153 - u8 params[1]; 1154 - struct hidpp_report response; 1155 - 1156 - ret = hidpp_root_get_feature(hidpp, 1157 - HIDPP_PAGE_HI_RESOLUTION_SCROLLING, 1158 - &feature_index, 1159 - &feature_type); 1160 - if (ret) 1161 - return ret; 1162 - 1163 - params[0] = enabled ? 
BIT(0) : 0; 1164 - ret = hidpp_send_fap_command_sync(hidpp, feature_index, 1165 - CMD_HI_RESOLUTION_SCROLLING_SET_HIGHRES_SCROLLING_MODE, 1166 - params, sizeof(params), &response); 1167 - if (ret) 1168 - return ret; 1169 - *multiplier = response.fap.params[1]; 1170 - return 0; 1171 - } 1172 - 1173 - /* -------------------------------------------------------------------------- */ 1174 - /* 0x2121: HiRes Wheel */ 1175 - /* -------------------------------------------------------------------------- */ 1176 - 1177 - #define HIDPP_PAGE_HIRES_WHEEL 0x2121 1178 - 1179 - #define CMD_HIRES_WHEEL_GET_WHEEL_CAPABILITY 0x00 1180 - #define CMD_HIRES_WHEEL_SET_WHEEL_MODE 0x20 1181 - 1182 - static int hidpp_hrw_get_wheel_capability(struct hidpp_device *hidpp, 1183 - u8 *multiplier) 1184 - { 1185 - u8 feature_index; 1186 - u8 feature_type; 1187 - int ret; 1188 - struct hidpp_report response; 1189 - 1190 - ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_HIRES_WHEEL, 1191 - &feature_index, &feature_type); 1192 - if (ret) 1193 - goto return_default; 1194 - 1195 - ret = hidpp_send_fap_command_sync(hidpp, feature_index, 1196 - CMD_HIRES_WHEEL_GET_WHEEL_CAPABILITY, 1197 - NULL, 0, &response); 1198 - if (ret) 1199 - goto return_default; 1200 - 1201 - *multiplier = response.fap.params[0]; 1202 - return 0; 1203 - return_default: 1204 - hid_warn(hidpp->hid_dev, 1205 - "Couldn't get wheel multiplier (error %d), assuming %d.\n", 1206 - ret, *multiplier); 1207 - return ret; 1208 - } 1209 - 1210 - static int hidpp_hrw_set_wheel_mode(struct hidpp_device *hidpp, bool invert, 1211 - bool high_resolution, bool use_hidpp) 1212 - { 1213 - u8 feature_index; 1214 - u8 feature_type; 1215 - int ret; 1216 - u8 params[1]; 1217 - struct hidpp_report response; 1218 - 1219 - ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_HIRES_WHEEL, 1220 - &feature_index, &feature_type); 1221 - if (ret) 1222 - return ret; 1223 - 1224 - params[0] = (invert ? BIT(2) : 0) | 1225 - (high_resolution ? 
BIT(1) : 0) | 1226 - (use_hidpp ? BIT(0) : 0); 1227 - 1228 - return hidpp_send_fap_command_sync(hidpp, feature_index, 1229 - CMD_HIRES_WHEEL_SET_WHEEL_MODE, 1230 - params, sizeof(params), &response); 1231 1167 } 1232 1168 1233 1169 /* -------------------------------------------------------------------------- */ ··· 2399 2523 input_report_rel(mydata->input, REL_Y, v); 2400 2524 2401 2525 v = hid_snto32(data[6], 8); 2402 - hid_scroll_counter_handle_scroll( 2403 - &hidpp->vertical_wheel_counter, v); 2526 + input_report_rel(mydata->input, REL_WHEEL, v); 2404 2527 2405 2528 input_sync(mydata->input); 2406 2529 } ··· 2528 2653 } 2529 2654 2530 2655 /* -------------------------------------------------------------------------- */ 2531 - /* High-resolution scroll wheels */ 2532 - /* -------------------------------------------------------------------------- */ 2533 - 2534 - /** 2535 - * struct hi_res_scroll_info - Stores info on a device's high-res scroll wheel. 2536 - * @product_id: the HID product ID of the device being described. 2537 - * @microns_per_hi_res_unit: the distance moved by the user's finger for each 2538 - * high-resolution unit reported by the device, in 2539 - * 256ths of a millimetre. 
2540 - */ 2541 - struct hi_res_scroll_info { 2542 - __u32 product_id; 2543 - int microns_per_hi_res_unit; 2544 - }; 2545 - 2546 - static struct hi_res_scroll_info hi_res_scroll_devices[] = { 2547 - { /* Anywhere MX */ 2548 - .product_id = 0x1017, .microns_per_hi_res_unit = 445 }, 2549 - { /* Performance MX */ 2550 - .product_id = 0x101a, .microns_per_hi_res_unit = 406 }, 2551 - { /* M560 */ 2552 - .product_id = 0x402d, .microns_per_hi_res_unit = 435 }, 2553 - { /* MX Master 2S */ 2554 - .product_id = 0x4069, .microns_per_hi_res_unit = 406 }, 2555 - }; 2556 - 2557 - static int hi_res_scroll_look_up_microns(__u32 product_id) 2558 - { 2559 - int i; 2560 - int num_devices = sizeof(hi_res_scroll_devices) 2561 - / sizeof(hi_res_scroll_devices[0]); 2562 - for (i = 0; i < num_devices; i++) { 2563 - if (hi_res_scroll_devices[i].product_id == product_id) 2564 - return hi_res_scroll_devices[i].microns_per_hi_res_unit; 2565 - } 2566 - /* We don't have a value for this device, so use a sensible default. 
*/ 2567 - return 406; 2568 - } 2569 - 2570 - static int hi_res_scroll_enable(struct hidpp_device *hidpp) 2571 - { 2572 - int ret; 2573 - u8 multiplier = 8; 2574 - 2575 - if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_X2121) { 2576 - ret = hidpp_hrw_set_wheel_mode(hidpp, false, true, false); 2577 - hidpp_hrw_get_wheel_capability(hidpp, &multiplier); 2578 - } else if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_X2120) { 2579 - ret = hidpp_hrs_set_highres_scrolling_mode(hidpp, true, 2580 - &multiplier); 2581 - } else /* if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_1P0) */ 2582 - ret = hidpp10_enable_scrolling_acceleration(hidpp); 2583 - 2584 - if (ret) 2585 - return ret; 2586 - 2587 - hidpp->vertical_wheel_counter.resolution_multiplier = multiplier; 2588 - hidpp->vertical_wheel_counter.microns_per_hi_res_unit = 2589 - hi_res_scroll_look_up_microns(hidpp->hid_dev->product); 2590 - hid_info(hidpp->hid_dev, "multiplier = %d, microns = %d\n", 2591 - multiplier, 2592 - hidpp->vertical_wheel_counter.microns_per_hi_res_unit); 2593 - return 0; 2594 - } 2595 - 2596 - /* -------------------------------------------------------------------------- */ 2597 2656 /* Generic HID++ devices */ 2598 2657 /* -------------------------------------------------------------------------- */ 2599 2658 ··· 2572 2763 wtp_populate_input(hidpp, input, origin_is_hid_core); 2573 2764 else if (hidpp->quirks & HIDPP_QUIRK_CLASS_M560) 2574 2765 m560_populate_input(hidpp, input, origin_is_hid_core); 2575 - 2576 - if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) { 2577 - input_set_capability(input, EV_REL, REL_WHEEL_HI_RES); 2578 - hidpp->vertical_wheel_counter.dev = input; 2579 - } 2580 2766 } 2581 2767 2582 2768 static int hidpp_input_configured(struct hid_device *hdev, ··· 2688 2884 return m560_raw_event(hdev, data, size); 2689 2885 2690 2886 return 0; 2691 - } 2692 - 2693 - static int hidpp_event(struct hid_device *hdev, struct hid_field *field, 2694 - struct hid_usage *usage, __s32 value) 2695 - { 2696 - 
/* This function will only be called for scroll events, due to the 2697 - * restriction imposed in hidpp_usages. 2698 - */ 2699 - struct hidpp_device *hidpp = hid_get_drvdata(hdev); 2700 - struct hid_scroll_counter *counter = &hidpp->vertical_wheel_counter; 2701 - /* A scroll event may occur before the multiplier has been retrieved or 2702 - * the input device set, or high-res scroll enabling may fail. In such 2703 - * cases we must return early (falling back to default behaviour) to 2704 - * avoid a crash in hid_scroll_counter_handle_scroll. 2705 - */ 2706 - if (!(hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) || value == 0 2707 - || counter->dev == NULL || counter->resolution_multiplier == 0) 2708 - return 0; 2709 - 2710 - hid_scroll_counter_handle_scroll(counter, value); 2711 - return 1; 2712 2887 } 2713 2888 2714 2889 static int hidpp_initialize_battery(struct hidpp_device *hidpp) ··· 2901 3118 if (hidpp->battery.ps) 2902 3119 power_supply_changed(hidpp->battery.ps); 2903 3120 2904 - if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) 2905 - hi_res_scroll_enable(hidpp); 2906 - 2907 3121 if (!(hidpp->quirks & HIDPP_QUIRK_NO_HIDINPUT) || hidpp->delayed_input) 2908 3122 /* if the input nodes are already created, we can stop now */ 2909 3123 return; ··· 3086 3306 mutex_destroy(&hidpp->send_mutex); 3087 3307 } 3088 3308 3089 - #define LDJ_DEVICE(product) \ 3090 - HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, \ 3091 - USB_VENDOR_ID_LOGITECH, (product)) 3092 - 3093 3309 static const struct hid_device_id hidpp_devices[] = { 3094 3310 { /* wireless touchpad */ 3095 - LDJ_DEVICE(0x4011), 3311 + HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3312 + USB_VENDOR_ID_LOGITECH, 0x4011), 3096 3313 .driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT | 3097 3314 HIDPP_QUIRK_WTP_PHYSICAL_BUTTONS }, 3098 3315 { /* wireless touchpad T650 */ 3099 - LDJ_DEVICE(0x4101), 3316 + HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3317 + USB_VENDOR_ID_LOGITECH, 0x4101), 3100 3318 
.driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT }, 3101 3319 { /* wireless touchpad T651 */ 3102 3320 HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 3103 3321 USB_DEVICE_ID_LOGITECH_T651), 3104 3322 .driver_data = HIDPP_QUIRK_CLASS_WTP }, 3105 - { /* Mouse Logitech Anywhere MX */ 3106 - LDJ_DEVICE(0x1017), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 }, 3107 - { /* Mouse Logitech Cube */ 3108 - LDJ_DEVICE(0x4010), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2120 }, 3109 - { /* Mouse Logitech M335 */ 3110 - LDJ_DEVICE(0x4050), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3111 - { /* Mouse Logitech M515 */ 3112 - LDJ_DEVICE(0x4007), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2120 }, 3113 3323 { /* Mouse logitech M560 */ 3114 - LDJ_DEVICE(0x402d), 3115 - .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_CLASS_M560 3116 - | HIDPP_QUIRK_HI_RES_SCROLL_X2120 }, 3117 - { /* Mouse Logitech M705 (firmware RQM17) */ 3118 - LDJ_DEVICE(0x101b), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 }, 3119 - { /* Mouse Logitech M705 (firmware RQM67) */ 3120 - LDJ_DEVICE(0x406d), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3121 - { /* Mouse Logitech M720 */ 3122 - LDJ_DEVICE(0x405e), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3123 - { /* Mouse Logitech MX Anywhere 2 */ 3124 - LDJ_DEVICE(0x404a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3125 - { LDJ_DEVICE(0xb013), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3126 - { LDJ_DEVICE(0xb018), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3127 - { LDJ_DEVICE(0xb01f), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3128 - { /* Mouse Logitech MX Anywhere 2S */ 3129 - LDJ_DEVICE(0x406a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3130 - { /* Mouse Logitech MX Master */ 3131 - LDJ_DEVICE(0x4041), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3132 - { LDJ_DEVICE(0x4060), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3133 - { LDJ_DEVICE(0x4071), .driver_data = 
HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3134 - { /* Mouse Logitech MX Master 2S */ 3135 - LDJ_DEVICE(0x4069), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3136 - { /* Mouse Logitech Performance MX */ 3137 - LDJ_DEVICE(0x101a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 }, 3324 + HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3325 + USB_VENDOR_ID_LOGITECH, 0x402d), 3326 + .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_CLASS_M560 }, 3138 3327 { /* Keyboard logitech K400 */ 3139 - LDJ_DEVICE(0x4024), 3328 + HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3329 + USB_VENDOR_ID_LOGITECH, 0x4024), 3140 3330 .driver_data = HIDPP_QUIRK_CLASS_K400 }, 3141 3331 { /* Solar Keyboard Logitech K750 */ 3142 - LDJ_DEVICE(0x4002), 3332 + HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3333 + USB_VENDOR_ID_LOGITECH, 0x4002), 3143 3334 .driver_data = HIDPP_QUIRK_CLASS_K750 }, 3144 3335 3145 - { LDJ_DEVICE(HID_ANY_ID) }, 3336 + { HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, 3337 + USB_VENDOR_ID_LOGITECH, HID_ANY_ID)}, 3146 3338 3147 3339 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_G920_WHEEL), 3148 3340 .driver_data = HIDPP_QUIRK_CLASS_G920 | HIDPP_QUIRK_FORCE_OUTPUT_REPORTS}, ··· 3123 3371 3124 3372 MODULE_DEVICE_TABLE(hid, hidpp_devices); 3125 3373 3126 - static const struct hid_usage_id hidpp_usages[] = { 3127 - { HID_GD_WHEEL, EV_REL, REL_WHEEL }, 3128 - { HID_ANY_ID - 1, HID_ANY_ID - 1, HID_ANY_ID - 1} 3129 - }; 3130 - 3131 3374 static struct hid_driver hidpp_driver = { 3132 3375 .name = "logitech-hidpp-device", 3133 3376 .id_table = hidpp_devices, 3134 3377 .probe = hidpp_probe, 3135 3378 .remove = hidpp_remove, 3136 3379 .raw_event = hidpp_raw_event, 3137 - .usage_table = hidpp_usages, 3138 - .event = hidpp_event, 3139 3380 .input_configured = hidpp_input_configured, 3140 3381 .input_mapping = hidpp_input_mapping, 3141 3382 .input_mapped = hidpp_input_mapped,
+6
drivers/hid/hid-multitouch.c
··· 1814 1814 MT_USB_DEVICE(USB_VENDOR_ID_CHUNGHWAT, 1815 1815 USB_DEVICE_ID_CHUNGHWAT_MULTITOUCH) }, 1816 1816 1817 + /* Cirque devices */ 1818 + { .driver_data = MT_CLS_WIN_8_DUAL, 1819 + HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, 1820 + I2C_VENDOR_ID_CIRQUE, 1821 + I2C_PRODUCT_ID_CIRQUE_121F) }, 1822 + 1817 1823 /* CJTouch panels */ 1818 1824 { .driver_data = MT_CLS_NSMU, 1819 1825 MT_USB_DEVICE(USB_VENDOR_ID_CJTOUCH,
+3
drivers/hid/hid-quirks.c
··· 107 107 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_MOUSE_C05A), HID_QUIRK_ALWAYS_POLL }, 108 108 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_MOUSE_C06A), HID_QUIRK_ALWAYS_POLL }, 109 109 { HID_USB_DEVICE(USB_VENDOR_ID_MCS, USB_DEVICE_ID_MCS_GAMEPADBLOCK), HID_QUIRK_MULTI_INPUT }, 110 + { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PIXART_MOUSE), HID_QUIRK_ALWAYS_POLL }, 110 111 { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_POWER_COVER), HID_QUIRK_NO_INIT_REPORTS }, 111 112 { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_SURFACE_PRO_2), HID_QUIRK_NO_INIT_REPORTS }, 112 113 { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TOUCH_COVER_2), HID_QUIRK_NO_INIT_REPORTS }, ··· 130 129 { HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN), HID_QUIRK_NO_INIT_REPORTS }, 131 130 { HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL }, 132 131 { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_MOUSE_4D22), HID_QUIRK_ALWAYS_POLL }, 132 + { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F), HID_QUIRK_ALWAYS_POLL }, 133 + { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22), HID_QUIRK_ALWAYS_POLL }, 133 134 { HID_USB_DEVICE(USB_VENDOR_ID_PRODIGE, USB_DEVICE_ID_PRODIGE_CORDLESS), HID_QUIRK_NOGET }, 134 135 { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001), HID_QUIRK_NOGET }, 135 136 { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3003), HID_QUIRK_NOGET },
+90 -64
drivers/hid/hid-steam.c
··· 23 23 * In order to avoid breaking them this driver creates a layered hidraw device, 24 24 * so it can detect when the client is running and then: 25 25 * - it will not send any command to the controller. 26 - * - this input device will be disabled, to avoid double input of the same 26 + * - this input device will be removed, to avoid double input of the same 27 27 * user action. 28 + * When the client is closed, this input device will be created again. 28 29 * 29 30 * For additional functions, such as changing the right-pad margin or switching 30 31 * the led, you can use the user-space tool at: ··· 114 113 spinlock_t lock; 115 114 struct hid_device *hdev, *client_hdev; 116 115 struct mutex mutex; 117 - bool client_opened, input_opened; 116 + bool client_opened; 118 117 struct input_dev __rcu *input; 119 118 unsigned long quirks; 120 119 struct work_struct work_connect; ··· 280 279 } 281 280 } 282 281 283 - static void steam_update_lizard_mode(struct steam_device *steam) 284 - { 285 - mutex_lock(&steam->mutex); 286 - if (!steam->client_opened) { 287 - if (steam->input_opened) 288 - steam_set_lizard_mode(steam, false); 289 - else 290 - steam_set_lizard_mode(steam, lizard_mode); 291 - } 292 - mutex_unlock(&steam->mutex); 293 - } 294 - 295 282 static int steam_input_open(struct input_dev *dev) 296 283 { 297 284 struct steam_device *steam = input_get_drvdata(dev); ··· 290 301 return ret; 291 302 292 303 mutex_lock(&steam->mutex); 293 - steam->input_opened = true; 294 304 if (!steam->client_opened && lizard_mode) 295 305 steam_set_lizard_mode(steam, false); 296 306 mutex_unlock(&steam->mutex); ··· 301 313 struct steam_device *steam = input_get_drvdata(dev); 302 314 303 315 mutex_lock(&steam->mutex); 304 - steam->input_opened = false; 305 316 if (!steam->client_opened && lizard_mode) 306 317 steam_set_lizard_mode(steam, true); 307 318 mutex_unlock(&steam->mutex); ··· 387 400 return 0; 388 401 } 389 402 390 - static int steam_register(struct steam_device *steam) 403 
+ static int steam_input_register(struct steam_device *steam) 391 404 { 392 405 struct hid_device *hdev = steam->hdev; 393 406 struct input_dev *input; ··· 400 413 dbg_hid("%s: already connected\n", __func__); 401 414 return 0; 402 415 } 403 - 404 - /* 405 - * Unlikely, but getting the serial could fail, and it is not so 406 - * important, so make up a serial number and go on. 407 - */ 408 - if (steam_get_serial(steam) < 0) 409 - strlcpy(steam->serial_no, "XXXXXXXXXX", 410 - sizeof(steam->serial_no)); 411 - 412 - hid_info(hdev, "Steam Controller '%s' connected", 413 - steam->serial_no); 414 416 415 417 input = input_allocate_device(); 416 418 if (!input) ··· 468 492 goto input_register_fail; 469 493 470 494 rcu_assign_pointer(steam->input, input); 471 - 472 - /* ignore battery errors, we can live without it */ 473 - if (steam->quirks & STEAM_QUIRK_WIRELESS) 474 - steam_battery_register(steam); 475 - 476 495 return 0; 477 496 478 497 input_register_fail: ··· 475 504 return ret; 476 505 } 477 506 478 - static void steam_unregister(struct steam_device *steam) 507 + static void steam_input_unregister(struct steam_device *steam) 479 508 { 480 509 struct input_dev *input; 510 + rcu_read_lock(); 511 + input = rcu_dereference(steam->input); 512 + rcu_read_unlock(); 513 + if (!input) 514 + return; 515 + RCU_INIT_POINTER(steam->input, NULL); 516 + synchronize_rcu(); 517 + input_unregister_device(input); 518 + } 519 + 520 + static void steam_battery_unregister(struct steam_device *steam) 521 + { 481 522 struct power_supply *battery; 482 523 483 524 rcu_read_lock(); 484 - input = rcu_dereference(steam->input); 485 525 battery = rcu_dereference(steam->battery); 486 526 rcu_read_unlock(); 487 527 488 - if (battery) { 489 - RCU_INIT_POINTER(steam->battery, NULL); 490 - synchronize_rcu(); 491 - power_supply_unregister(battery); 528 + if (!battery) 529 + return; 530 + RCU_INIT_POINTER(steam->battery, NULL); 531 + synchronize_rcu(); 532 + power_supply_unregister(battery); 533 + } 
534 + 535 + static int steam_register(struct steam_device *steam) 536 + { 537 + int ret; 538 + 539 + /* 540 + * This function can be called several times in a row with the 541 + * wireless adaptor, without steam_unregister() between them, because 542 + * another client send a get_connection_status command, for example. 543 + * The battery and serial number are set just once per device. 544 + */ 545 + if (!steam->serial_no[0]) { 546 + /* 547 + * Unlikely, but getting the serial could fail, and it is not so 548 + * important, so make up a serial number and go on. 549 + */ 550 + if (steam_get_serial(steam) < 0) 551 + strlcpy(steam->serial_no, "XXXXXXXXXX", 552 + sizeof(steam->serial_no)); 553 + 554 + hid_info(steam->hdev, "Steam Controller '%s' connected", 555 + steam->serial_no); 556 + 557 + /* ignore battery errors, we can live without it */ 558 + if (steam->quirks & STEAM_QUIRK_WIRELESS) 559 + steam_battery_register(steam); 560 + 561 + mutex_lock(&steam_devices_lock); 562 + list_add(&steam->list, &steam_devices); 563 + mutex_unlock(&steam_devices_lock); 492 564 } 493 - if (input) { 494 - RCU_INIT_POINTER(steam->input, NULL); 495 - synchronize_rcu(); 565 + 566 + mutex_lock(&steam->mutex); 567 + if (!steam->client_opened) { 568 + steam_set_lizard_mode(steam, lizard_mode); 569 + ret = steam_input_register(steam); 570 + } else { 571 + ret = 0; 572 + } 573 + mutex_unlock(&steam->mutex); 574 + 575 + return ret; 576 + } 577 + 578 + static void steam_unregister(struct steam_device *steam) 579 + { 580 + steam_battery_unregister(steam); 581 + steam_input_unregister(steam); 582 + if (steam->serial_no[0]) { 496 583 hid_info(steam->hdev, "Steam Controller '%s' disconnected", 497 584 steam->serial_no); 498 - input_unregister_device(input); 585 + mutex_lock(&steam_devices_lock); 586 + list_del(&steam->list); 587 + mutex_unlock(&steam_devices_lock); 588 + steam->serial_no[0] = 0; 499 589 } 500 590 } 501 591 ··· 632 600 mutex_lock(&steam->mutex); 633 601 steam->client_opened = 
true; 634 602 mutex_unlock(&steam->mutex); 603 + 604 + steam_input_unregister(steam); 605 + 635 606 return ret; 636 607 } 637 608 ··· 644 609 645 610 mutex_lock(&steam->mutex); 646 611 steam->client_opened = false; 647 - if (steam->input_opened) 648 - steam_set_lizard_mode(steam, false); 649 - else 650 - steam_set_lizard_mode(steam, lizard_mode); 651 612 mutex_unlock(&steam->mutex); 652 613 653 614 hid_hw_close(steam->hdev); 615 + if (steam->connected) { 616 + steam_set_lizard_mode(steam, lizard_mode); 617 + steam_input_register(steam); 618 + } 654 619 } 655 620 656 621 static int steam_client_ll_raw_request(struct hid_device *hdev, ··· 779 744 } 780 745 } 781 746 782 - mutex_lock(&steam_devices_lock); 783 - steam_update_lizard_mode(steam); 784 - list_add(&steam->list, &steam_devices); 785 - mutex_unlock(&steam_devices_lock); 786 - 787 747 return 0; 788 748 789 749 hid_hw_open_fail: ··· 804 774 return; 805 775 } 806 776 807 - mutex_lock(&steam_devices_lock); 808 - list_del(&steam->list); 809 - mutex_unlock(&steam_devices_lock); 810 - 811 777 hid_destroy_device(steam->client_hdev); 812 778 steam->client_opened = false; 813 779 cancel_work_sync(&steam->work_connect); ··· 818 792 static void steam_do_connect_event(struct steam_device *steam, bool connected) 819 793 { 820 794 unsigned long flags; 795 + bool changed; 821 796 822 797 spin_lock_irqsave(&steam->lock, flags); 798 + changed = steam->connected != connected; 823 799 steam->connected = connected; 824 800 spin_unlock_irqrestore(&steam->lock, flags); 825 801 826 - if (schedule_work(&steam->work_connect) == 0) 802 + if (changed && schedule_work(&steam->work_connect) == 0) 827 803 dbg_hid("%s: connected=%d event already queued\n", 828 804 __func__, connected); 829 805 } ··· 1047 1019 return 0; 1048 1020 rcu_read_lock(); 1049 1021 input = rcu_dereference(steam->input); 1050 - if (likely(input)) { 1022 + if (likely(input)) 1051 1023 steam_do_input_event(steam, input, data); 1052 - } else { 1053 - dbg_hid("%s: input 
data without connect event\n", 1054 - __func__); 1055 - steam_do_connect_event(steam, true); 1056 - } 1057 1024 rcu_read_unlock(); 1058 1025 break; 1059 1026 case STEAM_EV_CONNECT: ··· 1097 1074 1098 1075 mutex_lock(&steam_devices_lock); 1099 1076 list_for_each_entry(steam, &steam_devices, list) { 1100 - steam_update_lizard_mode(steam); 1077 + mutex_lock(&steam->mutex); 1078 + if (!steam->client_opened) 1079 + steam_set_lizard_mode(steam, lizard_mode); 1080 + mutex_unlock(&steam->mutex); 1101 1081 } 1102 1082 mutex_unlock(&steam_devices_lock); 1103 1083 return 0;
+2
drivers/hid/i2c-hid/i2c-hid-core.c
··· 177 177 I2C_HID_QUIRK_NO_RUNTIME_PM }, 178 178 { I2C_VENDOR_ID_RAYDIUM, I2C_PRODUCT_ID_RAYDIUM_4B33, 179 179 I2C_HID_QUIRK_DELAY_AFTER_SLEEP }, 180 + { USB_VENDOR_ID_LG, I2C_DEVICE_ID_LG_8001, 181 + I2C_HID_QUIRK_NO_RUNTIME_PM }, 180 182 { 0, 0 } 181 183 }; 182 184
+19 -6
drivers/hid/uhid.c
··· 12 12 13 13 #include <linux/atomic.h> 14 14 #include <linux/compat.h> 15 + #include <linux/cred.h> 15 16 #include <linux/device.h> 16 17 #include <linux/fs.h> 17 18 #include <linux/hid.h> ··· 497 496 goto err_free; 498 497 } 499 498 500 - len = min(sizeof(hid->name), sizeof(ev->u.create2.name)); 501 - strlcpy(hid->name, ev->u.create2.name, len); 502 - len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys)); 503 - strlcpy(hid->phys, ev->u.create2.phys, len); 504 - len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq)); 505 - strlcpy(hid->uniq, ev->u.create2.uniq, len); 499 + /* @hid is zero-initialized, strncpy() is correct, strlcpy() not */ 500 + len = min(sizeof(hid->name), sizeof(ev->u.create2.name)) - 1; 501 + strncpy(hid->name, ev->u.create2.name, len); 502 + len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys)) - 1; 503 + strncpy(hid->phys, ev->u.create2.phys, len); 504 + len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq)) - 1; 505 + strncpy(hid->uniq, ev->u.create2.uniq, len); 506 506 507 507 hid->ll_driver = &uhid_hid_driver; 508 508 hid->bus = ev->u.create2.bus; ··· 724 722 725 723 switch (uhid->input_buf.type) { 726 724 case UHID_CREATE: 725 + /* 726 + * 'struct uhid_create_req' contains a __user pointer which is 727 + * copied from, so it's unsafe to allow this with elevated 728 + * privileges (e.g. from a setuid binary) or via kernel_write(). 729 + */ 730 + if (file->f_cred != current_cred() || uaccess_kernel()) { 731 + pr_err_once("UHID_CREATE from different security context by process %d (%s), this is not allowed.\n", 732 + task_tgid_vnr(current), current->comm); 733 + ret = -EACCES; 734 + goto unlock; 735 + } 727 736 ret = uhid_dev_create(uhid, &uhid->input_buf); 728 737 break; 729 738 case UHID_CREATE2:
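The uhid hunk above replaces strlcpy() with strncpy() because the source buffers come from userspace and may lack a terminating NUL, which strlcpy() would read past while computing the source length. A minimal userspace sketch (struct and sizes made up) of why the strncpy() form is safe on a zero-initialized destination:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for struct hid_device: the real struct is
 * zero-allocated, so copying at most sizeof(dst) - 1 bytes always
 * leaves a terminating NUL, even when the userspace-provided source
 * buffer is not NUL-terminated. */
struct dev_info {
    char name[8];
};

void copy_name(struct dev_info *info, const char src[16])
{
    size_t len = sizeof(info->name) < 16 ? sizeof(info->name) : 16;
    /* destination is assumed zeroed; strncpy never touches its last byte */
    strncpy(info->name, src, len - 1);
}
```

strlcpy() with the same length would have called strlen() on the unterminated source and over-read it; strncpy() bounds the copy by the destination instead.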
+3 -3
drivers/hwmon/ina2xx.c
··· 274 274 break; 275 275 case INA2XX_CURRENT: 276 276 /* signed register, result in mA */ 277 - val = regval * data->current_lsb_uA; 277 + val = (s16)regval * data->current_lsb_uA; 278 278 val = DIV_ROUND_CLOSEST(val, 1000); 279 279 break; 280 280 case INA2XX_CALIBRATION: ··· 491 491 } 492 492 493 493 data->groups[group++] = &ina2xx_group; 494 - if (id->driver_data == ina226) 494 + if (chip == ina226) 495 495 data->groups[group++] = &ina226_group; 496 496 497 497 hwmon_dev = devm_hwmon_device_register_with_groups(dev, client->name, ··· 500 500 return PTR_ERR(hwmon_dev); 501 501 502 502 dev_info(dev, "power monitor %s (Rshunt = %li uOhm)\n", 503 - id->name, data->rshunt); 503 + client->name, data->rshunt); 504 504 505 505 return 0; 506 506 }
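The one-cast ina2xx fix is a sign-extension bug: the raw register word is read back unsigned, but the current register holds a signed 16-bit quantity, so 0xFFFF must scale as -1, not 65535. A standalone sketch (the LSB value is illustrative, not the chip's actual calibration):

```c
#include <assert.h>
#include <stdint.h>

/* Convert a raw INA2xx-style current register word to microamps.
 * The (int16_t) cast sign-extends the register before scaling,
 * which is exactly what the hunk above adds. */
long reg_to_uA(uint16_t regval, long current_lsb_uA)
{
    return (int16_t)regval * current_lsb_uA;
}
```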
+1 -1
drivers/hwmon/mlxreg-fan.c
··· 51 51 */ 52 52 #define MLXREG_FAN_GET_RPM(rval, d, s) (DIV_ROUND_CLOSEST(15000000 * 100, \ 53 53 ((rval) + (s)) * (d))) 54 - #define MLXREG_FAN_GET_FAULT(val, mask) (!!((val) ^ (mask))) 54 + #define MLXREG_FAN_GET_FAULT(val, mask) (!((val) ^ (mask))) 55 55 #define MLXREG_FAN_PWM_DUTY2STATE(duty) (DIV_ROUND_CLOSEST((duty) * \ 56 56 MLXREG_FAN_MAX_STATE, \ 57 57 MLXREG_FAN_MAX_DUTY))
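The mlxreg-fan macro change flips the sense of the fault test: a fan is faulty when the tacho reading equals the fault mask, i.e. when `val ^ mask` is zero, so the correct test is `!(val ^ mask)`; the old `!!(val ^ mask)` reported the exact opposite. Sketched in isolation:

```c
#include <assert.h>
#include <stdint.h>

/* Returns 1 when the reading matches the all-fault mask (fan fault),
 * 0 for any healthy reading -- the corrected sense of the macro. */
int fan_fault(uint32_t val, uint32_t mask)
{
    return !(val ^ mask);
}
```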
-6
drivers/hwmon/raspberrypi-hwmon.c
··· 115 115 { 116 116 struct device *dev = &pdev->dev; 117 117 struct rpi_hwmon_data *data; 118 - int ret; 119 118 120 119 data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 121 120 if (!data) ··· 122 123 123 124 /* Parent driver assure that firmware is correct */ 124 125 data->fw = dev_get_drvdata(dev->parent); 125 - 126 - /* Init throttled */ 127 - ret = rpi_firmware_property(data->fw, RPI_FIRMWARE_GET_THROTTLED, 128 - &data->last_throttled, 129 - sizeof(data->last_throttled)); 130 126 131 127 data->hwmon_dev = devm_hwmon_device_register_with_info(dev, "rpi_volt", 132 128 data,
+1 -1
drivers/hwmon/w83795.c
··· 1691 1691 * somewhere else in the code 1692 1692 */ 1693 1693 #define SENSOR_ATTR_TEMP(index) { \ 1694 - SENSOR_ATTR_2(temp##index##_type, S_IRUGO | (index < 4 ? S_IWUSR : 0), \ 1694 + SENSOR_ATTR_2(temp##index##_type, S_IRUGO | (index < 5 ? S_IWUSR : 0), \ 1695 1695 show_temp_mode, store_temp_mode, NOT_USED, index - 1), \ 1696 1696 SENSOR_ATTR_2(temp##index##_input, S_IRUGO, show_temp, \ 1697 1697 NULL, TEMP_READ, index - 1), \
+3
drivers/net/ethernet/cavium/thunder/nic_main.c
··· 1441 1441 { 1442 1442 struct nicpf *nic = pci_get_drvdata(pdev); 1443 1443 1444 + if (!nic) 1445 + return; 1446 + 1444 1447 if (nic->flags & NIC_SRIOV_ENABLED) 1445 1448 pci_disable_sriov(pdev); 1446 1449
+1 -3
drivers/net/ethernet/hisilicon/hip04_eth.c
··· 915 915 } 916 916 917 917 ret = register_netdev(ndev); 918 - if (ret) { 919 - free_netdev(ndev); 918 + if (ret) 920 919 goto alloc_fail; 921 - } 922 920 923 921 return 0; 924 922
+1 -1
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 1417 1417 } 1418 1418 1419 1419 vsi->flags |= I40E_VSI_FLAG_FILTER_CHANGED; 1420 - set_bit(__I40E_MACVLAN_SYNC_PENDING, vsi->state); 1420 + set_bit(__I40E_MACVLAN_SYNC_PENDING, vsi->back->state); 1421 1421 } 1422 1422 1423 1423 /**
+7 -7
drivers/net/ethernet/intel/i40e/i40e_xsk.c
··· 33 33 } 34 34 35 35 /** 36 - * i40e_add_xsk_umem - Store an UMEM for a certain ring/qid 36 + * i40e_add_xsk_umem - Store a UMEM for a certain ring/qid 37 37 * @vsi: Current VSI 38 38 * @umem: UMEM to store 39 39 * @qid: Ring/qid to associate with the UMEM ··· 56 56 } 57 57 58 58 /** 59 - * i40e_remove_xsk_umem - Remove an UMEM for a certain ring/qid 59 + * i40e_remove_xsk_umem - Remove a UMEM for a certain ring/qid 60 60 * @vsi: Current VSI 61 61 * @qid: Ring/qid associated with the UMEM 62 62 **/ ··· 130 130 } 131 131 132 132 /** 133 - * i40e_xsk_umem_enable - Enable/associate an UMEM to a certain ring/qid 133 + * i40e_xsk_umem_enable - Enable/associate a UMEM to a certain ring/qid 134 134 * @vsi: Current VSI 135 135 * @umem: UMEM 136 136 * @qid: Rx ring to associate UMEM to ··· 189 189 } 190 190 191 191 /** 192 - * i40e_xsk_umem_disable - Diassociate an UMEM from a certain ring/qid 192 + * i40e_xsk_umem_disable - Disassociate a UMEM from a certain ring/qid 193 193 * @vsi: Current VSI 194 194 * @qid: Rx ring to associate UMEM to 195 195 * ··· 255 255 } 256 256 257 257 /** 258 - * i40e_xsk_umem_query - Queries a certain ring/qid for its UMEM 258 + * i40e_xsk_umem_setup - Enable/disassociate a UMEM to/from a ring/qid 259 259 * @vsi: Current VSI 260 260 * @umem: UMEM to enable/associate to a ring, or NULL to disable 261 261 * @qid: Rx ring to (dis)associate UMEM (from)to 262 262 * 263 - * This function enables or disables an UMEM to a certain ring. 263 + * This function enables or disables a UMEM to a certain ring. 264 264 * 265 265 * Returns 0 on success, <0 on failure 266 266 **/ ··· 276 276 * @rx_ring: Rx ring 277 277 * @xdp: xdp_buff used as input to the XDP program 278 278 * 279 - * This function enables or disables an UMEM to a certain ring. 279 + * This function enables or disables a UMEM to a certain ring. 280 280 * 281 281 * Returns any of I40E_XDP_{PASS, CONSUMED, TX, REDIR} 282 282 **/
+1
drivers/net/ethernet/intel/igb/e1000_i210.c
··· 842 842 nvm_word = E1000_INVM_DEFAULT_AL; 843 843 tmp_nvm = nvm_word | E1000_INVM_PLL_WO_VAL; 844 844 igb_write_phy_reg_82580(hw, I347AT4_PAGE_SELECT, E1000_PHY_PLL_FREQ_PAGE); 845 + phy_word = E1000_PHY_PLL_UNCONF; 845 846 for (i = 0; i < E1000_MAX_PLL_TRIES; i++) { 846 847 /* check current state directly from internal PHY */ 847 848 igb_read_phy_reg_82580(hw, E1000_PHY_PLL_FREQ_REG, &phy_word);
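The igb hunk seeds `phy_word` before the retry loop so that a register read which never succeeds cannot leave the variable uninitialized when it is tested afterwards. A rough sketch of that shape (the constant name and the reading source are stand-ins, not the driver's API):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define PHY_PLL_UNCONF 0xFFFF  /* hypothetical stand-in for E1000_PHY_PLL_UNCONF */

/* Poll a sequence of register readings; if tries is zero (or every
 * read fails to update the word), the caller still sees a defined
 * "unconfigured" value instead of stack garbage. */
uint16_t poll_pll(int tries, const uint16_t *readings)
{
    uint16_t word = PHY_PLL_UNCONF;  /* defined even if no read happens */
    for (int i = 0; i < tries; i++)
        word = readings[i];          /* stands in for the PHY register read */
    return word;
}
```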
+3 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
··· 2262 2262 *autoneg = false; 2263 2263 2264 2264 if (hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core0 || 2265 - hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1) { 2265 + hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1 || 2266 + hw->phy.sfp_type == ixgbe_sfp_type_1g_lx_core0 || 2267 + hw->phy.sfp_type == ixgbe_sfp_type_1g_lx_core1) { 2266 2268 *speed = IXGBE_LINK_SPEED_1GB_FULL; 2267 2269 return 0; 2268 2270 }
+6 -5
drivers/net/ethernet/microchip/lan743x_main.c
··· 1672 1672 netif_wake_queue(adapter->netdev); 1673 1673 } 1674 1674 1675 - if (!napi_complete_done(napi, weight)) 1675 + if (!napi_complete(napi)) 1676 1676 goto done; 1677 1677 1678 1678 /* enable isr */ ··· 1681 1681 lan743x_csr_read(adapter, INT_STS); 1682 1682 1683 1683 done: 1684 - return weight; 1684 + return 0; 1685 1685 } 1686 1686 1687 1687 static void lan743x_tx_ring_cleanup(struct lan743x_tx *tx) ··· 1870 1870 tx->vector_flags = lan743x_intr_get_vector_flags(adapter, 1871 1871 INT_BIT_DMA_TX_ 1872 1872 (tx->channel_number)); 1873 - netif_napi_add(adapter->netdev, 1874 - &tx->napi, lan743x_tx_napi_poll, 1875 - tx->ring_size - 1); 1873 + netif_tx_napi_add(adapter->netdev, 1874 + &tx->napi, lan743x_tx_napi_poll, 1875 + tx->ring_size - 1); 1876 1876 napi_enable(&tx->napi); 1877 1877 1878 1878 data = 0; ··· 3017 3017 3018 3018 static const struct pci_device_id lan743x_pcidev_tbl[] = { 3019 3019 { PCI_DEVICE(PCI_VENDOR_ID_SMSC, PCI_DEVICE_ID_SMSC_LAN7430) }, 3020 + { PCI_DEVICE(PCI_VENDOR_ID_SMSC, PCI_DEVICE_ID_SMSC_LAN7431) }, 3020 3021 { 0, } 3021 3022 }; 3022 3023
+1
drivers/net/ethernet/microchip/lan743x_main.h
··· 548 548 /* SMSC acquired EFAR late 1990's, MCHP acquired SMSC 2012 */ 549 549 #define PCI_VENDOR_ID_SMSC PCI_VENDOR_ID_EFAR 550 550 #define PCI_DEVICE_ID_SMSC_LAN7430 (0x7430) 551 + #define PCI_DEVICE_ID_SMSC_LAN7431 (0x7431) 551 552 552 553 #define PCI_CONFIG_LENGTH (0x1000) 553 554
+1 -1
drivers/net/ethernet/qlogic/qed/qed_debug.c
··· 6071 6071 "no error", 6072 6072 "length error", 6073 6073 "function disabled", 6074 - "VF sent command to attnetion address", 6074 + "VF sent command to attention address", 6075 6075 "host sent prod update command", 6076 6076 "read of during interrupt register while in MIMD mode", 6077 6077 "access to PXP BAR reserved address",
+1 -1
drivers/net/ethernet/via/via-velocity.c
··· 3605 3605 "tx_jumbo", 3606 3606 "rx_mac_control_frames", 3607 3607 "tx_mac_control_frames", 3608 - "rx_frame_alignement_errors", 3608 + "rx_frame_alignment_errors", 3609 3609 "rx_long_ok", 3610 3610 "rx_long_err", 3611 3611 "tx_sqe_errors",
+8
drivers/net/phy/phy_device.c
··· 2255 2255 new_driver->mdiodrv.driver.remove = phy_remove; 2256 2256 new_driver->mdiodrv.driver.owner = owner; 2257 2257 2258 + /* The following works around an issue where the PHY driver doesn't bind 2259 + * to the device, resulting in the genphy driver being used instead of 2260 + * the dedicated driver. The root cause of the issue isn't known yet 2261 + * and seems to be in the base driver core. Once this is fixed we may 2262 + * remove this workaround. 2263 + */ 2264 + new_driver->mdiodrv.driver.probe_type = PROBE_FORCE_SYNCHRONOUS; 2265 + 2258 2266 retval = driver_register(&new_driver->mdiodrv.driver); 2259 2267 if (retval) { 2260 2268 pr_err("%s: Error %d in registering driver\n",
+1 -1
drivers/net/rionet.c
··· 216 216 * it just report sending a packet to the target 217 217 * (without actual packet transfer). 218 218 */ 219 - dev_kfree_skb_any(skb); 220 219 ndev->stats.tx_packets++; 221 220 ndev->stats.tx_bytes += skb->len; 221 + dev_kfree_skb_any(skb); 222 222 } 223 223 } 224 224
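The rionet reorder fixes a use-after-free: `skb->len` was read for the byte counter after `dev_kfree_skb_any()` had already released the buffer. The corrected ordering, sketched with plain malloc()/free() in place of the skb API:

```c
#include <assert.h>
#include <stdlib.h>

struct skb_sketch { unsigned int len; };
struct tx_stats { unsigned long tx_packets, tx_bytes; };

/* Account the packet, then free it: the length field must be read
 * while the buffer is still owned by us, as in the hunk above. */
void account_and_free(struct tx_stats *st, struct skb_sketch *skb)
{
    st->tx_packets++;
    st->tx_bytes += skb->len;   /* must happen before the free */
    free(skb);                  /* stands in for dev_kfree_skb_any() */
}
```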
+4 -6
drivers/net/usb/ipheth.c
··· 140 140 struct usb_device *udev; 141 141 struct usb_interface *intf; 142 142 struct net_device *net; 143 - struct sk_buff *tx_skb; 144 143 struct urb *tx_urb; 145 144 struct urb *rx_urb; 146 145 unsigned char *tx_buf; ··· 229 230 case -ENOENT: 230 231 case -ECONNRESET: 231 232 case -ESHUTDOWN: 233 + case -EPROTO: 232 234 return; 233 235 case 0: 234 236 break; ··· 281 281 dev_err(&dev->intf->dev, "%s: urb status: %d\n", 282 282 __func__, status); 283 283 284 - dev_kfree_skb_irq(dev->tx_skb); 285 284 if (status == 0) 286 285 netif_wake_queue(dev->net); 287 286 else ··· 422 423 if (skb->len > IPHETH_BUF_SIZE) { 423 424 WARN(1, "%s: skb too large: %d bytes\n", __func__, skb->len); 424 425 dev->net->stats.tx_dropped++; 425 - dev_kfree_skb_irq(skb); 426 + dev_kfree_skb_any(skb); 426 427 return NETDEV_TX_OK; 427 428 } 428 429 ··· 442 443 dev_err(&dev->intf->dev, "%s: usb_submit_urb: %d\n", 443 444 __func__, retval); 444 445 dev->net->stats.tx_errors++; 445 - dev_kfree_skb_irq(skb); 446 + dev_kfree_skb_any(skb); 446 447 } else { 447 - dev->tx_skb = skb; 448 - 449 448 dev->net->stats.tx_packets++; 450 449 dev->net->stats.tx_bytes += skb->len; 450 + dev_consume_skb_any(skb); 451 451 netif_stop_queue(net); 452 452 } 453 453
+12 -15
drivers/s390/net/qeth_core_main.c
··· 4513 4513 { 4514 4514 struct qeth_ipa_cmd *cmd; 4515 4515 struct qeth_arp_query_info *qinfo; 4516 - struct qeth_snmp_cmd *snmp; 4517 4516 unsigned char *data; 4517 + void *snmp_data; 4518 4518 __u16 data_len; 4519 4519 4520 4520 QETH_CARD_TEXT(card, 3, "snpcmdcb"); ··· 4522 4522 cmd = (struct qeth_ipa_cmd *) sdata; 4523 4523 data = (unsigned char *)((char *)cmd - reply->offset); 4524 4524 qinfo = (struct qeth_arp_query_info *) reply->param; 4525 - snmp = &cmd->data.setadapterparms.data.snmp; 4526 4525 4527 4526 if (cmd->hdr.return_code) { 4528 4527 QETH_CARD_TEXT_(card, 4, "scer1%x", cmd->hdr.return_code); ··· 4534 4535 return 0; 4535 4536 } 4536 4537 data_len = *((__u16 *)QETH_IPA_PDU_LEN_PDU1(data)); 4537 - if (cmd->data.setadapterparms.hdr.seq_no == 1) 4538 - data_len -= (__u16)((char *)&snmp->data - (char *)cmd); 4539 - else 4540 - data_len -= (__u16)((char *)&snmp->request - (char *)cmd); 4538 + if (cmd->data.setadapterparms.hdr.seq_no == 1) { 4539 + snmp_data = &cmd->data.setadapterparms.data.snmp; 4540 + data_len -= offsetof(struct qeth_ipa_cmd, 4541 + data.setadapterparms.data.snmp); 4542 + } else { 4543 + snmp_data = &cmd->data.setadapterparms.data.snmp.request; 4544 + data_len -= offsetof(struct qeth_ipa_cmd, 4545 + data.setadapterparms.data.snmp.request); 4546 + } 4541 4547 4542 4548 /* check if there is enough room in userspace */ 4543 4549 if ((qinfo->udata_len - qinfo->udata_offset) < data_len) { ··· 4555 4551 QETH_CARD_TEXT_(card, 4, "sseqn%i", 4556 4552 cmd->data.setadapterparms.hdr.seq_no); 4557 4553 /*copy entries to user buffer*/ 4558 - if (cmd->data.setadapterparms.hdr.seq_no == 1) { 4559 - memcpy(qinfo->udata + qinfo->udata_offset, 4560 - (char *)snmp, 4561 - data_len + offsetof(struct qeth_snmp_cmd, data)); 4562 - qinfo->udata_offset += offsetof(struct qeth_snmp_cmd, data); 4563 - } else { 4564 - memcpy(qinfo->udata + qinfo->udata_offset, 4565 - (char *)&snmp->request, data_len); 4566 - } 4554 + memcpy(qinfo->udata + qinfo->udata_offset, 
snmp_data, data_len); 4567 4555 qinfo->udata_offset += data_len; 4556 + 4568 4557 /* check if all replies received ... */ 4569 4558 QETH_CARD_TEXT_(card, 4, "srtot%i", 4570 4559 cmd->data.setadapterparms.hdr.used_total);
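The qeth hunk computes the SNMP payload length with offsetof() into the command structure instead of subtracting raw member pointers, which is both clearer and immune to intermediate-pointer mistakes. The same shape with a made-up stand-in layout (not the real struct qeth_ipa_cmd):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature of the nested command layout. */
struct cmd_sketch {
    int hdr[4];
    struct {
        int setadapterparms_hdr[2];
        char snmp[64];
    } data;
};

/* Payload length = total PDU length minus the offset of the embedded
 * snmp member, computed once with offsetof(). */
size_t snmp_payload_len(size_t pdu_len)
{
    return pdu_len - offsetof(struct cmd_sketch, data.snmp);
}
```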
+2 -2
drivers/spi/spi-mt65xx.c
··· 522 522 mdata->xfer_len = min(MTK_SPI_MAX_FIFO_SIZE, len); 523 523 mtk_spi_setup_packet(master); 524 524 525 - cnt = len / 4; 525 + cnt = mdata->xfer_len / 4; 526 526 iowrite32_rep(mdata->base + SPI_TX_DATA_REG, 527 527 trans->tx_buf + mdata->num_xfered, cnt); 528 528 529 - remainder = len % 4; 529 + remainder = mdata->xfer_len % 4; 530 530 if (remainder > 0) { 531 531 reg_val = 0; 532 532 memcpy(&reg_val,
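The spi-mt65xx fix sizes the FIFO write from the clamped per-chunk length (`xfer_len`) rather than the full remaining length, so transfers longer than the FIFO no longer overrun it. The arithmetic in isolation, assuming the 32-byte FIFO the driver clamps to:

```c
#include <assert.h>

#define FIFO_SIZE 32  /* stands in for MTK_SPI_MAX_FIFO_SIZE */

struct fifo_chunk { int words; int remainder; };

/* Split one transfer chunk: clamp to the FIFO first, then derive the
 * 32-bit word count and trailing-byte remainder from the CLAMPED
 * length -- the bug was deriving them from the unclamped len. */
struct fifo_chunk fifo_split(int len)
{
    int xfer_len = len < FIFO_SIZE ? len : FIFO_SIZE;
    struct fifo_chunk c = { xfer_len / 4, xfer_len % 4 };
    return c;
}
```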
+25 -12
drivers/spi/spi-omap2-mcspi.c
··· 1540 1540 /* work with hotplug and coldplug */ 1541 1541 MODULE_ALIAS("platform:omap2_mcspi"); 1542 1542 1543 - #ifdef CONFIG_SUSPEND 1544 - static int omap2_mcspi_suspend_noirq(struct device *dev) 1543 + static int __maybe_unused omap2_mcspi_suspend(struct device *dev) 1545 1544 { 1546 - return pinctrl_pm_select_sleep_state(dev); 1545 + struct spi_master *master = dev_get_drvdata(dev); 1546 + struct omap2_mcspi *mcspi = spi_master_get_devdata(master); 1547 + int error; 1548 + 1549 + error = pinctrl_pm_select_sleep_state(dev); 1550 + if (error) 1551 + dev_warn(mcspi->dev, "%s: failed to set pins: %i\n", 1552 + __func__, error); 1553 + 1554 + error = spi_master_suspend(master); 1555 + if (error) 1556 + dev_warn(mcspi->dev, "%s: master suspend failed: %i\n", 1557 + __func__, error); 1558 + 1559 + return pm_runtime_force_suspend(dev); 1547 1560 } 1548 1561 1549 - static int omap2_mcspi_resume_noirq(struct device *dev) 1562 + static int __maybe_unused omap2_mcspi_resume(struct device *dev) 1550 1563 { 1551 1564 struct spi_master *master = dev_get_drvdata(dev); 1552 1565 struct omap2_mcspi *mcspi = spi_master_get_devdata(master); ··· 1570 1557 dev_warn(mcspi->dev, "%s: failed to set pins: %i\n", 1571 1558 __func__, error); 1572 1559 1573 - return 0; 1560 + error = spi_master_resume(master); 1561 + if (error) 1562 + dev_warn(mcspi->dev, "%s: master resume failed: %i\n", 1563 + __func__, error); 1564 + 1565 + return pm_runtime_force_resume(dev); 1574 1566 } 1575 1567 1576 - #else 1577 - #define omap2_mcspi_suspend_noirq NULL 1578 - #define omap2_mcspi_resume_noirq NULL 1579 - #endif 1580 - 1581 1568 static const struct dev_pm_ops omap2_mcspi_pm_ops = { 1582 - .suspend_noirq = omap2_mcspi_suspend_noirq, 1583 - .resume_noirq = omap2_mcspi_resume_noirq, 1569 + SET_SYSTEM_SLEEP_PM_OPS(omap2_mcspi_suspend, 1570 + omap2_mcspi_resume) 1584 1571 .runtime_resume = omap_mcspi_runtime_resume, 1585 1572 }; 1586 1573
+1 -10
fs/btrfs/disk-io.c
··· 477 477 int mirror_num = 0; 478 478 int failed_mirror = 0; 479 479 480 - clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags); 481 480 io_tree = &BTRFS_I(fs_info->btree_inode)->io_tree; 482 481 while (1) { 482 + clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags); 483 483 ret = read_extent_buffer_pages(io_tree, eb, WAIT_COMPLETE, 484 484 mirror_num); 485 485 if (!ret) { ··· 492 492 else 493 493 break; 494 494 } 495 - 496 - /* 497 - * This buffer's crc is fine, but its contents are corrupted, so 498 - * there is no reason to read the other copies, they won't be 499 - * any less wrong. 500 - */ 501 - if (test_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags) || 502 - ret == -EUCLEAN) 503 - break; 504 495 505 496 num_copies = btrfs_num_copies(fs_info, 506 497 eb->start, eb->len);
+24
fs/btrfs/file.c
··· 2089 2089 atomic_inc(&root->log_batch); 2090 2090 2091 2091 /* 2092 + * Before we acquired the inode's lock, someone may have dirtied more 2093 + * pages in the target range. We need to make sure that writeback for 2094 + * any such pages does not start while we are logging the inode, because 2095 + * if it does, any of the following might happen when we are not doing a 2096 + * full inode sync: 2097 + * 2098 + * 1) We log an extent after its writeback finishes but before its 2099 + * checksums are added to the csum tree, leading to -EIO errors 2100 + * when attempting to read the extent after a log replay. 2101 + * 2102 + * 2) We can end up logging an extent before its writeback finishes. 2103 + * Therefore after the log replay we will have a file extent item 2104 + * pointing to an unwritten extent (and no data checksums as well). 2105 + * 2106 + * So trigger writeback for any eventual new dirty pages and then we 2107 + * wait for all ordered extents to complete below. 2108 + */ 2109 + ret = start_ordered_ops(inode, start, end); 2110 + if (ret) { 2111 + inode_unlock(inode); 2112 + goto out; 2113 + } 2114 + 2115 + /* 2092 2116 * We have to do this here to avoid the priority inversion of waiting on 2093 2117 * IO of a lower priority task while holding a transaciton open. 2094 2118 */
+2 -1
fs/btrfs/qgroup.c
··· 2659 2659 int i; 2660 2660 u64 *i_qgroups; 2661 2661 struct btrfs_fs_info *fs_info = trans->fs_info; 2662 - struct btrfs_root *quota_root = fs_info->quota_root; 2662 + struct btrfs_root *quota_root; 2663 2663 struct btrfs_qgroup *srcgroup; 2664 2664 struct btrfs_qgroup *dstgroup; 2665 2665 u32 level_size = 0; ··· 2669 2669 if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags)) 2670 2670 goto out; 2671 2671 2672 + quota_root = fs_info->quota_root; 2672 2673 if (!quota_root) { 2673 2674 ret = -EINVAL; 2674 2675 goto out;
+1
fs/btrfs/relocation.c
··· 3959 3959 restart: 3960 3960 if (update_backref_cache(trans, &rc->backref_cache)) { 3961 3961 btrfs_end_transaction(trans); 3962 + trans = NULL; 3962 3963 continue; 3963 3964 } 3964 3965
+8 -3
fs/btrfs/send.c
··· 3340 3340 kfree(m); 3341 3341 } 3342 3342 3343 - static void tail_append_pending_moves(struct pending_dir_move *moves, 3343 + static void tail_append_pending_moves(struct send_ctx *sctx, 3344 + struct pending_dir_move *moves, 3344 3345 struct list_head *stack) 3345 3346 { 3346 3347 if (list_empty(&moves->list)) { ··· 3351 3350 list_splice_init(&moves->list, &list); 3352 3351 list_add_tail(&moves->list, stack); 3353 3352 list_splice_tail(&list, stack); 3353 + } 3354 + if (!RB_EMPTY_NODE(&moves->node)) { 3355 + rb_erase(&moves->node, &sctx->pending_dir_moves); 3356 + RB_CLEAR_NODE(&moves->node); 3354 3357 } 3355 3358 } 3356 3359 ··· 3370 3365 return 0; 3371 3366 3372 3367 INIT_LIST_HEAD(&stack); 3373 - tail_append_pending_moves(pm, &stack); 3368 + tail_append_pending_moves(sctx, pm, &stack); 3374 3369 3375 3370 while (!list_empty(&stack)) { 3376 3371 pm = list_first_entry(&stack, struct pending_dir_move, list); ··· 3381 3376 goto out; 3382 3377 pm = get_pending_dir_moves(sctx, parent_ino); 3383 3378 if (pm) 3384 - tail_append_pending_moves(pm, &stack); 3379 + tail_append_pending_moves(sctx, pm, &stack); 3385 3380 } 3386 3381 return 0; 3387 3382
+1
fs/btrfs/super.c
··· 2237 2237 vol = memdup_user((void __user *)arg, sizeof(*vol)); 2238 2238 if (IS_ERR(vol)) 2239 2239 return PTR_ERR(vol); 2240 + vol->name[BTRFS_PATH_NAME_MAX] = '\0'; 2240 2241 2241 2242 switch (cmd) { 2242 2243 case BTRFS_IOC_SCAN_DEV:
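The one-line btrfs change forces a NUL terminator onto the ioctl argument copied wholesale from userspace before it is ever used as a string. A sketch of the same defensive pattern (array size shortened for illustration; the real BTRFS_PATH_NAME_MAX is much larger):

```c
#include <assert.h>
#include <string.h>

#define PATH_NAME_MAX 15  /* illustrative stand-in for BTRFS_PATH_NAME_MAX */

struct vol_args_sketch {
    char name[PATH_NAME_MAX + 1];
};

/* A struct memdup'd from userspace carries whatever bytes the caller
 * put there; terminate it before any strlen()/strcmp() can over-read. */
void sanitize(struct vol_args_sketch *vol)
{
    vol->name[PATH_NAME_MAX] = '\0';
}
```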
+35 -25
fs/dax.c
··· 98 98 return xa_mk_value(flags | (pfn_t_to_pfn(pfn) << DAX_SHIFT)); 99 99 } 100 100 101 - static void *dax_make_page_entry(struct page *page) 102 - { 103 - pfn_t pfn = page_to_pfn_t(page); 104 - return dax_make_entry(pfn, PageHead(page) ? DAX_PMD : 0); 105 - } 106 - 107 101 static bool dax_is_locked(void *entry) 108 102 { 109 103 return xa_to_value(entry) & DAX_LOCKED; ··· 110 116 return 0; 111 117 } 112 118 113 - static int dax_is_pmd_entry(void *entry) 119 + static unsigned long dax_is_pmd_entry(void *entry) 114 120 { 115 121 return xa_to_value(entry) & DAX_PMD; 116 122 } 117 123 118 - static int dax_is_pte_entry(void *entry) 124 + static bool dax_is_pte_entry(void *entry) 119 125 { 120 126 return !(xa_to_value(entry) & DAX_PMD); 121 127 } ··· 216 222 ewait.wait.func = wake_exceptional_entry_func; 217 223 218 224 for (;;) { 219 - entry = xas_load(xas); 220 - if (!entry || xa_is_internal(entry) || 221 - WARN_ON_ONCE(!xa_is_value(entry)) || 225 + entry = xas_find_conflict(xas); 226 + if (!entry || WARN_ON_ONCE(!xa_is_value(entry)) || 222 227 !dax_is_locked(entry)) 223 228 return entry; 224 229 ··· 248 255 { 249 256 void *old; 250 257 258 + BUG_ON(dax_is_locked(entry)); 251 259 xas_reset(xas); 252 260 xas_lock_irq(xas); 253 261 old = xas_store(xas, entry); ··· 346 352 return NULL; 347 353 } 348 354 355 + /* 356 + * dax_lock_mapping_entry - Lock the DAX entry corresponding to a page 357 + * @page: The page whose entry we want to lock 358 + * 359 + * Context: Process context. 360 + * Return: %true if the entry was locked or does not need to be locked. 
361 + */ 349 362 bool dax_lock_mapping_entry(struct page *page) 350 363 { 351 364 XA_STATE(xas, NULL, 0); 352 365 void *entry; 366 + bool locked; 353 367 368 + /* Ensure page->mapping isn't freed while we look at it */ 369 + rcu_read_lock(); 354 370 for (;;) { 355 371 struct address_space *mapping = READ_ONCE(page->mapping); 356 372 373 + locked = false; 357 374 if (!dax_mapping(mapping)) 358 - return false; 375 + break; 359 376 360 377 /* 361 378 * In the device-dax case there's no need to lock, a ··· 375 370 * otherwise we would not have a valid pfn_to_page() 376 371 * translation. 377 372 */ 373 + locked = true; 378 374 if (S_ISCHR(mapping->host->i_mode)) 379 - return true; 375 + break; 380 376 381 377 xas.xa = &mapping->i_pages; 382 378 xas_lock_irq(&xas); ··· 388 382 xas_set(&xas, page->index); 389 383 entry = xas_load(&xas); 390 384 if (dax_is_locked(entry)) { 385 + rcu_read_unlock(); 391 386 entry = get_unlocked_entry(&xas); 392 - /* Did the page move while we slept? */ 393 - if (dax_to_pfn(entry) != page_to_pfn(page)) { 394 - xas_unlock_irq(&xas); 395 - continue; 396 - } 387 + xas_unlock_irq(&xas); 388 + put_unlocked_entry(&xas, entry); 389 + rcu_read_lock(); 390 + continue; 397 391 } 398 392 dax_lock_entry(&xas, entry); 399 393 xas_unlock_irq(&xas); 400 - return true; 394 + break; 401 395 } 396 + rcu_read_unlock(); 397 + return locked; 402 398 } 403 399 404 400 void dax_unlock_mapping_entry(struct page *page) 405 401 { 406 402 struct address_space *mapping = page->mapping; 407 403 XA_STATE(xas, &mapping->i_pages, page->index); 404 + void *entry; 408 405 409 406 if (S_ISCHR(mapping->host->i_mode)) 410 407 return; 411 408 412 - dax_unlock_entry(&xas, dax_make_page_entry(page)); 409 + rcu_read_lock(); 410 + entry = xas_load(&xas); 411 + rcu_read_unlock(); 412 + entry = dax_make_entry(page_to_pfn_t(page), dax_is_pmd_entry(entry)); 413 + dax_unlock_entry(&xas, entry); 413 414 } 414 415 415 416 /* ··· 458 445 retry: 459 446 xas_lock_irq(xas); 460 447 entry = 
get_unlocked_entry(xas); 461 - if (xa_is_internal(entry)) 462 - goto fallback; 463 448 464 449 if (entry) { 465 - if (WARN_ON_ONCE(!xa_is_value(entry))) { 450 + if (!xa_is_value(entry)) { 466 451 xas_set_err(xas, EIO); 467 452 goto out_unlock; 468 453 } ··· 1639 1628 /* Did we race with someone splitting entry or so? */ 1640 1629 if (!entry || 1641 1630 (order == 0 && !dax_is_pte_entry(entry)) || 1642 - (order == PMD_ORDER && (xa_is_internal(entry) || 1643 - !dax_is_pmd_entry(entry)))) { 1631 + (order == PMD_ORDER && !dax_is_pmd_entry(entry))) { 1644 1632 put_unlocked_entry(&xas, entry); 1645 1633 xas_unlock_irq(&xas); 1646 1634 trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
+11 -11
fs/nfs/callback_proc.c
··· 686 686 {
687 687 struct cb_offloadargs *args = data;
688 688 struct nfs_server *server;
689 - struct nfs4_copy_state *copy;
689 + struct nfs4_copy_state *copy, *tmp_copy;
690 690 bool found = false;
691 +
692 + copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS);
693 + if (!copy)
694 + return htonl(NFS4ERR_SERVERFAULT);
691 695
692 696 spin_lock(&cps->clp->cl_lock);
693 697 rcu_read_lock();
694 698 list_for_each_entry_rcu(server, &cps->clp->cl_superblocks,
695 699 client_link) {
696 - list_for_each_entry(copy, &server->ss_copies, copies) {
700 + list_for_each_entry(tmp_copy, &server->ss_copies, copies) {
697 701 if (memcmp(args->coa_stateid.other,
698 - copy->stateid.other,
702 + tmp_copy->stateid.other,
699 703 sizeof(args->coa_stateid.other)))
700 704 continue;
701 - nfs4_copy_cb_args(copy, args);
702 - complete(&copy->completion);
705 + nfs4_copy_cb_args(tmp_copy, args);
706 + complete(&tmp_copy->completion);
703 707 found = true;
704 708 goto out;
705 709 }
··· 711 707 out:
712 708 rcu_read_unlock();
713 709 if (!found) {
714 - copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS);
715 - if (!copy) {
716 - spin_unlock(&cps->clp->cl_lock);
717 - return htonl(NFS4ERR_SERVERFAULT);
718 - }
719 710 memcpy(&copy->stateid, &args->coa_stateid, NFS4_STATEID_SIZE);
720 711 nfs4_copy_cb_args(copy, args);
721 712 list_add_tail(&copy->copies, &cps->clp->pending_cb_stateids);
722 - }
713 + } else
714 + kfree(copy);
723 715 spin_unlock(&cps->clp->cl_lock);
724 716
725 717 return 0;
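The reshuffle in this hunk (and the matching one in fs/nfs/nfs42proc.c below) follows a common kernel pattern: allocate before taking the spinlock, because the allocation may sleep while the lock may not be held across a sleep, then free the unused preallocation if an existing entry turns up under the lock. A minimal sketch of that pattern — a Python stand-in with an ordinary mutex, not the kernel code; the `pending` dict and `handle()` helper are invented for the illustration:

```python
import threading

pending = {}             # stand-in for the pending_cb_stateids list
lock = threading.Lock()  # stand-in for cl_lock (a kernel spinlock)

def handle(stateid, args):
    # Allocate up front: in the kernel we may not sleep while holding
    # the spinlock, so kzalloc() has to happen before spin_lock().
    copy = {"stateid": stateid, "args": None}
    with lock:
        existing = pending.get(stateid)
        if existing is not None:
            existing["args"] = args  # found: update the existing entry...
            return existing          # ...and the preallocation is simply dropped
        copy["args"] = args
        pending[stateid] = copy      # not found: install the preallocation
        return copy
```

The cost is one possibly-wasted allocation on the "found" path, in exchange for never sleeping (or failing an allocation) while the lock is held.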
+9 -12
fs/nfs/flexfilelayout/flexfilelayout.c
··· 1361 1361 task))
1362 1362 return;
1363 1363
1364 - if (ff_layout_read_prepare_common(task, hdr))
1365 - return;
1366 -
1367 - if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context,
1368 - hdr->args.lock_context, FMODE_READ) == -EIO)
1369 - rpc_exit(task, -EIO); /* lost lock, terminate I/O */
1364 + ff_layout_read_prepare_common(task, hdr);
1370 1365 }
1371 1366
1372 1367 static void ff_layout_read_call_done(struct rpc_task *task, void *data)
··· 1537 1542 task))
1538 1543 return;
1539 1544
1540 - if (ff_layout_write_prepare_common(task, hdr))
1541 - return;
1542 -
1543 - if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context,
1544 - hdr->args.lock_context, FMODE_WRITE) == -EIO)
1545 - rpc_exit(task, -EIO); /* lost lock, terminate I/O */
1545 + ff_layout_write_prepare_common(task, hdr);
1546 1546 }
1547 1547
1548 1548 static void ff_layout_write_call_done(struct rpc_task *task, void *data)
··· 1732 1742 fh = nfs4_ff_layout_select_ds_fh(lseg, idx);
1733 1743 if (fh)
1734 1744 hdr->args.fh = fh;
1745 +
1746 + if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid))
1747 + goto out_failed;
1748 +
1735 1749 /*
1736 1750 * Note that if we ever decide to split across DSes,
1737 1751 * then we may need to handle dense-like offsets.
··· 1797 1803 fh = nfs4_ff_layout_select_ds_fh(lseg, idx);
1798 1804 if (fh)
1799 1805 hdr->args.fh = fh;
1806 +
1807 + if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid))
1808 + goto out_failed;
1800 1809
1801 1810 /*
1802 1811 * Note that if we ever decide to split across DSes,
+4
fs/nfs/flexfilelayout/flexfilelayout.h
··· 215 215 unsigned int maxnum);
216 216 struct nfs_fh *
217 217 nfs4_ff_layout_select_ds_fh(struct pnfs_layout_segment *lseg, u32 mirror_idx);
218 + int
219 + nfs4_ff_layout_select_ds_stateid(struct pnfs_layout_segment *lseg,
220 + u32 mirror_idx,
221 + nfs4_stateid *stateid);
218 222
219 223 struct nfs4_pnfs_ds *
220 224 nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg, u32 ds_idx,
+19
fs/nfs/flexfilelayout/flexfilelayoutdev.c
··· 370 370 return fh;
371 371 }
372 372
373 + int
374 + nfs4_ff_layout_select_ds_stateid(struct pnfs_layout_segment *lseg,
375 + u32 mirror_idx,
376 + nfs4_stateid *stateid)
377 + {
378 + struct nfs4_ff_layout_mirror *mirror = FF_LAYOUT_COMP(lseg, mirror_idx);
379 +
380 + if (!ff_layout_mirror_valid(lseg, mirror, false)) {
381 + pr_err_ratelimited("NFS: %s: No data server for mirror offset index %d\n",
382 + __func__, mirror_idx);
383 + goto out;
384 + }
385 +
386 + nfs4_stateid_copy(stateid, &mirror->stateid);
387 + return 1;
388 + out:
389 + return 0;
390 + }
391 +
373 392 /**
374 393 * nfs4_ff_layout_prepare_ds - prepare a DS connection for an RPC call
375 394 * @lseg: the layout segment we're operating on
+10 -9
fs/nfs/nfs42proc.c
··· 137 137 struct file *dst,
138 138 nfs4_stateid *src_stateid)
139 139 {
140 - struct nfs4_copy_state *copy;
140 + struct nfs4_copy_state *copy, *tmp_copy;
141 141 int status = NFS4_OK;
142 142 bool found_pending = false;
143 143 struct nfs_open_context *ctx = nfs_file_open_context(dst);
144 144
145 + copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS);
146 + if (!copy)
147 + return -ENOMEM;
148 +
145 149 spin_lock(&server->nfs_client->cl_lock);
146 - list_for_each_entry(copy, &server->nfs_client->pending_cb_stateids,
150 + list_for_each_entry(tmp_copy, &server->nfs_client->pending_cb_stateids,
147 151 copies) {
148 - if (memcmp(&res->write_res.stateid, &copy->stateid,
152 + if (memcmp(&res->write_res.stateid, &tmp_copy->stateid,
149 153 NFS4_STATEID_SIZE))
150 154 continue;
151 155 found_pending = true;
152 - list_del(&copy->copies);
156 + list_del(&tmp_copy->copies);
153 157 break;
154 158 }
155 159 if (found_pending) {
156 160 spin_unlock(&server->nfs_client->cl_lock);
161 + kfree(copy);
162 + copy = tmp_copy;
157 163 goto out;
158 164 }
159 165
160 - copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS);
161 - if (!copy) {
162 - spin_unlock(&server->nfs_client->cl_lock);
163 - return -ENOMEM;
164 - }
165 166 memcpy(&copy->stateid, &res->write_res.stateid, NFS4_STATEID_SIZE);
166 167 init_completion(&copy->completion);
167 168 copy->parent_state = ctx->state;
+2
fs/nfs/nfs4_fs.h
··· 41 41 NFS4CLNT_MOVED,
42 42 NFS4CLNT_LEASE_MOVED,
43 43 NFS4CLNT_DELEGATION_EXPIRED,
44 + NFS4CLNT_RUN_MANAGER,
45 + NFS4CLNT_DELEGRETURN_RUNNING,
44 46 };
45 47
46 48 #define NFS4_RENEW_TIMEOUT 0x01
+11 -5
fs/nfs/nfs4state.c
··· 1210 1210 struct task_struct *task;
1211 1211 char buf[INET6_ADDRSTRLEN + sizeof("-manager") + 1];
1212 1212
1213 + set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state);
1213 1214 if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0)
1214 1215 return;
1215 1216 __module_get(THIS_MODULE);
··· 2504 2503
2505 2504 /* Ensure exclusive access to NFSv4 state */
2506 2505 do {
2506 + clear_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state);
2507 2507 if (test_bit(NFS4CLNT_PURGE_STATE, &clp->cl_state)) {
2508 2508 section = "purge state";
2509 2509 status = nfs4_purge_lease(clp);
··· 2595 2593 }
2596 2594
2597 2595 nfs4_end_drain_session(clp);
2598 - if (test_and_clear_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state)) {
2599 - nfs_client_return_marked_delegations(clp);
2600 - continue;
2596 + nfs4_clear_state_manager_bit(clp);
2597 +
2598 + if (!test_and_set_bit(NFS4CLNT_DELEGRETURN_RUNNING, &clp->cl_state)) {
2599 + if (test_and_clear_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state)) {
2600 + nfs_client_return_marked_delegations(clp);
2601 + set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state);
2602 + }
2603 + clear_bit(NFS4CLNT_DELEGRETURN_RUNNING, &clp->cl_state);
2601 2604 }
2602 2605
2603 - nfs4_clear_state_manager_bit(clp);
2604 2606 /* Did we race with an attempt to give us more work? */
2605 - if (clp->cl_state == 0)
2607 + if (!test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state))
2606 2608 return;
2607 2609 if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0)
2608 2610 return;
+1 -3
fs/nilfs2/btnode.c
··· 266 266 return;
267 267
268 268 if (nbh == NULL) { /* blocksize == pagesize */
269 - xa_lock_irq(&btnc->i_pages);
270 - __xa_erase(&btnc->i_pages, newkey);
271 - xa_unlock_irq(&btnc->i_pages);
269 + xa_erase_irq(&btnc->i_pages, newkey);
272 270 unlock_page(ctxt->bh->b_page);
273 271 } else
274 272 brelse(nbh);
+1 -1
include/linux/dma-direct.h
··· 5 5 #include <linux/dma-mapping.h>
6 6 #include <linux/mem_encrypt.h>
7 7
8 - #define DIRECT_MAPPING_ERROR 0
8 + #define DIRECT_MAPPING_ERROR (~(dma_addr_t)0)
9 9
10 10 #ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA
11 11 #include <asm/dma-direct.h>
+4
include/linux/filter.h
··· 852 852
853 853 void bpf_jit_free(struct bpf_prog *fp);
854 854
855 + int bpf_jit_get_func_addr(const struct bpf_prog *prog,
856 + const struct bpf_insn *insn, bool extra_pass,
857 + u64 *func_addr, bool *func_addr_fixed);
858 +
855 859 struct bpf_prog *bpf_jit_blind_constants(struct bpf_prog *fp);
856 860 void bpf_jit_prog_release_other(struct bpf_prog *fp, struct bpf_prog *fp_other);
857 861
-28
include/linux/hid.h
··· 1139 1139 int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size,
1140 1140 int interrupt);
1141 1141
1142 -
1143 - /**
1144 - * struct hid_scroll_counter - Utility class for processing high-resolution
1145 - * scroll events.
1146 - * @dev: the input device for which events should be reported.
1147 - * @microns_per_hi_res_unit: the amount moved by the user's finger for each
1148 - * high-resolution unit reported by the mouse, in
1149 - * microns.
1150 - * @resolution_multiplier: the wheel's resolution in high-resolution mode as a
1151 - * multiple of its lower resolution. For example, if
1152 - * moving the wheel by one "notch" would result in a
1153 - * value of 1 in low-resolution mode but 8 in
1154 - * high-resolution, the multiplier is 8.
1155 - * @remainder: counts the number of high-resolution units moved since the last
1156 - * low-resolution event (REL_WHEEL or REL_HWHEEL) was sent. Should
1157 - * only be used by class methods.
1158 - */
1159 - struct hid_scroll_counter {
1160 - struct input_dev *dev;
1161 - int microns_per_hi_res_unit;
1162 - int resolution_multiplier;
1163 -
1164 - int remainder;
1165 - };
1166 -
1167 - void hid_scroll_counter_handle_scroll(struct hid_scroll_counter *counter,
1168 - int hi_res_value);
1169 -
1170 1142 /* HID quirks API */
1171 1143 unsigned long hid_lookup_quirk(const struct hid_device *hdev);
1172 1144 int hid_quirks_init(char **quirks_param, __u16 bus, int count);
+13
include/linux/netfilter/nf_conntrack_proto_gre.h
··· 21 21 struct nf_conntrack_tuple tuple;
22 22 };
23 23
24 + enum grep_conntrack {
25 + GRE_CT_UNREPLIED,
26 + GRE_CT_REPLIED,
27 + GRE_CT_MAX
28 + };
29 +
30 + struct netns_proto_gre {
31 + struct nf_proto_net nf;
32 + rwlock_t keymap_lock;
33 + struct list_head keymap_list;
34 + unsigned int gre_timeouts[GRE_CT_MAX];
35 + };
36 +
24 37 /* add new tuple->key_reply pair to keymap */
25 38 int nf_ct_gre_keymap_add(struct nf_conn *ct, enum ip_conntrack_dir dir,
26 39 struct nf_conntrack_tuple *t);
+203 -64
include/linux/xarray.h
··· 289 289 void xa_init_flags(struct xarray *, gfp_t flags);
290 290 void *xa_load(struct xarray *, unsigned long index);
291 291 void *xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
292 - void *xa_cmpxchg(struct xarray *, unsigned long index,
293 - void *old, void *entry, gfp_t);
294 - int xa_reserve(struct xarray *, unsigned long index, gfp_t);
292 + void *xa_erase(struct xarray *, unsigned long index);
295 293 void *xa_store_range(struct xarray *, unsigned long first, unsigned long last,
296 294 void *entry, gfp_t);
297 295 bool xa_get_mark(struct xarray *, unsigned long index, xa_mark_t);
··· 339 341 static inline bool xa_marked(const struct xarray *xa, xa_mark_t mark)
340 342 {
341 343 return xa->xa_flags & XA_FLAGS_MARK(mark);
342 - }
343 -
344 - /**
345 - * xa_erase() - Erase this entry from the XArray.
346 - * @xa: XArray.
347 - * @index: Index of entry.
348 - *
349 - * This function is the equivalent of calling xa_store() with %NULL as
350 - * the third argument. The XArray does not need to allocate memory, so
351 - * the user does not need to provide GFP flags.
352 - *
353 - * Context: Process context. Takes and releases the xa_lock.
354 - * Return: The entry which used to be at this index.
355 - */
356 - static inline void *xa_erase(struct xarray *xa, unsigned long index)
357 - {
358 - return xa_store(xa, index, NULL, 0);
359 - }
360 -
361 - /**
362 - * xa_insert() - Store this entry in the XArray unless another entry is
363 - * already present.
364 - * @xa: XArray.
365 - * @index: Index into array.
366 - * @entry: New entry.
367 - * @gfp: Memory allocation flags.
368 - *
369 - * If you would rather see the existing entry in the array, use xa_cmpxchg().
370 - * This function is for users who don't care what the entry is, only that
371 - * one is present.
372 - *
373 - * Context: Process context. Takes and releases the xa_lock.
374 - * May sleep if the @gfp flags permit.
375 - * Return: 0 if the store succeeded. -EEXIST if another entry was present.
376 - * -ENOMEM if memory could not be allocated.
377 - */
378 - static inline int xa_insert(struct xarray *xa, unsigned long index,
379 - void *entry, gfp_t gfp)
380 - {
381 - void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp);
382 - if (!curr)
383 - return 0;
384 - if (xa_is_err(curr))
385 - return xa_err(curr);
386 - return -EEXIST;
387 - }
388 -
389 - /**
390 - * xa_release() - Release a reserved entry.
391 - * @xa: XArray.
392 - * @index: Index of entry.
393 - *
394 - * After calling xa_reserve(), you can call this function to release the
395 - * reservation. If the entry at @index has been stored to, this function
396 - * will do nothing.
397 - */
398 - static inline void xa_release(struct xarray *xa, unsigned long index)
399 - {
400 - xa_cmpxchg(xa, index, NULL, NULL, 0);
401 344 }
402 345
403 346 /**
··· 394 455 void *__xa_cmpxchg(struct xarray *, unsigned long index, void *old,
395 456 void *entry, gfp_t);
396 457 int __xa_alloc(struct xarray *, u32 *id, u32 max, void *entry, gfp_t);
458 + int __xa_reserve(struct xarray *, unsigned long index, gfp_t);
397 459 void __xa_set_mark(struct xarray *, unsigned long index, xa_mark_t);
398 460 void __xa_clear_mark(struct xarray *, unsigned long index, xa_mark_t);
··· 427 487 }
428 488
429 489 /**
490 + * xa_store_bh() - Store this entry in the XArray.
491 + * @xa: XArray.
492 + * @index: Index into array.
493 + * @entry: New entry.
494 + * @gfp: Memory allocation flags.
495 + *
496 + * This function is like calling xa_store() except it disables softirqs
497 + * while holding the array lock.
498 + *
499 + * Context: Any context. Takes and releases the xa_lock while
500 + * disabling softirqs.
501 + * Return: The entry which used to be at this index.
502 + */
503 + static inline void *xa_store_bh(struct xarray *xa, unsigned long index,
504 + void *entry, gfp_t gfp)
505 + {
506 + void *curr;
507 +
508 + xa_lock_bh(xa);
509 + curr = __xa_store(xa, index, entry, gfp);
510 + xa_unlock_bh(xa);
511 +
512 + return curr;
513 + }
514 +
515 + /**
516 + * xa_store_irq() - Store this entry in the XArray.
517 + * @xa: XArray.
518 + * @index: Index into array.
519 + * @entry: New entry.
520 + * @gfp: Memory allocation flags.
521 + *
522 + * This function is like calling xa_store() except it disables interrupts
523 + * while holding the array lock.
524 + *
525 + * Context: Process context. Takes and releases the xa_lock while
526 + * disabling interrupts.
527 + * Return: The entry which used to be at this index.
528 + */
529 + static inline void *xa_store_irq(struct xarray *xa, unsigned long index,
530 + void *entry, gfp_t gfp)
531 + {
532 + void *curr;
533 +
534 + xa_lock_irq(xa);
535 + curr = __xa_store(xa, index, entry, gfp);
536 + xa_unlock_irq(xa);
537 +
538 + return curr;
539 + }
540 +
541 + /**
430 542 * xa_erase_bh() - Erase this entry from the XArray.
431 543 * @xa: XArray.
432 544 * @index: Index of entry.
··· 487 495 * the third argument. The XArray does not need to allocate memory, so
488 496 * the user does not need to provide GFP flags.
489 497 *
490 - * Context: Process context. Takes and releases the xa_lock while
498 + * Context: Any context. Takes and releases the xa_lock while
491 499 * disabling softirqs.
492 500 * Return: The entry which used to be at this index.
493 501 */
··· 524 532 xa_unlock_irq(xa);
525 533
526 534 return entry;
535 + }
536 +
537 + /**
538 + * xa_cmpxchg() - Conditionally replace an entry in the XArray.
539 + * @xa: XArray.
540 + * @index: Index into array.
541 + * @old: Old value to test against.
542 + * @entry: New value to place in array.
543 + * @gfp: Memory allocation flags.
544 + *
545 + * If the entry at @index is the same as @old, replace it with @entry.
546 + * If the return value is equal to @old, then the exchange was successful.
547 + *
548 + * Context: Any context. Takes and releases the xa_lock. May sleep
549 + * if the @gfp flags permit.
550 + * Return: The old value at this index or xa_err() if an error happened.
551 + */
552 + static inline void *xa_cmpxchg(struct xarray *xa, unsigned long index,
553 + void *old, void *entry, gfp_t gfp)
554 + {
555 + void *curr;
556 +
557 + xa_lock(xa);
558 + curr = __xa_cmpxchg(xa, index, old, entry, gfp);
559 + xa_unlock(xa);
560 +
561 + return curr;
562 + }
563 +
564 + /**
565 + * xa_insert() - Store this entry in the XArray unless another entry is
566 + * already present.
567 + * @xa: XArray.
568 + * @index: Index into array.
569 + * @entry: New entry.
570 + * @gfp: Memory allocation flags.
571 + *
572 + * If you would rather see the existing entry in the array, use xa_cmpxchg().
573 + * This function is for users who don't care what the entry is, only that
574 + * one is present.
575 + *
576 + * Context: Process context. Takes and releases the xa_lock.
577 + * May sleep if the @gfp flags permit.
578 + * Return: 0 if the store succeeded. -EEXIST if another entry was present.
579 + * -ENOMEM if memory could not be allocated.
580 + */
581 + static inline int xa_insert(struct xarray *xa, unsigned long index,
582 + void *entry, gfp_t gfp)
583 + {
584 + void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp);
585 + if (!curr)
586 + return 0;
587 + if (xa_is_err(curr))
588 + return xa_err(curr);
589 + return -EEXIST;
527 590 }
528 591
529 592 /**
··· 622 575 * Updates the @id pointer with the index, then stores the entry at that
623 576 * index. A concurrent lookup will not see an uninitialised @id.
624 577 *
625 - * Context: Process context. Takes and releases the xa_lock while
578 + * Context: Any context. Takes and releases the xa_lock while
626 579 * disabling softirqs. May sleep if the @gfp flags permit.
627 580 * Return: 0 on success, -ENOMEM if memory allocation fails or -ENOSPC if
628 581 * there is no more space in the XArray.
··· 666 619 xa_unlock_irq(xa);
667 620
668 621 return err;
622 + }
623 +
624 + /**
625 + * xa_reserve() - Reserve this index in the XArray.
626 + * @xa: XArray.
627 + * @index: Index into array.
628 + * @gfp: Memory allocation flags.
629 + *
630 + * Ensures there is somewhere to store an entry at @index in the array.
631 + * If there is already something stored at @index, this function does
632 + * nothing. If there was nothing there, the entry is marked as reserved.
633 + * Loading from a reserved entry returns a %NULL pointer.
634 + *
635 + * If you do not use the entry that you have reserved, call xa_release()
636 + * or xa_erase() to free any unnecessary memory.
637 + *
638 + * Context: Any context. Takes and releases the xa_lock.
639 + * May sleep if the @gfp flags permit.
640 + * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
641 + */
642 + static inline
643 + int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
644 + {
645 + int ret;
646 +
647 + xa_lock(xa);
648 + ret = __xa_reserve(xa, index, gfp);
649 + xa_unlock(xa);
650 +
651 + return ret;
652 + }
653 +
654 + /**
655 + * xa_reserve_bh() - Reserve this index in the XArray.
656 + * @xa: XArray.
657 + * @index: Index into array.
658 + * @gfp: Memory allocation flags.
659 + *
660 + * A softirq-disabling version of xa_reserve().
661 + *
662 + * Context: Any context. Takes and releases the xa_lock while
663 + * disabling softirqs.
664 + * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
665 + */
666 + static inline
667 + int xa_reserve_bh(struct xarray *xa, unsigned long index, gfp_t gfp)
668 + {
669 + int ret;
670 +
671 + xa_lock_bh(xa);
672 + ret = __xa_reserve(xa, index, gfp);
673 + xa_unlock_bh(xa);
674 +
675 + return ret;
676 + }
677 +
678 + /**
679 + * xa_reserve_irq() - Reserve this index in the XArray.
680 + * @xa: XArray.
681 + * @index: Index into array.
682 + * @gfp: Memory allocation flags.
683 + *
684 + * An interrupt-disabling version of xa_reserve().
685 + *
686 + * Context: Process context. Takes and releases the xa_lock while
687 + * disabling interrupts.
688 + * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
689 + */
690 + static inline
691 + int xa_reserve_irq(struct xarray *xa, unsigned long index, gfp_t gfp)
692 + {
693 + int ret;
694 +
695 + xa_lock_irq(xa);
696 + ret = __xa_reserve(xa, index, gfp);
697 + xa_unlock_irq(xa);
698 +
699 + return ret;
700 + }
701 +
702 + /**
703 + * xa_release() - Release a reserved entry.
704 + * @xa: XArray.
705 + * @index: Index of entry.
706 + *
707 + * After calling xa_reserve(), you can call this function to release the
708 + * reservation. If the entry at @index has been stored to, this function
709 + * will do nothing.
710 + */
711 + static inline void xa_release(struct xarray *xa, unsigned long index)
712 + {
713 + xa_cmpxchg(xa, index, NULL, NULL, 0);
669 714 }
670 715
671 716 /* Everything below here is the Advanced API. Proceed with caution. */
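The reserve/release contract the hunk above adds (and that the xarray.rst change documents) can be sketched with a toy userspace model — this illustrates only the API semantics, not the kernel implementation; the `XArray` class and its `_RESERVED` sentinel are invented for the example:

```python
class XArray:
    """Toy model of the XArray reserve/release contract (illustration only)."""
    _RESERVED = object()  # stands in for the kernel's internal reserved entry

    def __init__(self):
        self._slots = {}

    def load(self, index):
        # A reserved slot looks like NULL (None) to normal readers.
        entry = self._slots.get(index)
        return None if entry is self._RESERVED else entry

    def store(self, index, entry):
        prev = self.load(index)
        if entry is None:
            self._slots.pop(index, None)  # storing NULL behaves like erase
        else:
            self._slots[index] = entry
        return prev

    def reserve(self, index):
        # Reserve only if nothing is stored there yet.
        self._slots.setdefault(index, self._RESERVED)

    def release(self, index):
        # Drop an unused reservation, but leave a real entry alone.
        if self._slots.get(index) is self._RESERVED:
            del self._slots[index]

    def erase(self, index):
        return self.store(index, None)
```

Note how `release()` is a no-op once a real value has been stored — that mirrors the kernel-doc above: "If the entry at @index has been stored to, this function will do nothing."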
+1 -1
include/net/netfilter/ipv4/nf_nat_masquerade.h
··· 9 9 const struct nf_nat_range2 *range,
10 10 const struct net_device *out);
11 11
12 - void nf_nat_masquerade_ipv4_register_notifier(void);
12 + int nf_nat_masquerade_ipv4_register_notifier(void);
13 13 void nf_nat_masquerade_ipv4_unregister_notifier(void);
14 14
15 15 #endif /*_NF_NAT_MASQUERADE_IPV4_H_ */
+1 -1
include/net/netfilter/ipv6/nf_nat_masquerade.h
··· 5 5 unsigned int
6 6 nf_nat_masquerade_ipv6(struct sk_buff *skb, const struct nf_nat_range2 *range,
7 7 const struct net_device *out);
8 - void nf_nat_masquerade_ipv6_register_notifier(void);
8 + int nf_nat_masquerade_ipv6_register_notifier(void);
9 9 void nf_nat_masquerade_ipv6_unregister_notifier(void);
10 10
11 11 #endif /* _NF_NAT_MASQUERADE_IPV6_H_ */
-10
include/uapi/linux/input-event-codes.h
··· 716 716 * the situation described above.
717 717 */
718 718 #define REL_RESERVED 0x0a
719 - #define REL_WHEEL_HI_RES 0x0b
720 719 #define REL_MAX 0x0f
721 720 #define REL_CNT (REL_MAX+1)
722 721
··· 751 752 #define ABS_VOLUME 0x20
752 753
753 754 #define ABS_MISC 0x28
754 -
755 - /*
756 - * 0x2e is reserved and should not be used in input drivers.
757 - * It was used by HID as ABS_MISC+6 and userspace needs to detect if
758 - * the next ABS_* event is correct or is just ABS_MISC + n.
759 - * We define here ABS_RESERVED so userspace can rely on it and detect
760 - * the situation described above.
761 - */
762 - #define ABS_RESERVED 0x2e
763 755
764 756 #define ABS_MT_SLOT 0x2f /* MT slot being modified */
765 757 #define ABS_MT_TOUCH_MAJOR 0x30 /* Major axis of touching ellipse */
+34
kernel/bpf/core.c
··· 685 685 bpf_prog_unlock_free(fp);
686 686 }
687 687
688 + int bpf_jit_get_func_addr(const struct bpf_prog *prog,
689 + const struct bpf_insn *insn, bool extra_pass,
690 + u64 *func_addr, bool *func_addr_fixed)
691 + {
692 + s16 off = insn->off;
693 + s32 imm = insn->imm;
694 + u8 *addr;
695 +
696 + *func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
697 + if (!*func_addr_fixed) {
698 + /* Place-holder address till the last pass has collected
699 + * all addresses for JITed subprograms in which case we
700 + * can pick them up from prog->aux.
701 + */
702 + if (!extra_pass)
703 + addr = NULL;
704 + else if (prog->aux->func &&
705 + off >= 0 && off < prog->aux->func_cnt)
706 + addr = (u8 *)prog->aux->func[off]->bpf_func;
707 + else
708 + return -EINVAL;
709 + } else {
710 + /* Address of a BPF helper call. Since part of the core
711 + * kernel, it's always at a fixed location. __bpf_call_base
712 + * and the helper with imm relative to it are both in core
713 + * kernel.
714 + */
715 + addr = (u8 *)__bpf_call_base + imm;
716 + }
717 +
718 + *func_addr = (unsigned long)addr;
719 + return 0;
720 + }
721 +
688 722 static int bpf_jit_blind_insn(const struct bpf_insn *from,
689 723 const struct bpf_insn *aux,
690 724 struct bpf_insn *to_buff)
+2 -1
kernel/bpf/local_storage.c
··· 138 138 return -ENOENT;
139 139
140 140 new = kmalloc_node(sizeof(struct bpf_storage_buffer) +
141 - map->value_size, __GFP_ZERO | GFP_USER,
141 + map->value_size,
142 + __GFP_ZERO | GFP_ATOMIC | __GFP_NOWARN,
142 143 map->numa_node);
143 144 if (!new)
144 145 return -ENOMEM;
+8 -8
kernel/bpf/queue_stack_maps.c
··· 7 7 #include <linux/bpf.h>
8 8 #include <linux/list.h>
9 9 #include <linux/slab.h>
10 + #include <linux/capability.h>
10 11 #include "percpu_freelist.h"
11 12
12 13 #define QUEUE_STACK_CREATE_FLAG_MASK \
··· 46 45 /* Called from syscall */
47 46 static int queue_stack_map_alloc_check(union bpf_attr *attr)
48 47 {
48 + if (!capable(CAP_SYS_ADMIN))
49 + return -EPERM;
50 +
49 51 /* check sanity of attributes */
50 52 if (attr->max_entries == 0 || attr->key_size != 0 ||
53 + attr->value_size == 0 ||
51 54 attr->map_flags & ~QUEUE_STACK_CREATE_FLAG_MASK)
52 55 return -EINVAL;
53 56
··· 68 63 {
69 64 int ret, numa_node = bpf_map_attr_numa_node(attr);
70 65 struct bpf_queue_stack *qs;
71 - u32 size, value_size;
72 - u64 queue_size, cost;
66 + u64 size, queue_size, cost;
73 67
74 - size = attr->max_entries + 1;
75 - value_size = attr->value_size;
76 -
77 - queue_size = sizeof(*qs) + (u64) value_size * size;
78 -
79 - cost = queue_size;
68 + size = (u64) attr->max_entries + 1;
69 + cost = queue_size = sizeof(*qs) + size * attr->value_size;
80 70 if (cost >= U32_MAX - PAGE_SIZE)
81 71 return ERR_PTR(-E2BIG);
82 72
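The sizing change in the hunk above widens the arithmetic to 64 bits before the limit check: with 32-bit math, `attr->max_entries + 1` (a `u32`) can wrap to a tiny value and sail past the `U32_MAX - PAGE_SIZE` test. A quick illustration of the wrap, masking to 32 bits to mimic `u32` (the `header` parameter is a made-up stand-in for `sizeof(*qs)`):

```python
U32_MAX = 0xFFFFFFFF

def cost_u32(max_entries, value_size, header=64):
    # Old behaviour: the size term is computed in u32 and wraps.
    size = (max_entries + 1) & U32_MAX
    return (header + size * value_size) & U32_MAX

def cost_u64(max_entries, value_size, header=64):
    # Fixed behaviour: widen before multiplying, as the patch does.
    return header + (max_entries + 1) * value_size

# A pathological request wraps to a tiny u32 cost and would pass the check:
assert cost_u32(0xFFFFFFFF, 4096) < U32_MAX
# ...while the 64-bit computation correctly trips the E2BIG limit:
assert cost_u64(0xFFFFFFFF, 4096) >= U32_MAX
```

For sane inputs the two agree; they only diverge once the 32-bit intermediate overflows.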
+1 -1
kernel/bpf/verifier.c
··· 5771 5771 return;
5772 5772 /* NOTE: fake 'exit' subprog should be updated as well. */
5773 5773 for (i = 0; i <= env->subprog_cnt; i++) {
5774 - if (env->subprog_info[i].start < off)
5774 + if (env->subprog_info[i].start <= off)
5775 5775 continue;
5776 5776 env->subprog_info[i].start += len - 1;
5777 5777 }
+2 -1
kernel/dma/swiotlb.c
··· 679 679 }
680 680
681 681 if (!dev_is_dma_coherent(dev) &&
682 - (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
682 + (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0 &&
683 + dev_addr != DIRECT_MAPPING_ERROR)
683 684 arch_sync_dma_for_device(dev, phys, size, dir);
684 685
685 686 return dev_addr;
+5 -3
kernel/trace/bpf_trace.c
··· 196 196 i++;
197 197 } else if (fmt[i] == 'p' || fmt[i] == 's') {
198 198 mod[fmt_cnt]++;
199 - i++;
200 - if (!isspace(fmt[i]) && !ispunct(fmt[i]) && fmt[i] != 0)
199 + /* disallow any further format extensions */
200 + if (fmt[i + 1] != 0 &&
201 + !isspace(fmt[i + 1]) &&
202 + !ispunct(fmt[i + 1]))
201 203 return -EINVAL;
202 204 fmt_cnt++;
203 - if (fmt[i - 1] == 's') {
205 + if (fmt[i] == 's') {
204 206 if (str_seen)
205 207 /* allow only one '%s' per fmt string */
206 208 return -EINVAL;
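The bpf_trace fix above stops advancing `i` before the lookahead, so the conversion character stays at `fmt[i]` and the character after it is checked as `fmt[i + 1]`. A toy re-implementation of just that lookahead in Python (a simplified sketch; `validate_spec` is invented for the illustration and ignores the rest of the kernel's format validation):

```python
import string

def validate_spec(fmt, i):
    """fmt[i] is the conversion character ('p' or 's').
    Mirror the fixed check: the character after the specifier must be
    NUL (end of string here), whitespace, or punctuation -- anything
    else would be a smuggled format extension."""
    nxt = fmt[i + 1] if i + 1 < len(fmt) else ""  # "" models the NUL terminator
    return nxt == "" or nxt in string.whitespace or nxt in string.punctuation

assert validate_spec("%s ", 1)      # "%s" followed by a space: accepted
assert validate_spec("%s", 1)       # specifier at end of string: accepted
assert not validate_spec("%sx", 1)  # trailing extension character: rejected
```

The original code incremented `i` first and then tested `fmt[i]`, which also made the later `fmt[i - 1] == 's'` comparison necessary; with the lookahead form both reads use the natural index.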
+47 -3
lib/test_xarray.c
··· 208 208 XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_2));
209 209
210 210 /* We should see two elements in the array */
211 + rcu_read_lock();
211 212 xas_for_each(&xas, entry, ULONG_MAX)
212 213 seen++;
214 + rcu_read_unlock();
213 215 XA_BUG_ON(xa, seen != 2);
214 216
215 217 /* One of which is marked */
216 218 xas_set(&xas, 0);
217 219 seen = 0;
220 + rcu_read_lock();
218 221 xas_for_each_marked(&xas, entry, ULONG_MAX, XA_MARK_0)
219 222 seen++;
223 + rcu_read_unlock();
220 224 XA_BUG_ON(xa, seen != 1);
221 225 }
222 226 XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_0));
··· 377 373 xa_erase_index(xa, 12345678);
378 374 XA_BUG_ON(xa, !xa_empty(xa));
379 375
376 + /* And so does xa_insert */
377 + xa_reserve(xa, 12345678, GFP_KERNEL);
378 + XA_BUG_ON(xa, xa_insert(xa, 12345678, xa_mk_value(12345678), 0) != 0);
379 + xa_erase_index(xa, 12345678);
380 + XA_BUG_ON(xa, !xa_empty(xa));
381 +
380 382 /* Can iterate through a reserved entry */
381 383 xa_store_index(xa, 5, GFP_KERNEL);
382 384 xa_reserve(xa, 6, GFP_KERNEL);
··· 446 436 XA_BUG_ON(xa, xa_load(xa, max) != NULL);
447 437 XA_BUG_ON(xa, xa_load(xa, min - 1) != NULL);
448 438
439 + xas_lock(&xas);
449 440 XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(min)) != xa_mk_value(index));
441 + xas_unlock(&xas);
450 442 XA_BUG_ON(xa, xa_load(xa, min) != xa_mk_value(min));
451 443 XA_BUG_ON(xa, xa_load(xa, max - 1) != xa_mk_value(min));
452 444 XA_BUG_ON(xa, xa_load(xa, max) != NULL);
··· 464 452 XA_STATE(xas, xa, index);
465 453 xa_store_order(xa, index, order, xa_mk_value(0), GFP_KERNEL);
466 454
455 + xas_lock(&xas);
467 456 XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(1)) != xa_mk_value(0));
468 457 XA_BUG_ON(xa, xas.xa_index != index);
469 458 XA_BUG_ON(xa, xas_store(&xas, NULL) != xa_mk_value(1));
459 + xas_unlock(&xas);
470 460 XA_BUG_ON(xa, !xa_empty(xa));
471 461 }
472 462 #endif
··· 512 498 rcu_read_unlock();
513 499
514 500 /* We can erase multiple values with a single store */
515 - xa_store_order(xa, 0, 63, NULL, GFP_KERNEL);
501 + xa_store_order(xa, 0, BITS_PER_LONG - 1, NULL, GFP_KERNEL);
516 502 XA_BUG_ON(xa, !xa_empty(xa));
517 503
518 504 /* Even when the first slot is empty but the others aren't */
··· 716 702 }
717 703 }
718 704
719 - static noinline void check_find(struct xarray *xa)
705 + static noinline void check_find_1(struct xarray *xa)
720 706 {
721 707 unsigned long i, j, k;
722 708
··· 762 748 XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_0));
763 749 }
764 750 XA_BUG_ON(xa, !xa_empty(xa));
751 + }
752 +
753 + static noinline void check_find_2(struct xarray *xa)
754 + {
755 + void *entry;
756 + unsigned long i, j, index = 0;
757 +
758 + xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) {
759 + XA_BUG_ON(xa, true);
760 + }
761 +
762 + for (i = 0; i < 1024; i++) {
763 + xa_store_index(xa, index, GFP_KERNEL);
764 + j = 0;
765 + index = 0;
766 + xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) {
767 + XA_BUG_ON(xa, xa_mk_value(index) != entry);
768 + XA_BUG_ON(xa, index != j++);
769 + }
770 + }
771 +
772 + xa_destroy(xa);
773 + }
774 +
775 + static noinline void check_find(struct xarray *xa)
776 + {
777 + check_find_1(xa);
778 + check_find_2(xa);
765 779 check_multi_find(xa);
766 780 check_multi_find_2(xa);
767 781 }
··· 1109 1067 __check_store_range(xa, 4095 + i, 4095 + j);
1110 1068 __check_store_range(xa, 4096 + i, 4096 + j);
1111 1069 __check_store_range(xa, 123456 + i, 123456 + j);
1112 - __check_store_range(xa, UINT_MAX + i, UINT_MAX + j);
1070 + __check_store_range(xa, (1 << 24) + i, (1 << 24) + j);
1113 1071 }
1114 1072 }
1115 1073 }
··· 1188 1146 XA_STATE(xas, xa, 1 << order);
1189 1147
1190 1148 xa_store_order(xa, 0, order, xa, GFP_KERNEL);
1149 + rcu_read_lock();
1191 1150 xas_load(&xas);
1192 1151 XA_BUG_ON(xa, xas.xa_node->count == 0);
1193 1152 XA_BUG_ON(xa, xas.xa_node->count > (1 << order));
1194 1153 XA_BUG_ON(xa, xas.xa_node->nr_values != 0);
1154 + rcu_read_unlock();
1195 1155
1196 1156 xa_store_order(xa, 1 << order, order, xa_mk_value(1 << order),
1197 1157 GFP_KERNEL);
+60 -79
lib/xarray.c
··· 610 610 * (see the xa_cmpxchg() implementation for an example).
611 611 *
612 612 * Return: If the slot already existed, returns the contents of this slot.
613 - * If the slot was newly created, returns NULL. If it failed to create the
614 - * slot, returns NULL and indicates the error in @xas.
613 + * If the slot was newly created, returns %NULL. If it failed to create the
614 + * slot, returns %NULL and indicates the error in @xas.
615 615 */
616 616 static void *xas_create(struct xa_state *xas)
617 617 {
··· 1334 1334 XA_STATE(xas, xa, index);
1335 1335 return xas_result(&xas, xas_store(&xas, NULL));
1336 1336 }
1337 - EXPORT_SYMBOL_GPL(__xa_erase);
1337 + EXPORT_SYMBOL(__xa_erase);
1338 1338
1339 1339 /**
1340 - * xa_store() - Store this entry in the XArray.
1340 + * xa_erase() - Erase this entry from the XArray.
1341 1341 * @xa: XArray.
1342 - * @index: Index into array.
1343 - * @entry: New entry.
1344 - * @gfp: Memory allocation flags.
1342 + * @index: Index of entry.
1345 1343 *
1346 - * After this function returns, loads from this index will return @entry.
1347 - * Storing into an existing multislot entry updates the entry of every index.
1348 - * The marks associated with @index are unaffected unless @entry is %NULL.
1344 + * This function is the equivalent of calling xa_store() with %NULL as
1345 + * the third argument. The XArray does not need to allocate memory, so
1346 + * the user does not need to provide GFP flags.
1349 1347 *
1350 - * Context: Process context. Takes and releases the xa_lock. May sleep
1351 - * if the @gfp flags permit.
1352 - * Return: The old entry at this index on success, xa_err(-EINVAL) if @entry
1353 - * cannot be stored in an XArray, or xa_err(-ENOMEM) if memory allocation
1354 - * failed.
1348 + * Context: Any context. Takes and releases the xa_lock.
1349 + * Return: The entry which used to be at this index.
1355 1350 */
1356 - void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
1351 + void *xa_erase(struct xarray *xa, unsigned long index)
1357 1352 {
1358 - XA_STATE(xas, xa, index);
1359 - void *curr;
1353 + void *entry;
1360 1354
1361 - if (WARN_ON_ONCE(xa_is_internal(entry)))
1362 - return XA_ERROR(-EINVAL);
1355 + xa_lock(xa);
1356 + entry = __xa_erase(xa, index);
1357 + xa_unlock(xa);
1363 1358
1364 - do {
1365 - xas_lock(&xas);
1366 - curr = xas_store(&xas, entry);
1367 - if (xa_track_free(xa) && entry)
1368 - xas_clear_mark(&xas, XA_FREE_MARK);
1369 - xas_unlock(&xas);
1370 - } while (xas_nomem(&xas, gfp));
1371 -
1372 - return xas_result(&xas, curr);
1359 + return entry;
1373 1360 }
1374 - EXPORT_SYMBOL(xa_store);
1361 + EXPORT_SYMBOL(xa_erase);
1375 1362
1376 1363 /**
1377 1364 * __xa_store() - Store this entry in the XArray.
··· 1382 1395
1383 1396 if (WARN_ON_ONCE(xa_is_internal(entry)))
1384 1397 return XA_ERROR(-EINVAL);
1398 + if (xa_track_free(xa) && !entry)
1399 + entry = XA_ZERO_ENTRY;
1385 1400
1386 1401 do {
1387 1402 curr = xas_store(&xas, entry);
1388 - if (xa_track_free(xa) && entry)
1403 + if (xa_track_free(xa))
1389 1404 xas_clear_mark(&xas, XA_FREE_MARK);
1390 1405 } while (__xas_nomem(&xas, gfp));
1391 1406
··· 1396 1407 EXPORT_SYMBOL(__xa_store);
1397 1408
1398 1409 /**
1399 - * xa_cmpxchg() - Conditionally replace an entry in the XArray.
1410 + * xa_store() - Store this entry in the XArray.
1400 1411 * @xa: XArray.
1401 1412 * @index: Index into array.
1402 - * @old: Old value to test against.
1403 - * @entry: New value to place in array.
1413 + * @entry: New entry.
1404 1414 * @gfp: Memory allocation flags.
1405 1415 *
1406 - * If the entry at @index is the same as @old, replace it with @entry.
1407 - * If the return value is equal to @old, then the exchange was successful.
1416 + * After this function returns, loads from this index will return @entry.
1417 + * Storing into an existing multislot entry updates the entry of every index.
1418 + * The marks associated with @index are unaffected unless @entry is %NULL.
1408 1419 *
1409 - * Context: Process context. Takes and releases the xa_lock. May sleep
1410 - * if the @gfp flags permit.
1411 - * Return: The old value at this index or xa_err() if an error happened.
1420 + * Context: Any context. Takes and releases the xa_lock.
1421 + * May sleep if the @gfp flags permit.
1422 + * Return: The old entry at this index on success, xa_err(-EINVAL) if @entry
1423 + * cannot be stored in an XArray, or xa_err(-ENOMEM) if memory allocation
1424 + * failed.
1412 1425 */
1413 - void *xa_cmpxchg(struct xarray *xa, unsigned long index,
1414 - void *old, void *entry, gfp_t gfp)
1426 + void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
1415 1427 {
1416 - XA_STATE(xas, xa, index);
1417 1428 void *curr;
1418 1429
1419 - if (WARN_ON_ONCE(xa_is_internal(entry)))
1420 - return XA_ERROR(-EINVAL);
1430 + xa_lock(xa);
1431 + curr = __xa_store(xa, index, entry, gfp);
1432 + xa_unlock(xa);
1421 1433
1422 - do {
1423 - xas_lock(&xas);
1424 - curr = xas_load(&xas);
1425 - if (curr == XA_ZERO_ENTRY)
1426 - curr = NULL;
1427 - if (curr == old) {
1428 - xas_store(&xas, entry);
1429 - if (xa_track_free(xa) && entry)
1430 - xas_clear_mark(&xas, XA_FREE_MARK);
1431 - }
1432 - xas_unlock(&xas);
1433 - } while (xas_nomem(&xas, gfp));
1434 -
1435 - return xas_result(&xas, curr);
1434 + return curr;
1436 1435 }
1437 - EXPORT_SYMBOL(xa_cmpxchg);
1436 + EXPORT_SYMBOL(xa_store);
1438 1437
1439 1438 /**
1440 1439 * __xa_cmpxchg() - Store this entry in the XArray.
··· 1448 1471 1449 1472 if (WARN_ON_ONCE(xa_is_internal(entry))) 1450 1473 return XA_ERROR(-EINVAL); 1474 + if (xa_track_free(xa) && !entry) 1475 + entry = XA_ZERO_ENTRY; 1451 1476 1452 1477 do { 1453 1478 curr = xas_load(&xas); ··· 1457 1478 curr = NULL; 1458 1479 if (curr == old) { 1459 1480 xas_store(&xas, entry); 1460 - if (xa_track_free(xa) && entry) 1481 + if (xa_track_free(xa)) 1461 1482 xas_clear_mark(&xas, XA_FREE_MARK); 1462 1483 } 1463 1484 } while (__xas_nomem(&xas, gfp)); ··· 1467 1488 EXPORT_SYMBOL(__xa_cmpxchg); 1468 1489 1469 1490 /** 1470 - * xa_reserve() - Reserve this index in the XArray. 1491 + * __xa_reserve() - Reserve this index in the XArray. 1471 1492 * @xa: XArray. 1472 1493 * @index: Index into array. 1473 1494 * @gfp: Memory allocation flags. ··· 1475 1496 * Ensures there is somewhere to store an entry at @index in the array. 1476 1497 * If there is already something stored at @index, this function does 1477 1498 * nothing. If there was nothing there, the entry is marked as reserved. 1478 - * Loads from @index will continue to see a %NULL pointer until a 1479 - * subsequent store to @index. 1499 + * Loading from a reserved entry returns a %NULL pointer. 1480 1500 * 1481 1501 * If you do not use the entry that you have reserved, call xa_release() 1482 1502 * or xa_erase() to free any unnecessary memory. 1483 1503 * 1484 - * Context: Process context. Takes and releases the xa_lock, IRQ or BH safe 1485 - * if specified in XArray flags. May sleep if the @gfp flags permit. 1504 + * Context: Any context. Expects the xa_lock to be held on entry. May 1505 + * release the lock, sleep and reacquire the lock if the @gfp flags permit. 1486 1506 * Return: 0 if the reservation succeeded or -ENOMEM if it failed. 
1487 1507 */ 1488 - int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp) 1508 + int __xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp) 1489 1509 { 1490 1510 XA_STATE(xas, xa, index); 1491 - unsigned int lock_type = xa_lock_type(xa); 1492 1511 void *curr; 1493 1512 1494 1513 do { 1495 - xas_lock_type(&xas, lock_type); 1496 1514 curr = xas_load(&xas); 1497 - if (!curr) 1515 + if (!curr) { 1498 1516 xas_store(&xas, XA_ZERO_ENTRY); 1499 - xas_unlock_type(&xas, lock_type); 1500 - } while (xas_nomem(&xas, gfp)); 1517 + if (xa_track_free(xa)) 1518 + xas_clear_mark(&xas, XA_FREE_MARK); 1519 + } 1520 + } while (__xas_nomem(&xas, gfp)); 1501 1521 1502 1522 return xas_error(&xas); 1503 1523 } 1504 - EXPORT_SYMBOL(xa_reserve); 1524 + EXPORT_SYMBOL(__xa_reserve); 1505 1525 1506 1526 #ifdef CONFIG_XARRAY_MULTI 1507 1527 static void xas_set_range(struct xa_state *xas, unsigned long first, ··· 1565 1587 do { 1566 1588 xas_lock(&xas); 1567 1589 if (entry) { 1568 - unsigned int order = (last == ~0UL) ? 64 : 1569 - ilog2(last + 1); 1590 + unsigned int order = BITS_PER_LONG; 1591 + if (last + 1) 1592 + order = __ffs(last + 1); 1570 1593 xas_set_order(&xas, last, order); 1571 1594 xas_create(&xas); 1572 1595 if (xas_error(&xas)) ··· 1641 1662 * @index: Index of entry. 1642 1663 * @mark: Mark number. 1643 1664 * 1644 - * Attempting to set a mark on a NULL entry does not succeed. 1665 + * Attempting to set a mark on a %NULL entry does not succeed. 1645 1666 * 1646 1667 * Context: Any context. Expects xa_lock to be held on entry. 1647 1668 */ ··· 1653 1674 if (entry) 1654 1675 xas_set_mark(&xas, mark); 1655 1676 } 1656 - EXPORT_SYMBOL_GPL(__xa_set_mark); 1677 + EXPORT_SYMBOL(__xa_set_mark); 1657 1678 1658 1679 /** 1659 1680 * __xa_clear_mark() - Clear this mark on this entry while locked. 
··· 1671 1692 if (entry) 1672 1693 xas_clear_mark(&xas, mark); 1673 1694 } 1674 - EXPORT_SYMBOL_GPL(__xa_clear_mark); 1695 + EXPORT_SYMBOL(__xa_clear_mark); 1675 1696 1676 1697 /** 1677 1698 * xa_get_mark() - Inquire whether this mark is set on this entry. ··· 1711 1732 * @index: Index of entry. 1712 1733 * @mark: Mark number. 1713 1734 * 1714 - * Attempting to set a mark on a NULL entry does not succeed. 1735 + * Attempting to set a mark on a %NULL entry does not succeed. 1715 1736 * 1716 1737 * Context: Process context. Takes and releases the xa_lock. 1717 1738 */ ··· 1808 1829 entry = xas_find_marked(&xas, max, filter); 1809 1830 else 1810 1831 entry = xas_find(&xas, max); 1832 + if (xas.xa_node == XAS_BOUNDS) 1833 + break; 1811 1834 if (xas.xa_shift) { 1812 1835 if (xas.xa_index & ((1UL << xas.xa_shift) - 1)) 1813 1836 continue; ··· 1880 1899 * 1881 1900 * The @filter may be an XArray mark value, in which case entries which are 1882 1901 * marked with that mark will be copied. It may also be %XA_PRESENT, in 1883 - * which case all entries which are not NULL will be copied. 1902 + * which case all entries which are not %NULL will be copied. 1884 1903 * 1885 1904 * The entries returned may not represent a snapshot of the XArray at a 1886 1905 * moment in time. For example, if another thread stores to index 5, then
+2 -1
net/ipv4/ip_output.c
··· 939 939 unsigned int fraglen; 940 940 unsigned int fraggap; 941 941 unsigned int alloclen; 942 - unsigned int pagedlen = 0; 942 + unsigned int pagedlen; 943 943 struct sk_buff *skb_prev; 944 944 alloc_new_skb: 945 945 skb_prev = skb; ··· 956 956 if (datalen > mtu - fragheaderlen) 957 957 datalen = maxfraglen - fragheaderlen; 958 958 fraglen = datalen + fragheaderlen; 959 + pagedlen = 0; 959 960 960 961 if ((flags & MSG_MORE) && 961 962 !(rt->dst.dev->features&NETIF_F_SG))
+5 -2
net/ipv4/netfilter/ipt_MASQUERADE.c
··· 81 81 int ret; 82 82 83 83 ret = xt_register_target(&masquerade_tg_reg); 84 + if (ret) 85 + return ret; 84 86 85 - if (ret == 0) 86 - nf_nat_masquerade_ipv4_register_notifier(); 87 + ret = nf_nat_masquerade_ipv4_register_notifier(); 88 + if (ret) 89 + xt_unregister_target(&masquerade_tg_reg); 87 90 88 91 return ret; 89 92 }
+30 -8
net/ipv4/netfilter/nf_nat_masquerade_ipv4.c
··· 147 147 .notifier_call = masq_inet_event, 148 148 }; 149 149 150 - static atomic_t masquerade_notifier_refcount = ATOMIC_INIT(0); 150 + static int masq_refcnt; 151 + static DEFINE_MUTEX(masq_mutex); 151 152 152 - void nf_nat_masquerade_ipv4_register_notifier(void) 153 + int nf_nat_masquerade_ipv4_register_notifier(void) 153 154 { 155 + int ret = 0; 156 + 157 + mutex_lock(&masq_mutex); 154 158 /* check if the notifier was already set */ 155 - if (atomic_inc_return(&masquerade_notifier_refcount) > 1) 156 - return; 159 + if (++masq_refcnt > 1) 160 + goto out_unlock; 157 161 158 162 /* Register for device down reports */ 159 - register_netdevice_notifier(&masq_dev_notifier); 163 + ret = register_netdevice_notifier(&masq_dev_notifier); 164 + if (ret) 165 + goto err_dec; 160 166 /* Register IP address change reports */ 161 - register_inetaddr_notifier(&masq_inet_notifier); 167 + ret = register_inetaddr_notifier(&masq_inet_notifier); 168 + if (ret) 169 + goto err_unregister; 170 + 171 + mutex_unlock(&masq_mutex); 172 + return ret; 173 + 174 + err_unregister: 175 + unregister_netdevice_notifier(&masq_dev_notifier); 176 + err_dec: 177 + masq_refcnt--; 178 + out_unlock: 179 + mutex_unlock(&masq_mutex); 180 + return ret; 162 181 } 163 182 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4_register_notifier); 164 183 165 184 void nf_nat_masquerade_ipv4_unregister_notifier(void) 166 185 { 186 + mutex_lock(&masq_mutex); 167 187 /* check if the notifier still has clients */ 168 - if (atomic_dec_return(&masquerade_notifier_refcount) > 0) 169 - return; 188 + if (--masq_refcnt > 0) 189 + goto out_unlock; 170 190 171 191 unregister_netdevice_notifier(&masq_dev_notifier); 172 192 unregister_inetaddr_notifier(&masq_inet_notifier); 193 + out_unlock: 194 + mutex_unlock(&masq_mutex); 173 195 } 174 196 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4_unregister_notifier);
+3 -1
net/ipv4/netfilter/nft_masq_ipv4.c
··· 69 69 if (ret < 0) 70 70 return ret; 71 71 72 - nf_nat_masquerade_ipv4_register_notifier(); 72 + ret = nf_nat_masquerade_ipv4_register_notifier(); 73 + if (ret) 74 + nft_unregister_expr(&nft_masq_ipv4_type); 73 75 74 76 return ret; 75 77 }
+10 -6
net/ipv4/tcp_input.c
··· 579 579 u32 delta = tcp_time_stamp(tp) - tp->rx_opt.rcv_tsecr; 580 580 u32 delta_us; 581 581 582 - if (!delta) 583 - delta = 1; 584 - delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ); 585 - tcp_rcv_rtt_update(tp, delta_us, 0); 582 + if (likely(delta < INT_MAX / (USEC_PER_SEC / TCP_TS_HZ))) { 583 + if (!delta) 584 + delta = 1; 585 + delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ); 586 + tcp_rcv_rtt_update(tp, delta_us, 0); 587 + } 586 588 } 587 589 } 588 590 ··· 2912 2910 if (seq_rtt_us < 0 && tp->rx_opt.saw_tstamp && tp->rx_opt.rcv_tsecr && 2913 2911 flag & FLAG_ACKED) { 2914 2912 u32 delta = tcp_time_stamp(tp) - tp->rx_opt.rcv_tsecr; 2915 - u32 delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ); 2916 2913 2917 - seq_rtt_us = ca_rtt_us = delta_us; 2914 + if (likely(delta < INT_MAX / (USEC_PER_SEC / TCP_TS_HZ))) { 2915 + seq_rtt_us = delta * (USEC_PER_SEC / TCP_TS_HZ); 2916 + ca_rtt_us = seq_rtt_us; 2917 + } 2918 2918 } 2919 2919 rs->rtt_us = ca_rtt_us; /* RTT of last (S)ACKed packet (or -1) */ 2920 2920 if (seq_rtt_us < 0)
+6 -4
net/ipv4/tcp_timer.c
··· 40 40 { 41 41 struct inet_connection_sock *icsk = inet_csk(sk); 42 42 u32 elapsed, start_ts; 43 + s32 remaining; 43 44 44 45 start_ts = tcp_retransmit_stamp(sk); 45 46 if (!icsk->icsk_user_timeout || !start_ts) 46 47 return icsk->icsk_rto; 47 48 elapsed = tcp_time_stamp(tcp_sk(sk)) - start_ts; 48 - if (elapsed >= icsk->icsk_user_timeout) 49 + remaining = icsk->icsk_user_timeout - elapsed; 50 + if (remaining <= 0) 49 51 return 1; /* user timeout has passed; fire ASAP */ 50 - else 51 - return min_t(u32, icsk->icsk_rto, msecs_to_jiffies(icsk->icsk_user_timeout - elapsed)); 52 + 53 + return min_t(u32, icsk->icsk_rto, msecs_to_jiffies(remaining)); 52 54 } 53 55 54 56 /** ··· 211 209 (boundary - linear_backoff_thresh) * TCP_RTO_MAX; 212 210 timeout = jiffies_to_msecs(timeout); 213 211 } 214 - return (tcp_time_stamp(tcp_sk(sk)) - start_ts) >= timeout; 212 + return (s32)(tcp_time_stamp(tcp_sk(sk)) - start_ts - timeout) >= 0; 215 213 } 216 214 217 215 /* A write timeout has occurred. Process the after effects. */
+2 -1
net/ipv6/ip6_output.c
··· 1354 1354 unsigned int fraglen; 1355 1355 unsigned int fraggap; 1356 1356 unsigned int alloclen; 1357 - unsigned int pagedlen = 0; 1357 + unsigned int pagedlen; 1358 1358 alloc_new_skb: 1359 1359 /* There's no room in the current skb */ 1360 1360 if (skb) ··· 1378 1378 if (datalen > (cork->length <= mtu && !(cork->flags & IPCORK_ALLFRAG) ? mtu : maxfraglen) - fragheaderlen) 1379 1379 datalen = maxfraglen - fragheaderlen - rt->dst.trailer_len; 1380 1380 fraglen = datalen + fragheaderlen; 1381 + pagedlen = 0; 1381 1382 1382 1383 if ((flags & MSG_MORE) && 1383 1384 !(rt->dst.dev->features&NETIF_F_SG))
+2 -1
net/ipv6/netfilter.c
··· 24 24 unsigned int hh_len; 25 25 struct dst_entry *dst; 26 26 struct flowi6 fl6 = { 27 - .flowi6_oif = sk ? sk->sk_bound_dev_if : 0, 27 + .flowi6_oif = sk && sk->sk_bound_dev_if ? sk->sk_bound_dev_if : 28 + rt6_need_strict(&iph->daddr) ? skb_dst(skb)->dev->ifindex : 0, 28 29 .flowi6_mark = skb->mark, 29 30 .flowi6_uid = sock_net_uid(net, sk), 30 31 .daddr = iph->daddr,
+6 -2
net/ipv6/netfilter/ip6t_MASQUERADE.c
··· 58 58 int err; 59 59 60 60 err = xt_register_target(&masquerade_tg6_reg); 61 - if (err == 0) 62 - nf_nat_masquerade_ipv6_register_notifier(); 61 + if (err) 62 + return err; 63 + 64 + err = nf_nat_masquerade_ipv6_register_notifier(); 65 + if (err) 66 + xt_unregister_target(&masquerade_tg6_reg); 63 67 64 68 return err; 65 69 }
+37 -14
net/ipv6/netfilter/nf_nat_masquerade_ipv6.c
··· 132 132 * of ipv6 addresses being deleted), we also need to add an upper 133 133 * limit to the number of queued work items. 134 134 */ 135 - static int masq_inet_event(struct notifier_block *this, 136 - unsigned long event, void *ptr) 135 + static int masq_inet6_event(struct notifier_block *this, 136 + unsigned long event, void *ptr) 137 137 { 138 138 struct inet6_ifaddr *ifa = ptr; 139 139 const struct net_device *dev; ··· 171 171 return NOTIFY_DONE; 172 172 } 173 173 174 - static struct notifier_block masq_inet_notifier = { 175 - .notifier_call = masq_inet_event, 174 + static struct notifier_block masq_inet6_notifier = { 175 + .notifier_call = masq_inet6_event, 176 176 }; 177 177 178 - static atomic_t masquerade_notifier_refcount = ATOMIC_INIT(0); 178 + static int masq_refcnt; 179 + static DEFINE_MUTEX(masq_mutex); 179 180 180 - void nf_nat_masquerade_ipv6_register_notifier(void) 181 + int nf_nat_masquerade_ipv6_register_notifier(void) 181 182 { 182 - /* check if the notifier is already set */ 183 - if (atomic_inc_return(&masquerade_notifier_refcount) > 1) 184 - return; 183 + int ret = 0; 185 184 186 - register_netdevice_notifier(&masq_dev_notifier); 187 - register_inet6addr_notifier(&masq_inet_notifier); 185 + mutex_lock(&masq_mutex); 186 + /* check if the notifier is already set */ 187 + if (++masq_refcnt > 1) 188 + goto out_unlock; 189 + 190 + ret = register_netdevice_notifier(&masq_dev_notifier); 191 + if (ret) 192 + goto err_dec; 193 + 194 + ret = register_inet6addr_notifier(&masq_inet6_notifier); 195 + if (ret) 196 + goto err_unregister; 197 + 198 + mutex_unlock(&masq_mutex); 199 + return ret; 200 + 201 + err_unregister: 202 + unregister_netdevice_notifier(&masq_dev_notifier); 203 + err_dec: 204 + masq_refcnt--; 205 + out_unlock: 206 + mutex_unlock(&masq_mutex); 207 + return ret; 188 208 } 189 209 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv6_register_notifier); 190 210 191 211 void nf_nat_masquerade_ipv6_unregister_notifier(void) 192 212 { 213 + 
mutex_lock(&masq_mutex); 193 214 /* check if the notifier still has clients */ 194 - if (atomic_dec_return(&masquerade_notifier_refcount) > 0) 195 - return; 215 + if (--masq_refcnt > 0) 216 + goto out_unlock; 196 217 197 - unregister_inet6addr_notifier(&masq_inet_notifier); 218 + unregister_inet6addr_notifier(&masq_inet6_notifier); 198 219 unregister_netdevice_notifier(&masq_dev_notifier); 220 + out_unlock: 221 + mutex_unlock(&masq_mutex); 199 222 } 200 223 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv6_unregister_notifier);
+3 -1
net/ipv6/netfilter/nft_masq_ipv6.c
··· 70 70 if (ret < 0) 71 71 return ret; 72 72 73 - nf_nat_masquerade_ipv6_register_notifier(); 73 + ret = nf_nat_masquerade_ipv6_register_notifier(); 74 + if (ret) 75 + nft_unregister_expr(&nft_masq_ipv6_type); 74 76 75 77 return ret; 76 78 }
+3
net/netfilter/ipvs/ip_vs_ctl.c
··· 3980 3980 3981 3981 static struct notifier_block ip_vs_dst_notifier = { 3982 3982 .notifier_call = ip_vs_dst_event, 3983 + #ifdef CONFIG_IP_VS_IPV6 3984 + .priority = ADDRCONF_NOTIFY_PRIORITY + 5, 3985 + #endif 3983 3986 }; 3984 3987 3985 3988 int __net_init ip_vs_control_net_init(struct netns_ipvs *ipvs)
+28 -16
net/netfilter/nf_conncount.c
··· 49 49 struct nf_conntrack_zone zone; 50 50 int cpu; 51 51 u32 jiffies32; 52 + bool dead; 52 53 struct rcu_head rcu_head; 53 54 }; 54 55 ··· 107 106 conn->zone = *zone; 108 107 conn->cpu = raw_smp_processor_id(); 109 108 conn->jiffies32 = (u32)jiffies; 110 - spin_lock(&list->list_lock); 109 + conn->dead = false; 110 + spin_lock_bh(&list->list_lock); 111 111 if (list->dead == true) { 112 112 kmem_cache_free(conncount_conn_cachep, conn); 113 - spin_unlock(&list->list_lock); 113 + spin_unlock_bh(&list->list_lock); 114 114 return NF_CONNCOUNT_SKIP; 115 115 } 116 116 list_add_tail(&conn->node, &list->head); 117 117 list->count++; 118 - spin_unlock(&list->list_lock); 118 + spin_unlock_bh(&list->list_lock); 119 119 return NF_CONNCOUNT_ADDED; 120 120 } 121 121 EXPORT_SYMBOL_GPL(nf_conncount_add); ··· 134 132 { 135 133 bool free_entry = false; 136 134 137 - spin_lock(&list->list_lock); 135 + spin_lock_bh(&list->list_lock); 138 136 139 - if (list->count == 0) { 140 - spin_unlock(&list->list_lock); 141 - return free_entry; 137 + if (conn->dead) { 138 + spin_unlock_bh(&list->list_lock); 139 + return free_entry; 142 140 } 143 141 144 142 list->count--; 143 + conn->dead = true; 145 144 list_del_rcu(&conn->node); 146 - if (list->count == 0) 145 + if (list->count == 0) { 146 + list->dead = true; 147 147 free_entry = true; 148 + } 148 149 149 - spin_unlock(&list->list_lock); 150 + spin_unlock_bh(&list->list_lock); 150 151 call_rcu(&conn->rcu_head, __conn_free); 151 152 return free_entry; 152 153 } ··· 250 245 { 251 246 spin_lock_init(&list->list_lock); 252 247 INIT_LIST_HEAD(&list->head); 253 - list->count = 1; 248 + list->count = 0; 254 249 list->dead = false; 255 250 } 256 251 EXPORT_SYMBOL_GPL(nf_conncount_list_init); ··· 264 259 struct nf_conn *found_ct; 265 260 unsigned int collected = 0; 266 261 bool free_entry = false; 262 + bool ret = false; 267 263 268 264 list_for_each_entry_safe(conn, conn_n, &list->head, node) { 269 265 found = find_or_evict(net, list, conn, 
&free_entry); ··· 294 288 if (collected > CONNCOUNT_GC_MAX_NODES) 295 289 return false; 296 290 } 297 - return false; 291 + 292 + spin_lock_bh(&list->list_lock); 293 + if (!list->count) { 294 + list->dead = true; 295 + ret = true; 296 + } 297 + spin_unlock_bh(&list->list_lock); 298 + 299 + return ret; 298 300 } 299 301 EXPORT_SYMBOL_GPL(nf_conncount_gc_list); 300 302 ··· 323 309 while (gc_count) { 324 310 rbconn = gc_nodes[--gc_count]; 325 311 spin_lock(&rbconn->list.list_lock); 326 - if (rbconn->list.count == 0 && rbconn->list.dead == false) { 327 - rbconn->list.dead = true; 328 - rb_erase(&rbconn->node, root); 329 - call_rcu(&rbconn->rcu_head, __tree_nodes_free); 330 - } 312 + rb_erase(&rbconn->node, root); 313 + call_rcu(&rbconn->rcu_head, __tree_nodes_free); 331 314 spin_unlock(&rbconn->list.list_lock); 332 315 } 333 316 } ··· 425 414 nf_conncount_list_init(&rbconn->list); 426 415 list_add(&conn->node, &rbconn->list.head); 427 416 count = 1; 417 + rbconn->list.count = count; 428 418 429 419 rb_link_node(&rbconn->node, parent, rbnode); 430 420 rb_insert_color(&rbconn->node, root);
+2 -12
net/netfilter/nf_conntrack_proto_gre.c
··· 43 43 #include <linux/netfilter/nf_conntrack_proto_gre.h> 44 44 #include <linux/netfilter/nf_conntrack_pptp.h> 45 45 46 - enum grep_conntrack { 47 - GRE_CT_UNREPLIED, 48 - GRE_CT_REPLIED, 49 - GRE_CT_MAX 50 - }; 51 - 52 46 static const unsigned int gre_timeouts[GRE_CT_MAX] = { 53 47 [GRE_CT_UNREPLIED] = 30*HZ, 54 48 [GRE_CT_REPLIED] = 180*HZ, 55 49 }; 56 50 57 51 static unsigned int proto_gre_net_id __read_mostly; 58 - struct netns_proto_gre { 59 - struct nf_proto_net nf; 60 - rwlock_t keymap_lock; 61 - struct list_head keymap_list; 62 - unsigned int gre_timeouts[GRE_CT_MAX]; 63 - }; 64 52 65 53 static inline struct netns_proto_gre *gre_pernet(struct net *net) 66 54 { ··· 389 401 static int __init nf_ct_proto_gre_init(void) 390 402 { 391 403 int ret; 404 + 405 + BUILD_BUG_ON(offsetof(struct netns_proto_gre, nf) != 0); 392 406 393 407 ret = register_pernet_subsys(&proto_gre_net_ops); 394 408 if (ret < 0)
+17 -29
net/netfilter/nf_tables_api.c
··· 2457 2457 static void nf_tables_rule_destroy(const struct nft_ctx *ctx, 2458 2458 struct nft_rule *rule) 2459 2459 { 2460 - struct nft_expr *expr; 2460 + struct nft_expr *expr, *next; 2461 2461 2462 2462 /* 2463 2463 * Careful: some expressions might not be initialized in case this ··· 2465 2465 */ 2466 2466 expr = nft_expr_first(rule); 2467 2467 while (expr != nft_expr_last(rule) && expr->ops) { 2468 + next = nft_expr_next(expr); 2468 2469 nf_tables_expr_destroy(ctx, expr); 2469 - expr = nft_expr_next(expr); 2470 + expr = next; 2470 2471 } 2471 2472 kfree(rule); 2472 2473 } ··· 2590 2589 2591 2590 if (chain->use == UINT_MAX) 2592 2591 return -EOVERFLOW; 2593 - } 2594 2592 2595 - if (nla[NFTA_RULE_POSITION]) { 2596 - if (!(nlh->nlmsg_flags & NLM_F_CREATE)) 2597 - return -EOPNOTSUPP; 2598 - 2599 - pos_handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_POSITION])); 2600 - old_rule = __nft_rule_lookup(chain, pos_handle); 2601 - if (IS_ERR(old_rule)) { 2602 - NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_POSITION]); 2603 - return PTR_ERR(old_rule); 2593 + if (nla[NFTA_RULE_POSITION]) { 2594 + pos_handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_POSITION])); 2595 + old_rule = __nft_rule_lookup(chain, pos_handle); 2596 + if (IS_ERR(old_rule)) { 2597 + NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_POSITION]); 2598 + return PTR_ERR(old_rule); 2599 + } 2604 2600 } 2605 2601 } 2606 2602 ··· 2667 2669 } 2668 2670 2669 2671 if (nlh->nlmsg_flags & NLM_F_REPLACE) { 2670 - if (!nft_is_active_next(net, old_rule)) { 2671 - err = -ENOENT; 2672 - goto err2; 2673 - } 2674 - trans = nft_trans_rule_add(&ctx, NFT_MSG_DELRULE, 2675 - old_rule); 2672 + trans = nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule); 2676 2673 if (trans == NULL) { 2677 2674 err = -ENOMEM; 2678 2675 goto err2; 2679 2676 } 2680 - nft_deactivate_next(net, old_rule); 2681 - chain->use--; 2682 - 2683 - if (nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule) == NULL) { 2684 - err = -ENOMEM; 2677 + err = nft_delrule(&ctx, old_rule); 2678 + 
if (err < 0) { 2679 + nft_trans_destroy(trans); 2685 2680 goto err2; 2686 2681 } 2687 2682 ··· 6315 6324 call_rcu(&old->h, __nf_tables_commit_chain_free_rules_old); 6316 6325 } 6317 6326 6318 - static void nf_tables_commit_chain_active(struct net *net, struct nft_chain *chain) 6327 + static void nf_tables_commit_chain(struct net *net, struct nft_chain *chain) 6319 6328 { 6320 6329 struct nft_rule **g0, **g1; 6321 6330 bool next_genbit; ··· 6432 6441 6433 6442 /* step 2. Make rules_gen_X visible to packet path */ 6434 6443 list_for_each_entry(table, &net->nft.tables, list) { 6435 - list_for_each_entry(chain, &table->chains, list) { 6436 - if (!nft_is_active_next(net, chain)) 6437 - continue; 6438 - nf_tables_commit_chain_active(net, chain); 6439 - } 6444 + list_for_each_entry(chain, &table->chains, list) 6445 + nf_tables_commit_chain(net, chain); 6440 6446 } 6441 6447 6442 6448 /*
+2 -1
net/netfilter/nft_compat.c
··· 520 520 void *info) 521 521 { 522 522 struct xt_match *match = expr->ops->data; 523 + struct module *me = match->me; 523 524 struct xt_mtdtor_param par; 524 525 525 526 par.net = ctx->net; ··· 531 530 par.match->destroy(&par); 532 531 533 532 if (nft_xt_put(container_of(expr->ops, struct nft_xt, ops))) 534 - module_put(match->me); 533 + module_put(me); 535 534 } 536 535 537 536 static void
+4 -1
net/netfilter/nft_flow_offload.c
··· 214 214 { 215 215 int err; 216 216 217 - register_netdevice_notifier(&flow_offload_netdev_notifier); 217 + err = register_netdevice_notifier(&flow_offload_netdev_notifier); 218 + if (err) 219 + goto err; 218 220 219 221 err = nft_register_expr(&nft_flow_offload_type); 220 222 if (err < 0) ··· 226 224 227 225 register_expr: 228 226 unregister_netdevice_notifier(&flow_offload_netdev_notifier); 227 + err: 229 228 return err; 230 229 } 231 230
-10
net/netfilter/xt_RATEEST.c
··· 201 201 return 0; 202 202 } 203 203 204 - static void __net_exit xt_rateest_net_exit(struct net *net) 205 - { 206 - struct xt_rateest_net *xn = net_generic(net, xt_rateest_id); 207 - int i; 208 - 209 - for (i = 0; i < ARRAY_SIZE(xn->hash); i++) 210 - WARN_ON_ONCE(!hlist_empty(&xn->hash[i])); 211 - } 212 - 213 204 static struct pernet_operations xt_rateest_net_ops = { 214 205 .init = xt_rateest_net_init, 215 - .exit = xt_rateest_net_exit, 216 206 .id = &xt_rateest_id, 217 207 .size = sizeof(struct xt_rateest_net), 218 208 };
+3 -6
net/netfilter/xt_hashlimit.c
··· 295 295 296 296 /* copy match config into hashtable config */ 297 297 ret = cfg_copy(&hinfo->cfg, (void *)cfg, 3); 298 - 299 - if (ret) 298 + if (ret) { 299 + vfree(hinfo); 300 300 return ret; 301 + } 301 302 302 303 hinfo->cfg.size = size; 303 304 if (hinfo->cfg.max == 0) ··· 815 814 int ret; 816 815 817 816 ret = cfg_copy(&cfg, (void *)&info->cfg, 1); 818 - 819 817 if (ret) 820 818 return ret; 821 819 ··· 830 830 int ret; 831 831 832 832 ret = cfg_copy(&cfg, (void *)&info->cfg, 2); 833 - 834 833 if (ret) 835 834 return ret; 836 835 ··· 920 921 return ret; 921 922 922 923 ret = cfg_copy(&cfg, (void *)&info->cfg, 1); 923 - 924 924 if (ret) 925 925 return ret; 926 926 ··· 938 940 return ret; 939 941 940 942 ret = cfg_copy(&cfg, (void *)&info->cfg, 2); 941 - 942 943 if (ret) 943 944 return ret; 944 945
+1
net/sctp/output.c
··· 410 410 head->truesize += skb->truesize; 411 411 head->data_len += skb->len; 412 412 head->len += skb->len; 413 + refcount_add(skb->truesize, &head->sk->sk_wmem_alloc); 413 414 414 415 __skb_header_release(skb); 415 416 }
+5 -2
net/tipc/node.c
··· 584 584 /* tipc_node_cleanup - delete nodes that does not 585 585 * have active links for NODE_CLEANUP_AFTER time 586 586 */ 587 - static int tipc_node_cleanup(struct tipc_node *peer) 587 + static bool tipc_node_cleanup(struct tipc_node *peer) 588 588 { 589 589 struct tipc_net *tn = tipc_net(peer->net); 590 590 bool deleted = false; 591 591 592 - spin_lock_bh(&tn->node_list_lock); 592 + /* If lock held by tipc_node_stop() the node will be deleted anyway */ 593 + if (!spin_trylock_bh(&tn->node_list_lock)) 594 + return false; 595 + 593 596 tipc_node_write_lock(peer); 594 597 595 598 if (!node_is_up(peer) && time_after(jiffies, peer->delete_at)) {
+7 -1
tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
··· 137 137 138 138 SEE ALSO 139 139 ======== 140 - **bpftool**\ (8), **bpftool-prog**\ (8), **bpftool-map**\ (8) 140 + **bpf**\ (2), 141 + **bpf-helpers**\ (7), 142 + **bpftool**\ (8), 143 + **bpftool-prog**\ (8), 144 + **bpftool-map**\ (8), 145 + **bpftool-net**\ (8), 146 + **bpftool-perf**\ (8)
+7 -1
tools/bpf/bpftool/Documentation/bpftool-map.rst
··· 172 172 173 173 SEE ALSO 174 174 ======== 175 - **bpftool**\ (8), **bpftool-prog**\ (8), **bpftool-cgroup**\ (8) 175 + **bpf**\ (2), 176 + **bpf-helpers**\ (7), 177 + **bpftool**\ (8), 178 + **bpftool-prog**\ (8), 179 + **bpftool-cgroup**\ (8), 180 + **bpftool-net**\ (8), 181 + **bpftool-perf**\ (8)
+7 -1
tools/bpf/bpftool/Documentation/bpftool-net.rst
··· 136 136 137 137 SEE ALSO 138 138 ======== 139 - **bpftool**\ (8), **bpftool-prog**\ (8), **bpftool-map**\ (8) 139 + **bpf**\ (2), 140 + **bpf-helpers**\ (7), 141 + **bpftool**\ (8), 142 + **bpftool-prog**\ (8), 143 + **bpftool-map**\ (8), 144 + **bpftool-cgroup**\ (8), 145 + **bpftool-perf**\ (8)
+7 -1
tools/bpf/bpftool/Documentation/bpftool-perf.rst
··· 78 78 79 79 SEE ALSO 80 80 ======== 81 - **bpftool**\ (8), **bpftool-prog**\ (8), **bpftool-map**\ (8) 81 + **bpf**\ (2), 82 + **bpf-helpers**\ (7), 83 + **bpftool**\ (8), 84 + **bpftool-prog**\ (8), 85 + **bpftool-map**\ (8), 86 + **bpftool-cgroup**\ (8), 87 + **bpftool-net**\ (8)
+9 -2
tools/bpf/bpftool/Documentation/bpftool-prog.rst
··· 136 136 Generate human-readable JSON output. Implies **-j**. 137 137 138 138 -f, --bpffs 139 - Show file names of pinned programs. 139 + When showing BPF programs, show file names of pinned 140 + programs. 140 141 141 142 EXAMPLES 142 143 ======== ··· 219 218 220 219 SEE ALSO 221 220 ======== 222 - **bpftool**\ (8), **bpftool-map**\ (8), **bpftool-cgroup**\ (8) 221 + **bpf**\ (2), 222 + **bpf-helpers**\ (7), 223 + **bpftool**\ (8), 224 + **bpftool-map**\ (8), 225 + **bpftool-cgroup**\ (8), 226 + **bpftool-net**\ (8), 227 + **bpftool-perf**\ (8)
+7 -2
tools/bpf/bpftool/Documentation/bpftool.rst
··· 63 63 64 64 SEE ALSO 65 65 ======== 66 - **bpftool-map**\ (8), **bpftool-prog**\ (8), **bpftool-cgroup**\ (8) 67 - **bpftool-perf**\ (8), **bpftool-net**\ (8) 66 + **bpf**\ (2), 67 + **bpf-helpers**\ (7), 68 + **bpftool-prog**\ (8), 69 + **bpftool-map**\ (8), 70 + **bpftool-cgroup**\ (8), 71 + **bpftool-net**\ (8), 72 + **bpftool-perf**\ (8)
+9 -8
tools/bpf/bpftool/common.c
··· 138 138 return 0; 139 139 } 140 140 141 - int open_obj_pinned(char *path) 141 + int open_obj_pinned(char *path, bool quiet) 142 142 { 143 143 int fd; 144 144 145 145 fd = bpf_obj_get(path); 146 146 if (fd < 0) { 147 - p_err("bpf obj get (%s): %s", path, 148 - errno == EACCES && !is_bpffs(dirname(path)) ? 149 - "directory not in bpf file system (bpffs)" : 150 - strerror(errno)); 147 + if (!quiet) 148 + p_err("bpf obj get (%s): %s", path, 149 + errno == EACCES && !is_bpffs(dirname(path)) ? 150 + "directory not in bpf file system (bpffs)" : 151 + strerror(errno)); 151 152 return -1; 152 153 } 153 154 ··· 160 159 enum bpf_obj_type type; 161 160 int fd; 162 161 163 - fd = open_obj_pinned(path); 162 + fd = open_obj_pinned(path, false); 164 163 if (fd < 0) 165 164 return -1; 166 165 ··· 312 311 return NULL; 313 312 } 314 313 315 - while ((n = getline(&line, &line_n, fdi))) { 314 + while ((n = getline(&line, &line_n, fdi)) > 0) { 316 315 char *value; 317 316 int len; 318 317 ··· 392 391 while ((ftse = fts_read(fts))) { 393 392 if (!(ftse->fts_info & FTS_F)) 394 393 continue; 395 - fd = open_obj_pinned(ftse->fts_path); 394 + fd = open_obj_pinned(ftse->fts_path, true); 396 395 if (fd < 0) 397 396 continue; 398 397
+1 -1
tools/bpf/bpftool/main.h
··· 129 129 int get_fd_type(int fd); 130 130 const char *get_fd_type_name(enum bpf_obj_type type); 131 131 char *get_fdinfo(int fd, const char *key); 132 - int open_obj_pinned(char *path); 132 + int open_obj_pinned(char *path, bool quiet); 133 133 int open_obj_pinned_any(char *path, enum bpf_obj_type exp_type); 134 134 int mount_bpffs_for_pin(const char *name); 135 135 int do_pin_any(int argc, char **argv, int (*get_fd_by_id)(__u32));
+8 -5
tools/bpf/bpftool/prog.c
··· 359 359 if (!hash_empty(prog_table.table)) { 360 360 struct pinned_obj *obj; 361 361 362 - printf("\n"); 363 362 hash_for_each_possible(prog_table.table, obj, hash, info->id) { 364 363 if (obj->id == info->id) 365 - printf("\tpinned %s\n", obj->path); 364 + printf("\n\tpinned %s", obj->path); 366 365 } 367 366 } 368 367 ··· 911 912 } 912 913 NEXT_ARG(); 913 914 } else if (is_prefix(*argv, "map")) { 915 + void *new_map_replace; 914 916 char *endptr, *name; 915 917 int fd; 916 918 ··· 945 945 if (fd < 0) 946 946 goto err_free_reuse_maps; 947 947 948 - map_replace = reallocarray(map_replace, old_map_fds + 1, 949 - sizeof(*map_replace)); 950 - if (!map_replace) { 948 + new_map_replace = reallocarray(map_replace, 949 + old_map_fds + 1, 950 + sizeof(*map_replace)); 951 + if (!new_map_replace) { 951 952 p_err("mem alloc failed"); 952 953 goto err_free_reuse_maps; 953 954 } 955 + map_replace = new_map_replace; 956 + 954 957 map_replace[old_map_fds].idx = idx; 955 958 map_replace[old_map_fds].name = name; 956 959 map_replace[old_map_fds].fd = fd;
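The `reallocarray()` change above is the standard safe-growth idiom: assign the result to a temporary pointer so that, on allocation failure, the original buffer is still reachable and can be freed by the error path. A hedged sketch of the same pattern — `append_fd` is a made-up helper, and plain `realloc` stands in for `reallocarray` for portability:

```c
#include <stdlib.h>

/* Grow an int array by one element using the temporary-pointer idiom
 * from the fix: on failure, *arr is untouched and the caller can still
 * free it, instead of the pointer being clobbered with NULL and the
 * old allocation leaking. Hypothetical helper, not bpftool code.
 */
static int append_fd(int **arr, size_t *len, int fd)
{
	int *new_arr;

	new_arr = realloc(*arr, (*len + 1) * sizeof(**arr));
	if (!new_arr)
		return -1;	/* *arr is still valid here */

	*arr = new_arr;
	(*arr)[(*len)++] = fd;
	return 0;
}
```

This is exactly why the hunk introduces `new_map_replace` instead of assigning straight back into `map_replace`.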
+612
tools/include/uapi/linux/pkt_cls.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 + #ifndef __LINUX_PKT_CLS_H 3 + #define __LINUX_PKT_CLS_H 4 + 5 + #include <linux/types.h> 6 + #include <linux/pkt_sched.h> 7 + 8 + #define TC_COOKIE_MAX_SIZE 16 9 + 10 + /* Action attributes */ 11 + enum { 12 + TCA_ACT_UNSPEC, 13 + TCA_ACT_KIND, 14 + TCA_ACT_OPTIONS, 15 + TCA_ACT_INDEX, 16 + TCA_ACT_STATS, 17 + TCA_ACT_PAD, 18 + TCA_ACT_COOKIE, 19 + __TCA_ACT_MAX 20 + }; 21 + 22 + #define TCA_ACT_MAX __TCA_ACT_MAX 23 + #define TCA_OLD_COMPAT (TCA_ACT_MAX+1) 24 + #define TCA_ACT_MAX_PRIO 32 25 + #define TCA_ACT_BIND 1 26 + #define TCA_ACT_NOBIND 0 27 + #define TCA_ACT_UNBIND 1 28 + #define TCA_ACT_NOUNBIND 0 29 + #define TCA_ACT_REPLACE 1 30 + #define TCA_ACT_NOREPLACE 0 31 + 32 + #define TC_ACT_UNSPEC (-1) 33 + #define TC_ACT_OK 0 34 + #define TC_ACT_RECLASSIFY 1 35 + #define TC_ACT_SHOT 2 36 + #define TC_ACT_PIPE 3 37 + #define TC_ACT_STOLEN 4 38 + #define TC_ACT_QUEUED 5 39 + #define TC_ACT_REPEAT 6 40 + #define TC_ACT_REDIRECT 7 41 + #define TC_ACT_TRAP 8 /* For hw path, this means "trap to cpu" 42 + * and don't further process the frame 43 + * in hardware. For sw path, this is 44 + * equivalent of TC_ACT_STOLEN - drop 45 + * the skb and act like everything 46 + * is alright. 47 + */ 48 + #define TC_ACT_VALUE_MAX TC_ACT_TRAP 49 + 50 + /* There is a special kind of actions called "extended actions", 51 + * which need a value parameter. These have a local opcode located in 52 + * the highest nibble, starting from 1. The rest of the bits 53 + * are used to carry the value. These two parts together make 54 + * a combined opcode. 
55 + */ 56 + #define __TC_ACT_EXT_SHIFT 28 57 + #define __TC_ACT_EXT(local) ((local) << __TC_ACT_EXT_SHIFT) 58 + #define TC_ACT_EXT_VAL_MASK ((1 << __TC_ACT_EXT_SHIFT) - 1) 59 + #define TC_ACT_EXT_OPCODE(combined) ((combined) & (~TC_ACT_EXT_VAL_MASK)) 60 + #define TC_ACT_EXT_CMP(combined, opcode) (TC_ACT_EXT_OPCODE(combined) == opcode) 61 + 62 + #define TC_ACT_JUMP __TC_ACT_EXT(1) 63 + #define TC_ACT_GOTO_CHAIN __TC_ACT_EXT(2) 64 + #define TC_ACT_EXT_OPCODE_MAX TC_ACT_GOTO_CHAIN 65 + 66 + /* Action type identifiers*/ 67 + enum { 68 + TCA_ID_UNSPEC=0, 69 + TCA_ID_POLICE=1, 70 + /* other actions go here */ 71 + __TCA_ID_MAX=255 72 + }; 73 + 74 + #define TCA_ID_MAX __TCA_ID_MAX 75 + 76 + struct tc_police { 77 + __u32 index; 78 + int action; 79 + #define TC_POLICE_UNSPEC TC_ACT_UNSPEC 80 + #define TC_POLICE_OK TC_ACT_OK 81 + #define TC_POLICE_RECLASSIFY TC_ACT_RECLASSIFY 82 + #define TC_POLICE_SHOT TC_ACT_SHOT 83 + #define TC_POLICE_PIPE TC_ACT_PIPE 84 + 85 + __u32 limit; 86 + __u32 burst; 87 + __u32 mtu; 88 + struct tc_ratespec rate; 89 + struct tc_ratespec peakrate; 90 + int refcnt; 91 + int bindcnt; 92 + __u32 capab; 93 + }; 94 + 95 + struct tcf_t { 96 + __u64 install; 97 + __u64 lastuse; 98 + __u64 expires; 99 + __u64 firstuse; 100 + }; 101 + 102 + struct tc_cnt { 103 + int refcnt; 104 + int bindcnt; 105 + }; 106 + 107 + #define tc_gen \ 108 + __u32 index; \ 109 + __u32 capab; \ 110 + int action; \ 111 + int refcnt; \ 112 + int bindcnt 113 + 114 + enum { 115 + TCA_POLICE_UNSPEC, 116 + TCA_POLICE_TBF, 117 + TCA_POLICE_RATE, 118 + TCA_POLICE_PEAKRATE, 119 + TCA_POLICE_AVRATE, 120 + TCA_POLICE_RESULT, 121 + TCA_POLICE_TM, 122 + TCA_POLICE_PAD, 123 + __TCA_POLICE_MAX 124 + #define TCA_POLICE_RESULT TCA_POLICE_RESULT 125 + }; 126 + 127 + #define TCA_POLICE_MAX (__TCA_POLICE_MAX - 1) 128 + 129 + /* tca flags definitions */ 130 + #define TCA_CLS_FLAGS_SKIP_HW (1 << 0) /* don't offload filter to HW */ 131 + #define TCA_CLS_FLAGS_SKIP_SW (1 << 1) /* don't use filter in SW 
*/ 132 + #define TCA_CLS_FLAGS_IN_HW (1 << 2) /* filter is offloaded to HW */ 133 + #define TCA_CLS_FLAGS_NOT_IN_HW (1 << 3) /* filter isn't offloaded to HW */ 134 + #define TCA_CLS_FLAGS_VERBOSE (1 << 4) /* verbose logging */ 135 + 136 + /* U32 filters */ 137 + 138 + #define TC_U32_HTID(h) ((h)&0xFFF00000) 139 + #define TC_U32_USERHTID(h) (TC_U32_HTID(h)>>20) 140 + #define TC_U32_HASH(h) (((h)>>12)&0xFF) 141 + #define TC_U32_NODE(h) ((h)&0xFFF) 142 + #define TC_U32_KEY(h) ((h)&0xFFFFF) 143 + #define TC_U32_UNSPEC 0 144 + #define TC_U32_ROOT (0xFFF00000) 145 + 146 + enum { 147 + TCA_U32_UNSPEC, 148 + TCA_U32_CLASSID, 149 + TCA_U32_HASH, 150 + TCA_U32_LINK, 151 + TCA_U32_DIVISOR, 152 + TCA_U32_SEL, 153 + TCA_U32_POLICE, 154 + TCA_U32_ACT, 155 + TCA_U32_INDEV, 156 + TCA_U32_PCNT, 157 + TCA_U32_MARK, 158 + TCA_U32_FLAGS, 159 + TCA_U32_PAD, 160 + __TCA_U32_MAX 161 + }; 162 + 163 + #define TCA_U32_MAX (__TCA_U32_MAX - 1) 164 + 165 + struct tc_u32_key { 166 + __be32 mask; 167 + __be32 val; 168 + int off; 169 + int offmask; 170 + }; 171 + 172 + struct tc_u32_sel { 173 + unsigned char flags; 174 + unsigned char offshift; 175 + unsigned char nkeys; 176 + 177 + __be16 offmask; 178 + __u16 off; 179 + short offoff; 180 + 181 + short hoff; 182 + __be32 hmask; 183 + struct tc_u32_key keys[0]; 184 + }; 185 + 186 + struct tc_u32_mark { 187 + __u32 val; 188 + __u32 mask; 189 + __u32 success; 190 + }; 191 + 192 + struct tc_u32_pcnt { 193 + __u64 rcnt; 194 + __u64 rhit; 195 + __u64 kcnts[0]; 196 + }; 197 + 198 + /* Flags */ 199 + 200 + #define TC_U32_TERMINAL 1 201 + #define TC_U32_OFFSET 2 202 + #define TC_U32_VAROFFSET 4 203 + #define TC_U32_EAT 8 204 + 205 + #define TC_U32_MAXDEPTH 8 206 + 207 + 208 + /* RSVP filter */ 209 + 210 + enum { 211 + TCA_RSVP_UNSPEC, 212 + TCA_RSVP_CLASSID, 213 + TCA_RSVP_DST, 214 + TCA_RSVP_SRC, 215 + TCA_RSVP_PINFO, 216 + TCA_RSVP_POLICE, 217 + TCA_RSVP_ACT, 218 + __TCA_RSVP_MAX 219 + }; 220 + 221 + #define TCA_RSVP_MAX (__TCA_RSVP_MAX - 1 ) 222 + 223 
+ struct tc_rsvp_gpi { 224 + __u32 key; 225 + __u32 mask; 226 + int offset; 227 + }; 228 + 229 + struct tc_rsvp_pinfo { 230 + struct tc_rsvp_gpi dpi; 231 + struct tc_rsvp_gpi spi; 232 + __u8 protocol; 233 + __u8 tunnelid; 234 + __u8 tunnelhdr; 235 + __u8 pad; 236 + }; 237 + 238 + /* ROUTE filter */ 239 + 240 + enum { 241 + TCA_ROUTE4_UNSPEC, 242 + TCA_ROUTE4_CLASSID, 243 + TCA_ROUTE4_TO, 244 + TCA_ROUTE4_FROM, 245 + TCA_ROUTE4_IIF, 246 + TCA_ROUTE4_POLICE, 247 + TCA_ROUTE4_ACT, 248 + __TCA_ROUTE4_MAX 249 + }; 250 + 251 + #define TCA_ROUTE4_MAX (__TCA_ROUTE4_MAX - 1) 252 + 253 + 254 + /* FW filter */ 255 + 256 + enum { 257 + TCA_FW_UNSPEC, 258 + TCA_FW_CLASSID, 259 + TCA_FW_POLICE, 260 + TCA_FW_INDEV, /* used by CONFIG_NET_CLS_IND */ 261 + TCA_FW_ACT, /* used by CONFIG_NET_CLS_ACT */ 262 + TCA_FW_MASK, 263 + __TCA_FW_MAX 264 + }; 265 + 266 + #define TCA_FW_MAX (__TCA_FW_MAX - 1) 267 + 268 + /* TC index filter */ 269 + 270 + enum { 271 + TCA_TCINDEX_UNSPEC, 272 + TCA_TCINDEX_HASH, 273 + TCA_TCINDEX_MASK, 274 + TCA_TCINDEX_SHIFT, 275 + TCA_TCINDEX_FALL_THROUGH, 276 + TCA_TCINDEX_CLASSID, 277 + TCA_TCINDEX_POLICE, 278 + TCA_TCINDEX_ACT, 279 + __TCA_TCINDEX_MAX 280 + }; 281 + 282 + #define TCA_TCINDEX_MAX (__TCA_TCINDEX_MAX - 1) 283 + 284 + /* Flow filter */ 285 + 286 + enum { 287 + FLOW_KEY_SRC, 288 + FLOW_KEY_DST, 289 + FLOW_KEY_PROTO, 290 + FLOW_KEY_PROTO_SRC, 291 + FLOW_KEY_PROTO_DST, 292 + FLOW_KEY_IIF, 293 + FLOW_KEY_PRIORITY, 294 + FLOW_KEY_MARK, 295 + FLOW_KEY_NFCT, 296 + FLOW_KEY_NFCT_SRC, 297 + FLOW_KEY_NFCT_DST, 298 + FLOW_KEY_NFCT_PROTO_SRC, 299 + FLOW_KEY_NFCT_PROTO_DST, 300 + FLOW_KEY_RTCLASSID, 301 + FLOW_KEY_SKUID, 302 + FLOW_KEY_SKGID, 303 + FLOW_KEY_VLAN_TAG, 304 + FLOW_KEY_RXHASH, 305 + __FLOW_KEY_MAX, 306 + }; 307 + 308 + #define FLOW_KEY_MAX (__FLOW_KEY_MAX - 1) 309 + 310 + enum { 311 + FLOW_MODE_MAP, 312 + FLOW_MODE_HASH, 313 + }; 314 + 315 + enum { 316 + TCA_FLOW_UNSPEC, 317 + TCA_FLOW_KEYS, 318 + TCA_FLOW_MODE, 319 + TCA_FLOW_BASECLASS, 320 + 
TCA_FLOW_RSHIFT, 321 + TCA_FLOW_ADDEND, 322 + TCA_FLOW_MASK, 323 + TCA_FLOW_XOR, 324 + TCA_FLOW_DIVISOR, 325 + TCA_FLOW_ACT, 326 + TCA_FLOW_POLICE, 327 + TCA_FLOW_EMATCHES, 328 + TCA_FLOW_PERTURB, 329 + __TCA_FLOW_MAX 330 + }; 331 + 332 + #define TCA_FLOW_MAX (__TCA_FLOW_MAX - 1) 333 + 334 + /* Basic filter */ 335 + 336 + enum { 337 + TCA_BASIC_UNSPEC, 338 + TCA_BASIC_CLASSID, 339 + TCA_BASIC_EMATCHES, 340 + TCA_BASIC_ACT, 341 + TCA_BASIC_POLICE, 342 + __TCA_BASIC_MAX 343 + }; 344 + 345 + #define TCA_BASIC_MAX (__TCA_BASIC_MAX - 1) 346 + 347 + 348 + /* Cgroup classifier */ 349 + 350 + enum { 351 + TCA_CGROUP_UNSPEC, 352 + TCA_CGROUP_ACT, 353 + TCA_CGROUP_POLICE, 354 + TCA_CGROUP_EMATCHES, 355 + __TCA_CGROUP_MAX, 356 + }; 357 + 358 + #define TCA_CGROUP_MAX (__TCA_CGROUP_MAX - 1) 359 + 360 + /* BPF classifier */ 361 + 362 + #define TCA_BPF_FLAG_ACT_DIRECT (1 << 0) 363 + 364 + enum { 365 + TCA_BPF_UNSPEC, 366 + TCA_BPF_ACT, 367 + TCA_BPF_POLICE, 368 + TCA_BPF_CLASSID, 369 + TCA_BPF_OPS_LEN, 370 + TCA_BPF_OPS, 371 + TCA_BPF_FD, 372 + TCA_BPF_NAME, 373 + TCA_BPF_FLAGS, 374 + TCA_BPF_FLAGS_GEN, 375 + TCA_BPF_TAG, 376 + TCA_BPF_ID, 377 + __TCA_BPF_MAX, 378 + }; 379 + 380 + #define TCA_BPF_MAX (__TCA_BPF_MAX - 1) 381 + 382 + /* Flower classifier */ 383 + 384 + enum { 385 + TCA_FLOWER_UNSPEC, 386 + TCA_FLOWER_CLASSID, 387 + TCA_FLOWER_INDEV, 388 + TCA_FLOWER_ACT, 389 + TCA_FLOWER_KEY_ETH_DST, /* ETH_ALEN */ 390 + TCA_FLOWER_KEY_ETH_DST_MASK, /* ETH_ALEN */ 391 + TCA_FLOWER_KEY_ETH_SRC, /* ETH_ALEN */ 392 + TCA_FLOWER_KEY_ETH_SRC_MASK, /* ETH_ALEN */ 393 + TCA_FLOWER_KEY_ETH_TYPE, /* be16 */ 394 + TCA_FLOWER_KEY_IP_PROTO, /* u8 */ 395 + TCA_FLOWER_KEY_IPV4_SRC, /* be32 */ 396 + TCA_FLOWER_KEY_IPV4_SRC_MASK, /* be32 */ 397 + TCA_FLOWER_KEY_IPV4_DST, /* be32 */ 398 + TCA_FLOWER_KEY_IPV4_DST_MASK, /* be32 */ 399 + TCA_FLOWER_KEY_IPV6_SRC, /* struct in6_addr */ 400 + TCA_FLOWER_KEY_IPV6_SRC_MASK, /* struct in6_addr */ 401 + TCA_FLOWER_KEY_IPV6_DST, /* struct in6_addr */ 402 + 
TCA_FLOWER_KEY_IPV6_DST_MASK, /* struct in6_addr */ 403 + TCA_FLOWER_KEY_TCP_SRC, /* be16 */ 404 + TCA_FLOWER_KEY_TCP_DST, /* be16 */ 405 + TCA_FLOWER_KEY_UDP_SRC, /* be16 */ 406 + TCA_FLOWER_KEY_UDP_DST, /* be16 */ 407 + 408 + TCA_FLOWER_FLAGS, 409 + TCA_FLOWER_KEY_VLAN_ID, /* be16 */ 410 + TCA_FLOWER_KEY_VLAN_PRIO, /* u8 */ 411 + TCA_FLOWER_KEY_VLAN_ETH_TYPE, /* be16 */ 412 + 413 + TCA_FLOWER_KEY_ENC_KEY_ID, /* be32 */ 414 + TCA_FLOWER_KEY_ENC_IPV4_SRC, /* be32 */ 415 + TCA_FLOWER_KEY_ENC_IPV4_SRC_MASK,/* be32 */ 416 + TCA_FLOWER_KEY_ENC_IPV4_DST, /* be32 */ 417 + TCA_FLOWER_KEY_ENC_IPV4_DST_MASK,/* be32 */ 418 + TCA_FLOWER_KEY_ENC_IPV6_SRC, /* struct in6_addr */ 419 + TCA_FLOWER_KEY_ENC_IPV6_SRC_MASK,/* struct in6_addr */ 420 + TCA_FLOWER_KEY_ENC_IPV6_DST, /* struct in6_addr */ 421 + TCA_FLOWER_KEY_ENC_IPV6_DST_MASK,/* struct in6_addr */ 422 + 423 + TCA_FLOWER_KEY_TCP_SRC_MASK, /* be16 */ 424 + TCA_FLOWER_KEY_TCP_DST_MASK, /* be16 */ 425 + TCA_FLOWER_KEY_UDP_SRC_MASK, /* be16 */ 426 + TCA_FLOWER_KEY_UDP_DST_MASK, /* be16 */ 427 + TCA_FLOWER_KEY_SCTP_SRC_MASK, /* be16 */ 428 + TCA_FLOWER_KEY_SCTP_DST_MASK, /* be16 */ 429 + 430 + TCA_FLOWER_KEY_SCTP_SRC, /* be16 */ 431 + TCA_FLOWER_KEY_SCTP_DST, /* be16 */ 432 + 433 + TCA_FLOWER_KEY_ENC_UDP_SRC_PORT, /* be16 */ 434 + TCA_FLOWER_KEY_ENC_UDP_SRC_PORT_MASK, /* be16 */ 435 + TCA_FLOWER_KEY_ENC_UDP_DST_PORT, /* be16 */ 436 + TCA_FLOWER_KEY_ENC_UDP_DST_PORT_MASK, /* be16 */ 437 + 438 + TCA_FLOWER_KEY_FLAGS, /* be32 */ 439 + TCA_FLOWER_KEY_FLAGS_MASK, /* be32 */ 440 + 441 + TCA_FLOWER_KEY_ICMPV4_CODE, /* u8 */ 442 + TCA_FLOWER_KEY_ICMPV4_CODE_MASK,/* u8 */ 443 + TCA_FLOWER_KEY_ICMPV4_TYPE, /* u8 */ 444 + TCA_FLOWER_KEY_ICMPV4_TYPE_MASK,/* u8 */ 445 + TCA_FLOWER_KEY_ICMPV6_CODE, /* u8 */ 446 + TCA_FLOWER_KEY_ICMPV6_CODE_MASK,/* u8 */ 447 + TCA_FLOWER_KEY_ICMPV6_TYPE, /* u8 */ 448 + TCA_FLOWER_KEY_ICMPV6_TYPE_MASK,/* u8 */ 449 + 450 + TCA_FLOWER_KEY_ARP_SIP, /* be32 */ 451 + TCA_FLOWER_KEY_ARP_SIP_MASK, /* be32 */ 452 + 
TCA_FLOWER_KEY_ARP_TIP, /* be32 */ 453 + TCA_FLOWER_KEY_ARP_TIP_MASK, /* be32 */ 454 + TCA_FLOWER_KEY_ARP_OP, /* u8 */ 455 + TCA_FLOWER_KEY_ARP_OP_MASK, /* u8 */ 456 + TCA_FLOWER_KEY_ARP_SHA, /* ETH_ALEN */ 457 + TCA_FLOWER_KEY_ARP_SHA_MASK, /* ETH_ALEN */ 458 + TCA_FLOWER_KEY_ARP_THA, /* ETH_ALEN */ 459 + TCA_FLOWER_KEY_ARP_THA_MASK, /* ETH_ALEN */ 460 + 461 + TCA_FLOWER_KEY_MPLS_TTL, /* u8 - 8 bits */ 462 + TCA_FLOWER_KEY_MPLS_BOS, /* u8 - 1 bit */ 463 + TCA_FLOWER_KEY_MPLS_TC, /* u8 - 3 bits */ 464 + TCA_FLOWER_KEY_MPLS_LABEL, /* be32 - 20 bits */ 465 + 466 + TCA_FLOWER_KEY_TCP_FLAGS, /* be16 */ 467 + TCA_FLOWER_KEY_TCP_FLAGS_MASK, /* be16 */ 468 + 469 + TCA_FLOWER_KEY_IP_TOS, /* u8 */ 470 + TCA_FLOWER_KEY_IP_TOS_MASK, /* u8 */ 471 + TCA_FLOWER_KEY_IP_TTL, /* u8 */ 472 + TCA_FLOWER_KEY_IP_TTL_MASK, /* u8 */ 473 + 474 + TCA_FLOWER_KEY_CVLAN_ID, /* be16 */ 475 + TCA_FLOWER_KEY_CVLAN_PRIO, /* u8 */ 476 + TCA_FLOWER_KEY_CVLAN_ETH_TYPE, /* be16 */ 477 + 478 + TCA_FLOWER_KEY_ENC_IP_TOS, /* u8 */ 479 + TCA_FLOWER_KEY_ENC_IP_TOS_MASK, /* u8 */ 480 + TCA_FLOWER_KEY_ENC_IP_TTL, /* u8 */ 481 + TCA_FLOWER_KEY_ENC_IP_TTL_MASK, /* u8 */ 482 + 483 + TCA_FLOWER_KEY_ENC_OPTS, 484 + TCA_FLOWER_KEY_ENC_OPTS_MASK, 485 + 486 + TCA_FLOWER_IN_HW_COUNT, 487 + 488 + __TCA_FLOWER_MAX, 489 + }; 490 + 491 + #define TCA_FLOWER_MAX (__TCA_FLOWER_MAX - 1) 492 + 493 + enum { 494 + TCA_FLOWER_KEY_ENC_OPTS_UNSPEC, 495 + TCA_FLOWER_KEY_ENC_OPTS_GENEVE, /* Nested 496 + * TCA_FLOWER_KEY_ENC_OPT_GENEVE_ 497 + * attributes 498 + */ 499 + __TCA_FLOWER_KEY_ENC_OPTS_MAX, 500 + }; 501 + 502 + #define TCA_FLOWER_KEY_ENC_OPTS_MAX (__TCA_FLOWER_KEY_ENC_OPTS_MAX - 1) 503 + 504 + enum { 505 + TCA_FLOWER_KEY_ENC_OPT_GENEVE_UNSPEC, 506 + TCA_FLOWER_KEY_ENC_OPT_GENEVE_CLASS, /* u16 */ 507 + TCA_FLOWER_KEY_ENC_OPT_GENEVE_TYPE, /* u8 */ 508 + TCA_FLOWER_KEY_ENC_OPT_GENEVE_DATA, /* 4 to 128 bytes */ 509 + 510 + __TCA_FLOWER_KEY_ENC_OPT_GENEVE_MAX, 511 + }; 512 + 513 + #define TCA_FLOWER_KEY_ENC_OPT_GENEVE_MAX \ 514 
+ (__TCA_FLOWER_KEY_ENC_OPT_GENEVE_MAX - 1) 515 + 516 + enum { 517 + TCA_FLOWER_KEY_FLAGS_IS_FRAGMENT = (1 << 0), 518 + TCA_FLOWER_KEY_FLAGS_FRAG_IS_FIRST = (1 << 1), 519 + }; 520 + 521 + /* Match-all classifier */ 522 + 523 + enum { 524 + TCA_MATCHALL_UNSPEC, 525 + TCA_MATCHALL_CLASSID, 526 + TCA_MATCHALL_ACT, 527 + TCA_MATCHALL_FLAGS, 528 + __TCA_MATCHALL_MAX, 529 + }; 530 + 531 + #define TCA_MATCHALL_MAX (__TCA_MATCHALL_MAX - 1) 532 + 533 + /* Extended Matches */ 534 + 535 + struct tcf_ematch_tree_hdr { 536 + __u16 nmatches; 537 + __u16 progid; 538 + }; 539 + 540 + enum { 541 + TCA_EMATCH_TREE_UNSPEC, 542 + TCA_EMATCH_TREE_HDR, 543 + TCA_EMATCH_TREE_LIST, 544 + __TCA_EMATCH_TREE_MAX 545 + }; 546 + #define TCA_EMATCH_TREE_MAX (__TCA_EMATCH_TREE_MAX - 1) 547 + 548 + struct tcf_ematch_hdr { 549 + __u16 matchid; 550 + __u16 kind; 551 + __u16 flags; 552 + __u16 pad; /* currently unused */ 553 + }; 554 + 555 + /* 0 1 556 + * 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 557 + * +-----------------------+-+-+---+ 558 + * | Unused |S|I| R | 559 + * +-----------------------+-+-+---+ 560 + * 561 + * R(2) ::= relation to next ematch 562 + * where: 0 0 END (last ematch) 563 + * 0 1 AND 564 + * 1 0 OR 565 + * 1 1 Unused (invalid) 566 + * I(1) ::= invert result 567 + * S(1) ::= simple payload 568 + */ 569 + #define TCF_EM_REL_END 0 570 + #define TCF_EM_REL_AND (1<<0) 571 + #define TCF_EM_REL_OR (1<<1) 572 + #define TCF_EM_INVERT (1<<2) 573 + #define TCF_EM_SIMPLE (1<<3) 574 + 575 + #define TCF_EM_REL_MASK 3 576 + #define TCF_EM_REL_VALID(v) (((v) & TCF_EM_REL_MASK) != TCF_EM_REL_MASK) 577 + 578 + enum { 579 + TCF_LAYER_LINK, 580 + TCF_LAYER_NETWORK, 581 + TCF_LAYER_TRANSPORT, 582 + __TCF_LAYER_MAX 583 + }; 584 + #define TCF_LAYER_MAX (__TCF_LAYER_MAX - 1) 585 + 586 + /* Ematch type assignments 587 + * 1..32767 Reserved for ematches inside kernel tree 588 + * 32768..65535 Free to use, not reliable 589 + */ 590 + #define TCF_EM_CONTAINER 0 591 + #define TCF_EM_CMP 1 592 + #define TCF_EM_NBYTE 
2 593 + #define TCF_EM_U32 3 594 + #define TCF_EM_META 4 595 + #define TCF_EM_TEXT 5 596 + #define TCF_EM_VLAN 6 597 + #define TCF_EM_CANID 7 598 + #define TCF_EM_IPSET 8 599 + #define TCF_EM_IPT 9 600 + #define TCF_EM_MAX 9 601 + 602 + enum { 603 + TCF_EM_PROG_TC 604 + }; 605 + 606 + enum { 607 + TCF_EM_OPND_EQ, 608 + TCF_EM_OPND_GT, 609 + TCF_EM_OPND_LT 610 + }; 611 + 612 + #endif
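The "extended actions" comment earlier in this header (local opcode in the top nibble, value in the low 28 bits) can be exercised directly. A self-contained sketch using the macros copied verbatim from the header; `make_goto_chain` is a hypothetical wrapper for illustration, not kernel code:

```c
/* Macros reproduced from pkt_cls.h so this sketch stands alone: a
 * combined opcode packs a 4-bit extended opcode above a 28-bit value.
 */
#define __TC_ACT_EXT_SHIFT	28
#define __TC_ACT_EXT(local)	((local) << __TC_ACT_EXT_SHIFT)
#define TC_ACT_EXT_VAL_MASK	((1 << __TC_ACT_EXT_SHIFT) - 1)
#define TC_ACT_EXT_OPCODE(combined) ((combined) & (~TC_ACT_EXT_VAL_MASK))
#define TC_ACT_EXT_CMP(combined, opcode) (TC_ACT_EXT_OPCODE(combined) == opcode)

#define TC_ACT_GOTO_CHAIN	__TC_ACT_EXT(2)

/* Combine the goto-chain opcode with a chain index; the two halves can
 * be pulled apart again with TC_ACT_EXT_OPCODE and TC_ACT_EXT_VAL_MASK.
 * Hypothetical helper, not part of the uapi.
 */
static int make_goto_chain(int chain)
{
	return TC_ACT_GOTO_CHAIN | (chain & TC_ACT_EXT_VAL_MASK);
}
```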
+37
tools/include/uapi/linux/tc_act/tc_bpf.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */ 2 + /* 3 + * Copyright (c) 2015 Jiri Pirko <jiri@resnulli.us> 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License as published by 7 + * the Free Software Foundation; either version 2 of the License, or 8 + * (at your option) any later version. 9 + */ 10 + 11 + #ifndef __LINUX_TC_BPF_H 12 + #define __LINUX_TC_BPF_H 13 + 14 + #include <linux/pkt_cls.h> 15 + 16 + #define TCA_ACT_BPF 13 17 + 18 + struct tc_act_bpf { 19 + tc_gen; 20 + }; 21 + 22 + enum { 23 + TCA_ACT_BPF_UNSPEC, 24 + TCA_ACT_BPF_TM, 25 + TCA_ACT_BPF_PARMS, 26 + TCA_ACT_BPF_OPS_LEN, 27 + TCA_ACT_BPF_OPS, 28 + TCA_ACT_BPF_FD, 29 + TCA_ACT_BPF_NAME, 30 + TCA_ACT_BPF_PAD, 31 + TCA_ACT_BPF_TAG, 32 + TCA_ACT_BPF_ID, 33 + __TCA_ACT_BPF_MAX, 34 + }; 35 + #define TCA_ACT_BPF_MAX (__TCA_ACT_BPF_MAX - 1) 36 + 37 + #endif
+1
tools/testing/selftests/Makefile
··· 24 24 TARGETS += mount 25 25 TARGETS += mqueue 26 26 TARGETS += net 27 + TARGETS += netfilter 27 28 TARGETS += nsfs 28 29 TARGETS += powerpc 29 30 TARGETS += proc
+4 -1
tools/testing/selftests/bpf/test_netcnt.c
··· 81 81 goto err; 82 82 } 83 83 84 - assert(system("ping localhost -6 -c 10000 -f -q > /dev/null") == 0); 84 + if (system("which ping6 &>/dev/null") == 0) 85 + assert(!system("ping6 localhost -c 10000 -f -q > /dev/null")); 86 + else 87 + assert(!system("ping -6 localhost -c 10000 -f -q > /dev/null")); 85 88 86 89 if (bpf_prog_query(cgroup_fd, BPF_CGROUP_INET_EGRESS, 0, NULL, NULL, 87 90 &prog_cnt)) {
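The selftest change works around an iputils packaging split: older releases ship a separate `ping6` binary, newer ones fold IPv6 support into `ping -6`. The same fallback can be written as a small standalone probe — `PING6` is a variable name invented for this sketch, not part of the selftest:

```shell
# Pick whichever IPv6 ping flavor this system ships, mirroring the
# selftest's fallback: try the standalone ping6 first, else fall back
# to the combined binary's -6 switch.
if command -v ping6 >/dev/null 2>&1; then
	PING6="ping6"
else
	PING6="ping -6"
fi
echo "using: $PING6"
```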
+19
tools/testing/selftests/bpf/test_verifier.c
··· 13953 13953 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 13954 13954 .result = ACCEPT, 13955 13955 }, 13956 + { 13957 + "calls: ctx read at start of subprog", 13958 + .insns = { 13959 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), 13960 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 5), 13961 + BPF_JMP_REG(BPF_JSGT, BPF_REG_0, BPF_REG_0, 0), 13962 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), 13963 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2), 13964 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), 13965 + BPF_EXIT_INSN(), 13966 + BPF_LDX_MEM(BPF_B, BPF_REG_9, BPF_REG_1, 0), 13967 + BPF_MOV64_IMM(BPF_REG_0, 0), 13968 + BPF_EXIT_INSN(), 13969 + }, 13970 + .prog_type = BPF_PROG_TYPE_SOCKET_FILTER, 13971 + .errstr_unpriv = "function calls to other bpf functions are allowed for root only", 13972 + .result_unpriv = REJECT, 13973 + .result = ACCEPT, 13974 + }, 13956 13975 }; 13957 13976 13958 13977 static int probe_filter_length(const struct bpf_insn *fp)
+6
tools/testing/selftests/netfilter/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + # Makefile for netfilter selftests 3 + 4 + TEST_PROGS := nft_trans_stress.sh 5 + 6 + include ../lib.mk
+2
tools/testing/selftests/netfilter/config
··· 1 + CONFIG_NET_NS=y 2 + NF_TABLES_INET=y
+78
tools/testing/selftests/netfilter/nft_trans_stress.sh
··· 1 + #!/bin/bash 2 + # 3 + # This test is for stress-testing the nf_tables config plane path vs. 4 + # packet path processing: Make sure we never release rules that are 5 + # still visible to other cpus. 6 + # 7 + # set -e 8 + 9 + # Kselftest framework requirement - SKIP code is 4. 10 + ksft_skip=4 11 + 12 + testns=testns1 13 + tables="foo bar baz quux" 14 + 15 + nft --version > /dev/null 2>&1 16 + if [ $? -ne 0 ];then 17 + echo "SKIP: Could not run test without nft tool" 18 + exit $ksft_skip 19 + fi 20 + 21 + ip -Version > /dev/null 2>&1 22 + if [ $? -ne 0 ];then 23 + echo "SKIP: Could not run test without ip tool" 24 + exit $ksft_skip 25 + fi 26 + 27 + tmp=$(mktemp) 28 + 29 + for table in $tables; do 30 + echo add table inet "$table" >> "$tmp" 31 + echo flush table inet "$table" >> "$tmp" 32 + 33 + echo "add chain inet $table INPUT { type filter hook input priority 0; }" >> "$tmp" 34 + echo "add chain inet $table OUTPUT { type filter hook output priority 0; }" >> "$tmp" 35 + for c in $(seq 1 400); do 36 + chain=$(printf "chain%03u" "$c") 37 + echo "add chain inet $table $chain" >> "$tmp" 38 + done 39 + 40 + for c in $(seq 1 400); do 41 + chain=$(printf "chain%03u" "$c") 42 + for BASE in INPUT OUTPUT; do 43 + echo "add rule inet $table $BASE counter jump $chain" >> "$tmp" 44 + done 45 + echo "add rule inet $table $chain counter return" >> "$tmp" 46 + done 47 + done 48 + 49 + ip netns add "$testns" 50 + ip -netns "$testns" link set lo up 51 + 52 + lscpu | grep ^CPU\(s\): | ( read cpu cpunum ; 53 + cpunum=$((cpunum-1)) 54 + for i in $(seq 0 $cpunum);do 55 + mask=$(printf 0x%x $((1<<$i))) 56 + ip netns exec "$testns" taskset $mask ping -4 127.0.0.1 -fq > /dev/null & 57 + ip netns exec "$testns" taskset $mask ping -6 ::1 -fq > /dev/null & 58 + done) 59 + 60 + sleep 1 61 + 62 + for i in $(seq 1 10) ; do ip netns exec "$testns" nft -f "$tmp" & done 63 + 64 + for table in $tables;do 65 + randsleep=$((RANDOM%10)) 66 + sleep $randsleep 67 + ip netns exec "$testns" nft 
delete table inet $table 2>/dev/null 68 + done 69 + 70 + randsleep=$((RANDOM%10)) 71 + sleep $randsleep 72 + 73 + pkill -9 ping 74 + 75 + wait 76 + 77 + rm -f "$tmp" 78 + ip netns del "$testns"
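A note on the affinity loop above: each ping flood is pinned to one CPU by handing taskset a mask with only bit `i` set. The arithmetic in isolation (variable names chosen for this sketch):

```shell
# One-bit taskset affinity mask for CPU index i, exactly as the stress
# script computes it: shift 1 left by the index, print as hex.
i=3
mask=$(printf 0x%x $((1 << i)))
echo "$mask"
```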