···
 new entry and return the previous entry stored at that index.  You can
 use :c:func:`xa_erase` instead of calling :c:func:`xa_store` with a
 ``NULL`` entry.  There is no difference between an entry that has never
-been stored to and one that has most recently had ``NULL`` stored to it.
+been stored to, one that has been erased and one that has most recently
+had ``NULL`` stored to it.
 
 You can conditionally replace an entry at an index by using
 :c:func:`xa_cmpxchg`.  Like :c:func:`cmpxchg`, it will only succeed if
···
 indices.  Storing into one index may result in the entry retrieved by
 some, but not all of the other indices changing.
 
+Sometimes you need to ensure that a subsequent call to :c:func:`xa_store`
+will not need to allocate memory.  The :c:func:`xa_reserve` function
+will store a reserved entry at the indicated index.  Users of the normal
+API will see this entry as containing ``NULL``.  If you do not need to
+use the reserved entry, you can call :c:func:`xa_release` to remove the
+unused entry.  If another user has stored to the entry in the meantime,
+:c:func:`xa_release` will do nothing; if instead you want the entry to
+become ``NULL``, you should use :c:func:`xa_erase`.
+
+If all entries in the array are ``NULL``, the :c:func:`xa_empty` function
+will return ``true``.
+
 Finally, you can remove all entries from an XArray by calling
 :c:func:`xa_destroy`.  If the XArray entries are pointers, you may wish
 to free the entries first.  You can do this by iterating over all present
 entries in the XArray using the :c:func:`xa_for_each` iterator.
 
-ID assignment
--------------
+Allocating XArrays
+------------------
+
+If you use :c:func:`DEFINE_XARRAY_ALLOC` to define the XArray, or
+initialise it by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
+the XArray changes to track whether entries are in use or not.
 
 You can call :c:func:`xa_alloc` to store the entry at any unused index
 in the XArray.  If you need to modify the array from interrupt context,
 you can use :c:func:`xa_alloc_bh` or :c:func:`xa_alloc_irq` to disable
-interrupts while allocating the ID.  Unlike :c:func:`xa_store`, allocating
-a ``NULL`` pointer does not delete an entry.  Instead it reserves an
-entry like :c:func:`xa_reserve` and you can release it using either
-:c:func:`xa_erase` or :c:func:`xa_release`.  To use ID assignment, the
-XArray must be defined with :c:func:`DEFINE_XARRAY_ALLOC`, or initialised
-by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
+interrupts while allocating the ID.
+
+Using :c:func:`xa_store`, :c:func:`xa_cmpxchg` or :c:func:`xa_insert`
+will mark the entry as being allocated.  Unlike a normal XArray, storing
+``NULL`` will mark the entry as being in use, like :c:func:`xa_reserve`.
+To free an entry, use :c:func:`xa_erase` (or :c:func:`xa_release` if
+you only want to free the entry if it's ``NULL``).
+
+You cannot use ``XA_MARK_0`` with an allocating XArray as this mark
+is used to track whether an entry is free or not.  The other marks are
+available for your use.
 
 Memory allocation
 -----------------
···
 Takes xa_lock internally:
  * :c:func:`xa_store`
+ * :c:func:`xa_store_bh`
+ * :c:func:`xa_store_irq`
  * :c:func:`xa_insert`
  * :c:func:`xa_erase`
  * :c:func:`xa_erase_bh`
···
  * :c:func:`xa_alloc`
  * :c:func:`xa_alloc_bh`
  * :c:func:`xa_alloc_irq`
+ * :c:func:`xa_reserve`
+ * :c:func:`xa_reserve_bh`
+ * :c:func:`xa_reserve_irq`
  * :c:func:`xa_destroy`
  * :c:func:`xa_set_mark`
  * :c:func:`xa_clear_mark`
···
  * :c:func:`__xa_erase`
  * :c:func:`__xa_cmpxchg`
  * :c:func:`__xa_alloc`
+ * :c:func:`__xa_reserve`
  * :c:func:`__xa_set_mark`
  * :c:func:`__xa_clear_mark`
···
 using :c:func:`xa_lock_irqsave` in both the interrupt handler and process
 context, or :c:func:`xa_lock_irq` in process context and :c:func:`xa_lock`
 in the interrupt handler.  Some of the more common patterns have helper
-functions such as :c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.
+functions such as :c:func:`xa_store_bh`, :c:func:`xa_store_irq`,
+:c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.
 
 Sometimes you need to protect access to the XArray with a mutex because
 that lock sits above another mutex in the locking hierarchy.  That does
···
  - :c:func:`xa_is_zero`
  - Zero entries appear as ``NULL`` through the Normal API, but occupy
    an entry in the XArray which can be used to reserve the index for
-   future use.
+   future use.  This is used by allocating XArrays for allocated entries
+   which are ``NULL``.
 
 Other internal entries may be added in the future.  As far as possible, they
 will be handled by :c:func:`xas_retry`.
···
 Required properties:
  - compatible: should be "socionext,uniphier-scssi"
  - reg: address and length of the spi master registers
- - #address-cells: must be <1>, see spi-bus.txt
- - #size-cells: must be <0>, see spi-bus.txt
- - clocks: A phandle to the clock for the device.
- - resets: A phandle to the reset control for the device.
+ - interrupts: a single interrupt specifier
+ - pinctrl-names: should be "default"
+ - pinctrl-0: pin control state for the default mode
+ - clocks: a phandle to the clock for the device
+ - resets: a phandle to the reset control for the device
 
 Example:
 
 	spi0: spi@54006000 {
 		compatible = "socionext,uniphier-scssi";
 		reg = <0x54006000 0x100>;
-		#address-cells = <1>;
-		#size-cells = <0>;
+		interrupts = <0 39 4>;
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_spi0>;
 		clocks = <&peri_clk 11>;
 		resets = <&peri_rst 11>;
 	};
+1 -10
Documentation/input/event-codes.rst
···
 * REL_WHEEL, REL_HWHEEL:
 
   - These codes are used for vertical and horizontal scroll wheels,
-    respectively. The value is the number of "notches" moved on the wheel, the
-    physical size of which varies by device. For high-resolution wheels (which
-    report multiple events for each notch of movement, or do not have notches)
-    this may be an approximation based on the high-resolution scroll events.
-
-* REL_WHEEL_HI_RES:
-
-  - If a vertical scroll wheel supports high-resolution scrolling, this code
-    will be emitted in addition to REL_WHEEL. The value is the (approximate)
-    distance travelled by the user's finger, in microns.
+    respectively.
 
 EV_ABS
 ------
+63 -3
MAINTAINERS
···
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git
 Q:	https://patchwork.ozlabs.org/project/netdev/list/?delegate=77147
 S:	Supported
-F:	arch/x86/net/bpf_jit*
+F:	arch/*/net/*
 F:	Documentation/networking/filter.txt
 F:	Documentation/bpf/
 F:	include/linux/bpf*
···
 F:	tools/bpf/
 F:	tools/lib/bpf/
 F:	tools/testing/selftests/bpf/
+
+BPF JIT for ARM
+M:	Shubham Bansal <illusionist.neo@gmail.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/arm/net/
+
+BPF JIT for ARM64
+M:	Daniel Borkmann <daniel@iogearbox.net>
+M:	Alexei Starovoitov <ast@kernel.org>
+M:	Zi Shen Lim <zlim.lnx@gmail.com>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	arch/arm64/net/
+
+BPF JIT for MIPS (32-BIT AND 64-BIT)
+M:	Paul Burton <paul.burton@mips.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/mips/net/
+
+BPF JIT for NFP NICs
+M:	Jakub Kicinski <jakub.kicinski@netronome.com>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	drivers/net/ethernet/netronome/nfp/bpf/
+
+BPF JIT for POWERPC (32-BIT AND 64-BIT)
+M:	Naveen N. Rao <naveen.n.rao@linux.ibm.com>
+M:	Sandipan Das <sandipan@linux.ibm.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/powerpc/net/
+
+BPF JIT for S390
+M:	Martin Schwidefsky <schwidefsky@de.ibm.com>
+M:	Heiko Carstens <heiko.carstens@de.ibm.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/s390/net/
+X:	arch/s390/net/pnet.c
+
+BPF JIT for SPARC (32-BIT AND 64-BIT)
+M:	David S. Miller <davem@davemloft.net>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/sparc/net/
+
+BPF JIT for X86 32-BIT
+M:	Wang YanQing <udknight@gmail.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/x86/net/bpf_jit_comp32.c
+
+BPF JIT for X86 64-BIT
+M:	Alexei Starovoitov <ast@kernel.org>
+M:	Daniel Borkmann <daniel@iogearbox.net>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	arch/x86/net/
+X:	arch/x86/net/bpf_jit_comp32.c
 
 BROADCOM B44 10/100 ETHERNET DRIVER
 M:	Michael Chan <michael.chan@broadcom.com>
···
 F:	drivers/tty/vcc.c
 
 SPARSE CHECKER
-M:	"Christopher Li" <sparse@chrisli.org>
+M:	"Luc Van Oostenryck" <luc.vanoostenryck@gmail.com>
 L:	linux-sparse@vger.kernel.org
 W:	https://sparse.wiki.kernel.org/
 T:	git git://git.kernel.org/pub/scm/devel/sparse/sparse.git
-T:	git git://git.kernel.org/pub/scm/devel/sparse/chrisl/sparse.git
 S:	Maintained
 F:	include/linux/compiler.h
+2 -2
Makefile
···
 VERSION = 4
 PATCHLEVEL = 20
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
-NAME = "People's Front"
+EXTRAVERSION = -rc4
+NAME = Shy Crocodile
 
 # *DOCUMENTATION*
 # To see a list of typical targets execute "make help"
+17 -9
arch/arm64/net/bpf_jit_comp.c
···
  * >0 - successfully JITed a 16-byte eBPF instruction.
  * <0 - failed to JIT.
  */
-static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
+static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
+		      bool extra_pass)
 {
 	const u8 code = insn->code;
 	const u8 dst = bpf2a64[insn->dst_reg];
···
 	case BPF_JMP | BPF_CALL:
 	{
 		const u8 r0 = bpf2a64[BPF_REG_0];
-		const u64 func = (u64)__bpf_call_base + imm;
+		bool func_addr_fixed;
+		u64 func_addr;
+		int ret;
 
-		if (ctx->prog->is_func)
-			emit_addr_mov_i64(tmp, func, ctx);
+		ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass,
+					    &func_addr, &func_addr_fixed);
+		if (ret < 0)
+			return ret;
+		if (func_addr_fixed)
+			/* We can use optimized emission here. */
+			emit_a64_mov_i64(tmp, func_addr, ctx);
 		else
-			emit_a64_mov_i64(tmp, func, ctx);
+			emit_addr_mov_i64(tmp, func_addr, ctx);
 		emit(A64_BLR(tmp), ctx);
 		emit(A64_MOV(1, r0, A64_R(0)), ctx);
 		break;
···
 	return 0;
 }
 
-static int build_body(struct jit_ctx *ctx)
+static int build_body(struct jit_ctx *ctx, bool extra_pass)
 {
 	const struct bpf_prog *prog = ctx->prog;
 	int i;
···
 		const struct bpf_insn *insn = &prog->insnsi[i];
 		int ret;
 
-		ret = build_insn(insn, ctx);
+		ret = build_insn(insn, ctx, extra_pass);
 		if (ret > 0) {
 			i++;
 			if (ctx->image == NULL)
···
 	/* 1. Initial fake pass to compute ctx->idx. */
 
 	/* Fake pass to fill in ctx->offset. */
-	if (build_body(&ctx)) {
+	if (build_body(&ctx, extra_pass)) {
 		prog = orig_prog;
 		goto out_off;
 	}
···
 
 	build_prologue(&ctx, was_classic);
 
-	if (build_body(&ctx)) {
+	if (build_body(&ctx, extra_pass)) {
 		bpf_jit_binary_free(header);
 		prog = orig_prog;
 		goto out_off;
+3 -1
arch/ia64/include/asm/numa.h
···
  */
 
 extern u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES];
-#define node_distance(from,to) (numa_slit[(from) * MAX_NUMNODES + (to)])
+#define slit_distance(from,to) (numa_slit[(from) * MAX_NUMNODES + (to)])
+extern int __node_distance(int from, int to);
+#define node_distance(from,to) __node_distance(from, to)
 
 extern int paddr_to_nid(unsigned long paddr);
+3 -3
arch/ia64/kernel/acpi.c
···
 	if (!slit_table) {
 		for (i = 0; i < MAX_NUMNODES; i++)
 			for (j = 0; j < MAX_NUMNODES; j++)
-				node_distance(i, j) = i == j ? LOCAL_DISTANCE :
-							REMOTE_DISTANCE;
+				slit_distance(i, j) = i == j ?
+					LOCAL_DISTANCE : REMOTE_DISTANCE;
 		return;
 	}
···
 		if (!pxm_bit_test(j))
 			continue;
 		node_to = pxm_to_node(j);
-		node_distance(node_from, node_to) =
+		slit_distance(node_from, node_to) =
 			slit_table->entry[i * slit_table->locality_count + j];
 	}
 }
+6
arch/ia64/mm/numa.c
···
  */
 u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES];
 
+int __node_distance(int from, int to)
+{
+	return slit_distance(from, to);
+}
+EXPORT_SYMBOL(__node_distance);
+
 /* Identify which cnode a physical address resides on */
 int
 paddr_to_nid(unsigned long paddr)
+1
arch/powerpc/kvm/book3s_hv.c
···
 	ret = kvmhv_enter_nested_guest(vcpu);
 	if (ret == H_INTERRUPT) {
 		kvmppc_set_gpr(vcpu, 3, 0);
+		vcpu->arch.hcall_needed = 0;
 		return -EINTR;
 	}
 	break;
+38 -19
arch/powerpc/net/bpf_jit_comp64.c
···
 	PPC_BLR();
 }
 
-static void bpf_jit_emit_func_call(u32 *image, struct codegen_context *ctx, u64 func)
+static void bpf_jit_emit_func_call_hlp(u32 *image, struct codegen_context *ctx,
+				       u64 func)
+{
+#ifdef PPC64_ELF_ABI_v1
+	/* func points to the function descriptor */
+	PPC_LI64(b2p[TMP_REG_2], func);
+	/* Load actual entry point from function descriptor */
+	PPC_BPF_LL(b2p[TMP_REG_1], b2p[TMP_REG_2], 0);
+	/* ... and move it to LR */
+	PPC_MTLR(b2p[TMP_REG_1]);
+	/*
+	 * Load TOC from function descriptor at offset 8.
+	 * We can clobber r2 since we get called through a
+	 * function pointer (so caller will save/restore r2)
+	 * and since we don't use a TOC ourself.
+	 */
+	PPC_BPF_LL(2, b2p[TMP_REG_2], 8);
+#else
+	/* We can clobber r12 */
+	PPC_FUNC_ADDR(12, func);
+	PPC_MTLR(12);
+#endif
+	PPC_BLRL();
+}
+
+static void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx,
+				       u64 func)
 {
 	unsigned int i, ctx_idx = ctx->idx;
···
 {
 	const struct bpf_insn *insn = fp->insnsi;
 	int flen = fp->len;
-	int i;
+	int i, ret;
 
 	/* Start of epilogue code - will only be valid 2nd pass onwards */
 	u32 exit_addr = addrs[flen];
···
 		u32 src_reg = b2p[insn[i].src_reg];
 		s16 off = insn[i].off;
 		s32 imm = insn[i].imm;
+		bool func_addr_fixed;
+		u64 func_addr;
 		u64 imm64;
-		u8 *func;
 		u32 true_cond;
 		u32 tmp_idx;
···
 		case BPF_JMP | BPF_CALL:
 			ctx->seen |= SEEN_FUNC;
 
-			/* bpf function call */
-			if (insn[i].src_reg == BPF_PSEUDO_CALL)
-				if (!extra_pass)
-					func = NULL;
-				else if (fp->aux->func && off < fp->aux->func_cnt)
-					/* use the subprog id from the off
-					 * field to lookup the callee address
-					 */
-					func = (u8 *) fp->aux->func[off]->bpf_func;
-				else
-					return -EINVAL;
-			/* kernel helper call */
+			ret = bpf_jit_get_func_addr(fp, &insn[i], extra_pass,
+						    &func_addr, &func_addr_fixed);
+			if (ret < 0)
+				return ret;
+
+			if (func_addr_fixed)
+				bpf_jit_emit_func_call_hlp(image, ctx, func_addr);
 			else
-				func = (u8 *) __bpf_call_base + imm;
-
-			bpf_jit_emit_func_call(image, ctx, (u64)func);
-
+				bpf_jit_emit_func_call_rel(image, ctx, func_addr);
 			/* move return value from r3 to BPF_REG_0 */
 			PPC_MR(b2p[BPF_REG_0], 3);
 			break;
···
 	bool (*has_wbinvd_exit)(void);
 
 	u64 (*read_l1_tsc_offset)(struct kvm_vcpu *vcpu);
-	void (*write_tsc_offset)(struct kvm_vcpu *vcpu, u64 offset);
+	/* Returns actual tsc_offset set in active VMCS */
+	u64 (*write_l1_tsc_offset)(struct kvm_vcpu *vcpu, u64 offset);
 
 	void (*get_exit_info)(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2);
+6 -1
arch/x86/kvm/lapic.c
···
 #define PRIo64 "o"
 
 /* #define apic_debug(fmt,arg...) printk(KERN_WARNING fmt,##arg) */
-#define apic_debug(fmt, arg...)
+#define apic_debug(fmt, arg...) do {} while (0)
 
 /* 14 is the version for Xeon and Pentium 8.4.8*/
 #define APIC_VERSION			(0x14UL | ((KVM_APIC_LVT_NUM - 1) << 16))
···
 
 	rcu_read_lock();
 	map = rcu_dereference(kvm->arch.apic_map);
+
+	if (unlikely(!map)) {
+		count = -EOPNOTSUPP;
+		goto out;
+	}
 
 	if (min > map->max_apic_id)
 		goto out;
+9 -18
arch/x86/kvm/mmu.c
···
 }
 
 static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa,
-				    const u8 *new, int *bytes)
+				    int *bytes)
 {
-	u64 gentry;
+	u64 gentry = 0;
 	int r;
 
 	/*
···
 		/* Handle a 32-bit guest writing two halves of a 64-bit gpte */
 		*gpa &= ~(gpa_t)7;
 		*bytes = 8;
-		r = kvm_vcpu_read_guest(vcpu, *gpa, &gentry, 8);
-		if (r)
-			gentry = 0;
-		new = (const u8 *)&gentry;
 	}
 
-	switch (*bytes) {
-	case 4:
-		gentry = *(const u32 *)new;
-		break;
-	case 8:
-		gentry = *(const u64 *)new;
-		break;
-	default:
-		gentry = 0;
-		break;
+	if (*bytes == 4 || *bytes == 8) {
+		r = kvm_vcpu_read_guest_atomic(vcpu, *gpa, &gentry, *bytes);
+		if (r)
+			gentry = 0;
 	}
 
 	return gentry;
···
 
 	pgprintk("%s: gpa %llx bytes %d\n", __func__, gpa, bytes);
 
-	gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, new, &bytes);
-
 	/*
 	 * No need to care whether allocation memory is successful
 	 * or not since pte prefetch is skiped if it does not have
···
 	mmu_topup_memory_caches(vcpu);
 
 	spin_lock(&vcpu->kvm->mmu_lock);
+
+	gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, &bytes);
+
 	++vcpu->kvm->stat.mmu_pte_write;
 	kvm_mmu_audit(vcpu, AUDIT_PRE_PTE_WRITE);
+29 -15
arch/x86/kvm/svm.c
···
 	return vcpu->arch.tsc_offset;
 }
 
-static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+static u64 svm_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	u64 g_tsc_offset = 0;
···
 	svm->vmcb->control.tsc_offset = offset + g_tsc_offset;
 
 	mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
+	return svm->vmcb->control.tsc_offset;
 }
 
 static void avic_init_vmcb(struct vcpu_svm *svm)
···
 static int avic_init_access_page(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = vcpu->kvm;
-	int ret;
+	int ret = 0;
 
+	mutex_lock(&kvm->slots_lock);
 	if (kvm->arch.apic_access_page_done)
-		return 0;
+		goto out;
 
-	ret = x86_set_memory_region(kvm,
-				    APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
-				    APIC_DEFAULT_PHYS_BASE,
-				    PAGE_SIZE);
+	ret = __x86_set_memory_region(kvm,
+				      APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
+				      APIC_DEFAULT_PHYS_BASE,
+				      PAGE_SIZE);
 	if (ret)
-		return ret;
+		goto out;
 
 	kvm->arch.apic_access_page_done = true;
-	return 0;
+out:
+	mutex_unlock(&kvm->slots_lock);
+	return ret;
 }
 
 static int avic_init_backing_page(struct kvm_vcpu *vcpu)
···
 	return ERR_PTR(err);
 }
 
+static void svm_clear_current_vmcb(struct vmcb *vmcb)
+{
+	int i;
+
+	for_each_online_cpu(i)
+		cmpxchg(&per_cpu(svm_data, i)->current_vmcb, vmcb, NULL);
+}
+
 static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
+
+	/*
+	 * The vmcb page can be recycled, causing a false negative in
+	 * svm_vcpu_load(). So, ensure that no logical CPU has this
+	 * vmcb page recorded as its current vmcb.
+	 */
+	svm_clear_current_vmcb(svm->vmcb);
 
 	__free_page(pfn_to_page(__sme_clr(svm->vmcb_pa) >> PAGE_SHIFT));
 	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
···
 	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
 	kvm_vcpu_uninit(vcpu);
 	kmem_cache_free(kvm_vcpu_cache, svm);
-	/*
-	 * The vmcb page can be recycled, causing a false negative in
-	 * svm_vcpu_load(). So do a full IBPB now.
-	 */
-	indirect_branch_prediction_barrier();
 }
 
 static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
···
 	.has_wbinvd_exit = svm_has_wbinvd_exit,
 
 	.read_l1_tsc_offset = svm_read_l1_tsc_offset,
-	.write_tsc_offset = svm_write_tsc_offset,
+	.write_l1_tsc_offset = svm_write_l1_tsc_offset,
 
 	.set_tdp_cr3 = set_tdp_cr3,
+65 -33
arch/x86/kvm/vmx.c
···
  * refer SDM volume 3b section 21.6.13 & 22.1.3.
  */
 static unsigned int ple_gap = KVM_DEFAULT_PLE_GAP;
+module_param(ple_gap, uint, 0444);
 
 static unsigned int ple_window = KVM_VMX_DEFAULT_PLE_WINDOW;
 module_param(ple_window, uint, 0444);
···
 	struct shared_msr_entry *guest_msrs;
 	int                   nmsrs;
 	int                   save_nmsrs;
+	bool                  guest_msrs_dirty;
 	unsigned long	      host_idt_base;
 #ifdef CONFIG_X86_64
 	u64		      msr_host_kernel_gs_base;
···
 static bool nested_vmx_is_page_fault_vmexit(struct vmcs12 *vmcs12,
 					    u16 error_code);
 static void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu);
-static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
+static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
 							  u32 msr, int type);
 
 static DEFINE_PER_CPU(struct vmcs *, vmxarea);
···
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-	/* We don't support disabling the feature for simplicity. */
-	if (vmx->nested.enlightened_vmcs_enabled)
-		return 0;
-
-	vmx->nested.enlightened_vmcs_enabled = true;
-
 	/*
 	 * vmcs_version represents the range of supported Enlightened VMCS
 	 * versions: lower 8 bits is the minimal version, higher 8 bits is the
···
 	 */
 	if (vmcs_version)
 		*vmcs_version = (KVM_EVMCS_VERSION << 8) | 1;
+
+	/* We don't support disabling the feature for simplicity. */
+	if (vmx->nested.enlightened_vmcs_enabled)
+		return 0;
+
+	vmx->nested.enlightened_vmcs_enabled = true;
 
 	vmx->nested.msrs.pinbased_ctls_high &= ~EVMCS1_UNSUPPORTED_PINCTRL;
 	vmx->nested.msrs.entry_ctls_high &= ~EVMCS1_UNSUPPORTED_VMENTRY_CTRL;
···
 
 	vmx->req_immediate_exit = false;
 
+	/*
+	 * Note that guest MSRs to be saved/restored can also be changed
+	 * when guest state is loaded. This happens when guest transitions
+	 * to/from long-mode by setting MSR_EFER.LMA.
+	 */
+	if (!vmx->loaded_cpu_state || vmx->guest_msrs_dirty) {
+		vmx->guest_msrs_dirty = false;
+		for (i = 0; i < vmx->save_nmsrs; ++i)
+			kvm_set_shared_msr(vmx->guest_msrs[i].index,
+					   vmx->guest_msrs[i].data,
+					   vmx->guest_msrs[i].mask);
+
+	}
+
 	if (vmx->loaded_cpu_state)
 		return;
···
 		vmcs_writel(HOST_GS_BASE, gs_base);
 		host_state->gs_base = gs_base;
 	}
-
-	for (i = 0; i < vmx->save_nmsrs; ++i)
-		kvm_set_shared_msr(vmx->guest_msrs[i].index,
-				   vmx->guest_msrs[i].data,
-				   vmx->guest_msrs[i].mask);
 }
 
 static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx)
···
 		move_msr_up(vmx, index, save_nmsrs++);
 
 	vmx->save_nmsrs = save_nmsrs;
+	vmx->guest_msrs_dirty = true;
 
 	if (cpu_has_vmx_msr_bitmap())
 		vmx_update_msr_bitmap(&vmx->vcpu);
···
 	return vcpu->arch.tsc_offset;
 }
 
-/*
- * writes 'offset' into guest's timestamp counter offset register
- */
-static void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+static u64 vmx_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 {
+	u64 active_offset = offset;
 	if (is_guest_mode(vcpu)) {
 		/*
 		 * We're here if L1 chose not to trap WRMSR to TSC. According
···
 		 * set for L2 remains unchanged, and still needs to be added
 		 * to the newly set TSC to get L2's TSC.
 		 */
-		struct vmcs12 *vmcs12;
-		/* recalculate vmcs02.TSC_OFFSET: */
-		vmcs12 = get_vmcs12(vcpu);
-		vmcs_write64(TSC_OFFSET, offset +
-			(nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING) ?
-			 vmcs12->tsc_offset : 0));
+		struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+		if (nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING))
+			active_offset += vmcs12->tsc_offset;
 	} else {
 		trace_kvm_write_tsc_offset(vcpu->vcpu_id,
 					   vmcs_read64(TSC_OFFSET), offset);
-		vmcs_write64(TSC_OFFSET, offset);
 	}
+
+	vmcs_write64(TSC_OFFSET, active_offset);
+	return active_offset;
 }
 
 /*
···
 	spin_unlock(&vmx_vpid_lock);
 }
 
-static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
+static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
 							  u32 msr, int type)
 {
 	int f = sizeof(unsigned long);
···
 	}
 }
 
-static void __always_inline vmx_enable_intercept_for_msr(unsigned long *msr_bitmap,
+static __always_inline void vmx_enable_intercept_for_msr(unsigned long *msr_bitmap,
 							 u32 msr, int type)
 {
 	int f = sizeof(unsigned long);
···
 	}
 }
 
-static void __always_inline vmx_set_intercept_for_msr(unsigned long *msr_bitmap,
+static __always_inline void vmx_set_intercept_for_msr(unsigned long *msr_bitmap,
 						      u32 msr, int type, bool value)
 {
 	if (value)
···
 	struct vmcs12 *vmcs12 = vmx->nested.cached_vmcs12;
 	struct hv_enlightened_vmcs *evmcs = vmx->nested.hv_evmcs;
 
-	vmcs12->hdr.revision_id = evmcs->revision_id;
-
 	/* HV_VMX_ENLIGHTENED_CLEAN_FIELD_NONE */
 	vmcs12->tpr_threshold = evmcs->tpr_threshold;
 	vmcs12->guest_rip = evmcs->guest_rip;
···
 
 	vmx->nested.hv_evmcs = kmap(vmx->nested.hv_evmcs_page);
 
-	if (vmx->nested.hv_evmcs->revision_id != VMCS12_REVISION) {
+	/*
+	 * Currently, KVM only supports eVMCS version 1
+	 * (== KVM_EVMCS_VERSION) and thus we expect guest to set this
+	 * value to first u32 field of eVMCS which should specify eVMCS
+	 * VersionNumber.
+	 *
+	 * Guest should be aware of supported eVMCS versions by host by
+	 * examining CPUID.0x4000000A.EAX[0:15]. Host userspace VMM is
+	 * expected to set this CPUID leaf according to the value
+	 * returned in vmcs_version from nested_enable_evmcs().
+	 *
+	 * However, it turns out that Microsoft Hyper-V fails to comply
+	 * to their own invented interface: When Hyper-V use eVMCS, it
+	 * just sets first u32 field of eVMCS to revision_id specified
+	 * in MSR_IA32_VMX_BASIC. Instead of used eVMCS version number
+	 * which is one of the supported versions specified in
+	 * CPUID.0x4000000A.EAX[0:15].
+	 *
+	 * To overcome Hyper-V bug, we accept here either a supported
+	 * eVMCS version or VMCS12 revision_id as valid values for first
+	 * u32 field of eVMCS.
+	 */
+	if ((vmx->nested.hv_evmcs->revision_id != KVM_EVMCS_VERSION) &&
+	    (vmx->nested.hv_evmcs->revision_id != VMCS12_REVISION)) {
 		nested_release_evmcs(vcpu);
 		return 0;
 	}
···
 	 * present in struct hv_enlightened_vmcs, ...). Make sure there
 	 * are no leftovers.
 	 */
-	if (from_launch)
-		memset(vmx->nested.cached_vmcs12, 0,
-		       sizeof(*vmx->nested.cached_vmcs12));
+	if (from_launch) {
+		struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+		memset(vmcs12, 0, sizeof(*vmcs12));
+		vmcs12->hdr.revision_id = VMCS12_REVISION;
+	}
 
 	}
 	return 1;
···
 	.has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,
 
 	.read_l1_tsc_offset = vmx_read_l1_tsc_offset,
-	.write_tsc_offset = vmx_write_tsc_offset,
+	.write_l1_tsc_offset = vmx_write_l1_tsc_offset,
 
 	.set_tdp_cr3 = vmx_set_cr3,
···
 	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM,
 		USB_DEVICE_ID_ELECOM_BM084),
 	  HID_BATTERY_QUIRK_IGNORE },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_SYMBOL,
+		USB_DEVICE_ID_SYMBOL_SCANNER_3),
+	  HID_BATTERY_QUIRK_IGNORE },
 	{}
 };
···
 }
 EXPORT_SYMBOL_GPL(hidinput_disconnect);
 
-/**
- * hid_scroll_counter_handle_scroll() - Send high- and low-resolution scroll
- *                                      events given a high-resolution wheel
- *                                      movement.
- * @counter: a hid_scroll_counter struct describing the wheel.
- * @hi_res_value: the movement of the wheel, in the mouse's high-resolution
- *                units.
- *
- * Given a high-resolution movement, this function converts the movement into
- * microns and emits high-resolution scroll events for the input device. It also
- * uses the multiplier from &struct hid_scroll_counter to emit low-resolution
- * scroll events when appropriate for backwards-compatibility with userspace
- * input libraries.
- */
-void hid_scroll_counter_handle_scroll(struct hid_scroll_counter *counter,
-				      int hi_res_value)
-{
-	int low_res_value, remainder, multiplier;
-
-	input_report_rel(counter->dev, REL_WHEEL_HI_RES,
-			 hi_res_value * counter->microns_per_hi_res_unit);
-
-	/*
-	 * Update the low-res remainder with the high-res value,
-	 * but reset if the direction has changed.
-	 */
-	remainder = counter->remainder;
-	if ((remainder ^ hi_res_value) < 0)
-		remainder = 0;
-	remainder += hi_res_value;
-
-	/*
-	 * Then just use the resolution multiplier to see if
-	 * we should send a low-res (aka regular wheel) event.
-	 */
-	multiplier = counter->resolution_multiplier;
-	low_res_value = remainder / multiplier;
-	remainder -= low_res_value * multiplier;
-	counter->remainder = remainder;
-
-	if (low_res_value)
-		input_report_rel(counter->dev, REL_WHEEL, low_res_value);
-}
-EXPORT_SYMBOL_GPL(hid_scroll_counter_handle_scroll);
+27 -282
drivers/hid/hid-logitech-hidpp.c
···
 #define HIDPP_QUIRK_NO_HIDINPUT			BIT(23)
 #define HIDPP_QUIRK_FORCE_OUTPUT_REPORTS	BIT(24)
 #define HIDPP_QUIRK_UNIFYING			BIT(25)
-#define HIDPP_QUIRK_HI_RES_SCROLL_1P0		BIT(26)
-#define HIDPP_QUIRK_HI_RES_SCROLL_X2120		BIT(27)
-#define HIDPP_QUIRK_HI_RES_SCROLL_X2121		BIT(28)
-
-/* Convenience constant to check for any high-res support. */
-#define HIDPP_QUIRK_HI_RES_SCROLL	(HIDPP_QUIRK_HI_RES_SCROLL_1P0 | \
-					 HIDPP_QUIRK_HI_RES_SCROLL_X2120 | \
-					 HIDPP_QUIRK_HI_RES_SCROLL_X2121)
 
 #define HIDPP_QUIRK_DELAYED_INIT		HIDPP_QUIRK_NO_HIDINPUT
···
 	unsigned long capabilities;
 
 	struct hidpp_battery battery;
-	struct hid_scroll_counter vertical_wheel_counter;
 };
 
 /* HID++ 1.0 error codes */
···
 #define HIDPP_SET_LONG_REGISTER			0x82
 #define HIDPP_GET_LONG_REGISTER			0x83
 
-/**
- * hidpp10_set_register_bit() - Sets a single bit in a HID++ 1.0 register.
- * @hidpp_dev: the device to set the register on.
- * @register_address: the address of the register to modify.
- * @byte: the byte of the register to modify. Should be less than 3.
- * Return: 0 if successful, otherwise a negative error code.
- */
-static int hidpp10_set_register_bit(struct hidpp_device *hidpp_dev,
-	u8 register_address, u8 byte, u8 bit)
+#define HIDPP_REG_GENERAL			0x00
+
+static int hidpp10_enable_battery_reporting(struct hidpp_device *hidpp_dev)
 {
 	struct hidpp_report response;
 	int ret;
 	u8 params[3] = { 0 };
 
 	ret = hidpp_send_rap_command_sync(hidpp_dev,
-					  REPORT_ID_HIDPP_SHORT,
-					  HIDPP_GET_REGISTER,
-					  register_address,
-					  NULL, 0, &response);
+					REPORT_ID_HIDPP_SHORT,
+					HIDPP_GET_REGISTER,
+					HIDPP_REG_GENERAL,
+					NULL, 0, &response);
 	if (ret)
 		return ret;
 
 	memcpy(params, response.rap.params, 3);
 
-	params[byte] |= BIT(bit);
+	/* Set the battery bit */
+	params[0] |= BIT(4);
 
 	return hidpp_send_rap_command_sync(hidpp_dev,
-					   REPORT_ID_HIDPP_SHORT,
-					   HIDPP_SET_REGISTER,
-					   register_address,
-					   params, 3, &response);
-}
-
-
-#define HIDPP_REG_GENERAL 0x00
-
-static int hidpp10_enable_battery_reporting(struct hidpp_device *hidpp_dev)
-{
-	return hidpp10_set_register_bit(hidpp_dev, HIDPP_REG_GENERAL, 0, 4);
-}
-
-#define HIDPP_REG_FEATURES 0x01
-
-/* On HID++ 1.0 devices, high-res scroll was called "scrolling acceleration". */
-static int hidpp10_enable_scrolling_acceleration(struct hidpp_device *hidpp_dev)
-{
-	return hidpp10_set_register_bit(hidpp_dev, HIDPP_REG_FEATURES, 0, 6);
+					REPORT_ID_HIDPP_SHORT,
+					HIDPP_SET_REGISTER,
+					HIDPP_REG_GENERAL,
+					params, 3, &response);
 }
 
 #define HIDPP_REG_BATTERY_STATUS		0x07
···
 	}
 
 	return ret;
-}
-
-/* -------------------------------------------------------------------------- */
-/* 0x2120: Hi-resolution scrolling                                            */
-/* -------------------------------------------------------------------------- */
-
-#define HIDPP_PAGE_HI_RESOLUTION_SCROLLING 0x2120
-
-#define CMD_HI_RESOLUTION_SCROLLING_SET_HIGHRES_SCROLLING_MODE 0x10
-
-static int hidpp_hrs_set_highres_scrolling_mode(struct hidpp_device *hidpp,
-	bool enabled, u8 *multiplier)
-{
-	u8 feature_index;
-	u8 feature_type;
-	int ret;
-	u8 params[1];
-	struct hidpp_report response;
-
-	ret = hidpp_root_get_feature(hidpp,
-				     HIDPP_PAGE_HI_RESOLUTION_SCROLLING,
-				     &feature_index,
-				     &feature_type);
-	if (ret)
-		return ret;
-
-	params[0] = enabled ?
*/447447-static int hidpp10_enable_scrolling_acceleration(struct hidpp_device *hidpp_dev)448448-{449449- return hidpp10_set_register_bit(hidpp_dev, HIDPP_REG_FEATURES, 0, 6);434434+ REPORT_ID_HIDPP_SHORT,435435+ HIDPP_SET_REGISTER,436436+ HIDPP_REG_GENERAL,437437+ params, 3, &response);450438}451439452440#define HIDPP_REG_BATTERY_STATUS 0x07···11341164 }1135116511361166 return ret;11371137-}11381138-11391139-/* -------------------------------------------------------------------------- */11401140-/* 0x2120: Hi-resolution scrolling */11411141-/* -------------------------------------------------------------------------- */11421142-11431143-#define HIDPP_PAGE_HI_RESOLUTION_SCROLLING 0x212011441144-11451145-#define CMD_HI_RESOLUTION_SCROLLING_SET_HIGHRES_SCROLLING_MODE 0x1011461146-11471147-static int hidpp_hrs_set_highres_scrolling_mode(struct hidpp_device *hidpp,11481148- bool enabled, u8 *multiplier)11491149-{11501150- u8 feature_index;11511151- u8 feature_type;11521152- int ret;11531153- u8 params[1];11541154- struct hidpp_report response;11551155-11561156- ret = hidpp_root_get_feature(hidpp,11571157- HIDPP_PAGE_HI_RESOLUTION_SCROLLING,11581158- &feature_index,11591159- &feature_type);11601160- if (ret)11611161- return ret;11621162-11631163- params[0] = enabled ? 
BIT(0) : 0;11641164- ret = hidpp_send_fap_command_sync(hidpp, feature_index,11651165- CMD_HI_RESOLUTION_SCROLLING_SET_HIGHRES_SCROLLING_MODE,11661166- params, sizeof(params), &response);11671167- if (ret)11681168- return ret;11691169- *multiplier = response.fap.params[1];11701170- return 0;11711171-}11721172-11731173-/* -------------------------------------------------------------------------- */11741174-/* 0x2121: HiRes Wheel */11751175-/* -------------------------------------------------------------------------- */11761176-11771177-#define HIDPP_PAGE_HIRES_WHEEL 0x212111781178-11791179-#define CMD_HIRES_WHEEL_GET_WHEEL_CAPABILITY 0x0011801180-#define CMD_HIRES_WHEEL_SET_WHEEL_MODE 0x2011811181-11821182-static int hidpp_hrw_get_wheel_capability(struct hidpp_device *hidpp,11831183- u8 *multiplier)11841184-{11851185- u8 feature_index;11861186- u8 feature_type;11871187- int ret;11881188- struct hidpp_report response;11891189-11901190- ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_HIRES_WHEEL,11911191- &feature_index, &feature_type);11921192- if (ret)11931193- goto return_default;11941194-11951195- ret = hidpp_send_fap_command_sync(hidpp, feature_index,11961196- CMD_HIRES_WHEEL_GET_WHEEL_CAPABILITY,11971197- NULL, 0, &response);11981198- if (ret)11991199- goto return_default;12001200-12011201- *multiplier = response.fap.params[0];12021202- return 0;12031203-return_default:12041204- hid_warn(hidpp->hid_dev,12051205- "Couldn't get wheel multiplier (error %d), assuming %d.\n",12061206- ret, *multiplier);12071207- return ret;12081208-}12091209-12101210-static int hidpp_hrw_set_wheel_mode(struct hidpp_device *hidpp, bool invert,12111211- bool high_resolution, bool use_hidpp)12121212-{12131213- u8 feature_index;12141214- u8 feature_type;12151215- int ret;12161216- u8 params[1];12171217- struct hidpp_report response;12181218-12191219- ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_HIRES_WHEEL,12201220- &feature_index, &feature_type);12211221- if (ret)12221222- return 
ret;12231223-12241224- params[0] = (invert ? BIT(2) : 0) |12251225- (high_resolution ? BIT(1) : 0) |12261226- (use_hidpp ? BIT(0) : 0);12271227-12281228- return hidpp_send_fap_command_sync(hidpp, feature_index,12291229- CMD_HIRES_WHEEL_SET_WHEEL_MODE,12301230- params, sizeof(params), &response);12311167}1232116812331169/* -------------------------------------------------------------------------- */···23992523 input_report_rel(mydata->input, REL_Y, v);2400252424012525 v = hid_snto32(data[6], 8);24022402- hid_scroll_counter_handle_scroll(24032403- &hidpp->vertical_wheel_counter, v);25262526+ input_report_rel(mydata->input, REL_WHEEL, v);2404252724052528 input_sync(mydata->input);24062529 }···25282653}2529265425302655/* -------------------------------------------------------------------------- */25312531-/* High-resolution scroll wheels */25322532-/* -------------------------------------------------------------------------- */25332533-25342534-/**25352535- * struct hi_res_scroll_info - Stores info on a device's high-res scroll wheel.25362536- * @product_id: the HID product ID of the device being described.25372537- * @microns_per_hi_res_unit: the distance moved by the user's finger for each25382538- * high-resolution unit reported by the device, in25392539- * 256ths of a millimetre.25402540- */25412541-struct hi_res_scroll_info {25422542- __u32 product_id;25432543- int microns_per_hi_res_unit;25442544-};25452545-25462546-static struct hi_res_scroll_info hi_res_scroll_devices[] = {25472547- { /* Anywhere MX */25482548- .product_id = 0x1017, .microns_per_hi_res_unit = 445 },25492549- { /* Performance MX */25502550- .product_id = 0x101a, .microns_per_hi_res_unit = 406 },25512551- { /* M560 */25522552- .product_id = 0x402d, .microns_per_hi_res_unit = 435 },25532553- { /* MX Master 2S */25542554- .product_id = 0x4069, .microns_per_hi_res_unit = 406 },25552555-};25562556-25572557-static int hi_res_scroll_look_up_microns(__u32 product_id)25582558-{25592559- int i;25602560- 
int num_devices = sizeof(hi_res_scroll_devices)25612561- / sizeof(hi_res_scroll_devices[0]);25622562- for (i = 0; i < num_devices; i++) {25632563- if (hi_res_scroll_devices[i].product_id == product_id)25642564- return hi_res_scroll_devices[i].microns_per_hi_res_unit;25652565- }25662566- /* We don't have a value for this device, so use a sensible default. */25672567- return 406;25682568-}25692569-25702570-static int hi_res_scroll_enable(struct hidpp_device *hidpp)25712571-{25722572- int ret;25732573- u8 multiplier = 8;25742574-25752575- if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_X2121) {25762576- ret = hidpp_hrw_set_wheel_mode(hidpp, false, true, false);25772577- hidpp_hrw_get_wheel_capability(hidpp, &multiplier);25782578- } else if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_X2120) {25792579- ret = hidpp_hrs_set_highres_scrolling_mode(hidpp, true,25802580- &multiplier);25812581- } else /* if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_1P0) */25822582- ret = hidpp10_enable_scrolling_acceleration(hidpp);25832583-25842584- if (ret)25852585- return ret;25862586-25872587- hidpp->vertical_wheel_counter.resolution_multiplier = multiplier;25882588- hidpp->vertical_wheel_counter.microns_per_hi_res_unit =25892589- hi_res_scroll_look_up_microns(hidpp->hid_dev->product);25902590- hid_info(hidpp->hid_dev, "multiplier = %d, microns = %d\n",25912591- multiplier,25922592- hidpp->vertical_wheel_counter.microns_per_hi_res_unit);25932593- return 0;25942594-}25952595-25962596-/* -------------------------------------------------------------------------- */25972656/* Generic HID++ devices */25982657/* -------------------------------------------------------------------------- */25992658···25722763 wtp_populate_input(hidpp, input, origin_is_hid_core);25732764 else if (hidpp->quirks & HIDPP_QUIRK_CLASS_M560)25742765 m560_populate_input(hidpp, input, origin_is_hid_core);25752575-25762576- if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) {25772577- input_set_capability(input, EV_REL, 
REL_WHEEL_HI_RES);25782578- hidpp->vertical_wheel_counter.dev = input;25792579- }25802766}2581276725822768static int hidpp_input_configured(struct hid_device *hdev,···26882884 return m560_raw_event(hdev, data, size);2689288526902886 return 0;26912691-}26922692-26932693-static int hidpp_event(struct hid_device *hdev, struct hid_field *field,26942694- struct hid_usage *usage, __s32 value)26952695-{26962696- /* This function will only be called for scroll events, due to the26972697- * restriction imposed in hidpp_usages.26982698- */26992699- struct hidpp_device *hidpp = hid_get_drvdata(hdev);27002700- struct hid_scroll_counter *counter = &hidpp->vertical_wheel_counter;27012701- /* A scroll event may occur before the multiplier has been retrieved or27022702- * the input device set, or high-res scroll enabling may fail. In such27032703- * cases we must return early (falling back to default behaviour) to27042704- * avoid a crash in hid_scroll_counter_handle_scroll.27052705- */27062706- if (!(hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL) || value == 027072707- || counter->dev == NULL || counter->resolution_multiplier == 0)27082708- return 0;27092709-27102710- hid_scroll_counter_handle_scroll(counter, value);27112711- return 1;27122887}2713288827142889static int hidpp_initialize_battery(struct hidpp_device *hidpp)···29013118 if (hidpp->battery.ps)29023119 power_supply_changed(hidpp->battery.ps);2903312029042904- if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL)29052905- hi_res_scroll_enable(hidpp);29062906-29073121 if (!(hidpp->quirks & HIDPP_QUIRK_NO_HIDINPUT) || hidpp->delayed_input)29083122 /* if the input nodes are already created, we can stop now */29093123 return;···30863306 mutex_destroy(&hidpp->send_mutex);30873307}3088330830893089-#define LDJ_DEVICE(product) \30903090- HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE, \30913091- USB_VENDOR_ID_LOGITECH, (product))30923092-30933309static const struct hid_device_id hidpp_devices[] = {30943310 { /* wireless touchpad 
*/30953095- LDJ_DEVICE(0x4011),33113311+ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,33123312+ USB_VENDOR_ID_LOGITECH, 0x4011),30963313 .driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT |30973314 HIDPP_QUIRK_WTP_PHYSICAL_BUTTONS },30983315 { /* wireless touchpad T650 */30993099- LDJ_DEVICE(0x4101),33163316+ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,33173317+ USB_VENDOR_ID_LOGITECH, 0x4101),31003318 .driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT },31013319 { /* wireless touchpad T651 */31023320 HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,31033321 USB_DEVICE_ID_LOGITECH_T651),31043322 .driver_data = HIDPP_QUIRK_CLASS_WTP },31053105- { /* Mouse Logitech Anywhere MX */31063106- LDJ_DEVICE(0x1017), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },31073107- { /* Mouse Logitech Cube */31083108- LDJ_DEVICE(0x4010), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2120 },31093109- { /* Mouse Logitech M335 */31103110- LDJ_DEVICE(0x4050), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31113111- { /* Mouse Logitech M515 */31123112- LDJ_DEVICE(0x4007), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2120 },31133323 { /* Mouse logitech M560 */31143114- LDJ_DEVICE(0x402d),31153115- .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_CLASS_M56031163116- | HIDPP_QUIRK_HI_RES_SCROLL_X2120 },31173117- { /* Mouse Logitech M705 (firmware RQM17) */31183118- LDJ_DEVICE(0x101b), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },31193119- { /* Mouse Logitech M705 (firmware RQM67) */31203120- LDJ_DEVICE(0x406d), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31213121- { /* Mouse Logitech M720 */31223122- LDJ_DEVICE(0x405e), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31233123- { /* Mouse Logitech MX Anywhere 2 */31243124- LDJ_DEVICE(0x404a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31253125- { LDJ_DEVICE(0xb013), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31263126- { LDJ_DEVICE(0xb018), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 
},31273127- { LDJ_DEVICE(0xb01f), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31283128- { /* Mouse Logitech MX Anywhere 2S */31293129- LDJ_DEVICE(0x406a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31303130- { /* Mouse Logitech MX Master */31313131- LDJ_DEVICE(0x4041), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31323132- { LDJ_DEVICE(0x4060), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31333133- { LDJ_DEVICE(0x4071), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31343134- { /* Mouse Logitech MX Master 2S */31353135- LDJ_DEVICE(0x4069), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },31363136- { /* Mouse Logitech Performance MX */31373137- LDJ_DEVICE(0x101a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },33243324+ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,33253325+ USB_VENDOR_ID_LOGITECH, 0x402d),33263326+ .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_CLASS_M560 },31383327 { /* Keyboard logitech K400 */31393139- LDJ_DEVICE(0x4024),33283328+ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,33293329+ USB_VENDOR_ID_LOGITECH, 0x4024),31403330 .driver_data = HIDPP_QUIRK_CLASS_K400 },31413331 { /* Solar Keyboard Logitech K750 */31423142- LDJ_DEVICE(0x4002),33323332+ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,33333333+ USB_VENDOR_ID_LOGITECH, 0x4002),31433334 .driver_data = HIDPP_QUIRK_CLASS_K750 },3144333531453145- { LDJ_DEVICE(HID_ANY_ID) },33363336+ { HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,33373337+ USB_VENDOR_ID_LOGITECH, HID_ANY_ID)},3146333831473339 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_G920_WHEEL),31483340 .driver_data = HIDPP_QUIRK_CLASS_G920 | HIDPP_QUIRK_FORCE_OUTPUT_REPORTS},···3123337131243372MODULE_DEVICE_TABLE(hid, hidpp_devices);3125337331263126-static const struct hid_usage_id hidpp_usages[] = {31273127- { HID_GD_WHEEL, EV_REL, REL_WHEEL },31283128- { HID_ANY_ID - 1, HID_ANY_ID - 1, HID_ANY_ID - 1}31293129-};31303130-31313374static struct hid_driver hidpp_driver = {31323375 
.name = "logitech-hidpp-device",31333376 .id_table = hidpp_devices,31343377 .probe = hidpp_probe,31353378 .remove = hidpp_remove,31363379 .raw_event = hidpp_raw_event,31373137- .usage_table = hidpp_usages,31383138- .event = hidpp_event,31393380 .input_configured = hidpp_input_configured,31403381 .input_mapping = hidpp_input_mapping,31413382 .input_mapped = hidpp_input_mapped,
drivers/hid/hid-steam.c
···
  * In order to avoid breaking them this driver creates a layered hidraw device,
  * so it can detect when the client is running and then:
  *  - it will not send any command to the controller.
- *  - this input device will be disabled, to avoid double input of the same
+ *  - this input device will be removed, to avoid double input of the same
  *    user action.
+ * When the client is closed, this input device will be created again.
  *
  * For additional functions, such as changing the right-pad margin or switching
  * the led, you can use the user-space tool at:
···
 	spinlock_t lock;
 	struct hid_device *hdev, *client_hdev;
 	struct mutex mutex;
-	bool client_opened, input_opened;
+	bool client_opened;
 	struct input_dev __rcu *input;
 	unsigned long quirks;
 	struct work_struct work_connect;
···
 	}
 }
 
-static void steam_update_lizard_mode(struct steam_device *steam)
-{
-	mutex_lock(&steam->mutex);
-	if (!steam->client_opened) {
-		if (steam->input_opened)
-			steam_set_lizard_mode(steam, false);
-		else
-			steam_set_lizard_mode(steam, lizard_mode);
-	}
-	mutex_unlock(&steam->mutex);
-}
-
 static int steam_input_open(struct input_dev *dev)
 {
 	struct steam_device *steam = input_get_drvdata(dev);
···
 		return ret;
 
 	mutex_lock(&steam->mutex);
-	steam->input_opened = true;
 	if (!steam->client_opened && lizard_mode)
 		steam_set_lizard_mode(steam, false);
 	mutex_unlock(&steam->mutex);
···
 	struct steam_device *steam = input_get_drvdata(dev);
 
 	mutex_lock(&steam->mutex);
-	steam->input_opened = false;
 	if (!steam->client_opened && lizard_mode)
 		steam_set_lizard_mode(steam, true);
 	mutex_unlock(&steam->mutex);
···
 	return 0;
 }
 
-static int steam_register(struct steam_device *steam)
+static int steam_input_register(struct steam_device *steam)
 {
 	struct hid_device *hdev = steam->hdev;
 	struct input_dev *input;
···
 		dbg_hid("%s: already connected\n", __func__);
 		return 0;
 	}
-
-	/*
-	 * Unlikely, but getting the serial could fail, and it is not so
-	 * important, so make up a serial number and go on.
-	 */
-	if (steam_get_serial(steam) < 0)
-		strlcpy(steam->serial_no, "XXXXXXXXXX",
-			sizeof(steam->serial_no));
-
-	hid_info(hdev, "Steam Controller '%s' connected",
-		 steam->serial_no);
 
 	input = input_allocate_device();
 	if (!input)
···
 		goto input_register_fail;
 
 	rcu_assign_pointer(steam->input, input);
-
-	/* ignore battery errors, we can live without it */
-	if (steam->quirks & STEAM_QUIRK_WIRELESS)
-		steam_battery_register(steam);
-
 	return 0;
 
 input_register_fail:
···
 	return ret;
 }
 
-static void steam_unregister(struct steam_device *steam)
+static void steam_input_unregister(struct steam_device *steam)
 {
 	struct input_dev *input;
+	rcu_read_lock();
+	input = rcu_dereference(steam->input);
+	rcu_read_unlock();
+	if (!input)
+		return;
+	RCU_INIT_POINTER(steam->input, NULL);
+	synchronize_rcu();
+	input_unregister_device(input);
+}
+
+static void steam_battery_unregister(struct steam_device *steam)
+{
 	struct power_supply *battery;
 
 	rcu_read_lock();
-	input = rcu_dereference(steam->input);
 	battery = rcu_dereference(steam->battery);
 	rcu_read_unlock();
 
-	if (battery) {
-		RCU_INIT_POINTER(steam->battery, NULL);
-		synchronize_rcu();
-		power_supply_unregister(battery);
+	if (!battery)
+		return;
+	RCU_INIT_POINTER(steam->battery, NULL);
+	synchronize_rcu();
+	power_supply_unregister(battery);
+}
+
+static int steam_register(struct steam_device *steam)
+{
+	int ret;
+
+	/*
+	 * This function can be called several times in a row with the
+	 * wireless adaptor, without steam_unregister() between them, because
+	 * another client send a get_connection_status command, for example.
+	 * The battery and serial number are set just once per device.
+	 */
+	if (!steam->serial_no[0]) {
+		/*
+		 * Unlikely, but getting the serial could fail, and it is not so
+		 * important, so make up a serial number and go on.
+		 */
+		if (steam_get_serial(steam) < 0)
+			strlcpy(steam->serial_no, "XXXXXXXXXX",
+				sizeof(steam->serial_no));
+
+		hid_info(steam->hdev, "Steam Controller '%s' connected",
+			 steam->serial_no);
+
+		/* ignore battery errors, we can live without it */
+		if (steam->quirks & STEAM_QUIRK_WIRELESS)
+			steam_battery_register(steam);
+
+		mutex_lock(&steam_devices_lock);
+		list_add(&steam->list, &steam_devices);
+		mutex_unlock(&steam_devices_lock);
 	}
-	if (input) {
-		RCU_INIT_POINTER(steam->input, NULL);
-		synchronize_rcu();
+
+	mutex_lock(&steam->mutex);
+	if (!steam->client_opened) {
+		steam_set_lizard_mode(steam, lizard_mode);
+		ret = steam_input_register(steam);
+	} else {
+		ret = 0;
+	}
+	mutex_unlock(&steam->mutex);
+
+	return ret;
+}
+
+static void steam_unregister(struct steam_device *steam)
+{
+	steam_battery_unregister(steam);
+	steam_input_unregister(steam);
+	if (steam->serial_no[0]) {
 		hid_info(steam->hdev, "Steam Controller '%s' disconnected",
 			 steam->serial_no);
-		input_unregister_device(input);
+		mutex_lock(&steam_devices_lock);
+		list_del(&steam->list);
+		mutex_unlock(&steam_devices_lock);
+		steam->serial_no[0] = 0;
 	}
 }
···
 	mutex_lock(&steam->mutex);
 	steam->client_opened = true;
 	mutex_unlock(&steam->mutex);
+
+	steam_input_unregister(steam);
+
 	return ret;
 }
···
 
 	mutex_lock(&steam->mutex);
 	steam->client_opened = false;
-	if (steam->input_opened)
-		steam_set_lizard_mode(steam, false);
-	else
-		steam_set_lizard_mode(steam, lizard_mode);
 	mutex_unlock(&steam->mutex);
 
 	hid_hw_close(steam->hdev);
+	if (steam->connected) {
+		steam_set_lizard_mode(steam, lizard_mode);
+		steam_input_register(steam);
+	}
 }
 
 static int steam_client_ll_raw_request(struct hid_device *hdev,
···
 		}
 	}
 
-	mutex_lock(&steam_devices_lock);
-	steam_update_lizard_mode(steam);
-	list_add(&steam->list, &steam_devices);
-	mutex_unlock(&steam_devices_lock);
-
 	return 0;
 
 hid_hw_open_fail:
···
 		return;
 	}
 
-	mutex_lock(&steam_devices_lock);
-	list_del(&steam->list);
-	mutex_unlock(&steam_devices_lock);
-
 	hid_destroy_device(steam->client_hdev);
 	steam->client_opened = false;
 	cancel_work_sync(&steam->work_connect);
···
 static void steam_do_connect_event(struct steam_device *steam, bool connected)
 {
 	unsigned long flags;
+	bool changed;
 
 	spin_lock_irqsave(&steam->lock, flags);
+	changed = steam->connected != connected;
 	steam->connected = connected;
 	spin_unlock_irqrestore(&steam->lock, flags);
 
-	if (schedule_work(&steam->work_connect) == 0)
+	if (changed && schedule_work(&steam->work_connect) == 0)
 		dbg_hid("%s: connected=%d event already queued\n",
 			__func__, connected);
 }
···
 			return 0;
 		rcu_read_lock();
 		input = rcu_dereference(steam->input);
-		if (likely(input)) {
+		if (likely(input))
 			steam_do_input_event(steam, input, data);
-		} else {
-			dbg_hid("%s: input data without connect event\n",
-				__func__);
-			steam_do_connect_event(steam, true);
-		}
 		rcu_read_unlock();
 		break;
 	case STEAM_EV_CONNECT:
···
 
 	mutex_lock(&steam_devices_lock);
 	list_for_each_entry(steam, &steam_devices, list) {
-		steam_update_lizard_mode(steam);
+		mutex_lock(&steam->mutex);
+		if (!steam->client_opened)
+			steam_set_lizard_mode(steam, lizard_mode);
+		mutex_unlock(&steam->mutex);
 	}
 	mutex_unlock(&steam_devices_lock);
 	return 0;
drivers/hid/uhid.c
···
 #include <linux/atomic.h>
 #include <linux/compat.h>
+#include <linux/cred.h>
 #include <linux/device.h>
 #include <linux/fs.h>
 #include <linux/hid.h>
···
 		goto err_free;
 	}
 
-	len = min(sizeof(hid->name), sizeof(ev->u.create2.name));
-	strlcpy(hid->name, ev->u.create2.name, len);
-	len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys));
-	strlcpy(hid->phys, ev->u.create2.phys, len);
-	len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq));
-	strlcpy(hid->uniq, ev->u.create2.uniq, len);
+	/* @hid is zero-initialized, strncpy() is correct, strlcpy() not */
+	len = min(sizeof(hid->name), sizeof(ev->u.create2.name)) - 1;
+	strncpy(hid->name, ev->u.create2.name, len);
+	len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys)) - 1;
+	strncpy(hid->phys, ev->u.create2.phys, len);
+	len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq)) - 1;
+	strncpy(hid->uniq, ev->u.create2.uniq, len);
 
 	hid->ll_driver = &uhid_hid_driver;
 	hid->bus = ev->u.create2.bus;
···
 
 	switch (uhid->input_buf.type) {
 	case UHID_CREATE:
+		/*
+		 * 'struct uhid_create_req' contains a __user pointer which is
+		 * copied from, so it's unsafe to allow this with elevated
+		 * privileges (e.g. from a setuid binary) or via kernel_write().
+		 */
+		if (file->f_cred != current_cred() || uaccess_kernel()) {
+			pr_err_once("UHID_CREATE from different security context by process %d (%s), this is not allowed.\n",
+				    task_tgid_vnr(current), current->comm);
+			ret = -EACCES;
+			goto unlock;
+		}
+
 		ret = uhid_dev_create(uhid, &uhid->input_buf);
 		break;
 	case UHID_CREATE2:
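The strlcpy() to strncpy() switch above matters because the userspace-supplied source buffers are not guaranteed to be NUL-terminated: strlcpy() calls strlen() on the source and can read past its end, while strncpy() reads at most `n` bytes. A standalone sketch of the safe pattern (buffer size and names here are illustrative, not the kernel's):

```c
#include <string.h>

/* Hypothetical small buffer; the kernel's name fields are larger. */
#define NAME_LEN 8

/* Copy an untrusted, possibly unterminated source into a zeroed
 * destination. Because dst is zero-filled first and we copy at most
 * NAME_LEN - 1 bytes, dst is always NUL-terminated, and strncpy()
 * never reads beyond NAME_LEN - 1 bytes of src.
 */
static void copy_untrusted(char dst[NAME_LEN], const char src[NAME_LEN])
{
	size_t len = NAME_LEN - 1;

	memset(dst, 0, NAME_LEN);	/* mirrors the zeroed struct */
	strncpy(dst, src, len);
}
```

Feeding it a fully populated source with no terminator still yields a valid, truncated C string.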
+3-3
drivers/hwmon/ina2xx.c
···
 		break;
 	case INA2XX_CURRENT:
 		/* signed register, result in mA */
-		val = regval * data->current_lsb_uA;
+		val = (s16)regval * data->current_lsb_uA;
 		val = DIV_ROUND_CLOSEST(val, 1000);
 		break;
 	case INA2XX_CALIBRATION:
···
 	}
 
 	data->groups[group++] = &ina2xx_group;
-	if (id->driver_data == ina226)
+	if (chip == ina226)
 		data->groups[group++] = &ina226_group;
 
 	hwmon_dev = devm_hwmon_device_register_with_groups(dev, client->name,
···
 		return PTR_ERR(hwmon_dev);
 
 	dev_info(dev, "power monitor %s (Rshunt = %li uOhm)\n",
-		 id->name, data->rshunt);
+		 client->name, data->rshunt);
 
 	return 0;
 }
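The `(s16)regval` cast in the first hunk is the whole fix: the current register is a signed 16-bit quantity, but it is read into a wider unsigned type, so without the cast a negative reading such as 0xFFFF is scaled as 65535 instead of -1. A minimal sketch of the two behaviours (function names are made up for illustration):

```c
#include <stdint.h>

/* Scale a raw current register value by the LSB, treating the register
 * as the signed 16-bit quantity it really is.
 */
static long current_ua(unsigned int regval, int current_lsb_uA)
{
	return (int16_t)regval * (long)current_lsb_uA;	/* sign-extends */
}

/* The pre-fix behaviour: the sign bit is lost and 0xFFFF scales as
 * 65535 units instead of -1.
 */
static long current_ua_buggy(unsigned int regval, int current_lsb_uA)
{
	return (long)regval * current_lsb_uA;
}
```

With a 1000 uA LSB, a register value of 0xFFFF is -1000 uA after the fix but about 65.5 A before it.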
drivers/net/ethernet/intel/i40e/i40e_xsk.c
···
 }
 
 /**
- * i40e_add_xsk_umem - Store an UMEM for a certain ring/qid
+ * i40e_add_xsk_umem - Store a UMEM for a certain ring/qid
  * @vsi: Current VSI
  * @umem: UMEM to store
  * @qid: Ring/qid to associate with the UMEM
···
 }
 
 /**
- * i40e_remove_xsk_umem - Remove an UMEM for a certain ring/qid
+ * i40e_remove_xsk_umem - Remove a UMEM for a certain ring/qid
  * @vsi: Current VSI
  * @qid: Ring/qid associated with the UMEM
  **/
···
 }
 
 /**
- * i40e_xsk_umem_enable - Enable/associate an UMEM to a certain ring/qid
+ * i40e_xsk_umem_enable - Enable/associate a UMEM to a certain ring/qid
  * @vsi: Current VSI
  * @umem: UMEM
  * @qid: Rx ring to associate UMEM to
···
 }
 
 /**
- * i40e_xsk_umem_disable - Diassociate an UMEM from a certain ring/qid
+ * i40e_xsk_umem_disable - Disassociate a UMEM from a certain ring/qid
  * @vsi: Current VSI
  * @qid: Rx ring to associate UMEM to
  *
···
 }
 
 /**
- * i40e_xsk_umem_query - Queries a certain ring/qid for its UMEM
+ * i40e_xsk_umem_setup - Enable/disassociate a UMEM to/from a ring/qid
  * @vsi: Current VSI
  * @umem: UMEM to enable/associate to a ring, or NULL to disable
  * @qid: Rx ring to (dis)associate UMEM (from)to
  *
- * This function enables or disables an UMEM to a certain ring.
+ * This function enables or disables a UMEM to a certain ring.
  *
  * Returns 0 on success, <0 on failure
  **/
···
  * @rx_ring: Rx ring
  * @xdp: xdp_buff used as input to the XDP program
  *
- * This function enables or disables an UMEM to a certain ring.
+ * This function enables or disables a UMEM to a certain ring.
  *
  * Returns any of I40E_XDP_{PASS, CONSUMED, TX, REDIR}
  **/
+1
drivers/net/ethernet/intel/igb/e1000_i210.c
···
 		nvm_word = E1000_INVM_DEFAULT_AL;
 	tmp_nvm = nvm_word | E1000_INVM_PLL_WO_VAL;
 	igb_write_phy_reg_82580(hw, I347AT4_PAGE_SELECT, E1000_PHY_PLL_FREQ_PAGE);
+	phy_word = E1000_PHY_PLL_UNCONF;
 	for (i = 0; i < E1000_MAX_PLL_TRIES; i++) {
 		/* check current state directly from internal PHY */
 		igb_read_phy_reg_82580(hw, E1000_PHY_PLL_FREQ_REG, &phy_word);
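The one added line seeds `phy_word` before the retry loop: if every `igb_read_phy_reg_82580()` attempt fails without writing through the out-pointer, the code after the loop would otherwise test an uninitialized value. A small sketch of the pattern, with all names and values invented for illustration:

```c
#include <stdint.h>

#define PLL_UNCONF 0xFFFF	/* hypothetical "unconfigured" sentinel */
#define MAX_TRIES  5

/* Simulate a register read that fails before touching *out. */
static int read_reg_failing(uint16_t *out)
{
	(void)out;
	return -1;
}

/* Poll until a configured value appears. Seeding `word` with the
 * sentinel (the added initialization) makes the all-reads-failed
 * path return a defined value instead of stack garbage.
 */
static uint16_t poll_pll(void)
{
	uint16_t word = PLL_UNCONF;
	int i;

	for (i = 0; i < MAX_TRIES; i++) {
		if (read_reg_failing(&word) == 0 && word != PLL_UNCONF)
			break;
	}
	return word;
}
```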
···
 	"no error",
 	"length error",
 	"function disabled",
-	"VF sent command to attnetion address",
+	"VF sent command to attention address",
 	"host sent prod update command",
 	"read of during interrupt register while in MIMD mode",
 	"access to PXP BAR reserved address",
drivers/net/phy/phy_device.c
···
 	new_driver->mdiodrv.driver.remove = phy_remove;
 	new_driver->mdiodrv.driver.owner = owner;
 
+	/* The following works around an issue where the PHY driver doesn't bind
+	 * to the device, resulting in the genphy driver being used instead of
+	 * the dedicated driver. The root cause of the issue isn't known yet
+	 * and seems to be in the base driver core. Once this is fixed we may
+	 * remove this workaround.
+	 */
+	new_driver->mdiodrv.driver.probe_type = PROBE_FORCE_SYNCHRONOUS;
+
 	retval = driver_register(&new_driver->mdiodrv.driver);
 	if (retval) {
 		pr_err("%s: Error %d in registering driver\n",
+1-1
drivers/net/rionet.c
···
 		 * it just report sending a packet to the target
 		 * (without actual packet transfer).
 		 */
-		dev_kfree_skb_any(skb);
 		ndev->stats.tx_packets++;
 		ndev->stats.tx_bytes += skb->len;
+		dev_kfree_skb_any(skb);
 	}
 }
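The reorder above fixes a use-after-free: `skb->len` was read for the byte counter after `dev_kfree_skb_any()` had already freed the skb. A minimal userspace sketch of the safe ordering, with stand-in types invented for the illustration:

```c
#include <stdlib.h>

/* Stand-ins for struct sk_buff and net_device stats. */
struct fake_skb { unsigned int len; };
struct fake_stats { unsigned long tx_packets, tx_bytes; };

static void fake_kfree_skb(struct fake_skb *skb)
{
	free(skb);
}

/* Update statistics from the skb *before* freeing it: touching
 * skb->len after the free is a use-after-free.
 */
static void account_and_free(struct fake_stats *stats, struct fake_skb *skb)
{
	stats->tx_packets++;
	stats->tx_bytes += skb->len;	/* read while skb is still valid */
	fake_kfree_skb(skb);		/* free only after the last access */
}
```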
drivers/spi/spi-omap2-mcspi.c
···
 /* work with hotplug and coldplug */
 MODULE_ALIAS("platform:omap2_mcspi");
 
-#ifdef CONFIG_SUSPEND
-static int omap2_mcspi_suspend_noirq(struct device *dev)
+static int __maybe_unused omap2_mcspi_suspend(struct device *dev)
 {
-	return pinctrl_pm_select_sleep_state(dev);
+	struct spi_master *master = dev_get_drvdata(dev);
+	struct omap2_mcspi *mcspi = spi_master_get_devdata(master);
+	int error;
+
+	error = pinctrl_pm_select_sleep_state(dev);
+	if (error)
+		dev_warn(mcspi->dev, "%s: failed to set pins: %i\n",
+			 __func__, error);
+
+	error = spi_master_suspend(master);
+	if (error)
+		dev_warn(mcspi->dev, "%s: master suspend failed: %i\n",
+			 __func__, error);
+
+	return pm_runtime_force_suspend(dev);
 }
 
-static int omap2_mcspi_resume_noirq(struct device *dev)
+static int __maybe_unused omap2_mcspi_resume(struct device *dev)
 {
 	struct spi_master *master = dev_get_drvdata(dev);
 	struct omap2_mcspi *mcspi = spi_master_get_devdata(master);
···
 		dev_warn(mcspi->dev, "%s: failed to set pins: %i\n",
 			 __func__, error);
 
-	return 0;
+	error = spi_master_resume(master);
+	if (error)
+		dev_warn(mcspi->dev, "%s: master resume failed: %i\n",
+			 __func__, error);
+
+	return pm_runtime_force_resume(dev);
 }
 
-#else
-#define omap2_mcspi_suspend_noirq	NULL
-#define omap2_mcspi_resume_noirq	NULL
-#endif
-
 static const struct dev_pm_ops omap2_mcspi_pm_ops = {
-	.suspend_noirq = omap2_mcspi_suspend_noirq,
-	.resume_noirq = omap2_mcspi_resume_noirq,
+	SET_SYSTEM_SLEEP_PM_OPS(omap2_mcspi_suspend,
+				omap2_mcspi_resume)
 	.runtime_resume = omap_mcspi_runtime_resume,
 };
+1-10
fs/btrfs/disk-io.c
···
 	int mirror_num = 0;
 	int failed_mirror = 0;
 
-	clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags);
 	io_tree = &BTRFS_I(fs_info->btree_inode)->io_tree;
 	while (1) {
+		clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags);
 		ret = read_extent_buffer_pages(io_tree, eb, WAIT_COMPLETE,
 					       mirror_num);
 		if (!ret) {
···
 			else
 				break;
 		}
-
-		/*
-		 * This buffer's crc is fine, but its contents are corrupted, so
-		 * there is no reason to read the other copies, they won't be
-		 * any less wrong.
-		 */
-		if (test_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags) ||
-		    ret == -EUCLEAN)
-			break;
 
 		num_copies = btrfs_num_copies(fs_info,
 					      eb->start, eb->len);
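Moving `clear_bit()` inside the loop means the corruption flag recorded for one mirror no longer taints the attempt on the next mirror. A small sketch of that per-iteration-reset pattern, with all names invented for the illustration (the checker, like the kernel's, only ever sets the flag):

```c
#include <stdbool.h>

#define NUM_MIRRORS 3

/* Pretend mirror 0 is corrupt and the others are fine. The checker
 * only *sets* the flag on failure; it never clears it, just like the
 * EXTENT_BUFFER_CORRUPT bit.
 */
static bool read_mirror(int mirror, bool *corrupt)
{
	if (mirror == 0)
		*corrupt = true;
	return true;	/* the I/O itself succeeds */
}

/* Return the first non-corrupt mirror, or -1. Resetting the flag at
 * the top of each iteration (as in the fix) is what lets mirror 1
 * succeed after mirror 0 was flagged.
 */
static int find_good_mirror(void)
{
	bool corrupt;
	int mirror;

	for (mirror = 0; mirror < NUM_MIRRORS; mirror++) {
		corrupt = false;	/* reset per attempt */
		if (read_mirror(mirror, &corrupt) && !corrupt)
			return mirror;
	}
	return -1;
}
```

Without the per-iteration reset, the sticky flag from mirror 0 would make every later mirror look corrupt too.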
fs/btrfs/file.c (+24)
···
         atomic_inc(&root->log_batch);
 
         /*
+         * Before we acquired the inode's lock, someone may have dirtied more
+         * pages in the target range. We need to make sure that writeback for
+         * any such pages does not start while we are logging the inode, because
+         * if it does, any of the following might happen when we are not doing a
+         * full inode sync:
+         *
+         * 1) We log an extent after its writeback finishes but before its
+         *    checksums are added to the csum tree, leading to -EIO errors
+         *    when attempting to read the extent after a log replay.
+         *
+         * 2) We can end up logging an extent before its writeback finishes.
+         *    Therefore after the log replay we will have a file extent item
+         *    pointing to an unwritten extent (and no data checksums as well).
+         *
+         * So trigger writeback for any eventual new dirty pages and then we
+         * wait for all ordered extents to complete below.
+         */
+        ret = start_ordered_ops(inode, start, end);
+        if (ret) {
+                inode_unlock(inode);
+                goto out;
+        }
+
+        /*
          * We have to do this here to avoid the priority inversion of waiting on
          * IO of a lower priority task while holding a transaciton open.
          */
···
                         task))
                 return;
 
-        if (ff_layout_read_prepare_common(task, hdr))
-                return;
-
-        if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context,
-                        hdr->args.lock_context, FMODE_READ) == -EIO)
-                rpc_exit(task, -EIO); /* lost lock, terminate I/O */
+        ff_layout_read_prepare_common(task, hdr);
 }
 
 static void ff_layout_read_call_done(struct rpc_task *task, void *data)
···
                         task))
                 return;
 
-        if (ff_layout_write_prepare_common(task, hdr))
-                return;
-
-        if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context,
-                        hdr->args.lock_context, FMODE_WRITE) == -EIO)
-                rpc_exit(task, -EIO); /* lost lock, terminate I/O */
+        ff_layout_write_prepare_common(task, hdr);
 }
 
 static void ff_layout_write_call_done(struct rpc_task *task, void *data)
···
         fh = nfs4_ff_layout_select_ds_fh(lseg, idx);
         if (fh)
                 hdr->args.fh = fh;
+
+        if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid))
+                goto out_failed;
+
         /*
          * Note that if we ever decide to split across DSes,
          * then we may need to handle dense-like offsets.
···
         fh = nfs4_ff_layout_select_ds_fh(lseg, idx);
         if (fh)
                 hdr->args.fh = fh;
+
+        if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid))
+                goto out_failed;
 
         /*
          * Note that if we ever decide to split across DSes,
···
         struct task_struct *task;
         char buf[INET6_ADDRSTRLEN + sizeof("-manager") + 1];
 
+        set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state);
         if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0)
                 return;
         __module_get(THIS_MODULE);
···
 
         /* Ensure exclusive access to NFSv4 state */
         do {
+                clear_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state);
                 if (test_bit(NFS4CLNT_PURGE_STATE, &clp->cl_state)) {
                         section = "purge state";
                         status = nfs4_purge_lease(clp);
···
                 }
 
                 nfs4_end_drain_session(clp);
-                if (test_and_clear_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state)) {
-                        nfs_client_return_marked_delegations(clp);
-                        continue;
+                nfs4_clear_state_manager_bit(clp);
+
+                if (!test_and_set_bit(NFS4CLNT_DELEGRETURN_RUNNING, &clp->cl_state)) {
+                        if (test_and_clear_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state)) {
+                                nfs_client_return_marked_delegations(clp);
+                                set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state);
+                        }
+                        clear_bit(NFS4CLNT_DELEGRETURN_RUNNING, &clp->cl_state);
                 }
 
-                nfs4_clear_state_manager_bit(clp);
                 /* Did we race with an attempt to give us more work? */
-                if (clp->cl_state == 0)
+                if (!test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state))
                         return;
                 if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0)
                         return;
···
 int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size,
                 int interrupt);
 
-
-/**
- * struct hid_scroll_counter - Utility class for processing high-resolution
- *                             scroll events.
- * @dev: the input device for which events should be reported.
- * @microns_per_hi_res_unit: the amount moved by the user's finger for each
- *                           high-resolution unit reported by the mouse, in
- *                           microns.
- * @resolution_multiplier: the wheel's resolution in high-resolution mode as a
- *                         multiple of its lower resolution. For example, if
- *                         moving the wheel by one "notch" would result in a
- *                         value of 1 in low-resolution mode but 8 in
- *                         high-resolution, the multiplier is 8.
- * @remainder: counts the number of high-resolution units moved since the last
- *             low-resolution event (REL_WHEEL or REL_HWHEEL) was sent. Should
- *             only be used by class methods.
- */
-struct hid_scroll_counter {
-        struct input_dev *dev;
-        int microns_per_hi_res_unit;
-        int resolution_multiplier;
-
-        int remainder;
-};
-
-void hid_scroll_counter_handle_scroll(struct hid_scroll_counter *counter,
-                                      int hi_res_value);
-
 /* HID quirks API */
 unsigned long hid_lookup_quirk(const struct hid_device *hdev);
 int hid_quirks_init(char **quirks_param, __u16 bus, int count);
···
 void xa_init_flags(struct xarray *, gfp_t flags);
 void *xa_load(struct xarray *, unsigned long index);
 void *xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
-void *xa_cmpxchg(struct xarray *, unsigned long index,
-                        void *old, void *entry, gfp_t);
-int xa_reserve(struct xarray *, unsigned long index, gfp_t);
+void *xa_erase(struct xarray *, unsigned long index);
 void *xa_store_range(struct xarray *, unsigned long first, unsigned long last,
                         void *entry, gfp_t);
 bool xa_get_mark(struct xarray *, unsigned long index, xa_mark_t);
···
 static inline bool xa_marked(const struct xarray *xa, xa_mark_t mark)
 {
         return xa->xa_flags & XA_FLAGS_MARK(mark);
-}
-
-/**
- * xa_erase() - Erase this entry from the XArray.
- * @xa: XArray.
- * @index: Index of entry.
- *
- * This function is the equivalent of calling xa_store() with %NULL as
- * the third argument. The XArray does not need to allocate memory, so
- * the user does not need to provide GFP flags.
- *
- * Context: Process context. Takes and releases the xa_lock.
- * Return: The entry which used to be at this index.
- */
-static inline void *xa_erase(struct xarray *xa, unsigned long index)
-{
-        return xa_store(xa, index, NULL, 0);
-}
-
-/**
- * xa_insert() - Store this entry in the XArray unless another entry is
- * already present.
- * @xa: XArray.
- * @index: Index into array.
- * @entry: New entry.
- * @gfp: Memory allocation flags.
- *
- * If you would rather see the existing entry in the array, use xa_cmpxchg().
- * This function is for users who don't care what the entry is, only that
- * one is present.
- *
- * Context: Process context. Takes and releases the xa_lock.
- * May sleep if the @gfp flags permit.
- * Return: 0 if the store succeeded. -EEXIST if another entry was present.
- * -ENOMEM if memory could not be allocated.
- */
-static inline int xa_insert(struct xarray *xa, unsigned long index,
-                void *entry, gfp_t gfp)
-{
-        void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp);
-        if (!curr)
-                return 0;
-        if (xa_is_err(curr))
-                return xa_err(curr);
-        return -EEXIST;
-}
-
-/**
- * xa_release() - Release a reserved entry.
- * @xa: XArray.
- * @index: Index of entry.
- *
- * After calling xa_reserve(), you can call this function to release the
- * reservation. If the entry at @index has been stored to, this function
- * will do nothing.
- */
-static inline void xa_release(struct xarray *xa, unsigned long index)
-{
-        xa_cmpxchg(xa, index, NULL, NULL, 0);
 }
 
 /**
···
 void *__xa_cmpxchg(struct xarray *, unsigned long index, void *old,
                 void *entry, gfp_t);
 int __xa_alloc(struct xarray *, u32 *id, u32 max, void *entry, gfp_t);
+int __xa_reserve(struct xarray *, unsigned long index, gfp_t);
 void __xa_set_mark(struct xarray *, unsigned long index, xa_mark_t);
 void __xa_clear_mark(struct xarray *, unsigned long index, xa_mark_t);
···
 }
 
 /**
+ * xa_store_bh() - Store this entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * This function is like calling xa_store() except it disables softirqs
+ * while holding the array lock.
+ *
+ * Context: Any context. Takes and releases the xa_lock while
+ * disabling softirqs.
+ * Return: The entry which used to be at this index.
+ */
+static inline void *xa_store_bh(struct xarray *xa, unsigned long index,
+                void *entry, gfp_t gfp)
+{
+        void *curr;
+
+        xa_lock_bh(xa);
+        curr = __xa_store(xa, index, entry, gfp);
+        xa_unlock_bh(xa);
+
+        return curr;
+}
+
+/**
+ * xa_store_irq() - Erase this entry from the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * This function is like calling xa_store() except it disables interrupts
+ * while holding the array lock.
+ *
+ * Context: Process context. Takes and releases the xa_lock while
+ * disabling interrupts.
+ * Return: The entry which used to be at this index.
+ */
+static inline void *xa_store_irq(struct xarray *xa, unsigned long index,
+                void *entry, gfp_t gfp)
+{
+        void *curr;
+
+        xa_lock_irq(xa);
+        curr = __xa_store(xa, index, entry, gfp);
+        xa_unlock_irq(xa);
+
+        return curr;
+}
+
+/**
  * xa_erase_bh() - Erase this entry from the XArray.
  * @xa: XArray.
  * @index: Index of entry.
···
  * the third argument. The XArray does not need to allocate memory, so
  * the user does not need to provide GFP flags.
  *
- * Context: Process context. Takes and releases the xa_lock while
+ * Context: Any context. Takes and releases the xa_lock while
  * disabling softirqs.
  * Return: The entry which used to be at this index.
  */
···
         xa_unlock_irq(xa);
 
         return entry;
+}
+
+/**
+ * xa_cmpxchg() - Conditionally replace an entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @old: Old value to test against.
+ * @entry: New value to place in array.
+ * @gfp: Memory allocation flags.
+ *
+ * If the entry at @index is the same as @old, replace it with @entry.
+ * If the return value is equal to @old, then the exchange was successful.
+ *
+ * Context: Any context. Takes and releases the xa_lock. May sleep
+ * if the @gfp flags permit.
+ * Return: The old value at this index or xa_err() if an error happened.
+ */
+static inline void *xa_cmpxchg(struct xarray *xa, unsigned long index,
+                        void *old, void *entry, gfp_t gfp)
+{
+        void *curr;
+
+        xa_lock(xa);
+        curr = __xa_cmpxchg(xa, index, old, entry, gfp);
+        xa_unlock(xa);
+
+        return curr;
+}
+
+/**
+ * xa_insert() - Store this entry in the XArray unless another entry is
+ * already present.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * If you would rather see the existing entry in the array, use xa_cmpxchg().
+ * This function is for users who don't care what the entry is, only that
+ * one is present.
+ *
+ * Context: Process context. Takes and releases the xa_lock.
+ * May sleep if the @gfp flags permit.
+ * Return: 0 if the store succeeded. -EEXIST if another entry was present.
+ * -ENOMEM if memory could not be allocated.
+ */
+static inline int xa_insert(struct xarray *xa, unsigned long index,
+                void *entry, gfp_t gfp)
+{
+        void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp);
+        if (!curr)
+                return 0;
+        if (xa_is_err(curr))
+                return xa_err(curr);
+        return -EEXIST;
 }
 
 /**
···
  * Updates the @id pointer with the index, then stores the entry at that
  * index. A concurrent lookup will not see an uninitialised @id.
  *
- * Context: Process context. Takes and releases the xa_lock while
+ * Context: Any context. Takes and releases the xa_lock while
  * disabling softirqs. May sleep if the @gfp flags permit.
  * Return: 0 on success, -ENOMEM if memory allocation fails or -ENOSPC if
  * there is no more space in the XArray.
···
         xa_unlock_irq(xa);
 
         return err;
+}
+
+/**
+ * xa_reserve() - Reserve this index in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @gfp: Memory allocation flags.
+ *
+ * Ensures there is somewhere to store an entry at @index in the array.
+ * If there is already something stored at @index, this function does
+ * nothing. If there was nothing there, the entry is marked as reserved.
+ * Loading from a reserved entry returns a %NULL pointer.
+ *
+ * If you do not use the entry that you have reserved, call xa_release()
+ * or xa_erase() to free any unnecessary memory.
+ *
+ * Context: Any context. Takes and releases the xa_lock.
+ * May sleep if the @gfp flags permit.
+ * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
+ */
+static inline
+int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
+{
+        int ret;
+
+        xa_lock(xa);
+        ret = __xa_reserve(xa, index, gfp);
+        xa_unlock(xa);
+
+        return ret;
+}
+
+/**
+ * xa_reserve_bh() - Reserve this index in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @gfp: Memory allocation flags.
+ *
+ * A softirq-disabling version of xa_reserve().
+ *
+ * Context: Any context. Takes and releases the xa_lock while
+ * disabling softirqs.
+ * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
+ */
+static inline
+int xa_reserve_bh(struct xarray *xa, unsigned long index, gfp_t gfp)
+{
+        int ret;
+
+        xa_lock_bh(xa);
+        ret = __xa_reserve(xa, index, gfp);
+        xa_unlock_bh(xa);
+
+        return ret;
+}
+
+/**
+ * xa_reserve_irq() - Reserve this index in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @gfp: Memory allocation flags.
+ *
+ * An interrupt-disabling version of xa_reserve().
+ *
+ * Context: Process context. Takes and releases the xa_lock while
+ * disabling interrupts.
+ * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
+ */
+static inline
+int xa_reserve_irq(struct xarray *xa, unsigned long index, gfp_t gfp)
+{
+        int ret;
+
+        xa_lock_irq(xa);
+        ret = __xa_reserve(xa, index, gfp);
+        xa_unlock_irq(xa);
+
+        return ret;
+}
+
+/**
+ * xa_release() - Release a reserved entry.
+ * @xa: XArray.
+ * @index: Index of entry.
+ *
+ * After calling xa_reserve(), you can call this function to release the
+ * reservation. If the entry at @index has been stored to, this function
+ * will do nothing.
+ */
+static inline void xa_release(struct xarray *xa, unsigned long index)
+{
+        xa_cmpxchg(xa, index, NULL, NULL, 0);
 }
 
 /* Everything below here is the Advanced API. Proceed with caution. */
···
  * the situation described above.
  */
 #define REL_RESERVED            0x0a
-#define REL_WHEEL_HI_RES        0x0b
 #define REL_MAX                 0x0f
 #define REL_CNT                 (REL_MAX+1)
···
 #define ABS_VOLUME              0x20
 
 #define ABS_MISC                0x28
-
-/*
- * 0x2e is reserved and should not be used in input drivers.
- * It was used by HID as ABS_MISC+6 and userspace needs to detect if
- * the next ABS_* event is correct or is just ABS_MISC + n.
- * We define here ABS_RESERVED so userspace can rely on it and detect
- * the situation described above.
- */
-#define ABS_RESERVED            0x2e
 
 #define ABS_MT_SLOT             0x2f    /* MT slot being modified */
 #define ABS_MT_TOUCH_MAJOR      0x30    /* Major axis of touching ellipse */
kernel/bpf/core.c (+34)
···
         bpf_prog_unlock_free(fp);
 }
 
+int bpf_jit_get_func_addr(const struct bpf_prog *prog,
+                          const struct bpf_insn *insn, bool extra_pass,
+                          u64 *func_addr, bool *func_addr_fixed)
+{
+        s16 off = insn->off;
+        s32 imm = insn->imm;
+        u8 *addr;
+
+        *func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
+        if (!*func_addr_fixed) {
+                /* Place-holder address till the last pass has collected
+                 * all addresses for JITed subprograms in which case we
+                 * can pick them up from prog->aux.
+                 */
+                if (!extra_pass)
+                        addr = NULL;
+                else if (prog->aux->func &&
+                         off >= 0 && off < prog->aux->func_cnt)
+                        addr = (u8 *)prog->aux->func[off]->bpf_func;
+                else
+                        return -EINVAL;
+        } else {
+                /* Address of a BPF helper call. Since part of the core
+                 * kernel, it's always at a fixed location. __bpf_call_base
+                 * and the helper with imm relative to it are both in core
+                 * kernel.
+                 */
+                addr = (u8 *)__bpf_call_base + imm;
+        }
+
+        *func_addr = (unsigned long)addr;
+        return 0;
+}
+
 static int bpf_jit_blind_insn(const struct bpf_insn *from,
                               const struct bpf_insn *aux,
                               struct bpf_insn *to_buff)
···
                 return;
         /* NOTE: fake 'exit' subprog should be updated as well. */
         for (i = 0; i <= env->subprog_cnt; i++) {
-                if (env->subprog_info[i].start < off)
+                if (env->subprog_info[i].start <= off)
                         continue;
                 env->subprog_info[i].start += len - 1;
         }
···
                         i++;
                 } else if (fmt[i] == 'p' || fmt[i] == 's') {
                         mod[fmt_cnt]++;
-                        i++;
-                        if (!isspace(fmt[i]) && !ispunct(fmt[i]) && fmt[i] != 0)
+                        /* disallow any further format extensions */
+                        if (fmt[i + 1] != 0 &&
+                            !isspace(fmt[i + 1]) &&
+                            !ispunct(fmt[i + 1]))
                                 return -EINVAL;
                         fmt_cnt++;
-                        if (fmt[i - 1] == 's') {
+                        if (fmt[i] == 's') {
                                 if (str_seen)
                                         /* allow only one '%s' per fmt string */
                                         return -EINVAL;
lib/test_xarray.c (+47, -3)
···
                 XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_2));
 
                 /* We should see two elements in the array */
+                rcu_read_lock();
                 xas_for_each(&xas, entry, ULONG_MAX)
                         seen++;
+                rcu_read_unlock();
                 XA_BUG_ON(xa, seen != 2);
 
                 /* One of which is marked */
                 xas_set(&xas, 0);
                 seen = 0;
+                rcu_read_lock();
                 xas_for_each_marked(&xas, entry, ULONG_MAX, XA_MARK_0)
                         seen++;
+                rcu_read_unlock();
                 XA_BUG_ON(xa, seen != 1);
         }
         XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_0));
···
         xa_erase_index(xa, 12345678);
         XA_BUG_ON(xa, !xa_empty(xa));
 
+        /* And so does xa_insert */
+        xa_reserve(xa, 12345678, GFP_KERNEL);
+        XA_BUG_ON(xa, xa_insert(xa, 12345678, xa_mk_value(12345678), 0) != 0);
+        xa_erase_index(xa, 12345678);
+        XA_BUG_ON(xa, !xa_empty(xa));
+
         /* Can iterate through a reserved entry */
         xa_store_index(xa, 5, GFP_KERNEL);
         xa_reserve(xa, 6, GFP_KERNEL);
···
         XA_BUG_ON(xa, xa_load(xa, max) != NULL);
         XA_BUG_ON(xa, xa_load(xa, min - 1) != NULL);
 
+        xas_lock(&xas);
         XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(min)) != xa_mk_value(index));
+        xas_unlock(&xas);
         XA_BUG_ON(xa, xa_load(xa, min) != xa_mk_value(min));
         XA_BUG_ON(xa, xa_load(xa, max - 1) != xa_mk_value(min));
         XA_BUG_ON(xa, xa_load(xa, max) != NULL);
···
         XA_STATE(xas, xa, index);
         xa_store_order(xa, index, order, xa_mk_value(0), GFP_KERNEL);
 
+        xas_lock(&xas);
         XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(1)) != xa_mk_value(0));
         XA_BUG_ON(xa, xas.xa_index != index);
         XA_BUG_ON(xa, xas_store(&xas, NULL) != xa_mk_value(1));
+        xas_unlock(&xas);
         XA_BUG_ON(xa, !xa_empty(xa));
 }
 #endif
···
         rcu_read_unlock();
 
         /* We can erase multiple values with a single store */
-        xa_store_order(xa, 0, 63, NULL, GFP_KERNEL);
+        xa_store_order(xa, 0, BITS_PER_LONG - 1, NULL, GFP_KERNEL);
         XA_BUG_ON(xa, !xa_empty(xa));
 
         /* Even when the first slot is empty but the others aren't */
···
         }
 }
 
-static noinline void check_find(struct xarray *xa)
+static noinline void check_find_1(struct xarray *xa)
 {
         unsigned long i, j, k;
 
···
                 XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_0));
         }
         XA_BUG_ON(xa, !xa_empty(xa));
+}
+
+static noinline void check_find_2(struct xarray *xa)
+{
+        void *entry;
+        unsigned long i, j, index = 0;
+
+        xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) {
+                XA_BUG_ON(xa, true);
+        }
+
+        for (i = 0; i < 1024; i++) {
+                xa_store_index(xa, index, GFP_KERNEL);
+                j = 0;
+                index = 0;
+                xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) {
+                        XA_BUG_ON(xa, xa_mk_value(index) != entry);
+                        XA_BUG_ON(xa, index != j++);
+                }
+        }
+
+        xa_destroy(xa);
+}
+
+static noinline void check_find(struct xarray *xa)
+{
+        check_find_1(xa);
+        check_find_2(xa);
         check_multi_find(xa);
         check_multi_find_2(xa);
 }
···
                         __check_store_range(xa, 4095 + i, 4095 + j);
                         __check_store_range(xa, 4096 + i, 4096 + j);
                         __check_store_range(xa, 123456 + i, 123456 + j);
-                        __check_store_range(xa, UINT_MAX + i, UINT_MAX + j);
+                        __check_store_range(xa, (1 << 24) + i, (1 << 24) + j);
                 }
         }
 }
···
         XA_STATE(xas, xa, 1 << order);
 
         xa_store_order(xa, 0, order, xa, GFP_KERNEL);
+        rcu_read_lock();
         xas_load(&xas);
         XA_BUG_ON(xa, xas.xa_node->count == 0);
         XA_BUG_ON(xa, xas.xa_node->count > (1 << order));
         XA_BUG_ON(xa, xas.xa_node->nr_values != 0);
+        rcu_read_unlock();
 
         xa_store_order(xa, 1 << order, order, xa_mk_value(1 << order),
                         GFP_KERNEL);
lib/xarray.c (+60, -79)
···
  * (see the xa_cmpxchg() implementation for an example).
  *
  * Return: If the slot already existed, returns the contents of this slot.
- * If the slot was newly created, returns NULL. If it failed to create the
- * slot, returns NULL and indicates the error in @xas.
+ * If the slot was newly created, returns %NULL. If it failed to create the
+ * slot, returns %NULL and indicates the error in @xas.
  */
 static void *xas_create(struct xa_state *xas)
 {
···
         XA_STATE(xas, xa, index);
         return xas_result(&xas, xas_store(&xas, NULL));
 }
-EXPORT_SYMBOL_GPL(__xa_erase);
+EXPORT_SYMBOL(__xa_erase);
 
 /**
- * xa_store() - Store this entry in the XArray.
+ * xa_erase() - Erase this entry from the XArray.
  * @xa: XArray.
- * @index: Index into array.
- * @entry: New entry.
- * @gfp: Memory allocation flags.
+ * @index: Index of entry.
  *
- * After this function returns, loads from this index will return @entry.
- * Storing into an existing multislot entry updates the entry of every index.
- * The marks associated with @index are unaffected unless @entry is %NULL.
+ * This function is the equivalent of calling xa_store() with %NULL as
+ * the third argument. The XArray does not need to allocate memory, so
+ * the user does not need to provide GFP flags.
  *
- * Context: Process context. Takes and releases the xa_lock. May sleep
- * if the @gfp flags permit.
- * Return: The old entry at this index on success, xa_err(-EINVAL) if @entry
- * cannot be stored in an XArray, or xa_err(-ENOMEM) if memory allocation
- * failed.
+ * Context: Any context. Takes and releases the xa_lock.
+ * Return: The entry which used to be at this index.
  */
-void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
+void *xa_erase(struct xarray *xa, unsigned long index)
 {
-        XA_STATE(xas, xa, index);
-        void *curr;
+        void *entry;
 
-        if (WARN_ON_ONCE(xa_is_internal(entry)))
-                return XA_ERROR(-EINVAL);
+        xa_lock(xa);
+        entry = __xa_erase(xa, index);
+        xa_unlock(xa);
 
-        do {
-                xas_lock(&xas);
-                curr = xas_store(&xas, entry);
-                if (xa_track_free(xa) && entry)
-                        xas_clear_mark(&xas, XA_FREE_MARK);
-                xas_unlock(&xas);
-        } while (xas_nomem(&xas, gfp));
-
-        return xas_result(&xas, curr);
+        return entry;
 }
-EXPORT_SYMBOL(xa_store);
+EXPORT_SYMBOL(xa_erase);
 
 /**
  * __xa_store() - Store this entry in the XArray.
···
 
         if (WARN_ON_ONCE(xa_is_internal(entry)))
                 return XA_ERROR(-EINVAL);
+        if (xa_track_free(xa) && !entry)
+                entry = XA_ZERO_ENTRY;
 
         do {
                 curr = xas_store(&xas, entry);
-                if (xa_track_free(xa) && entry)
+                if (xa_track_free(xa))
                         xas_clear_mark(&xas, XA_FREE_MARK);
         } while (__xas_nomem(&xas, gfp));
 
···
 EXPORT_SYMBOL(__xa_store);
 
 /**
- * xa_cmpxchg() - Conditionally replace an entry in the XArray.
+ * xa_store() - Store this entry in the XArray.
  * @xa: XArray.
  * @index: Index into array.
- * @old: Old value to test against.
- * @entry: New value to place in array.
+ * @entry: New entry.
  * @gfp: Memory allocation flags.
  *
- * If the entry at @index is the same as @old, replace it with @entry.
- * If the return value is equal to @old, then the exchange was successful.
+ * After this function returns, loads from this index will return @entry.
+ * Storing into an existing multislot entry updates the entry of every index.
+ * The marks associated with @index are unaffected unless @entry is %NULL.
  *
- * Context: Process context. Takes and releases the xa_lock. May sleep
- * if the @gfp flags permit.
- * Return: The old value at this index or xa_err() if an error happened.
+ * Context: Any context. Takes and releases the xa_lock.
+ * May sleep if the @gfp flags permit.
+ * Return: The old entry at this index on success, xa_err(-EINVAL) if @entry
+ * cannot be stored in an XArray, or xa_err(-ENOMEM) if memory allocation
+ * failed.
  */
-void *xa_cmpxchg(struct xarray *xa, unsigned long index,
-                        void *old, void *entry, gfp_t gfp)
+void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
 {
-        XA_STATE(xas, xa, index);
         void *curr;
 
-        if (WARN_ON_ONCE(xa_is_internal(entry)))
-                return XA_ERROR(-EINVAL);
+        xa_lock(xa);
+        curr = __xa_store(xa, index, entry, gfp);
+        xa_unlock(xa);
 
-        do {
-                xas_lock(&xas);
-                curr = xas_load(&xas);
-                if (curr == XA_ZERO_ENTRY)
-                        curr = NULL;
-                if (curr == old) {
-                        xas_store(&xas, entry);
-                        if (xa_track_free(xa) && entry)
-                                xas_clear_mark(&xas, XA_FREE_MARK);
-                }
-                xas_unlock(&xas);
-        } while (xas_nomem(&xas, gfp));
-
-        return xas_result(&xas, curr);
+        return curr;
 }
-EXPORT_SYMBOL(xa_cmpxchg);
+EXPORT_SYMBOL(xa_store);
 
 /**
  * __xa_cmpxchg() - Store this entry in the XArray.
···
 
         if (WARN_ON_ONCE(xa_is_internal(entry)))
                 return XA_ERROR(-EINVAL);
+        if (xa_track_free(xa) && !entry)
+                entry = XA_ZERO_ENTRY;
 
         do {
                 curr = xas_load(&xas);
                 if (curr == XA_ZERO_ENTRY)
                         curr = NULL;
                 if (curr == old) {
                         xas_store(&xas, entry);
-                        if (xa_track_free(xa) && entry)
+                        if (xa_track_free(xa))
                                 xas_clear_mark(&xas, XA_FREE_MARK);
                 }
         } while (__xas_nomem(&xas, gfp));
···
 EXPORT_SYMBOL(__xa_cmpxchg);
 
 /**
- * xa_reserve() - Reserve this index in the XArray.
+ * __xa_reserve() - Reserve this index in the XArray.
  * @xa: XArray.
  * @index: Index into array.
  * @gfp: Memory allocation flags.
 *
···
  * Ensures there is somewhere to store an entry at @index in the array.
  * If there is already something stored at @index, this function does
  * nothing. If there was nothing there, the entry is marked as reserved.
- * Loads from @index will continue to see a %NULL pointer until a
- * subsequent store to @index.
+ * Loading from a reserved entry returns a %NULL pointer.
  *
  * If you do not use the entry that you have reserved, call xa_release()
  * or xa_erase() to free any unnecessary memory.
  *
- * Context: Process context. Takes and releases the xa_lock, IRQ or BH safe
- * if specified in XArray flags. May sleep if the @gfp flags permit.
+ * Context: Any context. Expects the xa_lock to be held on entry. May
+ * release the lock, sleep and reacquire the lock if the @gfp flags permit.
  * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
  */
-int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
+int __xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
 {
         XA_STATE(xas, xa, index);
-        unsigned int lock_type = xa_lock_type(xa);
         void *curr;
 
         do {
-                xas_lock_type(&xas, lock_type);
                 curr = xas_load(&xas);
-                if (!curr)
+                if (!curr) {
                         xas_store(&xas, XA_ZERO_ENTRY);
-                xas_unlock_type(&xas, lock_type);
-        } while (xas_nomem(&xas, gfp));
+                        if (xa_track_free(xa))
+                                xas_clear_mark(&xas, XA_FREE_MARK);
+                }
+        } while (__xas_nomem(&xas, gfp));
 
         return xas_error(&xas);
 }
-EXPORT_SYMBOL(xa_reserve);
+EXPORT_SYMBOL(__xa_reserve);
 
 #ifdef CONFIG_XARRAY_MULTI
 static void xas_set_range(struct xa_state *xas, unsigned long first,
···
         do {
                 xas_lock(&xas);
                 if (entry) {
-                        unsigned int order = (last == ~0UL) ? 64 :
-                                        ilog2(last + 1);
+                        unsigned int order = BITS_PER_LONG;
+                        if (last + 1)
+                                order = __ffs(last + 1);
                         xas_set_order(&xas, last, order);
                         xas_create(&xas);
                         if (xas_error(&xas))
···
  * @index: Index of entry.
  * @mark: Mark number.
  *
- * Attempting to set a mark on a NULL entry does not succeed.
+ * Attempting to set a mark on a %NULL entry does not succeed.
  *
  * Context: Any context. Expects xa_lock to be held on entry.
  */
···
         if (entry)
                 xas_set_mark(&xas, mark);
 }
-EXPORT_SYMBOL_GPL(__xa_set_mark);
+EXPORT_SYMBOL(__xa_set_mark);
 
 /**
  * __xa_clear_mark() - Clear this mark on this entry while locked.
···
         if (entry)
                 xas_clear_mark(&xas, mark);
 }
-EXPORT_SYMBOL_GPL(__xa_clear_mark);
+EXPORT_SYMBOL(__xa_clear_mark);
 
 /**
  * xa_get_mark() - Inquire whether this mark is set on this entry.
···
  * @index: Index of entry.
  * @mark: Mark number.
  *
- * Attempting to set a mark on a NULL entry does not succeed.
+ * Attempting to set a mark on a %NULL entry does not succeed.
  *
  * Context: Process context. Takes and releases the xa_lock.
  */
···
                         entry = xas_find_marked(&xas, max, filter);
                 else
                         entry = xas_find(&xas, max);
+                if (xas.xa_node == XAS_BOUNDS)
+                        break;
                 if (xas.xa_shift) {
                         if (xas.xa_index & ((1UL << xas.xa_shift) - 1))
                                 continue;
···
  *
  * The @filter may be an XArray mark value, in which case entries which are
  * marked with that mark will be copied. It may also be %XA_PRESENT, in
- * which case all entries which are not NULL will be copied.
+ * which case all entries which are not %NULL will be copied.
  *
  * The entries returned may not represent a snapshot of the XArray at a
  * moment in time. For example, if another thread stores to index 5, then
+2 -1
net/ipv4/ip_output.c
···
 		unsigned int fraglen;
 		unsigned int fraggap;
 		unsigned int alloclen;
-		unsigned int pagedlen = 0;
+		unsigned int pagedlen;
 		struct sk_buff *skb_prev;
 alloc_new_skb:
 		skb_prev = skb;
···
 		if (datalen > mtu - fragheaderlen)
 			datalen = maxfraglen - fragheaderlen;
 		fraglen = datalen + fragheaderlen;
+		pagedlen = 0;
 
 		if ((flags & MSG_MORE) &&
 		    !(rt->dst.dev->features&NETIF_F_SG))
+5 -2
net/ipv4/netfilter/ipt_MASQUERADE.c
···
 	int ret;
 
 	ret = xt_register_target(&masquerade_tg_reg);
+	if (ret)
+		return ret;
 
-	if (ret == 0)
-		nf_nat_masquerade_ipv4_register_notifier();
+	ret = nf_nat_masquerade_ipv4_register_notifier();
+	if (ret)
+		xt_unregister_target(&masquerade_tg_reg);
 
 	return ret;
 }
+30 -8
net/ipv4/netfilter/nf_nat_masquerade_ipv4.c
···
 	.notifier_call	= masq_inet_event,
 };
 
-static atomic_t masquerade_notifier_refcount = ATOMIC_INIT(0);
+static int masq_refcnt;
+static DEFINE_MUTEX(masq_mutex);
 
-void nf_nat_masquerade_ipv4_register_notifier(void)
+int nf_nat_masquerade_ipv4_register_notifier(void)
 {
+	int ret = 0;
+
+	mutex_lock(&masq_mutex);
 	/* check if the notifier was already set */
-	if (atomic_inc_return(&masquerade_notifier_refcount) > 1)
-		return;
+	if (++masq_refcnt > 1)
+		goto out_unlock;
 
 	/* Register for device down reports */
-	register_netdevice_notifier(&masq_dev_notifier);
+	ret = register_netdevice_notifier(&masq_dev_notifier);
+	if (ret)
+		goto err_dec;
 	/* Register IP address change reports */
-	register_inetaddr_notifier(&masq_inet_notifier);
+	ret = register_inetaddr_notifier(&masq_inet_notifier);
+	if (ret)
+		goto err_unregister;
+
+	mutex_unlock(&masq_mutex);
+	return ret;
+
+err_unregister:
+	unregister_netdevice_notifier(&masq_dev_notifier);
+err_dec:
+	masq_refcnt--;
+out_unlock:
+	mutex_unlock(&masq_mutex);
+	return ret;
 }
 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4_register_notifier);
 
 void nf_nat_masquerade_ipv4_unregister_notifier(void)
 {
+	mutex_lock(&masq_mutex);
 	/* check if the notifier still has clients */
-	if (atomic_dec_return(&masquerade_notifier_refcount) > 0)
-		return;
+	if (--masq_refcnt > 0)
+		goto out_unlock;
 
 	unregister_netdevice_notifier(&masq_dev_notifier);
 	unregister_inetaddr_notifier(&masq_inet_notifier);
+out_unlock:
+	mutex_unlock(&masq_mutex);
 }
 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4_unregister_notifier);
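The masquerade hunks above replace a bare atomic refcount with a mutex-protected count plus goto-based unwinding, so a failed notifier registration no longer leaves a half-registered state behind. A minimal userspace sketch of that pattern, using pthreads; `register_dev()`/`register_inet()` and `fail_second` are hypothetical stand-ins for `register_netdevice_notifier()` and `register_inetaddr_notifier()`, not real kernel APIs:

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical stand-ins for the real notifier calls; `fail_second`
 * lets a test force the second registration to fail so the unwind
 * path actually runs. */
static int fail_second;
static int dev_registered, inet_registered;

static int register_dev(void)     { dev_registered++;  return 0; }
static void unregister_dev(void)  { dev_registered--; }
static int register_inet(void)
{
	if (fail_second)
		return -1;
	inet_registered++;
	return 0;
}
static void unregister_inet(void) { inet_registered--; }

static int masq_refcnt;
static pthread_mutex_t masq_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Only the first caller actually registers; a failure on the second
 * notifier unwinds both the first registration and the refcount. */
static int masq_register(void)
{
	int ret = 0;

	pthread_mutex_lock(&masq_mutex);
	if (++masq_refcnt > 1)
		goto out_unlock;

	ret = register_dev();
	if (ret)
		goto err_dec;
	ret = register_inet();
	if (ret)
		goto err_unregister;

	pthread_mutex_unlock(&masq_mutex);
	return ret;

err_unregister:
	unregister_dev();
err_dec:
	masq_refcnt--;
out_unlock:
	pthread_mutex_unlock(&masq_mutex);
	return ret;
}

/* Only the last caller actually unregisters. */
static void masq_unregister(void)
{
	pthread_mutex_lock(&masq_mutex);
	if (--masq_refcnt > 0)
		goto out_unlock;
	unregister_inet();
	unregister_dev();
out_unlock:
	pthread_mutex_unlock(&masq_mutex);
}
```

The mutex also closes the race the old `atomic_inc_return()` check had: two concurrent first callers can no longer both see a count of 1 and double-register.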
+3 -1
net/ipv4/netfilter/nft_masq_ipv4.c
···
 	if (ret < 0)
 		return ret;
 
-	nf_nat_masquerade_ipv4_register_notifier();
+	ret = nf_nat_masquerade_ipv4_register_notifier();
+	if (ret)
+		nft_unregister_expr(&nft_masq_ipv4_type);
 
 	return ret;
 }
···
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
 	u32 elapsed, start_ts;
+	s32 remaining;
 
 	start_ts = tcp_retransmit_stamp(sk);
 	if (!icsk->icsk_user_timeout || !start_ts)
 		return icsk->icsk_rto;
 	elapsed = tcp_time_stamp(tcp_sk(sk)) - start_ts;
-	if (elapsed >= icsk->icsk_user_timeout)
+	remaining = icsk->icsk_user_timeout - elapsed;
+	if (remaining <= 0)
 		return 1; /* user timeout has passed; fire ASAP */
-	else
-		return min_t(u32, icsk->icsk_rto, msecs_to_jiffies(icsk->icsk_user_timeout - elapsed));
+
+	return min_t(u32, icsk->icsk_rto, msecs_to_jiffies(remaining));
 }
 
 /**
···
 			(boundary - linear_backoff_thresh) * TCP_RTO_MAX;
 		timeout = jiffies_to_msecs(timeout);
 	}
-	return (tcp_time_stamp(tcp_sk(sk)) - start_ts) >= timeout;
+	return (s32)(tcp_time_stamp(tcp_sk(sk)) - start_ts - timeout) >= 0;
 }
 
 /* A write timeout has occurred.  Process the after effects. */
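The TCP timer hunks above replace unsigned `elapsed >= timeout` comparisons with a signed test on the difference, which stays correct when the 32-bit timestamp wraps. A small sketch of the idiom; the `timeout_elapsed()` helper name is made up for illustration:

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t u32;
typedef int32_t s32;

/* Wrap-safe "has `timeout` ticks elapsed since `start`?" check,
 * mirroring the (s32)(now - start_ts - timeout) >= 0 form above.
 * Subtraction on u32 wraps modulo 2^32, so `now - start` is the true
 * elapsed time even across a counter wrap; casting the remaining
 * difference to s32 turns "elapsed >= timeout" into a sign test. */
static int timeout_elapsed(u32 now, u32 start, u32 timeout)
{
	return (s32)(now - start - timeout) >= 0;
}
```

The sign test is valid as long as the distance between elapsed time and timeout stays within ±2^31 ticks, which holds for the millisecond-resolution TCP timestamps this code deals with.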
+2 -1
net/ipv6/ip6_output.c
···
 		unsigned int fraglen;
 		unsigned int fraggap;
 		unsigned int alloclen;
-		unsigned int pagedlen = 0;
+		unsigned int pagedlen;
 alloc_new_skb:
 		/* There's no room in the current skb */
 		if (skb)
···
 		if (datalen > (cork->length <= mtu && !(cork->flags & IPCORK_ALLFRAG) ? mtu : maxfraglen) - fragheaderlen)
 			datalen = maxfraglen - fragheaderlen - rt->dst.trailer_len;
 		fraglen = datalen + fragheaderlen;
+		pagedlen = 0;
 
 		if ((flags & MSG_MORE) &&
 		    !(rt->dst.dev->features&NETIF_F_SG))
···
 	int err;
 
 	err = xt_register_target(&masquerade_tg6_reg);
-	if (err == 0)
-		nf_nat_masquerade_ipv6_register_notifier();
+	if (err)
+		return err;
+
+	err = nf_nat_masquerade_ipv6_register_notifier();
+	if (err)
+		xt_unregister_target(&masquerade_tg6_reg);
 
 	return err;
 }
+37 -14
net/ipv6/netfilter/nf_nat_masquerade_ipv6.c
···
 * of ipv6 addresses being deleted), we also need to add an upper
 * limit to the number of queued work items.
 */
-static int masq_inet_event(struct notifier_block *this,
-			   unsigned long event, void *ptr)
+static int masq_inet6_event(struct notifier_block *this,
+			    unsigned long event, void *ptr)
 {
 	struct inet6_ifaddr *ifa = ptr;
 	const struct net_device *dev;
···
 	return NOTIFY_DONE;
 }
 
-static struct notifier_block masq_inet_notifier = {
-	.notifier_call	= masq_inet_event,
+static struct notifier_block masq_inet6_notifier = {
+	.notifier_call	= masq_inet6_event,
 };
 
-static atomic_t masquerade_notifier_refcount = ATOMIC_INIT(0);
+static int masq_refcnt;
+static DEFINE_MUTEX(masq_mutex);
 
-void nf_nat_masquerade_ipv6_register_notifier(void)
+int nf_nat_masquerade_ipv6_register_notifier(void)
 {
-	/* check if the notifier is already set */
-	if (atomic_inc_return(&masquerade_notifier_refcount) > 1)
-		return;
+	int ret = 0;
 
-	register_netdevice_notifier(&masq_dev_notifier);
-	register_inet6addr_notifier(&masq_inet_notifier);
+	mutex_lock(&masq_mutex);
+	/* check if the notifier is already set */
+	if (++masq_refcnt > 1)
+		goto out_unlock;
+
+	ret = register_netdevice_notifier(&masq_dev_notifier);
+	if (ret)
+		goto err_dec;
+
+	ret = register_inet6addr_notifier(&masq_inet6_notifier);
+	if (ret)
+		goto err_unregister;
+
+	mutex_unlock(&masq_mutex);
+	return ret;
+
+err_unregister:
+	unregister_netdevice_notifier(&masq_dev_notifier);
+err_dec:
+	masq_refcnt--;
+out_unlock:
+	mutex_unlock(&masq_mutex);
+	return ret;
 }
 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv6_register_notifier);
 
 void nf_nat_masquerade_ipv6_unregister_notifier(void)
 {
+	mutex_lock(&masq_mutex);
 	/* check if the notifier still has clients */
-	if (atomic_dec_return(&masquerade_notifier_refcount) > 0)
-		return;
+	if (--masq_refcnt > 0)
+		goto out_unlock;
 
-	unregister_inet6addr_notifier(&masq_inet_notifier);
+	unregister_inet6addr_notifier(&masq_inet6_notifier);
 	unregister_netdevice_notifier(&masq_dev_notifier);
+out_unlock:
+	mutex_unlock(&masq_mutex);
 }
 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv6_unregister_notifier);
+3 -1
net/ipv6/netfilter/nft_masq_ipv6.c
···
 	if (ret < 0)
 		return ret;
 
-	nf_nat_masquerade_ipv6_register_notifier();
+	ret = nf_nat_masquerade_ipv6_register_notifier();
+	if (ret)
+		nft_unregister_expr(&nft_masq_ipv6_type);
 
 	return ret;
 }
···
 /* tipc_node_cleanup - delete nodes that does not
 * have active links for NODE_CLEANUP_AFTER time
 */
-static int tipc_node_cleanup(struct tipc_node *peer)
+static bool tipc_node_cleanup(struct tipc_node *peer)
 {
 	struct tipc_net *tn = tipc_net(peer->net);
 	bool deleted = false;
 
-	spin_lock_bh(&tn->node_list_lock);
+	/* If lock held by tipc_node_stop() the node will be deleted anyway */
+	if (!spin_trylock_bh(&tn->node_list_lock))
+		return false;
+
 	tipc_node_write_lock(peer);
 
 	if (!node_is_up(peer) && time_after(jiffies, peer->delete_at)) {
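The tipc_node_cleanup() hunk above swaps an unconditional `spin_lock_bh()` for `spin_trylock_bh()`: if the shutdown path already holds `node_list_lock` it will delete the node itself, so the periodic cleanup simply backs off instead of risking a deadlock. A userspace sketch of that back-off idiom using `pthread_mutex_trylock()`; the names and the `deleted` flag are illustrative only:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t node_list_lock = PTHREAD_MUTEX_INITIALIZER;
static bool deleted;

/* If the lock is contended, skip the work rather than block; the
 * caller (a periodic timer in the tipc case) retries on the next
 * tick, and whoever holds the lock does the cleanup instead. */
static bool node_cleanup(void)
{
	if (pthread_mutex_trylock(&node_list_lock) != 0)
		return false;	/* lock busy: someone else is handling it */
	deleted = true;		/* stand-in for the real delete work */
	pthread_mutex_unlock(&node_list_lock);
	return true;
}
```

Returning `bool` rather than `int` matches the signature change in the hunk: the caller only needs to know whether the cleanup happened on this attempt.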
+/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */
+/*
+ * Copyright (c) 2015 Jiri Pirko <jiri@resnulli.us>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef __LINUX_TC_BPF_H
+#define __LINUX_TC_BPF_H
+
+#include <linux/pkt_cls.h>
+
+#define TCA_ACT_BPF 13
+
+struct tc_act_bpf {
+	tc_gen;
+};
+
+enum {
+	TCA_ACT_BPF_UNSPEC,
+	TCA_ACT_BPF_TM,
+	TCA_ACT_BPF_PARMS,
+	TCA_ACT_BPF_OPS_LEN,
+	TCA_ACT_BPF_OPS,
+	TCA_ACT_BPF_FD,
+	TCA_ACT_BPF_NAME,
+	TCA_ACT_BPF_PAD,
+	TCA_ACT_BPF_TAG,
+	TCA_ACT_BPF_ID,
+	__TCA_ACT_BPF_MAX,
+};
+#define TCA_ACT_BPF_MAX (__TCA_ACT_BPF_MAX - 1)
+
+#endif