···
 1. User addresses not accessed by the kernel but used for address space
    management (e.g. ``mprotect()``, ``madvise()``). The use of valid
-   tagged pointers in this context is allowed with the exception of
-   ``brk()``, ``mmap()`` and the ``new_address`` argument to
-   ``mremap()`` as these have the potential to alias with existing
-   user addresses.
+   tagged pointers in this context is allowed with these exceptions:

-   NOTE: This behaviour changed in v5.6 and so some earlier kernels may
-   incorrectly accept valid tagged pointers for the ``brk()``,
-   ``mmap()`` and ``mremap()`` system calls.
+   - ``brk()``, ``mmap()`` and the ``new_address`` argument to
+     ``mremap()`` as these have the potential to alias with existing
+     user addresses.
+
+     NOTE: This behaviour changed in v5.6 and so some earlier kernels may
+     incorrectly accept valid tagged pointers for the ``brk()``,
+     ``mmap()`` and ``mremap()`` system calls.
+
+   - The ``range.start``, ``start`` and ``dst`` arguments to the
+     ``UFFDIO_*`` ``ioctl()``s used on a file descriptor obtained from
+     ``userfaultfd()``, as fault addresses subsequently obtained by reading
+     the file descriptor will be untagged, which may otherwise confuse
+     tag-unaware programs.
+
+     NOTE: This behaviour changed in v5.14 and so some earlier kernels may
+     incorrectly accept valid tagged pointers for this system call.

 2. User addresses accessed by the kernel (e.g. ``write()``). This ABI
    relaxation is disabled by default and the application thread needs to
···
-IMX8 glue layer controller, NXP imx8 families support Synopsys MAC 5.10a IP.
-
-This file documents platform glue layer for IMX.
-Please see stmmac.txt for the other unchanged properties.
-
-The device node has following properties.
-
-Required properties:
-- compatible: Should be "nxp,imx8mp-dwmac-eqos" to select glue layer
-	       and "snps,dwmac-5.10a" to select IP version.
-- clocks: Must contain a phandle for each entry in clock-names.
-- clock-names: Should be "stmmaceth" for the host clock.
-	       Should be "pclk" for the MAC apb clock.
-	       Should be "ptp_ref" for the MAC timer clock.
-	       Should be "tx" for the MAC RGMII TX clock:
-	       Should be "mem" for EQOS MEM clock.
-		- "mem" clock is required for imx8dxl platform.
-		- "mem" clock is not required for imx8mp platform.
-- interrupt-names: Should contain a list of interrupt names corresponding to
-		   the interrupts in the interrupts property, if available.
-		   Should be "macirq" for the main MAC IRQ
-		   Should be "eth_wake_irq" for the IT which wake up system
-- intf_mode: Should be phandle/offset pair. The phandle to the syscon node which
-	      encompases the GPR register, and the offset of the GPR register.
-		- required for imx8mp platform.
-		- is optional for imx8dxl platform.
-
-Optional properties:
-- intf_mode: is optional for imx8dxl platform.
-- snps,rmii_refclk_ext: to select RMII reference clock from external.
-
-Example:
-	eqos: ethernet@30bf0000 {
-		compatible = "nxp,imx8mp-dwmac-eqos", "snps,dwmac-5.10a";
-		reg = <0x30bf0000 0x10000>;
-		interrupts = <GIC_SPI 134 IRQ_TYPE_LEVEL_HIGH>,
-			     <GIC_SPI 135 IRQ_TYPE_LEVEL_HIGH>;
-		interrupt-names = "eth_wake_irq", "macirq";
-		clocks = <&clk IMX8MP_CLK_ENET_QOS_ROOT>,
-			 <&clk IMX8MP_CLK_QOS_ENET_ROOT>,
-			 <&clk IMX8MP_CLK_ENET_QOS_TIMER>,
-			 <&clk IMX8MP_CLK_ENET_QOS>;
-		clock-names = "stmmaceth", "pclk", "ptp_ref", "tx";
-		assigned-clocks = <&clk IMX8MP_CLK_ENET_AXI>,
-				  <&clk IMX8MP_CLK_ENET_QOS_TIMER>,
-				  <&clk IMX8MP_CLK_ENET_QOS>;
-		assigned-clock-parents = <&clk IMX8MP_SYS_PLL1_266M>,
-					 <&clk IMX8MP_SYS_PLL2_100M>,
-					 <&clk IMX8MP_SYS_PLL2_125M>;
-		assigned-clock-rates = <0>, <100000000>, <125000000>;
-		nvmem-cells = <&eth_mac0>;
-		nvmem-cell-names = "mac-address";
-		nvmem_macaddr_swap;
-		intf_mode = <&gpr 0x4>;
-		status = "disabled";
-	};
···
     maxItems: 1

   power-domains:
+    deprecated: true
     description:
       Power domain to use for enable control. This binding is only
       available if the compatible is chosen to regulator-fixed-domain.
     maxItems: 1

   required-opps:
+    deprecated: true
     description:
       Performance state to use for enable control. This binding is only
       available if the compatible is chosen to regulator-fixed-domain. The
···
 These are the various configuration flags that can be used to control
 and monitor the behavior of AF_XDP sockets.

-XDP_COPY and XDP_ZERO_COPY bind flags
--------------------------------------
+XDP_COPY and XDP_ZEROCOPY bind flags
+------------------------------------

 When you bind to a socket, the kernel will first try to use zero-copy
 copy. If zero-copy is not supported, it will fall back on using copy
···
 like to force a certain mode, you can use the following flags. If you
 pass the XDP_COPY flag to the bind call, the kernel will force the
 socket into copy mode. If it cannot use copy mode, the bind call will
-fail with an error. Conversely, the XDP_ZERO_COPY flag will force the
+fail with an error. Conversely, the XDP_ZEROCOPY flag will force the
 socket into zero-copy mode or fail.

 XDP_SHARED_UMEM bind flag
+1 -1
Documentation/networking/ip-sysctl.rst
···
 	initial value when the blackhole issue goes away.
 	0 to disable the blackhole detection.

-	By default, it is set to 1hr.
+	By default, it is set to 0 (feature is disabled).

 tcp_fastopen_key - list of comma separated 32-digit hexadecimal INTEGERs
 	The list consists of a primary key and an optional backup key. The
+1 -1
Documentation/trace/histogram.rst
···
                                  with the event, in nanoseconds. May be
                                  modified by .usecs to have timestamps
                                  interpreted as microseconds.
-    cpu                    int   the cpu on which the event occurred.
+    common_cpu             int   the cpu on which the event occurred.
     ====================== ==== =======================================

 Extended error information
+11 -2
MAINTAINERS
···
 F:	include/uapi/linux/wmi.h

 ACRN HYPERVISOR SERVICE MODULE
-M:	Shuo Liu <shuo.a.liu@intel.com>
+M:	Fei Li <fei1.li@intel.com>
 L:	acrn-dev@lists.projectacrn.org (subscribers-only)
 S:	Supported
 W:	https://projectacrn.org
···
 MEDIATEK SWITCH DRIVER
 M:	Sean Wang <sean.wang@mediatek.com>
 M:	Landen Chao <Landen.Chao@mediatek.com>
+M:	DENG Qingfang <dqfext@gmail.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/net/dsa/mt7530.*
···
 L:	linux-usb@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/phy/hisilicon,hi3670-usb3.yaml
-F:	drivers/phy/hisilicon/phy-kirin970-usb3.c
+F:	drivers/phy/hisilicon/phy-hi3670-usb3.c

 USB ISP116X DRIVER
 M:	Olav Kongas <ok@artecdesign.ee>
···
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/ptp/ptp_vmw.c
+
+VMWARE VMCI DRIVER
+M:	Jorgen Hansen <jhansen@vmware.com>
+M:	Vishnu Dasa <vdasa@vmware.com>
+L:	linux-kernel@vger.kernel.org
+L:	pv-drivers@vmware.com (private)
+S:	Maintained
+F:	drivers/misc/vmw_vmci/

 VMWARE VMMOUSE SUBDRIVER
 M:	"VMware Graphics" <linux-graphics-maintainer@vmware.com>
···
 	depends on MMU
 	select MMU_MOTOROLA if MMU
 	select HAVE_ARCH_NVRAM_OPS
+	select HAVE_PATA_PLATFORM
 	select LEGACY_TIMER_TICK
 	help
 	  This option enables support for the Apple Macintosh series of
+1 -1
arch/nds32/mm/mmap.c
···

 		vma = find_vma(mm, addr);
 		if (TASK_SIZE - len >= addr &&
-		    (!vma || addr + len <= vma->vm_start))
+		    (!vma || addr + len <= vm_start_gap(vma)))
 			return addr;
 	}

+2
arch/powerpc/kvm/book3s_hv.c
···
 		HFSCR_DSCR | HFSCR_VECVSX | HFSCR_FP | HFSCR_PREFIX;
 	if (cpu_has_feature(CPU_FTR_HVMODE)) {
 		vcpu->arch.hfscr &= mfspr(SPRN_HFSCR);
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
 		if (cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
 			vcpu->arch.hfscr |= HFSCR_TM;
+#endif
 	}
 	if (cpu_has_feature(CPU_FTR_TM_COMP))
 		vcpu->arch.hfscr |= HFSCR_TM;
+20
arch/powerpc/kvm/book3s_hv_nested.c
···
 	if (vcpu->kvm->arch.l1_ptcr == 0)
 		return H_NOT_AVAILABLE;

+	if (MSR_TM_TRANSACTIONAL(vcpu->arch.shregs.msr))
+		return H_BAD_MODE;
+
 	/* copy parameters in */
 	hv_ptr = kvmppc_get_gpr(vcpu, 4);
 	regs_ptr = kvmppc_get_gpr(vcpu, 5);
···
 		byteswap_pt_regs(&l2_regs);
 	if (l2_hv.vcpu_token >= NR_CPUS)
 		return H_PARAMETER;
+
+	/*
+	 * L1 must have set up a suspended state to enter the L2 in a
+	 * transactional state, and only in that case. These have to be
+	 * filtered out here to prevent causing a TM Bad Thing in the
+	 * host HRFID. We could synthesize a TM Bad Thing back to the L1
+	 * here but there doesn't seem like much point.
+	 */
+	if (MSR_TM_SUSPENDED(vcpu->arch.shregs.msr)) {
+		if (!MSR_TM_ACTIVE(l2_regs.msr))
+			return H_BAD_MODE;
+	} else {
+		if (l2_regs.msr & MSR_TS_MASK)
+			return H_BAD_MODE;
+		if (WARN_ON_ONCE(vcpu->arch.shregs.msr & MSR_TS_MASK))
+			return H_BAD_MODE;
+	}

 	/* translate lpid */
 	l2 = kvmhv_get_nested(vcpu->kvm, l2_hv.lpid, true);
+22 -3
arch/powerpc/kvm/book3s_hv_p9_entry.c
···
 	 */
 	mtspr(SPRN_HDEC, hdec);

+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+tm_return_to_guest:
+#endif
 	mtspr(SPRN_DAR, vcpu->arch.shregs.dar);
 	mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr);
 	mtspr(SPRN_SRR0, vcpu->arch.shregs.srr0);
···
 	 * is in real suspend mode and is trying to transition to
 	 * transactional mode.
 	 */
-	if (local_paca->kvm_hstate.fake_suspend &&
+	if (!local_paca->kvm_hstate.fake_suspend &&
 	    (vcpu->arch.shregs.msr & MSR_TS_S)) {
 		if (kvmhv_p9_tm_emulation_early(vcpu)) {
-			/* Prevent it being handled again. */
-			trap = 0;
+			/*
+			 * Go straight back into the guest with the
+			 * new NIP/MSR as set by TM emulation.
+			 */
+			mtspr(SPRN_HSRR0, vcpu->arch.regs.nip);
+			mtspr(SPRN_HSRR1, vcpu->arch.shregs.msr);
+
+			/*
+			 * tm_return_to_guest re-loads SRR0/1, DAR,
+			 * DSISR after RI is cleared, in case they had
+			 * been clobbered by a MCE.
+			 */
+			__mtmsrd(0, 1); /* clear RI */
+			goto tm_return_to_guest;
 		}
 	}
 #endif
···
 	 * If we are in real mode, only switch MMU on after the MMU is
 	 * switched to host, to avoid the P9_RADIX_PREFETCH_BUG.
 	 */
+	if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) &&
+	    vcpu->arch.shregs.msr & MSR_TS_MASK)
+		msr |= MSR_TS_S;
+
 	__mtmsrd(msr, 0);

 	end_timing(vcpu);
+22 -3
arch/powerpc/kvm/book3s_rtas.c
···
 	 * value so we can restore it on the way out.
 	 */
 	orig_rets = args.rets;
+	if (be32_to_cpu(args.nargs) >= ARRAY_SIZE(args.args)) {
+		/*
+		 * Don't overflow our args array: ensure there is room for
+		 * at least rets[0] (even if the call specifies 0 nret).
+		 *
+		 * Each handler must then check for the correct nargs and nret
+		 * values, but they may always return failure in rets[0].
+		 */
+		rc = -EINVAL;
+		goto fail;
+	}
 	args.rets = &args.args[be32_to_cpu(args.nargs)];

 	mutex_lock(&vcpu->kvm->arch.rtas_token_lock);
···
 fail:
 	/*
 	 * We only get here if the guest has called RTAS with a bogus
-	 * args pointer. That means we can't get to the args, and so we
-	 * can't fail the RTAS call. So fail right out to userspace,
-	 * which should kill the guest.
+	 * args pointer or nargs/nret values that would overflow the
+	 * array. That means we can't get to the args, and so we can't
+	 * fail the RTAS call. So fail right out to userspace, which
+	 * should kill the guest.
+	 *
+	 * SLOF should actually pass the hcall return value from the
+	 * rtas handler call in r3, so enter_rtas could be modified to
+	 * return a failure indication in r3 and we could return such
+	 * errors to the guest rather than failing to host userspace.
+	 * However old guests that don't test for failure could then
+	 * continue silently after errors, so for now we won't do this.
 	 */
 	return rc;
 }
+2 -2
arch/powerpc/kvm/powerpc.c
···
 	{
 		struct kvm_enable_cap cap;
 		r = -EFAULT;
-		vcpu_load(vcpu);
 		if (copy_from_user(&cap, argp, sizeof(cap)))
 			goto out;
+		vcpu_load(vcpu);
 		r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap);
 		vcpu_put(vcpu);
 		break;
···
 	case KVM_DIRTY_TLB: {
 		struct kvm_dirty_tlb dirty;
 		r = -EFAULT;
-		vcpu_load(vcpu);
 		if (copy_from_user(&dirty, argp, sizeof(dirty)))
 			goto out;
+		vcpu_load(vcpu);
 		r = kvm_vcpu_ioctl_dirty_tlb(vcpu, &dirty);
 		vcpu_put(vcpu);
 		break;
···
 	switch (regs->msr & SRR1_WAKEMASK) {
 	case SRR1_WAKEDEC:
 		set_dec(1);
+		break;
 	case SRR1_WAKEEE:
 		/*
 		 * Handle these when interrupts get re-enabled and we take
+2 -2
arch/riscv/include/asm/efi.h
···

 #define ARCH_EFI_IRQ_FLAGS_MASK	(SR_IE | SR_SPIE)

-/* Load initrd at enough distance from DRAM start */
+/* Load initrd anywhere in system RAM */
 static inline unsigned long efi_get_max_initrd_addr(unsigned long image_addr)
 {
-	return image_addr + SZ_256M;
+	return ULONG_MAX;
 }

 #define alloc_screen_info(x...)	(&screen_info)
+5 -1
arch/riscv/kernel/stacktrace.c
···
 {
 	unsigned long pc = 0;

-	if (likely(task && task != current && !task_is_running(task)))
+	if (likely(task && task != current && !task_is_running(task))) {
+		if (!try_get_task_stack(task))
+			return 0;
 		walk_stackframe(task, NULL, save_wchan, &pc);
+		put_task_stack(task);
+	}
 	return pc;
 }

+13 -14
arch/riscv/lib/uaccess.S
···
 	 * t0 - end of uncopied dst
 	 */
	add	t0, a0, a2
-	bgtu	a0, t0, 5f

	/*
	 * Use byte copy only if too small.
+	 * SZREG holds 4 for RV32 and 8 for RV64
	 */
-	li	a3, 8*SZREG /* size must be larger than size in word_copy */
+	li	a3, 9*SZREG /* size must be larger than size in word_copy */
	bltu	a2, a3, .Lbyte_copy_tail

	/*
-	 * Copy first bytes until dst is align to word boundary.
+	 * Copy first bytes until dst is aligned to word boundary.
	 * a0 - start of dst
	 * t1 - start of aligned dst
	 */
	addi	t1, a0, SZREG-1
	andi	t1, t1, ~(SZREG-1)
	/* dst is already aligned, skip */
-	beq	a0, t1, .Lskip_first_bytes
+	beq	a0, t1, .Lskip_align_dst
 1:
	/* a5 - one byte for copying data */
	fixup lb	a5, 0(a1), 10f
···
	addi	a0, a0, 1	/* dst */
	bltu	a0, t1, 1b	/* t1 - start of aligned dst */

-.Lskip_first_bytes:
+.Lskip_align_dst:
	/*
	 * Now dst is aligned.
	 * Use shift-copy if src is misaligned.
···
	 *
	 * a0 - start of aligned dst
	 * a1 - start of aligned src
-	 * a3 - a1 & mask:(SZREG-1)
	 * t0 - end of aligned dst
	 */
-	addi	t0, t0, -(8*SZREG-1) /* not to over run */
+	addi	t0, t0, -(8*SZREG) /* not to over run */
 2:
	fixup REG_L	a4, 0(a1), 10f
	fixup REG_L	a5, SZREG(a1), 10f
···
	addi	a1, a1, 8*SZREG
	bltu	a0, t0, 2b

-	addi	t0, t0, 8*SZREG-1 /* revert to original value */
+	addi	t0, t0, 8*SZREG /* revert to original value */
	j	.Lbyte_copy_tail

 .Lshift_copy:
···
	 * For misaligned copy we still perform aligned word copy, but
	 * we need to use the value fetched from the previous iteration and
	 * do some shifts.
-	 * This is safe because reading less than a word size.
+	 * This is safe because reading is less than a word size.
	 *
	 * a0 - start of aligned dst
	 * a1 - start of src
···
	 */
	/* calculating aligned word boundary for dst */
	andi	t1, t0, ~(SZREG-1)
-	/* Converting unaligned src to aligned arc */
+	/* Converting unaligned src to aligned src */
	andi	a1, a1, ~(SZREG-1)

	/*
···
	 * t3 - prev shift
	 * t4 - current shift
	 */
-	slli	t3, a3, LGREG
+	slli	t3, a3, 3 /* converting bytes in a3 to bits */
	li	a5, SZREG*8
	sub	t4, a5, t3

-	/* Load the first word to combine with seceond word */
+	/* Load the first word to combine with second word */
	fixup REG_L	a5, 0(a1), 10f

 3:
···
	 * a1 - start of remaining src
	 * t0 - end of remaining dst
	 */
-	bgeu	a0, t0, 5f
+	bgeu	a0, t0, .Lout_copy_user /* check if end of copy */
 4:
	fixup lb	a5, 0(a1), 10f
	addi	a1, a1, 1	/* src */
···
	addi	a0, a0, 1	/* dst */
	bltu	a0, t0, 4b	/* t0 - end of dst */

-5:
+.Lout_copy_user:
	/* Disable access to user memory */
	csrc	CSR_STATUS, t6
	li	a0, 0
+28 -4
arch/riscv/mm/init.c
···
 }

 /*
- * The default maximal physical memory size is -PAGE_OFFSET,
- * limit the memory size via mem.
+ * The default maximal physical memory size is -PAGE_OFFSET for 32-bit kernel,
+ * whereas for 64-bit kernel, the end of the virtual address space is occupied
+ * by the modules/BPF/kernel mappings which reduces the available size of the
+ * linear mapping.
+ * Limit the memory size via mem.
 */
+#ifdef CONFIG_64BIT
+static phys_addr_t memory_limit = -PAGE_OFFSET - SZ_4G;
+#else
 static phys_addr_t memory_limit = -PAGE_OFFSET;
+#endif

 static int __init early_mem(char *p)
 {
···
 {
 	phys_addr_t vmlinux_end = __pa_symbol(&_end);
 	phys_addr_t vmlinux_start = __pa_symbol(&_start);
-	phys_addr_t max_mapped_addr = __pa(~(ulong)0);
+	phys_addr_t __maybe_unused max_mapped_addr;
 	phys_addr_t dram_end;

 #ifdef CONFIG_XIP_KERNEL
···
 	memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);

 	dram_end = memblock_end_of_DRAM();
+
+#ifndef CONFIG_64BIT
 	/*
 	 * memblock allocator is not aware of the fact that last 4K bytes of
 	 * the addressable memory can not be mapped because of IS_ERR_VALUE
 	 * macro. Make sure that last 4k bytes are not usable by memblock
-	 * if end of dram is equal to maximum addressable memory.
+	 * if end of dram is equal to maximum addressable memory.  For 64-bit
+	 * kernel, this problem can't happen here as the end of the virtual
+	 * address space is occupied by the kernel mapping then this check must
+	 * be done in create_kernel_page_table.
 	 */
+	max_mapped_addr = __pa(~(ulong)0);
 	if (max_mapped_addr == (dram_end - 1))
 		memblock_set_current_limit(max_mapped_addr - 4096);
+#endif

 	min_low_pfn = PFN_UP(memblock_start_of_DRAM());
 	max_low_pfn = max_pfn = PFN_DOWN(dram_end);
···
 	BUG_ON((PAGE_OFFSET % PGDIR_SIZE) != 0);
 	BUG_ON((kernel_map.phys_addr % map_size) != 0);

+#ifdef CONFIG_64BIT
+	/*
+	 * The last 4K bytes of the addressable memory can not be mapped because
+	 * of IS_ERR_VALUE macro.
+	 */
+	BUG_ON((kernel_map.virt_addr + kernel_map.size) > ADDRESS_SPACE_END - SZ_4K);
+#endif
+
 	pt_ops.alloc_pte = alloc_pte_early;
 	pt_ops.get_pte_virt = get_pte_virt_early;
 #ifndef __PAGETABLE_PMD_FOLDED
···
 		if (start <= __pa(PAGE_OFFSET) &&
 		    __pa(PAGE_OFFSET) < end)
 			start = __pa(PAGE_OFFSET);
+		if (end >= __pa(PAGE_OFFSET) + memory_limit)
+			end = __pa(PAGE_OFFSET) + memory_limit;

 		map_size = best_map_size(start, end - start);
 		for (pa = start; pa < end; pa += map_size) {
+4 -15
arch/s390/boot/text_dma.S
···
 #include <asm/errno.h>
 #include <asm/sigp.h>

-#ifdef CC_USING_EXPOLINE
-	.pushsection .dma.text.__s390_indirect_jump_r14,"axG"
-__dma__s390_indirect_jump_r14:
-	larl	%r1,0f
-	ex	0,0(%r1)
-	j	.
-0:	br	%r14
-	.popsection
-#endif
-
 	.section .dma.text,"ax"
 /*
  * Simplified version of expoline thunk. The normal thunks can not be used here,
···
  * affects a few functions that are not performance-relevant.
  */
	.macro BR_EX_DMA_r14
-#ifdef CC_USING_EXPOLINE
-	jg	__dma__s390_indirect_jump_r14
-#else
-	br	%r14
-#endif
+	larl	%r1,0f
+	ex	0,0(%r1)
+	j	.
+0:	br	%r14
	.endm

 /*
+18 -9
arch/s390/configs/debug_defconfig
···
 CONFIG_AUDIT=y
 CONFIG_NO_HZ_IDLE=y
 CONFIG_HIGH_RES_TIMERS=y
+CONFIG_BPF_SYSCALL=y
+CONFIG_BPF_JIT=y
+CONFIG_BPF_JIT_ALWAYS_ON=y
+CONFIG_BPF_LSM=y
 CONFIG_PREEMPT=y
+CONFIG_SCHED_CORE=y
 CONFIG_BSD_PROCESS_ACCT=y
 CONFIG_BSD_PROCESS_ACCT_V3=y
 CONFIG_TASKSTATS=y
···
 CONFIG_CGROUP_CPUACCT=y
 CONFIG_CGROUP_PERF=y
 CONFIG_CGROUP_BPF=y
+CONFIG_CGROUP_MISC=y
 CONFIG_NAMESPACES=y
 CONFIG_USER_NS=y
 CONFIG_CHECKPOINT_RESTORE=y
 CONFIG_SCHED_AUTOGROUP=y
 CONFIG_EXPERT=y
 # CONFIG_SYSFS_SYSCALL is not set
-CONFIG_BPF_LSM=y
-CONFIG_BPF_SYSCALL=y
 CONFIG_USERFAULTFD=y
 # CONFIG_COMPAT_BRK is not set
 CONFIG_PROFILING=y
···
 CONFIG_BLK_WBT=y
 CONFIG_BLK_CGROUP_IOLATENCY=y
 CONFIG_BLK_CGROUP_IOCOST=y
+CONFIG_BLK_CGROUP_IOPRIO=y
 CONFIG_BLK_INLINE_ENCRYPTION=y
 CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y
 CONFIG_PARTITION_ADVANCED=y
···
 CONFIG_FRONTSWAP=y
 CONFIG_CMA_DEBUG=y
 CONFIG_CMA_DEBUGFS=y
+CONFIG_CMA_SYSFS=y
 CONFIG_CMA_AREAS=7
 CONFIG_MEM_SOFT_DIRTY=y
 CONFIG_ZSWAP=y
···
 CONFIG_MPTCP=y
 CONFIG_NETFILTER=y
 CONFIG_BRIDGE_NETFILTER=m
+CONFIG_NETFILTER_NETLINK_HOOK=m
 CONFIG_NF_CONNTRACK=m
 CONFIG_NF_CONNTRACK_SECMARK=y
 CONFIG_NF_CONNTRACK_EVENTS=y
···
 CONFIG_IP_VS_PE_SIP=m
 CONFIG_NFT_FIB_IPV4=m
 CONFIG_NF_TABLES_ARP=y
+CONFIG_NF_LOG_IPV4=m
 CONFIG_IP_NF_IPTABLES=m
 CONFIG_IP_NF_MATCH_AH=m
 CONFIG_IP_NF_MATCH_ECN=m
···
 CONFIG_VIRTIO_VSOCKETS=m
 CONFIG_NETLINK_DIAG=m
 CONFIG_CGROUP_NET_PRIO=y
-CONFIG_BPF_JIT=y
 CONFIG_NET_PKTGEN=m
 CONFIG_PCI=y
-CONFIG_PCI_IOV=y
 # CONFIG_PCIEASPM is not set
 CONFIG_PCI_DEBUG=y
+CONFIG_PCI_IOV=y
 CONFIG_HOTPLUG_PCI=y
 CONFIG_HOTPLUG_PCI_S390=y
 CONFIG_DEVTMPFS=y
···
 CONFIG_MD_FAULTY=m
 CONFIG_MD_CLUSTER=m
 CONFIG_BCACHE=m
-CONFIG_BLK_DEV_DM=m
+CONFIG_BLK_DEV_DM=y
 CONFIG_DM_UNSTRIPED=m
 CONFIG_DM_CRYPT=m
 CONFIG_DM_SNAPSHOT=m
···
 CONFIG_DM_MULTIPATH_HST=m
 CONFIG_DM_MULTIPATH_IOA=m
 CONFIG_DM_DELAY=m
+CONFIG_DM_INIT=y
 CONFIG_DM_UEVENT=y
 CONFIG_DM_FLAKEY=m
 CONFIG_DM_VERITY=m
···
 # CONFIG_NET_VENDOR_GOOGLE is not set
 # CONFIG_NET_VENDOR_HUAWEI is not set
 # CONFIG_NET_VENDOR_INTEL is not set
+# CONFIG_NET_VENDOR_MICROSOFT is not set
 # CONFIG_NET_VENDOR_MARVELL is not set
 CONFIG_MLX4_EN=m
 CONFIG_MLX5_CORE=m
···
 CONFIG_LEGACY_PTY_COUNT=0
 CONFIG_VIRTIO_CONSOLE=m
 CONFIG_HW_RANDOM_VIRTIO=m
-CONFIG_RAW_DRIVER=m
 CONFIG_HANGCHECK_TIMER=m
 CONFIG_TN3270_FS=y
 CONFIG_PPS=m
···
 CONFIG_VFIO=m
 CONFIG_VFIO_PCI=m
 CONFIG_VFIO_MDEV=m
-CONFIG_VFIO_MDEV_DEVICE=m
 CONFIG_VIRTIO_PCI=m
 CONFIG_VIRTIO_BALLOON=m
 CONFIG_VIRTIO_INPUT=y
···
 CONFIG_CUSE=m
 CONFIG_VIRTIO_FS=m
 CONFIG_OVERLAY_FS=m
+CONFIG_NETFS_STATS=y
 CONFIG_FSCACHE=m
 CONFIG_CACHEFILES=m
 CONFIG_ISO9660_FS=y
···
 CONFIG_NFSD_V4=y
 CONFIG_NFSD_V4_SECURITY_LABEL=y
 CONFIG_CIFS=m
-CONFIG_CIFS_STATS2=y
 CONFIG_CIFS_WEAK_PW_HASH=y
 CONFIG_CIFS_UPCALL=y
 CONFIG_CIFS_XATTR=y
···
 CONFIG_SECURITY_SELINUX_DISABLE=y
 CONFIG_SECURITY_LOCKDOWN_LSM=y
 CONFIG_SECURITY_LOCKDOWN_LSM_EARLY=y
+CONFIG_SECURITY_LANDLOCK=y
 CONFIG_INTEGRITY_SIGNATURE=y
 CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
 CONFIG_IMA=y
···
 CONFIG_CRYPTO_TEST=m
 CONFIG_CRYPTO_DH=m
 CONFIG_CRYPTO_ECDH=m
+CONFIG_CRYPTO_ECDSA=m
 CONFIG_CRYPTO_ECRDSA=m
 CONFIG_CRYPTO_SM2=m
 CONFIG_CRYPTO_CURVE25519=m
···
 CONFIG_FAIL_FUNCTION=y
 CONFIG_FAULT_INJECTION_STACKTRACE_FILTER=y
 CONFIG_LKDTM=m
-CONFIG_TEST_LIST_SORT=y
 CONFIG_TEST_MIN_HEAP=y
 CONFIG_TEST_SORT=y
 CONFIG_KPROBES_SANITY_TEST=y
···
 CONFIG_ATOMIC64_SELFTEST=y
 CONFIG_TEST_BITOPS=m
 CONFIG_TEST_BPF=m
+CONFIG_TEST_LIVEPATCH=m
+19 -8
arch/s390/configs/defconfig
···
 CONFIG_AUDIT=y
 CONFIG_NO_HZ_IDLE=y
 CONFIG_HIGH_RES_TIMERS=y
+CONFIG_BPF_SYSCALL=y
+CONFIG_BPF_JIT=y
+CONFIG_BPF_JIT_ALWAYS_ON=y
+CONFIG_BPF_LSM=y
+CONFIG_SCHED_CORE=y
 CONFIG_BSD_PROCESS_ACCT=y
 CONFIG_BSD_PROCESS_ACCT_V3=y
 CONFIG_TASKSTATS=y
···
 CONFIG_CGROUP_CPUACCT=y
 CONFIG_CGROUP_PERF=y
 CONFIG_CGROUP_BPF=y
+CONFIG_CGROUP_MISC=y
 CONFIG_NAMESPACES=y
 CONFIG_USER_NS=y
 CONFIG_CHECKPOINT_RESTORE=y
 CONFIG_SCHED_AUTOGROUP=y
 CONFIG_EXPERT=y
 # CONFIG_SYSFS_SYSCALL is not set
-CONFIG_BPF_LSM=y
-CONFIG_BPF_SYSCALL=y
 CONFIG_USERFAULTFD=y
 # CONFIG_COMPAT_BRK is not set
 CONFIG_PROFILING=y
···
 CONFIG_BLK_WBT=y
 CONFIG_BLK_CGROUP_IOLATENCY=y
 CONFIG_BLK_CGROUP_IOCOST=y
+CONFIG_BLK_CGROUP_IOPRIO=y
 CONFIG_BLK_INLINE_ENCRYPTION=y
 CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y
 CONFIG_PARTITION_ADVANCED=y
···
 CONFIG_TRANSPARENT_HUGEPAGE=y
 CONFIG_CLEANCACHE=y
 CONFIG_FRONTSWAP=y
+CONFIG_CMA_SYSFS=y
 CONFIG_CMA_AREAS=7
 CONFIG_MEM_SOFT_DIRTY=y
 CONFIG_ZSWAP=y
···
 CONFIG_MPTCP=y
 CONFIG_NETFILTER=y
 CONFIG_BRIDGE_NETFILTER=m
+CONFIG_NETFILTER_NETLINK_HOOK=m
 CONFIG_NF_CONNTRACK=m
 CONFIG_NF_CONNTRACK_SECMARK=y
 CONFIG_NF_CONNTRACK_EVENTS=y
···
 CONFIG_IP_VS_PE_SIP=m
 CONFIG_NFT_FIB_IPV4=m
 CONFIG_NF_TABLES_ARP=y
+CONFIG_NF_LOG_IPV4=m
 CONFIG_IP_NF_IPTABLES=m
 CONFIG_IP_NF_MATCH_AH=m
 CONFIG_IP_NF_MATCH_ECN=m
···
 CONFIG_VIRTIO_VSOCKETS=m
 CONFIG_NETLINK_DIAG=m
 CONFIG_CGROUP_NET_PRIO=y
-CONFIG_BPF_JIT=y
 CONFIG_NET_PKTGEN=m
 CONFIG_PCI=y
-CONFIG_PCI_IOV=y
 # CONFIG_PCIEASPM is not set
+CONFIG_PCI_IOV=y
 CONFIG_HOTPLUG_PCI=y
 CONFIG_HOTPLUG_PCI_S390=y
 CONFIG_UEVENT_HELPER=y
···
 CONFIG_MD_FAULTY=m
 CONFIG_MD_CLUSTER=m
 CONFIG_BCACHE=m
-CONFIG_BLK_DEV_DM=m
+CONFIG_BLK_DEV_DM=y
 CONFIG_DM_UNSTRIPED=m
 CONFIG_DM_CRYPT=m
 CONFIG_DM_SNAPSHOT=m
···
 CONFIG_DM_MULTIPATH_HST=m
 CONFIG_DM_MULTIPATH_IOA=m
 CONFIG_DM_DELAY=m
+CONFIG_DM_INIT=y
 CONFIG_DM_UEVENT=y
 CONFIG_DM_FLAKEY=m
 CONFIG_DM_VERITY=m
···
 # CONFIG_NET_VENDOR_GOOGLE is not set
 # CONFIG_NET_VENDOR_HUAWEI is not set
 # CONFIG_NET_VENDOR_INTEL is not set
+# CONFIG_NET_VENDOR_MICROSOFT is not set
 # CONFIG_NET_VENDOR_MARVELL is not set
 CONFIG_MLX4_EN=m
 CONFIG_MLX5_CORE=m
···
 CONFIG_LEGACY_PTY_COUNT=0
 CONFIG_VIRTIO_CONSOLE=m
 CONFIG_HW_RANDOM_VIRTIO=m
-CONFIG_RAW_DRIVER=m
 CONFIG_HANGCHECK_TIMER=m
 CONFIG_TN3270_FS=y
 # CONFIG_PTP_1588_CLOCK is not set
···
 CONFIG_VFIO=m
 CONFIG_VFIO_PCI=m
 CONFIG_VFIO_MDEV=m
-CONFIG_VFIO_MDEV_DEVICE=m
 CONFIG_VIRTIO_PCI=m
 CONFIG_VIRTIO_BALLOON=m
 CONFIG_VIRTIO_INPUT=y
···
 CONFIG_CUSE=m
 CONFIG_VIRTIO_FS=m
 CONFIG_OVERLAY_FS=m
+CONFIG_NETFS_STATS=y
 CONFIG_FSCACHE=m
 CONFIG_CACHEFILES=m
 CONFIG_ISO9660_FS=y
···
 CONFIG_NFSD_V4=y
 CONFIG_NFSD_V4_SECURITY_LABEL=y
 CONFIG_CIFS=m
-CONFIG_CIFS_STATS2=y
 CONFIG_CIFS_WEAK_PW_HASH=y
 CONFIG_CIFS_UPCALL=y
 CONFIG_CIFS_XATTR=y
···
 CONFIG_SECURITY_SELINUX_DISABLE=y
 CONFIG_SECURITY_LOCKDOWN_LSM=y
 CONFIG_SECURITY_LOCKDOWN_LSM_EARLY=y
+CONFIG_SECURITY_LANDLOCK=y
 CONFIG_INTEGRITY_SIGNATURE=y
 CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
 CONFIG_IMA=y
···
 CONFIG_CRYPTO_TEST=m
 CONFIG_CRYPTO_DH=m
 CONFIG_CRYPTO_ECDH=m
+CONFIG_CRYPTO_ECDSA=m
 CONFIG_CRYPTO_ECRDSA=m
 CONFIG_CRYPTO_SM2=m
 CONFIG_CRYPTO_CURVE25519=m
···
 CONFIG_DMA_CMA=y
 CONFIG_CMA_SIZE_MBYTES=0
 CONFIG_PRINTK_TIME=y
+CONFIG_DYNAMIC_DEBUG=y
 CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_INFO_DWARF4=y
 CONFIG_GDB_SCRIPTS=y
···
 CONFIG_PERCPU_TEST=m
 CONFIG_ATOMIC64_SELFTEST=y
 CONFIG_TEST_BPF=m
+CONFIG_TEST_LIVEPATCH=m
+1 -2
arch/s390/configs/zfcpdump_defconfig
···
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
 # CONFIG_COMPACTION is not set
 # CONFIG_MIGRATION is not set
-# CONFIG_BOUNCE is not set
 CONFIG_NET=y
 # CONFIG_IUCV is not set
+# CONFIG_PCPU_DEV_REFCNT is not set
 # CONFIG_ETHTOOL_NETLINK is not set
 CONFIG_DEVTMPFS=y
 CONFIG_BLK_DEV_RAM=y
···
 # CONFIG_SERIO is not set
 # CONFIG_HVC_IUCV is not set
 # CONFIG_HW_RANDOM_S390 is not set
-CONFIG_RAW_DRIVER=y
 # CONFIG_HMC_DRV is not set
 # CONFIG_S390_TAPE is not set
 # CONFIG_VMCP is not set
···
 	for_each_present_cpu(i) {
 		if (i == 0)
 			continue;
-		ret = hv_call_add_logical_proc(numa_cpu_node(i), i, i);
+		ret = hv_call_add_logical_proc(numa_cpu_node(i), i, cpu_physical_id(i));
 		BUG_ON(ret);
 	}

···
 }
 #endif

-#if CONFIG_PGTABLE_LEVELS > 3
 /**
  * pud_set_huge - setup kernel PUD mapping
  *
···
 }

 /**
- * pud_clear_huge - clear kernel PUD mapping when it is set
- *
- * Returns 1 on success and 0 on failure (no PUD map is found).
- */
-int pud_clear_huge(pud_t *pud)
-{
-	if (pud_large(*pud)) {
-		pud_clear(pud);
-		return 1;
-	}
-
-	return 0;
-}
-#endif
-
-#if CONFIG_PGTABLE_LEVELS > 2
-/**
  * pmd_set_huge - setup kernel PMD mapping
  *
  * See text over pud_set_huge() above.
···
 }

 /**
+ * pud_clear_huge - clear kernel PUD mapping when it is set
+ *
+ * Returns 1 on success and 0 on failure (no PUD map is found).
+ */
+int pud_clear_huge(pud_t *pud)
+{
+	if (pud_large(*pud)) {
+		pud_clear(pud);
+		return 1;
+	}
+
+	return 0;
+}
+
+/**
  * pmd_clear_huge - clear kernel PMD mapping when it is set
  *
  * Returns 1 on success and 0 on failure (no PMD map is found).
···

 	return 0;
 }
-#endif

 #ifdef CONFIG_X86_64
 /**
+1 -1
drivers/acpi/Kconfig
···
 config ACPI_TABLE_OVERRIDE_VIA_BUILTIN_INITRD
 	bool "Override ACPI tables from built-in initrd"
 	depends on ACPI_TABLE_UPGRADE
-	depends on INITRAMFS_SOURCE!="" && INITRAMFS_COMPRESSION=""
+	depends on INITRAMFS_SOURCE!="" && INITRAMFS_COMPRESSION_NONE
 	help
 	  This option provides functionality to override arbitrary ACPI tables
 	  from built-in uncompressed initrd.
+3 -4
drivers/acpi/utils.c
···
  * Return the next match of ACPI device if another matching device was present
  * at the moment of invocation, or NULL otherwise.
  *
- * FIXME: The function does not tolerate the sudden disappearance of @adev, e.g.
- * in the case of a hotplug event. That said, the caller should ensure that
- * this will never happen.
- *
  * The caller is responsible for invoking acpi_dev_put() on the returned device.
+ * On the other hand the function invokes acpi_dev_put() on the given @adev
+ * assuming that its reference counter had been increased beforehand.
  *
  * See additional information in acpi_dev_present() as well.
  */
···
 	match.hrv = hrv;

 	dev = bus_find_device(&acpi_bus_type, start, &match, acpi_dev_match_cb);
+	acpi_dev_put(adev);
 	return dev ? to_acpi_device(dev) : NULL;
 }
 EXPORT_SYMBOL(acpi_dev_get_next_match_dev);
+7-1
drivers/base/auxiliary.c
···
 int __auxiliary_driver_register(struct auxiliary_driver *auxdrv,
 				struct module *owner, const char *modname)
 {
+	int ret;
+
 	if (WARN_ON(!auxdrv->probe) || WARN_ON(!auxdrv->id_table))
 		return -EINVAL;
 
···
 	auxdrv->driver.bus = &auxiliary_bus_type;
 	auxdrv->driver.mod_name = modname;
 
-	return driver_register(&auxdrv->driver);
+	ret = driver_register(&auxdrv->driver);
+	if (ret)
+		kfree(auxdrv->driver.name);
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(__auxiliary_driver_register);
 
···
 			break;
 		if (!adev->pnp.unique_id && node->acpi.uid == 0)
 			break;
-		acpi_dev_put(adev);
 	}
 	if (!adev)
 		return -ENODEV;
+12-1
drivers/firmware/efi/efi.c
···
 static int efi_mem_reserve_iomem(phys_addr_t addr, u64 size)
 {
 	struct resource *res, *parent;
+	int ret;
 
 	res = kzalloc(sizeof(struct resource), GFP_ATOMIC);
 	if (!res)
···
 
 	/* we expect a conflict with a 'System RAM' region */
 	parent = request_resource_conflict(&iomem_resource, res);
-	return parent ? request_resource(parent, res) : 0;
+	ret = parent ? request_resource(parent, res) : 0;
+
+	/*
+	 * Given that efi_mem_reserve_iomem() can be called at any
+	 * time, only call memblock_reserve() if the architecture
+	 * keeps the infrastructure around.
+	 */
+	if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK) && !ret)
+		memblock_reserve(addr, size);
+
+	return ret;
 }
 
 int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
+2-2
drivers/firmware/efi/libstub/efi-stub-helper.c
···
  * @image:	EFI loaded image protocol
  * @load_addr:	pointer to loaded initrd
  * @load_size:	size of loaded initrd
- * @soft_limit:	preferred size of allocated memory for loading the initrd
- * @hard_limit:	minimum size of allocated memory
+ * @soft_limit:	preferred address for loading the initrd
+ * @hard_limit:	upper limit address for loading the initrd
  *
  * Return:	status code
  */
+4-1
drivers/firmware/efi/mokvar-table.c
···
 		pr_err("EFI MOKvar config table is not valid\n");
 		return;
 	}
-	efi_mem_reserve(efi.mokvar_table, map_size_needed);
+
+	if (md.type == EFI_BOOT_SERVICES_DATA)
+		efi_mem_reserve(efi.mokvar_table, map_size_needed);
+
 	efi_mokvar_table_size = map_size_needed;
 }
 
+5-3
drivers/firmware/efi/tpm.c
···
 	tbl_size = sizeof(*log_tbl) + log_tbl->size;
 	memblock_reserve(efi.tpm_log, tbl_size);
 
-	if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR ||
-	    log_tbl->version != EFI_TCG2_EVENT_LOG_FORMAT_TCG_2) {
-		pr_warn(FW_BUG "TPM Final Events table missing or invalid\n");
+	if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR) {
+		pr_info("TPM Final Events table not present\n");
+		goto out;
+	} else if (log_tbl->version != EFI_TCG2_EVENT_LOG_FORMAT_TCG_2) {
+		pr_warn(FW_BUG "TPM Final Events table invalid\n");
 		goto out;
 	}
 
···
 	if (bo->flags & AMDGPU_GEM_CREATE_NO_CPU_ACCESS)
 		return -EPERM;
 
+	/* Workaround for Thunk bug creating PROT_NONE,MAP_PRIVATE mappings
+	 * for debugger access to invisible VRAM. Should have used MAP_SHARED
+	 * instead. Clearing VM_MAYWRITE prevents the mapping from ever
+	 * becoming writable and makes is_cow_mapping(vm_flags) false.
+	 */
+	if (is_cow_mapping(vma->vm_flags) &&
+	    !(vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC)))
+		vma->vm_flags &= ~VM_MAYWRITE;
+
 	return drm_gem_ttm_mmap(obj, vma);
 }
 
···
 			 */
 			panel_mode = DP_PANEL_MODE_DEFAULT;
 		}
-	} else
-		panel_mode = DP_PANEL_MODE_DEFAULT;
+	}
 }
 #endif
···
 		}
 	}
 
-	if (link->dpcd_caps.panel_mode_edp) {
+	if (link->dpcd_caps.panel_mode_edp &&
+	    (link->connector_signal == SIGNAL_TYPE_EDP ||
+	     (link->connector_signal == SIGNAL_TYPE_DISPLAY_PORT &&
+	      link->is_internal_display))) {
 		return DP_PANEL_MODE_EDP;
 	}
···
 {
 	uint32_t default_backlight;
 
-	if (link &&
-	    (link->dpcd_sink_ext_caps.bits.hdr_aux_backlight_control == 1 ||
-	     link->dpcd_sink_ext_caps.bits.sdr_aux_backlight_control == 1)) {
+	if (link && link->dpcd_sink_ext_caps.bits.oled == 1) {
 		if (!dc_link_read_default_bl_aux(link, &default_backlight))
 			default_backlight = 150000;
 		// if < 5 nits or > 5000, it might be wrong readback
+1-1
drivers/gpu/drm/amd/display/dc/core/dc_resource.c
···
 	 * so use only 30 bpp on DCE_VERSION_11_0. Testing with DCE 11.2 and 8.3
 	 * did not show such problems, so this seems to be the exception.
 	 */
-	if (plane_state->ctx->dce_version != DCE_VERSION_11_0)
+	if (plane_state->ctx->dce_version > DCE_VERSION_11_0)
 		pipe_ctx->plane_res.scl_data.lb_params.depth = LB_PIXEL_DEPTH_36BPP;
 	else
 		pipe_ctx->plane_res.scl_data.lb_params.depth = LB_PIXEL_DEPTH_30BPP;
···
 	uint32_t ODM_MEM_PWR_CTRL3;
 	uint32_t DMU_MEM_PWR_CNTL;
 	uint32_t MMHUBBUB_MEM_PWR_CNTL;
+	uint32_t DCHUBBUB_ARB_HOSTVM_CNTL;
 };
 /* set field name */
 #define HWS_SF(blk_name, reg_name, field_name, post_fix)\
···
 	type DOMAIN_POWER_FORCEON;\
 	type DOMAIN_POWER_GATE;\
 	type DOMAIN_PGFSM_PWR_STATUS;\
-	type HPO_HDMISTREAMCLK_G_GATE_DIS;
+	type HPO_HDMISTREAMCLK_G_GATE_DIS;\
+	type DISABLE_HOSTVM_FORCE_ALLOW_PSTATE;
 
 struct dce_hwseq_shift {
 	HWSEQ_REG_FIELD_LIST(uint8_t)
···
 	/* DSCL processing pixel data in fixed or float format */
 	enum dscl_data_processing_format dscl_data_proc_format;
 
+	/* max LB partitions */
+	unsigned int max_lb_partitions;
+
 	/* Calculates the number of partitions in the line buffer.
 	 * The implementation of this function is overloaded for
 	 * different versions of DSCL LB.
···
 	MSG_MAP(DisableDeterminism, PPSMC_MSG_DisableDeterminism, 0),
 	MSG_MAP(SetUclkDpmMode, PPSMC_MSG_SetUclkDpmMode, 0),
 	MSG_MAP(GfxDriverResetRecovery, PPSMC_MSG_GfxDriverResetRecovery, 0),
+	MSG_MAP(BoardPowerCalibration, PPSMC_MSG_BoardPowerCalibration, 0),
 };
 
 static const struct cmn2asic_mapping aldebaran_clk_map[SMU_CLK_COUNT] = {
···
 	return ret;
 }
 
+static bool aldebaran_is_primary(struct smu_context *smu)
+{
+	struct amdgpu_device *adev = smu->adev;
+
+	if (adev->smuio.funcs && adev->smuio.funcs->get_die_id)
+		return adev->smuio.funcs->get_die_id(adev) == 0;
+
+	return true;
+}
+
+static int aldebaran_run_board_btc(struct smu_context *smu)
+{
+	u32 smu_version;
+	int ret;
+
+	if (!aldebaran_is_primary(smu))
+		return 0;
+
+	ret = smu_cmn_get_smc_version(smu, NULL, &smu_version);
+	if (ret) {
+		dev_err(smu->adev->dev, "Failed to get smu version!\n");
+		return ret;
+	}
+	if (smu_version <= 0x00441d00)
+		return 0;
+
+	ret = smu_cmn_send_smc_msg(smu, SMU_MSG_BoardPowerCalibration, NULL);
+	if (ret)
+		dev_err(smu->adev->dev, "Board power calibration failed!\n");
+
+	return ret;
+}
+
 static int aldebaran_run_btc(struct smu_context *smu)
 {
 	int ret;
···
 	ret = smu_cmn_send_smc_msg(smu, SMU_MSG_RunDcBtc, NULL);
 	if (ret)
 		dev_err(smu->adev->dev, "RunDcBtc failed!\n");
+	else
+		ret = aldebaran_run_board_btc(smu);
 
 	return ret;
 }
···
 				      int32_t frequency2)
 {
 	return (abs(frequency1 - frequency2) <= EPSILON);
-}
-
-static bool aldebaran_is_primary(struct smu_context *smu)
-{
-	struct amdgpu_device *adev = smu->adev;
-
-	if (adev->smuio.funcs && adev->smuio.funcs->get_die_id)
-		return adev->smuio.funcs->get_die_id(adev) == 0;
-
-	return true;
 }
 
 static int aldebaran_get_smu_metrics_data(struct smu_context *smu,
+3
drivers/gpu/drm/drm_ioctl.c
···
 	if (drm_dev_is_unplugged(dev))
 		return -ENODEV;
 
+	if (DRM_IOCTL_TYPE(cmd) != DRM_IOCTL_BASE)
+		return -ENOTTY;
+
 	is_driver_ioctl = nr >= DRM_COMMAND_BASE && nr < DRM_COMMAND_END;
 
 	if (is_driver_ioctl) {
+13-214
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
···
 #include "i915_gem_clflush.h"
 #include "i915_gem_context.h"
 #include "i915_gem_ioctls.h"
-#include "i915_sw_fence_work.h"
 #include "i915_trace.h"
 #include "i915_user_extensions.h"
-#include "i915_memcpy.h"
 
 struct eb_vma {
 	struct i915_vma *vma;
···
 	int err;
 	struct intel_engine_cs *engine = eb->engine;
 
+	/* If we need to copy for the cmdparser, we will stall anyway */
+	if (eb_use_cmdparser(eb))
+		return ERR_PTR(-EWOULDBLOCK);
+
 	if (!reloc_can_use_engine(engine)) {
 		engine = engine->gt->engine_class[COPY_ENGINE_CLASS][0];
 		if (!engine)
···
 	return vma;
 }
 
-struct eb_parse_work {
-	struct dma_fence_work base;
-	struct intel_engine_cs *engine;
-	struct i915_vma *batch;
-	struct i915_vma *shadow;
-	struct i915_vma *trampoline;
-	unsigned long batch_offset;
-	unsigned long batch_length;
-	unsigned long *jump_whitelist;
-	const void *batch_map;
-	void *shadow_map;
-};
-
-static int __eb_parse(struct dma_fence_work *work)
-{
-	struct eb_parse_work *pw = container_of(work, typeof(*pw), base);
-	int ret;
-	bool cookie;
-
-	cookie = dma_fence_begin_signalling();
-	ret = intel_engine_cmd_parser(pw->engine,
-				      pw->batch,
-				      pw->batch_offset,
-				      pw->batch_length,
-				      pw->shadow,
-				      pw->jump_whitelist,
-				      pw->shadow_map,
-				      pw->batch_map);
-	dma_fence_end_signalling(cookie);
-
-	return ret;
-}
-
-static void __eb_parse_release(struct dma_fence_work *work)
-{
-	struct eb_parse_work *pw = container_of(work, typeof(*pw), base);
-
-	if (!IS_ERR_OR_NULL(pw->jump_whitelist))
-		kfree(pw->jump_whitelist);
-
-	if (pw->batch_map)
-		i915_gem_object_unpin_map(pw->batch->obj);
-	else
-		i915_gem_object_unpin_pages(pw->batch->obj);
-
-	i915_gem_object_unpin_map(pw->shadow->obj);
-
-	if (pw->trampoline)
-		i915_active_release(&pw->trampoline->active);
-	i915_active_release(&pw->shadow->active);
-	i915_active_release(&pw->batch->active);
-}
-
-static const struct dma_fence_work_ops eb_parse_ops = {
-	.name = "eb_parse",
-	.work = __eb_parse,
-	.release = __eb_parse_release,
-};
-
-static inline int
-__parser_mark_active(struct i915_vma *vma,
-		     struct intel_timeline *tl,
-		     struct dma_fence *fence)
-{
-	struct intel_gt_buffer_pool_node *node = vma->private;
-
-	return i915_active_ref(&node->active, tl->fence_context, fence);
-}
-
-static int
-parser_mark_active(struct eb_parse_work *pw, struct intel_timeline *tl)
-{
-	int err;
-
-	mutex_lock(&tl->mutex);
-
-	err = __parser_mark_active(pw->shadow, tl, &pw->base.dma);
-	if (err)
-		goto unlock;
-
-	if (pw->trampoline) {
-		err = __parser_mark_active(pw->trampoline, tl, &pw->base.dma);
-		if (err)
-			goto unlock;
-	}
-
-unlock:
-	mutex_unlock(&tl->mutex);
-	return err;
-}
-
-static int eb_parse_pipeline(struct i915_execbuffer *eb,
-			     struct i915_vma *shadow,
-			     struct i915_vma *trampoline)
-{
-	struct eb_parse_work *pw;
-	struct drm_i915_gem_object *batch = eb->batch->vma->obj;
-	bool needs_clflush;
-	int err;
-
-	GEM_BUG_ON(overflows_type(eb->batch_start_offset, pw->batch_offset));
-	GEM_BUG_ON(overflows_type(eb->batch_len, pw->batch_length));
-
-	pw = kzalloc(sizeof(*pw), GFP_KERNEL);
-	if (!pw)
-		return -ENOMEM;
-
-	err = i915_active_acquire(&eb->batch->vma->active);
-	if (err)
-		goto err_free;
-
-	err = i915_active_acquire(&shadow->active);
-	if (err)
-		goto err_batch;
-
-	if (trampoline) {
-		err = i915_active_acquire(&trampoline->active);
-		if (err)
-			goto err_shadow;
-	}
-
-	pw->shadow_map = i915_gem_object_pin_map(shadow->obj, I915_MAP_WB);
-	if (IS_ERR(pw->shadow_map)) {
-		err = PTR_ERR(pw->shadow_map);
-		goto err_trampoline;
-	}
-
-	needs_clflush =
-		!(batch->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ);
-
-	pw->batch_map = ERR_PTR(-ENODEV);
-	if (needs_clflush && i915_has_memcpy_from_wc())
-		pw->batch_map = i915_gem_object_pin_map(batch, I915_MAP_WC);
-
-	if (IS_ERR(pw->batch_map)) {
-		err = i915_gem_object_pin_pages(batch);
-		if (err)
-			goto err_unmap_shadow;
-		pw->batch_map = NULL;
-	}
-
-	pw->jump_whitelist =
-		intel_engine_cmd_parser_alloc_jump_whitelist(eb->batch_len,
-							     trampoline);
-	if (IS_ERR(pw->jump_whitelist)) {
-		err = PTR_ERR(pw->jump_whitelist);
-		goto err_unmap_batch;
-	}
-
-	dma_fence_work_init(&pw->base, &eb_parse_ops);
-
-	pw->engine = eb->engine;
-	pw->batch = eb->batch->vma;
-	pw->batch_offset = eb->batch_start_offset;
-	pw->batch_length = eb->batch_len;
-	pw->shadow = shadow;
-	pw->trampoline = trampoline;
-
-	/* Mark active refs early for this worker, in case we get interrupted */
-	err = parser_mark_active(pw, eb->context->timeline);
-	if (err)
-		goto err_commit;
-
-	err = dma_resv_reserve_shared(pw->batch->resv, 1);
-	if (err)
-		goto err_commit;
-
-	err = dma_resv_reserve_shared(shadow->resv, 1);
-	if (err)
-		goto err_commit;
-
-	/* Wait for all writes (and relocs) into the batch to complete */
-	err = i915_sw_fence_await_reservation(&pw->base.chain,
-					      pw->batch->resv, NULL, false,
-					      0, I915_FENCE_GFP);
-	if (err < 0)
-		goto err_commit;
-
-	/* Keep the batch alive and unwritten as we parse */
-	dma_resv_add_shared_fence(pw->batch->resv, &pw->base.dma);
-
-	/* Force execution to wait for completion of the parser */
-	dma_resv_add_excl_fence(shadow->resv, &pw->base.dma);
-
-	dma_fence_work_commit_imm(&pw->base);
-	return 0;
-
-err_commit:
-	i915_sw_fence_set_error_once(&pw->base.chain, err);
-	dma_fence_work_commit_imm(&pw->base);
-	return err;
-
-err_unmap_batch:
-	if (pw->batch_map)
-		i915_gem_object_unpin_map(batch);
-	else
-		i915_gem_object_unpin_pages(batch);
-err_unmap_shadow:
-	i915_gem_object_unpin_map(shadow->obj);
-err_trampoline:
-	if (trampoline)
-		i915_active_release(&trampoline->active);
-err_shadow:
-	i915_active_release(&shadow->active);
-err_batch:
-	i915_active_release(&eb->batch->vma->active);
-err_free:
-	kfree(pw);
-	return err;
-}
-
 static struct i915_vma *eb_dispatch_secure(struct i915_execbuffer *eb, struct i915_vma *vma)
 {
 	/*
···
 		goto err_trampoline;
 	}
 
-	err = eb_parse_pipeline(eb, shadow, trampoline);
+	err = dma_resv_reserve_shared(shadow->resv, 1);
+	if (err)
+		goto err_trampoline;
+
+	err = intel_engine_cmd_parser(eb->engine,
+				      eb->batch->vma,
+				      eb->batch_start_offset,
+				      eb->batch_len,
+				      shadow, trampoline);
 	if (err)
 		goto err_unpin_batch;
···
 	if (drm_WARN_ON(&i915->drm, !engine))
 		return -EINVAL;
 
+	/*
+	 * Due to d3_entered is used to indicate skipping PPGTT invalidation on
+	 * vGPU reset, it's set on D0->D3 on PCI config write, and cleared after
+	 * vGPU reset if in resuming.
+	 * In S0ix exit, the device power state also transite from D3 to D0 as
+	 * S3 resume, but no vGPU reset (triggered by QEMU devic model). After
+	 * S0ix exit, all engines continue to work. However the d3_entered
+	 * remains set which will break next vGPU reset logic (miss the expected
+	 * PPGTT invalidation).
+	 * Engines can only work in D0. Thus the 1st elsp write gives GVT a
+	 * chance to clear d3_entered.
+	 */
+	if (vgpu->d3_entered)
+		vgpu->d3_entered = false;
+
 	execlist = &vgpu->submission.execlist[engine->id];
 
 	execlist->elsp_dwords.data[3 - execlist->elsp_dwords.index] = data;
+71-59
drivers/gpu/drm/i915/i915_cmd_parser.c
···
 static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
 		       struct drm_i915_gem_object *src_obj,
 		       unsigned long offset, unsigned long length,
-		       void *dst, const void *src)
+		       bool *needs_clflush_after)
 {
-	bool needs_clflush =
-		!(src_obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ);
+	unsigned int src_needs_clflush;
+	unsigned int dst_needs_clflush;
+	void *dst, *src;
+	int ret;
 
-	if (src) {
-		GEM_BUG_ON(!needs_clflush);
-		i915_unaligned_memcpy_from_wc(dst, src + offset, length);
-	} else {
-		struct scatterlist *sg;
+	ret = i915_gem_object_prepare_write(dst_obj, &dst_needs_clflush);
+	if (ret)
+		return ERR_PTR(ret);
+
+	dst = i915_gem_object_pin_map(dst_obj, I915_MAP_WB);
+	i915_gem_object_finish_access(dst_obj);
+	if (IS_ERR(dst))
+		return dst;
+
+	ret = i915_gem_object_prepare_read(src_obj, &src_needs_clflush);
+	if (ret) {
+		i915_gem_object_unpin_map(dst_obj);
+		return ERR_PTR(ret);
+	}
+
+	src = ERR_PTR(-ENODEV);
+	if (src_needs_clflush && i915_has_memcpy_from_wc()) {
+		src = i915_gem_object_pin_map(src_obj, I915_MAP_WC);
+		if (!IS_ERR(src)) {
+			i915_unaligned_memcpy_from_wc(dst,
+						      src + offset,
+						      length);
+			i915_gem_object_unpin_map(src_obj);
+		}
+	}
+	if (IS_ERR(src)) {
+		unsigned long x, n, remain;
 		void *ptr;
-		unsigned int x, sg_ofs;
-		unsigned long remain;
 
 		/*
 		 * We can avoid clflushing partial cachelines before the write
···
 		 * validate up to the end of the batch.
 		 */
 		remain = length;
-		if (!(dst_obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ))
+		if (dst_needs_clflush & CLFLUSH_BEFORE)
 			remain = round_up(remain,
 					  boot_cpu_data.x86_clflush_size);
 
 		ptr = dst;
 		x = offset_in_page(offset);
-		sg = i915_gem_object_get_sg(src_obj, offset >> PAGE_SHIFT, &sg_ofs, false);
+		for (n = offset >> PAGE_SHIFT; remain; n++) {
+			int len = min(remain, PAGE_SIZE - x);
 
-		while (remain) {
-			unsigned long sg_max = sg->length >> PAGE_SHIFT;
+			src = kmap_atomic(i915_gem_object_get_page(src_obj, n));
+			if (src_needs_clflush)
+				drm_clflush_virt_range(src + x, len);
+			memcpy(ptr, src + x, len);
+			kunmap_atomic(src);
 
-			for (; remain && sg_ofs < sg_max; sg_ofs++) {
-				unsigned long len = min(remain, PAGE_SIZE - x);
-				void *map;
-
-				map = kmap_atomic(nth_page(sg_page(sg), sg_ofs));
-				if (needs_clflush)
-					drm_clflush_virt_range(map + x, len);
-				memcpy(ptr, map + x, len);
-				kunmap_atomic(map);
-
-				ptr += len;
-				remain -= len;
-				x = 0;
-			}
-
-			sg_ofs = 0;
-			sg = sg_next(sg);
+			ptr += len;
+			remain -= len;
+			x = 0;
 		}
 	}
+
+	i915_gem_object_finish_access(src_obj);
 
 	memset32(dst + length, 0, (dst_obj->base.size - length) / sizeof(u32));
 
 	/* dst_obj is returned with vmap pinned */
+	*needs_clflush_after = dst_needs_clflush & CLFLUSH_AFTER;
+
 	return dst;
 }
···
 	if (target_cmd_index == offset)
 		return 0;
 
+	if (IS_ERR(jump_whitelist))
+		return PTR_ERR(jump_whitelist);
+
 	if (!test_bit(target_cmd_index, jump_whitelist)) {
 		DRM_DEBUG("CMD: BB_START to 0x%llx not a previously executed cmd\n",
 			  jump_target);
···
 	return 0;
 }
 
-/**
- * intel_engine_cmd_parser_alloc_jump_whitelist() - preallocate jump whitelist for intel_engine_cmd_parser()
- * @batch_length: length of the commands in batch_obj
- * @trampoline: Whether jump trampolines are used.
- *
- * Preallocates a jump whitelist for parsing the cmd buffer in intel_engine_cmd_parser().
- * This has to be preallocated, because the command parser runs in signaling context,
- * and may not allocate any memory.
- *
- * Return: NULL or pointer to a jump whitelist, or ERR_PTR() on failure. Use
- * IS_ERR() to check for errors. Must bre freed() with kfree().
- *
- * NULL is a valid value, meaning no allocation was required.
- */
-unsigned long *intel_engine_cmd_parser_alloc_jump_whitelist(u32 batch_length,
-							    bool trampoline)
+static unsigned long *alloc_whitelist(u32 batch_length)
 {
 	unsigned long *jmp;
-
-	if (trampoline)
-		return NULL;
 
 	/*
 	 * We expect batch_length to be less than 256KiB for known users,
···
  * @batch_offset: byte offset in the batch at which execution starts
  * @batch_length: length of the commands in batch_obj
  * @shadow: validated copy of the batch buffer in question
- * @jump_whitelist: buffer preallocated with intel_engine_cmd_parser_alloc_jump_whitelist()
- * @shadow_map: mapping to @shadow vma
- * @batch_map: mapping to @batch vma
+ * @trampoline: true if we need to trampoline into privileged execution
 *
 * Parses the specified batch buffer looking for privilege violations as
 * described in the overview.
···
 * Return: non-zero if the parser finds violations or otherwise fails; -EACCES
 * if the batch appears legal but should use hardware parsing
 */
+
 int intel_engine_cmd_parser(struct intel_engine_cs *engine,
 			    struct i915_vma *batch,
 			    unsigned long batch_offset,
 			    unsigned long batch_length,
 			    struct i915_vma *shadow,
-			    unsigned long *jump_whitelist,
-			    void *shadow_map,
-			    const void *batch_map)
+			    bool trampoline)
 {
 	u32 *cmd, *batch_end, offset = 0;
 	struct drm_i915_cmd_descriptor default_desc = noop_desc;
 	const struct drm_i915_cmd_descriptor *desc = &default_desc;
+	bool needs_clflush_after = false;
+	unsigned long *jump_whitelist;
 	u64 batch_addr, shadow_addr;
 	int ret = 0;
-	bool trampoline = !jump_whitelist;
 
 	GEM_BUG_ON(!IS_ALIGNED(batch_offset, sizeof(*cmd)));
 	GEM_BUG_ON(!IS_ALIGNED(batch_length, sizeof(*cmd)));
···
 			     batch->size));
 	GEM_BUG_ON(!batch_length);
 
-	cmd = copy_batch(shadow->obj, batch->obj, batch_offset, batch_length,
-			 shadow_map, batch_map);
+	cmd = copy_batch(shadow->obj, batch->obj,
+			 batch_offset, batch_length,
+			 &needs_clflush_after);
+	if (IS_ERR(cmd)) {
+		DRM_DEBUG("CMD: Failed to copy batch\n");
+		return PTR_ERR(cmd);
+	}
+
+	jump_whitelist = NULL;
+	if (!trampoline)
+		/* Defer failure until attempted use */
+		jump_whitelist = alloc_whitelist(batch_length);
 
 	shadow_addr = gen8_canonical_addr(shadow->node.start);
 	batch_addr = gen8_canonical_addr(batch->node.start + batch_offset);
···
 
 	i915_gem_object_flush_map(shadow->obj);
 
+	if (!IS_ERR_OR_NULL(jump_whitelist))
+		kfree(jump_whitelist);
+	i915_gem_object_unpin_map(shadow->obj);
 	return ret;
 }
 
···
 
 	do {
 		fence = *child++;
-		if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
-			i915_sw_fence_set_error_once(&rq->submit, fence->error);
+		if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
 			continue;
-		}
 
 		if (fence->context == rq->fence.context)
 			continue;
···
 
 	do {
 		fence = *child++;
-		if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
-			i915_sw_fence_set_error_once(&rq->submit, fence->error);
+		if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
 			continue;
-		}
 
 		/*
 		 * Requests on the same timeline are explicitly ordered, along
+6
drivers/gpu/drm/nouveau/nouveau_bo.c
···
 	 */
 	if (bo->base.dev)
 		drm_gem_object_release(&bo->base);
+	else
+		dma_resv_fini(&bo->base._resv);
 
 	kfree(nvbo);
 }
···
 				tile_flags);
 	if (IS_ERR(nvbo))
 		return PTR_ERR(nvbo);
+
+	nvbo->bo.base.size = size;
+	dma_resv_init(&nvbo->bo.base._resv);
+	drm_vma_node_reset(&nvbo->bo.base.vma_node);
 
 	ret = nouveau_bo_init(nvbo, size, align, domain, sg, robj);
 	if (ret)
···
 	 */
 	mutex_lock(&vmbus_connection.channel_mutex);
 
+	list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) {
+		if (guid_equal(&channel->offermsg.offer.if_type,
+			       &newchannel->offermsg.offer.if_type) &&
+		    guid_equal(&channel->offermsg.offer.if_instance,
+			       &newchannel->offermsg.offer.if_instance)) {
+			fnew = false;
+			newchannel->primary_channel = channel;
+			break;
+		}
+	}
+
 	init_vp_index(newchannel);
 
 	/* Remember the channels that should be cleaned up upon suspend. */
···
 	 * we can release the potentially racing rescind thread.
 	 */
 	atomic_dec(&vmbus_connection.offer_in_progress);
-
-	list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) {
-		if (guid_equal(&channel->offermsg.offer.if_type,
-			       &newchannel->offermsg.offer.if_type) &&
-		    guid_equal(&channel->offermsg.offer.if_instance,
-			       &newchannel->offermsg.offer.if_instance)) {
-			fnew = false;
-			break;
-		}
-	}
 
 	if (fnew) {
 		list_add_tail(&newchannel->listentry,
···
 		/*
 		 * Process the sub-channel.
 		 */
-		newchannel->primary_channel = channel;
 		list_add_tail(&newchannel->sc_list, &channel->sc_list);
 	}
 
···
 }
 
 /*
+ * Check if CPUs used by other channels of the same device.
+ * It should only be called by init_vp_index().
+ */
+static bool hv_cpuself_used(u32 cpu, struct vmbus_channel *chn)
+{
+	struct vmbus_channel *primary = chn->primary_channel;
+	struct vmbus_channel *sc;
+
+	lockdep_assert_held(&vmbus_connection.channel_mutex);
+
+	if (!primary)
+		return false;
+
+	if (primary->target_cpu == cpu)
+		return true;
+
+	list_for_each_entry(sc, &primary->sc_list, sc_list)
+		if (sc != chn && sc->target_cpu == cpu)
+			return true;
+
+	return false;
+}
+
+/*
  * We use this state to statically distribute the channel interrupt load.
  */
 static int next_numa_node_id;
···
 static void init_vp_index(struct vmbus_channel *channel)
 {
 	bool perf_chn = hv_is_perf_channel(channel);
+	u32 i, ncpu = num_online_cpus();
 	cpumask_var_t available_mask;
 	struct cpumask *alloced_mask;
 	u32 target_cpu;
···
 		return;
 	}
 
-	while (true) {
-		numa_node = next_numa_node_id++;
-		if (numa_node == nr_node_ids) {
-			next_numa_node_id = 0;
-			continue;
+	for (i = 1; i <= ncpu + 1; i++) {
+		while (true) {
+			numa_node = next_numa_node_id++;
+			if (numa_node == nr_node_ids) {
+				next_numa_node_id = 0;
+				continue;
+			}
+			if (cpumask_empty(cpumask_of_node(numa_node)))
+				continue;
+			break;
 		}
-		if (cpumask_empty(cpumask_of_node(numa_node)))
-			continue;
-		break;
+		alloced_mask = &hv_context.hv_numa_map[numa_node];
+
+		if (cpumask_weight(alloced_mask) ==
+		    cpumask_weight(cpumask_of_node(numa_node))) {
+			/*
+			 * We have cycled through all the CPUs in the node;
+			 * reset the alloced map.
+			 */
+			cpumask_clear(alloced_mask);
+		}
+
+		cpumask_xor(available_mask, alloced_mask,
+			    cpumask_of_node(numa_node));
+
+		target_cpu = cpumask_first(available_mask);
+		cpumask_set_cpu(target_cpu, alloced_mask);
+
+		if (channel->offermsg.offer.sub_channel_index >= ncpu ||
+		    i > ncpu || !hv_cpuself_used(target_cpu, channel))
+			break;
 	}
-	alloced_mask = &hv_context.hv_numa_map[numa_node];
-
-	if (cpumask_weight(alloced_mask) ==
-	    cpumask_weight(cpumask_of_node(numa_node))) {
-		/*
-		 * We have cycled through all the CPUs in the node;
-		 * reset the alloced map.
-		 */
-		cpumask_clear(alloced_mask);
-	}
-
-	cpumask_xor(available_mask, alloced_mask, cpumask_of_node(numa_node));
-
-	target_cpu = cpumask_first(available_mask);
-	cpumask_set_cpu(target_cpu, alloced_mask);
 
 	channel->target_cpu = target_cpu;
 
+2-2
drivers/i2c/busses/i2c-mpc.c
···
 
 	status = readb(i2c->base + MPC_I2C_SR);
 	if (status & CSR_MIF) {
-		/* Read again to allow register to stabilise */
-		status = readb(i2c->base + MPC_I2C_SR);
+		/* Wait up to 100us for transfer to properly complete */
+		readb_poll_timeout(i2c->base + MPC_I2C_SR, status, !(status & CSR_MCF), 0, 100);
 		writeb(0, i2c->base + MPC_I2C_SR);
 		mpc_i2c_do_intr(i2c, status);
 		return IRQ_HANDLED;
+2-4
drivers/media/pci/intel/ipu3/cio2-bridge.c
···
 	int ret;
 
 	for_each_acpi_dev_match(adev, cfg->hid, NULL, -1) {
-		if (!adev->status.enabled) {
-			acpi_dev_put(adev);
+		if (!adev->status.enabled)
 			continue;
-		}
 
 		if (bridge->n_sensors >= CIO2_NUM_PORTS) {
 			acpi_dev_put(adev);
···
 		}
 
 		sensor = &bridge->sensors[bridge->n_sensors];
-		sensor->adev = adev;
 		strscpy(sensor->name, cfg->hid, sizeof(sensor->name));
 
 		ret = cio2_bridge_read_acpi_buffer(adev, "SSDB",
···
 			goto err_free_swnodes;
 		}
 
+		sensor->adev = acpi_dev_get(adev);
 		adev->fwnode.secondary = fwnode;
 
 		dev_info(&cio2->dev, "Found supported sensor %s\n",
···
 	}
 
 	/*
-	 * If the 'label' property is not present for the AT24 EEPROM,
-	 * then nvmem_config.id is initialised to NVMEM_DEVID_AUTO,
-	 * and this will append the 'devid' to the name of the NVMEM
-	 * device. This is purely legacy and the AT24 driver has always
-	 * defaulted to this. However, if the 'label' property is
-	 * present then this means that the name is specified by the
-	 * firmware and this name should be used verbatim and so it is
-	 * not necessary to append the 'devid'.
+	 * We initialize nvmem_config.id to NVMEM_DEVID_AUTO even if the
+	 * label property is set as some platform can have multiple eeproms
+	 * with same label and we can not register each of those with same
+	 * label. Failing to register those eeproms trigger cascade failure
+	 * on such platform.
 	 */
+	nvmem_config.id = NVMEM_DEVID_AUTO;
+
 	if (device_property_present(dev, "label")) {
-		nvmem_config.id = NVMEM_DEVID_NONE;
 		err = device_property_read_string(dev, "label",
 						  &nvmem_config.name);
 		if (err)
 			return err;
 	} else {
-		nvmem_config.id = NVMEM_DEVID_AUTO;
 		nvmem_config.name = dev_name(dev);
 	}
 
+22-15
drivers/mmc/core/block.c
···
 #include <linux/errno.h>
 #include <linux/hdreg.h>
 #include <linux/kdev_t.h>
+#include <linux/kref.h>
 #include <linux/blkdev.h>
 #include <linux/cdev.h>
 #include <linux/mutex.h>
···
 #define MMC_BLK_CMD23	(1 << 0)	/* Can do SET_BLOCK_COUNT for multiblock */
 #define MMC_BLK_REL_WR	(1 << 1)	/* MMC Reliable write support */

-	unsigned int	usage;
+	struct kref	kref;
 	unsigned int	read_only;
 	unsigned int	part_type;
 	unsigned int	reset_done;
···

 	mutex_lock(&open_lock);
 	md = disk->private_data;
-	if (md && md->usage == 0)
+	if (md && !kref_get_unless_zero(&md->kref))
 		md = NULL;
-	if (md)
-		md->usage++;
 	mutex_unlock(&open_lock);

 	return md;
···
 	return devidx;
 }

+static void mmc_blk_kref_release(struct kref *ref)
+{
+	struct mmc_blk_data *md = container_of(ref, struct mmc_blk_data, kref);
+	int devidx;
+
+	devidx = mmc_get_devidx(md->disk);
+	ida_simple_remove(&mmc_blk_ida, devidx);
+
+	mutex_lock(&open_lock);
+	md->disk->private_data = NULL;
+	mutex_unlock(&open_lock);
+
+	put_disk(md->disk);
+	kfree(md);
+}
+
 static void mmc_blk_put(struct mmc_blk_data *md)
 {
-	mutex_lock(&open_lock);
-	md->usage--;
-	if (md->usage == 0) {
-		int devidx = mmc_get_devidx(md->disk);
-
-		ida_simple_remove(&mmc_blk_ida, devidx);
-		put_disk(md->disk);
-		kfree(md);
-	}
-	mutex_unlock(&open_lock);
+	kref_put(&md->kref, mmc_blk_kref_release);
 }

 static ssize_t power_ro_lock_show(struct device *dev,
···
 	INIT_LIST_HEAD(&md->part);
 	INIT_LIST_HEAD(&md->rpmbs);
-	md->usage = 1;
+	kref_init(&md->kref);
+
 	md->queue.blkdata = md;

 	md->disk->major	= MMC_BLOCK_MAJOR;
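The mmc_blk conversion above replaces an open-coded usage counter with a kref: a lookup that must fail once the object is dying, and a put that runs the release exactly once. The core of that pattern can be sketched in userspace C11 atomics — the names (`obj_get_unless_zero`, `obj_put`, etc.) are illustrative stand-ins, not the kernel kref API:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative userspace model of the kref pattern, not kernel code. */
struct obj {
	atomic_int refcount;
	bool released;
};

static void obj_init(struct obj *o)
{
	atomic_store(&o->refcount, 1);	/* like kref_init(): start at 1 */
	o->released = false;
}

/* like kref_get_unless_zero(): refuse a new reference once it hit 0 */
static bool obj_get_unless_zero(struct obj *o)
{
	int c = atomic_load(&o->refcount);

	while (c != 0) {
		/* on failure, c is reloaded with the current value */
		if (atomic_compare_exchange_weak(&o->refcount, &c, c + 1))
			return true;
	}
	return false;
}

/* like kref_put(): run the release callback on the final put only */
static void obj_put(struct obj *o, void (*release)(struct obj *))
{
	if (atomic_fetch_sub(&o->refcount, 1) == 1)
		release(o);
}

static void obj_release(struct obj *o)
{
	o->released = true;	/* stand-in for put_disk()/kfree() */
}
```

The point of `get_unless_zero` is exactly the race the patch closes: a concurrent open must not resurrect an object whose final put is already running.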
···
 		return bond_event_changename(event_bond);
 	case NETDEV_UNREGISTER:
 		bond_remove_proc_entry(event_bond);
+#ifdef CONFIG_XFRM_OFFLOAD
 		xfrm_dev_state_flush(dev_net(bond_dev), bond_dev, true);
+#endif /* CONFIG_XFRM_OFFLOAD */
 		break;
 	case NETDEV_REGISTER:
 		bond_create_proc_entry(event_bond);
+2
drivers/net/dsa/mt7530.c
···
 	int i;

 	reg[1] |= vid & CVID_MASK;
+	if (vid > 1)
+		reg[1] |= ATA2_IVL;
 	reg[2] |= (aging & AGE_TIMER_MASK) << AGE_TIMER;
 	reg[2] |= (port_mask & PORT_MAP_MASK) << PORT_MAP;
 	/* STATIC_ENT indicate that entry is static wouldn't
+1
drivers/net/dsa/mt7530.h
···
 #define  STATIC_EMP			0
 #define  STATIC_ENT			3
 #define MT7530_ATA2			0x78
+#define  ATA2_IVL			BIT(15)

 /* Register for address table write data */
 #define MT7530_ATWD			0x7c
+1-1
drivers/net/dsa/mv88e6xxx/Kconfig
···
 config NET_DSA_MV88E6XXX_PTP
 	bool "PTP support for Marvell 88E6xxx"
 	default n
-	depends on PTP_1588_CLOCK
+	depends on NET_DSA_MV88E6XXX && PTP_1588_CLOCK
 	help
 	  Say Y to enable PTP hardware timestamping on Marvell 88E6xxx switch
 	  chips that support it.
+6
drivers/net/dsa/sja1105/sja1105_main.c
···
 		if (dsa_is_cpu_port(ds, port))
 			v->pvid = true;
 		list_add(&v->list, &priv->dsa_8021q_vlans);
+
+		v = kmemdup(v, sizeof(*v), GFP_KERNEL);
+		if (!v)
+			return -ENOMEM;
+
+		list_add(&v->list, &priv->bridge_vlans);
 	}

 	((struct sja1105_vlan_lookup_entry *)table->entries)[0] = pvid;
···
 	return resp_code ? -resp_code : 0;
 }

+#define HCLGEVF_MBX_MATCH_ID_START	1
 static void hclgevf_reset_mbx_resp_status(struct hclgevf_dev *hdev)
 {
 	/* this function should be called with mbx_resp.mbx_mutex held
···
 	hdev->mbx_resp.received_resp  = false;
 	hdev->mbx_resp.origin_mbx_msg = 0;
 	hdev->mbx_resp.resp_status    = 0;
+	hdev->mbx_resp.match_id++;
+	/* Update match_id and ensure the value of match_id is not zero */
+	if (hdev->mbx_resp.match_id == 0)
+		hdev->mbx_resp.match_id = HCLGEVF_MBX_MATCH_ID_START;
 	memset(hdev->mbx_resp.additional_info, 0, HCLGE_MBX_MAX_RESP_DATA_SIZE);
 }

···
 	if (need_resp) {
 		mutex_lock(&hdev->mbx_resp.mbx_mutex);
 		hclgevf_reset_mbx_resp_status(hdev);
+		req->match_id = hdev->mbx_resp.match_id;
 		status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
 		if (status) {
 			dev_err(&hdev->pdev->dev,
···
 			for (i = 0; i < HCLGE_MBX_MAX_RESP_DATA_SIZE; i++) {
 				resp->additional_info[i] = *temp;
 				temp++;
+			}
+
+			/* If match_id is not zero, it means PF support
+			 * match_id. If the match_id is right, VF get the
+			 * right response, otherwise ignore the response.
+			 * Driver will clear hdev->mbx_resp when send
+			 * next message which need response.
+			 */
+			if (req->match_id) {
+				if (req->match_id == resp->match_id)
+					resp->received_resp = true;
+			} else {
+				resp->received_resp = true;
 			}
 			break;
 		case HCLGE_MBX_LINK_STAT_CHANGE:
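The hns3 change above pairs each mailbox request with a match_id so that a late response to an earlier request is discarded; zero is reserved to mean the peer does not support matching. The ID scheme can be sketched standalone — the helper names and the 8-bit width here are illustrative, not the driver's actual definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of the match_id scheme, not the hns3 driver API. */
#define MATCH_ID_START 1	/* zero is reserved for "no match support" */

/* Bump the ID for a new request, skipping zero on wraparound. */
static uint8_t next_match_id(uint8_t id)
{
	id++;
	if (id == 0)
		id = MATCH_ID_START;
	return id;
}

/* Accept a response only if it pairs with the outstanding request. */
static bool response_matches(uint8_t req_id, uint8_t resp_id)
{
	if (req_id == 0)	/* legacy peer: accept unconditionally */
		return true;
	return req_id == resp_id;
}
```

Reserving zero keeps the protocol backward compatible: an old PF that never fills in match_id still gets its responses accepted.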
···
 	return rvu_detach_rsrcs(rvu, detach, detach->hdr.pcifunc);
 }

-static int rvu_get_nix_blkaddr(struct rvu *rvu, u16 pcifunc)
+int rvu_get_nix_blkaddr(struct rvu *rvu, u16 pcifunc)
 {
 	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
 	int blkaddr = BLKADDR_NIX0, vf;
···
 	if (!vfs)
 		return 0;

+	/* LBK channel number 63 is used for switching packets between
+	 * CGX mapped VFs. Hence limit LBK pairs till 62 only.
+	 */
+	if (vfs > 62)
+		vfs = 62;
+
 	/* Save VFs number for reference in VF interrupts handlers.
 	 * Since interrupts might start arriving during SRIOV enablement
 	 * ordinary API cannot be used to get number of enabled VFs.
···

 	/* Initialize debugfs */
 	rvu_dbg_init(rvu);
+
+	mutex_init(&rvu->rswitch.switch_lock);

 	return 0;
 err_dl:
···
 	bool was_suspend = !!(dev->ctrl.ctrl_config & NVME_CC_SHN_NORMAL);
 	int result;

-	if (WARN_ON(dev->ctrl.state != NVME_CTRL_RESETTING)) {
+	if (dev->ctrl.state != NVME_CTRL_RESETTING) {
+		dev_warn(dev->ctrl.device, "ctrl state %d is not RESETTING\n",
+			 dev->ctrl.state);
 		result = -ENODEV;
 		goto out;
 	}
···
  * struct bcm2835_spi - BCM2835 SPI controller
  * @regs: base address of register map
  * @clk: core clock, divided to calculate serial clock
+ * @clk_hz: core clock cached speed
  * @irq: interrupt, signals TX FIFO empty or RX FIFO ¾ full
  * @tfr: SPI transfer currently processed
  * @ctlr: SPI controller reverse lookup
···
 struct bcm2835_spi {
 	void __iomem *regs;
 	struct clk *clk;
+	unsigned long clk_hz;
 	int irq;
 	struct spi_transfer *tfr;
 	struct spi_controller *ctlr;
···
 {
 	struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr);
 	struct bcm2835_spidev *slv = spi_get_ctldata(spi);
-	unsigned long spi_hz, clk_hz, cdiv;
+	unsigned long spi_hz, cdiv;
 	unsigned long hz_per_byte, byte_limit;
 	u32 cs = slv->prepare_cs;

 	/* set clock */
 	spi_hz = tfr->speed_hz;
-	clk_hz = clk_get_rate(bs->clk);

-	if (spi_hz >= clk_hz / 2) {
+	if (spi_hz >= bs->clk_hz / 2) {
 		cdiv = 2; /* clk_hz/2 is the fastest we can go */
 	} else if (spi_hz) {
 		/* CDIV must be a multiple of two */
-		cdiv = DIV_ROUND_UP(clk_hz, spi_hz);
+		cdiv = DIV_ROUND_UP(bs->clk_hz, spi_hz);
 		cdiv += (cdiv % 2);

 		if (cdiv >= 65536)
···
 	} else {
 		cdiv = 0; /* 0 is the slowest we can go */
 	}
-	tfr->effective_speed_hz = cdiv ? (clk_hz / cdiv) : (clk_hz / 65536);
+	tfr->effective_speed_hz = cdiv ? (bs->clk_hz / cdiv) : (bs->clk_hz / 65536);
 	bcm2835_wr(bs, BCM2835_SPI_CLK, cdiv);

 	/* handle all the 3-wire mode */
···
 		return bs->irq ? bs->irq : -ENODEV;

 	clk_prepare_enable(bs->clk);
+	bs->clk_hz = clk_get_rate(bs->clk);

 	err = bcm2835_dma_init(ctlr, &pdev->dev, bs);
 	if (err)
+17-13
drivers/spi/spi-cadence-quadspi.c
···
 {
 	unsigned int dummy_clk;

+	if (!op->dummy.nbytes)
+		return 0;
+
 	dummy_clk = op->dummy.nbytes * (8 / op->dummy.buswidth);
 	if (dtr)
 		dummy_clk /= 2;
···
 	reg = cqspi_calc_rdreg(f_pdata);
 	writel(reg, reg_base + CQSPI_REG_RD_INSTR);

-	if (f_pdata->dtr) {
-		/*
-		 * Some flashes like the cypress Semper flash expect a 4-byte
-		 * dummy address with the Read SR command in DTR mode, but this
-		 * controller does not support sending address with the Read SR
-		 * command. So, disable write completion polling on the
-		 * controller's side. spi-nor will take care of polling the
-		 * status register.
-		 */
-		reg = readl(reg_base + CQSPI_REG_WR_COMPLETION_CTRL);
-		reg |= CQSPI_REG_WR_DISABLE_AUTO_POLL;
-		writel(reg, reg_base + CQSPI_REG_WR_COMPLETION_CTRL);
-	}
+	/*
+	 * SPI NAND flashes require the address of the status register to be
+	 * passed in the Read SR command. Also, some SPI NOR flashes like the
+	 * cypress Semper flash expect a 4-byte dummy address in the Read SR
+	 * command in DTR mode.
+	 *
+	 * But this controller does not support address phase in the Read SR
+	 * command when doing auto-HW polling. So, disable write completion
+	 * polling on the controller's side. spinand and spi-nor will take
+	 * care of polling the status register.
+	 */
+	reg = readl(reg_base + CQSPI_REG_WR_COMPLETION_CTRL);
+	reg |= CQSPI_REG_WR_DISABLE_AUTO_POLL;
+	writel(reg, reg_base + CQSPI_REG_WR_COMPLETION_CTRL);

 	reg = readl(reg_base + CQSPI_REG_SIZE);
 	reg &= ~CQSPI_REG_SIZE_ADDRESS_MASK;
···
 {
 	struct spi_device *spi = msg->spi;
 	u32 ctrl = MX51_ECSPI_CTRL_ENABLE;
-	u32 testreg;
+	u32 testreg, delay;
 	u32 cfg = readl(spi_imx->base + MX51_ECSPI_CONFIG);

 	/* set Master or Slave mode */
···

 	writel(cfg, spi_imx->base + MX51_ECSPI_CONFIG);

+	/*
+	 * Wait until the changes in the configuration register CONFIGREG
+	 * propagate into the hardware. It takes exactly one tick of the
+	 * SCLK clock, but we will wait two SCLK clock just to be sure. The
+	 * effect of the delay it takes for the hardware to apply changes
+	 * is noticable if the SCLK clock run very slow. In such a case, if
+	 * the polarity of SCLK should be inverted, the GPIO ChipSelect might
+	 * be asserted before the SCLK polarity changes, which would disrupt
+	 * the SPI communication as the device on the other end would consider
+	 * the change of SCLK polarity as a clock tick already.
+	 */
+	delay = (2 * 1000000) / spi_imx->spi_bus_clk;
+	if (likely(delay < 10))	/* SCLK is faster than 100 kHz */
+		udelay(delay);
+	else			/* SCLK is _very_ slow */
+		usleep_range(delay, delay + 10);
+
 	return 0;
 }
···
 				 struct spi_device *spi)
 {
 	u32 ctrl = readl(spi_imx->base + MX51_ECSPI_CTRL);
-	u32 clk, delay;
+	u32 clk;

 	/* Clear BL field and set the right value */
 	ctrl &= ~MX51_ECSPI_CTRL_BL_MASK;
···
 		ctrl |= MX51_ECSPI_CTRL_SMC;

 	writel(ctrl, spi_imx->base + MX51_ECSPI_CTRL);
-
-	/*
-	 * Wait until the changes in the configuration register CONFIGREG
-	 * propagate into the hardware. It takes exactly one tick of the
-	 * SCLK clock, but we will wait two SCLK clock just to be sure. The
-	 * effect of the delay it takes for the hardware to apply changes
-	 * is noticable if the SCLK clock run very slow. In such a case, if
-	 * the polarity of SCLK should be inverted, the GPIO ChipSelect might
-	 * be asserted before the SCLK polarity changes, which would disrupt
-	 * the SPI communication as the device on the other end would consider
-	 * the change of SCLK polarity as a clock tick already.
-	 */
-	delay = (2 * 1000000) / clk;
-	if (likely(delay < 10))	/* SCLK is faster than 100 kHz */
-		udelay(delay);
-	else			/* SCLK is _very_ slow */
-		usleep_range(delay, delay + 10);

 	return 0;
 }
+19-9
drivers/spi/spi-mt65xx.c
···
 	mtk_spi_setup_packet(master);

 	cnt = xfer->len / 4;
-	iowrite32_rep(mdata->base + SPI_TX_DATA_REG, xfer->tx_buf, cnt);
+	if (xfer->tx_buf)
+		iowrite32_rep(mdata->base + SPI_TX_DATA_REG, xfer->tx_buf, cnt);
+
+	if (xfer->rx_buf)
+		ioread32_rep(mdata->base + SPI_RX_DATA_REG, xfer->rx_buf, cnt);

 	remainder = xfer->len % 4;
 	if (remainder > 0) {
 		reg_val = 0;
-		memcpy(&reg_val, xfer->tx_buf + (cnt * 4), remainder);
-		writel(reg_val, mdata->base + SPI_TX_DATA_REG);
+		if (xfer->tx_buf) {
+			memcpy(&reg_val, xfer->tx_buf + (cnt * 4), remainder);
+			writel(reg_val, mdata->base + SPI_TX_DATA_REG);
+		}
+		if (xfer->rx_buf) {
+			reg_val = readl(mdata->base + SPI_RX_DATA_REG);
+			memcpy(xfer->rx_buf + (cnt * 4), &reg_val, remainder);
+		}
 	}

 	mtk_spi_enable_transfer(master);
···

 	pm_runtime_enable(&pdev->dev);

-	ret = devm_spi_register_master(&pdev->dev, master);
-	if (ret) {
-		dev_err(&pdev->dev, "failed to register master (%d)\n", ret);
-		goto err_disable_runtime_pm;
-	}
-
 	if (mdata->dev_comp->need_pad_sel) {
 		if (mdata->pad_num != master->num_chipselect) {
 			dev_err(&pdev->dev,
···
 	if (ret)
 		dev_notice(&pdev->dev, "SPI dma_set_mask(%d) failed, ret:%d\n",
 			   addr_bits, ret);
+
+	ret = devm_spi_register_master(&pdev->dev, master);
+	if (ret) {
+		dev_err(&pdev->dev, "failed to register master (%d)\n", ret);
+		goto err_disable_runtime_pm;
+	}

 	return 0;
+17-7
drivers/spi/spi-stm32.c
···
 	ier = readl_relaxed(spi->base + STM32H7_SPI_IER);

 	mask = ier;
-	/* EOTIE is triggered on EOT, SUSP and TXC events. */
+	/*
+	 * EOTIE enables irq from EOT, SUSP and TXC events. We need to set
+	 * SUSP to acknowledge it later. TXC is automatically cleared
+	 */
+
 	mask |= STM32H7_SPI_SR_SUSP;
 	/*
-	 * When TXTF is set, DXPIE and TXPIE are cleared. So in case of
-	 * Full-Duplex, need to poll RXP event to know if there are remaining
-	 * data, before disabling SPI.
+	 * DXPIE is set in Full-Duplex, one IT will be raised if TXP and RXP
+	 * are set. So in case of Full-Duplex, need to poll TXP and RXP event.
 	 */
-	if (spi->rx_buf && !spi->cur_usedma)
-		mask |= STM32H7_SPI_SR_RXP;
+	if ((spi->cur_comm == SPI_FULL_DUPLEX) && !spi->cur_usedma)
+		mask |= STM32H7_SPI_SR_TXP | STM32H7_SPI_SR_RXP;

 	if (!(sr & mask)) {
 		dev_warn(spi->dev, "spurious IT (sr=0x%08x, ier=0x%08x)\n",
···
 	master->can_dma = stm32_spi_can_dma;

 	pm_runtime_set_active(&pdev->dev);
+	pm_runtime_get_noresume(&pdev->dev);
 	pm_runtime_enable(&pdev->dev);

 	ret = spi_register_master(master);
···

 err_pm_disable:
 	pm_runtime_disable(&pdev->dev);
+	pm_runtime_put_noidle(&pdev->dev);
+	pm_runtime_set_suspended(&pdev->dev);
 err_dma_release:
 	if (spi->dma_tx)
 		dma_release_channel(spi->dma_tx);
···
 	struct spi_master *master = platform_get_drvdata(pdev);
 	struct stm32_spi *spi = spi_master_get_devdata(master);

+	pm_runtime_get_sync(&pdev->dev);
+
 	spi_unregister_master(master);
 	spi->cfg->disable(spi);

+	pm_runtime_disable(&pdev->dev);
+	pm_runtime_put_noidle(&pdev->dev);
+	pm_runtime_set_suspended(&pdev->dev);
 	if (master->dma_tx)
 		dma_release_channel(master->dma_tx);
 	if (master->dma_rx)
···

 	clk_disable_unprepare(spi->clk);

-	pm_runtime_disable(&pdev->dev);

 	pinctrl_pm_select_sleep_state(&pdev->dev);
+17-18
drivers/target/target_core_sbc.c
···
 #include "target_core_alua.h"

 static sense_reason_t
-sbc_check_prot(struct se_device *, struct se_cmd *, unsigned char *, u32, bool);
+sbc_check_prot(struct se_device *, struct se_cmd *, unsigned char, u32, bool);
 static sense_reason_t sbc_execute_unmap(struct se_cmd *cmd);

 static sense_reason_t
···
 }

 static sense_reason_t
-sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *ops)
+sbc_setup_write_same(struct se_cmd *cmd, unsigned char flags, struct sbc_ops *ops)
 {
 	struct se_device *dev = cmd->se_dev;
 	sector_t end_lba = dev->transport->get_blocks(dev) + 1;
 	unsigned int sectors = sbc_get_write_same_sectors(cmd);
 	sense_reason_t ret;

-	if ((flags[0] & 0x04) || (flags[0] & 0x02)) {
+	if ((flags & 0x04) || (flags & 0x02)) {
 		pr_err("WRITE_SAME PBDATA and LBDATA"
 			" bits not supported for Block Discard"
 			" Emulation\n");
···
 	}

 	/* We always have ANC_SUP == 0 so setting ANCHOR is always an error */
-	if (flags[0] & 0x10) {
+	if (flags & 0x10) {
 		pr_warn("WRITE SAME with ANCHOR not supported\n");
 		return TCM_INVALID_CDB_FIELD;
 	}
···
 	 * Special case for WRITE_SAME w/ UNMAP=1 that ends up getting
 	 * translated into block discard requests within backend code.
 	 */
-	if (flags[0] & 0x08) {
+	if (flags & 0x08) {
 		if (!ops->execute_unmap)
 			return TCM_UNSUPPORTED_SCSI_OPCODE;
···
 	if (!ops->execute_write_same)
 		return TCM_UNSUPPORTED_SCSI_OPCODE;

-	ret = sbc_check_prot(dev, cmd, &cmd->t_task_cdb[0], sectors, true);
+	ret = sbc_check_prot(dev, cmd, flags >> 5, sectors, true);
 	if (ret)
 		return ret;
···
 }

 static sense_reason_t
-sbc_check_prot(struct se_device *dev, struct se_cmd *cmd, unsigned char *cdb,
+sbc_check_prot(struct se_device *dev, struct se_cmd *cmd, unsigned char protect,
 	       u32 sectors, bool is_write)
 {
-	u8 protect = cdb[1] >> 5;
 	int sp_ops = cmd->se_sess->sup_prot_ops;
 	int pi_prot_type = dev->dev_attrib.pi_prot_type;
 	bool fabric_prot = false;
···
 		fallthrough;
 	default:
 		pr_err("Unable to determine pi_prot_type for CDB: 0x%02x "
-		       "PROTECT: 0x%02x\n", cdb[0], protect);
+		       "PROTECT: 0x%02x\n", cmd->t_task_cdb[0], protect);
 		return TCM_INVALID_CDB_FIELD;
 	}
···
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;

-		ret = sbc_check_prot(dev, cmd, cdb, sectors, false);
+		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, false);
 		if (ret)
 			return ret;
···
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;

-		ret = sbc_check_prot(dev, cmd, cdb, sectors, false);
+		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, false);
 		if (ret)
 			return ret;
···
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;

-		ret = sbc_check_prot(dev, cmd, cdb, sectors, false);
+		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, false);
 		if (ret)
 			return ret;
···
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;

-		ret = sbc_check_prot(dev, cmd, cdb, sectors, true);
+		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, true);
 		if (ret)
 			return ret;
···
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;

-		ret = sbc_check_prot(dev, cmd, cdb, sectors, true);
+		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, true);
 		if (ret)
 			return ret;
···
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;

-		ret = sbc_check_prot(dev, cmd, cdb, sectors, true);
+		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, true);
 		if (ret)
 			return ret;
···
 		size = sbc_get_size(cmd, 1);
 		cmd->t_task_lba = get_unaligned_be64(&cdb[12]);

-		ret = sbc_setup_write_same(cmd, &cdb[10], ops);
+		ret = sbc_setup_write_same(cmd, cdb[10], ops);
 		if (ret)
 			return ret;
 		break;
···
 		size = sbc_get_size(cmd, 1);
 		cmd->t_task_lba = get_unaligned_be64(&cdb[2]);

-		ret = sbc_setup_write_same(cmd, &cdb[1], ops);
+		ret = sbc_setup_write_same(cmd, cdb[1], ops);
 		if (ret)
 			return ret;
 		break;
···
 		 * Follow sbcr26 with WRITE_SAME (10) and check for the existence
 		 * of byte 1 bit 3 UNMAP instead of original reserved field
 		 */
-		ret = sbc_setup_write_same(cmd, &cdb[1], ops);
+		ret = sbc_setup_write_same(cmd, cdb[1], ops);
 		if (ret)
 			return ret;
 		break;
+1-1
drivers/target/target_core_transport.c
···
 	INIT_WORK(&cmd->work, success ? target_complete_ok_work :
 		  target_complete_failure_work);

-	if (wwn->cmd_compl_affinity == SE_COMPL_AFFINITY_CPUID)
+	if (!wwn || wwn->cmd_compl_affinity == SE_COMPL_AFFINITY_CPUID)
 		cpu = cmd->cpuid;
 	else
 		cpu = wwn->cmd_compl_affinity;
+3-3
drivers/usb/class/cdc-wdm.c
···
 };

 /* --- WWAN framework integration --- */
-#ifdef CONFIG_WWAN
+#ifdef CONFIG_WWAN_CORE
 static int wdm_wwan_port_start(struct wwan_port *port)
 {
 	struct wdm_device *desc = wwan_port_get_drvdata(port);
···
 	/* inbuf has been copied, it is safe to check for outstanding data */
 	schedule_work(&desc->service_outs_intr);
 }
-#else /* CONFIG_WWAN */
+#else /* CONFIG_WWAN_CORE */
 static void wdm_wwan_init(struct wdm_device *desc) {}
 static void wdm_wwan_deinit(struct wdm_device *desc) {}
 static void wdm_wwan_rx(struct wdm_device *desc, int length) {}
-#endif /* CONFIG_WWAN */
+#endif /* CONFIG_WWAN_CORE */

 /* --- error handling --- */
 static void wdm_rxwork(struct work_struct *work)
···

 #define USB_TP_TRANSMISSION_DELAY	40	/* ns */
 #define USB_TP_TRANSMISSION_DELAY_MAX	65535	/* ns */
+#define USB_PING_RESPONSE_TIME		400	/* ns */

 /* Protect struct usb_device->state and ->children members
  * Note: Both are also protected by ->dev.sem, except that ->state can
···
 }

 /*
- * Set the Maximum Exit Latency (MEL) for the host to initiate a transition from
- * either U1 or U2.
+ * Set the Maximum Exit Latency (MEL) for the host to wakup up the path from
+ * U1/U2, send a PING to the device and receive a PING_RESPONSE.
+ * See USB 3.1 section C.1.5.2
  */
 static void usb_set_lpm_mel(struct usb_device *udev,
 		struct usb3_lpm_parameters *udev_lpm_params,
···
 		unsigned int hub_exit_latency)
 {
 	unsigned int total_mel;
-	unsigned int device_mel;
-	unsigned int hub_mel;

 	/*
-	 * Calculate the time it takes to transition all links from the roothub
-	 * to the parent hub into U0.  The parent hub must then decode the
-	 * packet (hub header decode latency) to figure out which port it was
-	 * bound for.
-	 *
-	 * The Hub Header decode latency is expressed in 0.1us intervals (0x1
-	 * means 0.1us).  Multiply that by 100 to get nanoseconds.
+	 * tMEL1. time to transition path from host to device into U0.
+	 * MEL for parent already contains the delay up to parent, so only add
+	 * the exit latency for the last link (pick the slower exit latency),
+	 * and the hub header decode latency. See USB 3.1 section C 2.2.1
+	 * Store MEL in nanoseconds
 	 */
 	total_mel = hub_lpm_params->mel +
-		(hub->descriptor->u.ss.bHubHdrDecLat * 100);
+		max(udev_exit_latency, hub_exit_latency) * 1000 +
+		hub->descriptor->u.ss.bHubHdrDecLat * 100;

 	/*
-	 * How long will it take to transition the downstream hub's port into
-	 * U0?  The greater of either the hub exit latency or the device exit
-	 * latency.
-	 *
-	 * The BOS U1/U2 exit latencies are expressed in 1us intervals.
-	 * Multiply that by 1000 to get nanoseconds.
+	 * tMEL2. Time to submit PING packet. Sum of tTPTransmissionDelay for
+	 * each link + wHubDelay for each hub. Add only for last link.
+	 * tMEL4, the time for PING_RESPONSE to traverse upstream is similar.
+	 * Multiply by 2 to include it as well.
 	 */
-	device_mel = udev_exit_latency * 1000;
-	hub_mel = hub_exit_latency * 1000;
-	if (device_mel > hub_mel)
-		total_mel += device_mel;
-	else
-		total_mel += hub_mel;
+	total_mel += (__le16_to_cpu(hub->descriptor->u.ss.wHubDelay) +
+		      USB_TP_TRANSMISSION_DELAY) * 2;
+
+	/*
+	 * tMEL3, tPingResponse. Time taken by device to generate PING_RESPONSE
+	 * after receiving PING. Also add 2100ns as stated in USB 3.1 C 1.5.2.4
+	 * to cover the delay if the PING_RESPONSE is queued behind a Max Packet
+	 * Size DP.
+	 * Note these delays should be added only once for the entire path, so
+	 * add them to the MEL of the device connected to the roothub.
+	 */
+	if (!hub->hdev->parent)
+		total_mel += USB_PING_RESPONSE_TIME + 2100;

 	udev_lpm_params->mel = total_mel;
 }
···
 }

 /*
+ * Don't allow device intiated U1/U2 if the system exit latency + one bus
+ * interval is greater than the minimum service interval of any active
+ * periodic endpoint. See USB 3.2 section 9.4.9
+ */
+static bool usb_device_may_initiate_lpm(struct usb_device *udev,
+					enum usb3_link_state state)
+{
+	unsigned int sel;		/* us */
+	int i, j;
+
+	if (state == USB3_LPM_U1)
+		sel = DIV_ROUND_UP(udev->u1_params.sel, 1000);
+	else if (state == USB3_LPM_U2)
+		sel = DIV_ROUND_UP(udev->u2_params.sel, 1000);
+	else
+		return false;
+
+	for (i = 0; i < udev->actconfig->desc.bNumInterfaces; i++) {
+		struct usb_interface *intf;
+		struct usb_endpoint_descriptor *desc;
+		unsigned int interval;
+
+		intf = udev->actconfig->interface[i];
+		if (!intf)
+			continue;
+
+		for (j = 0; j < intf->cur_altsetting->desc.bNumEndpoints; j++) {
+			desc = &intf->cur_altsetting->endpoint[j].desc;
+
+			if (usb_endpoint_xfer_int(desc) ||
+			    usb_endpoint_xfer_isoc(desc)) {
+				interval = (1 << (desc->bInterval - 1)) * 125;
+				if (sel + 125 > interval)
+					return false;
+			}
+		}
+	}
+	return true;
+}
+
+/*
  * Enable the hub-initiated U1/U2 idle timeouts, and enable device-initiated
  * U1/U2 entry.
  *
···
 	 * U1/U2_ENABLE
 	 */
 	if (udev->actconfig &&
-	    usb_set_device_initiated_lpm(udev, state, true) == 0) {
-		if (state == USB3_LPM_U1)
-			udev->usb3_lpm_u1_enabled = 1;
-		else if (state == USB3_LPM_U2)
-			udev->usb3_lpm_u2_enabled = 1;
-	} else {
-		/* Don't request U1/U2 entry if the device
-		 * cannot transition to U1/U2.
-		 */
-		usb_set_lpm_timeout(udev, state, 0);
-		hcd->driver->disable_usb3_lpm_timeout(hcd, udev, state);
+	    usb_device_may_initiate_lpm(udev, state)) {
+		if (usb_set_device_initiated_lpm(udev, state, true)) {
+			/*
+			 * Request to enable device initiated U1/U2 failed,
+			 * better to turn off lpm in this case.
+			 */
+			usb_set_lpm_timeout(udev, state, 0);
+			hcd->driver->disable_usb3_lpm_timeout(hcd, udev, state);
+			return;
+		}
 	}
-}

+	if (state == USB3_LPM_U1)
+		udev->usb3_lpm_u1_enabled = 1;
+	else if (state == USB3_LPM_U2)
+		udev->usb3_lpm_u2_enabled = 1;
+}
 /*
  * Disable the hub-initiated U1/U2 idle timeouts, and disable device-initiated
  * U1/U2 entry.
···
 		 * If neither hibernation nor partial power down are supported,
 		 * clock gating is used to save power.
 		 */
-		dwc2_gadget_enter_clock_gating(hsotg);
+		if (!hsotg->params.no_clock_gating)
+			dwc2_gadget_enter_clock_gating(hsotg);
 	}

 	/*
+27-4
drivers/usb/dwc2/gadget.c
···
 		return;
 	}

-	/* Zlp for all endpoints, for ep0 only in DATA IN stage */
+	/* Zlp for all endpoints in non DDMA, for ep0 only in DATA IN stage */
 	if (hs_ep->send_zlp) {
-		dwc2_hsotg_program_zlp(hsotg, hs_ep);
 		hs_ep->send_zlp = 0;
-		/* transfer will be completed on next complete interrupt */
-		return;
+		if (!using_desc_dma(hsotg)) {
+			dwc2_hsotg_program_zlp(hsotg, hs_ep);
+			/* transfer will be completed on next complete interrupt */
+			return;
+		}
 	}

 	if (hs_ep->index == 0 && hsotg->ep0_state == DWC2_EP0_DATA_IN) {
···
 				__func__);
 		}
 	} else {
+		/* Mask GINTSTS_GOUTNAKEFF interrupt */
+		dwc2_hsotg_disable_gsint(hsotg, GINTSTS_GOUTNAKEFF);
+
 		if (!(dwc2_readl(hsotg, GINTSTS) & GINTSTS_GOUTNAKEFF))
 			dwc2_set_bit(hsotg, DCTL, DCTL_SGOUTNAK);
+
+		if (!using_dma(hsotg)) {
+			/* Wait for GINTSTS_RXFLVL interrupt */
+			if (dwc2_hsotg_wait_bit_set(hsotg, GINTSTS,
+						    GINTSTS_RXFLVL, 100)) {
+				dev_warn(hsotg->dev, "%s: timeout GINTSTS.RXFLVL\n",
+					 __func__);
+			} else {
+				/*
+				 * Pop GLOBAL OUT NAK status packet from RxFIFO
+				 * to assert GOUTNAKEFF interrupt
+				 */
+				dwc2_readl(hsotg, GRXSTSP);
+			}
+		}

 		/* Wait for global nak to take effect */
 		if (dwc2_hsotg_wait_bit_set(hsotg, GINTSTS,
···
 	epctl = dwc2_readl(hs, epreg);

 	if (value) {
+		/* Unmask GOUTNAKEFF interrupt */
+		dwc2_hsotg_en_gsint(hs, GINTSTS_GOUTNAKEFF);
+
 		if (!(dwc2_readl(hs, GINTSTS) & GINTSTS_GOUTNAKEFF))
 			dwc2_set_bit(hs, DCTL, DCTL_SGOUTNAK);
 		// STALL bit will be set in GOUTNAKEFF interrupt handler
+4-2
drivers/usb/dwc2/hcd.c
···
 		 * If not hibernation nor partial power down are supported,
 		 * clock gating is used to save power.
 		 */
-		dwc2_host_enter_clock_gating(hsotg);
+		if (!hsotg->params.no_clock_gating)
+			dwc2_host_enter_clock_gating(hsotg);
 		break;
 	}

···
 	 * If not hibernation nor partial power down are supported,
 	 * clock gating is used to save power.
 	 */
-	dwc2_host_enter_clock_gating(hsotg);
+	if (!hsotg->params.no_clock_gating)
+		dwc2_host_enter_clock_gating(hsotg);

 	/* After entering suspend, hardware is not accessible */
 	clear_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
···

 static int dwc3_ep0_delegate_req(struct dwc3 *dwc, struct usb_ctrlrequest *ctrl)
 {
-	int ret;
+	int ret = -EINVAL;

-	spin_unlock(&dwc->lock);
-	ret = dwc->gadget_driver->setup(dwc->gadget, ctrl);
-	spin_lock(&dwc->lock);
+	if (dwc->async_callbacks) {
+		spin_unlock(&dwc->lock);
+		ret = dwc->gadget_driver->setup(dwc->gadget, ctrl);
+		spin_lock(&dwc->lock);
+	}
 	return ret;
 }
···
 static irqreturn_t ehci_irq (struct usb_hcd *hcd)
 {
 	struct ehci_hcd		*ehci = hcd_to_ehci (hcd);
-	u32			status, masked_status, pcd_status = 0, cmd;
+	u32			status, current_status, masked_status, pcd_status = 0;
+	u32			cmd;
 	int			bh;

 	spin_lock(&ehci->lock);

-	status = ehci_readl(ehci, &ehci->regs->status);
+	status = 0;
+	current_status = ehci_readl(ehci, &ehci->regs->status);
+restart:

 	/* e.g. cardbus physical eject */
-	if (status == ~(u32) 0) {
+	if (current_status == ~(u32) 0) {
 		ehci_dbg (ehci, "device removed\n");
 		goto dead;
 	}
+	status |= current_status;

 	/*
 	 * We don't use STS_FLR, but some controllers don't like it to
 	 * remain on, so mask it out along with the other status bits.
 	 */
-	masked_status = status & (INTR_MASK | STS_FLR);
+	masked_status = current_status & (INTR_MASK | STS_FLR);

 	/* Shared IRQ? */
 	if (!masked_status || unlikely(ehci->rh_state == EHCI_RH_HALTED)) {
···

 	/* clear (just) interrupts */
 	ehci_writel(ehci, masked_status, &ehci->regs->status);
+
+	/* For edge interrupts, don't race with an interrupt bit being raised */
+	current_status = ehci_readl(ehci, &ehci->regs->status);
+	if (current_status & INTR_MASK)
+		goto restart;
+
 	cmd = ehci_readl(ehci, &ehci->regs->command);
 	bh = 0;
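The ehci-hcd fix above re-reads the status register after acking it and restarts the handler if a new enabled bit was raised in the window, while accumulating every bit seen so none is lost. That accumulate-and-restart loop can be modeled standalone — here the `seq` array stands in for successive MMIO reads of the status register, and `INTR_MASK` is an illustrative value, not the EHCI constant:

```c
#include <assert.h>
#include <stdint.h>

#define INTR_MASK 0x3Fu		/* illustrative enabled-interrupt mask */

/*
 * Model of the restart loop: keep re-reading status after each ack,
 * OR-ing everything seen, until no enabled bit remains raised.
 */
static uint32_t collect_status(const uint32_t *seq, unsigned int n)
{
	uint32_t status = 0, cur;
	unsigned int i = 0;

	do {
		cur = (i < n) ? seq[i++] : 0;	/* stand-in MMIO read */
		status |= cur;			/* accumulate bits we saw */
		/* the real driver writes cur & INTR_MASK back to ack here */
	} while (cur & INTR_MASK);		/* new bit raced in: go again */

	return status;
}
```

The key property is that bits observed on any iteration survive into the final `status`, so an edge-triggered event raised between the read and the ack is still handled.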
+14-30
drivers/usb/host/max3421-hcd.c
···
 	 */
 	struct urb *curr_urb;
 	enum scheduling_pass sched_pass;
-	struct usb_device *loaded_dev;	/* dev that's loaded into the chip */
-	int loaded_epnum;		/* epnum whose toggles are loaded */
 	int urb_done;			/* > 0 -> no errors, < 0: errno */
 	size_t curr_len;
 	u8 hien;
···
  * Caller must NOT hold HCD spinlock.
  */
 static void
-max3421_set_address(struct usb_hcd *hcd, struct usb_device *dev, int epnum,
-		    int force_toggles)
+max3421_set_address(struct usb_hcd *hcd, struct usb_device *dev, int epnum)
 {
-	struct max3421_hcd *max3421_hcd = hcd_to_max3421(hcd);
-	int old_epnum, same_ep, rcvtog, sndtog;
-	struct usb_device *old_dev;
+	int rcvtog, sndtog;
 	u8 hctl;
 
-	old_dev = max3421_hcd->loaded_dev;
-	old_epnum = max3421_hcd->loaded_epnum;
-
-	same_ep = (dev == old_dev && epnum == old_epnum);
-	if (same_ep && !force_toggles)
-		return;
-
-	if (old_dev && !same_ep) {
-		/* save the old end-points toggles: */
-		u8 hrsl = spi_rd8(hcd, MAX3421_REG_HRSL);
-
-		rcvtog = (hrsl >> MAX3421_HRSL_RCVTOGRD_BIT) & 1;
-		sndtog = (hrsl >> MAX3421_HRSL_SNDTOGRD_BIT) & 1;
-
-		/* no locking: HCD (i.e., we) own toggles, don't we? */
-		usb_settoggle(old_dev, old_epnum, 0, rcvtog);
-		usb_settoggle(old_dev, old_epnum, 1, sndtog);
-	}
 	/* setup new endpoint's toggle bits: */
 	rcvtog = usb_gettoggle(dev, epnum, 0);
 	sndtog = usb_gettoggle(dev, epnum, 1);
 	hctl = (BIT(rcvtog + MAX3421_HCTL_RCVTOG0_BIT) |
 		BIT(sndtog + MAX3421_HCTL_SNDTOG0_BIT));
 
-	max3421_hcd->loaded_epnum = epnum;
 	spi_wr8(hcd, MAX3421_REG_HCTL, hctl);
 
 	/*
···
 	 * address-assignment so it's best to just always load the
 	 * address whenever the end-point changed/was forced.
 	 */
-	max3421_hcd->loaded_dev = dev;
 	spi_wr8(hcd, MAX3421_REG_PERADDR, dev->devnum);
 }
···
 	struct max3421_hcd *max3421_hcd = hcd_to_max3421(hcd);
 	struct urb *urb, *curr_urb = NULL;
 	struct max3421_ep *max3421_ep;
-	int epnum, force_toggles = 0;
+	int epnum;
 	struct usb_host_endpoint *ep;
 	struct list_head *pos;
 	unsigned long flags;
···
 			usb_settoggle(urb->dev, epnum, 0, 1);
 			usb_settoggle(urb->dev, epnum, 1, 1);
 			max3421_ep->pkt_state = PKT_STATE_SETUP;
-			force_toggles = 1;
 		} else
 			max3421_ep->pkt_state = PKT_STATE_TRANSFER;
 	}
···
 	spin_unlock_irqrestore(&max3421_hcd->lock, flags);
 
 	max3421_ep->last_active = max3421_hcd->frame_number;
-	max3421_set_address(hcd, urb->dev, epnum, force_toggles);
+	max3421_set_address(hcd, urb->dev, epnum);
 	max3421_set_speed(hcd, urb->dev);
 	max3421_next_transfer(hcd, 0);
 	return 1;
···
 	status = 0;
 	urb = max3421_hcd->curr_urb;
 	if (urb) {
+		/* save the old end-points toggles: */
+		u8 hrsl = spi_rd8(hcd, MAX3421_REG_HRSL);
+		int rcvtog = (hrsl >> MAX3421_HRSL_RCVTOGRD_BIT) & 1;
+		int sndtog = (hrsl >> MAX3421_HRSL_SNDTOGRD_BIT) & 1;
+		int epnum = usb_endpoint_num(&urb->ep->desc);
+
+		/* no locking: HCD (i.e., we) own toggles, don't we? */
+		usb_settoggle(urb->dev, epnum, 0, rcvtog);
+		usb_settoggle(urb->dev, epnum, 1, sndtog);
+
 		max3421_hcd->curr_urb = NULL;
 		spin_lock_irqsave(&max3421_hcd->lock, flags);
 		usb_hcd_unlink_urb_from_ep(hcd, urb);
+2-1
drivers/usb/host/xhci-hub.c
···
 	 * Inform the usbcore about resume-in-progress by returning
 	 * a non-zero value even if there are no status changes.
 	 */
+	spin_lock_irqsave(&xhci->lock, flags);
+
 	status = bus_state->resuming_ports;
 
 	mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC | PORT_CEC;
 
-	spin_lock_irqsave(&xhci->lock, flags);
 	/* For each port, did anything change?  If so, set that bit in buf. */
 	for (i = 0; i < max_ports; i++) {
 		temp = readl(ports[i]->addr);
+8-8
drivers/usb/host/xhci-pci-renesas.c
···
 		return 0;
 
 	case RENESAS_ROM_STATUS_NO_RESULT: /* No result yet */
-		dev_dbg(&pdev->dev, "Unknown ROM status ...\n");
-		break;
+		return 0;
 
 	case RENESAS_ROM_STATUS_ERROR: /* Error State */
 	default: /* All other states are marked as "Reserved states" */
···
 	u8 fw_state;
 	int err;
 
-	/*
-	 * Only if device has ROM and loaded FW we can skip loading and
-	 * return success. Otherwise (even unknown state), attempt to load FW.
-	 */
-	if (renesas_check_rom(pdev) && !renesas_check_rom_state(pdev))
-		return 0;
+	/* Check if device has ROM and loaded, if so skip everything */
+	err = renesas_check_rom(pdev);
+	if (err) { /* we have rom */
+		err = renesas_check_rom_state(pdev);
+		if (!err)
+			return err;
+	}
 
 	/*
 	 * Test if the device is actually needing the firmware. As most
+7
drivers/usb/host/xhci-pci.c
···
 	{ /* end: all zeroes */ }
 };
 MODULE_DEVICE_TABLE(pci, pci_ids);
+
+/*
+ * Without CONFIG_USB_XHCI_PCI_RENESAS renesas_xhci_check_request_fw() won't
+ * load firmware, so don't encumber the xhci-pci driver with it.
+ */
+#if IS_ENABLED(CONFIG_USB_XHCI_PCI_RENESAS)
 MODULE_FIRMWARE("renesas_usb_fw.mem");
+#endif
 
 /* pci driver glue; this is a "new style" PCI driver module */
 static struct pci_driver xhci_pci_driver = {
···
 #define usbhsf_dma_map(p)	__usbhsf_dma_map_ctrl(p, 1)
 #define usbhsf_dma_unmap(p)	__usbhsf_dma_map_ctrl(p, 0)
 static int __usbhsf_dma_map_ctrl(struct usbhs_pkt *pkt, int map);
+static void usbhsf_tx_irq_ctrl(struct usbhs_pipe *pipe, int enable);
+static void usbhsf_rx_irq_ctrl(struct usbhs_pipe *pipe, int enable);
 struct usbhs_pkt *usbhs_pkt_pop(struct usbhs_pipe *pipe, struct usbhs_pkt *pkt)
 {
 	struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe);
···
 	if (chan) {
 		dmaengine_terminate_all(chan);
 		usbhsf_dma_unmap(pkt);
+	} else {
+		if (usbhs_pipe_is_dir_in(pipe))
+			usbhsf_rx_irq_ctrl(pipe, 0);
+		else
+			usbhsf_tx_irq_ctrl(pipe, 0);
 	}
 
 	usbhs_pipe_clear_without_sequence(pipe, 0, 0);
+3-2
drivers/usb/serial/cp210x.c
···
 	{ USB_DEVICE(0x10C4, 0x89A4) }, /* CESINEL FTBC Flexible Thyristor Bridge Controller */
 	{ USB_DEVICE(0x10C4, 0x89FB) }, /* Qivicon ZigBee USB Radio Stick */
 	{ USB_DEVICE(0x10C4, 0x8A2A) }, /* HubZ dual ZigBee and Z-Wave dongle */
+	{ USB_DEVICE(0x10C4, 0x8A5B) }, /* CEL EM3588 ZigBee USB Stick */
 	{ USB_DEVICE(0x10C4, 0x8A5E) }, /* CEL EM3588 ZigBee USB Stick Long Range */
 	{ USB_DEVICE(0x10C4, 0x8B34) }, /* Qivicon ZigBee USB Radio Stick */
 	{ USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */
···
 	{ USB_DEVICE(0x1901, 0x0194) },	/* GE Healthcare Remote Alarm Box */
 	{ USB_DEVICE(0x1901, 0x0195) },	/* GE B850/B650/B450 CP2104 DP UART interface */
 	{ USB_DEVICE(0x1901, 0x0196) },	/* GE B850 CP2105 DP UART interface */
-	{ USB_DEVICE(0x1901, 0x0197) },	/* GE CS1000 Display serial interface */
-	{ USB_DEVICE(0x1901, 0x0198) },	/* GE CS1000 M.2 Key E serial interface */
+	{ USB_DEVICE(0x1901, 0x0197) },	/* GE CS1000 M.2 Key E serial interface */
+	{ USB_DEVICE(0x1901, 0x0198) },	/* GE CS1000 Display serial interface */
 	{ USB_DEVICE(0x199B, 0xBA30) },	/* LORD WSDA-200-USB */
 	{ USB_DEVICE(0x19CF, 0x3000) },	/* Parrot NMEA GPS Flight Recorder */
 	{ USB_DEVICE(0x1ADB, 0x0001) },	/* Schweitzer Engineering C662 Cable */
+3
drivers/usb/serial/option.c
···
 #define QUECTEL_PRODUCT_UC15			0x9090
 /* These u-blox products use Qualcomm's vendor ID */
 #define UBLOX_PRODUCT_R410M			0x90b2
+#define UBLOX_PRODUCT_R6XX			0x90fa
 /* These Yuga products use Qualcomm's vendor ID */
 #define YUGA_PRODUCT_CLM920_NC5			0x9625
···
 	/* u-blox products using Qualcomm vendor ID */
 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R410M),
 	  .driver_info = RSVD(1) | RSVD(3) },
+	{ USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R6XX),
+	  .driver_info = RSVD(3) },
 	/* Quectel products using Quectel vendor ID */
 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC21, 0xff, 0xff, 0xff),
 	  .driver_info = NUMEP2 },
+7
drivers/usb/storage/unusual_uas.h
···
 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
 		US_FL_NO_REPORT_OPCODES | US_FL_NO_SAME),
 
+/* Reported-by: Julian Sikorski <belegdol@gmail.com> */
+UNUSUAL_DEV(0x059f, 0x1061, 0x0000, 0x9999,
+		"LaCie",
+		"Rugged USB3-FW",
+		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+		US_FL_IGNORE_UAS),
+
 /*
  * Apricorn USB3 dongle sometimes returns "USBSUSBSUSBS" in response to SCSI
  * commands in UAS mode. Observed with the 1.28 firmware; are there others?
+16-4
drivers/usb/typec/stusb160x.c
···
 		return -ENODEV;
 
 	/*
+	 * This fwnode has a "compatible" property, but is never populated as a
+	 * struct device. Instead we simply parse it to read the properties.
+	 * This breaks fw_devlink=on. To maintain backward compatibility
+	 * with existing DT files, we work around this by deleting any
+	 * fwnode_links to/from this fwnode.
+	 */
+	fw_devlink_purge_absent_suppliers(fwnode);
+
+	/*
 	 * When both VDD and VSYS power supplies are present, the low power
 	 * supply VSYS is selected when VSYS voltage is above 3.1 V.
 	 * Otherwise VDD is selected.
···
 	typec_set_pwr_opmode(chip->port, chip->pwr_opmode);
 
 	if (client->irq) {
-		ret = stusb160x_irq_init(chip, client->irq);
-		if (ret)
-			goto port_unregister;
-
 		chip->role_sw = fwnode_usb_role_switch_get(fwnode);
 		if (IS_ERR(chip->role_sw)) {
 			ret = PTR_ERR(chip->role_sw);
···
 				ret);
 			goto port_unregister;
 		}
+
+		ret = stusb160x_irq_init(chip, client->irq);
+		if (ret)
+			goto role_sw_put;
 	} else {
 		/*
 		 * If Source or Dual power role, need to enable VDD supply
···
 
 	return 0;
 
+role_sw_put:
+	if (chip->role_sw)
+		usb_role_switch_put(chip->role_sw);
 port_unregister:
 	typec_unregister_port(chip->port);
 all_reg_disable:
+9
drivers/usb/typec/tipd/core.c
···
 	if (!fwnode)
 		return -ENODEV;
 
+	/*
+	 * This fwnode has a "compatible" property, but is never populated as a
+	 * struct device. Instead we simply parse it to read the properties.
+	 * This breaks fw_devlink=on. To maintain backward compatibility
+	 * with existing DT files, we work around this by deleting any
+	 * fwnode_links to/from this fwnode.
+	 */
+	fw_devlink_purge_absent_suppliers(fwnode);
+
 	tps->role_sw = fwnode_usb_role_switch_get(fwnode);
 	if (IS_ERR(tps->role_sw)) {
 		ret = PTR_ERR(tps->role_sw);
···
 	return 0;
 }
 
-int btrfs_qgroup_trace_extent_post(struct btrfs_fs_info *fs_info,
+int btrfs_qgroup_trace_extent_post(struct btrfs_trans_handle *trans,
 				   struct btrfs_qgroup_extent_record *qrecord)
 {
 	struct ulist *old_root;
 	u64 bytenr = qrecord->bytenr;
 	int ret;
 
-	ret = btrfs_find_all_roots(NULL, fs_info, bytenr, 0, &old_root, false);
+	/*
+	 * We are always called in a context where we are already holding a
+	 * transaction handle. Often we are called when adding a data delayed
+	 * reference from btrfs_truncate_inode_items() (truncating or unlinking),
+	 * in which case we will be holding a write lock on extent buffer from a
+	 * subvolume tree. In this case we can't allow btrfs_find_all_roots() to
+	 * acquire fs_info->commit_root_sem, because that is a higher level lock
+	 * that must be acquired before locking any extent buffers.
+	 *
+	 * So we want btrfs_find_all_roots() to not acquire the commit_root_sem
+	 * but we can't pass it a non-NULL transaction handle, because otherwise
+	 * it would not use commit roots and would lock extent buffers, causing
+	 * a deadlock if it ends up trying to read lock the same extent buffer
+	 * that was previously write locked at btrfs_truncate_inode_items().
+	 *
+	 * So pass a NULL transaction handle to btrfs_find_all_roots() and
+	 * explicitly tell it to not acquire the commit_root_sem - if we are
+	 * holding a transaction handle we don't need its protection.
+	 */
+	ASSERT(trans != NULL);
+
+	ret = btrfs_find_all_roots(NULL, trans->fs_info, bytenr, 0, &old_root,
+				   false, true);
 	if (ret < 0) {
-		fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT;
-		btrfs_warn(fs_info,
+		trans->fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT;
+		btrfs_warn(trans->fs_info,
 "error accounting new delayed refs extent (err code: %d), quota inconsistent",
 			   ret);
 		return 0;
···
 		kfree(record);
 		return 0;
 	}
-	return btrfs_qgroup_trace_extent_post(fs_info, record);
+	return btrfs_qgroup_trace_extent_post(trans, record);
 }
 
 int btrfs_qgroup_trace_leaf_items(struct btrfs_trans_handle *trans,
···
 			/* Search commit root to find old_roots */
 			ret = btrfs_find_all_roots(NULL, fs_info,
 					record->bytenr, 0,
-					&record->old_roots, false);
+					&record->old_roots, false, false);
 			if (ret < 0)
 				goto cleanup;
 		}
···
 		 * current root. It's safe inside commit_transaction().
 		 */
 		ret = btrfs_find_all_roots(trans, fs_info,
-		   record->bytenr, BTRFS_SEQ_LAST, &new_roots, false);
+		   record->bytenr, BTRFS_SEQ_LAST, &new_roots, false, false);
 		if (ret < 0)
 			goto cleanup;
 		if (qgroup_to_skip) {
···
 		num_bytes = found.offset;
 
 		ret = btrfs_find_all_roots(NULL, fs_info, found.objectid, 0,
-					   &roots, false);
+					   &roots, false, false);
 		if (ret < 0)
 			goto out;
 		/* For rescan, just pass old_roots as NULL */
+1-1
fs/btrfs/qgroup.h
···
  * using current root, then we can move all expensive backref walk out of
  * transaction committing, but not now as qgroup accounting will be wrong again.
  */
-int btrfs_qgroup_trace_extent_post(struct btrfs_fs_info *fs_info,
+int btrfs_qgroup_trace_extent_post(struct btrfs_trans_handle *trans,
 				   struct btrfs_qgroup_extent_record *qrecord);
 
 /*
+10-10
fs/btrfs/tests/qgroup-tests.c
···
 	 * quota.
 	 */
 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
-			false);
+			false, false);
 	if (ret) {
 		ulist_free(old_roots);
 		test_err("couldn't find old roots: %d", ret);
···
 		return ret;
 
 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots,
-			false);
+			false, false);
 	if (ret) {
 		ulist_free(old_roots);
 		ulist_free(new_roots);
···
 	new_roots = NULL;
 
 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
-			false);
+			false, false);
 	if (ret) {
 		ulist_free(old_roots);
 		test_err("couldn't find old roots: %d", ret);
···
 		return -EINVAL;
 
 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots,
-			false);
+			false, false);
 	if (ret) {
 		ulist_free(old_roots);
 		ulist_free(new_roots);
···
 	}
 
 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
-			false);
+			false, false);
 	if (ret) {
 		ulist_free(old_roots);
 		test_err("couldn't find old roots: %d", ret);
···
 		return ret;
 
 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots,
-			false);
+			false, false);
 	if (ret) {
 		ulist_free(old_roots);
 		ulist_free(new_roots);
···
 	}
 
 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
-			false);
+			false, false);
 	if (ret) {
 		ulist_free(old_roots);
 		test_err("couldn't find old roots: %d", ret);
···
 		return ret;
 
 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots,
-			false);
+			false, false);
 	if (ret) {
 		ulist_free(old_roots);
 		ulist_free(new_roots);
···
 	}
 
 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
-			false);
+			false, false);
 	if (ret) {
 		ulist_free(old_roots);
 		test_err("couldn't find old roots: %d", ret);
···
 		return ret;
 
 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots,
-			false);
+			false, false);
 	if (ret) {
 		ulist_free(old_roots);
 		ulist_free(new_roots);
+22-9
fs/btrfs/tree-log.c
···
 		spin_lock(&inode->lock);
 		inode->logged_trans = trans->transid;
 		/*
-		 * Don't update last_log_commit if we logged that an inode exists
-		 * after it was loaded to memory (full_sync bit set).
-		 * This is to prevent data loss when we do a write to the inode,
-		 * then the inode gets evicted after all delalloc was flushed,
-		 * then we log it exists (due to a rename for example) and then
-		 * fsync it. This last fsync would do nothing (not logging the
-		 * extents previously written).
+		 * Don't update last_log_commit if we logged that an inode exists.
+		 * We do this for two reasons:
+		 *
+		 * 1) We might have had buffered writes to this inode that were
+		 *    flushed and had their ordered extents completed in this
+		 *    transaction, but we did not previously log the inode with
+		 *    LOG_INODE_ALL. Later the inode was evicted and after that
+		 *    it was loaded again and this LOG_INODE_EXISTS log operation
+		 *    happened. We must make sure that if an explicit fsync against
+		 *    the inode is performed later, it logs the new extents, an
+		 *    updated inode item, etc, and syncs the log. The same logic
+		 *    applies to direct IO writes instead of buffered writes.
+		 *
+		 * 2) When we log the inode with LOG_INODE_EXISTS, its inode item
+		 *    is logged with an i_size of 0 or whatever value was logged
+		 *    before. If later the i_size of the inode is increased by a
+		 *    truncate operation, the log is synced through an fsync of
+		 *    some other inode and then finally an explicit fsync against
+		 *    this inode is made, we must make sure this fsync logs the
+		 *    inode with the new i_size, the hole between old i_size and
+		 *    the new i_size, and syncs the log.
 		 */
-		if (inode_only != LOG_INODE_EXISTS ||
-		    !test_bit(BTRFS_INODE_NEEDS_FULL_SYNC, &inode->runtime_flags))
+		if (inode_only != LOG_INODE_EXISTS)
 			inode->last_log_commit = inode->last_sub_trans;
 		spin_unlock(&inode->lock);
 	}
+4-8
fs/btrfs/zoned.c
···
 		return;
 
 	ordered->physical = physical;
-	ordered->disk = bio->bi_bdev->bd_disk;
-	ordered->partno = bio->bi_bdev->bd_partno;
+	ordered->bdev = bio->bi_bdev;
 
 	btrfs_put_ordered_extent(ordered);
 }
···
 	struct extent_map_tree *em_tree;
 	struct extent_map *em;
 	struct btrfs_ordered_sum *sum;
-	struct block_device *bdev;
 	u64 orig_logical = ordered->disk_bytenr;
 	u64 *logical = NULL;
 	int nr, stripe_len;
 
 	/* Zoned devices should not have partitions. So, we can assume it is 0 */
-	ASSERT(ordered->partno == 0);
-	bdev = bdgrab(ordered->disk->part0);
-	if (WARN_ON(!bdev))
+	ASSERT(!bdev_is_partition(ordered->bdev));
+	if (WARN_ON(!ordered->bdev))
 		return;
 
-	if (WARN_ON(btrfs_rmap_block(fs_info, orig_logical, bdev,
+	if (WARN_ON(btrfs_rmap_block(fs_info, orig_logical, ordered->bdev,
 				     ordered->physical, &logical, &nr,
 				     &stripe_len)))
 		goto out;
···
 
 out:
 	kfree(logical);
-	bdput(bdev);
 }
 
 bool btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
+1-1
fs/ceph/mds_client.c
···
 		break;
 	case CEPH_MDS_SESSION_CLOSING:
 		/* Should never reach this when we're unmounting */
-		WARN_ON_ONCE(true);
+		WARN_ON_ONCE(s->s_ttl);
 		fallthrough;
 	case CEPH_MDS_SESSION_NEW:
 	case CEPH_MDS_SESSION_RESTARTING:
+7-3
fs/cifs/cifssmb.c
···
 				InformationLevel) - 4;
 	offset = param_offset + params;
 
-	/* Setup pointer to Request Data (inode type) */
-	pRqD = (struct unlink_psx_rq *)(((char *)&pSMB->hdr.Protocol) + offset);
+	/* Setup pointer to Request Data (inode type).
+	 * Note that SMB offsets are from the beginning of SMB which is 4 bytes
+	 * in, after RFC1001 field
+	 */
+	pRqD = (struct unlink_psx_rq *)((char *)(pSMB) + offset + 4);
 	pRqD->type = cpu_to_le16(type);
 	pSMB->ParameterOffset = cpu_to_le16(param_offset);
 	pSMB->DataOffset = cpu_to_le16(offset);
···
 	param_offset = offsetof(struct smb_com_transaction2_spi_req,
 				InformationLevel) - 4;
 	offset = param_offset + params;
-	pdata = (OPEN_PSX_REQ *)(((char *)&pSMB->hdr.Protocol) + offset);
+	/* SMB offsets are from the beginning of SMB which is 4 bytes in, after RFC1001 field */
+	pdata = (OPEN_PSX_REQ *)((char *)(pSMB) + offset + 4);
 	pdata->Level = cpu_to_le16(SMB_QUERY_FILE_UNIX_BASIC);
 	pdata->Permissions = cpu_to_le64(mode);
 	pdata->PosixOpenFlags = cpu_to_le32(posix_flags);
···
 #include "cifs_debug.h"
 #include "cifs_unicode.h"
 #include "smb2glob.h"
+#include "dns_resolve.h"
 
 #include "dfs_cache.h"
 
···
 err_free_it:
 	list_for_each_entry_safe(it, nit, head, it_list) {
+		list_del(&it->it_list);
 		kfree(it->it_name);
 		kfree(it);
 	}
···
 	return 0;
 }
 
+static bool target_share_equal(struct TCP_Server_Info *server, const char *s1, const char *s2)
+{
+	char unc[sizeof("\\\\") + SERVER_NAME_LENGTH] = {0};
+	const char *host;
+	size_t hostlen;
+	char *ip = NULL;
+	struct sockaddr sa;
+	bool match;
+	int rc;
+
+	if (strcasecmp(s1, s2))
+		return false;
+
+	/*
+	 * Resolve share's hostname and check if server address matches.  Otherwise just ignore it
+	 * as we could not have upcall to resolve hostname or failed to convert ip address.
+	 */
+	match = true;
+	extract_unc_hostname(s1, &host, &hostlen);
+	scnprintf(unc, sizeof(unc), "\\\\%.*s", (int)hostlen, host);
+
+	rc = dns_resolve_server_name_to_ip(unc, &ip, NULL);
+	if (rc < 0) {
+		cifs_dbg(FYI, "%s: could not resolve %.*s. assuming server address matches.\n",
+			 __func__, (int)hostlen, host);
+		return true;
+	}
+
+	if (!cifs_convert_address(&sa, ip, strlen(ip))) {
+		cifs_dbg(VFS, "%s: failed to convert address \'%s\'. skip address matching.\n",
+			 __func__, ip);
+	} else {
+		mutex_lock(&server->srv_mutex);
+		match = cifs_match_ipaddr((struct sockaddr *)&server->dstaddr, &sa);
+		mutex_unlock(&server->srv_mutex);
+	}
+
+	kfree(ip);
+	return match;
+}
+
+/*
+ * Mark dfs tcon for reconnecting when the currently connected tcon does not match any of the new
+ * target shares in @refs.
+ */
+static void mark_for_reconnect_if_needed(struct cifs_tcon *tcon, struct dfs_cache_tgt_list *tl,
+					 const struct dfs_info3_param *refs, int numrefs)
+{
+	struct dfs_cache_tgt_iterator *it;
+	int i;
+
+	for (it = dfs_cache_get_tgt_iterator(tl); it; it = dfs_cache_get_next_tgt(tl, it)) {
+		for (i = 0; i < numrefs; i++) {
+			if (target_share_equal(tcon->ses->server, dfs_cache_get_tgt_name(it),
+					       refs[i].node_name))
+				return;
+		}
+	}
+
+	cifs_dbg(FYI, "%s: no cached or matched targets. mark dfs share for reconnect.\n", __func__);
+	for (i = 0; i < tcon->ses->chan_count; i++) {
+		spin_lock(&GlobalMid_Lock);
+		if (tcon->ses->chans[i].server->tcpStatus != CifsExiting)
+			tcon->ses->chans[i].server->tcpStatus = CifsNeedReconnect;
+		spin_unlock(&GlobalMid_Lock);
+	}
+}
+
+/* Refresh dfs referral of tcon and mark it for reconnect if needed */
+static int refresh_tcon(struct cifs_ses **sessions, struct cifs_tcon *tcon, bool force_refresh)
+{
+	const char *path = tcon->dfs_path + 1;
+	struct cifs_ses *ses;
+	struct cache_entry *ce;
+	struct dfs_info3_param *refs = NULL;
+	int numrefs = 0;
+	bool needs_refresh = false;
+	struct dfs_cache_tgt_list tl = DFS_CACHE_TGT_LIST_INIT(tl);
+	int rc = 0;
+	unsigned int xid;
+
+	ses = find_ipc_from_server_path(sessions, path);
+	if (IS_ERR(ses)) {
+		cifs_dbg(FYI, "%s: could not find ipc session\n", __func__);
+		return PTR_ERR(ses);
+	}
+
+	down_read(&htable_rw_lock);
+	ce = lookup_cache_entry(path);
+	needs_refresh = force_refresh || IS_ERR(ce) || cache_entry_expired(ce);
+	if (!IS_ERR(ce)) {
+		rc = get_targets(ce, &tl);
+		if (rc)
+			cifs_dbg(FYI, "%s: could not get dfs targets: %d\n", __func__, rc);
+	}
+	up_read(&htable_rw_lock);
+
+	if (!needs_refresh) {
+		rc = 0;
+		goto out;
+	}
+
+	xid = get_xid();
+	rc = get_dfs_referral(xid, ses, path, &refs, &numrefs);
+	free_xid(xid);
+
+	/* Create or update a cache entry with the new referral */
+	if (!rc) {
+		dump_refs(refs, numrefs);
+
+		down_write(&htable_rw_lock);
+		ce = lookup_cache_entry(path);
+		if (IS_ERR(ce))
+			add_cache_entry_locked(refs, numrefs);
+		else if (force_refresh || cache_entry_expired(ce))
+			update_cache_entry_locked(ce, refs, numrefs);
+		up_write(&htable_rw_lock);
+
+		mark_for_reconnect_if_needed(tcon, &tl, refs, numrefs);
+	}
+
+out:
+	dfs_cache_free_tgts(&tl);
+	free_dfs_info_array(refs, numrefs);
+	return rc;
+}
+
+/**
+ * dfs_cache_remount_fs - remount a DFS share
+ *
+ * Reconfigure dfs mount by forcing a new DFS referral and if the currently cached targets do not
+ * match any of the new targets, mark it for reconnect.
+ *
+ * @cifs_sb: cifs superblock.
+ *
+ * Return zero if remounted, otherwise non-zero.
+ */
+int dfs_cache_remount_fs(struct cifs_sb_info *cifs_sb)
+{
+	struct cifs_tcon *tcon;
+	struct mount_group *mg;
+	struct cifs_ses *sessions[CACHE_MAX_ENTRIES + 1] = {NULL};
+	int rc;
+
+	if (!cifs_sb || !cifs_sb->master_tlink)
+		return -EINVAL;
+
+	tcon = cifs_sb_master_tcon(cifs_sb);
+	if (!tcon->dfs_path) {
+		cifs_dbg(FYI, "%s: not a dfs tcon\n", __func__);
+		return 0;
+	}
+
+	if (uuid_is_null(&cifs_sb->dfs_mount_id)) {
+		cifs_dbg(FYI, "%s: tcon has no dfs mount group id\n", __func__);
+		return -EINVAL;
+	}
+
+	mutex_lock(&mount_group_list_lock);
+	mg = find_mount_group_locked(&cifs_sb->dfs_mount_id);
+	if (IS_ERR(mg)) {
+		mutex_unlock(&mount_group_list_lock);
+		cifs_dbg(FYI, "%s: tcon has ipc session to refresh referral\n", __func__);
+		return PTR_ERR(mg);
+	}
+	kref_get(&mg->refcount);
+	mutex_unlock(&mount_group_list_lock);
+
+	spin_lock(&mg->lock);
+	memcpy(&sessions, mg->sessions, mg->num_sessions * sizeof(mg->sessions[0]));
+	spin_unlock(&mg->lock);
+
+	/*
+	 * After reconnecting to a different server, unique ids won't match anymore, so we disable
+	 * serverino. This prevents dentry revalidation to think the dentry are stale (ESTALE).
+	 */
+	cifs_autodisable_serverino(cifs_sb);
+	/*
+	 * Force the use of prefix path to support failover on DFS paths that resolve to targets
+	 * that have different prefix paths.
+	 */
+	cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_USE_PREFIX_PATH;
+	rc = refresh_tcon(sessions, tcon, true);
+
+	kref_put(&mg->refcount, mount_group_release);
+	return rc;
+}
+
 /*
  * Refresh all active dfs mounts regardless of whether they are in cache or not.
  * (cache can be cleared)
···
 	struct cifs_ses *ses;
 	struct cifs_tcon *tcon, *ntcon;
 	struct list_head tcons;
-	unsigned int xid;
 
 	INIT_LIST_HEAD(&tcons);
···
 	spin_unlock(&cifs_tcp_ses_lock);
 
 	list_for_each_entry_safe(tcon, ntcon, &tcons, ulist) {
-		const char *path = tcon->dfs_path + 1;
-		struct cache_entry *ce;
-		struct dfs_info3_param *refs = NULL;
-		int numrefs = 0;
-		bool needs_refresh = false;
-		int rc = 0;
-
 		list_del_init(&tcon->ulist);
-
-		ses = find_ipc_from_server_path(sessions, path);
-		if (IS_ERR(ses))
-			goto next_tcon;
-
-		down_read(&htable_rw_lock);
-		ce = lookup_cache_entry(path);
-		needs_refresh = IS_ERR(ce) || cache_entry_expired(ce);
-		up_read(&htable_rw_lock);
-
-		if (!needs_refresh)
-			goto next_tcon;
-
-		xid = get_xid();
-		rc = get_dfs_referral(xid, ses, path, &refs, &numrefs);
-		free_xid(xid);
-
-		/* Create or update a cache entry with the new referral */
-		if (!rc) {
-			down_write(&htable_rw_lock);
-			ce = lookup_cache_entry(path);
-			if (IS_ERR(ce))
-				add_cache_entry_locked(refs, numrefs);
-			else if (cache_entry_expired(ce))
-				update_cache_entry_locked(ce, refs, numrefs);
-			up_write(&htable_rw_lock);
-		}
-
-next_tcon:
-		free_dfs_info_array(refs, numrefs);
+		refresh_tcon(sessions, tcon, false);
 		cifs_put_tcon(tcon);
 	}
 }
fs/cifs/smb2ops.c
···
 			    char *buf)
 {
 	struct cifs_io_parms io_parms = {0};
-	int nbytes;
+	int rc, nbytes;
 	struct kvec iov[2];
 
 	io_parms.netfid = cfile->fid.netfid;
···
 	io_parms.tcon = tcon;
 	io_parms.persistent_fid = cfile->fid.persistent_fid;
 	io_parms.volatile_fid = cfile->fid.volatile_fid;
-	io_parms.offset = off;
-	io_parms.length = len;
 
-	/* iov[0] is reserved for smb header */
-	iov[1].iov_base = buf;
-	iov[1].iov_len = io_parms.length;
-	return SMB2_write(xid, &io_parms, &nbytes, iov, 1);
+	while (len) {
+		io_parms.offset = off;
+		io_parms.length = len;
+		if (io_parms.length > SMB2_MAX_BUFFER_SIZE)
+			io_parms.length = SMB2_MAX_BUFFER_SIZE;
+		/* iov[0] is reserved for smb header */
+		iov[1].iov_base = buf;
+		iov[1].iov_len = io_parms.length;
+		rc = SMB2_write(xid, &io_parms, &nbytes, iov, 1);
+		if (rc)
+			break;
+		if (nbytes > len)
+			return -EINVAL;
+		buf += nbytes;
+		off += nbytes;
+		len -= nbytes;
+	}
+	return rc;
 }
 
 static int smb3_simple_fallocate_range(unsigned int xid,
···
 			1024 * sizeof(struct file_allocated_range_buffer),
 			(char **)&out_data, &out_data_len);
 	if (rc)
-		goto out;
-	/*
-	 * It is already all allocated
-	 */
-	if (out_data_len == 0)
 		goto out;
 
 	buf = kzalloc(1024 * 1024, GFP_KERNEL);
···
 	if ((cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE) == 0) {
 		rc = 0;
 		goto out;
+	}
+
+	if (keep_size == true) {
+		/*
+		 * We can not preallocate pages beyond the end of the file
+		 * in SMB2
+		 */
+		if (off >= i_size_read(inode)) {
+			rc = 0;
+			goto out;
+		}
+		/*
+		 * For fallocates that are partially beyond the end of file,
+		 * clamp len so we only fallocate up to the end of file.
+		 */
+		if (off + len > i_size_read(inode)) {
+			len = i_size_read(inode) - off;
+		}
 	}
 
 	if ((keep_size == true) || (i_size_read(inode) >= off + len)) {
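The write helper above now loops rather than issuing one `SMB2_write()` for the whole buffer: each request is clamped to a maximum size and the cursor advances by the bytes actually written. A minimal userspace sketch of that chunk-and-advance pattern (the `MAX_CHUNK` cap and the toy writer are illustrative, not cifs code):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical chunk cap, standing in for SMB2_MAX_BUFFER_SIZE. */
#define MAX_CHUNK 4096

typedef size_t (*write_fn)(const char *buf, size_t len);

/*
 * Clamp each request to MAX_CHUNK, advance by the bytes actually
 * written, and stop on error or on a bogus count from the writer.
 */
static int chunked_write(write_fn do_write, const char *buf, size_t len)
{
	size_t chunk, nbytes;

	while (len) {
		chunk = len > MAX_CHUNK ? MAX_CHUNK : len;
		nbytes = do_write(buf, chunk);
		if (nbytes == 0)
			return -1;	/* error: nothing was written */
		if (nbytes > len)
			return -1;	/* bogus count from the writer */
		buf += nbytes;
		len -= nbytes;
	}
	return 0;
}

/* Toy writer that accepts at most 1000 bytes per call (a short write). */
static size_t total_written;

static size_t toy_write(const char *buf, size_t len)
{
	(void)buf;
	if (len > 1000)
		len = 1000;
	total_written += len;
	return len;
}

/* Drive the loop over a static buffer and report the bytes consumed. */
static size_t demo(size_t len)
{
	static char buf[16384];

	total_written = 0;
	if (chunked_write(toy_write, buf, len) != 0)
		return (size_t)-1;
	return total_written;
}
```

The short-write handling is the important part: the toy writer never accepts more than 1000 bytes, yet the loop still drains the full buffer.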
+3
fs/fs-writeback.c
···
 	 */
 	smp_mb();
 
+	if (IS_DAX(inode))
+		return false;
+
 	/* while holding I_WB_SWITCH, no one else can update the association */
 	spin_lock(&inode->i_lock);
 	if (!(inode->i_sb->s_flags & SB_ACTIVE) ||
fs/io-wq.c
···
 	int work_flags;
 	unsigned long flags;
 
-	if (test_bit(IO_WQ_BIT_EXIT, &wqe->wq->state)) {
+	/*
+	 * If io-wq is exiting for this task, or if the request has explicitly
+	 * been marked as one that should not get executed, cancel it here.
+	 */
+	if (test_bit(IO_WQ_BIT_EXIT, &wqe->wq->state) ||
+	    (work->flags & IO_WQ_WORK_CANCEL)) {
 		io_run_cancel(work, wqe);
 		return;
 	}
+39-16
fs/io_uring.c
···
 
 	/* init ->work of the whole link before punting */
 	io_prep_async_link(req);
+
+	/*
+	 * Not expected to happen, but if we do have a bug where this _can_
+	 * happen, catch it here and ensure the request is marked as
+	 * canceled. That will make io-wq go through the usual work cancel
+	 * procedure rather than attempt to run this request (or create a new
+	 * worker for it).
+	 */
+	if (WARN_ON_ONCE(!same_thread_group(req->task, current)))
+		req->work.flags |= IO_WQ_WORK_CANCEL;
+
 	trace_io_uring_queue_async_work(ctx, io_wq_is_hashed(&req->work), req,
 					&req->work, req->flags);
 	io_wq_enqueue(tctx->io_wq, &req->work);
···
  * Find and free completed poll iocbs
  */
 static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
-			       struct list_head *done)
+			       struct list_head *done, bool resubmit)
 {
 	struct req_batch rb;
 	struct io_kiocb *req;
···
 		req = list_first_entry(done, struct io_kiocb, inflight_entry);
 		list_del(&req->inflight_entry);
 
-		if (READ_ONCE(req->result) == -EAGAIN &&
+		if (READ_ONCE(req->result) == -EAGAIN && resubmit &&
 		    !(req->flags & REQ_F_DONT_REISSUE)) {
 			req->iopoll_completed = 0;
 			req_ref_get(req);
···
 }
 
 static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
-			long min)
+			long min, bool resubmit)
 {
 	struct io_kiocb *req, *tmp;
 	LIST_HEAD(done);
···
 	}
 
 	if (!list_empty(&done))
-		io_iopoll_complete(ctx, nr_events, &done);
+		io_iopoll_complete(ctx, nr_events, &done, resubmit);
 
 	return ret;
 }
···
 	while (!list_empty(&ctx->iopoll_list)) {
 		unsigned int nr_events = 0;
 
-		io_do_iopoll(ctx, &nr_events, 0);
+		io_do_iopoll(ctx, &nr_events, 0, false);
 
 		/* let it sleep and repeat later if can't complete a request */
 		if (nr_events == 0)
···
 			    list_empty(&ctx->iopoll_list))
 				break;
 		}
-		ret = io_do_iopoll(ctx, &nr_events, min);
+		ret = io_do_iopoll(ctx, &nr_events, min, true);
 	} while (!ret && nr_events < min && !need_resched());
out:
 	mutex_unlock(&ctx->uring_lock);
···
struct io_poll_table {
 	struct poll_table_struct pt;
 	struct io_kiocb *req;
+	int nr_entries;
 	int error;
};
···
 	struct io_kiocb *req = pt->req;
 
 	/*
-	 * If poll->head is already set, it's because the file being polled
-	 * uses multiple waitqueues for poll handling (eg one for read, one
-	 * for write). Setup a separate io_poll_iocb if this happens.
+	 * The file being polled uses multiple waitqueues for poll handling
+	 * (e.g. one for read, one for write). Setup a separate io_poll_iocb
+	 * if this happens.
 	 */
-	if (unlikely(poll->head)) {
+	if (unlikely(pt->nr_entries)) {
 		struct io_poll_iocb *poll_one = poll;
 
 		/* already have a 2nd entry, fail a third attempt */
···
 		*poll_ptr = poll;
 	}
 
-	pt->error = 0;
+	pt->nr_entries++;
 	poll->head = head;
 
 	if (poll->events & EPOLLEXCLUSIVE)
···
 
 	ipt->pt._key = mask;
 	ipt->req = req;
-	ipt->error = -EINVAL;
+	ipt->error = 0;
+	ipt->nr_entries = 0;
 
 	mask = vfs_poll(req->file, &ipt->pt) & poll->events;
+	if (unlikely(!ipt->nr_entries) && !ipt->error)
+		ipt->error = -EINVAL;
 
 	spin_lock_irq(&ctx->completion_lock);
+	if (ipt->error)
+		io_poll_remove_double(req);
 	if (likely(poll->head)) {
 		spin_lock(&poll->head->lock);
 		if (unlikely(list_empty(&poll->wait.entry))) {
···
 
 	mutex_lock(&ctx->uring_lock);
 	if (!list_empty(&ctx->iopoll_list))
-		io_do_iopoll(ctx, &nr_events, 0);
+		io_do_iopoll(ctx, &nr_events, 0, true);
 
 	/*
 	 * Don't submit if refs are dying, good for io_uring_register(),
···
 	struct io_wq_data data;
 	unsigned int concurrency;
 
+	mutex_lock(&ctx->uring_lock);
 	hash = ctx->hash_map;
 	if (!hash) {
 		hash = kzalloc(sizeof(*hash), GFP_KERNEL);
-		if (!hash)
+		if (!hash) {
+			mutex_unlock(&ctx->uring_lock);
 			return ERR_PTR(-ENOMEM);
+		}
 		refcount_set(&hash->refs, 1);
 		init_waitqueue_head(&hash->wait);
 		ctx->hash_map = hash;
 	}
+	mutex_unlock(&ctx->uring_lock);
 
 	data.hash = hash;
 	data.task = task;
···
 		f = fdget(p->wq_fd);
 		if (!f.file)
 			return -ENXIO;
-		fdput(f);
-		if (f.file->f_op != &io_uring_fops)
+		if (f.file->f_op != &io_uring_fops) {
+			fdput(f);
 			return -EINVAL;
+		}
+		fdput(f);
 	}
 	if (ctx->flags & IORING_SETUP_SQPOLL) {
 		struct task_struct *tsk;
+3
fs/seq_file.c
···
 
static void *seq_buf_alloc(unsigned long size)
{
+	if (unlikely(size > MAX_RW_COUNT))
+		return NULL;
+
 	return kvmalloc(size, GFP_KERNEL_ACCOUNT);
}
+12-14
fs/userfaultfd.c
···
}
 
static __always_inline int validate_range(struct mm_struct *mm,
-					  __u64 *start, __u64 len)
+					  __u64 start, __u64 len)
{
 	__u64 task_size = mm->task_size;
 
-	*start = untagged_addr(*start);
-
-	if (*start & ~PAGE_MASK)
+	if (start & ~PAGE_MASK)
 		return -EINVAL;
 	if (len & ~PAGE_MASK)
 		return -EINVAL;
 	if (!len)
 		return -EINVAL;
-	if (*start < mmap_min_addr)
+	if (start < mmap_min_addr)
 		return -EINVAL;
-	if (*start >= task_size)
+	if (start >= task_size)
 		return -EINVAL;
-	if (len > task_size - *start)
+	if (len > task_size - start)
 		return -EINVAL;
 	return 0;
}
···
 		vm_flags |= VM_UFFD_MINOR;
 	}
 
-	ret = validate_range(mm, &uffdio_register.range.start,
+	ret = validate_range(mm, uffdio_register.range.start,
 			     uffdio_register.range.len);
 	if (ret)
 		goto out;
···
 	if (copy_from_user(&uffdio_unregister, buf, sizeof(uffdio_unregister)))
 		goto out;
 
-	ret = validate_range(mm, &uffdio_unregister.start,
+	ret = validate_range(mm, uffdio_unregister.start,
 			     uffdio_unregister.len);
 	if (ret)
 		goto out;
···
 	if (copy_from_user(&uffdio_wake, buf, sizeof(uffdio_wake)))
 		goto out;
 
-	ret = validate_range(ctx->mm, &uffdio_wake.start, uffdio_wake.len);
+	ret = validate_range(ctx->mm, uffdio_wake.start, uffdio_wake.len);
 	if (ret)
 		goto out;
 
···
 			   sizeof(uffdio_copy)-sizeof(__s64)))
 		goto out;
 
-	ret = validate_range(ctx->mm, &uffdio_copy.dst, uffdio_copy.len);
+	ret = validate_range(ctx->mm, uffdio_copy.dst, uffdio_copy.len);
 	if (ret)
 		goto out;
 	/*
···
 			   sizeof(uffdio_zeropage)-sizeof(__s64)))
 		goto out;
 
-	ret = validate_range(ctx->mm, &uffdio_zeropage.range.start,
+	ret = validate_range(ctx->mm, uffdio_zeropage.range.start,
 			     uffdio_zeropage.range.len);
 	if (ret)
 		goto out;
···
 			   sizeof(struct uffdio_writeprotect)))
 		return -EFAULT;
 
-	ret = validate_range(ctx->mm, &uffdio_wp.range.start,
+	ret = validate_range(ctx->mm, uffdio_wp.range.start,
 			     uffdio_wp.range.len);
 	if (ret)
 		return ret;
···
 			   sizeof(uffdio_continue) - (sizeof(__s64))))
 		goto out;
 
-	ret = validate_range(ctx->mm, &uffdio_continue.range.start,
+	ret = validate_range(ctx->mm, uffdio_continue.range.start,
 			     uffdio_continue.range.len);
 	if (ret)
 		goto out;
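The patch changes `validate_range()` to take the address by value, leaving untagging to the callers that need it. The checks themselves can be sketched in plain userspace C; `PAGE_SIZE`, `MIN_ADDR` and `TASK_SIZE` below are illustrative stand-ins for the kernel's page size, `mmap_min_addr` and `mm->task_size`:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u
#define PAGE_MASK (~(uint64_t)(PAGE_SIZE - 1))

/* Illustrative bounds, not the kernel's actual values. */
#define MIN_ADDR  0x10000ull
#define TASK_SIZE 0x800000000000ull

/*
 * Mirror of the validate_range() checks after the patch: start and len
 * must be page aligned, len must be non-zero, and [start, start + len)
 * must lie inside [MIN_ADDR, TASK_SIZE) without overflowing.
 */
static int validate_range(uint64_t start, uint64_t len)
{
	if (start & ~PAGE_MASK)
		return -1;	/* start not page aligned */
	if (len & ~PAGE_MASK)
		return -1;	/* len not page aligned */
	if (!len)
		return -1;	/* empty range */
	if (start < MIN_ADDR)
		return -1;	/* below the minimum mappable address */
	if (start >= TASK_SIZE)
		return -1;	/* beyond the task's address space */
	if (len > TASK_SIZE - start)
		return -1;	/* range would overflow task_size */
	return 0;
}
```

Note the last check is written as `len > TASK_SIZE - start` rather than `start + len > TASK_SIZE`, so it stays correct even when `start + len` would wrap around.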
+2-6
include/acpi/acpi_bus.h
···
 * @hrv: Hardware Revision of the device, pass -1 to not check _HRV
 *
 * The caller is responsible for invoking acpi_dev_put() on the returned device.
- *
- * FIXME: Due to above requirement there is a window that may invalidate @adev
- * and next iteration will use a dangling pointer, e.g. in the case of a
- * hotplug event. That said, the caller should ensure that this will never
- * happen.
 */
#define for_each_acpi_dev_match(adev, hid, uid, hrv)			\
	for (adev = acpi_dev_get_first_match_dev(hid, uid, hrv);	\
···
 
static inline void acpi_dev_put(struct acpi_device *adev)
{
-	put_device(&adev->dev);
+	if (adev)
+		put_device(&adev->dev);
}
 
struct acpi_device *acpi_bus_get_acpi_device(acpi_handle handle);
+1
include/drm/drm_ioctl.h
···
 			unsigned long arg);
 
#define DRM_IOCTL_NR(n)		_IOC_NR(n)
+#define DRM_IOCTL_TYPE(n)	_IOC_TYPE(n)
#define DRM_MAJOR       226
 
/**
+1-1
include/linux/blkdev.h
···
 * Maximum number of blkcg policies allowed to be registered concurrently.
 * Defined here to simplify include dependency.
 */
-#define BLKCG_MAX_POLS		5
+#define BLKCG_MAX_POLS		6
 
typedef void (rq_end_io_fn)(struct request *, blk_status_t);
include/sound/soc.h
···
	/* Do not create a PCM for this DAI link (Backend link) */
	unsigned int ignore:1;
 
+	/* This flag will reorder stop sequence. By enabling this flag
+	 * DMA controller stop sequence will be invoked first followed by
+	 * CPU DAI driver stop sequence
+	 */
+	unsigned int stop_dma_first:1;
+
#ifdef CONFIG_SND_SOC_TOPOLOGY
	struct snd_soc_dobj dobj; /* For topology */
#endif
kernel/bpf/verifier.c
···
 		if (tail_call_reachable)
 			for (j = 0; j < frame; j++)
 				subprog[ret_prog[j]].tail_call_reachable = true;
+		if (subprog[0].tail_call_reachable)
+			env->prog->aux->tail_call_reachable = true;
 
 		/* end of for() loop means the last insn of the 'subprog'
 		 * was reached. Doesn't matter whether it was JA or EXIT
+10-2
kernel/dma/ops_helpers.c
···
 */
#include <linux/dma-map-ops.h>
 
+static struct page *dma_common_vaddr_to_page(void *cpu_addr)
+{
+	if (is_vmalloc_addr(cpu_addr))
+		return vmalloc_to_page(cpu_addr);
+	return virt_to_page(cpu_addr);
+}
+
/*
 * Create scatter-list for the already allocated DMA buffer.
 */
···
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs)
{
-	struct page *page = virt_to_page(cpu_addr);
+	struct page *page = dma_common_vaddr_to_page(cpu_addr);
	int ret;
 
	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
···
	unsigned long user_count = vma_pages(vma);
	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	unsigned long off = vma->vm_pgoff;
+	struct page *page = dma_common_vaddr_to_page(cpu_addr);
	int ret = -ENXIO;
 
	vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
···
		return -ENXIO;
 
	return remap_pfn_range(vma, vma->vm_start,
-			page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
+			page_to_pfn(page) + vma->vm_pgoff,
			user_count << PAGE_SHIFT, vma->vm_page_prot);
#else
	return -ENXIO;
+1-1
kernel/smpboot.c
···
 *
 * Creates the thread if it does not exist.
 */
-static inline void idle_init(unsigned int cpu)
+static __always_inline void idle_init(unsigned int cpu)
{
	struct task_struct *tsk = per_cpu(idle_threads, cpu);
 
+5-5
kernel/time/posix-cpu-timers.c
···
	if (!p)
		goto out;
 
+	/* Protect timer list r/w in arm_timer() */
+	sighand = lock_task_sighand(p, &flags);
+	if (unlikely(sighand == NULL))
+		goto out;
+
	/*
	 * Fetch the current sample and update the timer's expiry time.
	 */
···
		now = cpu_clock_sample_group(clkid, p, true);
 
	bump_cpu_timer(timer, now);
-
-	/* Protect timer list r/w in arm_timer() */
-	sighand = lock_task_sighand(p, &flags);
-	if (unlikely(sighand == NULL))
-		goto out;
 
	/*
	 * Now re-arm for the new expiry time.
+5-3
kernel/time/timer.c
···
	unsigned int		cpu;
	bool			next_expiry_recalc;
	bool			is_idle;
+	bool			timers_pending;
	DECLARE_BITMAP(pending_map, WHEEL_SIZE);
	struct hlist_head	vectors[WHEEL_SIZE];
} ____cacheline_aligned;
···
	 * can reevaluate the wheel:
	 */
	base->next_expiry = bucket_expiry;
+	base->timers_pending = true;
	base->next_expiry_recalc = false;
	trigger_dyntick_cpu(base, timer);
	}
···
	}
 
	base->next_expiry_recalc = false;
+	base->timers_pending = !(next == base->clk + NEXT_TIMER_MAX_DELTA);
 
	return next;
}
···
	struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]);
	u64 expires = KTIME_MAX;
	unsigned long nextevt;
-	bool is_max_delta;
 
	/*
	 * Pretend that there is no timer pending if the cpu is offline.
···
	if (base->next_expiry_recalc)
		base->next_expiry = __next_timer_interrupt(base);
	nextevt = base->next_expiry;
-	is_max_delta = (nextevt == base->clk + NEXT_TIMER_MAX_DELTA);
 
	/*
	 * We have a fresh next event. Check whether we can forward the
···
		expires = basem;
		base->is_idle = false;
	} else {
-		if (!is_max_delta)
+		if (base->timers_pending)
			expires = basem + (u64)(nextevt - basej) * TICK_NSEC;
		/*
		 * If we expect to sleep more than a tick, mark the base idle.
···
		base = per_cpu_ptr(&timer_bases[b], cpu);
		base->clk = jiffies;
		base->next_expiry = base->clk + NEXT_TIMER_MAX_DELTA;
+		base->timers_pending = false;
		base->is_idle = false;
	}
	return 0;
kernel/trace/ring_buffer.c
···
	if (unlikely(!head))
		return true;
 
-	return reader->read == rb_page_commit(reader) &&
-		(commit == reader ||
-		 (commit == head &&
-		  head->read == rb_page_commit(commit)));
+	/* Reader should exhaust content in reader page */
+	if (reader->read != rb_page_commit(reader))
+		return false;
+
+	/*
+	 * If writers are committing on the reader page, knowing all
+	 * committed content has been read, the ring buffer is empty.
+	 */
+	if (commit == reader)
+		return true;
+
+	/*
+	 * If writers are committing on a page other than reader page
+	 * and head page, there should always be content to read.
+	 */
+	if (commit != head)
+		return false;
+
+	/*
+	 * Writers are committing on the head page, so we just need to
+	 * check whether there is committed data; the reader will swap
+	 * the reader page with the head page when it is to read data.
+	 */
+	return rb_page_commit(commit) == 0;
}
 
/**
+4
kernel/trace/trace.c
···
	"\t            [:name=histname1]\n"
	"\t            [:<handler>.<action>]\n"
	"\t            [if <filter>]\n\n"
+	"\t    Note, special fields can be used as well:\n"
+	"\t            common_timestamp - to record current timestamp\n"
+	"\t            common_cpu - to record the CPU the event happened on\n"
+	"\n"
	"\t    When a matching event is hit, an entry is added to a hash\n"
	"\t    table using the key(s) and value(s) named, and the value of a\n"
	"\t    sum called 'hitcount' is incremented.  Keys and values\n"
+16-6
kernel/trace/trace_events_hist.c
···
		 field->flags & HIST_FIELD_FL_ALIAS)
		field_name = hist_field_name(field->operands[0], ++level);
	else if (field->flags & HIST_FIELD_FL_CPU)
-		field_name = "cpu";
+		field_name = "common_cpu";
	else if (field->flags & HIST_FIELD_FL_EXPR ||
		 field->flags & HIST_FIELD_FL_VAR_REF) {
		if (field->system) {
···
		hist_data->enable_timestamps = true;
		if (*flags & HIST_FIELD_FL_TIMESTAMP_USECS)
			hist_data->attrs->ts_in_usecs = true;
-	} else if (strcmp(field_name, "cpu") == 0)
+	} else if (strcmp(field_name, "common_cpu") == 0)
		*flags |= HIST_FIELD_FL_CPU;
	else {
		field = trace_find_event_field(file->event_call, field_name);
		if (!field || !field->size) {
-			hist_err(tr, HIST_ERR_FIELD_NOT_FOUND, errpos(field_name));
-			field = ERR_PTR(-EINVAL);
-			goto out;
+			/*
+			 * For backward compatibility, if field_name
+			 * was "cpu", then we treat this the same as
+			 * common_cpu.
+			 */
+			if (strcmp(field_name, "cpu") == 0) {
+				*flags |= HIST_FIELD_FL_CPU;
+			} else {
+				hist_err(tr, HIST_ERR_FIELD_NOT_FOUND,
+					 errpos(field_name));
+				field = ERR_PTR(-EINVAL);
+				goto out;
+			}
		}
	}
 out:
···
		seq_printf(m, "%s=", hist_field->var.name);
 
	if (hist_field->flags & HIST_FIELD_FL_CPU)
-		seq_puts(m, "cpu");
+		seq_puts(m, "common_cpu");
	else if (field_name) {
		if (hist_field->flags & HIST_FIELD_FL_VAR_REF ||
		    hist_field->flags & HIST_FIELD_FL_ALIAS)
+3-5
kernel/trace/trace_events_synth.c
···
	dyn_event_init(&event->devent, &synth_event_ops);
 
	for (i = 0, j = 0; i < n_fields; i++) {
+		fields[i]->field_pos = i;
		event->fields[i] = fields[i];
 
-		if (fields[i]->is_dynamic) {
-			event->dynamic_fields[j] = fields[i];
-			event->dynamic_fields[j]->field_pos = i;
+		if (fields[i]->is_dynamic)
			event->dynamic_fields[j++] = fields[i];
-			event->n_dynamic_fields++;
-		}
	}
+	event->n_dynamic_fields = j;
	event->n_fields = n_fields;
 out:
	return event;
kernel/tracepoint.c
···
	 * a pointer to it.  This array is referenced by __DO_TRACE from
	 * include/linux/tracepoint.h using rcu_dereference_sched().
	 */
-	rcu_assign_pointer(tp->funcs, tp_funcs);
	tracepoint_update_call(tp, tp_funcs, false);
+	rcu_assign_pointer(tp->funcs, tp_funcs);
	static_key_enable(&tp->key);
 
	release_probes(old);
mm/kfence/core.c
···
void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
{
	/*
+	 * Perform size check before switching kfence_allocation_gate, so that
+	 * we don't disable KFENCE without making an allocation.
+	 */
+	if (size > PAGE_SIZE)
+		return NULL;
+
+	/*
+	 * Skip allocations from non-default zones, including DMA. We cannot
+	 * guarantee that pages in the KFENCE pool will have the requested
+	 * properties (e.g. reside in DMAable memory).
+	 */
+	if ((flags & GFP_ZONEMASK) ||
+	    (s->flags & (SLAB_CACHE_DMA | SLAB_CACHE_DMA32)))
+		return NULL;
+
+	/*
	 * allocation_gate only needs to become non-zero, so it doesn't make
	 * sense to continue writing to it and pay the associated contention
	 * cost, in case we have a large number of concurrent allocations.
···
#endif
 
	if (!READ_ONCE(kfence_enabled))
-		return NULL;
-
-	if (size > PAGE_SIZE)
		return NULL;
 
	return kfence_guarded_alloc(s, size, flags);
mm/page_alloc.c
···
	}
#endif
 
-	if (_init_on_alloc_enabled_early) {
-		if (page_poisoning_requested)
-			pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
-				"will take precedence over init_on_alloc\n");
-		else
-			static_branch_enable(&init_on_alloc);
-	}
-	if (_init_on_free_enabled_early) {
-		if (page_poisoning_requested)
-			pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
-				"will take precedence over init_on_free\n");
-		else
-			static_branch_enable(&init_on_free);
-	}
+	if ((_init_on_alloc_enabled_early || _init_on_free_enabled_early) &&
+	    page_poisoning_requested) {
+		pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
+			"will take precedence over init_on_alloc and init_on_free\n");
+		_init_on_alloc_enabled_early = false;
+		_init_on_free_enabled_early = false;
+	}
+
+	if (_init_on_alloc_enabled_early)
+		static_branch_enable(&init_on_alloc);
+	else
+		static_branch_disable(&init_on_alloc);
+
+	if (_init_on_free_enabled_early)
+		static_branch_enable(&init_on_free);
+	else
+		static_branch_disable(&init_on_free);
 
#ifdef CONFIG_DEBUG_PAGEALLOC
	if (!debug_pagealloc_enabled())
net/caif/caif_socket.c
···
		goto err;
 
	ret = -EINVAL;
-	if (unlikely(msg->msg_iter.iov->iov_base == NULL))
+	if (unlikely(msg->msg_iter.nr_segs == 0) ||
+	    unlikely(msg->msg_iter.iov->iov_base == NULL))
		goto err;
	noblock = msg->msg_flags & MSG_DONTWAIT;
 
+26-8
net/core/dev.c
···
#include <trace/events/napi.h>
#include <trace/events/net.h>
#include <trace/events/skb.h>
+#include <trace/events/qdisc.h>
#include <linux/inetdevice.h>
#include <linux/cpu_rmap.h>
#include <linux/static_key.h>
···
	}
}
 
+static int dev_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *q,
+			     struct sk_buff **to_free,
+			     struct netdev_queue *txq)
+{
+	int rc;
+
+	rc = q->enqueue(skb, q, to_free) & NET_XMIT_MASK;
+	if (rc == NET_XMIT_SUCCESS)
+		trace_qdisc_enqueue(q, txq, skb);
+	return rc;
+}
+
static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
				 struct net_device *dev,
				 struct netdev_queue *txq)
···
	 * of q->seqlock to protect from racing with requeuing.
	 */
	if (unlikely(!nolock_qdisc_is_empty(q))) {
-		rc = q->enqueue(skb, q, &to_free) &
-			NET_XMIT_MASK;
+		rc = dev_qdisc_enqueue(skb, q, &to_free, txq);
		__qdisc_run(q);
		qdisc_run_end(q);
···
			return NET_XMIT_SUCCESS;
		}
 
-		rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
+		rc = dev_qdisc_enqueue(skb, q, &to_free, txq);
		qdisc_run(q);
 
no_lock_out:
···
		qdisc_run_end(q);
		rc = NET_XMIT_SUCCESS;
	} else {
-		rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
+		rc = dev_qdisc_enqueue(skb, q, &to_free, txq);
		if (qdisc_run_begin(q)) {
			if (unlikely(contended)) {
				spin_unlock(&q->busylock);
···
	struct net_device *dev;
	int err, fd;
 
+	rtnl_lock();
	dev = dev_get_by_index(net, attr->link_create.target_ifindex);
-	if (!dev)
+	if (!dev) {
+		rtnl_unlock();
		return -EINVAL;
+	}
 
	link = kzalloc(sizeof(*link), GFP_USER);
	if (!link) {
		err = -ENOMEM;
-		goto out_put_dev;
+		goto unlock;
	}
 
	bpf_link_init(&link->link, BPF_LINK_TYPE_XDP, &bpf_xdp_link_lops, prog);
···
	err = bpf_link_prime(&link->link, &link_primer);
	if (err) {
		kfree(link);
-		goto out_put_dev;
+		goto unlock;
	}
 
-	rtnl_lock();
	err = dev_xdp_attach_link(dev, NULL, link);
	rtnl_unlock();
 
	if (err) {
+		link->dev = NULL;
		bpf_link_cleanup(&link_primer);
		goto out_put_dev;
	}
···
	/* link itself doesn't hold dev's refcnt to not complicate shutdown */
	dev_put(dev);
	return fd;
+
+unlock:
+	rtnl_unlock();
 
out_put_dev:
	dev_put(dev);
+16-2
net/core/skbuff.c
···
	if (skb->cloned &&
	    atomic_sub_return(skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1,
			      &shinfo->dataref))
-		return;
+		goto exit;
 
	skb_zcopy_clear(skb, true);
···
		kfree_skb_list(shinfo->frag_list);
 
	skb_free_head(skb);
+exit:
+	/* When we clone an SKB we copy the recycling bit. The pp_recycle
+	 * bit is only set on the head though, so in order to avoid races
+	 * while trying to recycle fragments on __skb_frag_unref() we need
+	 * to make one SKB responsible for triggering the recycle path.
+	 * So disable the recycling bit if an SKB is cloned and we have
+	 * additional references to the fragmented part of the SKB.
+	 * Eventually the last SKB will have the recycling bit set and its
+	 * dataref set to 0, which will trigger the recycling
+	 */
+	skb->pp_recycle = 0;
}
 
/*
···
 
	if (!from->head_frag ||
	    skb_headlen(from) < L1_CACHE_BYTES ||
-	    skb_shinfo(from)->nr_frags >= MAX_SKB_FRAGS)
+	    skb_shinfo(from)->nr_frags >= MAX_SKB_FRAGS) {
		hlen = skb_headlen(from);
+		if (!hlen)
+			hlen = from->len;
+	}
 
	if (skb_has_frag_list(from))
		hlen = from->len;
+11-5
net/core/skmsg.c
···
	if (skb_linearize(skb))
		return -EAGAIN;
	num_sge = skb_to_sgvec(skb, msg->sg.data, 0, skb->len);
-	if (unlikely(num_sge < 0)) {
-		kfree(msg);
+	if (unlikely(num_sge < 0))
		return num_sge;
-	}
 
	copied = skb->len;
	msg->sg.start = 0;
···
{
	struct sock *sk = psock->sk;
	struct sk_msg *msg;
+	int err;
 
	/* If we are receiving on the same sock skb->sk is already assigned,
	 * skip memory accounting and owner transition seeing it already set
···
	 * into user buffers.
	 */
	skb_set_owner_r(skb, sk);
-	return sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
+	err = sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
+	if (err < 0)
+		kfree(msg);
+	return err;
}
 
/* Puts an skb on the ingress queue of the socket already assigned to the
···
{
	struct sk_msg *msg = kzalloc(sizeof(*msg), __GFP_NOWARN | GFP_ATOMIC);
	struct sock *sk = psock->sk;
+	int err;
 
	if (unlikely(!msg))
		return -EAGAIN;
	sk_msg_init(msg);
	skb_set_owner_r(skb, sk);
-	return sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
+	err = sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
+	if (err < 0)
+		kfree(msg);
+	return err;
}
 
static int sk_psock_handle_skb(struct sk_psock *psock, struct sk_buff *skb,
net/dsa/tag_ksz.c
···
	u8 *tag;
	u8 *addr;
 
+	if (skb->ip_summed == CHECKSUM_PARTIAL && skb_checksum_help(skb))
+		return NULL;
+
	/* Tag encoding */
	tag = skb_put(skb, KSZ_INGRESS_TAG_LEN);
	addr = skb_mac_header(skb);
···
	u8 *addr;
	u16 val;
 
+	if (skb->ip_summed == CHECKSUM_PARTIAL && skb_checksum_help(skb))
+		return NULL;
+
	/* Tag encoding */
	tag = skb_put(skb, KSZ9477_INGRESS_TAG_LEN);
	addr = skb_mac_header(skb);
···
	struct dsa_port *dp = dsa_slave_to_port(dev);
	u8 *addr;
	u8 *tag;
+
+	if (skb->ip_summed == CHECKSUM_PARTIAL && skb_checksum_help(skb))
+		return NULL;
 
	/* Tag encoding */
	tag = skb_put(skb, KSZ_INGRESS_TAG_LEN);
+1-1
net/ipv4/tcp_bpf.c
···
	tcp_bpf_rebuild_protos(tcp_bpf_prots[TCP_BPF_IPV4], &tcp_prot);
	return 0;
}
-core_initcall(tcp_bpf_v4_build_proto);
+late_initcall(tcp_bpf_v4_build_proto);
 
static int tcp_bpf_assert_proto_ops(struct proto *ops)
{
+24-4
net/ipv4/tcp_fastopen.c
···
{
	struct net *net = sock_net(sk);
 
+	if (!sock_net(sk)->ipv4.sysctl_tcp_fastopen_blackhole_timeout)
+		return;
+
+	/* Paired with READ_ONCE() in tcp_fastopen_active_should_disable() */
+	WRITE_ONCE(net->ipv4.tfo_active_disable_stamp, jiffies);
+
+	/* Paired with smp_rmb() in tcp_fastopen_active_should_disable().
+	 * We want net->ipv4.tfo_active_disable_stamp to be updated first.
+	 */
+	smp_mb__before_atomic();
	atomic_inc(&net->ipv4.tfo_active_disable_times);
-	net->ipv4.tfo_active_disable_stamp = jiffies;
+
	NET_INC_STATS(net, LINUX_MIB_TCPFASTOPENBLACKHOLE);
}
···
bool tcp_fastopen_active_should_disable(struct sock *sk)
{
	unsigned int tfo_bh_timeout = sock_net(sk)->ipv4.sysctl_tcp_fastopen_blackhole_timeout;
-	int tfo_da_times = atomic_read(&sock_net(sk)->ipv4.tfo_active_disable_times);
	unsigned long timeout;
+	int tfo_da_times;
	int multiplier;
 
+	if (!tfo_bh_timeout)
+		return false;
+
+	tfo_da_times = atomic_read(&sock_net(sk)->ipv4.tfo_active_disable_times);
	if (!tfo_da_times)
		return false;
 
+	/* Paired with smp_mb__before_atomic() in tcp_fastopen_active_disable() */
+	smp_rmb();
+
	/* Limit timeout to max: 2^6 * initial timeout */
	multiplier = 1 << min(tfo_da_times - 1, 6);
-	timeout = multiplier * tfo_bh_timeout * HZ;
-	if (time_before(jiffies, sock_net(sk)->ipv4.tfo_active_disable_stamp + timeout))
+
+	/* Paired with the WRITE_ONCE() in tcp_fastopen_active_disable(). */
+	timeout = READ_ONCE(sock_net(sk)->ipv4.tfo_active_disable_stamp) +
+		  multiplier * tfo_bh_timeout * HZ;
+	if (time_before(jiffies, timeout))
		return true;
 
	/* Mark check bit so we can check for successful active TFO
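The hunk above establishes a publish/observe ordering: the writer stores the timestamp before bumping the counter, and the reader loads the counter before reading the timestamp, so a nonzero count implies a valid stamp. A single-threaded userspace sketch of the same structure using C11 atomics (it keeps the ordering but omits the kernel's exponential backoff multiplier, and all names are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

static _Atomic int disable_times;
static _Atomic uint64_t disable_stamp;

/* Writer: publish the stamp first, then release it via the counter bump
 * (the release here plays the role of smp_mb__before_atomic()). */
static void blackhole_disable(uint64_t now)
{
	atomic_store_explicit(&disable_stamp, now, memory_order_relaxed);
	/* pairs with the acquire load in should_disable() */
	atomic_fetch_add_explicit(&disable_times, 1, memory_order_release);
}

/* Reader: observe the counter first (acquire, playing the role of
 * smp_rmb()); only if it is nonzero is the stamp meaningful. */
static int should_disable(uint64_t now, uint64_t timeout)
{
	int times = atomic_load_explicit(&disable_times,
					 memory_order_acquire);
	if (!times)
		return 0;
	return now < atomic_load_explicit(&disable_stamp,
					  memory_order_relaxed) + timeout;
}
```

Without the release/acquire pairing, a reader could see the incremented counter but a stale (possibly zero) timestamp, which is exactly the race the patch closes.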
+1-1
net/ipv4/tcp_ipv4.c
···
	net->ipv4.sysctl_tcp_comp_sack_nr = 44;
	net->ipv4.sysctl_tcp_fastopen = TFO_CLIENT_ENABLE;
	spin_lock_init(&net->ipv4.tcp_fastopen_ctx_lock);
-	net->ipv4.sysctl_tcp_fastopen_blackhole_timeout = 60 * 60;
+	net->ipv4.sysctl_tcp_fastopen_blackhole_timeout = 0;
	atomic_set(&net->ipv4.tfo_active_disable_times, 0);
 
	/* Reno is always built in */
+19-6
net/ipv4/udp.c
···
					  const struct iphdr *iph,
					  struct udphdr *uh,
					  struct udp_table *udptable,
+					  struct sock *sk,
					  struct sk_buff *skb, u32 info)
{
+	int (*lookup)(struct sock *sk, struct sk_buff *skb);
	int network_offset, transport_offset;
-	struct sock *sk;
+	struct udp_sock *up;
 
	network_offset = skb_network_offset(skb);
	transport_offset = skb_transport_offset(skb);
···
	/* Transport header needs to point to the UDP header */
	skb_set_transport_header(skb, iph->ihl << 2);
 
+	if (sk) {
+		up = udp_sk(sk);
+
+		lookup = READ_ONCE(up->encap_err_lookup);
+		if (lookup && lookup(sk, skb))
+			sk = NULL;
+
+		goto out;
+	}
+
	sk = __udp4_lib_lookup(net, iph->daddr, uh->source,
			       iph->saddr, uh->dest, skb->dev->ifindex, 0,
			       udptable, NULL);
	if (sk) {
-		int (*lookup)(struct sock *sk, struct sk_buff *skb);
-		struct udp_sock *up = udp_sk(sk);
+		up = udp_sk(sk);
 
		lookup = READ_ONCE(up->encap_err_lookup);
		if (!lookup || lookup(sk, skb))
			sk = NULL;
	}
 
+out:
	if (!sk)
		sk = ERR_PTR(__udp4_lib_err_encap_no_sk(skb, info));
 
···
	sk = __udp4_lib_lookup(net, iph->daddr, uh->dest,
			       iph->saddr, uh->source, skb->dev->ifindex,
			       inet_sdif(skb), udptable, NULL);
+
	if (!sk || udp_sk(sk)->encap_type) {
		/* No socket for error: try tunnels before discarding */
-		sk = ERR_PTR(-ENOENT);
		if (static_branch_unlikely(&udp_encap_needed_key)) {
-			sk = __udp4_lib_err_encap(net, iph, uh, udptable, skb,
+			sk = __udp4_lib_err_encap(net, iph, uh, udptable, sk, skb,
						  info);
			if (!sk)
				return 0;
-		}
+		} else
+			sk = ERR_PTR(-ENOENT);
 
		if (IS_ERR(sk)) {
			__ICMP_INC_STATS(net, ICMP_MIB_INERRORS);
···
  */
 
 #include <linux/module.h>
+#include <linux/if_arp.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/skbuff.h>
···
 	tcf_lastuse_update(&d->tcf_tm);
 	bstats_cpu_update(this_cpu_ptr(d->common.cpu_bstats), skb);
 
+	action = READ_ONCE(d->tcf_action);
+	if (unlikely(action == TC_ACT_SHOT))
+		goto drop;
+
+	if (!skb->dev || skb->dev->type != ARPHRD_ETHER)
+		return action;
+
 	/* XXX: if you are going to edit more fields beyond ethernet header
 	 * (example when you add IP header replacement or vlan swap)
 	 * then MAX_EDIT_LEN needs to change appropriately
 	 */
 	err = skb_ensure_writable(skb, MAX_EDIT_LEN);
 	if (unlikely(err)) /* best policy is to drop on the floor */
-		goto drop;
-
-	action = READ_ONCE(d->tcf_action);
-	if (unlikely(action == TC_ACT_SHOT))
 		goto drop;
 
 	p = rcu_dereference_bh(d->skbmod_p);
net/sched/cls_api.c (+1, -1)
···
 		break;
 	case RTM_GETCHAIN:
 		err = tc_chain_notify(chain, skb, n->nlmsg_seq,
-				      n->nlmsg_seq, n->nlmsg_type, true);
+				      n->nlmsg_flags, n->nlmsg_type, true);
 		if (err < 0)
 			NL_SET_ERR_MSG(extack, "Failed to send chain notify message");
 		break;
···
 	if (replace) {
 		list_del_init(&shkey->key_list);
 		sctp_auth_shkey_release(shkey);
+		if (asoc && asoc->active_key_id == auth_key->sca_keynumber)
+			sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL);
 	}
 	list_add(&cur_key->key_list, sh_keys);
net/sctp/output.c (+2, -2)
···
 		if (asoc->param_flags & SPP_PMTUD_ENABLE)
 			sctp_assoc_sync_pmtu(asoc);
 	} else if (!sctp_transport_pl_enabled(tp) &&
-		   !sctp_transport_pmtu_check(tp)) {
-		if (asoc->param_flags & SPP_PMTUD_ENABLE)
+		   asoc->param_flags & SPP_PMTUD_ENABLE) {
+		if (!sctp_transport_pmtu_check(tp))
 			sctp_assoc_sync_pmtu(asoc);
 	}
 
net/sctp/socket.c (+4)
···
 	}
 
 	if (optlen > 0) {
+		/* Trim it to the biggest size sctp sockopt may need if necessary */
+		optlen = min_t(unsigned int, optlen,
+			       PAGE_ALIGN(USHRT_MAX +
+					  sizeof(__u16) * sizeof(struct sctp_reset_streams)));
 		kopt = memdup_sockptr(optval, optlen);
 		if (IS_ERR(kopt))
 			return PTR_ERR(kopt);
sound/core/pcm_native.c (+20, -7)
···
 	if (!(substream->runtime->hw.info & SNDRV_PCM_INFO_MMAP))
 		return false;
 
-	if (substream->ops->mmap ||
-	    (substream->dma_buffer.dev.type != SNDRV_DMA_TYPE_DEV &&
-	     substream->dma_buffer.dev.type != SNDRV_DMA_TYPE_DEV_UC))
+	if (substream->ops->mmap)
 		return true;
 
-	return dma_can_mmap(substream->dma_buffer.dev.dev);
+	switch (substream->dma_buffer.dev.type) {
+	case SNDRV_DMA_TYPE_UNKNOWN:
+		return false;
+	case SNDRV_DMA_TYPE_CONTINUOUS:
+	case SNDRV_DMA_TYPE_VMALLOC:
+		return true;
+	default:
+		return dma_can_mmap(substream->dma_buffer.dev.dev);
+	}
 }
 
 static int constrain_mask_params(struct snd_pcm_substream *substream,
···
 		boundary = 0x7fffffff;
 	snd_pcm_stream_lock_irq(substream);
 	/* FIXME: we should consider the boundary for the sync from app */
-	if (!(sflags & SNDRV_PCM_SYNC_PTR_APPL))
-		control->appl_ptr = scontrol.appl_ptr;
-	else
+	if (!(sflags & SNDRV_PCM_SYNC_PTR_APPL)) {
+		err = pcm_lib_apply_appl_ptr(substream,
+					     scontrol.appl_ptr);
+		if (err < 0) {
+			snd_pcm_stream_unlock_irq(substream);
+			return err;
+		}
+	} else
 		scontrol.appl_ptr = control->appl_ptr % boundary;
 	if (!(sflags & SNDRV_PCM_SYNC_PTR_AVAIL_MIN))
 		control->avail_min = scontrol.avail_min;
···
 		return VM_FAULT_SIGBUS;
 	if (substream->ops->page)
 		page = substream->ops->page(substream, offset);
+	else if (!snd_pcm_get_dma_buf(substream))
+		page = virt_to_page(runtime->dma_area + offset);
 	else
 		page = snd_sgbuf_get_page(snd_pcm_get_dma_buf(substream), offset);
 	if (!page)
···
 	  high-efficiency mono Class-D audio power amplifiers.
 
 config SND_SOC_SSM2518
-	tristate
+	tristate "Analog Devices SSM2518 Class-D Amplifier"
 	depends on I2C
 
 config SND_SOC_SSM2602
···
 	  Qualcomm SoCs like SDM845.
 
 config SND_SOC_WCD938X
+	depends on SND_SOC_WCD938X_SDW
 	tristate
 
 config SND_SOC_WCD938X_SDW
···
 	  Support for ZL38060 Connected Home Audio Processor from Microsemi,
 	  which consists of a Digital Signal Processor (DSP), several Digital
 	  Audio Interfaces (DAIs), analog outputs, and a block of 14 GPIOs.
-
-config SND_SOC_ZX_AUD96P22
-	tristate "ZTE ZX AUD96P22 CODEC"
-	depends on I2C
-	select REGMAP_I2C
 
 # Amp
 config SND_SOC_LM4857
···
 	return ret;
 }
 
-static int max98373_sdw_trigger(struct snd_pcm_substream *substream, int cmd)
+static int mx8373_enable_spk_pin(struct snd_pcm_substream *substream, bool enable)
 {
+	struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
+	struct snd_soc_dai *codec_dai;
+	struct snd_soc_dai *cpu_dai;
 	int ret;
+	int j;
 
-	switch (cmd) {
-	case SNDRV_PCM_TRIGGER_START:
-	case SNDRV_PCM_TRIGGER_RESUME:
-	case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
-		/* enable max98373 first */
-		ret = max_98373_trigger(substream, cmd);
-		if (ret < 0)
-			break;
+	/* set spk pin by playback only */
+	if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
+		return 0;
 
-		ret = sdw_trigger(substream, cmd);
-		break;
-	case SNDRV_PCM_TRIGGER_STOP:
-	case SNDRV_PCM_TRIGGER_SUSPEND:
-	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
-		ret = sdw_trigger(substream, cmd);
-		if (ret < 0)
-			break;
+	cpu_dai = asoc_rtd_to_cpu(rtd, 0);
+	for_each_rtd_codec_dais(rtd, j, codec_dai) {
+		struct snd_soc_dapm_context *dapm =
+				snd_soc_component_get_dapm(cpu_dai->component);
+		char pin_name[16];
 
-		ret = max_98373_trigger(substream, cmd);
-		break;
-	default:
-		ret = -EINVAL;
-		break;
+		snprintf(pin_name, ARRAY_SIZE(pin_name), "%s Spk",
+			 codec_dai->component->name_prefix);
+
+		if (enable)
+			ret = snd_soc_dapm_enable_pin(dapm, pin_name);
+		else
+			ret = snd_soc_dapm_disable_pin(dapm, pin_name);
+
+		if (!ret)
+			snd_soc_dapm_sync(dapm);
 	}
 
-	return ret;
+	return 0;
+}
+
+static int mx8373_sdw_prepare(struct snd_pcm_substream *substream)
+{
+	int ret = 0;
+
+	/* according to soc_pcm_prepare dai link prepare is called first */
+	ret = sdw_prepare(substream);
+	if (ret < 0)
+		return ret;
+
+	return mx8373_enable_spk_pin(substream, true);
+}
+
+static int mx8373_sdw_hw_free(struct snd_pcm_substream *substream)
+{
+	int ret = 0;
+
+	/* according to soc_pcm_hw_free dai link free is called first */
+	ret = sdw_hw_free(substream);
+	if (ret < 0)
+		return ret;
+
+	return mx8373_enable_spk_pin(substream, false);
 }
 
 static const struct snd_soc_ops max_98373_sdw_ops = {
 	.startup = sdw_startup,
-	.prepare = sdw_prepare,
-	.trigger = max98373_sdw_trigger,
-	.hw_free = sdw_hw_free,
+	.prepare = mx8373_sdw_prepare,
+	.trigger = sdw_trigger,
+	.hw_free = mx8373_sdw_hw_free,
 	.shutdown = sdw_shutdown,
 };
 
sound/soc/soc-pcm.c (+16, -6)
···
 
 static int soc_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
 {
+	struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
 	int ret = -EINVAL, _ret = 0;
 	int rollback = 0;
 
···
 	case SNDRV_PCM_TRIGGER_STOP:
 	case SNDRV_PCM_TRIGGER_SUSPEND:
 	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
-		ret = snd_soc_pcm_dai_trigger(substream, cmd, rollback);
-		if (ret < 0)
-			break;
+		if (rtd->dai_link->stop_dma_first) {
+			ret = snd_soc_pcm_component_trigger(substream, cmd, rollback);
+			if (ret < 0)
+				break;
 
-		ret = snd_soc_pcm_component_trigger(substream, cmd, rollback);
-		if (ret < 0)
-			break;
+			ret = snd_soc_pcm_dai_trigger(substream, cmd, rollback);
+			if (ret < 0)
+				break;
+		} else {
+			ret = snd_soc_pcm_dai_trigger(substream, cmd, rollback);
+			if (ret < 0)
+				break;
 
+			ret = snd_soc_pcm_component_trigger(substream, cmd, rollback);
+			if (ret < 0)
+				break;
+		}
 		ret = snd_soc_link_trigger(substream, cmd, rollback);
 		break;
 	}