···
      maxItems: 6
      $ref: /schemas/types.yaml#/definitions/uint32-array

+  sink-vdos-v1:
+    description: An array of u32 with each entry, a Vendor Defined Message Object (VDO),
+      providing additional information corresponding to the product, the detailed bit
+      definitions and the order of each VDO can be found in
+      "USB Power Delivery Specification Revision 2.0, Version 1.3" chapter 6.4.4.3.1 Discover
+      Identity. User can specify the VDO array via VDO_IDH/_CERT/_PRODUCT/_CABLE/_AMA defined in
+      dt-bindings/usb/pd.h.
+    minItems: 3
+    maxItems: 6
+    $ref: /schemas/types.yaml#/definitions/uint32-array
+
   op-sink-microwatt:
     description: Sink required operating power in microwatt, if source can't
       offer the power, Capability Mismatch is set. Required for power sink and
···
       SNK_DISCOVERY) and the actual current limit after reception of PS_Ready for PD link or during
       SNK_READY for non-pd link.
     type: boolean
+
+dependencies:
+  sink-vdos-v1: [ 'sink-vdos' ]
+  sink-vdos: [ 'sink-vdos-v1' ]

 required:
   - compatible
···
        rate

   sound-dai:
-    $ref: /schemas/types.yaml#/definitions/phandle
+    $ref: /schemas/types.yaml#/definitions/phandle-array
     description: phandle of the CPU DAI

 patternProperties:
···
     properties:
       sound-dai:
-        $ref: /schemas/types.yaml#/definitions/phandle
+        $ref: /schemas/types.yaml#/definitions/phandle-array
         description: phandle of the codec DAI

     required:
+2 -2
Documentation/virt/kvm/mmu.rst
···
   shadow pages) so role.quadrant takes values in the range 0..3.  Each
   quadrant maps 1GB virtual address space.
 role.access:
-  Inherited guest access permissions in the form uwx.  Note execute
-  permission is positive, not negative.
+  Inherited guest access permissions from the parent ptes in the form uwx.
+  Note execute permission is positive, not negative.
 role.invalid:
   The page is invalid and should not be used.  It is a root page that is
   currently pinned (by a cpu hardware register pointing to it); once it is
+16 -5
MAINTAINERS
···
 S:  Maintained
 W:  http://btrfs.wiki.kernel.org/
 Q:  http://patchwork.kernel.org/project/linux-btrfs/list/
+C:  irc://irc.libera.chat/btrfs
 T:  git git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git
 F:  Documentation/filesystems/btrfs.rst
 F:  fs/btrfs/
···
 FANOTIFY
 M:  Jan Kara <jack@suse.cz>
 R:  Amir Goldstein <amir73il@gmail.com>
+R:  Matthew Bobrowski <repnop@google.com>
 L:  linux-fsdevel@vger.kernel.org
 S:  Maintained
 F:  fs/notify/fanotify/
···
 NFC SUBSYSTEM
 M:  Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
-L:  linux-nfc@lists.01.org (moderated for non-subscribers)
+L:  linux-nfc@lists.01.org (subscribers-only)
 L:  netdev@vger.kernel.org
 S:  Maintained
 F:  Documentation/devicetree/bindings/net/nfc/
···
 NFC VIRTUAL NCI DEVICE DRIVER
 M:  Bongsu Jeon <bongsu.jeon@samsung.com>
 L:  netdev@vger.kernel.org
-L:  linux-nfc@lists.01.org (moderated for non-subscribers)
+L:  linux-nfc@lists.01.org (subscribers-only)
 S:  Supported
 F:  drivers/nfc/virtual_ncidev.c
 F:  tools/testing/selftests/nci/
···
 NXP-NCI NFC DRIVER
 R:  Charles Gorand <charles.gorand@effinnov.com>
-L:  linux-nfc@lists.01.org (moderated for non-subscribers)
+L:  linux-nfc@lists.01.org (subscribers-only)
 S:  Supported
 F:  drivers/nfc/nxp-nci
···
 PCI ENDPOINT SUBSYSTEM
 M:  Kishon Vijay Abraham I <kishon@ti.com>
 M:  Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
+R:  Krzysztof Wilczyński <kw@linux.com>
 L:  linux-pci@vger.kernel.org
 S:  Supported
 F:  Documentation/PCI/endpoint/*
···
 PCI NATIVE HOST BRIDGE AND ENDPOINT DRIVERS
 M:  Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
 R:  Rob Herring <robh@kernel.org>
+R:  Krzysztof Wilczyński <kw@linux.com>
 L:  linux-pci@vger.kernel.org
 S:  Supported
 Q:  http://patchwork.ozlabs.org/project/linux-pci/list/
···
 SAMSUNG S3FWRN5 NFC DRIVER
 M:  Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
 M:  Krzysztof Opasiak <k.opasiak@samsung.com>
-L:  linux-nfc@lists.01.org (moderated for non-subscribers)
+L:  linux-nfc@lists.01.org (subscribers-only)
 S:  Maintained
 F:  Documentation/devicetree/bindings/net/nfc/samsung,s3fwrn5.yaml
 F:  drivers/nfc/s3fwrn5
···
 TI TRF7970A NFC DRIVER
 M:  Mark Greer <mgreer@animalcreek.com>
 L:  linux-wireless@vger.kernel.org
-L:  linux-nfc@lists.01.org (moderated for non-subscribers)
+L:  linux-nfc@lists.01.org (subscribers-only)
 S:  Supported
 F:  Documentation/devicetree/bindings/net/nfc/trf7970a.txt
 F:  drivers/nfc/trf7970a.c
···
 S:  Maintained
 F:  drivers/usb/host/isp116x*
 F:  include/linux/usb/isp116x.h
+
+USB ISP1760 DRIVER
+M:  Rui Miguel Silva <rui.silva@linaro.org>
+L:  linux-usb@vger.kernel.org
+S:  Maintained
+F:  drivers/usb/isp1760/*
+F:  Documentation/devicetree/bindings/usb/nxp,isp1760.yaml

 USB LAN78XX ETHERNET DRIVER
 M:  Woojung Huh <woojung.huh@microchip.com>
···
                irq = INT_7XX_WAKE_UP_REQ;
        else if (cpu_is_omap16xx())
                irq = INT_1610_WAKE_UP_REQ;
-       if (request_irq(irq, omap_wakeup_interrupt, 0, "peripheral wakeup",
-                       NULL))
-               pr_err("Failed to request irq %d (peripheral wakeup)\n", irq);
+       else
+               irq = -1;
+
+       if (irq >= 0) {
+               if (request_irq(irq, omap_wakeup_interrupt, 0, "peripheral wakeup", NULL))
+                       pr_err("Failed to request irq %d (peripheral wakeup)\n", irq);
+       }

        /* Program new power ramp-up time
         * (0 for most boards since we don't lower voltage when in deep sleep)
+1 -1
arch/arm/mach-omap2/board-n8x0.c
···

 static void n8x0_mmc_callback(void *data, u8 card_mask)
 {
+#ifdef CONFIG_MMC_OMAP
        int bit, *openp, index;

        if (board_is_n800()) {
···
        else
                *openp = 0;

-#ifdef CONFIG_MMC_OMAP
        omap_mmc_notify_cover_event(mmc_device, index, *openp);
 #else
        pr_warn("MMC: notify cover event not available\n");
+1
arch/arm64/Kconfig.platforms
···

 config ARCH_MESON
        bool "Amlogic Platforms"
+       select COMMON_CLK
        select MESON_IRQ_GPIO
        help
          This enables support for the arm64 based Amlogic SoCs
···
        pgd_t *pgdir = init_mm.pgd;
        return __find_linux_pte(pgdir, ea, NULL, hshift);
 }
+
+/*
+ * Convert a kernel vmap virtual address (vmalloc or ioremap space) to a
+ * physical address, without taking locks. This can be used in real-mode.
+ */
+static inline phys_addr_t ppc_find_vmap_phys(unsigned long addr)
+{
+       pte_t *ptep;
+       phys_addr_t pa;
+       int hugepage_shift;
+
+       /*
+        * init_mm does not free page tables, and does not do THP. It may
+        * have huge pages from huge vmalloc / ioremap etc.
+        */
+       ptep = find_init_mm_pte(addr, &hugepage_shift);
+       if (WARN_ON(!ptep))
+               return 0;
+
+       pa = PFN_PHYS(pte_pfn(*ptep));
+
+       if (!hugepage_shift)
+               hugepage_shift = PAGE_SHIFT;
+
+       pa |= addr & ((1ul << hugepage_shift) - 1);
+
+       return pa;
+}
+
 /*
  * This is what we should always use. Any other lockless page table lookup needs
  * careful audit against THP split.
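The address composition in the new helper (PFN shifted up to a physical base, then OR-ed with the virtual address bits below the mapping size) can be modeled in plain userspace C. Everything here is a stand-in for illustration: `DEMO_PAGE_SHIFT` and `compose_phys()` are hypothetical names, not the kernel's symbols, and the PTE lookup itself is replaced by a plain PFN argument.

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_PAGE_SHIFT 12 /* 4K base pages, as on common configs */

/*
 * Compose a physical address from the PFN found in a PTE and the low
 * bits of the virtual address. For a huge mapping, hugepage_shift is
 * larger than the base page shift, so more low bits are preserved;
 * a shift of 0 means "normal page", as in the kernel helper.
 */
static uint64_t compose_phys(uint64_t pte_pfn, uint64_t vaddr, int hugepage_shift)
{
	uint64_t pa = pte_pfn << DEMO_PAGE_SHIFT; /* PFN_PHYS() equivalent */

	if (!hugepage_shift)
		hugepage_shift = DEMO_PAGE_SHIFT;

	pa |= vaddr & ((1ull << hugepage_shift) - 1);
	return pa;
}
```

For a normal page only the 12 offset bits survive; for a 2M huge mapping (shift 21) the low 21 bits of the virtual address carry through, which is exactly why the helper works for huge vmalloc/ioremap mappings.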
+1 -22
arch/powerpc/kernel/eeh.c
···
  */
 static inline unsigned long eeh_token_to_phys(unsigned long token)
 {
-       pte_t *ptep;
-       unsigned long pa;
-       int hugepage_shift;
-
-       /*
-        * We won't find hugepages here(this is iomem). Hence we are not
-        * worried about _PAGE_SPLITTING/collapse. Also we will not hit
-        * page table free, because of init_mm.
-        */
-       ptep = find_init_mm_pte(token, &hugepage_shift);
-       if (!ptep)
-               return token;
-
-       pa = pte_pfn(*ptep);
-
-       /* On radix we can do hugepage mappings for io, so handle that */
-       if (!hugepage_shift)
-               hugepage_shift = PAGE_SHIFT;
-
-       pa <<= PAGE_SHIFT;
-       pa |= token & ((1ul << hugepage_shift) - 1);
-       return pa;
+       return ppc_find_vmap_phys(token);
 }

 /*
+3 -13
arch/powerpc/kernel/io-workarounds.c
···
 #ifdef CONFIG_PPC_INDIRECT_MMIO
 struct iowa_bus *iowa_mem_find_bus(const PCI_IO_ADDR addr)
 {
-       unsigned hugepage_shift;
        struct iowa_bus *bus;
        int token;
···
                bus = &iowa_busses[token - 1];
        else {
                unsigned long vaddr, paddr;
-               pte_t *ptep;

                vaddr = (unsigned long)PCI_FIX_ADDR(addr);
                if (vaddr < PHB_IO_BASE || vaddr >= PHB_IO_END)
                        return NULL;
-               /*
-                * We won't find huge pages here (iomem). Also can't hit
-                * a page table free due to init_mm
-                */
-               ptep = find_init_mm_pte(vaddr, &hugepage_shift);
-               if (ptep == NULL)
-                       paddr = 0;
-               else {
-                       WARN_ON(hugepage_shift);
-                       paddr = pte_pfn(*ptep) << PAGE_SHIFT;
-               }
+
+               paddr = ppc_find_vmap_phys(vaddr);
+
                bus = iowa_pci_find(vaddr, paddr);

                if (bus == NULL)
+5 -6
arch/powerpc/kernel/iommu.c
···
        unsigned int order;
        unsigned int nio_pages, io_order;
        struct page *page;
-       size_t size_io = size;

        size = PAGE_ALIGN(size);
        order = get_order(size);
···
        memset(ret, 0, size);

        /* Set up tces to cover the allocated range */
-       size_io = IOMMU_PAGE_ALIGN(size_io, tbl);
-       nio_pages = size_io >> tbl->it_page_shift;
-       io_order = get_iommu_order(size_io, tbl);
+       nio_pages = size >> tbl->it_page_shift;
+       io_order = get_iommu_order(size, tbl);
        mapping = iommu_alloc(dev, tbl, ret, nio_pages, DMA_BIDIRECTIONAL,
                              mask >> tbl->it_page_shift, io_order, 0);
        if (mapping == DMA_MAPPING_ERROR) {
···
                  void *vaddr, dma_addr_t dma_handle)
 {
        if (tbl) {
-               size_t size_io = IOMMU_PAGE_ALIGN(size, tbl);
-               unsigned int nio_pages = size_io >> tbl->it_page_shift;
+               unsigned int nio_pages;

+               size = PAGE_ALIGN(size);
+               nio_pages = size >> tbl->it_page_shift;
                iommu_free(tbl, dma_handle, nio_pages);
                size = PAGE_ALIGN(size);
                free_pages((unsigned long)vaddr, get_order(size));
+2 -2
arch/powerpc/kernel/kprobes.c
···
        int ret = 0;
        struct kprobe *prev;
        struct ppc_inst insn = ppc_inst_read((struct ppc_inst *)p->addr);
-       struct ppc_inst prefix = ppc_inst_read((struct ppc_inst *)(p->addr - 1));

        if ((unsigned long)p->addr & 0x03) {
                printk("Attempt to register kprobe at an unaligned address\n");
···
        } else if (IS_MTMSRD(insn) || IS_RFID(insn) || IS_RFI(insn)) {
                printk("Cannot register a kprobe on rfi/rfid or mtmsr[d]\n");
                ret = -EINVAL;
-       } else if (ppc_inst_prefixed(prefix)) {
+       } else if ((unsigned long)p->addr & ~PAGE_MASK &&
+                  ppc_inst_prefixed(ppc_inst_read((struct ppc_inst *)(p->addr - 1)))) {
                printk("Cannot register a kprobe on the second word of prefixed instruction\n");
                ret = -EINVAL;
        }
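The guard added above only dereferences `p->addr - 1` when the probe address is not the first word of a page, since the preceding word may then live in a different, possibly unmapped, page. The page-offset test can be sketched in userspace; `DEMO_PAGE_SIZE` and `may_inspect_prev_word()` are hypothetical names for illustration only:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define DEMO_PAGE_SIZE 4096ul
#define DEMO_PAGE_MASK (~(DEMO_PAGE_SIZE - 1))

/*
 * Mirror of the '(unsigned long)p->addr & ~PAGE_MASK' check: the word
 * before 'addr' may only be read when addr is not page-aligned, i.e.
 * when the previous word is guaranteed to be in the same page.
 */
static bool may_inspect_prev_word(uintptr_t addr)
{
	return (addr & ~DEMO_PAGE_MASK) != 0;
}
```

With short-circuit evaluation, the `ppc_inst_read()` of the previous word simply never happens for a page-aligned probe address.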
···
 #include <asm/pte-walk.h>

 /* Translate address of a vmalloc'd thing to a linear map address */
-static void *real_vmalloc_addr(void *x)
+static void *real_vmalloc_addr(void *addr)
 {
-       unsigned long addr = (unsigned long) x;
-       pte_t *p;
-       /*
-        * assume we don't have huge pages in vmalloc space...
-        * So don't worry about THP collapse/split. Called
-        * Only in realmode with MSR_EE = 0, hence won't need irq_save/restore.
-        */
-       p = find_init_mm_pte(addr, NULL);
-       if (!p || !pte_present(*p))
-               return NULL;
-       addr = (pte_pfn(*p) << PAGE_SHIFT) | (addr & ~PAGE_MASK);
-       return __va(addr);
+       return __va(ppc_find_vmap_phys((unsigned long)addr));
 }

 /* Return 1 if we need to do a global tlbie, 0 if we can use tlbiel */
+7
arch/powerpc/kvm/book3s_hv_rmhandlers.S
···
 #define STACK_SLOT_UAMOR       (SFS-88)
 #define STACK_SLOT_DAWR1       (SFS-96)
 #define STACK_SLOT_DAWRX1      (SFS-104)
+#define STACK_SLOT_FSCR        (SFS-112)
 /* the following is used by the P9 short path */
 #define STACK_SLOT_NVGPRS      (SFS-152)       /* 18 gprs */
···
        std     r6, STACK_SLOT_DAWR0(r1)
        std     r7, STACK_SLOT_DAWRX0(r1)
        std     r8, STACK_SLOT_IAMR(r1)
+       mfspr   r5, SPRN_FSCR
+       std     r5, STACK_SLOT_FSCR(r1)
 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
 BEGIN_FTR_SECTION
        mfspr   r6, SPRN_DAWR1
···
        ld      r7, STACK_SLOT_HFSCR(r1)
        mtspr   SPRN_HFSCR, r7
 ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300)
+BEGIN_FTR_SECTION
+       ld      r5, STACK_SLOT_FSCR(r1)
+       mtspr   SPRN_FSCR, r5
+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
        /*
         * Restore various registers to 0, where non-zero values
         * set by the guest could disrupt the host.
+9 -9
arch/riscv/Kconfig
···
        select GENERIC_TIME_VSYSCALL if MMU && 64BIT
        select HANDLE_DOMAIN_IRQ
        select HAVE_ARCH_AUDITSYSCALL
-       select HAVE_ARCH_JUMP_LABEL
-       select HAVE_ARCH_JUMP_LABEL_RELATIVE
+       select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL
+       select HAVE_ARCH_JUMP_LABEL_RELATIVE if !XIP_KERNEL
        select HAVE_ARCH_KASAN if MMU && 64BIT
        select HAVE_ARCH_KASAN_VMALLOC if MMU && 64BIT
-       select HAVE_ARCH_KGDB
+       select HAVE_ARCH_KGDB if !XIP_KERNEL
        select HAVE_ARCH_KGDB_QXFER_PKT
        select HAVE_ARCH_MMAP_RND_BITS if MMU
        select HAVE_ARCH_SECCOMP_FILTER
···
        select HAVE_GCC_PLUGINS
        select HAVE_GENERIC_VDSO if MMU && 64BIT
        select HAVE_IRQ_TIME_ACCOUNTING
-       select HAVE_KPROBES
-       select HAVE_KPROBES_ON_FTRACE
-       select HAVE_KRETPROBES
+       select HAVE_KPROBES if !XIP_KERNEL
+       select HAVE_KPROBES_ON_FTRACE if !XIP_KERNEL
+       select HAVE_KRETPROBES if !XIP_KERNEL
        select HAVE_PCI
        select HAVE_PERF_EVENTS
        select HAVE_PERF_REGS
···
        bool "RV64I"
        select 64BIT
        select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 && GCC_VERSION >= 50000
-       select HAVE_DYNAMIC_FTRACE if MMU && $(cc-option,-fpatchable-function-entry=8)
+       select HAVE_DYNAMIC_FTRACE if !XIP_KERNEL && MMU && $(cc-option,-fpatchable-function-entry=8)
        select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
-       select HAVE_FTRACE_MCOUNT_RECORD
+       select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
        select HAVE_FUNCTION_GRAPH_TRACER
-       select HAVE_FUNCTION_TRACER
+       select HAVE_FUNCTION_TRACER if !XIP_KERNEL
        select SWIOTLB if MMU

 endchoice
···
                pkru_val = pk->pkru;
        }
        __write_pkru(pkru_val);
-
-       /*
-        * Expensive PASID MSR write will be avoided in update_pasid() because
-        * TIF_NEED_FPU_LOAD was set. And the PASID state won't be updated
-        * unless it's different from mm->pasid to reduce overhead.
-        */
-       update_pasid();
 }

 #endif /* _ASM_X86_FPU_INTERNAL_H */
···
 }

 /*
+ * optimize_nops_range() - Optimize a sequence of single byte NOPs (0x90)
+ *
+ * @instr: instruction byte stream
+ * @instrlen: length of the above
+ * @off: offset within @instr where the first NOP has been detected
+ *
+ * Return: number of NOPs found (and replaced).
+ */
+static __always_inline int optimize_nops_range(u8 *instr, u8 instrlen, int off)
+{
+       unsigned long flags;
+       int i = off, nnops;
+
+       while (i < instrlen) {
+               if (instr[i] != 0x90)
+                       break;
+
+               i++;
+       }
+
+       nnops = i - off;
+
+       if (nnops <= 1)
+               return nnops;
+
+       local_irq_save(flags);
+       add_nops(instr + off, nnops);
+       local_irq_restore(flags);
+
+       DUMP_BYTES(instr, instrlen, "%px: [%d:%d) optimized NOPs: ", instr, off, i);
+
+       return nnops;
+}
+
+/*
  * "noinline" to cause control flow change and thus invalidate I$ and
  * cause refetch after modification.
  */
 static void __init_or_module noinline optimize_nops(struct alt_instr *a, u8 *instr)
 {
-       unsigned long flags;
        struct insn insn;
-       int nop, i = 0;
+       int i = 0;

        /*
-        * Jump over the non-NOP insns, the remaining bytes must be single-byte
-        * NOPs, optimize them.
+        * Jump over the non-NOP insns and optimize single-byte NOPs into bigger
+        * ones.
         */
        for (;;) {
                if (insn_decode_kernel(&insn, &instr[i]))
                        return;

+               /*
+                * See if this and any potentially following NOPs can be
+                * optimized.
+                */
                if (insn.length == 1 && insn.opcode.bytes[0] == 0x90)
-                       break;
+                       i += optimize_nops_range(instr, a->instrlen, i);
+               else
+                       i += insn.length;

-               if ((i += insn.length) >= a->instrlen)
+               if (i >= a->instrlen)
                        return;
        }
-
-       for (nop = i; i < a->instrlen; i++) {
-               if (WARN_ONCE(instr[i] != 0x90, "Not a NOP at 0x%px\n", &instr[i]))
-                       return;
-       }
-
-       local_irq_save(flags);
-       add_nops(instr + nop, i - nop);
-       local_irq_restore(flags);
-
-       DUMP_BYTES(instr, a->instrlen, "%px: [%d:%d) optimized NOPs: ",
-                  instr, nop, a->instrlen);
 }

 /*
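The scan that `optimize_nops_range()` performs before rewriting, counting the run of single-byte `0x90` NOPs starting at a given offset, is easy to exercise in userspace. This is a sketch, not the kernel function: `nop_run_length()` is a hypothetical name, and the IRQ-disabled `add_nops()` rewrite is deliberately replaced by a plain count:

```c
#include <assert.h>

/*
 * Count the run of single-byte NOPs (0x90) starting at 'off' in a
 * 'len'-byte instruction buffer. The kernel code then replaces a run
 * of length > 1 with one longer NOP encoding; here we only count.
 */
static int nop_run_length(const unsigned char *instr, int len, int off)
{
	int i = off;

	while (i < len && instr[i] == 0x90)
		i++;

	return i - off;
}
```

A run of length 0 or 1 is left alone (nothing to merge), which is why the kernel helper returns early for `nnops <= 1`.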
···
        irq_matrix_assign_system(vector_matrix, ISA_IRQ_VECTOR(irq), replace);
 }

+void __init lapic_update_legacy_vectors(void)
+{
+       unsigned int i;
+
+       if (IS_ENABLED(CONFIG_X86_IO_APIC) && nr_ioapics > 0)
+               return;
+
+       /*
+        * If the IO/APIC is disabled via config, kernel command line or
+        * lack of enumeration then all legacy interrupts are routed
+        * through the PIC. Make sure that they are marked as legacy
+        * vectors. PIC_CASCADE_IR has already been marked in
+        * lapic_assign_system_vectors().
+        */
+       for (i = 0; i < nr_legacy_irqs(); i++) {
+               if (i != PIC_CASCADE_IR)
+                       lapic_assign_legacy_vector(i, true);
+       }
+}
+
 void __init lapic_assign_system_vectors(void)
 {
        unsigned int i, vector = 0;
+2 -2
arch/x86/kernel/cpu/perfctr-watchdog.c
···
                case 15:
                        return msr - MSR_P4_BPU_PERFCTR0;
                }
-               fallthrough;
+               break;
        case X86_VENDOR_ZHAOXIN:
        case X86_VENDOR_CENTAUR:
                return msr - MSR_ARCH_PERFMON_PERFCTR0;
···
                case 15:
                        return msr - MSR_P4_BSU_ESCR0;
                }
-               fallthrough;
+               break;
        case X86_VENDOR_ZHAOXIN:
        case X86_VENDOR_CENTAUR:
                return msr - MSR_ARCH_PERFMON_EVENTSEL0;
-57
arch/x86/kernel/fpu/xstate.c
···
        return 0;
 }
 #endif /* CONFIG_PROC_PID_ARCH_STATUS */
-
-#ifdef CONFIG_IOMMU_SUPPORT
-void update_pasid(void)
-{
-       u64 pasid_state;
-       u32 pasid;
-
-       if (!cpu_feature_enabled(X86_FEATURE_ENQCMD))
-               return;
-
-       if (!current->mm)
-               return;
-
-       pasid = READ_ONCE(current->mm->pasid);
-       /* Set the valid bit in the PASID MSR/state only for valid pasid. */
-       pasid_state = pasid == PASID_DISABLED ?
-                     pasid : pasid | MSR_IA32_PASID_VALID;
-
-       /*
-        * No need to hold fregs_lock() since the task's fpstate won't
-        * be changed by others (e.g. ptrace) while the task is being
-        * switched to or is in IPI.
-        */
-       if (!test_thread_flag(TIF_NEED_FPU_LOAD)) {
-               /* The MSR is active and can be directly updated. */
-               wrmsrl(MSR_IA32_PASID, pasid_state);
-       } else {
-               struct fpu *fpu = &current->thread.fpu;
-               struct ia32_pasid_state *ppasid_state;
-               struct xregs_state *xsave;
-
-               /*
-                * The CPU's xstate registers are not currently active. Just
-                * update the PASID state in the memory buffer here. The
-                * PASID MSR will be loaded when returning to user mode.
-                */
-               xsave = &fpu->state.xsave;
-               xsave->header.xfeatures |= XFEATURE_MASK_PASID;
-               ppasid_state = get_xsave_addr(xsave, XFEATURE_PASID);
-               /*
-                * Since XFEATURE_MASK_PASID is set in xfeatures, ppasid_state
-                * won't be NULL and no need to check its value.
-                *
-                * Only update the task's PASID state when it's different
-                * from the mm's pasid.
-                */
-               if (ppasid_state->pasid != pasid_state) {
-                       /*
-                        * Invalid fpregs so that state restoring will pick up
-                        * the PASID state.
-                        */
-                       __fpu_invalidate_fpregs_state(fpu);
-                       ppasid_state->pasid = pasid_state;
-               }
-       }
-}
-#endif /* CONFIG_IOMMU_SUPPORT */
+30 -14
arch/x86/kernel/setup.c
···
 #include <asm/pci-direct.h>
 #include <asm/prom.h>
 #include <asm/proto.h>
+#include <asm/thermal.h>
 #include <asm/unwind.h>
 #include <asm/vsyscall.h>
 #include <linux/vmalloc.h>
···
  * them from accessing certain memory ranges, namely anything below
  * 1M and in the pages listed in bad_pages[] above.
  *
- * To avoid these pages being ever accessed by SNB gfx devices
- * reserve all memory below the 1 MB mark and bad_pages that have
- * not already been reserved at boot time.
+ * To avoid these pages being ever accessed by SNB gfx devices reserve
+ * bad_pages that have not already been reserved at boot time.
+ * All memory below the 1 MB mark is anyway reserved later during
+ * setup_arch(), so there is no need to reserve it here.
  */
-       memblock_reserve(0, 1<<20);

        for (i = 0; i < ARRAY_SIZE(bad_pages); i++) {
                if (memblock_reserve(bad_pages[i], PAGE_SIZE))
···
         * The first 4Kb of memory is a BIOS owned area, but generally it is
         * not listed as such in the E820 table.
         *
-        * Reserve the first memory page and typically some additional
-        * memory (64KiB by default) since some BIOSes are known to corrupt
-        * low memory. See the Kconfig help text for X86_RESERVE_LOW.
+        * Reserve the first 64K of memory since some BIOSes are known to
+        * corrupt low memory. After the real mode trampoline is allocated the
+        * rest of the memory below 640k is reserved.
         *
         * In addition, make sure page 0 is always reserved because on
         * systems with L1TF its contents can be leaked to user processes.
         */
-       memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE));
+       memblock_reserve(0, SZ_64K);

        early_reserve_initrd();
···

        reserve_ibft_region();
        reserve_bios_regions();
+       trim_snb_memory();
 }

 /*
···
                        (max_pfn_mapped<<PAGE_SHIFT) - 1);
 #endif

-       reserve_real_mode();
-
        /*
-        * Reserving memory causing GPU hangs on Sandy Bridge integrated
-        * graphics devices should be done after we allocated memory under
-        * 1M for the real mode trampoline.
+        * Find free memory for the real mode trampoline and place it
+        * there.
+        * If there is not enough free memory under 1M, on EFI-enabled
+        * systems there will be additional attempt to reclaim the memory
+        * for the real mode trampoline at efi_free_boot_services().
+        *
+        * Unconditionally reserve the entire first 1M of RAM because
+        * BIOSes are known to corrupt low memory and several
+        * hundred kilobytes are not worth complex detection what memory gets
+        * clobbered. Moreover, on machines with SandyBridge graphics or in
+        * setups that use crashkernel the entire 1M is reserved anyway.
         */
-       trim_snb_memory();
+       reserve_real_mode();

        init_mem_mapping();
···
        x86_init.oem.banner();

        x86_init.timers.wallclock_init();
+
+       /*
+        * This needs to run before setup_local_APIC() which soft-disables the
+        * local APIC temporarily and that masks the thermal LVT interrupt,
+        * leading to softlockups on machines which have configured SMI
+        * interrupt delivery.
+        */
+       therm_lvt_init();

        mcheck_init();
···
 #define AMD_SME_BIT    BIT(0)
 #define AMD_SEV_BIT    BIT(1)

-       /* Check the SEV MSR whether SEV or SME is enabled */
-       sev_status = __rdmsr(MSR_AMD64_SEV);
-       feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
-
        /*
         * Check for the SME/SEV feature:
         *   CPUID Fn8000_001F[EAX]
···
        eax = 0x8000001f;
        ecx = 0;
        native_cpuid(&eax, &ebx, &ecx, &edx);
-       if (!(eax & feature_mask))
+       /* Check whether SEV or SME is supported */
+       if (!(eax & (AMD_SEV_BIT | AMD_SME_BIT)))
                return;

        me_mask = 1UL << (ebx & 0x3f);
+
+       /* Check the SEV MSR whether SEV or SME is enabled */
+       sev_status = __rdmsr(MSR_AMD64_SEV);
+       feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;

        /* Check if memory encryption is enabled */
        if (feature_mask == AMD_SME_BIT) {
+12
arch/x86/platform/efi/quirks.c
···
                        size -= rm_size;
                }

+               /*
+                * Don't free memory under 1M for two reasons:
+                * - BIOS might clobber it
+                * - Crash kernel needs it to be reserved
+                */
+               if (start + size < SZ_1M)
+                       continue;
+               if (start < SZ_1M) {
+                       size -= (SZ_1M - start);
+                       start = SZ_1M;
+               }
+
                memblock_free_late(start, size);
        }
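The clamping logic in the hunk above can be modeled as a pure function to see the three cases: a region entirely below 1M is skipped, a region straddling 1M is trimmed to start at 1M, and a region entirely above 1M passes through unchanged. The names `clamp_above_1m` and `DEMO_SZ_1M` are illustrative stand-ins, not kernel symbols:

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_SZ_1M 0x100000ull

/*
 * Clamp a to-be-freed [start, start+size) region so nothing below 1M
 * is ever freed. Returns the adjusted size; 0 means "skip the region
 * entirely" (the kernel code does 'continue' in that case).
 */
static uint64_t clamp_above_1m(uint64_t *start, uint64_t size)
{
	if (*start + size < DEMO_SZ_1M)
		return 0;          /* entirely below 1M: keep it reserved */

	if (*start < DEMO_SZ_1M) { /* straddles 1M: free only the top part */
		size -= DEMO_SZ_1M - *start;
		*start = DEMO_SZ_1M;
	}

	return size;
}
```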
+8 -6
arch/x86/realmode/init.c
···

        /* Has to be under 1M so we can execute real-mode AP code. */
        mem = memblock_find_in_range(0, 1<<20, size, PAGE_SIZE);
-       if (!mem) {
+       if (!mem)
                pr_info("No sub-1M memory is available for the trampoline\n");
-               return;
-       }
+       else
+               set_real_mode_mem(mem);

-       memblock_reserve(mem, size);
-       set_real_mode_mem(mem);
-       crash_reserve_low_1M();
+       /*
+        * Unconditionally reserve the entire first 1M, see comment in
+        * setup_arch().
+        */
+       memblock_reserve(0, SZ_1M);
 }

 static void sme_sev_setup_real_mode(struct trampoline_header *th)
+2 -1
crypto/async_tx/async_xor.c
···
        if (submit->flags & ASYNC_TX_XOR_DROP_DST) {
                src_cnt--;
                src_list++;
-               src_offs++;
+               if (src_offs)
+                       src_offs++;
        }

        /* wait for any prerequisite operations */
···
        if (ACPI_FAILURE(acpi_run_osc(handle, &context)))
                return;

-       capbuf_ret = context.ret.pointer;
-       if (context.ret.length <= OSC_SUPPORT_DWORD) {
-               kfree(context.ret.pointer);
-               return;
-       }
-
-       /*
-        * Now run _OSC again with query flag clear and with the caps
-        * supported by both the OS and the platform.
-        */
-       capbuf[OSC_QUERY_DWORD] = 0;
-       capbuf[OSC_SUPPORT_DWORD] = capbuf_ret[OSC_SUPPORT_DWORD];
        kfree(context.ret.pointer);
+
+       /* Now run _OSC again with query flag clear */
+       capbuf[OSC_QUERY_DWORD] = 0;

        if (ACPI_FAILURE(acpi_run_osc(handle, &context)))
                return;

        capbuf_ret = context.ret.pointer;
-       if (context.ret.length > OSC_SUPPORT_DWORD) {
-               osc_sb_apei_support_acked =
-                       capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_APEI_SUPPORT;
-               osc_pc_lpi_support_confirmed =
-                       capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_PCLPI_SUPPORT;
-               osc_sb_native_usb4_support_confirmed =
-                       capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_NATIVE_USB4_SUPPORT;
-       }
+       osc_sb_apei_support_acked =
+               capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_APEI_SUPPORT;
+       osc_pc_lpi_support_confirmed =
+               capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_PCLPI_SUPPORT;
+       osc_sb_native_usb4_support_confirmed =
+               capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_NATIVE_USB4_SUPPORT;

        kfree(context.ret.pointer);
 }
···
        struct zone *zone;
        int ret;

-       zone = page_zone(pfn_to_page(start_pfn));
-
        /*
         * Unaccount before offlining, such that unpopulated zone and kthreads
         * can properly be torn down in offline_pages().
         */
-       if (nr_vmemmap_pages)
+       if (nr_vmemmap_pages) {
+               zone = page_zone(pfn_to_page(start_pfn));
                adjust_present_page_count(zone, -nr_vmemmap_pages);
+       }

        ret = offline_pages(start_pfn + nr_vmemmap_pages,
                            nr_pages - nr_vmemmap_pages);
+7 -18
drivers/block/loop.c
···

 static int lo_open(struct block_device *bdev, fmode_t mode)
 {
-       struct loop_device *lo;
+       struct loop_device *lo = bdev->bd_disk->private_data;
        int err;

-       /*
-        * take loop_ctl_mutex to protect lo pointer from race with
-        * loop_control_ioctl(LOOP_CTL_REMOVE), however, to reduce contention
-        * release it prior to updating lo->lo_refcnt.
-        */
-       err = mutex_lock_killable(&loop_ctl_mutex);
-       if (err)
-               return err;
-       lo = bdev->bd_disk->private_data;
-       if (!lo) {
-               mutex_unlock(&loop_ctl_mutex);
-               return -ENXIO;
-       }
        err = mutex_lock_killable(&lo->lo_mutex);
-       mutex_unlock(&loop_ctl_mutex);
        if (err)
                return err;
-       atomic_inc(&lo->lo_refcnt);
+       if (lo->lo_state == Lo_deleting)
+               err = -ENXIO;
+       else
+               atomic_inc(&lo->lo_refcnt);
        mutex_unlock(&lo->lo_mutex);
-       return 0;
+       return err;
 }

 static void lo_release(struct gendisk *disk, fmode_t mode)
···
                        mutex_unlock(&lo->lo_mutex);
                        break;
                }
-               lo->lo_disk->private_data = NULL;
+               lo->lo_state = Lo_deleting;
                mutex_unlock(&lo->lo_mutex);
                idr_remove(&loop_index_idr, lo->lo_number);
                loop_remove(lo);
···
         * If the CPU does not support MOVDIR64B or ENQCMDS, there's no point in
         * enumerating the device. We can not utilize it.
         */
-       if (!boot_cpu_has(X86_FEATURE_MOVDIR64B)) {
+       if (!cpu_feature_enabled(X86_FEATURE_MOVDIR64B)) {
                pr_warn("idxd driver failed to load without MOVDIR64B.\n");
                return -ENODEV;
        }

-       if (!boot_cpu_has(X86_FEATURE_ENQCMD))
+       if (!cpu_feature_enabled(X86_FEATURE_ENQCMD))
                pr_warn("Platform does not have ENQCMD(S) support.\n");
        else
                support_enqcmd = true;
+1 -3
drivers/firmware/efi/cper.c
···
        if (!msg || !(mem->validation_bits & CPER_MEM_VALID_MODULE_HANDLE))
                return 0;

-       n = 0;
-       len = CPER_REC_LEN - 1;
+       len = CPER_REC_LEN;
        dmi_memdev_name(mem->mem_dev_handle, &bank, &device);
        if (bank && device)
                n = snprintf(msg, len, "DIMM location: %s %s ", bank, device);
···
                             "DIMM location: not present. DMI handle: 0x%.4x ",
                             mem->mem_dev_handle);

-       msg[n] = '\0';
        return n;
 }
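The cleanup above leans on standard `snprintf()` semantics: it writes at most `size - 1` characters plus a terminating NUL, and returns the length that *would* have been written without truncation. That makes both the `len - 1` slack and the manual `msg[n] = '\0'` unnecessary (the latter was in fact a potential out-of-bounds write, since `n` can exceed the buffer size). A userspace sketch with a hypothetical wrapper name:

```c
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical stand-in for the CPER formatting call: snprintf()
 * truncates at len - 1 bytes, always NUL-terminates for len > 0, and
 * returns the untruncated length.
 */
static int format_dimm_location(char *msg, size_t len,
				const char *bank, const char *device)
{
	return snprintf(msg, len, "DIMM location: %s %s ", bank, device);
}
```

With a 16-byte buffer, the full 28-character message is truncated to 15 characters plus NUL, while the return value still reports 28 — so indexing `msg[n]` would have landed well past the buffer.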
+3
drivers/firmware/efi/fdtparams.c
···
        BUILD_BUG_ON(ARRAY_SIZE(target) != ARRAY_SIZE(name));
        BUILD_BUG_ON(ARRAY_SIZE(target) != ARRAY_SIZE(dt_params[0].params));

+       if (!fdt)
+               return 0;
+
        for (i = 0; i < ARRAY_SIZE(dt_params); i++) {
                node = fdt_path_offset(fdt, dt_params[i].path);
                if (node < 0)
+1 -1
drivers/firmware/efi/libstub/file.c
···
                return 0;

        /* Skip any leading slashes */
-       while (cmdline[i] == L'/' || cmdline[i] == L'\\')
+       while (i < cmdline_len && (cmdline[i] == L'/' || cmdline[i] == L'\\'))
                i++;

        while (--result_len > 0 && i < cmdline_len) {
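The fix above bounds the slash-skipping loop so it cannot read past `cmdline_len` when the command line consists only of separators. The same pattern can be modeled with plain `char` instead of the stub's `efi_char16_t`; `skip_leading_slashes()` is a hypothetical name for illustration:

```c
#include <assert.h>

/* Skip leading '/' and '\\' without ever reading past cmdline_len. */
static int skip_leading_slashes(const char *cmdline, int cmdline_len)
{
	int i = 0;

	while (i < cmdline_len && (cmdline[i] == '/' || cmdline[i] == '\\'))
		i++;

	return i;
}
```

Without the `i < cmdline_len` bound, an input that is all slashes would walk off the end of the buffer, which is exactly the out-of-bounds read the one-line fix closes.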
-5
drivers/firmware/efi/memattr.c
···
                return false;
        }

-       if (!(in->attribute & (EFI_MEMORY_RO | EFI_MEMORY_XP))) {
-               pr_warn("Entry attributes invalid: RO and XP bits both cleared\n");
-               return false;
-       }
-
        if (PAGE_SIZE > EFI_PAGE_SIZE &&
            (!PAGE_ALIGNED(in->phys_addr) ||
             !PAGE_ALIGNED(in->num_pages << EFI_PAGE_SHIFT))) {
···1057105710581058 return 0;10591059err:10601060- drm_err(dev, "Failed to init gem fb: %d\n", ret);10601060+ drm_dbg_kms(dev, "Failed to init gem fb: %d\n", ret);10611061 rfb->base.obj[0] = NULL;10621062 return ret;10631063}···1094109410951095 return 0;10961096err:10971097- drm_err(dev, "Failed to verify and init gem fb: %d\n", ret);10971097+ drm_dbg_kms(dev, "Failed to verify and init gem fb: %d\n", ret);10981098 rfb->base.obj[0] = NULL;10991099 return ret;11001100}
+23-19
drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
···101101int amdgpu_fru_get_product_info(struct amdgpu_device *adev)102102{103103 unsigned char buff[34];104104- int addrptr = 0, size = 0;104104+ int addrptr, size;105105+ int len;105106106107 if (!is_fru_eeprom_supported(adev))107108 return 0;···110109 /* If algo exists, it means that the i2c_adapter's initialized */111110 if (!adev->pm.smu_i2c.algo) {112111 DRM_WARN("Cannot access FRU, EEPROM accessor not initialized");113113- return 0;112112+ return -ENODEV;114113 }115114116115 /* There's a lot of repetition here. This is due to the FRU having···129128 size = amdgpu_fru_read_eeprom(adev, addrptr, buff);130129 if (size < 1) {131130 DRM_ERROR("Failed to read FRU Manufacturer, ret:%d", size);132132- return size;131131+ return -EINVAL;133132 }134133135134 /* Increment the addrptr by the size of the field, and 1 due to the···139138 size = amdgpu_fru_read_eeprom(adev, addrptr, buff);140139 if (size < 1) {141140 DRM_ERROR("Failed to read FRU product name, ret:%d", size);142142- return size;141141+ return -EINVAL;143142 }144143144144+ len = size;145145 /* Product name should only be 32 characters. Any more,146146 * and something could be wrong. Cap it at 32 to be safe147147 */148148- if (size > 32) {148148+ if (len >= sizeof(adev->product_name)) {149149 DRM_WARN("FRU Product Number is larger than 32 characters. This is likely a mistake");150150- size = 32;150150+ len = sizeof(adev->product_name) - 1;151151 }152152 /* Start at 2 due to buff using fields 0 and 1 for the address */153153- memcpy(adev->product_name, &buff[2], size);154154- adev->product_name[size] = '\0';153153+ memcpy(adev->product_name, &buff[2], len);154154+ adev->product_name[len] = '\0';155155156156 addrptr += size + 1;157157 size = amdgpu_fru_read_eeprom(adev, addrptr, buff);158158 if (size < 1) {159159 DRM_ERROR("Failed to read FRU product number, ret:%d", size);160160- return size;160160+ return -EINVAL;161161 }162162163163+ len = size;163164 /* Product number should only be 16 characters. Any more,164165 * and something could be wrong. Cap it at 16 to be safe165166 */166166- if (size > 16) {167167+ if (len >= sizeof(adev->product_number)) {167168 DRM_WARN("FRU Product Number is larger than 16 characters. This is likely a mistake");168168- size = 16;169169+ len = sizeof(adev->product_number) - 1;169170 }170170- memcpy(adev->product_number, &buff[2], size);171171- adev->product_number[size] = '\0';171171+ memcpy(adev->product_number, &buff[2], len);172172+ adev->product_number[len] = '\0';172173173174 addrptr += size + 1;174175 size = amdgpu_fru_read_eeprom(adev, addrptr, buff);175176176177 if (size < 1) {177178 DRM_ERROR("Failed to read FRU product version, ret:%d", size);178178- return size;179179+ return -EINVAL;179180 }180181181182 addrptr += size + 1;···185182186183 if (size < 1) {187184 DRM_ERROR("Failed to read FRU serial number, ret:%d", size);188188- return size;185185+ return -EINVAL;189186 }190187188188+ len = size;191189 /* Serial number should only be 16 characters. Any more,192190 * and something could be wrong. Cap it at 16 to be safe193191 */194194- if (size > 16) {192192+ if (len >= sizeof(adev->serial)) {195193 DRM_WARN("FRU Serial Number is larger than 16 characters. This is likely a mistake");196196- size = 16;194194+ len = sizeof(adev->serial) - 1;197195 }198198- memcpy(adev->serial, &buff[2], size);199199- adev->serial[size] = '\0';196196+ memcpy(adev->serial, &buff[2], len);197197+ adev->serial[len] = '\0';200198201199 return 0;202200}
+2-2
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
···100100 kfree(ubo->metadata);101101 }102102103103- kfree(bo);103103+ kvfree(bo);104104}105105106106/**···552552 BUG_ON(bp->bo_ptr_size < sizeof(struct amdgpu_bo));553553554554 *bo_ptr = NULL;555555- bo = kzalloc(bp->bo_ptr_size, GFP_KERNEL);555555+ bo = kvzalloc(bp->bo_ptr_size, GFP_KERNEL);556556 if (bo == NULL)557557 return -ENOMEM;558558 drm_gem_private_object_init(adev_to_drm(adev), &bo->tbo.base, size);
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
···7676 uint64_t ring_mem_mc_addr;7777 void *ring_mem_handle;7878 uint32_t ring_size;7979+ uint32_t ring_wptr;7980};80818182/* More registers may will be supported */
···2020 select INPUT if ACPI2121 select ACPI_VIDEO if ACPI2222 select ACPI_BUTTON if ACPI2323- select IO_MAPPING2423 select SYNC_FILE2524 select IOSF_MBI2625 select CRC32
+5-4
drivers/gpu/drm/i915/gem/i915_gem_mman.c
···367367 goto err_unpin;368368369369 /* Finally, remap it using the new GTT offset */370370- ret = io_mapping_map_user(&ggtt->iomap, area, area->vm_start +371371- (vma->ggtt_view.partial.offset << PAGE_SHIFT),372372- (ggtt->gmadr.start + vma->node.start) >> PAGE_SHIFT,373373- min_t(u64, vma->size, area->vm_end - area->vm_start));370370+ ret = remap_io_mapping(area,371371+ area->vm_start + (vma->ggtt_view.partial.offset << PAGE_SHIFT),372372+ (ggtt->gmadr.start + vma->node.start) >> PAGE_SHIFT,373373+ min_t(u64, vma->size, area->vm_end - area->vm_start),374374+ &ggtt->iomap);374375 if (ret)375376 goto err_fence;376377
+3
drivers/gpu/drm/i915/i915_drv.h
···19051905 struct drm_file *file);1906190619071907/* i915_mm.c */19081908+int remap_io_mapping(struct vm_area_struct *vma,19091909+ unsigned long addr, unsigned long pfn, unsigned long size,19101910+ struct io_mapping *iomap);19081911int remap_io_sg(struct vm_area_struct *vma,19091912 unsigned long addr, unsigned long size,19101913 struct scatterlist *sgl, resource_size_t iobase);
+44
drivers/gpu/drm/i915/i915_mm.c
···3737 resource_size_t iobase;3838};39394040+static int remap_pfn(pte_t *pte, unsigned long addr, void *data)4141+{4242+ struct remap_pfn *r = data;4343+4444+ /* Special PTE are not associated with any struct page */4545+ set_pte_at(r->mm, addr, pte, pte_mkspecial(pfn_pte(r->pfn, r->prot)));4646+ r->pfn++;4747+4848+ return 0;4949+}5050+4051#define use_dma(io) ((io) != -1)41524253static inline unsigned long sgt_pfn(const struct remap_pfn *r)···7766 return 0;7867}79686969+/**7070+ * remap_io_mapping - remap an IO mapping to userspace7171+ * @vma: user vma to map to7272+ * @addr: target user address to start at7373+ * @pfn: physical address of kernel memory7474+ * @size: size of map area7575+ * @iomap: the source io_mapping7676+ *7777+ * Note: this is only safe if the mm semaphore is held when called.7878+ */7979+int remap_io_mapping(struct vm_area_struct *vma,8080+ unsigned long addr, unsigned long pfn, unsigned long size,8181+ struct io_mapping *iomap)8282+{8383+ struct remap_pfn r;8484+ int err;8585+8086#define EXPECTED_FLAGS (VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP)8787+ GEM_BUG_ON((vma->vm_flags & EXPECTED_FLAGS) != EXPECTED_FLAGS);8888+8989+ /* We rely on prevalidation of the io-mapping to skip track_pfn(). */9090+ r.mm = vma->vm_mm;9191+ r.pfn = pfn;9292+ r.prot = __pgprot((pgprot_val(iomap->prot) & _PAGE_CACHE_MASK) |9393+ (pgprot_val(vma->vm_page_prot) & ~_PAGE_CACHE_MASK));9494+9595+ err = apply_to_page_range(r.mm, addr, size, remap_pfn, &r);9696+ if (unlikely(err)) {9797+ zap_vma_ptes(vma, addr, (r.pfn - pfn) << PAGE_SHIFT);9898+ return err;9999+ }100100+101101+ return 0;102102+}8110382104/**83105 * remap_io_sg - remap an IO mapping to userspace
+2-2
drivers/gpu/drm/i915/selftests/i915_request.c
···1592159215931593 for (n = 0; n < smoke[0].ncontexts; n++) {15941594 smoke[0].contexts[n] = live_context(i915, file);15951595- if (!smoke[0].contexts[n]) {15961596- ret = -ENOMEM;15951595+ if (IS_ERR(smoke[0].contexts[n])) {15961596+ ret = PTR_ERR(smoke[0].contexts[n]);15971597 goto out_contexts;15981598 }15991599 }
+1-1
drivers/gpu/drm/mcde/mcde_dsi.c
···577577 * porches and sync.578578 */579579 /* (ps/s) / (pixels/s) = ps/pixels */580580- pclk = DIV_ROUND_UP_ULL(1000000000000, mode->clock);580580+ pclk = DIV_ROUND_UP_ULL(1000000000000, (mode->clock * 1000));581581 dev_dbg(d->dev, "picoseconds between two pixels: %llu\n",582582 pclk);583583
+114-41
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
···157157 * GPU registers so we need to add 0x1a800 to the register value on A630158158 * to get the right value from PM4.159159 */160160- get_stats_counter(ring, REG_A6XX_GMU_ALWAYS_ON_COUNTER_L + 0x1a800,160160+ get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,161161 rbmemptr_stats(ring, index, alwayson_start));162162163163 /* Invalidate CCU depth and color */···187187188188 get_stats_counter(ring, REG_A6XX_RBBM_PERFCTR_CP_0_LO,189189 rbmemptr_stats(ring, index, cpcycles_end));190190- get_stats_counter(ring, REG_A6XX_GMU_ALWAYS_ON_COUNTER_L + 0x1a800,190190+ get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,191191 rbmemptr_stats(ring, index, alwayson_end));192192193193 /* Write the fence to the scratch register */···206206 OUT_RING(ring, submit->seqno);207207208208 trace_msm_gpu_submit_flush(submit,209209- gmu_read64(&a6xx_gpu->gmu, REG_A6XX_GMU_ALWAYS_ON_COUNTER_L,210210- REG_A6XX_GMU_ALWAYS_ON_COUNTER_H));209209+ gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,210210+ REG_A6XX_CP_ALWAYS_ON_COUNTER_HI));211211212212 a6xx_flush(gpu, ring);213213}···462462 gpu_write(gpu, REG_A6XX_RBBM_CLOCK_CNTL, state ? clock_cntl_on : 0);463463}464464465465+/* For a615, a616, a618, A619, a630, a640 and a680 */466466+static const u32 a6xx_protect[] = {467467+ A6XX_PROTECT_RDONLY(0x00000, 0x04ff),468468+ A6XX_PROTECT_RDONLY(0x00501, 0x0005),469469+ A6XX_PROTECT_RDONLY(0x0050b, 0x02f4),470470+ A6XX_PROTECT_NORDWR(0x0050e, 0x0000),471471+ A6XX_PROTECT_NORDWR(0x00510, 0x0000),472472+ A6XX_PROTECT_NORDWR(0x00534, 0x0000),473473+ A6XX_PROTECT_NORDWR(0x00800, 0x0082),474474+ A6XX_PROTECT_NORDWR(0x008a0, 0x0008),475475+ A6XX_PROTECT_NORDWR(0x008ab, 0x0024),476476+ A6XX_PROTECT_RDONLY(0x008de, 0x00ae),477477+ A6XX_PROTECT_NORDWR(0x00900, 0x004d),478478+ A6XX_PROTECT_NORDWR(0x0098d, 0x0272),479479+ A6XX_PROTECT_NORDWR(0x00e00, 0x0001),480480+ A6XX_PROTECT_NORDWR(0x00e03, 0x000c),481481+ A6XX_PROTECT_NORDWR(0x03c00, 0x00c3),482482+ A6XX_PROTECT_RDONLY(0x03cc4, 0x1fff),483483+ A6XX_PROTECT_NORDWR(0x08630, 0x01cf),484484+ A6XX_PROTECT_NORDWR(0x08e00, 0x0000),485485+ A6XX_PROTECT_NORDWR(0x08e08, 0x0000),486486+ A6XX_PROTECT_NORDWR(0x08e50, 0x001f),487487+ A6XX_PROTECT_NORDWR(0x09624, 0x01db),488488+ A6XX_PROTECT_NORDWR(0x09e70, 0x0001),489489+ A6XX_PROTECT_NORDWR(0x09e78, 0x0187),490490+ A6XX_PROTECT_NORDWR(0x0a630, 0x01cf),491491+ A6XX_PROTECT_NORDWR(0x0ae02, 0x0000),492492+ A6XX_PROTECT_NORDWR(0x0ae50, 0x032f),493493+ A6XX_PROTECT_NORDWR(0x0b604, 0x0000),494494+ A6XX_PROTECT_NORDWR(0x0be02, 0x0001),495495+ A6XX_PROTECT_NORDWR(0x0be20, 0x17df),496496+ A6XX_PROTECT_NORDWR(0x0f000, 0x0bff),497497+ A6XX_PROTECT_RDONLY(0x0fc00, 0x1fff),498498+ A6XX_PROTECT_NORDWR(0x11c00, 0x0000), /* note: infinite range */499499+};500500+501501+/* These are for a620 and a650 */502502+static const u32 a650_protect[] = {503503+ A6XX_PROTECT_RDONLY(0x00000, 0x04ff),504504+ A6XX_PROTECT_RDONLY(0x00501, 0x0005),505505+ A6XX_PROTECT_RDONLY(0x0050b, 0x02f4),506506+ A6XX_PROTECT_NORDWR(0x0050e, 0x0000),507507+ A6XX_PROTECT_NORDWR(0x00510, 0x0000),508508+ A6XX_PROTECT_NORDWR(0x00534, 0x0000),509509+ A6XX_PROTECT_NORDWR(0x00800, 0x0082),510510+ A6XX_PROTECT_NORDWR(0x008a0, 0x0008),511511+ A6XX_PROTECT_NORDWR(0x008ab, 0x0024),512512+ A6XX_PROTECT_RDONLY(0x008de, 0x00ae),513513+ A6XX_PROTECT_NORDWR(0x00900, 0x004d),514514+ A6XX_PROTECT_NORDWR(0x0098d, 0x0272),515515+ A6XX_PROTECT_NORDWR(0x00e00, 0x0001),516516+ A6XX_PROTECT_NORDWR(0x00e03, 0x000c),517517+ A6XX_PROTECT_NORDWR(0x03c00, 0x00c3),518518+ A6XX_PROTECT_RDONLY(0x03cc4, 0x1fff),519519+ A6XX_PROTECT_NORDWR(0x08630, 0x01cf),520520+ A6XX_PROTECT_NORDWR(0x08e00, 0x0000),521521+ A6XX_PROTECT_NORDWR(0x08e08, 0x0000),522522+ A6XX_PROTECT_NORDWR(0x08e50, 0x001f),523523+ A6XX_PROTECT_NORDWR(0x08e80, 0x027f),524524+ A6XX_PROTECT_NORDWR(0x09624, 0x01db),525525+ A6XX_PROTECT_NORDWR(0x09e60, 0x0011),526526+ A6XX_PROTECT_NORDWR(0x09e78, 0x0187),527527+ A6XX_PROTECT_NORDWR(0x0a630, 0x01cf),528528+ A6XX_PROTECT_NORDWR(0x0ae02, 0x0000),529529+ A6XX_PROTECT_NORDWR(0x0ae50, 0x032f),530530+ A6XX_PROTECT_NORDWR(0x0b604, 0x0000),531531+ A6XX_PROTECT_NORDWR(0x0b608, 0x0007),532532+ A6XX_PROTECT_NORDWR(0x0be02, 0x0001),533533+ A6XX_PROTECT_NORDWR(0x0be20, 0x17df),534534+ A6XX_PROTECT_NORDWR(0x0f000, 0x0bff),535535+ A6XX_PROTECT_RDONLY(0x0fc00, 0x1fff),536536+ A6XX_PROTECT_NORDWR(0x18400, 0x1fff),537537+ A6XX_PROTECT_NORDWR(0x1a800, 0x1fff),538538+ A6XX_PROTECT_NORDWR(0x1f400, 0x0443),539539+ A6XX_PROTECT_RDONLY(0x1f844, 0x007b),540540+ A6XX_PROTECT_NORDWR(0x1f887, 0x001b),541541+ A6XX_PROTECT_NORDWR(0x1f8c0, 0x0000), /* note: infinite range */542542+};543543+544544+static void a6xx_set_cp_protect(struct msm_gpu *gpu)545545+{546546+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);547547+ const u32 *regs = a6xx_protect;548548+ unsigned i, count = ARRAY_SIZE(a6xx_protect), count_max = 32;549549+550550+ BUILD_BUG_ON(ARRAY_SIZE(a6xx_protect) > 32);551551+ BUILD_BUG_ON(ARRAY_SIZE(a650_protect) > 48);552552+553553+ if (adreno_is_a650(adreno_gpu)) {554554+ regs = a650_protect;555555+ count = ARRAY_SIZE(a650_protect);556556+ count_max = 48;557557+ }558558+559559+ /*560560+ * Enable access protection to privileged registers, fault on an access561561+ * protect violation and select the last span to protect from the start562562+ * address all the way to the end of the register address space563563+ */564564+ gpu_write(gpu, REG_A6XX_CP_PROTECT_CNTL, BIT(0) | BIT(1) | BIT(3));565565+566566+ for (i = 0; i < count - 1; i++)567567+ gpu_write(gpu, REG_A6XX_CP_PROTECT(i), regs[i]);568568+ /* last CP_PROTECT to have "infinite" length on the last entry */569569+ gpu_write(gpu, REG_A6XX_CP_PROTECT(count_max - 1), regs[i]);570570+}571571+465572static void a6xx_set_ubwc_config(struct msm_gpu *gpu)466573{467574 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);···596489 rgb565_predicator << 11 | amsbc << 4 | lower_bit << 1);597490 gpu_write(gpu, REG_A6XX_TPL1_NC_MODE_CNTL, lower_bit << 1);598491 gpu_write(gpu, REG_A6XX_SP_NC_MODE_CNTL,599599- uavflagprd_inv >> 4 | lower_bit << 1);492492+ uavflagprd_inv << 4 | lower_bit << 1);600493 gpu_write(gpu, REG_A6XX_UCHE_MODE_CNTL, lower_bit << 21);601494}602495···883776 }884777885778 /* Protect registers from the CP */886886- gpu_write(gpu, REG_A6XX_CP_PROTECT_CNTL, 0x00000003);887887-888888- gpu_write(gpu, REG_A6XX_CP_PROTECT(0),889889- A6XX_PROTECT_RDONLY(0x600, 0x51));890890- gpu_write(gpu, REG_A6XX_CP_PROTECT(1), A6XX_PROTECT_RW(0xae50, 0x2));891891- gpu_write(gpu, REG_A6XX_CP_PROTECT(2), A6XX_PROTECT_RW(0x9624, 0x13));892892- gpu_write(gpu, REG_A6XX_CP_PROTECT(3), A6XX_PROTECT_RW(0x8630, 0x8));893893- gpu_write(gpu, REG_A6XX_CP_PROTECT(4), A6XX_PROTECT_RW(0x9e70, 0x1));894894- gpu_write(gpu, REG_A6XX_CP_PROTECT(5), A6XX_PROTECT_RW(0x9e78, 0x187));895895- gpu_write(gpu, REG_A6XX_CP_PROTECT(6), A6XX_PROTECT_RW(0xf000, 0x810));896896- gpu_write(gpu, REG_A6XX_CP_PROTECT(7),897897- A6XX_PROTECT_RDONLY(0xfc00, 0x3));898898- gpu_write(gpu, REG_A6XX_CP_PROTECT(8), A6XX_PROTECT_RW(0x50e, 0x0));899899- gpu_write(gpu, REG_A6XX_CP_PROTECT(9), A6XX_PROTECT_RDONLY(0x50f, 0x0));900900- gpu_write(gpu, REG_A6XX_CP_PROTECT(10), A6XX_PROTECT_RW(0x510, 0x0));901901- gpu_write(gpu, REG_A6XX_CP_PROTECT(11),902902- A6XX_PROTECT_RDONLY(0x0, 0x4f9));903903- gpu_write(gpu, REG_A6XX_CP_PROTECT(12),904904- A6XX_PROTECT_RDONLY(0x501, 0xa));905905- gpu_write(gpu, REG_A6XX_CP_PROTECT(13),906906- A6XX_PROTECT_RDONLY(0x511, 0x44));907907- gpu_write(gpu, REG_A6XX_CP_PROTECT(14), A6XX_PROTECT_RW(0xe00, 0xe));908908- gpu_write(gpu, REG_A6XX_CP_PROTECT(15), A6XX_PROTECT_RW(0x8e00, 0x0));909909- gpu_write(gpu, REG_A6XX_CP_PROTECT(16), A6XX_PROTECT_RW(0x8e50, 0xf));910910- gpu_write(gpu, REG_A6XX_CP_PROTECT(17), A6XX_PROTECT_RW(0xbe02, 0x0));911911- gpu_write(gpu, REG_A6XX_CP_PROTECT(18),912912- A6XX_PROTECT_RW(0xbe20, 0x11f3));913913- gpu_write(gpu, REG_A6XX_CP_PROTECT(19), A6XX_PROTECT_RW(0x800, 0x82));914914- gpu_write(gpu, REG_A6XX_CP_PROTECT(20), A6XX_PROTECT_RW(0x8a0, 0x8));915915- gpu_write(gpu, REG_A6XX_CP_PROTECT(21), A6XX_PROTECT_RW(0x8ab, 0x19));916916- gpu_write(gpu, REG_A6XX_CP_PROTECT(22), A6XX_PROTECT_RW(0x900, 0x4d));917917- gpu_write(gpu, REG_A6XX_CP_PROTECT(23), A6XX_PROTECT_RW(0x98d, 0x76));918918- gpu_write(gpu, REG_A6XX_CP_PROTECT(24),919919- A6XX_PROTECT_RDONLY(0x980, 0x4));920920- gpu_write(gpu, REG_A6XX_CP_PROTECT(25), A6XX_PROTECT_RW(0xa630, 0x0));779779+ a6xx_set_cp_protect(gpu);921780922781 /* Enable expanded apriv for targets that support it */923782 if (gpu->hw_apriv) {···12841211 if (ret)12851212 return ret;1286121312871287- if (adreno_gpu->base.hw_apriv || a6xx_gpu->has_whereami)12141214+ if (a6xx_gpu->shadow_bo)12881215 for (i = 0; i < gpu->nr_rings; i++)12891216 a6xx_gpu->shadow[i] = 0;12901217
+1-1
drivers/gpu/drm/msm/adreno/a6xx_gpu.h
···4444 * REG_CP_PROTECT_REG(n) - this will block both reads and writes for _len4545 * registers starting at _reg.4646 */4747-#define A6XX_PROTECT_RW(_reg, _len) \4747+#define A6XX_PROTECT_NORDWR(_reg, _len) \4848 ((1 << 31) | \4949 (((_len) & 0x3FFF) << 18) | ((_reg) & 0x3FFFF))5050
···510510 * dGPU sector layout.511511 */512512 if (tegra_plane_state->tiling.sector_layout == TEGRA_BO_SECTOR_LAYOUT_GPU)513513- base |= BIT(39);513513+ base |= BIT_ULL(39);514514#endif515515516516 tegra_plane_writel(p, tegra_plane_state->format, DC_WIN_COLOR_DEPTH);
+42-28
drivers/gpu/drm/tegra/sor.c
···31253125 if (err < 0) {31263126 dev_err(sor->dev, "failed to acquire SOR reset: %d\n",31273127 err);31283128- return err;31283128+ goto rpm_put;31293129 }3130313031313131 err = reset_control_assert(sor->rst);31323132 if (err < 0) {31333133 dev_err(sor->dev, "failed to assert SOR reset: %d\n",31343134 err);31353135- return err;31353135+ goto rpm_put;31363136 }31373137 }3138313831393139 err = clk_prepare_enable(sor->clk);31403140 if (err < 0) {31413141 dev_err(sor->dev, "failed to enable clock: %d\n", err);31423142- return err;31423142+ goto rpm_put;31433143 }3144314431453145 usleep_range(1000, 3000);···31503150 dev_err(sor->dev, "failed to deassert SOR reset: %d\n",31513151 err);31523152 clk_disable_unprepare(sor->clk);31533153- return err;31533153+ goto rpm_put;31543154 }3155315531563156 reset_control_release(sor->rst);···31713171 }3172317231733173 return 0;31743174+31753175+rpm_put:31763176+ if (sor->rst)31773177+ pm_runtime_put(sor->dev);31783178+31793179+ return err;31743180}3175318131763182static int tegra_sor_exit(struct host1x_client *client)···37453739 if (!sor->aux)37463740 return -EPROBE_DEFER;3747374137483748- if (get_device(&sor->aux->ddc.dev)) {37493749- if (try_module_get(sor->aux->ddc.owner))37503750- sor->output.ddc = &sor->aux->ddc;37513751- else37523752- put_device(&sor->aux->ddc.dev);37533753- }37423742+ if (get_device(sor->aux->dev))37433743+ sor->output.ddc = &sor->aux->ddc;37543744 }3755374537563746 if (!sor->aux) {···3774377237753773 err = tegra_sor_parse_dt(sor);37763774 if (err < 0)37773777- return err;37753775+ goto put_aux;3778377637793777 err = tegra_output_probe(&sor->output);37803780- if (err < 0)37813781- return dev_err_probe(&pdev->dev, err,37823782- "failed to probe output\n");37783778+ if (err < 0) {37793779+ dev_err_probe(&pdev->dev, err, "failed to probe output\n");37803780+ goto put_aux;37813781+ }3783378237843783 if (sor->ops && sor->ops->probe) {37853784 err = sor->ops->probe(sor);···39193916 platform_set_drvdata(pdev, sor);39203917 pm_runtime_enable(&pdev->dev);3921391839223922- INIT_LIST_HEAD(&sor->client.list);39193919+ host1x_client_init(&sor->client);39233920 sor->client.ops = &sor_client_ops;39243921 sor->client.dev = &pdev->dev;39253925-39263926- err = host1x_client_register(&sor->client);39273927- if (err < 0) {39283928- dev_err(&pdev->dev, "failed to register host1x client: %d\n",39293929- err);39303930- goto rpm_disable;39313931- }3932392239333923 /*39343924 * On Tegra210 and earlier, provide our own implementation for the···39343938 sor->index);39353939 if (!name) {39363940 err = -ENOMEM;39373937- goto unregister;39413941+ goto uninit;39383942 }3939394339403944 err = host1x_client_resume(&sor->client);39413945 if (err < 0) {39423946 dev_err(sor->dev, "failed to resume: %d\n", err);39433943- goto unregister;39473947+ goto uninit;39443948 }3945394939463950 sor->clk_pad = tegra_clk_sor_pad_register(sor, name);···39513955 err = PTR_ERR(sor->clk_pad);39523956 dev_err(sor->dev, "failed to register SOR pad clock: %d\n",39533957 err);39543954- goto unregister;39583958+ goto uninit;39593959+ }39603960+39613961+ err = __host1x_client_register(&sor->client);39623962+ if (err < 0) {39633963+ dev_err(&pdev->dev, "failed to register host1x client: %d\n",39643964+ err);39653965+ goto uninit;39553966 }3956396739573968 return 0;3958396939593959-unregister:39603960- host1x_client_unregister(&sor->client);39613961-rpm_disable:39703970+uninit:39713971+ host1x_client_exit(&sor->client);39623972 pm_runtime_disable(&pdev->dev);39633973remove:39743974+ if (sor->aux)39753975+ sor->output.ddc = NULL;39763976+39643977 tegra_output_remove(&sor->output);39783978+put_aux:39793979+ if (sor->aux)39803980+ put_device(sor->aux->dev);39813981+39653982 return err;39663983}39673984···39913982 }3992398339933984 pm_runtime_disable(&pdev->dev);39853985+39863986+ if (sor->aux) {39873987+ put_device(sor->aux->dev);39883988+ sor->output.ddc = NULL;39893989+ }3994399039953991 tegra_output_remove(&sor->output);39963992
+4-1
drivers/gpu/drm/ttm/ttm_bo.c
···11721172 if (!ttm_bo_evict_swapout_allowable(bo, ctx, &locked, NULL))11731173 return -EBUSY;1174117411751175- if (!ttm_bo_get_unless_zero(bo)) {11751175+ if (!bo->ttm || !ttm_tt_is_populated(bo->ttm) ||11761176+ bo->ttm->page_flags & TTM_PAGE_FLAG_SG ||11771177+ bo->ttm->page_flags & TTM_PAGE_FLAG_SWAPPED ||11781178+ !ttm_bo_get_unless_zero(bo)) {11761179 if (locked)11771180 dma_resv_unlock(bo->base.resv);11781181 return -EBUSY;
+1-7
drivers/gpu/drm/ttm/ttm_device.c
···143143144144 for (j = 0; j < TTM_MAX_BO_PRIORITY; ++j) {145145 list_for_each_entry(bo, &man->lru[j], lru) {146146- uint32_t num_pages;146146+ uint32_t num_pages = PFN_UP(bo->base.size);147147148148- if (!bo->ttm || !ttm_tt_is_populated(bo->ttm) ||149149- bo->ttm->page_flags & TTM_PAGE_FLAG_SG ||150150- bo->ttm->page_flags & TTM_PAGE_FLAG_SWAPPED)151151- continue;152152-153153- num_pages = bo->ttm->num_pages;154148 ret = ttm_bo_swapout(bo, ctx, gfp_flags);155149 /* ttm_bo_swapout has dropped the lru_lock */156150 if (!ret)
+1-1
drivers/gpu/drm/vc4/vc4_kms.c
···372372 if (!old_hvs_state->fifo_state[channel].in_use)373373 continue;374374375375- ret = drm_crtc_commit_wait(old_hvs_state->fifo_state[i].pending_commit);375375+ ret = drm_crtc_commit_wait(old_hvs_state->fifo_state[channel].pending_commit);376376 if (ret)377377 drm_err(dev, "Timed out waiting for commit\n");378378 }
+24-6
drivers/gpu/host1x/bus.c
···736736EXPORT_SYMBOL(host1x_driver_unregister);737737738738/**739739+ * __host1x_client_init() - initialize a host1x client740740+ * @client: host1x client741741+ * @key: lock class key for the client-specific mutex742742+ */743743+void __host1x_client_init(struct host1x_client *client, struct lock_class_key *key)744744+{745745+ INIT_LIST_HEAD(&client->list);746746+ __mutex_init(&client->lock, "host1x client lock", key);747747+ client->usecount = 0;748748+}749749+EXPORT_SYMBOL(__host1x_client_init);750750+751751+/**752752+ * host1x_client_exit() - uninitialize a host1x client753753+ * @client: host1x client754754+ */755755+void host1x_client_exit(struct host1x_client *client)756756+{757757+ mutex_destroy(&client->lock);758758+}759759+EXPORT_SYMBOL(host1x_client_exit);760760+761761+/**739762 * __host1x_client_register() - register a host1x client740763 * @client: host1x client741764 * @key: lock class key for the client-specific mutex···770747 * device and call host1x_device_init(), which will in turn call each client's771748 * &host1x_client_ops.init implementation.772749 */773773-int __host1x_client_register(struct host1x_client *client,774774- struct lock_class_key *key)750750+int __host1x_client_register(struct host1x_client *client)775751{776752 struct host1x *host1x;777753 int err;778778-779779- INIT_LIST_HEAD(&client->list);780780- __mutex_init(&client->lock, "host1x client lock", key);781781- client->usecount = 0;782754783755 mutex_lock(&devices_lock);784756
+17-2
drivers/hid/Kconfig
···9393 depends on HID94949595config HID_A4TECH9696- tristate "A4 tech mice"9696+ tristate "A4TECH mice"9797 depends on HID9898 default !EXPERT9999 help100100- Support for A4 tech X5 and WOP-35 / Trust 450L mice.100100+ Support for some A4TECH mice with two scroll wheels.101101102102config HID_ACCUTOUCH103103 tristate "Accutouch touch device"···921921 depends on HID922922 help923923 Support for Samsung InfraRed remote control or keyboards.924924+925925+config HID_SEMITEK926926+ tristate "Semitek USB keyboards"927927+ depends on HID928928+ help929929+ Support for Semitek USB keyboards that are not fully compliant930930+ with the HID standard.931931+932932+ There are many variants, including:933933+ - GK61, GK64, GK68, GK84, GK96, etc.934934+ - SK61, SK64, SK68, SK84, SK96, etc.935935+ - Dierya DK61/DK66936936+ - Tronsmart TK09R937937+ - Woo-dy938938+ - X-Bows Nature/Knight924939925940config HID_SONY926941 tristate "Sony PS2/3/4 accessories"
···162162 int i;163163164164 for (i = 0; i < cli_data->num_hid_devices; ++i) {165165- kfree(cli_data->feature_report[i]);166166- kfree(cli_data->input_report[i]);167167- kfree(cli_data->report_descr[i]);168165 if (cli_data->hid_sensor_hubs[i]) {169166 kfree(cli_data->hid_sensor_hubs[i]->driver_data);170167 hid_destroy_device(cli_data->hid_sensor_hubs[i]);
···964964965965 case 0x0cd: map_key_clear(KEY_PLAYPAUSE); break;966966 case 0x0cf: map_key_clear(KEY_VOICECOMMAND); break;967967+968968+ case 0x0d9: map_key_clear(KEY_EMOJI_PICKER); break;969969+967970 case 0x0e0: map_abs_clear(ABS_VOLUME); break;968971 case 0x0e2: map_key_clear(KEY_MUTE); break;969972 case 0x0e5: map_key_clear(KEY_BASSBOOST); break;
···374374 raw_report = usbhid->ctrl[usbhid->ctrltail].raw_report;375375 dir = usbhid->ctrl[usbhid->ctrltail].dir;376376377377- len = ((report->size - 1) >> 3) + 1 + (report->id > 0);377377+ len = hid_report_len(report);378378 if (dir == USB_DIR_OUT) {379379 usbhid->urbctrl->pipe = usb_sndctrlpipe(hid_to_usb_dev(hid), 0);380380 usbhid->urbctrl->transfer_buffer_length = len;
+1
drivers/hid/usbhid/hid-pidff.c
···1292129212931293 if (pidff->pool[PID_DEVICE_MANAGED_POOL].value &&12941294 pidff->pool[PID_DEVICE_MANAGED_POOL].value[0] == 0) {12951295+ error = -EPERM;12951296 hid_notice(hid,12961297 "device does not support device managed pool\n");12971298 goto fail;
+14
drivers/hwmon/corsair-psu.c
···771771 return 0;772772}773773774774+#ifdef CONFIG_PM775775+static int corsairpsu_resume(struct hid_device *hdev)776776+{777777+ struct corsairpsu_data *priv = hid_get_drvdata(hdev);778778+779779+ /* some PSUs turn off the microcontroller during standby, so a reinit is required */780780+ return corsairpsu_init(priv);781781+}782782+#endif783783+774784static const struct hid_device_id corsairpsu_idtable[] = {775785 { HID_USB_DEVICE(0x1b1c, 0x1c03) }, /* Corsair HX550i */776786 { HID_USB_DEVICE(0x1b1c, 0x1c04) }, /* Corsair HX650i */···803793 .probe = corsairpsu_probe,804794 .remove = corsairpsu_remove,805795 .raw_event = corsairpsu_raw_event,796796+#ifdef CONFIG_PM797797+ .resume = corsairpsu_resume,798798+ .reset_resume = corsairpsu_resume,799799+#endif806800};807801module_hid_driver(corsairpsu_driver);808802
+2-2
drivers/hwmon/dell-smm-hwmon.c
···838838static umode_t i8k_is_visible(struct kobject *kobj, struct attribute *attr,839839 int index)840840{841841- if (disallow_fan_support && index >= 8)841841+ if (disallow_fan_support && index >= 20)842842 return 0;843843 if (disallow_fan_type_call &&844844- (index == 9 || index == 12 || index == 15))844844+ (index == 21 || index == 25 || index == 28))845845 return 0;846846 if (index >= 0 && index <= 1 &&847847 !(i8k_hwmon_flags & I8K_HWMON_HAVE_TEMP1))
+25-7
drivers/hwmon/pmbus/fsp-3y.c
···3737 struct pmbus_driver_info info;3838 int chip;3939 int page;4040+4141+ bool vout_linear_11;4042};41434244#define to_fsp3y_data(x) container_of(x, struct fsp3y_data, info)···110108 int rv;111109112110 /*113113- * YH5151-E outputs vout in linear11. The conversion is done when114114- * reading. Here, we have to inject pmbus_core with the correct115115- * exponent (it is -6).111111+ * Inject an exponent for non-compliant YH5151-E.116112 */117117- if (data->chip == yh5151e && reg == PMBUS_VOUT_MODE)113113+ if (data->vout_linear_11 && reg == PMBUS_VOUT_MODE)118114 return 0x1A;119115120116 rv = set_page(client, page);···161161 return rv;162162163163 /*164164- * YH-5151E is non-compliant and outputs output voltages in linear11165165- * instead of linear16.164164+ * Handle YH-5151E non-compliant linear11 vout voltage.166165 */167167- if (data->chip == yh5151e && reg == PMBUS_READ_VOUT)166166+ if (data->vout_linear_11 && reg == PMBUS_READ_VOUT)168167 rv = sign_extend32(rv, 10) & 0xffff;169168170169 return rv;···254255 data->page = rv;255256256257 data->info = fsp3y_info[data->chip];258258+259259+ /*260260+ * YH-5151E sometimes reports vout in linear11 and sometimes in261261+ * linear16. This depends on the exact individual piece of hardware. One262262+ * YH-5151E can use linear16 and another might use linear11 instead.263263+ *264264+ * The format can be recognized by reading VOUT_MODE - if it doesn't265265+ * report a valid exponent, then vout uses linear11. Otherwise, the266266+ * device is compliant and uses linear16.267267+ */268268+ data->vout_linear_11 = false;269269+ if (data->chip == yh5151e) {270270+ rv = i2c_smbus_read_byte_data(client, PMBUS_VOUT_MODE);271271+ if (rv < 0)272272+ return rv;273273+274274+ if (rv == 0xFF)275275+ data->vout_linear_11 = true;276276+ }257277258278 return pmbus_do_probe(client, &data->info);259279}
···299299 dev_err(&client->dev, "Failed to read Manufacturer ID\n");300300 return ret;301301 }302302- if (ret != 5 || strncmp(buf, "DELTA", 5)) {302302+ if (ret != 6 || strncmp(buf, "DELTA", 5)) {303303 buf[ret] = '\0';304304 dev_err(dev, "Unsupported Manufacturer ID '%s'\n", buf);305305 return -ENODEV;
+9
drivers/hwmon/scpi-hwmon.c
···9999100100 scpi_scale_reading(&value, sensor);101101102102+ /*103103+ * Temperature sensor values are treated as signed values based on104104+ * observation even though that is not explicitly specified, and105105+ * because an unsigned u64 temperature does not really make practical106106+ * sense especially when the temperature is below zero degrees Celsius.107107+ */108108+ if (sensor->info.class == TEMPERATURE)109109+ return sprintf(buf, "%lld\n", (s64)value);110110+102111 return sprintf(buf, "%llu\n", value);103112}104113
···5555#define ALTR_I2C_XFER_TIMEOUT (msecs_to_jiffies(250))56565757/**5858- * altr_i2c_dev - I2C device context5858+ * struct altr_i2c_dev - I2C device context5959 * @base: pointer to register struct6060 * @msg: pointer to current message6161 * @msg_len: number of bytes transferred in msg···172172 altr_i2c_int_enable(idev, ALTR_I2C_ALL_IRQ, false);173173}174174175175-/**175175+/*176176 * altr_i2c_transfer - On the last byte to be transmitted, send177177 * a Stop bit on the last byte.178178 */···185185 writel(data, idev->base + ALTR_I2C_TFR_CMD);186186}187187188188-/**188188+/*189189 * altr_i2c_empty_rx_fifo - Fetch data from RX FIFO until end of190190 * transfer. Send a Stop bit on the last byte.191191 */···201201 }202202}203203204204-/**204204+/*205205 * altr_i2c_fill_tx_fifo - Fill TX FIFO from current message buffer.206206- * @return: Number of bytes left to transfer.207206 */208207static int altr_i2c_fill_tx_fifo(struct altr_i2c_dev *idev)209208{
···6565 *out |= SERIALI2C_RECV_LEN;6666}67676868-/**6868+/*6969 * The serialized I2C format is simply the following:7070 * [addr little-endian][flags little-endian][len little-endian][data if write]7171 * [addr little-endian][flags little-endian][len little-endian][data if write]···109109 request->xfer.data_size = pos;110110}111111112112-/**112112+/*113113 * The data in the BPMP -> CPU direction is composed of sequential blocks for114114 * those messages that have I2C_M_RD. So, for example, if you have:115115 *
···364364365365 /* The rest of this all shows up in sysfs */366366 unsigned int sequential_cutoff;367367- unsigned int readahead;368367369368 unsigned int io_disable:1;370369 unsigned int verify:1;
+7-13
drivers/md/bcache/request.c
···880880 struct bio *bio, unsigned int sectors)881881{882882 int ret = MAP_CONTINUE;883883- unsigned int reada = 0;884883 struct cached_dev *dc = container_of(s->d, struct cached_dev, disk);885884 struct bio *miss, *cache_bio;885885+ unsigned int size_limit;886886887887 s->cache_missed = 1;888888···892892 goto out_submit;893893 }894894895895- if (!(bio->bi_opf & REQ_RAHEAD) &&896896- !(bio->bi_opf & (REQ_META|REQ_PRIO)) &&897897- s->iop.c->gc_stats.in_use < CUTOFF_CACHE_READA)898898- reada = min_t(sector_t, dc->readahead >> 9,899899- get_capacity(bio->bi_bdev->bd_disk) -900900- bio_end_sector(bio));901901-902902- s->insert_bio_sectors = min(sectors, bio_sectors(bio) + reada);895895+ /* Limitation for valid replace key size and cache_bio bvecs number */896896+ size_limit = min_t(unsigned int, BIO_MAX_VECS * PAGE_SECTORS,897897+ (1 << KEY_SIZE_BITS) - 1);898898+ s->insert_bio_sectors = min3(size_limit, sectors, bio_sectors(bio));903899904900 s->iop.replace_key = KEY(s->iop.inode,905901 bio->bi_iter.bi_sector + s->insert_bio_sectors,···907911908912 s->iop.replace = true;909913910910- miss = bio_next_split(bio, sectors, GFP_NOIO, &s->d->bio_split);914914+ miss = bio_next_split(bio, s->insert_bio_sectors, GFP_NOIO,915915+ &s->d->bio_split);911916912917 /* btree_search_recurse()'s btree iterator is no good anymore */913918 ret = miss == bio ? MAP_DONE : -EINTR;···929932 bch_bio_map(cache_bio, NULL);930933 if (bch_bio_alloc_pages(cache_bio, __GFP_NOWARN|GFP_NOIO))931934 goto out_put;932932-933933- if (reada)934934- bch_mark_cache_readahead(s->iop.c, s->d);935935936936 s->cache_miss = miss;937937 s->iop.bio = cache_bio;
···589589 * down before configuring tc params.590590 */591591 if (netif_running(dev)) {592592- cxgb_close(dev);592592+ netif_tx_stop_all_queues(dev);593593+ netif_carrier_off(dev);593594 needs_bring_up = true;594595 }595596···616615 }617616618617out:619619- if (needs_bring_up)620620- cxgb_open(dev);618618+ if (needs_bring_up) {619619+ netif_tx_start_all_queues(dev);620620+ netif_carrier_on(dev);621621+ }621622622623 mutex_unlock(&adap->tc_mqprio->mqprio_mutex);623624 return ret;
+6
drivers/net/ethernet/chelsio/cxgb4/sge.c
···25562556 if (!eosw_txq)25572557 return -ENOMEM;2558255825592559+ if (!(adap->flags & CXGB4_FW_OK)) {25602560+ /* Don't stall caller when access to FW is lost */25612561+ complete(&eosw_txq->completion);25622562+ return -EIO;25632563+ }25642564+25592565 skb = alloc_skb(len, GFP_KERNEL);25602566 if (!skb)25612567 return -ENOMEM;
+6-1
drivers/net/ethernet/intel/i40e/i40e_txrx.c
···23132313 case XDP_TX:23142314 xdp_ring = rx_ring->vsi->xdp_rings[rx_ring->queue_index];23152315 result = i40e_xmit_xdp_tx_ring(xdp, xdp_ring);23162316+ if (result == I40E_XDP_CONSUMED)23172317+ goto out_failure;23162318 break;23172319 case XDP_REDIRECT:23182320 err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);23192319- result = !err ? I40E_XDP_REDIR : I40E_XDP_CONSUMED;23212321+ if (err)23222322+ goto out_failure;23232323+ result = I40E_XDP_REDIR;23202324 break;23212325 default:23222326 bpf_warn_invalid_xdp_action(act);23232327 fallthrough;23242328 case XDP_ABORTED:23292329+out_failure:23252330 trace_xdp_exception(rx_ring->netdev, xdp_prog, act);23262331 fallthrough; /* handle aborts by dropping packet */23272332 case XDP_DROP:
+6-2
drivers/net/ethernet/intel/i40e/i40e_xsk.c
···162162163163 if (likely(act == XDP_REDIRECT)) {164164 err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);165165- result = !err ? I40E_XDP_REDIR : I40E_XDP_CONSUMED;165165+ if (err)166166+ goto out_failure;166167 rcu_read_unlock();167167- return result;168168+ return I40E_XDP_REDIR;168169 }169170170171 switch (act) {···174173 case XDP_TX:175174 xdp_ring = rx_ring->vsi->xdp_rings[rx_ring->queue_index];176175 result = i40e_xmit_xdp_tx_ring(xdp, xdp_ring);176176+ if (result == I40E_XDP_CONSUMED)177177+ goto out_failure;177178 break;178179 default:179180 bpf_warn_invalid_xdp_action(act);180181 fallthrough;181182 case XDP_ABORTED:183183+out_failure:182184 trace_xdp_exception(rx_ring->netdev, xdp_prog, act);183185 fallthrough; /* handle aborts by dropping packet */184186 case XDP_DROP:
···105105 if (!vsi->q_vectors)106106 goto err_vectors;107107108108+ vsi->af_xdp_zc_qps = bitmap_zalloc(max_t(int, vsi->alloc_txq, vsi->alloc_rxq), GFP_KERNEL);109109+ if (!vsi->af_xdp_zc_qps)110110+ goto err_zc_qps;111111+108112 return 0;109113114114+err_zc_qps:115115+ devm_kfree(dev, vsi->q_vectors);110116err_vectors:111117 devm_kfree(dev, vsi->rxq_map);112118err_rxq_map:···200194 break;201195 case ICE_VSI_VF:202196 vf = &pf->vf[vsi->vf_id];197197+ if (vf->num_req_qs)198198+ vf->num_vf_qs = vf->num_req_qs;203199 vsi->alloc_txq = vf->num_vf_qs;204200 vsi->alloc_rxq = vf->num_vf_qs;205201 /* pf->num_msix_per_vf includes (VF miscellaneous vector +···296288297289 dev = ice_pf_to_dev(pf);298290291291+ if (vsi->af_xdp_zc_qps) {292292+ bitmap_free(vsi->af_xdp_zc_qps);293293+ vsi->af_xdp_zc_qps = NULL;294294+ }299295 /* free the ring and vector containers */300296 if (vsi->q_vectors) {301297 devm_kfree(dev, vsi->q_vectors);
+13-4
drivers/net/ethernet/intel/ice/ice_txrx.c
···523523 struct bpf_prog *xdp_prog)524524{525525 struct ice_ring *xdp_ring;526526- int err;526526+ int err, result;527527 u32 act;528528529529 act = bpf_prog_run_xdp(xdp_prog, xdp);···532532 return ICE_XDP_PASS;533533 case XDP_TX:534534 xdp_ring = rx_ring->vsi->xdp_rings[smp_processor_id()];535535- return ice_xmit_xdp_buff(xdp, xdp_ring);535535+ result = ice_xmit_xdp_buff(xdp, xdp_ring);536536+ if (result == ICE_XDP_CONSUMED)537537+ goto out_failure;538538+ return result;536539 case XDP_REDIRECT:537540 err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);538538- return !err ? ICE_XDP_REDIR : ICE_XDP_CONSUMED;541541+ if (err)542542+ goto out_failure;543543+ return ICE_XDP_REDIR;539544 default:540545 bpf_warn_invalid_xdp_action(act);541546 fallthrough;542547 case XDP_ABORTED:548548+out_failure:543549 trace_xdp_exception(rx_ring->netdev, xdp_prog, act);544550 fallthrough;545551 case XDP_DROP:···21492143 struct ice_tx_offload_params offload = { 0 };21502144 struct ice_vsi *vsi = tx_ring->vsi;21512145 struct ice_tx_buf *first;21462146+ struct ethhdr *eth;21522147 unsigned int count;21532148 int tso, csum;21542149···21962189 goto out_drop;2197219021982191 /* allow CONTROL frames egress from main VSI if FW LLDP disabled */21992199- if (unlikely(skb->priority == TC_PRIO_CONTROL &&21922192+ eth = (struct ethhdr *)skb_mac_header(skb);21932193+ if (unlikely((skb->priority == TC_PRIO_CONTROL ||21942194+ eth->h_proto == htons(ETH_P_LLDP)) &&22002195 vsi->type == ICE_VSI_PF &&22012196 vsi->port_info->qos_cfg.is_sw_lldp))22022197 offload.cd_qw1 |= (u64)(ICE_TX_DESC_DTYPE_CTX |
+13-6
drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
···713713 */714714 clear_bit(ICE_VF_STATE_INIT, vf->vf_states);715715716716- /* VF_MBX_ARQLEN is cleared by PFR, so the driver needs to clear it717717- * in the case of VFR. If this is done for PFR, it can mess up VF718718- * resets because the VF driver may already have started cleanup719719- * by the time we get here.716716+ /* VF_MBX_ARQLEN and VF_MBX_ATQLEN are cleared by PFR, so the driver717717+ * needs to clear them in the case of VFR/VFLR. If this is done for718718+ * PFR, it can mess up VF resets because the VF driver may already719719+ * have started cleanup by the time we get here.720720 */721721- if (!is_pfr)721721+ if (!is_pfr) {722722 wr32(hw, VF_MBX_ARQLEN(vf->vf_id), 0);723723+ wr32(hw, VF_MBX_ATQLEN(vf->vf_id), 0);724724+ }723725724726 /* In the case of a VFLR, the HW has already reset the VF and we725727 * just need to clean up, so don't hit the VFRTRIG register.···17001698 ice_vf_ctrl_vsi_release(vf);1701169917021700 ice_vf_pre_vsi_rebuild(vf);17031703- ice_vf_rebuild_vsi_with_release(vf);17011701+17021702+ if (ice_vf_rebuild_vsi_with_release(vf)) {17031703+ dev_err(dev, "Failed to release and setup the VF%u's VSI\n", vf->vf_id);17041704+ return false;17051705+ }17061706+17041707 ice_vf_post_vsi_rebuild(vf);1705170817061709 /* if the VF has been reset allow it to come up again */
+9-2
drivers/net/ethernet/intel/ice/ice_xsk.c
···270270 if (!pool)271271 return -EINVAL;272272273273+ clear_bit(qid, vsi->af_xdp_zc_qps);273274 xsk_pool_dma_unmap(pool, ICE_RX_DMA_ATTR);274275275276 return 0;···300299 ICE_RX_DMA_ATTR);301300 if (err)302301 return err;302302+303303+ set_bit(qid, vsi->af_xdp_zc_qps);303304304305 return 0;305306}···476473477474 if (likely(act == XDP_REDIRECT)) {478475 err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);479479- result = !err ? ICE_XDP_REDIR : ICE_XDP_CONSUMED;476476+ if (err)477477+ goto out_failure;480478 rcu_read_unlock();481481- return result;479479+ return ICE_XDP_REDIR;482480 }483481484482 switch (act) {···488484 case XDP_TX:489485 xdp_ring = rx_ring->vsi->xdp_rings[rx_ring->q_index];490486 result = ice_xmit_xdp_buff(xdp, xdp_ring);487487+ if (result == ICE_XDP_CONSUMED)488488+ goto out_failure;491489 break;492490 default:493491 bpf_warn_invalid_xdp_action(act);494492 fallthrough;495493 case XDP_ABORTED:494494+out_failure:496495 trace_xdp_exception(rx_ring->netdev, xdp_prog, act);497496 fallthrough;498497 case XDP_DROP:
···82808280static struct sk_buff *igb_construct_skb(struct igb_ring *rx_ring,82818281 struct igb_rx_buffer *rx_buffer,82828282 struct xdp_buff *xdp,82838283- union e1000_adv_rx_desc *rx_desc)82838283+ ktime_t timestamp)82848284{82858285#if (PAGE_SIZE < 8192)82868286 unsigned int truesize = igb_rx_pg_size(rx_ring) / 2;···83008300 if (unlikely(!skb))83018301 return NULL;8302830283038303- if (unlikely(igb_test_staterr(rx_desc, E1000_RXDADV_STAT_TSIP))) {83048304- if (!igb_ptp_rx_pktstamp(rx_ring->q_vector, xdp->data, skb)) {83058305- xdp->data += IGB_TS_HDR_LEN;83068306- size -= IGB_TS_HDR_LEN;83078307- }83088308- }83038303+ if (timestamp)83048304+ skb_hwtstamps(skb)->hwtstamp = timestamp;8309830583108306 /* Determine available headroom for copy */83118307 headlen = size;···83328336static struct sk_buff *igb_build_skb(struct igb_ring *rx_ring,83338337 struct igb_rx_buffer *rx_buffer,83348338 struct xdp_buff *xdp,83358335- union e1000_adv_rx_desc *rx_desc)83398339+ ktime_t timestamp)83368340{83378341#if (PAGE_SIZE < 8192)83388342 unsigned int truesize = igb_rx_pg_size(rx_ring) / 2;···83598363 if (metasize)83608364 skb_metadata_set(skb, metasize);8361836583628362- /* pull timestamp out of packet data */83638363- if (igb_test_staterr(rx_desc, E1000_RXDADV_STAT_TSIP)) {83648364- if (!igb_ptp_rx_pktstamp(rx_ring->q_vector, skb->data, skb))83658365- __skb_pull(skb, IGB_TS_HDR_LEN);83668366- }83668366+ if (timestamp)83678367+ skb_hwtstamps(skb)->hwtstamp = timestamp;8367836883688369 /* update buffer offset */83698370#if (PAGE_SIZE < 8192)···83948401 break;83958402 case XDP_TX:83968403 result = igb_xdp_xmit_back(adapter, xdp);84048404+ if (result == IGB_XDP_CONSUMED)84058405+ goto out_failure;83978406 break;83988407 case XDP_REDIRECT:83998408 err = xdp_do_redirect(adapter->netdev, xdp, xdp_prog);84008400- if (!err)84018401- result = IGB_XDP_REDIR;84028402- else84038403- result = IGB_XDP_CONSUMED;84098409+ if (err)84108410+ goto out_failure;84118411+ result = 
IGB_XDP_REDIR;84048412 break;84058413 default:84068414 bpf_warn_invalid_xdp_action(act);84078415 fallthrough;84088416 case XDP_ABORTED:84178417+out_failure:84098418 trace_xdp_exception(rx_ring->netdev, xdp_prog, act);84108419 fallthrough;84118420 case XDP_DROP:···86778682 while (likely(total_packets < budget)) {86788683 union e1000_adv_rx_desc *rx_desc;86798684 struct igb_rx_buffer *rx_buffer;86858685+ ktime_t timestamp = 0;86868686+ int pkt_offset = 0;86808687 unsigned int size;86888688+ void *pktbuf;8681868986828690 /* return some buffers to hardware, one at a time is too slow */86838691 if (cleaned_count >= IGB_RX_BUFFER_WRITE) {···87008702 dma_rmb();8701870387028704 rx_buffer = igb_get_rx_buffer(rx_ring, size, &rx_buf_pgcnt);87058705+ pktbuf = page_address(rx_buffer->page) + rx_buffer->page_offset;87068706+87078707+ /* pull rx packet timestamp if available and valid */87088708+ if (igb_test_staterr(rx_desc, E1000_RXDADV_STAT_TSIP)) {87098709+ int ts_hdr_len;87108710+87118711+ ts_hdr_len = igb_ptp_rx_pktstamp(rx_ring->q_vector,87128712+ pktbuf, ×tamp);87138713+87148714+ pkt_offset += ts_hdr_len;87158715+ size -= ts_hdr_len;87168716+ }8703871787048718 /* retrieve a buffer from the ring */87058719 if (!skb) {87068706- unsigned int offset = igb_rx_offset(rx_ring);87078707- unsigned char *hard_start;87208720+ unsigned char *hard_start = pktbuf - igb_rx_offset(rx_ring);87218721+ unsigned int offset = pkt_offset + igb_rx_offset(rx_ring);8708872287098709- hard_start = page_address(rx_buffer->page) +87108710- rx_buffer->page_offset - offset;87118723 xdp_prepare_buff(&xdp, hard_start, offset, size, true);87128724#if (PAGE_SIZE > 4096)87138725 /* At larger PAGE_SIZE, frame_sz depend on len size */···87408732 } else if (skb)87418733 igb_add_rx_frag(rx_ring, rx_buffer, skb, size);87428734 else if (ring_uses_build_skb(rx_ring))87438743- skb = igb_build_skb(rx_ring, rx_buffer, &xdp, rx_desc);87358735+ skb = igb_build_skb(rx_ring, rx_buffer, &xdp,87368736+ timestamp);87448737 
else87458738 skb = igb_construct_skb(rx_ring, rx_buffer,87468746- &xdp, rx_desc);87398739+ &xdp, timestamp);8747874087488741 /* exit if we failed to retrieve a buffer */87498742 if (!skb) {
+10-13
drivers/net/ethernet/intel/igb/igb_ptp.c
···856856 dev_kfree_skb_any(skb);857857}858858859859-#define IGB_RET_PTP_DISABLED 1860860-#define IGB_RET_PTP_INVALID 2861861-862859/**863860 * igb_ptp_rx_pktstamp - retrieve Rx per packet timestamp864861 * @q_vector: Pointer to interrupt specific structure865862 * @va: Pointer to address containing Rx buffer866866- * @skb: Buffer containing timestamp and packet863863+ * @timestamp: Pointer where timestamp will be stored867864 *868865 * This function is meant to retrieve a timestamp from the first buffer of an869866 * incoming frame. The value is stored in little endian format starting on870867 * byte 8871868 *872872- * Returns: 0 if success, nonzero if failure869869+ * Returns: The timestamp header length or 0 if not available873870 **/874871int igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, void *va,875875- struct sk_buff *skb)872872+ ktime_t *timestamp)876873{877874 struct igb_adapter *adapter = q_vector->adapter;875875+ struct skb_shared_hwtstamps ts;878876 __le64 *regval = (__le64 *)va;879877 int adjust = 0;880878881879 if (!(adapter->ptp_flags & IGB_PTP_ENABLED))882882- return IGB_RET_PTP_DISABLED;880880+ return 0;883881884882 /* The timestamp is recorded in little endian format.885883 * DWORD: 0 1 2 3···886888887889 /* check reserved dwords are zero, be/le doesn't matter for zero */888890 if (regval[0])889889- return IGB_RET_PTP_INVALID;891891+ return 0;890892891891- igb_ptp_systim_to_hwtstamp(adapter, skb_hwtstamps(skb),892892- le64_to_cpu(regval[1]));893893+ igb_ptp_systim_to_hwtstamp(adapter, &ts, le64_to_cpu(regval[1]));893894894895 /* adjust timestamp for the RX latency based on link speed */895896 if (adapter->hw.mac.type == e1000_i210) {···904907 break;905908 }906909 }907907- skb_hwtstamps(skb)->hwtstamp =908908- ktime_sub_ns(skb_hwtstamps(skb)->hwtstamp, adjust);909910910910- return 0;911911+ *timestamp = ktime_sub_ns(ts.hwtstamp, adjust);912912+913913+ return IGB_TS_HDR_LEN;911914}912915913916/**
+5-6
drivers/net/ethernet/intel/igc/igc_main.c
···20472047 break;20482048 case XDP_TX:20492049 if (igc_xdp_xmit_back(adapter, xdp) < 0)20502050- res = IGC_XDP_CONSUMED;20512051- else20522052- res = IGC_XDP_TX;20502050+ goto out_failure;20512051+ res = IGC_XDP_TX;20532052 break;20542053 case XDP_REDIRECT:20552054 if (xdp_do_redirect(adapter->netdev, xdp, prog) < 0)20562056- res = IGC_XDP_CONSUMED;20572057- else20582058- res = IGC_XDP_REDIRECT;20552055+ goto out_failure;20562056+ res = IGC_XDP_REDIRECT;20592057 break;20602058 default:20612059 bpf_warn_invalid_xdp_action(act);20622060 fallthrough;20632061 case XDP_ABORTED:20622062+out_failure:20642063 trace_xdp_exception(adapter->netdev, prog, act);20652064 fallthrough;20662065 case XDP_DROP:
+8-8
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
···22132213 break;22142214 case XDP_TX:22152215 xdpf = xdp_convert_buff_to_frame(xdp);22162216- if (unlikely(!xdpf)) {22172217- result = IXGBE_XDP_CONSUMED;22182218- break;22192219- }22162216+ if (unlikely(!xdpf))22172217+ goto out_failure;22202218 result = ixgbe_xmit_xdp_ring(adapter, xdpf);22192219+ if (result == IXGBE_XDP_CONSUMED)22202220+ goto out_failure;22212221 break;22222222 case XDP_REDIRECT:22232223 err = xdp_do_redirect(adapter->netdev, xdp, xdp_prog);22242224- if (!err)22252225- result = IXGBE_XDP_REDIR;22262226- else22272227- result = IXGBE_XDP_CONSUMED;22242224+ if (err)22252225+ goto out_failure;22262226+ result = IXGBE_XDP_REDIR;22282227 break;22292228 default:22302229 bpf_warn_invalid_xdp_action(act);22312230 fallthrough;22322231 case XDP_ABORTED:22322232+out_failure:22332233 trace_xdp_exception(rx_ring->netdev, xdp_prog, act);22342234 fallthrough; /* handle aborts by dropping packet */22352235 case XDP_DROP:
+8-6
drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
···106106107107 if (likely(act == XDP_REDIRECT)) {108108 err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);109109- result = !err ? IXGBE_XDP_REDIR : IXGBE_XDP_CONSUMED;109109+ if (err)110110+ goto out_failure;110111 rcu_read_unlock();111111- return result;112112+ return IXGBE_XDP_REDIR;112113 }113114114115 switch (act) {···117116 break;118117 case XDP_TX:119118 xdpf = xdp_convert_buff_to_frame(xdp);120120- if (unlikely(!xdpf)) {121121- result = IXGBE_XDP_CONSUMED;122122- break;123123- }119119+ if (unlikely(!xdpf))120120+ goto out_failure;124121 result = ixgbe_xmit_xdp_ring(adapter, xdpf);122122+ if (result == IXGBE_XDP_CONSUMED)123123+ goto out_failure;125124 break;126125 default:127126 bpf_warn_invalid_xdp_action(act);128127 fallthrough;129128 case XDP_ABORTED:129129+out_failure:130130 trace_xdp_exception(rx_ring->netdev, xdp_prog, act);131131 fallthrough; /* handle aborts by dropping packet */132132 case XDP_DROP:
+3
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
···10671067 case XDP_TX:10681068 xdp_ring = adapter->xdp_ring[rx_ring->queue_index];10691069 result = ixgbevf_xmit_xdp_ring(xdp_ring, xdp);10701070+ if (result == IXGBEVF_XDP_CONSUMED)10711071+ goto out_failure;10701072 break;10711073 default:10721074 bpf_warn_invalid_xdp_action(act);10731075 fallthrough;10741076 case XDP_ABORTED:10771077+out_failure:10751078 trace_xdp_exception(rx_ring->netdev, xdp_prog, act);10761079 fallthrough; /* handle aborts by dropping packet */10771080 case XDP_DROP:
···401401 /* If headroom is not 0, there is an offset between the beginning of the402402 * data and the allocated space, otherwise the data and the allocated403403 * space are aligned.404404+ *405405+ * Buffers with headroom use PAGE_SIZE as alloc size, see406406+ * add_recvbuf_mergeable() + get_mergeable_buf_len()404407 */405405- if (headroom) {406406- /* Buffers with headroom use PAGE_SIZE as alloc size,407407- * see add_recvbuf_mergeable() + get_mergeable_buf_len()408408- */409409- truesize = PAGE_SIZE;410410- tailroom = truesize - len - offset;411411- buf = page_address(page);412412- } else {413413- tailroom = truesize - len;414414- buf = p;415415- }408408+ truesize = headroom ? PAGE_SIZE : truesize;409409+ tailroom = truesize - len - headroom - (hdr_padded_len - hdr_len);410410+ buf = p - headroom;416411417412 len -= hdr_len;418413 offset += hdr_padded_len;···953958 put_page(page);954959 head_skb = page_to_skb(vi, rq, xdp_page, offset,955960 len, PAGE_SIZE, false,956956- metasize, headroom);961961+ metasize,962962+ VIRTIO_XDP_HEADROOM);957963 return head_skb;958964 }959965 break;
···1515struct allowedips_node {1616 struct wg_peer __rcu *peer;1717 struct allowedips_node __rcu *bit[2];1818- /* While it may seem scandalous that we waste space for v4,1919- * we're alloc'ing to the nearest power of 2 anyway, so this2020- * doesn't actually make a difference.2121- */2222- u8 bits[16] __aligned(__alignof(u64));2318 u8 cidr, bit_at_a, bit_at_b, bitlen;1919+ u8 bits[16] __aligned(__alignof(u64));24202525- /* Keep rarely used list at bottom to be beyond cache line. */2121+ /* Keep rarely used members at bottom to be beyond cache line. */2222+ unsigned long parent_bit_packed;2623 union {2724 struct list_head peer_list;2825 struct rcu_head rcu;···3033 struct allowedips_node __rcu *root4;3134 struct allowedips_node __rcu *root6;3235 u64 seq;3333-};3636+} __aligned(4); /* We pack the lower 2 bits of &root, but m68k only gives 16-bit alignment. */34373538void wg_allowedips_init(struct allowedips *table);3639void wg_allowedips_free(struct allowedips *table, struct mutex *mutex);···5255#ifdef DEBUG5356bool wg_allowedips_selftest(void);5457#endif5858+5959+int wg_allowedips_slab_init(void);6060+void wg_allowedips_slab_uninit(void);55615662#endif /* _WG_ALLOWEDIPS_H */
+16-1
drivers/net/wireguard/main.c
···2121{2222 int ret;23232424+ ret = wg_allowedips_slab_init();2525+ if (ret < 0)2626+ goto err_allowedips;2727+2428#ifdef DEBUG2929+ ret = -ENOTRECOVERABLE;2530 if (!wg_allowedips_selftest() || !wg_packet_counter_selftest() ||2631 !wg_ratelimiter_selftest())2727- return -ENOTRECOVERABLE;3232+ goto err_peer;2833#endif2934 wg_noise_init();3535+3636+ ret = wg_peer_init();3737+ if (ret < 0)3838+ goto err_peer;30393140 ret = wg_device_init();3241 if (ret < 0)···5344err_netlink:5445 wg_device_uninit();5546err_device:4747+ wg_peer_uninit();4848+err_peer:4949+ wg_allowedips_slab_uninit();5050+err_allowedips:5651 return ret;5752}5853···6451{6552 wg_genetlink_uninit();6653 wg_device_uninit();5454+ wg_peer_uninit();5555+ wg_allowedips_slab_uninit();6756}68576958module_init(mod_init);
+20-7
drivers/net/wireguard/peer.c
···1515#include <linux/rcupdate.h>1616#include <linux/list.h>17171818+static struct kmem_cache *peer_cache;1819static atomic64_t peer_counter = ATOMIC64_INIT(0);19202021struct wg_peer *wg_peer_create(struct wg_device *wg,···3029 if (wg->num_peers >= MAX_PEERS_PER_DEVICE)3130 return ERR_PTR(ret);32313333- peer = kzalloc(sizeof(*peer), GFP_KERNEL);3232+ peer = kmem_cache_zalloc(peer_cache, GFP_KERNEL);3433 if (unlikely(!peer))3534 return ERR_PTR(ret);3636- if (dst_cache_init(&peer->endpoint_cache, GFP_KERNEL))3535+ if (unlikely(dst_cache_init(&peer->endpoint_cache, GFP_KERNEL)))3736 goto err;38373938 peer->device = wg;···6564 return peer;66656766err:6868- kfree(peer);6767+ kmem_cache_free(peer_cache, peer);6968 return ERR_PTR(ret);7069}7170···8988 /* Mark as dead, so that we don't allow jumping contexts after. */9089 WRITE_ONCE(peer->is_dead, true);91909292- /* The caller must now synchronize_rcu() for this to take effect. */9191+ /* The caller must now synchronize_net() for this to take effect. 
*/9392}94939594static void peer_remove_after_dead(struct wg_peer *peer)···161160 lockdep_assert_held(&peer->device->device_update_lock);162161163162 peer_make_dead(peer);164164- synchronize_rcu();163163+ synchronize_net();165164 peer_remove_after_dead(peer);166165}167166···179178 peer_make_dead(peer);180179 list_add_tail(&peer->peer_list, &dead_peers);181180 }182182- synchronize_rcu();181181+ synchronize_net();183182 list_for_each_entry_safe(peer, temp, &dead_peers, peer_list)184183 peer_remove_after_dead(peer);185184}···194193 /* The final zeroing takes care of clearing any remaining handshake key195194 * material and other potentially sensitive information.196195 */197197- kfree_sensitive(peer);196196+ memzero_explicit(peer, sizeof(*peer));197197+ kmem_cache_free(peer_cache, peer);198198}199199200200static void kref_release(struct kref *refcount)···226224 if (unlikely(!peer))227225 return;228226 kref_put(&peer->refcount, kref_release);227227+}228228+229229+int __init wg_peer_init(void)230230+{231231+ peer_cache = KMEM_CACHE(wg_peer, 0);232232+ return peer_cache ? 0 : -ENOMEM;233233+}234234+235235+void wg_peer_uninit(void)236236+{237237+ kmem_cache_destroy(peer_cache);229238}
···514514static void mt76_rx_release_amsdu(struct mt76_phy *phy, enum mt76_rxq_id q)515515{516516 struct sk_buff *skb = phy->rx_amsdu[q].head;517517+ struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;517518 struct mt76_dev *dev = phy->dev;518519519520 phy->rx_amsdu[q].head = NULL;520521 phy->rx_amsdu[q].tail = NULL;522522+523523+ /*524524+ * Validate if the amsdu has a proper first subframe.525525+ * A single MSDU can be parsed as A-MSDU when the unauthenticated A-MSDU526526+ * flag of the QoS header gets flipped. In such cases, the first527527+ * subframe has a LLC/SNAP header in the location of the destination528528+ * address.529529+ */530530+ if (skb_shinfo(skb)->frag_list) {531531+ int offset = 0;532532+533533+ if (!(status->flag & RX_FLAG_8023)) {534534+ offset = ieee80211_get_hdrlen_from_skb(skb);535535+536536+ if ((status->flag &537537+ (RX_FLAG_DECRYPTED | RX_FLAG_IV_STRIPPED)) ==538538+ RX_FLAG_DECRYPTED)539539+ offset += 8;540540+ }541541+542542+ if (ether_addr_equal(skb->data + offset, rfc1042_header)) {543543+ dev_kfree_skb(skb);544544+ return;545545+ }546546+ }521547 __skb_queue_tail(&dev->rx_skb[q], skb);522548}523549
···55555656 dev->mt76.mcu_ops = &mt7663u_mcu_ops,57575858- /* usb does not support runtime-pm */5959- clear_bit(MT76_STATE_PM, &dev->mphy.state);6058 mt76_set(dev, MT_UDMA_TX_QSEL, MT_FW_DL_EN);6161-6259 if (test_and_clear_bit(MT76_STATE_POWER_OFF, &dev->mphy.state)) {6360 mt7615_mcu_restart(&dev->mt76);6461 if (!mt76_poll_msec(dev, MT_CONN_ON_MISC,
···684684{685685 if (queue->task) {686686 kthread_stop(queue->task);687687+ put_task_struct(queue->task);687688 queue->task = NULL;688689 }689690···746745 if (IS_ERR(task))747746 goto kthread_err;748747 queue->task = task;748748+ /*749749+ * Take a reference to the task in order to prevent it from being freed750750+ * if the thread function returns before kthread_stop is called.751751+ */752752+ get_task_struct(task);749753750754 task = kthread_run(xenvif_dealloc_kthread, queue,751755 "%s-dealloc", queue->name);
+3-2
drivers/nvme/host/rdma.c
···13201320 int count)13211321{13221322 struct nvme_sgl_desc *sg = &c->common.dptr.sgl;13231323- struct scatterlist *sgl = req->data_sgl.sg_table.sgl;13241323 struct ib_sge *sge = &req->sge[1];13241324+ struct scatterlist *sgl;13251325 u32 len = 0;13261326 int i;1327132713281328- for (i = 0; i < count; i++, sgl++, sge++) {13281328+ for_each_sg(req->data_sgl.sg_table.sgl, sgl, count, i) {13291329 sge->addr = sg_dma_address(sgl);13301330 sge->length = sg_dma_len(sgl);13311331 sge->lkey = queue->device->pd->local_dma_lkey;13321332 len += sge->length;13331333+ sge++;13331334 }1334133513351336 sg->addr = cpu_to_le64(queue->ctrl->ctrl.icdoff);
···263263264264static void nvme_loop_destroy_admin_queue(struct nvme_loop_ctrl *ctrl)265265{266266- clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags);266266+ if (!test_and_clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags))267267+ return;267268 nvmet_sq_destroy(&ctrl->queues[0].nvme_sq);268269 blk_cleanup_queue(ctrl->ctrl.admin_q);269270 blk_cleanup_queue(ctrl->ctrl.fabrics_q);···300299 clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[i].flags);301300 nvmet_sq_destroy(&ctrl->queues[i].nvme_sq);302301 }302302+ ctrl->ctrl.queue_count = 1;303303}304304305305static int nvme_loop_init_io_queues(struct nvme_loop_ctrl *ctrl)···407405 return 0;408406409407out_cleanup_queue:408408+ clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags);410409 blk_cleanup_queue(ctrl->ctrl.admin_q);411410out_cleanup_fabrics_q:412411 blk_cleanup_queue(ctrl->ctrl.fabrics_q);···465462 nvme_loop_shutdown_ctrl(ctrl);466463467464 if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {468468- /* state change failure should never happen */469469- WARN_ON_ONCE(1);465465+ if (ctrl->ctrl.state != NVME_CTRL_DELETING &&466466+ ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO)467467+ /* state change failure for non-deleted ctrl? */468468+ WARN_ON_ONCE(1);470469 return;471470 }472471
+7
drivers/pci/of.c
···103103#endif104104}105105106106+bool pci_host_of_has_msi_map(struct device *dev)107107+{108108+ if (dev && dev->of_node)109109+ return of_get_property(dev->of_node, "msi-map", NULL);110110+ return false;111111+}112112+106113static inline int __of_pci_pci_compare(struct device_node *node,107114 unsigned int data)108115{
+2-1
drivers/pci/probe.c
···925925 device_enable_async_suspend(bus->bridge);926926 pci_set_bus_of_node(bus);927927 pci_set_bus_msi_domain(bus);928928- if (bridge->msi_domain && !dev_get_msi_domain(&bus->dev))928928+ if (bridge->msi_domain && !dev_get_msi_domain(&bus->dev) &&929929+ !pci_host_of_has_msi_map(parent))929930 bus->bus_flags |= PCI_BUS_FLAGS_NO_MSI;930931931932 if (!parent)
+2-2
drivers/phy/broadcom/phy-brcm-usb-init.h
···7878 * Other architectures (e.g., ARM) either do not support big endian, or7979 * else leave I/O in little endian mode.8080 */8181- if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(__BIG_ENDIAN))8181+ if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))8282 return __raw_readl(addr);8383 else8484 return readl_relaxed(addr);···8787static inline void brcm_usb_writel(u32 val, void __iomem *addr)8888{8989 /* See brcmnand_readl() comments */9090- if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(__BIG_ENDIAN))9090+ if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))9191 __raw_writel(val, addr);9292 else9393 writel_relaxed(val, addr);
+1
drivers/phy/cadence/phy-cadence-sierra.c
···940940 sp->nsubnodes = node;941941942942 if (sp->num_lanes > SIERRA_MAX_LANES) {943943+ ret = -EINVAL;943944 dev_err(dev, "Invalid lane configuration\n");944945 goto put_child2;945946 }
···1212121212131213 if (wiz->typec_dir_delay < WIZ_TYPEC_DIR_DEBOUNCE_MIN ||12141214 wiz->typec_dir_delay > WIZ_TYPEC_DIR_DEBOUNCE_MAX) {12151215+ ret = -EINVAL;12151216 dev_err(dev, "Invalid typec-dir-debounce property\n");12161217 goto err_addr_to_resource;12171218 }
+2-2
drivers/pinctrl/aspeed/pinctrl-aspeed-g5.c
···27022702}2703270327042704/**27052705- * Configure a pin's signal by applying an expression's descriptor state for27062706- * all descriptors in the expression.27052705+ * aspeed_g5_sig_expr_set() - Configure a pin's signal by applying an27062706+ * expression's descriptor state for all descriptors in the expression.27072707 *27082708 * @ctx: The pinmux context27092709 * @expr: The expression associated with the function whose signal is to be
+2-2
drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
···26112611};2612261226132613/**26142614- * Configure a pin's signal by applying an expression's descriptor state for26152615- * all descriptors in the expression.26142614+ * aspeed_g6_sig_expr_set() - Configure a pin's signal by applying an26152615+ * expression's descriptor state for all descriptors in the expression.26162616 *26172617 * @ctx: The pinmux context26182618 * @expr: The expression associated with the function whose signal is to be
+2-1
drivers/pinctrl/aspeed/pinctrl-aspeed.c
···108108}109109110110/**111111- * Disable a signal on a pin by disabling all provided signal expressions.111111+ * aspeed_disable_sig() - Disable a signal on a pin by disabling all provided112112+ * signal expressions.112113 *113114 * @ctx: The pinmux context114115 * @exprs: The list of signal expressions (from a priority level on a pin)
+2-1
drivers/pinctrl/aspeed/pinmux-aspeed.c
···2121}22222323/**2424- * Query the enabled or disabled state of a signal descriptor2424+ * aspeed_sig_desc_eval() - Query the enabled or disabled state of a signal2525+ * descriptor.2526 *2627 * @desc: The signal descriptor of interest2728 * @enabled: True to query the enabled state, false to query disabled state
+1-1
drivers/pinctrl/qcom/Kconfig
···223223config PINCTRL_SC8180X224224 tristate "Qualcomm Technologies Inc SC8180x pin controller driver"225225 depends on GPIOLIB && (OF || ACPI)226226- select PINCTRL_MSM226226+ depends on PINCTRL_MSM227227 help228228 This is the pinctrl, pinmux, pinconf and gpiolib driver for the229229 Qualcomm Technologies Inc TLMM block found on the Qualcomm
···14251425 * and we have control then make sure it is enabled.14261426 */14271427 if (rdev->constraints->always_on || rdev->constraints->boot_on) {14281428+ /* If we want to enable this regulator, make sure that we know14291429+ * the supplying regulator.14301430+ */14311431+ if (rdev->supply_name && !rdev->supply)14321432+ return -EPROBE_DEFER;14331433+14281434 if (rdev->supply) {14291435 ret = regulator_enable(rdev->supply);14301436 if (ret < 0) {
+2-1
drivers/regulator/cros-ec-regulator.c
···225225226226 drvdata->dev = devm_regulator_register(dev, &drvdata->desc, &cfg);227227 if (IS_ERR(drvdata->dev)) {228228+ ret = PTR_ERR(drvdata->dev);228229 dev_err(&pdev->dev, "Failed to register regulator: %d\n", ret);229229- return PTR_ERR(drvdata->dev);230230+ return ret;230231 }231232232233 platform_set_drvdata(pdev, drvdata);
+7-3
drivers/regulator/da9121-regulator.c
···280280 case DA9121_BUCK_MODE_FORCE_PFM:281281 return REGULATOR_MODE_STANDBY;282282 default:283283- return -EINVAL;283283+ return REGULATOR_MODE_INVALID;284284 }285285}286286···317317{318318 struct da9121 *chip = rdev_get_drvdata(rdev);319319 int id = rdev_get_id(rdev);320320- unsigned int val;320320+ unsigned int val, mode;321321 int ret = 0;322322323323 ret = regmap_read(chip->regmap, da9121_mode_field[id].reg, &val);···326326 return -EINVAL;327327 }328328329329- return da9121_map_mode(val & da9121_mode_field[id].msk);329329+ mode = da9121_map_mode(val & da9121_mode_field[id].msk);330330+ if (mode == REGULATOR_MODE_INVALID)331331+ return -EINVAL;332332+333333+ return mode;330334}331335332336static const struct regulator_ops da9121_buck_ops = {
+1-2
drivers/regulator/fan53555.c
···55555656#define FAN53555_NVOLTAGES 64 /* Numbers of voltages */5757#define FAN53526_NVOLTAGES 1285858-#define TCS4525_NVOLTAGES 127 /* Numbers of voltages */59586059#define TCS_VSEL_NSEL_MASK 0x7f6160#define TCS_VSEL0_MODE (1 << 7)···375376 /* Init voltage range and step */376377 di->vsel_min = 600000;377378 di->vsel_step = 6250;378378- di->vsel_count = TCS4525_NVOLTAGES;379379+ di->vsel_count = FAN53526_NVOLTAGES;379380380381 return 0;381382}
···8888{8989 struct fixed_voltage_data *priv = rdev_get_drvdata(rdev);9090 struct device *dev = rdev->dev.parent;9191+ int ret;9292+9393+ ret = dev_pm_genpd_set_performance_state(dev, 0);9494+ if (ret)9595+ return ret;91969297 priv->enable_counter--;93989494- return dev_pm_genpd_set_performance_state(dev, 0);9999+ return 0;95100}9610197102static int reg_is_enabled(struct regulator_dev *rdev)
+1-1
drivers/regulator/helpers.c
···948948 int ret;949949 unsigned int sel;950950951951- if (!rdev->desc->n_ramp_values)951951+ if (WARN_ON(!rdev->desc->n_ramp_values || !rdev->desc->ramp_delay_table))952952 return -EINVAL;953953954954 ret = find_closest_bigger(ramp_delay, rdev->desc->ramp_delay_table,
···814814 config.dev = dev;815815 config.driver_data = pmic;816816817817+ /*818818+ * Set of_node_reuse flag to prevent driver core from attempting to819819+ * claim any pinmux resources already claimed by the parent device.820820+ * Otherwise PMIC driver will fail to re-probe.821821+ */822822+ device_set_of_node_from_dev(&pdev->dev, pdev->dev.parent);823823+817824 for (id = 0; id < MAX77620_NUM_REGS; id++) {818825 struct regulator_dev *rdev;819826 struct regulator_desc *rdesc;···846839 return ret;847840848841 rdev = devm_regulator_register(dev, rdesc, &config);849849- if (IS_ERR(rdev)) {850850- ret = PTR_ERR(rdev);851851- dev_err(dev, "Regulator registration %s failed: %d\n",852852- rdesc->name, ret);853853- return ret;854854- }842842+ if (IS_ERR(rdev))843843+ return dev_err_probe(dev, PTR_ERR(rdev),844844+ "Regulator registration %s failed\n",845845+ rdesc->name);855846 }856847857848 return 0;
+1-1
drivers/regulator/mt6315-regulator.c
···5959 REGULATOR_LINEAR_RANGE(0, 0, 0xbf, 6250),6060};61616262-static unsigned int mt6315_map_mode(u32 mode)6262+static unsigned int mt6315_map_mode(unsigned int mode)6363{6464 switch (mode) {6565 case MT6315_BUCK_MODE_AUTO:
+2-2
drivers/regulator/rt4801-regulator.c
···6666 struct gpio_descs *gpios = priv->enable_gpios;6767 int id = rdev_get_id(rdev), ret;68686969- if (gpios->ndescs <= id) {6969+ if (!gpios || gpios->ndescs <= id) {7070 dev_warn(&rdev->dev, "no dedicated gpio can control\n");7171 goto bypass_gpio;7272 }···8888 struct gpio_descs *gpios = priv->enable_gpios;8989 int id = rdev_get_id(rdev);90909191- if (gpios->ndescs <= id) {9191+ if (!gpios || gpios->ndescs <= id) {9292 dev_warn(&rdev->dev, "no dedicated gpio can control\n");9393 goto bypass_gpio;9494 }
+42-2
drivers/regulator/rtmv20-regulator.c
···2727#define RTMV20_REG_LDIRQ 0x302828#define RTMV20_REG_LDSTAT 0x402929#define RTMV20_REG_LDMASK 0x503030+#define RTMV20_MAX_REGS (RTMV20_REG_LDMASK + 1)30313132#define RTMV20_VID_MASK GENMASK(7, 4)3233#define RICHTEK_VID 0x80···104103 return 0;105104}106105106106+static int rtmv20_lsw_set_current_limit(struct regulator_dev *rdev, int min_uA,107107+ int max_uA)108108+{109109+ int sel;110110+111111+ if (min_uA > RTMV20_LSW_MAXUA || max_uA < RTMV20_LSW_MINUA)112112+ return -EINVAL;113113+114114+ if (max_uA > RTMV20_LSW_MAXUA)115115+ max_uA = RTMV20_LSW_MAXUA;116116+117117+ sel = (max_uA - RTMV20_LSW_MINUA) / RTMV20_LSW_STEPUA;118118+119119+ /* Ensure the selected setting is still in range */120120+ if ((sel * RTMV20_LSW_STEPUA + RTMV20_LSW_MINUA) < min_uA)121121+ return -EINVAL;122122+123123+ sel <<= ffs(rdev->desc->csel_mask) - 1;124124+125125+ return regmap_update_bits(rdev->regmap, rdev->desc->csel_reg,126126+ rdev->desc->csel_mask, sel);127127+}128128+129129+static int rtmv20_lsw_get_current_limit(struct regulator_dev *rdev)130130+{131131+ unsigned int val;132132+ int ret;133133+134134+ ret = regmap_read(rdev->regmap, rdev->desc->csel_reg, &val);135135+ if (ret)136136+ return ret;137137+138138+ val &= rdev->desc->csel_mask;139139+ val >>= ffs(rdev->desc->csel_mask) - 1;140140+141141+ return val * RTMV20_LSW_STEPUA + RTMV20_LSW_MINUA;142142+}143143+107144static const struct regulator_ops rtmv20_regulator_ops = {108108- .set_current_limit = regulator_set_current_limit_regmap,109109- .get_current_limit = regulator_get_current_limit_regmap,145145+ .set_current_limit = rtmv20_lsw_set_current_limit,146146+ .get_current_limit = rtmv20_lsw_get_current_limit,110147 .enable = rtmv20_lsw_enable,111148 .disable = rtmv20_lsw_disable,112149 .is_enabled = regulator_is_enabled_regmap,···314275 .val_bits = 8,315276 .cache_type = REGCACHE_RBTREE,316277 .max_register = RTMV20_REG_LDMASK,278278+ .num_reg_defaults_raw = RTMV20_MAX_REGS,317279318280 .writeable_reg = 
rtmv20_is_accessible_reg,319281 .readable_reg = rtmv20_is_accessible_reg,
···99#include <linux/types.h>10101111/*1212- * This file defines the OP-TEE message protocol used to communicate1212+ * This file defines the OP-TEE message protocol (ABI) used to communicate1313 * with an instance of OP-TEE running in secure world.1414 *1515 * This file is divided into two sections.···144144 * @tmem: parameter by temporary memory reference145145 * @rmem: parameter by registered memory reference146146 * @value: parameter by opaque value147147+ * @octets: parameter by octet string147148 *148149 * @attr & OPTEE_MSG_ATTR_TYPE_MASK indicates if tmem, rmem or value is used in149149- * the union. OPTEE_MSG_ATTR_TYPE_VALUE_* indicates value,150150+ * the union. OPTEE_MSG_ATTR_TYPE_VALUE_* indicates value or octets,150151 * OPTEE_MSG_ATTR_TYPE_TMEM_* indicates @tmem and151152 * OPTEE_MSG_ATTR_TYPE_RMEM_* indicates @rmem,152153 * OPTEE_MSG_ATTR_TYPE_NONE indicates that none of the members are used.···158157 struct optee_msg_param_tmem tmem;159158 struct optee_msg_param_rmem rmem;160159 struct optee_msg_param_value value;160160+ u8 octets[24];161161 } u;162162};163163
+11-4
drivers/thermal/intel/therm_throt.c
···621621 return atomic_read(&therm_throt_en);622622}623623624624+void __init therm_lvt_init(void)625625+{626626+ /*627627+ * This function is only called on boot CPU. Save the init thermal628628+ * LVT value on BSP and use that value to restore APs' thermal LVT629629+ * entry BIOS programmed later630630+ */631631+ if (intel_thermal_supported(&boot_cpu_data))632632+ lvtthmr_init = apic_read(APIC_LVTTHMR);633633+}634634+624635void intel_init_thermal(struct cpuinfo_x86 *c)625636{626637 unsigned int cpu = smp_processor_id();···640629641630 if (!intel_thermal_supported(c))642631 return;643643-644644- /* On the BSP? */645645- if (c == &boot_cpu_data)646646- lvtthmr_init = apic_read(APIC_LVTTHMR);647632648633 /*649634 * First check if its enabled already, in which case there might
···886886 }887887}888888889889-static void dwc3_debugfs_create_endpoint_dir(struct dwc3_ep *dep,890890- struct dentry *parent)889889+void dwc3_debugfs_create_endpoint_dir(struct dwc3_ep *dep)891890{892891 struct dentry *dir;893892894894- dir = debugfs_create_dir(dep->name, parent);893893+ dir = debugfs_create_dir(dep->name, dep->dwc->root);895894 dwc3_debugfs_create_endpoint_files(dep, dir);896896-}897897-898898-static void dwc3_debugfs_create_endpoint_dirs(struct dwc3 *dwc,899899- struct dentry *parent)900900-{901901- int i;902902-903903- for (i = 0; i < dwc->num_eps; i++) {904904- struct dwc3_ep *dep = dwc->eps[i];905905-906906- if (!dep)907907- continue;908908-909909- dwc3_debugfs_create_endpoint_dir(dep, parent);910910- }911895}912896913897void dwc3_debugfs_init(struct dwc3 *dwc)···924940 &dwc3_testmode_fops);925941 debugfs_create_file("link_state", 0644, root, dwc,926942 &dwc3_link_state_fops);927927- dwc3_debugfs_create_endpoint_dirs(dwc, root);928943 }929944}930945
+10-3
drivers/usb/dwc3/dwc3-meson-g12a.c
···651651 return PTR_ERR(priv->usb_glue_regmap);652652653653 /* Create a regmap for each USB2 PHY control register set */654654- for (i = 0; i < priv->usb2_ports; i++) {654654+ for (i = 0; i < priv->drvdata->num_phys; i++) {655655 struct regmap_config u2p_regmap_config = {656656 .reg_bits = 8,657657 .val_bits = 32,658658 .reg_stride = 4,659659 .max_register = U2P_R1,660660 };661661+662662+ if (!strstr(priv->drvdata->phy_names[i], "usb2"))663663+ continue;661664662665 u2p_regmap_config.name = devm_kasprintf(priv->dev, GFP_KERNEL,663666 "u2p-%d", i);···775772776773 ret = priv->drvdata->usb_init(priv);777774 if (ret)778778- goto err_disable_clks;775775+ goto err_disable_regulator;779776780777 /* Init PHYs */781778 for (i = 0 ; i < PHY_COUNT ; ++i) {782779 ret = phy_init(priv->phys[i]);783780 if (ret)784784- goto err_disable_clks;781781+ goto err_disable_regulator;785782 }786783787784 /* Set PHY Power */···818815err_phys_exit:819816 for (i = 0 ; i < PHY_COUNT ; ++i)820817 phy_exit(priv->phys[i]);818818+819819+err_disable_regulator:820820+ if (priv->vbus)821821+ regulator_disable(priv->vbus);821822822823err_disable_clks:823824 clk_bulk_disable_unprepare(priv->drvdata->num_clks,
···22612261 }2262226222632263 /*22642264- * Synchronize any pending event handling before executing the controller22652265- * halt routine.22642264+ * Synchronize and disable any further event handling while controller22652265+ * is being enabled/disabled.22662266 */22672267- if (!is_on) {22682268- dwc3_gadget_disable_irq(dwc);22692269- synchronize_irq(dwc->irq_gadget);22702270- }22672267+ disable_irq(dwc->irq_gadget);2271226822722269 spin_lock_irqsave(&dwc->lock, flags);22732270···2302230523032306 ret = dwc3_gadget_run_stop(dwc, is_on, false);23042307 spin_unlock_irqrestore(&dwc->lock, flags);23082308+ enable_irq(dwc->irq_gadget);23092309+23052310 pm_runtime_put(dwc->dev);2306231123072312 return ret;···27532754 INIT_LIST_HEAD(&dep->started_list);27542755 INIT_LIST_HEAD(&dep->cancelled_list);2755275627572757+ dwc3_debugfs_create_endpoint_dir(dep);27582758+27562759 return 0;27572760}27582761···27982797 list_del(&dep->endpoint.ep_list);27992798 }2800279928002800+ debugfs_remove_recursive(debugfs_lookup(dep->name, dwc->root));28012801 kfree(dep);28022802 }28032803}···40484046 dwc3_gadget_free_endpoints(dwc);40494047err4:40504048 usb_put_gadget(dwc->gadget);40494049+ dwc->gadget = NULL;40514050err3:40524051 dma_free_coherent(dwc->sysdev, DWC3_BOUNCE_SIZE, dwc->bounce,40534052 dwc->bounce_addr);···4068406540694066void dwc3_gadget_exit(struct dwc3 *dwc)40704067{40684068+ if (!dwc->gadget)40694069+ return;40704070+40714071 usb_del_gadget(dwc->gadget);40724072 dwc3_gadget_free_endpoints(dwc);40734073 usb_put_gadget(dwc->gadget);
+8
drivers/usb/gadget/config.c
···164164{165165 struct usb_gadget *g = f->config->cdev->gadget;166166167167+ /* super-speed-plus descriptor falls back to super-speed one,168168+ * if such a descriptor was provided, thus avoiding a NULL169169+ * pointer dereference if a 5gbps capable gadget is used with170170+ * a 10gbps capable config (device port + cable + host port)171171+ */172172+ if (!ssp)173173+ ssp = ss;174174+167175 if (fs) {168176 f->fs_descriptors = usb_copy_descriptors(fs);169177 if (!f->fs_descriptors)
+1-1
drivers/usb/gadget/function/f_ecm.c
···791791 fs_ecm_notify_desc.bEndpointAddress;792792793793 status = usb_assign_descriptors(f, ecm_fs_function, ecm_hs_function,794794- ecm_ss_function, NULL);794794+ ecm_ss_function, ecm_ss_function);795795 if (status)796796 goto fail;797797
+3-3
drivers/usb/gadget/function/f_eem.c
···302302 eem_ss_out_desc.bEndpointAddress = eem_fs_out_desc.bEndpointAddress;303303304304 status = usb_assign_descriptors(f, eem_fs_function, eem_hs_function,305305- eem_ss_function, NULL);305305+ eem_ss_function, eem_ss_function);306306 if (status)307307 goto fail;308308···495495 skb2 = skb_clone(skb, GFP_ATOMIC);496496 if (unlikely(!skb2)) {497497 DBG(cdev, "unable to unframe EEM packet\n");498498- continue;498498+ goto next;499499 }500500 skb_trim(skb2, len - ETH_FCS_LEN);501501···505505 GFP_ATOMIC);506506 if (unlikely(!skb3)) {507507 dev_kfree_skb_any(skb2);508508- continue;508508+ goto next;509509 }510510 dev_kfree_skb_any(skb2);511511 skb_queue_tail(list, skb3);
···431431 ss_iso_sink_desc.bEndpointAddress = fs_iso_sink_desc.bEndpointAddress;432432433433 ret = usb_assign_descriptors(f, fs_source_sink_descs,434434- hs_source_sink_descs, ss_source_sink_descs, NULL);434434+ hs_source_sink_descs, ss_source_sink_descs,435435+ ss_source_sink_descs);435436 if (ret)436437 return ret;437438
+1-1
drivers/usb/gadget/function/f_subset.c
···358358 fs_subset_out_desc.bEndpointAddress;359359360360 status = usb_assign_descriptors(f, fs_eth_function, hs_eth_function,361361- ss_eth_function, NULL);361361+ ss_eth_function, ss_eth_function);362362 if (status)363363 goto fail;364364
+2-1
drivers/usb/gadget/function/f_tcm.c
···20572057 uasp_fs_cmd_desc.bEndpointAddress = uasp_ss_cmd_desc.bEndpointAddress;2058205820592059 ret = usb_assign_descriptors(f, uasp_fs_function_desc,20602060- uasp_hs_function_desc, uasp_ss_function_desc, NULL);20602060+ uasp_hs_function_desc, uasp_ss_function_desc,20612061+ uasp_ss_function_desc);20612062 if (ret)20622063 goto ep_fail;20632064
+6-1
drivers/usb/host/xhci-pci.c
···5959#define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI 0x11386060#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI 0x461e61616262+#define PCI_DEVICE_ID_AMD_RENOIR_XHCI 0x16396263#define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b96364#define PCI_DEVICE_ID_AMD_PROMONTORYA_3 0x43ba6465#define PCI_DEVICE_ID_AMD_PROMONTORYA_2 0x43bb···182181 (pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_2) ||183182 (pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_1)))184183 xhci->quirks |= XHCI_U2_DISABLE_WAKE;184184+185185+ if (pdev->vendor == PCI_VENDOR_ID_AMD &&186186+ pdev->device == PCI_DEVICE_ID_AMD_RENOIR_XHCI)187187+ xhci->quirks |= XHCI_BROKEN_D3COLD;185188186189 if (pdev->vendor == PCI_VENDOR_ID_INTEL) {187190 xhci->quirks |= XHCI_LPM_SUPPORT;···544539 * Systems with the TI redriver that loses port status change events545540 * need to have the registers polled during D3, so avoid D3cold.546541 */547547- if (xhci->quirks & XHCI_COMP_MODE_QUIRK)542542+ if (xhci->quirks & (XHCI_COMP_MODE_QUIRK | XHCI_BROKEN_D3COLD))548543 pci_d3cold_disable(pdev);549544550545 if (xhci->quirks & XHCI_PME_STUCK_QUIRK)
+1
drivers/usb/host/xhci.h
···18921892#define XHCI_DISABLE_SPARSE BIT_ULL(38)18931893#define XHCI_SG_TRB_CACHE_SIZE_QUIRK BIT_ULL(39)18941894#define XHCI_NO_SOFT_RETRY BIT_ULL(40)18951895+#define XHCI_BROKEN_D3COLD BIT_ULL(41)1895189618961897 unsigned int num_active_eps;18971898 unsigned int limit_active_eps;
+2
drivers/usb/misc/brcmstb-usb-pinmap.c
···263263 return -EINVAL;264264265265 r = platform_get_resource(pdev, IORESOURCE_MEM, 0);266266+ if (!r)267267+ return -EINVAL;266268267269 pdata = devm_kzalloc(&pdev->dev,268270 sizeof(*pdata) +
+1-2
drivers/usb/musb/musb_core.c
···20092009 schedule_delayed_work(&musb->irq_work,20102010 msecs_to_jiffies(1000));20112011 musb->quirk_retries--;20122012- break;20132012 }20142014- fallthrough;20132013+ break;20152014 case MUSB_QUIRK_B_INVALID_VBUS_91:20162015 if (musb->quirk_retries && !musb->flush_irq_work) {20172016 musb_dbg(musb,
···416416417417 /* flush the port transmit buffer */418418 i = usb_control_msg(serial->dev,419419- usb_rcvctrlpipe(serial->dev, 0),419419+ usb_sndctrlpipe(serial->dev, 0),420420 QT2_FLUSH_DEVICE, 0x40, 1,421421 port_priv->device_port, NULL, 0, QT2_USB_TIMEOUT);422422···426426427427 /* flush the port receive buffer */428428 i = usb_control_msg(serial->dev,429429- usb_rcvctrlpipe(serial->dev, 0),429429+ usb_sndctrlpipe(serial->dev, 0),430430 QT2_FLUSH_DEVICE, 0x40, 0,431431 port_priv->device_port, NULL, 0, QT2_USB_TIMEOUT);432432···639639 int status;640640641641 /* power on unit */642642- status = usb_control_msg(serial->dev, usb_rcvctrlpipe(serial->dev, 0),642642+ status = usb_control_msg(serial->dev, usb_sndctrlpipe(serial->dev, 0),643643 0xc2, 0x40, 0x8000, 0, NULL, 0,644644 QT2_USB_TIMEOUT);645645 if (status < 0) {
+1-1
drivers/usb/typec/mux.c
···239239 dev = class_find_device(&typec_mux_class, NULL, fwnode,240240 mux_fwnode_match);241241242242- return dev ? to_typec_switch(dev) : ERR_PTR(-EPROBE_DEFER);242242+ return dev ? to_typec_mux(dev) : ERR_PTR(-EPROBE_DEFER);243243}244244245245/**
+11-4
drivers/usb/typec/mux/intel_pmc_mux.c
···582582 acpi_dev_free_resource_list(&resource_list);583583584584 if (!pmc->iom_base) {585585- put_device(&adev->dev);585585+ acpi_dev_put(adev);586586 return -ENOMEM;587587+ }588588+589589+ if (IS_ERR(pmc->iom_base)) {590590+ acpi_dev_put(adev);591591+ return PTR_ERR(pmc->iom_base);587592 }588593589594 pmc->iom_adev = adev;···641636 break;642637643638 ret = pmc_usb_register_port(pmc, i, fwnode);644644- if (ret)639639+ if (ret) {640640+ fwnode_handle_put(fwnode);645641 goto err_remove_ports;642642+ }646643 }647644648645 platform_set_drvdata(pdev, pmc);···658651 usb_role_switch_unregister(pmc->port[i].usb_sw);659652 }660653661661- put_device(&pmc->iom_adev->dev);654654+ acpi_dev_put(pmc->iom_adev);662655663656 return ret;664657}···674667 usb_role_switch_unregister(pmc->port[i].usb_sw);675668 }676669677677- put_device(&pmc->iom_adev->dev);670670+ acpi_dev_put(pmc->iom_adev);678671679672 return 0;680673}
+77-46
drivers/usb/typec/tcpm/tcpm.c
···401401 unsigned int nr_src_pdo;402402 u32 snk_pdo[PDO_MAX_OBJECTS];403403 unsigned int nr_snk_pdo;404404+ u32 snk_vdo_v1[VDO_MAX_OBJECTS];405405+ unsigned int nr_snk_vdo_v1;404406 u32 snk_vdo[VDO_MAX_OBJECTS];405407 unsigned int nr_snk_vdo;406408···15491547 if (PD_VDO_VID(p[0]) != USB_SID_PD)15501548 break;1551154915521552- if (PD_VDO_SVDM_VER(p[0]) < svdm_version)15501550+ if (PD_VDO_SVDM_VER(p[0]) < svdm_version) {15531551 typec_partner_set_svdm_version(port->partner,15541552 PD_VDO_SVDM_VER(p[0]));15531553+ svdm_version = PD_VDO_SVDM_VER(p[0]);15541554+ }1555155515561556- tcpm_ams_start(port, DISCOVER_IDENTITY);15571557- /* 6.4.4.3.1: Only respond as UFP (device) */15581558- if (port->data_role == TYPEC_DEVICE &&15561556+ port->ams = DISCOVER_IDENTITY;15571557+ /*15581558+ * PD2.0 Spec 6.10.3: respond with NAK as DFP (data host)15591559+ * PD3.1 Spec 6.4.4.2.5.1: respond with NAK if "invalid field" or15601560+ * "wrong configuation" or "Unrecognized"15611561+ */15621562+ if ((port->data_role == TYPEC_DEVICE || svdm_version >= SVDM_VER_2_0) &&15591563 port->nr_snk_vdo) {15601560- /*15611561- * Product Type DFP and Connector Type are not defined in SVDM15621562- * version 1.0 and shall be set to zero.15631563- */15641564- if (typec_get_negotiated_svdm_version(typec) < SVDM_VER_2_0)15651565- response[1] = port->snk_vdo[0] & ~IDH_DFP_MASK15661566- & ~IDH_CONN_MASK;15671567- else15681568- response[1] = port->snk_vdo[0];15691569- for (i = 1; i < port->nr_snk_vdo; i++)15701570- response[i + 1] = port->snk_vdo[i];15711571- rlen = port->nr_snk_vdo + 1;15641564+ if (svdm_version < SVDM_VER_2_0) {15651565+ for (i = 0; i < port->nr_snk_vdo_v1; i++)15661566+ response[i + 1] = port->snk_vdo_v1[i];15671567+ rlen = port->nr_snk_vdo_v1 + 1;15681568+15691569+ } else {15701570+ for (i = 0; i < port->nr_snk_vdo; i++)15711571+ response[i + 1] = port->snk_vdo[i];15721572+ rlen = port->nr_snk_vdo + 1;15731573+ }15721574 }15731575 break;15741576 case CMD_DISCOVER_SVID:15751575- 
tcpm_ams_start(port, DISCOVER_SVIDS);15771577+ port->ams = DISCOVER_SVIDS;15761578 break;15771579 case CMD_DISCOVER_MODES:15781578- tcpm_ams_start(port, DISCOVER_MODES);15801580+ port->ams = DISCOVER_MODES;15791581 break;15801582 case CMD_ENTER_MODE:15811581- tcpm_ams_start(port, DFP_TO_UFP_ENTER_MODE);15831583+ port->ams = DFP_TO_UFP_ENTER_MODE;15821584 break;15831585 case CMD_EXIT_MODE:15841584- tcpm_ams_start(port, DFP_TO_UFP_EXIT_MODE);15861586+ port->ams = DFP_TO_UFP_EXIT_MODE;15851587 break;15861588 case CMD_ATTENTION:15871587- tcpm_ams_start(port, ATTENTION);15881589 /* Attention command does not have response */15891590 *adev_action = ADEV_ATTENTION;15901591 return 0;···19421937 tcpm_log(port, "VDM Tx error, retry");19431938 port->vdm_retries++;19441939 port->vdm_state = VDM_STATE_READY;19401940+ if (PD_VDO_SVDM(vdo_hdr) && PD_VDO_CMDT(vdo_hdr) == CMDT_INIT)19411941+ tcpm_ams_finish(port);19421942+ } else {19451943 tcpm_ams_finish(port);19461944 }19471945 break;···2191218321922184 if (!type) {21932185 tcpm_log(port, "Alert message received with no type");21862186+ tcpm_queue_message(port, PD_MSG_CTRL_NOT_SUPP);21942187 return;21952188 }2196218921972190 /* Just handling non-battery alerts for now */21982191 if (!(type & USB_PD_ADO_TYPE_BATT_STATUS_CHANGE)) {21992199- switch (port->state) {22002200- case SRC_READY:22012201- case SNK_READY:21922192+ if (port->pwr_role == TYPEC_SOURCE) {21932193+ port->upcoming_state = GET_STATUS_SEND;21942194+ tcpm_ams_start(port, GETTING_SOURCE_SINK_STATUS);21952195+ } else {21962196+ /*21972197+ * Do not check SinkTxOk here in case the Source doesn't set its Rp to21982198+ * SinkTxOk in time.21992199+ */22002200+ port->ams = GETTING_SOURCE_SINK_STATUS;22022201 tcpm_set_state(port, GET_STATUS_SEND, 0);22032203- break;22042204- default:22052205- tcpm_queue_message(port, PD_MSG_CTRL_WAIT);22062206- break;22072202 }22032203+ } else {22042204+ tcpm_queue_message(port, PD_MSG_CTRL_NOT_SUPP);22082205 }22092206}22102207···24532440 
tcpm_pd_handle_state(port, BIST_RX, BIST, 0);24542441 break;24552442 case PD_DATA_ALERT:24562456- tcpm_handle_alert(port, msg->payload, cnt);24432443+ if (port->state != SRC_READY && port->state != SNK_READY)24442444+ tcpm_pd_handle_state(port, port->pwr_role == TYPEC_SOURCE ?24452445+ SRC_SOFT_RESET_WAIT_SNK_TX : SNK_SOFT_RESET,24462446+ NONE_AMS, 0);24472447+ else24482448+ tcpm_handle_alert(port, msg->payload, cnt);24572449 break;24582450 case PD_DATA_BATT_STATUS:24592451 case PD_DATA_GET_COUNTRY_INFO:···2782276427832765 switch (type) {27842766 case PD_EXT_STATUS:27852785- /*27862786- * If PPS related events raised then get PPS status to clear27872787- * (see USB PD 3.0 Spec, 6.5.2.4)27882788- */27892789- if (msg->ext_msg.data[USB_PD_EXT_SDB_EVENT_FLAGS] &27902790- USB_PD_EXT_SDB_PPS_EVENTS)27912791- tcpm_pd_handle_state(port, GET_PPS_STATUS_SEND,27922792- GETTING_SOURCE_SINK_STATUS, 0);27932793-27942794- else27952795- tcpm_pd_handle_state(port, ready_state(port), NONE_AMS, 0);27962796- break;27972767 case PD_EXT_PPS_STATUS:27982798- /*27992799- * For now the PPS status message is used to clear events28002800- * and nothing more.28012801- */28022802- tcpm_pd_handle_state(port, ready_state(port), NONE_AMS, 0);27682768+ if (port->ams == GETTING_SOURCE_SINK_STATUS) {27692769+ tcpm_ams_finish(port);27702770+ tcpm_set_state(port, ready_state(port), 0);27712771+ } else {27722772+ /* unexpected Status or PPS_Status Message */27732773+ tcpm_pd_handle_state(port, port->pwr_role == TYPEC_SOURCE ?27742774+ SRC_SOFT_RESET_WAIT_SNK_TX : SNK_SOFT_RESET,27752775+ NONE_AMS, 0);27762776+ }28032777 break;28042778 case PD_EXT_SOURCE_CAP_EXT:28052779 case PD_EXT_GET_BATT_CAP:···59575947 return ret;59585948 }5959594959505950+ /* If sink-vdos is found, sink-vdos-v1 is expected for backward compatibility. 
*/59515951+ if (port->nr_snk_vdo) {59525952+ ret = fwnode_property_count_u32(fwnode, "sink-vdos-v1");59535953+ if (ret < 0)59545954+ return ret;59555955+ else if (ret == 0)59565956+ return -ENODATA;59575957+59585958+ port->nr_snk_vdo_v1 = min(ret, VDO_MAX_OBJECTS);59595959+ ret = fwnode_property_read_u32_array(fwnode, "sink-vdos-v1",59605960+ port->snk_vdo_v1,59615961+ port->nr_snk_vdo_v1);59625962+ if (ret < 0)59635963+ return ret;59645964+ }59655965+59605966 return 0;59615967}59625968···63376311void tcpm_unregister_port(struct tcpm_port *port)63386312{63396313 int i;63146314+63156315+ hrtimer_cancel(&port->send_discover_timer);63166316+ hrtimer_cancel(&port->enable_frs_timer);63176317+ hrtimer_cancel(&port->vdm_state_machine_timer);63186318+ hrtimer_cancel(&port->state_machine_timer);6340631963416320 tcpm_reset_port(port);63426321 for (i = 0; i < ARRAY_SIZE(port->port_altmode); i++)
+1-1
drivers/usb/typec/tcpm/wcove.c
···378378 const u8 *data = (void *)msg;379379 int i;380380381381- for (i = 0; i < pd_header_cnt(msg->header) * 4 + 2; i++) {381381+ for (i = 0; i < pd_header_cnt_le(msg->header) * 4 + 2; i++) {382382 ret = regmap_write(wcove->regmap, USBC_TX_DATA + i,383383 data[i]);384384 if (ret)
···22config VFIO_PCI33 tristate "VFIO support for PCI devices"44 depends on VFIO && PCI && EVENTFD55+ depends on MMU56 select VFIO_VIRQFD67 select IRQ_BYPASS_MANAGER78 help
···5252 return VM_FAULT_SIGBUS;53535454 get_page(page);5555+5656+ if (vmf->vma->vm_file)5757+ page->mapping = vmf->vma->vm_file->f_mapping;5858+ else5959+ printk(KERN_ERR "no mapping available\n");6060+6161+ BUG_ON(!page->mapping);5562 page->index = vmf->pgoff;56635764 vmf->page = page;···151144 .page_mkwrite = fb_deferred_io_mkwrite,152145};153146147147+static int fb_deferred_io_set_page_dirty(struct page *page)148148+{149149+ if (!PageDirty(page))150150+ SetPageDirty(page);151151+ return 0;152152+}153153+154154+static const struct address_space_operations fb_deferred_io_aops = {155155+ .set_page_dirty = fb_deferred_io_set_page_dirty,156156+};157157+154158int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma)155159{156160 vma->vm_ops = &fb_deferred_io_vm_ops;···212194}213195EXPORT_SYMBOL_GPL(fb_deferred_io_init);214196197197+void fb_deferred_io_open(struct fb_info *info,198198+ struct inode *inode,199199+ struct file *file)200200+{201201+ file->f_mapping->a_ops = &fb_deferred_io_aops;202202+}203203+EXPORT_SYMBOL_GPL(fb_deferred_io_open);204204+215205void fb_deferred_io_cleanup(struct fb_info *info)216206{217207 struct fb_deferred_io *fbdefio = info->fbdefio;208208+ struct page *page;209209+ int i;218210219211 BUG_ON(!fbdefio);220212 cancel_delayed_work_sync(&info->deferred_work);213213+214214+ /* clear out the mapping that we setup */215215+ for (i = 0 ; i < info->fix.smem_len; i += PAGE_SIZE) {216216+ page = fb_deferred_io_page(info, i);217217+ page->mapping = NULL;218218+ }219219+221220 mutex_destroy(&fbdefio->lock);222221}223222EXPORT_SYMBOL_GPL(fb_deferred_io_cleanup);
+4
drivers/video/fbdev/core/fbmem.c
···14151415 if (res)14161416 module_put(info->fbops->owner);14171417 }14181418+#ifdef CONFIG_FB_DEFERRED_IO14191419+ if (info->fbdefio)14201420+ fb_deferred_io_open(info, inode, file);14211421+#endif14181422out:14191423 unlock_fb_info(info);14201424 if (res)
···457457 bytes_left = compressed_len;458458 for (pg_index = 0; pg_index < cb->nr_pages; pg_index++) {459459 int submit = 0;460460- int len;460460+ int len = 0;461461462462 page = compressed_pages[pg_index];463463 page->mapping = inode->vfs_inode.i_mapping;···465465 submit = btrfs_bio_fits_in_stripe(page, PAGE_SIZE, bio,466466 0);467467468468- if (pg_index == 0 && use_append)469469- len = bio_add_zone_append_page(bio, page, PAGE_SIZE, 0);470470- else471471- len = bio_add_page(bio, page, PAGE_SIZE, 0);468468+ /*469469+ * Page can only be added to bio if the current bio fits in470470+ * stripe.471471+ */472472+ if (!submit) {473473+ if (pg_index == 0 && use_append)474474+ len = bio_add_zone_append_page(bio, page,475475+ PAGE_SIZE, 0);476476+ else477477+ len = bio_add_page(bio, page, PAGE_SIZE, 0);478478+ }472479473480 page->mapping = NULL;474481 if (submit || len < PAGE_SIZE) {
+18-8
fs/btrfs/disk-io.c
···26482648 ret = -EINVAL;26492649 }2650265026512651+ if (memcmp(fs_info->fs_devices->fsid, fs_info->super_copy->fsid,26522652+ BTRFS_FSID_SIZE)) {26532653+ btrfs_err(fs_info,26542654+ "superblock fsid doesn't match fsid of fs_devices: %pU != %pU",26552655+ fs_info->super_copy->fsid, fs_info->fs_devices->fsid);26562656+ ret = -EINVAL;26572657+ }26582658+26592659+ if (btrfs_fs_incompat(fs_info, METADATA_UUID) &&26602660+ memcmp(fs_info->fs_devices->metadata_uuid,26612661+ fs_info->super_copy->metadata_uuid, BTRFS_FSID_SIZE)) {26622662+ btrfs_err(fs_info,26632663+"superblock metadata_uuid doesn't match metadata uuid of fs_devices: %pU != %pU",26642664+ fs_info->super_copy->metadata_uuid,26652665+ fs_info->fs_devices->metadata_uuid);26662666+ ret = -EINVAL;26672667+ }26682668+26512669 if (memcmp(fs_info->fs_devices->metadata_uuid, sb->dev_item.fsid,26522670 BTRFS_FSID_SIZE) != 0) {26532671 btrfs_err(fs_info,···3297327932983280 disk_super = fs_info->super_copy;3299328133003300- ASSERT(!memcmp(fs_info->fs_devices->fsid, fs_info->super_copy->fsid,33013301- BTRFS_FSID_SIZE));33023302-33033303- if (btrfs_fs_incompat(fs_info, METADATA_UUID)) {33043304- ASSERT(!memcmp(fs_info->fs_devices->metadata_uuid,33053305- fs_info->super_copy->metadata_uuid,33063306- BTRFS_FSID_SIZE));33073307- }3308328233093283 features = btrfs_super_flags(disk_super);33103284 if (features & BTRFS_SUPER_FLAG_CHANGING_FSID_V2) {
···788788 u64 end_byte = bytenr + len;789789 u64 csum_end;790790 struct extent_buffer *leaf;791791- int ret;791791+ int ret = 0;792792 const u32 csum_size = fs_info->csum_size;793793 u32 blocksize_bits = fs_info->sectorsize_bits;794794···806806807807 ret = btrfs_search_slot(trans, root, &key, path, -1, 1);808808 if (ret > 0) {809809+ ret = 0;809810 if (path->slots[0] == 0)810811 break;811812 path->slots[0]--;···863862 ret = btrfs_del_items(trans, root, path,864863 path->slots[0], del_nr);865864 if (ret)866866- goto out;865865+ break;867866 if (key.offset == bytenr)868867 break;869868 } else if (key.offset < bytenr && csum_end > end_byte) {···907906 ret = btrfs_split_item(trans, root, path, &key, offset);908907 if (ret && ret != -EAGAIN) {909908 btrfs_abort_transaction(trans, ret);910910- goto out;909909+ break;911910 }911911+ ret = 0;912912913913 key.offset = end_byte - 1;914914 } else {···919917 }920918 btrfs_release_path(path);921919 }922922- ret = 0;923923-out:924920 btrfs_free_path(path);925921 return ret;922922+}923923+924924+static int find_next_csum_offset(struct btrfs_root *root,925925+ struct btrfs_path *path,926926+ u64 *next_offset)927927+{928928+ const u32 nritems = btrfs_header_nritems(path->nodes[0]);929929+ struct btrfs_key found_key;930930+ int slot = path->slots[0] + 1;931931+ int ret;932932+933933+ if (nritems == 0 || slot >= nritems) {934934+ ret = btrfs_next_leaf(root, path);935935+ if (ret < 0) {936936+ return ret;937937+ } else if (ret > 0) {938938+ *next_offset = (u64)-1;939939+ return 0;940940+ }941941+ slot = path->slots[0];942942+ }943943+944944+ btrfs_item_key_to_cpu(path->nodes[0], &found_key, slot);945945+946946+ if (found_key.objectid != BTRFS_EXTENT_CSUM_OBJECTID ||947947+ found_key.type != BTRFS_EXTENT_CSUM_KEY)948948+ *next_offset = (u64)-1;949949+ else950950+ *next_offset = found_key.offset;951951+952952+ return 0;926953}927954928955int btrfs_csum_file_blocks(struct btrfs_trans_handle *trans,···969938 u64 total_bytes = 0;970939 u64 
csum_offset;971940 u64 bytenr;972972- u32 nritems;973941 u32 ins_size;974942 int index = 0;975943 int found_next;···1011981 goto insert;1012982 }1013983 } else {10141014- int slot = path->slots[0] + 1;10151015- /* we didn't find a csum item, insert one */10161016- nritems = btrfs_header_nritems(path->nodes[0]);10171017- if (!nritems || (path->slots[0] >= nritems - 1)) {10181018- ret = btrfs_next_leaf(root, path);10191019- if (ret < 0) {10201020- goto out;10211021- } else if (ret > 0) {10221022- found_next = 1;10231023- goto insert;10241024- }10251025- slot = path->slots[0];10261026- }10271027- btrfs_item_key_to_cpu(path->nodes[0], &found_key, slot);10281028- if (found_key.objectid != BTRFS_EXTENT_CSUM_OBJECTID ||10291029- found_key.type != BTRFS_EXTENT_CSUM_KEY) {10301030- found_next = 1;10311031- goto insert;10321032- }10331033- next_offset = found_key.offset;984984+ /* We didn't find a csum item, insert one. */985985+ ret = find_next_csum_offset(root, path, &next_offset);986986+ if (ret < 0)987987+ goto out;1034988 found_next = 1;1035989 goto insert;1036990 }···10701056 tmp = sums->len - total_bytes;10711057 tmp >>= fs_info->sectorsize_bits;10721058 WARN_ON(tmp < 1);10591059+ extend_nr = max_t(int, 1, tmp);1073106010741074- extend_nr = max_t(int, 1, (int)tmp);10611061+ /*10621062+ * A log tree can already have checksum items with a subset of10631063+ * the checksums we are trying to log. This can happen after10641064+ * doing a sequence of partial writes into prealloc extents and10651065+ * fsyncs in between, with a full fsync logging a larger subrange10661066+ * of an extent for which a previous fast fsync logged a smaller10671067+ * subrange. 
And this happens in particular due to merging file10681068+ * extent items when we complete an ordered extent for a range10691069+ * covered by a prealloc extent - this is done at10701070+ * btrfs_mark_extent_written().10711071+ *10721072+ * So if we try to extend the previous checksum item, which has10731073+ * a range that ends at the start of the range we want to insert,10741074+ * make sure we don't extend beyond the start offset of the next10751075+ * checksum item. If we are at the last item in the leaf, then10761076+ * forget the optimization of extending and add a new checksum10771077+ * item - it is not worth the complexity of releasing the path,10781078+ * getting the first key for the next leaf, repeat the btree10791079+ * search, etc, because log trees are temporary anyway and it10801080+ * would only save a few bytes of leaf space.10811081+ */10821082+ if (root->root_key.objectid == BTRFS_TREE_LOG_OBJECTID) {10831083+ if (path->slots[0] + 1 >=10841084+ btrfs_header_nritems(path->nodes[0])) {10851085+ ret = find_next_csum_offset(root, path, &next_offset);10861086+ if (ret < 0)10871087+ goto out;10881088+ found_next = 1;10891089+ goto insert;10901090+ }10911091+10921092+ ret = find_next_csum_offset(root, path, &next_offset);10931093+ if (ret < 0)10941094+ goto out;10951095+10961096+ tmp = (next_offset - bytenr) >> fs_info->sectorsize_bits;10971097+ if (tmp <= INT_MAX)10981098+ extend_nr = min_t(int, extend_nr, tmp);10991099+ }11001100+10751101 diff = (csum_offset + extend_nr) * csum_size;10761102 diff = min(diff,10771103 MAX_CSUM_ITEMS(fs_info, csum_size) * csum_size);
fs/btrfs/inode.c
···30003000 if (ret || truncated) {30013001 u64 unwritten_start = start;3002300230033003+ /*30043004+ * If we failed to finish this ordered extent for any reason we30053005+ * need to make sure BTRFS_ORDERED_IOERR is set on the ordered30063006+ * extent, and mark the inode with the error if it wasn't30073007+ * already set. Any error during writeback would have already30083008+ * set the mapping error, so we need to set it if we're the ones30093009+ * marking this ordered extent as failed.30103010+ */30113011+ if (ret && !test_and_set_bit(BTRFS_ORDERED_IOERR,30123012+ &ordered_extent->flags))30133013+ mapping_set_error(ordered_extent->inode->i_mapping, -EIO);30143014+30033015 if (truncated)30043016 unwritten_start += logical_len;30053017 clear_extent_uptodate(io_tree, unwritten_start, end, NULL);···90889076 int ret2;90899077 bool root_log_pinned = false;90909078 bool dest_log_pinned = false;90799079+ bool need_abort = false;9091908090929081 /* we only allow rename subvolume link between subvolumes */90939082 if (old_ino != BTRFS_FIRST_FREE_OBJECTID && root != dest)···91489135 old_idx);91499136 if (ret)91509137 goto out_fail;91389138+ need_abort = true;91519139 }9152914091539141 /* And now for the dest. */···91649150 new_ino,91659151 btrfs_ino(BTRFS_I(old_dir)),91669152 new_idx);91679167- if (ret)91539153+ if (ret) {91549154+ if (need_abort)91559155+ btrfs_abort_transaction(trans, ret);91689156 goto out_fail;91579157+ }91699158 }9170915991719160 /* Update inode version and ctime/mtime. */
+22-16
fs/btrfs/reflink.c
···203203 * inline extent's data to the page.204204 */205205 ASSERT(key.offset > 0);206206- ret = copy_inline_to_page(BTRFS_I(dst), new_key->offset,207207- inline_data, size, datal,208208- comp_type);209209- goto out;206206+ goto copy_to_page;210207 }211208 } else if (i_size_read(dst) <= datal) {212209 struct btrfs_file_extent_item *ei;···219222 BTRFS_FILE_EXTENT_INLINE)220223 goto copy_inline_extent;221224222222- ret = copy_inline_to_page(BTRFS_I(dst), new_key->offset,223223- inline_data, size, datal, comp_type);224224- goto out;225225+ goto copy_to_page;225226 }226227227228copy_inline_extent:228228- ret = 0;229229 /*230230 * We have no extent items, or we have an extent at offset 0 which may231231 * or may not be inlined. All these cases are dealt the same way.···234240 * clone. Deal with all these cases by copying the inline extent235241 * data into the respective page at the destination inode.236242 */237237- ret = copy_inline_to_page(BTRFS_I(dst), new_key->offset,238238- inline_data, size, datal, comp_type);239239- goto out;243243+ goto copy_to_page;240244 }241245246246+ /*247247+ * Release path before starting a new transaction so we don't hold locks248248+ * that would confuse lockdep.249249+ */242250 btrfs_release_path(path);243251 /*244252 * If we end up here it means were copy the inline extent into a leaf···278282out:279283 if (!ret && !trans) {280284 /*281281- * Release path before starting a new transaction so we don't282282- * hold locks that would confuse lockdep.283283- */284284- btrfs_release_path(path);285285- /*286285 * No transaction here means we copied the inline extent into a287286 * page of the destination inode.288287 *···297306 *trans_out = trans;298307299308 return ret;309309+310310+copy_to_page:311311+ /*312312+ * Release our path because we don't need it anymore and also because313313+ * copy_inline_to_page() needs to reserve data and metadata, which may314314+ * need to flush delalloc when we are low on available space and315315+ * 
therefore cause a deadlock if writeback of an inline extent needs to316316+ * write to the same leaf or an ordered extent completion needs to write317317+ * to the same leaf.318318+ */319319+ btrfs_release_path(path);320320+321321+ ret = copy_inline_to_page(BTRFS_I(dst), new_key->offset,322322+ inline_data, size, datal, comp_type);323323+ goto out;300324}301325302326/**
+29-8
fs/btrfs/tree-log.c
···15741574 if (ret)15751575 goto out;1576157615771577- btrfs_update_inode(trans, root, BTRFS_I(inode));15771577+ ret = btrfs_update_inode(trans, root, BTRFS_I(inode));15781578+ if (ret)15791579+ goto out;15781580 }1579158115801582 ref_ptr = (unsigned long)(ref_ptr + ref_struct_size) + namelen;···1751174917521750 if (nlink != inode->i_nlink) {17531751 set_nlink(inode, nlink);17541754- btrfs_update_inode(trans, root, BTRFS_I(inode));17521752+ ret = btrfs_update_inode(trans, root, BTRFS_I(inode));17531753+ if (ret)17541754+ goto out;17551755 }17561756 BTRFS_I(inode)->index_cnt = (u64)-1;17571757···17911787 break;1792178817931789 if (ret == 1) {17901790+ ret = 0;17941791 if (path->slots[0] == 0)17951792 break;17961793 path->slots[0]--;···1804179918051800 ret = btrfs_del_item(trans, root, path);18061801 if (ret)18071807- goto out;18021802+ break;1808180318091804 btrfs_release_path(path);18101805 inode = read_one_inode(root, key.offset);18111811- if (!inode)18121812- return -EIO;18061806+ if (!inode) {18071807+ ret = -EIO;18081808+ break;18091809+ }1813181018141811 ret = fixup_inode_link_count(trans, root, inode);18151812 iput(inode);18161813 if (ret)18171817- goto out;18141814+ break;1818181518191816 /*18201817 * fixup on a directory may create new entries,···18251818 */18261819 key.offset = (u64)-1;18271820 }18281828- ret = 0;18291829-out:18301821 btrfs_release_path(path);18311822 return ret;18321823}···33023297 * begins and releases it only after writing its superblock.33033298 */33043299 mutex_lock(&fs_info->tree_log_mutex);33003300+33013301+ /*33023302+ * The previous transaction writeout phase could have failed, and thus33033303+ * marked the fs in an error state. We must not commit here, as we33043304+ * could have updated our generation in the super_for_commit and33053305+ * writing the super here would result in transid mismatches. 
If there33063306+ * is an error here just bail.33073307+ */33083308+ if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) {33093309+ ret = -EIO;33103310+ btrfs_set_log_full_commit(trans);33113311+ btrfs_abort_transaction(trans, ret);33123312+ mutex_unlock(&fs_info->tree_log_mutex);33133313+ goto out_wake_log_root;33143314+ }33153315+33053316 btrfs_set_super_log_root(fs_info->super_for_commit, log_root_start);33063317 btrfs_set_super_log_root_level(fs_info->super_for_commit, log_root_level);33073318 ret = write_all_supers(fs_info, 1);
+18-5
fs/btrfs/zoned.c
···150150 return (u32)zone;151151}152152153153+static inline sector_t zone_start_sector(u32 zone_number,154154+ struct block_device *bdev)155155+{156156+ return (sector_t)zone_number << ilog2(bdev_zone_sectors(bdev));157157+}158158+159159+static inline u64 zone_start_physical(u32 zone_number,160160+ struct btrfs_zoned_device_info *zone_info)161161+{162162+ return (u64)zone_number << zone_info->zone_size_shift;163163+}164164+153165/*154166 * Emulate blkdev_report_zones() for a non-zoned device. It slices up the block155167 * device into static sized chunks and fake a conventional zone on each of···417405 if (sb_zone + 1 >= zone_info->nr_zones)418406 continue;419407420420- sector = sb_zone << (zone_info->zone_size_shift - SECTOR_SHIFT);421421- ret = btrfs_get_dev_zones(device, sector << SECTOR_SHIFT,408408+ ret = btrfs_get_dev_zones(device,409409+ zone_start_physical(sb_zone, zone_info),422410 &zone_info->sb_zones[sb_pos],423411 &nr_zones);424412 if (ret)···733721 if (sb_zone + 1 >= nr_zones)734722 return -ENOENT;735723736736- ret = blkdev_report_zones(bdev, sb_zone << zone_sectors_shift,724724+ ret = blkdev_report_zones(bdev, zone_start_sector(sb_zone, bdev),737725 BTRFS_NR_SB_LOG_ZONES, copy_zone_info_cb,738726 zones);739727 if (ret < 0)···838826 return -ENOENT;839827840828 return blkdev_zone_mgmt(bdev, REQ_OP_ZONE_RESET,841841- sb_zone << zone_sectors_shift,829829+ zone_start_sector(sb_zone, bdev),842830 zone_sectors * BTRFS_NR_SB_LOG_ZONES, GFP_NOFS);843831}844832···890878 if (!(end <= sb_zone ||891879 sb_zone + BTRFS_NR_SB_LOG_ZONES <= begin)) {892880 have_sb = true;893893- pos = ((u64)sb_zone + BTRFS_NR_SB_LOG_ZONES) << shift;881881+ pos = zone_start_physical(882882+ sb_zone + BTRFS_NR_SB_LOG_ZONES, zinfo);894883 break;895884 }896885
+1-1
fs/coredump.c
···519519 * but then we need to teach dump_write() to restart and clear520520 * TIF_SIGPENDING.521521 */522522- return signal_pending(current);522522+ return fatal_signal_pending(current) || freezing(current);523523}524524525525static void wait_for_dump_helpers(struct file *file)
fs/ext4/extents.c
···32063206 ext4_ext_mark_unwritten(ex2);3207320732083208 err = ext4_ext_insert_extent(handle, inode, ppath, &newex, flags);32093209- if (err == -ENOSPC && (EXT4_EXT_MAY_ZEROOUT & split_flag)) {32093209+ if (err != -ENOSPC && err != -EDQUOT)32103210+ goto out;32113211+32123212+ if (EXT4_EXT_MAY_ZEROOUT & split_flag) {32103213 if (split_flag & (EXT4_EXT_DATA_VALID1|EXT4_EXT_DATA_VALID2)) {32113214 if (split_flag & EXT4_EXT_DATA_VALID1) {32123215 err = ext4_ext_zeroout(inode, ex2);···32353232 ext4_ext_pblock(&orig_ex));32363233 }3237323432383238- if (err)32393239- goto fix_extent_len;32403240- /* update the extent length and mark as initialized */32413241- ex->ee_len = cpu_to_le16(ee_len);32423242- ext4_ext_try_to_merge(handle, inode, path, ex);32433243- err = ext4_ext_dirty(handle, inode, path + path->p_depth);32443244- if (err)32453245- goto fix_extent_len;32463246-32473247- /* update extent status tree */32483248- err = ext4_zeroout_es(inode, &zero_ex);32493249-32503250- goto out;32513251- } else if (err)32523252- goto fix_extent_len;32533253-32543254-out:32553255- ext4_ext_show_leaf(inode, path);32563256- return err;32353235+ if (!err) {32363236+ /* update the extent length and mark as initialized */32373237+ ex->ee_len = cpu_to_le16(ee_len);32383238+ ext4_ext_try_to_merge(handle, inode, path, ex);32393239+ err = ext4_ext_dirty(handle, inode, path + path->p_depth);32403240+ if (!err)32413241+ /* update extent status tree */32423242+ err = ext4_zeroout_es(inode, &zero_ex);32433243+ /* If we failed at this point, we don't know in which32443244+ * state the extent tree exactly is so don't try to fix32453245+ * length of the original extent as it may do even more32463246+ * damage.32473247+ */32483248+ goto out;32493249+ }32503250+ }3257325132583252fix_extent_len:32593253 ex->ee_len = orig_ex.ee_len;···32593259 * and err is a non-zero error code.32603260 */32613261 ext4_ext_dirty(handle, inode, path + path->p_depth);32623262+ return err;32633263+out:32643264+ 
ext4_ext_show_leaf(inode, path);32623265 return err;32633266}32643267
+90-80
fs/ext4/fast_commit.c
···12881288};1289128912901290static inline void tl_to_darg(struct dentry_info_args *darg,12911291- struct ext4_fc_tl *tl)12911291+ struct ext4_fc_tl *tl, u8 *val)12921292{12931293- struct ext4_fc_dentry_info *fcd;12931293+ struct ext4_fc_dentry_info fcd;1294129412951295- fcd = (struct ext4_fc_dentry_info *)ext4_fc_tag_val(tl);12951295+ memcpy(&fcd, val, sizeof(fcd));1296129612971297- darg->parent_ino = le32_to_cpu(fcd->fc_parent_ino);12981298- darg->ino = le32_to_cpu(fcd->fc_ino);12991299- darg->dname = fcd->fc_dname;13001300- darg->dname_len = ext4_fc_tag_len(tl) -13011301- sizeof(struct ext4_fc_dentry_info);12971297+ darg->parent_ino = le32_to_cpu(fcd.fc_parent_ino);12981298+ darg->ino = le32_to_cpu(fcd.fc_ino);12991299+ darg->dname = val + offsetof(struct ext4_fc_dentry_info, fc_dname);13001300+ darg->dname_len = le16_to_cpu(tl->fc_len) -13011301+ sizeof(struct ext4_fc_dentry_info);13021302}1303130313041304/* Unlink replay function */13051305-static int ext4_fc_replay_unlink(struct super_block *sb, struct ext4_fc_tl *tl)13051305+static int ext4_fc_replay_unlink(struct super_block *sb, struct ext4_fc_tl *tl,13061306+ u8 *val)13061307{13071308 struct inode *inode, *old_parent;13081309 struct qstr entry;13091310 struct dentry_info_args darg;13101311 int ret = 0;1311131213121312- tl_to_darg(&darg, tl);13131313+ tl_to_darg(&darg, tl, val);1313131413141315 trace_ext4_fc_replay(sb, EXT4_FC_TAG_UNLINK, darg.ino,13151316 darg.parent_ino, darg.dname_len);···14001399}1401140014021401/* Link replay function */14031403-static int ext4_fc_replay_link(struct super_block *sb, struct ext4_fc_tl *tl)14021402+static int ext4_fc_replay_link(struct super_block *sb, struct ext4_fc_tl *tl,14031403+ u8 *val)14041404{14051405 struct inode *inode;14061406 struct dentry_info_args darg;14071407 int ret = 0;1408140814091409- tl_to_darg(&darg, tl);14091409+ tl_to_darg(&darg, tl, val);14101410 trace_ext4_fc_replay(sb, EXT4_FC_TAG_LINK, darg.ino,14111411 darg.parent_ino, 
darg.dname_len);14121412···14521450/*14531451 * Inode replay function14541452 */14551455-static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl)14531453+static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl,14541454+ u8 *val)14561455{14571457- struct ext4_fc_inode *fc_inode;14561456+ struct ext4_fc_inode fc_inode;14581457 struct ext4_inode *raw_inode;14591458 struct ext4_inode *raw_fc_inode;14601459 struct inode *inode = NULL;···14631460 int inode_len, ino, ret, tag = le16_to_cpu(tl->fc_tag);14641461 struct ext4_extent_header *eh;1465146214661466- fc_inode = (struct ext4_fc_inode *)ext4_fc_tag_val(tl);14631463+ memcpy(&fc_inode, val, sizeof(fc_inode));1467146414681468- ino = le32_to_cpu(fc_inode->fc_ino);14651465+ ino = le32_to_cpu(fc_inode.fc_ino);14691466 trace_ext4_fc_replay(sb, tag, ino, 0, 0);1470146714711468 inode = ext4_iget(sb, ino, EXT4_IGET_NORMAL);···1477147414781475 ext4_fc_record_modified_inode(sb, ino);1479147614801480- raw_fc_inode = (struct ext4_inode *)fc_inode->fc_raw_inode;14771477+ raw_fc_inode = (struct ext4_inode *)14781478+ (val + offsetof(struct ext4_fc_inode, fc_raw_inode));14811479 ret = ext4_get_fc_inode_loc(sb, ino, &iloc);14821480 if (ret)14831481 goto out;1484148214851485- inode_len = ext4_fc_tag_len(tl) - sizeof(struct ext4_fc_inode);14831483+ inode_len = le16_to_cpu(tl->fc_len) - sizeof(struct ext4_fc_inode);14861484 raw_inode = ext4_raw_inode(&iloc);1487148514881486 memcpy(raw_inode, raw_fc_inode, offsetof(struct ext4_inode, i_block));···15511547 * inode for which we are trying to create a dentry here, should already have15521548 * been replayed before we start here.15531549 */15541554-static int ext4_fc_replay_create(struct super_block *sb, struct ext4_fc_tl *tl)15501550+static int ext4_fc_replay_create(struct super_block *sb, struct ext4_fc_tl *tl,15511551+ u8 *val)15551552{15561553 int ret = 0;15571554 struct inode *inode = NULL;15581555 struct inode *dir = NULL;15591556 struct 
dentry_info_args darg;1560155715611561- tl_to_darg(&darg, tl);15581558+ tl_to_darg(&darg, tl, val);1562155915631560 trace_ext4_fc_replay(sb, EXT4_FC_TAG_CREAT, darg.ino,15641561 darg.parent_ino, darg.dname_len);···1638163316391634/* Replay add range tag */16401635static int ext4_fc_replay_add_range(struct super_block *sb,16411641- struct ext4_fc_tl *tl)16361636+ struct ext4_fc_tl *tl, u8 *val)16421637{16431643- struct ext4_fc_add_range *fc_add_ex;16381638+ struct ext4_fc_add_range fc_add_ex;16441639 struct ext4_extent newex, *ex;16451640 struct inode *inode;16461641 ext4_lblk_t start, cur;···16501645 struct ext4_ext_path *path = NULL;16511646 int ret;1652164716531653- fc_add_ex = (struct ext4_fc_add_range *)ext4_fc_tag_val(tl);16541654- ex = (struct ext4_extent *)&fc_add_ex->fc_ex;16481648+ memcpy(&fc_add_ex, val, sizeof(fc_add_ex));16491649+ ex = (struct ext4_extent *)&fc_add_ex.fc_ex;1655165016561651 trace_ext4_fc_replay(sb, EXT4_FC_TAG_ADD_RANGE,16571657- le32_to_cpu(fc_add_ex->fc_ino), le32_to_cpu(ex->ee_block),16521652+ le32_to_cpu(fc_add_ex.fc_ino), le32_to_cpu(ex->ee_block),16581653 ext4_ext_get_actual_len(ex));1659165416601660- inode = ext4_iget(sb, le32_to_cpu(fc_add_ex->fc_ino),16611661- EXT4_IGET_NORMAL);16551655+ inode = ext4_iget(sb, le32_to_cpu(fc_add_ex.fc_ino), EXT4_IGET_NORMAL);16621656 if (IS_ERR(inode)) {16631657 jbd_debug(1, "Inode not found.");16641658 return 0;···1766176217671763/* Replay DEL_RANGE tag */17681764static int17691769-ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl)17651765+ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,17661766+ u8 *val)17701767{17711768 struct inode *inode;17721772- struct ext4_fc_del_range *lrange;17691769+ struct ext4_fc_del_range lrange;17731770 struct ext4_map_blocks map;17741771 ext4_lblk_t cur, remaining;17751772 int ret;1776177317771777- lrange = (struct ext4_fc_del_range *)ext4_fc_tag_val(tl);17781778- cur = le32_to_cpu(lrange->fc_lblk);17791779- remaining = 
le32_to_cpu(lrange->fc_len);17741774+ memcpy(&lrange, val, sizeof(lrange));17751775+ cur = le32_to_cpu(lrange.fc_lblk);17761776+ remaining = le32_to_cpu(lrange.fc_len);1780177717811778 trace_ext4_fc_replay(sb, EXT4_FC_TAG_DEL_RANGE,17821782- le32_to_cpu(lrange->fc_ino), cur, remaining);17791779+ le32_to_cpu(lrange.fc_ino), cur, remaining);1783178017841784- inode = ext4_iget(sb, le32_to_cpu(lrange->fc_ino), EXT4_IGET_NORMAL);17811781+ inode = ext4_iget(sb, le32_to_cpu(lrange.fc_ino), EXT4_IGET_NORMAL);17851782 if (IS_ERR(inode)) {17861786- jbd_debug(1, "Inode %d not found", le32_to_cpu(lrange->fc_ino));17831783+ jbd_debug(1, "Inode %d not found", le32_to_cpu(lrange.fc_ino));17871784 return 0;17881785 }1789178617901787 ret = ext4_fc_record_modified_inode(sb, inode->i_ino);1791178817921789 jbd_debug(1, "DEL_RANGE, inode %ld, lblk %d, len %d\n",17931793- inode->i_ino, le32_to_cpu(lrange->fc_lblk),17941794- le32_to_cpu(lrange->fc_len));17901790+ inode->i_ino, le32_to_cpu(lrange.fc_lblk),17911791+ le32_to_cpu(lrange.fc_len));17951792 while (remaining > 0) {17961793 map.m_lblk = cur;17971794 map.m_len = remaining;···18131808 }1814180918151810 ret = ext4_punch_hole(inode,18161816- le32_to_cpu(lrange->fc_lblk) << sb->s_blocksize_bits,18171817- le32_to_cpu(lrange->fc_len) << sb->s_blocksize_bits);18111811+ le32_to_cpu(lrange.fc_lblk) << sb->s_blocksize_bits,18121812+ le32_to_cpu(lrange.fc_len) << sb->s_blocksize_bits);18181813 if (ret)18191814 jbd_debug(1, "ext4_punch_hole returned %d", ret);18201815 ext4_ext_replay_shrink_inode(inode,···19301925 struct ext4_sb_info *sbi = EXT4_SB(sb);19311926 struct ext4_fc_replay_state *state;19321927 int ret = JBD2_FC_REPLAY_CONTINUE;19331933- struct ext4_fc_add_range *ext;19341934- struct ext4_fc_tl *tl;19351935- struct ext4_fc_tail *tail;19361936- __u8 *start, *end;19371937- struct ext4_fc_head *head;19281928+ struct ext4_fc_add_range ext;19291929+ struct ext4_fc_tl tl;19301930+ struct ext4_fc_tail tail;19311931+ __u8 *start, *end, 
*cur, *val;19321932+ struct ext4_fc_head head;19381933 struct ext4_extent *ex;1939193419401935 state = &sbi->s_fc_replay_state;···19611956 }1962195719631958 state->fc_replay_expected_off++;19641964- fc_for_each_tl(start, end, tl) {19591959+ for (cur = start; cur < end; cur = cur + sizeof(tl) + le16_to_cpu(tl.fc_len)) {19601960+ memcpy(&tl, cur, sizeof(tl));19611961+ val = cur + sizeof(tl);19651962 jbd_debug(3, "Scan phase, tag:%s, blk %lld\n",19661966- tag2str(le16_to_cpu(tl->fc_tag)), bh->b_blocknr);19671967- switch (le16_to_cpu(tl->fc_tag)) {19631963+ tag2str(le16_to_cpu(tl.fc_tag)), bh->b_blocknr);19641964+ switch (le16_to_cpu(tl.fc_tag)) {19681965 case EXT4_FC_TAG_ADD_RANGE:19691969- ext = (struct ext4_fc_add_range *)ext4_fc_tag_val(tl);19701970- ex = (struct ext4_extent *)&ext->fc_ex;19661966+ memcpy(&ext, val, sizeof(ext));19671967+ ex = (struct ext4_extent *)&ext.fc_ex;19711968 ret = ext4_fc_record_regions(sb,19721972- le32_to_cpu(ext->fc_ino),19691969+ le32_to_cpu(ext.fc_ino),19731970 le32_to_cpu(ex->ee_block), ext4_ext_pblock(ex),19741971 ext4_ext_get_actual_len(ex));19751972 if (ret < 0)···19851978 case EXT4_FC_TAG_INODE:19861979 case EXT4_FC_TAG_PAD:19871980 state->fc_cur_tag++;19881988- state->fc_crc = ext4_chksum(sbi, state->fc_crc, tl,19891989- sizeof(*tl) + ext4_fc_tag_len(tl));19811981+ state->fc_crc = ext4_chksum(sbi, state->fc_crc, cur,19821982+ sizeof(tl) + le16_to_cpu(tl.fc_len));19901983 break;19911984 case EXT4_FC_TAG_TAIL:19921985 state->fc_cur_tag++;19931993- tail = (struct ext4_fc_tail *)ext4_fc_tag_val(tl);19941994- state->fc_crc = ext4_chksum(sbi, state->fc_crc, tl,19951995- sizeof(*tl) +19861986+ memcpy(&tail, val, sizeof(tail));19871987+ state->fc_crc = ext4_chksum(sbi, state->fc_crc, cur,19881988+ sizeof(tl) +19961989 offsetof(struct ext4_fc_tail,19971990 fc_crc));19981998- if (le32_to_cpu(tail->fc_tid) == expected_tid &&19991999- le32_to_cpu(tail->fc_crc) == state->fc_crc) {19911991+ if (le32_to_cpu(tail.fc_tid) == expected_tid 
&&19921992+ le32_to_cpu(tail.fc_crc) == state->fc_crc) {20001993 state->fc_replay_num_tags = state->fc_cur_tag;20011994 state->fc_regions_valid =20021995 state->fc_regions_used;···20072000 state->fc_crc = 0;20082001 break;20092002 case EXT4_FC_TAG_HEAD:20102010- head = (struct ext4_fc_head *)ext4_fc_tag_val(tl);20112011- if (le32_to_cpu(head->fc_features) &20032003+ memcpy(&head, val, sizeof(head));20042004+ if (le32_to_cpu(head.fc_features) &20122005 ~EXT4_FC_SUPPORTED_FEATURES) {20132006 ret = -EOPNOTSUPP;20142007 break;20152008 }20162016- if (le32_to_cpu(head->fc_tid) != expected_tid) {20092009+ if (le32_to_cpu(head.fc_tid) != expected_tid) {20172010 ret = JBD2_FC_REPLAY_STOP;20182011 break;20192012 }20202013 state->fc_cur_tag++;20212021- state->fc_crc = ext4_chksum(sbi, state->fc_crc, tl,20222022- sizeof(*tl) + ext4_fc_tag_len(tl));20142014+ state->fc_crc = ext4_chksum(sbi, state->fc_crc, cur,20152015+ sizeof(tl) + le16_to_cpu(tl.fc_len));20232016 break;20242017 default:20252018 ret = state->fc_replay_num_tags ?···20432036{20442037 struct super_block *sb = journal->j_private;20452038 struct ext4_sb_info *sbi = EXT4_SB(sb);20462046- struct ext4_fc_tl *tl;20472047- __u8 *start, *end;20392039+ struct ext4_fc_tl tl;20402040+ __u8 *start, *end, *cur, *val;20482041 int ret = JBD2_FC_REPLAY_CONTINUE;20492042 struct ext4_fc_replay_state *state = &sbi->s_fc_replay_state;20502050- struct ext4_fc_tail *tail;20432043+ struct ext4_fc_tail tail;2051204420522045 if (pass == PASS_SCAN) {20532046 state->fc_current_pass = PASS_SCAN;···20742067 start = (u8 *)bh->b_data;20752068 end = (__u8 *)bh->b_data + journal->j_blocksize - 1;2076206920772077- fc_for_each_tl(start, end, tl) {20702070+ for (cur = start; cur < end; cur = cur + sizeof(tl) + le16_to_cpu(tl.fc_len)) {20712071+ memcpy(&tl, cur, sizeof(tl));20722072+ val = cur + sizeof(tl);20732073+20782074 if (state->fc_replay_num_tags == 0) {20792075 ret = JBD2_FC_REPLAY_STOP;20802076 ext4_fc_set_bitmaps_and_counters(sb);20812077 
break;20822078 }20832079 jbd_debug(3, "Replay phase, tag:%s\n",20842084- tag2str(le16_to_cpu(tl->fc_tag)));20802080+ tag2str(le16_to_cpu(tl.fc_tag)));20852081 state->fc_replay_num_tags--;20862086- switch (le16_to_cpu(tl->fc_tag)) {20822082+ switch (le16_to_cpu(tl.fc_tag)) {20872083 case EXT4_FC_TAG_LINK:20882088- ret = ext4_fc_replay_link(sb, tl);20842084+ ret = ext4_fc_replay_link(sb, &tl, val);20892085 break;20902086 case EXT4_FC_TAG_UNLINK:20912091- ret = ext4_fc_replay_unlink(sb, tl);20872087+ ret = ext4_fc_replay_unlink(sb, &tl, val);20922088 break;20932089 case EXT4_FC_TAG_ADD_RANGE:20942094- ret = ext4_fc_replay_add_range(sb, tl);20902090+ ret = ext4_fc_replay_add_range(sb, &tl, val);20952091 break;20962092 case EXT4_FC_TAG_CREAT:20972097- ret = ext4_fc_replay_create(sb, tl);20932093+ ret = ext4_fc_replay_create(sb, &tl, val);20982094 break;20992095 case EXT4_FC_TAG_DEL_RANGE:21002100- ret = ext4_fc_replay_del_range(sb, tl);20962096+ ret = ext4_fc_replay_del_range(sb, &tl, val);21012097 break;21022098 case EXT4_FC_TAG_INODE:21032103- ret = ext4_fc_replay_inode(sb, tl);20992099+ ret = ext4_fc_replay_inode(sb, &tl, val);21042100 break;21052101 case EXT4_FC_TAG_PAD:21062102 trace_ext4_fc_replay(sb, EXT4_FC_TAG_PAD, 0,21072107- ext4_fc_tag_len(tl), 0);21032103+ le16_to_cpu(tl.fc_len), 0);21082104 break;21092105 case EXT4_FC_TAG_TAIL:21102106 trace_ext4_fc_replay(sb, EXT4_FC_TAG_TAIL, 0,21112111- ext4_fc_tag_len(tl), 0);21122112- tail = (struct ext4_fc_tail *)ext4_fc_tag_val(tl);21132113- WARN_ON(le32_to_cpu(tail->fc_tid) != expected_tid);21072107+ le16_to_cpu(tl.fc_len), 0);21082108+ memcpy(&tail, val, sizeof(tail));21092109+ WARN_ON(le32_to_cpu(tail.fc_tid) != expected_tid);21142110 break;21152111 case EXT4_FC_TAG_HEAD:21162112 break;21172113 default:21182118- trace_ext4_fc_replay(sb, le16_to_cpu(tl->fc_tag), 0,21192119- ext4_fc_tag_len(tl), 0);21142114+ trace_ext4_fc_replay(sb, le16_to_cpu(tl.fc_tag), 0,21152115+ le16_to_cpu(tl.fc_len), 0);21202116 ret = 
-ECANCELED;21212117 break;21222118 }
-19
fs/ext4/fast_commit.h
···153153#define region_last(__region) (((__region)->lblk) + ((__region)->len) - 1)154154#endif155155156156-#define fc_for_each_tl(__start, __end, __tl) \157157- for (tl = (struct ext4_fc_tl *)(__start); \158158- (__u8 *)tl < (__u8 *)(__end); \159159- tl = (struct ext4_fc_tl *)((__u8 *)tl + \160160- sizeof(struct ext4_fc_tl) + \161161- + le16_to_cpu(tl->fc_len)))162162-163156static inline const char *tag2str(__u16 tag)164157{165158 switch (tag) {···177184 default:178185 return "ERROR";179186 }180180-}181181-182182-/* Get length of a particular tlv */183183-static inline int ext4_fc_tag_len(struct ext4_fc_tl *tl)184184-{185185- return le16_to_cpu(tl->fc_len);186186-}187187-188188-/* Get a pointer to "value" of a tlv */189189-static inline __u8 *ext4_fc_tag_val(struct ext4_fc_tl *tl)190190-{191191- return (__u8 *)tl + sizeof(*tl);192187}193188194189#endif /* __FAST_COMMIT_H__ */
+4-2
fs/ext4/ialloc.c
···322322 if (is_directory) {323323 count = ext4_used_dirs_count(sb, gdp) - 1;324324 ext4_used_dirs_set(sb, gdp, count);325325- percpu_counter_dec(&sbi->s_dirs_counter);325325+ if (percpu_counter_initialized(&sbi->s_dirs_counter))326326+ percpu_counter_dec(&sbi->s_dirs_counter);326327 }327328 ext4_inode_bitmap_csum_set(sb, block_group, gdp, bitmap_bh,328329 EXT4_INODES_PER_GROUP(sb) / 8);329330 ext4_group_desc_csum_set(sb, block_group, gdp);330331 ext4_unlock_group(sb, block_group);331332332332- percpu_counter_inc(&sbi->s_freeinodes_counter);333333+ if (percpu_counter_initialized(&sbi->s_freeinodes_counter))334334+ percpu_counter_inc(&sbi->s_freeinodes_counter);333335 if (sbi->s_log_groups_per_flex) {334336 struct flex_groups *fg;335337
+1-1
fs/ext4/mballoc.c
···32173217 */32183218 if (sbi->s_es->s_log_groups_per_flex >= 32) {32193219 ext4_msg(sb, KERN_ERR, "too many log groups per flexible block group");32203220- goto err_freesgi;32203220+ goto err_freebuddy;32213221 }32223222 sbi->s_mb_prefetch = min_t(uint, 1 << sbi->s_es->s_log_groups_per_flex,32233223 BLK_MAX_SEGMENT_SIZE >> (sb->s_blocksize_bits - 9));
+4-2
fs/ext4/namei.c
···13761376 struct dx_hash_info *hinfo = &name->hinfo;13771377 int len;1378137813791379- if (!IS_CASEFOLDED(dir) || !dir->i_sb->s_encoding) {13791379+ if (!IS_CASEFOLDED(dir) || !dir->i_sb->s_encoding ||13801380+ (IS_ENCRYPTED(dir) && !fscrypt_has_encryption_key(dir))) {13801381 cf_name->name = NULL;13811382 return 0;13821383 }···14281427#endif1429142814301429#ifdef CONFIG_UNICODE14311431- if (parent->i_sb->s_encoding && IS_CASEFOLDED(parent)) {14301430+ if (parent->i_sb->s_encoding && IS_CASEFOLDED(parent) &&14311431+ (!IS_ENCRYPTED(parent) || fscrypt_has_encryption_key(parent))) {14321432 if (fname->cf_name.name) {14331433 struct qstr cf = {.name = fname->cf_name.name,14341434 .len = fname->cf_name.len};
+9-2
fs/ext4/super.c
···44624462 }4463446344644464 if (sb->s_blocksize != blocksize) {44654465+ /*44664466+ * bh must be released before kill_bdev(), otherwise44674467+ * it won't be freed and its page also. kill_bdev()44684468+ * is called by sb_set_blocksize().44694469+ */44704470+ brelse(bh);44654471 /* Validate the filesystem blocksize */44664472 if (!sb_set_blocksize(sb, blocksize)) {44674473 ext4_msg(sb, KERN_ERR, "bad block size %d",44684474 blocksize);44754475+ bh = NULL;44694476 goto failed_mount;44704477 }4471447844724472- brelse(bh);44734479 logical_sb_block = sb_block * EXT4_MIN_BLOCK_SIZE;44744480 offset = do_div(logical_sb_block, blocksize);44754481 bh = ext4_sb_bread_unmovable(sb, logical_sb_block);···52085202 kfree(get_qf_name(sb, sbi, i));52095203#endif52105204 fscrypt_free_dummy_policy(&sbi->s_dummy_enc_policy);52115211- ext4_blkdev_remove(sbi);52055205+ /* ext4_blkdev_remove() calls kill_bdev(), release bh before it. */52125206 brelse(bh);52075207+ ext4_blkdev_remove(sbi);52135208out_fail:52145209 sb->s_fs_info = NULL;52155210 kfree(sbi->s_blockgroup_lock);
fs/gfs2/file.c
···911911 current->backing_dev_info = inode_to_bdi(inode);912912 buffered = iomap_file_buffered_write(iocb, from, &gfs2_iomap_ops);913913 current->backing_dev_info = NULL;914914- if (unlikely(buffered <= 0))914914+ if (unlikely(buffered <= 0)) {915915+ if (!ret)916916+ ret = buffered;915917 goto out_unlock;918918+ }916919917920 /*918921 * We need to ensure that the page cache pages are written to
+24-4
fs/gfs2/glock.c
···582582 spin_unlock(&gl->gl_lockref.lock);583583}584584585585+static bool is_system_glock(struct gfs2_glock *gl)586586+{587587+ struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;588588+ struct gfs2_inode *m_ip = GFS2_I(sdp->sd_statfs_inode);589589+590590+ if (gl == m_ip->i_gl)591591+ return true;592592+ return false;593593+}594594+585595/**586596 * do_xmote - Calls the DLM to change the state of a lock587597 * @gl: The lock state···681671 * to see sd_log_error and withdraw, and in the meantime, requeue the682672 * work for later.683673 *674674+ * We make a special exception for some system glocks, such as the675675+ * system statfs inode glock, which needs to be granted before the676676+ * gfs2_quotad daemon can exit, and that exit needs to finish before677677+ * we can unmount the withdrawn file system.678678+ *684679 * However, if we're just unlocking the lock (say, for unmount, when685680 * gfs2_gl_hash_clear calls clear_glock) and recovery is complete686681 * then it's okay to tell dlm to unlock it.687682 */688683 if (unlikely(sdp->sd_log_error && !gfs2_withdrawn(sdp)))689684 gfs2_withdraw_delayed(sdp);690690- if (glock_blocked_by_withdraw(gl)) {691691- if (target != LM_ST_UNLOCKED ||692692- test_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags)) {685685+ if (glock_blocked_by_withdraw(gl) &&686686+ (target != LM_ST_UNLOCKED ||687687+ test_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags))) {688688+ if (!is_system_glock(gl)) {693689 gfs2_glock_queue_work(gl, GL_GLOCK_DFT_HOLD);694690 goto out;691691+ } else {692692+ clear_bit(GLF_INVALIDATE_IN_PROGRESS, &gl->gl_flags);695693 }696694 }697695···14841466 glock_blocked_by_withdraw(gl) &&14851467 gh->gh_gl != sdp->sd_jinode_gl) {14861468 sdp->sd_glock_dqs_held++;14691469+ spin_unlock(&gl->gl_lockref.lock);14871470 might_sleep();14881471 wait_on_bit(&sdp->sd_flags, SDF_WITHDRAW_RECOVERY,14891472 TASK_UNINTERRUPTIBLE);14731473+ spin_lock(&gl->gl_lockref.lock);14901474 }14911475 if (gh->gh_flags & GL_NOCACHE)14921476 handle_callback(gl, LM_ST_UNLOCKED, 0, false);···17951775 while(!list_empty(list)) {17961776 gl = list_first_entry(list, struct gfs2_glock, gl_lru);17971777 list_del_init(&gl->gl_lru);17781778+ clear_bit(GLF_LRU, &gl->gl_flags);17981779 if (!spin_trylock(&gl->gl_lockref.lock)) {17991780add_back_to_lru:18001781 list_add(&gl->gl_lru, &lru_list);···18411820 if (!test_bit(GLF_LOCK, &gl->gl_flags)) {18421821 list_move(&gl->gl_lru, &dispose);18431822 atomic_dec(&lru_count);18441844- clear_bit(GLF_LRU, &gl->gl_flags);18451823 freed++;18461824 continue;18471825 }
···131131 if (test_bit(SDF_NORECOVERY, &sdp->sd_flags) || !sdp->sd_jdesc)132132 return;133133134134+ gfs2_ail_drain(sdp); /* frees all transactions */134135 inode = sdp->sd_jdesc->jd_inode;135136 ip = GFS2_I(inode);136137 i_gl = ip->i_gl;
+30-13
fs/io_uring.c
···783783 task_work_func_t func;784784};785785786786+enum {787787+ IORING_RSRC_FILE = 0,788788+ IORING_RSRC_BUFFER = 1,789789+};790790+786791/*787792 * NOTE! Each of the iocb union members has the file pointer788793 * as the first entry in their struct definition. So you can···82338228{82348229 int i, ret;8235823082318231+ imu->acct_pages = 0;82368232 for (i = 0; i < nr_pages; i++) {82378233 if (!PageCompound(pages[i])) {82388234 imu->acct_pages++;···96769670 IORING_FEAT_SUBMIT_STABLE | IORING_FEAT_RW_CUR_POS |96779671 IORING_FEAT_CUR_PERSONALITY | IORING_FEAT_FAST_POLL |96789672 IORING_FEAT_POLL_32BITS | IORING_FEAT_SQPOLL_NONFIXED |96799679- IORING_FEAT_EXT_ARG | IORING_FEAT_NATIVE_WORKERS;96739673+ IORING_FEAT_EXT_ARG | IORING_FEAT_NATIVE_WORKERS |96749674+ IORING_FEAT_RSRC_TAGS;9680967596819676 if (copy_to_user(params, p, sizeof(*p))) {96829677 ret = -EFAULT;···99179910}9918991199199912static int io_register_rsrc_update(struct io_ring_ctx *ctx, void __user *arg,99209920- unsigned size)99139913+ unsigned size, unsigned type)99219914{99229915 struct io_uring_rsrc_update2 up;99239916···99259918 return -EINVAL;99269919 if (copy_from_user(&up, arg, sizeof(up)))99279920 return -EFAULT;99289928- if (!up.nr)99219921+ if (!up.nr || up.resv)99299922 return -EINVAL;99309930- return __io_register_rsrc_update(ctx, up.type, &up, up.nr);99239923+ return __io_register_rsrc_update(ctx, type, &up, up.nr);99319924}9932992599339926static int io_register_rsrc(struct io_ring_ctx *ctx, void __user *arg,99349934- unsigned int size)99279927+ unsigned int size, unsigned int type)99359928{99369929 struct io_uring_rsrc_register rr;99379930···99429935 memset(&rr, 0, sizeof(rr));99439936 if (copy_from_user(&rr, arg, size))99449937 return -EFAULT;99459945- if (!rr.nr)99389938+ if (!rr.nr || rr.resv || rr.resv2)99469939 return -EINVAL;9947994099489948- switch (rr.type) {99419941+ switch (type) {99499942 case IORING_RSRC_FILE:99509943 return io_sqe_files_register(ctx, u64_to_user_ptr(rr.data),99519944 rr.nr, u64_to_user_ptr(rr.tags));···99679960 case IORING_REGISTER_PROBE:99689961 case IORING_REGISTER_PERSONALITY:99699962 case IORING_UNREGISTER_PERSONALITY:99709970- case IORING_REGISTER_RSRC:99719971- case IORING_REGISTER_RSRC_UPDATE:99639963+ case IORING_REGISTER_FILES2:99649964+ case IORING_REGISTER_FILES_UPDATE2:99659965+ case IORING_REGISTER_BUFFERS2:99669966+ case IORING_REGISTER_BUFFERS_UPDATE:99729967 return false;99739968 default:99749969 return true;···1009610087 case IORING_REGISTER_RESTRICTIONS:1009710088 ret = io_register_restrictions(ctx, arg, nr_args);1009810089 break;1009910099- case IORING_REGISTER_RSRC:1010010100- ret = io_register_rsrc(ctx, arg, nr_args);1009010090+ case IORING_REGISTER_FILES2:1009110091+ ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_FILE);1010110092 break;1010210102- case IORING_REGISTER_RSRC_UPDATE:1010310103- ret = io_register_rsrc_update(ctx, arg, nr_args);1009310093+ case IORING_REGISTER_FILES_UPDATE2:1009410094+ ret = io_register_rsrc_update(ctx, arg, nr_args,1009510095+ IORING_RSRC_FILE);1009610096+ break;1009710097+ case IORING_REGISTER_BUFFERS2:1009810098+ ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_BUFFER);1009910099+ break;1010010100+ case IORING_REGISTER_BUFFERS_UPDATE:1010110101+ ret = io_register_rsrc_update(ctx, arg, nr_args,1010210102+ IORING_RSRC_BUFFER);1010410103 break;1010510104 default:1010610105 ret = -EINVAL;
+1-1
fs/nfs/client.c
···406406407407 if (cl_init->hostname == NULL) {408408 WARN_ON(1);409409- return NULL;409409+ return ERR_PTR(-EINVAL);410410 }411411412412 /* see if the client already exists */
···424424 * events generated by the listener process itself, without disclosing425425 * the pids of other processes.426426 */427427- if (!capable(CAP_SYS_ADMIN) &&427427+ if (FAN_GROUP_FLAG(group, FANOTIFY_UNPRIV) &&428428 task_tgid(current) != event->pid)429429 metadata.pid = 0;430430431431- if (path && path->mnt && path->dentry) {431431+ /*432432+ * For now, fid mode is required for an unprivileged listener and433433+ * fid mode does not report fd in events. Keep this check anyway434434+ * for safety in case fid mode requirement is relaxed in the future435435+ * to allow unprivileged listener to get events with no fd and no fid.436436+ */437437+ if (!FAN_GROUP_FLAG(group, FANOTIFY_UNPRIV) &&438438+ path && path->mnt && path->dentry) {432439 fd = create_fd(group, path, &f);433440 if (fd < 0)434441 return fd;···10471040 int f_flags, fd;10481041 unsigned int fid_mode = flags & FANOTIFY_FID_BITS;10491042 unsigned int class = flags & FANOTIFY_CLASS_BITS;10431043+ unsigned int internal_flags = 0;1050104410511045 pr_debug("%s: flags=%x event_f_flags=%x\n",10521046 __func__, flags, event_f_flags);···10611053 */10621054 if ((flags & FANOTIFY_ADMIN_INIT_FLAGS) || !fid_mode)10631055 return -EPERM;10561056+10571057+ /*10581058+ * Setting the internal flag FANOTIFY_UNPRIV on the group10591059+ * prevents setting mount/filesystem marks on this group and10601060+ * prevents reporting pid and open fd in events.10611061+ */10621062+ internal_flags |= FANOTIFY_UNPRIV;10641063 }1065106410661065#ifdef CONFIG_AUDITSYSCALL···11201105 goto out_destroy_group;11211106 }1122110711231123- group->fanotify_data.flags = flags;11081108+ group->fanotify_data.flags = flags | internal_flags;11241109 group->memcg = get_mem_cgroup_from_mm(current->mm);1125111011261111 group->fanotify_data.merge_hash = fanotify_alloc_merge_hash();···13201305 group = f.file->private_data;1321130613221307 /*13231323- * An unprivileged user is not allowed to watch a mount point nor13241324- * a filesystem.13081308+ * An unprivileged user is not allowed to setup mount nor filesystem13091309+ * marks. This also includes setting up such marks by a group that13101310+ * was initialized by an unprivileged user.13251311 */13261312 ret = -EPERM;13271327- if (!capable(CAP_SYS_ADMIN) &&13131313+ if ((!capable(CAP_SYS_ADMIN) ||13141314+ FAN_GROUP_FLAG(group, FANOTIFY_UNPRIV)) &&13281315 mark_type != FAN_MARK_INODE)13291316 goto fput_and_out;13301317···14771460 max_marks = clamp(max_marks, FANOTIFY_OLD_DEFAULT_MAX_MARKS,14781461 FANOTIFY_DEFAULT_MAX_USER_MARKS);1479146214631463+ BUILD_BUG_ON(FANOTIFY_INIT_FLAGS & FANOTIFY_INTERNAL_GROUP_FLAGS);14801464 BUILD_BUG_ON(HWEIGHT32(FANOTIFY_INIT_FLAGS) != 10);14811465 BUILD_BUG_ON(HWEIGHT32(FANOTIFY_MARK_FLAGS) != 9);14821466
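The new BUILD_BUG_ON in the fanotify hunk pins down an invariant: internal group flags such as FANOTIFY_UNPRIV must never collide with the UAPI init flags a user can pass. As a rough standalone sketch of the same compile-time guard in plain C11 (the flag masks below are made up for illustration, not the real fanotify values):

```c
#include <assert.h>

/* Illustrative flag layout: UAPI init flags occupy low bits, internal
 * bookkeeping flags (like FANOTIFY_UNPRIV in the patch above) live in
 * high bits that no UAPI flag may ever claim. */
#define DEMO_INIT_FLAGS      0x000000ffu   /* hypothetical UAPI mask */
#define DEMO_INTERNAL_FLAGS  0x80000000u   /* hypothetical internal mask */

/* Fails the build, not the run, if the two masks ever overlap. */
static_assert((DEMO_INIT_FLAGS & DEMO_INTERNAL_FLAGS) == 0,
              "internal flags must not overlap UAPI flags");

/* Combine user-supplied flags with internal state when storing them on
 * the group, as group->fanotify_data.flags = flags | internal_flags does. */
static unsigned int group_flags(unsigned int uapi, unsigned int internal)
{
    return (uapi & DEMO_INIT_FLAGS) | (internal & DEMO_INTERNAL_FLAGS);
}
```

Because the masks are disjoint, the internal bits can later be tested (as FAN_GROUP_FLAG() does) without being confused for anything userspace requested.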
···830830831831struct virtchnl_proto_hdrs {832832 u8 tunnel_level;833833+ u8 pad[3];833834 /**834835 * specify where protocol header start from.835836 * 0 - from the outer layer
+1
include/linux/compiler_attributes.h
···199199 * must end with any of these keywords:200200 * break;201201 * fallthrough;202202+ * continue;202203 * goto <label>;203204 * return [expression];204205 *
+2-1
include/linux/entry-kvm.h
···33#define __LINUX_ENTRYKVM_H4455#include <linux/entry-common.h>66+#include <linux/tick.h>6778/* Transfer to guest mode work */89#ifdef CONFIG_KVM_XFER_TO_GUEST_WORK···5857static inline void xfer_to_guest_mode_prepare(void)5958{6059 lockdep_assert_irqs_disabled();6161- rcu_nocb_flush_deferred_wakeup();6060+ tick_nohz_user_enter_prepare();6261}63626463/**
···332332int host1x_device_init(struct host1x_device *device);333333int host1x_device_exit(struct host1x_device *device);334334335335-int __host1x_client_register(struct host1x_client *client,336336- struct lock_class_key *key);337337-#define host1x_client_register(class) \338338- ({ \339339- static struct lock_class_key __key; \340340- __host1x_client_register(class, &__key); \335335+void __host1x_client_init(struct host1x_client *client, struct lock_class_key *key);336336+void host1x_client_exit(struct host1x_client *client);337337+338338+#define host1x_client_init(client) \339339+ ({ \340340+ static struct lock_class_key __key; \341341+ __host1x_client_init(client, &__key); \342342+ })343343+344344+int __host1x_client_register(struct host1x_client *client);345345+346346+/*347347+ * Note that this wrapper calls __host1x_client_init() for compatibility348348+ * with existing callers. Callers that want to separately initialize and349349+ * register a host1x client must first initialize using either of the350350+ * __host1x_client_init() or host1x_client_init() functions and then use351351+ * the low-level __host1x_client_register() function to avoid the client352352+ * getting reinitialized.353353+ */354354+#define host1x_client_register(client) \355355+ ({ \356356+ static struct lock_class_key __key; \357357+ __host1x_client_init(client, &__key); \358358+ __host1x_client_register(client); \341359 })342360343361int host1x_client_unregister(struct host1x_client *client);
+9-1
include/linux/kvm_host.h
···11851185static inline unsigned long11861186__gfn_to_hva_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)11871187{11881188- return slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE;11881188+ /*11891189+ * The index was checked originally in search_memslots. To avoid11901190+ * that a malicious guest builds a Spectre gadget out of e.g. page11911191+ * table walks, do not let the processor speculate loads outside11921192+ * the guest's registered memslots.11931193+ */11941194+ unsigned long offset = gfn - slot->base_gfn;11951195+ offset = array_index_nospec(offset, slot->npages);11961196+ return slot->userspace_addr + offset * PAGE_SIZE;11891197}1190119811911199static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
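The kvm_host.h hunk clamps the gfn offset with array_index_nospec() before it feeds an address computation, so a mispredicted bounds check cannot be turned into a speculative out-of-bounds load. A minimal userspace sketch of the same branchless mask trick, mirroring the kernel's generic array_index_mask_nospec() fallback (function names here are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* All-ones mask when idx < size, all-zeroes otherwise, computed without
 * a conditional branch: the sign bit of (idx | (size - 1 - idx)) is set
 * exactly when idx is out of bounds (the subtraction wraps). */
static uint64_t index_mask_nospec(uint64_t idx, uint64_t size)
{
    return ~(int64_t)(idx | (size - 1 - idx)) >> 63;
}

/* Clamp idx to 0 when it is out of bounds, so even a speculatively
 * executed load stays inside the array. */
static uint64_t index_nospec(uint64_t idx, uint64_t size)
{
    return idx & index_mask_nospec(idx, size);
}
```

In-bounds indices pass through unchanged; out-of-bounds ones become 0, which is why the patch can safely multiply the clamped offset by PAGE_SIZE afterwards.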
···445445 */446446 atomic_t has_pinned;447447448448- /**449449- * @write_protect_seq: Locked when any thread is write450450- * protecting pages mapped by this mm to enforce a later COW,451451- * for instance during page table copying for fork().452452- */453453- seqcount_t write_protect_seq;454454-455448#ifdef CONFIG_MMU456449 atomic_long_t pgtables_bytes; /* PTE page table pages */457450#endif···453460 spinlock_t page_table_lock; /* Protects page tables and some454461 * counters455462 */463463+ /*464464+ * With some kernel config, the current mmap_lock's offset465465+ * inside 'mm_struct' is at 0x120, which is very optimal, as466466+ * its two hot fields 'count' and 'owner' sit in 2 different467467+ * cachelines, and when mmap_lock is highly contended, both468468+ * of the 2 fields will be accessed frequently, current layout469469+ * will help to reduce cache bouncing.470470+ *471471+ * So please be careful with adding new fields before472472+ * mmap_lock, which can easily push the 2 fields into one473473+ * cacheline.474474+ */456475 struct rw_semaphore mmap_lock;457476458477 struct list_head mmlist; /* List of maybe swapped mm's. These···485480 unsigned long stack_vm; /* VM_STACK */486481 unsigned long def_flags;487482483483+ /**484484+ * @write_protect_seq: Locked when any thread is write485485+ * protecting pages mapped by this mm to enforce a later COW,486486+ * for instance during page table copying for fork().487487+ */488488+ seqcount_t write_protect_seq;489489+488490 spinlock_t arg_lock; /* protect the below fields */491491+489492 unsigned long start_code, end_code, start_data, end_data;490493 unsigned long start_brk, brk, start_stack;491494 unsigned long arg_start, arg_end, env_start, env_end;
···432432 * To be differentiate with macro pte_mkyoung, this macro is used on platforms433433 * where software maintains page access bit.434434 */435435+#ifndef pte_sw_mkyoung436436+static inline pte_t pte_sw_mkyoung(pte_t pte)437437+{438438+ return pte;439439+}440440+#define pte_sw_mkyoung pte_sw_mkyoung441441+#endif442442+435443#ifndef pte_savedwrite436444#define pte_savedwrite pte_write437445#endif
···350350 * Only for tasks we track a moving average of the past instantaneous351351 * estimated utilization. This allows to absorb sporadic drops in utilization352352 * of an otherwise almost periodic task.353353+ *354354+ * The UTIL_AVG_UNCHANGED flag is used to synchronize util_est with util_avg355355+ * updates. When a task is dequeued, its util_est should not be updated if its356356+ * util_avg has not been updated in the meantime.357357+ * This information is mapped into the MSB bit of util_est.enqueued at dequeue358358+ * time. Since max value of util_est.enqueued for a task is 1024 (PELT util_avg359359+ * for a task) it is safe to use MSB.353360 */354361struct util_est {355362 unsigned int enqueued;356363 unsigned int ewma;357364#define UTIL_EST_WEIGHT_SHIFT 2365365+#define UTIL_AVG_UNCHANGED 0x80000000358366} __attribute__((__aligned__(sizeof(u64))));359367360368/*
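The comment added to struct util_est relies on util_est.enqueued never exceeding 1024 (the PELT util_avg ceiling for a task), which leaves bit 31 free to carry the UTIL_AVG_UNCHANGED synchronization flag. A hedged standalone sketch of the tag-and-mask pattern (not the scheduler code itself):

```c
#include <assert.h>

#define UTIL_AVG_UNCHANGED 0x80000000u  /* MSB flag, as in the patch */
#define TASK_UTIL_MAX      1024u        /* PELT util_avg ceiling for a task */

/* Tag an enqueued utilization as "util_avg unchanged since last update".
 * Safe because the value never reaches bit 31. */
static unsigned int util_tag_unchanged(unsigned int enqueued)
{
    assert(enqueued <= TASK_UTIL_MAX);  /* MSB must be free */
    return enqueued | UTIL_AVG_UNCHANGED;
}

/* Recover the plain utilization value, dropping the flag bit, as the
 * reworked _task_util_est() does with (enqueued & ~UTIL_AVG_UNCHANGED). */
static unsigned int util_untag(unsigned int enqueued)
{
    return enqueued & ~UTIL_AVG_UNCHANGED;
}
```

Using the MSB instead of the old LSB flag means readers that mask the bit out see the exact stored value, rather than one perturbed by the flag.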
+7
include/linux/tick.h
···1111#include <linux/context_tracking_state.h>1212#include <linux/cpumask.h>1313#include <linux/sched.h>1414+#include <linux/rcupdate.h>14151516#ifdef CONFIG_GENERIC_CLOCKEVENTS1617extern void __init tick_init(void);···299298{300299 if (tick_nohz_full_enabled())301300 __tick_nohz_task_switch();301301+}302302+303303+static inline void tick_nohz_user_enter_prepare(void)304304+{305305+ if (tick_nohz_full_cpu(smp_processor_id()))306306+ rcu_nocb_flush_deferred_wakeup();302307}303308304309#endif
···119119 * The link_support layer is used to add any Link Layer specific120120 * framing.121121 */122122-void caif_enroll_dev(struct net_device *dev, struct caif_dev_common *caifdev,122122+int caif_enroll_dev(struct net_device *dev, struct caif_dev_common *caifdev,123123 struct cflayer *link_support, int head_room,124124 struct cflayer **layer, int (**rcv_func)(125125 struct sk_buff *, struct net_device *,
+1-1
include/net/caif/cfcnfg.h
···6262 * @fcs: Specify if checksum is used in CAIF Framing Layer.6363 * @head_room: Head space needed by link specific protocol.6464 */6565-void6565+int6666cfcnfg_add_phy_layer(struct cfcnfg *cnfg,6767 struct net_device *dev, struct cflayer *phy_layer,6868 enum cfcnfg_phy_preference pref,
···193193 (sizeof(struct tls_offload_context_tx) + TLS_DRIVER_STATE_SIZE_TX)194194195195enum tls_context_flags {196196- TLS_RX_SYNC_RUNNING = 0,196196+ /* tls_device_down was called after the netdev went down, device state197197+ * was released, and kTLS works in software, even though rx_conf is198198+ * still TLS_HW (needed for transition).199199+ */200200+ TLS_RX_DEV_DEGRADED = 0,197201 /* Unlike RX where resync is driven entirely by the core in TX only198202 * the driver knows when things went out of sync, so we need the flag199203 * to be atomic.···270266271267 /* cache cold stuff */272268 struct proto *sk_proto;269269+ struct sock *sk;273270274271 void (*sk_destruct)(struct sock *sk);275272···453448struct sk_buff *454449tls_validate_xmit_skb(struct sock *sk, struct net_device *dev,455450 struct sk_buff *skb);451451+struct sk_buff *452452+tls_validate_xmit_skb_sw(struct sock *sk, struct net_device *dev,453453+ struct sk_buff *skb);456454457455static inline bool tls_is_sk_tx_device_offloaded(struct sock *sk)458456{
+1
include/uapi/linux/input-event-codes.h
···611611#define KEY_VOICECOMMAND 0x246 /* Listening Voice Command */612612#define KEY_ASSISTANT 0x247 /* AL Context-aware desktop assistant */613613#define KEY_KBD_LAYOUT_NEXT 0x248 /* AC Next Keyboard Layout Select */614614+#define KEY_EMOJI_PICKER 0x249 /* Show/hide emoji picker (HUTRR101) */614615615616#define KEY_BRIGHTNESS_MIN 0x250 /* Set Brightness to Minimum */616617#define KEY_BRIGHTNESS_MAX 0x251 /* Set Brightness to Maximum */
···1414#include <linux/jiffies.h>1515#include <linux/pid_namespace.h>1616#include <linux/proc_ns.h>1717+#include <linux/security.h>17181819#include "../../lib/kstrtox.h"1920···10701069 case BPF_FUNC_probe_read_user:10711070 return &bpf_probe_read_user_proto;10721071 case BPF_FUNC_probe_read_kernel:10731073- return &bpf_probe_read_kernel_proto;10721072+ return security_locked_down(LOCKDOWN_BPF_READ) < 0 ?10731073+ NULL : &bpf_probe_read_kernel_proto;10741074 case BPF_FUNC_probe_read_user_str:10751075 return &bpf_probe_read_user_str_proto;10761076 case BPF_FUNC_probe_read_kernel_str:10771077- return &bpf_probe_read_kernel_str_proto;10771077+ return security_locked_down(LOCKDOWN_BPF_READ) < 0 ?10781078+ NULL : &bpf_probe_read_kernel_str_proto;10781079 case BPF_FUNC_snprintf_btf:10791080 return &bpf_snprintf_btf_proto;10801081 case BPF_FUNC_snprintf:
+4
kernel/cgroup/cgroup-v1.c
···820820 struct cgroup *cgrp = kn->priv;821821 int ret;822822823823+ /* do not accept '\n' to prevent making /proc/<pid>/cgroup unparsable */824824+ if (strchr(new_name_str, '\n'))825825+ return -EINVAL;826826+823827 if (kernfs_type(kn) != KERNFS_DIR)824828 return -ENOTDIR;825829 if (kn->parent != new_parent)
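The cgroup1 rename check rejects names containing '\n' because /proc/<pid>/cgroup is a newline-delimited file: a newline inside a cgroup name would make every line after it unparsable. A minimal sketch of the validation (illustrative helper, not the kernel function):

```c
#include <errno.h>
#include <string.h>

/* A name containing '\n' would corrupt the newline-delimited
 * /proc/<pid>/cgroup format, so reject it before renaming. */
static int validate_cgroup_name(const char *name)
{
    if (strchr(name, '\n'))
        return -EINVAL;
    return 0;
}
```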
+3-2
kernel/entry/common.c
···55#include <linux/highmem.h>66#include <linux/livepatch.h>77#include <linux/audit.h>88+#include <linux/tick.h>89910#include "common.h"1011···187186 local_irq_disable_exit_to_user();188187189188 /* Check if any of the above work has queued a deferred wakeup */190190- rcu_nocb_flush_deferred_wakeup();189189+ tick_nohz_user_enter_prepare();191190192191 ti_work = READ_ONCE(current_thread_info()->flags);193192 }···203202 lockdep_assert_irqs_disabled();204203205204 /* Flush pending rcuog wakeup before the last need_resched() check */206206- rcu_nocb_flush_deferred_wakeup();205205+ tick_nohz_user_enter_prepare();207206208207 if (unlikely(ti_work & EXIT_TO_USER_MODE_WORK))209208 ti_work = exit_to_user_mode_loop(regs, ti_work);
···7070 if (!irq_work_claim(work))7171 return false;72727373- /*record irq_work call stack in order to print it in KASAN reports*/7474- kasan_record_aux_stack(work);7575-7673 /* Queue the entry and raise the IPI if needed. */7774 preempt_disable();7875 __irq_work_queue_local(work);
···34993499static inline void35003500update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq *gcfs_rq)35013501{35023502- long delta_avg, running_sum, runnable_sum = gcfs_rq->prop_runnable_sum;35023502+ long delta, running_sum, runnable_sum = gcfs_rq->prop_runnable_sum;35033503 unsigned long load_avg;35043504 u64 load_sum = 0;35053505- s64 delta_sum;35063505 u32 divider;3507350635083507 if (!runnable_sum)···35483549 load_sum = (s64)se_weight(se) * runnable_sum;35493550 load_avg = div_s64(load_sum, divider);3550355135513551- delta_sum = load_sum - (s64)se_weight(se) * se->avg.load_sum;35523552- delta_avg = load_avg - se->avg.load_avg;35523552+ delta = load_avg - se->avg.load_avg;3553355335543554 se->avg.load_sum = runnable_sum;35553555 se->avg.load_avg = load_avg;35563556- add_positive(&cfs_rq->avg.load_avg, delta_avg);35573557- add_positive(&cfs_rq->avg.load_sum, delta_sum);35563556+35573557+ add_positive(&cfs_rq->avg.load_avg, delta);35583558+ cfs_rq->avg.load_sum = cfs_rq->avg.load_avg * divider;35583559}3559356035603561static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum)···37653766 */37663767static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)37673768{37693769+ /*37703770+ * cfs_rq->avg.period_contrib can be used for both cfs_rq and se.37713771+ * See ___update_load_avg() for details.37723772+ */37733773+ u32 divider = get_pelt_divider(&cfs_rq->avg);37743774+37683775 dequeue_load_avg(cfs_rq, se);37693776 sub_positive(&cfs_rq->avg.util_avg, se->avg.util_avg);37703770- sub_positive(&cfs_rq->avg.util_sum, se->avg.util_sum);37773777+ cfs_rq->avg.util_sum = cfs_rq->avg.util_avg * divider;37713778 sub_positive(&cfs_rq->avg.runnable_avg, se->avg.runnable_avg);37723772- sub_positive(&cfs_rq->avg.runnable_sum, se->avg.runnable_sum);37793779+ cfs_rq->avg.runnable_sum = cfs_rq->avg.runnable_avg * divider;3773378037743781 add_tg_cfs_propagate(cfs_rq, -se->avg.load_sum);37753782···39073902{39083903 struct util_est ue = READ_ONCE(p->se.avg.util_est);3909390439103910- return (max(ue.ewma, ue.enqueued) | UTIL_AVG_UNCHANGED);39053905+ return max(ue.ewma, (ue.enqueued & ~UTIL_AVG_UNCHANGED));39113906}3912390739133908static inline unsigned long task_util_est(struct task_struct *p)···40074002 * Reset EWMA on utilization increases, the moving average is used only40084003 * to smooth utilization decreases.40094004 */40104010- ue.enqueued = (task_util(p) | UTIL_AVG_UNCHANGED);40054005+ ue.enqueued = task_util(p);40114006 if (sched_feat(UTIL_EST_FASTUP)) {40124007 if (ue.ewma < ue.enqueued) {40134008 ue.ewma = ue.enqueued;···40564051 ue.ewma += last_ewma_diff;40574052 ue.ewma >>= UTIL_EST_WEIGHT_SHIFT;40584053done:40544054+ ue.enqueued |= UTIL_AVG_UNCHANGED;40594055 WRITE_ONCE(p->se.avg.util_est, ue);4060405640614057 trace_sched_util_est_se_tp(&p->se);···80368030 /* Propagate pending load changes to the parent, if any: */80378031 se = cfs_rq->tg->se[cpu];80388032 if (se && !skip_blocked_update(se))80398039- update_load_avg(cfs_rq_of(se), se, 0);80338033+ update_load_avg(cfs_rq_of(se), se, UPDATE_TG);8040803480418035 /*80428036 * There can be a lot of idle CPU cgroups. Don't let fully
+1-10
kernel/sched/pelt.h
···4242 return LOAD_AVG_MAX - 1024 + avg->period_contrib;4343}44444545-/*4646- * When a task is dequeued, its estimated utilization should not be update if4747- * its util_avg has not been updated at least once.4848- * This flag is used to synchronize util_avg updates with util_est updates.4949- * We map this information into the LSB bit of the utilization saved at5050- * dequeue time (i.e. util_est.dequeued).5151- */5252-#define UTIL_AVG_UNCHANGED 0x15353-5445static inline void cfs_se_util_change(struct sched_avg *avg)5546{5647 unsigned int enqueued;···4958 if (!sched_feat(UTIL_EST))5059 return;51605252- /* Avoid store if the flag has been already set */6161+ /* Avoid store if the flag has been already reset */5362 enqueued = avg->util_est.enqueued;5463 if (!(enqueued & UTIL_AVG_UNCHANGED))5564 return;
···215215static __always_inline int216216bpf_probe_read_kernel_common(void *dst, u32 size, const void *unsafe_ptr)217217{218218- int ret = security_locked_down(LOCKDOWN_BPF_READ);218218+ int ret;219219220220- if (unlikely(ret < 0))221221- goto fail;222220 ret = copy_from_kernel_nofault(dst, unsafe_ptr, size);223221 if (unlikely(ret < 0))224224- goto fail;225225- return ret;226226-fail:227227- memset(dst, 0, size);222222+ memset(dst, 0, size);228223 return ret;229224}230225···241246static __always_inline int242247bpf_probe_read_kernel_str_common(void *dst, u32 size, const void *unsafe_ptr)243248{244244- int ret = security_locked_down(LOCKDOWN_BPF_READ);245245-246246- if (unlikely(ret < 0))247247- goto fail;249249+ int ret;248250249251 /*250252 * The strncpy_from_kernel_nofault() call will likely not fill the···254262 */255263 ret = strncpy_from_kernel_nofault(dst, unsafe_ptr, size);256264 if (unlikely(ret < 0))257257- goto fail;258258-259259- return ret;260260-fail:261261- memset(dst, 0, size);265265+ memset(dst, 0, size);262266 return ret;263267}264268···9991011 case BPF_FUNC_probe_read_user:10001012 return &bpf_probe_read_user_proto;10011013 case BPF_FUNC_probe_read_kernel:10021002- return &bpf_probe_read_kernel_proto;10141014+ return security_locked_down(LOCKDOWN_BPF_READ) < 0 ?10151015+ NULL : &bpf_probe_read_kernel_proto;10031016 case BPF_FUNC_probe_read_user_str:10041017 return &bpf_probe_read_user_str_proto;10051018 case BPF_FUNC_probe_read_kernel_str:10061006- return &bpf_probe_read_kernel_str_proto;10191019+ return security_locked_down(LOCKDOWN_BPF_READ) < 0 ?10201020+ NULL : &bpf_probe_read_kernel_str_proto;10071021#ifdef CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE10081022 case BPF_FUNC_probe_read:10091009- return &bpf_probe_read_compat_proto;10231023+ return security_locked_down(LOCKDOWN_BPF_READ) < 0 ?10241024+ NULL : &bpf_probe_read_compat_proto;10101025 case BPF_FUNC_probe_read_str:10111011- return &bpf_probe_read_compat_str_proto;10261026+ return security_locked_down(LOCKDOWN_BPF_READ) < 0 ?10271027+ NULL : &bpf_probe_read_compat_str_proto;10121028#endif10131029#ifdef CONFIG_CGROUPS10141030 case BPF_FUNC_get_current_cgroup_id:
+7-1
kernel/trace/ftrace.c
···1967196719681968static void print_ip_ins(const char *fmt, const unsigned char *p)19691969{19701970+ char ins[MCOUNT_INSN_SIZE];19701971 int i;19721972+19731973+ if (copy_from_kernel_nofault(ins, p, MCOUNT_INSN_SIZE)) {19741974+ printk(KERN_CONT "%s[FAULT] %px\n", fmt, p);19751975+ return;19761976+ }1971197719721978 printk(KERN_CONT "%s", fmt);1973197919741980 for (i = 0; i < MCOUNT_INSN_SIZE; i++)19751975- printk(KERN_CONT "%s%02x", i ? ":" : "", p[i]);19811981+ printk(KERN_CONT "%s%02x", i ? ":" : "", ins[i]);19761982}1977198319781984enum ftrace_bug_type ftrace_bug_type;
+1-1
kernel/trace/trace.c
···27362736 (entry = this_cpu_read(trace_buffered_event))) {27372737 /* Try to use the per cpu buffer first */27382738 val = this_cpu_inc_return(trace_buffered_event_cnt);27392739- if ((len < (PAGE_SIZE - sizeof(*entry))) && val == 1) {27392739+ if ((len < (PAGE_SIZE - sizeof(*entry) - sizeof(entry->array[0]))) && val == 1) {27402740 trace_event_setup(entry, type, trace_ctx);27412741 entry->array[0] = len;27422742 return entry;
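The corrected bound in trace.c accounts for both the event header and the array[0] length word stored before the payload; checking against PAGE_SIZE - sizeof(*entry) alone could overrun the per-CPU buffer page by the size of that length slot. A standalone sketch of the arithmetic (struct layout simplified, not the real trace entry):

```c
#include <assert.h>
#include <stddef.h>

#define DEMO_PAGE_SIZE 4096u

/* Simplified layout of a buffered trace event: a small header followed
 * by a flexible array whose first element holds the payload length,
 * loosely mirroring how trace_event_setup() uses entry->array[0]. */
struct demo_entry {
    unsigned short type;
    unsigned int array[];   /* array[0] holds the payload length */
};

/* The payload must fit after both the header and the length slot.
 * Omitting sizeof(array[0]) from the bound, as the buggy check did,
 * lets a worst-case payload spill past the end of the page. */
static int fits_in_page(size_t len)
{
    return len < DEMO_PAGE_SIZE - sizeof(struct demo_entry)
                                - sizeof(unsigned int);
}
```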
+1-1
lib/crc64.c
···3737/**3838 * crc64_be - Calculate bitwise big-endian ECMA-182 CRC643939 * @crc: seed value for computation. 0 or (u64)~0 for a new CRC calculation,4040- or the previous crc64 value if computing incrementally.4040+ * or the previous crc64 value if computing incrementally.4141 * @p: pointer to buffer over which CRC64 is run4242 * @len: length of buffer @p4343 */
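The kernel-doc fix above documents crc64_be(): seed with 0 for a fresh computation or with the previous result when working incrementally. As a rough userspace sketch, a bit-at-a-time big-endian ECMA-182 CRC64 equivalent to the kernel's table-driven version (the function name is illustrative; the polynomial is the one ECMA-182 specifies):

```c
#include <stddef.h>
#include <stdint.h>

#define CRC64_ECMA182_POLY 0x42F0E1EBA9EA3693ULL

/* Bitwise big-endian ECMA-182 CRC64. Seed crc with 0 for a new
 * computation, or with the previous return value to continue over
 * additional data, matching the crc64_be() kernel-doc above. */
static uint64_t crc64_be_bitwise(uint64_t crc, const void *p, size_t len)
{
    const uint8_t *buf = p;

    while (len--) {
        crc ^= (uint64_t)*buf++ << 56;          /* fold in next byte */
        for (int i = 0; i < 8; i++)             /* one step per bit */
            crc = (crc & (1ULL << 63)) ?
                  (crc << 1) ^ CRC64_ECMA182_POLY : crc << 1;
    }
    return crc;
}
```

Feeding the data in two chunks, seeding the second call with the first call's result, yields the same CRC as one pass, which is exactly the incremental usage the fixed comment describes.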
···17931793 SetPageHWPoison(page);17941794 ClearPageHWPoison(head);17951795 }17961796- remove_hugetlb_page(h, page, false);17961796+ remove_hugetlb_page(h, head, false);17971797 h->max_huge_pages--;17981798 spin_unlock_irq(&hugetlb_lock);17991799 update_and_free_page(h, head);···48894889 if (!page)48904890 goto out;48914891 } else if (!*pagep) {48924892- ret = -ENOMEM;48934893- page = alloc_huge_page(dst_vma, dst_addr, 0);48944894- if (IS_ERR(page))48924892+ /* If a page already exists, then it's UFFDIO_COPY for48934893+ * a non-missing case. Return -EEXIST.48944894+ */48954895+ if (vm_shared &&48964896+ hugetlbfs_pagecache_present(h, dst_vma, dst_addr)) {48974897+ ret = -EEXIST;48954898 goto out;48994899+ }49004900+49014901+ page = alloc_huge_page(dst_vma, dst_addr, 0);49024902+ if (IS_ERR(page)) {49034903+ ret = -ENOMEM;49044904+ goto out;49054905+ }4896490648974907 ret = copy_huge_page_from_user(page,48984908 (const void __user *) src_addr,
+2-2
mm/kasan/init.c
···220220/**221221 * kasan_populate_early_shadow - populate shadow memory region with222222 * kasan_early_shadow_page223223- * @shadow_start - start of the memory range to populate224224- * @shadow_end - end of the memory range to populate223223+ * @shadow_start: start of the memory range to populate224224+ * @shadow_end: end of the memory range to populate225225 */226226int __ref kasan_populate_early_shadow(const void *shadow_start,227227 const void *shadow_end)
+3-3
mm/kfence/core.c
···627627 * During low activity with no allocations we might wait a628628 * while; let's avoid the hung task warning.629629 */630630- wait_event_timeout(allocation_wait, atomic_read(&kfence_allocation_gate),631631- sysctl_hung_task_timeout_secs * HZ / 2);630630+ wait_event_idle_timeout(allocation_wait, atomic_read(&kfence_allocation_gate),631631+ sysctl_hung_task_timeout_secs * HZ / 2);632632 } else {633633- wait_event(allocation_wait, atomic_read(&kfence_allocation_gate));633633+ wait_event_idle(allocation_wait, atomic_read(&kfence_allocation_gate));634634 }635635636636 /* Disable static key and reset timer. */
···91589158 del_page_from_free_list(page_head, zone, page_order);91599159 break_down_buddy_pages(zone, page_head, page, 0,91609160 page_order, migratetype);91619161+ if (!is_migrate_isolate(migratetype))91629162+ __mod_zone_freepage_state(zone, -1, migratetype);91619163 ret = true;91629164 break;91639165 }
+6-1
net/bluetooth/hci_core.c
···16101610 } else {16111611 /* Init failed, cleanup */16121612 flush_work(&hdev->tx_work);16131613- flush_work(&hdev->cmd_work);16131613+16141614+ /* Since hci_rx_work() is possible to awake new cmd_work16151615+ * it should be flushed first to avoid unexpected call of16161616+ * hci_cmd_work()16171617+ */16141618 flush_work(&hdev->rx_work);16191619+ flush_work(&hdev->cmd_work);1615162016161621 skb_queue_purge(&hdev->cmd_q);16171622 skb_queue_purge(&hdev->rx_q);
···177177 if (kcmlen > stackbuf_size)178178 kcmsg_base = kcmsg = sock_kmalloc(sk, kcmlen, GFP_KERNEL);179179 if (kcmsg == NULL)180180- return -ENOBUFS;180180+ return -ENOMEM;181181182182 /* Now copy them over neatly. */183183 memset(kcmsg, 0, kcmlen);
+2-2
net/core/devlink.c
···705705 case DEVLINK_PORT_FLAVOUR_PHYSICAL:706706 case DEVLINK_PORT_FLAVOUR_CPU:707707 case DEVLINK_PORT_FLAVOUR_DSA:708708- case DEVLINK_PORT_FLAVOUR_VIRTUAL:709708 if (nla_put_u32(msg, DEVLINK_ATTR_PORT_NUMBER,710709 attrs->phys.port_number))711710 return -EMSGSIZE;···8630863186318632 switch (attrs->flavour) {86328633 case DEVLINK_PORT_FLAVOUR_PHYSICAL:86338633- case DEVLINK_PORT_FLAVOUR_VIRTUAL:86348634 if (!attrs->split)86358635 n = snprintf(name, len, "p%u", attrs->phys.port_number);86368636 else···86778679 n = snprintf(name, len, "pf%usf%u", attrs->pci_sf.pf,86788680 attrs->pci_sf.sf);86798681 break;86828682+ case DEVLINK_PORT_FLAVOUR_VIRTUAL:86838683+ return -EOPNOTSUPP;86808684 }8681868586828686 if (n >= len)
+1-1
net/core/fib_rules.c
···11681168{11691169 struct net *net;11701170 struct sk_buff *skb;11711171- int err = -ENOBUFS;11711171+ int err = -ENOMEM;1172117211731173 net = ops->fro_net;11741174 skb = nlmsg_new(fib_rule_nlmsg_size(ops, rule), GFP_KERNEL);
···12981298 if (!nla || nla_parse_nested_deprecated(attrs, NL802154_DEV_ADDR_ATTR_MAX, nla, nl802154_dev_addr_policy, NULL))12991299 return -EINVAL;1300130013011301- if (!attrs[NL802154_DEV_ADDR_ATTR_PAN_ID] ||13021302- !attrs[NL802154_DEV_ADDR_ATTR_MODE] ||13031303- !(attrs[NL802154_DEV_ADDR_ATTR_SHORT] ||13041304- attrs[NL802154_DEV_ADDR_ATTR_EXTENDED]))13011301+ if (!attrs[NL802154_DEV_ADDR_ATTR_PAN_ID] || !attrs[NL802154_DEV_ADDR_ATTR_MODE])13051302 return -EINVAL;1306130313071304 addr->pan_id = nla_get_le16(attrs[NL802154_DEV_ADDR_ATTR_PAN_ID]);13081305 addr->mode = nla_get_u32(attrs[NL802154_DEV_ADDR_ATTR_MODE]);13091306 switch (addr->mode) {13101307 case NL802154_DEV_ADDR_SHORT:13081308+ if (!attrs[NL802154_DEV_ADDR_ATTR_SHORT])13091309+ return -EINVAL;13111310 addr->short_addr = nla_get_le16(attrs[NL802154_DEV_ADDR_ATTR_SHORT]);13121311 break;13131312 case NL802154_DEV_ADDR_EXTENDED:13131313+ if (!attrs[NL802154_DEV_ADDR_ATTR_EXTENDED])13141314+ return -EINVAL;13141315 addr->extended_addr = nla_get_le64(attrs[NL802154_DEV_ADDR_ATTR_EXTENDED]);13151316 break;13161317 default:
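The nl802154 hunk above moves the attribute presence checks into the per-mode switch, so only the attribute matching the selected address mode is required. A minimal standalone sketch of that pattern (the names `addr_mode`, `attrs` and `addr_attrs_valid` are hypothetical, not the kernel's netlink API):

```c
#include <stdbool.h>
#include <stddef.h>

enum addr_mode { ADDR_MODE_SHORT, ADDR_MODE_EXTENDED };

struct attrs {
	const void *short_addr;    /* NULL when the attribute is absent */
	const void *extended_addr; /* NULL when the attribute is absent */
};

static bool addr_attrs_valid(enum addr_mode mode, const struct attrs *a)
{
	switch (mode) {
	case ADDR_MODE_SHORT:
		return a->short_addr != NULL;    /* only the short form is required */
	case ADDR_MODE_EXTENDED:
		return a->extended_addr != NULL; /* only the extended form is required */
	}
	return false; /* unknown mode: reject, like the kernel's default case */
}
```

Requiring both forms up front, as the removed code effectively did, rejects messages that legitimately carry only one of them.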
+8-5
net/ipv4/ipconfig.c
···886886887887888888/*889889- * Copy BOOTP-supplied string if not already set.889889+ * Copy BOOTP-supplied string890890 */891891static int __init ic_bootp_string(char *dest, char *src, int len, int max)892892{···935935 }936936 break;937937 case 12: /* Host name */938938- ic_bootp_string(utsname()->nodename, ext+1, *ext,939939- __NEW_UTS_LEN);940940- ic_host_name_set = 1;938938+ if (!ic_host_name_set) {939939+ ic_bootp_string(utsname()->nodename, ext+1, *ext,940940+ __NEW_UTS_LEN);941941+ ic_host_name_set = 1;942942+ }941943 break;942944 case 15: /* Domain name (DNS) */943943- ic_bootp_string(ic_domain, ext+1, *ext, sizeof(ic_domain));945945+ if (!ic_domain[0])946946+ ic_bootp_string(ic_domain, ext+1, *ext, sizeof(ic_domain));944947 break;945948 case 17: /* Root path */946949 if (!root_server_path[0])
+6-2
net/ipv6/route.c
···36733673 if (nh) {36743674 if (rt->fib6_src.plen) {36753675 NL_SET_ERR_MSG(extack, "Nexthops can not be used with source routing");36763676- goto out;36763676+ goto out_free;36773677 }36783678 if (!nexthop_get(nh)) {36793679 NL_SET_ERR_MSG(extack, "Nexthop has been deleted");36803680- goto out;36803680+ goto out_free;36813681 }36823682 rt->nh = nh;36833683 fib6_nh = nexthop_fib6_nh(rt->nh);···37133713 return rt;37143714out:37153715 fib6_info_release(rt);37163716+ return ERR_PTR(err);37173717+out_free:37183718+ ip_fib_metrics_put(rt->fib6_metrics);37193719+ kfree(rt);37163720 return ERR_PTR(err);37173721}37183722
+3
net/ipv6/sit.c
···271271 if (ipip6_tunnel_create(dev) < 0)272272 goto failed_free;273273274274+ if (!parms->name[0])275275+ strcpy(parms->name, dev->name);276276+274277 return nt;275278276279failed_free:
+5
net/kcm/kcmsock.c
···10661066 goto partial_message;10671067 }1068106810691069+ if (skb_has_frag_list(head)) {10701070+ kfree_skb_list(skb_shinfo(head)->frag_list);10711071+ skb_shinfo(head)->frag_list = NULL;10721072+ }10731073+10691074 if (head != kcm->seq_skb)10701075 kfree_skb(head);10711076
···984984 */985985 cached = tcf_ct_skb_nfct_cached(net, skb, p->zone, force);986986 if (!cached) {987987- if (!commit && tcf_ct_flow_table_lookup(p, skb, family)) {987987+ if (tcf_ct_flow_table_lookup(p, skb, family)) {988988 skip_add = true;989989 goto do_nat;990990 }···10221022 * even if the connection is already confirmed.10231023 */10241024 nf_conntrack_confirm(skb);10251025- } else if (!skip_add) {10261026- tcf_ct_flow_table_process_conn(p->ct_ft, ct, ctinfo);10271025 }10261026+10271027+ if (!skip_add)10281028+ tcf_ct_flow_table_process_conn(p->ct_ft, ct, ctinfo);1028102910291030out_push:10301031 skb_push_rcsum(skb, nh_ofs);···12021201 NULL, TCA_CT_UNSPEC,12031202 sizeof(p->zone));12041203 }12051205-12061206- if (p->zone == NF_CT_DEFAULT_ZONE_ID)12071207- return 0;1208120412091205 nf_ct_zone_init(&zone, p->zone, NF_CT_DEFAULT_ZONE_DIR, 0);12101206 tmpl = nf_ct_tmpl_alloc(net, &zone, GFP_KERNEL);
+4-4
net/sched/sch_htb.c
···14881488 struct Qdisc *old_q;1489148914901490 /* One ref for cl->leaf.q, the other for dev_queue->qdisc. */14911491- qdisc_refcount_inc(new_q);14911491+ if (new_q)14921492+ qdisc_refcount_inc(new_q);14921493 old_q = htb_graft_helper(dev_queue, new_q);14931494 WARN_ON(!(old_q->flags & TCQ_F_BUILTIN));14941495}···16761675 cl->parent->common.classid,16771676 NULL);16781677 if (q->offload) {16791679- if (new_q) {16781678+ if (new_q)16801679 htb_set_lockdep_class_child(new_q);16811681- htb_parent_to_leaf_offload(sch, dev_queue, new_q);16821682- }16801680+ htb_parent_to_leaf_offload(sch, dev_queue, new_q);16831681 }16841682 }16851683
+50-12
net/tls/tls_device.c
···5050static DECLARE_WORK(tls_device_gc_work, tls_device_gc_task);5151static LIST_HEAD(tls_device_gc_list);5252static LIST_HEAD(tls_device_list);5353+static LIST_HEAD(tls_device_down_list);5354static DEFINE_SPINLOCK(tls_device_lock);54555556static void tls_device_free_ctx(struct tls_context *ctx)···681680 struct tls_offload_context_rx *rx_ctx = tls_offload_ctx_rx(tls_ctx);682681 struct net_device *netdev;683682684684- if (WARN_ON(test_and_set_bit(TLS_RX_SYNC_RUNNING, &tls_ctx->flags)))685685- return;686686-687683 trace_tls_device_rx_resync_send(sk, seq, rcd_sn, rx_ctx->resync_type);684684+ rcu_read_lock();688685 netdev = READ_ONCE(tls_ctx->netdev);689686 if (netdev)690687 netdev->tlsdev_ops->tls_dev_resync(netdev, sk, seq, rcd_sn,691688 TLS_OFFLOAD_CTX_DIR_RX);692692- clear_bit_unlock(TLS_RX_SYNC_RUNNING, &tls_ctx->flags);689689+ rcu_read_unlock();693690 TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSRXDEVICERESYNC);694691}695692···759760 u32 req_seq;760761761762 if (tls_ctx->rx_conf != TLS_HW)763763+ return;764764+ if (unlikely(test_bit(TLS_RX_DEV_DEGRADED, &tls_ctx->flags)))762765 return;763766764767 prot = &tls_ctx->prot_info;···963962 is_encrypted, is_decrypted);964963965964 ctx->sw.decrypted |= is_decrypted;965965+966966+ if (unlikely(test_bit(TLS_RX_DEV_DEGRADED, &tls_ctx->flags))) {967967+ if (likely(is_encrypted || is_decrypted))968968+ return 0;969969+970970+ /* After tls_device_down disables the offload, the next SKB will971971+ * likely have initial fragments decrypted, and final ones not972972+ * decrypted. We need to reencrypt that single SKB.973973+ */974974+ return tls_device_reencrypt(sk, skb);975975+ }966976967977 /* Return immediately if the record is either entirely plaintext or968978 * entirely ciphertext. 
Otherwise handle reencrypt partially decrypted···13041292 spin_unlock_irqrestore(&tls_device_lock, flags);1305129313061294 list_for_each_entry_safe(ctx, tmp, &list, list) {12951295+ /* Stop offloaded TX and switch to the fallback.12961296+ * tls_is_sk_tx_device_offloaded will return false.12971297+ */12981298+ WRITE_ONCE(ctx->sk->sk_validate_xmit_skb, tls_validate_xmit_skb_sw);12991299+13001300+ /* Stop the RX and TX resync.13011301+ * tls_dev_resync must not be called after tls_dev_del.13021302+ */13031303+ WRITE_ONCE(ctx->netdev, NULL);13041304+13051305+ /* Start skipping the RX resync logic completely. */13061306+ set_bit(TLS_RX_DEV_DEGRADED, &ctx->flags);13071307+13081308+ /* Sync with inflight packets. After this point:13091309+ * TX: no non-encrypted packets will be passed to the driver.13101310+ * RX: resync requests from the driver will be ignored.13111311+ */13121312+ synchronize_net();13131313+13141314+ /* Release the offload context on the driver side. */13071315 if (ctx->tx_conf == TLS_HW)13081316 netdev->tlsdev_ops->tls_dev_del(netdev, ctx,13091317 TLS_OFFLOAD_CTX_DIR_TX);···13311299 !test_bit(TLS_RX_DEV_CLOSED, &ctx->flags))13321300 netdev->tlsdev_ops->tls_dev_del(netdev, ctx,13331301 TLS_OFFLOAD_CTX_DIR_RX);13341334- WRITE_ONCE(ctx->netdev, NULL);13351335- smp_mb__before_atomic(); /* pairs with test_and_set_bit() */13361336- while (test_bit(TLS_RX_SYNC_RUNNING, &ctx->flags))13371337- usleep_range(10, 200);13381338- dev_put(netdev);13391339- list_del_init(&ctx->list);1340130213411341- if (refcount_dec_and_test(&ctx->refcount))13421342- tls_device_free_ctx(ctx);13031303+ dev_put(netdev);13041304+13051305+ /* Move the context to a separate list for two reasons:13061306+ * 1. When the context is deallocated, list_del is called.13071307+ * 2. 
It's no longer an offloaded context, so we don't want to13081308+ * run offload-specific code on this context.13091309+ */13101310+ spin_lock_irqsave(&tls_device_lock, flags);13111311+ list_move_tail(&ctx->list, &tls_device_down_list);13121312+ spin_unlock_irqrestore(&tls_device_lock, flags);13131313+13141314+ /* Device contexts for RX and TX will be freed on sk_destruct13151315+ * by tls_device_free_ctx. rx_conf and tx_conf stay in TLS_HW.13161316+ */13431317 }1344131813451319 up_write(&device_offload_lock);
···117117 if (format != DRM_FORMAT_XRGB8888) {118118 pci_err(pdev, "format mismatch (0x%x != 0x%x)\n",119119 format, DRM_FORMAT_XRGB8888);120120- return -EINVAL;120120+ ret = -EINVAL;121121+ goto err_release_regions;121122 }122123 if (width < 100 || width > 10000) {123124 pci_err(pdev, "width (%d) out of range\n", width);124124- return -EINVAL;125125+ ret = -EINVAL;126126+ goto err_release_regions;125127 }126128 if (height < 100 || height > 10000) {127129 pci_err(pdev, "height (%d) out of range\n", height);128128- return -EINVAL;130130+ ret = -EINVAL;131131+ goto err_release_regions;129132 }130133 pci_info(pdev, "mdpy found: %dx%d framebuffer\n",131134 width, height);132135133136 info = framebuffer_alloc(sizeof(struct mdpy_fb_par), &pdev->dev);134134- if (!info)137137+ if (!info) {138138+ ret = -ENOMEM;135139 goto err_release_regions;140140+ }136141 pci_set_drvdata(pdev, info);137142 par = info->par;138143
+1-1
scripts/Makefile.modfinal
···5959quiet_cmd_btf_ko = BTF [M] $@6060 cmd_btf_ko = \6161 if [ -f vmlinux ]; then \6262- LLVM_OBJCOPY=$(OBJCOPY) $(PAHOLE) -J --btf_base vmlinux $@; \6262+ LLVM_OBJCOPY="$(OBJCOPY)" $(PAHOLE) -J --btf_base vmlinux $@; \6363 else \6464 printf "Skipping BTF generation for %s due to unavailability of vmlinux\n" $@ 1>&2; \6565 fi;
+1-1
scripts/link-vmlinux.sh
···240240 fi241241242242 info "BTF" ${2}243243- LLVM_OBJCOPY=${OBJCOPY} ${PAHOLE} -J ${extra_paholeopt} ${1}243243+ LLVM_OBJCOPY="${OBJCOPY}" ${PAHOLE} -J ${extra_paholeopt} ${1}244244245245 # Create ${2} which contains just .BTF section but no symbols. Add246246 # SHF_ALLOC because .BTF will be part of the vmlinux image. --strip-all
···29172917#ifdef CONFIG_PM_SLEEP29182918static int hda_codec_pm_prepare(struct device *dev)29192919{29202920+ dev->power.power_state = PMSG_SUSPEND;29202921 return pm_runtime_suspended(dev);29212922}2922292329232924static void hda_codec_pm_complete(struct device *dev)29242925{29252926 struct hda_codec *codec = dev_to_hda_codec(dev);29272927+29282928+ /* If no other pm-functions are called between prepare() and complete() */29292929+ if (dev->power.power_state.event == PM_EVENT_SUSPEND)29302930+ dev->power.power_state = PMSG_RESUME;2926293129272932 if (pm_runtime_suspended(dev) && (codec->jackpoll_interval ||29282933 hda_codec_need_resume(codec) || codec->forced_resume))
···22062206 break;22072207 case HDA_FIXUP_ACT_PROBE:2208220822092209- /* Set initial volume on Bullseye to -26 dB */22102210- if (codec->fixup_id == CS8409_BULLSEYE)22112211- snd_hda_codec_amp_init_stereo(codec, CS8409_CS42L42_DMIC_ADC_PIN_NID,22122212- HDA_INPUT, 0, 0xff, 0x19);22092209+ /* Set initial DMIC volume to -26 dB */22102210+ snd_hda_codec_amp_init_stereo(codec, CS8409_CS42L42_DMIC_ADC_PIN_NID,22112211+ HDA_INPUT, 0, 0xff, 0x19);22132212 snd_hda_gen_add_kctl(&spec->gen,22142213 NULL, &cs8409_cs42l42_hp_volume_mixer);22152214 snd_hda_gen_add_kctl(&spec->gen,
···9393 struct snd_soc_dai *dai)9494{9595 struct lpass_data *drvdata = snd_soc_dai_get_drvdata(dai);9696+ struct lpaif_i2sctl *i2sctl = drvdata->i2sctl;9797+ unsigned int id = dai->driver->id;96989799 clk_disable_unprepare(drvdata->mi2s_osr_clk[dai->driver->id]);100100+ /*101101+ * Ensure LRCLK is disabled even in device node validation.102102+ * Will not impact if disabled in lpass_cpu_daiops_trigger()103103+ * suspend.104104+ */105105+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)106106+ regmap_fields_write(i2sctl->spken, id, LPAIF_I2SCTL_SPKEN_DISABLE);107107+ else108108+ regmap_fields_write(i2sctl->micen, id, LPAIF_I2SCTL_MICEN_DISABLE);109109+110110+ /*111111+ * BCLK may not be enabled if lpass_cpu_daiops_prepare is called before112112+ * lpass_cpu_daiops_shutdown. It's paired with the clk_enable in113113+ * lpass_cpu_daiops_prepare.114114+ */115115+ if (drvdata->mi2s_was_prepared[dai->driver->id]) {116116+ drvdata->mi2s_was_prepared[dai->driver->id] = false;117117+ clk_disable(drvdata->mi2s_bit_clk[dai->driver->id]);118118+ }119119+98120 clk_unprepare(drvdata->mi2s_bit_clk[dai->driver->id]);99121}100122···297275 case SNDRV_PCM_TRIGGER_START:298276 case SNDRV_PCM_TRIGGER_RESUME:299277 case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:278278+ /*279279+ * Ensure lpass BCLK/LRCLK is enabled during280280+ * device resume as lpass_cpu_daiops_prepare() is not called281281+ * after the device resumes. We don't check mi2s_was_prepared before282282+ * enable/disable BCLK in trigger events because:283283+ * 1. These trigger events are paired, so the BCLK284284+ * enable_count is balanced.285285+ * 2. 
the BCLK can be shared (ex: headset and headset mic),286286+ * we need to increase the enable_count so that we don't287287+ * turn off the shared BCLK while other devices are using288288+ * it.289289+ */300290 if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {301291 ret = regmap_fields_write(i2sctl->spken, id,302292 LPAIF_I2SCTL_SPKEN_ENABLE);···330296 case SNDRV_PCM_TRIGGER_STOP:331297 case SNDRV_PCM_TRIGGER_SUSPEND:332298 case SNDRV_PCM_TRIGGER_PAUSE_PUSH:299299+ /*300300+ * To ensure lpass BCLK/LRCLK is disabled during301301+ * device suspend.302302+ */333303 if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {334304 ret = regmap_fields_write(i2sctl->spken, id,335305 LPAIF_I2SCTL_SPKEN_DISABLE);···353315 return ret;354316}355317318318+static int lpass_cpu_daiops_prepare(struct snd_pcm_substream *substream,319319+ struct snd_soc_dai *dai)320320+{321321+ struct lpass_data *drvdata = snd_soc_dai_get_drvdata(dai);322322+ struct lpaif_i2sctl *i2sctl = drvdata->i2sctl;323323+ unsigned int id = dai->driver->id;324324+ int ret;325325+326326+ /*327327+ * Ensure lpass BCLK/LRCLK is enabled bit before playback/capture328328+ * data flow starts. This allows other codec to have some delay before329329+ * the data flow.330330+ * (ex: to drop start up pop noise before capture starts).331331+ */332332+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)333333+ ret = regmap_fields_write(i2sctl->spken, id, LPAIF_I2SCTL_SPKEN_ENABLE);334334+ else335335+ ret = regmap_fields_write(i2sctl->micen, id, LPAIF_I2SCTL_MICEN_ENABLE);336336+337337+ if (ret) {338338+ dev_err(dai->dev, "error writing to i2sctl reg: %d\n", ret);339339+ return ret;340340+ }341341+342342+ /*343343+ * Check mi2s_was_prepared before enabling BCLK as lpass_cpu_daiops_prepare can344344+ * be called multiple times. 
It's paired with the clk_disable in345345+ * lpass_cpu_daiops_shutdown.346346+ */347347+ if (!drvdata->mi2s_was_prepared[dai->driver->id]) {348348+ ret = clk_enable(drvdata->mi2s_bit_clk[id]);349349+ if (ret) {350350+ dev_err(dai->dev, "error in enabling mi2s bit clk: %d\n", ret);351351+ return ret;352352+ }353353+ drvdata->mi2s_was_prepared[dai->driver->id] = true;354354+ }355355+ return 0;356356+}357357+356358const struct snd_soc_dai_ops asoc_qcom_lpass_cpu_dai_ops = {357359 .set_sysclk = lpass_cpu_daiops_set_sysclk,358360 .startup = lpass_cpu_daiops_startup,359361 .shutdown = lpass_cpu_daiops_shutdown,360362 .hw_params = lpass_cpu_daiops_hw_params,361363 .trigger = lpass_cpu_daiops_trigger,364364+ .prepare = lpass_cpu_daiops_prepare,362365};363366EXPORT_SYMBOL_GPL(asoc_qcom_lpass_cpu_dai_ops);364367
+4
sound/soc/qcom/lpass.h
···6767 /* MI2S SD lines to use for playback/capture */6868 unsigned int mi2s_playback_sd_mode[LPASS_MAX_MI2S_PORTS];6969 unsigned int mi2s_capture_sd_mode[LPASS_MAX_MI2S_PORTS];7070+7171+ /* The state of MI2S prepare dai_ops was called */7272+ bool mi2s_was_prepared[LPASS_MAX_MI2S_PORTS];7373+7074 int hdmi_port_enable;71757276 /* low-power audio interface (LPAIF) registers */
+2
sound/soc/soc-core.c
···22252225 return NULL;2226222622272227 name = devm_kstrdup(dev, devname, GFP_KERNEL);22282228+ if (!name)22292229+ return NULL;2228223022292231 /* are we a "%s.%d" name (platform and SPI components) */22302232 found = strstr(name, dev->driver->name);
+3-3
sound/soc/soc-topology.c
···19011901 * @src: older version of pcm as a source19021902 * @pcm: latest version of pcm created from the source19031903 *19041904- * Support from vesion 4. User should free the returned pcm manually.19041904+ * Support from version 4. User should free the returned pcm manually.19051905 */19061906static int pcm_new_ver(struct soc_tplg *tplg,19071907 struct snd_soc_tplg_pcm *src,···20892089 * @src: old version of phyical link config as a source20902090 * @link: latest version of physical link config created from the source20912091 *20922092- * Support from vesion 4. User need free the returned link config manually.20922092+ * Support from version 4. User need free the returned link config manually.20932093 */20942094static int link_new_ver(struct soc_tplg *tplg,20952095 struct snd_soc_tplg_link_config *src,···24002400 * @src: old version of manifest as a source24012401 * @manifest: latest version of manifest created from the source24022402 *24032403- * Support from vesion 4. Users need free the returned manifest manually.24032403+ * Support from version 4. Users need free the returned manifest manually.24042404 */24052405static int manifest_new_ver(struct soc_tplg *tplg,24062406 struct snd_soc_tplg_manifest *src,
···399399 }400400 /* TODO: Ensure the @path is initramfs/initrd image */401401 if (fstat(fd, &stat) < 0) {402402+ ret = -errno;402403 pr_err("Failed to get the size of %s\n", path);403404 goto out;404405 }
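The hunk above captures `-errno` into `ret` immediately after `fstat()` fails, before `pr_err()` runs. A standalone sketch of that pattern (not the bootconfig code; `open_checked` is an illustrative helper), since any intervening libc call may clobber `errno`:

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Return 0 on success or a negative errno captured right at the
 * point of failure, before any logging call can overwrite it. */
static int open_checked(const char *path)
{
	int fd = open(path, O_RDONLY);

	if (fd < 0) {
		int ret = -errno; /* save first ... */

		fprintf(stderr, "Failed to open %s\n", path); /* ... log after */
		return ret;
	}
	close(fd);
	return 0;
}
```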
+4
tools/objtool/arch/x86/decode.c
···747747748748 list_for_each_entry(insn, &file->retpoline_call_list, call_node) {749749750750+ if (insn->type != INSN_JUMP_DYNAMIC &&751751+ insn->type != INSN_CALL_DYNAMIC)752752+ continue;753753+750754 if (!strcmp(insn->sec->name, ".text.__x86.indirect_thunk"))751755 continue;752756
···521521522522 evsel->bperf_leader_link_fd = bpf_link_get_fd_by_id(entry.link_id);523523 if (evsel->bperf_leader_link_fd < 0 &&524524- bperf_reload_leader_program(evsel, attr_map_fd, &entry))524524+ bperf_reload_leader_program(evsel, attr_map_fd, &entry)) {525525+ err = -1;525526 goto out;526526-527527+ }527528 /*528529 * The bpf_link holds reference to the leader program, and the529530 * leader program holds reference to the maps. Therefore, if···551550 /* Step 2: load the follower skeleton */552551 evsel->follower_skel = bperf_follower_bpf__open();553552 if (!evsel->follower_skel) {553553+ err = -1;554554 pr_err("Failed to open follower skeleton\n");555555 goto out;556556 }
+6-2
tools/perf/util/dwarf-aux.c
···975975 if ((tag == DW_TAG_formal_parameter ||976976 tag == DW_TAG_variable) &&977977 die_compare_name(die_mem, fvp->name) &&978978- /* Does the DIE have location information or external instance? */978978+ /*979979+ * Does the DIE have location information or const value980980+ * or external instance?981981+ */979982 (dwarf_attr(die_mem, DW_AT_external, &attr) ||980980- dwarf_attr(die_mem, DW_AT_location, &attr)))983983+ dwarf_attr(die_mem, DW_AT_location, &attr) ||984984+ dwarf_attr(die_mem, DW_AT_const_value, &attr)))981985 return DIE_FIND_CB_END;982986 if (dwarf_haspc(die_mem, fvp->addr))983987 return DIE_FIND_CB_CONTINUE;
···4343 VM_MODE_P40V48_4K,4444 VM_MODE_P40V48_64K,4545 VM_MODE_PXXV48_4K, /* For 48bits VA but ANY bits PA */4646+ VM_MODE_P47V64_4K,4647 NUM_VM_MODES,4748};4849···61606261#elif defined(__s390x__)63626464-#define VM_MODE_DEFAULT VM_MODE_P52V48_4K6363+#define VM_MODE_DEFAULT VM_MODE_P47V64_4K6564#define MIN_PAGE_SHIFT 12U6665#define ptes_per_page(page_size) ((page_size) / 16)6766···286285 uint32_t num_percpu_pages, void *guest_code,287286 uint32_t vcpuids[]);288287289289-/* Like vm_create_default_with_vcpus, but accepts mode as a parameter */288288+/* Like vm_create_default_with_vcpus, but accepts mode and slot0 memory as a parameter */290289struct kvm_vm *vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,291291- uint64_t extra_mem_pages, uint32_t num_percpu_pages,292292- void *guest_code, uint32_t vcpuids[]);290290+ uint64_t slot0_mem_pages, uint64_t extra_mem_pages,291291+ uint32_t num_percpu_pages, void *guest_code,292292+ uint32_t vcpuids[]);293293294294/*295295 * Adds a vCPU with reasonable defaults (e.g. a stack)
+1-1
tools/testing/selftests/kvm/kvm_page_table_test.c
···268268269269 /* Create a VM with enough guest pages */270270 guest_num_pages = test_mem_size / guest_page_size;271271- vm = vm_create_with_vcpus(mode, nr_vcpus,271271+ vm = vm_create_with_vcpus(mode, nr_vcpus, DEFAULT_GUEST_PHY_PAGES,272272 guest_num_pages, 0, guest_code, NULL);273273274274 /* Align down GPA of the testing memslot */
+43-9
tools/testing/selftests/kvm/lib/kvm_util.c
···175175 [VM_MODE_P40V48_4K] = "PA-bits:40, VA-bits:48, 4K pages",176176 [VM_MODE_P40V48_64K] = "PA-bits:40, VA-bits:48, 64K pages",177177 [VM_MODE_PXXV48_4K] = "PA-bits:ANY, VA-bits:48, 4K pages",178178+ [VM_MODE_P47V64_4K] = "PA-bits:47, VA-bits:64, 4K pages",178179 };179180 _Static_assert(sizeof(strings)/sizeof(char *) == NUM_VM_MODES,180181 "Missing new mode strings?");···193192 { 40, 48, 0x1000, 12 },194193 { 40, 48, 0x10000, 16 },195194 { 0, 0, 0x1000, 12 },195195+ { 47, 64, 0x1000, 12 },196196};197197_Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,198198 "Missing new mode params?");···279277 TEST_FAIL("VM_MODE_PXXV48_4K not supported on non-x86 platforms");280278#endif281279 break;280280+ case VM_MODE_P47V64_4K:281281+ vm->pgtable_levels = 5;282282+ break;282283 default:283284 TEST_FAIL("Unknown guest mode, mode: 0x%x", mode);284285 }···313308 return vm;314309}315310311311+/*312312+ * VM Create with customized parameters313313+ *314314+ * Input Args:315315+ * mode - VM Mode (e.g. VM_MODE_P52V48_4K)316316+ * nr_vcpus - VCPU count317317+ * slot0_mem_pages - Slot0 physical memory size318318+ * extra_mem_pages - Non-slot0 physical memory total size319319+ * num_percpu_pages - Per-cpu physical memory pages320320+ * guest_code - Guest entry point321321+ * vcpuids - VCPU IDs322322+ *323323+ * Output Args: None324324+ *325325+ * Return:326326+ * Pointer to opaque structure that describes the created VM.327327+ *328328+ * Creates a VM with the mode specified by mode (e.g. 
VM_MODE_P52V48_4K),329329+ * with customized slot0 memory size, at least 512 pages currently.330330+ * extra_mem_pages is only used to calculate the maximum page table size,331331+ * no real memory allocation for non-slot0 memory in this function.332332+ */316333struct kvm_vm *vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,317317- uint64_t extra_mem_pages, uint32_t num_percpu_pages,318318- void *guest_code, uint32_t vcpuids[])334334+ uint64_t slot0_mem_pages, uint64_t extra_mem_pages,335335+ uint32_t num_percpu_pages, void *guest_code,336336+ uint32_t vcpuids[])319337{338338+ uint64_t vcpu_pages, extra_pg_pages, pages;339339+ struct kvm_vm *vm;340340+ int i;341341+342342+ /* Force slot0 memory size to be no smaller than DEFAULT_GUEST_PHY_PAGES */343343+ if (slot0_mem_pages < DEFAULT_GUEST_PHY_PAGES)344344+ slot0_mem_pages = DEFAULT_GUEST_PHY_PAGES;345345+320346 /* The maximum page table size for a memory region will be when the321347 * smallest pages are used. Considering each page contains x page322348 * table descriptors, the total extra size for page tables (for extra323349 * N pages) will be: N/x+N/x^2+N/x^3+... 
which is definitely smaller324350 * than N/x*2.325351 */326326- uint64_t vcpu_pages = (DEFAULT_STACK_PGS + num_percpu_pages) * nr_vcpus;327327- uint64_t extra_pg_pages = (extra_mem_pages + vcpu_pages) / PTES_PER_MIN_PAGE * 2;328328- uint64_t pages = DEFAULT_GUEST_PHY_PAGES + extra_mem_pages + vcpu_pages + extra_pg_pages;329329- struct kvm_vm *vm;330330- int i;352352+ vcpu_pages = (DEFAULT_STACK_PGS + num_percpu_pages) * nr_vcpus;353353+ extra_pg_pages = (slot0_mem_pages + extra_mem_pages + vcpu_pages) / PTES_PER_MIN_PAGE * 2;354354+ pages = slot0_mem_pages + vcpu_pages + extra_pg_pages;331355332356 TEST_ASSERT(nr_vcpus <= kvm_check_cap(KVM_CAP_MAX_VCPUS),333357 "nr_vcpus = %d too large for host, max-vcpus = %d",···388354 uint32_t num_percpu_pages, void *guest_code,389355 uint32_t vcpuids[])390356{391391- return vm_create_with_vcpus(VM_MODE_DEFAULT, nr_vcpus, extra_mem_pages,392392- num_percpu_pages, guest_code, vcpuids);357357+ return vm_create_with_vcpus(VM_MODE_DEFAULT, nr_vcpus, DEFAULT_GUEST_PHY_PAGES,358358+ extra_mem_pages, num_percpu_pages, guest_code, vcpuids);393359}394360395361struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
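The `vm_create_with_vcpus()` comment above bounds the page-table overhead for N extra pages by the geometric series N/x + N/x^2 + N/x^3 + ... < 2*N/x (for x >= 2 descriptors per page-table page). A quick standalone check of that bound; `pt_overhead()` is only an illustration, not a selftests helper:

```c
/* Sum the pages needed at each paging level for n mapped pages,
 * with x page-table descriptors per page-table page. */
static unsigned long pt_overhead(unsigned long n, unsigned long x)
{
	unsigned long total = 0;
	unsigned long level;

	for (level = n / x; level; level /= x)
		total += level; /* one term of n/x, n/x^2, ... */
	return total;
}
```

With x = 512 (4K pages of 8-byte PTEs) the series converges fast, so the 2*N/x bound is generous.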
+1-1
tools/testing/selftests/kvm/lib/perf_test_util.c
···6969 TEST_ASSERT(vcpu_memory_bytes % perf_test_args.guest_page_size == 0,7070 "Guest memory size is not guest page size aligned.");71717272- vm = vm_create_with_vcpus(mode, vcpus,7272+ vm = vm_create_with_vcpus(mode, vcpus, DEFAULT_GUEST_PHY_PAGES,7373 (vcpus * vcpu_memory_bytes) / perf_test_args.guest_page_size,7474 0, guest_code, NULL);7575