···
 Zoned Filesystems
 =================

-For zoned file systems, the following attribute is exposed in:
+For zoned file systems, the following attributes are exposed in:

  /sys/fs/xfs/<dev>/zoned/
···
 is limited by the capabilities of the backing zoned device, file system
 size and the max_open_zones mount option.

-Zoned Filesystems
-=================
-
-For zoned file systems, the following attributes are exposed in:
-
- /sys/fs/xfs/<dev>/zoned/
-
- max_open_zones (Min: 1 Default: Varies Max: UINTMAX)
- This read-only attribute exposes the maximum number of open zones
- available for data placement. The value is determined at mount time and
- is limited by the capabilities of the backing zoned device, file system
- size and the max_open_zones mount option.
-
- zonegc_low_space (Min: 0 Default: 0 Max: 100)
- Define a percentage for how much of the unused space that GC should keep
- available for writing. A high value will reclaim more of the space
- occupied by unused blocks, creating a larger buffer against write
- bursts at the cost of increased write amplification. Regardless
- of this value, garbage collection will always aim to free a minimum
- amount of blocks to keep max_open_zones open for data placement purposes.
+ zonegc_low_space (Min: 0 Default: 0 Max: 100)
+ Define a percentage for how much of the unused space that GC should keep
+ available for writing. A high value will reclaim more of the space
+ occupied by unused blocks, creating a larger buffer against write
+ bursts at the cost of increased write amplification. Regardless
+ of this value, garbage collection will always aim to free a minimum
+ amount of blocks to keep max_open_zones open for data placement purposes.
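As a quick illustration of the two attributes documented above (not part of the patch), here is a minimal userspace C sketch that reads max_open_zones and sets zonegc_low_space. The device name "sda" and the value 25 are placeholder assumptions.

#include <stdio.h>

int main(void)
{
	char buf[64];
	FILE *f;

	/* max_open_zones is read-only */
	f = fopen("/sys/fs/xfs/sda/zoned/max_open_zones", "r");
	if (f) {
		if (fgets(buf, sizeof(buf), f))
			printf("max_open_zones: %s", buf);
		fclose(f);
	}

	/* zonegc_low_space accepts a percentage between 0 and 100 */
	f = fopen("/sys/fs/xfs/sda/zoned/zonegc_low_space", "w");
	if (f) {
		fputs("25\n", f);
		fclose(f);
	}
	return 0;
}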
+6-6
Documentation/arch/openrisc/openrisc_port.rst
···
 For information about OpenRISC processors and ongoing development:

- ======= =============================
+ ======= ==============================
  website https://openrisc.io
- email   openrisc@lists.librecores.org
- ======= =============================
+ email   linux-openrisc@vger.kernel.org
+ ======= ==============================

 ---------------------------------------------------------------------
···
 Instructions for building the different toolchains can be found on openrisc.io
 or Stafford's toolchain build and release scripts.

- ========== =================================================
- binaries   https://github.com/openrisc/or1k-gcc/releases
+ ========== ==========================================================
+ binaries   https://github.com/stffrdhrn/or1k-toolchain-build/releases
  toolchains https://openrisc.io/software
  building   https://github.com/stffrdhrn/or1k-toolchain-build
- ========== =================================================
+ ========== ==========================================================

 2) Building
+8
Documentation/bpf/bpf_devel_QA.rst
···
 into the Linux kernel, please implement support into LLVM's BPF back
 end. See LLVM_ section below for further information.

+Q: What "BPF_INTERNAL" symbol namespace is for?
+-----------------------------------------------
+A: Symbols exported as BPF_INTERNAL can only be used by BPF infrastructure
+like preload kernel modules with light skeleton. Most symbols outside
+of BPF_INTERNAL are not expected to be used by code outside of BPF either.
+Symbols may lack the designation because they predate the namespaces,
+or due to an oversight.
+
 Stable submission
 =================
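For orientation only (not part of the patch): on kernels where symbol namespaces are given as string literals, exporting into and importing from the BPF_INTERNAL namespace looks roughly like the sketch below. The symbol name my_bpf_helper is hypothetical.

#include <linux/export.h>
#include <linux/module.h>

/* Provider side: the symbol is only visible to importers of BPF_INTERNAL. */
int my_bpf_helper(void)
{
	return 0;
}
EXPORT_SYMBOL_NS_GPL(my_bpf_helper, "BPF_INTERNAL");

/* Consumer side (e.g. a BPF preload module) must import the namespace: */
MODULE_IMPORT_NS("BPF_INTERNAL");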
···
     $ref: /schemas/types.yaml#/definitions/uint32-array
     items:
       - minimum: 0
-        maximum: 7
+        maximum: 31
         description:
           Offset in bit within the address range specified by reg.
       - minimum: 1
···
 	die_if_kernel("Kernel ale access", regs);
 	force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)regs->csr_badvaddr);
 #else
+	bool pie = regs_irqs_disabled(regs);
 	unsigned int *pc;

-	if (regs->csr_prmd & CSR_PRMD_PIE)
+	if (!pie)
 		local_irq_enable();

 	perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, regs->csr_badvaddr);
···
 	die_if_kernel("Kernel ale access", regs);
 	force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)regs->csr_badvaddr);
 out:
-	if (regs->csr_prmd & CSR_PRMD_PIE)
+	if (!pie)
 		local_irq_disable();
 #endif
 	irqentry_exit(regs, state);
···
 asmlinkage void noinstr do_bce(struct pt_regs *regs)
 {
 	bool user = user_mode(regs);
+	bool pie = regs_irqs_disabled(regs);
 	unsigned long era = exception_era(regs);
 	u64 badv = 0, lower = 0, upper = ULONG_MAX;
 	union loongarch_instruction insn;
 	irqentry_state_t state = irqentry_enter(regs);

-	if (regs->csr_prmd & CSR_PRMD_PIE)
+	if (!pie)
 		local_irq_enable();

 	current->thread.trap_nr = read_csr_excode();
···
 	force_sig_bnderr((void __user *)badv, (void __user *)lower, (void __user *)upper);

 out:
-	if (regs->csr_prmd & CSR_PRMD_PIE)
+	if (!pie)
 		local_irq_disable();

 	irqentry_exit(regs, state);
···
 asmlinkage void noinstr do_bp(struct pt_regs *regs)
 {
 	bool user = user_mode(regs);
+	bool pie = regs_irqs_disabled(regs);
 	unsigned int opcode, bcode;
 	unsigned long era = exception_era(regs);
 	irqentry_state_t state = irqentry_enter(regs);

-	if (regs->csr_prmd & CSR_PRMD_PIE)
+	if (!pie)
 		local_irq_enable();

 	if (__get_inst(&opcode, (u32 *)era, user))
···
 	}

 out:
-	if (regs->csr_prmd & CSR_PRMD_PIE)
+	if (!pie)
 		local_irq_disable();

 	irqentry_exit(regs, state);
···

 asmlinkage void noinstr do_lbt(struct pt_regs *regs)
 {
+	bool pie = regs_irqs_disabled(regs);
 	irqentry_state_t state = irqentry_enter(regs);

 	/*
···
 	 * (including the user using 'MOVGR2GCSR' to turn on TM, which
 	 * will not trigger the BTE), we need to check PRMD first.
 	 */
-	if (regs->csr_prmd & CSR_PRMD_PIE)
+	if (!pie)
 		local_irq_enable();

 	if (!cpu_has_lbt) {
···
 	preempt_enable();

 out:
-	if (regs->csr_prmd & CSR_PRMD_PIE)
+	if (!pie)
 		local_irq_disable();

 	irqentry_exit(regs, state);
···
 	ret = kvm_io_bus_read(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val);
 	srcu_read_unlock(&vcpu->kvm->srcu, idx);
 	if (unlikely(ret)) {
-		kvm_err("%s: : read date from addr %llx failed\n", __func__, addr);
+		kvm_err("%s: : read data from addr %llx failed\n", __func__, addr);
 		return ret;
 	}
 	/* Construct the mask by scanning the bit 27-30 */
···
 	ret = kvm_io_bus_write(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val);
 	srcu_read_unlock(&vcpu->kvm->srcu, idx);
 	if (unlikely(ret))
-		kvm_err("%s: : write date to addr %llx failed\n", __func__, addr);
+		kvm_err("%s: : write data to addr %llx failed\n", __func__, addr);

 	return ret;
 }
+2-2
arch/loongarch/kvm/main.c
···
 	/*
 	 * Enable virtualization features granting guest direct control of
 	 * certain features:
-	 * GCI=2:       Trap on init or unimplement cache instruction.
+	 * GCI=2:       Trap on init or unimplemented cache instruction.
 	 * TORU=0:      Trap on Root Unimplement.
 	 * CACTRL=1:    Root control cache.
-	 * TOP=0:       Trap on Previlege.
+	 * TOP=0:       Trap on Privilege.
 	 * TOE=0:       Trap on Exception.
 	 * TIT=0:       Trap on Timer.
 	 */
+8
arch/loongarch/kvm/vcpu.c
···
 	vcpu->arch.aux_inuse &= ~KVM_LARCH_SWCSR_LATEST;

 	if (kvm_request_pending(vcpu) || xfer_to_guest_mode_work_pending()) {
+		kvm_lose_pmu(vcpu);
 		/* make sure the vcpu mode has been written */
 		smp_store_mb(vcpu->mode, OUTSIDE_GUEST_MODE);
 		local_irq_enable();
···
 		vcpu->arch.st.guest_addr = 0;
 		memset(&vcpu->arch.irq_pending, 0, sizeof(vcpu->arch.irq_pending));
 		memset(&vcpu->arch.irq_clear, 0, sizeof(vcpu->arch.irq_clear));
+
+		/*
+		 * When vCPU reset, clear the ESTAT and GINTC registers
+		 * Other CSR registers are cleared with function _kvm_setcsr().
+		 */
+		kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_GINTC, 0);
+		kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_ESTAT, 0);
 		break;
 	default:
 		ret = -EINVAL;
···
 {
 	unsigned long max_zone_pfns[MAX_NR_ZONES];

 #ifdef CONFIG_ZONE_DMA32
 	max_zone_pfns[ZONE_DMA32] = MAX_DMA32_PFN;
 #endif
+17
arch/openrisc/include/asm/cacheflush.h
···
  */
 extern void local_dcache_page_flush(struct page *page);
 extern void local_icache_page_inv(struct page *page);
+extern void local_dcache_range_flush(unsigned long start, unsigned long end);
+extern void local_dcache_range_inv(unsigned long start, unsigned long end);
+extern void local_icache_range_inv(unsigned long start, unsigned long end);

 /*
  * Data cache flushing always happen on the local cpu. Instruction cache
···
 #define icache_page_inv(page) smp_icache_page_inv(page)
 extern void smp_icache_page_inv(struct page *page);
 #endif /* CONFIG_SMP */
+
+/*
+ * Even if the actual block size is larger than L1_CACHE_BYTES, paddr
+ * can be incremented by L1_CACHE_BYTES. When paddr is written to the
+ * invalidate register, the entire cache line encompassing this address
+ * is invalidated. Each subsequent reference to the same cache line will
+ * not affect the invalidation process.
+ */
+#define local_dcache_block_flush(addr) \
+	local_dcache_range_flush(addr, addr + L1_CACHE_BYTES)
+#define local_dcache_block_inv(addr) \
+	local_dcache_range_inv(addr, addr + L1_CACHE_BYTES)
+#define local_icache_block_inv(addr) \
+	local_icache_range_inv(addr, addr + L1_CACHE_BYTES)

 /*
  * Synchronizes caches. Whenever a cpu writes executable code to memory, this
···
 #include <asm/spr_defs.h>
 #include <asm/cache.h>
 #include <asm/cacheflush.h>
+#include <asm/cpuinfo.h>
 #include <asm/tlbflush.h>

-static __always_inline void cache_loop(struct page *page, const unsigned int reg)
+/*
+ * Check if the cache component exists.
+ */
+bool cpu_cache_is_present(const unsigned int cache_type)
+{
+	unsigned long upr = mfspr(SPR_UPR);
+	unsigned long mask = SPR_UPR_UP | cache_type;
+
+	return !((upr & mask) ^ mask);
+}
+
+static __always_inline void cache_loop(unsigned long paddr, unsigned long end,
+				       const unsigned short reg, const unsigned int cache_type)
+{
+	if (!cpu_cache_is_present(cache_type))
+		return;
+
+	while (paddr < end) {
+		mtspr(reg, paddr);
+		paddr += L1_CACHE_BYTES;
+	}
+}
+
+static __always_inline void cache_loop_page(struct page *page, const unsigned short reg,
+					    const unsigned int cache_type)
 {
 	unsigned long paddr = page_to_pfn(page) << PAGE_SHIFT;
-	unsigned long line = paddr & ~(L1_CACHE_BYTES - 1);
+	unsigned long end = paddr + PAGE_SIZE;

-	while (line < paddr + PAGE_SIZE) {
-		mtspr(reg, line);
-		line += L1_CACHE_BYTES;
-	}
+	paddr &= ~(L1_CACHE_BYTES - 1);
+
+	cache_loop(paddr, end, reg, cache_type);
 }

 void local_dcache_page_flush(struct page *page)
 {
-	cache_loop(page, SPR_DCBFR);
+	cache_loop_page(page, SPR_DCBFR, SPR_UPR_DCP);
 }
 EXPORT_SYMBOL(local_dcache_page_flush);

 void local_icache_page_inv(struct page *page)
 {
-	cache_loop(page, SPR_ICBIR);
+	cache_loop_page(page, SPR_ICBIR, SPR_UPR_ICP);
 }
 EXPORT_SYMBOL(local_icache_page_inv);
+
+void local_dcache_range_flush(unsigned long start, unsigned long end)
+{
+	cache_loop(start, end, SPR_DCBFR, SPR_UPR_DCP);
+}
+
+void local_dcache_range_inv(unsigned long start, unsigned long end)
+{
+	cache_loop(start, end, SPR_DCBIR, SPR_UPR_DCP);
+}
+
+void local_icache_range_inv(unsigned long start, unsigned long end)
+{
+	cache_loop(start, end, SPR_ICBIR, SPR_UPR_ICP);
+}

 void update_cache(struct vm_area_struct *vma, unsigned long address,
 	pte_t *pte)
···
 		sync_icache_dcache(folio_page(folio, nr));
 	}
 }
-
+3-2
arch/openrisc/mm/init.c
···
 #include <asm/fixmap.h>
 #include <asm/tlbflush.h>
 #include <asm/sections.h>
+#include <asm/cacheflush.h>

 int mem_init_done;
···
 	barrier();

 	/* Invalidate instruction caches after code modification */
-	mtspr(SPR_ICBIR, 0x900);
-	mtspr(SPR_ICBIR, 0xa00);
+	local_icache_block_inv(0x900);
+	local_icache_block_inv(0xa00);

 	/* New TLB miss handlers and kernel page tables are in now place.
 	 * Make sure that page flags get updated for all pages in TLB by
+10-5
arch/riscv/include/asm/cacheflush.h
···
 	flush_dcache_folio(page_folio(page));
 }

-/*
- * RISC-V doesn't have an instruction to flush parts of the instruction cache,
- * so instead we just flush the whole thing.
- */
-#define flush_icache_range(start, end) flush_icache_all()
 #define flush_icache_user_page(vma, pg, addr, len) \
 do { \
 	if (vma->vm_flags & VM_EXEC) \
···
 void flush_icache_mm(struct mm_struct *mm, bool local);

 #endif /* CONFIG_SMP */
+
+/*
+ * RISC-V doesn't have an instruction to flush parts of the instruction cache,
+ * so instead we just flush the whole thing.
+ */
+#define flush_icache_range flush_icache_range
+static inline void flush_icache_range(unsigned long start, unsigned long end)
+{
+	flush_icache_all();
+}

 extern unsigned int riscv_cbom_block_size;
 extern unsigned int riscv_cboz_block_size;
···
 	/* Initialize the slot */
 	void *kaddr = kmap_atomic(page);
 	void *dst = kaddr + (vaddr & ~PAGE_MASK);

 	memcpy(dst, src, len);
···
 		*(uprobe_opcode_t *)dst = __BUG_INSN_32;
 	}

 	kunmap_atomic(kaddr);
-
-	/*
-	 * We probably need flush_icache_user_page() but it needs vma.
-	 * This should work on most of architectures by default. If
-	 * architecture needs to do something different it can define
-	 * its own version of the function.
-	 */
-	flush_dcache_page(page);
 }
···
 #include <linux/mm.h>		/* for struct page */
 #include <linux/pagemap.h>

+#include <asm/cpufeature.h>
+
 #define __HAVE_ARCH_PTE_ALLOC_ONE
 #define __HAVE_ARCH_PGD_FREE
 #include <asm-generic/pgalloc.h>
···
 static inline void paravirt_release_p4d(unsigned long pfn) {}
 #endif

-#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 /*
- * Instead of one PGD, we acquire two PGDs.  Being order-1, it is
- * both 8k in size and 8k-aligned.  That lets us just flip bit 12
- * in a pointer to swap between the two 4k halves.
+ * In case of Page Table Isolation active, we acquire two PGDs instead of one.
+ * Being order-1, it is both 8k in size and 8k-aligned.  That lets us just
+ * flip bit 12 in a pointer to swap between the two 4k halves.
  */
-#define PGD_ALLOCATION_ORDER 1
-#else
-#define PGD_ALLOCATION_ORDER 0
-#endif
+static inline unsigned int pgd_allocation_order(void)
+{
+	if (cpu_feature_enabled(X86_FEATURE_PTI))
+		return 1;
+	return 0;
+}

 /*
  * Allocate and free page tables.
···
 		memblock_add(entry->addr, entry->size);
 	}

+	/*
+	 * 32-bit systems are limited to 4GB of memory even with HIGHMEM and
+	 * to even less without it.
+	 * Discard memory after max_pfn - the actual limit detected at runtime.
+	 */
+	if (IS_ENABLED(CONFIG_X86_32))
+		memblock_remove(PFN_PHYS(max_pfn), -1);
+
 	/* Throw away partial pages: */
 	memblock_trim_memory(PAGE_SIZE);
···
 	struct amd_svm_iommu_ir *ir;
 	u64 entry;

+	if (WARN_ON_ONCE(!pi->ir_data))
+		return -EINVAL;
+
 	/**
 	 * In some cases, the existing irte is updated and re-set,
 	 * so we need to check here if it's already been * added
 	 * to the ir_list.
 	 */
-	if (pi->ir_data && (pi->prev_ga_tag != 0)) {
+	if (pi->prev_ga_tag) {
 		struct kvm *kvm = svm->vcpu.kvm;
 		u32 vcpu_id = AVIC_GATAG_TO_VCPUID(pi->prev_ga_tag);
 		struct kvm_vcpu *prev_vcpu = kvm_get_vcpu_by_id(kvm, vcpu_id);
···
 	 * Allocating new amd_iommu_pi_data, which will get
 	 * add to the per-vcpu ir_list.
 	 */
-	ir = kzalloc(sizeof(struct amd_svm_iommu_ir), GFP_KERNEL_ACCOUNT);
+	ir = kzalloc(sizeof(struct amd_svm_iommu_ir), GFP_ATOMIC | __GFP_ACCOUNT);
 	if (!ir) {
 		ret = -ENOMEM;
 		goto out;
···
 {
 	struct kvm_kernel_irq_routing_entry *e;
 	struct kvm_irq_routing_table *irq_rt;
+	bool enable_remapped_mode = true;
 	int idx, ret = 0;

-	if (!kvm_arch_has_assigned_device(kvm) ||
-	    !irq_remapping_cap(IRQ_POSTING_CAP))
+	if (!kvm_arch_has_assigned_device(kvm) || !kvm_arch_has_irq_bypass())
 		return 0;

 	pr_debug("SVM: %s: host_irq=%#x, guest_irq=%#x, set=%#x\n",
···
 		    kvm_vcpu_apicv_active(&svm->vcpu)) {
 			struct amd_iommu_pi_data pi;

+			enable_remapped_mode = false;
+
 			/* Try to enable guest_mode in IRTE */
 			pi.base = __sme_set(page_to_phys(svm->avic_backing_page) &
 					    AVIC_HPA_MASK);
···
 			 */
 			if (!ret && pi.is_guest_mode)
 				svm_ir_list_add(svm, &pi);
-		} else {
-			/* Use legacy mode in IRTE */
-			struct amd_iommu_pi_data pi;
-
-			/**
-			 * Here, pi is used to:
-			 * - Tell IOMMU to use legacy mode for this interrupt.
-			 * - Retrieve ga_tag of prior interrupt remapping data.
-			 */
-			pi.prev_ga_tag = 0;
-			pi.is_guest_mode = false;
-			ret = irq_set_vcpu_affinity(host_irq, &pi);
-
-			/**
-			 * Check if the posted interrupt was previously
-			 * setup with the guest_mode by checking if the ga_tag
-			 * was cached. If so, we need to clean up the per-vcpu
-			 * ir_list.
-			 */
-			if (!ret && pi.prev_ga_tag) {
-				int id = AVIC_GATAG_TO_VCPUID(pi.prev_ga_tag);
-				struct kvm_vcpu *vcpu;
-
-				vcpu = kvm_get_vcpu_by_id(kvm, id);
-				if (vcpu)
-					svm_ir_list_del(to_svm(vcpu), &pi);
-			}
 		}

 		if (!ret && svm) {
···
 	}

 	ret = 0;
+	if (enable_remapped_mode) {
+		/* Use legacy mode in IRTE */
+		struct amd_iommu_pi_data pi;
+
+		/**
+		 * Here, pi is used to:
+		 * - Tell IOMMU to use legacy mode for this interrupt.
+		 * - Retrieve ga_tag of prior interrupt remapping data.
+		 */
+		pi.prev_ga_tag = 0;
+		pi.is_guest_mode = false;
+		ret = irq_set_vcpu_affinity(host_irq, &pi);
+
+		/**
+		 * Check if the posted interrupt was previously
+		 * setup with the guest_mode by checking if the ga_tag
+		 * was cached. If so, we need to clean up the per-vcpu
+		 * ir_list.
+		 */
+		if (!ret && pi.prev_ga_tag) {
+			int id = AVIC_GATAG_TO_VCPUID(pi.prev_ga_tag);
+			struct kvm_vcpu *vcpu;
+
+			vcpu = kvm_get_vcpu_by_id(kvm, id);
+			if (vcpu)
+				svm_ir_list_del(to_svm(vcpu), &pi);
+		}
+	}
 out:
 	srcu_read_unlock(&kvm->irq_srcu, idx);
 	return ret;
···
 	 * We allocate one page for pgd.
 	 */
 	if (!SHARED_KERNEL_PMD)
-		return __pgd_alloc(mm, PGD_ALLOCATION_ORDER);
+		return __pgd_alloc(mm, pgd_allocation_order());

 	/*
 	 * Now PAE kernel is not running as a Xen domain. We can allocate
···

 static inline pgd_t *_pgd_alloc(struct mm_struct *mm)
 {
-	return __pgd_alloc(mm, PGD_ALLOCATION_ORDER);
+	return __pgd_alloc(mm, pgd_allocation_order());
 }

 static inline void _pgd_free(struct mm_struct *mm, pgd_t *pgd)
···
 			      get_order(bsize));
 }

+/**
+ * bdev_validate_blocksize - check that this block size is acceptable
+ * @bdev: blockdevice to check
+ * @block_size: block size to check
+ *
+ * For block device users that do not use buffer heads or the block device
+ * page cache, make sure that this block size can be used with the device.
+ *
+ * Return: On success zero is returned, negative error code on failure.
+ */
+int bdev_validate_blocksize(struct block_device *bdev, int block_size)
+{
+	if (blk_validate_block_size(block_size))
+		return -EINVAL;
+
+	/* Size cannot be smaller than the size supported by the device */
+	if (block_size < bdev_logical_block_size(bdev))
+		return -EINVAL;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(bdev_validate_blocksize);
+
 int set_blocksize(struct file *file, int size)
 {
 	struct inode *inode = file->f_mapping->host;
 	struct block_device *bdev = I_BDEV(inode);
+	int ret;

-	if (blk_validate_block_size(size))
-		return -EINVAL;
-
-	/* Size cannot be smaller than the size supported by the device */
-	if (size < bdev_logical_block_size(bdev))
-		return -EINVAL;
+	ret = bdev_validate_blocksize(bdev, size);
+	if (ret)
+		return ret;

 	if (!file->private_data)
 		return -EINVAL;

 	/* Don't change the size if it is same as current */
 	if (inode->i_blkbits != blksize_bits(size)) {
+		/*
+		 * Flush and truncate the pagecache before we reconfigure the
+		 * mapping geometry because folio sizes are variable now. If a
+		 * reader has already allocated a folio whose size is smaller
+		 * than the new min_order but invokes readahead after the new
+		 * min_order becomes visible, readahead will think there are
+		 * "zero" blocks per folio and crash. Take the inode and
+		 * invalidation locks to avoid racing with
+		 * read/write/fallocate.
+		 */
+		inode_lock(inode);
+		filemap_invalidate_lock(inode->i_mapping);
+
 		sync_blockdev(bdev);
+		kill_bdev(bdev);
+
 		inode->i_blkbits = blksize_bits(size);
 		mapping_set_folio_min_order(inode->i_mapping, get_order(size));
 		kill_bdev(bdev);
+		filemap_invalidate_unlock(inode->i_mapping);
+		inode_unlock(inode);
 	}
 	return 0;
 }
···
 	blkdev_put_whole(whole);
 }

-struct block_device *blkdev_get_no_open(dev_t dev)
+struct block_device *blkdev_get_no_open(dev_t dev, bool autoload)
 {
 	struct block_device *bdev;
 	struct inode *inode;

 	inode = ilookup(blockdev_superblock, dev);
-	if (!inode && IS_ENABLED(CONFIG_BLOCK_LEGACY_AUTOLOAD)) {
+	if (!inode && autoload && IS_ENABLED(CONFIG_BLOCK_LEGACY_AUTOLOAD)) {
 		blk_request_module(dev);
 		inode = ilookup(blockdev_superblock, dev);
 		if (inode)
···
 	if (ret)
 		return ERR_PTR(ret);

-	bdev = blkdev_get_no_open(dev);
+	bdev = blkdev_get_no_open(dev, true);
 	if (!bdev)
 		return ERR_PTR(-ENXIO);
···
  */
 void bdev_statx(const struct path *path, struct kstat *stat, u32 request_mask)
 {
-	struct inode *backing_inode;
 	struct block_device *bdev;

-	backing_inode = d_backing_inode(path->dentry);
-
 	/*
-	 * Note that backing_inode is the inode of a block device node file,
-	 * not the block device's internal inode. Therefore it is *not* valid
-	 * to use I_BDEV() here; the block device has to be looked up by i_rdev
+	 * Note that d_backing_inode() returns the block device node inode, not
+	 * the block device's internal inode. Therefore it is *not* valid to
+	 * use I_BDEV() here; the block device has to be looked up by i_rdev
 	 * instead.
 	 */
-	bdev = blkdev_get_no_open(backing_inode->i_rdev);
+	bdev = blkdev_get_no_open(d_backing_inode(path->dentry)->i_rdev, false);
 	if (!bdev)
 		return;
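A hedged sketch (not from the patch) of how the newly exported bdev_validate_blocksize() could be used by a block device consumer that bypasses the page cache; my_attach_bdev() and the 4096-byte block size are invented for illustration.

#include <linux/blkdev.h>

/* Hypothetical consumer: reject devices that cannot handle our block size. */
static int my_attach_bdev(struct block_device *bdev)
{
	int ret;

	/* 4096 bytes is an assumed block size for this example */
	ret = bdev_validate_blocksize(bdev, 4096);
	if (ret)
		return ret;

	/* ... set up direct I/O against bdev here ... */
	return 0;
}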
+1-1
block/blk-cgroup.c
···
 		return -EINVAL;
 	input = skip_spaces(input);

-	bdev = blkdev_get_no_open(MKDEV(major, minor));
+	bdev = blkdev_get_no_open(MKDEV(major, minor), false);
 	if (!bdev)
 		return -ENODEV;
 	if (bdev_is_partition(bdev)) {
+7-1
block/blk-settings.c
···
 	/*
 	 * For read-ahead of large files to be effective, we need to read ahead
 	 * at least twice the optimal I/O size.
+	 *
+	 * There is no hardware limitation for the read-ahead size and the user
+	 * might have increased the read-ahead size through sysfs, so don't ever
+	 * decrease it.
 	 */
-	bdi->ra_pages = max(lim->io_opt * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);
+	bdi->ra_pages = max3(bdi->ra_pages,
+			     lim->io_opt * 2 / PAGE_SIZE,
+			     VM_READAHEAD_PAGES);
 	bdi->io_pages = lim->max_sectors >> PAGE_SECTORS_SHIFT;
 }
+4-1
block/blk-zoned.c
···
 		op = REQ_OP_ZONE_RESET;

 		/* Invalidate the page cache, including dirty pages. */
+		inode_lock(bdev->bd_mapping->host);
 		filemap_invalidate_lock(bdev->bd_mapping);
 		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
 		if (ret)
···
 	ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors);

 fail:
-	if (cmd == BLKRESETZONE)
+	if (cmd == BLKRESETZONE) {
 		filemap_invalidate_unlock(bdev->bd_mapping);
+		inode_unlock(bdev->bd_mapping->host);
+	}

 	return ret;
 }
···
  *	},
  *	.ops = my_custom_ops,
  * };
+ *
+ * Please note that such custom ops approach is valid, but it is hard to implement
+ * it right without global locks per-device to protect from auxiliary_drv removal
+ * during call to that ops. In addition, this implementation lacks proper module
+ * dependency, which causes to load/unload races between auxiliary parent and devices
+ * modules.
+ *
+ * The most easiest way to provide these ops reliably without needing to
+ * have a lock is to EXPORT_SYMBOL*() them and rely on already existing
+ * modules infrastructure for validity and correct dependencies chains.
  */

 static const struct auxiliary_device_id *auxiliary_match_id(const struct auxiliary_device_id *id,
···
 	kset_put(&sp->subsys);
 }

+struct subsys_private *bus_to_subsys(const struct bus_type *bus);
 struct subsys_private *class_to_subsys(const struct class *class);

 struct driver_private {
···
 int driver_add_groups(const struct device_driver *drv, const struct attribute_group **groups);
 void driver_remove_groups(const struct device_driver *drv, const struct attribute_group **groups);
 void device_driver_detach(struct device *dev);
+
+static inline void device_set_driver(struct device *dev, const struct device_driver *drv)
+{
+	/*
+	 * Majority (all?) read accesses to dev->driver happens either
+	 * while holding device lock or in bus/driver code that is only
+	 * invoked when the device is bound to a driver and there is no
+	 * concern of the pointer being changed while it is being read.
+	 * However when reading device's uevent file we read driver pointer
+	 * without taking device lock (so we do not block there for
+	 * arbitrary amount of time). We use WRITE_ONCE() here to prevent
+	 * tearing so that READ_ONCE() can safely be used in uevent code.
+	 */
+	// FIXME - this cast should not be needed "soon"
+	WRITE_ONCE(dev->driver, (struct device_driver *)drv);
+}

 int devres_release_all(struct device *dev);
 void device_block_probing(void);
+1-1
drivers/base/bus.c
···
  * NULL. A call to subsys_put() must be done when finished with the pointer in
  * order for it to be properly freed.
  */
-static struct subsys_private *bus_to_subsys(const struct bus_type *bus)
+struct subsys_private *bus_to_subsys(const struct bus_type *bus)
 {
 	struct subsys_private *sp = NULL;
 	struct kobject *kobj;
+32-6
drivers/base/core.c
···
 	return NULL;
 }

+/*
+ * Try filling "DRIVER=<name>" uevent variable for a device. Because this
+ * function may race with binding and unbinding the device from a driver,
+ * we need to be careful. Binding is generally safe, at worst we miss the
+ * fact that the device is already bound to a driver (but the driver
+ * information that is delivered through uevents is best-effort, it may
+ * become obsolete as soon as it is generated anyways). Unbinding is more
+ * risky as driver pointer is transitioning to NULL, so READ_ONCE() should
+ * be used to make sure we are dealing with the same pointer, and to
+ * ensure that driver structure is not going to disappear from under us
+ * we take bus' drivers klist lock. The assumption that only registered
+ * driver can be bound to a device, and to unregister a driver bus code
+ * will take the same lock.
+ */
+static void dev_driver_uevent(const struct device *dev, struct kobj_uevent_env *env)
+{
+	struct subsys_private *sp = bus_to_subsys(dev->bus);
+
+	if (sp) {
+		scoped_guard(spinlock, &sp->klist_drivers.k_lock) {
+			struct device_driver *drv = READ_ONCE(dev->driver);
+			if (drv)
+				add_uevent_var(env, "DRIVER=%s", drv->name);
+		}
+
+		subsys_put(sp);
+	}
+}
+
 static int dev_uevent(const struct kobject *kobj, struct kobj_uevent_env *env)
 {
 	const struct device *dev = kobj_to_dev(kobj);
···
 	if (dev->type && dev->type->name)
 		add_uevent_var(env, "DEVTYPE=%s", dev->type->name);

-	if (dev->driver)
-		add_uevent_var(env, "DRIVER=%s", dev->driver->name);
+	/* Add "DRIVER=%s" variable if the device is bound to a driver */
+	dev_driver_uevent(dev, env);

 	/* Add common DT information about the device */
 	of_device_uevent(dev, env);
···
 	if (!env)
 		return -ENOMEM;

-	/* Synchronize with really_probe() */
-	device_lock(dev);
 	/* let the kset specific function add its keys */
 	retval = kset->uevent_ops->uevent(&dev->kobj, env);
-	device_unlock(dev);
 	if (retval)
 		goto out;
···
 	device_pm_remove(dev);
 	dpm_sysfs_remove(dev);
  DPMError:
-	dev->driver = NULL;
+	device_set_driver(dev, NULL);
 	bus_remove_device(dev);
  BusError:
 	device_remove_attrs(dev);
+3-4
drivers/base/dd.c
···
 	arch_teardown_dma_ops(dev);
 	kfree(dev->dma_range_map);
 	dev->dma_range_map = NULL;
-	dev->driver = NULL;
+	device_set_driver(dev, NULL);
 	dev_set_drvdata(dev, NULL);
 	if (dev->pm_domain && dev->pm_domain->dismiss)
 		dev->pm_domain->dismiss(dev);
···
 	}

 re_probe:
-	// FIXME - this cast should not be needed "soon"
-	dev->driver = (struct device_driver *)drv;
+	device_set_driver(dev, drv);

 	/* If using pinctrl, bind pins now before probing */
 	ret = pinctrl_bind_pins(dev);
···
 		if (ret == 0)
 			ret = 1;
 		else {
-			dev->driver = NULL;
+			device_set_driver(dev, NULL);
 			ret = 0;
 		}
 	} else {
+9-13
drivers/base/devtmpfs.c
···
 	return err;
 }

-static int dev_mynode(struct device *dev, struct inode *inode, struct kstat *stat)
+static int dev_mynode(struct device *dev, struct inode *inode)
 {
 	/* did we create it */
 	if (inode->i_private != &thread)
···
 	/* does the dev_t match */
 	if (is_blockdev(dev)) {
-		if (!S_ISBLK(stat->mode))
+		if (!S_ISBLK(inode->i_mode))
 			return 0;
 	} else {
-		if (!S_ISCHR(stat->mode))
+		if (!S_ISCHR(inode->i_mode))
 			return 0;
 	}
-	if (stat->rdev != dev->devt)
+	if (inode->i_rdev != dev->devt)
 		return 0;

 	/* ours */
···
 {
 	struct path parent;
 	struct dentry *dentry;
-	struct kstat stat;
-	struct path p;
+	struct inode *inode;
 	int deleted = 0;
-	int err;
+	int err = 0;

 	dentry = kern_path_locked(nodename, &parent);
 	if (IS_ERR(dentry))
 		return PTR_ERR(dentry);

-	p.mnt = parent.mnt;
-	p.dentry = dentry;
-	err = vfs_getattr(&p, &stat, STATX_TYPE | STATX_MODE,
-			  AT_STATX_SYNC_AS_STAT);
-	if (!err && dev_mynode(dev, d_inode(dentry), &stat)) {
+	inode = d_inode(dentry);
+	if (dev_mynode(dev, inode)) {
 		struct iattr newattrs;
 		/*
 		 * before unlinking this node, reset permissions
···
 		 */
 		newattrs.ia_uid = GLOBAL_ROOT_UID;
 		newattrs.ia_gid = GLOBAL_ROOT_GID;
-		newattrs.ia_mode = stat.mode & ~0777;
+		newattrs.ia_mode = inode->i_mode & ~0777;
 		newattrs.ia_valid =
 			ATTR_UID|ATTR_GID|ATTR_MODE;
 		inode_lock(d_inode(dentry));
+17-24
drivers/base/memory.c
···
 	return 0;
 }

-static int __init add_boot_memory_block(unsigned long base_section_nr)
-{
-	unsigned long nr;
-
-	for_each_present_section_nr(base_section_nr, nr) {
-		if (nr >= (base_section_nr + sections_per_block))
-			break;
-
-		return add_memory_block(memory_block_id(base_section_nr),
-					MEM_ONLINE, NULL, NULL);
-	}
-
-	return 0;
-}
-
 static int add_hotplug_memory_block(unsigned long block_id,
 				    struct vmem_altmap *altmap,
 				    struct memory_group *group)
···
 void __init memory_dev_init(void)
 {
 	int ret;
-	unsigned long block_sz, nr;
+	unsigned long block_sz, block_id, nr;

 	/* Validate the configured memory block size */
 	block_sz = memory_block_size_bytes();
···
 		panic("%s() failed to register subsystem: %d\n", __func__, ret);

 	/*
-	 * Create entries for memory sections that were found
-	 * during boot and have been initialized
+	 * Create entries for memory sections that were found during boot
+	 * and have been initialized. Use @block_id to track the last
+	 * handled block and initialize it to an invalid value (ULONG_MAX)
+	 * to bypass the block ID matching check for the first present
+	 * block so that it can be covered.
 	 */
-	for (nr = 0; nr <= __highest_present_section_nr;
-	     nr += sections_per_block) {
-		ret = add_boot_memory_block(nr);
-		if (ret)
-			panic("%s() failed to add memory block: %d\n", __func__,
-			      ret);
+	block_id = ULONG_MAX;
+	for_each_present_section_nr(0, nr) {
+		if (block_id != ULONG_MAX && memory_block_id(nr) == block_id)
+			continue;
+
+		block_id = memory_block_id(nr);
+		ret = add_memory_block(block_id, MEM_ONLINE, NULL, NULL);
+		if (ret) {
+			panic("%s() failed to add memory block: %d\n",
+			      __func__, ret);
+		}
 	}
 }
+1-2
drivers/base/swnode.c
···
 	if (!swnode)
 		return;

+	kobject_get(&swnode->kobj);
 	ret = sysfs_create_link(&dev->kobj, &swnode->kobj, "software_node");
 	if (ret)
 		return;
···
 		sysfs_remove_link(&dev->kobj, "software_node");
 		return;
 	}
-
-	kobject_get(&swnode->kobj);
 }

 void software_node_notify_remove(struct device *dev)
+24-17
drivers/block/ublk_drv.c
···
 	ublk_put_disk(disk);
 }

-static void ublk_cancel_cmd(struct ublk_queue *ubq, struct ublk_io *io,
+static void ublk_cancel_cmd(struct ublk_queue *ubq, unsigned tag,
 		unsigned int issue_flags)
 {
+	struct ublk_io *io = &ubq->ios[tag];
+	struct ublk_device *ub = ubq->dev;
+	struct request *req;
 	bool done;

 	if (!(io->flags & UBLK_IO_FLAG_ACTIVE))
+		return;
+
+	/*
+	 * Don't try to cancel this command if the request is started for
+	 * avoiding race between io_uring_cmd_done() and
+	 * io_uring_cmd_complete_in_task().
+	 *
+	 * Either the started request will be aborted via __ublk_abort_rq(),
+	 * then this uring_cmd is canceled next time, or it will be done in
+	 * task work function ublk_dispatch_req() because io_uring guarantees
+	 * that ublk_dispatch_req() is always called
+	 */
+	req = blk_mq_tag_to_rq(ub->tag_set.tags[ubq->q_id], tag);
+	if (req && blk_mq_request_started(req))
 		return;

 	spin_lock(&ubq->cancel_lock);
···
 	struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
 	struct ublk_queue *ubq = pdu->ubq;
 	struct task_struct *task;
-	struct ublk_io *io;

 	if (WARN_ON_ONCE(!ubq))
 		return;
···
 	if (!ubq->canceling)
 		ublk_start_cancel(ubq);

-	io = &ubq->ios[pdu->tag];
-	WARN_ON_ONCE(io->cmd != cmd);
-	ublk_cancel_cmd(ubq, io, issue_flags);
+	WARN_ON_ONCE(ubq->ios[pdu->tag].cmd != cmd);
+	ublk_cancel_cmd(ubq, pdu->tag, issue_flags);
 }

 static inline bool ublk_queue_ready(struct ublk_queue *ubq)
···
 	int i;

 	for (i = 0; i < ubq->q_depth; i++)
-		ublk_cancel_cmd(ubq, &ubq->ios[i], IO_URING_F_UNLOCKED);
+		ublk_cancel_cmd(ubq, i, IO_URING_F_UNLOCKED);
 }

 /* Cancel all pending commands, must be called after del_gendisk() returns */
···
 		ublk_reset_io_flags(ub);
 		complete_all(&ub->completion);
 	}
-}
-
-static void ublk_handle_need_get_data(struct ublk_device *ub, int q_id,
-		int tag)
-{
-	struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
-	struct request *req = blk_mq_tag_to_rq(ub->tag_set.tags[q_id], tag);
-
-	ublk_queue_cmd(ubq, req);
 }

 static inline int ublk_check_cmd_op(u32 cmd_op)
···
 		if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
 			goto out;
 		ublk_fill_io_cmd(io, cmd, ub_cmd->addr);
-		ublk_handle_need_get_data(ub, ub_cmd->q_id, ub_cmd->tag);
-		break;
+		req = blk_mq_tag_to_rq(ub->tag_set.tags[ub_cmd->q_id], tag);
+		ublk_dispatch_req(ubq, req, issue_flags);
+		return -EIOCBQUEUED;
 	default:
 		goto out;
 	}
···
 	struct jr3_pci_dev_private *devpriv = dev->private;

 	if (devpriv)
-		timer_delete_sync(&devpriv->timer);
+		timer_shutdown_sync(&devpriv->timer);

 	comedi_pci_detach(dev);
 }
+10-10
drivers/cpufreq/Kconfig.arm
···
 config ARM_BRCMSTB_AVS_CPUFREQ
 	tristate "Broadcom STB AVS CPUfreq driver"
 	depends on (ARCH_BRCMSTB && !ARM_SCMI_CPUFREQ) || COMPILE_TEST
-	default y
+	default y if ARCH_BRCMSTB && !ARM_SCMI_CPUFREQ
 	help
 	  Some Broadcom STB SoCs use a co-processor running proprietary firmware
 	  ("AVS") to handle voltage and frequency scaling. This driver provides
···
 	tristate "Calxeda Highbank-based"
 	depends on ARCH_HIGHBANK || COMPILE_TEST
 	depends on CPUFREQ_DT && REGULATOR && PL320_MBOX
-	default m
+	default m if ARCH_HIGHBANK
 	help
 	  This adds the CPUFreq driver for Calxeda Highbank SoC
 	  based boards.
···
 config ARM_MEDIATEK_CPUFREQ_HW
 	tristate "MediaTek CPUFreq HW driver"
 	depends on ARCH_MEDIATEK || COMPILE_TEST
-	default m
+	default m if ARCH_MEDIATEK
 	help
 	  Support for the CPUFreq HW driver.
 	  Some MediaTek chipsets have a HW engine to offload the steps
···
 config ARM_S3C64XX_CPUFREQ
 	bool "Samsung S3C64XX"
 	depends on CPU_S3C6410 || COMPILE_TEST
-	default y
+	default CPU_S3C6410
 	help
 	  This adds the CPUFreq driver for Samsung S3C6410 SoC.
···
 config ARM_S5PV210_CPUFREQ
 	bool "Samsung S5PV210 and S5PC110"
 	depends on CPU_S5PV210 || COMPILE_TEST
-	default y
+	default CPU_S5PV210
 	help
 	  This adds the CPUFreq driver for Samsung S5PV210 and
 	  S5PC110 SoCs.
···
 config ARM_SPEAR_CPUFREQ
 	bool "SPEAr CPUFreq support"
 	depends on PLAT_SPEAR || COMPILE_TEST
-	default y
+	default PLAT_SPEAR
 	help
 	  This adds the CPUFreq driver support for SPEAr SOCs.
···
 	tristate "Tegra20/30 CPUFreq support"
 	depends on ARCH_TEGRA || COMPILE_TEST
 	depends on CPUFREQ_DT
-	default y
+	default ARCH_TEGRA
 	help
 	  This adds the CPUFreq driver support for Tegra20/30 SOCs.
···
 	bool "Tegra124 CPUFreq support"
 	depends on ARCH_TEGRA || COMPILE_TEST
 	depends on CPUFREQ_DT
-	default y
+	default ARCH_TEGRA
 	help
 	  This adds the CPUFreq driver support for Tegra124 SOCs.
···
 	tristate "Tegra194 CPUFreq support"
 	depends on ARCH_TEGRA_194_SOC || ARCH_TEGRA_234_SOC || (64BIT && COMPILE_TEST)
 	depends on TEGRA_BPMP
-	default y
+	default ARCH_TEGRA_194_SOC || ARCH_TEGRA_234_SOC
 	help
 	  This adds CPU frequency driver support for Tegra194 SOCs.

 config ARM_TI_CPUFREQ
 	bool "Texas Instruments CPUFreq support"
 	depends on ARCH_OMAP2PLUS || ARCH_K3 || COMPILE_TEST
-	default y
+	default ARCH_OMAP2PLUS || ARCH_K3
 	help
 	  This driver enables valid OPPs on the running platform based on
 	  values contained within the SoC in use. Enable this in order to
+8-2
drivers/cpufreq/apple-soc-cpufreq.c
···

 static unsigned int apple_soc_cpufreq_get_rate(unsigned int cpu)
 {
-	struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
-	struct apple_cpu_priv *priv = policy->driver_data;
+	struct cpufreq_policy *policy;
+	struct apple_cpu_priv *priv;
 	struct cpufreq_frequency_table *p;
 	unsigned int pstate;
+
+	policy = cpufreq_cpu_get_raw(cpu);
+	if (unlikely(!policy))
+		return 0;
+
+	priv = policy->driver_data;

 	if (priv->info->cur_pstate_mask) {
 		u32 reg = readl_relaxed(priv->reg_base + APPLE_DVFS_STATUS);
+1-1
drivers/cpufreq/cppc_cpufreq.c
···
 	int ret;

 	if (!policy)
-		return -ENODEV;
+		return 0;

 	cpu_data = policy->driver_data;
···

 static unsigned int scmi_cpufreq_get_rate(unsigned int cpu)
 {
-	struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
-	struct scmi_data *priv = policy->driver_data;
+	struct cpufreq_policy *policy;
+	struct scmi_data *priv;
 	unsigned long rate;
 	int ret;
+
+	policy = cpufreq_cpu_get_raw(cpu);
+	if (unlikely(!policy))
+		return 0;
+
+	priv = policy->driver_data;

 	ret = perf_ops->freq_get(ph, priv->domain_id, &rate, false);
 	if (ret)
+10-3
drivers/cpufreq/scpi-cpufreq.c
···

 static unsigned int scpi_cpufreq_get_rate(unsigned int cpu)
 {
-	struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
-	struct scpi_data *priv = policy->driver_data;
-	unsigned long rate = clk_get_rate(priv->clk);
+	struct cpufreq_policy *policy;
+	struct scpi_data *priv;
+	unsigned long rate;
+
+	policy = cpufreq_cpu_get_raw(cpu);
+	if (unlikely(!policy))
+		return 0;
+
+	priv = policy->driver_data;
+	rate = clk_get_rate(priv->clk);

 	return rate / 1000;
 }
+12-6
drivers/cpufreq/sun50i-cpufreq-nvmem.c
···
 	struct nvmem_cell *speedbin_nvmem;
 	const struct of_device_id *match;
 	struct device *cpu_dev;
-	u32 *speedbin;
+	void *speedbin_ptr;
+	u32 speedbin = 0;
+	size_t len;
 	int ret;

 	cpu_dev = get_cpu_device(0);
···
 		return dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem),
 				     "Could not get nvmem cell\n");

-	speedbin = nvmem_cell_read(speedbin_nvmem, NULL);
+	speedbin_ptr = nvmem_cell_read(speedbin_nvmem, &len);
 	nvmem_cell_put(speedbin_nvmem);
-	if (IS_ERR(speedbin))
-		return PTR_ERR(speedbin);
+	if (IS_ERR(speedbin_ptr))
+		return PTR_ERR(speedbin_ptr);

-	ret = opp_data->efuse_xlate(*speedbin);
+	if (len <= 4)
+		memcpy(&speedbin, speedbin_ptr, len);
+	speedbin = le32_to_cpu(speedbin);

-	kfree(speedbin);
+	ret = opp_data->efuse_xlate(speedbin);
+
+	kfree(speedbin_ptr);

 	return ret;
 };
+6
drivers/crypto/atmel-sha204a.c
···
 	i2c_priv->hwrng.name = dev_name(&client->dev);
 	i2c_priv->hwrng.read = atmel_sha204a_rng_read;

+	/*
+	 * According to review by Bill Cox [1], this HWRNG has very low entropy.
+	 * [1] https://www.metzdowd.com/pipermail/cryptography/2014-December/023858.html
+	 */
+	i2c_priv->hwrng.quality = 1;
+
 	ret = devm_hwrng_register(&client->dev, &i2c_priv->hwrng);
 	if (ret)
 		dev_warn(&client->dev, "failed to register RNG (%d)\n", ret);
···1678 if (rc && rc != -EBUSY)1679 return rc;16801681- cxl_gpf_port_setup(dport_dev, port);16821683 /* Any more ports to add between this one and the root? */1684 if (!dev_is_cxl_root_child(&port->dev))
···1678 if (rc && rc != -EBUSY)1679 return rc;16801681+ cxl_gpf_port_setup(dport);16821683 /* Any more ports to add between this one and the root? */1684 if (!dev_is_cxl_root_child(&port->dev))
-4
drivers/cxl/core/regs.c
···581 resource_size_t rcrb = ri->base;582 void __iomem *addr;583 u32 bar0, bar1;584- u16 cmd;585 u32 id;586587 if (which == CXL_RCRB_UPSTREAM)···602 }603604 id = readl(addr + PCI_VENDOR_ID);605- cmd = readw(addr + PCI_COMMAND);606 bar0 = readl(addr + PCI_BASE_ADDRESS_0);607 bar1 = readl(addr + PCI_BASE_ADDRESS_1);608 iounmap(addr);···616 dev_err(dev, "Failed to access Downstream Port RCRB\n");617 return CXL_RESOURCE_NONE;618 }619- if (!(cmd & PCI_COMMAND_MEMORY))620- return CXL_RESOURCE_NONE;621 /* The RCRB is a Memory Window, and the MEM_TYPE_1M bit is obsolete */622 if (bar0 & (PCI_BASE_ADDRESS_MEM_TYPE_1M | PCI_BASE_ADDRESS_SPACE_IO))623 return CXL_RESOURCE_NONE;
···581 resource_size_t rcrb = ri->base;582 void __iomem *addr;583 u32 bar0, bar1;0584 u32 id;585586 if (which == CXL_RCRB_UPSTREAM)···603 }604605 id = readl(addr + PCI_VENDOR_ID);0606 bar0 = readl(addr + PCI_BASE_ADDRESS_0);607 bar1 = readl(addr + PCI_BASE_ADDRESS_1);608 iounmap(addr);···618 dev_err(dev, "Failed to access Downstream Port RCRB\n");619 return CXL_RESOURCE_NONE;620 }00621 /* The RCRB is a Memory Window, and the MEM_TYPE_1M bit is obsolete */622 if (bar0 & (PCI_BASE_ADDRESS_MEM_TYPE_1M | PCI_BASE_ADDRESS_SPACE_IO))623 return CXL_RESOURCE_NONE;
+3-3
drivers/cxl/cxl.h
···592 * @cdat: Cached CDAT data593 * @cdat_available: Should a CDAT attribute be available in sysfs594 * @pci_latency: Upstream latency in picoseconds595- * @gpf_dvsec: Cached GPF port DVSEC596 */597struct cxl_port {598 struct device dev;···615 } cdat;616 bool cdat_available;617 long pci_latency;618- int gpf_dvsec;619};620621/**···662 * @regs: Dport parsed register blocks663 * @coord: access coordinates (bandwidth and latency performance attributes)664 * @link_latency: calculated PCIe downstream latency0665 */666struct cxl_dport {667 struct device *dport_dev;···674 struct cxl_regs regs;675 struct access_coordinate coord[ACCESS_COORDINATE_MAX];676 long link_latency;0677};678679/**···910#define __mock static911#endif912913-u16 cxl_gpf_get_dvsec(struct device *dev, bool is_port);914915#endif /* __CXL_H__ */
···592 * @cdat: Cached CDAT data593 * @cdat_available: Should a CDAT attribute be available in sysfs594 * @pci_latency: Upstream latency in picoseconds0595 */596struct cxl_port {597 struct device dev;···616 } cdat;617 bool cdat_available;618 long pci_latency;0619};620621/**···664 * @regs: Dport parsed register blocks665 * @coord: access coordinates (bandwidth and latency performance attributes)666 * @link_latency: calculated PCIe downstream latency667+ * @gpf_dvsec: Cached GPF port DVSEC668 */669struct cxl_dport {670 struct device *dport_dev;···675 struct cxl_regs regs;676 struct access_coordinate coord[ACCESS_COORDINATE_MAX];677 long link_latency;678+ int gpf_dvsec;679};680681/**···910#define __mock static911#endif912913+u16 cxl_gpf_get_dvsec(struct device *dev);914915#endif /* __CXL_H__ */
+1-1
drivers/cxl/pci.c
···1018 if (rc)1019 return rc;10201021- rc = devm_cxl_setup_fwctl(cxlmd);1022 if (rc)1023 dev_dbg(&pdev->dev, "No CXL FWCTL setup\n");1024
···1018 if (rc)1019 return rc;10201021+ rc = devm_cxl_setup_fwctl(&pdev->dev, cxlmd);1022 if (rc)1023 dev_dbg(&pdev->dev, "No CXL FWCTL setup\n");1024
+1-1
drivers/cxl/pmem.c
···108 return;109 }110111- if (!cxl_gpf_get_dvsec(cxlds->dev, false))112 return;113114 if (cxl_get_dirty_count(mds, &count)) {
···108 return;109 }110111+ if (!cxl_gpf_get_dvsec(cxlds->dev))112 return;113114 if (cxl_get_dirty_count(mds, &count)) {
+11-3
drivers/firmware/stratix10-svc.c
···1224 if (!svc->intel_svc_fcs) {1225 dev_err(dev, "failed to allocate %s device\n", INTEL_FCS);1226 ret = -ENOMEM;1227- goto err_unregister_dev;1228 }12291230 ret = platform_device_add(svc->intel_svc_fcs);1231 if (ret) {1232 platform_device_put(svc->intel_svc_fcs);1233- goto err_unregister_dev;1234 }000012351236 dev_set_drvdata(dev, svc);1237···12431244 return 0;12451246-err_unregister_dev:001247 platform_device_unregister(svc->stratix10_svc_rsu);1248err_free_kfifo:1249 kfifo_free(&controller->svc_fifo);···1258{1259 struct stratix10_svc *svc = dev_get_drvdata(&pdev->dev);1260 struct stratix10_svc_controller *ctrl = platform_get_drvdata(pdev);0012611262 platform_device_unregister(svc->intel_svc_fcs);1263 platform_device_unregister(svc->stratix10_svc_rsu);
···1224 if (!svc->intel_svc_fcs) {1225 dev_err(dev, "failed to allocate %s device\n", INTEL_FCS);1226 ret = -ENOMEM;1227+ goto err_unregister_rsu_dev;1228 }12291230 ret = platform_device_add(svc->intel_svc_fcs);1231 if (ret) {1232 platform_device_put(svc->intel_svc_fcs);1233+ goto err_unregister_rsu_dev;1234 }1235+1236+ ret = of_platform_default_populate(dev_of_node(dev), NULL, dev);1237+ if (ret)1238+ goto err_unregister_fcs_dev;12391240 dev_set_drvdata(dev, svc);1241···12391240 return 0;12411242+err_unregister_fcs_dev:1243+ platform_device_unregister(svc->intel_svc_fcs);1244+err_unregister_rsu_dev:1245 platform_device_unregister(svc->stratix10_svc_rsu);1246err_free_kfifo:1247 kfifo_free(&controller->svc_fifo);···1252{1253 struct stratix10_svc *svc = dev_get_drvdata(&pdev->dev);1254 struct stratix10_svc_controller *ctrl = platform_get_drvdata(pdev);1255+1256+ of_platform_depopulate(ctrl->dev);12571258 platform_device_unregister(svc->intel_svc_fcs);1259 platform_device_unregister(svc->stratix10_svc_rsu);
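The stratix10 hunk above adds of_platform_default_populate() as the last probe step, pairs it with of_platform_depopulate() in remove, and renames the error labels so that each label unwinds exactly the steps that had already succeeded. A minimal sketch of that unwind ordering, with hypothetical example_* helpers standing in for the real setup steps:

static int example_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	int ret;

	ret = example_setup_a(dev);		/* hypothetical */
	if (ret)
		return ret;

	ret = example_setup_b(dev);		/* hypothetical */
	if (ret)
		goto err_undo_a;

	/* child DT nodes become platform devices; undone by of_platform_depopulate() */
	ret = of_platform_default_populate(dev_of_node(dev), NULL, dev);
	if (ret)
		goto err_undo_b;

	return 0;

err_undo_b:
	example_undo_b(dev);
err_undo_a:
	example_undo_a(dev);
	return ret;
}

The labels are named after what they undo, not after where they are jumped from, which keeps the ordering obvious as more steps are added.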
+45-7
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
···43#include <linux/dma-fence-array.h>44#include <linux/pci-p2pdma.h>450000000000000000000000046/**47 * amdgpu_dma_buf_attach - &dma_buf_ops.attach implementation48 *···77static int amdgpu_dma_buf_attach(struct dma_buf *dmabuf,78 struct dma_buf_attachment *attach)79{080 struct drm_gem_object *obj = dmabuf->priv;81 struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);82 struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);8384- if (pci_p2pdma_distance(adev->pdev, attach->dev, false) < 0)085 attach->peer2peer = false;8687 amdgpu_vm_bo_update_shared(bo);···102{103 struct dma_buf *dmabuf = attach->dmabuf;104 struct amdgpu_bo *bo = gem_to_amdgpu_bo(dmabuf->priv);105- u32 domains = bo->preferred_domains;106107 dma_resv_assert_held(dmabuf->resv);108109- /*110- * Try pinning into VRAM to allow P2P with RDMA NICs without ODP111 * support if all attachments can do P2P. If any attachment can't do112 * P2P just pin into GTT instead.0000113 */114- list_for_each_entry(attach, &dmabuf->attachments, node)115- if (!attach->peer2peer)116- domains &= ~AMDGPU_GEM_DOMAIN_VRAM;0000117118 if (domains & AMDGPU_GEM_DOMAIN_VRAM)119 bo->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;000120121 return amdgpu_bo_pin(bo, domains);122}···504{505 struct drm_gem_object *obj = &bo->tbo.base;506 struct drm_gem_object *gobj;000507508 if (obj->import_attach) {509 struct dma_buf *dma_buf = obj->import_attach->dmabuf;
···43#include <linux/dma-fence-array.h>44#include <linux/pci-p2pdma.h>4546+static const struct dma_buf_attach_ops amdgpu_dma_buf_attach_ops;47+48+/**49+ * dma_buf_attach_adev - Helper to get adev of an attachment50+ *51+ * @attach: attachment52+ *53+ * Returns:54+ * A struct amdgpu_device * if the attaching device is an amdgpu device or55+ * partition, NULL otherwise.56+ */57+static struct amdgpu_device *dma_buf_attach_adev(struct dma_buf_attachment *attach)58+{59+ if (attach->importer_ops == &amdgpu_dma_buf_attach_ops) {60+ struct drm_gem_object *obj = attach->importer_priv;61+ struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);62+63+ return amdgpu_ttm_adev(bo->tbo.bdev);64+ }65+66+ return NULL;67+}68+69/**70 * amdgpu_dma_buf_attach - &dma_buf_ops.attach implementation71 *···54static int amdgpu_dma_buf_attach(struct dma_buf *dmabuf,55 struct dma_buf_attachment *attach)56{57+ struct amdgpu_device *attach_adev = dma_buf_attach_adev(attach);58 struct drm_gem_object *obj = dmabuf->priv;59 struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);60 struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);6162+ if (!amdgpu_dmabuf_is_xgmi_accessible(attach_adev, bo) &&63+ pci_p2pdma_distance(adev->pdev, attach->dev, false) < 0)64 attach->peer2peer = false;6566 amdgpu_vm_bo_update_shared(bo);···77{78 struct dma_buf *dmabuf = attach->dmabuf;79 struct amdgpu_bo *bo = gem_to_amdgpu_bo(dmabuf->priv);80+ u32 domains = bo->allowed_domains;8182 dma_resv_assert_held(dmabuf->resv);8384+ /* Try pinning into VRAM to allow P2P with RDMA NICs without ODP085 support if all attachments can do P2P. If any attachment can't do86 P2P just pin into GTT instead.87+ *88+ * To avoid conflicting pinnings between GPUs and RDMA when move89+ * notifiers are disabled, only allow pinning in VRAM when move90+ * notifiers are enabled.91 */92+ if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) {93+ domains &= ~AMDGPU_GEM_DOMAIN_VRAM;94+ } else {95+ list_for_each_entry(attach, &dmabuf->attachments, node)96+ if (!attach->peer2peer)97+ domains &= ~AMDGPU_GEM_DOMAIN_VRAM;98+ }99100 if (domains & AMDGPU_GEM_DOMAIN_VRAM)101 bo->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;102+103+ if (WARN_ON(!domains))104+ return -EINVAL;105106 return amdgpu_bo_pin(bo, domains);107}···469{470 struct drm_gem_object *obj = &bo->tbo.base;471 struct drm_gem_object *gobj;472+473+ if (!adev)474+ return false;475476 if (obj->import_attach) {477 struct dma_buf *dma_buf = obj->import_attach->dmabuf;
+12-29
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
···1920 switch (amdgpu_ip_version(adev, DCE_HWIP, 0)) {1921 case IP_VERSION(3, 5, 0):1922 case IP_VERSION(3, 6, 0):1923- /*1924- * On DCN35 systems with Z8 enabled, it's possible for IPS2 + Z8 to1925- * cause a hard hang. A fix exists for newer PMFW.1926- *1927- * As a workaround, for non-fixed PMFW, force IPS1+RCG as the deepest1928- * IPS state in all cases, except for s0ix and all displays off (DPMS),1929- * where IPS2 is allowed.1930- *1931- * When checking pmfw version, use the major and minor only.1932- */1933- if ((adev->pm.fw_version & 0x00FFFF00) < 0x005D6300)1934- ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;1935- else if (amdgpu_ip_version(adev, GC_HWIP, 0) > IP_VERSION(11, 5, 0))1936- /*1937- * Other ASICs with DCN35 that have residency issues with1938- * IPS2 in idle.1939- * We want them to use IPS2 only in display off cases.1940- */1941- ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;1942- break;1943 case IP_VERSION(3, 5, 1):1944 ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;1945 break;···3335 for (k = 0; k < dc_state->stream_count; k++) {3336 bundle->stream_update.stream = dc_state->streams[k];33373338- for (m = 0; m < dc_state->stream_status->plane_count; m++) {3339 bundle->surface_updates[m].surface =3340- dc_state->stream_status->plane_states[m];3341 bundle->surface_updates[m].surface->force_full_update =3342 true;3343 }33443345 update_planes_and_stream_adapter(dm->dc,3346 UPDATE_TYPE_FULL,3347- dc_state->stream_status->plane_count,3348 dc_state->streams[k],3349 &bundle->stream_update,3350 bundle->surface_updates);···6501 const struct drm_display_mode *native_mode,6502 bool scale_enabled)6503{6504- if (scale_enabled) {6505- copy_crtc_timing_for_drm_display_mode(native_mode, drm_mode);6506- } else if (native_mode->clock == drm_mode->clock &&6507- native_mode->htotal == drm_mode->htotal &&6508- native_mode->vtotal == drm_mode->vtotal) {6509- copy_crtc_timing_for_drm_display_mode(native_mode, drm_mode);6510 } else {6511 /* no scaling nor amdgpu inserted, no need to patch */6512 }···11021 */11022 if (amdgpu_ip_version(adev, DCE_HWIP, 0) < IP_VERSION(3, 2, 0) &&11023 state->allow_modeset)00011024 return true;1102511026 /* Exit early if we know that we're adding or removing the plane. */
···1920 switch (amdgpu_ip_version(adev, DCE_HWIP, 0)) {1921 case IP_VERSION(3, 5, 0):1922 case IP_VERSION(3, 6, 0):000000000000000000001923 case IP_VERSION(3, 5, 1):1924 ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;1925 break;···3355 for (k = 0; k < dc_state->stream_count; k++) {3356 bundle->stream_update.stream = dc_state->streams[k];33573358+ for (m = 0; m < dc_state->stream_status[k].plane_count; m++) {3359 bundle->surface_updates[m].surface =3360+ dc_state->stream_status[k].plane_states[m];3361 bundle->surface_updates[m].surface->force_full_update =3362 true;3363 }33643365 update_planes_and_stream_adapter(dm->dc,3366 UPDATE_TYPE_FULL,3367+ dc_state->stream_status[k].plane_count,3368 dc_state->streams[k],3369 &bundle->stream_update,3370 bundle->surface_updates);···6521 const struct drm_display_mode *native_mode,6522 bool scale_enabled)6523{6524+ if (scale_enabled || (6525+ native_mode->clock == drm_mode->clock &&6526+ native_mode->htotal == drm_mode->htotal &&6527+ native_mode->vtotal == drm_mode->vtotal)) {6528+ if (native_mode->crtc_clock)6529+ copy_crtc_timing_for_drm_display_mode(native_mode, drm_mode);6530 } else {6531 /* no scaling nor amdgpu inserted, no need to patch */6532 }···11041 */11042 if (amdgpu_ip_version(adev, DCE_HWIP, 0) < IP_VERSION(3, 2, 0) &&11043 state->allow_modeset)11044+ return true;11045+11046+ if (amdgpu_in_reset(adev) && state->allow_modeset)11047 return true;1104811049 /* Exit early if we know that we're adding or removing the plane. */
···35#define DC_LOGGER \36 link->ctx->logger3738+static void get_default_8b_10b_lttpr_aux_rd_interval(39+ union training_aux_rd_interval *training_rd_interval)40+{41+ /* LTTPR are required to program DPCD 0000Eh to 0x4 (16ms) upon AUX42+ * read reply to this register. Since old sinks with DPCD rev 1.143+ * and earlier may not support this register, assume the mandatory44+ * value is programmed by the LTTPR to avoid AUX timeout issues.45+ */46+ training_rd_interval->raw = 0x4;47+}48+49static int32_t get_cr_training_aux_rd_interval(struct dc_link *link,50 const struct dc_link_settings *link_settings,51 enum lttpr_mode lttpr_mode)···43 uint32_t wait_in_micro_secs = 100;4445 memset(&training_rd_interval, 0, sizeof(training_rd_interval));46+ if (link_dp_get_encoding_format(link_settings) == DP_8b_10b_ENCODING) {47+ if (link->dpcd_caps.dpcd_rev.raw >= DPCD_REV_12)48+ core_link_read_dpcd(49+ link,50+ DP_TRAINING_AUX_RD_INTERVAL,51+ (uint8_t *)&training_rd_interval,52+ sizeof(training_rd_interval));53+ else if (dp_is_lttpr_present(link))54+ get_default_8b_10b_lttpr_aux_rd_interval(&training_rd_interval);55+56+ if (training_rd_interval.raw != 0) {57+ if (lttpr_mode != LTTPR_MODE_NON_TRANSPARENT)58+ wait_in_micro_secs = 400;59+ if (training_rd_interval.bits.TRAINIG_AUX_RD_INTERVAL)60+ wait_in_micro_secs = training_rd_interval.bits.TRAINIG_AUX_RD_INTERVAL * 4000;61+ }62 }63 return wait_in_micro_secs;64}···71 DP_128B132B_TRAINING_AUX_RD_INTERVAL,72 (uint8_t *)&training_rd_interval,73 sizeof(training_rd_interval));74+ } else if (link_dp_get_encoding_format(link_settings) == DP_8b_10b_ENCODING) {75+ if (link->dpcd_caps.dpcd_rev.raw >= DPCD_REV_12)76+ core_link_read_dpcd(77+ link,78+ DP_TRAINING_AUX_RD_INTERVAL,79+ (uint8_t *)&training_rd_interval,80+ sizeof(training_rd_interval));81+ else if (dp_is_lttpr_present(link))82+ get_default_8b_10b_lttpr_aux_rd_interval(&training_rd_interval);83 }8485 switch (training_rd_interval.bits.TRAINIG_AUX_RD_INTERVAL) {
···731 /*732 * Setting dma-burst to 16Word causes permanent tearing for very small733 * buffers, e.g. cursor buffer. Burst Mode switching which based on734- * plane size is not recommended as plane size varies alot towards the735 * end of the screen and rapid movement causes unstable DMA, but it is736 * still better to change dma-burst than displaying garbage.737 */
···731 /*732 * Setting dma-burst to 16Word causes permanent tearing for very small733 * buffers, e.g. cursor buffer. Burst Mode switching which based on734+ * plane size is not recommended as plane size varies a lot towards the735 * end of the screen and rapid movement causes unstable DMA, but it is736 * still better to change dma-burst than displaying garbage.737 */
···37};3839struct meson_drm_soc_limits {40- unsigned int max_hdmi_phy_freq;41};4243struct meson_drm {
···37};3839struct meson_drm_soc_limits {40+ unsigned long long max_hdmi_phy_freq;41};4243struct meson_drm {
+16-13
drivers/gpu/drm/meson/meson_encoder_hdmi.c
···70{71 struct meson_drm *priv = encoder_hdmi->priv;72 int vic = drm_match_cea_mode(mode);73- unsigned int phy_freq;74- unsigned int vclk_freq;75- unsigned int venc_freq;76- unsigned int hdmi_freq;7778- vclk_freq = mode->clock;7980 /* For 420, pixel clock is half unlike venc clock */81 if (encoder_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24)···107 if (mode->flags & DRM_MODE_FLAG_DBLCLK)108 venc_freq /= 2;109110- dev_dbg(priv->dev, "vclk:%d phy=%d venc=%d hdmi=%d enci=%d\n",0111 phy_freq, vclk_freq, venc_freq, hdmi_freq,112 priv->venc.hdmi_use_enci);113···123 struct meson_encoder_hdmi *encoder_hdmi = bridge_to_meson_encoder_hdmi(bridge);124 struct meson_drm *priv = encoder_hdmi->priv;125 bool is_hdmi2_sink = display_info->hdmi.scdc.supported;126- unsigned int phy_freq;127- unsigned int vclk_freq;128- unsigned int venc_freq;129- unsigned int hdmi_freq;0130 int vic = drm_match_cea_mode(mode);131 enum drm_mode_status status;132···146 if (status != MODE_OK)147 return status;148149- return meson_vclk_dmt_supported_freq(priv, mode->clock);150 /* Check against supported VIC modes */151 } else if (!meson_venc_hdmi_supported_vic(vic))152 return MODE_BAD;153154- vclk_freq = mode->clock;155156 /* For 420, pixel clock is half unlike venc clock */157 if (drm_mode_is_420_only(display_info, mode) ||···181 if (mode->flags & DRM_MODE_FLAG_DBLCLK)182 venc_freq /= 2;183184- dev_dbg(priv->dev, "%s: vclk:%d phy=%d venc=%d hdmi=%d\n",0185 __func__, phy_freq, vclk_freq, venc_freq, hdmi_freq);186187 return meson_vclk_vic_supported_freq(priv, phy_freq, vclk_freq);
···70{71 struct meson_drm *priv = encoder_hdmi->priv;72 int vic = drm_match_cea_mode(mode);73+ unsigned long long phy_freq;74+ unsigned long long vclk_freq;75+ unsigned long long venc_freq;76+ unsigned long long hdmi_freq;7778+ vclk_freq = mode->clock * 1000;7980 /* For 420, pixel clock is half unlike venc clock */81 if (encoder_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24)···107 if (mode->flags & DRM_MODE_FLAG_DBLCLK)108 venc_freq /= 2;109110+ dev_dbg(priv->dev,111+ "vclk:%lluHz phy=%lluHz venc=%lluHz hdmi=%lluHz enci=%d\n",112 phy_freq, vclk_freq, venc_freq, hdmi_freq,113 priv->venc.hdmi_use_enci);114···122 struct meson_encoder_hdmi *encoder_hdmi = bridge_to_meson_encoder_hdmi(bridge);123 struct meson_drm *priv = encoder_hdmi->priv;124 bool is_hdmi2_sink = display_info->hdmi.scdc.supported;125+ unsigned long long clock = mode->clock * 1000;126+ unsigned long long phy_freq;127+ unsigned long long vclk_freq;128+ unsigned long long venc_freq;129+ unsigned long long hdmi_freq;130 int vic = drm_match_cea_mode(mode);131 enum drm_mode_status status;132···144 if (status != MODE_OK)145 return status;146147+ return meson_vclk_dmt_supported_freq(priv, clock);148 /* Check against supported VIC modes */149 } else if (!meson_venc_hdmi_supported_vic(vic))150 return MODE_BAD;151152+ vclk_freq = clock;153154 /* For 420, pixel clock is half unlike venc clock */155 if (drm_mode_is_420_only(display_info, mode) ||···179 if (mode->flags & DRM_MODE_FLAG_DBLCLK)180 venc_freq /= 2;181182+ dev_dbg(priv->dev,183+ "%s: vclk:%lluHz phy=%lluHz venc=%lluHz hdmi=%lluHz\n",184 __func__, phy_freq, vclk_freq, venc_freq, hdmi_freq);185186 return meson_vclk_vic_supported_freq(priv, phy_freq, vclk_freq);
+101-94
drivers/gpu/drm/meson/meson_vclk.c
···110#define HDMI_PLL_LOCK BIT(31)111#define HDMI_PLL_LOCK_G12A (3 << 30)112113-#define FREQ_1000_1001(_freq) DIV_ROUND_CLOSEST(_freq * 1000, 1001)000114115/* VID PLL Dividers */116enum {···363};364365struct meson_vclk_params {366- unsigned int pll_freq;367- unsigned int phy_freq;368- unsigned int vclk_freq;369- unsigned int venc_freq;370- unsigned int pixel_freq;371 unsigned int pll_od1;372 unsigned int pll_od2;373 unsigned int pll_od3;···375 unsigned int vclk_div;376} params[] = {377 [MESON_VCLK_HDMI_ENCI_54000] = {378- .pll_freq = 4320000,379- .phy_freq = 270000,380- .vclk_freq = 54000,381- .venc_freq = 54000,382- .pixel_freq = 54000,383 .pll_od1 = 4,384 .pll_od2 = 4,385 .pll_od3 = 1,···387 .vclk_div = 1,388 },389 [MESON_VCLK_HDMI_DDR_54000] = {390- .pll_freq = 4320000,391- .phy_freq = 270000,392- .vclk_freq = 54000,393- .venc_freq = 54000,394- .pixel_freq = 27000,395 .pll_od1 = 4,396 .pll_od2 = 4,397 .pll_od3 = 1,···399 .vclk_div = 1,400 },401 [MESON_VCLK_HDMI_DDR_148500] = {402- .pll_freq = 2970000,403- .phy_freq = 742500,404- .vclk_freq = 148500,405- .venc_freq = 148500,406- .pixel_freq = 74250,407 .pll_od1 = 4,408 .pll_od2 = 1,409 .pll_od3 = 1,···411 .vclk_div = 1,412 },413 [MESON_VCLK_HDMI_74250] = {414- .pll_freq = 2970000,415- .phy_freq = 742500,416- .vclk_freq = 74250,417- .venc_freq = 74250,418- .pixel_freq = 74250,419 .pll_od1 = 2,420 .pll_od2 = 2,421 .pll_od3 = 2,···423 .vclk_div = 1,424 },425 [MESON_VCLK_HDMI_148500] = {426- .pll_freq = 2970000,427- .phy_freq = 1485000,428- .vclk_freq = 148500,429- .venc_freq = 148500,430- .pixel_freq = 148500,431 .pll_od1 = 1,432 .pll_od2 = 2,433 .pll_od3 = 2,···435 .vclk_div = 1,436 },437 [MESON_VCLK_HDMI_297000] = {438- .pll_freq = 5940000,439- .phy_freq = 2970000,440- .venc_freq = 297000,441- .vclk_freq = 297000,442- .pixel_freq = 297000,443 .pll_od1 = 2,444 .pll_od2 = 1,445 .pll_od3 = 1,···447 .vclk_div = 2,448 },449 [MESON_VCLK_HDMI_594000] = {450- .pll_freq = 5940000,451- .phy_freq = 5940000,452- .venc_freq = 594000,453- .vclk_freq = 594000,454- .pixel_freq = 594000,455 .pll_od1 = 1,456 .pll_od2 = 1,457 .pll_od3 = 2,···459 .vclk_div = 1,460 },461 [MESON_VCLK_HDMI_594000_YUV420] = {462- .pll_freq = 5940000,463- .phy_freq = 2970000,464- .venc_freq = 594000,465- .vclk_freq = 594000,466- .pixel_freq = 297000,467 .pll_od1 = 2,468 .pll_od2 = 1,469 .pll_od3 = 1,···620 3 << 20, pll_od_to_reg(od3) << 20);621}622623-#define XTAL_FREQ 24000624625static unsigned int meson_hdmi_pll_get_m(struct meson_drm *priv,626- unsigned int pll_freq)627{628 /* The GXBB PLL has a /2 pre-multiplier */629 if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB))630- pll_freq /= 2;631632- return pll_freq / XTAL_FREQ;633}634635#define HDMI_FRAC_MAX_GXBB 4096···638639static unsigned int meson_hdmi_pll_get_frac(struct meson_drm *priv,640 unsigned int m,641- unsigned int pll_freq)642{643- unsigned int parent_freq = XTAL_FREQ;644 unsigned int frac_max = HDMI_FRAC_MAX_GXL;645 unsigned int frac_m;646 unsigned int frac;0647648 /* The GXBB PLL has a /2 pre-multiplier and a larger FRAC width */649 if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) {···656 frac_max = HDMI_FRAC_MAX_G12A;657658 /* We can have a perfect match !*/659- if (pll_freq / m == parent_freq &&660- pll_freq % m == 0)661 return 0;662663- frac = div_u64((u64)pll_freq * (u64)frac_max, parent_freq);664 frac_m = m * frac_max;665 if (frac_m > frac)666 return frac_max;···670}671672static bool meson_hdmi_pll_validate_params(struct meson_drm *priv,673- unsigned int m,674 unsigned int frac)675{676 if 
(meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) {···698}699700static bool meson_hdmi_pll_find_params(struct meson_drm *priv,701- unsigned int freq,702 unsigned int *m,703 unsigned int *frac,704 unsigned int *od)···710 continue;711 *frac = meson_hdmi_pll_get_frac(priv, *m, freq * *od);712713- DRM_DEBUG_DRIVER("PLL params for %dkHz: m=%x frac=%x od=%d\n",714 freq, *m, *frac, *od);715716 if (meson_hdmi_pll_validate_params(priv, *m, *frac))···722723/* pll_freq is the frequency after the OD dividers */724enum drm_mode_status725-meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned int freq)726{727 unsigned int od, m, frac;728···745746/* pll_freq is the frequency after the OD dividers */747static void meson_hdmi_pll_generic_set(struct meson_drm *priv,748- unsigned int pll_freq)749{750 unsigned int od, m, frac, od1, od2, od3;751···760 od1 = od / od2;761 }762763- DRM_DEBUG_DRIVER("PLL params for %dkHz: m=%x frac=%x od=%d/%d/%d\n",764 pll_freq, m, frac, od1, od2, od3);765766 meson_hdmi_pll_set_params(priv, m, frac, od1, od2, od3);···768 return;769 }770771- DRM_ERROR("Fatal, unable to find parameters for PLL freq %d\n",772 pll_freq);773}774775enum drm_mode_status776-meson_vclk_vic_supported_freq(struct meson_drm *priv, unsigned int phy_freq,777- unsigned int vclk_freq)0778{779 int i;780781- DRM_DEBUG_DRIVER("phy_freq = %d vclk_freq = %d\n",782 phy_freq, vclk_freq);783784 /* Check against soc revision/package limits */···790 }791792 for (i = 0 ; params[i].pixel_freq ; ++i) {793- DRM_DEBUG_DRIVER("i = %d pixel_freq = %d alt = %d\n",794 i, params[i].pixel_freq,795- FREQ_1000_1001(params[i].pixel_freq));796- DRM_DEBUG_DRIVER("i = %d phy_freq = %d alt = %d\n",797 i, params[i].phy_freq,798- FREQ_1000_1001(params[i].phy_freq/1000)*1000);799 /* Match strict frequency */800 if (phy_freq == params[i].phy_freq &&801 vclk_freq == params[i].vclk_freq)802 return MODE_OK;803 /* Match 1000/1001 variant */804- if (phy_freq == (FREQ_1000_1001(params[i].phy_freq/1000)*1000) &&805- vclk_freq == FREQ_1000_1001(params[i].vclk_freq))806 return MODE_OK;807 }808···810}811EXPORT_SYMBOL_GPL(meson_vclk_vic_supported_freq);812813-static void meson_vclk_set(struct meson_drm *priv, unsigned int pll_base_freq,814- unsigned int od1, unsigned int od2, unsigned int od3,0815 unsigned int vid_pll_div, unsigned int vclk_div,816 unsigned int hdmi_tx_div, unsigned int venc_div,817 bool hdmi_use_enci, bool vic_alternate_clock)···832 meson_hdmi_pll_generic_set(priv, pll_base_freq);833 } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) {834 switch (pll_base_freq) {835- case 2970000:836 m = 0x3d;837 frac = vic_alternate_clock ? 0xd02 : 0xe00;838 break;839- case 4320000:840 m = vic_alternate_clock ? 0x59 : 0x5a;841 frac = vic_alternate_clock ? 0xe8f : 0;842 break;843- case 5940000:844 m = 0x7b;845 frac = vic_alternate_clock ? 0xa05 : 0xc00;846 break;···850 } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXM) ||851 meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXL)) {852 switch (pll_base_freq) {853- case 2970000:854 m = 0x7b;855 frac = vic_alternate_clock ? 0x281 : 0x300;856 break;857- case 4320000:858 m = vic_alternate_clock ? 0xb3 : 0xb4;859 frac = vic_alternate_clock ? 0x347 : 0;860 break;861- case 5940000:862 m = 0xf7;863 frac = vic_alternate_clock ? 0x102 : 0x200;864 break;···867 meson_hdmi_pll_set_params(priv, m, frac, od1, od2, od3);868 } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) {869 switch (pll_base_freq) {870- case 2970000:871 m = 0x7b;872 frac = vic_alternate_clock ? 
0x140b4 : 0x18000;873 break;874- case 4320000:875 m = vic_alternate_clock ? 0xb3 : 0xb4;876 frac = vic_alternate_clock ? 0x1a3ee : 0;877 break;878- case 5940000:879 m = 0xf7;880 frac = vic_alternate_clock ? 0x8148 : 0x10000;881 break;···1031}10321033void meson_vclk_setup(struct meson_drm *priv, unsigned int target,1034- unsigned int phy_freq, unsigned int vclk_freq,1035- unsigned int venc_freq, unsigned int dac_freq,1036 bool hdmi_use_enci)1037{1038 bool vic_alternate_clock = false;1039- unsigned int freq;1040- unsigned int hdmi_tx_div;1041- unsigned int venc_div;10421043 if (target == MESON_VCLK_TARGET_CVBS) {1044 meson_venci_cvbs_clock_config(priv);···1058 return;1059 }10601061- hdmi_tx_div = vclk_freq / dac_freq;10621063 if (hdmi_tx_div == 0) {1064- pr_err("Fatal Error, invalid HDMI-TX freq %d\n",1065 dac_freq);1066 return;1067 }10681069- venc_div = vclk_freq / venc_freq;10701071 if (venc_div == 0) {1072- pr_err("Fatal Error, invalid HDMI venc freq %d\n",1073 venc_freq);1074 return;1075 }10761077 for (freq = 0 ; params[freq].pixel_freq ; ++freq) {1078 if ((phy_freq == params[freq].phy_freq ||1079- phy_freq == FREQ_1000_1001(params[freq].phy_freq/1000)*1000) &&1080 (vclk_freq == params[freq].vclk_freq ||1081- vclk_freq == FREQ_1000_1001(params[freq].vclk_freq))) {1082 if (vclk_freq != params[freq].vclk_freq)1083 vic_alternate_clock = true;1084 else···1104 }11051106 if (!params[freq].pixel_freq) {1107- pr_err("Fatal Error, invalid HDMI vclk freq %d\n", vclk_freq);01108 return;1109 }1110
···110#define HDMI_PLL_LOCK BIT(31)111#define HDMI_PLL_LOCK_G12A (3 << 30)112113+#define PIXEL_FREQ_1000_1001(_freq) \114+ DIV_ROUND_CLOSEST_ULL((_freq) * 1000ULL, 1001ULL)115+#define PHY_FREQ_1000_1001(_freq) \116+ (PIXEL_FREQ_1000_1001(DIV_ROUND_DOWN_ULL(_freq, 10ULL)) * 10)117118/* VID PLL Dividers */119enum {···360};361362struct meson_vclk_params {363+ unsigned long long pll_freq;364+ unsigned long long phy_freq;365+ unsigned long long vclk_freq;366+ unsigned long long venc_freq;367+ unsigned long long pixel_freq;368 unsigned int pll_od1;369 unsigned int pll_od2;370 unsigned int pll_od3;···372 unsigned int vclk_div;373} params[] = {374 [MESON_VCLK_HDMI_ENCI_54000] = {375+ .pll_freq = 4320000000,376+ .phy_freq = 270000000,377+ .vclk_freq = 54000000,378+ .venc_freq = 54000000,379+ .pixel_freq = 54000000,380 .pll_od1 = 4,381 .pll_od2 = 4,382 .pll_od3 = 1,···384 .vclk_div = 1,385 },386 [MESON_VCLK_HDMI_DDR_54000] = {387+ .pll_freq = 4320000000,388+ .phy_freq = 270000000,389+ .vclk_freq = 54000000,390+ .venc_freq = 54000000,391+ .pixel_freq = 27000000,392 .pll_od1 = 4,393 .pll_od2 = 4,394 .pll_od3 = 1,···396 .vclk_div = 1,397 },398 [MESON_VCLK_HDMI_DDR_148500] = {399+ .pll_freq = 2970000000,400+ .phy_freq = 742500000,401+ .vclk_freq = 148500000,402+ .venc_freq = 148500000,403+ .pixel_freq = 74250000,404 .pll_od1 = 4,405 .pll_od2 = 1,406 .pll_od3 = 1,···408 .vclk_div = 1,409 },410 [MESON_VCLK_HDMI_74250] = {411+ .pll_freq = 2970000000,412+ .phy_freq = 742500000,413+ .vclk_freq = 74250000,414+ .venc_freq = 74250000,415+ .pixel_freq = 74250000,416 .pll_od1 = 2,417 .pll_od2 = 2,418 .pll_od3 = 2,···420 .vclk_div = 1,421 },422 [MESON_VCLK_HDMI_148500] = {423+ .pll_freq = 2970000000,424+ .phy_freq = 1485000000,425+ .vclk_freq = 148500000,426+ .venc_freq = 148500000,427+ .pixel_freq = 148500000,428 .pll_od1 = 1,429 .pll_od2 = 2,430 .pll_od3 = 2,···432 .vclk_div = 1,433 },434 [MESON_VCLK_HDMI_297000] = {435+ .pll_freq = 5940000000,436+ .phy_freq = 2970000000,437+ .venc_freq = 297000000,438+ .vclk_freq = 297000000,439+ .pixel_freq = 297000000,440 .pll_od1 = 2,441 .pll_od2 = 1,442 .pll_od3 = 1,···444 .vclk_div = 2,445 },446 [MESON_VCLK_HDMI_594000] = {447+ .pll_freq = 5940000000,448+ .phy_freq = 5940000000,449+ .venc_freq = 594000000,450+ .vclk_freq = 594000000,451+ .pixel_freq = 594000000,452 .pll_od1 = 1,453 .pll_od2 = 1,454 .pll_od3 = 2,···456 .vclk_div = 1,457 },458 [MESON_VCLK_HDMI_594000_YUV420] = {459+ .pll_freq = 5940000000,460+ .phy_freq = 2970000000,461+ .venc_freq = 594000000,462+ .vclk_freq = 594000000,463+ .pixel_freq = 297000000,464 .pll_od1 = 2,465 .pll_od2 = 1,466 .pll_od3 = 1,···617 3 << 20, pll_od_to_reg(od3) << 20);618}619620+#define XTAL_FREQ (24 * 1000 * 1000)621622static unsigned int meson_hdmi_pll_get_m(struct meson_drm *priv,623+ unsigned long long pll_freq)624{625 /* The GXBB PLL has a /2 pre-multiplier */626 if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB))627+ pll_freq = DIV_ROUND_DOWN_ULL(pll_freq, 2);628629+ return DIV_ROUND_DOWN_ULL(pll_freq, XTAL_FREQ);630}631632#define HDMI_FRAC_MAX_GXBB 4096···635636static unsigned int meson_hdmi_pll_get_frac(struct meson_drm *priv,637 unsigned int m,638+ unsigned long long pll_freq)639{640+ unsigned long long parent_freq = XTAL_FREQ;641 unsigned int frac_max = HDMI_FRAC_MAX_GXL;642 unsigned int frac_m;643 unsigned int frac;644+ u32 remainder;645646 /* The GXBB PLL has a /2 pre-multiplier and a larger FRAC width */647 if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) {···652 frac_max = HDMI_FRAC_MAX_G12A;653654 /* We can 
have a perfect match !*/655+ if (div_u64_rem(pll_freq, m, &remainder) == parent_freq &&656+ remainder == 0)657 return 0;658659+ frac = mul_u64_u64_div_u64(pll_freq, frac_max, parent_freq);660 frac_m = m * frac_max;661 if (frac_m > frac)662 return frac_max;···666}667668static bool meson_hdmi_pll_validate_params(struct meson_drm *priv,669+ unsigned long long m,670 unsigned int frac)671{672 if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) {···694}695696static bool meson_hdmi_pll_find_params(struct meson_drm *priv,697+ unsigned long long freq,698 unsigned int *m,699 unsigned int *frac,700 unsigned int *od)···706 continue;707 *frac = meson_hdmi_pll_get_frac(priv, *m, freq * *od);708709+ DRM_DEBUG_DRIVER("PLL params for %lluHz: m=%x frac=%x od=%d\n",710 freq, *m, *frac, *od);711712 if (meson_hdmi_pll_validate_params(priv, *m, *frac))···718719/* pll_freq is the frequency after the OD dividers */720enum drm_mode_status721+meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned long long freq)722{723 unsigned int od, m, frac;724···741742/* pll_freq is the frequency after the OD dividers */743static void meson_hdmi_pll_generic_set(struct meson_drm *priv,744+ unsigned long long pll_freq)745{746 unsigned int od, m, frac, od1, od2, od3;747···756 od1 = od / od2;757 }758759+ DRM_DEBUG_DRIVER("PLL params for %lluHz: m=%x frac=%x od=%d/%d/%d\n",760 pll_freq, m, frac, od1, od2, od3);761762 meson_hdmi_pll_set_params(priv, m, frac, od1, od2, od3);···764 return;765 }766767+ DRM_ERROR("Fatal, unable to find parameters for PLL freq %lluHz\n",768 pll_freq);769}770771enum drm_mode_status772+meson_vclk_vic_supported_freq(struct meson_drm *priv,773+ unsigned long long phy_freq,774+ unsigned long long vclk_freq)775{776 int i;777778+ DRM_DEBUG_DRIVER("phy_freq = %lluHz vclk_freq = %lluHz\n",779 phy_freq, vclk_freq);780781 /* Check against soc revision/package limits */···785 }786787 for (i = 0 ; params[i].pixel_freq ; ++i) {788+ DRM_DEBUG_DRIVER("i = %d pixel_freq = %lluHz alt = %lluHz\n",789 i, params[i].pixel_freq,790+ PIXEL_FREQ_1000_1001(params[i].pixel_freq));791+ DRM_DEBUG_DRIVER("i = %d phy_freq = %lluHz alt = %lluHz\n",792 i, params[i].phy_freq,793+ PHY_FREQ_1000_1001(params[i].phy_freq));794 /* Match strict frequency */795 if (phy_freq == params[i].phy_freq &&796 vclk_freq == params[i].vclk_freq)797 return MODE_OK;798 /* Match 1000/1001 variant */799+ if (phy_freq == PHY_FREQ_1000_1001(params[i].phy_freq) &&800+ vclk_freq == PIXEL_FREQ_1000_1001(params[i].vclk_freq))801 return MODE_OK;802 }803···805}806EXPORT_SYMBOL_GPL(meson_vclk_vic_supported_freq);807808+static void meson_vclk_set(struct meson_drm *priv,809+ unsigned long long pll_base_freq, unsigned int od1,810+ unsigned int od2, unsigned int od3,811 unsigned int vid_pll_div, unsigned int vclk_div,812 unsigned int hdmi_tx_div, unsigned int venc_div,813 bool hdmi_use_enci, bool vic_alternate_clock)···826 meson_hdmi_pll_generic_set(priv, pll_base_freq);827 } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) {828 switch (pll_base_freq) {829+ case 2970000000:830 m = 0x3d;831 frac = vic_alternate_clock ? 0xd02 : 0xe00;832 break;833+ case 4320000000:834 m = vic_alternate_clock ? 0x59 : 0x5a;835 frac = vic_alternate_clock ? 0xe8f : 0;836 break;837+ case 5940000000:838 m = 0x7b;839 frac = vic_alternate_clock ? 
0xa05 : 0xc00;840 break;···844 } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXM) ||845 meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXL)) {846 switch (pll_base_freq) {847+ case 2970000000:848 m = 0x7b;849 frac = vic_alternate_clock ? 0x281 : 0x300;850 break;851+ case 4320000000:852 m = vic_alternate_clock ? 0xb3 : 0xb4;853 frac = vic_alternate_clock ? 0x347 : 0;854 break;855+ case 5940000000:856 m = 0xf7;857 frac = vic_alternate_clock ? 0x102 : 0x200;858 break;···861 meson_hdmi_pll_set_params(priv, m, frac, od1, od2, od3);862 } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) {863 switch (pll_base_freq) {864+ case 2970000000:865 m = 0x7b;866 frac = vic_alternate_clock ? 0x140b4 : 0x18000;867 break;868+ case 4320000000:869 m = vic_alternate_clock ? 0xb3 : 0xb4;870 frac = vic_alternate_clock ? 0x1a3ee : 0;871 break;872+ case 5940000000:873 m = 0xf7;874 frac = vic_alternate_clock ? 0x8148 : 0x10000;875 break;···1025}10261027void meson_vclk_setup(struct meson_drm *priv, unsigned int target,1028+ unsigned long long phy_freq, unsigned long long vclk_freq,1029+ unsigned long long venc_freq, unsigned long long dac_freq,1030 bool hdmi_use_enci)1031{1032 bool vic_alternate_clock = false;1033+ unsigned long long freq;1034+ unsigned long long hdmi_tx_div;1035+ unsigned long long venc_div;10361037 if (target == MESON_VCLK_TARGET_CVBS) {1038 meson_venci_cvbs_clock_config(priv);···1052 return;1053 }10541055+ hdmi_tx_div = DIV_ROUND_DOWN_ULL(vclk_freq, dac_freq);10561057 if (hdmi_tx_div == 0) {1058+ pr_err("Fatal Error, invalid HDMI-TX freq %lluHz\n",1059 dac_freq);1060 return;1061 }10621063+ venc_div = DIV_ROUND_DOWN_ULL(vclk_freq, venc_freq);10641065 if (venc_div == 0) {1066+ pr_err("Fatal Error, invalid HDMI venc freq %lluHz\n",1067 venc_freq);1068 return;1069 }10701071 for (freq = 0 ; params[freq].pixel_freq ; ++freq) {1072 if ((phy_freq == params[freq].phy_freq ||1073+ phy_freq == PHY_FREQ_1000_1001(params[freq].phy_freq)) &&1074 (vclk_freq == params[freq].vclk_freq ||1075+ vclk_freq == PIXEL_FREQ_1000_1001(params[freq].vclk_freq))) {1076 if (vclk_freq != params[freq].vclk_freq)1077 vic_alternate_clock = true;1078 else···1098 }10991100 if (!params[freq].pixel_freq) {1101+ pr_err("Fatal Error, invalid HDMI vclk freq %lluHz\n",1102+ vclk_freq);1103 return;1104 }1105
+7-6
drivers/gpu/drm/meson/meson_vclk.h
···20};2122/* 27MHz is the CVBS Pixel Clock */23-#define MESON_VCLK_CVBS 270002425enum drm_mode_status26-meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned int freq);27enum drm_mode_status28-meson_vclk_vic_supported_freq(struct meson_drm *priv, unsigned int phy_freq,29- unsigned int vclk_freq);03031void meson_vclk_setup(struct meson_drm *priv, unsigned int target,32- unsigned int phy_freq, unsigned int vclk_freq,33- unsigned int venc_freq, unsigned int dac_freq,34 bool hdmi_use_enci);3536#endif /* __MESON_VCLK_H */
···20};2122/* 27MHz is the CVBS Pixel Clock */23+#define MESON_VCLK_CVBS (27 * 1000 * 1000)2425enum drm_mode_status26+meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned long long freq);27enum drm_mode_status28+meson_vclk_vic_supported_freq(struct meson_drm *priv,29+ unsigned long long phy_freq,30+ unsigned long long vclk_freq);3132void meson_vclk_setup(struct meson_drm *priv, unsigned int target,33+ unsigned long long phy_freq, unsigned long long vclk_freq,34+ unsigned long long venc_freq, unsigned long long dac_freq,35 bool hdmi_use_enci);3637#endif /* __MESON_VCLK_H */
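The meson changes above move every frequency from unsigned int kHz to unsigned long long Hz and split the 1000/1001 handling into two helpers: PIXEL_FREQ_1000_1001() rounds the vclk/pixel rate to the nearest Hz, while PHY_FREQ_1000_1001() drops the last decade before rounding and multiplies it back, which appears intended to keep the alternate PHY rate an exact multiple of the alternate vclk rate for the 10x entries in the params table. A standalone userspace re-derivation of that arithmetic (plain C, not kernel code, assuming the same rounding as DIV_ROUND_CLOSEST_ULL):

#include <stdint.h>
#include <stdio.h>

/* DIV_ROUND_CLOSEST_ULL(f * 1000, 1001) */
static uint64_t pixel_1000_1001(uint64_t f_hz)
{
	return (f_hz * 1000 + 1001 / 2) / 1001;
}

/* PIXEL_FREQ_1000_1001(DIV_ROUND_DOWN_ULL(f, 10)) * 10 */
static uint64_t phy_1000_1001(uint64_t f_hz)
{
	return pixel_1000_1001(f_hz / 10) * 10;
}

int main(void)
{
	/* MESON_VCLK_HDMI_148500: vclk 148.5 MHz, phy 1.485 GHz */
	printf("%llu\n", (unsigned long long)pixel_1000_1001(148500000ULL));	/* 148351648 */
	printf("%llu\n", (unsigned long long)phy_1000_1001(1485000000ULL));	/* 1483516480 = 10 * 148351648 */
	/* rounding the PHY rate directly would give 1483516484, no longer 10x the vclk */
	printf("%llu\n", (unsigned long long)pixel_1000_1001(1485000000ULL));
	return 0;
}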
···128 drm_dev_put(dev);129}130131+static void virtio_gpu_shutdown(struct virtio_device *vdev)132+{133+ /*134+ * drm does its own synchronization on shutdown.135+ * Do nothing here, opt out of device reset.136+ */137+}138+139static void virtio_gpu_config_changed(struct virtio_device *vdev)140{141 struct drm_device *dev = vdev->priv;···162 .id_table = id_table,163 .probe = virtio_gpu_probe,164 .remove = virtio_gpu_remove,165+ .shutdown = virtio_gpu_shutdown,166 .config_changed = virtio_gpu_config_changed167};168
+1
drivers/hwtracing/intel_th/Kconfig
···6061config INTEL_TH_MSU62 tristate "Intel(R) Trace Hub Memory Storage Unit"063 help64 Memory Storage Unit (MSU) trace output device enables65 storing STP traces to system memory. It supports single
···6061config INTEL_TH_MSU62 tristate "Intel(R) Trace Hub Memory Storage Unit"63+ depends on MMU64 help65 Memory Storage Unit (MSU) trace output device enables66 storing STP traces to system memory. It supports single
···3869 struct irq_2_irte *irte_info = &ir_data->irq_2_irte;3870 struct iommu_dev_data *dev_data;38710003872 if (ir_data->iommu == NULL)3873 return -EINVAL;3874···3882 * we should not modify the IRTE3883 */3884 if (!dev_data || !dev_data->use_vapic)3885- return 0;38863887 ir_data->cfg = irqd_cfg(data);3888 pi_data->ir_data = ir_data;3889-3890- /* Note:3891- * SVM tries to set up for VAPIC mode, but we are in3892- * legacy mode. So, we force legacy mode instead.3893- */3894- if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir)) {3895- pr_debug("%s: Fall back to using intr legacy remap\n",3896- __func__);3897- pi_data->is_guest_mode = false;3898- }38993900 pi_data->prev_ga_tag = ir_data->cached_ga_tag;3901 if (pi_data->is_guest_mode) {
···3869 struct irq_2_irte *irte_info = &ir_data->irq_2_irte;3870 struct iommu_dev_data *dev_data;38713872+ if (WARN_ON_ONCE(!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir)))3873+ return -EINVAL;3874+3875 if (ir_data->iommu == NULL)3876 return -EINVAL;3877···3879 * we should not modify the IRTE3880 */3881 if (!dev_data || !dev_data->use_vapic)3882+ return -EINVAL;38833884 ir_data->cfg = irqd_cfg(data);3885 pi_data->ir_data = ir_data;000000000038863887 pi_data->prev_ga_tag = ir_data->cached_ga_tag;3888 if (pi_data->is_guest_mode) {
···117118#define MEI_DEV_ID_LNL_M 0xA870 /* Lunar Lake Point M */1190120#define MEI_DEV_ID_PTL_P 0xE470 /* Panther Lake P */121122/*
···117118#define MEI_DEV_ID_LNL_M 0xA870 /* Lunar Lake Point M */119120+#define MEI_DEV_ID_PTL_H 0xE370 /* Panther Lake H */121#define MEI_DEV_ID_PTL_P 0xE470 /* Panther Lake P */122123/*
+1
drivers/misc/mei/pci-me.c
···124125 {MEI_PCI_DEVICE(MEI_DEV_ID_LNL_M, MEI_ME_PCH15_CFG)},1260127 {MEI_PCI_DEVICE(MEI_DEV_ID_PTL_P, MEI_ME_PCH15_CFG)},128129 /* required last entry */
···167 q->base = base;168 q->base_pa = base_pa;169170- for (i = 0, cur = q->info; i < q->num_descs; i++, cur++)171 cur->desc = base + (i * q->desc_size);00172}173174static void pdsc_cq_map(struct pdsc_cq *cq, void *base, dma_addr_t base_pa)···327 size_t sz;328 int err;329330- /* Scale the descriptor ring length based on number of CPUs and VFs */331- numdescs = max_t(int, PDSC_ADMINQ_MIN_LENGTH, num_online_cpus());332- numdescs += 2 * pci_sriov_get_totalvfs(pdsc->pdev);333- numdescs = roundup_pow_of_two(numdescs);334 err = pdsc_qcq_alloc(pdsc, PDS_CORE_QTYPE_ADMINQ, 0, "adminq",335 PDS_CORE_QCQ_F_CORE | PDS_CORE_QCQ_F_INTR,336 numdescs,
···167 q->base = base;168 q->base_pa = base_pa;169170+ for (i = 0, cur = q->info; i < q->num_descs; i++, cur++) {171 cur->desc = base + (i * q->desc_size);172+ init_completion(&cur->completion);173+ }174}175176static void pdsc_cq_map(struct pdsc_cq *cq, void *base, dma_addr_t base_pa)···325 size_t sz;326 int err;327328+ numdescs = PDSC_ADMINQ_MAX_LENGTH;000329 err = pdsc_qcq_alloc(pdsc, PDS_CORE_QTYPE_ADMINQ, 0, "adminq",330 PDS_CORE_QCQ_F_CORE | PDS_CORE_QCQ_F_INTR,331 numdescs,
+2-2
drivers/net/ethernet/amd/pds_core/core.h
···1617#define PDSC_WATCHDOG_SECS 518#define PDSC_QUEUE_NAME_MAX_SZ 1619-#define PDSC_ADMINQ_MIN_LENGTH 16 /* must be a power of two */20#define PDSC_NOTIFYQ_LENGTH 64 /* must be a power of two */21#define PDSC_TEARDOWN_RECOVERY false22#define PDSC_TEARDOWN_REMOVING true···96 unsigned int bytes;97 unsigned int nbufs;98 struct pdsc_buf_info bufs[PDS_CORE_MAX_FRAGS];99- struct pdsc_wait_context *wc;100 void *dest;101};102
···1617#define PDSC_WATCHDOG_SECS 518#define PDSC_QUEUE_NAME_MAX_SZ 1619+#define PDSC_ADMINQ_MAX_LENGTH 16 /* must be a power of two */20#define PDSC_NOTIFYQ_LENGTH 64 /* must be a power of two */21#define PDSC_TEARDOWN_RECOVERY false22#define PDSC_TEARDOWN_REMOVING true···96 unsigned int bytes;97 unsigned int nbufs;98 struct pdsc_buf_info bufs[PDS_CORE_MAX_FRAGS];99+ struct completion completion;100 void *dest;101};102
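The pds_core hunks above replace the caller-owned wait-context pointer in each descriptor's info struct with an embedded struct completion that is initialized once when the queue is mapped, so a late response can only ever touch memory that lives as long as the queue itself. A minimal sketch of that per-descriptor completion pattern, with hypothetical example_* names:

#include <linux/completion.h>
#include <linux/errno.h>

struct example_q_info {
	struct completion completion;	/* lives as long as the queue */
	void *dest;
};

static void example_q_init(struct example_q_info *info, unsigned int num_descs)
{
	unsigned int i;

	for (i = 0; i < num_descs; i++)
		init_completion(&info[i].completion);
}

static int example_post_and_wait(struct example_q_info *qi, unsigned long tmo)
{
	reinit_completion(&qi->completion);
	/* ... post the descriptor to the device here ... */
	if (!wait_for_completion_timeout(&qi->completion, tmo))
		return -ETIMEDOUT;
	return 0;
}

/* called from the queue clean / interrupt path once the response has landed */
static void example_q_complete(struct example_q_info *qi)
{
	complete(&qi->completion);
}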
+1-3
drivers/net/ethernet/amd/pds_core/devlink.c
···105 .fw_control.opcode = PDS_CORE_CMD_FW_CONTROL,106 .fw_control.oper = PDS_CORE_FW_GET_LIST,107 };108- struct pds_core_fw_list_info fw_list;109 struct pdsc *pdsc = devlink_priv(dl);110 union pds_core_dev_comp comp;111 char buf[32];···118 if (!err)119 memcpy_fromio(&fw_list, pdsc->cmd_regs->data, sizeof(fw_list));120 mutex_unlock(&pdsc->devcmd_lock);121- if (err && err != -EIO)122- return err;123124 listlen = min(fw_list.num_fw_slots, ARRAY_SIZE(fw_list.fw_names));125 for (i = 0; i < listlen; i++) {
···105 .fw_control.opcode = PDS_CORE_CMD_FW_CONTROL,106 .fw_control.oper = PDS_CORE_FW_GET_LIST,107 };108+ struct pds_core_fw_list_info fw_list = {};109 struct pdsc *pdsc = devlink_priv(dl);110 union pds_core_dev_comp comp;111 char buf[32];···118 if (!err)119 memcpy_fromio(&fw_list, pdsc->cmd_regs->data, sizeof(fw_list));120 mutex_unlock(&pdsc->devcmd_lock);00121122 listlen = min(fw_list.num_fw_slots, ARRAY_SIZE(fw_list.fw_names));123 for (i = 0; i < listlen; i++) {
···1850 }1851}18521853+static void enetc_bulk_flip_buff(struct enetc_bdr *rx_ring, int rx_ring_first,1854+ int rx_ring_last)1855+{1856+ while (rx_ring_first != rx_ring_last) {1857+ enetc_flip_rx_buff(rx_ring,1858+ &rx_ring->rx_swbd[rx_ring_first]);1859+ enetc_bdr_idx_inc(rx_ring, &rx_ring_first);1860+ }1861+}1862+1863static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,1864 struct napi_struct *napi, int work_limit,1865 struct bpf_prog *prog)···18681869 while (likely(rx_frm_cnt < work_limit)) {1870 union enetc_rx_bd *rxbd, *orig_rxbd;01871 struct xdp_buff xdp_buff;1872 struct sk_buff *skb;1873+ int orig_i, err;1874 u32 bd_status;018751876 rxbd = enetc_rxbd(rx_ring, i);1877 bd_status = le32_to_cpu(rxbd->r.lstatus);···1887 break;18881889 orig_rxbd = rxbd;01890 orig_i = i;18911892 enetc_build_xdp_buff(rx_ring, bd_status, &rxbd, &i,···1915 rx_ring->stats.xdp_drops++;1916 break;1917 case XDP_PASS:1918+ skb = xdp_build_skb_from_buff(&xdp_buff);1919+ /* Probably under memory pressure, stop NAPI */1920+ if (unlikely(!skb)) {1921+ enetc_xdp_drop(rx_ring, orig_i, i);1922+ rx_ring->stats.xdp_drops++;0001923 goto out;1924+ }1925+1926+ enetc_get_offloads(rx_ring, orig_rxbd, skb);1927+1928+ /* These buffers are about to be owned by the stack.1929+ * Update our buffer cache (the rx_swbd array elements)1930+ * with their other page halves.1931+ */1932+ enetc_bulk_flip_buff(rx_ring, orig_i, i);19331934 napi_gro_receive(napi, skb);1935 break;···1965 enetc_xdp_drop(rx_ring, orig_i, i);1966 rx_ring->stats.xdp_redirect_failures++;1967 } else {1968+ enetc_bulk_flip_buff(rx_ring, orig_i, i);00001969 xdp_redirect_frm_cnt++;1970 rx_ring->stats.xdp_redirect++;1971 }···3362 bdr->buffer_offset = ENETC_RXB_PAD;3363 priv->rx_ring[i] = bdr;33643365+ err = __xdp_rxq_info_reg(&bdr->xdp.rxq, priv->ndev, i, 0,3366+ ENETC_RXB_DMA_SIZE_XDP);3367 if (err)3368 goto free_vector;3369
+20-4
drivers/net/ethernet/mediatek/mtk_eth_soc.c
···4043 mtk_w32(eth, 0x21021000, MTK_FE_INT_GRP);40444045 if (mtk_is_netsys_v3_or_greater(eth)) {4046- /* PSE should not drop port1, port8 and port9 packets */4047- mtk_w32(eth, 0x00000302, PSE_DROP_CFG);000000000000000040484049 /* GDM and CDM Threshold */4050- mtk_w32(eth, 0x00000707, MTK_CDMW0_THRES);4051 mtk_w32(eth, 0x00000077, MTK_CDMW1_THRES);40524053 /* Disable GDM1 RX CRC stripping */···4080 mtk_w32(eth, 0x00000300, PSE_DROP_CFG);40814082 /* PSE should drop packets to port 8/9 on WDMA Rx ring full */4083- mtk_w32(eth, 0x00000300, PSE_PPE0_DROP);40844085 /* PSE Free Queue Flow Control */4086 mtk_w32(eth, 0x01fa01f4, PSE_FQFC_CFG2);
···4043 mtk_w32(eth, 0x21021000, MTK_FE_INT_GRP);40444045 if (mtk_is_netsys_v3_or_greater(eth)) {4046+ /* PSE dummy page mechanism */4047+ mtk_w32(eth, PSE_DUMMY_WORK_GDM(1) | PSE_DUMMY_WORK_GDM(2) |4048+ PSE_DUMMY_WORK_GDM(3) | DUMMY_PAGE_THR, PSE_DUMY_REQ);4049+4050+ /* PSE free buffer drop threshold */4051+ mtk_w32(eth, 0x00600009, PSE_IQ_REV(8));4052+4053+ /* PSE should not drop port8, port9 and port13 packets from4054+ * WDMA Tx4055+ */4056+ mtk_w32(eth, 0x00002300, PSE_DROP_CFG);4057+4058+ /* PSE should drop packets to port8, port9 and port13 on WDMA Rx4059+ * ring full4060+ */4061+ mtk_w32(eth, 0x00002300, PSE_PPE_DROP(0));4062+ mtk_w32(eth, 0x00002300, PSE_PPE_DROP(1));4063+ mtk_w32(eth, 0x00002300, PSE_PPE_DROP(2));40644065 /* GDM and CDM Threshold */4066+ mtk_w32(eth, 0x08000707, MTK_CDMW0_THRES);4067 mtk_w32(eth, 0x00000077, MTK_CDMW1_THRES);40684069 /* Disable GDM1 RX CRC stripping */···4064 mtk_w32(eth, 0x00000300, PSE_DROP_CFG);40654066 /* PSE should drop packets to port 8/9 on WDMA Rx ring full */4067+ mtk_w32(eth, 0x00000300, PSE_PPE_DROP(0));40684069 /* PSE Free Queue Flow Control */4070 mtk_w32(eth, 0x01fa01f4, PSE_FQFC_CFG2);
···151#define PSE_FQFC_CFG1 0x100152#define PSE_FQFC_CFG2 0x104153#define PSE_DROP_CFG 0x108154+#define PSE_PPE_DROP(x) (0x110 + ((x) * 0x4))155+156+/* PSE Last FreeQ Page Request Control */157+#define PSE_DUMY_REQ 0x10C158+/* PSE_DUMY_REQ is not a typo but actually called like that also in159+ * MediaTek's datasheet160+ */161+#define PSE_DUMMY_WORK_GDM(x) BIT(16 + (x))162+#define DUMMY_PAGE_THR 0x1163164/* PSE Input Queue Reservation Register*/165#define PSE_IQ_REV(x) (0x140 + (((x) - 1) << 2))
···37 return __phy_write(phydev, LAN88XX_EXT_PAGE_ACCESS, page);38}3940-static int lan88xx_phy_config_intr(struct phy_device *phydev)41-{42- int rc;43-44- if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {45- /* unmask all source and clear them before enable */46- rc = phy_write(phydev, LAN88XX_INT_MASK, 0x7FFF);47- rc = phy_read(phydev, LAN88XX_INT_STS);48- rc = phy_write(phydev, LAN88XX_INT_MASK,49- LAN88XX_INT_MASK_MDINTPIN_EN_ |50- LAN88XX_INT_MASK_LINK_CHANGE_);51- } else {52- rc = phy_write(phydev, LAN88XX_INT_MASK, 0);53- if (rc)54- return rc;55-56- /* Ack interrupts after they have been disabled */57- rc = phy_read(phydev, LAN88XX_INT_STS);58- }59-60- return rc < 0 ? rc : 0;61-}62-63-static irqreturn_t lan88xx_handle_interrupt(struct phy_device *phydev)64-{65- int irq_status;66-67- irq_status = phy_read(phydev, LAN88XX_INT_STS);68- if (irq_status < 0) {69- phy_error(phydev);70- return IRQ_NONE;71- }72-73- if (!(irq_status & LAN88XX_INT_STS_LINK_CHANGE_))74- return IRQ_NONE;75-76- phy_trigger_machine(phydev);77-78- return IRQ_HANDLED;79-}80-81static int lan88xx_suspend(struct phy_device *phydev)82{83 struct lan88xx_priv *priv = phydev->priv;···487 .config_aneg = lan88xx_config_aneg,488 .link_change_notify = lan88xx_link_change_notify,489490- .config_intr = lan88xx_phy_config_intr,491- .handle_interrupt = lan88xx_handle_interrupt,0492493 .suspend = lan88xx_suspend,494 .resume = genphy_resume,
···37 return __phy_write(phydev, LAN88XX_EXT_PAGE_ACCESS, page);38}390000000000000000000000000000000000000000040static int lan88xx_suspend(struct phy_device *phydev)41{42 struct lan88xx_priv *priv = phydev->priv;···528 .config_aneg = lan88xx_config_aneg,529 .link_change_notify = lan88xx_link_change_notify,530531+ /* Interrupt handling is broken, do not define related532+ * functions to force polling.533+ */534535 .suspend = lan88xx_suspend,536 .resume = genphy_resume,
+13-10
drivers/net/phy/phy_led_triggers.c
···93 if (!phy->phy_num_led_triggers)94 return 0;9596- phy->led_link_trigger = devm_kzalloc(&phy->mdio.dev,97- sizeof(*phy->led_link_trigger),98- GFP_KERNEL);99 if (!phy->led_link_trigger) {100 err = -ENOMEM;101 goto out_clear;···104 if (err)105 goto out_free_link;106107- phy->phy_led_triggers = devm_kcalloc(&phy->mdio.dev,108- phy->phy_num_led_triggers,109- sizeof(struct phy_led_trigger),110- GFP_KERNEL);111 if (!phy->phy_led_triggers) {112 err = -ENOMEM;113 goto out_unreg_link;···127out_unreg:128 while (i--)129 phy_led_trigger_unregister(&phy->phy_led_triggers[i]);130- devm_kfree(&phy->mdio.dev, phy->phy_led_triggers);131out_unreg_link:132 phy_led_trigger_unregister(phy->led_link_trigger);133out_free_link:134- devm_kfree(&phy->mdio.dev, phy->led_link_trigger);135 phy->led_link_trigger = NULL;136out_clear:137 phy->phy_num_led_triggers = 0;···145146 for (i = 0; i < phy->phy_num_led_triggers; i++)147 phy_led_trigger_unregister(&phy->phy_led_triggers[i]);00148149- if (phy->led_link_trigger)150 phy_led_trigger_unregister(phy->led_link_trigger);000151}152EXPORT_SYMBOL_GPL(phy_led_triggers_unregister);
···93 if (!phy->phy_num_led_triggers)94 return 0;9596+ phy->led_link_trigger = kzalloc(sizeof(*phy->led_link_trigger),97+ GFP_KERNEL);098 if (!phy->led_link_trigger) {99 err = -ENOMEM;100 goto out_clear;···105 if (err)106 goto out_free_link;107108+ phy->phy_led_triggers = kcalloc(phy->phy_num_led_triggers,109+ sizeof(struct phy_led_trigger),110+ GFP_KERNEL);0111 if (!phy->phy_led_triggers) {112 err = -ENOMEM;113 goto out_unreg_link;···129out_unreg:130 while (i--)131 phy_led_trigger_unregister(&phy->phy_led_triggers[i]);132+ kfree(phy->phy_led_triggers);133out_unreg_link:134 phy_led_trigger_unregister(phy->led_link_trigger);135out_free_link:136+ kfree(phy->led_link_trigger);137 phy->led_link_trigger = NULL;138out_clear:139 phy->phy_num_led_triggers = 0;···147148 for (i = 0; i < phy->phy_num_led_triggers; i++)149 phy_led_trigger_unregister(&phy->phy_led_triggers[i]);150+ kfree(phy->phy_led_triggers);151+ phy->phy_led_triggers = NULL;152153+ if (phy->led_link_trigger) {154 phy_led_trigger_unregister(phy->led_link_trigger);155+ kfree(phy->led_link_trigger);156+ phy->led_link_trigger = NULL;157+ }158}159EXPORT_SYMBOL_GPL(phy_led_triggers_unregister);
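The phy_led_triggers hunk above drops the devm allocations in favour of plain kzalloc()/kcalloc() and frees the memory, clearing the pointers, in the unregister path, so the trigger memory goes away exactly when the registration does rather than when the devres group is released. A minimal sketch of that pairing, with a hypothetical example_trigger object:

#include <linux/errno.h>
#include <linux/slab.h>

struct example_trigger {
	char name[32];			/* hypothetical registered object */
};

static struct example_trigger *example_trig;

static int example_trigger_register(void)
{
	example_trig = kzalloc(sizeof(*example_trig), GFP_KERNEL);
	if (!example_trig)
		return -ENOMEM;
	/* ... register example_trig with its subsystem ... */
	return 0;
}

static void example_trigger_unregister(void)
{
	if (!example_trig)
		return;
	/* ... unregister example_trig ... */
	kfree(example_trig);
	example_trig = NULL;		/* keeps a second unregister call harmless */
}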
+22-16
drivers/net/phy/phylink.c
···81 unsigned int pcs_state;8283 bool link_failed;084 bool major_config_failed;85 bool mac_supports_eee_ops;86 bool mac_supports_eee;···2546 /* Stop the resolver bringing the link up */2547 __set_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state);25482549- /* Disable the carrier, to prevent transmit timeouts,2550- * but one would hope all packets have been sent. This2551- * also means phylink_resolve() will do nothing.2552- */2553- if (pl->netdev)2554- netif_carrier_off(pl->netdev);2555- else02556 pl->old_link_state = false;025572558 /* We do not call mac_link_down() here as we want the2559 * link to remain up to receive the WoL packets.···2606 if (test_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state)) {2607 /* Wake-on-Lan enabled, MAC handling */26082609- /* Call mac_link_down() so we keep the overall state balanced.2610- * Do this under the state_mutex lock for consistency. This2611- * will cause a "Link Down" message to be printed during2612- * resume, which is harmless - the true link state will be2613- * printed when we run a resolve.2614- */2615- mutex_lock(&pl->state_mutex);2616- phylink_link_down(pl);2617- mutex_unlock(&pl->state_mutex);00026182619 /* Re-apply the link parameters so that all the settings get2620 * restored to the MAC.
···81 unsigned int pcs_state;8283 bool link_failed;84+ bool suspend_link_up;85 bool major_config_failed;86 bool mac_supports_eee_ops;87 bool mac_supports_eee;···2545 /* Stop the resolver bringing the link up */2546 __set_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state);25472548+ pl->suspend_link_up = phylink_link_is_up(pl);2549+ if (pl->suspend_link_up) {2550+ /* Disable the carrier, to prevent transmit timeouts,2551+ * but one would hope all packets have been sent. This2552+ * also means phylink_resolve() will do nothing.2553+ */2554+ if (pl->netdev)2555+ netif_carrier_off(pl->netdev);2556 pl->old_link_state = false;2557+ }25582559 /* We do not call mac_link_down() here as we want the2560 * link to remain up to receive the WoL packets.···2603 if (test_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state)) {2604 /* Wake-on-Lan enabled, MAC handling */26052606+ if (pl->suspend_link_up) {2607+ /* Call mac_link_down() so we keep the overall state2608+ * balanced. Do this under the state_mutex lock for2609+ * consistency. This will cause a "Link Down" message2610+ * to be printed during resume, which is harmless -2611+ * the true link state will be printed when we run a2612+ * resolve.2613+ */2614+ mutex_lock(&pl->state_mutex);2615+ phylink_link_down(pl);2616+ mutex_unlock(&pl->state_mutex);2617+ }26182619 /* Re-apply the link parameters so that all the settings get2620 * restored to the MAC.
+57-12
drivers/net/virtio_net.c
···3342 return NETDEV_TX_OK;3343}33443345-static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)03346{3347 bool running = netif_running(vi->dev);3348···3353 }3354}33553356-static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq)0000000000000000000000000003357{3358 bool running = netif_running(vi->dev);33593360- if (!try_fill_recv(vi, rq, GFP_KERNEL))3361 schedule_delayed_work(&vi->refill, 0);33623363 if (running)3364 virtnet_napi_enable(rq);00000000000000000003365}33663367static int virtnet_rx_resize(struct virtnet_info *vi,···6006 if (prog)6007 bpf_prog_add(prog, vi->max_queue_pairs - 1);6008006009 /* Make sure NAPI is not using any XDP TX queues for RX. */6010 if (netif_running(dev)) {6011- for (i = 0; i < vi->max_queue_pairs; i++) {6012- virtnet_napi_disable(&vi->rq[i]);6013 virtnet_napi_tx_disable(&vi->sq[i]);6014- }6015 }60166017 if (!prog) {···6043 vi->xdp_enabled = false;6044 }604506046 for (i = 0; i < vi->max_queue_pairs; i++) {6047 if (old_prog)6048 bpf_prog_put(old_prog);6049- if (netif_running(dev)) {6050- virtnet_napi_enable(&vi->rq[i]);6051 virtnet_napi_tx_enable(&vi->sq[i]);6052- }6053 }60546055 return 0;···6060 rcu_assign_pointer(vi->rq[i].xdp_prog, old_prog);6061 }606206063 if (netif_running(dev)) {6064- for (i = 0; i < vi->max_queue_pairs; i++) {6065- virtnet_napi_enable(&vi->rq[i]);6066 virtnet_napi_tx_enable(&vi->sq[i]);6067- }6068 }6069 if (prog)6070 bpf_prog_sub(prog, vi->max_queue_pairs - 1);
···3342 return NETDEV_TX_OK;3343}33443345+static void __virtnet_rx_pause(struct virtnet_info *vi,3346+ struct receive_queue *rq)3347{3348 bool running = netif_running(vi->dev);3349···3352 }3353}33543355+static void virtnet_rx_pause_all(struct virtnet_info *vi)3356+{3357+ int i;3358+3359+ /*3360+ * Make sure refill_work does not run concurrently to3361+ * avoid napi_disable race which leads to deadlock.3362+ */3363+ disable_delayed_refill(vi);3364+ cancel_delayed_work_sync(&vi->refill);3365+ for (i = 0; i < vi->max_queue_pairs; i++)3366+ __virtnet_rx_pause(vi, &vi->rq[i]);3367+}3368+3369+static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)3370+{3371+ /*3372+ * Make sure refill_work does not run concurrently to3373+ * avoid napi_disable race which leads to deadlock.3374+ */3375+ disable_delayed_refill(vi);3376+ cancel_delayed_work_sync(&vi->refill);3377+ __virtnet_rx_pause(vi, rq);3378+}3379+3380+static void __virtnet_rx_resume(struct virtnet_info *vi,3381+ struct receive_queue *rq,3382+ bool refill)3383{3384 bool running = netif_running(vi->dev);33853386+ if (refill && !try_fill_recv(vi, rq, GFP_KERNEL))3387 schedule_delayed_work(&vi->refill, 0);33883389 if (running)3390 virtnet_napi_enable(rq);3391+}3392+3393+static void virtnet_rx_resume_all(struct virtnet_info *vi)3394+{3395+ int i;3396+3397+ enable_delayed_refill(vi);3398+ for (i = 0; i < vi->max_queue_pairs; i++) {3399+ if (i < vi->curr_queue_pairs)3400+ __virtnet_rx_resume(vi, &vi->rq[i], true);3401+ else3402+ __virtnet_rx_resume(vi, &vi->rq[i], false);3403+ }3404+}3405+3406+static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq)3407+{3408+ enable_delayed_refill(vi);3409+ __virtnet_rx_resume(vi, rq, true);3410}34113412static int virtnet_rx_resize(struct virtnet_info *vi,···5959 if (prog)5960 bpf_prog_add(prog, vi->max_queue_pairs - 1);59615962+ virtnet_rx_pause_all(vi);5963+5964 /* Make sure NAPI is not using any XDP TX queues for RX. */5965 if (netif_running(dev)) {5966+ for (i = 0; i < vi->max_queue_pairs; i++)05967 virtnet_napi_tx_disable(&vi->sq[i]);05968 }59695970 if (!prog) {···5996 vi->xdp_enabled = false;5997 }59985999+ virtnet_rx_resume_all(vi);6000 for (i = 0; i < vi->max_queue_pairs; i++) {6001 if (old_prog)6002 bpf_prog_put(old_prog);6003+ if (netif_running(dev))06004 virtnet_napi_tx_enable(&vi->sq[i]);06005 }60066007 return 0;···6014 rcu_assign_pointer(vi->rq[i].xdp_prog, old_prog);6015 }60166017+ virtnet_rx_resume_all(vi);6018 if (netif_running(dev)) {6019+ for (i = 0; i < vi->max_queue_pairs; i++)06020 virtnet_napi_tx_enable(&vi->sq[i]);06021 }6022 if (prog)6023 bpf_prog_sub(prog, vi->max_queue_pairs - 1);
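The virtio_net hunk above factors the per-queue body into __virtnet_rx_pause()/__virtnet_rx_resume() and adds *_all() wrappers, so the delayed refill work is cancelled before any NAPI instance is disabled. Below is a minimal, self-contained sketch of that shape; the queue structure, the refill guard and all function names are stand-ins, not the driver's real API.

  #include <stdbool.h>
  #include <stdio.h>

  #define NQUEUES 4

  struct fake_dev {
      bool refill_pending;        /* models the delayed refill work */
      bool napi_enabled[NQUEUES];
  };

  static void cancel_refill(struct fake_dev *d)
  {
      /* Must happen before any queue is paused, otherwise the refill work
       * could race with the pause and deadlock against napi_disable(). */
      d->refill_pending = false;
  }

  static void __pause_one(struct fake_dev *d, int q)
  {
      d->napi_enabled[q] = false;     /* the shared per-queue body */
  }

  static void pause_one(struct fake_dev *d, int q)
  {
      cancel_refill(d);
      __pause_one(d, q);
  }

  static void pause_all(struct fake_dev *d)
  {
      cancel_refill(d);               /* guard taken once for all queues */
      for (int q = 0; q < NQUEUES; q++)
          __pause_one(d, q);
  }

  int main(void)
  {
      struct fake_dev d = { .refill_pending = true,
                            .napi_enabled = { true, true, true, true } };

      pause_all(&d);
      printf("refill_pending=%d napi0=%d\n", d.refill_pending, d.napi_enabled[0]);
      pause_one(&d, 2);
      return 0;
  }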
+13-6
drivers/net/xen-netfront.c
···985 act = bpf_prog_run_xdp(prog, xdp);986 switch (act) {987 case XDP_TX:988- get_page(pdata);989 xdpf = xdp_convert_buff_to_frame(xdp);990- err = xennet_xdp_xmit(queue->info->netdev, 1, &xdpf, 0);991- if (unlikely(!err))992- xdp_return_frame_rx_napi(xdpf);993- else if (unlikely(err < 0))994 trace_xdp_exception(queue->info->netdev, prog, act);000000000995 break;996 case XDP_REDIRECT:997 get_page(pdata);998 err = xdp_do_redirect(queue->info->netdev, xdp, prog);999 *need_xdp_flush = true;1000- if (unlikely(err))1001 trace_xdp_exception(queue->info->netdev, prog, act);001002 break;1003 case XDP_PASS:1004 case XDP_DROP:
···985 act = bpf_prog_run_xdp(prog, xdp);986 switch (act) {987 case XDP_TX:0988 xdpf = xdp_convert_buff_to_frame(xdp);989+ if (unlikely(!xdpf)) {000990 trace_xdp_exception(queue->info->netdev, prog, act);991+ break;992+ }993+ get_page(pdata);994+ err = xennet_xdp_xmit(queue->info->netdev, 1, &xdpf, 0);995+ if (unlikely(err <= 0)) {996+ if (err < 0)997+ trace_xdp_exception(queue->info->netdev, prog, act);998+ xdp_return_frame_rx_napi(xdpf);999+ }1000 break;1001 case XDP_REDIRECT:1002 get_page(pdata);1003 err = xdp_do_redirect(queue->info->netdev, xdp, prog);1004 *need_xdp_flush = true;1005+ if (unlikely(err)) {1006 trace_xdp_exception(queue->info->netdev, prog, act);1007+ xdp_return_buff(xdp);1008+ }1009 break;1010 case XDP_PASS:1011 case XDP_DROP:
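The xen-netfront change reorders the XDP_TX error handling: the buffer-to-frame conversion is checked before a page reference is taken, a transmit that sends nothing gives the frame back, and a negative error additionally traces the exception. The snippet below is only a generic, runnable illustration of that acquire-late/release-on-failure ordering with stand-in names; it is not the netfront or XDP API.

  #include <stdio.h>

  static int refs;                        /* models the page reference count */

  static int convert_frame(int ok)        /* stand-in for the buff->frame step */
  {
      return ok;                          /* 0 means the conversion failed */
  }

  static void handle_tx(int convert_ok, int xmit_ret)
  {
      if (!convert_frame(convert_ok)) {
          printf("trace exception (no reference was taken)\n");
          return;
      }

      refs++;                             /* take the reference only after the
                                             frame is known to be usable */

      if (xmit_ret <= 0) {                /* nothing was sent */
          if (xmit_ret < 0)
              printf("trace exception\n");
          printf("return frame, drop the reference\n");
          refs--;
      }
  }

  int main(void)
  {
      handle_tx(0,  1);   /* conversion fails: nothing leaks */
      handle_tx(1, -1);   /* xmit error: traced, frame returned */
      handle_tx(1,  0);   /* xmit sent nothing: frame returned, no trace */
      handle_tx(1,  1);   /* success: the reference travels with the frame */
      printf("refs still held by the successful frame: %d\n", refs);
      return 0;
  }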
+3
drivers/nvme/target/core.c
···324325 lockdep_assert_held(&nvmet_config_sem);326000327 ops = nvmet_transports[port->disc_addr.trtype];328 if (!ops) {329 up_write(&nvmet_config_sem);
···324325 lockdep_assert_held(&nvmet_config_sem);326327+ if (port->disc_addr.trtype == NVMF_TRTYPE_MAX)328+ return -EINVAL;329+330 ops = nvmet_transports[port->disc_addr.trtype];331 if (!ops) {332 up_write(&nvmet_config_sem);
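The nvmet hunk rejects a transport type of NVMF_TRTYPE_MAX before it is ever used as an index into nvmet_transports[]. The stand-alone sketch below shows the same idea of validating an enum against its sentinel before an array lookup; the names are made up, and it uses ">= sentinel", a slight generalization of the single equality check in the hunk.

  #include <stdio.h>

  enum trtype { TR_RDMA, TR_FC, TR_TCP, TR_MAX };   /* TR_MAX is a sentinel */

  static const char *const transports[TR_MAX] = { "rdma", "fc", "tcp" };

  static const char *lookup(enum trtype t)
  {
      if (t >= TR_MAX)            /* reject the sentinel and anything past it */
          return NULL;            /* caller maps this to -EINVAL */
      return transports[t];
  }

  int main(void)
  {
      const char *t;

      t = lookup(TR_TCP);
      printf("TR_TCP -> %s\n", t ? t : "invalid");
      t = lookup(TR_MAX);
      printf("TR_MAX -> %s\n", t ? t : "invalid (-EINVAL in the driver)");
      return 0;
  }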
+32-8
drivers/nvmem/core.c
···594 cell->nbits = info->nbits;595 cell->np = info->np;596597- if (cell->nbits)598 cell->bytes = DIV_ROUND_UP(cell->nbits + cell->bit_offset,599 BITS_PER_BYTE);00600601 if (!IS_ALIGNED(cell->offset, nvmem->stride)) {602 dev_err(&nvmem->dev,603 "cell %s unaligned to nvmem stride %d\n",604 cell->name ?: "<unknown>", nvmem->stride);605 return -EINVAL;000000000000606 }607608 return 0;···851 if (addr && len == (2 * sizeof(u32))) {852 info.bit_offset = be32_to_cpup(addr++);853 info.nbits = be32_to_cpup(addr);854- if (info.bit_offset >= BITS_PER_BYTE || info.nbits < 1) {00855 dev_err(dev, "nvmem: invalid bits on %pOF\n", child);856 of_node_put(child);857 return -EINVAL;···1646static void nvmem_shift_read_buffer_in_place(struct nvmem_cell_entry *cell, void *buf)1647{1648 u8 *p, *b;1649- int i, extra, bit_offset = cell->bit_offset;016501651 p = b = buf;1652- if (bit_offset) {000001653 /* First shift */1654- *b++ >>= bit_offset;16551656 /* setup rest of the bytes if any */1657 for (i = 1; i < cell->bytes; i++) {1658 /* Get bits from next byte and shift them towards msb */1659- *p |= *b << (BITS_PER_BYTE - bit_offset);16601661- p = b;1662- *b++ >>= bit_offset;1663 }0001664 } else {1665 /* point to the msb */1666 p += cell->bytes - 1;
···594 cell->nbits = info->nbits;595 cell->np = info->np;596597+ if (cell->nbits) {598 cell->bytes = DIV_ROUND_UP(cell->nbits + cell->bit_offset,599 BITS_PER_BYTE);600+ cell->raw_len = ALIGN(cell->bytes, nvmem->word_size);601+ }602603 if (!IS_ALIGNED(cell->offset, nvmem->stride)) {604 dev_err(&nvmem->dev,605 "cell %s unaligned to nvmem stride %d\n",606 cell->name ?: "<unknown>", nvmem->stride);607 return -EINVAL;608+ }609+610+ if (!IS_ALIGNED(cell->raw_len, nvmem->word_size)) {611+ dev_err(&nvmem->dev,612+ "cell %s raw len %zd unaligned to nvmem word size %d\n",613+ cell->name ?: "<unknown>", cell->raw_len,614+ nvmem->word_size);615+616+ if (info->raw_len)617+ return -EINVAL;618+619+ cell->raw_len = ALIGN(cell->raw_len, nvmem->word_size);620 }621622 return 0;···837 if (addr && len == (2 * sizeof(u32))) {838 info.bit_offset = be32_to_cpup(addr++);839 info.nbits = be32_to_cpup(addr);840+ if (info.bit_offset >= BITS_PER_BYTE * info.bytes ||841+ info.nbits < 1 ||842+ info.bit_offset + info.nbits > BITS_PER_BYTE * info.bytes) {843 dev_err(dev, "nvmem: invalid bits on %pOF\n", child);844 of_node_put(child);845 return -EINVAL;···1630static void nvmem_shift_read_buffer_in_place(struct nvmem_cell_entry *cell, void *buf)1631{1632 u8 *p, *b;1633+ int i, extra, bytes_offset;1634+ int bit_offset = cell->bit_offset;16351636 p = b = buf;1637+1638+ bytes_offset = bit_offset / BITS_PER_BYTE;1639+ b += bytes_offset;1640+ bit_offset %= BITS_PER_BYTE;1641+1642+ if (bit_offset % BITS_PER_BYTE) {1643 /* First shift */1644+ *p = *b++ >> bit_offset;16451646 /* setup rest of the bytes if any */1647 for (i = 1; i < cell->bytes; i++) {1648 /* Get bits from next byte and shift them towards msb */1649+ *p++ |= *b << (BITS_PER_BYTE - bit_offset);16501651+ *p = *b++ >> bit_offset;01652 }1653+ } else if (p != b) {1654+ memmove(p, b, cell->bytes - bytes_offset);1655+ p += cell->bytes - 1;1656 } else {1657 /* point to the msb */1658 p += cell->bytes - 1;
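The nvmem change teaches the in-place shift helper to cope with bit offsets larger than a byte: whole bytes are skipped first, then the remaining sub-byte offset is shifted out. The function below is an independent, self-contained re-implementation of that idea for a plain little-endian byte buffer, not the nvmem code itself.

  #include <stdio.h>
  #include <string.h>
  #include <stddef.h>

  /*
   * Shift a little-endian bit field that starts at 'bit_offset' down to bit 0
   * of buf[0], in place.  Only the leading field bytes are meaningful to the
   * caller afterwards; trailing bytes keep stale data.
   */
  static void shift_field_down(unsigned char *buf, size_t bytes, unsigned bit_offset)
  {
      unsigned char *p = buf, *b = buf;
      size_t byte_off = bit_offset / 8;

      bit_offset %= 8;
      b += byte_off;                              /* skip whole bytes first */

      if (bit_offset) {
          *p = *b++ >> bit_offset;                /* first partial byte */
          for (size_t i = byte_off + 1; i < bytes; i++) {
              /* pull the low bits of the next source byte up ... */
              *p++ |= (unsigned char)(*b << (8 - bit_offset));
              /* ... and start the next destination byte */
              *p = *b++ >> bit_offset;
          }
      } else if (p != b) {
          memmove(p, b, bytes - byte_off);        /* byte-aligned: plain move */
      }
  }

  int main(void)
  {
      unsigned char buf[3] = { 0xAB, 0xCD, 0xEF };    /* 0xEFCDAB as 24-bit LE */

      shift_field_down(buf, sizeof(buf), 12);         /* 0xEFCDAB >> 12 = 0x000EFC */
      printf("%02x %02x\n", buf[0], buf[1]);          /* prints "fc 0e" */
      return 0;
  }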
···707 */708int scsi_cdl_enable(struct scsi_device *sdev, bool enable)709{710- struct scsi_mode_data data;711- struct scsi_sense_hdr sshdr;712- struct scsi_vpd *vpd;713- bool is_ata = false;714 char buf[64];0715 int ret;716717 if (!sdev->cdl_supported)718 return -EOPNOTSUPP;719720 rcu_read_lock();721- vpd = rcu_dereference(sdev->vpd_pg89);722- if (vpd)723- is_ata = true;724 rcu_read_unlock();725726 /*727 * For ATA devices, CDL needs to be enabled with a SET FEATURES command.728 */729 if (is_ata) {00730 char *buf_data;731 int len;732···732 if (ret)733 return -EINVAL;734735- /* Enable CDL using the ATA feature page */736 len = min_t(size_t, sizeof(buf),737 data.length - data.header_length -738 data.block_descriptor_length);739 buf_data = buf + data.header_length +740 data.block_descriptor_length;741- if (enable)742- buf_data[4] = 0x02;743- else744- buf_data[4] = 0;00000000000000745746 ret = scsi_mode_select(sdev, 1, 0, buf_data, len, 5 * HZ, 3,747 &data, &sshdr);···767 }768 }7690770 sdev->cdl_enable = enable;771772 return 0;
···707 */708int scsi_cdl_enable(struct scsi_device *sdev, bool enable)709{0000710 char buf[64];711+ bool is_ata;712 int ret;713714 if (!sdev->cdl_supported)715 return -EOPNOTSUPP;716717 rcu_read_lock();718+ is_ata = rcu_dereference(sdev->vpd_pg89);00719 rcu_read_unlock();720721 /*722 * For ATA devices, CDL needs to be enabled with a SET FEATURES command.723 */724 if (is_ata) {725+ struct scsi_mode_data data;726+ struct scsi_sense_hdr sshdr;727 char *buf_data;728 int len;729···735 if (ret)736 return -EINVAL;737738+ /* Enable or disable CDL using the ATA feature page */739 len = min_t(size_t, sizeof(buf),740 data.length - data.header_length -741 data.block_descriptor_length);742 buf_data = buf + data.header_length +743 data.block_descriptor_length;744+745+ /*746+ * If we want to enable CDL and CDL is already enabled on the747+ * device, do nothing. This avoids needlessly resetting the CDL748+ * statistics on the device as that is implied by the CDL enable749+ * action. Similar to this, there is no need to do anything if750+ * we want to disable CDL and CDL is already disabled.751+ */752+ if (enable) {753+ if ((buf_data[4] & 0x03) == 0x02)754+ goto out;755+ buf_data[4] &= ~0x03;756+ buf_data[4] |= 0x02;757+ } else {758+ if ((buf_data[4] & 0x03) == 0x00)759+ goto out;760+ buf_data[4] &= ~0x03;761+ }762763 ret = scsi_mode_select(sdev, 1, 0, buf_data, len, 5 * HZ, 3,764 &data, &sshdr);···756 }757 }758759+out:760 sdev->cdl_enable = enable;761762 return 0;
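The SCSI CDL hunk above manipulates a two-bit enable field inside the mode page buffer and skips the MODE SELECT entirely when the device is already in the requested state. A tiny stand-alone demonstration of that read-modify-write with an early out; the 0x03 mask and 0x02 enable value simply mirror the constants in the hunk, and the helper name is invented.

  #include <stdbool.h>
  #include <stdio.h>

  /* Returns true when the caller still needs to write the page back. */
  static bool set_cdl_bits(unsigned char *byte, bool enable)
  {
      unsigned char want = enable ? 0x02 : 0x00;

      if ((*byte & 0x03) == want)
          return false;           /* already in the requested state */

      *byte &= ~0x03;             /* clear the 2-bit field ... */
      *byte |= want;              /* ... then set the new value  */
      return true;
  }

  int main(void)
  {
      unsigned char b = 0xF0;     /* other bits must be preserved */

      printf("need select: %d, byte: 0x%02x\n", set_cdl_bits(&b, true), b);
      printf("need select: %d, byte: 0x%02x\n", set_cdl_bits(&b, true), b);
      printf("need select: %d, byte: 0x%02x\n", set_cdl_bits(&b, false), b);
      return 0;
  }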
···193 return -EFAULT;194195 /*196- * TIOCL_SELCLEAR, TIOCL_SELPOINTER and TIOCL_SELMOUSEREPORT are OK to197- * use without CAP_SYS_ADMIN as they do not modify the selection.198 */199 switch (v.sel_mode) {200 case TIOCL_SELCLEAR:201 case TIOCL_SELPOINTER:202- case TIOCL_SELMOUSEREPORT:203 break;204 default:205 if (!capable(CAP_SYS_ADMIN))
···193 return -EFAULT;194195 /*196+ * TIOCL_SELCLEAR and TIOCL_SELPOINTER are OK to use without197+ * CAP_SYS_ADMIN as they do not modify the selection.198 */199 switch (v.sel_mode) {200 case TIOCL_SELCLEAR:201 case TIOCL_SELPOINTER:0202 break;203 default:204 if (!capable(CAP_SYS_ADMIN))
+5-7
drivers/ufs/core/ufs-mcq.c
···677 unsigned long flags;678 int err;679680- if (!ufshcd_cmd_inflight(lrbp->cmd)) {681- dev_err(hba->dev,682- "%s: skip abort. cmd at tag %d already completed.\n",683- __func__, tag);684- return FAILED;685- }686-687 /* Skip task abort in case previous aborts failed and report failure */688 if (lrbp->req_abort_skip) {689 dev_err(hba->dev, "%s: skip abort. tag %d failed earlier\n",···685 }686687 hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd));00000688689 if (ufshcd_mcq_sqe_search(hba, hwq, tag)) {690 /*
···677 unsigned long flags;678 int err;6790000000680 /* Skip task abort in case previous aborts failed and report failure */681 if (lrbp->req_abort_skip) {682 dev_err(hba->dev, "%s: skip abort. tag %d failed earlier\n",···692 }693694 hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd));695+ if (!hwq) {696+ dev_err(hba->dev, "%s: skip abort. cmd at tag %d already completed.\n",697+ __func__, tag);698+ return FAILED;699+ }700701 if (ufshcd_mcq_sqe_search(hba, hwq, tag)) {702 /*
···33 ((((c) >> 16) & MCQ_QCFGPTR_MASK) * MCQ_QCFGPTR_UNIT)34#define MCQ_QCFG_SIZE 0x403536+/* De-emphasis for gear-5 */37+#define DEEMPHASIS_3_5_dB 0x0438+#define NO_DEEMPHASIS 0x039+40enum {41 TSTBUS_UAWM,42 TSTBUS_UARM,···795 return ufs_qcom_icc_set_bw(host, bw_table.mem_bw, bw_table.cfg_bw);796}797798+static void ufs_qcom_set_tx_hs_equalizer(struct ufs_hba *hba, u32 gear, u32 tx_lanes)799+{800+ u32 equalizer_val;801+ int ret, i;802+803+ /* Determine the equalizer value based on the gear */804+ equalizer_val = (gear == 5) ? DEEMPHASIS_3_5_dB : NO_DEEMPHASIS;805+806+ for (i = 0; i < tx_lanes; i++) {807+ ret = ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(TX_HS_EQUALIZER, i),808+ equalizer_val);809+ if (ret)810+ dev_err(hba->dev, "%s: failed equalizer lane %d\n",811+ __func__, i);812+ }813+}814+815static int ufs_qcom_pwr_change_notify(struct ufs_hba *hba,816 enum ufs_notify_change_status status,817 const struct ufs_pa_layer_attr *dev_max_params,···846 dev_req_params->gear_tx,847 PA_INITIAL_ADAPT);848 }849+850+ if (hba->dev_quirks & UFS_DEVICE_QUIRK_PA_TX_DEEMPHASIS_TUNING)851+ ufs_qcom_set_tx_hs_equalizer(hba,852+ dev_req_params->gear_tx, dev_req_params->lane_tx);853+854 break;855 case POST_CHANGE:856 if (ufs_qcom_cfg_timers(hba, false)) {···893 (pa_vs_config_reg1 | (1 << 12)));894}895896+static void ufs_qcom_override_pa_tx_hsg1_sync_len(struct ufs_hba *hba)897+{898+ int err;899+900+ err = ufshcd_dme_peer_set(hba, UIC_ARG_MIB(PA_TX_HSG1_SYNC_LENGTH),901+ PA_TX_HSG1_SYNC_LENGTH_VAL);902+ if (err)903+ dev_err(hba->dev, "Failed (%d) set PA_TX_HSG1_SYNC_LENGTH\n", err);904+}905+906static int ufs_qcom_apply_dev_quirks(struct ufs_hba *hba)907{908 int err = 0;909910 if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME)911 err = ufs_qcom_quirk_host_pa_saveconfigtime(hba);912+913+ if (hba->dev_quirks & UFS_DEVICE_QUIRK_PA_TX_HSG1_SYNC_LENGTH)914+ ufs_qcom_override_pa_tx_hsg1_sync_len(hba);915916 return err;917}···914 { .wmanufacturerid = UFS_VENDOR_WDC,915 .model = UFS_ANY_MODEL,916 .quirk = UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE },917+ { .wmanufacturerid = UFS_VENDOR_SAMSUNG,918+ .model = UFS_ANY_MODEL,919+ .quirk = UFS_DEVICE_QUIRK_PA_TX_HSG1_SYNC_LENGTH |920+ UFS_DEVICE_QUIRK_PA_TX_DEEMPHASIS_TUNING },921 {}922};923
+18
drivers/ufs/host/ufs-qcom.h
···122 TMRLUT_HW_CGC_EN | OCSC_HW_CGC_EN)123124/* QUniPro Vendor specific attributes */0125#define PA_VS_CONFIG_REG1 0x9000126#define DME_VS_CORE_CLK_CTRL 0xD00200127/* bit and mask definitions for DME_VS_CORE_CLK_CTRL attribute */128#define CLK_1US_CYCLES_MASK_V4 GENMASK(27, 16)129#define CLK_1US_CYCLES_MASK GENMASK(7, 0)···143#define UNIPRO_CORE_CLK_FREQ_300_MHZ 300144#define UNIPRO_CORE_CLK_FREQ_201_5_MHZ 202145#define UNIPRO_CORE_CLK_FREQ_403_MHZ 403000000000000000146147/* ICE allocator type to share AES engines among TX stream and RX stream */148#define ICE_ALLOCATOR_TYPE 2
···122 TMRLUT_HW_CGC_EN | OCSC_HW_CGC_EN)123124/* QUniPro Vendor specific attributes */125+#define PA_TX_HSG1_SYNC_LENGTH 0x1552126#define PA_VS_CONFIG_REG1 0x9000127#define DME_VS_CORE_CLK_CTRL 0xD002128+#define TX_HS_EQUALIZER 0x0037129+130/* bit and mask definitions for DME_VS_CORE_CLK_CTRL attribute */131#define CLK_1US_CYCLES_MASK_V4 GENMASK(27, 16)132#define CLK_1US_CYCLES_MASK GENMASK(7, 0)···140#define UNIPRO_CORE_CLK_FREQ_300_MHZ 300141#define UNIPRO_CORE_CLK_FREQ_201_5_MHZ 202142#define UNIPRO_CORE_CLK_FREQ_403_MHZ 403143+144+/* TX_HSG1_SYNC_LENGTH attr value */145+#define PA_TX_HSG1_SYNC_LENGTH_VAL 0x4A146+147+/*148+ * Some ufs device vendors need a different TSync length.149+ * Enable this quirk to give an additional TX_HS_SYNC_LENGTH.150+ */151+#define UFS_DEVICE_QUIRK_PA_TX_HSG1_SYNC_LENGTH BIT(16)152+153+/*154+ * Some ufs device vendors need a different Deemphasis setting.155+ * Enable this quirk to tune TX Deemphasis parameters.156+ */157+#define UFS_DEVICE_QUIRK_PA_TX_DEEMPHASIS_TUNING BIT(17)158159/* ICE allocator type to share AES engines among TX stream and RX stream */160#define ICE_ALLOCATOR_TYPE 2
+2
drivers/usb/cdns3/cdns3-gadget.c
···1963 unsigned int bit;1964 unsigned long reg;196501966 spin_lock_irqsave(&priv_dev->lock, flags);19671968 reg = readl(&priv_dev->regs->usb_ists);···2005irqend:2006 writel(~0, &priv_dev->regs->ep_ien);2007 spin_unlock_irqrestore(&priv_dev->lock, flags);020082009 return ret;2010}
···1963 unsigned int bit;1964 unsigned long reg;19651966+ local_bh_disable();1967 spin_lock_irqsave(&priv_dev->lock, flags);19681969 reg = readl(&priv_dev->regs->usb_ists);···2004irqend:2005 writel(~0, &priv_dev->regs->ep_ien);2006 spin_unlock_irqrestore(&priv_dev->lock, flags);2007+ local_bh_enable();20082009 return ret;2010}
+31-13
drivers/usb/chipidea/ci_hdrc_imx.c
···336 return ret;337}3380000000339static int ci_hdrc_imx_probe(struct platform_device *pdev)340{341 struct ci_hdrc_imx_data *data;···401 "Failed to enable HSIC pad regulator\n");402 goto err_put;403 }0000000404 }405 }406···446447 ret = imx_get_clks(dev);448 if (ret)449- goto disable_hsic_regulator;450451 ret = imx_prepare_enable_clks(dev);452 if (ret)453- goto disable_hsic_regulator;454455 ret = clk_prepare_enable(data->clk_wakeup);456 if (ret)···484 of_usb_get_phy_mode(np) == USBPHY_INTERFACE_MODE_ULPI) {485 pdata.flags |= CI_HDRC_OVERRIDE_PHY_CONTROL;486 data->override_phy_control = true;487- usb_phy_init(pdata.usb_phy);0000488 }489490 if (pdata.flags & CI_HDRC_SUPPORTS_RUNTIME_PM)···497 ret = imx_usbmisc_init(data->usbmisc_data);498 if (ret) {499 dev_err(dev, "usbmisc init failed, ret=%d\n", ret);500- goto err_clk;501 }502503 data->ci_pdev = ci_hdrc_add_device(dev,···506 if (IS_ERR(data->ci_pdev)) {507 ret = PTR_ERR(data->ci_pdev);508 dev_err_probe(dev, ret, "ci_hdrc_add_device failed\n");509- goto err_clk;510 }511512 if (data->usbmisc_data) {···540541disable_device:542 ci_hdrc_remove_device(data->ci_pdev);000543err_clk:544 clk_disable_unprepare(data->clk_wakeup);545err_wakeup_clk:546 imx_disable_unprepare_clks(dev);547-disable_hsic_regulator:548- if (data->hsic_pad_regulator)549- /* don't overwrite original ret (cf. EPROBE_DEFER) */550- regulator_disable(data->hsic_pad_regulator);551 if (pdata.flags & CI_HDRC_PMQOS)552 cpu_latency_qos_remove_request(&data->pm_qos_req);553 data->ci_pdev = NULL;554err_put:555- put_device(data->usbmisc_data->dev);0556 return ret;557}558···575 clk_disable_unprepare(data->clk_wakeup);576 if (data->plat_data->flags & CI_HDRC_PMQOS)577 cpu_latency_qos_remove_request(&data->pm_qos_req);578- if (data->hsic_pad_regulator)579- regulator_disable(data->hsic_pad_regulator);580 }581- put_device(data->usbmisc_data->dev);0582}583584static void ci_hdrc_imx_shutdown(struct platform_device *pdev)
···336 return ret;337}338339+static void ci_hdrc_imx_disable_regulator(void *arg)340+{341+ struct ci_hdrc_imx_data *data = arg;342+343+ regulator_disable(data->hsic_pad_regulator);344+}345+346static int ci_hdrc_imx_probe(struct platform_device *pdev)347{348 struct ci_hdrc_imx_data *data;···394 "Failed to enable HSIC pad regulator\n");395 goto err_put;396 }397+ ret = devm_add_action_or_reset(dev,398+ ci_hdrc_imx_disable_regulator, data);399+ if (ret) {400+ dev_err(dev,401+ "Failed to add regulator devm action\n");402+ goto err_put;403+ }404 }405 }406···432433 ret = imx_get_clks(dev);434 if (ret)435+ goto qos_remove_request;436437 ret = imx_prepare_enable_clks(dev);438 if (ret)439+ goto qos_remove_request;440441 ret = clk_prepare_enable(data->clk_wakeup);442 if (ret)···470 of_usb_get_phy_mode(np) == USBPHY_INTERFACE_MODE_ULPI) {471 pdata.flags |= CI_HDRC_OVERRIDE_PHY_CONTROL;472 data->override_phy_control = true;473+ ret = usb_phy_init(pdata.usb_phy);474+ if (ret) {475+ dev_err(dev, "Failed to init phy\n");476+ goto err_clk;477+ }478 }479480 if (pdata.flags & CI_HDRC_SUPPORTS_RUNTIME_PM)···479 ret = imx_usbmisc_init(data->usbmisc_data);480 if (ret) {481 dev_err(dev, "usbmisc init failed, ret=%d\n", ret);482+ goto phy_shutdown;483 }484485 data->ci_pdev = ci_hdrc_add_device(dev,···488 if (IS_ERR(data->ci_pdev)) {489 ret = PTR_ERR(data->ci_pdev);490 dev_err_probe(dev, ret, "ci_hdrc_add_device failed\n");491+ goto phy_shutdown;492 }493494 if (data->usbmisc_data) {···522523disable_device:524 ci_hdrc_remove_device(data->ci_pdev);525+phy_shutdown:526+ if (data->override_phy_control)527+ usb_phy_shutdown(data->phy);528err_clk:529 clk_disable_unprepare(data->clk_wakeup);530err_wakeup_clk:531 imx_disable_unprepare_clks(dev);532+qos_remove_request:000533 if (pdata.flags & CI_HDRC_PMQOS)534 cpu_latency_qos_remove_request(&data->pm_qos_req);535 data->ci_pdev = NULL;536err_put:537+ if (data->usbmisc_data)538+ put_device(data->usbmisc_data->dev);539 return ret;540}541···556 clk_disable_unprepare(data->clk_wakeup);557 if (data->plat_data->flags & CI_HDRC_PMQOS)558 cpu_latency_qos_remove_request(&data->pm_qos_req);00559 }560+ if (data->usbmisc_data)561+ put_device(data->usbmisc_data->dev);562}563564static void ci_hdrc_imx_shutdown(struct platform_device *pdev)
+16-5
drivers/usb/class/cdc-wdm.c
···726 rv = -EBUSY;727 goto out;728 }729-730 rv = usb_autopm_get_interface(desc->intf);731 if (rv < 0) {732 dev_err(&desc->intf->dev, "Error autopm - %d\n", rv);···829static int wdm_wwan_port_start(struct wwan_port *port)830{831 struct wdm_device *desc = wwan_port_get_drvdata(port);0832833 /* The interface is both exposed via the WWAN framework and as a834 * legacy usbmisc chardev. If chardev is already open, just fail···849 wwan_port_txon(port);850851 /* Start getting events */852- return usb_submit_urb(desc->validity, GFP_KERNEL);00000000853}854855static void wdm_wwan_port_stop(struct wwan_port *port)···868 poison_urbs(desc);869 desc->manage_power(desc->intf, 0);870 clear_bit(WDM_READ, &desc->flags);871- clear_bit(WDM_WWAN_IN_USE, &desc->flags);872 unpoison_urbs(desc);000873}874875static void wdm_wwan_port_tx_complete(struct urb *urb)···879 struct sk_buff *skb = urb->context;880 struct wdm_device *desc = skb_shinfo(skb)->destructor_arg;881882- usb_autopm_put_interface(desc->intf);883 wwan_port_txon(desc->wwanp);884 kfree_skb(skb);885}···909 req->bRequestType = (USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE);910 req->bRequest = USB_CDC_SEND_ENCAPSULATED_COMMAND;911 req->wValue = 0;912- req->wIndex = desc->inum;913 req->wLength = cpu_to_le16(skb->len);914915 skb_shinfo(skb)->destructor_arg = desc;
···726 rv = -EBUSY;727 goto out;728 }729+ smp_rmb(); /* ordered against wdm_wwan_port_stop() */730 rv = usb_autopm_get_interface(desc->intf);731 if (rv < 0) {732 dev_err(&desc->intf->dev, "Error autopm - %d\n", rv);···829static int wdm_wwan_port_start(struct wwan_port *port)830{831 struct wdm_device *desc = wwan_port_get_drvdata(port);832+ int rv;833834 /* The interface is both exposed via the WWAN framework and as a835 * legacy usbmisc chardev. If chardev is already open, just fail···848 wwan_port_txon(port);849850 /* Start getting events */851+ rv = usb_submit_urb(desc->validity, GFP_KERNEL);852+ if (rv < 0) {853+ wwan_port_txoff(port);854+ desc->manage_power(desc->intf, 0);855+ /* this must be last lest we race with chardev open */856+ clear_bit(WDM_WWAN_IN_USE, &desc->flags);857+ }858+859+ return rv;860}861862static void wdm_wwan_port_stop(struct wwan_port *port)···859 poison_urbs(desc);860 desc->manage_power(desc->intf, 0);861 clear_bit(WDM_READ, &desc->flags);0862 unpoison_urbs(desc);863+ smp_wmb(); /* ordered against wdm_open() */864+ /* this must be last lest we open a poisoned device */865+ clear_bit(WDM_WWAN_IN_USE, &desc->flags);866}867868static void wdm_wwan_port_tx_complete(struct urb *urb)···868 struct sk_buff *skb = urb->context;869 struct wdm_device *desc = skb_shinfo(skb)->destructor_arg;870871+ usb_autopm_put_interface_async(desc->intf);872 wwan_port_txon(desc->wwanp);873 kfree_skb(skb);874}···898 req->bRequestType = (USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE);899 req->bRequest = USB_CDC_SEND_ENCAPSULATED_COMMAND;900 req->wValue = 0;901+ req->wIndex = desc->inum; /* already converted */902 req->wLength = cpu_to_le16(skb->len);903904 skb_shinfo(skb)->destructor_arg = desc;
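The cdc-wdm fix moves the clearing of WDM_WWAN_IN_USE to the very end of port stop and pairs it with smp_wmb()/smp_rmb(), so an open racing with stop never sees the flag clear while the teardown writes are still in flight. The fragment below expresses the same publish/consume ordering with C11 release/acquire atomics in plain userspace code; it is an analogy, not the kernel barrier API.

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdio.h>

  static int teardown_done;               /* ordinary data guarded by the flag */
  static atomic_bool in_use = true;

  static void port_stop(void)
  {
      teardown_done = 1;                  /* all teardown writes happen first */
      /* release: publishes teardown_done before the flag clears */
      atomic_store_explicit(&in_use, false, memory_order_release);
  }

  static bool chardev_open(void)
  {
      /* acquire: if we observe the flag clear, we also observe the teardown */
      if (atomic_load_explicit(&in_use, memory_order_acquire))
          return false;                   /* still busy, like -EBUSY above */
      return teardown_done == 1;          /* guaranteed by the pairing */
  }

  int main(void)
  {
      printf("open before stop: %d\n", chardev_open());
      port_stop();
      printf("open after stop:  %d\n", chardev_open());
      return 0;
  }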
···165 return 0;166}167168+static int ohci_quirk_loongson(struct usb_hcd *hcd)169+{170+ struct pci_dev *pdev = to_pci_dev(hcd->self.controller);171+172+ /*173+ * Loongson's LS7A OHCI controller (rev 0x02) has a174+ * flaw. MMIO register with offset 0x60/64 is treated175+ * as legacy PS2-compatible keyboard/mouse interface.176+ * Since OHCI only use 4KB BAR resource, LS7A OHCI's177+ * 32KB BAR is wrapped around (the 2nd 4KB BAR space178+ * is the same as the 1st 4KB internally). So add 4KB179+ * offset (0x1000) to the OHCI registers as a quirk.180+ */181+ if (pdev->revision == 0x2)182+ hcd->regs += SZ_4K; /* SZ_4K = 0x1000 */183+184+ return 0;185+}186+187static int ohci_quirk_qemu(struct usb_hcd *hcd)188{189 struct ohci_hcd *ohci = hcd_to_ohci(hcd);···223 {224 PCI_DEVICE(PCI_VENDOR_ID_ATI, 0x4399),225 .driver_data = (unsigned long)ohci_quirk_amd700,226+ },227+ {228+ PCI_DEVICE(PCI_VENDOR_ID_LOONGSON, 0x7a24),229+ .driver_data = (unsigned long)ohci_quirk_loongson,230 },231 {232 .vendor = PCI_VENDOR_ID_APPLE,
+16-14
drivers/usb/host/xhci-hub.c
···1878 int max_ports, port_index;1879 int sret;1880 u32 next_state;1881- u32 temp, portsc;1882 struct xhci_hub *rhub;1883 struct xhci_port **ports;018841885 rhub = xhci_get_rhub(hcd);1886 ports = rhub->ports;···1897 return -ESHUTDOWN;1898 }18991900- /* delay the irqs */1901- temp = readl(&xhci->op_regs->command);1902- temp &= ~CMD_EIE;1903- writel(temp, &xhci->op_regs->command);1904-1905 /* bus specific resume for ports we suspended at bus_suspend */1906- if (hcd->speed >= HCD_USB3)1907 next_state = XDEV_U0;1908- else1909 next_state = XDEV_RESUME;1910-000000001911 port_index = max_ports;1912 while (port_index--) {1913 portsc = readl(ports[port_index]->addr);···1978 (void) readl(&xhci->op_regs->command);19791980 bus_state->next_statechange = jiffies + msecs_to_jiffies(5);1981- /* re-enable irqs */1982- temp = readl(&xhci->op_regs->command);1983- temp |= CMD_EIE;1984- writel(temp, &xhci->op_regs->command);1985- temp = readl(&xhci->op_regs->command);19861987 spin_unlock_irqrestore(&xhci->lock, flags);1988 return 0;
···1878 int max_ports, port_index;1879 int sret;1880 u32 next_state;1881+ u32 portsc;1882 struct xhci_hub *rhub;1883 struct xhci_port **ports;1884+ bool disabled_irq = false;18851886 rhub = xhci_get_rhub(hcd);1887 ports = rhub->ports;···1896 return -ESHUTDOWN;1897 }1898000001899 /* bus specific resume for ports we suspended at bus_suspend */1900+ if (hcd->speed >= HCD_USB3) {1901 next_state = XDEV_U0;1902+ } else {1903 next_state = XDEV_RESUME;1904+ if (bus_state->bus_suspended) {1905+ /*1906+ * prevent port event interrupts from interfering1907+ * with usb2 port resume process1908+ */1909+ xhci_disable_interrupter(xhci->interrupters[0]);1910+ disabled_irq = true;1911+ }1912+ }1913 port_index = max_ports;1914 while (port_index--) {1915 portsc = readl(ports[port_index]->addr);···1974 (void) readl(&xhci->op_regs->command);19751976 bus_state->next_statechange = jiffies + msecs_to_jiffies(5);1977+ /* re-enable interrupter */1978+ if (disabled_irq)1979+ xhci_enable_interrupter(xhci->interrupters[0]);0019801981 spin_unlock_irqrestore(&xhci->lock, flags);1982 return 0;
+4-7
drivers/usb/host/xhci-ring.c
···561 * pointer command pending because the device can choose to start any562 * stream once the endpoint is on the HW schedule.563 */564- if (ep_state & (EP_STOP_CMD_PENDING | SET_DEQ_PENDING | EP_HALTED |565- EP_CLEARING_TT | EP_STALLED))566 return;567568 trace_xhci_ring_ep_doorbell(slot_id, DB_VALUE(ep_index, stream_id));···25732574 xhci_handle_halted_endpoint(xhci, ep, td, EP_SOFT_RESET);2575 return;2576- case COMP_STALL_ERROR:2577- ep->ep_state |= EP_STALLED;2578- break;2579 default:2580 /* do nothing */2581 break;···2913 if (xhci_spurious_success_tx_event(xhci, ep_ring)) {2914 xhci_dbg(xhci, "Spurious event dma %pad, comp_code %u after %u\n",2915 &ep_trb_dma, trb_comp_code, ep_ring->old_trb_comp_code);2916- ep_ring->old_trb_comp_code = trb_comp_code;2917 return 0;2918 }2919···3777 * enqueue a No Op TRB, this can prevent the Setup and Data Stage3778 * TRB to be breaked by the Link TRB.3779 */3780- if (trb_is_link(ep_ring->enqueue + 1)) {3781 field = TRB_TYPE(TRB_TR_NOOP) | ep_ring->cycle_state;3782 queue_trb(xhci, ep_ring, false, 0, 0,3783 TRB_INTR_TARGET(0), field);
···561 * pointer command pending because the device can choose to start any562 * stream once the endpoint is on the HW schedule.563 */564+ if ((ep_state & EP_STOP_CMD_PENDING) || (ep_state & SET_DEQ_PENDING) ||565+ (ep_state & EP_HALTED) || (ep_state & EP_CLEARING_TT))566 return;567568 trace_xhci_ring_ep_doorbell(slot_id, DB_VALUE(ep_index, stream_id));···25732574 xhci_handle_halted_endpoint(xhci, ep, td, EP_SOFT_RESET);2575 return;0002576 default:2577 /* do nothing */2578 break;···2916 if (xhci_spurious_success_tx_event(xhci, ep_ring)) {2917 xhci_dbg(xhci, "Spurious event dma %pad, comp_code %u after %u\n",2918 &ep_trb_dma, trb_comp_code, ep_ring->old_trb_comp_code);2919+ ep_ring->old_trb_comp_code = 0;2920 return 0;2921 }2922···3780 * enqueue a No Op TRB, this can prevent the Setup and Data Stage3781 * TRB to be breaked by the Link TRB.3782 */3783+ if (last_trb_on_seg(ep_ring->enq_seg, ep_ring->enqueue + 1)) {3784 field = TRB_TYPE(TRB_TR_NOOP) | ep_ring->cycle_state;3785 queue_trb(xhci, ep_ring, false, 0, 0,3786 TRB_INTR_TARGET(0), field);
+5-13
drivers/usb/host/xhci.c
···322 xhci_info(xhci, "Fault detected\n");323}324325-static int xhci_enable_interrupter(struct xhci_interrupter *ir)326{327 u32 iman;328···335 return 0;336}337338-static int xhci_disable_interrupter(struct xhci_interrupter *ir)339{340 u32 iman;341···1605 goto free_priv;1606 }16071608- /* Class driver might not be aware ep halted due to async URB giveback */1609- if (*ep_state & EP_STALLED)1610- dev_dbg(&urb->dev->dev, "URB %p queued before clearing halt\n",1611- urb);1612-1613 switch (usb_endpoint_type(&urb->ep->desc)) {16141615 case USB_ENDPOINT_XFER_CONTROL:···1765 goto done;1766 }17671768- /* In these cases no commands are pending but the endpoint is stopped */1769- if (ep->ep_state & (EP_CLEARING_TT | EP_STALLED)) {1770 /* and cancelled TDs can be given back right away */1771 xhci_dbg(xhci, "Invalidating TDs instantly on slot %d ep %d in state 0x%x\n",1772 urb->dev->slot_id, ep_index, ep->ep_state);···32043205 ep = &vdev->eps[ep_index];32063207- spin_lock_irqsave(&xhci->lock, flags);3208-3209- ep->ep_state &= ~EP_STALLED;3210-3211 /* Bail out if toggle is already being cleared by a endpoint reset */03212 if (ep->ep_state & EP_HARD_CLEAR_TOGGLE) {3213 ep->ep_state &= ~EP_HARD_CLEAR_TOGGLE;3214 spin_unlock_irqrestore(&xhci->lock, flags);
···322 xhci_info(xhci, "Fault detected\n");323}324325+int xhci_enable_interrupter(struct xhci_interrupter *ir)326{327 u32 iman;328···335 return 0;336}337338+int xhci_disable_interrupter(struct xhci_interrupter *ir)339{340 u32 iman;341···1605 goto free_priv;1606 }1607000001608 switch (usb_endpoint_type(&urb->ep->desc)) {16091610 case USB_ENDPOINT_XFER_CONTROL:···1770 goto done;1771 }17721773+ /* In this case no commands are pending but the endpoint is stopped */1774+ if (ep->ep_state & EP_CLEARING_TT) {1775 /* and cancelled TDs can be given back right away */1776 xhci_dbg(xhci, "Invalidating TDs instantly on slot %d ep %d in state 0x%x\n",1777 urb->dev->slot_id, ep_index, ep->ep_state);···32093210 ep = &vdev->eps[ep_index];321100003212 /* Bail out if toggle is already being cleared by a endpoint reset */3213+ spin_lock_irqsave(&xhci->lock, flags);3214 if (ep->ep_state & EP_HARD_CLEAR_TOGGLE) {3215 ep->ep_state &= ~EP_HARD_CLEAR_TOGGLE;3216 spin_unlock_irqrestore(&xhci->lock, flags);
+3-2
drivers/usb/host/xhci.h
···664 unsigned int err_count;665 unsigned int ep_state;666#define SET_DEQ_PENDING (1 << 0)667-#define EP_HALTED (1 << 1) /* Halted host ep handling */668#define EP_STOP_CMD_PENDING (1 << 2) /* For URB cancellation */669/* Transitioning the endpoint to using streams, don't enqueue URBs */670#define EP_GETTING_STREAMS (1 << 3)···675#define EP_SOFT_CLEAR_TOGGLE (1 << 7)676/* usb_hub_clear_tt_buffer is in progress */677#define EP_CLEARING_TT (1 << 8)678-#define EP_STALLED (1 << 9) /* For stall handling */679 /* ---- Related to URB cancellation ---- */680 struct list_head cancelled_td_list;681 struct xhci_hcd *xhci;···1890 struct usb_tt *tt, gfp_t mem_flags);1891int xhci_set_interrupter_moderation(struct xhci_interrupter *ir,1892 u32 imod_interval);0018931894/* xHCI ring, segment, TRB, and TD functions */1895dma_addr_t xhci_trb_virt_to_dma(struct xhci_segment *seg, union xhci_trb *trb);
···664 unsigned int err_count;665 unsigned int ep_state;666#define SET_DEQ_PENDING (1 << 0)667+#define EP_HALTED (1 << 1) /* For stall handling */668#define EP_STOP_CMD_PENDING (1 << 2) /* For URB cancellation */669/* Transitioning the endpoint to using streams, don't enqueue URBs */670#define EP_GETTING_STREAMS (1 << 3)···675#define EP_SOFT_CLEAR_TOGGLE (1 << 7)676/* usb_hub_clear_tt_buffer is in progress */677#define EP_CLEARING_TT (1 << 8)0678 /* ---- Related to URB cancellation ---- */679 struct list_head cancelled_td_list;680 struct xhci_hcd *xhci;···1891 struct usb_tt *tt, gfp_t mem_flags);1892int xhci_set_interrupter_moderation(struct xhci_interrupter *ir,1893 u32 imod_interval);1894+int xhci_enable_interrupter(struct xhci_interrupter *ir);1895+int xhci_disable_interrupter(struct xhci_interrupter *ir);18961897/* xHCI ring, segment, TRB, and TD functions */1898dma_addr_t xhci_trb_virt_to_dma(struct xhci_segment *seg, union xhci_trb *trb);
···407 if (!drv)408 return;409000000410 /*411 * Some devices get wedged if you kick them after they are412 * reset. Mark all vqs as broken to make sure we don't.
···407 if (!drv)408 return;409410+ /* If the driver has its own shutdown method, use that. */411+ if (drv->shutdown) {412+ drv->shutdown(dev);413+ return;414+ }415+416 /*417 * Some devices get wedged if you kick them after they are418 * reset. Mark all vqs as broken to make sure we don't.
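The virtio core change above lets a driver supply its own shutdown method and falls back to the generic "mark vqs broken" path otherwise. The stand-alone snippet below shows the same optional-callback dispatch with invented structure and function names.

  #include <stdio.h>

  struct fake_driver {
      void (*shutdown)(void);             /* optional, may be NULL */
  };

  static void generic_shutdown(void)
  {
      printf("generic path: mark all vqs broken, reset device\n");
  }

  static void device_shutdown(const struct fake_driver *drv)
  {
      if (drv && drv->shutdown) {         /* prefer the driver's own method */
          drv->shutdown();
          return;
      }
      generic_shutdown();
  }

  static void my_shutdown(void)
  {
      printf("driver-specific shutdown\n");
  }

  int main(void)
  {
      struct fake_driver with = { .shutdown = my_shutdown };
      struct fake_driver without = { 0 };

      device_shutdown(&with);
      device_shutdown(&without);
      return 0;
  }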
···2577 struct bpos end)2578{2579 if ((iter->flags & (BTREE_ITER_is_extents|BTREE_ITER_filter_snapshots)) &&2580- !bkey_eq(iter->pos, POS_MAX)) {0002581 /*2582 * bkey_start_pos(), for extents, is not monotonically2583 * increasing until after filtering for snapshots:···26052606 bch2_trans_verify_not_unlocked_or_in_restart(trans);2607 bch2_btree_iter_verify_entry_exit(iter);2608- EBUG_ON((iter->flags & BTREE_ITER_filter_snapshots) && bpos_eq(end, POS_MIN));26092610 int ret = trans_maybe_inject_restart(trans, _RET_IP_);2611 if (unlikely(ret)) {
···2577 struct bpos end)2578{2579 if ((iter->flags & (BTREE_ITER_is_extents|BTREE_ITER_filter_snapshots)) &&2580+ !bkey_eq(iter->pos, POS_MAX) &&2581+ !((iter->flags & BTREE_ITER_is_extents) &&2582+ iter->pos.offset == U64_MAX)) {2583+2584 /*2585 * bkey_start_pos(), for extents, is not monotonically2586 * increasing until after filtering for snapshots:···26022603 bch2_trans_verify_not_unlocked_or_in_restart(trans);2604 bch2_btree_iter_verify_entry_exit(iter);2605+ EBUG_ON((iter->flags & BTREE_ITER_filter_snapshots) && iter->pos.inode != end.inode);26062607 int ret = trans_maybe_inject_restart(trans, _RET_IP_);2608 if (unlikely(ret)) {
···21#define FSOP_GOING_FLAGS_LOGFLUSH 0x1 /* flush log but not data */22#define FSOP_GOING_FLAGS_NOLOGFLUSH 0x2 /* don't flush log nor data */2324-struct flags_set {25- unsigned mask;26- unsigned flags;27-28- unsigned projid;29-30- bool set_projinherit;31- bool projinherit;32-};33-34-static int bch2_inode_flags_set(struct btree_trans *trans,35- struct bch_inode_info *inode,36- struct bch_inode_unpacked *bi,37- void *p)38-{39- struct bch_fs *c = inode->v.i_sb->s_fs_info;40- /*41- * We're relying on btree locking here for exclusion with other ioctl42- * calls - use the flags in the btree (@bi), not inode->i_flags:43- */44- struct flags_set *s = p;45- unsigned newflags = s->flags;46- unsigned oldflags = bi->bi_flags & s->mask;47-48- if (((newflags ^ oldflags) & (BCH_INODE_append|BCH_INODE_immutable)) &&49- !capable(CAP_LINUX_IMMUTABLE))50- return -EPERM;51-52- if (!S_ISREG(bi->bi_mode) &&53- !S_ISDIR(bi->bi_mode) &&54- (newflags & (BCH_INODE_nodump|BCH_INODE_noatime)) != newflags)55- return -EINVAL;56-57- if ((newflags ^ oldflags) & BCH_INODE_casefolded) {58-#ifdef CONFIG_UNICODE59- int ret = 0;60- /* Not supported on individual files. */61- if (!S_ISDIR(bi->bi_mode))62- return -EOPNOTSUPP;63-64- /*65- * Make sure the dir is empty, as otherwise we'd need to66- * rehash everything and update the dirent keys.67- */68- ret = bch2_empty_dir_trans(trans, inode_inum(inode));69- if (ret < 0)70- return ret;71-72- ret = bch2_request_incompat_feature(c, bcachefs_metadata_version_casefolding);73- if (ret)74- return ret;75-76- bch2_check_set_feature(c, BCH_FEATURE_casefolding);77-#else78- printk(KERN_ERR "Cannot use casefolding on a kernel without CONFIG_UNICODE\n");79- return -EOPNOTSUPP;80-#endif81- }82-83- if (s->set_projinherit) {84- bi->bi_fields_set &= ~(1 << Inode_opt_project);85- bi->bi_fields_set |= ((int) s->projinherit << Inode_opt_project);86- }87-88- bi->bi_flags &= ~s->mask;89- bi->bi_flags |= newflags;90-91- bi->bi_ctime = timespec_to_bch2_time(c, current_time(&inode->v));92- return 0;93-}94-95-static int bch2_ioc_getflags(struct bch_inode_info *inode, int __user *arg)96-{97- unsigned flags = map_flags(bch_flags_to_uflags, inode->ei_inode.bi_flags);98-99- return put_user(flags, arg);100-}101-102-static int bch2_ioc_setflags(struct bch_fs *c,103- struct file *file,104- struct bch_inode_info *inode,105- void __user *arg)106-{107- struct flags_set s = { .mask = map_defined(bch_flags_to_uflags) };108- unsigned uflags;109- int ret;110-111- if (get_user(uflags, (int __user *) arg))112- return -EFAULT;113-114- s.flags = map_flags_rev(bch_flags_to_uflags, uflags);115- if (uflags)116- return -EOPNOTSUPP;117-118- ret = mnt_want_write_file(file);119- if (ret)120- return ret;121-122- inode_lock(&inode->v);123- if (!inode_owner_or_capable(file_mnt_idmap(file), &inode->v)) {124- ret = -EACCES;125- goto setflags_out;126- }127-128- mutex_lock(&inode->ei_update_lock);129- ret = bch2_subvol_is_ro(c, inode->ei_inum.subvol) ?:130- bch2_write_inode(c, inode, bch2_inode_flags_set, &s,131- ATTR_CTIME);132- mutex_unlock(&inode->ei_update_lock);133-134-setflags_out:135- inode_unlock(&inode->v);136- mnt_drop_write_file(file);137- return ret;138-}139-140-static int bch2_ioc_fsgetxattr(struct bch_inode_info *inode,141- struct fsxattr __user *arg)142-{143- struct fsxattr fa = { 0 };144-145- fa.fsx_xflags = map_flags(bch_flags_to_xflags, inode->ei_inode.bi_flags);146-147- if (inode->ei_inode.bi_fields_set & (1 << Inode_opt_project))148- fa.fsx_xflags |= FS_XFLAG_PROJINHERIT;149-150- fa.fsx_projid = 
inode->ei_qid.q[QTYP_PRJ];151-152- if (copy_to_user(arg, &fa, sizeof(fa)))153- return -EFAULT;154-155- return 0;156-}157-158-static int fssetxattr_inode_update_fn(struct btree_trans *trans,159- struct bch_inode_info *inode,160- struct bch_inode_unpacked *bi,161- void *p)162-{163- struct flags_set *s = p;164-165- if (s->projid != bi->bi_project) {166- bi->bi_fields_set |= 1U << Inode_opt_project;167- bi->bi_project = s->projid;168- }169-170- return bch2_inode_flags_set(trans, inode, bi, p);171-}172-173-static int bch2_ioc_fssetxattr(struct bch_fs *c,174- struct file *file,175- struct bch_inode_info *inode,176- struct fsxattr __user *arg)177-{178- struct flags_set s = { .mask = map_defined(bch_flags_to_xflags) };179- struct fsxattr fa;180- int ret;181-182- if (copy_from_user(&fa, arg, sizeof(fa)))183- return -EFAULT;184-185- s.set_projinherit = true;186- s.projinherit = (fa.fsx_xflags & FS_XFLAG_PROJINHERIT) != 0;187- fa.fsx_xflags &= ~FS_XFLAG_PROJINHERIT;188-189- s.flags = map_flags_rev(bch_flags_to_xflags, fa.fsx_xflags);190- if (fa.fsx_xflags)191- return -EOPNOTSUPP;192-193- if (fa.fsx_projid >= U32_MAX)194- return -EINVAL;195-196- /*197- * inode fields accessible via the xattr interface are stored with a +1198- * bias, so that 0 means unset:199- */200- s.projid = fa.fsx_projid + 1;201-202- ret = mnt_want_write_file(file);203- if (ret)204- return ret;205-206- inode_lock(&inode->v);207- if (!inode_owner_or_capable(file_mnt_idmap(file), &inode->v)) {208- ret = -EACCES;209- goto err;210- }211-212- mutex_lock(&inode->ei_update_lock);213- ret = bch2_subvol_is_ro(c, inode->ei_inum.subvol) ?:214- bch2_set_projid(c, inode, fa.fsx_projid) ?:215- bch2_write_inode(c, inode, fssetxattr_inode_update_fn, &s,216- ATTR_CTIME);217- mutex_unlock(&inode->ei_update_lock);218-err:219- inode_unlock(&inode->v);220- mnt_drop_write_file(file);221- return ret;222-}223-224static int bch2_reinherit_attrs_fn(struct btree_trans *trans,225 struct bch_inode_info *inode,226 struct bch_inode_unpacked *bi,···358 long ret;359360 switch (cmd) {361- case FS_IOC_GETFLAGS:362- ret = bch2_ioc_getflags(inode, (int __user *) arg);363- break;364-365- case FS_IOC_SETFLAGS:366- ret = bch2_ioc_setflags(c, file, inode, (int __user *) arg);367- break;368-369- case FS_IOC_FSGETXATTR:370- ret = bch2_ioc_fsgetxattr(inode, (void __user *) arg);371- break;372-373- case FS_IOC_FSSETXATTR:374- ret = bch2_ioc_fssetxattr(c, file, inode,375- (void __user *) arg);376- break;377-378 case BCHFS_IOC_REINHERIT_ATTRS:379 ret = bch2_ioc_reinherit_attrs(c, file, inode,380 (void __user *) arg);
···21#define FSOP_GOING_FLAGS_LOGFLUSH 0x1 /* flush log but not data */22#define FSOP_GOING_FLAGS_NOLOGFLUSH 0x2 /* don't flush log nor data */230000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000024static int bch2_reinherit_attrs_fn(struct btree_trans *trans,25 struct bch_inode_info *inode,26 struct bch_inode_unpacked *bi,···558 long ret;559560 switch (cmd) {00000000000000000561 case BCHFS_IOC_REINHERIT_ATTRS:562 ret = bch2_ioc_reinherit_attrs(c, file, inode,563 (void __user *) arg);
···18#include "journal_seq_blacklist.h"19#include "logged_ops.h"20#include "move.h"021#include "namei.h"22#include "quota.h"23#include "rebalance.h"···1130 if (ret)1131 goto err;11321133- set_bit(BCH_FS_accounting_replay_done, &c->flags);1134- bch2_journal_set_replay_done(&c->journal);1135-1136 ret = bch2_fs_read_write_early(c);1137 if (ret)1138 goto err;00011391140 for_each_member_device(c, ca) {1141 ret = bch2_dev_usage_init(ca, false);···1194 goto err;11951196 c->recovery_pass_done = BCH_RECOVERY_PASS_NR - 1;00011971198 if (enabled_qtypes(c)) {1199 ret = bch2_fs_quota_read(c);
···18#include "journal_seq_blacklist.h"19#include "logged_ops.h"20#include "move.h"21+#include "movinggc.h"22#include "namei.h"23#include "quota.h"24#include "rebalance.h"···1129 if (ret)1130 goto err;11310001132 ret = bch2_fs_read_write_early(c);1133 if (ret)1134 goto err;1135+1136+ set_bit(BCH_FS_accounting_replay_done, &c->flags);1137+ bch2_journal_set_replay_done(&c->journal);11381139 for_each_member_device(c, ca) {1140 ret = bch2_dev_usage_init(ca, false);···1193 goto err;11941195 c->recovery_pass_done = BCH_RECOVERY_PASS_NR - 1;1196+1197+ bch2_copygc_wakeup(c);1198+ bch2_rebalance_wakeup(c);11991200 if (enabled_qtypes(c)) {1201 ret = bch2_fs_quota_read(c);
+36-32
fs/bcachefs/recovery_passes.c
···12#include "journal.h"13#include "lru.h"14#include "logged_ops.h"015#include "rebalance.h"16#include "recovery.h"17#include "recovery_passes.h"···263 */264 c->opts.recovery_passes_exclude &= ~BCH_RECOVERY_PASS_set_may_go_rw;265266- while (c->curr_recovery_pass < ARRAY_SIZE(recovery_pass_fns) && !ret) {267- c->next_recovery_pass = c->curr_recovery_pass + 1;268269- spin_lock_irq(&c->recovery_pass_lock);0270 unsigned pass = c->curr_recovery_pass;27100272 if (c->opts.recovery_pass_last &&273- c->curr_recovery_pass > c->opts.recovery_pass_last) {274- spin_unlock_irq(&c->recovery_pass_lock);275 break;276- }277278- if (!should_run_recovery_pass(c, pass)) {279- c->curr_recovery_pass++;280- c->recovery_pass_done = max(c->recovery_pass_done, pass);281 spin_unlock_irq(&c->recovery_pass_lock);282- continue;0000000000000000000283 }284- spin_unlock_irq(&c->recovery_pass_lock);285286- ret = bch2_run_recovery_pass(c, pass) ?:287- bch2_journal_flush(&c->journal);288-289- if (!ret && !test_bit(BCH_FS_error, &c->flags))290- bch2_clear_recovery_pass_required(c, pass);291-292- spin_lock_irq(&c->recovery_pass_lock);293- if (c->next_recovery_pass < c->curr_recovery_pass) {294- /*295- * bch2_run_explicit_recovery_pass() was called: we296- * can't always catch -BCH_ERR_restart_recovery because297- * it may have been called from another thread (btree298- * node read completion)299- */300- ret = 0;301- c->recovery_passes_complete &= ~(~0ULL << c->curr_recovery_pass);302- } else {303- c->recovery_passes_complete |= BIT_ULL(pass);304- c->recovery_pass_done = max(c->recovery_pass_done, pass);305- }306 c->curr_recovery_pass = c->next_recovery_pass;307- spin_unlock_irq(&c->recovery_pass_lock);00000308 }00309310 return ret;311}
···12#include "journal.h"13#include "lru.h"14#include "logged_ops.h"15+#include "movinggc.h"16#include "rebalance.h"17#include "recovery.h"18#include "recovery_passes.h"···262 */263 c->opts.recovery_passes_exclude &= ~BCH_RECOVERY_PASS_set_may_go_rw;264265+ spin_lock_irq(&c->recovery_pass_lock);0266267+ while (c->curr_recovery_pass < ARRAY_SIZE(recovery_pass_fns) && !ret) {268+ unsigned prev_done = c->recovery_pass_done;269 unsigned pass = c->curr_recovery_pass;270271+ c->next_recovery_pass = pass + 1;272+273 if (c->opts.recovery_pass_last &&274+ c->curr_recovery_pass > c->opts.recovery_pass_last)0275 break;0276277+ if (should_run_recovery_pass(c, pass)) {00278 spin_unlock_irq(&c->recovery_pass_lock);279+ ret = bch2_run_recovery_pass(c, pass) ?:280+ bch2_journal_flush(&c->journal);281+282+ if (!ret && !test_bit(BCH_FS_error, &c->flags))283+ bch2_clear_recovery_pass_required(c, pass);284+ spin_lock_irq(&c->recovery_pass_lock);285+286+ if (c->next_recovery_pass < c->curr_recovery_pass) {287+ /*288+ * bch2_run_explicit_recovery_pass() was called: we289+ * can't always catch -BCH_ERR_restart_recovery because290+ * it may have been called from another thread (btree291+ * node read completion)292+ */293+ ret = 0;294+ c->recovery_passes_complete &= ~(~0ULL << c->curr_recovery_pass);295+ } else {296+ c->recovery_passes_complete |= BIT_ULL(pass);297+ c->recovery_pass_done = max(c->recovery_pass_done, pass);298+ }299 }030000000000000000000000301 c->curr_recovery_pass = c->next_recovery_pass;302+303+ if (prev_done <= BCH_RECOVERY_PASS_check_snapshots &&304+ c->recovery_pass_done > BCH_RECOVERY_PASS_check_snapshots) {305+ bch2_copygc_wakeup(c);306+ bch2_rebalance_wakeup(c);307+ }308 }309+310+ spin_unlock_irq(&c->recovery_pass_lock);311312 return ret;313}
+1-1
fs/bcachefs/snapshot.c
···396 u32 subvol = 0, s;397398 rcu_read_lock();399- while (id) {400 s = snapshot_t(c, id)->subvol;401402 if (s && (!subvol || s < subvol))
···396 u32 subvol = 0, s;397398 rcu_read_lock();399+ while (id && bch2_snapshot_exists(c, id)) {400 s = snapshot_t(c, id)->subvol;401402 if (s && (!subvol || s < subvol))
+2-3
fs/bcachefs/str_hash.h
···3334struct bch_hash_info {35 u8 type;36- struct unicode_map *cf_encoding;37 /*38 * For crc32 or crc64 string hashes the first key value of39 * the siphash_key (k0) is used as the key.···44static inline struct bch_hash_info45bch2_hash_info_init(struct bch_fs *c, const struct bch_inode_unpacked *bi)46{47- /* XXX ick */48 struct bch_hash_info info = {49 .type = INODE_STR_HASH(bi),50#ifdef CONFIG_UNICODE51- .cf_encoding = !!(bi->bi_flags & BCH_INODE_casefolded) ? c->cf_encoding : NULL,52#endif53 .siphash_key = { .k0 = bi->bi_hash_seed }54 };
···3334struct bch_hash_info {35 u8 type;36+ struct unicode_map *cf_encoding;37 /*38 * For crc32 or crc64 string hashes the first key value of39 * the siphash_key (k0) is used as the key.···44static inline struct bch_hash_info45bch2_hash_info_init(struct bch_fs *c, const struct bch_inode_unpacked *bi)46{047 struct bch_hash_info info = {48 .type = INODE_STR_HASH(bi),49#ifdef CONFIG_UNICODE50+ .cf_encoding = bch2_inode_casefold(c, bi) ? c->cf_encoding : NULL,51#endif52 .siphash_key = { .k0 = bi->bi_hash_seed }53 };
···418 return ret;419}420421-static int bch2_fs_read_write_late(struct bch_fs *c)422-{423- int ret;424-425- /*426- * Data move operations can't run until after check_snapshots has427- * completed, and bch2_snapshot_is_ancestor() is available.428- *429- * Ideally we'd start copygc/rebalance earlier instead of waiting for430- * all of recovery/fsck to complete:431- */432- ret = bch2_copygc_start(c);433- if (ret) {434- bch_err(c, "error starting copygc thread");435- return ret;436- }437-438- ret = bch2_rebalance_start(c);439- if (ret) {440- bch_err(c, "error starting rebalance thread");441- return ret;442- }443-444- return 0;445-}446-447static int __bch2_fs_read_write(struct bch_fs *c, bool early)448{449 int ret;···440441 clear_bit(BCH_FS_clean_shutdown, &c->flags);442443- /*444- * First journal write must be a flush write: after a clean shutdown we445- * don't read the journal, so the first journal write may end up446- * overwriting whatever was there previously, and there must always be447- * at least one non-flush write in the journal or recovery will fail:448- */449- set_bit(JOURNAL_need_flush_write, &c->journal.flags);450- set_bit(JOURNAL_running, &c->journal.flags);451-452 __for_each_online_member(c, ca, BIT(BCH_MEMBER_STATE_rw), READ) {453 bch2_dev_allocator_add(c, ca);454 percpu_ref_reinit(&ca->io_ref[WRITE]);455 }456 bch2_recalc_capacity(c);457000000000000458 ret = bch2_fs_mark_dirty(c);459 if (ret)460 goto err;461-462- spin_lock(&c->journal.lock);463- bch2_journal_space_available(&c->journal);464- spin_unlock(&c->journal.lock);465466 ret = bch2_journal_reclaim_start(&c->journal);467 if (ret)···477 atomic_long_inc(&c->writes[i]);478 }479#endif480- if (!early) {481- ret = bch2_fs_read_write_late(c);482- if (ret)483- goto err;0000000484 }485486 bch2_do_discards(c);···533534 bch2_find_btree_nodes_exit(&c->found_btree_nodes);535 bch2_free_pending_node_rewrites(c);0536 bch2_fs_accounting_exit(c);537 bch2_fs_sb_errors_exit(c);538 bch2_fs_counters_exit(c);···1004 printbuf_exit(&p);1005}100600000000000000000000000000000000001007int bch2_fs_start(struct bch_fs *c)1008{1009 time64_t now = ktime_get_real_seconds();1010 int ret = 0;10111012 print_mount_opts(c);00010131014 down_write(&c->state_lock);1015 mutex_lock(&c->sb_lock);···1100 wake_up(&c->ro_ref_wait);11011102 down_write(&c->state_lock);1103- if (c->opts.read_only) {1104 bch2_fs_read_only(c);1105- } else {1106- ret = !test_bit(BCH_FS_rw, &c->flags)1107- ? 
bch2_fs_read_write(c)1108- : bch2_fs_read_write_late(c);1109- }1110 up_write(&c->state_lock);11111112err:···15151516 printbuf_exit(&name);15171518- rebalance_wakeup(c);1519 return 0;1520}1521···1574 }1575}15761577-static bool bch2_fs_may_start(struct bch_fs *c)1578-{1579- struct bch_dev *ca;1580- unsigned i, flags = 0;1581-1582- if (c->opts.very_degraded)1583- flags |= BCH_FORCE_IF_DEGRADED|BCH_FORCE_IF_LOST;1584-1585- if (c->opts.degraded)1586- flags |= BCH_FORCE_IF_DEGRADED;1587-1588- if (!c->opts.degraded &&1589- !c->opts.very_degraded) {1590- mutex_lock(&c->sb_lock);1591-1592- for (i = 0; i < c->disk_sb.sb->nr_devices; i++) {1593- if (!bch2_member_exists(c->disk_sb.sb, i))1594- continue;1595-1596- ca = bch2_dev_locked(c, i);1597-1598- if (!bch2_dev_is_online(ca) &&1599- (ca->mi.state == BCH_MEMBER_STATE_rw ||1600- ca->mi.state == BCH_MEMBER_STATE_ro)) {1601- mutex_unlock(&c->sb_lock);1602- return false;1603- }1604- }1605- mutex_unlock(&c->sb_lock);1606- }1607-1608- return bch2_have_enough_devs(c, bch2_online_devs(c), flags, true);1609-}1610-1611static void __bch2_dev_read_only(struct bch_fs *c, struct bch_dev *ca)1612{1613 bch2_dev_io_ref_stop(ca, WRITE);···1627 if (new_state == BCH_MEMBER_STATE_rw)1628 __bch2_dev_read_write(c, ca);16291630- rebalance_wakeup(c);16311632 return ret;1633}···2208 }2209 }2210 up_write(&c->state_lock);2211-2212- if (!bch2_fs_may_start(c)) {2213- ret = -BCH_ERR_insufficient_devices_to_start;2214- goto err_print;2215- }22162217 if (!c->opts.nostart) {2218 ret = bch2_fs_start(c);
···418 return ret;419}42000000000000000000000000000421static int __bch2_fs_read_write(struct bch_fs *c, bool early)422{423 int ret;···466467 clear_bit(BCH_FS_clean_shutdown, &c->flags);468000000000469 __for_each_online_member(c, ca, BIT(BCH_MEMBER_STATE_rw), READ) {470 bch2_dev_allocator_add(c, ca);471 percpu_ref_reinit(&ca->io_ref[WRITE]);472 }473 bch2_recalc_capacity(c);474475+ /*476+ * First journal write must be a flush write: after a clean shutdown we477+ * don't read the journal, so the first journal write may end up478+ * overwriting whatever was there previously, and there must always be479+ * at least one non-flush write in the journal or recovery will fail:480+ */481+ spin_lock(&c->journal.lock);482+ set_bit(JOURNAL_need_flush_write, &c->journal.flags);483+ set_bit(JOURNAL_running, &c->journal.flags);484+ bch2_journal_space_available(&c->journal);485+ spin_unlock(&c->journal.lock);486+487 ret = bch2_fs_mark_dirty(c);488 if (ret)489 goto err;0000490491 ret = bch2_journal_reclaim_start(&c->journal);492 if (ret)···504 atomic_long_inc(&c->writes[i]);505 }506#endif507+508+ ret = bch2_copygc_start(c);509+ if (ret) {510+ bch_err_msg(c, ret, "error starting copygc thread");511+ goto err;512+ }513+514+ ret = bch2_rebalance_start(c);515+ if (ret) {516+ bch_err_msg(c, ret, "error starting rebalance thread");517+ goto err;518 }519520 bch2_do_discards(c);···553554 bch2_find_btree_nodes_exit(&c->found_btree_nodes);555 bch2_free_pending_node_rewrites(c);556+ bch2_free_fsck_errs(c);557 bch2_fs_accounting_exit(c);558 bch2_fs_sb_errors_exit(c);559 bch2_fs_counters_exit(c);···1023 printbuf_exit(&p);1024}10251026+static bool bch2_fs_may_start(struct bch_fs *c)1027+{1028+ struct bch_dev *ca;1029+ unsigned i, flags = 0;1030+1031+ if (c->opts.very_degraded)1032+ flags |= BCH_FORCE_IF_DEGRADED|BCH_FORCE_IF_LOST;1033+1034+ if (c->opts.degraded)1035+ flags |= BCH_FORCE_IF_DEGRADED;1036+1037+ if (!c->opts.degraded &&1038+ !c->opts.very_degraded) {1039+ mutex_lock(&c->sb_lock);1040+1041+ for (i = 0; i < c->disk_sb.sb->nr_devices; i++) {1042+ if (!bch2_member_exists(c->disk_sb.sb, i))1043+ continue;1044+1045+ ca = bch2_dev_locked(c, i);1046+1047+ if (!bch2_dev_is_online(ca) &&1048+ (ca->mi.state == BCH_MEMBER_STATE_rw ||1049+ ca->mi.state == BCH_MEMBER_STATE_ro)) {1050+ mutex_unlock(&c->sb_lock);1051+ return false;1052+ }1053+ }1054+ mutex_unlock(&c->sb_lock);1055+ }1056+1057+ return bch2_have_enough_devs(c, bch2_online_devs(c), flags, true);1058+}1059+1060int bch2_fs_start(struct bch_fs *c)1061{1062 time64_t now = ktime_get_real_seconds();1063 int ret = 0;10641065 print_mount_opts(c);1066+1067+ if (!bch2_fs_may_start(c))1068+ return -BCH_ERR_insufficient_devices_to_start;10691070 down_write(&c->state_lock);1071 mutex_lock(&c->sb_lock);···1082 wake_up(&c->ro_ref_wait);10831084 down_write(&c->state_lock);1085+ if (c->opts.read_only)1086 bch2_fs_read_only(c);1087+ else if (!test_bit(BCH_FS_rw, &c->flags))1088+ ret = bch2_fs_read_write(c);0001089 up_write(&c->state_lock);10901091err:···15001501 printbuf_exit(&name);15021503+ bch2_rebalance_wakeup(c);1504 return 0;1505}1506···1559 }1560}156100000000000000000000000000000000001562static void __bch2_dev_read_only(struct bch_fs *c, struct bch_dev *ca)1563{1564 bch2_dev_io_ref_stop(ca, WRITE);···1646 if (new_state == BCH_MEMBER_STATE_rw)1647 __bch2_dev_read_write(c, ca);16481649+ bch2_rebalance_wakeup(c);16501651 return ret;1652}···2227 }2228 }2229 up_write(&c->state_lock);0000022302231 if (!c->opts.nostart) {2232 ret = bch2_fs_start(c);
+3-4
fs/bcachefs/sysfs.c
···654 bch2_set_rebalance_needs_scan(c, 0);655656 if (v && id == Opt_rebalance_enabled)657- rebalance_wakeup(c);658659- if (v && id == Opt_copygc_enabled &&660- c->copygc_thread)661- wake_up_process(c->copygc_thread);662663 if (id == Opt_discard && !ca) {664 mutex_lock(&c->sb_lock);
···654 bch2_set_rebalance_needs_scan(c, 0);655656 if (v && id == Opt_rebalance_enabled)657+ bch2_rebalance_wakeup(c);658659+ if (v && id == Opt_copygc_enabled)660+ bch2_copygc_wakeup(c);0661662 if (id == Opt_discard && !ca) {663 mutex_lock(&c->sb_lock);
···2104 * will always return true.2105 * So here we need to do extra page alignment for2106 * filemap_range_has_page().002107 */2108 const u64 page_lockstart = round_up(lockstart, PAGE_SIZE);2109- const u64 page_lockend = round_down(lockend + 1, PAGE_SIZE) - 1;21102111 while (1) {2112 truncate_pagecache_range(inode, lockstart, lockend);21132114 lock_extent(&BTRFS_I(inode)->io_tree, lockstart, lockend,2115 cached_state);0002116 /*2117 * We can't have ordered extents in the range, nor dirty/writeback2118 * pages, because we have locked the inode's VFS lock in exclusive···2129 * we do, unlock the range and retry.2130 */2131 if (!filemap_range_has_page(inode->i_mapping, page_lockstart,2132- page_lockend))2133 break;21342135 unlock_extent(&BTRFS_I(inode)->io_tree, lockstart, lockend,
···2104 * will always return true.2105 * So here we need to do extra page alignment for2106 * filemap_range_has_page().2107+ *2108+ * And do not decrease page_lockend right now, as it can be 0.2109 */2110 const u64 page_lockstart = round_up(lockstart, PAGE_SIZE);2111+ const u64 page_lockend = round_down(lockend + 1, PAGE_SIZE);21122113 while (1) {2114 truncate_pagecache_range(inode, lockstart, lockend);21152116 lock_extent(&BTRFS_I(inode)->io_tree, lockstart, lockend,2117 cached_state);2118+ /* The same page or adjacent pages. */2119+ if (page_lockend <= page_lockstart)2120+ break;2121 /*2122 * We can't have ordered extents in the range, nor dirty/writeback2123 * pages, because we have locked the inode's VFS lock in exclusive···2124 * we do, unlock the range and retry.2125 */2126 if (!filemap_range_has_page(inode->i_mapping, page_lockstart,2127+ page_lockend - 1))2128 break;21292130 unlock_extent(&BTRFS_I(inode)->io_tree, lockstart, lockend,
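To see why page_lockend is no longer decremented up front, plug in a hole punch that sits entirely inside one page. The sketch below is stand-alone user-space arithmetic (4 KiB pages assumed), not btrfs code:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL
#define round_up(x, a)   ((((x) + (a) - 1) / (a)) * (a))
#define round_down(x, a) (((x) / (a)) * (a))

int main(void)
{
        /* hole punch entirely inside the first page */
        uint64_t lockstart = 100, lockend = 200;
        uint64_t page_lockstart = round_up(lockstart, PAGE_SIZE);     /* 4096 */
        uint64_t page_lockend   = round_down(lockend + 1, PAGE_SIZE); /* 0    */

        /*
         * The old code computed round_down(lockend + 1, PAGE_SIZE) - 1,
         * which underflows to UINT64_MAX here; checking
         * page_lockend <= page_lockstart first sidesteps that.
         */
        if (page_lockend <= page_lockstart)
                printf("range covers no full page, skip filemap_range_has_page()\n");
        else
                printf("check pages [%llu, %llu]\n",
                       (unsigned long long)page_lockstart,
                       (unsigned long long)(page_lockend - 1));
        return 0;
}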
+1-1
fs/btrfs/relocation.c
···3803 if (ret) {3804 if (inode)3805 iput(&inode->vfs_inode);3806- inode = ERR_PTR(ret);3807 }3808 return &inode->vfs_inode;3809}
···3803 if (ret) {3804 if (inode)3805 iput(&inode->vfs_inode);3806+ return ERR_PTR(ret);3807 }3808 return &inode->vfs_inode;3809}
···12771278static int btrfs_load_zone_info(struct btrfs_fs_info *fs_info, int zone_idx,1279 struct zone_info *info, unsigned long *active,1280- struct btrfs_chunk_map *map)1281{1282 struct btrfs_dev_replace *dev_replace = &fs_info->dev_replace;1283 struct btrfs_device *device;···1307 return 0;1308 }1309001310 /* This zone will be used for allocation, so mark this zone non-empty. */1311 btrfs_dev_clear_zone_empty(device, info->physical);1312···1321 * to determine the allocation offset within the zone.1322 */1323 WARN_ON(!IS_ALIGNED(info->physical, fs_info->zone_size));0000000000001324 nofs_flag = memalloc_nofs_save();1325 ret = btrfs_get_dev_zone(device, info->physical, &zone);1326 memalloc_nofs_restore(nofs_flag);···1602 }16031604 for (i = 0; i < map->num_stripes; i++) {1605- ret = btrfs_load_zone_info(fs_info, i, &zone_info[i], active, map);1606 if (ret)1607 goto out;1608···1673 * stripe.1674 */1675 cache->alloc_offset = cache->zone_capacity;1676- ret = 0;1677 }16781679out:
···12771278static int btrfs_load_zone_info(struct btrfs_fs_info *fs_info, int zone_idx,1279 struct zone_info *info, unsigned long *active,1280+ struct btrfs_chunk_map *map, bool new)1281{1282 struct btrfs_dev_replace *dev_replace = &fs_info->dev_replace;1283 struct btrfs_device *device;···1307 return 0;1308 }13091310+ ASSERT(!new || btrfs_dev_is_empty_zone(device, info->physical));1311+1312 /* This zone will be used for allocation, so mark this zone non-empty. */1313 btrfs_dev_clear_zone_empty(device, info->physical);1314···1319 * to determine the allocation offset within the zone.1320 */1321 WARN_ON(!IS_ALIGNED(info->physical, fs_info->zone_size));1322+1323+ if (new) {1324+ sector_t capacity;1325+1326+ capacity = bdev_zone_capacity(device->bdev, info->physical >> SECTOR_SHIFT);1327+ up_read(&dev_replace->rwsem);1328+ info->alloc_offset = 0;1329+ info->capacity = capacity << SECTOR_SHIFT;1330+1331+ return 0;1332+ }1333+1334 nofs_flag = memalloc_nofs_save();1335 ret = btrfs_get_dev_zone(device, info->physical, &zone);1336 memalloc_nofs_restore(nofs_flag);···1588 }15891590 for (i = 0; i < map->num_stripes; i++) {1591+ ret = btrfs_load_zone_info(fs_info, i, &zone_info[i], active, map, new);1592 if (ret)1593 goto out;1594···1659 * stripe.1660 */1661 cache->alloc_offset = cache->zone_capacity;01662 }16631664out:
+54-19
fs/buffer.c
···176}177EXPORT_SYMBOL(end_buffer_write_sync);178179-/*180- * Various filesystems appear to want __find_get_block to be non-blocking.181- * But it's the page lock which protects the buffers. To get around this,182- * we get exclusion from try_to_free_buffers with the blockdev mapping's183- * i_private_lock.184- *185- * Hack idea: for the blockdev mapping, i_private_lock contention186- * may be quite high. This code could TryLock the page, and if that187- * succeeds, there is no need to take i_private_lock.188- */189static struct buffer_head *190-__find_get_block_slow(struct block_device *bdev, sector_t block)191{192 struct address_space *bd_mapping = bdev->bd_mapping;193 const int blkbits = bd_mapping->host->i_blkbits;···194 if (IS_ERR(folio))195 goto out;196197- spin_lock(&bd_mapping->i_private_lock);000000000198 head = folio_buffers(folio);199 if (!head)200 goto out_unlock;000000000201 bh = head;202 do {203 if (!buffer_mapped(bh))···244 1 << blkbits);245 }246out_unlock:247- spin_unlock(&bd_mapping->i_private_lock);000248 folio_put(folio);249out:250 return ret;···667void write_boundary_block(struct block_device *bdev,668 sector_t bblock, unsigned blocksize)669{670- struct buffer_head *bh = __find_get_block(bdev, bblock + 1, blocksize);00671 if (bh) {672 if (buffer_dirty(bh))673 write_dirty_buffer(bh, 0);···1399/*1400 * Perform a pagecache lookup for the matching buffer. If it's there, refresh1401 * it in the LRU and mark it as accessed. If it is not present then return1402- * NULL01403 */1404-struct buffer_head *1405-__find_get_block(struct block_device *bdev, sector_t block, unsigned size)01406{1407 struct buffer_head *bh = lookup_bh_lru(bdev, block, size);14081409 if (bh == NULL) {1410 /* __find_get_block_slow will mark the page accessed */1411- bh = __find_get_block_slow(bdev, block);1412 if (bh)1413 bh_lru_install(bh);1414 } else···14181419 return bh;1420}0000001421EXPORT_SYMBOL(__find_get_block);00000000014221423/**1424 * bdev_getblk - Get a buffer_head in a block device's buffer cache.···1452struct buffer_head *bdev_getblk(struct block_device *bdev, sector_t block,1453 unsigned size, gfp_t gfp)1454{1455- struct buffer_head *bh = __find_get_block(bdev, block, size);0000014561457 might_alloc(gfp);1458 if (bh)
···176}177EXPORT_SYMBOL(end_buffer_write_sync);1780000000000179static struct buffer_head *180+__find_get_block_slow(struct block_device *bdev, sector_t block, bool atomic)181{182 struct address_space *bd_mapping = bdev->bd_mapping;183 const int blkbits = bd_mapping->host->i_blkbits;···204 if (IS_ERR(folio))205 goto out;206207+ /*208+ * Folio lock protects the buffers. Callers that cannot block209+ * will fallback to serializing vs try_to_free_buffers() via210+ * the i_private_lock.211+ */212+ if (atomic)213+ spin_lock(&bd_mapping->i_private_lock);214+ else215+ folio_lock(folio);216+217 head = folio_buffers(folio);218 if (!head)219 goto out_unlock;220+ /*221+ * Upon a noref migration, the folio lock serializes here;222+ * otherwise bail.223+ */224+ if (test_bit_acquire(BH_Migrate, &head->b_state)) {225+ WARN_ON(!atomic);226+ goto out_unlock;227+ }228+229 bh = head;230 do {231 if (!buffer_mapped(bh))···236 1 << blkbits);237 }238out_unlock:239+ if (atomic)240+ spin_unlock(&bd_mapping->i_private_lock);241+ else242+ folio_unlock(folio);243 folio_put(folio);244out:245 return ret;···656void write_boundary_block(struct block_device *bdev,657 sector_t bblock, unsigned blocksize)658{659+ struct buffer_head *bh;660+661+ bh = __find_get_block_nonatomic(bdev, bblock + 1, blocksize);662 if (bh) {663 if (buffer_dirty(bh))664 write_dirty_buffer(bh, 0);···1386/*1387 * Perform a pagecache lookup for the matching buffer. If it's there, refresh1388 * it in the LRU and mark it as accessed. If it is not present then return1389+ * NULL. Atomic context callers may also return NULL if the buffer is being1390+ * migrated; similarly the page is not marked accessed either.1391 */1392+static struct buffer_head *1393+find_get_block_common(struct block_device *bdev, sector_t block,1394+ unsigned size, bool atomic)1395{1396 struct buffer_head *bh = lookup_bh_lru(bdev, block, size);13971398 if (bh == NULL) {1399 /* __find_get_block_slow will mark the page accessed */1400+ bh = __find_get_block_slow(bdev, block, atomic);1401 if (bh)1402 bh_lru_install(bh);1403 } else···14031404 return bh;1405}1406+1407+struct buffer_head *1408+__find_get_block(struct block_device *bdev, sector_t block, unsigned size)1409+{1410+ return find_get_block_common(bdev, block, size, true);1411+}1412EXPORT_SYMBOL(__find_get_block);1413+1414+/* same as __find_get_block() but allows sleeping contexts */1415+struct buffer_head *1416+__find_get_block_nonatomic(struct block_device *bdev, sector_t block,1417+ unsigned size)1418+{1419+ return find_get_block_common(bdev, block, size, false);1420+}1421+EXPORT_SYMBOL(__find_get_block_nonatomic);14221423/**1424 * bdev_getblk - Get a buffer_head in a block device's buffer cache.···1422struct buffer_head *bdev_getblk(struct block_device *bdev, sector_t block,1423 unsigned size, gfp_t gfp)1424{1425+ struct buffer_head *bh;1426+1427+ if (gfpflags_allow_blocking(gfp))1428+ bh = __find_get_block_nonatomic(bdev, block, size);1429+ else1430+ bh = __find_get_block(bdev, block, size);14311432 might_alloc(gfp);1433 if (bh)
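A sleeping caller is expected to prefer the new non-atomic lookup, exactly as bdev_getblk() now does. A condensed sketch of that caller-side decision, with the wrapper function invented for illustration:

#include <linux/buffer_head.h>
#include <linux/gfp.h>

/* Hypothetical wrapper; only the lookup helpers come from the patch. */
static struct buffer_head *foo_lookup_bh(struct block_device *bdev,
                                         sector_t block, unsigned size,
                                         gfp_t gfp)
{
        /*
         * Sleeping callers take the folio lock and can wait out a buffer
         * migration; atomic callers serialize via i_private_lock and may
         * see NULL while BH_Migrate is set.
         */
        if (gfpflags_allow_blocking(gfp))
                return __find_get_block_nonatomic(bdev, block, size);
        return __find_get_block(bdev, block, size);
}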
+1-1
fs/ceph/inode.c
···23672368 /* Try to writeback the dirty pagecaches */2369 if (issued & (CEPH_CAP_FILE_BUFFER)) {2370- loff_t lend = orig_pos + CEPH_FSCRYPT_BLOCK_SHIFT - 1;23712372 ret = filemap_write_and_wait_range(inode->i_mapping,2373 orig_pos, lend);
···23672368 /* Try to writeback the dirty pagecaches */2369 if (issued & (CEPH_CAP_FILE_BUFFER)) {2370+ loff_t lend = orig_pos + CEPH_FSCRYPT_BLOCK_SIZE - 1;23712372 ret = filemap_write_and_wait_range(inode->i_mapping,2373 orig_pos, lend);
+2-1
fs/ext4/ialloc.c
···691 if (!bh || !buffer_uptodate(bh))692 /*693 * If the block is not in the buffer cache, then it694- * must have been written out.0695 */696 goto out;697
···691 if (!bh || !buffer_uptodate(bh))692 /*693 * If the block is not in the buffer cache, then it694+ * must have been written out, or, most unlikely, is695+ * being migrated - false failure should be OK here.696 */697 goto out;698
+2-1
fs/ext4/mballoc.c
···6642 for (i = 0; i < count; i++) {6643 cond_resched();6644 if (is_metadata)6645- bh = sb_find_get_block(inode->i_sb, block + i);06646 ext4_forget(handle, is_metadata, inode, bh, block + i);6647 }6648 }
···6642 for (i = 0; i < count; i++) {6643 cond_resched();6644 if (is_metadata)6645+ bh = sb_find_get_block_nonatomic(inode->i_sb,6646+ block + i);6647 ext4_forget(handle, is_metadata, inode, bh, block + i);6648 }6649 }
+1-1
fs/file.c
···2627#include "internal.h"2829-bool __file_ref_put_badval(file_ref_t *ref, unsigned long cnt)30{31 /*32 * If the reference count was already in the dead zone, then this
···2627#include "internal.h"2829+static noinline bool __file_ref_put_badval(file_ref_t *ref, unsigned long cnt)30{31 /*32 * If the reference count was already in the dead zone, then this
+9-6
fs/jbd2/revoke.c
···345 bh = bh_in;346347 if (!bh) {348- bh = __find_get_block(bdev, blocknr, journal->j_blocksize);0349 if (bh)350 BUFFER_TRACE(bh, "found on hash");351 }···356357 /* If there is a different buffer_head lying around in358 * memory anywhere... */359- bh2 = __find_get_block(bdev, blocknr, journal->j_blocksize);0360 if (bh2) {361 /* ... and it has RevokeValid status... */362 if (bh2 != bh && buffer_revokevalid(bh2))···466 * state machine will get very upset later on. */467 if (need_cancel) {468 struct buffer_head *bh2;469- bh2 = __find_get_block(bh->b_bdev, bh->b_blocknr, bh->b_size);0470 if (bh2) {471 if (bh2 != bh)472 clear_buffer_revoked(bh2);···495 struct jbd2_revoke_record_s *record;496 struct buffer_head *bh;497 record = (struct jbd2_revoke_record_s *)list_entry;498- bh = __find_get_block(journal->j_fs_dev,499- record->blocknr,500- journal->j_blocksize);501 if (bh) {502 clear_buffer_revoked(bh);503 __brelse(bh);
···345 bh = bh_in;346347 if (!bh) {348+ bh = __find_get_block_nonatomic(bdev, blocknr,349+ journal->j_blocksize);350 if (bh)351 BUFFER_TRACE(bh, "found on hash");352 }···355356 /* If there is a different buffer_head lying around in357 * memory anywhere... */358+ bh2 = __find_get_block_nonatomic(bdev, blocknr,359+ journal->j_blocksize);360 if (bh2) {361 /* ... and it has RevokeValid status... */362 if (bh2 != bh && buffer_revokevalid(bh2))···464 * state machine will get very upset later on. */465 if (need_cancel) {466 struct buffer_head *bh2;467+ bh2 = __find_get_block_nonatomic(bh->b_bdev, bh->b_blocknr,468+ bh->b_size);469 if (bh2) {470 if (bh2 != bh)471 clear_buffer_revoked(bh2);···492 struct jbd2_revoke_record_s *record;493 struct buffer_head *bh;494 record = (struct jbd2_revoke_record_s *)list_entry;495+ bh = __find_get_block_nonatomic(journal->j_fs_dev,496+ record->blocknr,497+ journal->j_blocksize);498 if (bh) {499 clear_buffer_revoked(bh);500 __brelse(bh);
+36-33
fs/namespace.c
···2826 struct vfsmount *mnt = path->mnt;2827 struct dentry *dentry;2828 struct mountpoint *mp = ERR_PTR(-ENOENT);028292830 for (;;) {2831- struct mount *m;28322833 if (beneath) {2834- m = real_mount(mnt);2835 read_seqlock_excl(&mount_lock);2836- dentry = dget(m->mnt_mountpoint);02837 read_sequnlock_excl(&mount_lock);02838 } else {2839 dentry = path->dentry;2840 }28412842 inode_lock(dentry->d_inode);2843- if (unlikely(cant_mount(dentry))) {2844- inode_unlock(dentry->d_inode);2845- goto out;2846- }2847-2848 namespace_lock();28492850- if (beneath && (!is_mounted(mnt) || m->mnt_mountpoint != dentry)) {00002851 namespace_unlock();2852 inode_unlock(dentry->d_inode);2853- goto out;2854 }28552856 mnt = lookup_mnt(path);2857- if (likely(!mnt))0000000002858 break;2859-2860- namespace_unlock();2861- inode_unlock(dentry->d_inode);2862- if (beneath)2863- dput(dentry);2864- path_put(path);2865- path->mnt = mnt;2866- path->dentry = dget(mnt->mnt_root);002867 }2868-2869- mp = get_mountpoint(dentry);2870- if (IS_ERR(mp)) {2871- namespace_unlock();2872- inode_unlock(dentry->d_inode);2873- }2874-2875-out:2876 if (beneath)2877- dput(dentry);2878-2879 return mp;2880}2881···28922893static void unlock_mount(struct mountpoint *where)2894{2895- struct dentry *dentry = where->m_dentry;2896-2897 read_seqlock_excl(&mount_lock);2898 put_mountpoint(where);2899 read_sequnlock_excl(&mount_lock);2900-2901 namespace_unlock();2902- inode_unlock(dentry->d_inode);2903}29042905static int graft_tree(struct mount *mnt, struct mount *p, struct mountpoint *mp)
···2826 struct vfsmount *mnt = path->mnt;2827 struct dentry *dentry;2828 struct mountpoint *mp = ERR_PTR(-ENOENT);2829+ struct path under = {};28302831 for (;;) {2832+ struct mount *m = real_mount(mnt);28332834 if (beneath) {2835+ path_put(&under);2836 read_seqlock_excl(&mount_lock);2837+ under.mnt = mntget(&m->mnt_parent->mnt);2838+ under.dentry = dget(m->mnt_mountpoint);2839 read_sequnlock_excl(&mount_lock);2840+ dentry = under.dentry;2841 } else {2842 dentry = path->dentry;2843 }28442845 inode_lock(dentry->d_inode);000002846 namespace_lock();28472848+ if (unlikely(cant_mount(dentry) || !is_mounted(mnt)))2849+ break; // not to be mounted on2850+2851+ if (beneath && unlikely(m->mnt_mountpoint != dentry ||2852+ &m->mnt_parent->mnt != under.mnt)) {2853 namespace_unlock();2854 inode_unlock(dentry->d_inode);2855+ continue; // got moved2856 }28572858 mnt = lookup_mnt(path);2859+ if (unlikely(mnt)) {2860+ namespace_unlock();2861+ inode_unlock(dentry->d_inode);2862+ path_put(path);2863+ path->mnt = mnt;2864+ path->dentry = dget(mnt->mnt_root);2865+ continue; // got overmounted2866+ }2867+ mp = get_mountpoint(dentry);2868+ if (IS_ERR(mp))2869 break;2870+ if (beneath) {2871+ /*2872+ * @under duplicates the references that will stay2873+ * at least until namespace_unlock(), so the path_put()2874+ * below is safe (and OK to do under namespace_lock -2875+ * we are not dropping the final references here).2876+ */2877+ path_put(&under);2878+ }2879+ return mp;2880 }2881+ namespace_unlock();2882+ inode_unlock(dentry->d_inode);0000002883 if (beneath)2884+ path_put(&under);02885 return mp;2886}2887···28862887static void unlock_mount(struct mountpoint *where)2888{2889+ inode_unlock(where->m_dentry->d_inode);02890 read_seqlock_excl(&mount_lock);2891 put_mountpoint(where);2892 read_sequnlock_excl(&mount_lock);02893 namespace_unlock();02894}28952896static int graft_tree(struct mount *mnt, struct mount *p, struct mountpoint *mp)
+1-1
fs/ocfs2/journal.c
···1249 }12501251 for (i = 0; i < p_blocks; i++, p_blkno++) {1252- bh = __find_get_block(osb->sb->s_bdev, p_blkno,1253 osb->sb->s_blocksize);1254 /* block not cached. */1255 if (!bh)
···1249 }12501251 for (i = 0; i < p_blocks; i++, p_blkno++) {1252+ bh = __find_get_block_nonatomic(osb->sb->s_bdev, p_blkno,1253 osb->sb->s_blocksize);1254 /* block not cached. */1255 if (!bh)
+1-1
fs/splice.c
···45 * here if set to avoid blocking other users of this pipe if splice is46 * being done on it.47 */48-static noinline void noinline pipe_clear_nowait(struct file *file)49{50 fmode_t fmode = READ_ONCE(file->f_mode);51
···45 * here if set to avoid blocking other users of this pipe if splice is46 * being done on it.47 */48+static noinline void pipe_clear_nowait(struct file *file)49{50 fmode_t fmode = READ_ONCE(file->f_mode);51
+2-2
fs/xattr.c
···703 return error;704705 filename = getname_maybe_null(pathname, at_flags);706- if (!filename) {707 CLASS(fd, f)(dfd);708 if (fd_empty(f))709 error = -EBADF;···847 return error;848849 filename = getname_maybe_null(pathname, at_flags);850- if (!filename) {851 CLASS(fd, f)(dfd);852 if (fd_empty(f))853 return -EBADF;
···703 return error;704705 filename = getname_maybe_null(pathname, at_flags);706+ if (!filename && dfd >= 0) {707 CLASS(fd, f)(dfd);708 if (fd_empty(f))709 error = -EBADF;···847 return error;848849 filename = getname_maybe_null(pathname, at_flags);850+ if (!filename && dfd >= 0) {851 CLASS(fd, f)(dfd);852 if (fd_empty(f))853 return -EBADF;
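The dfd >= 0 test works because descriptor sentinels such as AT_FDCWD are negative values. A tiny stand-alone illustration (nothing here comes from the patch beyond the comparison itself):

#include <fcntl.h>
#include <stdio.h>

int main(void)
{
        int dfd = AT_FDCWD;     /* a negative sentinel (-100 on Linux) */

        /* Mirrors the kernel-side test: only a non-negative dfd refers
         * to an open directory file descriptor. */
        if (dfd >= 0)
                printf("dfd %d is a real descriptor\n", dfd);
        else
                printf("dfd %d is a sentinel such as AT_FDCWD\n", dfd);
        return 0;
}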
···61 atomic_long_set(&ref->refcnt, cnt - 1);62}6364-bool __file_ref_put_badval(file_ref_t *ref, unsigned long cnt);65bool __file_ref_put(file_ref_t *ref, unsigned long cnt);6667/**···177 */178static __always_inline __must_check bool file_ref_put_close(file_ref_t *ref)179{180- long old, new;181182 old = atomic_long_read(&ref->refcnt);183- do {184- if (unlikely(old < 0))185- return __file_ref_put_badval(ref, old);186-187- if (old == FILE_REF_ONEREF)188- new = FILE_REF_DEAD;189- else190- new = old - 1;191- } while (!atomic_long_try_cmpxchg(&ref->refcnt, &old, new));192-193- return new == FILE_REF_DEAD;194}195196/**
···61 atomic_long_set(&ref->refcnt, cnt - 1);62}63064bool __file_ref_put(file_ref_t *ref, unsigned long cnt);6566/**···178 */179static __always_inline __must_check bool file_ref_put_close(file_ref_t *ref)180{181+ long old;182183 old = atomic_long_read(&ref->refcnt);184+ if (likely(old == FILE_REF_ONEREF)) {185+ if (likely(atomic_long_try_cmpxchg(&ref->refcnt, &old, FILE_REF_DEAD)))186+ return true;187+ }188+ return file_ref_put(ref);000000189}190191/**
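A hedged sketch of how a close path might use the rewritten helper; struct my_obj and its lifetime are invented, only the file_ref_* names come from the header:

#include <linux/file_ref.h>
#include <linux/slab.h>

struct my_obj {                         /* invented object, not from the patch */
        file_ref_t ref;
        /* ... payload ... */
};

static void my_obj_put_on_close(struct my_obj *obj)
{
        /*
         * Close-path variant: the common case (exactly one reference left)
         * is a single try_cmpxchg from FILE_REF_ONEREF to FILE_REF_DEAD;
         * anything else falls back to the shared file_ref_put() slow path.
         */
        if (file_ref_put_close(&obj->ref))
                kfree(obj);             /* we dropped the last reference */
}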
+5
include/linux/fwnode.h
···2/*3 * fwnode.h - Firmware device node object handle type definition.4 *000005 * Copyright (C) 2015, Intel Corporation6 * Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>7 */
···2/*3 * fwnode.h - Firmware device node object handle type definition.4 *5+ * This header file provides low-level data types and definitions for firmware6+ * and device property providers. The respective API header files supplied by7+ * them should contain all of the requisite data types and definitions for end8+ * users, so including it directly should not be necessary.9+ *10 * Copyright (C) 2015, Intel Corporation11 * Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>12 */
···361 phy_interface_t iface);362363/**364- * mac_link_down() - take the link down365 * @config: a pointer to a &struct phylink_config.366 * @mode: link autonegotiation mode367 * @interface: link &typedef phy_interface_t mode368 *369- * If @mode is not an in-band negotiation mode (as defined by370- * phylink_autoneg_inband()), force the link down and disable any371- * Energy Efficient Ethernet MAC configuration. Interface type372- * selection must be done in mac_config().000000373 */374void mac_link_down(struct phylink_config *config, unsigned int mode,375 phy_interface_t interface);376377/**378- * mac_link_up() - allow the link to come up379 * @config: a pointer to a &struct phylink_config.380- * @phy: any attached phy381 * @mode: link autonegotiation mode382 * @interface: link &typedef phy_interface_t mode383 * @speed: link speed···391 * @tx_pause: link transmit pause enablement status392 * @rx_pause: link receive pause enablement status393 *394- * Configure the MAC for an established link.000395 *396 * @speed, @duplex, @tx_pause and @rx_pause indicate the finalised link397 * settings, and should be used to configure the MAC block appropriately···406 * that the user wishes to override the pause settings, and this should407 * be allowed when considering the implementation of this method.408 *409- * If in-band negotiation mode is disabled, allow the link to come up. If410- * @phy is non-%NULL, configure Energy Efficient Ethernet by calling411- * phy_init_eee() and perform appropriate MAC configuration for EEE.412 * Interface type selection must be done in mac_config().413 */414void mac_link_up(struct phylink_config *config, struct phy_device *phy,
···361 phy_interface_t iface);362363/**364+ * mac_link_down() - notification that the link has gone down365 * @config: a pointer to a &struct phylink_config.366 * @mode: link autonegotiation mode367 * @interface: link &typedef phy_interface_t mode368 *369+ * Notifies the MAC that the link has gone down. This will not be called370+ * unless mac_link_up() has been previously called.371+ *372+ * The MAC should stop processing packets for transmission and reception.373+ * phylink will have called netif_carrier_off() to notify the networking374+ * stack that the link has gone down, so MAC drivers should not make this375+ * call.376+ *377+ * If @mode is %MLO_AN_INBAND, then this function must not prevent the378+ * link coming up.379 */380void mac_link_down(struct phylink_config *config, unsigned int mode,381 phy_interface_t interface);382383/**384+ * mac_link_up() - notification that the link has come up385 * @config: a pointer to a &struct phylink_config.386+ * @phy: any attached phy (deprecated - please use LPI interfaces)387 * @mode: link autonegotiation mode388 * @interface: link &typedef phy_interface_t mode389 * @speed: link speed···385 * @tx_pause: link transmit pause enablement status386 * @rx_pause: link receive pause enablement status387 *388+ * Notifies the MAC that the link has come up, and the parameters of the389+ * link as seen from the MACs point of view. If mac_link_up() has been390+ * called previously, there will be an intervening call to mac_link_down()391+ * before this method will be subsequently called.392 *393 * @speed, @duplex, @tx_pause and @rx_pause indicate the finalised link394 * settings, and should be used to configure the MAC block appropriately···397 * that the user wishes to override the pause settings, and this should398 * be allowed when considering the implementation of this method.399 *400+ * Once configured, the MAC may begin to process packets for transmission401+ * and reception.402+ *403 * Interface type selection must be done in mac_config().404 */405void mac_link_up(struct phylink_config *config, struct phy_device *phy,
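To make the reworded kerneldoc concrete, here is a minimal, hypothetical callback pair for a fictional "foo" MAC. The private struct and register handling are invented; only the callback shapes and the carrier-handling rules follow the documentation above:

#include <linux/phylink.h>

struct foo_priv {                               /* invented driver state */
        struct phylink_config phylink_config;
        void __iomem *regs;
};

static void foo_mac_link_up(struct phylink_config *config,
                            struct phy_device *phy, unsigned int mode,
                            phy_interface_t interface, int speed, int duplex,
                            bool tx_pause, bool rx_pause)
{
        struct foo_priv *priv =
                container_of(config, struct foo_priv, phylink_config);

        /*
         * Program the finalised speed/duplex/pause settings into priv->regs
         * (omitted), then enable the datapath so the MAC starts passing
         * packets. No netif_carrier_on() here - phylink owns carrier state.
         */
        (void)priv;
}

static void foo_mac_link_down(struct phylink_config *config,
                              unsigned int mode, phy_interface_t interface)
{
        struct foo_priv *priv =
                container_of(config, struct foo_priv, phylink_config);

        /* Stop TX/RX; phylink has already called netif_carrier_off(). */
        (void)priv;
}

static const struct phylink_mac_ops foo_phylink_ops = {
        /* .mac_config and friends omitted from this sketch */
        .mac_link_up    = foo_mac_link_up,
        .mac_link_down  = foo_mac_link_down,
};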
+3
include/linux/virtio.h
···220 * occurs.221 * @reset_done: optional function to call after transport specific reset222 * operation has finished.00223 */224struct virtio_driver {225 struct device_driver driver;···239 int (*restore)(struct virtio_device *dev);240 int (*reset_prepare)(struct virtio_device *dev);241 int (*reset_done)(struct virtio_device *dev);0242};243244#define drv_to_virtio(__drv) container_of_const(__drv, struct virtio_driver, driver)
···220 * occurs.221 * @reset_done: optional function to call after transport specific reset222 * operation has finished.223+ * @shutdown: synchronize with the device on shutdown. If provided, replaces224+ * the virtio core implementation.225 */226struct virtio_driver {227 struct device_driver driver;···237 int (*restore)(struct virtio_device *dev);238 int (*reset_prepare)(struct virtio_device *dev);239 int (*reset_done)(struct virtio_device *dev);240+ void (*shutdown)(struct virtio_device *dev);241};242243#define drv_to_virtio(__drv) container_of_const(__drv, struct virtio_driver, driver)
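A hedged sketch of a driver opting into the new callback; the driver name is invented and resetting the device is only one plausible way to quiesce it on shutdown:

#include <linux/module.h>
#include <linux/virtio.h>

static void foo_virtio_shutdown(struct virtio_device *vdev)
{
        /*
         * Replaces the virtio core's default shutdown handling for this
         * driver: quiesce the device so it stops touching memory before
         * reboot/kexec.
         */
        virtio_reset_device(vdev);
}

static struct virtio_driver foo_virtio_driver = {
        .driver.name    = "foo-virtio",         /* invented */
        /* .id_table, .probe, .remove, ... omitted */
        .shutdown       = foo_virtio_shutdown,
};
module_virtio_driver(foo_virtio_driver);

MODULE_DESCRIPTION("Sketch of a virtio driver using the shutdown callback");
MODULE_LICENSE("GPL");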
+56-29
include/uapi/linux/landlock.h
···53 __u64 scoped;54};5556-/*57- * sys_landlock_create_ruleset() flags:58 *59- * - %LANDLOCK_CREATE_RULESET_VERSION: Get the highest supported Landlock ABI60- * version.61- * - %LANDLOCK_CREATE_RULESET_ERRATA: Get a bitmask of fixed issues.000062 */63/* clang-format off */64#define LANDLOCK_CREATE_RULESET_VERSION (1U << 0)65#define LANDLOCK_CREATE_RULESET_ERRATA (1U << 1)66/* clang-format on */6768-/*69- * sys_landlock_restrict_self() flags:70 *71- * - %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF: Do not create any log related to the72- * enforced restrictions. This should only be set by tools launching unknown73- * or untrusted programs (e.g. a sandbox tool, container runtime, system74- * service manager). Because programs sandboxing themselves should fix any75- * denied access, they should not set this flag to be aware of potential76- * issues reported by system's logs (i.e. audit).77- * - %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON: Explicitly ask to continue78- * logging denied access requests even after an :manpage:`execve(2)` call.79- * This flag should only be set if all the programs than can legitimately be80- * executed will not try to request a denied access (which could spam audit81- * logs).82- * - %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF: Do not create any log related83- * to the enforced restrictions coming from future nested domains created by84- * the caller or its descendants. This should only be set according to a85- * runtime configuration (i.e. not hardcoded) by programs launching other86- * unknown or untrusted programs that may create their own Landlock domains87- * and spam logs. The main use case is for container runtimes to enable users88- * to mute buggy sandboxed programs for a specific container image. Other use89- * cases include sandboxer tools and init systems. Unlike90- * %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF,91- * %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF does not impact the requested92- * restriction (if any) but only the future nested domains.0000000000000000000000093 */94/* clang-format off */95#define LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF (1U << 0)
···53 __u64 scoped;54};5556+/**57+ * DOC: landlock_create_ruleset_flags58 *59+ * **Flags**60+ *61+ * %LANDLOCK_CREATE_RULESET_VERSION62+ * Get the highest supported Landlock ABI version (starting at 1).63+ *64+ * %LANDLOCK_CREATE_RULESET_ERRATA65+ * Get a bitmask of fixed issues for the current Landlock ABI version.66 */67/* clang-format off */68#define LANDLOCK_CREATE_RULESET_VERSION (1U << 0)69#define LANDLOCK_CREATE_RULESET_ERRATA (1U << 1)70/* clang-format on */7172+/**73+ * DOC: landlock_restrict_self_flags74 *75+ * **Flags**76+ *77+ * By default, denied accesses originating from programs that sandbox themselves78+ * are logged via the audit subsystem. Such events typically indicate unexpected79+ * behavior, such as bugs or exploitation attempts. However, to avoid excessive80+ * logging, access requests denied by a domain not created by the originating81+ * program are not logged by default. The rationale is that programs should know82+ * their own behavior, but not necessarily the behavior of other programs. This83+ * default configuration is suitable for most programs that sandbox themselves.84+ * For specific use cases, the following flags allow programs to modify this85+ * default logging behavior.86+ *87+ * The %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF and88+ * %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON flags apply to the newly created89+ * Landlock domain.90+ *91+ * %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF92+ * Disables logging of denied accesses originating from the thread creating93+ * the Landlock domain, as well as its children, as long as they continue94+ * running the same executable code (i.e., without an intervening95+ * :manpage:`execve(2)` call). This is intended for programs that execute96+ * unknown code without invoking :manpage:`execve(2)`, such as script97+ * interpreters. Programs that only sandbox themselves should not set this98+ * flag, so users can be notified of unauthorized access attempts via system99+ * logs.100+ *101+ * %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON102+ * Enables logging of denied accesses after an :manpage:`execve(2)` call,103+ * providing visibility into unauthorized access attempts by newly executed104+ * programs within the created Landlock domain. This flag is recommended105+ * only when all potential executables in the domain are expected to comply106+ * with the access restrictions, as excessive audit log entries could make107+ * it more difficult to identify critical events.108+ *109+ * %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF110+ * Disables logging of denied accesses originating from nested Landlock111+ * domains created by the caller or its descendants. This flag should be set112+ * according to runtime configuration, not hardcoded, to avoid suppressing113+ * important security events. It is useful for container runtimes or114+ * sandboxing tools that may launch programs which themselves create115+ * Landlock domains and could otherwise generate excessive logs. Unlike116+ * ``LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF``, this flag only affects117+ * future nested domains, not the one being created. It can also be used118+ * with a @ruleset_fd value of -1 to mute subdomain logs without creating a119+ * domain.120 */121/* clang-format off */122#define LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF (1U << 0)
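A user-space sketch of the ruleset_fd == -1 case mentioned for %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF, assuming headers and a libc new enough to define the flag and SYS_landlock_restrict_self:

#include <linux/landlock.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
        /* Landlock requires no_new_privs (or CAP_SYS_ADMIN). */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
                perror("prctl");

        /*
         * ruleset_fd == -1: no new domain is created; the flag only mutes
         * audit logs for future nested domains created by this task or its
         * descendants (e.g. a container runtime silencing noisy payloads).
         */
        if (syscall(SYS_landlock_restrict_self, -1,
                    LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF))
                perror("landlock_restrict_self");
        return 0;
}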
+2-2
include/uapi/linux/vhost.h
···2829/* Set current process as the (exclusive) owner of this file descriptor. This30 * must be called before any other vhost command. Further calls to31- * VHOST_OWNER_SET fail until VHOST_OWNER_RESET is called. */32#define VHOST_SET_OWNER _IO(VHOST_VIRTIO, 0x01)33/* Give up ownership, and reset the device to default values.34- * Allows subsequent call to VHOST_OWNER_SET to succeed. */35#define VHOST_RESET_OWNER _IO(VHOST_VIRTIO, 0x02)3637/* Set up/modify memory layout */
···2829/* Set current process as the (exclusive) owner of this file descriptor. This30 * must be called before any other vhost command. Further calls to31+ * VHOST_SET_OWNER fail until VHOST_RESET_OWNER is called. */32#define VHOST_SET_OWNER _IO(VHOST_VIRTIO, 0x01)33/* Give up ownership, and reset the device to default values.34+ * Allows subsequent call to VHOST_SET_OWNER to succeed. */35#define VHOST_RESET_OWNER _IO(VHOST_VIRTIO, 0x02)3637/* Set up/modify memory layout */
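A minimal user-space sketch of the ownership handshake the corrected comments describe, using /dev/vhost-net as a representative vhost device:

#include <fcntl.h>
#include <linux/vhost.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/dev/vhost-net", O_RDWR);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* Must come first; a second VHOST_SET_OWNER on this fd fails. */
        if (ioctl(fd, VHOST_SET_OWNER))
                perror("VHOST_SET_OWNER");

        /* ... VHOST_SET_MEM_TABLE, VHOST_SET_VRING_*, etc. ... */

        /* Give up ownership so a later VHOST_SET_OWNER can succeed. */
        if (ioctl(fd, VHOST_RESET_OWNER))
                perror("VHOST_RESET_OWNER");

        close(fd);
        return 0;
}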
···107 */108#define UFS_DEVICE_QUIRK_DELAY_AFTER_LPM (1 << 11)109110+/*111+ * Some ufs devices may need more time to be in hibern8 before exiting.112+ * Enable this quirk to give it an additional 100us.113+ */114+#define UFS_DEVICE_QUIRK_PA_HIBER8TIME (1 << 12)115+116#endif /* UFS_QUIRKS_H_ */
···872 lockdep_assert(!io_wq_current_is_worker());873 lockdep_assert_held(&ctx->uring_lock);874875+ if (!ctx->lockless_cq) {876+ spin_lock(&ctx->completion_lock);877+ posted = io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags);878+ spin_unlock(&ctx->completion_lock);879+ } else {880+ posted = io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags);881+ }882+883 ctx->submit_state.cq_flush = true;0884 return posted;885}886···1078 while (node) {1079 req = container_of(node, struct io_kiocb, io_task_work.node);1080 node = node->next;1081+ if (last_ctx != req->ctx) {1082 if (last_ctx) {1083+ if (sync)1084+ flush_delayed_work(&last_ctx->fallback_work);1085 percpu_ref_put(&last_ctx->refs);1086 }1087 last_ctx = req->ctx;1088 percpu_ref_get(&last_ctx->refs);1089 }1090+ if (llist_add(&req->io_task_work.node, &last_ctx->fallback_llist))1091+ schedule_delayed_work(&last_ctx->fallback_work, 1);01092 }10931094 if (last_ctx) {1095+ if (sync)1096+ flush_delayed_work(&last_ctx->fallback_work);1097 percpu_ref_put(&last_ctx->refs);1098 }1099}
+1-1
kernel/bpf/hashtab.c
···2189 b = &htab->buckets[i];2190 rcu_read_lock();2191 head = &b->head;2192- hlist_nulls_for_each_entry_rcu(elem, n, head, hash_node) {2193 key = elem->key;2194 if (is_percpu) {2195 /* current cpu value for percpu map */
···2189 b = &htab->buckets[i];2190 rcu_read_lock();2191 head = &b->head;2192+ hlist_nulls_for_each_entry_safe(elem, n, head, hash_node) {2193 key = elem->key;2194 if (is_percpu) {2195 /* current cpu value for percpu map */
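The move to the _safe iterator matters because the per-element callback may delete the entry currently being visited. The same pattern with an ordinary kernel list, all names invented:

#include <linux/list.h>
#include <linux/slab.h>

struct item {                           /* invented example type */
        struct list_head node;
        int key;
};

static void drop_matching(struct list_head *head, int key)
{
        struct item *it, *tmp;

        /*
         * The _safe iterator caches the next node in @tmp before the body
         * runs, so deleting (and freeing) the current entry does not break
         * the walk - the same reason the hashtab loop above switched away
         * from the plain RCU iterator.
         */
        list_for_each_entry_safe(it, tmp, head, node) {
                if (it->key == key) {
                        list_del(&it->node);
                        kfree(it);
                }
        }
}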
+1
kernel/bpf/preload/bpf_preload_kern.c
···89}90late_initcall(load);91module_exit(fini);092MODULE_LICENSE("GPL");93MODULE_DESCRIPTION("Embedded BPF programs for introspection in bpffs");
···89}90late_initcall(load);91module_exit(fini);92+MODULE_IMPORT_NS("BPF_INTERNAL");93MODULE_LICENSE("GPL");94MODULE_DESCRIPTION("Embedded BPF programs for introspection in bpffs");
···336337static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)338{339- if (!rmem->priv) {340- struct dma_coherent_mem *mem;3410342 mem = dma_init_coherent_memory(rmem->base, rmem->base,343 rmem->size, true);344 if (IS_ERR(mem))345 return PTR_ERR(mem);346 rmem->priv = mem;347 }348- dma_assign_coherent_memory(dev, rmem->priv);000000349 return 0;350}351
···336337static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)338{339+ struct dma_coherent_mem *mem = rmem->priv;0340341+ if (!mem) {342 mem = dma_init_coherent_memory(rmem->base, rmem->base,343 rmem->size, true);344 if (IS_ERR(mem))345 return PTR_ERR(mem);346 rmem->priv = mem;347 }348+349+ /* Warn if the device potentially can't use the reserved memory */350+ if (mem->device_base + rmem->size - 1 >351+ min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit))352+ dev_warn(dev, "reserved memory is beyond device's set DMA address range\n");353+354+ dma_assign_coherent_memory(dev, mem);355 return 0;356}357
+1-2
kernel/dma/contiguous.c
···64 * Users, who want to set the size of global CMA area for their system65 * should use cma= kernel parameter.66 */67-static const phys_addr_t size_bytes __initconst =68- (phys_addr_t)CMA_SIZE_MBYTES * SZ_1M;69static phys_addr_t size_cmdline __initdata = -1;70static phys_addr_t base_cmdline __initdata;71static phys_addr_t limit_cmdline __initdata;
···64 * Users, who want to set the size of global CMA area for their system65 * should use cma= kernel parameter.66 */67+#define size_bytes ((phys_addr_t)CMA_SIZE_MBYTES * SZ_1M)068static phys_addr_t size_cmdline __initdata = -1;69static phys_addr_t base_cmdline __initdata;70static phys_addr_t limit_cmdline __initdata;
+18-9
kernel/dma/mapping.c
···910}911EXPORT_SYMBOL(dma_set_coherent_mask);912913-/**914- * dma_addressing_limited - return if the device is addressing limited915- * @dev: device to check916- *917- * Return %true if the devices DMA mask is too small to address all memory in918- * the system, else %false. Lack of addressing bits is the prime reason for919- * bounce buffering, but might not be the only one.920- */921-bool dma_addressing_limited(struct device *dev)922{923 const struct dma_map_ops *ops = get_dma_ops(dev);924···921 if (unlikely(ops) || use_dma_iommu(dev))922 return false;923 return !dma_direct_all_ram_mapped(dev);00000000000000000924}925EXPORT_SYMBOL_GPL(dma_addressing_limited);926
···910}911EXPORT_SYMBOL(dma_set_coherent_mask);912913+static bool __dma_addressing_limited(struct device *dev)00000000914{915 const struct dma_map_ops *ops = get_dma_ops(dev);916···929 if (unlikely(ops) || use_dma_iommu(dev))930 return false;931 return !dma_direct_all_ram_mapped(dev);932+}933+934+/**935+ * dma_addressing_limited - return if the device is addressing limited936+ * @dev: device to check937+ *938+ * Return %true if the devices DMA mask is too small to address all memory in939+ * the system, else %false. Lack of addressing bits is the prime reason for940+ * bounce buffering, but might not be the only one.941+ */942+bool dma_addressing_limited(struct device *dev)943+{944+ if (!__dma_addressing_limited(dev))945+ return false;946+947+ dev_dbg(dev, "device is DMA addressing limited\n");948+ return true;949}950EXPORT_SYMBOL_GPL(dma_addressing_limited);951
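A hedged sketch of the usual driver-side consumer, showing where the new dev_dbg() becomes useful; the probe helper and the policy it applies are invented:

#include <linux/device.h>
#include <linux/dma-mapping.h>

static int foo_setup_dma(struct device *dev)    /* hypothetical probe helper */
{
        int ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));

        if (ret)
                return ret;

        /*
         * With the refactor above this also emits a dev_dbg() when the mask
         * cannot cover all RAM, which helps when chasing unexpected SWIOTLB
         * bounce buffering.
         */
        if (dma_addressing_limited(dev))
                dev_info(dev, "DMA is limited, using conservative settings\n");

        return 0;
}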
+2-2
kernel/events/core.c
···3943 perf_event_set_state(event, PERF_EVENT_STATE_ERROR);39443945 if (*perf_event_fasync(event))3946- event->pending_kill = POLL_HUP;39473948 perf_event_wakeup(event);3949 } else {···60756076 if (unlikely(READ_ONCE(event->state) == PERF_EVENT_STATE_ERROR &&6077 event->attr.pinned))6078- return events;60796080 /*6081 * Pin the event->rb by taking event->mmap_mutex; otherwise
···3943 perf_event_set_state(event, PERF_EVENT_STATE_ERROR);39443945 if (*perf_event_fasync(event))3946+ event->pending_kill = POLL_ERR;39473948 perf_event_wakeup(event);3949 } else {···60756076 if (unlikely(READ_ONCE(event->state) == PERF_EVENT_STATE_ERROR &&6077 event->attr.pinned))6078+ return EPOLLERR;60796080 /*6081 * Pin the event->rb by taking event->mmap_mutex; otherwise
+7-43
kernel/sched/ext.c
···163 /*164 * CPU cgroup support flags165 */166- SCX_OPS_HAS_CGROUP_WEIGHT = 1LLU << 16, /* cpu.weight */167168 SCX_OPS_ALL_FLAGS = SCX_OPS_KEEP_BUILTIN_IDLE |169 SCX_OPS_ENQ_LAST |···38993900DEFINE_STATIC_PERCPU_RWSEM(scx_cgroup_rwsem);3901static bool scx_cgroup_enabled;3902-static bool cgroup_warned_missing_weight;3903-static bool cgroup_warned_missing_idle;3904-3905-static void scx_cgroup_warn_missing_weight(struct task_group *tg)3906-{3907- if (scx_ops_enable_state() == SCX_OPS_DISABLED ||3908- cgroup_warned_missing_weight)3909- return;3910-3911- if ((scx_ops.flags & SCX_OPS_HAS_CGROUP_WEIGHT) || !tg->css.parent)3912- return;3913-3914- pr_warn("sched_ext: \"%s\" does not implement cgroup cpu.weight\n",3915- scx_ops.name);3916- cgroup_warned_missing_weight = true;3917-}3918-3919-static void scx_cgroup_warn_missing_idle(struct task_group *tg)3920-{3921- if (!scx_cgroup_enabled || cgroup_warned_missing_idle)3922- return;3923-3924- if (!tg->idle)3925- return;3926-3927- pr_warn("sched_ext: \"%s\" does not implement cgroup cpu.idle\n",3928- scx_ops.name);3929- cgroup_warned_missing_idle = true;3930-}39313932int scx_tg_online(struct task_group *tg)3933{···3907 WARN_ON_ONCE(tg->scx_flags & (SCX_TG_ONLINE | SCX_TG_INITED));39083909 percpu_down_read(&scx_cgroup_rwsem);3910-3911- scx_cgroup_warn_missing_weight(tg);39123913 if (scx_cgroup_enabled) {3914 if (SCX_HAS_OP(cgroup_init)) {···40454046void scx_group_set_idle(struct task_group *tg, bool idle)4047{4048- percpu_down_read(&scx_cgroup_rwsem);4049- scx_cgroup_warn_missing_idle(tg);4050- percpu_up_read(&scx_cgroup_rwsem);4051}40524053static void scx_cgroup_lock(void)···42394240 percpu_rwsem_assert_held(&scx_cgroup_rwsem);42414242- cgroup_warned_missing_weight = false;4243- cgroup_warned_missing_idle = false;4244-4245 /*4246 * scx_tg_on/offline() are excluded through scx_cgroup_rwsem. If we walk4247 * cgroups and init, all online cgroups are initialized.···4247 css_for_each_descendant_pre(css, &root_task_group.css) {4248 struct task_group *tg = css_tg(css);4249 struct scx_cgroup_init_args args = { .weight = tg->scx_weight };4250-4251- scx_cgroup_warn_missing_weight(tg);4252- scx_cgroup_warn_missing_idle(tg);42534254 if ((tg->scx_flags &4255 (SCX_TG_ONLINE | SCX_TG_INITED)) != SCX_TG_ONLINE)···45844585static void free_exit_info(struct scx_exit_info *ei)4586{4587- kfree(ei->dump);4588 kfree(ei->msg);4589 kfree(ei->bt);4590 kfree(ei);···46004601 ei->bt = kcalloc(SCX_EXIT_BT_LEN, sizeof(ei->bt[0]), GFP_KERNEL);4602 ei->msg = kzalloc(SCX_EXIT_MSG_LEN, GFP_KERNEL);4603- ei->dump = kzalloc(exit_dump_len, GFP_KERNEL);46044605 if (!ei->bt || !ei->msg || !ei->dump) {4606 free_exit_info(ei);···5212 scx_ops_error("SCX_OPS_BUILTIN_IDLE_PER_NODE requires CPU idle selection enabled");5213 return -EINVAL;5214 }00052155216 return 0;5217}
···163 /*164 * CPU cgroup support flags165 */166+ SCX_OPS_HAS_CGROUP_WEIGHT = 1LLU << 16, /* DEPRECATED, will be removed on 6.18 */167168 SCX_OPS_ALL_FLAGS = SCX_OPS_KEEP_BUILTIN_IDLE |169 SCX_OPS_ENQ_LAST |···38993900DEFINE_STATIC_PERCPU_RWSEM(scx_cgroup_rwsem);3901static bool scx_cgroup_enabled;0000000000000000000000000000039023903int scx_tg_online(struct task_group *tg)3904{···3936 WARN_ON_ONCE(tg->scx_flags & (SCX_TG_ONLINE | SCX_TG_INITED));39373938 percpu_down_read(&scx_cgroup_rwsem);0039393940 if (scx_cgroup_enabled) {3941 if (SCX_HAS_OP(cgroup_init)) {···40764077void scx_group_set_idle(struct task_group *tg, bool idle)4078{4079+ /* TODO: Implement ops->cgroup_set_idle() */004080}40814082static void scx_cgroup_lock(void)···42724273 percpu_rwsem_assert_held(&scx_cgroup_rwsem);42740004275 /*4276 * scx_tg_on/offline() are excluded through scx_cgroup_rwsem. If we walk4277 * cgroups and init, all online cgroups are initialized.···4283 css_for_each_descendant_pre(css, &root_task_group.css) {4284 struct task_group *tg = css_tg(css);4285 struct scx_cgroup_init_args args = { .weight = tg->scx_weight };00042864287 if ((tg->scx_flags &4288 (SCX_TG_ONLINE | SCX_TG_INITED)) != SCX_TG_ONLINE)···46234624static void free_exit_info(struct scx_exit_info *ei)4625{4626+ kvfree(ei->dump);4627 kfree(ei->msg);4628 kfree(ei->bt);4629 kfree(ei);···46394640 ei->bt = kcalloc(SCX_EXIT_BT_LEN, sizeof(ei->bt[0]), GFP_KERNEL);4641 ei->msg = kzalloc(SCX_EXIT_MSG_LEN, GFP_KERNEL);4642+ ei->dump = kvzalloc(exit_dump_len, GFP_KERNEL);46434644 if (!ei->bt || !ei->msg || !ei->dump) {4645 free_exit_info(ei);···5251 scx_ops_error("SCX_OPS_BUILTIN_IDLE_PER_NODE requires CPU idle selection enabled");5252 return -EINVAL;5253 }5254+5255+ if (ops->flags & SCX_OPS_HAS_CGROUP_WEIGHT)5256+ pr_warn("SCX_OPS_HAS_CGROUP_WEIGHT is deprecated and a noop\n");52575258 return 0;5259}
···111 * @arg: data to be passed to fn and handled_kill112 * @name: the thread's name113 *114- * This returns a specialized task for use by the vhost layer or NULL on115 * failure. The returned task is inactive, and the caller must fire it up116 * through vhost_task_start().117 */
···111 * @arg: data to be passed to fn and handled_kill112 * @name: the thread's name113 *114+ * This returns a specialized task for use by the vhost layer or ERR_PTR() on115 * failure. The returned task is inactive, and the caller must fire it up116 * through vhost_task_start().117 */
+5-3
mm/migrate.c
···845 return -EAGAIN;846847 if (check_refs) {848- bool busy;849 bool invalidated = false;85000851recheck_buffers:852 busy = false;853 spin_lock(&mapping->i_private_lock);···861 }862 bh = bh->b_this_page;863 } while (bh != head);0864 if (busy) {865 if (invalidated) {866 rc = -EAGAIN;867 goto unlock_buffers;868 }869- spin_unlock(&mapping->i_private_lock);870 invalidate_bh_lrus();871 invalidated = true;872 goto recheck_buffers;···885886unlock_buffers:887 if (check_refs)888- spin_unlock(&mapping->i_private_lock);889 bh = head;890 do {891 unlock_buffer(bh);
···845 return -EAGAIN;846847 if (check_refs) {848+ bool busy, migrating;849 bool invalidated = false;850851+ migrating = test_and_set_bit_lock(BH_Migrate, &head->b_state);852+ VM_WARN_ON_ONCE(migrating);853recheck_buffers:854 busy = false;855 spin_lock(&mapping->i_private_lock);···859 }860 bh = bh->b_this_page;861 } while (bh != head);862+ spin_unlock(&mapping->i_private_lock);863 if (busy) {864 if (invalidated) {865 rc = -EAGAIN;866 goto unlock_buffers;867 }0868 invalidate_bh_lrus();869 invalidated = true;870 goto recheck_buffers;···883884unlock_buffers:885 if (check_refs)886+ clear_bit_unlock(BH_Migrate, &head->b_state);887 bh = head;888 do {889 unlock_buffer(bh);
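The lookup side (see the fs/buffer.c hunk earlier) pairs test_bit_acquire() with the lock/unlock bit operations used here. A generic stand-alone sketch of that pairing, with an invented flag bit standing in for BH_Migrate:

#include <linux/bitops.h>

#define MY_FLAG_BUSY    0       /* invented flag bit, not BH_Migrate itself */

static bool my_try_claim(unsigned long *state)
{
        /* Acquire semantics: later loads cannot be reordered before the set. */
        return !test_and_set_bit_lock(MY_FLAG_BUSY, state);
}

static void my_release_claim(unsigned long *state)
{
        /* Release semantics: earlier stores are visible before the clear. */
        clear_bit_unlock(MY_FLAG_BUSY, state);
}

static bool my_is_claimed(const unsigned long *state)
{
        /* Pairs with the lock/unlock above, like the BH_Migrate test in
         * __find_get_block_slow(). */
        return test_bit_acquire(MY_FLAG_BUSY, state);
}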
···337338 release_sock(sk);339340+ kfree_rcu_mightsleep(match);341+ /* Adjust sk_omem_alloc like sock_kfree_s() does, to match342+ * with allocation of this memory by sock_kmemdup()343+ */344+ atomic_sub(sizeof(*match), &sk->sk_omem_alloc);345346 err = 0;347out:
+17-6
net/sched/sch_hfsc.c
···961962 if (cl != NULL) {963 int old_flags;0964965 if (parentid) {966 if (cl->cl_parent &&···992 if (usc != NULL)993 hfsc_change_usc(cl, usc, cur_time);994000000995 if (cl->qdisc->q.qlen != 0) {996- int len = qdisc_peek_len(cl->qdisc);997-998 if (cl->cl_flags & HFSC_RSC) {999 if (old_flags & HFSC_RSC)1000 update_ed(cl, len);···1641 if (cl->qdisc->q.qlen != 0) {1642 /* update ed */1643 next_len = qdisc_peek_len(cl->qdisc);1644- if (realtime)1645- update_ed(cl, next_len);1646- else1647- update_d(cl, next_len);0000001648 } else {1649 /* the class becomes passive */1650 eltree_remove(cl);
···961962 if (cl != NULL) {963 int old_flags;964+ int len = 0;965966 if (parentid) {967 if (cl->cl_parent &&···991 if (usc != NULL)992 hfsc_change_usc(cl, usc, cur_time);993994+ if (cl->qdisc->q.qlen != 0)995+ len = qdisc_peek_len(cl->qdisc);996+ /* Check queue length again since some qdisc implementations997+ * (e.g., netem/codel) might empty the queue during the peek998+ * operation.999+ */1000 if (cl->qdisc->q.qlen != 0) {001001 if (cl->cl_flags & HFSC_RSC) {1002 if (old_flags & HFSC_RSC)1003 update_ed(cl, len);···1636 if (cl->qdisc->q.qlen != 0) {1637 /* update ed */1638 next_len = qdisc_peek_len(cl->qdisc);1639+ /* Check queue length again since some qdisc implementations1640+ * (e.g., netem/codel) might empty the queue during the peek1641+ * operation.1642+ */1643+ if (cl->qdisc->q.qlen != 0) {1644+ if (realtime)1645+ update_ed(cl, next_len);1646+ else1647+ update_d(cl, next_len);1648+ }1649 } else {1650 /* the class becomes passive */1651 eltree_remove(cl);
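A condensed sketch of the peek-then-recheck pattern this fix introduces; note that qdisc_peek_len() is hfsc's own local helper, so this hypothetical wrapper is assumed to live inside sch_hfsc.c:

#include <net/sch_generic.h>

/* Hypothetical wrapper, assumed to sit next to hfsc's local qdisc_peek_len(). */
static int hfsc_peek_len_checked(struct Qdisc *child)
{
        int len;

        if (child->q.qlen == 0)
                return 0;

        len = qdisc_peek_len(child);

        /*
         * Some children (netem, codel) may drop packets inside ->peek(),
         * emptying the queue; re-check before using @len for the eligible
         * and deadline curve updates.
         */
        if (child->q.qlen == 0)
                return 0;

        return len;
}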
+1-5
net/sunrpc/cache.c
···1536 * or by one second if it has already reached the current time.1537 * Newly added cache entries will always have ->last_refresh greater1538 * that ->flush_time, so they don't get flushed prematurely.1539- *1540- * If someone frequently calls the flush interface, we should1541- * immediately clean the corresponding cache_detail instead of1542- * continuously accumulating nextcheck.1543 */15441545- if (cd->flush_time >= now && cd->flush_time < (now + 5))1546 now = cd->flush_time + 1;15471548 cd->flush_time = now;
···1536 * or by one second if it has already reached the current time.1537 * Newly added cache entries will always have ->last_refresh greater1538 * that ->flush_time, so they don't get flushed prematurely.00001539 */15401541+ if (cd->flush_time >= now)1542 now = cd->flush_time + 1;15431544 cd->flush_time = now;
+2-1
net/tipc/monitor.c
···716 if (!mon)717 continue;718 write_lock_bh(&mon->lock);719- mon->self->addr = tipc_own_addr(net);0720 write_unlock_bh(&mon->lock);721 }722}
···716 if (!mon)717 continue;718 write_lock_bh(&mon->lock);719+ if (mon->self)720+ mon->self->addr = tipc_own_addr(net);721 write_unlock_bh(&mon->lock);722 }723}
+6-2
rust/kernel/firmware.rs
···4//!5//! C header: [`include/linux/firmware.h`](srctree/include/linux/firmware.h)67-use crate::{bindings, device::Device, error::Error, error::Result, str::CStr};8use core::ptr::NonNull;910/// # Invariants···12/// One of the following: `bindings::request_firmware`, `bindings::firmware_request_nowarn`,13/// `bindings::firmware_request_platform`, `bindings::request_firmware_direct`.14struct FwFunc(15- unsafe extern "C" fn(*mut *const bindings::firmware, *const u8, *mut bindings::device) -> i32,000016);1718impl FwFunc {
···4//!5//! C header: [`include/linux/firmware.h`](srctree/include/linux/firmware.h)67+use crate::{bindings, device::Device, error::Error, error::Result, ffi, str::CStr};8use core::ptr::NonNull;910/// # Invariants···12/// One of the following: `bindings::request_firmware`, `bindings::firmware_request_nowarn`,13/// `bindings::firmware_request_platform`, `bindings::request_firmware_direct`.14struct FwFunc(15+ unsafe extern "C" fn(16+ *mut *const bindings::firmware,17+ *const ffi::c_char,18+ *mut bindings::device,19+ ) -> i32,20);2122impl FwFunc {
···169 * the new ruleset.170 * @size: Size of the pointed &struct landlock_ruleset_attr (needed for171 * backward and forward compatibility).172- * @flags: Supported value:0173 * - %LANDLOCK_CREATE_RULESET_VERSION174 * - %LANDLOCK_CREATE_RULESET_ERRATA175 *176 * This system call enables to create a new Landlock ruleset, and returns the177 * related file descriptor on success.178 *179- * If @flags is %LANDLOCK_CREATE_RULESET_VERSION and @attr is NULL and @size is180- * 0, then the returned value is the highest supported Landlock ABI version181- * (starting at 1).182- *183- * If @flags is %LANDLOCK_CREATE_RULESET_ERRATA and @attr is NULL and @size is184- * 0, then the returned value is a bitmask of fixed issues for the current185- * Landlock ABI version.186 *187 * Possible returned errors are:188 *···187 * - %E2BIG: @attr or @size inconsistencies;188 * - %EFAULT: @attr or @size inconsistencies;189 * - %ENOMSG: empty &landlock_ruleset_attr.handled_access_fs.000190 */191SYSCALL_DEFINE3(landlock_create_ruleset,192 const struct landlock_ruleset_attr __user *const, attr,···451 * @ruleset_fd: File descriptor tied to the ruleset to merge with the target.452 * @flags: Supported values:453 *454- * - %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF455- * - %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON456- * - %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF457 *458 * This system call enables to enforce a Landlock ruleset on the current459 * thread. Enforcing a ruleset requires that the task has %CAP_SYS_ADMIN in its460 * namespace or is running with no_new_privs. This avoids scenarios where461 * unprivileged tasks can affect the behavior of privileged children.462- *463- * It is allowed to only pass the %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF464- * flag with a @ruleset_fd value of -1.465 *466 * Possible returned errors are:467 *···471 * %CAP_SYS_ADMIN in its namespace.472 * - %E2BIG: The maximum number of stacked rulesets is reached for the current473 * thread.000474 */475SYSCALL_DEFINE2(landlock_restrict_self, const int, ruleset_fd, const __u32,476 flags)
···169 * the new ruleset.170 * @size: Size of the pointed &struct landlock_ruleset_attr (needed for171 * backward and forward compatibility).172+ * @flags: Supported values:173+ *174 * - %LANDLOCK_CREATE_RULESET_VERSION175 * - %LANDLOCK_CREATE_RULESET_ERRATA176 *177 * This system call enables to create a new Landlock ruleset, and returns the178 * related file descriptor on success.179 *180+ * If %LANDLOCK_CREATE_RULESET_VERSION or %LANDLOCK_CREATE_RULESET_ERRATA is181+ * set, then @attr must be NULL and @size must be 0.00000182 *183 * Possible returned errors are:184 *···191 * - %E2BIG: @attr or @size inconsistencies;192 * - %EFAULT: @attr or @size inconsistencies;193 * - %ENOMSG: empty &landlock_ruleset_attr.handled_access_fs.194+ *195+ * .. kernel-doc:: include/uapi/linux/landlock.h196+ * :identifiers: landlock_create_ruleset_flags197 */198SYSCALL_DEFINE3(landlock_create_ruleset,199 const struct landlock_ruleset_attr __user *const, attr,···452 * @ruleset_fd: File descriptor tied to the ruleset to merge with the target.453 * @flags: Supported values:454 *455+ * - %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF456+ * - %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON457+ * - %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF458 *459 * This system call enables to enforce a Landlock ruleset on the current460 * thread. Enforcing a ruleset requires that the task has %CAP_SYS_ADMIN in its461 * namespace or is running with no_new_privs. This avoids scenarios where462 * unprivileged tasks can affect the behavior of privileged children.000463 *464 * Possible returned errors are:465 *···475 * %CAP_SYS_ADMIN in its namespace.476 * - %E2BIG: The maximum number of stacked rulesets is reached for the current477 * thread.478+ *479+ * .. kernel-doc:: include/uapi/linux/landlock.h480+ * :identifiers: landlock_restrict_self_flags481 */482SYSCALL_DEFINE2(landlock_restrict_self, const int, ruleset_fd, const __u32,483 flags)
···6#include "for_each_array_map_elem.skel.h"7#include "for_each_map_elem_write_key.skel.h"8#include "for_each_multi_maps.skel.h"0910static unsigned int duration;11···204 for_each_multi_maps__destroy(skel);205}2060000000000000000000000000000000000207void test_for_each(void)208{209 if (test__start_subtest("hash_map"))···248 test_write_map_key();249 if (test__start_subtest("multi_maps"))250 test_multi_maps();00251}
···6#include "for_each_array_map_elem.skel.h"7#include "for_each_map_elem_write_key.skel.h"8#include "for_each_multi_maps.skel.h"9+#include "for_each_hash_modify.skel.h"1011static unsigned int duration;12···203 for_each_multi_maps__destroy(skel);204}205206+static void test_hash_modify(void)207+{208+ struct for_each_hash_modify *skel;209+ int max_entries, i, err;210+ __u64 key, val;211+212+ LIBBPF_OPTS(bpf_test_run_opts, topts,213+ .data_in = &pkt_v4,214+ .data_size_in = sizeof(pkt_v4),215+ .repeat = 1216+ );217+218+ skel = for_each_hash_modify__open_and_load();219+ if (!ASSERT_OK_PTR(skel, "for_each_hash_modify__open_and_load"))220+ return;221+222+ max_entries = bpf_map__max_entries(skel->maps.hashmap);223+ for (i = 0; i < max_entries; i++) {224+ key = i;225+ val = i;226+ err = bpf_map__update_elem(skel->maps.hashmap, &key, sizeof(key),227+ &val, sizeof(val), BPF_ANY);228+ if (!ASSERT_OK(err, "map_update"))229+ goto out;230+ }231+232+ err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_pkt_access), &topts);233+ ASSERT_OK(err, "bpf_prog_test_run_opts");234+ ASSERT_OK(topts.retval, "retval");235+236+out:237+ for_each_hash_modify__destroy(skel);238+}239+240void test_for_each(void)241{242 if (test__start_subtest("hash_map"))···213 test_write_map_key();214 if (test__start_subtest("multi_maps"))215 test_multi_maps();216+ if (test__start_subtest("hash_modify"))217+ test_hash_modify();218}
···313 "$TC qdisc del dev $DUMMY handle 1: root",314 "$IP addr del 10.10.10.10/24 dev $DUMMY || true"315 ]000000000000000000000000000000000000000316 }317]
···313 "$TC qdisc del dev $DUMMY handle 1: root",314 "$IP addr del 10.10.10.10/24 dev $DUMMY || true"315 ]316+ },317+ {318+ "id": "a4c3",319+ "name": "Test HFSC with netem/blackhole - queue emptying during peek operation",320+ "category": [321+ "qdisc",322+ "hfsc",323+ "netem",324+ "blackhole"325+ ],326+ "plugins": {327+ "requires": "nsPlugin"328+ },329+ "setup": [330+ "$IP link set dev $DUMMY up || true",331+ "$IP addr add 10.10.10.10/24 dev $DUMMY || true",332+ "$TC qdisc add dev $DUMMY handle 1:0 root drr",333+ "$TC class add dev $DUMMY parent 1:0 classid 1:1 drr",334+ "$TC class add dev $DUMMY parent 1:0 classid 1:2 drr",335+ "$TC qdisc add dev $DUMMY parent 1:1 handle 2:0 plug limit 1024",336+ "$TC qdisc add dev $DUMMY parent 1:2 handle 3:0 hfsc default 1",337+ "$TC class add dev $DUMMY parent 3:0 classid 3:1 hfsc rt m1 5Mbit d 10ms m2 10Mbit",338+ "$TC qdisc add dev $DUMMY parent 3:1 handle 4:0 netem delay 1ms",339+ "$TC qdisc add dev $DUMMY parent 4:1 handle 5:0 blackhole",340+ "ping -c 3 -W 0.01 -i 0.001 -s 1 10.10.10.10 -I $DUMMY > /dev/null 2>&1 || true",341+ "$TC class change dev $DUMMY parent 3:0 classid 3:1 hfsc sc m1 5Mbit d 10ms m2 10Mbit",342+ "$TC class del dev $DUMMY parent 3:0 classid 3:1",343+ "$TC class add dev $DUMMY parent 3:0 classid 3:1 hfsc rt m1 5Mbit d 10ms m2 10Mbit",344+ "ping -c 3 -W 0.01 -i 0.001 -s 1 10.10.10.10 -I $DUMMY > /dev/null 2>&1 || true"345+ ],346+ "cmdUnderTest": "$TC class change dev $DUMMY parent 3:0 classid 3:1 hfsc sc m1 5Mbit d 10ms m2 10Mbit",347+ "expExitCode": "0",348+ "verifyCmd": "$TC -s qdisc show dev $DUMMY",349+ "matchPattern": "qdisc hfsc 3:.*parent 1:2.*default 1",350+ "matchCount": "1",351+ "teardown": [352+ "$TC qdisc del dev $DUMMY handle 1:0 root",353+ "$IP addr del 10.10.10.10/24 dev $DUMMY || true"354+ ]355 }356]
+1
tools/testing/selftests/ublk/kublk.c
···1354 value = strtol(optarg, NULL, 10);1355 if (value)1356 ctx.flags |= UBLK_F_NEED_GET_DATA;01357 case 0:1358 if (!strcmp(longopts[option_idx].name, "debug_mask"))1359 ublk_dbg_mask = strtol(optarg, NULL, 16);
···1354 value = strtol(optarg, NULL, 10);1355 if (value)1356 ctx.flags |= UBLK_F_NEED_GET_DATA;1357+ break;1358 case 0:1359 if (!strcmp(longopts[option_idx].name, "debug_mask"))1360 ublk_dbg_mask = strtol(optarg, NULL, 16);
-3
tools/testing/selftests/ublk/kublk.h
···86 unsigned int fg:1;87 unsigned int recovery:1;8889- /* fault_inject */90- long long delay_us;91-92 int _evtfd;93 int _shmid;94
···86 unsigned int fg:1;87 unsigned int recovery:1;8800089 int _evtfd;90 int _shmid;91