···
 Zoned Filesystems
 =================
 
-For zoned file systems, the following attribute is exposed in:
+For zoned file systems, the following attributes are exposed in:
 
  /sys/fs/xfs/<dev>/zoned/
 
···
  is limited by the capabilities of the backing zoned device, file system
  size and the max_open_zones mount option.
 
-Zoned Filesystems
-=================
-
-For zoned file systems, the following attributes are exposed in:
-
- /sys/fs/xfs/<dev>/zoned/
-
- max_open_zones (Min: 1 Default: Varies Max: UINTMAX)
-	This read-only attribute exposes the maximum number of open zones
-	available for data placement. The value is determined at mount time and
-	is limited by the capabilities of the backing zoned device, file system
-	size and the max_open_zones mount option.
-
- zonegc_low_space (Min: 0 Default: 0 Max: 100)
-	Define a percentage for how much of the unused space that GC should keep
-	available for writing. A high value will reclaim more of the space
-	occupied by unused blocks, creating a larger buffer against write
-	bursts at the cost of increased write amplification. Regardless
-	of this value, garbage collection will always aim to free a minimum
-	amount of blocks to keep max_open_zones open for data placement purposes.
+ zonegc_low_space (Min: 0 Default: 0 Max: 100)
+	Define a percentage for how much of the unused space that GC should keep
+	available for writing. A high value will reclaim more of the space
+	occupied by unused blocks, creating a larger buffer against write
+	bursts at the cost of increased write amplification. Regardless
+	of this value, garbage collection will always aim to free a minimum
+	amount of blocks to keep max_open_zones open for data placement purposes.
+6-6
Documentation/arch/openrisc/openrisc_port.rst
···
 For information about OpenRISC processors and ongoing development:
 
-	======= =============================
+	======= ==============================
 	website https://openrisc.io
-	email   openrisc@lists.librecores.org
-	======= =============================
+	email   linux-openrisc@vger.kernel.org
+	======= ==============================
 
 ---------------------------------------------------------------------
 
···
 Instructions for building the different toolchains can be found on openrisc.io
 or Stafford's toolchain build and release scripts.
 
-	========== =================================================
-	binaries   https://github.com/openrisc/or1k-gcc/releases
+	========== ==========================================================
+	binaries   https://github.com/stffrdhrn/or1k-toolchain-build/releases
 	toolchains https://openrisc.io/software
 	building   https://github.com/stffrdhrn/or1k-toolchain-build
-	========== =================================================
+	========== ==========================================================
 
 2) Building
 
+8
Documentation/bpf/bpf_devel_QA.rst
···
 into the Linux kernel, please implement support into LLVM's BPF back
 end. See LLVM_ section below for further information.
 
+Q: What "BPF_INTERNAL" symbol namespace is for?
+-----------------------------------------------
+A: Symbols exported as BPF_INTERNAL can only be used by BPF infrastructure
+like preload kernel modules with light skeleton. Most symbols outside
+of BPF_INTERNAL are not expected to be used by code outside of BPF either.
+Symbols may lack the designation because they predate the namespaces,
+or due to an oversight.
+
 Stable submission
 =================
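The new Q&A entry only describes the namespace in prose. As a rough, hedged illustration (not part of the patch), exporting into and importing from a symbol namespace follows the usual kernel pattern; the helper name below is hypothetical, and recent kernels spell the namespace as a string literal:

    /* Hypothetical provider somewhere in BPF core code. */
    #include <linux/export.h>

    int bpf_internal_helper(void)              /* hypothetical symbol */
    {
            return 0;
    }
    EXPORT_SYMBOL_NS_GPL(bpf_internal_helper, "BPF_INTERNAL");

    /* Hypothetical consumer, e.g. a light-skeleton preload module. */
    MODULE_IMPORT_NS("BPF_INTERNAL");

A module that references such a symbol without importing the namespace is flagged by modpost and rejected (or warned about) at load time, which is what keeps these exports restricted to BPF infrastructure.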
···
     $ref: /schemas/types.yaml#/definitions/uint32-array
     items:
       - minimum: 0
-        maximum: 7
+        maximum: 31
         description:
           Offset in bit within the address range specified by reg.
       - minimum: 1
···
     die_if_kernel("Kernel ale access", regs);
     force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)regs->csr_badvaddr);
 #else
+    bool pie = regs_irqs_disabled(regs);
     unsigned int *pc;
 
-    if (regs->csr_prmd & CSR_PRMD_PIE)
+    if (!pie)
         local_irq_enable();
 
     perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, regs->csr_badvaddr);
···
     die_if_kernel("Kernel ale access", regs);
     force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)regs->csr_badvaddr);
 out:
-    if (regs->csr_prmd & CSR_PRMD_PIE)
+    if (!pie)
         local_irq_disable();
 #endif
     irqentry_exit(regs, state);
···
 asmlinkage void noinstr do_bce(struct pt_regs *regs)
 {
     bool user = user_mode(regs);
+    bool pie = regs_irqs_disabled(regs);
     unsigned long era = exception_era(regs);
     u64 badv = 0, lower = 0, upper = ULONG_MAX;
     union loongarch_instruction insn;
     irqentry_state_t state = irqentry_enter(regs);
 
-    if (regs->csr_prmd & CSR_PRMD_PIE)
+    if (!pie)
         local_irq_enable();
 
     current->thread.trap_nr = read_csr_excode();
···
     force_sig_bnderr((void __user *)badv, (void __user *)lower, (void __user *)upper);
 
 out:
-    if (regs->csr_prmd & CSR_PRMD_PIE)
+    if (!pie)
         local_irq_disable();
 
     irqentry_exit(regs, state);
···
 asmlinkage void noinstr do_bp(struct pt_regs *regs)
 {
     bool user = user_mode(regs);
+    bool pie = regs_irqs_disabled(regs);
     unsigned int opcode, bcode;
     unsigned long era = exception_era(regs);
     irqentry_state_t state = irqentry_enter(regs);
 
-    if (regs->csr_prmd & CSR_PRMD_PIE)
+    if (!pie)
         local_irq_enable();
 
     if (__get_inst(&opcode, (u32 *)era, user))
···
     }
 
 out:
-    if (regs->csr_prmd & CSR_PRMD_PIE)
+    if (!pie)
         local_irq_disable();
 
     irqentry_exit(regs, state);
···
 
 asmlinkage void noinstr do_lbt(struct pt_regs *regs)
 {
+    bool pie = regs_irqs_disabled(regs);
     irqentry_state_t state = irqentry_enter(regs);
 
     /*
···
      * (including the user using 'MOVGR2GCSR' to turn on TM, which
      * will not trigger the BTE), we need to check PRMD first.
      */
-    if (regs->csr_prmd & CSR_PRMD_PIE)
+    if (!pie)
         local_irq_enable();
 
     if (!cpu_has_lbt) {
···
     preempt_enable();
 
 out:
-    if (regs->csr_prmd & CSR_PRMD_PIE)
+    if (!pie)
         local_irq_disable();
 
     irqentry_exit(regs, state);
···
     ret = kvm_io_bus_read(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val);
     srcu_read_unlock(&vcpu->kvm->srcu, idx);
     if (unlikely(ret)) {
-        kvm_err("%s: : read date from addr %llx failed\n", __func__, addr);
+        kvm_err("%s: : read data from addr %llx failed\n", __func__, addr);
         return ret;
     }
     /* Construct the mask by scanning the bit 27-30 */
···
     ret = kvm_io_bus_write(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val);
     srcu_read_unlock(&vcpu->kvm->srcu, idx);
     if (unlikely(ret))
-        kvm_err("%s: : write date to addr %llx failed\n", __func__, addr);
+        kvm_err("%s: : write data to addr %llx failed\n", __func__, addr);
 
     return ret;
 }
+2-2
arch/loongarch/kvm/main.c
···
     /*
      * Enable virtualization features granting guest direct control of
      * certain features:
-     * GCI=2:       Trap on init or unimplement cache instruction.
+     * GCI=2:       Trap on init or unimplemented cache instruction.
      * TORU=0:      Trap on Root Unimplement.
      * CACTRL=1:    Root control cache.
-     * TOP=0:       Trap on Previlege.
+     * TOP=0:       Trap on Privilege.
      * TOE=0:       Trap on Exception.
      * TIT=0:       Trap on Timer.
      */
+8
arch/loongarch/kvm/vcpu.c
···
     vcpu->arch.aux_inuse &= ~KVM_LARCH_SWCSR_LATEST;
 
     if (kvm_request_pending(vcpu) || xfer_to_guest_mode_work_pending()) {
+        kvm_lose_pmu(vcpu);
         /* make sure the vcpu mode has been written */
         smp_store_mb(vcpu->mode, OUTSIDE_GUEST_MODE);
         local_irq_enable();
···
         vcpu->arch.st.guest_addr = 0;
         memset(&vcpu->arch.irq_pending, 0, sizeof(vcpu->arch.irq_pending));
         memset(&vcpu->arch.irq_clear, 0, sizeof(vcpu->arch.irq_clear));
+
+        /*
+         * When vCPU reset, clear the ESTAT and GINTC registers
+         * Other CSR registers are cleared with function _kvm_setcsr().
+         */
+        kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_GINTC, 0);
+        kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_ESTAT, 0);
         break;
     default:
         ret = -EINVAL;
···
  */
 extern void local_dcache_page_flush(struct page *page);
 extern void local_icache_page_inv(struct page *page);
+extern void local_dcache_range_flush(unsigned long start, unsigned long end);
+extern void local_dcache_range_inv(unsigned long start, unsigned long end);
+extern void local_icache_range_inv(unsigned long start, unsigned long end);
 
 /*
  * Data cache flushing always happen on the local cpu. Instruction cache
···
 #define icache_page_inv(page) smp_icache_page_inv(page)
 extern void smp_icache_page_inv(struct page *page);
 #endif /* CONFIG_SMP */
+
+/*
+ * Even if the actual block size is larger than L1_CACHE_BYTES, paddr
+ * can be incremented by L1_CACHE_BYTES. When paddr is written to the
+ * invalidate register, the entire cache line encompassing this address
+ * is invalidated. Each subsequent reference to the same cache line will
+ * not affect the invalidation process.
+ */
+#define local_dcache_block_flush(addr) \
+    local_dcache_range_flush(addr, addr + L1_CACHE_BYTES)
+#define local_dcache_block_inv(addr) \
+    local_dcache_range_inv(addr, addr + L1_CACHE_BYTES)
+#define local_icache_block_inv(addr) \
+    local_icache_range_inv(addr, addr + L1_CACHE_BYTES)
 
 /*
  * Synchronizes caches. Whenever a cpu writes executable code to memory, this
···
 #include <asm/spr_defs.h>
 #include <asm/cache.h>
 #include <asm/cacheflush.h>
+#include <asm/cpuinfo.h>
 #include <asm/tlbflush.h>
 
-static __always_inline void cache_loop(struct page *page, const unsigned int reg)
+/*
+ * Check if the cache component exists.
+ */
+bool cpu_cache_is_present(const unsigned int cache_type)
+{
+    unsigned long upr = mfspr(SPR_UPR);
+    unsigned long mask = SPR_UPR_UP | cache_type;
+
+    return !((upr & mask) ^ mask);
+}
+
+static __always_inline void cache_loop(unsigned long paddr, unsigned long end,
+                                       const unsigned short reg, const unsigned int cache_type)
+{
+    if (!cpu_cache_is_present(cache_type))
+        return;
+
+    while (paddr < end) {
+        mtspr(reg, paddr);
+        paddr += L1_CACHE_BYTES;
+    }
+}
+
+static __always_inline void cache_loop_page(struct page *page, const unsigned short reg,
+                                            const unsigned int cache_type)
 {
     unsigned long paddr = page_to_pfn(page) << PAGE_SHIFT;
-    unsigned long line = paddr & ~(L1_CACHE_BYTES - 1);
+    unsigned long end = paddr + PAGE_SIZE;
 
-    while (line < paddr + PAGE_SIZE) {
-        mtspr(reg, line);
-        line += L1_CACHE_BYTES;
-    }
+    paddr &= ~(L1_CACHE_BYTES - 1);
+
+    cache_loop(paddr, end, reg, cache_type);
 }
 
 void local_dcache_page_flush(struct page *page)
 {
-    cache_loop(page, SPR_DCBFR);
+    cache_loop_page(page, SPR_DCBFR, SPR_UPR_DCP);
 }
 EXPORT_SYMBOL(local_dcache_page_flush);
 
 void local_icache_page_inv(struct page *page)
 {
-    cache_loop(page, SPR_ICBIR);
+    cache_loop_page(page, SPR_ICBIR, SPR_UPR_ICP);
 }
 EXPORT_SYMBOL(local_icache_page_inv);
+
+void local_dcache_range_flush(unsigned long start, unsigned long end)
+{
+    cache_loop(start, end, SPR_DCBFR, SPR_UPR_DCP);
+}
+
+void local_dcache_range_inv(unsigned long start, unsigned long end)
+{
+    cache_loop(start, end, SPR_DCBIR, SPR_UPR_DCP);
+}
+
+void local_icache_range_inv(unsigned long start, unsigned long end)
+{
+    cache_loop(start, end, SPR_ICBIR, SPR_UPR_ICP);
+}
 
 void update_cache(struct vm_area_struct *vma, unsigned long address,
     pte_t *pte)
···
         sync_icache_dcache(folio_page(folio, nr));
     }
 }
-
+3-2
arch/openrisc/mm/init.c
···
 #include <asm/fixmap.h>
 #include <asm/tlbflush.h>
 #include <asm/sections.h>
+#include <asm/cacheflush.h>
 
 int mem_init_done;
 
···
     barrier();
 
     /* Invalidate instruction caches after code modification */
-    mtspr(SPR_ICBIR, 0x900);
-    mtspr(SPR_ICBIR, 0xa00);
+    local_icache_block_inv(0x900);
+    local_icache_block_inv(0xa00);
 
     /* New TLB miss handlers and kernel page tables are in now place.
      * Make sure that page flags get updated for all pages in TLB by
+10-5
arch/riscv/include/asm/cacheflush.h
···
     flush_dcache_folio(page_folio(page));
 }
 
-/*
- * RISC-V doesn't have an instruction to flush parts of the instruction cache,
- * so instead we just flush the whole thing.
- */
-#define flush_icache_range(start, end) flush_icache_all()
 #define flush_icache_user_page(vma, pg, addr, len) \
 do { \
     if (vma->vm_flags & VM_EXEC) \
···
 void flush_icache_mm(struct mm_struct *mm, bool local);
 
 #endif /* CONFIG_SMP */
+
+/*
+ * RISC-V doesn't have an instruction to flush parts of the instruction cache,
+ * so instead we just flush the whole thing.
+ */
+#define flush_icache_range flush_icache_range
+static inline void flush_icache_range(unsigned long start, unsigned long end)
+{
+    flush_icache_all();
+}
 
 extern unsigned int riscv_cbom_block_size;
 extern unsigned int riscv_cboz_block_size;
···
     /* Initialize the slot */
     void *kaddr = kmap_atomic(page);
     void *dst = kaddr + (vaddr & ~PAGE_MASK);
+    unsigned long start = (unsigned long)dst;
 
     memcpy(dst, src, len);
 
···
         *(uprobe_opcode_t *)dst = __BUG_INSN_32;
     }
 
+    flush_icache_range(start, start + len);
     kunmap_atomic(kaddr);
-
-    /*
-     * We probably need flush_icache_user_page() but it needs vma.
-     * This should work on most of architectures by default. If
-     * architecture needs to do something different it can define
-     * its own version of the function.
-     */
-    flush_dcache_page(page);
 }
···
 #include <linux/mm.h>		/* for struct page */
 #include <linux/pagemap.h>
 
+#include <asm/cpufeature.h>
+
 #define __HAVE_ARCH_PTE_ALLOC_ONE
 #define __HAVE_ARCH_PGD_FREE
 #include <asm-generic/pgalloc.h>
···
 static inline void paravirt_release_p4d(unsigned long pfn) {}
 #endif
 
-#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 /*
- * Instead of one PGD, we acquire two PGDs. Being order-1, it is
- * both 8k in size and 8k-aligned. That lets us just flip bit 12
- * in a pointer to swap between the two 4k halves.
+ * In case of Page Table Isolation active, we acquire two PGDs instead of one.
+ * Being order-1, it is both 8k in size and 8k-aligned. That lets us just
+ * flip bit 12 in a pointer to swap between the two 4k halves.
  */
-#define PGD_ALLOCATION_ORDER 1
-#else
-#define PGD_ALLOCATION_ORDER 0
-#endif
+static inline unsigned int pgd_allocation_order(void)
+{
+    if (cpu_feature_enabled(X86_FEATURE_PTI))
+        return 1;
+    return 0;
+}
 
 /*
  * Allocate and free page tables.
+8
arch/x86/kernel/e820.c
···
         memblock_add(entry->addr, entry->size);
     }
 
+    /*
+     * 32-bit systems are limited to 4BG of memory even with HIGHMEM and
+     * to even less without it.
+     * Discard memory after max_pfn - the actual limit detected at runtime.
+     */
+    if (IS_ENABLED(CONFIG_X86_32))
+        memblock_remove(PFN_PHYS(max_pfn), -1);
+
     /* Throw away partial pages: */
     memblock_trim_memory(PAGE_SIZE);
 
···
     struct amd_svm_iommu_ir *ir;
     u64 entry;
 
+    if (WARN_ON_ONCE(!pi->ir_data))
+        return -EINVAL;
+
     /**
      * In some cases, the existing irte is updated and re-set,
      * so we need to check here if it's already been * added
      * to the ir_list.
      */
-    if (pi->ir_data && (pi->prev_ga_tag != 0)) {
+    if (pi->prev_ga_tag) {
         struct kvm *kvm = svm->vcpu.kvm;
         u32 vcpu_id = AVIC_GATAG_TO_VCPUID(pi->prev_ga_tag);
         struct kvm_vcpu *prev_vcpu = kvm_get_vcpu_by_id(kvm, vcpu_id);
···
      * Allocating new amd_iommu_pi_data, which will get
      * add to the per-vcpu ir_list.
      */
-    ir = kzalloc(sizeof(struct amd_svm_iommu_ir), GFP_KERNEL_ACCOUNT);
+    ir = kzalloc(sizeof(struct amd_svm_iommu_ir), GFP_ATOMIC | __GFP_ACCOUNT);
     if (!ir) {
         ret = -ENOMEM;
         goto out;
···
 {
     struct kvm_kernel_irq_routing_entry *e;
     struct kvm_irq_routing_table *irq_rt;
+    bool enable_remapped_mode = true;
     int idx, ret = 0;
 
-    if (!kvm_arch_has_assigned_device(kvm) ||
-        !irq_remapping_cap(IRQ_POSTING_CAP))
+    if (!kvm_arch_has_assigned_device(kvm) || !kvm_arch_has_irq_bypass())
         return 0;
 
     pr_debug("SVM: %s: host_irq=%#x, guest_irq=%#x, set=%#x\n",
···
             kvm_vcpu_apicv_active(&svm->vcpu)) {
             struct amd_iommu_pi_data pi;
 
+            enable_remapped_mode = false;
+
             /* Try to enable guest_mode in IRTE */
             pi.base = __sme_set(page_to_phys(svm->avic_backing_page) &
                                 AVIC_HPA_MASK);
···
              */
             if (!ret && pi.is_guest_mode)
                 svm_ir_list_add(svm, &pi);
-        } else {
-            /* Use legacy mode in IRTE */
-            struct amd_iommu_pi_data pi;
-
-            /**
-             * Here, pi is used to:
-             * - Tell IOMMU to use legacy mode for this interrupt.
-             * - Retrieve ga_tag of prior interrupt remapping data.
-             */
-            pi.prev_ga_tag = 0;
-            pi.is_guest_mode = false;
-            ret = irq_set_vcpu_affinity(host_irq, &pi);
-
-            /**
-             * Check if the posted interrupt was previously
-             * setup with the guest_mode by checking if the ga_tag
-             * was cached. If so, we need to clean up the per-vcpu
-             * ir_list.
-             */
-            if (!ret && pi.prev_ga_tag) {
-                int id = AVIC_GATAG_TO_VCPUID(pi.prev_ga_tag);
-                struct kvm_vcpu *vcpu;
-
-                vcpu = kvm_get_vcpu_by_id(kvm, id);
-                if (vcpu)
-                    svm_ir_list_del(to_svm(vcpu), &pi);
-            }
         }
 
         if (!ret && svm) {
···
     }
 
     ret = 0;
+    if (enable_remapped_mode) {
+        /* Use legacy mode in IRTE */
+        struct amd_iommu_pi_data pi;
+
+        /**
+         * Here, pi is used to:
+         * - Tell IOMMU to use legacy mode for this interrupt.
+         * - Retrieve ga_tag of prior interrupt remapping data.
+         */
+        pi.prev_ga_tag = 0;
+        pi.is_guest_mode = false;
+        ret = irq_set_vcpu_affinity(host_irq, &pi);
+
+        /**
+         * Check if the posted interrupt was previously
+         * setup with the guest_mode by checking if the ga_tag
+         * was cached. If so, we need to clean up the per-vcpu
+         * ir_list.
+         */
+        if (!ret && pi.prev_ga_tag) {
+            int id = AVIC_GATAG_TO_VCPUID(pi.prev_ga_tag);
+            struct kvm_vcpu *vcpu;
+
+            vcpu = kvm_get_vcpu_by_id(kvm, id);
+            if (vcpu)
+                svm_ir_list_del(to_svm(vcpu), &pi);
+        }
+    }
 out:
     srcu_read_unlock(&kvm->irq_srcu, idx);
     return ret;
···
      * We allocate one page for pgd.
      */
     if (!SHARED_KERNEL_PMD)
-        return __pgd_alloc(mm, PGD_ALLOCATION_ORDER);
+        return __pgd_alloc(mm, pgd_allocation_order());
 
     /*
      * Now PAE kernel is not running as a Xen domain. We can allocate
···
 
 static inline pgd_t *_pgd_alloc(struct mm_struct *mm)
 {
-    return __pgd_alloc(mm, PGD_ALLOCATION_ORDER);
+    return __pgd_alloc(mm, pgd_allocation_order());
 }
 
 static inline void _pgd_free(struct mm_struct *mm, pgd_t *pgd)
···
                get_order(bsize));
 }
 
+/**
+ * bdev_validate_blocksize - check that this block size is acceptable
+ * @bdev: blockdevice to check
+ * @block_size: block size to check
+ *
+ * For block device users that do not use buffer heads or the block device
+ * page cache, make sure that this block size can be used with the device.
+ *
+ * Return: On success zero is returned, negative error code on failure.
+ */
+int bdev_validate_blocksize(struct block_device *bdev, int block_size)
+{
+    if (blk_validate_block_size(block_size))
+        return -EINVAL;
+
+    /* Size cannot be smaller than the size supported by the device */
+    if (block_size < bdev_logical_block_size(bdev))
+        return -EINVAL;
+
+    return 0;
+}
+EXPORT_SYMBOL_GPL(bdev_validate_blocksize);
+
 int set_blocksize(struct file *file, int size)
 {
     struct inode *inode = file->f_mapping->host;
     struct block_device *bdev = I_BDEV(inode);
+    int ret;
 
-    if (blk_validate_block_size(size))
-        return -EINVAL;
-
-    /* Size cannot be smaller than the size supported by the device */
-    if (size < bdev_logical_block_size(bdev))
-        return -EINVAL;
+    ret = bdev_validate_blocksize(bdev, size);
+    if (ret)
+        return ret;
 
     if (!file->private_data)
         return -EINVAL;
 
     /* Don't change the size if it is same as current */
     if (inode->i_blkbits != blksize_bits(size)) {
+        /*
+         * Flush and truncate the pagecache before we reconfigure the
+         * mapping geometry because folio sizes are variable now. If a
+         * reader has already allocated a folio whose size is smaller
+         * than the new min_order but invokes readahead after the new
+         * min_order becomes visible, readahead will think there are
+         * "zero" blocks per folio and crash. Take the inode and
+         * invalidation locks to avoid racing with
+         * read/write/fallocate.
+         */
+        inode_lock(inode);
+        filemap_invalidate_lock(inode->i_mapping);
+
         sync_blockdev(bdev);
+        kill_bdev(bdev);
+
         inode->i_blkbits = blksize_bits(size);
         mapping_set_folio_min_order(inode->i_mapping, get_order(size));
         kill_bdev(bdev);
+        filemap_invalidate_unlock(inode->i_mapping);
+        inode_unlock(inode);
     }
     return 0;
 }
···
     blkdev_put_whole(whole);
 }
 
-struct block_device *blkdev_get_no_open(dev_t dev)
+struct block_device *blkdev_get_no_open(dev_t dev, bool autoload)
 {
     struct block_device *bdev;
     struct inode *inode;
 
     inode = ilookup(blockdev_superblock, dev);
-    if (!inode && IS_ENABLED(CONFIG_BLOCK_LEGACY_AUTOLOAD)) {
+    if (!inode && autoload && IS_ENABLED(CONFIG_BLOCK_LEGACY_AUTOLOAD)) {
         blk_request_module(dev);
         inode = ilookup(blockdev_superblock, dev);
         if (inode)
···
     if (ret)
         return ERR_PTR(ret);
 
-    bdev = blkdev_get_no_open(dev);
+    bdev = blkdev_get_no_open(dev, true);
     if (!bdev)
         return ERR_PTR(-ENXIO);
 
···
  */
 void bdev_statx(const struct path *path, struct kstat *stat, u32 request_mask)
 {
-    struct inode *backing_inode;
     struct block_device *bdev;
 
-    backing_inode = d_backing_inode(path->dentry);
-
     /*
-     * Note that backing_inode is the inode of a block device node file,
-     * not the block device's internal inode. Therefore it is *not* valid
-     * to use I_BDEV() here; the block device has to be looked up by i_rdev
+     * Note that d_backing_inode() returns the block device node inode, not
+     * the block device's internal inode. Therefore it is *not* valid to
+     * use I_BDEV() here; the block device has to be looked up by i_rdev
      * instead.
      */
-    bdev = blkdev_get_no_open(backing_inode->i_rdev);
+    bdev = blkdev_get_no_open(d_backing_inode(path->dentry)->i_rdev, false);
     if (!bdev)
         return;
 
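For context, the new bdev_validate_blocksize() helper is aimed at block device users that bypass buffer heads and the block device page cache. A hedged sketch of a caller (the super_block variable and the 4096-byte size are illustrative, not taken from the patch):

    /* Hypothetical mount-time sanity check in a filesystem. */
    int error = bdev_validate_blocksize(sb->s_bdev, 4096);
    if (error)
        return error;   /* 4096 bytes is not usable on this device */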
···
     /*
      * For read-ahead of large files to be effective, we need to read ahead
      * at least twice the optimal I/O size.
+     *
+     * There is no hardware limitation for the read-ahead size and the user
+     * might have increased the read-ahead size through sysfs, so don't ever
+     * decrease it.
      */
-    bdi->ra_pages = max(lim->io_opt * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);
+    bdi->ra_pages = max3(bdi->ra_pages,
+            lim->io_opt * 2 / PAGE_SIZE,
+            VM_READAHEAD_PAGES);
     bdi->io_pages = lim->max_sectors >> PAGE_SECTORS_SHIFT;
 }
 
+4-1
block/blk-zoned.c
···
         op = REQ_OP_ZONE_RESET;
 
         /* Invalidate the page cache, including dirty pages. */
+        inode_lock(bdev->bd_mapping->host);
         filemap_invalidate_lock(bdev->bd_mapping);
         ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
         if (ret)
···
     ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors);
 
 fail:
-    if (cmd == BLKRESETZONE)
+    if (cmd == BLKRESETZONE) {
         filemap_invalidate_unlock(bdev->bd_mapping);
+        inode_unlock(bdev->bd_mapping->host);
+    }
 
     return ret;
 }
···
      */
     put_unaligned_be16(ATA_FEATURE_SUB_MPAGE_LEN - 4, &buf[2]);
 
-    if (dev->flags & ATA_DFLAG_CDL)
-        buf[4] = 0x02; /* Support T2A and T2B pages */
+    if (dev->flags & ATA_DFLAG_CDL_ENABLED)
+        buf[4] = 0x02; /* T2A and T2B pages enabled */
     else
         buf[4] = 0;
 
···
 }
 
 /*
- * Translate MODE SELECT control mode page, sub-pages f2h (ATA feature mode
+ * Translate MODE SELECT control mode page, sub-page f2h (ATA feature mode
  * page) into a SET FEATURES command.
  */
-static unsigned int ata_mselect_control_ata_feature(struct ata_queued_cmd *qc,
-                                                    const u8 *buf, int len,
-                                                    u16 *fp)
+static int ata_mselect_control_ata_feature(struct ata_queued_cmd *qc,
+                                           const u8 *buf, int len, u16 *fp)
 {
     struct ata_device *dev = qc->dev;
     struct ata_taskfile *tf = &qc->tf;
···
     /* Check cdl_ctrl */
     switch (buf[0] & 0x03) {
     case 0:
-        /* Disable CDL */
+        /* Disable CDL if it is enabled */
+        if (!(dev->flags & ATA_DFLAG_CDL_ENABLED))
+            return 0;
+        ata_dev_dbg(dev, "Disabling CDL\n");
         cdl_action = 0;
         dev->flags &= ~ATA_DFLAG_CDL_ENABLED;
         break;
     case 0x02:
-        /* Enable CDL T2A/T2B: NCQ priority must be disabled */
+        /*
+         * Enable CDL if not already enabled. Since this is mutually
+         * exclusive with NCQ priority, allow this only if NCQ priority
+         * is disabled.
+         */
+        if (dev->flags & ATA_DFLAG_CDL_ENABLED)
+            return 0;
         if (dev->flags & ATA_DFLAG_NCQ_PRIO_ENABLED) {
             ata_dev_err(dev,
                 "NCQ priority must be disabled to enable CDL\n");
             return -EINVAL;
         }
+        ata_dev_dbg(dev, "Enabling CDL\n");
         cdl_action = 1;
         dev->flags |= ATA_DFLAG_CDL_ENABLED;
         break;
+10
drivers/base/auxiliary.c
···
  *       },
  *       .ops = my_custom_ops,
  *     };
+ *
+ * Please note that such custom ops approach is valid, but it is hard to implement
+ * it right without global locks per-device to protect from auxiliary_drv removal
+ * during call to that ops. In addition, this implementation lacks proper module
+ * dependency, which causes to load/unload races between auxiliary parent and devices
+ * modules.
+ *
+ * The most easiest way to provide these ops reliably without needing to
+ * have a lock is to EXPORT_SYMBOL*() them and rely on already existing
+ * modules infrastructure for validity and correct dependencies chains.
  */
 
 static const struct auxiliary_device_id *auxiliary_match_id(const struct auxiliary_device_id *id,
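The added comment argues for exporting such callbacks directly instead of passing an ops struct through the device. A minimal sketch of that pattern, with hypothetical names (my_parent_do_work, my_aux_probe) that are not part of the patch:

    /* In the parent driver module: export the callback itself. */
    #include <linux/export.h>
    #include <linux/auxiliary_bus.h>

    int my_parent_do_work(struct auxiliary_device *adev)   /* hypothetical */
    {
            /* ... operate on the parent device's hardware ... */
            return 0;
    }
    EXPORT_SYMBOL_GPL(my_parent_do_work);

    /* In the auxiliary driver module: call it directly. The symbol
     * reference makes the module loader enforce the parent -> child
     * dependency, so no per-device lock is needed to keep the callback
     * alive while it runs. */
    static int my_aux_probe(struct auxiliary_device *adev,
                            const struct auxiliary_device_id *id)
    {
            return my_parent_do_work(adev);
    }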
+17
drivers/base/base.h
···
     kset_put(&sp->subsys);
 }
 
+struct subsys_private *bus_to_subsys(const struct bus_type *bus);
 struct subsys_private *class_to_subsys(const struct class *class);
 
 struct driver_private {
···
 int driver_add_groups(const struct device_driver *drv, const struct attribute_group **groups);
 void driver_remove_groups(const struct device_driver *drv, const struct attribute_group **groups);
 void device_driver_detach(struct device *dev);
+
+static inline void device_set_driver(struct device *dev, const struct device_driver *drv)
+{
+    /*
+     * Majority (all?) read accesses to dev->driver happens either
+     * while holding device lock or in bus/driver code that is only
+     * invoked when the device is bound to a driver and there is no
+     * concern of the pointer being changed while it is being read.
+     * However when reading device's uevent file we read driver pointer
+     * without taking device lock (so we do not block there for
+     * arbitrary amount of time). We use WRITE_ONCE() here to prevent
+     * tearing so that READ_ONCE() can safely be used in uevent code.
+     */
+    // FIXME - this cast should not be needed "soon"
+    WRITE_ONCE(dev->driver, (struct device_driver *)drv);
+}
 
 int devres_release_all(struct device *dev);
 void device_block_probing(void);
+1-1
drivers/base/bus.c
···
  * NULL. A call to subsys_put() must be done when finished with the pointer in
  * order for it to be properly freed.
  */
-static struct subsys_private *bus_to_subsys(const struct bus_type *bus)
+struct subsys_private *bus_to_subsys(const struct bus_type *bus)
 {
     struct subsys_private *sp = NULL;
     struct kobject *kobj;
+32-6
drivers/base/core.c
···
     return NULL;
 }
 
+/*
+ * Try filling "DRIVER=<name>" uevent variable for a device. Because this
+ * function may race with binding and unbinding the device from a driver,
+ * we need to be careful. Binding is generally safe, at worst we miss the
+ * fact that the device is already bound to a driver (but the driver
+ * information that is delivered through uevents is best-effort, it may
+ * become obsolete as soon as it is generated anyways). Unbinding is more
+ * risky as driver pointer is transitioning to NULL, so READ_ONCE() should
+ * be used to make sure we are dealing with the same pointer, and to
+ * ensure that driver structure is not going to disappear from under us
+ * we take bus' drivers klist lock. The assumption that only registered
+ * driver can be bound to a device, and to unregister a driver bus code
+ * will take the same lock.
+ */
+static void dev_driver_uevent(const struct device *dev, struct kobj_uevent_env *env)
+{
+    struct subsys_private *sp = bus_to_subsys(dev->bus);
+
+    if (sp) {
+        scoped_guard(spinlock, &sp->klist_drivers.k_lock) {
+            struct device_driver *drv = READ_ONCE(dev->driver);
+            if (drv)
+                add_uevent_var(env, "DRIVER=%s", drv->name);
+        }
+
+        subsys_put(sp);
+    }
+}
+
 static int dev_uevent(const struct kobject *kobj, struct kobj_uevent_env *env)
 {
     const struct device *dev = kobj_to_dev(kobj);
···
     if (dev->type && dev->type->name)
         add_uevent_var(env, "DEVTYPE=%s", dev->type->name);
 
-    if (dev->driver)
-        add_uevent_var(env, "DRIVER=%s", dev->driver->name);
+    /* Add "DRIVER=%s" variable if the device is bound to a driver */
+    dev_driver_uevent(dev, env);
 
     /* Add common DT information about the device */
     of_device_uevent(dev, env);
···
     if (!env)
         return -ENOMEM;
 
-    /* Synchronize with really_probe() */
-    device_lock(dev);
     /* let the kset specific function add its keys */
     retval = kset->uevent_ops->uevent(&dev->kobj, env);
-    device_unlock(dev);
     if (retval)
         goto out;
 
···
     device_pm_remove(dev);
     dpm_sysfs_remove(dev);
  DPMError:
-    dev->driver = NULL;
+    device_set_driver(dev, NULL);
     bus_remove_device(dev);
  BusError:
     device_remove_attrs(dev);
+3-4
drivers/base/dd.c
···
     arch_teardown_dma_ops(dev);
     kfree(dev->dma_range_map);
     dev->dma_range_map = NULL;
-    dev->driver = NULL;
+    device_set_driver(dev, NULL);
     dev_set_drvdata(dev, NULL);
     if (dev->pm_domain && dev->pm_domain->dismiss)
         dev->pm_domain->dismiss(dev);
···
     }
 
 re_probe:
-    // FIXME - this cast should not be needed "soon"
-    dev->driver = (struct device_driver *)drv;
+    device_set_driver(dev, drv);
 
     /* If using pinctrl, bind pins now before probing */
     ret = pinctrl_bind_pins(dev);
···
         if (ret == 0)
             ret = 1;
         else {
-            dev->driver = NULL;
+            device_set_driver(dev, NULL);
             ret = 0;
         }
     } else {
+9-13
drivers/base/devtmpfs.c
···
     return err;
 }
 
-static int dev_mynode(struct device *dev, struct inode *inode, struct kstat *stat)
+static int dev_mynode(struct device *dev, struct inode *inode)
 {
     /* did we create it */
     if (inode->i_private != &thread)
···
 
     /* does the dev_t match */
     if (is_blockdev(dev)) {
-        if (!S_ISBLK(stat->mode))
+        if (!S_ISBLK(inode->i_mode))
             return 0;
     } else {
-        if (!S_ISCHR(stat->mode))
+        if (!S_ISCHR(inode->i_mode))
             return 0;
     }
-    if (stat->rdev != dev->devt)
+    if (inode->i_rdev != dev->devt)
         return 0;
 
     /* ours */
···
 {
     struct path parent;
     struct dentry *dentry;
-    struct kstat stat;
-    struct path p;
+    struct inode *inode;
     int deleted = 0;
-    int err;
+    int err = 0;
 
     dentry = kern_path_locked(nodename, &parent);
     if (IS_ERR(dentry))
         return PTR_ERR(dentry);
 
-    p.mnt = parent.mnt;
-    p.dentry = dentry;
-    err = vfs_getattr(&p, &stat, STATX_TYPE | STATX_MODE,
-                      AT_STATX_SYNC_AS_STAT);
-    if (!err && dev_mynode(dev, d_inode(dentry), &stat)) {
+    inode = d_inode(dentry);
+    if (dev_mynode(dev, inode)) {
         struct iattr newattrs;
         /*
          * before unlinking this node, reset permissions
···
          */
         newattrs.ia_uid = GLOBAL_ROOT_UID;
         newattrs.ia_gid = GLOBAL_ROOT_GID;
-        newattrs.ia_mode = stat.mode & ~0777;
+        newattrs.ia_mode = inode->i_mode & ~0777;
         newattrs.ia_valid =
             ATTR_UID|ATTR_GID|ATTR_MODE;
         inode_lock(d_inode(dentry));
+17-24
drivers/base/memory.c
···
     return 0;
 }
 
-static int __init add_boot_memory_block(unsigned long base_section_nr)
-{
-    unsigned long nr;
-
-    for_each_present_section_nr(base_section_nr, nr) {
-        if (nr >= (base_section_nr + sections_per_block))
-            break;
-
-        return add_memory_block(memory_block_id(base_section_nr),
-                                MEM_ONLINE, NULL, NULL);
-    }
-
-    return 0;
-}
-
 static int add_hotplug_memory_block(unsigned long block_id,
                                     struct vmem_altmap *altmap,
                                     struct memory_group *group)
···
 void __init memory_dev_init(void)
 {
     int ret;
-    unsigned long block_sz, nr;
+    unsigned long block_sz, block_id, nr;
 
     /* Validate the configured memory block size */
     block_sz = memory_block_size_bytes();
···
         panic("%s() failed to register subsystem: %d\n", __func__, ret);
 
     /*
-     * Create entries for memory sections that were found
-     * during boot and have been initialized
+     * Create entries for memory sections that were found during boot
+     * and have been initialized. Use @block_id to track the last
+     * handled block and initialize it to an invalid value (ULONG_MAX)
+     * to bypass the block ID matching check for the first present
+     * block so that it can be covered.
      */
-    for (nr = 0; nr <= __highest_present_section_nr;
-         nr += sections_per_block) {
-        ret = add_boot_memory_block(nr);
-        if (ret)
-            panic("%s() failed to add memory block: %d\n", __func__,
-                  ret);
+    block_id = ULONG_MAX;
+    for_each_present_section_nr(0, nr) {
+        if (block_id != ULONG_MAX && memory_block_id(nr) == block_id)
+            continue;
+
+        block_id = memory_block_id(nr);
+        ret = add_memory_block(block_id, MEM_ONLINE, NULL, NULL);
+        if (ret) {
+            panic("%s() failed to add memory block: %d\n",
+                  __func__, ret);
+        }
     }
 }
 
···
 config ARM_BRCMSTB_AVS_CPUFREQ
 	tristate "Broadcom STB AVS CPUfreq driver"
 	depends on (ARCH_BRCMSTB && !ARM_SCMI_CPUFREQ) || COMPILE_TEST
-	default y
+	default y if ARCH_BRCMSTB && !ARM_SCMI_CPUFREQ
 	help
 	  Some Broadcom STB SoCs use a co-processor running proprietary firmware
 	  ("AVS") to handle voltage and frequency scaling. This driver provides
···
 	tristate "Calxeda Highbank-based"
 	depends on ARCH_HIGHBANK || COMPILE_TEST
 	depends on CPUFREQ_DT && REGULATOR && PL320_MBOX
-	default m
+	default m if ARCH_HIGHBANK
 	help
 	  This adds the CPUFreq driver for Calxeda Highbank SoC
 	  based boards.
···
 config ARM_MEDIATEK_CPUFREQ_HW
 	tristate "MediaTek CPUFreq HW driver"
 	depends on ARCH_MEDIATEK || COMPILE_TEST
-	default m
+	default m if ARCH_MEDIATEK
 	help
 	  Support for the CPUFreq HW driver.
 	  Some MediaTek chipsets have a HW engine to offload the steps
···
 config ARM_S3C64XX_CPUFREQ
 	bool "Samsung S3C64XX"
 	depends on CPU_S3C6410 || COMPILE_TEST
-	default y
+	default CPU_S3C6410
 	help
 	  This adds the CPUFreq driver for Samsung S3C6410 SoC.
 
···
 config ARM_S5PV210_CPUFREQ
 	bool "Samsung S5PV210 and S5PC110"
 	depends on CPU_S5PV210 || COMPILE_TEST
-	default y
+	default CPU_S5PV210
 	help
 	  This adds the CPUFreq driver for Samsung S5PV210 and
 	  S5PC110 SoCs.
···
 config ARM_SPEAR_CPUFREQ
 	bool "SPEAr CPUFreq support"
 	depends on PLAT_SPEAR || COMPILE_TEST
-	default y
+	default PLAT_SPEAR
 	help
 	  This adds the CPUFreq driver support for SPEAr SOCs.
 
···
 	tristate "Tegra20/30 CPUFreq support"
 	depends on ARCH_TEGRA || COMPILE_TEST
 	depends on CPUFREQ_DT
-	default y
+	default ARCH_TEGRA
 	help
 	  This adds the CPUFreq driver support for Tegra20/30 SOCs.
 
···
 	bool "Tegra124 CPUFreq support"
 	depends on ARCH_TEGRA || COMPILE_TEST
 	depends on CPUFREQ_DT
-	default y
+	default ARCH_TEGRA
 	help
 	  This adds the CPUFreq driver support for Tegra124 SOCs.
 
···
 	tristate "Tegra194 CPUFreq support"
 	depends on ARCH_TEGRA_194_SOC || ARCH_TEGRA_234_SOC || (64BIT && COMPILE_TEST)
 	depends on TEGRA_BPMP
-	default y
+	default ARCH_TEGRA_194_SOC || ARCH_TEGRA_234_SOC
 	help
 	  This adds CPU frequency driver support for Tegra194 SOCs.
 
 config ARM_TI_CPUFREQ
 	bool "Texas Instruments CPUFreq support"
 	depends on ARCH_OMAP2PLUS || ARCH_K3 || COMPILE_TEST
-	default y
+	default ARCH_OMAP2PLUS || ARCH_K3
 	help
 	  This driver enables valid OPPs on the running platform based on
 	  values contained within the SoC in use. Enable this in order to
+8-2
drivers/cpufreq/apple-soc-cpufreq.c
···
 
 static unsigned int apple_soc_cpufreq_get_rate(unsigned int cpu)
 {
-    struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
-    struct apple_cpu_priv *priv = policy->driver_data;
+    struct cpufreq_policy *policy;
+    struct apple_cpu_priv *priv;
     struct cpufreq_frequency_table *p;
     unsigned int pstate;
+
+    policy = cpufreq_cpu_get_raw(cpu);
+    if (unlikely(!policy))
+        return 0;
+
+    priv = policy->driver_data;
 
     if (priv->info->cur_pstate_mask) {
         u32 reg = readl_relaxed(priv->reg_base + APPLE_DVFS_STATUS);
···
 
 static unsigned int scmi_cpufreq_get_rate(unsigned int cpu)
 {
-    struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
-    struct scmi_data *priv = policy->driver_data;
+    struct cpufreq_policy *policy;
+    struct scmi_data *priv;
     unsigned long rate;
     int ret;
+
+    policy = cpufreq_cpu_get_raw(cpu);
+    if (unlikely(!policy))
+        return 0;
+
+    priv = policy->driver_data;
 
     ret = perf_ops->freq_get(ph, priv->domain_id, &rate, false);
     if (ret)
+10-3
drivers/cpufreq/scpi-cpufreq.c
···
 
 static unsigned int scpi_cpufreq_get_rate(unsigned int cpu)
 {
-    struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
-    struct scpi_data *priv = policy->driver_data;
-    unsigned long rate = clk_get_rate(priv->clk);
+    struct cpufreq_policy *policy;
+    struct scpi_data *priv;
+    unsigned long rate;
+
+    policy = cpufreq_cpu_get_raw(cpu);
+    if (unlikely(!policy))
+        return 0;
+
+    priv = policy->driver_data;
+    rate = clk_get_rate(priv->clk);
 
     return rate / 1000;
 }
+12-6
drivers/cpufreq/sun50i-cpufreq-nvmem.c
···
     struct nvmem_cell *speedbin_nvmem;
     const struct of_device_id *match;
     struct device *cpu_dev;
-    u32 *speedbin;
+    void *speedbin_ptr;
+    u32 speedbin = 0;
+    size_t len;
     int ret;
 
     cpu_dev = get_cpu_device(0);
···
         return dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem),
                              "Could not get nvmem cell\n");
 
-    speedbin = nvmem_cell_read(speedbin_nvmem, NULL);
+    speedbin_ptr = nvmem_cell_read(speedbin_nvmem, &len);
     nvmem_cell_put(speedbin_nvmem);
-    if (IS_ERR(speedbin))
-        return PTR_ERR(speedbin);
+    if (IS_ERR(speedbin_ptr))
+        return PTR_ERR(speedbin_ptr);
 
-    ret = opp_data->efuse_xlate(*speedbin);
+    if (len <= 4)
+        memcpy(&speedbin, speedbin_ptr, len);
+    speedbin = le32_to_cpu(speedbin);
 
-    kfree(speedbin);
+    ret = opp_data->efuse_xlate(speedbin);
+
+    kfree(speedbin_ptr);
 
     return ret;
 };
+6
drivers/crypto/atmel-sha204a.c
···
     i2c_priv->hwrng.name = dev_name(&client->dev);
     i2c_priv->hwrng.read = atmel_sha204a_rng_read;
 
+    /*
+     * According to review by Bill Cox [1], this HWRNG has very low entropy.
+     * [1] https://www.metzdowd.com/pipermail/cryptography/2014-December/023858.html
+     */
+    i2c_priv->hwrng.quality = 1;
+
     ret = devm_hwrng_register(&client->dev, &i2c_priv->hwrng);
     if (ret)
         dev_warn(&client->dev, "failed to register RNG (%d)\n", ret);
···
     if (rc && rc != -EBUSY)
         return rc;
 
-    cxl_gpf_port_setup(dport_dev, port);
+    cxl_gpf_port_setup(dport);
 
     /* Any more ports to add between this one and the root? */
     if (!dev_is_cxl_root_child(&port->dev))
-4
drivers/cxl/core/regs.c
···
     resource_size_t rcrb = ri->base;
     void __iomem *addr;
     u32 bar0, bar1;
-    u16 cmd;
     u32 id;
 
     if (which == CXL_RCRB_UPSTREAM)
···
     }
 
     id = readl(addr + PCI_VENDOR_ID);
-    cmd = readw(addr + PCI_COMMAND);
     bar0 = readl(addr + PCI_BASE_ADDRESS_0);
     bar1 = readl(addr + PCI_BASE_ADDRESS_1);
     iounmap(addr);
···
         dev_err(dev, "Failed to access Downstream Port RCRB\n");
         return CXL_RESOURCE_NONE;
     }
-    if (!(cmd & PCI_COMMAND_MEMORY))
-        return CXL_RESOURCE_NONE;
     /* The RCRB is a Memory Window, and the MEM_TYPE_1M bit is obsolete */
     if (bar0 & (PCI_BASE_ADDRESS_MEM_TYPE_1M | PCI_BASE_ADDRESS_SPACE_IO))
         return CXL_RESOURCE_NONE;
+3-3
drivers/cxl/cxl.h
···
  * @cdat: Cached CDAT data
  * @cdat_available: Should a CDAT attribute be available in sysfs
  * @pci_latency: Upstream latency in picoseconds
- * @gpf_dvsec: Cached GPF port DVSEC
  */
 struct cxl_port {
     struct device dev;
···
     } cdat;
     bool cdat_available;
     long pci_latency;
-    int gpf_dvsec;
 };
 
 /**
···
  * @regs: Dport parsed register blocks
  * @coord: access coordinates (bandwidth and latency performance attributes)
  * @link_latency: calculated PCIe downstream latency
+ * @gpf_dvsec: Cached GPF port DVSEC
  */
 struct cxl_dport {
     struct device *dport_dev;
···
     struct cxl_regs regs;
     struct access_coordinate coord[ACCESS_COORDINATE_MAX];
     long link_latency;
+    int gpf_dvsec;
 };
 
 /**
···
 #define __mock static
 #endif
 
-u16 cxl_gpf_get_dvsec(struct device *dev, bool is_port);
+u16 cxl_gpf_get_dvsec(struct device *dev);
 
 #endif /* __CXL_H__ */
···
         return;
     }
 
-    if (!cxl_gpf_get_dvsec(cxlds->dev, false))
+    if (!cxl_gpf_get_dvsec(cxlds->dev))
         return;
 
     if (cxl_get_dirty_count(mds, &count)) {
+11-3
drivers/firmware/stratix10-svc.c
···
     if (!svc->intel_svc_fcs) {
         dev_err(dev, "failed to allocate %s device\n", INTEL_FCS);
         ret = -ENOMEM;
-        goto err_unregister_dev;
+        goto err_unregister_rsu_dev;
     }
 
     ret = platform_device_add(svc->intel_svc_fcs);
     if (ret) {
         platform_device_put(svc->intel_svc_fcs);
-        goto err_unregister_dev;
+        goto err_unregister_rsu_dev;
     }
+
+    ret = of_platform_default_populate(dev_of_node(dev), NULL, dev);
+    if (ret)
+        goto err_unregister_fcs_dev;
 
     dev_set_drvdata(dev, svc);
 
···
 
     return 0;
 
-err_unregister_dev:
+err_unregister_fcs_dev:
+    platform_device_unregister(svc->intel_svc_fcs);
+err_unregister_rsu_dev:
     platform_device_unregister(svc->stratix10_svc_rsu);
 err_free_kfifo:
     kfifo_free(&controller->svc_fifo);
···
 {
     struct stratix10_svc *svc = dev_get_drvdata(&pdev->dev);
     struct stratix10_svc_controller *ctrl = platform_get_drvdata(pdev);
+
+    of_platform_depopulate(ctrl->dev);
 
     platform_device_unregister(svc->intel_svc_fcs);
     platform_device_unregister(svc->stratix10_svc_rsu);
+45-7
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
···
 #include <linux/dma-fence-array.h>
 #include <linux/pci-p2pdma.h>
 
+static const struct dma_buf_attach_ops amdgpu_dma_buf_attach_ops;
+
+/**
+ * dma_buf_attach_adev - Helper to get adev of an attachment
+ *
+ * @attach: attachment
+ *
+ * Returns:
+ * A struct amdgpu_device * if the attaching device is an amdgpu device or
+ * partition, NULL otherwise.
+ */
+static struct amdgpu_device *dma_buf_attach_adev(struct dma_buf_attachment *attach)
+{
+    if (attach->importer_ops == &amdgpu_dma_buf_attach_ops) {
+        struct drm_gem_object *obj = attach->importer_priv;
+        struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
+
+        return amdgpu_ttm_adev(bo->tbo.bdev);
+    }
+
+    return NULL;
+}
+
 /**
  * amdgpu_dma_buf_attach - &dma_buf_ops.attach implementation
  *
···
 static int amdgpu_dma_buf_attach(struct dma_buf *dmabuf,
                                  struct dma_buf_attachment *attach)
 {
+    struct amdgpu_device *attach_adev = dma_buf_attach_adev(attach);
     struct drm_gem_object *obj = dmabuf->priv;
     struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
     struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 
-    if (pci_p2pdma_distance(adev->pdev, attach->dev, false) < 0)
+    if (!amdgpu_dmabuf_is_xgmi_accessible(attach_adev, bo) &&
+        pci_p2pdma_distance(adev->pdev, attach->dev, false) < 0)
         attach->peer2peer = false;
 
     amdgpu_vm_bo_update_shared(bo);
···
 {
     struct dma_buf *dmabuf = attach->dmabuf;
     struct amdgpu_bo *bo = gem_to_amdgpu_bo(dmabuf->priv);
-    u32 domains = bo->preferred_domains;
+    u32 domains = bo->allowed_domains;
 
     dma_resv_assert_held(dmabuf->resv);
 
-    /*
-     * Try pinning into VRAM to allow P2P with RDMA NICs without ODP
+    /* Try pinning into VRAM to allow P2P with RDMA NICs without ODP
      * support if all attachments can do P2P. If any attachment can't do
      * P2P just pin into GTT instead.
+     *
+     * To avoid with conflicting pinnings between GPUs and RDMA when move
+     * notifiers are disabled, only allow pinning in VRAM when move
+     * notiers are enabled.
      */
-    list_for_each_entry(attach, &dmabuf->attachments, node)
-        if (!attach->peer2peer)
-            domains &= ~AMDGPU_GEM_DOMAIN_VRAM;
+    if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) {
+        domains &= ~AMDGPU_GEM_DOMAIN_VRAM;
+    } else {
+        list_for_each_entry(attach, &dmabuf->attachments, node)
+            if (!attach->peer2peer)
+                domains &= ~AMDGPU_GEM_DOMAIN_VRAM;
+    }
 
     if (domains & AMDGPU_GEM_DOMAIN_VRAM)
         bo->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
+
+    if (WARN_ON(!domains))
+        return -EINVAL;
 
     return amdgpu_bo_pin(bo, domains);
 }
···
 {
     struct drm_gem_object *obj = &bo->tbo.base;
     struct drm_gem_object *gobj;
+
+    if (!adev)
+        return false;
 
     if (obj->import_attach) {
         struct dma_buf *dma_buf = obj->import_attach->dmabuf;
+12-29
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
···
     switch (amdgpu_ip_version(adev, DCE_HWIP, 0)) {
     case IP_VERSION(3, 5, 0):
     case IP_VERSION(3, 6, 0):
-        /*
-         * On DCN35 systems with Z8 enabled, it's possible for IPS2 + Z8 to
-         * cause a hard hang. A fix exists for newer PMFW.
-         *
-         * As a workaround, for non-fixed PMFW, force IPS1+RCG as the deepest
-         * IPS state in all cases, except for s0ix and all displays off (DPMS),
-         * where IPS2 is allowed.
-         *
-         * When checking pmfw version, use the major and minor only.
-         */
-        if ((adev->pm.fw_version & 0x00FFFF00) < 0x005D6300)
-            ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;
-        else if (amdgpu_ip_version(adev, GC_HWIP, 0) > IP_VERSION(11, 5, 0))
-            /*
-             * Other ASICs with DCN35 that have residency issues with
-             * IPS2 in idle.
-             * We want them to use IPS2 only in display off cases.
-             */
-            ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;
-        break;
     case IP_VERSION(3, 5, 1):
         ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;
         break;
···
     for (k = 0; k < dc_state->stream_count; k++) {
         bundle->stream_update.stream = dc_state->streams[k];
 
-        for (m = 0; m < dc_state->stream_status->plane_count; m++) {
+        for (m = 0; m < dc_state->stream_status[k].plane_count; m++) {
             bundle->surface_updates[m].surface =
-                dc_state->stream_status->plane_states[m];
+                dc_state->stream_status[k].plane_states[m];
             bundle->surface_updates[m].surface->force_full_update =
                 true;
         }
 
         update_planes_and_stream_adapter(dm->dc,
                      UPDATE_TYPE_FULL,
-                     dc_state->stream_status->plane_count,
+                     dc_state->stream_status[k].plane_count,
                      dc_state->streams[k],
                      &bundle->stream_update,
                      bundle->surface_updates);
···
                      const struct drm_display_mode *native_mode,
                      bool scale_enabled)
 {
-    if (scale_enabled) {
-        copy_crtc_timing_for_drm_display_mode(native_mode, drm_mode);
-    } else if (native_mode->clock == drm_mode->clock &&
-               native_mode->htotal == drm_mode->htotal &&
-               native_mode->vtotal == drm_mode->vtotal) {
-        copy_crtc_timing_for_drm_display_mode(native_mode, drm_mode);
+    if (scale_enabled || (
+        native_mode->clock == drm_mode->clock &&
+        native_mode->htotal == drm_mode->htotal &&
+        native_mode->vtotal == drm_mode->vtotal)) {
+        if (native_mode->crtc_clock)
+            copy_crtc_timing_for_drm_display_mode(native_mode, drm_mode);
     } else {
         /* no scaling nor amdgpu inserted, no need to patch */
     }
···
      */
     if (amdgpu_ip_version(adev, DCE_HWIP, 0) < IP_VERSION(3, 2, 0) &&
         state->allow_modeset)
+        return true;
+
+    if (amdgpu_in_reset(adev) && state->allow_modeset)
         return true;
 
     /* Exit early if we know that we're adding or removing the plane. */
···
     /*
      * Setting dma-burst to 16Word causes permanent tearing for very small
      * buffers, e.g. cursor buffer. Burst Mode switching which based on
-     * plane size is not recommended as plane size varies alot towards the
+     * plane size is not recommended as plane size varies a lot towards the
      * end of the screen and rapid movement causes unstable DMA, but it is
      * still better to change dma-burst than displaying garbage.
      */
···
 };
 
 struct meson_drm_soc_limits {
-    unsigned int max_hdmi_phy_freq;
+    unsigned long long max_hdmi_phy_freq;
 };
 
 struct meson_drm {
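A quick sanity check on why the type widens here and in the following meson changes: once HDMI frequencies are tracked in Hz instead of kHz, the largest PHY rate this driver handles no longer fits a 32-bit unsigned value,

    5 940 000 000 Hz  >  4 294 967 295 = 2^32 - 1,

while an unsigned long long (at least 64 bits) holds it comfortably.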
+16-13
drivers/gpu/drm/meson/meson_encoder_hdmi.c
···
 {
     struct meson_drm *priv = encoder_hdmi->priv;
     int vic = drm_match_cea_mode(mode);
-    unsigned int phy_freq;
-    unsigned int vclk_freq;
-    unsigned int venc_freq;
-    unsigned int hdmi_freq;
+    unsigned long long phy_freq;
+    unsigned long long vclk_freq;
+    unsigned long long venc_freq;
+    unsigned long long hdmi_freq;
 
-    vclk_freq = mode->clock;
+    vclk_freq = mode->clock * 1000;
 
     /* For 420, pixel clock is half unlike venc clock */
     if (encoder_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24)
···
     if (mode->flags & DRM_MODE_FLAG_DBLCLK)
         venc_freq /= 2;
 
-    dev_dbg(priv->dev, "vclk:%d phy=%d venc=%d hdmi=%d enci=%d\n",
+    dev_dbg(priv->dev,
+        "vclk:%lluHz phy=%lluHz venc=%lluHz hdmi=%lluHz enci=%d\n",
         phy_freq, vclk_freq, venc_freq, hdmi_freq,
         priv->venc.hdmi_use_enci);
 
···
     struct meson_encoder_hdmi *encoder_hdmi = bridge_to_meson_encoder_hdmi(bridge);
     struct meson_drm *priv = encoder_hdmi->priv;
     bool is_hdmi2_sink = display_info->hdmi.scdc.supported;
-    unsigned int phy_freq;
-    unsigned int vclk_freq;
-    unsigned int venc_freq;
-    unsigned int hdmi_freq;
+    unsigned long long clock = mode->clock * 1000;
+    unsigned long long phy_freq;
+    unsigned long long vclk_freq;
+    unsigned long long venc_freq;
+    unsigned long long hdmi_freq;
     int vic = drm_match_cea_mode(mode);
     enum drm_mode_status status;
 
···
         if (status != MODE_OK)
             return status;
 
-        return meson_vclk_dmt_supported_freq(priv, mode->clock);
+        return meson_vclk_dmt_supported_freq(priv, clock);
     /* Check against supported VIC modes */
     } else if (!meson_venc_hdmi_supported_vic(vic))
         return MODE_BAD;
 
-    vclk_freq = mode->clock;
+    vclk_freq = clock;
 
     /* For 420, pixel clock is half unlike venc clock */
     if (drm_mode_is_420_only(display_info, mode) ||
···
     if (mode->flags & DRM_MODE_FLAG_DBLCLK)
         venc_freq /= 2;
 
-    dev_dbg(priv->dev, "%s: vclk:%d phy=%d venc=%d hdmi=%d\n",
+    dev_dbg(priv->dev,
+        "%s: vclk:%lluHz phy=%lluHz venc=%lluHz hdmi=%lluHz\n",
         __func__, phy_freq, vclk_freq, venc_freq, hdmi_freq);
 
     return meson_vclk_vic_supported_freq(priv, phy_freq, vclk_freq);
+101-94
drivers/gpu/drm/meson/meson_vclk.c
···
 #define HDMI_PLL_LOCK		BIT(31)
 #define HDMI_PLL_LOCK_G12A	(3 << 30)
 
-#define FREQ_1000_1001(_freq)	DIV_ROUND_CLOSEST(_freq * 1000, 1001)
+#define PIXEL_FREQ_1000_1001(_freq)	\
+	DIV_ROUND_CLOSEST_ULL((_freq) * 1000ULL, 1001ULL)
+#define PHY_FREQ_1000_1001(_freq)	\
+	(PIXEL_FREQ_1000_1001(DIV_ROUND_DOWN_ULL(_freq, 10ULL)) * 10)
 
 /* VID PLL Dividers */
 enum {
···
 };
 
 struct meson_vclk_params {
-	unsigned int pll_freq;
-	unsigned int phy_freq;
-	unsigned int vclk_freq;
-	unsigned int venc_freq;
-	unsigned int pixel_freq;
+	unsigned long long pll_freq;
+	unsigned long long phy_freq;
+	unsigned long long vclk_freq;
+	unsigned long long venc_freq;
+	unsigned long long pixel_freq;
 	unsigned int pll_od1;
 	unsigned int pll_od2;
 	unsigned int pll_od3;
···
 	unsigned int vclk_div;
 } params[] = {
 	[MESON_VCLK_HDMI_ENCI_54000] = {
-		.pll_freq = 4320000,
-		.phy_freq = 270000,
-		.vclk_freq = 54000,
-		.venc_freq = 54000,
-		.pixel_freq = 54000,
+		.pll_freq = 4320000000,
+		.phy_freq = 270000000,
+		.vclk_freq = 54000000,
+		.venc_freq = 54000000,
+		.pixel_freq = 54000000,
 		.pll_od1 = 4,
 		.pll_od2 = 4,
 		.pll_od3 = 1,
···
 		.vclk_div = 1,
 	},
 	[MESON_VCLK_HDMI_DDR_54000] = {
-		.pll_freq = 4320000,
-		.phy_freq = 270000,
-		.vclk_freq = 54000,
-		.venc_freq = 54000,
-		.pixel_freq = 27000,
+		.pll_freq = 4320000000,
+		.phy_freq = 270000000,
+		.vclk_freq = 54000000,
+		.venc_freq = 54000000,
+		.pixel_freq = 27000000,
 		.pll_od1 = 4,
 		.pll_od2 = 4,
 		.pll_od3 = 1,
···
 		.vclk_div = 1,
 	},
 	[MESON_VCLK_HDMI_DDR_148500] = {
-		.pll_freq = 2970000,
-		.phy_freq = 742500,
-		.vclk_freq = 148500,
-		.venc_freq = 148500,
-		.pixel_freq = 74250,
+		.pll_freq = 2970000000,
+		.phy_freq = 742500000,
+		.vclk_freq = 148500000,
+		.venc_freq = 148500000,
+		.pixel_freq = 74250000,
 		.pll_od1 = 4,
 		.pll_od2 = 1,
 		.pll_od3 = 1,
···
 		.vclk_div = 1,
 	},
 	[MESON_VCLK_HDMI_74250] = {
-		.pll_freq = 2970000,
-		.phy_freq = 742500,
-		.vclk_freq = 74250,
-		.venc_freq = 74250,
-		.pixel_freq = 74250,
+		.pll_freq = 2970000000,
+		.phy_freq = 742500000,
+		.vclk_freq = 74250000,
+		.venc_freq = 74250000,
+		.pixel_freq = 74250000,
 		.pll_od1 = 2,
 		.pll_od2 = 2,
 		.pll_od3 = 2,
···
 		.vclk_div = 1,
 	},
 	[MESON_VCLK_HDMI_148500] = {
-		.pll_freq = 2970000,
-		.phy_freq = 1485000,
-		.vclk_freq = 148500,
-		.venc_freq = 148500,
-		.pixel_freq = 148500,
+		.pll_freq = 2970000000,
+		.phy_freq = 1485000000,
+		.vclk_freq = 148500000,
+		.venc_freq = 148500000,
+		.pixel_freq = 148500000,
 		.pll_od1 = 1,
 		.pll_od2 = 2,
 		.pll_od3 = 2,
···
 		.vclk_div = 1,
 	},
 	[MESON_VCLK_HDMI_297000] = {
-		.pll_freq = 5940000,
-		.phy_freq = 2970000,
-		.venc_freq = 297000,
-		.vclk_freq = 297000,
-		.pixel_freq = 297000,
+		.pll_freq = 5940000000,
+		.phy_freq = 2970000000,
+		.venc_freq = 297000000,
+		.vclk_freq = 297000000,
+		.pixel_freq = 297000000,
 		.pll_od1 = 2,
 		.pll_od2 = 1,
 		.pll_od3 = 1,
···
 		.vclk_div = 2,
 	},
 	[MESON_VCLK_HDMI_594000] = {
-		.pll_freq = 5940000,
-		.phy_freq = 5940000,
-		.venc_freq = 594000,
-		.vclk_freq = 594000,
-		.pixel_freq = 594000,
+		.pll_freq = 5940000000,
+		.phy_freq = 5940000000,
+		.venc_freq = 594000000,
+		.vclk_freq = 594000000,
+		.pixel_freq = 594000000,
 		.pll_od1 = 1,
 		.pll_od2 = 1,
 		.pll_od3 = 2,
···
 		.vclk_div = 1,
 	},
 	[MESON_VCLK_HDMI_594000_YUV420] = {
-		.pll_freq = 5940000,
-		.phy_freq = 2970000,
-		.venc_freq = 594000,
-		.vclk_freq = 594000,
-		.pixel_freq = 297000,
+		.pll_freq = 5940000000,
+		.phy_freq = 2970000000,
+		.venc_freq = 594000000,
+		.vclk_freq = 594000000,
+		.pixel_freq = 297000000,
 		.pll_od1 = 2,
 		.pll_od2 = 1,
 		.pll_od3 = 1,
···
 		3 << 20, pll_od_to_reg(od3) << 20);
 }
 
-#define XTAL_FREQ 24000
+#define XTAL_FREQ (24 * 1000 * 1000)
 
 static unsigned int meson_hdmi_pll_get_m(struct meson_drm *priv,
-					 unsigned int pll_freq)
+					 unsigned long long pll_freq)
 {
 	/* The GXBB PLL has a /2 pre-multiplier */
 	if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB))
-		pll_freq /= 2;
+		pll_freq = DIV_ROUND_DOWN_ULL(pll_freq, 2);
 
-	return pll_freq / XTAL_FREQ;
+	return DIV_ROUND_DOWN_ULL(pll_freq, XTAL_FREQ);
 }
 
 #define HDMI_FRAC_MAX_GXBB	4096
···
 
 static unsigned int meson_hdmi_pll_get_frac(struct meson_drm *priv,
 					    unsigned int m,
-					    unsigned int pll_freq)
+					    unsigned long long pll_freq)
 {
-	unsigned int parent_freq = XTAL_FREQ;
+	unsigned long long parent_freq = XTAL_FREQ;
 	unsigned int frac_max = HDMI_FRAC_MAX_GXL;
 	unsigned int frac_m;
 	unsigned int frac;
+	u32 remainder;
 
 	/* The GXBB PLL has a /2 pre-multiplier and a larger FRAC width */
 	if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) {
···
 		frac_max = HDMI_FRAC_MAX_G12A;
 
 	/* We can have a perfect match !*/
-	if (pll_freq / m == parent_freq &&
-	    pll_freq % m == 0)
+	if (div_u64_rem(pll_freq, m, &remainder) == parent_freq &&
+	    remainder == 0)
 		return 0;
 
-	frac = div_u64((u64)pll_freq * (u64)frac_max, parent_freq);
+	frac = mul_u64_u64_div_u64(pll_freq, frac_max, parent_freq);
 	frac_m = m * frac_max;
 	if (frac_m > frac)
 		return frac_max;
···
 }
 
 static bool meson_hdmi_pll_validate_params(struct meson_drm *priv,
-					   unsigned int m,
+					   unsigned long long m,
 					   unsigned int frac)
 {
 	if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) {
···
 }
 
 static bool meson_hdmi_pll_find_params(struct meson_drm *priv,
-				       unsigned int freq,
+				       unsigned long long freq,
 				       unsigned int *m,
 				       unsigned int *frac,
 				       unsigned int *od)
···
 			continue;
 		*frac = meson_hdmi_pll_get_frac(priv, *m, freq * *od);
 
-		DRM_DEBUG_DRIVER("PLL params for %dkHz: m=%x frac=%x od=%d\n",
+		DRM_DEBUG_DRIVER("PLL params for %lluHz: m=%x frac=%x od=%d\n",
 				 freq, *m, *frac, *od);
 
 		if (meson_hdmi_pll_validate_params(priv, *m, *frac))
···
 
 /* pll_freq is the frequency after the OD dividers */
 enum drm_mode_status
-meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned int freq)
+meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned long long freq)
 {
 	unsigned int od, m, frac;
 
···
 
 /* pll_freq is the frequency after the OD dividers */
 static void meson_hdmi_pll_generic_set(struct meson_drm *priv,
-				       unsigned int pll_freq)
+				       unsigned long long pll_freq)
 {
 	unsigned int od, m, frac, od1, od2, od3;
 
···
 		od1 = od / od2;
 	}
 
-	DRM_DEBUG_DRIVER("PLL params for %dkHz: m=%x frac=%x od=%d/%d/%d\n",
+	DRM_DEBUG_DRIVER("PLL params for %lluHz: m=%x frac=%x od=%d/%d/%d\n",
 			 pll_freq, m, frac, od1, od2, od3);
 
 	meson_hdmi_pll_set_params(priv, m, frac, od1, od2, od3);
···
 		return;
 	}
 
-	DRM_ERROR("Fatal, unable to find parameters for PLL freq %d\n",
+	DRM_ERROR("Fatal, unable to find parameters for PLL freq %lluHz\n",
 		  pll_freq);
 }
 
 enum drm_mode_status
-meson_vclk_vic_supported_freq(struct meson_drm *priv, unsigned int phy_freq,
-			      unsigned int vclk_freq)
+meson_vclk_vic_supported_freq(struct meson_drm *priv,
+			      unsigned long long phy_freq,
+			      unsigned long long vclk_freq)
 {
 	int i;
 
-	DRM_DEBUG_DRIVER("phy_freq = %d vclk_freq = %d\n",
+	DRM_DEBUG_DRIVER("phy_freq = %lluHz vclk_freq = %lluHz\n",
 			 phy_freq, vclk_freq);
 
 	/* Check against soc revision/package limits */
···
 	}
 
 	for (i = 0 ; params[i].pixel_freq ; ++i) {
-		DRM_DEBUG_DRIVER("i = %d pixel_freq = %d alt = %d\n",
+		DRM_DEBUG_DRIVER("i = %d pixel_freq = %lluHz alt = %lluHz\n",
 				 i, params[i].pixel_freq,
-				 FREQ_1000_1001(params[i].pixel_freq));
-		DRM_DEBUG_DRIVER("i = %d phy_freq = %d alt = %d\n",
+				 PIXEL_FREQ_1000_1001(params[i].pixel_freq));
+		DRM_DEBUG_DRIVER("i = %d phy_freq = %lluHz alt = %lluHz\n",
 				 i, params[i].phy_freq,
-				 FREQ_1000_1001(params[i].phy_freq/1000)*1000);
+				 PHY_FREQ_1000_1001(params[i].phy_freq));
 		/* Match strict frequency */
 		if (phy_freq == params[i].phy_freq &&
 		    vclk_freq == params[i].vclk_freq)
 			return MODE_OK;
 		/* Match 1000/1001 variant */
-		if (phy_freq == (FREQ_1000_1001(params[i].phy_freq/1000)*1000) &&
-		    vclk_freq == FREQ_1000_1001(params[i].vclk_freq))
+		if (phy_freq == PHY_FREQ_1000_1001(params[i].phy_freq) &&
+		    vclk_freq == PIXEL_FREQ_1000_1001(params[i].vclk_freq))
 			return MODE_OK;
 	}
 
···
 }
 EXPORT_SYMBOL_GPL(meson_vclk_vic_supported_freq);
 
-static void meson_vclk_set(struct meson_drm *priv, unsigned int pll_base_freq,
-			   unsigned int od1, unsigned int od2, unsigned int od3,
+static void meson_vclk_set(struct meson_drm *priv,
+			   unsigned long long pll_base_freq, unsigned int od1,
+			   unsigned int od2, unsigned int od3,
 			   unsigned int vid_pll_div, unsigned int vclk_div,
 			   unsigned int hdmi_tx_div, unsigned int venc_div,
 			   bool hdmi_use_enci, bool vic_alternate_clock)
···
 		meson_hdmi_pll_generic_set(priv, pll_base_freq);
 	} else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) {
 		switch (pll_base_freq) {
-		case 2970000:
+		case 2970000000:
 			m = 0x3d;
 			frac = vic_alternate_clock ? 0xd02 : 0xe00;
 			break;
-		case 4320000:
+		case 4320000000:
 			m = vic_alternate_clock ? 0x59 : 0x5a;
 			frac = vic_alternate_clock ? 0xe8f : 0;
 			break;
-		case 5940000:
+		case 5940000000:
 			m = 0x7b;
 			frac = vic_alternate_clock ?
0xa05 : 0xc00;846840 break;···850844 } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXM) ||851845 meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXL)) {852846 switch (pll_base_freq) {853853- case 2970000:847847+ case 2970000000:854848 m = 0x7b;855849 frac = vic_alternate_clock ? 0x281 : 0x300;856850 break;857857- case 4320000:851851+ case 4320000000:858852 m = vic_alternate_clock ? 0xb3 : 0xb4;859853 frac = vic_alternate_clock ? 0x347 : 0;860854 break;861861- case 5940000:855855+ case 5940000000:862856 m = 0xf7;863857 frac = vic_alternate_clock ? 0x102 : 0x200;864858 break;···867861 meson_hdmi_pll_set_params(priv, m, frac, od1, od2, od3);868862 } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) {869863 switch (pll_base_freq) {870870- case 2970000:864864+ case 2970000000:871865 m = 0x7b;872866 frac = vic_alternate_clock ? 0x140b4 : 0x18000;873867 break;874874- case 4320000:868868+ case 4320000000:875869 m = vic_alternate_clock ? 0xb3 : 0xb4;876870 frac = vic_alternate_clock ? 0x1a3ee : 0;877871 break;878878- case 5940000:872872+ case 5940000000:879873 m = 0xf7;880874 frac = vic_alternate_clock ? 0x8148 : 0x10000;881875 break;···10311025}1032102610331027void meson_vclk_setup(struct meson_drm *priv, unsigned int target,10341034- unsigned int phy_freq, unsigned int vclk_freq,10351035- unsigned int venc_freq, unsigned int dac_freq,10281028+ unsigned long long phy_freq, unsigned long long vclk_freq,10291029+ unsigned long long venc_freq, unsigned long long dac_freq,10361030 bool hdmi_use_enci)10371031{10381032 bool vic_alternate_clock = false;10391039- unsigned int freq;10401040- unsigned int hdmi_tx_div;10411041- unsigned int venc_div;10331033+ unsigned long long freq;10341034+ unsigned long long hdmi_tx_div;10351035+ unsigned long long venc_div;1042103610431037 if (target == MESON_VCLK_TARGET_CVBS) {10441038 meson_venci_cvbs_clock_config(priv);···10581052 return;10591053 }1060105410611061- hdmi_tx_div = vclk_freq / dac_freq;10551055+ hdmi_tx_div = DIV_ROUND_DOWN_ULL(vclk_freq, dac_freq);1062105610631057 if (hdmi_tx_div == 0) {10641064- pr_err("Fatal Error, invalid HDMI-TX freq %d\n",10581058+ pr_err("Fatal Error, invalid HDMI-TX freq %lluHz\n",10651059 dac_freq);10661060 return;10671061 }1068106210691069- venc_div = vclk_freq / venc_freq;10631063+ venc_div = DIV_ROUND_DOWN_ULL(vclk_freq, venc_freq);1070106410711065 if (venc_div == 0) {10721072- pr_err("Fatal Error, invalid HDMI venc freq %d\n",10661066+ pr_err("Fatal Error, invalid HDMI venc freq %lluHz\n",10731067 venc_freq);10741068 return;10751069 }1076107010771071 for (freq = 0 ; params[freq].pixel_freq ; ++freq) {10781072 if ((phy_freq == params[freq].phy_freq ||10791079- phy_freq == FREQ_1000_1001(params[freq].phy_freq/1000)*1000) &&10731073+ phy_freq == PHY_FREQ_1000_1001(params[freq].phy_freq)) &&10801074 (vclk_freq == params[freq].vclk_freq ||10811081- vclk_freq == FREQ_1000_1001(params[freq].vclk_freq))) {10751075+ vclk_freq == PIXEL_FREQ_1000_1001(params[freq].vclk_freq))) {10821076 if (vclk_freq != params[freq].vclk_freq)10831077 vic_alternate_clock = true;10841078 else···11041098 }1105109911061100 if (!params[freq].pixel_freq) {11071107- pr_err("Fatal Error, invalid HDMI vclk freq %d\n", vclk_freq);11011101+ pr_err("Fatal Error, invalid HDMI vclk freq %lluHz\n",11021102+ vclk_freq);11081103 return;11091104 }11101105
+7-6
drivers/gpu/drm/meson/meson_vclk.h
···2020};21212222/* 27MHz is the CVBS Pixel Clock */2323-#define MESON_VCLK_CVBS 270002323+#define MESON_VCLK_CVBS (27 * 1000 * 1000)24242525enum drm_mode_status2626-meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned int freq);2626+meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned long long freq);2727enum drm_mode_status2828-meson_vclk_vic_supported_freq(struct meson_drm *priv, unsigned int phy_freq,2929- unsigned int vclk_freq);2828+meson_vclk_vic_supported_freq(struct meson_drm *priv,2929+ unsigned long long phy_freq,3030+ unsigned long long vclk_freq);30313132void meson_vclk_setup(struct meson_drm *priv, unsigned int target,3232- unsigned int phy_freq, unsigned int vclk_freq,3333- unsigned int venc_freq, unsigned int dac_freq,3333+ unsigned long long phy_freq, unsigned long long vclk_freq,3434+ unsigned long long venc_freq, unsigned long long dac_freq,3435 bool hdmi_use_enci);35363637#endif /* __MESON_VCLK_H */
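The meson_vclk changes above widen all frequencies to 64-bit Hz values and split the old 1000/1001 helper into a pixel-clock variant and a PHY variant that is computed on freq/10 and then scaled back up. A small userspace sketch of the same arithmetic (the helper names here are mine, not the driver's) shows why the split matters: the fractional PHY value stays an exact multiple of ten, whereas rounding the full PHY frequency directly would not.

/* Userspace sketch of the 1000/1001 clock arithmetic used above (illustrative only). */
#include <stdio.h>
#include <stdint.h>

/* Round-to-nearest division, like the kernel's DIV_ROUND_CLOSEST_ULL(). */
static uint64_t div_round_closest_u64(uint64_t n, uint64_t d)
{
	return (n + d / 2) / d;
}

/* Pixel-clock variant: freq * 1000 / 1001, rounded to the nearest Hz. */
static uint64_t pixel_freq_1000_1001(uint64_t freq)
{
	return div_round_closest_u64(freq * 1000, 1001);
}

/* PHY variant: compute on freq/10, then scale back, keeping a multiple of 10. */
static uint64_t phy_freq_1000_1001(uint64_t freq)
{
	return pixel_freq_1000_1001(freq / 10) * 10;
}

int main(void)
{
	uint64_t pixel = 594000000ULL;	/* 594 MHz pixel clock */
	uint64_t phy = 5940000000ULL;	/* 5.94 GHz TMDS/PHY clock */

	/* 593406593 Hz and 5934065930 Hz: the PHY value is exactly 10x the pixel value. */
	printf("pixel alt: %llu Hz\n", (unsigned long long)pixel_freq_1000_1001(pixel));
	printf("phy alt:   %llu Hz\n", (unsigned long long)phy_freq_1000_1001(phy));
	return 0;
}

Rounding 5940000000 * 1000 / 1001 directly would give 5934065934 Hz, which is neither a multiple of ten nor ten times the 593406593 Hz pixel clock.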
···128128 drm_dev_put(dev);129129}130130131131+static void virtio_gpu_shutdown(struct virtio_device *vdev)132132+{133133+ /*134134+ * drm does its own synchronization on shutdown.135135+ * Do nothing here, opt out of device reset.136136+ */137137+}138138+131139static void virtio_gpu_config_changed(struct virtio_device *vdev)132140{133141 struct drm_device *dev = vdev->priv;···170162 .id_table = id_table,171163 .probe = virtio_gpu_probe,172164 .remove = virtio_gpu_remove,165165+ .shutdown = virtio_gpu_shutdown,173166 .config_changed = virtio_gpu_config_changed174167};175168
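The new, intentionally empty virtio_gpu_shutdown() lets the GPU driver opt out of the generic shutdown behaviour; the core-side counterpart appears later in this series, where the virtio bus shutdown handler prefers a driver-provided callback over its reset path. A minimal sketch of that dispatch pattern with plain function pointers (hypothetical types, not the virtio API):

/* Minimal sketch of "use the driver's shutdown if it provides one, otherwise
 * fall back to the generic path" (hypothetical types, not the virtio API). */
#include <stdio.h>

struct device;

struct driver {
	void (*shutdown)(struct device *dev);	/* optional */
};

static void generic_shutdown(struct device *dev)
{
	(void)dev;
	printf("generic path: reset the device\n");
}

static void bus_shutdown(const struct driver *drv, struct device *dev)
{
	if (drv && drv->shutdown) {
		drv->shutdown(dev);	/* driver handles (or deliberately skips) it */
		return;
	}
	generic_shutdown(dev);
}

static void noop_shutdown(struct device *dev)
{
	(void)dev;	/* driver synchronizes on its own; do nothing */
}

int main(void)
{
	struct driver gpu_like = { .shutdown = noop_shutdown };
	struct driver legacy = { 0 };

	bus_shutdown(&gpu_like, NULL);	/* nothing happens, as intended */
	bus_shutdown(&legacy, NULL);	/* falls back to the generic reset path */
	return 0;
}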
+1
drivers/hwtracing/intel_th/Kconfig
···60606161config INTEL_TH_MSU6262 tristate "Intel(R) Trace Hub Memory Storage Unit"6363+ depends on MMU6364 help6465 Memory Storage Unit (MSU) trace output device enables6566 storing STP traces to system memory. It supports single
···38693869 struct irq_2_irte *irte_info = &ir_data->irq_2_irte;38703870 struct iommu_dev_data *dev_data;3871387138723872+ if (WARN_ON_ONCE(!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir)))38733873+ return -EINVAL;38743874+38723875 if (ir_data->iommu == NULL)38733876 return -EINVAL;38743877···38823879 * we should not modify the IRTE38833880 */38843881 if (!dev_data || !dev_data->use_vapic)38853885- return 0;38823882+ return -EINVAL;3886388338873884 ir_data->cfg = irqd_cfg(data);38883885 pi_data->ir_data = ir_data;38893889-38903890- /* Note:38913891- * SVM tries to set up for VAPIC mode, but we are in38923892- * legacy mode. So, we force legacy mode instead.38933893- */38943894- if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir)) {38953895- pr_debug("%s: Fall back to using intr legacy remap\n",38963896- __func__);38973897- pi_data->is_guest_mode = false;38983898- }3899388639003887 pi_data->prev_ga_tag = ir_data->cached_ga_tag;39013888 if (pi_data->is_guest_mode) {
···117117118118#define MEI_DEV_ID_LNL_M 0xA870 /* Lunar Lake Point M */119119120120+#define MEI_DEV_ID_PTL_H 0xE370 /* Panther Lake H */120121#define MEI_DEV_ID_PTL_P 0xE470 /* Panther Lake P */121122122123/*
···167167 q->base = base;168168 q->base_pa = base_pa;169169170170- for (i = 0, cur = q->info; i < q->num_descs; i++, cur++)170170+ for (i = 0, cur = q->info; i < q->num_descs; i++, cur++) {171171 cur->desc = base + (i * q->desc_size);172172+ init_completion(&cur->completion);173173+ }172174}173175174176static void pdsc_cq_map(struct pdsc_cq *cq, void *base, dma_addr_t base_pa)···327325 size_t sz;328326 int err;329327330330- /* Scale the descriptor ring length based on number of CPUs and VFs */331331- numdescs = max_t(int, PDSC_ADMINQ_MIN_LENGTH, num_online_cpus());332332- numdescs += 2 * pci_sriov_get_totalvfs(pdsc->pdev);333333- numdescs = roundup_pow_of_two(numdescs);328328+ numdescs = PDSC_ADMINQ_MAX_LENGTH;334329 err = pdsc_qcq_alloc(pdsc, PDS_CORE_QTYPE_ADMINQ, 0, "adminq",335330 PDS_CORE_QCQ_F_CORE | PDS_CORE_QCQ_F_INTR,336331 numdescs,
+2-2
drivers/net/ethernet/amd/pds_core/core.h
···16161717#define PDSC_WATCHDOG_SECS 51818#define PDSC_QUEUE_NAME_MAX_SZ 161919-#define PDSC_ADMINQ_MIN_LENGTH 16 /* must be a power of two */1919+#define PDSC_ADMINQ_MAX_LENGTH 16 /* must be a power of two */2020#define PDSC_NOTIFYQ_LENGTH 64 /* must be a power of two */2121#define PDSC_TEARDOWN_RECOVERY false2222#define PDSC_TEARDOWN_REMOVING true···9696 unsigned int bytes;9797 unsigned int nbufs;9898 struct pdsc_buf_info bufs[PDS_CORE_MAX_FRAGS];9999- struct pdsc_wait_context *wc;9999+ struct completion completion;100100 void *dest;101101};102102
+1-3
drivers/net/ethernet/amd/pds_core/devlink.c
···105105 .fw_control.opcode = PDS_CORE_CMD_FW_CONTROL,106106 .fw_control.oper = PDS_CORE_FW_GET_LIST,107107 };108108- struct pds_core_fw_list_info fw_list;108108+ struct pds_core_fw_list_info fw_list = {};109109 struct pdsc *pdsc = devlink_priv(dl);110110 union pds_core_dev_comp comp;111111 char buf[32];···118118 if (!err)119119 memcpy_fromio(&fw_list, pdsc->cmd_regs->data, sizeof(fw_list));120120 mutex_unlock(&pdsc->devcmd_lock);121121- if (err && err != -EIO)122122- return err;123121124122 listlen = min(fw_list.num_fw_slots, ARRAY_SIZE(fw_list.fw_names));125123 for (i = 0; i < listlen; i++) {
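Dropping the early error return in the firmware-list report only works because fw_list is now zero-initialized: a failed devcmd skips the copy from device memory, num_fw_slots stays 0, and the reporting loop runs zero times. A standalone sketch of that zero-init-then-clamp pattern (hypothetical struct and sizes, not the pds_core types):

/* Sketch of the zero-init-then-clamp pattern from the fw-list change above
 * (hypothetical struct and sizes, not the pds_core types). */
#include <stdio.h>
#include <stdint.h>

#define ARRAY_SIZE(a)	(sizeof(a) / sizeof((a)[0]))

struct fw_list_info {
	uint8_t num_fw_slots;
	char fw_names[4][16];
};

int main(void)
{
	/* Zero-initialized: if the device command fails and the copy from the
	 * device is skipped, num_fw_slots is 0 and the loop below does nothing
	 * instead of walking stack garbage. */
	struct fw_list_info fw_list = { 0 };
	int devcmd_failed = 1;	/* pretend the command returned an error */

	if (!devcmd_failed) {
		/* normally: copy the device's response into fw_list here */
	}

	/* Clamp to the local array size so a bogus device-reported count can
	 * never index past fw_names[]. */
	size_t listlen = fw_list.num_fw_slots;
	if (listlen > ARRAY_SIZE(fw_list.fw_names))
		listlen = ARRAY_SIZE(fw_list.fw_names);

	for (size_t i = 0; i < listlen; i++)
		printf("slot %zu: %s\n", i, fw_list.fw_names[i]);

	return 0;
}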
+28-17
drivers/net/ethernet/freescale/enetc/enetc.c
···18501850 }18511851}1852185218531853+static void enetc_bulk_flip_buff(struct enetc_bdr *rx_ring, int rx_ring_first,18541854+ int rx_ring_last)18551855+{18561856+ while (rx_ring_first != rx_ring_last) {18571857+ enetc_flip_rx_buff(rx_ring,18581858+ &rx_ring->rx_swbd[rx_ring_first]);18591859+ enetc_bdr_idx_inc(rx_ring, &rx_ring_first);18601860+ }18611861+}18621862+18531863static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,18541864 struct napi_struct *napi, int work_limit,18551865 struct bpf_prog *prog)···1878186818791869 while (likely(rx_frm_cnt < work_limit)) {18801870 union enetc_rx_bd *rxbd, *orig_rxbd;18811881- int orig_i, orig_cleaned_cnt;18821871 struct xdp_buff xdp_buff;18831872 struct sk_buff *skb;18731873+ int orig_i, err;18841874 u32 bd_status;18851885- int err;1886187518871876 rxbd = enetc_rxbd(rx_ring, i);18881877 bd_status = le32_to_cpu(rxbd->r.lstatus);···18961887 break;1897188818981889 orig_rxbd = rxbd;18991899- orig_cleaned_cnt = cleaned_cnt;19001890 orig_i = i;1901189119021892 enetc_build_xdp_buff(rx_ring, bd_status, &rxbd, &i,···19231915 rx_ring->stats.xdp_drops++;19241916 break;19251917 case XDP_PASS:19261926- rxbd = orig_rxbd;19271927- cleaned_cnt = orig_cleaned_cnt;19281928- i = orig_i;19291929-19301930- skb = enetc_build_skb(rx_ring, bd_status, &rxbd,19311931- &i, &cleaned_cnt,19321932- ENETC_RXB_DMA_SIZE_XDP);19331933- if (unlikely(!skb))19181918+ skb = xdp_build_skb_from_buff(&xdp_buff);19191919+ /* Probably under memory pressure, stop NAPI */19201920+ if (unlikely(!skb)) {19211921+ enetc_xdp_drop(rx_ring, orig_i, i);19221922+ rx_ring->stats.xdp_drops++;19341923 goto out;19241924+ }19251925+19261926+ enetc_get_offloads(rx_ring, orig_rxbd, skb);19271927+19281928+ /* These buffers are about to be owned by the stack.19291929+ * Update our buffer cache (the rx_swbd array elements)19301930+ * with their other page halves.19311931+ */19321932+ enetc_bulk_flip_buff(rx_ring, orig_i, i);1935193319361934 napi_gro_receive(napi, skb);19371935 break;···19791965 enetc_xdp_drop(rx_ring, orig_i, i);19801966 rx_ring->stats.xdp_redirect_failures++;19811967 } else {19821982- while (orig_i != i) {19831983- enetc_flip_rx_buff(rx_ring,19841984- &rx_ring->rx_swbd[orig_i]);19851985- enetc_bdr_idx_inc(rx_ring, &orig_i);19861986- }19681968+ enetc_bulk_flip_buff(rx_ring, orig_i, i);19871969 xdp_redirect_frm_cnt++;19881970 rx_ring->stats.xdp_redirect++;19891971 }···33723362 bdr->buffer_offset = ENETC_RXB_PAD;33733363 priv->rx_ring[i] = bdr;3374336433753375- err = xdp_rxq_info_reg(&bdr->xdp.rxq, priv->ndev, i, 0);33653365+ err = __xdp_rxq_info_reg(&bdr->xdp.rxq, priv->ndev, i, 0,33663366+ ENETC_RXB_DMA_SIZE_XDP);33763367 if (err)33773368 goto free_vector;33783369
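enetc_bulk_flip_buff() is the old per-buffer loop hoisted into a helper: walk the ring indices from the first descriptor of the frame up to, but not including, the current position, wrapping at the end of the ring. A tiny standalone sketch of that walk (ring size and names are made up):

/* Sketch of walking a ring from 'first' up to (not including) 'last' with
 * wrap-around, as in the bulk-flip helper above (ring size/names made up). */
#include <stdio.h>

#define RING_SIZE 8	/* hypothetical descriptor count */

static void idx_inc(int *i)
{
	*i = (*i + 1) % RING_SIZE;	/* the driver resets to 0 at the ring end */
}

int main(void)
{
	int first = 6, last = 2;	/* the range wraps past the end of the ring */

	while (first != last) {
		printf("flip rx_swbd[%d]\n", first);	/* stand-in for enetc_flip_rx_buff() */
		idx_inc(&first);
	}
	return 0;
}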
+20-4
drivers/net/ethernet/mediatek/mtk_eth_soc.c
···40434043 mtk_w32(eth, 0x21021000, MTK_FE_INT_GRP);4044404440454045 if (mtk_is_netsys_v3_or_greater(eth)) {40464046- /* PSE should not drop port1, port8 and port9 packets */40474047- mtk_w32(eth, 0x00000302, PSE_DROP_CFG);40464046+ /* PSE dummy page mechanism */40474047+ mtk_w32(eth, PSE_DUMMY_WORK_GDM(1) | PSE_DUMMY_WORK_GDM(2) |40484048+ PSE_DUMMY_WORK_GDM(3) | DUMMY_PAGE_THR, PSE_DUMY_REQ);40494049+40504050+ /* PSE free buffer drop threshold */40514051+ mtk_w32(eth, 0x00600009, PSE_IQ_REV(8));40524052+40534053+ /* PSE should not drop port8, port9 and port13 packets from40544054+ * WDMA Tx40554055+ */40564056+ mtk_w32(eth, 0x00002300, PSE_DROP_CFG);40574057+40584058+ /* PSE should drop packets to port8, port9 and port13 on WDMA Rx40594059+ * ring full40604060+ */40614061+ mtk_w32(eth, 0x00002300, PSE_PPE_DROP(0));40624062+ mtk_w32(eth, 0x00002300, PSE_PPE_DROP(1));40634063+ mtk_w32(eth, 0x00002300, PSE_PPE_DROP(2));4048406440494065 /* GDM and CDM Threshold */40504050- mtk_w32(eth, 0x00000707, MTK_CDMW0_THRES);40664066+ mtk_w32(eth, 0x08000707, MTK_CDMW0_THRES);40514067 mtk_w32(eth, 0x00000077, MTK_CDMW1_THRES);4052406840534069 /* Disable GDM1 RX CRC stripping */···40804064 mtk_w32(eth, 0x00000300, PSE_DROP_CFG);4081406540824066 /* PSE should drop packets to port 8/9 on WDMA Rx ring full */40834083- mtk_w32(eth, 0x00000300, PSE_PPE0_DROP);40674067+ mtk_w32(eth, 0x00000300, PSE_PPE_DROP(0));4084406840854069 /* PSE Free Queue Flow Control */40864070 mtk_w32(eth, 0x01fa01f4, PSE_FQFC_CFG2);
+9-1
drivers/net/ethernet/mediatek/mtk_eth_soc.h
···151151#define PSE_FQFC_CFG1 0x100152152#define PSE_FQFC_CFG2 0x104153153#define PSE_DROP_CFG 0x108154154-#define PSE_PPE0_DROP 0x110154154+#define PSE_PPE_DROP(x) (0x110 + ((x) * 0x4))155155+156156+/* PSE Last FreeQ Page Request Control */157157+#define PSE_DUMY_REQ 0x10C158158+/* PSE_DUMY_REQ is not a typo but actually called like that also in159159+ * MediaTek's datasheet160160+ */161161+#define PSE_DUMMY_WORK_GDM(x) BIT(16 + (x))162162+#define DUMMY_PAGE_THR 0x1155163156164/* PSE Input Queue Reservation Register*/157165#define PSE_IQ_REV(x) (0x140 + (((x) - 1) << 2))
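The netsys v3 init path above programs raw bitmasks, so the magic values are easier to audit once decomposed. A short sketch that rebuilds them from the new header macros (the per-port meaning of the drop mask follows the comments in the diff; PSE_DUMY_REQ really is spelled that way, per the header note):

/* Decode the PSE register values written above (illustrative only). */
#include <stdio.h>
#include <stdint.h>

#define BIT(n)			(1U << (n))
#define PSE_DUMMY_WORK_GDM(x)	BIT(16 + (x))
#define DUMMY_PAGE_THR		0x1

int main(void)
{
	/* Ports 8, 9 and 13 (the WDMA Tx/Rx paths named in the comments). */
	uint32_t drop_mask = BIT(8) | BIT(9) | BIT(13);

	/* Dummy page work for GDM1..GDM3 plus the page threshold. */
	uint32_t dumy_req = PSE_DUMMY_WORK_GDM(1) | PSE_DUMMY_WORK_GDM(2) |
			    PSE_DUMMY_WORK_GDM(3) | DUMMY_PAGE_THR;

	printf("drop mask = 0x%08x\n", drop_mask);	/* 0x00002300 */
	printf("dummy req = 0x%08x\n", dumy_req);	/* 0x000e0001 */
	return 0;
}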
···3737 return __phy_write(phydev, LAN88XX_EXT_PAGE_ACCESS, page);3838}39394040-static int lan88xx_phy_config_intr(struct phy_device *phydev)4141-{4242- int rc;4343-4444- if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {4545- /* unmask all source and clear them before enable */4646- rc = phy_write(phydev, LAN88XX_INT_MASK, 0x7FFF);4747- rc = phy_read(phydev, LAN88XX_INT_STS);4848- rc = phy_write(phydev, LAN88XX_INT_MASK,4949- LAN88XX_INT_MASK_MDINTPIN_EN_ |5050- LAN88XX_INT_MASK_LINK_CHANGE_);5151- } else {5252- rc = phy_write(phydev, LAN88XX_INT_MASK, 0);5353- if (rc)5454- return rc;5555-5656- /* Ack interrupts after they have been disabled */5757- rc = phy_read(phydev, LAN88XX_INT_STS);5858- }5959-6060- return rc < 0 ? rc : 0;6161-}6262-6363-static irqreturn_t lan88xx_handle_interrupt(struct phy_device *phydev)6464-{6565- int irq_status;6666-6767- irq_status = phy_read(phydev, LAN88XX_INT_STS);6868- if (irq_status < 0) {6969- phy_error(phydev);7070- return IRQ_NONE;7171- }7272-7373- if (!(irq_status & LAN88XX_INT_STS_LINK_CHANGE_))7474- return IRQ_NONE;7575-7676- phy_trigger_machine(phydev);7777-7878- return IRQ_HANDLED;7979-}8080-8140static int lan88xx_suspend(struct phy_device *phydev)8241{8342 struct lan88xx_priv *priv = phydev->priv;···487528 .config_aneg = lan88xx_config_aneg,488529 .link_change_notify = lan88xx_link_change_notify,489530490490- .config_intr = lan88xx_phy_config_intr,491491- .handle_interrupt = lan88xx_handle_interrupt,531531+ /* Interrupt handling is broken, do not define related532532+ * functions to force polling.533533+ */492534493535 .suspend = lan88xx_suspend,494536 .resume = genphy_resume,
+13-10
drivers/net/phy/phy_led_triggers.c
···9393 if (!phy->phy_num_led_triggers)9494 return 0;95959696- phy->led_link_trigger = devm_kzalloc(&phy->mdio.dev,9797- sizeof(*phy->led_link_trigger),9898- GFP_KERNEL);9696+ phy->led_link_trigger = kzalloc(sizeof(*phy->led_link_trigger),9797+ GFP_KERNEL);9998 if (!phy->led_link_trigger) {10099 err = -ENOMEM;101100 goto out_clear;···104105 if (err)105106 goto out_free_link;106107107107- phy->phy_led_triggers = devm_kcalloc(&phy->mdio.dev,108108- phy->phy_num_led_triggers,109109- sizeof(struct phy_led_trigger),110110- GFP_KERNEL);108108+ phy->phy_led_triggers = kcalloc(phy->phy_num_led_triggers,109109+ sizeof(struct phy_led_trigger),110110+ GFP_KERNEL);111111 if (!phy->phy_led_triggers) {112112 err = -ENOMEM;113113 goto out_unreg_link;···127129out_unreg:128130 while (i--)129131 phy_led_trigger_unregister(&phy->phy_led_triggers[i]);130130- devm_kfree(&phy->mdio.dev, phy->phy_led_triggers);132132+ kfree(phy->phy_led_triggers);131133out_unreg_link:132134 phy_led_trigger_unregister(phy->led_link_trigger);133135out_free_link:134134- devm_kfree(&phy->mdio.dev, phy->led_link_trigger);136136+ kfree(phy->led_link_trigger);135137 phy->led_link_trigger = NULL;136138out_clear:137139 phy->phy_num_led_triggers = 0;···145147146148 for (i = 0; i < phy->phy_num_led_triggers; i++)147149 phy_led_trigger_unregister(&phy->phy_led_triggers[i]);150150+ kfree(phy->phy_led_triggers);151151+ phy->phy_led_triggers = NULL;148152149149- if (phy->led_link_trigger)153153+ if (phy->led_link_trigger) {150154 phy_led_trigger_unregister(phy->led_link_trigger);155155+ kfree(phy->led_link_trigger);156156+ phy->led_link_trigger = NULL;157157+ }151158}152159EXPORT_SYMBOL_GPL(phy_led_triggers_unregister);
+22-16
drivers/net/phy/phylink.c
···8181 unsigned int pcs_state;82828383 bool link_failed;8484+ bool suspend_link_up;8485 bool major_config_failed;8586 bool mac_supports_eee_ops;8687 bool mac_supports_eee;···25462545 /* Stop the resolver bringing the link up */25472546 __set_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state);2548254725492549- /* Disable the carrier, to prevent transmit timeouts,25502550- * but one would hope all packets have been sent. This25512551- * also means phylink_resolve() will do nothing.25522552- */25532553- if (pl->netdev)25542554- netif_carrier_off(pl->netdev);25552555- else25482548+ pl->suspend_link_up = phylink_link_is_up(pl);25492549+ if (pl->suspend_link_up) {25502550+ /* Disable the carrier, to prevent transmit timeouts,25512551+ * but one would hope all packets have been sent. This25522552+ * also means phylink_resolve() will do nothing.25532553+ */25542554+ if (pl->netdev)25552555+ netif_carrier_off(pl->netdev);25562556 pl->old_link_state = false;25572557+ }2557255825582559 /* We do not call mac_link_down() here as we want the25592560 * link to remain up to receive the WoL packets.···26062603 if (test_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state)) {26072604 /* Wake-on-Lan enabled, MAC handling */2608260526092609- /* Call mac_link_down() so we keep the overall state balanced.26102610- * Do this under the state_mutex lock for consistency. This26112611- * will cause a "Link Down" message to be printed during26122612- * resume, which is harmless - the true link state will be26132613- * printed when we run a resolve.26142614- */26152615- mutex_lock(&pl->state_mutex);26162616- phylink_link_down(pl);26172617- mutex_unlock(&pl->state_mutex);26062606+ if (pl->suspend_link_up) {26072607+ /* Call mac_link_down() so we keep the overall state26082608+ * balanced. Do this under the state_mutex lock for26092609+ * consistency. This will cause a "Link Down" message26102610+ * to be printed during resume, which is harmless -26112611+ * the true link state will be printed when we run a26122612+ * resolve.26132613+ */26142614+ mutex_lock(&pl->state_mutex);26152615+ phylink_link_down(pl);26162616+ mutex_unlock(&pl->state_mutex);26172617+ }2618261826192619 /* Re-apply the link parameters so that all the settings get26202620 * restored to the MAC.
+57-12
drivers/net/virtio_net.c
···33423342 return NETDEV_TX_OK;33433343}3344334433453345-static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)33453345+static void __virtnet_rx_pause(struct virtnet_info *vi,33463346+ struct receive_queue *rq)33463347{33473348 bool running = netif_running(vi->dev);33483349···33533352 }33543353}3355335433563356-static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq)33553355+static void virtnet_rx_pause_all(struct virtnet_info *vi)33563356+{33573357+ int i;33583358+33593359+ /*33603360+ * Make sure refill_work does not run concurrently to33613361+ * avoid napi_disable race which leads to deadlock.33623362+ */33633363+ disable_delayed_refill(vi);33643364+ cancel_delayed_work_sync(&vi->refill);33653365+ for (i = 0; i < vi->max_queue_pairs; i++)33663366+ __virtnet_rx_pause(vi, &vi->rq[i]);33673367+}33683368+33693369+static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)33703370+{33713371+ /*33723372+ * Make sure refill_work does not run concurrently to33733373+ * avoid napi_disable race which leads to deadlock.33743374+ */33753375+ disable_delayed_refill(vi);33763376+ cancel_delayed_work_sync(&vi->refill);33773377+ __virtnet_rx_pause(vi, rq);33783378+}33793379+33803380+static void __virtnet_rx_resume(struct virtnet_info *vi,33813381+ struct receive_queue *rq,33823382+ bool refill)33573383{33583384 bool running = netif_running(vi->dev);3359338533603360- if (!try_fill_recv(vi, rq, GFP_KERNEL))33863386+ if (refill && !try_fill_recv(vi, rq, GFP_KERNEL))33613387 schedule_delayed_work(&vi->refill, 0);3362338833633389 if (running)33643390 virtnet_napi_enable(rq);33913391+}33923392+33933393+static void virtnet_rx_resume_all(struct virtnet_info *vi)33943394+{33953395+ int i;33963396+33973397+ enable_delayed_refill(vi);33983398+ for (i = 0; i < vi->max_queue_pairs; i++) {33993399+ if (i < vi->curr_queue_pairs)34003400+ __virtnet_rx_resume(vi, &vi->rq[i], true);34013401+ else34023402+ __virtnet_rx_resume(vi, &vi->rq[i], false);34033403+ }34043404+}34053405+34063406+static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq)34073407+{34083408+ enable_delayed_refill(vi);34093409+ __virtnet_rx_resume(vi, rq, true);33653410}3366341133673412static int virtnet_rx_resize(struct virtnet_info *vi,···60065959 if (prog)60075960 bpf_prog_add(prog, vi->max_queue_pairs - 1);6008596159625962+ virtnet_rx_pause_all(vi);59635963+60095964 /* Make sure NAPI is not using any XDP TX queues for RX. 
*/60105965 if (netif_running(dev)) {60116011- for (i = 0; i < vi->max_queue_pairs; i++) {60126012- virtnet_napi_disable(&vi->rq[i]);59665966+ for (i = 0; i < vi->max_queue_pairs; i++)60135967 virtnet_napi_tx_disable(&vi->sq[i]);60146014- }60155968 }6016596960175970 if (!prog) {···60435996 vi->xdp_enabled = false;60445997 }6045599859995999+ virtnet_rx_resume_all(vi);60466000 for (i = 0; i < vi->max_queue_pairs; i++) {60476001 if (old_prog)60486002 bpf_prog_put(old_prog);60496049- if (netif_running(dev)) {60506050- virtnet_napi_enable(&vi->rq[i]);60036003+ if (netif_running(dev))60516004 virtnet_napi_tx_enable(&vi->sq[i]);60526052- }60536005 }6054600660556007 return 0;···60606014 rcu_assign_pointer(vi->rq[i].xdp_prog, old_prog);60616015 }6062601660176017+ virtnet_rx_resume_all(vi);60636018 if (netif_running(dev)) {60646064- for (i = 0; i < vi->max_queue_pairs; i++) {60656065- virtnet_napi_enable(&vi->rq[i]);60196019+ for (i = 0; i < vi->max_queue_pairs; i++)60666020 virtnet_napi_tx_enable(&vi->sq[i]);60676067- }60686021 }60696022 if (prog)60706023 bpf_prog_sub(prog, vi->max_queue_pairs - 1);
+13-6
drivers/net/xen-netfront.c
···985985 act = bpf_prog_run_xdp(prog, xdp);986986 switch (act) {987987 case XDP_TX:988988- get_page(pdata);989988 xdpf = xdp_convert_buff_to_frame(xdp);990990- err = xennet_xdp_xmit(queue->info->netdev, 1, &xdpf, 0);991991- if (unlikely(!err))992992- xdp_return_frame_rx_napi(xdpf);993993- else if (unlikely(err < 0))989989+ if (unlikely(!xdpf)) {994990 trace_xdp_exception(queue->info->netdev, prog, act);991991+ break;992992+ }993993+ get_page(pdata);994994+ err = xennet_xdp_xmit(queue->info->netdev, 1, &xdpf, 0);995995+ if (unlikely(err <= 0)) {996996+ if (err < 0)997997+ trace_xdp_exception(queue->info->netdev, prog, act);998998+ xdp_return_frame_rx_napi(xdpf);999999+ }9951000 break;9961001 case XDP_REDIRECT:9971002 get_page(pdata);9981003 err = xdp_do_redirect(queue->info->netdev, xdp, prog);9991004 *need_xdp_flush = true;10001000- if (unlikely(err))10051005+ if (unlikely(err)) {10011006 trace_xdp_exception(queue->info->netdev, prog, act);10071007+ xdp_return_buff(xdp);10081008+ }10021009 break;10031010 case XDP_PASS:10041011 case XDP_DROP:
+3
drivers/nvme/target/core.c
···324324325325 lockdep_assert_held(&nvmet_config_sem);326326327327+ if (port->disc_addr.trtype == NVMF_TRTYPE_MAX)328328+ return -EINVAL;329329+327330 ops = nvmet_transports[port->disc_addr.trtype];328331 if (!ops) {329332 up_write(&nvmet_config_sem);
+32-8
drivers/nvmem/core.c
···594594 cell->nbits = info->nbits;595595 cell->np = info->np;596596597597- if (cell->nbits)597597+ if (cell->nbits) {598598 cell->bytes = DIV_ROUND_UP(cell->nbits + cell->bit_offset,599599 BITS_PER_BYTE);600600+ cell->raw_len = ALIGN(cell->bytes, nvmem->word_size);601601+ }600602601603 if (!IS_ALIGNED(cell->offset, nvmem->stride)) {602604 dev_err(&nvmem->dev,603605 "cell %s unaligned to nvmem stride %d\n",604606 cell->name ?: "<unknown>", nvmem->stride);605607 return -EINVAL;608608+ }609609+610610+ if (!IS_ALIGNED(cell->raw_len, nvmem->word_size)) {611611+ dev_err(&nvmem->dev,612612+ "cell %s raw len %zd unaligned to nvmem word size %d\n",613613+ cell->name ?: "<unknown>", cell->raw_len,614614+ nvmem->word_size);615615+616616+ if (info->raw_len)617617+ return -EINVAL;618618+619619+ cell->raw_len = ALIGN(cell->raw_len, nvmem->word_size);606620 }607621608622 return 0;···851837 if (addr && len == (2 * sizeof(u32))) {852838 info.bit_offset = be32_to_cpup(addr++);853839 info.nbits = be32_to_cpup(addr);854854- if (info.bit_offset >= BITS_PER_BYTE || info.nbits < 1) {840840+ if (info.bit_offset >= BITS_PER_BYTE * info.bytes ||841841+ info.nbits < 1 ||842842+ info.bit_offset + info.nbits > BITS_PER_BYTE * info.bytes) {855843 dev_err(dev, "nvmem: invalid bits on %pOF\n", child);856844 of_node_put(child);857845 return -EINVAL;···16461630static void nvmem_shift_read_buffer_in_place(struct nvmem_cell_entry *cell, void *buf)16471631{16481632 u8 *p, *b;16491649- int i, extra, bit_offset = cell->bit_offset;16331633+ int i, extra, bytes_offset;16341634+ int bit_offset = cell->bit_offset;1650163516511636 p = b = buf;16521652- if (bit_offset) {16371637+16381638+ bytes_offset = bit_offset / BITS_PER_BYTE;16391639+ b += bytes_offset;16401640+ bit_offset %= BITS_PER_BYTE;16411641+16421642+ if (bit_offset % BITS_PER_BYTE) {16531643 /* First shift */16541654- *b++ >>= bit_offset;16441644+ *p = *b++ >> bit_offset;1655164516561646 /* setup rest of the bytes if any */16571647 for (i = 1; i < cell->bytes; i++) {16581648 /* Get bits from next byte and shift them towards msb */16591659- *p |= *b << (BITS_PER_BYTE - bit_offset);16491649+ *p++ |= *b << (BITS_PER_BYTE - bit_offset);1660165016611661- p = b;16621662- *b++ >>= bit_offset;16511651+ *p = *b++ >> bit_offset;16631652 }16531653+ } else if (p != b) {16541654+ memmove(p, b, cell->bytes - bytes_offset);16551655+ p += cell->bytes - 1;16641656 } else {16651657 /* point to the msb */16661658 p += cell->bytes - 1;
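The reworked shift helper above no longer assumes the cell starts within the first byte: it first skips whole bytes of bit_offset (falling back to a memmove when the remaining sub-byte shift is zero) and only then shifts the remainder across byte boundaries. A simplified, self-contained version of the same idea (the kernel helper additionally trims trailing bits and bytes, which is omitted here):

/* Simplified userspace version of shifting an nvmem cell value into place:
 * skip whole bytes of bit_offset first, then shift out the sub-byte part.
 * (Illustrative only; trailing-bit masking is omitted.) */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static void shift_in_place(uint8_t *buf, size_t buf_len,
			   unsigned int bit_offset, size_t value_bytes)
{
	size_t byte_off = bit_offset / 8;
	unsigned int shift = bit_offset % 8;
	uint8_t *src = buf + byte_off;

	if (shift) {
		for (size_t i = 0; i < value_bytes; i++) {
			uint8_t next = (byte_off + i + 1 < buf_len) ? src[i + 1] : 0;

			/* low bits from this byte, high bits from the next one */
			buf[i] = (src[i] >> shift) | (next << (8 - shift));
		}
	} else if (byte_off) {
		memmove(buf, src, value_bytes);
	}
}

int main(void)
{
	/* Raw words as read from the device: an 8-bit field with value 0xAB
	 * stored at bit offset 12. */
	uint8_t raw[4] = { 0x00, 0xB0, 0x0A, 0x00 };

	shift_in_place(raw, sizeof(raw), 12, 1);
	printf("cell value = 0x%02x\n", raw[0]);	/* 0xab */
	return 0;
}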
+20-6
drivers/nvmem/qfprom.c
···321321 unsigned int reg, void *_val, size_t bytes)322322{323323 struct qfprom_priv *priv = context;324324- u8 *val = _val;325325- int i = 0, words = bytes;324324+ u32 *val = _val;326325 void __iomem *base = priv->qfpcorrected;326326+ int words = DIV_ROUND_UP(bytes, sizeof(u32));327327+ int i;327328328329 if (read_raw_data && priv->qfpraw)329330 base = priv->qfpraw;330331331331- while (words--)332332- *val++ = readb(base + reg + i++);332332+ for (i = 0; i < words; i++)333333+ *val++ = readl(base + reg + i * sizeof(u32));333334334335 return 0;336336+}337337+338338+/* Align reads to word boundary */339339+static void qfprom_fixup_dt_cell_info(struct nvmem_device *nvmem,340340+ struct nvmem_cell_info *cell)341341+{342342+ unsigned int byte_offset = cell->offset % sizeof(u32);343343+344344+ cell->bit_offset += byte_offset * BITS_PER_BYTE;345345+ cell->offset -= byte_offset;346346+ if (byte_offset && !cell->nbits)347347+ cell->nbits = cell->bytes * BITS_PER_BYTE;335348}336349337350static void qfprom_runtime_disable(void *data)···371358 struct nvmem_config econfig = {372359 .name = "qfprom",373360 .add_legacy_fixed_of_cells = true,374374- .stride = 1,375375- .word_size = 1,361361+ .stride = 4,362362+ .word_size = 4,376363 .id = NVMEM_DEVID_AUTO,377364 .reg_read = qfprom_reg_read,365365+ .fixup_dt_cell_info = qfprom_fixup_dt_cell_info,378366 };379367 struct device *dev = &pdev->dev;380368 struct resource *res;
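Moving qfprom to 32-bit reads forces every cell to be described on a word boundary, so the new fixup hook translates a byte-granular DT cell into a word-aligned offset plus a compensating bit offset. The same transformation in isolation (standalone struct and helper, not the nvmem cell-info API):

/* Standalone sketch of the "align reads to word boundary" cell fixup above
 * (the struct and helper are illustrative, not the nvmem API). */
#include <stdio.h>

struct cell {
	unsigned int offset;		/* byte offset into the fuse region */
	unsigned int bytes;		/* cell length in bytes */
	unsigned int bit_offset;	/* bit offset within the first byte */
	unsigned int nbits;		/* 0 means "whole bytes" */
};

static void align_cell_to_word(struct cell *c)
{
	unsigned int byte_offset = c->offset % 4;

	c->bit_offset += byte_offset * 8;
	c->offset -= byte_offset;
	if (byte_offset && !c->nbits)
		c->nbits = c->bytes * 8;
}

int main(void)
{
	/* A 2-byte fuse field that starts 6 bytes into the region. */
	struct cell c = { .offset = 6, .bytes = 2, .bit_offset = 0, .nbits = 0 };

	align_cell_to_word(&c);

	/* Now described as: the word at offset 4, value in bits 16..31. */
	printf("offset=%u bit_offset=%u nbits=%u\n",
	       c.offset, c.bit_offset, c.nbits);
	return 0;
}

Together with the nvmem core change earlier in this series (which accepts bit offsets larger than a byte and shifts whole bytes first), the fixed-up cell reads back the same value it did with byte-wide accesses.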
···707707 */708708int scsi_cdl_enable(struct scsi_device *sdev, bool enable)709709{710710- struct scsi_mode_data data;711711- struct scsi_sense_hdr sshdr;712712- struct scsi_vpd *vpd;713713- bool is_ata = false;714710 char buf[64];711711+ bool is_ata;715712 int ret;716713717714 if (!sdev->cdl_supported)718715 return -EOPNOTSUPP;719716720717 rcu_read_lock();721721- vpd = rcu_dereference(sdev->vpd_pg89);722722- if (vpd)723723- is_ata = true;718718+ is_ata = rcu_dereference(sdev->vpd_pg89);724719 rcu_read_unlock();725720726721 /*727722 * For ATA devices, CDL needs to be enabled with a SET FEATURES command.728723 */729724 if (is_ata) {725725+ struct scsi_mode_data data;726726+ struct scsi_sense_hdr sshdr;730727 char *buf_data;731728 int len;732729···732735 if (ret)733736 return -EINVAL;734737735735- /* Enable CDL using the ATA feature page */738738+ /* Enable or disable CDL using the ATA feature page */736739 len = min_t(size_t, sizeof(buf),737740 data.length - data.header_length -738741 data.block_descriptor_length);739742 buf_data = buf + data.header_length +740743 data.block_descriptor_length;741741- if (enable)742742- buf_data[4] = 0x02;743743- else744744- buf_data[4] = 0;744744+745745+ /*746746+ * If we want to enable CDL and CDL is already enabled on the747747+ * device, do nothing. This avoids needlessly resetting the CDL748748+ * statistics on the device as that is implied by the CDL enable749749+ * action. Similar to this, there is no need to do anything if750750+ * we want to disable CDL and CDL is already disabled.751751+ */752752+ if (enable) {753753+ if ((buf_data[4] & 0x03) == 0x02)754754+ goto out;755755+ buf_data[4] &= ~0x03;756756+ buf_data[4] |= 0x02;757757+ } else {758758+ if ((buf_data[4] & 0x03) == 0x00)759759+ goto out;760760+ buf_data[4] &= ~0x03;761761+ }745762746763 ret = scsi_mode_select(sdev, 1, 0, buf_data, len, 5 * HZ, 3,747764 &data, &sshdr);···767756 }768757 }769758759759+out:770760 sdev->cdl_enable = enable;771761772762 return 0;
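The CDL enable path above now treats byte 4 of the ATA feature sub-page as a two-bit field (0x02 = CDL enabled, 0x00 = disabled) and skips the MODE SELECT entirely when the device is already in the requested state, so the device's CDL statistics are not reset needlessly. A small sketch of just that field handling (the bit layout is taken from the diff; everything else is scaffolding):

/* Sketch of the two-bit CDL enable field handling from the change above.
 * The field layout (bits 1:0 of byte 4, 0x02 = enabled) follows the diff;
 * the rest is illustrative scaffolding. */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

/* Returns true if a MODE SELECT is actually needed. */
static bool cdl_update_byte(uint8_t *byte4, bool enable)
{
	uint8_t cur = *byte4 & 0x03;
	uint8_t want = enable ? 0x02 : 0x00;

	if (cur == want)
		return false;		/* already in the requested state */

	*byte4 &= ~0x03;		/* clear the field ... */
	*byte4 |= want;			/* ... and set the new value */
	return true;
}

int main(void)
{
	uint8_t byte4 = 0x02;		/* device already has CDL enabled */
	bool need;

	need = cdl_update_byte(&byte4, true);
	printf("enable again -> select needed: %d\n", need);	/* 0: skip */

	need = cdl_update_byte(&byte4, false);
	printf("disable -> select needed: %d, byte4=0x%02x\n", need, byte4);
	return 0;
}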
···193193 return -EFAULT;194194195195 /*196196- * TIOCL_SELCLEAR, TIOCL_SELPOINTER and TIOCL_SELMOUSEREPORT are OK to197197- * use without CAP_SYS_ADMIN as they do not modify the selection.196196+ * TIOCL_SELCLEAR and TIOCL_SELPOINTER are OK to use without197197+ * CAP_SYS_ADMIN as they do not modify the selection.198198 */199199 switch (v.sel_mode) {200200 case TIOCL_SELCLEAR:201201 case TIOCL_SELPOINTER:202202- case TIOCL_SELMOUSEREPORT:203202 break;204203 default:205204 if (!capable(CAP_SYS_ADMIN))
+5-7
drivers/ufs/core/ufs-mcq.c
···677677 unsigned long flags;678678 int err;679679680680- if (!ufshcd_cmd_inflight(lrbp->cmd)) {681681- dev_err(hba->dev,682682- "%s: skip abort. cmd at tag %d already completed.\n",683683- __func__, tag);684684- return FAILED;685685- }686686-687680 /* Skip task abort in case previous aborts failed and report failure */688681 if (lrbp->req_abort_skip) {689682 dev_err(hba->dev, "%s: skip abort. tag %d failed earlier\n",···685692 }686693687694 hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd));695695+ if (!hwq) {696696+ dev_err(hba->dev, "%s: skip abort. cmd at tag %d already completed.\n",697697+ __func__, tag);698698+ return FAILED;699699+ }688700689701 if (ufshcd_mcq_sqe_search(hba, hwq, tag)) {690702 /*
+31
drivers/ufs/core/ufshcd.c
···278278 .model = UFS_ANY_MODEL,279279 .quirk = UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM |280280 UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE |281281+ UFS_DEVICE_QUIRK_PA_HIBER8TIME |281282 UFS_DEVICE_QUIRK_RECOVERY_FROM_DL_NAC_ERRORS },282283 { .wmanufacturerid = UFS_VENDOR_SKHYNIX,283284 .model = UFS_ANY_MODEL,···56785677 continue;5679567856805679 hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd));56805680+ if (!hwq)56815681+ continue;5681568256825683 if (force_compl) {56835684 ufshcd_mcq_compl_all_cqes_lock(hba, hwq);···84738470 return ret;84748471}8475847284738473+/**84748474+ * ufshcd_quirk_override_pa_h8time - Ensures proper adjustment of PA_HIBERN8TIME.84758475+ * @hba: per-adapter instance84768476+ *84778477+ * Some UFS devices require specific adjustments to the PA_HIBERN8TIME parameter84788478+ * to ensure proper hibernation timing. This function retrieves the current84798479+ * PA_HIBERN8TIME value and increments it by 100us.84808480+ */84818481+static void ufshcd_quirk_override_pa_h8time(struct ufs_hba *hba)84828482+{84838483+ u32 pa_h8time;84848484+ int ret;84858485+84868486+ ret = ufshcd_dme_get(hba, UIC_ARG_MIB(PA_HIBERN8TIME), &pa_h8time);84878487+ if (ret) {84888488+ dev_err(hba->dev, "Failed to get PA_HIBERN8TIME: %d\n", ret);84898489+ return;84908490+ }84918491+84928492+ /* Increment by 1 to increase hibernation time by 100 µs */84938493+ ret = ufshcd_dme_set(hba, UIC_ARG_MIB(PA_HIBERN8TIME), pa_h8time + 1);84948494+ if (ret)84958495+ dev_err(hba->dev, "Failed updating PA_HIBERN8TIME: %d\n", ret);84968496+}84978497+84768498static void ufshcd_tune_unipro_params(struct ufs_hba *hba)84778499{84788500 ufshcd_vops_apply_dev_quirks(hba);···8508848085098481 if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE)85108482 ufshcd_quirk_tune_host_pa_tactivate(hba);84838483+84848484+ if (hba->dev_quirks & UFS_DEVICE_QUIRK_PA_HIBER8TIME)84858485+ ufshcd_quirk_override_pa_h8time(hba);85118486}8512848785138488static void ufshcd_clear_dbg_ufs_stats(struct ufs_hba *hba)
+43
drivers/ufs/host/ufs-qcom.c
···3333 ((((c) >> 16) & MCQ_QCFGPTR_MASK) * MCQ_QCFGPTR_UNIT)3434#define MCQ_QCFG_SIZE 0x4035353636+/* De-emphasis for gear-5 */3737+#define DEEMPHASIS_3_5_dB 0x043838+#define NO_DEEMPHASIS 0x03939+3640enum {3741 TSTBUS_UAWM,3842 TSTBUS_UARM,···799795 return ufs_qcom_icc_set_bw(host, bw_table.mem_bw, bw_table.cfg_bw);800796}801797798798+static void ufs_qcom_set_tx_hs_equalizer(struct ufs_hba *hba, u32 gear, u32 tx_lanes)799799+{800800+ u32 equalizer_val;801801+ int ret, i;802802+803803+ /* Determine the equalizer value based on the gear */804804+ equalizer_val = (gear == 5) ? DEEMPHASIS_3_5_dB : NO_DEEMPHASIS;805805+806806+ for (i = 0; i < tx_lanes; i++) {807807+ ret = ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(TX_HS_EQUALIZER, i),808808+ equalizer_val);809809+ if (ret)810810+ dev_err(hba->dev, "%s: failed equalizer lane %d\n",811811+ __func__, i);812812+ }813813+}814814+802815static int ufs_qcom_pwr_change_notify(struct ufs_hba *hba,803816 enum ufs_notify_change_status status,804817 const struct ufs_pa_layer_attr *dev_max_params,···867846 dev_req_params->gear_tx,868847 PA_INITIAL_ADAPT);869848 }849849+850850+ if (hba->dev_quirks & UFS_DEVICE_QUIRK_PA_TX_DEEMPHASIS_TUNING)851851+ ufs_qcom_set_tx_hs_equalizer(hba,852852+ dev_req_params->gear_tx, dev_req_params->lane_tx);853853+870854 break;871855 case POST_CHANGE:872856 if (ufs_qcom_cfg_timers(hba, false)) {···919893 (pa_vs_config_reg1 | (1 << 12)));920894}921895896896+static void ufs_qcom_override_pa_tx_hsg1_sync_len(struct ufs_hba *hba)897897+{898898+ int err;899899+900900+ err = ufshcd_dme_peer_set(hba, UIC_ARG_MIB(PA_TX_HSG1_SYNC_LENGTH),901901+ PA_TX_HSG1_SYNC_LENGTH_VAL);902902+ if (err)903903+ dev_err(hba->dev, "Failed (%d) set PA_TX_HSG1_SYNC_LENGTH\n", err);904904+}905905+922906static int ufs_qcom_apply_dev_quirks(struct ufs_hba *hba)923907{924908 int err = 0;925909926910 if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME)927911 err = ufs_qcom_quirk_host_pa_saveconfigtime(hba);912912+913913+ if (hba->dev_quirks & UFS_DEVICE_QUIRK_PA_TX_HSG1_SYNC_LENGTH)914914+ ufs_qcom_override_pa_tx_hsg1_sync_len(hba);928915929916 return err;930917}···953914 { .wmanufacturerid = UFS_VENDOR_WDC,954915 .model = UFS_ANY_MODEL,955916 .quirk = UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE },917917+ { .wmanufacturerid = UFS_VENDOR_SAMSUNG,918918+ .model = UFS_ANY_MODEL,919919+ .quirk = UFS_DEVICE_QUIRK_PA_TX_HSG1_SYNC_LENGTH |920920+ UFS_DEVICE_QUIRK_PA_TX_DEEMPHASIS_TUNING },956921 {}957922};958923
+18
drivers/ufs/host/ufs-qcom.h
···122122 TMRLUT_HW_CGC_EN | OCSC_HW_CGC_EN)123123124124/* QUniPro Vendor specific attributes */125125+#define PA_TX_HSG1_SYNC_LENGTH 0x1552125126#define PA_VS_CONFIG_REG1 0x9000126127#define DME_VS_CORE_CLK_CTRL 0xD002128128+#define TX_HS_EQUALIZER 0x0037129129+127130/* bit and mask definitions for DME_VS_CORE_CLK_CTRL attribute */128131#define CLK_1US_CYCLES_MASK_V4 GENMASK(27, 16)129132#define CLK_1US_CYCLES_MASK GENMASK(7, 0)···143140#define UNIPRO_CORE_CLK_FREQ_300_MHZ 300144141#define UNIPRO_CORE_CLK_FREQ_201_5_MHZ 202145142#define UNIPRO_CORE_CLK_FREQ_403_MHZ 403143143+144144+/* TX_HSG1_SYNC_LENGTH attr value */145145+#define PA_TX_HSG1_SYNC_LENGTH_VAL 0x4A146146+147147+/*148148+ * Some ufs device vendors need a different TSync length.149149+ * Enable this quirk to give an additional TX_HS_SYNC_LENGTH.150150+ */151151+#define UFS_DEVICE_QUIRK_PA_TX_HSG1_SYNC_LENGTH BIT(16)152152+153153+/*154154+ * Some ufs device vendors need a different Deemphasis setting.155155+ * Enable this quirk to tune TX Deemphasis parameters.156156+ */157157+#define UFS_DEVICE_QUIRK_PA_TX_DEEMPHASIS_TUNING BIT(17)146158147159/* ICE allocator type to share AES engines among TX stream and RX stream */148160#define ICE_ALLOCATOR_TYPE 2
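The Qcom host changes above boil down to two per-device quirks: override the HS-G1 TX sync length, and pick the TX de-emphasis setting from the negotiated gear (3.5 dB only for HS-Gear 5). A compact sketch of the gear-based selection and of how the new quirk bits compose (constants copied from the header diff; the surrounding code is scaffolding):

/* Sketch of the gear-based de-emphasis selection and the new quirk bits
 * (constants from the header diff above; the rest is scaffolding). */
#include <stdio.h>
#include <stdint.h>

#define BIT(n)						(1UL << (n))
#define DEEMPHASIS_3_5_dB				0x04
#define NO_DEEMPHASIS					0x03
#define UFS_DEVICE_QUIRK_PA_TX_HSG1_SYNC_LENGTH		BIT(16)
#define UFS_DEVICE_QUIRK_PA_TX_DEEMPHASIS_TUNING	BIT(17)

static uint32_t tx_hs_equalizer_for_gear(uint32_t gear)
{
	/* Only HS-Gear 5 gets the 3.5 dB de-emphasis setting. */
	return gear == 5 ? DEEMPHASIS_3_5_dB : NO_DEEMPHASIS;
}

int main(void)
{
	unsigned long quirks = UFS_DEVICE_QUIRK_PA_TX_HSG1_SYNC_LENGTH |
			       UFS_DEVICE_QUIRK_PA_TX_DEEMPHASIS_TUNING;

	printf("gear 4 -> equalizer 0x%02x\n", tx_hs_equalizer_for_gear(4));
	printf("gear 5 -> equalizer 0x%02x\n", tx_hs_equalizer_for_gear(5));
	printf("quirk mask = 0x%05lx\n", quirks);	/* 0x30000 */
	return 0;
}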
···165165 return 0;166166}167167168168+static int ohci_quirk_loongson(struct usb_hcd *hcd)169169+{170170+ struct pci_dev *pdev = to_pci_dev(hcd->self.controller);171171+172172+ /*173173+ * Loongson's LS7A OHCI controller (rev 0x02) has a174174+ * flaw. MMIO register with offset 0x60/64 is treated175175+ * as legacy PS2-compatible keyboard/mouse interface.176176+ * Since OHCI only use 4KB BAR resource, LS7A OHCI's177177+ * 32KB BAR is wrapped around (the 2nd 4KB BAR space178178+ * is the same as the 1st 4KB internally). So add 4KB179179+ * offset (0x1000) to the OHCI registers as a quirk.180180+ */181181+ if (pdev->revision == 0x2)182182+ hcd->regs += SZ_4K; /* SZ_4K = 0x1000 */183183+184184+ return 0;185185+}186186+168187static int ohci_quirk_qemu(struct usb_hcd *hcd)169188{170189 struct ohci_hcd *ohci = hcd_to_ohci(hcd);···242223 {243224 PCI_DEVICE(PCI_VENDOR_ID_ATI, 0x4399),244225 .driver_data = (unsigned long)ohci_quirk_amd700,226226+ },227227+ {228228+ PCI_DEVICE(PCI_VENDOR_ID_LOONGSON, 0x7a24),229229+ .driver_data = (unsigned long)ohci_quirk_loongson,245230 },246231 {247232 .vendor = PCI_VENDOR_ID_APPLE,
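The Loongson quirk works because, per the comment in the diff, the LS7A's oversized BAR wraps every 4KB: offset 0x1000 + x decodes to the same internal register as x, so the driver simply points at the second 4KB image instead of the first, where 0x60/0x64 are hijacked as the legacy PS2 interface. A pure-arithmetic illustration of that aliasing (no MMIO involved):

/* Illustration of the 4KB BAR wrap-around the LS7A quirk relies on
 * (pure arithmetic, not MMIO; based on the comment in the diff). */
#include <stdio.h>

int main(void)
{
	unsigned int window = 0x1000;			/* 4KB register block */
	unsigned int hijacked[] = { 0x60, 0x64 };	/* legacy PS2 window */

	for (int i = 0; i < 2; i++)
		printf("BAR offset 0x%04x aliases 0x%04x (internal offset 0x%04x)\n",
		       hijacked[i] + window, hijacked[i],
		       (hijacked[i] + window) % window);
	return 0;
}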
+16-14
drivers/usb/host/xhci-hub.c
···18781878 int max_ports, port_index;18791879 int sret;18801880 u32 next_state;18811881- u32 temp, portsc;18811881+ u32 portsc;18821882 struct xhci_hub *rhub;18831883 struct xhci_port **ports;18841884+ bool disabled_irq = false;1884188518851886 rhub = xhci_get_rhub(hcd);18861887 ports = rhub->ports;···18971896 return -ESHUTDOWN;18981897 }1899189819001900- /* delay the irqs */19011901- temp = readl(&xhci->op_regs->command);19021902- temp &= ~CMD_EIE;19031903- writel(temp, &xhci->op_regs->command);19041904-19051899 /* bus specific resume for ports we suspended at bus_suspend */19061906- if (hcd->speed >= HCD_USB3)19001900+ if (hcd->speed >= HCD_USB3) {19071901 next_state = XDEV_U0;19081908- else19021902+ } else {19091903 next_state = XDEV_RESUME;19101910-19041904+ if (bus_state->bus_suspended) {19051905+ /*19061906+ * prevent port event interrupts from interfering19071907+ * with usb2 port resume process19081908+ */19091909+ xhci_disable_interrupter(xhci->interrupters[0]);19101910+ disabled_irq = true;19111911+ }19121912+ }19111913 port_index = max_ports;19121914 while (port_index--) {19131915 portsc = readl(ports[port_index]->addr);···19781974 (void) readl(&xhci->op_regs->command);1979197519801976 bus_state->next_statechange = jiffies + msecs_to_jiffies(5);19811981- /* re-enable irqs */19821982- temp = readl(&xhci->op_regs->command);19831983- temp |= CMD_EIE;19841984- writel(temp, &xhci->op_regs->command);19851985- temp = readl(&xhci->op_regs->command);19771977+ /* re-enable interrupter */19781978+ if (disabled_irq)19791979+ xhci_enable_interrupter(xhci->interrupters[0]);1986198019871981 spin_unlock_irqrestore(&xhci->lock, flags);19881982 return 0;
+4-7
drivers/usb/host/xhci-ring.c
···561561 * pointer command pending because the device can choose to start any562562 * stream once the endpoint is on the HW schedule.563563 */564564- if (ep_state & (EP_STOP_CMD_PENDING | SET_DEQ_PENDING | EP_HALTED |565565- EP_CLEARING_TT | EP_STALLED))564564+ if ((ep_state & EP_STOP_CMD_PENDING) || (ep_state & SET_DEQ_PENDING) ||565565+ (ep_state & EP_HALTED) || (ep_state & EP_CLEARING_TT))566566 return;567567568568 trace_xhci_ring_ep_doorbell(slot_id, DB_VALUE(ep_index, stream_id));···2573257325742574 xhci_handle_halted_endpoint(xhci, ep, td, EP_SOFT_RESET);25752575 return;25762576- case COMP_STALL_ERROR:25772577- ep->ep_state |= EP_STALLED;25782578- break;25792576 default:25802577 /* do nothing */25812578 break;···29132916 if (xhci_spurious_success_tx_event(xhci, ep_ring)) {29142917 xhci_dbg(xhci, "Spurious event dma %pad, comp_code %u after %u\n",29152918 &ep_trb_dma, trb_comp_code, ep_ring->old_trb_comp_code);29162916- ep_ring->old_trb_comp_code = trb_comp_code;29192919+ ep_ring->old_trb_comp_code = 0;29172920 return 0;29182921 }29192922···37773780 * enqueue a No Op TRB, this can prevent the Setup and Data Stage37783781 * TRB to be breaked by the Link TRB.37793782 */37803780- if (trb_is_link(ep_ring->enqueue + 1)) {37833783+ if (last_trb_on_seg(ep_ring->enq_seg, ep_ring->enqueue + 1)) {37813784 field = TRB_TYPE(TRB_TR_NOOP) | ep_ring->cycle_state;37823785 queue_trb(xhci, ep_ring, false, 0, 0,37833786 TRB_INTR_TARGET(0), field);
+5-13
drivers/usb/host/xhci.c
···322322 xhci_info(xhci, "Fault detected\n");323323}324324325325-static int xhci_enable_interrupter(struct xhci_interrupter *ir)325325+int xhci_enable_interrupter(struct xhci_interrupter *ir)326326{327327 u32 iman;328328···335335 return 0;336336}337337338338-static int xhci_disable_interrupter(struct xhci_interrupter *ir)338338+int xhci_disable_interrupter(struct xhci_interrupter *ir)339339{340340 u32 iman;341341···16051605 goto free_priv;16061606 }1607160716081608- /* Class driver might not be aware ep halted due to async URB giveback */16091609- if (*ep_state & EP_STALLED)16101610- dev_dbg(&urb->dev->dev, "URB %p queued before clearing halt\n",16111611- urb);16121612-16131608 switch (usb_endpoint_type(&urb->ep->desc)) {1614160916151610 case USB_ENDPOINT_XFER_CONTROL:···17651770 goto done;17661771 }1767177217681768- /* In these cases no commands are pending but the endpoint is stopped */17691769- if (ep->ep_state & (EP_CLEARING_TT | EP_STALLED)) {17731773+ /* In this case no commands are pending but the endpoint is stopped */17741774+ if (ep->ep_state & EP_CLEARING_TT) {17701775 /* and cancelled TDs can be given back right away */17711776 xhci_dbg(xhci, "Invalidating TDs instantly on slot %d ep %d in state 0x%x\n",17721777 urb->dev->slot_id, ep_index, ep->ep_state);···3204320932053210 ep = &vdev->eps[ep_index];3206321132073207- spin_lock_irqsave(&xhci->lock, flags);32083208-32093209- ep->ep_state &= ~EP_STALLED;32103210-32113212 /* Bail out if toggle is already being cleared by a endpoint reset */32133213+ spin_lock_irqsave(&xhci->lock, flags);32123214 if (ep->ep_state & EP_HARD_CLEAR_TOGGLE) {32133215 ep->ep_state &= ~EP_HARD_CLEAR_TOGGLE;32143216 spin_unlock_irqrestore(&xhci->lock, flags);
+3-2
drivers/usb/host/xhci.h
···664664 unsigned int err_count;665665 unsigned int ep_state;666666#define SET_DEQ_PENDING (1 << 0)667667-#define EP_HALTED (1 << 1) /* Halted host ep handling */667667+#define EP_HALTED (1 << 1) /* For stall handling */668668#define EP_STOP_CMD_PENDING (1 << 2) /* For URB cancellation */669669/* Transitioning the endpoint to using streams, don't enqueue URBs */670670#define EP_GETTING_STREAMS (1 << 3)···675675#define EP_SOFT_CLEAR_TOGGLE (1 << 7)676676/* usb_hub_clear_tt_buffer is in progress */677677#define EP_CLEARING_TT (1 << 8)678678-#define EP_STALLED (1 << 9) /* For stall handling */679678 /* ---- Related to URB cancellation ---- */680679 struct list_head cancelled_td_list;681680 struct xhci_hcd *xhci;···18901891 struct usb_tt *tt, gfp_t mem_flags);18911892int xhci_set_interrupter_moderation(struct xhci_interrupter *ir,18921893 u32 imod_interval);18941894+int xhci_enable_interrupter(struct xhci_interrupter *ir);18951895+int xhci_disable_interrupter(struct xhci_interrupter *ir);1893189618941897/* xHCI ring, segment, TRB, and TD functions */18951898dma_addr_t xhci_trb_virt_to_dma(struct xhci_segment *seg, union xhci_trb *trb);
···407407 if (!drv)408408 return;409409410410+ /* If the driver has its own shutdown method, use that. */411411+ if (drv->shutdown) {412412+ drv->shutdown(dev);413413+ return;414414+ }415415+410416 /*411417 * Some devices get wedged if you kick them after they are412418 * reset. Mark all vqs as broken to make sure we don't.
···4747 if (ret)4848 goto err;49495050- /* Inherit casefold state from parent. */5151- if (S_ISDIR(mode))5252- new_inode->bi_flags |= dir_u->bi_flags & BCH_INODE_casefolded;5353-5450 if (!(flags & BCH_CREATE_SNAPSHOT)) {5551 /* Normal create path - allocate a new inode: */5652 bch2_inode_init_late(new_inode, now, uid, gid, mode, rdev, dir_u);
+5
fs/bcachefs/opts.h
···228228 OPT_BOOL(), \229229 BCH_SB_ERASURE_CODE, false, \230230 NULL, "Enable erasure coding (DO NOT USE YET)") \231231+ x(casefold, u8, \232232+ OPT_FS|OPT_INODE|OPT_FORMAT, \233233+ OPT_BOOL(), \234234+ BCH_SB_CASEFOLD, false, \235235+ NULL, "Dirent lookups are casefolded") \231236 x(inodes_32bit, u8, \232237 OPT_FS|OPT_INODE|OPT_FORMAT|OPT_MOUNT|OPT_RUNTIME, \233238 OPT_BOOL(), \
+9-2
fs/bcachefs/rebalance.c
···262262 int ret = bch2_trans_commit_do(c, NULL, NULL,263263 BCH_TRANS_COMMIT_no_enospc,264264 bch2_set_rebalance_needs_scan_trans(trans, inum));265265- rebalance_wakeup(c);265265+ bch2_rebalance_wakeup(c);266266 return ret;267267}268268···581581582582 set_freezable();583583584584+ /*585585+ * Data move operations can't run until after check_snapshots has586586+ * completed, and bch2_snapshot_is_ancestor() is available.587587+ */588588+ kthread_wait_freezable(c->recovery_pass_done > BCH_RECOVERY_PASS_check_snapshots ||589589+ kthread_should_stop());590590+584591 bch2_moving_ctxt_init(&ctxt, c, NULL, &r->work_stats,585592 writepoint_ptr(&c->rebalance_write_point),586593 true);···671664 c->rebalance.thread = NULL;672665673666 if (p) {674674- /* for sychronizing with rebalance_wakeup() */667667+ /* for sychronizing with bch2_rebalance_wakeup() */675668 synchronize_rcu();676669677670 kthread_stop(p);
···1818#include "journal_seq_blacklist.h"1919#include "logged_ops.h"2020#include "move.h"2121+#include "movinggc.h"2122#include "namei.h"2223#include "quota.h"2324#include "rebalance.h"···11301129 if (ret)11311130 goto err;1132113111331133- set_bit(BCH_FS_accounting_replay_done, &c->flags);11341134- bch2_journal_set_replay_done(&c->journal);11351135-11361132 ret = bch2_fs_read_write_early(c);11371133 if (ret)11381134 goto err;11351135+11361136+ set_bit(BCH_FS_accounting_replay_done, &c->flags);11371137+ bch2_journal_set_replay_done(&c->journal);1139113811401139 for_each_member_device(c, ca) {11411140 ret = bch2_dev_usage_init(ca, false);···11941193 goto err;1195119411961195 c->recovery_pass_done = BCH_RECOVERY_PASS_NR - 1;11961196+11971197+ bch2_copygc_wakeup(c);11981198+ bch2_rebalance_wakeup(c);1197119911981200 if (enabled_qtypes(c)) {11991201 ret = bch2_fs_quota_read(c);
+36-32
fs/bcachefs/recovery_passes.c
···1212#include "journal.h"1313#include "lru.h"1414#include "logged_ops.h"1515+#include "movinggc.h"1516#include "rebalance.h"1617#include "recovery.h"1718#include "recovery_passes.h"···263262 */264263 c->opts.recovery_passes_exclude &= ~BCH_RECOVERY_PASS_set_may_go_rw;265264266266- while (c->curr_recovery_pass < ARRAY_SIZE(recovery_pass_fns) && !ret) {267267- c->next_recovery_pass = c->curr_recovery_pass + 1;265265+ spin_lock_irq(&c->recovery_pass_lock);268266269269- spin_lock_irq(&c->recovery_pass_lock);267267+ while (c->curr_recovery_pass < ARRAY_SIZE(recovery_pass_fns) && !ret) {268268+ unsigned prev_done = c->recovery_pass_done;270269 unsigned pass = c->curr_recovery_pass;271270271271+ c->next_recovery_pass = pass + 1;272272+272273 if (c->opts.recovery_pass_last &&273273- c->curr_recovery_pass > c->opts.recovery_pass_last) {274274- spin_unlock_irq(&c->recovery_pass_lock);274274+ c->curr_recovery_pass > c->opts.recovery_pass_last)275275 break;276276- }277276278278- if (!should_run_recovery_pass(c, pass)) {279279- c->curr_recovery_pass++;280280- c->recovery_pass_done = max(c->recovery_pass_done, pass);277277+ if (should_run_recovery_pass(c, pass)) {281278 spin_unlock_irq(&c->recovery_pass_lock);282282- continue;279279+ ret = bch2_run_recovery_pass(c, pass) ?:280280+ bch2_journal_flush(&c->journal);281281+282282+ if (!ret && !test_bit(BCH_FS_error, &c->flags))283283+ bch2_clear_recovery_pass_required(c, pass);284284+ spin_lock_irq(&c->recovery_pass_lock);285285+286286+ if (c->next_recovery_pass < c->curr_recovery_pass) {287287+ /*288288+ * bch2_run_explicit_recovery_pass() was called: we289289+ * can't always catch -BCH_ERR_restart_recovery because290290+ * it may have been called from another thread (btree291291+ * node read completion)292292+ */293293+ ret = 0;294294+ c->recovery_passes_complete &= ~(~0ULL << c->curr_recovery_pass);295295+ } else {296296+ c->recovery_passes_complete |= BIT_ULL(pass);297297+ c->recovery_pass_done = max(c->recovery_pass_done, pass);298298+ }283299 }284284- spin_unlock_irq(&c->recovery_pass_lock);285300286286- ret = bch2_run_recovery_pass(c, pass) ?:287287- bch2_journal_flush(&c->journal);288288-289289- if (!ret && !test_bit(BCH_FS_error, &c->flags))290290- bch2_clear_recovery_pass_required(c, pass);291291-292292- spin_lock_irq(&c->recovery_pass_lock);293293- if (c->next_recovery_pass < c->curr_recovery_pass) {294294- /*295295- * bch2_run_explicit_recovery_pass() was called: we296296- * can't always catch -BCH_ERR_restart_recovery because297297- * it may have been called from another thread (btree298298- * node read completion)299299- */300300- ret = 0;301301- c->recovery_passes_complete &= ~(~0ULL << c->curr_recovery_pass);302302- } else {303303- c->recovery_passes_complete |= BIT_ULL(pass);304304- c->recovery_pass_done = max(c->recovery_pass_done, pass);305305- }306301 c->curr_recovery_pass = c->next_recovery_pass;307307- spin_unlock_irq(&c->recovery_pass_lock);302302+303303+ if (prev_done <= BCH_RECOVERY_PASS_check_snapshots &&304304+ c->recovery_pass_done > BCH_RECOVERY_PASS_check_snapshots) {305305+ bch2_copygc_wakeup(c);306306+ bch2_rebalance_wakeup(c);307307+ }308308 }309309+310310+ spin_unlock_irq(&c->recovery_pass_lock);309311310312 return ret;311313}
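The restructured loop keeps all pass bookkeeping under recovery_pass_lock and, when an explicit earlier pass was queued behind the one just run, resets the completion bitmap with complete &= ~(~0ULL << curr), i.e. it keeps only the bits for passes strictly below the current one. A quick demonstration of that mask trick on its own (pass numbers are made up):

/* Demonstration of the pass-completion bitmask trick used above: keep only
 * the bits for passes strictly below 'curr_pass'. Pass numbers are made up. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t complete = (1ULL << 3) | (1ULL << 5) | (1ULL << 9);
	unsigned int curr_pass = 5;	/* recovery is restarting from pass 5 */

	/* ~0ULL << 5 sets bits 5..63; inverting keeps bits 0..4. */
	complete &= ~(~0ULL << curr_pass);

	printf("complete = 0x%llx\n", (unsigned long long)complete);	/* 0x8: only pass 3 survives */
	return 0;
}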
+1-1
fs/bcachefs/snapshot.c
···396396 u32 subvol = 0, s;397397398398 rcu_read_lock();399399- while (id) {399399+ while (id && bch2_snapshot_exists(c, id)) {400400 s = snapshot_t(c, id)->subvol;401401402402 if (s && (!subvol || s < subvol))
+2-3
fs/bcachefs/str_hash.h
···33333434struct bch_hash_info {3535 u8 type;3636- struct unicode_map *cf_encoding;3636+ struct unicode_map *cf_encoding;3737 /*3838 * For crc32 or crc64 string hashes the first key value of3939 * the siphash_key (k0) is used as the key.···4444static inline struct bch_hash_info4545bch2_hash_info_init(struct bch_fs *c, const struct bch_inode_unpacked *bi)4646{4747- /* XXX ick */4847 struct bch_hash_info info = {4948 .type = INODE_STR_HASH(bi),5049#ifdef CONFIG_UNICODE5151- .cf_encoding = !!(bi->bi_flags & BCH_INODE_casefolded) ? c->cf_encoding : NULL,5050+ .cf_encoding = bch2_inode_casefold(c, bi) ? c->cf_encoding : NULL,5251#endif5352 .siphash_key = { .k0 = bi->bi_hash_seed }5453 };
···418418 return ret;419419}420420421421-static int bch2_fs_read_write_late(struct bch_fs *c)422422-{423423- int ret;424424-425425- /*426426- * Data move operations can't run until after check_snapshots has427427- * completed, and bch2_snapshot_is_ancestor() is available.428428- *429429- * Ideally we'd start copygc/rebalance earlier instead of waiting for430430- * all of recovery/fsck to complete:431431- */432432- ret = bch2_copygc_start(c);433433- if (ret) {434434- bch_err(c, "error starting copygc thread");435435- return ret;436436- }437437-438438- ret = bch2_rebalance_start(c);439439- if (ret) {440440- bch_err(c, "error starting rebalance thread");441441- return ret;442442- }443443-444444- return 0;445445-}446446-447421static int __bch2_fs_read_write(struct bch_fs *c, bool early)448422{449423 int ret;···440466441467 clear_bit(BCH_FS_clean_shutdown, &c->flags);442468443443- /*444444- * First journal write must be a flush write: after a clean shutdown we445445- * don't read the journal, so the first journal write may end up446446- * overwriting whatever was there previously, and there must always be447447- * at least one non-flush write in the journal or recovery will fail:448448- */449449- set_bit(JOURNAL_need_flush_write, &c->journal.flags);450450- set_bit(JOURNAL_running, &c->journal.flags);451451-452469 __for_each_online_member(c, ca, BIT(BCH_MEMBER_STATE_rw), READ) {453470 bch2_dev_allocator_add(c, ca);454471 percpu_ref_reinit(&ca->io_ref[WRITE]);455472 }456473 bch2_recalc_capacity(c);457474475475+ /*476476+ * First journal write must be a flush write: after a clean shutdown we477477+ * don't read the journal, so the first journal write may end up478478+ * overwriting whatever was there previously, and there must always be479479+ * at least one non-flush write in the journal or recovery will fail:480480+ */481481+ spin_lock(&c->journal.lock);482482+ set_bit(JOURNAL_need_flush_write, &c->journal.flags);483483+ set_bit(JOURNAL_running, &c->journal.flags);484484+ bch2_journal_space_available(&c->journal);485485+ spin_unlock(&c->journal.lock);486486+458487 ret = bch2_fs_mark_dirty(c);459488 if (ret)460489 goto err;461461-462462- spin_lock(&c->journal.lock);463463- bch2_journal_space_available(&c->journal);464464- spin_unlock(&c->journal.lock);465490466491 ret = bch2_journal_reclaim_start(&c->journal);467492 if (ret)···477504 atomic_long_inc(&c->writes[i]);478505 }479506#endif480480- if (!early) {481481- ret = bch2_fs_read_write_late(c);482482- if (ret)483483- goto err;507507+508508+ ret = bch2_copygc_start(c);509509+ if (ret) {510510+ bch_err_msg(c, ret, "error starting copygc thread");511511+ goto err;512512+ }513513+514514+ ret = bch2_rebalance_start(c);515515+ if (ret) {516516+ bch_err_msg(c, ret, "error starting rebalance thread");517517+ goto err;484518 }485519486520 bch2_do_discards(c);···533553534554 bch2_find_btree_nodes_exit(&c->found_btree_nodes);535555 bch2_free_pending_node_rewrites(c);556556+ bch2_free_fsck_errs(c);536557 bch2_fs_accounting_exit(c);537558 bch2_fs_sb_errors_exit(c);538559 bch2_fs_counters_exit(c);···10041023 printbuf_exit(&p);10051024}1006102510261026+static bool bch2_fs_may_start(struct bch_fs *c)10271027+{10281028+ struct bch_dev *ca;10291029+ unsigned i, flags = 0;10301030+10311031+ if (c->opts.very_degraded)10321032+ flags |= BCH_FORCE_IF_DEGRADED|BCH_FORCE_IF_LOST;10331033+10341034+ if (c->opts.degraded)10351035+ flags |= BCH_FORCE_IF_DEGRADED;10361036+10371037+ if (!c->opts.degraded &&10381038+ !c->opts.very_degraded) {10391039+ 
mutex_lock(&c->sb_lock);10401040+10411041+ for (i = 0; i < c->disk_sb.sb->nr_devices; i++) {10421042+ if (!bch2_member_exists(c->disk_sb.sb, i))10431043+ continue;10441044+10451045+ ca = bch2_dev_locked(c, i);10461046+10471047+ if (!bch2_dev_is_online(ca) &&10481048+ (ca->mi.state == BCH_MEMBER_STATE_rw ||10491049+ ca->mi.state == BCH_MEMBER_STATE_ro)) {10501050+ mutex_unlock(&c->sb_lock);10511051+ return false;10521052+ }10531053+ }10541054+ mutex_unlock(&c->sb_lock);10551055+ }10561056+10571057+ return bch2_have_enough_devs(c, bch2_online_devs(c), flags, true);10581058+}10591059+10071060int bch2_fs_start(struct bch_fs *c)10081061{10091062 time64_t now = ktime_get_real_seconds();10101063 int ret = 0;1011106410121065 print_mount_opts(c);10661066+10671067+ if (!bch2_fs_may_start(c))10681068+ return -BCH_ERR_insufficient_devices_to_start;1013106910141070 down_write(&c->state_lock);10151071 mutex_lock(&c->sb_lock);···11001082 wake_up(&c->ro_ref_wait);1101108311021084 down_write(&c->state_lock);11031103- if (c->opts.read_only) {10851085+ if (c->opts.read_only)11041086 bch2_fs_read_only(c);11051105- } else {11061106- ret = !test_bit(BCH_FS_rw, &c->flags)11071107- ? bch2_fs_read_write(c)11081108- : bch2_fs_read_write_late(c);11091109- }10871087+ else if (!test_bit(BCH_FS_rw, &c->flags))10881088+ ret = bch2_fs_read_write(c);11101089 up_write(&c->state_lock);1111109011121091err:···1515150015161501 printbuf_exit(&name);1517150215181518- rebalance_wakeup(c);15031503+ bch2_rebalance_wakeup(c);15191504 return 0;15201505}15211506···15741559 }15751560}1576156115771577-static bool bch2_fs_may_start(struct bch_fs *c)15781578-{15791579- struct bch_dev *ca;15801580- unsigned i, flags = 0;15811581-15821582- if (c->opts.very_degraded)15831583- flags |= BCH_FORCE_IF_DEGRADED|BCH_FORCE_IF_LOST;15841584-15851585- if (c->opts.degraded)15861586- flags |= BCH_FORCE_IF_DEGRADED;15871587-15881588- if (!c->opts.degraded &&15891589- !c->opts.very_degraded) {15901590- mutex_lock(&c->sb_lock);15911591-15921592- for (i = 0; i < c->disk_sb.sb->nr_devices; i++) {15931593- if (!bch2_member_exists(c->disk_sb.sb, i))15941594- continue;15951595-15961596- ca = bch2_dev_locked(c, i);15971597-15981598- if (!bch2_dev_is_online(ca) &&15991599- (ca->mi.state == BCH_MEMBER_STATE_rw ||16001600- ca->mi.state == BCH_MEMBER_STATE_ro)) {16011601- mutex_unlock(&c->sb_lock);16021602- return false;16031603- }16041604- }16051605- mutex_unlock(&c->sb_lock);16061606- }16071607-16081608- return bch2_have_enough_devs(c, bch2_online_devs(c), flags, true);16091609-}16101610-16111562static void __bch2_dev_read_only(struct bch_fs *c, struct bch_dev *ca)16121563{16131564 bch2_dev_io_ref_stop(ca, WRITE);···16271646 if (new_state == BCH_MEMBER_STATE_rw)16281647 __bch2_dev_read_write(c, ca);1629164816301630- rebalance_wakeup(c);16491649+ bch2_rebalance_wakeup(c);1631165016321651 return ret;16331652}···22082227 }22092228 }22102229 up_write(&c->state_lock);22112211-22122212- if (!bch2_fs_may_start(c)) {22132213- ret = -BCH_ERR_insufficient_devices_to_start;22142214- goto err_print;22152215- }2216223022172231 if (!c->opts.nostart) {22182232 ret = bch2_fs_start(c);
+3-4
fs/bcachefs/sysfs.c
···654654 bch2_set_rebalance_needs_scan(c, 0);655655656656 if (v && id == Opt_rebalance_enabled)657657- rebalance_wakeup(c);657657+ bch2_rebalance_wakeup(c);658658659659- if (v && id == Opt_copygc_enabled &&660660- c->copygc_thread)661661- wake_up_process(c->copygc_thread);659659+ if (v && id == Opt_copygc_enabled)660660+ bch2_copygc_wakeup(c);662661663662 if (id == Opt_discard && !ca) {664663 mutex_lock(&c->sb_lock);
···21042104 * will always return true.21052105 * So here we need to do extra page alignment for21062106 * filemap_range_has_page().21072107+ *21082108+ * And do not decrease page_lockend right now, as it can be 0.21072109 */21082110 const u64 page_lockstart = round_up(lockstart, PAGE_SIZE);21092109- const u64 page_lockend = round_down(lockend + 1, PAGE_SIZE) - 1;21112111+ const u64 page_lockend = round_down(lockend + 1, PAGE_SIZE);2110211221112113 while (1) {21122114 truncate_pagecache_range(inode, lockstart, lockend);2113211521142116 lock_extent(&BTRFS_I(inode)->io_tree, lockstart, lockend,21152117 cached_state);21182118+ /* The same page or adjacent pages. */21192119+ if (page_lockend <= page_lockstart)21202120+ break;21162121 /*21172122 * We can't have ordered extents in the range, nor dirty/writeback21182123 * pages, because we have locked the inode's VFS lock in exclusive···21292124 * we do, unlock the range and retry.21302125 */21312126 if (!filemap_range_has_page(inode->i_mapping, page_lockstart,21322132- page_lockend))21272127+ page_lockend - 1))21332128 break;2134212921352130 unlock_extent(&BTRFS_I(inode)->io_tree, lockstart, lockend,
···1277127712781278static int btrfs_load_zone_info(struct btrfs_fs_info *fs_info, int zone_idx,12791279 struct zone_info *info, unsigned long *active,12801280- struct btrfs_chunk_map *map)12801280+ struct btrfs_chunk_map *map, bool new)12811281{12821282 struct btrfs_dev_replace *dev_replace = &fs_info->dev_replace;12831283 struct btrfs_device *device;···13071307 return 0;13081308 }1309130913101310+ ASSERT(!new || btrfs_dev_is_empty_zone(device, info->physical));13111311+13101312 /* This zone will be used for allocation, so mark this zone non-empty. */13111313 btrfs_dev_clear_zone_empty(device, info->physical);13121314···13211319 * to determine the allocation offset within the zone.13221320 */13231321 WARN_ON(!IS_ALIGNED(info->physical, fs_info->zone_size));13221322+13231323+ if (new) {13241324+ sector_t capacity;13251325+13261326+ capacity = bdev_zone_capacity(device->bdev, info->physical >> SECTOR_SHIFT);13271327+ up_read(&dev_replace->rwsem);13281328+ info->alloc_offset = 0;13291329+ info->capacity = capacity << SECTOR_SHIFT;13301330+13311331+ return 0;13321332+ }13331333+13241334 nofs_flag = memalloc_nofs_save();13251335 ret = btrfs_get_dev_zone(device, info->physical, &zone);13261336 memalloc_nofs_restore(nofs_flag);···16021588 }1603158916041590 for (i = 0; i < map->num_stripes; i++) {16051605- ret = btrfs_load_zone_info(fs_info, i, &zone_info[i], active, map);15911591+ ret = btrfs_load_zone_info(fs_info, i, &zone_info[i], active, map, new);16061592 if (ret)16071593 goto out;16081594···16731659 * stripe.16741660 */16751661 cache->alloc_offset = cache->zone_capacity;16761676- ret = 0;16771662 }1678166316791664out:
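
The btrfs_load_zone_info() change above lets a newly created block group skip the zone report and take its size straight from bdev_zone_capacity(). As a rough, hedged illustration of that helper (not part of the patch; zone_capacity_bytes() is a made-up name, and the signature is assumed to be the one the hunk uses):

#include <linux/blkdev.h>

/*
 * Illustrative sketch only: convert a byte offset on a zoned block
 * device into the byte capacity of the zone containing it, calling
 * bdev_zone_capacity() the same way the hunk above does.
 */
static u64 zone_capacity_bytes(struct block_device *bdev, u64 pos)
{
	sector_t cap = bdev_zone_capacity(bdev, pos >> SECTOR_SHIFT);

	return (u64)cap << SECTOR_SHIFT;
}
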
+54-19
fs/buffer.c
···176176}177177EXPORT_SYMBOL(end_buffer_write_sync);178178179179-/*180180- * Various filesystems appear to want __find_get_block to be non-blocking.181181- * But it's the page lock which protects the buffers. To get around this,182182- * we get exclusion from try_to_free_buffers with the blockdev mapping's183183- * i_private_lock.184184- *185185- * Hack idea: for the blockdev mapping, i_private_lock contention186186- * may be quite high. This code could TryLock the page, and if that187187- * succeeds, there is no need to take i_private_lock.188188- */189179static struct buffer_head *190190-__find_get_block_slow(struct block_device *bdev, sector_t block)180180+__find_get_block_slow(struct block_device *bdev, sector_t block, bool atomic)191181{192182 struct address_space *bd_mapping = bdev->bd_mapping;193183 const int blkbits = bd_mapping->host->i_blkbits;···194204 if (IS_ERR(folio))195205 goto out;196206197197- spin_lock(&bd_mapping->i_private_lock);207207+ /*208208+ * Folio lock protects the buffers. Callers that cannot block209209+ * will fallback to serializing vs try_to_free_buffers() via210210+ * the i_private_lock.211211+ */212212+ if (atomic)213213+ spin_lock(&bd_mapping->i_private_lock);214214+ else215215+ folio_lock(folio);216216+198217 head = folio_buffers(folio);199218 if (!head)200219 goto out_unlock;220220+ /*221221+ * Upon a noref migration, the folio lock serializes here;222222+ * otherwise bail.223223+ */224224+ if (test_bit_acquire(BH_Migrate, &head->b_state)) {225225+ WARN_ON(!atomic);226226+ goto out_unlock;227227+ }228228+201229 bh = head;202230 do {203231 if (!buffer_mapped(bh))···244236 1 << blkbits);245237 }246238out_unlock:247247- spin_unlock(&bd_mapping->i_private_lock);239239+ if (atomic)240240+ spin_unlock(&bd_mapping->i_private_lock);241241+ else242242+ folio_unlock(folio);248243 folio_put(folio);249244out:250245 return ret;···667656void write_boundary_block(struct block_device *bdev,668657 sector_t bblock, unsigned blocksize)669658{670670- struct buffer_head *bh = __find_get_block(bdev, bblock + 1, blocksize);659659+ struct buffer_head *bh;660660+661661+ bh = __find_get_block_nonatomic(bdev, bblock + 1, blocksize);671662 if (bh) {672663 if (buffer_dirty(bh))673664 write_dirty_buffer(bh, 0);···13991386/*14001387 * Perform a pagecache lookup for the matching buffer. If it's there, refresh14011388 * it in the LRU and mark it as accessed. If it is not present then return14021402- * NULL13891389+ * NULL. 
Atomic context callers may also return NULL if the buffer is being13901390+ * migrated; similarly the page is not marked accessed either.14031391 */14041404-struct buffer_head *14051405-__find_get_block(struct block_device *bdev, sector_t block, unsigned size)13921392+static struct buffer_head *13931393+find_get_block_common(struct block_device *bdev, sector_t block,13941394+ unsigned size, bool atomic)14061395{14071396 struct buffer_head *bh = lookup_bh_lru(bdev, block, size);1408139714091398 if (bh == NULL) {14101399 /* __find_get_block_slow will mark the page accessed */14111411- bh = __find_get_block_slow(bdev, block);14001400+ bh = __find_get_block_slow(bdev, block, atomic);14121401 if (bh)14131402 bh_lru_install(bh);14141403 } else···1418140314191404 return bh;14201405}14061406+14071407+struct buffer_head *14081408+__find_get_block(struct block_device *bdev, sector_t block, unsigned size)14091409+{14101410+ return find_get_block_common(bdev, block, size, true);14111411+}14211412EXPORT_SYMBOL(__find_get_block);14131413+14141414+/* same as __find_get_block() but allows sleeping contexts */14151415+struct buffer_head *14161416+__find_get_block_nonatomic(struct block_device *bdev, sector_t block,14171417+ unsigned size)14181418+{14191419+ return find_get_block_common(bdev, block, size, false);14201420+}14211421+EXPORT_SYMBOL(__find_get_block_nonatomic);1422142214231423/**14241424 * bdev_getblk - Get a buffer_head in a block device's buffer cache.···14521422struct buffer_head *bdev_getblk(struct block_device *bdev, sector_t block,14531423 unsigned size, gfp_t gfp)14541424{14551455- struct buffer_head *bh = __find_get_block(bdev, block, size);14251425+ struct buffer_head *bh;14261426+14271427+ if (gfpflags_allow_blocking(gfp))14281428+ bh = __find_get_block_nonatomic(bdev, block, size);14291429+ else14301430+ bh = __find_get_block(bdev, block, size);1456143114571432 might_alloc(gfp);14581433 if (bh)
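
The fs/buffer.c hunks split the buffer-cache lookup into an atomic variant (__find_get_block(), which now serializes against buffer migration via i_private_lock and may return NULL while BH_Migrate is set) and a sleepable variant (__find_get_block_nonatomic(), which takes the folio lock instead). A minimal hedged sketch of a sleepable caller; block_is_dirty_in_cache() is a made-up helper, patterned on the write_boundary_block() conversion above:

#include <linux/buffer_head.h>

/*
 * Sketch only: in a context that may sleep, prefer the non-atomic
 * lookup so migration is waited out under the folio lock; NULL then
 * simply means the block is not in the buffer cache.
 */
static bool block_is_dirty_in_cache(struct block_device *bdev,
				    sector_t block, unsigned int size)
{
	struct buffer_head *bh = __find_get_block_nonatomic(bdev, block, size);
	bool dirty = false;

	if (bh) {
		dirty = buffer_dirty(bh);
		brelse(bh);
	}
	return dirty;
}
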
+1-1
fs/ceph/inode.c
···2367236723682368 /* Try to writeback the dirty pagecaches */23692369 if (issued & (CEPH_CAP_FILE_BUFFER)) {23702370- loff_t lend = orig_pos + CEPH_FSCRYPT_BLOCK_SHIFT - 1;23702370+ loff_t lend = orig_pos + CEPH_FSCRYPT_BLOCK_SIZE - 1;2371237123722372 ret = filemap_write_and_wait_range(inode->i_mapping,23732373 orig_pos, lend);
+2-1
fs/ext4/ialloc.c
···691691 if (!bh || !buffer_uptodate(bh))692692 /*693693 * If the block is not in the buffer cache, then it694694- * must have been written out.694694+ * must have been written out, or, most unlikely, is695695+ * being migrated - false failure should be OK here.695696 */696697 goto out;697698
+2-1
fs/ext4/mballoc.c
···66426642 for (i = 0; i < count; i++) {66436643 cond_resched();66446644 if (is_metadata)66456645- bh = sb_find_get_block(inode->i_sb, block + i);66456645+ bh = sb_find_get_block_nonatomic(inode->i_sb,66466646+ block + i);66466647 ext4_forget(handle, is_metadata, inode, bh, block + i);66476648 }66486649 }
+1-1
fs/file.c
···26262727#include "internal.h"28282929-bool __file_ref_put_badval(file_ref_t *ref, unsigned long cnt)2929+static noinline bool __file_ref_put_badval(file_ref_t *ref, unsigned long cnt)3030{3131 /*3232 * If the reference count was already in the dead zone, then this
+9-6
fs/jbd2/revoke.c
···345345 bh = bh_in;346346347347 if (!bh) {348348- bh = __find_get_block(bdev, blocknr, journal->j_blocksize);348348+ bh = __find_get_block_nonatomic(bdev, blocknr,349349+ journal->j_blocksize);349350 if (bh)350351 BUFFER_TRACE(bh, "found on hash");351352 }···356355357356 /* If there is a different buffer_head lying around in358357 * memory anywhere... */359359- bh2 = __find_get_block(bdev, blocknr, journal->j_blocksize);358358+ bh2 = __find_get_block_nonatomic(bdev, blocknr,359359+ journal->j_blocksize);360360 if (bh2) {361361 /* ... and it has RevokeValid status... */362362 if (bh2 != bh && buffer_revokevalid(bh2))···466464 * state machine will get very upset later on. */467465 if (need_cancel) {468466 struct buffer_head *bh2;469469- bh2 = __find_get_block(bh->b_bdev, bh->b_blocknr, bh->b_size);467467+ bh2 = __find_get_block_nonatomic(bh->b_bdev, bh->b_blocknr,468468+ bh->b_size);470469 if (bh2) {471470 if (bh2 != bh)472471 clear_buffer_revoked(bh2);···495492 struct jbd2_revoke_record_s *record;496493 struct buffer_head *bh;497494 record = (struct jbd2_revoke_record_s *)list_entry;498498- bh = __find_get_block(journal->j_fs_dev,499499- record->blocknr,500500- journal->j_blocksize);495495+ bh = __find_get_block_nonatomic(journal->j_fs_dev,496496+ record->blocknr,497497+ journal->j_blocksize);501498 if (bh) {502499 clear_buffer_revoked(bh);503500 __brelse(bh);
+36-33
fs/namespace.c
···28262826 struct vfsmount *mnt = path->mnt;28272827 struct dentry *dentry;28282828 struct mountpoint *mp = ERR_PTR(-ENOENT);28292829+ struct path under = {};2829283028302831 for (;;) {28312831- struct mount *m;28322832+ struct mount *m = real_mount(mnt);2832283328332834 if (beneath) {28342834- m = real_mount(mnt);28352835+ path_put(&under);28352836 read_seqlock_excl(&mount_lock);28362836- dentry = dget(m->mnt_mountpoint);28372837+ under.mnt = mntget(&m->mnt_parent->mnt);28382838+ under.dentry = dget(m->mnt_mountpoint);28372839 read_sequnlock_excl(&mount_lock);28402840+ dentry = under.dentry;28382841 } else {28392842 dentry = path->dentry;28402843 }2841284428422845 inode_lock(dentry->d_inode);28432843- if (unlikely(cant_mount(dentry))) {28442844- inode_unlock(dentry->d_inode);28452845- goto out;28462846- }28472847-28482846 namespace_lock();2849284728502850- if (beneath && (!is_mounted(mnt) || m->mnt_mountpoint != dentry)) {28482848+ if (unlikely(cant_mount(dentry) || !is_mounted(mnt)))28492849+ break; // not to be mounted on28502850+28512851+ if (beneath && unlikely(m->mnt_mountpoint != dentry ||28522852+ &m->mnt_parent->mnt != under.mnt)) {28512853 namespace_unlock();28522854 inode_unlock(dentry->d_inode);28532853- goto out;28552855+ continue; // got moved28542856 }2855285728562858 mnt = lookup_mnt(path);28572857- if (likely(!mnt))28592859+ if (unlikely(mnt)) {28602860+ namespace_unlock();28612861+ inode_unlock(dentry->d_inode);28622862+ path_put(path);28632863+ path->mnt = mnt;28642864+ path->dentry = dget(mnt->mnt_root);28652865+ continue; // got overmounted28662866+ }28672867+ mp = get_mountpoint(dentry);28682868+ if (IS_ERR(mp))28582869 break;28592859-28602860- namespace_unlock();28612861- inode_unlock(dentry->d_inode);28622862- if (beneath)28632863- dput(dentry);28642864- path_put(path);28652865- path->mnt = mnt;28662866- path->dentry = dget(mnt->mnt_root);28702870+ if (beneath) {28712871+ /*28722872+ * @under duplicates the references that will stay28732873+ * at least until namespace_unlock(), so the path_put()28742874+ * below is safe (and OK to do under namespace_lock -28752875+ * we are not dropping the final references here).28762876+ */28772877+ path_put(&under);28782878+ }28792879+ return mp;28672880 }28682868-28692869- mp = get_mountpoint(dentry);28702870- if (IS_ERR(mp)) {28712871- namespace_unlock();28722872- inode_unlock(dentry->d_inode);28732873- }28742874-28752875-out:28812881+ namespace_unlock();28822882+ inode_unlock(dentry->d_inode);28762883 if (beneath)28772877- dput(dentry);28782878-28842884+ path_put(&under);28792885 return mp;28802886}28812887···2892288628932887static void unlock_mount(struct mountpoint *where)28942888{28952895- struct dentry *dentry = where->m_dentry;28962896-28892889+ inode_unlock(where->m_dentry->d_inode);28972890 read_seqlock_excl(&mount_lock);28982891 put_mountpoint(where);28992892 read_sequnlock_excl(&mount_lock);29002900-29012893 namespace_unlock();29022902- inode_unlock(dentry->d_inode);29032894}2904289529052896static int graft_tree(struct mount *mnt, struct mount *p, struct mountpoint *mp)
+1-1
fs/ocfs2/journal.c
···12491249 }1250125012511251 for (i = 0; i < p_blocks; i++, p_blkno++) {12521252- bh = __find_get_block(osb->sb->s_bdev, p_blkno,12521252+ bh = __find_get_block_nonatomic(osb->sb->s_bdev, p_blkno,12531253 osb->sb->s_blocksize);12541254 /* block not cached. */12551255 if (!bh)
+1-1
fs/splice.c
···4545 * here if set to avoid blocking other users of this pipe if splice is4646 * being done on it.4747 */4848-static noinline void noinline pipe_clear_nowait(struct file *file)4848+static noinline void pipe_clear_nowait(struct file *file)4949{5050 fmode_t fmode = READ_ONCE(file->f_mode);5151
+2-2
fs/xattr.c
···703703 return error;704704705705 filename = getname_maybe_null(pathname, at_flags);706706- if (!filename) {706706+ if (!filename && dfd >= 0) {707707 CLASS(fd, f)(dfd);708708 if (fd_empty(f))709709 error = -EBADF;···847847 return error;848848849849 filename = getname_maybe_null(pathname, at_flags);850850- if (!filename) {850850+ if (!filename && dfd >= 0) {851851 CLASS(fd, f)(dfd);852852 if (fd_empty(f))853853 return -EBADF;
···629629#else630630#define DEFINE_DMA_UNMAP_ADDR(ADDR_NAME)631631#define DEFINE_DMA_UNMAP_LEN(LEN_NAME)632632-#define dma_unmap_addr(PTR, ADDR_NAME) (0)633633-#define dma_unmap_addr_set(PTR, ADDR_NAME, VAL) do { } while (0)634634-#define dma_unmap_len(PTR, LEN_NAME) (0)635635-#define dma_unmap_len_set(PTR, LEN_NAME, VAL) do { } while (0)632632+#define dma_unmap_addr(PTR, ADDR_NAME) \633633+ ({ typeof(PTR) __p __maybe_unused = PTR; 0; })634634+#define dma_unmap_addr_set(PTR, ADDR_NAME, VAL) \635635+ do { typeof(PTR) __p __maybe_unused = PTR; } while (0)636636+#define dma_unmap_len(PTR, LEN_NAME) \637637+ ({ typeof(PTR) __p __maybe_unused = PTR; 0; })638638+#define dma_unmap_len_set(PTR, LEN_NAME, VAL) \639639+ do { typeof(PTR) __p __maybe_unused = PTR; } while (0)636640#endif637641638642#endif /* _LINUX_DMA_MAPPING_H */
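
This dma-mapping.h hunk makes the !CONFIG_NEED_DMA_MAP_STATE stubs still evaluate their PTR argument, so drivers that only touch the unmap state through these macros no longer trip "variable set but not used" warnings. A hedged sketch of the usage pattern the macros are designed for (struct rx_desc and the helpers are made-up names):

#include <linux/dma-mapping.h>

/* Unmap state compiles away when CONFIG_NEED_DMA_MAP_STATE is not set. */
struct rx_desc {
	void *buf;
	DEFINE_DMA_UNMAP_ADDR(dma);
	DEFINE_DMA_UNMAP_LEN(dma_len);
};

static int rx_desc_map(struct device *dev, struct rx_desc *desc,
		       void *buf, size_t len)
{
	dma_addr_t addr = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

	if (dma_mapping_error(dev, addr))
		return -ENOMEM;

	desc->buf = buf;
	dma_unmap_addr_set(desc, dma, addr);
	dma_unmap_len_set(desc, dma_len, len);
	return 0;
}

static void rx_desc_unmap(struct device *dev, struct rx_desc *desc)
{
	dma_unmap_single(dev, dma_unmap_addr(desc, dma),
			 dma_unmap_len(desc, dma_len), DMA_FROM_DEVICE);
}
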
+6-13
include/linux/file_ref.h
···6161 atomic_long_set(&ref->refcnt, cnt - 1);6262}63636464-bool __file_ref_put_badval(file_ref_t *ref, unsigned long cnt);6564bool __file_ref_put(file_ref_t *ref, unsigned long cnt);66656766/**···177178 */178179static __always_inline __must_check bool file_ref_put_close(file_ref_t *ref)179180{180180- long old, new;181181+ long old;181182182183 old = atomic_long_read(&ref->refcnt);183183- do {184184- if (unlikely(old < 0))185185- return __file_ref_put_badval(ref, old);186186-187187- if (old == FILE_REF_ONEREF)188188- new = FILE_REF_DEAD;189189- else190190- new = old - 1;191191- } while (!atomic_long_try_cmpxchg(&ref->refcnt, &old, new));192192-193193- return new == FILE_REF_DEAD;184184+ if (likely(old == FILE_REF_ONEREF)) {185185+ if (likely(atomic_long_try_cmpxchg(&ref->refcnt, &old, FILE_REF_DEAD)))186186+ return true;187187+ }188188+ return file_ref_put(ref);194189}195190196191/**
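
file_ref_put_close() is reduced above to a single cmpxchg on the common "this put drops the last reference" case, falling back to file_ref_put() otherwise. A hedged sketch of the intended calling pattern (struct demo_obj is made up; only file_ref_t and file_ref_put_close() from this header are assumed):

#include <linux/file_ref.h>
#include <linux/slab.h>

/* Hypothetical object guarded by a file_ref_t, for illustration only. */
struct demo_obj {
	file_ref_t ref;
	/* ... payload ... */
};

/*
 * Like file_ref_put(), file_ref_put_close() returns true when the
 * caller dropped the final reference; it is just optimized for
 * close()-style paths where that is the expected outcome.
 */
static void demo_obj_put_on_close(struct demo_obj *obj)
{
	if (file_ref_put_close(&obj->ref))
		kfree(obj);
}
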
+5
include/linux/fwnode.h
···22/*33 * fwnode.h - Firmware device node object handle type definition.44 *55+ * This header file provides low-level data types and definitions for firmware66+ * and device property providers. The respective API header files supplied by77+ * them should contain all of the requisite data types and definitions for end88+ * users, so including it directly should not be necessary.99+ *510 * Copyright (C) 2015, Intel Corporation611 * Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>712 */
···361361 phy_interface_t iface);362362363363/**364364- * mac_link_down() - take the link down364364+ * mac_link_down() - notification that the link has gone down365365 * @config: a pointer to a &struct phylink_config.366366 * @mode: link autonegotiation mode367367 * @interface: link &typedef phy_interface_t mode368368 *369369- * If @mode is not an in-band negotiation mode (as defined by370370- * phylink_autoneg_inband()), force the link down and disable any371371- * Energy Efficient Ethernet MAC configuration. Interface type372372- * selection must be done in mac_config().369369+ * Notifies the MAC that the link has gone down. This will not be called370370+ * unless mac_link_up() has been previously called.371371+ *372372+ * The MAC should stop processing packets for transmission and reception.373373+ * phylink will have called netif_carrier_off() to notify the networking374374+ * stack that the link has gone down, so MAC drivers should not make this375375+ * call.376376+ *377377+ * If @mode is %MLO_AN_INBAND, then this function must not prevent the378378+ * link coming up.373379 */374380void mac_link_down(struct phylink_config *config, unsigned int mode,375381 phy_interface_t interface);376382377383/**378378- * mac_link_up() - allow the link to come up384384+ * mac_link_up() - notification that the link has come up379385 * @config: a pointer to a &struct phylink_config.380380- * @phy: any attached phy386386+ * @phy: any attached phy (deprecated - please use LPI interfaces)381387 * @mode: link autonegotiation mode382388 * @interface: link &typedef phy_interface_t mode383389 * @speed: link speed···391385 * @tx_pause: link transmit pause enablement status392386 * @rx_pause: link receive pause enablement status393387 *394394- * Configure the MAC for an established link.388388+ * Notifies the MAC that the link has come up, and the parameters of the389389+ * link as seen from the MACs point of view. If mac_link_up() has been390390+ * called previously, there will be an intervening call to mac_link_down()391391+ * before this method will be subsequently called.395392 *396393 * @speed, @duplex, @tx_pause and @rx_pause indicate the finalised link397394 * settings, and should be used to configure the MAC block appropriately···406397 * that the user wishes to override the pause settings, and this should407398 * be allowed when considering the implementation of this method.408399 *409409- * If in-band negotiation mode is disabled, allow the link to come up. If410410- * @phy is non-%NULL, configure Energy Efficient Ethernet by calling411411- * phy_init_eee() and perform appropriate MAC configuration for EEE.400400+ * Once configured, the MAC may begin to process packets for transmission401401+ * and reception.402402+ *412403 * Interface type selection must be done in mac_config().413404 */414405void mac_link_up(struct phylink_config *config, struct phy_device *phy,
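
The phylink.h hunks rewrite the mac_link_down()/mac_link_up() kernel-doc as link notifications rather than configuration hooks. A hedged skeleton of MAC driver callbacks matching the documented prototypes (demo_* names are made up; register programming is elided):

#include <linux/phylink.h>

static void demo_mac_link_down(struct phylink_config *config,
			       unsigned int mode, phy_interface_t interface)
{
	/* Stop MAC TX/RX; phylink has already called netif_carrier_off(). */
}

static void demo_mac_link_up(struct phylink_config *config,
			     struct phy_device *phy, unsigned int mode,
			     phy_interface_t interface, int speed, int duplex,
			     bool tx_pause, bool rx_pause)
{
	/* Program the finalised speed/duplex/pause, then enable TX/RX. */
}

static const struct phylink_mac_ops demo_phylink_ops = {
	/* .mac_config and the other mandatory ops elided */
	.mac_link_down	= demo_mac_link_down,
	.mac_link_up	= demo_mac_link_up,
};
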
+3
include/linux/virtio.h
···220220 * occurs.221221 * @reset_done: optional function to call after transport specific reset222222 * operation has finished.223223+ * @shutdown: synchronize with the device on shutdown. If provided, replaces224224+ * the virtio core implementation.223225 */224226struct virtio_driver {225227 struct device_driver driver;···239237 int (*restore)(struct virtio_device *dev);240238 int (*reset_prepare)(struct virtio_device *dev);241239 int (*reset_done)(struct virtio_device *dev);240240+ void (*shutdown)(struct virtio_device *dev);242241};243242244243#define drv_to_virtio(__drv) container_of_const(__drv, struct virtio_driver, driver)
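
With the new optional ->shutdown() callback, a virtio driver can take over shutdown handling from the core. A hedged skeleton (demo_* names are made up; id_table/probe/remove are elided):

#include <linux/virtio.h>

static void demo_virtio_shutdown(struct virtio_device *vdev)
{
	/* Quiesce the device and virtqueues instead of the core default. */
}

static struct virtio_driver demo_virtio_driver = {
	.driver.name	= "demo-virtio",
	/* .id_table, .probe, .remove, ... elided */
	.shutdown	= demo_virtio_shutdown,
};
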
+56-29
include/uapi/linux/landlock.h
···5353 __u64 scoped;5454};55555656-/*5757- * sys_landlock_create_ruleset() flags:5656+/**5757+ * DOC: landlock_create_ruleset_flags5858 *5959- * - %LANDLOCK_CREATE_RULESET_VERSION: Get the highest supported Landlock ABI6060- * version.6161- * - %LANDLOCK_CREATE_RULESET_ERRATA: Get a bitmask of fixed issues.5959+ * **Flags**6060+ *6161+ * %LANDLOCK_CREATE_RULESET_VERSION6262+ * Get the highest supported Landlock ABI version (starting at 1).6363+ *6464+ * %LANDLOCK_CREATE_RULESET_ERRATA6565+ * Get a bitmask of fixed issues for the current Landlock ABI version.6266 */6367/* clang-format off */6468#define LANDLOCK_CREATE_RULESET_VERSION (1U << 0)6569#define LANDLOCK_CREATE_RULESET_ERRATA (1U << 1)6670/* clang-format on */67716868-/*6969- * sys_landlock_restrict_self() flags:7272+/**7373+ * DOC: landlock_restrict_self_flags7074 *7171- * - %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF: Do not create any log related to the7272- * enforced restrictions. This should only be set by tools launching unknown7373- * or untrusted programs (e.g. a sandbox tool, container runtime, system7474- * service manager). Because programs sandboxing themselves should fix any7575- * denied access, they should not set this flag to be aware of potential7676- * issues reported by system's logs (i.e. audit).7777- * - %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON: Explicitly ask to continue7878- * logging denied access requests even after an :manpage:`execve(2)` call.7979- * This flag should only be set if all the programs than can legitimately be8080- * executed will not try to request a denied access (which could spam audit8181- * logs).8282- * - %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF: Do not create any log related8383- * to the enforced restrictions coming from future nested domains created by8484- * the caller or its descendants. This should only be set according to a8585- * runtime configuration (i.e. not hardcoded) by programs launching other8686- * unknown or untrusted programs that may create their own Landlock domains8787- * and spam logs. The main use case is for container runtimes to enable users8888- * to mute buggy sandboxed programs for a specific container image. Other use8989- * cases include sandboxer tools and init systems. Unlike9090- * %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF,9191- * %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF does not impact the requested9292- * restriction (if any) but only the future nested domains.7575+ * **Flags**7676+ *7777+ * By default, denied accesses originating from programs that sandbox themselves7878+ * are logged via the audit subsystem. Such events typically indicate unexpected7979+ * behavior, such as bugs or exploitation attempts. However, to avoid excessive8080+ * logging, access requests denied by a domain not created by the originating8181+ * program are not logged by default. The rationale is that programs should know8282+ * their own behavior, but not necessarily the behavior of other programs. 
This8383+ * default configuration is suitable for most programs that sandbox themselves.8484+ * For specific use cases, the following flags allow programs to modify this8585+ * default logging behavior.8686+ *8787+ * The %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF and8888+ * %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON flags apply to the newly created8989+ * Landlock domain.9090+ *9191+ * %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF9292+ * Disables logging of denied accesses originating from the thread creating9393+ * the Landlock domain, as well as its children, as long as they continue9494+ * running the same executable code (i.e., without an intervening9595+ * :manpage:`execve(2)` call). This is intended for programs that execute9696+ * unknown code without invoking :manpage:`execve(2)`, such as script9797+ * interpreters. Programs that only sandbox themselves should not set this9898+ * flag, so users can be notified of unauthorized access attempts via system9999+ * logs.100100+ *101101+ * %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON102102+ * Enables logging of denied accesses after an :manpage:`execve(2)` call,103103+ * providing visibility into unauthorized access attempts by newly executed104104+ * programs within the created Landlock domain. This flag is recommended105105+ * only when all potential executables in the domain are expected to comply106106+ * with the access restrictions, as excessive audit log entries could make107107+ * it more difficult to identify critical events.108108+ *109109+ * %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF110110+ * Disables logging of denied accesses originating from nested Landlock111111+ * domains created by the caller or its descendants. This flag should be set112112+ * according to runtime configuration, not hardcoded, to avoid suppressing113113+ * important security events. It is useful for container runtimes or114114+ * sandboxing tools that may launch programs which themselves create115115+ * Landlock domains and could otherwise generate excessive logs. Unlike116116+ * ``LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF``, this flag only affects117117+ * future nested domains, not the one being created. It can also be used118118+ * with a @ruleset_fd value of -1 to mute subdomain logs without creating a119119+ * domain.93120 */94121/* clang-format off */95122#define LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF (1U << 0)
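
As a hedged userspace illustration of the reworked logging flags, a sandboxer-style helper that mutes audit events from future nested domains without enforcing a new ruleset, using the documented ruleset_fd == -1 form (mute_subdomain_logs() is a made-up name; the raw syscall is used since glibc has no wrapper):

#include <linux/landlock.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Caller still needs no_new_privs or CAP_SYS_ADMIN in its namespace. */
static int mute_subdomain_logs(void)
{
	return syscall(SYS_landlock_restrict_self, -1,
		       LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF);
}
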
+2-2
include/uapi/linux/vhost.h
···28282929/* Set current process as the (exclusive) owner of this file descriptor. This3030 * must be called before any other vhost command. Further calls to3131- * VHOST_OWNER_SET fail until VHOST_OWNER_RESET is called. */3131+ * VHOST_SET_OWNER fail until VHOST_RESET_OWNER is called. */3232#define VHOST_SET_OWNER _IO(VHOST_VIRTIO, 0x01)3333/* Give up ownership, and reset the device to default values.3434- * Allows subsequent call to VHOST_OWNER_SET to succeed. */3434+ * Allows subsequent call to VHOST_SET_OWNER to succeed. */3535#define VHOST_RESET_OWNER _IO(VHOST_VIRTIO, 0x02)36363737/* Set up/modify memory layout */
···107107 */108108#define UFS_DEVICE_QUIRK_DELAY_AFTER_LPM (1 << 11)109109110110+/*111111+ * Some ufs devices may need more time to be in hibern8 before exiting.112112+ * Enable this quirk to give it an additional 100us.113113+ */114114+#define UFS_DEVICE_QUIRK_PA_HIBER8TIME (1 << 12)115115+110116#endif /* UFS_QUIRKS_H_ */
+15-9
io_uring/io_uring.c
···872872 lockdep_assert(!io_wq_current_is_worker());873873 lockdep_assert_held(&ctx->uring_lock);874874875875- __io_cq_lock(ctx);876876- posted = io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags);875875+ if (!ctx->lockless_cq) {876876+ spin_lock(&ctx->completion_lock);877877+ posted = io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags);878878+ spin_unlock(&ctx->completion_lock);879879+ } else {880880+ posted = io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags);881881+ }882882+877883 ctx->submit_state.cq_flush = true;878878- __io_cq_unlock_post(ctx);879884 return posted;880885}881886···10831078 while (node) {10841079 req = container_of(node, struct io_kiocb, io_task_work.node);10851080 node = node->next;10861086- if (sync && last_ctx != req->ctx) {10811081+ if (last_ctx != req->ctx) {10871082 if (last_ctx) {10881088- flush_delayed_work(&last_ctx->fallback_work);10831083+ if (sync)10841084+ flush_delayed_work(&last_ctx->fallback_work);10891085 percpu_ref_put(&last_ctx->refs);10901086 }10911087 last_ctx = req->ctx;10921088 percpu_ref_get(&last_ctx->refs);10931089 }10941094- if (llist_add(&req->io_task_work.node,10951095- &req->ctx->fallback_llist))10961096- schedule_delayed_work(&req->ctx->fallback_work, 1);10901090+ if (llist_add(&req->io_task_work.node, &last_ctx->fallback_llist))10911091+ schedule_delayed_work(&last_ctx->fallback_work, 1);10971092 }1098109310991094 if (last_ctx) {11001100- flush_delayed_work(&last_ctx->fallback_work);10951095+ if (sync)10961096+ flush_delayed_work(&last_ctx->fallback_work);11011097 percpu_ref_put(&last_ctx->refs);11021098 }11031099}
+1-1
kernel/bpf/hashtab.c
···21892189 b = &htab->buckets[i];21902190 rcu_read_lock();21912191 head = &b->head;21922192- hlist_nulls_for_each_entry_rcu(elem, n, head, hash_node) {21922192+ hlist_nulls_for_each_entry_safe(elem, n, head, hash_node) {21932193 key = elem->key;21942194 if (is_percpu) {21952195 /* current cpu value for percpu map */
+1
kernel/bpf/preload/bpf_preload_kern.c
···8989}9090late_initcall(load);9191module_exit(fini);9292+MODULE_IMPORT_NS("BPF_INTERNAL");9293MODULE_LICENSE("GPL");9394MODULE_DESCRIPTION("Embedded BPF programs for introspection in bpffs");
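
The preload module above now imports the BPF_INTERNAL symbol namespace. For reference, a hedged sketch of how the two sides of a namespaced symbol fit together (demo_bpf_helper() is a made-up symbol; both sides are shown in one place purely for illustration):

#include <linux/export.h>
#include <linux/module.h>

/* Exporting side (e.g. BPF core): */
void demo_bpf_helper(void)
{
}
EXPORT_SYMBOL_NS_GPL(demo_bpf_helper, "BPF_INTERNAL");

/* Consuming side (e.g. the preload module above): */
MODULE_IMPORT_NS("BPF_INTERNAL");
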
···336336337337static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)338338{339339- if (!rmem->priv) {340340- struct dma_coherent_mem *mem;339339+ struct dma_coherent_mem *mem = rmem->priv;341340341341+ if (!mem) {342342 mem = dma_init_coherent_memory(rmem->base, rmem->base,343343 rmem->size, true);344344 if (IS_ERR(mem))345345 return PTR_ERR(mem);346346 rmem->priv = mem;347347 }348348- dma_assign_coherent_memory(dev, rmem->priv);348348+349349+ /* Warn if the device potentially can't use the reserved memory */350350+ if (mem->device_base + rmem->size - 1 >351351+ min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit))352352+ dev_warn(dev, "reserved memory is beyond device's set DMA address range\n");353353+354354+ dma_assign_coherent_memory(dev, mem);349355 return 0;350356}351357
+1-2
kernel/dma/contiguous.c
···6464 * Users, who want to set the size of global CMA area for their system6565 * should use cma= kernel parameter.6666 */6767-static const phys_addr_t size_bytes __initconst =6868- (phys_addr_t)CMA_SIZE_MBYTES * SZ_1M;6767+#define size_bytes ((phys_addr_t)CMA_SIZE_MBYTES * SZ_1M)6968static phys_addr_t size_cmdline __initdata = -1;7069static phys_addr_t base_cmdline __initdata;7170static phys_addr_t limit_cmdline __initdata;
+18-9
kernel/dma/mapping.c
···910910}911911EXPORT_SYMBOL(dma_set_coherent_mask);912912913913-/**914914- * dma_addressing_limited - return if the device is addressing limited915915- * @dev: device to check916916- *917917- * Return %true if the devices DMA mask is too small to address all memory in918918- * the system, else %false. Lack of addressing bits is the prime reason for919919- * bounce buffering, but might not be the only one.920920- */921921-bool dma_addressing_limited(struct device *dev)913913+static bool __dma_addressing_limited(struct device *dev)922914{923915 const struct dma_map_ops *ops = get_dma_ops(dev);924916···921929 if (unlikely(ops) || use_dma_iommu(dev))922930 return false;923931 return !dma_direct_all_ram_mapped(dev);932932+}933933+934934+/**935935+ * dma_addressing_limited - return if the device is addressing limited936936+ * @dev: device to check937937+ *938938+ * Return %true if the devices DMA mask is too small to address all memory in939939+ * the system, else %false. Lack of addressing bits is the prime reason for940940+ * bounce buffering, but might not be the only one.941941+ */942942+bool dma_addressing_limited(struct device *dev)943943+{944944+ if (!__dma_addressing_limited(dev))945945+ return false;946946+947947+ dev_dbg(dev, "device is DMA addressing limited\n");948948+ return true;924949}925950EXPORT_SYMBOL_GPL(dma_addressing_limited);926951
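
The refactored dma_addressing_limited() keeps the same kernel-doc contract while emitting a debug message when a device is addressing limited. A hedged sketch of a driver-side check (demo_setup_dma() is a made-up name):

#include <linux/dma-mapping.h>

static int demo_setup_dma(struct device *dev)
{
	int ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));

	if (ret)
		return ret;

	/* Lack of addressing bits usually means bounce buffering ahead. */
	if (dma_addressing_limited(dev))
		dev_info(dev, "32-bit DMA only; large buffers may bounce\n");

	return 0;
}
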
+2-2
kernel/events/core.c
···39433943 perf_event_set_state(event, PERF_EVENT_STATE_ERROR);3944394439453945 if (*perf_event_fasync(event))39463946- event->pending_kill = POLL_HUP;39463946+ event->pending_kill = POLL_ERR;3947394739483948 perf_event_wakeup(event);39493949 } else {···6075607560766076 if (unlikely(READ_ONCE(event->state) == PERF_EVENT_STATE_ERROR &&60776077 event->attr.pinned))60786078- return events;60786078+ return EPOLLERR;6079607960806080 /*60816081 * Pin the event->rb by taking event->mmap_mutex; otherwise
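
With the perf change above, a pinned event that has dropped into the error state reports EPOLLERR from poll() (and raises SIGIO with si_code POLL_ERR), so userspace can distinguish "no data yet" from "event is dead". A hedged userspace sketch (perf_event_still_alive() is a made-up helper; perf_fd is assumed to come from perf_event_open(2)):

#include <poll.h>
#include <stdbool.h>

static bool perf_event_still_alive(int perf_fd)
{
	struct pollfd pfd = { .fd = perf_fd, .events = POLLIN };

	if (poll(&pfd, 1, 0) < 0)
		return false;

	return !(pfd.revents & POLLERR);
}
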
+7-43
kernel/sched/ext.c
···163163 /*164164 * CPU cgroup support flags165165 */166166- SCX_OPS_HAS_CGROUP_WEIGHT = 1LLU << 16, /* cpu.weight */166166+ SCX_OPS_HAS_CGROUP_WEIGHT = 1LLU << 16, /* DEPRECATED, will be removed on 6.18 */167167168168 SCX_OPS_ALL_FLAGS = SCX_OPS_KEEP_BUILTIN_IDLE |169169 SCX_OPS_ENQ_LAST |···3899389939003900DEFINE_STATIC_PERCPU_RWSEM(scx_cgroup_rwsem);39013901static bool scx_cgroup_enabled;39023902-static bool cgroup_warned_missing_weight;39033903-static bool cgroup_warned_missing_idle;39043904-39053905-static void scx_cgroup_warn_missing_weight(struct task_group *tg)39063906-{39073907- if (scx_ops_enable_state() == SCX_OPS_DISABLED ||39083908- cgroup_warned_missing_weight)39093909- return;39103910-39113911- if ((scx_ops.flags & SCX_OPS_HAS_CGROUP_WEIGHT) || !tg->css.parent)39123912- return;39133913-39143914- pr_warn("sched_ext: \"%s\" does not implement cgroup cpu.weight\n",39153915- scx_ops.name);39163916- cgroup_warned_missing_weight = true;39173917-}39183918-39193919-static void scx_cgroup_warn_missing_idle(struct task_group *tg)39203920-{39213921- if (!scx_cgroup_enabled || cgroup_warned_missing_idle)39223922- return;39233923-39243924- if (!tg->idle)39253925- return;39263926-39273927- pr_warn("sched_ext: \"%s\" does not implement cgroup cpu.idle\n",39283928- scx_ops.name);39293929- cgroup_warned_missing_idle = true;39303930-}3931390239323903int scx_tg_online(struct task_group *tg)39333904{···39073936 WARN_ON_ONCE(tg->scx_flags & (SCX_TG_ONLINE | SCX_TG_INITED));3908393739093938 percpu_down_read(&scx_cgroup_rwsem);39103910-39113911- scx_cgroup_warn_missing_weight(tg);3912393939133940 if (scx_cgroup_enabled) {39143941 if (SCX_HAS_OP(cgroup_init)) {···4045407640464077void scx_group_set_idle(struct task_group *tg, bool idle)40474078{40484048- percpu_down_read(&scx_cgroup_rwsem);40494049- scx_cgroup_warn_missing_idle(tg);40504050- percpu_up_read(&scx_cgroup_rwsem);40794079+ /* TODO: Implement ops->cgroup_set_idle() */40514080}4052408140534082static void scx_cgroup_lock(void)···4239427242404273 percpu_rwsem_assert_held(&scx_cgroup_rwsem);4241427442424242- cgroup_warned_missing_weight = false;42434243- cgroup_warned_missing_idle = false;42444244-42454275 /*42464276 * scx_tg_on/offline() are excluded through scx_cgroup_rwsem. 
If we walk42474277 * cgroups and init, all online cgroups are initialized.···42474283 css_for_each_descendant_pre(css, &root_task_group.css) {42484284 struct task_group *tg = css_tg(css);42494285 struct scx_cgroup_init_args args = { .weight = tg->scx_weight };42504250-42514251- scx_cgroup_warn_missing_weight(tg);42524252- scx_cgroup_warn_missing_idle(tg);4253428642544287 if ((tg->scx_flags &42554288 (SCX_TG_ONLINE | SCX_TG_INITED)) != SCX_TG_ONLINE)···4584462345854624static void free_exit_info(struct scx_exit_info *ei)45864625{45874587- kfree(ei->dump);46264626+ kvfree(ei->dump);45884627 kfree(ei->msg);45894628 kfree(ei->bt);45904629 kfree(ei);···4600463946014640 ei->bt = kcalloc(SCX_EXIT_BT_LEN, sizeof(ei->bt[0]), GFP_KERNEL);46024641 ei->msg = kzalloc(SCX_EXIT_MSG_LEN, GFP_KERNEL);46034603- ei->dump = kzalloc(exit_dump_len, GFP_KERNEL);46424642+ ei->dump = kvzalloc(exit_dump_len, GFP_KERNEL);4604464346054644 if (!ei->bt || !ei->msg || !ei->dump) {46064645 free_exit_info(ei);···52125251 scx_ops_error("SCX_OPS_BUILTIN_IDLE_PER_NODE requires CPU idle selection enabled");52135252 return -EINVAL;52145253 }52545254+52555255+ if (ops->flags & SCX_OPS_HAS_CGROUP_WEIGHT)52565256+ pr_warn("SCX_OPS_HAS_CGROUP_WEIGHT is deprecated and a noop\n");5215525752165258 return 0;52175259}
···111111 * @arg: data to be passed to fn and handled_kill112112 * @name: the thread's name113113 *114114- * This returns a specialized task for use by the vhost layer or NULL on114114+ * This returns a specialized task for use by the vhost layer or ERR_PTR() on115115 * failure. The returned task is inactive, and the caller must fire it up116116 * through vhost_task_start().117117 */
···337337338338 release_sock(sk);339339340340- sock_kfree_s(sk, match, sizeof(*match));340340+ kfree_rcu_mightsleep(match);341341+ /* Adjust sk_omem_alloc like sock_kfree_s() does, to match342342+ * with allocation of this memory by sock_kmemdup()343343+ */344344+ atomic_sub(sizeof(*match), &sk->sk_omem_alloc);341345342346 err = 0;343347out:
+17-6
net/sched/sch_hfsc.c
···961961962962 if (cl != NULL) {963963 int old_flags;964964+ int len = 0;964965965966 if (parentid) {966967 if (cl->cl_parent &&···992991 if (usc != NULL)993992 hfsc_change_usc(cl, usc, cur_time);994993994994+ if (cl->qdisc->q.qlen != 0)995995+ len = qdisc_peek_len(cl->qdisc);996996+ /* Check queue length again since some qdisc implementations997997+ * (e.g., netem/codel) might empty the queue during the peek998998+ * operation.999999+ */9951000 if (cl->qdisc->q.qlen != 0) {996996- int len = qdisc_peek_len(cl->qdisc);997997-9981001 if (cl->cl_flags & HFSC_RSC) {9991002 if (old_flags & HFSC_RSC)10001003 update_ed(cl, len);···16411636 if (cl->qdisc->q.qlen != 0) {16421637 /* update ed */16431638 next_len = qdisc_peek_len(cl->qdisc);16441644- if (realtime)16451645- update_ed(cl, next_len);16461646- else16471647- update_d(cl, next_len);16391639+ /* Check queue length again since some qdisc implementations16401640+ * (e.g., netem/codel) might empty the queue during the peek16411641+ * operation.16421642+ */16431643+ if (cl->qdisc->q.qlen != 0) {16441644+ if (realtime)16451645+ update_ed(cl, next_len);16461646+ else16471647+ update_d(cl, next_len);16481648+ }16481649 } else {16491650 /* the class becomes passive */16501651 eltree_remove(cl);
+1-5
net/sunrpc/cache.c
···15361536 * or by one second if it has already reached the current time.15371537 * Newly added cache entries will always have ->last_refresh greater15381538 * that ->flush_time, so they don't get flushed prematurely.15391539- *15401540- * If someone frequently calls the flush interface, we should15411541- * immediately clean the corresponding cache_detail instead of15421542- * continuously accumulating nextcheck.15431539 */1544154015451545- if (cd->flush_time >= now && cd->flush_time < (now + 5))15411541+ if (cd->flush_time >= now)15461542 now = cd->flush_time + 1;1547154315481544 cd->flush_time = now;
+2-1
net/tipc/monitor.c
···716716 if (!mon)717717 continue;718718 write_lock_bh(&mon->lock);719719- mon->self->addr = tipc_own_addr(net);719719+ if (mon->self)720720+ mon->self->addr = tipc_own_addr(net);720721 write_unlock_bh(&mon->lock);721722 }722723}
···130130static inline void131131landlock_free_hierarchy_details(struct landlock_hierarchy *const hierarchy)132132{133133- if (WARN_ON_ONCE(!hierarchy || !hierarchy->details))133133+ if (!hierarchy || !hierarchy->details)134134 return;135135136136 put_pid(hierarchy->details->pid);
+13-14
security/landlock/syscalls.c
···169169 * the new ruleset.170170 * @size: Size of the pointed &struct landlock_ruleset_attr (needed for171171 * backward and forward compatibility).172172- * @flags: Supported value:172172+ * @flags: Supported values:173173+ *173174 * - %LANDLOCK_CREATE_RULESET_VERSION174175 * - %LANDLOCK_CREATE_RULESET_ERRATA175176 *176177 * This system call enables to create a new Landlock ruleset, and returns the177178 * related file descriptor on success.178179 *179179- * If @flags is %LANDLOCK_CREATE_RULESET_VERSION and @attr is NULL and @size is180180- * 0, then the returned value is the highest supported Landlock ABI version181181- * (starting at 1).182182- *183183- * If @flags is %LANDLOCK_CREATE_RULESET_ERRATA and @attr is NULL and @size is184184- * 0, then the returned value is a bitmask of fixed issues for the current185185- * Landlock ABI version.180180+ * If %LANDLOCK_CREATE_RULESET_VERSION or %LANDLOCK_CREATE_RULESET_ERRATA is181181+ * set, then @attr must be NULL and @size must be 0.186182 *187183 * Possible returned errors are:188184 *···187191 * - %E2BIG: @attr or @size inconsistencies;188192 * - %EFAULT: @attr or @size inconsistencies;189193 * - %ENOMSG: empty &landlock_ruleset_attr.handled_access_fs.194194+ *195195+ * .. kernel-doc:: include/uapi/linux/landlock.h196196+ * :identifiers: landlock_create_ruleset_flags190197 */191198SYSCALL_DEFINE3(landlock_create_ruleset,192199 const struct landlock_ruleset_attr __user *const, attr,···451452 * @ruleset_fd: File descriptor tied to the ruleset to merge with the target.452453 * @flags: Supported values:453454 *454454- * - %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF455455- * - %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON456456- * - %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF455455+ * - %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF456456+ * - %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON457457+ * - %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF457458 *458459 * This system call enables to enforce a Landlock ruleset on the current459460 * thread. Enforcing a ruleset requires that the task has %CAP_SYS_ADMIN in its460461 * namespace or is running with no_new_privs. This avoids scenarios where461462 * unprivileged tasks can affect the behavior of privileged children.462462- *463463- * It is allowed to only pass the %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF464464- * flag with a @ruleset_fd value of -1.465463 *466464 * Possible returned errors are:467465 *···471475 * %CAP_SYS_ADMIN in its namespace.472476 * - %E2BIG: The maximum number of stacked rulesets is reached for the current473477 * thread.478478+ *479479+ * .. kernel-doc:: include/uapi/linux/landlock.h480480+ * :identifiers: landlock_restrict_self_flags474481 */475482SYSCALL_DEFINE2(landlock_restrict_self, const int, ruleset_fd, const __u32,476483 flags)
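
Matching the updated landlock_create_ruleset() kernel-doc, a hedged userspace sketch of the version query, where @attr must be NULL and @size must be 0 and the return value is the ABI version (or, with the errata flag, a bitmask) rather than a file descriptor (landlock_abi_version() is a made-up name):

#include <linux/landlock.h>
#include <sys/syscall.h>
#include <unistd.h>

static long landlock_abi_version(void)
{
	return syscall(SYS_landlock_create_ruleset, NULL, 0,
		       LANDLOCK_CREATE_RULESET_VERSION);
}
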