···
 DMA_ATTR_MMIO will not perform any cache flushing. The address
 provided must never be mapped cacheable into the CPU.

-DMA_ATTR_CPU_CACHE_CLEAN
-------------------------
+DMA_ATTR_DEBUGGING_IGNORE_CACHELINES
+------------------------------------

-This attribute indicates the CPU will not dirty any cacheline overlapping this
-DMA_FROM_DEVICE/DMA_BIDIRECTIONAL buffer while it is mapped. This allows
-multiple small buffers to safely share a cacheline without risk of data
-corruption, suppressing DMA debug warnings about overlapping mappings.
-All mappings sharing a cacheline should have this attribute.
+This attribute indicates that CPU cache lines may overlap for buffers mapped
+with DMA_FROM_DEVICE or DMA_BIDIRECTIONAL.
+
+Such overlap may occur when callers map multiple small buffers that reside
+within the same cache line. In this case, callers must guarantee that the CPU
+will not dirty these cache lines after the mappings are established. When this
+condition is met, multiple buffers can safely share a cache line without risking
+data corruption.
+
+All mappings that share a cache line must set this attribute to suppress DMA
+debug warnings about overlapping mappings.
+
+DMA_ATTR_REQUIRE_COHERENT
+-------------------------
+
+DMA mapping requests with the DMA_ATTR_REQUIRE_COHERENT fail on any
+system where SWIOTLB or cache management is required. This should only
+be used to support uAPI designs that require continuous HW DMA
+coherence with userspace processes, for example RDMA and DRM. At a
+minimum the memory being mapped must be userspace memory from
+pin_user_pages() or similar.
+
+Drivers should consider using dma_mmap_pages() instead of this
+interface when building their uAPIs, when possible.
+
+It must never be used in an in-kernel driver that only works with
+kernel memory.
···
   offset from voltage set to regulator.

 regulator-uv-protection-microvolt:
-  description: Set over under voltage protection limit. This is a limit where
+  description: Set under voltage protection limit. This is a limit where
   hardware performs emergency shutdown. Zero can be passed to disable
   protection and value '1' indicates that protection should be enabled but
   limit setting can be omitted. Limit is given as microvolt offset from
···
   is given as microvolt offset from voltage set to regulator.

 regulator-uv-warn-microvolt:
-  description: Set over under voltage warning limit. This is a limit where
+  description: Set under voltage warning limit. This is a limit where
   hardware is assumed still to be functional but approaching limit where
   it gets damaged. Recovery actions should be initiated. Zero can be passed
   to disable detection and value '1' indicates that detection should
Documentation/driver-api/driver-model/binding.rst (+48)
···
 When a driver is removed, the list of devices that it supports is
 iterated over, and the driver's remove callback is called for each
 one. The device is removed from that list and the symlinks removed.
+
+
+Driver Override
+~~~~~~~~~~~~~~~
+
+Userspace may override the standard matching by writing a driver name to
+a device's ``driver_override`` sysfs attribute. When set, only a driver
+whose name matches the override will be considered during binding. This
+bypasses all bus-specific matching (OF, ACPI, ID tables, etc.).
+
+The override may be cleared by writing an empty string, which returns
+the device to standard matching rules. Writing to ``driver_override``
+does not automatically unbind the device from its current driver or
+make any attempt to load the specified driver.
+
+Buses opt into this mechanism by setting the ``driver_override`` flag in
+their ``struct bus_type``::
+
+	const struct bus_type example_bus_type = {
+		...
+		.driver_override = true,
+	};
+
+When the flag is set, the driver core automatically creates the
+``driver_override`` sysfs attribute for every device on that bus.
+
+The bus's ``match()`` callback should check the override before performing
+its own matching, using ``device_match_driver_override()``::
+
+	static int example_match(struct device *dev, const struct device_driver *drv)
+	{
+		int ret;
+
+		ret = device_match_driver_override(dev, drv);
+		if (ret >= 0)
+			return ret;
+
+		/* Fall through to bus-specific matching... */
+	}
+
+``device_match_driver_override()`` returns > 0 if the override matches
+the given driver, 0 if the override is set but does not match, or < 0 if
+no override is set at all.
+
+Additional helpers are available:
+
+- ``device_set_driver_override()`` - set or clear the override from kernel code.
+- ``device_has_driver_override()`` - check whether an override is set.
MAINTAINERS (+5, -3)
···
 ASUS NOTEBOOKS AND EEEPC ACPI/WMI EXTRAS DRIVERS
 M:	Corentin Chary <corentin.chary@gmail.com>
 M:	Luke D. Jones <luke@ljones.dev>
-M:	Denis Benato <benato.denis96@gmail.com>
+M:	Denis Benato <denis.benato@linux.dev>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 W:	https://asus-linux.org/
···
 F:	drivers/gpu/drm/tiny/hx8357d.c

 DRM DRIVER FOR HYPERV SYNTHETIC VIDEO DEVICE
-M:	Deepak Rawat <drawat.floss@gmail.com>
+M:	Dexuan Cui <decui@microsoft.com>
+M:	Long Li <longli@microsoft.com>
+M:	Saurabh Sengar <ssengar@linux.microsoft.com>
 L:	linux-hyperv@vger.kernel.org
 L:	dri-devel@lists.freedesktop.org
 S:	Maintained
···
 F:	drivers/pinctrl/spear/

 SPI NOR SUBSYSTEM
-M:	Tudor Ambarus <tudor.ambarus@linaro.org>
 M:	Pratyush Yadav <pratyush@kernel.org>
 M:	Michael Walle <mwalle@kernel.org>
+R:	Takahiro Kuwano <takahiro.kuwano@infineon.com>
 L:	linux-mtd@lists.infradead.org
 S:	Maintained
 W:	http://www.linux-mtd.infradead.org/
···
 	if (!writable)
 		return -EPERM;

-	ptep = (u64 __user *)hva + offset;
+	ptep = (void __user *)hva + offset;
 	if (cpus_have_final_cap(ARM64_HAS_LSE_ATOMICS))
 		r = __lse_swap_desc(ptep, old, new);
 	else
arch/arm64/kvm/reset.c (+14)
···
 		kvm_vcpu_set_be(vcpu);

 	*vcpu_pc(vcpu) = target_pc;
+
+	/*
+	 * We may come from a state where either a PC update was
+	 * pending (SMC call resulting in PC being incremented to
+	 * skip the SMC) or a pending exception. Make sure we get
+	 * rid of all that, as this cannot be valid out of reset.
+	 *
+	 * Note that clearing the exception mask also clears PC
+	 * updates, but that's an implementation detail, and we
+	 * really want to make it explicit.
+	 */
+	vcpu_clear_flag(vcpu, PENDING_EXCEPTION);
+	vcpu_clear_flag(vcpu, EXCEPT_MASK);
+	vcpu_clear_flag(vcpu, INCREMENT_PC);
 	vcpu_set_reg(vcpu, 0, reset_state.r0);
 }
···
 void kvm_arch_crypto_set_masks(struct kvm *kvm, unsigned long *apm,
 			       unsigned long *aqm, unsigned long *adm);

+#define SIE64_RETURN_NORMAL	0
+#define SIE64_RETURN_MCCK	1
+
 int __sie64a(phys_addr_t sie_block_phys, struct kvm_s390_sie_block *sie_block, u64 *rsa,
 	     unsigned long gasce);
arch/s390/include/asm/stacktrace.h (+1, -1)
···
 	struct {
 		unsigned long sie_control_block;
 		unsigned long sie_savearea;
-		unsigned long sie_reason;
+		unsigned long sie_return;
 		unsigned long sie_flags;
 		unsigned long sie_control_block_phys;
 		unsigned long sie_guest_asce;
···
 {
 	struct kvm_s390_sie_block *scb_s = &vsie_page->scb_s;
 	struct kvm_s390_sie_block *scb_o = vsie_page->scb_o;
+	unsigned long sie_return = SIE64_RETURN_NORMAL;
 	int guest_bp_isolation;
 	int rc = 0;
···
 			goto xfer_to_guest_mode_check;
 		}
 		guest_timing_enter_irqoff();
-		rc = kvm_s390_enter_exit_sie(scb_s, vcpu->run->s.regs.gprs, sg->asce.val);
+		sie_return = kvm_s390_enter_exit_sie(scb_s, vcpu->run->s.regs.gprs, sg->asce.val);
 		guest_timing_exit_irqoff();
 		local_irq_enable();
 	}
···

 	kvm_vcpu_srcu_read_lock(vcpu);

-	if (rc == -EINTR) {
-		VCPU_EVENT(vcpu, 3, "%s", "machine check");
+	if (sie_return == SIE64_RETURN_MCCK) {
 		kvm_s390_reinject_machine_check(vcpu, &vsie_page->mcck_info);
 		return 0;
 	}
+
+	WARN_ON_ONCE(sie_return != SIE64_RETURN_NORMAL);

 	if (rc > 0)
 		rc = 0; /* we could still have an icpt */
arch/s390/mm/fault.c (+9, -2)
···
 		folio = phys_to_folio(addr);
 		if (unlikely(!folio_try_get(folio)))
 			return;
-		rc = arch_make_folio_accessible(folio);
+		rc = uv_convert_from_secure(folio_to_phys(folio));
+		if (!rc)
+			clear_bit(PG_arch_1, &folio->flags.f);
 		folio_put(folio);
+		/*
+		 * There are some valid fixup types for kernel
+		 * accesses to donated secure memory. zeropad is one
+		 * of them.
+		 */
 		if (rc)
-			BUG();
+			return handle_fault_error_nolock(regs, 0);
 	} else {
 		if (faulthandler_disabled())
 			return handle_fault_error_nolock(regs, 0);
arch/sh/drivers/platform_early.c (-4)
···
 	struct platform_device *pdev = to_platform_device(dev);
 	struct platform_driver *pdrv = to_platform_driver(drv);

-	/* When driver_override is set, only bind to the matching driver */
-	if (pdev->driver_override)
-		return !strcmp(pdev->driver_override, drv->name);
-
 	/* Then try to match against the id table */
 	if (pdrv->id_table)
 		return platform_match_id(pdrv->id_table, pdev) != NULL;
···
 		else if (i < n_running)
 			continue;

-		if (hwc->state & PERF_HES_ARCH)
+		cpuc->events[hwc->idx] = event;
+
+		if (hwc->state & PERF_HES_ARCH) {
+			static_call(x86_pmu_set_period)(event);
 			continue;
+		}

 		/*
 		 * if cpuc->enabled = 0, then no wrmsr as
 		 * per x86_pmu_enable_event()
 		 */
-		cpuc->events[hwc->idx] = event;
 		x86_pmu_start(event, PERF_EF_RELOAD);
 	}
 	cpuc->n_added = 0;
arch/x86/events/intel/core.c (+21, -10)
···
 	event->hw.dyn_constraint &= hybrid(event->pmu, acr_cause_mask64);
 }

+static inline int intel_set_branch_counter_constr(struct perf_event *event,
+						  int *num)
+{
+	if (branch_sample_call_stack(event))
+		return -EINVAL;
+	if (branch_sample_counters(event)) {
+		(*num)++;
+		event->hw.dyn_constraint &= x86_pmu.lbr_counters;
+	}
+
+	return 0;
+}
+
 static int intel_pmu_hw_config(struct perf_event *event)
 {
 	int ret = x86_pmu_hw_config(event);
···
 	 * group, which requires the extra space to store the counters.
 	 */
 	leader = event->group_leader;
-	if (branch_sample_call_stack(leader))
+	if (intel_set_branch_counter_constr(leader, &num))
 		return -EINVAL;
-	if (branch_sample_counters(leader)) {
-		num++;
-		leader->hw.dyn_constraint &= x86_pmu.lbr_counters;
-	}
 	leader->hw.flags |= PERF_X86_EVENT_BRANCH_COUNTERS;

 	for_each_sibling_event(sibling, leader) {
-		if (branch_sample_call_stack(sibling))
+		if (intel_set_branch_counter_constr(sibling, &num))
 			return -EINVAL;
-		if (branch_sample_counters(sibling)) {
-			num++;
-			sibling->hw.dyn_constraint &= x86_pmu.lbr_counters;
-		}
+	}
+
+	/* event isn't installed as a sibling yet. */
+	if (event != leader) {
+		if (intel_set_branch_counter_constr(event, &num))
+			return -EINVAL;
 	}

 	if (num > fls(x86_pmu.lbr_counters))
arch/x86/events/intel/ds.c (+7, -4)
···
 	if (omr.omr_remote)
 		val |= REM;

-	val |= omr.omr_hitm ? P(SNOOP, HITM) : P(SNOOP, HIT);
-
 	if (omr.omr_source == 0x2) {
-		u8 snoop = omr.omr_snoop | omr.omr_promoted;
+		u8 snoop = omr.omr_snoop | (omr.omr_promoted << 1);

-		if (snoop == 0x0)
+		if (omr.omr_hitm)
+			val |= P(SNOOP, HITM);
+		else if (snoop == 0x0)
 			val |= P(SNOOP, NA);
 		else if (snoop == 0x1)
 			val |= P(SNOOP, MISS);
···
 		else if (snoop == 0x3)
 			val |= P(SNOOP, NONE);
 	} else if (omr.omr_source > 0x2 && omr.omr_source < 0x7) {
+		val |= omr.omr_hitm ? P(SNOOP, HITM) : P(SNOOP, HIT);
 		val |= omr.omr_snoop ? P(SNOOPX, FWD) : 0;
+	} else {
+		val |= P(SNOOP, NONE);
 	}

 	return val;
arch/x86/hyperv/hv_crash.c (+61, -57)
···
 		cpu_relax();
 }

-/* This cannot be inlined as it needs stack */
-static noinline __noclone void hv_crash_restore_tss(void)
+static void hv_crash_restore_tss(void)
 {
 	load_TR_desc();
 }

-/* This cannot be inlined as it needs stack */
-static noinline void hv_crash_clear_kernpt(void)
+static void hv_crash_clear_kernpt(void)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
···
 	native_p4d_clear(p4d);
 }

-/*
- * This is the C entry point from the asm glue code after the disable hypercall.
- * We enter here in IA32-e long mode, ie, full 64bit mode running on kernel
- * page tables with our below 4G page identity mapped, but using a temporary
- * GDT. ds/fs/gs/es are null. ss is not usable. bp is null. stack is not
- * available. We restore kernel GDT, and rest of the context, and continue
- * to kexec.
- */
-static asmlinkage void __noreturn hv_crash_c_entry(void)
+
+static void __noreturn hv_crash_handle(void)
 {
-	struct hv_crash_ctxt *ctxt = &hv_crash_ctxt;
-
-	/* first thing, restore kernel gdt */
-	native_load_gdt(&ctxt->gdtr);
-
-	asm volatile("movw %%ax, %%ss" : : "a"(ctxt->ss));
-	asm volatile("movq %0, %%rsp" : : "m"(ctxt->rsp));
-
-	asm volatile("movw %%ax, %%ds" : : "a"(ctxt->ds));
-	asm volatile("movw %%ax, %%es" : : "a"(ctxt->es));
-	asm volatile("movw %%ax, %%fs" : : "a"(ctxt->fs));
-	asm volatile("movw %%ax, %%gs" : : "a"(ctxt->gs));
-
-	native_wrmsrq(MSR_IA32_CR_PAT, ctxt->pat);
-	asm volatile("movq %0, %%cr0" : : "r"(ctxt->cr0));
-
-	asm volatile("movq %0, %%cr8" : : "r"(ctxt->cr8));
-	asm volatile("movq %0, %%cr4" : : "r"(ctxt->cr4));
-	asm volatile("movq %0, %%cr2" : : "r"(ctxt->cr4));
-
-	native_load_idt(&ctxt->idtr);
-	native_wrmsrq(MSR_GS_BASE, ctxt->gsbase);
-	native_wrmsrq(MSR_EFER, ctxt->efer);
-
-	/* restore the original kernel CS now via far return */
-	asm volatile("movzwq %0, %%rax\n\t"
-		     "pushq %%rax\n\t"
-		     "pushq $1f\n\t"
-		     "lretq\n\t"
-		     "1:nop\n\t" : : "m"(ctxt->cs) : "rax");
-
-	/* We are in asmlinkage without stack frame, hence make C function
-	 * calls which will buy stack frames.
-	 */
 	hv_crash_restore_tss();
 	hv_crash_clear_kernpt();
···

 	hv_panic_timeout_reboot();
 }
-/* Tell gcc we are using lretq long jump in the above function intentionally */
+
+/*
+ * __naked functions do not permit function calls, not even to __always_inline
+ * functions that only contain asm() blocks themselves. So use a macro instead.
+ */
+#define hv_wrmsr(msr, val) \
+	asm volatile("wrmsr" :: "c"(msr), "a"((u32)val), "d"((u32)(val >> 32)) : "memory")
+
+/*
+ * This is the C entry point from the asm glue code after the disable hypercall.
+ * We enter here in IA32-e long mode, ie, full 64bit mode running on kernel
+ * page tables with our below 4G page identity mapped, but using a temporary
+ * GDT. ds/fs/gs/es are null. ss is not usable. bp is null. stack is not
+ * available. We restore kernel GDT, and rest of the context, and continue
+ * to kexec.
+ */
+static void __naked hv_crash_c_entry(void)
+{
+	/* first thing, restore kernel gdt */
+	asm volatile("lgdt %0" : : "m" (hv_crash_ctxt.gdtr));
+
+	asm volatile("movw %0, %%ss\n\t"
+		     "movq %1, %%rsp"
+		     :: "m"(hv_crash_ctxt.ss), "m"(hv_crash_ctxt.rsp));
+
+	asm volatile("movw %0, %%ds" : : "m"(hv_crash_ctxt.ds));
+	asm volatile("movw %0, %%es" : : "m"(hv_crash_ctxt.es));
+	asm volatile("movw %0, %%fs" : : "m"(hv_crash_ctxt.fs));
+	asm volatile("movw %0, %%gs" : : "m"(hv_crash_ctxt.gs));
+
+	hv_wrmsr(MSR_IA32_CR_PAT, hv_crash_ctxt.pat);
+	asm volatile("movq %0, %%cr0" : : "r"(hv_crash_ctxt.cr0));
+
+	asm volatile("movq %0, %%cr8" : : "r"(hv_crash_ctxt.cr8));
+	asm volatile("movq %0, %%cr4" : : "r"(hv_crash_ctxt.cr4));
+	asm volatile("movq %0, %%cr2" : : "r"(hv_crash_ctxt.cr2));
+
+	asm volatile("lidt %0" : : "m" (hv_crash_ctxt.idtr));
+	hv_wrmsr(MSR_GS_BASE, hv_crash_ctxt.gsbase);
+	hv_wrmsr(MSR_EFER, hv_crash_ctxt.efer);
+
+	/* restore the original kernel CS now via far return */
+	asm volatile("pushq %q0\n\t"
+		     "pushq %q1\n\t"
+		     "lretq"
+		     :: "r"(hv_crash_ctxt.cs), "r"(hv_crash_handle));
+}
+/* Tell objtool we are using lretq long jump in the above function intentionally */
 STACK_FRAME_NON_STANDARD(hv_crash_c_entry);

 static void hv_mark_tss_not_busy(void)
···
 {
 	struct hv_crash_ctxt *ctxt = &hv_crash_ctxt;

-	asm volatile("movq %%rsp,%0" : "=m"(ctxt->rsp));
+	ctxt->rsp = current_stack_pointer;

 	ctxt->cr0 = native_read_cr0();
 	ctxt->cr4 = native_read_cr4();

-	asm volatile("movq %%cr2, %0" : "=a"(ctxt->cr2));
-	asm volatile("movq %%cr8, %0" : "=a"(ctxt->cr8));
+	asm volatile("movq %%cr2, %0" : "=r"(ctxt->cr2));
+	asm volatile("movq %%cr8, %0" : "=r"(ctxt->cr8));

-	asm volatile("movl %%cs, %%eax" : "=a"(ctxt->cs));
-	asm volatile("movl %%ss, %%eax" : "=a"(ctxt->ss));
-	asm volatile("movl %%ds, %%eax" : "=a"(ctxt->ds));
-	asm volatile("movl %%es, %%eax" : "=a"(ctxt->es));
-	asm volatile("movl %%fs, %%eax" : "=a"(ctxt->fs));
-	asm volatile("movl %%gs, %%eax" : "=a"(ctxt->gs));
+	asm volatile("movw %%cs, %0" : "=m"(ctxt->cs));
+	asm volatile("movw %%ss, %0" : "=m"(ctxt->ss));
+	asm volatile("movw %%ds, %0" : "=m"(ctxt->ds));
+	asm volatile("movw %%es, %0" : "=m"(ctxt->es));
+	asm volatile("movw %%fs, %0" : "=m"(ctxt->fs));
+	asm volatile("movw %%gs, %0" : "=m"(ctxt->gs));

 	native_store_gdt(&ctxt->gdtr);
 	store_idt(&ctxt->idtr);
arch/x86/kernel/apic/x2apic_uv_x.c (+16, -2)
···
 	struct uv_hub_info_s *new_hub;

 	/* Allocate & fill new per hub info list */
-	new_hub = (bid == 0) ? &uv_hub_info_node0
-			     : kzalloc_node(bytes, GFP_KERNEL, uv_blade_to_node(bid));
+	if (bid == 0) {
+		new_hub = &uv_hub_info_node0;
+	} else {
+		int nid;
+
+		/*
+		 * Deconfigured sockets are mapped to SOCK_EMPTY. Use
+		 * NUMA_NO_NODE to allocate on a valid node.
+		 */
+		nid = uv_blade_to_node(bid);
+		if (nid == SOCK_EMPTY)
+			nid = NUMA_NO_NODE;
+
+		new_hub = kzalloc_node(bytes, GFP_KERNEL, nid);
+	}
+
 	if (WARN_ON_ONCE(!new_hub)) {
 		/* do not kfree() bid 0, which is statically allocated */
 		while (--bid > 0)
arch/x86/kernel/cpu/mce/amd.c (+11, -6)
···
 {
 	amd_reset_thr_limit(m->bank);

-	/* Clear MCA_DESTAT for all deferred errors even those logged in MCA_STATUS. */
-	if (m->status & MCI_STATUS_DEFERRED)
-		mce_wrmsrq(MSR_AMD64_SMCA_MCx_DESTAT(m->bank), 0);
+	if (mce_flags.smca) {
+		/*
+		 * Clear MCA_DESTAT for all deferred errors even those
+		 * logged in MCA_STATUS.
+		 */
+		if (m->status & MCI_STATUS_DEFERRED)
+			mce_wrmsrq(MSR_AMD64_SMCA_MCx_DESTAT(m->bank), 0);

-	/* Don't clear MCA_STATUS if MCA_DESTAT was used exclusively. */
-	if (m->kflags & MCE_CHECK_DFR_REGS)
-		return;
+		/* Don't clear MCA_STATUS if MCA_DESTAT was used exclusively. */
+		if (m->kflags & MCE_CHECK_DFR_REGS)
+			return;
+	}

 	mce_wrmsrq(mca_msr_reg(m->bank, MCA_STATUS), 0);
 }
···
 }
 __exitcall(deferred_probe_exit);

+int __device_set_driver_override(struct device *dev, const char *s, size_t len)
+{
+	const char *new, *old;
+	char *cp;
+
+	if (!s)
+		return -EINVAL;
+
+	/*
+	 * The stored value will be used in sysfs show callback (sysfs_emit()),
+	 * which has a length limit of PAGE_SIZE and adds a trailing newline.
+	 * Thus we can store one character less to avoid truncation during sysfs
+	 * show.
+	 */
+	if (len >= (PAGE_SIZE - 1))
+		return -EINVAL;
+
+	/*
+	 * Compute the real length of the string in case userspace sends us a
+	 * bunch of \0 characters like python likes to do.
+	 */
+	len = strlen(s);
+
+	if (!len) {
+		/* Empty string passed - clear override */
+		spin_lock(&dev->driver_override.lock);
+		old = dev->driver_override.name;
+		dev->driver_override.name = NULL;
+		spin_unlock(&dev->driver_override.lock);
+		kfree(old);
+
+		return 0;
+	}
+
+	cp = strnchr(s, len, '\n');
+	if (cp)
+		len = cp - s;
+
+	new = kstrndup(s, len, GFP_KERNEL);
+	if (!new)
+		return -ENOMEM;
+
+	spin_lock(&dev->driver_override.lock);
+	old = dev->driver_override.name;
+	if (cp != s) {
+		dev->driver_override.name = new;
+		spin_unlock(&dev->driver_override.lock);
+	} else {
+		/* "\n" passed - clear override */
+		dev->driver_override.name = NULL;
+		spin_unlock(&dev->driver_override.lock);
+
+		kfree(new);
+	}
+	kfree(old);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(__device_set_driver_override);
+
 /**
  * device_is_bound() - Check if device is bound to a driver
  * @dev: device to check
drivers/base/platform.c (+5, -32)
···
 	kfree(pa->pdev.dev.platform_data);
 	kfree(pa->pdev.mfd_cell);
 	kfree(pa->pdev.resource);
-	kfree(pa->pdev.driver_override);
 	kfree(pa);
 }
···
 }
 static DEVICE_ATTR_RO(numa_node);

-static ssize_t driver_override_show(struct device *dev,
-				    struct device_attribute *attr, char *buf)
-{
-	struct platform_device *pdev = to_platform_device(dev);
-	ssize_t len;
-
-	device_lock(dev);
-	len = sysfs_emit(buf, "%s\n", pdev->driver_override);
-	device_unlock(dev);
-
-	return len;
-}
-
-static ssize_t driver_override_store(struct device *dev,
-				     struct device_attribute *attr,
-				     const char *buf, size_t count)
-{
-	struct platform_device *pdev = to_platform_device(dev);
-	int ret;
-
-	ret = driver_set_override(dev, &pdev->driver_override, buf, count);
-	if (ret)
-		return ret;
-
-	return count;
-}
-static DEVICE_ATTR_RW(driver_override);
-
 static struct attribute *platform_dev_attrs[] = {
 	&dev_attr_modalias.attr,
 	&dev_attr_numa_node.attr,
-	&dev_attr_driver_override.attr,
 	NULL,
 };
···
 {
 	struct platform_device *pdev = to_platform_device(dev);
 	struct platform_driver *pdrv = to_platform_driver(drv);
+	int ret;

 	/* When driver_override is set, only bind to the matching driver */
-	if (pdev->driver_override)
-		return !strcmp(pdev->driver_override, drv->name);
+	ret = device_match_driver_override(dev, drv);
+	if (ret >= 0)
+		return ret;

 	/* Attempt an OF style match first */
 	if (of_driver_match_device(dev, drv))
···
 const struct bus_type platform_bus_type = {
 	.name		= "platform",
 	.dev_groups	= platform_dev_groups,
+	.driver_override = true,
 	.match		= platform_match,
 	.uevent		= platform_uevent,
 	.probe		= platform_probe,
drivers/block/zram/zram_drv.c (+14, -25)
···

 static int zram_writeback_complete(struct zram *zram, struct zram_wb_req *req)
 {
-	u32 size, index = req->pps->index;
-	int err, prio;
-	bool huge;
+	u32 index = req->pps->index;
+	int err;

 	err = blk_status_to_errno(req->bio.bi_status);
 	if (err) {
···
 		goto out;
 	}

-	if (zram->compressed_wb) {
-		/*
-		 * ZRAM_WB slots get freed, we need to preserve data required
-		 * for read decompression.
-		 */
-		size = get_slot_size(zram, index);
-		prio = get_slot_comp_priority(zram, index);
-		huge = test_slot_flag(zram, index, ZRAM_HUGE);
-	}
-
-	slot_free(zram, index);
-	set_slot_flag(zram, index, ZRAM_WB);
+	clear_slot_flag(zram, index, ZRAM_IDLE);
+	if (test_slot_flag(zram, index, ZRAM_HUGE))
+		atomic64_dec(&zram->stats.huge_pages);
+	atomic64_sub(get_slot_size(zram, index), &zram->stats.compr_data_size);
+	zs_free(zram->mem_pool, get_slot_handle(zram, index));
 	set_slot_handle(zram, index, req->blk_idx);
-
-	if (zram->compressed_wb) {
-		if (huge)
-			set_slot_flag(zram, index, ZRAM_HUGE);
-		set_slot_size(zram, index, size);
-		set_slot_comp_priority(zram, index, prio);
-	}
-
-	atomic64_inc(&zram->stats.pages_stored);
+	set_slot_flag(zram, index, ZRAM_WB);

 out:
 	slot_unlock(zram, index);
···
 	set_slot_comp_priority(zram, index, 0);

 	if (test_slot_flag(zram, index, ZRAM_HUGE)) {
+		/*
+		 * Writeback completion decrements ->huge_pages but keeps
+		 * ZRAM_HUGE flag for deferred decompression path.
+		 */
+		if (!test_slot_flag(zram, index, ZRAM_WB))
+			atomic64_dec(&zram->stats.huge_pages);
 		clear_slot_flag(zram, index, ZRAM_HUGE);
-		atomic64_dec(&zram->stats.huge_pages);
 	}

 	if (test_slot_flag(zram, index, ZRAM_WB)) {
drivers/bluetooth/btintel.c (+8, -3)
···

 	bt_dev_err(hdev, "Hardware error 0x%2.2x", code);

+	hci_req_sync_lock(hdev);
+
 	skb = __hci_cmd_sync(hdev, HCI_OP_RESET, 0, NULL, HCI_INIT_TIMEOUT);
 	if (IS_ERR(skb)) {
 		bt_dev_err(hdev, "Reset after hardware error failed (%ld)",
 			   PTR_ERR(skb));
-		return;
+		goto unlock;
 	}
 	kfree_skb(skb);
···
 	if (IS_ERR(skb)) {
 		bt_dev_err(hdev, "Retrieving Intel exception info failed (%ld)",
 			   PTR_ERR(skb));
-		return;
+		goto unlock;
 	}

 	if (skb->len != 13) {
 		bt_dev_err(hdev, "Exception info size mismatch");
 		kfree_skb(skb);
-		return;
+		goto unlock;
 	}

 	bt_dev_err(hdev, "Exception info %s", (char *)(skb->data + 1));

 	kfree_skb(skb);
+
+unlock:
+	hci_req_sync_unlock(hdev);
 }
 EXPORT_SYMBOL_GPL(btintel_hw_error);
···
 	if (err || !fw->data || !fw->size) {
 		bt_dev_err(lldev->hu.hdev, "request_firmware failed(errno %d) for %s",
 			   err, bts_scr_name);
+		if (!err)
+			release_firmware(fw);
 		return -EINVAL;
 	}
 	ptr = (void *)fw->data;
drivers/bus/simple-pm-bus.c (+2, -2)
···
 	 * that's not listed in simple_pm_bus_of_match. We don't want to do any
 	 * of the simple-pm-bus tasks for these devices, so return early.
 	 */
-	if (pdev->driver_override)
+	if (device_has_driver_override(&pdev->dev))
 		return 0;

 	match = of_match_device(dev->driver->of_match_table, dev);
···
 {
 	const void *data = of_device_get_match_data(&pdev->dev);

-	if (pdev->driver_override || data)
+	if (device_has_driver_override(&pdev->dev) || data)
 		return;

 	dev_dbg(&pdev->dev, "%s\n", __func__);
drivers/clk/imx/clk-scu.c (+1, -2)
···
 	if (ret)
 		goto put_device;

-	ret = driver_set_override(&pdev->dev, &pdev->driver_override,
-				  "imx-scu-clk", strlen("imx-scu-clk"));
+	ret = device_set_driver_override(&pdev->dev, "imx-scu-clk");
 	if (ret)
 		goto put_device;
drivers/cxl/Kconfig (+1)
···
 	tristate "CXL ACPI: Platform Support"
 	depends on ACPI
 	depends on ACPI_NUMA
+	depends on CXL_PMEM || !CXL_PMEM
 	default CXL_BUS
 	select ACPI_TABLE_LIB
 	select ACPI_HMAT
drivers/cxl/core/hdm.c (+9, -16)
···
 	struct cxl_hdm *cxlhdm;
 	void __iomem *hdm;
 	u32 ctrl;
-	int i;

 	if (!info)
 		return false;
···
 		return false;

 	/*
-	 * If any decoders are committed already, there should not be any
-	 * emulated DVSEC decoders.
+	 * If HDM decoders are globally enabled, do not fall back to DVSEC
+	 * range emulation. Zeroed decoder registers after region teardown
+	 * do not imply absence of HDM capability.
+	 *
+	 * Falling back to DVSEC here would treat the decoder as AUTO and
+	 * may incorrectly latch default interleave settings.
 	 */
-	for (i = 0; i < cxlhdm->decoder_count; i++) {
-		ctrl = readl(hdm + CXL_HDM_DECODER0_CTRL_OFFSET(i));
-		dev_dbg(&info->port->dev,
-			"decoder%d.%d: committed: %ld base: %#x_%.8x size: %#x_%.8x\n",
-			info->port->id, i,
-			FIELD_GET(CXL_HDM_DECODER0_CTRL_COMMITTED, ctrl),
-			readl(hdm + CXL_HDM_DECODER0_BASE_HIGH_OFFSET(i)),
-			readl(hdm + CXL_HDM_DECODER0_BASE_LOW_OFFSET(i)),
-			readl(hdm + CXL_HDM_DECODER0_SIZE_HIGH_OFFSET(i)),
-			readl(hdm + CXL_HDM_DECODER0_SIZE_LOW_OFFSET(i)));
-		if (FIELD_GET(CXL_HDM_DECODER0_CTRL_COMMITTED, ctrl))
-			return false;
-	}
+	ctrl = readl(hdm + CXL_HDM_DECODER_CTRL_OFFSET);
+	if (ctrl & CXL_HDM_DECODER_ENABLE)
+		return false;

 	return true;
 }
drivers/cxl/core/mbox.c (+1, -1)
···
 	 * Require an endpoint to be safe otherwise the driver can not
 	 * be sure that the device is unmapped.
 	 */
-	if (endpoint && cxl_num_decoders_committed(endpoint) == 0)
+	if (cxlmd->dev.driver && cxl_num_decoders_committed(endpoint) == 0)
 		return __cxl_mem_sanitize(mds, cmd);

 	return -EBUSY;
···

 #define AMDGPU_BO_LIST_MAX_PRIORITY	32u
 #define AMDGPU_BO_LIST_NUM_BUCKETS	(AMDGPU_BO_LIST_MAX_PRIORITY + 1)
+#define AMDGPU_BO_LIST_MAX_ENTRIES	(128 * 1024)

 static void amdgpu_bo_list_free_rcu(struct rcu_head *rcu)
 {
···
 	const uint32_t bo_info_size = in->bo_info_size;
 	const uint32_t bo_number = in->bo_number;
 	struct drm_amdgpu_bo_list_entry *info;
+
+	if (bo_number > AMDGPU_BO_LIST_MAX_ENTRIES)
+		return -EINVAL;

 	/* copy the handle array from userspace to a kernel buffer */
 	if (likely(info_size == bo_info_size)) {
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c (+6, -1)
···
 	}

 	/* Prepare a TLB flush fence to be attached to PTs */
-	if (!params->unlocked) {
+	/* The check for need_tlb_fence should be dropped once we
+	 * sort out the issues with KIQ/MES TLB invalidation timeouts.
+	 */
+	if (!params->unlocked && vm->need_tlb_fence) {
 		amdgpu_vm_tlb_fence_create(params->adev, vm, fence);

 		/* Makes sure no PD/PT is freed before the flush */
···
 	ttm_lru_bulk_move_init(&vm->lru_bulk_move);

 	vm->is_compute_context = false;
+	vm->need_tlb_fence = amdgpu_userq_enabled(&adev->ddev);

 	vm->use_cpu_for_update = !!(adev->vm_manager.vm_update_mode &
 				    AMDGPU_VM_USE_CPU_FOR_GFX);
···
 	dma_fence_put(vm->last_update);
 	vm->last_update = dma_fence_get_stub();
 	vm->is_compute_context = true;
+	vm->need_tlb_fence = true;

 unreserve_bo:
 	amdgpu_bo_unreserve(vm->root.bo);
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
···
 	struct ttm_lru_bulk_move lru_bulk_move;
 	/* Flag to indicate if VM is used for compute */
 	bool is_compute_context;
+	/* Flag to indicate if VM needs a TLB fence (KFD or KGD) */
+	bool need_tlb_fence;

 	/* Memory partition number, -1 means any partition */
 	int8_t mem_id;
···
 	if (!pdev)
 		return -EINVAL;

-	if (!dev->type->name) {
+	if (!dev->type || !dev->type->name) {
 		drm_dbg(&adev->ddev, "Invalid device type to add\n");
 		goto exit;
 	}
···
 	if (!pdev)
 		return -EINVAL;

-	if (!dev->type->name) {
+	if (!dev->type || !dev->type->name) {
 		drm_dbg(&adev->ddev, "Invalid device type to remove\n");
 		goto exit;
 	}
···
 		if (engine->sanitize)
 			engine->sanitize(engine);

-		engine->set_default_submission(engine);
+		if (engine->set_default_submission)
+			engine->set_default_submission(engine);
 	}
 }

-17
drivers/gpu/drm/imagination/pvr_device.c
···
 	}

 	if (pvr_dev->has_safety_events) {
-		int err;
-
-		/*
-		 * Ensure the GPU is powered on since some safety events (such
-		 * as ECC faults) can happen outside of job submissions, which
-		 * are otherwise the only time a power reference is held.
-		 */
-		err = pvr_power_get(pvr_dev);
-		if (err) {
-			drm_err_ratelimited(drm_dev,
-					    "%s: could not take power reference (%d)\n",
-					    __func__, err);
-			return ret;
-		}
-
 		while (pvr_device_safety_irq_pending(pvr_dev)) {
 			pvr_device_safety_irq_clear(pvr_dev);
 			pvr_device_handle_safety_events(pvr_dev);

 			ret = IRQ_HANDLED;
 		}
-
-		pvr_power_put(pvr_dev);
 	}

 	return ret;
+39-12
drivers/gpu/drm/imagination/pvr_power.c
···
 }

 static int
-pvr_power_fw_disable(struct pvr_device *pvr_dev, bool hard_reset)
+pvr_power_fw_disable(struct pvr_device *pvr_dev, bool hard_reset, bool rpm_suspend)
 {
-	if (!hard_reset) {
-		int err;
+	int err;

+	if (!hard_reset) {
 		cancel_delayed_work_sync(&pvr_dev->watchdog.work);

 		err = pvr_power_request_idle(pvr_dev);
···
 			return err;
 	}

-	return pvr_fw_stop(pvr_dev);
+	if (rpm_suspend) {
+		/* This also waits for late processing of GPU or firmware IRQs in other cores */
+		disable_irq(pvr_dev->irq);
+	}
+
+	err = pvr_fw_stop(pvr_dev);
+	if (err && rpm_suspend)
+		enable_irq(pvr_dev->irq);
+
+	return err;
 }

 static int
-pvr_power_fw_enable(struct pvr_device *pvr_dev)
+pvr_power_fw_enable(struct pvr_device *pvr_dev, bool rpm_resume)
 {
 	int err;

+	if (rpm_resume)
+		enable_irq(pvr_dev->irq);
+
 	err = pvr_fw_start(pvr_dev);
 	if (err)
-		return err;
+		goto out;

 	err = pvr_wait_for_fw_boot(pvr_dev);
 	if (err) {
 		drm_err(from_pvr_device(pvr_dev), "Firmware failed to boot\n");
 		pvr_fw_stop(pvr_dev);
-		return err;
+		goto out;
 	}

 	queue_delayed_work(pvr_dev->sched_wq, &pvr_dev->watchdog.work,
 			   msecs_to_jiffies(WATCHDOG_TIME_MS));

 	return 0;
+
+out:
+	if (rpm_resume)
+		disable_irq(pvr_dev->irq);
+
+	return err;
 }

 bool
···
 		return -EIO;

 	if (pvr_dev->fw_dev.booted) {
-		err = pvr_power_fw_disable(pvr_dev, false);
+		err = pvr_power_fw_disable(pvr_dev, false, true);
 		if (err)
 			goto err_drm_dev_exit;
 	}
···
 		goto err_drm_dev_exit;

 	if (pvr_dev->fw_dev.booted) {
-		err = pvr_power_fw_enable(pvr_dev);
+		err = pvr_power_fw_enable(pvr_dev, true);
 		if (err)
 			goto err_power_off;
 	}
···
 	}

 	/* Disable IRQs for the duration of the reset. */
-	disable_irq(pvr_dev->irq);
+	if (hard_reset) {
+		disable_irq(pvr_dev->irq);
+	} else {
+		/*
+		 * Soft reset is triggered as a response to a FW command to the Host and is
+		 * processed from the threaded IRQ handler. This code cannot (nor needs to)
+		 * wait for any IRQ processing to complete.
+		 */
+		disable_irq_nosync(pvr_dev->irq);
+	}

 	do {
 		if (hard_reset) {
···
 			queues_disabled = true;
 		}

-		err = pvr_power_fw_disable(pvr_dev, hard_reset);
+		err = pvr_power_fw_disable(pvr_dev, hard_reset, false);
 		if (!err) {
 			if (hard_reset) {
 				pvr_dev->fw_dev.booted = false;
···

 		pvr_fw_irq_clear(pvr_dev);

-		err = pvr_power_fw_enable(pvr_dev);
+		err = pvr_power_fw_enable(pvr_dev, false);
 	}

 	if (err && hard_reset)
···

 struct vmw_res_func;

+struct vmw_bo;
+struct vmw_bo;
+struct vmw_resource_dirty;
+
 /**
- * struct vmw-resource - base class for hardware resources
+ * struct vmw_resource - base class for hardware resources
  *
  * @kref: For refcounting.
  * @dev_priv: Pointer to the device private for this resource. Immutable.
  * @id: Device id. Protected by @dev_priv::resource_lock.
+ * @used_prio: Priority for this resource.
  * @guest_memory_size: Guest memory buffer size. Immutable.
  * @res_dirty: Resource contains data not yet in the guest memory buffer.
  * Protected by resource reserved.
···
  * pin-count greater than zero. It is not on the resource LRU lists and its
  * guest memory buffer is pinned. Hence it can't be evicted.
  * @func: Method vtable for this resource. Immutable.
- * @mob_node; Node for the MOB guest memory rbtree. Protected by
+ * @mob_node: Node for the MOB guest memory rbtree. Protected by
  * @guest_memory_bo reserved.
  * @lru_head: List head for the LRU list. Protected by @dev_priv::resource_lock.
  * @binding_head: List head for the context binding list. Protected by
  * the @dev_priv::binding_mutex
+ * @dirty: resource's dirty tracker
  * @res_free: The resource destructor.
  * @hw_destroy: Callback to destroy the resource on the device, as part of
  * resource destruction.
  */
-struct vmw_bo;
-struct vmw_bo;
-struct vmw_resource_dirty;
 struct vmw_resource {
 	struct kref kref;
 	struct vmw_private *dev_priv;
···
  * @quality_level: Quality level.
  * @autogen_filter: Filter for automatically generated mipmaps.
  * @array_size: Number of array elements for a 1D/2D texture. For cubemap
-   texture number of faces * array_size. This should be 0 for pre
-   SM4 device.
+ * texture number of faces * array_size. This should be 0 for pre
+ * SM4 device.
  * @buffer_byte_stride: Buffer byte stride.
  * @num_sizes: Size of @sizes. For GB surface this should always be 1.
  * @base_size: Surface dimension.
···
 struct vmw_res_cache_entry {
 	uint32_t handle;
 	struct vmw_resource *res;
+	/* private: */
 	void *private;
+	/* public: */
 	unsigned short valid_handle;
 	unsigned short valid;
 };

 /**
  * enum vmw_dma_map_mode - indicate how to perform TTM page dma mappings.
+ * @vmw_dma_alloc_coherent: Use TTM coherent pages
+ * @vmw_dma_map_populate: Unmap from DMA just after unpopulate
+ * @vmw_dma_map_bind: Unmap from DMA just before unbind
  */
 enum vmw_dma_map_mode {
-	vmw_dma_alloc_coherent, /* Use TTM coherent pages */
-	vmw_dma_map_populate, /* Unmap from DMA just after unpopulate */
-	vmw_dma_map_bind, /* Unmap from DMA just before unbind */
+	vmw_dma_alloc_coherent,
+	vmw_dma_map_populate,
+	vmw_dma_map_bind,
+	/* private: */
 	vmw_dma_map_max
 };
···
  * struct vmw_sg_table - Scatter/gather table for binding, with additional
  * device-specific information.
  *
+ * @mode: which page mapping mode to use
+ * @pages: Array of page pointers to the pages.
+ * @addrs: DMA addresses to the pages if coherent pages are used.
  * @sgt: Pointer to a struct sg_table with binding information
- * @num_regions: Number of regions with device-address contiguous pages
+ * @num_pages: Number of @pages
  */
 struct vmw_sg_table {
 	enum vmw_dma_map_mode mode;
···
  * than from user-space
  * @fp: If @kernel is false, points to the file of the client. Otherwise
  * NULL
+ * @filp: DRM state for this file
  * @cmd_bounce: Command bounce buffer used for command validation before
  * copying to fifo space
  * @cmd_bounce_size: Current command bounce buffer size
···
 bool vmwgfx_supported(struct vmw_private *vmw);


-/**
+/*
  * GMR utilities - vmwgfx_gmr.c
  */
···
 		    int gmr_id);
 extern void vmw_gmr_unbind(struct vmw_private *dev_priv, int gmr_id);

-/**
+/*
  * User handles
  */
 struct vmw_user_object {
···
 void vmw_user_object_unmap(struct vmw_user_object *uo);
 bool vmw_user_object_is_mapped(struct vmw_user_object *uo);

-/**
+/*
  * Resource utilities - vmwgfx_resource.c
  */
 struct vmw_user_resource_conv;
···
 	return !RB_EMPTY_NODE(&res->mob_node);
 }

-/**
+/*
  * GEM related functionality - vmwgfx_gem.c
  */
 struct vmw_bo_params;
···
 			      struct drm_file *filp);
 extern void vmw_debugfs_gem_init(struct vmw_private *vdev);

-/**
+/*
  * Misc Ioctl functionality - vmwgfx_ioctl.c
  */
···
 extern int vmw_present_readback_ioctl(struct drm_device *dev, void *data,
 				      struct drm_file *file_priv);

-/**
+/*
  * Fifo utilities - vmwgfx_fifo.c
  */
···


 /**
- * vmw_fifo_caps - Returns the capabilities of the FIFO command
+ * vmw_fifo_caps - Get the capabilities of the FIFO command
  * queue or 0 if fifo memory isn't present.
  * @dev_priv: The device private context
+ *
+ * Returns: capabilities of the FIFO command or %0 if fifo memory not present
  */
 static inline uint32_t vmw_fifo_caps(const struct vmw_private *dev_priv)
 {
···


 /**
- * vmw_is_cursor_bypass3_enabled - Returns TRUE iff Cursor Bypass 3
- * is enabled in the FIFO.
+ * vmw_is_cursor_bypass3_enabled - check Cursor Bypass 3 enabled setting
+ * in the FIFO.
  * @dev_priv: The device private context
+ *
+ * Returns: %true iff Cursor Bypass 3 is enabled in the FIFO
  */
 static inline bool
 vmw_is_cursor_bypass3_enabled(const struct vmw_private *dev_priv)
···
 	return (vmw_fifo_caps(dev_priv) & SVGA_FIFO_CAP_CURSOR_BYPASS_3) != 0;
 }

-/**
+/*
  * TTM buffer object driver - vmwgfx_ttm_buffer.c
  */
···
  *
  * @viter: Pointer to the iterator to advance.
  *
- * Returns false if past the list of pages, true otherwise.
+ * Returns: false if past the list of pages, true otherwise.
  */
 static inline bool vmw_piter_next(struct vmw_piter *viter)
 {
···
  *
  * @viter: Pointer to the iterator
  *
- * Returns the DMA address of the page pointed to by @viter.
+ * Returns: the DMA address of the page pointed to by @viter.
  */
 static inline dma_addr_t vmw_piter_dma_addr(struct vmw_piter *viter)
 {
···
  *
  * @viter: Pointer to the iterator
  *
- * Returns the DMA address of the page pointed to by @viter.
+ * Returns: the DMA address of the page pointed to by @viter.
  */
 static inline struct page *vmw_piter_page(struct vmw_piter *viter)
 {
 	return viter->pages[viter->i];
 }

-/**
+/*
  * Command submission - vmwgfx_execbuf.c
  */
···
 			       int32_t out_fence_fd);
 bool vmw_cmd_describe(const void *buf, u32 *size, char const **cmd);

-/**
+/*
  * IRQs and wating - vmwgfx_irq.c
  */
···
 bool vmw_generic_waiter_remove(struct vmw_private *dev_priv,
 			       u32 flag, int *waiter_count);

-/**
+/*
  * Kernel modesetting - vmwgfx_kms.c
  */
···
 extern void vmw_resource_unpin(struct vmw_resource *res);
 extern enum vmw_res_type vmw_res_type(const struct vmw_resource *res);

-/**
+/*
  * Overlay control - vmwgfx_overlay.c
  */
···
 int vmw_overlay_num_overlays(struct vmw_private *dev_priv);
 int vmw_overlay_num_free_overlays(struct vmw_private *dev_priv);

-/**
+/*
  * GMR Id manager
  */

 int vmw_gmrid_man_init(struct vmw_private *dev_priv, int type);
 void vmw_gmrid_man_fini(struct vmw_private *dev_priv, int type);

-/**
+/*
  * System memory manager
  */
 int vmw_sys_man_init(struct vmw_private *dev_priv);
 void vmw_sys_man_fini(struct vmw_private *dev_priv);

-/**
+/*
  * Prime - vmwgfx_prime.c
  */
···
  * @line: The current line of the blit.
  * @line_offset: Offset of the current line segment.
  * @cpp: Bytes per pixel (granularity information).
- * @memcpy: Which memcpy function to use.
+ * @do_cpy: Which memcpy function to use.
  */
 struct vmw_diff_cpy {
 	struct drm_rect rect;
···

 /**
  * VMW_DEBUG_KMS - Debug output for kernel mode-setting
+ * @fmt: format string for the args
  *
  * This macro is for debugging vmwgfx mode-setting code.
  */
 #define VMW_DEBUG_KMS(fmt, ...)                                               \
 	DRM_DEBUG_DRIVER(fmt, ##__VA_ARGS__)

-/**
+/*
  * Inline helper functions
  */
···

 /**
  * vmw_fifo_mem_read - Perform a MMIO read from the fifo memory
- *
+ * @vmw: The device private structure
  * @fifo_reg: The fifo register to read from
  *
  * This function is intended to be equivalent to ioread32() on
  * memremap'd memory, but without byteswapping.
+ *
+ * Returns: the value read
  */
 static inline u32 vmw_fifo_mem_read(struct vmw_private *vmw, uint32 fifo_reg)
 {
···

 /**
  * vmw_fifo_mem_write - Perform a MMIO write to volatile memory
- *
- * @addr: The fifo register to write to
+ * @vmw: The device private structure
+ * @fifo_reg: The fifo register to write to
+ * @value: The value to write
  *
  * This function is intended to be equivalent to iowrite32 on
  * memremap'd memory, but without byteswapping.
+2-1
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
···
 		ret = vmw_bo_dirty_add(bo);
 		if (!ret && surface && surface->res.func->dirty_alloc) {
 			surface->res.coherent = true;
-			ret = surface->res.func->dirty_alloc(&surface->res);
+			if (surface->res.dirty == NULL)
+				ret = surface->res.func->dirty_alloc(&surface->res);
 		}
 		ttm_bo_unreserve(&bo->tbo);
 	}
···
 	/** @size: Total usable size of this GGTT */
 	u64 size;

-#define XE_GGTT_FLAGS_64K BIT(0)
+#define XE_GGTT_FLAGS_64K BIT(0)
+#define XE_GGTT_FLAGS_ONLINE BIT(1)
 	/**
 	 * @flags: Flags for this GGTT
 	 * Acceptable flags:
 	 * - %XE_GGTT_FLAGS_64K - if PTE size is 64K. Otherwise, regular is 4K.
+	 * - %XE_GGTT_FLAGS_ONLINE - is GGTT online, protected by ggtt->lock
+	 * after init
 	 */
 	unsigned int flags;
 	/** @scratch: Internal object allocation used as a scratch page */
···

 #define XE_GUC_EXEC_QUEUE_CGP_CONTEXT_ERROR_LEN 6

+static int guc_submit_reset_prepare(struct xe_guc *guc);
+
 static struct xe_guc *
 exec_queue_to_guc(struct xe_exec_queue *q)
 {
···
 			     EXEC_QUEUE_STATE_BANNED));
 }

-static void guc_submit_fini(struct drm_device *drm, void *arg)
+static void guc_submit_sw_fini(struct drm_device *drm, void *arg)
 {
 	struct xe_guc *guc = arg;
 	struct xe_device *xe = guc_to_xe(guc);
···
 	xe_gt_assert(gt, ret);

 	xa_destroy(&guc->submission_state.exec_queue_lookup);
+}
+
+static void guc_submit_fini(void *arg)
+{
+	struct xe_guc *guc = arg;
+
+	/* Forcefully kill any remaining exec queues */
+	xe_guc_ct_stop(&guc->ct);
+	guc_submit_reset_prepare(guc);
+	xe_guc_softreset(guc);
+	xe_guc_submit_stop(guc);
+	xe_uc_fw_sanitize(&guc->fw);
+	xe_guc_submit_pause_abort(guc);
 }

 static void guc_submit_wedged_fini(void *arg)
···

 	guc->submission_state.initialized = true;

-	return drmm_add_action_or_reset(&xe->drm, guc_submit_fini, guc);
+	err = drmm_add_action_or_reset(&xe->drm, guc_submit_sw_fini, guc);
+	if (err)
+		return err;
+
+	return devm_add_action_or_reset(xe->drm.dev, guc_submit_fini, guc);
 }

 /*
···
  */
 void xe_guc_submit_wedge(struct xe_guc *guc)
 {
+	struct xe_device *xe = guc_to_xe(guc);
 	struct xe_gt *gt = guc_to_gt(guc);
 	struct xe_exec_queue *q;
 	unsigned long index;
···
 	if (!guc->submission_state.initialized)
 		return;

-	err = devm_add_action_or_reset(guc_to_xe(guc)->drm.dev,
-				       guc_submit_wedged_fini, guc);
-	if (err) {
-		xe_gt_err(gt, "Failed to register clean-up in wedged.mode=%s; "
-			  "Although device is wedged.\n",
-			  xe_wedged_mode_to_string(XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET));
-		return;
-	}
+	if (xe->wedged.mode == 2) {
+		err = devm_add_action_or_reset(guc_to_xe(guc)->drm.dev,
+					       guc_submit_wedged_fini, guc);
+		if (err) {
+			xe_gt_err(gt, "Failed to register clean-up on wedged.mode=2; "
+				  "Although device is wedged.\n");
+			return;
+		}

-	mutex_lock(&guc->submission_state.lock);
-	xa_for_each(&guc->submission_state.exec_queue_lookup, index, q)
-		if (xe_exec_queue_get_unless_zero(q))
-			set_exec_queue_wedged(q);
-	mutex_unlock(&guc->submission_state.lock);
+		mutex_lock(&guc->submission_state.lock);
+		xa_for_each(&guc->submission_state.exec_queue_lookup, index, q)
+			if (xe_exec_queue_get_unless_zero(q))
+				set_exec_queue_wedged(q);
+		mutex_unlock(&guc->submission_state.lock);
+	} else {
+		/* Forcefully kill any remaining exec queues, signal fences */
+		guc_submit_reset_prepare(guc);
+		xe_guc_submit_stop(guc);
+		xe_guc_softreset(guc);
+		xe_uc_fw_sanitize(&guc->fw);
+		xe_guc_submit_pause_abort(guc);
+	}
 }

 static bool guc_submit_hint_wedged(struct xe_guc *guc)
···
 static void guc_exec_queue_stop(struct xe_guc *guc, struct xe_exec_queue *q)
 {
 	struct xe_gpu_scheduler *sched = &q->guc->sched;
+	bool do_destroy = false;

 	/* Stop scheduling + flush any DRM scheduler operations */
 	xe_sched_submission_stop(sched);
···
 	/* Clean up lost G2H + reset engine state */
 	if (exec_queue_registered(q)) {
 		if (exec_queue_destroyed(q))
-			__guc_exec_queue_destroy(guc, q);
+			do_destroy = true;
 	}
 	if (q->guc->suspend_pending) {
 		set_exec_queue_suspended(q);
···
 			xe_guc_exec_queue_trigger_cleanup(q);
 		}
 	}
+
+	if (do_destroy)
+		__guc_exec_queue_destroy(guc, q);
 }

-int xe_guc_submit_reset_prepare(struct xe_guc *guc)
+static int guc_submit_reset_prepare(struct xe_guc *guc)
 {
 	int ret;
-
-	if (xe_gt_WARN_ON(guc_to_gt(guc), vf_recovery(guc)))
-		return 0;
-
-	if (!guc->submission_state.initialized)
-		return 0;

 	/*
 	 * Using an atomic here rather than submission_state.lock as this
···
 	wake_up_all(&guc->ct.wq);

 	return ret;
+}
+
+int xe_guc_submit_reset_prepare(struct xe_guc *guc)
+{
+	if (xe_gt_WARN_ON(guc_to_gt(guc), vf_recovery(guc)))
+		return 0;
+
+	if (!guc->submission_state.initialized)
+		return 0;
+
+	return guc_submit_reset_prepare(guc);
 }

 void xe_guc_submit_reset_wait(struct xe_guc *guc)
···
 			continue;

 		xe_sched_submission_start(sched);
-		if (exec_queue_killed_or_banned_or_wedged(q))
-			xe_guc_exec_queue_trigger_cleanup(q);
+		guc_exec_queue_kill(q);
 	}
 	mutex_unlock(&guc->submission_state.lock);
 }
+2-2
drivers/gpu/drm/xe/xe_lrc.c
···
  * @lrc: Pointer to the lrc.
  *
  * Return latest ctx timestamp. With support for active contexts, the
- * calculation may bb slightly racy, so follow a read-again logic to ensure that
+ * calculation may be slightly racy, so follow a read-again logic to ensure that
  * the context is still active before returning the right timestamp.
  *
  * Returns: New ctx timestamp value
  */
 u64 xe_lrc_timestamp(struct xe_lrc *lrc)
 {
-	u64 lrc_ts, reg_ts, new_ts;
+	u64 lrc_ts, reg_ts, new_ts = lrc->ctx_timestamp;
 	u32 engine_id;

 	lrc_ts = xe_lrc_ctx_timestamp(lrc);
+5-2
drivers/gpu/drm/xe/xe_oa.c
···
 	size_t offset = 0;
 	int ret;

-	/* Can't read from disabled streams */
-	if (!stream->enabled || !stream->sample)
+	if (!stream->sample)
 		return -EINVAL;

 	if (!(file->f_flags & O_NONBLOCK)) {
···

 	if (stream->sample)
 		hrtimer_cancel(&stream->poll_check_timer);
+
+	/* Update stream->oa_buffer.tail to allow any final reports to be read */
+	if (xe_oa_buffer_check_unlocked(stream))
+		wake_up(&stream->poll_wq);
 }

 static int xe_oa_enable_preempt_timeslice(struct xe_oa_stream *stream)
+29-9
drivers/gpu/drm/xe/xe_pt.c
···
 	XE_WARN_ON(!level);
 	/* Check for leaf node */
 	if (xe_walk->prl && xe_page_reclaim_list_valid(xe_walk->prl) &&
-	    (!xe_child->base.children || !xe_child->base.children[first])) {
+	    xe_child->level <= MAX_HUGEPTE_LEVEL) {
 		struct iosys_map *leaf_map = &xe_child->bo->vmap;
 		pgoff_t count = xe_pt_num_entries(addr, next, xe_child->level, walk);

 		for (pgoff_t i = 0; i < count; i++) {
-			u64 pte = xe_map_rd(xe, leaf_map, (first + i) * sizeof(u64), u64);
+			u64 pte;
 			int ret;
+
+			/*
+			 * If not a leaf pt, skip unless non-leaf pt is interleaved between
+			 * leaf ptes which causes the page walk to skip over the child leaves
+			 */
+			if (xe_child->base.children && xe_child->base.children[first + i]) {
+				u64 pt_size = 1ULL << walk->shifts[xe_child->level];
+				bool edge_pt = (i == 0 && !IS_ALIGNED(addr, pt_size)) ||
+					(i == count - 1 && !IS_ALIGNED(next, pt_size));
+
+				if (!edge_pt) {
+					xe_page_reclaim_list_abort(xe_walk->tile->primary_gt,
+								   xe_walk->prl,
+								   "PT is skipped by walk at level=%u offset=%lu",
+								   xe_child->level, first + i);
+					break;
+				}
+				continue;
+			}
+
+			pte = xe_map_rd(xe, leaf_map, (first + i) * sizeof(u64), u64);

 			/*
 			 * In rare scenarios, pte may not be written yet due to racy conditions.
···
 			}

 			/* Ensure it is a defined page */
-			xe_tile_assert(xe_walk->tile,
-				       xe_child->level == 0 ||
-				       (pte & (XE_PTE_PS64 | XE_PDE_PS_2M | XE_PDPE_PS_1G)));
+			xe_tile_assert(xe_walk->tile, xe_child->level == 0 ||
+				       (pte & (XE_PDE_PS_2M | XE_PDPE_PS_1G)));

 			/* An entry should be added for 64KB but contigious 4K have XE_PTE_PS64 */
 			if (pte & XE_PTE_PS64)
···
 		killed = xe_pt_check_kill(addr, next, level - 1, xe_child, action, walk);

 		/*
-		 * Verify PRL is active and if entry is not a leaf pte (base.children conditions),
-		 * there is a potential need to invalidate the PRL if any PTE (num_live) are dropped.
+		 * Verify if any PTE are potentially dropped at non-leaf levels, either from being
+		 * killed or the page walk covers the region.
 		 */
-		if (xe_walk->prl && level > 1 && xe_child->num_live &&
-		    xe_child->base.children && xe_child->base.children[first]) {
+		if (xe_walk->prl && xe_page_reclaim_list_valid(xe_walk->prl) &&
+		    xe_child->level > MAX_HUGEPTE_LEVEL && xe_child->num_live) {
 			bool covered = xe_pt_covers(addr, next, xe_child->level, &xe_walk->base);

 			/*
+4-2
drivers/hv/mshv_regions.c
···
 		ret = pin_user_pages_fast(userspace_addr, nr_pages,
 					  FOLL_WRITE | FOLL_LONGTERM,
 					  pages);
-		if (ret < 0)
+		if (ret != nr_pages)
 			goto release_pages;
 	}

 	return 0;

release_pages:
+	if (ret > 0)
+		done_count += ret;
 	mshv_region_invalidate_pages(region, 0, done_count);
-	return ret;
+	return ret < 0 ? ret : -ENOMEM;
 }

 static int mshv_region_chunk_unmap(struct mshv_mem_region *region,
···
 	HVCALL_SET_VP_REGISTERS,
 	HVCALL_TRANSLATE_VIRTUAL_ADDRESS,
 	HVCALL_CLEAR_VIRTUAL_INTERRUPT,
-	HVCALL_SCRUB_PARTITION,
 	HVCALL_REGISTER_INTERCEPT_RESULT,
 	HVCALL_ASSERT_VIRTUAL_INTERRUPT,
 	HVCALL_GET_GPA_PAGES_ACCESS_STATES,
···
  */
 static long
 mshv_map_user_memory(struct mshv_partition *partition,
-		     struct mshv_user_mem_region mem)
+		     struct mshv_user_mem_region *mem)
 {
 	struct mshv_mem_region *region;
 	struct vm_area_struct *vma;
···
 	ulong mmio_pfn;
 	long ret;

-	if (mem.flags & BIT(MSHV_SET_MEM_BIT_UNMAP) ||
-	    !access_ok((const void __user *)mem.userspace_addr, mem.size))
+	if (mem->flags & BIT(MSHV_SET_MEM_BIT_UNMAP) ||
+	    !access_ok((const void __user *)mem->userspace_addr, mem->size))
 		return -EINVAL;

 	mmap_read_lock(current->mm);
-	vma = vma_lookup(current->mm, mem.userspace_addr);
+	vma = vma_lookup(current->mm, mem->userspace_addr);
 	is_mmio = vma ? !!(vma->vm_flags & (VM_IO | VM_PFNMAP)) : 0;
 	mmio_pfn = is_mmio ? vma->vm_pgoff : 0;
 	mmap_read_unlock(current->mm);
···
 	if (!vma)
 		return -EINVAL;

-	ret = mshv_partition_create_region(partition, &mem, &region,
+	ret = mshv_partition_create_region(partition, mem, &region,
 					   is_mmio);
 	if (ret)
 		return ret;
···
 	return 0;

errout:
-	vfree(region);
+	mshv_region_put(region);
 	return ret;
 }

 /* Called for unmapping both the guest ram and the mmio space */
 static long
 mshv_unmap_user_memory(struct mshv_partition *partition,
-		       struct mshv_user_mem_region mem)
+		       struct mshv_user_mem_region *mem)
 {
 	struct mshv_mem_region *region;

-	if (!(mem.flags & BIT(MSHV_SET_MEM_BIT_UNMAP)))
+	if (!(mem->flags & BIT(MSHV_SET_MEM_BIT_UNMAP)))
 		return -EINVAL;

 	spin_lock(&partition->pt_mem_regions_lock);

-	region = mshv_partition_region_by_gfn(partition, mem.guest_pfn);
+	region = mshv_partition_region_by_gfn(partition, mem->guest_pfn);
 	if (!region) {
 		spin_unlock(&partition->pt_mem_regions_lock);
 		return -ENOENT;
 	}

 	/* Paranoia check */
-	if (region->start_uaddr != mem.userspace_addr ||
-	    region->start_gfn != mem.guest_pfn ||
-	    region->nr_pages != HVPFN_DOWN(mem.size)) {
+	if (region->start_uaddr != mem->userspace_addr ||
+	    region->start_gfn != mem->guest_pfn ||
+	    region->nr_pages != HVPFN_DOWN(mem->size)) {
 		spin_unlock(&partition->pt_mem_regions_lock);
 		return -EINVAL;
 	}
···
 		return -EINVAL;

 	if (mem.flags & BIT(MSHV_SET_MEM_BIT_UNMAP))
-		return mshv_unmap_user_memory(partition, mem);
+		return mshv_unmap_user_memory(partition, &mem);

-	return mshv_map_user_memory(partition, mem);
+	return mshv_map_user_memory(partition, &mem);
 }

 static long
···
 	return 0;
 }

-static int mshv_cpuhp_online;
 static int mshv_root_sched_online;

 static const char *scheduler_type_to_string(enum hv_scheduler_type type)
···
 	free_percpu(root_scheduler_output);
 }

-static int mshv_reboot_notify(struct notifier_block *nb,
-			      unsigned long code, void *unused)
-{
-	cpuhp_remove_state(mshv_cpuhp_online);
-	return 0;
-}
-
-struct notifier_block mshv_reboot_nb = {
-	.notifier_call = mshv_reboot_notify,
-};
-
-static void mshv_root_partition_exit(void)
-{
-	unregister_reboot_notifier(&mshv_reboot_nb);
-}
-
-static int __init mshv_root_partition_init(struct device *dev)
-{
-	return register_reboot_notifier(&mshv_reboot_nb);
-}
-
 static int __init mshv_init_vmm_caps(struct device *dev)
 {
 	int ret;
···
 			 MSHV_HV_MAX_VERSION);
 	}

-	mshv_root.synic_pages = alloc_percpu(struct hv_synic_pages);
-	if (!mshv_root.synic_pages) {
-		dev_err(dev, "Failed to allocate percpu synic page\n");
-		ret = -ENOMEM;
+	ret = mshv_synic_init(dev);
+	if (ret)
 		goto device_deregister;
-	}
-
-	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mshv_synic",
-				mshv_synic_init,
-				mshv_synic_cleanup);
-	if (ret < 0) {
-		dev_err(dev, "Failed to setup cpu hotplug state: %i\n", ret);
-		goto free_synic_pages;
-	}
-
-	mshv_cpuhp_online = ret;

 	ret = mshv_init_vmm_caps(dev);
 	if (ret)
-		goto remove_cpu_state;
+		goto synic_cleanup;

 	ret = mshv_retrieve_scheduler_type(dev);
 	if (ret)
-		goto remove_cpu_state;
-
-	if (hv_root_partition())
-		ret = mshv_root_partition_init(dev);
-	if (ret)
-		goto remove_cpu_state;
+		goto synic_cleanup;

 	ret = root_scheduler_init(dev);
 	if (ret)
-		goto exit_partition;
+		goto synic_cleanup;

 	ret = mshv_debugfs_init();
 	if (ret)
···
 	mshv_debugfs_exit();
deinit_root_scheduler:
 	root_scheduler_deinit();
-exit_partition:
-	if (hv_root_partition())
-		mshv_root_partition_exit();
-remove_cpu_state:
-	cpuhp_remove_state(mshv_cpuhp_online);
-free_synic_pages:
-	free_percpu(mshv_root.synic_pages);
+synic_cleanup:
+	mshv_synic_exit();
device_deregister:
 	misc_deregister(&mshv_dev);
 	return ret;
···
 	misc_deregister(&mshv_dev);
 	mshv_irqfd_wq_cleanup();
 	root_scheduler_deinit();
-	if (hv_root_partition())
-		mshv_root_partition_exit();
-	cpuhp_remove_state(mshv_cpuhp_online);
-	free_percpu(mshv_root.synic_pages);
+	mshv_synic_exit();
 }

 module_init(mshv_parent_partition_init);
+173-15
drivers/hv/mshv_synic.c
···
 #include <linux/kernel.h>
 #include <linux/slab.h>
 #include <linux/mm.h>
+#include <linux/interrupt.h>
 #include <linux/io.h>
 #include <linux/random.h>
+#include <linux/cpuhotplug.h>
+#include <linux/reboot.h>
 #include <asm/mshyperv.h>
+#include <linux/acpi.h>

 #include "mshv_eventfd.h"
 #include "mshv.h"
+
+static int synic_cpuhp_online;
+static struct hv_synic_pages __percpu *synic_pages;
+static int mshv_sint_vector = -1; /* hwirq for the SynIC SINTs */
+static int mshv_sint_irq = -1; /* Linux IRQ for mshv_sint_vector */

 static u32 synic_event_ring_get_queued_port(u32 sint_index)
 {
···
	u32 message;
	u8 tail;

-	spages = this_cpu_ptr(mshv_root.synic_pages);
+	spages = this_cpu_ptr(synic_pages);
	event_ring_page = &spages->synic_event_ring_page;
	synic_eventring_tail = (u8 **)this_cpu_ptr(hv_synic_eventring_tail);
···
 void mshv_isr(void)
 {
-	struct hv_synic_pages *spages = this_cpu_ptr(mshv_root.synic_pages);
+	struct hv_synic_pages *spages = this_cpu_ptr(synic_pages);
	struct hv_message_page **msg_page = &spages->hyp_synic_message_page;
	struct hv_message *msg;
	bool handled;
···
	if (msg->header.message_flags.msg_pending)
		hv_set_non_nested_msr(HV_MSR_EOM, 0);

-#ifdef HYPERVISOR_CALLBACK_VECTOR
-		add_interrupt_randomness(HYPERVISOR_CALLBACK_VECTOR);
-#endif
+		add_interrupt_randomness(mshv_sint_vector);
	} else {
		pr_warn_once("%s: unknown message type 0x%x\n", __func__,
			     msg->header.message_type);
	}
 }

-int mshv_synic_init(unsigned int cpu)
+static int mshv_synic_cpu_init(unsigned int cpu)
 {
	union hv_synic_simp simp;
	union hv_synic_siefp siefp;
	union hv_synic_sirbp sirbp;
-#ifdef HYPERVISOR_CALLBACK_VECTOR
	union hv_synic_sint sint;
-#endif
	union hv_synic_scontrol sctrl;
-	struct hv_synic_pages *spages = this_cpu_ptr(mshv_root.synic_pages);
+	struct hv_synic_pages *spages = this_cpu_ptr(synic_pages);
	struct hv_message_page **msg_page = &spages->hyp_synic_message_page;
	struct hv_synic_event_flags_page **event_flags_page =
		&spages->synic_event_flags_page;
···
	hv_set_non_nested_msr(HV_MSR_SIRBP, sirbp.as_uint64);

-#ifdef HYPERVISOR_CALLBACK_VECTOR
+	if (mshv_sint_irq != -1)
+		enable_percpu_irq(mshv_sint_irq, 0);
+
	/* Enable intercepts */
	sint.as_uint64 = 0;
-	sint.vector = HYPERVISOR_CALLBACK_VECTOR;
+	sint.vector = mshv_sint_vector;
	sint.masked = false;
	sint.auto_eoi = hv_recommend_using_aeoi();
	hv_set_non_nested_msr(HV_MSR_SINT0 + HV_SYNIC_INTERCEPTION_SINT_INDEX,
···
	/* Doorbell SINT */
	sint.as_uint64 = 0;
-	sint.vector = HYPERVISOR_CALLBACK_VECTOR;
+	sint.vector = mshv_sint_vector;
	sint.masked = false;
	sint.as_intercept = 1;
	sint.auto_eoi = hv_recommend_using_aeoi();
	hv_set_non_nested_msr(HV_MSR_SINT0 + HV_SYNIC_DOORBELL_SINT_INDEX,
			      sint.as_uint64);
-#endif

	/* Enable global synic bit */
	sctrl.as_uint64 = hv_get_non_nested_msr(HV_MSR_SCONTROL);
···
	return -EFAULT;
 }

-int mshv_synic_cleanup(unsigned int cpu)
+static int mshv_synic_cpu_exit(unsigned int cpu)
 {
	union hv_synic_sint sint;
	union hv_synic_simp simp;
	union hv_synic_siefp siefp;
	union hv_synic_sirbp sirbp;
	union hv_synic_scontrol sctrl;
-	struct hv_synic_pages *spages = this_cpu_ptr(mshv_root.synic_pages);
+	struct hv_synic_pages *spages = this_cpu_ptr(synic_pages);
	struct hv_message_page **msg_page = &spages->hyp_synic_message_page;
	struct hv_synic_event_flags_page **event_flags_page =
		&spages->synic_event_flags_page;
···
	sint.masked = true;
	hv_set_non_nested_msr(HV_MSR_SINT0 + HV_SYNIC_DOORBELL_SINT_INDEX,
			      sint.as_uint64);
+
+	if (mshv_sint_irq != -1)
+		disable_percpu_irq(mshv_sint_irq);

	/* Disable Synic's event ring page */
	sirbp.as_uint64 = hv_get_non_nested_msr(HV_MSR_SIRBP);
···
	hv_call_delete_port(hv_current_partition_id, port_id);

	mshv_portid_free(doorbell_portid);
+}
+
+static int mshv_synic_reboot_notify(struct notifier_block *nb,
+				    unsigned long code, void *unused)
+{
+	if (!hv_root_partition())
+		return 0;
+
+	cpuhp_remove_state(synic_cpuhp_online);
+	return 0;
+}
+
+static struct notifier_block mshv_synic_reboot_nb = {
+	.notifier_call = mshv_synic_reboot_notify,
+};
+
+#ifndef HYPERVISOR_CALLBACK_VECTOR
+static DEFINE_PER_CPU(long, mshv_evt);
+
+static irqreturn_t mshv_percpu_isr(int irq, void *dev_id)
+{
+	mshv_isr();
+	return IRQ_HANDLED;
+}
+
+#ifdef CONFIG_ACPI
+static int __init mshv_acpi_setup_sint_irq(void)
+{
+	return acpi_register_gsi(NULL, mshv_sint_vector, ACPI_EDGE_SENSITIVE,
+				 ACPI_ACTIVE_HIGH);
+}
+
+static void mshv_acpi_cleanup_sint_irq(void)
+{
+	acpi_unregister_gsi(mshv_sint_vector);
+}
+#else
+static int __init mshv_acpi_setup_sint_irq(void)
+{
+	return -ENODEV;
+}
+
+static void mshv_acpi_cleanup_sint_irq(void)
+{
+}
+#endif
+
+static int __init mshv_sint_vector_setup(void)
+{
+	int ret;
+	struct hv_register_assoc reg = {
+		.name = HV_ARM64_REGISTER_SINT_RESERVED_INTERRUPT_ID,
+	};
+	union hv_input_vtl input_vtl = { 0 };
+
+	if (acpi_disabled)
+		return -ENODEV;
+
+	ret = hv_call_get_vp_registers(HV_VP_INDEX_SELF, HV_PARTITION_ID_SELF,
+				       1, input_vtl, &reg);
+	if (ret || !reg.value.reg64)
+		return -ENODEV;
+
+	mshv_sint_vector = reg.value.reg64;
+	ret = mshv_acpi_setup_sint_irq();
+	if (ret < 0) {
+		pr_err("Failed to setup IRQ for MSHV SINT vector %d: %d\n",
+		       mshv_sint_vector, ret);
+		goto out_fail;
+	}
+
+	mshv_sint_irq = ret;
+
+	ret = request_percpu_irq(mshv_sint_irq, mshv_percpu_isr, "MSHV",
+				 &mshv_evt);
+	if (ret)
+		goto out_unregister;
+
+	return 0;
+
+out_unregister:
+	mshv_acpi_cleanup_sint_irq();
+out_fail:
+	return ret;
+}
+
+static void mshv_sint_vector_cleanup(void)
+{
+	free_percpu_irq(mshv_sint_irq, &mshv_evt);
+	mshv_acpi_cleanup_sint_irq();
+}
+#else /* !HYPERVISOR_CALLBACK_VECTOR */
+static int __init mshv_sint_vector_setup(void)
+{
+	mshv_sint_vector = HYPERVISOR_CALLBACK_VECTOR;
+	return 0;
+}
+
+static void mshv_sint_vector_cleanup(void)
+{
+}
+#endif /* HYPERVISOR_CALLBACK_VECTOR */
+
+int __init mshv_synic_init(struct device *dev)
+{
+	int ret = 0;
+
+	ret = mshv_sint_vector_setup();
+	if (ret)
+		return ret;
+
+	synic_pages = alloc_percpu(struct hv_synic_pages);
+	if (!synic_pages) {
+		dev_err(dev, "Failed to allocate percpu synic page\n");
+		ret = -ENOMEM;
+		goto sint_vector_cleanup;
+	}
+
+	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mshv_synic",
+				mshv_synic_cpu_init,
+				mshv_synic_cpu_exit);
+	if (ret < 0) {
+		dev_err(dev, "Failed to setup cpu hotplug state: %i\n", ret);
+		goto free_synic_pages;
+	}
+
+	synic_cpuhp_online = ret;
+
+	ret = register_reboot_notifier(&mshv_synic_reboot_nb);
+	if (ret)
+		goto remove_cpuhp_state;
+
+	return 0;
+
+remove_cpuhp_state:
+	cpuhp_remove_state(synic_cpuhp_online);
+free_synic_pages:
+	free_percpu(synic_pages);
+sint_vector_cleanup:
+	mshv_sint_vector_cleanup();
+	return ret;
+}
+
+void mshv_synic_exit(void)
+{
+	unregister_reboot_notifier(&mshv_synic_reboot_nb);
+	cpuhp_remove_state(synic_cpuhp_online);
+	free_percpu(synic_pages);
+	mshv_sint_vector_cleanup();
 }
drivers/hwmon/axi-fan-control.c (+1 -1)
···
	ret = devm_request_threaded_irq(&pdev->dev, ctl->irq, NULL,
					axi_fan_control_irq_handler,
					IRQF_ONESHOT | IRQF_TRIGGER_HIGH,
-					pdev->driver_override, ctl);
+					NULL, ctl);
	if (ret)
		return dev_err_probe(&pdev->dev, ret,
				     "failed to request an irq\n");
drivers/hwmon/max6639.c (+5 -5)
···
 static int max6639_set_ppr(struct max6639_data *data, int channel, u8 ppr)
 {
	/* Decrement the PPR value and shift left by 6 to match the register format */
-	return regmap_write(data->regmap, MAX6639_REG_FAN_PPR(channel), ppr-- << 6);
+	return regmap_write(data->regmap, MAX6639_REG_FAN_PPR(channel), --ppr << 6);
 }

 static int max6639_write_fan(struct device *dev, u32 attr, int channel,
···
 {
	struct device *dev = &client->dev;
-	u32 i;
-	int err, val;
+	u32 i, val;
+	int err;

	err = of_property_read_u32(child, "reg", &i);
	if (err) {
···
	err = of_property_read_u32(child, "pulses-per-revolution", &val);
	if (!err) {
-		if (val < 1 || val > 5) {
-			dev_err(dev, "invalid pulses-per-revolution %d of %pOFn\n", val, child);
+		if (val < 1 || val > 4) {
+			dev_err(dev, "invalid pulses-per-revolution %u of %pOFn\n", val, child);
			return -EINVAL;
		}
		data->ppr[i] = val;
drivers/hwmon/pmbus/hac300s.c (+2)
···
	case PMBUS_MFR_VOUT_MIN:
	case PMBUS_READ_VOUT:
		rv = pmbus_read_word_data(client, page, phase, reg);
+		if (rv < 0)
+			return rv;
		return FIELD_GET(LINEAR11_MANTISSA_MASK, rv);
	default:
		return -ENODATA;
drivers/hwmon/pmbus/ina233.c (+2)
···
	switch (reg) {
	case PMBUS_VIRT_READ_VMON:
		ret = pmbus_read_word_data(client, 0, 0xff, MFR_READ_VSHUNT);
+		if (ret < 0)
+			return ret;

		/* Adjust returned value to match VIN coefficients */
		/* VIN: 1.25 mV VSHUNT: 2.5 uV LSB */
···
 {
	const struct pmbus_driver_info *info = pmbus_get_driver_info(client);
	struct mp2869_data *data = to_mp2869_data(info);
-	int ret;
+	int ret, mfr;

	switch (reg) {
	case PMBUS_VOUT_MODE:
···
		if (ret < 0)
			return ret;

+		mfr = pmbus_read_byte_data(client, page,
+					   PMBUS_STATUS_MFR_SPECIFIC);
+		if (mfr < 0)
+			return mfr;
+
		ret = (ret & ~GENMASK(2, 2)) |
		      FIELD_PREP(GENMASK(2, 2),
-				 FIELD_GET(GENMASK(1, 1),
-					   pmbus_read_byte_data(client, page,
-								PMBUS_STATUS_MFR_SPECIFIC)));
+				 FIELD_GET(GENMASK(1, 1), mfr));
		break;
	case PMBUS_STATUS_TEMPERATURE:
		/*
···
		if (ret < 0)
			return ret;

+		mfr = pmbus_read_byte_data(client, page,
+					   PMBUS_STATUS_MFR_SPECIFIC);
+		if (mfr < 0)
+			return mfr;
+
		ret = (ret & ~GENMASK(7, 6)) |
		      FIELD_PREP(GENMASK(6, 6),
-				 FIELD_GET(GENMASK(1, 1),
-					   pmbus_read_byte_data(client, page,
-								PMBUS_STATUS_MFR_SPECIFIC))) |
+				 FIELD_GET(GENMASK(1, 1), mfr)) |
		      FIELD_PREP(GENMASK(7, 7),
-				 FIELD_GET(GENMASK(1, 1),
-					   pmbus_read_byte_data(client, page,
-								PMBUS_STATUS_MFR_SPECIFIC)));
+				 FIELD_GET(GENMASK(1, 1), mfr));
		break;
	default:
		ret = -ENODATA;
···
 {
	const struct pmbus_driver_info *info = pmbus_get_driver_info(client);
	struct mp2869_data *data = to_mp2869_data(info);
-	int ret;
+	int ret, mfr;

	switch (reg) {
	case PMBUS_STATUS_WORD:
···
		if (ret < 0)
			return ret;

+		mfr = pmbus_read_byte_data(client, page,
+					   PMBUS_STATUS_MFR_SPECIFIC);
+		if (mfr < 0)
+			return mfr;
+
		ret = (ret & ~GENMASK(2, 2)) |
		      FIELD_PREP(GENMASK(2, 2),
-				 FIELD_GET(GENMASK(1, 1),
-					   pmbus_read_byte_data(client, page,
-								PMBUS_STATUS_MFR_SPECIFIC)));
+				 FIELD_GET(GENMASK(1, 1), mfr));
		break;
	case PMBUS_READ_VIN:
		/*
drivers/hwmon/pmbus/mp2975.c (+2)
···
	case PMBUS_STATUS_WORD:
		/* MP2973 & MP2971 return PGOOD instead of PB_STATUS_POWER_GOOD_N. */
		ret = pmbus_read_word_data(client, page, phase, reg);
+		if (ret < 0)
+			return ret;
		ret ^= PB_STATUS_POWER_GOOD_N;
		break;
	case PMBUS_OT_FAULT_LIMIT:
drivers/i2c/busses/Kconfig (+2)
···
	tristate "NVIDIA Tegra internal I2C controller"
	depends on ARCH_TEGRA || (COMPILE_TEST && (ARC || ARM || ARM64 || M68K || RISCV || SUPERH || SPARC))
	# COMPILE_TEST needs architectures with readsX()/writesX() primitives
+	depends on PINCTRL
+	# ARCH_TEGRA implies PINCTRL, but the COMPILE_TEST side doesn't.
	help
	  If you say yes to this option, support will be included for the
	  I2C controller embedded in NVIDIA Tegra SOCs
drivers/i2c/busses/i2c-cp2615.c (+3)
···
	if (!adap)
		return -ENOMEM;

+	if (!usbdev->serial)
+		return -EINVAL;
+
	strscpy(adap->name, usbdev->serial, sizeof(adap->name));
	adap->owner = THIS_MODULE;
	adap->dev.parent = &usbif->dev;
drivers/i2c/busses/i2c-fsi.c (+1)
···
	rc = i2c_add_adapter(&port->adapter);
	if (rc < 0) {
		dev_err(dev, "Failed to register adapter: %d\n", rc);
+		of_node_put(np);
		kfree(port);
		continue;
	}
drivers/i2c/busses/i2c-pxa.c (+16 -1)
···
	struct pinctrl *pinctrl;
	struct pinctrl_state *pinctrl_default;
	struct pinctrl_state *pinctrl_recovery;
+	bool reset_before_xfer;
 };

 #define _IBMR(i2c) ((i2c)->reg_ibmr)
···
 {
	struct pxa_i2c *i2c = adap->algo_data;

+	if (i2c->reset_before_xfer) {
+		i2c_pxa_reset(i2c);
+		i2c->reset_before_xfer = false;
+	}
+
	return i2c_pxa_internal_xfer(i2c, msgs, num, i2c_pxa_do_xfer);
 }
···
		}
	}

-	i2c_pxa_reset(i2c);
+	/*
+	 * Skip reset on Armada 3700 when recovery is used to avoid
+	 * controller hang due to the pinctrl state changes done by
+	 * the generic recovery initialization code. The reset will
+	 * be performed later, prior to the first transfer.
+	 */
+	if (i2c_type == REGS_A3700 && i2c->adap.bus_recovery_info)
+		i2c->reset_before_xfer = true;
+	else
+		i2c_pxa_reset(i2c);

	ret = i2c_add_numbered_adapter(&i2c->adap);
	if (ret < 0)
drivers/i2c/busses/i2c-tegra.c (+4 -1)
···
	 *
	 * VI I2C device shouldn't be marked as IRQ-safe because VI I2C won't
	 * be used for atomic transfers. ACPI device is not IRQ safe also.
+	 *
+	 * Devices with pinctrl states cannot be marked IRQ-safe as the pinctrl
+	 * state transitions during runtime PM require mutexes.
	 */
-	if (!IS_VI(i2c_dev) && !has_acpi_companion(i2c_dev->dev))
+	if (!IS_VI(i2c_dev) && !has_acpi_companion(i2c_dev->dev) && !i2c_dev->dev->pins)
		pm_runtime_irq_safe(i2c_dev->dev);

	pm_runtime_enable(i2c_dev->dev);
drivers/infiniband/core/umem.c (+3 -2)
···

	if (dirty)
		ib_dma_unmap_sgtable_attrs(dev, &umem->sgt_append.sgt,
-					   DMA_BIDIRECTIONAL, 0);
+					   DMA_BIDIRECTIONAL,
+					   DMA_ATTR_REQUIRE_COHERENT);

	for_each_sgtable_sg(&umem->sgt_append.sgt, sg, i) {
		unpin_user_page_range_dirty_lock(sg_page(sg),
···
	unsigned long lock_limit;
	unsigned long new_pinned;
	unsigned long cur_base;
-	unsigned long dma_attr = 0;
+	unsigned long dma_attr = DMA_ATTR_REQUIRE_COHERENT;
	struct mm_struct *mm;
	unsigned long npages;
	int pinned, ret;
drivers/iommu/amd/iommu.c (+14 -1)
···

 static struct protection_domain identity_domain;

+static int amd_iommu_identity_attach(struct iommu_domain *dom, struct device *dev,
+				     struct iommu_domain *old)
+{
+	/*
+	 * Don't allow attaching a device to the identity domain if SNP is
+	 * enabled.
+	 */
+	if (amd_iommu_snp_en)
+		return -EINVAL;
+
+	return amd_iommu_attach_device(dom, dev, old);
+}
+
 static const struct iommu_domain_ops identity_domain_ops = {
-	.attach_dev = amd_iommu_attach_device,
+	.attach_dev = amd_iommu_identity_attach,
 };

 void amd_iommu_init_identity_domain(void)
drivers/iommu/dma-iommu.c (+17 -4)
···
	 */
	if (dev_use_swiotlb(dev, size, dir) &&
	    iova_unaligned(iovad, phys, size)) {
-		if (attrs & DMA_ATTR_MMIO)
+		if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT))
			return DMA_MAPPING_ERROR;

		phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs);
···
		arch_sync_dma_for_device(phys, size, dir);

	iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR && !(attrs & DMA_ATTR_MMIO))
+	if (iova == DMA_MAPPING_ERROR &&
+	    !(attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT)))
		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
	return iova;
 }
···
 {
	phys_addr_t phys;

-	if (attrs & DMA_ATTR_MMIO) {
+	if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT)) {
		__iommu_dma_unmap(dev, dma_handle, size);
		return;
	}
···
	if (WARN_ON_ONCE(iova_start_pad && offset > 0))
		return -EIO;

+	/*
+	 * DMA_IOVA_USE_SWIOTLB is set on state after some entry
+	 * took SWIOTLB path, which we were supposed to prevent
+	 * for DMA_ATTR_REQUIRE_COHERENT attribute.
+	 */
+	if (WARN_ON_ONCE((state->__size & DMA_IOVA_USE_SWIOTLB) &&
+			 (attrs & DMA_ATTR_REQUIRE_COHERENT)))
+		return -EOPNOTSUPP;
+
+	if (!dev_is_dma_coherent(dev) && (attrs & DMA_ATTR_REQUIRE_COHERENT))
+		return -EOPNOTSUPP;
+
	if (dev_use_swiotlb(dev, size, dir) &&
	    iova_unaligned(iovad, phys, size)) {
-		if (attrs & DMA_ATTR_MMIO)
+		if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT))
			return -EPERM;

		return iommu_dma_iova_link_swiotlb(dev, state, phys, offset,
drivers/iommu/intel/dmar.c (+1 -2)
···
	if (fault & DMA_FSTS_ITE) {
		head = readl(iommu->reg + DMAR_IQH_REG);
		head = ((head >> shift) - 1 + QI_LENGTH) % QI_LENGTH;
-		head |= 1;
		tail = readl(iommu->reg + DMAR_IQT_REG);
		tail = ((tail >> shift) - 1 + QI_LENGTH) % QI_LENGTH;
···
		do {
			if (qi->desc_status[head] == QI_IN_USE)
				qi->desc_status[head] = QI_ABORT;
-			head = (head - 2 + QI_LENGTH) % QI_LENGTH;
+			head = (head - 1 + QI_LENGTH) % QI_LENGTH;
		} while (head != tail);

		/*
drivers/iommu/intel/svm.c (+8 -4)
···
	if (IS_ERR(dev_pasid))
		return PTR_ERR(dev_pasid);

-	ret = iopf_for_domain_replace(domain, old, dev);
-	if (ret)
-		goto out_remove_dev_pasid;
+	/* SVA with non-IOMMU/PRI IOPF handling is allowed. */
+	if (info->pri_supported) {
+		ret = iopf_for_domain_replace(domain, old, dev);
+		if (ret)
+			goto out_remove_dev_pasid;
+	}

	/* Setup the pasid table: */
	sflags = cpu_feature_enabled(X86_FEATURE_LA57) ? PASID_FLAG_FL5LP : 0;
···

	return 0;
 out_unwind_iopf:
-	iopf_for_domain_replace(old, domain, dev);
+	if (info->pri_supported)
+		iopf_for_domain_replace(old, domain, dev);
 out_remove_dev_pasid:
	domain_remove_dev_pasid(domain, dev, pasid);
	return ret;
drivers/iommu/iommu-sva.c (+6 -6)
···
	iommu_detach_device_pasid(domain, dev, iommu_mm->pasid);
	if (--domain->users == 0) {
		list_del(&domain->next);
-		iommu_domain_free(domain);
-	}
+		if (list_empty(&iommu_mm->sva_domains)) {
+			list_del(&iommu_mm->mm_list_elm);
+			if (list_empty(&iommu_sva_mms))
+				iommu_sva_present = false;
+		}

-	if (list_empty(&iommu_mm->sva_domains)) {
-		list_del(&iommu_mm->mm_list_elm);
-		if (list_empty(&iommu_sva_mms))
-			iommu_sva_present = false;
+		iommu_domain_free(domain);
	}

	mutex_unlock(&iommu_sva_lock);
drivers/iommu/iommu.c (+5 -1)
···
		if (addr == end)
			goto map_end;

-		phys_addr = iommu_iova_to_phys(domain, addr);
+		/*
+		 * Return address by iommu_iova_to_phys for 0 is
+		 * ambiguous. Offset to address 1 if addr is 0.
+		 */
+		phys_addr = iommu_iova_to_phys(domain, addr ? addr : 1);
		if (!phys_addr) {
			map_size += pg_size;
			continue;
···
	}

	/*
-	 * We need to serialize streamon/off with queueing new requests.
+	 * We need to serialize streamon/off/reqbufs with queueing new requests.
	 * These ioctls may trigger the cancellation of a streaming
	 * operation, and that should not be mixed with queueing a new
	 * request at the same time.
	 */
	if (v4l2_device_supports_requests(vfd->v4l2_dev) &&
-	    (cmd == VIDIOC_STREAMON || cmd == VIDIOC_STREAMOFF)) {
+	    (cmd == VIDIOC_STREAMON || cmd == VIDIOC_STREAMOFF ||
+	     cmd == VIDIOC_REQBUFS)) {
		req_queue_lock = &vfd->v4l2_dev->mdev->req_queue_mutex;

		if (mutex_lock_interruptible(req_queue_lock))
···
	 * their platform code before calling sdhci_add_host(), and we
	 * won't assume 8-bit width for hosts without that CAP.
	 */
-	if (!(host->quirks & SDHCI_QUIRK_FORCE_1_BIT_DATA))
+	if (host->quirks & SDHCI_QUIRK_FORCE_1_BIT_DATA) {
+		host->caps1 &= ~(SDHCI_SUPPORT_SDR104 | SDHCI_SUPPORT_SDR50 | SDHCI_SUPPORT_DDR50);
+		if (host->quirks2 & SDHCI_QUIRK2_CAPS_BIT63_FOR_HS400)
+			host->caps1 &= ~SDHCI_SUPPORT_HS400;
+		mmc->caps2 &= ~(MMC_CAP2_HS200 | MMC_CAP2_HS400 | MMC_CAP2_HS400_ES);
+		mmc->caps &= ~(MMC_CAP_DDR | MMC_CAP_UHS);
+	} else {
		mmc->caps |= MMC_CAP_4_BIT_DATA;
+	}

	if (host->quirks2 & SDHCI_QUIRK2_HOST_NO_CMD23)
		mmc->caps &= ~MMC_CAP_CMD23;
drivers/mtd/nand/raw/brcmnand/brcmnand.c (+2 -4)
···
	for (i = 0; i < ctrl->max_oob; i += 4)
		oob_reg_write(ctrl, i, 0xffffffff);

-	if (mtd->oops_panic_write)
+	if (mtd->oops_panic_write) {
		/* switch to interrupt polling and PIO mode */
		disable_ctrl_irqs(ctrl);
-
-	if (use_dma(ctrl) && (has_edu(ctrl) || !oob) && flash_dma_buf_ok(buf)) {
+	} else if (use_dma(ctrl) && (has_edu(ctrl) || !oob) && flash_dma_buf_ok(buf)) {
		if (ctrl->dma_trans(host, addr, (u32 *)buf, oob, mtd->writesize,
				    CMD_PROGRAM_PAGE))
-
			ret = -EIO;

		goto out;
drivers/mtd/nand/raw/cadence-nand-controller.c (+1 -1)
···
					      sizeof(*cdns_ctrl->cdma_desc),
					      &cdns_ctrl->dma_cdma_desc,
					      GFP_KERNEL);
-	if (!cdns_ctrl->dma_cdma_desc)
+	if (!cdns_ctrl->cdma_desc)
		return -ENOMEM;

	cdns_ctrl->buf_size = SZ_16K;
drivers/mtd/nand/raw/nand_base.c (+12 -2)
···
 static int nand_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
 {
	struct nand_chip *chip = mtd_to_nand(mtd);
+	int ret;

	if (!chip->ops.lock_area)
		return -ENOTSUPP;

-	return chip->ops.lock_area(chip, ofs, len);
+	nand_get_device(chip);
+	ret = chip->ops.lock_area(chip, ofs, len);
+	nand_release_device(chip);
+
+	return ret;
 }

 /**
···
 static int nand_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
 {
	struct nand_chip *chip = mtd_to_nand(mtd);
+	int ret;

	if (!chip->ops.unlock_area)
		return -ENOTSUPP;

-	return chip->ops.unlock_area(chip, ofs, len);
+	nand_get_device(chip);
+	ret = chip->ops.unlock_area(chip, ofs, len);
+	nand_release_device(chip);
+
+	return ret;
 }

 /* Set default functions */
···
 }

 /**
- * spi_nor_spimem_check_op - check if the operation is supported
- * by controller
+ * spi_nor_spimem_check_read_pp_op - check if a read or a page program operation is
+ *				     supported by controller
  * @nor: pointer to a 'struct spi_nor'
  * @op: pointer to op template to be checked
  *
  * Returns 0 if operation is supported, -EOPNOTSUPP otherwise.
  */
-static int spi_nor_spimem_check_op(struct spi_nor *nor,
-				   struct spi_mem_op *op)
+static int spi_nor_spimem_check_read_pp_op(struct spi_nor *nor,
+					   struct spi_mem_op *op)
 {
	/*
	 * First test with 4 address bytes. The opcode itself might
···
	if (spi_nor_protocol_is_dtr(nor->read_proto))
		op.dummy.nbytes *= 2;

-	return spi_nor_spimem_check_op(nor, &op);
+	return spi_nor_spimem_check_read_pp_op(nor, &op);
 }

 /**
···

	spi_nor_spimem_setup_op(nor, &op, pp->proto);

-	return spi_nor_spimem_check_op(nor, &op);
+	return spi_nor_spimem_check_read_pp_op(nor, &op);
 }

 /**
···

		spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);

-		if (spi_nor_spimem_check_op(nor, &op))
+		if (!spi_mem_supports_op(nor->spimem, &op))
			nor->flags |= SNOR_F_NO_READ_CR;
	}
 }
drivers/net/can/dev/netlink.c (+3 -1)
···
	/* We need synchronization with dev->stop() */
	ASSERT_RTNL();

-	can_ctrlmode_changelink(dev, data, extack);
+	err = can_ctrlmode_changelink(dev, data, extack);
+	if (err)
+		return err;

	if (data[IFLA_CAN_BITTIMING]) {
		struct can_bittiming bt;
drivers/net/can/spi/mcp251x.c (+24 -5)
···
	}

	mutex_lock(&priv->mcp_lock);
-	mcp251x_power_enable(priv->transceiver, 1);
+	ret = mcp251x_power_enable(priv->transceiver, 1);
+	if (ret) {
+		dev_err(&spi->dev, "failed to enable transceiver power: %pe\n", ERR_PTR(ret));
+		goto out_close_candev;
+	}

	priv->force_quit = 0;
	priv->tx_skb = NULL;
···
	mcp251x_hw_sleep(spi);
 out_close:
	mcp251x_power_enable(priv->transceiver, 0);
+out_close_candev:
	close_candev(net);
	mutex_unlock(&priv->mcp_lock);
	if (release_irq)
···
 {
	struct spi_device *spi = to_spi_device(dev);
	struct mcp251x_priv *priv = spi_get_drvdata(spi);
+	int ret = 0;

-	if (priv->after_suspend & AFTER_SUSPEND_POWER)
-		mcp251x_power_enable(priv->power, 1);
-	if (priv->after_suspend & AFTER_SUSPEND_UP)
-		mcp251x_power_enable(priv->transceiver, 1);
+	if (priv->after_suspend & AFTER_SUSPEND_POWER) {
+		ret = mcp251x_power_enable(priv->power, 1);
+		if (ret) {
+			dev_err(dev, "failed to restore power: %pe\n", ERR_PTR(ret));
+			return ret;
+		}
+	}
+
+	if (priv->after_suspend & AFTER_SUSPEND_UP) {
+		ret = mcp251x_power_enable(priv->transceiver, 1);
+		if (ret) {
+			dev_err(dev, "failed to restore transceiver power: %pe\n", ERR_PTR(ret));
+			if (priv->after_suspend & AFTER_SUSPEND_POWER)
+				mcp251x_power_enable(priv->power, 0);
+			return ret;
+		}
+	}

	if (priv->after_suspend & (AFTER_SUSPEND_POWER | AFTER_SUSPEND_UP))
		queue_work(priv->wq, &priv->restart_work);
···
	select SSB
	select MII
	select PHYLIB
-	select FIXED_PHY if BCM47XX
+	select FIXED_PHY
	help
	  If you have a network (Ethernet) controller of this type, say Y
	  or M here.
drivers/net/ethernet/broadcom/asp2/bcmasp.c (+23 -18)
···
	}
 }

-static void bcmasp_wol_irq_destroy(struct bcmasp_priv *priv)
-{
-	if (priv->wol_irq > 0)
-		free_irq(priv->wol_irq, priv);
-}
-
 static void bcmasp_eee_fixup(struct bcmasp_intf *intf, bool en)
 {
	u32 reg, phy_lpi_overwrite;
···
	if (priv->irq <= 0)
		return -EINVAL;

-	priv->clk = devm_clk_get_optional_enabled(dev, "sw_asp");
+	priv->clk = devm_clk_get_optional(dev, "sw_asp");
	if (IS_ERR(priv->clk))
		return dev_err_probe(dev, PTR_ERR(priv->clk),
				     "failed to request clock\n");
···

	bcmasp_set_pdata(priv, pdata);

+	ret = clk_prepare_enable(priv->clk);
+	if (ret)
+		return dev_err_probe(dev, ret, "failed to start clock\n");
+
	/* Enable all clocks to ensure successful probing */
	bcmasp_core_clock_set(priv, ASP_CTRL_CLOCK_CTRL_ASP_ALL_DISABLE, 0);
···

	ret = devm_request_irq(&pdev->dev, priv->irq, bcmasp_isr, 0,
			       pdev->name, priv);
-	if (ret)
-		return dev_err_probe(dev, ret, "failed to request ASP interrupt: %d", ret);
+	if (ret) {
+		dev_err(dev, "Failed to request ASP interrupt: %d", ret);
+		goto err_clock_disable;
+	}

	/* Register mdio child nodes */
	of_platform_populate(dev->of_node, bcmasp_mdio_of_match, NULL, dev);
···

	priv->mda_filters = devm_kcalloc(dev, priv->num_mda_filters,
					 sizeof(*priv->mda_filters), GFP_KERNEL);
-	if (!priv->mda_filters)
-		return -ENOMEM;
+	if (!priv->mda_filters) {
+		ret = -ENOMEM;
+		goto err_clock_disable;
+	}

	priv->net_filters = devm_kcalloc(dev, priv->num_net_filters,
					 sizeof(*priv->net_filters), GFP_KERNEL);
-	if (!priv->net_filters)
-		return -ENOMEM;
+	if (!priv->net_filters) {
+		ret = -ENOMEM;
+		goto err_clock_disable;
+	}

	bcmasp_core_init_filters(priv);
···
	ports_node = of_find_node_by_name(dev->of_node, "ethernet-ports");
	if (!ports_node) {
		dev_warn(dev, "No ports found\n");
-		return -EINVAL;
+		ret = -EINVAL;
+		goto err_clock_disable;
	}

	i = 0;
···
	 */
	bcmasp_core_clock_set(priv, 0, ASP_CTRL_CLOCK_CTRL_ASP_ALL_DISABLE);

-	clk_disable_unprepare(priv->clk);
-
	/* Now do the registration of the network ports which will take care
	 * of managing the clock properly.
	 */
···
		count++;
	}

+	clk_disable_unprepare(priv->clk);
+
	dev_info(dev, "Initialized %d port(s)\n", count);

	return ret;

 err_cleanup:
-	bcmasp_wol_irq_destroy(priv);
	bcmasp_remove_intfs(priv);
+err_clock_disable:
+	clk_disable_unprepare(priv->clk);

	return ret;
 }
···
	if (!priv)
		return;

-	bcmasp_wol_irq_destroy(priv);
	bcmasp_remove_intfs(priv);
 }
drivers/net/ethernet/cadence/macb_main.c (+25 -16)
···
	}

	if (tx_skb->skb) {
-		napi_consume_skb(tx_skb->skb, budget);
+		dev_consume_skb_any(tx_skb->skb);
		tx_skb->skb = NULL;
	}
 }
···
	spin_lock_irq(&bp->stats_lock);
	gem_update_stats(bp);
	memcpy(data, &bp->ethtool_stats, sizeof(u64)
-	       * (GEM_STATS_LEN + QUEUE_STATS_LEN * MACB_MAX_QUEUES));
+	       * (GEM_STATS_LEN + QUEUE_STATS_LEN * bp->num_queues));
	spin_unlock_irq(&bp->stats_lock);
 }
···
	struct macb_queue *queue;
	struct in_device *idev;
	unsigned long flags;
+	u32 tmp, ifa_local;
	unsigned int q;
	int err;
-	u32 tmp;

	if (!device_may_wakeup(&bp->dev->dev))
		phy_exit(bp->phy);
···
		return 0;

	if (bp->wol & MACB_WOL_ENABLED) {
-		/* Check for IP address in WOL ARP mode */
-		idev = __in_dev_get_rcu(bp->dev);
-		if (idev)
-			ifa = rcu_dereference(idev->ifa_list);
-		if ((bp->wolopts & WAKE_ARP) && !ifa) {
-			netdev_err(netdev, "IP address not assigned as required by WoL walk ARP\n");
-			return -EOPNOTSUPP;
+		if (bp->wolopts & WAKE_ARP) {
+			/* Check for IP address in WOL ARP mode */
+			rcu_read_lock();
+			idev = __in_dev_get_rcu(bp->dev);
+			if (idev)
+				ifa = rcu_dereference(idev->ifa_list);
+			if (!ifa) {
+				rcu_read_unlock();
+				netdev_err(netdev, "IP address not assigned as required by WoL walk ARP\n");
+				return -EOPNOTSUPP;
+			}
+			ifa_local = be32_to_cpu(ifa->ifa_local);
+			rcu_read_unlock();
		}
+
		spin_lock_irqsave(&bp->lock, flags);

		/* Disable Tx and Rx engines before disabling the queues,
···
		if (bp->wolopts & WAKE_ARP) {
			tmp |= MACB_BIT(ARP);
			/* write IP address into register */
-			tmp |= MACB_BFEXT(IP, be32_to_cpu(ifa->ifa_local));
+			tmp |= MACB_BFEXT(IP, ifa_local);
		}
+		spin_unlock_irqrestore(&bp->lock, flags);

		/* Change interrupt handler and
		 * Enable WoL IRQ on queue 0
···
			dev_err(dev,
				"Unable to request IRQ %d (error %d)\n",
				bp->queues[0].irq, err);
-			spin_unlock_irqrestore(&bp->lock, flags);
			return err;
		}
+		spin_lock_irqsave(&bp->lock, flags);
		queue_writel(bp->queues, IER, GEM_BIT(WOL));
		gem_writel(bp, WOL, tmp);
+		spin_unlock_irqrestore(&bp->lock, flags);
	} else {
		err = devm_request_irq(dev, bp->queues[0].irq, macb_wol_interrupt,
				       IRQF_SHARED, netdev->name, bp->queues);
···
			dev_err(dev,
				"Unable to request IRQ %d (error %d)\n",
				bp->queues[0].irq, err);
-			spin_unlock_irqrestore(&bp->lock, flags);
			return err;
		}
+		spin_lock_irqsave(&bp->lock, flags);
		queue_writel(bp->queues, IER, MACB_BIT(WOL));
		macb_writel(bp, WOL, tmp);
+		spin_unlock_irqrestore(&bp->lock, flags);
	}
-	spin_unlock_irqrestore(&bp->lock, flags);

	enable_irq_wake(bp->queues[0].irq);
···
	queue_readl(bp->queues, ISR);
	if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
		queue_writel(bp->queues, ISR, -1);
+	spin_unlock_irqrestore(&bp->lock, flags);
+
	/* Replace interrupt handler on queue 0 */
	devm_free_irq(dev, bp->queues[0].irq, bp->queues);
	err = devm_request_irq(dev, bp->queues[0].irq, macb_interrupt,
···
		dev_err(dev,
			"Unable to request IRQ %d (error %d)\n",
			bp->queues[0].irq, err);
-		spin_unlock_irqrestore(&bp->lock, flags);
		return err;
	}
-	spin_unlock_irqrestore(&bp->lock, flags);

	disable_irq_wake(bp->queues[0].irq);
···313313{314314 /* Report the maximum number queues, even if not every queue is315315 * currently configured. Since allocation of queues is in pairs,316316- * use netdev->real_num_tx_queues * 2. The real_num_tx_queues is set317317- * at device creation and never changes.316316+ * use netdev->num_tx_queues * 2. The num_tx_queues is set at317317+ * device creation and never changes.318318 */319319320320 if (sset == ETH_SS_STATS)321321 return IAVF_STATS_LEN +322322- (IAVF_QUEUE_STATS_LEN * 2 *323323- netdev->real_num_tx_queues);322322+ (IAVF_QUEUE_STATS_LEN * 2 * netdev->num_tx_queues);324323 else325324 return -EINVAL;326325}···344345 iavf_add_ethtool_stats(&data, adapter, iavf_gstrings_stats);345346346347 rcu_read_lock();347347- /* As num_active_queues describe both tx and rx queues, we can use348348- * it to iterate over rings' stats.348348+ /* Use num_tx_queues to report stats for the maximum number of queues.349349+ * Queues beyond num_active_queues will report zero.349350 */350350- for (i = 0; i < adapter->num_active_queues; i++) {351351- struct iavf_ring *ring;351351+ for (i = 0; i < netdev->num_tx_queues; i++) {352352+ struct iavf_ring *tx_ring = NULL, *rx_ring = NULL;352353353353- /* Tx rings stats */354354- ring = &adapter->tx_rings[i];355355- iavf_add_queue_stats(&data, ring);354354+ if (i < adapter->num_active_queues) {355355+ tx_ring = &adapter->tx_rings[i];356356+ rx_ring = &adapter->rx_rings[i];357357+ }356358357357- /* Rx rings stats */358358- ring = &adapter->rx_rings[i];359359- iavf_add_queue_stats(&data, ring);359359+ iavf_add_queue_stats(&data, tx_ring);360360+ iavf_add_queue_stats(&data, rx_ring);360361 }361362 rcu_read_unlock();362363}···375376 iavf_add_stat_strings(&data, iavf_gstrings_stats);376377377378 /* Queues are always allocated in pairs, so we just use378378- * real_num_tx_queues for both Tx and Rx queues.379379+ * num_tx_queues for both Tx and Rx queues.379380 */380380- for (i = 0; i < netdev->real_num_tx_queues; i++) {381381+ for (i = 
0; i < netdev->num_tx_queues; i++) {381382 iavf_add_stat_strings(&data, iavf_gstrings_queue_stats,382383 "tx", i);383384 iavf_add_stat_strings(&data, iavf_gstrings_queue_stats,
+22
drivers/net/ethernet/intel/ice/ice.h
···840840}841841842842/**843843+ * ice_get_max_txq - return the maximum number of Tx queues in a PF844844+ * @pf: PF structure845845+ *846846+ * Return: maximum number of Tx queues847847+ */848848+static inline int ice_get_max_txq(struct ice_pf *pf)849849+{850850+ return min(num_online_cpus(), pf->hw.func_caps.common_cap.num_txq);851851+}852852+853853+/**854854+ * ice_get_max_rxq - return the maximum number of Rx queues in a PF855855+ * @pf: PF structure856856+ *857857+ * Return: maximum number of Rx queues858858+ */859859+static inline int ice_get_max_rxq(struct ice_pf *pf)860860+{861861+ return min(num_online_cpus(), pf->hw.func_caps.common_cap.num_rxq);862862+}863863+864864+/**843865 * ice_get_main_vsi - Get the PF VSI844866 * @pf: PF instance845867 *
+11-21
drivers/net/ethernet/intel/ice/ice_ethtool.c
···19301930 int i = 0;19311931 char *p;1932193219331933+ if (ice_is_port_repr_netdev(netdev)) {19341934+ ice_update_eth_stats(vsi);19351935+19361936+ for (j = 0; j < ICE_VSI_STATS_LEN; j++) {19371937+ p = (char *)vsi + ice_gstrings_vsi_stats[j].stat_offset;19381938+ data[i++] = (ice_gstrings_vsi_stats[j].sizeof_stat ==19391939+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;19401940+ }19411941+ return;19421942+ }19431943+19331944 ice_update_pf_stats(pf);19341945 ice_update_vsi_stats(vsi);19351946···19491938 data[i++] = (ice_gstrings_vsi_stats[j].sizeof_stat ==19501939 sizeof(u64)) ? *(u64 *)p : *(u32 *)p;19511940 }19521952-19531953- if (ice_is_port_repr_netdev(netdev))19541954- return;1955194119561942 /* populate per queue stats */19571943 rcu_read_lock();···37793771 info->rx_filters = BIT(HWTSTAMP_FILTER_NONE) | BIT(HWTSTAMP_FILTER_ALL);3780377237813773 return 0;37823782-}37833783-37843784-/**37853785- * ice_get_max_txq - return the maximum number of Tx queues for in a PF37863786- * @pf: PF structure37873787- */37883788-static int ice_get_max_txq(struct ice_pf *pf)37893789-{37903790- return min(num_online_cpus(), pf->hw.func_caps.common_cap.num_txq);37913791-}37923792-37933793-/**37943794- * ice_get_max_rxq - return the maximum number of Rx queues for in a PF37953795- * @pf: PF structure37963796- */37973797-static int ice_get_max_rxq(struct ice_pf *pf)37983798-{37993799- return min(num_online_cpus(), pf->hw.func_caps.common_cap.num_rxq);38003774}3801377538023776/**
···17191719 if (ether_addr_equal(netdev->dev_addr, mac))17201720 return 0;1721172117221722- err = ionic_program_mac(lif, mac);17231723- if (err < 0)17241724- return err;17221722+ /* Only program MACs for virtual functions to avoid losing the permanent17231723+ * MAC address across warm reset/reboot.17241724+ */17251725+ if (lif->ionic->pdev->is_virtfn) {17261726+ err = ionic_program_mac(lif, mac);17271727+ if (err < 0)17281728+ return err;1725172917261726- if (err > 0)17271727- netdev_dbg(netdev, "%s: SET and GET ATTR Mac are not equal-due to old FW running\n",17281728- __func__);17301730+ if (err > 0)17311731+ netdev_dbg(netdev, "%s: SET and GET ATTR Mac are not equal-due to old FW running\n",17321732+ __func__);17331733+ }1729173417301735 err = eth_prepare_mac_addr_change(netdev, addr);17311736 if (err)
+2-2
drivers/net/ethernet/ti/icssg/icssg_common.c
···962962 pkt_len -= 4;963963 cppi5_desc_get_tags_ids(&desc_rx->hdr, &port_id, NULL);964964 psdata = cppi5_hdesc_get_psdata(desc_rx);965965- k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx);966965 count++;967966 xsk_buff_set_size(xdp, pkt_len);968967 xsk_buff_dma_sync_for_cpu(xdp);···987988 emac_dispatch_skb_zc(emac, xdp, psdata);988989 xsk_buff_free(xdp);989990 }991991+ k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx);990992 }991993992994 if (xdp_status & ICSSG_XDP_REDIR)···10571057 /* firmware adds 4 CRC bytes, strip them */10581058 pkt_len -= 4;10591059 cppi5_desc_get_tags_ids(&desc_rx->hdr, &port_id, NULL);10601060- k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx);1061106010621061 /* if allocation fails we drop the packet but push the10631062 * descriptor back to the ring with old page to prevent a stall···11141115 ndev->stats.rx_packets++;1115111611161117requeue:11181118+ k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx);11171119 /* queue another RX DMA */11181120 ret = prueth_dma_rx_push_mapped(emac, &emac->rx_chns, new_page,11191121 PRUETH_MAX_PKT_SIZE);
+64-1
drivers/net/team/team_core.c
···20582058 * rt netlink interface20592059 ***********************/2060206020612061+/* For tx path we need a linkup && enabled port and for parse any port20622062+ * suffices.20632063+ */20642064+static struct team_port *team_header_port_get_rcu(struct team *team,20652065+ bool txable)20662066+{20672067+ struct team_port *port;20682068+20692069+ list_for_each_entry_rcu(port, &team->port_list, list) {20702070+ if (!txable || team_port_txable(port))20712071+ return port;20722072+ }20732073+20742074+ return NULL;20752075+}20762076+20772077+static int team_header_create(struct sk_buff *skb, struct net_device *team_dev,20782078+ unsigned short type, const void *daddr,20792079+ const void *saddr, unsigned int len)20802080+{20812081+ struct team *team = netdev_priv(team_dev);20822082+ const struct header_ops *port_ops;20832083+ struct team_port *port;20842084+ int ret = 0;20852085+20862086+ rcu_read_lock();20872087+ port = team_header_port_get_rcu(team, true);20882088+ if (port) {20892089+ port_ops = READ_ONCE(port->dev->header_ops);20902090+ if (port_ops && port_ops->create)20912091+ ret = port_ops->create(skb, port->dev,20922092+ type, daddr, saddr, len);20932093+ }20942094+ rcu_read_unlock();20952095+ return ret;20962096+}20972097+20982098+static int team_header_parse(const struct sk_buff *skb,20992099+ const struct net_device *team_dev,21002100+ unsigned char *haddr)21012101+{21022102+ struct team *team = netdev_priv(team_dev);21032103+ const struct header_ops *port_ops;21042104+ struct team_port *port;21052105+ int ret = 0;21062106+21072107+ rcu_read_lock();21082108+ port = team_header_port_get_rcu(team, false);21092109+ if (port) {21102110+ port_ops = READ_ONCE(port->dev->header_ops);21112111+ if (port_ops && port_ops->parse)21122112+ ret = port_ops->parse(skb, port->dev, haddr);21132113+ }21142114+ rcu_read_unlock();21152115+ return ret;21162116+}21172117+21182118+static const struct header_ops team_header_ops = {21192119+ .create = team_header_create,21202120+ 
.parse = team_header_parse,21212121+};21222122+20612123static void team_setup_by_port(struct net_device *dev,20622124 struct net_device *port_dev)20632125{···21282066 if (port_dev->type == ARPHRD_ETHER)21292067 dev->header_ops = team->header_ops_cache;21302068 else21312131- dev->header_ops = port_dev->header_ops;20692069+ dev->header_ops = port_dev->header_ops ?20702070+ &team_header_ops : NULL;21322071 dev->type = port_dev->type;21332072 dev->hard_header_len = port_dev->hard_header_len;21342073 dev->needed_headroom = port_dev->needed_headroom;
···32783278 struct virtio_net_hdr_v1_hash_tunnel *hdr;32793279 int num_sg;32803280 unsigned hdr_len = vi->hdr_len;32813281+ bool feature_hdrlen;32813282 bool can_push;32833283+32843284+ feature_hdrlen = virtio_has_feature(vi->vdev,32853285+ VIRTIO_NET_F_GUEST_HDRLEN);3282328632833287 pr_debug("%s: xmit %p %pM\n", vi->dev->name, skb, dest);32843288···3303329933043300 if (virtio_net_hdr_tnl_from_skb(skb, hdr, vi->tx_tnl,33053301 virtio_is_little_endian(vi->vdev), 0,33063306- false))33023302+ false, feature_hdrlen))33073303 return -EPROTO;3308330433093305 if (vi->mergeable_rx_bufs)···33663362 /* Don't wait up for transmitted skbs to be freed. */33673363 if (!use_napi) {33683364 skb_orphan(skb);33653365+ skb_dst_drop(skb);33693366 nf_reset_ct(skb);33703367 }33713368
+5
drivers/pci/endpoint/functions/pci-epf-test.c
···894894 dev_err(&epf->dev, "pci_epc_set_bar() failed: %d\n", ret);895895 bar->submap = old_submap;896896 bar->num_submap = old_nsub;897897+ ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, bar);898898+ if (ret)899899+ dev_warn(&epf->dev, "Failed to restore the original BAR mapping: %d\n",900900+ ret);901901+897902 kfree(submap);898903 goto err;899904 }
+41-13
drivers/pci/pwrctrl/core.c
···268268}269269EXPORT_SYMBOL_GPL(pci_pwrctrl_power_on_devices);270270271271+/*272272+ * Check whether the pwrctrl device really needs to be created or not. The273273+ * pwrctrl device will only be created if the node satisfies below requirements:274274+ *275275+ * 1. Presence of compatible property with "pci" prefix to match against the276276+ * pwrctrl driver (AND)277277+ * 2. At least one of the power supplies defined in the devicetree node of the278278+ * device (OR) in the remote endpoint parent node to indicate pwrctrl279279+ * requirement.280280+ */281281+static bool pci_pwrctrl_is_required(struct device_node *np)282282+{283283+ struct device_node *endpoint;284284+ const char *compat;285285+ int ret;286286+287287+ ret = of_property_read_string(np, "compatible", &compat);288288+ if (ret < 0)289289+ return false;290290+291291+ if (!strstarts(compat, "pci"))292292+ return false;293293+294294+ if (of_pci_supply_present(np))295295+ return true;296296+297297+ if (of_graph_is_present(np)) {298298+ for_each_endpoint_of_node(np, endpoint) {299299+ struct device_node *remote __free(device_node) =300300+ of_graph_get_remote_port_parent(endpoint);301301+ if (remote) {302302+ if (of_pci_supply_present(remote))303303+ return true;304304+ }305305+ }306306+ }307307+308308+ return false;309309+}310310+271311static int pci_pwrctrl_create_device(struct device_node *np,272312 struct device *parent)273313{···327287 return 0;328288 }329289330330- /*331331- * Sanity check to make sure that the node has the compatible property332332- * to allow driver binding.333333- */334334- if (!of_property_present(np, "compatible"))335335- return 0;336336-337337- /*338338- * Check whether the pwrctrl device really needs to be created or not.339339- * This is decided based on at least one of the power supplies defined340340- * in the devicetree node of the device or the graph property.341341- */342342- if (!of_pci_supply_present(np) && !of_graph_is_present(np)) {290290+ if 
(!pci_pwrctrl_is_required(np)) {343291 dev_dbg(parent, "Skipping OF node: %s\n", np->name);344292 return 0;345293 }
+6-3
drivers/pinctrl/mediatek/pinctrl-mtk-common.c
···11351135 goto chip_error;11361136 }1137113711381138- ret = mtk_eint_init(pctl, pdev);11391139- if (ret)11401140- goto chip_error;11381138+ /* Only initialize EINT if we have EINT pins */11391139+ if (data->eint_hw.ap_num > 0) {11401140+ ret = mtk_eint_init(pctl, pdev);11411141+ if (ret)11421142+ goto chip_error;11431143+ }1141114411421145 return 0;11431146
+16
drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
···723723 .pin_config_group_dbg_show = pmic_gpio_config_dbg_show,724724};725725726726+static int pmic_gpio_get_direction(struct gpio_chip *chip, unsigned pin)727727+{728728+ struct pmic_gpio_state *state = gpiochip_get_data(chip);729729+ struct pmic_gpio_pad *pad;730730+731731+ pad = state->ctrl->desc->pins[pin].drv_data;732732+733733+ if (!pad->is_enabled || pad->analog_pass ||734734+ (!pad->input_enabled && !pad->output_enabled))735735+ return -EINVAL;736736+737737+ /* Make sure the state is aligned on what pmic_gpio_get() returns */738738+ return pad->input_enabled ? GPIO_LINE_DIRECTION_IN : GPIO_LINE_DIRECTION_OUT;739739+}740740+726741static int pmic_gpio_direction_input(struct gpio_chip *chip, unsigned pin)727742{728743 struct pmic_gpio_state *state = gpiochip_get_data(chip);···816801}817802818803static const struct gpio_chip pmic_gpio_gpio_template = {804804+ .get_direction = pmic_gpio_get_direction,819805 .direction_input = pmic_gpio_direction_input,820806 .direction_output = pmic_gpio_direction_output,821807 .get = pmic_gpio_get,
···6565 select PINMUX6666 select GENERIC_PINCONF6767 select GPIOLIB6868+ select GPIO_GENERIC6869 help6970 The Hardware Debug Port allows the observation of internal signals.7071 It uses configurable multiplexer to route signals in a dedicated observation register.
+32-11
drivers/pinctrl/sunxi/pinctrl-sunxi.c
···157157 const char *pin_name,158158 const char *func_name)159159{160160+ unsigned long variant = pctl->flags & SUNXI_PINCTRL_VARIANT_MASK;160161 int i;161162162163 for (i = 0; i < pctl->desc->npins; i++) {···169168 while (func->name) {170169 if (!strcmp(func->name, func_name) &&171170 (!func->variant ||172172- func->variant & pctl->variant))171171+ func->variant & variant))173172 return func;174173175174 func++;···210209 const u16 pin_num,211210 const u8 muxval)212211{212212+ unsigned long variant = pctl->flags & SUNXI_PINCTRL_VARIANT_MASK;213213+213214 for (unsigned int i = 0; i < pctl->desc->npins; i++) {214215 const struct sunxi_desc_pin *pin = pctl->desc->pins + i;215216 struct sunxi_desc_function *func = pin->functions;···219216 if (pin->pin.number != pin_num)220217 continue;221218222222- if (pin->variant && !(pctl->variant & pin->variant))219219+ if (pin->variant && !(variant & pin->variant))223220 continue;224221225222 while (func->name) {···10921089{10931090 struct sunxi_pinctrl *pctl = irq_data_get_irq_chip_data(d);10941091 struct sunxi_desc_function *func;10921092+ unsigned int offset;10931093+ u32 reg, shift, mask;10941094+ u8 disabled_mux, muxval;10951095 int ret;1096109610971097 func = sunxi_pinctrl_desc_find_function_by_pin(pctl,···11021096 if (!func)11031097 return -EINVAL;1104109811051105- ret = gpiochip_lock_as_irq(pctl->chip,11061106- pctl->irq_array[d->hwirq] - pctl->desc->pin_base);10991099+ offset = pctl->irq_array[d->hwirq] - pctl->desc->pin_base;11001100+ sunxi_mux_reg(pctl, offset, ®, &shift, &mask);11011101+ muxval = (readl(pctl->membase + reg) & mask) >> shift;11021102+11031103+ /* Change muxing to GPIO INPUT mode if at reset value */11041104+ if (pctl->flags & SUNXI_PINCTRL_NEW_REG_LAYOUT)11051105+ disabled_mux = SUN4I_FUNC_DISABLED_NEW;11061106+ else11071107+ disabled_mux = SUN4I_FUNC_DISABLED_OLD;11081108+11091109+ if (muxval == disabled_mux)11101110+ sunxi_pmx_set(pctl->pctl_dev, pctl->irq_array[d->hwirq],11111111+ 
SUN4I_FUNC_INPUT);11121112+11131113+ ret = gpiochip_lock_as_irq(pctl->chip, offset);11071114 if (ret) {11081115 dev_err(pctl->dev, "unable to lock HW IRQ %lu for IRQ\n",11091116 irqd_to_hwirq(d));···13571338static int sunxi_pinctrl_build_state(struct platform_device *pdev)13581339{13591340 struct sunxi_pinctrl *pctl = platform_get_drvdata(pdev);13411341+ unsigned long variant = pctl->flags & SUNXI_PINCTRL_VARIANT_MASK;13601342 void *ptr;13611343 int i;13621344···13821362 const struct sunxi_desc_pin *pin = pctl->desc->pins + i;13831363 struct sunxi_pinctrl_group *group = pctl->groups + pctl->ngroups;1384136413851385- if (pin->variant && !(pctl->variant & pin->variant))13651365+ if (pin->variant && !(variant & pin->variant))13861366 continue;1387136713881368 group->name = pin->pin.name;···14071387 const struct sunxi_desc_pin *pin = pctl->desc->pins + i;14081388 struct sunxi_desc_function *func;1409138914101410- if (pin->variant && !(pctl->variant & pin->variant))13901390+ if (pin->variant && !(variant & pin->variant))14111391 continue;1412139214131393 for (func = pin->functions; func->name; func++) {14141414- if (func->variant && !(pctl->variant & func->variant))13941394+ if (func->variant && !(variant & func->variant))14151395 continue;1416139614171397 /* Create interrupt mapping while we're at it */···14391419 const struct sunxi_desc_pin *pin = pctl->desc->pins + i;14401420 struct sunxi_desc_function *func;1441142114421442- if (pin->variant && !(pctl->variant & pin->variant))14221422+ if (pin->variant && !(variant & pin->variant))14431423 continue;1444142414451425 for (func = pin->functions; func->name; func++) {14461426 struct sunxi_pinctrl_function *func_item;14471427 const char **func_grp;1448142814491449- if (func->variant && !(pctl->variant & func->variant))14291429+ if (func->variant && !(variant & func->variant))14501430 continue;1451143114521432 func_item = sunxi_pinctrl_find_function_by_name(pctl,···1588156815891569 pctl->dev = &pdev->dev;15901570 
pctl->desc = desc;15911591- pctl->variant = flags & SUNXI_PINCTRL_VARIANT_MASK;15711571+ pctl->flags = flags;15921572 if (flags & SUNXI_PINCTRL_NEW_REG_LAYOUT) {15931573 pctl->bank_mem_size = D1_BANK_MEM_SIZE;15941574 pctl->pull_regs_offset = D1_PULL_REGS_OFFSET;···1624160416251605 for (i = 0, pin_idx = 0; i < pctl->desc->npins; i++) {16261606 const struct sunxi_desc_pin *pin = pctl->desc->pins + i;16071607+ unsigned long variant = pctl->flags & SUNXI_PINCTRL_VARIANT_MASK;1627160816281628- if (pin->variant && !(pctl->variant & pin->variant))16091609+ if (pin->variant && !(variant & pin->variant))16291610 continue;1630161116311612 pins[pin_idx++] = pin->pin;
···162162 */163163 dma->tx_size = 0;164164165165+ /*166166+ * We can't use `dmaengine_terminate_sync` because `uart_flush_buffer` is167167+ * holding the uart port spinlock.168168+ */165169 dmaengine_terminate_async(dma->txchan);170170+171171+ /*172172+ * The callback might or might not run. If it doesn't run, we need to ensure173173+ * that `tx_running` is cleared so that we can schedule new transactions.174174+ * If it does run, then the zombie callback will clear `tx_running` again175175+ * and perform a no-op since `tx_size` was cleared above.176176+ *177177+ * In either case, we ASSUME the DMA transaction will terminate before we178178+ * issue a new `serial8250_tx_dma`.179179+ */180180+ dma->tx_running = 0;166181}167182168183int serial8250_rx_dma(struct uart_8250_port *p)
+239-65
drivers/tty/serial/8250/8250_dw.c
···99 * LCR is written whilst busy. If it is, then a busy detect interrupt is1010 * raised, the LCR needs to be rewritten and the uart status register read.1111 */1212+#include <linux/bitfield.h>1313+#include <linux/bits.h>1414+#include <linux/cleanup.h>1215#include <linux/clk.h>1316#include <linux/delay.h>1417#include <linux/device.h>1518#include <linux/io.h>1919+#include <linux/lockdep.h>1620#include <linux/mod_devicetable.h>1721#include <linux/module.h>1822#include <linux/notifier.h>···4440#define RZN1_UART_RDMACR 0x110 /* DMA Control Register Receive Mode */45414642/* DesignWare specific register fields */4343+#define DW_UART_IIR_IID GENMASK(3, 0)4444+4745#define DW_UART_MCR_SIRE BIT(6)4646+4747+#define DW_UART_USR_BUSY BIT(0)48484949/* Renesas specific register fields */5050#define RZN1_UART_xDMACR_DMA_EN BIT(0)···6456#define DW_UART_QUIRK_IS_DMA_FC BIT(3)6557#define DW_UART_QUIRK_APMC0D08 BIT(4)6658#define DW_UART_QUIRK_CPR_VALUE BIT(5)5959+#define DW_UART_QUIRK_IER_KICK BIT(6)6060+6161+/*6262+ * Number of consecutive IIR_NO_INT interrupts required to trigger interrupt6363+ * storm prevention code.6464+ */6565+#define DW_UART_QUIRK_IER_KICK_THRES 467666867struct dw8250_platform_data {6968 u8 usr_reg;···92779378 unsigned int skip_autocfg:1;9479 unsigned int uart_16550_compatible:1;8080+ unsigned int in_idle:1;8181+8282+ u8 no_int_count;9583};96849785static inline struct dw8250_data *to_dw8250_data(struct dw8250_port_data *data)···125107 return value;126108}127109128128-/*129129- * This function is being called as part of the uart_port::serial_out()130130- * routine. Hence, it must not call serial_port_out() or serial_out()131131- * against the modified registers here, i.e. 
LCR.132132- */133133-static void dw8250_force_idle(struct uart_port *p)110110+static void dw8250_idle_exit(struct uart_port *p)134111{112112+ struct dw8250_data *d = to_dw8250_data(p->private_data);135113 struct uart_8250_port *up = up_to_u8250p(p);136136- unsigned int lsr;137114138138- /*139139- * The following call currently performs serial_out()140140- * against the FCR register. Because it differs to LCR141141- * there will be no infinite loop, but if it ever gets142142- * modified, we might need a new custom version of it143143- * that avoids infinite recursion.144144- */145145- serial8250_clear_and_reinit_fifos(up);115115+ if (d->uart_16550_compatible)116116+ return;146117147147- /*148148- * With PSLVERR_RESP_EN parameter set to 1, the device generates an149149- * error response when an attempt to read an empty RBR with FIFO150150- * enabled.151151- */152152- if (up->fcr & UART_FCR_ENABLE_FIFO) {153153- lsr = serial_port_in(p, UART_LSR);154154- if (!(lsr & UART_LSR_DR))155155- return;118118+ if (up->capabilities & UART_CAP_FIFO)119119+ serial_port_out(p, UART_FCR, up->fcr);120120+ serial_port_out(p, UART_MCR, up->mcr);121121+ serial_port_out(p, UART_IER, up->ier);122122+123123+ /* DMA Rx is restarted by IRQ handler as needed. */124124+ if (up->dma)125125+ serial8250_tx_dma_resume(up);126126+127127+ d->in_idle = 0;128128+}129129+130130+/*131131+ * Ensure BUSY is not asserted. If DW UART is configured with132132+ * !uart_16550_compatible, the writes to LCR, DLL, and DLH fail while133133+ * BUSY is asserted.134134+ *135135+ * Context: port's lock must be held136136+ */137137+static int dw8250_idle_enter(struct uart_port *p)138138+{139139+ struct dw8250_data *d = to_dw8250_data(p->private_data);140140+ unsigned int usr_reg = d->pdata ? 
d->pdata->usr_reg : DW_UART_USR;141141+ struct uart_8250_port *up = up_to_u8250p(p);142142+ int retries;143143+ u32 lsr;144144+145145+ lockdep_assert_held_once(&p->lock);146146+147147+ if (d->uart_16550_compatible)148148+ return 0;149149+150150+ d->in_idle = 1;151151+152152+ /* Prevent triggering interrupt from RBR filling */153153+ serial_port_out(p, UART_IER, 0);154154+155155+ if (up->dma) {156156+ serial8250_rx_dma_flush(up);157157+ if (serial8250_tx_dma_running(up))158158+ serial8250_tx_dma_pause(up);156159 }157160158158- serial_port_in(p, UART_RX);161161+ /*162162+ * Wait until Tx becomes empty + one extra frame time to ensure all bits163163+ * have been sent on the wire.164164+ *165165+ * FIXME: frame_time delay is too long with very low baudrates.166166+ */167167+ serial8250_fifo_wait_for_lsr_thre(up, p->fifosize);168168+ ndelay(p->frame_time);169169+170170+ serial_port_out(p, UART_MCR, up->mcr | UART_MCR_LOOP);171171+172172+ retries = 4; /* Arbitrary limit, 2 was always enough in tests */173173+ do {174174+ serial8250_clear_fifos(up);175175+ if (!(serial_port_in(p, usr_reg) & DW_UART_USR_BUSY))176176+ break;177177+ /* FIXME: frame_time delay is too long with very low baudrates. */178178+ ndelay(p->frame_time);179179+ } while (--retries);180180+181181+ lsr = serial_lsr_in(up);182182+ if (lsr & UART_LSR_DR) {183183+ serial_port_in(p, UART_RX);184184+ up->lsr_saved_flags = 0;185185+ }186186+187187+ /* Now guaranteed to have BUSY deasserted? 
Just sanity check */188188+ if (serial_port_in(p, usr_reg) & DW_UART_USR_BUSY) {189189+ dw8250_idle_exit(p);190190+ return -EBUSY;191191+ }192192+193193+ return 0;194194+}195195+196196+static void dw8250_set_divisor(struct uart_port *p, unsigned int baud,197197+ unsigned int quot, unsigned int quot_frac)198198+{199199+ struct uart_8250_port *up = up_to_u8250p(p);200200+ int ret;201201+202202+ ret = dw8250_idle_enter(p);203203+ if (ret < 0)204204+ return;205205+206206+ serial_port_out(p, UART_LCR, up->lcr | UART_LCR_DLAB);207207+ if (!(serial_port_in(p, UART_LCR) & UART_LCR_DLAB))208208+ goto idle_failed;209209+210210+ serial_dl_write(up, quot);211211+ serial_port_out(p, UART_LCR, up->lcr);212212+213213+idle_failed:214214+ dw8250_idle_exit(p);159215}160216161217/*162218 * This function is being called as part of the uart_port::serial_out()163163- * routine. Hence, it must not call serial_port_out() or serial_out()164164- * against the modified registers here, i.e. LCR.219219+ * routine. Hence, special care must be taken when serial_port_out() or220220+ * serial_out() against the modified registers here, i.e. 
LCR (d->in_idle is221221+ * used to break recursion loop).165222 */166223static void dw8250_check_lcr(struct uart_port *p, unsigned int offset, u32 value)167224{168225 struct dw8250_data *d = to_dw8250_data(p->private_data);169169- void __iomem *addr = p->membase + (offset << p->regshift);170170- int tries = 1000;226226+ u32 lcr;227227+ int ret;171228172229 if (offset != UART_LCR || d->uart_16550_compatible)173230 return;174231232232+ lcr = serial_port_in(p, UART_LCR);233233+175234 /* Make sure LCR write wasn't ignored */176176- while (tries--) {177177- u32 lcr = serial_port_in(p, offset);235235+ if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR))236236+ return;178237179179- if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR))180180- return;238238+ if (d->in_idle)239239+ goto write_err;181240182182- dw8250_force_idle(p);241241+ ret = dw8250_idle_enter(p);242242+ if (ret < 0)243243+ goto write_err;183244184184-#ifdef CONFIG_64BIT185185- if (p->type == PORT_OCTEON)186186- __raw_writeq(value & 0xff, addr);187187- else188188-#endif189189- if (p->iotype == UPIO_MEM32)190190- writel(value, addr);191191- else if (p->iotype == UPIO_MEM32BE)192192- iowrite32be(value, addr);193193- else194194- writeb(value, addr);195195- }245245+ serial_port_out(p, UART_LCR, value);246246+ dw8250_idle_exit(p);247247+ return;248248+249249+write_err:196250 /*197251 * FIXME: this deadlocks if port->lock is already held198252 * dev_err(p->dev, "Couldn't set LCR to %d\n", value);199253 */254254+ return; /* Silences "label at the end of compound statement" */255255+}256256+257257+/*258258+ * With BUSY, LCR writes can be very expensive (IRQ + complex retry logic).259259+ * If the write does not change the value of the LCR register, skip it entirely.260260+ */261261+static bool dw8250_can_skip_reg_write(struct uart_port *p, unsigned int offset, u32 value)262262+{263263+ struct dw8250_data *d = to_dw8250_data(p->private_data);264264+ u32 lcr;265265+266266+ if (offset != UART_LCR || 
d->uart_16550_compatible)267267+ return false;268268+269269+ lcr = serial_port_in(p, offset);270270+ return lcr == value;200271}201272202273/* Returns once the transmitter is empty or we run out of retries */···314207315208static void dw8250_serial_out(struct uart_port *p, unsigned int offset, u32 value)316209{210210+ if (dw8250_can_skip_reg_write(p, offset, value))211211+ return;212212+317213 writeb(value, p->membase + (offset << p->regshift));318214 dw8250_check_lcr(p, offset, value);319215}320216321217static void dw8250_serial_out38x(struct uart_port *p, unsigned int offset, u32 value)322218{219219+ if (dw8250_can_skip_reg_write(p, offset, value))220220+ return;221221+323222 /* Allow the TX to drain before we reconfigure */324223 if (offset == UART_LCR)325224 dw8250_tx_wait_empty(p);···350237351238static void dw8250_serial_outq(struct uart_port *p, unsigned int offset, u32 value)352239{240240+ if (dw8250_can_skip_reg_write(p, offset, value))241241+ return;242242+353243 value &= 0xff;354244 __raw_writeq(value, p->membase + (offset << p->regshift));355245 /* Read back to ensure register write ordering. */···364248365249static void dw8250_serial_out32(struct uart_port *p, unsigned int offset, u32 value)366250{251251+ if (dw8250_can_skip_reg_write(p, offset, value))252252+ return;253253+367254 writel(value, p->membase + (offset << p->regshift));368255 dw8250_check_lcr(p, offset, value);369256}···380261381262static void dw8250_serial_out32be(struct uart_port *p, unsigned int offset, u32 value)382263{264264+ if (dw8250_can_skip_reg_write(p, offset, value))265265+ return;266266+383267 iowrite32be(value, p->membase + (offset << p->regshift));384268 dw8250_check_lcr(p, offset, value);385269}···394272 return dw8250_modify_msr(p, offset, value);395273}396274275275+/*276276+ * INTC10EE UART can IRQ storm while reporting IIR_NO_INT. 
Inducing IIR value
+ * change has been observed to break the storm.
+ *
+ * If Tx is empty (THRE asserted), we use here IER_THRI to cause IIR_NO_INT ->
+ * IIR_THRI transition.
+ */
+static void dw8250_quirk_ier_kick(struct uart_port *p)
+{
+	struct uart_8250_port *up = up_to_u8250p(p);
+	u32 lsr;
+
+	if (up->ier & UART_IER_THRI)
+		return;
+
+	lsr = serial_lsr_in(up);
+	if (!(lsr & UART_LSR_THRE))
+		return;
+
+	serial_port_out(p, UART_IER, up->ier | UART_IER_THRI);
+	serial_port_in(p, UART_LCR); /* safe, no side-effects */
+	serial_port_out(p, UART_IER, up->ier);
+}
 
 static int dw8250_handle_irq(struct uart_port *p)
 {
···
 	bool rx_timeout = (iir & 0x3f) == UART_IIR_RX_TIMEOUT;
 	unsigned int quirks = d->pdata->quirks;
 	unsigned int status;
-	unsigned long flags;
+
+	guard(uart_port_lock_irqsave)(p);
+
+	switch (FIELD_GET(DW_UART_IIR_IID, iir)) {
+	case UART_IIR_NO_INT:
+		if (d->uart_16550_compatible || up->dma)
+			return 0;
+
+		if (quirks & DW_UART_QUIRK_IER_KICK &&
+		    d->no_int_count == (DW_UART_QUIRK_IER_KICK_THRES - 1))
+			dw8250_quirk_ier_kick(p);
+		d->no_int_count = (d->no_int_count + 1) % DW_UART_QUIRK_IER_KICK_THRES;
+
+		return 0;
+
+	case UART_IIR_BUSY:
+		/* Clear the USR */
+		serial_port_in(p, d->pdata->usr_reg);
+
+		d->no_int_count = 0;
+
+		return 1;
+	}
+
+	d->no_int_count = 0;
 
 	/*
 	 * There are ways to get Designware-based UARTs into a state where
···
 	 * so we limit the workaround only to non-DMA mode.
 	 */
 	if (!up->dma && rx_timeout) {
-		uart_port_lock_irqsave(p, &flags);
 		status = serial_lsr_in(up);
 
 		if (!(status & (UART_LSR_DR | UART_LSR_BI)))
 			serial_port_in(p, UART_RX);
-
-		uart_port_unlock_irqrestore(p, flags);
 	}
 
 	/* Manually stop the Rx DMA transfer when acting as flow controller */
 	if (quirks & DW_UART_QUIRK_IS_DMA_FC && up->dma && up->dma->rx_running && rx_timeout) {
-		uart_port_lock_irqsave(p, &flags);
 		status = serial_lsr_in(up);
-		uart_port_unlock_irqrestore(p, flags);
 
 		if (status & (UART_LSR_DR | UART_LSR_BI)) {
 			dw8250_writel_ext(p, RZN1_UART_RDMACR, 0);
···
 		}
 	}
 
-	if (serial8250_handle_irq(p, iir))
-		return 1;
+	serial8250_handle_irq_locked(p, iir);
 
-	if ((iir & UART_IIR_BUSY) == UART_IIR_BUSY) {
-		/* Clear the USR */
-		serial_port_in(p, d->pdata->usr_reg);
-
-		return 1;
-	}
-
-	return 0;
+	return 1;
 }
 
 static void dw8250_clk_work_cb(struct work_struct *work)
···
 	reset_control_assert(data);
 }
 
+static void dw8250_shutdown(struct uart_port *port)
+{
+	struct dw8250_data *d = to_dw8250_data(port->private_data);
+
+	serial8250_do_shutdown(port);
+	d->no_int_count = 0;
+}
+
 static int dw8250_probe(struct platform_device *pdev)
 {
 	struct uart_8250_port uart = {}, *up = &uart;
···
 	p->type = PORT_8250;
 	p->flags = UPF_FIXED_PORT;
 	p->dev = dev;
+
 	p->set_ldisc = dw8250_set_ldisc;
 	p->set_termios = dw8250_set_termios;
+	p->set_divisor = dw8250_set_divisor;
 
 	data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
 	if (!data)
···
 	dw8250_quirks(p, data);
 
 	/* If the Busy Functionality is not implemented, don't handle it */
-	if (data->uart_16550_compatible)
+	if (data->uart_16550_compatible) {
 		p->handle_irq = NULL;
-	else if (data->pdata)
+	} else if (data->pdata) {
 		p->handle_irq = dw8250_handle_irq;
+		p->shutdown = dw8250_shutdown;
+	}
 
 	dw8250_setup_dma_filter(p, data);
···
 	.quirks = DW_UART_QUIRK_SKIP_SET_RATE,
 };
 
+static const struct dw8250_platform_data dw8250_intc10ee = {
+	.usr_reg = DW_UART_USR,
+	.quirks = DW_UART_QUIRK_IER_KICK,
+};
+
 static const struct of_device_id dw8250_of_match[] = {
 	{ .compatible = "snps,dw-apb-uart", .data = &dw8250_dw_apb },
 	{ .compatible = "cavium,octeon-3860-uart", .data = &dw8250_octeon_3860_data },
···
 	{ "INT33C5", (kernel_ulong_t)&dw8250_dw_apb },
 	{ "INT3434", (kernel_ulong_t)&dw8250_dw_apb },
 	{ "INT3435", (kernel_ulong_t)&dw8250_dw_apb },
-	{ "INTC10EE", (kernel_ulong_t)&dw8250_dw_apb },
+	{ "INTC10EE", (kernel_ulong_t)&dw8250_intc10ee },
 	{ },
 };
 MODULE_DEVICE_TABLE(acpi, dw8250_acpi_match);
···
 module_platform_driver(dw8250_platform_driver);
 
+MODULE_IMPORT_NS("SERIAL_8250");
 MODULE_AUTHOR("Jamie Iles");
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Synopsys DesignWare 8250 serial port driver");
drivers/tty/serial/8250/8250_port.c
···
 #include <linux/irq.h>
 #include <linux/console.h>
 #include <linux/gpio/consumer.h>
+#include <linux/lockdep.h>
 #include <linux/sysrq.h>
 #include <linux/delay.h>
 #include <linux/platform_device.h>
···
 /*
  * FIFO support.
  */
-static void serial8250_clear_fifos(struct uart_8250_port *p)
+void serial8250_clear_fifos(struct uart_8250_port *p)
 {
 	if (p->capabilities & UART_CAP_FIFO) {
 		serial_out(p, UART_FCR, UART_FCR_ENABLE_FIFO);
···
 		serial_out(p, UART_FCR, 0);
 	}
 }
+EXPORT_SYMBOL_NS_GPL(serial8250_clear_fifos, "SERIAL_8250");
 
 static enum hrtimer_restart serial8250_em485_handle_start_tx(struct hrtimer *t);
 static enum hrtimer_restart serial8250_em485_handle_stop_tx(struct hrtimer *t);
···
 }
 
 /*
- * This handles the interrupt from one port.
+ * Context: port's lock must be held by the caller.
  */
-int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
+void serial8250_handle_irq_locked(struct uart_port *port, unsigned int iir)
 {
 	struct uart_8250_port *up = up_to_u8250p(port);
 	struct tty_port *tport = &port->state->port;
 	bool skip_rx = false;
-	unsigned long flags;
 	u16 status;
 
-	if (iir & UART_IIR_NO_INT)
-		return 0;
-
-	uart_port_lock_irqsave(port, &flags);
+	lockdep_assert_held_once(&port->lock);
 
 	status = serial_lsr_in(up);
···
 	else if (!up->dma->tx_running)
 		__stop_tx(up);
 	}
-
-	uart_unlock_and_check_sysrq_irqrestore(port, flags);
+}
+EXPORT_SYMBOL_NS_GPL(serial8250_handle_irq_locked, "SERIAL_8250");
+
+/*
+ * This handles the interrupt from one port.
+ */
+int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
+{
+	if (iir & UART_IIR_NO_INT)
+		return 0;
+
+	guard(uart_port_lock_irqsave)(port);
+	serial8250_handle_irq_locked(port, iir);
 
 	return 1;
 }
···
 	if (up->port.flags & UPF_NO_THRE_TEST)
 		return;
 
-	if (port->irqflags & IRQF_SHARED)
-		disable_irq_nosync(port->irq);
+	disable_irq(port->irq);
 
 	/*
 	 * Test for UARTs that do not reassert THRE when the transmitter is idle and the interrupt
···
 		serial_port_out(port, UART_IER, 0);
 	}
 
-	if (port->irqflags & IRQF_SHARED)
-		enable_irq(port->irq);
+	enable_irq(port->irq);
 
 	/*
 	 * If the interrupt is not reasserted, or we otherwise don't trust the iir, setup a timer to
···
 void serial8250_do_shutdown(struct uart_port *port)
 {
 	struct uart_8250_port *up = up_to_u8250p(port);
+	u32 lcr;
 
 	serial8250_rpm_get(up);
 	/*
···
 		port->mctrl &= ~TIOCM_OUT2;
 
 		serial8250_set_mctrl(port, port->mctrl);
+
+		/* Disable break condition */
+		lcr = serial_port_in(port, UART_LCR);
+		lcr &= ~UART_LCR_SBC;
+		serial_port_out(port, UART_LCR, lcr);
 	}
 
-	/*
-	 * Disable break condition and FIFOs
-	 */
-	serial_port_out(port, UART_LCR,
-			serial_port_in(port, UART_LCR) & ~UART_LCR_SBC);
 	serial8250_clear_fifos(up);
 
 	rsa_disable(up);
···
 	 * the IRQ chain.
 	 */
 	serial_port_in(port, UART_RX);
+	/*
+	 * LCR writes on DW UART can trigger late (unmaskable) IRQs.
+	 * Handle them before releasing the handler.
+	 */
+	synchronize_irq(port->irq);
+
 	serial8250_rpm_put(up);
 
 	up->ops->release_irq(up);
···
 }
 EXPORT_SYMBOL_GPL(serial8250_set_defaults);
 
+void serial8250_fifo_wait_for_lsr_thre(struct uart_8250_port *up, unsigned int count)
+{
+	unsigned int i;
+
+	for (i = 0; i < count; i++) {
+		if (wait_for_lsr(up, UART_LSR_THRE))
+			return;
+	}
+}
+EXPORT_SYMBOL_NS_GPL(serial8250_fifo_wait_for_lsr_thre, "SERIAL_8250");
+
 #ifdef CONFIG_SERIAL_8250_CONSOLE
 
 static void serial8250_console_putchar(struct uart_port *port, unsigned char ch)
···
 	serial8250_out_MCR(up, up->mcr | UART_MCR_DTR | UART_MCR_RTS);
 }
 
-static void fifo_wait_for_lsr(struct uart_8250_port *up, unsigned int count)
-{
-	unsigned int i;
-
-	for (i = 0; i < count; i++) {
-		if (wait_for_lsr(up, UART_LSR_THRE))
-			return;
-	}
-}
-
 /*
  * Print a string to the serial port using the device FIFO
  *
···
 
 	while (s != end) {
 		/* Allow timeout for each byte of a possibly full FIFO */
-		fifo_wait_for_lsr(up, fifosize);
+		serial8250_fifo_wait_for_lsr_thre(up, fifosize);
 
 		for (i = 0; i < fifosize && s != end; ++i) {
 			if (*s == '\n' && !cr_sent) {
···
 	 * Allow timeout for each byte written since the caller will only wait
 	 * for UART_LSR_BOTH_EMPTY using the timeout of a single character
 	 */
-	fifo_wait_for_lsr(up, tx_count);
+	serial8250_fifo_wait_for_lsr_thre(up, tx_count);
 }
 
 /*
drivers/tty/serial/serial_core.c (+4, -1)
···
 	unsigned int ret;
 
 	port = uart_port_ref_lock(state, &flags);
-	ret = kfifo_avail(&state->port.xmit_fifo);
+	if (!state->port.xmit_buf)
+		ret = 0;
+	else
+		ret = kfifo_avail(&state->port.xmit_fifo);
 	uart_port_unlock_deref(port, flags);
 	return ret;
 }
drivers/virtio/virtio_ring.c
···
  * @data: the token identifying the buffer.
  * @gfp: how to do memory allocations (if necessary).
  *
- * Same as virtqueue_add_inbuf but passes DMA_ATTR_CPU_CACHE_CLEAN to indicate
- * that the CPU will not dirty any cacheline overlapping this buffer while it
- * is available, and to suppress overlapping cacheline warnings in DMA debug
- * builds.
+ * Same as virtqueue_add_inbuf but passes DMA_ATTR_DEBUGGING_IGNORE_CACHELINES
+ * to indicate that the CPU will not dirty any cacheline overlapping this buffer
+ * while it is available, and to suppress overlapping cacheline warnings in DMA
+ * debug builds.
  *
  * Caller must ensure we don't call this with other virtqueue operations
  * at the same time (except where noted).
···
 			      gfp_t gfp)
 {
 	return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, false, gfp,
-			     DMA_ATTR_CPU_CACHE_CLEAN);
+			     DMA_ATTR_DEBUGGING_IGNORE_CACHELINES);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_cache_clean);
 
drivers/xen/privcmd.c (+70, -3)
···
 #include <linux/eventfd.h>
 #include <linux/file.h>
 #include <linux/kernel.h>
+#include <linux/kstrtox.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
 #include <linux/poll.h>
···
 #include <linux/seq_file.h>
 #include <linux/miscdevice.h>
 #include <linux/moduleparam.h>
+#include <linux/notifier.h>
+#include <linux/security.h>
 #include <linux/virtio_mmio.h>
+#include <linux/wait.h>
 
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
···
 #include <xen/page.h>
 #include <xen/xen-ops.h>
 #include <xen/balloon.h>
+#include <xen/xenbus.h>
 #ifdef CONFIG_XEN_ACPI
 #include <xen/acpi.h>
 #endif
···
 MODULE_PARM_DESC(dm_op_buf_max_size,
 		 "Maximum size of a dm_op hypercall buffer");
 
+static bool unrestricted;
+module_param(unrestricted, bool, 0);
+MODULE_PARM_DESC(unrestricted,
+		 "Don't restrict hypercalls to target domain if running in a domU");
+
 struct privcmd_data {
 	domid_t domid;
 };
+
+/* DOMID_INVALID implies no restriction */
+static domid_t target_domain = DOMID_INVALID;
+static bool restrict_wait;
+static DECLARE_WAIT_QUEUE_HEAD(restrict_wait_wq);
 
 static int privcmd_vma_range_is_mapped(
 	struct vm_area_struct *vma,
···
 
 static int privcmd_open(struct inode *ino, struct file *file)
 {
-	struct privcmd_data *data = kzalloc_obj(*data);
+	struct privcmd_data *data;
 
+	if (wait_event_interruptible(restrict_wait_wq, !restrict_wait) < 0)
+		return -EINTR;
+
+	data = kzalloc_obj(*data);
 	if (!data)
 		return -ENOMEM;
 
-	/* DOMID_INVALID implies no restriction */
-	data->domid = DOMID_INVALID;
+	data->domid = target_domain;
 
 	file->private_data = data;
 	return 0;
···
 	.fops = &xen_privcmd_fops,
 };
 
+static int init_restrict(struct notifier_block *notifier,
+			 unsigned long event,
+			 void *data)
+{
+	char *target;
+	unsigned int domid;
+
+	/* Default to a guaranteed unused domain-id. */
+	target_domain = DOMID_IDLE;
+
+	target = xenbus_read(XBT_NIL, "target", "", NULL);
+	if (IS_ERR(target) || kstrtouint(target, 10, &domid)) {
+		pr_err("No target domain found, blocking all hypercalls\n");
+		goto out;
+	}
+
+	target_domain = domid;
+
+ out:
+	if (!IS_ERR(target))
+		kfree(target);
+
+	restrict_wait = false;
+	wake_up_all(&restrict_wait_wq);
+
+	return NOTIFY_DONE;
+}
+
+static struct notifier_block xenstore_notifier = {
+	.notifier_call = init_restrict,
+};
+
+static void __init restrict_driver(void)
+{
+	if (unrestricted) {
+		if (security_locked_down(LOCKDOWN_XEN_USER_ACTIONS))
+			pr_warn("Kernel is locked down, parameter \"unrestricted\" ignored\n");
+		else
+			return;
+	}
+
+	restrict_wait = true;
+
+	register_xenstore_notifier(&xenstore_notifier);
+}
+
 static int __init privcmd_init(void)
 {
 	int err;
 
 	if (!xen_domain())
 		return -ENODEV;
+
+	if (!xen_initial_domain())
+		restrict_driver();
 
 	err = misc_register(&privcmd_dev);
 	if (err != 0) {
fs/btrfs/backref.c
···
 		.indirect_missing_keys = PREFTREE_INIT
 	};
 
+	if (unlikely(!root)) {
+		btrfs_err(ctx->fs_info,
+			  "missing extent root for extent at bytenr %llu",
+			  ctx->bytenr);
+		return -EUCLEAN;
+	}
+
 	/* Roots ulist is not needed when using a sharedness check context. */
 	if (sc)
 		ASSERT(ctx->roots == NULL);
···
 	struct btrfs_extent_item *ei;
 	struct btrfs_key key;
 
+	if (unlikely(!extent_root)) {
+		btrfs_err(fs_info,
+			  "missing extent root for extent at bytenr %llu",
+			  logical);
+		return -EUCLEAN;
+	}
+
 	key.objectid = logical;
 	if (btrfs_fs_incompat(fs_info, SKINNY_METADATA))
 		key.type = BTRFS_METADATA_ITEM_KEY;
···
 	struct btrfs_key key;
 	int ret;
 
+	if (unlikely(!extent_root)) {
+		btrfs_err(fs_info,
+			  "missing extent root for extent at bytenr %llu",
+			  bytenr);
+		return -EUCLEAN;
+	}
+
 	key.objectid = bytenr;
 	key.type = BTRFS_METADATA_ITEM_KEY;
 	key.offset = (u64)-1;
···
 
 	/* We're at keyed items, there is no inline item, go to the next one */
 	extent_root = btrfs_extent_root(iter->fs_info, iter->bytenr);
+	if (unlikely(!extent_root)) {
+		btrfs_err(iter->fs_info,
+			  "missing extent root for extent at bytenr %llu",
+			  iter->bytenr);
+		return -EUCLEAN;
+	}
+
 	ret = btrfs_next_item(extent_root, iter->path);
 	if (ret)
 		return ret;
fs/btrfs/block-group.c (+36)
···
 
 	last = max_t(u64, block_group->start, BTRFS_SUPER_INFO_OFFSET);
 	extent_root = btrfs_extent_root(fs_info, last);
+	if (unlikely(!extent_root)) {
+		btrfs_err(fs_info,
+			  "missing extent root for block group at offset %llu",
+			  block_group->start);
+		return -EUCLEAN;
+	}
 
 #ifdef CONFIG_BTRFS_DEBUG
 	/*
···
 	int ret;
 
 	root = btrfs_block_group_root(fs_info);
+	if (unlikely(!root)) {
+		btrfs_err(fs_info, "missing block group root");
+		return -EUCLEAN;
+	}
+
 	key.objectid = block_group->start;
 	key.type = BTRFS_BLOCK_GROUP_ITEM_KEY;
 	key.offset = block_group->length;
···
 	struct btrfs_root *root = btrfs_block_group_root(fs_info);
 	struct btrfs_chunk_map *map;
 	unsigned int num_items;
+
+	if (unlikely(!root)) {
+		btrfs_err(fs_info, "missing block group root");
+		return ERR_PTR(-EUCLEAN);
+	}
 
 	map = btrfs_find_chunk_map(fs_info, chunk_offset, 1);
 	ASSERT(map != NULL);
···
 	int ret;
 	struct btrfs_key found_key;
 
+	if (unlikely(!root)) {
+		btrfs_err(fs_info, "missing block group root");
+		return -EUCLEAN;
+	}
+
 	btrfs_for_each_slot(root, key, &found_key, path, ret) {
 		if (found_key.objectid >= key->objectid &&
 		    found_key.type == BTRFS_BLOCK_GROUP_ITEM_KEY) {
···
 	size_t size;
 	int ret;
 
+	if (unlikely(!root)) {
+		btrfs_err(fs_info, "missing block group root");
+		return -EUCLEAN;
+	}
+
 	spin_lock(&block_group->lock);
 	btrfs_set_stack_block_group_v2_used(&bgi, block_group->used);
 	btrfs_set_stack_block_group_v2_chunk_objectid(&bgi, block_group->global_root_id);
···
 	int ret;
 	bool dirty_bg_running;
 
+	if (unlikely(!root)) {
+		btrfs_err(fs_info, "missing block group root");
+		return -EUCLEAN;
+	}
+
 	/*
 	 * This can only happen when we are doing read-only scrub on read-only
 	 * mount.
···
 	u32 old_last_identity_remap_count;
 	u64 used, remap_bytes;
 	u32 identity_remap_count;
+
+	if (unlikely(!root)) {
+		btrfs_err(fs_info, "missing block group root");
+		return -EUCLEAN;
+	}
 
 	/*
 	 * Block group items update can be triggered out of commit transaction
fs/btrfs/compression.c (+8, -3)
···
 
 	ASSERT(IS_ALIGNED(ordered->file_offset, fs_info->sectorsize));
 	ASSERT(IS_ALIGNED(ordered->num_bytes, fs_info->sectorsize));
-	ASSERT(cb->writeback);
+	/*
+	 * This flag determines if we should clear the writeback flag from the
+	 * page cache. But this function is only utilized by encoded writes,
+	 * which never go through the page cache.
+	 */
+	ASSERT(!cb->writeback);
 
 	cb->start = ordered->file_offset;
 	cb->len = ordered->num_bytes;
+	ASSERT(cb->bbio.bio.bi_iter.bi_size == ordered->disk_num_bytes);
 	cb->compressed_len = ordered->disk_num_bytes;
 	cb->bbio.bio.bi_iter.bi_sector = ordered->disk_bytenr >> SECTOR_SHIFT;
 	cb->bbio.ordered = ordered;
···
 	cb = alloc_compressed_bio(inode, start, REQ_OP_WRITE, end_bbio_compressed_write);
 	cb->start = start;
 	cb->len = len;
-	cb->writeback = true;
-
+	cb->writeback = false;
 	return cb;
 }
fs/btrfs/disk-io.c (+17, -3)
···
  * this will bump the backup pointer by one when it is
  * done
  */
-static void backup_super_roots(struct btrfs_fs_info *info)
+static int backup_super_roots(struct btrfs_fs_info *info)
 {
 	const int next_backup = info->backup_root_index;
 	struct btrfs_root_backup *root_backup;
···
 	if (!btrfs_fs_incompat(info, EXTENT_TREE_V2)) {
 		struct btrfs_root *extent_root = btrfs_extent_root(info, 0);
 		struct btrfs_root *csum_root = btrfs_csum_root(info, 0);
+
+		if (unlikely(!extent_root)) {
+			btrfs_err(info, "missing extent root for extent at bytenr 0");
+			return -EUCLEAN;
+		}
+		if (unlikely(!csum_root)) {
+			btrfs_err(info, "missing csum root for extent at bytenr 0");
+			return -EUCLEAN;
+		}
 
 		btrfs_set_backup_extent_root(root_backup,
 					     extent_root->node->start);
···
 	memcpy(&info->super_copy->super_roots,
 	       &info->super_for_commit->super_roots,
 	       sizeof(*root_backup) * BTRFS_NUM_BACKUP_ROOTS);
+
+	return 0;
 }
 
 /*
···
 	 * not from fsync where the tree roots in fs_info have not
 	 * been consistent on disk.
 	 */
-	if (max_mirrors == 0)
-		backup_super_roots(fs_info);
+	if (max_mirrors == 0) {
+		ret = backup_super_roots(fs_info);
+		if (ret < 0)
+			return ret;
+	}
 
 	sb = fs_info->super_for_commit;
 	dev_item = &sb->dev_item;
fs/btrfs/extent-tree.c (+93, -5)
···
 	struct btrfs_key key;
 	BTRFS_PATH_AUTO_FREE(path);
 
+	if (unlikely(!root)) {
+		btrfs_err(fs_info,
+			  "missing extent root for extent at bytenr %llu", start);
+		return -EUCLEAN;
+	}
+
 	path = btrfs_alloc_path();
 	if (!path)
 		return -ENOMEM;
···
 	key.offset = offset;
 
 	extent_root = btrfs_extent_root(fs_info, bytenr);
+	if (unlikely(!extent_root)) {
+		btrfs_err(fs_info,
+			  "missing extent root for extent at bytenr %llu", bytenr);
+		return -EUCLEAN;
+	}
+
 	ret = btrfs_search_slot(NULL, extent_root, &key, path, 0, 0);
 	if (ret < 0)
 		return ret;
···
 	int recow;
 	int ret;
 
+	if (unlikely(!root)) {
+		btrfs_err(trans->fs_info,
+			  "missing extent root for extent at bytenr %llu", bytenr);
+		return -EUCLEAN;
+	}
+
 	key.objectid = bytenr;
 	if (parent) {
 		key.type = BTRFS_SHARED_DATA_REF_KEY;
···
 	u32 size;
 	u32 num_refs;
 	int ret;
+
+	if (unlikely(!root)) {
+		btrfs_err(trans->fs_info,
+			  "missing extent root for extent at bytenr %llu", bytenr);
+		return -EUCLEAN;
+	}
 
 	key.objectid = bytenr;
 	if (node->parent) {
···
 	struct btrfs_key key;
 	int ret;
 
+	if (unlikely(!root)) {
+		btrfs_err(trans->fs_info,
+			  "missing extent root for extent at bytenr %llu", bytenr);
+		return -EUCLEAN;
+	}
+
 	key.objectid = bytenr;
 	if (parent) {
 		key.type = BTRFS_SHARED_BLOCK_REF_KEY;
···
 	struct btrfs_root *root = btrfs_extent_root(trans->fs_info, bytenr);
 	struct btrfs_key key;
 	int ret;
+
+	if (unlikely(!root)) {
+		btrfs_err(trans->fs_info,
+			  "missing extent root for extent at bytenr %llu", bytenr);
+		return -EUCLEAN;
+	}
 
 	key.objectid = bytenr;
 	if (node->parent) {
···
 	int ret;
 	bool skinny_metadata = btrfs_fs_incompat(fs_info, SKINNY_METADATA);
 	int needed;
+
+	if (unlikely(!root)) {
+		btrfs_err(fs_info,
+			  "missing extent root for extent at bytenr %llu", bytenr);
+		return -EUCLEAN;
+	}
 
 	key.objectid = bytenr;
 	key.type = BTRFS_EXTENT_ITEM_KEY;
···
 	}
 
 	root = btrfs_extent_root(fs_info, key.objectid);
+	if (unlikely(!root)) {
+		btrfs_err(fs_info,
+			  "missing extent root for extent at bytenr %llu",
+			  key.objectid);
+		return -EUCLEAN;
+	}
 again:
 	ret = btrfs_search_slot(trans, root, &key, path, 0, 1);
 	if (ret < 0) {
···
 		struct btrfs_root *csum_root;
 
 		csum_root = btrfs_csum_root(fs_info, head->bytenr);
-		ret = btrfs_del_csums(trans, csum_root, head->bytenr,
-				      head->num_bytes);
+		if (unlikely(!csum_root)) {
+			btrfs_err(fs_info,
+				  "missing csum root for extent at bytenr %llu",
+				  head->bytenr);
+			ret = -EUCLEAN;
+		} else {
+			ret = btrfs_del_csums(trans, csum_root, head->bytenr,
+					      head->num_bytes);
+		}
 	}
 	}
···
 	u32 expected_size;
 	int type;
 	int ret;
+
+	if (unlikely(!extent_root)) {
+		btrfs_err(fs_info,
+			  "missing extent root for extent at bytenr %llu", bytenr);
+		return -EUCLEAN;
+	}
 
 	key.objectid = bytenr;
 	key.type = BTRFS_EXTENT_ITEM_KEY;
···
 		struct btrfs_root *csum_root;
 
 		csum_root = btrfs_csum_root(trans->fs_info, bytenr);
+		if (unlikely(!csum_root)) {
+			ret = -EUCLEAN;
+			btrfs_abort_transaction(trans, ret);
+			btrfs_err(trans->fs_info,
+				  "missing csum root for extent at bytenr %llu",
+				  bytenr);
+			return ret;
+		}
+
 		ret = btrfs_del_csums(trans, csum_root, bytenr, num_bytes);
 		if (unlikely(ret)) {
 			btrfs_abort_transaction(trans, ret);
···
 	u64 delayed_ref_root = href->owning_root;
 
 	extent_root = btrfs_extent_root(info, bytenr);
-	ASSERT(extent_root);
+	if (unlikely(!extent_root)) {
+		btrfs_err(info,
+			  "missing extent root for extent at bytenr %llu", bytenr);
+		return -EUCLEAN;
+	}
 
 	path = btrfs_alloc_path();
 	if (!path)
···
 	size += btrfs_extent_inline_ref_size(BTRFS_EXTENT_OWNER_REF_KEY);
 	size += btrfs_extent_inline_ref_size(type);
 
+	extent_root = btrfs_extent_root(fs_info, ins->objectid);
+	if (unlikely(!extent_root)) {
+		btrfs_err(fs_info,
+			  "missing extent root for extent at bytenr %llu",
+			  ins->objectid);
+		return -EUCLEAN;
+	}
+
 	path = btrfs_alloc_path();
 	if (!path)
 		return -ENOMEM;
 
-	extent_root = btrfs_extent_root(fs_info, ins->objectid);
 	ret = btrfs_insert_empty_item(trans, extent_root, path, ins, size);
 	if (ret) {
 		btrfs_free_path(path);
···
 		size += sizeof(*block_info);
 	}
 
+	extent_root = btrfs_extent_root(fs_info, extent_key.objectid);
+	if (unlikely(!extent_root)) {
+		btrfs_err(fs_info,
+			  "missing extent root for extent at bytenr %llu",
+			  extent_key.objectid);
+		return -EUCLEAN;
+	}
+
 	path = btrfs_alloc_path();
 	if (!path)
 		return -ENOMEM;
 
-	extent_root = btrfs_extent_root(fs_info, extent_key.objectid);
 	ret = btrfs_insert_empty_item(trans, extent_root, path, &extent_key,
 				      size);
 	if (ret) {
fs/btrfs/file-item.c (+7)
···
 	/* Current item doesn't contain the desired range, search again */
 	btrfs_release_path(path);
 	csum_root = btrfs_csum_root(fs_info, disk_bytenr);
+	if (unlikely(!csum_root)) {
+		btrfs_err(fs_info,
+			  "missing csum root for extent at bytenr %llu",
+			  disk_bytenr);
+		return -EUCLEAN;
+	}
+
 	item = btrfs_lookup_csum(NULL, csum_root, path, disk_bytenr, 0);
 	if (IS_ERR(item)) {
 		ret = PTR_ERR(item);
fs/btrfs/free-space-tree.c (+8, -1)
···
 	if (ret)
 		return ret;
 
+	extent_root = btrfs_extent_root(trans->fs_info, block_group->start);
+	if (unlikely(!extent_root)) {
+		btrfs_err(trans->fs_info,
+			  "missing extent root for block group at offset %llu",
+			  block_group->start);
+		return -EUCLEAN;
+	}
+
 	mutex_lock(&block_group->free_space_lock);
 
 	/*
···
 	key.type = BTRFS_EXTENT_ITEM_KEY;
 	key.offset = 0;
 
-	extent_root = btrfs_extent_root(trans->fs_info, key.objectid);
 	ret = btrfs_search_slot_for_read(extent_root, &key, path, 1, 0);
 	if (ret < 0)
 		goto out_locked;
fs/btrfs/inode.c (+20, -5)
···
 	 */
 
 	csum_root = btrfs_csum_root(root->fs_info, io_start);
+	if (unlikely(!csum_root)) {
+		btrfs_err(root->fs_info,
+			  "missing csum root for extent at bytenr %llu", io_start);
+		ret = -EUCLEAN;
+		goto out;
+	}
+
 	ret = btrfs_lookup_csums_list(csum_root, io_start,
 				      io_start + args->file_extent.num_bytes - 1,
 				      NULL, nowait);
···
 	int ret;
 
 	list_for_each_entry(sum, list, list) {
-		trans->adding_csums = true;
-		if (!csum_root)
+		if (!csum_root) {
 			csum_root = btrfs_csum_root(trans->fs_info,
 						    sum->logical);
+			if (unlikely(!csum_root)) {
+				btrfs_err(trans->fs_info,
+					  "missing csum root for extent at bytenr %llu",
+					  sum->logical);
+				return -EUCLEAN;
+			}
+		}
+		trans->adding_csums = true;
 		ret = btrfs_csum_file_blocks(trans, csum_root, sum);
 		trans->adding_csums = false;
 		if (ret)
···
 	int compression;
 	size_t orig_count;
 	const u32 min_folio_size = btrfs_min_folio_size(fs_info);
+	const u32 blocksize = fs_info->sectorsize;
 	u64 start, end;
 	u64 num_bytes, ram_bytes, disk_num_bytes;
 	struct btrfs_key ins;
···
 		ret = -EFAULT;
 		goto out_cb;
 	}
-	if (bytes < min_folio_size)
-		folio_zero_range(folio, bytes, min_folio_size - bytes);
-	ret = bio_add_folio(&cb->bbio.bio, folio, folio_size(folio), 0);
+	if (!IS_ALIGNED(bytes, blocksize))
+		folio_zero_range(folio, bytes, round_up(bytes, blocksize) - bytes);
+	ret = bio_add_folio(&cb->bbio.bio, folio, round_up(bytes, blocksize), 0);
 	if (unlikely(!ret)) {
 		folio_put(folio);
 		ret = -EINVAL;
fs/btrfs/ioctl.c (+9, -3)
···
 		}
 	}
 
-	trans = btrfs_join_transaction(root);
+	/* 2 BTRFS_QGROUP_RELATION_KEY items. */
+	trans = btrfs_start_transaction(root, 2);
 	if (IS_ERR(trans)) {
 		ret = PTR_ERR(trans);
 		goto out;
···
 		goto out;
 	}
 
-	trans = btrfs_join_transaction(root);
+	/*
+	 * 1 BTRFS_QGROUP_INFO_KEY item.
+	 * 1 BTRFS_QGROUP_LIMIT_KEY item.
+	 */
+	trans = btrfs_start_transaction(root, 2);
 	if (IS_ERR(trans)) {
 		ret = PTR_ERR(trans);
 		goto out;
···
 		goto drop_write;
 	}
 
-	trans = btrfs_join_transaction(root);
+	/* 1 BTRFS_QGROUP_LIMIT_KEY item. */
+	trans = btrfs_start_transaction(root, 1);
 	if (IS_ERR(trans)) {
 		ret = PTR_ERR(trans);
 		goto out;
fs/btrfs/lzo.c (+2, -2)
···
 int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 {
 	struct workspace *workspace = list_entry(ws, struct workspace, list);
-	const struct btrfs_fs_info *fs_info = cb->bbio.inode->root->fs_info;
+	struct btrfs_fs_info *fs_info = cb->bbio.inode->root->fs_info;
 	const u32 sectorsize = fs_info->sectorsize;
 	struct folio_iter fi;
 	char *kaddr;
···
 	/* There must be a compressed folio and matches the sectorsize. */
 	if (unlikely(!fi.folio))
 		return -EINVAL;
-	ASSERT(folio_size(fi.folio) == sectorsize);
+	ASSERT(folio_size(fi.folio) == btrfs_min_folio_size(fs_info));
 	kaddr = kmap_local_folio(fi.folio, 0);
 	len_in = read_compress_length(kaddr);
 	kunmap_local(kaddr);
fs/btrfs/qgroup.c (+8)
···
 	mutex_lock(&fs_info->qgroup_rescan_lock);
 	extent_root = btrfs_extent_root(fs_info,
 					fs_info->qgroup_rescan_progress.objectid);
+	if (unlikely(!extent_root)) {
+		btrfs_err(fs_info,
+			  "missing extent root for extent at bytenr %llu",
+			  fs_info->qgroup_rescan_progress.objectid);
+		mutex_unlock(&fs_info->qgroup_rescan_lock);
+		return -EUCLEAN;
+	}
+
 	ret = btrfs_search_slot_for_read(extent_root,
 					 &fs_info->qgroup_rescan_progress,
 					 path, 1, 0);
fs/btrfs/raid56.c (+10, -2)
···
 static void fill_data_csums(struct btrfs_raid_bio *rbio)
 {
 	struct btrfs_fs_info *fs_info = rbio->bioc->fs_info;
-	struct btrfs_root *csum_root = btrfs_csum_root(fs_info,
-						       rbio->bioc->full_stripe_logical);
+	struct btrfs_root *csum_root;
 	const u64 start = rbio->bioc->full_stripe_logical;
 	const u32 len = (rbio->nr_data * rbio->stripe_nsectors) <<
 			fs_info->sectorsize_bits;
···
 				 GFP_NOFS);
 	if (!rbio->csum_buf || !rbio->csum_bitmap) {
 		ret = -ENOMEM;
+		goto error;
+	}
+
+	csum_root = btrfs_csum_root(fs_info, rbio->bioc->full_stripe_logical);
+	if (unlikely(!csum_root)) {
+		btrfs_err(fs_info,
+			  "missing csum root for extent at bytenr %llu",
+			  rbio->bioc->full_stripe_logical);
+		ret = -EUCLEAN;
 		goto error;
 	}
 
fs/btrfs/relocation.c (+32, -7)
···
 	dest_addr = ins.objectid;
 	dest_length = ins.offset;
 
+	dest_bg = btrfs_lookup_block_group(fs_info, dest_addr);
+
 	if (!is_data && !IS_ALIGNED(dest_length, fs_info->nodesize)) {
 		u64 new_length = ALIGN_DOWN(dest_length, fs_info->nodesize);
 
···
 	if (unlikely(ret))
 		goto end;
 
-	dest_bg = btrfs_lookup_block_group(fs_info, dest_addr);
-
 	adjust_block_group_remap_bytes(trans, dest_bg, dest_length);
 
 	mutex_lock(&dest_bg->free_space_lock);
 	bg_needs_free_space = test_bit(BLOCK_GROUP_FLAG_NEEDS_FREE_SPACE,
 				       &dest_bg->runtime_flags);
 	mutex_unlock(&dest_bg->free_space_lock);
-	btrfs_put_block_group(dest_bg);
 
 	if (bg_needs_free_space) {
 		ret = btrfs_add_block_group_free_space(trans, dest_bg);
···
 			btrfs_end_transaction(trans);
 		}
 	} else {
-		dest_bg = btrfs_lookup_block_group(fs_info, dest_addr);
 		btrfs_free_reserved_bytes(dest_bg, dest_length, 0);
-		btrfs_put_block_group(dest_bg);
 
 		ret = btrfs_commit_transaction(trans);
 	}
+
+	btrfs_put_block_group(dest_bg);
 
 	return ret;
 }
···
 	struct btrfs_space_info *sinfo = src_bg->space_info;
 
 	extent_root = btrfs_extent_root(fs_info, src_bg->start);
+	if (unlikely(!extent_root)) {
+		btrfs_err(fs_info,
+			  "missing extent root for block group at offset %llu",
+			  src_bg->start);
+		return -EUCLEAN;
+	}
 
 	trans = btrfs_start_transaction(extent_root, 0);
 	if (IS_ERR(trans))
···
 	int ret;
 	bool bg_is_ro = false;
 
+	if (unlikely(!extent_root)) {
+		btrfs_err(fs_info,
+			  "missing extent root for block group at offset %llu",
+			  group_start);
+		return -EUCLEAN;
+	}
+
 	/*
 	 * This only gets set if we had a half-deleted snapshot on mount. We
 	 * cannot allow relocation to start while we're still trying to clean up
···
 		goto out;
 	}
 
+	rc->extent_root = btrfs_extent_root(fs_info, 0);
+	if (unlikely(!rc->extent_root)) {
+		btrfs_err(fs_info, "missing extent root for extent at bytenr 0");
+		ret = -EUCLEAN;
+		goto out;
+	}
+
 	ret = reloc_chunk_start(fs_info);
 	if (ret < 0)
 		goto out_end;
-
-	rc->extent_root = btrfs_extent_root(fs_info, 0);
 
 	set_reloc_control(rc);
 
···
 	struct btrfs_root *csum_root = btrfs_csum_root(fs_info, disk_bytenr);
 	LIST_HEAD(list);
 	int ret;
+
+	if (unlikely(!csum_root)) {
+		btrfs_mark_ordered_extent_error(ordered);
+		btrfs_err(fs_info,
+			  "missing csum root for extent at bytenr %llu",
+			  disk_bytenr);
+		return -EUCLEAN;
+	}
 
 	ret = btrfs_lookup_csums_list(csum_root, disk_bytenr,
 				      disk_bytenr + ordered->num_bytes - 1,
+17
fs/btrfs/tree-checker.c
···12881288 btrfs_root_drop_level(&ri), BTRFS_MAX_LEVEL - 1);12891289 return -EUCLEAN;12901290 }12911291+ /*12921292+ * If drop_progress.objectid is non-zero, a btrfs_drop_snapshot() was12931293+ * interrupted and the resume point was recorded in drop_progress and12941294+ * drop_level. In that case drop_level must be >= 1: level 0 is the12951295+ * leaf level and drop_snapshot never saves a checkpoint there (it12961296+ * only records checkpoints at internal node levels in DROP_REFERENCE12971297+ * stage). A zero drop_level combined with a non-zero drop_progress12981298+ * objectid indicates on-disk corruption and would cause a BUG_ON in12991299+ * merge_reloc_root() and btrfs_drop_snapshot() at mount time.13001300+ */13011301+ if (unlikely(btrfs_disk_key_objectid(&ri.drop_progress) != 0 &&13021302+ btrfs_root_drop_level(&ri) == 0)) {13031303+ generic_err(leaf, slot,13041304+ "invalid root drop_level 0 with non-zero drop_progress objectid %llu",13051305+ btrfs_disk_key_objectid(&ri.drop_progress));13061306+ return -EUCLEAN;13071307+ }1291130812921309 /* Flags check */12931310 if (unlikely(btrfs_root_flags(&ri) & ~valid_root_flags)) {
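The tree-checker hunk above rejects a root item whose drop_progress objectid is non-zero while drop_level is 0. A simplified userspace sketch of that invariant (the scalar parameters are a stand-in for the on-disk btrfs_root_item fields, not the real structures):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A non-zero drop_progress with drop_level == 0 cannot be a valid
 * btrfs_drop_snapshot() resume point: checkpoints are only recorded
 * at internal node levels (>= 1), never at the leaf level, so this
 * combination indicates on-disk corruption. */
static bool drop_state_valid(uint64_t drop_progress_objectid,
			     uint8_t drop_level)
{
	return !(drop_progress_objectid != 0 && drop_level == 0);
}
```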
+21
fs/btrfs/tree-log.c
···984984985985 sums = list_first_entry(&ordered_sums, struct btrfs_ordered_sum, list);986986 csum_root = btrfs_csum_root(fs_info, sums->logical);987987+ if (unlikely(!csum_root)) {988988+ btrfs_err(fs_info,989989+ "missing csum root for extent at bytenr %llu",990990+ sums->logical);991991+ ret = -EUCLEAN;992992+ }993993+987994 if (!ret) {988995 ret = btrfs_del_csums(trans, csum_root, sums->logical,989996 sums->len);···48974890 }4898489148994892 csum_root = btrfs_csum_root(trans->fs_info, disk_bytenr);48934893+ if (unlikely(!csum_root)) {48944894+ btrfs_err(trans->fs_info,48954895+ "missing csum root for extent at bytenr %llu",48964896+ disk_bytenr);48974897+ return -EUCLEAN;48984898+ }48994899+49004900 disk_bytenr += extent_offset;49014901 ret = btrfs_lookup_csums_list(csum_root, disk_bytenr,49024902 disk_bytenr + extent_num_bytes - 1,···51005086 /* block start is already adjusted for the file extent offset. */51015087 block_start = btrfs_extent_map_block_start(em);51025088 csum_root = btrfs_csum_root(trans->fs_info, block_start);50895089+ if (unlikely(!csum_root)) {50905090+ btrfs_err(trans->fs_info,50915091+ "missing csum root for extent at bytenr %llu",50925092+ block_start);50935093+ return -EUCLEAN;50945094+ }50955095+51035096 ret = btrfs_lookup_csums_list(csum_root, block_start + csum_offset,51045097 block_start + csum_offset + csum_len - 1,51055098 &ordered_sums, false);
+17-8
fs/btrfs/volumes.c
···42774277end:42784278 while (!list_empty(chunks)) {42794279 bool is_unused;42804280+ struct btrfs_block_group *bg;4280428142814282 rci = list_first_entry(chunks, struct remap_chunk_info, list);4282428342834283- spin_lock(&rci->bg->lock);42844284- is_unused = !btrfs_is_block_group_used(rci->bg);42854285- spin_unlock(&rci->bg->lock);42844284+ bg = rci->bg;42854285+ if (bg) {42864286+ /*42874287+ * This is a bit racy and the 'used' status can change42884288+ * but this is not a problem as later functions will42894289+ * verify it again.42904290+ */42914291+ spin_lock(&bg->lock);42924292+ is_unused = !btrfs_is_block_group_used(bg);42934293+ spin_unlock(&bg->lock);4286429442874287- if (is_unused)42884288- btrfs_mark_bg_unused(rci->bg);42954295+ if (is_unused)42964296+ btrfs_mark_bg_unused(bg);4289429742904290- if (rci->made_ro)42914291- btrfs_dec_block_group_ro(rci->bg);42984298+ if (rci->made_ro)42994299+ btrfs_dec_block_group_ro(bg);4292430042934293- btrfs_put_block_group(rci->bg);43014301+ btrfs_put_block_group(bg);43024302+ }4294430342954304 list_del(&rci->list);42964305 kfree(rci);
+7
fs/btrfs/zoned.c
···12611261 key.offset = 0;1262126212631263 root = btrfs_extent_root(fs_info, key.objectid);12641264+ if (unlikely(!root)) {12651265+ btrfs_err(fs_info,12661266+ "missing extent root for extent at bytenr %llu",12671267+ key.objectid);12681268+ return -EUCLEAN;12691269+ }12701270+12641271 ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);12651272 /* We should not find the exact match */12661273 if (unlikely(!ret))
···1616 select ZLIB_INFLATE if EROFS_FS_ZIP_DEFLATE1717 select ZSTD_DECOMPRESS if EROFS_FS_ZIP_ZSTD1818 help1919- EROFS (Enhanced Read-Only File System) is a lightweight read-only2020- file system with modern designs (e.g. no buffer heads, inline2121- xattrs/data, chunk-based deduplication, multiple devices, etc.) for2222- scenarios which need high-performance read-only solutions, e.g.2323- smartphones with Android OS, LiveCDs and high-density hosts with2424- numerous containers;1919+ EROFS (Enhanced Read-Only File System) is a modern, lightweight,2020+ secure read-only filesystem for various use cases, such as immutable2121+ system images, container images, application sandboxes, and datasets.25222626- It also provides transparent compression and deduplication support to2727- improve storage density and maintain relatively high compression2828- ratios, and it implements in-place decompression to temporarily reuse2929- page cache for compressed data using proper strategies, which is3030- quite useful for ensuring guaranteed end-to-end runtime decompression2323+ EROFS uses a flexible, hierarchical on-disk design so that features2424+ can be enabled on demand: the core on-disk format is block-aligned in2525+ order to perform optimally on all kinds of devices, including block2626+ and memory-backed devices; the format is easy to parse and has zero2727+ metadata redundancy, unlike generic filesystems, making it ideal for2828+ filesystem auditing and remote access; inline data, random-access2929+ friendly directory data, inline/shared extended attributes and3030+ chunk-based deduplication ensure space efficiency while maintaining3131+ high performance.3232+3333+ Optionally, it supports multiple devices to reference external data,3434+ enabling data sharing for container images.3535+3636+ It also has advanced encoded on-disk layouts, particularly for data3737+ compression and fine-grained deduplication. 
It utilizes fixed-size3838+ output compression to improve storage density while keeping relatively3939+ high compression ratios. Furthermore, it implements in-place4040+ decompression to reuse file pages to keep compressed data temporarily4141+ with proper strategies, which ensures guaranteed end-to-end runtime3142 performance under extreme memory pressure without extra cost.32433333- See the documentation at <file:Documentation/filesystems/erofs.rst>3434- and the web pages at <https://erofs.docs.kernel.org> for more details.4444+ For more details, see the web pages at <https://erofs.docs.kernel.org>4545+ and the documentation at <file:Documentation/filesystems/erofs.rst>.4646+4747+ To compile EROFS filesystem support as a module, choose M here. The4848+ module will be called erofs.35493650 If unsure, say N.3751···119105 depends on EROFS_FS120106 default y121107 help122122- Enable transparent compression support for EROFS file systems.108108+ Enable EROFS compression layouts so that filesystems containing109109+ compressed files can be parsed by the kernel.123110124111 If you don't want to enable compression feature, say N.125112
+2-4
fs/erofs/fileio.c
···2525 container_of(iocb, struct erofs_fileio_rq, iocb);2626 struct folio_iter fi;27272828- if (ret >= 0 && ret != rq->bio.bi_iter.bi_size) {2929- bio_advance(&rq->bio, ret);3030- zero_fill_bio(&rq->bio);3131- }2828+ if (ret >= 0 && ret != rq->bio.bi_iter.bi_size)2929+ ret = -EIO;3230 if (!rq->bio.bi_end_io) {3331 bio_for_each_folio_all(fi, &rq->bio) {3432 DBG_BUGON(folio_test_uptodate(fi.folio));
+13-2
fs/erofs/ishare.c
···200200201201int __init erofs_init_ishare(void)202202{203203- erofs_ishare_mnt = kern_mount(&erofs_anon_fs_type);204204- return PTR_ERR_OR_ZERO(erofs_ishare_mnt);203203+ struct vfsmount *mnt;204204+ int ret;205205+206206+ mnt = kern_mount(&erofs_anon_fs_type);207207+ if (IS_ERR(mnt))208208+ return PTR_ERR(mnt);209209+ /* generic_fadvise() doesn't work if s_bdi == &noop_backing_dev_info */210210+ ret = super_setup_bdi(mnt->mnt_sb);211211+ if (ret)212212+ kern_unmount(mnt);213213+ else214214+ erofs_ishare_mnt = mnt;215215+ return ret;205216}206217207218void erofs_exit_ishare(void)
+3
fs/erofs/zdata.c
···14451445 int bios)14461446{14471447 struct erofs_sb_info *const sbi = EROFS_SB(io->sb);14481448+ int gfp_flag;1448144914491450 /* wake up the caller thread for sync decompression */14501451 if (io->sync) {···14781477 sbi->sync_decompress = EROFS_SYNC_DECOMPRESS_FORCE_ON;14791478 return;14801479 }14801480+ gfp_flag = memalloc_noio_save();14811481 z_erofs_decompressqueue_work(&io->u.work);14821482+ memalloc_noio_restore(gfp_flag);14821483}1483148414841485static void z_erofs_fill_bio_vec(struct bio_vec *bvec,
+6
fs/smb/client/cifsglob.h
···23862386 return opts;23872387}2388238823892389+/*23902390+ * i_blocks is not derived from (i_size / i_blksize); it is always counted23912391+ * in 512-byte (2**9) units, rounded up so a 1-byte file is 1 block.23922392+ */23932393+#define CIFS_INO_BLOCKS(size) DIV_ROUND_UP_ULL((u64)(size), 512)23942394+23892395#endif /* _CIFS_GLOB_H */
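The new CIFS_INO_BLOCKS() helper centralizes the 512-byte block-count rounding that several later hunks previously open-coded as `(512 - 1 + size) >> 9`. A minimal userspace sketch of the same arithmetic (the kernel uses DIV_ROUND_UP_ULL; this stand-in is an assumption for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's DIV_ROUND_UP_ULL() */
static uint64_t div_round_up_ull(uint64_t n, uint64_t d)
{
	return (n + d - 1) / d;
}

/* i_blocks is counted in 512-byte units, rounded up so that a
 * 1-byte file reports 1 block rather than 0. */
static uint64_t cifs_ino_blocks(uint64_t size)
{
	return div_round_up_ull(size, 512);
}
```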
+4
fs/smb/client/connect.c
···19551955 case Kerberos:19561956 if (!uid_eq(ctx->cred_uid, ses->cred_uid))19571957 return 0;19581958+ if (strncmp(ses->user_name ?: "",19591959+ ctx->username ?: "",19601960+ CIFS_MAX_USERNAME_LEN))19611961+ return 0;19581962 break;19591963 case NTLMv2:19601964 case RawNTLMSSP:
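The Kerberos match above now also compares the session's user name with the mount context's, treating a NULL name as the empty string via the GNU `?:` extension. A userspace sketch of that NULL-safe comparison (the 256-byte cap mirrors CIFS_MAX_USERNAME_LEN as an assumption):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define MAX_USERNAME_LEN 256 /* stand-in for CIFS_MAX_USERNAME_LEN */

/* Returns true when the two names match; a NULL pointer is treated as
 * the empty string, so an unset name only matches another unset name. */
static bool usernames_match(const char *a, const char *b)
{
	return strncmp(a ? a : "", b ? b : "", MAX_USERNAME_LEN) == 0;
}
```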
-1
fs/smb/client/file.c
···993993 if (!rc) {994994 netfs_resize_file(&cinode->netfs, 0, true);995995 cifs_setsize(inode, 0);996996- inode->i_blocks = 0;997996 }998997 }999998 if (cfile)
+6-15
fs/smb/client/inode.c
···219219 */220220 if (is_size_safe_to_change(cifs_i, fattr->cf_eof, from_readdir)) {221221 i_size_write(inode, fattr->cf_eof);222222-223223- /*224224- * i_blocks is not related to (i_size / i_blksize),225225- * but instead 512 byte (2**9) size is required for226226- * calculating num blocks.227227- */228228- inode->i_blocks = (512 - 1 + fattr->cf_bytes) >> 9;222222+ inode->i_blocks = CIFS_INO_BLOCKS(fattr->cf_bytes);229223 }230224231225 if (S_ISLNK(fattr->cf_mode) && fattr->cf_symlink_target) {···30093015{30103016 spin_lock(&inode->i_lock);30113017 i_size_write(inode, offset);30183018+ /*30193019+ * Until we can query the server for actual allocation size,30203020+ * this is best estimate we have for blocks allocated for a file.30213021+ */30223022+ inode->i_blocks = CIFS_INO_BLOCKS(offset);30123023 spin_unlock(&inode->i_lock);30133024 inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));30143025 truncate_pagecache(inode, offset);···30863087 if (rc == 0) {30873088 netfs_resize_file(&cifsInode->netfs, size, true);30883089 cifs_setsize(inode, size);30893089- /*30903090- * i_blocks is not related to (i_size / i_blksize), but instead30913091- * 512 byte (2**9) size is required for calculating num blocks.30923092- * Until we can query the server for actual allocation size,30933093- * this is best estimate we have for blocks allocated for a file30943094- * Number of blocks must be rounded up so size 1 is not 0 blocks30953095- */30963096- inode->i_blocks = (512 - 1 + size) >> 9;30973090 }3098309130993092 return rc;
+1-1
fs/smb/client/smb1transport.c
···460460 return 0;461461462462 /*463463- * Windows NT server returns error resposne (e.g. STATUS_DELETE_PENDING463463+ * Windows NT server returns error response (e.g. STATUS_DELETE_PENDING464464 * or STATUS_OBJECT_NAME_NOT_FOUND or ERRDOS/ERRbadfile or any other)465465 * for some TRANS2 requests without the RESPONSE flag set in header.466466 */
+4-16
fs/smb/client/smb2ops.c
···14971497{14981498 struct smb2_file_network_open_info file_inf;14991499 struct inode *inode;15001500+ u64 asize;15001501 int rc;1501150215021503 rc = __SMB2_close(xid, tcon, cfile->fid.persistent_fid,···15211520 inode_set_atime_to_ts(inode,15221521 cifs_NTtimeToUnix(file_inf.LastAccessTime));1523152215241524- /*15251525- * i_blocks is not related to (i_size / i_blksize),15261526- * but instead 512 byte (2**9) size is required for15271527- * calculating num blocks.15281528- */15291529- if (le64_to_cpu(file_inf.AllocationSize) > 4096)15301530- inode->i_blocks =15311531- (512 - 1 + le64_to_cpu(file_inf.AllocationSize)) >> 9;15231523+ asize = le64_to_cpu(file_inf.AllocationSize);15241524+ if (asize > 4096)15251525+ inode->i_blocks = CIFS_INO_BLOCKS(asize);1532152615331527 /* End of file and Attributes should not have to be updated on close */15341528 spin_unlock(&inode->i_lock);···22002204 rc = smb2_set_file_size(xid, tcon, trgtfile, dest_off + len, false);22012205 if (rc)22022206 goto duplicate_extents_out;22032203-22042204- /*22052205- * Although also could set plausible allocation size (i_blocks)22062206- * here in addition to setting the file size, in reflink22072207- * it is likely that the target file is sparse. Its allocation22082208- * size will be queried on next revalidate, but it is important22092209- * to make sure that file's cached size is updated immediately22102210- */22112207 netfs_resize_file(netfs_inode(inode), dest_off + len, true);22122208 cifs_setsize(inode, dest_off + len);22132209 }
+6-3
fs/smb/server/mgmt/tree_connect.c
···102102103103void ksmbd_tree_connect_put(struct ksmbd_tree_connect *tcon)104104{105105- if (atomic_dec_and_test(&tcon->refcount))105105+ if (atomic_dec_and_test(&tcon->refcount)) {106106+ ksmbd_share_config_put(tcon->share_conf);106107 kfree(tcon);108108+ }107109}108110109111static int __ksmbd_tree_conn_disconnect(struct ksmbd_session *sess,···115113116114 ret = ksmbd_ipc_tree_disconnect_request(sess->id, tree_conn->id);117115 ksmbd_release_tree_conn_id(sess, tree_conn->id);118118- ksmbd_share_config_put(tree_conn->share_conf);119116 ksmbd_counter_dec(KSMBD_COUNTER_TREE_CONNS);120120- if (atomic_dec_and_test(&tree_conn->refcount))117117+ if (atomic_dec_and_test(&tree_conn->refcount)) {118118+ ksmbd_share_config_put(tree_conn->share_conf);121119 kfree(tree_conn);120120+ }122121 return ret;123122}124123
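Both put paths above now drop the share_config reference only when the last tree-connect reference goes away, so the config cannot be released while another holder still uses it. A minimal sketch of that last-reference cleanup pattern using C11 atomics (names and fields are illustrative, not the ksmbd structures):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct tree_conn {
	atomic_int refcount;
	bool config_released;
};

/* Release dependent state only on the final put, mirroring how the
 * share_config put was moved inside the atomic_dec_and_test() branch. */
static void tree_conn_put(struct tree_conn *tc)
{
	if (atomic_fetch_sub(&tc->refcount, 1) == 1)
		tc->config_released = true; /* stands in for the config put + kfree() */
}

/* Drive 'puts' puts against 'refs' initial references and report
 * whether the shared config was released. */
static bool released_after(int refs, int puts)
{
	struct tree_conn tc = { .refcount = refs, .config_released = false };

	while (puts--)
		tree_conn_put(&tc);
	return tc.config_released;
}
```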
+12-5
fs/smb/server/smb2pdu.c
···126126 pr_err("The first operation in the compound does not have tcon\n");127127 return -EINVAL;128128 }129129+ if (work->tcon->t_state != TREE_CONNECTED)130130+ return -ENOENT;129131 if (tree_id != UINT_MAX && work->tcon->id != tree_id) {130132 pr_err("tree id(%u) is different with id(%u) in first operation\n",131133 tree_id, work->tcon->id);···19501948 }19511949 }19521950 smb2_set_err_rsp(work);19511951+ conn->binding = false;19531952 } else {19541953 unsigned int iov_len;19551954···28312828 goto out;28322829 }2833283028342834- dh_info->fp->conn = conn;28312831+ if (dh_info->fp->conn) {28322832+ ksmbd_put_durable_fd(dh_info->fp);28332833+ err = -EBADF;28342834+ goto out;28352835+ }28352836 dh_info->reconnected = true;28362837 goto out;28372838 }···54595452 struct smb2_query_info_req *req,54605453 struct smb2_query_info_rsp *rsp)54615454{54625462- struct ksmbd_session *sess = work->sess;54635455 struct ksmbd_conn *conn = work->conn;54645456 struct ksmbd_share_config *share = work->tcon->share_conf;54655457 int fsinfoclass = 0;···5595558955965590 info = (struct object_id_info *)(rsp->Buffer);5597559155985598- if (!user_guest(sess->user))55995599- memcpy(info->objid, user_passkey(sess->user), 16);55925592+ if (path.mnt->mnt_sb->s_uuid_len == 16)55935593+ memcpy(info->objid, path.mnt->mnt_sb->s_uuid.b,55945594+ path.mnt->mnt_sb->s_uuid_len);56005595 else56015601- memset(info->objid, 0, 16);55965596+ memcpy(info->objid, &stfs.f_fsid, sizeof(stfs.f_fsid));5602559756035598 info->extended_info.magic = cpu_to_le32(EXTENDED_INFO_MAGIC);56045599 info->extended_info.version = cpu_to_le32(1);
···4455#include <uapi/linux/auxvec.h>6677-#define AT_VECTOR_SIZE_BASE 22 /* NEW_AUX_ENT entries in auxiliary table */77+#define AT_VECTOR_SIZE_BASE 24 /* NEW_AUX_ENT entries in auxiliary table */88 /* number of "#define AT_.*" above, minus {AT_NULL, AT_IGNORE, AT_NOTELF} */99#endif /* _LINUX_AUXVEC_H */
+1
include/linux/console_struct.h
···160160 struct uni_pagedict **uni_pagedict_loc; /* [!] Location of uni_pagedict variable for this console */161161 u32 **vc_uni_lines; /* unicode screen content */162162 u16 *vc_saved_screen;163163+ u32 **vc_saved_uni_lines;163164 unsigned int vc_saved_cols;164165 unsigned int vc_saved_rows;165166 /* additional information is in vt_kern.h */
+6
include/linux/damon.h
···810810 struct damos_walk_control *walk_control;811811 struct mutex walk_control_lock;812812813813+ /*814814+ * Indicates whether this context may be corrupted. Currently this is815815+ * only set on damon_commit_ctx() failure.816816+ */817817+ bool maybe_corrupted;818818+813819 /* Working thread of the given DAMON context */814820 struct task_struct *kdamond;815821 /* Protects @kdamond field access */
+54
include/linux/device.h
···483483 * on. This shrinks the "Board Support Packages" (BSPs) and484484 * minimizes board-specific #ifdefs in drivers.485485 * @driver_data: Private pointer for driver specific info.486486+ * @driver_override: Driver name to force a match. Do not touch directly; use487487+ * device_set_driver_override() instead.486488 * @links: Links to suppliers and consumers of this device.487489 * @power: For device power management.488490 * See Documentation/driver-api/pm/devices.rst for details.···578576 core doesn't touch it */579577 void *driver_data; /* Driver data, set and get with580578 dev_set_drvdata/dev_get_drvdata */579579+ struct {580580+ const char *name;581581+ spinlock_t lock;582582+ } driver_override;581583 struct mutex mutex; /* mutex to synchronize calls to582584 * its driver.583585 */···706700};707701708702#define kobj_to_dev(__kobj) container_of_const(__kobj, struct device, kobj)703703+704704+int __device_set_driver_override(struct device *dev, const char *s, size_t len);705705+706706+/**707707+ * device_set_driver_override() - Helper to set or clear driver override.708708+ * @dev: Device to change709709+ * @s: NUL-terminated string, new driver name to force a match, pass empty710710+ * string to clear it ("" or "\n", where the latter is only for sysfs711711+ * interface).712712+ *713713+ * Helper to set or clear driver override of a device.714714+ *715715+ * Returns: 0 on success or a negative error code on failure.716716+ */717717+static inline int device_set_driver_override(struct device *dev, const char *s)718718+{719719+ return __device_set_driver_override(dev, s, s ? 
strlen(s) : 0);720720+}721721+722722+/**723723+ * device_has_driver_override() - Check if a driver override has been set.724724+ * @dev: device to check725725+ *726726+ * Returns true if a driver override has been set for this device.727727+ */728728+static inline bool device_has_driver_override(struct device *dev)729729+{730730+ guard(spinlock)(&dev->driver_override.lock);731731+ return !!dev->driver_override.name;732732+}733733+734734+/**735735+ * device_match_driver_override() - Match a driver against the device's driver_override.736736+ * @dev: device to check737737+ * @drv: driver to match against738738+ *739739+ * Returns > 0 if a driver override is set and matches the given driver, 0 if a740740+ * driver override is set but does not match, or < 0 if a driver override is not741741+ * set at all.742742+ */743743+static inline int device_match_driver_override(struct device *dev,744744+ const struct device_driver *drv)745745+{746746+ guard(spinlock)(&dev->driver_override.lock);747747+ if (dev->driver_override.name)748748+ return !strcmp(dev->driver_override.name, drv->name);749749+ return -1;750750+}709751710752/**711753 * device_iommu_mapped - Returns true when the device DMA is translated
+4
include/linux/device/bus.h
···6565 * this bus.6666 * @pm: Power management operations of this bus, callback the specific6767 * device driver's pm-ops.6868+ * @driver_override: Set to true if this bus supports the driver_override6969+ * mechanism, which allows userspace to force a specific7070+ * driver to bind to a device via a sysfs attribute.6871 * @need_parent_lock: When probing or removing a device on this bus, the6972 * device core should lock the device's parent.7073 *···109106110107 const struct dev_pm_ops *pm;111108109109+ bool driver_override;112110 bool need_parent_lock;113111};114112
+13-6
include/linux/dma-mapping.h
···8080#define DMA_ATTR_MMIO (1UL << 10)81818282/*8383- * DMA_ATTR_CPU_CACHE_CLEAN: Indicates the CPU will not dirty any cacheline8484- * overlapping this buffer while it is mapped for DMA. All mappings sharing8585- * a cacheline must have this attribute for this to be considered safe.8383+ * DMA_ATTR_DEBUGGING_IGNORE_CACHELINES: Indicates the CPU cache line can be8484+ * overlapped. All mappings sharing a cacheline must have this attribute for8585+ * this to be considered safe.8686 */8787-#define DMA_ATTR_CPU_CACHE_CLEAN (1UL << 11)8787+#define DMA_ATTR_DEBUGGING_IGNORE_CACHELINES (1UL << 11)8888+8989+/*9090+ * DMA_ATTR_REQUIRE_COHERENT: Indicates that DMA coherency is required.9191+ * All mappings that carry this attribute can't work with SWIOTLB and cache9292+ * flushing.9393+ */9494+#define DMA_ATTR_REQUIRE_COHERENT (1UL << 12)88958996/*9097 * A dma_addr_t can hold any valid DMA or bus address for the platform. It can···255248{256249 return NULL;257250}258258-static void dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,259259- dma_addr_t dma_handle, unsigned long attrs)251251+static inline void dma_free_attrs(struct device *dev, size_t size,252252+ void *cpu_addr, dma_addr_t dma_handle, unsigned long attrs)260253{261254}262255static inline void *dmam_alloc_attrs(struct device *dev, size_t size,
+8-2
include/linux/io-pgtable.h
···5353 * tables.5454 * @ias: Input address (iova) size, in bits.5555 * @oas: Output address (paddr) size, in bits.5656- * @coherent_walk A flag to indicate whether or not page table walks made5656+ * @coherent_walk: A flag to indicate whether or not page table walks made5757 * by the IOMMU are coherent with the CPU caches.5858 * @tlb: TLB management callbacks for this set of tables.5959 * @iommu_dev: The device representing the DMA configuration for the···136136 void (*free)(void *cookie, void *pages, size_t size);137137138138 /* Low-level data specific to the table format */139139+ /* private: */139140 union {140141 struct {141142 u64 ttbr;···204203 * @unmap_pages: Unmap a range of virtually contiguous pages of the same size.205204 * @iova_to_phys: Translate iova to physical address.206205 * @pgtable_walk: (optional) Perform a page table walk for a given iova.206206+ * @read_and_clear_dirty: Record dirty info per IOVA. If an IOVA is dirty,207207+ * clear its dirty state from the PTE unless the208208+ * IOMMU_DIRTY_NO_CLEAR flag is passed in.207209 *208210 * These functions map directly onto the iommu_ops member functions with209211 * the same names.···235231 * the configuration actually provided by the allocator (e.g. the236232 * pgsize_bitmap may be restricted).237233 * @cookie: An opaque token provided by the IOMMU driver and passed back to238238- * the callback routines in cfg->tlb.234234+ * the callback routines.235235+ *236236+ * Returns: Pointer to the &struct io_pgtable_ops for this set of page tables.239237 */240238struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt,241239 struct io_pgtable_cfg *cfg,
+3
include/linux/io_uring_types.h
···541541 REQ_F_BL_NO_RECYCLE_BIT,542542 REQ_F_BUFFERS_COMMIT_BIT,543543 REQ_F_BUF_NODE_BIT,544544+ REQ_F_BUF_MORE_BIT,544545 REQ_F_HAS_METADATA_BIT,545546 REQ_F_IMPORT_BUFFER_BIT,546547 REQ_F_SQE_COPIED_BIT,···627626 REQ_F_BUFFERS_COMMIT = IO_REQ_FLAG(REQ_F_BUFFERS_COMMIT_BIT),628627 /* buf node is valid */629628 REQ_F_BUF_NODE = IO_REQ_FLAG(REQ_F_BUF_NODE_BIT),629629+ /* incremental buffer consumption, more space available */630630+ REQ_F_BUF_MORE = IO_REQ_FLAG(REQ_F_BUF_MORE_BIT),630631 /* request has read/write metadata assigned */631632 REQ_F_HAS_METADATA = IO_REQ_FLAG(REQ_F_HAS_METADATA_BIT),632633 /*
+1-1
include/linux/local_lock_internal.h
···315315316316#endif /* CONFIG_PREEMPT_RT */317317318318-#if defined(WARN_CONTEXT_ANALYSIS)318318+#if defined(WARN_CONTEXT_ANALYSIS) && !defined(__CHECKER__)319319/*320320 * Because the compiler only knows about the base per-CPU variable, use this321321 * helper function to make the compiler think we lock/unlock the @base variable,
-5
include/linux/platform_device.h
···3131 struct resource *resource;32323333 const struct platform_device_id *id_entry;3434- /*3535- * Driver name to force a match. Do not set directly, because core3636- * frees it. Use driver_set_override() to set or clear it.3737- */3838- const char *driver_override;39344035 /* MFD cell pointer */4136 struct mfd_cell *mfd_cell;
···264264 return &hinfo->bhash2[hash & (hinfo->bhash_size - 1)];265265}266266267267+static inline bool inet_use_hash2_on_bind(const struct sock *sk)268268+{269269+#if IS_ENABLED(CONFIG_IPV6)270270+ if (sk->sk_family == AF_INET6) {271271+ if (ipv6_addr_any(&sk->sk_v6_rcv_saddr))272272+ return false;273273+274274+ if (!ipv6_addr_v4mapped(&sk->sk_v6_rcv_saddr))275275+ return true;276276+ }277277+#endif278278+ return sk->sk_rcv_saddr != htonl(INADDR_ANY);279279+}280280+267281struct inet_bind_hashbucket *268282inet_bhash2_addr_any_hashbucket(const struct sock *sk, const struct net *net, int port);269283
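The new inet_use_hash2_on_bind() helper decides whether bind() can use the address-keyed bhash2 table: wildcard addresses cannot, and a v4-mapped IPv6 address is judged by its embedded IPv4 part. A userspace sketch using glibc's in6_addr test macros (the socket fields are flattened into plain parameters here as an assumption):

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <stdbool.h>
#include <sys/socket.h>

/* Mirror of the bind-time decision: a wildcard (any) address must fall
 * back to the port-only table; a real IPv6 address can use bhash2; a
 * v4-mapped IPv6 address is decided by its embedded IPv4 address. */
static bool use_hash2_on_bind(int family, const struct in6_addr *v6,
			      uint32_t v4_saddr_be)
{
	if (family == AF_INET6) {
		if (IN6_IS_ADDR_UNSPECIFIED(v6))
			return false;
		if (!IN6_IS_ADDR_V4MAPPED(v6))
			return true;
	}
	return v4_saddr_be != htonl(INADDR_ANY);
}
```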
+20-1
include/net/ip6_fib.h
···507507void inet6_rt_notify(int event, struct fib6_info *rt, struct nl_info *info,508508 unsigned int flags);509509510510+void fib6_age_exceptions(struct fib6_info *rt, struct fib6_gc_args *gc_args,511511+ unsigned long now);510512void fib6_run_gc(unsigned long expires, struct net *net, bool force);511511-512513void fib6_gc_cleanup(void);513514514515int fib6_init(void);515516517517+#if IS_ENABLED(CONFIG_IPV6)516518/* Add the route to the gc list if it is not already there517519 *518520 * The callers should hold f6i->fib6_table->tb6_lock.···546544 if (!hlist_unhashed(&f6i->gc_link))547545 hlist_del_init(&f6i->gc_link);548546}547547+548548+static inline void fib6_may_remove_gc_list(struct net *net,549549+ struct fib6_info *f6i)550550+{551551+ struct fib6_gc_args gc_args;552552+553553+ if (hlist_unhashed(&f6i->gc_link))554554+ return;555555+556556+ gc_args.timeout = READ_ONCE(net->ipv6.sysctl.ip6_rt_gc_interval);557557+ gc_args.more = 0;558558+559559+ rcu_read_lock();560560+ fib6_age_exceptions(f6i, &gc_args, jiffies);561561+ rcu_read_unlock();562562+}563563+#endif549564550565struct ipv6_route_iter {551566 struct seq_net_private p;
+5
include/net/netfilter/nf_conntrack_core.h
···83838484extern spinlock_t nf_conntrack_expect_lock;85858686+static inline void lockdep_nfct_expect_lock_held(void)8787+{8888+ lockdep_assert_held(&nf_conntrack_expect_lock);8989+}9090+8691/* ctnetlink code shared by both ctnetlink and nf_conntrack_bpf */87928893static inline void __nf_ct_set_timeout(struct nf_conn *ct, u64 timeout)
+18-2
include/net/netfilter/nf_conntrack_expect.h
···2222 /* Hash member */2323 struct hlist_node hnode;24242525+ /* Network namespace */2626+ possible_net_t net;2727+2528 /* We expect this tuple, with the following mask */2629 struct nf_conntrack_tuple tuple;2730 struct nf_conntrack_tuple_mask mask;28313232+#ifdef CONFIG_NF_CONNTRACK_ZONES3333+ struct nf_conntrack_zone zone;3434+#endif2935 /* Usage count. */3036 refcount_t use;3137···4640 struct nf_conntrack_expect *this);47414842 /* Helper to assign to new connection */4949- struct nf_conntrack_helper *helper;4343+ struct nf_conntrack_helper __rcu *helper;50445145 /* The conntrack of the master connection */5246 struct nf_conn *master;···68626963static inline struct net *nf_ct_exp_net(struct nf_conntrack_expect *exp)7064{7171- return nf_ct_net(exp->master);6565+ return read_pnet(&exp->net);6666+}6767+6868+static inline bool nf_ct_exp_zone_equal_any(const struct nf_conntrack_expect *a,6969+ const struct nf_conntrack_zone *b)7070+{7171+#ifdef CONFIG_NF_CONNTRACK_ZONES7272+ return a->zone.id == b->id;7373+#else7474+ return true;7575+#endif7276}73777478#define NF_CT_EXP_POLICY_NAME_LEN 16
···146146config CC_HAS_COUNTED_BY_PTR147147 bool148148 # supported since clang 22149149- default y if CC_IS_CLANG && CLANG_VERSION >= 220000149149+ default y if CC_IS_CLANG && CLANG_VERSION >= 220100150150 # supported since gcc 16.0.0151151 default y if CC_IS_GCC && GCC_VERSION >= 160000152152
+11-3
io_uring/kbuf.c
···34343535static bool io_kbuf_inc_commit(struct io_buffer_list *bl, int len)3636{3737+ /* No data consumed, return false early to avoid consuming the buffer */3838+ if (!len)3939+ return false;4040+3741 while (len) {3842 struct io_uring_buf *buf;3943 u32 buf_len, this_len;···216212 sel.addr = u64_to_user_ptr(READ_ONCE(buf->addr));217213218214 if (io_should_commit(req, issue_flags)) {219219- io_kbuf_commit(req, sel.buf_list, *len, 1);215215+ if (!io_kbuf_commit(req, sel.buf_list, *len, 1))216216+ req->flags |= REQ_F_BUF_MORE;220217 sel.buf_list = NULL;221218 }222219 return sel;···350345 */351346 if (ret > 0) {352347 req->flags |= REQ_F_BUFFERS_COMMIT | REQ_F_BL_NO_RECYCLE;353353- io_kbuf_commit(req, sel->buf_list, arg->out_len, ret);348348+ if (!io_kbuf_commit(req, sel->buf_list, arg->out_len, ret))349349+ req->flags |= REQ_F_BUF_MORE;354350 }355351 } else {356352 ret = io_provided_buffers_select(req, &arg->out_len, sel->buf_list, arg->iovs);···397391398392 if (bl)399393 ret = io_kbuf_commit(req, bl, len, nr);394394+ if (ret && (req->flags & REQ_F_BUF_MORE))395395+ ret = false;400396401401- req->flags &= ~REQ_F_BUFFER_RING;397397+ req->flags &= ~(REQ_F_BUFFER_RING | REQ_F_BUF_MORE);402398 return ret;403399}404400
+7-2
io_uring/poll.c
···272272 atomic_andnot(IO_POLL_RETRY_FLAG, &req->poll_refs);273273 v &= ~IO_POLL_RETRY_FLAG;274274 }275275+ v &= IO_POLL_REF_MASK;275276 }276277277278 /* the mask was stashed in __io_poll_execute */···305304 return IOU_POLL_REMOVE_POLL_USE_RES;306305 }307306 } else {308308- int ret = io_poll_issue(req, tw);307307+ int ret;309308309309+ /* multiple refs and HUP, ensure we loop once more */310310+ if ((req->cqe.res & (POLLHUP | POLLRDHUP)) && v != 1)311311+ v--;312312+313313+ ret = io_poll_issue(req, tw);310314 if (ret == IOU_COMPLETE)311315 return IOU_POLL_REMOVE_POLL_USE_RES;312316 else if (ret == IOU_REQUEUE)···327321 * Release all references, retry if someone tried to restart328322 * task_work while we were executing it.329323 */330330- v &= IO_POLL_REF_MASK;331324 } while (atomic_sub_return(v, &req->poll_refs) & IO_POLL_REF_MASK);332325333326 io_napi_add(req);
+20-4
kernel/bpf/btf.c
···17871787 * of the _bh() version.17881788 */17891789 spin_lock_irqsave(&btf_idr_lock, flags);17901790- idr_remove(&btf_idr, btf->id);17901790+ if (btf->id) {17911791+ idr_remove(&btf_idr, btf->id);17921792+ /*17931793+ * Clear the id here to make this function idempotent, since it will get17941794+ * called a couple of times for module BTFs: on module unload, and then17951795+ * the final btf_put(). btf_alloc_id() starts IDs with 1, so we can use17961796+ * 0 as sentinel value.17971797+ */17981798+ WRITE_ONCE(btf->id, 0);17991799+ }17911800 spin_unlock_irqrestore(&btf_idr_lock, flags);17921801}17931802···81248115{81258116 const struct btf *btf = filp->private_data;8126811781278127- seq_printf(m, "btf_id:\t%u\n", btf->id);81188118+ seq_printf(m, "btf_id:\t%u\n", READ_ONCE(btf->id));81288119}81298120#endif81308121···82068197 if (copy_from_user(&info, uinfo, info_copy))82078198 return -EFAULT;8208819982098209- info.id = btf->id;82008200+ info.id = READ_ONCE(btf->id);82108201 ubtf = u64_to_user_ptr(info.btf);82118202 btf_copy = min_t(u32, btf->data_size, info.btf_size);82128203 if (copy_to_user(ubtf, btf->data, btf_copy))···8269826082708261u32 btf_obj_id(const struct btf *btf)82718262{82728272- return btf->id;82638263+ return READ_ONCE(btf->id);82738264}8274826582758266bool btf_is_kernel(const struct btf *btf)···83918382 if (btf_mod->module != module)83928383 continue;8393838483858385+ /*83868386+ * For modules, we do the freeing of BTF IDR as soon as83878387+ * module goes away to disable BTF discovery, since the83888388+ * btf_try_get_module() on such BTFs will fail. This may83898389+ * be called again on btf_put(), but it's ok to do so.83908390+ */83918391+ btf_free_id(btf_mod->btf);83948392 list_del(&btf_mod->list);83958393 if (btf_mod->sysfs_attr)83968394 sysfs_remove_bin_file(btf_kobj, btf_mod->sysfs_attr);
+35-8
kernel/bpf/core.c
···
 		*to++ = BPF_ALU64_IMM(BPF_XOR, BPF_REG_AX, imm_rnd);
 		*to++ = BPF_STX_MEM(from->code, from->dst_reg, BPF_REG_AX, from->off);
 		break;
+
+	case BPF_ST | BPF_PROBE_MEM32 | BPF_DW:
+	case BPF_ST | BPF_PROBE_MEM32 | BPF_W:
+	case BPF_ST | BPF_PROBE_MEM32 | BPF_H:
+	case BPF_ST | BPF_PROBE_MEM32 | BPF_B:
+		*to++ = BPF_ALU64_IMM(BPF_MOV, BPF_REG_AX, imm_rnd ^ from->imm);
+		*to++ = BPF_ALU64_IMM(BPF_XOR, BPF_REG_AX, imm_rnd);
+		/*
+		 * Cannot use BPF_STX_MEM() macro here as it
+		 * hardcodes BPF_MEM mode, losing PROBE_MEM32
+		 * and breaking arena addressing in the JIT.
+		 */
+		*to++ = (struct bpf_insn) {
+			.code = BPF_STX | BPF_PROBE_MEM32 | BPF_SIZE(from->code),
+			.dst_reg = from->dst_reg,
+			.src_reg = BPF_REG_AX,
+			.off = from->off,
+		};
+		break;
 	}
out:
 	return to - to_buff;
···
 }

 #ifndef CONFIG_BPF_JIT_ALWAYS_ON
+/* Absolute value of s32 without undefined behavior for S32_MIN */
+static u32 abs_s32(s32 x)
+{
+	return x >= 0 ? (u32)x : -(u32)x;
+}
+
 /**
  * ___bpf_prog_run - run eBPF program on a given context
  * @regs: is the array of MAX_BPF_EXT_REG eBPF pseudo-registers
···
 			DST = do_div(AX, (u32) SRC);
 			break;
 		case 1:
-			AX = abs((s32)DST);
-			AX = do_div(AX, abs((s32)SRC));
+			AX = abs_s32((s32)DST);
+			AX = do_div(AX, abs_s32((s32)SRC));
 			if ((s32)DST < 0)
 				DST = (u32)-AX;
 			else
···
 			DST = do_div(AX, (u32) IMM);
 			break;
 		case 1:
-			AX = abs((s32)DST);
-			AX = do_div(AX, abs((s32)IMM));
+			AX = abs_s32((s32)DST);
+			AX = do_div(AX, abs_s32((s32)IMM));
 			if ((s32)DST < 0)
 				DST = (u32)-AX;
 			else
···
 			DST = (u32) AX;
 			break;
 		case 1:
-			AX = abs((s32)DST);
-			do_div(AX, abs((s32)SRC));
+			AX = abs_s32((s32)DST);
+			do_div(AX, abs_s32((s32)SRC));
 			if (((s32)DST < 0) == ((s32)SRC < 0))
 				DST = (u32)AX;
 			else
···
 			DST = (u32) AX;
 			break;
 		case 1:
-			AX = abs((s32)DST);
-			do_div(AX, abs((s32)IMM));
+			AX = abs_s32((s32)DST);
+			do_div(AX, abs_s32((s32)IMM));
 			if (((s32)DST < 0) == ((s32)IMM < 0))
 				DST = (u32)AX;
 			else
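The abs_s32() helper introduced above exists because abs() on S32_MIN is undefined behavior in C: the mathematical result +2147483648 does not fit in a signed 32-bit int. Negating in the unsigned domain is well defined (arithmetic modulo 2^32) and produces the correct magnitude. A standalone, compilable sketch of the same helper:

```c
#include <stdint.h>

/*
 * abs((int32_t)INT32_MIN) is UB: negation overflows signed int.
 * Converting to unsigned first makes the negation well defined
 * and yields the expected magnitude 2147483648.
 */
static uint32_t abs_s32(int32_t x)
{
	return x >= 0 ? (uint32_t)x : -(uint32_t)x;
}
```

With UBSan enabled, the plain abs() form traps on INT32_MIN while this version does not, which is exactly the case the signed-division interpreter paths can hit.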
+25-8
kernel/bpf/verifier.c
···
 	/* Apply bswap if alu64 or switch between big-endian and little-endian machines */
 	bool need_bswap = alu64 || (to_le == is_big_endian);

+	/*
+	 * If the register is mutated, manually reset its scalar ID to break
+	 * any existing ties and avoid incorrect bounds propagation.
+	 */
+	if (need_bswap || insn->imm == 16 || insn->imm == 32)
+		dst_reg->id = 0;
+
 	if (need_bswap) {
 		if (insn->imm == 16)
 			dst_reg->var_off = tnum_bswap16(dst_reg->var_off);
···
 	else
 		return 0;

-	branch = push_stack(env, env->insn_idx + 1, env->insn_idx, false);
+	branch = push_stack(env, env->insn_idx, env->insn_idx, false);
 	if (IS_ERR(branch))
 		return PTR_ERR(branch);
···
 			continue;
 		if ((reg->id & ~BPF_ADD_CONST) != (known_reg->id & ~BPF_ADD_CONST))
 			continue;
+		/*
+		 * Skip mixed 32/64-bit links: the delta relationship doesn't
+		 * hold across different ALU widths.
+		 */
+		if (((reg->id ^ known_reg->id) & BPF_ADD_CONST) == BPF_ADD_CONST)
+			continue;
 		if ((!(reg->id & BPF_ADD_CONST) && !(known_reg->id & BPF_ADD_CONST)) ||
 		    reg->off == known_reg->off) {
 			s32 saved_subreg_def = reg->subreg_def;
···
 			scalar32_min_max_add(reg, &fake_reg);
 			scalar_min_max_add(reg, &fake_reg);
 			reg->var_off = tnum_add(reg->var_off, fake_reg.var_off);
-			if (known_reg->id & BPF_ADD_CONST32)
+			if ((reg->id | known_reg->id) & BPF_ADD_CONST32)
 				zext_32_to_64(reg);
 			reg_bounds_sync(reg);
 		}
···
 		 * Also verify that new value satisfies old value range knowledge.
 		 */

-		/* ADD_CONST mismatch: different linking semantics */
-		if ((rold->id & BPF_ADD_CONST) && !(rcur->id & BPF_ADD_CONST))
-			return false;
-
-		if (rold->id && !(rold->id & BPF_ADD_CONST) && (rcur->id & BPF_ADD_CONST))
+		/*
+		 * ADD_CONST flags must match exactly: BPF_ADD_CONST32 and
+		 * BPF_ADD_CONST64 have different linking semantics in
+		 * sync_linked_regs() (alu32 zero-extends, alu64 does not),
+		 * so pruning across different flag types is unsafe.
+		 */
+		if (rold->id &&
+		    (rold->id & BPF_ADD_CONST) != (rcur->id & BPF_ADD_CONST))
 			return false;

 		/* Both have offset linkage: offsets must match */
···
 		 * state when it exits.
 		 */
		int err = check_resource_leak(env, exception_exit,
-					      !env->cur_state->curframe,
+					      exception_exit || !env->cur_state->curframe,
+					      exception_exit ? "bpf_throw" :
 					      "BPF_EXIT instruction in main prog");
		if (err)
			return err;
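The mixed-width check added to sync_linked_regs() uses an XOR idiom: `a ^ b` keeps only the bits where the two IDs disagree, so masking the result and comparing against the full mask is true exactly when one register carries one flag bit and the other register carries the other. A userspace sketch with illustrative flag values (the real BPF_ADD_CONST* bit layout is not reproduced here):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins for the two width flag bits. */
#define FLAG32 (1u << 30)
#define FLAG64 (1u << 31)
#define FLAGS  (FLAG32 | FLAG64)

/*
 * True only when the ids disagree on BOTH flag bits, i.e. one id is
 * 32-bit linked and the other 64-bit linked. Same-width or unflagged
 * pairs XOR to zero in the masked bits.
 */
static bool mixed_width_link(uint32_t id_a, uint32_t id_b)
{
	return ((id_a ^ id_b) & FLAGS) == FLAGS;
}

static int demo(void)
{
	return mixed_width_link(FLAG32 | 7, FLAG64 | 7) &&	/* mixed: skip */
	       !mixed_width_link(FLAG32 | 7, FLAG32 | 9) &&	/* same width */
	       !mixed_width_link(7, 9);				/* no flags */
}
```

The payoff over two separate bit tests is that one expression distinguishes "both flags differ" from "same flags" without branching.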
+5-4
kernel/dma/debug.c
···
 	return overlap;
 }

-static void active_cacheline_inc_overlap(phys_addr_t cln)
+static void active_cacheline_inc_overlap(phys_addr_t cln, bool is_cache_clean)
 {
 	int overlap = active_cacheline_read_overlap(cln);
···
 	/* If we overflowed the overlap counter then we're potentially
 	 * leaking dma-mappings.
 	 */
-	WARN_ONCE(overlap > ACTIVE_CACHELINE_MAX_OVERLAP,
+	WARN_ONCE(!is_cache_clean && overlap > ACTIVE_CACHELINE_MAX_OVERLAP,
 		  pr_fmt("exceeded %d overlapping mappings of cacheline %pa\n"),
 		  ACTIVE_CACHELINE_MAX_OVERLAP, &cln);
 }
···
 	if (rc == -EEXIST) {
 		struct dma_debug_entry *existing;

-		active_cacheline_inc_overlap(cln);
+		active_cacheline_inc_overlap(cln, entry->is_cache_clean);
 		existing = radix_tree_lookup(&dma_active_cacheline, cln);
 		/* A lookup failure here after we got -EEXIST is unexpected. */
 		WARN_ON(!existing);
···
 	unsigned long flags;
 	int rc;

-	entry->is_cache_clean = !!(attrs & DMA_ATTR_CPU_CACHE_CLEAN);
+	entry->is_cache_clean = attrs & (DMA_ATTR_DEBUGGING_IGNORE_CACHELINES |
+					 DMA_ATTR_REQUIRE_COHERENT);

 	bucket = get_hash_bucket(entry, &flags);
 	hash_bucket_add(bucket, entry);
+4-3
kernel/dma/direct.h
···
 	dma_addr_t dma_addr;

 	if (is_swiotlb_force_bounce(dev)) {
-		if (attrs & DMA_ATTR_MMIO)
+		if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT))
 			return DMA_MAPPING_ERROR;

 		return swiotlb_map(dev, phys, size, dir, attrs);
···
 	dma_addr = phys_to_dma(dev, phys);
 	if (unlikely(!dma_capable(dev, dma_addr, size, true)) ||
 	    dma_kmalloc_needs_bounce(dev, size, dir)) {
-		if (is_swiotlb_active(dev))
+		if (is_swiotlb_active(dev) &&
+		    !(attrs & DMA_ATTR_REQUIRE_COHERENT))
 			return swiotlb_map(dev, phys, size, dir, attrs);

 		goto err_overflow;
···
 {
 	phys_addr_t phys;

-	if (attrs & DMA_ATTR_MMIO)
+	if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT))
 		/* nothing to do: uncached and no swiotlb */
 		return;
+6
kernel/dma/mapping.c
···
 	if (WARN_ON_ONCE(!dev->dma_mask))
 		return DMA_MAPPING_ERROR;

+	if (!dev_is_dma_coherent(dev) && (attrs & DMA_ATTR_REQUIRE_COHERENT))
+		return DMA_MAPPING_ERROR;
+
 	if (dma_map_direct(dev, ops) ||
 	    (!is_mmio && arch_dma_map_phys_direct(dev, phys + size)))
 		addr = dma_direct_map_phys(dev, phys, size, dir, attrs);
···
 	int ents;

 	BUG_ON(!valid_dma_direction(dir));

+	if (!dev_is_dma_coherent(dev) && (attrs & DMA_ATTR_REQUIRE_COHERENT))
+		return -EOPNOTSUPP;
+
 	if (WARN_ON_ONCE(!dev->dma_mask))
 		return 0;
+19-2
kernel/dma/swiotlb.c
···
 #include <linux/gfp.h>
 #include <linux/highmem.h>
 #include <linux/io.h>
+#include <linux/kmsan-checks.h>
 #include <linux/iommu-helper.h>
 #include <linux/init.h>
 #include <linux/memblock.h>
···
 			local_irq_save(flags);
 			page = pfn_to_page(pfn);
-			if (dir == DMA_TO_DEVICE)
+			if (dir == DMA_TO_DEVICE) {
+				/*
+				 * Ideally, kmsan_check_highmem_page()
+				 * could be used here to detect infoleaks,
+				 * but callers may map uninitialized buffers
+				 * that will be written by the device,
+				 * causing false positives.
+				 */
 				memcpy_from_page(vaddr, page, offset, sz);
-			else
+			} else {
+				kmsan_unpoison_memory(vaddr, sz);
 				memcpy_to_page(page, offset, vaddr, sz);
+			}
 			local_irq_restore(flags);

 			size -= sz;
···
 			offset = 0;
 		}
 	} else if (dir == DMA_TO_DEVICE) {
+		/*
+		 * Ideally, kmsan_check_memory() could be used here to detect
+		 * infoleaks (uninitialized data being sent to device), but
+		 * callers may map uninitialized buffers that will be written
+		 * by the device, causing false positives.
+		 */
 		memcpy(vaddr, phys_to_virt(orig_addr), size);
 	} else {
+		kmsan_unpoison_memory(vaddr, size);
 		memcpy(phys_to_virt(orig_addr), vaddr, size);
 	}
 }
+8-11
kernel/events/core.c
···
 	struct perf_event *sub, *event = data->event;
 	struct perf_event_context *ctx = event->ctx;
 	struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context);
-	struct pmu *pmu = event->pmu;
+	struct pmu *pmu;

 	/*
 	 * If this is a task context, we need to check whether it is
···
 	if (ctx->task && cpuctx->task_ctx != ctx)
 		return;

-	raw_spin_lock(&ctx->lock);
+	guard(raw_spinlock)(&ctx->lock);
 	ctx_time_update_event(ctx, event);

 	perf_event_update_time(event);
···
 		perf_event_update_sibling_time(event);

 	if (event->state != PERF_EVENT_STATE_ACTIVE)
-		goto unlock;
+		return;

 	if (!data->group) {
-		pmu->read(event);
+		perf_pmu_read(event);
 		data->ret = 0;
-		goto unlock;
+		return;
 	}

+	pmu = event->pmu_ctx->pmu;
 	pmu->start_txn(pmu, PERF_PMU_TXN_READ);

-	pmu->read(event);
-
+	perf_pmu_read(event);
 	for_each_sibling_event(sub, event)
 		perf_pmu_read(sub);

 	data->ret = pmu->commit_txn(pmu);
-
-unlock:
-	raw_spin_unlock(&ctx->lock);
 }

 static inline u64 perf_event_count(struct perf_event *event, bool self)
···
 	get_ctx(child_ctx);
 	child_event->ctx = child_ctx;

-	pmu_ctx = find_get_pmu_context(child_event->pmu, child_ctx, child_event);
+	pmu_ctx = find_get_pmu_context(parent_event->pmu_ctx->pmu, child_ctx, child_event);
 	if (IS_ERR(pmu_ctx)) {
 		free_event(child_event);
 		return ERR_CAST(pmu_ctx);
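The conversion above from an explicit lock/unlock pair with a `goto unlock` label to `guard(raw_spinlock)(&ctx->lock)` relies on scope-based cleanup: the lock is released automatically on every return path, so the early returns become safe. The kernel's guard() machinery (linux/cleanup.h) is built on the compiler's `cleanup` variable attribute; a minimal userspace sketch of the same mechanism, using a plain flag in place of a real lock (all names here are illustrative):

```c
#include <stdbool.h>

static int lock_held;	/* stand-in for a real lock */

static void scope_release(int **l)
{
	**l = 0;	/* runs automatically when the guarded scope is left */
}

/* Illustrative macro: "take" the lock and arrange release at scope exit. */
#define scoped_flag_guard(l) \
	__attribute__((cleanup(scope_release))) int *_guard = (l); *(l) = 1

static int value;

static bool update(int v)
{
	scoped_flag_guard(&lock_held);
	if (v < 0)
		return false;	/* early return: cleanup still releases */
	value = v;
	return true;
}

static int demo(void)
{
	if (!update(5) || value != 5 || lock_held)
		return 0;	/* lock must be released after success path */
	if (update(-1) || lock_held)
		return 0;	/* ...and after the early-return path */
	return 1;
}
```

This requires GCC or Clang (the `cleanup` attribute is a compiler extension, not standard C), which is why the kernel can rely on it unconditionally.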
···
  */

 #include <linux/export.h>
+#include <linux/irq_work.h>
 #include <linux/mutex.h>
 #include <linux/preempt.h>
 #include <linux/rcupdate_wait.h>
···
 	ssp->srcu_idx_max = 0;
 	INIT_WORK(&ssp->srcu_work, srcu_drive_gp);
 	INIT_LIST_HEAD(&ssp->srcu_work.entry);
+	init_irq_work(&ssp->srcu_irq_work, srcu_tiny_irq_work);
 	return 0;
 }
···
 void cleanup_srcu_struct(struct srcu_struct *ssp)
 {
 	WARN_ON(ssp->srcu_lock_nesting[0] || ssp->srcu_lock_nesting[1]);
+	irq_work_sync(&ssp->srcu_irq_work);
 	flush_work(&ssp->srcu_work);
 	WARN_ON(ssp->srcu_gp_running);
 	WARN_ON(ssp->srcu_gp_waiting);
···
 }
 EXPORT_SYMBOL_GPL(srcu_drive_gp);

+/*
+ * Use an irq_work to defer schedule_work() to avoid acquiring the workqueue
+ * pool->lock while the caller might hold scheduler locks, causing lockdep
+ * splats due to workqueue_init() doing a wakeup.
+ */
+void srcu_tiny_irq_work(struct irq_work *irq_work)
+{
+	struct srcu_struct *ssp;
+
+	ssp = container_of(irq_work, struct srcu_struct, srcu_irq_work);
+	schedule_work(&ssp->srcu_work);
+}
+EXPORT_SYMBOL_GPL(srcu_tiny_irq_work);
+
 static void srcu_gp_start_if_needed(struct srcu_struct *ssp)
 {
 	unsigned long cookie;
···
 	WRITE_ONCE(ssp->srcu_idx_max, cookie);
 	if (!READ_ONCE(ssp->srcu_gp_running)) {
 		if (likely(srcu_init_done))
-			schedule_work(&ssp->srcu_work);
+			irq_work_queue(&ssp->srcu_irq_work);
 		else if (list_empty(&ssp->srcu_work.entry))
 			list_add(&ssp->srcu_work.entry, &srcu_boot_list);
 	}
+102-109
kernel/rcu/srcutree.c
···
 #include <linux/mutex.h>
 #include <linux/percpu.h>
 #include <linux/preempt.h>
+#include <linux/irq_work.h>
 #include <linux/rcupdate_wait.h>
 #include <linux/sched.h>
 #include <linux/smp.h>
···
 static void srcu_invoke_callbacks(struct work_struct *work);
 static void srcu_reschedule(struct srcu_struct *ssp, unsigned long delay);
 static void process_srcu(struct work_struct *work);
+static void srcu_irq_work(struct irq_work *work);
 static void srcu_delay_timer(struct timer_list *t);
-
-/* Wrappers for lock acquisition and release, see raw_spin_lock_rcu_node(). */
-#define spin_lock_rcu_node(p)						\
-do {									\
-	spin_lock(&ACCESS_PRIVATE(p, lock));				\
-	smp_mb__after_unlock_lock();					\
-} while (0)
-
-#define spin_unlock_rcu_node(p) spin_unlock(&ACCESS_PRIVATE(p, lock))
-
-#define spin_lock_irq_rcu_node(p)					\
-do {									\
-	spin_lock_irq(&ACCESS_PRIVATE(p, lock));			\
-	smp_mb__after_unlock_lock();					\
-} while (0)
-
-#define spin_unlock_irq_rcu_node(p)					\
-	spin_unlock_irq(&ACCESS_PRIVATE(p, lock))
-
-#define spin_lock_irqsave_rcu_node(p, flags)				\
-do {									\
-	spin_lock_irqsave(&ACCESS_PRIVATE(p, lock), flags);		\
-	smp_mb__after_unlock_lock();					\
-} while (0)
-
-#define spin_trylock_irqsave_rcu_node(p, flags)				\
-({									\
-	bool ___locked = spin_trylock_irqsave(&ACCESS_PRIVATE(p, lock), flags); \
-									\
-	if (___locked)							\
-		smp_mb__after_unlock_lock();				\
-	___locked;							\
-})
-
-#define spin_unlock_irqrestore_rcu_node(p, flags)			\
-	spin_unlock_irqrestore(&ACCESS_PRIVATE(p, lock), flags)

 /*
  * Initialize SRCU per-CPU data.  Note that statically allocated
···
  */
 	for_each_possible_cpu(cpu) {
 		sdp = per_cpu_ptr(ssp->sda, cpu);
-		spin_lock_init(&ACCESS_PRIVATE(sdp, lock));
+		raw_spin_lock_init(&ACCESS_PRIVATE(sdp, lock));
 		rcu_segcblist_init(&sdp->srcu_cblist);
 		sdp->srcu_cblist_invoking = false;
 		sdp->srcu_gp_seq_needed = ssp->srcu_sup->srcu_gp_seq;
···
 	/* Each pass through this loop initializes one srcu_node structure. */
 	srcu_for_each_node_breadth_first(ssp, snp) {
-		spin_lock_init(&ACCESS_PRIVATE(snp, lock));
+		raw_spin_lock_init(&ACCESS_PRIVATE(snp, lock));
 		BUILD_BUG_ON(ARRAY_SIZE(snp->srcu_have_cbs) !=
 			     ARRAY_SIZE(snp->srcu_data_have_cbs));
 		for (i = 0; i < ARRAY_SIZE(snp->srcu_have_cbs); i++) {
···
 	if (!ssp->srcu_sup)
 		return -ENOMEM;
 	if (!is_static)
-		spin_lock_init(&ACCESS_PRIVATE(ssp->srcu_sup, lock));
+		raw_spin_lock_init(&ACCESS_PRIVATE(ssp->srcu_sup, lock));
 	ssp->srcu_sup->srcu_size_state = SRCU_SIZE_SMALL;
 	ssp->srcu_sup->node = NULL;
 	mutex_init(&ssp->srcu_sup->srcu_cb_mutex);
···
 	mutex_init(&ssp->srcu_sup->srcu_barrier_mutex);
 	atomic_set(&ssp->srcu_sup->srcu_barrier_cpu_cnt, 0);
 	INIT_DELAYED_WORK(&ssp->srcu_sup->work, process_srcu);
+	init_irq_work(&ssp->srcu_sup->irq_work, srcu_irq_work);
 	ssp->srcu_sup->sda_is_static = is_static;
 	if (!is_static) {
 		ssp->sda = alloc_percpu(struct srcu_data);
···
 	ssp->srcu_sup->srcu_gp_seq_needed_exp = SRCU_GP_SEQ_INITIAL_VAL;
 	ssp->srcu_sup->srcu_last_gp_end = ktime_get_mono_fast_ns();
 	if (READ_ONCE(ssp->srcu_sup->srcu_size_state) == SRCU_SIZE_SMALL && SRCU_SIZING_IS_INIT()) {
-		if (!init_srcu_struct_nodes(ssp, is_static ? GFP_ATOMIC : GFP_KERNEL))
+		if (!preemptible())
+			WRITE_ONCE(ssp->srcu_sup->srcu_size_state, SRCU_SIZE_ALLOC);
+		else if (init_srcu_struct_nodes(ssp, GFP_KERNEL))
+			WRITE_ONCE(ssp->srcu_sup->srcu_size_state, SRCU_SIZE_BIG);
+		else
 			goto err_free_sda;
-		WRITE_ONCE(ssp->srcu_sup->srcu_size_state, SRCU_SIZE_BIG);
 	}
 	ssp->srcu_sup->srcu_ssp = ssp;
 	smp_store_release(&ssp->srcu_sup->srcu_gp_seq_needed,
···
 	/* Double-checked locking on ->srcu_size-state. */
 	if (smp_load_acquire(&ssp->srcu_sup->srcu_size_state) != SRCU_SIZE_SMALL)
 		return;
-	spin_lock_irqsave_rcu_node(ssp->srcu_sup, flags);
+	raw_spin_lock_irqsave_rcu_node(ssp->srcu_sup, flags);
 	if (smp_load_acquire(&ssp->srcu_sup->srcu_size_state) != SRCU_SIZE_SMALL) {
-		spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags);
+		raw_spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags);
 		return;
 	}
 	__srcu_transition_to_big(ssp);
-	spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags);
+	raw_spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags);
 }

 /*
  * Check to see if the just-encountered contention event justifies
  * a transition to SRCU_SIZE_BIG.
  */
-static void spin_lock_irqsave_check_contention(struct srcu_struct *ssp)
+static void raw_spin_lock_irqsave_check_contention(struct srcu_struct *ssp)
 {
 	unsigned long j;
···
  * to SRCU_SIZE_BIG.  But only if the srcutree.convert_to_big module
  * parameter permits this.
  */
-static void spin_lock_irqsave_sdp_contention(struct srcu_data *sdp, unsigned long *flags)
+static void raw_spin_lock_irqsave_sdp_contention(struct srcu_data *sdp, unsigned long *flags)
 {
 	struct srcu_struct *ssp = sdp->ssp;

-	if (spin_trylock_irqsave_rcu_node(sdp, *flags))
+	if (raw_spin_trylock_irqsave_rcu_node(sdp, *flags))
 		return;
-	spin_lock_irqsave_rcu_node(ssp->srcu_sup, *flags);
-	spin_lock_irqsave_check_contention(ssp);
-	spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, *flags);
-	spin_lock_irqsave_rcu_node(sdp, *flags);
+	raw_spin_lock_irqsave_rcu_node(ssp->srcu_sup, *flags);
+	raw_spin_lock_irqsave_check_contention(ssp);
+	raw_spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, *flags);
+	raw_spin_lock_irqsave_rcu_node(sdp, *flags);
 }

 /*
···
  * to SRCU_SIZE_BIG.  But only if the srcutree.convert_to_big module
  * parameter permits this.
  */
-static void spin_lock_irqsave_ssp_contention(struct srcu_struct *ssp, unsigned long *flags)
+static void raw_spin_lock_irqsave_ssp_contention(struct srcu_struct *ssp, unsigned long *flags)
 {
-	if (spin_trylock_irqsave_rcu_node(ssp->srcu_sup, *flags))
+	if (raw_spin_trylock_irqsave_rcu_node(ssp->srcu_sup, *flags))
 		return;
-	spin_lock_irqsave_rcu_node(ssp->srcu_sup, *flags);
-	spin_lock_irqsave_check_contention(ssp);
+	raw_spin_lock_irqsave_rcu_node(ssp->srcu_sup, *flags);
+	raw_spin_lock_irqsave_check_contention(ssp);
 }

 /*
···
 	/* The smp_load_acquire() pairs with the smp_store_release(). */
 	if (!rcu_seq_state(smp_load_acquire(&ssp->srcu_sup->srcu_gp_seq_needed))) /*^^^*/
 		return; /* Already initialized. */
-	spin_lock_irqsave_rcu_node(ssp->srcu_sup, flags);
+	raw_spin_lock_irqsave_rcu_node(ssp->srcu_sup, flags);
 	if (!rcu_seq_state(ssp->srcu_sup->srcu_gp_seq_needed)) {
-		spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags);
+		raw_spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags);
 		return;
 	}
 	init_srcu_struct_fields(ssp, true);
-	spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags);
+	raw_spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags);
 }

 /*
···
 	unsigned long delay;
 	struct srcu_usage *sup = ssp->srcu_sup;

-	spin_lock_irq_rcu_node(ssp->srcu_sup);
+	raw_spin_lock_irq_rcu_node(ssp->srcu_sup);
 	delay = srcu_get_delay(ssp);
-	spin_unlock_irq_rcu_node(ssp->srcu_sup);
+	raw_spin_unlock_irq_rcu_node(ssp->srcu_sup);
 	if (WARN_ON(!delay))
 		return; /* Just leak it! */
 	if (WARN_ON(srcu_readers_active(ssp)))
 		return; /* Just leak it! */
+	/* Wait for irq_work to finish first as it may queue a new work. */
+	irq_work_sync(&sup->irq_work);
 	flush_delayed_work(&sup->work);
 	for_each_possible_cpu(cpu) {
 		struct srcu_data *sdp = per_cpu_ptr(ssp->sda, cpu);
···
 	mutex_lock(&sup->srcu_cb_mutex);

 	/* End the current grace period. */
-	spin_lock_irq_rcu_node(sup);
+	raw_spin_lock_irq_rcu_node(sup);
 	idx = rcu_seq_state(sup->srcu_gp_seq);
 	WARN_ON_ONCE(idx != SRCU_STATE_SCAN2);
 	if (srcu_gp_is_expedited(ssp))
···
 	gpseq = rcu_seq_current(&sup->srcu_gp_seq);
 	if (ULONG_CMP_LT(sup->srcu_gp_seq_needed_exp, gpseq))
 		WRITE_ONCE(sup->srcu_gp_seq_needed_exp, gpseq);
-	spin_unlock_irq_rcu_node(sup);
+	raw_spin_unlock_irq_rcu_node(sup);
 	mutex_unlock(&sup->srcu_gp_mutex);
 	/* A new grace period can start at this point.  But only one. */
···
 	} else {
 		idx = rcu_seq_ctr(gpseq) % ARRAY_SIZE(snp->srcu_have_cbs);
 		srcu_for_each_node_breadth_first(ssp, snp) {
-			spin_lock_irq_rcu_node(snp);
+			raw_spin_lock_irq_rcu_node(snp);
 			cbs = false;
 			last_lvl = snp >= sup->level[rcu_num_lvls - 1];
 			if (last_lvl)
···
 			else
 				mask = snp->srcu_data_have_cbs[idx];
 			snp->srcu_data_have_cbs[idx] = 0;
-			spin_unlock_irq_rcu_node(snp);
+			raw_spin_unlock_irq_rcu_node(snp);
 			if (cbs)
 				srcu_schedule_cbs_snp(ssp, snp, mask, cbdelay);
 		}
···
 	if (!(gpseq & counter_wrap_check))
 		for_each_possible_cpu(cpu) {
 			sdp = per_cpu_ptr(ssp->sda, cpu);
-			spin_lock_irq_rcu_node(sdp);
+			raw_spin_lock_irq_rcu_node(sdp);
 			if (ULONG_CMP_GE(gpseq, sdp->srcu_gp_seq_needed + 100))
 				sdp->srcu_gp_seq_needed = gpseq;
 			if (ULONG_CMP_GE(gpseq, sdp->srcu_gp_seq_needed_exp + 100))
 				sdp->srcu_gp_seq_needed_exp = gpseq;
-			spin_unlock_irq_rcu_node(sdp);
+			raw_spin_unlock_irq_rcu_node(sdp);
 		}

 	/* Callback initiation done, allow grace periods after next. */
 	mutex_unlock(&sup->srcu_cb_mutex);

 	/* Start a new grace period if needed. */
-	spin_lock_irq_rcu_node(sup);
+	raw_spin_lock_irq_rcu_node(sup);
 	gpseq = rcu_seq_current(&sup->srcu_gp_seq);
 	if (!rcu_seq_state(gpseq) &&
 	    ULONG_CMP_LT(gpseq, sup->srcu_gp_seq_needed)) {
 		srcu_gp_start(ssp);
-		spin_unlock_irq_rcu_node(sup);
+		raw_spin_unlock_irq_rcu_node(sup);
 		srcu_reschedule(ssp, 0);
 	} else {
-		spin_unlock_irq_rcu_node(sup);
+		raw_spin_unlock_irq_rcu_node(sup);
 	}

 	/* Transition to big if needed. */
···
 		if (WARN_ON_ONCE(rcu_seq_done(&ssp->srcu_sup->srcu_gp_seq, s)) ||
 		    (!srcu_invl_snp_seq(sgsne) && ULONG_CMP_GE(sgsne, s)))
 			return;
-		spin_lock_irqsave_rcu_node(snp, flags);
+		raw_spin_lock_irqsave_rcu_node(snp, flags);
 		sgsne = snp->srcu_gp_seq_needed_exp;
 		if (!srcu_invl_snp_seq(sgsne) && ULONG_CMP_GE(sgsne, s)) {
-			spin_unlock_irqrestore_rcu_node(snp, flags);
+			raw_spin_unlock_irqrestore_rcu_node(snp, flags);
 			return;
 		}
 		WRITE_ONCE(snp->srcu_gp_seq_needed_exp, s);
-		spin_unlock_irqrestore_rcu_node(snp, flags);
+		raw_spin_unlock_irqrestore_rcu_node(snp, flags);
 	}
-	spin_lock_irqsave_ssp_contention(ssp, &flags);
+	raw_spin_lock_irqsave_ssp_contention(ssp, &flags);
 	if (ULONG_CMP_LT(ssp->srcu_sup->srcu_gp_seq_needed_exp, s))
 		WRITE_ONCE(ssp->srcu_sup->srcu_gp_seq_needed_exp, s);
-	spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags);
+	raw_spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags);
 }

 /*
···
 	for (snp = snp_leaf; snp != NULL; snp = snp->srcu_parent) {
 		if (WARN_ON_ONCE(rcu_seq_done(&sup->srcu_gp_seq, s)) && snp != snp_leaf)
 			return; /* GP already done and CBs recorded. */
-		spin_lock_irqsave_rcu_node(snp, flags);
+		raw_spin_lock_irqsave_rcu_node(snp, flags);
 		snp_seq = snp->srcu_have_cbs[idx];
 		if (!srcu_invl_snp_seq(snp_seq) && ULONG_CMP_GE(snp_seq, s)) {
 			if (snp == snp_leaf && snp_seq == s)
 				snp->srcu_data_have_cbs[idx] |= sdp->grpmask;
-			spin_unlock_irqrestore_rcu_node(snp, flags);
+			raw_spin_unlock_irqrestore_rcu_node(snp, flags);
 			if (snp == snp_leaf && snp_seq != s) {
 				srcu_schedule_cbs_sdp(sdp, do_norm ? SRCU_INTERVAL : 0);
 				return;
···
 		sgsne = snp->srcu_gp_seq_needed_exp;
 		if (!do_norm && (srcu_invl_snp_seq(sgsne) || ULONG_CMP_LT(sgsne, s)))
 			WRITE_ONCE(snp->srcu_gp_seq_needed_exp, s);
-		spin_unlock_irqrestore_rcu_node(snp, flags);
+		raw_spin_unlock_irqrestore_rcu_node(snp, flags);
 	}

 	/* Top of tree, must ensure the grace period will be started. */
-	spin_lock_irqsave_ssp_contention(ssp, &flags);
+	raw_spin_lock_irqsave_ssp_contention(ssp, &flags);
 	if (ULONG_CMP_LT(sup->srcu_gp_seq_needed, s)) {
 		/*
 		 * Record need for grace period s.  Pair with load
···
 		// it isn't.  And it does not have to be.  After all, it
 		// can only be executed during early boot when there is only
 		// the one boot CPU running with interrupts still disabled.
+		//
+		// Use an irq_work here to avoid acquiring runqueue lock with
+		// srcu rcu_node::lock held. BPF instrument could introduce the
+		// opposite dependency, hence we need to break the possible
+		// locking dependency here.
 		if (likely(srcu_init_done))
-			queue_delayed_work(rcu_gp_wq, &sup->work,
-					   !!srcu_get_delay(ssp));
+			irq_work_queue(&sup->irq_work);
 		else if (list_empty(&sup->work.work.entry))
 			list_add(&sup->work.work.entry, &srcu_boot_list);
 	}
-	spin_unlock_irqrestore_rcu_node(sup, flags);
+	raw_spin_unlock_irqrestore_rcu_node(sup, flags);
 }

 /*
···
 {
 	unsigned long curdelay;

-	spin_lock_irq_rcu_node(ssp->srcu_sup);
+	raw_spin_lock_irq_rcu_node(ssp->srcu_sup);
 	curdelay = !srcu_get_delay(ssp);
-	spin_unlock_irq_rcu_node(ssp->srcu_sup);
+	raw_spin_unlock_irq_rcu_node(ssp->srcu_sup);

 	for (;;) {
 		if (srcu_readers_active_idx_check(ssp, idx))
···
 		return false;
 	/* If the local srcu_data structure has callbacks, not idle. */
 	sdp = raw_cpu_ptr(ssp->sda);
-	spin_lock_irqsave_rcu_node(sdp, flags);
+	raw_spin_lock_irqsave_rcu_node(sdp, flags);
 	if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
-		spin_unlock_irqrestore_rcu_node(sdp, flags);
+		raw_spin_unlock_irqrestore_rcu_node(sdp, flags);
 		return false; /* Callbacks already present, so not idle. */
 	}
-	spin_unlock_irqrestore_rcu_node(sdp, flags);
+	raw_spin_unlock_irqrestore_rcu_node(sdp, flags);

 	/*
 	 * No local callbacks, so probabilistically probe global state.
···
 		sdp = per_cpu_ptr(ssp->sda, get_boot_cpu_id());
 	else
 		sdp = raw_cpu_ptr(ssp->sda);
-	spin_lock_irqsave_sdp_contention(sdp, &flags);
+	raw_spin_lock_irqsave_sdp_contention(sdp, &flags);
 	if (rhp)
 		rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
 	/*
···
 		sdp->srcu_gp_seq_needed_exp = s;
 		needexp = true;
 	}
-	spin_unlock_irqrestore_rcu_node(sdp, flags);
+	raw_spin_unlock_irqrestore_rcu_node(sdp, flags);

 	/* Ensure that snp node tree is fully initialized before traversing it */
 	if (ss_state < SRCU_SIZE_WAIT_BARRIER)
···
 	/*
 	 * Make sure that later code is ordered after the SRCU grace
-	 * period.  This pairs with the spin_lock_irq_rcu_node()
+	 * period.  This pairs with the raw_spin_lock_irq_rcu_node()
 	 * in srcu_invoke_callbacks().  Unlike Tree RCU, this is needed
 	 * because the current CPU might have been totally uninvolved with
 	 * (and thus unordered against) that grace period.
···
 */
static void srcu_barrier_one_cpu(struct srcu_struct *ssp, struct srcu_data *sdp)
{
-	spin_lock_irq_rcu_node(sdp);
+	raw_spin_lock_irq_rcu_node(sdp);
 	atomic_inc(&ssp->srcu_sup->srcu_barrier_cpu_cnt);
 	sdp->srcu_barrier_head.func = srcu_barrier_cb;
 	debug_rcu_head_queue(&sdp->srcu_barrier_head);
···
 		debug_rcu_head_unqueue(&sdp->srcu_barrier_head);
 		atomic_dec(&ssp->srcu_sup->srcu_barrier_cpu_cnt);
 	}
-	spin_unlock_irq_rcu_node(sdp);
+	raw_spin_unlock_irq_rcu_node(sdp);
}

/**
···
 	bool needcb = false;
 	struct srcu_data *sdp = container_of(rhp, struct srcu_data, srcu_ec_head);

-	spin_lock_irqsave_sdp_contention(sdp, &flags);
+	raw_spin_lock_irqsave_sdp_contention(sdp, &flags);
 	if (sdp->srcu_ec_state == SRCU_EC_IDLE) {
 		WARN_ON_ONCE(1);
 	} else if (sdp->srcu_ec_state == SRCU_EC_PENDING) {
···
 		sdp->srcu_ec_state = SRCU_EC_PENDING;
 		needcb = true;
 	}
-	spin_unlock_irqrestore_rcu_node(sdp, flags);
+	raw_spin_unlock_irqrestore_rcu_node(sdp, flags);
 	// If needed, requeue ourselves as an expedited SRCU callback.
 	if (needcb)
 		__call_srcu(sdp->ssp, &sdp->srcu_ec_head, srcu_expedite_current_cb, false);
···
 	migrate_disable();
 	sdp = this_cpu_ptr(ssp->sda);
-	spin_lock_irqsave_sdp_contention(sdp, &flags);
+	raw_spin_lock_irqsave_sdp_contention(sdp, &flags);
 	if (sdp->srcu_ec_state == SRCU_EC_IDLE) {
 		sdp->srcu_ec_state = SRCU_EC_PENDING;
 		needcb = true;
···
 	} else {
 		WARN_ON_ONCE(sdp->srcu_ec_state != SRCU_EC_REPOST);
 	}
-	spin_unlock_irqrestore_rcu_node(sdp, flags);
+	raw_spin_unlock_irqrestore_rcu_node(sdp, flags);
 	// If needed, queue an expedited SRCU callback.
 	if (needcb)
 		__call_srcu(ssp, &sdp->srcu_ec_head, srcu_expedite_current_cb, false);
···
 	 */
 	idx = rcu_seq_state(smp_load_acquire(&ssp->srcu_sup->srcu_gp_seq)); /* ^^^ */
 	if (idx == SRCU_STATE_IDLE) {
-		spin_lock_irq_rcu_node(ssp->srcu_sup);
+		raw_spin_lock_irq_rcu_node(ssp->srcu_sup);
 		if (ULONG_CMP_GE(ssp->srcu_sup->srcu_gp_seq, ssp->srcu_sup->srcu_gp_seq_needed)) {
 			WARN_ON_ONCE(rcu_seq_state(ssp->srcu_sup->srcu_gp_seq));
-			spin_unlock_irq_rcu_node(ssp->srcu_sup);
+			raw_spin_unlock_irq_rcu_node(ssp->srcu_sup);
 			mutex_unlock(&ssp->srcu_sup->srcu_gp_mutex);
 			return;
 		}
 		idx = rcu_seq_state(READ_ONCE(ssp->srcu_sup->srcu_gp_seq));
 		if (idx == SRCU_STATE_IDLE)
 			srcu_gp_start(ssp);
-		spin_unlock_irq_rcu_node(ssp->srcu_sup);
+		raw_spin_unlock_irq_rcu_node(ssp->srcu_sup);
 		if (idx != SRCU_STATE_IDLE) {
 			mutex_unlock(&ssp->srcu_sup->srcu_gp_mutex);
 			return; /* Someone else started the grace period. */
···
 			return; /* readers present, retry later. */
 		}
 		srcu_flip(ssp);
-		spin_lock_irq_rcu_node(ssp->srcu_sup);
+		raw_spin_lock_irq_rcu_node(ssp->srcu_sup);
 		rcu_seq_set_state(&ssp->srcu_sup->srcu_gp_seq, SRCU_STATE_SCAN2);
 		ssp->srcu_sup->srcu_n_exp_nodelay = 0;
-		spin_unlock_irq_rcu_node(ssp->srcu_sup);
+		raw_spin_unlock_irq_rcu_node(ssp->srcu_sup);
 	}

 	if (rcu_seq_state(READ_ONCE(ssp->srcu_sup->srcu_gp_seq)) == SRCU_STATE_SCAN2) {
···
 	ssp = sdp->ssp;
 	rcu_cblist_init(&ready_cbs);
-	spin_lock_irq_rcu_node(sdp);
+	raw_spin_lock_irq_rcu_node(sdp);
 	WARN_ON_ONCE(!rcu_segcblist_segempty(&sdp->srcu_cblist, RCU_NEXT_TAIL));
 	rcu_segcblist_advance(&sdp->srcu_cblist,
 			      rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq));
···
 	 */
 	if (sdp->srcu_cblist_invoking ||
 	    !rcu_segcblist_ready_cbs(&sdp->srcu_cblist)) {
-		spin_unlock_irq_rcu_node(sdp);
+		raw_spin_unlock_irq_rcu_node(sdp);
 		return; /* Someone else on the job or nothing to do. */
 	}
···
 	sdp->srcu_cblist_invoking = true;
 	rcu_segcblist_extract_done_cbs(&sdp->srcu_cblist, &ready_cbs);
 	len = ready_cbs.len;
-	spin_unlock_irq_rcu_node(sdp);
+	raw_spin_unlock_irq_rcu_node(sdp);
 	rhp = rcu_cblist_dequeue(&ready_cbs);
 	for (; rhp != NULL; rhp = rcu_cblist_dequeue(&ready_cbs)) {
 		debug_rcu_head_unqueue(rhp);
···
 	 * Update counts, accelerate new callbacks, and if needed,
 	 * schedule another round of callback invocation.
 	 */
-	spin_lock_irq_rcu_node(sdp);
+	raw_spin_lock_irq_rcu_node(sdp);
 	rcu_segcblist_add_len(&sdp->srcu_cblist, -len);
 	sdp->srcu_cblist_invoking = false;
 	more = rcu_segcblist_ready_cbs(&sdp->srcu_cblist);
-	spin_unlock_irq_rcu_node(sdp);
+	raw_spin_unlock_irq_rcu_node(sdp);
 	/* An SRCU barrier or callbacks from previous nesting work pending */
 	if (more)
 		srcu_schedule_cbs_sdp(sdp, 0);
···
 {
 	bool pushgp = true;

-	spin_lock_irq_rcu_node(ssp->srcu_sup);
+	raw_spin_lock_irq_rcu_node(ssp->srcu_sup);
 	if (ULONG_CMP_GE(ssp->srcu_sup->srcu_gp_seq, ssp->srcu_sup->srcu_gp_seq_needed)) {
 		if (!WARN_ON_ONCE(rcu_seq_state(ssp->srcu_sup->srcu_gp_seq))) {
 			/* All requests fulfilled, time to go idle. */
···
 		/* Outstanding request and no GP.  Start one. */
 		srcu_gp_start(ssp);
 	}
-	spin_unlock_irq_rcu_node(ssp->srcu_sup);
+	raw_spin_unlock_irq_rcu_node(ssp->srcu_sup);

 	if (pushgp)
 		queue_delayed_work(rcu_gp_wq, &ssp->srcu_sup->work, delay);
···
 	ssp = sup->srcu_ssp;

 	srcu_advance_state(ssp);
-	spin_lock_irq_rcu_node(ssp->srcu_sup);
+	raw_spin_lock_irq_rcu_node(ssp->srcu_sup);
 	curdelay = srcu_get_delay(ssp);
-	spin_unlock_irq_rcu_node(ssp->srcu_sup);
+	raw_spin_unlock_irq_rcu_node(ssp->srcu_sup);
 	if (curdelay) {
 		WRITE_ONCE(sup->reschedule_count, 0);
 	} else {
···
 		}
 	}
 	srcu_reschedule(ssp, curdelay);
+}
+
+static void srcu_irq_work(struct irq_work *work)
+{
+	struct srcu_struct *ssp;
+	struct srcu_usage *sup;
+	unsigned long delay;
+	unsigned long flags;
+
+	sup = container_of(work, struct srcu_usage, irq_work);
+	ssp = sup->srcu_ssp;
+
+	raw_spin_lock_irqsave_rcu_node(ssp->srcu_sup, flags);
+	delay = srcu_get_delay(ssp);
+	raw_spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags);
+
+	queue_delayed_work(rcu_gp_wq, &sup->work, !!delay);
 }

 void srcutorture_get_gp_data(struct srcu_struct *ssp, int *flags,
+2-2
kernel/trace/ftrace.c
···
6606 6606 	if (!orig_hash)
6607 6607 		goto unlock;
6608 6608 
6609 6609 -	/* Enable the tmp_ops to have the same functions as the direct ops */
6609 6609 +	/* Enable the tmp_ops to have the same functions as the hash object. */
6610 6610 	ftrace_ops_init(&tmp_ops);
6611 6611 -	tmp_ops.func_hash = ops->func_hash;
6611 6611 +	tmp_ops.func_hash->filter_hash = hash;
6612 6612 
6613 6613 	err = register_ftrace_function_nolock(&tmp_ops);
6614 6614 	if (err)
···555555 lockdep_assert_held(&event_mutex);556556557557 if (enabled) {558558- if (!list_empty(&tr->marker_list))558558+ if (tr->trace_flags & TRACE_ITER(COPY_MARKER))559559 return false;560560561561 list_add_rcu(&tr->marker_list, &marker_copies);···563563 return true;564564 }565565566566- if (list_empty(&tr->marker_list))566566+ if (!(tr->trace_flags & TRACE_ITER(COPY_MARKER)))567567 return false;568568569569- list_del_init(&tr->marker_list);569569+ list_del_rcu(&tr->marker_list);570570 tr->trace_flags &= ~TRACE_ITER(COPY_MARKER);571571 return true;572572}···6784678467856785 do {67866786 /*67876787+ * It is possible that something is trying to migrate this67886788+ * task. What happens then, is when preemption is enabled,67896789+ * the migration thread will preempt this task, try to67906790+ * migrate it, fail, then let it run again. That will67916791+ * cause this to loop again and never succeed.67926792+ * On failures, enabled and disable preemption with67936793+ * migration enabled, to allow the migration thread to67946794+ * migrate this task.67956795+ */67966796+ if (trys) {67976797+ preempt_enable_notrace();67986798+ preempt_disable_notrace();67996799+ cpu = smp_processor_id();68006800+ buffer = per_cpu_ptr(tinfo->tbuf, cpu)->buf;68016801+ }68026802+68036803+ /*67876804 * If for some reason, copy_from_user() always causes a context67886805 * switch, this would then cause an infinite loop.67896806 * If this task is preempted by another user space task, it···9761974497629745 list_del(&tr->list);9763974697479747+ if (printk_trace == tr)97489748+ update_printk_trace(&global_trace);97499749+97509750+ /* Must be done before disabling all the flags */97519751+ if (update_marker_trace(tr, 0))97529752+ synchronize_rcu();97539753+97649754 /* Disable all the flags that were enabled coming in */97659755 for (i = 0; i < TRACE_FLAGS_MAX_SIZE; i++) {97669756 if ((1ULL << i) & ZEROED_TRACE_FLAGS)97679757 set_tracer_flag(tr, 1ULL << i, 0);97689758 }97699769-97709770- if 
(printk_trace == tr)97719771- update_printk_trace(&global_trace);97729772-97739773- if (update_marker_trace(tr, 0))97749774- synchronize_rcu();9775975997769760 tracing_set_nop(tr);97779761 clear_ftrace_function_probes(tr);
+2-1
lib/bootconfig.c
···
723 723 	if (op == ':') {
724 724 		unsigned short nidx = child->next;
725 725 
726 726 -		xbc_init_node(child, v, XBC_VALUE);
726 726 +		if (xbc_init_node(child, v, XBC_VALUE) < 0)
727 727 +			return xbc_parse_error("Failed to override value", v);
727 728 		child->next = nidx; /* keep subkeys */
728 729 		goto array;
729 730 	}
···145145 return 0;146146}147147148148+struct damon_stat_system_ram_range_walk_arg {149149+ bool walked;150150+ struct resource res;151151+};152152+153153+static int damon_stat_system_ram_walk_fn(struct resource *res, void *arg)154154+{155155+ struct damon_stat_system_ram_range_walk_arg *a = arg;156156+157157+ if (!a->walked) {158158+ a->walked = true;159159+ a->res.start = res->start;160160+ }161161+ a->res.end = res->end;162162+ return 0;163163+}164164+165165+static unsigned long damon_stat_res_to_core_addr(resource_size_t ra,166166+ unsigned long addr_unit)167167+{168168+ /*169169+ * Use div_u64() for avoiding linking errors related with __udivdi3,170170+ * __aeabi_uldivmod, or similar problems. This should also improve the171171+ * performance optimization (read div_u64() comment for the detail).172172+ */173173+ if (sizeof(ra) == 8 && sizeof(addr_unit) == 4)174174+ return div_u64(ra, addr_unit);175175+ return ra / addr_unit;176176+}177177+178178+static int damon_stat_set_monitoring_region(struct damon_target *t,179179+ unsigned long addr_unit, unsigned long min_region_sz)180180+{181181+ struct damon_addr_range addr_range;182182+ struct damon_stat_system_ram_range_walk_arg arg = {};183183+184184+ walk_system_ram_res(0, -1, &arg, damon_stat_system_ram_walk_fn);185185+ if (!arg.walked)186186+ return -EINVAL;187187+ addr_range.start = damon_stat_res_to_core_addr(188188+ arg.res.start, addr_unit);189189+ addr_range.end = damon_stat_res_to_core_addr(190190+ arg.res.end + 1, addr_unit);191191+ if (addr_range.end <= addr_range.start)192192+ return -EINVAL;193193+ return damon_set_regions(t, &addr_range, 1, min_region_sz);194194+}195195+148196static struct damon_ctx *damon_stat_build_ctx(void)149197{150198 struct damon_ctx *ctx;151199 struct damon_attrs attrs;152200 struct damon_target *target;153153- unsigned long start = 0, end = 0;154201155202 ctx = damon_new_ctx();156203 if (!ctx)···227180 if (!target)228181 goto free_out;229182 damon_add_target(ctx, 
target);230230- if (damon_set_region_biggest_system_ram_default(target, &start, &end,231231- ctx->min_region_sz))183183+ if (damon_stat_set_monitoring_region(target, ctx->addr_unit,184184+ ctx->min_region_sz))232185 goto free_out;233186 return ctx;234187free_out:
+2-2
mm/hmm.c
···
778 778 	struct page *page = hmm_pfn_to_page(pfns[idx]);
779 779 	phys_addr_t paddr = hmm_pfn_to_phys(pfns[idx]);
780 780 	size_t offset = idx * map->dma_entry_size;
781 781 -	unsigned long attrs = 0;
781 781 +	unsigned long attrs = DMA_ATTR_REQUIRE_COHERENT;
782 782 	dma_addr_t dma_addr;
783 783 	int ret;
784 784 
···
871 871 	struct dma_iova_state *state = &map->state;
872 872 	dma_addr_t *dma_addrs = map->dma_list;
873 873 	unsigned long *pfns = map->pfn_list;
874 874 -	unsigned long attrs = 0;
874 874 +	unsigned long attrs = DMA_ATTR_REQUIRE_COHERENT;
875 875 
876 876 	if ((pfns[idx] & valid_dma) != valid_dma)
877 877 		return false;
+7
mm/rmap.c
···
457 457 		list_del(&avc->same_vma);
458 458 		anon_vma_chain_free(avc);
459 459 	}
460 460 +
461 461 +	/*
462 462 +	 * The anon_vma assigned to this VMA is no longer valid, as we were not
463 463 +	 * able to correctly clone AVC state. Avoid inconsistent anon_vma tree
464 464 +	 * state by resetting.
465 465 +	 */
466 466 +	vma->anon_vma = NULL;
460 467 }
461 468 
462 469 /**
···
3095 3095 	 * hci_connect_le serializes the connection attempts so only one
3096 3096 	 * connection can be in BT_CONNECT at time.
3097 3097 	 */
3098 3098 -	if (conn->state == BT_CONNECT && hdev->req_status == HCI_REQ_PEND) {
3098 3098 +	if (conn->state == BT_CONNECT && READ_ONCE(hdev->req_status) == HCI_REQ_PEND) {
3099 3099 		switch (hci_skb_event(hdev->sent_cmd)) {
3100 3100 		case HCI_EV_CONN_COMPLETE:
3101 3101 		case HCI_EV_LE_CONN_COMPLETE:
···2525{2626 bt_dev_dbg(hdev, "result 0x%2.2x", result);27272828- if (hdev->req_status != HCI_REQ_PEND)2828+ if (READ_ONCE(hdev->req_status) != HCI_REQ_PEND)2929 return;30303131 hdev->req_result = result;3232- hdev->req_status = HCI_REQ_DONE;3232+ WRITE_ONCE(hdev->req_status, HCI_REQ_DONE);33333434 /* Free the request command so it is not used as response */3535 kfree_skb(hdev->req_skb);···167167168168 hci_cmd_sync_add(&req, opcode, plen, param, event, sk);169169170170- hdev->req_status = HCI_REQ_PEND;170170+ WRITE_ONCE(hdev->req_status, HCI_REQ_PEND);171171172172 err = hci_req_sync_run(&req);173173 if (err < 0)174174 return ERR_PTR(err);175175176176 err = wait_event_interruptible_timeout(hdev->req_wait_q,177177- hdev->req_status != HCI_REQ_PEND,177177+ READ_ONCE(hdev->req_status) != HCI_REQ_PEND,178178 timeout);179179180180 if (err == -ERESTARTSYS)181181 return ERR_PTR(-EINTR);182182183183- switch (hdev->req_status) {183183+ switch (READ_ONCE(hdev->req_status)) {184184 case HCI_REQ_DONE:185185 err = -bt_to_errno(hdev->req_result);186186 break;···194194 break;195195 }196196197197- hdev->req_status = 0;197197+ WRITE_ONCE(hdev->req_status, 0);198198 hdev->req_result = 0;199199 skb = hdev->req_rsp;200200 hdev->req_rsp = NULL;···665665{666666 bt_dev_dbg(hdev, "err 0x%2.2x", err);667667668668- if (hdev->req_status == HCI_REQ_PEND) {668668+ if (READ_ONCE(hdev->req_status) == HCI_REQ_PEND) {669669 hdev->req_result = err;670670- hdev->req_status = HCI_REQ_CANCELED;670670+ WRITE_ONCE(hdev->req_status, HCI_REQ_CANCELED);671671672672 queue_work(hdev->workqueue, &hdev->cmd_sync_cancel_work);673673 }···683683{684684 bt_dev_dbg(hdev, "err 0x%2.2x", err);685685686686- if (hdev->req_status == HCI_REQ_PEND) {686686+ if (READ_ONCE(hdev->req_status) == HCI_REQ_PEND) {687687 /* req_result is __u32 so error must be positive to be properly688688 * propagated.689689 */690690 hdev->req_result = err < 0 ? 
-err : err;691691- hdev->req_status = HCI_REQ_CANCELED;691691+ WRITE_ONCE(hdev->req_status, HCI_REQ_CANCELED);692692693693 wake_up_interruptible(&hdev->req_wait_q);694694 }
+53-18
net/bluetooth/l2cap_core.c
···926926927927static int l2cap_get_ident(struct l2cap_conn *conn)928928{929929+ u8 max;930930+ int ident;931931+929932 /* LE link does not support tools like l2ping so use the full range */930933 if (conn->hcon->type == LE_LINK)931931- return ida_alloc_range(&conn->tx_ida, 1, 255, GFP_ATOMIC);932932-934934+ max = 255;933935 /* Get next available identificator.934936 * 1 - 128 are used by kernel.935937 * 129 - 199 are reserved.936938 * 200 - 254 are used by utilities like l2ping, etc.937939 */938938- return ida_alloc_range(&conn->tx_ida, 1, 128, GFP_ATOMIC);940940+ else941941+ max = 128;942942+943943+ /* Allocate ident using min as last used + 1 (cyclic) */944944+ ident = ida_alloc_range(&conn->tx_ida, READ_ONCE(conn->tx_ident) + 1,945945+ max, GFP_ATOMIC);946946+ /* Force min 1 to start over */947947+ if (ident <= 0) {948948+ ident = ida_alloc_range(&conn->tx_ida, 1, max, GFP_ATOMIC);949949+ if (ident <= 0) {950950+ /* If all idents are in use, log an error, this is951951+ * extremely unlikely to happen and would indicate a bug952952+ * in the code that idents are not being freed properly.953953+ */954954+ BT_ERR("Unable to allocate ident: %d", ident);955955+ return 0;956956+ }957957+ }958958+959959+ WRITE_ONCE(conn->tx_ident, ident);960960+961961+ return ident;939962}940963941964static void l2cap_send_acl(struct l2cap_conn *conn, struct sk_buff *skb,···1771174817721749 BT_DBG("hcon %p conn %p, err %d", hcon, conn, err);1773175017511751+ disable_delayed_work_sync(&conn->info_timer);17521752+ disable_delayed_work_sync(&conn->id_addr_timer);17531753+17741754 mutex_lock(&conn->lock);1775175517761756 kfree_skb(conn->rx_skb);···17881762 cancel_work_sync(&conn->pending_rx_work);1789176317901764 ida_destroy(&conn->tx_ida);17911791-17921792- cancel_delayed_work_sync(&conn->id_addr_timer);1793176517941766 l2cap_unregister_all_users(conn);17951767···18061782 l2cap_chan_unlock(chan);18071783 l2cap_chan_put(chan);18081784 }18091809-18101810- if (conn->info_state & 
L2CAP_INFO_FEAT_MASK_REQ_SENT)18111811- cancel_delayed_work_sync(&conn->info_timer);1812178518131786 hci_chan_del(conn->hchan);18141787 conn->hchan = NULL;···2397237623982377 /* Remote device may have requested smaller PDUs */23992378 pdu_len = min_t(size_t, pdu_len, chan->remote_mps);23792379+23802380+ if (!pdu_len)23812381+ return -EINVAL;2400238224012383 if (len <= pdu_len) {24022384 sar = L2CAP_SAR_UNSEGMENTED;···43364312 if (test_bit(CONF_INPUT_DONE, &chan->conf_state)) {43374313 set_default_fcs(chan);4338431443394339- if (chan->mode == L2CAP_MODE_ERTM ||43404340- chan->mode == L2CAP_MODE_STREAMING)43414341- err = l2cap_ertm_init(chan);43154315+ if (chan->state != BT_CONNECTED) {43164316+ if (chan->mode == L2CAP_MODE_ERTM ||43174317+ chan->mode == L2CAP_MODE_STREAMING)43184318+ err = l2cap_ertm_init(chan);4342431943434343- if (err < 0)43444344- l2cap_send_disconn_req(chan, -err);43454345- else43464346- l2cap_chan_ready(chan);43204320+ if (err < 0)43214321+ l2cap_send_disconn_req(chan, -err);43224322+ else43234323+ l2cap_chan_ready(chan);43244324+ }4347432543484326 goto unlock;43494327 }···51075081 cmd_len -= sizeof(*req);51085082 num_scid = cmd_len / sizeof(u16);5109508351105110- /* Always respond with the same number of scids as in the request */51115111- rsp_len = cmd_len;51125112-51135084 if (num_scid > L2CAP_ECRED_MAX_CID) {51145085 result = L2CAP_CR_LE_INVALID_PARAMS;51155086 goto response;51165087 }50885088+50895089+ /* Always respond with the same number of scids as in the request */50905090+ rsp_len = cmd_len;5117509151185092 mtu = __le16_to_cpu(req->mtu);51195093 mps = __le16_to_cpu(req->mps);···66336607 struct l2cap_le_credits pkt;66346608 u16 return_credits = l2cap_le_rx_credits(chan);6635660966106610+ if (chan->mode != L2CAP_MODE_LE_FLOWCTL &&66116611+ chan->mode != L2CAP_MODE_EXT_FLOWCTL)66126612+ return;66136613+66366614 if (chan->rx_credits >= return_credits)66376615 return;66386616···6719668967206690 if (!chan->sdu) {67216691 u16 
sdu_len;66926692+66936693+ if (!pskb_may_pull(skb, L2CAP_SDULEN_SIZE)) {66946694+ err = -EINVAL;66956695+ goto failed;66966696+ }6722669767236698 sdu_len = get_unaligned_le16(skb->data);67246699 skb_pull(skb, L2CAP_SDULEN_SIZE);
···
5355 5355 	 * hci_adv_monitors_clear is about to be called which will take care of
5356 5356 	 * freeing the adv_monitor instances.
5357 5357 	 */
5358 5358 -	if (status == -ECANCELED && !mgmt_pending_valid(hdev, cmd))
5358 5358 +	if (status == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
5359 5359 		return;
5360 5360 
5361 5361 	monitor = cmd->user_data;
+7-3
net/bluetooth/sco.c
···401401 struct sock *sk;402402403403 sco_conn_lock(conn);404404- sk = conn->sk;404404+ sk = sco_sock_hold(conn);405405 sco_conn_unlock(conn);406406407407 if (!sk)···410410 BT_DBG("sk %p len %u", sk, skb->len);411411412412 if (sk->sk_state != BT_CONNECTED)413413- goto drop;413413+ goto drop_put;414414415415- if (!sock_queue_rcv_skb(sk, skb))415415+ if (!sock_queue_rcv_skb(sk, skb)) {416416+ sock_put(sk);416417 return;418418+ }417419420420+drop_put:421421+ sock_put(sk);418422drop:419423 kfree_skb(skb);420424}
···
395 395 
396 396 static bool expect_iter_me(struct nf_conntrack_expect *exp, void *data)
397 397 {
398 398 -	struct nf_conn_help *help = nfct_help(exp->master);
399 398 	const struct nf_conntrack_helper *me = data;
400 399 	const struct nf_conntrack_helper *this;
401 400 
402 402 -	if (exp->helper == me)
403 403 -		return true;
404 404 -
405 405 -	this = rcu_dereference_protected(help->helper,
401 401 +	this = rcu_dereference_protected(exp->helper,
406 402 					 lockdep_is_held(&nf_conntrack_expect_lock));
407 403 	return this == me;
408 404 }
···
417 421 
418 422 	nf_ct_expect_iterate_destroy(expect_iter_me, NULL);
419 423 	nf_ct_iterate_destroy(unhelp, me);
424 424 +
425 425 +	/* nf_ct_iterate_destroy() does an unconditional synchronize_rcu() as
426 426 +	 * last step, this ensures rcu readers of exp->helper are done.
427 427 +	 * No need for another synchronize_rcu() here.
428 428 +	 */
420 429 }
421 430 EXPORT_SYMBOL_GPL(nf_conntrack_helper_unregister);
422 431 
···654654655655 if (data_len) {656656 struct nlattr *nla;657657- int size = nla_attr_size(data_len);658657659659- if (skb_tailroom(inst->skb) < nla_total_size(data_len))658658+ nla = nla_reserve(inst->skb, NFULA_PAYLOAD, data_len);659659+ if (!nla)660660 goto nla_put_failure;661661-662662- nla = skb_put(inst->skb, nla_total_size(data_len));663663- nla->nla_type = NFULA_PAYLOAD;664664- nla->nla_len = size;665661666662 if (skb_copy_bits(skb, 0, nla_data(nla), data_len))667663 BUG();
+10-10
net/netfilter/nft_set_pipapo_avx2.c
···242242243243 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last);244244 if (last)245245- return b;245245+ ret = b;246246247247 if (unlikely(ret == -1))248248 ret = b / XSAVE_YMM_SIZE;···319319320320 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last);321321 if (last)322322- return b;322322+ ret = b;323323324324 if (unlikely(ret == -1))325325 ret = b / XSAVE_YMM_SIZE;···414414415415 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last);416416 if (last)417417- return b;417417+ ret = b;418418419419 if (unlikely(ret == -1))420420 ret = b / XSAVE_YMM_SIZE;···505505506506 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last);507507 if (last)508508- return b;508508+ ret = b;509509510510 if (unlikely(ret == -1))511511 ret = b / XSAVE_YMM_SIZE;···641641642642 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last);643643 if (last)644644- return b;644644+ ret = b;645645646646 if (unlikely(ret == -1))647647 ret = b / XSAVE_YMM_SIZE;···699699700700 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last);701701 if (last)702702- return b;702702+ ret = b;703703704704 if (unlikely(ret == -1))705705 ret = b / XSAVE_YMM_SIZE;···764764765765 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last);766766 if (last)767767- return b;767767+ ret = b;768768769769 if (unlikely(ret == -1))770770 ret = b / XSAVE_YMM_SIZE;···839839840840 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last);841841 if (last)842842- return b;842842+ ret = b;843843844844 if (unlikely(ret == -1))845845 ret = b / XSAVE_YMM_SIZE;···925925926926 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last);927927 if (last)928928- return b;928928+ ret = b;929929930930 if (unlikely(ret == -1))931931 ret = b / XSAVE_YMM_SIZE;···1019101910201020 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last);10211021 if (last)10221022- return b;10221022+ ret = b;1023102310241024 if (unlikely(ret == -1))10251025 ret = b / XSAVE_YMM_SIZE;
+75-17
net/netfilter/nft_set_rbtree.c
···572572 return array;573573}574574575575-#define NFT_ARRAY_EXTRA_SIZE 10240576576-577575/* Similar to nft_rbtree_{u,k}size to hide details to userspace, but consider578576 * packed representation coming from userspace for anonymous sets too.579577 */580578static u32 nft_array_elems(const struct nft_set *set)581579{582582- u32 nelems = atomic_read(&set->nelems);580580+ u32 nelems = atomic_read(&set->nelems) - set->ndeact;583581584582 /* Adjacent intervals are represented with a single start element in585583 * anonymous sets, use the current element counter as is.···593595 return (nelems / 2) + 2;594596}595597596596-static int nft_array_may_resize(const struct nft_set *set)598598+#define NFT_ARRAY_INITIAL_SIZE 1024599599+#define NFT_ARRAY_INITIAL_ANON_SIZE 16600600+#define NFT_ARRAY_INITIAL_ANON_THRESH (8192U / sizeof(struct nft_array_interval))601601+602602+static int nft_array_may_resize(const struct nft_set *set, bool flush)597603{598598- u32 nelems = nft_array_elems(set), new_max_intervals;604604+ u32 initial_intervals, max_intervals, new_max_intervals, delta;605605+ u32 shrinked_max_intervals, nelems = nft_array_elems(set);599606 struct nft_rbtree *priv = nft_set_priv(set);600607 struct nft_array *array;601608602602- if (!priv->array_next) {603603- array = nft_array_alloc(nelems + NFT_ARRAY_EXTRA_SIZE);609609+ if (nft_set_is_anonymous(set))610610+ initial_intervals = NFT_ARRAY_INITIAL_ANON_SIZE;611611+ else612612+ initial_intervals = NFT_ARRAY_INITIAL_SIZE;613613+614614+ if (priv->array_next) {615615+ max_intervals = priv->array_next->max_intervals;616616+ new_max_intervals = priv->array_next->max_intervals;617617+ } else {618618+ if (priv->array) {619619+ max_intervals = priv->array->max_intervals;620620+ new_max_intervals = priv->array->max_intervals;621621+ } else {622622+ max_intervals = 0;623623+ new_max_intervals = initial_intervals;624624+ }625625+ }626626+627627+ if (nft_set_is_anonymous(set))628628+ goto maybe_grow;629629+630630+ if (flush) {631631+ 
/* Set flush just started, nelems still report elements.*/632632+ nelems = 0;633633+ new_max_intervals = NFT_ARRAY_INITIAL_SIZE;634634+ goto realloc_array;635635+ }636636+637637+ if (check_add_overflow(new_max_intervals, new_max_intervals,638638+ &shrinked_max_intervals))639639+ return -EOVERFLOW;640640+641641+ shrinked_max_intervals = DIV_ROUND_UP(shrinked_max_intervals, 3);642642+643643+ if (shrinked_max_intervals > NFT_ARRAY_INITIAL_SIZE &&644644+ nelems < shrinked_max_intervals) {645645+ new_max_intervals = shrinked_max_intervals;646646+ goto realloc_array;647647+ }648648+maybe_grow:649649+ if (nelems > new_max_intervals) {650650+ if (nft_set_is_anonymous(set) &&651651+ new_max_intervals < NFT_ARRAY_INITIAL_ANON_THRESH) {652652+ new_max_intervals <<= 1;653653+ } else {654654+ delta = new_max_intervals >> 1;655655+ if (check_add_overflow(new_max_intervals, delta,656656+ &new_max_intervals))657657+ return -EOVERFLOW;658658+ }659659+ }660660+661661+realloc_array:662662+ if (WARN_ON_ONCE(nelems > new_max_intervals))663663+ return -ENOMEM;664664+665665+ if (priv->array_next) {666666+ if (max_intervals == new_max_intervals)667667+ return 0;668668+669669+ if (nft_array_intervals_alloc(priv->array_next, new_max_intervals) < 0)670670+ return -ENOMEM;671671+ } else {672672+ array = nft_array_alloc(new_max_intervals);604673 if (!array)605674 return -ENOMEM;606675607676 priv->array_next = array;608677 }609609-610610- if (nelems < priv->array_next->max_intervals)611611- return 0;612612-613613- new_max_intervals = priv->array_next->max_intervals + NFT_ARRAY_EXTRA_SIZE;614614- if (nft_array_intervals_alloc(priv->array_next, new_max_intervals) < 0)615615- return -ENOMEM;616678617679 return 0;618680}···688630689631 nft_rbtree_maybe_reset_start_cookie(priv, tstamp);690632691691- if (nft_array_may_resize(set) < 0)633633+ if (nft_array_may_resize(set, false) < 0)692634 return -ENOMEM;693635694636 do {···794736 nft_rbtree_interval_null(set, this))795737 priv->start_rbe_cookie = 
0;796738797797- if (nft_array_may_resize(set) < 0)739739+ if (nft_array_may_resize(set, false) < 0)798740 return NULL;799741800742 while (parent != NULL) {···864806865807 switch (iter->type) {866808 case NFT_ITER_UPDATE_CLONE:867867- if (nft_array_may_resize(set) < 0) {809809+ if (nft_array_may_resize(set, true) < 0) {868810 iter->err = -ENOMEM;869811 break;870812 }
+6-4
net/nfc/nci/core.c
···579579 skb_queue_purge(&ndev->rx_q);580580 skb_queue_purge(&ndev->tx_q);581581582582- /* Flush RX and TX wq */583583- flush_workqueue(ndev->rx_wq);582582+ /* Flush TX wq, RX wq flush can't be under the lock */584583 flush_workqueue(ndev->tx_wq);585584586585 /* Reset device */···591592 msecs_to_jiffies(NCI_RESET_TIMEOUT));592593593594 /* After this point our queues are empty594594- * and no works are scheduled.595595+ * rx work may be running but will see that NCI_UP was cleared595596 */596597 ndev->ops->close(ndev);597598598599 clear_bit(NCI_INIT, &ndev->flags);599600600600- /* Flush cmd wq */601601+ /* Flush cmd and tx wq */601602 flush_workqueue(ndev->cmd_wq);602603603604 timer_delete_sync(&ndev->cmd_timer);···611612 ndev->flags &= BIT(NCI_UNREG);612613613614 mutex_unlock(&ndev->req_lock);615615+616616+ /* rx_work may take req_lock via nci_deactivate_target */617617+ flush_workqueue(ndev->rx_wq);614618615619 return 0;616620}
+2
net/openvswitch/flow_netlink.c
···
2953 2953 	case OVS_KEY_ATTR_MPLS:
2954 2954 		if (!eth_p_mpls(eth_type))
2955 2955 			return -EINVAL;
2956 2956 +		if (key_len != sizeof(struct ovs_key_mpls))
2957 2957 +			return -EINVAL;
2956 2958 		break;
2957 2959 
2958 2960 	case OVS_KEY_ATTR_SCTP:
···
246 246 	crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
247 247 	atomic_inc(&ctx->decrypt_pending);
248 248 
249 249 +	__skb_queue_purge(&ctx->async_hold);
249 250 	return ctx->async_wait.err;
250 251 }
251 252 
···
2225 2224 
2226 2225 	/* Wait for all previously submitted records to be decrypted */
2227 2226 	ret = tls_decrypt_async_wait(ctx);
2228 2228 -	__skb_queue_purge(&ctx->async_hold);
2229 2227 
2230 2228 	if (ret) {
2231 2229 		if (err >= 0 || err == -EINPROGRESS)
+4-1
net/xfrm/xfrm_input.c
···
75 75 
76 76 	spin_lock_bh(&xfrm_input_afinfo_lock);
77 77 	if (likely(xfrm_input_afinfo[afinfo->is_ipip][afinfo->family])) {
78 78 -		if (unlikely(xfrm_input_afinfo[afinfo->is_ipip][afinfo->family] != afinfo))
78 78 +		const struct xfrm_input_afinfo *cur;
79 79 +
80 80 +		cur = rcu_access_pointer(xfrm_input_afinfo[afinfo->is_ipip][afinfo->family]);
81 81 +		if (unlikely(cur != afinfo))
79 82 			err = -EINVAL;
80 83 		else
81 84 			RCU_INIT_POINTER(xfrm_input_afinfo[afinfo->is_ipip][afinfo->family], NULL);
···3535#endif3636#include <linux/unaligned.h>37373838+static struct sock *xfrm_net_nlsk(const struct net *net, const struct sk_buff *skb)3939+{4040+ /* get the source of this request, see netlink_unicast_kernel */4141+ const struct sock *sk = NETLINK_CB(skb).sk;4242+4343+ /* sk is refcounted, the netns stays alive and nlsk with it */4444+ return rcu_dereference_protected(net->xfrm.nlsk, sk->sk_net_refcnt);4545+}4646+3847static int verify_one_alg(struct nlattr **attrs, enum xfrm_attr_type_t type,3948 struct netlink_ext_ack *extack)4049{···17361727 err = build_spdinfo(r_skb, net, sportid, seq, *flags);17371728 BUG_ON(err < 0);1738172917391739- return nlmsg_unicast(net->xfrm.nlsk, r_skb, sportid);17301730+ return nlmsg_unicast(xfrm_net_nlsk(net, skb), r_skb, sportid);17401731}1741173217421733static inline unsigned int xfrm_sadinfo_msgsize(void)···17961787 err = build_sadinfo(r_skb, net, sportid, seq, *flags);17971788 BUG_ON(err < 0);1798178917991799- return nlmsg_unicast(net->xfrm.nlsk, r_skb, sportid);17901790+ return nlmsg_unicast(xfrm_net_nlsk(net, skb), r_skb, sportid);18001791}1801179218021793static int xfrm_get_sa(struct sk_buff *skb, struct nlmsghdr *nlh,···18161807 if (IS_ERR(resp_skb)) {18171808 err = PTR_ERR(resp_skb);18181809 } else {18191819- err = nlmsg_unicast(net->xfrm.nlsk, resp_skb, NETLINK_CB(skb).portid);18101810+ err = nlmsg_unicast(xfrm_net_nlsk(net, skb), resp_skb, NETLINK_CB(skb).portid);18201811 }18211812 xfrm_state_put(x);18221813out_noput:···18591850 pcpu_num = nla_get_u32(attrs[XFRMA_SA_PCPU]);18601851 if (pcpu_num >= num_possible_cpus()) {18611852 err = -EINVAL;18531853+ NL_SET_ERR_MSG(extack, "pCPU number too big");18621854 goto out_noput;18631855 }18641856 }···19071897 }19081898 }1909189919101910- err = nlmsg_unicast(net->xfrm.nlsk, resp_skb, NETLINK_CB(skb).portid);19001900+ err = nlmsg_unicast(xfrm_net_nlsk(net, skb), resp_skb, NETLINK_CB(skb).portid);1911190119121902out:19131903 xfrm_state_put(x);···25522542 r_up->out = 
net->xfrm.policy_default[XFRM_POLICY_OUT];25532543 nlmsg_end(r_skb, r_nlh);2554254425552555- return nlmsg_unicast(net->xfrm.nlsk, r_skb, portid);25452545+ return nlmsg_unicast(xfrm_net_nlsk(net, skb), r_skb, portid);25562546}2557254725582548static int xfrm_get_policy(struct sk_buff *skb, struct nlmsghdr *nlh,···26182608 if (IS_ERR(resp_skb)) {26192609 err = PTR_ERR(resp_skb);26202610 } else {26212621- err = nlmsg_unicast(net->xfrm.nlsk, resp_skb,26112611+ err = nlmsg_unicast(xfrm_net_nlsk(net, skb), resp_skb,26222612 NETLINK_CB(skb).portid);26232613 }26242614 } else {···27912781 err = build_aevent(r_skb, x, &c);27922782 BUG_ON(err < 0);2793278327942794- err = nlmsg_unicast(net->xfrm.nlsk, r_skb, NETLINK_CB(skb).portid);27842784+ err = nlmsg_unicast(xfrm_net_nlsk(net, skb), r_skb, NETLINK_CB(skb).portid);27952785 spin_unlock_bh(&x->lock);27962786 xfrm_state_put(x);27972787 return err;···30113001 if (attrs[XFRMA_SA_PCPU]) {30123002 x->pcpu_num = nla_get_u32(attrs[XFRMA_SA_PCPU]);30133003 err = -EINVAL;30143014- if (x->pcpu_num >= num_possible_cpus())30043004+ if (x->pcpu_num >= num_possible_cpus()) {30053005+ NL_SET_ERR_MSG(extack, "pCPU number too big");30153006 goto free_state;30073007+ }30163008 }3017300930183010 err = verify_newpolicy_info(&ua->policy, extack);···34953483 goto err;34963484 }3497348534983498- err = netlink_dump_start(net->xfrm.nlsk, skb, nlh, &c);34863486+ err = netlink_dump_start(xfrm_net_nlsk(net, skb), skb, nlh, &c);34993487 goto err;35003488 }35013489···36853673 }36863674 if (x->if_id)36873675 l += nla_total_size(sizeof(x->if_id));36883688- if (x->pcpu_num)36763676+ if (x->pcpu_num != UINT_MAX)36893677 l += nla_total_size(sizeof(x->pcpu_num));3690367836913679 /* Must count x->lastused as it may become non-zero behind our back. */
+11
scripts/coccinelle/api/kmalloc_objs.cocci
···122122- ALLOC(struct_size_t(TYPE, FLEX, COUNT), GFP)123123+ ALLOC_FLEX(TYPE, FLEX, COUNT, GFP)124124)125125+126126+@drop_gfp_kernel depends on patch && !(file in "tools") && !(file in "samples")@127127+identifier ALLOC = {kmalloc_obj,kmalloc_objs,kmalloc_flex,128128+ kzalloc_obj,kzalloc_objs,kzalloc_flex,129129+ kvmalloc_obj,kvmalloc_objs,kvmalloc_flex,130130+ kvzalloc_obj,kvzalloc_objs,kvzalloc_flex};131131+@@132132+133133+ ALLOC(...134134+- , GFP_KERNEL135135+ )
+10-14
scripts/kconfig/merge_config.sh
···
 	if ! "$AWK" -v prefix="$CONFIG_PREFIX" \
 		-v warnoverride="$WARNOVERRIDE" \
 		-v strict="$STRICT" \
+		-v outfile="$TMP_FILE.new" \
 		-v builtin="$BUILTIN" \
 		-v warnredun="$WARNREDUN" '
 	BEGIN {
···

 	# First pass: read merge file, store all lines and index
 	FILENAME == ARGV[1] {
-		mergefile = FILENAME
+		mergefile = FILENAME
 		merge_lines[FNR] = $0
 		merge_total = FNR
 		cfg = get_cfg($0)
···

 		# Not a config or not in merge file - keep it
 		if (cfg == "" || !(cfg in merge_cfg)) {
-			print $0 >> ARGV[3]
+			print $0 >> outfile
 			next
 		}

-		prev_val = $0
+		prev_val = $0
 		new_val = merge_cfg[cfg]

 		# BUILTIN: do not demote y to m
 		if (builtin == "true" && new_val ~ /=m$/ && prev_val ~ /=y$/) {
 			warn_builtin(cfg, prev_val, new_val)
-			print $0 >> ARGV[3]
+			print $0 >> outfile
 			skip_merge[merge_cfg_line[cfg]] = 1
 			next
 		}
···
 		# "=n" is the same as "is not set"
 		if (prev_val ~ /=n$/ && new_val ~ / is not set$/) {
-			print $0 >> ARGV[3]
+			print $0 >> outfile
 			next
 		}
···
 		}
 	}

-	# output file, skip all lines
-	FILENAME == ARGV[3] {
-		nextfile
-	}
-
 	END {
 		# Newline in case base file lacks trailing newline
-		print "" >> ARGV[3]
+		print "" >> outfile
 		# Append merge file, skipping lines marked for builtin preservation
 		for (i = 1; i <= merge_total; i++) {
 			if (!(i in skip_merge)) {
-				print merge_lines[i] >> ARGV[3]
+				print merge_lines[i] >> outfile
 			}
 		}
 		if (strict_violated) {
 			exit 1
 		}
 	}' \
-	"$ORIG_MERGE_FILE" "$TMP_FILE" "$TMP_FILE.new"; then
+	"$ORIG_MERGE_FILE" "$TMP_FILE"; then
 	# awk exited non-zero, strict mode was violated
 	STRICT_MODE_VIOLATED=true
 fi
···
 	STRICT_MODE_VIOLATED=true
 fi

-if [ "$STRICT" == "true" ] && [ "$STRICT_MODE_VIOLATED" == "true" ]; then
+if [ "$STRICT" = "true" ] && [ "$STRICT_MODE_VIOLATED" = "true" ]; then
 	echo "Requested and effective config differ"
 	exit 1
 fi
+4-5
scripts/livepatch/klp-build
···
 # application from appending it with '+' due to a dirty git working tree.
 set_kernelversion() {
 	local file="$SRC/scripts/setlocalversion"
-	local localversion
+	local kernelrelease

 	stash_file "$file"

-	localversion="$(cd "$SRC" && make --no-print-directory kernelversion)"
-	localversion="$(cd "$SRC" && KERNELVERSION="$localversion" ./scripts/setlocalversion)"
-	[[ -z "$localversion" ]] && die "setlocalversion failed"
+	kernelrelease="$(cd "$SRC" && make syncconfig &>/dev/null && make -s kernelrelease)"
+	[[ -z "$kernelrelease" ]] && die "failed to get kernel version"

-	sed -i "2i echo $localversion; exit 0" scripts/setlocalversion
+	sed -i "2i echo $kernelrelease; exit 0" scripts/setlocalversion
 }

 get_patch_files() {
+1
security/security.c
···
 	[LOCKDOWN_BPF_WRITE_USER] = "use of bpf to write user RAM",
 	[LOCKDOWN_DBG_WRITE_KERNEL] = "use of kgdb/kdb to write kernel RAM",
 	[LOCKDOWN_RTAS_ERROR_INJECTION] = "RTAS error injection",
+	[LOCKDOWN_XEN_USER_ACTIONS] = "Xen guest user action",
 	[LOCKDOWN_INTEGRITY_MAX] = "integrity",
 	[LOCKDOWN_KCORE] = "/proc/kcore access",
 	[LOCKDOWN_KPROBES] = "use of kprobes",
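The hunk above relies on C designated initializers, which pin each string to its enum value so a new entry can be inserted anywhere in the table without shifting the others. A minimal userspace sketch of that pattern (the `demo_*` names are hypothetical stand-ins, not the kernel's lockdown enum):

```c
#include <stddef.h>

/* Hypothetical reason codes mirroring the shape of the lockdown table;
 * the real enum lives in the kernel headers. */
enum demo_reason {
	DEMO_NONE,
	DEMO_KCORE,
	DEMO_KPROBES,
	DEMO_XEN_USER_ACTIONS,
	DEMO_MAX,
};

/* Designated initializers keep each string next to its enum value, so the
 * array stays correct even if entries are listed out of numeric order. */
static const char *const demo_reasons[DEMO_MAX] = {
	[DEMO_NONE] = "none",
	[DEMO_KCORE] = "/proc/kcore access",
	[DEMO_KPROBES] = "use of kprobes",
	[DEMO_XEN_USER_ACTIONS] = "Xen guest user action",
};
```

Without the `[INDEX] =` designators, adding a value mid-enum would silently pair every later string with the wrong reason code.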
+3-3
sound/soc/samsung/i2s.c
···
 	if (!pdev_sec)
 		return -ENOMEM;

-	pdev_sec->driver_override = kstrdup("samsung-i2s", GFP_KERNEL);
-	if (!pdev_sec->driver_override) {
+	ret = device_set_driver_override(&pdev_sec->dev, "samsung-i2s");
+	if (ret) {
 		platform_device_put(pdev_sec);
-		return -ENOMEM;
+		return ret;
 	}

 	ret = platform_device_add(pdev_sec);
···
 	if (fd < 0)
 		return -errno;
 	ret = fstat(fd, &stat);
-	if (ret < 0)
-		return -errno;
+	if (ret < 0) {
+		ret = -errno;
+		close(fd);
+		return ret;
+	}

 	ret = load_xbc_fd(fd, buf, stat.st_size);
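The hunk above fixes two bugs in one error path: the file descriptor was leaked on `fstat()` failure, and `-errno` must be captured before `close()`, since `close()` can itself fail and clobber `errno`. A standalone sketch of that idiom (`open_and_stat()` is a hypothetical helper for illustration):

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Open a file and stat it; on success return the fd, on failure
 * return -errno with no descriptor left open. */
static int open_and_stat(const char *path, struct stat *st)
{
	int fd, ret;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -errno;

	ret = fstat(fd, st);
	if (ret < 0) {
		ret = -errno;	/* save the fstat() error code... */
		close(fd);	/* ...so close() cannot overwrite it */
		return ret;
	}
	return fd;
}
```

Returning `-errno` after `close(fd)` directly would be wrong: if `close()` failed, `errno` would describe the close, not the original `fstat()` failure.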
+3-1
tools/include/linux/build_bug.h
···
 /**
  * BUILD_BUG_ON_MSG - break compile if a condition is true & emit supplied
  * error message.
- * @condition: the condition which the compiler should know is false.
+ * @cond: the condition which the compiler should know is false.
+ * @msg: build-time error message
  *
  * See BUILD_BUG_ON for description.
  */
···

 /**
  * static_assert - check integer constant expression at build time
+ * @expr: expression to be checked
  *
  * static_assert() is a wrapper for the C11 _Static_assert, with a
  * little macro magic to make the message optional (defaulting to the
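As the kernel-doc above says, `static_assert` wraps C11 `_Static_assert` to reject a false integer constant expression at compile time rather than at runtime. A minimal userspace example of the same checks (the `packet_header` struct is illustrative only; in hosted C11, `<assert.h>` provides the `static_assert` macro):

```c
#include <assert.h>	/* C11: defines static_assert as _Static_assert */
#include <stdint.h>

/* A struct whose size the rest of the code depends on. */
struct packet_header {
	uint16_t type;
	uint16_t len;
	uint32_t seq;
};

/* Compilation fails immediately if either expression is false. */
static_assert(sizeof(struct packet_header) == 8,
	      "packet_header must stay 8 bytes for the wire format");
_Static_assert(sizeof(uint32_t) == 4, "uint32_t must be 4 bytes");
```

Unlike `BUILD_BUG_ON()`, which needs function context, `static_assert` can sit at file scope next to the declaration it guards.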
···
 		last = insn;

 		/*
-		 * Store back-pointers for unconditional forward jumps such
+		 * Store back-pointers for forward jumps such
 		 * that find_jump_table() can back-track using those and
 		 * avoid some potentially confusing code.
 		 */
-		if (insn->type == INSN_JUMP_UNCONDITIONAL && insn->jump_dest &&
-		    insn->offset > last->offset &&
+		if (insn->jump_dest &&
 		    insn->jump_dest->offset > insn->offset &&
 		    !insn->jump_dest->first_jump_src) {
+3-20
tools/objtool/elf.c
···
 #include <string.h>
 #include <unistd.h>
 #include <errno.h>
-#include <libgen.h>
 #include <ctype.h>
 #include <linux/align.h>
 #include <linux/kernel.h>
···
 struct elf *elf_create_file(GElf_Ehdr *ehdr, const char *name)
 {
 	struct section *null, *symtab, *strtab, *shstrtab;
-	char *dir, *base, *tmp_name;
+	char *tmp_name;
 	struct symbol *sym;
 	struct elf *elf;
···

 	INIT_LIST_HEAD(&elf->sections);

-	dir = strdup(name);
-	if (!dir) {
-		ERROR_GLIBC("strdup");
-		return NULL;
-	}
-
-	dir = dirname(dir);
-
-	base = strdup(name);
-	if (!base) {
-		ERROR_GLIBC("strdup");
-		return NULL;
-	}
-
-	base = basename(base);
-
-	tmp_name = malloc(256);
+	tmp_name = malloc(strlen(name) + 8);
 	if (!tmp_name) {
 		ERROR_GLIBC("malloc");
 		return NULL;
 	}

-	snprintf(tmp_name, 256, "%s/%s.XXXXXX", dir, base);
+	sprintf(tmp_name, "%s.XXXXXX", name);

 	elf->fd = mkstemp(tmp_name);
 	if (elf->fd == -1) {
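The simplification above works because `mkstemp()` rewrites the trailing `XXXXXX` template in place, so appending it to the full path already puts the temp file in the target's directory; the `dirname()`/`basename()` split-and-rejoin was redundant. `strlen(name) + 8` covers the seven bytes of `".XXXXXX"` plus the NUL terminator. A userspace sketch of the pattern (`create_tmp_beside()` is a hypothetical name):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Create a uniquely named temp file next to 'name'. On success return
 * the open fd and store the malloc'd temp path in *out_name; on failure
 * return -1. */
static int create_tmp_beside(const char *name, char **out_name)
{
	/* strlen(".XXXXXX") == 7, plus 1 for the NUL terminator */
	char *tmp_name = malloc(strlen(name) + 8);
	int fd;

	if (!tmp_name)
		return -1;
	sprintf(tmp_name, "%s.XXXXXX", name);

	fd = mkstemp(tmp_name);	/* replaces XXXXXX; same directory as name */
	if (fd == -1) {
		free(tmp_name);
		return -1;
	}
	*out_name = tmp_name;
	return fd;
}
```

Keeping the temp file in the same directory also matters for the usual `rename()`-over-the-original finish, which is only atomic within one filesystem.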
···

 static struct evsel_config_term *add_config_term(enum evsel_term_type type,
 						 struct list_head *head_terms,
-						 bool weak)
+						 bool weak, char *str, u64 val)
 {
 	struct evsel_config_term *t;
···
 	INIT_LIST_HEAD(&t->list);
 	t->type = type;
 	t->weak = weak;
-	list_add_tail(&t->list, head_terms);

+	switch (type) {
+	case EVSEL__CONFIG_TERM_PERIOD:
+	case EVSEL__CONFIG_TERM_FREQ:
+	case EVSEL__CONFIG_TERM_STACK_USER:
+	case EVSEL__CONFIG_TERM_USR_CHG_CONFIG:
+	case EVSEL__CONFIG_TERM_USR_CHG_CONFIG1:
+	case EVSEL__CONFIG_TERM_USR_CHG_CONFIG2:
+	case EVSEL__CONFIG_TERM_USR_CHG_CONFIG3:
+	case EVSEL__CONFIG_TERM_USR_CHG_CONFIG4:
+		t->val.val = val;
+		break;
+	case EVSEL__CONFIG_TERM_TIME:
+		t->val.time = val;
+		break;
+	case EVSEL__CONFIG_TERM_INHERIT:
+		t->val.inherit = val;
+		break;
+	case EVSEL__CONFIG_TERM_OVERWRITE:
+		t->val.overwrite = val;
+		break;
+	case EVSEL__CONFIG_TERM_MAX_STACK:
+		t->val.max_stack = val;
+		break;
+	case EVSEL__CONFIG_TERM_MAX_EVENTS:
+		t->val.max_events = val;
+		break;
+	case EVSEL__CONFIG_TERM_PERCORE:
+		t->val.percore = val;
+		break;
+	case EVSEL__CONFIG_TERM_AUX_OUTPUT:
+		t->val.aux_output = val;
+		break;
+	case EVSEL__CONFIG_TERM_AUX_SAMPLE_SIZE:
+		t->val.aux_sample_size = val;
+		break;
+	case EVSEL__CONFIG_TERM_CALLGRAPH:
+	case EVSEL__CONFIG_TERM_BRANCH:
+	case EVSEL__CONFIG_TERM_DRV_CFG:
+	case EVSEL__CONFIG_TERM_RATIO_TO_PREV:
+	case EVSEL__CONFIG_TERM_AUX_ACTION:
+		if (str) {
+			t->val.str = strdup(str);
+			if (!t->val.str) {
+				zfree(&t);
+				return NULL;
+			}
+			t->free_str = true;
+		}
+		break;
+	default:
+		t->val.val = val;
+		break;
+	}
+
+	list_add_tail(&t->list, head_terms);
 	return t;
 }
···
 	struct evsel_config_term *new_term;
 	enum evsel_term_type new_type;
 	bool str_type = false;
-	u64 val;
+	u64 val = 0;

 	switch (term->type_term) {
 	case PARSE_EVENTS__TERM_TYPE_SAMPLE_PERIOD:
···
 			continue;
 		}

-		new_term = add_config_term(new_type, head_terms, term->weak);
+		/*
+		 * Note: Members evsel_config_term::val and
+		 * parse_events_term::val are unions and endianness needs
+		 * to be taken into account when changing such union members.
+		 */
+		new_term = add_config_term(new_type, head_terms, term->weak,
+					   str_type ? term->val.str : NULL, val);
 		if (!new_term)
 			return -ENOMEM;
-
-		if (str_type) {
-			new_term->val.str = strdup(term->val.str);
-			if (!new_term->val.str) {
-				zfree(&new_term);
-				return -ENOMEM;
-			}
-			new_term->free_str = true;
-		} else {
-			new_term->val.val = val;
-		}
 	}
 	return 0;
 }
···
 	if (bits) {
 		struct evsel_config_term *new_term;

-		new_term = add_config_term(new_term_type, head_terms, false);
+		new_term = add_config_term(new_term_type, head_terms, false, NULL, bits);
 		if (!new_term)
 			return -ENOMEM;
-		new_term->val.cfg_chg = bits;
 	}

 	return 0;
+1-1
tools/testing/selftests/bpf/Makefile
···
 		CC="$(HOSTCC)" LD="$(HOSTLD)" AR="$(HOSTAR)" \
 		LIBBPF_INCLUDE=$(HOST_INCLUDE_DIR) \
 		EXTRA_LDFLAGS='$(SAN_LDFLAGS) $(EXTRA_LDFLAGS)' \
-		HOSTPKG_CONFIG=$(PKG_CONFIG) \
+		HOSTPKG_CONFIG='$(PKG_CONFIG)' \
 		OUTPUT=$(HOST_BUILD_DIR)/resolve_btfids/ BPFOBJ=$(HOST_BPFOBJ)

 # Get Clang's default includes on this system, as opposed to those seen by
···
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# shellcheck disable=SC2154
+#
+# Reproduce the non-Ethernet header_ops confusion scenario with:
+# g0 (gre) -> b0 (bond) -> t0 (team)
+#
+# Before the fix, direct header_ops inheritance in this stack could call
+# callbacks with the wrong net_device context and crash.
+
+lib_dir=$(dirname "$0")
+source "$lib_dir"/../../../net/lib.sh
+
+trap cleanup_all_ns EXIT
+
+setup_ns ns1
+
+ip -n "$ns1" link add d0 type dummy
+ip -n "$ns1" addr add 10.10.10.1/24 dev d0
+ip -n "$ns1" link set d0 up
+
+ip -n "$ns1" link add g0 type gre local 10.10.10.1
+ip -n "$ns1" link add b0 type bond mode active-backup
+ip -n "$ns1" link add t0 type team
+
+ip -n "$ns1" link set g0 master b0
+ip -n "$ns1" link set b0 master t0
+
+ip -n "$ns1" link set g0 up
+ip -n "$ns1" link set b0 up
+ip -n "$ns1" link set t0 up
+
+# IPv6 address assignment triggers MLD join reports that call
+# dev_hard_header() on t0, exercising the inherited header_ops path.
+ip -n "$ns1" -6 addr add 2001:db8:1::1/64 dev t0 nodad
+for i in $(seq 1 20); do
+	ip netns exec "$ns1" ping -6 -I t0 ff02::1 -c1 -W1 &>/dev/null || true
+done
+
+echo "PASS: non-Ethernet header_ops stacking did not crash"
+exit "$EXIT_STATUS"
···
 	check_rt_num 5 $($IP -6 route list |grep -v expires|grep 2001:20::|wc -l)
 	log_test $ret 0 "ipv6 route garbage collection (replace with permanent)"

+	# Delete dummy_10 and remove all routes
+	$IP link del dev dummy_10
+
+	# rd6 is required for the next test. (ipv6toolkit)
+	if [ ! -x "$(command -v rd6)" ]; then
+		echo "SKIP: rd6 not found."
+		set +e
+		cleanup &> /dev/null
+		return
+	fi
+
+	setup_ns ns2
+	$IP link add veth1 type veth peer veth2 netns $ns2
+	$IP link set veth1 up
+	ip -netns $ns2 link set veth2 up
+	$IP addr add fe80:dead::1/64 dev veth1
+	ip -netns $ns2 addr add fe80:dead::2/64 dev veth2
+
+	# Add NTF_ROUTER neighbour to prevent rt6_age_examine_exception()
+	# from removing not-yet-expired exceptions.
+	ip -netns $ns2 link set veth2 address 00:11:22:33:44:55
+	$IP neigh add fe80:dead::3 lladdr 00:11:22:33:44:55 dev veth1 router
+
+	$NS_EXEC sysctl -wq net.ipv6.conf.veth1.accept_redirects=1
+	$NS_EXEC sysctl -wq net.ipv6.conf.veth1.forwarding=0
+
+	# Temporary routes
+	for i in $(seq 1 5); do
+		# Expire route after $EXPIRE seconds
+		$IP -6 route add 2001:10::$i \
+			via fe80:dead::2 dev veth1 expires $EXPIRE
+
+		ip netns exec $ns2 rd6 -i veth2 \
+			-s fe80:dead::2 -d fe80:dead::1 \
+			-r 2001:10::$i -t fe80:dead::3 -p ICMP6
+	done
+
+	check_rt_num 5 $($IP -6 route list | grep expires | grep 2001:10:: | wc -l)
+
+	# Promote to permanent routes by "prepend" (w/o NLM_F_EXCL and NLM_F_REPLACE)
+	for i in $(seq 1 5); do
+		# -EEXIST, but the temporary route becomes the permanent route.
+		$IP -6 route append 2001:10::$i \
+			via fe80:dead::2 dev veth1 2>/dev/null || true
+	done
+
+	check_rt_num 5 $($IP -6 route list | grep -v expires | grep 2001:10:: | wc -l)
+	check_rt_num 5 $($IP -6 route list cache | grep 2001:10:: | wc -l)
+
+	# Trigger GC instead of waiting $GC_WAIT_TIME.
+	# rt6_nh_dump_exceptions() just skips expired exceptions.
+	$NS_EXEC sysctl -wq net.ipv6.route.flush=1
+	check_rt_num 0 $($IP -6 route list cache | grep 2001:10:: | wc -l)
+	log_test $ret 0 "ipv6 route garbage collection (promote to permanent routes)"
+
+	$IP neigh del fe80:dead::3 lladdr 00:11:22:33:44:55 dev veth1 router
+	$IP link del veth1
+
 	# ra6 is required for the next test. (ipv6toolkit)
 	if [ ! -x "$(command -v ra6)" ]; then
 		echo "SKIP: ra6 not found."
···
 		cleanup &> /dev/null
 		return
 	fi
-
-	# Delete dummy_10 and remove all routes
-	$IP link del dev dummy_10

 	# Create a pair of veth devices to send a RA message from one
 	# device to another.