.. SPDX-License-Identifier: GPL-2.0

Cross-Thread Return Address Predictions
=======================================

Certain AMD and Hygon processors are subject to a cross-thread return address
predictions vulnerability. When running in SMT mode and one sibling thread
transitions out of C0 state, the other sibling thread could use return target
predictions from the sibling thread that transitioned out of C0.

The Spectre v2 mitigations protect the Linux kernel, as it fills the return
address prediction entries with safe targets when context switching to the
idle thread. However, KVM does allow a VMM to prevent exiting guest mode when
transitioning out of C0. This could result in a guest-controlled return target
being consumed by the sibling thread.

Affected processors
-------------------

The following CPUs are vulnerable:

 - AMD Family 17h processors
 - Hygon Family 18h processors

Related CVEs
------------

The following CVE entry is related to this issue:

   ==============  =======================================
   CVE-2022-27672  Cross-Thread Return Address Predictions
   ==============  =======================================

Problem
-------

Affected SMT-capable processors support 1T and 2T modes of execution when SMT
is enabled. In 2T mode, both threads in a core are executing code. For the
processor core to enter 1T mode, one of the threads must request to
transition out of the C0 state. This can be communicated with the HLT
instruction or with an MWAIT instruction that requests non-C0.
When the thread re-enters the C0 state, the processor transitions back
to 2T mode, assuming the other thread is also still in C0 state.

In affected processors, the return address predictor (RAP) is partitioned
depending on the SMT mode. For instance, in 2T mode each thread uses a
private 16-entry RAP, but in 1T mode, the active thread uses a 32-entry RAP.
Upon a transition between 1T and 2T modes, the RAP contents are not modified
but the RAP pointers (which control the next return target to use for
predictions) may change. This behavior may result in return targets from one
SMT thread being used by RET predictions in the sibling thread following a
1T/2T switch. In particular, a RET instruction executed immediately after a
transition to 1T may use a return target from the thread that just became
idle. In theory, this could lead to information disclosure if the return
targets used do not come from trustworthy code.

Attack scenarios
----------------

An attack can be mounted on affected processors by performing a series of
CALL instructions with targeted return locations and then transitioning out
of C0 state.

Mitigation mechanism
--------------------

Before entering idle state, the kernel context switches to the idle thread.
The context switch fills the RAP entries (referred to as the RSB in Linux)
with safe targets by performing a sequence of CALL instructions.

Additionally, KVM prevents a guest VM from directly putting the processor
into an idle state by intercepting HLT and MWAIT instructions.

Both mitigations are required to fully address this issue.

Mitigation control on the kernel command line
---------------------------------------------

Use the existing Spectre v2 mitigations, which fill the RSB on context
switch.

Mitigation control for KVM - module parameter
---------------------------------------------

By default, the KVM hypervisor mitigates this issue by intercepting guest
attempts to transition out of C0. A VMM can use the KVM_CAP_X86_DISABLE_EXITS
capability to override those interceptions, but since this is not common, the
mitigation that covers this path is not enabled by default.

The mitigation for the KVM_CAP_X86_DISABLE_EXITS capability can be turned on
using the boolean module parameter mitigate_smt_rsb, e.g.::

  kvm.mitigate_smt_rsb=1
···
  */
 static notrace __always_inline bool prep_irq_for_enabled_exit(bool restartable)
 {
+	bool must_hard_disable = (exit_must_hard_disable() || !restartable);
+
 	/* This must be done with RI=1 because tracing may touch vmaps */
 	trace_hardirqs_on();
 
-	if (exit_must_hard_disable() || !restartable)
+	if (must_hard_disable)
 		__hard_EE_RI_disable();
 
 #ifdef CONFIG_PPC64
 	/* This pattern matches prep_irq_for_idle */
 	if (unlikely(lazy_irq_pending_nocheck())) {
-		if (exit_must_hard_disable() || !restartable) {
+		if (must_hard_disable) {
 			local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
 			__hard_RI_enable();
 		}
···
 	if (PageHuge(page))
 		page = compound_head(page);
 
-	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
+	if (!test_bit(PG_dcache_clean, &page->flags)) {
 		flush_icache_all();
+		set_bit(PG_dcache_clean, &page->flags);
+	}
 }
 #endif /* CONFIG_MMU */
+20
arch/riscv/mm/pgtable.c
···
 }
 
 #endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
+			  unsigned long address, pmd_t *pmdp)
+{
+	pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
+
+	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	VM_BUG_ON(pmd_trans_huge(*pmdp));
+	/*
+	 * When leaf PTE entries (regular pages) are collapsed into a leaf
+	 * PMD entry (huge page), a valid non-leaf PTE is converted into a
+	 * valid leaf PTE at the level 1 page table.  Since the sfence.vma
+	 * forms that specify an address only apply to leaf PTEs, we need a
+	 * global flush here.  collapse_huge_page() assumes these flushes are
+	 * eager, so just do the fence here.
+	 */
+	flush_tlb_mm(vma->vm_mm);
+	return pmd;
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+1
arch/x86/include/asm/cpufeatures.h
···
 #define X86_BUG_MMIO_UNKNOWN		X86_BUG(26) /* CPU is too old and its MMIO Stale Data status is unknown */
 #define X86_BUG_RETBLEED		X86_BUG(27) /* CPU is affected by RETBleed */
 #define X86_BUG_EIBRS_PBRSB		X86_BUG(28) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
+#define X86_BUG_SMT_RSB			X86_BUG(29) /* CPU is vulnerable to Cross-Thread Return Address Predictions */
 
 #endif /* _ASM_X86_CPUFEATURES_H */
···
 bool __read_mostly eager_page_split = true;
 module_param(eager_page_split, bool, 0644);
 
+/* Enable/disable SMT_RSB bug mitigation */
+bool __read_mostly mitigate_smt_rsb;
+module_param(mitigate_smt_rsb, bool, 0444);
+
 /*
  * Restoring the host value for MSRs that are only consumed when running in
  * usermode, e.g. SYSCALL MSRs and TSC_AUX, can be deferred until the CPU
···
 		r = KVM_CLOCK_VALID_FLAGS;
 		break;
 	case KVM_CAP_X86_DISABLE_EXITS:
-		r |= KVM_X86_DISABLE_EXITS_HLT | KVM_X86_DISABLE_EXITS_PAUSE |
-		     KVM_X86_DISABLE_EXITS_CSTATE;
-		if(kvm_can_mwait_in_guest())
-			r |= KVM_X86_DISABLE_EXITS_MWAIT;
+		r = KVM_X86_DISABLE_EXITS_PAUSE;
+
+		if (!mitigate_smt_rsb) {
+			r |= KVM_X86_DISABLE_EXITS_HLT |
+			     KVM_X86_DISABLE_EXITS_CSTATE;
+
+			if (kvm_can_mwait_in_guest())
+				r |= KVM_X86_DISABLE_EXITS_MWAIT;
+		}
 		break;
 	case KVM_CAP_X86_SMM:
 		if (!IS_ENABLED(CONFIG_KVM_SMM))
···
 		if (cap->args[0] & ~KVM_X86_DISABLE_VALID_EXITS)
 			break;
 
-		if ((cap->args[0] & KVM_X86_DISABLE_EXITS_MWAIT) &&
-			kvm_can_mwait_in_guest())
-			kvm->arch.mwait_in_guest = true;
-		if (cap->args[0] & KVM_X86_DISABLE_EXITS_HLT)
-			kvm->arch.hlt_in_guest = true;
 		if (cap->args[0] & KVM_X86_DISABLE_EXITS_PAUSE)
 			kvm->arch.pause_in_guest = true;
-		if (cap->args[0] & KVM_X86_DISABLE_EXITS_CSTATE)
-			kvm->arch.cstate_in_guest = true;
+
+#define SMT_RSB_MSG "This processor is affected by the Cross-Thread Return Predictions vulnerability. " \
+		    "KVM_CAP_X86_DISABLE_EXITS should only be used with SMT disabled or trusted guests."
+
+		if (!mitigate_smt_rsb) {
+			if (boot_cpu_has_bug(X86_BUG_SMT_RSB) && cpu_smt_possible() &&
+			    (cap->args[0] & ~KVM_X86_DISABLE_EXITS_PAUSE))
+				pr_warn_once(SMT_RSB_MSG);
+
+			if ((cap->args[0] & KVM_X86_DISABLE_EXITS_MWAIT) &&
+			    kvm_can_mwait_in_guest())
+				kvm->arch.mwait_in_guest = true;
+			if (cap->args[0] & KVM_X86_DISABLE_EXITS_HLT)
+				kvm->arch.hlt_in_guest = true;
+			if (cap->args[0] & KVM_X86_DISABLE_EXITS_CSTATE)
+				kvm->arch.cstate_in_guest = true;
+		}
+
 		r = 0;
 		break;
 	case KVM_CAP_MSR_PLATFORM_INFO:
···
 static int __init kvm_x86_init(void)
 {
 	kvm_mmu_x86_module_init();
+	mitigate_smt_rsb &= boot_cpu_has_bug(X86_BUG_SMT_RSB) && cpu_smt_possible();
 	return 0;
 }
 module_init(kvm_x86_init);
+1-1
drivers/acpi/nfit/core.c
···
 
 	mutex_lock(&acpi_desc->init_mutex);
 	set_bit(ARS_CANCEL, &acpi_desc->scrub_flags);
-	cancel_delayed_work_sync(&acpi_desc->dwork);
 	mutex_unlock(&acpi_desc->init_mutex);
+	cancel_delayed_work_sync(&acpi_desc->dwork);
 
 	/*
 	 * Bounce the nvdimm bus lock to make sure any in-flight
+8-10
drivers/clk/ingenic/jz4760-cgu.c
···
 			     unsigned long rate, unsigned long parent_rate,
 			     unsigned int *pm, unsigned int *pn, unsigned int *pod)
 {
-	unsigned int m, n, od, m_max = (1 << pll_info->m_bits) - 2;
+	unsigned int m, n, od, m_max = (1 << pll_info->m_bits) - 1;
 
 	/* The frequency after the N divider must be between 1 and 50 MHz. */
 	n = parent_rate / (1 * MHZ);
···
 	/* The N divider must be >= 2. */
 	n = clamp_val(n, 2, 1 << pll_info->n_bits);
 
-	for (;; n >>= 1) {
-		od = (unsigned int)-1;
+	rate /= MHZ;
+	parent_rate /= MHZ;
 
-		do {
-			m = (rate / MHZ) * (1 << ++od) * n / (parent_rate / MHZ);
-		} while ((m > m_max || m & 1) && (od < 4));
-
-		if (od < 4 && m >= 4 && m <= m_max)
-			break;
+	for (m = m_max; m >= m_max && n >= 2; n--) {
+		m = rate * n / parent_rate;
+		od = m & 1;
+		m <<= od;
 	}
 
 	*pm = m;
-	*pn = n;
+	*pn = n + 1;
 	*pod = 1 << od;
 }
+4-6
drivers/clk/microchip/clk-mpfs-ccc.c
···
 
 	for (unsigned int i = 0; i < num_clks; i++) {
 		struct mpfs_ccc_out_hw_clock *out_hw = &out_hws[i];
-		char *name = devm_kzalloc(dev, 23, GFP_KERNEL);
+		char *name = devm_kasprintf(dev, GFP_KERNEL, "%s_out%u", parent->name, i);
 
 		if (!name)
 			return -ENOMEM;
 
-		snprintf(name, 23, "%s_out%u", parent->name, i);
 		out_hw->divider.hw.init = CLK_HW_INIT_HW(name, &parent->hw, &clk_divider_ops, 0);
 		out_hw->divider.reg = data->pll_base[i / MPFS_CCC_OUTPUTS_PER_PLL] +
 			out_hw->reg_offset;
···
 
 	for (unsigned int i = 0; i < num_clks; i++) {
 		struct mpfs_ccc_pll_hw_clock *pll_hw = &pll_hws[i];
-		char *name = devm_kzalloc(dev, 18, GFP_KERNEL);
 
-		if (!name)
+		pll_hw->name = devm_kasprintf(dev, GFP_KERNEL, "ccc%s_pll%u",
+					      strchrnul(dev->of_node->full_name, '@'), i);
+		if (!pll_hw->name)
 			return -ENOMEM;
 
 		pll_hw->base = data->pll_base[i];
-		snprintf(name, 18, "ccc%s_pll%u", strchrnul(dev->of_node->full_name, '@'), i);
-		pll_hw->name = (const char *)name;
 		pll_hw->hw.init = CLK_HW_INIT_PARENTS_DATA_FIXED_SIZE(pll_hw->name,
 								      pll_hw->parents,
 								      &mpfs_ccc_pll_ops, 0);
+19-15
drivers/cpufreq/qcom-cpufreq-hw.c
···
 	return lval * xo_rate;
 }
 
-/* Get the current frequency of the CPU (after throttling) */
-static unsigned int qcom_cpufreq_hw_get(unsigned int cpu)
-{
-	struct qcom_cpufreq_data *data;
-	struct cpufreq_policy *policy;
-
-	policy = cpufreq_cpu_get_raw(cpu);
-	if (!policy)
-		return 0;
-
-	data = policy->driver_data;
-
-	return qcom_lmh_get_throttle_freq(data) / HZ_PER_KHZ;
-}
-
 /* Get the frequency requested by the cpufreq core for the CPU */
 static unsigned int qcom_cpufreq_get_freq(unsigned int cpu)
 {
···
 	index = min(index, LUT_MAX_ENTRIES - 1);
 
 	return policy->freq_table[index].frequency;
+}
+
+static unsigned int qcom_cpufreq_hw_get(unsigned int cpu)
+{
+	struct qcom_cpufreq_data *data;
+	struct cpufreq_policy *policy;
+
+	policy = cpufreq_cpu_get_raw(cpu);
+	if (!policy)
+		return 0;
+
+	data = policy->driver_data;
+
+	if (data->throttle_irq >= 0)
+		return qcom_lmh_get_throttle_freq(data) / HZ_PER_KHZ;
+
+	return qcom_cpufreq_get_freq(cpu);
 }
 
 static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy,
···
 		return -ENOMEM;
 
 	qcom_cpufreq.soc_data = of_device_get_match_data(dev);
+	if (!qcom_cpufreq.soc_data)
+		return -ENODEV;
 
 	clk_data = devm_kzalloc(dev, struct_size(clk_data, hws, num_domains), GFP_KERNEL);
 	if (!clk_data)
+7-5
drivers/cxl/core/region.c
···
 	struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
 	struct cxl_port *iter = cxled_to_port(cxled);
 	struct cxl_ep *ep;
-	int rc;
+	int rc = 0;
 
 	while (!is_cxl_root(to_cxl_port(iter->dev.parent)))
 		iter = to_cxl_port(iter->dev.parent);
···
 
 		cxl_rr = cxl_rr_load(iter, cxlr);
 		cxld = cxl_rr->decoder;
-		rc = cxld->reset(cxld);
+		if (cxld->reset)
+			rc = cxld->reset(cxld);
 		if (rc)
 			return rc;
 	}
···
 	     iter = ep->next, ep = cxl_ep_load(iter, cxlmd)) {
 		cxl_rr = cxl_rr_load(iter, cxlr);
 		cxld = cxl_rr->decoder;
-		cxld->reset(cxld);
+		if (cxld->reset)
+			cxld->reset(cxld);
 	}
 
 	cxled->cxld.reset(&cxled->cxld);
···
 	int i, distance;
 
 	/*
-	 * Passthrough ports impose no distance requirements between
+	 * Passthrough decoders impose no distance requirements between
 	 * peers
 	 */
-	if (port->nr_dports == 1)
+	if (cxl_rr->nr_targets == 1)
 		distance = 0;
 	else
 		distance = p->nr_targets / cxl_rr->nr_targets;
+1-1
drivers/dax/super.c
···
 /**
  * dax_holder() - obtain the holder of a dax device
  * @dax_dev: a dax_device instance
-
+ *
  * Return: the holder's data which represents the holder if registered,
  * otherwize NULL.
  */
+6-3
drivers/firmware/efi/libstub/arm64.c
···
 	const u8 *type1_family = efi_get_smbios_string(1, family);
 
 	/*
-	 * Ampere Altra machines crash in SetTime() if SetVirtualAddressMap()
-	 * has not been called prior.
+	 * Ampere eMAG, Altra, and Altra Max machines crash in SetTime() if
+	 * SetVirtualAddressMap() has not been called prior.
 	 */
-	if (!type1_family || strcmp(type1_family, "Altra"))
+	if (!type1_family || (
+	    strcmp(type1_family, "eMAG") &&
+	    strcmp(type1_family, "Altra") &&
+	    strcmp(type1_family, "Altra Max")))
 		return false;
 
 	efi_warn("Working around broken SetVirtualAddressMap()\n");
+1
drivers/gpio/Kconfig
···
 	tristate "Mellanox BlueField 2 SoC GPIO"
 	depends on (MELLANOX_PLATFORM && ARM64 && ACPI) || (64BIT && COMPILE_TEST)
 	select GPIO_GENERIC
+	select GPIOLIB_IRQCHIP
 	help
 	  Say Y here if you want GPIO support on Mellanox BlueField 2 SoC.
···
 
 config DRM_USE_DYNAMIC_DEBUG
 	bool "use dynamic debug to implement drm.debug"
-	default y
+	default n
+	depends on BROKEN
 	depends on DRM
 	depends on DYNAMIC_DEBUG || DYNAMIC_DEBUG_CORE
 	depends on JUMP_LABEL
+1
drivers/gpu/drm/amd/amdgpu/amdgpu.h
···
 
 #define AMDGPU_VCNFW_LOG_SIZE (32 * 1024)
 extern int amdgpu_vcnfw_log;
+extern int amdgpu_sg_display;
 
 #define AMDGPU_VM_MAX_NUM_CTX			4096
 #define AMDGPU_SG_THRESHOLD			(256*1024*1024)
+4-1
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
···
 		 * next job actually sees the results from the previous one
 		 * before we start executing on the same scheduler ring.
 		 */
-		if (!s_fence || s_fence->sched != sched)
+		if (!s_fence || s_fence->sched != sched) {
+			dma_fence_put(fence);
 			continue;
+		}
 
 		r = amdgpu_sync_fence(&p->gang_leader->explicit_sync, fence);
+		dma_fence_put(fence);
 		if (r)
 			return r;
 	}
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
···
 	}
 	adev->in_suspend = false;
 
+	if (adev->enable_mes)
+		amdgpu_mes_self_test(adev);
+
 	if (amdgpu_acpi_smart_shift_update(dev, AMDGPU_SS_DEV_D0))
 		DRM_WARN("smart shift update failed\n");
+11
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
···
 int amdgpu_smartshift_bias;
 int amdgpu_use_xgmi_p2p = 1;
 int amdgpu_vcnfw_log;
+int amdgpu_sg_display = -1; /* auto */
 
 static void amdgpu_drv_delayed_reset_work_handler(struct work_struct *work);
 
···
  */
 MODULE_PARM_DESC(vcnfw_log, "Enable vcnfw log(0 = disable (default value), 1 = enable)");
 module_param_named(vcnfw_log, amdgpu_vcnfw_log, int, 0444);
+
+/**
+ * DOC: sg_display (int)
+ * Disable S/G (scatter/gather) display (i.e., display from system memory).
+ * This option is only relevant on APUs.  Set this option to 0 to disable
+ * S/G display if you experience flickering or other issues under memory
+ * pressure and report the issue.
+ */
+MODULE_PARM_DESC(sg_display, "S/G Display (-1 = auto (default), 0 = disable)");
+module_param_named(sg_display, amdgpu_sg_display, int, 0444);
 
 /**
  * DOC: smu_pptable_id (int)
+7-1
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
···
 		if (!ring || !ring->fence_drv.initialized)
 			continue;
 
-		if (!ring->no_scheduler)
+		/*
+		 * Notice we check for sched.ops since there's some
+		 * override on the meaning of sched.ready by amdgpu.
+		 * The natural check would be sched.ready, which is
+		 * set as drm_sched_init() finishes...
+		 */
+		if (ring->sched.ops)
 			drm_sched_fini(&ring->sched);
 
 		for (j = 0; j <= ring->fence_drv.num_fences_mask; ++j)
···
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
 	/* it's only intended for use in mes_self_test case, not for s0ix and reset */
-	if (!amdgpu_in_reset(adev) && !adev->in_s0ix &&
+	if (!amdgpu_in_reset(adev) && !adev->in_s0ix && !adev->in_suspend &&
 	    (adev->ip_versions[GC_HWIP][0] != IP_VERSION(11, 0, 3)))
 		amdgpu_mes_self_test(adev);
···
 
 	memset(pa_config, 0, sizeof(*pa_config));
 
-	logical_addr_low = min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18;
-	pt_base = amdgpu_gmc_pd_addr(adev->gart.bo);
-
-	if (adev->apu_flags & AMD_APU_IS_RAVEN2)
-		/*
-		 * Raven2 has a HW issue that it is unable to use the vram which
-		 * is out of MC_VM_SYSTEM_APERTURE_HIGH_ADDR. So here is the
-		 * workaround that increase system aperture high address (add 1)
-		 * to get rid of the VM fault and hardware hang.
-		 */
-		logical_addr_high = max((adev->gmc.fb_end >> 18) + 0x1, adev->gmc.agp_end >> 18);
-	else
-		logical_addr_high = max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18;
-
 	agp_base = 0;
 	agp_bot = adev->gmc.agp_start >> 24;
 	agp_top = adev->gmc.agp_end >> 24;
 
+	/* AGP aperture is disabled */
+	if (agp_bot == agp_top) {
+		logical_addr_low = adev->gmc.vram_start >> 18;
+		if (adev->apu_flags & AMD_APU_IS_RAVEN2)
+			/*
+			 * Raven2 has a HW issue that it is unable to use the vram which
+			 * is out of MC_VM_SYSTEM_APERTURE_HIGH_ADDR. So here is the
+			 * workaround that increase system aperture high address (add 1)
+			 * to get rid of the VM fault and hardware hang.
+			 */
+			logical_addr_high = (adev->gmc.fb_end >> 18) + 0x1;
+		else
+			logical_addr_high = adev->gmc.vram_end >> 18;
+	} else {
+		logical_addr_low = min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18;
+		if (adev->apu_flags & AMD_APU_IS_RAVEN2)
+			/*
+			 * Raven2 has a HW issue that it is unable to use the vram which
+			 * is out of MC_VM_SYSTEM_APERTURE_HIGH_ADDR. So here is the
+			 * workaround that increase system aperture high address (add 1)
+			 * to get rid of the VM fault and hardware hang.
+			 */
+			logical_addr_high = max((adev->gmc.fb_end >> 18) + 0x1, adev->gmc.agp_end >> 18);
+		else
+			logical_addr_high = max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18;
+	}
+
+	pt_base = amdgpu_gmc_pd_addr(adev->gart.bo);
 
 	page_table_start.high_part = (u32)(adev->gmc.gart_start >> 44) & 0xF;
 	page_table_start.low_part = (u32)(adev->gmc.gart_start >> 12);
···
 	case IP_VERSION(3, 0, 1):
 	case IP_VERSION(3, 1, 2):
 	case IP_VERSION(3, 1, 3):
+	case IP_VERSION(3, 1, 4):
+	case IP_VERSION(3, 1, 5):
 	case IP_VERSION(3, 1, 6):
 		init_data.flags.gpu_vm_support = true;
 		break;
···
 		}
 		break;
 	}
+	if (init_data.flags.gpu_vm_support &&
+	    (amdgpu_sg_display == 0))
+		init_data.flags.gpu_vm_support = false;
 
 	if (init_data.flags.gpu_vm_support)
 		adev->mode_info.gpu_vm_support = true;
···
 	 * `dcn10_can_pipe_disable_cursor`). By now, all modified planes are in
 	 * atomic state, so call drm helper to normalize zpos.
 	 */
-	drm_atomic_normalize_zpos(dev, state);
+	ret = drm_atomic_normalize_zpos(dev, state);
+	if (ret) {
+		drm_dbg(dev, "drm_atomic_normalize_zpos() failed\n");
+		goto fail;
+	}
 
 	/* Remove exiting planes if they are modified */
 	for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) {
···
 		return -ENOMEM;
 	}
 
+	/*
+	 * vmw_bo_init will delete the *p_bo object if it fails
+	 */
 	ret = vmw_bo_init(vmw, *p_bo, size,
 			  placement, interruptible, pin,
 			  bo_free);
···
 
 	return ret;
 out_error:
-	kfree(*p_bo);
 	*p_bo = NULL;
 	return ret;
 }
···
 		ttm_bo_put(&vmw_bo->base);
 	}
 
+	drm_gem_object_put(&vmw_bo->base.base);
 	return ret;
 }
···
 
 	ret = vmw_user_bo_synccpu_grab(vbo, arg->flags);
 	vmw_bo_unreference(&vbo);
+	drm_gem_object_put(&vbo->base.base);
 	if (unlikely(ret != 0)) {
 		if (ret == -ERESTARTSYS || ret == -EBUSY)
 			return -EBUSY;
···
  * struct vmw_buffer_object should be placed.
  * Return: Zero on success, Negative error code on error.
  *
- * The vmw buffer object pointer will be refcounted.
+ * The vmw buffer object pointer will be refcounted (both ttm and gem)
  */
 int vmw_user_bo_lookup(struct drm_file *filp,
 		       uint32_t handle,
···
 
 	*out = gem_to_vmw_bo(gobj);
 	ttm_bo_get(&(*out)->base);
-	drm_gem_object_put(gobj);
 
 	return 0;
 }
···
 	ret = vmw_gem_object_create_with_handle(dev_priv, file_priv,
 						args->size, &args->handle,
 						&vbo);
-
+	/* drop reference from allocate - handle holds it now */
+	drm_gem_object_put(&vbo->base.base);
 	return ret;
 }
+2
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
···
 	}
 	ret = vmw_validation_add_bo(sw_context->ctx, vmw_bo, true, false);
 	ttm_bo_put(&vmw_bo->base);
+	drm_gem_object_put(&vmw_bo->base.base);
 	if (unlikely(ret != 0))
 		return ret;
 
···
 	}
 	ret = vmw_validation_add_bo(sw_context->ctx, vmw_bo, false, false);
 	ttm_bo_put(&vmw_bo->base);
+	drm_gem_object_put(&vmw_bo->base.base);
 	if (unlikely(ret != 0))
 		return ret;
+4-4
drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
···
 			      &vmw_sys_placement :
 			      &vmw_vram_sys_placement,
 			      true, false, &vmw_gem_destroy, p_vbo);
-
-	(*p_vbo)->base.base.funcs = &vmw_gem_object_funcs;
 	if (ret != 0)
 		goto out_no_bo;
 
+	(*p_vbo)->base.base.funcs = &vmw_gem_object_funcs;
+
 	ret = drm_gem_handle_create(filp, &(*p_vbo)->base.base, handle);
-	/* drop reference from allocate - handle holds it now */
-	drm_gem_object_put(&(*p_vbo)->base.base);
 out_no_bo:
 	return ret;
 }
···
 	rep->map_handle = drm_vma_node_offset_addr(&vbo->base.base.vma_node);
 	rep->cur_gmr_id = handle;
 	rep->cur_gmr_offset = 0;
+	/* drop reference from allocate - handle holds it now */
+	drm_gem_object_put(&vbo->base.base);
 out_no_bo:
 	return ret;
 }
+3-1
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
···
 
 err_out:
 	/* vmw_user_lookup_handle takes one ref so does new_fb */
-	if (bo)
+	if (bo) {
 		vmw_bo_unreference(&bo);
+		drm_gem_object_put(&bo->base.base);
+	}
 	if (surface)
 		vmw_surface_unreference(&surface);
+1
drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c
···
 	ret = vmw_overlay_update_stream(dev_priv, buf, arg, true);
 
 	vmw_bo_unreference(&buf);
+	drm_gem_object_put(&buf->base.base);
 
 out_unlock:
 	mutex_unlock(&overlay->mutex);
···
 		addr = arg + offsetof(struct hfi1_tid_info, tidcnt);
 		if (copy_to_user((void __user *)addr, &tinfo.tidcnt,
 				 sizeof(tinfo.tidcnt)))
-			return -EFAULT;
+			ret = -EFAULT;
 
 		addr = arg + offsetof(struct hfi1_tid_info, length);
-		if (copy_to_user((void __user *)addr, &tinfo.length,
+		if (!ret && copy_to_user((void __user *)addr, &tinfo.length,
 				 sizeof(tinfo.length)))
 			ret = -EFAULT;
+
+		if (ret)
+			hfi1_user_exp_rcv_invalid(fd, &tinfo);
 	}
 
 	return ret;
+2-7
drivers/infiniband/hw/hfi1/user_exp_rcv.c
···
 static int pin_rcv_pages(struct hfi1_filedata *fd, struct tid_user_buf *tidbuf)
 {
 	int pinned;
-	unsigned int npages;
+	unsigned int npages = tidbuf->npages;
 	unsigned long vaddr = tidbuf->vaddr;
 	struct page **pages = NULL;
 	struct hfi1_devdata *dd = fd->uctxt->dd;
-
-	/* Get the number of pages the user buffer spans */
-	npages = num_user_pages(vaddr, tidbuf->length);
-	if (!npages)
-		return -EINVAL;
 
 	if (npages > fd->uctxt->expected_count) {
 		dd_dev_err(dd, "Expected buffer too big\n");
···
 		return pinned;
 	}
 	tidbuf->pages = pages;
-	tidbuf->npages = npages;
 	fd->tid_n_pinned += pinned;
 	return pinned;
 }
···
 	mutex_init(&tidbuf->cover_mutex);
 	tidbuf->vaddr = tinfo->vaddr;
 	tidbuf->length = tinfo->length;
+	tidbuf->npages = num_user_pages(tidbuf->vaddr, tidbuf->length);
 	tidbuf->psets = kcalloc(uctxt->expected_count, sizeof(*tidbuf->psets),
 				GFP_KERNEL);
 	if (!tidbuf->psets) {
+3
drivers/infiniband/hw/irdma/cm.c
···
 			continue;
 
 		idev = in_dev_get(ip_dev);
+		if (!idev)
+			continue;
+
 		in_dev_for_each_ifa_rtnl(ifa, idev) {
 			ibdev_dbg(&iwdev->ibdev,
 				  "CM: Allocating child CM Listener forIP=%pI4, vlan_id=%d, MAC=%pM\n",
+1-1
drivers/infiniband/hw/mana/qp.c
···
 
 	/* IB ports start with 1, MANA Ethernet ports start with 0 */
 	port = ucmd.port;
-	if (ucmd.port > mc->num_ports)
+	if (port < 1 || port > mc->num_ports)
 		return -EINVAL;
 
 	if (attr->cap.max_send_wr > MAX_SEND_BUFFERS_PER_QUEUE) {
+4-4
drivers/infiniband/hw/usnic/usnic_uiom.c
···
 			size = pa_end - pa_start + PAGE_SIZE;
 			usnic_dbg("va 0x%lx pa %pa size 0x%zx flags 0x%x",
 				  va_start, &pa_start, size, flags);
-			err = iommu_map(pd->domain, va_start, pa_start,
-					size, flags);
+			err = iommu_map_atomic(pd->domain, va_start,
+					       pa_start, size, flags);
 			if (err) {
 				usnic_err("Failed to map va 0x%lx pa %pa size 0x%zx with err %d\n",
 					  va_start, &pa_start, size, err);
···
 			size = pa - pa_start + PAGE_SIZE;
 			usnic_dbg("va 0x%lx pa %pa size 0x%zx flags 0x%x\n",
 				  va_start, &pa_start, size, flags);
-			err = iommu_map(pd->domain, va_start, pa_start,
-					size, flags);
+			err = iommu_map_atomic(pd->domain, va_start,
+					       pa_start, size, flags);
 			if (err) {
 				usnic_err("Failed to map va 0x%lx pa %pa size 0x%zx with err %d\n",
 					  va_start, &pa_start, size, err);
···
 	struct i40e_pf *pf = vsi->back;
 
 	if (i40e_enabled_xdp_vsi(vsi)) {
-		int frame_size = new_mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
+		int frame_size = new_mtu + I40E_PACKET_HDR_PAD;
 
 		if (frame_size > i40e_max_xdp_frame_size(vsi))
 			return -EINVAL;
···
 	}
 
 	br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC);
+	if (!br_spec)
+		return -EINVAL;
 
 	nla_for_each_nested(attr, br_spec, rem) {
 		__u16 mode;
+2-2
drivers/net/ethernet/intel/ice/ice_devlink.c
···
 {
 	int status;
 
-	if (node->tx_priority >= 8) {
+	if (priority >= 8) {
 		NL_SET_ERR_MSG_MOD(extack, "Priority should be less than 8");
 		return -EINVAL;
 	}
···
 {
 	int status;
 
-	if (node->tx_weight > 200 || node->tx_weight < 1) {
+	if (weight > 200 || weight < 1) {
 		NL_SET_ERR_MSG_MOD(extack, "Weight must be between 1 and 200");
 		return -EINVAL;
 	}
+26
drivers/net/ethernet/intel/ice/ice_main.c
···
 	if (status && status != -EEXIST)
 		return status;
 
+	netdev_dbg(vsi->netdev, "set promisc filter bits for VSI %i: 0x%x\n",
+		   vsi->vsi_num, promisc_m);
 	return 0;
 }
 
···
 					    promisc_m, 0);
 	}
 
+	netdev_dbg(vsi->netdev, "clear promisc filter bits for VSI %i: 0x%x\n",
+		   vsi->vsi_num, promisc_m);
 	return status;
 }
 
···
 			}
 			err = 0;
 			vlan_ops->dis_rx_filtering(vsi);
+
+			/* promiscuous mode implies allmulticast so
+			 * that VSIs that are in promiscuous mode are
+			 * subscribed to multicast packets coming to
+			 * the port
+			 */
+			err = ice_set_promisc(vsi,
+					      ICE_MCAST_PROMISC_BITS);
+			if (err)
+				goto out_promisc;
 		}
 	} else {
 		/* Clear Rx filter to remove traffic from wire */
···
 			if (vsi->netdev->features &
 			    NETIF_F_HW_VLAN_CTAG_FILTER)
 				vlan_ops->ena_rx_filtering(vsi);
+		}
+
+		/* disable allmulti here, but only if allmulti is not
+		 * still enabled for the netdev
+		 */
+		if (!(vsi->current_netdev_flags & IFF_ALLMULTI)) {
+			err = ice_clear_promisc(vsi,
+						ICE_MCAST_PROMISC_BITS);
+			if (err) {
+				netdev_err(netdev, "Error %d clearing multicast promiscuous on VSI %i\n",
+					   err, vsi->vsi_num);
+			}
 		}
 	}
 }
+31-13
drivers/net/ethernet/intel/ice/ice_xsk.c
···
 }
 
 /**
- * ice_clean_xdp_irq_zc - AF_XDP ZC specific Tx cleaning routine
+ * ice_clean_xdp_tx_buf - Free and unmap XDP Tx buffer
+ * @xdp_ring: XDP Tx ring
+ * @tx_buf: Tx buffer to clean
+ */
+static void
+ice_clean_xdp_tx_buf(struct ice_tx_ring *xdp_ring, struct ice_tx_buf *tx_buf)
+{
+	page_frag_free(tx_buf->raw_buf);
+	xdp_ring->xdp_tx_active--;
+	dma_unmap_single(xdp_ring->dev, dma_unmap_addr(tx_buf, dma),
+			 dma_unmap_len(tx_buf, len), DMA_TO_DEVICE);
+	dma_unmap_len_set(tx_buf, len, 0);
+}
+
+/**
+ * ice_clean_xdp_irq_zc - produce AF_XDP descriptors to CQ
  * @xdp_ring: XDP Tx ring
  */
 static void ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring)
···
 	struct ice_tx_desc *tx_desc;
 	u16 cnt = xdp_ring->count;
 	struct ice_tx_buf *tx_buf;
+	u16 completed_frames = 0;
 	u16 xsk_frames = 0;
 	u16 last_rs;
 	int i;
 
 	last_rs = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : cnt - 1;
 	tx_desc = ICE_TX_DESC(xdp_ring, last_rs);
-	if (tx_desc->cmd_type_offset_bsz &
-	    cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE)) {
+	if ((tx_desc->cmd_type_offset_bsz &
+	    cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
 		if (last_rs >= ntc)
-			xsk_frames = last_rs - ntc + 1;
+			completed_frames = last_rs - ntc + 1;
 		else
-			xsk_frames = last_rs + cnt - ntc + 1;
+			completed_frames = last_rs + cnt - ntc + 1;
 	}
 
-	if (!xsk_frames)
+	if (!completed_frames)
 		return;
 
-	if (likely(!xdp_ring->xdp_tx_active))
+	if (likely(!xdp_ring->xdp_tx_active)) {
+		xsk_frames = completed_frames;
 		goto skip;
+	}
 
 	ntc = xdp_ring->next_to_clean;
-	for (i = 0; i < xsk_frames; i++) {
+	for (i = 0; i < completed_frames; i++) {
 		tx_buf = &xdp_ring->tx_buf[ntc];
 
-		if (tx_buf->xdp) {
-			xsk_buff_free(tx_buf->xdp);
-			xdp_ring->xdp_tx_active--;
+		if (tx_buf->raw_buf) {
+			ice_clean_xdp_tx_buf(xdp_ring, tx_buf);
+			tx_buf->raw_buf = NULL;
 		} else {
 			xsk_frames++;
 		}
 
 		ntc++;
-		if (ntc == cnt)
+		if (ntc >= xdp_ring->count)
 			ntc = 0;
 	}
 skip:
 	tx_desc->cmd_type_offset_bsz = 0;
-	xdp_ring->next_to_clean += xsk_frames;
+	xdp_ring->next_to_clean += completed_frames;
 	if (xdp_ring->next_to_clean >= cnt)
 		xdp_ring->next_to_clean -= cnt;
 	if (xsk_frames)
drivers/net/ethernet/intel/igb/igb_main.c (+38 -16)
···
 	}
 }
 
+#ifdef CONFIG_IGB_HWMON
+/**
+ * igb_set_i2c_bb - Init I2C interface
+ * @hw: pointer to hardware structure
+ **/
+static void igb_set_i2c_bb(struct e1000_hw *hw)
+{
+	u32 ctrl_ext;
+	s32 i2cctl;
+
+	ctrl_ext = rd32(E1000_CTRL_EXT);
+	ctrl_ext |= E1000_CTRL_I2C_ENA;
+	wr32(E1000_CTRL_EXT, ctrl_ext);
+	wrfl();
+
+	i2cctl = rd32(E1000_I2CPARAMS);
+	i2cctl |= E1000_I2CBB_EN
+		| E1000_I2C_CLK_OE_N
+		| E1000_I2C_DATA_OE_N;
+	wr32(E1000_I2CPARAMS, i2cctl);
+	wrfl();
+}
+#endif
+
 void igb_reset(struct igb_adapter *adapter)
 {
 	struct pci_dev *pdev = adapter->pdev;
···
 		 * interface.
 		 */
 		if (adapter->ets)
-			mac->ops.init_thermal_sensor_thresh(hw);
+			igb_set_i2c_bb(hw);
+		mac->ops.init_thermal_sensor_thresh(hw);
 	}
 }
 #endif
···
  **/
 static s32 igb_init_i2c(struct igb_adapter *adapter)
 {
-	struct e1000_hw *hw = &adapter->hw;
 	s32 status = 0;
-	s32 i2cctl;
 
 	/* I2C interface supported on i350 devices */
 	if (adapter->hw.mac.type != e1000_i350)
 		return 0;
-
-	i2cctl = rd32(E1000_I2CPARAMS);
-	i2cctl |= E1000_I2CBB_EN
-		| E1000_I2C_CLK_OUT | E1000_I2C_CLK_OE_N
-		| E1000_I2C_DATA_OUT | E1000_I2C_DATA_OE_N;
-	wr32(E1000_I2CPARAMS, i2cctl);
-	wrfl();
 
 	/* Initialize the i2c bus which is controlled by the registers.
 	 * This bus will use the i2c_algo_bit structure that implements
···
 			adapter->ets = true;
 		else
 			adapter->ets = false;
+		/* Only enable I2C bit banging if an external thermal
+		 * sensor is supported.
+		 */
+		if (adapter->ets)
+			igb_set_i2c_bb(hw);
+		hw->mac.ops.init_thermal_sensor_thresh(hw);
 		if (igb_sysfs_init(adapter))
 			dev_err(&pdev->dev,
 				"failed to allocate sysfs resources\n");
···
 	struct timespec64 ts;
 	u32 tsauxc;
 
-	if (pin < 0 || pin >= IGB_N_PEROUT)
+	if (pin < 0 || pin >= IGB_N_SDP)
 		return;
 
 	spin_lock(&adapter->tmreg_lock);
···
 	if (hw->mac.type == e1000_82580 ||
 	    hw->mac.type == e1000_i354 ||
 	    hw->mac.type == e1000_i350) {
-		s64 ns = timespec64_to_ns(&adapter->perout[pin].period);
+		s64 ns = timespec64_to_ns(&adapter->perout[tsintr_tt].period);
 		u32 systiml, systimh, level_mask, level, rem;
 		u64 systim, now;
 
···
 		ts.tv_nsec = (u32)systim;
 		ts.tv_sec = ((u32)(systim >> 32)) & 0xFF;
 	} else {
-		ts = timespec64_add(adapter->perout[pin].start,
-				    adapter->perout[pin].period);
+		ts = timespec64_add(adapter->perout[tsintr_tt].start,
+				    adapter->perout[tsintr_tt].period);
 	}
 
 	/* u32 conversion of tv_sec is safe until y2106 */
···
 	tsauxc = rd32(E1000_TSAUXC);
 	tsauxc |= TSAUXC_EN_TT0;
 	wr32(E1000_TSAUXC, tsauxc);
-	adapter->perout[pin].start = ts;
+	adapter->perout[tsintr_tt].start = ts;
 
 	spin_unlock(&adapter->tmreg_lock);
 }
···
 	struct ptp_clock_event event;
 	struct timespec64 ts;
 
-	if (pin < 0 || pin >= IGB_N_EXTTS)
+	if (pin < 0 || pin >= IGB_N_SDP)
 		return;
 
 	if (hw->mac.type == e1000_82580 ||
drivers/net/ethernet/intel/ixgbe/ixgbe.h (+2)
···
 #define IXGBE_RXBUFFER_4K	4096
 #define IXGBE_MAX_RXBUFFER	16384  /* largest size for a single descriptor */
 
+#define IXGBE_PKT_HDR_PAD	(ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2))
+
 /* Attempt to maximize the headroom available for incoming frames. We
  * use a 2K buffer for receives and need 1536/1534 to store the data for
  * the frame. This leaves us with 512 bytes of room. From that we need
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c (+17 -11)
···
 }
 
 /**
+ * ixgbe_max_xdp_frame_size - returns the maximum allowed frame size for XDP
+ * @adapter: device handle, pointer to adapter
+ */
+static int ixgbe_max_xdp_frame_size(struct ixgbe_adapter *adapter)
+{
+	if (PAGE_SIZE >= 8192 || adapter->flags2 & IXGBE_FLAG2_RX_LEGACY)
+		return IXGBE_RXBUFFER_2K;
+	else
+		return IXGBE_RXBUFFER_3K;
+}
+
+/**
  * ixgbe_change_mtu - Change the Maximum Transfer Unit
  * @netdev: network interface device structure
  * @new_mtu: new value for maximum frame size
···
 {
 	struct ixgbe_adapter *adapter = netdev_priv(netdev);
 
-	if (adapter->xdp_prog) {
-		int new_frame_size = new_mtu + ETH_HLEN + ETH_FCS_LEN +
-				     VLAN_HLEN;
-		int i;
+	if (ixgbe_enabled_xdp_adapter(adapter)) {
+		int new_frame_size = new_mtu + IXGBE_PKT_HDR_PAD;
 
-		for (i = 0; i < adapter->num_rx_queues; i++) {
-			struct ixgbe_ring *ring = adapter->rx_ring[i];
-
-			if (new_frame_size > ixgbe_rx_bufsz(ring)) {
-				e_warn(probe, "Requested MTU size is not supported with XDP\n");
-				return -EINVAL;
-			}
+		if (new_frame_size > ixgbe_max_xdp_frame_size(adapter)) {
+			e_warn(probe, "Requested MTU size is not supported with XDP\n");
+			return -EINVAL;
 		}
 	}
 
drivers/net/ethernet/netronome/nfp/crypto/ipsec.c (+28 -17)
···
 	};
 };
 
-static int nfp_ipsec_cfg_cmd_issue(struct nfp_net *nn, int type, int saidx,
-				   struct nfp_ipsec_cfg_mssg *msg)
+static int nfp_net_ipsec_cfg(struct nfp_net *nn, struct nfp_mbox_amsg_entry *entry)
 {
+	unsigned int offset = nn->tlv_caps.mbox_off + NFP_NET_CFG_MBOX_SIMPLE_VAL;
+	struct nfp_ipsec_cfg_mssg *msg = (struct nfp_ipsec_cfg_mssg *)entry->msg;
 	int i, msg_size, ret;
 
-	msg->cmd = type;
-	msg->sa_idx = saidx;
-	msg->rsp = 0;
-	msg_size = ARRAY_SIZE(msg->raw);
-
-	for (i = 0; i < msg_size; i++)
-		nn_writel(nn, NFP_NET_CFG_MBOX_VAL + 4 * i, msg->raw[i]);
-
-	ret = nfp_net_mbox_reconfig(nn, NFP_NET_CFG_MBOX_CMD_IPSEC);
-	if (ret < 0)
+	ret = nfp_net_mbox_lock(nn, sizeof(*msg));
+	if (ret)
 		return ret;
+
+	msg_size = ARRAY_SIZE(msg->raw);
+	for (i = 0; i < msg_size; i++)
+		nn_writel(nn, offset + 4 * i, msg->raw[i]);
+
+	ret = nfp_net_mbox_reconfig(nn, entry->cmd);
+	if (ret < 0) {
+		nn_ctrl_bar_unlock(nn);
+		return ret;
+	}
 
 	/* For now we always read the whole message response back */
 	for (i = 0; i < msg_size; i++)
-		msg->raw[i] = nn_readl(nn, NFP_NET_CFG_MBOX_VAL + 4 * i);
+		msg->raw[i] = nn_readl(nn, offset + 4 * i);
+
+	nn_ctrl_bar_unlock(nn);
 
 	switch (msg->rsp) {
 	case NFP_IPSEC_CFG_MSSG_OK:
···
 	}
 
 	/* Allocate saidx and commit the SA */
-	err = nfp_ipsec_cfg_cmd_issue(nn, NFP_IPSEC_CFG_MSSG_ADD_SA, saidx, &msg);
+	msg.cmd = NFP_IPSEC_CFG_MSSG_ADD_SA;
+	msg.sa_idx = saidx;
+	err = nfp_net_sched_mbox_amsg_work(nn, NFP_NET_CFG_MBOX_CMD_IPSEC, &msg,
+					   sizeof(msg), nfp_net_ipsec_cfg);
 	if (err) {
 		xa_erase(&nn->xa_ipsec, saidx);
 		NL_SET_ERR_MSG_MOD(extack, "Failed to issue IPsec command");
···
 
 static void nfp_net_xfrm_del_state(struct xfrm_state *x)
 {
+	struct nfp_ipsec_cfg_mssg msg = {
+		.cmd = NFP_IPSEC_CFG_MSSG_INV_SA,
+		.sa_idx = x->xso.offload_handle - 1,
+	};
 	struct net_device *netdev = x->xso.dev;
-	struct nfp_ipsec_cfg_mssg msg;
 	struct nfp_net *nn;
 	int err;
 
 	nn = netdev_priv(netdev);
-	err = nfp_ipsec_cfg_cmd_issue(nn, NFP_IPSEC_CFG_MSSG_INV_SA,
-				      x->xso.offload_handle - 1, &msg);
+	err = nfp_net_sched_mbox_amsg_work(nn, NFP_NET_CFG_MBOX_CMD_IPSEC, &msg,
+					   sizeof(msg), nfp_net_ipsec_cfg);
 	if (err)
 		nn_warn(nn, "Failed to invalidate SA in hardware\n");
 
drivers/net/ethernet/netronome/nfp/nfp_net.h (+19 -6)
···
  * @vnic_no_name:	For non-port PF vNIC make ndo_get_phys_port_name return
  *			-EOPNOTSUPP to keep backwards compatibility (set by app)
  * @port:		Pointer to nfp_port structure if vNIC is a port
- * @mc_lock:		Protect mc_addrs list
- * @mc_addrs:		List of mc addrs to add/del to HW
- * @mc_work:		Work to update mc addrs
+ * @mbox_amsg:		Asynchronously processed message via mailbox
+ * @mbox_amsg.lock:	Protect message list
+ * @mbox_amsg.list:	List of message to process
+ * @mbox_amsg.work:	Work to process message asynchronously
  * @app_priv:		APP private data for this vNIC
  */
 struct nfp_net {
···
 
 	struct nfp_port *port;
 
-	spinlock_t mc_lock;
-	struct list_head mc_addrs;
-	struct work_struct mc_work;
+	struct {
+		spinlock_t lock;
+		struct list_head list;
+		struct work_struct work;
+	} mbox_amsg;
 
 	void *app_priv;
 };
+
+struct nfp_mbox_amsg_entry {
+	struct list_head list;
+	int (*cfg)(struct nfp_net *nn, struct nfp_mbox_amsg_entry *entry);
+	u32 cmd;
+	char msg[];
+};
+
+int nfp_net_sched_mbox_amsg_work(struct nfp_net *nn, u32 cmd, const void *data, size_t len,
+				 int (*cb)(struct nfp_net *, struct nfp_mbox_amsg_entry *));
 
 /* Functions to read/write from/to a BAR
  * Performs any endian conversion necessary.
···
 		init_msg, init_msg_len, &act_len, KALMIA_USB_TIMEOUT);
 	if (status != 0) {
 		netdev_err(dev->net,
-			"Error sending init packet. Status %i, length %i\n",
-			status, act_len);
+			"Error sending init packet. Status %i\n",
+			status);
 		return status;
 	}
 	else if (act_len != init_msg_len) {
···
 
 	if (status != 0)
 		netdev_err(dev->net,
-			"Error receiving init result. Status %i, length %i\n",
-			status, act_len);
+			"Error receiving init result. Status %i\n",
+			status);
 	else if (act_len != expected_len)
 		netdev_err(dev->net, "Unexpected init result length: %i\n",
 			act_len);
drivers/net/vmxnet3/vmxnet3_drv.c (+25 -25)
···
 			rxd->len = rbi->len;
 		}
 
-#ifdef VMXNET3_RSS
-		if (rcd->rssType != VMXNET3_RCD_RSS_TYPE_NONE &&
-		    (adapter->netdev->features & NETIF_F_RXHASH)) {
-			enum pkt_hash_types hash_type;
-
-			switch (rcd->rssType) {
-			case VMXNET3_RCD_RSS_TYPE_IPV4:
-			case VMXNET3_RCD_RSS_TYPE_IPV6:
-				hash_type = PKT_HASH_TYPE_L3;
-				break;
-			case VMXNET3_RCD_RSS_TYPE_TCPIPV4:
-			case VMXNET3_RCD_RSS_TYPE_TCPIPV6:
-			case VMXNET3_RCD_RSS_TYPE_UDPIPV4:
-			case VMXNET3_RCD_RSS_TYPE_UDPIPV6:
-				hash_type = PKT_HASH_TYPE_L4;
-				break;
-			default:
-				hash_type = PKT_HASH_TYPE_L3;
-				break;
-			}
-			skb_set_hash(ctx->skb,
-				     le32_to_cpu(rcd->rssHash),
-				     hash_type);
-		}
-#endif
 		skb_record_rx_queue(ctx->skb, rq->qid);
 		skb_put(ctx->skb, rcd->len);
···
 		u32 mtu = adapter->netdev->mtu;
 		skb->len += skb->data_len;
 
+#ifdef VMXNET3_RSS
+		if (rcd->rssType != VMXNET3_RCD_RSS_TYPE_NONE &&
+		    (adapter->netdev->features & NETIF_F_RXHASH)) {
+			enum pkt_hash_types hash_type;
+
+			switch (rcd->rssType) {
+			case VMXNET3_RCD_RSS_TYPE_IPV4:
+			case VMXNET3_RCD_RSS_TYPE_IPV6:
+				hash_type = PKT_HASH_TYPE_L3;
+				break;
+			case VMXNET3_RCD_RSS_TYPE_TCPIPV4:
+			case VMXNET3_RCD_RSS_TYPE_TCPIPV6:
+			case VMXNET3_RCD_RSS_TYPE_UDPIPV4:
+			case VMXNET3_RCD_RSS_TYPE_UDPIPV6:
+				hash_type = PKT_HASH_TYPE_L4;
+				break;
+			default:
+				hash_type = PKT_HASH_TYPE_L3;
+				break;
+			}
+			skb_set_hash(skb,
+				     le32_to_cpu(rcd->rssHash),
+				     hash_type);
+		}
+#endif
 		vmxnet3_rx_csum(adapter, skb,
 				(union Vmxnet3_GenericDesc *)rcd);
 		skb->protocol = eth_type_trans(skb, adapter->netdev);
drivers/nvdimm/Kconfig (+19)
···
 	depends on ENCRYPTED_KEYS
 	depends on (LIBNVDIMM=ENCRYPTED_KEYS) || LIBNVDIMM=m
 
+config NVDIMM_KMSAN
+	bool
+	depends on KMSAN
+	help
+	  KMSAN, and other memory debug facilities, increase the size of
+	  'struct page' to contain extra metadata. This collides with
+	  the NVDIMM capability to store a potentially
+	  larger-than-"System RAM" size 'struct page' array in a
+	  reservation of persistent memory rather than limited /
+	  precious DRAM. However, that reservation needs to persist for
+	  the life of the given NVDIMM namespace. If you are using KMSAN
+	  to debug an issue unrelated to NVDIMMs or DAX then say N to this
+	  option. Otherwise, say Y but understand that any namespaces
+	  (with the page array stored pmem) created with this build of
+	  the kernel will permanently reserve and strand excess
+	  capacity compared to the CONFIG_KMSAN=n case.
+
+	  Select N if unsure.
+
 config NVDIMM_TEST_BUILD
 	tristate "Build the unit test core"
 	depends on m
drivers/nvdimm/nd.h (+1 -1)
···
 		struct nd_namespace_common *ndns);
 #if IS_ENABLED(CONFIG_ND_CLAIM)
 /* max struct page size independent of kernel config */
-#define MAX_STRUCT_PAGE_SIZE 128
+#define MAX_STRUCT_PAGE_SIZE 64
 int nvdimm_setup_pfn(struct nd_pfn *nd_pfn, struct dev_pagemap *pgmap);
 #else
 static inline int nvdimm_setup_pfn(struct nd_pfn *nd_pfn,
drivers/nvdimm/pfn_devs.c (+27 -15)
···
 #include "pfn.h"
 #include "nd.h"
 
+static const bool page_struct_override = IS_ENABLED(CONFIG_NVDIMM_KMSAN);
+
 static void nd_pfn_release(struct device *dev)
 {
 	struct nd_region *nd_region = to_nd_region(dev->parent);
···
 		return -ENXIO;
 	}
 
-	/*
-	 * Note, we use 64 here for the standard size of struct page,
-	 * debugging options may cause it to be larger in which case the
-	 * implementation will limit the pfns advertised through
-	 * ->direct_access() to those that are included in the memmap.
-	 */
 	start = nsio->res.start;
 	size = resource_size(&nsio->res);
 	npfns = PHYS_PFN(size - SZ_8K);
···
 	}
 	end_trunc = start + size - ALIGN_DOWN(start + size, align);
 	if (nd_pfn->mode == PFN_MODE_PMEM) {
+		unsigned long page_map_size = MAX_STRUCT_PAGE_SIZE * npfns;
+
 		/*
 		 * The altmap should be padded out to the block size used
 		 * when populating the vmemmap. This *should* be equal to
 		 * PMD_SIZE for most architectures.
 		 *
-		 * Also make sure size of struct page is less than 128. We
-		 * want to make sure we use large enough size here so that
-		 * we don't have a dynamic reserve space depending on
-		 * struct page size. But we also want to make sure we notice
-		 * when we end up adding new elements to struct page.
+		 * Also make sure size of struct page is less than
+		 * MAX_STRUCT_PAGE_SIZE. The goal here is compatibility in the
+		 * face of production kernel configurations that reduce the
+		 * 'struct page' size below MAX_STRUCT_PAGE_SIZE. For debug
+		 * kernel configurations that increase the 'struct page' size
+		 * above MAX_STRUCT_PAGE_SIZE, the page_struct_override allows
+		 * for continuing with the capacity that will be wasted when
+		 * reverting to a production kernel configuration. Otherwise,
+		 * those configurations are blocked by default.
 		 */
-		BUILD_BUG_ON(sizeof(struct page) > MAX_STRUCT_PAGE_SIZE);
-		offset = ALIGN(start + SZ_8K + MAX_STRUCT_PAGE_SIZE * npfns, align)
-			- start;
+		if (sizeof(struct page) > MAX_STRUCT_PAGE_SIZE) {
+			if (page_struct_override)
+				page_map_size = sizeof(struct page) * npfns;
+			else {
+				dev_err(&nd_pfn->dev,
+					"Memory debug options prevent using pmem for the page map\n");
+				return -EINVAL;
+			}
+		}
+		offset = ALIGN(start + SZ_8K + page_map_size, align) - start;
 	} else if (nd_pfn->mode == PFN_MODE_RAM)
 		offset = ALIGN(start + SZ_8K, align) - start;
 	else
···
 	pfn_sb->version_minor = cpu_to_le16(4);
 	pfn_sb->end_trunc = cpu_to_le32(end_trunc);
 	pfn_sb->align = cpu_to_le32(nd_pfn->align);
-	pfn_sb->page_struct_size = cpu_to_le16(MAX_STRUCT_PAGE_SIZE);
+	if (sizeof(struct page) > MAX_STRUCT_PAGE_SIZE && page_struct_override)
+		pfn_sb->page_struct_size = cpu_to_le16(sizeof(struct page));
+	else
+		pfn_sb->page_struct_size = cpu_to_le16(MAX_STRUCT_PAGE_SIZE);
 	pfn_sb->page_size = cpu_to_le32(PAGE_SIZE);
 	checksum = nd_sb_checksum((struct nd_gen_sb *) pfn_sb);
 	pfn_sb->checksum = cpu_to_le64(checksum);
···
 static int aspeed_sig_expr_disable(struct aspeed_pinmux_data *ctx,
 				   const struct aspeed_sig_expr *expr)
 {
+	int ret;
+
 	pr_debug("Disabling signal %s for %s\n", expr->signal,
 		 expr->function);
 
-	return aspeed_sig_expr_set(ctx, expr, false);
+	ret = aspeed_sig_expr_eval(ctx, expr, true);
+	if (ret < 0)
+		return ret;
+
+	if (ret)
+		return aspeed_sig_expr_set(ctx, expr, false);
+
+	return 0;
 }
 
 /**
···
 	int ret = 0;
 
 	if (!exprs)
-		return true;
+		return -EINVAL;
 
 	while (*exprs && !ret) {
 		ret = aspeed_sig_expr_disable(ctx, *exprs);
drivers/pinctrl/intel/pinctrl-intel.c (+13 -3)
···
 EXPORT_SYMBOL_GPL(intel_pinctrl_get_soc_data);
 
 #ifdef CONFIG_PM_SLEEP
+static bool __intel_gpio_is_direct_irq(u32 value)
+{
+	return (value & PADCFG0_GPIROUTIOXAPIC) && (value & PADCFG0_GPIOTXDIS) &&
+	       (__intel_gpio_get_gpio_mode(value) == PADCFG0_PMODE_GPIO);
+}
+
 static bool intel_pinctrl_should_save(struct intel_pinctrl *pctrl, unsigned int pin)
 {
 	const struct pin_desc *pd = pin_desc_get(pctrl->pctldev, pin);
···
 	 * See https://bugzilla.kernel.org/show_bug.cgi?id=214749.
 	 */
 	value = readl(intel_get_padcfg(pctrl, pin, PADCFG0));
-	if ((value & PADCFG0_GPIROUTIOXAPIC) && (value & PADCFG0_GPIOTXDIS) &&
-	    (__intel_gpio_get_gpio_mode(value) == PADCFG0_PMODE_GPIO))
+	if (__intel_gpio_is_direct_irq(value))
 		return true;
 
 	return false;
···
 	for (i = 0; i < pctrl->soc->npins; i++) {
 		const struct pinctrl_pin_desc *desc = &pctrl->soc->pins[i];
 
-		if (!intel_pinctrl_should_save(pctrl, desc->number))
+		if (!(intel_pinctrl_should_save(pctrl, desc->number) ||
+		      /*
+		       * If the firmware mangled the register contents too much,
+		       * check the saved value for the Direct IRQ mode.
+		       */
+		      __intel_gpio_is_direct_irq(pads[i].padcfg0)))
 			continue;
 
 		intel_restore_padcfg(pctrl, desc->number, PADCFG0, pads[i].padcfg0);
···
 	if (!pcs->fmask)
 		return 0;
 	function = pinmux_generic_get_function(pctldev, fselector);
+	if (!function)
+		return -EINVAL;
 	func = function->data;
 	if (!func)
 		return -EINVAL;
···
 	 * will be adjusted at the final stage of the IRQ-based SPI transfer
 	 * execution so not to lose the leftover of the incoming data.
 	 */
-	level = min_t(u16, dws->fifo_len / 2, dws->tx_len);
+	level = min_t(unsigned int, dws->fifo_len / 2, dws->tx_len);
 	dw_writel(dws, DW_SPI_TXFTLR, level);
 	dw_writel(dws, DW_SPI_RXFTLR, level - 1);
 
···
 	net->max_mtu = GETHER_MAX_MTU_SIZE;
 
 	dev->gadget = g;
+	SET_NETDEV_DEV(net, &g->dev);
 	SET_NETDEV_DEVTYPE(net, &gadget_type);
 
 	status = register_netdev(net);
···
 	struct usb_gadget *g;
 	int status;
 
+	if (!net->dev.parent)
+		return -EINVAL;
 	dev = netdev_priv(net);
 	g = dev->gadget;
 
···
 
 	dev = netdev_priv(net);
 	dev->gadget = g;
+	SET_NETDEV_DEV(net, &g->dev);
 }
 EXPORT_SYMBOL_GPL(gether_set_gadget);
 
drivers/usb/typec/altmodes/displayport.c (+4 -4)
···
 	/* FIXME: Port can only be DFP_U. */
 
 	/* Make sure we have compatiple pin configurations */
-	if (!(DP_CAP_DFP_D_PIN_ASSIGN(port->vdo) &
-	      DP_CAP_UFP_D_PIN_ASSIGN(alt->vdo)) &&
-	    !(DP_CAP_UFP_D_PIN_ASSIGN(port->vdo) &
-	      DP_CAP_DFP_D_PIN_ASSIGN(alt->vdo)))
+	if (!(DP_CAP_PIN_ASSIGN_DFP_D(port->vdo) &
+	      DP_CAP_PIN_ASSIGN_UFP_D(alt->vdo)) &&
+	    !(DP_CAP_PIN_ASSIGN_UFP_D(port->vdo) &
+	      DP_CAP_PIN_ASSIGN_DFP_D(alt->vdo)))
 		return -ENODEV;
 
 	ret = sysfs_create_group(&alt->dev.kobj, &dp_altmode_group);
···
 }
 
 static int flush_dir_items_batch(struct btrfs_trans_handle *trans,
-				 struct btrfs_root *log,
+				 struct btrfs_inode *inode,
 				 struct extent_buffer *src,
 				 struct btrfs_path *dst_path,
 				 int start_slot,
 				 int count)
 {
+	struct btrfs_root *log = inode->root->log_root;
 	char *ins_data = NULL;
 	struct btrfs_item_batch batch;
 	struct extent_buffer *dst;
 	unsigned long src_offset;
 	unsigned long dst_offset;
+	u64 last_index;
 	struct btrfs_key key;
 	u32 item_size;
 	int ret;
···
 	src_offset = btrfs_item_ptr_offset(src, start_slot + count - 1);
 	copy_extent_buffer(dst, src, dst_offset, src_offset, batch.total_data_size);
 	btrfs_release_path(dst_path);
+
+	last_index = batch.keys[count - 1].offset;
+	ASSERT(last_index > inode->last_dir_index_offset);
+
+	/*
+	 * If for some unexpected reason the last item's index is not greater
+	 * than the last index we logged, warn and return an error to fallback
+	 * to a transaction commit.
+	 */
+	if (WARN_ON(last_index <= inode->last_dir_index_offset))
+		ret = -EUCLEAN;
+	else
+		inode->last_dir_index_offset = last_index;
 out:
 	kfree(ins_data);
 
···
 		}
 
 		di = btrfs_item_ptr(src, i, struct btrfs_dir_item);
-		ctx->last_dir_item_offset = key.offset;
 
 		/*
 		 * Skip ranges of items that consist only of dir item keys created
···
 	if (batch_size > 0) {
 		int ret;
 
-		ret = flush_dir_items_batch(trans, log, src, dst_path,
+		ret = flush_dir_items_batch(trans, inode, src, dst_path,
 					    batch_start, batch_size);
 		if (ret < 0)
 			return ret;
···
 
 	min_key = BTRFS_DIR_START_INDEX;
 	max_key = 0;
-	ctx->last_dir_item_offset = inode->last_dir_index_offset;
 
 	while (1) {
 		ret = log_dir_items(trans, inode, path, dst_path,
···
 			break;
 		min_key = max_key + 1;
 	}
-
-	inode->last_dir_index_offset = ctx->last_dir_item_offset;
 
 	return 0;
 }
fs/btrfs/tree-log.h (-2)
···
 	bool logging_new_delayed_dentries;
 	/* Indicate if the inode being logged was logged before. */
 	bool logged_before;
-	/* Tracks the last logged dir item/index key offset. */
-	u64 last_dir_item_offset;
 	struct inode *inode;
 	struct list_head list;
 	/* Only used for fast fsyncs. */
fs/btrfs/volumes.c (+15 -1)
···
 static void free_fs_devices(struct btrfs_fs_devices *fs_devices)
 {
 	struct btrfs_device *device;
+
 	WARN_ON(fs_devices->opened);
 	while (!list_empty(&fs_devices->devices)) {
 		device = list_entry(fs_devices->devices.next,
···
 
 	mutex_lock(&uuid_mutex);
 	close_fs_devices(fs_devices);
-	if (!fs_devices->opened)
+	if (!fs_devices->opened) {
 		list_splice_init(&fs_devices->seed_list, &list);
+
+		/*
+		 * If the struct btrfs_fs_devices is not assembled with any
+		 * other device, it can be re-initialized during the next mount
+		 * without the needing device-scan step. Therefore, it can be
+		 * fully freed.
+		 */
+		if (fs_devices->num_devices == 1) {
+			list_del(&fs_devices->fs_list);
+			free_fs_devices(fs_devices);
+		}
+	}
+
 
 	list_for_each_entry_safe(fs_devices, tmp, &list, seed_list) {
 		close_fs_devices(fs_devices);
fs/ceph/mds_client.c (+6)
···
 		break;
 
 	case CEPH_SESSION_FLUSHMSG:
+		/* flush cap releases */
+		spin_lock(&session->s_cap_lock);
+		if (session->s_num_cap_releases)
+			ceph_flush_cap_releases(mdsc, session);
+		spin_unlock(&session->s_cap_lock);
+
 		send_flushmsg_ack(mdsc, session, seq);
 		break;
 
fs/dax.c (+3 -2)
···
 	if (ret < 0)
 		goto out_unlock;
 
-	ret = copy_mc_to_kernel(daddr, saddr, length);
-	if (ret)
+	if (copy_mc_to_kernel(daddr, saddr, length) == 0)
+		ret = length;
+	else
 		ret = -EIO;
 
 out_unlock:
···
 	/* Sanity check values */
 
 	/* there is always at least one xattr id */
-	if (*xattr_ids <= 0)
+	if (*xattr_ids == 0)
 		return ERR_PTR(-EINVAL);
 
 	len = SQUASHFS_XATTR_BLOCK_BYTES(*xattr_ids);
···
  * define their own version of this macro in <asm/pgtable.h>
  */
 #if BITS_PER_LONG == 64
-/* This function must be updated when the size of struct page grows above 80
+/* This function must be updated when the size of struct page grows above 96
  * or reduces below 56. The idea that compiler optimizes out switch()
  * statement, and only leaves move/store instructions. Also the compiler can
  * combine write statements if they are both assignments and can be reordered,
···
 {
 	unsigned long *_pp = (void *)page;
 
-	/* Check that struct page is either 56, 64, 72, or 80 bytes */
+	/* Check that struct page is either 56, 64, 72, 80, 88 or 96 bytes */
 	BUILD_BUG_ON(sizeof(struct page) & 7);
 	BUILD_BUG_ON(sizeof(struct page) < 56);
-	BUILD_BUG_ON(sizeof(struct page) > 80);
+	BUILD_BUG_ON(sizeof(struct page) > 96);
 
 	switch (sizeof(struct page)) {
+	case 96:
+		_pp[11] = 0;
+		fallthrough;
+	case 88:
+		_pp[10] = 0;
+		fallthrough;
 	case 80:
 		_pp[9] = 0;
 		fallthrough;
include/linux/netdevice.h (-2)
···
 int register_netdevice_notifier_net(struct net *net, struct notifier_block *nb);
 int unregister_netdevice_notifier_net(struct net *net,
 				      struct notifier_block *nb);
-void move_netdevice_notifier_net(struct net *src_net, struct net *dst_net,
-				 struct notifier_block *nb);
 int register_netdevice_notifier_dev_net(struct net_device *dev,
 					struct notifier_block *nb,
 					struct netdev_net_notifier *nn);
···
 		const int align;
 		const int is_signed;
 		const int filter_type;
+		const int len;
 	};
 	int (*define_fields)(struct trace_event_call *);
 };
···
 	__u32 pad;
 };
 
+/* fence_fd is modified on success if VIRTGPU_EXECBUF_FENCE_FD_OUT flag is set. */
 struct drm_virtgpu_execbuffer {
 	__u32 flags;
 	__u32 size;
kernel/locking/rtmutex.c (+3 -2)
···
 		 * then we need to wake the new top waiter up to try
 		 * to get the lock.
 		 */
-		if (prerequeue_top_waiter != rt_mutex_top_waiter(lock))
-			wake_up_state(waiter->task, waiter->wake_state);
+		top_waiter = rt_mutex_top_waiter(lock);
+		if (prerequeue_top_waiter != top_waiter)
+			wake_up_state(top_waiter->task, top_waiter->wake_state);
 		raw_spin_unlock_irq(&lock->wait_lock);
 		return 0;
 	}
···
 	unsigned long shadow_start, shadow_end;
 	int ret;
 
+	if (!kasan_arch_is_ready())
+		return 0;
+
 	if (!is_vmalloc_or_module_addr((void *)addr))
 		return 0;
 
···
 	unsigned long region_start, region_end;
 	unsigned long size;
 
+	if (!kasan_arch_is_ready())
+		return;
+
 	region_start = ALIGN(start, KASAN_MEMORY_PER_SHADOW_PAGE);
 	region_end = ALIGN_DOWN(end, KASAN_MEMORY_PER_SHADOW_PAGE);
 
···
 	 * with setting memory tags, so the KASAN_VMALLOC_INIT flag is ignored.
 	 */
 
+	if (!kasan_arch_is_ready())
+		return (void *)start;
+
 	if (!is_vmalloc_or_module_addr(start))
 		return (void *)start;
 
···
  */
 void __kasan_poison_vmalloc(const void *start, unsigned long size)
 {
+	if (!kasan_arch_is_ready())
+		return;
+
 	if (!is_vmalloc_or_module_addr(start))
 		return;
 
···
 	end = PFN_DOWN(base + size);
 
 	for (; cursor < end; cursor++) {
-		/*
-		 * Reserved pages are always initialized by the end of
-		 * memblock_free_all() (by memmap_init() and, if deferred
-		 * initialization is enabled, memmap_init_reserved_pages()), so
-		 * these pages can be released directly to the buddy allocator.
-		 */
-		__free_pages_core(pfn_to_page(cursor), 0);
+		memblock_free_pages(pfn_to_page(cursor), cursor, 0);
 		totalram_pages_inc();
 	}
 }
mm/memory.c (+3)
···
 	if (unlikely(!page)) {
 		ret = VM_FAULT_OOM;
 		goto out_page;
+	} else if (unlikely(PTR_ERR(page) == -EHWPOISON)) {
+		ret = VM_FAULT_HWPOISON;
+		goto out_page;
 	}
 	folio = page_folio(page);
 
mm/page_alloc.c (+4 -1)
···
  */
 void __free_pages(struct page *page, unsigned int order)
 {
+	/* get PageHead before we drop reference */
+	int head = PageHead(page);
+
 	if (put_page_testzero(page))
 		free_the_page(page, order);
-	else if (!PageHead(page))
+	else if (!head)
 		while (order-- > 0)
 			free_the_page(page + (1 << order), order);
 }
···
 	__register_netdevice_notifier_net(dst_net, nb, true);
 }
 
-void move_netdevice_notifier_net(struct net *src_net, struct net *dst_net,
-				 struct notifier_block *nb)
-{
-	rtnl_lock();
-	__move_netdevice_notifier_net(src_net, dst_net, nb);
-	rtnl_unlock();
-}
-
 int register_netdevice_notifier_dev_net(struct net_device *dev,
 					struct notifier_block *nb,
 					struct netdev_net_notifier *nn)
···
 
 	BUILD_BUG_ON(n > sizeof(*stats64) / sizeof(u64));
 	for (i = 0; i < n; i++)
-		dst[i] = atomic_long_read(&src[i]);
+		dst[i] = (unsigned long)atomic_long_read(&src[i]);
 	/* zero out counters that only exist in rtnl_link_stats64 */
 	memset((char *)stats64 + n * sizeof(u64), 0,
 	       sizeof(*stats64) - n * sizeof(u64));
+9-1
net/core/net_namespace.c
···
 }
 EXPORT_SYMBOL_GPL(get_net_ns_by_id);

+/* init code that must occur even if setup_net() is not called. */
+static __net_init void preinit_net(struct net *net)
+{
+	ref_tracker_dir_init(&net->notrefcnt_tracker, 128);
+}
+
 /*
  * setup_net runs the initializers for the network namespace object.
  */
···

 	refcount_set(&net->ns.count, 1);
 	ref_tracker_dir_init(&net->refcnt_tracker, 128);
-	ref_tracker_dir_init(&net->notrefcnt_tracker, 128);

 	refcount_set(&net->passive, 1);
 	get_random_bytes(&net->hash_mix, sizeof(u32));
···
 		rv = -ENOMEM;
 		goto dec_ucounts;
 	}
+
+	preinit_net(net);
 	refcount_set(&net->passive, 1);
 	net->ucounts = ucounts;
 	get_user_ns(user_ns);
···
 	init_net.key_domain = &init_net_key_domain;
 #endif
 	down_write(&pernet_ops_rwsem);
+	preinit_net(&init_net);
 	if (setup_net(&init_net, &init_user_ns))
 		panic("Could not setup the initial network namespace");
-1
net/core/stream.c
···
 	sk_mem_reclaim_final(sk);

 	WARN_ON_ONCE(sk->sk_wmem_queued);
-	WARN_ON_ONCE(sk->sk_forward_alloc);

 	/* It is _impossible_ for the backlog to contain anything
 	 * when we get here. All user references to this socket
+2-5
net/dccp/ipv6.c
···
 	*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash), NULL);
 	/* Clone pktoptions received with SYN, if we own the req */
 	if (*own_req && ireq->pktopts) {
-		newnp->pktoptions = skb_clone(ireq->pktopts, GFP_ATOMIC);
+		newnp->pktoptions = skb_clone_and_charge_r(ireq->pktopts, newsk);
 		consume_skb(ireq->pktopts);
 		ireq->pktopts = NULL;
-		if (newnp->pktoptions)
-			skb_set_owner_r(newnp->pktoptions, newsk);
 	}

 	return newsk;
···
 	   --ANK (980728)
 	 */
 	if (np->rxopt.all)
-		opt_skb = skb_clone(skb, GFP_ATOMIC);
+		opt_skb = skb_clone_and_charge_r(skb, sk);

 	if (sk->sk_state == DCCP_OPEN) { /* Fast path */
 		if (dccp_rcv_established(sk, skb, dccp_hdr(skb), skb->len))
···
 		np->flow_label = ip6_flowlabel(ipv6_hdr(opt_skb));
 		if (ipv6_opt_accepted(sk, opt_skb,
 				      &DCCP_SKB_CB(opt_skb)->header.h6)) {
-			skb_set_owner_r(opt_skb, sk);
 			memmove(IP6CB(opt_skb),
 				&DCCP_SKB_CB(opt_skb)->header.h6,
 				sizeof(struct inet6_skb_parm));
-2
net/devlink/dev.c
···
 	 * reload process so the notifications are generated separatelly.
 	 */
 	devlink_notify_unregister(devlink);
-	move_netdevice_notifier_net(curr_net, dest_net,
-				    &devlink->netdevice_nb);
 	write_pnet(&devlink->_net, dest_net);
 	devlink_notify_register(devlink);
 }
+3-3
net/sched/act_ctinfo.c
···
 	cp = rcu_dereference_bh(ca->params);

 	tcf_lastuse_update(&ca->tcf_tm);
-	bstats_update(&ca->tcf_bstats, skb);
+	tcf_action_update_bstats(&ca->common, skb);
 	action = READ_ONCE(ca->tcf_action);

 	wlen = skb_network_offset(skb);
···
 	index = actparm->index;
 	err = tcf_idr_check_alloc(tn, &index, a, bind);
 	if (!err) {
-		ret = tcf_idr_create(tn, index, est, a,
-				     &act_ctinfo_ops, bind, false, flags);
+		ret = tcf_idr_create_from_flags(tn, index, est, a,
+						&act_ctinfo_ops, bind, flags);
 		if (ret) {
 			tcf_idr_cleanup(tn, index);
 			return ret;
+1-3
net/sctp/diag.c
···
 	struct sctp_comm_param *commp = p;
 	struct sock *sk = ep->base.sk;
 	const struct inet_diag_req_v2 *r = commp->r;
-	struct sctp_association *assoc =
-		list_entry(ep->asocs.next, struct sctp_association, asocs);

 	/* find the ep only once through the transports by this condition */
-	if (tsp->asoc != assoc)
+	if (!list_is_first(&tsp->asoc->asocs, &ep->asocs))
 		return 0;

 	if (r->sdiag_family != AF_UNSPEC && sk->sk_family != r->sdiag_family)
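The net/sctp/diag.c hunk above replaces a hand-rolled "is this the first association" check (which dereferenced `ep->asocs.next` even when the list was empty) with `list_is_first()`. A self-contained sketch of the circular doubly linked list that check relies on, modeled on the kernel's `struct list_head` but not the actual kernel headers:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal circular doubly linked list in the style of the kernel's
 * list_head: an empty list is a node pointing at itself. */
struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *head)
{
	head->next = head;
	head->prev = head;
}

/* Insert @entry at the tail, i.e. just before @head. */
static void list_add_tail(struct list_head *entry, struct list_head *head)
{
	entry->prev = head->prev;
	entry->next = head;
	head->prev->next = entry;
	head->prev = entry;
}

/* True when @entry is the first element: its prev pointer is the list
 * head itself. No element is ever dereferenced, so this is safe to ask
 * about any linked entry regardless of list length. */
static bool list_is_first(const struct list_head *entry,
			  const struct list_head *head)
{
	return entry->prev == head;
}
```

The predicate needs only pointer comparisons on an entry already known to be linked, which is why it avoids the empty-list `list_entry()` hazard of the old code.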
+6-3
net/socket.c
···
 static void sock_recv_mark(struct msghdr *msg, struct sock *sk,
 			   struct sk_buff *skb)
 {
-	if (sock_flag(sk, SOCK_RCVMARK) && skb)
-		put_cmsg(msg, SOL_SOCKET, SO_MARK, sizeof(__u32),
-			 &skb->mark);
+	if (sock_flag(sk, SOCK_RCVMARK) && skb) {
+		/* We must use a bounce buffer for CONFIG_HARDENED_USERCOPY=y */
+		__u32 mark = skb->mark;
+
+		put_cmsg(msg, SOL_SOCKET, SO_MARK, sizeof(__u32), &mark);
+	}
 }

 void __sock_recv_cmsgs(struct msghdr *msg, struct sock *sk,
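The net/socket.c hunk above bounces `skb->mark` through a stack variable because hardened usercopy rejects copies whose source points into the middle of a slab object; a stack local is always an acceptable source. A userspace sketch of the pattern (the `fake_skb`/`fake_put_cmsg` names are illustrative stand-ins, not the real socket API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for struct sk_buff: only the mark field matters here. */
struct fake_skb {
	uint32_t mark;
	/* ... other fields a hardened copy must not spill into ... */
};

/* Stand-in for put_cmsg(): copies @len bytes from @data to @out,
 * as put_cmsg() would copy ancillary data toward userspace. */
static void fake_put_cmsg(void *out, const void *data, size_t len)
{
	memcpy(out, data, len);
}

/* The bounce-buffer pattern: copy the field into a stack variable
 * first, then hand the stack address to the copy routine, instead of
 * passing a pointer into the heap object directly. */
static uint32_t recv_mark(const struct fake_skb *skb)
{
	uint32_t mark = skb->mark;	/* bounce through the stack */
	uint32_t out = 0;

	fake_put_cmsg(&out, &mark, sizeof(mark));
	return out;
}
```

The value delivered is identical either way; only the source address of the copy changes, which is what the hardening check inspects.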
+2
net/tipc/socket.c
···
 	/* Send a 'SYN-' to destination */
 	m.msg_name = dest;
 	m.msg_namelen = destlen;
+	iov_iter_kvec(&m.msg_iter, ITER_SOURCE, NULL, 0, 0);

 	/* If connect is in non-blocking case, set MSG_DONTWAIT to
 	 * indicate send_msg() is never blocked.
···
 		__skb_queue_head(&new_sk->sk_receive_queue, buf);
 		skb_set_owner_r(buf, new_sk);
 	}
+	iov_iter_kvec(&m.msg_iter, ITER_SOURCE, NULL, 0, 0);
 	__tipc_sendstream(new_sock, &m, 0);
 	release_sock(new_sk);
 exit:
···

 struct page {};

-void __free_pages_core(struct page *page, unsigned int order)
-{
-}
-
 void memblock_free_pages(struct page *page, unsigned long pfn,
 			 unsigned int order)
 {
+127-1
tools/testing/selftests/net/fib_rule_tests.sh
···
 PAUSE_ON_FAIL=${PAUSE_ON_FAIL:=no}
 IP="ip -netns testns"
+IP_PEER="ip -netns peerns"

 RTABLE=100
+RTABLE_PEER=101
 GW_IP4=192.51.100.2
 SRC_IP=192.51.100.3
 GW_IP6=2001:db8:1::2
···
 DEV_ADDR=192.51.100.1
 DEV_ADDR6=2001:db8:1::1
 DEV=dummy0
-TESTS="fib_rule6 fib_rule4"
+TESTS="fib_rule6 fib_rule4 fib_rule6_connect fib_rule4_connect"
+
+SELFTEST_PATH=""

 log_test()
 {
···
 	echo "######################################################################"
 }

+check_nettest()
+{
+	if which nettest > /dev/null 2>&1; then
+		return 0
+	fi
+
+	# Add the selftest directory to PATH if not already done
+	if [ "${SELFTEST_PATH}" = "" ]; then
+		SELFTEST_PATH="$(dirname $0)"
+		PATH="${PATH}:${SELFTEST_PATH}"
+
+		# Now retry with the new path
+		if which nettest > /dev/null 2>&1; then
+			return 0
+		fi
+
+		if [ "${ret}" -eq 0 ]; then
+			ret="${ksft_skip}"
+		fi
+		echo "nettest not found (try 'make -C ${SELFTEST_PATH} nettest')"
+	fi
+
+	return 1
+}
+
 setup()
 {
 	set -e
···
 {
 	$IP link del dev dummy0 &> /dev/null
 	ip netns del testns
+}
+
+setup_peer()
+{
+	set -e
+
+	ip netns add peerns
+	$IP_PEER link set dev lo up
+
+	ip link add name veth0 netns testns type veth \
+		peer name veth1 netns peerns
+	$IP link set dev veth0 up
+	$IP_PEER link set dev veth1 up
+
+	$IP address add 192.0.2.10 peer 192.0.2.11/32 dev veth0
+	$IP_PEER address add 192.0.2.11 peer 192.0.2.10/32 dev veth1
+
+	$IP address add 2001:db8::10 peer 2001:db8::11/128 dev veth0 nodad
+	$IP_PEER address add 2001:db8::11 peer 2001:db8::10/128 dev veth1 nodad
+
+	$IP_PEER address add 198.51.100.11/32 dev lo
+	$IP route add table $RTABLE_PEER 198.51.100.11/32 via 192.0.2.11
+
+	$IP_PEER address add 2001:db8::1:11/128 dev lo
+	$IP route add table $RTABLE_PEER 2001:db8::1:11/128 via 2001:db8::11
+
+	set +e
+}
+
+cleanup_peer()
+{
+	$IP link del dev veth0
+	ip netns del peerns
 }

 fib_check_iproute_support()
···
 	fi
 }

+# Verify that the IPV6_TCLASS option of UDPv6 and TCPv6 sockets is properly
+# taken into account when connecting the socket and when sending packets.
+fib_rule6_connect_test()
+{
+	local dsfield
+
+	if ! check_nettest; then
+		echo "SKIP: Could not run test without nettest tool"
+		return
+	fi
+
+	setup_peer
+	$IP -6 rule add dsfield 0x04 table $RTABLE_PEER
+
+	# Combine the base DS Field value (0x04) with all possible ECN values
+	# (Not-ECT: 0, ECT(1): 1, ECT(0): 2, CE: 3).
+	# The ECN bits shouldn't influence the result of the test.
+	for dsfield in 0x04 0x05 0x06 0x07; do
+		nettest -q -6 -B -t 5 -N testns -O peerns -U -D \
+			-Q "${dsfield}" -l 2001:db8::1:11 -r 2001:db8::1:11
+		log_test $? 0 "rule6 dsfield udp connect (dsfield ${dsfield})"
+
+		nettest -q -6 -B -t 5 -N testns -O peerns -Q "${dsfield}" \
+			-l 2001:db8::1:11 -r 2001:db8::1:11
+		log_test $? 0 "rule6 dsfield tcp connect (dsfield ${dsfield})"
+	done
+
+	$IP -6 rule del dsfield 0x04 table $RTABLE_PEER
+	cleanup_peer
+}
+
 fib_rule4_del()
 {
 	$IP rule del $1
···
 	fi
 }

+# Verify that the IP_TOS option of UDPv4 and TCPv4 sockets is properly taken
+# into account when connecting the socket and when sending packets.
+fib_rule4_connect_test()
+{
+	local dsfield
+
+	if ! check_nettest; then
+		echo "SKIP: Could not run test without nettest tool"
+		return
+	fi
+
+	setup_peer
+	$IP -4 rule add dsfield 0x04 table $RTABLE_PEER
+
+	# Combine the base DS Field value (0x04) with all possible ECN values
+	# (Not-ECT: 0, ECT(1): 1, ECT(0): 2, CE: 3).
+	# The ECN bits shouldn't influence the result of the test.
+	for dsfield in 0x04 0x05 0x06 0x07; do
+		nettest -q -B -t 5 -N testns -O peerns -D -U -Q "${dsfield}" \
+			-l 198.51.100.11 -r 198.51.100.11
+		log_test $? 0 "rule4 dsfield udp connect (dsfield ${dsfield})"
+
+		nettest -q -B -t 5 -N testns -O peerns -Q "${dsfield}" \
+			-l 198.51.100.11 -r 198.51.100.11
+		log_test $? 0 "rule4 dsfield tcp connect (dsfield ${dsfield})"
+	done
+
+	$IP -4 rule del dsfield 0x04 table $RTABLE_PEER
+	cleanup_peer
+}
+
 run_fibrule_tests()
 {
 	log_section "IPv4 fib rule"
···
 	case $t in
 	fib_rule6_test|fib_rule6)	fib_rule6_test;;
 	fib_rule4_test|fib_rule4)	fib_rule4_test;;
+	fib_rule6_connect_test|fib_rule6_connect)	fib_rule6_connect_test;;
+	fib_rule4_connect_test|fib_rule4_connect)	fib_rule4_connect_test;;

 	help) echo "Test names: $TESTS"; exit 0;;
+50-1
tools/testing/selftests/net/nettest.c
···
 	int use_setsockopt;
 	int use_freebind;
 	int use_cmsg;
+	uint8_t dsfield;
 	const char *dev;
 	const char *server_dev;
 	int ifindex;
···
 	}

 	return rc;
+}
+
+static int set_dsfield(int sd, int version, int dsfield)
+{
+	if (!dsfield)
+		return 0;
+
+	switch (version) {
+	case AF_INET:
+		if (setsockopt(sd, SOL_IP, IP_TOS, &dsfield,
+			       sizeof(dsfield)) < 0) {
+			log_err_errno("setsockopt(IP_TOS)");
+			return -1;
+		}
+		break;
+
+	case AF_INET6:
+		if (setsockopt(sd, SOL_IPV6, IPV6_TCLASS, &dsfield,
+			       sizeof(dsfield)) < 0) {
+			log_err_errno("setsockopt(IPV6_TCLASS)");
+			return -1;
+		}
+		break;
+
+	default:
+		log_error("Invalid address family\n");
+		return -1;
+	}
+
+	return 0;
 }

 static int str_to_uint(const char *str, int min, int max, unsigned int *value)
···
 		       (char *)&one, sizeof(one)) < 0)
 		log_err_errno("Setting SO_BROADCAST error");

+	if (set_dsfield(sd, AF_INET, args->dsfield) != 0)
+		goto out_err;
+
 	if (args->dev && bind_to_device(sd, args->dev) != 0)
 		goto out_err;
 	else if (args->use_setsockopt &&
···
 		goto err;

 	if (set_reuseport(sd) != 0)
+		goto err;
+
+	if (set_dsfield(sd, args->version, args->dsfield) != 0)
 		goto err;

 	if (args->dev && bind_to_device(sd, args->dev) != 0)
···
 	if (set_reuseport(sd) != 0)
 		goto err;

+	if (set_dsfield(sd, args->version, args->dsfield) != 0)
+		goto err;
+
 	if (args->dev && bind_to_device(sd, args->dev) != 0)
 		goto err;
 	else if (args->use_setsockopt &&
···
 	return client_status;
 }

-#define GETOPT_STR "sr:l:c:p:t:g:P:DRn:M:X:m:d:I:BN:O:SUCi6xL:0:1:2:3:Fbqf"
+#define GETOPT_STR "sr:l:c:Q:p:t:g:P:DRn:M:X:m:d:I:BN:O:SUCi6xL:0:1:2:3:Fbqf"
 #define OPT_FORCE_BIND_KEY_IFINDEX 1001
 #define OPT_NO_BIND_KEY_IFINDEX 1002
···
 	"    -D|R            datagram (D) / raw (R) socket (default stream)\n"
 	"    -l addr         local address to bind to in server mode\n"
 	"    -c addr         local address to bind to in client mode\n"
+	"    -Q dsfield      DS Field value of the socket (the IP_TOS or\n"
+	"                    IPV6_TCLASS socket option)\n"
 	"    -x              configure XFRM policy on socket\n"
 	"\n"
 	"    -d dev          bind socket to given device name\n"
···
 	case 'c':
 		args.has_local_ip = 1;
 		args.client_local_addr_str = optarg;
+		break;
+	case 'Q':
+		if (str_to_uint(optarg, 0, 255, &tmp) != 0) {
+			fprintf(stderr, "Invalid DS Field\n");
+			return 1;
+		}
+		args.dsfield = tmp;
 		break;
 	case 'p':
 		if (str_to_uint(optarg, 1, 65535, &tmp) != 0) {