···
 Julien Thierry <julien.thierry.kdev@gmail.com> <julien.thierry@arm.com>
 Iskren Chernev <me@iskren.info> <iskren.chernev@gmail.com>
 Kalle Valo <kvalo@kernel.org> <kvalo@codeaurora.org>
+Kalle Valo <kvalo@kernel.org> <quic_kvalo@quicinc.com>
 Kalyan Thota <quic_kalyant@quicinc.com> <kalyan_t@codeaurora.org>
 Karthikeyan Periyasamy <quic_periyasa@quicinc.com> <periyasa@codeaurora.org>
 Kathiravan T <quic_kathirav@quicinc.com> <kathirav@codeaurora.org>
+1-1
Documentation/arch/arm64/gcs.rst
···
   shadow stacks rather than GCS.

 * Support for GCS is reported to userspace via HWCAP_GCS in the aux vector
-  AT_HWCAP2 entry.
+  AT_HWCAP entry.

 * GCS is enabled per thread. While there is support for disabling GCS
   at runtime this should be done with great care.
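The corrected sentence matters in practice: HWCAP_GCS sits in the first hwcap word, so userspace probes it through getauxval(AT_HWCAP) rather than AT_HWCAP2. A minimal user-space probe sketch; the fallback HWCAP_GCS value of bit 32 is an assumption taken from recent arm64 uapi headers, not something stated in the hunk above:

#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP_GCS
#define HWCAP_GCS	(1UL << 32)	/* assumed value; normally provided by arm64 uapi headers */
#endif

int main(void)
{
	/* GCS is reported in AT_HWCAP (not AT_HWCAP2), as the doc fix above states. */
	unsigned long hwcap = getauxval(AT_HWCAP);

	printf("GCS %s\n", (hwcap & HWCAP_GCS) ? "supported" : "not supported");
	return 0;
}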
···

 maintainers:
   - Taniya Das <quic_tdas@quicinc.com>
+  - Imran Shaik <quic_imrashai@quicinc.com>

 description: |
   Qualcomm camera clock control module provides the clocks, resets and power
   domains on SA8775p.

-  See also: include/dt-bindings/clock/qcom,sa8775p-camcc.h
+  See also:
+    include/dt-bindings/clock/qcom,qcs8300-camcc.h
+    include/dt-bindings/clock/qcom,sa8775p-camcc.h

 properties:
   compatible:
     enum:
+      - qcom,qcs8300-camcc
       - qcom,sa8775p-camcc

   clocks:
···
 title: Qualcomm Technologies ath10k wireless devices

 maintainers:
-  - Kalle Valo <kvalo@kernel.org>
   - Jeff Johnson <jjohnson@kernel.org>

 description:
···
 title: Qualcomm Technologies ath11k wireless devices (PCIe)

 maintainers:
-  - Kalle Valo <kvalo@kernel.org>
   - Jeff Johnson <jjohnson@kernel.org>

 description: |
···
 title: Qualcomm Technologies ath11k wireless devices

 maintainers:
-  - Kalle Valo <kvalo@kernel.org>
   - Jeff Johnson <jjohnson@kernel.org>

 description: |
···

 maintainers:
   - Jeff Johnson <jjohnson@kernel.org>
-  - Kalle Valo <kvalo@kernel.org>

 description: |
   Qualcomm Technologies IEEE 802.11be PCIe devices with WSI interface.
···

 maintainers:
   - Jeff Johnson <quic_jjohnson@quicinc.com>
-  - Kalle Valo <kvalo@kernel.org>

 description:
   Qualcomm Technologies IEEE 802.11be PCIe devices.
···
   Each sub-node is identified using the node's name, with valid values listed
   for each of the pmics below.

-  For mp5496, s1, s2
+  For mp5496, s1, s2, l2, l5

   For pm2250, s1, s2, s3, s4, l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11,
   l12, l13, l14, l15, l16, l17, l18, l19, l20, l21, l22
···
 F:	drivers/phy/qualcomm/phy-ath79-usb.c

 ATHEROS ATH GENERIC UTILITIES
-M:	Kalle Valo <kvalo@kernel.org>
 M:	Jeff Johnson <jjohnson@kernel.org>
 L:	linux-wireless@vger.kernel.org
 S:	Supported
···
 W:	https://ez.analog.com/linux-software-drivers
 F:	Documentation/devicetree/bindings/pwm/adi,axi-pwmgen.yaml
 F:	drivers/pwm/pwm-axi-pwmgen.c
-
-AXXIA I2C CONTROLLER
-M:	Krzysztof Adamski <krzysztof.adamski@nokia.com>
-L:	linux-i2c@vger.kernel.org
-S:	Maintained
-F:	Documentation/devicetree/bindings/i2c/i2c-axxia.txt
-F:	drivers/i2c/busses/i2c-axxia.c

 AZ6007 DVB DRIVER
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
···
 F:	rust/kernel/device_id.rs
 F:	rust/kernel/devres.rs
 F:	rust/kernel/driver.rs
+F:	rust/kernel/faux.rs
 F:	rust/kernel/platform.rs
 F:	samples/rust/rust_driver_platform.rs
+F:	samples/rust/rust_driver_faux.rs

 DRIVERS FOR OMAP ADAPTIVE VOLTAGE SCALING (AVS)
 M:	Nishanth Menon <nm@ti.com>
···
 F:	drivers/tty/hvc/

 I2C ACPI SUPPORT
-M:	Mika Westerberg <mika.westerberg@linux.intel.com>
+M:	Mika Westerberg <westeri@kernel.org>
 L:	linux-i2c@vger.kernel.org
 L:	linux-acpi@vger.kernel.org
 S:	Maintained
···
 X:	drivers/net/wireless/

 NETWORKING DRIVERS (WIRELESS)
-M:	Kalle Valo <kvalo@kernel.org>
+M:	Johannes Berg <johannes@sipsolutions.net>
 L:	linux-wireless@vger.kernel.org
 S:	Maintained
 W:	https://wireless.wiki.kernel.org/
···
 F:	include/linux/netlink.h
 F:	include/linux/netpoll.h
 F:	include/linux/rtnetlink.h
+F:	include/linux/sctp.h
 F:	include/linux/seq_file_net.h
 F:	include/linux/skbuff*
 F:	include/net/
···
 F:	include/uapi/linux/netlink.h
 F:	include/uapi/linux/netlink_diag.h
 F:	include/uapi/linux/rtnetlink.h
+F:	include/uapi/linux/sctp.h
 F:	lib/net_utils.c
 F:	lib/random32.c
 F:	net/
···
 F:	drivers/media/tuners/qt1010*

 QUALCOMM ATH12K WIRELESS DRIVER
-M:	Kalle Valo <kvalo@kernel.org>
 M:	Jeff Johnson <jjohnson@kernel.org>
 L:	ath12k@lists.infradead.org
 S:	Supported
···
 N:	ath12k

 QUALCOMM ATHEROS ATH10K WIRELESS DRIVER
-M:	Kalle Valo <kvalo@kernel.org>
 M:	Jeff Johnson <jjohnson@kernel.org>
 L:	ath10k@lists.infradead.org
 S:	Supported
···
 N:	ath10k

 QUALCOMM ATHEROS ATH11K WIRELESS DRIVER
-M:	Kalle Valo <kvalo@kernel.org>
 M:	Jeff Johnson <jjohnson@kernel.org>
 L:	ath11k@lists.infradead.org
 S:	Supported
···
 L:	dmaengine@vger.kernel.org
 S:	Supported
 F:	drivers/dma/qcom/hidma*
+
+QUALCOMM I2C QCOM GENI DRIVER
+M:	Mukesh Kumar Savaliya <quic_msavaliy@quicinc.com>
+M:	Viken Dadhaniya <quic_vdadhani@quicinc.com>
+L:	linux-i2c@vger.kernel.org
+L:	linux-arm-msm@vger.kernel.org
+S:	Maintained
+F:	Documentation/devicetree/bindings/i2c/qcom,i2c-geni-qcom.yaml
+F:	drivers/i2c/busses/i2c-qcom-geni.c

 QUALCOMM I2C CCI DRIVER
 M:	Loic Poulain <loic.poulain@linaro.org>
+5-10
Makefile
···
 VERSION = 6
 PATCHLEVEL = 14
 SUBLEVEL = 0
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc3
 NAME = Baby Opossum Posse

 # *DOCUMENTATION*
···
 endif

 # Align the bit size of userspace programs with the kernel
-KBUILD_USERCFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CFLAGS))
-KBUILD_USERLDFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CFLAGS))
+KBUILD_USERCFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
+KBUILD_USERLDFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))

 # make the checker run with the right architecture
 CHECKFLAGS += --arch=$(ARCH)
···
 	$(Q)$(MAKE) -sC $(srctree)/tools/bpf/resolve_btfids O=$(resolve_btfids_O) clean
 endif

-# Clear a bunch of variables before executing the submake
-ifeq ($(quiet),silent_)
-tools_silent=s
-endif
-
 tools/: FORCE
 	$(Q)mkdir -p $(objtree)/tools
-	$(Q)$(MAKE) LDFLAGS= MAKEFLAGS="$(tools_silent) $(filter --j% -j,$(MAKEFLAGS))" O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/
+	$(Q)$(MAKE) LDFLAGS= O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/

 tools/%: FORCE
 	$(Q)mkdir -p $(objtree)/tools
-	$(Q)$(MAKE) LDFLAGS= MAKEFLAGS="$(tools_silent) $(filter --j% -j,$(MAKEFLAGS))" O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/ $*
+	$(Q)$(MAKE) LDFLAGS= O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/ $*

 # ---------------------------------------------------------------------------
 # Kernel selftest
+1-1
arch/alpha/include/asm/hwrpb.h
···
 	/* virtual->physical map */
 	unsigned long map_entries;
 	unsigned long map_pages;
-	struct vf_map_struct map[1];
+	struct vf_map_struct map[];
 };

 struct memclust_struct {
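The map[1] to map[] change switches the trailing member to a C99 flexible array member, which lets the compiler and bounds checkers see the real extent of the allocation instead of a fake one-element array. A stand-alone sketch of the usual allocation pattern with such a member (hypothetical struct names, not the HWRPB layout itself):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct entry { unsigned long pfn; };

struct table {
	unsigned long nr_entries;
	struct entry map[];		/* flexible array member, no [1] placeholder */
};

static struct table *table_alloc(unsigned long n)
{
	/* One allocation covers the header plus n trailing entries. */
	struct table *t = malloc(sizeof(*t) + n * sizeof(t->map[0]));

	if (t) {
		t->nr_entries = n;
		memset(t->map, 0, n * sizeof(t->map[0]));
	}
	return t;
}

int main(void)
{
	struct table *t = table_alloc(4);

	if (!t)
		return 1;
	printf("%lu entries\n", t->nr_entries);
	free(t);
	return 0;
}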
+2
arch/alpha/include/uapi/asm/ptrace.h
···
 	unsigned long trap_a0;
 	unsigned long trap_a1;
 	unsigned long trap_a2;
+/* This makes the stack 16-byte aligned as GCC expects */
+	unsigned long __pad0;
 /* These are saved by PAL-code: */
 	unsigned long ps;
 	unsigned long pc;
···
 #include <linux/log2.h>
 #include <linux/dma-map-ops.h>
 #include <linux/iommu-helper.h>
+#include <linux/string_choices.h>

 #include <asm/io.h>
 #include <asm/hwrpb.h>
···
 	/* If both conditions above are met, we are fine. */
 	DBGA("pci_dac_dma_supported %s from %ps\n",
-	     ok ? "yes" : "no", __builtin_return_address(0));
+	     str_yes_no(ok), __builtin_return_address(0));

 	return ok;
 }
+1-1
arch/alpha/kernel/traps.c
···
 static int unauser_reg_offsets[32] = {
 	R(r0), R(r1), R(r2), R(r3), R(r4), R(r5), R(r6), R(r7), R(r8),
 	/* r9 ... r15 are stored in front of regs. */
-	-56, -48, -40, -32, -24, -16, -8,
+	-64, -56, -48, -40, -32, -24, -16, /* padding at -8 */
 	R(r16), R(r17), R(r18),
 	R(r19), R(r20), R(r21), R(r22), R(r23), R(r24), R(r25), R(r26),
 	R(r27), R(r28), R(gp),
···
 			     __cpacr_to_cptr_set(clr, set));\
 } while (0)

-static __always_inline void kvm_write_cptr_el2(u64 val)
-{
-	if (has_vhe() || has_hvhe())
-		write_sysreg(val, cpacr_el1);
-	else
-		write_sysreg(val, cptr_el2);
-}
-
-/* Resets the value of cptr_el2 when returning to the host. */
-static __always_inline void __kvm_reset_cptr_el2(struct kvm *kvm)
-{
-	u64 val;
-
-	if (has_vhe()) {
-		val = (CPACR_EL1_FPEN | CPACR_EL1_ZEN_EL1EN);
-		if (cpus_have_final_cap(ARM64_SME))
-			val |= CPACR_EL1_SMEN_EL1EN;
-	} else if (has_hvhe()) {
-		val = CPACR_EL1_FPEN;
-
-		if (!kvm_has_sve(kvm) || !guest_owns_fp_regs())
-			val |= CPACR_EL1_ZEN;
-		if (cpus_have_final_cap(ARM64_SME))
-			val |= CPACR_EL1_SMEN;
-	} else {
-		val = CPTR_NVHE_EL2_RES1;
-
-		if (kvm_has_sve(kvm) && guest_owns_fp_regs())
-			val |= CPTR_EL2_TZ;
-		if (!cpus_have_final_cap(ARM64_SME))
-			val |= CPTR_EL2_TSM;
-	}
-
-	kvm_write_cptr_el2(val);
-}
-
-#ifdef __KVM_NVHE_HYPERVISOR__
-#define kvm_reset_cptr_el2(v)	__kvm_reset_cptr_el2(kern_hyp_va((v)->kvm))
-#else
-#define kvm_reset_cptr_el2(v)	__kvm_reset_cptr_el2((v)->kvm)
-#endif
-
 /*
  * Returns a 'sanitised' view of CPTR_EL2, translating from nVHE to the VHE
  * format if E2H isn't set.
+5-17
arch/arm64/include/asm/kvm_host.h
···
 static inline void *pop_hyp_memcache(struct kvm_hyp_memcache *mc,
 				     void *(*to_va)(phys_addr_t phys))
 {
-	phys_addr_t *p = to_va(mc->head);
+	phys_addr_t *p = to_va(mc->head & PAGE_MASK);

 	if (!mc->nr_pages)
 		return NULL;
···
 struct kvm_host_data {
 #define KVM_HOST_DATA_FLAG_HAS_SPE			0
 #define KVM_HOST_DATA_FLAG_HAS_TRBE			1
-#define KVM_HOST_DATA_FLAG_HOST_SVE_ENABLED		2
-#define KVM_HOST_DATA_FLAG_HOST_SME_ENABLED		3
 #define KVM_HOST_DATA_FLAG_TRBE_ENABLED			4
 #define KVM_HOST_DATA_FLAG_EL1_TRACING_CONFIGURED	5
 	unsigned long flags;
···
 	struct kvm_cpu_context host_ctxt;

 	/*
-	 * All pointers in this union are hyp VA.
+	 * Hyp VA.
 	 * sve_state is only used in pKVM and if system_supports_sve().
 	 */
-	union {
-		struct user_fpsimd_state *fpsimd_state;
-		struct cpu_sve_state *sve_state;
-	};
+	struct cpu_sve_state *sve_state;

-	union {
-		/* HYP VA pointer to the host storage for FPMR */
-		u64	*fpmr_ptr;
-		/*
-		 * Used by pKVM only, as it needs to provide storage
-		 * for the host
-		 */
-		u64	fpmr;
-	};
+	/* Used by pKVM only. */
+	u64	fpmr;

 	/* Ownership of the FP regs */
 	enum {
···
 }

 /*
- * Called by KVM when entering the guest.
- */
-void fpsimd_kvm_prepare(void)
-{
-	if (!system_supports_sve())
-		return;
-
-	/*
-	 * KVM does not save host SVE state since we can only enter
-	 * the guest from a syscall so the ABI means that only the
-	 * non-saved SVE state needs to be saved. If we have left
-	 * SVE enabled for performance reasons then update the task
-	 * state to be FPSIMD only.
-	 */
-	get_cpu_fpsimd_context();
-
-	if (test_and_clear_thread_flag(TIF_SVE)) {
-		sve_to_fpsimd(current);
-		current->thread.fp_type = FP_STATE_FPSIMD;
-	}
-
-	put_cpu_fpsimd_context();
-}
-
-/*
  * Associate current's FPSIMD context with this cpu
  * The caller must have ownership of the cpu FPSIMD context before calling
  * this function.
+10-12
arch/arm64/kernel/topology.c
···
 	int cpu;

 	/* We are already set since the last insmod of cpufreq driver */
-	if (unlikely(cpumask_subset(cpus, amu_fie_cpus)))
+	if (cpumask_available(amu_fie_cpus) &&
+	    unlikely(cpumask_subset(cpus, amu_fie_cpus)))
 		return;

-	for_each_cpu(cpu, cpus) {
+	for_each_cpu(cpu, cpus)
 		if (!freq_counters_valid(cpu))
 			return;
+
+	if (!cpumask_available(amu_fie_cpus) &&
+	    !zalloc_cpumask_var(&amu_fie_cpus, GFP_KERNEL)) {
+		WARN_ONCE(1, "Failed to allocate FIE cpumask for CPUs[%*pbl]\n",
+			  cpumask_pr_args(cpus));
+		return;
 	}

 	cpumask_or(amu_fie_cpus, amu_fie_cpus, cpus);
···

 static int __init init_amu_fie(void)
 {
-	int ret;
-
-	if (!zalloc_cpumask_var(&amu_fie_cpus, GFP_KERNEL))
-		return -ENOMEM;
-
-	ret = cpufreq_register_notifier(&init_amu_fie_notifier,
+	return cpufreq_register_notifier(&init_amu_fie_notifier,
 					CPUFREQ_POLICY_NOTIFIER);
-	if (ret)
-		free_cpumask_var(amu_fie_cpus);
-
-	return ret;
 }
 core_initcall(init_amu_fie);
···
 	if (!system_supports_fpsimd())
 		return;

-	fpsimd_kvm_prepare();
-
 	/*
-	 * We will check TIF_FOREIGN_FPSTATE just before entering the
-	 * guest in kvm_arch_vcpu_ctxflush_fp() and override this to
-	 * FP_STATE_FREE if the flag set.
+	 * Ensure that any host FPSIMD/SVE/SME state is saved and unbound such
+	 * that the host kernel is responsible for restoring this state upon
+	 * return to userspace, and the hyp code doesn't need to save anything.
+	 *
+	 * When the host may use SME, fpsimd_save_and_flush_cpu_state() ensures
+	 * that PSTATE.{SM,ZA} == {0,0}.
 	 */
-	*host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
-	*host_data_ptr(fpsimd_state) = kern_hyp_va(&current->thread.uw.fpsimd_state);
-	*host_data_ptr(fpmr_ptr) = kern_hyp_va(&current->thread.uw.fpmr);
+	fpsimd_save_and_flush_cpu_state();
+	*host_data_ptr(fp_owner) = FP_STATE_FREE;

-	host_data_clear_flag(HOST_SVE_ENABLED);
-	if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
-		host_data_set_flag(HOST_SVE_ENABLED);
-
-	if (system_supports_sme()) {
-		host_data_clear_flag(HOST_SME_ENABLED);
-		if (read_sysreg(cpacr_el1) & CPACR_EL1_SMEN_EL0EN)
-			host_data_set_flag(HOST_SME_ENABLED);
-
-		/*
-		 * If PSTATE.SM is enabled then save any pending FP
-		 * state and disable PSTATE.SM. If we leave PSTATE.SM
-		 * enabled and the guest does not enable SME via
-		 * CPACR_EL1.SMEN then operations that should be valid
-		 * may generate SME traps from EL1 to EL1 which we
-		 * can't intercept and which would confuse the guest.
-		 *
-		 * Do the same for PSTATE.ZA in the case where there
-		 * is state in the registers which has not already
-		 * been saved, this is very unlikely to happen.
-		 */
-		if (read_sysreg_s(SYS_SVCR) & (SVCR_SM_MASK | SVCR_ZA_MASK)) {
-			*host_data_ptr(fp_owner) = FP_STATE_FREE;
-			fpsimd_save_and_flush_cpu_state();
-		}
-	}
-
-	/*
-	 * If normal guests gain SME support, maintain this behavior for pKVM
-	 * guests, which don't support SME.
-	 */
-	WARN_ON(is_protected_kvm_enabled() && system_supports_sme() &&
-		read_sysreg_s(SYS_SVCR));
+	WARN_ON_ONCE(system_supports_sme() && read_sysreg_s(SYS_SVCR));
 }

 /*
···
 	local_irq_save(flags);

-	/*
-	 * If we have VHE then the Hyp code will reset CPACR_EL1 to
-	 * the default value and we need to reenable SME.
-	 */
-	if (has_vhe() && system_supports_sme()) {
-		/* Also restore EL0 state seen on entry */
-		if (host_data_test_flag(HOST_SME_ENABLED))
-			sysreg_clear_set(CPACR_EL1, 0, CPACR_EL1_SMEN);
-		else
-			sysreg_clear_set(CPACR_EL1,
-					 CPACR_EL1_SMEN_EL0EN,
-					 CPACR_EL1_SMEN_EL1EN);
-		isb();
-	}
-
 	if (guest_owns_fp_regs()) {
-		if (vcpu_has_sve(vcpu)) {
-			u64 zcr = read_sysreg_el1(SYS_ZCR);
-
-			/*
-			 * If the vCPU is in the hyp context then ZCR_EL1 is
-			 * loaded with its vEL2 counterpart.
-			 */
-			__vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)) = zcr;
-
-			/*
-			 * Restore the VL that was saved when bound to the CPU,
-			 * which is the maximum VL for the guest. Because the
-			 * layout of the data when saving the sve state depends
-			 * on the VL, we need to use a consistent (i.e., the
-			 * maximum) VL.
-			 * Note that this means that at guest exit ZCR_EL1 is
-			 * not necessarily the same as on guest entry.
-			 *
-			 * ZCR_EL2 holds the guest hypervisor's VL when running
-			 * a nested guest, which could be smaller than the
-			 * max for the vCPU. Similar to above, we first need to
-			 * switch to a VL consistent with the layout of the
-			 * vCPU's SVE state. KVM support for NV implies VHE, so
-			 * using the ZCR_EL1 alias is safe.
-			 */
-			if (!has_vhe() || (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)))
-				sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1,
-						       SYS_ZCR_EL1);
-		}
-
 		/*
 		 * Flush (save and invalidate) the fpsimd/sve state so that if
 		 * the host tries to use fpsimd/sve, it's not using stale data
···
 		 * when needed.
 		 */
 		fpsimd_save_and_flush_cpu_state();
-	} else if (has_vhe() && system_supports_sve()) {
-		/*
-		 * The FPSIMD/SVE state in the CPU has not been touched, and we
-		 * have SVE (and VHE): CPACR_EL1 (alias CPTR_EL2) has been
-		 * reset by kvm_reset_cptr_el2() in the Hyp code, disabling SVE
-		 * for EL0. To avoid spurious traps, restore the trap state
-		 * seen by kvm_arch_vcpu_load_fp():
-		 */
-		if (host_data_test_flag(HOST_SVE_ENABLED))
-			sysreg_clear_set(CPACR_EL1, 0, CPACR_EL1_ZEN_EL0EN);
-		else
-			sysreg_clear_set(CPACR_EL1, CPACR_EL1_ZEN_EL0EN, 0);
 	}

 	local_irq_restore(flags);
+5
arch/arm64/kvm/hyp/entry.S
···
 alternative_else_nop_endif
 	mrs	x1, isr_el1
 	cbz	x1,  1f
+
+	// Ensure that __guest_enter() always provides a context
+	// synchronization event so that callers don't need ISBs for anything
+	// that would usually be synchonized by the ERET.
+	isb
 	mov	x0, #ARM_EXCEPTION_IRQ
 	ret

+111-37
arch/arm64/kvm/hyp/include/hyp/switch.h
···
 	return __get_fault_info(vcpu->arch.fault.esr_el2, &vcpu->arch.fault);
 }

-static bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code)
+static inline bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
 	arm64_mops_reset_regs(vcpu_gp_regs(vcpu), vcpu->arch.fault.esr_el2);
···
 			     true);
 }

-static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu);
+static inline void fpsimd_lazy_switch_to_guest(struct kvm_vcpu *vcpu)
+{
+	u64 zcr_el1, zcr_el2;
+
+	if (!guest_owns_fp_regs())
+		return;
+
+	if (vcpu_has_sve(vcpu)) {
+		/* A guest hypervisor may restrict the effective max VL. */
+		if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu))
+			zcr_el2 = __vcpu_sys_reg(vcpu, ZCR_EL2);
+		else
+			zcr_el2 = vcpu_sve_max_vq(vcpu) - 1;
+
+		write_sysreg_el2(zcr_el2, SYS_ZCR);
+
+		zcr_el1 = __vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu));
+		write_sysreg_el1(zcr_el1, SYS_ZCR);
+	}
+}
+
+static inline void fpsimd_lazy_switch_to_host(struct kvm_vcpu *vcpu)
+{
+	u64 zcr_el1, zcr_el2;
+
+	if (!guest_owns_fp_regs())
+		return;
+
+	/*
+	 * When the guest owns the FP regs, we know that guest+hyp traps for
+	 * any FPSIMD/SVE/SME features exposed to the guest have been disabled
+	 * by either fpsimd_lazy_switch_to_guest() or kvm_hyp_handle_fpsimd()
+	 * prior to __guest_entry(). As __guest_entry() guarantees a context
+	 * synchronization event, we don't need an ISB here to avoid taking
+	 * traps for anything that was exposed to the guest.
+	 */
+	if (vcpu_has_sve(vcpu)) {
+		zcr_el1 = read_sysreg_el1(SYS_ZCR);
+		__vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)) = zcr_el1;
+
+		/*
+		 * The guest's state is always saved using the guest's max VL.
+		 * Ensure that the host has the guest's max VL active such that
+		 * the host can save the guest's state lazily, but don't
+		 * artificially restrict the host to the guest's max VL.
+		 */
+		if (has_vhe()) {
+			zcr_el2 = vcpu_sve_max_vq(vcpu) - 1;
+			write_sysreg_el2(zcr_el2, SYS_ZCR);
+		} else {
+			zcr_el2 = sve_vq_from_vl(kvm_host_sve_max_vl) - 1;
+			write_sysreg_el2(zcr_el2, SYS_ZCR);
+
+			zcr_el1 = vcpu_sve_max_vq(vcpu) - 1;
+			write_sysreg_el1(zcr_el1, SYS_ZCR);
+		}
+	}
+}
+
+static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * Non-protected kvm relies on the host restoring its sve state.
+	 * Protected kvm restores the host's sve state as not to reveal that
+	 * fpsimd was used by a guest nor leak upper sve bits.
+	 */
+	if (system_supports_sve()) {
+		__hyp_sve_save_host();
+
+		/* Re-enable SVE traps if not supported for the guest vcpu. */
+		if (!vcpu_has_sve(vcpu))
+			cpacr_clear_set(CPACR_EL1_ZEN, 0);
+
+	} else {
+		__fpsimd_save_state(host_data_ptr(host_ctxt.fp_regs));
+	}
+
+	if (kvm_has_fpmr(kern_hyp_va(vcpu->kvm)))
+		*host_data_ptr(fpmr) = read_sysreg_s(SYS_FPMR);
+}
+

 /*
  * We trap the first access to the FP/SIMD to save the host context and
···
 * If FP/SIMD is not implemented, handle the trap and inject an undefined
 * instruction exception to the guest. Similarly for trapped SVE accesses.
 */
-static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
+static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	bool sve_guest;
 	u8 esr_ec;
···
 	isb();

 	/* Write out the host state if it's in the registers */
-	if (host_owns_fp_regs())
+	if (is_protected_kvm_enabled() && host_owns_fp_regs())
 		kvm_hyp_save_fpsimd_host(vcpu);

 	/* Restore the guest state */
···
 	return true;
 }

+/* Open-coded version of timer_get_offset() to allow for kern_hyp_va() */
+static inline u64 hyp_timer_get_offset(struct arch_timer_context *ctxt)
+{
+	u64 offset = 0;
+
+	if (ctxt->offset.vm_offset)
+		offset += *kern_hyp_va(ctxt->offset.vm_offset);
+	if (ctxt->offset.vcpu_offset)
+		offset += *kern_hyp_va(ctxt->offset.vcpu_offset);
+
+	return offset;
+}
+
 static inline u64 compute_counter_value(struct arch_timer_context *ctxt)
 {
-	return arch_timer_read_cntpct_el0() - timer_get_offset(ctxt);
+	return arch_timer_read_cntpct_el0() - hyp_timer_get_offset(ctxt);
 }

 static bool kvm_handle_cntxct(struct kvm_vcpu *vcpu)
···
 	return true;
 }

-static bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
+static inline bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
 	    handle_tx2_tvm(vcpu))
···
 	return false;
 }

-static bool kvm_hyp_handle_cp15_32(struct kvm_vcpu *vcpu, u64 *exit_code)
+static inline bool kvm_hyp_handle_cp15_32(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	if (static_branch_unlikely(&vgic_v3_cpuif_trap) &&
 	    __vgic_v3_perform_cpuif_access(vcpu) == 1)
···
 	return false;
 }

-static bool kvm_hyp_handle_memory_fault(struct kvm_vcpu *vcpu, u64 *exit_code)
+static inline bool kvm_hyp_handle_memory_fault(struct kvm_vcpu *vcpu,
+					       u64 *exit_code)
 {
 	if (!__populate_fault_info(vcpu))
 		return true;

 	return false;
 }
-static bool kvm_hyp_handle_iabt_low(struct kvm_vcpu *vcpu, u64 *exit_code)
-	__alias(kvm_hyp_handle_memory_fault);
-static bool kvm_hyp_handle_watchpt_low(struct kvm_vcpu *vcpu, u64 *exit_code)
-	__alias(kvm_hyp_handle_memory_fault);
+#define kvm_hyp_handle_iabt_low		kvm_hyp_handle_memory_fault
+#define kvm_hyp_handle_watchpt_low	kvm_hyp_handle_memory_fault

-static bool kvm_hyp_handle_dabt_low(struct kvm_vcpu *vcpu, u64 *exit_code)
+static inline bool kvm_hyp_handle_dabt_low(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	if (kvm_hyp_handle_memory_fault(vcpu, exit_code))
 		return true;
···

 typedef bool (*exit_handler_fn)(struct kvm_vcpu *, u64 *);

-static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu);
-
-static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code);
-
 /*
  * Allow the hypervisor to handle the exit with an exit handler if it has one.
  *
  * Returns true if the hypervisor handled the exit, and control should go back
  * to the guest, or false if it hasn't.
  */
-static inline bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+static inline bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu, u64 *exit_code,
+				       const exit_handler_fn *handlers)
 {
-	const exit_handler_fn *handlers = kvm_get_exit_handler_array(vcpu);
-	exit_handler_fn fn;
-
-	fn = handlers[kvm_vcpu_trap_get_class(vcpu)];
-
+	exit_handler_fn fn = handlers[kvm_vcpu_trap_get_class(vcpu)];
 	if (fn)
 		return fn(vcpu, exit_code);
···
 * the guest, false when we should restore the host state and return to the
 * main run loop.
 */
-static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+static inline bool __fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code,
+				      const exit_handler_fn *handlers)
 {
-	/*
-	 * Save PSTATE early so that we can evaluate the vcpu mode
-	 * early on.
-	 */
-	synchronize_vcpu_pstate(vcpu, exit_code);
-
-	/*
-	 * Check whether we want to repaint the state one way or
-	 * another.
-	 */
-	early_exit_filter(vcpu, exit_code);
-
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
 		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
···
 		goto exit;

 	/* Check if there's an exit handler and allow it to handle the exit. */
-	if (kvm_hyp_handle_exit(vcpu, exit_code))
+	if (kvm_hyp_handle_exit(vcpu, exit_code, handlers))
 		goto guest;
 exit:
 	/* Return to the host kernel and handle the exit */
+7-8
arch/arm64/kvm/hyp/nvhe/hyp-main.c
···
  */

 #include <hyp/adjust_pc.h>
+#include <hyp/switch.h>

 #include <asm/pgtable-types.h>
 #include <asm/kvm_asm.h>
···
 	if (system_supports_sve())
 		__hyp_sve_restore_host();
 	else
-		__fpsimd_restore_state(*host_data_ptr(fpsimd_state));
+		__fpsimd_restore_state(host_data_ptr(host_ctxt.fp_regs));

 	if (has_fpmr)
 		write_sysreg_s(*host_data_ptr(fpmr), SYS_FPMR);
···

 		sync_hyp_vcpu(hyp_vcpu);
 	} else {
+		struct kvm_vcpu *vcpu = kern_hyp_va(host_vcpu);
+
 		/* The host is fully trusted, run its vCPU directly. */
-		ret = __kvm_vcpu_run(kern_hyp_va(host_vcpu));
+		fpsimd_lazy_switch_to_guest(vcpu);
+		ret = __kvm_vcpu_run(vcpu);
+		fpsimd_lazy_switch_to_host(vcpu);
 	}
 out:
 	cpu_reg(host_ctxt, 1) = ret;
···
 		break;
 	case ESR_ELx_EC_SMC64:
 		handle_host_smc(host_ctxt);
-		break;
-	case ESR_ELx_EC_SVE:
-		cpacr_clear_set(0, CPACR_EL1_ZEN);
-		isb();
-		sve_cond_update_zcr_vq(sve_vq_from_vl(kvm_host_sve_max_vl) - 1,
-				       SYS_ZCR_EL2);
 		break;
 	case ESR_ELx_EC_IABT_LOW:
 	case ESR_ELx_EC_DABT_LOW:
+41-35
arch/arm64/kvm/hyp/nvhe/mem_protect.c
···
 	ret = kvm_pgtable_get_leaf(&vm->pgt, ipa, &pte, &level);
 	if (ret)
 		return ret;
-	if (level != KVM_PGTABLE_LAST_LEVEL)
-		return -E2BIG;
 	if (!kvm_pte_valid(pte))
 		return -ENOENT;
+	if (level != KVM_PGTABLE_LAST_LEVEL)
+		return -E2BIG;

 	state = guest_get_page_state(pte, ipa);
 	if (state != PKVM_PAGE_SHARED_BORROWED)
···
 	return ret;
 }

-int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot)
+static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
 {
-	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
-	u64 ipa = hyp_pfn_to_phys(gfn);
 	u64 phys;
 	int ret;

-	if (prot & ~KVM_PGTABLE_PROT_RWX)
-		return -EINVAL;
+	if (!IS_ENABLED(CONFIG_NVHE_EL2_DEBUG))
+		return;

 	host_lock_component();
 	guest_lock_component(vm);

 	ret = __check_host_shared_guest(vm, &phys, ipa);
-	if (!ret)
-		ret = kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0);

 	guest_unlock_component(vm);
 	host_unlock_component();
+
+	WARN_ON(ret && ret != -ENOENT);
+}
+
+int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	int ret;
+
+	if (pkvm_hyp_vm_is_protected(vm))
+		return -EPERM;
+
+	if (prot & ~KVM_PGTABLE_PROT_RWX)
+		return -EINVAL;
+
+	assert_host_shared_guest(vm, ipa);
+	guest_lock_component(vm);
+	ret = kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0);
+	guest_unlock_component(vm);

 	return ret;
 }
···
 int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)
 {
 	u64 ipa = hyp_pfn_to_phys(gfn);
-	u64 phys;
 	int ret;

-	host_lock_component();
+	if (pkvm_hyp_vm_is_protected(vm))
+		return -EPERM;
+
+	assert_host_shared_guest(vm, ipa);
 	guest_lock_component(vm);
-
-	ret = __check_host_shared_guest(vm, &phys, ipa);
-	if (!ret)
-		ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE);
-
+	ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE);
 	guest_unlock_component(vm);
-	host_unlock_component();

 	return ret;
 }
···
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm)
 {
 	u64 ipa = hyp_pfn_to_phys(gfn);
-	u64 phys;
 	int ret;

-	host_lock_component();
+	if (pkvm_hyp_vm_is_protected(vm))
+		return -EPERM;
+
+	assert_host_shared_guest(vm, ipa);
 	guest_lock_component(vm);
-
-	ret = __check_host_shared_guest(vm, &phys, ipa);
-	if (!ret)
-		ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
-
+	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
 	guest_unlock_component(vm);
-	host_unlock_component();

 	return ret;
 }
···
 {
 	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
 	u64 ipa = hyp_pfn_to_phys(gfn);
-	u64 phys;
-	int ret;

-	host_lock_component();
+	if (pkvm_hyp_vm_is_protected(vm))
+		return -EPERM;
+
+	assert_host_shared_guest(vm, ipa);
 	guest_lock_component(vm);
-
-	ret = __check_host_shared_guest(vm, &phys, ipa);
-	if (!ret)
-		kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0);
-
+	kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0);
 	guest_unlock_component(vm);
-	host_unlock_component();

-	return ret;
+	return 0;
 }
+45-44
arch/arm64/kvm/hyp/nvhe/switch.c
···
 {
 	u64 val = CPTR_EL2_TAM;	/* Same bit irrespective of E2H */

+	if (!guest_owns_fp_regs())
+		__activate_traps_fpsimd32(vcpu);
+
 	if (has_hvhe()) {
 		val |= CPACR_EL1_TTA;
···
 			if (vcpu_has_sve(vcpu))
 				val |= CPACR_EL1_ZEN;
 		}
+
+		write_sysreg(val, cpacr_el1);
 	} else {
 		val |= CPTR_EL2_TTA | CPTR_NVHE_EL2_RES1;
···

 		if (!guest_owns_fp_regs())
 			val |= CPTR_EL2_TFP;
+
+		write_sysreg(val, cptr_el2);
 	}
+}

-	if (!guest_owns_fp_regs())
-		__activate_traps_fpsimd32(vcpu);
+static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
+{
+	if (has_hvhe()) {
+		u64 val = CPACR_EL1_FPEN;

-	kvm_write_cptr_el2(val);
+		if (cpus_have_final_cap(ARM64_SVE))
+			val |= CPACR_EL1_ZEN;
+		if (cpus_have_final_cap(ARM64_SME))
+			val |= CPACR_EL1_SMEN;
+
+		write_sysreg(val, cpacr_el1);
+	} else {
+		u64 val = CPTR_NVHE_EL2_RES1;
+
+		if (!cpus_have_final_cap(ARM64_SVE))
+			val |= CPTR_EL2_TZ;
+		if (!cpus_have_final_cap(ARM64_SME))
+			val |= CPTR_EL2_TSM;
+
+		write_sysreg(val, cptr_el2);
+	}
 }

 static void __activate_traps(struct kvm_vcpu *vcpu)
···

 	write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);

-	kvm_reset_cptr_el2(vcpu);
+	__deactivate_cptr_traps(vcpu);
 	write_sysreg(__kvm_hyp_host_vector, vbar_el2);
 }
···
 			kvm_handle_pvm_sysreg(vcpu, exit_code));
 }

-static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
-{
-	/*
-	 * Non-protected kvm relies on the host restoring its sve state.
-	 * Protected kvm restores the host's sve state as not to reveal that
-	 * fpsimd was used by a guest nor leak upper sve bits.
-	 */
-	if (unlikely(is_protected_kvm_enabled() && system_supports_sve())) {
-		__hyp_sve_save_host();
-
-		/* Re-enable SVE traps if not supported for the guest vcpu. */
-		if (!vcpu_has_sve(vcpu))
-			cpacr_clear_set(CPACR_EL1_ZEN, 0);
-
-	} else {
-		__fpsimd_save_state(*host_data_ptr(fpsimd_state));
-	}
-
-	if (kvm_has_fpmr(kern_hyp_va(vcpu->kvm))) {
-		u64 val = read_sysreg_s(SYS_FPMR);
-
-		if (unlikely(is_protected_kvm_enabled()))
-			*host_data_ptr(fpmr) = val;
-		else
-			**host_data_ptr(fpmr_ptr) = val;
-	}
-}
-
 static const exit_handler_fn hyp_exit_handlers[] = {
 	[0 ... ESR_ELx_EC_MAX]		= NULL,
 	[ESR_ELx_EC_CP15_32]		= kvm_hyp_handle_cp15_32,
···
 	return hyp_exit_handlers;
 }

-/*
- * Some guests (e.g., protected VMs) are not be allowed to run in AArch32.
- * The ARMv8 architecture does not give the hypervisor a mechanism to prevent a
- * guest from dropping to AArch32 EL0 if implemented by the CPU. If the
- * hypervisor spots a guest in such a state ensure it is handled, and don't
- * trust the host to spot or fix it. The check below is based on the one in
- * kvm_arch_vcpu_ioctl_run().
- *
- * Returns false if the guest ran in AArch32 when it shouldn't have, and
- * thus should exit to the host, or true if a the guest run loop can continue.
- */
-static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
+static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
+	const exit_handler_fn *handlers = kvm_get_exit_handler_array(vcpu);
+
+	synchronize_vcpu_pstate(vcpu, exit_code);
+
+	/*
+	 * Some guests (e.g., protected VMs) are not be allowed to run in
+	 * AArch32. The ARMv8 architecture does not give the hypervisor a
+	 * mechanism to prevent a guest from dropping to AArch32 EL0 if
+	 * implemented by the CPU. If the hypervisor spots a guest in such a
+	 * state ensure it is handled, and don't trust the host to spot or fix
+	 * it. The check below is based on the one in
+	 * kvm_arch_vcpu_ioctl_run().
+	 */
 	if (unlikely(vcpu_is_protected(vcpu) && vcpu_mode_is_32bit(vcpu))) {
 		/*
 		 * As we have caught the guest red-handed, decide that it isn't
···
 		*exit_code &= BIT(ARM_EXIT_WITH_SERROR_BIT);
 		*exit_code |= ARM_EXCEPTION_IL;
 	}
+
+	return __fixup_guest_exit(vcpu, exit_code, handlers);
 }

 /* Switch to the guest for legacy non-VHE systems */
+19-14
arch/arm64/kvm/hyp/vhe/switch.c
···
 	write_sysreg(val, cpacr_el1);
 }

+static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
+{
+	u64 val = CPACR_EL1_FPEN | CPACR_EL1_ZEN_EL1EN;
+
+	if (cpus_have_final_cap(ARM64_SME))
+		val |= CPACR_EL1_SMEN_EL1EN;
+
+	write_sysreg(val, cpacr_el1);
+}
+
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
 	u64 val;
···
 	 */
 	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));

-	kvm_reset_cptr_el2(vcpu);
+	__deactivate_cptr_traps(vcpu);

 	if (!arm64_kernel_unmapped_at_el0())
 		host_vectors = __this_cpu_read(this_cpu_vector);
···
 	return true;
 }

-static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
-{
-	__fpsimd_save_state(*host_data_ptr(fpsimd_state));
-
-	if (kvm_has_fpmr(vcpu->kvm))
-		**host_data_ptr(fpmr_ptr) = read_sysreg_s(SYS_FPMR);
-}
-
 static bool kvm_hyp_handle_tlbi_el2(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	int ret = -EINVAL;
···
 	[ESR_ELx_EC_MOPS]		= kvm_hyp_handle_mops,
 };

-static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu)
+static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
-	return hyp_exit_handlers;
-}
+	synchronize_vcpu_pstate(vcpu, exit_code);

-static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
-{
 	/*
 	 * If we were in HYP context on entry, adjust the PSTATE view
 	 * so that the usual helpers work correctly.
···
 		*vcpu_cpsr(vcpu) &= ~(PSR_MODE_MASK | PSR_MODE32_BIT);
 		*vcpu_cpsr(vcpu) |= mode;
 	}
+
+	return __fixup_guest_exit(vcpu, exit_code, hyp_exit_handlers);
 }

 /* Switch to the guest for VHE systems running in EL2 */
···
 	guest_ctxt = &vcpu->arch.ctxt;

 	sysreg_save_host_state_vhe(host_ctxt);
+
+	fpsimd_lazy_switch_to_guest(vcpu);

 	/*
 	 * Note that ARM erratum 1165522 requires us to configure both stage 1
···
 	sysreg_save_guest_state_vhe(guest_ctxt);

 	__deactivate_traps(vcpu);
+
+	fpsimd_lazy_switch_to_host(vcpu);

 	sysreg_restore_host_state_vhe(host_ctxt);

+37-37
arch/arm64/kvm/vgic/vgic-init.c
···
 *
 * CPU Interface:
 *
- * - kvm_vgic_vcpu_init(): initialization of static data that
- *   doesn't depend on any sizing information or emulation type. No
- *   allocation is allowed there.
+ * - kvm_vgic_vcpu_init(): initialization of static data that doesn't depend
+ *   on any sizing information. Private interrupts are allocated if not
+ *   already allocated at vgic-creation time.
 */

 /* EARLY INIT */
···
 }

 /* CREATION */
+
+static int vgic_allocate_private_irqs_locked(struct kvm_vcpu *vcpu, u32 type);

 /**
 * kvm_vgic_create: triggered by the instantiation of the VGIC device by
···

 	if (atomic_read(&kvm->online_vcpus) > kvm->max_vcpus) {
 		ret = -E2BIG;
+		goto out_unlock;
+	}
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		ret = vgic_allocate_private_irqs_locked(vcpu, type);
+		if (ret)
+			break;
+	}
+
+	if (ret) {
+		kvm_for_each_vcpu(i, vcpu, kvm) {
+			struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+			kfree(vgic_cpu->private_irqs);
+			vgic_cpu->private_irqs = NULL;
+		}
+
 		goto out_unlock;
 	}
···
 	return 0;
 }

-static int vgic_allocate_private_irqs_locked(struct kvm_vcpu *vcpu)
+static int vgic_allocate_private_irqs_locked(struct kvm_vcpu *vcpu, u32 type)
 {
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
 	int i;
···
 			/* PPIs */
 			irq->config = VGIC_CONFIG_LEVEL;
 		}
+
+		switch (type) {
+		case KVM_DEV_TYPE_ARM_VGIC_V3:
+			irq->group = 1;
+			irq->mpidr = kvm_vcpu_get_mpidr_aff(vcpu);
+			break;
+		case KVM_DEV_TYPE_ARM_VGIC_V2:
+			irq->group = 0;
+			irq->targets = BIT(vcpu->vcpu_id);
+			break;
+		}
 	}

 	return 0;
 }

-static int vgic_allocate_private_irqs(struct kvm_vcpu *vcpu)
+static int vgic_allocate_private_irqs(struct kvm_vcpu *vcpu, u32 type)
 {
 	int ret;

 	mutex_lock(&vcpu->kvm->arch.config_lock);
-	ret = vgic_allocate_private_irqs_locked(vcpu);
+	ret = vgic_allocate_private_irqs_locked(vcpu, type);
 	mutex_unlock(&vcpu->kvm->arch.config_lock);

 	return ret;
···
 	if (!irqchip_in_kernel(vcpu->kvm))
 		return 0;

-	ret = vgic_allocate_private_irqs(vcpu);
+	ret = vgic_allocate_private_irqs(vcpu, dist->vgic_model);
 	if (ret)
 		return ret;
···
 {
 	struct vgic_dist *dist = &kvm->arch.vgic;
 	struct kvm_vcpu *vcpu;
-	int ret = 0, i;
+	int ret = 0;
 	unsigned long idx;

 	lockdep_assert_held(&kvm->arch.config_lock);
···
 	ret = kvm_vgic_dist_init(kvm, dist->nr_spis);
 	if (ret)
 		goto out;
-
-	/* Initialize groups on CPUs created before the VGIC type was known */
-	kvm_for_each_vcpu(idx, vcpu, kvm) {
-		ret = vgic_allocate_private_irqs_locked(vcpu);
-		if (ret)
-			goto out;
-
-		for (i = 0; i < VGIC_NR_PRIVATE_IRQS; i++) {
-			struct vgic_irq *irq = vgic_get_vcpu_irq(vcpu, i);
-
-			switch (dist->vgic_model) {
-			case KVM_DEV_TYPE_ARM_VGIC_V3:
-				irq->group = 1;
-				irq->mpidr = kvm_vcpu_get_mpidr_aff(vcpu);
-				break;
-			case KVM_DEV_TYPE_ARM_VGIC_V2:
-				irq->group = 0;
-				irq->targets = 1U << idx;
-				break;
-			default:
-				ret = -EINVAL;
-			}
-
-			vgic_put_irq(kvm, irq);
-
-			if (ret)
-				goto out;
-		}
-	}

 	/*
 	 * If we have GICv4.1 enabled, unconditionally request enable the
+7
arch/arm64/mm/trans_pgd.c
···
 	unsigned long next;
 	unsigned long addr = start;

+	if (pgd_none(READ_ONCE(*dst_pgdp))) {
+		dst_p4dp = trans_alloc(info);
+		if (!dst_p4dp)
+			return -ENOMEM;
+		pgd_populate(NULL, dst_pgdp, dst_p4dp);
+	}
+
 	dst_p4dp = p4d_offset(dst_pgdp, start);
 	src_p4dp = p4d_offset(src_pgdp, start);
 	do {
-21
arch/loongarch/include/asm/cpu-info.h
···
 #define cpu_family_string()	__cpu_family[raw_smp_processor_id()]
 #define cpu_full_name_string()	__cpu_full_name[raw_smp_processor_id()]

-struct seq_file;
-struct notifier_block;
-
-extern int register_proc_cpuinfo_notifier(struct notifier_block *nb);
-extern int proc_cpuinfo_notifier_call_chain(unsigned long val, void *v);
-
-#define proc_cpuinfo_notifier(fn, pri)					\
-({									\
-	static struct notifier_block fn##_nb = {			\
-		.notifier_call = fn,					\
-		.priority = pri						\
-	};								\
-									\
-	register_proc_cpuinfo_notifier(&fn##_nb);			\
-})
-
-struct proc_cpuinfo_notifier_args {
-	struct seq_file *m;
-	unsigned long n;
-};
-
 static inline bool cpus_are_siblings(int cpua, int cpub)
 {
 	struct cpuinfo_loongarch *infoa = &cpu_data[cpua];
+2
arch/loongarch/include/asm/smp.h
···
 #define SMP_IRQ_WORK		BIT(ACTION_IRQ_WORK)
 #define SMP_CLEAR_VECTOR	BIT(ACTION_CLEAR_VECTOR)

+struct seq_file;
+
 struct secondary_data {
 	unsigned long stack;
 	unsigned long thread_info;
+15-13
arch/loongarch/kernel/genex.S
···
 	.align	5
 SYM_FUNC_START(__arch_cpu_idle)
-	/* start of rollback region */
-	LONG_L	t0, tp, TI_FLAGS
-	nop
-	andi	t0, t0, _TIF_NEED_RESCHED
-	bnez	t0, 1f
-	nop
-	nop
-	nop
+	/* start of idle interrupt region */
+	ori	t0, zero, CSR_CRMD_IE
+	/* idle instruction needs irq enabled */
+	csrxchg	t0, t0, LOONGARCH_CSR_CRMD
+	/*
+	 * If an interrupt lands here; between enabling interrupts above and
+	 * going idle on the next instruction, we must *NOT* go idle since the
+	 * interrupt could have set TIF_NEED_RESCHED or caused an timer to need
+	 * reprogramming. Fall through -- see handle_vint() below -- and have
+	 * the idle loop take care of things.
+	 */
 	idle	0
-	/* end of rollback region */
+	/* end of idle interrupt region */
 1:	jr	ra
 SYM_FUNC_END(__arch_cpu_idle)
···
 	UNWIND_HINT_UNDEFINED
 	BACKUP_T0T1
 	SAVE_ALL
-	la_abs	t1, __arch_cpu_idle
+	la_abs	t1, 1b
 	LONG_L	t0, sp, PT_ERA
-	/* 32 byte rollback region */
-	ori	t0, t0, 0x1f
-	xori	t0, t0, 0x1f
+	/* 3 instructions idle interrupt region */
+	ori	t0, t0, 0b1100
 	bne	t0, t1, 1f
 	LONG_S	t0, sp, PT_ERA
 1:	move	a0, sp
···
 	 * TOE=0: Trap on Exception.
 	 * TIT=0: Trap on Timer.
 	 */
-	if (env & CSR_GCFG_GCIP_ALL)
+	if (env & CSR_GCFG_GCIP_SECURE)
 		gcfg |= CSR_GCFG_GCI_SECURE;
-	if (env & CSR_GCFG_MATC_ROOT)
+	if (env & CSR_GCFG_MATP_ROOT)
 		gcfg |= CSR_GCFG_MATC_ROOT;

 	write_csr_gcfg(gcfg);
+1-1
arch/loongarch/kvm/switch.S
···
 	 * Guest CRMD comes from separate GCSR_CRMD register
 	 */
 	ori	t0, zero, CSR_PRMD_PIE
-	csrxchg	t0, t0, LOONGARCH_CSR_PRMD
+	csrwr	t0, LOONGARCH_CSR_PRMD

 	/* Set PVM bit to setup ertn to guest context */
 	ori	t0, zero, CSR_GSTAT_PVM
-3
arch/loongarch/kvm/vcpu.c
···

 	/* Restore timer state regardless */
 	kvm_restore_timer(vcpu);
-
-	/* Control guest page CCA attribute */
-	change_csr_gcfg(CSR_GCFG_MATC_MASK, CSR_GCFG_MATC_ROOT);
 	kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu);

 	/* Restore hardware PMU CSRs */
···
 */
 struct pt_regs {
 #ifdef CONFIG_32BIT
-	/* Pad bytes for argument save space on the stack. */
-	unsigned long pad0[8];
+	/* Saved syscall stack arguments; entries 0-3 unused. */
+	unsigned long args[8];
 #endif

 	/* Saved main processor registers. */
+8-24
arch/mips/include/asm/syscall.h
···
 static inline void mips_get_syscall_arg(unsigned long *arg,
 	struct task_struct *task, struct pt_regs *regs, unsigned int n)
 {
-	unsigned long usp __maybe_unused = regs->regs[29];
-
+#ifdef CONFIG_32BIT
 	switch (n) {
 	case 0: case 1: case 2: case 3:
 		*arg = regs->regs[4 + n];
-
 		return;
-
-#ifdef CONFIG_32BIT
 	case 4: case 5: case 6: case 7:
-		get_user(*arg, (int *)usp + n);
+		*arg = regs->args[n];
 		return;
-#endif
-
-#ifdef CONFIG_64BIT
-	case 4: case 5: case 6: case 7:
-#ifdef CONFIG_MIPS32_O32
-		if (test_tsk_thread_flag(task, TIF_32BIT_REGS))
-			get_user(*arg, (int *)usp + n);
-		else
-#endif
-			*arg = regs->regs[4 + n];
-
-		return;
-#endif
-
-	default:
-		BUG();
 	}
-
-	unreachable();
+#else
+	*arg = regs->regs[4 + n];
+	if ((IS_ENABLED(CONFIG_MIPS32_O32) &&
+	     test_tsk_thread_flag(task, TIF_32BIT_REGS)))
+		*arg = (unsigned int)*arg;
+#endif
 }

 static inline long syscall_get_error(struct task_struct *task,
···
 load_a7: user_lw(t8, 28(t0))		# argument #8 from usp
 loads_done:

-	sw	t5, 16(sp)		# argument #5 to ksp
-	sw	t6, 20(sp)		# argument #6 to ksp
-	sw	t7, 24(sp)		# argument #7 to ksp
-	sw	t8, 28(sp)		# argument #8 to ksp
+	sw	t5, PT_ARG4(sp)		# argument #5 to ksp
+	sw	t6, PT_ARG5(sp)		# argument #6 to ksp
+	sw	t7, PT_ARG6(sp)		# argument #7 to ksp
+	sw	t8, PT_ARG7(sp)		# argument #8 to ksp
 	.set	pop

 	.section __ex_table,"a"
-1
arch/s390/configs/debug_defconfig
···
 CONFIG_IMA_DEFAULT_HASH_SHA256=y
 CONFIG_IMA_WRITE_POLICY=y
 CONFIG_IMA_APPRAISE=y
-CONFIG_LSM="yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor"
 CONFIG_BUG_ON_DATA_CORRUPTION=y
 CONFIG_CRYPTO_USER=m
 # CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
···
 # CONFIG_INOTIFY_USER is not set
 # CONFIG_MISC_FILESYSTEMS is not set
 # CONFIG_NETWORK_FILESYSTEMS is not set
-CONFIG_LSM="yama,loadpin,safesetid,integrity"
 # CONFIG_ZLIB_DFLTCC is not set
 CONFIG_XZ_DEC_MICROLZMA=y
 CONFIG_PRINTK_TIME=y
+5-1
arch/s390/include/asm/bitops.h
···
 	unsigned long mask;
 	int cc;

-	if (__builtin_constant_p(nr)) {
+	/*
+	 * With CONFIG_PROFILE_ALL_BRANCHES enabled gcc fails to
+	 * handle __builtin_constant_p() in some cases.
+	 */
+	if (!IS_ENABLED(CONFIG_PROFILE_ALL_BRANCHES) && __builtin_constant_p(nr)) {
 		addr = (const volatile unsigned char *)ptr;
 		addr += (nr ^ (BITS_PER_LONG - BITS_PER_BYTE)) / BITS_PER_BYTE;
 		mask = 1UL << (nr & (BITS_PER_BYTE - 1));
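The guard works because __builtin_constant_p() only returns 1 when the compiler can prove, at that point, that the argument is a compile-time constant; instrumentation such as branch profiling can defeat that proof. A small user-space illustration of the builtin itself (not the kernel macro environment), as a hedged sketch:

#include <stdio.h>

static int nr_is_constant(int nr)
{
	/* Evaluated per call after inlining/propagation; 0 below because
	 * nr arrives through a volatile, run-time value. */
	return __builtin_constant_p(nr);
}

int main(void)
{
	volatile int runtime_nr = 5;

	printf("literal 5 constant?  %d\n", __builtin_constant_p(5));
	printf("runtime nr constant? %d\n", nr_is_constant(runtime_nr));
	return 0;
}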
+20
arch/s390/pci/pci_bus.c
···
 	return rc;
 }

+static bool zpci_bus_is_isolated_vf(struct zpci_bus *zbus, struct zpci_dev *zdev)
+{
+	struct pci_dev *pdev;
+
+	pdev = zpci_iov_find_parent_pf(zbus, zdev);
+	if (!pdev)
+		return true;
+	pci_dev_put(pdev);
+	return false;
+}
+
 int zpci_bus_device_register(struct zpci_dev *zdev, struct pci_ops *ops)
 {
 	bool topo_is_tid = zdev->tid_avail;
···

 	topo = topo_is_tid ? zdev->tid : zdev->pchid;
 	zbus = zpci_bus_get(topo, topo_is_tid);
+	/*
+	 * An isolated VF gets its own domain/bus even if there exists
+	 * a matching domain/bus already
+	 */
+	if (zbus && zpci_bus_is_isolated_vf(zbus, zdev)) {
+		zpci_bus_put(zbus);
+		zbus = NULL;
+	}
+
 	if (!zbus) {
 		zbus = zpci_bus_alloc(topo, topo_is_tid);
 		if (!zbus)
+42-14
arch/s390/pci/pci_iov.c
···
 	return 0;
 }

-int zpci_iov_setup_virtfn(struct zpci_bus *zbus, struct pci_dev *virtfn, int vfn)
+/**
+ * zpci_iov_find_parent_pf - Find the parent PF, if any, of the given function
+ * @zbus: The bus that the PCI function is on, or would be added on
+ * @zdev: The PCI function
+ *
+ * Finds the parent PF, if it exists and is configured, of the given PCI function
+ * and increments its refcount. Th PF is searched for on the provided bus so the
+ * caller has to ensure that this is the correct bus to search. This function may
+ * be used before adding the PCI function to a zbus.
+ *
+ * Return: Pointer to the struct pci_dev of the parent PF or NULL if it not
+ *	   found. If the function is not a VF or has no RequesterID information,
+ *	   NULL is returned as well.
+ */
+struct pci_dev *zpci_iov_find_parent_pf(struct zpci_bus *zbus, struct zpci_dev *zdev)
 {
-	int i, cand_devfn;
-	struct zpci_dev *zdev;
+	int i, vfid, devfn, cand_devfn;
 	struct pci_dev *pdev;
-	int vfid = vfn - 1; /* Linux' vfid's start at 0 vfn at 1*/
-	int rc = 0;

 	if (!zbus->multifunction)
-		return 0;
-
-	/* If the parent PF for the given VF is also configured in the
+		return NULL;
+	/* Non-VFs and VFs without RID available don't have a parent */
+	if (!zdev->vfn || !zdev->rid_available)
+		return NULL;
+	/* Linux vfid starts at 0 vfn at 1 */
+	vfid = zdev->vfn - 1;
+	devfn = zdev->rid & ZPCI_RID_MASK_DEVFN;
+	/*
+	 * If the parent PF for the given VF is also configured in the
 	 * instance, it must be on the same zbus.
 	 * We can then identify the parent PF by checking what
 	 * devfn the VF would have if it belonged to that PF using the PF's
···
 			if (!pdev)
 				continue;
 			cand_devfn = pci_iov_virtfn_devfn(pdev, vfid);
-			if (cand_devfn == virtfn->devfn) {
-				rc = zpci_iov_link_virtfn(pdev, virtfn, vfid);
-				/* balance pci_get_slot() */
-				pci_dev_put(pdev);
-				break;
-			}
+			if (cand_devfn == devfn)
+				return pdev;
 			/* balance pci_get_slot() */
 			pci_dev_put(pdev);
 		}
+	}
+	return NULL;
+}
+
+int zpci_iov_setup_virtfn(struct zpci_bus *zbus, struct pci_dev *virtfn, int vfn)
+{
+	struct zpci_dev *zdev = to_zpci(virtfn);
+	struct pci_dev *pdev_pf;
+	int rc = 0;
+
+	pdev_pf = zpci_iov_find_parent_pf(zbus, zdev);
+	if (pdev_pf) {
+		/* Linux' vfids start at 0 while zdev->vfn starts at 1 */
+		rc = zpci_iov_link_virtfn(pdev_pf, virtfn, zdev->vfn - 1);
+		pci_dev_put(pdev_pf);
 	}
 	return rc;
 }
···

 static int stub_exe_fd;

+#ifndef CLOSE_RANGE_CLOEXEC
+#define CLOSE_RANGE_CLOEXEC (1U << 2)
+#endif
+
 static int userspace_tramp(void *stack)
 {
 	char *const argv[] = { "uml-userspace", NULL };
···
 	init_data.stub_data_fd = phys_mapping(uml_to_phys(stack), &offset);
 	init_data.stub_data_offset = MMAP_OFFSET(offset);

-	/* Set CLOEXEC on all FDs and then unset on all memory related FDs */
-	close_range(0, ~0U, CLOSE_RANGE_CLOEXEC);
+	/*
+	 * Avoid leaking unneeded FDs to the stub by setting CLOEXEC on all FDs
+	 * and then unsetting it on all memory related FDs.
+	 * This is not strictly necessary from a safety perspective.
+	 */
+	syscall(__NR_close_range, 0, ~0U, CLOSE_RANGE_CLOEXEC);

 	fcntl(init_data.stub_data_fd, F_SETFD, 0);
 	for (iomem = iomem_regions; iomem; iomem = iomem->next)
···
 	if (ret != sizeof(init_data))
 		exit(4);

-	execveat(stub_exe_fd, "", argv, NULL, AT_EMPTY_PATH);
+	/* Raw execveat for compatibility with older libc versions */
+	syscall(__NR_execveat, stub_exe_fd, (unsigned long)"",
+		(unsigned long)argv, NULL, AT_EMPTY_PATH);

 	exit(5);
 }
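Invoking close_range() and execveat() through syscall(2) sidesteps libc wrappers that older glibc releases don't provide, which is the point of the hunk above. The same pattern in a stand-alone program, as a sketch (assumes kernel headers new enough to define __NR_close_range):

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef CLOSE_RANGE_CLOEXEC
#define CLOSE_RANGE_CLOEXEC (1U << 2)
#endif

int main(void)
{
	/* Mark every fd >= 3 close-on-exec without needing a libc wrapper. */
	long ret = syscall(__NR_close_range, 3, ~0U, CLOSE_RANGE_CLOEXEC);

	if (ret)
		perror("close_range");
	return ret ? 1 : 0;
}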
+2-1
arch/x86/Kconfig
···
 	depends on CPU_SUP_AMD && X86_64
 	default y
 	help
-	  Compile the kernel with support for the retbleed=ibpb mitigation.
+	  Compile the kernel with support for the retbleed=ibpb and
+	  spec_rstack_overflow={ibpb,ibpb-vmexit} mitigations.

 config MITIGATION_IBRS_ENTRY
 	bool "Enable IBRS on kernel entry"
+14-19
arch/x86/events/intel/core.c
···

 static void update_pmu_cap(struct x86_hybrid_pmu *pmu)
 {
-	unsigned int sub_bitmaps, eax, ebx, ecx, edx;
+	unsigned int cntr, fixed_cntr, ecx, edx;
+	union cpuid35_eax eax;
+	union cpuid35_ebx ebx;

-	cpuid(ARCH_PERFMON_EXT_LEAF, &sub_bitmaps, &ebx, &ecx, &edx);
+	cpuid(ARCH_PERFMON_EXT_LEAF, &eax.full, &ebx.full, &ecx, &edx);

-	if (ebx & ARCH_PERFMON_EXT_UMASK2)
+	if (ebx.split.umask2)
 		pmu->config_mask |= ARCH_PERFMON_EVENTSEL_UMASK2;
-	if (ebx & ARCH_PERFMON_EXT_EQ)
+	if (ebx.split.eq)
 		pmu->config_mask |= ARCH_PERFMON_EVENTSEL_EQ;

-	if (sub_bitmaps & ARCH_PERFMON_NUM_COUNTER_LEAF_BIT) {
+	if (eax.split.cntr_subleaf) {
 		cpuid_count(ARCH_PERFMON_EXT_LEAF, ARCH_PERFMON_NUM_COUNTER_LEAF,
-			    &eax, &ebx, &ecx, &edx);
-		pmu->cntr_mask64 = eax;
-		pmu->fixed_cntr_mask64 = ebx;
+			    &cntr, &fixed_cntr, &ecx, &edx);
+		pmu->cntr_mask64 = cntr;
+		pmu->fixed_cntr_mask64 = fixed_cntr;
 	}

 	if (!intel_pmu_broken_perf_cap()) {
···
 		pmu->intel_ctrl |= 1ULL << GLOBAL_CTRL_EN_PERF_METRICS;
 	else
 		pmu->intel_ctrl &= ~(1ULL << GLOBAL_CTRL_EN_PERF_METRICS);
-
-	if (pmu->intel_cap.pebs_output_pt_available)
-		pmu->pmu.capabilities |= PERF_PMU_CAP_AUX_OUTPUT;
-	else
-		pmu->pmu.capabilities &= ~PERF_PMU_CAP_AUX_OUTPUT;

 	intel_pmu_check_event_constraints(pmu->event_constraints,
 					  pmu->cntr_mask64,
···

 	pr_info("%s PMU driver: ", pmu->name);

-	if (pmu->intel_cap.pebs_output_pt_available)
-		pr_cont("PEBS-via-PT ");
-
 	pr_cont("\n");

 	x86_pmu_show_pmu_cap(&pmu->pmu);
···

 	init_debug_store_on_cpu(cpu);
 	/*
-	 * Deal with CPUs that don't clear their LBRs on power-up.
+	 * Deal with CPUs that don't clear their LBRs on power-up, and that may
+	 * even boot with LBRs enabled.
 	 */
+	if (!static_cpu_has(X86_FEATURE_ARCH_LBR) && x86_pmu.lbr_nr)
+		msr_clear_bit(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR_BIT);
 	intel_pmu_lbr_reset();

 	cpuc->lbr_sel = NULL;
···
 		pmu->intel_cap.capabilities = x86_pmu.intel_cap.capabilities;
 		if (pmu->pmu_type & hybrid_small_tiny) {
 			pmu->intel_cap.perf_metrics = 0;
-			pmu->intel_cap.pebs_output_pt_available = 1;
 			pmu->mid_ack = true;
 		} else if (pmu->pmu_type & hybrid_big) {
 			pmu->intel_cap.perf_metrics = 1;
-			pmu->intel_cap.pebs_output_pt_available = 0;
 			pmu->late_ack = true;
 		}
 	}
+9-1
arch/x86/events/intel/ds.c
···25782578 }25792579 pr_cont("PEBS fmt4%c%s, ", pebs_type, pebs_qual);2580258025812581- if (!is_hybrid() && x86_pmu.intel_cap.pebs_output_pt_available) {25812581+ /*25822582+ * The PEBS-via-PT is not supported on hybrid platforms,25832583+ * because not all CPUs of a hybrid machine support it.25842584+ * The global x86_pmu.intel_cap, which only contains the25852585+ * common capabilities, is used to check the availability25862586+ * of the feature. The per-PMU pebs_output_pt_available25872587+ * in a hybrid machine should be ignored.25882588+ */25892589+ if (x86_pmu.intel_cap.pebs_output_pt_available) {25822590 pr_cont("PEBS-via-PT, ");25832591 x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_AUX_OUTPUT;25842592 }
+4-8
arch/x86/events/rapl.c
···370370 unsigned int rapl_pmu_idx;371371 struct rapl_pmus *rapl_pmus;372372373373+ /* only look at RAPL events */374374+ if (event->attr.type != event->pmu->type)375375+ return -ENOENT;376376+373377 /* unsupported modes and filters */374378 if (event->attr.sample_period) /* no sampling */375379 return -EINVAL;···391387 rapl_pmus_scope = rapl_pmus->pmu.scope;392388393389 if (rapl_pmus_scope == PERF_PMU_SCOPE_PKG || rapl_pmus_scope == PERF_PMU_SCOPE_DIE) {394394- /* only look at RAPL package events */395395- if (event->attr.type != rapl_pmus_pkg->pmu.type)396396- return -ENOENT;397397-398390 cfg = array_index_nospec((long)cfg, NR_RAPL_PKG_DOMAINS + 1);399391 if (!cfg || cfg >= NR_RAPL_PKG_DOMAINS + 1)400392 return -EINVAL;···398398 bit = cfg - 1;399399 event->hw.event_base = rapl_model->rapl_pkg_msrs[bit].msr;400400 } else if (rapl_pmus_scope == PERF_PMU_SCOPE_CORE) {401401- /* only look at RAPL core events */402402- if (event->attr.type != rapl_pmus_core->pmu.type)403403- return -ENOENT;404404-405401 cfg = array_index_nospec((long)cfg, NR_RAPL_CORE_DOMAINS + 1);406402 if (!cfg || cfg >= NR_RAPL_PKG_DOMAINS + 1)407403 return -EINVAL;
···188188 * detection/enumeration details:189189 */190190#define ARCH_PERFMON_EXT_LEAF 0x00000023191191-#define ARCH_PERFMON_EXT_UMASK2 0x1192192-#define ARCH_PERFMON_EXT_EQ 0x2193193-#define ARCH_PERFMON_NUM_COUNTER_LEAF_BIT 0x1194191#define ARCH_PERFMON_NUM_COUNTER_LEAF 0x1192192+193193+union cpuid35_eax {194194+ struct {195195+ unsigned int leaf0:1;196196+ /* Counters Sub-Leaf */197197+ unsigned int cntr_subleaf:1;198198+ /* Auto Counter Reload Sub-Leaf */199199+ unsigned int acr_subleaf:1;200200+ /* Events Sub-Leaf */201201+ unsigned int events_subleaf:1;202202+ unsigned int reserved:28;203203+ } split;204204+ unsigned int full;205205+};206206+207207+union cpuid35_ebx {208208+ struct {209209+ /* UnitMask2 Supported */210210+ unsigned int umask2:1;211211+ /* EQ-bit Supported */212212+ unsigned int eq:1;213213+ unsigned int reserved:30;214214+ } split;215215+ unsigned int full;216216+};195217196218/*197219 * Intel Architectural LBR CPUID detection/enumeration details:
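The unions above replace open-coded mask tests for CPUID leaf 0x23. A hedged user-space sketch of the same decoding style, mirroring the field layout from the patch (the union and program names here are ours, not kernel code):

#include <cpuid.h>
#include <stdio.h>

/* Field layout mirrors union cpuid35_eax from the patch. */
union leaf23_eax {
	struct {
		unsigned int leaf0:1;
		unsigned int cntr_subleaf:1;
		unsigned int acr_subleaf:1;
		unsigned int events_subleaf:1;
		unsigned int reserved:28;
	} split;
	unsigned int full;
};

int main(void)
{
	unsigned int eax, ebx, ecx, edx;
	union leaf23_eax leaf;

	/* Returns 0 when the CPU's maximum leaf is below 0x23. */
	if (!__get_cpuid_count(0x23, 0, &eax, &ebx, &ecx, &edx))
		return 1;

	leaf.full = eax;
	printf("counters sub-leaf present: %u\n", leaf.split.cntr_subleaf);
	return 0;
}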
+2
arch/x86/include/asm/sev.h
···531531532532#ifdef CONFIG_KVM_AMD_SEV533533bool snp_probe_rmptable_info(void);534534+int snp_rmptable_init(void);534535int snp_lookup_rmpentry(u64 pfn, bool *assigned, int *level);535536void snp_dump_hva_rmpentry(unsigned long address);536537int psmash(u64 pfn);···542541void snp_fixup_e820_tables(void);543542#else544543static inline bool snp_probe_rmptable_info(void) { return false; }544544+static inline int snp_rmptable_init(void) { return -ENOSYS; }545545static inline int snp_lookup_rmpentry(u64 pfn, bool *assigned, int *level) { return -ENODEV; }546546static inline void snp_dump_hva_rmpentry(unsigned long address) {}547547static inline int psmash(u64 pfn) { return -ENODEV; }
+14-7
arch/x86/kernel/cpu/bugs.c
···1115111511161116 case RETBLEED_MITIGATION_IBPB:11171117 setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);11181118+ setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);11191119+ mitigate_smt = true;1118112011191121 /*11201122 * IBPB on entry already obviates the need for···11251123 */11261124 setup_clear_cpu_cap(X86_FEATURE_UNRET);11271125 setup_clear_cpu_cap(X86_FEATURE_RETHUNK);11281128-11291129- setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);11301130- mitigate_smt = true;1131112611321127 /*11331128 * There is no need for RSB filling: entry_ibpb() ensures···26452646 if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {26462647 if (has_microcode) {26472648 setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);26492649+ setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);26482650 srso_mitigation = SRSO_MITIGATION_IBPB;2649265126502652 /*···26552655 */26562656 setup_clear_cpu_cap(X86_FEATURE_UNRET);26572657 setup_clear_cpu_cap(X86_FEATURE_RETHUNK);26582658+26592659+ /*26602660+ * There is no need for RSB filling: entry_ibpb() ensures26612661+ * all predictions, including the RSB, are invalidated,26622662+ * regardless of IBPB implementation.26632663+ */26642664+ setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);26582665 }26592666 } else {26602667 pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");···2670266326712664ibpb_on_vmexit:26722665 case SRSO_CMD_IBPB_ON_VMEXIT:26732673- if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {26742674- if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {26662666+ if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {26672667+ if (has_microcode) {26752668 setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);26762669 srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;26772670···26832676 setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);26842677 }26852678 } else {26862686- pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");26872687- }26792679+ pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");26802680+ }26882681 break;26892682 default:26902683 break;
+5-1
arch/x86/kvm/hyperv.c
···22262226 u32 vector;22272227 bool all_cpus;2228222822292229+ if (!lapic_in_kernel(vcpu))22302230+ return HV_STATUS_INVALID_HYPERCALL_INPUT;22312231+22292232 if (hc->code == HVCALL_SEND_IPI) {22302233 if (!hc->fast) {22312234 if (unlikely(kvm_read_guest(kvm, hc->ingpa, &send_ipi,···28552852 ent->eax |= HV_X64_REMOTE_TLB_FLUSH_RECOMMENDED;28562853 ent->eax |= HV_X64_APIC_ACCESS_RECOMMENDED;28572854 ent->eax |= HV_X64_RELAXED_TIMING_RECOMMENDED;28582858- ent->eax |= HV_X64_CLUSTER_IPI_RECOMMENDED;28552855+ if (!vcpu || lapic_in_kernel(vcpu))28562856+ ent->eax |= HV_X64_CLUSTER_IPI_RECOMMENDED;28592857 ent->eax |= HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED;28602858 if (evmcs_ver)28612859 ent->eax |= HV_X64_ENLIGHTENED_VMCS_RECOMMENDED;
···646646 u32 pause_count12;647647 u32 pause_thresh12;648648649649+ nested_svm_transition_tlb_flush(vcpu);650650+651651+ /* Enter Guest-Mode */652652+ enter_guest_mode(vcpu);653653+649654 /*650655 * Filled at exit: exit_code, exit_code_hi, exit_info_1, exit_info_2,651656 * exit_int_info, exit_int_info_err, next_rip, insn_len, insn_bytes.···766761 vmcb02->control.pause_filter_thresh = 0;767762 }768763 }769769-770770- nested_svm_transition_tlb_flush(vcpu);771771-772772- /* Enter Guest-Mode */773773- enter_guest_mode(vcpu);774764775765 /*776766 * Merge guest and host intercepts - must be called with vcpu in
+10
arch/x86/kvm/svm/sev.c
···29722972 WARN_ON_ONCE(!boot_cpu_has(X86_FEATURE_FLUSHBYASID)))29732973 goto out;2974297429752975+ /*29762976+ * The kernel's initcall infrastructure lacks the ability to express29772977+ * dependencies between initcalls, whereas the modules infrastructure29782978+ * automatically handles dependencies via symbol loading. Ensure the29792979+ * PSP SEV driver is initialized before proceeding if KVM is built-in,29802980+ * as the dependency isn't handled by the initcall infrastructure.29812981+ */29822982+ if (IS_BUILTIN(CONFIG_KVM_AMD) && sev_module_init())29832983+ goto out;29842984+29752985 /* Retrieve SEV CPUID information */29762986 cpuid(0x8000001f, &eax, &ebx, &ecx, &edx);29772987
+6-7
arch/x86/kvm/svm/svm.c
···19911991 svm->asid = sd->next_asid++;19921992}1993199319941994-static void svm_set_dr6(struct vcpu_svm *svm, unsigned long value)19941994+static void svm_set_dr6(struct kvm_vcpu *vcpu, unsigned long value)19951995{19961996- struct vmcb *vmcb = svm->vmcb;19961996+ struct vmcb *vmcb = to_svm(vcpu)->vmcb;1997199719981998- if (svm->vcpu.arch.guest_state_protected)19981998+ if (vcpu->arch.guest_state_protected)19991999 return;2000200020012001 if (unlikely(value != vmcb->save.dr6)) {···42474247 * Run with all-zero DR6 unless needed, so that we can get the exact cause42484248 * of a #DB.42494249 */42504250- if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT))42514251- svm_set_dr6(svm, vcpu->arch.dr6);42524252- else42534253- svm_set_dr6(svm, DR6_ACTIVE_LOW);42504250+ if (likely(!(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)))42514251+ svm_set_dr6(vcpu, DR6_ACTIVE_LOW);4254425242554253 clgi();42564254 kvm_load_guest_xsave_state(vcpu);···50415043 .set_idt = svm_set_idt,50425044 .get_gdt = svm_get_gdt,50435045 .set_gdt = svm_set_gdt,50465046+ .set_dr6 = svm_set_dr6,50445047 .set_dr7 = svm_set_dr7,50455048 .sync_dirty_debug_regs = svm_sync_dirty_debug_regs,50465049 .cache_reg = svm_cache_reg,
···56485648 set_debugreg(DR6_RESERVED, 6);56495649}5650565056515651+void vmx_set_dr6(struct kvm_vcpu *vcpu, unsigned long val)56525652+{56535653+ lockdep_assert_irqs_disabled();56545654+ set_debugreg(vcpu->arch.dr6, 6);56555655+}56565656+56515657void vmx_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)56525658{56535659 vmcs_writel(GUEST_DR7, val);···74227416 vmcs_writel(HOST_CR4, cr4);74237417 vmx->loaded_vmcs->host_state.cr4 = cr4;74247418 }74257425-74267426- /* When KVM_DEBUGREG_WONT_EXIT, dr6 is accessible in guest. */74277427- if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT))74287428- set_debugreg(vcpu->arch.dr6, 6);7429741974307420 /* When single-stepping over STI and MOV SS, we must clear the74317421 * corresponding interruptibility bits in the guest state. Otherwise
···1096110961 set_debugreg(vcpu->arch.eff_db[1], 1);1096210962 set_debugreg(vcpu->arch.eff_db[2], 2);1096310963 set_debugreg(vcpu->arch.eff_db[3], 3);1096410964+ /* When KVM_DEBUGREG_WONT_EXIT, dr6 is accessible in guest. */1096510965+ if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT))1096610966+ kvm_x86_call(set_dr6)(vcpu, vcpu->arch.dr6);1096410967 } else if (unlikely(hw_breakpoint_active())) {1096510968 set_debugreg(0, 7);1096610969 }
+18-3
arch/x86/um/os-Linux/registers.c
···1818#include <registers.h>1919#include <sys/mman.h>20202121+static unsigned long ptrace_regset;2122unsigned long host_fp_size;22232324int get_fp_registers(int pid, unsigned long *regs)···2827 .iov_len = host_fp_size,2928 };30293131- if (ptrace(PTRACE_GETREGSET, pid, NT_X86_XSTATE, &iov) < 0)3030+ if (ptrace(PTRACE_GETREGSET, pid, ptrace_regset, &iov) < 0)3231 return -errno;3332 return 0;3433}···4039 .iov_len = host_fp_size,4140 };42414343- if (ptrace(PTRACE_SETREGSET, pid, NT_X86_XSTATE, &iov) < 0)4242+ if (ptrace(PTRACE_SETREGSET, pid, ptrace_regset, &iov) < 0)4443 return -errno;4544 return 0;4645}···5958 return -ENOMEM;60596160 /* GDB has x86_xsave_length, which uses x86_cpuid_count */6262- ret = ptrace(PTRACE_GETREGSET, pid, NT_X86_XSTATE, &iov);6161+ ptrace_regset = NT_X86_XSTATE;6262+ ret = ptrace(PTRACE_GETREGSET, pid, ptrace_regset, &iov);6363 if (ret)6464 ret = -errno;6565+6666+ if (ret == -ENODEV) {6767+#ifdef CONFIG_X86_326868+ ptrace_regset = NT_PRXFPREG;6969+#else7070+ ptrace_regset = NT_PRFPREG;7171+#endif7272+ iov.iov_len = 2 * 1024 * 1024;7373+ ret = ptrace(PTRACE_GETREGSET, pid, ptrace_regset, &iov);7474+ if (ret)7575+ ret = -errno;7676+ }7777+6578 munmap(iov.iov_base, 2 * 1024 * 1024);66796780 host_fp_size = iov.iov_len;
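The change caches whichever regset worked so later reads skip the probe. A rough user-space sketch of that probe-and-fall-back flow, with illustrative buffer handling and error mapping:

#include <elf.h>
#include <errno.h>
#include <sys/ptrace.h>
#include <sys/uio.h>

static int fp_regset = NT_X86_XSTATE;	/* cached after the first probe */

static int read_fp_regs(int pid, void *buf, unsigned long len)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };

	if (ptrace(PTRACE_GETREGSET, pid, fp_regset, &iov) == 0)
		return 0;
	if (errno != ENODEV)
		return -errno;

	/* Kernel has no XSTATE regset: retry with the classic FP one. */
	fp_regset = NT_PRFPREG;
	iov.iov_len = len;
	if (ptrace(PTRACE_GETREGSET, pid, fp_regset, &iov) < 0)
		return -errno;
	return 0;
}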
+10-3
arch/x86/um/signal.c
···187187 * Put magic/size values for userspace. We do not bother to verify them188188 * later on, however, userspace needs them should it try to read the189189 * XSTATE data. And ptrace does not fill in these parts.190190+ *191191+ * Skip this if we do not have an XSTATE frame.190192 */193193+ if (host_fp_size <= sizeof(to_fp64->fpstate))194194+ return 0;195195+191196 BUILD_BUG_ON(sizeof(int) != FP_XSTATE_MAGIC2_SIZE);192197#ifdef CONFIG_X86_32193198 __put_user(offsetof(struct _fpstate_32, _fxsr_env) +···372367 int err = 0, sig = ksig->sig;373368 unsigned long fp_to;374369375375- frame = (struct rt_sigframe __user *)376376- round_down(stack_top - sizeof(struct rt_sigframe), 16);370370+ frame = (void __user *)stack_top - sizeof(struct rt_sigframe);377371378372 /* Add required space for math frame */379379- frame = (struct rt_sigframe __user *)((unsigned long)frame - math_size);373373+ frame = (void __user *)((unsigned long)frame - math_size);374374+375375+ /* ABI requires 16 byte boundary alignment */376376+ frame = (void __user *)round_down((unsigned long)frame, 16);380377381378 /* Subtract 128 for a red zone and 8 for proper alignment */382379 frame = (struct rt_sigframe __user *) ((unsigned long) frame - 128 - 8);
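The reordered math above first reserves space for the frame and FP state, then aligns, then applies the red zone. A small arithmetic sketch of that placement, assuming x86-64-style constants (128-byte red zone, 16-byte ABI alignment):

#include <stddef.h>
#include <stdint.h>

static uintptr_t place_frame(uintptr_t stack_top, size_t frame_size,
			     size_t math_size)
{
	uintptr_t sp = stack_top - frame_size;	/* room for the frame */

	sp -= math_size;		/* room for the FP/XSTATE data */
	sp &= ~(uintptr_t)15;		/* ABI: 16-byte boundary alignment */
	return sp - 128 - 8;		/* red zone plus return-slot fixup */
}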
+7-16
arch/x86/virt/svm/sev.c
···505505 * described in the SNP_INIT_EX firmware command description in the SNP506506 * firmware ABI spec.507507 */508508-static int __init snp_rmptable_init(void)508508+int __init snp_rmptable_init(void)509509{510510 unsigned int i;511511 u64 val;512512513513- if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP))514514- return 0;513513+ if (WARN_ON_ONCE(!cc_platform_has(CC_ATTR_HOST_SEV_SNP)))514514+ return -ENOSYS;515515516516- if (!amd_iommu_snp_en)517517- goto nosnp;516516+ if (WARN_ON_ONCE(!amd_iommu_snp_en))517517+ return -ENOSYS;518518519519 if (!setup_rmptable())520520- goto nosnp;520520+ return -ENOSYS;521521522522 /*523523 * Check if SEV-SNP is already enabled, this can happen in case of···530530 /* Zero out the RMP bookkeeping area */531531 if (!clear_rmptable_bookkeeping()) {532532 free_rmp_segment_table();533533- goto nosnp;533533+ return -ENOSYS;534534 }535535536536 /* Zero out the RMP entries */···562562 crash_kexec_post_notifiers = true;563563564564 return 0;565565-566566-nosnp:567567- cc_platform_clear(CC_ATTR_HOST_SEV_SNP);568568- return -ENOSYS;569565}570570-571571-/*572572- * This must be called after the IOMMU has been initialized.573573- */574574-device_initcall(snp_rmptable_init);575566576567static void set_rmp_segment_info(unsigned int segment_shift)577568{
+62-9
arch/x86/xen/mmu_pv.c
···111111 */112112static DEFINE_SPINLOCK(xen_reservation_lock);113113114114+/* Protected by xen_reservation_lock. */115115+#define MIN_CONTIG_ORDER 9 /* 2MB */116116+static unsigned int discontig_frames_order = MIN_CONTIG_ORDER;117117+static unsigned long discontig_frames_early[1UL << MIN_CONTIG_ORDER] __initdata;118118+static unsigned long *discontig_frames __refdata = discontig_frames_early;119119+static bool discontig_frames_dyn;120120+121121+static int alloc_discontig_frames(unsigned int order)122122+{123123+ unsigned long *new_array, *old_array;124124+ unsigned int old_order;125125+ unsigned long flags;126126+127127+ BUG_ON(order < MIN_CONTIG_ORDER);128128+ BUILD_BUG_ON(sizeof(discontig_frames_early) != PAGE_SIZE);129129+130130+ new_array = (unsigned long *)__get_free_pages(GFP_KERNEL,131131+ order - MIN_CONTIG_ORDER);132132+ if (!new_array)133133+ return -ENOMEM;134134+135135+ spin_lock_irqsave(&xen_reservation_lock, flags);136136+137137+ old_order = discontig_frames_order;138138+139139+ if (order > discontig_frames_order || !discontig_frames_dyn) {140140+ if (!discontig_frames_dyn)141141+ old_array = NULL;142142+ else143143+ old_array = discontig_frames;144144+145145+ discontig_frames = new_array;146146+ discontig_frames_order = order;147147+ discontig_frames_dyn = true;148148+ } else {149149+ old_array = new_array;150150+ }151151+152152+ spin_unlock_irqrestore(&xen_reservation_lock, flags);153153+154154+ free_pages((unsigned long)old_array, old_order - MIN_CONTIG_ORDER);155155+156156+ return 0;157157+}158158+114159/*115160 * Note about cr3 (pagetable base) values:116161 *···859814 SetPagePinned(virt_to_page(level3_user_vsyscall));860815#endif861816 xen_pgd_walk(&init_mm, xen_mark_pinned, FIXADDR_TOP);817817+818818+ if (alloc_discontig_frames(MIN_CONTIG_ORDER))819819+ BUG();862820}863821864822static void xen_unpin_page(struct mm_struct *mm, struct page *page,···22512203 memset(dummy_mapping, 0xff, PAGE_SIZE);22522204}2253220522542254-/* Protected by xen_reservation_lock. */22552255-#define MAX_CONTIG_ORDER 9 /* 2MB */22562256-static unsigned long discontig_frames[1<<MAX_CONTIG_ORDER];22572257-22582206#define VOID_PTE (mfn_pte(0, __pgprot(0)))22592207static void xen_zap_pfn_range(unsigned long vaddr, unsigned int order,22602208 unsigned long *in_frames,···23672323 unsigned int address_bits,23682324 dma_addr_t *dma_handle)23692325{23702370- unsigned long *in_frames = discontig_frames, out_frame;23262326+ unsigned long *in_frames, out_frame;23712327 unsigned long flags;23722328 int success;23732329 unsigned long vstart = (unsigned long)phys_to_virt(pstart);2374233023752375- if (unlikely(order > MAX_CONTIG_ORDER))23762376- return -ENOMEM;23312331+ if (unlikely(order > discontig_frames_order)) {23322332+ if (!discontig_frames_dyn)23332333+ return -ENOMEM;23342334+23352335+ if (alloc_discontig_frames(order))23362336+ return -ENOMEM;23372337+ }2377233823782339 memset((void *) vstart, 0, PAGE_SIZE << order);2379234023802341 spin_lock_irqsave(&xen_reservation_lock, flags);23422342+23432343+ in_frames = discontig_frames;2381234423822345 /* 1. Zap current PTEs, remembering MFNs. */23832346 xen_zap_pfn_range(vstart, order, in_frames, NULL);···2409235824102359void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)24112360{24122412- unsigned long *out_frames = discontig_frames, in_frame;23612361+ unsigned long *out_frames, in_frame;24132362 unsigned long flags;24142363 int success;24152364 unsigned long vstart;2416236524172417- if (unlikely(order > MAX_CONTIG_ORDER))23662366+ if (unlikely(order > discontig_frames_order))24182367 return;2419236824202369 vstart = (unsigned long)phys_to_virt(pstart);24212370 memset((void *) vstart, 0, PAGE_SIZE << order);2422237124232372 spin_lock_irqsave(&xen_reservation_lock, flags);23732373+23742374+ out_frames = discontig_frames;2424237524252376 /* 1. Find start MFN of contiguous extent. */24262377 in_frame = virt_to_mfn((void *)vstart);
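The Xen change grows the frame array without ever allocating under the reservation lock. A generic user-space sketch of that resize pattern (the names and the pthread lock are stand-ins, not from the patch):

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long *frames;
static size_t frames_len;

static int grow_frames(size_t want)
{
	unsigned long *fresh, *stale;

	/* Allocate with no lock held, as alloc_discontig_frames() does. */
	fresh = calloc(want, sizeof(*fresh));
	if (!fresh)
		return -1;

	pthread_mutex_lock(&lock);
	if (want > frames_len) {
		stale = frames;		/* retire the smaller array */
		frames = fresh;
		frames_len = want;
	} else {
		stale = fresh;		/* raced and lost: discard ours */
	}
	pthread_mutex_unlock(&lock);

	free(stale);			/* free outside the lock, too */
	return 0;
}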
+15-3
block/partitions/mac.c
···5353 }5454 secsize = be16_to_cpu(md->block_size);5555 put_dev_sector(sect);5656+5757+ /*5858+ * If the "block size" is not a power of 2, things get weird - we might5959+ * end up with a partition straddling a sector boundary, so we wouldn't6060+ * be able to read a partition entry with read_part_sector().6161+ * Real block sizes are probably (?) powers of two, so just require6262+ * that.6363+ */6464+ if (!is_power_of_2(secsize))6565+ return -1;5666 datasize = round_down(secsize, 512);5767 data = read_part_sector(state, datasize / 512, &sect);5868 if (!data)5969 return -1;6070 partoffset = secsize % 512;6161- if (partoffset + sizeof(*part) > datasize)7171+ if (partoffset + sizeof(*part) > datasize) {7272+ put_dev_sector(sect);6273 return -1;7474+ }6375 part = (struct mac_partition *) (data + partoffset);6476 if (be16_to_cpu(part->signature) != MAC_PARTITION_MAGIC) {6577 put_dev_sector(sect);···124112 int i, l;125113126114 goodness++;127127- l = strlen(part->name);128128- if (strcmp(part->name, "/") == 0)115115+ l = strnlen(part->name, sizeof(part->name));116116+ if (strncmp(part->name, "/", sizeof(part->name)) == 0)129117 goodness++;130118 for (i = 0; i <= l - 4; ++i) {131119 if (strncasecmp(part->name + i, "root",
···11+// SPDX-License-Identifier: GPL-2.0-only22+/*33+ * Copyright (c) 2025 Greg Kroah-Hartman <gregkh@linuxfoundation.org>44+ * Copyright (c) 2025 The Linux Foundation55+ *66+ * A "simple" faux bus that allows devices to be created and added77+ * automatically to it. This is to be used whenever you need to create a88+ * device that is not associated with any "real" system resources, and do99+ * not want to have to deal with bus/driver binding logic. It is1010+ * intended to be very simple, with only a create and a destroy function1111+ * available.1212+ */1313+#include <linux/err.h>1414+#include <linux/init.h>1515+#include <linux/slab.h>1616+#include <linux/string.h>1717+#include <linux/container_of.h>1818+#include <linux/device/faux.h>1919+#include "base.h"2020+2121+/*2222+ * Internal wrapper structure so we can hold a pointer to the2323+ * faux_device_ops for this device.2424+ */2525+struct faux_object {2626+ struct faux_device faux_dev;2727+ const struct faux_device_ops *faux_ops;2828+};2929+#define to_faux_object(dev) container_of_const(dev, struct faux_object, faux_dev.dev)3030+3131+static struct device faux_bus_root = {3232+ .init_name = "faux",3333+};3434+3535+static int faux_match(struct device *dev, const struct device_driver *drv)3636+{3737+ /* Match always succeeds, we only have one driver */3838+ return 1;3939+}4040+4141+static int faux_probe(struct device *dev)4242+{4343+ struct faux_object *faux_obj = to_faux_object(dev);4444+ struct faux_device *faux_dev = &faux_obj->faux_dev;4545+ const struct faux_device_ops *faux_ops = faux_obj->faux_ops;4646+ int ret = 0;4747+4848+ if (faux_ops && faux_ops->probe)4949+ ret = faux_ops->probe(faux_dev);5050+5151+ return ret;5252+}5353+5454+static void faux_remove(struct device *dev)5555+{5656+ struct faux_object *faux_obj = to_faux_object(dev);5757+ struct faux_device *faux_dev = &faux_obj->faux_dev;5858+ const struct faux_device_ops *faux_ops = faux_obj->faux_ops;5959+6060+ if (faux_ops && faux_ops->remove)6161+ faux_ops->remove(faux_dev);6262+}6363+6464+static const struct bus_type faux_bus_type = {6565+ .name = "faux",6666+ .match = faux_match,6767+ .probe = faux_probe,6868+ .remove = faux_remove,6969+};7070+7171+static struct device_driver faux_driver = {7272+ .name = "faux_driver",7373+ .bus = &faux_bus_type,7474+ .probe_type = PROBE_FORCE_SYNCHRONOUS,7575+};7676+7777+static void faux_device_release(struct device *dev)7878+{7979+ struct faux_object *faux_obj = to_faux_object(dev);8080+8181+ kfree(faux_obj);8282+}8383+8484+/**8585+ * faux_device_create_with_groups - Create and register with the driver8686+ * core a faux device and populate the device with an initial8787+ * set of sysfs attributes.8888+ * @name: The name of the device we are adding, must be unique for8989+ * all faux devices.9090+ * @parent: Pointer to a potential parent struct device. If set to9191+ * NULL, the device will be created in the "root" of the faux9292+ * device tree in sysfs.9393+ * @faux_ops: struct faux_device_ops that the new device will call back9494+ * into, can be NULL.9595+ * @groups: The set of sysfs attributes that will be created for this9696+ * device when it is registered with the driver core.9797+ *9898+ * Create a new faux device and register it in the driver core properly.9999+ * If present, callbacks in @faux_ops will be called with the device for100100+ * the caller to do something with at the proper time given the101101+ * device's lifecycle.102102+ *103103+ * Note, when this function is called, the callbacks specified in104104+ * @faux_ops can be called before the function returns, so be prepared for105105+ * everything to be properly initialized before that point in time.106106+ *107107+ * Return:108108+ * * NULL if an error happened with creating the device109109+ * * pointer to a valid struct faux_device that is registered with sysfs110110+ */111111+struct faux_device *faux_device_create_with_groups(const char *name,112112+ struct device *parent,113113+ const struct faux_device_ops *faux_ops,114114+ const struct attribute_group **groups)115115+{116116+ struct faux_object *faux_obj;117117+ struct faux_device *faux_dev;118118+ struct device *dev;119119+ int ret;120120+121121+ faux_obj = kzalloc(sizeof(*faux_obj), GFP_KERNEL);122122+ if (!faux_obj)123123+ return NULL;124124+125125+ /* Save off the callbacks so we can use them in the future */126126+ faux_obj->faux_ops = faux_ops;127127+128128+ /* Initialize the device portion and register it with the driver core */129129+ faux_dev = &faux_obj->faux_dev;130130+ dev = &faux_dev->dev;131131+132132+ device_initialize(dev);133133+ dev->release = faux_device_release;134134+ if (parent)135135+ dev->parent = parent;136136+ else137137+ dev->parent = &faux_bus_root;138138+ dev->bus = &faux_bus_type;139139+ dev->groups = groups;140140+ dev_set_name(dev, "%s", name);141141+142142+ ret = device_add(dev);143143+ if (ret) {144144+ pr_err("%s: device_add for faux device '%s' failed with %d\n",145145+ __func__, name, ret);146146+ put_device(dev);147147+ return NULL;148148+ }149149+150150+ return faux_dev;151151+}152152+EXPORT_SYMBOL_GPL(faux_device_create_with_groups);153153+154154+/**155155+ * faux_device_create - create and register with the driver core a faux device156156+ * @name: The name of the device we are adding, must be unique for all157157+ * faux devices.158158+ * @parent: Pointer to a potential parent struct device. If set to159159+ * NULL, the device will be created in the "root" of the faux160160+ * device tree in sysfs.161161+ * @faux_ops: struct faux_device_ops that the new device will call back162162+ * into, can be NULL.163163+ *164164+ * Create a new faux device and register it in the driver core properly.165165+ * If present, callbacks in @faux_ops will be called with the device for166166+ * the caller to do something with at the proper time given the167167+ * device's lifecycle.168168+ *169169+ * Note, when this function is called, the callbacks specified in170170+ * @faux_ops can be called before the function returns, so be prepared for171171+ * everything to be properly initialized before that point in time.172172+ *173173+ * Return:174174+ * * NULL if an error happened with creating the device175175+ * * pointer to a valid struct faux_device that is registered with sysfs176176+ */177177+struct faux_device *faux_device_create(const char *name,178178+ struct device *parent,179179+ const struct faux_device_ops *faux_ops)180180+{181181+ return faux_device_create_with_groups(name, parent, faux_ops, NULL);182182+}183183+EXPORT_SYMBOL_GPL(faux_device_create);184184+185185+/**186186+ * faux_device_destroy - destroy a faux device187187+ * @faux_dev: faux device to destroy188188+ *189189+ * Unregisters and cleans up a device that was created with a call to190190+ * faux_device_create()191191+ */192192+void faux_device_destroy(struct faux_device *faux_dev)193193+{194194+ struct device *dev;195195+196196+ if (!faux_dev)197197+ return;198198+199199+ dev = &faux_dev->dev;200200+ device_del(dev);201201+ /* The final put_device() will clean up the memory we allocated for this device. */202202+ put_device(dev);203203+}204204+EXPORT_SYMBOL_GPL(faux_device_destroy);205205+206206+int __init faux_bus_init(void)207207+{208208+ int ret;209209+210210+ ret = device_register(&faux_bus_root);211211+ if (ret) {212212+ put_device(&faux_bus_root);213213+ return ret;214214+ }215215+216216+ ret = bus_register(&faux_bus_type);217217+ if (ret)218218+ goto error_bus;219219+220220+ ret = driver_register(&faux_driver);221221+ if (ret)222222+ goto error_driver;223223+224224+ return ret;225225+226226+error_driver:227227+ bus_unregister(&faux_bus_type);228228+229229+error_bus:230230+ device_unregister(&faux_bus_root);231231+ return ret;232232+}
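Based only on the API the new file exposes (faux_device_create(), struct faux_device_ops, faux_device_destroy()), a hypothetical consumer module might look like this; the "demo" naming is invented here:

#include <linux/device/faux.h>
#include <linux/module.h>

static struct faux_device *demo_dev;

static int demo_probe(struct faux_device *fdev)
{
	dev_info(&fdev->dev, "demo faux device bound\n");
	return 0;
}

static void demo_remove(struct faux_device *fdev)
{
	dev_info(&fdev->dev, "demo faux device going away\n");
}

static const struct faux_device_ops demo_ops = {
	.probe = demo_probe,
	.remove = demo_remove,
};

static int __init demo_init(void)
{
	/* NULL parent: the device lands under the "faux" root in sysfs. */
	demo_dev = faux_device_create("faux-demo", NULL, &demo_ops);
	return demo_dev ? 0 : -ENODEV;
}

static void __exit demo_exit(void)
{
	faux_device_destroy(demo_dev);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_DESCRIPTION("Hypothetical faux bus consumer");
MODULE_LICENSE("GPL");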
+1
drivers/base/init.c
···3232 /* These are also core pieces, but must come after the3333 * core core pieces.3434 */3535+ faux_bus_init();3536 of_core_init();3637 platform_bus_init();3738 auxiliary_bus_init();
+2
drivers/base/regmap/regmap-irq.c
···906906 kfree(d->wake_buf);907907 kfree(d->mask_buf_def);908908 kfree(d->mask_buf);909909+ kfree(d->main_status_buf);909910 kfree(d->status_buf);910911 kfree(d->status_reg_buf);911912 if (d->config_buf) {···982981 kfree(d->wake_buf);983982 kfree(d->mask_buf_def);984983 kfree(d->mask_buf);984984+ kfree(d->main_status_buf);985985 kfree(d->status_reg_buf);986986 kfree(d->status_buf);987987 if (d->config_buf) {
+4-1
drivers/bluetooth/btintel_pcie.c
···13201320 if (opcode == 0xfc01)13211321 btintel_pcie_inject_cmd_complete(hdev, opcode);13221322 }13231323+ /* Firmware raises alive interrupt on HCI_OP_RESET */13241324+ if (opcode == HCI_OP_RESET)13251325+ data->gp0_received = false;13261326+13231327 hdev->stat.cmd_tx++;13241328 break;13251329 case HCI_ACLDATA_PKT:···13611357 opcode, btintel_pcie_alivectxt_state2str(old_ctxt),13621358 btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt));13631359 if (opcode == HCI_OP_RESET) {13641364- data->gp0_received = false;13651360 ret = wait_event_timeout(data->gp0_wait_q,13661361 data->gp0_received,13671362 msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT_MS));
+14
drivers/crypto/ccp/sp-dev.c
···1919#include <linux/types.h>2020#include <linux/ccp.h>21212222+#include "sev-dev.h"2223#include "ccp-dev.h"2324#include "sp-dev.h"2425···254253static int __init sp_mod_init(void)255254{256255#ifdef CONFIG_X86256256+ static bool initialized;257257 int ret;258258+259259+ if (initialized)260260+ return 0;258261259262 ret = sp_pci_init();260263 if (ret)···267262#ifdef CONFIG_CRYPTO_DEV_SP_PSP268263 psp_pci_init();269264#endif265265+266266+ initialized = true;270267271268 return 0;272269#endif···285278286279 return -ENODEV;287280}281281+282282+#if IS_BUILTIN(CONFIG_KVM_AMD) && IS_ENABLED(CONFIG_KVM_AMD_SEV)283283+int __init sev_module_init(void)284284+{285285+ return sp_mod_init();286286+}287287+#endif288288289289static void __exit sp_mod_exit(void)290290{
+14-3
drivers/dma/tegra210-adma.c
···887887 const struct tegra_adma_chip_data *cdata;888888 struct tegra_adma *tdma;889889 struct resource *res_page, *res_base;890890- int ret, i, page_no;890890+ int ret, i;891891892892 cdata = of_device_get_match_data(&pdev->dev);893893 if (!cdata) {···914914915915 res_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "global");916916 if (res_base) {917917- page_no = (res_page->start - res_base->start) / cdata->ch_base_offset;918918- if (page_no <= 0)917917+ resource_size_t page_offset, page_no;918918+ unsigned int ch_base_offset;919919+920920+ if (res_page->start < res_base->start)919921 return -EINVAL;922922+ page_offset = res_page->start - res_base->start;923923+ ch_base_offset = cdata->ch_base_offset;924924+ if (!ch_base_offset)925925+ return -EINVAL;926926+927927+ page_no = div_u64(page_offset, ch_base_offset);928928+ if (!page_no || page_no > INT_MAX)929929+ return -EINVAL;930930+920931 tdma->ch_page_no = page_no - 1;921932 tdma->base_addr = devm_ioremap_resource(&pdev->dev, res_base);922933 if (IS_ERR(tdma->base_addr))
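The probe now validates ordering and the divisor before computing the page index. A standalone sketch of that hardened index math, with invented names and plain fixed-width types in place of resource_size_t:

#include <stdint.h>

static int page_index(uint64_t page_start, uint64_t base_start,
		      uint32_t stride, int *out)
{
	uint64_t page_no;

	/* Reject a page window below the base or a zero stride. */
	if (page_start < base_start || stride == 0)
		return -1;

	page_no = (page_start - base_start) / stride;

	/* Page 0 is the global space; the result must also fit in int. */
	if (page_no == 0 || page_no > INT32_MAX)
		return -1;

	*out = (int)(page_no - 1);
	return 0;
}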
···2525 if (md->type != EFI_CONVENTIONAL_MEMORY)2626 return 0;27272828+ if (md->attribute & EFI_MEMORY_HOT_PLUGGABLE)2929+ return 0;3030+2831 if (efi_soft_reserve_enabled() &&2932 (md->attribute & EFI_MEMORY_SP))3033 return 0;
+3
drivers/firmware/efi/libstub/relocate.c
···5353 if (desc->type != EFI_CONVENTIONAL_MEMORY)5454 continue;55555656+ if (desc->attribute & EFI_MEMORY_HOT_PLUGGABLE)5757+ continue;5858+5659 if (efi_soft_reserve_enabled() &&5760 (desc->attribute & EFI_MEMORY_SP))5861 continue;
+58-13
drivers/gpio/gpio-bcm-kona.c
···6969struct bcm_kona_gpio_bank {7070 int id;7171 int irq;7272+ /*7373+ * Used to keep track of lock/unlock operations for each GPIO in the7474+ * bank.7575+ *7676+ * All GPIOs are locked by default (see bcm_kona_gpio_reset), and the7777+ * unlock count for all GPIOs is 0 by default. Each unlock increments7878+ * the counter, and each lock decrements the counter.7979+ *8080+ * The lock function only locks the GPIO once its unlock counter is8181+ * down to 0. This is necessary because the GPIO is unlocked in two8282+ * places in this driver: once for requested GPIOs, and once for8383+ * requested IRQs. Since it is possible for a GPIO to be requested8484+ * as both a GPIO and an IRQ, we need to ensure that we don't lock it8585+ * too early.8686+ */8787+ u8 gpio_unlock_count[GPIO_PER_BANK];7288 /* Used in the interrupt handler */7389 struct bcm_kona_gpio *kona_gpio;7490};···10286 u32 val;10387 unsigned long flags;10488 int bank_id = GPIO_BANK(gpio);8989+ int bit = GPIO_BIT(gpio);9090+ struct bcm_kona_gpio_bank *bank = &kona_gpio->banks[bank_id];10591106106- raw_spin_lock_irqsave(&kona_gpio->lock, flags);9292+ if (bank->gpio_unlock_count[bit] == 0) {9393+ dev_err(kona_gpio->gpio_chip.parent,9494+ "Unbalanced locks for GPIO %u\n", gpio);9595+ return;9696+ }10797108108- val = readl(kona_gpio->reg_base + GPIO_PWD_STATUS(bank_id));109109- val |= BIT(gpio);110110- bcm_kona_gpio_write_lock_regs(kona_gpio->reg_base, bank_id, val);9898+ if (--bank->gpio_unlock_count[bit] == 0) {9999+ raw_spin_lock_irqsave(&kona_gpio->lock, flags);111100112112- raw_spin_unlock_irqrestore(&kona_gpio->lock, flags);101101+102102+ val = readl(kona_gpio->reg_base + GPIO_PWD_STATUS(bank_id));103103+ val |= BIT(bit);104104+ bcm_kona_gpio_write_lock_regs(kona_gpio->reg_base, bank_id, val);105105+106106+ raw_spin_unlock_irqrestore(&kona_gpio->lock, flags);107107+ }113107}114108115109static void bcm_kona_gpio_unlock_gpio(struct bcm_kona_gpio *kona_gpio,···128102 u32 val;129103 unsigned long flags;130104 int bank_id = GPIO_BANK(gpio);105105+ int bit = GPIO_BIT(gpio);106106+ struct bcm_kona_gpio_bank *bank = &kona_gpio->banks[bank_id];131107132132- raw_spin_lock_irqsave(&kona_gpio->lock, flags);108108+ if (bank->gpio_unlock_count[bit] == 0) {109109+ raw_spin_lock_irqsave(&kona_gpio->lock, flags);133110134134- val = readl(kona_gpio->reg_base + GPIO_PWD_STATUS(bank_id));135135- val &= ~BIT(gpio);136136- bcm_kona_gpio_write_lock_regs(kona_gpio->reg_base, bank_id, val);111111+ val = readl(kona_gpio->reg_base + GPIO_PWD_STATUS(bank_id));112112+ val &= ~BIT(bit);113113+ bcm_kona_gpio_write_lock_regs(kona_gpio->reg_base, bank_id, val);137114138138- raw_spin_unlock_irqrestore(&kona_gpio->lock, flags);115115+ raw_spin_unlock_irqrestore(&kona_gpio->lock, flags);116116+ }117117+118118+ ++bank->gpio_unlock_count[bit];139119}140120141121static int bcm_kona_gpio_get_dir(struct gpio_chip *chip, unsigned gpio)···392360393361 kona_gpio = irq_data_get_irq_chip_data(d);394362 reg_base = kona_gpio->reg_base;363363+395364 raw_spin_lock_irqsave(&kona_gpio->lock, flags);396365397366 val = readl(reg_base + GPIO_INT_MASK(bank_id));···415382416383 kona_gpio = irq_data_get_irq_chip_data(d);417384 reg_base = kona_gpio->reg_base;385385+418386 raw_spin_lock_irqsave(&kona_gpio->lock, flags);419387420388 val = readl(reg_base + GPIO_INT_MSKCLR(bank_id));···511477static int bcm_kona_gpio_irq_reqres(struct irq_data *d)512478{513479 struct bcm_kona_gpio *kona_gpio = irq_data_get_irq_chip_data(d);480480+ unsigned int gpio = d->hwirq;514481515515- return gpiochip_reqres_irq(&kona_gpio->gpio_chip, d->hwirq);482482+ /*483483+ * We need to unlock the GPIO before any other operations are performed484484+ * on the relevant GPIO configuration registers485485+ */486486+ bcm_kona_gpio_unlock_gpio(kona_gpio, gpio);487487+488488+ return gpiochip_reqres_irq(&kona_gpio->gpio_chip, gpio);516489}517490518491static void bcm_kona_gpio_irq_relres(struct irq_data *d)519492{520493 struct bcm_kona_gpio *kona_gpio = irq_data_get_irq_chip_data(d);494494+ unsigned int gpio = d->hwirq;521495522522- gpiochip_relres_irq(&kona_gpio->gpio_chip, d->hwirq);496496+ /* Once we no longer use it, lock the GPIO again */497497+ bcm_kona_gpio_lock_gpio(kona_gpio, gpio);498498+499499+ gpiochip_relres_irq(&kona_gpio->gpio_chip, gpio);523500}524501525502static struct irq_chip bcm_gpio_irq_chip = {···659614 bank->irq = platform_get_irq(pdev, i);660615 bank->kona_gpio = kona_gpio;661616 if (bank->irq < 0) {662662- dev_err(dev, "Couldn't get IRQ for bank %d", i);617617+ dev_err(dev, "Couldn't get IRQ for bank %d\n", i);663618 ret = -ENOENT;664619 goto err_irq_domain;665620 }
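The counting scheme documented in the new comment can be shown in isolation. A minimal sketch, assuming a 32-pin bank and a software mask standing in for the GPIO_PWD_STATUS register writes:

#include <stdbool.h>
#include <stdint.h>

#define GPIO_PER_BANK 32

static uint8_t unlock_count[GPIO_PER_BANK];
static uint32_t lock_mask = ~0u;	/* every pin starts locked */

static void pin_unlock(unsigned int bit)
{
	if (unlock_count[bit]++ == 0)
		lock_mask &= ~(1u << bit);	/* first user opens the pin */
}

static bool pin_lock(unsigned int bit)
{
	if (unlock_count[bit] == 0)
		return false;			/* unbalanced lock attempt */
	if (--unlock_count[bit] == 0)
		lock_mask |= 1u << bit;		/* last user closes the pin */
	return true;
}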
+12-3
drivers/gpio/gpio-stmpe.c
···191191 [REG_IE][CSB] = STMPE_IDX_IEGPIOR_CSB,192192 [REG_IE][MSB] = STMPE_IDX_IEGPIOR_MSB,193193 };194194- int i, j;194194+ int ret, i, j;195195196196 /*197197 * STMPE1600: to be able to get IRQ from pins,···199199 * GPSR or GPCR registers200200 */201201 if (stmpe->partnum == STMPE1600) {202202- stmpe_reg_read(stmpe, stmpe->regs[STMPE_IDX_GPMR_LSB]);203203- stmpe_reg_read(stmpe, stmpe->regs[STMPE_IDX_GPMR_CSB]);202202+ ret = stmpe_reg_read(stmpe, stmpe->regs[STMPE_IDX_GPMR_LSB]);203203+ if (ret < 0) {204204+ dev_err(stmpe->dev, "Failed to read GPMR_LSB: %d\n", ret);205205+ goto err;206206+ }207207+ ret = stmpe_reg_read(stmpe, stmpe->regs[STMPE_IDX_GPMR_CSB]);208208+ if (ret < 0) {209209+ dev_err(stmpe->dev, "Failed to read GPMR_CSB: %d\n", ret);210210+ goto err;211211+ }204212 }205213206214 for (i = 0; i < CACHE_NR_REGS; i++) {···230222 }231223 }232224225225+err:233226 mutex_unlock(&stmpe_gpio->irq_lock);234227}235228
···904904 }905905906906 if (gc->ngpio == 0) {907907- chip_err(gc, "tried to insert a GPIO chip with zero lines\n");907907+ dev_err(dev, "tried to insert a GPIO chip with zero lines\n");908908 return -EINVAL;909909 }910910911911 if (gc->ngpio > FASTPATH_NGPIO)912912- chip_warn(gc, "line cnt %u is greater than fast path cnt %u\n",913913- gc->ngpio, FASTPATH_NGPIO);912912+ dev_warn(dev, "line cnt %u is greater than fast path cnt %u\n",913913+ gc->ngpio, FASTPATH_NGPIO);914914915915 return 0;916916}
···38153815 if (err == -ENODEV) {38163816 dev_warn(adev->dev, "cap microcode does not exist, skip\n");38173817 err = 0;38183818- goto out;38183818+ } else {38193819+ dev_err(adev->dev, "fail to initialize cap microcode\n");38193820 }38203820- dev_err(adev->dev, "fail to initialize cap microcode\n");38213821+ goto out;38213822 }3822382338233824 info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CAP];
+34-2
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
···74377437 amdgpu_ring_write(ring, 0); /* RESERVED field, programmed to zero */74387438}7439743974407440+static void gfx_v9_0_ring_begin_use_compute(struct amdgpu_ring *ring)74417441+{74427442+ struct amdgpu_device *adev = ring->adev;74437443+ struct amdgpu_ip_block *gfx_block =74447444+ amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_GFX);74457445+74467446+ amdgpu_gfx_enforce_isolation_ring_begin_use(ring);74477447+74487448+ /* Raven and PCO APUs seem to have stability issues74497449+ * with compute and gfxoff and gfx pg. Disable gfx pg during74507450+ * submission and allow again afterwards.74517451+ */74527452+ if (gfx_block && amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 1, 0))74537453+ gfx_v9_0_set_powergating_state(gfx_block, AMD_PG_STATE_UNGATE);74547454+}74557455+74567456+static void gfx_v9_0_ring_end_use_compute(struct amdgpu_ring *ring)74577457+{74587458+ struct amdgpu_device *adev = ring->adev;74597459+ struct amdgpu_ip_block *gfx_block =74607460+ amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_GFX);74617461+74627462+ /* Raven and PCO APUs seem to have stability issues74637463+ * with compute and gfxoff and gfx pg. Disable gfx pg during74647464+ * submission and allow again afterwards.74657465+ */74667466+ if (gfx_block && amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 1, 0))74677467+ gfx_v9_0_set_powergating_state(gfx_block, AMD_PG_STATE_GATE);74687468+74697469+ amdgpu_gfx_enforce_isolation_ring_end_use(ring);74707470+}74717471+74407472static const struct amd_ip_funcs gfx_v9_0_ip_funcs = {74417473 .name = "gfx_v9_0",74427474 .early_init = gfx_v9_0_early_init,···76457613 .emit_wave_limit = gfx_v9_0_emit_wave_limit,76467614 .reset = gfx_v9_0_reset_kcq,76477615 .emit_cleaner_shader = gfx_v9_0_ring_emit_cleaner_shader,76487648- .begin_use = amdgpu_gfx_enforce_isolation_ring_begin_use,76497649- .end_use = amdgpu_gfx_enforce_isolation_ring_end_use,76167616+ .begin_use = gfx_v9_0_ring_begin_use_compute,76177617+ .end_use = gfx_v9_0_ring_end_use_compute,76507618};7651761976527620static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_kiq = {
···10491049 s_rfe_b64 s_restore_pc_lo //Return to the main shader program and resume execution1050105010511051L_END_PGM:10521052+ // Make sure that no wave of the workgroup can exit the trap handler10531053+ // before the workgroup barrier state is saved.10541054+ s_barrier_signal -210551055+ s_barrier_wait -210521056 s_endpgm_saved10531057end10541058
···168168 return PTR_ERR(ppgtt);169169170170 if (!ppgtt->vm.allocate_va_range)171171- goto err_ppgtt_cleanup;171171+ goto ppgtt_vm_put;172172173173 /*174174 * While we only allocate the page tables here and so we could···236236 goto retry;237237 }238238 i915_gem_ww_ctx_fini(&ww);239239-239239+ppgtt_vm_put:240240 i915_vm_put(&ppgtt->vm);241241 return err;242242}
+1
drivers/gpu/drm/panthor/panthor_drv.c
···802802{803803 int prio;804804805805+ memset(arg, 0, sizeof(*arg));805806 for (prio = PANTHOR_GROUP_PRIORITY_REALTIME; prio >= 0; prio--) {806807 if (!group_priority_permit(file, prio))807808 arg->allowed_mask |= BIT(prio);
···5757 return GRAPHICS_VERx100(xe) < 1270 && !IS_DGFX(xe);5858}59596060-static s64 detect_bar2_dgfx(struct xe_device *xe, struct xe_ttm_stolen_mgr *mgr)6161-{6262- struct xe_tile *tile = xe_device_get_root_tile(xe);6363- struct xe_mmio *mmio = xe_root_tile_mmio(xe);6464- struct pci_dev *pdev = to_pci_dev(xe->drm.dev);6565- u64 stolen_size;6666- u64 tile_offset;6767- u64 tile_size;6868-6969- tile_offset = tile->mem.vram.io_start - xe->mem.vram.io_start;7070- tile_size = tile->mem.vram.actual_physical_size;7171-7272- /* Use DSM base address instead for stolen memory */7373- mgr->stolen_base = (xe_mmio_read64_2x32(mmio, DSMBASE) & BDSM_MASK) - tile_offset;7474- if (drm_WARN_ON(&xe->drm, tile_size < mgr->stolen_base))7575- return 0;7676-7777- stolen_size = tile_size - mgr->stolen_base;7878-7979- /* Verify usage fits in the actual resource available */8080- if (mgr->stolen_base + stolen_size <= pci_resource_len(pdev, LMEM_BAR))8181- mgr->io_base = tile->mem.vram.io_start + mgr->stolen_base;8282-8383- /*8484- * There may be few KB of platform dependent reserved memory at the end8585- * of vram which is not part of the DSM. Such reserved memory portion is8686- * always less then DSM granularity so align down the stolen_size to DSM8787- * granularity to accommodate such reserve vram portion.8888- */8989- return ALIGN_DOWN(stolen_size, SZ_1M);9090-}9191-9260static u32 get_wopcm_size(struct xe_device *xe)9361{9462 u32 wopcm_size;···78110 }7911180112 return wopcm_size;113113+}114114+115115+static s64 detect_bar2_dgfx(struct xe_device *xe, struct xe_ttm_stolen_mgr *mgr)116116+{117117+ struct xe_tile *tile = xe_device_get_root_tile(xe);118118+ struct xe_mmio *mmio = xe_root_tile_mmio(xe);119119+ struct pci_dev *pdev = to_pci_dev(xe->drm.dev);120120+ u64 stolen_size, wopcm_size;121121+ u64 tile_offset;122122+ u64 tile_size;123123+124124+ tile_offset = tile->mem.vram.io_start - xe->mem.vram.io_start;125125+ tile_size = tile->mem.vram.actual_physical_size;126126+127127+ /* Use DSM base address instead for stolen memory */128128+ mgr->stolen_base = (xe_mmio_read64_2x32(mmio, DSMBASE) & BDSM_MASK) - tile_offset;129129+ if (drm_WARN_ON(&xe->drm, tile_size < mgr->stolen_base))130130+ return 0;131131+132132+ /* Carve out the top of DSM as it contains the reserved WOPCM region */133133+ wopcm_size = get_wopcm_size(xe);134134+ if (drm_WARN_ON(&xe->drm, !wopcm_size))135135+ return 0;136136+137137+ stolen_size = tile_size - mgr->stolen_base;138138+ stolen_size -= wopcm_size;139139+140140+ /* Verify usage fits in the actual resource available */141141+ if (mgr->stolen_base + stolen_size <= pci_resource_len(pdev, LMEM_BAR))142142+ mgr->io_base = tile->mem.vram.io_start + mgr->stolen_base;143143+144144+ /*145145+ * There may be a few KB of platform-dependent reserved memory at the end146146+ * of vram which is not part of the DSM. Such reserved memory portion is147147+ * always less than DSM granularity so align down the stolen_size to DSM148148+ * granularity to accommodate such reserved vram portion.149149+ */150150+ return ALIGN_DOWN(stolen_size, SZ_1M);81151}8215283153static u32 detect_bar2_integrated(struct xe_device *xe, struct xe_ttm_stolen_mgr *mgr)
···104104 unsigned int id;105105 int i, err;106106107107- mutex_init(&host->intr_mutex);108108-109107 for (id = 0; id < host1x_syncpt_nb_pts(host); ++id) {110108 struct host1x_syncpt *syncpt = &host->syncpt[id];111109
+10-5
drivers/hid/Kconfig
···570570571571config HID_LENOVO572572 tristate "Lenovo / Thinkpad devices"573573+ depends on ACPI574574+ select ACPI_PLATFORM_PROFILE573575 select NEW_LEDS574576 select LEDS_CLASS575577 help···11691167 tristate "Topre REALFORCE keyboards"11701168 depends on HID11711169 help11721172- Say Y for N-key rollover support on Topre REALFORCE R2 108/87 key keyboards.11701170+ Say Y for N-key rollover support on Topre REALFORCE R2 108/87 key and11711171+ Topre REALFORCE R3S 87 key keyboards.1173117211741173config HID_THINGM11751174 tristate "ThingM blink(1) USB RGB LED"···1377137413781375source "drivers/hid/bpf/Kconfig"1379137613801380-endif # HID13811381-13821382-source "drivers/hid/usbhid/Kconfig"13831383-13841377source "drivers/hid/i2c-hid/Kconfig"1385137813861379source "drivers/hid/intel-ish-hid/Kconfig"···13861387source "drivers/hid/surface-hid/Kconfig"1387138813881389source "drivers/hid/intel-thc-hid/Kconfig"13901390+13911391+endif # HID13921392+13931393+# USB support may be used with HID disabled13941394+13951395+source "drivers/hid/usbhid/Kconfig"1389139613901397endif # HID_SUPPORT
-1
drivers/hid/amd-sfh-hid/Kconfig
···5566config AMD_SFH_HID77 tristate "AMD Sensor Fusion Hub"88- depends on HID98 depends on X86109 help1110 If you say yes to this option, support will be included for the
···106106 "%s::%s",107107 dev_name(&input->dev),108108 info->led_name);109109+ if (!led->cdev.name)110110+ return -ENOMEM;109111110112 ret = devm_led_classdev_register(&hdev->dev, &led->cdev);111113 if (ret)
+1-1
drivers/hid/i2c-hid/Kconfig
···22menuconfig I2C_HID33 tristate "I2C HID support"44 default y55- depends on I2C && INPUT && HID55+ depends on I2C6677if I2C_HID88
-1
drivers/hid/intel-ish-hid/Kconfig
···66 tristate "Intel Integrated Sensor Hub"77 default n88 depends on X8699- depends on HID109 help1110 The Integrated Sensor Hub (ISH) enables the ability to offload1211 sensor polling and algorithm processing to a dedicated low power
···517517 /* ISH FW is dead */518518 if (!ish_is_input_ready(dev))519519 return -EPIPE;520520+521521+ /* Send clock sync at once after reset */522522+ ishtp_dev->prev_sync = 0;523523+520524 /*521525 * Set HOST2ISH.ILUP. Apparently we need this BEFORE sending522526 * RESET_NOTIFY_ACK - FW will be checking for it···581577 */582578static void _ish_sync_fw_clock(struct ishtp_device *dev)583579{584584- static unsigned long prev_sync;585585- uint64_t usec;580580+ struct ipc_time_update_msg time = {};586581587587- if (prev_sync && time_before(jiffies, prev_sync + 20 * HZ))582582+ if (dev->prev_sync && time_before(jiffies, dev->prev_sync + 20 * HZ))588583 return;589584590590- prev_sync = jiffies;591591- usec = ktime_to_us(ktime_get_boottime());592592- ipc_send_mng_msg(dev, MNG_SYNC_FW_CLOCK, &usec, sizeof(uint64_t));585585+ dev->prev_sync = jiffies;586586+ /* The fields of time would be updated while sending message */587587+ ipc_send_mng_msg(dev, MNG_SYNC_FW_CLOCK, &time, sizeof(time));593588}594589595590/**
···253253 unsigned int ipc_tx_cnt;254254 unsigned long long ipc_tx_bytes_cnt;255255256256+ /* Time of the last clock sync */257257+ unsigned long prev_sync;256258 const struct ishtp_hw_ops *ops;257259 size_t mtu;258260 uint32_t ishtp_msg_hdr;
-1
drivers/hid/intel-thc-hid/Kconfig
···77config INTEL_THC_HID88 tristate "Intel Touch Host Controller"99 depends on ACPI1010- select HID1110 help1211 THC (Touch Host Controller) is the name of the IP block in PCH that1312 interfaces with Touch Devices (ex: touchscreen, touchpad etc.). It
-2
drivers/hid/surface-hid/Kconfig
···11# SPDX-License-Identifier: GPL-2.0+22menu "Surface System Aggregator Module HID support"33 depends on SURFACE_AGGREGATOR44- depends on INPUT5465config SURFACE_HID76 tristate "HID transport driver for Surface System Aggregator Module"···38393940config SURFACE_HID_CORE4041 tristate4141- select HID
+1-2
drivers/hid/usbhid/Kconfig
···55config USB_HID66 tristate "USB HID transport layer"77 default y88- depends on USB && INPUT99- select HID88+ depends on HID109 help1110 Say Y here if you want to connect USB keyboards,1211 mice, joysticks, graphic tablets, or any other HID based devices
···2653265326542654 /* Set IOTLB invalidation timeout to 1s */26552655 iommu_set_inv_tlb_timeout(iommu, CTRL_INV_TO_1S);26562656+26572657+ /* Enable Enhanced Peripheral Page Request Handling */26582658+ if (check_feature(FEATURE_EPHSUP))26592659+ iommu_feature_enable(iommu, CONTROL_EPH_EN);26562660}2657266126582662static void iommu_apply_resume_quirks(struct amd_iommu *iommu)···31983194 return true;31993195}3200319632013201-static void iommu_snp_enable(void)31973197+static __init void iommu_snp_enable(void)32023198{32033199#ifdef CONFIG_KVM_AMD_SEV32043200 if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP))···32203216 amd_iommu_snp_en = check_feature(FEATURE_SNP);32213217 if (!amd_iommu_snp_en) {32223218 pr_warn("SNP: IOMMU SNP feature not enabled, SNP cannot be supported.\n");32193219+ goto disable_snp;32203220+ }32213221+32223222+ /*32233223+ * Enable host SNP support once SNP support is checked on IOMMU.32243224+ */32253225+ if (snp_rmptable_init()) {32263226+ pr_warn("SNP: RMP initialization failed, SNP cannot be supported.\n");32233227 goto disable_snp;32243228 }32253229···33293317 break;33303318 ret = state_next();33313319 }33203320+33213321+ /*33223322+ * SNP platform initialization requires IOMMUs to be fully configured.33233323+ * If the SNP support on IOMMUs has NOT been checked, simply mark SNP33243324+ * as unsupported. If the SNP support on IOMMUs has been checked and33253325+ * host SNP support enabled but RMP enforcement has not been enabled33263326+ * in IOMMUs, then the system is in a half-baked state, but can limp33273327+ * along as all memory should be Hypervisor-Owned in the RMP. WARN,33283328+ * but leave SNP as "supported" to avoid confusing the kernel.33293329+ */33303330+ if (ret && cc_platform_has(CC_ATTR_HOST_SEV_SNP) &&33313331+ !WARN_ON_ONCE(amd_iommu_snp_en))33323332+ cc_platform_clear(CC_ATTR_HOST_SEV_SNP);3332333333333334 return ret;33343335}···34513426 int ret;3452342734533428 if (no_iommu || (iommu_detected && !gart_iommu_aperture))34543454- return;34293429+ goto disable_snp;3455343034563431 if (!amd_iommu_sme_check())34573457- return;34323432+ goto disable_snp;3458343334593434 ret = iommu_go_to_state(IOMMU_IVRS_DETECTED);34603435 if (ret)34613461- return;34363436+ goto disable_snp;3462343734633438 amd_iommu_detected = true;34643439 iommu_detected = 1;34653440 x86_init.iommu.iommu_init = amd_iommu_init;34413441+ return;34423442+34433443+disable_snp:34443444+ if (cc_platform_has(CC_ATTR_HOST_SEV_SNP))34453445+ cc_platform_clear(CC_ATTR_HOST_SEV_SNP);34663446}3467344734683448/****************************************************************************
+3-3
drivers/iommu/exynos-iommu.c
···249249 struct list_head clients; /* list of sysmmu_drvdata.domain_node */250250 sysmmu_pte_t *pgtable; /* lv1 page table, 16KB */251251 short *lv2entcnt; /* free lv2 entry counter for each section */252252- spinlock_t lock; /* lock for modyfying list of clients */252252+ spinlock_t lock; /* lock for modifying list of clients */253253 spinlock_t pgtablelock; /* lock for modifying page table @ pgtable */254254 struct iommu_domain domain; /* generic domain data structure */255255};···292292 struct clk *aclk; /* SYSMMU's aclk clock */293293 struct clk *pclk; /* SYSMMU's pclk clock */294294 struct clk *clk_master; /* master's device clock */295295- spinlock_t lock; /* lock for modyfying state */295295+ spinlock_t lock; /* lock for modifying state */296296 bool active; /* current status */297297 struct exynos_iommu_domain *domain; /* domain we belong to */298298 struct list_head domain_node; /* node for domain clients list */···746746 ret = devm_request_irq(dev, irq, exynos_sysmmu_irq, 0,747747 dev_name(dev), data);748748 if (ret) {749749- dev_err(dev, "Unabled to register handler of irq %d\n", irq);749749+ dev_err(dev, "Unable to register handler of irq %d\n", irq);750750 return ret;751751 }752752
···17561756 group->id);1757175717581758 /*17591759- * Try to recover, drivers are allowed to force IDENITY or DMA, IDENTITY17591759+ * Try to recover, drivers are allowed to force IDENTITY or DMA, IDENTITY17601760 * takes precedence.17611761 */17621762 if (type == IOMMU_DOMAIN_IDENTITY)
+26-3
drivers/mfd/syscon.c
···159159}160160161161static struct regmap *device_node_get_regmap(struct device_node *np,162162+ bool create_regmap,162163 bool check_res)163164{164165 struct syscon *entry, *syscon = NULL;···173172 }174173175174 if (!syscon) {176176- if (of_device_is_compatible(np, "syscon"))175175+ if (create_regmap)177176 syscon = of_syscon_register(np, check_res);178177 else179178 syscon = ERR_PTR(-EINVAL);···234233}235234EXPORT_SYMBOL_GPL(of_syscon_register_regmap);236235236236+/**237237+ * device_node_to_regmap() - Get or create a regmap for specified device node238238+ * @np: Device tree node239239+ *240240+ * Get a regmap for the specified device node. If there's not an existing241241+ * regmap, then one is instantiated. This function should not be used if the242242+ * device node has a custom regmap driver or has resources (clocks, resets) to243243+ * be managed. Use syscon_node_to_regmap() instead for those cases.244244+ *245245+ * Return: regmap ptr on success, negative error code on failure.246246+ */237247struct regmap *device_node_to_regmap(struct device_node *np)238248{239239- return device_node_get_regmap(np, false);249249+ return device_node_get_regmap(np, true, false);240250}241251EXPORT_SYMBOL_GPL(device_node_to_regmap);242252253253+/**254254+ * syscon_node_to_regmap() - Get or create a regmap for specified syscon device node255255+ * @np: Device tree node256256+ *257257+ * Get a regmap for the specified device node. If there's not an existing258258+ * regmap, then one is instantiated if the node is a generic "syscon". This259259+ * function is safe to use for a syscon registered with260260+ * of_syscon_register_regmap().261261+ *262262+ * Return: regmap ptr on success, negative error code on failure.263263+ */243264struct regmap *syscon_node_to_regmap(struct device_node *np)244265{245245- return device_node_get_regmap(np, true);266266+ return device_node_get_regmap(np, of_device_is_compatible(np, "syscon"), true);246267}247268EXPORT_SYMBOL_GPL(syscon_node_to_regmap);248269
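The new kernel-doc draws a line between the two lookup helpers. A hedged consumer-side illustration of that distinction (the demo function is ours; the helpers are the ones documented above):

#include <linux/err.h>
#include <linux/mfd/syscon.h>
#include <linux/of.h>
#include <linux/regmap.h>

static int demo_attach(struct device_node *np)
{
	struct regmap *map;

	/*
	 * Creates a regmap even without a "syscon" compatible; only
	 * appropriate when nothing else manages the block's clocks/resets.
	 */
	map = device_node_to_regmap(np);
	if (IS_ERR(map))
		return PTR_ERR(map);

	/*
	 * Only instantiates for generic "syscon" nodes, but also finds
	 * regmaps registered earlier via of_syscon_register_regmap().
	 */
	map = syscon_node_to_regmap(np);
	return PTR_ERR_OR_ZERO(map);
}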
···21052105 /* hand second half of page back to the ring */21062106 ixgbe_reuse_rx_page(rx_ring, rx_buffer);21072107 } else {21082108- if (!IS_ERR(skb) && IXGBE_CB(skb)->dma == rx_buffer->dma) {21082108+ if (skb && IXGBE_CB(skb)->dma == rx_buffer->dma) {21092109 /* the page has been released from the ring */21102110 IXGBE_CB(skb)->page_released = true;21112111 } else {
···22652265 /* Allow the MAC to stop its clock if the PHY has the capability */22662266 pl->mac_tx_clk_stop = phy_eee_tx_clock_stop_capable(phy) > 0;2267226722682268- /* Explicitly configure whether the PHY is allowed to stop it's22692269- * receive clock.22702270- */22712271- ret = phy_eee_rx_clock_stop(phy, pl->config->eee_rx_clk_stop_enable);22722272- if (ret == -EOPNOTSUPP)22732273- ret = 0;22682268+ if (pl->mac_supports_eee_ops) {22692269+ /* Explicitly configure whether the PHY is allowed to stop its22702270+ * receive clock.22712271+ */22722272+ ret = phy_eee_rx_clock_stop(phy,22732273+ pl->config->eee_rx_clk_stop_enable);22742274+ if (ret == -EOPNOTSUPP)22752275+ ret = 0;22762276+ }2274227722752278 return ret;22762279}
+2-2
drivers/net/pse-pd/pse_core.c
···319319 goto out;320320 mW = ret;321321322322- ret = pse_pi_get_voltage(rdev);322322+ ret = _pse_pi_get_voltage(rdev);323323 if (!ret) {324324 dev_err(pcdev->dev, "Voltage null\n");325325 ret = -ERANGE;···356356357357 id = rdev_get_id(rdev);358358 mutex_lock(&pcdev->lock);359359- ret = pse_pi_get_voltage(rdev);359359+ ret = _pse_pi_get_voltage(rdev);360360 if (!ret) {361361 dev_err(pcdev->dev, "Voltage null\n");362362 ret = -ERANGE;
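The fix swaps in the lock-free _pse_pi_get_voltage() variant because these callers already hold pcdev->lock. A generic user-space sketch of that locked/unlocked helper split, with invented names and a pthread mutex standing in for the driver lock:

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int voltage_uv;

/* Leading underscore: the caller already holds the lock. */
static int _get_voltage(void)
{
	return voltage_uv;
}

/* Public variant: takes the lock itself. */
static int get_voltage(void)
{
	int uv;

	pthread_mutex_lock(&lock);
	uv = _get_voltage();	/* calling get_voltage() here would deadlock */
	pthread_mutex_unlock(&lock);
	return uv;
}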
+3-1
drivers/net/team/team_core.c
···26392639 ctx.data.u32_val = nla_get_u32(attr_data);26402640 break;26412641 case TEAM_OPTION_TYPE_STRING:26422642- if (nla_len(attr_data) > TEAM_STRING_MAX_LEN) {26422642+ if (nla_len(attr_data) > TEAM_STRING_MAX_LEN ||26432643+ !memchr(nla_data(attr_data), '\0',26442644+ nla_len(attr_data))) {26432645 err = -EINVAL;26442646 goto team_put;26452647 }
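The team fix rejects string options whose payload lacks a terminating NUL. A small sketch of the same validation for any netlink string attribute, with an illustrative length bound:

#include <linux/errno.h>
#include <linux/string.h>
#include <net/netlink.h>

#define DEMO_STRING_MAX_LEN 32	/* illustrative bound */

static int demo_get_string(const struct nlattr *attr, const char **out)
{
	/* Oversized or unterminated payloads are rejected outright. */
	if (nla_len(attr) > DEMO_STRING_MAX_LEN ||
	    !memchr(nla_data(attr), '\0', nla_len(attr)))
		return -EINVAL;

	*out = nla_data(attr);
	return 0;
}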
+5-2
drivers/net/vxlan/vxlan_core.c
···28982898 struct vxlan_dev *vxlan = netdev_priv(dev);28992899 int err;2900290029012901- if (vxlan->cfg.flags & VXLAN_F_VNIFILTER)29022902- vxlan_vnigroup_init(vxlan);29012901+ if (vxlan->cfg.flags & VXLAN_F_VNIFILTER) {29022902+ err = vxlan_vnigroup_init(vxlan);29032903+ if (err)29042904+ return err;29052905+ }2903290629042907 err = gro_cells_init(&vxlan->gro_cells, dev);29052908 if (err)
+45-16
drivers/net/wireless/ath/ath12k/wmi.c
···48514851 return reg_rule_ptr;48524852}4853485348544854+static u8 ath12k_wmi_ignore_num_extra_rules(struct ath12k_wmi_reg_rule_ext_params *rule,48554855+ u32 num_reg_rules)48564856+{48574857+ u8 num_invalid_5ghz_rules = 0;48584858+ u32 count, start_freq;48594859+48604860+ for (count = 0; count < num_reg_rules; count++) {48614861+ start_freq = le32_get_bits(rule[count].freq_info, REG_RULE_START_FREQ);48624862+48634863+ if (start_freq >= ATH12K_MIN_6G_FREQ)48644864+ num_invalid_5ghz_rules++;48654865+ }48664866+48674867+ return num_invalid_5ghz_rules;48684868+}48694869+48544870static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,48554871 struct sk_buff *skb,48564872 struct ath12k_reg_info *reg_info)···48774861 u32 num_2g_reg_rules, num_5g_reg_rules;48784862 u32 num_6g_reg_rules_ap[WMI_REG_CURRENT_MAX_AP_TYPE];48794863 u32 num_6g_reg_rules_cl[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];48644864+ u8 num_invalid_5ghz_ext_rules;48804865 u32 total_reg_rules = 0;48814866 int ret, i, j;48824867···49704953 }4971495449724955 memcpy(reg_info->alpha2, &ev->alpha2, REG_ALPHA2_LEN);49734973-49744974- /* FIXME: Currently FW includes 6G reg rule also in 5G rule49754975- * list for country US.49764976- * Having same 6G reg rule in 5G and 6G rules list causes49774977- * intersect check to be true, and same rules will be shown49784978- * multiple times in iw cmd. So added hack below to avoid49794979- * parsing 6G rule from 5G reg rule list, and this can be49804980- * removed later, after FW updates to remove 6G reg rule49814981- * from 5G rules list.49824982- */49834983- if (memcmp(reg_info->alpha2, "US", 2) == 0) {49844984- reg_info->num_5g_reg_rules = REG_US_5G_NUM_REG_RULES;49854985- num_5g_reg_rules = reg_info->num_5g_reg_rules;49864986- }4987495649884957 reg_info->dfs_region = le32_to_cpu(ev->dfs_region);49894958 reg_info->phybitmap = le32_to_cpu(ev->phybitmap);···50735070 }50745071 }5075507250735073+ ext_wmi_reg_rule += num_2g_reg_rules;50745074+50755075+ /* Firmware might include 6 GHz reg rule in 5 GHz rule list50765076+ * for few countries along with separate 6 GHz rule.50775077+ * Having same 6 GHz reg rule in 5 GHz and 6 GHz rules list50785078+ * causes intersect check to be true, and same rules will be50795079+ * shown multiple times in iw cmd.50805080+ * Hence, avoid parsing 6 GHz rule from 5 GHz reg rule list50815081+ */50825082+ num_invalid_5ghz_ext_rules = ath12k_wmi_ignore_num_extra_rules(ext_wmi_reg_rule,50835083+ num_5g_reg_rules);50845084+50855085+ if (num_invalid_5ghz_ext_rules) {50865086+ ath12k_dbg(ab, ATH12K_DBG_WMI,50875087+ "CC: %s 5 GHz reg rules number %d from fw, %d number of invalid 5 GHz rules",50885088+ reg_info->alpha2, reg_info->num_5g_reg_rules,50895089+ num_invalid_5ghz_ext_rules);50905090+50915091+ num_5g_reg_rules = num_5g_reg_rules - num_invalid_5ghz_ext_rules;50925092+ reg_info->num_5g_reg_rules = num_5g_reg_rules;50935093+ }50945094+50765095 if (num_5g_reg_rules) {50775077- ext_wmi_reg_rule += num_2g_reg_rules;50785096 reg_info->reg_rules_5g_ptr =50795097 create_ext_reg_rules_from_wmi(num_5g_reg_rules,50805098 ext_wmi_reg_rule);···51075083 }51085084 }5109508551105110- ext_wmi_reg_rule += num_5g_reg_rules;50865086+ /* We have adjusted the number of 5 GHz reg rules above. 
But still those50875087+ * many rules needs to be adjusted in ext_wmi_reg_rule.50885088+ *50895089+ * NOTE: num_invalid_5ghz_ext_rules will be 0 for rest other cases.50905090+ */50915091+ ext_wmi_reg_rule += (num_5g_reg_rules + num_invalid_5ghz_ext_rules);5111509251125093 for (i = 0; i < WMI_REG_CURRENT_MAX_AP_TYPE; i++) {51135094 reg_info->reg_rules_6g_ap_ptr[i] =
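The wmi.c change above replaces the old US-only workaround with a generic pass that counts 6 GHz entries the firmware left in the 5 GHz rule list, parses only the valid ones, and still advances the rule cursor past all of them. A rough standalone model of that arithmetic follows; the frequency cutoff and rule layout are simplified stand-ins for ATH12K_MIN_6G_FREQ and the WMI structures.

#include <stdio.h>

#define MIN_6G_FREQ_MHZ 5925    /* illustrative cutoff, not the driver constant */

struct rule {
        unsigned int start_freq;        /* MHz */
};

/* count 6 GHz entries that were wrongly reported in the 5 GHz list */
static unsigned int count_invalid_5ghz_rules(const struct rule *rules,
                                             unsigned int n)
{
        unsigned int i, invalid = 0;

        for (i = 0; i < n; i++)
                if (rules[i].start_freq >= MIN_6G_FREQ_MHZ)
                        invalid++;

        return invalid;
}

int main(void)
{
        struct rule rules_5g[] = {
                { 5170 }, { 5490 }, { 5735 }, { 5945 }, { 6435 },
        };
        unsigned int n = sizeof(rules_5g) / sizeof(rules_5g[0]);
        unsigned int invalid = count_invalid_5ghz_rules(rules_5g, n);

        /* parse only the valid entries, but advance the cursor past all of them */
        printf("valid 5 GHz rules: %u, skipped: %u\n", n - invalid, invalid);
        printf("cursor advance:    %u entries\n", n);
        return 0;
}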
···8989 seq_puts(s, items[i].display);9090 /* Print unit if available */9191 if (items[i].has_arg) {9292- seq_printf(s, " (0x%x",9393- pinconf_to_config_argument(config));9292+ u32 val = pinconf_to_config_argument(config);9393+9494 if (items[i].format)9595- seq_printf(s, " %s)", items[i].format);9595+ seq_printf(s, " (%u %s)", val, items[i].format);9696 else9797- seq_puts(s, ")");9797+ seq_printf(s, " (0x%x)", val);9898 }9999 }100100}
+25-17
drivers/pinctrl/pinctrl-cy8c95x0.c
···4242#define CY8C95X0_PORTSEL 0x184343/* Port settings, write PORTSEL first */4444#define CY8C95X0_INTMASK 0x194545-#define CY8C95X0_PWMSEL 0x1A4545+#define CY8C95X0_SELPWM 0x1A4646#define CY8C95X0_INVERT 0x1B4747#define CY8C95X0_DIRECTION 0x1C4848/* Drive mode register change state on writing '1' */···328328static bool cy8c95x0_readable_register(struct device *dev, unsigned int reg)329329{330330 /*331331- * Only 12 registers are present per port (see Table 6 in the332332- * datasheet).331331+ * Only 12 registers are present per port (see Table 6 in the datasheet).333332 */334334- if (reg >= CY8C95X0_VIRTUAL && (reg % MUXED_STRIDE) < 12)335335- return true;333333+ if (reg >= CY8C95X0_VIRTUAL && (reg % MUXED_STRIDE) >= 12)334334+ return false;336335337336 switch (reg) {338337 case 0x24 ... 0x27:338338+ case 0x31 ... 0x3f:339339 return false;340340 default:341341 return true;···344344345345static bool cy8c95x0_writeable_register(struct device *dev, unsigned int reg)346346{347347- if (reg >= CY8C95X0_VIRTUAL)348348- return true;347347+ /*348348+ * Only 12 registers are present per port (see Table 6 in the datasheet).349349+ */350350+ if (reg >= CY8C95X0_VIRTUAL && (reg % MUXED_STRIDE) >= 12)351351+ return false;349352350353 switch (reg) {351354 case CY8C95X0_INPUT_(0) ... CY8C95X0_INPUT_(7):···356353 case CY8C95X0_DEVID:357354 return false;358355 case 0x24 ... 0x27:356356+ case 0x31 ... 0x3f:359357 return false;360358 default:361359 return true;···369365 case CY8C95X0_INPUT_(0) ... CY8C95X0_INPUT_(7):370366 case CY8C95X0_INTSTATUS_(0) ... CY8C95X0_INTSTATUS_(7):371367 case CY8C95X0_INTMASK:368368+ case CY8C95X0_SELPWM:372369 case CY8C95X0_INVERT:373373- case CY8C95X0_PWMSEL:374370 case CY8C95X0_DIRECTION:375371 case CY8C95X0_DRV_PU:376372 case CY8C95X0_DRV_PD:···399395{400396 switch (reg) {401397 case CY8C95X0_INTMASK:402402- case CY8C95X0_PWMSEL:398398+ case CY8C95X0_SELPWM:403399 case CY8C95X0_INVERT:404400 case CY8C95X0_DIRECTION:405401 case CY8C95X0_DRV_PU:···470466 .max_register = 0, /* Updated at runtime */471467 .num_reg_defaults_raw = 0, /* Updated at runtime */472468 .use_single_read = true, /* Workaround for regcache bug */469469+#if IS_ENABLED(CONFIG_DEBUG_PINCTRL)470470+ .disable_locking = false,471471+#else473472 .disable_locking = true,473473+#endif474474};475475476476static inline int cy8c95x0_regmap_update_bits_base(struct cy8c95x0_pinctrl *chip,···797789 reg = CY8C95X0_DIRECTION;798790 break;799791 case PIN_CONFIG_MODE_PWM:800800- reg = CY8C95X0_PWMSEL;792792+ reg = CY8C95X0_SELPWM;801793 break;802794 case PIN_CONFIG_OUTPUT:803795 reg = CY8C95X0_OUTPUT;···876868 reg = CY8C95X0_DRV_PP_FAST;877869 break;878870 case PIN_CONFIG_MODE_PWM:879879- reg = CY8C95X0_PWMSEL;871871+ reg = CY8C95X0_SELPWM;880872 break;881873 case PIN_CONFIG_OUTPUT_ENABLE:882874 return cy8c95x0_pinmux_direction(chip, off, !arg);···11611153 bitmap_zero(mask, MAX_LINE);11621154 __set_bit(pin, mask);1163115511641164- if (cy8c95x0_read_regs_mask(chip, CY8C95X0_PWMSEL, pwm, mask)) {11561156+ if (cy8c95x0_read_regs_mask(chip, CY8C95X0_SELPWM, pwm, mask)) {11651157 seq_puts(s, "not available");11661158 return;11671159 }···12061198 u8 port = cypress_get_port(chip, off);12071199 u8 bit = cypress_get_pin_mask(chip, off);1208120012091209- return cy8c95x0_regmap_write_bits(chip, CY8C95X0_PWMSEL, port, bit, mode ? bit : 0);12011201+ return cy8c95x0_regmap_write_bits(chip, CY8C95X0_SELPWM, port, bit, mode ? 
bit : 0);12101202}1211120312121204static int cy8c95x0_pinmux_mode(struct cy8c95x0_pinctrl *chip,···1355134713561348 ret = devm_request_threaded_irq(chip->dev, irq,13571349 NULL, cy8c95x0_irq_handler,13581358- IRQF_ONESHOT | IRQF_SHARED | IRQF_TRIGGER_HIGH,13501350+ IRQF_ONESHOT | IRQF_SHARED,13591351 dev_name(chip->dev), chip);13601352 if (ret) {13611353 dev_err(chip->dev, "failed to request irq %d\n", irq);···14461438 switch (chip->tpin) {14471439 case 20:14481440 strscpy(chip->name, cy8c95x0_id[0].name);14491449- regmap_range_conf.range_max = CY8C95X0_VIRTUAL + 3 * MUXED_STRIDE;14411441+ regmap_range_conf.range_max = CY8C95X0_VIRTUAL + 3 * MUXED_STRIDE - 1;14501442 break;14511443 case 40:14521444 strscpy(chip->name, cy8c95x0_id[1].name);14531453- regmap_range_conf.range_max = CY8C95X0_VIRTUAL + 6 * MUXED_STRIDE;14451445+ regmap_range_conf.range_max = CY8C95X0_VIRTUAL + 6 * MUXED_STRIDE - 1;14541446 break;14551447 case 60:14561448 strscpy(chip->name, cy8c95x0_id[2].name);14571457- regmap_range_conf.range_max = CY8C95X0_VIRTUAL + 8 * MUXED_STRIDE;14491449+ regmap_range_conf.range_max = CY8C95X0_VIRTUAL + 8 * MUXED_STRIDE - 1;14581450 break;14591451 default:14601452 return -ENODEV;
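Several cy8c95x0 hunks tighten regmap's view of the per-port muxed window: only the first 12 registers of each stride block exist, and range_max now points at the last valid address rather than one past it. The sketch below models just that arithmetic; the base, stride and port-count values are placeholders chosen for the example, not the driver's real constants.

#include <stdbool.h>
#include <stdio.h>

#define VIRTUAL_BASE    0x40    /* placeholder for CY8C95X0_VIRTUAL */
#define MUXED_STRIDE    16      /* placeholder stride per port window */
#define REGS_PER_PORT   12      /* only the first 12 per window exist (Table 6) */

static bool muxed_reg_valid(unsigned int reg)
{
        if (reg < VIRTUAL_BASE)
                return true;            /* not a muxed register, other rules apply */
        return reg % MUXED_STRIDE < REGS_PER_PORT;
}

int main(void)
{
        unsigned int nports = 8;        /* e.g. the 60-pin variant */
        unsigned int range_max = VIRTUAL_BASE + nports * MUXED_STRIDE - 1;
        unsigned int reg;

        printf("range_max = 0x%x (last valid address, inclusive)\n", range_max);
        for (reg = VIRTUAL_BASE; reg < VIRTUAL_BASE + MUXED_STRIDE; reg++)
                printf("reg 0x%02x: %s\n", reg,
                       muxed_reg_valid(reg) ? "accessible" : "reserved");
        return 0;
}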
+65-20
drivers/platform/x86/intel/int3472/discrete.c
···22/* Author: Dan Scally <djrscally@gmail.com> */3344#include <linux/acpi.h>55+#include <linux/array_size.h>56#include <linux/bitfield.h>67#include <linux/device.h>78#include <linux/gpio/consumer.h>···56555756static int skl_int3472_fill_gpiod_lookup(struct gpiod_lookup *table_entry,5857 struct acpi_resource_gpio *agpio,5959- const char *func, u32 polarity)5858+ const char *func, unsigned long gpio_flags)6059{6160 char *path = agpio->resource_source.string_ptr;6261 struct acpi_device *adev;···7170 if (!adev)7271 return -ENODEV;73727474- *table_entry = GPIO_LOOKUP(acpi_dev_name(adev), agpio->pin_table[0], func, polarity);7373+ *table_entry = GPIO_LOOKUP(acpi_dev_name(adev), agpio->pin_table[0], func, gpio_flags);75747675 return 0;7776}78777978static int skl_int3472_map_gpio_to_sensor(struct int3472_discrete_device *int3472,8079 struct acpi_resource_gpio *agpio,8181- const char *func, u32 polarity)8080+ const char *func, unsigned long gpio_flags)8281{8382 int ret;8483···8887 }89889089 ret = skl_int3472_fill_gpiod_lookup(&int3472->gpios.table[int3472->n_sensor_gpios],9191- agpio, func, polarity);9090+ agpio, func, gpio_flags);9291 if (ret)9392 return ret;9493···101100static struct gpio_desc *102101skl_int3472_gpiod_get_from_temp_lookup(struct int3472_discrete_device *int3472,103102 struct acpi_resource_gpio *agpio,104104- const char *func, u32 polarity)103103+ const char *func, unsigned long gpio_flags)105104{106105 struct gpio_desc *desc;107106 int ret;···112111 return ERR_PTR(-ENOMEM);113112114113 lookup->dev_id = dev_name(int3472->dev);115115- ret = skl_int3472_fill_gpiod_lookup(&lookup->table[0], agpio, func, polarity);114114+ ret = skl_int3472_fill_gpiod_lookup(&lookup->table[0], agpio, func, gpio_flags);116115 if (ret)117116 return ERR_PTR(ret);118117···123122 return desc;124123}125124126126-static void int3472_get_func_and_polarity(u8 type, const char **func, u32 *polarity)125125+/**126126+ * struct int3472_gpio_map - Map GPIOs to whatever is expected by the127127+ * sensor driver (as in DT bindings)128128+ * @hid: The ACPI HID of the device without the instance number e.g. INT347E129129+ * @type_from: The GPIO type from ACPI ?SDT130130+ * @type_to: The assigned GPIO type, typically same as @type_from131131+ * @func: The function, e.g. "enable"132132+ * @polarity_low: GPIO_ACTIVE_LOW true if the @polarity_low is true,133133+ * GPIO_ACTIVE_HIGH otherwise134134+ */135135+struct int3472_gpio_map {136136+ const char *hid;137137+ u8 type_from;138138+ u8 type_to;139139+ bool polarity_low;140140+ const char *func;141141+};142142+143143+static const struct int3472_gpio_map int3472_gpio_map[] = {144144+ { "INT347E", INT3472_GPIO_TYPE_RESET, INT3472_GPIO_TYPE_RESET, false, "enable" },145145+};146146+147147+static void int3472_get_func_and_polarity(struct acpi_device *adev, u8 *type,148148+ const char **func, unsigned long *gpio_flags)127149{128128- switch (type) {150150+ unsigned int i;151151+152152+ for (i = 0; i < ARRAY_SIZE(int3472_gpio_map); i++) {153153+ /*154154+ * Map the firmware-provided GPIO to whatever a driver expects155155+ * (as in DT bindings). 
First check if the type matches with the156156+ * GPIO map, then further check that the device _HID matches.157157+ */158158+ if (*type != int3472_gpio_map[i].type_from)159159+ continue;160160+161161+ if (!acpi_dev_hid_uid_match(adev, int3472_gpio_map[i].hid, NULL))162162+ continue;163163+164164+ *type = int3472_gpio_map[i].type_to;165165+ *gpio_flags = int3472_gpio_map[i].polarity_low ?166166+ GPIO_ACTIVE_LOW : GPIO_ACTIVE_HIGH;167167+ *func = int3472_gpio_map[i].func;168168+ return;169169+ }170170+171171+ switch (*type) {129172 case INT3472_GPIO_TYPE_RESET:130173 *func = "reset";131131- *polarity = GPIO_ACTIVE_LOW;174174+ *gpio_flags = GPIO_ACTIVE_LOW;132175 break;133176 case INT3472_GPIO_TYPE_POWERDOWN:134177 *func = "powerdown";135135- *polarity = GPIO_ACTIVE_LOW;178178+ *gpio_flags = GPIO_ACTIVE_LOW;136179 break;137180 case INT3472_GPIO_TYPE_CLK_ENABLE:138181 *func = "clk-enable";139139- *polarity = GPIO_ACTIVE_HIGH;182182+ *gpio_flags = GPIO_ACTIVE_HIGH;140183 break;141184 case INT3472_GPIO_TYPE_PRIVACY_LED:142185 *func = "privacy-led";143143- *polarity = GPIO_ACTIVE_HIGH;186186+ *gpio_flags = GPIO_ACTIVE_HIGH;144187 break;145188 case INT3472_GPIO_TYPE_POWER_ENABLE:146189 *func = "power-enable";147147- *polarity = GPIO_ACTIVE_HIGH;190190+ *gpio_flags = GPIO_ACTIVE_HIGH;148191 break;149192 default:150193 *func = "unknown";151151- *polarity = GPIO_ACTIVE_HIGH;194194+ *gpio_flags = GPIO_ACTIVE_HIGH;152195 break;153196 }154197}···239194 struct gpio_desc *gpio;240195 const char *err_msg;241196 const char *func;242242- u32 polarity;197197+ unsigned long gpio_flags;243198 int ret;244199245200 if (!acpi_gpio_get_io_resource(ares, &agpio))···262217263218 type = FIELD_GET(INT3472_GPIO_DSM_TYPE, obj->integer.value);264219265265- int3472_get_func_and_polarity(type, &func, &polarity);220220+ int3472_get_func_and_polarity(int3472->sensor, &type, &func, &gpio_flags);266221267222 pin = FIELD_GET(INT3472_GPIO_DSM_PIN, obj->integer.value);268223 /* Pin field is not really used under Windows and wraps around at 8 bits */···272227273228 active_value = FIELD_GET(INT3472_GPIO_DSM_SENSOR_ON_VAL, obj->integer.value);274229 if (!active_value)275275- polarity ^= GPIO_ACTIVE_LOW;230230+ gpio_flags ^= GPIO_ACTIVE_LOW;276231277232 dev_dbg(int3472->dev, "%s %s pin %d active-%s\n", func,278233 agpio->resource_source.string_ptr, agpio->pin_table[0],279279- str_high_low(polarity == GPIO_ACTIVE_HIGH));234234+ str_high_low(gpio_flags == GPIO_ACTIVE_HIGH));280235281236 switch (type) {282237 case INT3472_GPIO_TYPE_RESET:283238 case INT3472_GPIO_TYPE_POWERDOWN:284284- ret = skl_int3472_map_gpio_to_sensor(int3472, agpio, func, polarity);239239+ ret = skl_int3472_map_gpio_to_sensor(int3472, agpio, func, gpio_flags);285240 if (ret)286241 err_msg = "Failed to map GPIO pin to sensor\n";287242···289244 case INT3472_GPIO_TYPE_CLK_ENABLE:290245 case INT3472_GPIO_TYPE_PRIVACY_LED:291246 case INT3472_GPIO_TYPE_POWER_ENABLE:292292- gpio = skl_int3472_gpiod_get_from_temp_lookup(int3472, agpio, func, polarity);247247+ gpio = skl_int3472_gpiod_get_from_temp_lookup(int3472, agpio, func, gpio_flags);293248 if (IS_ERR(gpio)) {294249 ret = PTR_ERR(gpio);295250 err_msg = "Failed to get GPIO\n";
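The discrete.c hunk adds a per-_HID override table that is consulted before the generic type-to-function switch. Below is a compact standalone mock of that two-stage lookup; the device IDs, enum values and defaults are illustrative only.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

enum gpio_type { TYPE_RESET, TYPE_POWERDOWN, TYPE_OTHER };

struct gpio_map {
        const char *hid;        /* ACPI _HID the override applies to */
        enum gpio_type from;
        enum gpio_type to;
        bool polarity_low;
        const char *func;
};

/* per-device overrides checked first, as in int3472_gpio_map[] */
static const struct gpio_map overrides[] = {
        { "INT347E", TYPE_RESET, TYPE_RESET, false, "enable" },
};

static void lookup(const char *hid, enum gpio_type *type,
                   const char **func, bool *active_low)
{
        for (size_t i = 0; i < sizeof(overrides) / sizeof(overrides[0]); i++) {
                if (overrides[i].from != *type)
                        continue;
                if (strcmp(overrides[i].hid, hid) != 0)
                        continue;
                *type = overrides[i].to;
                *func = overrides[i].func;
                *active_low = overrides[i].polarity_low;
                return;
        }

        /* generic fallback, like the switch statement in the driver */
        switch (*type) {
        case TYPE_RESET:     *func = "reset";     *active_low = true;  break;
        case TYPE_POWERDOWN: *func = "powerdown"; *active_low = true;  break;
        default:             *func = "unknown";   *active_low = false; break;
        }
}

int main(void)
{
        enum gpio_type t = TYPE_RESET;
        const char *func;
        bool low;

        lookup("INT347E", &t, &func, &low);
        printf("INT347E reset pin      -> func=%s active-%s\n", func, low ? "low" : "high");

        t = TYPE_RESET;
        lookup("INT3472", &t, &func, &low);
        printf("other device reset pin -> func=%s active-%s\n", func, low ? "low" : "high");
        return 0;
}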
+45-16
drivers/platform/x86/thinkpad_acpi.c
···7885788578867886#define FAN_NS_CTRL_STATUS BIT(2) /* Bit which determines control is enabled or not */78877887#define FAN_NS_CTRL BIT(4) /* Bit which determines control is by host or EC */78887888+#define FAN_CLOCK_TPM (22500*60) /* Ticks per minute for a 22.5 kHz clock */7888788978897890enum { /* Fan control constants */78907891 fan_status_offset = 0x2f, /* EC register 0x2f */···7941794079427941static bool fan_with_ns_addr;79437942static bool ecfw_with_fan_dec_rpm;79437943+static bool fan_speed_in_tpr;7944794479457945static struct mutex fan_mutex;79467946···81448142 !acpi_ec_read(fan_rpm_offset + 1, &hi)))81458143 return -EIO;8146814481478147- if (likely(speed))81458145+ if (likely(speed)) {81488146 *speed = (hi << 8) | lo;81478147+ if (fan_speed_in_tpr && *speed != 0)81488148+ *speed = FAN_CLOCK_TPM / *speed;81498149+ }81498150 break;81508151 case TPACPI_FAN_RD_TPEC_NS:81518152 if (!acpi_ec_read(fan_rpm_status_ns, &lo))···81818176 if (rc)81828177 return -EIO;8183817881848184- if (likely(speed))81798179+ if (likely(speed)) {81858180 *speed = (hi << 8) | lo;81818181+ if (fan_speed_in_tpr && *speed != 0)81828182+ *speed = FAN_CLOCK_TPM / *speed;81838183+ }81868184 break;8187818581888186 case TPACPI_FAN_RD_TPEC_NS:···87968788#define TPACPI_FAN_NOFAN 0x0008 /* no fan available */87978789#define TPACPI_FAN_NS 0x0010 /* For EC with non-Standard register addresses */87988790#define TPACPI_FAN_DECRPM 0x0020 /* For ECFW's with RPM in register as decimal */87918791+#define TPACPI_FAN_TPR 0x0040 /* Fan speed is in Ticks Per Revolution */8799879288008793static const struct tpacpi_quirk fan_quirk_table[] __initconst = {88018794 TPACPI_QEC_IBM('1', 'Y', TPACPI_FAN_Q1),···88268817 TPACPI_Q_LNV3('R', '0', 'V', TPACPI_FAN_NS), /* 11e Gen5 KL-Y */88278818 TPACPI_Q_LNV3('N', '1', 'O', TPACPI_FAN_NOFAN), /* X1 Tablet (2nd gen) */88288819 TPACPI_Q_LNV3('R', '0', 'Q', TPACPI_FAN_DECRPM),/* L480 */88208820+ TPACPI_Q_LNV('8', 'F', TPACPI_FAN_TPR), /* ThinkPad x120e */88298821};8830882288318823static int __init fan_init(struct ibm_init_struct *iibm)···8897888788988888 if (quirks & TPACPI_FAN_Q1)88998889 fan_quirk1_setup();88908890+ if (quirks & TPACPI_FAN_TPR)88918891+ fan_speed_in_tpr = true;89008892 /* Try and probe the 2nd fan */89018893 tp_features.second_fan = 1; /* needed for get_speed to work */89028894 res = fan2_get_speed(&speed);···1033110319#define DYTC_MODE_PSC_BALANCE 5 /* Default mode aka balanced */1033210320#define DYTC_MODE_PSC_PERFORM 7 /* High power mode aka performance */10333103211032210322+#define DYTC_MODE_PSCV9_LOWPOWER 1 /* Low power mode */1032310323+#define DYTC_MODE_PSCV9_BALANCE 3 /* Default mode aka balanced */1032410324+#define DYTC_MODE_PSCV9_PERFORM 4 /* High power mode aka performance */1032510325+1033410326#define DYTC_ERR_MASK 0xF /* Bits 0-3 in cmd result are the error result */1033510327#define DYTC_ERR_SUCCESS 1 /* CMD completed successful */1033610328···1035410338static int dytc_capabilities;1035510339static bool dytc_mmc_get_available;1035610340static int profile_force;1034110341+1034210342+static int platform_psc_profile_lowpower = DYTC_MODE_PSC_LOWPOWER;1034310343+static int platform_psc_profile_balanced = DYTC_MODE_PSC_BALANCE;1034410344+static int platform_psc_profile_performance = DYTC_MODE_PSC_PERFORM;10357103451035810346static int convert_dytc_to_profile(int funcmode, int dytcmode,1035910347 enum platform_profile_option *profile)···1038010360 }1038110361 return 0;1038210362 case DYTC_FUNCTION_PSC:1038310383- switch (dytcmode) {1038410384- case 
DYTC_MODE_PSC_LOWPOWER:1036310363+ if (dytcmode == platform_psc_profile_lowpower)1038510364 *profile = PLATFORM_PROFILE_LOW_POWER;1038610386- break;1038710387- case DYTC_MODE_PSC_BALANCE:1036510365+ else if (dytcmode == platform_psc_profile_balanced)1038810366 *profile = PLATFORM_PROFILE_BALANCED;1038910389- break;1039010390- case DYTC_MODE_PSC_PERFORM:1036710367+ else if (dytcmode == platform_psc_profile_performance)1039110368 *profile = PLATFORM_PROFILE_PERFORMANCE;1039210392- break;1039310393- default: /* Unknown mode */1036910369+ else1039410370 return -EINVAL;1039510395- }1037110371+1039610372 return 0;1039710373 case DYTC_FUNCTION_AMT:1039810374 /* For now return balanced. It's the closest we have to 'auto' */···1040910393 if (dytc_capabilities & BIT(DYTC_FC_MMC))1041010394 *perfmode = DYTC_MODE_MMC_LOWPOWER;1041110395 else if (dytc_capabilities & BIT(DYTC_FC_PSC))1041210412- *perfmode = DYTC_MODE_PSC_LOWPOWER;1039610396+ *perfmode = platform_psc_profile_lowpower;1041310397 break;1041410398 case PLATFORM_PROFILE_BALANCED:1041510399 if (dytc_capabilities & BIT(DYTC_FC_MMC))1041610400 *perfmode = DYTC_MODE_MMC_BALANCE;1041710401 else if (dytc_capabilities & BIT(DYTC_FC_PSC))1041810418- *perfmode = DYTC_MODE_PSC_BALANCE;1040210402+ *perfmode = platform_psc_profile_balanced;1041910403 break;1042010404 case PLATFORM_PROFILE_PERFORMANCE:1042110405 if (dytc_capabilities & BIT(DYTC_FC_MMC))1042210406 *perfmode = DYTC_MODE_MMC_PERFORM;1042310407 else if (dytc_capabilities & BIT(DYTC_FC_PSC))1042410424- *perfmode = DYTC_MODE_PSC_PERFORM;1040810408+ *perfmode = platform_psc_profile_performance;1042510409 break;1042610410 default: /* Unknown profile */1042710411 return -EOPNOTSUPP;···1061510599 if (output & BIT(DYTC_QUERY_ENABLE_BIT))1061610600 dytc_version = (output >> DYTC_QUERY_REV_BIT) & 0xF;10617106011060210602+ dbg_printk(TPACPI_DBG_INIT, "DYTC version %d\n", dytc_version);1061810603 /* Check DYTC is enabled and supports mode setting */1061910604 if (dytc_version < 5)1062010605 return -ENODEV;···1065410637 }1065510638 } else if (dytc_capabilities & BIT(DYTC_FC_PSC)) { /* PSC MODE */1065610639 pr_debug("PSC is supported\n");1064010640+ if (dytc_version >= 9) { /* update profiles for DYTC 9 and up */1064110641+ platform_psc_profile_lowpower = DYTC_MODE_PSCV9_LOWPOWER;1064210642+ platform_psc_profile_balanced = DYTC_MODE_PSCV9_BALANCE;1064310643+ platform_psc_profile_performance = DYTC_MODE_PSCV9_PERFORM;1064410644+ }1065710645 } else {1065810646 dbg_printk(TPACPI_DBG_INIT, "No DYTC support available\n");1065910647 return -ENODEV;···1066810646 "DYTC version %d: thermal mode available\n", dytc_version);10669106471067010648 /* Create platform_profile structure and register */1067110671- tpacpi_pprof = devm_platform_profile_register(&tpacpi_pdev->dev, "thinkpad-acpi",1067210672- NULL, &dytc_profile_ops);1064910649+ tpacpi_pprof = platform_profile_register(&tpacpi_pdev->dev, "thinkpad-acpi-profile",1065010650+ NULL, &dytc_profile_ops);1067310651 /*1067410652 * If for some reason platform_profiles aren't enabled1067510653 * don't quit terminally.···1068710665 return 0;1068810666}10689106671066810668+static void dytc_profile_exit(void)1066910669+{1067010670+ if (!IS_ERR_OR_NULL(tpacpi_pprof))1067110671+ platform_profile_remove(tpacpi_pprof);1067210672+}1067310673+1069010674static struct ibm_struct dytc_profile_driver_data = {1069110675 .name = "dytc-profile",1067610676+ .exit = 
dytc_profile_exit,1069210677};10693106781069410679/*************************************************************************
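With the TPACPI_FAN_TPR quirk above, some ECs report fan speed as ticks of a 22.5 kHz clock per revolution, so the driver converts the raw reading with FAN_CLOCK_TPM / ticks while guarding against a zero reading. A worked example of that conversion in plain C:

#include <stdio.h>

#define FAN_CLOCK_TPM (22500 * 60)      /* 22.5 kHz clock -> ticks per minute */

/* convert an EC "ticks per revolution" reading to RPM; 0 means fan stopped */
static unsigned int tpr_to_rpm(unsigned int ticks)
{
        return ticks ? FAN_CLOCK_TPM / ticks : 0;
}

int main(void)
{
        unsigned int readings[] = { 0, 300, 450, 675 };

        for (unsigned int i = 0; i < sizeof(readings) / sizeof(readings[0]); i++)
                printf("%u ticks/rev -> %u RPM\n", readings[i], tpr_to_rpm(readings[i]));
        /* 450 ticks/rev on a 22.5 kHz clock is 1350000 / 450 = 3000 RPM */
        return 0;
}

The zero guard matters because a stopped fan would otherwise divide by zero.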
+21-26
drivers/ptp/ptp_vmclock.c
···414414}415415416416static const struct file_operations vmclock_miscdev_fops = {417417+ .owner = THIS_MODULE,417418 .mmap = vmclock_miscdev_mmap,418419 .read = vmclock_miscdev_read,419420};420421421422/* module operations */422423423423-static void vmclock_remove(struct platform_device *pdev)424424+static void vmclock_remove(void *data)424425{425425- struct device *dev = &pdev->dev;426426- struct vmclock_state *st = dev_get_drvdata(dev);426426+ struct vmclock_state *st = data;427427428428 if (st->ptp_clock)429429 ptp_clock_unregister(st->ptp_clock);···506506507507 if (ret) {508508 dev_info(dev, "Failed to obtain physical address: %d\n", ret);509509- goto out;509509+ return ret;510510 }511511512512 if (resource_size(&st->res) < VMCLOCK_MIN_SIZE) {513513 dev_info(dev, "Region too small (0x%llx)\n",514514 resource_size(&st->res));515515- ret = -EINVAL;516516- goto out;515515+ return -EINVAL;517516 }518517 st->clk = devm_memremap(dev, st->res.start, resource_size(&st->res),519518 MEMREMAP_WB | MEMREMAP_DEC);···520521 ret = PTR_ERR(st->clk);521522 dev_info(dev, "failed to map shared memory\n");522523 st->clk = NULL;523523- goto out;524524+ return ret;524525 }525526526527 if (le32_to_cpu(st->clk->magic) != VMCLOCK_MAGIC ||527528 le32_to_cpu(st->clk->size) > resource_size(&st->res) ||528529 le16_to_cpu(st->clk->version) != 1) {529530 dev_info(dev, "vmclock magic fields invalid\n");530530- ret = -EINVAL;531531- goto out;531531+ return -EINVAL;532532 }533533534534 ret = ida_alloc(&vmclock_ida, GFP_KERNEL);535535 if (ret < 0)536536- goto out;536536+ return ret;537537538538 st->index = ret;539539 ret = devm_add_action_or_reset(&pdev->dev, vmclock_put_idx, st);540540 if (ret)541541- goto out;541541+ return ret;542542543543 st->name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "vmclock%d", st->index);544544- if (!st->name) {545545- ret = -ENOMEM;546546- goto out;547547- }544544+ if (!st->name)545545+ return -ENOMEM;546546+547547+ st->miscdev.minor = MISC_DYNAMIC_MINOR;548548+549549+ ret = devm_add_action_or_reset(&pdev->dev, vmclock_remove, st);550550+ if (ret)551551+ return ret;548552549553 /*550554 * If the structure is big enough, it can be mapped to userspace.···556554 * cross that bridge if/when we come to it.557555 */558556 if (le32_to_cpu(st->clk->size) >= PAGE_SIZE) {559559- st->miscdev.minor = MISC_DYNAMIC_MINOR;560557 st->miscdev.fops = &vmclock_miscdev_fops;561558 st->miscdev.name = st->name;562559563560 ret = misc_register(&st->miscdev);564561 if (ret)565565- goto out;562562+ return ret;566563 }567564568565 /* If there is valid clock information, register a PTP clock */···571570 if (IS_ERR(st->ptp_clock)) {572571 ret = PTR_ERR(st->ptp_clock);573572 st->ptp_clock = NULL;574574- vmclock_remove(pdev);575575- goto out;573573+ return ret;576574 }577575 }578576579577 if (!st->miscdev.minor && !st->ptp_clock) {580578 /* Neither miscdev nor PTP registered */581579 dev_info(dev, "vmclock: Neither miscdev nor PTP available; not registering\n");582582- ret = -ENODEV;583583- goto out;580580+ return -ENODEV;584581 }585582586583 dev_info(dev, "%s: registered %s%s%s\n", st->name,···586587 (st->miscdev.minor && st->ptp_clock) ? ", " : "",587588 st->ptp_clock ? 
"PTP" : "");588589589589- dev_set_drvdata(dev, st);590590-591591- out:592592- return ret;590590+ return 0;593591}594592595593static const struct acpi_device_id vmclock_acpi_ids[] = {···597601598602static struct platform_driver vmclock_platform_driver = {599603 .probe = vmclock_probe,600600- .remove = vmclock_remove,601604 .driver = {602605 .name = "vmclock",603606 .acpi_match_table = vmclock_acpi_ids,
+27-34
drivers/regulator/core.c
···57745774 goto clean;57755775 }5776577657775777- if (config->init_data) {57785778- /*57795779- * Providing of_match means the framework is expected to parse57805780- * DT to get the init_data. This would conflict with provided57815781- * init_data, if set. Warn if it happens.57825782- */57835783- if (regulator_desc->of_match)57845784- dev_warn(dev, "Using provided init data - OF match ignored\n");57775777+ /*57785778+ * DT may override the config->init_data provided if the platform57795779+ * needs to do so. If so, config->init_data is completely ignored.57805780+ */57815781+ init_data = regulator_of_get_init_data(dev, regulator_desc, config,57825782+ &rdev->dev.of_node);5785578357845784+ /*57855785+ * Sometimes not all resources are probed already so we need to take57865786+ * that into account. This happens most the time if the ena_gpiod comes57875787+ * from a gpio extender or something else.57885788+ */57895789+ if (PTR_ERR(init_data) == -EPROBE_DEFER) {57905790+ ret = -EPROBE_DEFER;57915791+ goto clean;57925792+ }57935793+57945794+ /*57955795+ * We need to keep track of any GPIO descriptor coming from the57965796+ * device tree until we have handled it over to the core. If the57975797+ * config that was passed in to this function DOES NOT contain57985798+ * a descriptor, and the config after this call DOES contain57995799+ * a descriptor, we definitely got one from parsing the device58005800+ * tree.58015801+ */58025802+ if (!cfg->ena_gpiod && config->ena_gpiod)58035803+ dangling_of_gpiod = true;58045804+ if (!init_data) {57865805 init_data = config->init_data;57875806 rdev->dev.of_node = of_node_get(config->of_node);57885788-57895789- } else {57905790- init_data = regulator_of_get_init_data(dev, regulator_desc,57915791- config,57925792- &rdev->dev.of_node);57935793-57945794- /*57955795- * Sometimes not all resources are probed already so we need to57965796- * take that into account. This happens most the time if the57975797- * ena_gpiod comes from a gpio extender or something else.57985798- */57995799- if (PTR_ERR(init_data) == -EPROBE_DEFER) {58005800- ret = -EPROBE_DEFER;58015801- goto clean;58025802- }58035803-58045804- /*58055805- * We need to keep track of any GPIO descriptor coming from the58065806- * device tree until we have handled it over to the core. If the58075807- * config that was passed in to this function DOES NOT contain a58085808- * descriptor, and the config after this call DOES contain a58095809- * descriptor, we definitely got one from parsing the device58105810- * tree.58115811- */58125812- if (!cfg->ena_gpiod && config->ena_gpiod)58135813- dangling_of_gpiod = true;58145807 }5815580858165809 ww_mutex_init(&rdev->mutex, ®ulator_ww_class);
+2-1
drivers/s390/cio/chp.c
···695695 if (time_after(jiffies, chp_info_expires)) {696696 /* Data is too old, update. */697697 rc = sclp_chp_read_info(&chp_info);698698- chp_info_expires = jiffies + CHP_INFO_UPDATE_INTERVAL ;698698+ if (!rc)699699+ chp_info_expires = jiffies + CHP_INFO_UPDATE_INTERVAL;699700 }700701 mutex_unlock(&info_lock);701702
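The chp.c fix only moves the expiry time forward when sclp_chp_read_info() succeeded, so a failed refresh is retried on the next access instead of serving stale data for another full interval. A tiny standalone model of that rule, with a fake clock and refresh function:

#include <stdio.h>

#define UPDATE_INTERVAL 60      /* fake "jiffies" */

static long expires;
static int cached_value = -1;

/* pretend refresh: fails when told to */
static int refresh(int fail)
{
        if (fail)
                return -5;      /* stands in for -EIO */
        cached_value = 42;
        return 0;
}

static int get_value(long now, int fail_refresh)
{
        if (now >= expires) {
                int rc = refresh(fail_refresh);

                /* only extend the validity window on success */
                if (!rc)
                        expires = now + UPDATE_INTERVAL;
        }
        return cached_value;
}

int main(void)
{
        printf("t=0  (refresh fails): %d\n", get_value(0, 1));
        printf("t=1  (retry works):   %d\n", get_value(1, 0));
        printf("t=30 (cache hit):     %d\n", get_value(30, 1));
        return 0;
}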
···5757 * @max_level: maximum cooling level. One less than total number of valid5858 * cpufreq frequencies.5959 * @em: Reference on the Energy Model of the device6060- * @cdev: thermal_cooling_device pointer to keep track of the6161- * registered cooling device.6260 * @policy: cpufreq policy.6361 * @cooling_ops: cpufreq callbacks to thermal cooling device ops6462 * @idle_time: idle time stats
+2
drivers/tty/serial/8250/8250.h
···374374375375#ifdef CONFIG_SERIAL_8250_DMA376376extern int serial8250_tx_dma(struct uart_8250_port *);377377+extern void serial8250_tx_dma_flush(struct uart_8250_port *);377378extern int serial8250_rx_dma(struct uart_8250_port *);378379extern void serial8250_rx_dma_flush(struct uart_8250_port *);379380extern int serial8250_request_dma(struct uart_8250_port *);···407406{408407 return -1;409408}409409+static inline void serial8250_tx_dma_flush(struct uart_8250_port *p) { }410410static inline int serial8250_rx_dma(struct uart_8250_port *p)411411{412412 return -1;
+16
drivers/tty/serial/8250/8250_dma.c
···149149 return ret;150150}151151152152+void serial8250_tx_dma_flush(struct uart_8250_port *p)153153+{154154+ struct uart_8250_dma *dma = p->dma;155155+156156+ if (!dma->tx_running)157157+ return;158158+159159+ /*160160+ * kfifo_reset() has been called by the serial core, avoid161161+ * advancing and underflowing in __dma_tx_complete().162162+ */163163+ dma->tx_size = 0;164164+165165+ dmaengine_terminate_async(dma->rxchan);166166+}167167+152168int serial8250_rx_dma(struct uart_8250_port *p)153169{154170 struct uart_8250_dma *dma = p->dma;
···15611561 /* Always ask for fixed clock rate from a property. */15621562 device_property_read_u32(dev, "clock-frequency", &uartclk);1563156315641564- s->polling = !!irq;15641564+ s->polling = (irq <= 0);15651565 if (s->polling)15661566 dev_dbg(dev,15671567 "No interrupt pin definition, falling back to polling mode\n");
+7-5
drivers/tty/serial/serial_port.c
···173173 * The caller is responsible to initialize the following fields of the @port174174 * ->dev (must be valid)175175 * ->flags176176+ * ->iobase176177 * ->mapbase177178 * ->mapsize178179 * ->regshift (if @use_defaults is false)···215214 /* Read the registers I/O access type (default: MMIO 8-bit) */216215 ret = device_property_read_u32(dev, "reg-io-width", &value);217216 if (ret) {218218- port->iotype = UPIO_MEM;217217+ port->iotype = port->iobase ? UPIO_PORT : UPIO_MEM;219218 } else {220219 switch (value) {221220 case 1:···228227 port->iotype = device_is_big_endian(dev) ? UPIO_MEM32BE : UPIO_MEM32;229228 break;230229 default:231231- if (!use_defaults) {232232- dev_err(dev, "Unsupported reg-io-width (%u)\n", value);233233- return -EINVAL;234234- }235230 port->iotype = UPIO_UNKNOWN;236231 break;237232 }233233+ }234234+235235+ if (!use_defaults && port->iotype == UPIO_UNKNOWN) {236236+ dev_err(dev, "Unsupported reg-io-width (%u)\n", value);237237+ return -EINVAL;238238 }239239240240 /* Read the address mapping base offset (default: no offset) */
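serial_port.c now derives the default iotype from whether an iobase was supplied when reg-io-width is absent, and only rejects an unsupported width when defaults are not allowed. A standalone sketch of that decision; the enum values are stand-ins for the UPIO_* constants and the error code mirrors -EINVAL.

#include <stdbool.h>
#include <stdio.h>

enum iotype { IO_UNKNOWN, IO_PORT, IO_MEM, IO_MEM16, IO_MEM32 };

/* width == 0 models a missing "reg-io-width" property */
static int pick_iotype(unsigned int width, bool have_iobase, bool use_defaults,
                       enum iotype *out)
{
        switch (width) {
        case 0:  *out = have_iobase ? IO_PORT : IO_MEM; break;
        case 1:  *out = IO_MEM;   break;
        case 2:  *out = IO_MEM16; break;
        case 4:  *out = IO_MEM32; break;
        default: *out = IO_UNKNOWN; break;
        }

        if (!use_defaults && *out == IO_UNKNOWN)
                return -22;     /* -EINVAL, as in the driver */
        return 0;
}

int main(void)
{
        enum iotype t;

        pick_iotype(0, true, true, &t);
        printf("no property + iobase   -> iotype %d (IO_PORT=%d)\n", t, IO_PORT);
        printf("bad width, no defaults -> rc=%d\n", pick_iotype(3, false, false, &t));
        return 0;
}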
+21-7
drivers/usb/class/cdc-acm.c
···371371static void acm_ctrl_irq(struct urb *urb)372372{373373 struct acm *acm = urb->context;374374- struct usb_cdc_notification *dr = urb->transfer_buffer;374374+ struct usb_cdc_notification *dr;375375 unsigned int current_size = urb->actual_length;376376 unsigned int expected_size, copy_size, alloc_size;377377 int retval;···398398399399 usb_mark_last_busy(acm->dev);400400401401- if (acm->nb_index)401401+ if (acm->nb_index == 0) {402402+ /*403403+ * The first chunk of a message must contain at least the404404+ * notification header with the length field, otherwise we405405+ * can't get an expected_size.406406+ */407407+ if (current_size < sizeof(struct usb_cdc_notification)) {408408+ dev_dbg(&acm->control->dev, "urb too short\n");409409+ goto exit;410410+ }411411+ dr = urb->transfer_buffer;412412+ } else {402413 dr = (struct usb_cdc_notification *)acm->notification_buffer;403403-414414+ }404415 /* size = notification-header + (optional) data */405416 expected_size = sizeof(struct usb_cdc_notification) +406417 le16_to_cpu(dr->wLength);407418408408- if (current_size < expected_size) {419419+ if (acm->nb_index != 0 || current_size < expected_size) {409420 /* notification is transmitted fragmented, reassemble */410421 if (acm->nb_size < expected_size) {411422 u8 *new_buffer;···17381727 { USB_DEVICE(0x0870, 0x0001), /* Metricom GS Modem */17391728 .driver_info = NO_UNION_NORMAL, /* has no union descriptor */17401729 },17411741- { USB_DEVICE(0x045b, 0x023c), /* Renesas USB Download mode */17301730+ { USB_DEVICE(0x045b, 0x023c), /* Renesas R-Car H3 USB Download mode */17421731 .driver_info = DISABLE_ECHO, /* Don't echo banner */17431732 },17441744- { USB_DEVICE(0x045b, 0x0248), /* Renesas USB Download mode */17331733+ { USB_DEVICE(0x045b, 0x0247), /* Renesas R-Car D3 USB Download mode */17451734 .driver_info = DISABLE_ECHO, /* Don't echo banner */17461735 },17471747- { USB_DEVICE(0x045b, 0x024D), /* Renesas USB Download mode */17361736+ { USB_DEVICE(0x045b, 0x0248), /* Renesas R-Car M3-N USB Download mode */17371737+ .driver_info = DISABLE_ECHO, /* Don't echo banner */17381738+ },17391739+ { USB_DEVICE(0x045b, 0x024D), /* Renesas R-Car E3 USB Download mode */17481740 .driver_info = DISABLE_ECHO, /* Don't echo banner */17491741 },17501742 { USB_DEVICE(0x0e8d, 0x0003), /* FIREFLY, MediaTek Inc; andrey.arapov@gmail.com */
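The acm_ctrl_irq() change refuses to read wLength from a first fragment shorter than the notification header and keeps reassembling whenever a message is already buffered. The standalone mock below shows the header-length guard only; the struct is cut down to the fields that matter and the example assumes a little-endian host.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct notif_hdr {
        uint8_t  type;
        uint16_t wLength;       /* little-endian on the wire */
} __attribute__((packed));

/* returns the expected total message size, or 0 if the first fragment is too
 * short to carry the header - in which case the fixed driver drops it */
static size_t parse_first_fragment(const uint8_t *chunk, size_t len)
{
        struct notif_hdr hdr;

        if (len < sizeof(hdr))
                return 0;

        memcpy(&hdr, chunk, sizeof(hdr));
        return sizeof(hdr) + hdr.wLength;
}

int main(void)
{
        uint8_t runt[2] = { 0x20, 0x00 };
        uint8_t good[3] = { 0x20, 0x04, 0x00 };         /* wLength = 4 */

        printf("runt first fragment  -> %zu (rejected)\n",
               parse_first_fragment(runt, sizeof(runt)));
        printf("valid first fragment -> %zu bytes expected in total\n",
               parse_first_fragment(good, sizeof(good)));
        return 0;
}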
+12-2
drivers/usb/core/hub.c
···18491849 hdev = interface_to_usbdev(intf);1850185018511851 /*18521852+ * The USB 2.0 spec prohibits hubs from having more than one18531853+ * configuration or interface, and we rely on this prohibition.18541854+ * Refuse to accept a device that violates it.18551855+ */18561856+ if (hdev->descriptor.bNumConfigurations > 1 ||18571857+ hdev->actconfig->desc.bNumInterfaces > 1) {18581858+ dev_err(&intf->dev, "Invalid hub with more than one config or interface\n");18591859+ return -EINVAL;18601860+ }18611861+18621862+ /*18521863 * Set default autosuspend delay as 0 to speedup bus suspend,18531864 * based on the below considerations:18541865 *···47094698EXPORT_SYMBOL_GPL(usb_ep0_reinit);4710469947114700#define usb_sndaddr0pipe() (PIPE_CONTROL << 30)47124712-#define usb_rcvaddr0pipe() ((PIPE_CONTROL << 30) | USB_DIR_IN)4713470147144702static int hub_set_address(struct usb_device *udev, int devnum)47154703{···48144804 for (i = 0; i < GET_MAXPACKET0_TRIES; ++i) {48154805 /* Start with invalid values in case the transfer fails */48164806 buf->bDescriptorType = buf->bMaxPacketSize0 = 0;48174817- rc = usb_control_msg(udev, usb_rcvaddr0pipe(),48074807+ rc = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),48184808 USB_REQ_GET_DESCRIPTOR, USB_DIR_IN,48194809 USB_DT_DEVICE << 8, 0,48204810 buf, size,
···717717/**718718 * struct dwc3_ep - device side endpoint representation719719 * @endpoint: usb endpoint720720+ * @nostream_work: work for handling bulk NoStream720721 * @cancelled_list: list of cancelled requests for this endpoint721722 * @pending_list: list of pending requests for this endpoint722723 * @started_list: list of started requests on this endpoint
+34
drivers/usb/dwc3/gadget.c
···26292629{26302630 u32 reg;26312631 u32 timeout = 2000;26322632+ u32 saved_config = 0;2632263326332634 if (pm_runtime_suspended(dwc->dev))26342635 return 0;26362636+26372637+ /*26382638+ * When operating in USB 2.0 speeds (HS/FS), ensure that26392639+ * GUSB2PHYCFG.ENBLSLPM and GUSB2PHYCFG.SUSPHY are cleared before starting26402640+ * or stopping the controller. This resolves timeout issues that occur26412641+ * during frequent role switches between host and device modes.26422642+ *26432643+ * Save and clear these settings, then restore them after completing the26442644+ * controller start or stop sequence.26452645+ *26462646+ * This solution was discovered through experimentation as it is not26472647+ * mentioned in the dwc3 programming guide. It has been tested on an26482648+ * Exynos platforms.26492649+ */26502650+ reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));26512651+ if (reg & DWC3_GUSB2PHYCFG_SUSPHY) {26522652+ saved_config |= DWC3_GUSB2PHYCFG_SUSPHY;26532653+ reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;26542654+ }26552655+26562656+ if (reg & DWC3_GUSB2PHYCFG_ENBLSLPM) {26572657+ saved_config |= DWC3_GUSB2PHYCFG_ENBLSLPM;26582658+ reg &= ~DWC3_GUSB2PHYCFG_ENBLSLPM;26592659+ }26602660+26612661+ if (saved_config)26622662+ dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);2635266326362664 reg = dwc3_readl(dwc->regs, DWC3_DCTL);26372665 if (is_on) {···26872659 reg = dwc3_readl(dwc->regs, DWC3_DSTS);26882660 reg &= DWC3_DSTS_DEVCTRLHLT;26892661 } while (--timeout && !(!is_on ^ !reg));26622662+26632663+ if (saved_config) {26642664+ reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));26652665+ reg |= saved_config;26662666+ dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);26672667+ }2690266826912669 if (!timeout)26922670 return -ETIMEDOUT;
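The dwc3 run/stop hunk saves whichever of the SUSPHY/ENBLSLPM bits were set, clears them for the duration of the command, and restores exactly those bits afterwards. A standalone sketch of that save/clear/restore pattern over a fake register; the bit positions are placeholders, not the real GUSB2PHYCFG layout.

#include <stdint.h>
#include <stdio.h>

#define BIT_SUSPHY   (1u << 6)          /* placeholder positions */
#define BIT_ENBLSLPM (1u << 8)

static uint32_t phycfg = BIT_SUSPHY | 0x11;     /* pretend hardware register */

static uint32_t reg_read(void)        { return phycfg; }
static void     reg_write(uint32_t v) { phycfg = v; }

static void run_stop(void)
{
        uint32_t reg = reg_read();
        uint32_t saved = reg & (BIT_SUSPHY | BIT_ENBLSLPM);

        /* clear only the bits that were actually set */
        if (saved)
                reg_write(reg & ~saved);

        /* ... issue the run/stop command and poll for completion here ... */

        /* restore whatever was cleared, leaving the other bits untouched */
        if (saved)
                reg_write(reg_read() | saved);
}

int main(void)
{
        printf("before: 0x%03x\n", phycfg);
        run_stop();
        printf("after:  0x%03x (unchanged)\n", phycfg);
        return 0;
}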
+14-5
drivers/usb/gadget/function/f_midi.c
···283283 /* Our transmit completed. See if there's more to go.284284 * f_midi_transmit eats req, don't queue it again. */285285 req->length = 0;286286- f_midi_transmit(midi);286286+ queue_work(system_highpri_wq, &midi->work);287287 return;288288 }289289 break;···907907908908 status = -ENODEV;909909910910+ /*911911+ * Reset wMaxPacketSize with maximum packet size of FS bulk transfer before912912+ * endpoint claim. This ensures that the wMaxPacketSize does not exceed the913913+ * limit during bind retries where configured dwc3 TX/RX FIFO's maxpacket914914+ * size of 512 bytes for IN/OUT endpoints in support HS speed only.915915+ */916916+ bulk_in_desc.wMaxPacketSize = cpu_to_le16(64);917917+ bulk_out_desc.wMaxPacketSize = cpu_to_le16(64);918918+910919 /* allocate instance-specific endpoints */911920 midi->in_ep = usb_ep_autoconfig(cdev->gadget, &bulk_in_desc);912921 if (!midi->in_ep)···10091000 }1010100110111002 /* configure the endpoint descriptors ... */10121012- ms_out_desc.bLength = USB_DT_MS_ENDPOINT_SIZE(midi->in_ports);10131013- ms_out_desc.bNumEmbMIDIJack = midi->in_ports;10031003+ ms_out_desc.bLength = USB_DT_MS_ENDPOINT_SIZE(midi->out_ports);10041004+ ms_out_desc.bNumEmbMIDIJack = midi->out_ports;1014100510151015- ms_in_desc.bLength = USB_DT_MS_ENDPOINT_SIZE(midi->out_ports);10161016- ms_in_desc.bNumEmbMIDIJack = midi->out_ports;10061006+ ms_in_desc.bLength = USB_DT_MS_ENDPOINT_SIZE(midi->in_ports);10071007+ ms_in_desc.bNumEmbMIDIJack = midi->in_ports;1017100810181009 /* ... and add them to the list */10191010 endpoint_descriptor_index = i;
+1-1
drivers/usb/gadget/function/uvc_video.c
···818818 return -EINVAL;819819820820 /* Allocate a kthread for asynchronous hw submit handler. */821821- video->kworker = kthread_create_worker(0, "UVCG");821821+ video->kworker = kthread_run_worker(0, "UVCG");822822 if (IS_ERR(video->kworker)) {823823 uvcg_err(&video->uvc->func, "failed to create UVCG kworker\n");824824 return PTR_ERR(video->kworker);
···958958 * booting from USB disk or using a usb keyboard959959 */960960 hcc_params = readl(base + EHCI_HCC_PARAMS);961961+962962+ /* LS7A EHCI controller doesn't have extended capabilities, the963963+ * EECP (EHCI Extended Capabilities Pointer) field of HCCPARAMS964964+ * register should be 0x0 but it reads as 0xa0. So clear it to965965+ * avoid error messages on boot.966966+ */967967+ if (pdev->vendor == PCI_VENDOR_ID_LOONGSON && pdev->device == 0x7a14)968968+ hcc_params &= ~(0xffL << 8);969969+961970 offset = (hcc_params >> 8) & 0xff;962971 while (offset && --count) {963972 pci_read_config_dword(pdev, offset, &cap);
+4-3
drivers/usb/host/xhci-pci.c
···653653}654654EXPORT_SYMBOL_NS_GPL(xhci_pci_common_probe, "xhci");655655656656-static const struct pci_device_id pci_ids_reject[] = {657657- /* handled by xhci-pci-renesas */656656+/* handled by xhci-pci-renesas if enabled */657657+static const struct pci_device_id pci_ids_renesas[] = {658658 { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, 0x0014) },659659 { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, 0x0015) },660660 { /* end: all zeroes */ }···662662663663static int xhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)664664{665665- if (pci_match_id(pci_ids_reject, dev))665665+ if (IS_ENABLED(CONFIG_USB_XHCI_PCI_RENESAS) &&666666+ pci_match_id(pci_ids_renesas, dev))666667 return -ENODEV;667668668669 return xhci_pci_common_probe(dev, id);
···7474 return xen_bus_to_phys(dev, dma_to_phys(dev, dma_addr));7575}76767777+static inline bool range_requires_alignment(phys_addr_t p, size_t size)7878+{7979+ phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);8080+ phys_addr_t bus_addr = pfn_to_bfn(XEN_PFN_DOWN(p)) << XEN_PAGE_SHIFT;8181+8282+ return IS_ALIGNED(p, algn) && !IS_ALIGNED(bus_addr, algn);8383+}8484+7785static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)7886{7987 unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);8088 unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);8181- phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);82898390 next_bfn = pfn_to_bfn(xen_pfn);8484-8585- /* If buffer is physically aligned, ensure DMA alignment. */8686- if (IS_ALIGNED(p, algn) &&8787- !IS_ALIGNED((phys_addr_t)next_bfn << XEN_PAGE_SHIFT, algn))8888- return 1;89919092 for (i = 1; i < nr_pages; i++)9193 if (pfn_to_bfn(++xen_pfn) != ++next_bfn)···113111}114112115113#ifdef CONFIG_X86116116-int xen_swiotlb_fixup(void *buf, unsigned long nslabs)114114+int __init xen_swiotlb_fixup(void *buf, unsigned long nslabs)117115{118116 int rc;119117 unsigned int order = get_order(IO_TLB_SEGSIZE << IO_TLB_SHIFT);···158156159157 *dma_handle = xen_phys_to_dma(dev, phys);160158 if (*dma_handle + size - 1 > dma_mask ||161161- range_straddles_page_boundary(phys, size)) {159159+ range_straddles_page_boundary(phys, size) ||160160+ range_requires_alignment(phys, size)) {162161 if (xen_create_contiguous_region(phys, order, fls64(dma_mask),163162 dma_handle) != 0)164163 goto out_free_pages;···185182 size = ALIGN(size, XEN_PAGE_SIZE);186183187184 if (WARN_ON_ONCE(dma_handle + size - 1 > dev->coherent_dma_mask) ||188188- WARN_ON_ONCE(range_straddles_page_boundary(phys, size)))185185+ WARN_ON_ONCE(range_straddles_page_boundary(phys, size) ||186186+ range_requires_alignment(phys, size)))189187 return;190188191189 if (TestClearPageXenRemapped(virt_to_page(vaddr)))
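The swiotlb-xen hunk factors the "physically aligned but bus-misaligned" test into range_requires_alignment() and applies it to coherent allocations as well. Below is a standalone version of the arithmetic with a toy pfn-to-bfn mapping standing in for Xen's; PAGE_SHIFT and the mapping are assumptions made only for the example.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1ULL << PAGE_SHIFT)

/* order = log2 of the pages needed for the allocation */
static unsigned int get_order(size_t size)
{
        unsigned int order = 0;

        while ((PAGE_SIZE << order) < size)
                order++;
        return order;
}

/* toy guest-pfn -> bus-frame mapping: adjacent guest pages swap on the bus */
static uint64_t pfn_to_bfn(uint64_t pfn)
{
        return pfn ^ 1;
}

static bool range_requires_alignment(uint64_t paddr, size_t size)
{
        uint64_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
        uint64_t bus = pfn_to_bfn(paddr >> PAGE_SHIFT) << PAGE_SHIFT;

        /* physically aligned, but the bus address breaks that alignment */
        return (paddr % algn == 0) && (bus % algn != 0);
}

int main(void)
{
        /* a 2-page (8 KiB) buffer at an 8 KiB-aligned guest address */
        printf("needs contiguous region: %d\n",
               range_requires_alignment(0x4000, 2 * PAGE_SIZE));
        return 0;
}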
+7
fs/bcachefs/Kconfig
···6161 The resulting code will be significantly slower than normal; you6262 probably shouldn't select this option unless you're a developer.63636464+config BCACHEFS_INJECT_TRANSACTION_RESTARTS6565+ bool "Randomly inject transaction restarts"6666+ depends on BCACHEFS_DEBUG6767+ help6868+ Randomly inject transaction restarts in a few core paths - may have a6969+ significant performance penalty7070+6471config BCACHEFS_TESTS6572 bool "bcachefs unit and performance tests"6673 depends on BCACHEFS_FS
+32-1
fs/bcachefs/btree_iter.c
···23572357 bch2_btree_iter_verify_entry_exit(iter);23582358 EBUG_ON((iter->flags & BTREE_ITER_filter_snapshots) && bkey_eq(end, POS_MAX));2359235923602360+ ret = trans_maybe_inject_restart(trans, _RET_IP_);23612361+ if (unlikely(ret)) {23622362+ k = bkey_s_c_err(ret);23632363+ goto out_no_locked;23642364+ }23652365+23602366 if (iter->update_path) {23612367 bch2_path_put_nokeep(trans, iter->update_path,23622368 iter->flags & BTREE_ITER_intent);···26282622 bch2_btree_iter_verify_entry_exit(iter);26292623 EBUG_ON((iter->flags & BTREE_ITER_filter_snapshots) && bpos_eq(end, POS_MIN));2630262426252625+ int ret = trans_maybe_inject_restart(trans, _RET_IP_);26262626+ if (unlikely(ret)) {26272627+ k = bkey_s_c_err(ret);26282628+ goto out_no_locked;26292629+ }26302630+26312631 while (1) {26322632 k = __bch2_btree_iter_peek_prev(iter, search_key);26332633 if (unlikely(!k.k))···27602748 bch2_btree_iter_verify(iter);27612749 bch2_btree_iter_verify_entry_exit(iter);27622750 EBUG_ON(btree_iter_path(trans, iter)->level && (iter->flags & BTREE_ITER_with_key_cache));27512751+27522752+ ret = trans_maybe_inject_restart(trans, _RET_IP_);27532753+ if (unlikely(ret)) {27542754+ k = bkey_s_c_err(ret);27552755+ goto out_no_locked;27562756+ }2763275727642758 /* extents can't span inode numbers: */27652759 if ((iter->flags & BTREE_ITER_is_extents) &&···3124310631253107 WARN_ON_ONCE(new_bytes > BTREE_TRANS_MEM_MAX);3126310831093109+ ret = trans_maybe_inject_restart(trans, _RET_IP_);31103110+ if (ret)31113111+ return ERR_PTR(ret);31123112+31273113 struct btree_transaction_stats *s = btree_trans_stats(trans);31283114 s->max_mem = max(s->max_mem, new_bytes);31293115···3185316331863164 if (old_bytes) {31873165 trace_and_count(c, trans_restart_mem_realloced, trans, _RET_IP_, new_bytes);31883188- return ERR_PTR(btree_trans_restart(trans, BCH_ERR_transaction_restart_mem_realloced));31663166+ return ERR_PTR(btree_trans_restart_ip(trans,31673167+ BCH_ERR_transaction_restart_mem_realloced, _RET_IP_));31893168 }31903169out_change_top:31913170 p = trans->mem + trans->mem_top;···32933270 bch2_trans_srcu_unlock(trans);3294327132953272 trans->last_begin_ip = _RET_IP_;32733273+32743274+#ifdef CONFIG_BCACHEFS_INJECT_TRANSACTION_RESTARTS32753275+ if (trans->restarted) {32763276+ trans->restart_count_this_trans++;32773277+ } else {32783278+ trans->restart_count_this_trans = 0;32793279+ }32803280+#endif3296328132973282 trans_set_locked(trans, false);32983283
···381381not_found:382382 if (flags & BTREE_TRIGGER_check_repair) {383383 ret = bch2_indirect_extent_missing_error(trans, p, *idx, next_idx, false);384384+ if (ret == -BCH_ERR_missing_indirect_extent)385385+ ret = 0;384386 if (ret)385387 goto err;386388 }
···380380 error = check_nfsd_access(exp, rqstp, may_bypass_gss);381381 if (error)382382 goto out;383383-384384- svc_xprt_set_valid(rqstp->rq_xprt);383383+ /* During LOCALIO call to fh_verify will be called with a NULL rqstp */384384+ if (rqstp)385385+ svc_xprt_set_valid(rqstp->rq_xprt);385386386387 /* Finally, check access permissions. */387388 error = nfsd_permission(cred, exp, dentry, access);
···11+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */22+/*33+ * Copyright (c) 2024, Qualcomm Innovation Center, Inc. All rights reserved.44+ */55+66+#ifndef _DT_BINDINGS_CLK_QCOM_QCS8300_CAM_CC_H77+#define _DT_BINDINGS_CLK_QCOM_QCS8300_CAM_CC_H88+99+#include "qcom,sa8775p-camcc.h"1010+1111+/* QCS8300 introduces below new clocks compared to SA8775P */1212+1313+/* CAM_CC clocks */1414+#define CAM_CC_TITAN_TOP_ACCU_SHIFT_CLK 861515+1616+#endif
+17
include/dt-bindings/clock/qcom,qcs8300-gpucc.h
···11+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */22+/*33+ * Copyright (c) 2024, Qualcomm Innovation Center, Inc. All rights reserved.44+ */55+66+#ifndef _DT_BINDINGS_CLK_QCOM_GPUCC_QCS8300_H77+#define _DT_BINDINGS_CLK_QCOM_GPUCC_QCS8300_H88+99+#include "qcom,sa8775p-gpucc.h"1010+1111+/* QCS8300 introduces below new clocks compared to SA8775P */1212+1313+/* GPU_CC clocks */1414+#define GPU_CC_CX_ACCU_SHIFT_CLK 231515+#define GPU_CC_GX_ACCU_SHIFT_CLK 241616+1717+#endif
+14-4
include/linux/blk-mq.h
···861861 void (*complete)(struct io_comp_batch *))862862{863863 /*864864- * blk_mq_end_request_batch() can't end request allocated from865865- * sched tags864864+ * Check various conditions that exclude batch processing:865865+ * 1) No batch container866866+ * 2) Has scheduler data attached867867+ * 3) Not a passthrough request and end_io set868868+ * 4) Not a passthrough request and an ioerror866869 */867867- if (!iob || (req->rq_flags & RQF_SCHED_TAGS) || ioerror ||868868- (req->end_io && !blk_rq_is_passthrough(req)))870870+ if (!iob)869871 return false;872872+ if (req->rq_flags & RQF_SCHED_TAGS)873873+ return false;874874+ if (!blk_rq_is_passthrough(req)) {875875+ if (req->end_io)876876+ return false;877877+ if (ioerror < 0)878878+ return false;879879+ }870880871881 if (!iob->complete)872882 iob->complete = complete;
+3-3
include/linux/cgroup-defs.h
···71717272 /* Cgroup is frozen. */7373 CGRP_FROZEN,7474-7575- /* Control group has to be killed. */7676- CGRP_KILL,7774};78757976/* cgroup_root->flags */···457460 int nr_populated_threaded_children;458461459462 int nr_threaded_children; /* # of live threaded child cgroups */463463+464464+ /* sequence number for cgroup.kill, serialized by css_set_lock. */465465+ unsigned int kill_seq;460466461467 struct kernfs_node *kn; /* cgroup kernfs entry */462468 struct cgroup_file procs_file; /* handle for "cgroup.procs" */
+69
include/linux/device/faux.h
···11+/* SPDX-License-Identifier: GPL-2.0-only */22+/*33+ * Copyright (c) 2025 Greg Kroah-Hartman <gregkh@linuxfoundation.org>44+ * Copyright (c) 2025 The Linux Foundation55+ *66+ * A "simple" faux bus that allows devices to be created and added77+ * automatically to it. This is to be used whenever you need to create a88+ * device that is not associated with any "real" system resources, and do99+ * not want to have to deal with a bus/driver binding logic. It is1010+ * intended to be very simple, with only a create and a destroy function1111+ * available.1212+ */1313+#ifndef _FAUX_DEVICE_H_1414+#define _FAUX_DEVICE_H_1515+1616+#include <linux/container_of.h>1717+#include <linux/device.h>1818+1919+/**2020+ * struct faux_device - a "faux" device2121+ * @dev: internal struct device of the object2222+ *2323+ * A simple faux device that can be created/destroyed. To be used when a2424+ * driver only needs to have a device to "hang" something off. This can be2525+ * used for downloading firmware or other basic tasks. Use this instead of2626+ * a struct platform_device if the device has no resources assigned to2727+ * it at all.2828+ */2929+struct faux_device {3030+ struct device dev;3131+};3232+#define to_faux_device(x) container_of_const((x), struct faux_device, dev)3333+3434+/**3535+ * struct faux_device_ops - a set of callbacks for a struct faux_device3636+ * @probe: called when a faux device is probed by the driver core3737+ * before the device is fully bound to the internal faux bus3838+ * code. If probe succeeds, return 0, otherwise return a3939+ * negative error number to stop the probe sequence from4040+ * succeeding.4141+ * @remove: called when a faux device is removed from the system4242+ *4343+ * Both @probe and @remove are optional, if not needed, set to NULL.4444+ */4545+struct faux_device_ops {4646+ int (*probe)(struct faux_device *faux_dev);4747+ void (*remove)(struct faux_device *faux_dev);4848+};4949+5050+struct faux_device *faux_device_create(const char *name,5151+ struct device *parent,5252+ const struct faux_device_ops *faux_ops);5353+struct faux_device *faux_device_create_with_groups(const char *name,5454+ struct device *parent,5555+ const struct faux_device_ops *faux_ops,5656+ const struct attribute_group **groups);5757+void faux_device_destroy(struct faux_device *faux_dev);5858+5959+static inline void *faux_device_get_drvdata(const struct faux_device *faux_dev)6060+{6161+ return dev_get_drvdata(&faux_dev->dev);6262+}6363+6464+static inline void faux_device_set_drvdata(struct faux_device *faux_dev, void *data)6565+{6666+ dev_set_drvdata(&faux_dev->dev, data);6767+}6868+6969+#endif /* _FAUX_DEVICE_H_ */
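The new faux bus header above is intentionally small: create, destroy, optional probe/remove, and drvdata accessors. A minimal in-kernel usage sketch follows; the demo_* names are invented, and it assumes faux_device_create() returns NULL on failure.

#include <linux/device/faux.h>
#include <linux/module.h>

static int demo_data = 42;
static struct faux_device *demo_fdev;

static int demo_probe(struct faux_device *fdev)
{
        /* hang private state off the device like any other driver would */
        faux_device_set_drvdata(fdev, &demo_data);
        dev_info(&fdev->dev, "faux demo bound\n");
        return 0;
}

static void demo_remove(struct faux_device *fdev)
{
        dev_info(&fdev->dev, "faux demo going away\n");
}

static const struct faux_device_ops demo_ops = {
        .probe  = demo_probe,
        .remove = demo_remove,
};

static int __init demo_init(void)
{
        demo_fdev = faux_device_create("faux-demo", NULL, &demo_ops);
        return demo_fdev ? 0 : -ENODEV;
}

static void __exit demo_exit(void)
{
        faux_device_destroy(demo_fdev);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_DESCRIPTION("faux bus usage sketch");
MODULE_LICENSE("GPL");

faux_device_create_with_groups() works the same way when sysfs attribute groups need to be attached at creation time.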
···26642664}2665266526662666static inline26672667+struct net *dev_net_rcu(const struct net_device *dev)26682668+{26692669+ return read_pnet_rcu(&dev->nd_net);26702670+}26712671+26722672+static inline26672673void dev_net_set(struct net_device *dev, struct net *net)26682674{26692675 write_pnet(&dev->nd_net, net);
+9
include/linux/psp-sev.h
···815815#ifdef CONFIG_CRYPTO_DEV_SP_PSP816816817817/**818818+ * sev_module_init - perform PSP SEV module initialization819819+ *820820+ * Returns:821821+ * 0 if the PSP module is successfully initialized822822+ * negative value if the PSP module initialization fails823823+ */824824+int sev_module_init(void);825825+826826+/**818827 * sev_platform_init - perform SEV INIT command819828 *820829 * @args: struct sev_platform_init_args to pass in arguments
···398398#endif399399}400400401401-static inline struct net *read_pnet_rcu(possible_net_t *pnet)401401+static inline struct net *read_pnet_rcu(const possible_net_t *pnet)402402{403403#ifdef CONFIG_NET_NS404404 return rcu_dereference(pnet->net);
+7-2
include/net/route.h
···382382static inline int ip4_dst_hoplimit(const struct dst_entry *dst)383383{384384 int hoplimit = dst_metric_raw(dst, RTAX_HOPLIMIT);385385- struct net *net = dev_net(dst->dev);386385387387- if (hoplimit == 0)386386+ if (hoplimit == 0) {387387+ const struct net *net;388388+389389+ rcu_read_lock();390390+ net = dev_net_rcu(dst->dev);388391 hoplimit = READ_ONCE(net->ipv4.sysctl_ip_default_ttl);392392+ rcu_read_unlock();393393+ }389394 return hoplimit;390395}391396
+2
include/uapi/linux/ethtool.h
···682682 * @ETH_SS_STATS_ETH_CTRL: names of IEEE 802.3 MAC Control statistics683683 * @ETH_SS_STATS_RMON: names of RMON statistics684684 * @ETH_SS_STATS_PHY: names of PHY(dev) statistics685685+ * @ETH_SS_TS_FLAGS: hardware timestamping flags685686 *686687 * @ETH_SS_COUNT: number of defined string sets687688 */···709708 ETH_SS_STATS_ETH_CTRL,710709 ETH_SS_STATS_RMON,711710 ETH_SS_STATS_PHY,711711+ ETH_SS_TS_FLAGS,712712713713 /* add new constants above here */714714 ETH_SS_COUNT
···5454 continue;55555656 if (cmd->flags & IORING_URING_CMD_CANCELABLE) {5757- /* ->sqe isn't available if no async data */5858- if (!req_has_async_data(req))5959- cmd->sqe = NULL;6057 file->f_op->uring_cmd(cmd, IO_URING_F_CANCEL |6158 IO_URING_F_COMPLETE_DEFER);6259 ret = true;···176179 return -ENOMEM;177180 cache->op_data = NULL;178181179179- if (!(req->flags & REQ_F_FORCE_ASYNC)) {180180- /* defer memcpy until we need it */181181- ioucmd->sqe = sqe;182182- return 0;183183- }184184-182182+ /*183183+ * Unconditionally cache the SQE for now - this is only needed for184184+ * requests that go async, but prep handlers must ensure that any185185+ * sqe data is stable beyond prep. Since uring_cmd is special in186186+ * that it doesn't read in per-op data, play it safe and ensure that187187+ * any SQE data is stable beyond prep. This can later get relaxed.188188+ */185189 memcpy(cache->sqes, sqe, uring_sqe_size(req->ctx));186190 ioucmd->sqe = cache->sqes;187191 return 0;···247249 }248250249251 ret = file->f_op->uring_cmd(ioucmd, issue_flags);250250- if (ret == -EAGAIN) {251251- struct io_uring_cmd_data *cache = req->async_data;252252-253253- if (ioucmd->sqe != (void *) cache)254254- memcpy(cache->sqes, ioucmd->sqe, uring_sqe_size(req->ctx));255255- return -EAGAIN;256256- } else if (ret == -EIOCBQUEUED) {257257- return -EIOCBQUEUED;258258- }259259-252252+ if (ret == -EAGAIN || ret == -EIOCBQUEUED)253253+ return ret;260254 if (ret < 0)261255 req_set_fail(req);262256 io_req_uring_cleanup(req, issue_flags);
···3131config GENERIC_PENDING_IRQ3232 bool33333434-# Deduce delayed migration from top-level interrupt chip flags3535-config GENERIC_PENDING_IRQ_CHIPFLAGS3636- bool3737-3834# Support for generic irq migrating off cpu before the cpu is offline.3935config GENERIC_IRQ_MIGRATION4036 bool
+2-2
kernel/sched/autogroup.c
···150150 * see this thread after that: we can no longer use signal->autogroup.151151 * See the PF_EXITING check in task_wants_autogroup().152152 */153153- sched_move_task(p);153153+ sched_move_task(p, true);154154}155155156156static void···182182 * sched_autogroup_exit_task().183183 */184184 for_each_thread(p, t)185185- sched_move_task(t);185185+ sched_move_task(t, true);186186187187 unlock_task_sighand(p, &flags);188188 autogroup_kref_put(prev);
+7-5
kernel/sched/core.c
···10631063 struct task_struct *task;1064106410651065 task = container_of(node, struct task_struct, wake_q);10661066- /* Task can safely be re-inserted now: */10671066 node = node->next;10681068- task->wake_q.next = NULL;10671067+ /* pairs with cmpxchg_relaxed() in __wake_q_add() */10681068+ WRITE_ONCE(task->wake_q.next, NULL);10691069+ /* Task can safely be re-inserted now. */1069107010701071 /*10711072 * wake_up_process() executes a full barrier, which pairs with···90519050 * now. This function just updates tsk->se.cfs_rq and tsk->se.parent to reflect90529051 * its new group.90539052 */90549054-void sched_move_task(struct task_struct *tsk)90539053+void sched_move_task(struct task_struct *tsk, bool for_autogroup)90559054{90569055 int queued, running, queue_flags =90579056 DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;···90809079 put_prev_task(rq, tsk);9081908090829081 sched_change_group(tsk, group);90839083- scx_move_task(tsk);90829082+ if (!for_autogroup)90839083+ scx_cgroup_move_task(tsk);9084908490859085 if (queued)90869086 enqueue_task(rq, tsk, queue_flags);···91829180 struct cgroup_subsys_state *css;9183918191849182 cgroup_taskset_for_each(task, css, tset)91859185- sched_move_task(task);91839183+ sched_move_task(task, false);9186918491879185 scx_cgroup_finish_attach();91889186}
+76-37
kernel/sched/ext.c
···123123 SCX_OPS_SWITCH_PARTIAL = 1LLU << 3,124124125125 /*126126+ * A migration disabled task can only execute on its current CPU. By127127+ * default, such tasks are automatically put on the CPU's local DSQ with128128+ * the default slice on enqueue. If this ops flag is set, they also go129129+ * through ops.enqueue().130130+ *131131+ * A migration disabled task never invokes ops.select_cpu() as it can132132+ * only select the current CPU. Also, p->cpus_ptr will only contain its133133+ * current CPU while p->nr_cpus_allowed keeps tracking p->user_cpus_ptr134134+ * and thus may disagree with cpumask_weight(p->cpus_ptr).135135+ */136136+ SCX_OPS_ENQ_MIGRATION_DISABLED = 1LLU << 4,137137+138138+ /*126139 * CPU cgroup support flags127140 */128141 SCX_OPS_HAS_CGROUP_WEIGHT = 1LLU << 16, /* cpu.weight */···143130 SCX_OPS_ALL_FLAGS = SCX_OPS_KEEP_BUILTIN_IDLE |144131 SCX_OPS_ENQ_LAST |145132 SCX_OPS_ENQ_EXITING |133133+ SCX_OPS_ENQ_MIGRATION_DISABLED |146134 SCX_OPS_SWITCH_PARTIAL |147135 SCX_OPS_HAS_CGROUP_WEIGHT,148136};···430416431417 /**432418 * @update_idle: Update the idle state of a CPU433433- * @cpu: CPU to udpate the idle state for419419+ * @cpu: CPU to update the idle state for434420 * @idle: whether entering or exiting the idle state435421 *436422 * This operation is called when @rq's CPU goes or leaves the idle···896882897883static DEFINE_STATIC_KEY_FALSE(scx_ops_enq_last);898884static DEFINE_STATIC_KEY_FALSE(scx_ops_enq_exiting);885885+static DEFINE_STATIC_KEY_FALSE(scx_ops_enq_migration_disabled);899886static DEFINE_STATIC_KEY_FALSE(scx_ops_cpu_preempt);900887static DEFINE_STATIC_KEY_FALSE(scx_builtin_idle_enabled);901888···1229121412301215/**12311216 * nldsq_next_task - Iterate to the next task in a non-local DSQ12321232- * @dsq: user dsq being interated12171217+ * @dsq: user dsq being iterated12331218 * @cur: current position, %NULL to start iteration12341219 * @rev: walk backwards12351220 *···20292014 unlikely(p->flags & PF_EXITING))20302015 goto local;2031201620172017+ /* see %SCX_OPS_ENQ_MIGRATION_DISABLED */20182018+ if (!static_branch_unlikely(&scx_ops_enq_migration_disabled) &&20192019+ is_migration_disabled(p))20202020+ goto local;20212021+20322022 if (!SCX_HAS_OP(enqueue))20332023 goto global;20342024···2098207820992079 /*21002080 * list_add_tail() must be used. scx_ops_bypass() depends on tasks being21012101- * appened to the runnable_list.20812081+ * appended to the runnable_list.21022082 */21032083 list_add_tail(&p->scx.runnable_node, &rq->scx.runnable_list);21042084}···23332313 *23342314 * - The BPF scheduler is bypassed while the rq is offline and we can always say23352315 * no to the BPF scheduler initiated migrations while offline.23162316+ *23172317+ * The caller must ensure that @p and @rq are on different CPUs.23362318 */23372319static bool task_can_run_on_remote_rq(struct task_struct *p, struct rq *rq,23382320 bool trigger_error)23392321{23402322 int cpu = cpu_of(rq);23232323+23242324+ SCHED_WARN_ON(task_cpu(p) == cpu);23252325+23262326+ /*23272327+ * If @p has migration disabled, @p->cpus_ptr is updated to contain only23282328+ * the pinned CPU in migrate_disable_switch() while @p is being switched23292329+ * out. 
However, put_prev_task_scx() is called before @p->cpus_ptr is23302330+ * updated and thus another CPU may see @p on a DSQ inbetween leading to23312331+ * @p passing the below task_allowed_on_cpu() check while migration is23322332+ * disabled.23332333+ *23342334+ * Test the migration disabled state first as the race window is narrow23352335+ * and the BPF scheduler failing to check migration disabled state can23362336+ * easily be masked if task_allowed_on_cpu() is done first.23372337+ */23382338+ if (unlikely(is_migration_disabled(p))) {23392339+ if (trigger_error)23402340+ scx_ops_error("SCX_DSQ_LOCAL[_ON] cannot move migration disabled %s[%d] from CPU %d to %d",23412341+ p->comm, p->pid, task_cpu(p), cpu);23422342+ return false;23432343+ }2341234423422345 /*23432346 * We don't require the BPF scheduler to avoid dispatching to offline···23702327 */23712328 if (!task_allowed_on_cpu(p, cpu)) {23722329 if (trigger_error)23732373- scx_ops_error("SCX_DSQ_LOCAL[_ON] verdict target cpu %d not allowed for %s[%d]",23742374- cpu_of(rq), p->comm, p->pid);23302330+ scx_ops_error("SCX_DSQ_LOCAL[_ON] target CPU %d not allowed for %s[%d]",23312331+ cpu, p->comm, p->pid);23752332 return false;23762333 }23772377-23782378- if (unlikely(is_migration_disabled(p)))23792379- return false;2380233423812335 if (!scx_rq_online(rq))23822336 return false;···2477243724782438 if (dst_dsq->id == SCX_DSQ_LOCAL) {24792439 dst_rq = container_of(dst_dsq, struct rq, scx.local_dsq);24802480- if (!task_can_run_on_remote_rq(p, dst_rq, true)) {24402440+ if (src_rq != dst_rq &&24412441+ unlikely(!task_can_run_on_remote_rq(p, dst_rq, true))) {24812442 dst_dsq = find_global_dsq(p);24822443 dst_rq = src_rq;24832444 }···25212480/*25222481 * A poorly behaving BPF scheduler can live-lock the system by e.g. incessantly25232482 * banging on the same DSQ on a large NUMA system to the point where switching25242524- * to the bypass mode can take a long time. Inject artifical delays while the24832483+ * to the bypass mode can take a long time. Inject artificial delays while the25252484 * bypass mode is switching to guarantee timely completion.25262485 */25272486static void scx_ops_breather(struct rq *rq)···26162575{26172576 struct rq *src_rq = task_rq(p);26182577 struct rq *dst_rq = container_of(dst_dsq, struct rq, scx.local_dsq);25782578+#ifdef CONFIG_SMP25792579+ struct rq *locked_rq = rq;25802580+#endif2619258126202582 /*26212583 * We're synchronized against dequeue through DISPATCHING. 
As @p can't···26322588 }2633258926342590#ifdef CONFIG_SMP26352635- if (unlikely(!task_can_run_on_remote_rq(p, dst_rq, true))) {25912591+ if (src_rq != dst_rq &&25922592+ unlikely(!task_can_run_on_remote_rq(p, dst_rq, true))) {26362593 dispatch_enqueue(find_global_dsq(p), p,26372594 enq_flags | SCX_ENQ_CLEAR_OPSS);26382595 return;···26562611 atomic_long_set_release(&p->scx.ops_state, SCX_OPSS_NONE);2657261226582613 /* switch to @src_rq lock */26592659- if (rq != src_rq) {26602660- raw_spin_rq_unlock(rq);26142614+ if (locked_rq != src_rq) {26152615+ raw_spin_rq_unlock(locked_rq);26162616+ locked_rq = src_rq;26612617 raw_spin_rq_lock(src_rq);26622618 }26632619···26762630 } else {26772631 move_remote_task_to_local_dsq(p, enq_flags,26782632 src_rq, dst_rq);26332633+ /* task has been moved to dst_rq, which is now locked */26342634+ locked_rq = dst_rq;26792635 }2680263626812637 /* if the destination CPU is idle, wake it up */···26862638 }2687263926882640 /* switch back to @rq lock */26892689- if (rq != dst_rq) {26902690- raw_spin_rq_unlock(dst_rq);26412641+ if (locked_rq != rq) {26422642+ raw_spin_rq_unlock(locked_rq);26912643 raw_spin_rq_lock(rq);26922644 }26932645#else /* CONFIG_SMP */···31923144 *31933145 * Unless overridden by ops.core_sched_before(), @p->scx.core_sched_at is used31943146 * to implement the default task ordering. The older the timestamp, the higher31953195- * prority the task - the global FIFO ordering matching the default scheduling31473147+ * priority the task - the global FIFO ordering matching the default scheduling31963148 * behavior.31973149 *31983150 * When ops.core_sched_before() is enabled, @p->scx.core_sched_at is used to···38993851 curr->scx.slice = 0;39003852 touch_core_sched(rq, curr);39013853 } else if (SCX_HAS_OP(tick)) {39023902- SCX_CALL_OP(SCX_KF_REST, tick, curr);38543854+ SCX_CALL_OP_TASK(SCX_KF_REST, tick, curr);39033855 }3904385639053857 if (!curr->scx.slice)···40463998 WARN_ON_ONCE(scx_get_task_state(p) != SCX_TASK_ENABLED);4047399940484000 if (SCX_HAS_OP(disable))40494049- SCX_CALL_OP(SCX_KF_REST, disable, p);40014001+ SCX_CALL_OP_TASK(SCX_KF_REST, disable, p);40504002 scx_set_task_state(p, SCX_TASK_READY);40514003}40524004···40754027 }4076402840774029 if (SCX_HAS_OP(exit_task))40784078- SCX_CALL_OP(SCX_KF_REST, exit_task, p, &args);40304030+ SCX_CALL_OP_TASK(SCX_KF_REST, exit_task, p, &args);40794031 scx_set_task_state(p, SCX_TASK_NONE);40804032}40814033···43714323 return ops_sanitize_err("cgroup_prep_move", ret);43724324}4373432543744374-void scx_move_task(struct task_struct *p)43264326+void scx_cgroup_move_task(struct task_struct *p)43754327{43764328 if (!scx_cgroup_enabled)43774377- return;43784378-43794379- /*43804380- * We're called from sched_move_task() which handles both cgroup and43814381- * autogroup moves. Ignore the latter.43824382- *43834383- * Also ignore exiting tasks, because in the exit path tasks transition43844384- * from the autogroup to the root group, so task_group_is_autogroup()43854385- * alone isn't able to catch exiting autogroup tasks. This is safe for43864386- * cgroup_move(), because cgroup migrations never happen for PF_EXITING43874387- * tasks.43884388- */43894389- if (task_group_is_autogroup(task_group(p)) || (p->flags & PF_EXITING))43904329 return;4391433043924331 /*···46254590 cgroup_warned_missing_idle = false;4626459146274592 /*46284628- * scx_tg_on/offline() are excluded thorugh scx_cgroup_rwsem. If we walk45934593+ * scx_tg_on/offline() are excluded through scx_cgroup_rwsem. 
If we walk46294594 * cgroups and init, all online cgroups are initialized.46304595 */46314596 rcu_read_lock();···50945059 static_branch_disable(&scx_has_op[i]);50955060 static_branch_disable(&scx_ops_enq_last);50965061 static_branch_disable(&scx_ops_enq_exiting);50625062+ static_branch_disable(&scx_ops_enq_migration_disabled);50975063 static_branch_disable(&scx_ops_cpu_preempt);50985064 static_branch_disable(&scx_builtin_idle_enabled);50995065 synchronize_rcu();···53135277 scx_get_task_state(p), p->scx.flags & ~SCX_TASK_STATE_MASK,53145278 p->scx.dsq_flags, ops_state & SCX_OPSS_STATE_MASK,53155279 ops_state >> SCX_OPSS_QSEQ_SHIFT);53165316- dump_line(s, " sticky/holding_cpu=%d/%d dsq_id=%s dsq_vtime=%llu slice=%llu",53175317- p->scx.sticky_cpu, p->scx.holding_cpu, dsq_id_buf,53185318- p->scx.dsq_vtime, p->scx.slice);52805280+ dump_line(s, " sticky/holding_cpu=%d/%d dsq_id=%s",52815281+ p->scx.sticky_cpu, p->scx.holding_cpu, dsq_id_buf);52825282+ dump_line(s, " dsq_vtime=%llu slice=%llu weight=%u",52835283+ p->scx.dsq_vtime, p->scx.slice, p->scx.weight);53195284 dump_line(s, " cpus=%*pb", cpumask_pr_args(p->cpus_ptr));5320528553215286 if (SCX_HAS_OP(dump_task)) {···5704566757055668 if (ops->flags & SCX_OPS_ENQ_EXITING)57065669 static_branch_enable(&scx_ops_enq_exiting);56705670+ if (ops->flags & SCX_OPS_ENQ_MIGRATION_DISABLED)56715671+ static_branch_enable(&scx_ops_enq_migration_disabled);57075672 if (scx_ops.cpu_acquire || scx_ops.cpu_release)57085673 static_branch_enable(&scx_ops_cpu_preempt);57095674
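A BPF scheduler that opts in with SCX_OPS_ENQ_MIGRATION_DISABLED has to remember the asymmetry the new comment spells out: p->cpus_ptr is already collapsed to the current CPU while p->nr_cpus_allowed still mirrors p->user_cpus_ptr. A hedged sketch of a check built only on those two fields (the helper name is hypothetical, not part of the patch):

        /*
         * Hypothetical helper: true when @p is pinned to a single CPU even
         * though its user-visible affinity is wider - the migration-disabled
         * situation described in the SCX_OPS_ENQ_MIGRATION_DISABLED comment.
         */
        static bool scx_affinity_narrowed(struct task_struct *p)
        {
                return cpumask_weight(p->cpus_ptr) == 1 && p->nr_cpus_allowed > 1;
        }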
···16721672 * must be the same.16731673 */16741674static bool rb_meta_valid(struct ring_buffer_meta *meta, int cpu,16751675- struct trace_buffer *buffer, int nr_pages)16751675+ struct trace_buffer *buffer, int nr_pages,16761676+ unsigned long *subbuf_mask)16761677{16771678 int subbuf_size = PAGE_SIZE;16781679 struct buffer_data_page *subbuf;16791680 unsigned long buffers_start;16801681 unsigned long buffers_end;16811682 int i;16831683+16841684+ if (!subbuf_mask)16851685+ return false;1682168616831687 /* Check the meta magic and meta struct size */16841688 if (meta->magic != RING_BUFFER_META_MAGIC ||···1716171217171713 subbuf = rb_subbufs_from_meta(meta);1718171417151715+ bitmap_clear(subbuf_mask, 0, meta->nr_subbufs);17161716+17191717 /* Is the meta buffers and the subbufs themselves have correct data? */17201718 for (i = 0; i < meta->nr_subbufs; i++) {17211719 if (meta->buffers[i] < 0 ||···17311725 return false;17321726 }1733172717281728+ if (test_bit(meta->buffers[i], subbuf_mask)) {17291729+ pr_info("Ring buffer boot meta [%d] array has duplicates\n", cpu);17301730+ return false;17311731+ }17321732+17331733+ set_bit(meta->buffers[i], subbuf_mask);17341734 subbuf = (void *)subbuf + subbuf_size;17351735 }17361736···18501838 cpu_buffer->cpu);18511839 goto invalid;18521840 }18411841+18421842+ /* If the buffer has content, update pages_touched */18431843+ if (ret)18441844+ local_inc(&cpu_buffer->pages_touched);18451845+18531846 entries += ret;18541847 entry_bytes += local_read(&head_page->page->commit);18551848 local_set(&cpu_buffer->head_page->entries, ret);···19061889static void rb_range_meta_init(struct trace_buffer *buffer, int nr_pages)19071890{19081891 struct ring_buffer_meta *meta;18921892+ unsigned long *subbuf_mask;19091893 unsigned long delta;19101894 void *subbuf;19111895 int cpu;19121896 int i;18971897+18981898+ /* Create a mask to test the subbuf array */18991899+ subbuf_mask = bitmap_alloc(nr_pages + 1, GFP_KERNEL);19001900+ /* If subbuf_mask fails to allocate, then rb_meta_valid() will return false */1913190119141902 for (cpu = 0; cpu < nr_cpu_ids; cpu++) {19151903 void *next_meta;1916190419171905 meta = rb_range_meta(buffer, nr_pages, cpu);1918190619191919- if (rb_meta_valid(meta, cpu, buffer, nr_pages)) {19071907+ if (rb_meta_valid(meta, cpu, buffer, nr_pages, subbuf_mask)) {19201908 /* Make the mappings match the current address */19211909 subbuf = rb_subbufs_from_meta(meta);19221910 delta = (unsigned long)subbuf - meta->first_buffer;···19651943 subbuf += meta->subbuf_size;19661944 }19671945 }19461946+ bitmap_free(subbuf_mask);19681947}1969194819701949static void *rbm_start(struct seq_file *m, loff_t *pos)···71497126 kfree(cpu_buffer->subbuf_ids);71507127 cpu_buffer->subbuf_ids = NULL;71517128 rb_free_meta_page(cpu_buffer);71297129+ atomic_dec(&cpu_buffer->resize_disabled);71527130 }7153713171547132unlock:
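The duplicate check added to rb_meta_valid() is a plain bitmap walk over the subbuffer indices. The same pattern in isolation, with illustrative names (@entries and @nr_entries are not from the patch):

        static bool indices_unique(const int *entries, int nr_entries)
        {
                unsigned long *mask = bitmap_alloc(nr_entries, GFP_KERNEL);
                bool ok = true;
                int i;

                if (!mask)
                        return false;   /* mirror rb_meta_valid(): no mask, no trust */

                bitmap_clear(mask, 0, nr_entries);
                for (i = 0; i < nr_entries; i++) {
                        if (entries[i] >= nr_entries || test_bit(entries[i], mask)) {
                                ok = false;     /* out of range or referenced twice */
                                break;
                        }
                        set_bit(entries[i], mask);
                }
                bitmap_free(mask);
                return ok;
        }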
+5-7
kernel/trace/trace.c
···59775977ssize_t tracing_resize_ring_buffer(struct trace_array *tr,59785978 unsigned long size, int cpu_id)59795979{59805980- int ret;59815981-59825980 guard(mutex)(&trace_types_lock);5983598159845982 if (cpu_id != RING_BUFFER_ALL_CPUS) {···59855987 return -EINVAL;59865988 }5987598959885988- ret = __tracing_resize_ring_buffer(tr, size, cpu_id);59895989- if (ret < 0)59905990- ret = -ENOMEM;59915991-59925992- return ret;59905990+ return __tracing_resize_ring_buffer(tr, size, cpu_id);59935991}5994599259955993static void update_last_data(struct trace_array *tr)···82788284 struct ftrace_buffer_info *info = filp->private_data;82798285 struct trace_iterator *iter = &info->iter;82808286 int ret = 0;82878287+82888288+ /* Currently the boot mapped buffer is not supported for mmap */82898289+ if (iter->tr->flags & TRACE_ARRAY_FL_BOOT)82908290+ return -ENODEV;8281829182828292 ret = get_snapshot_map(iter->tr);82838293 if (ret)
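The resize path can return __tracing_resize_ring_buffer() directly because trace_types_lock is held via guard(mutex)() from <linux/cleanup.h>, which releases the mutex on every return path. A minimal illustration with made-up names (example_lock and do_work are hypothetical):

        static DEFINE_MUTEX(example_lock);

        static int example_resize(unsigned long size)
        {
                guard(mutex)(&example_lock);    /* dropped automatically on return */

                if (!size)
                        return -EINVAL;         /* no explicit unlock needed */

                return do_work(size);           /* mutex still held here */
        }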
+6-6
kernel/workqueue.c
···35173517 }3518351835193519 /*35203520- * Put the reference grabbed by send_mayday(). @pool won't35213521- * go away while we're still attached to it.35223522- */35233523- put_pwq(pwq);35243524-35253525- /*35263520 * Leave this pool. Notify regular workers; otherwise, we end up35273521 * with 0 concurrency and stalling the execution.35283522 */···35253531 raw_spin_unlock_irq(&pool->lock);3526353235273533 worker_detach_from_pool(rescuer);35343534+35353535+ /*35363536+ * Put the reference grabbed by send_mayday(). @pool might35373537+ * go away any time after it.35383538+ */35393539+ put_pwq_unlocked(pwq);3528354035293541 raw_spin_lock_irq(&wq_mayday_lock);35303542 }
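The reordering relies on the two reference-drop helpers having different locking contracts: put_pwq() needs pool->lock held, while put_pwq_unlocked() takes it internally and is therefore safe to call after the rescuer has detached. Roughly, from memory of the existing helper in this file (shown only for context):

        static void put_pwq_unlocked(struct pool_workqueue *pwq)
        {
                if (pwq) {
                        /* pwqs and pools are RCU protected, so this is safe */
                        raw_spin_lock_irq(&pwq->pool->lock);
                        put_pwq(pwq);
                        raw_spin_unlock_irq(&pwq->pool->lock);
                }
        }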
···1818#include <linux/if_ether.h>1919#include <linux/jiffies.h>2020#include <linux/kref.h>2121+#include <linux/list.h>2122#include <linux/minmax.h>2223#include <linux/netdevice.h>2324#include <linux/nl80211.h>···2726#include <linux/rcupdate.h>2827#include <linux/rtnetlink.h>2928#include <linux/skbuff.h>2929+#include <linux/slab.h>3030#include <linux/stddef.h>3131#include <linux/string.h>3232#include <linux/types.h>···4240#include "originator.h"4341#include "routing.h"4442#include "send.h"4343+4444+/**4545+ * struct batadv_v_metric_queue_entry - list of hardif neighbors which require4646+ * and metric update4747+ */4848+struct batadv_v_metric_queue_entry {4949+ /** @hardif_neigh: hardif neighbor scheduled for metric update */5050+ struct batadv_hardif_neigh_node *hardif_neigh;5151+5252+ /** @list: list node for metric_queue */5353+ struct list_head list;5454+};45554656/**4757 * batadv_v_elp_start_timer() - restart timer for ELP periodic work···7359/**7460 * batadv_v_elp_get_throughput() - get the throughput towards a neighbour7561 * @neigh: the neighbour for which the throughput has to be obtained6262+ * @pthroughput: calculated throughput towards the given neighbour in multiples6363+ * of 100kpbs (a value of '1' equals 0.1Mbps, '10' equals 1Mbps, etc).7664 *7777- * Return: The throughput towards the given neighbour in multiples of 100kpbs7878- * (a value of '1' equals 0.1Mbps, '10' equals 1Mbps, etc).6565+ * Return: true when value behind @pthroughput was set7966 */8080-static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh)6767+static bool batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh,6868+ u32 *pthroughput)8169{8270 struct batadv_hard_iface *hard_iface = neigh->if_incoming;7171+ struct net_device *soft_iface = hard_iface->soft_iface;8372 struct ethtool_link_ksettings link_settings;8473 struct net_device *real_netdev;8574 struct station_info sinfo;8675 u32 throughput;8776 int ret;88777878+ /* don't query throughput when no longer associated with any7979+ * batman-adv interface8080+ */8181+ if (!soft_iface)8282+ return false;8383+8984 /* if the user specified a customised value for this interface, then9085 * return it directly9186 */9287 throughput = atomic_read(&hard_iface->bat_v.throughput_override);9393- if (throughput != 0)9494- return throughput;8888+ if (throughput != 0) {8989+ *pthroughput = throughput;9090+ return true;9191+ }95929693 /* if this is a wireless device, then ask its throughput through9794 * cfg80211 API···129104 * possible to delete this neighbor. For now set130105 * the throughput metric to 0.131106 */132132- return 0;107107+ *pthroughput = 0;108108+ return true;133109 }134110 if (ret)135111 goto default_throughput;136112137137- if (sinfo.filled & BIT(NL80211_STA_INFO_EXPECTED_THROUGHPUT))138138- return sinfo.expected_throughput / 100;113113+ if (sinfo.filled & BIT(NL80211_STA_INFO_EXPECTED_THROUGHPUT)) {114114+ *pthroughput = sinfo.expected_throughput / 100;115115+ return true;116116+ }139117140118 /* try to estimate the expected throughput based on reported tx141119 * rates142120 */143143- if (sinfo.filled & BIT(NL80211_STA_INFO_TX_BITRATE))144144- return cfg80211_calculate_bitrate(&sinfo.txrate) / 3;121121+ if (sinfo.filled & BIT(NL80211_STA_INFO_TX_BITRATE)) {122122+ *pthroughput = cfg80211_calculate_bitrate(&sinfo.txrate) / 3;123123+ return true;124124+ }145125146126 goto default_throughput;147127 }148128129129+ /* only use rtnl_trylock because the elp worker will be cancelled while130130+ * the rntl_lock is held. 
the cancel_delayed_work_sync() would otherwise131131+ * wait forever when the elp work_item was started and it is then also132132+ * trying to rtnl_lock133133+ */134134+ if (!rtnl_trylock())135135+ return false;136136+149137 /* if not a wifi interface, check if this device provides data via150138 * ethtool (e.g. an Ethernet adapter)151139 */152152- rtnl_lock();153140 ret = __ethtool_get_link_ksettings(hard_iface->net_dev, &link_settings);154141 rtnl_unlock();155142 if (ret == 0) {···172135 hard_iface->bat_v.flags &= ~BATADV_FULL_DUPLEX;173136174137 throughput = link_settings.base.speed;175175- if (throughput && throughput != SPEED_UNKNOWN)176176- return throughput * 10;138138+ if (throughput && throughput != SPEED_UNKNOWN) {139139+ *pthroughput = throughput * 10;140140+ return true;141141+ }177142 }178143179144default_throughput:180145 if (!(hard_iface->bat_v.flags & BATADV_WARNING_DEFAULT)) {181181- batadv_info(hard_iface->soft_iface,146146+ batadv_info(soft_iface,182147 "WiFi driver or ethtool info does not provide information about link speeds on interface %s, therefore defaulting to hardcoded throughput values of %u.%1u Mbps. Consider overriding the throughput manually or checking your driver.\n",183148 hard_iface->net_dev->name,184149 BATADV_THROUGHPUT_DEFAULT_VALUE / 10,···189150 }190151191152 /* if none of the above cases apply, return the base_throughput */192192- return BATADV_THROUGHPUT_DEFAULT_VALUE;153153+ *pthroughput = BATADV_THROUGHPUT_DEFAULT_VALUE;154154+ return true;193155}194156195157/**196158 * batadv_v_elp_throughput_metric_update() - worker updating the throughput197159 * metric of a single hop neighbour198198- * @work: the work queue item160160+ * @neigh: the neighbour to probe199161 */200200-void batadv_v_elp_throughput_metric_update(struct work_struct *work)162162+static void163163+batadv_v_elp_throughput_metric_update(struct batadv_hardif_neigh_node *neigh)201164{202202- struct batadv_hardif_neigh_node_bat_v *neigh_bat_v;203203- struct batadv_hardif_neigh_node *neigh;165165+ u32 throughput;166166+ bool valid;204167205205- neigh_bat_v = container_of(work, struct batadv_hardif_neigh_node_bat_v,206206- metric_work);207207- neigh = container_of(neigh_bat_v, struct batadv_hardif_neigh_node,208208- bat_v);168168+ valid = batadv_v_elp_get_throughput(neigh, &throughput);169169+ if (!valid)170170+ return;209171210210- ewma_throughput_add(&neigh->bat_v.throughput,211211- batadv_v_elp_get_throughput(neigh));212212-213213- /* decrement refcounter to balance increment performed before scheduling214214- * this task215215- */216216- batadv_hardif_neigh_put(neigh);172172+ ewma_throughput_add(&neigh->bat_v.throughput, throughput);217173}218174219175/**···282248 */283249static void batadv_v_elp_periodic_work(struct work_struct *work)284250{251251+ struct batadv_v_metric_queue_entry *metric_entry;252252+ struct batadv_v_metric_queue_entry *metric_safe;285253 struct batadv_hardif_neigh_node *hardif_neigh;286254 struct batadv_hard_iface *hard_iface;287255 struct batadv_hard_iface_bat_v *bat_v;288256 struct batadv_elp_packet *elp_packet;257257+ struct list_head metric_queue;289258 struct batadv_priv *bat_priv;290259 struct sk_buff *skb;291260 u32 elp_interval;292292- bool ret;293261294262 bat_v = container_of(work, struct batadv_hard_iface_bat_v, elp_wq.work);295263 hard_iface = container_of(bat_v, struct batadv_hard_iface, bat_v);···327291328292 atomic_inc(&hard_iface->bat_v.elp_seqno);329293294294+ INIT_LIST_HEAD(&metric_queue);295295+330296 /* The throughput metric is updated on 
each sent packet. This way, if a331297 * node is dead and no longer sends packets, batman-adv is still able to332298 * react timely to its death.···353315354316 /* Reading the estimated throughput from cfg80211 is a task that355317 * may sleep and that is not allowed in an rcu protected356356- * context. Therefore schedule a task for that.318318+ * context. Therefore add it to metric_queue and process it319319+ * outside rcu protected context.357320 */358358- ret = queue_work(batadv_event_workqueue,359359- &hardif_neigh->bat_v.metric_work);360360-361361- if (!ret)321321+ metric_entry = kzalloc(sizeof(*metric_entry), GFP_ATOMIC);322322+ if (!metric_entry) {362323 batadv_hardif_neigh_put(hardif_neigh);324324+ continue;325325+ }326326+327327+ metric_entry->hardif_neigh = hardif_neigh;328328+ list_add(&metric_entry->list, &metric_queue);363329 }364330 rcu_read_unlock();331331+332332+ list_for_each_entry_safe(metric_entry, metric_safe, &metric_queue, list) {333333+ batadv_v_elp_throughput_metric_update(metric_entry->hardif_neigh);334334+335335+ batadv_hardif_neigh_put(metric_entry->hardif_neigh);336336+ list_del(&metric_entry->list);337337+ kfree(metric_entry);338338+ }365339366340restart_timer:367341 batadv_v_elp_start_timer(hard_iface);
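The metric_queue replaces per-neighbour work items with a collect-then-process loop: candidates are gathered (and referenced) under rcu_read_lock(), where sleeping is forbidden, and the sleepable throughput query runs only after the RCU section ends. The shape of that pattern in isolation, with illustrative names (struct item and do_sleepable_work() are not from the patch):

        LIST_HEAD(queue);
        struct item *it, *tmp;

        rcu_read_lock();
        /*
         * Walk the RCU-protected neighbour list, take a reference on each
         * candidate and park it on the private list; GFP_ATOMIC because this
         * section may not sleep.
         */
        rcu_read_unlock();

        list_for_each_entry_safe(it, tmp, &queue, list) {
                do_sleepable_work(it);          /* sleeping is allowed again */
                list_del(&it->list);
                kfree(it);
        }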
···596596 * neighbor597597 */598598 unsigned long last_unicast_tx;599599-600600- /** @metric_work: work queue callback item for metric update */601601- struct work_struct metric_work;602599};603600604601/**
+1-2
net/bluetooth/hidp/Kconfig
···11# SPDX-License-Identifier: GPL-2.0-only22config BT_HIDP33 tristate "HIDP protocol support"44- depends on BT_BREDR && INPUT && HID_SUPPORT55- select HID44+ depends on BT_BREDR && HID65 help76 HIDP (Human Interface Device Protocol) is a transport layer87 for HID reports. HIDP is required for the Bluetooth Human
+79-90
net/bluetooth/l2cap_core.c
···119119{120120 struct l2cap_chan *c;121121122122- mutex_lock(&conn->chan_lock);123122 c = __l2cap_get_chan_by_scid(conn, cid);124123 if (c) {125124 /* Only lock if chan reference is not 0 */···126127 if (c)127128 l2cap_chan_lock(c);128129 }129129- mutex_unlock(&conn->chan_lock);130130131131 return c;132132}···138140{139141 struct l2cap_chan *c;140142141141- mutex_lock(&conn->chan_lock);142143 c = __l2cap_get_chan_by_dcid(conn, cid);143144 if (c) {144145 /* Only lock if chan reference is not 0 */···145148 if (c)146149 l2cap_chan_lock(c);147150 }148148- mutex_unlock(&conn->chan_lock);149151150152 return c;151153}···414418 if (!conn)415419 return;416420417417- mutex_lock(&conn->chan_lock);421421+ mutex_lock(&conn->lock);418422 /* __set_chan_timer() calls l2cap_chan_hold(chan) while scheduling419423 * this work. No need to call l2cap_chan_hold(chan) here again.420424 */···435439 l2cap_chan_unlock(chan);436440 l2cap_chan_put(chan);437441438438- mutex_unlock(&conn->chan_lock);442442+ mutex_unlock(&conn->lock);439443}440444441445struct l2cap_chan *l2cap_chan_create(void)···637641638642void l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan)639643{640640- mutex_lock(&conn->chan_lock);644644+ mutex_lock(&conn->lock);641645 __l2cap_chan_add(conn, chan);642642- mutex_unlock(&conn->chan_lock);646646+ mutex_unlock(&conn->lock);643647}644648645649void l2cap_chan_del(struct l2cap_chan *chan, int err)···727731 if (!conn)728732 return;729733730730- mutex_lock(&conn->chan_lock);734734+ mutex_lock(&conn->lock);731735 __l2cap_chan_list(conn, func, data);732732- mutex_unlock(&conn->chan_lock);736736+ mutex_unlock(&conn->lock);733737}734738735739EXPORT_SYMBOL_GPL(l2cap_chan_list);···741745 struct hci_conn *hcon = conn->hcon;742746 struct l2cap_chan *chan;743747744744- mutex_lock(&conn->chan_lock);748748+ mutex_lock(&conn->lock);745749746750 list_for_each_entry(chan, &conn->chan_l, list) {747751 l2cap_chan_lock(chan);···750754 l2cap_chan_unlock(chan);751755 }752756753753- mutex_unlock(&conn->chan_lock);757757+ mutex_unlock(&conn->lock);754758}755759756760static void l2cap_chan_le_connect_reject(struct l2cap_chan *chan)···944948 return id;945949}946950951951+static void l2cap_send_acl(struct l2cap_conn *conn, struct sk_buff *skb,952952+ u8 flags)953953+{954954+ /* Check if the hcon still valid before attempting to send */955955+ if (hci_conn_valid(conn->hcon->hdev, conn->hcon))956956+ hci_send_acl(conn->hchan, skb, flags);957957+ else958958+ kfree_skb(skb);959959+}960960+947961static void l2cap_send_cmd(struct l2cap_conn *conn, u8 ident, u8 code, u16 len,948962 void *data)949963{···976970 bt_cb(skb)->force_active = BT_POWER_FORCE_ACTIVE_ON;977971 skb->priority = HCI_PRIO_MAX;978972979979- hci_send_acl(conn->hchan, skb, flags);973973+ l2cap_send_acl(conn, skb, flags);980974}981975982976static void l2cap_do_send(struct l2cap_chan *chan, struct sk_buff *skb)···1503149715041498 BT_DBG("conn %p", conn);1505149915061506- mutex_lock(&conn->chan_lock);15071507-15081500 list_for_each_entry_safe(chan, tmp, &conn->chan_l, list) {15091501 l2cap_chan_lock(chan);15101502···1571156715721568 l2cap_chan_unlock(chan);15731569 }15741574-15751575- mutex_unlock(&conn->chan_lock);15761570}1577157115781572static void l2cap_le_conn_ready(struct l2cap_conn *conn)···16161614 if (hcon->type == ACL_LINK)16171615 l2cap_request_info(conn);1618161616191619- mutex_lock(&conn->chan_lock);16171617+ mutex_lock(&conn->lock);1620161816211619 list_for_each_entry(chan, &conn->chan_l, list) {16221620···16341632 
l2cap_chan_unlock(chan);16351633 }1636163416371637- mutex_unlock(&conn->chan_lock);16351635+ mutex_unlock(&conn->lock);1638163616391637 if (hcon->type == LE_LINK)16401638 l2cap_le_conn_ready(conn);···1649164716501648 BT_DBG("conn %p", conn);1651164916521652- mutex_lock(&conn->chan_lock);16531653-16541650 list_for_each_entry(chan, &conn->chan_l, list) {16551651 if (test_bit(FLAG_FORCE_RELIABLE, &chan->flags))16561652 l2cap_chan_set_err(chan, err);16571653 }16581658-16591659- mutex_unlock(&conn->chan_lock);16601654}1661165516621656static void l2cap_info_timeout(struct work_struct *work)···16631665 conn->info_state |= L2CAP_INFO_FEAT_MASK_REQ_DONE;16641666 conn->info_ident = 0;1665166716681668+ mutex_lock(&conn->lock);16661669 l2cap_conn_start(conn);16701670+ mutex_unlock(&conn->lock);16671671}1668167216691673/*···1757175717581758 BT_DBG("hcon %p conn %p, err %d", hcon, conn, err);1759175917601760+ mutex_lock(&conn->lock);17611761+17601762 kfree_skb(conn->rx_skb);1761176317621764 skb_queue_purge(&conn->pending_rx);···17771775 /* Force the connection to be immediately dropped */17781776 hcon->disc_timeout = 0;1779177717801780- mutex_lock(&conn->chan_lock);17811781-17821778 /* Kill channels */17831779 list_for_each_entry_safe(chan, l, &conn->chan_l, list) {17841780 l2cap_chan_hold(chan);···17901790 l2cap_chan_put(chan);17911791 }1792179217931793- mutex_unlock(&conn->chan_lock);17941794-17951795- hci_chan_del(conn->hchan);17961796-17971793 if (conn->info_state & L2CAP_INFO_FEAT_MASK_REQ_SENT)17981794 cancel_delayed_work_sync(&conn->info_timer);1799179518001800- hcon->l2cap_data = NULL;17961796+ hci_chan_del(conn->hchan);18011797 conn->hchan = NULL;17981798+17991799+ hcon->l2cap_data = NULL;18001800+ mutex_unlock(&conn->lock);18021801 l2cap_conn_put(conn);18031802}18041803···2915291629162917 BT_DBG("conn %p", conn);2917291829182918- mutex_lock(&conn->chan_lock);29192919-29202919 list_for_each_entry(chan, &conn->chan_l, list) {29212920 if (chan->chan_type != L2CAP_CHAN_RAW)29222921 continue;···29292932 if (chan->ops->recv(chan, nskb))29302933 kfree_skb(nskb);29312934 }29322932-29332933- mutex_unlock(&conn->chan_lock);29342935}2935293629362937/* ---- L2CAP signalling commands ---- */···39473952 goto response;39483953 }3949395439503950- mutex_lock(&conn->chan_lock);39513955 l2cap_chan_lock(pchan);3952395639533957 /* Check if the ACL is secure enough (if not SDP) */···40534059 }4054406040554061 l2cap_chan_unlock(pchan);40564056- mutex_unlock(&conn->chan_lock);40574062 l2cap_chan_put(pchan);40584063}40594064···40914098 BT_DBG("dcid 0x%4.4x scid 0x%4.4x result 0x%2.2x status 0x%2.2x",40924099 dcid, scid, result, status);4093410040944094- mutex_lock(&conn->chan_lock);40954095-40964101 if (scid) {40974102 chan = __l2cap_get_chan_by_scid(conn, scid);40984098- if (!chan) {40994099- err = -EBADSLT;41004100- goto unlock;41014101- }41034103+ if (!chan)41044104+ return -EBADSLT;41024105 } else {41034106 chan = __l2cap_get_chan_by_ident(conn, cmd->ident);41044104- if (!chan) {41054105- err = -EBADSLT;41064106- goto unlock;41074107- }41074107+ if (!chan)41084108+ return -EBADSLT;41084109 }4109411041104111 chan = l2cap_chan_hold_unless_zero(chan);41114111- if (!chan) {41124112- err = -EBADSLT;41134113- goto unlock;41144114- }41124112+ if (!chan)41134113+ return -EBADSLT;4115411441164115 err = 0;41174116···4140415541414156 l2cap_chan_unlock(chan);41424157 l2cap_chan_put(chan);41434143-41444144-unlock:41454145- mutex_unlock(&conn->chan_lock);4146415841474159 return err;41484160}···4428444644294447 
chan->ops->set_shutdown(chan);4430444844314431- l2cap_chan_unlock(chan);44324432- mutex_lock(&conn->chan_lock);44334433- l2cap_chan_lock(chan);44344449 l2cap_chan_del(chan, ECONNRESET);44354435- mutex_unlock(&conn->chan_lock);4436445044374451 chan->ops->close(chan);44384452···44654487 return 0;44664488 }4467448944684468- l2cap_chan_unlock(chan);44694469- mutex_lock(&conn->chan_lock);44704470- l2cap_chan_lock(chan);44714490 l2cap_chan_del(chan, 0);44724472- mutex_unlock(&conn->chan_lock);4473449144744492 chan->ops->close(chan);44754493···46634689 BT_DBG("dcid 0x%4.4x mtu %u mps %u credits %u result 0x%2.2x",46644690 dcid, mtu, mps, credits, result);4665469146664666- mutex_lock(&conn->chan_lock);46674667-46684692 chan = __l2cap_get_chan_by_ident(conn, cmd->ident);46694669- if (!chan) {46704670- err = -EBADSLT;46714671- goto unlock;46724672- }46934693+ if (!chan)46944694+ return -EBADSLT;4673469546744696 err = 0;46754697···47124742 }4713474347144744 l2cap_chan_unlock(chan);47154715-47164716-unlock:47174717- mutex_unlock(&conn->chan_lock);4718474547194746 return err;47204747}···48244857 goto response;48254858 }4826485948274827- mutex_lock(&conn->chan_lock);48284860 l2cap_chan_lock(pchan);4829486148304862 if (!smp_sufficient_security(conn->hcon, pchan->sec_level,···4889492348904924response_unlock:48914925 l2cap_chan_unlock(pchan);48924892- mutex_unlock(&conn->chan_lock);48934926 l2cap_chan_put(pchan);4894492748954928 if (result == L2CAP_CR_PEND)···50225057 goto response;50235058 }5024505950255025- mutex_lock(&conn->chan_lock);50265060 l2cap_chan_lock(pchan);5027506150285062 if (!smp_sufficient_security(conn->hcon, pchan->sec_level,···5096513250975133unlock:50985134 l2cap_chan_unlock(pchan);50995099- mutex_unlock(&conn->chan_lock);51005135 l2cap_chan_put(pchan);5101513651025137response:···5131516851325169 BT_DBG("mtu %u mps %u credits %u result 0x%4.4x", mtu, mps, credits,51335170 result);51345134-51355135- mutex_lock(&conn->chan_lock);5136517151375172 cmd_len -= sizeof(*rsp);51385173···5216525552175256 l2cap_chan_unlock(chan);52185257 }52195219-52205220- mutex_unlock(&conn->chan_lock);5221525852225259 return err;52235260}···53295370 if (cmd_len < sizeof(*rej))53305371 return -EPROTO;5331537253325332- mutex_lock(&conn->chan_lock);53335333-53345373 chan = __l2cap_get_chan_by_ident(conn, cmd->ident);53355374 if (!chan)53365375 goto done;···53435386 l2cap_chan_put(chan);5344538753455388done:53465346- mutex_unlock(&conn->chan_lock);53475389 return 0;53485390}53495391···6797684167986842 BT_DBG("");6799684368446844+ mutex_lock(&conn->lock);68456845+68006846 while ((skb = skb_dequeue(&conn->pending_rx)))68016847 l2cap_recv_frame(conn, skb);68486848+68496849+ mutex_unlock(&conn->lock);68026850}6803685168046852static struct l2cap_conn *l2cap_conn_add(struct hci_conn *hcon)···68416881 conn->local_fixed_chan |= L2CAP_FC_SMP_BREDR;6842688268436883 mutex_init(&conn->ident_lock);68446844- mutex_init(&conn->chan_lock);68846884+ mutex_init(&conn->lock);6845688568466886 INIT_LIST_HEAD(&conn->chan_l);68476887 INIT_LIST_HEAD(&conn->users);···70327072 }70337073 }7034707470357035- mutex_lock(&conn->chan_lock);70757075+ mutex_lock(&conn->lock);70367076 l2cap_chan_lock(chan);7037707770387078 if (cid && __l2cap_get_chan_by_dcid(conn, cid)) {···7073711370747114chan_unlock:70757115 l2cap_chan_unlock(chan);70767076- mutex_unlock(&conn->chan_lock);71167116+ mutex_unlock(&conn->lock);70777117done:70787118 hci_dev_unlock(hdev);70797119 hci_dev_put(hdev);···7285732572867326 BT_DBG("conn %p status 0x%2.2x encrypt %u", conn, 
status, encrypt);7287732772887288- mutex_lock(&conn->chan_lock);73287328+ mutex_lock(&conn->lock);7289732972907330 list_for_each_entry(chan, &conn->chan_l, list) {72917331 l2cap_chan_lock(chan);···73597399 l2cap_chan_unlock(chan);73607400 }7361740173627362- mutex_unlock(&conn->chan_lock);74027402+ mutex_unlock(&conn->lock);73637403}7364740473657405/* Append fragment into frame respecting the maximum len of rx_skb */···74267466 conn->rx_len = 0;74277467}7428746874697469+struct l2cap_conn *l2cap_conn_hold_unless_zero(struct l2cap_conn *c)74707470+{74717471+ if (!c)74727472+ return NULL;74737473+74747474+ BT_DBG("conn %p orig refcnt %u", c, kref_read(&c->ref));74757475+74767476+ if (!kref_get_unless_zero(&c->ref))74777477+ return NULL;74787478+74797479+ return c;74807480+}74817481+74297482void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags)74307483{74317431- struct l2cap_conn *conn = hcon->l2cap_data;74847484+ struct l2cap_conn *conn;74327485 int len;74867486+74877487+ /* Lock hdev to access l2cap_data to avoid race with l2cap_conn_del */74887488+ hci_dev_lock(hcon->hdev);74897489+74907490+ conn = hcon->l2cap_data;7433749174347492 if (!conn)74357493 conn = l2cap_conn_add(hcon);7436749474377437- if (!conn)74387438- goto drop;74957495+ conn = l2cap_conn_hold_unless_zero(conn);74967496+74977497+ hci_dev_unlock(hcon->hdev);74987498+74997499+ if (!conn) {75007500+ kfree_skb(skb);75017501+ return;75027502+ }7439750374407504 BT_DBG("conn %p len %u flags 0x%x", conn, skb->len, flags);75057505+75067506+ mutex_lock(&conn->lock);7441750774427508 switch (flags) {74437509 case ACL_START:···74897503 if (len == skb->len) {74907504 /* Complete frame received */74917505 l2cap_recv_frame(conn, skb);74927492- return;75067506+ goto unlock;74937507 }7494750874957509 BT_DBG("Start: total len %d, frag len %u", len, skb->len);···7553756775547568drop:75557569 kfree_skb(skb);75707570+unlock:75717571+ mutex_unlock(&conn->lock);75727572+ l2cap_conn_put(conn);75567573}7557757475587575static struct hci_cb l2cap_cb = {
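l2cap_conn_hold_unless_zero() follows the same shape as the existing *_hold_unless_zero() helpers: the reference is taken only if the object has not already dropped to zero, which is what lets l2cap_recv_acldata() safely pin the connection it finds under the hdev lock. The pattern in general form (struct obj is a placeholder type):

        static struct obj *obj_hold_unless_zero(struct obj *o)
        {
                /* NULL once the last reference is gone - caller must handle it */
                if (!o || !kref_get_unless_zero(&o->ref))
                        return NULL;
                return o;
        }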
+7-8
net/bluetooth/l2cap_sock.c
···13261326 /* prevent sk structure from being freed whilst unlocked */13271327 sock_hold(sk);1328132813291329- chan = l2cap_pi(sk)->chan;13301329 /* prevent chan structure from being freed whilst unlocked */13311331- l2cap_chan_hold(chan);13301330+ chan = l2cap_chan_hold_unless_zero(l2cap_pi(sk)->chan);13311331+ if (!chan)13321332+ goto shutdown_already;1332133313331334 BT_DBG("chan %p state %s", chan, state_to_string(chan->state));13341335···13591358 release_sock(sk);1360135913611360 l2cap_chan_lock(chan);13621362- conn = chan->conn;13631363- if (conn)13641364- /* prevent conn structure from being freed */13651365- l2cap_conn_get(conn);13611361+ /* prevent conn structure from being freed */13621362+ conn = l2cap_conn_hold_unless_zero(chan->conn);13661363 l2cap_chan_unlock(chan);1367136413681365 if (conn)13691366 /* mutex lock must be taken before l2cap_chan_lock() */13701370- mutex_lock(&conn->chan_lock);13671367+ mutex_lock(&conn->lock);1371136813721369 l2cap_chan_lock(chan);13731370 l2cap_chan_close(chan, 0);13741371 l2cap_chan_unlock(chan);1375137213761373 if (conn) {13771377- mutex_unlock(&conn->chan_lock);13741374+ mutex_unlock(&conn->lock);13781375 l2cap_conn_put(conn);13791376 }13801377
···418418{419419 int hlen = LL_RESERVED_SPACE(dev);420420 int tlen = dev->needed_tailroom;421421- struct sock *sk = dev_net(dev)->ipv6.ndisc_sk;422421 struct sk_buff *skb;423422424423 skb = alloc_skb(hlen + sizeof(struct ipv6hdr) + len + tlen, GFP_ATOMIC);425425- if (!skb) {426426- ND_PRINTK(0, err, "ndisc: %s failed to allocate an skb\n",427427- __func__);424424+ if (!skb)428425 return NULL;429429- }430426431427 skb->protocol = htons(ETH_P_IPV6);432428 skb->dev = dev;···433437 /* Manually assign socket ownership as we avoid calling434438 * sock_alloc_send_pskb() to bypass wmem buffer limits435439 */436436- skb_set_owner_w(skb, sk);440440+ rcu_read_lock();441441+ skb_set_owner_w(skb, dev_net_rcu(dev)->ipv6.ndisc_sk);442442+ rcu_read_unlock();437443438444 return skb;439445}···471473void ndisc_send_skb(struct sk_buff *skb, const struct in6_addr *daddr,472474 const struct in6_addr *saddr)473475{474474- struct dst_entry *dst = skb_dst(skb);475475- struct net *net = dev_net(skb->dev);476476- struct sock *sk = net->ipv6.ndisc_sk;477477- struct inet6_dev *idev;478478- int err;479476 struct icmp6hdr *icmp6h = icmp6_hdr(skb);477477+ struct dst_entry *dst = skb_dst(skb);478478+ struct inet6_dev *idev;479479+ struct net *net;480480+ struct sock *sk;481481+ int err;480482 u8 type;481483482484 type = icmp6h->icmp6_type;483485486486+ rcu_read_lock();487487+488488+ net = dev_net_rcu(skb->dev);489489+ sk = net->ipv6.ndisc_sk;484490 if (!dst) {485491 struct flowi6 fl6;486492 int oif = skb->dev->ifindex;···492490 icmpv6_flow_init(sk, &fl6, type, saddr, daddr, oif);493491 dst = icmp6_dst_alloc(skb->dev, &fl6);494492 if (IS_ERR(dst)) {493493+ rcu_read_unlock();495494 kfree_skb(skb);496495 return;497496 }···507504508505 ip6_nd_hdr(skb, saddr, daddr, READ_ONCE(inet6_sk(sk)->hop_limit), skb->len);509506510510- rcu_read_lock();511507 idev = __in6_dev_get(dst->dev);512508 IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTREQUESTS);513509···16961694 bool ret;1697169516981696 if (netif_is_l3_master(skb->dev)) {16991699- dev = __dev_get_by_index(dev_net(skb->dev), IPCB(skb)->iif);16971697+ dev = dev_get_by_index_rcu(dev_net(skb->dev), IPCB(skb)->iif);17001698 if (!dev)17011699 return;17021700 }
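dev_net_rcu() is the variant of dev_net() that must be called from an RCU read-side section, and both hunks wrap exactly the region that uses the result. The usage boils down to:

        struct net *net;

        rcu_read_lock();
        net = dev_net_rcu(dev);         /* only valid inside this section */
        /* ... read per-netns state, e.g. net->ipv6.ndisc_sk ... */
        rcu_read_unlock();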
+6-1
net/ipv6/route.c
···31963196{31973197 struct net_device *dev = dst->dev;31983198 unsigned int mtu = dst_mtu(dst);31993199- struct net *net = dev_net(dev);31993199+ struct net *net;3200320032013201 mtu -= sizeof(struct ipv6hdr) + sizeof(struct tcphdr);3202320232033203+ rcu_read_lock();32043204+32053205+ net = dev_net_rcu(dev);32033206 if (mtu < net->ipv6.sysctl.ip6_rt_min_advmss)32043207 mtu = net->ipv6.sysctl.ip6_rt_min_advmss;32083208+32093209+ rcu_read_unlock();3205321032063211 /*32073212 * Maximal non-jumbo IPv6 payload is IPV6_MAXPLEN and
···21012101{21022102 struct ovs_header *ovs_header;21032103 struct ovs_vport_stats vport_stats;21042104+ struct net *net_vport;21042105 int err;2105210621062107 ovs_header = genlmsg_put(skb, portid, seq, &dp_vport_genl_family,···21182117 nla_put_u32(skb, OVS_VPORT_ATTR_IFINDEX, vport->dev->ifindex))21192118 goto nla_put_failure;2120211921212121- if (!net_eq(net, dev_net(vport->dev))) {21222122- int id = peernet2id_alloc(net, dev_net(vport->dev), gfp);21202120+ rcu_read_lock();21212121+ net_vport = dev_net_rcu(vport->dev);21222122+ if (!net_eq(net, net_vport)) {21232123+ int id = peernet2id_alloc(net, net_vport, GFP_ATOMIC);2123212421242125 if (nla_put_s32(skb, OVS_VPORT_ATTR_NETNSID, id))21252125- goto nla_put_failure;21262126+ goto nla_put_failure_unlock;21262127 }21282128+ rcu_read_unlock();2127212921282130 ovs_vport_get_stats(vport, &vport_stats);21292131 if (nla_put_64bit(skb, OVS_VPORT_ATTR_STATS,···21472143 genlmsg_end(skb, ovs_header);21482144 return 0;2149214521462146+nla_put_failure_unlock:21472147+ rcu_read_unlock();21502148nla_put_failure:21512149 err = -EMSGSIZE;21522150error:
+3-4
net/rxrpc/ar-internal.h
···327327 * packet with a maximum set of jumbo subpackets or a PING ACK padded328328 * out to 64K with zeropages for PMTUD.329329 */330330- struct kvec kvec[RXRPC_MAX_NR_JUMBO > 3 + 16 ?331331- RXRPC_MAX_NR_JUMBO : 3 + 16];330330+ struct kvec kvec[1 + RXRPC_MAX_NR_JUMBO > 3 + 16 ?331331+ 1 + RXRPC_MAX_NR_JUMBO : 3 + 16];332332};333333334334/*···874874#define RXRPC_TXBUF_RESENT 0x100 /* Set if has been resent */875875 __be16 cksum; /* Checksum to go in header */876876 bool jumboable; /* Can be non-terminal jumbo subpacket */877877- u8 nr_kvec; /* Amount of kvec[] used */878878- struct kvec kvec[1];877877+ void *data; /* Data with preceding jumbo header */879878};880879881880static inline bool rxrpc_sending_to_server(const struct rxrpc_txbuf *txb)
+35-15
net/rxrpc/output.c
···428428static size_t rxrpc_prepare_data_subpacket(struct rxrpc_call *call,429429 struct rxrpc_send_data_req *req,430430 struct rxrpc_txbuf *txb,431431+ struct rxrpc_wire_header *whdr,431432 rxrpc_serial_t serial, int subpkt)432433{433433- struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base;434434- struct rxrpc_jumbo_header *jumbo = (void *)(whdr + 1) - sizeof(*jumbo);434434+ struct rxrpc_jumbo_header *jumbo = txb->data - sizeof(*jumbo);435435 enum rxrpc_req_ack_trace why;436436 struct rxrpc_connection *conn = call->conn;437437- struct kvec *kv = &call->local->kvec[subpkt];437437+ struct kvec *kv = &call->local->kvec[1 + subpkt];438438 size_t len = txb->pkt_len;439439 bool last;440440 u8 flags;···491491 }492492dont_set_request_ack:493493494494- /* The jumbo header overlays the wire header in the txbuf. */494494+ /* There's a jumbo header prepended to the data if we need it. */495495 if (subpkt < req->n - 1)496496 flags |= RXRPC_JUMBO_PACKET;497497 else498498 flags &= ~RXRPC_JUMBO_PACKET;499499 if (subpkt == 0) {500500 whdr->flags = flags;501501- whdr->serial = htonl(txb->serial);502501 whdr->cksum = txb->cksum;503503- whdr->serviceId = htons(conn->service_id);504504- kv->iov_base = whdr;505505- len += sizeof(*whdr);502502+ kv->iov_base = txb->data;506503 } else {507504 jumbo->flags = flags;508505 jumbo->pad = 0;···532535/*533536 * Prepare a (jumbo) packet for transmission.534537 */535535-static size_t rxrpc_prepare_data_packet(struct rxrpc_call *call, struct rxrpc_send_data_req *req)538538+static size_t rxrpc_prepare_data_packet(struct rxrpc_call *call,539539+ struct rxrpc_send_data_req *req,540540+ struct rxrpc_wire_header *whdr)536541{537542 struct rxrpc_txqueue *tq = req->tq;538543 rxrpc_serial_t serial;···547548548549 /* Each transmission of a Tx packet needs a new serial number */549550 serial = rxrpc_get_next_serials(call->conn, req->n);551551+552552+ whdr->epoch = htonl(call->conn->proto.epoch);553553+ whdr->cid = htonl(call->cid);554554+ whdr->callNumber = htonl(call->call_id);555555+ whdr->seq = htonl(seq);556556+ whdr->serial = htonl(serial);557557+ whdr->type = RXRPC_PACKET_TYPE_DATA;558558+ whdr->flags = 0;559559+ whdr->userStatus = 0;560560+ whdr->securityIndex = call->security_ix;561561+ whdr->_rsvd = 0;562562+ whdr->serviceId = htons(call->conn->service_id);550563551564 call->tx_last_serial = serial + req->n - 1;552565 call->tx_last_sent = req->now;···587576 if (i + 1 == req->n)588577 /* Only sample the last subpacket in a jumbo. 
*/589578 __set_bit(ix, &tq->rtt_samples);590590- len += rxrpc_prepare_data_subpacket(call, req, txb, serial, i);579579+ len += rxrpc_prepare_data_subpacket(call, req, txb, whdr, serial, i);591580 serial++;592581 seq++;593582 i++;···629618 }630619631620 rxrpc_set_keepalive(call, req->now);621621+ page_frag_free(whdr);632622 return len;633623}634624···638626 */639627void rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_send_data_req *req)640628{629629+ struct rxrpc_wire_header *whdr;641630 struct rxrpc_connection *conn = call->conn;642631 enum rxrpc_tx_point frag;643632 struct rxrpc_txqueue *tq = req->tq;644633 struct rxrpc_txbuf *txb;645634 struct msghdr msg;646635 rxrpc_seq_t seq = req->seq;647647- size_t len;636636+ size_t len = sizeof(*whdr);648637 bool new_call = test_bit(RXRPC_CALL_BEGAN_RX_TIMER, &call->flags);649638 int ret, stat_ix;650639651640 _enter("%x,%x-%x", tq->qbase, seq, seq + req->n - 1);652641642642+ whdr = page_frag_alloc(&call->local->tx_alloc, sizeof(*whdr), GFP_NOFS);643643+ if (!whdr)644644+ return; /* Drop the packet if no memory. */645645+646646+ call->local->kvec[0].iov_base = whdr;647647+ call->local->kvec[0].iov_len = sizeof(*whdr);648648+653649 stat_ix = umin(req->n, ARRAY_SIZE(call->rxnet->stat_tx_jumbo)) - 1;654650 atomic_inc(&call->rxnet->stat_tx_jumbo[stat_ix]);655651656656- len = rxrpc_prepare_data_packet(call, req);652652+ len += rxrpc_prepare_data_packet(call, req, whdr);657653 txb = tq->bufs[seq & RXRPC_TXQ_MASK];658654659659- iov_iter_kvec(&msg.msg_iter, WRITE, call->local->kvec, req->n, len);655655+ iov_iter_kvec(&msg.msg_iter, WRITE, call->local->kvec, 1 + req->n, len);660656661657 msg.msg_name = &call->peer->srx.transport;662658 msg.msg_namelen = call->peer->srx.transport_len;···715695716696 if (ret == -EMSGSIZE) {717697 rxrpc_inc_stat(call->rxnet, stat_tx_data_send_msgsize);718718- trace_rxrpc_tx_packet(call->debug_id, call->local->kvec[0].iov_base, frag);698698+ trace_rxrpc_tx_packet(call->debug_id, whdr, frag);719699 ret = 0;720700 } else if (ret < 0) {721701 rxrpc_inc_stat(call->rxnet, stat_tx_data_send_fail);722702 trace_rxrpc_tx_fail(call->debug_id, txb->serial, ret, frag);723703 } else {724724- trace_rxrpc_tx_packet(call->debug_id, call->local->kvec[0].iov_base, frag);704704+ trace_rxrpc_tx_packet(call->debug_id, whdr, frag);725705 }726706727707 rxrpc_tx_backoff(call, ret);
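With the wire header no longer embedded in the txbuf, a transmission is assembled from a separately allocated header in kvec[0] plus one kvec per subpacket. A stripped-down sketch of that shape, omitting error paths and rxrpc-specific fields (nr_subpackets is a placeholder; the helpers are the ones visible in this hunk):

        struct rxrpc_wire_header *whdr;
        struct msghdr msg = {};
        size_t len = sizeof(*whdr);

        whdr = page_frag_alloc(&local->tx_alloc, sizeof(*whdr), GFP_NOFS);
        if (!whdr)
                return;                         /* drop the packet if no memory */

        local->kvec[0].iov_base = whdr;
        local->kvec[0].iov_len  = sizeof(*whdr);
        /*
         * kvec[1..n] point at each subpacket's txb->data (with the jumbo
         * header immediately preceding it for all but the first) and add
         * their lengths to @len.
         */
        iov_iter_kvec(&msg.msg_iter, WRITE, local->kvec, 1 + nr_subpackets, len);
        /* ... hand @msg to the UDP socket, then release the header ... */
        page_frag_free(whdr);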
···824824 */825825 lock_sock_nested(sk, level);826826827827- sock_orphan(sk);827827+ /* Indicate to vsock_remove_sock() that the socket is being released and828828+ * can be removed from the bound_table. Unlike transport reassignment829829+ * case, where the socket must remain bound despite vsock_remove_sock()830830+ * being called from the transport release() callback.831831+ */832832+ sock_set_flag(sk, SOCK_DEAD);828833829834 if (vsk->transport)830835 vsk->transport->release(vsk);831836 else if (sock_type_connectible(sk->sk_type))832837 vsock_remove_sock(vsk);833838839839+ sock_orphan(sk);834840 sk->sk_shutdown = SHUTDOWN_MASK;835841836842 skb_queue_purge(&sk->sk_receive_queue);
···11+// SPDX-License-Identifier: GPL-2.0-only22+33+//! Abstractions for the faux bus.44+//!55+//! This module provides bindings for working with faux devices in kernel modules.66+//!77+//! C header: [`include/linux/device/faux.h`]88+99+use crate::{bindings, device, error::code::*, prelude::*};1010+use core::ptr::{addr_of_mut, null, null_mut, NonNull};1111+1212+/// The registration of a faux device.1313+///1414+/// This type represents the registration of a [`struct faux_device`]. When an instance of this type1515+/// is dropped, its respective faux device will be unregistered from the system.1616+///1717+/// # Invariants1818+///1919+/// `self.0` always holds a valid pointer to an initialized and registered [`struct faux_device`].2020+///2121+/// [`struct faux_device`]: srctree/include/linux/device/faux.h2222+#[repr(transparent)]2323+pub struct Registration(NonNull<bindings::faux_device>);2424+2525+impl Registration {2626+ /// Create and register a new faux device with the given name.2727+ pub fn new(name: &CStr) -> Result<Self> {2828+ // SAFETY:2929+ // - `name` is copied by this function into its own storage3030+ // - `faux_ops` is safe to leave NULL according to the C API3131+ let dev = unsafe { bindings::faux_device_create(name.as_char_ptr(), null_mut(), null()) };3232+3333+ // The above function will return either a valid device, or NULL on failure3434+ // INVARIANT: The device will remain registered until faux_device_destroy() is called, which3535+ // happens in our Drop implementation.3636+ Ok(Self(NonNull::new(dev).ok_or(ENODEV)?))3737+ }3838+3939+ fn as_raw(&self) -> *mut bindings::faux_device {4040+ self.0.as_ptr()4141+ }4242+}4343+4444+impl AsRef<device::Device> for Registration {4545+ fn as_ref(&self) -> &device::Device {4646+ // SAFETY: The underlying `device` in `faux_device` is guaranteed by the C API to be4747+ // a valid initialized `device`.4848+ unsafe { device::Device::as_ref(addr_of_mut!((*self.as_raw()).dev)) }4949+ }5050+}5151+5252+impl Drop for Registration {5353+ fn drop(&mut self) {5454+ // SAFETY: `self.0` is a valid registered faux_device via our type invariants.5555+ unsafe { bindings::faux_device_destroy(self.as_raw()) }5656+ }5757+}5858+5959+// SAFETY: The faux device API is thread-safe as guaranteed by the device core, as long as6060+// faux_device_destroy() is guaranteed to only be called once - which is guaranteed by our type not6161+// having Copy/Clone.6262+unsafe impl Send for Registration {}6363+6464+// SAFETY: The faux device API is thread-safe as guaranteed by the device core, as long as6565+// faux_device_destroy() is guaranteed to only be called once - which is guaranteed by our type not6666+// having Copy/Clone.6767+unsafe impl Sync for Registration {}
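Registration is a thin RAII wrapper around the C faux bus API it binds, faux_device_create() and faux_device_destroy() from include/linux/device/faux.h. The equivalent C usage, for comparison (the device name is only an example):

        struct faux_device *fdev;

        /* parent and ops may both be NULL, exactly as the Rust binding passes them */
        fdev = faux_device_create("demo-faux", NULL, NULL);
        if (!fdev)
                return -ENODEV;

        /* ... use &fdev->dev as an ordinary struct device ... */

        faux_device_destroy(fdev);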
+1
rust/kernel/lib.rs
···4646pub mod devres;4747pub mod driver;4848pub mod error;4949+pub mod faux;4950#[cfg(CONFIG_RUST_FW_LOADER_ABSTRACTIONS)]5051pub mod firmware;5152pub mod fs;
+1-1
rust/kernel/rbtree.rs
···11491149/// # Invariants11501150/// - `parent` may be null if the new node becomes the root.11511151/// - `child_field_of_parent` is a valid pointer to the left-child or right-child of `parent`. If `parent` is11521152-/// null, it is a pointer to the root of the [`RBTree`].11521152+/// null, it is a pointer to the root of the [`RBTree`].11531153struct RawVacantEntry<'a, K, V> {11541154 rbtree: *mut RBTree<K, V>,11551155 /// The node that will become the parent of the new node if we insert one.
···61616262 If unsure, say N.63636464+config SAMPLE_RUST_DRIVER_FAUX6565+ tristate "Faux Driver"6666+ help6767+ This option builds the Rust Faux driver sample.6868+6969+ To compile this as a module, choose M here:7070+ the module will be called rust_driver_faux.7171+7272+ If unsure, say N.7373+6474config SAMPLE_RUST_HOSTPROGS6575 bool "Host programs"6676 help
···190190191191 /*192192 * Set mod->is_gpl_compatible to true by default. If MODULE_LICENSE()193193- * is missing, do not check the use for EXPORT_SYMBOL_GPL() becasue194194- * modpost will exit wiht error anyway.193193+ * is missing, do not check the use for EXPORT_SYMBOL_GPL() because194194+ * modpost will exit with an error anyway.195195 */196196 mod->is_gpl_compatible = true;197197
+2-2
scripts/package/install-extmod-build
···6262 #6363 # Clear VPATH and srcroot because the source files reside in the output6464 # directory.6565- # shellcheck disable=SC2016 # $(MAKE), $(CC), and $(build) will be expanded by Make6666- "${MAKE}" run-command KBUILD_RUN_COMMAND='+$(MAKE) HOSTCC="$(CC)" VPATH= srcroot=. $(build)='"${destdir}"/scripts6565+ # shellcheck disable=SC2016 # $(MAKE) and $(build) will be expanded by Make6666+ "${MAKE}" run-command KBUILD_RUN_COMMAND='+$(MAKE) HOSTCC='"${CC}"' VPATH= srcroot=. $(build)='"${destdir}"/scripts67676868 rm -f "${destdir}/scripts/Kbuild"6969fi
+112-33
security/tomoyo/common.c
···19811981}1982198219831983/**19841984+ * tomoyo_numscan - sscanf() which stores the length of a decimal integer value.19851985+ *19861986+ * @str: String to scan.19871987+ * @head: Leading string that must start with.19881988+ * @width: Pointer to "int" for storing length of a decimal integer value after @head.19891989+ * @tail: Optional character that must match after a decimal integer value.19901990+ *19911991+ * Returns whether @str starts with @head and a decimal value follows @head.19921992+ */19931993+static bool tomoyo_numscan(const char *str, const char *head, int *width, const char tail)19941994+{19951995+ const char *cp;19961996+ const int n = strlen(head);19971997+19981998+ if (!strncmp(str, head, n)) {19991999+ cp = str + n;20002000+ while (*cp && *cp >= '0' && *cp <= '9')20012001+ cp++;20022002+ if (*cp == tail || !tail) {20032003+ *width = cp - (str + n);20042004+ return *width != 0;20052005+ }20062006+ }20072007+ *width = 0;20082008+ return 0;20092009+}20102010+20112011+/**20122012+ * tomoyo_patternize_path - Make patterns for file path. Used by learning mode.20132013+ *20142014+ * @buffer: Destination buffer.20152015+ * @len: Size of @buffer.20162016+ * @entry: Original line.20172017+ *20182018+ * Returns nothing.20192019+ */20202020+static void tomoyo_patternize_path(char *buffer, const int len, char *entry)20212021+{20222022+ int width;20232023+ char *cp = entry;20242024+20252025+ /* Nothing to do if this line is not for "file" related entry. */20262026+ if (strncmp(entry, "file ", 5))20272027+ goto flush;20282028+ /*20292029+ * Nothing to do if there is no colon in this line, for this rewriting20302030+ * applies to only filesystems where numeric values in the path are volatile.20312031+ */20322032+ cp = strchr(entry + 5, ':');20332033+ if (!cp) {20342034+ cp = entry;20352035+ goto flush;20362036+ }20372037+ /* Flush e.g. "file ioctl" part. */20382038+ while (*cp != ' ')20392039+ cp--;20402040+ *cp++ = '\0';20412041+ tomoyo_addprintf(buffer, len, "%s ", entry);20422042+ /* e.g. file ioctl pipe:[$INO] $CMD */20432043+ if (tomoyo_numscan(cp, "pipe:[", &width, ']')) {20442044+ cp += width + 7;20452045+ tomoyo_addprintf(buffer, len, "pipe:[\\$]");20462046+ goto flush;20472047+ }20482048+ /* e.g. file ioctl socket:[$INO] $CMD */20492049+ if (tomoyo_numscan(cp, "socket:[", &width, ']')) {20502050+ cp += width + 9;20512051+ tomoyo_addprintf(buffer, len, "socket:[\\$]");20522052+ goto flush;20532053+ }20542054+ if (!strncmp(cp, "proc:/self", 10)) {20552055+ /* e.g. file read proc:/self/task/$TID/fdinfo/$FD */20562056+ cp += 10;20572057+ tomoyo_addprintf(buffer, len, "proc:/self");20582058+ } else if (tomoyo_numscan(cp, "proc:/", &width, 0)) {20592059+ /* e.g. file read proc:/$PID/task/$TID/fdinfo/$FD */20602060+ /*20612061+ * Don't patternize $PID part if $PID == 1, for several20622062+ * programs access only files in /proc/1/ directory.20632063+ */20642064+ cp += width + 6;20652065+ if (width == 1 && *(cp - 1) == '1')20662066+ tomoyo_addprintf(buffer, len, "proc:/1");20672067+ else20682068+ tomoyo_addprintf(buffer, len, "proc:/\\$");20692069+ } else {20702070+ goto flush;20712071+ }20722072+ /* Patternize $TID part if "/task/" follows. */20732073+ if (tomoyo_numscan(cp, "/task/", &width, 0)) {20742074+ cp += width + 6;20752075+ tomoyo_addprintf(buffer, len, "/task/\\$");20762076+ }20772077+ /* Patternize $FD part if "/fd/" or "/fdinfo/" follows. 
*/20782078+ if (tomoyo_numscan(cp, "/fd/", &width, 0)) {20792079+ cp += width + 4;20802080+ tomoyo_addprintf(buffer, len, "/fd/\\$");20812081+ } else if (tomoyo_numscan(cp, "/fdinfo/", &width, 0)) {20822082+ cp += width + 8;20832083+ tomoyo_addprintf(buffer, len, "/fdinfo/\\$");20842084+ }20852085+flush:20862086+ /* Flush remaining part if any. */20872087+ if (*cp)20882088+ tomoyo_addprintf(buffer, len, "%s", cp);20892089+}20902090+20912091+/**19842092 * tomoyo_add_entry - Add an ACL to current thread's domain. Used by learning mode.19852093 *19862094 * @domain: Pointer to "struct tomoyo_domain_info".···21112003 if (!cp)21122004 return;21132005 *cp++ = '\0';21142114- len = strlen(cp) + 1;20062006+ /* Reserve some space for potentially using patterns. */20072007+ len = strlen(cp) + 16;21152008 /* strstr() will return NULL if ordering is wrong. */21162009 if (*cp == 'f') {21172010 argv0 = strstr(header, " argv[]={ \"");···21292020 if (symlink)21302021 len += tomoyo_truncate(symlink + 1) + 1;21312022 }21322132- buffer = kmalloc(len, GFP_NOFS);20232023+ buffer = kmalloc(len, GFP_NOFS | __GFP_ZERO);21332024 if (!buffer)21342025 return;21352135- snprintf(buffer, len - 1, "%s", cp);21362136- if (*cp == 'f' && strchr(buffer, ':')) {21372137- /* Automatically replace 2 or more digits with \$ pattern. */21382138- char *cp2;21392139-21402140- /* e.g. file read proc:/$PID/stat */21412141- cp = strstr(buffer, " proc:/");21422142- if (cp && simple_strtoul(cp + 7, &cp2, 10) >= 10 && *cp2 == '/') {21432143- *(cp + 7) = '\\';21442144- *(cp + 8) = '$';21452145- memmove(cp + 9, cp2, strlen(cp2) + 1);21462146- goto ok;21472147- }21482148- /* e.g. file ioctl pipe:[$INO] $CMD */21492149- cp = strstr(buffer, " pipe:[");21502150- if (cp && simple_strtoul(cp + 7, &cp2, 10) >= 10 && *cp2 == ']') {21512151- *(cp + 7) = '\\';21522152- *(cp + 8) = '$';21532153- memmove(cp + 9, cp2, strlen(cp2) + 1);21542154- goto ok;21552155- }21562156- /* e.g. file ioctl socket:[$INO] $CMD */21572157- cp = strstr(buffer, " socket:[");21582158- if (cp && simple_strtoul(cp + 9, &cp2, 10) >= 10 && *cp2 == ']') {21592159- *(cp + 9) = '\\';21602160- *(cp + 10) = '$';21612161- memmove(cp + 11, cp2, strlen(cp2) + 1);21622162- goto ok;21632163- }21642164- }21652165-ok:20262026+ tomoyo_patternize_path(buffer, len, cp);21662027 if (realpath)21672028 tomoyo_addprintf(buffer, len, " exec.%s", realpath);21682029 if (argv0)
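The effect of the new helpers is easiest to see on concrete audit lines; the transformations below follow directly from the code above (worked examples, not taken from the patch description; @entry is the ACL text handed to tomoyo_add_entry()):

        /*
         * "file read proc:/2476/task/2477/fd/3" -> "file read proc:/\$/task/\$/fd/\$"
         * "file read proc:/1/environ"           -> "file read proc:/1/environ"  (PID 1 kept)
         * "file ioctl socket:[131757] 0x5401"   -> "file ioctl socket:[\$] 0x5401"
         */
        char buffer[256] = "";          /* zeroed, as the __GFP_ZERO allocation is */

        tomoyo_patternize_path(buffer, sizeof(buffer), entry);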
+1-1
security/tomoyo/domain.c
···920920#ifdef CONFIG_MMU921921 /*922922 * This is called at execve() time in order to dig around923923- * in the argv/environment of the new proceess923923+ * in the argv/environment of the new process924924 * (represented by bprm).925925 */926926 mmap_read_lock(bprm->mm);
···549549 .id = LSM_ID_TOMOYO,550550};551551552552-/*553553- * tomoyo_security_ops is a "struct security_operations" which is used for554554- * registering TOMOYO.555555- */552552+/* tomoyo_hooks is used for registering TOMOYO. */556553static struct security_hook_list tomoyo_hooks[] __ro_after_init = {557554 LSM_HOOK_INIT(cred_prepare, tomoyo_cred_prepare),558555 LSM_HOOK_INIT(bprm_committed_creds, tomoyo_bprm_committed_creds),
+11-1
tools/objtool/check.c
···227227 str_ends_with(func->name, "_4core9panicking18panic_bounds_check") ||228228 str_ends_with(func->name, "_4core9panicking19assert_failed_inner") ||229229 str_ends_with(func->name, "_4core9panicking36panic_misaligned_pointer_dereference") ||230230+ strstr(func->name, "_4core9panicking13assert_failed") ||230231 strstr(func->name, "_4core9panicking11panic_const24panic_const_") ||231232 (strstr(func->name, "_4core5slice5index24slice_") &&232233 str_ends_with(func->name, "_fail"));···19681967 reloc_addend(reloc) == pfunc->offset)19691968 break;1970196919701970+ /*19711971+ * Clang sometimes leaves dangling unused jump table entries19721972+ * which point to the end of the function. Ignore them.19731973+ */19741974+ if (reloc->sym->sec == pfunc->sec &&19751975+ reloc_addend(reloc) == pfunc->offset + pfunc->len)19761976+ goto next;19771977+19711978 dest_insn = find_insn(file, reloc->sym->sec, reloc_addend(reloc));19721979 if (!dest_insn)19731980 break;···19931984 alt->insn = dest_insn;19941985 alt->next = insn->alts;19951986 insn->alts = alt;19871987+next:19961988 prev_offset = reloc_offset(reloc);19971989 }19981990···2266225622672257 if (sec->sh.sh_entsize != 8) {22682258 static bool warned = false;22692269- if (!warned) {22592259+ if (!warned && opts.verbose) {22702260 WARN("%s: dodgy linker, sh_entsize != 8", sec->name);22712261 warned = true;22722262 }
···49495050 SCX_ASSERT(is_cpu_online());51515252- skel = hotplug__open_and_load();5353- SCX_ASSERT(skel);5252+ skel = hotplug__open();5353+ SCX_FAIL_IF(!skel, "Failed to open");5454+ SCX_ENUM_INIT(skel);5555+ SCX_FAIL_IF(hotplug__load(skel), "Failed to load skel");54565557 /* Testing the offline -> online path, so go offline before starting */5658 if (onlining)
···15151616#define SCHED_EXT 717171818-static struct init_enable_count *1919-open_load_prog(bool global)2020-{2121- struct init_enable_count *skel;2222-2323- skel = init_enable_count__open();2424- SCX_BUG_ON(!skel, "Failed to open skel");2525-2626- if (!global)2727- skel->struct_ops.init_enable_count_ops->flags |= SCX_OPS_SWITCH_PARTIAL;2828-2929- SCX_BUG_ON(init_enable_count__load(skel), "Failed to load skel");3030-3131- return skel;3232-}3333-3418static enum scx_test_status run_test(bool global)3519{3620 struct init_enable_count *skel;···2440 struct sched_param param = {};2541 pid_t pids[num_pre_forks];26422727- skel = open_load_prog(global);4343+ skel = init_enable_count__open();4444+ SCX_FAIL_IF(!skel, "Failed to open");4545+ SCX_ENUM_INIT(skel);4646+4747+ if (!global)4848+ skel->struct_ops.init_enable_count_ops->flags |= SCX_OPS_SWITCH_PARTIAL;4949+5050+ SCX_FAIL_IF(init_enable_count__load(skel), "Failed to load skel");28512952 /*3053 * Fork a bunch of children before we attach the scheduler so that we···150159151160struct scx_test init_enable_count = {152161 .name = "init_enable_count",153153- .description = "Verify we do the correct amount of counting of init, "162162+ .description = "Verify we correctly count the occurrences of init, "154163 "enable, etc callbacks.",155164 .run = run,156165};
+5-2
tools/testing/selftests/sched_ext/maximal.c
···1414{1515 struct maximal *skel;16161717- skel = maximal__open_and_load();1818- SCX_FAIL_IF(!skel, "Failed to open and load skel");1717+ skel = maximal__open();1818+ SCX_FAIL_IF(!skel, "Failed to open");1919+ SCX_ENUM_INIT(skel);2020+ SCX_FAIL_IF(maximal__load(skel), "Failed to load skel");2121+1922 *ctx = skel;20232124 return SCX_TEST_PASS;
+1-1
tools/testing/selftests/sched_ext/maybe_null.c
···43434444struct scx_test maybe_null = {4545 .name = "maybe_null",4646- .description = "Verify if PTR_MAYBE_NULL work for .dispatch",4646+ .description = "Verify if PTR_MAYBE_NULL works for .dispatch",4747 .run = run,4848};4949REGISTER_SCX_TEST(&maybe_null)
+5-5
tools/testing/selftests/sched_ext/minimal.c
···1515{1616 struct minimal *skel;17171818- skel = minimal__open_and_load();1919- if (!skel) {2020- SCX_ERR("Failed to open and load skel");2121- return SCX_TEST_FAIL;2222- }1818+ skel = minimal__open();1919+ SCX_FAIL_IF(!skel, "Failed to open");2020+ SCX_ENUM_INIT(skel);2121+ SCX_FAIL_IF(minimal__load(skel), "Failed to load skel");2222+2323 *ctx = skel;24242525 return SCX_TEST_PASS;
+5-5
tools/testing/selftests/sched_ext/prog_run.c
···1515{1616 struct prog_run *skel;17171818- skel = prog_run__open_and_load();1919- if (!skel) {2020- SCX_ERR("Failed to open and load skel");2121- return SCX_TEST_FAIL;2222- }1818+ skel = prog_run__open();1919+ SCX_FAIL_IF(!skel, "Failed to open");2020+ SCX_ENUM_INIT(skel);2121+ SCX_FAIL_IF(prog_run__load(skel), "Failed to load skel");2222+2323 *ctx = skel;24242525 return SCX_TEST_PASS;
+4-5
tools/testing/selftests/sched_ext/reload_loop.c
···18181919static enum scx_test_status setup(void **ctx)2020{2121- skel = maximal__open_and_load();2222- if (!skel) {2323- SCX_ERR("Failed to open and load skel");2424- return SCX_TEST_FAIL;2525- }2121+ skel = maximal__open();2222+ SCX_FAIL_IF(!skel, "Failed to open");2323+ SCX_ENUM_INIT(skel);2424+ SCX_FAIL_IF(maximal__load(skel), "Failed to load skel");26252726 return SCX_TEST_PASS;2827}
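Taken together, the sched_ext selftest hunks above all replace foo__open_and_load() with the same three-step sequence. A consolidated sketch, assuming a hypothetical skeleton named "example" (SCX_FAIL_IF(), SCX_ENUM_INIT(), SCX_TEST_PASS and enum scx_test_status are the selftest helpers already used in the hunks above):

#include "scx_test.h"
#include "example.bpf.skel.h"	/* hypothetical generated skeleton header */

static enum scx_test_status setup(void **ctx)
{
	struct example *skel;

	skel = example__open();
	SCX_FAIL_IF(!skel, "Failed to open");
	SCX_ENUM_INIT(skel);	/* sits between __open() and __load() in every hunk above */
	SCX_FAIL_IF(example__load(skel), "Failed to load skel");

	*ctx = skel;
	return SCX_TEST_PASS;
}

The ordering is the common thread: the enum initialization is always inserted after the skeleton is opened and before it is loaded.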