Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

ASoC: codec: Convert to GPIO descriptors for

Merge series from Peng Fan <peng.fan@nxp.com>:

This patchset picks up patches 1 and 2 from [1]. I also collected
Linus's R-b for patch 2. After this patchset, there is only one user of
of_gpio.h left in the sound drivers (pxa2xx-ac97).

of_gpio.h is deprecated, update the driver to use GPIO descriptors.

Patch 1 drops the legacy platform data, which has no in-tree users.
Patch 2 converts the driver to GPIO descriptors.

Checking the DTS files that use the device, all specify GPIO_ACTIVE_LOW
polarity for reset-gpios, so all should work as expected with this patch.

[1] https://lore.kernel.org/all/20250408-asoc-gpio-v1-0-c0db9d3fd6e9@nxp.com/
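
For reference, a conversion of this kind typically looks like the following
minimal sketch (hypothetical codec driver; the names are illustrative and not
taken from the series itself):

    /* Illustrative only -- not code from this series. */

    /* Before: legacy of_gpio.h interface, numeric GPIOs */
    #include <linux/of_gpio.h>

    int gpio = of_get_named_gpio(dev->of_node, "reset-gpios", 0);
    if (gpio_is_valid(gpio))
            devm_gpio_request_one(dev, gpio, GPIOF_OUT_INIT_LOW, "reset");

    /* After: GPIO descriptors */
    #include <linux/gpio/consumer.h>

    struct gpio_desc *reset;

    /* "reset" maps to the reset-gpios DT property; request it asserted */
    reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
    if (IS_ERR(reset))
            return PTR_ERR(reset);

    /*
     * Logical values: gpiolib applies the GPIO_ACTIVE_LOW flag from the
     * DT, so "1" means "assert reset" regardless of physical polarity.
     */
    gpiod_set_value_cansleep(reset, 1);
    usleep_range(1000, 2000);
    gpiod_set_value_cansleep(reset, 0);

Because the descriptor API takes the polarity from the device tree flags, the
driver no longer needs to track active-low reset lines by hand.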

+2987 -1320
+1
.mailmap
··· 416 416 Kenneth Westfield <quic_kwestfie@quicinc.com> <kwestfie@codeaurora.org> 417 417 Kiran Gunda <quic_kgunda@quicinc.com> <kgunda@codeaurora.org> 418 418 Kirill Tkhai <tkhai@ya.ru> <ktkhai@virtuozzo.com> 419 + Kirill A. Shutemov <kas@kernel.org> <kirill.shutemov@linux.intel.com> 419 420 Kishon Vijay Abraham I <kishon@kernel.org> <kishon@ti.com> 420 421 Konrad Dybcio <konradybcio@kernel.org> <konrad.dybcio@linaro.org> 421 422 Konrad Dybcio <konradybcio@kernel.org> <konrad.dybcio@somainline.org>
+1
Documentation/ABI/testing/sysfs-devices-system-cpu
··· 584 584 /sys/devices/system/cpu/vulnerabilities/spectre_v1 585 585 /sys/devices/system/cpu/vulnerabilities/spectre_v2 586 586 /sys/devices/system/cpu/vulnerabilities/srbds 587 + /sys/devices/system/cpu/vulnerabilities/tsa 587 588 /sys/devices/system/cpu/vulnerabilities/tsx_async_abort 588 589 Date: January 2018 589 590 Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
-6
Documentation/admin-guide/cgroup-v2.rst
··· 1732 1732 numa_hint_faults (npn) 1733 1733 Number of NUMA hinting faults. 1734 1734 1735 - numa_task_migrated (npn) 1736 - Number of task migration by NUMA balancing. 1737 - 1738 - numa_task_swapped (npn) 1739 - Number of task swap by NUMA balancing. 1740 - 1741 1735 pgdemote_kswapd 1742 1736 Number of pages demoted by kswapd. 1743 1737
+1 -3
Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
··· 157 157 combination with a microcode update. The microcode clears the affected CPU 158 158 buffers when the VERW instruction is executed. 159 159 160 - Kernel reuses the MDS function to invoke the buffer clearing: 161 - 162 - mds_clear_cpu_buffers() 160 + Kernel does the buffer clearing with x86_clear_cpu_buffers(). 163 161 164 162 On MDS affected CPUs, the kernel already invokes CPU buffer clear on 165 163 kernel/userspace, hypervisor/guest and C-state (idle) transitions. No
+13
Documentation/admin-guide/kernel-parameters.txt
··· 7488 7488 having this key zero'ed is acceptable. E.g. in testing 7489 7489 scenarios. 7490 7490 7491 + tsa= [X86] Control mitigation for Transient Scheduler 7492 + Attacks on AMD CPUs. Search the following in your 7493 + favourite search engine for more details: 7494 + 7495 + "Technical guidance for mitigating transient scheduler 7496 + attacks". 7497 + 7498 + off - disable the mitigation 7499 + on - enable the mitigation (default) 7500 + user - mitigate only user/kernel transitions 7501 + vm - mitigate only guest/host transitions 7502 + 7503 + 7491 7504 tsc= Disable clocksource stability checks for TSC. 7492 7505 Format: <string> 7493 7506 [x86] reliable: mark tsc clocksource as reliable, this
+4 -4
Documentation/arch/x86/mds.rst
··· 93 93 94 94 The kernel provides a function to invoke the buffer clearing: 95 95 96 - mds_clear_cpu_buffers() 96 + x86_clear_cpu_buffers() 97 97 98 98 Also macro CLEAR_CPU_BUFFERS can be used in ASM late in exit-to-user path. 99 99 Other than CFLAGS.ZF, this macro doesn't clobber any registers. ··· 185 185 idle clearing would be a window dressing exercise and is therefore not 186 186 activated. 187 187 188 - The invocation is controlled by the static key mds_idle_clear which is 189 - switched depending on the chosen mitigation mode and the SMT state of 190 - the system. 188 + The invocation is controlled by the static key cpu_buf_idle_clear which is 189 + switched depending on the chosen mitigation mode and the SMT state of the 190 + system. 191 191 192 192 The buffer clear is only invoked before entering the C-State to prevent 193 193 that stale data from the idling CPU from spilling to the Hyper-Thread
+3
Documentation/devicetree/bindings/clock/mediatek,mt8188-clock.yaml
··· 52 52 '#clock-cells': 53 53 const: 1 54 54 55 + '#reset-cells': 56 + const: 1 57 + 55 58 required: 56 59 - compatible 57 60 - reg
+1 -1
Documentation/devicetree/bindings/net/allwinner,sun8i-a83t-emac.yaml
··· 23 23 - allwinner,sun20i-d1-emac 24 24 - allwinner,sun50i-h6-emac 25 25 - allwinner,sun50i-h616-emac0 26 - - allwinner,sun55i-a523-emac0 26 + - allwinner,sun55i-a523-gmac0 27 27 - const: allwinner,sun50i-a64-emac 28 28 29 29 reg:
+21 -14
Documentation/virt/kvm/api.rst
··· 7196 7196 u64 leaf; 7197 7197 u64 r11, r12, r13, r14; 7198 7198 } get_tdvmcall_info; 7199 + struct { 7200 + u64 ret; 7201 + u64 vector; 7202 + } setup_event_notify; 7199 7203 }; 7200 7204 } tdx; 7201 7205 ··· 7214 7210 inputs and outputs of the TDVMCALL. Currently the following values of 7215 7211 ``nr`` are defined: 7216 7212 7217 - * ``TDVMCALL_GET_QUOTE``: the guest has requested to generate a TD-Quote 7218 - signed by a service hosting TD-Quoting Enclave operating on the host. 7219 - Parameters and return value are in the ``get_quote`` field of the union. 7220 - The ``gpa`` field and ``size`` specify the guest physical address 7221 - (without the shared bit set) and the size of a shared-memory buffer, in 7222 - which the TDX guest passes a TD Report. The ``ret`` field represents 7223 - the return value of the GetQuote request. When the request has been 7224 - queued successfully, the TDX guest can poll the status field in the 7225 - shared-memory area to check whether the Quote generation is completed or 7226 - not. When completed, the generated Quote is returned via the same buffer. 7213 + * ``TDVMCALL_GET_QUOTE``: the guest has requested to generate a TD-Quote 7214 + signed by a service hosting TD-Quoting Enclave operating on the host. 7215 + Parameters and return value are in the ``get_quote`` field of the union. 7216 + The ``gpa`` field and ``size`` specify the guest physical address 7217 + (without the shared bit set) and the size of a shared-memory buffer, in 7218 + which the TDX guest passes a TD Report. The ``ret`` field represents 7219 + the return value of the GetQuote request. When the request has been 7220 + queued successfully, the TDX guest can poll the status field in the 7221 + shared-memory area to check whether the Quote generation is completed or 7222 + not. When completed, the generated Quote is returned via the same buffer. 7227 7223 7228 - * ``TDVMCALL_GET_TD_VM_CALL_INFO``: the guest has requested the support 7229 - status of TDVMCALLs. The output values for the given leaf should be 7230 - placed in fields from ``r11`` to ``r14`` of the ``get_tdvmcall_info`` 7231 - field of the union. 7224 + * ``TDVMCALL_GET_TD_VM_CALL_INFO``: the guest has requested the support 7225 + status of TDVMCALLs. The output values for the given leaf should be 7226 + placed in fields from ``r11`` to ``r14`` of the ``get_tdvmcall_info`` 7227 + field of the union. 7228 + 7229 + * ``TDVMCALL_SETUP_EVENT_NOTIFY_INTERRUPT``: the guest has requested to 7230 + set up a notification interrupt for vector ``vector``. 7232 7231 7233 7232 KVM may add support for more values in the future that may cause a userspace 7234 7233 exit, even without calls to ``KVM_ENABLE_CAP`` or similar. In this case,
+14 -1
Documentation/virt/kvm/x86/intel-tdx.rst
··· 79 79 struct kvm_tdx_capabilities { 80 80 __u64 supported_attrs; 81 81 __u64 supported_xfam; 82 - __u64 reserved[254]; 82 + 83 + /* TDG.VP.VMCALL hypercalls executed in kernel and forwarded to 84 + * userspace, respectively 85 + */ 86 + __u64 kernel_tdvmcallinfo_1_r11; 87 + __u64 user_tdvmcallinfo_1_r11; 88 + 89 + /* TDG.VP.VMCALL instruction executions subfunctions executed in kernel 90 + * and forwarded to userspace, respectively 91 + */ 92 + __u64 kernel_tdvmcallinfo_1_r12; 93 + __u64 user_tdvmcallinfo_1_r12; 94 + 95 + __u64 reserved[250]; 83 96 84 97 /* Configurable CPUID bits for userspace */ 85 98 struct kvm_cpuid2 cpuid;
+13 -25
MAINTAINERS
··· 4181 4181 F: include/linux/find.h 4182 4182 F: include/linux/nodemask.h 4183 4183 F: include/linux/nodemask_types.h 4184 + F: include/uapi/linux/bits.h 4184 4185 F: include/vdso/bits.h 4185 4186 F: lib/bitmap-str.c 4186 4187 F: lib/bitmap.c ··· 4194 4193 F: tools/include/linux/bitmap.h 4195 4194 F: tools/include/linux/bits.h 4196 4195 F: tools/include/linux/find.h 4196 + F: tools/include/uapi/linux/bits.h 4197 4197 F: tools/include/vdso/bits.h 4198 4198 F: tools/lib/bitmap.c 4199 4199 F: tools/lib/find_bit.c ··· 10506 10504 F: block/partitions/efi.* 10507 10505 10508 10506 HABANALABS PCI DRIVER 10509 - M: Ofir Bitton <obitton@habana.ai> 10507 + M: Yaron Avizrat <yaron.avizrat@intel.com> 10510 10508 L: dri-devel@lists.freedesktop.org 10511 10509 S: Supported 10512 10510 C: irc://irc.oftc.net/dri-devel ··· 16824 16822 MODULE SUPPORT 16825 16823 M: Luis Chamberlain <mcgrof@kernel.org> 16826 16824 M: Petr Pavlu <petr.pavlu@suse.com> 16825 + M: Daniel Gomez <da.gomez@kernel.org> 16827 16826 R: Sami Tolvanen <samitolvanen@google.com> 16828 - R: Daniel Gomez <da.gomez@samsung.com> 16829 16827 L: linux-modules@vger.kernel.org 16830 16828 L: linux-kernel@vger.kernel.org 16831 16829 S: Maintained ··· 17224 17222 F: include/linux/mfd/ntxec.h 17225 17223 17226 17224 NETRONOME ETHERNET DRIVERS 17227 - M: Louis Peens <louis.peens@corigine.com> 17228 17225 R: Jakub Kicinski <kuba@kernel.org> 17226 + R: Simon Horman <horms@kernel.org> 17229 17227 L: oss-drivers@corigine.com 17230 - S: Maintained 17228 + S: Odd Fixes 17231 17229 F: drivers/net/ethernet/netronome/ 17232 17230 17233 17231 NETWORK BLOCK DEVICE (NBD) ··· 19603 19601 F: drivers/pinctrl/intel/ 19604 19602 19605 19603 PIN CONTROLLER - KEEMBAY 19606 - M: Lakshmi Sowjanya D <lakshmi.sowjanya.d@intel.com> 19607 - S: Supported 19604 + S: Orphan 19608 19605 F: drivers/pinctrl/pinctrl-keembay* 19609 19606 19610 19607 PIN CONTROLLER - MEDIATEK ··· 20156 20155 F: Documentation/devicetree/bindings/soc/qcom/qcom,apr* 20157 20156 F: Documentation/devicetree/bindings/sound/qcom,* 20158 20157 F: drivers/soc/qcom/apr.c 20159 - F: include/dt-bindings/sound/qcom,wcd9335.h 20160 - F: include/dt-bindings/sound/qcom,wcd934x.h 20161 - F: sound/soc/codecs/lpass-rx-macro.* 20162 - F: sound/soc/codecs/lpass-tx-macro.* 20163 - F: sound/soc/codecs/lpass-va-macro.c 20164 - F: sound/soc/codecs/lpass-wsa-macro.* 20158 + F: drivers/soundwire/qcom.c 20159 + F: include/dt-bindings/sound/qcom,wcd93* 20160 + F: sound/soc/codecs/lpass-*.* 20165 20161 F: sound/soc/codecs/msm8916-wcd-analog.c 20166 20162 F: sound/soc/codecs/msm8916-wcd-digital.c 20167 20163 F: sound/soc/codecs/wcd-clsh-v2.* 20168 20164 F: sound/soc/codecs/wcd-mbhc-v2.* 20169 - F: sound/soc/codecs/wcd9335.* 20170 - F: sound/soc/codecs/wcd934x.c 20171 - F: sound/soc/codecs/wsa881x.c 20172 - F: sound/soc/codecs/wsa883x.c 20173 - F: sound/soc/codecs/wsa884x.c 20165 + F: sound/soc/codecs/wcd93*.* 20166 + F: sound/soc/codecs/wsa88*.* 20174 20167 F: sound/soc/qcom/ 20175 20168 20176 20169 QCOM EMBEDDED USB DEBUGGER (EUD) ··· 26950 26955 F: arch/x86/kernel/unwind_*.c 26951 26956 26952 26957 X86 TRUST DOMAIN EXTENSIONS (TDX) 26953 - M: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> 26958 + M: Kirill A. 
Shutemov <kas@kernel.org> 26954 26959 R: Dave Hansen <dave.hansen@linux.intel.com> 26955 26960 L: x86@kernel.org 26956 26961 L: linux-coco@lists.linux.dev ··· 27318 27323 S: Supported 27319 27324 W: http://www.marvell.com 27320 27325 F: drivers/i2c/busses/i2c-xlp9xx.c 27321 - 27322 - XRA1403 GPIO EXPANDER 27323 - M: Nandor Han <nandor.han@ge.com> 27324 - L: linux-gpio@vger.kernel.org 27325 - S: Maintained 27326 - F: Documentation/devicetree/bindings/gpio/gpio-xra1403.txt 27327 - F: drivers/gpio/gpio-xra1403.c 27328 27326 27329 27327 XTENSA XTFPGA PLATFORM SUPPORT 27330 27328 M: Max Filippov <jcmvbkbc@gmail.com>
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 16 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc5 5 + EXTRAVERSION = -rc6 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
+1
arch/arm64/Kconfig
··· 256 256 select HOTPLUG_SMT if HOTPLUG_CPU 257 257 select IRQ_DOMAIN 258 258 select IRQ_FORCED_THREADING 259 + select JUMP_LABEL 259 260 select KASAN_VMALLOC if KASAN 260 261 select LOCK_MM_AND_FIND_VMA 261 262 select MODULES_USE_ELF_RELA
+7 -12
arch/arm64/include/asm/el2_setup.h
··· 287 287 .Lskip_fgt2_\@: 288 288 .endm 289 289 290 - .macro __init_el2_gcs 291 - mrs_s x1, SYS_ID_AA64PFR1_EL1 292 - ubfx x1, x1, #ID_AA64PFR1_EL1_GCS_SHIFT, #4 293 - cbz x1, .Lskip_gcs_\@ 294 - 295 - /* Ensure GCS is not enabled when we start trying to do BLs */ 296 - msr_s SYS_GCSCR_EL1, xzr 297 - msr_s SYS_GCSCRE0_EL1, xzr 298 - .Lskip_gcs_\@: 299 - .endm 300 - 301 290 /** 302 291 * Initialize EL2 registers to sane values. This should be called early on all 303 292 * cores that were booted in EL2. Note that everything gets initialised as ··· 308 319 __init_el2_cptr 309 320 __init_el2_fgt 310 321 __init_el2_fgt2 311 - __init_el2_gcs 312 322 .endm 313 323 314 324 #ifndef __KVM_NVHE_HYPERVISOR__ ··· 359 371 msr_s SYS_MPAMHCR_EL2, xzr // clear TRAP_MPAMIDR_EL1 -> EL2 360 372 361 373 .Lskip_mpam_\@: 374 + check_override id_aa64pfr1, ID_AA64PFR1_EL1_GCS_SHIFT, .Linit_gcs_\@, .Lskip_gcs_\@, x1, x2 375 + 376 + .Linit_gcs_\@: 377 + msr_s SYS_GCSCR_EL1, xzr 378 + msr_s SYS_GCSCRE0_EL1, xzr 379 + 380 + .Lskip_gcs_\@: 362 381 check_override id_aa64pfr0, ID_AA64PFR0_EL1_SVE_SHIFT, .Linit_sve_\@, .Lskip_sve_\@, x1, x2 363 382 364 383 .Linit_sve_\@: /* SVE register access */
-1
arch/arm64/include/asm/kvm_host.h
··· 1480 1480 struct reg_mask_range *range); 1481 1481 1482 1482 /* Guest/host FPSIMD coordination helpers */ 1483 - int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu); 1484 1483 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu); 1485 1484 void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu); 1486 1485 void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
+1 -2
arch/arm64/kernel/Makefile
··· 34 34 cpufeature.o alternative.o cacheinfo.o \ 35 35 smp.o smp_spin_table.o topology.o smccc-call.o \ 36 36 syscall.o proton-pack.o idle.o patching.o pi/ \ 37 - rsi.o 37 + rsi.o jump_label.o 38 38 39 39 obj-$(CONFIG_COMPAT) += sys32.o signal32.o \ 40 40 sys_compat.o ··· 47 47 obj-$(CONFIG_HARDLOCKUP_DETECTOR_PERF) += watchdog_hld.o 48 48 obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o 49 49 obj-$(CONFIG_CPU_PM) += sleep.o suspend.o 50 - obj-$(CONFIG_JUMP_LABEL) += jump_label.o 51 50 obj-$(CONFIG_KGDB) += kgdb.o 52 51 obj-$(CONFIG_EFI) += efi.o efi-rt-wrapper.o 53 52 obj-$(CONFIG_PCI) += pci.o
+32 -25
arch/arm64/kernel/cpufeature.c
··· 3135 3135 } 3136 3136 #endif 3137 3137 3138 + #ifdef CONFIG_ARM64_SME 3139 + static bool has_sme_feature(const struct arm64_cpu_capabilities *cap, int scope) 3140 + { 3141 + return system_supports_sme() && has_user_cpuid_feature(cap, scope); 3142 + } 3143 + #endif 3144 + 3138 3145 static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = { 3139 3146 HWCAP_CAP(ID_AA64ISAR0_EL1, AES, PMULL, CAP_HWCAP, KERNEL_HWCAP_PMULL), 3140 3147 HWCAP_CAP(ID_AA64ISAR0_EL1, AES, AES, CAP_HWCAP, KERNEL_HWCAP_AES), ··· 3230 3223 HWCAP_CAP(ID_AA64ISAR2_EL1, BC, IMP, CAP_HWCAP, KERNEL_HWCAP_HBC), 3231 3224 #ifdef CONFIG_ARM64_SME 3232 3225 HWCAP_CAP(ID_AA64PFR1_EL1, SME, IMP, CAP_HWCAP, KERNEL_HWCAP_SME), 3233 - HWCAP_CAP(ID_AA64SMFR0_EL1, FA64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_FA64), 3234 - HWCAP_CAP(ID_AA64SMFR0_EL1, LUTv2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_LUTV2), 3235 - HWCAP_CAP(ID_AA64SMFR0_EL1, SMEver, SME2p2, CAP_HWCAP, KERNEL_HWCAP_SME2P2), 3236 - HWCAP_CAP(ID_AA64SMFR0_EL1, SMEver, SME2p1, CAP_HWCAP, KERNEL_HWCAP_SME2P1), 3237 - HWCAP_CAP(ID_AA64SMFR0_EL1, SMEver, SME2, CAP_HWCAP, KERNEL_HWCAP_SME2), 3238 - HWCAP_CAP(ID_AA64SMFR0_EL1, I16I64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I64), 3239 - HWCAP_CAP(ID_AA64SMFR0_EL1, F64F64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F64F64), 3240 - HWCAP_CAP(ID_AA64SMFR0_EL1, I16I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I32), 3241 - HWCAP_CAP(ID_AA64SMFR0_EL1, B16B16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16B16), 3242 - HWCAP_CAP(ID_AA64SMFR0_EL1, F16F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F16), 3243 - HWCAP_CAP(ID_AA64SMFR0_EL1, F8F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F16), 3244 - HWCAP_CAP(ID_AA64SMFR0_EL1, F8F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F32), 3245 - HWCAP_CAP(ID_AA64SMFR0_EL1, I8I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I8I32), 3246 - HWCAP_CAP(ID_AA64SMFR0_EL1, F16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F32), 3247 - HWCAP_CAP(ID_AA64SMFR0_EL1, B16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16F32), 3248 - HWCAP_CAP(ID_AA64SMFR0_EL1, BI32I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_BI32I32), 3249 - HWCAP_CAP(ID_AA64SMFR0_EL1, F32F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F32F32), 3250 - HWCAP_CAP(ID_AA64SMFR0_EL1, SF8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8FMA), 3251 - HWCAP_CAP(ID_AA64SMFR0_EL1, SF8DP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP4), 3252 - HWCAP_CAP(ID_AA64SMFR0_EL1, SF8DP2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP2), 3253 - HWCAP_CAP(ID_AA64SMFR0_EL1, SBitPerm, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SBITPERM), 3254 - HWCAP_CAP(ID_AA64SMFR0_EL1, AES, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_AES), 3255 - HWCAP_CAP(ID_AA64SMFR0_EL1, SFEXPA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SFEXPA), 3256 - HWCAP_CAP(ID_AA64SMFR0_EL1, STMOP, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_STMOP), 3257 - HWCAP_CAP(ID_AA64SMFR0_EL1, SMOP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SMOP4), 3226 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, FA64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_FA64), 3227 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, LUTv2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_LUTV2), 3228 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMEver, SME2p2, CAP_HWCAP, KERNEL_HWCAP_SME2P2), 3229 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMEver, SME2p1, CAP_HWCAP, KERNEL_HWCAP_SME2P1), 3230 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMEver, SME2, CAP_HWCAP, KERNEL_HWCAP_SME2), 3231 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, I16I64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I64), 3232 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F64F64, IMP, CAP_HWCAP, 
KERNEL_HWCAP_SME_F64F64), 3233 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, I16I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I32), 3234 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, B16B16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16B16), 3235 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F16F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F16), 3236 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F8F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F16), 3237 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F8F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F32), 3238 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, I8I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I8I32), 3239 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F32), 3240 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, B16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16F32), 3241 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, BI32I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_BI32I32), 3242 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F32F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F32F32), 3243 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SF8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8FMA), 3244 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SF8DP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP4), 3245 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SF8DP2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP2), 3246 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SBitPerm, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SBITPERM), 3247 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, AES, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_AES), 3248 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SFEXPA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SFEXPA), 3249 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, STMOP, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_STMOP), 3250 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMOP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SMOP4), 3258 3251 #endif /* CONFIG_ARM64_SME */ 3259 3252 HWCAP_CAP(ID_AA64FPFR0_EL1, F8CVT, IMP, CAP_HWCAP, KERNEL_HWCAP_F8CVT), 3260 3253 HWCAP_CAP(ID_AA64FPFR0_EL1, F8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_F8FMA),
+8 -3
arch/arm64/kernel/efi.c
··· 15 15 16 16 #include <asm/efi.h> 17 17 #include <asm/stacktrace.h> 18 + #include <asm/vmap_stack.h> 18 19 19 20 static bool region_is_misaligned(const efi_memory_desc_t *md) 20 21 { ··· 215 214 if (!efi_enabled(EFI_RUNTIME_SERVICES)) 216 215 return 0; 217 216 218 - p = __vmalloc_node(THREAD_SIZE, THREAD_ALIGN, GFP_KERNEL, 219 - NUMA_NO_NODE, &&l); 220 - l: if (!p) { 217 + if (!IS_ENABLED(CONFIG_VMAP_STACK)) { 218 + clear_bit(EFI_RUNTIME_SERVICES, &efi.flags); 219 + return -ENOMEM; 220 + } 221 + 222 + p = arch_alloc_vmap_stack(THREAD_SIZE, NUMA_NO_NODE); 223 + if (!p) { 221 224 pr_warn("Failed to allocate EFI runtime stack\n"); 222 225 clear_bit(EFI_RUNTIME_SERVICES, &efi.flags); 223 226 return -ENOMEM;
+5
arch/arm64/kernel/process.c
··· 673 673 current->thread.por_el0 = read_sysreg_s(SYS_POR_EL0); 674 674 if (current->thread.por_el0 != next->thread.por_el0) { 675 675 write_sysreg_s(next->thread.por_el0, SYS_POR_EL0); 676 + /* 677 + * No ISB required as we can tolerate spurious Overlay faults - 678 + * the fault handler will check again based on the new value 679 + * of POR_EL0. 680 + */ 676 681 } 677 682 } 678 683
+1 -1
arch/arm64/kernel/smp.c
··· 1143 1143 void smp_send_stop(void) 1144 1144 { 1145 1145 static unsigned long stop_in_progress; 1146 - cpumask_t mask; 1146 + static cpumask_t mask; 1147 1147 unsigned long timeout; 1148 1148 1149 1149 /*
+10 -6
arch/arm64/kvm/arm.c
··· 825 825 if (!kvm_arm_vcpu_is_finalized(vcpu)) 826 826 return -EPERM; 827 827 828 - ret = kvm_arch_vcpu_run_map_fp(vcpu); 829 - if (ret) 830 - return ret; 831 - 832 828 if (likely(vcpu_has_run_once(vcpu))) 833 829 return 0; 834 830 ··· 2125 2129 2126 2130 static void cpu_hyp_uninit(void *discard) 2127 2131 { 2128 - if (__this_cpu_read(kvm_hyp_initialized)) { 2132 + if (!is_protected_kvm_enabled() && __this_cpu_read(kvm_hyp_initialized)) { 2129 2133 cpu_hyp_reset(); 2130 2134 __this_cpu_write(kvm_hyp_initialized, 0); 2131 2135 } ··· 2341 2345 2342 2346 free_hyp_pgds(); 2343 2347 for_each_possible_cpu(cpu) { 2348 + if (per_cpu(kvm_hyp_initialized, cpu)) 2349 + continue; 2350 + 2344 2351 free_pages(per_cpu(kvm_arm_hyp_stack_base, cpu), NVHE_STACK_SHIFT - PAGE_SHIFT); 2345 - free_pages(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu], nvhe_percpu_order()); 2352 + 2353 + if (!kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu]) 2354 + continue; 2346 2355 2347 2356 if (free_sve) { 2348 2357 struct cpu_sve_state *sve_state; ··· 2355 2354 sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state; 2356 2355 free_pages((unsigned long) sve_state, pkvm_host_sve_state_order()); 2357 2356 } 2357 + 2358 + free_pages(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu], nvhe_percpu_order()); 2359 + 2358 2360 } 2359 2361 } 2360 2362
-26
arch/arm64/kvm/fpsimd.c
··· 15 15 #include <asm/sysreg.h> 16 16 17 17 /* 18 - * Called on entry to KVM_RUN unless this vcpu previously ran at least 19 - * once and the most recent prior KVM_RUN for this vcpu was called from 20 - * the same task as current (highly likely). 21 - * 22 - * This is guaranteed to execute before kvm_arch_vcpu_load_fp(vcpu), 23 - * such that on entering hyp the relevant parts of current are already 24 - * mapped. 25 - */ 26 - int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu) 27 - { 28 - struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state; 29 - int ret; 30 - 31 - /* pKVM has its own tracking of the host fpsimd state. */ 32 - if (is_protected_kvm_enabled()) 33 - return 0; 34 - 35 - /* Make sure the host task fpsimd state is visible to hyp: */ 36 - ret = kvm_share_hyp(fpsimd, fpsimd + 1); 37 - if (ret) 38 - return ret; 39 - 40 - return 0; 41 - } 42 - 43 - /* 44 18 * Prepare vcpu for saving the host's FPSIMD state and loading the guest's. 45 19 * The actual loading is done by the FPSIMD access trap taken to hyp. 46 20 *
+12 -8
arch/arm64/kvm/hyp/nvhe/mem_protect.c
··· 479 479 { 480 480 struct kvm_mem_range cur; 481 481 kvm_pte_t pte; 482 + u64 granule; 482 483 s8 level; 483 484 int ret; 484 485 ··· 497 496 return -EPERM; 498 497 } 499 498 500 - do { 501 - u64 granule = kvm_granule_size(level); 499 + for (; level <= KVM_PGTABLE_LAST_LEVEL; level++) { 500 + if (!kvm_level_supports_block_mapping(level)) 501 + continue; 502 + granule = kvm_granule_size(level); 502 503 cur.start = ALIGN_DOWN(addr, granule); 503 504 cur.end = cur.start + granule; 504 - level++; 505 - } while ((level <= KVM_PGTABLE_LAST_LEVEL) && 506 - !(kvm_level_supports_block_mapping(level) && 507 - range_included(&cur, range))); 505 + if (!range_included(&cur, range)) 506 + continue; 507 + *range = cur; 508 + return 0; 509 + } 508 510 509 - *range = cur; 511 + WARN_ON(1); 510 512 511 - return 0; 513 + return -EINVAL; 512 514 } 513 515 514 516 int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
+23 -3
arch/arm64/kvm/nested.c
··· 1402 1402 } 1403 1403 } 1404 1404 1405 + #define has_tgran_2(__r, __sz) \ 1406 + ({ \ 1407 + u64 _s1, _s2, _mmfr0 = __r; \ 1408 + \ 1409 + _s2 = SYS_FIELD_GET(ID_AA64MMFR0_EL1, \ 1410 + TGRAN##__sz##_2, _mmfr0); \ 1411 + \ 1412 + _s1 = SYS_FIELD_GET(ID_AA64MMFR0_EL1, \ 1413 + TGRAN##__sz, _mmfr0); \ 1414 + \ 1415 + ((_s2 != ID_AA64MMFR0_EL1_TGRAN##__sz##_2_NI && \ 1416 + _s2 != ID_AA64MMFR0_EL1_TGRAN##__sz##_2_TGRAN##__sz) || \ 1417 + (_s2 == ID_AA64MMFR0_EL1_TGRAN##__sz##_2_TGRAN##__sz && \ 1418 + _s1 != ID_AA64MMFR0_EL1_TGRAN##__sz##_NI)); \ 1419 + }) 1405 1420 /* 1406 1421 * Our emulated CPU doesn't support all the possible features. For the 1407 1422 * sake of simplicity (and probably mental sanity), wipe out a number ··· 1426 1411 */ 1427 1412 u64 limit_nv_id_reg(struct kvm *kvm, u32 reg, u64 val) 1428 1413 { 1414 + u64 orig_val = val; 1415 + 1429 1416 switch (reg) { 1430 1417 case SYS_ID_AA64ISAR0_EL1: 1431 1418 /* Support everything but TME */ ··· 1497 1480 */ 1498 1481 switch (PAGE_SIZE) { 1499 1482 case SZ_4K: 1500 - val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN4_2, IMP); 1483 + if (has_tgran_2(orig_val, 4)) 1484 + val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN4_2, IMP); 1501 1485 fallthrough; 1502 1486 case SZ_16K: 1503 - val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN16_2, IMP); 1487 + if (has_tgran_2(orig_val, 16)) 1488 + val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN16_2, IMP); 1504 1489 fallthrough; 1505 1490 case SZ_64K: 1506 - val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN64_2, IMP); 1491 + if (has_tgran_2(orig_val, 64)) 1492 + val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN64_2, IMP); 1507 1493 break; 1508 1494 } 1509 1495
+1 -3
arch/arm64/kvm/vgic/vgic-v3-nested.c
··· 401 401 { 402 402 bool level; 403 403 404 - level = __vcpu_sys_reg(vcpu, ICH_HCR_EL2) & ICH_HCR_EL2_En; 405 - if (level) 406 - level &= vgic_v3_get_misr(vcpu); 404 + level = (__vcpu_sys_reg(vcpu, ICH_HCR_EL2) & ICH_HCR_EL2_En) && vgic_v3_get_misr(vcpu); 407 405 kvm_vgic_inject_irq(vcpu->kvm, vcpu, 408 406 vcpu->kvm->arch.vgic.mi_intid, level, vcpu); 409 407 }
+21 -9
arch/arm64/mm/fault.c
··· 487 487 } 488 488 } 489 489 490 - static bool fault_from_pkey(unsigned long esr, struct vm_area_struct *vma, 491 - unsigned int mm_flags) 490 + static bool fault_from_pkey(struct vm_area_struct *vma, unsigned int mm_flags) 492 491 { 493 - unsigned long iss2 = ESR_ELx_ISS2(esr); 494 - 495 492 if (!system_supports_poe()) 496 493 return false; 497 494 498 - if (esr_fsc_is_permission_fault(esr) && (iss2 & ESR_ELx_Overlay)) 499 - return true; 500 - 495 + /* 496 + * We do not check whether an Overlay fault has occurred because we 497 + * cannot make a decision based solely on its value: 498 + * 499 + * - If Overlay is set, a fault did occur due to POE, but it may be 500 + * spurious in those cases where we update POR_EL0 without ISB (e.g. 501 + * on context-switch). We would then need to manually check POR_EL0 502 + * against vma_pkey(vma), which is exactly what 503 + * arch_vma_access_permitted() does. 504 + * 505 + * - If Overlay is not set, we may still need to report a pkey fault. 506 + * This is the case if an access was made within a mapping but with no 507 + * page mapped, and POR_EL0 forbids the access (according to 508 + * vma_pkey()). Such access will result in a SIGSEGV regardless 509 + * because core code checks arch_vma_access_permitted(), but in order 510 + * to report the correct error code - SEGV_PKUERR - we must handle 511 + * that case here. 512 + */ 501 513 return !arch_vma_access_permitted(vma, 502 514 mm_flags & FAULT_FLAG_WRITE, 503 515 mm_flags & FAULT_FLAG_INSTRUCTION, ··· 647 635 goto bad_area; 648 636 } 649 637 650 - if (fault_from_pkey(esr, vma, mm_flags)) { 638 + if (fault_from_pkey(vma, mm_flags)) { 651 639 pkey = vma_pkey(vma); 652 640 vma_end_read(vma); 653 641 fault = 0; ··· 691 679 goto bad_area; 692 680 } 693 681 694 - if (fault_from_pkey(esr, vma, mm_flags)) { 682 + if (fault_from_pkey(vma, mm_flags)) { 695 683 pkey = vma_pkey(vma); 696 684 mmap_read_unlock(mm); 697 685 fault = 0;
-1
arch/arm64/mm/proc.S
··· 518 518 msr REG_PIR_EL1, x0 519 519 520 520 orr tcr2, tcr2, TCR2_EL1_PIE 521 - msr REG_TCR2_EL1, x0 522 521 523 522 .Lskip_indirection: 524 523
+2
arch/s390/crypto/sha1_s390.c
··· 38 38 sctx->state[4] = SHA1_H4; 39 39 sctx->count = 0; 40 40 sctx->func = CPACF_KIMD_SHA_1; 41 + sctx->first_message_part = 0; 41 42 42 43 return 0; 43 44 } ··· 61 60 sctx->count = ictx->count; 62 61 memcpy(sctx->state, ictx->state, sizeof(ictx->state)); 63 62 sctx->func = CPACF_KIMD_SHA_1; 63 + sctx->first_message_part = 0; 64 64 return 0; 65 65 } 66 66
+3
arch/s390/crypto/sha512_s390.c
··· 32 32 ctx->count = 0; 33 33 ctx->sha512.count_hi = 0; 34 34 ctx->func = CPACF_KIMD_SHA_512; 35 + ctx->first_message_part = 0; 35 36 36 37 return 0; 37 38 } ··· 58 57 59 58 memcpy(sctx->state, ictx->state, sizeof(ictx->state)); 60 59 sctx->func = CPACF_KIMD_SHA_512; 60 + sctx->first_message_part = 0; 61 61 return 0; 62 62 } 63 63 ··· 99 97 ctx->count = 0; 100 98 ctx->sha512.count_hi = 0; 101 99 ctx->func = CPACF_KIMD_SHA_512; 100 + ctx->first_message_part = 0; 102 101 103 102 return 0; 104 103 }
+10 -1
arch/x86/Kconfig
··· 147 147 select ARCH_WANTS_DYNAMIC_TASK_STRUCT 148 148 select ARCH_WANTS_NO_INSTR 149 149 select ARCH_WANT_GENERAL_HUGETLB 150 - select ARCH_WANT_HUGE_PMD_SHARE 150 + select ARCH_WANT_HUGE_PMD_SHARE if X86_64 151 151 select ARCH_WANT_LD_ORPHAN_WARN 152 152 select ARCH_WANT_OPTIMIZE_DAX_VMEMMAP if X86_64 153 153 select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP if X86_64 ··· 2695 2695 disabled, mitigation cannot be enabled via cmdline. 2696 2696 See <file:Documentation/admin-guide/hw-vuln/indirect-target-selection.rst> 2697 2697 2698 + config MITIGATION_TSA 2699 + bool "Mitigate Transient Scheduler Attacks" 2700 + depends on CPU_SUP_AMD 2701 + default y 2702 + help 2703 + Enable mitigation for Transient Scheduler Attacks. TSA is a hardware 2704 + security vulnerability on AMD CPUs which can lead to forwarding of 2705 + invalid info to subsequent instructions and thus can affect their 2706 + timing and thereby cause a leakage. 2698 2707 endif 2699 2708 2700 2709 config ARCH_HAS_ADD_PAGES
+4 -4
arch/x86/entry/entry.S
··· 36 36 37 37 /* 38 38 * Define the VERW operand that is disguised as entry code so that 39 - * it can be referenced with KPTI enabled. This ensure VERW can be 39 + * it can be referenced with KPTI enabled. This ensures VERW can be 40 40 * used late in exit-to-user path after page tables are switched. 41 41 */ 42 42 .pushsection .entry.text, "ax" 43 43 44 44 .align L1_CACHE_BYTES, 0xcc 45 - SYM_CODE_START_NOALIGN(mds_verw_sel) 45 + SYM_CODE_START_NOALIGN(x86_verw_sel) 46 46 UNWIND_HINT_UNDEFINED 47 47 ANNOTATE_NOENDBR 48 48 .word __KERNEL_DS 49 49 .align L1_CACHE_BYTES, 0xcc 50 - SYM_CODE_END(mds_verw_sel); 50 + SYM_CODE_END(x86_verw_sel); 51 51 /* For KVM */ 52 - EXPORT_SYMBOL_GPL(mds_verw_sel); 52 + EXPORT_SYMBOL_GPL(x86_verw_sel); 53 53 54 54 .popsection 55 55
+5 -1
arch/x86/include/asm/cpufeatures.h
··· 456 456 #define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* No Nested Data Breakpoints */ 457 457 #define X86_FEATURE_WRMSR_XX_BASE_NS (20*32+ 1) /* WRMSR to {FS,GS,KERNEL_GS}_BASE is non-serializing */ 458 458 #define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* LFENCE always serializing / synchronizes RDTSC */ 459 + #define X86_FEATURE_VERW_CLEAR (20*32+ 5) /* The memory form of VERW mitigates TSA */ 459 460 #define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* Null Selector Clears Base */ 460 461 #define X86_FEATURE_AUTOIBRS (20*32+ 8) /* Automatic IBRS */ 461 462 #define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* SMM_CTL MSR is not present */ ··· 488 487 #define X86_FEATURE_PREFER_YMM (21*32+ 8) /* Avoid ZMM registers due to downclocking */ 489 488 #define X86_FEATURE_APX (21*32+ 9) /* Advanced Performance Extensions */ 490 489 #define X86_FEATURE_INDIRECT_THUNK_ITS (21*32+10) /* Use thunk for indirect branches in lower half of cacheline */ 490 + #define X86_FEATURE_TSA_SQ_NO (21*32+11) /* AMD CPU not vulnerable to TSA-SQ */ 491 + #define X86_FEATURE_TSA_L1_NO (21*32+12) /* AMD CPU not vulnerable to TSA-L1 */ 492 + #define X86_FEATURE_CLEAR_CPU_BUF_VM (21*32+13) /* Clear CPU buffers using VERW before VMRUN */ 491 493 492 494 /* 493 495 * BUG word(s) ··· 546 542 #define X86_BUG_OLD_MICROCODE X86_BUG( 1*32+ 6) /* "old_microcode" CPU has old microcode, it is surely vulnerable to something */ 547 543 #define X86_BUG_ITS X86_BUG( 1*32+ 7) /* "its" CPU is affected by Indirect Target Selection */ 548 544 #define X86_BUG_ITS_NATIVE_ONLY X86_BUG( 1*32+ 8) /* "its_native_only" CPU is affected by ITS, VMX is not affected */ 549 - 545 + #define X86_BUG_TSA X86_BUG( 1*32+ 9) /* "tsa" CPU is affected by Transient Scheduler Attacks */ 550 546 #endif /* _ASM_X86_CPUFEATURES_H */
+2 -2
arch/x86/include/asm/irqflags.h
··· 44 44 45 45 static __always_inline void native_safe_halt(void) 46 46 { 47 - mds_idle_clear_cpu_buffers(); 47 + x86_idle_clear_cpu_buffers(); 48 48 asm volatile("sti; hlt": : :"memory"); 49 49 } 50 50 51 51 static __always_inline void native_halt(void) 52 52 { 53 - mds_idle_clear_cpu_buffers(); 53 + x86_idle_clear_cpu_buffers(); 54 54 asm volatile("hlt": : :"memory"); 55 55 } 56 56
+7 -1
arch/x86/include/asm/kvm_host.h
··· 700 700 701 701 struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo[HV_NR_TLB_FLUSH_FIFOS]; 702 702 703 - /* Preallocated buffer for handling hypercalls passing sparse vCPU set */ 703 + /* 704 + * Preallocated buffers for handling hypercalls that pass sparse vCPU 705 + * sets (for high vCPU counts, they're too large to comfortably fit on 706 + * the stack). 707 + */ 704 708 u64 sparse_banks[HV_MAX_SPARSE_VCPU_BANKS]; 709 + DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS); 705 710 706 711 struct hv_vp_assist_page vp_assist_page; 707 712 ··· 769 764 CPUID_8000_0022_EAX, 770 765 CPUID_7_2_EDX, 771 766 CPUID_24_0_EBX, 767 + CPUID_8000_0021_ECX, 772 768 NR_KVM_CPU_CAPS, 773 769 774 770 NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,
+1
arch/x86/include/asm/msr-index.h
··· 628 628 #define MSR_AMD64_OSVW_STATUS 0xc0010141 629 629 #define MSR_AMD_PPIN_CTL 0xc00102f0 630 630 #define MSR_AMD_PPIN 0xc00102f1 631 + #define MSR_AMD64_CPUID_FN_7 0xc0011002 631 632 #define MSR_AMD64_CPUID_FN_1 0xc0011004 632 633 #define MSR_AMD64_LS_CFG 0xc0011020 633 634 #define MSR_AMD64_DC_CFG 0xc0011022
+16 -11
arch/x86/include/asm/mwait.h
··· 43 43 44 44 static __always_inline void __mwait(u32 eax, u32 ecx) 45 45 { 46 - mds_idle_clear_cpu_buffers(); 47 - 48 46 /* 49 47 * Use the instruction mnemonic with implicit operands, as the LLVM 50 48 * assembler fails to assemble the mnemonic with explicit operands: ··· 78 80 */ 79 81 static __always_inline void __mwaitx(u32 eax, u32 ebx, u32 ecx) 80 82 { 81 - /* No MDS buffer clear as this is AMD/HYGON only */ 83 + /* No need for TSA buffer clearing on AMD */ 82 84 83 85 /* "mwaitx %eax, %ebx, %ecx" */ 84 86 asm volatile(".byte 0x0f, 0x01, 0xfb" ··· 96 98 */ 97 99 static __always_inline void __sti_mwait(u32 eax, u32 ecx) 98 100 { 99 - mds_idle_clear_cpu_buffers(); 100 101 101 102 asm volatile("sti; mwait" :: "a" (eax), "c" (ecx)); 102 103 } ··· 112 115 */ 113 116 static __always_inline void mwait_idle_with_hints(u32 eax, u32 ecx) 114 117 { 118 + if (need_resched()) 119 + return; 120 + 121 + x86_idle_clear_cpu_buffers(); 122 + 115 123 if (static_cpu_has_bug(X86_BUG_MONITOR) || !current_set_polling_and_test()) { 116 124 const void *addr = &current_thread_info()->flags; 117 125 118 126 alternative_input("", "clflush (%[addr])", X86_BUG_CLFLUSH_MONITOR, [addr] "a" (addr)); 119 127 __monitor(addr, 0, 0); 120 128 121 - if (!need_resched()) { 122 - if (ecx & 1) { 123 - __mwait(eax, ecx); 124 - } else { 125 - __sti_mwait(eax, ecx); 126 - raw_local_irq_disable(); 127 - } 129 + if (need_resched()) 130 + goto out; 131 + 132 + if (ecx & 1) { 133 + __mwait(eax, ecx); 134 + } else { 135 + __sti_mwait(eax, ecx); 136 + raw_local_irq_disable(); 128 137 } 129 138 } 139 + 140 + out: 130 141 current_clr_polling(); 131 142 } 132 143
+22 -15
arch/x86/include/asm/nospec-branch.h
··· 302 302 .endm 303 303 304 304 /* 305 - * Macro to execute VERW instruction that mitigate transient data sampling 306 - * attacks such as MDS. On affected systems a microcode update overloaded VERW 307 - * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF. 308 - * 305 + * Macro to execute VERW insns that mitigate transient data sampling 306 + * attacks such as MDS or TSA. On affected systems a microcode update 307 + * overloaded VERW insns to also clear the CPU buffers. VERW clobbers 308 + * CFLAGS.ZF. 309 309 * Note: Only the memory operand variant of VERW clears the CPU buffers. 310 310 */ 311 - .macro CLEAR_CPU_BUFFERS 311 + .macro __CLEAR_CPU_BUFFERS feature 312 312 #ifdef CONFIG_X86_64 313 - ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF 313 + ALTERNATIVE "", "verw x86_verw_sel(%rip)", \feature 314 314 #else 315 315 /* 316 316 * In 32bit mode, the memory operand must be a %cs reference. The data 317 317 * segments may not be usable (vm86 mode), and the stack segment may not 318 318 * be flat (ESPFIX32). 319 319 */ 320 - ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF 320 + ALTERNATIVE "", "verw %cs:x86_verw_sel", \feature 321 321 #endif 322 322 .endm 323 + 324 + #define CLEAR_CPU_BUFFERS \ 325 + __CLEAR_CPU_BUFFERS X86_FEATURE_CLEAR_CPU_BUF 326 + 327 + #define VM_CLEAR_CPU_BUFFERS \ 328 + __CLEAR_CPU_BUFFERS X86_FEATURE_CLEAR_CPU_BUF_VM 323 329 324 330 #ifdef CONFIG_X86_64 325 331 .macro CLEAR_BRANCH_HISTORY ··· 573 567 574 568 DECLARE_STATIC_KEY_FALSE(switch_vcpu_ibpb); 575 569 576 - DECLARE_STATIC_KEY_FALSE(mds_idle_clear); 570 + DECLARE_STATIC_KEY_FALSE(cpu_buf_idle_clear); 577 571 578 572 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush); 579 573 580 574 DECLARE_STATIC_KEY_FALSE(cpu_buf_vm_clear); 581 575 582 - extern u16 mds_verw_sel; 576 + extern u16 x86_verw_sel; 583 577 584 578 #include <asm/segment.h> 585 579 586 580 /** 587 - * mds_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability 581 + * x86_clear_cpu_buffers - Buffer clearing support for different x86 CPU vulns 588 582 * 589 583 * This uses the otherwise unused and obsolete VERW instruction in 590 584 * combination with microcode which triggers a CPU buffer flush when the 591 585 * instruction is executed. 592 586 */ 593 - static __always_inline void mds_clear_cpu_buffers(void) 587 + static __always_inline void x86_clear_cpu_buffers(void) 594 588 { 595 589 static const u16 ds = __KERNEL_DS; 596 590 ··· 607 601 } 608 602 609 603 /** 610 - * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability 604 + * x86_idle_clear_cpu_buffers - Buffer clearing support in idle for the MDS 605 + * and TSA vulnerabilities. 611 606 * 612 607 * Clear CPU buffers if the corresponding static key is enabled 613 608 */ 614 - static __always_inline void mds_idle_clear_cpu_buffers(void) 609 + static __always_inline void x86_idle_clear_cpu_buffers(void) 615 610 { 616 - if (static_branch_likely(&mds_idle_clear)) 617 - mds_clear_cpu_buffers(); 611 + if (static_branch_likely(&cpu_buf_idle_clear)) 612 + x86_clear_cpu_buffers(); 618 613 } 619 614 620 615 #endif /* __ASSEMBLER__ */
+1
arch/x86/include/asm/shared/tdx.h
··· 72 72 #define TDVMCALL_MAP_GPA 0x10001 73 73 #define TDVMCALL_GET_QUOTE 0x10002 74 74 #define TDVMCALL_REPORT_FATAL_ERROR 0x10003 75 + #define TDVMCALL_SETUP_EVENT_NOTIFY_INTERRUPT 0x10004ULL 75 76 76 77 /* 77 78 * TDG.VP.VMCALL Status Codes (returned in R10)
+7 -1
arch/x86/include/uapi/asm/kvm.h
··· 965 965 struct kvm_tdx_capabilities { 966 966 __u64 supported_attrs; 967 967 __u64 supported_xfam; 968 - __u64 reserved[254]; 968 + 969 + __u64 kernel_tdvmcallinfo_1_r11; 970 + __u64 user_tdvmcallinfo_1_r11; 971 + __u64 kernel_tdvmcallinfo_1_r12; 972 + __u64 user_tdvmcallinfo_1_r12; 973 + 974 + __u64 reserved[250]; 969 975 970 976 /* Configurable CPUID bits for userspace */ 971 977 struct kvm_cpuid2 cpuid;
+54
arch/x86/kernel/cpu/amd.c
··· 377 377 #endif 378 378 } 379 379 380 + #define ZEN_MODEL_STEP_UCODE(fam, model, step, ucode) \ 381 + X86_MATCH_VFM_STEPS(VFM_MAKE(X86_VENDOR_AMD, fam, model), \ 382 + step, step, ucode) 383 + 384 + static const struct x86_cpu_id amd_tsa_microcode[] = { 385 + ZEN_MODEL_STEP_UCODE(0x19, 0x01, 0x1, 0x0a0011d7), 386 + ZEN_MODEL_STEP_UCODE(0x19, 0x01, 0x2, 0x0a00123b), 387 + ZEN_MODEL_STEP_UCODE(0x19, 0x08, 0x2, 0x0a00820d), 388 + ZEN_MODEL_STEP_UCODE(0x19, 0x11, 0x1, 0x0a10114c), 389 + ZEN_MODEL_STEP_UCODE(0x19, 0x11, 0x2, 0x0a10124c), 390 + ZEN_MODEL_STEP_UCODE(0x19, 0x18, 0x1, 0x0a108109), 391 + ZEN_MODEL_STEP_UCODE(0x19, 0x21, 0x0, 0x0a20102e), 392 + ZEN_MODEL_STEP_UCODE(0x19, 0x21, 0x2, 0x0a201211), 393 + ZEN_MODEL_STEP_UCODE(0x19, 0x44, 0x1, 0x0a404108), 394 + ZEN_MODEL_STEP_UCODE(0x19, 0x50, 0x0, 0x0a500012), 395 + ZEN_MODEL_STEP_UCODE(0x19, 0x61, 0x2, 0x0a60120a), 396 + ZEN_MODEL_STEP_UCODE(0x19, 0x74, 0x1, 0x0a704108), 397 + ZEN_MODEL_STEP_UCODE(0x19, 0x75, 0x2, 0x0a705208), 398 + ZEN_MODEL_STEP_UCODE(0x19, 0x78, 0x0, 0x0a708008), 399 + ZEN_MODEL_STEP_UCODE(0x19, 0x7c, 0x0, 0x0a70c008), 400 + ZEN_MODEL_STEP_UCODE(0x19, 0xa0, 0x2, 0x0aa00216), 401 + {}, 402 + }; 403 + 404 + static void tsa_init(struct cpuinfo_x86 *c) 405 + { 406 + if (cpu_has(c, X86_FEATURE_HYPERVISOR)) 407 + return; 408 + 409 + if (cpu_has(c, X86_FEATURE_ZEN3) || 410 + cpu_has(c, X86_FEATURE_ZEN4)) { 411 + if (x86_match_min_microcode_rev(amd_tsa_microcode)) 412 + setup_force_cpu_cap(X86_FEATURE_VERW_CLEAR); 413 + else 414 + pr_debug("%s: current revision: 0x%x\n", __func__, c->microcode); 415 + } else { 416 + setup_force_cpu_cap(X86_FEATURE_TSA_SQ_NO); 417 + setup_force_cpu_cap(X86_FEATURE_TSA_L1_NO); 418 + } 419 + } 420 + 380 421 static void bsp_init_amd(struct cpuinfo_x86 *c) 381 422 { 382 423 if (cpu_has(c, X86_FEATURE_CONSTANT_TSC)) { ··· 530 489 } 531 490 532 491 bsp_determine_snp(c); 492 + 493 + tsa_init(c); 494 + 533 495 return; 534 496 535 497 warn: ··· 974 930 init_spectral_chicken(c); 975 931 fix_erratum_1386(c); 976 932 zen2_zenbleed_check(c); 933 + 934 + /* Disable RDSEED on AMD Cyan Skillfish because of an error. */ 935 + if (c->x86_model == 0x47 && c->x86_stepping == 0x0) { 936 + clear_cpu_cap(c, X86_FEATURE_RDSEED); 937 + msr_clear_bit(MSR_AMD64_CPUID_FN_7, 18); 938 + pr_emerg("RDSEED is not reliable on this platform; disabling.\n"); 939 + } 940 + 941 + /* Correct misconfigured CPUID on some clients. */ 942 + clear_cpu_cap(c, X86_FEATURE_INVLPGB); 977 943 } 978 944 979 945 static void init_amd_zen3(struct cpuinfo_x86 *c)
+130 -6
arch/x86/kernel/cpu/bugs.c
··· 94 94 static void __init its_select_mitigation(void); 95 95 static void __init its_update_mitigation(void); 96 96 static void __init its_apply_mitigation(void); 97 + static void __init tsa_select_mitigation(void); 98 + static void __init tsa_apply_mitigation(void); 97 99 98 100 /* The base value of the SPEC_CTRL MSR without task-specific bits set */ 99 101 u64 x86_spec_ctrl_base; ··· 171 169 DEFINE_STATIC_KEY_FALSE(switch_vcpu_ibpb); 172 170 EXPORT_SYMBOL_GPL(switch_vcpu_ibpb); 173 171 174 - /* Control MDS CPU buffer clear before idling (halt, mwait) */ 175 - DEFINE_STATIC_KEY_FALSE(mds_idle_clear); 176 - EXPORT_SYMBOL_GPL(mds_idle_clear); 172 + /* Control CPU buffer clear before idling (halt, mwait) */ 173 + DEFINE_STATIC_KEY_FALSE(cpu_buf_idle_clear); 174 + EXPORT_SYMBOL_GPL(cpu_buf_idle_clear); 177 175 178 176 /* 179 177 * Controls whether l1d flush based mitigations are enabled, ··· 227 225 gds_select_mitigation(); 228 226 its_select_mitigation(); 229 227 bhi_select_mitigation(); 228 + tsa_select_mitigation(); 230 229 231 230 /* 232 231 * After mitigations are selected, some may need to update their ··· 275 272 gds_apply_mitigation(); 276 273 its_apply_mitigation(); 277 274 bhi_apply_mitigation(); 275 + tsa_apply_mitigation(); 278 276 } 279 277 280 278 /* ··· 641 637 * is required irrespective of SMT state. 642 638 */ 643 639 if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO)) 644 - static_branch_enable(&mds_idle_clear); 640 + static_branch_enable(&cpu_buf_idle_clear); 645 641 646 642 if (mmio_nosmt || cpu_mitigations_auto_nosmt()) 647 643 cpu_smt_disable(false); ··· 1492 1488 } 1493 1489 1494 1490 #undef pr_fmt 1491 + #define pr_fmt(fmt) "Transient Scheduler Attacks: " fmt 1492 + 1493 + enum tsa_mitigations { 1494 + TSA_MITIGATION_NONE, 1495 + TSA_MITIGATION_AUTO, 1496 + TSA_MITIGATION_UCODE_NEEDED, 1497 + TSA_MITIGATION_USER_KERNEL, 1498 + TSA_MITIGATION_VM, 1499 + TSA_MITIGATION_FULL, 1500 + }; 1501 + 1502 + static const char * const tsa_strings[] = { 1503 + [TSA_MITIGATION_NONE] = "Vulnerable", 1504 + [TSA_MITIGATION_UCODE_NEEDED] = "Vulnerable: No microcode", 1505 + [TSA_MITIGATION_USER_KERNEL] = "Mitigation: Clear CPU buffers: user/kernel boundary", 1506 + [TSA_MITIGATION_VM] = "Mitigation: Clear CPU buffers: VM", 1507 + [TSA_MITIGATION_FULL] = "Mitigation: Clear CPU buffers", 1508 + }; 1509 + 1510 + static enum tsa_mitigations tsa_mitigation __ro_after_init = 1511 + IS_ENABLED(CONFIG_MITIGATION_TSA) ? 
TSA_MITIGATION_AUTO : TSA_MITIGATION_NONE; 1512 + 1513 + static int __init tsa_parse_cmdline(char *str) 1514 + { 1515 + if (!str) 1516 + return -EINVAL; 1517 + 1518 + if (!strcmp(str, "off")) 1519 + tsa_mitigation = TSA_MITIGATION_NONE; 1520 + else if (!strcmp(str, "on")) 1521 + tsa_mitigation = TSA_MITIGATION_FULL; 1522 + else if (!strcmp(str, "user")) 1523 + tsa_mitigation = TSA_MITIGATION_USER_KERNEL; 1524 + else if (!strcmp(str, "vm")) 1525 + tsa_mitigation = TSA_MITIGATION_VM; 1526 + else 1527 + pr_err("Ignoring unknown tsa=%s option.\n", str); 1528 + 1529 + return 0; 1530 + } 1531 + early_param("tsa", tsa_parse_cmdline); 1532 + 1533 + static void __init tsa_select_mitigation(void) 1534 + { 1535 + if (cpu_mitigations_off() || !boot_cpu_has_bug(X86_BUG_TSA)) { 1536 + tsa_mitigation = TSA_MITIGATION_NONE; 1537 + return; 1538 + } 1539 + 1540 + if (tsa_mitigation == TSA_MITIGATION_NONE) 1541 + return; 1542 + 1543 + if (!boot_cpu_has(X86_FEATURE_VERW_CLEAR)) { 1544 + tsa_mitigation = TSA_MITIGATION_UCODE_NEEDED; 1545 + goto out; 1546 + } 1547 + 1548 + if (tsa_mitigation == TSA_MITIGATION_AUTO) 1549 + tsa_mitigation = TSA_MITIGATION_FULL; 1550 + 1551 + /* 1552 + * No need to set verw_clear_cpu_buf_mitigation_selected - it 1553 + * doesn't fit all cases here and it is not needed because this 1554 + * is the only VERW-based mitigation on AMD. 1555 + */ 1556 + out: 1557 + pr_info("%s\n", tsa_strings[tsa_mitigation]); 1558 + } 1559 + 1560 + static void __init tsa_apply_mitigation(void) 1561 + { 1562 + switch (tsa_mitigation) { 1563 + case TSA_MITIGATION_USER_KERNEL: 1564 + setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF); 1565 + break; 1566 + case TSA_MITIGATION_VM: 1567 + setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM); 1568 + break; 1569 + case TSA_MITIGATION_FULL: 1570 + setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF); 1571 + setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM); 1572 + break; 1573 + default: 1574 + break; 1575 + } 1576 + } 1577 + 1578 + #undef pr_fmt 1495 1579 #define pr_fmt(fmt) "Spectre V2 : " fmt 1496 1580 1497 1581 static enum spectre_v2_user_mitigation spectre_v2_user_stibp __ro_after_init = ··· 2341 2249 return; 2342 2250 2343 2251 if (sched_smt_active()) { 2344 - static_branch_enable(&mds_idle_clear); 2252 + static_branch_enable(&cpu_buf_idle_clear); 2345 2253 } else if (mmio_mitigation == MMIO_MITIGATION_OFF || 2346 2254 (x86_arch_cap_msr & ARCH_CAP_FBSDP_NO)) { 2347 - static_branch_disable(&mds_idle_clear); 2255 + static_branch_disable(&cpu_buf_idle_clear); 2348 2256 } 2349 2257 } 2350 2258 ··· 2405 2313 pr_warn_once(MMIO_MSG_SMT); 2406 2314 break; 2407 2315 case MMIO_MITIGATION_OFF: 2316 + break; 2317 + } 2318 + 2319 + switch (tsa_mitigation) { 2320 + case TSA_MITIGATION_USER_KERNEL: 2321 + case TSA_MITIGATION_VM: 2322 + case TSA_MITIGATION_AUTO: 2323 + case TSA_MITIGATION_FULL: 2324 + /* 2325 + * TSA-SQ can potentially lead to info leakage between 2326 + * SMT threads. 
2327 + */ 2328 + if (sched_smt_active()) 2329 + static_branch_enable(&cpu_buf_idle_clear); 2330 + else 2331 + static_branch_disable(&cpu_buf_idle_clear); 2332 + break; 2333 + case TSA_MITIGATION_NONE: 2334 + case TSA_MITIGATION_UCODE_NEEDED: 2408 2335 break; 2409 2336 } 2410 2337 ··· 3376 3265 return sysfs_emit(buf, "%s\n", gds_strings[gds_mitigation]); 3377 3266 } 3378 3267 3268 + static ssize_t tsa_show_state(char *buf) 3269 + { 3270 + return sysfs_emit(buf, "%s\n", tsa_strings[tsa_mitigation]); 3271 + } 3272 + 3379 3273 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr, 3380 3274 char *buf, unsigned int bug) 3381 3275 { ··· 3443 3327 3444 3328 case X86_BUG_ITS: 3445 3329 return its_show_state(buf); 3330 + 3331 + case X86_BUG_TSA: 3332 + return tsa_show_state(buf); 3446 3333 3447 3334 default: 3448 3335 break; ··· 3532 3413 ssize_t cpu_show_indirect_target_selection(struct device *dev, struct device_attribute *attr, char *buf) 3533 3414 { 3534 3415 return cpu_show_common(dev, attr, buf, X86_BUG_ITS); 3416 + } 3417 + 3418 + ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf) 3419 + { 3420 + return cpu_show_common(dev, attr, buf, X86_BUG_TSA); 3535 3421 } 3536 3422 #endif 3537 3423
+13 -1
arch/x86/kernel/cpu/common.c
··· 1233 1233 #define ITS BIT(8) 1234 1234 /* CPU is affected by Indirect Target Selection, but guest-host isolation is not affected */ 1235 1235 #define ITS_NATIVE_ONLY BIT(9) 1236 + /* CPU is affected by Transient Scheduler Attacks */ 1237 + #define TSA BIT(10) 1236 1238 1237 1239 static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { 1238 1240 VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE, X86_STEP_MAX, SRBDS), ··· 1282 1280 VULNBL_AMD(0x16, RETBLEED), 1283 1281 VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO), 1284 1282 VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO), 1285 - VULNBL_AMD(0x19, SRSO), 1283 + VULNBL_AMD(0x19, SRSO | TSA), 1286 1284 VULNBL_AMD(0x1a, SRSO), 1287 1285 {} 1288 1286 }; ··· 1530 1528 setup_force_cpu_bug(X86_BUG_ITS); 1531 1529 if (cpu_matches(cpu_vuln_blacklist, ITS_NATIVE_ONLY)) 1532 1530 setup_force_cpu_bug(X86_BUG_ITS_NATIVE_ONLY); 1531 + } 1532 + 1533 + if (c->x86_vendor == X86_VENDOR_AMD) { 1534 + if (!cpu_has(c, X86_FEATURE_TSA_SQ_NO) || 1535 + !cpu_has(c, X86_FEATURE_TSA_L1_NO)) { 1536 + if (cpu_matches(cpu_vuln_blacklist, TSA) || 1537 + /* Enable bug on Zen guests to allow for live migration. */ 1538 + (cpu_has(c, X86_FEATURE_HYPERVISOR) && cpu_has(c, X86_FEATURE_ZEN))) 1539 + setup_force_cpu_bug(X86_BUG_TSA); 1540 + } 1533 1541 } 1534 1542 1535 1543 if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+112
arch/x86/kernel/cpu/microcode/amd_shas.c
··· 231 231 0x0d,0x5b,0x65,0x34,0x69,0xb2,0x62,0x21, 232 232 } 233 233 }, 234 + { 0xa0011d7, { 235 + 0x35,0x07,0xcd,0x40,0x94,0xbc,0x81,0x6b, 236 + 0xfc,0x61,0x56,0x1a,0xe2,0xdb,0x96,0x12, 237 + 0x1c,0x1c,0x31,0xb1,0x02,0x6f,0xe5,0xd2, 238 + 0xfe,0x1b,0x04,0x03,0x2c,0x8f,0x4c,0x36, 239 + } 240 + }, 234 241 { 0xa001223, { 235 242 0xfb,0x32,0x5f,0xc6,0x83,0x4f,0x8c,0xb8, 236 243 0xa4,0x05,0xf9,0x71,0x53,0x01,0x16,0xc4, ··· 301 294 0xc0,0xcd,0x33,0xf2,0x8d,0xf9,0xef,0x59, 302 295 } 303 296 }, 297 + { 0xa00123b, { 298 + 0xef,0xa1,0x1e,0x71,0xf1,0xc3,0x2c,0xe2, 299 + 0xc3,0xef,0x69,0x41,0x7a,0x54,0xca,0xc3, 300 + 0x8f,0x62,0x84,0xee,0xc2,0x39,0xd9,0x28, 301 + 0x95,0xa7,0x12,0x49,0x1e,0x30,0x71,0x72, 302 + } 303 + }, 304 304 { 0xa00820c, { 305 305 0xa8,0x0c,0x81,0xc0,0xa6,0x00,0xe7,0xf3, 306 306 0x5f,0x65,0xd3,0xb9,0x6f,0xea,0x93,0x63, 307 307 0xf1,0x8c,0x88,0x45,0xd7,0x82,0x80,0xd1, 308 308 0xe1,0x3b,0x8d,0xb2,0xf8,0x22,0x03,0xe2, 309 + } 310 + }, 311 + { 0xa00820d, { 312 + 0xf9,0x2a,0xc0,0xf4,0x9e,0xa4,0x87,0xa4, 313 + 0x7d,0x87,0x00,0xfd,0xab,0xda,0x19,0xca, 314 + 0x26,0x51,0x32,0xc1,0x57,0x91,0xdf,0xc1, 315 + 0x05,0xeb,0x01,0x7c,0x5a,0x95,0x21,0xb7, 309 316 } 310 317 }, 311 318 { 0xa10113e, { ··· 343 322 0xf1,0x5e,0xb0,0xde,0xb4,0x98,0xae,0xc4, 344 323 } 345 324 }, 325 + { 0xa10114c, { 326 + 0x9e,0xb6,0xa2,0xd9,0x87,0x38,0xc5,0x64, 327 + 0xd8,0x88,0xfa,0x78,0x98,0xf9,0x6f,0x74, 328 + 0x39,0x90,0x1b,0xa5,0xcf,0x5e,0xb4,0x2a, 329 + 0x02,0xff,0xd4,0x8c,0x71,0x8b,0xe2,0xc0, 330 + } 331 + }, 346 332 { 0xa10123e, { 347 333 0x03,0xb9,0x2c,0x76,0x48,0x93,0xc9,0x18, 348 334 0xfb,0x56,0xfd,0xf7,0xe2,0x1d,0xca,0x4d, ··· 371 343 0x1b,0x7d,0x64,0x9d,0x4b,0x53,0x13,0x75, 372 344 } 373 345 }, 346 + { 0xa10124c, { 347 + 0x29,0xea,0xf1,0x2c,0xb2,0xe4,0xef,0x90, 348 + 0xa4,0xcd,0x1d,0x86,0x97,0x17,0x61,0x46, 349 + 0xfc,0x22,0xcb,0x57,0x75,0x19,0xc8,0xcc, 350 + 0x0c,0xf5,0xbc,0xac,0x81,0x9d,0x9a,0xd2, 351 + } 352 + }, 374 353 { 0xa108108, { 375 354 0xed,0xc2,0xec,0xa1,0x15,0xc6,0x65,0xe9, 376 355 0xd0,0xef,0x39,0xaa,0x7f,0x55,0x06,0xc6, 377 356 0xf5,0xd4,0x3f,0x7b,0x14,0xd5,0x60,0x2c, 378 357 0x28,0x1e,0x9c,0x59,0x69,0x99,0x4d,0x16, 358 + } 359 + }, 360 + { 0xa108109, { 361 + 0x85,0xb4,0xbd,0x7c,0x49,0xa7,0xbd,0xfa, 362 + 0x49,0x36,0x80,0x81,0xc5,0xb7,0x39,0x1b, 363 + 0x9a,0xaa,0x50,0xde,0x9b,0xe9,0x32,0x35, 364 + 0x42,0x7e,0x51,0x4f,0x52,0x2c,0x28,0x59, 379 365 } 380 366 }, 381 367 { 0xa20102d, { ··· 399 357 0x8c,0xe9,0x19,0x3e,0xcc,0x3f,0x7b,0xb4, 400 358 } 401 359 }, 360 + { 0xa20102e, { 361 + 0xbe,0x1f,0x32,0x04,0x0d,0x3c,0x9c,0xdd, 362 + 0xe1,0xa4,0xbf,0x76,0x3a,0xec,0xc2,0xf6, 363 + 0x11,0x00,0xa7,0xaf,0x0f,0xe5,0x02,0xc5, 364 + 0x54,0x3a,0x1f,0x8c,0x16,0xb5,0xff,0xbe, 365 + } 366 + }, 402 367 { 0xa201210, { 403 368 0xe8,0x6d,0x51,0x6a,0x8e,0x72,0xf3,0xfe, 404 369 0x6e,0x16,0xbc,0x62,0x59,0x40,0x17,0xe9, 405 370 0x6d,0x3d,0x0e,0x6b,0xa7,0xac,0xe3,0x68, 406 371 0xf7,0x55,0xf0,0x13,0xbb,0x22,0xf6,0x41, 372 + } 373 + }, 374 + { 0xa201211, { 375 + 0x69,0xa1,0x17,0xec,0xd0,0xf6,0x6c,0x95, 376 + 0xe2,0x1e,0xc5,0x59,0x1a,0x52,0x0a,0x27, 377 + 0xc4,0xed,0xd5,0x59,0x1f,0xbf,0x00,0xff, 378 + 0x08,0x88,0xb5,0xe1,0x12,0xb6,0xcc,0x27, 407 379 } 408 380 }, 409 381 { 0xa404107, { ··· 427 371 0x13,0xbc,0xc5,0x25,0xe4,0xc5,0xc3,0x99, 428 372 } 429 373 }, 374 + { 0xa404108, { 375 + 0x69,0x67,0x43,0x06,0xf8,0x0c,0x62,0xdc, 376 + 0xa4,0x21,0x30,0x4f,0x0f,0x21,0x2c,0xcb, 377 + 0xcc,0x37,0xf1,0x1c,0xc3,0xf8,0x2f,0x19, 378 + 0xdf,0x53,0x53,0x46,0xb1,0x15,0xea,0x00, 379 + } 380 + }, 430 381 { 0xa500011, { 431 382 
0x23,0x3d,0x70,0x7d,0x03,0xc3,0xc4,0xf4, 432 383 0x2b,0x82,0xc6,0x05,0xda,0x80,0x0a,0xf1, 433 384 0xd7,0x5b,0x65,0x3a,0x7d,0xab,0xdf,0xa2, 434 385 0x11,0x5e,0x96,0x7e,0x71,0xe9,0xfc,0x74, 386 + } 387 + }, 388 + { 0xa500012, { 389 + 0xeb,0x74,0x0d,0x47,0xa1,0x8e,0x09,0xe4, 390 + 0x93,0x4c,0xad,0x03,0x32,0x4c,0x38,0x16, 391 + 0x10,0x39,0xdd,0x06,0xaa,0xce,0xd6,0x0f, 392 + 0x62,0x83,0x9d,0x8e,0x64,0x55,0xbe,0x63, 435 393 } 436 394 }, 437 395 { 0xa601209, { ··· 455 385 0xe8,0x73,0xe2,0xd6,0xdb,0xd2,0x77,0x1d, 456 386 } 457 387 }, 388 + { 0xa60120a, { 389 + 0x0c,0x8b,0x3d,0xfd,0x52,0x52,0x85,0x7d, 390 + 0x20,0x3a,0xe1,0x7e,0xa4,0x21,0x3b,0x7b, 391 + 0x17,0x86,0xae,0xac,0x13,0xb8,0x63,0x9d, 392 + 0x06,0x01,0xd0,0xa0,0x51,0x9a,0x91,0x2c, 393 + } 394 + }, 458 395 { 0xa704107, { 459 396 0xf3,0xc6,0x58,0x26,0xee,0xac,0x3f,0xd6, 460 397 0xce,0xa1,0x72,0x47,0x3b,0xba,0x2b,0x93, 461 398 0x2a,0xad,0x8e,0x6b,0xea,0x9b,0xb7,0xc2, 462 399 0x64,0x39,0x71,0x8c,0xce,0xe7,0x41,0x39, 400 + } 401 + }, 402 + { 0xa704108, { 403 + 0xd7,0x55,0x15,0x2b,0xfe,0xc4,0xbc,0x93, 404 + 0xec,0x91,0xa0,0xae,0x45,0xb7,0xc3,0x98, 405 + 0x4e,0xff,0x61,0x77,0x88,0xc2,0x70,0x49, 406 + 0xe0,0x3a,0x1d,0x84,0x38,0x52,0xbf,0x5a, 463 407 } 464 408 }, 465 409 { 0xa705206, { ··· 483 399 0x03,0x35,0xe9,0xbe,0xfb,0x06,0xdf,0xfc, 484 400 } 485 401 }, 402 + { 0xa705208, { 403 + 0x30,0x1d,0x55,0x24,0xbc,0x6b,0x5a,0x19, 404 + 0x0c,0x7d,0x1d,0x74,0xaa,0xd1,0xeb,0xd2, 405 + 0x16,0x62,0xf7,0x5b,0xe1,0x1f,0x18,0x11, 406 + 0x5c,0xf0,0x94,0x90,0x26,0xec,0x69,0xff, 407 + } 408 + }, 486 409 { 0xa708007, { 487 410 0x6b,0x76,0xcc,0x78,0xc5,0x8a,0xa3,0xe3, 488 411 0x32,0x2d,0x79,0xe4,0xc3,0x80,0xdb,0xb2, ··· 497 406 0xdf,0x92,0x73,0x84,0x87,0x3c,0x73,0x93, 498 407 } 499 408 }, 409 + { 0xa708008, { 410 + 0x08,0x6e,0xf0,0x22,0x4b,0x8e,0xc4,0x46, 411 + 0x58,0x34,0xe6,0x47,0xa2,0x28,0xfd,0xab, 412 + 0x22,0x3d,0xdd,0xd8,0x52,0x9e,0x1d,0x16, 413 + 0xfa,0x01,0x68,0x14,0x79,0x3e,0xe8,0x6b, 414 + } 415 + }, 500 416 { 0xa70c005, { 501 417 0x88,0x5d,0xfb,0x79,0x64,0xd8,0x46,0x3b, 502 418 0x4a,0x83,0x8e,0x77,0x7e,0xcf,0xb3,0x0f, 503 419 0x1f,0x1f,0xf1,0x97,0xeb,0xfe,0x56,0x55, 504 420 0xee,0x49,0xac,0xe1,0x8b,0x13,0xc5,0x13, 421 + } 422 + }, 423 + { 0xa70c008, { 424 + 0x0f,0xdb,0x37,0xa1,0x10,0xaf,0xd4,0x21, 425 + 0x94,0x0d,0xa4,0xa2,0xe9,0x86,0x6c,0x0e, 426 + 0x85,0x7c,0x36,0x30,0xa3,0x3a,0x78,0x66, 427 + 0x18,0x10,0x60,0x0d,0x78,0x3d,0x44,0xd0, 505 428 } 506 429 }, 507 430 { 0xaa00116, { ··· 544 439 0x4e,0x85,0x4b,0x7c,0x6b,0xd5,0x7c,0xd4, 545 440 0x1b,0x51,0x71,0x3a,0x0e,0x0b,0xdc,0x9b, 546 441 0x68,0x2f,0x46,0xee,0xfe,0xc6,0x6d,0xef, 442 + } 443 + }, 444 + { 0xaa00216, { 445 + 0x79,0xfb,0x5b,0x9f,0xb6,0xe6,0xa8,0xf5, 446 + 0x4e,0x7c,0x4f,0x8e,0x1d,0xad,0xd0,0x08, 447 + 0xc2,0x43,0x7c,0x8b,0xe6,0xdb,0xd0,0xd2, 448 + 0xe8,0x39,0x26,0xc1,0xe5,0x5a,0x48,0xf1, 547 449 } 548 450 }, 549 451 };
+2
arch/x86/kernel/cpu/scattered.c
··· 50 50 { X86_FEATURE_MBA, CPUID_EBX, 6, 0x80000008, 0 }, 51 51 { X86_FEATURE_SMBA, CPUID_EBX, 2, 0x80000020, 0 }, 52 52 { X86_FEATURE_BMEC, CPUID_EBX, 3, 0x80000020, 0 }, 53 + { X86_FEATURE_TSA_SQ_NO, CPUID_ECX, 1, 0x80000021, 0 }, 54 + { X86_FEATURE_TSA_L1_NO, CPUID_ECX, 2, 0x80000021, 0 }, 53 55 { X86_FEATURE_AMD_WORKLOAD_CLASS, CPUID_EAX, 22, 0x80000021, 0 }, 54 56 { X86_FEATURE_PERFMON_V2, CPUID_EAX, 0, 0x80000022, 0 }, 55 57 { X86_FEATURE_AMD_LBR_V2, CPUID_EAX, 1, 0x80000022, 0 },
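The scattered-feature table maps CPUID bits that have no dedicated feature word onto synthetic X86_FEATURE_* bits; the two new entries cover the TSA "not affected" bits in leaf 0x80000021 ECX. A minimal sketch of the raw check these table entries encode (tsa_reported_safe() is a hypothetical helper; kernel code would normally test the synthetic bits via cpu_feature_enabled() instead):

/* Illustrative only: leaf 0x80000021, ECX bits 1 and 2. */
static bool tsa_reported_safe(void)
{
        u32 ecx = cpuid_ecx(0x80000021);

        return (ecx & BIT(1)) && (ecx & BIT(2));  /* TSA_SQ_NO && TSA_L1_NO */
}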
+12 -4
arch/x86/kernel/process.c
··· 907 907 */ 908 908 static __cpuidle void mwait_idle(void) 909 909 { 910 + if (need_resched()) 911 + return; 912 + 913 + x86_idle_clear_cpu_buffers(); 914 + 910 915 if (!current_set_polling_and_test()) { 911 916 const void *addr = &current_thread_info()->flags; 912 917 913 918 alternative_input("", "clflush (%[addr])", X86_BUG_CLFLUSH_MONITOR, [addr] "a" (addr)); 914 919 __monitor(addr, 0, 0); 915 - if (!need_resched()) { 916 - __sti_mwait(0, 0); 917 - raw_local_irq_disable(); 918 - } 920 + if (need_resched()) 921 + goto out; 922 + 923 + __sti_mwait(0, 0); 924 + raw_local_irq_disable(); 919 925 } 926 + 927 + out: 920 928 __current_clr_polling(); 921 929 } 922 930
+9 -1
arch/x86/kvm/cpuid.c
··· 1165 1165 */ 1166 1166 SYNTHESIZED_F(LFENCE_RDTSC), 1167 1167 /* SmmPgCfgLock */ 1168 + /* 4: Resv */ 1169 + SYNTHESIZED_F(VERW_CLEAR), 1168 1170 F(NULL_SEL_CLR_BASE), 1169 1171 /* UpperAddressIgnore */ 1170 1172 F(AUTOIBRS), ··· 1179 1177 SYNTHESIZED_F(IBPB_BRTYPE), 1180 1178 SYNTHESIZED_F(SRSO_NO), 1181 1179 F(SRSO_USER_KERNEL_NO), 1180 + ); 1181 + 1182 + kvm_cpu_cap_init(CPUID_8000_0021_ECX, 1183 + SYNTHESIZED_F(TSA_SQ_NO), 1184 + SYNTHESIZED_F(TSA_L1_NO), 1182 1185 ); 1183 1186 1184 1187 kvm_cpu_cap_init(CPUID_8000_0022_EAX, ··· 1755 1748 entry->eax = entry->ebx = entry->ecx = entry->edx = 0; 1756 1749 break; 1757 1750 case 0x80000021: 1758 - entry->ebx = entry->ecx = entry->edx = 0; 1751 + entry->ebx = entry->edx = 0; 1759 1752 cpuid_entry_override(entry, CPUID_8000_0021_EAX); 1753 + cpuid_entry_override(entry, CPUID_8000_0021_ECX); 1760 1754 break; 1761 1755 /* AMD Extended Performance Monitoring and Debug */ 1762 1756 case 0x80000022: {
+4 -1
arch/x86/kvm/hyperv.c
··· 1979 1979 if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY) 1980 1980 goto out_flush_all; 1981 1981 1982 + if (is_noncanonical_invlpg_address(entries[i], vcpu)) 1983 + continue; 1984 + 1982 1985 /* 1983 1986 * Lower 12 bits of 'address' encode the number of additional 1984 1987 * pages to flush. ··· 2004 2001 static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc) 2005 2002 { 2006 2003 struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu); 2004 + unsigned long *vcpu_mask = hv_vcpu->vcpu_mask; 2007 2005 u64 *sparse_banks = hv_vcpu->sparse_banks; 2008 2006 struct kvm *kvm = vcpu->kvm; 2009 2007 struct hv_tlb_flush_ex flush_ex; 2010 2008 struct hv_tlb_flush flush; 2011 - DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS); 2012 2009 struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo; 2013 2010 /* 2014 2011 * Normally, there can be no more than 'KVM_HV_TLB_FLUSH_FIFO_SIZE'
+7
arch/x86/kvm/reverse_cpuid.h
··· 52 52 /* CPUID level 0x80000022 (EAX) */ 53 53 #define KVM_X86_FEATURE_PERFMON_V2 KVM_X86_FEATURE(CPUID_8000_0022_EAX, 0) 54 54 55 + /* CPUID level 0x80000021 (ECX) */ 56 + #define KVM_X86_FEATURE_TSA_SQ_NO KVM_X86_FEATURE(CPUID_8000_0021_ECX, 1) 57 + #define KVM_X86_FEATURE_TSA_L1_NO KVM_X86_FEATURE(CPUID_8000_0021_ECX, 2) 58 + 55 59 struct cpuid_reg { 56 60 u32 function; 57 61 u32 index; ··· 86 82 [CPUID_8000_0022_EAX] = {0x80000022, 0, CPUID_EAX}, 87 83 [CPUID_7_2_EDX] = { 7, 2, CPUID_EDX}, 88 84 [CPUID_24_0_EBX] = { 0x24, 0, CPUID_EBX}, 85 + [CPUID_8000_0021_ECX] = {0x80000021, 0, CPUID_ECX}, 89 86 }; 90 87 91 88 /* ··· 126 121 KVM_X86_TRANSLATE_FEATURE(PERFMON_V2); 127 122 KVM_X86_TRANSLATE_FEATURE(RRSBA_CTRL); 128 123 KVM_X86_TRANSLATE_FEATURE(BHI_CTRL); 124 + KVM_X86_TRANSLATE_FEATURE(TSA_SQ_NO); 125 + KVM_X86_TRANSLATE_FEATURE(TSA_L1_NO); 129 126 default: 130 127 return x86_feature; 131 128 }
+10 -2
arch/x86/kvm/svm/sev.c
··· 1971 1971 struct kvm_vcpu *src_vcpu; 1972 1972 unsigned long i; 1973 1973 1974 + if (src->created_vcpus != atomic_read(&src->online_vcpus) || 1975 + dst->created_vcpus != atomic_read(&dst->online_vcpus)) 1976 + return -EBUSY; 1977 + 1974 1978 if (!sev_es_guest(src)) 1975 1979 return 0; 1976 1980 ··· 4449 4445 * the VMSA will be NULL if this vCPU is the destination for intrahost 4450 4446 * migration, and will be copied later. 4451 4447 */ 4452 - if (svm->sev_es.vmsa && !svm->sev_es.snp_has_guest_vmsa) 4453 - svm->vmcb->control.vmsa_pa = __pa(svm->sev_es.vmsa); 4448 + if (!svm->sev_es.snp_has_guest_vmsa) { 4449 + if (svm->sev_es.vmsa) 4450 + svm->vmcb->control.vmsa_pa = __pa(svm->sev_es.vmsa); 4451 + else 4452 + svm->vmcb->control.vmsa_pa = INVALID_PAGE; 4453 + } 4454 4454 4455 4455 if (cpu_feature_enabled(X86_FEATURE_ALLOWED_SEV_FEATURES)) 4456 4456 svm->vmcb->control.allowed_sev_features = sev->vmsa_features |
+6
arch/x86/kvm/svm/vmenter.S
··· 169 169 #endif 170 170 mov VCPU_RDI(%_ASM_DI), %_ASM_DI 171 171 172 + /* Clobbers EFLAGS.ZF */ 173 + VM_CLEAR_CPU_BUFFERS 174 + 172 175 /* Enter guest mode */ 173 176 3: vmrun %_ASM_AX 174 177 4: ··· 337 334 /* Get svm->current_vmcb->pa into RAX. */ 338 335 mov SVM_current_vmcb(%rdi), %rax 339 336 mov KVM_VMCB_pa(%rax), %rax 337 + 338 + /* Clobbers EFLAGS.ZF */ 339 + VM_CLEAR_CPU_BUFFERS 340 340 341 341 /* Enter guest mode */ 342 342 1: vmrun %rax
+30
arch/x86/kvm/vmx/tdx.c
··· 173 173 tdx_clear_unsupported_cpuid(entry); 174 174 } 175 175 176 + #define TDVMCALLINFO_GET_QUOTE BIT(0) 177 + #define TDVMCALLINFO_SETUP_EVENT_NOTIFY_INTERRUPT BIT(1) 178 + 176 179 static int init_kvm_tdx_caps(const struct tdx_sys_info_td_conf *td_conf, 177 180 struct kvm_tdx_capabilities *caps) 178 181 { ··· 190 187 return -EIO; 191 188 192 189 caps->cpuid.nent = td_conf->num_cpuid_config; 190 + 191 + caps->user_tdvmcallinfo_1_r11 = 192 + TDVMCALLINFO_GET_QUOTE | 193 + TDVMCALLINFO_SETUP_EVENT_NOTIFY_INTERRUPT; 193 194 194 195 for (i = 0; i < td_conf->num_cpuid_config; i++) 195 196 td_init_cpuid_entry2(&caps->cpuid.entries[i], i); ··· 1537 1530 return 0; 1538 1531 } 1539 1532 1533 + static int tdx_setup_event_notify_interrupt(struct kvm_vcpu *vcpu) 1534 + { 1535 + struct vcpu_tdx *tdx = to_tdx(vcpu); 1536 + u64 vector = tdx->vp_enter_args.r12; 1537 + 1538 + if (vector < 32 || vector > 255) { 1539 + tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND); 1540 + return 1; 1541 + } 1542 + 1543 + vcpu->run->exit_reason = KVM_EXIT_TDX; 1544 + vcpu->run->tdx.flags = 0; 1545 + vcpu->run->tdx.nr = TDVMCALL_SETUP_EVENT_NOTIFY_INTERRUPT; 1546 + vcpu->run->tdx.setup_event_notify.ret = TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED; 1547 + vcpu->run->tdx.setup_event_notify.vector = vector; 1548 + 1549 + vcpu->arch.complete_userspace_io = tdx_complete_simple; 1550 + 1551 + return 0; 1552 + } 1553 + 1540 1554 static int handle_tdvmcall(struct kvm_vcpu *vcpu) 1541 1555 { 1542 1556 switch (tdvmcall_leaf(vcpu)) { ··· 1569 1541 return tdx_get_td_vm_call_info(vcpu); 1570 1542 case TDVMCALL_GET_QUOTE: 1571 1543 return tdx_get_quote(vcpu); 1544 + case TDVMCALL_SETUP_EVENT_NOTIFY_INTERRUPT: 1545 + return tdx_setup_event_notify_interrupt(vcpu); 1572 1546 default: 1573 1547 break; 1574 1548 }
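The new handler validates the requested vector (32-255) and forwards TDVMCALL_SETUP_EVENT_NOTIFY_INTERRUPT to userspace as a KVM_EXIT_TDX exit, completed via tdx_complete_simple(). A hedged sketch of the matching VMM-side handling, using only the run->tdx fields visible in the hunk; the wrapper function, record_notify_vector(), and the use of 0 as the success status are assumptions, not part of the patch:

/* Userspace/VMM side, illustrative only. */
static void handle_tdx_exit(struct kvm_run *run)
{
        if (run->exit_reason == KVM_EXIT_TDX &&
            run->tdx.nr == TDVMCALL_SETUP_EVENT_NOTIFY_INTERRUPT) {
                record_notify_vector(run->tdx.setup_event_notify.vector); /* hypothetical helper */
                run->tdx.setup_event_notify.ret = 0;  /* assumed success status, replacing SUBFUNC_UNSUPPORTED */
        }
}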
+1 -1
arch/x86/kvm/vmx/vmx.c
··· 7291 7291 vmx_l1d_flush(vcpu); 7292 7292 else if (static_branch_unlikely(&cpu_buf_vm_clear) && 7293 7293 kvm_arch_has_assigned_device(vcpu->kvm)) 7294 - mds_clear_cpu_buffers(); 7294 + x86_clear_cpu_buffers(); 7295 7295 7296 7296 vmx_disable_fb_clear(vmx); 7297 7297
+3 -1
arch/x86/kvm/x86.c
··· 3258 3258 3259 3259 /* With all the info we got, fill in the values */ 3260 3260 3261 - if (kvm_caps.has_tsc_control) 3261 + if (kvm_caps.has_tsc_control) { 3262 3262 tgt_tsc_khz = kvm_scale_tsc(tgt_tsc_khz, 3263 3263 v->arch.l1_tsc_scaling_ratio); 3264 + tgt_tsc_khz = tgt_tsc_khz ? : 1; 3265 + } 3264 3266 3265 3267 if (unlikely(vcpu->hw_tsc_khz != tgt_tsc_khz)) { 3266 3268 kvm_get_time_scale(NSEC_PER_SEC, tgt_tsc_khz * 1000LL,
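The added "? : 1" uses GCC's omitted-middle-operand conditional: the expression evaluates to the scaled frequency when it is non-zero and to 1 otherwise, so a scaling ratio that drives the TSC down to 0 kHz can no longer reach kvm_get_time_scale(). Spelled out without the extension:

tgt_tsc_khz = kvm_scale_tsc(tgt_tsc_khz, v->arch.l1_tsc_scaling_ratio);
if (!tgt_tsc_khz)
        tgt_tsc_khz = 1;  /* equivalent to: tgt_tsc_khz = tgt_tsc_khz ? : 1; */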
+13 -2
arch/x86/kvm/xen.c
··· 1971 1971 { 1972 1972 struct kvm_vcpu *vcpu; 1973 1973 1974 - if (ue->u.xen_evtchn.port >= max_evtchn_port(kvm)) 1975 - return -EINVAL; 1974 + /* 1975 + * Don't check for the port being within range of max_evtchn_port(). 1976 + * Userspace can configure what ever targets it likes; events just won't 1977 + * be delivered if/while the target is invalid, just like userspace can 1978 + * configure MSIs which target non-existent APICs. 1979 + * 1980 + * This allow on Live Migration and Live Update, the IRQ routing table 1981 + * can be restored *independently* of other things like creating vCPUs, 1982 + * without imposing an ordering dependency on userspace. In this 1983 + * particular case, the problematic ordering would be with setting the 1984 + * Xen 'long mode' flag, which changes max_evtchn_port() to allow 4096 1985 + * instead of 1024 event channels. 1986 + */ 1976 1987 1977 1988 /* We only support 2 level event channels for now */ 1978 1989 if (ue->u.xen_evtchn.priority != KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL)
+3
drivers/base/cpu.c
··· 602 602 CPU_SHOW_VULN_FALLBACK(ghostwrite); 603 603 CPU_SHOW_VULN_FALLBACK(old_microcode); 604 604 CPU_SHOW_VULN_FALLBACK(indirect_target_selection); 605 + CPU_SHOW_VULN_FALLBACK(tsa); 605 606 606 607 static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL); 607 608 static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL); ··· 621 620 static DEVICE_ATTR(ghostwrite, 0444, cpu_show_ghostwrite, NULL); 622 621 static DEVICE_ATTR(old_microcode, 0444, cpu_show_old_microcode, NULL); 623 622 static DEVICE_ATTR(indirect_target_selection, 0444, cpu_show_indirect_target_selection, NULL); 623 + static DEVICE_ATTR(tsa, 0444, cpu_show_tsa, NULL); 624 624 625 625 static struct attribute *cpu_root_vulnerabilities_attrs[] = { 626 626 &dev_attr_meltdown.attr, ··· 641 639 &dev_attr_ghostwrite.attr, 642 640 &dev_attr_old_microcode.attr, 643 641 &dev_attr_indirect_target_selection.attr, 642 + &dev_attr_tsa.attr, 644 643 NULL 645 644 }; 646 645
+1 -1
drivers/base/power/main.c
··· 1236 1236 */ 1237 1237 void dpm_resume_end(pm_message_t state) 1238 1238 { 1239 - pm_restore_gfp_mask(); 1240 1239 dpm_resume(state); 1240 + pm_restore_gfp_mask(); 1241 1241 dpm_complete(state); 1242 1242 } 1243 1243 EXPORT_SYMBOL_GPL(dpm_resume_end);
+3 -3
drivers/block/nbd.c
··· 2198 2198 goto out; 2199 2199 } 2200 2200 } 2201 - ret = nbd_start_device(nbd); 2202 - if (ret) 2203 - goto out; 2201 + 2204 2202 if (info->attrs[NBD_ATTR_BACKEND_IDENTIFIER]) { 2205 2203 nbd->backend = nla_strdup(info->attrs[NBD_ATTR_BACKEND_IDENTIFIER], 2206 2204 GFP_KERNEL); ··· 2214 2216 goto out; 2215 2217 } 2216 2218 set_bit(NBD_RT_HAS_BACKEND_FILE, &config->runtime_flags); 2219 + 2220 + ret = nbd_start_device(nbd); 2217 2221 out: 2218 2222 mutex_unlock(&nbd->config_lock); 2219 2223 if (!ret) {
+8 -8
drivers/char/agp/amd64-agp.c
··· 720 720 721 721 MODULE_DEVICE_TABLE(pci, agp_amd64_pci_table); 722 722 723 - static const struct pci_device_id agp_amd64_pci_promisc_table[] = { 724 - { PCI_DEVICE_CLASS(0, 0) }, 725 - { } 726 - }; 727 - 728 723 static DEFINE_SIMPLE_DEV_PM_OPS(agp_amd64_pm_ops, NULL, agp_amd64_resume); 729 724 730 725 static struct pci_driver agp_amd64_pci_driver = { ··· 734 739 /* Not static due to IOMMU code calling it early. */ 735 740 int __init agp_amd64_init(void) 736 741 { 742 + struct pci_dev *pdev = NULL; 737 743 int err = 0; 738 744 739 745 if (agp_off) ··· 763 767 } 764 768 765 769 /* Look for any AGP bridge */ 766 - agp_amd64_pci_driver.id_table = agp_amd64_pci_promisc_table; 767 - err = driver_attach(&agp_amd64_pci_driver.driver); 768 - if (err == 0 && agp_bridges_found == 0) { 770 + for_each_pci_dev(pdev) 771 + if (pci_find_capability(pdev, PCI_CAP_ID_AGP)) 772 + pci_add_dynid(&agp_amd64_pci_driver, 773 + pdev->vendor, pdev->device, 774 + pdev->subsystem_vendor, 775 + pdev->subsystem_device, 0, 0, 0); 776 + if (agp_bridges_found == 0) { 769 777 pci_unregister_driver(&agp_amd64_pci_driver); 770 778 err = -ENODEV; 771 779 }
+11 -9
drivers/clk/clk-scmi.c
··· 404 404 const struct scmi_handle *handle = sdev->handle; 405 405 struct scmi_protocol_handle *ph; 406 406 const struct clk_ops *scmi_clk_ops_db[SCMI_MAX_CLK_OPS] = {}; 407 + struct scmi_clk *sclks; 407 408 408 409 if (!handle) 409 410 return -ENODEV; ··· 431 430 transport_is_atomic = handle->is_transport_atomic(handle, 432 431 &atomic_threshold_us); 433 432 434 - for (idx = 0; idx < count; idx++) { 435 - struct scmi_clk *sclk; 436 - const struct clk_ops *scmi_ops; 433 + sclks = devm_kcalloc(dev, count, sizeof(*sclks), GFP_KERNEL); 434 + if (!sclks) 435 + return -ENOMEM; 437 436 438 - sclk = devm_kzalloc(dev, sizeof(*sclk), GFP_KERNEL); 439 - if (!sclk) 440 - return -ENOMEM; 437 + for (idx = 0; idx < count; idx++) 438 + hws[idx] = &sclks[idx].hw; 439 + 440 + for (idx = 0; idx < count; idx++) { 441 + struct scmi_clk *sclk = &sclks[idx]; 442 + const struct clk_ops *scmi_ops; 441 443 442 444 sclk->info = scmi_proto_clk_ops->info_get(ph, idx); 443 445 if (!sclk->info) { 444 446 dev_dbg(dev, "invalid clock info for idx %d\n", idx); 445 - devm_kfree(dev, sclk); 447 + hws[idx] = NULL; 446 448 continue; 447 449 } 448 450 ··· 483 479 if (err) { 484 480 dev_err(dev, "failed to register clock %d\n", idx); 485 481 devm_kfree(dev, sclk->parent_data); 486 - devm_kfree(dev, sclk); 487 482 hws[idx] = NULL; 488 483 } else { 489 484 dev_dbg(dev, "Registered clock:%s%s\n", 490 485 sclk->info->name, 491 486 scmi_ops->enable ? " (atomic ops)" : ""); 492 - hws[idx] = &sclk->hw; 493 487 } 494 488 } 495 489
+8 -4
drivers/clk/imx/clk-imx95-blk-ctl.c
··· 219 219 .clk_reg_offset = 0, 220 220 }; 221 221 222 + static const char * const disp_engine_parents[] = { 223 + "videopll1", "dsi_pll", "ldb_pll_div7" 224 + }; 225 + 222 226 static const struct imx95_blk_ctl_clk_dev_data dispmix_csr_clk_dev_data[] = { 223 227 [IMX95_CLK_DISPMIX_ENG0_SEL] = { 224 228 .name = "disp_engine0_sel", 225 - .parent_names = (const char *[]){"videopll1", "dsi_pll", "ldb_pll_div7", }, 226 - .num_parents = 4, 229 + .parent_names = disp_engine_parents, 230 + .num_parents = ARRAY_SIZE(disp_engine_parents), 227 231 .reg = 0, 228 232 .bit_idx = 0, 229 233 .bit_width = 2, ··· 236 232 }, 237 233 [IMX95_CLK_DISPMIX_ENG1_SEL] = { 238 234 .name = "disp_engine1_sel", 239 - .parent_names = (const char *[]){"videopll1", "dsi_pll", "ldb_pll_div7", }, 240 - .num_parents = 4, 235 + .parent_names = disp_engine_parents, 236 + .num_parents = ARRAY_SIZE(disp_engine_parents), 241 237 .reg = 0, 242 238 .bit_idx = 2, 243 239 .bit_width = 2,
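The previous initializers claimed num_parents = 4 for a three-entry compound literal; sharing one static array and sizing it with ARRAY_SIZE() ties the count to the data. ARRAY_SIZE(arr) is essentially sizeof(arr) / sizeof((arr)[0]) (plus a same-type check in the kernel), so with the array above it evaluates to 3 and will track any future additions:

static const char * const disp_engine_parents[] = {
        "videopll1", "dsi_pll", "ldb_pll_div7"
};
/* ARRAY_SIZE(disp_engine_parents) == 3; adding a fourth name updates every user. */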
+2 -4
drivers/firmware/efi/libstub/zboot.lds
··· 29 29 . = _etext; 30 30 } 31 31 32 - #ifdef CONFIG_EFI_SBAT 33 32 .sbat : ALIGN(4096) { 34 33 _sbat = .; 35 34 *(.sbat) 36 35 _esbat = ALIGN(4096); 37 36 . = _esbat; 38 37 } 39 - #endif 40 38 41 39 .data : ALIGN(4096) { 42 40 _data = .; ··· 58 60 PROVIDE(__efistub__gzdata_size = 59 61 ABSOLUTE(__efistub__gzdata_end - __efistub__gzdata_start)); 60 62 61 - PROVIDE(__data_rawsize = ABSOLUTE(_edata - _etext)); 62 - PROVIDE(__data_size = ABSOLUTE(_end - _etext)); 63 + PROVIDE(__data_rawsize = ABSOLUTE(_edata - _data)); 64 + PROVIDE(__data_size = ABSOLUTE(_end - _data)); 63 65 PROVIDE(__sbat_size = ABSOLUTE(_esbat - _sbat));
+1 -1
drivers/gpio/gpiolib-of.c
··· 708 708 unsigned int idx, unsigned long *flags) 709 709 { 710 710 char propname[32]; /* 32 is max size of property name */ 711 - enum of_gpio_flags of_flags; 711 + enum of_gpio_flags of_flags = 0; 712 712 const of_find_gpio_quirk *q; 713 713 struct gpio_desc *desc; 714 714
+3 -2
drivers/gpio/gpiolib.c
··· 3297 3297 static int gpio_chip_get_multiple(struct gpio_chip *gc, 3298 3298 unsigned long *mask, unsigned long *bits) 3299 3299 { 3300 - int ret; 3301 - 3302 3300 lockdep_assert_held(&gc->gpiodev->srcu); 3303 3301 3304 3302 if (gc->get_multiple) { 3303 + int ret; 3304 + 3305 3305 ret = gc->get_multiple(gc, mask, bits); 3306 3306 if (ret > 0) 3307 3307 return -EBADE; 3308 + return ret; 3308 3309 } 3309 3310 3310 3311 if (gc->get) {
+29 -2
drivers/gpu/drm/drm_framebuffer.c
··· 862 862 int drm_framebuffer_init(struct drm_device *dev, struct drm_framebuffer *fb, 863 863 const struct drm_framebuffer_funcs *funcs) 864 864 { 865 + unsigned int i; 865 866 int ret; 867 + bool exists; 866 868 867 869 if (WARN_ON_ONCE(fb->dev != dev || !fb->format)) 868 870 return -EINVAL; 871 + 872 + for (i = 0; i < fb->format->num_planes; i++) { 873 + if (drm_WARN_ON_ONCE(dev, fb->internal_flags & DRM_FRAMEBUFFER_HAS_HANDLE_REF(i))) 874 + fb->internal_flags &= ~DRM_FRAMEBUFFER_HAS_HANDLE_REF(i); 875 + if (fb->obj[i]) { 876 + exists = drm_gem_object_handle_get_if_exists_unlocked(fb->obj[i]); 877 + if (exists) 878 + fb->internal_flags |= DRM_FRAMEBUFFER_HAS_HANDLE_REF(i); 879 + } 880 + } 869 881 870 882 INIT_LIST_HEAD(&fb->filp_head); 871 883 ··· 887 875 ret = __drm_mode_object_add(dev, &fb->base, DRM_MODE_OBJECT_FB, 888 876 false, drm_framebuffer_free); 889 877 if (ret) 890 - goto out; 878 + goto err; 891 879 892 880 mutex_lock(&dev->mode_config.fb_lock); 893 881 dev->mode_config.num_fb++; ··· 895 883 mutex_unlock(&dev->mode_config.fb_lock); 896 884 897 885 drm_mode_object_register(dev, &fb->base); 898 - out: 886 + 887 + return 0; 888 + 889 + err: 890 + for (i = 0; i < fb->format->num_planes; i++) { 891 + if (fb->internal_flags & DRM_FRAMEBUFFER_HAS_HANDLE_REF(i)) { 892 + drm_gem_object_handle_put_unlocked(fb->obj[i]); 893 + fb->internal_flags &= ~DRM_FRAMEBUFFER_HAS_HANDLE_REF(i); 894 + } 895 + } 899 896 return ret; 900 897 } 901 898 EXPORT_SYMBOL(drm_framebuffer_init); ··· 981 960 void drm_framebuffer_cleanup(struct drm_framebuffer *fb) 982 961 { 983 962 struct drm_device *dev = fb->dev; 963 + unsigned int i; 964 + 965 + for (i = 0; i < fb->format->num_planes; i++) { 966 + if (fb->internal_flags & DRM_FRAMEBUFFER_HAS_HANDLE_REF(i)) 967 + drm_gem_object_handle_put_unlocked(fb->obj[i]); 968 + } 984 969 985 970 mutex_lock(&dev->mode_config.fb_lock); 986 971 list_del(&fb->head);
+33 -15
drivers/gpu/drm/drm_gem.c
··· 223 223 } 224 224 225 225 /** 226 - * drm_gem_object_handle_get_unlocked - acquire reference on user-space handles 226 + * drm_gem_object_handle_get_if_exists_unlocked - acquire reference on user-space handle, if any 227 227 * @obj: GEM object 228 228 * 229 - * Acquires a reference on the GEM buffer object's handle. Required 230 - * to keep the GEM object alive. Call drm_gem_object_handle_put_unlocked() 231 - * to release the reference. 229 + * Acquires a reference on the GEM buffer object's handle. Required to keep 230 + * the GEM object alive. Call drm_gem_object_handle_put_if_exists_unlocked() 231 + * to release the reference. Does nothing if the buffer object has no handle. 232 + * 233 + * Returns: 234 + * True if a handle exists, or false otherwise 232 235 */ 233 - void drm_gem_object_handle_get_unlocked(struct drm_gem_object *obj) 236 + bool drm_gem_object_handle_get_if_exists_unlocked(struct drm_gem_object *obj) 234 237 { 235 238 struct drm_device *dev = obj->dev; 236 239 237 240 guard(mutex)(&dev->object_name_lock); 238 241 239 - drm_WARN_ON(dev, !obj->handle_count); /* first ref taken in create-tail helper */ 242 + /* 243 + * First ref taken during GEM object creation, if any. Some 244 + * drivers set up internal framebuffers with GEM objects that 245 + * do not have a GEM handle. Hence, this counter can be zero. 246 + */ 247 + if (!obj->handle_count) 248 + return false; 249 + 240 250 drm_gem_object_handle_get(obj); 251 + 252 + return true; 241 253 } 242 - EXPORT_SYMBOL(drm_gem_object_handle_get_unlocked); 243 254 244 255 /** 245 256 * drm_gem_object_handle_free - release resources bound to userspace handles ··· 283 272 } 284 273 285 274 /** 286 - * drm_gem_object_handle_put_unlocked - releases reference on user-space handles 275 + * drm_gem_object_handle_put_unlocked - releases reference on user-space handle 287 276 * @obj: GEM object 288 277 * 289 278 * Releases a reference on the GEM buffer object's handle. Possibly releases ··· 294 283 struct drm_device *dev = obj->dev; 295 284 bool final = false; 296 285 297 - if (WARN_ON(READ_ONCE(obj->handle_count) == 0)) 286 + if (drm_WARN_ON(dev, READ_ONCE(obj->handle_count) == 0)) 298 287 return; 299 288 300 289 /* 301 - * Must bump handle count first as this may be the last 302 - * ref, in which case the object would disappear before we 303 - * checked for a name 304 - */ 290 + * Must bump handle count first as this may be the last 291 + * ref, in which case the object would disappear before 292 + * we checked for a name. 
293 + */ 305 294 306 295 mutex_lock(&dev->object_name_lock); 307 296 if (--obj->handle_count == 0) { ··· 314 303 if (final) 315 304 drm_gem_object_put(obj); 316 305 } 317 - EXPORT_SYMBOL(drm_gem_object_handle_put_unlocked); 318 306 319 307 /* 320 308 * Called at device or object close to release the file's ··· 324 314 { 325 315 struct drm_file *file_priv = data; 326 316 struct drm_gem_object *obj = ptr; 317 + 318 + if (drm_WARN_ON(obj->dev, !data)) 319 + return 0; 327 320 328 321 if (obj->funcs->close) 329 322 obj->funcs->close(obj, file_priv); ··· 448 435 idr_preload(GFP_KERNEL); 449 436 spin_lock(&file_priv->table_lock); 450 437 451 - ret = idr_alloc(&file_priv->object_idr, obj, 1, 0, GFP_NOWAIT); 438 + ret = idr_alloc(&file_priv->object_idr, NULL, 1, 0, GFP_NOWAIT); 452 439 453 440 spin_unlock(&file_priv->table_lock); 454 441 idr_preload_end(); ··· 469 456 goto err_revoke; 470 457 } 471 458 459 + /* mirrors drm_gem_handle_delete to avoid races */ 460 + spin_lock(&file_priv->table_lock); 461 + obj = idr_replace(&file_priv->object_idr, obj, handle); 462 + WARN_ON(obj != NULL); 463 + spin_unlock(&file_priv->table_lock); 472 464 *handlep = handle; 473 465 return 0; 474 466
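Allocating the handle slot with a NULL pointer and publishing the object with idr_replace() only after setup completes mirrors drm_gem_handle_delete() and keeps concurrent lookups from observing a half-initialised entry. The general reserve-then-publish idiom, as a sketch (my_idr and obj are placeholders):

int id = idr_alloc(&my_idr, NULL, 1, 0, GFP_KERNEL);  /* reserve: lookups see NULL */
if (id < 0)
        return id;
/* ... finish initialising obj ... */
WARN_ON(idr_replace(&my_idr, obj, id) != NULL);       /* publish: now visible */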
+7 -9
drivers/gpu/drm/drm_gem_framebuffer_helper.c
··· 99 99 unsigned int i; 100 100 101 101 for (i = 0; i < fb->format->num_planes; i++) 102 - drm_gem_object_handle_put_unlocked(fb->obj[i]); 102 + drm_gem_object_put(fb->obj[i]); 103 103 104 104 drm_framebuffer_cleanup(fb); 105 105 kfree(fb); ··· 182 182 if (!objs[i]) { 183 183 drm_dbg_kms(dev, "Failed to lookup GEM object\n"); 184 184 ret = -ENOENT; 185 - goto err_gem_object_handle_put_unlocked; 185 + goto err_gem_object_put; 186 186 } 187 - drm_gem_object_handle_get_unlocked(objs[i]); 188 - drm_gem_object_put(objs[i]); 189 187 190 188 min_size = (height - 1) * mode_cmd->pitches[i] 191 189 + drm_format_info_min_pitch(info, i, width) ··· 193 195 drm_dbg_kms(dev, 194 196 "GEM object size (%zu) smaller than minimum size (%u) for plane %d\n", 195 197 objs[i]->size, min_size, i); 196 - drm_gem_object_handle_put_unlocked(objs[i]); 198 + drm_gem_object_put(objs[i]); 197 199 ret = -EINVAL; 198 - goto err_gem_object_handle_put_unlocked; 200 + goto err_gem_object_put; 199 201 } 200 202 } 201 203 202 204 ret = drm_gem_fb_init(dev, fb, mode_cmd, objs, i, funcs); 203 205 if (ret) 204 - goto err_gem_object_handle_put_unlocked; 206 + goto err_gem_object_put; 205 207 206 208 return 0; 207 209 208 - err_gem_object_handle_put_unlocked: 210 + err_gem_object_put: 209 211 while (i > 0) { 210 212 --i; 211 - drm_gem_object_handle_put_unlocked(objs[i]); 213 + drm_gem_object_put(objs[i]); 212 214 } 213 215 return ret; 214 216 }
+1 -1
drivers/gpu/drm/drm_internal.h
··· 161 161 162 162 /* drm_gem.c */ 163 163 int drm_gem_init(struct drm_device *dev); 164 - void drm_gem_object_handle_get_unlocked(struct drm_gem_object *obj); 164 + bool drm_gem_object_handle_get_if_exists_unlocked(struct drm_gem_object *obj); 165 165 void drm_gem_object_handle_put_unlocked(struct drm_gem_object *obj); 166 166 int drm_gem_handle_create_tail(struct drm_file *file_priv, 167 167 struct drm_gem_object *obj,
+1 -1
drivers/gpu/drm/drm_panic_qr.rs
··· 27 27 //! * <https://github.com/erwanvivien/fast_qr> 28 28 //! * <https://github.com/bjguillot/qr> 29 29 30 - use kernel::{prelude::*, str::CStr}; 30 + use kernel::prelude::*; 31 31 32 32 #[derive(Debug, Clone, Copy, PartialEq, Eq, Ord, PartialOrd)] 33 33 struct Version(usize);
+4 -4
drivers/gpu/drm/i915/display/intel_bios.c
··· 1938 1938 int index, len; 1939 1939 1940 1940 if (drm_WARN_ON(display->drm, 1941 - !data || panel->vbt.dsi.seq_version != 1)) 1941 + !data || panel->vbt.dsi.seq_version >= 3)) 1942 1942 return 0; 1943 1943 1944 1944 /* index = 1 to skip sequence byte */ ··· 1961 1961 } 1962 1962 1963 1963 /* 1964 - * Some v1 VBT MIPI sequences do the deassert in the init OTP sequence. 1964 + * Some v1/v2 VBT MIPI sequences do the deassert in the init OTP sequence. 1965 1965 * The deassert must be done before calling intel_dsi_device_ready, so for 1966 1966 * these devices we split the init OTP sequence into a deassert sequence and 1967 1967 * the actual init OTP part. ··· 1972 1972 u8 *init_otp; 1973 1973 int len; 1974 1974 1975 - /* Limit this to v1 vid-mode sequences */ 1975 + /* Limit this to v1/v2 vid-mode sequences */ 1976 1976 if (panel->vbt.dsi.config->is_cmd_mode || 1977 - panel->vbt.dsi.seq_version != 1) 1977 + panel->vbt.dsi.seq_version >= 3) 1978 1978 return; 1979 1979 1980 1980 /* Only do this if there are otp and assert seqs and no deassert seq */
+2 -2
drivers/gpu/drm/imagination/pvr_power.c
··· 386 386 if (!err) { 387 387 if (hard_reset) { 388 388 pvr_dev->fw_dev.booted = false; 389 - WARN_ON(pm_runtime_force_suspend(from_pvr_device(pvr_dev)->dev)); 389 + WARN_ON(pvr_power_device_suspend(from_pvr_device(pvr_dev)->dev)); 390 390 391 391 err = pvr_fw_hard_reset(pvr_dev); 392 392 if (err) 393 393 goto err_device_lost; 394 394 395 - err = pm_runtime_force_resume(from_pvr_device(pvr_dev)->dev); 395 + err = pvr_power_device_resume(from_pvr_device(pvr_dev)->dev); 396 396 pvr_dev->fw_dev.booted = true; 397 397 if (err) 398 398 goto err_device_lost;
+1 -5
drivers/gpu/drm/nouveau/nouveau_debugfs.c
··· 314 314 drm->debugfs = NULL; 315 315 } 316 316 317 - int 317 + void 318 318 nouveau_module_debugfs_init(void) 319 319 { 320 320 nouveau_debugfs_root = debugfs_create_dir("nouveau", NULL); 321 - if (IS_ERR(nouveau_debugfs_root)) 322 - return PTR_ERR(nouveau_debugfs_root); 323 - 324 - return 0; 325 321 } 326 322 327 323 void
+2 -3
drivers/gpu/drm/nouveau/nouveau_debugfs.h
··· 24 24 25 25 extern struct dentry *nouveau_debugfs_root; 26 26 27 - int nouveau_module_debugfs_init(void); 27 + void nouveau_module_debugfs_init(void); 28 28 void nouveau_module_debugfs_fini(void); 29 29 #else 30 30 static inline void ··· 42 42 { 43 43 } 44 44 45 - static inline int 45 + static inline void 46 46 nouveau_module_debugfs_init(void) 47 47 { 48 - return 0; 49 48 } 50 49 51 50 static inline void
+1 -3
drivers/gpu/drm/nouveau/nouveau_drm.c
··· 1461 1461 if (!nouveau_modeset) 1462 1462 return 0; 1463 1463 1464 - ret = nouveau_module_debugfs_init(); 1465 - if (ret) 1466 - return ret; 1464 + nouveau_module_debugfs_init(); 1467 1465 1468 1466 #ifdef CONFIG_NOUVEAU_PLATFORM_DRIVER 1469 1467 platform_driver_register(&nouveau_platform_driver);
+21 -6
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/gsp.c
··· 719 719 union acpi_object argv4 = { 720 720 .buffer.type = ACPI_TYPE_BUFFER, 721 721 .buffer.length = 4, 722 - .buffer.pointer = kmalloc(argv4.buffer.length, GFP_KERNEL), 723 722 }, *obj; 724 723 725 724 caps->status = 0xffff; ··· 726 727 if (!acpi_check_dsm(handle, &NVOP_DSM_GUID, NVOP_DSM_REV, BIT_ULL(0x1a))) 727 728 return; 728 729 730 + argv4.buffer.pointer = kmalloc(argv4.buffer.length, GFP_KERNEL); 731 + if (!argv4.buffer.pointer) 732 + return; 733 + 729 734 obj = acpi_evaluate_dsm(handle, &NVOP_DSM_GUID, NVOP_DSM_REV, 0x1a, &argv4); 730 735 if (!obj) 731 - return; 736 + goto done; 732 737 733 738 if (WARN_ON(obj->type != ACPI_TYPE_BUFFER) || 734 739 WARN_ON(obj->buffer.length != 4)) 735 - return; 740 + goto done; 736 741 737 742 caps->status = 0; 738 743 caps->optimusCaps = *(u32 *)obj->buffer.pointer; 739 744 745 + done: 740 746 ACPI_FREE(obj); 741 747 742 748 kfree(argv4.buffer.pointer); ··· 758 754 union acpi_object argv4 = { 759 755 .buffer.type = ACPI_TYPE_BUFFER, 760 756 .buffer.length = sizeof(caps), 761 - .buffer.pointer = kmalloc(argv4.buffer.length, GFP_KERNEL), 762 757 }, *obj; 763 758 764 759 jt->status = 0xffff; 765 760 761 + argv4.buffer.pointer = kmalloc(argv4.buffer.length, GFP_KERNEL); 762 + if (!argv4.buffer.pointer) 763 + return; 764 + 766 765 obj = acpi_evaluate_dsm(handle, &JT_DSM_GUID, JT_DSM_REV, 0x1, &argv4); 767 766 if (!obj) 768 - return; 767 + goto done; 769 768 770 769 if (WARN_ON(obj->type != ACPI_TYPE_BUFFER) || 771 770 WARN_ON(obj->buffer.length != 4)) 772 - return; 771 + goto done; 773 772 774 773 jt->status = 0; 775 774 jt->jtCaps = *(u32 *)obj->buffer.pointer; 776 775 jt->jtRevId = (jt->jtCaps & 0xfff00000) >> 20; 777 776 jt->bSBIOSCaps = 0; 778 777 778 + done: 779 779 ACPI_FREE(obj); 780 780 781 781 kfree(argv4.buffer.pointer); ··· 1752 1744 nvkm_gsp_sg_free(gsp->subdev.device, &gsp->sr.sgt); 1753 1745 return ret; 1754 1746 } 1747 + 1748 + /* 1749 + * TODO: Debug the GSP firmware / RPC handling to find out why 1750 + * without this Turing (but none of the other architectures) 1751 + * ends up resetting all channels after resume. 1752 + */ 1753 + msleep(50); 1755 1754 } 1756 1755 1757 1756 ret = r535_gsp_rpc_unloading_guest_driver(gsp, suspend);
+2 -4
drivers/gpu/drm/tegra/nvdec.c
··· 261 261 262 262 if (!client->group) { 263 263 virt = dma_alloc_coherent(nvdec->dev, size, &iova, GFP_KERNEL); 264 - 265 - err = dma_mapping_error(nvdec->dev, iova); 266 - if (err < 0) 267 - return err; 264 + if (!virt) 265 + return -ENOMEM; 268 266 } else { 269 267 virt = tegra_drm_alloc(tegra, size, &iova); 270 268 if (IS_ERR(virt))
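The removed check applied dma_mapping_error() to the output of dma_alloc_coherent(), which reports failure by returning a NULL virtual address instead; dma_mapping_error() is only meaningful for handles produced by the streaming API. The two failure conventions, roughly (fragment; dev, size, buf and len are placeholders):

/* coherent API: failure == NULL virtual address */
virt = dma_alloc_coherent(dev, size, &iova, GFP_KERNEL);
if (!virt)
        return -ENOMEM;

/* streaming API: failure checked via dma_mapping_error() */
dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
if (dma_mapping_error(dev, addr))
        return -ENOMEM;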
+28 -10
drivers/gpu/drm/xe/xe_devcoredump.c
··· 171 171 172 172 #define XE_DEVCOREDUMP_CHUNK_MAX (SZ_512M + SZ_1G) 173 173 174 + /** 175 + * xe_devcoredump_read() - Read data from the Xe device coredump snapshot 176 + * @buffer: Destination buffer to copy the coredump data into 177 + * @offset: Offset in the coredump data to start reading from 178 + * @count: Number of bytes to read 179 + * @data: Pointer to the xe_devcoredump structure 180 + * @datalen: Length of the data (unused) 181 + * 182 + * Reads a chunk of the coredump snapshot data into the provided buffer. 183 + * If the devcoredump is smaller than 1.5 GB (XE_DEVCOREDUMP_CHUNK_MAX), 184 + * it is read directly from a pre-written buffer. For larger devcoredumps, 185 + * the pre-written buffer must be periodically repopulated from the snapshot 186 + * state due to kmalloc size limitations. 187 + * 188 + * Return: Number of bytes copied on success, or a negative error code on failure. 189 + */ 174 190 static ssize_t xe_devcoredump_read(char *buffer, loff_t offset, 175 191 size_t count, void *data, size_t datalen) 176 192 { 177 193 struct xe_devcoredump *coredump = data; 178 194 struct xe_devcoredump_snapshot *ss; 179 - ssize_t byte_copied; 195 + ssize_t byte_copied = 0; 180 196 u32 chunk_offset; 181 197 ssize_t new_chunk_position; 198 + bool pm_needed = false; 199 + int ret = 0; 182 200 183 201 if (!coredump) 184 202 return -ENODEV; ··· 206 188 /* Ensure delayed work is captured before continuing */ 207 189 flush_work(&ss->work); 208 190 209 - if (ss->read.size > XE_DEVCOREDUMP_CHUNK_MAX) 191 + pm_needed = ss->read.size > XE_DEVCOREDUMP_CHUNK_MAX; 192 + if (pm_needed) 210 193 xe_pm_runtime_get(gt_to_xe(ss->gt)); 211 194 212 195 mutex_lock(&coredump->lock); 213 196 214 197 if (!ss->read.buffer) { 215 - mutex_unlock(&coredump->lock); 216 - return -ENODEV; 198 + ret = -ENODEV; 199 + goto unlock; 217 200 } 218 201 219 - if (offset >= ss->read.size) { 220 - mutex_unlock(&coredump->lock); 221 - return 0; 222 - } 202 + if (offset >= ss->read.size) 203 + goto unlock; 223 204 224 205 new_chunk_position = div_u64_rem(offset, 225 206 XE_DEVCOREDUMP_CHUNK_MAX, ··· 238 221 ss->read.size - offset; 239 222 memcpy(buffer, ss->read.buffer + chunk_offset, byte_copied); 240 223 224 + unlock: 241 225 mutex_unlock(&coredump->lock); 242 226 243 - if (ss->read.size > XE_DEVCOREDUMP_CHUNK_MAX) 227 + if (pm_needed) 244 228 xe_pm_runtime_put(gt_to_xe(ss->gt)); 245 229 246 - return byte_copied; 230 + return byte_copied ? byte_copied : ret; 247 231 } 248 232 249 233 static void xe_devcoredump_free(void *data)
+1
drivers/gpu/drm/xe/xe_gt_pagefault.c
··· 444 444 #define PF_MULTIPLIER 8 445 445 pf_queue->num_dw = 446 446 (num_eus + XE_NUM_HW_ENGINES) * PF_MSG_LEN_DW * PF_MULTIPLIER; 447 + pf_queue->num_dw = roundup_pow_of_two(pf_queue->num_dw); 447 448 #undef PF_MULTIPLIER 448 449 449 450 pf_queue->gt = gt;
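roundup_pow_of_two(n) returns the smallest power of two greater than or equal to n, so the EU/engine product no longer has to land on a power of two by accident; presumably the queue indexing assumes a power-of-two size (the usual "& (num_dw - 1)" wrap), though that rationale is an inference, not stated in the hunk. A worked example with an illustrative input:

num_dw = roundup_pow_of_two(1472);  /* -> 2048; an exact power of two is returned unchanged */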
+11
drivers/gpu/drm/xe/xe_lmtt.c
··· 78 78 } 79 79 80 80 lmtt_assert(lmtt, xe_bo_is_vram(bo)); 81 + lmtt_debug(lmtt, "level=%u addr=%#llx\n", level, (u64)xe_bo_main_addr(bo, XE_PAGE_SIZE)); 82 + 83 + xe_map_memset(lmtt_to_xe(lmtt), &bo->vmap, 0, 0, bo->size); 81 84 82 85 pt->level = level; 83 86 pt->bo = bo; ··· 94 91 95 92 static void lmtt_pt_free(struct xe_lmtt_pt *pt) 96 93 { 94 + lmtt_debug(&pt->bo->tile->sriov.pf.lmtt, "level=%u addr=%llx\n", 95 + pt->level, (u64)xe_bo_main_addr(pt->bo, XE_PAGE_SIZE)); 96 + 97 97 xe_bo_unpin_map_no_vm(pt->bo); 98 98 kfree(pt); 99 99 } ··· 232 226 233 227 switch (lmtt->ops->lmtt_pte_size(level)) { 234 228 case sizeof(u32): 229 + lmtt_assert(lmtt, !overflows_type(pte, u32)); 230 + lmtt_assert(lmtt, !pte || !iosys_map_rd(&pt->bo->vmap, idx * sizeof(u32), u32)); 231 + 235 232 xe_map_wr(lmtt_to_xe(lmtt), &pt->bo->vmap, idx * sizeof(u32), u32, pte); 236 233 break; 237 234 case sizeof(u64): 235 + lmtt_assert(lmtt, !pte || !iosys_map_rd(&pt->bo->vmap, idx * sizeof(u64), u64)); 236 + 238 237 xe_map_wr(lmtt_to_xe(lmtt), &pt->bo->vmap, idx * sizeof(u64), u64, pte); 239 238 break; 240 239 default:
+1 -1
drivers/gpu/drm/xe/xe_migrate.c
··· 863 863 if (src_is_vram && xe_migrate_allow_identity(src_L0, &src_it)) 864 864 xe_res_next(&src_it, src_L0); 865 865 else 866 - emit_pte(m, bb, src_L0_pt, src_is_vram, copy_system_ccs, 866 + emit_pte(m, bb, src_L0_pt, src_is_vram, copy_system_ccs || use_comp_pat, 867 867 &src_it, src_L0, src); 868 868 869 869 if (dst_is_vram && xe_migrate_allow_identity(src_L0, &dst_it))
+1 -1
drivers/gpu/drm/xe/xe_module.c
··· 20 20 21 21 struct xe_modparam xe_modparam = { 22 22 .probe_display = true, 23 - .guc_log_level = 3, 23 + .guc_log_level = IS_ENABLED(CONFIG_DRM_XE_DEBUG) ? 3 : 1, 24 24 .force_probe = CONFIG_DRM_XE_FORCE_PROBE, 25 25 .wedged_mode = 1, 26 26 .svm_notifier_size = 512,
-1
drivers/gpu/drm/xe/xe_pci.c
··· 140 140 .has_asid = 1, \ 141 141 .has_atomic_enable_pte_bit = 1, \ 142 142 .has_flat_ccs = 1, \ 143 - .has_indirect_ring_state = 1, \ 144 143 .has_range_tlb_invalidation = 1, \ 145 144 .has_usm = 1, \ 146 145 .has_64bit_timestamp = 1, \
+6 -5
drivers/gpu/drm/xe/xe_pm.c
··· 134 134 /* FIXME: Super racey... */ 135 135 err = xe_bo_evict_all(xe); 136 136 if (err) 137 - goto err_pxp; 137 + goto err_display; 138 138 139 139 for_each_gt(gt, xe, id) { 140 140 err = xe_gt_suspend(gt); ··· 151 151 152 152 err_display: 153 153 xe_display_pm_resume(xe); 154 - err_pxp: 155 154 xe_pxp_pm_resume(xe->pxp); 156 155 err: 157 156 drm_dbg(&xe->drm, "Device suspend failed %d\n", err); ··· 752 753 } 753 754 754 755 /** 755 - * xe_pm_set_vram_threshold - Set a vram threshold for allowing/blocking D3Cold 756 + * xe_pm_set_vram_threshold - Set a VRAM threshold for allowing/blocking D3Cold 756 757 * @xe: xe device instance 757 - * @threshold: VRAM size in bites for the D3cold threshold 758 + * @threshold: VRAM size in MiB for the D3cold threshold 758 759 * 759 - * Returns 0 for success, negative error code otherwise. 760 + * Return: 761 + * * 0 - success 762 + * * -EINVAL - invalid argument 760 763 */ 761 764 int xe_pm_set_vram_threshold(struct xe_device *xe, u32 threshold) 762 765 {
+3 -3
drivers/gpu/drm/xe/xe_uc_fw.c
··· 114 114 #define XE_GT_TYPE_ANY XE_GT_TYPE_UNINITIALIZED 115 115 116 116 #define XE_GUC_FIRMWARE_DEFS(fw_def, mmp_ver, major_ver) \ 117 - fw_def(BATTLEMAGE, GT_TYPE_ANY, major_ver(xe, guc, bmg, 70, 44, 1)) \ 118 - fw_def(LUNARLAKE, GT_TYPE_ANY, major_ver(xe, guc, lnl, 70, 44, 1)) \ 117 + fw_def(BATTLEMAGE, GT_TYPE_ANY, major_ver(xe, guc, bmg, 70, 45, 2)) \ 118 + fw_def(LUNARLAKE, GT_TYPE_ANY, major_ver(xe, guc, lnl, 70, 45, 2)) \ 119 119 fw_def(METEORLAKE, GT_TYPE_ANY, major_ver(i915, guc, mtl, 70, 44, 1)) \ 120 - fw_def(DG2, GT_TYPE_ANY, major_ver(i915, guc, dg2, 70, 44, 1)) \ 120 + fw_def(DG2, GT_TYPE_ANY, major_ver(i915, guc, dg2, 70, 45, 2)) \ 121 121 fw_def(DG1, GT_TYPE_ANY, major_ver(i915, guc, dg1, 70, 44, 1)) \ 122 122 fw_def(ALDERLAKE_N, GT_TYPE_ANY, major_ver(i915, guc, tgl, 70, 44, 1)) \ 123 123 fw_def(ALDERLAKE_P, GT_TYPE_ANY, major_ver(i915, guc, adlp, 70, 44, 1)) \
+2 -2
drivers/gpu/drm/xe/xe_wa_oob.rules
··· 38 38 GRAPHICS_VERSION(2004) 39 39 GRAPHICS_VERSION_RANGE(3000, 3001) 40 40 22019338487 MEDIA_VERSION(2000) 41 - GRAPHICS_VERSION(2001) 41 + GRAPHICS_VERSION(2001), FUNC(xe_rtp_match_not_sriov_vf) 42 42 MEDIA_VERSION(3000), MEDIA_STEP(A0, B0), FUNC(xe_rtp_match_not_sriov_vf) 43 43 22019338487_display PLATFORM(LUNARLAKE) 44 - 16023588340 GRAPHICS_VERSION(2001) 44 + 16023588340 GRAPHICS_VERSION(2001), FUNC(xe_rtp_match_not_sriov_vf) 45 45 14019789679 GRAPHICS_VERSION(1255) 46 46 GRAPHICS_VERSION_RANGE(1270, 2004) 47 47 no_media_l3 MEDIA_VERSION(3000)
+1 -2
drivers/md/md-bitmap.c
··· 2366 2366 2367 2367 if (!bitmap) 2368 2368 return -ENOENT; 2369 - if (!bitmap->mddev->bitmap_info.external && 2370 - !bitmap->storage.sb_page) 2369 + if (!bitmap->storage.sb_page) 2371 2370 return -EINVAL; 2372 2371 sb = kmap_local_page(bitmap->storage.sb_page); 2373 2372 stats->sync_size = le64_to_cpu(sb->sync_size);
+3 -1
drivers/md/raid1.c
··· 1399 1399 } 1400 1400 read_bio = bio_alloc_clone(mirror->rdev->bdev, bio, gfp, 1401 1401 &mddev->bio_set); 1402 - 1402 + read_bio->bi_opf &= ~REQ_NOWAIT; 1403 1403 r1_bio->bios[rdisk] = read_bio; 1404 1404 1405 1405 read_bio->bi_iter.bi_sector = r1_bio->sector + ··· 1649 1649 wait_for_serialization(rdev, r1_bio); 1650 1650 } 1651 1651 1652 + mbio->bi_opf &= ~REQ_NOWAIT; 1652 1653 r1_bio->bios[i] = mbio; 1653 1654 1654 1655 mbio->bi_iter.bi_sector = (r1_bio->sector + rdev->data_offset); ··· 3429 3428 /* ok, everything is stopped */ 3430 3429 oldpool = conf->r1bio_pool; 3431 3430 conf->r1bio_pool = newpool; 3431 + init_waitqueue_head(&conf->r1bio_pool.wait); 3432 3432 3433 3433 for (d = d2 = 0; d < conf->raid_disks; d++) { 3434 3434 struct md_rdev *rdev = conf->mirrors[d].rdev;
+10 -2
drivers/md/raid10.c
··· 1182 1182 } 1183 1183 } 1184 1184 1185 - if (!regular_request_wait(mddev, conf, bio, r10_bio->sectors)) 1185 + if (!regular_request_wait(mddev, conf, bio, r10_bio->sectors)) { 1186 + raid_end_bio_io(r10_bio); 1186 1187 return; 1188 + } 1189 + 1187 1190 rdev = read_balance(conf, r10_bio, &max_sectors); 1188 1191 if (!rdev) { 1189 1192 if (err_rdev) { ··· 1224 1221 r10_bio->master_bio = bio; 1225 1222 } 1226 1223 read_bio = bio_alloc_clone(rdev->bdev, bio, gfp, &mddev->bio_set); 1224 + read_bio->bi_opf &= ~REQ_NOWAIT; 1227 1225 1228 1226 r10_bio->devs[slot].bio = read_bio; 1229 1227 r10_bio->devs[slot].rdev = rdev; ··· 1260 1256 conf->mirrors[devnum].rdev; 1261 1257 1262 1258 mbio = bio_alloc_clone(rdev->bdev, bio, GFP_NOIO, &mddev->bio_set); 1259 + mbio->bi_opf &= ~REQ_NOWAIT; 1263 1260 if (replacement) 1264 1261 r10_bio->devs[n_copy].repl_bio = mbio; 1265 1262 else ··· 1375 1370 } 1376 1371 1377 1372 sectors = r10_bio->sectors; 1378 - if (!regular_request_wait(mddev, conf, bio, sectors)) 1373 + if (!regular_request_wait(mddev, conf, bio, sectors)) { 1374 + raid_end_bio_io(r10_bio); 1379 1375 return; 1376 + } 1377 + 1380 1378 if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) && 1381 1379 (mddev->reshape_backwards 1382 1380 ? (bio->bi_iter.bi_sector < conf->reshape_safe &&
+1 -1
drivers/net/can/m_can/m_can.c
··· 665 665 struct can_frame *frame; 666 666 u32 timestamp = 0; 667 667 668 - netdev_err(dev, "msg lost in rxf0\n"); 668 + netdev_dbg(dev, "msg lost in rxf0\n"); 669 669 670 670 stats->rx_errors++; 671 671 stats->rx_over_errors++;
+1
drivers/net/ethernet/airoha/airoha_eth.c
··· 2984 2984 error_napi_stop: 2985 2985 for (i = 0; i < ARRAY_SIZE(eth->qdma); i++) 2986 2986 airoha_qdma_stop_napi(&eth->qdma[i]); 2987 + airoha_ppe_deinit(eth); 2987 2988 error_hw_cleanup: 2988 2989 for (i = 0; i < ARRAY_SIZE(eth->qdma); i++) 2989 2990 airoha_hw_cleanup(&eth->qdma[i]);
+4 -6
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 11607 11607 11608 11608 static int bnxt_request_irq(struct bnxt *bp) 11609 11609 { 11610 + struct cpu_rmap *rmap = NULL; 11610 11611 int i, j, rc = 0; 11611 11612 unsigned long flags = 0; 11612 - #ifdef CONFIG_RFS_ACCEL 11613 - struct cpu_rmap *rmap; 11614 - #endif 11615 11613 11616 11614 rc = bnxt_setup_int_mode(bp); 11617 11615 if (rc) { ··· 11630 11632 int map_idx = bnxt_cp_num_to_irq_num(bp, i); 11631 11633 struct bnxt_irq *irq = &bp->irq_tbl[map_idx]; 11632 11634 11633 - #ifdef CONFIG_RFS_ACCEL 11634 - if (rmap && bp->bnapi[i]->rx_ring) { 11635 + if (IS_ENABLED(CONFIG_RFS_ACCEL) && 11636 + rmap && bp->bnapi[i]->rx_ring) { 11635 11637 rc = irq_cpu_rmap_add(rmap, irq->vector); 11636 11638 if (rc) 11637 11639 netdev_warn(bp->dev, "failed adding irq rmap for ring %d\n", 11638 11640 j); 11639 11641 j++; 11640 11642 } 11641 - #endif 11643 + 11642 11644 rc = request_irq(irq->vector, irq->handler, flags, irq->name, 11643 11645 bp->bnapi[i]); 11644 11646 if (rc)
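Replacing the #ifdef CONFIG_RFS_ACCEL blocks with IS_ENABLED() keeps the rmap code visible to the compiler in every configuration, so it is type-checked and then discarded as dead code when the option is off. The two spellings side by side, as a sketch:

#ifdef CONFIG_RFS_ACCEL
        rc = irq_cpu_rmap_add(rmap, irq->vector);       /* old: not even parsed when the option is off */
#endif

        if (IS_ENABLED(CONFIG_RFS_ACCEL) && rmap)       /* new: always compiled, folded away when off */
                rc = irq_cpu_rmap_add(rmap, irq->vector);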
+11 -7
drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.c
··· 368 368 if (!ctxm->mem_valid || !seg_id) 369 369 continue; 370 370 371 - if (trace) 371 + if (trace) { 372 372 extra_hlen = BNXT_SEG_RCD_LEN; 373 + if (buf) { 374 + u16 trace_type = bnxt_bstore_to_trace[type]; 375 + 376 + bnxt_fill_drv_seg_record(bp, &record, ctxm, 377 + trace_type); 378 + } 379 + } 380 + 373 381 if (buf) 374 382 data = buf + BNXT_SEG_HDR_LEN + extra_hlen; 383 + 375 384 seg_len = bnxt_copy_ctx_mem(bp, ctxm, data, 0) + extra_hlen; 376 385 if (buf) { 377 386 bnxt_fill_coredump_seg_hdr(bp, &seg_hdr, NULL, seg_len, 378 387 0, 0, 0, comp_id, seg_id); 379 388 memcpy(buf, &seg_hdr, BNXT_SEG_HDR_LEN); 380 389 buf += BNXT_SEG_HDR_LEN; 381 - if (trace) { 382 - u16 trace_type = bnxt_bstore_to_trace[type]; 383 - 384 - bnxt_fill_drv_seg_record(bp, &record, ctxm, 385 - trace_type); 390 + if (trace) 386 391 memcpy(buf, &record, BNXT_SEG_RCD_LEN); 387 - } 388 392 buf += seg_len; 389 393 } 390 394 len += BNXT_SEG_HDR_LEN + seg_len;
+2
drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
··· 487 487 488 488 if ((ets->tc_tx_bw[i] || ets->tc_tsa[i]) && i > bp->max_tc) 489 489 return -EINVAL; 490 + } 490 491 492 + for (i = 0; i < max_tc; i++) { 491 493 switch (ets->tc_tsa[i]) { 492 494 case IEEE_8021QAZ_TSA_STRICT: 493 495 break;
+1 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
··· 115 115 tx_buf->action = XDP_REDIRECT; 116 116 tx_buf->xdpf = xdpf; 117 117 dma_unmap_addr_set(tx_buf, mapping, mapping); 118 - dma_unmap_len_set(tx_buf, len, 0); 118 + dma_unmap_len_set(tx_buf, len, len); 119 119 } 120 120 121 121 void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
+6
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 4092 4092 for (i = 0; i <= priv->hw_params->rx_queues; i++) 4093 4093 priv->rx_rings[i].rx_max_coalesced_frames = 1; 4094 4094 4095 + /* Initialize u64 stats seq counter for 32bit machines */ 4096 + for (i = 0; i <= priv->hw_params->rx_queues; i++) 4097 + u64_stats_init(&priv->rx_rings[i].stats64.syncp); 4098 + for (i = 0; i <= priv->hw_params->tx_queues; i++) 4099 + u64_stats_init(&priv->tx_rings[i].stats64.syncp); 4100 + 4095 4101 /* libphy will determine the link state */ 4096 4102 netif_carrier_off(dev); 4097 4103
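u64_stats_sync compiles away on 64-bit builds but is a real seqcount on 32-bit ones, where it must be initialised before the first update; that is all this hunk adds. For context, the usual writer/reader pairing around such a counter looks roughly like this (field names are illustrative, not the exact bcmgenet layout):

u64_stats_init(&ring->stats64.syncp);                   /* once, at ring setup */

u64_stats_update_begin(&ring->stats64.syncp);           /* writer, e.g. NAPI poll */
ring->stats64.packets++;
u64_stats_update_end(&ring->stats64.syncp);

do {                                                    /* reader, e.g. ndo_get_stats64 */
        start = u64_stats_fetch_begin(&ring->stats64.syncp);
        packets = ring->stats64.packets;
} while (u64_stats_fetch_retry(&ring->stats64.syncp, start));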
+3 -9
drivers/net/ethernet/cavium/thunder/nicvf_main.c
··· 1578 1578 static int nicvf_change_mtu(struct net_device *netdev, int new_mtu) 1579 1579 { 1580 1580 struct nicvf *nic = netdev_priv(netdev); 1581 - int orig_mtu = netdev->mtu; 1582 1581 1583 1582 /* For now just support only the usual MTU sized frames, 1584 1583 * plus some headroom for VLAN, QinQ. ··· 1588 1589 return -EINVAL; 1589 1590 } 1590 1591 1591 - WRITE_ONCE(netdev->mtu, new_mtu); 1592 - 1593 - if (!netif_running(netdev)) 1594 - return 0; 1595 - 1596 - if (nicvf_update_hw_max_frs(nic, new_mtu)) { 1597 - netdev->mtu = orig_mtu; 1592 + if (netif_running(netdev) && nicvf_update_hw_max_frs(nic, new_mtu)) 1598 1593 return -EINVAL; 1599 - } 1594 + 1595 + WRITE_ONCE(netdev->mtu, new_mtu); 1600 1596 1601 1597 return 0; 1602 1598 }
+6 -2
drivers/net/ethernet/ibm/ibmvnic.h
··· 211 211 u8 reserved[72]; 212 212 } __packed __aligned(8); 213 213 214 - #define NUM_TX_STATS 3 215 214 struct ibmvnic_tx_queue_stats { 216 215 u64 batched_packets; 217 216 u64 direct_packets; ··· 218 219 u64 dropped_packets; 219 220 }; 220 221 221 - #define NUM_RX_STATS 3 222 + #define NUM_TX_STATS \ 223 + (sizeof(struct ibmvnic_tx_queue_stats) / sizeof(u64)) 224 + 222 225 struct ibmvnic_rx_queue_stats { 223 226 u64 packets; 224 227 u64 bytes; 225 228 u64 interrupts; 226 229 }; 230 + 231 + #define NUM_RX_STATS \ 232 + (sizeof(struct ibmvnic_rx_queue_stats) / sizeof(u64)) 227 233 228 234 struct ibmvnic_acl_buffer { 229 235 __be32 len;
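Deriving NUM_TX_STATS and NUM_RX_STATS from the struct size keeps the counts in step with the members, where the old hard-coded 3 had gone stale. This only works while the structs hold nothing but u64 counters, which also allows walking them as a flat array, e.g. (hypothetical ethtool fill loop; txq_stats stands in for one queue's ibmvnic_tx_queue_stats):

const u64 *stat = (const u64 *)&txq_stats;
for (i = 0; i < NUM_TX_STATS; i++)
        *data++ = stat[i];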
+7 -2
drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
··· 18 18 19 19 enum { 20 20 MLX5E_TC_PRIO = 0, 21 - MLX5E_NIC_PRIO 21 + MLX5E_PROMISC_PRIO, 22 + MLX5E_NIC_PRIO, 22 23 }; 23 24 24 25 struct mlx5e_flow_table { ··· 69 68 MLX5_HASH_FIELD_SEL_DST_IP |\ 70 69 MLX5_HASH_FIELD_SEL_IPSEC_SPI) 71 70 72 - /* NIC prio FTS */ 71 + /* NIC promisc FT level */ 73 72 enum { 74 73 MLX5E_PROMISC_FT_LEVEL, 74 + }; 75 + 76 + /* NIC prio FTS */ 77 + enum { 75 78 MLX5E_VLAN_FT_LEVEL, 76 79 MLX5E_L2_FT_LEVEL, 77 80 MLX5E_TTC_FT_LEVEL,
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/en_dim.c
··· 113 113 __set_bit(MLX5E_RQ_STATE_DIM, &rq->state); 114 114 } else { 115 115 __clear_bit(MLX5E_RQ_STATE_DIM, &rq->state); 116 - 116 + synchronize_net(); 117 117 mlx5e_dim_disable(rq->dim); 118 118 rq->dim = NULL; 119 119 } ··· 140 140 __set_bit(MLX5E_SQ_STATE_DIM, &sq->state); 141 141 } else { 142 142 __clear_bit(MLX5E_SQ_STATE_DIM, &sq->state); 143 - 143 + synchronize_net(); 144 144 mlx5e_dim_disable(sq->dim); 145 145 sq->dim = NULL; 146 146 }
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
··· 780 780 ft_attr.max_fte = MLX5E_PROMISC_TABLE_SIZE; 781 781 ft_attr.autogroup.max_num_groups = 1; 782 782 ft_attr.level = MLX5E_PROMISC_FT_LEVEL; 783 - ft_attr.prio = MLX5E_NIC_PRIO; 783 + ft_attr.prio = MLX5E_PROMISC_PRIO; 784 784 785 785 ft->t = mlx5_create_auto_grouped_flow_table(fs->ns, &ft_attr); 786 786 if (IS_ERR(ft->t)) {
+1
drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
··· 1076 1076 return err; 1077 1077 } 1078 1078 esw_qos_node_set_parent(node, parent); 1079 + node->bw_share = 0; 1079 1080 1080 1081 return 0; 1081 1082 }
+9 -4
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
··· 113 113 #define ETHTOOL_PRIO_NUM_LEVELS 1 114 114 #define ETHTOOL_NUM_PRIOS 11 115 115 #define ETHTOOL_MIN_LEVEL (KERNEL_MIN_LEVEL + ETHTOOL_NUM_PRIOS) 116 - /* Promiscuous, Vlan, mac, ttc, inner ttc, {UDP/ANY/aRFS/accel/{esp, esp_err}}, IPsec policy, 116 + /* Vlan, mac, ttc, inner ttc, {UDP/ANY/aRFS/accel/{esp, esp_err}}, IPsec policy, 117 117 * {IPsec RoCE MPV,Alias table},IPsec RoCE policy 118 118 */ 119 - #define KERNEL_NIC_PRIO_NUM_LEVELS 11 119 + #define KERNEL_NIC_PRIO_NUM_LEVELS 10 120 120 #define KERNEL_NIC_NUM_PRIOS 1 121 - /* One more level for tc */ 122 - #define KERNEL_MIN_LEVEL (KERNEL_NIC_PRIO_NUM_LEVELS + 1) 121 + /* One more level for tc, and one more for promisc */ 122 + #define KERNEL_MIN_LEVEL (KERNEL_NIC_PRIO_NUM_LEVELS + 2) 123 + 124 + #define KERNEL_NIC_PROMISC_NUM_PRIOS 1 125 + #define KERNEL_NIC_PROMISC_NUM_LEVELS 1 123 126 124 127 #define KERNEL_NIC_TC_NUM_PRIOS 1 125 128 #define KERNEL_NIC_TC_NUM_LEVELS 3 ··· 190 187 ADD_NS(MLX5_FLOW_TABLE_MISS_ACTION_DEF, 191 188 ADD_MULTIPLE_PRIO(KERNEL_NIC_TC_NUM_PRIOS, 192 189 KERNEL_NIC_TC_NUM_LEVELS), 190 + ADD_MULTIPLE_PRIO(KERNEL_NIC_PROMISC_NUM_PRIOS, 191 + KERNEL_NIC_PROMISC_NUM_LEVELS), 193 192 ADD_MULTIPLE_PRIO(KERNEL_NIC_NUM_PRIOS, 194 193 KERNEL_NIC_PRIO_NUM_LEVELS))), 195 194 ADD_PRIO(0, BY_PASS_MIN_LEVEL, 0, FS_CHAINING_CAPS,
+5
drivers/net/ethernet/renesas/rtsn.c
··· 1259 1259 priv = netdev_priv(ndev); 1260 1260 priv->pdev = pdev; 1261 1261 priv->ndev = ndev; 1262 + 1262 1263 priv->ptp_priv = rcar_gen4_ptp_alloc(pdev); 1264 + if (!priv->ptp_priv) { 1265 + ret = -ENOMEM; 1266 + goto error_free; 1267 + } 1263 1268 1264 1269 spin_lock_init(&priv->lock); 1265 1270 platform_set_drvdata(pdev, priv);
+11 -13
drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
··· 364 364 } 365 365 366 366 /* TX/RX NORMAL interrupts */ 367 - if (likely(intr_status & XGMAC_NIS)) { 368 - if (likely(intr_status & XGMAC_RI)) { 369 - u64_stats_update_begin(&stats->syncp); 370 - u64_stats_inc(&stats->rx_normal_irq_n[chan]); 371 - u64_stats_update_end(&stats->syncp); 372 - ret |= handle_rx; 373 - } 374 - if (likely(intr_status & (XGMAC_TI | XGMAC_TBU))) { 375 - u64_stats_update_begin(&stats->syncp); 376 - u64_stats_inc(&stats->tx_normal_irq_n[chan]); 377 - u64_stats_update_end(&stats->syncp); 378 - ret |= handle_tx; 379 - } 367 + if (likely(intr_status & XGMAC_RI)) { 368 + u64_stats_update_begin(&stats->syncp); 369 + u64_stats_inc(&stats->rx_normal_irq_n[chan]); 370 + u64_stats_update_end(&stats->syncp); 371 + ret |= handle_rx; 372 + } 373 + if (likely(intr_status & (XGMAC_TI | XGMAC_TBU))) { 374 + u64_stats_update_begin(&stats->syncp); 375 + u64_stats_inc(&stats->tx_normal_irq_n[chan]); 376 + u64_stats_update_end(&stats->syncp); 377 + ret |= handle_tx; 380 378 } 381 379 382 380 /* Clear interrupts */
+1 -3
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 856 856 { 857 857 struct sk_buff *skb; 858 858 859 - len += AM65_CPSW_HEADROOM; 860 - 861 859 skb = build_skb(page_addr, len); 862 860 if (unlikely(!skb)) 863 861 return NULL; ··· 1342 1344 } 1343 1345 1344 1346 skb = am65_cpsw_build_skb(page_addr, ndev, 1345 - AM65_CPSW_MAX_PACKET_SIZE, headroom); 1347 + PAGE_SIZE, headroom); 1346 1348 if (unlikely(!skb)) { 1347 1349 new_page = page; 1348 1350 goto requeue;
+1 -1
drivers/net/ethernet/xilinx/ll_temac_main.c
··· 1309 1309 if (ering->rx_pending > RX_BD_NUM_MAX || 1310 1310 ering->rx_mini_pending || 1311 1311 ering->rx_jumbo_pending || 1312 - ering->rx_pending > TX_BD_NUM_MAX) 1312 + ering->tx_pending > TX_BD_NUM_MAX) 1313 1313 return -EINVAL; 1314 1314 1315 1315 if (netif_running(ndev))
+2 -1
drivers/net/phy/microchip.c
··· 332 332 * As workaround, set to 10 before setting to 100 333 333 * at forced 100 F/H mode. 334 334 */ 335 - if (!phydev->autoneg && phydev->speed == 100) { 335 + if (phydev->state == PHY_NOLINK && !phydev->autoneg && phydev->speed == 100) { 336 336 /* disable phy interrupt */ 337 337 temp = phy_read(phydev, LAN88XX_INT_MASK); 338 338 temp &= ~LAN88XX_INT_MASK_MDINTPIN_EN_; ··· 488 488 .config_init = lan88xx_config_init, 489 489 .config_aneg = lan88xx_config_aneg, 490 490 .link_change_notify = lan88xx_link_change_notify, 491 + .soft_reset = genphy_soft_reset, 491 492 492 493 /* Interrupt handling is broken, do not define related 493 494 * functions to force polling.
-27
drivers/net/phy/qcom/at803x.c
··· 26 26 27 27 #define AT803X_LED_CONTROL 0x18 28 28 29 - #define AT803X_PHY_MMD3_WOL_CTRL 0x8012 30 - #define AT803X_WOL_EN BIT(5) 31 - 32 29 #define AT803X_REG_CHIP_CONFIG 0x1f 33 30 #define AT803X_BT_BX_REG_SEL 0x8000 34 31 ··· 861 864 return ret; 862 865 863 866 return at803x_config_init(phydev); 864 - } 865 - 866 - static int at8031_set_wol(struct phy_device *phydev, 867 - struct ethtool_wolinfo *wol) 868 - { 869 - int ret; 870 - 871 - /* First setup MAC address and enable WOL interrupt */ 872 - ret = at803x_set_wol(phydev, wol); 873 - if (ret) 874 - return ret; 875 - 876 - if (wol->wolopts & WAKE_MAGIC) 877 - /* Enable WOL function for 1588 */ 878 - ret = phy_modify_mmd(phydev, MDIO_MMD_PCS, 879 - AT803X_PHY_MMD3_WOL_CTRL, 880 - 0, AT803X_WOL_EN); 881 - else 882 - /* Disable WoL function for 1588 */ 883 - ret = phy_modify_mmd(phydev, MDIO_MMD_PCS, 884 - AT803X_PHY_MMD3_WOL_CTRL, 885 - AT803X_WOL_EN, 0); 886 - 887 - return ret; 888 867 } 889 868 890 869 static int at8031_config_intr(struct phy_device *phydev)
+1 -1
drivers/net/phy/qcom/qca808x.c
··· 633 633 .handle_interrupt = at803x_handle_interrupt, 634 634 .get_tunable = at803x_get_tunable, 635 635 .set_tunable = at803x_set_tunable, 636 - .set_wol = at803x_set_wol, 636 + .set_wol = at8031_set_wol, 637 637 .get_wol = at803x_get_wol, 638 638 .get_features = qca808x_get_features, 639 639 .config_aneg = qca808x_config_aneg,
+25
drivers/net/phy/qcom/qcom-phy-lib.c
··· 115 115 } 116 116 EXPORT_SYMBOL_GPL(at803x_set_wol); 117 117 118 + int at8031_set_wol(struct phy_device *phydev, 119 + struct ethtool_wolinfo *wol) 120 + { 121 + int ret; 122 + 123 + /* First setup MAC address and enable WOL interrupt */ 124 + ret = at803x_set_wol(phydev, wol); 125 + if (ret) 126 + return ret; 127 + 128 + if (wol->wolopts & WAKE_MAGIC) 129 + /* Enable WOL function for 1588 */ 130 + ret = phy_modify_mmd(phydev, MDIO_MMD_PCS, 131 + AT803X_PHY_MMD3_WOL_CTRL, 132 + 0, AT803X_WOL_EN); 133 + else 134 + /* Disable WoL function for 1588 */ 135 + ret = phy_modify_mmd(phydev, MDIO_MMD_PCS, 136 + AT803X_PHY_MMD3_WOL_CTRL, 137 + AT803X_WOL_EN, 0); 138 + 139 + return ret; 140 + } 141 + EXPORT_SYMBOL_GPL(at8031_set_wol); 142 + 118 143 void at803x_get_wol(struct phy_device *phydev, 119 144 struct ethtool_wolinfo *wol) 120 145 {
+5
drivers/net/phy/qcom/qcom.h
··· 172 172 #define AT803X_LOC_MAC_ADDR_16_31_OFFSET 0x804B 173 173 #define AT803X_LOC_MAC_ADDR_32_47_OFFSET 0x804A 174 174 175 + #define AT803X_PHY_MMD3_WOL_CTRL 0x8012 176 + #define AT803X_WOL_EN BIT(5) 177 + 175 178 #define AT803X_DEBUG_ADDR 0x1D 176 179 #define AT803X_DEBUG_DATA 0x1E 177 180 ··· 217 214 u16 clear, u16 set); 218 215 int at803x_debug_reg_write(struct phy_device *phydev, u16 reg, u16 data); 219 216 int at803x_set_wol(struct phy_device *phydev, 217 + struct ethtool_wolinfo *wol); 218 + int at8031_set_wol(struct phy_device *phydev, 220 219 struct ethtool_wolinfo *wol); 221 220 void at803x_get_wol(struct phy_device *phydev, 222 221 struct ethtool_wolinfo *wol);
+52 -5
drivers/net/phy/smsc.c
··· 155 155 156 156 static int lan87xx_config_aneg(struct phy_device *phydev) 157 157 { 158 - int rc; 158 + u8 mdix_ctrl; 159 159 int val; 160 + int rc; 160 161 161 - switch (phydev->mdix_ctrl) { 162 + /* When auto-negotiation is disabled (forced mode), the PHY's 163 + * Auto-MDIX will continue toggling the TX/RX pairs. 164 + * 165 + * To establish a stable link, we must select a fixed MDI mode. 166 + * If the user has not specified a fixed MDI mode (i.e., mdix_ctrl is 167 + * 'auto'), we default to ETH_TP_MDI. This choice of a ETH_TP_MDI mode 168 + * mirrors the behavior the hardware would exhibit if the AUTOMDIX_EN 169 + * strap were configured for a fixed MDI connection. 170 + */ 171 + if (phydev->autoneg == AUTONEG_DISABLE) { 172 + if (phydev->mdix_ctrl == ETH_TP_MDI_AUTO) 173 + mdix_ctrl = ETH_TP_MDI; 174 + else 175 + mdix_ctrl = phydev->mdix_ctrl; 176 + } else { 177 + mdix_ctrl = phydev->mdix_ctrl; 178 + } 179 + 180 + switch (mdix_ctrl) { 162 181 case ETH_TP_MDI: 163 182 val = SPECIAL_CTRL_STS_OVRRD_AMDIX_; 164 183 break; ··· 186 167 SPECIAL_CTRL_STS_AMDIX_STATE_; 187 168 break; 188 169 case ETH_TP_MDI_AUTO: 189 - val = SPECIAL_CTRL_STS_AMDIX_ENABLE_; 170 + val = SPECIAL_CTRL_STS_OVRRD_AMDIX_ | 171 + SPECIAL_CTRL_STS_AMDIX_ENABLE_; 190 172 break; 191 173 default: 192 174 return genphy_config_aneg(phydev); ··· 203 183 rc |= val; 204 184 phy_write(phydev, SPECIAL_CTRL_STS, rc); 205 185 206 - phydev->mdix = phydev->mdix_ctrl; 186 + phydev->mdix = mdix_ctrl; 207 187 return genphy_config_aneg(phydev); 208 188 } 209 189 ··· 280 260 return err; 281 261 } 282 262 EXPORT_SYMBOL_GPL(lan87xx_read_status); 263 + 264 + static int lan87xx_phy_config_init(struct phy_device *phydev) 265 + { 266 + int rc; 267 + 268 + /* The LAN87xx PHY's initial MDI-X mode is determined by the AUTOMDIX_EN 269 + * hardware strap, but the driver cannot read the strap's status. This 270 + * creates an unpredictable initial state. 271 + * 272 + * To ensure consistent and reliable behavior across all boards, 273 + * override the strap configuration on initialization and force the PHY 274 + * into a known state with Auto-MDIX enabled, which is the expected 275 + * default for modern hardware. 276 + */ 277 + rc = phy_modify(phydev, SPECIAL_CTRL_STS, 278 + SPECIAL_CTRL_STS_OVRRD_AMDIX_ | 279 + SPECIAL_CTRL_STS_AMDIX_ENABLE_ | 280 + SPECIAL_CTRL_STS_AMDIX_STATE_, 281 + SPECIAL_CTRL_STS_OVRRD_AMDIX_ | 282 + SPECIAL_CTRL_STS_AMDIX_ENABLE_); 283 + if (rc < 0) 284 + return rc; 285 + 286 + phydev->mdix_ctrl = ETH_TP_MDI_AUTO; 287 + 288 + return smsc_phy_config_init(phydev); 289 + } 283 290 284 291 static int lan874x_phy_config_init(struct phy_device *phydev) 285 292 { ··· 742 695 743 696 /* basic functions */ 744 697 .read_status = lan87xx_read_status, 745 - .config_init = smsc_phy_config_init, 698 + .config_init = lan87xx_phy_config_init, 746 699 .soft_reset = smsc_phy_reset, 747 700 .config_aneg = lan87xx_config_aneg, 748 701
+3 -1
drivers/net/wireless/marvell/mwifiex/util.c
··· 459 459 "auth: receive authentication from %pM\n", 460 460 ieee_hdr->addr3); 461 461 } else { 462 - if (!priv->wdev.connected) 462 + if (!priv->wdev.connected || 463 + !ether_addr_equal(ieee_hdr->addr3, 464 + priv->curr_bss_params.bss_descriptor.mac_address)) 463 465 return 0; 464 466 465 467 if (ieee80211_is_deauth(ieee_hdr->frame_control)) {
+10
drivers/net/wireless/mediatek/mt76/mt76.h
··· 1224 1224 #define mt76_dereference(p, dev) \ 1225 1225 rcu_dereference_protected(p, lockdep_is_held(&(dev)->mutex)) 1226 1226 1227 + static inline struct mt76_wcid * 1228 + __mt76_wcid_ptr(struct mt76_dev *dev, u16 idx) 1229 + { 1230 + if (idx >= ARRAY_SIZE(dev->wcid)) 1231 + return NULL; 1232 + return rcu_dereference(dev->wcid[idx]); 1233 + } 1234 + 1235 + #define mt76_wcid_ptr(dev, idx) __mt76_wcid_ptr(&(dev)->mt76, idx) 1236 + 1227 1237 struct mt76_dev *mt76_alloc_device(struct device *pdev, unsigned int size, 1228 1238 const struct ieee80211_ops *ops, 1229 1239 const struct mt76_driver_ops *drv_ops);
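The __mt76_wcid_ptr()/mt76_wcid_ptr() helpers added here fold the bounds check and the RCU dereference of the wcid table into one place, returning NULL for any index outside dev->wcid[]; the driver hunks that follow are mechanical conversions of call sites that previously open-coded both steps against their own per-chip limits. A minimal sketch of the conversion pattern, taken from the mt7603 hunk below:

	/* before: every caller repeated its own range check */
	if (idx >= MT7603_WTBL_SIZE)
		return NULL;
	wcid = rcu_dereference(dev->mt76.wcid[idx]);

	/* after: the helper handles out-of-range indices */
	wcid = mt76_wcid_ptr(dev, idx);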
+1 -1
drivers/net/wireless/mediatek/mt76/mt7603/dma.c
··· 44 44 if (idx >= MT7603_WTBL_STA - 1) 45 45 goto free; 46 46 47 - wcid = rcu_dereference(dev->mt76.wcid[idx]); 47 + wcid = mt76_wcid_ptr(dev, idx); 48 48 if (!wcid) 49 49 goto free; 50 50
+2 -8
drivers/net/wireless/mediatek/mt76/mt7603/mac.c
··· 487 487 struct mt7603_sta *sta; 488 488 struct mt76_wcid *wcid; 489 489 490 - if (idx >= MT7603_WTBL_SIZE) 491 - return NULL; 492 - 493 - wcid = rcu_dereference(dev->mt76.wcid[idx]); 490 + wcid = mt76_wcid_ptr(dev, idx); 494 491 if (unicast || !wcid) 495 492 return wcid; 496 493 ··· 1263 1266 if (pid == MT_PACKET_ID_NO_ACK) 1264 1267 return; 1265 1268 1266 - if (wcidx >= MT7603_WTBL_SIZE) 1267 - return; 1268 - 1269 1269 rcu_read_lock(); 1270 1270 1271 - wcid = rcu_dereference(dev->mt76.wcid[wcidx]); 1271 + wcid = mt76_wcid_ptr(dev, wcidx); 1272 1272 if (!wcid) 1273 1273 goto out; 1274 1274
+2 -5
drivers/net/wireless/mediatek/mt76/mt7615/mac.c
··· 90 90 struct mt7615_sta *sta; 91 91 struct mt76_wcid *wcid; 92 92 93 - if (idx >= MT7615_WTBL_SIZE) 94 - return NULL; 95 - 96 - wcid = rcu_dereference(dev->mt76.wcid[idx]); 93 + wcid = mt76_wcid_ptr(dev, idx); 97 94 if (unicast || !wcid) 98 95 return wcid; 99 96 ··· 1501 1504 1502 1505 rcu_read_lock(); 1503 1506 1504 - wcid = rcu_dereference(dev->mt76.wcid[wcidx]); 1507 + wcid = mt76_wcid_ptr(dev, wcidx); 1505 1508 if (!wcid) 1506 1509 goto out; 1507 1510
+1 -1
drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
··· 1172 1172 wcid_idx = wcid->idx; 1173 1173 } else { 1174 1174 wcid_idx = le32_get_bits(txwi[1], MT_TXD1_WLAN_IDX); 1175 - wcid = rcu_dereference(dev->wcid[wcid_idx]); 1175 + wcid = __mt76_wcid_ptr(dev, wcid_idx); 1176 1176 1177 1177 if (wcid && wcid->sta) { 1178 1178 sta = container_of((void *)wcid, struct ieee80211_sta,
+3 -3
drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
··· 287 287 288 288 mt76_connac_mcu_get_wlan_idx(dev, wcid, &hdr.wlan_idx_lo, 289 289 &hdr.wlan_idx_hi); 290 - skb = mt76_mcu_msg_alloc(dev, NULL, len); 290 + skb = __mt76_mcu_msg_alloc(dev, NULL, len, len, GFP_ATOMIC); 291 291 if (!skb) 292 292 return ERR_PTR(-ENOMEM); 293 293 ··· 1740 1740 if (!sreq->ssids[i].ssid_len) 1741 1741 continue; 1742 1742 1743 - req->ssids[i].ssid_len = cpu_to_le32(sreq->ssids[i].ssid_len); 1744 - memcpy(req->ssids[i].ssid, sreq->ssids[i].ssid, 1743 + req->ssids[n_ssids].ssid_len = cpu_to_le32(sreq->ssids[i].ssid_len); 1744 + memcpy(req->ssids[n_ssids].ssid, sreq->ssids[i].ssid, 1745 1745 sreq->ssids[i].ssid_len); 1746 1746 n_ssids++; 1747 1747 }
+1 -4
drivers/net/wireless/mediatek/mt76/mt76x02.h
··· 262 262 { 263 263 struct mt76_wcid *wcid; 264 264 265 - if (idx >= MT76x02_N_WCIDS) 266 - return NULL; 267 - 268 - wcid = rcu_dereference(dev->wcid[idx]); 265 + wcid = __mt76_wcid_ptr(dev, idx); 269 266 if (!wcid) 270 267 return NULL; 271 268
+1 -3
drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
··· 564 564 565 565 rcu_read_lock(); 566 566 567 - if (stat->wcid < MT76x02_N_WCIDS) 568 - wcid = rcu_dereference(dev->mt76.wcid[stat->wcid]); 569 - 567 + wcid = mt76_wcid_ptr(dev, stat->wcid); 570 568 if (wcid && wcid->sta) { 571 569 void *priv; 572 570
+3 -9
drivers/net/wireless/mediatek/mt76/mt7915/mac.c
··· 56 56 struct mt7915_sta *sta; 57 57 struct mt76_wcid *wcid; 58 58 59 - if (idx >= ARRAY_SIZE(dev->mt76.wcid)) 60 - return NULL; 61 - 62 - wcid = rcu_dereference(dev->mt76.wcid[idx]); 59 + wcid = mt76_wcid_ptr(dev, idx); 63 60 if (unicast || !wcid) 64 61 return wcid; 65 62 ··· 914 917 u16 idx; 915 918 916 919 idx = FIELD_GET(MT_TX_FREE_WLAN_ID, info); 917 - wcid = rcu_dereference(dev->mt76.wcid[idx]); 920 + wcid = mt76_wcid_ptr(dev, idx); 918 921 sta = wcid_to_sta(wcid); 919 922 if (!sta) 920 923 continue; ··· 1010 1013 if (pid < MT_PACKET_ID_WED) 1011 1014 return; 1012 1015 1013 - if (wcidx >= mt7915_wtbl_size(dev)) 1014 - return; 1015 - 1016 1016 rcu_read_lock(); 1017 1017 1018 - wcid = rcu_dereference(dev->mt76.wcid[wcidx]); 1018 + wcid = mt76_wcid_ptr(dev, wcidx); 1019 1019 if (!wcid) 1020 1020 goto out; 1021 1021
+1 -1
drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
··· 3986 3986 3987 3987 rcu_read_lock(); 3988 3988 3989 - wcid = rcu_dereference(dev->mt76.wcid[wlan_idx]); 3989 + wcid = mt76_wcid_ptr(dev, wlan_idx); 3990 3990 if (wcid) 3991 3991 wcid->stats.tx_packets += le32_to_cpu(res->tx_packets); 3992 3992 else
+1 -4
drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
··· 587 587 588 588 dev = container_of(wed, struct mt7915_dev, mt76.mmio.wed); 589 589 590 - if (idx >= mt7915_wtbl_size(dev)) 591 - return; 592 - 593 590 rcu_read_lock(); 594 591 595 - wcid = rcu_dereference(dev->mt76.wcid[idx]); 592 + wcid = mt76_wcid_ptr(dev, idx); 596 593 if (wcid) { 597 594 wcid->stats.rx_bytes += le32_to_cpu(stats->rx_byte_cnt); 598 595 wcid->stats.rx_packets += le32_to_cpu(stats->rx_pkt_cnt);
+3 -3
drivers/net/wireless/mediatek/mt76/mt7921/mac.c
··· 465 465 466 466 rcu_read_lock(); 467 467 468 - wcid = rcu_dereference(dev->mt76.wcid[wcidx]); 468 + wcid = mt76_wcid_ptr(dev, wcidx); 469 469 if (!wcid) 470 470 goto out; 471 471 ··· 516 516 517 517 count++; 518 518 idx = FIELD_GET(MT_TX_FREE_WLAN_ID, info); 519 - wcid = rcu_dereference(dev->mt76.wcid[idx]); 519 + wcid = mt76_wcid_ptr(dev, idx); 520 520 sta = wcid_to_sta(wcid); 521 521 if (!sta) 522 522 continue; ··· 816 816 u16 idx; 817 817 818 818 idx = le32_get_bits(txwi[1], MT_TXD1_WLAN_IDX); 819 - wcid = rcu_dereference(mdev->wcid[idx]); 819 + wcid = __mt76_wcid_ptr(mdev, idx); 820 820 sta = wcid_to_sta(wcid); 821 821 822 822 if (sta && likely(e->skb->protocol != cpu_to_be16(ETH_P_PAE)))
+3
drivers/net/wireless/mediatek/mt76/mt7921/main.c
··· 1180 1180 struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv; 1181 1181 struct mt792x_dev *dev = mt792x_hw_dev(hw); 1182 1182 1183 + if (!msta->deflink.wcid.sta) 1184 + return; 1185 + 1183 1186 mt792x_mutex_acquire(dev); 1184 1187 1185 1188 if (enabled)
+2
drivers/net/wireless/mediatek/mt76/mt7925/init.c
··· 52 52 53 53 name = devm_kasprintf(&wiphy->dev, GFP_KERNEL, "mt7925_%s", 54 54 wiphy_name(wiphy)); 55 + if (!name) 56 + return -ENOMEM; 55 57 56 58 hwmon = devm_hwmon_device_register_with_groups(&wiphy->dev, name, phy, 57 59 mt7925_hwmon_groups);
+3 -3
drivers/net/wireless/mediatek/mt76/mt7925/mac.c
··· 1040 1040 1041 1041 rcu_read_lock(); 1042 1042 1043 - wcid = rcu_dereference(dev->mt76.wcid[wcidx]); 1043 + wcid = mt76_wcid_ptr(dev, wcidx); 1044 1044 if (!wcid) 1045 1045 goto out; 1046 1046 ··· 1122 1122 u16 idx; 1123 1123 1124 1124 idx = FIELD_GET(MT_TXFREE_INFO_WLAN_ID, info); 1125 - wcid = rcu_dereference(dev->mt76.wcid[idx]); 1125 + wcid = mt76_wcid_ptr(dev, idx); 1126 1126 sta = wcid_to_sta(wcid); 1127 1127 if (!sta) 1128 1128 continue; ··· 1445 1445 u16 idx; 1446 1446 1447 1447 idx = le32_get_bits(txwi[1], MT_TXD1_WLAN_IDX); 1448 - wcid = rcu_dereference(mdev->wcid[idx]); 1448 + wcid = __mt76_wcid_ptr(mdev, idx); 1449 1449 sta = wcid_to_sta(wcid); 1450 1450 1451 1451 if (sta && likely(e->skb->protocol != cpu_to_be16(ETH_P_PAE)))
+7 -1
drivers/net/wireless/mediatek/mt76/mt7925/main.c
··· 1481 1481 1482 1482 mt792x_mutex_acquire(dev); 1483 1483 1484 - err = mt7925_mcu_sched_scan_req(mphy, vif, req); 1484 + err = mt7925_mcu_sched_scan_req(mphy, vif, req, ies); 1485 1485 if (err < 0) 1486 1486 goto out; 1487 1487 ··· 1603 1603 unsigned long valid = mvif->valid_links; 1604 1604 u8 i; 1605 1605 1606 + if (!msta->vif) 1607 + return; 1608 + 1606 1609 mt792x_mutex_acquire(dev); 1607 1610 1608 1611 valid = ieee80211_vif_is_mld(vif) ? mvif->valid_links : BIT(0); ··· 1619 1616 set_bit(MT_WCID_FLAG_HDR_TRANS, &mlink->wcid.flags); 1620 1617 else 1621 1618 clear_bit(MT_WCID_FLAG_HDR_TRANS, &mlink->wcid.flags); 1619 + 1620 + if (!mlink->wcid.sta) 1621 + continue; 1622 1622 1623 1623 mt7925_mcu_wtbl_update_hdr_trans(dev, vif, sta, i); 1624 1624 }
+61 -18
drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
··· 164 164 bool suspend, struct cfg80211_wowlan *wowlan) 165 165 { 166 166 struct mt76_vif_link *mvif = (struct mt76_vif_link *)vif->drv_priv; 167 + struct ieee80211_scan_ies ies = {}; 167 168 struct mt76_dev *dev = phy->dev; 168 169 struct { 169 170 struct { ··· 195 194 req.wow_ctrl_tlv.trigger |= (UNI_WOW_DETECT_TYPE_DISCONNECT | 196 195 UNI_WOW_DETECT_TYPE_BCN_LOST); 197 196 if (wowlan->nd_config) { 198 - mt7925_mcu_sched_scan_req(phy, vif, wowlan->nd_config); 197 + mt7925_mcu_sched_scan_req(phy, vif, wowlan->nd_config, &ies); 199 198 req.wow_ctrl_tlv.trigger |= UNI_WOW_DETECT_TYPE_SCH_SCAN_HIT; 200 199 mt7925_mcu_sched_scan_enable(phy, vif, suspend); 201 200 } ··· 2819 2818 return err; 2820 2819 } 2821 2820 2821 + static void 2822 + mt7925_mcu_build_scan_ie_tlv(struct mt76_dev *mdev, 2823 + struct sk_buff *skb, 2824 + struct ieee80211_scan_ies *scan_ies) 2825 + { 2826 + u32 max_len = sizeof(struct scan_ie_tlv) + MT76_CONNAC_SCAN_IE_LEN; 2827 + struct scan_ie_tlv *ie; 2828 + enum nl80211_band i; 2829 + struct tlv *tlv; 2830 + const u8 *ies; 2831 + u16 ies_len; 2832 + 2833 + for (i = 0; i <= NL80211_BAND_6GHZ; i++) { 2834 + if (i == NL80211_BAND_60GHZ) 2835 + continue; 2836 + 2837 + ies = scan_ies->ies[i]; 2838 + ies_len = scan_ies->len[i]; 2839 + 2840 + if (!ies || !ies_len) 2841 + continue; 2842 + 2843 + if (ies_len > max_len) 2844 + return; 2845 + 2846 + tlv = mt76_connac_mcu_add_tlv(skb, UNI_SCAN_IE, 2847 + sizeof(*ie) + ies_len); 2848 + ie = (struct scan_ie_tlv *)tlv; 2849 + 2850 + memcpy(ie->ies, ies, ies_len); 2851 + ie->ies_len = cpu_to_le16(ies_len); 2852 + 2853 + switch (i) { 2854 + case NL80211_BAND_2GHZ: 2855 + ie->band = 1; 2856 + break; 2857 + case NL80211_BAND_6GHZ: 2858 + ie->band = 3; 2859 + break; 2860 + default: 2861 + ie->band = 2; 2862 + break; 2863 + } 2864 + 2865 + max_len -= (sizeof(*ie) + ies_len); 2866 + } 2867 + } 2868 + 2822 2869 int mt7925_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif, 2823 2870 struct ieee80211_scan_request *scan_req) 2824 2871 { ··· 2892 2843 2893 2844 max_len = sizeof(*hdr) + sizeof(*req) + sizeof(*ssid) + 2894 2845 sizeof(*bssid) * MT7925_RNR_SCAN_MAX_BSSIDS + 2895 - sizeof(*chan_info) + sizeof(*misc) + sizeof(*ie); 2846 + sizeof(*chan_info) + sizeof(*misc) + sizeof(*ie) + 2847 + MT76_CONNAC_SCAN_IE_LEN; 2896 2848 2897 2849 skb = mt76_mcu_msg_alloc(mdev, NULL, max_len); 2898 2850 if (!skb) ··· 2919 2869 if (i > MT7925_RNR_SCAN_MAX_BSSIDS) 2920 2870 break; 2921 2871 2922 - ssid->ssids[i].ssid_len = cpu_to_le32(sreq->ssids[i].ssid_len); 2923 - memcpy(ssid->ssids[i].ssid, sreq->ssids[i].ssid, 2872 + ssid->ssids[n_ssids].ssid_len = cpu_to_le32(sreq->ssids[i].ssid_len); 2873 + memcpy(ssid->ssids[n_ssids].ssid, sreq->ssids[i].ssid, 2924 2874 sreq->ssids[i].ssid_len); 2925 2875 n_ssids++; 2926 2876 } ··· 2975 2925 } 2976 2926 chan_info->channel_type = sreq->n_channels ? 
4 : 0; 2977 2927 2978 - tlv = mt76_connac_mcu_add_tlv(skb, UNI_SCAN_IE, sizeof(*ie)); 2979 - ie = (struct scan_ie_tlv *)tlv; 2980 - if (sreq->ie_len > 0) { 2981 - memcpy(ie->ies, sreq->ie, sreq->ie_len); 2982 - ie->ies_len = cpu_to_le16(sreq->ie_len); 2983 - } 2984 - 2985 2928 req->scan_func |= SCAN_FUNC_SPLIT_SCAN; 2986 2929 2987 2930 tlv = mt76_connac_mcu_add_tlv(skb, UNI_SCAN_MISC, sizeof(*misc)); ··· 2984 2941 sreq->mac_addr_mask); 2985 2942 req->scan_func |= SCAN_FUNC_RANDOM_MAC; 2986 2943 } 2944 + 2945 + /* Append scan probe IEs as the last tlv */ 2946 + mt7925_mcu_build_scan_ie_tlv(mdev, skb, &scan_req->ies); 2987 2947 2988 2948 err = mt76_mcu_skb_send_msg(mdev, skb, MCU_UNI_CMD(SCAN_REQ), 2989 2949 true); ··· 2999 2953 3000 2954 int mt7925_mcu_sched_scan_req(struct mt76_phy *phy, 3001 2955 struct ieee80211_vif *vif, 3002 - struct cfg80211_sched_scan_request *sreq) 2956 + struct cfg80211_sched_scan_request *sreq, 2957 + struct ieee80211_scan_ies *ies) 3003 2958 { 3004 2959 struct mt76_vif_link *mvif = (struct mt76_vif_link *)vif->drv_priv; 3005 2960 struct ieee80211_channel **scan_list = sreq->channels; ··· 3088 3041 } 3089 3042 chan_info->channel_type = sreq->n_channels ? 4 : 0; 3090 3043 3091 - tlv = mt76_connac_mcu_add_tlv(skb, UNI_SCAN_IE, sizeof(*ie)); 3092 - ie = (struct scan_ie_tlv *)tlv; 3093 - if (sreq->ie_len > 0) { 3094 - memcpy(ie->ies, sreq->ie, sreq->ie_len); 3095 - ie->ies_len = cpu_to_le16(sreq->ie_len); 3096 - } 3044 + /* Append scan probe IEs as the last tlv */ 3045 + mt7925_mcu_build_scan_ie_tlv(mdev, skb, ies); 3097 3046 3098 3047 return mt76_mcu_skb_send_msg(mdev, skb, MCU_UNI_CMD(SCAN_REQ), 3099 3048 true);
+3 -2
drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
··· 269 269 __le16 ies_len; 270 270 u8 band; 271 271 u8 pad; 272 - u8 ies[MT76_CONNAC_SCAN_IE_LEN]; 272 + u8 ies[]; 273 273 }; 274 274 275 275 struct scan_misc_tlv { ··· 673 673 struct ieee80211_vif *vif); 674 674 int mt7925_mcu_sched_scan_req(struct mt76_phy *phy, 675 675 struct ieee80211_vif *vif, 676 - struct cfg80211_sched_scan_request *sreq); 676 + struct cfg80211_sched_scan_request *sreq, 677 + struct ieee80211_scan_ies *ies); 677 678 int mt7925_mcu_sched_scan_enable(struct mt76_phy *phy, 678 679 struct ieee80211_vif *vif, 679 680 bool enable);
+1 -1
drivers/net/wireless/mediatek/mt76/mt7925/regs.h
··· 58 58 59 59 #define MT_INT_TX_DONE_MCU (MT_INT_TX_DONE_MCU_WM | \ 60 60 MT_INT_TX_DONE_FWDL) 61 - #define MT_INT_TX_DONE_ALL (MT_INT_TX_DONE_MCU_WM | \ 61 + #define MT_INT_TX_DONE_ALL (MT_INT_TX_DONE_MCU | \ 62 62 MT_INT_TX_DONE_BAND0 | \ 63 63 GENMASK(18, 4)) 64 64
+27 -5
drivers/net/wireless/mediatek/mt76/mt792x_core.c
··· 28 28 }, 29 29 }; 30 30 31 - static const struct ieee80211_iface_limit if_limits_chanctx[] = { 31 + static const struct ieee80211_iface_limit if_limits_chanctx_mcc[] = { 32 32 { 33 33 .max = 2, 34 34 .types = BIT(NL80211_IFTYPE_STATION) | ··· 36 36 }, 37 37 { 38 38 .max = 1, 39 - .types = BIT(NL80211_IFTYPE_AP) | 40 - BIT(NL80211_IFTYPE_P2P_GO) 39 + .types = BIT(NL80211_IFTYPE_P2P_GO) 40 + }, 41 + { 42 + .max = 1, 43 + .types = BIT(NL80211_IFTYPE_P2P_DEVICE) 44 + } 45 + }; 46 + 47 + static const struct ieee80211_iface_limit if_limits_chanctx_scc[] = { 48 + { 49 + .max = 2, 50 + .types = BIT(NL80211_IFTYPE_STATION) | 51 + BIT(NL80211_IFTYPE_P2P_CLIENT) 52 + }, 53 + { 54 + .max = 1, 55 + .types = BIT(NL80211_IFTYPE_AP) 41 56 }, 42 57 { 43 58 .max = 1, ··· 62 47 63 48 static const struct ieee80211_iface_combination if_comb_chanctx[] = { 64 49 { 65 - .limits = if_limits_chanctx, 66 - .n_limits = ARRAY_SIZE(if_limits_chanctx), 50 + .limits = if_limits_chanctx_mcc, 51 + .n_limits = ARRAY_SIZE(if_limits_chanctx_mcc), 67 52 .max_interfaces = 3, 68 53 .num_different_channels = 2, 54 + .beacon_int_infra_match = false, 55 + }, 56 + { 57 + .limits = if_limits_chanctx_scc, 58 + .n_limits = ARRAY_SIZE(if_limits_chanctx_scc), 59 + .max_interfaces = 3, 60 + .num_different_channels = 1, 69 61 .beacon_int_infra_match = false, 70 62 } 71 63 };
+1 -4
drivers/net/wireless/mediatek/mt76/mt792x_mac.c
··· 142 142 struct mt792x_sta *sta; 143 143 struct mt76_wcid *wcid; 144 144 145 - if (idx >= ARRAY_SIZE(dev->mt76.wcid)) 146 - return NULL; 147 - 148 - wcid = rcu_dereference(dev->mt76.wcid[idx]); 145 + wcid = mt76_wcid_ptr(dev, idx); 149 146 if (unicast || !wcid) 150 147 return wcid; 151 148
+10 -42
drivers/net/wireless/mediatek/mt76/mt7996/mac.c
··· 61 61 struct mt76_wcid *wcid; 62 62 int i; 63 63 64 - if (idx >= ARRAY_SIZE(dev->mt76.wcid)) 65 - return NULL; 66 - 67 - wcid = rcu_dereference(dev->mt76.wcid[idx]); 64 + wcid = mt76_wcid_ptr(dev, idx); 68 65 if (!wcid) 69 66 return NULL; 70 67 ··· 1246 1249 u16 idx; 1247 1250 1248 1251 idx = FIELD_GET(MT_TXFREE_INFO_WLAN_ID, info); 1249 - wcid = rcu_dereference(dev->mt76.wcid[idx]); 1252 + wcid = mt76_wcid_ptr(dev, idx); 1250 1253 sta = wcid_to_sta(wcid); 1251 1254 if (!sta) 1252 1255 goto next; ··· 1468 1471 if (pid < MT_PACKET_ID_NO_SKB) 1469 1472 return; 1470 1473 1471 - if (wcidx >= mt7996_wtbl_size(dev)) 1472 - return; 1473 - 1474 1474 rcu_read_lock(); 1475 1475 1476 - wcid = rcu_dereference(dev->mt76.wcid[wcidx]); 1476 + wcid = mt76_wcid_ptr(dev, wcidx); 1477 1477 if (!wcid) 1478 1478 goto out; 1479 1479 ··· 2347 2353 void mt7996_mac_sta_rc_work(struct work_struct *work) 2348 2354 { 2349 2355 struct mt7996_dev *dev = container_of(work, struct mt7996_dev, rc_work); 2350 - struct ieee80211_bss_conf *link_conf; 2351 - struct ieee80211_link_sta *link_sta; 2352 2356 struct mt7996_sta_link *msta_link; 2353 - struct mt7996_vif_link *link; 2354 - struct mt76_vif_link *mlink; 2355 - struct ieee80211_sta *sta; 2356 2357 struct ieee80211_vif *vif; 2357 - struct mt7996_sta *msta; 2358 2358 struct mt7996_vif *mvif; 2359 2359 LIST_HEAD(list); 2360 2360 u32 changed; 2361 - u8 link_id; 2362 2361 2363 - rcu_read_lock(); 2364 2362 spin_lock_bh(&dev->mt76.sta_poll_lock); 2365 2363 list_splice_init(&dev->sta_rc_list, &list); 2366 2364 ··· 2363 2377 2364 2378 changed = msta_link->changed; 2365 2379 msta_link->changed = 0; 2366 - 2367 - sta = wcid_to_sta(&msta_link->wcid); 2368 - link_id = msta_link->wcid.link_id; 2369 - msta = msta_link->sta; 2370 - mvif = msta->vif; 2371 - vif = container_of((void *)mvif, struct ieee80211_vif, drv_priv); 2372 - 2373 - mlink = rcu_dereference(mvif->mt76.link[link_id]); 2374 - if (!mlink) 2375 - continue; 2376 - 2377 - link_sta = rcu_dereference(sta->link[link_id]); 2378 - if (!link_sta) 2379 - continue; 2380 - 2381 - link_conf = rcu_dereference(vif->link_conf[link_id]); 2382 - if (!link_conf) 2383 - continue; 2380 + mvif = msta_link->sta->vif; 2381 + vif = container_of((void *)mvif, struct ieee80211_vif, 2382 + drv_priv); 2384 2383 2385 2384 spin_unlock_bh(&dev->mt76.sta_poll_lock); 2386 - 2387 - link = (struct mt7996_vif_link *)mlink; 2388 2385 2389 2386 if (changed & (IEEE80211_RC_SUPP_RATES_CHANGED | 2390 2387 IEEE80211_RC_NSS_CHANGED | 2391 2388 IEEE80211_RC_BW_CHANGED)) 2392 - mt7996_mcu_add_rate_ctrl(dev, vif, link_conf, 2393 - link_sta, link, msta_link, 2389 + mt7996_mcu_add_rate_ctrl(dev, msta_link->sta, vif, 2390 + msta_link->wcid.link_id, 2394 2391 true); 2395 2392 2396 2393 if (changed & IEEE80211_RC_SMPS_CHANGED) 2397 - mt7996_mcu_set_fixed_field(dev, link_sta, link, 2398 - msta_link, NULL, 2394 + mt7996_mcu_set_fixed_field(dev, msta_link->sta, NULL, 2395 + msta_link->wcid.link_id, 2399 2396 RATE_PARAM_MMPS_UPDATE); 2400 2397 2401 2398 spin_lock_bh(&dev->mt76.sta_poll_lock); 2402 2399 } 2403 2400 2404 2401 spin_unlock_bh(&dev->mt76.sta_poll_lock); 2405 - rcu_read_unlock(); 2406 2402 } 2407 2403 2408 2404 void mt7996_mac_work(struct work_struct *work)
+2 -3
drivers/net/wireless/mediatek/mt76/mt7996/main.c
··· 1112 1112 if (err) 1113 1113 return err; 1114 1114 1115 - err = mt7996_mcu_add_rate_ctrl(dev, vif, link_conf, 1116 - link_sta, link, 1117 - msta_link, false); 1115 + err = mt7996_mcu_add_rate_ctrl(dev, msta_link->sta, vif, 1116 + link_id, false); 1118 1117 if (err) 1119 1118 return err; 1120 1119
+141 -58
drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
··· 555 555 switch (le16_to_cpu(res->tag)) { 556 556 case UNI_ALL_STA_TXRX_RATE: 557 557 wlan_idx = le16_to_cpu(res->rate[i].wlan_idx); 558 - wcid = rcu_dereference(dev->mt76.wcid[wlan_idx]); 558 + wcid = mt76_wcid_ptr(dev, wlan_idx); 559 559 560 560 if (!wcid) 561 561 break; ··· 565 565 break; 566 566 case UNI_ALL_STA_TXRX_ADM_STAT: 567 567 wlan_idx = le16_to_cpu(res->adm_stat[i].wlan_idx); 568 - wcid = rcu_dereference(dev->mt76.wcid[wlan_idx]); 568 + wcid = mt76_wcid_ptr(dev, wlan_idx); 569 569 570 570 if (!wcid) 571 571 break; ··· 579 579 break; 580 580 case UNI_ALL_STA_TXRX_MSDU_COUNT: 581 581 wlan_idx = le16_to_cpu(res->msdu_cnt[i].wlan_idx); 582 - wcid = rcu_dereference(dev->mt76.wcid[wlan_idx]); 582 + wcid = mt76_wcid_ptr(dev, wlan_idx); 583 583 584 584 if (!wcid) 585 585 break; ··· 676 676 677 677 e = (void *)skb->data; 678 678 idx = le16_to_cpu(e->wlan_id); 679 - if (idx >= ARRAY_SIZE(dev->mt76.wcid)) 680 - break; 681 - 682 - wcid = rcu_dereference(dev->mt76.wcid[idx]); 679 + wcid = mt76_wcid_ptr(dev, idx); 683 680 if (!wcid || !wcid->sta) 684 681 break; 685 682 ··· 1902 1905 MCU_WM_UNI_CMD(RA), true); 1903 1906 } 1904 1907 1905 - int mt7996_mcu_set_fixed_field(struct mt7996_dev *dev, 1906 - struct ieee80211_link_sta *link_sta, 1907 - struct mt7996_vif_link *link, 1908 - struct mt7996_sta_link *msta_link, 1909 - void *data, u32 field) 1908 + int mt7996_mcu_set_fixed_field(struct mt7996_dev *dev, struct mt7996_sta *msta, 1909 + void *data, u8 link_id, u32 field) 1910 1910 { 1911 - struct sta_phy_uni *phy = data; 1911 + struct mt7996_vif *mvif = msta->vif; 1912 + struct mt7996_sta_link *msta_link; 1912 1913 struct sta_rec_ra_fixed_uni *ra; 1914 + struct sta_phy_uni *phy = data; 1915 + struct mt76_vif_link *mlink; 1913 1916 struct sk_buff *skb; 1917 + int err = -ENODEV; 1914 1918 struct tlv *tlv; 1915 1919 1916 - skb = __mt76_connac_mcu_alloc_sta_req(&dev->mt76, &link->mt76, 1920 + rcu_read_lock(); 1921 + 1922 + mlink = rcu_dereference(mvif->mt76.link[link_id]); 1923 + if (!mlink) 1924 + goto error_unlock; 1925 + 1926 + msta_link = rcu_dereference(msta->link[link_id]); 1927 + if (!msta_link) 1928 + goto error_unlock; 1929 + 1930 + skb = __mt76_connac_mcu_alloc_sta_req(&dev->mt76, mlink, 1917 1931 &msta_link->wcid, 1918 1932 MT7996_STA_UPDATE_MAX_SIZE); 1919 - if (IS_ERR(skb)) 1920 - return PTR_ERR(skb); 1933 + if (IS_ERR(skb)) { 1934 + err = PTR_ERR(skb); 1935 + goto error_unlock; 1936 + } 1921 1937 1922 1938 tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_RA_UPDATE, sizeof(*ra)); 1923 1939 ra = (struct sta_rec_ra_fixed_uni *)tlv; ··· 1945 1935 if (phy) 1946 1936 ra->phy = *phy; 1947 1937 break; 1948 - case RATE_PARAM_MMPS_UPDATE: 1938 + case RATE_PARAM_MMPS_UPDATE: { 1939 + struct ieee80211_sta *sta = wcid_to_sta(&msta_link->wcid); 1940 + struct ieee80211_link_sta *link_sta; 1941 + 1942 + link_sta = rcu_dereference(sta->link[link_id]); 1943 + if (!link_sta) { 1944 + dev_kfree_skb(skb); 1945 + goto error_unlock; 1946 + } 1947 + 1949 1948 ra->mmps_mode = mt7996_mcu_get_mmps_mode(link_sta->smps_mode); 1950 1949 break; 1950 + } 1951 1951 default: 1952 1952 break; 1953 1953 } 1954 1954 ra->field = cpu_to_le32(field); 1955 1955 1956 + rcu_read_unlock(); 1957 + 1956 1958 return mt76_mcu_skb_send_msg(&dev->mt76, skb, 1957 1959 MCU_WMWA_UNI_CMD(STA_REC_UPDATE), true); 1960 + error_unlock: 1961 + rcu_read_unlock(); 1962 + 1963 + return err; 1958 1964 } 1959 1965 1960 1966 static int 1961 - mt7996_mcu_add_rate_ctrl_fixed(struct mt7996_dev *dev, 1962 - struct ieee80211_link_sta *link_sta, 1963 - 
struct mt7996_vif_link *link, 1964 - struct mt7996_sta_link *msta_link) 1967 + mt7996_mcu_add_rate_ctrl_fixed(struct mt7996_dev *dev, struct mt7996_sta *msta, 1968 + struct ieee80211_vif *vif, u8 link_id) 1965 1969 { 1966 - struct cfg80211_chan_def *chandef = &link->phy->mt76->chandef; 1967 - struct cfg80211_bitrate_mask *mask = &link->bitrate_mask; 1968 - enum nl80211_band band = chandef->chan->band; 1970 + struct ieee80211_link_sta *link_sta; 1971 + struct cfg80211_bitrate_mask mask; 1972 + struct mt7996_sta_link *msta_link; 1973 + struct mt7996_vif_link *link; 1969 1974 struct sta_phy_uni phy = {}; 1970 - int ret, nrates = 0; 1975 + struct ieee80211_sta *sta; 1976 + int ret, nrates = 0, idx; 1977 + enum nl80211_band band; 1978 + bool has_he; 1971 1979 1972 1980 #define __sta_phy_bitrate_mask_check(_mcs, _gi, _ht, _he) \ 1973 1981 do { \ 1974 - u8 i, gi = mask->control[band]._gi; \ 1982 + u8 i, gi = mask.control[band]._gi; \ 1975 1983 gi = (_he) ? gi : gi == NL80211_TXRATE_FORCE_SGI; \ 1976 1984 phy.sgi = gi; \ 1977 - phy.he_ltf = mask->control[band].he_ltf; \ 1978 - for (i = 0; i < ARRAY_SIZE(mask->control[band]._mcs); i++) { \ 1979 - if (!mask->control[band]._mcs[i]) \ 1985 + phy.he_ltf = mask.control[band].he_ltf; \ 1986 + for (i = 0; i < ARRAY_SIZE(mask.control[band]._mcs); i++) { \ 1987 + if (!mask.control[band]._mcs[i]) \ 1980 1988 continue; \ 1981 - nrates += hweight16(mask->control[band]._mcs[i]); \ 1982 - phy.mcs = ffs(mask->control[band]._mcs[i]) - 1; \ 1989 + nrates += hweight16(mask.control[band]._mcs[i]); \ 1990 + phy.mcs = ffs(mask.control[band]._mcs[i]) - 1; \ 1983 1991 if (_ht) \ 1984 1992 phy.mcs += 8 * i; \ 1985 1993 } \ 1986 1994 } while (0) 1987 1995 1988 - if (link_sta->he_cap.has_he) { 1996 + rcu_read_lock(); 1997 + 1998 + link = mt7996_vif_link(dev, vif, link_id); 1999 + if (!link) 2000 + goto error_unlock; 2001 + 2002 + msta_link = rcu_dereference(msta->link[link_id]); 2003 + if (!msta_link) 2004 + goto error_unlock; 2005 + 2006 + sta = wcid_to_sta(&msta_link->wcid); 2007 + link_sta = rcu_dereference(sta->link[link_id]); 2008 + if (!link_sta) 2009 + goto error_unlock; 2010 + 2011 + band = link->phy->mt76->chandef.chan->band; 2012 + has_he = link_sta->he_cap.has_he; 2013 + mask = link->bitrate_mask; 2014 + idx = msta_link->wcid.idx; 2015 + 2016 + if (has_he) { 1989 2017 __sta_phy_bitrate_mask_check(he_mcs, he_gi, 0, 1); 1990 2018 } else if (link_sta->vht_cap.vht_supported) { 1991 2019 __sta_phy_bitrate_mask_check(vht_mcs, gi, 0, 0); 1992 2020 } else if (link_sta->ht_cap.ht_supported) { 1993 2021 __sta_phy_bitrate_mask_check(ht_mcs, gi, 1, 0); 1994 2022 } else { 1995 - nrates = hweight32(mask->control[band].legacy); 1996 - phy.mcs = ffs(mask->control[band].legacy) - 1; 2023 + nrates = hweight32(mask.control[band].legacy); 2024 + phy.mcs = ffs(mask.control[band].legacy) - 1; 1997 2025 } 2026 + 2027 + rcu_read_unlock(); 2028 + 1998 2029 #undef __sta_phy_bitrate_mask_check 1999 2030 2000 2031 /* fall back to auto rate control */ 2001 - if (mask->control[band].gi == NL80211_TXRATE_DEFAULT_GI && 2002 - mask->control[band].he_gi == GENMASK(7, 0) && 2003 - mask->control[band].he_ltf == GENMASK(7, 0) && 2032 + if (mask.control[band].gi == NL80211_TXRATE_DEFAULT_GI && 2033 + mask.control[band].he_gi == GENMASK(7, 0) && 2034 + mask.control[band].he_ltf == GENMASK(7, 0) && 2004 2035 nrates != 1) 2005 2036 return 0; 2006 2037 2007 2038 /* fixed single rate */ 2008 2039 if (nrates == 1) { 2009 - ret = mt7996_mcu_set_fixed_field(dev, link_sta, link, 2010 - msta_link, &phy, 2040 + 
ret = mt7996_mcu_set_fixed_field(dev, msta, &phy, link_id, 2011 2041 RATE_PARAM_FIXED_MCS); 2012 2042 if (ret) 2013 2043 return ret; 2014 2044 } 2015 2045 2016 2046 /* fixed GI */ 2017 - if (mask->control[band].gi != NL80211_TXRATE_DEFAULT_GI || 2018 - mask->control[band].he_gi != GENMASK(7, 0)) { 2047 + if (mask.control[band].gi != NL80211_TXRATE_DEFAULT_GI || 2048 + mask.control[band].he_gi != GENMASK(7, 0)) { 2019 2049 u32 addr; 2020 2050 2021 2051 /* firmware updates only TXCMD but doesn't take WTBL into 2022 2052 * account, so driver should update here to reflect the 2023 2053 * actual txrate hardware sends out. 2024 2054 */ 2025 - addr = mt7996_mac_wtbl_lmac_addr(dev, msta_link->wcid.idx, 7); 2026 - if (link_sta->he_cap.has_he) 2055 + addr = mt7996_mac_wtbl_lmac_addr(dev, idx, 7); 2056 + if (has_he) 2027 2057 mt76_rmw_field(dev, addr, GENMASK(31, 24), phy.sgi); 2028 2058 else 2029 2059 mt76_rmw_field(dev, addr, GENMASK(15, 12), phy.sgi); 2030 2060 2031 - ret = mt7996_mcu_set_fixed_field(dev, link_sta, link, 2032 - msta_link, &phy, 2061 + ret = mt7996_mcu_set_fixed_field(dev, msta, &phy, link_id, 2033 2062 RATE_PARAM_FIXED_GI); 2034 2063 if (ret) 2035 2064 return ret; 2036 2065 } 2037 2066 2038 2067 /* fixed HE_LTF */ 2039 - if (mask->control[band].he_ltf != GENMASK(7, 0)) { 2040 - ret = mt7996_mcu_set_fixed_field(dev, link_sta, link, 2041 - msta_link, &phy, 2068 + if (mask.control[band].he_ltf != GENMASK(7, 0)) { 2069 + ret = mt7996_mcu_set_fixed_field(dev, msta, &phy, link_id, 2042 2070 RATE_PARAM_FIXED_HE_LTF); 2043 2071 if (ret) 2044 2072 return ret; 2045 2073 } 2046 2074 2047 2075 return 0; 2076 + 2077 + error_unlock: 2078 + rcu_read_unlock(); 2079 + 2080 + return -ENODEV; 2048 2081 } 2049 2082 2050 2083 static void ··· 2198 2145 memset(ra->rx_rcpi, INIT_RCPI, sizeof(ra->rx_rcpi)); 2199 2146 } 2200 2147 2201 - int mt7996_mcu_add_rate_ctrl(struct mt7996_dev *dev, 2202 - struct ieee80211_vif *vif, 2203 - struct ieee80211_bss_conf *link_conf, 2204 - struct ieee80211_link_sta *link_sta, 2205 - struct mt7996_vif_link *link, 2206 - struct mt7996_sta_link *msta_link, bool changed) 2148 + int mt7996_mcu_add_rate_ctrl(struct mt7996_dev *dev, struct mt7996_sta *msta, 2149 + struct ieee80211_vif *vif, u8 link_id, 2150 + bool changed) 2207 2151 { 2152 + struct ieee80211_bss_conf *link_conf; 2153 + struct ieee80211_link_sta *link_sta; 2154 + struct mt7996_sta_link *msta_link; 2155 + struct mt7996_vif_link *link; 2156 + struct ieee80211_sta *sta; 2208 2157 struct sk_buff *skb; 2209 - int ret; 2158 + int ret = -ENODEV; 2159 + 2160 + rcu_read_lock(); 2161 + 2162 + link = mt7996_vif_link(dev, vif, link_id); 2163 + if (!link) 2164 + goto error_unlock; 2165 + 2166 + msta_link = rcu_dereference(msta->link[link_id]); 2167 + if (!msta_link) 2168 + goto error_unlock; 2169 + 2170 + sta = wcid_to_sta(&msta_link->wcid); 2171 + link_sta = rcu_dereference(sta->link[link_id]); 2172 + if (!link_sta) 2173 + goto error_unlock; 2174 + 2175 + link_conf = rcu_dereference(vif->link_conf[link_id]); 2176 + if (!link_conf) 2177 + goto error_unlock; 2210 2178 2211 2179 skb = __mt76_connac_mcu_alloc_sta_req(&dev->mt76, &link->mt76, 2212 2180 &msta_link->wcid, 2213 2181 MT7996_STA_UPDATE_MAX_SIZE); 2214 - if (IS_ERR(skb)) 2215 - return PTR_ERR(skb); 2182 + if (IS_ERR(skb)) { 2183 + ret = PTR_ERR(skb); 2184 + goto error_unlock; 2185 + } 2216 2186 2217 2187 /* firmware rc algorithm refers to sta_rec_he for HE control. 
2218 2188 * once dev->rc_work changes the settings driver should also ··· 2249 2173 */ 2250 2174 mt7996_mcu_sta_rate_ctrl_tlv(skb, dev, vif, link_conf, link_sta, link); 2251 2175 2176 + rcu_read_unlock(); 2177 + 2252 2178 ret = mt76_mcu_skb_send_msg(&dev->mt76, skb, 2253 2179 MCU_WMWA_UNI_CMD(STA_REC_UPDATE), true); 2254 2180 if (ret) 2255 2181 return ret; 2256 2182 2257 - return mt7996_mcu_add_rate_ctrl_fixed(dev, link_sta, link, msta_link); 2183 + return mt7996_mcu_add_rate_ctrl_fixed(dev, msta, vif, link_id); 2184 + 2185 + error_unlock: 2186 + rcu_read_unlock(); 2187 + 2188 + return ret; 2258 2189 } 2259 2190 2260 2191 static int
+5 -11
drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
··· 620 620 int mt7996_mcu_add_obss_spr(struct mt7996_phy *phy, 621 621 struct mt7996_vif_link *link, 622 622 struct ieee80211_he_obss_pd *he_obss_pd); 623 - int mt7996_mcu_add_rate_ctrl(struct mt7996_dev *dev, 624 - struct ieee80211_vif *vif, 625 - struct ieee80211_bss_conf *link_conf, 626 - struct ieee80211_link_sta *link_sta, 627 - struct mt7996_vif_link *link, 628 - struct mt7996_sta_link *msta_link, bool changed); 623 + int mt7996_mcu_add_rate_ctrl(struct mt7996_dev *dev, struct mt7996_sta *msta, 624 + struct ieee80211_vif *vif, u8 link_id, 625 + bool changed); 629 626 int mt7996_set_channel(struct mt76_phy *mphy); 630 627 int mt7996_mcu_set_chan_info(struct mt7996_phy *phy, u16 tag); 631 628 int mt7996_mcu_set_tx(struct mt7996_dev *dev, struct ieee80211_vif *vif, 632 629 struct ieee80211_bss_conf *link_conf); 633 630 int mt7996_mcu_set_fixed_rate_ctrl(struct mt7996_dev *dev, 634 631 void *data, u16 version); 635 - int mt7996_mcu_set_fixed_field(struct mt7996_dev *dev, 636 - struct ieee80211_link_sta *link_sta, 637 - struct mt7996_vif_link *link, 638 - struct mt7996_sta_link *msta_link, 639 - void *data, u32 field); 632 + int mt7996_mcu_set_fixed_field(struct mt7996_dev *dev, struct mt7996_sta *msta, 633 + void *data, u8 link_id, u32 field); 640 634 int mt7996_mcu_set_eeprom(struct mt7996_dev *dev); 641 635 int mt7996_mcu_get_eeprom(struct mt7996_dev *dev, u32 offset, u8 *buf, u32 buf_len); 642 636 int mt7996_mcu_get_eeprom_free_block(struct mt7996_dev *dev, u8 *block_num);
+5 -6
drivers/net/wireless/mediatek/mt76/tx.c
··· 64 64 struct mt76_tx_cb *cb = mt76_tx_skb_cb(skb); 65 65 struct mt76_wcid *wcid; 66 66 67 - wcid = rcu_dereference(dev->wcid[cb->wcid]); 67 + wcid = __mt76_wcid_ptr(dev, cb->wcid); 68 68 if (wcid) { 69 69 status.sta = wcid_to_sta(wcid); 70 70 if (status.sta && (wcid->rate.flags || wcid->rate.legacy)) { ··· 251 251 252 252 rcu_read_lock(); 253 253 254 - if (wcid_idx < ARRAY_SIZE(dev->wcid)) 255 - wcid = rcu_dereference(dev->wcid[wcid_idx]); 256 - 254 + wcid = __mt76_wcid_ptr(dev, wcid_idx); 257 255 mt76_tx_check_non_aql(dev, wcid, skb); 258 256 259 257 #ifdef CONFIG_NL80211_TESTMODE ··· 536 538 break; 537 539 538 540 mtxq = (struct mt76_txq *)txq->drv_priv; 539 - wcid = rcu_dereference(dev->wcid[mtxq->wcid]); 541 + wcid = __mt76_wcid_ptr(dev, mtxq->wcid); 540 542 if (!wcid || test_bit(MT_WCID_FLAG_PS, &wcid->flags)) 541 543 continue; 542 544 ··· 615 617 if ((dev->drv->drv_flags & MT_DRV_HW_MGMT_TXQ) && 616 618 !(info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP) && 617 619 !ieee80211_is_data(hdr->frame_control) && 618 - !ieee80211_is_bufferable_mmpdu(skb)) 620 + (!ieee80211_is_bufferable_mmpdu(skb) || 621 + ieee80211_is_deauth(hdr->frame_control))) 619 622 qid = MT_TXQ_PSD; 620 623 621 624 q = phy->q_tx[qid];
+1 -1
drivers/net/wireless/mediatek/mt76/util.c
··· 83 83 if (!(mask & 1)) 84 84 continue; 85 85 86 - wcid = rcu_dereference(dev->wcid[j]); 86 + wcid = __mt76_wcid_ptr(dev, j); 87 87 if (!wcid || wcid->phy_idx != phy_idx) 88 88 continue; 89 89
+1 -3
drivers/net/wireless/ralink/rt2x00/rt2x00soc.c
··· 108 108 } 109 109 EXPORT_SYMBOL_GPL(rt2x00soc_probe); 110 110 111 - int rt2x00soc_remove(struct platform_device *pdev) 111 + void rt2x00soc_remove(struct platform_device *pdev) 112 112 { 113 113 struct ieee80211_hw *hw = platform_get_drvdata(pdev); 114 114 struct rt2x00_dev *rt2x00dev = hw->priv; ··· 119 119 rt2x00lib_remove_dev(rt2x00dev); 120 120 rt2x00soc_free_reg(rt2x00dev); 121 121 ieee80211_free_hw(hw); 122 - 123 - return 0; 124 122 } 125 123 EXPORT_SYMBOL_GPL(rt2x00soc_remove); 126 124
+1 -1
drivers/net/wireless/ralink/rt2x00/rt2x00soc.h
··· 17 17 * SoC driver handlers. 18 18 */ 19 19 int rt2x00soc_probe(struct platform_device *pdev, const struct rt2x00_ops *ops); 20 - int rt2x00soc_remove(struct platform_device *pdev); 20 + void rt2x00soc_remove(struct platform_device *pdev); 21 21 #ifdef CONFIG_PM 22 22 int rt2x00soc_suspend(struct platform_device *pdev, pm_message_t state); 23 23 int rt2x00soc_resume(struct platform_device *pdev);
+5 -1
drivers/net/wireless/zydas/zd1211rw/zd_mac.c
··· 583 583 584 584 skb_queue_tail(q, skb); 585 585 while (skb_queue_len(q) > ZD_MAC_MAX_ACK_WAITERS) { 586 - zd_mac_tx_status(hw, skb_dequeue(q), 586 + skb = skb_dequeue(q); 587 + if (!skb) 588 + break; 589 + 590 + zd_mac_tx_status(hw, skb, 587 591 mac->ack_pending ? mac->ack_signal : 0, 588 592 NULL); 589 593 mac->ack_pending = 0;
+2 -2
drivers/pci/controller/pci-host-common.c
··· 64 64 65 65 of_pci_check_probe_only(); 66 66 67 + platform_set_drvdata(pdev, bridge); 68 + 67 69 /* Parse and map our Configuration Space windows */ 68 70 cfg = gen_pci_init(dev, bridge, ops); 69 71 if (IS_ERR(cfg)) 70 72 return PTR_ERR(cfg); 71 - 72 - platform_set_drvdata(pdev, bridge); 73 73 74 74 bridge->sysdata = cfg; 75 75 bridge->ops = (struct pci_ops *)&ops->pci_ops;
+49 -4
drivers/pci/controller/pcie-apple.c
··· 187 187 const struct hw_info *hw; 188 188 unsigned long *bitmap; 189 189 struct list_head ports; 190 + struct list_head entry; 190 191 struct completion event; 191 192 struct irq_fwspec fwspec; 192 193 u32 nvecs; ··· 205 204 int sid_map_sz; 206 205 int idx; 207 206 }; 207 + 208 + static LIST_HEAD(pcie_list); 209 + static DEFINE_MUTEX(pcie_list_lock); 208 210 209 211 static void rmw_set(u32 set, void __iomem *addr) 210 212 { ··· 724 720 return 0; 725 721 } 726 722 723 + static void apple_pcie_register(struct apple_pcie *pcie) 724 + { 725 + guard(mutex)(&pcie_list_lock); 726 + 727 + list_add_tail(&pcie->entry, &pcie_list); 728 + } 729 + 730 + static void apple_pcie_unregister(struct apple_pcie *pcie) 731 + { 732 + guard(mutex)(&pcie_list_lock); 733 + 734 + list_del(&pcie->entry); 735 + } 736 + 737 + static struct apple_pcie *apple_pcie_lookup(struct device *dev) 738 + { 739 + struct apple_pcie *pcie; 740 + 741 + guard(mutex)(&pcie_list_lock); 742 + 743 + list_for_each_entry(pcie, &pcie_list, entry) { 744 + if (pcie->dev == dev) 745 + return pcie; 746 + } 747 + 748 + return NULL; 749 + } 750 + 727 751 static struct apple_pcie_port *apple_pcie_get_port(struct pci_dev *pdev) 728 752 { 729 753 struct pci_config_window *cfg = pdev->sysdata; 730 - struct apple_pcie *pcie = cfg->priv; 754 + struct apple_pcie *pcie; 731 755 struct pci_dev *port_pdev; 732 756 struct apple_pcie_port *port; 757 + 758 + pcie = apple_pcie_lookup(cfg->parent); 759 + if (WARN_ON(!pcie)) 760 + return NULL; 733 761 734 762 /* Find the root port this device is on */ 735 763 port_pdev = pcie_find_root_port(pdev); ··· 842 806 843 807 static int apple_pcie_init(struct pci_config_window *cfg) 844 808 { 845 - struct apple_pcie *pcie = cfg->priv; 846 809 struct device *dev = cfg->parent; 810 + struct apple_pcie *pcie; 847 811 int ret; 812 + 813 + pcie = apple_pcie_lookup(dev); 814 + if (WARN_ON(!pcie)) 815 + return -ENOENT; 848 816 849 817 for_each_available_child_of_node_scoped(dev->of_node, of_port) { 850 818 ret = apple_pcie_setup_port(pcie, of_port); ··· 892 852 893 853 mutex_init(&pcie->lock); 894 854 INIT_LIST_HEAD(&pcie->ports); 895 - dev_set_drvdata(dev, pcie); 896 855 897 856 ret = apple_msi_init(pcie); 898 857 if (ret) 899 858 return ret; 900 859 901 - return pci_host_common_init(pdev, &apple_pcie_cfg_ecam_ops); 860 + apple_pcie_register(pcie); 861 + 862 + ret = pci_host_common_init(pdev, &apple_pcie_cfg_ecam_ops); 863 + if (ret) 864 + apple_pcie_unregister(pcie); 865 + 866 + return ret; 902 867 } 903 868 904 869 static const struct of_device_id apple_pcie_of_match[] = {
-2
drivers/pci/ecam.c
··· 84 84 goto err_exit_iomap; 85 85 } 86 86 87 - cfg->priv = dev_get_drvdata(dev); 88 - 89 87 if (ops->init) { 90 88 err = ops->init(cfg); 91 89 if (err)
+3 -1
drivers/pci/msi/msi.c
··· 934 934 if (!pdev->msix_enabled) 935 935 return -ENXIO; 936 936 937 - guard(msi_descs_lock)(&pdev->dev); 938 937 virq = msi_get_virq(&pdev->dev, index); 939 938 if (!virq) 940 939 return -ENXIO; 940 + 941 + guard(msi_descs_lock)(&pdev->dev); 942 + 941 943 /* 942 944 * This is a horrible hack, but short of implementing a PCI 943 945 * specific interrupt chip callback and a huge pile of
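guard(msi_descs_lock) is one of the scope-based locking helpers built on <linux/cleanup.h>: the lock is acquired where the guard is declared and released automatically on every exit from the enclosing scope. Moving the declaration below the msi_get_virq() lookup therefore keeps the early -ENXIO return from taking and dropping the descriptor lock at all. A rough open-coded equivalent of the reordered body, assuming the guard wraps the msi_lock_descs()/msi_unlock_descs() pair:

	virq = msi_get_virq(&pdev->dev, index);
	if (!virq)
		return -ENXIO;			/* lock never taken on this path */

	msi_lock_descs(&pdev->dev);		/* what the guard acquires */
	/* ... the "horrible hack" described in the comment above ... */
	msi_unlock_descs(&pdev->dev);		/* dropped automatically at scope exit */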
+8 -2
drivers/pinctrl/nuvoton/pinctrl-ma35.c
··· 1074 1074 u32 idx = 0; 1075 1075 int ret; 1076 1076 1077 - for_each_gpiochip_node(dev, child) { 1077 + device_for_each_child_node(dev, child) { 1078 + if (fwnode_property_present(child, "gpio-controller")) 1079 + continue; 1080 + 1078 1081 npctl->nfunctions++; 1079 1082 npctl->ngroups += of_get_child_count(to_of_node(child)); 1080 1083 } ··· 1095 1092 if (!npctl->groups) 1096 1093 return -ENOMEM; 1097 1094 1098 - for_each_gpiochip_node(dev, child) { 1095 + device_for_each_child_node(dev, child) { 1096 + if (fwnode_property_present(child, "gpio-controller")) 1097 + continue; 1098 + 1099 1099 ret = ma35_pinctrl_parse_functions(child, npctl, idx++); 1100 1100 if (ret) { 1101 1101 fwnode_handle_put(child);
+11
drivers/pinctrl/pinctrl-amd.c
··· 979 979 pin, is_suspend ? "suspend" : "hibernate"); 980 980 } 981 981 982 + /* 983 + * debounce enabled over suspend has shown issues with a GPIO 984 + * being unable to wake the system, as we're only interested in 985 + * the actual wakeup event, clear it. 986 + */ 987 + if (gpio_dev->saved_regs[i] & (DB_CNTRl_MASK << DB_CNTRL_OFF)) { 988 + amd_gpio_set_debounce(gpio_dev, pin, 0); 989 + pm_pr_dbg("Clearing debounce for GPIO #%d during %s.\n", 990 + pin, is_suspend ? "suspend" : "hibernate"); 991 + } 992 + 982 993 raw_spin_unlock_irqrestore(&gpio_dev->lock, flags); 983 994 } 984 995
+1 -1
drivers/pinctrl/pinctrl-aw9523.c
··· 784 784 gc->set_config = gpiochip_generic_config; 785 785 gc->parent = dev; 786 786 gc->owner = THIS_MODULE; 787 - gc->can_sleep = false; 787 + gc->can_sleep = true; 788 788 789 789 return 0; 790 790 }
+20
drivers/pinctrl/qcom/pinctrl-msm.c
··· 1038 1038 test_bit(d->hwirq, pctrl->skip_wake_irqs); 1039 1039 } 1040 1040 1041 + static void msm_gpio_irq_init_valid_mask(struct gpio_chip *gc, 1042 + unsigned long *valid_mask, 1043 + unsigned int ngpios) 1044 + { 1045 + struct msm_pinctrl *pctrl = gpiochip_get_data(gc); 1046 + const struct msm_pingroup *g; 1047 + int i; 1048 + 1049 + bitmap_fill(valid_mask, ngpios); 1050 + 1051 + for (i = 0; i < ngpios; i++) { 1052 + g = &pctrl->soc->groups[i]; 1053 + 1054 + if (g->intr_detection_width != 1 && 1055 + g->intr_detection_width != 2) 1056 + clear_bit(i, valid_mask); 1057 + } 1058 + } 1059 + 1041 1060 static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type) 1042 1061 { 1043 1062 struct gpio_chip *gc = irq_data_get_irq_chip_data(d); ··· 1460 1441 girq->default_type = IRQ_TYPE_NONE; 1461 1442 girq->handler = handle_bad_irq; 1462 1443 girq->parents[0] = pctrl->irq; 1444 + girq->init_valid_mask = msm_gpio_irq_init_valid_mask; 1463 1445 1464 1446 ret = devm_gpiochip_add_data(pctrl->dev, &pctrl->chip, pctrl); 1465 1447 if (ret) {
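The new msm_gpio_irq_init_valid_mask() starts from a full bitmap and clears every GPIO whose pingroup supports neither 1-bit nor 2-bit interrupt detection, so gpiolib will refuse to hand out an IRQ mapping for those lines. A small user-space model of the filtering, with made-up width values purely for illustration:

	#include <stdio.h>

	int main(void)
	{
		int widths[] = { 2, 2, 0, 1, 4, 2 };	/* per-GPIO intr_detection_width */
		unsigned long valid_mask = ~0UL;	/* bitmap_fill() */
		unsigned int i;

		for (i = 0; i < sizeof(widths) / sizeof(widths[0]); i++)
			if (widths[i] != 1 && widths[i] != 2)
				valid_mask &= ~(1UL << i);	/* clear_bit() */

		printf("irq-capable gpios: %#lx\n", valid_mask & 0x3f);	/* -> 0x2b */
		return 0;
	}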
+1 -1
drivers/pwm/core.c
··· 596 596 * and supposed to be ignored. So also ignore any strange values and 597 597 * consider the state ok. 598 598 */ 599 - if (state->enabled) 599 + if (!state->enabled) 600 600 return true; 601 601 602 602 if (!state->period)
+8 -5
drivers/pwm/pwm-mediatek.c
··· 130 130 return ret; 131 131 132 132 clk_rate = clk_get_rate(pc->clk_pwms[pwm->hwpwm]); 133 - if (!clk_rate) 134 - return -EINVAL; 133 + if (!clk_rate) { 134 + ret = -EINVAL; 135 + goto out; 136 + } 135 137 136 138 /* Make sure we use the bus clock and not the 26MHz clock */ 137 139 if (pc->soc->has_ck_26m_sel) ··· 152 150 } 153 151 154 152 if (clkdiv > PWM_CLK_DIV_MAX) { 155 - pwm_mediatek_clk_disable(chip, pwm); 156 153 dev_err(pwmchip_parent(chip), "period of %d ns not supported\n", period_ns); 157 - return -EINVAL; 154 + ret = -EINVAL; 155 + goto out; 158 156 } 159 157 160 158 if (pc->soc->pwm45_fixup && pwm->hwpwm > 2) { ··· 171 169 pwm_mediatek_writel(pc, pwm->hwpwm, reg_width, cnt_period); 172 170 pwm_mediatek_writel(pc, pwm->hwpwm, reg_thres, cnt_duty); 173 171 172 + out: 174 173 pwm_mediatek_clk_disable(chip, pwm); 175 174 176 - return 0; 175 + return ret; 177 176 } 178 177 179 178 static int pwm_mediatek_enable(struct pwm_chip *chip, struct pwm_device *pwm)
+13 -13
fs/bcachefs/btree_cache.c
··· 85 85 six_unlock_intent(&b->c.lock); 86 86 } 87 87 88 - static void __btree_node_data_free(struct btree_cache *bc, struct btree *b) 88 + void __btree_node_data_free(struct btree *b) 89 89 { 90 90 BUG_ON(!list_empty(&b->list)); 91 91 BUG_ON(btree_node_hashed(b)); ··· 112 112 munmap(b->aux_data, btree_aux_data_bytes(b)); 113 113 #endif 114 114 b->aux_data = NULL; 115 - 116 - btree_node_to_freedlist(bc, b); 117 115 } 118 116 119 117 static void btree_node_data_free(struct btree_cache *bc, struct btree *b) 120 118 { 121 119 BUG_ON(list_empty(&b->list)); 122 120 list_del_init(&b->list); 121 + 122 + __btree_node_data_free(b); 123 + 123 124 --bc->nr_freeable; 124 - __btree_node_data_free(bc, b); 125 + btree_node_to_freedlist(bc, b); 125 126 } 126 127 127 128 static int bch2_btree_cache_cmp_fn(struct rhashtable_compare_arg *arg, ··· 186 185 187 186 struct btree *__bch2_btree_node_mem_alloc(struct bch_fs *c) 188 187 { 189 - struct btree_cache *bc = &c->btree_cache; 190 - struct btree *b; 191 - 192 - b = __btree_node_mem_alloc(c, GFP_KERNEL); 188 + struct btree *b = __btree_node_mem_alloc(c, GFP_KERNEL); 193 189 if (!b) 194 190 return NULL; 195 191 ··· 196 198 } 197 199 198 200 bch2_btree_lock_init(&b->c, 0, GFP_KERNEL); 199 - 200 - __bch2_btree_node_to_freelist(bc, b); 201 201 return b; 202 202 } 203 203 ··· 520 524 --touched;; 521 525 } else if (!btree_node_reclaim(c, b)) { 522 526 __bch2_btree_node_hash_remove(bc, b); 523 - __btree_node_data_free(bc, b); 527 + __btree_node_data_free(b); 528 + btree_node_to_freedlist(bc, b); 524 529 525 530 freed++; 526 531 bc->nr_freed++; ··· 649 652 650 653 bch2_recalc_btree_reserve(c); 651 654 652 - for (i = 0; i < bc->nr_reserve; i++) 653 - if (!__bch2_btree_node_mem_alloc(c)) 655 + for (i = 0; i < bc->nr_reserve; i++) { 656 + struct btree *b = __bch2_btree_node_mem_alloc(c); 657 + if (!b) 654 658 goto err; 659 + __bch2_btree_node_to_freelist(bc, b); 660 + } 655 661 656 662 list_splice_init(&bc->live[0].list, &bc->freeable); 657 663
+1
fs/bcachefs/btree_cache.h
··· 30 30 void bch2_btree_cache_cannibalize_unlock(struct btree_trans *); 31 31 int bch2_btree_cache_cannibalize_lock(struct btree_trans *, struct closure *); 32 32 33 + void __btree_node_data_free(struct btree *); 33 34 struct btree *__bch2_btree_node_mem_alloc(struct bch_fs *); 34 35 struct btree *bch2_btree_node_mem_alloc(struct btree_trans *, bool); 35 36
+3 -5
fs/bcachefs/btree_io.c
··· 568 568 bch2_mark_btree_validate_failure(failed, ca->dev_idx); 569 569 570 570 struct extent_ptr_decoded pick; 571 - have_retry = !bch2_bkey_pick_read_device(c, 571 + have_retry = bch2_bkey_pick_read_device(c, 572 572 bkey_i_to_s_c(&b->key), 573 - failed, &pick, -1); 573 + failed, &pick, -1) == 1; 574 574 } 575 575 576 576 if (!have_retry && ret == -BCH_ERR_btree_node_read_err_want_retry) ··· 615 615 goto out; 616 616 case -BCH_ERR_btree_node_read_err_bad_node: 617 617 prt_str(&out, ", "); 618 - ret = __bch2_topology_error(c, &out); 619 618 break; 620 619 } 621 620 ··· 643 644 goto out; 644 645 case -BCH_ERR_btree_node_read_err_bad_node: 645 646 prt_str(&out, ", "); 646 - ret = __bch2_topology_error(c, &out); 647 647 break; 648 648 } 649 649 print: ··· 1406 1408 ret = bch2_bkey_pick_read_device(c, 1407 1409 bkey_i_to_s_c(&b->key), 1408 1410 &failed, &rb->pick, -1); 1409 - if (ret) { 1411 + if (ret <= 0) { 1410 1412 set_btree_node_read_error(b); 1411 1413 break; 1412 1414 }
+41 -43
fs/bcachefs/btree_node_scan.c
··· 75 75 } 76 76 } 77 77 78 - static bool found_btree_node_is_readable(struct btree_trans *trans, 79 - struct found_btree_node *f) 80 - { 81 - struct { __BKEY_PADDED(k, BKEY_BTREE_PTR_VAL_U64s_MAX); } tmp; 82 - 83 - found_btree_node_to_key(&tmp.k, f); 84 - 85 - struct btree *b = bch2_btree_node_get_noiter(trans, &tmp.k, f->btree_id, f->level, false); 86 - bool ret = !IS_ERR_OR_NULL(b); 87 - if (!ret) 88 - return ret; 89 - 90 - f->sectors_written = b->written; 91 - f->journal_seq = le64_to_cpu(b->data->keys.journal_seq); 92 - 93 - struct bkey_s_c k; 94 - struct bkey unpacked; 95 - struct btree_node_iter iter; 96 - for_each_btree_node_key_unpack(b, k, &iter, &unpacked) 97 - f->journal_seq = max(f->journal_seq, bkey_journal_seq(k)); 98 - 99 - six_unlock_read(&b->c.lock); 100 - 101 - /* 102 - * We might update this node's range; if that happens, we need the node 103 - * to be re-read so the read path can trim keys that are no longer in 104 - * this node 105 - */ 106 - if (b != btree_node_root(trans->c, b)) 107 - bch2_btree_node_evict(trans, &tmp.k); 108 - return ret; 109 - } 110 - 111 78 static int found_btree_node_cmp_cookie(const void *_l, const void *_r) 112 79 { 113 80 const struct found_btree_node *l = _l; ··· 126 159 }; 127 160 128 161 static void try_read_btree_node(struct find_btree_nodes *f, struct bch_dev *ca, 129 - struct bio *bio, struct btree_node *bn, u64 offset) 162 + struct btree *b, struct bio *bio, u64 offset) 130 163 { 131 164 struct bch_fs *c = container_of(f, struct bch_fs, found_btree_nodes); 165 + struct btree_node *bn = b->data; 132 166 133 167 bio_reset(bio, ca->disk_sb.bdev, REQ_OP_READ); 134 168 bio->bi_iter.bi_sector = offset; 135 - bch2_bio_map(bio, bn, PAGE_SIZE); 169 + bch2_bio_map(bio, b->data, c->opts.block_size); 136 170 137 171 u64 submit_time = local_clock(); 138 172 submit_bio_wait(bio); 139 - 140 173 bch2_account_io_completion(ca, BCH_MEMBER_ERROR_read, submit_time, !bio->bi_status); 141 174 142 175 if (bio->bi_status) { ··· 168 201 if (BTREE_NODE_ID(bn) >= BTREE_ID_NR_MAX) 169 202 return; 170 203 204 + bio_reset(bio, ca->disk_sb.bdev, REQ_OP_READ); 205 + bio->bi_iter.bi_sector = offset; 206 + bch2_bio_map(bio, b->data, c->opts.btree_node_size); 207 + 208 + submit_time = local_clock(); 209 + submit_bio_wait(bio); 210 + bch2_account_io_completion(ca, BCH_MEMBER_ERROR_read, submit_time, !bio->bi_status); 211 + 171 212 rcu_read_lock(); 172 213 struct found_btree_node n = { 173 214 .btree_id = BTREE_NODE_ID(bn), ··· 192 217 }; 193 218 rcu_read_unlock(); 194 219 195 - if (bch2_trans_run(c, found_btree_node_is_readable(trans, &n))) { 220 + found_btree_node_to_key(&b->key, &n); 221 + 222 + CLASS(printbuf, buf)(); 223 + if (!bch2_btree_node_read_done(c, ca, b, NULL, &buf)) { 224 + /* read_done will swap out b->data for another buffer */ 225 + bn = b->data; 226 + /* 227 + * Grab journal_seq here because we want the max journal_seq of 228 + * any bset; read_done sorts down to a single set and picks the 229 + * max journal_seq 230 + */ 231 + n.journal_seq = le64_to_cpu(bn->keys.journal_seq), 232 + n.sectors_written = b->written; 233 + 196 234 mutex_lock(&f->lock); 197 235 if (BSET_BIG_ENDIAN(&bn->keys) != CPU_BIG_ENDIAN) { 198 236 bch_err(c, "try_read_btree_node() can't handle endian conversion"); ··· 225 237 struct find_btree_nodes_worker *w = p; 226 238 struct bch_fs *c = container_of(w->f, struct bch_fs, found_btree_nodes); 227 239 struct bch_dev *ca = w->ca; 228 - void *buf = (void *) __get_free_page(GFP_KERNEL); 229 - struct bio *bio = bio_alloc(NULL, 1, 0, 
GFP_KERNEL); 230 240 unsigned long last_print = jiffies; 241 + struct btree *b = NULL; 242 + struct bio *bio = NULL; 231 243 232 - if (!buf || !bio) { 233 - bch_err(c, "read_btree_nodes_worker: error allocating bio/buf"); 244 + b = __bch2_btree_node_mem_alloc(c); 245 + if (!b) { 246 + bch_err(c, "read_btree_nodes_worker: error allocating buf"); 247 + w->f->ret = -ENOMEM; 248 + goto err; 249 + } 250 + 251 + bio = bio_alloc(NULL, buf_pages(b->data, c->opts.btree_node_size), 0, GFP_KERNEL); 252 + if (!bio) { 253 + bch_err(c, "read_btree_nodes_worker: error allocating bio"); 234 254 w->f->ret = -ENOMEM; 235 255 goto err; 236 256 } ··· 262 266 !bch2_dev_btree_bitmap_marked_sectors(ca, sector, btree_sectors(c))) 263 267 continue; 264 268 265 - try_read_btree_node(w->f, ca, bio, buf, sector); 269 + try_read_btree_node(w->f, ca, b, bio, sector); 266 270 } 267 271 err: 272 + if (b) 273 + __btree_node_data_free(b); 274 + kfree(b); 268 275 bio_put(bio); 269 - free_page((unsigned long) buf); 270 276 enumerated_ref_put(&ca->io_ref[READ], BCH_DEV_READ_REF_btree_node_scan); 271 277 closure_put(w->cl); 272 278 kfree(w);
+9 -2
fs/bcachefs/debug.c
··· 153 153 c->verify_data = __bch2_btree_node_mem_alloc(c); 154 154 if (!c->verify_data) 155 155 goto out; 156 - 157 - list_del_init(&c->verify_data->list); 158 156 } 159 157 160 158 BUG_ON(b->nsets != 1); ··· 584 586 i->ubuf = buf; 585 587 i->size = size; 586 588 i->ret = 0; 589 + 590 + int srcu_idx = srcu_read_lock(&c->btree_trans_barrier); 587 591 restart: 588 592 seqmutex_lock(&c->btree_trans_lock); 589 593 list_sort(&c->btree_trans_list, list_ptr_order_cmp); ··· 598 598 599 599 if (!closure_get_not_zero(&trans->ref)) 600 600 continue; 601 + 602 + if (!trans->srcu_held) { 603 + closure_put(&trans->ref); 604 + continue; 605 + } 601 606 602 607 u32 seq = seqmutex_unlock(&c->btree_trans_lock); 603 608 ··· 625 620 } 626 621 seqmutex_unlock(&c->btree_trans_lock); 627 622 unlocked: 623 + srcu_read_unlock(&c->btree_trans_barrier, srcu_idx); 624 + 628 625 if (i->buf.allocation_failure) 629 626 ret = -ENOMEM; 630 627
-1
fs/bcachefs/errcode.h
··· 282 282 x(EIO, sb_not_downgraded) \ 283 283 x(EIO, btree_node_write_all_failed) \ 284 284 x(EIO, btree_node_read_error) \ 285 - x(EIO, btree_node_read_validate_error) \ 286 285 x(EIO, btree_need_topology_repair) \ 287 286 x(EIO, bucket_ref_update) \ 288 287 x(EIO, trigger_alloc) \
+4 -2
fs/bcachefs/error.c
··· 103 103 return bch_err_throw(c, btree_need_topology_repair); 104 104 } else { 105 105 return bch2_run_explicit_recovery_pass(c, out, BCH_RECOVERY_PASS_check_topology, 0) ?: 106 - bch_err_throw(c, btree_node_read_validate_error); 106 + bch_err_throw(c, btree_need_topology_repair); 107 107 } 108 108 } 109 109 ··· 633 633 * log_fsck_err()s: that would require us to track for every error type 634 634 * which recovery pass corrects it, to get the fsck exit status correct: 635 635 */ 636 - if (bch2_err_matches(ret, BCH_ERR_fsck_fix)) { 636 + if (bch2_err_matches(ret, BCH_ERR_transaction_restart)) { 637 + /* nothing */ 638 + } else if (bch2_err_matches(ret, BCH_ERR_fsck_fix)) { 637 639 set_bit(BCH_FS_errors_fixed, &c->flags); 638 640 } else { 639 641 set_bit(BCH_FS_errors_not_fixed, &c->flags);
+8 -8
fs/bcachefs/extents.c
··· 50 50 struct bch_io_failures *failed) 51 51 { 52 52 static const char * const error_types[] = { 53 - "io", "checksum", "ec reconstruct", NULL 53 + "btree validate", "io", "checksum", "ec reconstruct", NULL 54 54 }; 55 55 56 56 for (struct bch_dev_io_failures *f = failed->devs; 57 57 f < failed->devs + failed->nr; 58 58 f++) { 59 59 unsigned errflags = 60 - ((!!f->failed_io) << 0) | 61 - ((!!f->failed_csum_nr) << 1) | 62 - ((!!f->failed_ec) << 2); 63 - 64 - if (!errflags) 65 - continue; 60 + ((!!f->failed_btree_validate) << 0) | 61 + ((!!f->failed_io) << 1) | 62 + ((!!f->failed_csum_nr) << 2) | 63 + ((!!f->failed_ec) << 3); 66 64 67 65 bch2_printbuf_make_room(out, 1024); 68 66 out->atomic++; ··· 75 77 76 78 prt_char(out, ' '); 77 79 78 - if (is_power_of_2(errflags)) { 80 + if (!errflags) { 81 + prt_str(out, "no error - confused"); 82 + } else if (is_power_of_2(errflags)) { 79 83 prt_bitflags(out, error_types, errflags); 80 84 prt_str(out, " error"); 81 85 } else {
+6 -27
fs/bcachefs/fsck.c
··· 12 12 #include "fs.h" 13 13 #include "fsck.h" 14 14 #include "inode.h" 15 + #include "io_misc.h" 15 16 #include "keylist.h" 16 17 #include "namei.h" 17 18 #include "recovery_passes.h" ··· 1920 1919 "extent type past end of inode %llu:%u, i_size %llu\n%s", 1921 1920 i->inode.bi_inum, i->inode.bi_snapshot, i->inode.bi_size, 1922 1921 (bch2_bkey_val_to_text(&buf, c, k), buf.buf))) { 1923 - struct bkey_i *whiteout = bch2_trans_kmalloc(trans, sizeof(*whiteout)); 1924 - ret = PTR_ERR_OR_ZERO(whiteout); 1925 - if (ret) 1926 - goto err; 1927 - 1928 - bkey_init(&whiteout->k); 1929 - whiteout->k.p = SPOS(k.k->p.inode, 1930 - last_block, 1931 - i->inode.bi_snapshot); 1932 - bch2_key_resize(&whiteout->k, 1933 - min(KEY_SIZE_MAX & (~0 << c->block_bits), 1934 - U64_MAX - whiteout->k.p.offset)); 1935 - 1936 - 1937 - /* 1938 - * Need a normal (not BTREE_ITER_all_snapshots) 1939 - * iterator, if we're deleting in a different 1940 - * snapshot and need to emit a whiteout 1941 - */ 1942 - struct btree_iter iter2; 1943 - bch2_trans_iter_init(trans, &iter2, BTREE_ID_extents, 1944 - bkey_start_pos(&whiteout->k), 1945 - BTREE_ITER_intent); 1946 - ret = bch2_btree_iter_traverse(trans, &iter2) ?: 1947 - bch2_trans_update(trans, &iter2, whiteout, 1948 - BTREE_UPDATE_internal_snapshot_node); 1949 - bch2_trans_iter_exit(trans, &iter2); 1922 + ret = bch2_fpunch_snapshot(trans, 1923 + SPOS(i->inode.bi_inum, 1924 + last_block, 1925 + i->inode.bi_snapshot), 1926 + POS(i->inode.bi_inum, U64_MAX)); 1950 1927 if (ret) 1951 1928 goto err; 1952 1929
+27
fs/bcachefs/io_misc.c
··· 135 135 return ret; 136 136 } 137 137 138 + /* For fsck */ 139 + int bch2_fpunch_snapshot(struct btree_trans *trans, struct bpos start, struct bpos end) 140 + { 141 + u32 restart_count = trans->restart_count; 142 + struct bch_fs *c = trans->c; 143 + struct disk_reservation disk_res = bch2_disk_reservation_init(c, 0); 144 + unsigned max_sectors = KEY_SIZE_MAX & (~0 << c->block_bits); 145 + struct bkey_i delete; 146 + 147 + int ret = for_each_btree_key_max_commit(trans, iter, BTREE_ID_extents, 148 + start, end, 0, k, 149 + &disk_res, NULL, BCH_TRANS_COMMIT_no_enospc, ({ 150 + bkey_init(&delete.k); 151 + delete.k.p = iter.pos; 152 + 153 + /* create the biggest key we can */ 154 + bch2_key_resize(&delete.k, max_sectors); 155 + bch2_cut_back(end, &delete); 156 + 157 + bch2_extent_trim_atomic(trans, &iter, &delete) ?: 158 + bch2_trans_update(trans, &iter, &delete, 0); 159 + })); 160 + 161 + bch2_disk_reservation_put(c, &disk_res); 162 + return ret ?: trans_was_restarted(trans, restart_count); 163 + } 164 + 138 165 /* 139 166 * Returns -BCH_ERR_transacton_restart if we had to drop locks: 140 167 */
+2
fs/bcachefs/io_misc.h
··· 5 5 int bch2_extent_fallocate(struct btree_trans *, subvol_inum, struct btree_iter *, 6 6 u64, struct bch_io_opts, s64 *, 7 7 struct write_point_specifier); 8 + 9 + int bch2_fpunch_snapshot(struct btree_trans *, struct bpos, struct bpos); 8 10 int bch2_fpunch_at(struct btree_trans *, struct btree_iter *, 9 11 subvol_inum, u64, s64 *); 10 12 int bch2_fpunch(struct bch_fs *c, subvol_inum, u64, u64, s64 *);
+6
fs/bcachefs/journal_reclaim.c
··· 170 170 return (struct journal_space) { 0, 0 }; 171 171 172 172 /* 173 + * It's possible for bucket size to be misaligned w.r.t. the filesystem 174 + * block size: 175 + */ 176 + min_bucket_size = round_down(min_bucket_size, block_sectors(c)); 177 + 178 + /* 173 179 * We sorted largest to smallest, and we want the smallest out of the 174 180 * @nr_devs_want largest devices: 175 181 */
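As a worked illustration of the new rounding (numbers invented for the example): with a 4 KiB filesystem block size, block_sectors(c) is 8, so a device whose smallest journal bucket is 1030 sectors now contributes round_down(1030, 8) == 1024 sectors to the space calculation instead of an unaligned 1030.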
+17 -6
fs/bcachefs/recovery.c
··· 273 273 goto out; 274 274 275 275 struct btree_path *path = btree_iter_path(trans, &iter); 276 - if (unlikely(!btree_path_node(path, k->level) && 277 - !k->allocated)) { 276 + if (unlikely(!btree_path_node(path, k->level))) { 278 277 struct bch_fs *c = trans->c; 278 + 279 + CLASS(printbuf, buf)(); 280 + prt_str(&buf, "btree="); 281 + bch2_btree_id_to_text(&buf, k->btree_id); 282 + prt_printf(&buf, " level=%u ", k->level); 283 + bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(k->k)); 279 284 280 285 if (!(c->recovery.passes_complete & (BIT_ULL(BCH_RECOVERY_PASS_scan_for_btree_nodes)| 281 286 BIT_ULL(BCH_RECOVERY_PASS_check_topology)))) { 282 - bch_err(c, "have key in journal replay for btree depth that does not exist, confused"); 287 + bch_err(c, "have key in journal replay for btree depth that does not exist, confused\n%s", 288 + buf.buf); 283 289 ret = -EINVAL; 284 290 } 285 - #if 0 291 + 292 + if (!k->allocated) { 293 + bch_notice(c, "dropping key in journal replay for depth that does not exist because we're recovering from scan\n%s", 294 + buf.buf); 295 + k->overwritten = true; 296 + goto out; 297 + } 298 + 286 299 bch2_trans_iter_exit(trans, &iter); 287 300 bch2_trans_node_iter_init(trans, &iter, k->btree_id, k->k->k.p, 288 301 BTREE_MAX_DEPTH, 0, iter_flags); 289 302 ret = bch2_btree_iter_traverse(trans, &iter) ?: 290 303 bch2_btree_increase_depth(trans, iter.path, 0) ?: 291 304 -BCH_ERR_transaction_restart_nested; 292 - #endif 293 - k->overwritten = true; 294 305 goto out; 295 306 } 296 307
+1 -1
fs/bcachefs/recovery_passes.c
··· 360 360 !(r->passes_complete & BIT_ULL(pass)); 361 361 bool ratelimit = flags & RUN_RECOVERY_PASS_ratelimit; 362 362 363 - if (!(in_recovery && (flags & RUN_RECOVERY_PASS_nopersistent))) { 363 + if (!(flags & RUN_RECOVERY_PASS_nopersistent)) { 364 364 struct bch_sb_field_ext *ext = bch2_sb_field_get(c->disk_sb.sb, ext); 365 365 __set_bit_le64(bch2_recovery_pass_to_stable(pass), ext->recovery_passes_required); 366 366 }
+16 -5
fs/erofs/data.c
··· 214 214 215 215 /* 216 216 * bit 30: I/O error occurred on this folio 217 + * bit 29: CPU has dirty data in D-cache (needs aliasing handling); 217 218 * bit 0 - 29: remaining parts to complete this folio 218 219 */ 219 - #define EROFS_ONLINEFOLIO_EIO (1 << 30) 220 + #define EROFS_ONLINEFOLIO_EIO 30 221 + #define EROFS_ONLINEFOLIO_DIRTY 29 220 222 221 223 void erofs_onlinefolio_init(struct folio *folio) 222 224 { ··· 235 233 atomic_inc((atomic_t *)&folio->private); 236 234 } 237 235 238 - void erofs_onlinefolio_end(struct folio *folio, int err) 236 + void erofs_onlinefolio_end(struct folio *folio, int err, bool dirty) 239 237 { 240 238 int orig, v; 241 239 242 240 do { 243 241 orig = atomic_read((atomic_t *)&folio->private); 244 - v = (orig - 1) | (err ? EROFS_ONLINEFOLIO_EIO : 0); 242 + DBG_BUGON(orig <= 0); 243 + v = dirty << EROFS_ONLINEFOLIO_DIRTY; 244 + v |= (orig - 1) | (!!err << EROFS_ONLINEFOLIO_EIO); 245 245 } while (atomic_cmpxchg((atomic_t *)&folio->private, orig, v) != orig); 246 246 247 - if (v & ~EROFS_ONLINEFOLIO_EIO) 247 + if (v & (BIT(EROFS_ONLINEFOLIO_DIRTY) - 1)) 248 248 return; 249 249 folio->private = 0; 250 - folio_end_read(folio, !(v & EROFS_ONLINEFOLIO_EIO)); 250 + if (v & BIT(EROFS_ONLINEFOLIO_DIRTY)) 251 + flush_dcache_folio(folio); 252 + folio_end_read(folio, !(v & BIT(EROFS_ONLINEFOLIO_EIO))); 251 253 } 252 254 253 255 static int erofs_iomap_begin(struct inode *inode, loff_t offset, loff_t length, ··· 357 351 */ 358 352 static int erofs_read_folio(struct file *file, struct folio *folio) 359 353 { 354 + trace_erofs_read_folio(folio, true); 355 + 360 356 return iomap_read_folio(folio, &erofs_iomap_ops); 361 357 } 362 358 363 359 static void erofs_readahead(struct readahead_control *rac) 364 360 { 361 + trace_erofs_readahead(rac->mapping->host, readahead_index(rac), 362 + readahead_count(rac), true); 363 + 365 364 return iomap_readahead(rac, &erofs_iomap_ops); 366 365 } 367 366
+4 -8
fs/erofs/decompressor.c
··· 301 301 cur = min(cur, rq->outputsize); 302 302 if (cur && rq->out[0]) { 303 303 kin = kmap_local_page(rq->in[nrpages_in - 1]); 304 - if (rq->out[0] == rq->in[nrpages_in - 1]) { 304 + if (rq->out[0] == rq->in[nrpages_in - 1]) 305 305 memmove(kin + rq->pageofs_out, kin + pi, cur); 306 - flush_dcache_page(rq->out[0]); 307 - } else { 306 + else 308 307 memcpy_to_page(rq->out[0], rq->pageofs_out, 309 308 kin + pi, cur); 310 - } 311 309 kunmap_local(kin); 312 310 } 313 311 rq->outputsize -= cur; ··· 323 325 po = (rq->pageofs_out + cur + pi) & ~PAGE_MASK; 324 326 DBG_BUGON(no >= nrpages_out); 325 327 cnt = min(insz - pi, PAGE_SIZE - po); 326 - if (rq->out[no] == rq->in[ni]) { 328 + if (rq->out[no] == rq->in[ni]) 327 329 memmove(kin + po, 328 330 kin + rq->pageofs_in + pi, cnt); 329 - flush_dcache_page(rq->out[no]); 330 - } else if (rq->out[no]) { 331 + else if (rq->out[no]) 331 332 memcpy_to_page(rq->out[no], po, 332 333 kin + rq->pageofs_in + pi, cnt); 333 - } 334 334 pi += cnt; 335 335 } while (pi < insz); 336 336 kunmap_local(kin);
+6
fs/erofs/dir.c
··· 58 58 struct erofs_dirent *de; 59 59 unsigned int nameoff, maxsize; 60 60 61 + if (fatal_signal_pending(current)) { 62 + err = -ERESTARTSYS; 63 + break; 64 + } 65 + 61 66 de = erofs_bread(&buf, dbstart, true); 62 67 if (IS_ERR(de)) { 63 68 erofs_err(sb, "failed to readdir of logical block %llu of nid %llu", ··· 93 88 break; 94 89 ctx->pos = dbstart + maxsize; 95 90 ofs = 0; 91 + cond_resched(); 96 92 } 97 93 erofs_put_metabuf(&buf); 98 94 if (EROFS_I(dir)->dot_omitted && ctx->pos == dir->i_size) {
+3 -11
fs/erofs/fileio.c
··· 38 38 } else { 39 39 bio_for_each_folio_all(fi, &rq->bio) { 40 40 DBG_BUGON(folio_test_uptodate(fi.folio)); 41 - erofs_onlinefolio_end(fi.folio, ret); 41 + erofs_onlinefolio_end(fi.folio, ret, false); 42 42 } 43 43 } 44 44 bio_uninit(&rq->bio); ··· 96 96 struct erofs_map_blocks *map = &io->map; 97 97 unsigned int cur = 0, end = folio_size(folio), len, attached = 0; 98 98 loff_t pos = folio_pos(folio), ofs; 99 - struct iov_iter iter; 100 - struct bio_vec bv; 101 99 int err = 0; 102 100 103 101 erofs_onlinefolio_init(folio); ··· 120 122 err = PTR_ERR(src); 121 123 break; 122 124 } 123 - bvec_set_folio(&bv, folio, len, cur); 124 - iov_iter_bvec(&iter, ITER_DEST, &bv, 1, len); 125 - if (copy_to_iter(src, len, &iter) != len) { 126 - erofs_put_metabuf(&buf); 127 - err = -EIO; 128 - break; 129 - } 125 + memcpy_to_folio(folio, cur, src, len); 130 126 erofs_put_metabuf(&buf); 131 127 } else if (!(map->m_flags & EROFS_MAP_MAPPED)) { 132 128 folio_zero_segment(folio, cur, cur + len); ··· 154 162 } 155 163 cur += len; 156 164 } 157 - erofs_onlinefolio_end(folio, err); 165 + erofs_onlinefolio_end(folio, err, false); 158 166 return err; 159 167 } 160 168
+4 -2
fs/erofs/internal.h
··· 315 315 /* The length of extent is full */ 316 316 #define EROFS_MAP_FULL_MAPPED 0x0008 317 317 /* Located in the special packed inode */ 318 - #define EROFS_MAP_FRAGMENT 0x0010 318 + #define __EROFS_MAP_FRAGMENT 0x0010 319 319 /* The extent refers to partial decompressed data */ 320 320 #define EROFS_MAP_PARTIAL_REF 0x0020 321 + 322 + #define EROFS_MAP_FRAGMENT (EROFS_MAP_MAPPED | __EROFS_MAP_FRAGMENT) 321 323 322 324 struct erofs_map_blocks { 323 325 struct erofs_buf buf; ··· 392 390 int erofs_map_blocks(struct inode *inode, struct erofs_map_blocks *map); 393 391 void erofs_onlinefolio_init(struct folio *folio); 394 392 void erofs_onlinefolio_split(struct folio *folio); 395 - void erofs_onlinefolio_end(struct folio *folio, int err); 393 + void erofs_onlinefolio_end(struct folio *folio, int err, bool dirty); 396 394 struct inode *erofs_iget(struct super_block *sb, erofs_nid_t nid); 397 395 int erofs_getattr(struct mnt_idmap *idmap, const struct path *path, 398 396 struct kstat *stat, u32 request_mask,
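A sketch of how the two names are meant to be used after this change (not part of the diff): EROFS_MAP_FRAGMENT is now a composite value for assignment, while the raw bit keeps the double-underscore name for tests:

	/* assignment: the extent is mapped and backed by the packed inode */
	map->m_flags = EROFS_MAP_FRAGMENT;	/* MAPPED | __EROFS_MAP_FRAGMENT */

	/* test: check only the fragment property */
	if (map->m_flags & __EROFS_MAP_FRAGMENT)
		/* ... read from the packed inode ... */;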
+4 -4
fs/erofs/zdata.c
··· 1034 1034 if (!(map->m_flags & EROFS_MAP_MAPPED)) { 1035 1035 folio_zero_segment(folio, cur, end); 1036 1036 tight = false; 1037 - } else if (map->m_flags & EROFS_MAP_FRAGMENT) { 1037 + } else if (map->m_flags & __EROFS_MAP_FRAGMENT) { 1038 1038 erofs_off_t fpos = offset + cur - map->m_la; 1039 1039 1040 1040 err = z_erofs_read_fragment(inode->i_sb, folio, cur, ··· 1091 1091 tight = (bs == PAGE_SIZE); 1092 1092 } 1093 1093 } while ((end = cur) > 0); 1094 - erofs_onlinefolio_end(folio, err); 1094 + erofs_onlinefolio_end(folio, err, false); 1095 1095 return err; 1096 1096 } 1097 1097 ··· 1196 1196 cur += len; 1197 1197 } 1198 1198 kunmap_local(dst); 1199 - erofs_onlinefolio_end(page_folio(bvi->bvec.page), err); 1199 + erofs_onlinefolio_end(page_folio(bvi->bvec.page), err, true); 1200 1200 list_del(p); 1201 1201 kfree(bvi); 1202 1202 } ··· 1355 1355 1356 1356 DBG_BUGON(z_erofs_page_is_invalidated(page)); 1357 1357 if (!z_erofs_is_shortlived_page(page)) { 1358 - erofs_onlinefolio_end(page_folio(page), err); 1358 + erofs_onlinefolio_end(page_folio(page), err, true); 1359 1359 continue; 1360 1360 } 1361 1361 if (pcl->algorithmformat != Z_EROFS_COMPRESSION_LZ4) {
+4 -5
fs/erofs/zmap.c
··· 413 413 !vi->z_tailextent_headlcn) { 414 414 map->m_la = 0; 415 415 map->m_llen = inode->i_size; 416 - map->m_flags = EROFS_MAP_MAPPED | 417 - EROFS_MAP_FULL_MAPPED | EROFS_MAP_FRAGMENT; 416 + map->m_flags = EROFS_MAP_FRAGMENT; 418 417 return 0; 419 418 } 420 419 initial_lcn = ofs >> lclusterbits; ··· 488 489 goto unmap_out; 489 490 } 490 491 } else if (fragment && m.lcn == vi->z_tailextent_headlcn) { 491 - map->m_flags |= EROFS_MAP_FRAGMENT; 492 + map->m_flags = EROFS_MAP_FRAGMENT; 492 493 } else { 493 494 map->m_pa = erofs_pos(sb, m.pblk); 494 495 err = z_erofs_get_extent_compressedlen(&m, initial_lcn); ··· 616 617 if (lstart < lend) { 617 618 map->m_la = lstart; 618 619 if (last && (vi->z_advise & Z_EROFS_ADVISE_FRAGMENT_PCLUSTER)) { 619 - map->m_flags |= EROFS_MAP_MAPPED | EROFS_MAP_FRAGMENT; 620 + map->m_flags = EROFS_MAP_FRAGMENT; 620 621 vi->z_fragmentoff = map->m_plen; 621 622 if (recsz > offsetof(struct z_erofs_extent, pstart_lo)) 622 623 vi->z_fragmentoff |= map->m_pa << 32; ··· 796 797 iomap->length = map.m_llen; 797 798 if (map.m_flags & EROFS_MAP_MAPPED) { 798 799 iomap->type = IOMAP_MAPPED; 799 - iomap->addr = map.m_flags & EROFS_MAP_FRAGMENT ? 800 + iomap->addr = map.m_flags & __EROFS_MAP_FRAGMENT ? 800 801 IOMAP_NULL_ADDR : map.m_pa; 801 802 } else { 802 803 iomap->type = IOMAP_HOLE;
+329 -141
fs/eventpoll.c
··· 137 137 }; 138 138 139 139 /* List header used to link this structure to the eventpoll ready list */ 140 - struct llist_node rdllink; 140 + struct list_head rdllink; 141 + 142 + /* 143 + * Works together "struct eventpoll"->ovflist in keeping the 144 + * single linked chain of items. 145 + */ 146 + struct epitem *next; 141 147 142 148 /* The file descriptor information this item refers to */ 143 149 struct epoll_filefd ffd; ··· 191 185 /* Wait queue used by file->poll() */ 192 186 wait_queue_head_t poll_wait; 193 187 194 - /* 195 - * List of ready file descriptors. Adding to this list is lockless. Items can be removed 196 - * only with eventpoll::mtx 197 - */ 198 - struct llist_head rdllist; 188 + /* List of ready file descriptors */ 189 + struct list_head rdllist; 190 + 191 + /* Lock which protects rdllist and ovflist */ 192 + rwlock_t lock; 199 193 200 194 /* RB tree root used to store monitored fd structs */ 201 195 struct rb_root_cached rbr; 196 + 197 + /* 198 + * This is a single linked list that chains all the "struct epitem" that 199 + * happened while transferring ready events to userspace w/out 200 + * holding ->lock. 201 + */ 202 + struct epitem *ovflist; 202 203 203 204 /* wakeup_source used when ep_send_events or __ep_eventpoll_poll is running */ 204 205 struct wakeup_source *ws; ··· 361 348 (p1->file < p2->file ? -1 : p1->fd - p2->fd)); 362 349 } 363 350 364 - /* 365 - * Add the item to its container eventpoll's rdllist; do nothing if the item is already on rdllist. 366 - */ 367 - static void epitem_ready(struct epitem *epi) 351 + /* Tells us if the item is currently linked */ 352 + static inline int ep_is_linked(struct epitem *epi) 368 353 { 369 - if (&epi->rdllink == cmpxchg(&epi->rdllink.next, &epi->rdllink, NULL)) 370 - llist_add(&epi->rdllink, &epi->ep->rdllist); 371 - 354 + return !list_empty(&epi->rdllink); 372 355 } 373 356 374 357 static inline struct eppoll_entry *ep_pwq_from_wait(wait_queue_entry_t *p) ··· 383 374 * 384 375 * @ep: Pointer to the eventpoll context. 385 376 * 386 - * Return: true if ready events might be available, false otherwise. 377 + * Return: a value different than %zero if ready events are available, 378 + * or %zero otherwise. 387 379 */ 388 - static inline bool ep_events_available(struct eventpoll *ep) 380 + static inline int ep_events_available(struct eventpoll *ep) 389 381 { 390 - bool available; 391 - int locked; 392 - 393 - locked = mutex_trylock(&ep->mtx); 394 - if (!locked) { 395 - /* 396 - * The lock held and someone might have removed all items while inspecting it. The 397 - * llist_empty() check in this case is futile. Assume that something is enqueued and 398 - * let ep_try_send_events() figure it out. 399 - */ 400 - return true; 401 - } 402 - 403 - available = !llist_empty(&ep->rdllist); 404 - mutex_unlock(&ep->mtx); 405 - return available; 382 + return !list_empty_careful(&ep->rdllist) || 383 + READ_ONCE(ep->ovflist) != EP_UNACTIVE_PTR; 406 384 } 407 385 408 386 #ifdef CONFIG_NET_RX_BUSY_POLL ··· 724 728 rcu_read_unlock(); 725 729 } 726 730 731 + 732 + /* 733 + * ep->mutex needs to be held because we could be hit by 734 + * eventpoll_release_file() and epoll_ctl(). 735 + */ 736 + static void ep_start_scan(struct eventpoll *ep, struct list_head *txlist) 737 + { 738 + /* 739 + * Steal the ready list, and re-init the original one to the 740 + * empty list. Also, set ep->ovflist to NULL so that events 741 + * happening while looping w/out locks, are not lost. 
We cannot 742 + * have the poll callback to queue directly on ep->rdllist, 743 + * because we want the "sproc" callback to be able to do it 744 + * in a lockless way. 745 + */ 746 + lockdep_assert_irqs_enabled(); 747 + write_lock_irq(&ep->lock); 748 + list_splice_init(&ep->rdllist, txlist); 749 + WRITE_ONCE(ep->ovflist, NULL); 750 + write_unlock_irq(&ep->lock); 751 + } 752 + 753 + static void ep_done_scan(struct eventpoll *ep, 754 + struct list_head *txlist) 755 + { 756 + struct epitem *epi, *nepi; 757 + 758 + write_lock_irq(&ep->lock); 759 + /* 760 + * During the time we spent inside the "sproc" callback, some 761 + * other events might have been queued by the poll callback. 762 + * We re-insert them inside the main ready-list here. 763 + */ 764 + for (nepi = READ_ONCE(ep->ovflist); (epi = nepi) != NULL; 765 + nepi = epi->next, epi->next = EP_UNACTIVE_PTR) { 766 + /* 767 + * We need to check if the item is already in the list. 768 + * During the "sproc" callback execution time, items are 769 + * queued into ->ovflist but the "txlist" might already 770 + * contain them, and the list_splice() below takes care of them. 771 + */ 772 + if (!ep_is_linked(epi)) { 773 + /* 774 + * ->ovflist is LIFO, so we have to reverse it in order 775 + * to keep in FIFO. 776 + */ 777 + list_add(&epi->rdllink, &ep->rdllist); 778 + ep_pm_stay_awake(epi); 779 + } 780 + } 781 + /* 782 + * We need to set back ep->ovflist to EP_UNACTIVE_PTR, so that after 783 + * releasing the lock, events will be queued in the normal way inside 784 + * ep->rdllist. 785 + */ 786 + WRITE_ONCE(ep->ovflist, EP_UNACTIVE_PTR); 787 + 788 + /* 789 + * Quickly re-inject items left on "txlist". 790 + */ 791 + list_splice(txlist, &ep->rdllist); 792 + __pm_relax(ep->ws); 793 + 794 + if (!list_empty(&ep->rdllist)) { 795 + if (waitqueue_active(&ep->wq)) 796 + wake_up(&ep->wq); 797 + } 798 + 799 + write_unlock_irq(&ep->lock); 800 + } 801 + 727 802 static void ep_get(struct eventpoll *ep) 728 803 { 729 804 refcount_inc(&ep->refcount); ··· 832 765 static bool __ep_remove(struct eventpoll *ep, struct epitem *epi, bool force) 833 766 { 834 767 struct file *file = epi->ffd.file; 835 - struct llist_node *put_back_last; 836 768 struct epitems_head *to_free; 837 769 struct hlist_head *head; 838 - LLIST_HEAD(put_back); 839 770 840 - lockdep_assert_held(&ep->mtx); 771 + lockdep_assert_irqs_enabled(); 841 772 842 773 /* 843 774 * Removes poll wait queue hooks. 
··· 867 802 868 803 rb_erase_cached(&epi->rbn, &ep->rbr); 869 804 870 - if (llist_on_list(&epi->rdllink)) { 871 - put_back_last = NULL; 872 - while (true) { 873 - struct llist_node *n = llist_del_first(&ep->rdllist); 874 - 875 - if (&epi->rdllink == n || WARN_ON(!n)) 876 - break; 877 - if (!put_back_last) 878 - put_back_last = n; 879 - __llist_add(n, &put_back); 880 - } 881 - if (put_back_last) 882 - llist_add_batch(put_back.first, put_back_last, &ep->rdllist); 883 - } 805 + write_lock_irq(&ep->lock); 806 + if (ep_is_linked(epi)) 807 + list_del_init(&epi->rdllink); 808 + write_unlock_irq(&ep->lock); 884 809 885 810 wakeup_source_unregister(ep_wakeup_source(epi)); 886 811 /* ··· 883 828 kfree_rcu(epi, rcu); 884 829 885 830 percpu_counter_dec(&ep->user->epoll_watches); 886 - return ep_refcount_dec_and_test(ep); 831 + return true; 887 832 } 888 833 889 834 /* ··· 891 836 */ 892 837 static void ep_remove_safe(struct eventpoll *ep, struct epitem *epi) 893 838 { 894 - WARN_ON_ONCE(__ep_remove(ep, epi, false)); 839 + if (__ep_remove(ep, epi, false)) 840 + WARN_ON_ONCE(ep_refcount_dec_and_test(ep)); 895 841 } 896 842 897 843 static void ep_clear_and_put(struct eventpoll *ep) 898 844 { 899 845 struct rb_node *rbp, *next; 900 846 struct epitem *epi; 901 - bool dispose; 902 847 903 848 /* We need to release all tasks waiting for these file */ 904 849 if (waitqueue_active(&ep->poll_wait)) ··· 931 876 cond_resched(); 932 877 } 933 878 934 - dispose = ep_refcount_dec_and_test(ep); 935 879 mutex_unlock(&ep->mtx); 936 - 937 - if (dispose) 880 + if (ep_refcount_dec_and_test(ep)) 938 881 ep_free(ep); 939 882 } 940 883 ··· 972 919 static __poll_t __ep_eventpoll_poll(struct file *file, poll_table *wait, int depth) 973 920 { 974 921 struct eventpoll *ep = file->private_data; 975 - struct wakeup_source *ws; 976 - struct llist_node *n; 977 - struct epitem *epi; 922 + LIST_HEAD(txlist); 923 + struct epitem *epi, *tmp; 978 924 poll_table pt; 979 925 __poll_t res = 0; 980 926 ··· 987 935 * the ready list. 988 936 */ 989 937 mutex_lock_nested(&ep->mtx, depth); 990 - while (true) { 991 - n = llist_del_first_init(&ep->rdllist); 992 - if (!n) 993 - break; 994 - 995 - epi = llist_entry(n, struct epitem, rdllink); 996 - 938 + ep_start_scan(ep, &txlist); 939 + list_for_each_entry_safe(epi, tmp, &txlist, rdllink) { 997 940 if (ep_item_poll(epi, &pt, depth + 1)) { 998 941 res = EPOLLIN | EPOLLRDNORM; 999 - epitem_ready(epi); 1000 942 break; 1001 943 } else { 1002 944 /* 1003 - * We need to activate ep before deactivating epi, to prevent autosuspend 1004 - * just in case epi becomes active after ep_item_poll() above. 1005 - * 1006 - * This is similar to ep_send_events(). 945 + * Item has been dropped into the ready list by the poll 946 + * callback, but it's not actually ready, as far as 947 + * caller requested events goes. We can remove it here. 
1007 948 */ 1008 - ws = ep_wakeup_source(epi); 1009 - if (ws) { 1010 - if (ws->active) 1011 - __pm_stay_awake(ep->ws); 1012 - __pm_relax(ws); 1013 - } 1014 949 __pm_relax(ep_wakeup_source(epi)); 1015 - 1016 - /* Just in case epi becomes active right before __pm_relax() */ 1017 - if (unlikely(ep_item_poll(epi, &pt, depth + 1))) 1018 - ep_pm_stay_awake(epi); 1019 - 1020 - __pm_relax(ep->ws); 950 + list_del_init(&epi->rdllink); 1021 951 } 1022 952 } 953 + ep_done_scan(ep, &txlist); 1023 954 mutex_unlock(&ep->mtx); 1024 955 return res; 1025 956 } ··· 1135 1100 dispose = __ep_remove(ep, epi, true); 1136 1101 mutex_unlock(&ep->mtx); 1137 1102 1138 - if (dispose) 1103 + if (dispose && ep_refcount_dec_and_test(ep)) 1139 1104 ep_free(ep); 1140 1105 goto again; 1141 1106 } ··· 1151 1116 return -ENOMEM; 1152 1117 1153 1118 mutex_init(&ep->mtx); 1119 + rwlock_init(&ep->lock); 1154 1120 init_waitqueue_head(&ep->wq); 1155 1121 init_waitqueue_head(&ep->poll_wait); 1156 - init_llist_head(&ep->rdllist); 1122 + INIT_LIST_HEAD(&ep->rdllist); 1157 1123 ep->rbr = RB_ROOT_CACHED; 1124 + ep->ovflist = EP_UNACTIVE_PTR; 1158 1125 ep->user = get_current_user(); 1159 1126 refcount_set(&ep->refcount, 1); 1160 1127 ··· 1239 1202 #endif /* CONFIG_KCMP */ 1240 1203 1241 1204 /* 1205 + * Adds a new entry to the tail of the list in a lockless way, i.e. 1206 + * multiple CPUs are allowed to call this function concurrently. 1207 + * 1208 + * Beware: it is necessary to prevent any other modifications of the 1209 + * existing list until all changes are completed, in other words 1210 + * concurrent list_add_tail_lockless() calls should be protected 1211 + * with a read lock, where write lock acts as a barrier which 1212 + * makes sure all list_add_tail_lockless() calls are fully 1213 + * completed. 1214 + * 1215 + * Also an element can be locklessly added to the list only in one 1216 + * direction i.e. either to the tail or to the head, otherwise 1217 + * concurrent access will corrupt the list. 1218 + * 1219 + * Return: %false if element has been already added to the list, %true 1220 + * otherwise. 1221 + */ 1222 + static inline bool list_add_tail_lockless(struct list_head *new, 1223 + struct list_head *head) 1224 + { 1225 + struct list_head *prev; 1226 + 1227 + /* 1228 + * This is simple 'new->next = head' operation, but cmpxchg() 1229 + * is used in order to detect that same element has been just 1230 + * added to the list from another CPU: the winner observes 1231 + * new->next == new. 1232 + */ 1233 + if (!try_cmpxchg(&new->next, &new, head)) 1234 + return false; 1235 + 1236 + /* 1237 + * Initially ->next of a new element must be updated with the head 1238 + * (we are inserting to the tail) and only then pointers are atomically 1239 + * exchanged. XCHG guarantees memory ordering, thus ->next should be 1240 + * updated before pointers are actually swapped and pointers are 1241 + * swapped before prev->next is updated. 1242 + */ 1243 + 1244 + prev = xchg(&head->prev, new); 1245 + 1246 + /* 1247 + * It is safe to modify prev->next and new->prev, because a new element 1248 + * is added only to the tail and new->next is updated before XCHG. 1249 + */ 1250 + 1251 + prev->next = new; 1252 + new->prev = prev; 1253 + 1254 + return true; 1255 + } 1256 + 1257 + /* 1258 + * Chains a new epi entry to the tail of the ep->ovflist in a lockless way, 1259 + * i.e. multiple CPUs are allowed to call this function concurrently. 1260 + * 1261 + * Return: %false if epi element has been already chained, %true otherwise. 
1262 + */ 1263 + static inline bool chain_epi_lockless(struct epitem *epi) 1264 + { 1265 + struct eventpoll *ep = epi->ep; 1266 + 1267 + /* Fast preliminary check */ 1268 + if (epi->next != EP_UNACTIVE_PTR) 1269 + return false; 1270 + 1271 + /* Check that the same epi has not been just chained from another CPU */ 1272 + if (cmpxchg(&epi->next, EP_UNACTIVE_PTR, NULL) != EP_UNACTIVE_PTR) 1273 + return false; 1274 + 1275 + /* Atomically exchange tail */ 1276 + epi->next = xchg(&ep->ovflist, epi); 1277 + 1278 + return true; 1279 + } 1280 + 1281 + /* 1242 1282 * This is the callback that is passed to the wait queue wakeup 1243 1283 * mechanism. It is called by the stored file descriptors when they 1244 1284 * have events to report. 1285 + * 1286 + * This callback takes a read lock in order not to contend with concurrent 1287 + * events from another file descriptor, thus all modifications to ->rdllist 1288 + * or ->ovflist are lockless. Read lock is paired with the write lock from 1289 + * ep_start/done_scan(), which stops all list modifications and guarantees 1290 + * that lists state is seen correctly. 1245 1291 * 1246 1292 * Another thing worth to mention is that ep_poll_callback() can be called 1247 1293 * concurrently for the same @epi from different CPUs if poll table was inited ··· 1335 1215 */ 1336 1216 static int ep_poll_callback(wait_queue_entry_t *wait, unsigned mode, int sync, void *key) 1337 1217 { 1218 + int pwake = 0; 1338 1219 struct epitem *epi = ep_item_from_wait(wait); 1339 1220 struct eventpoll *ep = epi->ep; 1340 1221 __poll_t pollflags = key_to_poll(key); 1222 + unsigned long flags; 1341 1223 int ewake = 0; 1224 + 1225 + read_lock_irqsave(&ep->lock, flags); 1342 1226 1343 1227 ep_set_busy_poll_napi_id(epi); 1344 1228 ··· 1353 1229 * until the next EPOLL_CTL_MOD will be issued. 1354 1230 */ 1355 1231 if (!(epi->event.events & ~EP_PRIVATE_BITS)) 1356 - goto out; 1232 + goto out_unlock; 1357 1233 1358 1234 /* 1359 1235 * Check the events coming with the callback. At this stage, not ··· 1362 1238 * test for "key" != NULL before the event match test. 1363 1239 */ 1364 1240 if (pollflags && !(pollflags & epi->event.events)) 1365 - goto out; 1241 + goto out_unlock; 1366 1242 1367 - ep_pm_stay_awake_rcu(epi); 1368 - epitem_ready(epi); 1243 + /* 1244 + * If we are transferring events to userspace, we can hold no locks 1245 + * (because we're accessing user memory, and because of linux f_op->poll() 1246 + * semantics). All the events that happen during that period of time are 1247 + * chained in ep->ovflist and requeued later on. 1248 + */ 1249 + if (READ_ONCE(ep->ovflist) != EP_UNACTIVE_PTR) { 1250 + if (chain_epi_lockless(epi)) 1251 + ep_pm_stay_awake_rcu(epi); 1252 + } else if (!ep_is_linked(epi)) { 1253 + /* In the usual case, add event to ready list. 
*/ 1254 + if (list_add_tail_lockless(&epi->rdllink, &ep->rdllist)) 1255 + ep_pm_stay_awake_rcu(epi); 1256 + } 1369 1257 1370 1258 /* 1371 1259 * Wake up ( if active ) both the eventpoll wait list and the ->poll() ··· 1406 1270 wake_up(&ep->wq); 1407 1271 } 1408 1272 if (waitqueue_active(&ep->poll_wait)) 1273 + pwake++; 1274 + 1275 + out_unlock: 1276 + read_unlock_irqrestore(&ep->lock, flags); 1277 + 1278 + /* We have to call this outside the lock */ 1279 + if (pwake) 1409 1280 ep_poll_safewake(ep, epi, pollflags & EPOLL_URING_WAKE); 1410 1281 1411 - out: 1412 1282 if (!(epi->event.events & EPOLLEXCLUSIVE)) 1413 1283 ewake = 1; 1414 1284 ··· 1659 1517 if (is_file_epoll(tfile)) 1660 1518 tep = tfile->private_data; 1661 1519 1520 + lockdep_assert_irqs_enabled(); 1521 + 1662 1522 if (unlikely(percpu_counter_compare(&ep->user->epoll_watches, 1663 1523 max_user_watches) >= 0)) 1664 1524 return -ENOSPC; ··· 1672 1528 } 1673 1529 1674 1530 /* Item initialization follow here ... */ 1675 - init_llist_node(&epi->rdllink); 1531 + INIT_LIST_HEAD(&epi->rdllink); 1676 1532 epi->ep = ep; 1677 1533 ep_set_ffd(&epi->ffd, tfile, fd); 1678 1534 epi->event = *event; 1535 + epi->next = EP_UNACTIVE_PTR; 1679 1536 1680 1537 if (tep) 1681 1538 mutex_lock_nested(&tep->mtx, 1); ··· 1743 1598 return -ENOMEM; 1744 1599 } 1745 1600 1601 + /* We have to drop the new item inside our item list to keep track of it */ 1602 + write_lock_irq(&ep->lock); 1603 + 1746 1604 /* record NAPI ID of new item if present */ 1747 1605 ep_set_busy_poll_napi_id(epi); 1748 1606 1749 1607 /* If the file is already "ready" we drop it inside the ready list */ 1750 - if (revents) { 1608 + if (revents && !ep_is_linked(epi)) { 1609 + list_add_tail(&epi->rdllink, &ep->rdllist); 1751 1610 ep_pm_stay_awake(epi); 1752 - epitem_ready(epi); 1753 1611 1754 1612 /* Notify waiting tasks that events are available */ 1755 1613 if (waitqueue_active(&ep->wq)) ··· 1760 1612 if (waitqueue_active(&ep->poll_wait)) 1761 1613 pwake++; 1762 1614 } 1615 + 1616 + write_unlock_irq(&ep->lock); 1763 1617 1764 1618 /* We have to call this outside the lock */ 1765 1619 if (pwake) ··· 1777 1627 static int ep_modify(struct eventpoll *ep, struct epitem *epi, 1778 1628 const struct epoll_event *event) 1779 1629 { 1630 + int pwake = 0; 1780 1631 poll_table pt; 1632 + 1633 + lockdep_assert_irqs_enabled(); 1781 1634 1782 1635 init_poll_funcptr(&pt, NULL); 1783 1636 ··· 1825 1672 * list, push it inside. 
1826 1673 */ 1827 1674 if (ep_item_poll(epi, &pt, 1)) { 1828 - ep_pm_stay_awake(epi); 1829 - epitem_ready(epi); 1675 + write_lock_irq(&ep->lock); 1676 + if (!ep_is_linked(epi)) { 1677 + list_add_tail(&epi->rdllink, &ep->rdllist); 1678 + ep_pm_stay_awake(epi); 1830 1679 1831 - /* Notify waiting tasks that events are available */ 1832 - if (waitqueue_active(&ep->wq)) 1833 - wake_up(&ep->wq); 1834 - if (waitqueue_active(&ep->poll_wait)) 1835 - ep_poll_safewake(ep, NULL, 0); 1680 + /* Notify waiting tasks that events are available */ 1681 + if (waitqueue_active(&ep->wq)) 1682 + wake_up(&ep->wq); 1683 + if (waitqueue_active(&ep->poll_wait)) 1684 + pwake++; 1685 + } 1686 + write_unlock_irq(&ep->lock); 1836 1687 } 1688 + 1689 + /* We have to call this outside the lock */ 1690 + if (pwake) 1691 + ep_poll_safewake(ep, NULL, 0); 1837 1692 1838 1693 return 0; 1839 1694 } ··· 1850 1689 struct epoll_event __user *events, int maxevents) 1851 1690 { 1852 1691 struct epitem *epi, *tmp; 1853 - LLIST_HEAD(txlist); 1692 + LIST_HEAD(txlist); 1854 1693 poll_table pt; 1855 1694 int res = 0; 1856 1695 ··· 1865 1704 init_poll_funcptr(&pt, NULL); 1866 1705 1867 1706 mutex_lock(&ep->mtx); 1707 + ep_start_scan(ep, &txlist); 1868 1708 1869 - while (res < maxevents) { 1709 + /* 1710 + * We can loop without lock because we are passed a task private list. 1711 + * Items cannot vanish during the loop we are holding ep->mtx. 1712 + */ 1713 + list_for_each_entry_safe(epi, tmp, &txlist, rdllink) { 1870 1714 struct wakeup_source *ws; 1871 - struct llist_node *n; 1872 1715 __poll_t revents; 1873 1716 1874 - n = llist_del_first(&ep->rdllist); 1875 - if (!n) 1717 + if (res >= maxevents) 1876 1718 break; 1877 - 1878 - epi = llist_entry(n, struct epitem, rdllink); 1879 1719 1880 1720 /* 1881 1721 * Activate ep->ws before deactivating epi->ws to prevent ··· 1894 1732 __pm_relax(ws); 1895 1733 } 1896 1734 1735 + list_del_init(&epi->rdllink); 1736 + 1897 1737 /* 1898 1738 * If the event mask intersect the caller-requested one, 1899 1739 * deliver the event to userspace. Again, we are holding ep->mtx, 1900 1740 * so no operations coming from userspace can change the item. 1901 1741 */ 1902 1742 revents = ep_item_poll(epi, &pt, 1); 1903 - if (!revents) { 1904 - init_llist_node(n); 1905 - 1906 - /* 1907 - * Just in case epi becomes ready after ep_item_poll() above, but before 1908 - * init_llist_node(). Make sure to add it to the ready list, otherwise an 1909 - * event may be lost. 1910 - */ 1911 - if (unlikely(ep_item_poll(epi, &pt, 1))) { 1912 - ep_pm_stay_awake(epi); 1913 - epitem_ready(epi); 1914 - } 1743 + if (!revents) 1915 1744 continue; 1916 - } 1917 1745 1918 1746 events = epoll_put_uevent(revents, epi->event.data, events); 1919 1747 if (!events) { 1920 - llist_add(&epi->rdllink, &ep->rdllist); 1748 + list_add(&epi->rdllink, &txlist); 1749 + ep_pm_stay_awake(epi); 1921 1750 if (!res) 1922 1751 res = -EFAULT; 1923 1752 break; ··· 1916 1763 res++; 1917 1764 if (epi->event.events & EPOLLONESHOT) 1918 1765 epi->event.events &= EP_PRIVATE_BITS; 1919 - __llist_add(n, &txlist); 1920 - } 1921 - 1922 - llist_for_each_entry_safe(epi, tmp, txlist.first, rdllink) { 1923 - init_llist_node(&epi->rdllink); 1924 - 1925 - if (!(epi->event.events & EPOLLET)) { 1766 + else if (!(epi->event.events & EPOLLET)) { 1926 1767 /* 1927 - * If this file has been added with Level Trigger mode, we need to insert 1928 - * back inside the ready list, so that the next call to epoll_wait() will 1929 - * check again the events availability. 
1768 + * If this file has been added with Level 1769 + * Trigger mode, we need to insert back inside 1770 + * the ready list, so that the next call to 1771 + * epoll_wait() will check again the events 1772 + * availability. At this point, no one can insert 1773 + * into ep->rdllist besides us. The epoll_ctl() 1774 + * callers are locked out by 1775 + * ep_send_events() holding "mtx" and the 1776 + * poll callback will queue them in ep->ovflist. 1930 1777 */ 1778 + list_add_tail(&epi->rdllink, &ep->rdllist); 1931 1779 ep_pm_stay_awake(epi); 1932 - epitem_ready(epi); 1933 1780 } 1934 1781 } 1935 - 1936 - __pm_relax(ep->ws); 1782 + ep_done_scan(ep, &txlist); 1937 1783 mutex_unlock(&ep->mtx); 1938 - 1939 - if (!llist_empty(&ep->rdllist)) { 1940 - if (waitqueue_active(&ep->wq)) 1941 - wake_up(&ep->wq); 1942 - } 1943 1784 1944 1785 return res; 1945 1786 } ··· 2027 1880 wait_queue_entry_t wait; 2028 1881 ktime_t expires, *to = NULL; 2029 1882 1883 + lockdep_assert_irqs_enabled(); 1884 + 2030 1885 if (timeout && (timeout->tv_sec | timeout->tv_nsec)) { 2031 1886 slack = select_estimate_accuracy(timeout); 2032 1887 to = &expires; ··· 2088 1939 init_wait(&wait); 2089 1940 wait.func = ep_autoremove_wake_function; 2090 1941 2091 - prepare_to_wait_exclusive(&ep->wq, &wait, TASK_INTERRUPTIBLE); 1942 + write_lock_irq(&ep->lock); 1943 + /* 1944 + * Barrierless variant, waitqueue_active() is called under 1945 + * the same lock on wakeup ep_poll_callback() side, so it 1946 + * is safe to avoid an explicit barrier. 1947 + */ 1948 + __set_current_state(TASK_INTERRUPTIBLE); 2092 1949 2093 - if (!ep_events_available(ep)) 1950 + /* 1951 + * Do the final check under the lock. ep_start/done_scan() 1952 + * plays with two lists (->rdllist and ->ovflist) and there 1953 + * is always a race when both lists are empty for short 1954 + * period of time although events are pending, so lock is 1955 + * important. 1956 + */ 1957 + eavail = ep_events_available(ep); 1958 + if (!eavail) 1959 + __add_wait_queue_exclusive(&ep->wq, &wait); 1960 + 1961 + write_unlock_irq(&ep->lock); 1962 + 1963 + if (!eavail) 2094 1964 timed_out = !ep_schedule_timeout(to) || 2095 1965 !schedule_hrtimeout_range(to, slack, 2096 1966 HRTIMER_MODE_ABS); 1967 + __set_current_state(TASK_RUNNING); 2097 1968 2098 - finish_wait(&ep->wq, &wait); 2099 - eavail = ep_events_available(ep); 1969 + /* 1970 + * We were woken up, thus go and try to harvest some events. 1971 + * If timed out and still on the wait queue, recheck eavail 1972 + * carefully under lock, below. 1973 + */ 1974 + eavail = 1; 1975 + 1976 + if (!list_empty_careful(&wait.entry)) { 1977 + write_lock_irq(&ep->lock); 1978 + /* 1979 + * If the thread timed out and is not on the wait queue, 1980 + * it means that the thread was woken up after its 1981 + * timeout expired before it could reacquire the lock. 1982 + * Thus, when wait.entry is empty, it needs to harvest 1983 + * events. 1984 + */ 1985 + if (timed_out) 1986 + eavail = list_empty(&wait.entry); 1987 + __remove_wait_queue(&ep->wq, &wait); 1988 + write_unlock_irq(&ep->lock); 1989 + } 2100 1990 } 2101 1991 } 2102 1992
+7 -7
fs/proc/task_mmu.c
··· 36 36 unsigned long text, lib, swap, anon, file, shmem; 37 37 unsigned long hiwater_vm, total_vm, hiwater_rss, total_rss; 38 38 39 - anon = get_mm_counter(mm, MM_ANONPAGES); 40 - file = get_mm_counter(mm, MM_FILEPAGES); 41 - shmem = get_mm_counter(mm, MM_SHMEMPAGES); 39 + anon = get_mm_counter_sum(mm, MM_ANONPAGES); 40 + file = get_mm_counter_sum(mm, MM_FILEPAGES); 41 + shmem = get_mm_counter_sum(mm, MM_SHMEMPAGES); 42 42 43 43 /* 44 44 * Note: to minimize their overhead, mm maintains hiwater_vm and ··· 59 59 text = min(text, mm->exec_vm << PAGE_SHIFT); 60 60 lib = (mm->exec_vm << PAGE_SHIFT) - text; 61 61 62 - swap = get_mm_counter(mm, MM_SWAPENTS); 62 + swap = get_mm_counter_sum(mm, MM_SWAPENTS); 63 63 SEQ_PUT_DEC("VmPeak:\t", hiwater_vm); 64 64 SEQ_PUT_DEC(" kB\nVmSize:\t", total_vm); 65 65 SEQ_PUT_DEC(" kB\nVmLck:\t", mm->locked_vm); ··· 92 92 unsigned long *shared, unsigned long *text, 93 93 unsigned long *data, unsigned long *resident) 94 94 { 95 - *shared = get_mm_counter(mm, MM_FILEPAGES) + 96 - get_mm_counter(mm, MM_SHMEMPAGES); 95 + *shared = get_mm_counter_sum(mm, MM_FILEPAGES) + 96 + get_mm_counter_sum(mm, MM_SHMEMPAGES); 97 97 *text = (PAGE_ALIGN(mm->end_code) - (mm->start_code & PAGE_MASK)) 98 98 >> PAGE_SHIFT; 99 99 *data = mm->data_vm + mm->stack_vm; 100 - *resident = *shared + get_mm_counter(mm, MM_ANONPAGES); 100 + *resident = *shared + get_mm_counter_sum(mm, MM_ANONPAGES); 101 101 return mm->total_vm; 102 102 } 103 103
+9 -20
fs/smb/server/smb2pdu.c
··· 8573 8573 goto err_out; 8574 8574 } 8575 8575 8576 - opinfo->op_state = OPLOCK_STATE_NONE; 8577 - wake_up_interruptible_all(&opinfo->oplock_q); 8578 - opinfo_put(opinfo); 8579 - ksmbd_fd_put(work, fp); 8580 - 8581 8576 rsp->StructureSize = cpu_to_le16(24); 8582 8577 rsp->OplockLevel = rsp_oplevel; 8583 8578 rsp->Reserved = 0; ··· 8580 8585 rsp->VolatileFid = volatile_id; 8581 8586 rsp->PersistentFid = persistent_id; 8582 8587 ret = ksmbd_iov_pin_rsp(work, rsp, sizeof(struct smb2_oplock_break)); 8583 - if (!ret) 8584 - return; 8585 - 8588 + if (ret) { 8586 8589 err_out: 8590 + smb2_set_err_rsp(work); 8591 + } 8592 + 8587 8593 opinfo->op_state = OPLOCK_STATE_NONE; 8588 8594 wake_up_interruptible_all(&opinfo->oplock_q); 8589 - 8590 8595 opinfo_put(opinfo); 8591 8596 ksmbd_fd_put(work, fp); 8592 - smb2_set_err_rsp(work); 8593 8597 } 8594 8598 8595 8599 static int check_lease_state(struct lease *lease, __le32 req_state) ··· 8718 8724 } 8719 8725 8720 8726 lease_state = lease->state; 8721 - opinfo->op_state = OPLOCK_STATE_NONE; 8722 - wake_up_interruptible_all(&opinfo->oplock_q); 8723 - atomic_dec(&opinfo->breaking_cnt); 8724 - wake_up_interruptible_all(&opinfo->oplock_brk); 8725 - opinfo_put(opinfo); 8726 8727 8727 8728 rsp->StructureSize = cpu_to_le16(36); 8728 8729 rsp->Reserved = 0; ··· 8726 8737 rsp->LeaseState = lease_state; 8727 8738 rsp->LeaseDuration = 0; 8728 8739 ret = ksmbd_iov_pin_rsp(work, rsp, sizeof(struct smb2_lease_ack)); 8729 - if (!ret) 8730 - return; 8731 - 8740 + if (ret) { 8732 8741 err_out: 8742 + smb2_set_err_rsp(work); 8743 + } 8744 + 8745 + opinfo->op_state = OPLOCK_STATE_NONE; 8733 8746 wake_up_interruptible_all(&opinfo->oplock_q); 8734 8747 atomic_dec(&opinfo->breaking_cnt); 8735 8748 wake_up_interruptible_all(&opinfo->oplock_brk); 8736 - 8737 8749 opinfo_put(opinfo); 8738 - smb2_set_err_rsp(work); 8739 8750 } 8740 8751 8741 8752 /**
+3 -2
fs/smb/server/transport_rdma.c
··· 433 433 if (t->qp) { 434 434 ib_drain_qp(t->qp); 435 435 ib_mr_pool_destroy(t->qp, &t->qp->rdma_mrs); 436 - ib_destroy_qp(t->qp); 436 + t->qp = NULL; 437 + rdma_destroy_qp(t->cm_id); 437 438 } 438 439 439 440 ksmbd_debug(RDMA, "drain the reassembly queue\n"); ··· 1941 1940 return 0; 1942 1941 err: 1943 1942 if (t->qp) { 1944 - ib_destroy_qp(t->qp); 1945 1943 t->qp = NULL; 1944 + rdma_destroy_qp(t->cm_id); 1946 1945 } 1947 1946 if (t->recv_cq) { 1948 1947 ib_destroy_cq(t->recv_cq);
+1
fs/smb/server/vfs.c
··· 1282 1282 1283 1283 err = ksmbd_vfs_lock_parent(parent_path->dentry, path->dentry); 1284 1284 if (err) { 1285 + mnt_drop_write(parent_path->mnt); 1285 1286 path_put(path); 1286 1287 path_put(parent_path); 1287 1288 }
+3
include/drm/drm_file.h
··· 300 300 * 301 301 * Mapping of mm object handles to object pointers. Used by the GEM 302 302 * subsystem. Protected by @table_lock. 303 + * 304 + * Note that allocated entries might be NULL as a transient state when 305 + * creating or deleting a handle. 303 306 */ 304 307 struct idr object_idr; 305 308
+7
include/drm/drm_framebuffer.h
··· 23 23 #ifndef __DRM_FRAMEBUFFER_H__ 24 24 #define __DRM_FRAMEBUFFER_H__ 25 25 26 + #include <linux/bits.h> 26 27 #include <linux/ctype.h> 27 28 #include <linux/list.h> 28 29 #include <linux/sched.h> ··· 100 99 unsigned color, struct drm_clip_rect *clips, 101 100 unsigned num_clips); 102 101 }; 102 + 103 + #define DRM_FRAMEBUFFER_HAS_HANDLE_REF(_i) BIT(0u + (_i)) 103 104 104 105 /** 105 106 * struct drm_framebuffer - frame buffer object ··· 191 188 * DRM_MODE_FB_MODIFIERS. 192 189 */ 193 190 int flags; 191 + /** 192 + * @internal_flags: Framebuffer flags like DRM_FRAMEBUFFER_HAS_HANDLE_REF. 193 + */ 194 + unsigned int internal_flags; 194 195 /** 195 196 * @filp_head: Placed on &drm_file.fbs, protected by &drm_file.fbs_lock. 196 197 */
+5
include/linux/blkdev.h
··· 269 269 return MKDEV(disk->major, disk->first_minor); 270 270 } 271 271 272 + #ifdef CONFIG_TRANSPARENT_HUGEPAGE 272 273 /* 273 274 * We should strive for 1 << (PAGE_SHIFT + MAX_PAGECACHE_ORDER) 274 275 * however we constrain this to what we can validate and test. 275 276 */ 276 277 #define BLK_MAX_BLOCK_SIZE SZ_64K 278 + #else 279 + #define BLK_MAX_BLOCK_SIZE PAGE_SIZE 280 + #endif 281 + 277 282 278 283 /* blk_validate_limits() validates bsize, so drivers don't usually need to */ 279 284 static inline int blk_validate_block_size(unsigned long bsize)
+1
include/linux/cpu.h
··· 82 82 struct device_attribute *attr, char *buf); 83 83 extern ssize_t cpu_show_indirect_target_selection(struct device *dev, 84 84 struct device_attribute *attr, char *buf); 85 + extern ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf); 85 86 86 87 extern __printf(4, 5) 87 88 struct device *cpu_device_create(struct device *parent, void *drvdata,
+33 -12
include/linux/ieee80211.h
··· 663 663 } 664 664 665 665 /** 666 - * ieee80211_is_s1g_short_beacon - check if frame is an S1G short beacon 667 - * @fc: frame control bytes in little-endian byteorder 668 - * Return: whether or not the frame is an S1G short beacon, 669 - * i.e. it is an S1G beacon with 'next TBTT' flag set 670 - */ 671 - static inline bool ieee80211_is_s1g_short_beacon(__le16 fc) 672 - { 673 - return ieee80211_is_s1g_beacon(fc) && 674 - (fc & cpu_to_le16(IEEE80211_S1G_BCN_NEXT_TBTT)); 675 - } 676 - 677 - /** 678 666 * ieee80211_is_atim - check if IEEE80211_FTYPE_MGMT && IEEE80211_STYPE_ATIM 679 667 * @fc: frame control bytes in little-endian byteorder 680 668 * Return: whether or not the frame is an ATIM frame ··· 4887 4899 return true; 4888 4900 4889 4901 return false; 4902 + } 4903 + 4904 + /** 4905 + * ieee80211_is_s1g_short_beacon - check if frame is an S1G short beacon 4906 + * @fc: frame control bytes in little-endian byteorder 4907 + * @variable: pointer to the beacon frame elements 4908 + * @variable_len: length of the frame elements 4909 + * Return: whether or not the frame is an S1G short beacon. As per 4910 + * IEEE80211-2024 11.1.3.10.1, The S1G beacon compatibility element shall 4911 + * always be present as the first element in beacon frames generated at a 4912 + * TBTT (Target Beacon Transmission Time), so any frame not containing 4913 + * this element must have been generated at a TSBTT (Target Short Beacon 4914 + * Transmission Time) that is not a TBTT. Additionally, short beacons are 4915 + * prohibited from containing the S1G beacon compatibility element as per 4916 + * IEEE80211-2024 9.3.4.3 Table 9-76, so if we have an S1G beacon with 4917 + * either no elements or the first element is not the beacon compatibility 4918 + * element, we have a short beacon. 4919 + */ 4920 + static inline bool ieee80211_is_s1g_short_beacon(__le16 fc, const u8 *variable, 4921 + size_t variable_len) 4922 + { 4923 + if (!ieee80211_is_s1g_beacon(fc)) 4924 + return false; 4925 + 4926 + /* 4927 + * If the frame does not contain at least 1 element (this is perfectly 4928 + * valid in a short beacon) and is an S1G beacon, we have a short 4929 + * beacon. 4930 + */ 4931 + if (variable_len < 2) 4932 + return true; 4933 + 4934 + return variable[0] != WLAN_EID_S1G_BCN_COMPAT; 4890 4935 } 4891 4936 4892 4937 struct element {
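With the new signature the caller has to hand over the element area as well. A minimal, hypothetical caller sketch (assuming the S1G beacon's elements start hdr_len bytes into the frame):

	static bool rx_is_s1g_short_beacon(struct sk_buff *skb, unsigned int hdr_len)
	{
		struct ieee80211_mgmt *mgmt = (void *)skb->data;

		return ieee80211_is_s1g_short_beacon(mgmt->frame_control,
						     skb->data + hdr_len,
						     skb->len - hdr_len);
	}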
+2
include/linux/io_uring_types.h
··· 698 698 struct hlist_node hash_node; 699 699 /* For IOPOLL setup queues, with hybrid polling */ 700 700 u64 iopoll_start; 701 + /* for private io_kiocb freeing */ 702 + struct rcu_head rcu_head; 701 703 }; 702 704 /* internal polling, see IORING_FEAT_FAST_POLL */ 703 705 struct async_poll *apoll;
+1
include/linux/irqchip/irq-msi-lib.h
··· 17 17 18 18 #define MATCH_PLATFORM_MSI BIT(DOMAIN_BUS_PLATFORM_MSI) 19 19 20 + struct msi_domain_info; 20 21 int msi_lib_irq_domain_select(struct irq_domain *d, struct irq_fwspec *fwspec, 21 22 enum irq_domain_bus_token bus_token); 22 23
+5
include/linux/mm.h
··· 2568 2568 return percpu_counter_read_positive(&mm->rss_stat[member]); 2569 2569 } 2570 2570 2571 + static inline unsigned long get_mm_counter_sum(struct mm_struct *mm, int member) 2572 + { 2573 + return percpu_counter_sum_positive(&mm->rss_stat[member]); 2574 + } 2575 + 2571 2576 void mm_trace_rss_stat(struct mm_struct *mm, int member); 2572 2577 2573 2578 static inline void add_mm_counter(struct mm_struct *mm, int member, long value)
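The difference between the two accessors, roughly (ignoring the locking in lib/percpu_counter.c):

	/*
	 * get_mm_counter():     percpu_counter_read_positive() - reads only the
	 *                       cached central count, so it may lag by up to the
	 *                       percpu batch size on every CPU.
	 * get_mm_counter_sum(): percpu_counter_sum_positive() - also folds in each
	 *                       CPU's pending delta: exact but slower, which is
	 *                       what the /proc reporting in this series wants.
	 */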
+2
include/linux/psp-sev.h
··· 594 594 * @imi_en: launch flow is launching an IMI (Incoming Migration Image) for the 595 595 * purpose of guest-assisted migration. 596 596 * @rsvd: reserved 597 + * @desired_tsc_khz: hypervisor desired mean TSC freq in kHz of the guest 597 598 * @gosvw: guest OS-visible workarounds, as defined by hypervisor 598 599 */ 599 600 struct sev_data_snp_launch_start { ··· 604 603 u32 ma_en:1; /* In */ 605 604 u32 imi_en:1; /* In */ 606 605 u32 rsvd:30; 606 + u32 desired_tsc_khz; /* In */ 607 607 u8 gosvw[16]; /* In */ 608 608 } __packed; 609 609
-4
include/linux/sched.h
··· 548 548 u64 nr_failed_migrations_running; 549 549 u64 nr_failed_migrations_hot; 550 550 u64 nr_forced_migrations; 551 - #ifdef CONFIG_NUMA_BALANCING 552 - u64 numa_task_migrated; 553 - u64 numa_task_swapped; 554 - #endif 555 551 556 552 u64 nr_wakeups; 557 553 u64 nr_wakeups_sync;
-2
include/linux/vm_event_item.h
··· 66 66 NUMA_HINT_FAULTS, 67 67 NUMA_HINT_FAULTS_LOCAL, 68 68 NUMA_PAGE_MIGRATE, 69 - NUMA_TASK_MIGRATE, 70 - NUMA_TASK_SWAP, 71 69 #endif 72 70 #ifdef CONFIG_MIGRATION 73 71 PGMIGRATE_SUCCESS, PGMIGRATE_FAIL,
+1 -1
include/net/af_vsock.h
··· 243 243 int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg, 244 244 size_t len, int flags); 245 245 246 - #ifdef CONFIG_BPF_SYSCALL 247 246 extern struct proto vsock_proto; 247 + #ifdef CONFIG_BPF_SYSCALL 248 248 int vsock_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore); 249 249 void __init vsock_bpf_build_proto(void); 250 250 #else
+1 -2
include/net/bluetooth/hci_core.h
··· 1350 1350 rcu_read_lock(); 1351 1351 1352 1352 list_for_each_entry_rcu(c, &h->list, list) { 1353 - if (c->type != BIS_LINK || bacmp(&c->dst, BDADDR_ANY) || 1354 - c->state != state) 1353 + if (c->type != BIS_LINK || c->state != state) 1355 1354 continue; 1356 1355 1357 1356 if (handle == c->iso_qos.bcast.big) {
+1 -1
include/net/netfilter/nf_flow_table.h
··· 370 370 371 371 static inline bool nf_flow_pppoe_proto(struct sk_buff *skb, __be16 *inner_proto) 372 372 { 373 - if (!pskb_may_pull(skb, PPPOE_SES_HLEN)) 373 + if (!pskb_may_pull(skb, ETH_HLEN + PPPOE_SES_HLEN)) 374 374 return false; 375 375 376 376 *inner_proto = __nf_flow_pppoe_proto(skb);
+24 -1
include/net/pkt_sched.h
··· 114 114 struct netlink_ext_ack *extack); 115 115 void qdisc_put_rtab(struct qdisc_rate_table *tab); 116 116 void qdisc_put_stab(struct qdisc_size_table *tab); 117 - void qdisc_warn_nonwc(const char *txt, struct Qdisc *qdisc); 118 117 bool sch_direct_xmit(struct sk_buff *skb, struct Qdisc *q, 119 118 struct net_device *dev, struct netdev_queue *txq, 120 119 spinlock_t *root_lock, bool validate); ··· 287 288 288 289 arg->count++; 289 290 return true; 291 + } 292 + 293 + static inline void qdisc_warn_nonwc(const char *txt, struct Qdisc *qdisc) 294 + { 295 + if (!(qdisc->flags & TCQ_F_WARN_NONWC)) { 296 + pr_warn("%s: %s qdisc %X: is non-work-conserving?\n", 297 + txt, qdisc->ops->id, qdisc->handle >> 16); 298 + qdisc->flags |= TCQ_F_WARN_NONWC; 299 + } 300 + } 301 + 302 + static inline unsigned int qdisc_peek_len(struct Qdisc *sch) 303 + { 304 + struct sk_buff *skb; 305 + unsigned int len; 306 + 307 + skb = sch->ops->peek(sch); 308 + if (unlikely(skb == NULL)) { 309 + qdisc_warn_nonwc("qdisc_peek_len", sch); 310 + return 0; 311 + } 312 + len = qdisc_pkt_len(skb); 313 + 314 + return len; 290 315 } 291 316 292 317 #endif
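qdisc_peek_len() is the kind of helper a shaping qdisc calls before committing to a dequeue; a hedged usage sketch (hypothetical, not taken from this series):

	/* Dequeue from the child only if its head packet fits the remaining
	 * byte budget; a zero return means the child had nothing to peek and
	 * qdisc_warn_nonwc() has already complained once. */
	static struct sk_buff *budget_dequeue(struct Qdisc *child, unsigned int budget)
	{
		unsigned int len = qdisc_peek_len(child);

		if (!len || len > budget)
			return NULL;
		return child->ops->dequeue(child);
	}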
-9
include/sound/tlv320aic32x4.h
··· 40 40 struct aic32x4_setup_data { 41 41 unsigned int gpio_func[5]; 42 42 }; 43 - 44 - struct aic32x4_pdata { 45 - struct aic32x4_setup_data *setup; 46 - u32 power_cfg; 47 - u32 micpga_routing; 48 - bool swapdacs; 49 - int rstn_gpio; 50 - }; 51 - 52 43 #endif
+2 -2
include/uapi/linux/bits.h
··· 4 4 #ifndef _UAPI_LINUX_BITS_H 5 5 #define _UAPI_LINUX_BITS_H 6 6 7 - #define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (BITS_PER_LONG - 1 - (h)))) 7 + #define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (__BITS_PER_LONG - 1 - (h)))) 8 8 9 - #define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (BITS_PER_LONG_LONG - 1 - (h)))) 9 + #define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (__BITS_PER_LONG_LONG - 1 - (h)))) 10 10 11 11 #define __GENMASK_U128(h, l) \ 12 12 ((_BIT128((h)) << 1) - (_BIT128(l)))
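The switch matters because this header is exported to userspace, where only __BITS_PER_LONG from <asm/bitsperlong.h> is available; BITS_PER_LONG is a kernel-internal name. As a worked example on a 64-bit build:

	/* __GENMASK(15, 8) with __BITS_PER_LONG == 64:
	 *   ((~0UL << 8) & (~0UL >> (64 - 1 - 15)))
	 * = 0xffffffffffffff00 & 0x000000000000ffff
	 * = 0x000000000000ff00
	 */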
+4
include/uapi/linux/kvm.h
··· 467 467 __u64 leaf; 468 468 __u64 r11, r12, r13, r14; 469 469 } get_tdvmcall_info; 470 + struct { 471 + __u64 ret; 472 + __u64 vector; 473 + } setup_event_notify; 470 474 }; 471 475 } tdx; 472 476 /* Fix the size of the union. */
+1 -2
io_uring/io_uring.c
··· 1666 1666 1667 1667 io_req_flags_t io_file_get_flags(struct file *file) 1668 1668 { 1669 - struct inode *inode = file_inode(file); 1670 1669 io_req_flags_t res = 0; 1671 1670 1672 1671 BUILD_BUG_ON(REQ_F_ISREG_BIT != REQ_F_SUPPORT_NOWAIT_BIT + 1); 1673 1672 1674 - if (S_ISREG(inode->i_mode) && !(inode->i_flags & S_ANON_INODE)) 1673 + if (S_ISREG(file_inode(file)->i_mode)) 1675 1674 res |= REQ_F_ISREG; 1676 1675 if ((file->f_flags & O_NONBLOCK) || (file->f_mode & FMODE_NOWAIT)) 1677 1676 res |= REQ_F_SUPPORT_NOWAIT;
+2 -2
io_uring/msg_ring.c
··· 82 82 spin_unlock(&ctx->msg_lock); 83 83 } 84 84 if (req) 85 - kmem_cache_free(req_cachep, req); 85 + kfree_rcu(req, rcu_head); 86 86 percpu_ref_put(&ctx->refs); 87 87 } 88 88 ··· 90 90 int res, u32 cflags, u64 user_data) 91 91 { 92 92 if (!READ_ONCE(ctx->submitter_task)) { 93 - kmem_cache_free(req_cachep, req); 93 + kfree_rcu(req, rcu_head); 94 94 return -EOWNERDEAD; 95 95 } 96 96 req->opcode = IORING_OP_NOP;
-3
io_uring/zcrx.c
··· 863 863 static void io_pp_zc_destroy(struct page_pool *pp) 864 864 { 865 865 struct io_zcrx_ifq *ifq = io_pp_to_ifq(pp); 866 - struct io_zcrx_area *area = ifq->area; 867 866 868 - if (WARN_ON_ONCE(area->free_count != area->nia.num_niovs)) 869 - return; 870 867 percpu_ref_put(&ifq->ctx->refs); 871 868 } 872 869
+4 -1
kernel/dma/contiguous.c
··· 222 222 if (size_cmdline != -1) { 223 223 selected_size = size_cmdline; 224 224 selected_base = base_cmdline; 225 - selected_limit = min_not_zero(limit_cmdline, limit); 225 + 226 + /* Hornor the user setup dma address limit */ 227 + selected_limit = limit_cmdline ?: limit; 228 + 226 229 if (base_cmdline + size_cmdline == limit_cmdline) 227 230 fixed = true; 228 231 } else {
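The behavioural change is easiest to see with numbers (illustrative values): with an architecture default limit of 1 GiB and a command line such as cma=256M@0-4G, the old min_not_zero(limit_cmdline, limit) clamped the reservation to the 1 GiB default, while limit_cmdline ?: limit honours the user's 4 GiB bound; when no limit is given on the command line, the default still applies as before.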
+7 -7
kernel/events/core.c
··· 7204 7204 static void perf_sigtrap(struct perf_event *event) 7205 7205 { 7206 7206 /* 7207 + * Both perf_pending_task() and perf_pending_irq() can race with the 7208 + * task exiting. 7209 + */ 7210 + if (current->flags & PF_EXITING) 7211 + return; 7212 + 7213 + /* 7207 7214 * We'd expect this to only occur if the irq_work is delayed and either 7208 7215 * ctx->task or current has changed in the meantime. This can be the 7209 7216 * case on architectures that do not implement arch_irq_work_raise(). 7210 7217 */ 7211 7218 if (WARN_ON_ONCE(event->ctx->task != current)) 7212 - return; 7213 - 7214 - /* 7215 - * Both perf_pending_task() and perf_pending_irq() can race with the 7216 - * task exiting. 7217 - */ 7218 - if (current->flags & PF_EXITING) 7219 7219 return; 7220 7220 7221 7221 send_sig_perf((void __user *)event->pending_addr,
+11 -6
kernel/module/main.c
··· 1573 1573 if (infosec >= info->hdr->e_shnum) 1574 1574 continue; 1575 1575 1576 - /* Don't bother with non-allocated sections */ 1577 - if (!(info->sechdrs[infosec].sh_flags & SHF_ALLOC)) 1576 + /* 1577 + * Don't bother with non-allocated sections. 1578 + * An exception is the percpu section, which has separate allocations 1579 + * for individual CPUs. We relocate the percpu section in the initial 1580 + * ELF template and subsequently copy it to the per-CPU destinations. 1581 + */ 1582 + if (!(info->sechdrs[infosec].sh_flags & SHF_ALLOC) && 1583 + (!infosec || infosec != info->index.pcpu)) 1578 1584 continue; 1579 1585 1580 1586 if (info->sechdrs[i].sh_flags & SHF_RELA_LIVEPATCH) ··· 2702 2696 2703 2697 static int move_module(struct module *mod, struct load_info *info) 2704 2698 { 2705 - int i; 2706 - enum mod_mem_type t = 0; 2707 - int ret = -ENOMEM; 2699 + int i, ret; 2700 + enum mod_mem_type t = MOD_MEM_NUM_TYPES; 2708 2701 bool codetag_section_found = false; 2709 2702 2710 2703 for_each_mod_mem_type(type) { ··· 2781 2776 return 0; 2782 2777 out_err: 2783 2778 module_memory_restore_rox(mod); 2784 - for (t--; t >= 0; t--) 2779 + while (t--) 2785 2780 module_memory_free(mod, t); 2786 2781 if (codetag_section_found) 2787 2782 codetag_free_module_sections(mod);
+2 -7
kernel/sched/core.c
··· 3362 3362 #ifdef CONFIG_NUMA_BALANCING 3363 3363 static void __migrate_swap_task(struct task_struct *p, int cpu) 3364 3364 { 3365 - __schedstat_inc(p->stats.numa_task_swapped); 3366 - count_vm_numa_event(NUMA_TASK_SWAP); 3367 - count_memcg_event_mm(p->mm, NUMA_TASK_SWAP); 3368 - 3369 3365 if (task_on_rq_queued(p)) { 3370 3366 struct rq *src_rq, *dst_rq; 3371 3367 struct rq_flags srf, drf; ··· 7935 7939 if (!cpumask_test_cpu(target_cpu, p->cpus_ptr)) 7936 7940 return -EINVAL; 7937 7941 7938 - __schedstat_inc(p->stats.numa_task_migrated); 7939 - count_vm_numa_event(NUMA_TASK_MIGRATE); 7940 - count_memcg_event_mm(p->mm, NUMA_TASK_MIGRATE); 7942 + /* TODO: This is not properly updating schedstats */ 7943 + 7941 7944 trace_sched_move_numa(p, curr_cpu, target_cpu); 7942 7945 return stop_one_cpu(curr_cpu, migration_cpu_stop, &arg); 7943 7946 }
-4
kernel/sched/debug.c
··· 1210 1210 P_SCHEDSTAT(nr_failed_migrations_running); 1211 1211 P_SCHEDSTAT(nr_failed_migrations_hot); 1212 1212 P_SCHEDSTAT(nr_forced_migrations); 1213 - #ifdef CONFIG_NUMA_BALANCING 1214 - P_SCHEDSTAT(numa_task_migrated); 1215 - P_SCHEDSTAT(numa_task_swapped); 1216 - #endif 1217 1213 P_SCHEDSTAT(nr_wakeups); 1218 1214 P_SCHEDSTAT(nr_wakeups_sync); 1219 1215 P_SCHEDSTAT(nr_wakeups_migrate);
+3
lib/alloc_tag.c
··· 135 135 struct codetag_bytes n; 136 136 unsigned int i, nr = 0; 137 137 138 + if (IS_ERR_OR_NULL(alloc_tag_cttype)) 139 + return 0; 140 + 138 141 if (can_sleep) 139 142 codetag_lock_module_list(alloc_tag_cttype, true); 140 143 else if (!codetag_trylock_module_list(alloc_tag_cttype))
+1
lib/maple_tree.c
··· 5319 5319 struct maple_enode *start; 5320 5320 5321 5321 if (mte_is_leaf(enode)) { 5322 + mte_set_node_dead(enode); 5322 5323 node->type = mte_node_type(enode); 5323 5324 goto free_leaf; 5324 5325 }
+4 -4
mm/damon/core.c
··· 1449 1449 } 1450 1450 } 1451 1451 target_access_events = max_access_events * goal_bp / 10000; 1452 + target_access_events = target_access_events ? : 1; 1452 1453 return access_events * 10000 / target_access_events; 1453 1454 } 1454 1455 ··· 2356 2355 * 2357 2356 * If there is a &struct damon_call_control request that registered via 2358 2357 * &damon_call() on @ctx, do or cancel the invocation of the function depending 2359 - * on @cancel. @cancel is set when the kdamond is deactivated by DAMOS 2360 - * watermarks, or the kdamond is already out of the main loop and therefore 2361 - * will be terminated. 2358 + * on @cancel. @cancel is set when the kdamond is already out of the main loop 2359 + * and therefore will be terminated. 2362 2360 */ 2363 2361 static void kdamond_call(struct damon_ctx *ctx, bool cancel) 2364 2362 { ··· 2405 2405 if (ctx->callback.after_wmarks_check && 2406 2406 ctx->callback.after_wmarks_check(ctx)) 2407 2407 break; 2408 - kdamond_call(ctx, true); 2408 + kdamond_call(ctx, false); 2409 2409 damos_walk_cancel(ctx); 2410 2410 } 2411 2411 return -EBUSY;
+6 -3
mm/hugetlb.c
··· 2340 2340 struct folio *folio; 2341 2341 2342 2342 spin_lock_irq(&hugetlb_lock); 2343 + if (!h->resv_huge_pages) { 2344 + spin_unlock_irq(&hugetlb_lock); 2345 + return NULL; 2346 + } 2347 + 2343 2348 folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask, preferred_nid, 2344 2349 nmask); 2345 - if (folio) { 2346 - VM_BUG_ON(!h->resv_huge_pages); 2350 + if (folio) 2347 2351 h->resv_huge_pages--; 2348 - } 2349 2352 2350 2353 spin_unlock_irq(&hugetlb_lock); 2351 2354 return folio;
+2 -43
mm/kasan/report.c
··· 370 370 sizeof(init_thread_union.stack)); 371 371 } 372 372 373 - /* 374 - * This function is invoked with report_lock (a raw_spinlock) held. A 375 - * PREEMPT_RT kernel cannot call find_vm_area() as it will acquire a sleeping 376 - * rt_spinlock. 377 - * 378 - * For !RT kernel, the PROVE_RAW_LOCK_NESTING config option will print a 379 - * lockdep warning for this raw_spinlock -> spinlock dependency. This config 380 - * option is enabled by default to ensure better test coverage to expose this 381 - * kind of RT kernel problem. This lockdep splat, however, can be suppressed 382 - * by using DEFINE_WAIT_OVERRIDE_MAP() if it serves a useful purpose and the 383 - * invalid PREEMPT_RT case has been taken care of. 384 - */ 385 - static inline struct vm_struct *kasan_find_vm_area(void *addr) 386 - { 387 - static DEFINE_WAIT_OVERRIDE_MAP(vmalloc_map, LD_WAIT_SLEEP); 388 - struct vm_struct *va; 389 - 390 - if (IS_ENABLED(CONFIG_PREEMPT_RT)) 391 - return NULL; 392 - 393 - /* 394 - * Suppress lockdep warning and fetch vmalloc area of the 395 - * offending address. 396 - */ 397 - lock_map_acquire_try(&vmalloc_map); 398 - va = find_vm_area(addr); 399 - lock_map_release(&vmalloc_map); 400 - return va; 401 - } 402 - 403 373 static void print_address_description(void *addr, u8 tag, 404 374 struct kasan_report_info *info) 405 375 { ··· 399 429 } 400 430 401 431 if (is_vmalloc_addr(addr)) { 402 - struct vm_struct *va = kasan_find_vm_area(addr); 403 - 404 - if (va) { 405 - pr_err("The buggy address belongs to the virtual mapping at\n" 406 - " [%px, %px) created by:\n" 407 - " %pS\n", 408 - va->addr, va->addr + va->size, va->caller); 409 - pr_err("\n"); 410 - 411 - page = vmalloc_to_page(addr); 412 - } else { 413 - pr_err("The buggy address %px belongs to a vmalloc virtual mapping\n", addr); 414 - } 432 + pr_err("The buggy address %px belongs to a vmalloc virtual mapping\n", addr); 433 + page = vmalloc_to_page(addr); 415 434 } 416 435 417 436 if (page) {
-2
mm/memcontrol.c
··· 474 474 NUMA_PAGE_MIGRATE, 475 475 NUMA_PTE_UPDATES, 476 476 NUMA_HINT_FAULTS, 477 - NUMA_TASK_MIGRATE, 478 - NUMA_TASK_SWAP, 479 477 #endif 480 478 }; 481 479
+8 -6
mm/migrate.c
··· 2399 2399 2400 2400 static int get_compat_pages_array(const void __user *chunk_pages[], 2401 2401 const void __user * __user *pages, 2402 + unsigned long chunk_offset, 2402 2403 unsigned long chunk_nr) 2403 2404 { 2404 2405 compat_uptr_t __user *pages32 = (compat_uptr_t __user *)pages; ··· 2407 2406 int i; 2408 2407 2409 2408 for (i = 0; i < chunk_nr; i++) { 2410 - if (get_user(p, pages32 + i)) 2409 + if (get_user(p, pages32 + chunk_offset + i)) 2411 2410 return -EFAULT; 2412 2411 chunk_pages[i] = compat_ptr(p); 2413 2412 } ··· 2426 2425 #define DO_PAGES_STAT_CHUNK_NR 16UL 2427 2426 const void __user *chunk_pages[DO_PAGES_STAT_CHUNK_NR]; 2428 2427 int chunk_status[DO_PAGES_STAT_CHUNK_NR]; 2428 + unsigned long chunk_offset = 0; 2429 2429 2430 2430 while (nr_pages) { 2431 2431 unsigned long chunk_nr = min(nr_pages, DO_PAGES_STAT_CHUNK_NR); 2432 2432 2433 2433 if (in_compat_syscall()) { 2434 2434 if (get_compat_pages_array(chunk_pages, pages, 2435 - chunk_nr)) 2435 + chunk_offset, chunk_nr)) 2436 2436 break; 2437 2437 } else { 2438 - if (copy_from_user(chunk_pages, pages, 2438 + if (copy_from_user(chunk_pages, pages + chunk_offset, 2439 2439 chunk_nr * sizeof(*chunk_pages))) 2440 2440 break; 2441 2441 } 2442 2442 2443 2443 do_pages_stat_array(mm, chunk_nr, chunk_pages, chunk_status); 2444 2444 2445 - if (copy_to_user(status, chunk_status, chunk_nr * sizeof(*status))) 2445 + if (copy_to_user(status + chunk_offset, chunk_status, 2446 + chunk_nr * sizeof(*status))) 2446 2447 break; 2447 2448 2448 - pages += chunk_nr; 2449 - status += chunk_nr; 2449 + chunk_offset += chunk_nr; 2450 2450 nr_pages -= chunk_nr; 2451 2451 } 2452 2452 return nr_pages ? -EFAULT : 0;
+28 -18
mm/rmap.c
··· 1845 1845 #endif 1846 1846 } 1847 1847 1848 - /* We support batch unmapping of PTEs for lazyfree large folios */ 1849 - static inline bool can_batch_unmap_folio_ptes(unsigned long addr, 1850 - struct folio *folio, pte_t *ptep) 1848 + static inline unsigned int folio_unmap_pte_batch(struct folio *folio, 1849 + struct page_vma_mapped_walk *pvmw, 1850 + enum ttu_flags flags, pte_t pte) 1851 1851 { 1852 1852 const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY; 1853 - int max_nr = folio_nr_pages(folio); 1854 - pte_t pte = ptep_get(ptep); 1853 + unsigned long end_addr, addr = pvmw->address; 1854 + struct vm_area_struct *vma = pvmw->vma; 1855 + unsigned int max_nr; 1855 1856 1857 + if (flags & TTU_HWPOISON) 1858 + return 1; 1859 + if (!folio_test_large(folio)) 1860 + return 1; 1861 + 1862 + /* We may only batch within a single VMA and a single page table. */ 1863 + end_addr = pmd_addr_end(addr, vma->vm_end); 1864 + max_nr = (end_addr - addr) >> PAGE_SHIFT; 1865 + 1866 + /* We only support lazyfree batching for now ... */ 1856 1867 if (!folio_test_anon(folio) || folio_test_swapbacked(folio)) 1857 - return false; 1868 + return 1; 1858 1869 if (pte_unused(pte)) 1859 - return false; 1860 - if (pte_pfn(pte) != folio_pfn(folio)) 1861 - return false; 1870 + return 1; 1862 1871 1863 - return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL, 1864 - NULL, NULL) == max_nr; 1872 + return folio_pte_batch(folio, addr, pvmw->pte, pte, max_nr, fpb_flags, 1873 + NULL, NULL, NULL); 1865 1874 } 1866 1875 1867 1876 /* ··· 2033 2024 if (pte_dirty(pteval)) 2034 2025 folio_mark_dirty(folio); 2035 2026 } else if (likely(pte_present(pteval))) { 2036 - if (folio_test_large(folio) && !(flags & TTU_HWPOISON) && 2037 - can_batch_unmap_folio_ptes(address, folio, pvmw.pte)) 2038 - nr_pages = folio_nr_pages(folio); 2027 + nr_pages = folio_unmap_pte_batch(folio, &pvmw, flags, pteval); 2039 2028 end_addr = address + nr_pages * PAGE_SIZE; 2040 2029 flush_cache_range(vma, address, end_addr); 2041 2030 ··· 2213 2206 hugetlb_remove_rmap(folio); 2214 2207 } else { 2215 2208 folio_remove_rmap_ptes(folio, subpage, nr_pages, vma); 2216 - folio_ref_sub(folio, nr_pages - 1); 2217 2209 } 2218 2210 if (vma->vm_flags & VM_LOCKED) 2219 2211 mlock_drain_local(); 2220 - folio_put(folio); 2221 - /* We have already batched the entire folio */ 2222 - if (nr_pages > 1) 2212 + folio_put_refs(folio, nr_pages); 2213 + 2214 + /* 2215 + * If we are sure that we batched the entire folio and cleared 2216 + * all PTEs, we can just optimize and stop right here. 2217 + */ 2218 + if (nr_pages == folio_nr_pages(folio)) 2223 2219 goto walk_done; 2224 2220 continue; 2225 2221 walk_abort:
+15 -7
mm/vmalloc.c
··· 514 514 unsigned long end, pgprot_t prot, struct page **pages, int *nr, 515 515 pgtbl_mod_mask *mask) 516 516 { 517 + int err = 0; 517 518 pte_t *pte; 518 519 519 520 /* ··· 531 530 do { 532 531 struct page *page = pages[*nr]; 533 532 534 - if (WARN_ON(!pte_none(ptep_get(pte)))) 535 - return -EBUSY; 536 - if (WARN_ON(!page)) 537 - return -ENOMEM; 538 - if (WARN_ON(!pfn_valid(page_to_pfn(page)))) 539 - return -EINVAL; 533 + if (WARN_ON(!pte_none(ptep_get(pte)))) { 534 + err = -EBUSY; 535 + break; 536 + } 537 + if (WARN_ON(!page)) { 538 + err = -ENOMEM; 539 + break; 540 + } 541 + if (WARN_ON(!pfn_valid(page_to_pfn(page)))) { 542 + err = -EINVAL; 543 + break; 544 + } 540 545 541 546 set_pte_at(&init_mm, addr, pte, mk_pte(page, prot)); 542 547 (*nr)++; ··· 550 543 551 544 arch_leave_lazy_mmu_mode(); 552 545 *mask |= PGTBL_PTE_MODIFIED; 553 - return 0; 546 + 547 + return err; 554 548 } 555 549 556 550 static int vmap_pages_pmd_range(pud_t *pud, unsigned long addr,
-2
mm/vmstat.c
··· 1346 1346 "numa_hint_faults", 1347 1347 "numa_hint_faults_local", 1348 1348 "numa_pages_migrated", 1349 - "numa_task_migrated", 1350 - "numa_task_swapped", 1351 1349 #endif 1352 1350 #ifdef CONFIG_MIGRATION 1353 1351 "pgmigrate_success",
+1
net/appletalk/ddp.c
··· 576 576 577 577 /* Fill in the routing entry */ 578 578 rt->target = ta->sat_addr; 579 + dev_put(rt->dev); /* Release old device */ 579 580 dev_hold(devhint); 580 581 rt->dev = devhint; 581 582 rt->flags = r->rt_flags;
+48 -16
net/atm/clip.c
··· 45 45 #include <net/atmclip.h> 46 46 47 47 static struct net_device *clip_devs; 48 - static struct atm_vcc *atmarpd; 48 + static struct atm_vcc __rcu *atmarpd; 49 + static DEFINE_MUTEX(atmarpd_lock); 49 50 static struct timer_list idle_timer; 50 51 static const struct neigh_ops clip_neigh_ops; 51 52 ··· 54 53 { 55 54 struct sock *sk; 56 55 struct atmarp_ctrl *ctrl; 56 + struct atm_vcc *vcc; 57 57 struct sk_buff *skb; 58 + int err = 0; 58 59 59 60 pr_debug("(%d)\n", type); 60 - if (!atmarpd) 61 - return -EUNATCH; 61 + 62 + rcu_read_lock(); 63 + vcc = rcu_dereference(atmarpd); 64 + if (!vcc) { 65 + err = -EUNATCH; 66 + goto unlock; 67 + } 62 68 skb = alloc_skb(sizeof(struct atmarp_ctrl), GFP_ATOMIC); 63 - if (!skb) 64 - return -ENOMEM; 69 + if (!skb) { 70 + err = -ENOMEM; 71 + goto unlock; 72 + } 65 73 ctrl = skb_put(skb, sizeof(struct atmarp_ctrl)); 66 74 ctrl->type = type; 67 75 ctrl->itf_num = itf; 68 76 ctrl->ip = ip; 69 - atm_force_charge(atmarpd, skb->truesize); 77 + atm_force_charge(vcc, skb->truesize); 70 78 71 - sk = sk_atm(atmarpd); 79 + sk = sk_atm(vcc); 72 80 skb_queue_tail(&sk->sk_receive_queue, skb); 73 81 sk->sk_data_ready(sk); 74 - return 0; 82 + unlock: 83 + rcu_read_unlock(); 84 + return err; 75 85 } 76 86 77 87 static void link_vcc(struct clip_vcc *clip_vcc, struct atmarp_entry *entry) ··· 429 417 430 418 if (!vcc->push) 431 419 return -EBADFD; 420 + if (vcc->user_back) 421 + return -EINVAL; 432 422 clip_vcc = kmalloc(sizeof(struct clip_vcc), GFP_KERNEL); 433 423 if (!clip_vcc) 434 424 return -ENOMEM; ··· 621 607 { 622 608 pr_debug("\n"); 623 609 624 - rtnl_lock(); 625 - atmarpd = NULL; 610 + mutex_lock(&atmarpd_lock); 611 + RCU_INIT_POINTER(atmarpd, NULL); 612 + mutex_unlock(&atmarpd_lock); 613 + 614 + synchronize_rcu(); 626 615 skb_queue_purge(&sk_atm(vcc)->sk_receive_queue); 627 - rtnl_unlock(); 628 616 629 617 pr_debug("(done)\n"); 630 618 module_put(THIS_MODULE); 631 619 } 632 620 621 + static int atmarpd_send(struct atm_vcc *vcc, struct sk_buff *skb) 622 + { 623 + atm_return_tx(vcc, skb); 624 + dev_kfree_skb_any(skb); 625 + return 0; 626 + } 627 + 633 628 static const struct atmdev_ops atmarpd_dev_ops = { 634 - .close = atmarpd_close 629 + .close = atmarpd_close, 630 + .send = atmarpd_send 635 631 }; 636 632 637 633 ··· 655 631 656 632 static int atm_init_atmarp(struct atm_vcc *vcc) 657 633 { 658 - rtnl_lock(); 634 + if (vcc->push == clip_push) 635 + return -EINVAL; 636 + 637 + mutex_lock(&atmarpd_lock); 659 638 if (atmarpd) { 660 - rtnl_unlock(); 639 + mutex_unlock(&atmarpd_lock); 661 640 return -EADDRINUSE; 662 641 } 663 642 664 643 mod_timer(&idle_timer, jiffies + CLIP_CHECK_INTERVAL * HZ); 665 644 666 - atmarpd = vcc; 645 + rcu_assign_pointer(atmarpd, vcc); 667 646 set_bit(ATM_VF_META, &vcc->flags); 668 647 set_bit(ATM_VF_READY, &vcc->flags); 669 648 /* allow replies and avoid getting closed if signaling dies */ ··· 675 648 vcc->push = NULL; 676 649 vcc->pop = NULL; /* crash */ 677 650 vcc->push_oam = NULL; /* crash */ 678 - rtnl_unlock(); 651 + mutex_unlock(&atmarpd_lock); 679 652 return 0; 680 653 } 681 654 682 655 static int clip_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg) 683 656 { 684 657 struct atm_vcc *vcc = ATM_SD(sock); 658 + struct sock *sk = sock->sk; 685 659 int err = 0; 686 660 687 661 switch (cmd) { ··· 703 675 err = clip_create(arg); 704 676 break; 705 677 case ATMARPD_CTRL: 678 + lock_sock(sk); 706 679 err = atm_init_atmarp(vcc); 707 680 if (!err) { 708 681 sock->state = SS_CONNECTED; 709 682 __module_get(THIS_MODULE); 710 
683 } 684 + release_sock(sk); 711 685 break; 712 686 case ATMARP_MKIP: 687 + lock_sock(sk); 713 688 err = clip_mkip(vcc, arg); 689 + release_sock(sk); 714 690 break; 715 691 case ATMARP_SETENTRY: 716 692 err = clip_setentry(vcc, (__force __be32)arg);
+3
net/bluetooth/hci_event.c
··· 6966 6966 bis->iso_qos.bcast.in.sdu = le16_to_cpu(ev->max_pdu); 6967 6967 6968 6968 if (!ev->status) { 6969 + bis->state = BT_CONNECTED; 6969 6970 set_bit(HCI_CONN_BIG_SYNC, &bis->flags); 6971 + hci_debugfs_create_conn(bis); 6972 + hci_conn_add_sysfs(bis); 6970 6973 hci_iso_setup_path(bis); 6971 6974 } 6972 6975 }
+2 -2
net/bluetooth/hci_sync.c
··· 1345 1345 * Command Disallowed error, so we must first disable the 1346 1346 * instance if it is active. 1347 1347 */ 1348 - if (adv && !adv->pending) { 1348 + if (adv) { 1349 1349 err = hci_disable_ext_adv_instance_sync(hdev, instance); 1350 1350 if (err) 1351 1351 return err; ··· 5493 5493 { 5494 5494 struct hci_cp_disconnect cp; 5495 5495 5496 - if (test_bit(HCI_CONN_BIG_CREATED, &conn->flags)) { 5496 + if (conn->type == BIS_LINK) { 5497 5497 /* This is a BIS connection, hci_conn_del will 5498 5498 * do the necessary cleanup. 5499 5499 */
+1 -1
net/ipv4/tcp.c
··· 1176 1176 goto do_error; 1177 1177 1178 1178 while (msg_data_left(msg)) { 1179 - ssize_t copy = 0; 1179 + int copy = 0; 1180 1180 1181 1181 skb = tcp_write_queue_tail(sk); 1182 1182 if (skb)
+3 -1
net/ipv4/tcp_input.c
··· 5181 5181 skb_condense(skb); 5182 5182 skb_set_owner_r(skb, sk); 5183 5183 } 5184 - tcp_rcvbuf_grow(sk); 5184 + /* do not grow rcvbuf for not-yet-accepted or orphaned sockets. */ 5185 + if (sk->sk_socket) 5186 + tcp_rcvbuf_grow(sk); 5185 5187 } 5186 5188 5187 5189 static int __must_check tcp_queue_rcv(struct sock *sk, struct sk_buff *skb,
+2 -7
net/ipv6/addrconf.c
··· 3525 3525 3526 3526 ASSERT_RTNL(); 3527 3527 3528 - idev = ipv6_find_idev(dev); 3529 - if (IS_ERR(idev)) { 3530 - pr_debug("%s: add_dev failed\n", __func__); 3528 + idev = addrconf_add_dev(dev); 3529 + if (IS_ERR(idev)) 3531 3530 return; 3532 - } 3533 3531 3534 3532 /* Generate the IPv6 link-local address using addrconf_addr_gen(), 3535 3533 * unless we have an IPv4 GRE device not bound to an IP address and ··· 3541 3543 } 3542 3544 3543 3545 add_v4_addrs(idev); 3544 - 3545 - if (dev->flags & IFF_POINTOPOINT) 3546 - addrconf_add_mroute(dev); 3547 3546 } 3548 3547 #endif 3549 3548
+14
net/mac80211/cfg.c
··· 1959 1959 ieee80211_sta_init_nss(link_sta); 1960 1960 1961 1961 if (params->opmode_notif_used) { 1962 + enum nl80211_chan_width width = link->conf->chanreq.oper.width; 1963 + 1964 + switch (width) { 1965 + case NL80211_CHAN_WIDTH_20: 1966 + case NL80211_CHAN_WIDTH_40: 1967 + case NL80211_CHAN_WIDTH_80: 1968 + case NL80211_CHAN_WIDTH_160: 1969 + case NL80211_CHAN_WIDTH_80P80: 1970 + case NL80211_CHAN_WIDTH_320: /* not VHT, allowed for HE/EHT */ 1971 + break; 1972 + default: 1973 + return -EINVAL; 1974 + } 1975 + 1962 1976 /* returned value is only needed for rc update, but the 1963 1977 * rc isn't initialized here yet, so ignore it 1964 1978 */
+2 -2
net/mac80211/iface.c
··· 1150 1150 { 1151 1151 sdata->local = local; 1152 1152 1153 + INIT_LIST_HEAD(&sdata->key_list); 1154 + 1153 1155 /* 1154 1156 * Initialize the default link, so we can use link_id 0 for non-MLD, 1155 1157 * and that continues to work for non-MLD-aware drivers that use just ··· 2211 2209 ieee80211_sdata_init(local, sdata); 2212 2210 2213 2211 ieee80211_init_frag_cache(&sdata->frags); 2214 - 2215 - INIT_LIST_HEAD(&sdata->key_list); 2216 2212 2217 2213 wiphy_delayed_work_init(&sdata->dec_tailroom_needed_wk, 2218 2214 ieee80211_delayed_tailroom_dec);
+9 -3
net/mac80211/mlme.c
··· 3934 3934 3935 3935 lockdep_assert_wiphy(local->hw.wiphy); 3936 3936 3937 + if (frame_buf) 3938 + memset(frame_buf, 0, IEEE80211_DEAUTH_FRAME_LEN); 3939 + 3937 3940 if (WARN_ON(!ap_sta)) 3938 3941 return; 3939 3942 ··· 7198 7195 struct ieee80211_bss_conf *bss_conf = link->conf; 7199 7196 struct ieee80211_vif_cfg *vif_cfg = &sdata->vif.cfg; 7200 7197 struct ieee80211_mgmt *mgmt = (void *) hdr; 7198 + struct ieee80211_ext *ext = NULL; 7201 7199 size_t baselen; 7202 7200 struct ieee802_11_elems *elems; 7203 7201 struct ieee80211_local *local = sdata->local; ··· 7224 7220 /* Process beacon from the current BSS */ 7225 7221 bssid = ieee80211_get_bssid(hdr, len, sdata->vif.type); 7226 7222 if (ieee80211_is_s1g_beacon(mgmt->frame_control)) { 7227 - struct ieee80211_ext *ext = (void *) mgmt; 7223 + ext = (void *)mgmt; 7228 7224 variable = ext->u.s1g_beacon.variable + 7229 7225 ieee80211_s1g_optional_len(ext->frame_control); 7230 7226 } ··· 7411 7407 } 7412 7408 7413 7409 if ((ncrc == link->u.mgd.beacon_crc && link->u.mgd.beacon_crc_valid) || 7414 - ieee80211_is_s1g_short_beacon(mgmt->frame_control)) 7410 + (ext && ieee80211_is_s1g_short_beacon(ext->frame_control, 7411 + parse_params.start, 7412 + parse_params.len))) 7415 7413 goto free; 7416 7414 link->u.mgd.beacon_crc = ncrc; 7417 7415 link->u.mgd.beacon_crc_valid = true; ··· 10705 10699 */ 10706 10700 for_each_mle_subelement(sub, (const u8 *)elems->ml_epcs, 10707 10701 elems->ml_epcs_len) { 10702 + struct ieee802_11_elems *link_elems __free(kfree) = NULL; 10708 10703 struct ieee80211_link_data *link; 10709 - struct ieee802_11_elems *link_elems __free(kfree); 10710 10704 u8 *pos = (void *)sub->data; 10711 10705 u16 control; 10712 10706 ssize_t len;
+2 -4
net/mac80211/parse.c
··· 758 758 { 759 759 const struct element *elem, *sub; 760 760 size_t profile_len = 0; 761 - bool found = false; 762 761 763 762 if (!bss || !bss->transmitted_bss) 764 763 return profile_len; ··· 808 809 index[2], 809 810 new_bssid); 810 811 if (ether_addr_equal(new_bssid, bss->bssid)) { 811 - found = true; 812 812 elems->bssid_index_len = index[1]; 813 813 elems->bssid_index = (void *)&index[2]; 814 - break; 814 + return profile_len; 815 815 } 816 816 } 817 817 } 818 818 819 - return found ? profile_len : 0; 819 + return 0; 820 820 } 821 821 822 822 static void
+4 -5
net/mac80211/util.c
··· 2144 2144 cfg80211_sched_scan_stopped_locked(local->hw.wiphy, 0); 2145 2145 2146 2146 wake_up: 2147 - 2148 - if (local->virt_monitors > 0 && 2149 - local->virt_monitors == local->open_count) 2150 - ieee80211_add_virtual_monitor(local); 2151 - 2152 2147 /* 2153 2148 * Clear the WLAN_STA_BLOCK_BA flag so new aggregation 2154 2149 * sessions can be established after a resume. ··· 2196 2201 ieee80211_sta_restart(sdata); 2197 2202 } 2198 2203 } 2204 + 2205 + if (local->virt_monitors > 0 && 2206 + local->virt_monitors == local->open_count) 2207 + ieee80211_add_virtual_monitor(local); 2199 2208 2200 2209 if (!suspended) 2201 2210 return 0;
+54 -36
net/netlink/af_netlink.c
··· 387 387 WARN_ON(skb->sk != NULL); 388 388 skb->sk = sk; 389 389 skb->destructor = netlink_skb_destructor; 390 - atomic_add(skb->truesize, &sk->sk_rmem_alloc); 391 390 sk_mem_charge(sk, skb->truesize); 392 391 } 393 392 ··· 1211 1212 int netlink_attachskb(struct sock *sk, struct sk_buff *skb, 1212 1213 long *timeo, struct sock *ssk) 1213 1214 { 1215 + DECLARE_WAITQUEUE(wait, current); 1214 1216 struct netlink_sock *nlk; 1217 + unsigned int rmem; 1215 1218 1216 1219 nlk = nlk_sk(sk); 1220 + rmem = atomic_add_return(skb->truesize, &sk->sk_rmem_alloc); 1217 1221 1218 - if ((atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf || 1219 - test_bit(NETLINK_S_CONGESTED, &nlk->state))) { 1220 - DECLARE_WAITQUEUE(wait, current); 1221 - if (!*timeo) { 1222 - if (!ssk || netlink_is_kernel(ssk)) 1223 - netlink_overrun(sk); 1224 - sock_put(sk); 1225 - kfree_skb(skb); 1226 - return -EAGAIN; 1227 - } 1228 - 1229 - __set_current_state(TASK_INTERRUPTIBLE); 1230 - add_wait_queue(&nlk->wait, &wait); 1231 - 1232 - if ((atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf || 1233 - test_bit(NETLINK_S_CONGESTED, &nlk->state)) && 1234 - !sock_flag(sk, SOCK_DEAD)) 1235 - *timeo = schedule_timeout(*timeo); 1236 - 1237 - __set_current_state(TASK_RUNNING); 1238 - remove_wait_queue(&nlk->wait, &wait); 1239 - sock_put(sk); 1240 - 1241 - if (signal_pending(current)) { 1242 - kfree_skb(skb); 1243 - return sock_intr_errno(*timeo); 1244 - } 1245 - return 1; 1222 + if ((rmem == skb->truesize || rmem < READ_ONCE(sk->sk_rcvbuf)) && 1223 + !test_bit(NETLINK_S_CONGESTED, &nlk->state)) { 1224 + netlink_skb_set_owner_r(skb, sk); 1225 + return 0; 1246 1226 } 1247 - netlink_skb_set_owner_r(skb, sk); 1248 - return 0; 1227 + 1228 + atomic_sub(skb->truesize, &sk->sk_rmem_alloc); 1229 + 1230 + if (!*timeo) { 1231 + if (!ssk || netlink_is_kernel(ssk)) 1232 + netlink_overrun(sk); 1233 + sock_put(sk); 1234 + kfree_skb(skb); 1235 + return -EAGAIN; 1236 + } 1237 + 1238 + __set_current_state(TASK_INTERRUPTIBLE); 1239 + add_wait_queue(&nlk->wait, &wait); 1240 + rmem = atomic_read(&sk->sk_rmem_alloc); 1241 + 1242 + if (((rmem && rmem + skb->truesize > READ_ONCE(sk->sk_rcvbuf)) || 1243 + test_bit(NETLINK_S_CONGESTED, &nlk->state)) && 1244 + !sock_flag(sk, SOCK_DEAD)) 1245 + *timeo = schedule_timeout(*timeo); 1246 + 1247 + __set_current_state(TASK_RUNNING); 1248 + remove_wait_queue(&nlk->wait, &wait); 1249 + sock_put(sk); 1250 + 1251 + if (signal_pending(current)) { 1252 + kfree_skb(skb); 1253 + return sock_intr_errno(*timeo); 1254 + } 1255 + 1256 + return 1; 1249 1257 } 1250 1258 1251 1259 static int __netlink_sendskb(struct sock *sk, struct sk_buff *skb) ··· 1313 1307 ret = -ECONNREFUSED; 1314 1308 if (nlk->netlink_rcv != NULL) { 1315 1309 ret = skb->len; 1310 + atomic_add(skb->truesize, &sk->sk_rmem_alloc); 1316 1311 netlink_skb_set_owner_r(skb, sk); 1317 1312 NETLINK_CB(skb).sk = ssk; 1318 1313 netlink_deliver_tap_kernel(sk, ssk, skb); ··· 1390 1383 static int netlink_broadcast_deliver(struct sock *sk, struct sk_buff *skb) 1391 1384 { 1392 1385 struct netlink_sock *nlk = nlk_sk(sk); 1386 + unsigned int rmem, rcvbuf; 1393 1387 1394 - if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf && 1388 + rmem = atomic_add_return(skb->truesize, &sk->sk_rmem_alloc); 1389 + rcvbuf = READ_ONCE(sk->sk_rcvbuf); 1390 + 1391 + if ((rmem == skb->truesize || rmem <= rcvbuf) && 1395 1392 !test_bit(NETLINK_S_CONGESTED, &nlk->state)) { 1396 1393 netlink_skb_set_owner_r(skb, sk); 1397 1394 __netlink_sendskb(sk, skb); 1398 - return atomic_read(&sk->sk_rmem_alloc) > 
(sk->sk_rcvbuf >> 1); 1395 + return rmem > (rcvbuf >> 1); 1399 1396 } 1397 + 1398 + atomic_sub(skb->truesize, &sk->sk_rmem_alloc); 1400 1399 return -1; 1401 1400 } 1402 1401 ··· 2258 2245 struct netlink_ext_ack extack = {}; 2259 2246 struct netlink_callback *cb; 2260 2247 struct sk_buff *skb = NULL; 2248 + unsigned int rmem, rcvbuf; 2261 2249 size_t max_recvmsg_len; 2262 2250 struct module *module; 2263 2251 int err = -ENOBUFS; ··· 2271 2257 err = -EINVAL; 2272 2258 goto errout_skb; 2273 2259 } 2274 - 2275 - if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf) 2276 - goto errout_skb; 2277 2260 2278 2261 /* NLMSG_GOODSIZE is small to avoid high order allocations being 2279 2262 * required, but it makes sense to _attempt_ a 32KiB allocation ··· 2293 2282 } 2294 2283 if (!skb) 2295 2284 goto errout_skb; 2285 + 2286 + rcvbuf = READ_ONCE(sk->sk_rcvbuf); 2287 + rmem = atomic_add_return(skb->truesize, &sk->sk_rmem_alloc); 2288 + if (rmem != skb->truesize && rmem >= rcvbuf) { 2289 + atomic_sub(skb->truesize, &sk->sk_rmem_alloc); 2290 + goto errout_skb; 2291 + } 2296 2292 2297 2293 /* Trim skb to allocated size. User is expected to provide buffer as 2298 2294 * large as max(min_dump_alloc, 32KiB (max_recvmsg_len capped at
+9 -6
net/rxrpc/ar-internal.h
··· 361 361 struct list_head new_client_calls; /* Newly created client calls need connection */ 362 362 spinlock_t client_call_lock; /* Lock for ->new_client_calls */ 363 363 struct sockaddr_rxrpc srx; /* local address */ 364 - /* Provide a kvec table sufficiently large to manage either a DATA 365 - * packet with a maximum set of jumbo subpackets or a PING ACK padded 366 - * out to 64K with zeropages for PMTUD. 367 - */ 368 - struct kvec kvec[1 + RXRPC_MAX_NR_JUMBO > 3 + 16 ? 369 - 1 + RXRPC_MAX_NR_JUMBO : 3 + 16]; 364 + union { 365 + /* Provide a kvec table sufficiently large to manage either a 366 + * DATA packet with a maximum set of jumbo subpackets or a PING 367 + * ACK padded out to 64K with zeropages for PMTUD. 368 + */ 369 + struct kvec kvec[1 + RXRPC_MAX_NR_JUMBO > 3 + 16 ? 370 + 1 + RXRPC_MAX_NR_JUMBO : 3 + 16]; 371 + struct bio_vec bvec[3 + 16]; 372 + }; 370 373 }; 371 374 372 375 /*
+4
net/rxrpc/call_accept.c
··· 149 149 150 150 id_in_use: 151 151 write_unlock(&rx->call_lock); 152 + rxrpc_prefail_call(call, RXRPC_CALL_LOCAL_ERROR, -EBADSLT); 152 153 rxrpc_cleanup_call(call); 153 154 _leave(" = -EBADSLT"); 154 155 return -EBADSLT; ··· 254 253 unsigned short call_head, conn_head, peer_head; 255 254 unsigned short call_tail, conn_tail, peer_tail; 256 255 unsigned short call_count, conn_count; 256 + 257 + if (!b) 258 + return NULL; 257 259 258 260 /* #calls >= #conns >= #peers must hold true. */ 259 261 call_head = smp_load_acquire(&b->call_backlog_head);
+4 -1
net/rxrpc/output.c
··· 924 924 { 925 925 struct rxrpc_skb_priv *sp = rxrpc_skb(response); 926 926 struct scatterlist sg[16]; 927 - struct bio_vec bvec[16]; 927 + struct bio_vec *bvec = conn->local->bvec; 928 928 struct msghdr msg; 929 929 size_t len = sp->resp.len; 930 930 __be32 wserial; ··· 938 938 if (ret < 0) 939 939 goto fail; 940 940 nr_sg = ret; 941 + ret = -EIO; 942 + if (WARN_ON_ONCE(nr_sg > ARRAY_SIZE(conn->local->bvec))) 943 + goto fail; 941 944 942 945 for (int i = 0; i < nr_sg; i++) 943 946 bvec_set_page(&bvec[i], sg_page(&sg[i]), sg[i].length, sg[i].offset);
+16 -17
net/sched/sch_api.c
··· 336 336 return q; 337 337 } 338 338 339 - static struct Qdisc *qdisc_leaf(struct Qdisc *p, u32 classid) 339 + static struct Qdisc *qdisc_leaf(struct Qdisc *p, u32 classid, 340 + struct netlink_ext_ack *extack) 340 341 { 341 342 unsigned long cl; 342 343 const struct Qdisc_class_ops *cops = p->ops->cl_ops; 343 344 344 - if (cops == NULL) 345 - return NULL; 345 + if (cops == NULL) { 346 + NL_SET_ERR_MSG(extack, "Parent qdisc is not classful"); 347 + return ERR_PTR(-EOPNOTSUPP); 348 + } 346 349 cl = cops->find(p, classid); 347 350 348 - if (cl == 0) 349 - return NULL; 351 + if (cl == 0) { 352 + NL_SET_ERR_MSG(extack, "Specified class not found"); 353 + return ERR_PTR(-ENOENT); 354 + } 350 355 return cops->leaf(p, cl); 351 356 } 352 357 ··· 600 595 pkt_len = 1; 601 596 qdisc_skb_cb(skb)->pkt_len = pkt_len; 602 597 } 603 - 604 - void qdisc_warn_nonwc(const char *txt, struct Qdisc *qdisc) 605 - { 606 - if (!(qdisc->flags & TCQ_F_WARN_NONWC)) { 607 - pr_warn("%s: %s qdisc %X: is non-work-conserving?\n", 608 - txt, qdisc->ops->id, qdisc->handle >> 16); 609 - qdisc->flags |= TCQ_F_WARN_NONWC; 610 - } 611 - } 612 - EXPORT_SYMBOL(qdisc_warn_nonwc); 613 598 614 599 static enum hrtimer_restart qdisc_watchdog(struct hrtimer *timer) 615 600 { ··· 1485 1490 NL_SET_ERR_MSG(extack, "Failed to find qdisc with specified classid"); 1486 1491 return -ENOENT; 1487 1492 } 1488 - q = qdisc_leaf(p, clid); 1493 + q = qdisc_leaf(p, clid, extack); 1489 1494 } else if (dev_ingress_queue(dev)) { 1490 1495 q = rtnl_dereference(dev_ingress_queue(dev)->qdisc_sleeping); 1491 1496 } ··· 1496 1501 NL_SET_ERR_MSG(extack, "Cannot find specified qdisc on specified device"); 1497 1502 return -ENOENT; 1498 1503 } 1504 + if (IS_ERR(q)) 1505 + return PTR_ERR(q); 1499 1506 1500 1507 if (tcm->tcm_handle && q->handle != tcm->tcm_handle) { 1501 1508 NL_SET_ERR_MSG(extack, "Invalid handle"); ··· 1599 1602 NL_SET_ERR_MSG(extack, "Failed to find specified qdisc"); 1600 1603 return -ENOENT; 1601 1604 } 1602 - q = qdisc_leaf(p, clid); 1605 + q = qdisc_leaf(p, clid, extack); 1606 + if (IS_ERR(q)) 1607 + return PTR_ERR(q); 1603 1608 } else if (dev_ingress_queue_create(dev)) { 1604 1609 q = rtnl_dereference(dev_ingress_queue(dev)->qdisc_sleeping); 1605 1610 }
-16
net/sched/sch_hfsc.c
··· 835 835 } 836 836 } 837 837 838 - static unsigned int 839 - qdisc_peek_len(struct Qdisc *sch) 840 - { 841 - struct sk_buff *skb; 842 - unsigned int len; 843 - 844 - skb = sch->ops->peek(sch); 845 - if (unlikely(skb == NULL)) { 846 - qdisc_warn_nonwc("qdisc_peek_len", sch); 847 - return 0; 848 - } 849 - len = qdisc_pkt_len(skb); 850 - 851 - return len; 852 - } 853 - 854 838 static void 855 839 hfsc_adjust_levels(struct hfsc_class *cl) 856 840 {
+1 -1
net/sched/sch_qfq.c
··· 989 989 990 990 if (cl->qdisc->q.qlen == 0) /* no more packets, remove from list */ 991 991 list_del_init(&cl->alist); 992 - else if (cl->deficit < qdisc_pkt_len(cl->qdisc->ops->peek(cl->qdisc))) { 992 + else if (cl->deficit < qdisc_peek_len(cl->qdisc)) { 993 993 cl->deficit += agg->lmax; 994 994 list_move_tail(&cl->alist, &agg->active); 995 995 }
+2
net/tipc/topsrv.c
··· 704 704 for (id = 0; srv->idr_in_use; id++) { 705 705 con = idr_find(&srv->conn_idr, id); 706 706 if (con) { 707 + conn_get(con); 707 708 spin_unlock_bh(&srv->idr_lock); 708 709 tipc_conn_close(con); 710 + conn_put(con); 709 711 spin_lock_bh(&srv->idr_lock); 710 712 } 711 713 }
+46 -11
net/vmw_vsock/af_vsock.c
··· 407 407 408 408 static bool vsock_use_local_transport(unsigned int remote_cid) 409 409 { 410 + lockdep_assert_held(&vsock_register_mutex); 411 + 410 412 if (!transport_local) 411 413 return false; 412 414 ··· 466 464 467 465 remote_flags = vsk->remote_addr.svm_flags; 468 466 467 + mutex_lock(&vsock_register_mutex); 468 + 469 469 switch (sk->sk_type) { 470 470 case SOCK_DGRAM: 471 471 new_transport = transport_dgram; ··· 483 479 new_transport = transport_h2g; 484 480 break; 485 481 default: 486 - return -ESOCKTNOSUPPORT; 482 + ret = -ESOCKTNOSUPPORT; 483 + goto err; 487 484 } 488 485 489 486 if (vsk->transport) { 490 - if (vsk->transport == new_transport) 491 - return 0; 487 + if (vsk->transport == new_transport) { 488 + ret = 0; 489 + goto err; 490 + } 492 491 493 492 /* transport->release() must be called with sock lock acquired. 494 493 * This path can only be taken during vsock_connect(), where we ··· 515 508 /* We increase the module refcnt to prevent the transport unloading 516 509 * while there are open sockets assigned to it. 517 510 */ 518 - if (!new_transport || !try_module_get(new_transport->module)) 519 - return -ENODEV; 511 + if (!new_transport || !try_module_get(new_transport->module)) { 512 + ret = -ENODEV; 513 + goto err; 514 + } 515 + 516 + /* It's safe to release the mutex after a successful try_module_get(). 517 + * Whichever transport `new_transport` points at, it won't go away until 518 + * the last module_put() below or in vsock_deassign_transport(). 519 + */ 520 + mutex_unlock(&vsock_register_mutex); 520 521 521 522 if (sk->sk_type == SOCK_SEQPACKET) { 522 523 if (!new_transport->seqpacket_allow || ··· 543 528 vsk->transport = new_transport; 544 529 545 530 return 0; 531 + err: 532 + mutex_unlock(&vsock_register_mutex); 533 + return ret; 546 534 } 547 535 EXPORT_SYMBOL_GPL(vsock_assign_transport); 548 536 537 + /* 538 + * Provide safe access to static transport_{h2g,g2h,dgram,local} callbacks. 539 + * Otherwise we may race with module removal. Do not use on `vsk->transport`. 540 + */ 541 + static u32 vsock_registered_transport_cid(const struct vsock_transport **transport) 542 + { 543 + u32 cid = VMADDR_CID_ANY; 544 + 545 + mutex_lock(&vsock_register_mutex); 546 + if (*transport) 547 + cid = (*transport)->get_local_cid(); 548 + mutex_unlock(&vsock_register_mutex); 549 + 550 + return cid; 551 + } 552 + 549 553 bool vsock_find_cid(unsigned int cid) 550 554 { 551 - if (transport_g2h && cid == transport_g2h->get_local_cid()) 555 + if (cid == vsock_registered_transport_cid(&transport_g2h)) 552 556 return true; 553 557 554 558 if (transport_h2g && cid == VMADDR_CID_HOST) ··· 2570 2536 unsigned int cmd, void __user *ptr) 2571 2537 { 2572 2538 u32 __user *p = ptr; 2573 - u32 cid = VMADDR_CID_ANY; 2574 2539 int retval = 0; 2540 + u32 cid; 2575 2541 2576 2542 switch (cmd) { 2577 2543 case IOCTL_VM_SOCKETS_GET_LOCAL_CID: 2578 2544 /* To be compatible with the VMCI behavior, we prioritize the 2579 2545 * guest CID instead of well-know host CID (VMADDR_CID_HOST). 2580 2546 */ 2581 - if (transport_g2h) 2582 - cid = transport_g2h->get_local_cid(); 2583 - else if (transport_h2g) 2584 - cid = transport_h2g->get_local_cid(); 2547 + cid = vsock_registered_transport_cid(&transport_g2h); 2548 + if (cid == VMADDR_CID_ANY) 2549 + cid = vsock_registered_transport_cid(&transport_h2g); 2550 + if (cid == VMADDR_CID_ANY) 2551 + cid = vsock_registered_transport_cid(&transport_local); 2585 2552 2586 2553 if (put_user(cid, p) != 0) 2587 2554 retval = -EFAULT;
+5 -2
net/wireless/nl80211.c
··· 229 229 unsigned int len = nla_len(attr); 230 230 const struct element *elem; 231 231 const struct ieee80211_mgmt *mgmt = (void *)data; 232 + const struct ieee80211_ext *ext; 232 233 unsigned int fixedlen, hdrlen; 233 234 bool s1g_bcn; 234 235 ··· 238 237 239 238 s1g_bcn = ieee80211_is_s1g_beacon(mgmt->frame_control); 240 239 if (s1g_bcn) { 241 - fixedlen = offsetof(struct ieee80211_ext, 242 - u.s1g_beacon.variable); 240 + ext = (struct ieee80211_ext *)mgmt; 241 + fixedlen = 242 + offsetof(struct ieee80211_ext, u.s1g_beacon.variable) + 243 + ieee80211_s1g_optional_len(ext->frame_control); 243 244 hdrlen = offsetof(struct ieee80211_ext, u.s1g_beacon); 244 245 } else { 245 246 fixedlen = offsetof(struct ieee80211_mgmt,
+50 -2
net/wireless/util.c
··· 820 820 } 821 821 EXPORT_SYMBOL(ieee80211_is_valid_amsdu); 822 822 823 + 824 + /* 825 + * Detects if an MSDU frame was maliciously converted into an A-MSDU 826 + * frame by an adversary. This is done by parsing the received frame 827 + * as if it were a regular MSDU, even though the A-MSDU flag is set. 828 + * 829 + * For non-mesh interfaces, detection involves checking whether the 830 + * payload, when interpreted as an MSDU, begins with a valid RFC1042 831 + * header. This is done by comparing the A-MSDU subheader's destination 832 + * address to the start of the RFC1042 header. 833 + * 834 + * For mesh interfaces, the MSDU includes a 6-byte Mesh Control field 835 + * and an optional variable-length Mesh Address Extension field before 836 + * the RFC1042 header. The position of the RFC1042 header must therefore 837 + * be calculated based on the mesh header length. 838 + * 839 + * Since this function intentionally parses an A-MSDU frame as an MSDU, 840 + * it only assumes that the A-MSDU subframe header is present, and 841 + * beyond this it performs its own bounds checks under the assumption 842 + * that the frame is instead parsed as a non-aggregated MSDU. 843 + */ 844 + static bool 845 + is_amsdu_aggregation_attack(struct ethhdr *eth, struct sk_buff *skb, 846 + enum nl80211_iftype iftype) 847 + { 848 + int offset; 849 + 850 + /* Non-mesh case can be directly compared */ 851 + if (iftype != NL80211_IFTYPE_MESH_POINT) 852 + return ether_addr_equal(eth->h_dest, rfc1042_header); 853 + 854 + offset = __ieee80211_get_mesh_hdrlen(eth->h_dest[0]); 855 + if (offset == 6) { 856 + /* Mesh case with empty address extension field */ 857 + return ether_addr_equal(eth->h_source, rfc1042_header); 858 + } else if (offset + ETH_ALEN <= skb->len) { 859 + /* Mesh case with non-empty address extension field */ 860 + u8 temp[ETH_ALEN]; 861 + 862 + skb_copy_bits(skb, offset, temp, ETH_ALEN); 863 + return ether_addr_equal(temp, rfc1042_header); 864 + } 865 + 866 + return false; 867 + } 868 + 823 869 void ieee80211_amsdu_to_8023s(struct sk_buff *skb, struct sk_buff_head *list, 824 870 const u8 *addr, enum nl80211_iftype iftype, 825 871 const unsigned int extra_headroom, ··· 907 861 /* the last MSDU has no padding */ 908 862 if (subframe_len > remaining) 909 863 goto purge; 910 - /* mitigate A-MSDU aggregation injection attacks */ 911 - if (ether_addr_equal(hdr.eth.h_dest, rfc1042_header)) 864 + /* mitigate A-MSDU aggregation injection attacks, to be 865 + * checked when processing first subframe (offset == 0). 866 + */ 867 + if (offset == 0 && is_amsdu_aggregation_attack(&hdr.eth, skb, iftype)) 912 868 goto purge; 913 869 914 870 offset += sizeof(struct ethhdr);
+11 -1
rust/kernel/drm/device.rs
··· 66 66 open: Some(drm::File::<T::File>::open_callback), 67 67 postclose: Some(drm::File::<T::File>::postclose_callback), 68 68 unload: None, 69 - release: None, 69 + release: Some(Self::release), 70 70 master_set: None, 71 71 master_drop: None, 72 72 debugfs_init: None, ··· 161 161 162 162 // SAFETY: `ptr` is valid by the safety requirements of this function. 163 163 unsafe { &*ptr.cast() } 164 + } 165 + 166 + extern "C" fn release(ptr: *mut bindings::drm_device) { 167 + // SAFETY: `ptr` is a valid pointer to a `struct drm_device` and embedded in `Self`. 168 + let this = unsafe { Self::from_drm_device(ptr) }; 169 + 170 + // SAFETY: 171 + // - When `release` runs it is guaranteed that there is no further access to `this`. 172 + // - `this` is valid for dropping. 173 + unsafe { core::ptr::drop_in_place(this) }; 164 174 } 165 175 } 166 176
-1
rust/kernel/drm/driver.rs
··· 10 10 drm, 11 11 error::{to_result, Result}, 12 12 prelude::*, 13 - str::CStr, 14 13 types::ARef, 15 14 }; 16 15 use macros::vtable;
+6 -2
samples/damon/mtier.c
··· 164 164 if (enable == enabled) 165 165 return 0; 166 166 167 - if (enable) 168 - return damon_sample_mtier_start(); 167 + if (enable) { 168 + err = damon_sample_mtier_start(); 169 + if (err) 170 + enable = false; 171 + return err; 172 + } 169 173 damon_sample_mtier_stop(); 170 174 return 0; 171 175 }
+6 -2
samples/damon/prcl.c
··· 122 122 if (enable == enabled) 123 123 return 0; 124 124 125 - if (enable) 126 - return damon_sample_prcl_start(); 125 + if (enable) { 126 + err = damon_sample_prcl_start(); 127 + if (err) 128 + enable = false; 129 + return err; 130 + } 127 131 damon_sample_prcl_stop(); 128 132 return 0; 129 133 }
+6 -2
samples/damon/wsse.c
··· 102 102 if (enable == enabled) 103 103 return 0; 104 104 105 - if (enable) 106 - return damon_sample_wsse_start(); 105 + if (enable) { 106 + err = damon_sample_wsse_start(); 107 + if (err) 108 + enable = false; 109 + return err; 110 + } 107 111 damon_sample_wsse_stop(); 108 112 return 0; 109 113 }
+7
scripts/gdb/linux/constants.py.in
··· 20 20 #include <linux/of_fdt.h> 21 21 #include <linux/page_ext.h> 22 22 #include <linux/radix-tree.h> 23 + #include <linux/maple_tree.h> 23 24 #include <linux/slab.h> 24 25 #include <linux/threads.h> 25 26 #include <linux/vmalloc.h> ··· 93 92 LX_GDBPARSED(RADIX_TREE_MAP_SIZE) 94 93 LX_GDBPARSED(RADIX_TREE_MAP_SHIFT) 95 94 LX_GDBPARSED(RADIX_TREE_MAP_MASK) 95 + 96 + /* linux/maple_tree.h */ 97 + LX_VALUE(MAPLE_NODE_SLOTS) 98 + LX_VALUE(MAPLE_RANGE64_SLOTS) 99 + LX_VALUE(MAPLE_ARANGE64_SLOTS) 100 + LX_GDBPARSED(MAPLE_NODE_MASK) 96 101 97 102 /* linux/vmalloc.h */ 98 103 LX_VALUE(VM_IOREMAP)
+8 -8
scripts/gdb/linux/interrupts.py
··· 7 7 from linux import constants 8 8 from linux import cpus 9 9 from linux import utils 10 - from linux import radixtree 10 + from linux import mapletree 11 11 12 12 irq_desc_type = utils.CachedType("struct irq_desc") 13 13 ··· 23 23 def show_irq_desc(prec, irq): 24 24 text = "" 25 25 26 - desc = radixtree.lookup(gdb.parse_and_eval("&irq_desc_tree"), irq) 26 + desc = mapletree.mtree_load(gdb.parse_and_eval("&sparse_irqs"), irq) 27 27 if desc is None: 28 28 return text 29 29 30 - desc = desc.cast(irq_desc_type.get_type()) 31 - if desc is None: 30 + desc = desc.cast(irq_desc_type.get_type().pointer()) 31 + if desc == 0: 32 32 return text 33 33 34 34 if irq_settings_is_hidden(desc): ··· 110 110 pvar = gdb.parse_and_eval(var) 111 111 text = "%*s: " % (prec, pfx) 112 112 for cpu in cpus.each_online_cpu(): 113 - text += "%10u " % (cpus.per_cpu(pvar, cpu)) 113 + text += "%10u " % (cpus.per_cpu(pvar, cpu).dereference()) 114 114 text += " %s\n" % (desc) 115 115 return text 116 116 ··· 142 142 143 143 if constants.LX_CONFIG_X86_MCE: 144 144 text += x86_show_mce(prec, "&mce_exception_count", "MCE", "Machine check exceptions") 145 - text == x86_show_mce(prec, "&mce_poll_count", "MCP", "Machine check polls") 145 + text += x86_show_mce(prec, "&mce_poll_count", "MCP", "Machine check polls") 146 146 147 147 text += show_irq_err_count(prec) 148 148 ··· 221 221 gdb.write("CPU%-8d" % cpu) 222 222 gdb.write("\n") 223 223 224 - if utils.gdb_eval_or_none("&irq_desc_tree") is None: 225 - return 224 + if utils.gdb_eval_or_none("&sparse_irqs") is None: 225 + raise gdb.GdbError("Unable to find the sparse IRQ tree, is CONFIG_SPARSE_IRQ enabled?") 226 226 227 227 for irq in range(nr_irqs): 228 228 gdb.write(show_irq_desc(prec, irq))
+252
scripts/gdb/linux/mapletree.py
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + # 3 + # Maple tree helpers 4 + # 5 + # Copyright (c) 2025 Broadcom 6 + # 7 + # Authors: 8 + # Florian Fainelli <florian.fainelli@broadcom.com> 9 + 10 + import gdb 11 + 12 + from linux import utils 13 + from linux import constants 14 + from linux import xarray 15 + 16 + maple_tree_root_type = utils.CachedType("struct maple_tree") 17 + maple_node_type = utils.CachedType("struct maple_node") 18 + maple_enode_type = utils.CachedType("void") 19 + 20 + maple_dense = 0 21 + maple_leaf_64 = 1 22 + maple_range_64 = 2 23 + maple_arange_64 = 3 24 + 25 + class Mas(object): 26 + ma_active = 0 27 + ma_start = 1 28 + ma_root = 2 29 + ma_none = 3 30 + ma_pause = 4 31 + ma_overflow = 5 32 + ma_underflow = 6 33 + ma_error = 7 34 + 35 + def __init__(self, mt, first, end): 36 + if mt.type == maple_tree_root_type.get_type().pointer(): 37 + self.tree = mt.dereference() 38 + elif mt.type != maple_tree_root_type.get_type(): 39 + raise gdb.GdbError("must be {} not {}" 40 + .format(maple_tree_root_type.get_type().pointer(), mt.type)) 41 + self.tree = mt 42 + self.index = first 43 + self.last = end 44 + self.node = None 45 + self.status = self.ma_start 46 + self.min = 0 47 + self.max = -1 48 + 49 + def is_start(self): 50 + # mas_is_start() 51 + return self.status == self.ma_start 52 + 53 + def is_ptr(self): 54 + # mas_is_ptr() 55 + return self.status == self.ma_root 56 + 57 + def is_none(self): 58 + # mas_is_none() 59 + return self.status == self.ma_none 60 + 61 + def root(self): 62 + # mas_root() 63 + return self.tree['ma_root'].cast(maple_enode_type.get_type().pointer()) 64 + 65 + def start(self): 66 + # mas_start() 67 + if self.is_start() is False: 68 + return None 69 + 70 + self.min = 0 71 + self.max = ~0 72 + 73 + while True: 74 + self.depth = 0 75 + root = self.root() 76 + if xarray.xa_is_node(root): 77 + self.depth = 0 78 + self.status = self.ma_active 79 + self.node = mte_safe_root(root) 80 + self.offset = 0 81 + if mte_dead_node(self.node) is True: 82 + continue 83 + 84 + return None 85 + 86 + self.node = None 87 + # Empty tree 88 + if root is None: 89 + self.status = self.ma_none 90 + self.offset = constants.LX_MAPLE_NODE_SLOTS 91 + return None 92 + 93 + # Single entry tree 94 + self.status = self.ma_root 95 + self.offset = constants.LX_MAPLE_NODE_SLOTS 96 + 97 + if self.index != 0: 98 + return None 99 + 100 + return root 101 + 102 + return None 103 + 104 + def reset(self): 105 + # mas_reset() 106 + self.status = self.ma_start 107 + self.node = None 108 + 109 + def mte_safe_root(node): 110 + if node.type != maple_enode_type.get_type().pointer(): 111 + raise gdb.GdbError("{} must be {} not {}" 112 + .format(mte_safe_root.__name__, maple_enode_type.get_type().pointer(), node.type)) 113 + ulong_type = utils.get_ulong_type() 114 + indirect_ptr = node.cast(ulong_type) & ~0x2 115 + val = indirect_ptr.cast(maple_enode_type.get_type().pointer()) 116 + return val 117 + 118 + def mte_node_type(entry): 119 + ulong_type = utils.get_ulong_type() 120 + val = None 121 + if entry.type == maple_enode_type.get_type().pointer(): 122 + val = entry.cast(ulong_type) 123 + elif entry.type == ulong_type: 124 + val = entry 125 + else: 126 + raise gdb.GdbError("{} must be {} not {}" 127 + .format(mte_node_type.__name__, maple_enode_type.get_type().pointer(), entry.type)) 128 + return (val >> 0x3) & 0xf 129 + 130 + def ma_dead_node(node): 131 + if node.type != maple_node_type.get_type().pointer(): 132 + raise gdb.GdbError("{} must be {} not {}" 133 + .format(ma_dead_node.__name__, 
maple_node_type.get_type().pointer(), node.type)) 134 + ulong_type = utils.get_ulong_type() 135 + parent = node['parent'] 136 + indirect_ptr = node['parent'].cast(ulong_type) & ~constants.LX_MAPLE_NODE_MASK 137 + return indirect_ptr == node 138 + 139 + def mte_to_node(enode): 140 + ulong_type = utils.get_ulong_type() 141 + if enode.type == maple_enode_type.get_type().pointer(): 142 + indirect_ptr = enode.cast(ulong_type) 143 + elif enode.type == ulong_type: 144 + indirect_ptr = enode 145 + else: 146 + raise gdb.GdbError("{} must be {} not {}" 147 + .format(mte_to_node.__name__, maple_enode_type.get_type().pointer(), enode.type)) 148 + indirect_ptr = indirect_ptr & ~constants.LX_MAPLE_NODE_MASK 149 + return indirect_ptr.cast(maple_node_type.get_type().pointer()) 150 + 151 + def mte_dead_node(enode): 152 + if enode.type != maple_enode_type.get_type().pointer(): 153 + raise gdb.GdbError("{} must be {} not {}" 154 + .format(mte_dead_node.__name__, maple_enode_type.get_type().pointer(), enode.type)) 155 + node = mte_to_node(enode) 156 + return ma_dead_node(node) 157 + 158 + def ma_is_leaf(tp): 159 + result = tp < maple_range_64 160 + return tp < maple_range_64 161 + 162 + def mt_pivots(t): 163 + if t == maple_dense: 164 + return 0 165 + elif t == maple_leaf_64 or t == maple_range_64: 166 + return constants.LX_MAPLE_RANGE64_SLOTS - 1 167 + elif t == maple_arange_64: 168 + return constants.LX_MAPLE_ARANGE64_SLOTS - 1 169 + 170 + def ma_pivots(node, t): 171 + if node.type != maple_node_type.get_type().pointer(): 172 + raise gdb.GdbError("{}: must be {} not {}" 173 + .format(ma_pivots.__name__, maple_node_type.get_type().pointer(), node.type)) 174 + if t == maple_arange_64: 175 + return node['ma64']['pivot'] 176 + elif t == maple_leaf_64 or t == maple_range_64: 177 + return node['mr64']['pivot'] 178 + else: 179 + return None 180 + 181 + def ma_slots(node, tp): 182 + if node.type != maple_node_type.get_type().pointer(): 183 + raise gdb.GdbError("{}: must be {} not {}" 184 + .format(ma_slots.__name__, maple_node_type.get_type().pointer(), node.type)) 185 + if tp == maple_arange_64: 186 + return node['ma64']['slot'] 187 + elif tp == maple_range_64 or tp == maple_leaf_64: 188 + return node['mr64']['slot'] 189 + elif tp == maple_dense: 190 + return node['slot'] 191 + else: 192 + return None 193 + 194 + def mt_slot(mt, slots, offset): 195 + ulong_type = utils.get_ulong_type() 196 + return slots[offset].cast(ulong_type) 197 + 198 + def mtree_lookup_walk(mas): 199 + ulong_type = utils.get_ulong_type() 200 + n = mas.node 201 + 202 + while True: 203 + node = mte_to_node(n) 204 + tp = mte_node_type(n) 205 + pivots = ma_pivots(node, tp) 206 + end = mt_pivots(tp) 207 + offset = 0 208 + while True: 209 + if pivots[offset] >= mas.index: 210 + break 211 + if offset >= end: 212 + break 213 + offset += 1 214 + 215 + slots = ma_slots(node, tp) 216 + n = mt_slot(mas.tree, slots, offset) 217 + if ma_dead_node(node) is True: 218 + mas.reset() 219 + return None 220 + break 221 + 222 + if ma_is_leaf(tp) is True: 223 + break 224 + 225 + return n 226 + 227 + def mtree_load(mt, index): 228 + ulong_type = utils.get_ulong_type() 229 + # MT_STATE(...) 
230 + mas = Mas(mt, index, index) 231 + entry = None 232 + 233 + while True: 234 + entry = mas.start() 235 + if mas.is_none(): 236 + return None 237 + 238 + if mas.is_ptr(): 239 + if index != 0: 240 + entry = None 241 + return entry 242 + 243 + entry = mtree_lookup_walk(mas) 244 + if entry is None and mas.is_start(): 245 + continue 246 + else: 247 + break 248 + 249 + if xarray.xa_is_zero(entry): 250 + return None 251 + 252 + return entry
+1 -1
scripts/gdb/linux/vfs.py
··· 22 22 if parent == d or parent == 0: 23 23 return "" 24 24 p = dentry_name(d['d_parent']) + "/" 25 - return p + d['d_shortname']['string'].string() 25 + return p + d['d_name']['name'].string() 26 26 27 27 class DentryName(gdb.Function): 28 28 """Return string of the full path of a dentry.
+28
scripts/gdb/linux/xarray.py
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + # 3 + # Xarray helpers 4 + # 5 + # Copyright (c) 2025 Broadcom 6 + # 7 + # Authors: 8 + # Florian Fainelli <florian.fainelli@broadcom.com> 9 + 10 + import gdb 11 + 12 + from linux import utils 13 + from linux import constants 14 + 15 + def xa_is_internal(entry): 16 + ulong_type = utils.get_ulong_type() 17 + return ((entry.cast(ulong_type) & 3) == 2) 18 + 19 + def xa_mk_internal(v): 20 + return ((v << 2) | 2) 21 + 22 + def xa_is_zero(entry): 23 + ulong_type = utils.get_ulong_type() 24 + return entry.cast(ulong_type) == xa_mk_internal(257) 25 + 26 + def xa_is_node(entry): 27 + ulong_type = utils.get_ulong_type() 28 + return xa_is_internal(entry) and (entry.cast(ulong_type) > 4096)
+1 -1
sound/isa/ad1816a/ad1816a.c
··· 98 98 pdev = pnp_request_card_device(card, id->devs[1].id, NULL); 99 99 if (pdev == NULL) { 100 100 mpu_port[dev] = -1; 101 - dev_warn(&pdev->dev, "MPU401 device busy, skipping.\n"); 101 + pr_warn("MPU401 device busy, skipping.\n"); 102 102 return 0; 103 103 } 104 104
+19
sound/pci/hda/patch_hdmi.c
··· 4551 4551 HDA_CODEC_ENTRY(0x10de002f, "Tegra194 HDMI/DP2", patch_tegra_hdmi), 4552 4552 HDA_CODEC_ENTRY(0x10de0030, "Tegra194 HDMI/DP3", patch_tegra_hdmi), 4553 4553 HDA_CODEC_ENTRY(0x10de0031, "Tegra234 HDMI/DP", patch_tegra234_hdmi), 4554 + HDA_CODEC_ENTRY(0x10de0033, "SoC 33 HDMI/DP", patch_tegra234_hdmi), 4554 4555 HDA_CODEC_ENTRY(0x10de0034, "Tegra264 HDMI/DP", patch_tegra234_hdmi), 4556 + HDA_CODEC_ENTRY(0x10de0035, "SoC 35 HDMI/DP", patch_tegra234_hdmi), 4555 4557 HDA_CODEC_ENTRY(0x10de0040, "GPU 40 HDMI/DP", patch_nvhdmi), 4556 4558 HDA_CODEC_ENTRY(0x10de0041, "GPU 41 HDMI/DP", patch_nvhdmi), 4557 4559 HDA_CODEC_ENTRY(0x10de0042, "GPU 42 HDMI/DP", patch_nvhdmi), ··· 4592 4590 HDA_CODEC_ENTRY(0x10de0098, "GPU 98 HDMI/DP", patch_nvhdmi), 4593 4591 HDA_CODEC_ENTRY(0x10de0099, "GPU 99 HDMI/DP", patch_nvhdmi), 4594 4592 HDA_CODEC_ENTRY(0x10de009a, "GPU 9a HDMI/DP", patch_nvhdmi), 4593 + HDA_CODEC_ENTRY(0x10de009b, "GPU 9b HDMI/DP", patch_nvhdmi), 4594 + HDA_CODEC_ENTRY(0x10de009c, "GPU 9c HDMI/DP", patch_nvhdmi), 4595 4595 HDA_CODEC_ENTRY(0x10de009d, "GPU 9d HDMI/DP", patch_nvhdmi), 4596 4596 HDA_CODEC_ENTRY(0x10de009e, "GPU 9e HDMI/DP", patch_nvhdmi), 4597 4597 HDA_CODEC_ENTRY(0x10de009f, "GPU 9f HDMI/DP", patch_nvhdmi), 4598 4598 HDA_CODEC_ENTRY(0x10de00a0, "GPU a0 HDMI/DP", patch_nvhdmi), 4599 + HDA_CODEC_ENTRY(0x10de00a1, "GPU a1 HDMI/DP", patch_nvhdmi), 4599 4600 HDA_CODEC_ENTRY(0x10de00a3, "GPU a3 HDMI/DP", patch_nvhdmi), 4600 4601 HDA_CODEC_ENTRY(0x10de00a4, "GPU a4 HDMI/DP", patch_nvhdmi), 4601 4602 HDA_CODEC_ENTRY(0x10de00a5, "GPU a5 HDMI/DP", patch_nvhdmi), 4602 4603 HDA_CODEC_ENTRY(0x10de00a6, "GPU a6 HDMI/DP", patch_nvhdmi), 4603 4604 HDA_CODEC_ENTRY(0x10de00a7, "GPU a7 HDMI/DP", patch_nvhdmi), 4605 + HDA_CODEC_ENTRY(0x10de00a8, "GPU a8 HDMI/DP", patch_nvhdmi), 4606 + HDA_CODEC_ENTRY(0x10de00a9, "GPU a9 HDMI/DP", patch_nvhdmi), 4607 + HDA_CODEC_ENTRY(0x10de00aa, "GPU aa HDMI/DP", patch_nvhdmi), 4608 + HDA_CODEC_ENTRY(0x10de00ab, "GPU ab HDMI/DP", patch_nvhdmi), 4609 + HDA_CODEC_ENTRY(0x10de00ad, "GPU ad HDMI/DP", patch_nvhdmi), 4610 + HDA_CODEC_ENTRY(0x10de00ae, "GPU ae HDMI/DP", patch_nvhdmi), 4611 + HDA_CODEC_ENTRY(0x10de00af, "GPU af HDMI/DP", patch_nvhdmi), 4612 + HDA_CODEC_ENTRY(0x10de00b0, "GPU b0 HDMI/DP", patch_nvhdmi), 4613 + HDA_CODEC_ENTRY(0x10de00b1, "GPU b1 HDMI/DP", patch_nvhdmi), 4614 + HDA_CODEC_ENTRY(0x10de00c0, "GPU c0 HDMI/DP", patch_nvhdmi), 4615 + HDA_CODEC_ENTRY(0x10de00c1, "GPU c1 HDMI/DP", patch_nvhdmi), 4616 + HDA_CODEC_ENTRY(0x10de00c3, "GPU c3 HDMI/DP", patch_nvhdmi), 4617 + HDA_CODEC_ENTRY(0x10de00c4, "GPU c4 HDMI/DP", patch_nvhdmi), 4618 + HDA_CODEC_ENTRY(0x10de00c5, "GPU c5 HDMI/DP", patch_nvhdmi), 4604 4619 HDA_CODEC_ENTRY(0x10de8001, "MCP73 HDMI", patch_nvhdmi_2ch), 4605 4620 HDA_CODEC_ENTRY(0x10de8067, "MCP67/68 HDMI", patch_nvhdmi_2ch), 4606 4621 HDA_CODEC_ENTRY(0x67663d82, "Arise 82 HDMI/DP", patch_gf_hdmi),
+3
sound/pci/hda/patch_realtek.c
··· 10881 10881 SND_PCI_QUIRK(0x103c, 0x8ce0, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED), 10882 10882 SND_PCI_QUIRK(0x103c, 0x8cf5, "HP ZBook Studio 16", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED), 10883 10883 SND_PCI_QUIRK(0x103c, 0x8d01, "HP ZBook Power 14 G12", ALC285_FIXUP_HP_GPIO_LED), 10884 + SND_PCI_QUIRK(0x103c, 0x8d07, "HP Victus 15-fb2xxx (MB 8D07)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT), 10884 10885 SND_PCI_QUIRK(0x103c, 0x8d18, "HP EliteStudio 8 AIO", ALC274_FIXUP_HP_AIO_BIND_DACS), 10885 10886 SND_PCI_QUIRK(0x103c, 0x8d84, "HP EliteBook X G1i", ALC285_FIXUP_HP_GPIO_LED), 10886 10887 SND_PCI_QUIRK(0x103c, 0x8d85, "HP EliteBook 14 G12", ALC285_FIXUP_HP_GPIO_LED), ··· 11041 11040 SND_PCI_QUIRK(0x1043, 0x1e63, "ASUS H7606W", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1), 11042 11041 SND_PCI_QUIRK(0x1043, 0x1e83, "ASUS GA605W", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1), 11043 11042 SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401), 11043 + SND_PCI_QUIRK(0x1043, 0x1e93, "ASUS ExpertBook B9403CVAR", ALC294_FIXUP_ASUS_HPE), 11044 11044 SND_PCI_QUIRK(0x1043, 0x1eb3, "ASUS Ally RCLA72", ALC287_FIXUP_TAS2781_I2C), 11045 11045 SND_PCI_QUIRK(0x1043, 0x1ed3, "ASUS HN7306W", ALC287_FIXUP_CS35L41_I2C_2), 11046 11046 SND_PCI_QUIRK(0x1043, 0x1ee2, "ASUS UM6702RA/RC", ALC287_FIXUP_CS35L41_I2C_2), ··· 11426 11424 SND_PCI_QUIRK(0x2782, 0x0228, "Infinix ZERO BOOK 13", ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13), 11427 11425 SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO), 11428 11426 SND_PCI_QUIRK(0x2782, 0x1407, "Positivo P15X", ALC269_FIXUP_POSITIVO_P15X_HEADSET_MIC), 11427 + SND_PCI_QUIRK(0x2782, 0x1409, "Positivo K116J", ALC269_FIXUP_POSITIVO_P15X_HEADSET_MIC), 11429 11428 SND_PCI_QUIRK(0x2782, 0x1701, "Infinix Y4 Max", ALC269VC_FIXUP_INFINIX_Y4_MAX), 11430 11429 SND_PCI_QUIRK(0x2782, 0x1705, "MEDION E15433", ALC269VC_FIXUP_INFINIX_Y4_MAX), 11431 11430 SND_PCI_QUIRK(0x2782, 0x1707, "Vaio VJFE-ADL", ALC298_FIXUP_SPK_VOLUME),
+5 -3
sound/pci/hda/tas2781_hda.c
··· 44 44 TASDEVICE_REG(0, 0x13, 0x70), 45 45 TASDEVICE_REG(0, 0x18, 0x7c), 46 46 }; 47 - unsigned int crc, oft; 47 + unsigned int crc, oft, node_num; 48 48 unsigned char *buf; 49 49 int i, j, k, l; 50 50 ··· 80 80 dev_err(p->dev, "%s: CRC error\n", __func__); 81 81 return; 82 82 } 83 + node_num = tmp_val[1]; 83 84 84 - for (j = 0, k = 0; j < tmp_val[1]; j++) { 85 + for (j = 0, k = 0; j < node_num; j++) { 85 86 oft = j * 6 + 3; 86 87 if (tmp_val[oft] == TASDEV_UEFI_CALI_REG_ADDR_FLG) { 87 88 for (i = 0; i < TASDEV_CALIB_N; i++) { ··· 100 99 } 101 100 102 101 data[l] = k; 102 + oft++; 103 103 for (i = 0; i < TASDEV_CALIB_N * 4; i++) 104 - data[l + i] = data[4 * oft + i]; 104 + data[l + i + 1] = data[4 * oft + i]; 105 105 k++; 106 106 } 107 107 }
+1 -1
sound/soc/codecs/cs35l56-shared.c
··· 980 980 break; 981 981 default: 982 982 dev_err(cs35l56_base->dev, "Unknown device %x\n", devid); 983 - return ret; 983 + return -ENODEV; 984 984 } 985 985 986 986 cs35l56_base->type = devid & 0xFF;
+23 -30
sound/soc/codecs/tlv320aic32x4.c
··· 9 9 * Based on sound/soc/codecs/wm8974 and TI driver for kernel 2.6.27. 10 10 */ 11 11 12 + #include <linux/cdev.h> 13 + #include <linux/clk.h> 14 + #include <linux/delay.h> 15 + #include <linux/gpio/consumer.h> 16 + #include <linux/init.h> 12 17 #include <linux/module.h> 13 18 #include <linux/moduleparam.h> 14 - #include <linux/init.h> 15 - #include <linux/delay.h> 16 - #include <linux/pm.h> 17 - #include <linux/gpio.h> 18 - #include <linux/of_gpio.h> 19 - #include <linux/cdev.h> 20 - #include <linux/slab.h> 21 - #include <linux/clk.h> 22 19 #include <linux/of_clk.h> 20 + #include <linux/pm.h> 23 21 #include <linux/regulator/consumer.h> 22 + #include <linux/slab.h> 24 23 25 - #include <sound/tlv320aic32x4.h> 26 24 #include <sound/core.h> 25 + #include <sound/initval.h> 27 26 #include <sound/pcm.h> 28 27 #include <sound/pcm_params.h> 29 28 #include <sound/soc.h> 30 29 #include <sound/soc-dapm.h> 31 - #include <sound/initval.h> 32 30 #include <sound/tlv.h> 31 + #include <sound/tlv320aic32x4.h> 33 32 34 33 #include "tlv320aic32x4.h" 35 34 ··· 37 38 u32 power_cfg; 38 39 u32 micpga_routing; 39 40 bool swapdacs; 40 - int rstn_gpio; 41 + struct gpio_desc *rstn_gpio; 41 42 const char *mclk_name; 42 43 43 44 struct regulator *supply_ldo; ··· 1235 1236 1236 1237 aic32x4->swapdacs = false; 1237 1238 aic32x4->micpga_routing = 0; 1238 - aic32x4->rstn_gpio = of_get_named_gpio(np, "reset-gpios", 0); 1239 + /* Assert reset using GPIOD_OUT_HIGH, because reset is GPIO_ACTIVE_LOW */ 1240 + aic32x4->rstn_gpio = devm_gpiod_get_optional(aic32x4->dev, "reset", GPIOD_OUT_HIGH); 1241 + if (IS_ERR(aic32x4->rstn_gpio)) { 1242 + return dev_err_probe(aic32x4->dev, PTR_ERR(aic32x4->rstn_gpio), 1243 + "Failed to get reset gpio\n"); 1244 + } else { 1245 + gpiod_set_consumer_name(aic32x4->rstn_gpio, "tlv320aic32x4_rstn"); 1246 + } 1239 1247 1240 1248 if (of_property_read_u32_array(np, "aic32x4-gpio-func", 1241 1249 aic32x4_setup->gpio_func, 5) >= 0) ··· 1352 1346 enum aic32x4_type type) 1353 1347 { 1354 1348 struct aic32x4_priv *aic32x4; 1355 - struct aic32x4_pdata *pdata = dev->platform_data; 1356 1349 struct device_node *np = dev->of_node; 1357 1350 int ret; 1358 1351 ··· 1368 1363 1369 1364 dev_set_drvdata(dev, aic32x4); 1370 1365 1371 - if (pdata) { 1372 - aic32x4->power_cfg = pdata->power_cfg; 1373 - aic32x4->swapdacs = pdata->swapdacs; 1374 - aic32x4->micpga_routing = pdata->micpga_routing; 1375 - aic32x4->rstn_gpio = pdata->rstn_gpio; 1376 - aic32x4->mclk_name = "mclk"; 1377 - } else if (np) { 1366 + if (np) { 1378 1367 ret = aic32x4_parse_dt(aic32x4, np); 1379 1368 if (ret) { 1380 1369 dev_err(dev, "Failed to parse DT node\n"); ··· 1378 1379 aic32x4->power_cfg = 0; 1379 1380 aic32x4->swapdacs = false; 1380 1381 aic32x4->micpga_routing = 0; 1381 - aic32x4->rstn_gpio = -1; 1382 + aic32x4->rstn_gpio = NULL; 1382 1383 aic32x4->mclk_name = "mclk"; 1383 - } 1384 - 1385 - if (gpio_is_valid(aic32x4->rstn_gpio)) { 1386 - ret = devm_gpio_request_one(dev, aic32x4->rstn_gpio, 1387 - GPIOF_OUT_INIT_LOW, "tlv320aic32x4 rstn"); 1388 - if (ret != 0) 1389 - return ret; 1390 1384 } 1391 1385 1392 1386 ret = aic32x4_setup_regulators(dev, aic32x4); ··· 1388 1396 return ret; 1389 1397 } 1390 1398 1391 - if (gpio_is_valid(aic32x4->rstn_gpio)) { 1399 + if (aic32x4->rstn_gpio) { 1392 1400 ndelay(10); 1393 1401 + /* deassert reset */ 1394 1402 gpiod_set_value_cansleep(aic32x4->rstn_gpio, 0); 1395 1403 mdelay(1); 1396 1404 }
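For reference, a minimal sketch of the gpiod consumer pattern that the tlv320aic32x4 conversion above switches to. The helper below is hypothetical (not part of the series) and assumes an active-low reset line described by a reset-gpios property:

#include <linux/delay.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/gpio/consumer.h>

/*
 * Request the optional reset descriptor already asserted: GPIOD_OUT_HIGH
 * drives the line to its logically active level, which is physically low
 * for a GPIO_ACTIVE_LOW reset-gpios mapping.  Deassert it once the device
 * may leave reset.  The gpiod consumer calls accept a NULL descriptor, so
 * a missing optional GPIO needs no extra checks.
 */
static int example_reset_init(struct device *dev)
{
	struct gpio_desc *rstn;

	rstn = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
	if (IS_ERR(rstn))
		return dev_err_probe(dev, PTR_ERR(rstn),
				     "Failed to get reset gpio\n");

	ndelay(10);
	gpiod_set_value_cansleep(rstn, 0);	/* deassert reset */
	mdelay(1);

	return 0;
}

Because the descriptor API works on logical values, the driver no longer needs gpio_is_valid()/of_get_named_gpio() or a hard-coded polarity, which is what the diff above removes.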
+2 -1
sound/soc/fsl/fsl_asrc.c
··· 517 517 regmap_update_bits(asrc->regmap, REG_ASRCTR, 518 518 ASRCTR_ATSi_MASK(index), ASRCTR_ATS(index)); 519 519 regmap_update_bits(asrc->regmap, REG_ASRCTR, 520 - ASRCTR_USRi_MASK(index), 0); 520 + ASRCTR_IDRi_MASK(index) | ASRCTR_USRi_MASK(index), 521 + ASRCTR_USR(index)); 521 522 522 523 /* Set the input and output clock sources */ 523 524 regmap_update_bits(asrc->regmap, REG_ASRCSR,
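The widened mask relies on regmap_update_bits() read-modify-write semantics: only bits covered by the mask are rewritten from the value argument, so passing ASRCTR_IDRi_MASK | ASRCTR_USRi_MASK as the mask while setting only ASRCTR_USR clears the former bit and sets the latter in a single call. A stand-alone sketch of that mask/value behaviour (plain C, hypothetical bit names, not the ASRC register layout):

#include <stdio.h>

/* What regmap_update_bits(map, reg, mask, val) does to the register value:
 * bits inside the mask are taken from val, bits outside it are preserved.
 */
static unsigned int update_bits(unsigned int old, unsigned int mask,
				unsigned int val)
{
	return (old & ~mask) | (val & mask);
}

int main(void)
{
	unsigned int idr = 1u << 0;   /* hypothetical bit currently set   */
	unsigned int usr = 1u << 1;   /* hypothetical bit we want to set  */
	unsigned int reg = idr;

	/* Mask covers both bits, value sets only usr: idr is cleared too. */
	reg = update_bits(reg, idr | usr, usr);
	printf("reg = %#x\n", reg);   /* prints 0x2 */
	return 0;
}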
+8 -6
sound/soc/fsl/fsl_sai.c
··· 803 803 * anymore. Add software reset to fix this issue. 804 804 * This is a hardware bug, and will be fix in the 805 805 * next sai version. 806 + * 807 + * In consumer mode, this can happen even after a 808 + * single open/close, especially if both tx and rx 809 + * are running concurrently. 806 810 */ 807 - if (!sai->is_consumer_mode[tx]) { 808 - /* Software Reset */ 809 - regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), FSL_SAI_CSR_SR); 810 - /* Clear SR bit to finish the reset */ 811 - regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), 0); 812 - } 811 + /* Software Reset */ 812 + regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), FSL_SAI_CSR_SR); 813 + /* Clear SR bit to finish the reset */ 814 + regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), 0); 813 815 } 814 816 815 817 static int fsl_sai_trigger(struct snd_pcm_substream *substream, int cmd,
+1
sound/soc/intel/boards/Kconfig
··· 42 42 tristate 43 43 44 44 config SND_SOC_INTEL_SOF_BOARD_HELPERS 45 + select SND_SOC_ACPI_INTEL_MATCH 45 46 tristate 46 47 47 48 if SND_SOC_INTEL_CATPT
+3
sound/soc/intel/boards/sof_sdw.c
··· 783 783 static const struct snd_pci_quirk sof_sdw_ssid_quirk_table[] = { 784 784 SND_PCI_QUIRK(0x1043, 0x1e13, "ASUS Zenbook S14", SOC_SDW_CODEC_MIC), 785 785 SND_PCI_QUIRK(0x1043, 0x1f43, "ASUS Zenbook S16", SOC_SDW_CODEC_MIC), 786 + SND_PCI_QUIRK(0x17aa, 0x2347, "Lenovo P16", SOC_SDW_CODEC_MIC), 787 + SND_PCI_QUIRK(0x17aa, 0x2348, "Lenovo P16", SOC_SDW_CODEC_MIC), 788 + SND_PCI_QUIRK(0x17aa, 0x2349, "Lenovo P1", SOC_SDW_CODEC_MIC), 786 789 {} 787 790 }; 788 791
+7 -7
sound/soc/intel/common/soc-acpi-intel-arl-match.c
··· 468 468 .get_function_tplg_files = sof_sdw_get_tplg_files, 469 469 }, 470 470 { 471 - .link_mask = BIT(2), 472 - .links = arl_cs42l43_l2, 473 - .drv_name = "sof_sdw", 474 - .sof_tplg_filename = "sof-arl-cs42l43-l2.tplg", 475 - .get_function_tplg_files = sof_sdw_get_tplg_files, 476 - }, 477 - { 478 471 .link_mask = BIT(2) | BIT(3), 479 472 .links = arl_cs42l43_l2_cs35l56_l3, 480 473 .drv_name = "sof_sdw", 481 474 .sof_tplg_filename = "sof-arl-cs42l43-l2-cs35l56-l3.tplg", 475 + .get_function_tplg_files = sof_sdw_get_tplg_files, 476 + }, 477 + { 478 + .link_mask = BIT(2), 479 + .links = arl_cs42l43_l2, 480 + .drv_name = "sof_sdw", 481 + .sof_tplg_filename = "sof-arl-cs42l43-l2.tplg", 482 482 .get_function_tplg_files = sof_sdw_get_tplg_files, 483 483 }, 484 484 {
+10 -12
sound/usb/format.c
··· 310 310 struct audioformat *fp, 311 311 unsigned int rate) 312 312 { 313 - struct usb_interface *iface; 314 313 struct usb_host_interface *alts; 315 314 unsigned char *fmt; 316 315 unsigned int max_rate; 317 316 318 - iface = usb_ifnum_to_if(chip->dev, fp->iface); 319 - if (!iface) 317 + alts = snd_usb_get_host_interface(chip, fp->iface, fp->altsetting); 318 + if (!alts) 320 319 return true; 321 320 322 - alts = &iface->altsetting[fp->altset_idx]; 323 321 fmt = snd_usb_find_csint_desc(alts->extra, alts->extralen, 324 322 NULL, UAC_FORMAT_TYPE); 325 323 if (!fmt) ··· 326 328 if (fmt[0] == 10) { /* bLength */ 327 329 max_rate = combine_quad(&fmt[6]); 328 330 329 - /* Validate max rate */ 330 - if (max_rate != 48000 && 331 - max_rate != 96000 && 332 - max_rate != 192000 && 333 - max_rate != 384000) { 334 - 331 + switch (max_rate) { 332 + case 48000: 333 + return (rate == 44100 || rate == 48000); 334 + case 96000: 335 + return (rate == 88200 || rate == 96000); 336 + case 192000: 337 + return (rate == 176400 || rate == 192000); 338 + default: 335 339 usb_audio_info(chip, 336 340 "%u:%d : unexpected max rate: %u\n", 337 341 fp->iface, fp->altsetting, max_rate); 338 342 339 343 return true; 340 344 } 341 - 342 - return rate <= max_rate; 343 345 } 344 346 345 347 return true;
+1
tools/arch/x86/include/asm/msr-index.h
··· 628 628 #define MSR_AMD64_OSVW_STATUS 0xc0010141 629 629 #define MSR_AMD_PPIN_CTL 0xc00102f0 630 630 #define MSR_AMD_PPIN 0xc00102f1 631 + #define MSR_AMD64_CPUID_FN_7 0xc0011002 631 632 #define MSR_AMD64_CPUID_FN_1 0xc0011004 632 633 #define MSR_AMD64_LS_CFG 0xc0011020 633 634 #define MSR_AMD64_DC_CFG 0xc0011022
+4
tools/include/linux/kallsyms.h
··· 18 18 return NULL; 19 19 } 20 20 21 + #ifdef HAVE_BACKTRACE_SUPPORT 21 22 #include <execinfo.h> 22 23 #include <stdlib.h> 23 24 static inline void print_ip_sym(const char *loglvl, unsigned long ip) ··· 31 30 32 31 free(name); 33 32 } 33 + #else 34 + static inline void print_ip_sym(const char *loglvl, unsigned long ip) {} 35 + #endif 34 36 35 37 #endif
+2 -2
tools/include/uapi/linux/bits.h
··· 4 4 #ifndef _UAPI_LINUX_BITS_H 5 5 #define _UAPI_LINUX_BITS_H 6 6 7 - #define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (BITS_PER_LONG - 1 - (h)))) 7 + #define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (__BITS_PER_LONG - 1 - (h)))) 8 8 9 - #define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (BITS_PER_LONG_LONG - 1 - (h)))) 9 + #define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (__BITS_PER_LONG_LONG - 1 - (h)))) 10 10 11 11 #define __GENMASK_U128(h, l) \ 12 12 ((_BIT128((h)) << 1) - (_BIT128(l)))
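The tools copy of this UAPI header must reference the userspace-visible __BITS_PER_LONG / __BITS_PER_LONG_LONG constants; the unprefixed BITS_PER_LONG names are kernel-internal and are not guaranteed to exist where this header is consumed. The mask arithmetic itself is unchanged, as the stand-alone sketch below illustrates (demo macro names, 64-bit unsigned long assumed):

#include <stdio.h>

/* Demo of what __GENMASK(h, l) computes: a mask with bits h..l set. */
#define DEMO_BITS_PER_LONG 64
#define DEMO_GENMASK(h, l) \
	(((~0UL) << (l)) & (~0UL >> (DEMO_BITS_PER_LONG - 1 - (h))))

int main(void)
{
	printf("%#lx\n", DEMO_GENMASK(15, 8));   /* 0xff00 */
	printf("%#lx\n", DEMO_GENMASK(63, 32));  /* 0xffffffff00000000 */
	return 0;
}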
+1
tools/testing/selftests/kvm/x86/monitor_mwait_test.c
··· 74 74 int testcase; 75 75 char test[80]; 76 76 77 + TEST_REQUIRE(this_cpu_has(X86_FEATURE_MWAIT)); 77 78 TEST_REQUIRE(kvm_has_cap(KVM_CAP_DISABLE_QUIRKS2)); 78 79 79 80 ksft_print_header();
+17 -10
tools/testing/selftests/net/gre_ipv6_lladdr.sh
··· 24 24 ip -netns "${NS0}" address add dev lo 2001:db8::10/64 nodad 25 25 } 26 26 27 - # Check if network device has an IPv6 link-local address assigned. 27 + # Check the IPv6 configuration of a network device. 28 + # 29 + # We currently check the generation of the link-local IPv6 address and the 30 + # creation of the ff00::/8 multicast route. 28 31 # 29 32 # Parameters: 30 33 # ··· 38 35 # a link-local address) 39 36 # * $4: The user visible name for the scenario being tested 40 37 # 41 - check_ipv6_ll_addr() 38 + check_ipv6_device_config() 42 39 { 43 40 local DEV="$1" 44 41 local EXTRA_MATCH="$2" ··· 48 45 RET=0 49 46 set +e 50 47 ip -netns "${NS0}" -6 address show dev "${DEV}" scope link | grep "fe80::" | grep -q "${EXTRA_MATCH}" 51 - check_err_fail "${XRET}" $? "" 48 + check_err_fail "${XRET}" $? "IPv6 link-local address generation" 49 + 50 + ip -netns "${NS0}" -6 route show table local type multicast ff00::/8 proto kernel | grep -q "${DEV}" 51 + check_err_fail 0 $? "IPv6 multicast route creation" 52 + 52 53 log_test "${MSG}" 53 54 set -e 54 55 } ··· 109 102 ;; 110 103 esac 111 104 112 - # Check that IPv6 link-local address is generated when device goes up 105 + # Check the IPv6 device configuration when it goes up 113 106 ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.addr_gen_mode="${ADDR_GEN_MODE}" 114 107 ip -netns "${NS0}" link set dev gretest up 115 - check_ipv6_ll_addr gretest "${MATCH_REGEXP}" "${XRET}" "config: ${MSG}" 108 + check_ipv6_device_config gretest "${MATCH_REGEXP}" "${XRET}" "config: ${MSG}" 116 109 117 110 # Now disable link-local address generation 118 111 ip -netns "${NS0}" link set dev gretest down 119 112 ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.addr_gen_mode=1 120 113 ip -netns "${NS0}" link set dev gretest up 121 114 122 - # Check that link-local address generation works when re-enabled while 123 - # the device is already up 115 + # Check the IPv6 device configuration when link-local address 116 + # generation is re-enabled while the device is already up 124 117 ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.addr_gen_mode="${ADDR_GEN_MODE}" 125 - check_ipv6_ll_addr gretest "${MATCH_REGEXP}" "${XRET}" "update: ${MSG}" 118 + check_ipv6_device_config gretest "${MATCH_REGEXP}" "${XRET}" "update: ${MSG}" 126 119 127 120 ip -netns "${NS0}" link del dev gretest 128 121 } ··· 133 126 local MODE 134 127 135 128 for GRE_TYPE in "gre" "gretap"; do 136 - printf "\n####\nTesting IPv6 link-local address generation on ${GRE_TYPE} devices\n####\n\n" 129 + printf "\n####\nTesting IPv6 configuration of ${GRE_TYPE} devices\n####\n\n" 137 130 138 131 for MODE in "eui64" "none" "stable-privacy" "random"; do 139 132 test_gre_device "${GRE_TYPE}" 192.0.2.10 192.0.2.11 "${MODE}" ··· 149 142 local MODE 150 143 151 144 for GRE_TYPE in "ip6gre" "ip6gretap"; do 152 - printf "\n####\nTesting IPv6 link-local address generation on ${GRE_TYPE} devices\n####\n\n" 145 + printf "\n####\nTesting IPv6 configuration of ${GRE_TYPE} devices\n####\n\n" 153 146 154 147 for MODE in "eui64" "none" "stable-privacy" "random"; do 155 148 test_gre_device "${GRE_TYPE}" 2001:db8::10 2001:db8::11 "${MODE}"
+1 -1
tools/testing/selftests/net/lib.sh
··· 312 312 local test_name=$1; shift 313 313 local opt_str=$1; shift 314 314 local result=$1; shift 315 - local retmsg=$1; shift 315 + local retmsg=$1 316 316 317 317 printf "TEST: %-60s [%s]\n" "$test_name $opt_str" "$result" 318 318 if [[ $retmsg ]]; then
+53
tools/testing/selftests/net/packetdrill/tcp_ooo-before-and-after-accept.pkt
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + --mss=1000 4 + 5 + `./defaults.sh 6 + sysctl -q net.ipv4.tcp_rmem="4096 131072 $((32*1024*1024))"` 7 + 8 + // Test that a not-yet-accepted socket does not change 9 + // its initial sk_rcvbuf (tcp_rmem[1]) when receiving ooo packets. 10 + 11 + +0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3 12 + +0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0 13 + +0 bind(3, ..., ...) = 0 14 + +0 listen(3, 1) = 0 15 + 16 + +0 < S 0:0(0) win 65535 <mss 1000,nop,nop,sackOK,nop,wscale 7> 17 + +0 > S. 0:0(0) ack 1 <mss 1460,nop,nop,sackOK,nop,wscale 10> 18 + +.1 < . 1:1(0) ack 1 win 257 19 + +0 < . 2001:41001(39000) ack 1 win 257 20 + +0 > . 1:1(0) ack 1 <nop,nop,sack 2001:41001> 21 + +0 < . 41001:101001(60000) ack 1 win 257 22 + +0 > . 1:1(0) ack 1 <nop,nop,sack 2001:101001> 23 + +0 < . 1:1001(1000) ack 1 win 257 24 + +0 > . 1:1(0) ack 1001 <nop,nop,sack 2001:101001> 25 + +0 < . 1001:2001(1000) ack 1 win 257 26 + +0 > . 1:1(0) ack 101001 27 + 28 + +0 accept(3, ..., ...) = 4 29 + 30 + +0 %{ assert SK_MEMINFO_RCVBUF == 131072, SK_MEMINFO_RCVBUF }% 31 + 32 + +0 close(4) = 0 33 + +0 close(3) = 0 34 + 35 + // Test that ooo packets for accepted sockets do increase sk_rcvbuf 36 + +0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3 37 + +0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0 38 + +0 bind(3, ..., ...) = 0 39 + +0 listen(3, 1) = 0 40 + 41 + +0 < S 0:0(0) win 65535 <mss 1000,nop,nop,sackOK,nop,wscale 7> 42 + +0 > S. 0:0(0) ack 1 <mss 1460,nop,nop,sackOK,nop,wscale 10> 43 + +.1 < . 1:1(0) ack 1 win 257 44 + 45 + +0 accept(3, ..., ...) = 4 46 + 47 + +0 < . 2001:41001(39000) ack 1 win 257 48 + +0 > . 1:1(0) ack 1 <nop,nop,sack 2001:41001> 49 + +0 < . 41001:101001(60000) ack 1 win 257 50 + +0 > . 1:1(0) ack 1 <nop,nop,sack 2001:101001> 51 + 52 + +0 %{ assert SK_MEMINFO_RCVBUF > 131072, SK_MEMINFO_RCVBUF }% 53 +
+37
tools/testing/selftests/tc-testing/tc-tests/infra/qdiscs.json
··· 635 635 "$TC qdisc del dev $DUMMY handle 1:0 root", 636 636 "$IP addr del 10.10.10.10/24 dev $DUMMY || true" 637 637 ] 638 + }, 639 + { 640 + "id": "d74b", 641 + "name": "Test use-after-free with DRR/NETEM/BLACKHOLE chain", 642 + "category": [ 643 + "qdisc", 644 + "hfsc", 645 + "drr", 646 + "netem", 647 + "blackhole" 648 + ], 649 + "plugins": { 650 + "requires": [ 651 + "nsPlugin", 652 + "scapyPlugin" 653 + ] 654 + }, 655 + "setup": [ 656 + "$IP link set dev $DUMMY up || true", 657 + "$IP addr add 10.10.11.10/24 dev $DUMMY || true", 658 + "$TC qdisc add dev $DUMMY root handle 1: drr", 659 + "$TC filter add dev $DUMMY parent 1: basic classid 1:1", 660 + "$TC class add dev $DUMMY parent 1: classid 1:1 drr", 661 + "$TC qdisc add dev $DUMMY parent 1:1 handle 2: hfsc def 1", 662 + "$TC class add dev $DUMMY parent 2: classid 2:1 hfsc rt m1 8 d 1 m2 0", 663 + "$TC qdisc add dev $DUMMY parent 2:1 handle 3: netem", 664 + "$TC qdisc add dev $DUMMY parent 3:1 handle 4: blackhole", 665 + "ping -c1 -W0.01 -I $DUMMY 10.10.11.11 || true", 666 + "$TC class del dev $DUMMY classid 1:1" 667 + ], 668 + "cmdUnderTest": "ping -c1 -W0.01 -I $DUMMY 10.10.11.11", 669 + "expExitCode": "1", 670 + "verifyCmd": "$TC -j class ls dev $DUMMY classid 1:1", 671 + "matchJSON": [], 672 + "teardown": [ 673 + "$TC qdisc del dev $DUMMY root handle 1: drr" 674 + ] 638 675 } 639 676 ]
+3
virt/kvm/kvm_main.c
··· 2572 2572 r = xa_reserve(&kvm->mem_attr_array, i, GFP_KERNEL_ACCOUNT); 2573 2573 if (r) 2574 2574 goto out_unlock; 2575 + 2576 + cond_resched(); 2575 2577 } 2576 2578 2577 2579 kvm_handle_gfn_range(kvm, &pre_set_range); ··· 2582 2580 r = xa_err(xa_store(&kvm->mem_attr_array, i, entry, 2583 2581 GFP_KERNEL_ACCOUNT)); 2584 2582 KVM_BUG_ON(r, kvm); 2583 + cond_resched(); 2585 2584 } 2586 2585 2587 2586 kvm_handle_gfn_range(kvm, &post_set_range);