Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.16-rc6).

No conflicts.

Adjacent changes:

Documentation/devicetree/bindings/net/allwinner,sun8i-a83t-emac.yaml
0a12c435a1d6 ("dt-bindings: net: sun8i-emac: Add A100 EMAC compatible")
b3603c0466a8 ("dt-bindings: net: sun8i-emac: Rename A523 EMAC0 to GMAC0")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+3349 -1829
+1 -1
Documentation/ABI/testing/sysfs-devices-power
··· 56 56 Contact: Rafael J. Wysocki <rjw@rjwysocki.net> 57 57 Description: 58 58 The /sys/devices/.../async attribute allows the user space to 59 - enable or diasble the device's suspend and resume callbacks to 59 + enable or disable the device's suspend and resume callbacks to 60 60 be executed asynchronously (ie. in separate threads, in parallel 61 61 with the main suspend/resume thread) during system-wide power 62 62 transitions (eg. suspend to RAM, hibernation).
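The `async` attribute corrected above is a writable sysfs file taking the strings `enabled` or `disabled`. A minimal shell sketch of toggling it; the device path is a placeholder, not a path from this commit:

```shell
# Sketch: toggle asynchronous suspend/resume callbacks for one device.
# Real paths live under /sys/devices/.../power/async; the one below is
# hypothetical, so on most systems this falls through to the else branch.
attr="/sys/devices/platform/example-dev/power/async"
if [ -w "$attr" ]; then
    echo enabled > "$attr"   # run this device's callbacks in a separate thread
    cat "$attr"              # reads back "enabled" or "disabled"
else
    echo "no async attribute at $attr"
fi
```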
+1
Documentation/ABI/testing/sysfs-devices-system-cpu
··· 584 584 /sys/devices/system/cpu/vulnerabilities/spectre_v1 585 585 /sys/devices/system/cpu/vulnerabilities/spectre_v2 586 586 /sys/devices/system/cpu/vulnerabilities/srbds 587 + /sys/devices/system/cpu/vulnerabilities/tsa 587 588 /sys/devices/system/cpu/vulnerabilities/tsx_async_abort 588 589 Date: January 2018 589 590 Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
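The new `tsa` entry joins the other read-only files under `/sys/devices/system/cpu/vulnerabilities`. A hedged sketch for reading it; the file only exists on kernels carrying this change:

```shell
# Read the TSA (Transient Scheduler Attacks) mitigation status, falling
# back gracefully on kernels or CPUs that do not expose the file.
f="/sys/devices/system/cpu/vulnerabilities/tsa"
if [ -r "$f" ]; then
    cat "$f"
else
    echo "tsa: not reported by this kernel"
fi
```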
+1 -1
Documentation/ABI/testing/sysfs-driver-ufs
··· 711 711 712 712 The file is read only. 713 713 714 - What: /sys/class/scsi_device/*/device/unit_descriptor/physical_memory_resourse_count 714 + What: /sys/class/scsi_device/*/device/unit_descriptor/physical_memory_resource_count 715 715 Date: February 2018 716 716 Contact: Stanislav Nijnikov <stanislav.nijnikov@wdc.com> 717 717 Description: This file shows the total physical memory resources. This is
+1 -3
Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
··· 157 157 combination with a microcode update. The microcode clears the affected CPU 158 158 buffers when the VERW instruction is executed. 159 159 160 - Kernel reuses the MDS function to invoke the buffer clearing: 161 - 162 - mds_clear_cpu_buffers() 160 + Kernel does the buffer clearing with x86_clear_cpu_buffers(). 163 161 164 162 On MDS affected CPUs, the kernel already invokes CPU buffer clear on 165 163 kernel/userspace, hypervisor/guest and C-state (idle) transitions. No
+13
Documentation/admin-guide/kernel-parameters.txt
··· 7488 7488 having this key zero'ed is acceptable. E.g. in testing 7489 7489 scenarios. 7490 7490 7491 + tsa= [X86] Control mitigation for Transient Scheduler 7492 + Attacks on AMD CPUs. Search the following in your 7493 + favourite search engine for more details: 7494 + 7495 + "Technical guidance for mitigating transient scheduler 7496 + attacks". 7497 + 7498 + off - disable the mitigation 7499 + on - enable the mitigation (default) 7500 + user - mitigate only user/kernel transitions 7501 + vm - mitigate only guest/host transitions 7502 + 7503 + 7491 7504 tsc= Disable clocksource stability checks for TSC. 7492 7505 Format: <string> 7493 7506 [x86] reliable: mark tsc clocksource as reliable, this
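A quick way to see whether one of the `tsa=` modes documented above was passed at boot is to inspect the kernel command line; a minimal sketch:

```shell
# Check /proc/cmdline for an explicit tsa= setting; when absent, the
# documented default ("on") applies on affected AMD CPUs.
cmdline_file=/proc/cmdline
if [ -r "$cmdline_file" ] && grep -qo 'tsa=[a-z]*' "$cmdline_file"; then
    echo "tsa mitigation explicitly configured:"
    grep -o 'tsa=[a-z]*' "$cmdline_file"
else
    echo "tsa= not set; kernel default applies"
fi
```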
+4 -4
Documentation/arch/x86/mds.rst
··· 93 93 94 94 The kernel provides a function to invoke the buffer clearing: 95 95 96 - mds_clear_cpu_buffers() 96 + x86_clear_cpu_buffers() 97 97 98 98 Also macro CLEAR_CPU_BUFFERS can be used in ASM late in exit-to-user path. 99 99 Other than CFLAGS.ZF, this macro doesn't clobber any registers. ··· 185 185 idle clearing would be a window dressing exercise and is therefore not 186 186 activated. 187 187 188 - The invocation is controlled by the static key mds_idle_clear which is 189 - switched depending on the chosen mitigation mode and the SMT state of 190 - the system. 188 + The invocation is controlled by the static key cpu_buf_idle_clear which is 189 + switched depending on the chosen mitigation mode and the SMT state of the 190 + system. 191 191 192 192 The buffer clear is only invoked before entering the C-State to prevent 193 193 that stale data from the idling CPU from spilling to the Hyper-Thread
+2 -1
Documentation/devicetree/bindings/i2c/realtek,rtl9301-i2c.yaml
··· 26 26 - const: realtek,rtl9301-i2c 27 27 28 28 reg: 29 - description: Register offset and size this I2C controller. 29 + items: 30 + - description: Register offset and size this I2C controller. 30 31 31 32 "#address-cells": 32 33 const: 1
+7 -5
Documentation/devicetree/bindings/input/elan,ekth6915.yaml
··· 4 4 $id: http://devicetree.org/schemas/input/elan,ekth6915.yaml# 5 5 $schema: http://devicetree.org/meta-schemas/core.yaml# 6 6 7 - title: Elan eKTH6915 touchscreen controller 7 + title: Elan I2C-HID touchscreen controllers 8 8 9 9 maintainers: 10 10 - Douglas Anderson <dianders@chromium.org> 11 11 12 12 description: 13 - Supports the Elan eKTH6915 touchscreen controller. 14 - This touchscreen controller uses the i2c-hid protocol with a reset GPIO. 13 + Supports the Elan eKTH6915 and other I2C-HID touchscreen controllers. 14 + These touchscreen controller use the i2c-hid protocol with a reset GPIO. 15 15 16 16 allOf: 17 17 - $ref: /schemas/input/touchscreen/touchscreen.yaml# ··· 23 23 - enum: 24 24 - elan,ekth5015m 25 25 - const: elan,ekth6915 26 + - items: 27 + - const: elan,ekth8d18 28 + - const: elan,ekth6a12nay 26 29 - enum: 27 30 - elan,ekth6915 28 31 - elan,ekth6a12nay 29 32 30 - reg: 31 - const: 0x10 33 + reg: true 32 34 33 35 interrupts: 34 36 maxItems: 1
+1 -1
Documentation/devicetree/bindings/net/allwinner,sun8i-a83t-emac.yaml
··· 24 24 - allwinner,sun50i-a100-emac 25 25 - allwinner,sun50i-h6-emac 26 26 - allwinner,sun50i-h616-emac0 27 - - allwinner,sun55i-a523-emac0 27 + - allwinner,sun55i-a523-gmac0 28 28 - const: allwinner,sun50i-a64-emac 29 29 30 30 reg:
+21 -14
Documentation/virt/kvm/api.rst
··· 7196 7196 u64 leaf; 7197 7197 u64 r11, r12, r13, r14; 7198 7198 } get_tdvmcall_info; 7199 + struct { 7200 + u64 ret; 7201 + u64 vector; 7202 + } setup_event_notify; 7199 7203 }; 7200 7204 } tdx; 7201 7205 ··· 7214 7210 inputs and outputs of the TDVMCALL. Currently the following values of 7215 7211 ``nr`` are defined: 7216 7212 7217 - * ``TDVMCALL_GET_QUOTE``: the guest has requested to generate a TD-Quote 7218 - signed by a service hosting TD-Quoting Enclave operating on the host. 7219 - Parameters and return value are in the ``get_quote`` field of the union. 7220 - The ``gpa`` field and ``size`` specify the guest physical address 7221 - (without the shared bit set) and the size of a shared-memory buffer, in 7222 - which the TDX guest passes a TD Report. The ``ret`` field represents 7223 - the return value of the GetQuote request. When the request has been 7224 - queued successfully, the TDX guest can poll the status field in the 7225 - shared-memory area to check whether the Quote generation is completed or 7226 - not. When completed, the generated Quote is returned via the same buffer. 7213 - * ``TDVMCALL_GET_QUOTE``: the guest has requested to generate a TD-Quote 7214 - signed by a service hosting TD-Quoting Enclave operating on the host. 7215 - Parameters and return value are in the ``get_quote`` field of the union. 7216 - The ``gpa`` field and ``size`` specify the guest physical address 7217 - (without the shared bit set) and the size of a shared-memory buffer, in 7218 - which the TDX guest passes a TD Report. The ``ret`` field represents 7219 - the return value of the GetQuote request. When the request has been 7220 - queued successfully, the TDX guest can poll the status field in the 7221 - shared-memory area to check whether the Quote generation is completed or 7222 - not. When completed, the generated Quote is returned via the same buffer. 7227 7223 7228 - * ``TDVMCALL_GET_TD_VM_CALL_INFO``: the guest has requested the support 7229 - status of TDVMCALLs. The output values for the given leaf should be 7230 - placed in fields from ``r11`` to ``r14`` of the ``get_tdvmcall_info`` 7231 - field of the union. 7224 - * ``TDVMCALL_GET_TD_VM_CALL_INFO``: the guest has requested the support 7225 - status of TDVMCALLs. The output values for the given leaf should be 7226 - placed in fields from ``r11`` to ``r14`` of the ``get_tdvmcall_info`` 7227 - field of the union. 7228 + 7229 + * ``TDVMCALL_SETUP_EVENT_NOTIFY_INTERRUPT``: the guest has requested to 7230 + set up a notification interrupt for vector ``vector``. 7232 7231 7233 7232 KVM may add support for more values in the future that may cause a userspace 7234 7233 exit, even without calls to ``KVM_ENABLE_CAP`` or similar. In this case,
+14 -1
Documentation/virt/kvm/x86/intel-tdx.rst
··· 79 79 struct kvm_tdx_capabilities { 80 80 __u64 supported_attrs; 81 81 __u64 supported_xfam; 82 - __u64 reserved[254]; 82 + 83 + /* TDG.VP.VMCALL hypercalls executed in kernel and forwarded to 84 + * userspace, respectively 85 + */ 86 + __u64 kernel_tdvmcallinfo_1_r11; 87 + __u64 user_tdvmcallinfo_1_r11; 88 + 89 + /* TDG.VP.VMCALL instruction executions subfunctions executed in kernel 90 + * and forwarded to userspace, respectively 91 + */ 92 + __u64 kernel_tdvmcallinfo_1_r12; 93 + __u64 user_tdvmcallinfo_1_r12; 94 + 95 + __u64 reserved[250]; 83 96 84 97 /* Configurable CPUID bits for userspace */ 85 98 struct kvm_cpuid2 cpuid;
+9 -5
Documentation/wmi/acpi-interface.rst
··· 36 36 37 37 The WMI object flags control whether the method or notification ID is used: 38 38 39 - - 0x1: Data block usage is expensive and must be explicitly enabled/disabled. 39 + - 0x1: Data block is expensive to collect. 40 40 - 0x2: Data block contains WMI methods. 41 41 - 0x4: Data block contains ASCIZ string. 42 42 - 0x8: Data block describes a WMI event, use notification ID instead ··· 83 83 of 0 if the WMI event should be disabled, other values will enable 84 84 the WMI event. 85 85 86 + Those ACPI methods are always called even for WMI events not registered as 87 + being expensive to collect to match the behavior of the Windows driver. 88 + 86 89 WCxx ACPI methods 87 90 ----------------- 88 - Similar to the ``WExx`` ACPI methods, except that it controls data collection 89 - instead of events and thus the last two characters of the ACPI method name are 90 - the method ID of the data block to enable/disable. 91 + Similar to the ``WExx`` ACPI methods, except that instead of WMI events it controls 92 + data collection of data blocks registered as being expensive to collect. Thus the 93 + last two characters of the ACPI method name are the method ID of the data block 94 + to enable/disable. 91 95 92 96 Those ACPI methods are also called before setting data blocks to match the 93 - behaviour of the Windows driver. 97 + behavior of the Windows driver. 94 98 95 99 _WED ACPI method 96 100 ----------------
+11 -16
MAINTAINERS
··· 4186 4186 F: include/linux/find.h 4187 4187 F: include/linux/nodemask.h 4188 4188 F: include/linux/nodemask_types.h 4189 + F: include/uapi/linux/bits.h 4189 4190 F: include/vdso/bits.h 4190 4191 F: lib/bitmap-str.c 4191 4192 F: lib/bitmap.c ··· 4199 4198 F: tools/include/linux/bitmap.h 4200 4199 F: tools/include/linux/bits.h 4201 4200 F: tools/include/linux/find.h 4201 + F: tools/include/uapi/linux/bits.h 4202 4202 F: tools/include/vdso/bits.h 4203 4203 F: tools/lib/bitmap.c 4204 4204 F: tools/lib/find_bit.c ··· 16845 16843 MODULE SUPPORT 16846 16844 M: Luis Chamberlain <mcgrof@kernel.org> 16847 16845 M: Petr Pavlu <petr.pavlu@suse.com> 16846 + M: Daniel Gomez <da.gomez@kernel.org> 16848 16847 R: Sami Tolvanen <samitolvanen@google.com> 16849 - R: Daniel Gomez <da.gomez@samsung.com> 16850 16848 L: linux-modules@vger.kernel.org 16851 16849 L: linux-kernel@vger.kernel.org 16852 16850 S: Maintained ··· 17245 17243 F: include/linux/mfd/ntxec.h 17246 17244 17247 17245 NETRONOME ETHERNET DRIVERS 17248 - M: Louis Peens <louis.peens@corigine.com> 17249 17246 R: Jakub Kicinski <kuba@kernel.org> 17247 + R: Simon Horman <horms@kernel.org> 17250 17248 L: oss-drivers@corigine.com 17251 - S: Maintained 17249 + S: Odd Fixes 17252 17250 F: drivers/net/ethernet/netronome/ 17253 17251 17254 17252 NETWORK BLOCK DEVICE (NBD) ··· 19624 19622 F: drivers/pinctrl/intel/ 19625 19623 19626 19624 PIN CONTROLLER - KEEMBAY 19627 - M: Lakshmi Sowjanya D <lakshmi.sowjanya.d@intel.com> 19628 - S: Supported 19625 + S: Orphan 19629 19626 F: drivers/pinctrl/pinctrl-keembay* 19630 19627 19631 19628 PIN CONTROLLER - MEDIATEK ··· 20177 20176 F: Documentation/devicetree/bindings/soc/qcom/qcom,apr* 20178 20177 F: Documentation/devicetree/bindings/sound/qcom,* 20179 20178 F: drivers/soc/qcom/apr.c 20180 - F: include/dt-bindings/sound/qcom,wcd9335.h 20181 - F: include/dt-bindings/sound/qcom,wcd934x.h 20182 - F: sound/soc/codecs/lpass-rx-macro.* 20183 - F: sound/soc/codecs/lpass-tx-macro.* 20184 - F: sound/soc/codecs/lpass-va-macro.c 20185 - F: sound/soc/codecs/lpass-wsa-macro.* 20179 + F: drivers/soundwire/qcom.c 20180 + F: include/dt-bindings/sound/qcom,wcd93* 20181 + F: sound/soc/codecs/lpass-*.* 20186 20182 F: sound/soc/codecs/msm8916-wcd-analog.c 20187 20183 F: sound/soc/codecs/msm8916-wcd-digital.c 20188 20184 F: sound/soc/codecs/wcd-clsh-v2.* 20189 20185 F: sound/soc/codecs/wcd-mbhc-v2.* 20190 - F: sound/soc/codecs/wcd9335.* 20191 - F: sound/soc/codecs/wcd934x.c 20192 - F: sound/soc/codecs/wsa881x.c 20193 - F: sound/soc/codecs/wsa883x.c 20194 - F: sound/soc/codecs/wsa884x.c 20186 + F: sound/soc/codecs/wcd93*.* 20187 + F: sound/soc/codecs/wsa88*.* 20195 20188 F: sound/soc/qcom/ 20196 20189 20197 20190 QCOM EMBEDDED USB DEBUGGER (EUD)
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 16 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc4 5 + EXTRAVERSION = -rc5 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
+1
arch/arm64/Kconfig
··· 256 256 select HOTPLUG_SMT if HOTPLUG_CPU 257 257 select IRQ_DOMAIN 258 258 select IRQ_FORCED_THREADING 259 + select JUMP_LABEL 259 260 select KASAN_VMALLOC if KASAN 260 261 select LOCK_MM_AND_FIND_VMA 261 262 select MODULES_USE_ELF_RELA
-2
arch/arm64/boot/dts/apple/spi1-nvram.dtsi
··· 20 20 compatible = "jedec,spi-nor"; 21 21 reg = <0x0>; 22 22 spi-max-frequency = <25000000>; 23 - #address-cells = <1>; 24 - #size-cells = <1>; 25 23 26 24 partitions { 27 25 compatible = "fixed-partitions";
+2
arch/arm64/boot/dts/apple/t8103-j293.dts
··· 100 100 101 101 &displaydfr_mipi { 102 102 status = "okay"; 103 + #address-cells = <1>; 104 + #size-cells = <0>; 103 105 104 106 dfr_panel: panel@0 { 105 107 compatible = "apple,j293-summit", "apple,summit";
+1 -1
arch/arm64/boot/dts/apple/t8103-jxxx.dtsi
··· 71 71 */ 72 72 &port00 { 73 73 bus-range = <1 1>; 74 - wifi0: network@0,0 { 74 + wifi0: wifi@0,0 { 75 75 compatible = "pci14e4,4425"; 76 76 reg = <0x10000 0x0 0x0 0x0 0x0>; 77 77 /* To be filled by the loader */
-2
arch/arm64/boot/dts/apple/t8103.dtsi
··· 405 405 compatible = "apple,t8103-display-pipe-mipi", "apple,h7-display-pipe-mipi"; 406 406 reg = <0x2 0x28600000 0x0 0x100000>; 407 407 power-domains = <&ps_mipi_dsi>; 408 - #address-cells = <1>; 409 - #size-cells = <0>; 410 408 status = "disabled"; 411 409 412 410 ports {
+2
arch/arm64/boot/dts/apple/t8112-j493.dts
··· 63 63 64 64 &displaydfr_mipi { 65 65 status = "okay"; 66 + #address-cells = <1>; 67 + #size-cells = <0>; 66 68 67 69 dfr_panel: panel@0 { 68 70 compatible = "apple,j493-summit", "apple,summit";
-2
arch/arm64/boot/dts/apple/t8112.dtsi
··· 420 420 compatible = "apple,t8112-display-pipe-mipi", "apple,h7-display-pipe-mipi"; 421 421 reg = <0x2 0x28600000 0x0 0x100000>; 422 422 power-domains = <&ps_mipi_dsi>; 423 - #address-cells = <1>; 424 - #size-cells = <0>; 425 423 status = "disabled"; 426 424 427 425 ports {
+1 -1
arch/arm64/configs/defconfig
··· 1573 1573 CONFIG_RESET_QCOM_PDC=m 1574 1574 CONFIG_RESET_RZG2L_USBPHY_CTRL=y 1575 1575 CONFIG_RESET_TI_SCI=y 1576 + CONFIG_PHY_SNPS_EUSB2=m 1576 1577 CONFIG_PHY_XGENE=y 1577 1578 CONFIG_PHY_CAN_TRANSCEIVER=m 1578 1579 CONFIG_PHY_NXP_PTN3222=m ··· 1598 1597 CONFIG_PHY_QCOM_PCIE2=m 1599 1598 CONFIG_PHY_QCOM_QMP=m 1600 1599 CONFIG_PHY_QCOM_QUSB2=m 1601 - CONFIG_PHY_QCOM_SNPS_EUSB2=m 1602 1600 CONFIG_PHY_QCOM_EUSB2_REPEATER=m 1603 1601 CONFIG_PHY_QCOM_M31_USB=m 1604 1602 CONFIG_PHY_QCOM_USB_HS=m
+7 -12
arch/arm64/include/asm/el2_setup.h
··· 287 287 .Lskip_fgt2_\@: 288 288 .endm 289 289 290 - .macro __init_el2_gcs 291 - mrs_s x1, SYS_ID_AA64PFR1_EL1 292 - ubfx x1, x1, #ID_AA64PFR1_EL1_GCS_SHIFT, #4 293 - cbz x1, .Lskip_gcs_\@ 294 - 295 - /* Ensure GCS is not enabled when we start trying to do BLs */ 296 - msr_s SYS_GCSCR_EL1, xzr 297 - msr_s SYS_GCSCRE0_EL1, xzr 298 - .Lskip_gcs_\@: 299 - .endm 300 - 301 290 /** 302 291 * Initialize EL2 registers to sane values. This should be called early on all 303 292 * cores that were booted in EL2. Note that everything gets initialised as ··· 308 319 __init_el2_cptr 309 320 __init_el2_fgt 310 321 __init_el2_fgt2 311 - __init_el2_gcs 312 322 .endm 313 323 314 324 #ifndef __KVM_NVHE_HYPERVISOR__ ··· 359 371 msr_s SYS_MPAMHCR_EL2, xzr // clear TRAP_MPAMIDR_EL1 -> EL2 360 372 361 373 .Lskip_mpam_\@: 374 + check_override id_aa64pfr1, ID_AA64PFR1_EL1_GCS_SHIFT, .Linit_gcs_\@, .Lskip_gcs_\@, x1, x2 375 + 376 + .Linit_gcs_\@: 377 + msr_s SYS_GCSCR_EL1, xzr 378 + msr_s SYS_GCSCRE0_EL1, xzr 379 + 380 + .Lskip_gcs_\@: 362 381 check_override id_aa64pfr0, ID_AA64PFR0_EL1_SVE_SHIFT, .Linit_sve_\@, .Lskip_sve_\@, x1, x2 363 382 364 383 .Linit_sve_\@: /* SVE register access */
-1
arch/arm64/include/asm/kvm_host.h
··· 1480 1480 struct reg_mask_range *range); 1481 1481 1482 1482 /* Guest/host FPSIMD coordination helpers */ 1483 - int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu); 1484 1483 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu); 1485 1484 void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu); 1486 1485 void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
+1 -2
arch/arm64/kernel/Makefile
··· 34 34 cpufeature.o alternative.o cacheinfo.o \ 35 35 smp.o smp_spin_table.o topology.o smccc-call.o \ 36 36 syscall.o proton-pack.o idle.o patching.o pi/ \ 37 - rsi.o 37 + rsi.o jump_label.o 38 38 39 39 obj-$(CONFIG_COMPAT) += sys32.o signal32.o \ 40 40 sys_compat.o ··· 47 47 obj-$(CONFIG_HARDLOCKUP_DETECTOR_PERF) += watchdog_hld.o 48 48 obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o 49 49 obj-$(CONFIG_CPU_PM) += sleep.o suspend.o 50 - obj-$(CONFIG_JUMP_LABEL) += jump_label.o 51 50 obj-$(CONFIG_KGDB) += kgdb.o 52 51 obj-$(CONFIG_EFI) += efi.o efi-rt-wrapper.o 53 52 obj-$(CONFIG_PCI) += pci.o
+32 -25
arch/arm64/kernel/cpufeature.c
··· 3135 3135 } 3136 3136 #endif 3137 3137 3138 + #ifdef CONFIG_ARM64_SME 3139 + static bool has_sme_feature(const struct arm64_cpu_capabilities *cap, int scope) 3140 + { 3141 + return system_supports_sme() && has_user_cpuid_feature(cap, scope); 3142 + } 3143 + #endif 3144 + 3138 3145 static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = { 3139 3146 HWCAP_CAP(ID_AA64ISAR0_EL1, AES, PMULL, CAP_HWCAP, KERNEL_HWCAP_PMULL), 3140 3147 HWCAP_CAP(ID_AA64ISAR0_EL1, AES, AES, CAP_HWCAP, KERNEL_HWCAP_AES), ··· 3230 3223 HWCAP_CAP(ID_AA64ISAR2_EL1, BC, IMP, CAP_HWCAP, KERNEL_HWCAP_HBC), 3231 3224 #ifdef CONFIG_ARM64_SME 3232 3225 HWCAP_CAP(ID_AA64PFR1_EL1, SME, IMP, CAP_HWCAP, KERNEL_HWCAP_SME), 3233 - HWCAP_CAP(ID_AA64SMFR0_EL1, FA64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_FA64), 3234 - HWCAP_CAP(ID_AA64SMFR0_EL1, LUTv2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_LUTV2), 3235 - HWCAP_CAP(ID_AA64SMFR0_EL1, SMEver, SME2p2, CAP_HWCAP, KERNEL_HWCAP_SME2P2), 3236 - HWCAP_CAP(ID_AA64SMFR0_EL1, SMEver, SME2p1, CAP_HWCAP, KERNEL_HWCAP_SME2P1), 3237 - HWCAP_CAP(ID_AA64SMFR0_EL1, SMEver, SME2, CAP_HWCAP, KERNEL_HWCAP_SME2), 3238 - HWCAP_CAP(ID_AA64SMFR0_EL1, I16I64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I64), 3239 - HWCAP_CAP(ID_AA64SMFR0_EL1, F64F64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F64F64), 3240 - HWCAP_CAP(ID_AA64SMFR0_EL1, I16I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I32), 3241 - HWCAP_CAP(ID_AA64SMFR0_EL1, B16B16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16B16), 3242 - HWCAP_CAP(ID_AA64SMFR0_EL1, F16F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F16), 3243 - HWCAP_CAP(ID_AA64SMFR0_EL1, F8F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F16), 3244 - HWCAP_CAP(ID_AA64SMFR0_EL1, F8F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F32), 3245 - HWCAP_CAP(ID_AA64SMFR0_EL1, I8I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I8I32), 3246 - HWCAP_CAP(ID_AA64SMFR0_EL1, F16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F32), 3247 - HWCAP_CAP(ID_AA64SMFR0_EL1, B16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16F32), 3248 - HWCAP_CAP(ID_AA64SMFR0_EL1, BI32I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_BI32I32), 3249 - HWCAP_CAP(ID_AA64SMFR0_EL1, F32F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F32F32), 3250 - HWCAP_CAP(ID_AA64SMFR0_EL1, SF8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8FMA), 3251 - HWCAP_CAP(ID_AA64SMFR0_EL1, SF8DP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP4), 3252 - HWCAP_CAP(ID_AA64SMFR0_EL1, SF8DP2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP2), 3253 - HWCAP_CAP(ID_AA64SMFR0_EL1, SBitPerm, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SBITPERM), 3254 - HWCAP_CAP(ID_AA64SMFR0_EL1, AES, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_AES), 3255 - HWCAP_CAP(ID_AA64SMFR0_EL1, SFEXPA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SFEXPA), 3256 - HWCAP_CAP(ID_AA64SMFR0_EL1, STMOP, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_STMOP), 3257 - HWCAP_CAP(ID_AA64SMFR0_EL1, SMOP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SMOP4), 3226 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, FA64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_FA64), 3227 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, LUTv2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_LUTV2), 3228 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMEver, SME2p2, CAP_HWCAP, KERNEL_HWCAP_SME2P2), 3229 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMEver, SME2p1, CAP_HWCAP, KERNEL_HWCAP_SME2P1), 3230 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMEver, SME2, CAP_HWCAP, KERNEL_HWCAP_SME2), 3231 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, I16I64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I64), 3232 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F64F64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F64F64), 3233 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, I16I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I32), 3234 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, B16B16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16B16), 3235 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F16F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F16), 3236 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F8F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F16), 3237 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F8F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F32), 3238 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, I8I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I8I32), 3239 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F32), 3240 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, B16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16F32), 3241 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, BI32I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_BI32I32), 3242 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F32F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F32F32), 3243 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SF8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8FMA), 3244 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SF8DP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP4), 3245 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SF8DP2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP2), 3246 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SBitPerm, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SBITPERM), 3247 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, AES, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_AES), 3248 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SFEXPA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SFEXPA), 3249 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, STMOP, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_STMOP), 3250 + HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMOP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SMOP4), 3258 3251 #endif /* CONFIG_ARM64_SME */ 3259 3252 HWCAP_CAP(ID_AA64FPFR0_EL1, F8CVT, IMP, CAP_HWCAP, KERNEL_HWCAP_F8CVT), 3260 3253 HWCAP_CAP(ID_AA64FPFR0_EL1, F8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_F8FMA),
+8 -3
arch/arm64/kernel/efi.c
··· 15 15 16 16 #include <asm/efi.h> 17 17 #include <asm/stacktrace.h> 18 + #include <asm/vmap_stack.h> 18 19 19 20 static bool region_is_misaligned(const efi_memory_desc_t *md) 20 21 { ··· 215 214 if (!efi_enabled(EFI_RUNTIME_SERVICES)) 216 215 return 0; 217 216 218 - p = __vmalloc_node(THREAD_SIZE, THREAD_ALIGN, GFP_KERNEL, 219 - NUMA_NO_NODE, &&l); 220 - l: if (!p) { 217 + if (!IS_ENABLED(CONFIG_VMAP_STACK)) { 218 + clear_bit(EFI_RUNTIME_SERVICES, &efi.flags); 219 + return -ENOMEM; 220 + } 221 + 222 + p = arch_alloc_vmap_stack(THREAD_SIZE, NUMA_NO_NODE); 223 + if (!p) { 221 224 pr_warn("Failed to allocate EFI runtime stack\n"); 222 225 clear_bit(EFI_RUNTIME_SERVICES, &efi.flags); 223 226 return -ENOMEM;
+5
arch/arm64/kernel/process.c
··· 673 673 current->thread.por_el0 = read_sysreg_s(SYS_POR_EL0); 674 674 if (current->thread.por_el0 != next->thread.por_el0) { 675 675 write_sysreg_s(next->thread.por_el0, SYS_POR_EL0); 676 + /* 677 + * No ISB required as we can tolerate spurious Overlay faults - 678 + * the fault handler will check again based on the new value 679 + * of POR_EL0. 680 + */ 676 681 } 677 682 } 678 683
+1 -1
arch/arm64/kernel/smp.c
··· 1143 1143 void smp_send_stop(void) 1144 1144 { 1145 1145 static unsigned long stop_in_progress; 1146 - cpumask_t mask; 1146 + static cpumask_t mask; 1147 1147 unsigned long timeout; 1148 1148 1149 1149 /*
+10 -6
arch/arm64/kvm/arm.c
··· 825 825 if (!kvm_arm_vcpu_is_finalized(vcpu)) 826 826 return -EPERM; 827 827 828 - ret = kvm_arch_vcpu_run_map_fp(vcpu); 829 - if (ret) 830 - return ret; 831 - 832 828 if (likely(vcpu_has_run_once(vcpu))) 833 829 return 0; 834 830 ··· 2125 2129 2126 2130 static void cpu_hyp_uninit(void *discard) 2127 2131 { 2128 - if (__this_cpu_read(kvm_hyp_initialized)) { 2132 + if (!is_protected_kvm_enabled() && __this_cpu_read(kvm_hyp_initialized)) { 2129 2133 cpu_hyp_reset(); 2130 2134 __this_cpu_write(kvm_hyp_initialized, 0); 2131 2135 } ··· 2341 2345 2342 2346 free_hyp_pgds(); 2343 2347 for_each_possible_cpu(cpu) { 2348 + if (per_cpu(kvm_hyp_initialized, cpu)) 2349 + continue; 2350 + 2344 2351 free_pages(per_cpu(kvm_arm_hyp_stack_base, cpu), NVHE_STACK_SHIFT - PAGE_SHIFT); 2345 - free_pages(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu], nvhe_percpu_order()); 2352 + 2353 + if (!kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu]) 2354 + continue; 2346 2355 2347 2356 if (free_sve) { 2348 2357 struct cpu_sve_state *sve_state; ··· 2355 2354 sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state; 2356 2355 free_pages((unsigned long) sve_state, pkvm_host_sve_state_order()); 2357 2356 } 2357 + 2358 + free_pages(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu], nvhe_percpu_order()); 2359 + 2358 2360 } 2359 2361 } 2360 2362
-26
arch/arm64/kvm/fpsimd.c
··· 15 15 #include <asm/sysreg.h> 16 16 17 17 /* 18 - * Called on entry to KVM_RUN unless this vcpu previously ran at least 19 - * once and the most recent prior KVM_RUN for this vcpu was called from 20 - * the same task as current (highly likely). 21 - * 22 - * This is guaranteed to execute before kvm_arch_vcpu_load_fp(vcpu), 23 - * such that on entering hyp the relevant parts of current are already 24 - * mapped. 25 - */ 26 - int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu) 27 - { 28 - struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state; 29 - int ret; 30 - 31 - /* pKVM has its own tracking of the host fpsimd state. */ 32 - if (is_protected_kvm_enabled()) 33 - return 0; 34 - 35 - /* Make sure the host task fpsimd state is visible to hyp: */ 36 - ret = kvm_share_hyp(fpsimd, fpsimd + 1); 37 - if (ret) 38 - return ret; 39 - 40 - return 0; 41 - } 42 - 43 - /* 44 18 * Prepare vcpu for saving the host's FPSIMD state and loading the guest's. 45 19 * The actual loading is done by the FPSIMD access trap taken to hyp. 46 20 *
+12 -8
arch/arm64/kvm/hyp/nvhe/mem_protect.c
··· 479 479 { 480 480 struct kvm_mem_range cur; 481 481 kvm_pte_t pte; 482 + u64 granule; 482 483 s8 level; 483 484 int ret; 484 485 ··· 497 496 return -EPERM; 498 497 } 499 498 500 - do { 501 - u64 granule = kvm_granule_size(level); 499 + for (; level <= KVM_PGTABLE_LAST_LEVEL; level++) { 500 + if (!kvm_level_supports_block_mapping(level)) 501 + continue; 502 + granule = kvm_granule_size(level); 502 503 cur.start = ALIGN_DOWN(addr, granule); 503 504 cur.end = cur.start + granule; 504 - level++; 505 - } while ((level <= KVM_PGTABLE_LAST_LEVEL) && 506 - !(kvm_level_supports_block_mapping(level) && 507 - range_included(&cur, range))); 505 + if (!range_included(&cur, range)) 506 + continue; 507 + *range = cur; 508 + return 0; 509 + } 508 510 509 - *range = cur; 511 + WARN_ON(1); 510 512 511 - return 0; 513 + return -EINVAL; 512 514 } 513 515 514 516 int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
+23 -3
arch/arm64/kvm/nested.c
··· 1402 1402 } 1403 1403 } 1404 1404 1405 + #define has_tgran_2(__r, __sz) \ 1406 + ({ \ 1407 + u64 _s1, _s2, _mmfr0 = __r; \ 1408 + \ 1409 + _s2 = SYS_FIELD_GET(ID_AA64MMFR0_EL1, \ 1410 + TGRAN##__sz##_2, _mmfr0); \ 1411 + \ 1412 + _s1 = SYS_FIELD_GET(ID_AA64MMFR0_EL1, \ 1413 + TGRAN##__sz, _mmfr0); \ 1414 + \ 1415 + ((_s2 != ID_AA64MMFR0_EL1_TGRAN##__sz##_2_NI && \ 1416 + _s2 != ID_AA64MMFR0_EL1_TGRAN##__sz##_2_TGRAN##__sz) || \ 1417 + (_s2 == ID_AA64MMFR0_EL1_TGRAN##__sz##_2_TGRAN##__sz && \ 1418 + _s1 != ID_AA64MMFR0_EL1_TGRAN##__sz##_NI)); \ 1419 + }) 1405 1420 /* 1406 1421 * Our emulated CPU doesn't support all the possible features. For the 1407 1422 * sake of simplicity (and probably mental sanity), wipe out a number ··· 1426 1411 */ 1427 1412 u64 limit_nv_id_reg(struct kvm *kvm, u32 reg, u64 val) 1428 1413 { 1414 + u64 orig_val = val; 1415 + 1429 1416 switch (reg) { 1430 1417 case SYS_ID_AA64ISAR0_EL1: 1431 1418 /* Support everything but TME */ ··· 1497 1480 */ 1498 1481 switch (PAGE_SIZE) { 1499 1482 case SZ_4K: 1500 - val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN4_2, IMP); 1483 + if (has_tgran_2(orig_val, 4)) 1484 + val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN4_2, IMP); 1501 1485 fallthrough; 1502 1486 case SZ_16K: 1503 - val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN16_2, IMP); 1487 + if (has_tgran_2(orig_val, 16)) 1488 + val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN16_2, IMP); 1504 1489 fallthrough; 1505 1490 case SZ_64K: 1506 - val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN64_2, IMP); 1491 + if (has_tgran_2(orig_val, 64)) 1492 + val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN64_2, IMP); 1507 1493 break; 1508 1494 } 1509 1495
+1 -3
arch/arm64/kvm/vgic/vgic-v3-nested.c
··· 401 401 { 402 402 bool level; 403 403 404 - level = __vcpu_sys_reg(vcpu, ICH_HCR_EL2) & ICH_HCR_EL2_En; 405 - if (level) 406 - level &= vgic_v3_get_misr(vcpu); 404 + level = (__vcpu_sys_reg(vcpu, ICH_HCR_EL2) & ICH_HCR_EL2_En) && vgic_v3_get_misr(vcpu); 407 405 kvm_vgic_inject_irq(vcpu->kvm, vcpu, 408 406 vcpu->kvm->arch.vgic.mi_intid, level, vcpu); 409 407 }
+21 -9
arch/arm64/mm/fault.c
··· 487 487 } 488 488 } 489 489 490 - static bool fault_from_pkey(unsigned long esr, struct vm_area_struct *vma, 491 - unsigned int mm_flags) 490 + static bool fault_from_pkey(struct vm_area_struct *vma, unsigned int mm_flags) 492 491 { 493 - unsigned long iss2 = ESR_ELx_ISS2(esr); 494 - 495 492 if (!system_supports_poe()) 496 493 return false; 497 494 498 - if (esr_fsc_is_permission_fault(esr) && (iss2 & ESR_ELx_Overlay)) 499 - return true; 500 - 495 + /* 496 + * We do not check whether an Overlay fault has occurred because we 497 + * cannot make a decision based solely on its value: 498 + * 499 + * - If Overlay is set, a fault did occur due to POE, but it may be 500 + * spurious in those cases where we update POR_EL0 without ISB (e.g. 501 + * on context-switch). We would then need to manually check POR_EL0 502 + * against vma_pkey(vma), which is exactly what 503 + * arch_vma_access_permitted() does. 504 + * 505 + * - If Overlay is not set, we may still need to report a pkey fault. 506 + * This is the case if an access was made within a mapping but with no 507 + * page mapped, and POR_EL0 forbids the access (according to 508 + * vma_pkey()). Such access will result in a SIGSEGV regardless 509 + * because core code checks arch_vma_access_permitted(), but in order 510 + * to report the correct error code - SEGV_PKUERR - we must handle 511 + * that case here. 512 + */ 501 513 return !arch_vma_access_permitted(vma, 502 514 mm_flags & FAULT_FLAG_WRITE, 503 515 mm_flags & FAULT_FLAG_INSTRUCTION, ··· 647 635 goto bad_area; 648 636 } 649 637 650 - if (fault_from_pkey(esr, vma, mm_flags)) { 638 + if (fault_from_pkey(vma, mm_flags)) { 651 639 pkey = vma_pkey(vma); 652 640 vma_end_read(vma); 653 641 fault = 0; ··· 691 679 goto bad_area; 692 680 } 693 681 694 - if (fault_from_pkey(esr, vma, mm_flags)) { 682 + if (fault_from_pkey(vma, mm_flags)) { 695 683 pkey = vma_pkey(vma); 696 684 mmap_read_unlock(mm); 697 685 fault = 0;
-1
arch/arm64/mm/proc.S
··· 518 518 msr REG_PIR_EL1, x0 519 519 520 520 orr tcr2, tcr2, TCR2_EL1_PIE 521 - msr REG_TCR2_EL1, x0 522 521 523 522 .Lskip_indirection: 524 523
+2 -1
arch/riscv/Kconfig
··· 63 63 select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT 64 64 select ARCH_STACKWALK 65 65 select ARCH_SUPPORTS_ATOMIC_RMW 66 - select ARCH_SUPPORTS_CFI_CLANG 66 + # clang >= 17: https://github.com/llvm/llvm-project/commit/62fa708ceb027713b386c7e0efda994f8bdc27e2 67 + select ARCH_SUPPORTS_CFI_CLANG if CLANG_VERSION >= 170000 67 68 select ARCH_SUPPORTS_DEBUG_PAGEALLOC if MMU 68 69 select ARCH_SUPPORTS_HUGE_PFNMAP if TRANSPARENT_HUGEPAGE 69 70 select ARCH_SUPPORTS_HUGETLBFS if MMU
+3 -3
arch/riscv/kernel/cpu_ops_sbi.c
··· 18 18 19 19 /* 20 20 * Ordered booting via HSM brings one cpu at a time. However, cpu hotplug can 21 - * be invoked from multiple threads in parallel. Define a per cpu data 21 + * be invoked from multiple threads in parallel. Define an array of boot data 22 22 * to handle that. 23 23 */ 24 - static DEFINE_PER_CPU(struct sbi_hart_boot_data, boot_data); 24 + static struct sbi_hart_boot_data boot_data[NR_CPUS]; 25 25 26 26 static int sbi_hsm_hart_start(unsigned long hartid, unsigned long saddr, 27 27 unsigned long priv) ··· 67 67 unsigned long boot_addr = __pa_symbol(secondary_start_sbi); 68 68 unsigned long hartid = cpuid_to_hartid_map(cpuid); 69 69 unsigned long hsm_data; 70 - struct sbi_hart_boot_data *bdata = &per_cpu(boot_data, cpuid); 70 + struct sbi_hart_boot_data *bdata = &boot_data[cpuid]; 71 71 72 72 /* Make sure tidle is updated */ 73 73 smp_mb();
+2
arch/s390/crypto/sha1_s390.c
··· 38 38 sctx->state[4] = SHA1_H4; 39 39 sctx->count = 0; 40 40 sctx->func = CPACF_KIMD_SHA_1; 41 + sctx->first_message_part = 0; 41 42 42 43 return 0; 43 44 } ··· 61 60 sctx->count = ictx->count; 62 61 memcpy(sctx->state, ictx->state, sizeof(ictx->state)); 63 62 sctx->func = CPACF_KIMD_SHA_1; 63 + sctx->first_message_part = 0; 64 64 return 0; 65 65 } 66 66
+3
arch/s390/crypto/sha512_s390.c
··· 32 32 ctx->count = 0; 33 33 ctx->sha512.count_hi = 0; 34 34 ctx->func = CPACF_KIMD_SHA_512; 35 + ctx->first_message_part = 0; 35 36 36 37 return 0; 37 38 } ··· 58 57 59 58 memcpy(sctx->state, ictx->state, sizeof(ictx->state)); 60 59 sctx->func = CPACF_KIMD_SHA_512; 60 + sctx->first_message_part = 0; 61 61 return 0; 62 62 } 63 63 ··· 99 97 ctx->count = 0; 100 98 ctx->sha512.count_hi = 0; 101 99 ctx->func = CPACF_KIMD_SHA_512; 100 + ctx->first_message_part = 0; 102 101 103 102 return 0; 104 103 }
+9
arch/x86/Kconfig
··· 2695 2695 disabled, mitigation cannot be enabled via cmdline. 2696 2696 See <file:Documentation/admin-guide/hw-vuln/indirect-target-selection.rst> 2697 2697 2698 + config MITIGATION_TSA 2699 + bool "Mitigate Transient Scheduler Attacks" 2700 + depends on CPU_SUP_AMD 2701 + default y 2702 + help 2703 + Enable mitigation for Transient Scheduler Attacks. TSA is a hardware 2704 + security vulnerability on AMD CPUs which can lead to forwarding of 2705 + invalid info to subsequent instructions and thus can affect their 2706 + timing and thereby cause a leakage. 2698 2707 endif 2699 2708 2700 2709 config ARCH_HAS_ADD_PAGES
+19 -3
arch/x86/coco/sev/core.c
··· 88 88 */ 89 89 static u64 snp_tsc_scale __ro_after_init; 90 90 static u64 snp_tsc_offset __ro_after_init; 91 - static u64 snp_tsc_freq_khz __ro_after_init; 91 + static unsigned long snp_tsc_freq_khz __ro_after_init; 92 92 93 93 DEFINE_PER_CPU(struct sev_es_runtime_data*, runtime_data); 94 94 DEFINE_PER_CPU(struct sev_es_save_area *, sev_vmsa); ··· 2167 2167 2168 2168 void __init snp_secure_tsc_init(void) 2169 2169 { 2170 - unsigned long long tsc_freq_mhz; 2170 + struct snp_secrets_page *secrets; 2171 + unsigned long tsc_freq_mhz; 2172 + void *mem; 2171 2173 2172 2174 if (!cc_platform_has(CC_ATTR_GUEST_SNP_SECURE_TSC)) 2173 2175 return; 2174 2176 2177 + mem = early_memremap_encrypted(sev_secrets_pa, PAGE_SIZE); 2178 + if (!mem) { 2179 + pr_err("Unable to get TSC_FACTOR: failed to map the SNP secrets page.\n"); 2180 + sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_SECURE_TSC); 2181 + } 2182 + 2183 + secrets = (__force struct snp_secrets_page *)mem; 2184 + 2175 2185 setup_force_cpu_cap(X86_FEATURE_TSC_KNOWN_FREQ); 2176 2186 rdmsrq(MSR_AMD64_GUEST_TSC_FREQ, tsc_freq_mhz); 2177 - snp_tsc_freq_khz = (unsigned long)(tsc_freq_mhz * 1000); 2187 + 2188 + /* Extract the GUEST TSC MHZ from BIT[17:0], rest is reserved space */ 2189 + tsc_freq_mhz &= GENMASK_ULL(17, 0); 2190 + 2191 + snp_tsc_freq_khz = SNP_SCALE_TSC_FREQ(tsc_freq_mhz * 1000, secrets->tsc_factor); 2178 2192 2179 2193 x86_platform.calibrate_cpu = securetsc_get_tsc_khz; 2180 2194 x86_platform.calibrate_tsc = securetsc_get_tsc_khz; 2195 + 2196 + early_memunmap(mem, PAGE_SIZE); 2181 2197 }
+4 -4
arch/x86/entry/entry.S
··· 36 36 37 37 /* 38 38 * Define the VERW operand that is disguised as entry code so that 39 - * it can be referenced with KPTI enabled. This ensure VERW can be 39 + * it can be referenced with KPTI enabled. This ensures VERW can be 40 40 * used late in exit-to-user path after page tables are switched. 41 41 */ 42 42 .pushsection .entry.text, "ax" 43 43 44 44 .align L1_CACHE_BYTES, 0xcc 45 - SYM_CODE_START_NOALIGN(mds_verw_sel) 45 + SYM_CODE_START_NOALIGN(x86_verw_sel) 46 46 UNWIND_HINT_UNDEFINED 47 47 ANNOTATE_NOENDBR 48 48 .word __KERNEL_DS 49 49 .align L1_CACHE_BYTES, 0xcc 50 - SYM_CODE_END(mds_verw_sel); 50 + SYM_CODE_END(x86_verw_sel); 51 51 /* For KVM */ 52 - EXPORT_SYMBOL_GPL(mds_verw_sel); 52 + EXPORT_SYMBOL_GPL(x86_verw_sel); 53 53 54 54 .popsection 55 55
arch/x86/include/asm/amd/fch.h include/linux/platform_data/x86/amd-fch.h
+5 -1
arch/x86/include/asm/cpufeatures.h
··· 456 456 #define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* No Nested Data Breakpoints */ 457 457 #define X86_FEATURE_WRMSR_XX_BASE_NS (20*32+ 1) /* WRMSR to {FS,GS,KERNEL_GS}_BASE is non-serializing */ 458 458 #define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* LFENCE always serializing / synchronizes RDTSC */ 459 + #define X86_FEATURE_VERW_CLEAR (20*32+ 5) /* The memory form of VERW mitigates TSA */ 459 460 #define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* Null Selector Clears Base */ 460 461 #define X86_FEATURE_AUTOIBRS (20*32+ 8) /* Automatic IBRS */ 461 462 #define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* SMM_CTL MSR is not present */ ··· 488 487 #define X86_FEATURE_PREFER_YMM (21*32+ 8) /* Avoid ZMM registers due to downclocking */ 489 488 #define X86_FEATURE_APX (21*32+ 9) /* Advanced Performance Extensions */ 490 489 #define X86_FEATURE_INDIRECT_THUNK_ITS (21*32+10) /* Use thunk for indirect branches in lower half of cacheline */ 490 + #define X86_FEATURE_TSA_SQ_NO (21*32+11) /* AMD CPU not vulnerable to TSA-SQ */ 491 + #define X86_FEATURE_TSA_L1_NO (21*32+12) /* AMD CPU not vulnerable to TSA-L1 */ 492 + #define X86_FEATURE_CLEAR_CPU_BUF_VM (21*32+13) /* Clear CPU buffers using VERW before VMRUN */ 491 493 492 494 /* 493 495 * BUG word(s) ··· 546 542 #define X86_BUG_OLD_MICROCODE X86_BUG( 1*32+ 6) /* "old_microcode" CPU has old microcode, it is surely vulnerable to something */ 547 543 #define X86_BUG_ITS X86_BUG( 1*32+ 7) /* "its" CPU is affected by Indirect Target Selection */ 548 544 #define X86_BUG_ITS_NATIVE_ONLY X86_BUG( 1*32+ 8) /* "its_native_only" CPU is affected by ITS, VMX is not affected */ 549 - 545 + #define X86_BUG_TSA X86_BUG( 1*32+ 9) /* "tsa" CPU is affected by Transient Scheduler Attacks */ 550 546 #endif /* _ASM_X86_CPUFEATURES_H */
+2 -2
arch/x86/include/asm/irqflags.h
··· 44 44 45 45 static __always_inline void native_safe_halt(void) 46 46 { 47 - mds_idle_clear_cpu_buffers(); 47 + x86_idle_clear_cpu_buffers(); 48 48 asm volatile("sti; hlt": : :"memory"); 49 49 } 50 50 51 51 static __always_inline void native_halt(void) 52 52 { 53 - mds_idle_clear_cpu_buffers(); 53 + x86_idle_clear_cpu_buffers(); 54 54 asm volatile("hlt": : :"memory"); 55 55 } 56 56
+7 -1
arch/x86/include/asm/kvm_host.h
··· 700 700 701 701 struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo[HV_NR_TLB_FLUSH_FIFOS]; 702 702 703 - /* Preallocated buffer for handling hypercalls passing sparse vCPU set */ 703 + /* 704 + * Preallocated buffers for handling hypercalls that pass sparse vCPU 705 + * sets (for high vCPU counts, they're too large to comfortably fit on 706 + * the stack). 707 + */ 704 708 u64 sparse_banks[HV_MAX_SPARSE_VCPU_BANKS]; 709 + DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS); 705 710 706 711 struct hv_vp_assist_page vp_assist_page; 707 712 ··· 769 764 CPUID_8000_0022_EAX, 770 765 CPUID_7_2_EDX, 771 766 CPUID_24_0_EBX, 767 + CPUID_8000_0021_ECX, 772 768 NR_KVM_CPU_CAPS, 773 769 774 770 NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,
+16 -11
arch/x86/include/asm/mwait.h
··· 43 43 44 44 static __always_inline void __mwait(u32 eax, u32 ecx) 45 45 { 46 - mds_idle_clear_cpu_buffers(); 47 - 48 46 /* 49 47 * Use the instruction mnemonic with implicit operands, as the LLVM 50 48 * assembler fails to assemble the mnemonic with explicit operands: ··· 78 80 */ 79 81 static __always_inline void __mwaitx(u32 eax, u32 ebx, u32 ecx) 80 82 { 81 - /* No MDS buffer clear as this is AMD/HYGON only */ 83 + /* No need for TSA buffer clearing on AMD */ 82 84 83 85 /* "mwaitx %eax, %ebx, %ecx" */ 84 86 asm volatile(".byte 0x0f, 0x01, 0xfb" ··· 96 98 */ 97 99 static __always_inline void __sti_mwait(u32 eax, u32 ecx) 98 100 { 99 - mds_idle_clear_cpu_buffers(); 100 101 101 102 asm volatile("sti; mwait" :: "a" (eax), "c" (ecx)); 102 103 } ··· 112 115 */ 113 116 static __always_inline void mwait_idle_with_hints(u32 eax, u32 ecx) 114 117 { 118 + if (need_resched()) 119 + return; 120 + 121 + x86_idle_clear_cpu_buffers(); 122 + 115 123 if (static_cpu_has_bug(X86_BUG_MONITOR) || !current_set_polling_and_test()) { 116 124 const void *addr = &current_thread_info()->flags; 117 125 118 126 alternative_input("", "clflush (%[addr])", X86_BUG_CLFLUSH_MONITOR, [addr] "a" (addr)); 119 127 __monitor(addr, 0, 0); 120 128 121 - if (!need_resched()) { 122 - if (ecx & 1) { 123 - __mwait(eax, ecx); 124 - } else { 125 - __sti_mwait(eax, ecx); 126 - raw_local_irq_disable(); 127 - } 129 + if (need_resched()) 130 + goto out; 131 + 132 + if (ecx & 1) { 133 + __mwait(eax, ecx); 134 + } else { 135 + __sti_mwait(eax, ecx); 136 + raw_local_irq_disable(); 128 137 } 129 138 } 139 + 140 + out: 130 141 current_clr_polling(); 131 142 } 132 143
+22 -15
arch/x86/include/asm/nospec-branch.h
··· 302 302 .endm 303 303 304 304 /* 305 - * Macro to execute VERW instruction that mitigate transient data sampling 306 - * attacks such as MDS. On affected systems a microcode update overloaded VERW 307 - * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF. 308 - * 305 + * Macro to execute VERW insns that mitigate transient data sampling 306 + * attacks such as MDS or TSA. On affected systems a microcode update 307 + * overloaded VERW insns to also clear the CPU buffers. VERW clobbers 308 + * CFLAGS.ZF. 309 309 * Note: Only the memory operand variant of VERW clears the CPU buffers. 310 310 */ 311 - .macro CLEAR_CPU_BUFFERS 311 + .macro __CLEAR_CPU_BUFFERS feature 312 312 #ifdef CONFIG_X86_64 313 - ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF 313 + ALTERNATIVE "", "verw x86_verw_sel(%rip)", \feature 314 314 #else 315 315 /* 316 316 * In 32bit mode, the memory operand must be a %cs reference. The data 317 317 * segments may not be usable (vm86 mode), and the stack segment may not 318 318 * be flat (ESPFIX32). 
319 319 */ 320 - ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF 320 + ALTERNATIVE "", "verw %cs:x86_verw_sel", \feature 321 321 #endif 322 322 .endm 323 + 324 + #define CLEAR_CPU_BUFFERS \ 325 + __CLEAR_CPU_BUFFERS X86_FEATURE_CLEAR_CPU_BUF 326 + 327 + #define VM_CLEAR_CPU_BUFFERS \ 328 + __CLEAR_CPU_BUFFERS X86_FEATURE_CLEAR_CPU_BUF_VM 323 329 324 330 #ifdef CONFIG_X86_64 325 331 .macro CLEAR_BRANCH_HISTORY ··· 573 567 574 568 DECLARE_STATIC_KEY_FALSE(switch_vcpu_ibpb); 575 569 576 - DECLARE_STATIC_KEY_FALSE(mds_idle_clear); 570 + DECLARE_STATIC_KEY_FALSE(cpu_buf_idle_clear); 577 571 578 572 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush); 579 573 580 574 DECLARE_STATIC_KEY_FALSE(cpu_buf_vm_clear); 581 575 582 - extern u16 mds_verw_sel; 576 + extern u16 x86_verw_sel; 583 577 584 578 #include <asm/segment.h> 585 579 586 580 /** 587 - * mds_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability 581 + * x86_clear_cpu_buffers - Buffer clearing support for different x86 CPU vulns 588 582 * 589 583 * This uses the otherwise unused and obsolete VERW instruction in 590 584 * combination with microcode which triggers a CPU buffer flush when the 591 585 * instruction is executed. 592 586 */ 593 - static __always_inline void mds_clear_cpu_buffers(void) 587 + static __always_inline void x86_clear_cpu_buffers(void) 594 588 { 595 589 static const u16 ds = __KERNEL_DS; 596 590 ··· 607 601 } 608 602 609 603 /** 610 - * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability 604 + * x86_idle_clear_cpu_buffers - Buffer clearing support in idle for the MDS 605 + * and TSA vulnerabilities. 
611 606 * 612 607 * Clear CPU buffers if the corresponding static key is enabled 613 608 */ 614 - static __always_inline void mds_idle_clear_cpu_buffers(void) 609 + static __always_inline void x86_idle_clear_cpu_buffers(void) 615 610 { 616 - if (static_branch_likely(&mds_idle_clear)) 617 - mds_clear_cpu_buffers(); 611 + if (static_branch_likely(&cpu_buf_idle_clear)) 612 + x86_clear_cpu_buffers(); 618 613 } 619 614 620 615 #endif /* __ASSEMBLER__ */
+16 -1
arch/x86/include/asm/sev.h
··· 223 223 u8 rsvd2[100]; 224 224 } __packed; 225 225 226 + /* 227 + * Obtain the mean TSC frequency by decreasing the nominal TSC frequency with 228 + * TSC_FACTOR as documented in the SNP Firmware ABI specification: 229 + * 230 + * GUEST_TSC_FREQ * (1 - (TSC_FACTOR * 0.00001)) 231 + * 232 + * which is equivalent to: 233 + * 234 + * GUEST_TSC_FREQ -= (GUEST_TSC_FREQ * TSC_FACTOR) / 100000; 235 + */ 236 + #define SNP_SCALE_TSC_FREQ(freq, factor) ((freq) - (freq) * (factor) / 100000) 237 + 226 238 struct snp_guest_req { 227 239 void *req_buf; 228 240 size_t req_sz; ··· 294 282 u8 svsm_guest_vmpl; 295 283 u8 rsvd3[3]; 296 284 285 + /* The percentage decrease from nominal to mean TSC frequency. */ 286 + u32 tsc_factor; 287 + 297 288 /* Remainder of page */ 298 - u8 rsvd4[3744]; 289 + u8 rsvd4[3740]; 299 290 } __packed; 300 291 301 292 struct snp_msg_desc {
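The TSC_FACTOR comment above lends itself to a quick arithmetic check. A minimal standalone sketch, with illustrative function names of my own (the 18-bit mask mirrors the GENMASK_ULL(17, 0) extraction done in snp_secure_tsc_init() in the sev/core.c hunk; the numeric inputs are made up, not from real hardware):

```c
#include <assert.h>
#include <stdint.h>

/* Mean TSC frequency: freq - freq * factor / 100000 (factor in 0.001% units) */
static uint64_t snp_scale_tsc_freq(uint64_t freq, uint32_t factor)
{
	return freq - freq * factor / 100000;
}

/* GUEST_TSC_FREQ MSR: the MHz value lives in bits [17:0], the rest is reserved */
static uint64_t guest_tsc_mhz(uint64_t msr_val)
{
	return msr_val & ((1ULL << 18) - 1);
}
```

With a nominal 2000 MHz guest TSC and tsc_factor = 1000 (a 1% decrease), the scaled result is 1980000 kHz; a factor of 0 leaves the frequency unchanged.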
+1
arch/x86/include/asm/shared/tdx.h
··· 72 72 #define TDVMCALL_MAP_GPA 0x10001 73 73 #define TDVMCALL_GET_QUOTE 0x10002 74 74 #define TDVMCALL_REPORT_FATAL_ERROR 0x10003 75 + #define TDVMCALL_SETUP_EVENT_NOTIFY_INTERRUPT 0x10004ULL 75 76 76 77 /* 77 78 * TDG.VP.VMCALL Status Codes (returned in R10)
+7 -1
arch/x86/include/uapi/asm/kvm.h
··· 965 965 struct kvm_tdx_capabilities { 966 966 __u64 supported_attrs; 967 967 __u64 supported_xfam; 968 - __u64 reserved[254]; 968 + 969 + __u64 kernel_tdvmcallinfo_1_r11; 970 + __u64 user_tdvmcallinfo_1_r11; 971 + __u64 kernel_tdvmcallinfo_1_r12; 972 + __u64 user_tdvmcallinfo_1_r12; 973 + 974 + __u64 reserved[250]; 969 975 970 976 /* Configurable CPUID bits for userspace */ 971 977 struct kvm_cpuid2 cpuid;
+45 -1
arch/x86/kernel/cpu/amd.c
··· 9 9 #include <linux/sched/clock.h> 10 10 #include <linux/random.h> 11 11 #include <linux/topology.h> 12 - #include <asm/amd/fch.h> 12 + #include <linux/platform_data/x86/amd-fch.h> 13 13 #include <asm/processor.h> 14 14 #include <asm/apic.h> 15 15 #include <asm/cacheinfo.h> ··· 377 377 #endif 378 378 } 379 379 380 + #define ZEN_MODEL_STEP_UCODE(fam, model, step, ucode) \ 381 + X86_MATCH_VFM_STEPS(VFM_MAKE(X86_VENDOR_AMD, fam, model), \ 382 + step, step, ucode) 383 + 384 + static const struct x86_cpu_id amd_tsa_microcode[] = { 385 + ZEN_MODEL_STEP_UCODE(0x19, 0x01, 0x1, 0x0a0011d7), 386 + ZEN_MODEL_STEP_UCODE(0x19, 0x01, 0x2, 0x0a00123b), 387 + ZEN_MODEL_STEP_UCODE(0x19, 0x08, 0x2, 0x0a00820d), 388 + ZEN_MODEL_STEP_UCODE(0x19, 0x11, 0x1, 0x0a10114c), 389 + ZEN_MODEL_STEP_UCODE(0x19, 0x11, 0x2, 0x0a10124c), 390 + ZEN_MODEL_STEP_UCODE(0x19, 0x18, 0x1, 0x0a108109), 391 + ZEN_MODEL_STEP_UCODE(0x19, 0x21, 0x0, 0x0a20102e), 392 + ZEN_MODEL_STEP_UCODE(0x19, 0x21, 0x2, 0x0a201211), 393 + ZEN_MODEL_STEP_UCODE(0x19, 0x44, 0x1, 0x0a404108), 394 + ZEN_MODEL_STEP_UCODE(0x19, 0x50, 0x0, 0x0a500012), 395 + ZEN_MODEL_STEP_UCODE(0x19, 0x61, 0x2, 0x0a60120a), 396 + ZEN_MODEL_STEP_UCODE(0x19, 0x74, 0x1, 0x0a704108), 397 + ZEN_MODEL_STEP_UCODE(0x19, 0x75, 0x2, 0x0a705208), 398 + ZEN_MODEL_STEP_UCODE(0x19, 0x78, 0x0, 0x0a708008), 399 + ZEN_MODEL_STEP_UCODE(0x19, 0x7c, 0x0, 0x0a70c008), 400 + ZEN_MODEL_STEP_UCODE(0x19, 0xa0, 0x2, 0x0aa00216), 401 + {}, 402 + }; 403 + 404 + static void tsa_init(struct cpuinfo_x86 *c) 405 + { 406 + if (cpu_has(c, X86_FEATURE_HYPERVISOR)) 407 + return; 408 + 409 + if (cpu_has(c, X86_FEATURE_ZEN3) || 410 + cpu_has(c, X86_FEATURE_ZEN4)) { 411 + if (x86_match_min_microcode_rev(amd_tsa_microcode)) 412 + setup_force_cpu_cap(X86_FEATURE_VERW_CLEAR); 413 + else 414 + pr_debug("%s: current revision: 0x%x\n", __func__, c->microcode); 415 + } else { 416 + setup_force_cpu_cap(X86_FEATURE_TSA_SQ_NO); 417 + setup_force_cpu_cap(X86_FEATURE_TSA_L1_NO); 418 + } 419 + } 
420 + 380 421 static void bsp_init_amd(struct cpuinfo_x86 *c) 381 422 { 382 423 if (cpu_has(c, X86_FEATURE_CONSTANT_TSC)) { ··· 530 489 } 531 490 532 491 bsp_determine_snp(c); 492 + 493 + tsa_init(c); 494 + 533 495 return; 534 496 535 497 warn:
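The table above pairs each affected Zen3/Zen4 model/stepping with the first fixed microcode revision. A simplified, self-contained sketch of the comparison that x86_match_min_microcode_rev() performs against such a table (the struct layout and helper name here are hypothetical stand-ins, and only two table rows are reproduced):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical flattened stand-in for the x86_cpu_id table entries */
struct ucode_min_rev {
	uint8_t  family, model, stepping;
	uint32_t min_rev;
};

static const struct ucode_min_rev tsa_table[] = {
	{ 0x19, 0x01, 0x1, 0x0a0011d7 },
	{ 0x19, 0x21, 0x0, 0x0a20102e },
};

/* true if the running CPU's microcode is at or above the table minimum */
static bool ucode_is_fixed(uint8_t fam, uint8_t model, uint8_t step,
			   uint32_t rev,
			   const struct ucode_min_rev *tbl, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		if (tbl[i].family == fam && tbl[i].model == model &&
		    tbl[i].stepping == step)
			return rev >= tbl[i].min_rev;
	}
	return false; /* unknown part: treat as not fixed */
}
```

An unlisted part falls through to "not fixed", which matches tsa_init() above taking the pr_debug() path and leaving X86_FEATURE_VERW_CLEAR unset.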
+130 -6
arch/x86/kernel/cpu/bugs.c
··· 94 94 static void __init its_select_mitigation(void); 95 95 static void __init its_update_mitigation(void); 96 96 static void __init its_apply_mitigation(void); 97 + static void __init tsa_select_mitigation(void); 98 + static void __init tsa_apply_mitigation(void); 97 99 98 100 /* The base value of the SPEC_CTRL MSR without task-specific bits set */ 99 101 u64 x86_spec_ctrl_base; ··· 171 169 DEFINE_STATIC_KEY_FALSE(switch_vcpu_ibpb); 172 170 EXPORT_SYMBOL_GPL(switch_vcpu_ibpb); 173 171 174 - /* Control MDS CPU buffer clear before idling (halt, mwait) */ 175 - DEFINE_STATIC_KEY_FALSE(mds_idle_clear); 176 - EXPORT_SYMBOL_GPL(mds_idle_clear); 172 + /* Control CPU buffer clear before idling (halt, mwait) */ 173 + DEFINE_STATIC_KEY_FALSE(cpu_buf_idle_clear); 174 + EXPORT_SYMBOL_GPL(cpu_buf_idle_clear); 177 175 178 176 /* 179 177 * Controls whether l1d flush based mitigations are enabled, ··· 227 225 gds_select_mitigation(); 228 226 its_select_mitigation(); 229 227 bhi_select_mitigation(); 228 + tsa_select_mitigation(); 230 229 231 230 /* 232 231 * After mitigations are selected, some may need to update their ··· 275 272 gds_apply_mitigation(); 276 273 its_apply_mitigation(); 277 274 bhi_apply_mitigation(); 275 + tsa_apply_mitigation(); 278 276 } 279 277 280 278 /* ··· 641 637 * is required irrespective of SMT state. 
642 638 */ 643 639 if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO)) 644 - static_branch_enable(&mds_idle_clear); 640 + static_branch_enable(&cpu_buf_idle_clear); 645 641 646 642 if (mmio_nosmt || cpu_mitigations_auto_nosmt()) 647 643 cpu_smt_disable(false); ··· 1492 1488 } 1493 1489 1494 1490 #undef pr_fmt 1491 + #define pr_fmt(fmt) "Transient Scheduler Attacks: " fmt 1492 + 1493 + enum tsa_mitigations { 1494 + TSA_MITIGATION_NONE, 1495 + TSA_MITIGATION_AUTO, 1496 + TSA_MITIGATION_UCODE_NEEDED, 1497 + TSA_MITIGATION_USER_KERNEL, 1498 + TSA_MITIGATION_VM, 1499 + TSA_MITIGATION_FULL, 1500 + }; 1501 + 1502 + static const char * const tsa_strings[] = { 1503 + [TSA_MITIGATION_NONE] = "Vulnerable", 1504 + [TSA_MITIGATION_UCODE_NEEDED] = "Vulnerable: No microcode", 1505 + [TSA_MITIGATION_USER_KERNEL] = "Mitigation: Clear CPU buffers: user/kernel boundary", 1506 + [TSA_MITIGATION_VM] = "Mitigation: Clear CPU buffers: VM", 1507 + [TSA_MITIGATION_FULL] = "Mitigation: Clear CPU buffers", 1508 + }; 1509 + 1510 + static enum tsa_mitigations tsa_mitigation __ro_after_init = 1511 + IS_ENABLED(CONFIG_MITIGATION_TSA) ? 
TSA_MITIGATION_AUTO : TSA_MITIGATION_NONE; 1512 + 1513 + static int __init tsa_parse_cmdline(char *str) 1514 + { 1515 + if (!str) 1516 + return -EINVAL; 1517 + 1518 + if (!strcmp(str, "off")) 1519 + tsa_mitigation = TSA_MITIGATION_NONE; 1520 + else if (!strcmp(str, "on")) 1521 + tsa_mitigation = TSA_MITIGATION_FULL; 1522 + else if (!strcmp(str, "user")) 1523 + tsa_mitigation = TSA_MITIGATION_USER_KERNEL; 1524 + else if (!strcmp(str, "vm")) 1525 + tsa_mitigation = TSA_MITIGATION_VM; 1526 + else 1527 + pr_err("Ignoring unknown tsa=%s option.\n", str); 1528 + 1529 + return 0; 1530 + } 1531 + early_param("tsa", tsa_parse_cmdline); 1532 + 1533 + static void __init tsa_select_mitigation(void) 1534 + { 1535 + if (cpu_mitigations_off() || !boot_cpu_has_bug(X86_BUG_TSA)) { 1536 + tsa_mitigation = TSA_MITIGATION_NONE; 1537 + return; 1538 + } 1539 + 1540 + if (tsa_mitigation == TSA_MITIGATION_NONE) 1541 + return; 1542 + 1543 + if (!boot_cpu_has(X86_FEATURE_VERW_CLEAR)) { 1544 + tsa_mitigation = TSA_MITIGATION_UCODE_NEEDED; 1545 + goto out; 1546 + } 1547 + 1548 + if (tsa_mitigation == TSA_MITIGATION_AUTO) 1549 + tsa_mitigation = TSA_MITIGATION_FULL; 1550 + 1551 + /* 1552 + * No need to set verw_clear_cpu_buf_mitigation_selected - it 1553 + * doesn't fit all cases here and it is not needed because this 1554 + * is the only VERW-based mitigation on AMD. 
1555 + */ 1556 + out: 1557 + pr_info("%s\n", tsa_strings[tsa_mitigation]); 1558 + } 1559 + 1560 + static void __init tsa_apply_mitigation(void) 1561 + { 1562 + switch (tsa_mitigation) { 1563 + case TSA_MITIGATION_USER_KERNEL: 1564 + setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF); 1565 + break; 1566 + case TSA_MITIGATION_VM: 1567 + setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM); 1568 + break; 1569 + case TSA_MITIGATION_FULL: 1570 + setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF); 1571 + setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM); 1572 + break; 1573 + default: 1574 + break; 1575 + } 1576 + } 1577 + 1578 + #undef pr_fmt 1495 1579 #define pr_fmt(fmt) "Spectre V2 : " fmt 1496 1580 1497 1581 static enum spectre_v2_user_mitigation spectre_v2_user_stibp __ro_after_init = ··· 2341 2249 return; 2342 2250 2343 2251 if (sched_smt_active()) { 2344 - static_branch_enable(&mds_idle_clear); 2252 + static_branch_enable(&cpu_buf_idle_clear); 2345 2253 } else if (mmio_mitigation == MMIO_MITIGATION_OFF || 2346 2254 (x86_arch_cap_msr & ARCH_CAP_FBSDP_NO)) { 2347 - static_branch_disable(&mds_idle_clear); 2255 + static_branch_disable(&cpu_buf_idle_clear); 2348 2256 } 2349 2257 } 2350 2258 ··· 2405 2313 pr_warn_once(MMIO_MSG_SMT); 2406 2314 break; 2407 2315 case MMIO_MITIGATION_OFF: 2316 + break; 2317 + } 2318 + 2319 + switch (tsa_mitigation) { 2320 + case TSA_MITIGATION_USER_KERNEL: 2321 + case TSA_MITIGATION_VM: 2322 + case TSA_MITIGATION_AUTO: 2323 + case TSA_MITIGATION_FULL: 2324 + /* 2325 + * TSA-SQ can potentially lead to info leakage between 2326 + * SMT threads. 
2327 + */ 2328 + if (sched_smt_active()) 2329 + static_branch_enable(&cpu_buf_idle_clear); 2330 + else 2331 + static_branch_disable(&cpu_buf_idle_clear); 2332 + break; 2333 + case TSA_MITIGATION_NONE: 2334 + case TSA_MITIGATION_UCODE_NEEDED: 2408 2335 break; 2409 2336 } 2410 2337 ··· 3376 3265 return sysfs_emit(buf, "%s\n", gds_strings[gds_mitigation]); 3377 3266 } 3378 3267 3268 + static ssize_t tsa_show_state(char *buf) 3269 + { 3270 + return sysfs_emit(buf, "%s\n", tsa_strings[tsa_mitigation]); 3271 + } 3272 + 3379 3273 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr, 3380 3274 char *buf, unsigned int bug) 3381 3275 { ··· 3443 3327 3444 3328 case X86_BUG_ITS: 3445 3329 return its_show_state(buf); 3330 + 3331 + case X86_BUG_TSA: 3332 + return tsa_show_state(buf); 3446 3333 3447 3334 default: 3448 3335 break; ··· 3532 3413 ssize_t cpu_show_indirect_target_selection(struct device *dev, struct device_attribute *attr, char *buf) 3533 3414 { 3534 3415 return cpu_show_common(dev, attr, buf, X86_BUG_ITS); 3416 + } 3417 + 3418 + ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf) 3419 + { 3420 + return cpu_show_common(dev, attr, buf, X86_BUG_TSA); 3535 3421 } 3536 3422 #endif 3537 3423
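The tsa= early parameter in the bugs.c hunk maps four keywords onto the mitigation enum via a strcmp() chain. A standalone sketch of the same mapping (enum names are shortened stand-ins for tsa_mitigations; the real tsa_parse_cmdline() mutates a global and returns an errno-style int instead):

```c
#include <assert.h>
#include <string.h>

enum tsa_mit { TSA_NONE, TSA_AUTO, TSA_UCODE_NEEDED,
	       TSA_USER_KERNEL, TSA_VM, TSA_FULL };

/* Map a "tsa=" argument to a mitigation mode; unknown strings keep the default */
static enum tsa_mit tsa_parse(const char *str, enum tsa_mit dflt)
{
	if (!str)
		return dflt;
	if (!strcmp(str, "off"))
		return TSA_NONE;
	if (!strcmp(str, "on"))
		return TSA_FULL;
	if (!strcmp(str, "user"))
		return TSA_USER_KERNEL;
	if (!strcmp(str, "vm"))
		return TSA_VM;
	return dflt; /* kernel logs "Ignoring unknown tsa=..." here */
}
```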
+13 -1
arch/x86/kernel/cpu/common.c
··· 1233 1233 #define ITS BIT(8) 1234 1234 /* CPU is affected by Indirect Target Selection, but guest-host isolation is not affected */ 1235 1235 #define ITS_NATIVE_ONLY BIT(9) 1236 + /* CPU is affected by Transient Scheduler Attacks */ 1237 + #define TSA BIT(10) 1236 1238 1237 1239 static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { 1238 1240 VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE, X86_STEP_MAX, SRBDS), ··· 1282 1280 VULNBL_AMD(0x16, RETBLEED), 1283 1281 VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO), 1284 1282 VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO), 1285 - VULNBL_AMD(0x19, SRSO), 1283 + VULNBL_AMD(0x19, SRSO | TSA), 1286 1284 VULNBL_AMD(0x1a, SRSO), 1287 1285 {} 1288 1286 }; ··· 1530 1528 setup_force_cpu_bug(X86_BUG_ITS); 1531 1529 if (cpu_matches(cpu_vuln_blacklist, ITS_NATIVE_ONLY)) 1532 1530 setup_force_cpu_bug(X86_BUG_ITS_NATIVE_ONLY); 1531 + } 1532 + 1533 + if (c->x86_vendor == X86_VENDOR_AMD) { 1534 + if (!cpu_has(c, X86_FEATURE_TSA_SQ_NO) || 1535 + !cpu_has(c, X86_FEATURE_TSA_L1_NO)) { 1536 + if (cpu_matches(cpu_vuln_blacklist, TSA) || 1537 + /* Enable bug on Zen guests to allow for live migration. */ 1538 + (cpu_has(c, X86_FEATURE_HYPERVISOR) && cpu_has(c, X86_FEATURE_ZEN))) 1539 + setup_force_cpu_bug(X86_BUG_TSA); 1540 + } 1533 1541 } 1534 1542 1535 1543 if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
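The common.c hunk sets X86_BUG_TSA only when neither synthetic "not vulnerable" bit is present, and then only for blacklisted parts or Zen guests. The decision reduces to boolean logic, sketched here with plain flags standing in for the cpu_has()/cpu_matches() checks (a simplification for illustration only):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Mirror of the X86_BUG_TSA decision: hardware opt-outs win; otherwise
 * the bug is set for blacklisted family-0x19 parts, or for any Zen guest
 * (enabled under a hypervisor to keep live migration safe).
 */
static bool tsa_bug_set(bool tsa_sq_no, bool tsa_l1_no,
			bool blacklisted, bool hypervisor, bool zen)
{
	if (tsa_sq_no && tsa_l1_no)
		return false;	/* hardware says not affected */
	return blacklisted || (hypervisor && zen);
}
```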
+17 -11
arch/x86/kernel/cpu/mce/amd.c
··· 350 350 351 351 struct thresh_restart { 352 352 struct threshold_block *b; 353 - int reset; 354 353 int set_lvt_off; 355 354 int lvt_off; 356 355 u16 old_limit; ··· 431 432 432 433 rdmsr(tr->b->address, lo, hi); 433 434 434 - if (tr->b->threshold_limit < (hi & THRESHOLD_MAX)) 435 - tr->reset = 1; /* limit cannot be lower than err count */ 436 - 437 - if (tr->reset) { /* reset err count and overflow bit */ 438 - hi = 439 - (hi & ~(MASK_ERR_COUNT_HI | MASK_OVERFLOW_HI)) | 440 - (THRESHOLD_MAX - tr->b->threshold_limit); 435 + /* 436 + * Reset error count and overflow bit. 437 + * This is done during init or after handling an interrupt. 438 + */ 439 + if (hi & MASK_OVERFLOW_HI || tr->set_lvt_off) { 440 + hi &= ~(MASK_ERR_COUNT_HI | MASK_OVERFLOW_HI); 441 + hi |= THRESHOLD_MAX - tr->b->threshold_limit; 441 442 } else if (tr->old_limit) { /* change limit w/o reset */ 442 443 int new_count = (hi & THRESHOLD_MAX) + 443 444 (tr->old_limit - tr->b->threshold_limit); ··· 1112 1113 } 1113 1114 1114 1115 bank_type = smca_get_bank_type(cpu, bank); 1115 - if (bank_type >= N_SMCA_BANK_TYPES) 1116 - return NULL; 1117 1116 1118 1117 if (b && (bank_type == SMCA_UMC || bank_type == SMCA_UMC_V2)) { 1119 1118 if (b->block < ARRAY_SIZE(smca_umc_block_names)) 1120 1119 return smca_umc_block_names[b->block]; 1121 - return NULL; 1120 + } 1121 + 1122 + if (b && b->block) { 1123 + snprintf(buf_mcatype, MAX_MCATYPE_NAME_LEN, "th_block_%u", b->block); 1124 + return buf_mcatype; 1125 + } 1126 + 1127 + if (bank_type >= N_SMCA_BANK_TYPES) { 1128 + snprintf(buf_mcatype, MAX_MCATYPE_NAME_LEN, "th_bank_%u", bank); 1129 + return buf_mcatype; 1122 1130 } 1123 1131 1124 1132 if (per_cpu(smca_bank_counts, cpu)[bank_type] == 1)
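The reset path in threshold_restart_bank() seeds the counter with THRESHOLD_MAX - limit so that the bank overflows after exactly `limit` more errors. A standalone sketch of that register arithmetic (the mask and max constants mirror the definitions in mce/amd.c, under the assumption they are unchanged by this merge):

```c
#include <assert.h>
#include <stdint.h>

#define THRESHOLD_MAX      0xFFFu
#define MASK_ERR_COUNT_HI  0x00000FFFu	/* error counter bits */
#define MASK_OVERFLOW_HI   0x00010000u	/* overflow status bit */

/*
 * Clear the count and overflow bits of the high MSR half, then preload
 * THRESHOLD_MAX - limit; other bits of the register are preserved.
 */
static uint32_t thresh_reset(uint32_t hi, uint16_t limit)
{
	hi &= ~(MASK_ERR_COUNT_HI | MASK_OVERFLOW_HI);
	hi |= THRESHOLD_MAX - limit;
	return hi;
}
```

For example, a limit of 10 preloads the counter with 0xFF5, and any previously set overflow bit is cleared.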
+11 -13
arch/x86/kernel/cpu/mce/core.c
··· 1740 1740 1741 1741 void (*mc_poll_banks)(void) = mc_poll_banks_default; 1742 1742 1743 + static bool should_enable_timer(unsigned long iv) 1744 + { 1745 + return !mca_cfg.ignore_ce && iv; 1746 + } 1747 + 1743 1748 static void mce_timer_fn(struct timer_list *t) 1744 1749 { 1745 1750 struct timer_list *cpu_t = this_cpu_ptr(&mce_timer); ··· 1768 1763 1769 1764 if (mce_get_storm_mode()) { 1770 1765 __start_timer(t, HZ); 1771 - } else { 1766 + } else if (should_enable_timer(iv)) { 1772 1767 __this_cpu_write(mce_next_interval, iv); 1773 1768 __start_timer(t, iv); 1774 1769 } ··· 2161 2156 { 2162 2157 unsigned long iv = check_interval * HZ; 2163 2158 2164 - if (mca_cfg.ignore_ce || !iv) 2165 - return; 2166 - 2167 - this_cpu_write(mce_next_interval, iv); 2168 - __start_timer(t, iv); 2159 + if (should_enable_timer(iv)) { 2160 + this_cpu_write(mce_next_interval, iv); 2161 + __start_timer(t, iv); 2162 + } 2169 2163 } 2170 2164 2171 2165 static void __mcheck_cpu_setup_timer(void) ··· 2805 2801 static int mce_cpu_online(unsigned int cpu) 2806 2802 { 2807 2803 struct timer_list *t = this_cpu_ptr(&mce_timer); 2808 - int ret; 2809 2804 2810 2805 mce_device_create(cpu); 2811 - 2812 - ret = mce_threshold_create_device(cpu); 2813 - if (ret) { 2814 - mce_device_remove(cpu); 2815 - return ret; 2816 - } 2806 + mce_threshold_create_device(cpu); 2817 2807 mce_reenable_cpu(); 2818 2808 mce_start_timer(t); 2819 2809 return 0;
+1
arch/x86/kernel/cpu/mce/intel.c
··· 478 478 void mce_intel_feature_clear(struct cpuinfo_x86 *c) 479 479 { 480 480 intel_clear_lmce(); 481 + cmci_clear(); 481 482 } 482 483 483 484 bool intel_filter_mce(struct mce *m)
+112
arch/x86/kernel/cpu/microcode/amd_shas.c
··· 231 231 0x0d,0x5b,0x65,0x34,0x69,0xb2,0x62,0x21, 232 232 } 233 233 }, 234 + { 0xa0011d7, { 235 + 0x35,0x07,0xcd,0x40,0x94,0xbc,0x81,0x6b, 236 + 0xfc,0x61,0x56,0x1a,0xe2,0xdb,0x96,0x12, 237 + 0x1c,0x1c,0x31,0xb1,0x02,0x6f,0xe5,0xd2, 238 + 0xfe,0x1b,0x04,0x03,0x2c,0x8f,0x4c,0x36, 239 + } 240 + }, 234 241 { 0xa001223, { 235 242 0xfb,0x32,0x5f,0xc6,0x83,0x4f,0x8c,0xb8, 236 243 0xa4,0x05,0xf9,0x71,0x53,0x01,0x16,0xc4, ··· 301 294 0xc0,0xcd,0x33,0xf2,0x8d,0xf9,0xef,0x59, 302 295 } 303 296 }, 297 + { 0xa00123b, { 298 + 0xef,0xa1,0x1e,0x71,0xf1,0xc3,0x2c,0xe2, 299 + 0xc3,0xef,0x69,0x41,0x7a,0x54,0xca,0xc3, 300 + 0x8f,0x62,0x84,0xee,0xc2,0x39,0xd9,0x28, 301 + 0x95,0xa7,0x12,0x49,0x1e,0x30,0x71,0x72, 302 + } 303 + }, 304 304 { 0xa00820c, { 305 305 0xa8,0x0c,0x81,0xc0,0xa6,0x00,0xe7,0xf3, 306 306 0x5f,0x65,0xd3,0xb9,0x6f,0xea,0x93,0x63, 307 307 0xf1,0x8c,0x88,0x45,0xd7,0x82,0x80,0xd1, 308 308 0xe1,0x3b,0x8d,0xb2,0xf8,0x22,0x03,0xe2, 309 + } 310 + }, 311 + { 0xa00820d, { 312 + 0xf9,0x2a,0xc0,0xf4,0x9e,0xa4,0x87,0xa4, 313 + 0x7d,0x87,0x00,0xfd,0xab,0xda,0x19,0xca, 314 + 0x26,0x51,0x32,0xc1,0x57,0x91,0xdf,0xc1, 315 + 0x05,0xeb,0x01,0x7c,0x5a,0x95,0x21,0xb7, 309 316 } 310 317 }, 311 318 { 0xa10113e, { ··· 343 322 0xf1,0x5e,0xb0,0xde,0xb4,0x98,0xae,0xc4, 344 323 } 345 324 }, 325 + { 0xa10114c, { 326 + 0x9e,0xb6,0xa2,0xd9,0x87,0x38,0xc5,0x64, 327 + 0xd8,0x88,0xfa,0x78,0x98,0xf9,0x6f,0x74, 328 + 0x39,0x90,0x1b,0xa5,0xcf,0x5e,0xb4,0x2a, 329 + 0x02,0xff,0xd4,0x8c,0x71,0x8b,0xe2,0xc0, 330 + } 331 + }, 346 332 { 0xa10123e, { 347 333 0x03,0xb9,0x2c,0x76,0x48,0x93,0xc9,0x18, 348 334 0xfb,0x56,0xfd,0xf7,0xe2,0x1d,0xca,0x4d, ··· 371 343 0x1b,0x7d,0x64,0x9d,0x4b,0x53,0x13,0x75, 372 344 } 373 345 }, 346 + { 0xa10124c, { 347 + 0x29,0xea,0xf1,0x2c,0xb2,0xe4,0xef,0x90, 348 + 0xa4,0xcd,0x1d,0x86,0x97,0x17,0x61,0x46, 349 + 0xfc,0x22,0xcb,0x57,0x75,0x19,0xc8,0xcc, 350 + 0x0c,0xf5,0xbc,0xac,0x81,0x9d,0x9a,0xd2, 351 + } 352 + }, 374 353 { 0xa108108, { 375 354 
0xed,0xc2,0xec,0xa1,0x15,0xc6,0x65,0xe9, 376 355 0xd0,0xef,0x39,0xaa,0x7f,0x55,0x06,0xc6, 377 356 0xf5,0xd4,0x3f,0x7b,0x14,0xd5,0x60,0x2c, 378 357 0x28,0x1e,0x9c,0x59,0x69,0x99,0x4d,0x16, 358 + } 359 + }, 360 + { 0xa108109, { 361 + 0x85,0xb4,0xbd,0x7c,0x49,0xa7,0xbd,0xfa, 362 + 0x49,0x36,0x80,0x81,0xc5,0xb7,0x39,0x1b, 363 + 0x9a,0xaa,0x50,0xde,0x9b,0xe9,0x32,0x35, 364 + 0x42,0x7e,0x51,0x4f,0x52,0x2c,0x28,0x59, 379 365 } 380 366 }, 381 367 { 0xa20102d, { ··· 399 357 0x8c,0xe9,0x19,0x3e,0xcc,0x3f,0x7b,0xb4, 400 358 } 401 359 }, 360 + { 0xa20102e, { 361 + 0xbe,0x1f,0x32,0x04,0x0d,0x3c,0x9c,0xdd, 362 + 0xe1,0xa4,0xbf,0x76,0x3a,0xec,0xc2,0xf6, 363 + 0x11,0x00,0xa7,0xaf,0x0f,0xe5,0x02,0xc5, 364 + 0x54,0x3a,0x1f,0x8c,0x16,0xb5,0xff,0xbe, 365 + } 366 + }, 402 367 { 0xa201210, { 403 368 0xe8,0x6d,0x51,0x6a,0x8e,0x72,0xf3,0xfe, 404 369 0x6e,0x16,0xbc,0x62,0x59,0x40,0x17,0xe9, 405 370 0x6d,0x3d,0x0e,0x6b,0xa7,0xac,0xe3,0x68, 406 371 0xf7,0x55,0xf0,0x13,0xbb,0x22,0xf6,0x41, 372 + } 373 + }, 374 + { 0xa201211, { 375 + 0x69,0xa1,0x17,0xec,0xd0,0xf6,0x6c,0x95, 376 + 0xe2,0x1e,0xc5,0x59,0x1a,0x52,0x0a,0x27, 377 + 0xc4,0xed,0xd5,0x59,0x1f,0xbf,0x00,0xff, 378 + 0x08,0x88,0xb5,0xe1,0x12,0xb6,0xcc,0x27, 407 379 } 408 380 }, 409 381 { 0xa404107, { ··· 427 371 0x13,0xbc,0xc5,0x25,0xe4,0xc5,0xc3,0x99, 428 372 } 429 373 }, 374 + { 0xa404108, { 375 + 0x69,0x67,0x43,0x06,0xf8,0x0c,0x62,0xdc, 376 + 0xa4,0x21,0x30,0x4f,0x0f,0x21,0x2c,0xcb, 377 + 0xcc,0x37,0xf1,0x1c,0xc3,0xf8,0x2f,0x19, 378 + 0xdf,0x53,0x53,0x46,0xb1,0x15,0xea,0x00, 379 + } 380 + }, 430 381 { 0xa500011, { 431 382 0x23,0x3d,0x70,0x7d,0x03,0xc3,0xc4,0xf4, 432 383 0x2b,0x82,0xc6,0x05,0xda,0x80,0x0a,0xf1, 433 384 0xd7,0x5b,0x65,0x3a,0x7d,0xab,0xdf,0xa2, 434 385 0x11,0x5e,0x96,0x7e,0x71,0xe9,0xfc,0x74, 386 + } 387 + }, 388 + { 0xa500012, { 389 + 0xeb,0x74,0x0d,0x47,0xa1,0x8e,0x09,0xe4, 390 + 0x93,0x4c,0xad,0x03,0x32,0x4c,0x38,0x16, 391 + 0x10,0x39,0xdd,0x06,0xaa,0xce,0xd6,0x0f, 392 + 0x62,0x83,0x9d,0x8e,0x64,0x55,0xbe,0x63, 435 393 
} 436 394 }, 437 395 { 0xa601209, { ··· 455 385 0xe8,0x73,0xe2,0xd6,0xdb,0xd2,0x77,0x1d, 456 386 } 457 387 }, 388 + { 0xa60120a, { 389 + 0x0c,0x8b,0x3d,0xfd,0x52,0x52,0x85,0x7d, 390 + 0x20,0x3a,0xe1,0x7e,0xa4,0x21,0x3b,0x7b, 391 + 0x17,0x86,0xae,0xac,0x13,0xb8,0x63,0x9d, 392 + 0x06,0x01,0xd0,0xa0,0x51,0x9a,0x91,0x2c, 393 + } 394 + }, 458 395 { 0xa704107, { 459 396 0xf3,0xc6,0x58,0x26,0xee,0xac,0x3f,0xd6, 460 397 0xce,0xa1,0x72,0x47,0x3b,0xba,0x2b,0x93, 461 398 0x2a,0xad,0x8e,0x6b,0xea,0x9b,0xb7,0xc2, 462 399 0x64,0x39,0x71,0x8c,0xce,0xe7,0x41,0x39, 400 + } 401 + }, 402 + { 0xa704108, { 403 + 0xd7,0x55,0x15,0x2b,0xfe,0xc4,0xbc,0x93, 404 + 0xec,0x91,0xa0,0xae,0x45,0xb7,0xc3,0x98, 405 + 0x4e,0xff,0x61,0x77,0x88,0xc2,0x70,0x49, 406 + 0xe0,0x3a,0x1d,0x84,0x38,0x52,0xbf,0x5a, 463 407 } 464 408 }, 465 409 { 0xa705206, { ··· 483 399 0x03,0x35,0xe9,0xbe,0xfb,0x06,0xdf,0xfc, 484 400 } 485 401 }, 402 + { 0xa705208, { 403 + 0x30,0x1d,0x55,0x24,0xbc,0x6b,0x5a,0x19, 404 + 0x0c,0x7d,0x1d,0x74,0xaa,0xd1,0xeb,0xd2, 405 + 0x16,0x62,0xf7,0x5b,0xe1,0x1f,0x18,0x11, 406 + 0x5c,0xf0,0x94,0x90,0x26,0xec,0x69,0xff, 407 + } 408 + }, 486 409 { 0xa708007, { 487 410 0x6b,0x76,0xcc,0x78,0xc5,0x8a,0xa3,0xe3, 488 411 0x32,0x2d,0x79,0xe4,0xc3,0x80,0xdb,0xb2, ··· 497 406 0xdf,0x92,0x73,0x84,0x87,0x3c,0x73,0x93, 498 407 } 499 408 }, 409 + { 0xa708008, { 410 + 0x08,0x6e,0xf0,0x22,0x4b,0x8e,0xc4,0x46, 411 + 0x58,0x34,0xe6,0x47,0xa2,0x28,0xfd,0xab, 412 + 0x22,0x3d,0xdd,0xd8,0x52,0x9e,0x1d,0x16, 413 + 0xfa,0x01,0x68,0x14,0x79,0x3e,0xe8,0x6b, 414 + } 415 + }, 500 416 { 0xa70c005, { 501 417 0x88,0x5d,0xfb,0x79,0x64,0xd8,0x46,0x3b, 502 418 0x4a,0x83,0x8e,0x77,0x7e,0xcf,0xb3,0x0f, 503 419 0x1f,0x1f,0xf1,0x97,0xeb,0xfe,0x56,0x55, 504 420 0xee,0x49,0xac,0xe1,0x8b,0x13,0xc5,0x13, 421 + } 422 + }, 423 + { 0xa70c008, { 424 + 0x0f,0xdb,0x37,0xa1,0x10,0xaf,0xd4,0x21, 425 + 0x94,0x0d,0xa4,0xa2,0xe9,0x86,0x6c,0x0e, 426 + 0x85,0x7c,0x36,0x30,0xa3,0x3a,0x78,0x66, 427 + 0x18,0x10,0x60,0x0d,0x78,0x3d,0x44,0xd0, 505 428 
} 506 429 }, 507 430 { 0xaa00116, { ··· 544 439 0x4e,0x85,0x4b,0x7c,0x6b,0xd5,0x7c,0xd4, 545 440 0x1b,0x51,0x71,0x3a,0x0e,0x0b,0xdc,0x9b, 546 441 0x68,0x2f,0x46,0xee,0xfe,0xc6,0x6d,0xef, 442 + } 443 + }, 444 + { 0xaa00216, { 445 + 0x79,0xfb,0x5b,0x9f,0xb6,0xe6,0xa8,0xf5, 446 + 0x4e,0x7c,0x4f,0x8e,0x1d,0xad,0xd0,0x08, 447 + 0xc2,0x43,0x7c,0x8b,0xe6,0xdb,0xd0,0xd2, 448 + 0xe8,0x39,0x26,0xc1,0xe5,0x5a,0x48,0xf1, 547 449 } 548 450 }, 549 451 };
+2
arch/x86/kernel/cpu/scattered.c
··· 50 50 { X86_FEATURE_MBA, CPUID_EBX, 6, 0x80000008, 0 }, 51 51 { X86_FEATURE_SMBA, CPUID_EBX, 2, 0x80000020, 0 }, 52 52 { X86_FEATURE_BMEC, CPUID_EBX, 3, 0x80000020, 0 }, 53 + { X86_FEATURE_TSA_SQ_NO, CPUID_ECX, 1, 0x80000021, 0 }, 54 + { X86_FEATURE_TSA_L1_NO, CPUID_ECX, 2, 0x80000021, 0 }, 53 55 { X86_FEATURE_AMD_WORKLOAD_CLASS, CPUID_EAX, 22, 0x80000021, 0 }, 54 56 { X86_FEATURE_PERFMON_V2, CPUID_EAX, 0, 0x80000022, 0 }, 55 57 { X86_FEATURE_AMD_LBR_V2, CPUID_EAX, 1, 0x80000022, 0 },
+12 -4
arch/x86/kernel/process.c
··· 907 907 */ 908 908 static __cpuidle void mwait_idle(void) 909 909 { 910 + if (need_resched()) 911 + return; 912 + 913 + x86_idle_clear_cpu_buffers(); 914 + 910 915 if (!current_set_polling_and_test()) { 911 916 const void *addr = &current_thread_info()->flags; 912 917 913 918 alternative_input("", "clflush (%[addr])", X86_BUG_CLFLUSH_MONITOR, [addr] "a" (addr)); 914 919 __monitor(addr, 0, 0); 915 - if (!need_resched()) { 916 - __sti_mwait(0, 0); 917 - raw_local_irq_disable(); 918 - } 920 + if (need_resched()) 921 + goto out; 922 + 923 + __sti_mwait(0, 0); 924 + raw_local_irq_disable(); 919 925 } 926 + 927 + out: 920 928 __current_clr_polling(); 921 929 } 922 930
+9 -1
arch/x86/kvm/cpuid.c
··· 1165 1165 */ 1166 1166 SYNTHESIZED_F(LFENCE_RDTSC), 1167 1167 /* SmmPgCfgLock */ 1168 + /* 4: Resv */ 1169 + SYNTHESIZED_F(VERW_CLEAR), 1168 1170 F(NULL_SEL_CLR_BASE), 1169 1171 /* UpperAddressIgnore */ 1170 1172 F(AUTOIBRS), ··· 1179 1177 SYNTHESIZED_F(IBPB_BRTYPE), 1180 1178 SYNTHESIZED_F(SRSO_NO), 1181 1179 F(SRSO_USER_KERNEL_NO), 1180 + ); 1181 + 1182 + kvm_cpu_cap_init(CPUID_8000_0021_ECX, 1183 + SYNTHESIZED_F(TSA_SQ_NO), 1184 + SYNTHESIZED_F(TSA_L1_NO), 1182 1185 ); 1183 1186 1184 1187 kvm_cpu_cap_init(CPUID_8000_0022_EAX, ··· 1755 1748 entry->eax = entry->ebx = entry->ecx = entry->edx = 0; 1756 1749 break; 1757 1750 case 0x80000021: 1758 - entry->ebx = entry->ecx = entry->edx = 0; 1751 + entry->ebx = entry->edx = 0; 1759 1752 cpuid_entry_override(entry, CPUID_8000_0021_EAX); 1753 + cpuid_entry_override(entry, CPUID_8000_0021_ECX); 1760 1754 break; 1761 1755 /* AMD Extended Performance Monitoring and Debug */ 1762 1756 case 0x80000022: {
+4 -1
arch/x86/kvm/hyperv.c
··· 1979 1979 if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY) 1980 1980 goto out_flush_all; 1981 1981 1982 + if (is_noncanonical_invlpg_address(entries[i], vcpu)) 1983 + continue; 1984 + 1982 1985 /* 1983 1986 * Lower 12 bits of 'address' encode the number of additional 1984 1987 * pages to flush. ··· 2004 2001 static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc) 2005 2002 { 2006 2003 struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu); 2004 + unsigned long *vcpu_mask = hv_vcpu->vcpu_mask; 2007 2005 u64 *sparse_banks = hv_vcpu->sparse_banks; 2008 2006 struct kvm *kvm = vcpu->kvm; 2009 2007 struct hv_tlb_flush_ex flush_ex; 2010 2008 struct hv_tlb_flush flush; 2011 - DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS); 2012 2009 struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo; 2013 2010 /* 2014 2011 * Normally, there can be no more than 'KVM_HV_TLB_FLUSH_FIFO_SIZE'
+7
arch/x86/kvm/reverse_cpuid.h
··· 52 52 /* CPUID level 0x80000022 (EAX) */ 53 53 #define KVM_X86_FEATURE_PERFMON_V2 KVM_X86_FEATURE(CPUID_8000_0022_EAX, 0) 54 54 55 + /* CPUID level 0x80000021 (ECX) */ 56 + #define KVM_X86_FEATURE_TSA_SQ_NO KVM_X86_FEATURE(CPUID_8000_0021_ECX, 1) 57 + #define KVM_X86_FEATURE_TSA_L1_NO KVM_X86_FEATURE(CPUID_8000_0021_ECX, 2) 58 + 55 59 struct cpuid_reg { 56 60 u32 function; 57 61 u32 index; ··· 86 82 [CPUID_8000_0022_EAX] = {0x80000022, 0, CPUID_EAX}, 87 83 [CPUID_7_2_EDX] = { 7, 2, CPUID_EDX}, 88 84 [CPUID_24_0_EBX] = { 0x24, 0, CPUID_EBX}, 85 + [CPUID_8000_0021_ECX] = {0x80000021, 0, CPUID_ECX}, 89 86 }; 90 87 91 88 /* ··· 126 121 KVM_X86_TRANSLATE_FEATURE(PERFMON_V2); 127 122 KVM_X86_TRANSLATE_FEATURE(RRSBA_CTRL); 128 123 KVM_X86_TRANSLATE_FEATURE(BHI_CTRL); 124 + KVM_X86_TRANSLATE_FEATURE(TSA_SQ_NO); 125 + KVM_X86_TRANSLATE_FEATURE(TSA_L1_NO); 129 126 default: 130 127 return x86_feature; 131 128 }
+10 -2
arch/x86/kvm/svm/sev.c
··· 1971 1971 struct kvm_vcpu *src_vcpu; 1972 1972 unsigned long i; 1973 1973 1974 + if (src->created_vcpus != atomic_read(&src->online_vcpus) || 1975 + dst->created_vcpus != atomic_read(&dst->online_vcpus)) 1976 + return -EBUSY; 1977 + 1974 1978 if (!sev_es_guest(src)) 1975 1979 return 0; 1976 1980 ··· 4449 4445 * the VMSA will be NULL if this vCPU is the destination for intrahost 4450 4446 * migration, and will be copied later. 4451 4447 */ 4452 - if (svm->sev_es.vmsa && !svm->sev_es.snp_has_guest_vmsa) 4453 - svm->vmcb->control.vmsa_pa = __pa(svm->sev_es.vmsa); 4448 + if (!svm->sev_es.snp_has_guest_vmsa) { 4449 + if (svm->sev_es.vmsa) 4450 + svm->vmcb->control.vmsa_pa = __pa(svm->sev_es.vmsa); 4451 + else 4452 + svm->vmcb->control.vmsa_pa = INVALID_PAGE; 4453 + } 4454 4454 4455 4455 if (cpu_feature_enabled(X86_FEATURE_ALLOWED_SEV_FEATURES)) 4456 4456 svm->vmcb->control.allowed_sev_features = sev->vmsa_features |
+6
arch/x86/kvm/svm/vmenter.S
··· 169 169 #endif 170 170 mov VCPU_RDI(%_ASM_DI), %_ASM_DI 171 171 172 + /* Clobbers EFLAGS.ZF */ 173 + VM_CLEAR_CPU_BUFFERS 174 + 172 175 /* Enter guest mode */ 173 176 3: vmrun %_ASM_AX 174 177 4: ··· 337 334 /* Get svm->current_vmcb->pa into RAX. */ 338 335 mov SVM_current_vmcb(%rdi), %rax 339 336 mov KVM_VMCB_pa(%rax), %rax 337 + 338 + /* Clobbers EFLAGS.ZF */ 339 + VM_CLEAR_CPU_BUFFERS 340 340 341 341 /* Enter guest mode */ 342 342 1: vmrun %rax
+30
arch/x86/kvm/vmx/tdx.c
··· 173 173 tdx_clear_unsupported_cpuid(entry); 174 174 } 175 175 176 + #define TDVMCALLINFO_GET_QUOTE BIT(0) 177 + #define TDVMCALLINFO_SETUP_EVENT_NOTIFY_INTERRUPT BIT(1) 178 + 176 179 static int init_kvm_tdx_caps(const struct tdx_sys_info_td_conf *td_conf, 177 180 struct kvm_tdx_capabilities *caps) 178 181 { ··· 190 187 return -EIO; 191 188 192 189 caps->cpuid.nent = td_conf->num_cpuid_config; 190 + 191 + caps->user_tdvmcallinfo_1_r11 = 192 + TDVMCALLINFO_GET_QUOTE | 193 + TDVMCALLINFO_SETUP_EVENT_NOTIFY_INTERRUPT; 193 194 194 195 for (i = 0; i < td_conf->num_cpuid_config; i++) 195 196 td_init_cpuid_entry2(&caps->cpuid.entries[i], i); ··· 1537 1530 return 0; 1538 1531 } 1539 1532 1533 + static int tdx_setup_event_notify_interrupt(struct kvm_vcpu *vcpu) 1534 + { 1535 + struct vcpu_tdx *tdx = to_tdx(vcpu); 1536 + u64 vector = tdx->vp_enter_args.r12; 1537 + 1538 + if (vector < 32 || vector > 255) { 1539 + tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND); 1540 + return 1; 1541 + } 1542 + 1543 + vcpu->run->exit_reason = KVM_EXIT_TDX; 1544 + vcpu->run->tdx.flags = 0; 1545 + vcpu->run->tdx.nr = TDVMCALL_SETUP_EVENT_NOTIFY_INTERRUPT; 1546 + vcpu->run->tdx.setup_event_notify.ret = TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED; 1547 + vcpu->run->tdx.setup_event_notify.vector = vector; 1548 + 1549 + vcpu->arch.complete_userspace_io = tdx_complete_simple; 1550 + 1551 + return 0; 1552 + } 1553 + 1540 1554 static int handle_tdvmcall(struct kvm_vcpu *vcpu) 1541 1555 { 1542 1556 switch (tdvmcall_leaf(vcpu)) { ··· 1569 1541 return tdx_get_td_vm_call_info(vcpu); 1570 1542 case TDVMCALL_GET_QUOTE: 1571 1543 return tdx_get_quote(vcpu); 1544 + case TDVMCALL_SETUP_EVENT_NOTIFY_INTERRUPT: 1545 + return tdx_setup_event_notify_interrupt(vcpu); 1572 1546 default: 1573 1547 break; 1574 1548 }
+1 -1
arch/x86/kvm/vmx/vmx.c
··· 7291 7291 vmx_l1d_flush(vcpu); 7292 7292 else if (static_branch_unlikely(&cpu_buf_vm_clear) && 7293 7293 kvm_arch_has_assigned_device(vcpu->kvm)) 7294 - mds_clear_cpu_buffers(); 7294 + x86_clear_cpu_buffers(); 7295 7295 7296 7296 vmx_disable_fb_clear(vmx); 7297 7297
+3 -1
arch/x86/kvm/x86.c
··· 3258 3258 3259 3259 /* With all the info we got, fill in the values */ 3260 3260 3261 - if (kvm_caps.has_tsc_control) 3261 + if (kvm_caps.has_tsc_control) { 3262 3262 tgt_tsc_khz = kvm_scale_tsc(tgt_tsc_khz, 3263 3263 v->arch.l1_tsc_scaling_ratio); 3264 + tgt_tsc_khz = tgt_tsc_khz ? : 1; 3265 + } 3264 3266 3265 3267 if (unlikely(vcpu->hw_tsc_khz != tgt_tsc_khz)) { 3266 3268 kvm_get_time_scale(NSEC_PER_SEC, tgt_tsc_khz * 1000LL,
+13 -2
arch/x86/kvm/xen.c
··· 1971 1971 { 1972 1972 struct kvm_vcpu *vcpu; 1973 1973 1974 - if (ue->u.xen_evtchn.port >= max_evtchn_port(kvm)) 1975 - return -EINVAL; 1974 + /* 1975 + * Don't check for the port being within range of max_evtchn_port(). 1976 + * Userspace can configure whatever targets it likes; events just won't 1977 + * be delivered if/while the target is invalid, just like userspace can 1978 + * configure MSIs which target non-existent APICs. 1979 + * 1980 + * This allows, on Live Migration and Live Update, the IRQ routing table 1981 + * to be restored *independently* of other things like creating vCPUs, 1982 + * without imposing an ordering dependency on userspace. In this 1983 + * particular case, the problematic ordering would be with setting the 1984 + * Xen 'long mode' flag, which changes max_evtchn_port() to allow 4096 1985 + * instead of 1024 event channels. 1986 + */ 1976 1987 1977 1988 /* We only support 2 level event channels for now */ 1978 1989 if (ue->u.xen_evtchn.priority != KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL)
+3 -16
drivers/acpi/battery.c
··· 243 243 break; 244 244 case POWER_SUPPLY_PROP_CURRENT_NOW: 245 245 case POWER_SUPPLY_PROP_POWER_NOW: 246 - if (battery->rate_now == ACPI_BATTERY_VALUE_UNKNOWN) { 246 + if (battery->rate_now == ACPI_BATTERY_VALUE_UNKNOWN) 247 247 ret = -ENODEV; 248 - break; 249 - } 250 - 251 - val->intval = battery->rate_now * 1000; 252 - /* 253 - * When discharging, the current should be reported as a 254 - * negative number as per the power supply class interface 255 - * definition. 256 - */ 257 - if (psp == POWER_SUPPLY_PROP_CURRENT_NOW && 258 - (battery->state & ACPI_BATTERY_STATE_DISCHARGING) && 259 - acpi_battery_handle_discharging(battery) 260 - == POWER_SUPPLY_STATUS_DISCHARGING) 261 - val->intval = -val->intval; 262 - 248 + else 249 + val->intval = battery->rate_now * 1000; 263 250 break; 264 251 case POWER_SUPPLY_PROP_CHARGE_FULL_DESIGN: 265 252 case POWER_SUPPLY_PROP_ENERGY_FULL_DESIGN:
+3
drivers/base/cpu.c
··· 602 602 CPU_SHOW_VULN_FALLBACK(ghostwrite); 603 603 CPU_SHOW_VULN_FALLBACK(old_microcode); 604 604 CPU_SHOW_VULN_FALLBACK(indirect_target_selection); 605 + CPU_SHOW_VULN_FALLBACK(tsa); 605 606 606 607 static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL); 607 608 static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL); ··· 621 620 static DEVICE_ATTR(ghostwrite, 0444, cpu_show_ghostwrite, NULL); 622 621 static DEVICE_ATTR(old_microcode, 0444, cpu_show_old_microcode, NULL); 623 622 static DEVICE_ATTR(indirect_target_selection, 0444, cpu_show_indirect_target_selection, NULL); 623 + static DEVICE_ATTR(tsa, 0444, cpu_show_tsa, NULL); 624 624 625 625 static struct attribute *cpu_root_vulnerabilities_attrs[] = { 626 626 &dev_attr_meltdown.attr, ··· 641 639 &dev_attr_ghostwrite.attr, 642 640 &dev_attr_old_microcode.attr, 643 641 &dev_attr_indirect_target_selection.attr, 642 + &dev_attr_tsa.attr, 644 643 NULL 645 644 }; 646 645
+4 -1
drivers/base/power/main.c
··· 1236 1236 */ 1237 1237 void dpm_resume_end(pm_message_t state) 1238 1238 { 1239 + pm_restore_gfp_mask(); 1239 1240 dpm_resume(state); 1240 1241 dpm_complete(state); 1241 1242 } ··· 2177 2176 error = dpm_prepare(state); 2178 2177 if (error) 2179 2178 dpm_save_failed_step(SUSPEND_PREPARE); 2180 - else 2179 + else { 2180 + pm_restrict_gfp_mask(); 2181 2181 error = dpm_suspend(state); 2182 + } 2182 2183 2183 2184 dpm_show_time(starttime, state, error, "start"); 2184 2185 return error;
+4 -2
drivers/block/brd.c
··· 64 64 65 65 rcu_read_unlock(); 66 66 page = alloc_page(gfp | __GFP_ZERO | __GFP_HIGHMEM); 67 - rcu_read_lock(); 68 - if (!page) 67 + if (!page) { 68 + rcu_read_lock(); 69 69 return ERR_PTR(-ENOMEM); 70 + } 70 71 71 72 xa_lock(&brd->brd_pages); 72 73 ret = __xa_cmpxchg(&brd->brd_pages, sector >> PAGE_SECTORS_SHIFT, NULL, 73 74 page, gfp); 75 + rcu_read_lock(); 74 76 if (ret) { 75 77 xa_unlock(&brd->brd_pages); 76 78 __free_page(page);
+6 -5
drivers/block/ublk_drv.c
··· 1442 1442 struct ublk_queue *this_q = req->mq_hctx->driver_data; 1443 1443 struct ublk_io *this_io = &this_q->ios[req->tag]; 1444 1444 1445 + if (ublk_prep_req(this_q, req, true) != BLK_STS_OK) { 1446 + rq_list_add_tail(&requeue_list, req); 1447 + continue; 1448 + } 1449 + 1445 1450 if (io && !ublk_belong_to_same_batch(io, this_io) && 1446 1451 !rq_list_empty(&submit_list)) 1447 1452 ublk_queue_cmd_list(io, &submit_list); 1448 1453 io = this_io; 1449 - 1450 - if (ublk_prep_req(this_q, req, true) == BLK_STS_OK) 1451 - rq_list_add_tail(&submit_list, req); 1452 - else 1453 - rq_list_add_tail(&requeue_list, req); 1454 + rq_list_add_tail(&submit_list, req); 1454 1455 } 1455 1456 1456 1457 if (!rq_list_empty(&submit_list))
+7 -5
drivers/dma-buf/dma-resv.c
··· 685 685 dma_resv_iter_begin(&cursor, obj, usage); 686 686 dma_resv_for_each_fence_unlocked(&cursor, fence) { 687 687 688 - ret = dma_fence_wait_timeout(fence, intr, ret); 689 - if (ret <= 0) { 690 - dma_resv_iter_end(&cursor); 691 - return ret; 692 - } 688 + ret = dma_fence_wait_timeout(fence, intr, timeout); 689 + if (ret <= 0) 690 + break; 691 + 692 + /* Even for zero timeout the return value is 1 */ 693 + if (timeout) 694 + timeout = ret; 693 695 } 694 696 dma_resv_iter_end(&cursor); 695 697
+3 -1
drivers/edac/ecs.c
··· 170 170 fru_ctx->dev_attr[ECS_RESET] = EDAC_ECS_ATTR_WO(reset, fru); 171 171 fru_ctx->dev_attr[ECS_THRESHOLD] = EDAC_ECS_ATTR_RW(threshold, fru); 172 172 173 - for (i = 0; i < ECS_MAX_ATTRS; i++) 173 + for (i = 0; i < ECS_MAX_ATTRS; i++) { 174 + sysfs_attr_init(&fru_ctx->dev_attr[i].dev_attr.attr); 174 175 fru_ctx->ecs_attrs[i] = &fru_ctx->dev_attr[i].dev_attr.attr; 176 + } 175 177 176 178 sprintf(fru_ctx->name, "%s%d", EDAC_ECS_FRU_NAME, fru); 177 179 group->name = fru_ctx->name;
+1
drivers/edac/mem_repair.c
··· 333 333 for (i = 0; i < MR_MAX_ATTRS; i++) { 334 334 memcpy(&ctx->mem_repair_dev_attr[i], 335 335 &dev_attr[i], sizeof(dev_attr[i])); 336 + sysfs_attr_init(&ctx->mem_repair_dev_attr[i].dev_attr.attr); 336 337 ctx->mem_repair_attrs[i] = 337 338 &ctx->mem_repair_dev_attr[i].dev_attr.attr; 338 339 }
+1
drivers/edac/scrub.c
··· 176 176 group = &scrub_ctx->group; 177 177 for (i = 0; i < SCRUB_MAX_ATTRS; i++) { 178 178 memcpy(&scrub_ctx->scrub_dev_attr[i], &dev_attr[i], sizeof(dev_attr[i])); 179 + sysfs_attr_init(&scrub_ctx->scrub_dev_attr[i].dev_attr.attr); 179 180 scrub_ctx->scrub_attrs[i] = &scrub_ctx->scrub_dev_attr[i].dev_attr.attr; 180 181 } 181 182 sprintf(scrub_ctx->name, "%s%d", "scrub", instance);
+36 -35
drivers/firmware/arm_ffa/driver.c
··· 110 110 struct work_struct sched_recv_irq_work; 111 111 struct xarray partition_info; 112 112 DECLARE_HASHTABLE(notifier_hash, ilog2(FFA_MAX_NOTIFICATIONS)); 113 - struct mutex notify_lock; /* lock to protect notifier hashtable */ 113 + rwlock_t notify_lock; /* lock to protect notifier hashtable */ 114 114 }; 115 115 116 116 static struct ffa_drv_info *drv_info; ··· 1250 1250 return NULL; 1251 1251 } 1252 1252 1253 - static int 1254 - update_notifier_cb(struct ffa_device *dev, int notify_id, void *cb, 1255 - void *cb_data, bool is_registration, bool is_framework) 1253 + static int update_notifier_cb(struct ffa_device *dev, int notify_id, 1254 + struct notifier_cb_info *cb, bool is_framework) 1256 1255 { 1257 1256 struct notifier_cb_info *cb_info = NULL; 1258 1257 enum notify_type type = ffa_notify_type_get(dev->vm_id); 1259 - bool cb_found; 1258 + bool cb_found, is_registration = !!cb; 1260 1259 1261 1260 if (is_framework) 1262 1261 cb_info = notifier_hnode_get_by_vmid_uuid(notify_id, dev->vm_id, ··· 1269 1270 return -EINVAL; 1270 1271 1271 1272 if (is_registration) { 1272 - cb_info = kzalloc(sizeof(*cb_info), GFP_KERNEL); 1273 - if (!cb_info) 1274 - return -ENOMEM; 1275 - 1276 - cb_info->dev = dev; 1277 - cb_info->cb_data = cb_data; 1278 - if (is_framework) 1279 - cb_info->fwk_cb = cb; 1280 - else 1281 - cb_info->cb = cb; 1282 - 1283 - hash_add(drv_info->notifier_hash, &cb_info->hnode, notify_id); 1273 + hash_add(drv_info->notifier_hash, &cb->hnode, notify_id); 1284 1274 } else { 1285 1275 hash_del(&cb_info->hnode); 1276 + kfree(cb_info); 1286 1277 } 1287 1278 1288 1279 return 0; ··· 1289 1300 if (notify_id >= FFA_MAX_NOTIFICATIONS) 1290 1301 return -EINVAL; 1291 1302 1292 - mutex_lock(&drv_info->notify_lock); 1303 + write_lock(&drv_info->notify_lock); 1293 1304 1294 - rc = update_notifier_cb(dev, notify_id, NULL, NULL, false, 1295 - is_framework); 1305 + rc = update_notifier_cb(dev, notify_id, NULL, is_framework); 1296 1306 if (rc) { 1297 1307 pr_err("Could 
not unregister notification callback\n"); 1298 - mutex_unlock(&drv_info->notify_lock); 1308 + write_unlock(&drv_info->notify_lock); 1299 1309 return rc; 1300 1310 } 1301 1311 1302 1312 if (!is_framework) 1303 1313 rc = ffa_notification_unbind(dev->vm_id, BIT(notify_id)); 1304 1314 1305 - mutex_unlock(&drv_info->notify_lock); 1315 + write_unlock(&drv_info->notify_lock); 1306 1316 1307 1317 return rc; 1308 1318 } ··· 1322 1334 { 1323 1335 int rc; 1324 1336 u32 flags = 0; 1337 + struct notifier_cb_info *cb_info = NULL; 1325 1338 1326 1339 if (ffa_notifications_disabled()) 1327 1340 return -EOPNOTSUPP; ··· 1330 1341 if (notify_id >= FFA_MAX_NOTIFICATIONS) 1331 1342 return -EINVAL; 1332 1343 1333 - mutex_lock(&drv_info->notify_lock); 1344 + cb_info = kzalloc(sizeof(*cb_info), GFP_KERNEL); 1345 + if (!cb_info) 1346 + return -ENOMEM; 1347 + 1348 + cb_info->dev = dev; 1349 + cb_info->cb_data = cb_data; 1350 + if (is_framework) 1351 + cb_info->fwk_cb = cb; 1352 + else 1353 + cb_info->cb = cb; 1354 + 1355 + write_lock(&drv_info->notify_lock); 1334 1356 1335 1357 if (!is_framework) { 1336 1358 if (is_per_vcpu) 1337 1359 flags = PER_VCPU_NOTIFICATION_FLAG; 1338 1360 1339 1361 rc = ffa_notification_bind(dev->vm_id, BIT(notify_id), flags); 1340 - if (rc) { 1341 - mutex_unlock(&drv_info->notify_lock); 1342 - return rc; 1343 - } 1362 + if (rc) 1363 + goto out_unlock_free; 1344 1364 } 1345 1365 1346 - rc = update_notifier_cb(dev, notify_id, cb, cb_data, true, 1347 - is_framework); 1366 + rc = update_notifier_cb(dev, notify_id, cb_info, is_framework); 1348 1367 if (rc) { 1349 1368 pr_err("Failed to register callback for %d - %d\n", 1350 1369 notify_id, rc); 1351 1370 if (!is_framework) 1352 1371 ffa_notification_unbind(dev->vm_id, BIT(notify_id)); 1353 1372 } 1354 - mutex_unlock(&drv_info->notify_lock); 1373 + 1374 + out_unlock_free: 1375 + write_unlock(&drv_info->notify_lock); 1376 + if (rc) 1377 + kfree(cb_info); 1355 1378 1356 1379 return rc; 1357 1380 } ··· 1407 1406 if 
(!(bitmap & 1)) 1408 1407 continue; 1409 1408 1410 - mutex_lock(&drv_info->notify_lock); 1409 + read_lock(&drv_info->notify_lock); 1411 1410 cb_info = notifier_hnode_get_by_type(notify_id, type); 1412 - mutex_unlock(&drv_info->notify_lock); 1411 + read_unlock(&drv_info->notify_lock); 1413 1412 1414 1413 if (cb_info && cb_info->cb) 1415 1414 cb_info->cb(notify_id, cb_info->cb_data); ··· 1447 1446 1448 1447 ffa_rx_release(); 1449 1448 1450 - mutex_lock(&drv_info->notify_lock); 1449 + read_lock(&drv_info->notify_lock); 1451 1450 cb_info = notifier_hnode_get_by_vmid_uuid(notify_id, target, &uuid); 1452 - mutex_unlock(&drv_info->notify_lock); 1451 + read_unlock(&drv_info->notify_lock); 1453 1452 1454 1453 if (cb_info && cb_info->fwk_cb) 1455 1454 cb_info->fwk_cb(notify_id, cb_info->cb_data, buf); ··· 1974 1973 goto cleanup; 1975 1974 1976 1975 hash_init(drv_info->notifier_hash); 1977 - mutex_init(&drv_info->notify_lock); 1976 + rwlock_init(&drv_info->notify_lock); 1978 1977 1979 1978 drv_info->notif_enabled = true; 1980 1979 return;
+2 -4
drivers/firmware/efi/libstub/zboot.lds
··· 29 29 . = _etext; 30 30 } 31 31 32 - #ifdef CONFIG_EFI_SBAT 33 32 .sbat : ALIGN(4096) { 34 33 _sbat = .; 35 34 *(.sbat) 36 35 _esbat = ALIGN(4096); 37 36 . = _esbat; 38 37 } 39 - #endif 40 38 41 39 .data : ALIGN(4096) { 42 40 _data = .; ··· 58 60 PROVIDE(__efistub__gzdata_size = 59 61 ABSOLUTE(__efistub__gzdata_end - __efistub__gzdata_start)); 60 62 61 - PROVIDE(__data_rawsize = ABSOLUTE(_edata - _etext)); 62 - PROVIDE(__data_size = ABSOLUTE(_end - _etext)); 63 + PROVIDE(__data_rawsize = ABSOLUTE(_edata - _data)); 64 + PROVIDE(__data_size = ABSOLUTE(_end - _data)); 63 65 PROVIDE(__sbat_size = ABSOLUTE(_esbat - _sbat));
+10 -17
drivers/firmware/samsung/exynos-acpm.c
··· 430 430 return -EOPNOTSUPP; 431 431 } 432 432 433 + msg.chan_id = xfer->acpm_chan_id; 434 + msg.chan_type = EXYNOS_MBOX_CHAN_TYPE_DOORBELL; 435 + 433 436 scoped_guard(mutex, &achan->tx_lock) { 434 437 tx_front = readl(achan->tx.front); 435 438 idx = (tx_front + 1) % achan->qlen; ··· 449 446 450 447 /* Advance TX front. */ 451 448 writel(idx, achan->tx.front); 449 + 450 + ret = mbox_send_message(achan->chan, (void *)&msg); 451 + if (ret < 0) 452 + return ret; 453 + 454 + mbox_client_txdone(achan->chan, 0); 452 455 } 453 456 454 - msg.chan_id = xfer->acpm_chan_id; 455 - msg.chan_type = EXYNOS_MBOX_CHAN_TYPE_DOORBELL; 456 - ret = mbox_send_message(achan->chan, (void *)&msg); 457 - if (ret < 0) 458 - return ret; 459 - 460 - ret = acpm_wait_for_message_response(achan, xfer); 461 - 462 - /* 463 - * NOTE: we might prefer not to need the mailbox ticker to manage the 464 - * transfer queueing since the protocol layer queues things by itself. 465 - * Unfortunately, we have to kick the mailbox framework after we have 466 - * received our message. 467 - */ 468 - mbox_client_txdone(achan->chan, ret); 469 - 470 - return ret; 457 + return acpm_wait_for_message_response(achan, xfer); 471 458 } 472 459 473 460 /**
+8
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
··· 561 561 return REG_GET_FIELD(status, VM_CONTEXT1_PROTECTION_FAULT_STATUS, VMID); 562 562 } 563 563 564 + static uint32_t kgd_hqd_sdma_get_doorbell(struct amdgpu_device *adev, 565 + int engine, int queue) 566 + 567 + { 568 + return 0; 569 + } 570 + 564 571 const struct kfd2kgd_calls gfx_v7_kfd2kgd = { 565 572 .program_sh_mem_settings = kgd_program_sh_mem_settings, 566 573 .set_pasid_vmid_mapping = kgd_set_pasid_vmid_mapping, ··· 585 578 .set_scratch_backing_va = set_scratch_backing_va, 586 579 .set_vm_context_page_table_base = set_vm_context_page_table_base, 587 580 .read_vmid_from_vmfault_reg = read_vmid_from_vmfault_reg, 581 + .hqd_sdma_get_doorbell = kgd_hqd_sdma_get_doorbell, 588 582 };
+8
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v8.c
··· 582 582 lower_32_bits(page_table_base)); 583 583 } 584 584 585 + static uint32_t kgd_hqd_sdma_get_doorbell(struct amdgpu_device *adev, 586 + int engine, int queue) 587 + 588 + { 589 + return 0; 590 + } 591 + 585 592 const struct kfd2kgd_calls gfx_v8_kfd2kgd = { 586 593 .program_sh_mem_settings = kgd_program_sh_mem_settings, 587 594 .set_pasid_vmid_mapping = kgd_set_pasid_vmid_mapping, ··· 606 599 get_atc_vmid_pasid_mapping_info, 607 600 .set_scratch_backing_va = set_scratch_backing_va, 608 601 .set_vm_context_page_table_base = set_vm_context_page_table_base, 602 + .hqd_sdma_get_doorbell = kgd_hqd_sdma_get_doorbell, 609 603 };
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
··· 944 944 drm_sched_entity_fini(entity); 945 945 } 946 946 } 947 + kref_put(&ctx->refcount, amdgpu_ctx_fini); 947 948 } 948 949 } 949 950
+1
drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
··· 45 45 #include "amdgpu_ras.h" 46 46 47 47 MODULE_FIRMWARE("amdgpu/sdma_4_4_2.bin"); 48 + MODULE_FIRMWARE("amdgpu/sdma_4_4_4.bin"); 48 49 MODULE_FIRMWARE("amdgpu/sdma_4_4_5.bin"); 49 50 50 51 static const struct amdgpu_hwip_reg_entry sdma_reg_list_4_4_2[] = {
+6 -1
drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
··· 1543 1543 { 1544 1544 struct amdgpu_device *adev = ring->adev; 1545 1545 u32 inst_id = ring->me; 1546 + int r; 1546 1547 1547 - return amdgpu_sdma_reset_engine(adev, inst_id); 1548 + amdgpu_amdkfd_suspend(adev, true); 1549 + r = amdgpu_sdma_reset_engine(adev, inst_id); 1550 + amdgpu_amdkfd_resume(adev, true); 1551 + 1552 + return r; 1548 1553 } 1549 1554 1550 1555 static int sdma_v5_0_stop_queue(struct amdgpu_ring *ring)
+6 -1
drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
··· 1456 1456 { 1457 1457 struct amdgpu_device *adev = ring->adev; 1458 1458 u32 inst_id = ring->me; 1459 + int r; 1459 1460 1460 - return amdgpu_sdma_reset_engine(adev, inst_id); 1461 + amdgpu_amdkfd_suspend(adev, true); 1462 + r = amdgpu_sdma_reset_engine(adev, inst_id); 1463 + amdgpu_amdkfd_resume(adev, true); 1464 + 1465 + return r; 1461 1466 } 1462 1467 1463 1468 static int sdma_v5_2_stop_queue(struct amdgpu_ring *ring)
+21 -24
drivers/gpu/drm/amd/amdkfd/kfd_svm.c
··· 1171 1171 } 1172 1172 1173 1173 static void 1174 - svm_range_add_child(struct svm_range *prange, struct mm_struct *mm, 1175 - struct svm_range *pchild, enum svm_work_list_ops op) 1174 + svm_range_add_child(struct svm_range *prange, struct svm_range *pchild, enum svm_work_list_ops op) 1176 1175 { 1177 1176 pr_debug("add child 0x%p [0x%lx 0x%lx] to prange 0x%p child list %d\n", 1178 1177 pchild, pchild->start, pchild->last, prange, op); 1179 1178 1180 - pchild->work_item.mm = mm; 1179 + pchild->work_item.mm = NULL; 1181 1180 pchild->work_item.op = op; 1182 1181 list_add_tail(&pchild->child_list, &prange->child_list); 1183 1182 } ··· 1277 1278 mapping_flags |= ext_coherent ? AMDGPU_VM_MTYPE_UC : AMDGPU_VM_MTYPE_NC; 1278 1279 /* system memory accessed by the dGPU */ 1279 1280 } else { 1280 - if (gc_ip_version < IP_VERSION(9, 5, 0)) 1281 + if (gc_ip_version < IP_VERSION(9, 5, 0) || ext_coherent) 1281 1282 mapping_flags |= AMDGPU_VM_MTYPE_UC; 1282 1283 else 1283 1284 mapping_flags |= AMDGPU_VM_MTYPE_NC; ··· 2393 2394 prange->work_item.op != SVM_OP_UNMAP_RANGE) 2394 2395 prange->work_item.op = op; 2395 2396 } else { 2396 - prange->work_item.op = op; 2397 - 2398 - /* Pairs with mmput in deferred_list_work */ 2399 - mmget(mm); 2400 - prange->work_item.mm = mm; 2401 - list_add_tail(&prange->deferred_list, 2402 - &prange->svms->deferred_range_list); 2403 - pr_debug("add prange 0x%p [0x%lx 0x%lx] to work list op %d\n", 2404 - prange, prange->start, prange->last, op); 2397 + /* Pairs with mmput in deferred_list_work. 2398 + * If process is exiting and mm is gone, don't update mmu notifier. 
2399 + */ 2400 + if (mmget_not_zero(mm)) { 2401 + prange->work_item.mm = mm; 2402 + prange->work_item.op = op; 2403 + list_add_tail(&prange->deferred_list, 2404 + &prange->svms->deferred_range_list); 2405 + pr_debug("add prange 0x%p [0x%lx 0x%lx] to work list op %d\n", 2406 + prange, prange->start, prange->last, op); 2407 + } 2405 2408 } 2406 2409 spin_unlock(&svms->deferred_list_lock); 2407 2410 } ··· 2417 2416 } 2418 2417 2419 2418 static void 2420 - svm_range_unmap_split(struct mm_struct *mm, struct svm_range *parent, 2421 - struct svm_range *prange, unsigned long start, 2419 + svm_range_unmap_split(struct svm_range *parent, struct svm_range *prange, unsigned long start, 2422 2420 unsigned long last) 2423 2421 { 2424 2422 struct svm_range *head; ··· 2438 2438 svm_range_split(tail, last + 1, tail->last, &head); 2439 2439 2440 2440 if (head != prange && tail != prange) { 2441 - svm_range_add_child(parent, mm, head, SVM_OP_UNMAP_RANGE); 2442 - svm_range_add_child(parent, mm, tail, SVM_OP_ADD_RANGE); 2441 + svm_range_add_child(parent, head, SVM_OP_UNMAP_RANGE); 2442 + svm_range_add_child(parent, tail, SVM_OP_ADD_RANGE); 2443 2443 } else if (tail != prange) { 2444 - svm_range_add_child(parent, mm, tail, SVM_OP_UNMAP_RANGE); 2444 + svm_range_add_child(parent, tail, SVM_OP_UNMAP_RANGE); 2445 2445 } else if (head != prange) { 2446 - svm_range_add_child(parent, mm, head, SVM_OP_UNMAP_RANGE); 2446 + svm_range_add_child(parent, head, SVM_OP_UNMAP_RANGE); 2447 2447 } else if (parent != prange) { 2448 2448 prange->work_item.op = SVM_OP_UNMAP_RANGE; 2449 2449 } ··· 2520 2520 l = min(last, pchild->last); 2521 2521 if (l >= s) 2522 2522 svm_range_unmap_from_gpus(pchild, s, l, trigger); 2523 - svm_range_unmap_split(mm, prange, pchild, start, last); 2523 + svm_range_unmap_split(prange, pchild, start, last); 2524 2524 mutex_unlock(&pchild->lock); 2525 2525 } 2526 2526 s = max(start, prange->start); 2527 2527 l = min(last, prange->last); 2528 2528 if (l >= s) 2529 2529 
svm_range_unmap_from_gpus(prange, s, l, trigger); 2530 - svm_range_unmap_split(mm, prange, prange, start, last); 2530 + svm_range_unmap_split(prange, prange, start, last); 2531 2531 2532 2532 if (unmap_parent) 2533 2533 svm_range_add_list_work(svms, prange, mm, SVM_OP_UNMAP_RANGE); ··· 2570 2570 2571 2571 if (range->event == MMU_NOTIFY_RELEASE) 2572 2572 return true; 2573 - if (!mmget_not_zero(mni->mm)) 2574 - return true; 2575 2573 2576 2574 start = mni->interval_tree.start; 2577 2575 last = mni->interval_tree.last; ··· 2596 2598 } 2597 2599 2598 2600 svm_range_unlock(prange); 2599 - mmput(mni->mm); 2600 2601 2601 2602 return true; 2602 2603 }
+7 -5
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 3610 3610 3611 3611 luminance_range = &conn_base->display_info.luminance_range; 3612 3612 3613 - if (luminance_range->max_luminance) { 3614 - caps->aux_min_input_signal = luminance_range->min_luminance; 3613 + if (luminance_range->max_luminance) 3615 3614 caps->aux_max_input_signal = luminance_range->max_luminance; 3616 - } else { 3617 - caps->aux_min_input_signal = 0; 3615 + else 3618 3616 caps->aux_max_input_signal = 512; 3619 - } 3617 + 3618 + if (luminance_range->min_luminance) 3619 + caps->aux_min_input_signal = luminance_range->min_luminance; 3620 + else 3621 + caps->aux_min_input_signal = 1; 3620 3622 3621 3623 min_input_signal_override = drm_get_panel_min_brightness_quirk(aconnector->drm_edid); 3622 3624 if (min_input_signal_override >= 0)
+1
drivers/gpu/drm/amd/display/dc/dc_hw_types.h
··· 974 974 uint32_t pix_clk_100hz; 975 975 976 976 uint32_t min_refresh_in_uhz; 977 + uint32_t max_refresh_in_uhz; 977 978 978 979 uint32_t vic; 979 980 uint32_t hdmi_vic;
+8
drivers/gpu/drm/amd/display/modules/freesync/freesync.c
··· 155 155 v_total = div64_u64(div64_u64(((unsigned long long)( 156 156 frame_duration_in_ns) * (stream->timing.pix_clk_100hz / 10)), 157 157 stream->timing.h_total), 1000000); 158 + } else if (refresh_in_uhz >= stream->timing.max_refresh_in_uhz) { 159 + /* When the target refresh rate is the maximum panel refresh rate 160 + * round up the vtotal value to prevent off-by-one error causing 161 + * v_total_min to be below the panel's lower bound 162 + */ 163 + v_total = div64_u64(div64_u64(((unsigned long long)( 164 + frame_duration_in_ns) * (stream->timing.pix_clk_100hz / 10)), 165 + stream->timing.h_total) + (1000000 - 1), 1000000); 158 166 } else { 159 167 v_total = div64_u64(div64_u64(((unsigned long long)( 160 168 frame_duration_in_ns) * (stream->timing.pix_clk_100hz / 10)),
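The freesync hunk rounds `v_total` up by adding `(1000000 - 1)` before the final divide — the classic ceiling-division trick — so truncation cannot push `v_total_min` below the panel's lower bound at the maximum refresh rate. A standalone sketch of the idiom (helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Ceiling division: (n + d - 1) / d rounds the quotient up instead of
 * truncating, matching the "+ (1000000 - 1)" added in the hunk above. */
static uint64_t div_round_up_u64(uint64_t n, uint64_t d)
{
	return (n + d - 1) / d;
}
```

With plain truncating division, a frame duration one nanosecond short of an exact line count would yield a `v_total` one line too small; rounding up keeps the result on the safe side of the panel's bound.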
+2 -1
drivers/gpu/drm/bridge/aux-hpd-bridge.c
··· 64 64 adev->id = ret; 65 65 adev->name = "dp_hpd_bridge"; 66 66 adev->dev.parent = parent; 67 - adev->dev.of_node = of_node_get(parent->of_node); 68 67 adev->dev.release = drm_aux_hpd_bridge_release; 69 68 adev->dev.platform_data = of_node_get(np); 69 + 70 + device_set_of_node_from_dev(&adev->dev, parent); 70 71 71 72 ret = auxiliary_device_init(adev); 72 73 if (ret) {
+1 -4
drivers/gpu/drm/bridge/panel.c
··· 299 299 panel_bridge->bridge.of_node = panel->dev->of_node; 300 300 panel_bridge->bridge.ops = DRM_BRIDGE_OP_MODES; 301 301 panel_bridge->bridge.type = connector_type; 302 + panel_bridge->bridge.pre_enable_prev_first = panel->prepare_prev_first; 302 303 303 304 drm_bridge_add(&panel_bridge->bridge); 304 305 ··· 414 413 return bridge; 415 414 } 416 415 417 - bridge->pre_enable_prev_first = panel->prepare_prev_first; 418 - 419 416 *ptr = bridge; 420 417 devres_add(dev, ptr); 421 418 ··· 454 455 bridge); 455 456 if (ret) 456 457 return ERR_PTR(ret); 457 - 458 - bridge->pre_enable_prev_first = panel->prepare_prev_first; 459 458 460 459 return bridge; 461 460 }
+2 -2
drivers/gpu/drm/bridge/samsung-dsim.c
··· 1095 1095 bool first = !xfer->tx_done; 1096 1096 u32 reg; 1097 1097 1098 - dev_dbg(dev, "< xfer %pK: tx len %u, done %u, rx len %u, done %u\n", 1098 + dev_dbg(dev, "< xfer %p: tx len %u, done %u, rx len %u, done %u\n", 1099 1099 xfer, length, xfer->tx_done, xfer->rx_len, xfer->rx_done); 1100 1100 1101 1101 if (length > DSI_TX_FIFO_SIZE) ··· 1293 1293 spin_unlock_irqrestore(&dsi->transfer_lock, flags); 1294 1294 1295 1295 dev_dbg(dsi->dev, 1296 - "> xfer %pK, tx_len %zu, tx_done %u, rx_len %u, rx_done %u\n", 1296 + "> xfer %p, tx_len %zu, tx_done %u, rx_len %u, rx_done %u\n", 1297 1297 xfer, xfer->packet.payload_length, xfer->tx_done, xfer->rx_len, 1298 1298 xfer->rx_done); 1299 1299
+40 -4
drivers/gpu/drm/drm_gem.c
··· 212 212 } 213 213 EXPORT_SYMBOL(drm_gem_private_object_fini); 214 214 215 + static void drm_gem_object_handle_get(struct drm_gem_object *obj) 216 + { 217 + struct drm_device *dev = obj->dev; 218 + 219 + drm_WARN_ON(dev, !mutex_is_locked(&dev->object_name_lock)); 220 + 221 + if (obj->handle_count++ == 0) 222 + drm_gem_object_get(obj); 223 + } 224 + 225 + /** 226 + * drm_gem_object_handle_get_unlocked - acquire reference on user-space handles 227 + * @obj: GEM object 228 + * 229 + * Acquires a reference on the GEM buffer object's handle. Required 230 + * to keep the GEM object alive. Call drm_gem_object_handle_put_unlocked() 231 + * to release the reference. 232 + */ 233 + void drm_gem_object_handle_get_unlocked(struct drm_gem_object *obj) 234 + { 235 + struct drm_device *dev = obj->dev; 236 + 237 + guard(mutex)(&dev->object_name_lock); 238 + 239 + drm_WARN_ON(dev, !obj->handle_count); /* first ref taken in create-tail helper */ 240 + drm_gem_object_handle_get(obj); 241 + } 242 + EXPORT_SYMBOL(drm_gem_object_handle_get_unlocked); 243 + 215 244 /** 216 245 * drm_gem_object_handle_free - release resources bound to userspace handles 217 246 * @obj: GEM object to clean up. ··· 271 242 } 272 243 } 273 244 274 - static void 275 - drm_gem_object_handle_put_unlocked(struct drm_gem_object *obj) 245 + /** 246 + * drm_gem_object_handle_put_unlocked - releases reference on user-space handles 247 + * @obj: GEM object 248 + * 249 + * Releases a reference on the GEM buffer object's handle. Possibly releases 250 + * the GEM buffer object and associated dma-buf objects. 
251 + */ 252 + void drm_gem_object_handle_put_unlocked(struct drm_gem_object *obj) 276 253 { 277 254 struct drm_device *dev = obj->dev; 278 255 bool final = false; ··· 303 268 if (final) 304 269 drm_gem_object_put(obj); 305 270 } 271 + EXPORT_SYMBOL(drm_gem_object_handle_put_unlocked); 306 272 307 273 /* 308 274 * Called at device or object close to release the file's ··· 425 389 int ret; 426 390 427 391 WARN_ON(!mutex_is_locked(&dev->object_name_lock)); 428 - if (obj->handle_count++ == 0) 429 - drm_gem_object_get(obj); 392 + 393 + drm_gem_object_handle_get(obj); 430 394 431 395 /* 432 396 * Get the user-visible handle using idr. Preload and perform
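The drm_gem.c change factors the handle-count bookkeeping into `drm_gem_object_handle_get()`: the first userspace handle pins the object with one backing reference, and later handles only bump the count. A miniature of that scheme, with locking omitted (the real code asserts `dev->object_name_lock` is held) and names purely illustrative:

```c
#include <assert.h>

/* Two counters, as in struct drm_gem_object: a backing reference count
 * and a count of userspace handles. */
struct gem_obj {
	unsigned int refcount;     /* backing object references */
	unsigned int handle_count; /* userspace handles */
};

/* First handle pins the object with one reference; later handles only
 * bump handle_count, mirroring drm_gem_object_handle_get(). */
static void handle_get(struct gem_obj *obj)
{
	if (obj->handle_count++ == 0)
		obj->refcount++;
}

/* Returns 1 when the last handle went away ("final" in the real code),
 * at which point the handles' shared pin on the object is dropped. */
static int handle_put(struct gem_obj *obj)
{
	if (--obj->handle_count == 0) {
		obj->refcount--;
		return 1;
	}
	return 0;
}

/* Exercise a full get/get/put/put cycle starting from one base ref. */
static int demo_handle_cycle(void)
{
	struct gem_obj o = { .refcount = 1, .handle_count = 0 };

	handle_get(&o);                 /* 1st handle: pins the object */
	handle_get(&o);                 /* 2nd handle: count only */
	if (o.refcount != 2 || o.handle_count != 2)
		return 0;
	if (handle_put(&o) != 0)        /* not final yet */
		return 0;
	if (handle_put(&o) != 1)        /* last handle: pin dropped */
		return 0;
	return o.refcount == 1 && o.handle_count == 0;
}
```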
+9 -7
drivers/gpu/drm/drm_gem_framebuffer_helper.c
··· 99 99 unsigned int i; 100 100 101 101 for (i = 0; i < fb->format->num_planes; i++) 102 - drm_gem_object_put(fb->obj[i]); 102 + drm_gem_object_handle_put_unlocked(fb->obj[i]); 103 103 104 104 drm_framebuffer_cleanup(fb); 105 105 kfree(fb); ··· 182 182 if (!objs[i]) { 183 183 drm_dbg_kms(dev, "Failed to lookup GEM object\n"); 184 184 ret = -ENOENT; 185 - goto err_gem_object_put; 185 + goto err_gem_object_handle_put_unlocked; 186 186 } 187 + drm_gem_object_handle_get_unlocked(objs[i]); 188 + drm_gem_object_put(objs[i]); 187 189 188 190 min_size = (height - 1) * mode_cmd->pitches[i] 189 191 + drm_format_info_min_pitch(info, i, width) ··· 195 193 drm_dbg_kms(dev, 196 194 "GEM object size (%zu) smaller than minimum size (%u) for plane %d\n", 197 195 objs[i]->size, min_size, i); 198 - drm_gem_object_put(objs[i]); 196 + drm_gem_object_handle_put_unlocked(objs[i]); 199 197 ret = -EINVAL; 200 - goto err_gem_object_put; 198 + goto err_gem_object_handle_put_unlocked; 201 199 } 202 200 } 203 201 204 202 ret = drm_gem_fb_init(dev, fb, mode_cmd, objs, i, funcs); 205 203 if (ret) 206 - goto err_gem_object_put; 204 + goto err_gem_object_handle_put_unlocked; 207 205 208 206 return 0; 209 207 210 - err_gem_object_put: 208 + err_gem_object_handle_put_unlocked: 211 209 while (i > 0) { 212 210 --i; 213 - drm_gem_object_put(objs[i]); 211 + drm_gem_object_handle_put_unlocked(objs[i]); 214 212 } 215 213 return ret; 216 214 }
+2
drivers/gpu/drm/drm_internal.h
··· 161 161 162 162 /* drm_gem.c */ 163 163 int drm_gem_init(struct drm_device *dev); 164 + void drm_gem_object_handle_get_unlocked(struct drm_gem_object *obj); 165 + void drm_gem_object_handle_put_unlocked(struct drm_gem_object *obj); 164 166 int drm_gem_handle_create_tail(struct drm_file *file_priv, 165 167 struct drm_gem_object *obj, 166 168 u32 *handlep);
+2 -1
drivers/gpu/drm/drm_mipi_dsi.c
··· 91 91 .restore = pm_generic_restore, 92 92 }; 93 93 94 - static const struct bus_type mipi_dsi_bus_type = { 94 + const struct bus_type mipi_dsi_bus_type = { 95 95 .name = "mipi-dsi", 96 96 .match = mipi_dsi_device_match, 97 97 .uevent = mipi_dsi_uevent, 98 98 .pm = &mipi_dsi_device_pm_ops, 99 99 }; 100 + EXPORT_SYMBOL_GPL(mipi_dsi_bus_type); 100 101 101 102 /** 102 103 * of_find_mipi_dsi_device_by_node() - find the MIPI DSI device matching a
+4
drivers/gpu/drm/exynos/exynos7_drm_decon.c
··· 636 636 if (!ctx->drm_dev) 637 637 goto out; 638 638 639 + /* check if crtc and vblank have been initialized properly */ 640 + if (!drm_dev_has_vblank(ctx->drm_dev)) 641 + goto out; 642 + 639 643 if (!ctx->i80_if) { 640 644 drm_crtc_handle_vblank(&ctx->crtc->base); 641 645
+12
drivers/gpu/drm/exynos/exynos_drm_fimd.c
··· 187 187 u32 i80ifcon; 188 188 bool i80_if; 189 189 bool suspended; 190 + bool dp_clk_enabled; 190 191 wait_queue_head_t wait_vsync_queue; 191 192 atomic_t wait_vsync_event; 192 193 atomic_t win_updated; ··· 1048 1047 struct fimd_context *ctx = container_of(clk, struct fimd_context, 1049 1048 dp_clk); 1050 1049 u32 val = enable ? DP_MIE_CLK_DP_ENABLE : DP_MIE_CLK_DISABLE; 1050 + 1051 + if (enable == ctx->dp_clk_enabled) 1052 + return; 1053 + 1054 + if (enable) 1055 + pm_runtime_resume_and_get(ctx->dev); 1056 + 1057 + ctx->dp_clk_enabled = enable; 1051 1058 writel(val, ctx->regs + DP_MIE_CLKCON); 1059 + 1060 + if (!enable) 1061 + pm_runtime_put(ctx->dev); 1052 1062 } 1053 1063 1054 1064 static const struct exynos_drm_crtc_ops fimd_crtc_ops = {
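The fimd hunk makes the DP clock callback idempotent with a `dp_clk_enabled` flag and balances a runtime-PM reference across the enable/disable transitions. A small sketch of that pattern under hypothetical names — the integer usage count stands in for the device's runtime-PM state, and the register write is elided:

```c
#include <assert.h>
#include <stdbool.h>

struct ctx {
	bool clk_enabled;
	int pm_usage; /* stand-in for the runtime-PM usage count */
};

/* Idempotent enable/disable: a repeated request in the same direction is
 * a no-op, so the PM reference taken on 0->1 is dropped exactly once on
 * 1->0, as in the patched fimd_dp_clock_enable(). */
static void dp_clk_set(struct ctx *c, bool enable)
{
	if (enable == c->clk_enabled)
		return;

	if (enable)
		c->pm_usage++;      /* pm_runtime_resume_and_get() */

	c->clk_enabled = enable;
	/* writel(val, regs + DP_MIE_CLKCON) would go here */

	if (!enable)
		c->pm_usage--;      /* pm_runtime_put() */
}

/* A double enable must take the PM reference once, not twice. */
static int demo_double_enable(void)
{
	struct ctx c = { false, 0 };

	dp_clk_set(&c, true);
	dp_clk_set(&c, true);
	if (c.pm_usage != 1)
		return 0;
	dp_clk_set(&c, false);
	return c.pm_usage == 0 && !c.clk_enabled;
}
```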
+1 -1
drivers/gpu/drm/exynos/exynos_drm_gem.c
··· 174 174 return ERR_PTR(ret); 175 175 } 176 176 177 - DRM_DEV_DEBUG_KMS(dev->dev, "created file object = %pK\n", obj->filp); 177 + DRM_DEV_DEBUG_KMS(dev->dev, "created file object = %p\n", obj->filp); 178 178 179 179 return exynos_gem; 180 180 }
+16 -16
drivers/gpu/drm/exynos/exynos_drm_ipp.c
··· 271 271 task->src.rect.h = task->dst.rect.h = UINT_MAX; 272 272 task->transform.rotation = DRM_MODE_ROTATE_0; 273 273 274 - DRM_DEV_DEBUG_DRIVER(task->dev, "Allocated task %pK\n", task); 274 + DRM_DEV_DEBUG_DRIVER(task->dev, "Allocated task %p\n", task); 275 275 276 276 return task; 277 277 } ··· 339 339 } 340 340 341 341 DRM_DEV_DEBUG_DRIVER(task->dev, 342 - "Got task %pK configuration from userspace\n", 342 + "Got task %p configuration from userspace\n", 343 343 task); 344 344 return 0; 345 345 } ··· 394 394 static void exynos_drm_ipp_task_free(struct exynos_drm_ipp *ipp, 395 395 struct exynos_drm_ipp_task *task) 396 396 { 397 - DRM_DEV_DEBUG_DRIVER(task->dev, "Freeing task %pK\n", task); 397 + DRM_DEV_DEBUG_DRIVER(task->dev, "Freeing task %p\n", task); 398 398 399 399 exynos_drm_ipp_task_release_buf(&task->src); 400 400 exynos_drm_ipp_task_release_buf(&task->dst); ··· 559 559 DRM_EXYNOS_IPP_FORMAT_DESTINATION); 560 560 if (!fmt) { 561 561 DRM_DEV_DEBUG_DRIVER(task->dev, 562 - "Task %pK: %s format not supported\n", 562 + "Task %p: %s format not supported\n", 563 563 task, buf == src ? 
"src" : "dst"); 564 564 return -EINVAL; 565 565 } ··· 609 609 bool rotate = (rotation != DRM_MODE_ROTATE_0); 610 610 bool scale = false; 611 611 612 - DRM_DEV_DEBUG_DRIVER(task->dev, "Checking task %pK\n", task); 612 + DRM_DEV_DEBUG_DRIVER(task->dev, "Checking task %p\n", task); 613 613 614 614 if (src->rect.w == UINT_MAX) 615 615 src->rect.w = src->buf.width; ··· 625 625 dst->rect.x + dst->rect.w > (dst->buf.width) || 626 626 dst->rect.y + dst->rect.h > (dst->buf.height)) { 627 627 DRM_DEV_DEBUG_DRIVER(task->dev, 628 - "Task %pK: defined area is outside provided buffers\n", 628 + "Task %p: defined area is outside provided buffers\n", 629 629 task); 630 630 return -EINVAL; 631 631 } ··· 642 642 (!(ipp->capabilities & DRM_EXYNOS_IPP_CAP_SCALE) && scale) || 643 643 (!(ipp->capabilities & DRM_EXYNOS_IPP_CAP_CONVERT) && 644 644 src->buf.fourcc != dst->buf.fourcc)) { 645 - DRM_DEV_DEBUG_DRIVER(task->dev, "Task %pK: hw capabilities exceeded\n", 645 + DRM_DEV_DEBUG_DRIVER(task->dev, "Task %p: hw capabilities exceeded\n", 646 646 task); 647 647 return -EINVAL; 648 648 } ··· 655 655 if (ret) 656 656 return ret; 657 657 658 - DRM_DEV_DEBUG_DRIVER(ipp->dev, "Task %pK: all checks done.\n", 658 + DRM_DEV_DEBUG_DRIVER(ipp->dev, "Task %p: all checks done.\n", 659 659 task); 660 660 661 661 return ret; ··· 667 667 struct exynos_drm_ipp_buffer *src = &task->src, *dst = &task->dst; 668 668 int ret = 0; 669 669 670 - DRM_DEV_DEBUG_DRIVER(task->dev, "Setting buffer for task %pK\n", 670 + DRM_DEV_DEBUG_DRIVER(task->dev, "Setting buffer for task %p\n", 671 671 task); 672 672 673 673 ret = exynos_drm_ipp_task_setup_buffer(src, filp); 674 674 if (ret) { 675 675 DRM_DEV_DEBUG_DRIVER(task->dev, 676 - "Task %pK: src buffer setup failed\n", 676 + "Task %p: src buffer setup failed\n", 677 677 task); 678 678 return ret; 679 679 } 680 680 ret = exynos_drm_ipp_task_setup_buffer(dst, filp); 681 681 if (ret) { 682 682 DRM_DEV_DEBUG_DRIVER(task->dev, 683 - "Task %pK: dst buffer setup failed\n", 683 
+ "Task %p: dst buffer setup failed\n", 684 684 task); 685 685 return ret; 686 686 } 687 687 688 - DRM_DEV_DEBUG_DRIVER(task->dev, "Task %pK: buffers prepared.\n", 688 + DRM_DEV_DEBUG_DRIVER(task->dev, "Task %p: buffers prepared.\n", 689 689 task); 690 690 691 691 return ret; ··· 764 764 struct exynos_drm_ipp *ipp = task->ipp; 765 765 unsigned long flags; 766 766 767 - DRM_DEV_DEBUG_DRIVER(task->dev, "ipp: %d, task %pK done: %d\n", 767 + DRM_DEV_DEBUG_DRIVER(task->dev, "ipp: %d, task %p done: %d\n", 768 768 ipp->id, task, ret); 769 769 770 770 spin_lock_irqsave(&ipp->lock, flags); ··· 807 807 spin_unlock_irqrestore(&ipp->lock, flags); 808 808 809 809 DRM_DEV_DEBUG_DRIVER(ipp->dev, 810 - "ipp: %d, selected task %pK to run\n", ipp->id, 810 + "ipp: %d, selected task %p to run\n", ipp->id, 811 811 task); 812 812 813 813 ret = ipp->funcs->commit(ipp, task); ··· 917 917 */ 918 918 if (arg->flags & DRM_EXYNOS_IPP_FLAG_NONBLOCK) { 919 919 DRM_DEV_DEBUG_DRIVER(ipp->dev, 920 - "ipp: %d, nonblocking processing task %pK\n", 920 + "ipp: %d, nonblocking processing task %p\n", 921 921 ipp->id, task); 922 922 923 923 task->flags |= DRM_EXYNOS_IPP_TASK_ASYNC; 924 924 exynos_drm_ipp_schedule_task(task->ipp, task); 925 925 ret = 0; 926 926 } else { 927 - DRM_DEV_DEBUG_DRIVER(ipp->dev, "ipp: %d, processing task %pK\n", 927 + DRM_DEV_DEBUG_DRIVER(ipp->dev, "ipp: %d, processing task %p\n", 928 928 ipp->id, task); 929 929 exynos_drm_ipp_schedule_task(ipp, task); 930 930 ret = wait_event_interruptible(ipp->done_wq,
+1 -1
drivers/gpu/drm/i915/display/vlv_dsi.c
··· 1589 1589 1590 1590 static void vlv_dphy_param_init(struct intel_dsi *intel_dsi) 1591 1591 { 1592 + struct intel_display *display = to_intel_display(&intel_dsi->base); 1592 1593 struct intel_connector *connector = intel_dsi->attached_connector; 1593 - struct intel_display *display = to_intel_display(connector); 1594 1594 struct mipi_config *mipi_config = connector->panel.vbt.dsi.config; 1595 1595 u32 tlpx_ns, extra_byte_count, tlpx_ui; 1596 1596 u32 ui_num, ui_den;
+1 -1
drivers/gpu/drm/i915/gt/intel_gsc.c
··· 284 284 if (gt->gsc.intf[intf_id].irq < 0) 285 285 return; 286 286 287 - ret = generic_handle_irq(gt->gsc.intf[intf_id].irq); 287 + ret = generic_handle_irq_safe(gt->gsc.intf[intf_id].irq); 288 288 if (ret) 289 289 gt_err_ratelimited(gt, "error handling GSC irq: %d\n", ret); 290 290 }
+2 -1
drivers/gpu/drm/i915/gt/intel_ring_submission.c
··· 610 610 /* One ringbuffer to rule them all */ 611 611 GEM_BUG_ON(!engine->legacy.ring); 612 612 ce->ring = engine->legacy.ring; 613 - ce->timeline = intel_timeline_get(engine->legacy.timeline); 614 613 615 614 GEM_BUG_ON(ce->state); 616 615 if (engine->context_size) { ··· 621 622 622 623 ce->state = vma; 623 624 } 625 + 626 + ce->timeline = intel_timeline_get(engine->legacy.timeline); 624 627 625 628 return 0; 626 629 }
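The intel_ring_submission hunk moves `intel_timeline_get()` after the fallible state allocation, so an allocation failure no longer leaks a timeline reference. A generic sketch of that "acquire last, after the final failure point" ordering, with entirely hypothetical names:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical refcounted resource. */
struct res {
	int refs;
};

static struct res *res_get(struct res *r)
{
	r->refs++;
	return r;
}

/* Take the reference only once every fallible step has succeeded, so
 * the error path has nothing to unwind — the shape of the fix above. */
static int ctx_pin(struct res *timeline, int alloc_fails, struct res **out)
{
	if (alloc_fails)
		return -1; /* state allocation failed: no reference taken */

	*out = res_get(timeline); /* only reached on full success */
	return 0;
}

static int demo_no_leak_on_error(void)
{
	struct res tl = { 1 };
	struct res *out = NULL;

	/* Failure must leave the refcount untouched. */
	if (ctx_pin(&tl, 1, &out) != -1 || tl.refs != 1 || out != NULL)
		return 0;
	/* Success takes exactly one reference. */
	if (ctx_pin(&tl, 0, &out) != 0 || tl.refs != 2 || out != &tl)
		return 0;
	return 1;
}
```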
+10 -10
drivers/gpu/drm/i915/selftests/i915_request.c
··· 73 73 /* Basic preliminary test to create a request and let it loose! */ 74 74 75 75 request = mock_request(rcs0(i915)->kernel_context, HZ / 10); 76 - if (!request) 77 - return -ENOMEM; 76 + if (IS_ERR(request)) 77 + return PTR_ERR(request); 78 78 79 79 i915_request_add(request); 80 80 ··· 91 91 /* Submit a request, then wait upon it */ 92 92 93 93 request = mock_request(rcs0(i915)->kernel_context, T); 94 - if (!request) 95 - return -ENOMEM; 94 + if (IS_ERR(request)) 95 + return PTR_ERR(request); 96 96 97 97 i915_request_get(request); 98 98 ··· 160 160 /* Submit a request, treat it as a fence and wait upon it */ 161 161 162 162 request = mock_request(rcs0(i915)->kernel_context, T); 163 - if (!request) 164 - return -ENOMEM; 163 + if (IS_ERR(request)) 164 + return PTR_ERR(request); 165 165 166 166 if (dma_fence_wait_timeout(&request->fence, false, T) != -ETIME) { 167 167 pr_err("fence wait success before submit (expected timeout)!\n"); ··· 219 219 GEM_BUG_ON(IS_ERR(ce)); 220 220 request = mock_request(ce, 2 * HZ); 221 221 intel_context_put(ce); 222 - if (!request) { 223 - err = -ENOMEM; 222 + if (IS_ERR(request)) { 223 + err = PTR_ERR(request); 224 224 goto err_context_0; 225 225 } 226 226 ··· 237 237 GEM_BUG_ON(IS_ERR(ce)); 238 238 vip = mock_request(ce, 0); 239 239 intel_context_put(ce); 240 - if (!vip) { 241 - err = -ENOMEM; 240 + if (IS_ERR(vip)) { 241 + err = PTR_ERR(vip); 242 242 goto err_context_1; 243 243 } 244 244
+1 -1
drivers/gpu/drm/i915/selftests/mock_request.c
··· 35 35 /* NB the i915->requests slab cache is enlarged to fit mock_request */ 36 36 request = intel_context_create_request(ce); 37 37 if (IS_ERR(request)) 38 - return NULL; 38 + return request; 39 39 40 40 request->mock.delay = delay; 41 41 return request;
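The mock_request.c and i915_request.c hunks fix a mismatch in the kernel's error-pointer convention: `mock_request()` used to flatten an `ERR_PTR` into `NULL`, while some callers then tested `IS_ERR()` and others tested `NULL`. A userspace sketch of the convention — definitions mirror `include/linux/err.h` in spirit, with `MAX_ERRNO` 4095 as in the kernel:

```c
#include <assert.h>
#include <errno.h>

/* Errors are encoded in the pointer value itself: the top MAX_ERRNO
 * addresses are reserved, so callers check IS_ERR(), never NULL. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* A mock_request()-like function: propagate the error pointer instead
 * of flattening it to NULL, so callers recover the original errno. */
static void *make_request(int fail)
{
	static int dummy;

	if (fail)
		return ERR_PTR(-ENOMEM); /* was "return NULL" before the fix */
	return &dummy;
}
```

The selftest side of the fix is the mirror image: `if (!request)` becomes `if (IS_ERR(request))`, and `-ENOMEM` becomes `PTR_ERR(request)`.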
+82 -50
drivers/gpu/drm/panel/panel-simple.c
··· 26 26 #include <linux/i2c.h> 27 27 #include <linux/media-bus-format.h> 28 28 #include <linux/module.h> 29 + #include <linux/of_device.h> 29 30 #include <linux/of_platform.h> 30 31 #include <linux/platform_device.h> 31 32 #include <linux/pm_runtime.h> ··· 135 134 136 135 /** @connector_type: LVDS, eDP, DSI, DPI, etc. */ 137 136 int connector_type; 137 + }; 138 + 139 + struct panel_desc_dsi { 140 + struct panel_desc desc; 141 + 142 + unsigned long flags; 143 + enum mipi_dsi_pixel_format format; 144 + unsigned int lanes; 138 145 }; 139 146 140 147 struct panel_simple { ··· 439 430 .get_timings = panel_simple_get_timings, 440 431 }; 441 432 442 - static struct panel_desc panel_dpi; 443 - 444 - static int panel_dpi_probe(struct device *dev, 445 - struct panel_simple *panel) 433 + static struct panel_desc *panel_dpi_probe(struct device *dev) 446 434 { 447 435 struct display_timing *timing; 448 436 const struct device_node *np; ··· 451 445 np = dev->of_node; 452 446 desc = devm_kzalloc(dev, sizeof(*desc), GFP_KERNEL); 453 447 if (!desc) 454 - return -ENOMEM; 448 + return ERR_PTR(-ENOMEM); 455 449 456 450 timing = devm_kzalloc(dev, sizeof(*timing), GFP_KERNEL); 457 451 if (!timing) 458 - return -ENOMEM; 452 + return ERR_PTR(-ENOMEM); 459 453 460 454 ret = of_get_display_timing(np, "panel-timing", timing); 461 455 if (ret < 0) { 462 456 dev_err(dev, "%pOF: no panel-timing node found for \"panel-dpi\" binding\n", 463 457 np); 464 - return ret; 458 + return ERR_PTR(ret); 465 459 } 466 460 467 461 desc->timings = timing; ··· 479 473 /* We do not know the connector for the DT node, so guess it */ 480 474 desc->connector_type = DRM_MODE_CONNECTOR_DPI; 481 475 482 - panel->desc = desc; 483 - 484 - return 0; 476 + return desc; 485 477 } 486 478 487 479 #define PANEL_SIMPLE_BOUNDS_CHECK(to_check, bounds, field) \ ··· 574 570 return 0; 575 571 } 576 572 577 - static int panel_simple_probe(struct device *dev, const struct panel_desc *desc) 573 + static const struct panel_desc 
*panel_simple_get_desc(struct device *dev) 578 574 { 575 + if (IS_ENABLED(CONFIG_DRM_MIPI_DSI) && 576 + dev_is_mipi_dsi(dev)) { 577 + const struct panel_desc_dsi *dsi_desc; 578 + 579 + dsi_desc = of_device_get_match_data(dev); 580 + if (!dsi_desc) 581 + return ERR_PTR(-ENODEV); 582 + 583 + return &dsi_desc->desc; 584 + } 585 + 586 + if (dev_is_platform(dev)) { 587 + const struct panel_desc *desc; 588 + 589 + desc = of_device_get_match_data(dev); 590 + if (!desc) { 591 + /* 592 + * panel-dpi probes without a descriptor and 593 + * panel_dpi_probe() will initialize one for us 594 + * based on the device tree. 595 + */ 596 + if (of_device_is_compatible(dev->of_node, "panel-dpi")) 597 + return panel_dpi_probe(dev); 598 + else 599 + return ERR_PTR(-ENODEV); 600 + } 601 + 602 + return desc; 603 + } 604 + 605 + return ERR_PTR(-ENODEV); 606 + } 607 + 608 + static struct panel_simple *panel_simple_probe(struct device *dev) 609 + { 610 + const struct panel_desc *desc; 579 611 struct panel_simple *panel; 580 612 struct display_timing dt; 581 613 struct device_node *ddc; ··· 619 579 u32 bus_flags; 620 580 int err; 621 581 582 + desc = panel_simple_get_desc(dev); 583 + if (IS_ERR(desc)) 584 + return ERR_CAST(desc); 585 + 622 586 panel = devm_drm_panel_alloc(dev, struct panel_simple, base, 623 587 &panel_simple_funcs, desc->connector_type); 624 588 if (IS_ERR(panel)) 625 - return PTR_ERR(panel); 589 + return ERR_CAST(panel); 626 590 627 591 panel->desc = desc; 628 592 629 593 panel->supply = devm_regulator_get(dev, "power"); 630 594 if (IS_ERR(panel->supply)) 631 - return PTR_ERR(panel->supply); 595 + return ERR_CAST(panel->supply); 632 596 633 597 panel->enable_gpio = devm_gpiod_get_optional(dev, "enable", 634 598 GPIOD_OUT_LOW); 635 599 if (IS_ERR(panel->enable_gpio)) 636 - return dev_err_probe(dev, PTR_ERR(panel->enable_gpio), 637 - "failed to request GPIO\n"); 600 + return dev_err_cast_probe(dev, panel->enable_gpio, 601 + "failed to request GPIO\n"); 638 602 639 603 err = 
of_drm_get_panel_orientation(dev->of_node, &panel->orientation); 640 604 if (err) { 641 605 dev_err(dev, "%pOF: failed to get orientation %d\n", dev->of_node, err); 642 - return err; 606 + return ERR_PTR(err); 643 607 } 644 608 645 609 ddc = of_parse_phandle(dev->of_node, "ddc-i2c-bus", 0); ··· 652 608 of_node_put(ddc); 653 609 654 610 if (!panel->ddc) 655 - return -EPROBE_DEFER; 611 + return ERR_PTR(-EPROBE_DEFER); 656 612 } 657 613 658 - if (desc == &panel_dpi) { 659 - /* Handle the generic panel-dpi binding */ 660 - err = panel_dpi_probe(dev, panel); 661 - if (err) 662 - goto free_ddc; 663 - desc = panel->desc; 664 - } else { 665 - if (!of_get_display_timing(dev->of_node, "panel-timing", &dt)) 666 - panel_simple_parse_panel_timing_node(dev, panel, &dt); 667 - } 614 + if (!of_device_is_compatible(dev->of_node, "panel-dpi") && 615 + !of_get_display_timing(dev->of_node, "panel-timing", &dt)) 616 + panel_simple_parse_panel_timing_node(dev, panel, &dt); 668 617 669 618 if (desc->connector_type == DRM_MODE_CONNECTOR_LVDS) { 670 619 /* Optional data-mapping property for overriding bus format */ ··· 740 703 741 704 drm_panel_add(&panel->base); 742 705 743 - return 0; 706 + return panel; 744 707 745 708 disable_pm_runtime: 746 709 pm_runtime_dont_use_autosuspend(dev); ··· 749 712 if (panel->ddc) 750 713 put_device(&panel->ddc->dev); 751 714 752 - return err; 715 + return ERR_PTR(err); 753 716 } 754 717 755 718 static void panel_simple_shutdown(struct device *dev) ··· 5404 5367 }, { 5405 5368 /* Must be the last entry */ 5406 5369 .compatible = "panel-dpi", 5407 - .data = &panel_dpi, 5370 + 5371 + /* 5372 + * Explicitly NULL, the panel_desc structure will be 5373 + * allocated by panel_dpi_probe(). 
5374 + */ 5375 + .data = NULL, 5408 5376 }, { 5409 5377 /* sentinel */ 5410 5378 } ··· 5418 5376 5419 5377 static int panel_simple_platform_probe(struct platform_device *pdev) 5420 5378 { 5421 - const struct panel_desc *desc; 5379 + struct panel_simple *panel; 5422 5380 5423 - desc = of_device_get_match_data(&pdev->dev); 5424 - if (!desc) 5425 - return -ENODEV; 5381 + panel = panel_simple_probe(&pdev->dev); 5382 + if (IS_ERR(panel)) 5383 + return PTR_ERR(panel); 5426 5384 5427 - return panel_simple_probe(&pdev->dev, desc); 5385 + return 0; 5428 5386 } 5429 5387 5430 5388 static void panel_simple_platform_remove(struct platform_device *pdev) ··· 5452 5410 .probe = panel_simple_platform_probe, 5453 5411 .remove = panel_simple_platform_remove, 5454 5412 .shutdown = panel_simple_platform_shutdown, 5455 - }; 5456 - 5457 - struct panel_desc_dsi { 5458 - struct panel_desc desc; 5459 - 5460 - unsigned long flags; 5461 - enum mipi_dsi_pixel_format format; 5462 - unsigned int lanes; 5463 5413 }; 5464 5414 5465 5415 static const struct drm_display_mode auo_b080uan01_mode = { ··· 5687 5653 static int panel_simple_dsi_probe(struct mipi_dsi_device *dsi) 5688 5654 { 5689 5655 const struct panel_desc_dsi *desc; 5656 + struct panel_simple *panel; 5690 5657 int err; 5691 5658 5692 - desc = of_device_get_match_data(&dsi->dev); 5693 - if (!desc) 5694 - return -ENODEV; 5659 + panel = panel_simple_probe(&dsi->dev); 5660 + if (IS_ERR(panel)) 5661 + return PTR_ERR(panel); 5695 5662 5696 - err = panel_simple_probe(&dsi->dev, &desc->desc); 5697 - if (err < 0) 5698 - return err; 5699 - 5663 + desc = container_of(panel->desc, struct panel_desc_dsi, desc); 5700 5664 dsi->mode_flags = desc->flags; 5701 5665 dsi->format = desc->format; 5702 5666 dsi->lanes = desc->lanes;
+9 -4
drivers/gpu/drm/sysfb/vesadrm.c
··· 362 362 363 363 if (!__screen_info_vbe_mode_nonvga(si)) { 364 364 vesa->cmap_write = vesadrm_vga_cmap_write; 365 - #if defined(CONFIG_X86_32) 366 365 } else { 366 + #if defined(CONFIG_X86_32) 367 367 phys_addr_t pmi_base = __screen_info_vesapm_info_base(si); 368 - const u16 *pmi_addr = phys_to_virt(pmi_base); 369 368 370 - vesa->pmi.PrimaryPalette = (u8 *)pmi_addr + pmi_addr[2]; 371 - vesa->cmap_write = vesadrm_pmi_cmap_write; 369 + if (pmi_base) { 370 + const u16 *pmi_addr = phys_to_virt(pmi_base); 371 + 372 + vesa->pmi.PrimaryPalette = (u8 *)pmi_addr + pmi_addr[2]; 373 + vesa->cmap_write = vesadrm_pmi_cmap_write; 374 + } else 372 375 #endif 376 + if (format->is_color_indexed) 377 + drm_warn(dev, "hardware palette is unchangeable, colors may be incorrect\n"); 373 378 } 374 379 375 380 #ifdef CONFIG_X86
+7 -6
drivers/gpu/drm/ttm/ttm_bo_util.c
··· 254 254 ret = dma_resv_trylock(&fbo->base.base._resv); 255 255 WARN_ON(!ret); 256 256 257 + ret = dma_resv_reserve_fences(&fbo->base.base._resv, 1); 258 + if (ret) { 259 + dma_resv_unlock(&fbo->base.base._resv); 260 + kfree(fbo); 261 + return ret; 262 + } 263 + 257 264 if (fbo->base.resource) { 258 265 ttm_resource_set_bo(fbo->base.resource, &fbo->base); 259 266 bo->resource = NULL; 260 267 ttm_bo_set_bulk_move(&fbo->base, NULL); 261 268 } else { 262 269 fbo->base.bulk_move = NULL; 263 - } 264 - 265 - ret = dma_resv_reserve_fences(&fbo->base.base._resv, 1); 266 - if (ret) { 267 - kfree(fbo); 268 - return ret; 269 270 } 270 271 271 272 ttm_bo_get(bo);
+8
drivers/gpu/drm/v3d/v3d_drv.h
··· 101 101 V3D_GEN_71 = 71, 102 102 }; 103 103 104 + enum v3d_irq { 105 + V3D_CORE_IRQ, 106 + V3D_HUB_IRQ, 107 + V3D_MAX_IRQS, 108 + }; 109 + 104 110 struct v3d_dev { 105 111 struct drm_device drm; 106 112 ··· 117 111 int rev; 118 112 119 113 bool single_irq_line; 114 + 115 + int irq[V3D_MAX_IRQS]; 120 116 121 117 struct v3d_perfmon_info perfmon_info; 122 118
+2
drivers/gpu/drm/v3d/v3d_gem.c
··· 134 134 if (false) 135 135 v3d_idle_axi(v3d, 0); 136 136 137 + v3d_irq_disable(v3d); 138 + 137 139 v3d_idle_gca(v3d); 138 140 v3d_reset_sms(v3d); 139 141 v3d_reset_v3d(v3d);
+27 -10
drivers/gpu/drm/v3d/v3d_irq.c
··· 260 260 int 261 261 v3d_irq_init(struct v3d_dev *v3d) 262 262 { 263 - int irq1, ret, core; 263 + int irq, ret, core; 264 264 265 265 INIT_WORK(&v3d->overflow_mem_work, v3d_overflow_mem_work); 266 266 ··· 271 271 V3D_CORE_WRITE(core, V3D_CTL_INT_CLR, V3D_CORE_IRQS(v3d->ver)); 272 272 V3D_WRITE(V3D_HUB_INT_CLR, V3D_HUB_IRQS(v3d->ver)); 273 273 274 - irq1 = platform_get_irq_optional(v3d_to_pdev(v3d), 1); 275 - if (irq1 == -EPROBE_DEFER) 276 - return irq1; 277 - if (irq1 > 0) { 278 - ret = devm_request_irq(v3d->drm.dev, irq1, 274 + irq = platform_get_irq_optional(v3d_to_pdev(v3d), 1); 275 + if (irq == -EPROBE_DEFER) 276 + return irq; 277 + if (irq > 0) { 278 + v3d->irq[V3D_CORE_IRQ] = irq; 279 + 280 + ret = devm_request_irq(v3d->drm.dev, v3d->irq[V3D_CORE_IRQ], 279 281 v3d_irq, IRQF_SHARED, 280 282 "v3d_core0", v3d); 281 283 if (ret) 282 284 goto fail; 283 - ret = devm_request_irq(v3d->drm.dev, 284 - platform_get_irq(v3d_to_pdev(v3d), 0), 285 + 286 + irq = platform_get_irq(v3d_to_pdev(v3d), 0); 287 + if (irq < 0) 288 + return irq; 289 + v3d->irq[V3D_HUB_IRQ] = irq; 290 + 291 + ret = devm_request_irq(v3d->drm.dev, v3d->irq[V3D_HUB_IRQ], 285 292 v3d_hub_irq, IRQF_SHARED, 286 293 "v3d_hub", v3d); 287 294 if (ret) ··· 296 289 } else { 297 290 v3d->single_irq_line = true; 298 291 299 - ret = devm_request_irq(v3d->drm.dev, 300 - platform_get_irq(v3d_to_pdev(v3d), 0), 292 + irq = platform_get_irq(v3d_to_pdev(v3d), 0); 293 + if (irq < 0) 294 + return irq; 295 + v3d->irq[V3D_CORE_IRQ] = irq; 296 + 297 + ret = devm_request_irq(v3d->drm.dev, v3d->irq[V3D_CORE_IRQ], 301 298 v3d_irq, IRQF_SHARED, 302 299 "v3d", v3d); 303 300 if (ret) ··· 341 330 for (core = 0; core < v3d->cores; core++) 342 331 V3D_CORE_WRITE(core, V3D_CTL_INT_MSK_SET, ~0); 343 332 V3D_WRITE(V3D_HUB_INT_MSK_SET, ~0); 333 + 334 + /* Finish any interrupt handler still in flight. 
*/ 335 + for (int i = 0; i < V3D_MAX_IRQS; i++) { 336 + if (v3d->irq[i]) 337 + synchronize_irq(v3d->irq[i]); 338 + } 344 339 345 340 /* Clear any pending interrupts we might have left. */ 346 341 for (core = 0; core < v3d->cores; core++)
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 749 749 dev->fifo_mem = devm_memremap(dev->drm.dev, 750 750 fifo_start, 751 751 fifo_size, 752 - MEMREMAP_WB); 752 + MEMREMAP_WB | MEMREMAP_DEC); 753 753 754 754 if (IS_ERR(dev->fifo_mem)) { 755 755 drm_err(&dev->drm,
+5 -3
drivers/gpu/drm/xe/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 config DRM_XE 3 - tristate "Intel Xe Graphics" 4 - depends on DRM && PCI && (m || (y && KUNIT=y)) 3 + tristate "Intel Xe2 Graphics" 4 + depends on DRM && PCI 5 + depends on KUNIT || !KUNIT 5 6 depends on INTEL_VSEC || !INTEL_VSEC 6 7 depends on X86_PLATFORM_DEVICES || !(X86 && ACPI) 7 8 select INTERVAL_TREE ··· 47 46 select AUXILIARY_BUS 48 47 select HMM_MIRROR 49 48 help 50 - Experimental driver for Intel Xe series GPUs 49 + Driver for Intel Xe2 series GPUs and later. Experimental support 50 + for Xe series is also available. 51 51 52 52 If "M" is selected, the module will be called xe. 53 53
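The new `depends on KUNIT || !KUNIT` line replaces the older `(m || (y && KUNIT=y))` expression with the standard Kconfig idiom for optional tristate dependencies; the same shape already appears on the `INTEL_VSEC` line. For a tristate `BAR`, the expression forbids only the one broken combination — the depending symbol builtin while `BAR` is a module. A generic fragment, with hypothetical symbol names:

```kconfig
config FOO
	tristate "Example driver"
	# "BAR || !BAR" is not a tautology for a tristate BAR: it caps FOO's
	# value at BAR's unless BAR is disabled entirely.
	# Allowed:  FOO=y BAR=y, FOO=m BAR=m, FOO=m BAR=y, any FOO with BAR=n
	# Refused:  FOO=y BAR=m  (builtin FOO cannot call BAR's module code)
	depends on BAR || !BAR
```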
+44 -28
drivers/gpu/drm/xe/xe_device.c
··· 40 40 #include "xe_gt_printk.h" 41 41 #include "xe_gt_sriov_vf.h" 42 42 #include "xe_guc.h" 43 + #include "xe_guc_pc.h" 43 44 #include "xe_hw_engine_group.h" 44 45 #include "xe_hwmon.h" 45 46 #include "xe_irq.h" ··· 987 986 xe_mmio_write32(xe_root_tile_mmio(xe), VF_CAP_REG, 0); 988 987 } 989 988 990 - /** 991 - * xe_device_td_flush() - Flush transient L3 cache entries 992 - * @xe: The device 993 - * 994 - * Display engine has direct access to memory and is never coherent with L3/L4 995 - * caches (or CPU caches), however KMD is responsible for specifically flushing 996 - * transient L3 GPU cache entries prior to the flip sequence to ensure scanout 997 - * can happen from such a surface without seeing corruption. 998 - * 999 - * Display surfaces can be tagged as transient by mapping it using one of the 1000 - * various L3:XD PAT index modes on Xe2. 1001 - * 1002 - * Note: On non-discrete xe2 platforms, like LNL, the entire L3 cache is flushed 1003 - * at the end of each submission via PIPE_CONTROL for compute/render, since SA 1004 - * Media is not coherent with L3 and we want to support render-vs-media 1005 - * usescases. For other engines like copy/blt the HW internally forces uncached 1006 - * behaviour, hence why we can skip the TDF on such platforms. 989 + /* 990 + * Issue a TRANSIENT_FLUSH_REQUEST and wait for completion on each gt. 
1007 991 */ 1008 - void xe_device_td_flush(struct xe_device *xe) 992 + static void tdf_request_sync(struct xe_device *xe) 1009 993 { 1010 - struct xe_gt *gt; 1011 994 unsigned int fw_ref; 995 + struct xe_gt *gt; 1012 996 u8 id; 1013 - 1014 - if (!IS_DGFX(xe) || GRAPHICS_VER(xe) < 20) 1015 - return; 1016 - 1017 - if (XE_WA(xe_root_mmio_gt(xe), 16023588340)) { 1018 - xe_device_l2_flush(xe); 1019 - return; 1020 - } 1021 997 1022 998 for_each_gt(gt, xe, id) { 1023 999 if (xe_gt_is_media_type(gt)) ··· 1005 1027 return; 1006 1028 1007 1029 xe_mmio_write32(&gt->mmio, XE2_TDF_CTRL, TRANSIENT_FLUSH_REQUEST); 1030 + 1008 1031 /* 1009 1032 * FIXME: We can likely do better here with our choice of 1010 1033 * timeout. Currently we just assume the worst case, i.e. 150us, ··· 1036 1057 return; 1037 1058 1038 1059 spin_lock(&gt->global_invl_lock); 1039 - xe_mmio_write32(&gt->mmio, XE2_GLOBAL_INVAL, 0x1); 1040 1060 1061 + xe_mmio_write32(&gt->mmio, XE2_GLOBAL_INVAL, 0x1); 1041 1062 if (xe_mmio_wait32(&gt->mmio, XE2_GLOBAL_INVAL, 0x1, 0x0, 500, NULL, true)) 1042 1063 xe_gt_err_once(gt, "Global invalidation timeout\n"); 1064 + 1043 1065 spin_unlock(&gt->global_invl_lock); 1044 1066 1045 1067 xe_force_wake_put(gt_to_fw(gt), fw_ref); 1068 + } 1069 + 1070 + /** 1071 + * xe_device_td_flush() - Flush transient L3 cache entries 1072 + * @xe: The device 1073 + * 1074 + * Display engine has direct access to memory and is never coherent with L3/L4 1075 + * caches (or CPU caches), however KMD is responsible for specifically flushing 1076 + * transient L3 GPU cache entries prior to the flip sequence to ensure scanout 1077 + * can happen from such a surface without seeing corruption. 1078 + * 1079 + * Display surfaces can be tagged as transient by mapping it using one of the 1080 + * various L3:XD PAT index modes on Xe2. 
1081 + * 1082 + * Note: On non-discrete xe2 platforms, like LNL, the entire L3 cache is flushed 1083 + * at the end of each submission via PIPE_CONTROL for compute/render, since SA 1084 + * Media is not coherent with L3 and we want to support render-vs-media 1085 + * usescases. For other engines like copy/blt the HW internally forces uncached 1086 + * behaviour, hence why we can skip the TDF on such platforms. 1087 + */ 1088 + void xe_device_td_flush(struct xe_device *xe) 1089 + { 1090 + struct xe_gt *root_gt; 1091 + 1092 + if (!IS_DGFX(xe) || GRAPHICS_VER(xe) < 20) 1093 + return; 1094 + 1095 + root_gt = xe_root_mmio_gt(xe); 1096 + if (XE_WA(root_gt, 16023588340)) { 1097 + /* A transient flush is not sufficient: flush the L2 */ 1098 + xe_device_l2_flush(xe); 1099 + } else { 1100 + xe_guc_pc_apply_flush_freq_limit(&root_gt->uc.guc.pc); 1101 + tdf_request_sync(xe); 1102 + xe_guc_pc_remove_flush_freq_limit(&root_gt->uc.guc.pc); 1103 + } 1046 1104 } 1047 1105 1048 1106 u32 xe_device_ccs_bytes(struct xe_device *xe, u64 size)
+1 -1
drivers/gpu/drm/xe/xe_drv.h
··· 9 9 #include <drm/drm_drv.h> 10 10 11 11 #define DRIVER_NAME "xe" 12 - #define DRIVER_DESC "Intel Xe Graphics" 12 + #define DRIVER_DESC "Intel Xe2 Graphics" 13 13 14 14 /* Interface history: 15 15 *
+217 -71
drivers/gpu/drm/xe/xe_guc_pc.c
··· 5 5 6 6 #include "xe_guc_pc.h" 7 7 8 + #include <linux/cleanup.h> 8 9 #include <linux/delay.h> 10 + #include <linux/jiffies.h> 9 11 #include <linux/ktime.h> 12 + #include <linux/wait_bit.h> 10 13 11 14 #include <drm/drm_managed.h> 12 15 #include <drm/drm_print.h> ··· 54 51 55 52 #define LNL_MERT_FREQ_CAP 800 56 53 #define BMG_MERT_FREQ_CAP 2133 54 + #define BMG_MIN_FREQ 1200 55 + #define BMG_MERT_FLUSH_FREQ_CAP 2600 57 56 58 57 #define SLPC_RESET_TIMEOUT_MS 5 /* roughly 5ms, but no need for precision */ 59 58 #define SLPC_RESET_EXTENDED_TIMEOUT_MS 1000 /* To be used only at pc_start */ 59 + #define SLPC_ACT_FREQ_TIMEOUT_MS 100 60 60 61 61 /** 62 62 * DOC: GuC Power Conservation (PC) ··· 147 141 return -ETIMEDOUT; 148 142 } 149 143 144 + static int wait_for_flush_complete(struct xe_guc_pc *pc) 145 + { 146 + const unsigned long timeout = msecs_to_jiffies(30); 147 + 148 + if (!wait_var_event_timeout(&pc->flush_freq_limit, 149 + !atomic_read(&pc->flush_freq_limit), 150 + timeout)) 151 + return -ETIMEDOUT; 152 + 153 + return 0; 154 + } 155 + 156 + static int wait_for_act_freq_limit(struct xe_guc_pc *pc, u32 freq) 157 + { 158 + int timeout_us = SLPC_ACT_FREQ_TIMEOUT_MS * USEC_PER_MSEC; 159 + int slept, wait = 10; 160 + 161 + for (slept = 0; slept < timeout_us;) { 162 + if (xe_guc_pc_get_act_freq(pc) <= freq) 163 + return 0; 164 + 165 + usleep_range(wait, wait << 1); 166 + slept += wait; 167 + wait <<= 1; 168 + if (slept + wait > timeout_us) 169 + wait = timeout_us - slept; 170 + } 171 + 172 + return -ETIMEDOUT; 173 + } 150 174 static int pc_action_reset(struct xe_guc_pc *pc) 151 175 { 152 176 struct xe_guc_ct *ct = pc_to_ct(pc); ··· 589 553 return pc->rpn_freq; 590 554 } 591 555 556 + static int xe_guc_pc_get_min_freq_locked(struct xe_guc_pc *pc, u32 *freq) 557 + { 558 + int ret; 559 + 560 + lockdep_assert_held(&pc->freq_lock); 561 + 562 + /* Might be in the middle of a gt reset */ 563 + if (!pc->freq_ready) 564 + return -EAGAIN; 565 + 566 + ret = 
pc_action_query_task_state(pc); 567 + if (ret) 568 + return ret; 569 + 570 + *freq = pc_get_min_freq(pc); 571 + 572 + return 0; 573 + } 574 + 592 575 /** 593 576 * xe_guc_pc_get_min_freq - Get the min operational frequency 594 577 * @pc: The GuC PC ··· 618 563 */ 619 564 int xe_guc_pc_get_min_freq(struct xe_guc_pc *pc, u32 *freq) 620 565 { 566 + guard(mutex)(&pc->freq_lock); 567 + 568 + return xe_guc_pc_get_min_freq_locked(pc, freq); 569 + } 570 + 571 + static int xe_guc_pc_set_min_freq_locked(struct xe_guc_pc *pc, u32 freq) 572 + { 621 573 int ret; 622 574 623 - xe_device_assert_mem_access(pc_to_xe(pc)); 575 + lockdep_assert_held(&pc->freq_lock); 624 576 625 - mutex_lock(&pc->freq_lock); 626 - if (!pc->freq_ready) { 627 - /* Might be in the middle of a gt reset */ 628 - ret = -EAGAIN; 629 - goto out; 630 - } 577 + /* Might be in the middle of a gt reset */ 578 + if (!pc->freq_ready) 579 + return -EAGAIN; 631 580 632 - ret = pc_action_query_task_state(pc); 581 + ret = pc_set_min_freq(pc, freq); 633 582 if (ret) 634 - goto out; 583 + return ret; 635 584 636 - *freq = pc_get_min_freq(pc); 585 + pc->user_requested_min = freq; 637 586 638 - out: 639 - mutex_unlock(&pc->freq_lock); 640 - return ret; 587 + return 0; 641 588 } 642 589 643 590 /** ··· 653 596 */ 654 597 int xe_guc_pc_set_min_freq(struct xe_guc_pc *pc, u32 freq) 655 598 { 599 + guard(mutex)(&pc->freq_lock); 600 + 601 + return xe_guc_pc_set_min_freq_locked(pc, freq); 602 + } 603 + 604 + static int xe_guc_pc_get_max_freq_locked(struct xe_guc_pc *pc, u32 *freq) 605 + { 656 606 int ret; 657 607 658 - mutex_lock(&pc->freq_lock); 659 - if (!pc->freq_ready) { 660 - /* Might be in the middle of a gt reset */ 661 - ret = -EAGAIN; 662 - goto out; 663 - } 608 + lockdep_assert_held(&pc->freq_lock); 664 609 665 - ret = pc_set_min_freq(pc, freq); 610 + /* Might be in the middle of a gt reset */ 611 + if (!pc->freq_ready) 612 + return -EAGAIN; 613 + 614 + ret = pc_action_query_task_state(pc); 666 615 if (ret) 667 - goto 
out; 616 + return ret; 668 617 669 - pc->user_requested_min = freq; 618 + *freq = pc_get_max_freq(pc); 670 619 671 - out: 672 - mutex_unlock(&pc->freq_lock); 673 - return ret; 620 + return 0; 674 621 } 675 622 676 623 /** ··· 687 626 */ 688 627 int xe_guc_pc_get_max_freq(struct xe_guc_pc *pc, u32 *freq) 689 628 { 629 + guard(mutex)(&pc->freq_lock); 630 + 631 + return xe_guc_pc_get_max_freq_locked(pc, freq); 632 + } 633 + 634 + static int xe_guc_pc_set_max_freq_locked(struct xe_guc_pc *pc, u32 freq) 635 + { 690 636 int ret; 691 637 692 - mutex_lock(&pc->freq_lock); 693 - if (!pc->freq_ready) { 694 - /* Might be in the middle of a gt reset */ 695 - ret = -EAGAIN; 696 - goto out; 697 - } 638 + lockdep_assert_held(&pc->freq_lock); 698 639 699 - ret = pc_action_query_task_state(pc); 640 + /* Might be in the middle of a gt reset */ 641 + if (!pc->freq_ready) 642 + return -EAGAIN; 643 + 644 + ret = pc_set_max_freq(pc, freq); 700 645 if (ret) 701 - goto out; 646 + return ret; 702 647 703 - *freq = pc_get_max_freq(pc); 648 + pc->user_requested_max = freq; 704 649 705 - out: 706 - mutex_unlock(&pc->freq_lock); 707 - return ret; 650 + return 0; 708 651 } 709 652 710 653 /** ··· 722 657 */ 723 658 int xe_guc_pc_set_max_freq(struct xe_guc_pc *pc, u32 freq) 724 659 { 725 - int ret; 726 - 727 - mutex_lock(&pc->freq_lock); 728 - if (!pc->freq_ready) { 729 - /* Might be in the middle of a gt reset */ 730 - ret = -EAGAIN; 731 - goto out; 660 + if (XE_WA(pc_to_gt(pc), 22019338487)) { 661 + if (wait_for_flush_complete(pc) != 0) 662 + return -EAGAIN; 732 663 } 733 664 734 - ret = pc_set_max_freq(pc, freq); 735 - if (ret) 736 - goto out; 665 + guard(mutex)(&pc->freq_lock); 737 666 738 - pc->user_requested_max = freq; 739 - 740 - out: 741 - mutex_unlock(&pc->freq_lock); 742 - return ret; 667 + return xe_guc_pc_set_max_freq_locked(pc, freq); 743 668 } 744 669 745 670 /** ··· 872 817 873 818 static int pc_adjust_freq_bounds(struct xe_guc_pc *pc) 874 819 { 820 + struct xe_tile *tile = 
gt_to_tile(pc_to_gt(pc)); 875 821 int ret; 876 822 877 823 lockdep_assert_held(&pc->freq_lock); ··· 899 843 if (pc_get_min_freq(pc) > pc->rp0_freq) 900 844 ret = pc_set_min_freq(pc, pc->rp0_freq); 901 845 846 + if (XE_WA(tile->primary_gt, 14022085890)) 847 + ret = pc_set_min_freq(pc, max(BMG_MIN_FREQ, pc_get_min_freq(pc))); 848 + 902 849 out: 903 850 return ret; 904 851 } ··· 927 868 return ret; 928 869 } 929 870 930 - static int pc_set_mert_freq_cap(struct xe_guc_pc *pc) 871 + static bool needs_flush_freq_limit(struct xe_guc_pc *pc) 931 872 { 873 + struct xe_gt *gt = pc_to_gt(pc); 874 + 875 + return XE_WA(gt, 22019338487) && 876 + pc->rp0_freq > BMG_MERT_FLUSH_FREQ_CAP; 877 + } 878 + 879 + /** 880 + * xe_guc_pc_apply_flush_freq_limit() - Limit max GT freq during L2 flush 881 + * @pc: the xe_guc_pc object 882 + * 883 + * As per the WA, reduce max GT frequency during L2 cache flush 884 + */ 885 + void xe_guc_pc_apply_flush_freq_limit(struct xe_guc_pc *pc) 886 + { 887 + struct xe_gt *gt = pc_to_gt(pc); 888 + u32 max_freq; 889 + int ret; 890 + 891 + if (!needs_flush_freq_limit(pc)) 892 + return; 893 + 894 + guard(mutex)(&pc->freq_lock); 895 + 896 + ret = xe_guc_pc_get_max_freq_locked(pc, &max_freq); 897 + if (!ret && max_freq > BMG_MERT_FLUSH_FREQ_CAP) { 898 + ret = pc_set_max_freq(pc, BMG_MERT_FLUSH_FREQ_CAP); 899 + if (ret) { 900 + xe_gt_err_once(gt, "Failed to cap max freq on flush to %u, %pe\n", 901 + BMG_MERT_FLUSH_FREQ_CAP, ERR_PTR(ret)); 902 + return; 903 + } 904 + 905 + atomic_set(&pc->flush_freq_limit, 1); 906 + 907 + /* 908 + * If user has previously changed max freq, stash that value to 909 + * restore later, otherwise use the current max. New user 910 + * requests wait on flush. 
911 + */ 912 + if (pc->user_requested_max != 0) 913 + pc->stashed_max_freq = pc->user_requested_max; 914 + else 915 + pc->stashed_max_freq = max_freq; 916 + } 917 + 918 + /* 919 + * Wait for actual freq to go below the flush cap: even if the previous 920 + * max was below cap, the current one might still be above it 921 + */ 922 + ret = wait_for_act_freq_limit(pc, BMG_MERT_FLUSH_FREQ_CAP); 923 + if (ret) 924 + xe_gt_err_once(gt, "Actual freq did not reduce to %u, %pe\n", 925 + BMG_MERT_FLUSH_FREQ_CAP, ERR_PTR(ret)); 926 + } 927 + 928 + /** 929 + * xe_guc_pc_remove_flush_freq_limit() - Remove max GT freq limit after L2 flush completes. 930 + * @pc: the xe_guc_pc object 931 + * 932 + * Retrieve the previous GT max frequency value. 933 + */ 934 + void xe_guc_pc_remove_flush_freq_limit(struct xe_guc_pc *pc) 935 + { 936 + struct xe_gt *gt = pc_to_gt(pc); 932 937 int ret = 0; 933 938 934 - if (XE_WA(pc_to_gt(pc), 22019338487)) { 935 - /* 936 - * Get updated min/max and stash them. 937 - */ 938 - ret = xe_guc_pc_get_min_freq(pc, &pc->stashed_min_freq); 939 - if (!ret) 940 - ret = xe_guc_pc_get_max_freq(pc, &pc->stashed_max_freq); 941 - if (ret) 942 - return ret; 939 + if (!needs_flush_freq_limit(pc)) 940 + return; 943 941 944 - /* 945 - * Ensure min and max are bound by MERT_FREQ_CAP until driver loads. 
946 - */ 947 - mutex_lock(&pc->freq_lock); 948 - ret = pc_set_min_freq(pc, min(pc->rpe_freq, pc_max_freq_cap(pc))); 949 - if (!ret) 950 - ret = pc_set_max_freq(pc, min(pc->rp0_freq, pc_max_freq_cap(pc))); 951 - mutex_unlock(&pc->freq_lock); 952 - } 942 + if (!atomic_read(&pc->flush_freq_limit)) 943 + return; 944 + 945 + mutex_lock(&pc->freq_lock); 946 + 947 + ret = pc_set_max_freq(&gt->uc.guc.pc, pc->stashed_max_freq); 948 + if (ret) 949 + xe_gt_err_once(gt, "Failed to restore max freq %u:%d", 950 + pc->stashed_max_freq, ret); 951 + 952 + atomic_set(&pc->flush_freq_limit, 0); 953 + mutex_unlock(&pc->freq_lock); 954 + wake_up_var(&pc->flush_freq_limit); 955 + } 956 + 957 + static int pc_set_mert_freq_cap(struct xe_guc_pc *pc) 958 + { 959 + int ret; 960 + 961 + if (!XE_WA(pc_to_gt(pc), 22019338487)) 962 + return 0; 963 + 964 + guard(mutex)(&pc->freq_lock); 965 + 966 + /* 967 + * Get updated min/max and stash them. 968 + */ 969 + ret = xe_guc_pc_get_min_freq_locked(pc, &pc->stashed_min_freq); 970 + if (!ret) 971 + ret = xe_guc_pc_get_max_freq_locked(pc, &pc->stashed_max_freq); 972 + if (ret) 973 + return ret; 974 + 975 + /* 976 + * Ensure min and max are bound by MERT_FREQ_CAP until driver loads. 977 + */ 978 + ret = pc_set_min_freq(pc, min(pc->rpe_freq, pc_max_freq_cap(pc))); 979 + if (!ret) 980 + ret = pc_set_max_freq(pc, min(pc->rp0_freq, pc_max_freq_cap(pc))); 953 981 954 982 return ret; 955 983 }
+2
drivers/gpu/drm/xe/xe_guc_pc.h
··· 38 38 void xe_guc_pc_init_early(struct xe_guc_pc *pc); 39 39 int xe_guc_pc_restore_stashed_freq(struct xe_guc_pc *pc); 40 40 void xe_guc_pc_raise_unslice(struct xe_guc_pc *pc); 41 + void xe_guc_pc_apply_flush_freq_limit(struct xe_guc_pc *pc); 42 + void xe_guc_pc_remove_flush_freq_limit(struct xe_guc_pc *pc); 41 43 42 44 #endif /* _XE_GUC_PC_H_ */
+2
drivers/gpu/drm/xe/xe_guc_pc_types.h
··· 15 15 struct xe_guc_pc { 16 16 /** @bo: GGTT buffer object that is shared with GuC PC */ 17 17 struct xe_bo *bo; 18 + /** @flush_freq_limit: 1 when max freq changes are limited by driver */ 19 + atomic_t flush_freq_limit; 18 20 /** @rp0_freq: HW RP0 frequency - The Maximum one */ 19 21 u32 rp0_freq; 20 22 /** @rpa_freq: HW RPa frequency - The Achievable one */
+6 -4
drivers/gpu/drm/xe/xe_guc_submit.c
··· 891 891 struct xe_exec_queue *q = ge->q; 892 892 struct xe_guc *guc = exec_queue_to_guc(q); 893 893 struct xe_gpu_scheduler *sched = &ge->sched; 894 - bool wedged; 894 + bool wedged = false; 895 895 896 896 xe_gt_assert(guc_to_gt(guc), xe_exec_queue_is_lr(q)); 897 897 trace_xe_exec_queue_lr_cleanup(q); 898 898 899 - wedged = guc_submit_hint_wedged(exec_queue_to_guc(q)); 899 + if (!exec_queue_killed(q)) 900 + wedged = guc_submit_hint_wedged(exec_queue_to_guc(q)); 900 901 901 902 /* Kill the run_job / process_msg entry points */ 902 903 xe_sched_submission_stop(sched); ··· 1071 1070 int err = -ETIME; 1072 1071 pid_t pid = -1; 1073 1072 int i = 0; 1074 - bool wedged, skip_timeout_check; 1073 + bool wedged = false, skip_timeout_check; 1075 1074 1076 1075 /* 1077 1076 * TDR has fired before free job worker. Common if exec queue ··· 1117 1116 * doesn't work for SRIOV. For now assuming timeouts in wedged mode are 1118 1117 * genuine timeouts. 1119 1118 */ 1120 - wedged = guc_submit_hint_wedged(exec_queue_to_guc(q)); 1119 + if (!exec_queue_killed(q)) 1120 + wedged = guc_submit_hint_wedged(exec_queue_to_guc(q)); 1121 1121 1122 1122 /* Engine state now stable, disable scheduling to check timestamp */ 1123 1123 if (!wedged && exec_queue_registered(q)) {
+19 -18
drivers/gpu/drm/xe/xe_lrc.c
··· 40 40 41 41 #define LRC_PPHWSP_SIZE SZ_4K 42 42 #define LRC_INDIRECT_RING_STATE_SIZE SZ_4K 43 + #define LRC_WA_BB_SIZE SZ_4K 43 44 44 45 static struct xe_device * 45 46 lrc_to_xe(struct xe_lrc *lrc) ··· 911 910 { 912 911 xe_hw_fence_ctx_finish(&lrc->fence_ctx); 913 912 xe_bo_unpin_map_no_vm(lrc->bo); 914 - xe_bo_unpin_map_no_vm(lrc->bb_per_ctx_bo); 913 + } 914 + 915 + static size_t wa_bb_offset(struct xe_lrc *lrc) 916 + { 917 + return lrc->bo->size - LRC_WA_BB_SIZE; 915 918 } 916 919 917 920 /* ··· 948 943 #define CONTEXT_ACTIVE 1ULL 949 944 static int xe_lrc_setup_utilization(struct xe_lrc *lrc) 950 945 { 946 + const size_t max_size = LRC_WA_BB_SIZE; 951 947 u32 *cmd, *buf = NULL; 952 948 953 - if (lrc->bb_per_ctx_bo->vmap.is_iomem) { 954 - buf = kmalloc(lrc->bb_per_ctx_bo->size, GFP_KERNEL); 949 + if (lrc->bo->vmap.is_iomem) { 950 + buf = kmalloc(max_size, GFP_KERNEL); 955 951 if (!buf) 956 952 return -ENOMEM; 957 953 cmd = buf; 958 954 } else { 959 - cmd = lrc->bb_per_ctx_bo->vmap.vaddr; 955 + cmd = lrc->bo->vmap.vaddr + wa_bb_offset(lrc); 960 956 } 961 957 962 958 *cmd++ = MI_STORE_REGISTER_MEM | MI_SRM_USE_GGTT | MI_SRM_ADD_CS_OFFSET; ··· 980 974 *cmd++ = MI_BATCH_BUFFER_END; 981 975 982 976 if (buf) { 983 - xe_map_memcpy_to(gt_to_xe(lrc->gt), &lrc->bb_per_ctx_bo->vmap, 0, 984 - buf, (cmd - buf) * sizeof(*cmd)); 977 + xe_map_memcpy_to(gt_to_xe(lrc->gt), &lrc->bo->vmap, 978 + wa_bb_offset(lrc), buf, 979 + (cmd - buf) * sizeof(*cmd)); 985 980 kfree(buf); 986 981 } 987 982 988 - xe_lrc_write_ctx_reg(lrc, CTX_BB_PER_CTX_PTR, 989 - xe_bo_ggtt_addr(lrc->bb_per_ctx_bo) | 1); 983 + xe_lrc_write_ctx_reg(lrc, CTX_BB_PER_CTX_PTR, xe_bo_ggtt_addr(lrc->bo) + 984 + wa_bb_offset(lrc) + 1); 990 985 991 986 return 0; 992 987 } ··· 1025 1018 * FIXME: Perma-pinning LRC as we don't yet support moving GGTT address 1026 1019 * via VM bind calls. 
1027 1020 */ 1028 - lrc->bo = xe_bo_create_pin_map(xe, tile, NULL, lrc_size, 1021 + lrc->bo = xe_bo_create_pin_map(xe, tile, NULL, 1022 + lrc_size + LRC_WA_BB_SIZE, 1029 1023 ttm_bo_type_kernel, 1030 1024 bo_flags); 1031 1025 if (IS_ERR(lrc->bo)) 1032 1026 return PTR_ERR(lrc->bo); 1033 - 1034 - lrc->bb_per_ctx_bo = xe_bo_create_pin_map(xe, tile, NULL, SZ_4K, 1035 - ttm_bo_type_kernel, 1036 - bo_flags); 1037 - if (IS_ERR(lrc->bb_per_ctx_bo)) { 1038 - err = PTR_ERR(lrc->bb_per_ctx_bo); 1039 - goto err_lrc_finish; 1040 - } 1041 1027 1042 1028 lrc->size = lrc_size; 1043 1029 lrc->ring.size = ring_size; ··· 1819 1819 snapshot->seqno = xe_lrc_seqno(lrc); 1820 1820 snapshot->lrc_bo = xe_bo_get(lrc->bo); 1821 1821 snapshot->lrc_offset = xe_lrc_pphwsp_offset(lrc); 1822 - snapshot->lrc_size = lrc->bo->size - snapshot->lrc_offset; 1822 + snapshot->lrc_size = lrc->bo->size - snapshot->lrc_offset - 1823 + LRC_WA_BB_SIZE; 1823 1824 snapshot->lrc_snapshot = NULL; 1824 1825 snapshot->ctx_timestamp = lower_32_bits(xe_lrc_ctx_timestamp(lrc)); 1825 1826 snapshot->ctx_job_timestamp = xe_lrc_ctx_job_timestamp(lrc);
-3
drivers/gpu/drm/xe/xe_lrc_types.h
··· 53 53 54 54 /** @ctx_timestamp: readout value of CTX_TIMESTAMP on last update */ 55 55 u64 ctx_timestamp; 56 - 57 - /** @bb_per_ctx_bo: buffer object for per context batch wa buffer */ 58 - struct xe_bo *bb_per_ctx_bo; 59 56 }; 60 57 61 58 struct xe_lrc_snapshot;
+10 -8
drivers/gpu/drm/xe/xe_migrate.c
··· 82 82 * of the instruction. Subtracting the instruction header (1 dword) and 83 83 * address (2 dwords), that leaves 0x3FD dwords (0x1FE qwords) for PTE values. 84 84 */ 85 - #define MAX_PTE_PER_SDI 0x1FE 85 + #define MAX_PTE_PER_SDI 0x1FEU 86 86 87 87 /** 88 88 * xe_tile_migrate_exec_queue() - Get this tile's migrate exec queue. ··· 1553 1553 u64 entries = DIV_U64_ROUND_UP(size, XE_PAGE_SIZE); 1554 1554 1555 1555 XE_WARN_ON(size > MAX_PREEMPTDISABLE_TRANSFER); 1556 + 1556 1557 /* 1557 1558 * MI_STORE_DATA_IMM command is used to update page table. Each 1558 - * instruction can update maximumly 0x1ff pte entries. To update 1559 - * n (n <= 0x1ff) pte entries, we need: 1560 - * 1 dword for the MI_STORE_DATA_IMM command header (opcode etc) 1561 - * 2 dword for the page table's physical location 1562 - * 2*n dword for value of pte to fill (each pte entry is 2 dwords) 1559 + * instruction can update maximumly MAX_PTE_PER_SDI pte entries. To 1560 + * update n (n <= MAX_PTE_PER_SDI) pte entries, we need: 1561 + * 1562 + * - 1 dword for the MI_STORE_DATA_IMM command header (opcode etc) 1563 + * - 2 dword for the page table's physical location 1564 + * - 2*n dword for value of pte to fill (each pte entry is 2 dwords) 1563 1565 */ 1564 - num_dword = (1 + 2) * DIV_U64_ROUND_UP(entries, 0x1ff); 1566 + num_dword = (1 + 2) * DIV_U64_ROUND_UP(entries, MAX_PTE_PER_SDI); 1565 1567 num_dword += entries * 2; 1566 1568 1567 1569 return num_dword; ··· 1579 1577 1580 1578 ptes = DIV_ROUND_UP(size, XE_PAGE_SIZE); 1581 1579 while (ptes) { 1582 - u32 chunk = min(0x1ffU, ptes); 1580 + u32 chunk = min(MAX_PTE_PER_SDI, ptes); 1583 1581 1584 1582 bb->cs[bb->len++] = MI_STORE_DATA_IMM | MI_SDI_NUM_QW(chunk); 1585 1583 bb->cs[bb->len++] = pt_offset;
+6 -1
drivers/gpu/drm/xe/xe_wa_oob.rules
··· 21 21 GRAPHICS_VERSION_RANGE(1270, 1274) 22 22 MEDIA_VERSION(1300) 23 23 PLATFORM(DG2) 24 - 14018094691 GRAPHICS_VERSION(2004) 24 + 14018094691 GRAPHICS_VERSION_RANGE(2001, 2002) 25 + GRAPHICS_VERSION(2004) 25 26 14019882105 GRAPHICS_VERSION(2004), GRAPHICS_STEP(A0, B0) 26 27 18024947630 GRAPHICS_VERSION(2001) 27 28 GRAPHICS_VERSION(2004) ··· 60 59 MEDIA_VERSION_RANGE(1301, 3000) 61 60 16026508708 GRAPHICS_VERSION_RANGE(1200, 3001) 62 61 MEDIA_VERSION_RANGE(1300, 3000) 62 + 63 + # SoC workaround - currently applies to all platforms with the following 64 + # primary GT GMDID 65 + 14022085890 GRAPHICS_VERSION(2001)
+9 -5
drivers/hid/hid-appletb-kbd.c
··· 430 430 ret = appletb_kbd_set_mode(kbd, appletb_tb_def_mode); 431 431 if (ret) { 432 432 dev_err_probe(dev, ret, "Failed to set touchbar mode\n"); 433 - goto close_hw; 433 + goto unregister_handler; 434 434 } 435 435 436 436 hid_set_drvdata(hdev, kbd); 437 437 438 438 return 0; 439 439 440 + unregister_handler: 441 + input_unregister_handler(&kbd->inp_handler); 440 442 close_hw: 441 - if (kbd->backlight_dev) 443 + if (kbd->backlight_dev) { 442 444 put_device(&kbd->backlight_dev->dev); 445 + timer_delete_sync(&kbd->inactivity_timer); 446 + } 443 447 hid_hw_close(hdev); 444 448 stop_hw: 445 449 hid_hw_stop(hdev); ··· 457 453 appletb_kbd_set_mode(kbd, APPLETB_KBD_MODE_OFF); 458 454 459 455 input_unregister_handler(&kbd->inp_handler); 460 - timer_delete_sync(&kbd->inactivity_timer); 461 - 462 - if (kbd->backlight_dev) 456 + if (kbd->backlight_dev) { 463 457 put_device(&kbd->backlight_dev->dev); 458 + timer_delete_sync(&kbd->inactivity_timer); 459 + } 464 460 465 461 hid_hw_close(hdev); 466 462 hid_hw_stop(hdev);
+2 -2
drivers/hid/hid-debug.c
··· 3298 3298 [BTN_TOUCH] = "Touch", [BTN_STYLUS] = "Stylus", 3299 3299 [BTN_STYLUS2] = "Stylus2", [BTN_TOOL_DOUBLETAP] = "ToolDoubleTap", 3300 3300 [BTN_TOOL_TRIPLETAP] = "ToolTripleTap", [BTN_TOOL_QUADTAP] = "ToolQuadrupleTap", 3301 - [BTN_GEAR_DOWN] = "WheelBtn", 3302 - [BTN_GEAR_UP] = "Gear up", [KEY_OK] = "Ok", 3301 + [BTN_GEAR_DOWN] = "BtnGearDown", [BTN_GEAR_UP] = "BtnGearUp", 3302 + [BTN_WHEEL] = "BtnWheel", [KEY_OK] = "Ok", 3303 3303 [KEY_SELECT] = "Select", [KEY_GOTO] = "Goto", 3304 3304 [KEY_CLEAR] = "Clear", [KEY_POWER2] = "Power2", 3305 3305 [KEY_OPTION] = "Option", [KEY_INFO] = "Info",
+4 -2
drivers/hid/hid-elecom.c
··· 89 89 break; 90 90 case USB_DEVICE_ID_ELECOM_M_DT1URBK: 91 91 case USB_DEVICE_ID_ELECOM_M_DT1DRBK: 92 - case USB_DEVICE_ID_ELECOM_M_HT1URBK: 92 + case USB_DEVICE_ID_ELECOM_M_HT1URBK_010C: 93 + case USB_DEVICE_ID_ELECOM_M_HT1URBK_019B: 93 94 case USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D: 94 95 /* 95 96 * Report descriptor format: ··· 123 122 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT4DRBK) }, 124 123 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1URBK) }, 125 124 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1DRBK) }, 126 - { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK) }, 125 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK_010C) }, 126 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK_019B) }, 127 127 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D) }, 128 128 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C) }, 129 129 { }
+2 -1
drivers/hid/hid-ids.h
··· 448 448 #define USB_DEVICE_ID_ELECOM_M_XT4DRBK 0x00fd 449 449 #define USB_DEVICE_ID_ELECOM_M_DT1URBK 0x00fe 450 450 #define USB_DEVICE_ID_ELECOM_M_DT1DRBK 0x00ff 451 - #define USB_DEVICE_ID_ELECOM_M_HT1URBK 0x010c 451 + #define USB_DEVICE_ID_ELECOM_M_HT1URBK_010C 0x010c 452 + #define USB_DEVICE_ID_ELECOM_M_HT1URBK_019B 0x019b 452 453 #define USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D 0x010d 453 454 #define USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C 0x011c 454 455
+2 -1
drivers/hid/hid-quirks.c
··· 410 410 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT4DRBK) }, 411 411 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1URBK) }, 412 412 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1DRBK) }, 413 - { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK) }, 413 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK_010C) }, 414 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK_019B) }, 414 415 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D) }, 415 416 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C) }, 416 417 #endif
+1 -1
drivers/i2c/busses/Kconfig
··· 200 200 201 201 config I2C_PIIX4 202 202 tristate "Intel PIIX4 and compatible (ATI/AMD/Serverworks/Broadcom/SMSC)" 203 - depends on PCI && HAS_IOPORT && X86 203 + depends on PCI && HAS_IOPORT 204 204 select I2C_SMBUS 205 205 help 206 206 If you say yes to this option, support will be included for the Intel
+1
drivers/i2c/busses/i2c-designware-master.c
··· 363 363 364 364 dev->msgs = msgs; 365 365 dev->msgs_num = num_msgs; 366 + dev->msg_write_idx = 0; 366 367 i2c_dw_xfer_init(dev); 367 368 368 369 /* Initiate messages read/write transaction */
+5 -1
drivers/i2c/busses/i2c-microchip-corei2c.c
··· 435 435 u8 tx_buf[I2C_SMBUS_BLOCK_MAX + 2]; 436 436 u8 rx_buf[I2C_SMBUS_BLOCK_MAX + 1]; 437 437 int num_msgs = 1; 438 + int ret; 438 439 439 440 msgs[CORE_I2C_SMBUS_MSG_WR].addr = addr; 440 441 msgs[CORE_I2C_SMBUS_MSG_WR].flags = 0; ··· 506 505 return -EOPNOTSUPP; 507 506 } 508 507 509 - mchp_corei2c_xfer(&idev->adapter, msgs, num_msgs); 508 + ret = mchp_corei2c_xfer(&idev->adapter, msgs, num_msgs); 509 + if (ret < 0) 510 + return ret; 511 + 510 512 if (read_write == I2C_SMBUS_WRITE || size <= I2C_SMBUS_BYTE_DATA) 511 513 return 0; 512 514
+1 -1
drivers/i2c/busses/i2c-piix4.c
··· 34 34 #include <linux/dmi.h> 35 35 #include <linux/acpi.h> 36 36 #include <linux/io.h> 37 - #include <asm/amd/fch.h> 37 + #include <linux/platform_data/x86/amd-fch.h> 38 38 39 39 #include "i2c-piix4.h" 40 40
+3 -2
drivers/infiniband/ulp/srp/ib_srp.c
··· 3705 3705 target_host->max_id = 1; 3706 3706 target_host->max_lun = -1LL; 3707 3707 target_host->max_cmd_len = sizeof ((struct srp_cmd *) (void *) 0L)->cdb; 3708 - target_host->max_segment_size = ib_dma_max_seg_size(ibdev); 3709 3708 3710 - if (!(ibdev->attrs.kernel_cap_flags & IBK_SG_GAPS_REG)) 3709 + if (ibdev->attrs.kernel_cap_flags & IBK_SG_GAPS_REG) 3710 + target_host->max_segment_size = ib_dma_max_seg_size(ibdev); 3711 + else 3711 3712 target_host->virt_boundary_mask = ~srp_dev->mr_page_mask; 3712 3713 3713 3714 target = host_to_target(target_host);
+1 -1
drivers/input/joystick/fsia6b.c
··· 149 149 } 150 150 fsia6b->dev = input_dev; 151 151 152 - snprintf(fsia6b->phys, sizeof(fsia6b->phys), "%s/input0", serio->phys); 152 + scnprintf(fsia6b->phys, sizeof(fsia6b->phys), "%s/input0", serio->phys); 153 153 154 154 input_dev->name = DRIVER_DESC; 155 155 input_dev->phys = fsia6b->phys;
+7 -4
drivers/input/joystick/xpad.c
··· 177 177 { 0x05fd, 0x107a, "InterAct 'PowerPad Pro' X-Box pad (Germany)", 0, XTYPE_XBOX }, 178 178 { 0x05fe, 0x3030, "Chic Controller", 0, XTYPE_XBOX }, 179 179 { 0x05fe, 0x3031, "Chic Controller", 0, XTYPE_XBOX }, 180 + { 0x0502, 0x1305, "Acer NGR200", 0, XTYPE_XBOX }, 180 181 { 0x062a, 0x0020, "Logic3 Xbox GamePad", 0, XTYPE_XBOX }, 181 182 { 0x062a, 0x0033, "Competition Pro Steering Wheel", 0, XTYPE_XBOX }, 182 183 { 0x06a3, 0x0200, "Saitek Racing Wheel", 0, XTYPE_XBOX }, ··· 525 524 XPAD_XBOX360_VENDOR(0x045e), /* Microsoft Xbox 360 controllers */ 526 525 XPAD_XBOXONE_VENDOR(0x045e), /* Microsoft Xbox One controllers */ 527 526 XPAD_XBOX360_VENDOR(0x046d), /* Logitech Xbox 360-style controllers */ 527 + XPAD_XBOX360_VENDOR(0x0502), /* Acer Inc. Xbox 360 style controllers */ 528 528 XPAD_XBOX360_VENDOR(0x056e), /* Elecom JC-U3613M */ 529 529 XPAD_XBOX360_VENDOR(0x06a3), /* Saitek P3600 */ 530 530 XPAD_XBOX360_VENDOR(0x0738), /* Mad Catz Xbox 360 controllers */ ··· 1346 1344 usb_anchor_urb(xpad->irq_out, &xpad->irq_out_anchor); 1347 1345 error = usb_submit_urb(xpad->irq_out, GFP_ATOMIC); 1348 1346 if (error) { 1349 - dev_err(&xpad->intf->dev, 1350 - "%s - usb_submit_urb failed with result %d\n", 1351 - __func__, error); 1347 + if (error != -ENODEV) 1348 + dev_err(&xpad->intf->dev, 1349 + "%s - usb_submit_urb failed with result %d\n", 1350 + __func__, error); 1352 1351 usb_unanchor_urb(xpad->irq_out); 1353 - return -EIO; 1352 + return error; 1354 1353 } 1355 1354 1356 1355 xpad->irq_out_active = true;
+2 -2
drivers/input/keyboard/atkbd.c
··· 1191 1191 "AT %s Set %d keyboard", 1192 1192 atkbd->translated ? "Translated" : "Raw", atkbd->set); 1193 1193 1194 - snprintf(atkbd->phys, sizeof(atkbd->phys), 1195 - "%s/input0", atkbd->ps2dev.serio->phys); 1194 + scnprintf(atkbd->phys, sizeof(atkbd->phys), 1195 + "%s/input0", atkbd->ps2dev.serio->phys); 1196 1196 1197 1197 input_dev->name = atkbd->name; 1198 1198 input_dev->phys = atkbd->phys;
+2
drivers/input/misc/cs40l50-vibra.c
··· 238 238 header.data_words = len / sizeof(u32); 239 239 240 240 new_owt_effect_data = kmalloc(sizeof(header) + len, GFP_KERNEL); 241 + if (!new_owt_effect_data) 242 + return -ENOMEM; 241 243 242 244 memcpy(new_owt_effect_data, &header, sizeof(header)); 243 245 memcpy(new_owt_effect_data + sizeof(header), work_data->custom_data, len);
+1 -1
drivers/input/misc/gpio-beeper.c
··· 94 94 95 95 #ifdef CONFIG_OF 96 96 static const struct of_device_id gpio_beeper_of_match[] = { 97 - { .compatible = BEEPER_MODNAME, }, 97 + { .compatible = "gpio-beeper", }, 98 98 { } 99 99 }; 100 100 MODULE_DEVICE_TABLE(of, gpio_beeper_of_match);
+1 -1
drivers/input/misc/iqs626a.c
··· 771 771 u8 *thresh = &sys_reg->tp_grp_reg.ch_reg_tp[i].thresh; 772 772 char tc_name[10]; 773 773 774 - snprintf(tc_name, sizeof(tc_name), "channel-%d", i); 774 + scnprintf(tc_name, sizeof(tc_name), "channel-%d", i); 775 775 776 776 struct fwnode_handle *tc_node __free(fwnode_handle) = 777 777 fwnode_get_named_child_node(ch_node, tc_name);
+5 -2
drivers/input/misc/iqs7222.c
··· 301 301 int allow_offset; 302 302 int event_offset; 303 303 int comms_offset; 304 + int ext_chan; 304 305 bool legacy_gesture; 305 306 struct iqs7222_reg_grp_desc reg_grps[IQS7222_NUM_REG_GRPS]; 306 307 }; ··· 316 315 .allow_offset = 9, 317 316 .event_offset = 10, 318 317 .comms_offset = 12, 318 + .ext_chan = 10, 319 319 .reg_grps = { 320 320 [IQS7222_REG_GRP_STAT] = { 321 321 .base = IQS7222_SYS_STATUS, ··· 375 373 .allow_offset = 9, 376 374 .event_offset = 10, 377 375 .comms_offset = 12, 376 + .ext_chan = 10, 378 377 .legacy_gesture = true, 379 378 .reg_grps = { 380 379 [IQS7222_REG_GRP_STAT] = { ··· 2247 2244 const struct iqs7222_dev_desc *dev_desc = iqs7222->dev_desc; 2248 2245 struct i2c_client *client = iqs7222->client; 2249 2246 int num_chan = dev_desc->reg_grps[IQS7222_REG_GRP_CHAN].num_row; 2250 - int ext_chan = rounddown(num_chan, 10); 2247 + int ext_chan = dev_desc->ext_chan ? : num_chan; 2251 2248 int error, i; 2252 2249 u16 *chan_setup = iqs7222->chan_setup[chan_index]; 2253 2250 u16 *sys_setup = iqs7222->sys_setup; ··· 2448 2445 const struct iqs7222_dev_desc *dev_desc = iqs7222->dev_desc; 2449 2446 struct i2c_client *client = iqs7222->client; 2450 2447 int num_chan = dev_desc->reg_grps[IQS7222_REG_GRP_CHAN].num_row; 2451 - int ext_chan = rounddown(num_chan, 10); 2448 + int ext_chan = dev_desc->ext_chan ? : num_chan; 2452 2449 int count, error, reg_offset, i; 2453 2450 u16 *event_mask = &iqs7222->sys_setup[dev_desc->event_offset]; 2454 2451 u16 *sldr_setup = iqs7222->sldr_setup[sldr_index];
+5 -5
drivers/input/mouse/alps.c
··· 1408 1408 return -ENOMEM; 1409 1409 } 1410 1410 1411 - snprintf(priv->phys3, sizeof(priv->phys3), "%s/%s", 1412 - psmouse->ps2dev.serio->phys, 1413 - (priv->dev2 ? "input2" : "input1")); 1411 + scnprintf(priv->phys3, sizeof(priv->phys3), "%s/%s", 1412 + psmouse->ps2dev.serio->phys, 1413 + (priv->dev2 ? "input2" : "input1")); 1414 1414 dev3->phys = priv->phys3; 1415 1415 1416 1416 /* ··· 3103 3103 goto init_fail; 3104 3104 } 3105 3105 3106 - snprintf(priv->phys2, sizeof(priv->phys2), "%s/input1", 3107 - psmouse->ps2dev.serio->phys); 3106 + scnprintf(priv->phys2, sizeof(priv->phys2), "%s/input1", 3107 + psmouse->ps2dev.serio->phys); 3108 3108 dev2->phys = priv->phys2; 3109 3109 3110 3110 /*
+2 -2
drivers/input/mouse/lifebook.c
··· 279 279 goto err_out; 280 280 281 281 priv->dev2 = dev2; 282 - snprintf(priv->phys, sizeof(priv->phys), 283 - "%s/input1", psmouse->ps2dev.serio->phys); 282 + scnprintf(priv->phys, sizeof(priv->phys), 283 + "%s/input1", psmouse->ps2dev.serio->phys); 284 284 285 285 dev2->phys = priv->phys; 286 286 dev2->name = "LBPS/2 Fujitsu Lifebook Touchpad";
+1 -1
drivers/input/mouse/psmouse-base.c
··· 1600 1600 psmouse_pre_receive_byte, psmouse_receive_byte); 1601 1601 INIT_DELAYED_WORK(&psmouse->resync_work, psmouse_resync); 1602 1602 psmouse->dev = input_dev; 1603 - snprintf(psmouse->phys, sizeof(psmouse->phys), "%s/input0", serio->phys); 1603 + scnprintf(psmouse->phys, sizeof(psmouse->phys), "%s/input0", serio->phys); 1604 1604 1605 1605 psmouse_set_state(psmouse, PSMOUSE_INITIALIZING); 1606 1606
-1
drivers/input/touchscreen/Kconfig
··· 105 105 106 106 config TOUCHSCREEN_APPLE_Z2 107 107 tristate "Apple Z2 touchscreens" 108 - default ARCH_APPLE 109 108 depends on SPI && (ARCH_APPLE || COMPILE_TEST) 110 109 help 111 110 Say Y here if you have an ARM Apple device with
+1 -1
drivers/input/touchscreen/melfas_mip4.c
··· 1554 1554 1555 1555 #ifdef CONFIG_OF 1556 1556 static const struct of_device_id mip4_of_match[] = { 1557 - { .compatible = "melfas,"MIP4_DEVICE_NAME, }, 1557 + { .compatible = "melfas,mip4_ts", }, 1558 1558 { }, 1559 1559 }; 1560 1560 MODULE_DEVICE_TABLE(of, mip4_of_match);
+2 -3
drivers/iommu/intel/cache.c
··· 40 40 } 41 41 42 42 /* Assign a cache tag with specified type to domain. */ 43 - static int cache_tag_assign(struct dmar_domain *domain, u16 did, 44 - struct device *dev, ioasid_t pasid, 45 - enum cache_tag_type type) 43 + int cache_tag_assign(struct dmar_domain *domain, u16 did, struct device *dev, 44 + ioasid_t pasid, enum cache_tag_type type) 46 45 { 47 46 struct device_domain_info *info = dev_iommu_priv_get(dev); 48 47 struct intel_iommu *iommu = info->iommu;
+10 -1
drivers/iommu/intel/iommu.c
··· 3780 3780 !pci_enable_pasid(to_pci_dev(dev), info->pasid_supported & ~1)) 3781 3781 info->pasid_enabled = 1; 3782 3782 3783 - if (sm_supported(iommu) && !dev_is_real_dma_subdevice(dev)) 3783 + if (sm_supported(iommu) && !dev_is_real_dma_subdevice(dev)) { 3784 3784 iommu_enable_pci_ats(info); 3785 + /* Assign a DEVTLB cache tag to the default domain. */ 3786 + if (info->ats_enabled && info->domain) { 3787 + u16 did = domain_id_iommu(info->domain, iommu); 3788 + 3789 + if (cache_tag_assign(info->domain, did, dev, 3790 + IOMMU_NO_PASID, CACHE_TAG_DEVTLB)) 3791 + iommu_disable_pci_ats(info); 3792 + } 3793 + } 3785 3794 iommu_enable_pci_pri(info); 3786 3795 } 3787 3796
+2
drivers/iommu/intel/iommu.h
··· 1289 1289 unsigned int users; 1290 1290 }; 1291 1291 1292 + int cache_tag_assign(struct dmar_domain *domain, u16 did, struct device *dev, 1293 + ioasid_t pasid, enum cache_tag_type type); 1292 1294 int cache_tag_assign_domain(struct dmar_domain *domain, 1293 1295 struct device *dev, ioasid_t pasid); 1294 1296 void cache_tag_unassign_domain(struct dmar_domain *domain,
+2 -1
drivers/iommu/rockchip-iommu.c
··· 1157 1157 return -ENOMEM; 1158 1158 1159 1159 data->iommu = platform_get_drvdata(iommu_dev); 1160 - data->iommu->domain = &rk_identity_domain; 1161 1160 dev_iommu_priv_set(dev, data); 1162 1161 1163 1162 platform_device_put(iommu_dev); ··· 1193 1194 iommu = devm_kzalloc(dev, sizeof(*iommu), GFP_KERNEL); 1194 1195 if (!iommu) 1195 1196 return -ENOMEM; 1197 + 1198 + iommu->domain = &rk_identity_domain; 1196 1199 1197 1200 platform_set_drvdata(pdev, iommu); 1198 1201 iommu->dev = dev;
+1
drivers/irqchip/Kconfig
··· 74 74 75 75 config IRQ_MSI_LIB 76 76 bool 77 + select GENERIC_MSI_IRQ 77 78 78 79 config ARMADA_370_XP_IRQ 79 80 bool
+26 -4
drivers/mtd/nand/qpic_common.c
··· 57 57 bam_txn_buf += sizeof(*bam_txn); 58 58 59 59 bam_txn->bam_ce = bam_txn_buf; 60 - bam_txn_buf += 61 - sizeof(*bam_txn->bam_ce) * QPIC_PER_CW_CMD_ELEMENTS * num_cw; 60 + bam_txn->bam_ce_nitems = QPIC_PER_CW_CMD_ELEMENTS * num_cw; 61 + bam_txn_buf += sizeof(*bam_txn->bam_ce) * bam_txn->bam_ce_nitems; 62 62 63 63 bam_txn->cmd_sgl = bam_txn_buf; 64 - bam_txn_buf += 65 - sizeof(*bam_txn->cmd_sgl) * QPIC_PER_CW_CMD_SGL * num_cw; 64 + bam_txn->cmd_sgl_nitems = QPIC_PER_CW_CMD_SGL * num_cw; 65 + bam_txn_buf += sizeof(*bam_txn->cmd_sgl) * bam_txn->cmd_sgl_nitems; 66 66 67 67 bam_txn->data_sgl = bam_txn_buf; 68 + bam_txn->data_sgl_nitems = QPIC_PER_CW_DATA_SGL * num_cw; 68 69 69 70 init_completion(&bam_txn->txn_done); 70 71 ··· 239 238 struct bam_transaction *bam_txn = nandc->bam_txn; 240 239 u32 offset; 241 240 241 + if (bam_txn->bam_ce_pos + size > bam_txn->bam_ce_nitems) { 242 + dev_err(nandc->dev, "BAM %s array is full\n", "CE"); 243 + return -EINVAL; 244 + } 245 + 242 246 bam_ce_buffer = &bam_txn->bam_ce[bam_txn->bam_ce_pos]; 243 247 244 248 /* fill the command desc */ ··· 264 258 265 259 /* use the separate sgl after this command */ 266 260 if (flags & NAND_BAM_NEXT_SGL) { 261 + if (bam_txn->cmd_sgl_pos >= bam_txn->cmd_sgl_nitems) { 262 + dev_err(nandc->dev, "BAM %s array is full\n", 263 + "CMD sgl"); 264 + return -EINVAL; 265 + } 266 + 267 267 bam_ce_buffer = &bam_txn->bam_ce[bam_txn->bam_ce_start]; 268 268 bam_ce_size = (bam_txn->bam_ce_pos - 269 269 bam_txn->bam_ce_start) * ··· 309 297 struct bam_transaction *bam_txn = nandc->bam_txn; 310 298 311 299 if (read) { 300 + if (bam_txn->rx_sgl_pos >= bam_txn->data_sgl_nitems) { 301 + dev_err(nandc->dev, "BAM %s array is full\n", "RX sgl"); 302 + return -EINVAL; 303 + } 304 + 312 305 sg_set_buf(&bam_txn->data_sgl[bam_txn->rx_sgl_pos], 313 306 vaddr, size); 314 307 bam_txn->rx_sgl_pos++; 315 308 } else { 309 + if (bam_txn->tx_sgl_pos >= bam_txn->data_sgl_nitems) { 310 + dev_err(nandc->dev, "BAM %s array is full\n", 
"TX sgl"); 311 + return -EINVAL; 312 + } 313 + 316 314 sg_set_buf(&bam_txn->data_sgl[bam_txn->tx_sgl_pos], 317 315 vaddr, size); 318 316 bam_txn->tx_sgl_pos++;
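The checks added throughout this qpic_common.c hunk all follow one append-with-bounds-check pattern: record the element count at allocation time, then compare the position cursor against it before every write. A self-contained sketch of that pattern, with illustrative names rather than the driver's:

```c
#include <stddef.h>

/* Illustrative append helper mirroring the bounds checks added above:
 * refuse to advance the position cursor past the recorded capacity. */
static int push_item(int *arr, size_t *pos, size_t nitems, int val)
{
	if (*pos >= nitems)
		return -1;	/* array full, like the -EINVAL paths */
	arr[(*pos)++] = val;
	return 0;
}
```

The third append into a two-element array fails cleanly instead of scribbling past the buffer, which is exactly the overflow class the driver fix closes.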
+1
drivers/net/ethernet/airoha/airoha_eth.c
··· 2979 2979 error_napi_stop: 2980 2980 for (i = 0; i < ARRAY_SIZE(eth->qdma); i++) 2981 2981 airoha_qdma_stop_napi(&eth->qdma[i]); 2982 + airoha_ppe_deinit(eth); 2982 2983 error_hw_cleanup: 2983 2984 for (i = 0; i < ARRAY_SIZE(eth->qdma); i++) 2984 2985 airoha_hw_cleanup(&eth->qdma[i]);
+4 -6
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 11616 11616 11617 11617 static int bnxt_request_irq(struct bnxt *bp) 11618 11618 { 11619 + struct cpu_rmap *rmap = NULL; 11619 11620 int i, j, rc = 0; 11620 11621 unsigned long flags = 0; 11621 - #ifdef CONFIG_RFS_ACCEL 11622 - struct cpu_rmap *rmap; 11623 - #endif 11624 11622 11625 11623 rc = bnxt_setup_int_mode(bp); 11626 11624 if (rc) { ··· 11639 11641 int map_idx = bnxt_cp_num_to_irq_num(bp, i); 11640 11642 struct bnxt_irq *irq = &bp->irq_tbl[map_idx]; 11641 11643 11642 - #ifdef CONFIG_RFS_ACCEL 11643 - if (rmap && bp->bnapi[i]->rx_ring) { 11644 + if (IS_ENABLED(CONFIG_RFS_ACCEL) && 11645 + rmap && bp->bnapi[i]->rx_ring) { 11644 11646 rc = irq_cpu_rmap_add(rmap, irq->vector); 11645 11647 if (rc) 11646 11648 netdev_warn(bp->dev, "failed adding irq rmap for ring %d\n", 11647 11649 j); 11648 11650 j++; 11649 11651 } 11650 - #endif 11652 + 11651 11653 rc = request_irq(irq->vector, irq->handler, flags, irq->name, 11652 11654 bp->bnapi[i]); 11653 11655 if (rc)
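The bnxt hunk trades `#ifdef CONFIG_RFS_ACCEL` for `IS_ENABLED()`: the branch is now always parsed and type-checked in every configuration, and the compiler discards it as dead code when the option is off. A rough stand-in sketch — the kernel's real `IS_ENABLED()` expands Kconfig symbols via preprocessor tricks, and the config name here is made up:

```c
/* Stand-in for the kernel's IS_ENABLED(); a plain macro is enough to
 * show the shape. MY_CONFIG_RFS_ACCEL is an invented example symbol. */
#define MY_CONFIG_RFS_ACCEL 1
#define MY_IS_ENABLED(option) (option)

static int add_rmap_entry(int have_rx_ring)
{
	/* Unlike an #ifdef block, this branch always compiles, then is
	 * eliminated as dead code when the option evaluates to 0. */
	if (MY_IS_ENABLED(MY_CONFIG_RFS_ACCEL) && have_rx_ring)
		return 1;	/* would call irq_cpu_rmap_add() here */
	return 0;
}
```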
+6
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 4092 4092 for (i = 0; i <= priv->hw_params->rx_queues; i++) 4093 4093 priv->rx_rings[i].rx_max_coalesced_frames = 1; 4094 4094 4095 + /* Initialize u64 stats seq counter for 32bit machines */ 4096 + for (i = 0; i <= priv->hw_params->rx_queues; i++) 4097 + u64_stats_init(&priv->rx_rings[i].stats64.syncp); 4098 + for (i = 0; i <= priv->hw_params->tx_queues; i++) 4099 + u64_stats_init(&priv->tx_rings[i].stats64.syncp); 4100 + 4095 4101 /* libphy will determine the link state */ 4096 4102 netif_carrier_off(dev); 4097 4103
+3 -9
drivers/net/ethernet/cavium/thunder/nicvf_main.c
··· 1578 1578 static int nicvf_change_mtu(struct net_device *netdev, int new_mtu) 1579 1579 { 1580 1580 struct nicvf *nic = netdev_priv(netdev); 1581 - int orig_mtu = netdev->mtu; 1582 1581 1583 1582 /* For now just support only the usual MTU sized frames, 1584 1583 * plus some headroom for VLAN, QinQ. ··· 1588 1589 return -EINVAL; 1589 1590 } 1590 1591 1591 - WRITE_ONCE(netdev->mtu, new_mtu); 1592 - 1593 - if (!netif_running(netdev)) 1594 - return 0; 1595 - 1596 - if (nicvf_update_hw_max_frs(nic, new_mtu)) { 1597 - netdev->mtu = orig_mtu; 1592 + if (netif_running(netdev) && nicvf_update_hw_max_frs(nic, new_mtu)) 1598 1593 return -EINVAL; 1599 - } 1594 + 1595 + WRITE_ONCE(netdev->mtu, new_mtu); 1600 1596 1601 1597 return 0; 1602 1598 }
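The reworked `nicvf_change_mtu()` drops the rollback path by reordering: validate with the hardware first, and only then publish the new value with `WRITE_ONCE()`. A user-space sketch of that ordering, assuming a GCC/Clang-style `__typeof__`; the macro name is a stand-in for the kernel's:

```c
/* User-space stand-in for the kernel's WRITE_ONCE(): force a single
 * volatile store so the compiler may not tear, fuse, or elide it. */
#define MY_WRITE_ONCE(x, val) \
	(*(volatile __typeof__(x) *)&(x) = (val))

static int stored_mtu = 1500;

/* Commit-after-success ordering as in the hunk above: fail first,
 * publish second (hw_ok stands in for nicvf_update_hw_max_frs()). */
static int change_mtu(int new_mtu, int hw_ok)
{
	if (!hw_ok)
		return -1;
	MY_WRITE_ONCE(stored_mtu, new_mtu);
	return 0;
}
```

Because the store happens only after the hardware accepted the size, a failed update leaves the published MTU untouched and no undo step is needed.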
+5
drivers/net/ethernet/renesas/rtsn.c
··· 1259 1259 priv = netdev_priv(ndev); 1260 1260 priv->pdev = pdev; 1261 1261 priv->ndev = ndev; 1262 + 1262 1263 priv->ptp_priv = rcar_gen4_ptp_alloc(pdev); 1264 + if (!priv->ptp_priv) { 1265 + ret = -ENOMEM; 1266 + goto error_free; 1267 + } 1263 1268 1264 1269 spin_lock_init(&priv->lock); 1265 1270 platform_set_drvdata(pdev, priv);
+11 -13
drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
··· 364 364 } 365 365 366 366 /* TX/RX NORMAL interrupts */ 367 - if (likely(intr_status & XGMAC_NIS)) { 368 - if (likely(intr_status & XGMAC_RI)) { 369 - u64_stats_update_begin(&stats->syncp); 370 - u64_stats_inc(&stats->rx_normal_irq_n[chan]); 371 - u64_stats_update_end(&stats->syncp); 372 - ret |= handle_rx; 373 - } 374 - if (likely(intr_status & (XGMAC_TI | XGMAC_TBU))) { 375 - u64_stats_update_begin(&stats->syncp); 376 - u64_stats_inc(&stats->tx_normal_irq_n[chan]); 377 - u64_stats_update_end(&stats->syncp); 378 - ret |= handle_tx; 379 - } 367 + if (likely(intr_status & XGMAC_RI)) { 368 + u64_stats_update_begin(&stats->syncp); 369 + u64_stats_inc(&stats->rx_normal_irq_n[chan]); 370 + u64_stats_update_end(&stats->syncp); 371 + ret |= handle_rx; 372 + } 373 + if (likely(intr_status & (XGMAC_TI | XGMAC_TBU))) { 374 + u64_stats_update_begin(&stats->syncp); 375 + u64_stats_inc(&stats->tx_normal_irq_n[chan]); 376 + u64_stats_update_end(&stats->syncp); 377 + ret |= handle_tx; 380 378 } 381 379 382 380 /* Clear interrupts */
+1 -3
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 856 856 { 857 857 struct sk_buff *skb; 858 858 859 - len += AM65_CPSW_HEADROOM; 860 - 861 859 skb = build_skb(page_addr, len); 862 860 if (unlikely(!skb)) 863 861 return NULL; ··· 1342 1344 } 1343 1345 1344 1346 skb = am65_cpsw_build_skb(page_addr, ndev, 1345 - AM65_CPSW_MAX_PACKET_SIZE, headroom); 1347 + PAGE_SIZE, headroom); 1346 1348 if (unlikely(!skb)) { 1347 1349 new_page = page; 1348 1350 goto requeue;
-27
drivers/net/phy/qcom/at803x.c
··· 27 27 28 28 #define AT803X_LED_CONTROL 0x18 29 29 30 - #define AT803X_PHY_MMD3_WOL_CTRL 0x8012 31 - #define AT803X_WOL_EN BIT(5) 32 - 33 30 #define AT803X_REG_CHIP_CONFIG 0x1f 34 31 #define AT803X_BT_BX_REG_SEL 0x8000 35 32 ··· 911 914 return ret; 912 915 913 916 return at803x_config_init(phydev); 914 - } 915 - 916 - static int at8031_set_wol(struct phy_device *phydev, 917 - struct ethtool_wolinfo *wol) 918 - { 919 - int ret; 920 - 921 - /* First setup MAC address and enable WOL interrupt */ 922 - ret = at803x_set_wol(phydev, wol); 923 - if (ret) 924 - return ret; 925 - 926 - if (wol->wolopts & WAKE_MAGIC) 927 - /* Enable WOL function for 1588 */ 928 - ret = phy_modify_mmd(phydev, MDIO_MMD_PCS, 929 - AT803X_PHY_MMD3_WOL_CTRL, 930 - 0, AT803X_WOL_EN); 931 - else 932 - /* Disable WoL function for 1588 */ 933 - ret = phy_modify_mmd(phydev, MDIO_MMD_PCS, 934 - AT803X_PHY_MMD3_WOL_CTRL, 935 - AT803X_WOL_EN, 0); 936 - 937 - return ret; 938 917 } 939 918 940 919 static int at8031_config_intr(struct phy_device *phydev)
+1 -1
drivers/net/phy/qcom/qca808x.c
··· 633 633 .handle_interrupt = at803x_handle_interrupt, 634 634 .get_tunable = at803x_get_tunable, 635 635 .set_tunable = at803x_set_tunable, 636 - .set_wol = at803x_set_wol, 636 + .set_wol = at8031_set_wol, 637 637 .get_wol = at803x_get_wol, 638 638 .get_features = qca808x_get_features, 639 639 .config_aneg = qca808x_config_aneg,
+25
drivers/net/phy/qcom/qcom-phy-lib.c
··· 115 115 } 116 116 EXPORT_SYMBOL_GPL(at803x_set_wol); 117 117 118 + int at8031_set_wol(struct phy_device *phydev, 119 + struct ethtool_wolinfo *wol) 120 + { 121 + int ret; 122 + 123 + /* First setup MAC address and enable WOL interrupt */ 124 + ret = at803x_set_wol(phydev, wol); 125 + if (ret) 126 + return ret; 127 + 128 + if (wol->wolopts & WAKE_MAGIC) 129 + /* Enable WOL function for 1588 */ 130 + ret = phy_modify_mmd(phydev, MDIO_MMD_PCS, 131 + AT803X_PHY_MMD3_WOL_CTRL, 132 + 0, AT803X_WOL_EN); 133 + else 134 + /* Disable WoL function for 1588 */ 135 + ret = phy_modify_mmd(phydev, MDIO_MMD_PCS, 136 + AT803X_PHY_MMD3_WOL_CTRL, 137 + AT803X_WOL_EN, 0); 138 + 139 + return ret; 140 + } 141 + EXPORT_SYMBOL_GPL(at8031_set_wol); 142 + 118 143 void at803x_get_wol(struct phy_device *phydev, 119 144 struct ethtool_wolinfo *wol) 120 145 {
+5
drivers/net/phy/qcom/qcom.h
··· 172 172 #define AT803X_LOC_MAC_ADDR_16_31_OFFSET 0x804B 173 173 #define AT803X_LOC_MAC_ADDR_32_47_OFFSET 0x804A 174 174 175 + #define AT803X_PHY_MMD3_WOL_CTRL 0x8012 176 + #define AT803X_WOL_EN BIT(5) 177 + 175 178 #define AT803X_DEBUG_ADDR 0x1D 176 179 #define AT803X_DEBUG_DATA 0x1E 177 180 ··· 217 214 u16 clear, u16 set); 218 215 int at803x_debug_reg_write(struct phy_device *phydev, u16 reg, u16 data); 219 216 int at803x_set_wol(struct phy_device *phydev, 217 + struct ethtool_wolinfo *wol); 218 + int at8031_set_wol(struct phy_device *phydev, 220 219 struct ethtool_wolinfo *wol); 221 220 void at803x_get_wol(struct phy_device *phydev, 222 221 struct ethtool_wolinfo *wol);
+52 -5
drivers/net/phy/smsc.c
··· 155 155 156 156 static int lan87xx_config_aneg(struct phy_device *phydev) 157 157 { 158 - int rc; 158 + u8 mdix_ctrl; 159 159 int val; 160 + int rc; 160 161 161 - switch (phydev->mdix_ctrl) { 162 + /* When auto-negotiation is disabled (forced mode), the PHY's 163 + * Auto-MDIX will continue toggling the TX/RX pairs. 164 + * 165 + * To establish a stable link, we must select a fixed MDI mode. 166 + * If the user has not specified a fixed MDI mode (i.e., mdix_ctrl is 167 + * 'auto'), we default to ETH_TP_MDI. This choice of a ETH_TP_MDI mode 168 + * mirrors the behavior the hardware would exhibit if the AUTOMDIX_EN 169 + * strap were configured for a fixed MDI connection. 170 + */ 171 + if (phydev->autoneg == AUTONEG_DISABLE) { 172 + if (phydev->mdix_ctrl == ETH_TP_MDI_AUTO) 173 + mdix_ctrl = ETH_TP_MDI; 174 + else 175 + mdix_ctrl = phydev->mdix_ctrl; 176 + } else { 177 + mdix_ctrl = phydev->mdix_ctrl; 178 + } 179 + 180 + switch (mdix_ctrl) { 162 181 case ETH_TP_MDI: 163 182 val = SPECIAL_CTRL_STS_OVRRD_AMDIX_; 164 183 break; ··· 186 167 SPECIAL_CTRL_STS_AMDIX_STATE_; 187 168 break; 188 169 case ETH_TP_MDI_AUTO: 189 - val = SPECIAL_CTRL_STS_AMDIX_ENABLE_; 170 + val = SPECIAL_CTRL_STS_OVRRD_AMDIX_ | 171 + SPECIAL_CTRL_STS_AMDIX_ENABLE_; 190 172 break; 191 173 default: 192 174 return genphy_config_aneg(phydev); ··· 203 183 rc |= val; 204 184 phy_write(phydev, SPECIAL_CTRL_STS, rc); 205 185 206 - phydev->mdix = phydev->mdix_ctrl; 186 + phydev->mdix = mdix_ctrl; 207 187 return genphy_config_aneg(phydev); 208 188 } 209 189 ··· 280 260 return err; 281 261 } 282 262 EXPORT_SYMBOL_GPL(lan87xx_read_status); 263 + 264 + static int lan87xx_phy_config_init(struct phy_device *phydev) 265 + { 266 + int rc; 267 + 268 + /* The LAN87xx PHY's initial MDI-X mode is determined by the AUTOMDIX_EN 269 + * hardware strap, but the driver cannot read the strap's status. This 270 + * creates an unpredictable initial state. 
271 + * 272 + * To ensure consistent and reliable behavior across all boards, 273 + * override the strap configuration on initialization and force the PHY 274 + * into a known state with Auto-MDIX enabled, which is the expected 275 + * default for modern hardware. 276 + */ 277 + rc = phy_modify(phydev, SPECIAL_CTRL_STS, 278 + SPECIAL_CTRL_STS_OVRRD_AMDIX_ | 279 + SPECIAL_CTRL_STS_AMDIX_ENABLE_ | 280 + SPECIAL_CTRL_STS_AMDIX_STATE_, 281 + SPECIAL_CTRL_STS_OVRRD_AMDIX_ | 282 + SPECIAL_CTRL_STS_AMDIX_ENABLE_); 283 + if (rc < 0) 284 + return rc; 285 + 286 + phydev->mdix_ctrl = ETH_TP_MDI_AUTO; 287 + 288 + return smsc_phy_config_init(phydev); 289 + } 283 290 284 291 static int lan874x_phy_config_init(struct phy_device *phydev) 285 292 { ··· 742 695 743 696 /* basic functions */ 744 697 .read_status = lan87xx_read_status, 745 - .config_init = smsc_phy_config_init, 698 + .config_init = lan87xx_phy_config_init, 746 699 .soft_reset = smsc_phy_reset, 747 700 .config_aneg = lan87xx_config_aneg, 748 701
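The forced-mode MDI-X handling added to `lan87xx_config_aneg()` reduces to one decision: when auto-negotiation is disabled, collapse an "auto" request to a fixed MDI choice so the PHY stops toggling the TX/RX pairs. A self-contained sketch of just that decision — the enum names are illustrative, not the ethtool constants:

```c
enum mdix { MDI, MDI_X, MDI_AUTO };

/* With autoneg disabled the PHY must not keep toggling pairs, so an
 * "auto" request falls back to fixed MDI, mirroring the hunk above. */
static enum mdix pick_mdix(int autoneg_enabled, enum mdix ctrl)
{
	if (!autoneg_enabled && ctrl == MDI_AUTO)
		return MDI;
	return ctrl;
}
```

Any explicitly fixed mode passes through unchanged; only the "auto with autoneg off" combination is rewritten.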
+16 -2
drivers/nvme/host/core.c
··· 386 386 nr->cmd->common.cdw12, 387 387 nr->cmd->common.cdw13, 388 388 nr->cmd->common.cdw14, 389 - nr->cmd->common.cdw14); 389 + nr->cmd->common.cdw15); 390 390 } 391 391 392 392 enum nvme_disposition { ··· 4086 4086 struct nvme_ns *ns; 4087 4087 struct gendisk *disk; 4088 4088 int node = ctrl->numa_node; 4089 + bool last_path = false; 4089 4090 4090 4091 ns = kzalloc_node(sizeof(*ns), GFP_KERNEL, node); 4091 4092 if (!ns) ··· 4179 4178 out_unlink_ns: 4180 4179 mutex_lock(&ctrl->subsys->lock); 4181 4180 list_del_rcu(&ns->siblings); 4182 - if (list_empty(&ns->head->list)) 4181 + if (list_empty(&ns->head->list)) { 4183 4182 list_del_init(&ns->head->entry); 4183 + /* 4184 + * If multipath is not configured, we still create a namespace 4185 + * head (nshead), but head->disk is not initialized in that 4186 + * case. As a result, only a single reference to nshead is held 4187 + * (via kref_init()) when it is created. Therefore, ensure that 4188 + * we do not release the reference to nshead twice if head->disk 4189 + * is not present. 4190 + */ 4191 + if (ns->head->disk) 4192 + last_path = true; 4193 + } 4184 4194 mutex_unlock(&ctrl->subsys->lock); 4195 + if (last_path) 4196 + nvme_put_ns_head(ns->head); 4185 4197 nvme_put_ns_head(ns->head); 4186 4198 out_cleanup_disk: 4187 4199 put_disk(disk);
+6 -2
drivers/nvme/host/multipath.c
··· 690 690 nvme_cdev_del(&head->cdev, &head->cdev_device); 691 691 synchronize_srcu(&head->srcu); 692 692 del_gendisk(head->disk); 693 - nvme_put_ns_head(head); 694 693 } 694 + nvme_put_ns_head(head); 695 695 } 696 696 697 697 static void nvme_remove_head_work(struct work_struct *work) ··· 1200 1200 */ 1201 1201 srcu_idx = srcu_read_lock(&head->srcu); 1202 1202 1203 - list_for_each_entry_rcu(ns, &head->list, siblings) { 1203 + list_for_each_entry_srcu(ns, &head->list, siblings, 1204 + srcu_read_lock_held(&head->srcu)) { 1204 1205 /* 1205 1206 * Ensure that ns path disk node is already added otherwise we 1206 1207 * may get invalid kobj name for target ··· 1291 1290 void nvme_mpath_remove_disk(struct nvme_ns_head *head) 1292 1291 { 1293 1292 bool remove = false; 1293 + 1294 + if (!head->disk) 1295 + return; 1294 1296 1295 1297 mutex_lock(&head->subsys->lock); 1296 1298 /*
+4 -2
drivers/nvme/host/pci.c
··· 2101 2101 if ((dev->cmbsz & (NVME_CMBSZ_WDS | NVME_CMBSZ_RDS)) == 2102 2102 (NVME_CMBSZ_WDS | NVME_CMBSZ_RDS)) 2103 2103 pci_p2pmem_publish(pdev, true); 2104 - 2105 - nvme_update_attrs(dev); 2106 2104 } 2107 2105 2108 2106 static int nvme_set_host_mem(struct nvme_dev *dev, u32 bits) ··· 3008 3010 if (result < 0) 3009 3011 goto out; 3010 3012 3013 + nvme_update_attrs(dev); 3014 + 3011 3015 result = nvme_setup_io_queues(dev); 3012 3016 if (result) 3013 3017 goto out; ··· 3342 3342 result = nvme_setup_host_mem(dev); 3343 3343 if (result < 0) 3344 3344 goto out_disable; 3345 + 3346 + nvme_update_attrs(dev); 3345 3347 3346 3348 result = nvme_setup_io_queues(dev); 3347 3349 if (result)
+2
drivers/nvme/target/nvmet.h
··· 867 867 { 868 868 if (bio != &req->b.inline_bio) 869 869 bio_put(bio); 870 + else 871 + bio_uninit(bio); 870 872 } 871 873 872 874 #ifdef CONFIG_NVME_TARGET_TCP_TLS
+8 -2
drivers/pinctrl/nuvoton/pinctrl-ma35.c
··· 1074 1074 u32 idx = 0; 1075 1075 int ret; 1076 1076 1077 - for_each_gpiochip_node(dev, child) { 1077 + device_for_each_child_node(dev, child) { 1078 + if (fwnode_property_present(child, "gpio-controller")) 1079 + continue; 1080 + 1078 1081 npctl->nfunctions++; 1079 1082 npctl->ngroups += of_get_child_count(to_of_node(child)); 1080 1083 } ··· 1095 1092 if (!npctl->groups) 1096 1093 return -ENOMEM; 1097 1094 1098 - for_each_gpiochip_node(dev, child) { 1095 + device_for_each_child_node(dev, child) { 1096 + if (fwnode_property_present(child, "gpio-controller")) 1097 + continue; 1098 + 1099 1099 ret = ma35_pinctrl_parse_functions(child, npctl, idx++); 1100 1100 if (ret) { 1101 1101 fwnode_handle_put(child);
+11
drivers/pinctrl/pinctrl-amd.c
··· 979 979 pin, is_suspend ? "suspend" : "hibernate"); 980 980 } 981 981 982 + /* 983 + * debounce enabled over suspend has shown issues with a GPIO 984 + * being unable to wake the system, as we're only interested in 985 + * the actual wakeup event, clear it. 986 + */ 987 + if (gpio_dev->saved_regs[i] & (DB_CNTRl_MASK << DB_CNTRL_OFF)) { 988 + amd_gpio_set_debounce(gpio_dev, pin, 0); 989 + pm_pr_dbg("Clearing debounce for GPIO #%d during %s.\n", 990 + pin, is_suspend ? "suspend" : "hibernate"); 991 + } 992 + 982 993 raw_spin_unlock_irqrestore(&gpio_dev->lock, flags); 983 994 } 984 995
+1 -1
drivers/pinctrl/pinctrl-aw9523.c
··· 784 784 gc->set_config = gpiochip_generic_config; 785 785 gc->parent = dev; 786 786 gc->owner = THIS_MODULE; 787 - gc->can_sleep = false; 787 + gc->can_sleep = true; 788 788 789 789 return 0; 790 790 }
+20
drivers/pinctrl/qcom/pinctrl-msm.c
··· 1038 1038 test_bit(d->hwirq, pctrl->skip_wake_irqs); 1039 1039 } 1040 1040 1041 + static void msm_gpio_irq_init_valid_mask(struct gpio_chip *gc, 1042 + unsigned long *valid_mask, 1043 + unsigned int ngpios) 1044 + { 1045 + struct msm_pinctrl *pctrl = gpiochip_get_data(gc); 1046 + const struct msm_pingroup *g; 1047 + int i; 1048 + 1049 + bitmap_fill(valid_mask, ngpios); 1050 + 1051 + for (i = 0; i < ngpios; i++) { 1052 + g = &pctrl->soc->groups[i]; 1053 + 1054 + if (g->intr_detection_width != 1 && 1055 + g->intr_detection_width != 2) 1056 + clear_bit(i, valid_mask); 1057 + } 1058 + } 1059 + 1041 1060 static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type) 1042 1061 { 1043 1062 struct gpio_chip *gc = irq_data_get_irq_chip_data(d); ··· 1460 1441 girq->default_type = IRQ_TYPE_NONE; 1461 1442 girq->handler = handle_bad_irq; 1462 1443 girq->parents[0] = pctrl->irq; 1444 + girq->init_valid_mask = msm_gpio_irq_init_valid_mask; 1463 1445 1464 1446 ret = devm_gpiochip_add_data(pctrl->dev, &pctrl->chip, pctrl); 1465 1447 if (ret) {
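The new `msm_gpio_irq_init_valid_mask()` starts from an all-valid bitmap and clears every line whose interrupt detection width is neither 1 nor 2 bits. The same construction can be sketched with a plain word-sized bitmask; the width values below are made-up sample data, not any SoC's pin table:

```c
#include <limits.h>

/* Sketch of the valid-mask construction above: start all-valid, then
 * clear lines whose interrupt detection width is neither 1 nor 2. */
static unsigned long build_valid_mask(const int *widths, int n)
{
	unsigned long mask =
		(n >= (int)(sizeof(unsigned long) * CHAR_BIT)) ?
		~0UL : ((1UL << n) - 1);

	for (int i = 0; i < n; i++)
		if (widths[i] != 1 && widths[i] != 2)
			mask &= ~(1UL << i);
	return mask;
}
```

For widths {1, 2, 0, 3} only the first two lines survive, so requests to use the other pins as interrupts are rejected up front instead of failing later in `irq_set_type()`.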
+1 -1
drivers/platform/mellanox/mlxbf-pmc.c
··· 715 715 {101, "GDC_BANK0_HIT_DCL_PARTIAL"}, 716 716 {102, "GDC_BANK0_EVICT_DCL"}, 717 717 {103, "GDC_BANK0_G_RSE_PIPE_CACHE_DATA0"}, 718 - {103, "GDC_BANK0_G_RSE_PIPE_CACHE_DATA1"}, 718 + {104, "GDC_BANK0_G_RSE_PIPE_CACHE_DATA1"}, 719 719 {105, "GDC_BANK0_ARB_STRB"}, 720 720 {106, "GDC_BANK0_ARB_WAIT"}, 721 721 {107, "GDC_BANK0_GGA_STRB"},
+3 -2
drivers/platform/mellanox/mlxbf-tmfifo.c
··· 281 281 vring->align = SMP_CACHE_BYTES; 282 282 vring->index = i; 283 283 vring->vdev_id = tm_vdev->vdev.id.device; 284 - vring->drop_desc.len = VRING_DROP_DESC_MAX_LEN; 284 + vring->drop_desc.len = cpu_to_virtio32(&tm_vdev->vdev, 285 + VRING_DROP_DESC_MAX_LEN); 285 286 dev = &tm_vdev->vdev.dev; 286 287 287 288 size = vring_size(vring->num, vring->align); ··· 1288 1287 ether_addr_copy(mac, mlxbf_tmfifo_net_default_mac); 1289 1288 } 1290 1289 1291 - /* Set TmFifo thresolds which is used to trigger interrupts. */ 1290 + /* Set TmFifo thresholds which is used to trigger interrupts. */ 1292 1291 static void mlxbf_tmfifo_set_threshold(struct mlxbf_tmfifo *fifo) 1293 1292 { 1294 1293 u64 ctl;
+1 -1
drivers/platform/mellanox/mlxreg-dpu.c
··· 483 483 mlxreg_dpu->io_data, 484 484 sizeof(*mlxreg_dpu->io_data)); 485 485 if (IS_ERR(mlxreg_dpu->io_regs)) { 486 - dev_err(dev, "Failed to create regio for client %s at bus %d at addr 0x%02x\n", 486 + dev_err(dev, "Failed to create region for client %s at bus %d at addr 0x%02x\n", 487 487 data->hpdev.brdinfo->type, data->hpdev.nr, 488 488 data->hpdev.brdinfo->addr); 489 489 return PTR_ERR(mlxreg_dpu->io_regs);
+6 -6
drivers/platform/mellanox/mlxreg-lc.c
··· 57 57 * @dev: platform device; 58 58 * @lock: line card lock; 59 59 * @par_regmap: parent device regmap handle; 60 - * @data: pltaform core data; 60 + * @data: platform core data; 61 61 * @io_data: register access platform data; 62 - * @led_data: LED platform data ; 62 + * @led_data: LED platform data; 63 63 * @mux_data: MUX platform data; 64 64 * @led: LED device; 65 65 * @io_regs: register access device; ··· 171 171 0x4e, 0x4f 172 172 }; 173 173 174 - /* Defaul mux configuration. */ 174 + /* Default mux configuration. */ 175 175 static struct mlxcpld_mux_plat_data mlxreg_lc_mux_data[] = { 176 176 { 177 177 .chan_ids = mlxreg_lc_chan, ··· 181 181 }, 182 182 }; 183 183 184 - /* Defaul mux board info. */ 184 + /* Default mux board info. */ 185 185 static struct i2c_board_info mlxreg_lc_mux_brdinfo = { 186 186 I2C_BOARD_INFO("i2c-mux-mlxcpld", 0x32), 187 187 }; ··· 688 688 if (regval & mlxreg_lc->data->mask) { 689 689 mlxreg_lc->state |= MLXREG_LC_SYNCED; 690 690 mlxreg_lc_state_update_locked(mlxreg_lc, MLXREG_LC_SYNCED, 1); 691 - if (mlxreg_lc->state & ~MLXREG_LC_POWERED) { 691 + if (!(mlxreg_lc->state & MLXREG_LC_POWERED)) { 692 692 err = mlxreg_lc_power_on_off(mlxreg_lc, 1); 693 693 if (err) 694 694 goto mlxreg_lc_regmap_power_on_off_fail; ··· 758 758 platform_device_register_resndata(dev, "mlxreg-io", data->hpdev.nr, NULL, 0, 759 759 mlxreg_lc->io_data, sizeof(*mlxreg_lc->io_data)); 760 760 if (IS_ERR(mlxreg_lc->io_regs)) { 761 - dev_err(dev, "Failed to create regio for client %s at bus %d at addr 0x%02x\n", 761 + dev_err(dev, "Failed to create region for client %s at bus %d at addr 0x%02x\n", 762 762 data->hpdev.brdinfo->type, data->hpdev.nr, 763 763 data->hpdev.brdinfo->addr); 764 764 err = PTR_ERR(mlxreg_lc->io_regs);
+1 -1
drivers/platform/mellanox/nvsw-sn2201.c
··· 1181 1181 if (!nvsw_sn2201->main_mux_devs->adapter) { 1182 1182 err = -ENODEV; 1183 1183 dev_err(nvsw_sn2201->dev, "Failed to get adapter for bus %d\n", 1184 - nvsw_sn2201->cpld_devs->nr); 1184 + nvsw_sn2201->main_mux_devs->nr); 1185 1185 goto i2c_get_adapter_main_fail; 1186 1186 } 1187 1187
+144 -37
drivers/platform/x86/amd/amd_isp4.c
··· 21 21 #define AMDISP_OV05C10_REMOTE_EP_NAME "ov05c10_isp_4_1_1" 22 22 #define AMD_ISP_PLAT_DRV_NAME "amd-isp4" 23 23 24 + static const struct software_node isp4_mipi1_endpoint_node; 25 + static const struct software_node ov05c10_endpoint_node; 26 + 24 27 /* 25 28 * AMD ISP platform info definition to initialize sensor 26 29 * specific platform configuration to prepare the amdisp ··· 46 43 struct mutex lock; /* protects i2c client creation */ 47 44 }; 48 45 49 - /* Top-level OV05C10 camera node property table */ 46 + /* Root AMD CAMERA SWNODE */ 47 + 48 + /* Root amd camera node definition */ 49 + static const struct software_node amd_camera_node = { 50 + .name = "amd_camera", 51 + }; 52 + 53 + /* ISP4 SWNODE */ 54 + 55 + /* ISP4 OV05C10 camera node definition */ 56 + static const struct software_node isp4_node = { 57 + .name = "isp4", 58 + .parent = &amd_camera_node, 59 + }; 60 + 61 + /* 62 + * ISP4 Ports node definition. No properties defined for 63 + * ports node. 64 + */ 65 + static const struct software_node isp4_ports = { 66 + .name = "ports", 67 + .parent = &isp4_node, 68 + }; 69 + 70 + /* 71 + * ISP4 Port node definition. No properties defined for 72 + * port node. 73 + */ 74 + static const struct software_node isp4_port_node = { 75 + .name = "port@0", 76 + .parent = &isp4_ports, 77 + }; 78 + 79 + /* 80 + * ISP4 MIPI1 remote endpoint points to OV05C10 endpoint 81 + * node. 
82 + */ 83 + static const struct software_node_ref_args isp4_refs[] = { 84 + SOFTWARE_NODE_REFERENCE(&ov05c10_endpoint_node), 85 + }; 86 + 87 + /* ISP4 MIPI1 endpoint node properties table */ 88 + static const struct property_entry isp4_mipi1_endpoint_props[] = { 89 + PROPERTY_ENTRY_REF_ARRAY("remote-endpoint", isp4_refs), 90 + { } 91 + }; 92 + 93 + /* ISP4 MIPI1 endpoint node definition */ 94 + static const struct software_node isp4_mipi1_endpoint_node = { 95 + .name = "endpoint", 96 + .parent = &isp4_port_node, 97 + .properties = isp4_mipi1_endpoint_props, 98 + }; 99 + 100 + /* I2C1 SWNODE */ 101 + 102 + /* I2C1 camera node property table */ 103 + static const struct property_entry i2c1_camera_props[] = { 104 + PROPERTY_ENTRY_U32("clock-frequency", 1 * HZ_PER_MHZ), 105 + { } 106 + }; 107 + 108 + /* I2C1 camera node definition */ 109 + static const struct software_node i2c1_node = { 110 + .name = "i2c1", 111 + .parent = &amd_camera_node, 112 + .properties = i2c1_camera_props, 113 + }; 114 + 115 + /* I2C1 camera node property table */ 50 116 static const struct property_entry ov05c10_camera_props[] = { 51 117 PROPERTY_ENTRY_U32("clock-frequency", 24 * HZ_PER_MHZ), 52 118 { } 53 119 }; 54 120 55 - /* Root AMD ISP OV05C10 camera node definition */ 56 - static const struct software_node camera_node = { 121 + /* OV05C10 camera node definition */ 122 + static const struct software_node ov05c10_camera_node = { 57 123 .name = AMDISP_OV05C10_HID, 124 + .parent = &i2c1_node, 58 125 .properties = ov05c10_camera_props, 59 126 }; 60 127 61 128 /* 62 - * AMD ISP OV05C10 Ports node definition. No properties defined for 129 + * OV05C10 Ports node definition. No properties defined for 63 130 * ports node for OV05C10. 
64 131 */ 65 - static const struct software_node ports = { 132 + static const struct software_node ov05c10_ports = { 66 133 .name = "ports", 67 - .parent = &camera_node, 134 + .parent = &ov05c10_camera_node, 68 135 }; 69 136 70 137 /* 71 - * AMD ISP OV05C10 Port node definition. No properties defined for 72 - * port node for OV05C10. 138 + * OV05C10 Port node definition. 73 139 */ 74 - static const struct software_node port_node = { 75 - .name = "port@", 76 - .parent = &ports, 140 + static const struct software_node ov05c10_port_node = { 141 + .name = "port@0", 142 + .parent = &ov05c10_ports, 77 143 }; 78 144 79 145 /* 80 - * Remote endpoint AMD ISP node definition. No properties defined for 81 - * remote endpoint node for OV05C10. 82 - */ 83 - static const struct software_node remote_ep_isp_node = { 84 - .name = AMDISP_OV05C10_REMOTE_EP_NAME, 85 - }; 86 - 87 - /* 88 - * Remote endpoint reference for isp node included in the 89 - * OV05C10 endpoint. 146 + * OV05C10 remote endpoint points to ISP4 MIPI1 endpoint 147 + * node. 
90 148 */ 91 149 static const struct software_node_ref_args ov05c10_refs[] = { 92 - SOFTWARE_NODE_REFERENCE(&remote_ep_isp_node), 150 + SOFTWARE_NODE_REFERENCE(&isp4_mipi1_endpoint_node), 93 151 }; 94 152 95 153 /* OV05C10 supports one single link frequency */ 96 154 static const u64 ov05c10_link_freqs[] = { 97 - 925 * HZ_PER_MHZ, 155 + 900 * HZ_PER_MHZ, 98 156 }; 99 157 100 158 /* OV05C10 supports only 2-lane configuration */ ··· 175 111 { } 176 112 }; 177 113 178 - /* AMD ISP endpoint node definition */ 179 - static const struct software_node endpoint_node = { 114 + /* OV05C10 endpoint node definition */ 115 + static const struct software_node ov05c10_endpoint_node = { 180 116 .name = "endpoint", 181 - .parent = &port_node, 117 + .parent = &ov05c10_port_node, 182 118 .properties = ov05c10_endpoint_props, 183 119 }; 184 120 185 121 /* 186 - * AMD ISP swnode graph uses 5 nodes and also its relationship is 187 - * fixed to align with the structure that v4l2 expects for successful 188 - * endpoint fwnode parsing. 122 + * AMD Camera swnode graph uses 10 nodes and also its relationship is 123 + * fixed to align with the structure that v4l2 and i2c frameworks expects 124 + * for successful parsing of fwnodes and its properties with standard names. 189 125 * 190 126 * It is only the node property_entries that will vary for each platform 191 127 * supporting different sensor modules. 
128 + * 129 + * AMD ISP4 SWNODE GRAPH Structure 130 + * 131 + * amd_camera { 132 + * isp4 { 133 + * ports { 134 + * port@0 { 135 + * isp4_mipi1_ep: endpoint { 136 + * remote-endpoint = &OMNI5C10_ep; 137 + * }; 138 + * }; 139 + * }; 140 + * }; 141 + * 142 + * i2c1 { 143 + * clock-frequency = 1 MHz; 144 + * OMNI5C10 { 145 + * clock-frequency = 24MHz; 146 + * ports { 147 + * port@0 { 148 + * OMNI5C10_ep: endpoint { 149 + * bus-type = 4; 150 + * data-lanes = <1 2>; 151 + * link-frequencies = 900MHz; 152 + * remote-endpoint = &isp4_mipi1; 153 + * }; 154 + * }; 155 + * }; 156 + * }; 157 + * }; 158 + * }; 159 + * 192 160 */ 193 - static const struct software_node *ov05c10_nodes[] = { 194 - &camera_node, 195 - &ports, 196 - &port_node, 197 - &endpoint_node, 198 - &remote_ep_isp_node, 161 + static const struct software_node *amd_isp4_nodes[] = { 162 + &amd_camera_node, 163 + &isp4_node, 164 + &isp4_ports, 165 + &isp4_port_node, 166 + &isp4_mipi1_endpoint_node, 167 + &i2c1_node, 168 + &ov05c10_camera_node, 169 + &ov05c10_ports, 170 + &ov05c10_port_node, 171 + &ov05c10_endpoint_node, 199 172 NULL 200 173 }; 201 174 ··· 242 141 .dev_name = "ov05c10", 243 142 I2C_BOARD_INFO("ov05c10", AMDISP_OV05C10_I2C_ADDR), 244 143 }, 245 - .swnodes = ov05c10_nodes, 144 + .swnodes = amd_isp4_nodes, 246 145 }; 247 146 248 147 static const struct acpi_device_id amdisp_sensor_ids[] = { ··· 334 233 if (ret) 335 234 return ERR_PTR(ret); 336 235 337 - isp4_platform->board_info.swnode = src->swnodes[0]; 236 + /* initialize ov05c10_camera_node */ 237 + isp4_platform->board_info.swnode = src->swnodes[6]; 338 238 339 239 return isp4_platform; 340 240 } ··· 360 258 { 361 259 const struct amdisp_platform_info *pinfo; 362 260 struct amdisp_platform *isp4_platform; 261 + struct acpi_device *adev; 363 262 int ret; 364 263 365 264 pinfo = device_get_match_data(&pdev->dev); ··· 377 274 ret = bus_register_notifier(&i2c_bus_type, &isp4_platform->i2c_nb); 378 275 if (ret) 379 276 goto error_unregister_sw_node; 
277 + 278 + adev = ACPI_COMPANION(&pdev->dev); 279 + /* initialize root amd_camera_node */ 280 + adev->driver_data = (void *)pinfo->swnodes[0]; 380 281 381 282 /* check if adapter is already registered and create i2c client instance */ 382 283 i2c_for_each_dev(isp4_platform, try_to_instantiate_i2c_client);
+1 -1
drivers/platform/x86/amd/pmc/pmc-quirks.c
··· 11 11 #include <linux/dmi.h> 12 12 #include <linux/io.h> 13 13 #include <linux/ioport.h> 14 - #include <asm/amd/fch.h> 14 + #include <linux/platform_data/x86/amd-fch.h> 15 15 16 16 #include "pmc.h" 17 17
+9
drivers/platform/x86/asus-nb-wmi.c
··· 530 530 }, 531 531 .driver_data = &quirk_asus_zenbook_duo_kbd, 532 532 }, 533 + { 534 + .callback = dmi_matched, 535 + .ident = "ASUS Zenbook Duo UX8406CA", 536 + .matches = { 537 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 538 + DMI_MATCH(DMI_PRODUCT_NAME, "UX8406CA"), 539 + }, 540 + .driver_data = &quirk_asus_zenbook_duo_kbd, 541 + }, 533 542 {}, 534 543 }; 535 544
+1
drivers/platform/x86/dell/dell-lis3lv02d.c
··· 45 45 * Additional individual entries were added after verification. 46 46 */ 47 47 DELL_LIS3LV02D_DMI_ENTRY("Latitude 5480", 0x29), 48 + DELL_LIS3LV02D_DMI_ENTRY("Latitude 5500", 0x29), 48 49 DELL_LIS3LV02D_DMI_ENTRY("Latitude E6330", 0x29), 49 50 DELL_LIS3LV02D_DMI_ENTRY("Latitude E6430", 0x29), 50 51 DELL_LIS3LV02D_DMI_ENTRY("Precision 3540", 0x29),
+5
drivers/platform/x86/dell/dell-wmi-sysman/dell-wmi-sysman.h
··· 89 89 90 90 enum { ENUM, INT, STR, PO }; 91 91 92 + #define ENUM_MIN_ELEMENTS 8 93 + #define INT_MIN_ELEMENTS 9 94 + #define STR_MIN_ELEMENTS 8 95 + #define PO_MIN_ELEMENTS 4 96 + 92 97 enum { 93 98 ATTR_NAME, 94 99 DISPL_NAME_LANG_CODE,
+3 -2
drivers/platform/x86/dell/dell-wmi-sysman/enum-attributes.c
··· 23 23 obj = get_wmiobj_pointer(instance_id, DELL_WMI_BIOS_ENUMERATION_ATTRIBUTE_GUID); 24 24 if (!obj) 25 25 return -EIO; 26 - if (obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_STRING) { 26 + if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < ENUM_MIN_ELEMENTS || 27 + obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_STRING) { 27 28 kfree(obj); 28 - return -EINVAL; 29 + return -EIO; 29 30 } 30 31 ret = snprintf(buf, PAGE_SIZE, "%s\n", obj->package.elements[CURRENT_VAL].string.pointer); 31 32 kfree(obj);
+3 -2
drivers/platform/x86/dell/dell-wmi-sysman/int-attributes.c
··· 25 25 obj = get_wmiobj_pointer(instance_id, DELL_WMI_BIOS_INTEGER_ATTRIBUTE_GUID); 26 26 if (!obj) 27 27 return -EIO; 28 - if (obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_INTEGER) { 28 + if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < INT_MIN_ELEMENTS || 29 + obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_INTEGER) { 29 30 kfree(obj); 30 - return -EINVAL; 31 + return -EIO; 31 32 } 32 33 ret = snprintf(buf, PAGE_SIZE, "%lld\n", obj->package.elements[CURRENT_VAL].integer.value); 33 34 kfree(obj);
+3 -2
drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c
··· 26 26 obj = get_wmiobj_pointer(instance_id, DELL_WMI_BIOS_PASSOBJ_ATTRIBUTE_GUID); 27 27 if (!obj) 28 28 return -EIO; 29 - if (obj->package.elements[IS_PASS_SET].type != ACPI_TYPE_INTEGER) { 29 + if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < PO_MIN_ELEMENTS || 30 + obj->package.elements[IS_PASS_SET].type != ACPI_TYPE_INTEGER) { 30 31 kfree(obj); 31 - return -EINVAL; 32 + return -EIO; 32 33 } 33 34 ret = snprintf(buf, PAGE_SIZE, "%lld\n", obj->package.elements[IS_PASS_SET].integer.value); 34 35 kfree(obj);
+3 -2
drivers/platform/x86/dell/dell-wmi-sysman/string-attributes.c
··· 25 25 obj = get_wmiobj_pointer(instance_id, DELL_WMI_BIOS_STRING_ATTRIBUTE_GUID); 26 26 if (!obj) 27 27 return -EIO; 28 - if (obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_STRING) { 28 + if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < STR_MIN_ELEMENTS || 29 + obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_STRING) { 29 30 kfree(obj); 30 - return -EINVAL; 31 + return -EIO; 31 32 } 32 33 ret = snprintf(buf, PAGE_SIZE, "%s\n", obj->package.elements[CURRENT_VAL].string.pointer); 33 34 kfree(obj);
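The four Dell WMI hunks above all apply the same hardening: before indexing into an ACPI object returned by firmware, verify it really is a package, that it has at least the expected element count, and that the element at the index has the expected type. A minimal userspace sketch of that validation order (the struct and names here are illustrative stand-ins, not the ACPICA API):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical mirror of the firmware-returned object: a type flag,
 * an element count, and an array of typed elements. */
enum elem_type { ELEM_INT, ELEM_STRING };

struct elem {
	enum elem_type type;
};

struct pkg_obj {
	int is_package;
	size_t count;
	const struct elem *elements;
};

/* Validate shape before dereferencing elements[idx], as the hunks
 * above do. The || chain short-circuits, so a short package is
 * rejected before elements[idx] is ever touched; all failure modes
 * collapse into one I/O-style error (-5 standing in for -EIO). */
static int pkg_get_string(const struct pkg_obj *obj, size_t min_elements,
			  size_t idx)
{
	if (!obj->is_package || obj->count < min_elements ||
	    obj->elements[idx].type != ELEM_STRING)
		return -5;
	return 0;
}
```

The check ordering is the point: the old code tested only the element type, so a malformed (non-package or truncated) object led to an out-of-bounds read before the type test ran.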
+6 -6
drivers/platform/x86/dell/dell-wmi-sysman/sysman.c
··· 407 407 return retval; 408 408 409 409 switch (attr_type) { 410 - case ENUM: min_elements = 8; break; 411 - case INT: min_elements = 9; break; 412 - case STR: min_elements = 8; break; 413 - case PO: min_elements = 4; break; 410 + case ENUM: min_elements = ENUM_MIN_ELEMENTS; break; 411 + case INT: min_elements = INT_MIN_ELEMENTS; break; 412 + case STR: min_elements = STR_MIN_ELEMENTS; break; 413 + case PO: min_elements = PO_MIN_ELEMENTS; break; 414 414 default: 415 415 pr_err("Error: Unknown attr_type: %d\n", attr_type); 416 416 return -EINVAL; ··· 597 597 release_attributes_data(); 598 598 599 599 err_destroy_classdev: 600 - device_destroy(&firmware_attributes_class, MKDEV(0, 0)); 600 + device_unregister(wmi_priv.class_dev); 601 601 602 602 err_exit_bios_attr_pass_interface: 603 603 exit_bios_attr_pass_interface(); ··· 611 611 static void __exit sysman_exit(void) 612 612 { 613 613 release_attributes_data(); 614 - device_destroy(&firmware_attributes_class, MKDEV(0, 0)); 614 + device_unregister(wmi_priv.class_dev); 615 615 exit_bios_attr_set_interface(); 616 616 exit_bios_attr_pass_interface(); 617 617 }
+2 -2
drivers/platform/x86/hp/hp-bioscfg/bioscfg.c
··· 1034 1034 release_attributes_data(); 1035 1035 1036 1036 err_destroy_classdev: 1037 - device_destroy(&firmware_attributes_class, MKDEV(0, 0)); 1037 + device_unregister(bioscfg_drv.class_dev); 1038 1038 1039 1039 err_unregister_class: 1040 1040 hp_exit_attr_set_interface(); ··· 1045 1045 static void __exit hp_exit(void) 1046 1046 { 1047 1047 release_attributes_data(); 1048 - device_destroy(&firmware_attributes_class, MKDEV(0, 0)); 1048 + device_unregister(bioscfg_drv.class_dev); 1049 1049 1050 1050 hp_exit_attr_set_interface(); 1051 1051 }
+1
drivers/platform/x86/intel/hid.c
··· 54 54 { "INTC107B" }, 55 55 { "INTC10CB" }, 56 56 { "INTC10CC" }, 57 + { "INTC10F1" }, 57 58 { } 58 59 }; 59 60 MODULE_DEVICE_TABLE(acpi, intel_hid_ids);
+1
drivers/platform/x86/portwell-ec.c
··· 236 236 return ret; 237 237 } 238 238 239 + ec_wdt_dev.parent = &pdev->dev; 239 240 ret = devm_watchdog_register_device(&pdev->dev, &ec_wdt_dev); 240 241 if (ret < 0) { 241 242 dev_err(&pdev->dev, "failed to register Portwell EC Watchdog\n");
+32 -62
drivers/platform/x86/think-lmi.c
··· 973 973 .is_visible = auth_attr_is_visible, 974 974 .attrs = auth_attrs, 975 975 }; 976 + __ATTRIBUTE_GROUPS(auth_attr); 976 977 977 978 /* ---- Attributes sysfs --------------------------------------------------------- */ 978 979 static ssize_t display_name_show(struct kobject *kobj, struct kobj_attribute *attr, ··· 1189 1188 .is_visible = attr_is_visible, 1190 1189 .attrs = tlmi_attrs, 1191 1190 }; 1191 + __ATTRIBUTE_GROUPS(tlmi_attr); 1192 1192 1193 1193 static void tlmi_attr_setting_release(struct kobject *kobj) 1194 1194 { ··· 1209 1207 static const struct kobj_type tlmi_attr_setting_ktype = { 1210 1208 .release = &tlmi_attr_setting_release, 1211 1209 .sysfs_ops = &kobj_sysfs_ops, 1210 + .default_groups = tlmi_attr_groups, 1212 1211 }; 1213 1212 1214 1213 static const struct kobj_type tlmi_pwd_setting_ktype = { 1215 1214 .release = &tlmi_pwd_setting_release, 1216 1215 .sysfs_ops = &kobj_sysfs_ops, 1216 + .default_groups = auth_attr_groups, 1217 1217 }; 1218 1218 1219 1219 static ssize_t pending_reboot_show(struct kobject *kobj, struct kobj_attribute *attr, ··· 1384 1380 /* ---- Initialisation --------------------------------------------------------- */ 1385 1381 static void tlmi_release_attr(void) 1386 1382 { 1387 - int i; 1383 + struct kobject *pos, *n; 1388 1384 1389 1385 /* Attribute structures */ 1390 - for (i = 0; i < TLMI_SETTINGS_COUNT; i++) { 1391 - if (tlmi_priv.setting[i]) { 1392 - sysfs_remove_group(&tlmi_priv.setting[i]->kobj, &tlmi_attr_group); 1393 - kobject_put(&tlmi_priv.setting[i]->kobj); 1394 - } 1395 - } 1396 1386 sysfs_remove_file(&tlmi_priv.attribute_kset->kobj, &pending_reboot.attr); 1397 1387 sysfs_remove_file(&tlmi_priv.attribute_kset->kobj, &save_settings.attr); 1398 1388 1399 1389 if (tlmi_priv.can_debug_cmd && debug_support) 1400 1390 sysfs_remove_file(&tlmi_priv.attribute_kset->kobj, &debug_cmd.attr); 1391 + 1392 + list_for_each_entry_safe(pos, n, &tlmi_priv.attribute_kset->list, entry) 1393 + kobject_put(pos); 1401 1394 1402 
1395 kset_unregister(tlmi_priv.attribute_kset); 1403 1396 ··· 1403 1402 kfree(tlmi_priv.pwd_admin->save_signature); 1404 1403 1405 1404 /* Authentication structures */ 1406 - sysfs_remove_group(&tlmi_priv.pwd_admin->kobj, &auth_attr_group); 1407 - kobject_put(&tlmi_priv.pwd_admin->kobj); 1408 - sysfs_remove_group(&tlmi_priv.pwd_power->kobj, &auth_attr_group); 1409 - kobject_put(&tlmi_priv.pwd_power->kobj); 1410 - 1411 - if (tlmi_priv.opcode_support) { 1412 - sysfs_remove_group(&tlmi_priv.pwd_system->kobj, &auth_attr_group); 1413 - kobject_put(&tlmi_priv.pwd_system->kobj); 1414 - sysfs_remove_group(&tlmi_priv.pwd_hdd->kobj, &auth_attr_group); 1415 - kobject_put(&tlmi_priv.pwd_hdd->kobj); 1416 - sysfs_remove_group(&tlmi_priv.pwd_nvme->kobj, &auth_attr_group); 1417 - kobject_put(&tlmi_priv.pwd_nvme->kobj); 1418 - } 1405 + list_for_each_entry_safe(pos, n, &tlmi_priv.authentication_kset->list, entry) 1406 + kobject_put(pos); 1419 1407 1420 1408 kset_unregister(tlmi_priv.authentication_kset); 1421 1409 } ··· 1445 1455 goto fail_device_created; 1446 1456 } 1447 1457 1458 + tlmi_priv.authentication_kset = kset_create_and_add("authentication", NULL, 1459 + &tlmi_priv.class_dev->kobj); 1460 + if (!tlmi_priv.authentication_kset) { 1461 + kset_unregister(tlmi_priv.attribute_kset); 1462 + ret = -ENOMEM; 1463 + goto fail_device_created; 1464 + } 1465 + 1448 1466 for (i = 0; i < TLMI_SETTINGS_COUNT; i++) { 1449 1467 /* Check if index is a valid setting - skip if it isn't */ 1450 1468 if (!tlmi_priv.setting[i]) ··· 1469 1471 1470 1472 /* Build attribute */ 1471 1473 tlmi_priv.setting[i]->kobj.kset = tlmi_priv.attribute_kset; 1472 - ret = kobject_add(&tlmi_priv.setting[i]->kobj, NULL, 1473 - "%s", tlmi_priv.setting[i]->display_name); 1474 - if (ret) 1475 - goto fail_create_attr; 1476 - 1477 - ret = sysfs_create_group(&tlmi_priv.setting[i]->kobj, &tlmi_attr_group); 1474 + ret = kobject_init_and_add(&tlmi_priv.setting[i]->kobj, &tlmi_attr_setting_ktype, 1475 + NULL, "%s", 
tlmi_priv.setting[i]->display_name); 1478 1476 if (ret) 1479 1477 goto fail_create_attr; 1480 1478 } ··· 1490 1496 } 1491 1497 1492 1498 /* Create authentication entries */ 1493 - tlmi_priv.authentication_kset = kset_create_and_add("authentication", NULL, 1494 - &tlmi_priv.class_dev->kobj); 1495 - if (!tlmi_priv.authentication_kset) { 1496 - ret = -ENOMEM; 1497 - goto fail_create_attr; 1498 - } 1499 1499 tlmi_priv.pwd_admin->kobj.kset = tlmi_priv.authentication_kset; 1500 - ret = kobject_add(&tlmi_priv.pwd_admin->kobj, NULL, "%s", "Admin"); 1501 - if (ret) 1502 - goto fail_create_attr; 1503 - 1504 - ret = sysfs_create_group(&tlmi_priv.pwd_admin->kobj, &auth_attr_group); 1500 + ret = kobject_init_and_add(&tlmi_priv.pwd_admin->kobj, &tlmi_pwd_setting_ktype, 1501 + NULL, "%s", "Admin"); 1505 1502 if (ret) 1506 1503 goto fail_create_attr; 1507 1504 1508 1505 tlmi_priv.pwd_power->kobj.kset = tlmi_priv.authentication_kset; 1509 - ret = kobject_add(&tlmi_priv.pwd_power->kobj, NULL, "%s", "Power-on"); 1510 - if (ret) 1511 - goto fail_create_attr; 1512 - 1513 - ret = sysfs_create_group(&tlmi_priv.pwd_power->kobj, &auth_attr_group); 1506 + ret = kobject_init_and_add(&tlmi_priv.pwd_power->kobj, &tlmi_pwd_setting_ktype, 1507 + NULL, "%s", "Power-on"); 1514 1508 if (ret) 1515 1509 goto fail_create_attr; 1516 1510 1517 1511 if (tlmi_priv.opcode_support) { 1518 1512 tlmi_priv.pwd_system->kobj.kset = tlmi_priv.authentication_kset; 1519 - ret = kobject_add(&tlmi_priv.pwd_system->kobj, NULL, "%s", "System"); 1520 - if (ret) 1521 - goto fail_create_attr; 1522 - 1523 - ret = sysfs_create_group(&tlmi_priv.pwd_system->kobj, &auth_attr_group); 1513 + ret = kobject_init_and_add(&tlmi_priv.pwd_system->kobj, &tlmi_pwd_setting_ktype, 1514 + NULL, "%s", "System"); 1524 1515 if (ret) 1525 1516 goto fail_create_attr; 1526 1517 1527 1518 tlmi_priv.pwd_hdd->kobj.kset = tlmi_priv.authentication_kset; 1528 - ret = kobject_add(&tlmi_priv.pwd_hdd->kobj, NULL, "%s", "HDD"); 1529 - if (ret) 1530 - goto 
fail_create_attr; 1531 - 1532 - ret = sysfs_create_group(&tlmi_priv.pwd_hdd->kobj, &auth_attr_group); 1519 + ret = kobject_init_and_add(&tlmi_priv.pwd_hdd->kobj, &tlmi_pwd_setting_ktype, 1520 + NULL, "%s", "HDD"); 1533 1521 if (ret) 1534 1522 goto fail_create_attr; 1535 1523 1536 1524 tlmi_priv.pwd_nvme->kobj.kset = tlmi_priv.authentication_kset; 1537 - ret = kobject_add(&tlmi_priv.pwd_nvme->kobj, NULL, "%s", "NVMe"); 1538 - if (ret) 1539 - goto fail_create_attr; 1540 - 1541 - ret = sysfs_create_group(&tlmi_priv.pwd_nvme->kobj, &auth_attr_group); 1525 + ret = kobject_init_and_add(&tlmi_priv.pwd_nvme->kobj, &tlmi_pwd_setting_ktype, 1526 + NULL, "%s", "NVMe"); 1542 1527 if (ret) 1543 1528 goto fail_create_attr; 1544 1529 } ··· 1527 1554 fail_create_attr: 1528 1555 tlmi_release_attr(); 1529 1556 fail_device_created: 1530 - device_destroy(&firmware_attributes_class, MKDEV(0, 0)); 1557 + device_unregister(tlmi_priv.class_dev); 1531 1558 fail_class_created: 1532 1559 return ret; 1533 1560 } ··· 1549 1576 new_pwd->minlen = tlmi_priv.pwdcfg.core.min_length; 1550 1577 new_pwd->maxlen = tlmi_priv.pwdcfg.core.max_length; 1551 1578 new_pwd->index = 0; 1552 - 1553 - kobject_init(&new_pwd->kobj, &tlmi_pwd_setting_ktype); 1554 1579 1555 1580 return new_pwd; 1556 1581 } ··· 1654 1683 if (setting->possible_values) 1655 1684 strreplace(setting->possible_values, ',', ';'); 1656 1685 1657 - kobject_init(&setting->kobj, &tlmi_attr_setting_ktype); 1658 1686 tlmi_priv.setting[i] = setting; 1659 1687 kfree(item); 1660 1688 } ··· 1751 1781 static void tlmi_remove(struct wmi_device *wdev) 1752 1782 { 1753 1783 tlmi_release_attr(); 1754 - device_destroy(&firmware_attributes_class, MKDEV(0, 0)); 1784 + device_unregister(tlmi_priv.class_dev); 1755 1785 } 1756 1786 1757 1787 static int tlmi_probe(struct wmi_device *wdev, const void *context)
+1
drivers/platform/x86/thinkpad_acpi.c
··· 3295 3295 */ 3296 3296 { KE_KEY, 0x131d, { KEY_VENDOR } }, /* System debug info, similar to old ThinkPad key */ 3297 3297 { KE_KEY, 0x1320, { KEY_LINK_PHONE } }, 3298 + { KE_KEY, 0x1402, { KEY_LINK_PHONE } }, 3298 3299 { KE_KEY, TP_HKEY_EV_TRACK_DOUBLETAP /* 0x8036 */, { KEY_PROG4 } }, 3299 3300 { KE_END } 3300 3301 };
+11 -5
drivers/platform/x86/wmi.c
··· 177 177 acpi_handle handle; 178 178 acpi_status status; 179 179 180 - if (!(wblock->gblock.flags & ACPI_WMI_EXPENSIVE)) 181 - return 0; 182 - 183 180 if (wblock->dev.dev.type == &wmi_type_method) 184 181 return 0; 185 182 186 - if (wblock->dev.dev.type == &wmi_type_event) 183 + if (wblock->dev.dev.type == &wmi_type_event) { 184 + /* 185 + * Windows always enables/disables WMI events, even when they are 186 + * not marked as being expensive. We follow this behavior for 187 + * compatibility reasons. 188 + */ 187 189 snprintf(method, sizeof(method), "WE%02X", wblock->gblock.notify_id); 188 - else 190 + } else { 191 + if (!(wblock->gblock.flags & ACPI_WMI_EXPENSIVE)) 192 + return 0; 193 + 189 194 get_acpi_method_name(wblock, 'C', method); 195 + } 190 196 191 197 /* 192 198 * Not all WMI devices marked as expensive actually implement the
+17 -1
drivers/powercap/intel_rapl_common.c
··· 341 341 { 342 342 struct rapl_domain *rd = power_zone_to_rapl_domain(power_zone); 343 343 struct rapl_defaults *defaults = get_defaults(rd->rp); 344 + u64 val; 344 345 int ret; 345 346 346 347 cpus_read_lock(); 347 348 ret = rapl_write_pl_data(rd, POWER_LIMIT1, PL_ENABLE, mode); 348 - if (!ret && defaults->set_floor_freq) 349 + if (ret) 350 + goto end; 351 + 352 + ret = rapl_read_pl_data(rd, POWER_LIMIT1, PL_ENABLE, false, &val); 353 + if (ret) 354 + goto end; 355 + 356 + if (mode != val) { 357 + pr_debug("%s cannot be %s\n", power_zone->name, 358 + str_enabled_disabled(mode)); 359 + goto end; 360 + } 361 + 362 + if (defaults->set_floor_freq) 349 363 defaults->set_floor_freq(rd, mode); 364 + 365 + end: 350 366 cpus_read_unlock(); 351 367 352 368 return ret;
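The intel_rapl hunk above adds a read-back after the PL_ENABLE write: firmware may silently refuse the change, so the driver only toggles the floor frequency if the value it wrote actually stuck. A toy model of that write-then-verify pattern (the fake register and all names are assumptions for illustration, not the MSR interface):

```c
#include <assert.h>
#include <stdint.h>

static uint64_t fake_msr;
static uint64_t locked_bits;	/* bits the "firmware" refuses to change */

/* Writes land only in unlocked bits, modeling a silently-ignored write. */
static void msr_write(uint64_t val)
{
	fake_msr = (val & ~locked_bits) | (fake_msr & locked_bits);
}

static uint64_t msr_read(void)
{
	return fake_msr;
}

/* Mirror of the mode != val check in the patch: write the enable bit,
 * read it back, and report whether the request actually took effect. */
static int set_enable_verified(int mode)
{
	uint64_t want = mode ? 1 : 0;

	msr_write(want);
	return (msr_read() & 1) == want;
}
```

Dependent actions (here, the caller's follow-up work; in the patch, `set_floor_freq()`) run only on a verified write.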
+1 -1
drivers/pwm/core.c
··· 596 596 * and supposed to be ignored. So also ignore any strange values and 597 597 * consider the state ok. 598 598 */ 599 - if (state->enabled) 599 + if (!state->enabled) 600 600 return true; 601 601 602 602 if (!state->period)
+8 -5
drivers/pwm/pwm-mediatek.c
··· 130 130 return ret; 131 131 132 132 clk_rate = clk_get_rate(pc->clk_pwms[pwm->hwpwm]); 133 - if (!clk_rate) 134 - return -EINVAL; 133 + if (!clk_rate) { 134 + ret = -EINVAL; 135 + goto out; 136 + } 135 137 136 138 /* Make sure we use the bus clock and not the 26MHz clock */ 137 139 if (pc->soc->has_ck_26m_sel) ··· 152 150 } 153 151 154 152 if (clkdiv > PWM_CLK_DIV_MAX) { 155 - pwm_mediatek_clk_disable(chip, pwm); 156 153 dev_err(pwmchip_parent(chip), "period of %d ns not supported\n", period_ns); 157 - return -EINVAL; 154 + ret = -EINVAL; 155 + goto out; 158 156 } 159 157 160 158 if (pc->soc->pwm45_fixup && pwm->hwpwm > 2) { ··· 171 169 pwm_mediatek_writel(pc, pwm->hwpwm, reg_width, cnt_period); 172 170 pwm_mediatek_writel(pc, pwm->hwpwm, reg_thres, cnt_duty); 173 171 172 + out: 174 173 pwm_mediatek_clk_disable(chip, pwm); 175 174 176 - return 0; 175 + return ret; 177 176 } 178 177 179 178 static int pwm_mediatek_enable(struct pwm_chip *chip, struct pwm_device *pwm)
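The pwm-mediatek fix above converts scattered early returns into a single exit label, because one of the error paths returned without disabling the clock while another disabled it inline. A minimal sketch of the single-exit pattern, with a counter standing in for the clock (names are illustrative):

```c
#include <assert.h>

static int clk_enabled;

static void clk_enable(void)
{
	clk_enabled++;
}

static void clk_disable(void)
{
	clk_enabled--;
}

/* Once the resource is acquired, every path - success or failure -
 * funnels through the out label that releases it, mirroring the
 * `goto out; ... out: pwm_mediatek_clk_disable()` shape of the patch. */
static int configure(int bad_rate, int bad_div)
{
	int ret = 0;

	clk_enable();

	if (bad_rate) {
		ret = -22;	/* -EINVAL */
		goto out;
	}

	if (bad_div) {
		ret = -22;
		goto out;
	}

out:
	clk_disable();	/* runs on success and on every error path */
	return ret;
}
```

The invariant worth testing is that the enable count returns to zero no matter which branch is taken.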
+1
drivers/regulator/core.c
··· 5639 5639 ERR_PTR(err)); 5640 5640 } 5641 5641 5642 + rdev->coupling_desc.n_coupled = 0; 5642 5643 kfree(rdev->coupling_desc.coupled_rdevs); 5643 5644 rdev->coupling_desc.coupled_rdevs = NULL; 5644 5645 }
+4 -4
drivers/regulator/gpio-regulator.c
··· 260 260 return -ENOMEM; 261 261 } 262 262 263 - drvdata->gpiods = devm_kzalloc(dev, sizeof(struct gpio_desc *), 264 - GFP_KERNEL); 263 + drvdata->gpiods = devm_kcalloc(dev, config->ngpios, 264 + sizeof(struct gpio_desc *), GFP_KERNEL); 265 + if (!drvdata->gpiods) 266 + return -ENOMEM; 265 267 266 268 if (config->input_supply) { 267 269 drvdata->desc.supply_name = devm_kstrdup(&pdev->dev, ··· 276 274 } 277 275 } 278 276 279 - if (!drvdata->gpiods) 280 - return -ENOMEM; 281 277 for (i = 0; i < config->ngpios; i++) { 282 278 drvdata->gpiods[i] = devm_gpiod_get_index(dev, 283 279 NULL,
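The gpio-regulator hunk above swaps `devm_kzalloc(dev, sizeof(struct gpio_desc *), ...)` for `devm_kcalloc(dev, config->ngpios, sizeof(struct gpio_desc *), ...)`: the old call allocated room for exactly one pointer, yet the loop indexed `ngpios` of them. The userspace analogue is `calloc`, which sizes by element count, zero-fills, and checks the `n * size` multiplication for overflow (which a bare `malloc(n * size)` does not):

```c
#include <assert.h>
#include <stdlib.h>

/* One zeroed slot per descriptor, overflow-checked - the userspace
 * equivalent of the devm_kcalloc() call in the patch. */
static void **alloc_ptr_array(size_t n)
{
	return calloc(n, sizeof(void *));
}
```

Zero-filling matters here too: unassigned slots read back as NULL instead of garbage, so a partially-populated array stays safe to iterate.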
+2 -1
drivers/regulator/mp886x.c
··· 348 348 MODULE_DEVICE_TABLE(of, mp886x_dt_ids); 349 349 350 350 static const struct i2c_device_id mp886x_id[] = { 351 - { "mp886x", (kernel_ulong_t)&mp8869_ci }, 351 + { "mp8867", (kernel_ulong_t)&mp8867_ci }, 352 + { "mp8869", (kernel_ulong_t)&mp8869_ci }, 352 353 { }, 353 354 }; 354 355 MODULE_DEVICE_TABLE(i2c, mp886x_id);
+4 -1
drivers/regulator/sy8824x.c
··· 213 213 MODULE_DEVICE_TABLE(of, sy8824_dt_ids); 214 214 215 215 static const struct i2c_device_id sy8824_id[] = { 216 - { "sy8824", (kernel_ulong_t)&sy8824c_cfg }, 216 + { "sy8824c", (kernel_ulong_t)&sy8824c_cfg }, 217 + { "sy8824e", (kernel_ulong_t)&sy8824e_cfg }, 218 + { "sy20276", (kernel_ulong_t)&sy20276_cfg }, 219 + { "sy20278", (kernel_ulong_t)&sy20278_cfg }, 217 220 { } 218 221 }; 219 222 MODULE_DEVICE_TABLE(i2c, sy8824_id);
+14 -14
drivers/regulator/tps65219-regulator.c
··· 436 436 pmic->rdesc[i].name); 437 437 } 438 438 439 - irq_data = devm_kmalloc(tps->dev, pmic->common_irq_size, GFP_KERNEL); 440 - if (!irq_data) 441 - return -ENOMEM; 442 - 443 439 for (i = 0; i < pmic->common_irq_size; ++i) { 444 440 irq_type = &pmic->common_irq_types[i]; 445 441 irq = platform_get_irq_byname(pdev, irq_type->irq_name); 446 442 if (irq < 0) 447 443 return -EINVAL; 448 444 449 - irq_data[i].dev = tps->dev; 450 - irq_data[i].type = irq_type; 445 + irq_data = devm_kmalloc(tps->dev, sizeof(*irq_data), GFP_KERNEL); 446 + if (!irq_data) 447 + return -ENOMEM; 448 + 449 + irq_data->dev = tps->dev; 450 + irq_data->type = irq_type; 451 451 error = devm_request_threaded_irq(tps->dev, irq, NULL, 452 452 tps65219_regulator_irq_handler, 453 453 IRQF_ONESHOT, 454 454 irq_type->irq_name, 455 - &irq_data[i]); 455 + irq_data); 456 456 if (error) 457 457 return dev_err_probe(tps->dev, PTR_ERR(rdev), 458 458 "Failed to request %s IRQ %d: %d\n", 459 459 irq_type->irq_name, irq, error); 460 460 } 461 - 462 - irq_data = devm_kmalloc(tps->dev, pmic->dev_irq_size, GFP_KERNEL); 463 - if (!irq_data) 464 - return -ENOMEM; 465 461 466 462 for (i = 0; i < pmic->dev_irq_size; ++i) { 467 463 irq_type = &pmic->irq_types[i]; ··· 465 469 if (irq < 0) 466 470 return -EINVAL; 467 471 468 - irq_data[i].dev = tps->dev; 469 - irq_data[i].type = irq_type; 472 + irq_data = devm_kmalloc(tps->dev, sizeof(*irq_data), GFP_KERNEL); 473 + if (!irq_data) 474 + return -ENOMEM; 475 + 476 + irq_data->dev = tps->dev; 477 + irq_data->type = irq_type; 470 478 error = devm_request_threaded_irq(tps->dev, irq, NULL, 471 479 tps65219_regulator_irq_handler, 472 480 IRQF_ONESHOT, 473 481 irq_type->irq_name, 474 - &irq_data[i]); 482 + irq_data); 475 483 if (error) 476 484 return dev_err_probe(tps->dev, PTR_ERR(rdev), 477 485 "Failed to request %s IRQ %d: %d\n",
+11 -7
drivers/scsi/hosts.c
··· 473 473 else 474 474 shost->max_sectors = SCSI_DEFAULT_MAX_SECTORS; 475 475 476 - if (sht->max_segment_size) 477 - shost->max_segment_size = sht->max_segment_size; 478 - else 479 - shost->max_segment_size = BLK_MAX_SEGMENT_SIZE; 476 + shost->virt_boundary_mask = sht->virt_boundary_mask; 477 + if (shost->virt_boundary_mask) { 478 + WARN_ON_ONCE(sht->max_segment_size && 479 + sht->max_segment_size != UINT_MAX); 480 + shost->max_segment_size = UINT_MAX; 481 + } else { 482 + if (sht->max_segment_size) 483 + shost->max_segment_size = sht->max_segment_size; 484 + else 485 + shost->max_segment_size = BLK_MAX_SEGMENT_SIZE; 486 + } 480 487 481 488 /* 32-byte (dword) is a common minimum for HBAs. */ 482 489 if (sht->dma_alignment) ··· 498 491 shost->dma_boundary = sht->dma_boundary; 499 492 else 500 493 shost->dma_boundary = 0xffffffff; 501 - 502 - if (sht->virt_boundary_mask) 503 - shost->virt_boundary_mask = sht->virt_boundary_mask; 504 494 505 495 device_initialize(&shost->shost_gendev); 506 496 dev_set_name(&shost->shost_gendev, "host%d", shost->host_no);
+1 -1
drivers/scsi/qla2xxx/qla_mbx.c
··· 2147 2147 2148 2148 pdb_dma = dma_map_single(&vha->hw->pdev->dev, pdb, 2149 2149 sizeof(*pdb), DMA_FROM_DEVICE); 2150 - if (!pdb_dma) { 2150 + if (dma_mapping_error(&vha->hw->pdev->dev, pdb_dma)) { 2151 2151 ql_log(ql_log_warn, vha, 0x1116, "Failed to map dma buffer.\n"); 2152 2152 return QLA_MEMORY_ALLOC_FAILED; 2153 2153 }
+2
drivers/scsi/qla4xxx/ql4_os.c
··· 3420 3420 task_data->data_dma = dma_map_single(&ha->pdev->dev, task->data, 3421 3421 task->data_count, 3422 3422 DMA_TO_DEVICE); 3423 + if (dma_mapping_error(&ha->pdev->dev, task_data->data_dma)) 3424 + return -ENOMEM; 3423 3425 } 3424 3426 3425 3427 DEBUG2(ql4_printk(KERN_INFO, ha, "%s: MaxRecvLen %u, iscsi hrd %d\n",
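Both qla hunks above replace a NULL test (or add a missing check) with `dma_mapping_error()`. The reason: a DMA handle is an opaque cookie, and on many platforms the failure value is all-ones rather than 0, while 0 can be a perfectly valid bus address, so `if (!pdb_dma)` never fired. An illustrative userspace model of the pattern (this mocks the kernel DMA API; the names and the all-ones error cookie are assumptions of the sketch):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;

/* The "mapping failed" cookie is a sentinel, not NULL. */
#define FAKE_DMA_ERROR ((dma_addr_t)~0ULL)

static dma_addr_t fake_dma_map(int fail)
{
	return fail ? FAKE_DMA_ERROR : 0x1000;
}

/* The only portable failure test: compare against the error cookie,
 * as dma_mapping_error() does, never against 0. */
static int fake_dma_mapping_error(dma_addr_t handle)
{
	return handle == FAKE_DMA_ERROR;
}
```

Because the failed mapping is nonzero, truthiness checks on the handle silently pass and the driver goes on to use an invalid address.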
+1 -1
drivers/scsi/sd.c
··· 3384 3384 3385 3385 rcu_read_lock(); 3386 3386 vpd = rcu_dereference(sdkp->device->vpd_pgb7); 3387 - if (vpd && vpd->len >= 2) 3387 + if (vpd && vpd->len >= 6) 3388 3388 sdkp->rscs = vpd->data[5] & 1; 3389 3389 rcu_read_unlock(); 3390 3390 }
-5
drivers/spi/spi-cadence-quadspi.c
··· 1960 1960 1961 1961 pm_runtime_enable(dev); 1962 1962 1963 - if (cqspi->rx_chan) { 1964 - dma_release_channel(cqspi->rx_chan); 1965 - goto probe_setup_failed; 1966 - } 1967 - 1968 1963 pm_runtime_set_autosuspend_delay(dev, CQSPI_AUTOSUSPEND_TIMEOUT); 1969 1964 pm_runtime_use_autosuspend(dev); 1970 1965 pm_runtime_get_noresume(dev);
+10 -1
drivers/spi/spi-fsl-dspi.c
··· 983 983 if (dspi->devtype_data->trans_mode == DSPI_DMA_MODE) { 984 984 status = dspi_dma_xfer(dspi); 985 985 } else { 986 + /* 987 + * Reinitialize the completion before transferring data 988 + * to avoid the case where it might remain in the done 989 + * state due to a spurious interrupt from a previous 990 + * transfer. This could falsely signal that the current 991 + * transfer has completed. 992 + */ 993 + if (dspi->irq) 994 + reinit_completion(&dspi->xfer_done); 995 + 986 996 dspi_fifo_write(dspi); 987 997 988 998 if (dspi->irq) { 989 999 wait_for_completion(&dspi->xfer_done); 990 - reinit_completion(&dspi->xfer_done); 991 1000 } else { 992 1001 do { 993 1002 status = dspi_poll(dspi);
+16
drivers/spi/spi-qpic-snand.c
··· 315 315 316 316 mtd_set_ooblayout(mtd, &qcom_spi_ooblayout); 317 317 318 + /* 319 + * Free the temporary BAM transaction allocated initially by 320 + * qcom_nandc_alloc(), and allocate a new one based on the 321 + * updated max_cwperpage value. 322 + */ 323 + qcom_free_bam_transaction(snandc); 324 + 325 + snandc->max_cwperpage = cwperpage; 326 + 327 + snandc->bam_txn = qcom_alloc_bam_transaction(snandc); 328 + if (!snandc->bam_txn) { 329 + dev_err(snandc->dev, "failed to allocate BAM transaction\n"); 330 + ret = -ENOMEM; 331 + goto err_free_ecc_cfg; 332 + } 333 + 318 334 ecc_cfg->cfg0 = FIELD_PREP(CW_PER_PAGE_MASK, (cwperpage - 1)) | 319 335 FIELD_PREP(UD_SIZE_BYTES_MASK, ecc_cfg->cw_data) | 320 336 FIELD_PREP(DISABLE_STATUS_AFTER_WRITE, 1) |
+32 -9
drivers/tee/optee/ffa_abi.c
··· 728 728 return true; 729 729 } 730 730 731 + static void notif_work_fn(struct work_struct *work) 732 + { 733 + struct optee_ffa *optee_ffa = container_of(work, struct optee_ffa, 734 + notif_work); 735 + struct optee *optee = container_of(optee_ffa, struct optee, ffa); 736 + 737 + optee_do_bottom_half(optee->ctx); 738 + } 739 + 731 740 static void notif_callback(int notify_id, void *cb_data) 732 741 { 733 742 struct optee *optee = cb_data; 734 743 735 744 if (notify_id == optee->ffa.bottom_half_value) 736 - optee_do_bottom_half(optee->ctx); 745 + queue_work(optee->ffa.notif_wq, &optee->ffa.notif_work); 737 746 else 738 747 optee_notif_send(optee, notify_id); 739 748 } ··· 826 817 struct optee *optee = ffa_dev_get_drvdata(ffa_dev); 827 818 u32 bottom_half_id = optee->ffa.bottom_half_value; 828 819 829 - if (bottom_half_id != U32_MAX) 820 + if (bottom_half_id != U32_MAX) { 830 821 ffa_dev->ops->notifier_ops->notify_relinquish(ffa_dev, 831 822 bottom_half_id); 823 + destroy_workqueue(optee->ffa.notif_wq); 824 + } 832 825 optee_remove_common(optee); 833 826 834 827 mutex_destroy(&optee->ffa.mutex); ··· 845 834 bool is_per_vcpu = false; 846 835 u32 notif_id = 0; 847 836 int rc; 837 + 838 + INIT_WORK(&optee->ffa.notif_work, notif_work_fn); 839 + optee->ffa.notif_wq = create_workqueue("optee_notification"); 840 + if (!optee->ffa.notif_wq) { 841 + rc = -EINVAL; 842 + goto err; 843 + } 848 844 849 845 while (true) { 850 846 rc = ffa_dev->ops->notifier_ops->notify_request(ffa_dev, ··· 869 851 * notifications in that case. 
870 852 */ 871 853 if (rc != -EACCES) 872 - return rc; 854 + goto err_wq; 873 855 notif_id++; 874 856 if (notif_id >= OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE) 875 - return rc; 857 + goto err_wq; 876 858 } 877 859 optee->ffa.bottom_half_value = notif_id; 878 860 879 861 rc = enable_async_notif(optee); 880 - if (rc < 0) { 881 - ffa_dev->ops->notifier_ops->notify_relinquish(ffa_dev, 882 - notif_id); 883 - optee->ffa.bottom_half_value = U32_MAX; 884 - } 862 + if (rc < 0) 863 + goto err_rel; 864 + 865 + return 0; 866 + err_rel: 867 + ffa_dev->ops->notifier_ops->notify_relinquish(ffa_dev, notif_id); 868 + err_wq: 869 + destroy_workqueue(optee->ffa.notif_wq); 870 + err: 871 + optee->ffa.bottom_half_value = U32_MAX; 885 872 886 873 return rc; 887 874 }
+2
drivers/tee/optee/optee_private.h
··· 165 165 /* Serializes access to @global_ids */ 166 166 struct mutex mutex; 167 167 struct rhashtable global_ids; 168 + struct workqueue_struct *notif_wq; 169 + struct work_struct notif_work; 168 170 }; 169 171 170 172 struct optee;
+2 -2
drivers/ufs/core/ufs-sysfs.c
··· 1808 1808 UFS_UNIT_DESC_PARAM(logical_block_count, _LOGICAL_BLK_COUNT, 8); 1809 1809 UFS_UNIT_DESC_PARAM(erase_block_size, _ERASE_BLK_SIZE, 4); 1810 1810 UFS_UNIT_DESC_PARAM(provisioning_type, _PROVISIONING_TYPE, 1); 1811 - UFS_UNIT_DESC_PARAM(physical_memory_resourse_count, _PHY_MEM_RSRC_CNT, 8); 1811 + UFS_UNIT_DESC_PARAM(physical_memory_resource_count, _PHY_MEM_RSRC_CNT, 8); 1812 1812 UFS_UNIT_DESC_PARAM(context_capabilities, _CTX_CAPABILITIES, 2); 1813 1813 UFS_UNIT_DESC_PARAM(large_unit_granularity, _LARGE_UNIT_SIZE_M1, 1); 1814 1814 UFS_UNIT_DESC_PARAM(wb_buf_alloc_units, _WB_BUF_ALLOC_UNITS, 4); ··· 1825 1825 &dev_attr_logical_block_count.attr, 1826 1826 &dev_attr_erase_block_size.attr, 1827 1827 &dev_attr_provisioning_type.attr, 1828 - &dev_attr_physical_memory_resourse_count.attr, 1828 + &dev_attr_physical_memory_resource_count.attr, 1829 1829 &dev_attr_context_capabilities.attr, 1830 1830 &dev_attr_large_unit_granularity.attr, 1831 1831 &dev_attr_wb_buf_alloc_units.attr,
+3 -2
drivers/usb/cdns3/cdnsp-debug.h
··· 327 327 case TRB_RESET_EP: 328 328 case TRB_HALT_ENDPOINT: 329 329 ret = scnprintf(str, size, 330 - "%s: ep%d%s(%d) ctx %08x%08x slot %ld flags %c", 330 + "%s: ep%d%s(%d) ctx %08x%08x slot %ld flags %c %c", 331 331 cdnsp_trb_type_string(type), 332 332 ep_num, ep_id % 2 ? "out" : "in", 333 333 TRB_TO_EP_INDEX(field3), field1, field0, 334 334 TRB_TO_SLOT_ID(field3), 335 - field3 & TRB_CYCLE ? 'C' : 'c'); 335 + field3 & TRB_CYCLE ? 'C' : 'c', 336 + field3 & TRB_ESP ? 'P' : 'p'); 336 337 break; 337 338 case TRB_STOP_RING: 338 339 ret = scnprintf(str, size,
+15 -3
drivers/usb/cdns3/cdnsp-ep0.c
··· 414 414 void cdnsp_setup_analyze(struct cdnsp_device *pdev) 415 415 { 416 416 struct usb_ctrlrequest *ctrl = &pdev->setup; 417 + struct cdnsp_ep *pep; 417 418 int ret = -EINVAL; 418 419 u16 len; 419 420 ··· 428 427 goto out; 429 428 } 430 429 430 + pep = &pdev->eps[0]; 431 + 431 432 /* Restore the ep0 to Stopped/Running state. */ 432 - if (pdev->eps[0].ep_state & EP_HALTED) { 433 - trace_cdnsp_ep0_halted("Restore to normal state"); 434 - cdnsp_halt_endpoint(pdev, &pdev->eps[0], 0); 433 + if (pep->ep_state & EP_HALTED) { 434 + if (GET_EP_CTX_STATE(pep->out_ctx) == EP_STATE_HALTED) 435 + cdnsp_halt_endpoint(pdev, pep, 0); 436 + 437 + /* 438 + * Halt Endpoint Command for SSP2 for ep0 preserve current 439 + * endpoint state and driver has to synchronize the 440 + * software endpoint state with endpoint output context 441 + * state. 442 + */ 443 + pep->ep_state &= ~EP_HALTED; 444 + pep->ep_state |= EP_STOPPED; 435 445 } 436 446 437 447 /*
+6
drivers/usb/cdns3/cdnsp-gadget.h
··· 987 987 #define STREAM_ID_FOR_TRB(p) ((((p)) << 16) & GENMASK(31, 16)) 988 988 #define SCT_FOR_TRB(p) (((p) << 1) & 0x7) 989 989 990 + /* 991 + * Halt Endpoint Command TRB field. 992 + * The ESP bit only exists in the SSP2 controller. 993 + */ 994 + #define TRB_ESP BIT(9) 995 + 990 996 /* Link TRB specific fields. */ 991 997 #define TRB_TC BIT(1) 992 998
+5 -2
drivers/usb/cdns3/cdnsp-ring.c
··· 772 772 } 773 773 774 774 if (port_id != old_port) { 775 - cdnsp_disable_slot(pdev); 775 + if (pdev->slot_id) 776 + cdnsp_disable_slot(pdev); 777 + 776 778 pdev->active_port = port; 777 779 cdnsp_enable_slot(pdev); 778 780 } ··· 2485 2483 { 2486 2484 cdnsp_queue_command(pdev, 0, 0, 0, TRB_TYPE(TRB_HALT_ENDPOINT) | 2487 2485 SLOT_ID_FOR_TRB(pdev->slot_id) | 2488 - EP_ID_FOR_TRB(ep_index)); 2486 + EP_ID_FOR_TRB(ep_index) | 2487 + (!ep_index ? TRB_ESP : 0)); 2489 2488 } 2490 2489 2491 2490 void cdnsp_force_header_wakeup(struct cdnsp_device *pdev, int intf_num)
+7
drivers/usb/chipidea/udc.c
··· 2374 2374 */ 2375 2375 if (hw_read(ci, OP_ENDPTLISTADDR, ~0) == 0) 2376 2376 hw_write(ci, OP_ENDPTLISTADDR, ~0, ~0); 2377 + 2378 + if (ci->gadget.connected && 2379 + (!ci->suspended || !device_may_wakeup(ci->dev))) 2380 + usb_gadget_disconnect(&ci->gadget); 2377 2381 } 2378 2382 2379 2383 static void udc_resume(struct ci_hdrc *ci, bool power_lost) ··· 2388 2384 OTGSC_BSVIS | OTGSC_BSVIE); 2389 2385 if (ci->vbus_active) 2390 2386 usb_gadget_vbus_disconnect(&ci->gadget); 2387 + } else if (ci->vbus_active && ci->driver && 2388 + !ci->gadget.connected) { 2389 + usb_gadget_connect(&ci->gadget); 2391 2390 } 2392 2391 2393 2392 /* Restore value 0 if it was set for power lost check */
+31
drivers/usb/core/hub.c
··· 68 68 */ 69 69 #define USB_SHORT_SET_ADDRESS_REQ_TIMEOUT 500 /* ms */ 70 70 71 + /* 72 + * Give SS hubs 200ms time after wake to train downstream links before 73 + * assuming no port activity and allowing hub to runtime suspend back. 74 + */ 75 + #define USB_SS_PORT_U0_WAKE_TIME 200 /* ms */ 76 + 71 77 /* Protect struct usb_device->state and ->children members 72 78 * Note: Both are also protected by ->dev.sem, except that ->state can 73 79 * change to USB_STATE_NOTATTACHED even when the semaphore isn't held. */ ··· 1101 1095 goto init2; 1102 1096 goto init3; 1103 1097 } 1098 + 1104 1099 hub_get(hub); 1105 1100 1106 1101 /* The superspeed hub except for root hub has to use Hub Depth ··· 1350 1343 device_unlock(&hdev->dev); 1351 1344 } 1352 1345 1346 + if (type == HUB_RESUME && hub_is_superspeed(hub->hdev)) { 1347 + /* give usb3 downstream links training time after hub resume */ 1348 + usb_autopm_get_interface_no_resume( 1349 + to_usb_interface(hub->intfdev)); 1350 + 1351 + queue_delayed_work(system_power_efficient_wq, 1352 + &hub->post_resume_work, 1353 + msecs_to_jiffies(USB_SS_PORT_U0_WAKE_TIME)); 1354 + return; 1355 + } 1356 + 1353 1357 hub_put(hub); 1354 1358 } 1355 1359 ··· 1377 1359 struct usb_hub *hub = container_of(ws, struct usb_hub, init_work.work); 1378 1360 1379 1361 hub_activate(hub, HUB_INIT3); 1362 + } 1363 + 1364 + static void hub_post_resume(struct work_struct *ws) 1365 + { 1366 + struct usb_hub *hub = container_of(ws, struct usb_hub, post_resume_work.work); 1367 + 1368 + usb_autopm_put_interface_async(to_usb_interface(hub->intfdev)); 1369 + hub_put(hub); 1380 1370 } 1381 1371 1382 1372 enum hub_quiescing_type { ··· 1412 1386 1413 1387 /* Stop hub_wq and related activity */ 1414 1388 timer_delete_sync(&hub->irq_urb_retry); 1389 + flush_delayed_work(&hub->post_resume_work); 1415 1390 usb_kill_urb(hub->urb); 1416 1391 if (hub->has_indicators) 1417 1392 cancel_delayed_work_sync(&hub->leds); ··· 1971 1944 hub->hdev = hdev; 1972 1945 INIT_DELAYED_WORK(&hub->leds, led_work); 1973 1946 INIT_DELAYED_WORK(&hub->init_work, NULL); 1947 + INIT_DELAYED_WORK(&hub->post_resume_work, hub_post_resume); 1974 1948 INIT_WORK(&hub->events, hub_event); 1975 1949 INIT_LIST_HEAD(&hub->onboard_devs); 1976 1950 spin_lock_init(&hub->irq_urb_lock); ··· 2364 2336 2365 2337 usb_remove_ep_devs(&udev->ep0); 2366 2338 usb_unlock_device(udev); 2339 + 2340 + if (udev->usb4_link) 2341 + device_link_del(udev->usb4_link); 2367 2342 2368 2343 /* Unregister the device. The device driver is responsible 2369 2344 * for de-configuring the device and invoking the remove-device
+1
drivers/usb/core/hub.h
··· 70 70 u8 indicator[USB_MAXCHILDREN]; 71 71 struct delayed_work leds; 72 72 struct delayed_work init_work; 73 + struct delayed_work post_resume_work; 73 74 struct work_struct events; 74 75 spinlock_t irq_urb_lock; 75 76 struct timer_list irq_urb_retry;
+2 -1
drivers/usb/core/quirks.c
··· 227 227 { USB_DEVICE(0x046a, 0x0023), .driver_info = USB_QUIRK_RESET_RESUME }, 228 228 229 229 /* Logitech HD Webcam C270 */ 230 - { USB_DEVICE(0x046d, 0x0825), .driver_info = USB_QUIRK_RESET_RESUME }, 230 + { USB_DEVICE(0x046d, 0x0825), .driver_info = USB_QUIRK_RESET_RESUME | 231 + USB_QUIRK_NO_LPM}, 231 232 232 233 /* Logitech HD Pro Webcams C920, C920-C, C922, C925e and C930e */ 233 234 { USB_DEVICE(0x046d, 0x082d), .driver_info = USB_QUIRK_DELAY_INIT },
+3 -1
drivers/usb/core/usb-acpi.c
··· 157 157 */ 158 158 static int usb_acpi_add_usb4_devlink(struct usb_device *udev) 159 159 { 160 - const struct device_link *link; 160 + struct device_link *link; 161 161 struct usb_port *port_dev; 162 162 struct usb_hub *hub; 163 163 ··· 187 187 188 188 dev_dbg(&port_dev->dev, "Created device link from %s to %s\n", 189 189 dev_name(&port_dev->child->dev), dev_name(nhi_fwnode->dev)); 190 + 191 + udev->usb4_link = link; 190 192 191 193 return 0; 192 194 }
+7 -2
drivers/usb/dwc3/core.c
··· 2422 2422 { 2423 2423 u32 reg; 2424 2424 int i; 2425 + int ret; 2425 2426 2426 2427 if (!pm_runtime_suspended(dwc->dev) && !PMSG_IS_AUTO(msg)) { 2427 2428 dwc->susphy_state = (dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0)) & ··· 2441 2440 case DWC3_GCTL_PRTCAP_DEVICE: 2442 2441 if (pm_runtime_suspended(dwc->dev)) 2443 2442 break; 2444 - dwc3_gadget_suspend(dwc); 2443 + ret = dwc3_gadget_suspend(dwc); 2444 + if (ret) 2445 + return ret; 2445 2446 synchronize_irq(dwc->irq_gadget); 2446 2447 dwc3_core_exit(dwc); 2447 2448 break; ··· 2478 2475 break; 2479 2476 2480 2477 if (dwc->current_otg_role == DWC3_OTG_ROLE_DEVICE) { 2481 - dwc3_gadget_suspend(dwc); 2478 + ret = dwc3_gadget_suspend(dwc); 2479 + if (ret) 2480 + return ret; 2482 2481 synchronize_irq(dwc->irq_gadget); 2483 2482 } 2484 2483
+10 -14
drivers/usb/dwc3/gadget.c
··· 3516 3516 * We're going to do that here to avoid problems of HW trying 3517 3517 * to use bogus TRBs for transfers. 3518 3518 */ 3519 - if (chain && (trb->ctrl & DWC3_TRB_CTRL_HWO)) 3519 + if (trb->ctrl & DWC3_TRB_CTRL_HWO) 3520 3520 trb->ctrl &= ~DWC3_TRB_CTRL_HWO; 3521 3521 3522 3522 /* ··· 4821 4821 int ret; 4822 4822 4823 4823 ret = dwc3_gadget_soft_disconnect(dwc); 4824 - if (ret) 4825 - goto err; 4824 + /* 4825 + * Attempt to reset the controller's state. Likely no 4826 + * communication can be established until the host 4827 + * performs a port reset. 4828 + */ 4829 + if (ret && dwc->softconnect) { 4830 + dwc3_gadget_soft_connect(dwc); 4831 + return -EAGAIN; 4832 + } 4826 4833 4827 4834 spin_lock_irqsave(&dwc->lock, flags); 4828 4835 if (dwc->gadget_driver) ··· 4837 4830 spin_unlock_irqrestore(&dwc->lock, flags); 4838 4831 4839 4832 return 0; 4840 - 4841 - err: 4842 - /* 4843 - * Attempt to reset the controller's state. Likely no 4844 - * communication can be established until the host 4845 - * performs a port reset. 4846 - */ 4847 - if (dwc->softconnect) 4848 - dwc3_gadget_soft_connect(dwc); 4849 - 4850 - return ret; 4851 4833 } 4852 4834 4853 4835 int dwc3_gadget_resume(struct dwc3 *dwc)
+4 -8
drivers/usb/gadget/function/u_serial.c
··· 295 295 break; 296 296 } 297 297 298 - if (do_tty_wake && port->port.tty) 299 - tty_wakeup(port->port.tty); 298 + if (do_tty_wake) 299 + tty_port_tty_wakeup(&port->port); 300 300 return status; 301 301 } 302 302 ··· 544 544 static int gs_start_io(struct gs_port *port) 545 545 { 546 546 struct list_head *head = &port->read_pool; 547 - struct usb_ep *ep; 547 + struct usb_ep *ep = port->port_usb->out; 548 548 int status; 549 549 unsigned started; 550 - 551 - if (!port->port_usb || !port->port.tty) 552 - return -EIO; 553 550 554 551 /* Allocate RX and TX I/O buffers. We can't easily do this much 555 552 * earlier (with GFP_KERNEL) because the requests are coupled to ··· 554 557 * configurations may use different endpoints with a given port; 555 558 * and high speed vs full speed changes packet sizes too. 556 559 */ 557 - ep = port->port_usb->out; 558 560 status = gs_alloc_requests(ep, head, gs_read_complete, 559 561 &port->read_allocated); 560 562 if (status) ··· 574 578 gs_start_tx(port); 575 579 /* Unblock any pending writes into our circular buffer, in case 576 580 * we didn't in gs_start_tx() */ 577 - tty_wakeup(port->port.tty); 581 + tty_port_tty_wakeup(&port->port); 578 582 } else { 579 583 /* Free reqs only if we are still connected */ 580 584 if (port->port_usb) {
+4
drivers/usb/host/xhci-dbgcap.c
··· 652 652 case DS_DISABLED: 653 653 return; 654 654 case DS_CONFIGURED: 655 + spin_lock(&dbc->lock); 656 + xhci_dbc_flush_requests(dbc); 657 + spin_unlock(&dbc->lock); 658 + 655 659 if (dbc->driver->disconnect) 656 660 dbc->driver->disconnect(dbc); 657 661 break;
+1
drivers/usb/host/xhci-dbgtty.c
··· 617 617 dbc_tty_driver->type = TTY_DRIVER_TYPE_SERIAL; 618 618 dbc_tty_driver->subtype = SERIAL_TYPE_NORMAL; 619 619 dbc_tty_driver->init_termios = tty_std_termios; 620 + dbc_tty_driver->init_termios.c_lflag &= ~ECHO; 620 621 dbc_tty_driver->init_termios.c_cflag = 621 622 B9600 | CS8 | CREAD | HUPCL | CLOCAL; 622 623 dbc_tty_driver->init_termios.c_ispeed = 9600;
+4
drivers/usb/host/xhci-mem.c
··· 1449 1449 /* Periodic endpoint bInterval limit quirk */ 1450 1450 if (usb_endpoint_xfer_int(&ep->desc) || 1451 1451 usb_endpoint_xfer_isoc(&ep->desc)) { 1452 + if ((xhci->quirks & XHCI_LIMIT_ENDPOINT_INTERVAL_9) && 1453 + interval >= 9) { 1454 + interval = 8; 1455 + } 1452 1456 if ((xhci->quirks & XHCI_LIMIT_ENDPOINT_INTERVAL_7) && 1453 1457 udev->speed >= USB_SPEED_HIGH && 1454 1458 interval >= 7) {
+25
drivers/usb/host/xhci-pci.c
··· 71 71 #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_XHCI 0x15ec 72 72 #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_XHCI 0x15f0 73 73 74 + #define PCI_DEVICE_ID_AMD_ARIEL_TYPEC_XHCI 0x13ed 75 + #define PCI_DEVICE_ID_AMD_ARIEL_TYPEA_XHCI 0x13ee 76 + #define PCI_DEVICE_ID_AMD_STARSHIP_XHCI 0x148c 77 + #define PCI_DEVICE_ID_AMD_FIREFLIGHT_15D4_XHCI 0x15d4 78 + #define PCI_DEVICE_ID_AMD_FIREFLIGHT_15D5_XHCI 0x15d5 79 + #define PCI_DEVICE_ID_AMD_RAVEN_15E0_XHCI 0x15e0 80 + #define PCI_DEVICE_ID_AMD_RAVEN_15E1_XHCI 0x15e1 81 + #define PCI_DEVICE_ID_AMD_RAVEN2_XHCI 0x15e5 74 82 #define PCI_DEVICE_ID_AMD_RENOIR_XHCI 0x1639 75 83 #define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9 76 84 #define PCI_DEVICE_ID_AMD_PROMONTORYA_3 0x43ba 77 85 #define PCI_DEVICE_ID_AMD_PROMONTORYA_2 0x43bb 78 86 #define PCI_DEVICE_ID_AMD_PROMONTORYA_1 0x43bc 87 + 88 + #define PCI_DEVICE_ID_ATI_NAVI10_7316_XHCI 0x7316 79 89 80 90 #define PCI_DEVICE_ID_ASMEDIA_1042_XHCI 0x1042 81 91 #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI 0x1142 ··· 289 279 290 280 if (pdev->vendor == PCI_VENDOR_ID_NEC) 291 281 xhci->quirks |= XHCI_NEC_HOST; 282 + 283 + if (pdev->vendor == PCI_VENDOR_ID_AMD && 284 + (pdev->device == PCI_DEVICE_ID_AMD_ARIEL_TYPEC_XHCI || 285 + pdev->device == PCI_DEVICE_ID_AMD_ARIEL_TYPEA_XHCI || 286 + pdev->device == PCI_DEVICE_ID_AMD_STARSHIP_XHCI || 287 + pdev->device == PCI_DEVICE_ID_AMD_FIREFLIGHT_15D4_XHCI || 288 + pdev->device == PCI_DEVICE_ID_AMD_FIREFLIGHT_15D5_XHCI || 289 + pdev->device == PCI_DEVICE_ID_AMD_RAVEN_15E0_XHCI || 290 + pdev->device == PCI_DEVICE_ID_AMD_RAVEN_15E1_XHCI || 291 + pdev->device == PCI_DEVICE_ID_AMD_RAVEN2_XHCI)) 292 + xhci->quirks |= XHCI_LIMIT_ENDPOINT_INTERVAL_9; 293 + 294 + if (pdev->vendor == PCI_VENDOR_ID_ATI && 295 + pdev->device == PCI_DEVICE_ID_ATI_NAVI10_7316_XHCI) 296 + xhci->quirks |= XHCI_LIMIT_ENDPOINT_INTERVAL_9; 292 297 293 298 if (pdev->vendor == PCI_VENDOR_ID_AMD && xhci->hci_version == 0x96) 294 299 xhci->quirks |= XHCI_AMD_0x96_HOST;
+2 -1
drivers/usb/host/xhci-plat.c
··· 328 328 } 329 329 330 330 usb3_hcd = xhci_get_usb3_hcd(xhci); 331 - if (usb3_hcd && HCC_MAX_PSA(xhci->hcc_params) >= 4) 331 + if (usb3_hcd && HCC_MAX_PSA(xhci->hcc_params) >= 4 && 332 + !(xhci->quirks & XHCI_BROKEN_STREAMS)) 332 333 usb3_hcd->can_do_streams = 1; 333 334 334 335 if (xhci->shared_hcd) {
+2 -3
drivers/usb/host/xhci-ring.c
··· 518 518 * In the future we should distinguish between -ENODEV and -ETIMEDOUT 519 519 * and try to recover a -ETIMEDOUT with a host controller reset. 520 520 */ 521 - ret = xhci_handshake_check_state(xhci, &xhci->op_regs->cmd_ring, 522 - CMD_RING_RUNNING, 0, 5 * 1000 * 1000, 523 - XHCI_STATE_REMOVING); 521 + ret = xhci_handshake(&xhci->op_regs->cmd_ring, 522 + CMD_RING_RUNNING, 0, 5 * 1000 * 1000); 524 523 if (ret < 0) { 525 524 xhci_err(xhci, "Abort failed to stop command ring: %d\n", ret); 526 525 xhci_halt(xhci);
+5 -26
drivers/usb/host/xhci.c
··· 85 85 } 86 86 87 87 /* 88 - * xhci_handshake_check_state - same as xhci_handshake but takes an additional 89 - * exit_state parameter, and bails out with an error immediately when xhc_state 90 - * has exit_state flag set. 91 - */ 92 - int xhci_handshake_check_state(struct xhci_hcd *xhci, void __iomem *ptr, 93 - u32 mask, u32 done, int usec, unsigned int exit_state) 94 - { 95 - u32 result; 96 - int ret; 97 - 98 - ret = readl_poll_timeout_atomic(ptr, result, 99 - (result & mask) == done || 100 - result == U32_MAX || 101 - xhci->xhc_state & exit_state, 102 - 1, usec); 103 - 104 - if (result == U32_MAX || xhci->xhc_state & exit_state) 105 - return -ENODEV; 106 - 107 - return ret; 108 - } 109 - 110 - /* 111 88 * Disable interrupts and begin the xHCI halting process. 112 89 */ 113 90 void xhci_quiesce(struct xhci_hcd *xhci) ··· 204 227 if (xhci->quirks & XHCI_INTEL_HOST) 205 228 udelay(1000); 206 229 207 - ret = xhci_handshake_check_state(xhci, &xhci->op_regs->command, 208 - CMD_RESET, 0, timeout_us, XHCI_STATE_REMOVING); 230 + ret = xhci_handshake(&xhci->op_regs->command, CMD_RESET, 0, timeout_us); 209 231 if (ret) 210 232 return ret; 211 233 ··· 1158 1182 xhci_dbg(xhci, "Stop HCD\n"); 1159 1183 xhci_halt(xhci); 1160 1184 xhci_zero_64b_regs(xhci); 1161 - retval = xhci_reset(xhci, XHCI_RESET_LONG_USEC); 1185 + if (xhci->xhc_state & XHCI_STATE_REMOVING) 1186 + retval = -ENODEV; 1187 + else 1188 + retval = xhci_reset(xhci, XHCI_RESET_LONG_USEC); 1162 1189 spin_unlock_irq(&xhci->lock); 1163 1190 if (retval) 1164 1191 return retval;
+1 -2
drivers/usb/host/xhci.h
··· 1643 1643 #define XHCI_WRITE_64_HI_LO BIT_ULL(47) 1644 1644 #define XHCI_CDNS_SCTX_QUIRK BIT_ULL(48) 1645 1645 #define XHCI_ETRON_HOST BIT_ULL(49) 1646 + #define XHCI_LIMIT_ENDPOINT_INTERVAL_9 BIT_ULL(50) 1646 1647 1647 1648 unsigned int num_active_eps; 1648 1649 unsigned int limit_active_eps; ··· 1869 1868 /* xHCI host controller glue */ 1870 1869 typedef void (*xhci_get_quirks_t)(struct device *, struct xhci_hcd *); 1871 1870 int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, u64 timeout_us); 1872 - int xhci_handshake_check_state(struct xhci_hcd *xhci, void __iomem *ptr, 1873 - u32 mask, u32 done, int usec, unsigned int exit_state); 1874 1871 void xhci_quiesce(struct xhci_hcd *xhci); 1875 1872 int xhci_halt(struct xhci_hcd *xhci); 1876 1873 int xhci_start(struct xhci_hcd *xhci);
+2 -3
drivers/usb/typec/altmodes/displayport.c
··· 394 394 case CMDT_RSP_NAK: 395 395 switch (cmd) { 396 396 case DP_CMD_STATUS_UPDATE: 397 - if (typec_altmode_exit(alt)) 398 - dev_err(&dp->alt->dev, "Exit Mode Failed!\n"); 397 + dp->state = DP_STATE_EXIT; 399 398 break; 400 399 case DP_CMD_CONFIGURE: 401 400 dp->data.conf = 0; ··· 676 677 677 678 assignments = get_current_pin_assignments(dp); 678 679 679 - for (i = 0; assignments; assignments >>= 1, i++) { 680 + for (i = 0; assignments && i < DP_PIN_ASSIGN_MAX; assignments >>= 1, i++) { 680 681 if (assignments & 1) { 681 682 if (i == cur) 682 683 len += sprintf(buf + len, "[%s] ",
+17 -17
drivers/usb/typec/tcpm/tcpm.c
··· 4410 4410 4411 4411 tcpm_enable_auto_vbus_discharge(port, true); 4412 4412 4413 - ret = tcpm_set_roles(port, true, TYPEC_STATE_USB, 4414 - TYPEC_SOURCE, tcpm_data_role_for_source(port)); 4415 - if (ret < 0) 4416 - return ret; 4417 - 4418 - if (port->pd_supported) { 4419 - ret = port->tcpc->set_pd_rx(port->tcpc, true); 4420 - if (ret < 0) 4421 - goto out_disable_mux; 4422 - } 4423 - 4424 4413 /* 4425 4414 * USB Type-C specification, version 1.2, 4426 4415 * chapter 4.5.2.2.8.1 (Attached.SRC Requirements) ··· 4419 4430 (polarity == TYPEC_POLARITY_CC2 && port->cc1 == TYPEC_CC_RA)) { 4420 4431 ret = tcpm_set_vconn(port, true); 4421 4432 if (ret < 0) 4422 - goto out_disable_pd; 4433 + return ret; 4423 4434 } 4424 4435 4425 4436 ret = tcpm_set_vbus(port, true); 4426 4437 if (ret < 0) 4427 4438 goto out_disable_vconn; 4439 + 4440 + ret = tcpm_set_roles(port, true, TYPEC_STATE_USB, TYPEC_SOURCE, 4441 + tcpm_data_role_for_source(port)); 4442 + if (ret < 0) 4443 + goto out_disable_vbus; 4444 + 4445 + if (port->pd_supported) { 4446 + ret = port->tcpc->set_pd_rx(port->tcpc, true); 4447 + if (ret < 0) 4448 + goto out_disable_mux; 4449 + } 4428 4450 4429 4451 port->pd_capable = false; 4430 4452 ··· 4447 4447 4448 4448 return 0; 4449 4449 4450 - out_disable_vconn: 4451 - tcpm_set_vconn(port, false); 4452 - out_disable_pd: 4453 - if (port->pd_supported) 4454 - port->tcpc->set_pd_rx(port->tcpc, false); 4455 4450 out_disable_mux: 4456 4451 tcpm_mux_set(port, TYPEC_STATE_SAFE, USB_ROLE_NONE, 4457 4452 TYPEC_ORIENTATION_NONE); 4453 + out_disable_vbus: 4454 + tcpm_set_vbus(port, false); 4455 + out_disable_vconn: 4456 + tcpm_set_vconn(port, false); 4457 + 4458 4458 return ret; 4459 4459 } 4460 4460
+18 -5
fs/anon_inodes.c
··· 98 98 .kill_sb = kill_anon_super, 99 99 }; 100 100 101 - static struct inode *anon_inode_make_secure_inode( 102 - const char *name, 103 - const struct inode *context_inode) 101 + /** 102 + * anon_inode_make_secure_inode - allocate an anonymous inode with security context 103 + * @sb: [in] Superblock to allocate from 104 + * @name: [in] Name of the class of the new file (e.g., "secretmem") 105 + * @context_inode: 106 + * [in] Optional parent inode for security inheritance 107 + * 108 + * The function ensures proper security initialization through the LSM hook 109 + * security_inode_init_security_anon(). 110 + * 111 + * Return: Pointer to new inode on success, ERR_PTR on failure. 112 + */ 113 + struct inode *anon_inode_make_secure_inode(struct super_block *sb, const char *name, 114 + const struct inode *context_inode) 104 115 { 105 116 struct inode *inode; 106 117 int error; 107 118 108 - inode = alloc_anon_inode(anon_inode_mnt->mnt_sb); 119 + inode = alloc_anon_inode(sb); 109 120 if (IS_ERR(inode)) 110 121 return inode; 111 122 inode->i_flags &= ~S_PRIVATE; ··· 129 118 } 130 119 return inode; 131 120 } 121 + EXPORT_SYMBOL_GPL_FOR_MODULES(anon_inode_make_secure_inode, "kvm"); 132 122 133 123 static struct file *__anon_inode_getfile(const char *name, 134 124 const struct file_operations *fops, ··· 144 132 return ERR_PTR(-ENOENT); 145 133 146 134 if (make_inode) { 147 - inode = anon_inode_make_secure_inode(name, context_inode); 135 + inode = anon_inode_make_secure_inode(anon_inode_mnt->mnt_sb, 136 + name, context_inode); 148 137 if (IS_ERR(inode)) { 149 138 file = ERR_CAST(inode); 150 139 goto err;
+9 -2
fs/bcachefs/bcachefs.h
··· 863 863 DARRAY(enum bcachefs_metadata_version) 864 864 incompat_versions_requested; 865 865 866 - #ifdef CONFIG_UNICODE 867 866 struct unicode_map *cf_encoding; 868 - #endif 869 867 870 868 struct bch_sb_handle disk_sb; 871 869 ··· 1281 1283 return test_bit(BCH_FS_discard_mount_opt_set, &c->flags) 1282 1284 ? c->opts.discard 1283 1285 : ca->mi.discard; 1286 + } 1287 + 1288 + static inline bool bch2_fs_casefold_enabled(struct bch_fs *c) 1289 + { 1290 + #ifdef CONFIG_UNICODE 1291 + return !c->opts.casefold_disabled; 1292 + #else 1293 + return false; 1294 + #endif 1284 1295 } 1285 1296 1286 1297 #endif /* _BCACHEFS_H */
+34 -7
fs/bcachefs/btree_io.c
··· 1337 1337 1338 1338 btree_node_reset_sib_u64s(b); 1339 1339 1340 - scoped_guard(rcu) 1341 - bkey_for_each_ptr(bch2_bkey_ptrs(bkey_i_to_s(&b->key)), ptr) { 1342 - struct bch_dev *ca2 = bch2_dev_rcu(c, ptr->dev); 1340 + /* 1341 + * XXX: 1342 + * 1343 + * We deadlock if too many btree updates require node rewrites while 1344 + * we're still in journal replay. 1345 + * 1346 + * This is because btree node rewrites generate more updates for the 1347 + * interior updates (alloc, backpointers), and if those updates touch 1348 + * new nodes and generate more rewrites - well, you see the problem. 1349 + * 1350 + * The biggest cause is that we don't use the btree write buffer (for 1351 + * the backpointer updates) - this needs some real thought on locking in 1352 + * order to fix. 1353 + * 1354 + * The problem with this workaround (not doing the rewrite for degraded 1355 + * nodes in journal replay) is that those degraded nodes persist, and we 1356 + * don't want that (this is a real bug when a btree node write completes 1357 + * with fewer replicas than we wanted and leaves a degraded node due to 1358 + * device _removal_, i.e. the device went away mid write). 1359 + * 1360 + * It's less of a bug here, but still a problem because we don't yet 1361 + * have a way of tracking degraded data - we need another index (all 1362 + * extents/btree nodes, by replicas entry) in order to fix properly 1363 + * (re-replicate degraded data at the earliest possible time). 1364 + */ 1365 + if (c->recovery.passes_complete & BIT_ULL(BCH_RECOVERY_PASS_journal_replay)) { 1366 + scoped_guard(rcu) 1367 + bkey_for_each_ptr(bch2_bkey_ptrs(bkey_i_to_s(&b->key)), ptr) { 1368 + struct bch_dev *ca2 = bch2_dev_rcu(c, ptr->dev); 1343 1369 1344 - if (!ca2 || ca2->mi.state != BCH_MEMBER_STATE_rw) { 1345 - set_btree_node_need_rewrite(b); 1346 - set_btree_node_need_rewrite_degraded(b); 1370 + if (!ca2 || ca2->mi.state != BCH_MEMBER_STATE_rw) { 1371 + set_btree_node_need_rewrite(b); 1372 + set_btree_node_need_rewrite_degraded(b); 1373 + } 1347 1374 } 1348 - } 1375 + } 1349 1376 1350 1377 if (!ptr_written) { 1351 1378 set_btree_node_need_rewrite(b);
+1 -1
fs/bcachefs/btree_iter.c
··· 2189 2189 struct btree_path *path = btree_iter_path(trans, iter); 2190 2190 struct bkey_i *next_journal = 2191 2191 bch2_btree_journal_peek_prev(trans, iter, search_key, 2192 - k->k ? k->k->p : path_l(path)->b->key.k.p); 2192 + k->k ? k->k->p : path_l(path)->b->data->min_key); 2193 2193 2194 2194 if (next_journal) { 2195 2195 iter->k = next_journal->k;
+9 -10
fs/bcachefs/dirent.c
··· 18 18 { 19 19 *out_cf = (struct qstr) QSTR_INIT(NULL, 0); 20 20 21 - #ifdef CONFIG_UNICODE 21 + if (!bch2_fs_casefold_enabled(trans->c)) 22 + return -EOPNOTSUPP; 23 + 22 24 unsigned char *buf = bch2_trans_kmalloc(trans, BCH_NAME_MAX + 1); 23 25 int ret = PTR_ERR_OR_ZERO(buf); 24 26 if (ret) ··· 32 30 33 31 *out_cf = (struct qstr) QSTR_INIT(buf, ret); 34 32 return 0; 35 - #else 36 - return -EOPNOTSUPP; 37 - #endif 38 33 } 39 34 40 35 static unsigned bch2_dirent_name_bytes(struct bkey_s_c_dirent d) ··· 230 231 prt_printf(out, " type %s", bch2_d_type_str(d.v->d_type)); 231 232 } 232 233 233 - int bch2_dirent_init_name(struct bkey_i_dirent *dirent, 234 + int bch2_dirent_init_name(struct bch_fs *c, 235 + struct bkey_i_dirent *dirent, 234 236 const struct bch_hash_info *hash_info, 235 237 const struct qstr *name, 236 238 const struct qstr *cf_name) ··· 251 251 offsetof(struct bch_dirent, d_name) - 252 252 name->len); 253 253 } else { 254 - #ifdef CONFIG_UNICODE 254 + if (!bch2_fs_casefold_enabled(c)) 255 + return -EOPNOTSUPP; 256 + 255 257 memcpy(&dirent->v.d_cf_name_block.d_names[0], name->name, name->len); 256 258 257 259 char *cf_out = &dirent->v.d_cf_name_block.d_names[name->len]; ··· 279 277 dirent->v.d_cf_name_block.d_cf_name_len = cpu_to_le16(cf_len); 280 278 281 279 EBUG_ON(bch2_dirent_get_casefold_name(dirent_i_to_s_c(dirent)).len != cf_len); 282 - #else 283 - return -EOPNOTSUPP; 284 - #endif 285 280 } 286 281 287 282 unsigned u64s = dirent_val_u64s(name->len, cf_len); ··· 312 313 dirent->v.d_type = type; 313 314 dirent->v.d_unused = 0; 314 315 315 - int ret = bch2_dirent_init_name(dirent, hash_info, name, cf_name); 316 + int ret = bch2_dirent_init_name(trans->c, dirent, hash_info, name, cf_name); 316 317 if (ret) 317 318 return ERR_PTR(ret); 318 319
+2 -1
fs/bcachefs/dirent.h
··· 59 59 dst->v.d_type = src.v->d_type; 60 60 } 61 61 62 - int bch2_dirent_init_name(struct bkey_i_dirent *, 62 + int bch2_dirent_init_name(struct bch_fs *, 63 + struct bkey_i_dirent *, 63 64 const struct bch_hash_info *, 64 65 const struct qstr *, 65 66 const struct qstr *);
+3 -4
fs/bcachefs/fs.c
··· 722 722 if (IS_ERR(inode)) 723 723 inode = NULL; 724 724 725 - #ifdef CONFIG_UNICODE 726 725 if (!inode && IS_CASEFOLDED(vdir)) { 727 726 /* 728 727 * Do not cache a negative dentry in casefolded directories ··· 736 737 */ 737 738 return NULL; 738 739 } 739 - #endif 740 740 741 741 return d_splice_alias(&inode->v, dentry); 742 742 } ··· 2564 2566 sb->s_shrink->seeks = 0; 2565 2567 2566 2568 #ifdef CONFIG_UNICODE 2567 - sb->s_encoding = c->cf_encoding; 2568 - #endif 2569 + if (bch2_fs_casefold_enabled(c)) 2570 + sb->s_encoding = c->cf_encoding; 2569 2571 generic_set_sb_d_ops(sb); 2572 + #endif 2570 2573 2571 2574 vinode = bch2_vfs_inode_get(c, BCACHEFS_ROOT_SUBVOL_INUM); 2572 2575 ret = PTR_ERR_OR_ZERO(vinode);
+1 -3
fs/bcachefs/fsck.c
··· 2302 2302 *hash_info = bch2_hash_info_init(c, &i->inode); 2303 2303 dir->first_this_inode = false; 2304 2304 2305 - #ifdef CONFIG_UNICODE 2306 2305 hash_info->cf_encoding = bch2_inode_casefold(c, &i->inode) ? c->cf_encoding : NULL; 2307 - #endif 2308 2306 2309 2307 ret = bch2_str_hash_check_key(trans, s, &bch2_dirent_hash_desc, hash_info, 2310 2308 iter, k, need_second_pass); ··· 2817 2819 ret = remove_backpointer(trans, &inode); 2818 2820 bch_err_msg(c, ret, "removing dirent"); 2819 2821 if (ret) 2820 - break; 2822 + goto out; 2821 2823 2822 2824 ret = reattach_inode(trans, &inode); 2823 2825 bch_err_msg(c, ret, "reattaching inode %llu", inode.bi_inum);
+8 -5
fs/bcachefs/inode.c
··· 1265 1265 { 1266 1266 struct bch_fs *c = trans->c; 1267 1267 1268 - #ifdef CONFIG_UNICODE 1268 + #ifndef CONFIG_UNICODE 1269 + bch_err(c, "Cannot use casefolding on a kernel without CONFIG_UNICODE"); 1270 + return -EOPNOTSUPP; 1271 + #endif 1272 + 1273 + if (c->opts.casefold_disabled) 1274 + return -EOPNOTSUPP; 1275 + 1269 1276 int ret = 0; 1270 1277 /* Not supported on individual files. */ 1271 1278 if (!S_ISDIR(bi->bi_mode)) ··· 1296 1289 bi->bi_fields_set |= BIT(Inode_opt_casefold); 1297 1290 1298 1291 return bch2_maybe_propagate_has_case_insensitive(trans, inum, bi); 1299 - #else 1300 - bch_err(c, "Cannot use casefolding on a kernel without CONFIG_UNICODE"); 1301 - return -EOPNOTSUPP; 1302 - #endif 1303 1292 } 1304 1293 1305 1294 static noinline int __bch2_inode_rm_snapshot(struct btree_trans *trans, u64 inum, u32 snapshot)
+5
fs/bcachefs/opts.h
··· 234 234 OPT_BOOL(), \ 235 235 BCH_SB_CASEFOLD, false, \ 236 236 NULL, "Dirent lookups are casefolded") \ 237 + x(casefold_disabled, u8, \ 238 + OPT_FS|OPT_MOUNT, \ 239 + OPT_BOOL(), \ 240 + BCH2_NO_SB_OPT, false, \ 241 + NULL, "Disable casefolding filesystem wide") \ 237 242 x(inodes_32bit, u8, \ 238 243 OPT_FS|OPT_INODE|OPT_FORMAT|OPT_MOUNT|OPT_RUNTIME, \ 239 244 OPT_BOOL(), \
+1 -1
fs/bcachefs/sb-errors_format.h
··· 314 314 x(accounting_mismatch, 272, FSCK_AUTOFIX) \ 315 315 x(accounting_replicas_not_marked, 273, 0) \ 316 316 x(accounting_to_invalid_device, 289, 0) \ 317 - x(invalid_btree_id, 274, 0) \ 317 + x(invalid_btree_id, 274, FSCK_AUTOFIX) \ 318 318 x(alloc_key_io_time_bad, 275, 0) \ 319 319 x(alloc_key_fragmentation_lru_wrong, 276, FSCK_AUTOFIX) \ 320 320 x(accounting_key_junk_at_end, 277, FSCK_AUTOFIX) \
+3 -2
fs/bcachefs/str_hash.c
··· 38 38 struct bkey_s_c_dirent old, 39 39 bool *updated_before_k_pos) 40 40 { 41 + struct bch_fs *c = trans->c; 41 42 struct qstr old_name = bch2_dirent_get_name(old); 42 43 struct bkey_i_dirent *new = bch2_trans_kmalloc(trans, BKEY_U64s_MAX * sizeof(u64)); 43 44 int ret = PTR_ERR_OR_ZERO(new); ··· 61 60 sprintf(renamed_buf, "%.*s.fsck_renamed-%u", 62 61 old_name.len, old_name.name, i)); 63 62 64 - ret = bch2_dirent_init_name(new, hash_info, &renamed_name, NULL); 63 + ret = bch2_dirent_init_name(c, new, hash_info, &renamed_name, NULL); 65 64 if (ret) 66 65 return ret; 67 66 ··· 80 79 } 81 80 82 81 ret = ret ?: bch2_fsck_update_backpointers(trans, s, desc, hash_info, &new->k_i); 83 - bch_err_fn(trans->c, ret); 82 + bch_err_fn(c, ret); 84 83 return ret; 85 84 } 86 85
-2
fs/bcachefs/str_hash.h
··· 48 48 struct bch_hash_info info = { 49 49 .inum_snapshot = bi->bi_snapshot, 50 50 .type = INODE_STR_HASH(bi), 51 - #ifdef CONFIG_UNICODE 52 51 .cf_encoding = bch2_inode_casefold(c, bi) ? c->cf_encoding : NULL, 53 - #endif 54 52 .siphash_key = { .k0 = bi->bi_hash_seed } 55 53 }; 56 54
+16 -15
fs/bcachefs/super.c
··· 1025 1025 } 1026 1026 1027 1027 #ifdef CONFIG_UNICODE 1028 - /* Default encoding until we can potentially have more as an option. */ 1029 - c->cf_encoding = utf8_load(BCH_FS_DEFAULT_UTF8_ENCODING); 1030 - if (IS_ERR(c->cf_encoding)) { 1031 - printk(KERN_ERR "Cannot load UTF-8 encoding for filesystem. Version: %u.%u.%u", 1032 - unicode_major(BCH_FS_DEFAULT_UTF8_ENCODING), 1033 - unicode_minor(BCH_FS_DEFAULT_UTF8_ENCODING), 1034 - unicode_rev(BCH_FS_DEFAULT_UTF8_ENCODING)); 1035 - ret = -EINVAL; 1036 - goto err; 1028 + if (bch2_fs_casefold_enabled(c)) { 1029 + /* Default encoding until we can potentially have more as an option. */ 1030 + c->cf_encoding = utf8_load(BCH_FS_DEFAULT_UTF8_ENCODING); 1031 + if (IS_ERR(c->cf_encoding)) { 1032 + printk(KERN_ERR "Cannot load UTF-8 encoding for filesystem. Version: %u.%u.%u", 1033 + unicode_major(BCH_FS_DEFAULT_UTF8_ENCODING), 1034 + unicode_minor(BCH_FS_DEFAULT_UTF8_ENCODING), 1035 + unicode_rev(BCH_FS_DEFAULT_UTF8_ENCODING)); 1036 + ret = -EINVAL; 1037 + goto err; 1038 + } 1037 1039 } 1038 1040 #else 1039 1041 if (c->sb.features & BIT_ULL(BCH_FEATURE_casefolding)) { ··· 1162 1160 1163 1161 print_mount_opts(c); 1164 1162 1165 - #ifdef CONFIG_UNICODE 1166 - bch_info(c, "Using encoding defined by superblock: utf8-%u.%u.%u", 1167 - unicode_major(BCH_FS_DEFAULT_UTF8_ENCODING), 1168 - unicode_minor(BCH_FS_DEFAULT_UTF8_ENCODING), 1169 - unicode_rev(BCH_FS_DEFAULT_UTF8_ENCODING)); 1170 - #endif 1163 + if (c->cf_encoding) 1164 + bch_info(c, "Using encoding defined by superblock: utf8-%u.%u.%u", 1165 + unicode_major(BCH_FS_DEFAULT_UTF8_ENCODING), 1166 + unicode_minor(BCH_FS_DEFAULT_UTF8_ENCODING), 1167 + unicode_rev(BCH_FS_DEFAULT_UTF8_ENCODING)); 1171 1168 1172 1169 if (!bch2_fs_may_start(c)) 1173 1170 return bch_err_throw(c, insufficient_devices_to_start);
+2
fs/btrfs/block-group.h
··· 83 83 BLOCK_GROUP_FLAG_ZONED_DATA_RELOC, 84 84 /* Does the block group need to be added to the free space tree? */ 85 85 BLOCK_GROUP_FLAG_NEEDS_FREE_SPACE, 86 + /* Set after we add a new block group to the free space tree. */ 87 + BLOCK_GROUP_FLAG_FREE_SPACE_ADDED, 86 88 /* Indicate that the block group is placed on a sequential zone */ 87 89 BLOCK_GROUP_FLAG_SEQUENTIAL_ZONE, 88 90 /*
+40
fs/btrfs/free-space-tree.c
··· 1241 1241 { 1242 1242 BTRFS_PATH_AUTO_FREE(path); 1243 1243 struct btrfs_key key; 1244 + struct rb_node *node; 1244 1245 int nr; 1245 1246 int ret; 1246 1247 ··· 1268 1267 return ret; 1269 1268 1270 1269 btrfs_release_path(path); 1270 + } 1271 + 1272 + node = rb_first_cached(&trans->fs_info->block_group_cache_tree); 1273 + while (node) { 1274 + struct btrfs_block_group *bg; 1275 + 1276 + bg = rb_entry(node, struct btrfs_block_group, cache_node); 1277 + clear_bit(BLOCK_GROUP_FLAG_FREE_SPACE_ADDED, &bg->runtime_flags); 1278 + node = rb_next(node); 1279 + cond_resched(); 1271 1280 } 1272 1281 1273 1282 return 0; ··· 1369 1358 1370 1359 block_group = rb_entry(node, struct btrfs_block_group, 1371 1360 cache_node); 1361 + 1362 + if (test_bit(BLOCK_GROUP_FLAG_FREE_SPACE_ADDED, 1363 + &block_group->runtime_flags)) 1364 + goto next; 1365 + 1372 1366 ret = populate_free_space_tree(trans, block_group); 1373 1367 if (ret) { 1374 1368 btrfs_abort_transaction(trans, ret); 1375 1369 btrfs_end_transaction(trans); 1376 1370 return ret; 1377 1371 } 1372 + next: 1378 1373 if (btrfs_should_end_transaction(trans)) { 1379 1374 btrfs_end_transaction(trans); 1380 1375 trans = btrfs_start_transaction(free_space_root, 1); ··· 1406 1389 int ret; 1407 1390 1408 1391 clear_bit(BLOCK_GROUP_FLAG_NEEDS_FREE_SPACE, &block_group->runtime_flags); 1392 + 1393 + /* 1394 + * While rebuilding the free space tree we may allocate new metadata 1395 + * block groups while modifying the free space tree. 1396 + * 1397 + * Because during the rebuild (at btrfs_rebuild_free_space_tree()) we 1398 + * can use multiple transactions, every time btrfs_end_transaction() is 1399 + * called at btrfs_rebuild_free_space_tree() we finish the creation of 1400 + * new block groups by calling btrfs_create_pending_block_groups(), and 1401 + * that in turn calls us, through add_block_group_free_space(), to add 1402 + * a free space info item and a free space extent item for the block 1403 + * group. 1404 + * 1405 + * Then later btrfs_rebuild_free_space_tree() may find such new block 1406 + * groups and process them with populate_free_space_tree(), which can 1407 + * fail with EEXIST since there are already items for the block group in 1408 + * the free space tree. Notice that we say "may find" because a new 1409 + * block group may be added to the block groups rbtree in a node before 1410 + * or after the block group currently being processed by the rebuild 1411 + * process. So signal the rebuild process to skip such new block groups 1412 + * if it finds them. 1413 + */ 1414 + set_bit(BLOCK_GROUP_FLAG_FREE_SPACE_ADDED, &block_group->runtime_flags); 1409 1415 1410 1416 ret = add_new_free_space_info(trans, block_group, path); 1411 1417 if (ret)
+18 -18
fs/btrfs/inode.c
··· 4710 4710 struct btrfs_fs_info *fs_info = BTRFS_I(inode)->root->fs_info; 4711 4711 int ret = 0; 4712 4712 struct btrfs_trans_handle *trans; 4713 - u64 last_unlink_trans; 4714 4713 struct fscrypt_name fname; 4715 4714 4716 4715 if (inode->i_size > BTRFS_EMPTY_DIR_SIZE) ··· 4735 4736 goto out_notrans; 4736 4737 } 4737 4738 4739 + /* 4740 + * Propagate the last_unlink_trans value of the deleted dir to its 4741 + * parent directory. This is to prevent an unrecoverable log tree in the 4742 + * case we do something like this: 4743 + * 1) create dir foo 4744 + * 2) create snapshot under dir foo 4745 + * 3) delete the snapshot 4746 + * 4) rmdir foo 4747 + * 5) mkdir foo 4748 + * 6) fsync foo or some file inside foo 4749 + * 4750 + * This is because we can't unlink other roots when replaying the dir 4751 + * deletes for directory foo. 4752 + */ 4753 + if (BTRFS_I(inode)->last_unlink_trans >= trans->transid) 4754 + btrfs_record_snapshot_destroy(trans, BTRFS_I(dir)); 4755 + 4738 4756 if (unlikely(btrfs_ino(BTRFS_I(inode)) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID)) { 4739 4757 ret = btrfs_unlink_subvol(trans, BTRFS_I(dir), dentry); 4740 4758 goto out; ··· 4761 4745 if (ret) 4762 4746 goto out; 4763 4747 4764 - last_unlink_trans = BTRFS_I(inode)->last_unlink_trans; 4765 - 4766 4748 /* now the directory is empty */ 4767 4749 ret = btrfs_unlink_inode(trans, BTRFS_I(dir), BTRFS_I(d_inode(dentry)), 4768 4750 &fname.disk_name); 4769 - if (!ret) { 4751 + if (!ret) 4770 4752 btrfs_i_size_write(BTRFS_I(inode), 0); 4771 - /* 4772 - * Propagate the last_unlink_trans value of the deleted dir to 4773 - * its parent directory. 
This is to prevent an unrecoverable 4774 - * log tree in the case we do something like this: 4775 - * 1) create dir foo 4776 - * 2) create snapshot under dir foo 4777 - * 3) delete the snapshot 4778 - * 4) rmdir foo 4779 - * 5) mkdir foo 4780 - * 6) fsync foo or some file inside foo 4781 - */ 4782 - if (last_unlink_trans >= trans->transid) 4783 - BTRFS_I(dir)->last_unlink_trans = last_unlink_trans; 4784 - } 4785 4753 out: 4786 4754 btrfs_end_transaction(trans); 4787 4755 out_notrans:
+2 -2
fs/btrfs/ioctl.c
··· 666 666 goto out; 667 667 } 668 668 669 + btrfs_record_new_subvolume(trans, BTRFS_I(dir)); 670 + 669 671 ret = btrfs_create_new_inode(trans, &new_inode_args); 670 672 if (ret) { 671 673 btrfs_abort_transaction(trans, ret); 672 674 goto out; 673 675 } 674 - 675 - btrfs_record_new_subvolume(trans, BTRFS_I(dir)); 676 676 677 677 d_instantiate_new(dentry, new_inode_args.inode); 678 678 new_inode_args.inode = NULL;
+69 -68
fs/btrfs/tree-log.c
··· 143 143 unsigned int nofs_flag; 144 144 struct btrfs_inode *inode; 145 145 146 + /* Only meant to be called for subvolume roots and not for log roots. */ 147 + ASSERT(is_fstree(btrfs_root_id(root))); 148 + 146 149 /* 147 150 * We're holding a transaction handle whether we are logging or 148 151 * replaying a log tree, so we must make sure NOFS semantics apply ··· 607 604 return 0; 608 605 } 609 606 610 - /* 611 - * simple helper to read an inode off the disk from a given root 612 - * This can only be called for subvolume roots and not for the log 613 - */ 614 - static noinline struct btrfs_inode *read_one_inode(struct btrfs_root *root, 615 - u64 objectid) 616 - { 617 - struct btrfs_inode *inode; 618 - 619 - inode = btrfs_iget_logging(objectid, root); 620 - if (IS_ERR(inode)) 621 - return NULL; 622 - return inode; 623 - } 624 - 625 607 /* replays a single extent in 'eb' at 'slot' with 'key' into the 626 608 * subvolume 'root'. path is released on entry and should be released 627 609 * on exit. 
··· 662 674 return -EUCLEAN; 663 675 } 664 676 665 - inode = read_one_inode(root, key->objectid); 666 - if (!inode) 667 - return -EIO; 677 + inode = btrfs_iget_logging(key->objectid, root); 678 + if (IS_ERR(inode)) 679 + return PTR_ERR(inode); 668 680 669 681 /* 670 682 * first check to see if we already have this extent in the ··· 936 948 937 949 btrfs_release_path(path); 938 950 939 - inode = read_one_inode(root, location.objectid); 940 - if (!inode) { 941 - ret = -EIO; 951 + inode = btrfs_iget_logging(location.objectid, root); 952 + if (IS_ERR(inode)) { 953 + ret = PTR_ERR(inode); 954 + inode = NULL; 942 955 goto out; 943 956 } 944 957 ··· 1062 1073 search_key.type = BTRFS_INODE_REF_KEY; 1063 1074 search_key.offset = parent_objectid; 1064 1075 ret = btrfs_search_slot(NULL, root, &search_key, path, 0, 0); 1065 - if (ret == 0) { 1076 + if (ret < 0) { 1077 + return ret; 1078 + } else if (ret == 0) { 1066 1079 struct btrfs_inode_ref *victim_ref; 1067 1080 unsigned long ptr; 1068 1081 unsigned long ptr_end; ··· 1137 1146 struct fscrypt_str victim_name; 1138 1147 1139 1148 extref = (struct btrfs_inode_extref *)(base + cur_offset); 1149 + victim_name.len = btrfs_inode_extref_name_len(leaf, extref); 1140 1150 1141 1151 if (btrfs_inode_extref_parent(leaf, extref) != parent_objectid) 1142 1152 goto next; 1143 1153 1144 1154 ret = read_alloc_one_name(leaf, &extref->name, 1145 - btrfs_inode_extref_name_len(leaf, extref), 1146 - &victim_name); 1155 + victim_name.len, &victim_name); 1147 1156 if (ret) 1148 1157 return ret; 1149 1158 ··· 1158 1167 kfree(victim_name.name); 1159 1168 return ret; 1160 1169 } else if (!ret) { 1161 - ret = -ENOENT; 1162 - victim_parent = read_one_inode(root, 1163 - parent_objectid); 1164 - if (victim_parent) { 1170 + victim_parent = btrfs_iget_logging(parent_objectid, root); 1171 + if (IS_ERR(victim_parent)) { 1172 + ret = PTR_ERR(victim_parent); 1173 + } else { 1165 1174 inc_nlink(&inode->vfs_inode); 1166 1175 btrfs_release_path(path); 1167 1176 
··· 1306 1315 struct btrfs_inode *dir; 1307 1316 1308 1317 btrfs_release_path(path); 1309 - dir = read_one_inode(root, parent_id); 1310 - if (!dir) { 1311 - ret = -ENOENT; 1318 + dir = btrfs_iget_logging(parent_id, root); 1319 + if (IS_ERR(dir)) { 1320 + ret = PTR_ERR(dir); 1312 1321 kfree(name.name); 1313 1322 goto out; 1314 1323 } ··· 1380 1389 * copy the back ref in. The link count fixup code will take 1381 1390 * care of the rest 1382 1391 */ 1383 - dir = read_one_inode(root, parent_objectid); 1384 - if (!dir) { 1385 - ret = -ENOENT; 1392 + dir = btrfs_iget_logging(parent_objectid, root); 1393 + if (IS_ERR(dir)) { 1394 + ret = PTR_ERR(dir); 1395 + dir = NULL; 1386 1396 goto out; 1387 1397 } 1388 1398 1389 - inode = read_one_inode(root, inode_objectid); 1390 - if (!inode) { 1391 - ret = -EIO; 1399 + inode = btrfs_iget_logging(inode_objectid, root); 1400 + if (IS_ERR(inode)) { 1401 + ret = PTR_ERR(inode); 1402 + inode = NULL; 1392 1403 goto out; 1393 1404 } 1394 1405 ··· 1402 1409 * parent object can change from one array 1403 1410 * item to another. 
1404 1411 */ 1405 - if (!dir) 1406 - dir = read_one_inode(root, parent_objectid); 1407 1412 if (!dir) { 1408 - ret = -ENOENT; 1409 - goto out; 1413 + dir = btrfs_iget_logging(parent_objectid, root); 1414 + if (IS_ERR(dir)) { 1415 + ret = PTR_ERR(dir); 1416 + dir = NULL; 1417 + goto out; 1418 + } 1410 1419 } 1411 1420 } else { 1412 1421 ret = ref_get_fields(eb, ref_ptr, &name, &ref_index); ··· 1677 1682 break; 1678 1683 1679 1684 btrfs_release_path(path); 1680 - inode = read_one_inode(root, key.offset); 1681 - if (!inode) { 1682 - ret = -EIO; 1685 + inode = btrfs_iget_logging(key.offset, root); 1686 + if (IS_ERR(inode)) { 1687 + ret = PTR_ERR(inode); 1683 1688 break; 1684 1689 } 1685 1690 ··· 1715 1720 struct btrfs_inode *inode; 1716 1721 struct inode *vfs_inode; 1717 1722 1718 - inode = read_one_inode(root, objectid); 1719 - if (!inode) 1720 - return -EIO; 1723 + inode = btrfs_iget_logging(objectid, root); 1724 + if (IS_ERR(inode)) 1725 + return PTR_ERR(inode); 1721 1726 1722 1727 vfs_inode = &inode->vfs_inode; 1723 1728 key.objectid = BTRFS_TREE_LOG_FIXUP_OBJECTID; ··· 1756 1761 struct btrfs_inode *dir; 1757 1762 int ret; 1758 1763 1759 - inode = read_one_inode(root, location->objectid); 1760 - if (!inode) 1761 - return -ENOENT; 1764 + inode = btrfs_iget_logging(location->objectid, root); 1765 + if (IS_ERR(inode)) 1766 + return PTR_ERR(inode); 1762 1767 1763 - dir = read_one_inode(root, dirid); 1764 - if (!dir) { 1768 + dir = btrfs_iget_logging(dirid, root); 1769 + if (IS_ERR(dir)) { 1765 1770 iput(&inode->vfs_inode); 1766 - return -EIO; 1771 + return PTR_ERR(dir); 1767 1772 } 1768 1773 1769 1774 ret = btrfs_add_link(trans, dir, inode, name, 1, index); ··· 1840 1845 bool update_size = true; 1841 1846 bool name_added = false; 1842 1847 1843 - dir = read_one_inode(root, key->objectid); 1844 - if (!dir) 1845 - return -EIO; 1848 + dir = btrfs_iget_logging(key->objectid, root); 1849 + if (IS_ERR(dir)) 1850 + return PTR_ERR(dir); 1846 1851 1847 1852 ret = 
read_alloc_one_name(eb, di + 1, btrfs_dir_name_len(eb, di), &name); 1848 1853 if (ret) ··· 2142 2147 btrfs_dir_item_key_to_cpu(eb, di, &location); 2143 2148 btrfs_release_path(path); 2144 2149 btrfs_release_path(log_path); 2145 - inode = read_one_inode(root, location.objectid); 2146 - if (!inode) { 2147 - ret = -EIO; 2150 + inode = btrfs_iget_logging(location.objectid, root); 2151 + if (IS_ERR(inode)) { 2152 + ret = PTR_ERR(inode); 2153 + inode = NULL; 2148 2154 goto out; 2149 2155 } 2150 2156 ··· 2297 2301 if (!log_path) 2298 2302 return -ENOMEM; 2299 2303 2300 - dir = read_one_inode(root, dirid); 2301 - /* it isn't an error if the inode isn't there, that can happen 2302 - * because we replay the deletes before we copy in the inode item 2303 - * from the log 2304 + dir = btrfs_iget_logging(dirid, root); 2305 + /* 2306 + * It isn't an error if the inode isn't there, that can happen because 2307 + * we replay the deletes before we copy in the inode item from the log. 2304 2308 */ 2305 - if (!dir) { 2309 + if (IS_ERR(dir)) { 2306 2310 btrfs_free_path(log_path); 2307 - return 0; 2311 + ret = PTR_ERR(dir); 2312 + if (ret == -ENOENT) 2313 + ret = 0; 2314 + return ret; 2308 2315 } 2309 2316 2310 2317 range_start = 0; ··· 2466 2467 struct btrfs_inode *inode; 2467 2468 u64 from; 2468 2469 2469 - inode = read_one_inode(root, key.objectid); 2470 - if (!inode) { 2471 - ret = -EIO; 2470 + inode = btrfs_iget_logging(key.objectid, root); 2471 + if (IS_ERR(inode)) { 2472 + ret = PTR_ERR(inode); 2472 2473 break; 2473 2474 } 2474 2475 from = ALIGN(i_size_read(&inode->vfs_inode), ··· 7447 7448 * full log sync. 7448 7449 * Also we don't need to worry with renames, since btrfs_rename() marks the log 7449 7450 * for full commit when renaming a subvolume. 7451 + * 7452 + * Must be called before creating the subvolume entry in its parent directory. 7450 7453 */ 7451 7454 void btrfs_record_new_subvolume(const struct btrfs_trans_handle *trans, 7452 7455 struct btrfs_inode *dir)
+139 -331
fs/eventpoll.c
··· 137 137 }; 138 138 139 139 /* List header used to link this structure to the eventpoll ready list */ 140 - struct list_head rdllink; 141 - 142 - /* 143 - * Works together "struct eventpoll"->ovflist in keeping the 144 - * single linked chain of items. 145 - */ 146 - struct epitem *next; 140 + struct llist_node rdllink; 147 141 148 142 /* The file descriptor information this item refers to */ 149 143 struct epoll_filefd ffd; ··· 185 191 /* Wait queue used by file->poll() */ 186 192 wait_queue_head_t poll_wait; 187 193 188 - /* List of ready file descriptors */ 189 - struct list_head rdllist; 190 - 191 - /* Lock which protects rdllist and ovflist */ 192 - rwlock_t lock; 194 + /* 195 + * List of ready file descriptors. Adding to this list is lockless. Items can be removed 196 + * only with eventpoll::mtx 197 + */ 198 + struct llist_head rdllist; 193 199 194 200 /* RB tree root used to store monitored fd structs */ 195 201 struct rb_root_cached rbr; 196 - 197 - /* 198 - * This is a single linked list that chains all the "struct epitem" that 199 - * happened while transferring ready events to userspace w/out 200 - * holding ->lock. 201 - */ 202 - struct epitem *ovflist; 203 202 204 203 /* wakeup_source used when ep_send_events or __ep_eventpoll_poll is running */ 205 204 struct wakeup_source *ws; ··· 348 361 (p1->file < p2->file ? -1 : p1->fd - p2->fd)); 349 362 } 350 363 351 - /* Tells us if the item is currently linked */ 352 - static inline int ep_is_linked(struct epitem *epi) 364 + /* 365 + * Add the item to its container eventpoll's rdllist; do nothing if the item is already on rdllist. 
366 + */ 367 + static void epitem_ready(struct epitem *epi) 353 368 { 354 - return !list_empty(&epi->rdllink); 369 + if (&epi->rdllink == cmpxchg(&epi->rdllink.next, &epi->rdllink, NULL)) 370 + llist_add(&epi->rdllink, &epi->ep->rdllist); 371 + 355 372 } 356 373 357 374 static inline struct eppoll_entry *ep_pwq_from_wait(wait_queue_entry_t *p) ··· 374 383 * 375 384 * @ep: Pointer to the eventpoll context. 376 385 * 377 - * Return: a value different than %zero if ready events are available, 378 - * or %zero otherwise. 386 + * Return: true if ready events might be available, false otherwise. 379 387 */ 380 - static inline int ep_events_available(struct eventpoll *ep) 388 + static inline bool ep_events_available(struct eventpoll *ep) 381 389 { 382 - return !list_empty_careful(&ep->rdllist) || 383 - READ_ONCE(ep->ovflist) != EP_UNACTIVE_PTR; 390 + bool available; 391 + int locked; 392 + 393 + locked = mutex_trylock(&ep->mtx); 394 + if (!locked) { 395 + /* 396 + * The lock held and someone might have removed all items while inspecting it. The 397 + * llist_empty() check in this case is futile. Assume that something is enqueued and 398 + * let ep_try_send_events() figure it out. 399 + */ 400 + return true; 401 + } 402 + 403 + available = !llist_empty(&ep->rdllist); 404 + mutex_unlock(&ep->mtx); 405 + return available; 384 406 } 385 407 386 408 #ifdef CONFIG_NET_RX_BUSY_POLL ··· 728 724 rcu_read_unlock(); 729 725 } 730 726 731 - 732 - /* 733 - * ep->mutex needs to be held because we could be hit by 734 - * eventpoll_release_file() and epoll_ctl(). 735 - */ 736 - static void ep_start_scan(struct eventpoll *ep, struct list_head *txlist) 737 - { 738 - /* 739 - * Steal the ready list, and re-init the original one to the 740 - * empty list. Also, set ep->ovflist to NULL so that events 741 - * happening while looping w/out locks, are not lost. 
We cannot 742 - * have the poll callback to queue directly on ep->rdllist, 743 - * because we want the "sproc" callback to be able to do it 744 - * in a lockless way. 745 - */ 746 - lockdep_assert_irqs_enabled(); 747 - write_lock_irq(&ep->lock); 748 - list_splice_init(&ep->rdllist, txlist); 749 - WRITE_ONCE(ep->ovflist, NULL); 750 - write_unlock_irq(&ep->lock); 751 - } 752 - 753 - static void ep_done_scan(struct eventpoll *ep, 754 - struct list_head *txlist) 755 - { 756 - struct epitem *epi, *nepi; 757 - 758 - write_lock_irq(&ep->lock); 759 - /* 760 - * During the time we spent inside the "sproc" callback, some 761 - * other events might have been queued by the poll callback. 762 - * We re-insert them inside the main ready-list here. 763 - */ 764 - for (nepi = READ_ONCE(ep->ovflist); (epi = nepi) != NULL; 765 - nepi = epi->next, epi->next = EP_UNACTIVE_PTR) { 766 - /* 767 - * We need to check if the item is already in the list. 768 - * During the "sproc" callback execution time, items are 769 - * queued into ->ovflist but the "txlist" might already 770 - * contain them, and the list_splice() below takes care of them. 771 - */ 772 - if (!ep_is_linked(epi)) { 773 - /* 774 - * ->ovflist is LIFO, so we have to reverse it in order 775 - * to keep in FIFO. 776 - */ 777 - list_add(&epi->rdllink, &ep->rdllist); 778 - ep_pm_stay_awake(epi); 779 - } 780 - } 781 - /* 782 - * We need to set back ep->ovflist to EP_UNACTIVE_PTR, so that after 783 - * releasing the lock, events will be queued in the normal way inside 784 - * ep->rdllist. 785 - */ 786 - WRITE_ONCE(ep->ovflist, EP_UNACTIVE_PTR); 787 - 788 - /* 789 - * Quickly re-inject items left on "txlist". 
790 - */ 791 - list_splice(txlist, &ep->rdllist); 792 - __pm_relax(ep->ws); 793 - 794 - if (!list_empty(&ep->rdllist)) { 795 - if (waitqueue_active(&ep->wq)) 796 - wake_up(&ep->wq); 797 - } 798 - 799 - write_unlock_irq(&ep->lock); 800 - } 801 - 802 727 static void ep_get(struct eventpoll *ep) 803 728 { 804 729 refcount_inc(&ep->refcount); ··· 765 832 static bool __ep_remove(struct eventpoll *ep, struct epitem *epi, bool force) 766 833 { 767 834 struct file *file = epi->ffd.file; 835 + struct llist_node *put_back_last; 768 836 struct epitems_head *to_free; 769 837 struct hlist_head *head; 838 + LLIST_HEAD(put_back); 770 839 771 - lockdep_assert_irqs_enabled(); 840 + lockdep_assert_held(&ep->mtx); 772 841 773 842 /* 774 843 * Removes poll wait queue hooks. ··· 802 867 803 868 rb_erase_cached(&epi->rbn, &ep->rbr); 804 869 805 - write_lock_irq(&ep->lock); 806 - if (ep_is_linked(epi)) 807 - list_del_init(&epi->rdllink); 808 - write_unlock_irq(&ep->lock); 870 + if (llist_on_list(&epi->rdllink)) { 871 + put_back_last = NULL; 872 + while (true) { 873 + struct llist_node *n = llist_del_first(&ep->rdllist); 874 + 875 + if (&epi->rdllink == n || WARN_ON(!n)) 876 + break; 877 + if (!put_back_last) 878 + put_back_last = n; 879 + __llist_add(n, &put_back); 880 + } 881 + if (put_back_last) 882 + llist_add_batch(put_back.first, put_back_last, &ep->rdllist); 883 + } 809 884 810 885 wakeup_source_unregister(ep_wakeup_source(epi)); 811 886 /* ··· 828 883 kfree_rcu(epi, rcu); 829 884 830 885 percpu_counter_dec(&ep->user->epoll_watches); 831 - return ep_refcount_dec_and_test(ep); 886 + return true; 832 887 } 833 888 834 889 /* ··· 836 891 */ 837 892 static void ep_remove_safe(struct eventpoll *ep, struct epitem *epi) 838 893 { 839 - WARN_ON_ONCE(__ep_remove(ep, epi, false)); 894 + if (__ep_remove(ep, epi, false)) 895 + WARN_ON_ONCE(ep_refcount_dec_and_test(ep)); 840 896 } 841 897 842 898 static void ep_clear_and_put(struct eventpoll *ep) 843 899 { 844 900 struct rb_node *rbp, *next; 
845 901 struct epitem *epi; 846 - bool dispose; 847 902 848 903 /* We need to release all tasks waiting for these file */ 849 904 if (waitqueue_active(&ep->poll_wait)) ··· 876 931 cond_resched(); 877 932 } 878 933 879 - dispose = ep_refcount_dec_and_test(ep); 880 934 mutex_unlock(&ep->mtx); 881 - 882 - if (dispose) 935 + if (ep_refcount_dec_and_test(ep)) 883 936 ep_free(ep); 884 937 } 885 938 ··· 917 974 static __poll_t __ep_eventpoll_poll(struct file *file, poll_table *wait, int depth) 918 975 { 919 976 struct eventpoll *ep = file->private_data; 920 - LIST_HEAD(txlist); 921 - struct epitem *epi, *tmp; 977 + struct wakeup_source *ws; 978 + struct llist_node *n; 979 + struct epitem *epi; 922 980 poll_table pt; 923 981 __poll_t res = 0; 924 982 ··· 933 989 * the ready list. 934 990 */ 935 991 mutex_lock_nested(&ep->mtx, depth); 936 - ep_start_scan(ep, &txlist); 937 - list_for_each_entry_safe(epi, tmp, &txlist, rdllink) { 992 + while (true) { 993 + n = llist_del_first_init(&ep->rdllist); 994 + if (!n) 995 + break; 996 + 997 + epi = llist_entry(n, struct epitem, rdllink); 998 + 938 999 if (ep_item_poll(epi, &pt, depth + 1)) { 939 1000 res = EPOLLIN | EPOLLRDNORM; 1001 + epitem_ready(epi); 940 1002 break; 941 1003 } else { 942 1004 /* 943 - * Item has been dropped into the ready list by the poll 944 - * callback, but it's not actually ready, as far as 945 - * caller requested events goes. We can remove it here. 1005 + * We need to activate ep before deactivating epi, to prevent autosuspend 1006 + * just in case epi becomes active after ep_item_poll() above. 1007 + * 1008 + * This is similar to ep_send_events(). 
946 1009 */ 1010 + ws = ep_wakeup_source(epi); 1011 + if (ws) { 1012 + if (ws->active) 1013 + __pm_stay_awake(ep->ws); 1014 + __pm_relax(ws); 1015 + } 947 1016 __pm_relax(ep_wakeup_source(epi)); 948 - list_del_init(&epi->rdllink); 1017 + 1018 + /* Just in case epi becomes active right before __pm_relax() */ 1019 + if (unlikely(ep_item_poll(epi, &pt, depth + 1))) 1020 + ep_pm_stay_awake(epi); 1021 + 1022 + __pm_relax(ep->ws); 949 1023 } 950 1024 } 951 - ep_done_scan(ep, &txlist); 952 1025 mutex_unlock(&ep->mtx); 953 1026 return res; 954 1027 } ··· 1098 1137 dispose = __ep_remove(ep, epi, true); 1099 1138 mutex_unlock(&ep->mtx); 1100 1139 1101 - if (dispose) 1140 + if (dispose && ep_refcount_dec_and_test(ep)) 1102 1141 ep_free(ep); 1103 1142 goto again; 1104 1143 } ··· 1114 1153 return -ENOMEM; 1115 1154 1116 1155 mutex_init(&ep->mtx); 1117 - rwlock_init(&ep->lock); 1118 1156 init_waitqueue_head(&ep->wq); 1119 1157 init_waitqueue_head(&ep->poll_wait); 1120 - INIT_LIST_HEAD(&ep->rdllist); 1158 + init_llist_head(&ep->rdllist); 1121 1159 ep->rbr = RB_ROOT_CACHED; 1122 - ep->ovflist = EP_UNACTIVE_PTR; 1123 1160 ep->user = get_current_user(); 1124 1161 refcount_set(&ep->refcount, 1); 1125 1162 ··· 1200 1241 #endif /* CONFIG_KCMP */ 1201 1242 1202 1243 /* 1203 - * Adds a new entry to the tail of the list in a lockless way, i.e. 1204 - * multiple CPUs are allowed to call this function concurrently. 1205 - * 1206 - * Beware: it is necessary to prevent any other modifications of the 1207 - * existing list until all changes are completed, in other words 1208 - * concurrent list_add_tail_lockless() calls should be protected 1209 - * with a read lock, where write lock acts as a barrier which 1210 - * makes sure all list_add_tail_lockless() calls are fully 1211 - * completed. 1212 - * 1213 - * Also an element can be locklessly added to the list only in one 1214 - * direction i.e. either to the tail or to the head, otherwise 1215 - * concurrent access will corrupt the list. 
1216 - * 1217 - * Return: %false if element has been already added to the list, %true 1218 - * otherwise. 1219 - */ 1220 - static inline bool list_add_tail_lockless(struct list_head *new, 1221 - struct list_head *head) 1222 - { 1223 - struct list_head *prev; 1224 - 1225 - /* 1226 - * This is simple 'new->next = head' operation, but cmpxchg() 1227 - * is used in order to detect that same element has been just 1228 - * added to the list from another CPU: the winner observes 1229 - * new->next == new. 1230 - */ 1231 - if (!try_cmpxchg(&new->next, &new, head)) 1232 - return false; 1233 - 1234 - /* 1235 - * Initially ->next of a new element must be updated with the head 1236 - * (we are inserting to the tail) and only then pointers are atomically 1237 - * exchanged. XCHG guarantees memory ordering, thus ->next should be 1238 - * updated before pointers are actually swapped and pointers are 1239 - * swapped before prev->next is updated. 1240 - */ 1241 - 1242 - prev = xchg(&head->prev, new); 1243 - 1244 - /* 1245 - * It is safe to modify prev->next and new->prev, because a new element 1246 - * is added only to the tail and new->next is updated before XCHG. 1247 - */ 1248 - 1249 - prev->next = new; 1250 - new->prev = prev; 1251 - 1252 - return true; 1253 - } 1254 - 1255 - /* 1256 - * Chains a new epi entry to the tail of the ep->ovflist in a lockless way, 1257 - * i.e. multiple CPUs are allowed to call this function concurrently. 1258 - * 1259 - * Return: %false if epi element has been already chained, %true otherwise. 
1260 - */ 1261 - static inline bool chain_epi_lockless(struct epitem *epi) 1262 - { 1263 - struct eventpoll *ep = epi->ep; 1264 - 1265 - /* Fast preliminary check */ 1266 - if (epi->next != EP_UNACTIVE_PTR) 1267 - return false; 1268 - 1269 - /* Check that the same epi has not been just chained from another CPU */ 1270 - if (cmpxchg(&epi->next, EP_UNACTIVE_PTR, NULL) != EP_UNACTIVE_PTR) 1271 - return false; 1272 - 1273 - /* Atomically exchange tail */ 1274 - epi->next = xchg(&ep->ovflist, epi); 1275 - 1276 - return true; 1277 - } 1278 - 1279 - /* 1280 1244 * This is the callback that is passed to the wait queue wakeup 1281 1245 * mechanism. It is called by the stored file descriptors when they 1282 1246 * have events to report. 1283 - * 1284 - * This callback takes a read lock in order not to contend with concurrent 1285 - * events from another file descriptor, thus all modifications to ->rdllist 1286 - * or ->ovflist are lockless. Read lock is paired with the write lock from 1287 - * ep_start/done_scan(), which stops all list modifications and guarantees 1288 - * that lists state is seen correctly. 1289 1247 * 1290 1248 * Another thing worth to mention is that ep_poll_callback() can be called 1291 1249 * concurrently for the same @epi from different CPUs if poll table was inited ··· 1213 1337 */ 1214 1338 static int ep_poll_callback(wait_queue_entry_t *wait, unsigned mode, int sync, void *key) 1215 1339 { 1216 - int pwake = 0; 1217 1340 struct epitem *epi = ep_item_from_wait(wait); 1218 1341 struct eventpoll *ep = epi->ep; 1219 1342 __poll_t pollflags = key_to_poll(key); 1220 - unsigned long flags; 1221 1343 int ewake = 0; 1222 - 1223 - read_lock_irqsave(&ep->lock, flags); 1224 1344 1225 1345 ep_set_busy_poll_napi_id(epi); 1226 1346 ··· 1227 1355 * until the next EPOLL_CTL_MOD will be issued. 
1228 1356 */ 1229 1357 if (!(epi->event.events & ~EP_PRIVATE_BITS)) 1230 - goto out_unlock; 1358 + goto out; 1231 1359 1232 1360 /* 1233 1361 * Check the events coming with the callback. At this stage, not ··· 1236 1364 * test for "key" != NULL before the event match test. 1237 1365 */ 1238 1366 if (pollflags && !(pollflags & epi->event.events)) 1239 - goto out_unlock; 1367 + goto out; 1240 1368 1241 - /* 1242 - * If we are transferring events to userspace, we can hold no locks 1243 - * (because we're accessing user memory, and because of linux f_op->poll() 1244 - * semantics). All the events that happen during that period of time are 1245 - * chained in ep->ovflist and requeued later on. 1246 - */ 1247 - if (READ_ONCE(ep->ovflist) != EP_UNACTIVE_PTR) { 1248 - if (chain_epi_lockless(epi)) 1249 - ep_pm_stay_awake_rcu(epi); 1250 - } else if (!ep_is_linked(epi)) { 1251 - /* In the usual case, add event to ready list. */ 1252 - if (list_add_tail_lockless(&epi->rdllink, &ep->rdllist)) 1253 - ep_pm_stay_awake_rcu(epi); 1254 - } 1369 + ep_pm_stay_awake_rcu(epi); 1370 + epitem_ready(epi); 1255 1371 1256 1372 /* 1257 1373 * Wake up ( if active ) both the eventpoll wait list and the ->poll() ··· 1268 1408 wake_up(&ep->wq); 1269 1409 } 1270 1410 if (waitqueue_active(&ep->poll_wait)) 1271 - pwake++; 1272 - 1273 - out_unlock: 1274 - read_unlock_irqrestore(&ep->lock, flags); 1275 - 1276 - /* We have to call this outside the lock */ 1277 - if (pwake) 1278 1411 ep_poll_safewake(ep, epi, pollflags & EPOLL_URING_WAKE); 1279 1412 1413 + out: 1280 1414 if (!(epi->event.events & EPOLLEXCLUSIVE)) 1281 1415 ewake = 1; 1282 1416 ··· 1515 1661 if (is_file_epoll(tfile)) 1516 1662 tep = tfile->private_data; 1517 1663 1518 - lockdep_assert_irqs_enabled(); 1519 - 1520 1664 if (unlikely(percpu_counter_compare(&ep->user->epoll_watches, 1521 1665 max_user_watches) >= 0)) 1522 1666 return -ENOSPC; ··· 1526 1674 } 1527 1675 1528 1676 /* Item initialization follow here ... 
*/ 1529 - INIT_LIST_HEAD(&epi->rdllink); 1677 + init_llist_node(&epi->rdllink); 1530 1678 epi->ep = ep; 1531 1679 ep_set_ffd(&epi->ffd, tfile, fd); 1532 1680 epi->event = *event; 1533 - epi->next = EP_UNACTIVE_PTR; 1534 1681 1535 1682 if (tep) 1536 1683 mutex_lock_nested(&tep->mtx, 1); ··· 1596 1745 return -ENOMEM; 1597 1746 } 1598 1747 1599 - /* We have to drop the new item inside our item list to keep track of it */ 1600 - write_lock_irq(&ep->lock); 1601 - 1602 1748 /* record NAPI ID of new item if present */ 1603 1749 ep_set_busy_poll_napi_id(epi); 1604 1750 1605 1751 /* If the file is already "ready" we drop it inside the ready list */ 1606 - if (revents && !ep_is_linked(epi)) { 1607 - list_add_tail(&epi->rdllink, &ep->rdllist); 1752 + if (revents) { 1608 1753 ep_pm_stay_awake(epi); 1754 + epitem_ready(epi); 1609 1755 1610 1756 /* Notify waiting tasks that events are available */ 1611 1757 if (waitqueue_active(&ep->wq)) ··· 1610 1762 if (waitqueue_active(&ep->poll_wait)) 1611 1763 pwake++; 1612 1764 } 1613 - 1614 - write_unlock_irq(&ep->lock); 1615 1765 1616 1766 /* We have to call this outside the lock */ 1617 1767 if (pwake) ··· 1625 1779 static int ep_modify(struct eventpoll *ep, struct epitem *epi, 1626 1780 const struct epoll_event *event) 1627 1781 { 1628 - int pwake = 0; 1629 1782 poll_table pt; 1630 - 1631 - lockdep_assert_irqs_enabled(); 1632 1783 1633 1784 init_poll_funcptr(&pt, NULL); 1634 1785 ··· 1670 1827 * list, push it inside. 
1671 1828 */ 1672 1829 if (ep_item_poll(epi, &pt, 1)) { 1673 - write_lock_irq(&ep->lock); 1674 - if (!ep_is_linked(epi)) { 1675 - list_add_tail(&epi->rdllink, &ep->rdllist); 1676 - ep_pm_stay_awake(epi); 1830 + ep_pm_stay_awake(epi); 1831 + epitem_ready(epi); 1677 1832 1678 - /* Notify waiting tasks that events are available */ 1679 - if (waitqueue_active(&ep->wq)) 1680 - wake_up(&ep->wq); 1681 - if (waitqueue_active(&ep->poll_wait)) 1682 - pwake++; 1683 - } 1684 - write_unlock_irq(&ep->lock); 1833 + /* Notify waiting tasks that events are available */ 1834 + if (waitqueue_active(&ep->wq)) 1835 + wake_up(&ep->wq); 1836 + if (waitqueue_active(&ep->poll_wait)) 1837 + ep_poll_safewake(ep, NULL, 0); 1685 1838 } 1686 - 1687 - /* We have to call this outside the lock */ 1688 - if (pwake) 1689 - ep_poll_safewake(ep, NULL, 0); 1690 1839 1691 1840 return 0; 1692 1841 } ··· 1687 1852 struct epoll_event __user *events, int maxevents) 1688 1853 { 1689 1854 struct epitem *epi, *tmp; 1690 - LIST_HEAD(txlist); 1855 + LLIST_HEAD(txlist); 1691 1856 poll_table pt; 1692 1857 int res = 0; 1693 1858 ··· 1702 1867 init_poll_funcptr(&pt, NULL); 1703 1868 1704 1869 mutex_lock(&ep->mtx); 1705 - ep_start_scan(ep, &txlist); 1706 1870 1707 - /* 1708 - * We can loop without lock because we are passed a task private list. 1709 - * Items cannot vanish during the loop we are holding ep->mtx. 
-	 */
-	list_for_each_entry_safe(epi, tmp, &txlist, rdllink) {
+	while (res < maxevents) {
 		struct wakeup_source *ws;
+		struct llist_node *n;
 		__poll_t revents;
 
-		if (res >= maxevents)
+		n = llist_del_first(&ep->rdllist);
+		if (!n)
 			break;
+
+		epi = llist_entry(n, struct epitem, rdllink);
 
 		/*
 		 * Activate ep->ws before deactivating epi->ws to prevent
···
 			__pm_relax(ws);
 		}
 
-		list_del_init(&epi->rdllink);
-
 		/*
 		 * If the event mask intersect the caller-requested one,
 		 * deliver the event to userspace. Again, we are holding ep->mtx,
 		 * so no operations coming from userspace can change the item.
 		 */
 		revents = ep_item_poll(epi, &pt, 1);
-		if (!revents)
+		if (!revents) {
+			init_llist_node(n);
+
+			/*
+			 * Just in case epi becomes ready after ep_item_poll() above, but before
+			 * init_llist_node(). Make sure to add it to the ready list, otherwise an
+			 * event may be lost.
+			 */
+			if (unlikely(ep_item_poll(epi, &pt, 1))) {
+				ep_pm_stay_awake(epi);
+				epitem_ready(epi);
+			}
 			continue;
+		}
 
 		events = epoll_put_uevent(revents, epi->event.data, events);
 		if (!events) {
-			list_add(&epi->rdllink, &txlist);
-			ep_pm_stay_awake(epi);
+			llist_add(&epi->rdllink, &ep->rdllist);
 			if (!res)
 				res = -EFAULT;
 			break;
···
 		res++;
 		if (epi->event.events & EPOLLONESHOT)
 			epi->event.events &= EP_PRIVATE_BITS;
-		else if (!(epi->event.events & EPOLLET)) {
+		__llist_add(n, &txlist);
+	}
+
+	llist_for_each_entry_safe(epi, tmp, txlist.first, rdllink) {
+		init_llist_node(&epi->rdllink);
+
+		if (!(epi->event.events & EPOLLET)) {
 			/*
-			 * If this file has been added with Level
-			 * Trigger mode, we need to insert back inside
-			 * the ready list, so that the next call to
-			 * epoll_wait() will check again the events
-			 * availability. At this point, no one can insert
-			 * into ep->rdllist besides us. The epoll_ctl()
-			 * callers are locked out by
-			 * ep_send_events() holding "mtx" and the
-			 * poll callback will queue them in ep->ovflist.
+			 * If this file has been added with Level Trigger mode, we need to insert
+			 * back inside the ready list, so that the next call to epoll_wait() will
+			 * check again the events availability.
 			 */
-			list_add_tail(&epi->rdllink, &ep->rdllist);
 			ep_pm_stay_awake(epi);
+			epitem_ready(epi);
 		}
 	}
-	ep_done_scan(ep, &txlist);
+
+	__pm_relax(ep->ws);
 	mutex_unlock(&ep->mtx);
+
+	if (!llist_empty(&ep->rdllist)) {
+		if (waitqueue_active(&ep->wq))
+			wake_up(&ep->wq);
+	}
 
 	return res;
 }
···
 	wait_queue_entry_t wait;
 	ktime_t expires, *to = NULL;
 
-	lockdep_assert_irqs_enabled();
-
 	if (timeout && (timeout->tv_sec | timeout->tv_nsec)) {
 		slack = select_estimate_accuracy(timeout);
 		to = &expires;
···
 		init_wait(&wait);
 		wait.func = ep_autoremove_wake_function;
 
-		write_lock_irq(&ep->lock);
-		/*
-		 * Barrierless variant, waitqueue_active() is called under
-		 * the same lock on wakeup ep_poll_callback() side, so it
-		 * is safe to avoid an explicit barrier.
-		 */
-		__set_current_state(TASK_INTERRUPTIBLE);
+		prepare_to_wait_exclusive(&ep->wq, &wait, TASK_INTERRUPTIBLE);
 
-		/*
-		 * Do the final check under the lock. ep_start/done_scan()
-		 * plays with two lists (->rdllist and ->ovflist) and there
-		 * is always a race when both lists are empty for short
-		 * period of time although events are pending, so lock is
-		 * important.
-		 */
-		eavail = ep_events_available(ep);
-		if (!eavail)
-			__add_wait_queue_exclusive(&ep->wq, &wait);
-
-		write_unlock_irq(&ep->lock);
-
-		if (!eavail)
+		if (!ep_events_available(ep))
 			timed_out = !ep_schedule_timeout(to) ||
 				    !schedule_hrtimeout_range(to, slack,
 							      HRTIMER_MODE_ABS);
-		__set_current_state(TASK_RUNNING);
 
-		/*
-		 * We were woken up, thus go and try to harvest some events.
-		 * If timed out and still on the wait queue, recheck eavail
-		 * carefully under lock, below.
-		 */
-		eavail = 1;
-
-		if (!list_empty_careful(&wait.entry)) {
-			write_lock_irq(&ep->lock);
-			/*
-			 * If the thread timed out and is not on the wait queue,
-			 * it means that the thread was woken up after its
-			 * timeout expired before it could reacquire the lock.
-			 * Thus, when wait.entry is empty, it needs to harvest
-			 * events.
-			 */
-			if (timed_out)
-				eavail = list_empty(&wait.entry);
-			__remove_wait_queue(&ep->wq, &wait);
-			write_unlock_irq(&ep->lock);
-		}
+		finish_wait(&ep->wq, &wait);
+		eavail = ep_events_available(ep);
 	}
 
+7 -2
fs/exec.c
···
 
 bool path_noexec(const struct path *path)
 {
+	/* If it's an anonymous inode make sure that we catch any shenanigans. */
+	VFS_WARN_ON_ONCE(IS_ANON_FILE(d_inode(path->dentry)) &&
+			 !(path->mnt->mnt_sb->s_iflags & SB_I_NOEXEC));
 	return (path->mnt->mnt_flags & MNT_NOEXEC) ||
 	       (path->mnt->mnt_sb->s_iflags & SB_I_NOEXEC);
 }
···
 	if (IS_ERR(file))
 		return file;
 
+	if (path_noexec(&file->f_path))
+		return ERR_PTR(-EACCES);
+
 	/*
 	 * In the past the regular type check was here. It moved to may_open() in
 	 * 633fb6ac3980 ("exec: move S_ISREG() check earlier"). Since then it is
 	 * an invariant that all non-regular files error out before we get here.
 	 */
-	if (WARN_ON_ONCE(!S_ISREG(file_inode(file)->i_mode)) ||
-	    path_noexec(&file->f_path))
+	if (WARN_ON_ONCE(!S_ISREG(file_inode(file)->i_mode)))
		return ERR_PTR(-EACCES);
 
 	err = exe_file_deny_write_access(file);
+2 -3
fs/fuse/file.c
···
 static ssize_t fuse_fill_write_pages(struct fuse_io_args *ia,
 				     struct address_space *mapping,
 				     struct iov_iter *ii, loff_t pos,
-				     unsigned int max_pages)
+				     unsigned int max_folios)
 {
 	struct fuse_args_pages *ap = &ia->ap;
 	struct fuse_conn *fc = get_fuse_conn(mapping->host);
···
 	int err = 0;
 
 	num = min(iov_iter_count(ii), fc->max_write);
-	num = min(num, max_pages << PAGE_SHIFT);
 
 	ap->args.in_pages = true;
 	ap->descs[0].offset = offset;
 
-	while (num) {
+	while (num && ap->num_folios < max_folios) {
 		size_t tmp;
 		struct folio *folio;
 		pgoff_t index = pos >> PAGE_SHIFT;
+3 -5
fs/libfs.c
···
 	 */
 	inode->i_state = I_DIRTY;
 	/*
-	 * Historically anonymous inodes didn't have a type at all and
-	 * userspace has come to rely on this. Internally they're just
-	 * regular files but S_IFREG is masked off when reporting
-	 * information to userspace.
+	 * Historically anonymous inodes don't have a type at all and
+	 * userspace has come to rely on this.
 	 */
-	inode->i_mode = S_IFREG | S_IRUSR | S_IWUSR;
+	inode->i_mode = S_IRUSR | S_IWUSR;
 	inode->i_uid = current_fsuid();
 	inode->i_gid = current_fsgid();
 	inode->i_flags |= S_PRIVATE | S_ANON_INODE;
+1 -1
fs/namei.c
···
 			return -EACCES;
 		break;
 	default:
-		VFS_BUG_ON_INODE(1, inode);
+		VFS_BUG_ON_INODE(!IS_ANON_FILE(inode), inode);
 	}
 
 	error = inode_permission(idmap, inode, MAY_OPEN | acc_mode);
+23 -15
fs/netfs/buffered_write.c
···
  * data written into the pagecache until we can find out from the server what
  * the values actually are.
  */
-static void netfs_update_i_size(struct netfs_inode *ctx, struct inode *inode,
-				loff_t i_size, loff_t pos, size_t copied)
+void netfs_update_i_size(struct netfs_inode *ctx, struct inode *inode,
+			 loff_t pos, size_t copied)
 {
+	loff_t i_size, end = pos + copied;
 	blkcnt_t add;
 	size_t gap;
 
+	if (end <= i_size_read(inode))
+		return;
+
 	if (ctx->ops->update_i_size) {
-		ctx->ops->update_i_size(inode, pos);
+		ctx->ops->update_i_size(inode, end);
 		return;
 	}
 
-	i_size_write(inode, pos);
+	spin_lock(&inode->i_lock);
+
+	i_size = i_size_read(inode);
+	if (end > i_size) {
+		i_size_write(inode, end);
 #if IS_ENABLED(CONFIG_FSCACHE)
-	fscache_update_cookie(ctx->cache, NULL, &pos);
+		fscache_update_cookie(ctx->cache, NULL, &end);
 #endif
 
-	gap = SECTOR_SIZE - (i_size & (SECTOR_SIZE - 1));
-	if (copied > gap) {
-		add = DIV_ROUND_UP(copied - gap, SECTOR_SIZE);
+		gap = SECTOR_SIZE - (i_size & (SECTOR_SIZE - 1));
+		if (copied > gap) {
+			add = DIV_ROUND_UP(copied - gap, SECTOR_SIZE);
 
-		inode->i_blocks = min_t(blkcnt_t,
-					DIV_ROUND_UP(pos, SECTOR_SIZE),
-					inode->i_blocks + add);
+			inode->i_blocks = min_t(blkcnt_t,
+						DIV_ROUND_UP(end, SECTOR_SIZE),
+						inode->i_blocks + add);
+		}
 	}
+	spin_unlock(&inode->i_lock);
 }
···
 	struct folio *folio = NULL, *writethrough = NULL;
 	unsigned int bdp_flags = (iocb->ki_flags & IOCB_NOWAIT) ? BDP_ASYNC : 0;
 	ssize_t written = 0, ret, ret2;
-	loff_t i_size, pos = iocb->ki_pos;
+	loff_t pos = iocb->ki_pos;
 	size_t max_chunk = mapping_max_folio_size(mapping);
 	bool maybe_trouble = false;
···
 		flush_dcache_folio(folio);
 
 		/* Update the inode size if we moved the EOF marker */
+		netfs_update_i_size(ctx, inode, pos, copied);
 		pos += copied;
-		i_size = i_size_read(inode);
-		if (pos > i_size)
-			netfs_update_i_size(ctx, inode, i_size, pos, copied);
 		written += copied;
 
 		if (likely(!wreq)) {
-16
fs/netfs/direct_write.c
···
 #include <linux/uio.h>
 #include "internal.h"
 
-static void netfs_cleanup_dio_write(struct netfs_io_request *wreq)
-{
-	struct inode *inode = wreq->inode;
-	unsigned long long end = wreq->start + wreq->transferred;
-
-	if (!wreq->error &&
-	    i_size_read(inode) < end) {
-		if (wreq->netfs_ops->update_i_size)
-			wreq->netfs_ops->update_i_size(inode, end);
-		else
-			i_size_write(inode, end);
-	}
-}
-
 /*
  * Perform an unbuffered write where we may have to do an RMW operation on an
  * encrypted file. This can also be used for direct I/O writes.
···
 	if (async)
 		wreq->iocb = iocb;
 	wreq->len = iov_iter_count(&wreq->buffer.iter);
-	wreq->cleanup = netfs_cleanup_dio_write;
 	ret = netfs_unbuffered_write(wreq, is_sync_kiocb(iocb), wreq->len);
 	if (ret < 0) {
 		_debug("begin = %zd", ret);
···
 	}
 
 	if (!async) {
-		trace_netfs_rreq(wreq, netfs_rreq_trace_wait_ip);
 		ret = netfs_wait_for_write(wreq);
 		if (ret > 0)
 			iocb->ki_pos += ret;
+25 -1
fs/netfs/internal.h
···
 			   size_t offset, size_t len);
 
 /*
+ * buffered_write.c
+ */
+void netfs_update_i_size(struct netfs_inode *ctx, struct inode *inode,
+			 loff_t pos, size_t copied);
+
+/*
  * main.c
  */
 extern unsigned int netfs_debug;
···
 					 enum netfs_rreq_trace trace)
 {
 	if (test_bit(rreq_flag, &rreq->flags)) {
-		trace_netfs_rreq(rreq, trace);
 		clear_bit_unlock(rreq_flag, &rreq->flags);
 		smp_mb__after_atomic(); /* Set flag before task state */
+		trace_netfs_rreq(rreq, trace);
 		wake_up(&rreq->waitq);
 	}
+}
+
+/*
+ * Test the NETFS_RREQ_IN_PROGRESS flag, inserting an appropriate barrier.
+ */
+static inline bool netfs_check_rreq_in_progress(const struct netfs_io_request *rreq)
+{
+	/* Order read of flags before read of anything else, such as error. */
+	return test_bit_acquire(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
+}
+
+/*
+ * Test the NETFS_SREQ_IN_PROGRESS flag, inserting an appropriate barrier.
+ */
+static inline bool netfs_check_subreq_in_progress(const struct netfs_io_subrequest *subreq)
+{
+	/* Order read of flags before read of anything else, such as error. */
+	return test_bit_acquire(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
 }
 
 /*
+3 -3
fs/netfs/main.c
···
 
 	if (v == &netfs_io_requests) {
 		seq_puts(m,
-			 "REQUEST  OR REF FL ERR  OPS COVERAGE\n"
-			 "======== == === == ==== === =========\n"
+			 "REQUEST  OR REF FLAG ERR  OPS COVERAGE\n"
+			 "======== == === ==== ==== === =========\n"
 			 );
 		return 0;
 	}
 
 	rreq = list_entry(v, struct netfs_io_request, proc_link);
 	seq_printf(m,
-		   "%08x %s %3d %2lx %4ld %3d @%04llx %llx/%llx",
+		   "%08x %s %3d %4lx %4ld %3d @%04llx %llx/%llx",
 		   rreq->debug_id,
 		   netfs_origins[rreq->origin],
 		   refcount_read(&rreq->ref),
+31 -19
fs/netfs/misc.c
···
 	DEFINE_WAIT(myself);
 
 	list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
-		if (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags))
+		if (!netfs_check_subreq_in_progress(subreq))
 			continue;
 
-		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_queue);
+		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_quiesce);
 		for (;;) {
 			prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE);
 
-			if (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags))
+			if (!netfs_check_subreq_in_progress(subreq))
 				break;
 
 			trace_netfs_sreq(subreq, netfs_sreq_trace_wait_for);
 			schedule();
-			trace_netfs_rreq(rreq, netfs_rreq_trace_woke_queue);
 		}
 	}
 
+	trace_netfs_rreq(rreq, netfs_rreq_trace_waited_quiesce);
 	finish_wait(&rreq->waitq, &myself);
 }
 
···
 static int netfs_collect_in_app(struct netfs_io_request *rreq,
 				bool (*collector)(struct netfs_io_request *rreq))
 {
-	bool need_collect = false, inactive = true;
+	bool need_collect = false, inactive = true, done = true;
+
+	if (!netfs_check_rreq_in_progress(rreq)) {
+		trace_netfs_rreq(rreq, netfs_rreq_trace_recollect);
+		return 1; /* Done */
+	}
 
 	for (int i = 0; i < NR_IO_STREAMS; i++) {
 		struct netfs_io_subrequest *subreq;
···
 					   struct netfs_io_subrequest,
 					   rreq_link);
 		if (subreq &&
-		    (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags) ||
+		    (!netfs_check_subreq_in_progress(subreq) ||
 		     test_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags))) {
 			need_collect = true;
 			break;
 		}
+		if (subreq || !test_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags))
+			done = false;
 	}
 
-	if (!need_collect && !inactive)
+	if (!need_collect && !inactive && !done)
 		return 0; /* Sleep */
 
 	__set_current_state(TASK_RUNNING);
···
 /*
  * Wait for a request to complete, successfully or otherwise.
  */
-static ssize_t netfs_wait_for_request(struct netfs_io_request *rreq,
-				      bool (*collector)(struct netfs_io_request *rreq))
+static ssize_t netfs_wait_for_in_progress(struct netfs_io_request *rreq,
+					  bool (*collector)(struct netfs_io_request *rreq))
 {
 	DEFINE_WAIT(myself);
 	ssize_t ret;
 
 	for (;;) {
-		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_queue);
 		prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE);
 
 		if (!test_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &rreq->flags)) {
···
 			case 1:
 				goto all_collected;
 			case 2:
+				if (!netfs_check_rreq_in_progress(rreq))
+					break;
+				cond_resched();
 				continue;
 			}
 		}
 
-		if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags))
+		if (!netfs_check_rreq_in_progress(rreq))
 			break;
 
+		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_ip);
 		schedule();
-		trace_netfs_rreq(rreq, netfs_rreq_trace_woke_queue);
 	}
 
 all_collected:
+	trace_netfs_rreq(rreq, netfs_rreq_trace_waited_ip);
 	finish_wait(&rreq->waitq, &myself);
 
 	ret = rreq->error;
···
 
 ssize_t netfs_wait_for_read(struct netfs_io_request *rreq)
 {
-	return netfs_wait_for_request(rreq, netfs_read_collection);
+	return netfs_wait_for_in_progress(rreq, netfs_read_collection);
 }
 
 ssize_t netfs_wait_for_write(struct netfs_io_request *rreq)
 {
-	return netfs_wait_for_request(rreq, netfs_write_collection);
+	return netfs_wait_for_in_progress(rreq, netfs_write_collection);
 }
 
 /*
···
 {
 	DEFINE_WAIT(myself);
 
-	trace_netfs_rreq(rreq, netfs_rreq_trace_wait_pause);
-
 	for (;;) {
-		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_queue);
+		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_pause);
 		prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE);
 
 		if (!test_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &rreq->flags)) {
···
 			case 1:
 				goto all_collected;
 			case 2:
+				if (!netfs_check_rreq_in_progress(rreq) ||
+				    !test_bit(NETFS_RREQ_PAUSE, &rreq->flags))
+					break;
+				cond_resched();
 				continue;
 			}
 		}
 
-		if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags) ||
+		if (!netfs_check_rreq_in_progress(rreq) ||
 		    !test_bit(NETFS_RREQ_PAUSE, &rreq->flags))
 			break;
 
 		schedule();
-		trace_netfs_rreq(rreq, netfs_rreq_trace_woke_queue);
 	}
 
 all_collected:
+	trace_netfs_rreq(rreq, netfs_rreq_trace_waited_pause);
 	finish_wait(&rreq->waitq, &myself);
 }
 
+11 -5
fs/netfs/read_collect.c
···
 		stream->collected_to = front->start;
 	}
 
-	if (test_bit(NETFS_SREQ_IN_PROGRESS, &front->flags))
+	if (netfs_check_subreq_in_progress(front))
 		notes |= HIT_PENDING;
 	smp_rmb(); /* Read counters after IN_PROGRESS flag. */
 	transferred = READ_ONCE(front->transferred);
···
 		spin_lock(&rreq->lock);
 
 		remove = front;
-		trace_netfs_sreq(front, netfs_sreq_trace_discard);
+		trace_netfs_sreq(front,
+				 notes & ABANDON_SREQ ?
+				 netfs_sreq_trace_abandoned : netfs_sreq_trace_consumed);
 		list_del_init(&front->rreq_link);
 		front = list_first_entry_or_null(&stream->subrequests,
 						 struct netfs_io_subrequest, rreq_link);
···
 
 	if (rreq->iocb) {
 		rreq->iocb->ki_pos += rreq->transferred;
-		if (rreq->iocb->ki_complete)
+		if (rreq->iocb->ki_complete) {
+			trace_netfs_rreq(rreq, netfs_rreq_trace_ki_complete);
 			rreq->iocb->ki_complete(
 				rreq->iocb, rreq->error ? rreq->error : rreq->transferred);
+		}
 	}
 	if (rreq->netfs_ops->done)
 		rreq->netfs_ops->done(rreq);
···
 
 	if (rreq->iocb) {
 		rreq->iocb->ki_pos += rreq->transferred;
-		if (rreq->iocb->ki_complete)
+		if (rreq->iocb->ki_complete) {
+			trace_netfs_rreq(rreq, netfs_rreq_trace_ki_complete);
 			rreq->iocb->ki_complete(
 				rreq->iocb, rreq->error ? rreq->error : rreq->transferred);
+		}
 	}
 	if (rreq->netfs_ops->done)
 		rreq->netfs_ops->done(rreq);
···
 	struct netfs_io_request *rreq = container_of(work, struct netfs_io_request, work);
 
 	netfs_see_request(rreq, netfs_rreq_trace_see_work);
-	if (test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags)) {
+	if (netfs_check_rreq_in_progress(rreq)) {
 		if (netfs_read_collection(rreq))
 			/* Drop the ref from the IN_PROGRESS flag. */
 			netfs_put_request(rreq, netfs_rreq_trace_put_work_ip);
+9 -5
fs/netfs/write_collect.c
···
 	}
 
 	/* Stall if the front is still undergoing I/O. */
-	if (test_bit(NETFS_SREQ_IN_PROGRESS, &front->flags)) {
+	if (netfs_check_subreq_in_progress(front)) {
 		notes |= HIT_PENDING;
 		break;
 	}
···
 			ictx->ops->invalidate_cache(wreq);
 	}
 
-	if (wreq->cleanup)
-		wreq->cleanup(wreq);
+	if ((wreq->origin == NETFS_UNBUFFERED_WRITE ||
+	     wreq->origin == NETFS_DIO_WRITE) &&
+	    !wreq->error)
+		netfs_update_i_size(ictx, &ictx->inode, wreq->start, wreq->transferred);
 
 	if (wreq->origin == NETFS_DIO_WRITE &&
 	    wreq->mapping->nrpages) {
···
 	if (wreq->iocb) {
 		size_t written = min(wreq->transferred, wreq->len);
 		wreq->iocb->ki_pos += written;
-		if (wreq->iocb->ki_complete)
+		if (wreq->iocb->ki_complete) {
+			trace_netfs_rreq(wreq, netfs_rreq_trace_ki_complete);
 			wreq->iocb->ki_complete(
 				wreq->iocb, wreq->error ? wreq->error : written);
+		}
 		wreq->iocb = VFS_PTR_POISON;
 	}
 
···
 	struct netfs_io_request *rreq = container_of(work, struct netfs_io_request, work);
 
 	netfs_see_request(rreq, netfs_rreq_trace_see_work);
-	if (test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags)) {
+	if (netfs_check_rreq_in_progress(rreq)) {
 		if (netfs_write_collection(rreq))
 			/* Drop the ref from the IN_PROGRESS flag. */
 			netfs_put_request(rreq, netfs_rreq_trace_put_work_ip);
+1 -2
fs/netfs/write_retry.c
···
 		subreq = netfs_alloc_subrequest(wreq);
 		subreq->source		= to->source;
 		subreq->start		= start;
-		subreq->debug_index	= atomic_inc_return(&wreq->subreq_counter);
 		subreq->stream_nr	= to->stream_nr;
 		subreq->retry_count	= 1;
 
 		trace_netfs_sreq_ref(wreq->debug_id, subreq->debug_index,
 				     refcount_read(&subreq->ref),
 				     netfs_sreq_trace_new);
-		netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
+		trace_netfs_sreq(subreq, netfs_sreq_trace_split);
 
 		list_add(&subreq->rreq_link, &to->rreq_link);
 		to = list_next_entry(to, rreq_link);
+1 -1
fs/proc/inode.c
···
 
 	head = ei->sysctl;
 	if (head) {
-		RCU_INIT_POINTER(ei->sysctl, NULL);
+		WRITE_ONCE(ei->sysctl, NULL);
 		proc_sys_evict_inode(inode, head);
 	}
 }
+11 -7
fs/proc/proc_sysctl.c
···
 	struct ctl_table_header *head;
 	struct inode *inode;
 
-	/* Although proc doesn't have negative dentries, rcu-walk means
-	 * that inode here can be NULL */
-	/* AV: can it, indeed? */
-	inode = d_inode_rcu(dentry);
-	if (!inode)
-		return 1;
 	if (name->len != len)
 		return 1;
 	if (memcmp(name->name, str, len))
 		return 1;
-	head = rcu_dereference(PROC_I(inode)->sysctl);
+
+	// false positive is fine here - we'll recheck anyway
+	if (d_in_lookup(dentry))
+		return 0;
+
+	inode = d_inode_rcu(dentry);
+	// we just might have run into dentry in the middle of __dentry_kill()
+	if (!inode)
+		return 1;
+
+	head = READ_ONCE(PROC_I(inode)->sysctl);
 	return !head || !sysctl_is_seen(head);
 }
+2
fs/smb/client/cifsglob.h
···
 	__le32 session_key_id; /* retrieved from negotiate response and send in session setup request */
 	struct session_key session_key;
 	unsigned long lstrp; /* when we got last response from this server */
+	unsigned long neg_start; /* when negotiate started (jiffies) */
 	struct cifs_secmech secmech; /* crypto sec mech functs, descriptors */
 #define CIFS_NEGFLAVOR_UNENCAP	1	/* wct == 17, but no ext_sec */
 #define CIFS_NEGFLAVOR_EXTENDED	2	/* wct == 17, ext_sec bit set */
···
 	bool use_persistent:1; /* use persistent instead of durable handles */
 	bool no_lease:1;    /* Do not request leases on files or directories */
 	bool use_witness:1; /* use witness protocol */
+	bool dummy:1; /* dummy tcon used for reconnecting channels */
 	__le32 capabilities;
 	__u32 share_flags;
 	__u32 maximal_access;
+1
fs/smb/client/cifsproto.h
···
 			struct smb_hdr *out_buf,
 			int *bytes_returned);
 
+void smb2_query_server_interfaces(struct work_struct *work);
 void
 cifs_signal_cifsd_for_reconnect(struct TCP_Server_Info *server,
 				bool all_channels);
+24 -2
fs/smb/client/cifssmb.c
···
 		cifs_stats_bytes_read(tcon, rdata->got_bytes);
 		break;
 	case MID_REQUEST_SUBMITTED:
+		trace_netfs_sreq(&rdata->subreq, netfs_sreq_trace_io_req_submitted);
+		goto do_retry;
 	case MID_RETRY_NEEDED:
+		trace_netfs_sreq(&rdata->subreq, netfs_sreq_trace_io_retry_needed);
+do_retry:
+		__set_bit(NETFS_SREQ_NEED_RETRY, &rdata->subreq.flags);
 		rdata->result = -EAGAIN;
 		if (server->sign && rdata->got_bytes)
 			/* reset bytes number since we can not check a sign */
···
 		task_io_account_read(rdata->got_bytes);
 		cifs_stats_bytes_read(tcon, rdata->got_bytes);
 		break;
-	default:
+	case MID_RESPONSE_MALFORMED:
+		trace_netfs_sreq(&rdata->subreq, netfs_sreq_trace_io_malformed);
 		rdata->result = -EIO;
+		break;
+	default:
+		trace_netfs_sreq(&rdata->subreq, netfs_sreq_trace_io_unknown);
+		rdata->result = -EIO;
+		break;
 	}
 
 	if (rdata->result == -ENODATA) {
···
 	}
 	break;
 	case MID_REQUEST_SUBMITTED:
-	case MID_RETRY_NEEDED:
+		trace_netfs_sreq(&wdata->subreq, netfs_sreq_trace_io_req_submitted);
+		__set_bit(NETFS_SREQ_NEED_RETRY, &wdata->subreq.flags);
 		result = -EAGAIN;
 		break;
+	case MID_RETRY_NEEDED:
+		trace_netfs_sreq(&wdata->subreq, netfs_sreq_trace_io_retry_needed);
+		__set_bit(NETFS_SREQ_NEED_RETRY, &wdata->subreq.flags);
+		result = -EAGAIN;
+		break;
+	case MID_RESPONSE_MALFORMED:
+		trace_netfs_sreq(&wdata->subreq, netfs_sreq_trace_io_malformed);
+		result = -EIO;
+		break;
 	default:
+		trace_netfs_sreq(&wdata->subreq, netfs_sreq_trace_io_unknown);
 		result = -EIO;
 		break;
 	}
+5 -10
fs/smb/client/connect.c
···
 	return rc;
 }
 
-static void smb2_query_server_interfaces(struct work_struct *work)
+void smb2_query_server_interfaces(struct work_struct *work)
 {
 	int rc;
 	int xid;
···
 	/*
 	 * If we're in the process of mounting a share or reconnecting a session
 	 * and the server abruptly shut down (e.g. socket wasn't closed, packet
-	 * had been ACK'ed but no SMB response), don't wait longer than 20s to
-	 * negotiate protocol.
+	 * had been ACK'ed but no SMB response), don't wait longer than 20s from
+	 * when negotiate actually started.
 	 */
 	spin_lock(&server->srv_lock);
 	if (server->tcpStatus == CifsInNegotiate &&
-	    time_after(jiffies, server->lstrp + 20 * HZ)) {
+	    time_after(jiffies, server->neg_start + 20 * HZ)) {
 		spin_unlock(&server->srv_lock);
 		cifs_reconnect(server, false);
 		return true;
···
 	tcon->max_cached_dirs = ctx->max_cached_dirs;
 	tcon->nodelete = ctx->nodelete;
 	tcon->local_lease = ctx->local_lease;
-	INIT_LIST_HEAD(&tcon->pending_opens);
 	tcon->status = TID_GOOD;
 
-	INIT_DELAYED_WORK(&tcon->query_interfaces,
-			  smb2_query_server_interfaces);
 	if (ses->server->dialect >= SMB30_PROT_ID &&
 	    (ses->server->capabilities & SMB2_GLOBAL_CAP_MULTI_CHANNEL)) {
 		/* schedule query interfaces poll */
 		queue_delayed_work(cifsiod_wq, &tcon->query_interfaces,
 				   (SMB_INTERFACE_POLL_INTERVAL * HZ));
 	}
-#ifdef CONFIG_CIFS_DFS_UPCALL
-	INIT_DELAYED_WORK(&tcon->dfs_cache_work, dfs_cache_refresh);
-#endif
 	spin_lock(&cifs_tcp_ses_lock);
 	list_add(&tcon->tcon_list, &ses->tcon_list);
 	spin_unlock(&cifs_tcp_ses_lock);
···
 
 	server->lstrp = jiffies;
 	server->tcpStatus = CifsInNegotiate;
+	server->neg_start = jiffies;
 	spin_unlock(&server->srv_lock);
 
 	rc = server->ops->negotiate(xid, ses, server);
+7 -10
fs/smb/client/fs_context.c
···
 			cifs_errorf(fc, "symlinkroot mount options must be absolute path\n");
 			goto cifs_parse_mount_err;
 		}
-		kfree(ctx->symlinkroot);
-		ctx->symlinkroot = kstrdup(param->string, GFP_KERNEL);
-		if (!ctx->symlinkroot)
+		if (strnlen(param->string, PATH_MAX) == PATH_MAX) {
+			cifs_errorf(fc, "symlinkroot path too long (max path length: %u)\n",
+				    PATH_MAX - 1);
 			goto cifs_parse_mount_err;
+		}
+		kfree(ctx->symlinkroot);
+		ctx->symlinkroot = param->string;
+		param->string = NULL;
 		break;
 	}
 	/* case Opt_ignore: - is ignored as expected ... */
···
 		cifs_errorf(fc, "multiuser mount option not supported with upcalltarget set as 'mount'\n");
 		goto cifs_parse_mount_err;
 	}
-
-	/*
-	 * By default resolve all native absolute symlinks relative to "/mnt/".
-	 * Same default has drvfs driver running in WSL for resolving SMB shares.
-	 */
-	if (!ctx->symlinkroot)
-		ctx->symlinkroot = kstrdup("/mnt/", GFP_KERNEL);
 
 	return 0;
 
+6
fs/smb/client/misc.c
···
 #ifdef CONFIG_CIFS_DFS_UPCALL
 	INIT_LIST_HEAD(&ret_buf->dfs_ses_list);
 #endif
+	INIT_LIST_HEAD(&ret_buf->pending_opens);
+	INIT_DELAYED_WORK(&ret_buf->query_interfaces,
+			  smb2_query_server_interfaces);
+#ifdef CONFIG_CIFS_DFS_UPCALL
+	INIT_DELAYED_WORK(&ret_buf->dfs_cache_work, dfs_cache_refresh);
+#endif
 
 	return ret_buf;
 }
+1 -1
fs/smb/client/readdir.c
···
 	/* The Mode field in the response can now include the file type as well */
 	fattr->cf_mode = wire_mode_to_posix(le32_to_cpu(info->Mode),
 					    fattr->cf_cifsattrs & ATTR_DIRECTORY);
-	fattr->cf_dtype = S_DT(le32_to_cpu(info->Mode));
+	fattr->cf_dtype = S_DT(fattr->cf_mode);
 
 	switch (fattr->cf_mode & S_IFMT) {
 	case S_IFLNK:
+13 -9
fs/smb/client/reparse.c
···
 	struct reparse_symlink_data_buffer *buf = NULL;
 	struct cifs_open_info_data data = {};
 	struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
+	const char *symroot = cifs_sb->ctx->symlinkroot;
 	struct inode *new;
 	struct kvec iov;
 	__le16 *path = NULL;
···
 		.symlink_target = symlink_target,
 	};
 
-	if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS) && symname[0] == '/') {
+	if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS) &&
+	    symroot && symname[0] == '/') {
 		/*
 		 * This is a request to create an absolute symlink on the server
 		 * which does not support POSIX paths, and expects symlink in
···
 		 * ensure compatibility of this symlink stored in absolute form
 		 * on the SMB server.
 		 */
-		if (!strstarts(symname, cifs_sb->ctx->symlinkroot)) {
+		if (!strstarts(symname, symroot)) {
 			/*
 			 * If the absolute Linux symlink target path is not
 			 * inside "symlinkroot" location then there is no way
···
 			cifs_dbg(VFS,
 				 "absolute symlink '%s' cannot be converted to NT format "
 				 "because it is outside of symlinkroot='%s'\n",
-				 symname, cifs_sb->ctx->symlinkroot);
+				 symname, symroot);
 			rc = -EINVAL;
 			goto out;
 		}
-		len = strlen(cifs_sb->ctx->symlinkroot);
-		if (cifs_sb->ctx->symlinkroot[len-1] != '/')
+		len = strlen(symroot);
+		if (symroot[len - 1] != '/')
 			len++;
 		if (symname[len] >= 'a' && symname[len] <= 'z' &&
 		    (symname[len+1] == '/' || symname[len+1] == '\0')) {
···
 				   const char *full_path,
 				   struct cifs_sb_info *cifs_sb)
 {
+	const char *symroot = cifs_sb->ctx->symlinkroot;
 	char sep = CIFS_DIR_SEP(cifs_sb);
 	char *linux_target = NULL;
 	char *smb_target = NULL;
···
 		goto out;
 	}
 
-	if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS) && !relative) {
+	if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS) &&
+	    symroot && !relative) {
 		/*
 		 * This is an absolute symlink from the server which does not
 		 * support POSIX paths, so the symlink is in NT-style path.
···
 	}
 
 		abs_path_len = strlen(abs_path)+1;
-		symlinkroot_len = strlen(cifs_sb->ctx->symlinkroot);
-		if (cifs_sb->ctx->symlinkroot[symlinkroot_len-1] == '/')
+		symlinkroot_len = strlen(symroot);
+		if (symroot[symlinkroot_len - 1] == '/')
 			symlinkroot_len--;
 		linux_target = kmalloc(symlinkroot_len + 1 + abs_path_len, GFP_KERNEL);
 		if (!linux_target) {
 			rc = -ENOMEM;
 			goto out;
 		}
-		memcpy(linux_target, cifs_sb->ctx->symlinkroot, symlinkroot_len);
+		memcpy(linux_target, symroot, symlinkroot_len);
 		linux_target[symlinkroot_len] = '/';
 		memcpy(linux_target + symlinkroot_len + 1, abs_path, abs_path_len);
 	} else if (smb_target[0] == sep && relative) {
+28 -11
fs/smb/client/smb2pdu.c
···
 	free_xid(xid);
 	ses->flags &= ~CIFS_SES_FLAGS_PENDING_QUERY_INTERFACES;
 
-	/* regardless of rc value, setup polling */
-	queue_delayed_work(cifsiod_wq, &tcon->query_interfaces,
-			   (SMB_INTERFACE_POLL_INTERVAL * HZ));
+	if (!tcon->ipc && !tcon->dummy)
+		queue_delayed_work(cifsiod_wq, &tcon->query_interfaces,
+				   (SMB_INTERFACE_POLL_INTERVAL * HZ));
 
 	mutex_unlock(&ses->session_mutex);
 
···
 		}
 		goto done;
 	}
-
 	tcon->status = TID_GOOD;
-	tcon->retry = false;
-	tcon->need_reconnect = false;
+	tcon->dummy = true;
 
 	/* now reconnect sessions for necessary channels */
 	list_for_each_entry_safe(ses, ses2, &tmp_ses_list, rlist) {
···
 		cifs_stats_bytes_read(tcon, rdata->got_bytes);
 		break;
 	case MID_REQUEST_SUBMITTED:
+		trace_netfs_sreq(&rdata->subreq, netfs_sreq_trace_io_req_submitted);
+		goto do_retry;
 	case MID_RETRY_NEEDED:
+		trace_netfs_sreq(&rdata->subreq, netfs_sreq_trace_io_retry_needed);
+do_retry:
 		__set_bit(NETFS_SREQ_NEED_RETRY, &rdata->subreq.flags);
 		rdata->result = -EAGAIN;
 		if (server->sign && rdata->got_bytes)
···
 		cifs_stats_bytes_read(tcon, rdata->got_bytes);
 		break;
 	case MID_RESPONSE_MALFORMED:
+		trace_netfs_sreq(&rdata->subreq, netfs_sreq_trace_io_malformed);
 		credits.value = le16_to_cpu(shdr->CreditRequest);
 		credits.instance = server->reconnect_instance;
-		fallthrough;
-	default:
 		rdata->result = -EIO;
+		break;
+	default:
+		trace_netfs_sreq(&rdata->subreq, netfs_sreq_trace_io_unknown);
+		rdata->result = -EIO;
+		break;
 	}
 #ifdef CONFIG_CIFS_SMB_DIRECT
 	/*
···
 
 	switch (mid->mid_state) {
 	case MID_RESPONSE_RECEIVED:
+		trace_netfs_sreq(&wdata->subreq, netfs_sreq_trace_io_progress);
 		credits.value = le16_to_cpu(rsp->hdr.CreditRequest);
 		credits.instance = server->reconnect_instance;
 		result = smb2_check_receive(mid, server, 0);
-		if (result != 0)
+		if (result != 0) {
+			trace_netfs_sreq(&wdata->subreq, netfs_sreq_trace_io_bad);
 			break;
+		}
 
 		written = le32_to_cpu(rsp->DataLength);
 		/*
···
 		}
 		break;
 	case MID_REQUEST_SUBMITTED:
+		trace_netfs_sreq(&wdata->subreq, netfs_sreq_trace_io_req_submitted);
+		__set_bit(NETFS_SREQ_NEED_RETRY, &wdata->subreq.flags);
+		result = -EAGAIN;
+		break;
 	case MID_RETRY_NEEDED:
+		trace_netfs_sreq(&wdata->subreq, netfs_sreq_trace_io_retry_needed);
+		__set_bit(NETFS_SREQ_NEED_RETRY, &wdata->subreq.flags);
 		result = -EAGAIN;
 		break;
 	case MID_RESPONSE_MALFORMED:
+		trace_netfs_sreq(&wdata->subreq, netfs_sreq_trace_io_malformed);
 		credits.value = le16_to_cpu(rsp->hdr.CreditRequest);
 		credits.instance = server->reconnect_instance;
-		fallthrough;
+		result = -EIO;
+		break;
 	default:
+		trace_netfs_sreq(&wdata->subreq, netfs_sreq_trace_io_unknown);
 		result = -EIO;
 		break;
 	}
···
 			      server->credits, server->in_flight,
 			      0, cifs_trace_rw_credits_write_response_clear);
 	wdata->credits.value = 0;
-	trace_netfs_sreq(&wdata->subreq, netfs_sreq_trace_io_progress);
 	cifs_write_subrequest_terminated(wdata, result ?: written);
 	release_mid(mid);
 	trace_smb3_rw_credits(rreq_debug_id, subreq_debug_index, 0,
+3
include/drm/drm_mipi_dsi.h
··· 223 223 224 224 #define to_mipi_dsi_device(__dev) container_of_const(__dev, struct mipi_dsi_device, dev) 225 225 226 + extern const struct bus_type mipi_dsi_bus_type; 227 + #define dev_is_mipi_dsi(dev) ((dev)->bus == &mipi_dsi_bus_type) 228 + 226 229 /** 227 230 * mipi_dsi_pixel_format_to_bpp - obtain the number of bits per pixel for any 228 231 * given pixel format defined by the MIPI DSI
+3 -1
include/drm/spsc_queue.h
··· 70 70 71 71 preempt_disable(); 72 72 73 + atomic_inc(&queue->job_count); 74 + smp_mb__after_atomic(); 75 + 73 76 tail = (struct spsc_node **)atomic_long_xchg(&queue->tail, (long)&node->next); 74 77 WRITE_ONCE(*tail, node); 75 - atomic_inc(&queue->job_count); 76 78 77 79 /* 78 80 * In case of first element verify new node will be visible to the consumer
+1
include/linux/arm_ffa.h
··· 283 283 u32 offset; 284 284 u32 send_recv_id; 285 285 u32 size; 286 + u32 res1; 286 287 uuid_t uuid; 287 288 }; 288 289
+1
include/linux/cpu.h
··· 82 82 struct device_attribute *attr, char *buf); 83 83 extern ssize_t cpu_show_indirect_target_selection(struct device *dev, 84 84 struct device_attribute *attr, char *buf); 85 + extern ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf); 85 86 86 87 extern __printf(4, 5) 87 88 struct device *cpu_device_create(struct device *parent, void *drvdata,
+2
include/linux/fs.h
··· 3608 3608 extern const struct address_space_operations ram_aops; 3609 3609 extern int always_delete_dentry(const struct dentry *); 3610 3610 extern struct inode *alloc_anon_inode(struct super_block *); 3611 + struct inode *anon_inode_make_secure_inode(struct super_block *sb, const char *name, 3612 + const struct inode *context_inode); 3611 3613 extern int simple_nosetlease(struct file *, int, struct file_lease **, void **); 3612 3614 extern const struct dentry_operations simple_dentry_operations; 3613 3615
+8
include/linux/mtd/nand-qpic-common.h
··· 237 237 * @last_data_desc - last DMA desc in data channel (tx/rx). 238 238 * @last_cmd_desc - last DMA desc in command channel. 239 239 * @txn_done - completion for NAND transfer. 240 + * @bam_ce_nitems - the number of elements in the @bam_ce array 241 + * @cmd_sgl_nitems - the number of elements in the @cmd_sgl array 242 + * @data_sgl_nitems - the number of elements in the @data_sgl array 240 243 * @bam_ce_pos - the index in bam_ce which is available for next sgl 241 244 * @bam_ce_start - the index in bam_ce which marks the start position ce 242 245 * for current sgl. It will be used for size calculation ··· 258 255 struct dma_async_tx_descriptor *last_data_desc; 259 256 struct dma_async_tx_descriptor *last_cmd_desc; 260 257 struct completion txn_done; 258 + 259 + unsigned int bam_ce_nitems; 260 + unsigned int cmd_sgl_nitems; 261 + unsigned int data_sgl_nitems; 262 + 261 263 struct_group(bam_positions, 262 264 u32 bam_ce_pos; 263 265 u32 bam_ce_start;
+10 -11
include/linux/netfs.h
··· 265 265 bool direct_bv_unpin; /* T if direct_bv[] must be unpinned */ 266 266 refcount_t ref; 267 267 unsigned long flags; 268 - #define NETFS_RREQ_OFFLOAD_COLLECTION 0 /* Offload collection to workqueue */ 269 - #define NETFS_RREQ_NO_UNLOCK_FOLIO 2 /* Don't unlock no_unlock_folio on completion */ 270 - #define NETFS_RREQ_FAILED 4 /* The request failed */ 271 - #define NETFS_RREQ_IN_PROGRESS 5 /* Unlocked when the request completes (has ref) */ 272 - #define NETFS_RREQ_FOLIO_COPY_TO_CACHE 6 /* Copy current folio to cache from read */ 273 - #define NETFS_RREQ_UPLOAD_TO_SERVER 8 /* Need to write to the server */ 274 - #define NETFS_RREQ_PAUSE 11 /* Pause subrequest generation */ 268 + #define NETFS_RREQ_IN_PROGRESS 0 /* Unlocked when the request completes (has ref) */ 269 + #define NETFS_RREQ_ALL_QUEUED 1 /* All subreqs are now queued */ 270 + #define NETFS_RREQ_PAUSE 2 /* Pause subrequest generation */ 271 + #define NETFS_RREQ_FAILED 3 /* The request failed */ 272 + #define NETFS_RREQ_RETRYING 4 /* Set if we're in the retry path */ 273 + #define NETFS_RREQ_SHORT_TRANSFER 5 /* Set if we have a short transfer */ 274 + #define NETFS_RREQ_OFFLOAD_COLLECTION 8 /* Offload collection to workqueue */ 275 + #define NETFS_RREQ_NO_UNLOCK_FOLIO 9 /* Don't unlock no_unlock_folio on completion */ 276 + #define NETFS_RREQ_FOLIO_COPY_TO_CACHE 10 /* Copy current folio to cache from read */ 277 + #define NETFS_RREQ_UPLOAD_TO_SERVER 11 /* Need to write to the server */ 275 278 #define NETFS_RREQ_USE_IO_ITER 12 /* Use ->io_iter rather than ->i_pages */ 276 - #define NETFS_RREQ_ALL_QUEUED 13 /* All subreqs are now queued */ 277 - #define NETFS_RREQ_RETRYING 14 /* Set if we're in the retry path */ 278 - #define NETFS_RREQ_SHORT_TRANSFER 15 /* Set if we have a short transfer */ 279 279 #define NETFS_RREQ_USE_PGPRIV2 31 /* [DEPRECATED] Use PG_private_2 to mark 280 280 * write to cache on read */ 281 281 const struct netfs_request_ops *netfs_ops; 282 - void (*cleanup)(struct netfs_io_request *req); 283 282 }; 284 283 285 284 /*
+2
include/linux/psp-sev.h
··· 594 594 * @imi_en: launch flow is launching an IMI (Incoming Migration Image) for the 595 595 * purpose of guest-assisted migration. 596 596 * @rsvd: reserved 597 + * @desired_tsc_khz: hypervisor desired mean TSC freq in kHz of the guest 597 598 * @gosvw: guest OS-visible workarounds, as defined by hypervisor 598 599 */ 599 600 struct sev_data_snp_launch_start { ··· 604 603 u32 ma_en:1; /* In */ 605 604 u32 imi_en:1; /* In */ 606 605 u32 rsvd:30; 606 + u32 desired_tsc_khz; /* In */ 607 607 u8 gosvw[16]; /* In */ 608 608 } __packed; 609 609
+1 -1
include/linux/spi/spi.h
··· 21 21 #include <uapi/linux/spi/spi.h> 22 22 23 23 /* Max no. of CS supported per spi device */ 24 - #define SPI_CS_CNT_MAX 16 24 + #define SPI_CS_CNT_MAX 24 25 25 26 26 struct dma_chan; 27 27 struct software_node;
+5
include/linux/suspend.h
··· 446 446 extern void ksys_sync_helper(void); 447 447 extern void pm_report_hw_sleep_time(u64 t); 448 448 extern void pm_report_max_hw_sleep(u64 t); 449 + void pm_restrict_gfp_mask(void); 450 + void pm_restore_gfp_mask(void); 449 451 450 452 #define pm_notifier(fn, pri) { \ 451 453 static struct notifier_block fn##_nb = \ ··· 493 491 494 492 static inline void pm_report_hw_sleep_time(u64 t) {}; 495 493 static inline void pm_report_max_hw_sleep(u64 t) {}; 494 + 495 + static inline void pm_restrict_gfp_mask(void) {} 496 + static inline void pm_restore_gfp_mask(void) {} 496 497 497 498 static inline void ksys_sync_helper(void) {} 498 499
+2
include/linux/usb.h
··· 614 614 * FIXME -- complete doc 615 615 * @authenticated: Crypto authentication passed 616 616 * @tunnel_mode: Connection native or tunneled over USB4 617 + * @usb4_link: device link to the USB4 host interface 617 618 * @lpm_capable: device supports LPM 618 619 * @lpm_devinit_allow: Allow USB3 device initiated LPM, exit latency is in range 619 620 * @usb2_hw_lpm_capable: device can perform USB2 hardware LPM ··· 725 724 unsigned reset_resume:1; 726 725 unsigned port_is_suspended:1; 727 726 enum usb_link_tunnel_mode tunnel_mode; 727 + struct device_link *usb4_link; 728 728 729 729 int slot_id; 730 730 struct usb2_lpm_parameters l1_params;
+1
include/linux/usb/typec_dp.h
··· 57 57 DP_PIN_ASSIGN_D, 58 58 DP_PIN_ASSIGN_E, 59 59 DP_PIN_ASSIGN_F, /* Not supported after v1.0b */ 60 + DP_PIN_ASSIGN_MAX, 60 61 }; 61 62 62 63 /* DisplayPort alt mode specific commands */
+1 -1
include/net/af_vsock.h
··· 243 243 int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg, 244 244 size_t len, int flags); 245 245 246 - #ifdef CONFIG_BPF_SYSCALL 247 246 extern struct proto vsock_proto; 247 + #ifdef CONFIG_BPF_SYSCALL 248 248 int vsock_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore); 249 249 void __init vsock_bpf_build_proto(void); 250 250 #else
+1 -2
include/net/bluetooth/hci_core.h
··· 1350 1350 rcu_read_lock(); 1351 1351 1352 1352 list_for_each_entry_rcu(c, &h->list, list) { 1353 - if (c->type != BIS_LINK || bacmp(&c->dst, BDADDR_ANY) || 1354 - c->state != state) 1353 + if (c->type != BIS_LINK || c->state != state) 1355 1354 continue; 1356 1355 1357 1356 if (handle == c->iso_qos.bcast.big) {
+24 -1
include/net/pkt_sched.h
··· 114 114 struct netlink_ext_ack *extack); 115 115 void qdisc_put_rtab(struct qdisc_rate_table *tab); 116 116 void qdisc_put_stab(struct qdisc_size_table *tab); 117 - void qdisc_warn_nonwc(const char *txt, struct Qdisc *qdisc); 118 117 bool sch_direct_xmit(struct sk_buff *skb, struct Qdisc *q, 119 118 struct net_device *dev, struct netdev_queue *txq, 120 119 spinlock_t *root_lock, bool validate); ··· 287 288 288 289 arg->count++; 289 290 return true; 291 + } 292 + 293 + static inline void qdisc_warn_nonwc(const char *txt, struct Qdisc *qdisc) 294 + { 295 + if (!(qdisc->flags & TCQ_F_WARN_NONWC)) { 296 + pr_warn("%s: %s qdisc %X: is non-work-conserving?\n", 297 + txt, qdisc->ops->id, qdisc->handle >> 16); 298 + qdisc->flags |= TCQ_F_WARN_NONWC; 299 + } 300 + } 301 + 302 + static inline unsigned int qdisc_peek_len(struct Qdisc *sch) 303 + { 304 + struct sk_buff *skb; 305 + unsigned int len; 306 + 307 + skb = sch->ops->peek(sch); 308 + if (unlikely(skb == NULL)) { 309 + qdisc_warn_nonwc("qdisc_peek_len", sch); 310 + return 0; 311 + } 312 + len = qdisc_pkt_len(skb); 313 + 314 + return len; 290 315 } 291 316 292 317 #endif
+20 -9
include/trace/events/netfs.h
··· 50 50 51 51 #define netfs_rreq_traces \ 52 52 EM(netfs_rreq_trace_assess, "ASSESS ") \ 53 - EM(netfs_rreq_trace_copy, "COPY ") \ 54 53 EM(netfs_rreq_trace_collect, "COLLECT") \ 55 54 EM(netfs_rreq_trace_complete, "COMPLET") \ 55 + EM(netfs_rreq_trace_copy, "COPY ") \ 56 56 EM(netfs_rreq_trace_dirty, "DIRTY ") \ 57 57 EM(netfs_rreq_trace_done, "DONE ") \ 58 58 EM(netfs_rreq_trace_free, "FREE ") \ 59 + EM(netfs_rreq_trace_ki_complete, "KI-CMPL") \ 60 + EM(netfs_rreq_trace_recollect, "RECLLCT") \ 59 61 EM(netfs_rreq_trace_redirty, "REDIRTY") \ 60 62 EM(netfs_rreq_trace_resubmit, "RESUBMT") \ 61 63 EM(netfs_rreq_trace_set_abandon, "S-ABNDN") \ ··· 65 63 EM(netfs_rreq_trace_unlock, "UNLOCK ") \ 66 64 EM(netfs_rreq_trace_unlock_pgpriv2, "UNLCK-2") \ 67 65 EM(netfs_rreq_trace_unmark, "UNMARK ") \ 66 + EM(netfs_rreq_trace_unpause, "UNPAUSE") \ 68 67 EM(netfs_rreq_trace_wait_ip, "WAIT-IP") \ 69 - EM(netfs_rreq_trace_wait_pause, "WT-PAUS") \ 70 - EM(netfs_rreq_trace_wait_queue, "WAIT-Q ") \ 68 + EM(netfs_rreq_trace_wait_pause, "--PAUSED--") \ 69 + EM(netfs_rreq_trace_wait_quiesce, "WAIT-QUIESCE") \ 70 + EM(netfs_rreq_trace_waited_ip, "DONE-IP") \ 71 + EM(netfs_rreq_trace_waited_pause, "--UNPAUSED--") \ 72 + EM(netfs_rreq_trace_waited_quiesce, "DONE-QUIESCE") \ 71 73 EM(netfs_rreq_trace_wake_ip, "WAKE-IP") \ 72 74 EM(netfs_rreq_trace_wake_queue, "WAKE-Q ") \ 73 - EM(netfs_rreq_trace_woke_queue, "WOKE-Q ") \ 74 - EM(netfs_rreq_trace_unpause, "UNPAUSE") \ 75 75 E_(netfs_rreq_trace_write_done, "WR-DONE") 76 76 77 77 #define netfs_sreq_sources \ ··· 86 82 E_(NETFS_WRITE_TO_CACHE, "WRIT") 87 83 88 84 #define netfs_sreq_traces \ 85 + EM(netfs_sreq_trace_abandoned, "ABNDN") \ 89 86 EM(netfs_sreq_trace_add_donations, "+DON ") \ 90 87 EM(netfs_sreq_trace_added, "ADD ") \ 91 88 EM(netfs_sreq_trace_cache_nowrite, "CA-NW") \ ··· 94 89 EM(netfs_sreq_trace_cache_write, "CA-WR") \ 95 90 EM(netfs_sreq_trace_cancel, "CANCL") \ 96 91 EM(netfs_sreq_trace_clear, "CLEAR") \ 92 + EM(netfs_sreq_trace_consumed, "CONSM") \ 97 93 EM(netfs_sreq_trace_discard, "DSCRD") \ 98 94 EM(netfs_sreq_trace_donate_to_prev, "DON-P") \ 99 95 EM(netfs_sreq_trace_donate_to_next, "DON-N") \ ··· 102 96 EM(netfs_sreq_trace_fail, "FAIL ") \ 103 97 EM(netfs_sreq_trace_free, "FREE ") \ 104 98 EM(netfs_sreq_trace_hit_eof, "EOF ") \ 105 - EM(netfs_sreq_trace_io_progress, "IO ") \ 99 + EM(netfs_sreq_trace_io_bad, "I-BAD") \ 100 + EM(netfs_sreq_trace_io_malformed, "I-MLF") \ 101 + EM(netfs_sreq_trace_io_unknown, "I-UNK") \ 102 + EM(netfs_sreq_trace_io_progress, "I-OK ") \ 103 + EM(netfs_sreq_trace_io_req_submitted, "I-RSB") \ 104 + EM(netfs_sreq_trace_io_retry_needed, "I-RTR") \ 106 105 EM(netfs_sreq_trace_limited, "LIMIT") \ 107 106 EM(netfs_sreq_trace_need_clear, "N-CLR") \ 108 107 EM(netfs_sreq_trace_partial_read, "PARTR") \ ··· 153 142 154 143 #define netfs_sreq_ref_traces \ 155 144 EM(netfs_sreq_trace_get_copy_to_cache, "GET COPY2C ") \ 156 - EM(netfs_sreq_trace_get_resubmit, "GET RESUBMIT") \ 157 - EM(netfs_sreq_trace_get_submit, "GET SUBMIT") \ 145 + EM(netfs_sreq_trace_get_resubmit, "GET RESUBMT") \ 146 + EM(netfs_sreq_trace_get_submit, "GET SUBMIT ") \ 158 147 EM(netfs_sreq_trace_get_short_read, "GET SHORTRD") \ 159 148 EM(netfs_sreq_trace_new, "NEW ") \ 160 149 EM(netfs_sreq_trace_put_abandon, "PUT ABANDON") \ ··· 377 366 __entry->slot = sreq->io_iter.folioq_slot; 378 367 ), 379 368 380 - TP_printk("R=%08x[%x] %s %s f=%02x s=%llx %zx/%zx s=%u e=%d", 369 + TP_printk("R=%08x[%x] %s %s f=%03x s=%llx %zx/%zx s=%u e=%d", 381 370 __entry->rreq, __entry->index, 382 371 __print_symbolic(__entry->source, netfs_sreq_sources), 383 372 __print_symbolic(__entry->what, netfs_sreq_traces),
+2 -2
include/uapi/linux/bits.h
··· 4 4 #ifndef _UAPI_LINUX_BITS_H 5 5 #define _UAPI_LINUX_BITS_H 6 6 7 - #define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (BITS_PER_LONG - 1 - (h)))) 7 + #define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (__BITS_PER_LONG - 1 - (h)))) 8 8 9 - #define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (BITS_PER_LONG_LONG - 1 - (h)))) 9 + #define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (__BITS_PER_LONG_LONG - 1 - (h)))) 10 10 11 11 #define __GENMASK_U128(h, l) \ 12 12 ((_BIT128((h)) << 1) - (_BIT128(l)))
+4
include/uapi/linux/kvm.h
··· 467 467 __u64 leaf; 468 468 __u64 r11, r12, r13, r14; 469 469 } get_tdvmcall_info; 470 + struct { 471 + __u64 ret; 472 + __u64 vector; 473 + } setup_event_notify; 470 474 }; 471 475 } tdx; 472 476 /* Fix the size of the union. */
+4
init/Kconfig
··· 1716 1716 depends on FUTEX && RT_MUTEXES 1717 1717 default y 1718 1718 1719 + # 1720 + # marked broken for performance reasons; gives us one more cycle to sort things out. 1721 + # 1719 1722 config FUTEX_PRIVATE_HASH 1720 1723 bool 1721 1724 depends on FUTEX && !BASE_SMALL && MMU 1725 + depends on BROKEN 1722 1726 default y 1723 1727 1724 1728 config FUTEX_MPOL
+3 -3
kernel/events/core.c
··· 951 951 if (READ_ONCE(cpuctx->cgrp) == NULL) 952 952 return; 953 953 954 - WARN_ON_ONCE(cpuctx->ctx.nr_cgroups == 0); 955 - 956 954 cgrp = perf_cgroup_from_task(task, NULL); 957 955 if (READ_ONCE(cpuctx->cgrp) == cgrp) 958 956 return; ··· 961 963 */ 962 964 if (READ_ONCE(cpuctx->cgrp) == NULL) 963 965 return; 966 + 967 + WARN_ON_ONCE(cpuctx->ctx.nr_cgroups == 0); 964 968 965 969 perf_ctx_disable(&cpuctx->ctx, true); 966 970 ··· 11116 11116 if (event->attr.type != perf_uprobe.type) 11117 11117 return -ENOENT; 11118 11118 11119 - if (!perfmon_capable()) 11119 + if (!capable(CAP_SYS_ADMIN)) 11120 11120 return -EACCES; 11121 11121 11122 11122 /*
+1
kernel/kexec_core.c
··· 1136 1136 Resume_devices: 1137 1137 dpm_resume_end(PMSG_RESTORE); 1138 1138 Resume_console: 1139 + pm_restore_gfp_mask(); 1139 1140 console_resume_all(); 1140 1141 thaw_processes(); 1141 1142 Restore_console:
+11 -6
kernel/module/main.c
··· 1573 1573 if (infosec >= info->hdr->e_shnum) 1574 1574 continue; 1575 1575 1576 - /* Don't bother with non-allocated sections */ 1577 - if (!(info->sechdrs[infosec].sh_flags & SHF_ALLOC)) 1576 + /* 1577 + * Don't bother with non-allocated sections. 1578 + * An exception is the percpu section, which has separate allocations 1579 + * for individual CPUs. We relocate the percpu section in the initial 1580 + * ELF template and subsequently copy it to the per-CPU destinations. 1581 + */ 1582 + if (!(info->sechdrs[infosec].sh_flags & SHF_ALLOC) && 1583 + (!infosec || infosec != info->index.pcpu)) 1578 1584 continue; 1579 1585 1580 1586 if (info->sechdrs[i].sh_flags & SHF_RELA_LIVEPATCH) ··· 2702 2696 2703 2697 static int move_module(struct module *mod, struct load_info *info) 2704 2698 { 2705 - int i; 2706 - enum mod_mem_type t = 0; 2707 - int ret = -ENOMEM; 2699 + int i, ret; 2700 + enum mod_mem_type t = MOD_MEM_NUM_TYPES; 2708 2701 bool codetag_section_found = false; 2709 2702 2710 2703 for_each_mod_mem_type(type) { ··· 2781 2776 return 0; 2782 2777 out_err: 2783 2778 module_memory_restore_rox(mod); 2784 - for (t--; t >= 0; t--) 2779 + while (t--) 2785 2780 module_memory_free(mod, t); 2786 2781 if (codetag_section_found) 2787 2782 codetag_free_module_sections(mod);
-3
kernel/power/hibernate.c
··· 423 423 } 424 424 425 425 console_suspend_all(); 426 - pm_restrict_gfp_mask(); 427 426 428 427 error = dpm_suspend(PMSG_FREEZE); 429 428 ··· 558 559 559 560 pm_prepare_console(); 560 561 console_suspend_all(); 561 - pm_restrict_gfp_mask(); 562 562 error = dpm_suspend_start(PMSG_QUIESCE); 563 563 if (!error) { 564 564 error = resume_target_kernel(platform_mode); ··· 569 571 BUG_ON(!error); 570 572 } 571 573 dpm_resume_end(PMSG_RECOVER); 572 - pm_restore_gfp_mask(); 573 574 console_resume_all(); 574 575 pm_restore_console(); 575 576 return error;
-5
kernel/power/power.h
··· 239 239 /* kernel/power/main.c */ 240 240 extern int pm_notifier_call_chain_robust(unsigned long val_up, unsigned long val_down); 241 241 extern int pm_notifier_call_chain(unsigned long val); 242 - void pm_restrict_gfp_mask(void); 243 - void pm_restore_gfp_mask(void); 244 - #else 245 - static inline void pm_restrict_gfp_mask(void) {} 246 - static inline void pm_restore_gfp_mask(void) {} 247 242 #endif 248 243 249 244 #ifdef CONFIG_HIGHMEM
+1 -2
kernel/power/suspend.c
··· 540 540 return error; 541 541 542 542 Recover_platform: 543 + pm_restore_gfp_mask(); 543 544 platform_recover(state); 544 545 goto Resume_devices; 545 546 } ··· 607 606 608 607 trace_suspend_resume(TPS("suspend_enter"), state, false); 609 608 pm_pr_dbg("Suspending system (%s)\n", mem_sleep_labels[state]); 610 - pm_restrict_gfp_mask(); 611 609 error = suspend_devices_and_enter(state); 612 - pm_restore_gfp_mask(); 613 610 614 611 Finish: 615 612 events_check_enabled = false;
+6 -1
kernel/sched/core.c
··· 3943 3943 if (!scx_allow_ttwu_queue(p)) 3944 3944 return false; 3945 3945 3946 + #ifdef CONFIG_SMP 3947 + if (p->sched_class == &stop_sched_class) 3948 + return false; 3949 + #endif 3950 + 3946 3951 /* 3947 3952 * Do not complicate things with the async wake_list while the CPU is 3948 3953 * in hotplug state. ··· 7668 7663 7669 7664 if (IS_ENABLED(CONFIG_PREEMPT_DYNAMIC)) { 7670 7665 seq_buf_printf(&s, "(%s)%s", 7671 - preempt_dynamic_mode > 0 ? 7666 + preempt_dynamic_mode >= 0 ? 7672 7667 preempt_modes[preempt_dynamic_mode] : "undef", 7673 7668 brace ? "}" : ""); 7674 7669 return seq_buf_str(&s);
+5 -5
kernel/sched/deadline.c
··· 1504 1504 if (dl_entity_is_special(dl_se)) 1505 1505 return; 1506 1506 1507 - scaled_delta_exec = dl_scaled_delta_exec(rq, dl_se, delta_exec); 1507 + scaled_delta_exec = delta_exec; 1508 + if (!dl_server(dl_se)) 1509 + scaled_delta_exec = dl_scaled_delta_exec(rq, dl_se, delta_exec); 1508 1510 1509 1511 dl_se->runtime -= scaled_delta_exec; 1510 1512 ··· 1613 1611 */ 1614 1612 void dl_server_update_idle_time(struct rq *rq, struct task_struct *p) 1615 1613 { 1616 - s64 delta_exec, scaled_delta_exec; 1614 + s64 delta_exec; 1617 1615 1618 1616 if (!rq->fair_server.dl_defer) 1619 1617 return; ··· 1626 1624 if (delta_exec < 0) 1627 1625 return; 1628 1626 1629 - scaled_delta_exec = dl_scaled_delta_exec(rq, &rq->fair_server, delta_exec); 1630 - 1631 - rq->fair_server.runtime -= scaled_delta_exec; 1627 + rq->fair_server.runtime -= delta_exec; 1632 1628 1633 1629 if (rq->fair_server.runtime < 0) { 1634 1630 rq->fair_server.dl_defer_running = 0;
+10 -10
kernel/stop_machine.c
··· 82 82 } 83 83 84 84 static void __cpu_stop_queue_work(struct cpu_stopper *stopper, 85 - struct cpu_stop_work *work, 86 - struct wake_q_head *wakeq) 85 + struct cpu_stop_work *work) 87 86 { 88 87 list_add_tail(&work->list, &stopper->works); 89 - wake_q_add(wakeq, stopper->thread); 90 88 } 91 89 92 90 /* queue @work to @stopper. if offline, @work is completed immediately */ 93 91 static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work) 94 92 { 95 93 struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu); 96 - DEFINE_WAKE_Q(wakeq); 97 94 unsigned long flags; 98 95 bool enabled; 99 96 ··· 98 101 raw_spin_lock_irqsave(&stopper->lock, flags); 99 102 enabled = stopper->enabled; 100 103 if (enabled) 101 - __cpu_stop_queue_work(stopper, work, &wakeq); 104 + __cpu_stop_queue_work(stopper, work); 102 105 else if (work->done) 103 106 cpu_stop_signal_done(work->done); 104 107 raw_spin_unlock_irqrestore(&stopper->lock, flags); 105 108 106 - wake_up_q(&wakeq); 109 + if (enabled) 110 + wake_up_process(stopper->thread); 107 111 preempt_enable(); 108 112 109 113 return enabled; ··· 262 264 { 263 265 struct cpu_stopper *stopper1 = per_cpu_ptr(&cpu_stopper, cpu1); 264 266 struct cpu_stopper *stopper2 = per_cpu_ptr(&cpu_stopper, cpu2); 265 - DEFINE_WAKE_Q(wakeq); 266 267 int err; 267 268 268 269 retry: ··· 297 300 } 298 301 299 302 err = 0; 300 - __cpu_stop_queue_work(stopper1, work1, &wakeq); 301 - __cpu_stop_queue_work(stopper2, work2, &wakeq); 303 + __cpu_stop_queue_work(stopper1, work1); 304 + __cpu_stop_queue_work(stopper2, work2); 302 305 303 306 unlock: 304 307 raw_spin_unlock(&stopper2->lock); ··· 313 316 goto retry; 314 317 } 315 318 316 - wake_up_q(&wakeq); 319 + if (!err) { 320 + wake_up_process(stopper1->thread); 321 + wake_up_process(stopper2->thread); 322 + } 317 323 preempt_enable(); 318 324 319 325 return err;
+1 -8
mm/secretmem.c
··· 195 195 struct file *file; 196 196 struct inode *inode; 197 197 const char *anon_name = "[secretmem]"; 198 - int err; 199 198 200 - inode = alloc_anon_inode(secretmem_mnt->mnt_sb); 199 + inode = anon_inode_make_secure_inode(secretmem_mnt->mnt_sb, anon_name, NULL); 201 200 if (IS_ERR(inode)) 202 201 return ERR_CAST(inode); 203 - 204 - err = security_inode_init_security_anon(inode, &QSTR(anon_name), NULL); 205 - if (err) { 206 - file = ERR_PTR(err); 207 - goto err_free_inode; 208 - } 209 202 210 203 file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem", 211 204 O_RDWR, &secretmem_fops);
+48 -16
net/atm/clip.c
··· 45 45 #include <net/atmclip.h> 46 46 47 47 static struct net_device *clip_devs; 48 - static struct atm_vcc *atmarpd; 48 + static struct atm_vcc __rcu *atmarpd; 49 + static DEFINE_MUTEX(atmarpd_lock); 49 50 static struct timer_list idle_timer; 50 51 static const struct neigh_ops clip_neigh_ops; 51 52 ··· 54 53 { 55 54 struct sock *sk; 56 55 struct atmarp_ctrl *ctrl; 56 + struct atm_vcc *vcc; 57 57 struct sk_buff *skb; 58 + int err = 0; 58 59 59 60 pr_debug("(%d)\n", type); 60 - if (!atmarpd) 61 - return -EUNATCH; 61 + 62 + rcu_read_lock(); 63 + vcc = rcu_dereference(atmarpd); 64 + if (!vcc) { 65 + err = -EUNATCH; 66 + goto unlock; 67 + } 62 68 skb = alloc_skb(sizeof(struct atmarp_ctrl), GFP_ATOMIC); 63 - if (!skb) 64 - return -ENOMEM; 69 + if (!skb) { 70 + err = -ENOMEM; 71 + goto unlock; 72 + } 65 73 ctrl = skb_put(skb, sizeof(struct atmarp_ctrl)); 66 74 ctrl->type = type; 67 75 ctrl->itf_num = itf; 68 76 ctrl->ip = ip; 69 - atm_force_charge(atmarpd, skb->truesize); 77 + atm_force_charge(vcc, skb->truesize); 70 78 71 - sk = sk_atm(atmarpd); 79 + sk = sk_atm(vcc); 72 80 skb_queue_tail(&sk->sk_receive_queue, skb); 73 81 sk->sk_data_ready(sk); 74 - return 0; 82 + unlock: 83 + rcu_read_unlock(); 84 + return err; 75 85 } 76 86 77 87 static void link_vcc(struct clip_vcc *clip_vcc, struct atmarp_entry *entry) ··· 429 417 430 418 if (!vcc->push) 431 419 return -EBADFD; 420 + if (vcc->user_back) 421 + return -EINVAL; 432 422 clip_vcc = kmalloc(sizeof(struct clip_vcc), GFP_KERNEL); 433 423 if (!clip_vcc) 434 424 return -ENOMEM; ··· 621 607 { 622 608 pr_debug("\n"); 623 609 624 - rtnl_lock(); 625 - atmarpd = NULL; 610 + mutex_lock(&atmarpd_lock); 611 + RCU_INIT_POINTER(atmarpd, NULL); 612 + mutex_unlock(&atmarpd_lock); 613 + 614 + synchronize_rcu(); 626 615 skb_queue_purge(&sk_atm(vcc)->sk_receive_queue); 627 - rtnl_unlock(); 628 616 629 617 pr_debug("(done)\n"); 630 618 module_put(THIS_MODULE); 631 619 } 632 620 621 + static int atmarpd_send(struct atm_vcc *vcc, struct sk_buff *skb) 622 + { 623 + atm_return_tx(vcc, skb); 624 + dev_kfree_skb_any(skb); 625 + return 0; 626 + } 627 + 633 628 static const struct atmdev_ops atmarpd_dev_ops = { 634 - .close = atmarpd_close 629 + .close = atmarpd_close, 630 + .send = atmarpd_send 635 631 }; 636 632 637 633 ··· 655 631 656 632 static int atm_init_atmarp(struct atm_vcc *vcc) 657 633 { 658 - rtnl_lock(); 634 + if (vcc->push == clip_push) 635 + return -EINVAL; 636 + 637 + mutex_lock(&atmarpd_lock); 659 638 if (atmarpd) { 660 - rtnl_unlock(); 639 + mutex_unlock(&atmarpd_lock); 661 640 return -EADDRINUSE; 662 641 } 663 642 664 643 mod_timer(&idle_timer, jiffies + CLIP_CHECK_INTERVAL * HZ); 665 644 666 - atmarpd = vcc; 645 + rcu_assign_pointer(atmarpd, vcc); 667 646 set_bit(ATM_VF_META, &vcc->flags); 668 647 set_bit(ATM_VF_READY, &vcc->flags); 669 648 /* allow replies and avoid getting closed if signaling dies */ ··· 675 648 vcc->push = NULL; 676 649 vcc->pop = NULL; /* crash */ 677 650 vcc->push_oam = NULL; /* crash */ 678 - rtnl_unlock(); 651 + mutex_unlock(&atmarpd_lock); 679 652 return 0; 680 653 } 681 654 682 655 static int clip_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg) 683 656 { 684 657 struct atm_vcc *vcc = ATM_SD(sock); 658 + struct sock *sk = sock->sk; 685 659 int err = 0; 686 660 687 661 switch (cmd) { ··· 703 675 err = clip_create(arg); 704 676 break; 705 677 case ATMARPD_CTRL: 678 + lock_sock(sk); 706 679 err = atm_init_atmarp(vcc); 707 680 if (!err) { 708 681 sock->state = SS_CONNECTED; 709 682 __module_get(THIS_MODULE); 710 683 } 684 + release_sock(sk); 711 685 break; 712 686 case ATMARP_MKIP: 687 + lock_sock(sk); 713 688 err = clip_mkip(vcc, arg); 689 + release_sock(sk); 714 690 break; 715 691 case ATMARP_SETENTRY: 716 692 err = clip_setentry(vcc, (__force __be32)arg);
+3
net/bluetooth/hci_event.c
··· 6966 6966 bis->iso_qos.bcast.in.sdu = le16_to_cpu(ev->max_pdu); 6967 6967 6968 6968 if (!ev->status) { 6969 + bis->state = BT_CONNECTED; 6969 6970 set_bit(HCI_CONN_BIG_SYNC, &bis->flags); 6971 + hci_debugfs_create_conn(bis); 6972 + hci_conn_add_sysfs(bis); 6970 6973 hci_iso_setup_path(bis); 6971 6974 } 6972 6975 }
+2 -2
net/bluetooth/hci_sync.c
··· 1345 1345 * Command Disallowed error, so we must first disable the 1346 1346 * instance if it is active. 1347 1347 */ 1348 - if (adv && !adv->pending) { 1348 + if (adv) { 1349 1349 err = hci_disable_ext_adv_instance_sync(hdev, instance); 1350 1350 if (err) 1351 1351 return err; ··· 5493 5493 { 5494 5494 struct hci_cp_disconnect cp; 5495 5495 5496 - if (test_bit(HCI_CONN_BIG_CREATED, &conn->flags)) { 5496 + if (conn->type == BIS_LINK) { 5497 5497 /* This is a BIS connection, hci_conn_del will 5498 5498 * do the necessary cleanup. 5499 5499 */
+1 -1
net/ipv4/tcp.c
··· 1174 1174 goto do_error; 1175 1175 1176 1176 while (msg_data_left(msg)) { 1177 - ssize_t copy = 0; 1177 + int copy = 0; 1178 1178 1179 1179 skb = tcp_write_queue_tail(sk); 1180 1180 if (skb)
+3 -1
net/ipv4/tcp_input.c
··· 5042 5042 skb_condense(skb); 5043 5043 skb_set_owner_r(skb, sk); 5044 5044 } 5045 - tcp_rcvbuf_grow(sk); 5045 + /* do not grow rcvbuf for not-yet-accepted or orphaned sockets. */ 5046 + if (sk->sk_socket) 5047 + tcp_rcvbuf_grow(sk); 5046 5048 } 5047 5049 5048 5050 static int __must_check tcp_queue_rcv(struct sock *sk, struct sk_buff *skb,
+53 -36
net/netlink/af_netlink.c
··· 387 387 WARN_ON(skb->sk != NULL); 388 388 skb->sk = sk; 389 389 skb->destructor = netlink_skb_destructor; 390 - atomic_add(skb->truesize, &sk->sk_rmem_alloc); 391 390 sk_mem_charge(sk, skb->truesize); 392 391 } 393 392 ··· 1211 1212 int netlink_attachskb(struct sock *sk, struct sk_buff *skb, 1212 1213 long *timeo, struct sock *ssk) 1213 1214 { 1215 + DECLARE_WAITQUEUE(wait, current); 1214 1216 struct netlink_sock *nlk; 1217 + unsigned int rmem; 1215 1218 1216 1219 nlk = nlk_sk(sk); 1220 + rmem = atomic_add_return(skb->truesize, &sk->sk_rmem_alloc); 1217 1221 1218 - if ((atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf || 1219 - test_bit(NETLINK_S_CONGESTED, &nlk->state))) { 1220 - DECLARE_WAITQUEUE(wait, current); 1221 - if (!*timeo) { 1222 - if (!ssk || netlink_is_kernel(ssk)) 1223 - netlink_overrun(sk); 1224 - sock_put(sk); 1225 - kfree_skb(skb); 1226 - return -EAGAIN; 1227 - } 1228 - 1229 - __set_current_state(TASK_INTERRUPTIBLE); 1230 - add_wait_queue(&nlk->wait, &wait); 1231 - 1232 - if ((atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf || 1233 - test_bit(NETLINK_S_CONGESTED, &nlk->state)) && 1234 - !sock_flag(sk, SOCK_DEAD)) 1235 - *timeo = schedule_timeout(*timeo); 1236 - 1237 - __set_current_state(TASK_RUNNING); 1238 - remove_wait_queue(&nlk->wait, &wait); 1239 - sock_put(sk); 1240 - 1241 - if (signal_pending(current)) { 1242 - kfree_skb(skb); 1243 - return sock_intr_errno(*timeo); 1244 - } 1245 - return 1; 1222 + if ((rmem == skb->truesize || rmem < READ_ONCE(sk->sk_rcvbuf)) && 1223 + !test_bit(NETLINK_S_CONGESTED, &nlk->state)) { 1224 + netlink_skb_set_owner_r(skb, sk); 1225 + return 0; 1246 1226 } 1247 - netlink_skb_set_owner_r(skb, sk); 1248 - return 0; 1227 + 1228 + atomic_sub(skb->truesize, &sk->sk_rmem_alloc); 1229 + 1230 + if (!*timeo) { 1231 + if (!ssk || netlink_is_kernel(ssk)) 1232 + netlink_overrun(sk); 1233 + sock_put(sk); 1234 + kfree_skb(skb); 1235 + return -EAGAIN; 1236 + } 1237 + 1238 + __set_current_state(TASK_INTERRUPTIBLE); 1239 + add_wait_queue(&nlk->wait, &wait); 1240 + rmem = atomic_read(&sk->sk_rmem_alloc); 1241 + 1242 + if (((rmem && rmem + skb->truesize > READ_ONCE(sk->sk_rcvbuf)) || 1243 + test_bit(NETLINK_S_CONGESTED, &nlk->state)) && 1244 + !sock_flag(sk, SOCK_DEAD)) 1245 + *timeo = schedule_timeout(*timeo); 1246 + 1247 + __set_current_state(TASK_RUNNING); 1248 + remove_wait_queue(&nlk->wait, &wait); 1249 + sock_put(sk); 1250 + 1251 + if (signal_pending(current)) { 1252 + kfree_skb(skb); 1253 + return sock_intr_errno(*timeo); 1254 + } 1255 + 1256 + return 1; 1249 1257 } 1250 1258 1251 1259 static int __netlink_sendskb(struct sock *sk, struct sk_buff *skb) ··· 1313 1307 ret = -ECONNREFUSED; 1314 1308 if (nlk->netlink_rcv != NULL) { 1315 1309 ret = skb->len; 1310 + atomic_add(skb->truesize, &sk->sk_rmem_alloc); 1316 1311 netlink_skb_set_owner_r(skb, sk); 1317 1312 NETLINK_CB(skb).sk = ssk; 1318 1313 netlink_deliver_tap_kernel(sk, ssk, skb); ··· 1390 1383 static int netlink_broadcast_deliver(struct sock *sk, struct sk_buff *skb) 1391 1384 { 1392 1385 struct netlink_sock *nlk = nlk_sk(sk); 1386 + unsigned int rmem, rcvbuf; 1393 1387 1394 - if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf && 1388 + rmem = atomic_add_return(skb->truesize, &sk->sk_rmem_alloc); 1389 + rcvbuf = READ_ONCE(sk->sk_rcvbuf); 1390 + 1391 + if ((rmem != skb->truesize || rmem <= rcvbuf) && 1395 1392 !test_bit(NETLINK_S_CONGESTED, &nlk->state)) { 1396 1393 netlink_skb_set_owner_r(skb, sk); 1397 1394 __netlink_sendskb(sk, skb); 1398 - return atomic_read(&sk->sk_rmem_alloc) > (sk->sk_rcvbuf >> 1); 1395 + return rmem > (rcvbuf >> 1); 1399 1396 } 1397 + 1398 + atomic_sub(skb->truesize, &sk->sk_rmem_alloc); 1400 1399 return -1; 1401 1400 } 1402 1401 ··· 2262 2249 struct module *module; 2263 2250 int err = -ENOBUFS; 2264 2251 int alloc_min_size; 2252 + unsigned int rmem; 2265 2253 int alloc_size; 2266 2254 2267 2255 if (!lock_taken) ··· 2271 2257 err = -EINVAL; 2272 2258 goto errout_skb; 2273 2259 } 2274 - 2275 - if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf) 2276 - goto errout_skb; 2277 2260 2278 2261 /* NLMSG_GOODSIZE is small to avoid high order allocations being 2279 2262 * required, but it makes sense to _attempt_ a 32KiB allocation ··· 2293 2282 } 2294 2283 if (!skb) 2295 2284 goto errout_skb; 2285 + 2286 + rmem = atomic_add_return(skb->truesize, &sk->sk_rmem_alloc); 2287 + if (rmem >= READ_ONCE(sk->sk_rcvbuf)) { 2288 + atomic_sub(skb->truesize, &sk->sk_rmem_alloc); 2289 + goto errout_skb; 2290 + } 2296 2291 2297 2292 /* Trim skb to allocated size. User is expected to provide buffer as 2298 2293 * large as max(min_dump_alloc, 32KiB (max_recvmsg_len capped at
+9 -6
net/rxrpc/ar-internal.h
··· 361 361 struct list_head new_client_calls; /* Newly created client calls need connection */ 362 362 spinlock_t client_call_lock; /* Lock for ->new_client_calls */ 363 363 struct sockaddr_rxrpc srx; /* local address */ 364 - /* Provide a kvec table sufficiently large to manage either a DATA 365 - * packet with a maximum set of jumbo subpackets or a PING ACK padded 366 - * out to 64K with zeropages for PMTUD. 367 - */ 368 - struct kvec kvec[1 + RXRPC_MAX_NR_JUMBO > 3 + 16 ? 369 - 1 + RXRPC_MAX_NR_JUMBO : 3 + 16]; 364 + union { 365 + /* Provide a kvec table sufficiently large to manage either a 366 + * DATA packet with a maximum set of jumbo subpackets or a PING 367 + * ACK padded out to 64K with zeropages for PMTUD. 368 + */ 369 + struct kvec kvec[1 + RXRPC_MAX_NR_JUMBO > 3 + 16 ? 370 + 1 + RXRPC_MAX_NR_JUMBO : 3 + 16]; 371 + struct bio_vec bvec[3 + 16]; 372 + }; 370 373 }; 371 374 372 375 /*
+4
net/rxrpc/call_accept.c
··· 149 149 150 150 id_in_use: 151 151 write_unlock(&rx->call_lock); 152 + rxrpc_prefail_call(call, RXRPC_CALL_LOCAL_ERROR, -EBADSLT); 152 153 rxrpc_cleanup_call(call); 153 154 _leave(" = -EBADSLT"); 154 155 return -EBADSLT; ··· 254 253 unsigned short call_head, conn_head, peer_head; 255 254 unsigned short call_tail, conn_tail, peer_tail; 256 255 unsigned short call_count, conn_count; 256 + 257 + if (!b) 258 + return NULL; 257 259 258 260 /* #calls >= #conns >= #peers must hold true. */ 259 261 call_head = smp_load_acquire(&b->call_backlog_head);
+4 -1
net/rxrpc/output.c
··· 924 924 { 925 925 struct rxrpc_skb_priv *sp = rxrpc_skb(response); 926 926 struct scatterlist sg[16]; 927 - struct bio_vec bvec[16]; 927 + struct bio_vec *bvec = conn->local->bvec; 928 928 struct msghdr msg; 929 929 size_t len = sp->resp.len; 930 930 __be32 wserial; ··· 938 938 if (ret < 0) 939 939 goto fail; 940 940 nr_sg = ret; 941 + ret = -EIO; 942 + if (WARN_ON_ONCE(nr_sg > ARRAY_SIZE(conn->local->bvec))) 943 + goto fail; 941 944 942 945 for (int i = 0; i < nr_sg; i++) 943 946 bvec_set_page(&bvec[i], sg_page(&sg[i]), sg[i].length, sg[i].offset);
+16 -17
net/sched/sch_api.c
··· 336 336 return q; 337 337 } 338 338 339 - static struct Qdisc *qdisc_leaf(struct Qdisc *p, u32 classid) 339 + static struct Qdisc *qdisc_leaf(struct Qdisc *p, u32 classid, 340 + struct netlink_ext_ack *extack) 340 341 { 341 342 unsigned long cl; 342 343 const struct Qdisc_class_ops *cops = p->ops->cl_ops; 343 344 344 - if (cops == NULL) 345 - return NULL; 345 + if (cops == NULL) { 346 + NL_SET_ERR_MSG(extack, "Parent qdisc is not classful"); 347 + return ERR_PTR(-EOPNOTSUPP); 348 + } 346 349 cl = cops->find(p, classid); 347 350 348 - if (cl == 0) 349 - return NULL; 351 + if (cl == 0) { 352 + NL_SET_ERR_MSG(extack, "Specified class not found"); 353 + return ERR_PTR(-ENOENT); 354 + } 350 355 return cops->leaf(p, cl); 351 356 } 352 357 ··· 600 595 pkt_len = 1; 601 596 qdisc_skb_cb(skb)->pkt_len = pkt_len; 602 597 } 603 - 604 - void qdisc_warn_nonwc(const char *txt, struct Qdisc *qdisc) 605 - { 606 - if (!(qdisc->flags & TCQ_F_WARN_NONWC)) { 607 - pr_warn("%s: %s qdisc %X: is non-work-conserving?\n", 608 - txt, qdisc->ops->id, qdisc->handle >> 16); 609 - qdisc->flags |= TCQ_F_WARN_NONWC; 610 - } 611 - } 612 - EXPORT_SYMBOL(qdisc_warn_nonwc); 613 598 614 599 static enum hrtimer_restart qdisc_watchdog(struct hrtimer *timer) 615 600 { ··· 1485 1490 NL_SET_ERR_MSG(extack, "Failed to find qdisc with specified classid"); 1486 1491 return -ENOENT; 1487 1492 } 1488 - q = qdisc_leaf(p, clid); 1493 + q = qdisc_leaf(p, clid, extack); 1489 1494 } else if (dev_ingress_queue(dev)) { 1490 1495 q = rtnl_dereference(dev_ingress_queue(dev)->qdisc_sleeping); 1491 1496 } ··· 1496 1501 NL_SET_ERR_MSG(extack, "Cannot find specified qdisc on specified device"); 1497 1502 return -ENOENT; 1498 1503 } 1504 + if (IS_ERR(q)) 1505 + return PTR_ERR(q); 1499 1506 1500 1507 if (tcm->tcm_handle && q->handle != tcm->tcm_handle) { 1501 1508 NL_SET_ERR_MSG(extack, "Invalid handle"); ··· 1599 1602 NL_SET_ERR_MSG(extack, "Failed to find specified qdisc"); 1600 1603 return -ENOENT; 1601 1604 } 1602 - q = qdisc_leaf(p, clid); 1605 + q = qdisc_leaf(p, clid, extack);
1606 + if (IS_ERR(q)) 1607 + return PTR_ERR(q); 1603 1608 } else if (dev_ingress_queue_create(dev)) { 1604 1609 q = rtnl_dereference(dev_ingress_queue(dev)->qdisc_sleeping); 1605 1610 }
-16
net/sched/sch_hfsc.c
··· 835 835 } 836 836 } 837 837 838 - static unsigned int 839 - qdisc_peek_len(struct Qdisc *sch) 840 - { 841 - struct sk_buff *skb; 842 - unsigned int len; 843 - 844 - skb = sch->ops->peek(sch); 845 - if (unlikely(skb == NULL)) { 846 - qdisc_warn_nonwc("qdisc_peek_len", sch); 847 - return 0; 848 - } 849 - len = qdisc_pkt_len(skb); 850 - 851 - return len; 852 - } 853 - 854 838 static void 855 839 hfsc_adjust_levels(struct hfsc_class *cl) 856 840 {
+1 -1
net/sched/sch_qfq.c
··· 989 989 990 990 if (cl->qdisc->q.qlen == 0) /* no more packets, remove from list */ 991 991 list_del_init(&cl->alist); 992 - else if (cl->deficit < qdisc_pkt_len(cl->qdisc->ops->peek(cl->qdisc))) { 992 + else if (cl->deficit < qdisc_peek_len(cl->qdisc)) { 993 993 cl->deficit += agg->lmax; 994 994 list_move_tail(&cl->alist, &agg->active); 995 995 }
+2
net/tipc/topsrv.c
··· 704 704 for (id = 0; srv->idr_in_use; id++) { 705 705 con = idr_find(&srv->conn_idr, id); 706 706 if (con) { 707 + conn_get(con); 707 708 spin_unlock_bh(&srv->idr_lock); 708 709 tipc_conn_close(con); 710 + conn_put(con); 709 711 spin_lock_bh(&srv->idr_lock); 710 712 } 711 713 }
+46 -11
net/vmw_vsock/af_vsock.c
··· 407 407 408 408 static bool vsock_use_local_transport(unsigned int remote_cid) 409 409 { 410 + lockdep_assert_held(&vsock_register_mutex); 411 + 410 412 if (!transport_local) 411 413 return false; 412 414 ··· 466 464 467 465 remote_flags = vsk->remote_addr.svm_flags; 468 466 467 + mutex_lock(&vsock_register_mutex); 468 + 469 469 switch (sk->sk_type) { 470 470 case SOCK_DGRAM: 471 471 new_transport = transport_dgram; ··· 483 479 new_transport = transport_h2g; 484 480 break; 485 481 default: 486 - return -ESOCKTNOSUPPORT; 482 + ret = -ESOCKTNOSUPPORT; 483 + goto err; 487 484 } 488 485 489 486 if (vsk->transport) { 490 - if (vsk->transport == new_transport) 491 - return 0; 487 + if (vsk->transport == new_transport) { 488 + ret = 0; 489 + goto err; 490 + } 492 491 493 492 /* transport->release() must be called with sock lock acquired. 494 493 * This path can only be taken during vsock_connect(), where we ··· 515 508 /* We increase the module refcnt to prevent the transport unloading 516 509 * while there are open sockets assigned to it. 517 510 */ 518 - if (!new_transport || !try_module_get(new_transport->module)) 519 - return -ENODEV; 511 + if (!new_transport || !try_module_get(new_transport->module)) { 512 + ret = -ENODEV; 513 + goto err; 514 + } 515 + 516 + /* It's safe to release the mutex after a successful try_module_get(). 517 + * Whichever transport `new_transport` points at, it won't go away until 518 + * the last module_put() below or in vsock_deassign_transport(). 519 + */ 520 + mutex_unlock(&vsock_register_mutex); 520 521 521 522 if (sk->sk_type == SOCK_SEQPACKET) { 522 523 if (!new_transport->seqpacket_allow || ··· 543 528 vsk->transport = new_transport; 544 529 545 530 return 0; 531 + err: 532 + mutex_unlock(&vsock_register_mutex); 533 + return ret; 546 534 } 547 535 EXPORT_SYMBOL_GPL(vsock_assign_transport); 548 536 537 + /* 538 + * Provide safe access to static transport_{h2g,g2h,dgram,local} callbacks. 539 + * Otherwise we may race with module removal. Do not use on `vsk->transport`. 540 + */
541 + static u32 vsock_registered_transport_cid(const struct vsock_transport **transport) 542 + { 543 + u32 cid = VMADDR_CID_ANY; 544 + 545 + mutex_lock(&vsock_register_mutex); 546 + if (*transport) 547 + cid = (*transport)->get_local_cid(); 548 + mutex_unlock(&vsock_register_mutex); 549 + 550 + return cid; 551 + } 552 + 549 553 bool vsock_find_cid(unsigned int cid) 550 554 { 551 - if (transport_g2h && cid == transport_g2h->get_local_cid()) 555 + if (cid == vsock_registered_transport_cid(&transport_g2h)) 552 556 return true; 553 557 554 558 if (transport_h2g && cid == VMADDR_CID_HOST) ··· 2592 2558 unsigned int cmd, void __user *ptr) 2593 2559 { 2594 2560 u32 __user *p = ptr; 2595 - u32 cid = VMADDR_CID_ANY; 2596 2561 int retval = 0; 2562 + u32 cid; 2597 2563 2598 2564 switch (cmd) { 2599 2565 case IOCTL_VM_SOCKETS_GET_LOCAL_CID: 2600 2566 /* To be compatible with the VMCI behavior, we prioritize the 2601 2567 * guest CID instead of well-know host CID (VMADDR_CID_HOST). 2602 2568 */ 2603 - if (transport_g2h) 2604 - cid = transport_g2h->get_local_cid(); 2605 - else if (transport_h2g) 2606 - cid = transport_h2g->get_local_cid(); 2569 + cid = vsock_registered_transport_cid(&transport_g2h); 2570 + if (cid == VMADDR_CID_ANY) 2571 + cid = vsock_registered_transport_cid(&transport_h2g); 2572 + if (cid == VMADDR_CID_ANY) 2573 + cid = vsock_registered_transport_cid(&transport_local); 2607 2574 2608 2575 if (put_user(cid, p) != 0) 2609 2576 retval = -EFAULT;
+1 -1
sound/isa/ad1816a/ad1816a.c
··· 98 98 pdev = pnp_request_card_device(card, id->devs[1].id, NULL); 99 99 if (pdev == NULL) { 100 100 mpu_port[dev] = -1; 101 - dev_warn(&pdev->dev, "MPU401 device busy, skipping.\n"); 101 + pr_warn("MPU401 device busy, skipping.\n"); 102 102 return 0; 103 103 } 104 104
+19
sound/pci/hda/patch_hdmi.c
··· 4551 4551 HDA_CODEC_ENTRY(0x10de002f, "Tegra194 HDMI/DP2", patch_tegra_hdmi), 4552 4552 HDA_CODEC_ENTRY(0x10de0030, "Tegra194 HDMI/DP3", patch_tegra_hdmi), 4553 4553 HDA_CODEC_ENTRY(0x10de0031, "Tegra234 HDMI/DP", patch_tegra234_hdmi), 4554 + HDA_CODEC_ENTRY(0x10de0033, "SoC 33 HDMI/DP", patch_tegra234_hdmi), 4554 4555 HDA_CODEC_ENTRY(0x10de0034, "Tegra264 HDMI/DP", patch_tegra234_hdmi), 4556 + HDA_CODEC_ENTRY(0x10de0035, "SoC 35 HDMI/DP", patch_tegra234_hdmi), 4555 4557 HDA_CODEC_ENTRY(0x10de0040, "GPU 40 HDMI/DP", patch_nvhdmi), 4556 4558 HDA_CODEC_ENTRY(0x10de0041, "GPU 41 HDMI/DP", patch_nvhdmi), 4557 4559 HDA_CODEC_ENTRY(0x10de0042, "GPU 42 HDMI/DP", patch_nvhdmi), ··· 4592 4590 HDA_CODEC_ENTRY(0x10de0098, "GPU 98 HDMI/DP", patch_nvhdmi), 4593 4591 HDA_CODEC_ENTRY(0x10de0099, "GPU 99 HDMI/DP", patch_nvhdmi), 4594 4592 HDA_CODEC_ENTRY(0x10de009a, "GPU 9a HDMI/DP", patch_nvhdmi), 4593 + HDA_CODEC_ENTRY(0x10de009b, "GPU 9b HDMI/DP", patch_nvhdmi), 4594 + HDA_CODEC_ENTRY(0x10de009c, "GPU 9c HDMI/DP", patch_nvhdmi), 4595 4595 HDA_CODEC_ENTRY(0x10de009d, "GPU 9d HDMI/DP", patch_nvhdmi), 4596 4596 HDA_CODEC_ENTRY(0x10de009e, "GPU 9e HDMI/DP", patch_nvhdmi), 4597 4597 HDA_CODEC_ENTRY(0x10de009f, "GPU 9f HDMI/DP", patch_nvhdmi), 4598 4598 HDA_CODEC_ENTRY(0x10de00a0, "GPU a0 HDMI/DP", patch_nvhdmi), 4599 + HDA_CODEC_ENTRY(0x10de00a1, "GPU a1 HDMI/DP", patch_nvhdmi), 4599 4600 HDA_CODEC_ENTRY(0x10de00a3, "GPU a3 HDMI/DP", patch_nvhdmi), 4600 4601 HDA_CODEC_ENTRY(0x10de00a4, "GPU a4 HDMI/DP", patch_nvhdmi), 4601 4602 HDA_CODEC_ENTRY(0x10de00a5, "GPU a5 HDMI/DP", patch_nvhdmi), 4602 4603 HDA_CODEC_ENTRY(0x10de00a6, "GPU a6 HDMI/DP", patch_nvhdmi), 4603 4604 HDA_CODEC_ENTRY(0x10de00a7, "GPU a7 HDMI/DP", patch_nvhdmi), 4605 + HDA_CODEC_ENTRY(0x10de00a8, "GPU a8 HDMI/DP", patch_nvhdmi), 4606 + HDA_CODEC_ENTRY(0x10de00a9, "GPU a9 HDMI/DP", patch_nvhdmi), 4607 + HDA_CODEC_ENTRY(0x10de00aa, "GPU aa HDMI/DP", patch_nvhdmi), 4608 + HDA_CODEC_ENTRY(0x10de00ab, "GPU ab HDMI/DP", patch_nvhdmi),
4609 + HDA_CODEC_ENTRY(0x10de00ad, "GPU ad HDMI/DP", patch_nvhdmi), 4610 + HDA_CODEC_ENTRY(0x10de00ae, "GPU ae HDMI/DP", patch_nvhdmi), 4611 + HDA_CODEC_ENTRY(0x10de00af, "GPU af HDMI/DP", patch_nvhdmi), 4612 + HDA_CODEC_ENTRY(0x10de00b0, "GPU b0 HDMI/DP", patch_nvhdmi), 4613 + HDA_CODEC_ENTRY(0x10de00b1, "GPU b1 HDMI/DP", patch_nvhdmi), 4614 + HDA_CODEC_ENTRY(0x10de00c0, "GPU c0 HDMI/DP", patch_nvhdmi), 4615 + HDA_CODEC_ENTRY(0x10de00c1, "GPU c1 HDMI/DP", patch_nvhdmi), 4616 + HDA_CODEC_ENTRY(0x10de00c3, "GPU c3 HDMI/DP", patch_nvhdmi), 4617 + HDA_CODEC_ENTRY(0x10de00c4, "GPU c4 HDMI/DP", patch_nvhdmi), 4618 + HDA_CODEC_ENTRY(0x10de00c5, "GPU c5 HDMI/DP", patch_nvhdmi), 4604 4619 HDA_CODEC_ENTRY(0x10de8001, "MCP73 HDMI", patch_nvhdmi_2ch), 4605 4620 HDA_CODEC_ENTRY(0x10de8067, "MCP67/68 HDMI", patch_nvhdmi_2ch), 4606 4621 HDA_CODEC_ENTRY(0x67663d82, "Arise 82 HDMI/DP", patch_gf_hdmi),
+3
sound/pci/hda/patch_realtek.c
··· 10881 10881 SND_PCI_QUIRK(0x103c, 0x8ce0, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED), 10882 10882 SND_PCI_QUIRK(0x103c, 0x8cf5, "HP ZBook Studio 16", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED), 10883 10883 SND_PCI_QUIRK(0x103c, 0x8d01, "HP ZBook Power 14 G12", ALC285_FIXUP_HP_GPIO_LED), 10884 + SND_PCI_QUIRK(0x103c, 0x8d07, "HP Victus 15-fb2xxx (MB 8D07)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT), 10884 10885 SND_PCI_QUIRK(0x103c, 0x8d18, "HP EliteStudio 8 AIO", ALC274_FIXUP_HP_AIO_BIND_DACS), 10885 10886 SND_PCI_QUIRK(0x103c, 0x8d84, "HP EliteBook X G1i", ALC285_FIXUP_HP_GPIO_LED), 10886 10887 SND_PCI_QUIRK(0x103c, 0x8d85, "HP EliteBook 14 G12", ALC285_FIXUP_HP_GPIO_LED), ··· 11041 11040 SND_PCI_QUIRK(0x1043, 0x1e63, "ASUS H7606W", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1), 11042 11041 SND_PCI_QUIRK(0x1043, 0x1e83, "ASUS GA605W", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1), 11043 11042 SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401), 11043 + SND_PCI_QUIRK(0x1043, 0x1e93, "ASUS ExpertBook B9403CVAR", ALC294_FIXUP_ASUS_HPE), 11044 11044 SND_PCI_QUIRK(0x1043, 0x1eb3, "ASUS Ally RCLA72", ALC287_FIXUP_TAS2781_I2C), 11045 11045 SND_PCI_QUIRK(0x1043, 0x1ed3, "ASUS HN7306W", ALC287_FIXUP_CS35L41_I2C_2), 11046 11046 SND_PCI_QUIRK(0x1043, 0x1ee2, "ASUS UM6702RA/RC", ALC287_FIXUP_CS35L41_I2C_2), ··· 11426 11424 SND_PCI_QUIRK(0x2782, 0x0228, "Infinix ZERO BOOK 13", ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13), 11427 11425 SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO), 11428 11426 SND_PCI_QUIRK(0x2782, 0x1407, "Positivo P15X", ALC269_FIXUP_POSITIVO_P15X_HEADSET_MIC), 11427 + SND_PCI_QUIRK(0x2782, 0x1409, "Positivo K116J", ALC269_FIXUP_POSITIVO_P15X_HEADSET_MIC), 11429 11428 SND_PCI_QUIRK(0x2782, 0x1701, "Infinix Y4 Max", ALC269VC_FIXUP_INFINIX_Y4_MAX), 11430 11429 SND_PCI_QUIRK(0x2782, 0x1705, "MEDION E15433", ALC269VC_FIXUP_INFINIX_Y4_MAX), 11431 11430 SND_PCI_QUIRK(0x2782, 0x1707, "Vaio VJFE-ADL", ALC298_FIXUP_SPK_VOLUME),
+5 -3
sound/pci/hda/tas2781_hda.c
··· 44 44 TASDEVICE_REG(0, 0x13, 0x70), 45 45 TASDEVICE_REG(0, 0x18, 0x7c), 46 46 }; 47 - unsigned int crc, oft; 47 + unsigned int crc, oft, node_num; 48 48 unsigned char *buf; 49 49 int i, j, k, l; 50 50 ··· 80 80 dev_err(p->dev, "%s: CRC error\n", __func__); 81 81 return; 82 82 } 83 + node_num = tmp_val[1]; 83 84 84 - for (j = 0, k = 0; j < tmp_val[1]; j++) { 85 + for (j = 0, k = 0; j < node_num; j++) { 85 86 oft = j * 6 + 3; 86 87 if (tmp_val[oft] == TASDEV_UEFI_CALI_REG_ADDR_FLG) { 87 88 for (i = 0; i < TASDEV_CALIB_N; i++) { ··· 100 99 } 101 100 102 101 data[l] = k; 102 + oft++; 103 103 for (i = 0; i < TASDEV_CALIB_N * 4; i++) 104 - data[l + i] = data[4 * oft + i]; 104 + data[l + i + 1] = data[4 * oft + i]; 105 105 k++; 106 106 } 107 107 }
+1 -1
sound/soc/codecs/cs35l56-shared.c
··· 980 980 break; 981 981 default: 982 982 dev_err(cs35l56_base->dev, "Unknown device %x\n", devid); 983 - return ret; 983 + return -ENODEV; 984 984 } 985 985 986 986 cs35l56_base->type = devid & 0xFF;
+2 -1
sound/soc/fsl/fsl_asrc.c
··· 517 517 regmap_update_bits(asrc->regmap, REG_ASRCTR, 518 518 ASRCTR_ATSi_MASK(index), ASRCTR_ATS(index)); 519 519 regmap_update_bits(asrc->regmap, REG_ASRCTR, 520 - ASRCTR_USRi_MASK(index), 0); 520 + ASRCTR_IDRi_MASK(index) | ASRCTR_USRi_MASK(index), 521 + ASRCTR_USR(index)); 521 522 522 523 /* Set the input and output clock sources */ 523 524 regmap_update_bits(asrc->regmap, REG_ASRCSR,
+8 -6
sound/soc/fsl/fsl_sai.c
··· 803 803 * anymore. Add software reset to fix this issue. 804 804 * This is a hardware bug, and will be fix in the 805 805 * next sai version. 806 + * 807 + * In consumer mode, this can happen even after a 808 + * single open/close, especially if both tx and rx 809 + * are running concurrently. 806 810 */ 807 - if (!sai->is_consumer_mode[tx]) { 808 - /* Software Reset */ 809 - regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), FSL_SAI_CSR_SR); 810 - /* Clear SR bit to finish the reset */ 811 - regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), 0); 812 - } 811 + /* Software Reset */ 812 + regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), FSL_SAI_CSR_SR); 813 + /* Clear SR bit to finish the reset */ 814 + regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), 0); 813 815 } 814 816 815 817 static int fsl_sai_trigger(struct snd_pcm_substream *substream, int cmd,
+1
sound/soc/intel/boards/Kconfig
··· 42 42 tristate 43 43 44 44 config SND_SOC_INTEL_SOF_BOARD_HELPERS 45 + select SND_SOC_ACPI_INTEL_MATCH 45 46 tristate 46 47 47 48 if SND_SOC_INTEL_CATPT
+3
sound/soc/intel/boards/sof_sdw.c
··· 783 783 static const struct snd_pci_quirk sof_sdw_ssid_quirk_table[] = { 784 784 SND_PCI_QUIRK(0x1043, 0x1e13, "ASUS Zenbook S14", SOC_SDW_CODEC_MIC), 785 785 SND_PCI_QUIRK(0x1043, 0x1f43, "ASUS Zenbook S16", SOC_SDW_CODEC_MIC), 786 + SND_PCI_QUIRK(0x17aa, 0x2347, "Lenovo P16", SOC_SDW_CODEC_MIC), 787 + SND_PCI_QUIRK(0x17aa, 0x2348, "Lenovo P16", SOC_SDW_CODEC_MIC), 788 + SND_PCI_QUIRK(0x17aa, 0x2349, "Lenovo P1", SOC_SDW_CODEC_MIC), 786 789 {} 787 790 }; 788 791
+7 -7
sound/soc/intel/common/soc-acpi-intel-arl-match.c
··· 468 468 .get_function_tplg_files = sof_sdw_get_tplg_files, 469 469 }, 470 470 { 471 - .link_mask = BIT(2), 472 - .links = arl_cs42l43_l2, 473 - .drv_name = "sof_sdw", 474 - .sof_tplg_filename = "sof-arl-cs42l43-l2.tplg", 475 - .get_function_tplg_files = sof_sdw_get_tplg_files, 476 - }, 477 - { 478 471 .link_mask = BIT(2) | BIT(3), 479 472 .links = arl_cs42l43_l2_cs35l56_l3, 480 473 .drv_name = "sof_sdw", 481 474 .sof_tplg_filename = "sof-arl-cs42l43-l2-cs35l56-l3.tplg", 475 + .get_function_tplg_files = sof_sdw_get_tplg_files, 476 + }, 477 + { 478 + .link_mask = BIT(2), 479 + .links = arl_cs42l43_l2, 480 + .drv_name = "sof_sdw", 481 + .sof_tplg_filename = "sof-arl-cs42l43-l2.tplg", 482 482 .get_function_tplg_files = sof_sdw_get_tplg_files, 483 483 }, 484 484 {
+10 -12
sound/usb/format.c
··· 310 310 struct audioformat *fp, 311 311 unsigned int rate) 312 312 { 313 - struct usb_interface *iface; 314 313 struct usb_host_interface *alts; 315 314 unsigned char *fmt; 316 315 unsigned int max_rate; 317 316 318 - iface = usb_ifnum_to_if(chip->dev, fp->iface); 319 - if (!iface) 317 + alts = snd_usb_get_host_interface(chip, fp->iface, fp->altsetting); 318 + if (!alts) 320 319 return true; 321 320 322 - alts = &iface->altsetting[fp->altset_idx]; 323 321 fmt = snd_usb_find_csint_desc(alts->extra, alts->extralen, 324 322 NULL, UAC_FORMAT_TYPE); 325 323 if (!fmt) ··· 326 328 if (fmt[0] == 10) { /* bLength */ 327 329 max_rate = combine_quad(&fmt[6]); 328 330 329 - /* Validate max rate */ 330 - if (max_rate != 48000 && 331 - max_rate != 96000 && 332 - max_rate != 192000 && 333 - max_rate != 384000) { 334 - 331 + switch (max_rate) { 332 + case 48000: 333 + return (rate == 44100 || rate == 48000); 334 + case 96000: 335 + return (rate == 88200 || rate == 96000); 336 + case 192000: 337 + return (rate == 176400 || rate == 192000); 338 + default: 335 339 usb_audio_info(chip, 336 340 "%u:%d : unexpected max rate: %u\n", 337 341 fp->iface, fp->altsetting, max_rate); 338 342 339 343 return true; 340 344 } 341 - 342 - return rate <= max_rate; 343 345 } 344 346 345 347 return true;
+2 -2
tools/include/uapi/linux/bits.h
··· 4 4 #ifndef _UAPI_LINUX_BITS_H 5 5 #define _UAPI_LINUX_BITS_H 6 6 7 - #define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (BITS_PER_LONG - 1 - (h)))) 7 + #define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (__BITS_PER_LONG - 1 - (h)))) 8 8 9 - #define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (BITS_PER_LONG_LONG - 1 - (h)))) 9 + #define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (__BITS_PER_LONG_LONG - 1 - (h)))) 10 10 11 11 #define __GENMASK_U128(h, l) \ 12 12 ((_BIT128((h)) << 1) - (_BIT128(l)))
+1
tools/objtool/check.c
··· 2318 2318 2319 2319 for_each_reloc(sec->rsec, reloc) { 2320 2320 type = *(u32 *)(sec->data->d_buf + (reloc_idx(reloc) * sec->sh.sh_entsize) + 4); 2321 + type = bswap_if_needed(file->elf, type); 2321 2322 2322 2323 offset = reloc->sym->offset + reloc_addend(reloc); 2323 2324 insn = find_insn(file, reloc->sym->sec, offset);
+5
tools/testing/selftests/coredump/stackdump_test.c
··· 461 461 _exit(EXIT_FAILURE); 462 462 } 463 463 464 + ret = read(fd_coredump, &c, 1); 465 + 464 466 close(fd_coredump); 465 467 close(fd_server); 466 468 close(fd_peer_pidfd); 467 469 close(fd_core_file); 470 + 471 + if (ret < 1) 472 + _exit(EXIT_FAILURE); 468 473 _exit(EXIT_SUCCESS); 469 474 } 470 475 self->pid_coredump_server = pid_coredump_server;
+1
tools/testing/selftests/futex/functional/.gitignore
··· 11 11 futex_wait_uninitialized_heap 12 12 futex_wait_wouldblock 13 13 futex_waitv 14 + futex_numa
+1
tools/testing/selftests/kvm/x86/monitor_mwait_test.c
··· 74 74 int testcase; 75 75 char test[80]; 76 76 77 + TEST_REQUIRE(this_cpu_has(X86_FEATURE_MWAIT)); 77 78 TEST_REQUIRE(kvm_has_cap(KVM_CAP_DISABLE_QUIRKS2)); 78 79 79 80 ksft_print_header();
+53
tools/testing/selftests/net/packetdrill/tcp_ooo-before-and-after-accept.pkt
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + --mss=1000 4 + 5 + `./defaults.sh 6 + sysctl -q net.ipv4.tcp_rmem="4096 131072 $((32*1024*1024))"` 7 + 8 + // Test that a not-yet-accepted socket does not change 9 + // its initial sk_rcvbuf (tcp_rmem[1]) when receiving ooo packets. 10 + 11 + +0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3 12 + +0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0 13 + +0 bind(3, ..., ...) = 0 14 + +0 listen(3, 1) = 0 15 + 16 + +0 < S 0:0(0) win 65535 <mss 1000,nop,nop,sackOK,nop,wscale 7> 17 + +0 > S. 0:0(0) ack 1 <mss 1460,nop,nop,sackOK,nop,wscale 10> 18 + +.1 < . 1:1(0) ack 1 win 257 19 + +0 < . 2001:41001(39000) ack 1 win 257 20 + +0 > . 1:1(0) ack 1 <nop,nop,sack 2001:41001> 21 + +0 < . 41001:101001(60000) ack 1 win 257 22 + +0 > . 1:1(0) ack 1 <nop,nop,sack 2001:101001> 23 + +0 < . 1:1001(1000) ack 1 win 257 24 + +0 > . 1:1(0) ack 1001 <nop,nop,sack 2001:101001> 25 + +0 < . 1001:2001(1000) ack 1 win 257 26 + +0 > . 1:1(0) ack 101001 27 + 28 + +0 accept(3, ..., ...) = 4 29 + 30 + +0 %{ assert SK_MEMINFO_RCVBUF == 131072, SK_MEMINFO_RCVBUF }% 31 + 32 + +0 close(4) = 0 33 + +0 close(3) = 0 34 + 35 + // Test that ooo packets for accepted sockets do increase sk_rcvbuf 36 + +0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3 37 + +0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0 38 + +0 bind(3, ..., ...) = 0 39 + +0 listen(3, 1) = 0 40 + 41 + +0 < S 0:0(0) win 65535 <mss 1000,nop,nop,sackOK,nop,wscale 7> 42 + +0 > S. 0:0(0) ack 1 <mss 1460,nop,nop,sackOK,nop,wscale 10> 43 + +.1 < . 1:1(0) ack 1 win 257 44 + 45 + +0 accept(3, ..., ...) = 4 46 + 47 + +0 < . 2001:41001(39000) ack 1 win 257 48 + +0 > . 1:1(0) ack 1 <nop,nop,sack 2001:41001> 49 + +0 < . 41001:101001(60000) ack 1 win 257 50 + +0 > . 1:1(0) ack 1 <nop,nop,sack 2001:101001> 51 + 52 + +0 %{ assert SK_MEMINFO_RCVBUF > 131072, SK_MEMINFO_RCVBUF }% 53 +
+37
tools/testing/selftests/tc-testing/tc-tests/infra/qdiscs.json
··· 635 635 "$TC qdisc del dev $DUMMY handle 1:0 root", 636 636 "$IP addr del 10.10.10.10/24 dev $DUMMY || true" 637 637 ] 638 + }, 639 + { 640 + "id": "d74b", 641 + "name": "Test use-after-free with DRR/NETEM/BLACKHOLE chain", 642 + "category": [ 643 + "qdisc", 644 + "hfsc", 645 + "drr", 646 + "netem", 647 + "blackhole" 648 + ], 649 + "plugins": { 650 + "requires": [ 651 + "nsPlugin", 652 + "scapyPlugin" 653 + ] 654 + }, 655 + "setup": [ 656 + "$IP link set dev $DUMMY up || true", 657 + "$IP addr add 10.10.11.10/24 dev $DUMMY || true", 658 + "$TC qdisc add dev $DUMMY root handle 1: drr", 659 + "$TC filter add dev $DUMMY parent 1: basic classid 1:1", 660 + "$TC class add dev $DUMMY parent 1: classid 1:1 drr", 661 + "$TC qdisc add dev $DUMMY parent 1:1 handle 2: hfsc def 1", 662 + "$TC class add dev $DUMMY parent 2: classid 2:1 hfsc rt m1 8 d 1 m2 0", 663 + "$TC qdisc add dev $DUMMY parent 2:1 handle 3: netem", 664 + "$TC qdisc add dev $DUMMY parent 3:1 handle 4: blackhole", 665 + "ping -c1 -W0.01 -I $DUMMY 10.10.11.11 || true", 666 + "$TC class del dev $DUMMY classid 1:1" 667 + ], 668 + "cmdUnderTest": "ping -c1 -W0.01 -I $DUMMY 10.10.11.11", 669 + "expExitCode": "1", 670 + "verifyCmd": "$TC -j class ls dev $DUMMY classid 1:1", 671 + "matchJSON": [], 672 + "teardown": [ 673 + "$TC qdisc del dev $DUMMY root handle 1: drr" 674 + ] 638 675 } 639 676 ]
+3
virt/kvm/kvm_main.c
··· 2572 2572 r = xa_reserve(&kvm->mem_attr_array, i, GFP_KERNEL_ACCOUNT); 2573 2573 if (r) 2574 2574 goto out_unlock; 2575 + 2576 + cond_resched(); 2575 2577 } 2576 2578 2577 2579 kvm_handle_gfn_range(kvm, &pre_set_range); ··· 2582 2580 r = xa_err(xa_store(&kvm->mem_attr_array, i, entry, 2583 2581 GFP_KERNEL_ACCOUNT)); 2584 2582 KVM_BUG_ON(r, kvm); 2583 + cond_resched(); 2585 2584 } 2586 2585 2587 2586 kvm_handle_gfn_range(kvm, &post_set_range);