Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 6.11-rc4 into tty-next

We need the tty/serial fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+5019 -2846
+2 -1
Documentation/ABI/testing/sysfs-devices-system-cpu
··· 562 562 ================ ========================================= 563 563 564 564 If control status is "forceoff" or "notsupported" writes 565 - are rejected. 565 + are rejected. Note that enabling SMT on PowerPC skips 566 + offline cores. 566 567 567 568 What: /sys/devices/system/cpu/cpuX/power/energy_perf_bias 568 569 Date: March 2019
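For context, the control state referenced by this ABI entry can be read (and conditionally written) from userspace; a minimal read-only sketch, assuming CONFIG_HOTPLUG_SMT so that /sys/devices/system/cpu/smt/control exists:

    #include <stdio.h>

    int main(void)
    {
        /* Typically reads "on", "off", "forceoff" or "notsupported"; per the
         * ABI entry above, writes are rejected in the last two states. */
        char state[32] = "";
        FILE *f = fopen("/sys/devices/system/cpu/smt/control", "r");

        if (!f) {
            perror("smt/control");
            return 1;
        }
        if (fgets(state, sizeof(state), f))
            printf("SMT control: %s", state);
        fclose(f);
        return 0;
    }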
+8 -7
Documentation/admin-guide/device-mapper/dm-crypt.rst
··· 162 162 163 163 164 164 Module parameters:: 165 - max_read_size 166 - max_write_size 167 - Maximum size of read or write requests. When a request larger than this size 168 - is received, dm-crypt will split the request. The splitting improves 169 - concurrency (the split requests could be encrypted in parallel by multiple 170 - cores), but it also causes overhead. The user should tune these parameters to 171 - fit the actual workload. 165 + 166 + max_read_size 167 + max_write_size 168 + Maximum size of read or write requests. When a request larger than this size 169 + is received, dm-crypt will split the request. The splitting improves 170 + concurrency (the split requests could be encrypted in parallel by multiple 171 + cores), but it also causes overhead. The user should tune these parameters to 172 + fit the actual workload. 172 173 173 174 174 175 Example scripts
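A minimal sketch of tuning these limits at runtime, assuming the module is loaded as dm_crypt and the parameters are exposed writable under /sys/module/dm_crypt/parameters/ (otherwise they can be passed on the modprobe command line); the 128 KiB value is purely illustrative:

    #include <stdio.h>

    /* Best-effort write of one dm-crypt module parameter. */
    static int set_dm_crypt_param(const char *name, const char *value)
    {
        char path[128];
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/module/dm_crypt/parameters/%s", name);
        f = fopen(path, "w");
        if (!f) {
            perror(path);
            return -1;
        }
        fputs(value, f);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        /* Split requests larger than 128 KiB (value assumed to be in bytes). */
        set_dm_crypt_param("max_read_size", "131072");
        set_dm_crypt_param("max_write_size", "131072");
        return 0;
    }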
+22 -14
Documentation/arch/riscv/hwprobe.rst
··· 239 239 ratified in commit 98918c844281 ("Merge pull request #1217 from 240 240 riscv/zawrs") of riscv-isa-manual. 241 241 242 - * :c:macro:`RISCV_HWPROBE_KEY_CPUPERF_0`: A bitmask that contains performance 243 - information about the selected set of processors. 242 + * :c:macro:`RISCV_HWPROBE_KEY_CPUPERF_0`: Deprecated. Returns similar values to 243 + :c:macro:`RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF`, but the key was 244 + mistakenly classified as a bitmask rather than a value. 244 245 245 - * :c:macro:`RISCV_HWPROBE_MISALIGNED_UNKNOWN`: The performance of misaligned 246 - accesses is unknown. 246 + * :c:macro:`RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF`: An enum value describing 247 + the performance of misaligned scalar native word accesses on the selected set 248 + of processors. 247 249 248 - * :c:macro:`RISCV_HWPROBE_MISALIGNED_EMULATED`: Misaligned accesses are 249 - emulated via software, either in or below the kernel. These accesses are 250 - always extremely slow. 250 + * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN`: The performance of 251 + misaligned scalar accesses is unknown. 251 252 252 - * :c:macro:`RISCV_HWPROBE_MISALIGNED_SLOW`: Misaligned accesses are slower 253 - than equivalent byte accesses. Misaligned accesses may be supported 254 - directly in hardware, or trapped and emulated by software. 253 + * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED`: Misaligned scalar 254 + accesses are emulated via software, either in or below the kernel. These 255 + accesses are always extremely slow. 255 256 256 - * :c:macro:`RISCV_HWPROBE_MISALIGNED_FAST`: Misaligned accesses are faster 257 - than equivalent byte accesses. 257 + * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_SLOW`: Misaligned scalar native 258 + word sized accesses are slower than the equivalent quantity of byte 259 + accesses. Misaligned accesses may be supported directly in hardware, or 260 + trapped and emulated by software. 258 261 259 - * :c:macro:`RISCV_HWPROBE_MISALIGNED_UNSUPPORTED`: Misaligned accesses are 260 - not supported at all and will generate a misaligned address fault. 262 + * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_FAST`: Misaligned scalar native 263 + word sized accesses are faster than the equivalent quantity of byte 264 + accesses. 265 + 266 + * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_UNSUPPORTED`: Misaligned scalar 267 + accesses are not supported at all and will generate a misaligned address 268 + fault. 261 269 262 270 * :c:macro:`RISCV_HWPROBE_KEY_ZICBOZ_BLOCK_SIZE`: An unsigned int which 263 271 represents the size of the Zicboz block in bytes.
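A minimal userspace sketch of querying the renamed key through the hwprobe syscall, assuming a riscv64 kernel and uapi headers that already carry the *_MISALIGNED_SCALAR_* additions from this merge (the raw syscall is used because libc may not provide a wrapper):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <asm/hwprobe.h>

    int main(void)
    {
        struct riscv_hwprobe pair = {
            .key = RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF,
        };

        /* One key/value pair, empty cpu set (0/NULL) to cover all harts, no flags. */
        if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0)) {
            perror("riscv_hwprobe");
            return 1;
        }

        if (pair.key < 0)
            printf("key not recognized by this kernel\n");
        else if (pair.value == RISCV_HWPROBE_MISALIGNED_SCALAR_FAST)
            printf("misaligned scalar accesses are fast\n");
        else
            printf("misaligned scalar perf value: %llu\n",
                   (unsigned long long)pair.value);
        return 0;
    }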
+1 -1
Documentation/devicetree/bindings/clock/qcom,dispcc-sm6350.yaml
··· 7 7 title: Qualcomm Display Clock & Reset Controller on SM6350 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@somainline.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: | 13 13 Qualcomm display clock control module provides the clocks, resets and power
+1 -1
Documentation/devicetree/bindings/clock/qcom,gcc-msm8994.yaml
··· 7 7 title: Qualcomm Global Clock & Reset Controller on MSM8994 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@somainline.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: | 13 13 Qualcomm global clock control module provides the clocks, resets and power
+1 -1
Documentation/devicetree/bindings/clock/qcom,gcc-sm6125.yaml
··· 7 7 title: Qualcomm Global Clock & Reset Controller on SM6125 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@somainline.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: | 13 13 Qualcomm global clock control module provides the clocks, resets and power
+1 -1
Documentation/devicetree/bindings/clock/qcom,gcc-sm6350.yaml
··· 7 7 title: Qualcomm Global Clock & Reset Controller on SM6350 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@somainline.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: | 13 13 Qualcomm global clock control module provides the clocks, resets and power
+1 -1
Documentation/devicetree/bindings/clock/qcom,sm6115-gpucc.yaml
··· 7 7 title: Qualcomm Graphics Clock & Reset Controller on SM6115 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@linaro.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: | 13 13 Qualcomm graphics clock control module provides clocks, resets and power
+1 -1
Documentation/devicetree/bindings/clock/qcom,sm6125-gpucc.yaml
··· 7 7 title: Qualcomm Graphics Clock & Reset Controller on SM6125 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@linaro.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: | 13 13 Qualcomm graphics clock control module provides clocks and power domains on
+1 -1
Documentation/devicetree/bindings/clock/qcom,sm6350-camcc.yaml
··· 7 7 title: Qualcomm Camera Clock & Reset Controller on SM6350 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@linaro.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: | 13 13 Qualcomm camera clock control module provides the clocks, resets and power
+1 -1
Documentation/devicetree/bindings/clock/qcom,sm6375-dispcc.yaml
··· 7 7 title: Qualcomm Display Clock & Reset Controller on SM6375 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@linaro.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: | 13 13 Qualcomm display clock control module provides the clocks, resets and power
+1 -1
Documentation/devicetree/bindings/clock/qcom,sm6375-gcc.yaml
··· 7 7 title: Qualcomm Global Clock & Reset Controller on SM6375 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@somainline.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: | 13 13 Qualcomm global clock control module provides the clocks, resets and power
+1 -1
Documentation/devicetree/bindings/clock/qcom,sm6375-gpucc.yaml
··· 7 7 title: Qualcomm Graphics Clock & Reset Controller on SM6375 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@linaro.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: | 13 13 Qualcomm graphics clock control module provides clocks, resets and power
+1 -1
Documentation/devicetree/bindings/clock/qcom,sm8350-videocc.yaml
··· 7 7 title: Qualcomm SM8350 Video Clock & Reset Controller 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@linaro.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: | 13 13 Qualcomm video clock control module provides the clocks, resets and power
+1 -1
Documentation/devicetree/bindings/clock/qcom,sm8450-gpucc.yaml
··· 7 7 title: Qualcomm Graphics Clock & Reset Controller on SM8450 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@linaro.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: | 13 13 Qualcomm graphics clock control module provides the clocks, resets and power
+1 -1
Documentation/devicetree/bindings/display/msm/qcom,sm6375-mdss.yaml
··· 7 7 title: Qualcomm SM6375 Display MDSS 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@linaro.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: 13 13 SM6375 MSM Mobile Display Subsystem (MDSS), which encapsulates sub-blocks
+1 -1
Documentation/devicetree/bindings/display/panel/asus,z00t-tm5p5-nt35596.yaml
··· 7 7 title: ASUS Z00T TM5P5 NT35596 5.5" 1080×1920 LCD Panel 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konradybcio@gmail.com> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: |+ 13 13 This panel seems to only be found in the Asus Z00T
+6 -6
Documentation/devicetree/bindings/display/panel/samsung,atna33xc20.yaml
··· 18 18 # Samsung 13.3" FHD (1920x1080 pixels) eDP AMOLED panel 19 19 - const: samsung,atna33xc20 20 20 - items: 21 - - enum: 22 - # Samsung 14.5" WQXGA+ (2880x1800 pixels) eDP AMOLED panel 23 - - samsung,atna45af01 24 - # Samsung 14.5" 3K (2944x1840 pixels) eDP AMOLED panel 25 - - samsung,atna45dc02 26 - - const: samsung,atna33xc20 21 + - enum: 22 + # Samsung 14.5" WQXGA+ (2880x1800 pixels) eDP AMOLED panel 23 + - samsung,atna45af01 24 + # Samsung 14.5" 3K (2944x1840 pixels) eDP AMOLED panel 25 + - samsung,atna45dc02 26 + - const: samsung,atna33xc20 27 27 28 28 enable-gpios: true 29 29 port: true
+1 -1
Documentation/devicetree/bindings/display/panel/sony,td4353-jdi.yaml
··· 7 7 title: Sony TD4353 JDI 5 / 5.7" 2160x1080 MIPI-DSI Panel 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@somainline.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: | 13 13 The Sony TD4353 JDI is a 5 (XZ2c) / 5.7 (XZ2) inch 2160x1080
+1
Documentation/devicetree/bindings/eeprom/at25.yaml
··· 28 28 - anvo,anv32e61w 29 29 - atmel,at25256B 30 30 - fujitsu,mb85rs1mt 31 + - fujitsu,mb85rs256 31 32 - fujitsu,mb85rs64 32 33 - microchip,at25160bn 33 34 - microchip,25lc040
+1 -1
Documentation/devicetree/bindings/interconnect/qcom,sc7280-rpmh.yaml
··· 8 8 9 9 maintainers: 10 10 - Bjorn Andersson <andersson@kernel.org> 11 - - Konrad Dybcio <konrad.dybcio@linaro.org> 11 + - Konrad Dybcio <konradybcio@kernel.org> 12 12 13 13 description: | 14 14 RPMh interconnect providers support system bandwidth requirements through
+1 -1
Documentation/devicetree/bindings/interconnect/qcom,sc8280xp-rpmh.yaml
··· 8 8 9 9 maintainers: 10 10 - Bjorn Andersson <andersson@kernel.org> 11 - - Konrad Dybcio <konrad.dybcio@linaro.org> 11 + - Konrad Dybcio <konradybcio@kernel.org> 12 12 13 13 description: | 14 14 RPMh interconnect providers support system bandwidth requirements through
+1 -1
Documentation/devicetree/bindings/interconnect/qcom,sm8450-rpmh.yaml
··· 8 8 9 9 maintainers: 10 10 - Bjorn Andersson <andersson@kernel.org> 11 - - Konrad Dybcio <konrad.dybcio@linaro.org> 11 + - Konrad Dybcio <konradybcio@kernel.org> 12 12 13 13 description: | 14 14 RPMh interconnect providers support system bandwidth requirements through
+1 -1
Documentation/devicetree/bindings/iommu/qcom,iommu.yaml
··· 7 7 title: Qualcomm Technologies legacy IOMMU implementations 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@linaro.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: | 13 13 Qualcomm "B" family devices which are not compatible with arm-smmu have
+4
Documentation/devicetree/bindings/net/fsl,qoriq-mc-dpmac.yaml
··· 38 38 39 39 managed: true 40 40 41 + phys: 42 + description: A reference to the SerDes lane(s) 43 + maxItems: 1 44 + 41 45 required: 42 46 - reg 43 47
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,mdm9607-tlmm.yaml
··· 7 7 title: Qualcomm Technologies, Inc. MDM9607 TLMM block 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@somainline.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: 13 13 Top Level Mode Multiplexer pin controller in Qualcomm MDM9607 SoC.
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,sm6350-tlmm.yaml
··· 7 7 title: Qualcomm Technologies, Inc. SM6350 TLMM block 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@somainline.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: 13 13 Top Level Mode Multiplexer pin controller in Qualcomm SM6350 SoC.
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,sm6375-tlmm.yaml
··· 7 7 title: Qualcomm Technologies, Inc. SM6375 TLMM block 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@somainline.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: 13 13 Top Level Mode Multiplexer pin controller in Qualcomm SM6375 SoC.
+1 -1
Documentation/devicetree/bindings/remoteproc/qcom,rpm-proc.yaml
··· 8 8 9 9 maintainers: 10 10 - Bjorn Andersson <andersson@kernel.org> 11 - - Konrad Dybcio <konrad.dybcio@linaro.org> 11 + - Konrad Dybcio <konradybcio@kernel.org> 12 12 - Stephan Gerhold <stephan@gerhold.net> 13 13 14 14 description: |
+1 -1
Documentation/devicetree/bindings/soc/qcom/qcom,rpm-master-stats.yaml
··· 7 7 title: Qualcomm Technologies, Inc. (QTI) RPM Master Stats 8 8 9 9 maintainers: 10 - - Konrad Dybcio <konrad.dybcio@linaro.org> 10 + - Konrad Dybcio <konradybcio@kernel.org> 11 11 12 12 description: | 13 13 The Qualcomm RPM (Resource Power Manager) architecture includes a concept
+4 -4
Documentation/filesystems/caching/fscache.rst
··· 318 318 Debugging 319 319 ========= 320 320 321 - If CONFIG_FSCACHE_DEBUG is enabled, the FS-Cache facility can have runtime 322 - debugging enabled by adjusting the value in:: 321 + If CONFIG_NETFS_DEBUG is enabled, the FS-Cache facility and NETFS support can 322 + have runtime debugging enabled by adjusting the value in:: 323 323 324 - /sys/module/fscache/parameters/debug 324 + /sys/module/netfs/parameters/debug 325 325 326 326 This is a bitmask of debugging streams to enable: 327 327 ··· 343 343 The appropriate set of values should be OR'd together and the result written to 344 344 the control file. For example:: 345 345 346 - echo $((1|8|512)) >/sys/module/fscache/parameters/debug 346 + echo $((1|8|512)) >/sys/module/netfs/parameters/debug 347 347 348 348 will turn on all function entry debugging.
+1 -1
Documentation/virt/kvm/api.rst
··· 2592 2592 0x6030 0000 0010 004a SPSR_ABT 64 spsr[KVM_SPSR_ABT] 2593 2593 0x6030 0000 0010 004c SPSR_UND 64 spsr[KVM_SPSR_UND] 2594 2594 0x6030 0000 0010 004e SPSR_IRQ 64 spsr[KVM_SPSR_IRQ] 2595 - 0x6060 0000 0010 0050 SPSR_FIQ 64 spsr[KVM_SPSR_FIQ] 2595 + 0x6030 0000 0010 0050 SPSR_FIQ 64 spsr[KVM_SPSR_FIQ] 2596 2596 0x6040 0000 0010 0054 V0 128 fp_regs.vregs[0] [1]_ 2597 2597 0x6040 0000 0010 0058 V1 128 fp_regs.vregs[1] [1]_ 2598 2598 ...
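For context, this register ID is what userspace passes to KVM_GET_ONE_REG/KVM_SET_ONE_REG; a helper sketch assuming an already-created arm64 vCPU fd (the macro name below is ours, not from the uapi headers):

    #include <stdio.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Corrected encoding of SPSR_FIQ from the table above. */
    #define SPSR_FIQ_REG_ID 0x6030000000100050ULL

    static int read_spsr_fiq(int vcpu_fd)
    {
        uint64_t val = 0;
        struct kvm_one_reg reg = {
            .id   = SPSR_FIQ_REG_ID,
            .addr = (uint64_t)&val,
        };

        if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0) {
            perror("KVM_GET_ONE_REG");
            return -1;
        }
        printf("SPSR_FIQ = 0x%016llx\n", (unsigned long long)val);
        return 0;
    }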
+2 -2
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 11 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc3 5 + EXTRAVERSION = -rc4 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION* ··· 1963 1963 # Protocol). 1964 1964 PHONY += rust-analyzer 1965 1965 rust-analyzer: 1966 - $(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh 1966 + +$(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh 1967 1967 $(Q)$(MAKE) $(build)=rust $@ 1968 1968 1969 1969 # Script to generate missing namespace dependencies
+1 -1
arch/arm/mach-rpc/ecard.c
··· 1109 1109 driver_unregister(&drv->drv); 1110 1110 } 1111 1111 1112 - static int ecard_match(struct device *_dev, struct device_driver *_drv) 1112 + static int ecard_match(struct device *_dev, const struct device_driver *_drv) 1113 1113 { 1114 1114 struct expansion_card *ec = ECARD_DEV(_dev); 1115 1115 struct ecard_driver *drv = ECARD_DRV(_drv);
+1 -1
arch/arm64/include/asm/kvm_ptrauth.h
··· 104 104 105 105 #define __ptrauth_save_key(ctxt, key) \ 106 106 do { \ 107 - u64 __val; \ 107 + u64 __val; \ 108 108 __val = read_sysreg_s(SYS_ ## key ## KEYLO_EL1); \ 109 109 ctxt_sys_reg(ctxt, key ## KEYLO_EL1) = __val; \ 110 110 __val = read_sysreg_s(SYS_ ## key ## KEYHI_EL1); \
+1 -1
arch/arm64/include/asm/uaccess.h
··· 188 188 #define __get_mem_asm(load, reg, x, addr, label, type) \ 189 189 asm_goto_output( \ 190 190 "1: " load " " reg "0, [%1]\n" \ 191 - _ASM_EXTABLE_##type##ACCESS_ERR(1b, %l2, %w0) \ 191 + _ASM_EXTABLE_##type##ACCESS(1b, %l2) \ 192 192 : "=r" (x) \ 193 193 : "r" (addr) : : label) 194 194 #else
+1 -1
arch/arm64/kernel/acpi_numa.c
··· 27 27 28 28 #include <asm/numa.h> 29 29 30 - static int acpi_early_node_map[NR_CPUS] __initdata = { NUMA_NO_NODE }; 30 + static int acpi_early_node_map[NR_CPUS] __initdata = { [0 ... NR_CPUS - 1] = NUMA_NO_NODE }; 31 31 32 32 int __init acpi_numa_get_nid(unsigned int cpu) 33 33 {
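The fix works because { NUMA_NO_NODE } only initializes element 0 and leaves the rest zero (i.e. node 0), while the GCC range designator fills every slot; a standalone illustration (hypothetical program, not kernel code; the riscv copy further below gets the same change):

    #include <stdio.h>

    #define NUMA_NO_NODE (-1)
    #define NR_CPUS 4

    /* Only element 0 becomes -1; elements 1..3 are zero-initialized. */
    static int partial[NR_CPUS] = { NUMA_NO_NODE };

    /* GCC/Clang range designator: every element becomes -1. */
    static int full[NR_CPUS] = { [0 ... NR_CPUS - 1] = NUMA_NO_NODE };

    int main(void)
    {
        for (int i = 0; i < NR_CPUS; i++)
            printf("partial[%d]=%2d  full[%d]=%2d\n",
                   i, partial[i], i, full[i]);
        return 0;
    }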
-3
arch/arm64/kernel/setup.c
··· 355 355 smp_init_cpus(); 356 356 smp_build_mpidr_hash(); 357 357 358 - /* Init percpu seeds for random tags after cpus are set up. */ 359 - kasan_init_sw_tags(); 360 - 361 358 #ifdef CONFIG_ARM64_SW_TTBR0_PAN 362 359 /* 363 360 * Make sure init_thread_info.ttbr0 always generates translation
+2
arch/arm64/kernel/smp.c
··· 467 467 init_gic_priority_masking(); 468 468 469 469 kasan_init_hw_tags(); 470 + /* Init percpu seeds for random tags after cpus are set up. */ 471 + kasan_init_sw_tags(); 470 472 } 471 473 472 474 /*
+1
arch/arm64/kvm/Kconfig
··· 19 19 20 20 menuconfig KVM 21 21 bool "Kernel-based Virtual Machine (KVM) support" 22 + depends on AS_HAS_ARMV8_4 22 23 select KVM_COMMON 23 24 select KVM_GENERIC_HARDWARE_ENABLING 24 25 select KVM_GENERIC_MMU_NOTIFIER
+3
arch/arm64/kvm/Makefile
··· 10 10 obj-$(CONFIG_KVM) += kvm.o 11 11 obj-$(CONFIG_KVM) += hyp/ 12 12 13 + CFLAGS_sys_regs.o += -Wno-override-init 14 + CFLAGS_handle_exit.o += -Wno-override-init 15 + 13 16 kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \ 14 17 inject_fault.o va_layout.o handle_exit.o \ 15 18 guest.o debug.o reset.o sys_regs.o stacktrace.o \
+5 -10
arch/arm64/kvm/arm.c
··· 164 164 /** 165 165 * kvm_arch_init_vm - initializes a VM data structure 166 166 * @kvm: pointer to the KVM struct 167 + * @type: kvm device type 167 168 */ 168 169 int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) 169 170 { ··· 522 521 523 522 static void vcpu_set_pauth_traps(struct kvm_vcpu *vcpu) 524 523 { 525 - if (vcpu_has_ptrauth(vcpu)) { 524 + if (vcpu_has_ptrauth(vcpu) && !is_protected_kvm_enabled()) { 526 525 /* 527 - * Either we're running running an L2 guest, and the API/APK 528 - * bits come from L1's HCR_EL2, or API/APK are both set. 526 + * Either we're running an L2 guest, and the API/APK bits come 527 + * from L1's HCR_EL2, or API/APK are both set. 529 528 */ 530 529 if (unlikely(vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu))) { 531 530 u64 val; ··· 542 541 * Save the host keys if there is any chance for the guest 543 542 * to use pauth, as the entry code will reload the guest 544 543 * keys in that case. 545 - * Protected mode is the exception to that rule, as the 546 - * entry into the EL2 code eagerly switch back and forth 547 - * between host and hyp keys (and kvm_hyp_ctxt is out of 548 - * reach anyway). 549 544 */ 550 - if (is_protected_kvm_enabled()) 551 - return; 552 - 553 545 if (vcpu->arch.hcr_el2 & (HCR_API | HCR_APK)) { 554 546 struct kvm_cpu_context *ctxt; 547 + 555 548 ctxt = this_cpu_ptr_hyp_sym(kvm_hyp_ctxt); 556 549 ptrauth_save_keys(ctxt); 557 550 }
-1
arch/arm64/kvm/hyp/include/hyp/switch.h
··· 27 27 #include <asm/kvm_hyp.h> 28 28 #include <asm/kvm_mmu.h> 29 29 #include <asm/kvm_nested.h> 30 - #include <asm/kvm_ptrauth.h> 31 30 #include <asm/fpsimd.h> 32 31 #include <asm/debug-monitors.h> 33 32 #include <asm/processor.h>
+2
arch/arm64/kvm/hyp/nvhe/Makefile
··· 20 20 lib-objs := clear_page.o copy_page.o memcpy.o memset.o 21 21 lib-objs := $(addprefix ../../../lib/, $(lib-objs)) 22 22 23 + CFLAGS_switch.nvhe.o += -Wno-override-init 24 + 23 25 hyp-obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \ 24 26 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o page_alloc.o \ 25 27 cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o stacktrace.o ffa.o
+2 -3
arch/arm64/kvm/hyp/nvhe/switch.c
··· 173 173 static bool kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu, u64 *exit_code) 174 174 { 175 175 /* 176 - * Make sure we handle the exit for workarounds and ptrauth 177 - * before the pKVM handling, as the latter could decide to 178 - * UNDEF. 176 + * Make sure we handle the exit for workarounds before the pKVM 177 + * handling, as the latter could decide to UNDEF. 179 178 */ 180 179 return (kvm_hyp_handle_sysreg(vcpu, exit_code) || 181 180 kvm_handle_pvm_sysreg(vcpu, exit_code));
+2
arch/arm64/kvm/hyp/vhe/Makefile
··· 6 6 asflags-y := -D__KVM_VHE_HYPERVISOR__ 7 7 ccflags-y := -D__KVM_VHE_HYPERVISOR__ 8 8 9 + CFLAGS_switch.o += -Wno-override-init 10 + 9 11 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o 10 12 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \ 11 13 ../fpsimd.o ../hyp-entry.o ../exception.o
+1 -1
arch/arm64/kvm/nested.c
··· 786 786 if (!WARN_ON(atomic_read(&mmu->refcnt))) 787 787 kvm_free_stage2_pgd(mmu); 788 788 } 789 - kfree(kvm->arch.nested_mmus); 789 + kvfree(kvm->arch.nested_mmus); 790 790 kvm->arch.nested_mmus = NULL; 791 791 kvm->arch.nested_mmus_size = 0; 792 792 kvm_uninit_stage2_mmu(kvm);
+3 -2
arch/arm64/kvm/vgic/vgic-debug.c
··· 45 45 * Let the xarray drive the iterator after the last SPI, as the iterator 46 46 * has exhausted the sequentially-allocated INTID space. 47 47 */ 48 - if (iter->intid >= (iter->nr_spis + VGIC_NR_PRIVATE_IRQS - 1)) { 48 + if (iter->intid >= (iter->nr_spis + VGIC_NR_PRIVATE_IRQS - 1) && 49 + iter->nr_lpis) { 49 50 if (iter->lpi_idx < iter->nr_lpis) 50 51 xa_find_after(&dist->lpi_xa, &iter->intid, 51 52 VGIC_LPI_MAX_INTID, ··· 113 112 return iter->dist_id > 0 && 114 113 iter->vcpu_id == iter->nr_cpus && 115 114 iter->intid >= (iter->nr_spis + VGIC_NR_PRIVATE_IRQS) && 116 - iter->lpi_idx > iter->nr_lpis; 115 + (!iter->nr_lpis || iter->lpi_idx > iter->nr_lpis); 117 116 } 118 117 119 118 static void *vgic_debug_start(struct seq_file *s, loff_t *pos)
+1 -2
arch/arm64/kvm/vgic/vgic-init.c
··· 438 438 unsigned long i; 439 439 440 440 mutex_lock(&kvm->slots_lock); 441 + mutex_lock(&kvm->arch.config_lock); 441 442 442 443 vgic_debug_destroy(kvm); 443 444 444 445 kvm_for_each_vcpu(i, vcpu, kvm) 445 446 __kvm_vgic_vcpu_destroy(vcpu); 446 - 447 - mutex_lock(&kvm->arch.config_lock); 448 447 449 448 kvm_vgic_dist_destroy(kvm); 450 449
+4 -3
arch/arm64/kvm/vgic/vgic-irqfd.c
··· 9 9 #include <kvm/arm_vgic.h> 10 10 #include "vgic.h" 11 11 12 - /** 12 + /* 13 13 * vgic_irqfd_set_irq: inject the IRQ corresponding to the 14 14 * irqchip routing entry 15 15 * ··· 75 75 msi->flags = e->msi.flags; 76 76 msi->devid = e->msi.devid; 77 77 } 78 - /** 78 + 79 + /* 79 80 * kvm_set_msi: inject the MSI corresponding to the 80 81 * MSI routing entry 81 82 * ··· 99 98 return vgic_its_inject_msi(kvm, &msi); 100 99 } 101 100 102 - /** 101 + /* 103 102 * kvm_arch_set_irq_inatomic: fast-path for irqfd injection 104 103 */ 105 104 int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
+11 -7
arch/arm64/kvm/vgic/vgic-its.c
··· 2040 2040 * @start_id: the ID of the first entry in the table 2041 2041 * (non zero for 2d level tables) 2042 2042 * @fn: function to apply on each entry 2043 + * @opaque: pointer to opaque data 2043 2044 * 2044 2045 * Return: < 0 on error, 0 if last element was identified, 1 otherwise 2045 2046 * (the last element may not be found on second level tables) ··· 2080 2079 return 1; 2081 2080 } 2082 2081 2083 - /** 2082 + /* 2084 2083 * vgic_its_save_ite - Save an interrupt translation entry at @gpa 2085 2084 */ 2086 2085 static int vgic_its_save_ite(struct vgic_its *its, struct its_device *dev, ··· 2100 2099 2101 2100 /** 2102 2101 * vgic_its_restore_ite - restore an interrupt translation entry 2102 + * 2103 + * @its: its handle 2103 2104 * @event_id: id used for indexing 2104 2105 * @ptr: pointer to the ITE entry 2105 2106 * @opaque: pointer to the its_device ··· 2234 2231 * @its: ITS handle 2235 2232 * @dev: ITS device 2236 2233 * @ptr: GPA 2234 + * @dte_esz: device table entry size 2237 2235 */ 2238 2236 static int vgic_its_save_dte(struct vgic_its *its, struct its_device *dev, 2239 2237 gpa_t ptr, int dte_esz) ··· 2317 2313 return 1; 2318 2314 } 2319 2315 2320 - /** 2316 + /* 2321 2317 * vgic_its_save_device_tables - Save the device table and all ITT 2322 2318 * into guest RAM 2323 2319 * ··· 2390 2386 return ret; 2391 2387 } 2392 2388 2393 - /** 2389 + /* 2394 2390 * vgic_its_restore_device_tables - Restore the device table and all ITT 2395 2391 * from guest RAM to internal data structs 2396 2392 */ ··· 2482 2478 return 1; 2483 2479 } 2484 2480 2485 - /** 2481 + /* 2486 2482 * vgic_its_save_collection_table - Save the collection table into 2487 2483 * guest RAM 2488 2484 */ ··· 2522 2518 return ret; 2523 2519 } 2524 2520 2525 - /** 2521 + /* 2526 2522 * vgic_its_restore_collection_table - reads the collection table 2527 2523 * in guest memory and restores the ITS internal state. Requires the 2528 2524 * BASER registers to be restored before. ··· 2560 2556 return ret; 2561 2557 } 2562 2558 2563 - /** 2559 + /* 2564 2560 * vgic_its_save_tables_v0 - Save the ITS tables into guest ARM 2565 2561 * according to v0 ABI 2566 2562 */ ··· 2575 2571 return vgic_its_save_collection_table(its); 2576 2572 } 2577 2573 2578 - /** 2574 + /* 2579 2575 * vgic_its_restore_tables_v0 - Restore the ITS tables from guest RAM 2580 2576 * to internal data structs according to V0 ABI 2581 2577 *
+1 -1
arch/arm64/kvm/vgic/vgic-v3.c
··· 370 370 dist->its_vm.vpes[i]->irq)); 371 371 } 372 372 373 - /** 373 + /* 374 374 * vgic_v3_save_pending_tables - Save the pending tables into guest RAM 375 375 * kvm lock and all vcpu lock must be held 376 376 */
+1 -1
arch/arm64/kvm/vgic/vgic.c
··· 313 313 * with all locks dropped. 314 314 */ 315 315 bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq, 316 - unsigned long flags) 316 + unsigned long flags) __releases(&irq->irq_lock) 317 317 { 318 318 struct kvm_vcpu *vcpu; 319 319
+1 -1
arch/arm64/kvm/vgic/vgic.h
··· 186 186 void vgic_irq_set_phys_pending(struct vgic_irq *irq, bool pending); 187 187 void vgic_irq_set_phys_active(struct vgic_irq *irq, bool active); 188 188 bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq, 189 - unsigned long flags); 189 + unsigned long flags) __releases(&irq->irq_lock); 190 190 void vgic_kick_vcpus(struct kvm *kvm); 191 191 void vgic_irq_handle_resampling(struct vgic_irq *irq, 192 192 bool lr_deactivated, bool lr_pending);
+1 -1
arch/mips/sgi-ip22/ip22-gio.c
··· 111 111 } 112 112 EXPORT_SYMBOL_GPL(gio_device_unregister); 113 113 114 - static int gio_bus_match(struct device *dev, struct device_driver *drv) 114 + static int gio_bus_match(struct device *dev, const struct device_driver *drv) 115 115 { 116 116 struct gio_device *gio_dev = to_gio_device(dev); 117 117 struct gio_driver *gio_drv = to_gio_driver(drv);
+13
arch/powerpc/include/asm/topology.h
··· 145 145 146 146 #ifdef CONFIG_HOTPLUG_SMT 147 147 #include <linux/cpu_smt.h> 148 + #include <linux/cpumask.h> 148 149 #include <asm/cputhreads.h> 149 150 150 151 static inline bool topology_is_primary_thread(unsigned int cpu) ··· 156 155 static inline bool topology_smt_thread_allowed(unsigned int cpu) 157 156 { 158 157 return cpu_thread_in_core(cpu) < cpu_smt_num_threads; 158 + } 159 + 160 + #define topology_is_core_online topology_is_core_online 161 + static inline bool topology_is_core_online(unsigned int cpu) 162 + { 163 + int i, first_cpu = cpu_first_thread_sibling(cpu); 164 + 165 + for (i = first_cpu; i < first_cpu + threads_per_core; ++i) { 166 + if (cpu_online(i)) 167 + return true; 168 + } 169 + return false; 159 170 } 160 171 #endif 161 172
+1
arch/powerpc/kernel/setup-common.c
··· 959 959 mem_topology_setup(); 960 960 /* Set max_mapnr before paging_init() */ 961 961 set_max_mapnr(max_pfn); 962 + high_memory = (void *)__va(max_low_pfn * PAGE_SIZE); 962 963 963 964 /* 964 965 * Release secondary cpus out of their spinloops at 0x60 now that
+2 -2
arch/powerpc/mm/init-common.c
··· 73 73 74 74 #define CTOR(shift) static void ctor_##shift(void *addr) \ 75 75 { \ 76 - memset(addr, 0, sizeof(void *) << (shift)); \ 76 + memset(addr, 0, sizeof(pgd_t) << (shift)); \ 77 77 } 78 78 79 79 CTOR(0); CTOR(1); CTOR(2); CTOR(3); CTOR(4); CTOR(5); CTOR(6); CTOR(7); ··· 117 117 void pgtable_cache_add(unsigned int shift) 118 118 { 119 119 char *name; 120 - unsigned long table_size = sizeof(void *) << shift; 120 + unsigned long table_size = sizeof(pgd_t) << shift; 121 121 unsigned long align = table_size; 122 122 123 123 /* When batching pgtable pointers for RCU freeing, we store
-2
arch/powerpc/mm/mem.c
··· 290 290 swiotlb_init(ppc_swiotlb_enable, ppc_swiotlb_flags); 291 291 #endif 292 292 293 - high_memory = (void *) __va(max_low_pfn * PAGE_SIZE); 294 - 295 293 kasan_late_init(); 296 294 297 295 memblock_free_all();
+1 -1
arch/riscv/include/asm/hwprobe.h
··· 8 8 9 9 #include <uapi/asm/hwprobe.h> 10 10 11 - #define RISCV_HWPROBE_MAX_KEY 8 11 + #define RISCV_HWPROBE_MAX_KEY 9 12 12 13 13 static inline bool riscv_hwprobe_key_is_valid(__s64 key) 14 14 {
+6
arch/riscv/include/uapi/asm/hwprobe.h
··· 82 82 #define RISCV_HWPROBE_KEY_ZICBOZ_BLOCK_SIZE 6 83 83 #define RISCV_HWPROBE_KEY_HIGHEST_VIRT_ADDRESS 7 84 84 #define RISCV_HWPROBE_KEY_TIME_CSR_FREQ 8 85 + #define RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF 9 86 + #define RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN 0 87 + #define RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED 1 88 + #define RISCV_HWPROBE_MISALIGNED_SCALAR_SLOW 2 89 + #define RISCV_HWPROBE_MISALIGNED_SCALAR_FAST 3 90 + #define RISCV_HWPROBE_MISALIGNED_SCALAR_UNSUPPORTED 4 85 91 /* Increase RISCV_HWPROBE_MAX_KEY when adding items. */ 86 92 87 93 /* Flags */
+1 -1
arch/riscv/kernel/acpi_numa.c
··· 28 28 29 29 #include <asm/numa.h> 30 30 31 - static int acpi_early_node_map[NR_CPUS] __initdata = { NUMA_NO_NODE }; 31 + static int acpi_early_node_map[NR_CPUS] __initdata = { [0 ... NR_CPUS - 1] = NUMA_NO_NODE }; 32 32 33 33 int __init acpi_numa_get_nid(unsigned int cpu) 34 34 {
+4
arch/riscv/kernel/patch.c
··· 205 205 int ret; 206 206 207 207 ret = patch_insn_set(addr, c, len); 208 + if (!ret) 209 + flush_icache_range((uintptr_t)addr, (uintptr_t)addr + len); 208 210 209 211 return ret; 210 212 } ··· 241 239 int ret; 242 240 243 241 ret = patch_insn_write(addr, insns, len); 242 + if (!ret) 243 + flush_icache_range((uintptr_t)addr, (uintptr_t)addr + len); 244 244 245 245 return ret; 246 246 }
+6 -5
arch/riscv/kernel/sys_hwprobe.c
··· 178 178 perf = this_perf; 179 179 180 180 if (perf != this_perf) { 181 - perf = RISCV_HWPROBE_MISALIGNED_UNKNOWN; 181 + perf = RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN; 182 182 break; 183 183 } 184 184 } 185 185 186 186 if (perf == -1ULL) 187 - return RISCV_HWPROBE_MISALIGNED_UNKNOWN; 187 + return RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN; 188 188 189 189 return perf; 190 190 } ··· 192 192 static u64 hwprobe_misaligned(const struct cpumask *cpus) 193 193 { 194 194 if (IS_ENABLED(CONFIG_RISCV_EFFICIENT_UNALIGNED_ACCESS)) 195 - return RISCV_HWPROBE_MISALIGNED_FAST; 195 + return RISCV_HWPROBE_MISALIGNED_SCALAR_FAST; 196 196 197 197 if (IS_ENABLED(CONFIG_RISCV_EMULATED_UNALIGNED_ACCESS) && unaligned_ctl_available()) 198 - return RISCV_HWPROBE_MISALIGNED_EMULATED; 198 + return RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED; 199 199 200 - return RISCV_HWPROBE_MISALIGNED_SLOW; 200 + return RISCV_HWPROBE_MISALIGNED_SCALAR_SLOW; 201 201 } 202 202 #endif 203 203 ··· 225 225 break; 226 226 227 227 case RISCV_HWPROBE_KEY_CPUPERF_0: 228 + case RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF: 228 229 pair->value = hwprobe_misaligned(cpus); 229 230 break; 230 231
+2 -2
arch/riscv/kernel/traps.c
··· 319 319 320 320 regs->epc += 4; 321 321 regs->orig_a0 = regs->a0; 322 + regs->a0 = -ENOSYS; 322 323 323 324 riscv_v_vstate_discard(regs); 324 325 ··· 329 328 330 329 if (syscall >= 0 && syscall < NR_syscalls) 331 330 syscall_handler(regs, syscall); 332 - else if (syscall != -1) 333 - regs->a0 = -ENOSYS; 331 + 334 332 /* 335 333 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(), 336 334 * so the maximum stack offset is 1k bytes (10 bits).
+3 -3
arch/riscv/kernel/traps_misaligned.c
··· 338 338 perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr); 339 339 340 340 #ifdef CONFIG_RISCV_PROBE_UNALIGNED_ACCESS 341 - *this_cpu_ptr(&misaligned_access_speed) = RISCV_HWPROBE_MISALIGNED_EMULATED; 341 + *this_cpu_ptr(&misaligned_access_speed) = RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED; 342 342 #endif 343 343 344 344 if (!unaligned_enabled) ··· 532 532 unsigned long tmp_var, tmp_val; 533 533 bool misaligned_emu_detected; 534 534 535 - *mas_ptr = RISCV_HWPROBE_MISALIGNED_UNKNOWN; 535 + *mas_ptr = RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN; 536 536 537 537 __asm__ __volatile__ ( 538 538 " "REG_L" %[tmp], 1(%[ptr])\n" 539 539 : [tmp] "=r" (tmp_val) : [ptr] "r" (&tmp_var) : "memory"); 540 540 541 - misaligned_emu_detected = (*mas_ptr == RISCV_HWPROBE_MISALIGNED_EMULATED); 541 + misaligned_emu_detected = (*mas_ptr == RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED); 542 542 /* 543 543 * If unaligned_ctl is already set, this means that we detected that all 544 544 * CPUS uses emulated misaligned access at boot time. If that changed
+6 -6
arch/riscv/kernel/unaligned_access_speed.c
··· 34 34 struct page *page = param; 35 35 void *dst; 36 36 void *src; 37 - long speed = RISCV_HWPROBE_MISALIGNED_SLOW; 37 + long speed = RISCV_HWPROBE_MISALIGNED_SCALAR_SLOW; 38 38 39 - if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN) 39 + if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN) 40 40 return 0; 41 41 42 42 /* Make an unaligned destination buffer. */ ··· 95 95 } 96 96 97 97 if (word_cycles < byte_cycles) 98 - speed = RISCV_HWPROBE_MISALIGNED_FAST; 98 + speed = RISCV_HWPROBE_MISALIGNED_SCALAR_FAST; 99 99 100 100 ratio = div_u64((byte_cycles * 100), word_cycles); 101 101 pr_info("cpu%d: Ratio of byte access time to unaligned word access is %d.%02d, unaligned accesses are %s\n", 102 102 cpu, 103 103 ratio / 100, 104 104 ratio % 100, 105 - (speed == RISCV_HWPROBE_MISALIGNED_FAST) ? "fast" : "slow"); 105 + (speed == RISCV_HWPROBE_MISALIGNED_SCALAR_FAST) ? "fast" : "slow"); 106 106 107 107 per_cpu(misaligned_access_speed, cpu) = speed; 108 108 ··· 110 110 * Set the value of fast_misaligned_access of a CPU. These operations 111 111 * are atomic to avoid race conditions. 112 112 */ 113 - if (speed == RISCV_HWPROBE_MISALIGNED_FAST) 113 + if (speed == RISCV_HWPROBE_MISALIGNED_SCALAR_FAST) 114 114 cpumask_set_cpu(cpu, &fast_misaligned_access); 115 115 else 116 116 cpumask_clear_cpu(cpu, &fast_misaligned_access); ··· 188 188 static struct page *buf; 189 189 190 190 /* We are already set since the last check */ 191 - if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN) 191 + if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN) 192 192 goto exit; 193 193 194 194 buf = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
+1 -1
arch/riscv/kernel/vendor_extensions.c
··· 38 38 #ifdef CONFIG_RISCV_ISA_VENDOR_EXT_ANDES 39 39 case ANDES_VENDOR_ID: 40 40 bmap = &riscv_isa_vendor_ext_list_andes.all_harts_isa_bitmap; 41 - cpu_bmap = &riscv_isa_vendor_ext_list_andes.per_hart_isa_bitmap[cpu]; 41 + cpu_bmap = riscv_isa_vendor_ext_list_andes.per_hart_isa_bitmap; 42 42 break; 43 43 #endif 44 44 default:
+2 -2
arch/riscv/mm/init.c
··· 927 927 PMD_SIZE, PAGE_KERNEL_EXEC); 928 928 929 929 /* Map the data in RAM */ 930 - end_va = kernel_map.virt_addr + XIP_OFFSET + kernel_map.size; 930 + end_va = kernel_map.virt_addr + kernel_map.size; 931 931 for (va = kernel_map.virt_addr + XIP_OFFSET; va < end_va; va += PMD_SIZE) 932 932 create_pgd_mapping(pgdir, va, 933 933 kernel_map.phys_addr + (va - (kernel_map.virt_addr + XIP_OFFSET)), ··· 1096 1096 1097 1097 phys_ram_base = CONFIG_PHYS_RAM_BASE; 1098 1098 kernel_map.phys_addr = (uintptr_t)CONFIG_PHYS_RAM_BASE; 1099 - kernel_map.size = (uintptr_t)(&_end) - (uintptr_t)(&_sdata); 1099 + kernel_map.size = (uintptr_t)(&_end) - (uintptr_t)(&_start); 1100 1100 1101 1101 kernel_map.va_kernel_xip_pa_offset = kernel_map.virt_addr - kernel_map.xiprom; 1102 1102 #else
+4 -1
arch/s390/include/asm/uv.h
··· 441 441 442 442 if (!uv_call(0, (u64)&uvcb)) 443 443 return 0; 444 - return -EINVAL; 444 + pr_err("%s UVC failed (rc: 0x%x, rrc: 0x%x), possible hypervisor bug.\n", 445 + uvcb.header.cmd == UVC_CMD_SET_SHARED_ACCESS ? "Share" : "Unshare", 446 + uvcb.header.rc, uvcb.header.rrc); 447 + panic("System security cannot be guaranteed unless the system panics now.\n"); 445 448 } 446 449 447 450 /*
+6 -1
arch/s390/kvm/kvm-s390.h
··· 267 267 268 268 static inline u32 kvm_s390_get_gisa_desc(struct kvm *kvm) 269 269 { 270 - u32 gd = virt_to_phys(kvm->arch.gisa_int.origin); 270 + u32 gd; 271 + 272 + if (!kvm->arch.gisa_int.origin) 273 + return 0; 274 + 275 + gd = virt_to_phys(kvm->arch.gisa_int.origin); 271 276 272 277 if (gd && sclp.has_gisaf) 273 278 gd |= GISA_FORMAT1;
+2
arch/x86/include/asm/kvm_host.h
··· 2192 2192 #define kvm_arch_has_private_mem(kvm) false 2193 2193 #endif 2194 2194 2195 + #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state) 2196 + 2195 2197 static inline u16 kvm_read_ldt(void) 2196 2198 { 2197 2199 u16 ldt;
-1
arch/x86/kvm/hyperv.h
··· 286 286 return HV_STATUS_ACCESS_DENIED; 287 287 } 288 288 static inline void kvm_hv_vcpu_purge_flush_tlb(struct kvm_vcpu *vcpu) {} 289 - static inline void kvm_hv_free_pa_page(struct kvm *kvm) {} 290 289 static inline bool kvm_hv_synic_has_vector(struct kvm_vcpu *vcpu, int vector) 291 290 { 292 291 return false;
+15 -7
arch/x86/kvm/lapic.c
··· 351 351 * reversing the LDR calculation to get cluster of APICs, i.e. no 352 352 * additional work is required. 353 353 */ 354 - if (apic_x2apic_mode(apic)) { 355 - WARN_ON_ONCE(ldr != kvm_apic_calc_x2apic_ldr(kvm_x2apic_id(apic))); 354 + if (apic_x2apic_mode(apic)) 356 355 return; 357 - } 358 356 359 357 if (WARN_ON_ONCE(!kvm_apic_map_get_logical_dest(new, ldr, 360 358 &cluster, &mask))) { ··· 2964 2966 struct kvm_lapic_state *s, bool set) 2965 2967 { 2966 2968 if (apic_x2apic_mode(vcpu->arch.apic)) { 2969 + u32 x2apic_id = kvm_x2apic_id(vcpu->arch.apic); 2967 2970 u32 *id = (u32 *)(s->regs + APIC_ID); 2968 2971 u32 *ldr = (u32 *)(s->regs + APIC_LDR); 2969 2972 u64 icr; 2970 2973 2971 2974 if (vcpu->kvm->arch.x2apic_format) { 2972 - if (*id != vcpu->vcpu_id) 2975 + if (*id != x2apic_id) 2973 2976 return -EINVAL; 2974 2977 } else { 2978 + /* 2979 + * Ignore the userspace value when setting APIC state. 2980 + * KVM's model is that the x2APIC ID is readonly, e.g. 2981 + * KVM only supports delivering interrupts to KVM's 2982 + * version of the x2APIC ID. However, for backwards 2983 + * compatibility, don't reject attempts to set a 2984 + * mismatched ID for userspace that hasn't opted into 2985 + * x2apic_format. 2986 + */ 2975 2987 if (set) 2976 - *id >>= 24; 2988 + *id = x2apic_id; 2977 2989 else 2978 - *id <<= 24; 2990 + *id = x2apic_id << 24; 2979 2991 } 2980 2992 2981 2993 /* ··· 2994 2986 * split to ICR+ICR2 in userspace for backwards compatibility. 2995 2987 */ 2996 2988 if (set) { 2997 - *ldr = kvm_apic_calc_x2apic_ldr(*id); 2989 + *ldr = kvm_apic_calc_x2apic_ldr(x2apic_id); 2998 2990 2999 2991 icr = __kvm_lapic_get_reg(s->regs, APIC_ICR) | 3000 2992 (u64)__kvm_lapic_get_reg(s->regs, APIC_ICR2) << 32;
+4 -3
arch/x86/kvm/svm/sev.c
··· 2276 2276 2277 2277 for (gfn = gfn_start, i = 0; gfn < gfn_start + npages; gfn++, i++) { 2278 2278 struct sev_data_snp_launch_update fw_args = {0}; 2279 - bool assigned; 2279 + bool assigned = false; 2280 2280 int level; 2281 2281 2282 2282 ret = snp_lookup_rmpentry((u64)pfn + i, &assigned, &level); ··· 2290 2290 if (src) { 2291 2291 void *vaddr = kmap_local_pfn(pfn + i); 2292 2292 2293 - ret = copy_from_user(vaddr, src + i * PAGE_SIZE, PAGE_SIZE); 2294 - if (ret) 2293 + if (copy_from_user(vaddr, src + i * PAGE_SIZE, PAGE_SIZE)) { 2294 + ret = -EFAULT; 2295 2295 goto err; 2296 + } 2296 2297 kunmap_local(vaddr); 2297 2298 } 2298 2299
+2 -4
arch/x86/kvm/x86.c
··· 427 427 428 428 int kvm_set_user_return_msr(unsigned slot, u64 value, u64 mask) 429 429 { 430 - unsigned int cpu = smp_processor_id(); 431 - struct kvm_user_return_msrs *msrs = per_cpu_ptr(user_return_msrs, cpu); 430 + struct kvm_user_return_msrs *msrs = this_cpu_ptr(user_return_msrs); 432 431 int err; 433 432 434 433 value = (value & mask) | (msrs->values[slot].host & ~mask); ··· 449 450 450 451 static void drop_user_return_notifiers(void) 451 452 { 452 - unsigned int cpu = smp_processor_id(); 453 - struct kvm_user_return_msrs *msrs = per_cpu_ptr(user_return_msrs, cpu); 453 + struct kvm_user_return_msrs *msrs = this_cpu_ptr(user_return_msrs); 454 454 455 455 if (msrs->registered) 456 456 kvm_on_user_return(&msrs->urn);
+3 -2
block/blk-mq-tag.c
··· 38 38 void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx) 39 39 { 40 40 unsigned int users; 41 + unsigned long flags; 41 42 struct blk_mq_tags *tags = hctx->tags; 42 43 43 44 /* ··· 57 56 return; 58 57 } 59 58 60 - spin_lock_irq(&tags->lock); 59 + spin_lock_irqsave(&tags->lock, flags); 61 60 users = tags->active_queues + 1; 62 61 WRITE_ONCE(tags->active_queues, users); 63 62 blk_mq_update_wake_batch(tags, users); 64 - spin_unlock_irq(&tags->lock); 63 + spin_unlock_irqrestore(&tags->lock, flags); 65 64 } 66 65 67 66 /*
+1 -5
drivers/acpi/acpica/acevents.h
··· 188 188 u8 acpi_ns_is_locked); 189 189 190 190 void 191 - acpi_ev_execute_reg_methods(struct acpi_namespace_node *node, 191 + acpi_ev_execute_reg_methods(struct acpi_namespace_node *node, u32 max_depth, 192 192 acpi_adr_space_type space_id, u32 function); 193 - 194 - void 195 - acpi_ev_execute_orphan_reg_method(struct acpi_namespace_node *node, 196 - acpi_adr_space_type space_id); 197 193 198 194 acpi_status 199 195 acpi_ev_execute_reg_method(union acpi_operand_object *region_obj, u32 function);
+9 -3
drivers/acpi/acpica/evregion.c
··· 20 20 21 21 /* Local prototypes */ 22 22 23 + static void 24 + acpi_ev_execute_orphan_reg_method(struct acpi_namespace_node *device_node, 25 + acpi_adr_space_type space_id); 26 + 23 27 static acpi_status 24 28 acpi_ev_reg_run(acpi_handle obj_handle, 25 29 u32 level, void *context, void **return_value); ··· 65 61 acpi_gbl_default_address_spaces 66 62 [i])) { 67 63 acpi_ev_execute_reg_methods(acpi_gbl_root_node, 64 + ACPI_UINT32_MAX, 68 65 acpi_gbl_default_address_spaces 69 66 [i], ACPI_REG_CONNECT); 70 67 } ··· 673 668 * FUNCTION: acpi_ev_execute_reg_methods 674 669 * 675 670 * PARAMETERS: node - Namespace node for the device 671 + * max_depth - Depth to which search for _REG 676 672 * space_id - The address space ID 677 673 * function - Passed to _REG: On (1) or Off (0) 678 674 * ··· 685 679 ******************************************************************************/ 686 680 687 681 void 688 - acpi_ev_execute_reg_methods(struct acpi_namespace_node *node, 682 + acpi_ev_execute_reg_methods(struct acpi_namespace_node *node, u32 max_depth, 689 683 acpi_adr_space_type space_id, u32 function) 690 684 { 691 685 struct acpi_reg_walk_info info; ··· 719 713 * regions and _REG methods. (i.e. handlers must be installed for all 720 714 * regions of this Space ID before we can run any _REG methods) 721 715 */ 722 - (void)acpi_ns_walk_namespace(ACPI_TYPE_ANY, node, ACPI_UINT32_MAX, 716 + (void)acpi_ns_walk_namespace(ACPI_TYPE_ANY, node, max_depth, 723 717 ACPI_NS_WALK_UNLOCK, acpi_ev_reg_run, NULL, 724 718 &info, NULL); 725 719 ··· 820 814 * 821 815 ******************************************************************************/ 822 816 823 - void 817 + static void 824 818 acpi_ev_execute_orphan_reg_method(struct acpi_namespace_node *device_node, 825 819 acpi_adr_space_type space_id) 826 820 {
+7 -57
drivers/acpi/acpica/evxfregn.c
··· 85 85 /* Run all _REG methods for this address space */ 86 86 87 87 if (run_reg) { 88 - acpi_ev_execute_reg_methods(node, space_id, ACPI_REG_CONNECT); 88 + acpi_ev_execute_reg_methods(node, ACPI_UINT32_MAX, space_id, 89 + ACPI_REG_CONNECT); 89 90 } 90 91 91 92 unlock_and_exit: ··· 264 263 * FUNCTION: acpi_execute_reg_methods 265 264 * 266 265 * PARAMETERS: device - Handle for the device 266 + * max_depth - Depth to which search for _REG 267 267 * space_id - The address space ID 268 268 * 269 269 * RETURN: Status ··· 273 271 * 274 272 ******************************************************************************/ 275 273 acpi_status 276 - acpi_execute_reg_methods(acpi_handle device, acpi_adr_space_type space_id) 274 + acpi_execute_reg_methods(acpi_handle device, u32 max_depth, 275 + acpi_adr_space_type space_id) 277 276 { 278 277 struct acpi_namespace_node *node; 279 278 acpi_status status; ··· 299 296 300 297 /* Run all _REG methods for this address space */ 301 298 302 - acpi_ev_execute_reg_methods(node, space_id, ACPI_REG_CONNECT); 299 + acpi_ev_execute_reg_methods(node, max_depth, space_id, 300 + ACPI_REG_CONNECT); 303 301 } else { 304 302 status = AE_BAD_PARAMETER; 305 303 } ··· 310 306 } 311 307 312 308 ACPI_EXPORT_SYMBOL(acpi_execute_reg_methods) 313 - 314 - /******************************************************************************* 315 - * 316 - * FUNCTION: acpi_execute_orphan_reg_method 317 - * 318 - * PARAMETERS: device - Handle for the device 319 - * space_id - The address space ID 320 - * 321 - * RETURN: Status 322 - * 323 - * DESCRIPTION: Execute an "orphan" _REG method that appears under an ACPI 324 - * device. This is a _REG method that has no corresponding region 325 - * within the device's scope. 326 - * 327 - ******************************************************************************/ 328 - acpi_status 329 - acpi_execute_orphan_reg_method(acpi_handle device, acpi_adr_space_type space_id) 330 - { 331 - struct acpi_namespace_node *node; 332 - acpi_status status; 333 - 334 - ACPI_FUNCTION_TRACE(acpi_execute_orphan_reg_method); 335 - 336 - /* Parameter validation */ 337 - 338 - if (!device) { 339 - return_ACPI_STATUS(AE_BAD_PARAMETER); 340 - } 341 - 342 - status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE); 343 - if (ACPI_FAILURE(status)) { 344 - return_ACPI_STATUS(status); 345 - } 346 - 347 - /* Convert and validate the device handle */ 348 - 349 - node = acpi_ns_validate_handle(device); 350 - if (node) { 351 - 352 - /* 353 - * If an "orphan" _REG method is present in the device's scope 354 - * for the given address space ID, run it. 355 - */ 356 - 357 - acpi_ev_execute_orphan_reg_method(node, space_id); 358 - } else { 359 - status = AE_BAD_PARAMETER; 360 - } 361 - 362 - (void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE); 363 - return_ACPI_STATUS(status); 364 - } 365 - 366 - ACPI_EXPORT_SYMBOL(acpi_execute_orphan_reg_method)
+9 -5
drivers/acpi/ec.c
··· 1487 1487 static int ec_install_handlers(struct acpi_ec *ec, struct acpi_device *device, 1488 1488 bool call_reg) 1489 1489 { 1490 - acpi_handle scope_handle = ec == first_ec ? ACPI_ROOT_OBJECT : ec->handle; 1491 1490 acpi_status status; 1492 1491 1493 1492 acpi_ec_start(ec, false); 1494 1493 1495 1494 if (!test_bit(EC_FLAGS_EC_HANDLER_INSTALLED, &ec->flags)) { 1495 + acpi_handle scope_handle = ec == first_ec ? ACPI_ROOT_OBJECT : ec->handle; 1496 + 1496 1497 acpi_ec_enter_noirq(ec); 1497 1498 status = acpi_install_address_space_handler_no_reg(scope_handle, 1498 1499 ACPI_ADR_SPACE_EC, ··· 1507 1506 } 1508 1507 1509 1508 if (call_reg && !test_bit(EC_FLAGS_EC_REG_CALLED, &ec->flags)) { 1510 - acpi_execute_reg_methods(scope_handle, ACPI_ADR_SPACE_EC); 1511 - if (scope_handle != ec->handle) 1512 - acpi_execute_orphan_reg_method(ec->handle, ACPI_ADR_SPACE_EC); 1513 - 1509 + acpi_execute_reg_methods(ec->handle, ACPI_UINT32_MAX, ACPI_ADR_SPACE_EC); 1514 1510 set_bit(EC_FLAGS_EC_REG_CALLED, &ec->flags); 1515 1511 } 1516 1512 ··· 1720 1722 ec_remove_handlers(ec); 1721 1723 acpi_ec_free(ec); 1722 1724 } 1725 + } 1726 + 1727 + void acpi_ec_register_opregions(struct acpi_device *adev) 1728 + { 1729 + if (first_ec && first_ec->handle != adev->handle) 1730 + acpi_execute_reg_methods(adev->handle, 1, ACPI_ADR_SPACE_EC); 1723 1731 } 1724 1732 1725 1733 static acpi_status
+1
drivers/acpi/internal.h
··· 223 223 acpi_handle handle, acpi_ec_query_func func, 224 224 void *data); 225 225 void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit); 226 + void acpi_ec_register_opregions(struct acpi_device *adev); 226 227 227 228 #ifdef CONFIG_PM_SLEEP 228 229 void acpi_ec_flush_work(void);
+2
drivers/acpi/scan.c
··· 2273 2273 if (device->handler) 2274 2274 goto ok; 2275 2275 2276 + acpi_ec_register_opregions(device); 2277 + 2276 2278 if (!device->flags.initialized) { 2277 2279 device->flags.power_manageable = 2278 2280 device->power.states[ACPI_STATE_D0].flags.valid;
+13 -2
drivers/ata/libata-scsi.c
··· 951 951 &sense_key, &asc, &ascq); 952 952 ata_scsi_set_sense(qc->dev, cmd, sense_key, asc, ascq); 953 953 } else { 954 - /* ATA PASS-THROUGH INFORMATION AVAILABLE */ 955 - ata_scsi_set_sense(qc->dev, cmd, RECOVERED_ERROR, 0, 0x1D); 954 + /* 955 + * ATA PASS-THROUGH INFORMATION AVAILABLE 956 + * 957 + * Note: we are supposed to call ata_scsi_set_sense(), which 958 + * respects the D_SENSE bit, instead of unconditionally 959 + * generating the sense data in descriptor format. However, 960 + * because hdparm, hddtemp, and udisks incorrectly assume sense 961 + * data in descriptor format, without even looking at the 962 + * RESPONSE CODE field in the returned sense data (to see which 963 + * format the returned sense data is in), we are stuck with 964 + * being bug compatible with older kernels. 965 + */ 966 + scsi_build_sense(cmd, 1, RECOVERED_ERROR, 0, 0x1D); 956 967 } 957 968 } 958 969
+5 -4
drivers/atm/idt77252.c
··· 1118 1118 rpp->len += skb->len; 1119 1119 1120 1120 if (stat & SAR_RSQE_EPDU) { 1121 + unsigned int len, truesize; 1121 1122 unsigned char *l1l2; 1122 - unsigned int len; 1123 1123 1124 1124 l1l2 = (unsigned char *) ((unsigned long) skb->data + skb->len - 6); 1125 1125 ··· 1189 1189 ATM_SKB(skb)->vcc = vcc; 1190 1190 __net_timestamp(skb); 1191 1191 1192 + truesize = skb->truesize; 1192 1193 vcc->push(vcc, skb); 1193 1194 atomic_inc(&vcc->stats->rx); 1194 1195 1195 - if (skb->truesize > SAR_FB_SIZE_3) 1196 + if (truesize > SAR_FB_SIZE_3) 1196 1197 add_rx_skb(card, 3, SAR_FB_SIZE_3, 1); 1197 - else if (skb->truesize > SAR_FB_SIZE_2) 1198 + else if (truesize > SAR_FB_SIZE_2) 1198 1199 add_rx_skb(card, 2, SAR_FB_SIZE_2, 1); 1199 - else if (skb->truesize > SAR_FB_SIZE_1) 1200 + else if (truesize > SAR_FB_SIZE_1) 1200 1201 add_rx_skb(card, 1, SAR_FB_SIZE_1, 1); 1201 1202 else 1202 1203 add_rx_skb(card, 0, SAR_FB_SIZE_0, 1);
+34 -8
drivers/char/xillybus/xillyusb.c
··· 50 50 static const char xillyname[] = "xillyusb"; 51 51 52 52 static unsigned int fifo_buf_order; 53 + static struct workqueue_struct *wakeup_wq; 53 54 54 55 #define USB_VENDOR_ID_XILINX 0x03fd 55 56 #define USB_VENDOR_ID_ALTERA 0x09fb ··· 570 569 * errors if executed. The mechanism relies on that xdev->error is assigned 571 570 * a non-zero value by report_io_error() prior to queueing wakeup_all(), 572 571 * which prevents bulk_in_work() from calling process_bulk_in(). 573 - * 574 - * The fact that wakeup_all() and bulk_in_work() are queued on the same 575 - * workqueue makes their concurrent execution very unlikely, however the 576 - * kernel's API doesn't seem to ensure this strictly. 577 572 */ 578 573 579 574 static void wakeup_all(struct work_struct *work) ··· 624 627 625 628 if (do_once) { 626 629 kref_get(&xdev->kref); /* xdev is used by work item */ 627 - queue_work(xdev->workq, &xdev->wakeup_workitem); 630 + queue_work(wakeup_wq, &xdev->wakeup_workitem); 628 631 } 629 632 } 630 633 ··· 1903 1906 1904 1907 static int xillyusb_setup_base_eps(struct xillyusb_dev *xdev) 1905 1908 { 1909 + struct usb_device *udev = xdev->udev; 1910 + 1911 + /* Verify that device has the two fundamental bulk in/out endpoints */ 1912 + if (usb_pipe_type_check(udev, usb_sndbulkpipe(udev, MSG_EP_NUM)) || 1913 + usb_pipe_type_check(udev, usb_rcvbulkpipe(udev, IN_EP_NUM))) 1914 + return -ENODEV; 1915 + 1906 1916 xdev->msg_ep = endpoint_alloc(xdev, MSG_EP_NUM | USB_DIR_OUT, 1907 1917 bulk_out_work, 1, 2); 1908 1918 if (!xdev->msg_ep) ··· 1939 1935 __le16 *chandesc, 1940 1936 int num_channels) 1941 1937 { 1942 - struct xillyusb_channel *chan; 1938 + struct usb_device *udev = xdev->udev; 1939 + struct xillyusb_channel *chan, *new_channels; 1943 1940 int i; 1944 1941 1945 1942 chan = kcalloc(num_channels, sizeof(*chan), GFP_KERNEL); 1946 1943 if (!chan) 1947 1944 return -ENOMEM; 1948 1945 1949 - xdev->channels = chan; 1946 + new_channels = chan; 1950 1947 1951 1948 for (i = 0; i < num_channels; i++, chan++) { 1952 1949 unsigned int in_desc = le16_to_cpu(*chandesc++); ··· 1976 1971 */ 1977 1972 1978 1973 if ((out_desc & 0x80) && i < 14) { /* Entry is valid */ 1974 + if (usb_pipe_type_check(udev, 1975 + usb_sndbulkpipe(udev, i + 2))) { 1976 + dev_err(xdev->dev, 1977 + "Missing BULK OUT endpoint %d\n", 1978 + i + 2); 1979 + kfree(new_channels); 1980 + return -ENODEV; 1981 + } 1982 + 1979 1983 chan->writable = 1; 1980 1984 chan->out_synchronous = !!(out_desc & 0x40); 1981 1985 chan->out_seekable = !!(out_desc & 0x20); ··· 1994 1980 } 1995 1981 } 1996 1982 1983 + xdev->channels = new_channels; 1997 1984 return 0; 1998 1985 } 1999 1986 ··· 2111 2096 * just after responding with the IDT, there is no reason for any 2112 2097 * work item to be running now. To be sure that xdev->channels 2113 2098 * is updated on anything that might run in parallel, flush the 2114 - * workqueue, which rarely does anything. 2099 + * device's workqueue and the wakeup work item. This rarely 2100 + * does anything. 
2115 2101 */ 2116 2102 flush_workqueue(xdev->workq); 2103 + flush_work(&xdev->wakeup_workitem); 2117 2104 2118 2105 xdev->num_channels = num_channels; 2119 2106 ··· 2275 2258 { 2276 2259 int rc = 0; 2277 2260 2261 + wakeup_wq = alloc_workqueue(xillyname, 0, 0); 2262 + if (!wakeup_wq) 2263 + return -ENOMEM; 2264 + 2278 2265 if (LOG2_INITIAL_FIFO_BUF_SIZE > PAGE_SHIFT) 2279 2266 fifo_buf_order = LOG2_INITIAL_FIFO_BUF_SIZE - PAGE_SHIFT; 2280 2267 else ··· 2286 2265 2287 2266 rc = usb_register(&xillyusb_driver); 2288 2267 2268 + if (rc) 2269 + destroy_workqueue(wakeup_wq); 2270 + 2289 2271 return rc; 2290 2272 } 2291 2273 2292 2274 static void __exit xillyusb_exit(void) 2293 2275 { 2294 2276 usb_deregister(&xillyusb_driver); 2277 + 2278 + destroy_workqueue(wakeup_wq); 2295 2279 } 2296 2280 2297 2281 module_init(xillyusb_init);
+1 -1
drivers/clk/thead/clk-th1520-ap.c
··· 738 738 .hw.init = CLK_HW_INIT_PARENTS_HW("vp-axi", 739 739 video_pll_clk_parent, 740 740 &ccu_div_ops, 741 - 0), 741 + CLK_IGNORE_UNUSED), 742 742 }, 743 743 }; 744 744
+14
drivers/gpio/gpio-mlxbf3.c
··· 39 39 #define MLXBF_GPIO_CAUSE_OR_EVTEN0 0x14 40 40 #define MLXBF_GPIO_CAUSE_OR_CLRCAUSE 0x18 41 41 42 + #define MLXBF_GPIO_CLR_ALL_INTS GENMASK(31, 0) 43 + 42 44 struct mlxbf3_gpio_context { 43 45 struct gpio_chip gc; 44 46 ··· 84 82 val = readl(gs->gpio_cause_io + MLXBF_GPIO_CAUSE_OR_EVTEN0); 85 83 val &= ~BIT(offset); 86 84 writel(val, gs->gpio_cause_io + MLXBF_GPIO_CAUSE_OR_EVTEN0); 85 + 86 + writel(BIT(offset), gs->gpio_cause_io + MLXBF_GPIO_CAUSE_OR_CLRCAUSE); 87 87 raw_spin_unlock_irqrestore(&gs->gc.bgpio_lock, flags); 88 88 89 89 gpiochip_disable_irq(gc, offset); ··· 257 253 return 0; 258 254 } 259 255 256 + static void mlxbf3_gpio_shutdown(struct platform_device *pdev) 257 + { 258 + struct mlxbf3_gpio_context *gs = platform_get_drvdata(pdev); 259 + 260 + /* Disable and clear all interrupts */ 261 + writel(0, gs->gpio_cause_io + MLXBF_GPIO_CAUSE_OR_EVTEN0); 262 + writel(MLXBF_GPIO_CLR_ALL_INTS, gs->gpio_cause_io + MLXBF_GPIO_CAUSE_OR_CLRCAUSE); 263 + } 264 + 260 265 static const struct acpi_device_id mlxbf3_gpio_acpi_match[] = { 261 266 { "MLNXBF33", 0 }, 262 267 {} ··· 278 265 .acpi_match_table = mlxbf3_gpio_acpi_match, 279 266 }, 280 267 .probe = mlxbf3_gpio_probe, 268 + .shutdown = mlxbf3_gpio_shutdown, 281 269 }; 282 270 module_platform_driver(mlxbf3_gpio_driver); 283 271
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
··· 1057 1057 r = amdgpu_ring_parse_cs(ring, p, job, ib); 1058 1058 if (r) 1059 1059 return r; 1060 + 1061 + if (ib->sa_bo) 1062 + ib->gpu_addr = amdgpu_sa_bo_gpu_addr(ib->sa_bo); 1060 1063 } else { 1061 1064 ib->ptr = (uint32_t *)kptr; 1062 1065 r = amdgpu_ring_patch_cs_in_place(ring, p, job, ib);
+8
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
··· 685 685 686 686 switch (args->in.op) { 687 687 case AMDGPU_CTX_OP_ALLOC_CTX: 688 + if (args->in.flags) 689 + return -EINVAL; 688 690 r = amdgpu_ctx_alloc(adev, fpriv, filp, priority, &id); 689 691 args->out.alloc.ctx_id = id; 690 692 break; 691 693 case AMDGPU_CTX_OP_FREE_CTX: 694 + if (args->in.flags) 695 + return -EINVAL; 692 696 r = amdgpu_ctx_free(fpriv, id); 693 697 break; 694 698 case AMDGPU_CTX_OP_QUERY_STATE: 699 + if (args->in.flags) 700 + return -EINVAL; 695 701 r = amdgpu_ctx_query(adev, fpriv, id, &args->out); 696 702 break; 697 703 case AMDGPU_CTX_OP_QUERY_STATE2: 704 + if (args->in.flags) 705 + return -EINVAL; 698 706 r = amdgpu_ctx_query2(adev, fpriv, id, &args->out); 699 707 break; 700 708 case AMDGPU_CTX_OP_GET_STABLE_PSTATE:
+24 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
··· 509 509 int i, r = 0; 510 510 int j; 511 511 512 + if (adev->enable_mes) { 513 + for (i = 0; i < adev->gfx.num_compute_rings; i++) { 514 + j = i + xcc_id * adev->gfx.num_compute_rings; 515 + amdgpu_mes_unmap_legacy_queue(adev, 516 + &adev->gfx.compute_ring[j], 517 + RESET_QUEUES, 0, 0); 518 + } 519 + return 0; 520 + } 521 + 512 522 if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues) 513 523 return -EINVAL; 514 524 ··· 560 550 struct amdgpu_ring *kiq_ring = &kiq->ring; 561 551 int i, r = 0; 562 552 int j; 553 + 554 + if (adev->enable_mes) { 555 + if (amdgpu_gfx_is_master_xcc(adev, xcc_id)) { 556 + for (i = 0; i < adev->gfx.num_gfx_rings; i++) { 557 + j = i + xcc_id * adev->gfx.num_gfx_rings; 558 + amdgpu_mes_unmap_legacy_queue(adev, 559 + &adev->gfx.gfx_ring[j], 560 + PREEMPT_QUEUES, 0, 0); 561 + } 562 + } 563 + return 0; 564 + } 563 565 564 566 if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues) 565 567 return -EINVAL; ··· 1017 995 if (amdgpu_device_skip_hw_access(adev)) 1018 996 return 0; 1019 997 1020 - if (adev->mes.ring.sched.ready) 998 + if (adev->mes.ring[0].sched.ready) 1021 999 return amdgpu_mes_rreg(adev, reg); 1022 1000 1023 1001 BUG_ON(!ring->funcs->emit_rreg); ··· 1087 1065 if (amdgpu_device_skip_hw_access(adev)) 1088 1066 return; 1089 1067 1090 - if (adev->mes.ring.sched.ready) { 1068 + if (adev->mes.ring[0].sched.ready) { 1091 1069 amdgpu_mes_wreg(adev, reg, v); 1092 1070 return; 1093 1071 }
+3 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
··· 589 589 ring = adev->rings[i]; 590 590 vmhub = ring->vm_hub; 591 591 592 - if (ring == &adev->mes.ring || 592 + if (ring == &adev->mes.ring[0] || 593 + ring == &adev->mes.ring[1] || 593 594 ring == &adev->umsch_mm.ring) 594 595 continue; 595 596 ··· 762 761 unsigned long flags; 763 762 uint32_t seq; 764 763 765 - if (adev->mes.ring.sched.ready) { 764 + if (adev->mes.ring[0].sched.ready) { 766 765 amdgpu_mes_reg_write_reg_wait(adev, reg0, reg1, 767 766 ref, mask); 768 767 return;
+51 -32
drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
··· 135 135 idr_init(&adev->mes.queue_id_idr); 136 136 ida_init(&adev->mes.doorbell_ida); 137 137 spin_lock_init(&adev->mes.queue_id_lock); 138 - spin_lock_init(&adev->mes.ring_lock); 139 138 mutex_init(&adev->mes.mutex_hidden); 139 + 140 + for (i = 0; i < AMDGPU_MAX_MES_PIPES; i++) 141 + spin_lock_init(&adev->mes.ring_lock[i]); 140 142 141 143 adev->mes.total_max_queue = AMDGPU_FENCE_MES_QUEUE_ID_MASK; 142 144 adev->mes.vmid_mask_mmhub = 0xffffff00; ··· 165 163 adev->mes.sdma_hqd_mask[i] = 0xfc; 166 164 } 167 165 168 - r = amdgpu_device_wb_get(adev, &adev->mes.sch_ctx_offs); 169 - if (r) { 170 - dev_err(adev->dev, 171 - "(%d) ring trail_fence_offs wb alloc failed\n", r); 172 - goto error_ids; 173 - } 174 - adev->mes.sch_ctx_gpu_addr = 175 - adev->wb.gpu_addr + (adev->mes.sch_ctx_offs * 4); 176 - adev->mes.sch_ctx_ptr = 177 - (uint64_t *)&adev->wb.wb[adev->mes.sch_ctx_offs]; 166 + for (i = 0; i < AMDGPU_MAX_MES_PIPES; i++) { 167 + r = amdgpu_device_wb_get(adev, &adev->mes.sch_ctx_offs[i]); 168 + if (r) { 169 + dev_err(adev->dev, 170 + "(%d) ring trail_fence_offs wb alloc failed\n", 171 + r); 172 + goto error; 173 + } 174 + adev->mes.sch_ctx_gpu_addr[i] = 175 + adev->wb.gpu_addr + (adev->mes.sch_ctx_offs[i] * 4); 176 + adev->mes.sch_ctx_ptr[i] = 177 + (uint64_t *)&adev->wb.wb[adev->mes.sch_ctx_offs[i]]; 178 178 179 - r = amdgpu_device_wb_get(adev, &adev->mes.query_status_fence_offs); 180 - if (r) { 181 - amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs); 182 - dev_err(adev->dev, 183 - "(%d) query_status_fence_offs wb alloc failed\n", r); 184 - goto error_ids; 179 + r = amdgpu_device_wb_get(adev, 180 + &adev->mes.query_status_fence_offs[i]); 181 + if (r) { 182 + dev_err(adev->dev, 183 + "(%d) query_status_fence_offs wb alloc failed\n", 184 + r); 185 + goto error; 186 + } 187 + adev->mes.query_status_fence_gpu_addr[i] = adev->wb.gpu_addr + 188 + (adev->mes.query_status_fence_offs[i] * 4); 189 + adev->mes.query_status_fence_ptr[i] = 190 + (uint64_t *)&adev->wb.wb[adev->mes.query_status_fence_offs[i]]; 185 191 } 186 - adev->mes.query_status_fence_gpu_addr = 187 - adev->wb.gpu_addr + (adev->mes.query_status_fence_offs * 4); 188 - adev->mes.query_status_fence_ptr = 189 - (uint64_t *)&adev->wb.wb[adev->mes.query_status_fence_offs]; 190 192 191 193 r = amdgpu_device_wb_get(adev, &adev->mes.read_val_offs); 192 194 if (r) { 193 - amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs); 194 - amdgpu_device_wb_free(adev, adev->mes.query_status_fence_offs); 195 195 dev_err(adev->dev, 196 196 "(%d) read_val_offs alloc failed\n", r); 197 - goto error_ids; 197 + goto error; 198 198 } 199 199 adev->mes.read_val_gpu_addr = 200 200 adev->wb.gpu_addr + (adev->mes.read_val_offs * 4); ··· 216 212 error_doorbell: 217 213 amdgpu_mes_doorbell_free(adev); 218 214 error: 219 - amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs); 220 - amdgpu_device_wb_free(adev, adev->mes.query_status_fence_offs); 221 - amdgpu_device_wb_free(adev, adev->mes.read_val_offs); 222 - error_ids: 215 + for (i = 0; i < AMDGPU_MAX_MES_PIPES; i++) { 216 + if (adev->mes.sch_ctx_ptr[i]) 217 + amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs[i]); 218 + if (adev->mes.query_status_fence_ptr[i]) 219 + amdgpu_device_wb_free(adev, 220 + adev->mes.query_status_fence_offs[i]); 221 + } 222 + if (adev->mes.read_val_ptr) 223 + amdgpu_device_wb_free(adev, adev->mes.read_val_offs); 224 + 223 225 idr_destroy(&adev->mes.pasid_idr); 224 226 idr_destroy(&adev->mes.gang_id_idr); 225 227 idr_destroy(&adev->mes.queue_id_idr); ··· 236 226 237 227 void 
amdgpu_mes_fini(struct amdgpu_device *adev) 238 228 { 229 + int i; 230 + 239 231 amdgpu_bo_free_kernel(&adev->mes.event_log_gpu_obj, 240 232 &adev->mes.event_log_gpu_addr, 241 233 &adev->mes.event_log_cpu_addr); 242 234 243 - amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs); 244 - amdgpu_device_wb_free(adev, adev->mes.query_status_fence_offs); 245 - amdgpu_device_wb_free(adev, adev->mes.read_val_offs); 235 + for (i = 0; i < AMDGPU_MAX_MES_PIPES; i++) { 236 + if (adev->mes.sch_ctx_ptr[i]) 237 + amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs[i]); 238 + if (adev->mes.query_status_fence_ptr[i]) 239 + amdgpu_device_wb_free(adev, 240 + adev->mes.query_status_fence_offs[i]); 241 + } 242 + if (adev->mes.read_val_ptr) 243 + amdgpu_device_wb_free(adev, adev->mes.read_val_offs); 244 + 246 245 amdgpu_mes_doorbell_free(adev); 247 246 248 247 idr_destroy(&adev->mes.pasid_idr); ··· 1518 1499 1519 1500 amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, 1520 1501 sizeof(ucode_prefix)); 1521 - if (adev->enable_uni_mes && pipe == AMDGPU_MES_SCHED_PIPE) { 1502 + if (adev->enable_uni_mes) { 1522 1503 snprintf(fw_name, sizeof(fw_name), 1523 1504 "amdgpu/%s_uni_mes.bin", ucode_prefix); 1524 1505 } else if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(11, 0, 0) &&
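The amdgpu_mes init/fini hunks above turn the scheduler-context and query-status writeback slots into per-pipe arrays and, on failure, free only the slots whose pointers were actually set; the same non-NULL checks let amdgpu_mes_fini() share the unwind logic. Condensed to its shape (a sketch of the pattern, not the full function; it relies on the pointer array starting out zeroed, as the embedded adev->mes struct does):

    static int example_alloc_per_pipe_wb(struct amdgpu_device *adev)
    {
        int i, r;

        for (i = 0; i < AMDGPU_MAX_MES_PIPES; i++) {
            r = amdgpu_device_wb_get(adev, &adev->mes.sch_ctx_offs[i]);
            if (r)
                goto error;
            adev->mes.sch_ctx_ptr[i] =
                (uint64_t *)&adev->wb.wb[adev->mes.sch_ctx_offs[i]];
        }
        return 0;

    error:
        /* Free only what was allocated; unset entries are still NULL */
        for (i = 0; i < AMDGPU_MAX_MES_PIPES; i++)
            if (adev->mes.sch_ctx_ptr[i])
                amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs[i]);
        return r;
    }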
+8 -8
drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
··· 82 82 uint64_t default_process_quantum; 83 83 uint64_t default_gang_quantum; 84 84 85 - struct amdgpu_ring ring; 86 - spinlock_t ring_lock; 85 + struct amdgpu_ring ring[AMDGPU_MAX_MES_PIPES]; 86 + spinlock_t ring_lock[AMDGPU_MAX_MES_PIPES]; 87 87 88 88 const struct firmware *fw[AMDGPU_MAX_MES_PIPES]; 89 89 ··· 112 112 uint32_t gfx_hqd_mask[AMDGPU_MES_MAX_GFX_PIPES]; 113 113 uint32_t sdma_hqd_mask[AMDGPU_MES_MAX_SDMA_PIPES]; 114 114 uint32_t aggregated_doorbells[AMDGPU_MES_PRIORITY_NUM_LEVELS]; 115 - uint32_t sch_ctx_offs; 116 - uint64_t sch_ctx_gpu_addr; 117 - uint64_t *sch_ctx_ptr; 118 - uint32_t query_status_fence_offs; 119 - uint64_t query_status_fence_gpu_addr; 120 - uint64_t *query_status_fence_ptr; 115 + uint32_t sch_ctx_offs[AMDGPU_MAX_MES_PIPES]; 116 + uint64_t sch_ctx_gpu_addr[AMDGPU_MAX_MES_PIPES]; 117 + uint64_t *sch_ctx_ptr[AMDGPU_MAX_MES_PIPES]; 118 + uint32_t query_status_fence_offs[AMDGPU_MAX_MES_PIPES]; 119 + uint64_t query_status_fence_gpu_addr[AMDGPU_MAX_MES_PIPES]; 120 + uint64_t *query_status_fence_ptr[AMDGPU_MAX_MES_PIPES]; 121 121 uint32_t read_val_offs; 122 122 uint64_t read_val_gpu_addr; 123 123 uint32_t *read_val_ptr;
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
··· 212 212 */ 213 213 if (ring->funcs->type == AMDGPU_RING_TYPE_KIQ) 214 214 sched_hw_submission = max(sched_hw_submission, 256); 215 + if (ring->funcs->type == AMDGPU_RING_TYPE_MES) 216 + sched_hw_submission = 8; 215 217 else if (ring == &adev->sdma.instance[0].page) 216 218 sched_hw_submission = 256; 217 219
+4 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
··· 461 461 struct amdgpu_fw_shared_unified_queue_struct sq; 462 462 uint8_t pad1[8]; 463 463 struct amdgpu_fw_shared_fw_logging fw_log; 464 + uint8_t pad2[20]; 464 465 struct amdgpu_fw_shared_rb_setup rb_setup; 465 - uint8_t pad2[4]; 466 + struct amdgpu_fw_shared_smu_interface_info smu_dpm_interface; 467 + struct amdgpu_fw_shared_drm_key_wa drm_key_wa; 468 + uint8_t pad3[9]; 466 469 }; 467 470 468 471 #define VCN_BLOCK_ENCODE_DISABLE_MASK 0x80
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
··· 858 858 adev->gfx.is_poweron = false; 859 859 } 860 860 861 - adev->mes.ring.sched.ready = false; 861 + adev->mes.ring[0].sched.ready = false; 862 862 } 863 863 864 864 bool amdgpu_virt_fw_load_skip_check(struct amdgpu_device *adev, uint32_t ucode_id)
+1 -26
drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
··· 3546 3546 return r; 3547 3547 } 3548 3548 3549 - static int gfx_v12_0_kiq_disable_kgq(struct amdgpu_device *adev) 3550 - { 3551 - struct amdgpu_kiq *kiq = &adev->gfx.kiq[0]; 3552 - struct amdgpu_ring *kiq_ring = &kiq->ring; 3553 - int i, r = 0; 3554 - 3555 - if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues) 3556 - return -EINVAL; 3557 - 3558 - if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size * 3559 - adev->gfx.num_gfx_rings)) 3560 - return -ENOMEM; 3561 - 3562 - for (i = 0; i < adev->gfx.num_gfx_rings; i++) 3563 - kiq->pmf->kiq_unmap_queues(kiq_ring, &adev->gfx.gfx_ring[i], 3564 - PREEMPT_QUEUES, 0, 0); 3565 - 3566 - if (adev->gfx.kiq[0].ring.sched.ready) 3567 - r = amdgpu_ring_test_helper(kiq_ring); 3568 - 3569 - return r; 3570 - } 3571 - 3572 3549 static int gfx_v12_0_hw_fini(void *handle) 3573 3550 { 3574 3551 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 3575 - int r; 3576 3552 uint32_t tmp; 3577 3553 3578 3554 amdgpu_irq_put(adev, &adev->gfx.priv_reg_irq, 0); ··· 3556 3580 3557 3581 if (!adev->no_hw_access) { 3558 3582 if (amdgpu_async_gfx_ring) { 3559 - r = gfx_v12_0_kiq_disable_kgq(adev); 3560 - if (r) 3583 + if (amdgpu_gfx_disable_kgq(adev, 0)) 3561 3584 DRM_ERROR("KGQ disable failed\n"); 3562 3585 } 3563 3586
+1 -1
drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
··· 231 231 /* This is necessary for SRIOV as well as for GFXOFF to function 232 232 * properly under bare metal 233 233 */ 234 - if ((adev->gfx.kiq[0].ring.sched.ready || adev->mes.ring.sched.ready) && 234 + if ((adev->gfx.kiq[0].ring.sched.ready || adev->mes.ring[0].sched.ready) && 235 235 (amdgpu_sriov_runtime(adev) || !amdgpu_sriov_vf(adev))) { 236 236 amdgpu_gmc_fw_reg_write_reg_wait(adev, req, ack, inv_req, 237 237 1 << vmid, GET_INST(GC, 0));
+1 -1
drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
··· 299 299 /* This is necessary for SRIOV as well as for GFXOFF to function 300 300 * properly under bare metal 301 301 */ 302 - if ((adev->gfx.kiq[0].ring.sched.ready || adev->mes.ring.sched.ready) && 302 + if ((adev->gfx.kiq[0].ring.sched.ready || adev->mes.ring[0].sched.ready) && 303 303 (amdgpu_sriov_runtime(adev) || !amdgpu_sriov_vf(adev))) { 304 304 struct amdgpu_vmhub *hub = &adev->vmhub[vmhub]; 305 305 const unsigned eng = 17;
+2 -2
drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
··· 538 538 539 539 amdgpu_ring_write(ring, PACKETJ(mmUVD_LMI_JRBC_IB_VMID_INTERNAL_OFFSET, 540 540 0, 0, PACKETJ_TYPE0)); 541 - amdgpu_ring_write(ring, (vmid | (vmid << 4))); 541 + amdgpu_ring_write(ring, (vmid | (vmid << 4) | (vmid << 8))); 542 542 543 543 amdgpu_ring_write(ring, PACKETJ(mmUVD_LMI_JPEG_VMID_INTERNAL_OFFSET, 544 544 0, 0, PACKETJ_TYPE0)); 545 - amdgpu_ring_write(ring, (vmid | (vmid << 4))); 545 + amdgpu_ring_write(ring, (vmid | (vmid << 4) | (vmid << 8))); 546 546 547 547 amdgpu_ring_write(ring, PACKETJ(mmUVD_LMI_JRBC_IB_64BIT_BAR_LOW_INTERNAL_OFFSET, 548 548 0, 0, PACKETJ_TYPE0));
+61 -2
drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
··· 23 23 24 24 #include "amdgpu.h" 25 25 #include "amdgpu_jpeg.h" 26 + #include "amdgpu_cs.h" 26 27 #include "soc15.h" 27 28 #include "soc15d.h" 28 29 #include "jpeg_v4_0_3.h" ··· 783 782 784 783 amdgpu_ring_write(ring, PACKETJ(regUVD_LMI_JRBC_IB_VMID_INTERNAL_OFFSET, 785 784 0, 0, PACKETJ_TYPE0)); 786 - amdgpu_ring_write(ring, (vmid | (vmid << 4))); 785 + 786 + if (ring->funcs->parse_cs) 787 + amdgpu_ring_write(ring, 0); 788 + else 789 + amdgpu_ring_write(ring, (vmid | (vmid << 4) | (vmid << 8))); 787 790 788 791 amdgpu_ring_write(ring, PACKETJ(regUVD_LMI_JPEG_VMID_INTERNAL_OFFSET, 789 792 0, 0, PACKETJ_TYPE0)); 790 - amdgpu_ring_write(ring, (vmid | (vmid << 4))); 793 + amdgpu_ring_write(ring, (vmid | (vmid << 4) | (vmid << 8))); 791 794 792 795 amdgpu_ring_write(ring, PACKETJ(regUVD_LMI_JRBC_IB_64BIT_BAR_LOW_INTERNAL_OFFSET, 793 796 0, 0, PACKETJ_TYPE0)); ··· 1089 1084 .get_rptr = jpeg_v4_0_3_dec_ring_get_rptr, 1090 1085 .get_wptr = jpeg_v4_0_3_dec_ring_get_wptr, 1091 1086 .set_wptr = jpeg_v4_0_3_dec_ring_set_wptr, 1087 + .parse_cs = jpeg_v4_0_3_dec_ring_parse_cs, 1092 1088 .emit_frame_size = 1093 1089 SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 + 1094 1090 SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 8 + ··· 1253 1247 static void jpeg_v4_0_3_set_ras_funcs(struct amdgpu_device *adev) 1254 1248 { 1255 1249 adev->jpeg.ras = &jpeg_v4_0_3_ras; 1250 + } 1251 + 1252 + /** 1253 + * jpeg_v4_0_3_dec_ring_parse_cs - command submission parser 1254 + * 1255 + * @parser: Command submission parser context 1256 + * @job: the job to parse 1257 + * @ib: the IB to parse 1258 + * 1259 + * Parse the command stream, return -EINVAL for invalid packet, 1260 + * 0 otherwise 1261 + */ 1262 + int jpeg_v4_0_3_dec_ring_parse_cs(struct amdgpu_cs_parser *parser, 1263 + struct amdgpu_job *job, 1264 + struct amdgpu_ib *ib) 1265 + { 1266 + uint32_t i, reg, res, cond, type; 1267 + struct amdgpu_device *adev = parser->adev; 1268 + 1269 + for (i = 0; i < ib->length_dw ; i += 2) { 1270 + reg = CP_PACKETJ_GET_REG(ib->ptr[i]); 1271 + res = CP_PACKETJ_GET_RES(ib->ptr[i]); 1272 + cond = CP_PACKETJ_GET_COND(ib->ptr[i]); 1273 + type = CP_PACKETJ_GET_TYPE(ib->ptr[i]); 1274 + 1275 + if (res) /* only support 0 at the moment */ 1276 + return -EINVAL; 1277 + 1278 + switch (type) { 1279 + case PACKETJ_TYPE0: 1280 + if (cond != PACKETJ_CONDITION_CHECK0 || reg < JPEG_REG_RANGE_START || reg > JPEG_REG_RANGE_END) { 1281 + dev_err(adev->dev, "Invalid packet [0x%08x]!\n", ib->ptr[i]); 1282 + return -EINVAL; 1283 + } 1284 + break; 1285 + case PACKETJ_TYPE3: 1286 + if (cond != PACKETJ_CONDITION_CHECK3 || reg < JPEG_REG_RANGE_START || reg > JPEG_REG_RANGE_END) { 1287 + dev_err(adev->dev, "Invalid packet [0x%08x]!\n", ib->ptr[i]); 1288 + return -EINVAL; 1289 + } 1290 + break; 1291 + case PACKETJ_TYPE6: 1292 + if (ib->ptr[i] == CP_PACKETJ_NOP) 1293 + continue; 1294 + dev_err(adev->dev, "Invalid packet [0x%08x]!\n", ib->ptr[i]); 1295 + return -EINVAL; 1296 + default: 1297 + dev_err(adev->dev, "Unknown packet type %d !\n", type); 1298 + return -EINVAL; 1299 + } 1300 + } 1301 + 1302 + return 0; 1256 1303 }
+6 -1
drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.h
··· 46 46 47 47 #define JRBC_DEC_EXTERNAL_REG_WRITE_ADDR 0x18000 48 48 49 + #define JPEG_REG_RANGE_START 0x4000 50 + #define JPEG_REG_RANGE_END 0x41c2 51 + 49 52 extern const struct amdgpu_ip_block_version jpeg_v4_0_3_ip_block; 50 53 51 54 void jpeg_v4_0_3_dec_ring_emit_ib(struct amdgpu_ring *ring, ··· 65 62 void jpeg_v4_0_3_dec_ring_emit_wreg(struct amdgpu_ring *ring, uint32_t reg, uint32_t val); 66 63 void jpeg_v4_0_3_dec_ring_emit_reg_wait(struct amdgpu_ring *ring, uint32_t reg, 67 64 uint32_t val, uint32_t mask); 68 - 65 + int jpeg_v4_0_3_dec_ring_parse_cs(struct amdgpu_cs_parser *parser, 66 + struct amdgpu_job *job, 67 + struct amdgpu_ib *ib); 69 68 #endif /* __JPEG_V4_0_3_H__ */
+1
drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c
··· 646 646 .get_rptr = jpeg_v5_0_0_dec_ring_get_rptr, 647 647 .get_wptr = jpeg_v5_0_0_dec_ring_get_wptr, 648 648 .set_wptr = jpeg_v5_0_0_dec_ring_set_wptr, 649 + .parse_cs = jpeg_v4_0_3_dec_ring_parse_cs, 649 650 .emit_frame_size = 650 651 SOC15_FLUSH_GPU_TLB_NUM_WREG * 6 + 651 652 SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 8 +
+33 -26
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
··· 162 162 union MESAPI__QUERY_MES_STATUS mes_status_pkt; 163 163 signed long timeout = 3000000; /* 3000 ms */ 164 164 struct amdgpu_device *adev = mes->adev; 165 - struct amdgpu_ring *ring = &mes->ring; 165 + struct amdgpu_ring *ring = &mes->ring[0]; 166 166 struct MES_API_STATUS *api_status; 167 167 union MESAPI__MISC *x_pkt = pkt; 168 168 const char *op_str, *misc_op_str; 169 169 unsigned long flags; 170 170 u64 status_gpu_addr; 171 - u32 status_offset; 171 + u32 seq, status_offset; 172 172 u64 *status_ptr; 173 173 signed long r; 174 174 int ret; ··· 191 191 status_ptr = (u64 *)&adev->wb.wb[status_offset]; 192 192 *status_ptr = 0; 193 193 194 - spin_lock_irqsave(&mes->ring_lock, flags); 194 + spin_lock_irqsave(&mes->ring_lock[0], flags); 195 195 r = amdgpu_ring_alloc(ring, (size + sizeof(mes_status_pkt)) / 4); 196 196 if (r) 197 197 goto error_unlock_free; 198 + 199 + seq = ++ring->fence_drv.sync_seq; 200 + r = amdgpu_fence_wait_polling(ring, 201 + seq - ring->fence_drv.num_fences_mask, 202 + timeout); 203 + if (r < 1) 204 + goto error_undo; 198 205 199 206 api_status = (struct MES_API_STATUS *)((char *)pkt + api_status_off); 200 207 api_status->api_completion_fence_addr = status_gpu_addr; ··· 215 208 mes_status_pkt.header.dwsize = API_FRAME_SIZE_IN_DWORDS; 216 209 mes_status_pkt.api_status.api_completion_fence_addr = 217 210 ring->fence_drv.gpu_addr; 218 - mes_status_pkt.api_status.api_completion_fence_value = 219 - ++ring->fence_drv.sync_seq; 211 + mes_status_pkt.api_status.api_completion_fence_value = seq; 220 212 221 213 amdgpu_ring_write_multiple(ring, &mes_status_pkt, 222 214 sizeof(mes_status_pkt) / 4); 223 215 224 216 amdgpu_ring_commit(ring); 225 - spin_unlock_irqrestore(&mes->ring_lock, flags); 217 + spin_unlock_irqrestore(&mes->ring_lock[0], flags); 226 218 227 219 op_str = mes_v11_0_get_op_string(x_pkt); 228 220 misc_op_str = mes_v11_0_get_misc_op_string(x_pkt); ··· 235 229 dev_dbg(adev->dev, "MES msg=%d was emitted\n", 236 230 x_pkt->header.opcode); 237 231 238 - r = amdgpu_fence_wait_polling(ring, ring->fence_drv.sync_seq, timeout); 232 + r = amdgpu_fence_wait_polling(ring, seq, timeout); 239 233 if (r < 1 || !*status_ptr) { 240 234 241 235 if (misc_op_str) ··· 258 252 amdgpu_device_wb_free(adev, status_offset); 259 253 return 0; 260 254 255 + error_undo: 256 + dev_err(adev->dev, "MES ring buffer is full.\n"); 257 + amdgpu_ring_undo(ring); 258 + 261 259 error_unlock_free: 262 - spin_unlock_irqrestore(&mes->ring_lock, flags); 260 + spin_unlock_irqrestore(&mes->ring_lock[0], flags); 263 261 264 262 error_wb_free: 265 263 amdgpu_device_wb_free(adev, status_offset); ··· 522 512 mes_set_hw_res_pkt.vmid_mask_gfxhub = mes->vmid_mask_gfxhub; 523 513 mes_set_hw_res_pkt.gds_size = adev->gds.gds_size; 524 514 mes_set_hw_res_pkt.paging_vmid = 0; 525 - mes_set_hw_res_pkt.g_sch_ctx_gpu_mc_ptr = mes->sch_ctx_gpu_addr; 515 + mes_set_hw_res_pkt.g_sch_ctx_gpu_mc_ptr = mes->sch_ctx_gpu_addr[0]; 526 516 mes_set_hw_res_pkt.query_status_fence_gpu_mc_ptr = 527 - mes->query_status_fence_gpu_addr; 517 + mes->query_status_fence_gpu_addr[0]; 528 518 529 519 for (i = 0; i < MAX_COMPUTE_PIPES; i++) 530 520 mes_set_hw_res_pkt.compute_hqd_mask[i] = ··· 1025 1015 return r; 1026 1016 } 1027 1017 1028 - kiq->pmf->kiq_map_queues(kiq_ring, &adev->mes.ring); 1018 + kiq->pmf->kiq_map_queues(kiq_ring, &adev->mes.ring[0]); 1029 1019 1030 1020 return amdgpu_ring_test_helper(kiq_ring); 1031 1021 } ··· 1039 1029 if (pipe == AMDGPU_MES_KIQ_PIPE) 1040 1030 ring = &adev->gfx.kiq[0].ring; 1041 1031 else if (pipe == 
AMDGPU_MES_SCHED_PIPE) 1042 - ring = &adev->mes.ring; 1032 + ring = &adev->mes.ring[0]; 1043 1033 else 1044 1034 BUG(); 1045 1035 ··· 1081 1071 { 1082 1072 struct amdgpu_ring *ring; 1083 1073 1084 - ring = &adev->mes.ring; 1074 + ring = &adev->mes.ring[0]; 1085 1075 1086 1076 ring->funcs = &mes_v11_0_ring_funcs; 1087 1077 ··· 1134 1124 if (pipe == AMDGPU_MES_KIQ_PIPE) 1135 1125 ring = &adev->gfx.kiq[0].ring; 1136 1126 else if (pipe == AMDGPU_MES_SCHED_PIPE) 1137 - ring = &adev->mes.ring; 1127 + ring = &adev->mes.ring[0]; 1138 1128 else 1139 1129 BUG(); 1140 1130 ··· 1210 1200 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 1211 1201 int pipe; 1212 1202 1213 - amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs); 1214 - amdgpu_device_wb_free(adev, adev->mes.query_status_fence_offs); 1215 - 1216 1203 for (pipe = 0; pipe < AMDGPU_MAX_MES_PIPES; pipe++) { 1217 1204 kfree(adev->mes.mqd_backup[pipe]); 1218 1205 ··· 1223 1216 &adev->gfx.kiq[0].ring.mqd_gpu_addr, 1224 1217 &adev->gfx.kiq[0].ring.mqd_ptr); 1225 1218 1226 - amdgpu_bo_free_kernel(&adev->mes.ring.mqd_obj, 1227 - &adev->mes.ring.mqd_gpu_addr, 1228 - &adev->mes.ring.mqd_ptr); 1219 + amdgpu_bo_free_kernel(&adev->mes.ring[0].mqd_obj, 1220 + &adev->mes.ring[0].mqd_gpu_addr, 1221 + &adev->mes.ring[0].mqd_ptr); 1229 1222 1230 1223 amdgpu_ring_fini(&adev->gfx.kiq[0].ring); 1231 - amdgpu_ring_fini(&adev->mes.ring); 1224 + amdgpu_ring_fini(&adev->mes.ring[0]); 1232 1225 1233 1226 if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) { 1234 1227 mes_v11_0_free_ucode_buffers(adev, AMDGPU_MES_KIQ_PIPE); ··· 1339 1332 1340 1333 static int mes_v11_0_kiq_hw_fini(struct amdgpu_device *adev) 1341 1334 { 1342 - if (adev->mes.ring.sched.ready) { 1343 - mes_v11_0_kiq_dequeue(&adev->mes.ring); 1344 - adev->mes.ring.sched.ready = false; 1335 + if (adev->mes.ring[0].sched.ready) { 1336 + mes_v11_0_kiq_dequeue(&adev->mes.ring[0]); 1337 + adev->mes.ring[0].sched.ready = false; 1345 1338 } 1346 1339 1347 1340 if (amdgpu_sriov_vf(adev)) { ··· 1359 1352 int r; 1360 1353 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 1361 1354 1362 - if (adev->mes.ring.sched.ready) 1355 + if (adev->mes.ring[0].sched.ready) 1363 1356 goto out; 1364 1357 1365 1358 if (!adev->enable_mes_kiq) { ··· 1404 1397 * with MES enabled. 1405 1398 */ 1406 1399 adev->gfx.kiq[0].ring.sched.ready = false; 1407 - adev->mes.ring.sched.ready = true; 1400 + adev->mes.ring[0].sched.ready = true; 1408 1401 1409 1402 return 0; 1410 1403
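In the mes_v11_0 submission path above, the sequence number is reserved before the packet is written, and amdgpu_fence_wait_polling() is first called on seq - num_fences_mask so that a ring which cannot retire older packets is reported as full and the reserved dwords are handed back with amdgpu_ring_undo() rather than overwritten; the amdgpu_ring.c hunk earlier, which pins MES rings to 8 hardware submissions, sizes that window. Reduced to a sketch of the ordering (an excerpt-style condensation of the hunk, not a standalone function):

    spin_lock_irqsave(&mes->ring_lock[0], flags);

    r = amdgpu_ring_alloc(ring, ndw);           /* reserve space for the packet */
    if (r)
        goto error_unlock_free;

    seq = ++ring->fence_drv.sync_seq;
    /* Back-pressure: wait for the oldest fence this seq could collide with */
    r = amdgpu_fence_wait_polling(ring,
                                  seq - ring->fence_drv.num_fences_mask,
                                  timeout);
    if (r < 1) {
        amdgpu_ring_undo(ring);                 /* return the reserved dwords */
        goto error_unlock_free;
    }

    /* ... write the packet with 'seq' as its completion fence, commit,
     * drop the lock, then poll for 'seq' itself ... */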
+161 -135
drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
··· 142 142 } 143 143 144 144 static int mes_v12_0_submit_pkt_and_poll_completion(struct amdgpu_mes *mes, 145 - void *pkt, int size, 146 - int api_status_off) 145 + int pipe, void *pkt, int size, 146 + int api_status_off) 147 147 { 148 148 union MESAPI__QUERY_MES_STATUS mes_status_pkt; 149 149 signed long timeout = 3000000; /* 3000 ms */ 150 150 struct amdgpu_device *adev = mes->adev; 151 - struct amdgpu_ring *ring = &mes->ring; 151 + struct amdgpu_ring *ring = &mes->ring[pipe]; 152 + spinlock_t *ring_lock = &mes->ring_lock[pipe]; 152 153 struct MES_API_STATUS *api_status; 153 154 union MESAPI__MISC *x_pkt = pkt; 154 155 const char *op_str, *misc_op_str; 155 156 unsigned long flags; 156 157 u64 status_gpu_addr; 157 - u32 status_offset; 158 + u32 seq, status_offset; 158 159 u64 *status_ptr; 159 160 signed long r; 160 161 int ret; ··· 178 177 status_ptr = (u64 *)&adev->wb.wb[status_offset]; 179 178 *status_ptr = 0; 180 179 181 - spin_lock_irqsave(&mes->ring_lock, flags); 180 + spin_lock_irqsave(ring_lock, flags); 182 181 r = amdgpu_ring_alloc(ring, (size + sizeof(mes_status_pkt)) / 4); 183 182 if (r) 184 183 goto error_unlock_free; 184 + 185 + seq = ++ring->fence_drv.sync_seq; 186 + r = amdgpu_fence_wait_polling(ring, 187 + seq - ring->fence_drv.num_fences_mask, 188 + timeout); 189 + if (r < 1) 190 + goto error_undo; 185 191 186 192 api_status = (struct MES_API_STATUS *)((char *)pkt + api_status_off); 187 193 api_status->api_completion_fence_addr = status_gpu_addr; ··· 202 194 mes_status_pkt.header.dwsize = API_FRAME_SIZE_IN_DWORDS; 203 195 mes_status_pkt.api_status.api_completion_fence_addr = 204 196 ring->fence_drv.gpu_addr; 205 - mes_status_pkt.api_status.api_completion_fence_value = 206 - ++ring->fence_drv.sync_seq; 197 + mes_status_pkt.api_status.api_completion_fence_value = seq; 207 198 208 199 amdgpu_ring_write_multiple(ring, &mes_status_pkt, 209 200 sizeof(mes_status_pkt) / 4); 210 201 211 202 amdgpu_ring_commit(ring); 212 - spin_unlock_irqrestore(&mes->ring_lock, flags); 203 + spin_unlock_irqrestore(ring_lock, flags); 213 204 214 205 op_str = mes_v12_0_get_op_string(x_pkt); 215 206 misc_op_str = mes_v12_0_get_misc_op_string(x_pkt); 216 207 217 208 if (misc_op_str) 218 - dev_dbg(adev->dev, "MES msg=%s (%s) was emitted\n", op_str, 219 - misc_op_str); 209 + dev_dbg(adev->dev, "MES(%d) msg=%s (%s) was emitted\n", 210 + pipe, op_str, misc_op_str); 220 211 else if (op_str) 221 - dev_dbg(adev->dev, "MES msg=%s was emitted\n", op_str); 212 + dev_dbg(adev->dev, "MES(%d) msg=%s was emitted\n", 213 + pipe, op_str); 222 214 else 223 - dev_dbg(adev->dev, "MES msg=%d was emitted\n", 224 - x_pkt->header.opcode); 215 + dev_dbg(adev->dev, "MES(%d) msg=%d was emitted\n", 216 + pipe, x_pkt->header.opcode); 225 217 226 - r = amdgpu_fence_wait_polling(ring, ring->fence_drv.sync_seq, timeout); 218 + r = amdgpu_fence_wait_polling(ring, seq, timeout); 227 219 if (r < 1 || !*status_ptr) { 228 220 229 221 if (misc_op_str) 230 - dev_err(adev->dev, "MES failed to respond to msg=%s (%s)\n", 231 - op_str, misc_op_str); 222 + dev_err(adev->dev, "MES(%d) failed to respond to msg=%s (%s)\n", 223 + pipe, op_str, misc_op_str); 232 224 else if (op_str) 233 - dev_err(adev->dev, "MES failed to respond to msg=%s\n", 234 - op_str); 225 + dev_err(adev->dev, "MES(%d) failed to respond to msg=%s\n", 226 + pipe, op_str); 235 227 else 236 - dev_err(adev->dev, "MES failed to respond to msg=%d\n", 237 - x_pkt->header.opcode); 228 + dev_err(adev->dev, "MES(%d) failed to respond to msg=%d\n", 229 + pipe, x_pkt->header.opcode); 238 
230 239 231 while (halt_if_hws_hang) 240 232 schedule(); ··· 246 238 amdgpu_device_wb_free(adev, status_offset); 247 239 return 0; 248 240 241 + error_undo: 242 + dev_err(adev->dev, "MES ring buffer is full.\n"); 243 + amdgpu_ring_undo(ring); 244 + 249 245 error_unlock_free: 250 - spin_unlock_irqrestore(&mes->ring_lock, flags); 246 + spin_unlock_irqrestore(ring_lock, flags); 251 247 252 248 error_wb_free: 253 249 amdgpu_device_wb_free(adev, status_offset); ··· 266 254 return MES_QUEUE_TYPE_COMPUTE; 267 255 else if (queue_type == AMDGPU_RING_TYPE_SDMA) 268 256 return MES_QUEUE_TYPE_SDMA; 257 + else if (queue_type == AMDGPU_RING_TYPE_MES) 258 + return MES_QUEUE_TYPE_SCHQ; 269 259 else 270 260 BUG(); 271 261 return -1; ··· 325 311 mes_add_queue_pkt.gds_size = input->queue_size; 326 312 327 313 return mes_v12_0_submit_pkt_and_poll_completion(mes, 314 + AMDGPU_MES_SCHED_PIPE, 328 315 &mes_add_queue_pkt, sizeof(mes_add_queue_pkt), 329 316 offsetof(union MESAPI__ADD_QUEUE, api_status)); 330 317 } ··· 345 330 mes_remove_queue_pkt.gang_context_addr = input->gang_context_addr; 346 331 347 332 return mes_v12_0_submit_pkt_and_poll_completion(mes, 333 + AMDGPU_MES_SCHED_PIPE, 348 334 &mes_remove_queue_pkt, sizeof(mes_remove_queue_pkt), 349 335 offsetof(union MESAPI__REMOVE_QUEUE, api_status)); 350 336 } ··· 354 338 struct mes_map_legacy_queue_input *input) 355 339 { 356 340 union MESAPI__ADD_QUEUE mes_add_queue_pkt; 341 + int pipe; 357 342 358 343 memset(&mes_add_queue_pkt, 0, sizeof(mes_add_queue_pkt)); 359 344 ··· 371 354 convert_to_mes_queue_type(input->queue_type); 372 355 mes_add_queue_pkt.map_legacy_kq = 1; 373 356 374 - return mes_v12_0_submit_pkt_and_poll_completion(mes, 357 + if (mes->adev->enable_uni_mes) 358 + pipe = AMDGPU_MES_KIQ_PIPE; 359 + else 360 + pipe = AMDGPU_MES_SCHED_PIPE; 361 + 362 + return mes_v12_0_submit_pkt_and_poll_completion(mes, pipe, 375 363 &mes_add_queue_pkt, sizeof(mes_add_queue_pkt), 376 364 offsetof(union MESAPI__ADD_QUEUE, api_status)); 377 365 } ··· 385 363 struct mes_unmap_legacy_queue_input *input) 386 364 { 387 365 union MESAPI__REMOVE_QUEUE mes_remove_queue_pkt; 366 + int pipe; 388 367 389 368 memset(&mes_remove_queue_pkt, 0, sizeof(mes_remove_queue_pkt)); 390 369 ··· 410 387 convert_to_mes_queue_type(input->queue_type); 411 388 } 412 389 413 - return mes_v12_0_submit_pkt_and_poll_completion(mes, 390 + if (mes->adev->enable_uni_mes) 391 + pipe = AMDGPU_MES_KIQ_PIPE; 392 + else 393 + pipe = AMDGPU_MES_SCHED_PIPE; 394 + 395 + return mes_v12_0_submit_pkt_and_poll_completion(mes, pipe, 414 396 &mes_remove_queue_pkt, sizeof(mes_remove_queue_pkt), 415 397 offsetof(union MESAPI__REMOVE_QUEUE, api_status)); 416 398 } ··· 432 404 return 0; 433 405 } 434 406 435 - static int mes_v12_0_query_sched_status(struct amdgpu_mes *mes) 407 + static int mes_v12_0_query_sched_status(struct amdgpu_mes *mes, int pipe) 436 408 { 437 409 union MESAPI__QUERY_MES_STATUS mes_status_pkt; 438 410 ··· 442 414 mes_status_pkt.header.opcode = MES_SCH_API_QUERY_SCHEDULER_STATUS; 443 415 mes_status_pkt.header.dwsize = API_FRAME_SIZE_IN_DWORDS; 444 416 445 - return mes_v12_0_submit_pkt_and_poll_completion(mes, 417 + return mes_v12_0_submit_pkt_and_poll_completion(mes, pipe, 446 418 &mes_status_pkt, sizeof(mes_status_pkt), 447 419 offsetof(union MESAPI__QUERY_MES_STATUS, api_status)); 448 420 } ··· 451 423 struct mes_misc_op_input *input) 452 424 { 453 425 union MESAPI__MISC misc_pkt; 426 + int pipe; 454 427 455 428 memset(&misc_pkt, 0, sizeof(misc_pkt)); 456 429 ··· 504 475 return -EINVAL; 505 476 
} 506 477 507 - return mes_v12_0_submit_pkt_and_poll_completion(mes, 478 + if (mes->adev->enable_uni_mes) 479 + pipe = AMDGPU_MES_KIQ_PIPE; 480 + else 481 + pipe = AMDGPU_MES_SCHED_PIPE; 482 + 483 + return mes_v12_0_submit_pkt_and_poll_completion(mes, pipe, 508 484 &misc_pkt, sizeof(misc_pkt), 509 485 offsetof(union MESAPI__MISC, api_status)); 510 486 } 511 487 512 - static int mes_v12_0_set_hw_resources_1(struct amdgpu_mes *mes) 488 + static int mes_v12_0_set_hw_resources_1(struct amdgpu_mes *mes, int pipe) 513 489 { 514 490 union MESAPI_SET_HW_RESOURCES_1 mes_set_hw_res_1_pkt; 515 491 ··· 525 491 mes_set_hw_res_1_pkt.header.dwsize = API_FRAME_SIZE_IN_DWORDS; 526 492 mes_set_hw_res_1_pkt.mes_kiq_unmap_timeout = 100; 527 493 528 - return mes_v12_0_submit_pkt_and_poll_completion(mes, 494 + return mes_v12_0_submit_pkt_and_poll_completion(mes, pipe, 529 495 &mes_set_hw_res_1_pkt, sizeof(mes_set_hw_res_1_pkt), 530 496 offsetof(union MESAPI_SET_HW_RESOURCES_1, api_status)); 531 497 } 532 498 533 - static int mes_v12_0_set_hw_resources(struct amdgpu_mes *mes) 499 + static int mes_v12_0_set_hw_resources(struct amdgpu_mes *mes, int pipe) 534 500 { 535 501 int i; 536 502 struct amdgpu_device *adev = mes->adev; ··· 542 508 mes_set_hw_res_pkt.header.opcode = MES_SCH_API_SET_HW_RSRC; 543 509 mes_set_hw_res_pkt.header.dwsize = API_FRAME_SIZE_IN_DWORDS; 544 510 545 - mes_set_hw_res_pkt.vmid_mask_mmhub = mes->vmid_mask_mmhub; 546 - mes_set_hw_res_pkt.vmid_mask_gfxhub = mes->vmid_mask_gfxhub; 547 - mes_set_hw_res_pkt.gds_size = adev->gds.gds_size; 548 - mes_set_hw_res_pkt.paging_vmid = 0; 549 - mes_set_hw_res_pkt.g_sch_ctx_gpu_mc_ptr = mes->sch_ctx_gpu_addr; 511 + if (pipe == AMDGPU_MES_SCHED_PIPE) { 512 + mes_set_hw_res_pkt.vmid_mask_mmhub = mes->vmid_mask_mmhub; 513 + mes_set_hw_res_pkt.vmid_mask_gfxhub = mes->vmid_mask_gfxhub; 514 + mes_set_hw_res_pkt.gds_size = adev->gds.gds_size; 515 + mes_set_hw_res_pkt.paging_vmid = 0; 516 + 517 + for (i = 0; i < MAX_COMPUTE_PIPES; i++) 518 + mes_set_hw_res_pkt.compute_hqd_mask[i] = 519 + mes->compute_hqd_mask[i]; 520 + 521 + for (i = 0; i < MAX_GFX_PIPES; i++) 522 + mes_set_hw_res_pkt.gfx_hqd_mask[i] = 523 + mes->gfx_hqd_mask[i]; 524 + 525 + for (i = 0; i < MAX_SDMA_PIPES; i++) 526 + mes_set_hw_res_pkt.sdma_hqd_mask[i] = 527 + mes->sdma_hqd_mask[i]; 528 + 529 + for (i = 0; i < AMD_PRIORITY_NUM_LEVELS; i++) 530 + mes_set_hw_res_pkt.aggregated_doorbells[i] = 531 + mes->aggregated_doorbells[i]; 532 + } 533 + 534 + mes_set_hw_res_pkt.g_sch_ctx_gpu_mc_ptr = 535 + mes->sch_ctx_gpu_addr[pipe]; 550 536 mes_set_hw_res_pkt.query_status_fence_gpu_mc_ptr = 551 - mes->query_status_fence_gpu_addr; 552 - 553 - for (i = 0; i < MAX_COMPUTE_PIPES; i++) 554 - mes_set_hw_res_pkt.compute_hqd_mask[i] = 555 - mes->compute_hqd_mask[i]; 556 - 557 - for (i = 0; i < MAX_GFX_PIPES; i++) 558 - mes_set_hw_res_pkt.gfx_hqd_mask[i] = mes->gfx_hqd_mask[i]; 559 - 560 - for (i = 0; i < MAX_SDMA_PIPES; i++) 561 - mes_set_hw_res_pkt.sdma_hqd_mask[i] = mes->sdma_hqd_mask[i]; 562 - 563 - for (i = 0; i < AMD_PRIORITY_NUM_LEVELS; i++) 564 - mes_set_hw_res_pkt.aggregated_doorbells[i] = 565 - mes->aggregated_doorbells[i]; 537 + mes->query_status_fence_gpu_addr[pipe]; 566 538 567 539 for (i = 0; i < 5; i++) { 568 540 mes_set_hw_res_pkt.gc_base[i] = adev->reg_offset[GC_HWIP][0][i]; ··· 596 556 mes_set_hw_res_pkt.event_intr_history_gpu_mc_ptr = mes->event_log_gpu_addr; 597 557 } 598 558 599 - return mes_v12_0_submit_pkt_and_poll_completion(mes, 559 + return mes_v12_0_submit_pkt_and_poll_completion(mes, pipe, 
600 560 &mes_set_hw_res_pkt, sizeof(mes_set_hw_res_pkt), 601 561 offsetof(union MESAPI_SET_HW_RESOURCES, api_status)); 602 562 } ··· 774 734 if (enable) { 775 735 data = RREG32_SOC15(GC, 0, regCP_MES_CNTL); 776 736 data = REG_SET_FIELD(data, CP_MES_CNTL, MES_PIPE0_RESET, 1); 777 - data = REG_SET_FIELD(data, CP_MES_CNTL, MES_PIPE1_RESET, 778 - (!adev->enable_uni_mes && adev->enable_mes_kiq) ? 1 : 0); 737 + data = REG_SET_FIELD(data, CP_MES_CNTL, MES_PIPE1_RESET, 1); 779 738 WREG32_SOC15(GC, 0, regCP_MES_CNTL, data); 780 739 781 740 mutex_lock(&adev->srbm_mutex); 782 741 for (pipe = 0; pipe < AMDGPU_MAX_MES_PIPES; pipe++) { 783 - if ((!adev->enable_mes_kiq || adev->enable_uni_mes) && 784 - pipe == AMDGPU_MES_KIQ_PIPE) 785 - continue; 786 - 787 742 soc21_grbm_select(adev, 3, pipe, 0, 0); 788 743 789 744 ucode_addr = adev->mes.uc_start_addr[pipe] >> 2; ··· 792 757 793 758 /* unhalt MES and activate pipe0 */ 794 759 data = REG_SET_FIELD(0, CP_MES_CNTL, MES_PIPE0_ACTIVE, 1); 795 - data = REG_SET_FIELD(data, CP_MES_CNTL, MES_PIPE1_ACTIVE, 796 - (!adev->enable_uni_mes && adev->enable_mes_kiq) ? 1 : 0); 760 + data = REG_SET_FIELD(data, CP_MES_CNTL, MES_PIPE1_ACTIVE, 1); 797 761 WREG32_SOC15(GC, 0, regCP_MES_CNTL, data); 798 762 799 763 if (amdgpu_emu_mode) ··· 808 774 data = REG_SET_FIELD(data, CP_MES_CNTL, 809 775 MES_INVALIDATE_ICACHE, 1); 810 776 data = REG_SET_FIELD(data, CP_MES_CNTL, MES_PIPE0_RESET, 1); 811 - data = REG_SET_FIELD(data, CP_MES_CNTL, MES_PIPE1_RESET, 812 - (!adev->enable_uni_mes && adev->enable_mes_kiq) ? 1 : 0); 777 + data = REG_SET_FIELD(data, CP_MES_CNTL, MES_PIPE1_RESET, 1); 813 778 data = REG_SET_FIELD(data, CP_MES_CNTL, MES_HALT, 1); 814 779 WREG32_SOC15(GC, 0, regCP_MES_CNTL, data); 815 780 } ··· 823 790 824 791 mutex_lock(&adev->srbm_mutex); 825 792 for (pipe = 0; pipe < AMDGPU_MAX_MES_PIPES; pipe++) { 826 - if ((!adev->enable_mes_kiq || adev->enable_uni_mes) && 827 - pipe == AMDGPU_MES_KIQ_PIPE) 828 - continue; 829 - 830 793 /* me=3, queue=0 */ 831 794 soc21_grbm_select(adev, 3, pipe, 0, 0); 832 795 ··· 1114 1085 return r; 1115 1086 } 1116 1087 1117 - kiq->pmf->kiq_map_queues(kiq_ring, &adev->mes.ring); 1088 + kiq->pmf->kiq_map_queues(kiq_ring, &adev->mes.ring[0]); 1118 1089 1119 1090 r = amdgpu_ring_test_ring(kiq_ring); 1120 1091 if (r) { ··· 1130 1101 struct amdgpu_ring *ring; 1131 1102 int r; 1132 1103 1133 - if (pipe == AMDGPU_MES_KIQ_PIPE) 1104 + if (!adev->enable_uni_mes && pipe == AMDGPU_MES_KIQ_PIPE) 1134 1105 ring = &adev->gfx.kiq[0].ring; 1135 - else if (pipe == AMDGPU_MES_SCHED_PIPE) 1136 - ring = &adev->mes.ring; 1137 1106 else 1138 - BUG(); 1107 + ring = &adev->mes.ring[pipe]; 1139 1108 1140 - if ((pipe == AMDGPU_MES_SCHED_PIPE) && 1109 + if ((adev->enable_uni_mes || pipe == AMDGPU_MES_SCHED_PIPE) && 1141 1110 (amdgpu_in_reset(adev) || adev->in_suspend)) { 1142 1111 *(ring->wptr_cpu_addr) = 0; 1143 1112 *(ring->rptr_cpu_addr) = 0; ··· 1147 1120 return r; 1148 1121 1149 1122 if (pipe == AMDGPU_MES_SCHED_PIPE) { 1150 - if (adev->enable_uni_mes) { 1151 - mes_v12_0_queue_init_register(ring); 1152 - } else { 1123 + if (adev->enable_uni_mes) 1124 + r = amdgpu_mes_map_legacy_queue(adev, ring); 1125 + else 1153 1126 r = mes_v12_0_kiq_enable_queue(adev); 1154 - if (r) 1155 - return r; 1156 - } 1127 + if (r) 1128 + return r; 1157 1129 } else { 1158 1130 mes_v12_0_queue_init_register(ring); 1159 1131 } ··· 1172 1146 return 0; 1173 1147 } 1174 1148 1175 - static int mes_v12_0_ring_init(struct amdgpu_device *adev) 1149 + static int mes_v12_0_ring_init(struct 
amdgpu_device *adev, int pipe) 1176 1150 { 1177 1151 struct amdgpu_ring *ring; 1178 1152 1179 - ring = &adev->mes.ring; 1153 + ring = &adev->mes.ring[pipe]; 1180 1154 1181 1155 ring->funcs = &mes_v12_0_ring_funcs; 1182 1156 1183 1157 ring->me = 3; 1184 - ring->pipe = 0; 1158 + ring->pipe = pipe; 1185 1159 ring->queue = 0; 1186 1160 1187 1161 ring->ring_obj = NULL; 1188 1162 ring->use_doorbell = true; 1189 - ring->doorbell_index = adev->doorbell_index.mes_ring0 << 1; 1190 - ring->eop_gpu_addr = adev->mes.eop_gpu_addr[AMDGPU_MES_SCHED_PIPE]; 1163 + ring->eop_gpu_addr = adev->mes.eop_gpu_addr[pipe]; 1191 1164 ring->no_scheduler = true; 1192 1165 sprintf(ring->name, "mes_%d.%d.%d", ring->me, ring->pipe, ring->queue); 1166 + 1167 + if (pipe == AMDGPU_MES_SCHED_PIPE) 1168 + ring->doorbell_index = adev->doorbell_index.mes_ring0 << 1; 1169 + else 1170 + ring->doorbell_index = adev->doorbell_index.mes_ring1 << 1; 1193 1171 1194 1172 return amdgpu_ring_init(adev, ring, 1024, NULL, 0, 1195 1173 AMDGPU_RING_PRIO_DEFAULT, NULL); ··· 1208 1178 ring = &adev->gfx.kiq[0].ring; 1209 1179 1210 1180 ring->me = 3; 1211 - ring->pipe = adev->enable_uni_mes ? 0 : 1; 1181 + ring->pipe = 1; 1212 1182 ring->queue = 0; 1213 1183 1214 1184 ring->adev = NULL; ··· 1230 1200 int r, mqd_size = sizeof(struct v12_compute_mqd); 1231 1201 struct amdgpu_ring *ring; 1232 1202 1233 - if (pipe == AMDGPU_MES_KIQ_PIPE) 1203 + if (!adev->enable_uni_mes && pipe == AMDGPU_MES_KIQ_PIPE) 1234 1204 ring = &adev->gfx.kiq[0].ring; 1235 - else if (pipe == AMDGPU_MES_SCHED_PIPE) 1236 - ring = &adev->mes.ring; 1237 1205 else 1238 - BUG(); 1206 + ring = &adev->mes.ring[pipe]; 1239 1207 1240 1208 if (ring->mqd_obj) 1241 1209 return 0; ··· 1274 1246 return r; 1275 1247 1276 1248 for (pipe = 0; pipe < AMDGPU_MAX_MES_PIPES; pipe++) { 1277 - if (!adev->enable_mes_kiq && pipe == AMDGPU_MES_KIQ_PIPE) 1278 - continue; 1279 - 1280 1249 r = mes_v12_0_allocate_eop_buf(adev, pipe); 1281 1250 if (r) 1282 1251 return r; ··· 1281 1256 r = mes_v12_0_mqd_sw_init(adev, pipe); 1282 1257 if (r) 1283 1258 return r; 1284 - } 1285 1259 1286 - if (adev->enable_mes_kiq) { 1287 - r = mes_v12_0_kiq_ring_init(adev); 1260 + if (!adev->enable_uni_mes && pipe == AMDGPU_MES_KIQ_PIPE) 1261 + r = mes_v12_0_kiq_ring_init(adev); 1262 + else 1263 + r = mes_v12_0_ring_init(adev, pipe); 1288 1264 if (r) 1289 1265 return r; 1290 1266 } 1291 - 1292 - r = mes_v12_0_ring_init(adev); 1293 - if (r) 1294 - return r; 1295 1267 1296 1268 return 0; 1297 1269 } ··· 1298 1276 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 1299 1277 int pipe; 1300 1278 1301 - amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs); 1302 - amdgpu_device_wb_free(adev, adev->mes.query_status_fence_offs); 1303 - 1304 1279 for (pipe = 0; pipe < AMDGPU_MAX_MES_PIPES; pipe++) { 1305 1280 kfree(adev->mes.mqd_backup[pipe]); 1306 1281 ··· 1305 1286 &adev->mes.eop_gpu_addr[pipe], 1306 1287 NULL); 1307 1288 amdgpu_ucode_release(&adev->mes.fw[pipe]); 1289 + 1290 + if (adev->enable_uni_mes || pipe == AMDGPU_MES_SCHED_PIPE) { 1291 + amdgpu_bo_free_kernel(&adev->mes.ring[pipe].mqd_obj, 1292 + &adev->mes.ring[pipe].mqd_gpu_addr, 1293 + &adev->mes.ring[pipe].mqd_ptr); 1294 + amdgpu_ring_fini(&adev->mes.ring[pipe]); 1295 + } 1308 1296 } 1309 1297 1310 - amdgpu_bo_free_kernel(&adev->gfx.kiq[0].ring.mqd_obj, 1311 - &adev->gfx.kiq[0].ring.mqd_gpu_addr, 1312 - &adev->gfx.kiq[0].ring.mqd_ptr); 1313 - 1314 - amdgpu_bo_free_kernel(&adev->mes.ring.mqd_obj, 1315 - &adev->mes.ring.mqd_gpu_addr, 1316 - &adev->mes.ring.mqd_ptr); 
1317 - 1318 - amdgpu_ring_fini(&adev->gfx.kiq[0].ring); 1319 - amdgpu_ring_fini(&adev->mes.ring); 1298 + if (!adev->enable_uni_mes) { 1299 + amdgpu_bo_free_kernel(&adev->gfx.kiq[0].ring.mqd_obj, 1300 + &adev->gfx.kiq[0].ring.mqd_gpu_addr, 1301 + &adev->gfx.kiq[0].ring.mqd_ptr); 1302 + amdgpu_ring_fini(&adev->gfx.kiq[0].ring); 1303 + } 1320 1304 1321 1305 if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) { 1322 1306 mes_v12_0_free_ucode_buffers(adev, AMDGPU_MES_KIQ_PIPE); ··· 1363 1341 soc21_grbm_select(adev, 0, 0, 0, 0); 1364 1342 mutex_unlock(&adev->srbm_mutex); 1365 1343 1366 - adev->mes.ring.sched.ready = false; 1344 + adev->mes.ring[0].sched.ready = false; 1367 1345 } 1368 1346 1369 1347 static void mes_v12_0_kiq_setting(struct amdgpu_ring *ring) ··· 1384 1362 { 1385 1363 int r = 0; 1386 1364 1387 - mes_v12_0_kiq_setting(&adev->gfx.kiq[0].ring); 1388 - 1389 1365 if (adev->enable_uni_mes) 1390 - return mes_v12_0_hw_init(adev); 1366 + mes_v12_0_kiq_setting(&adev->mes.ring[AMDGPU_MES_KIQ_PIPE]); 1367 + else 1368 + mes_v12_0_kiq_setting(&adev->gfx.kiq[0].ring); 1391 1369 1392 1370 if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) { 1393 1371 ··· 1414 1392 if (r) 1415 1393 goto failure; 1416 1394 1395 + if (adev->enable_uni_mes) { 1396 + r = mes_v12_0_set_hw_resources(&adev->mes, AMDGPU_MES_KIQ_PIPE); 1397 + if (r) 1398 + goto failure; 1399 + 1400 + mes_v12_0_set_hw_resources_1(&adev->mes, AMDGPU_MES_KIQ_PIPE); 1401 + } 1402 + 1417 1403 r = mes_v12_0_hw_init(adev); 1418 1404 if (r) 1419 1405 goto failure; ··· 1435 1405 1436 1406 static int mes_v12_0_kiq_hw_fini(struct amdgpu_device *adev) 1437 1407 { 1438 - if (adev->mes.ring.sched.ready) { 1439 - mes_v12_0_kiq_dequeue_sched(adev); 1440 - adev->mes.ring.sched.ready = false; 1408 + if (adev->mes.ring[0].sched.ready) { 1409 + if (adev->enable_uni_mes) 1410 + amdgpu_mes_unmap_legacy_queue(adev, 1411 + &adev->mes.ring[AMDGPU_MES_SCHED_PIPE], 1412 + RESET_QUEUES, 0, 0); 1413 + else 1414 + mes_v12_0_kiq_dequeue_sched(adev); 1415 + 1416 + adev->mes.ring[0].sched.ready = false; 1441 1417 } 1442 1418 1443 1419 mes_v12_0_enable(adev, false); ··· 1456 1420 int r; 1457 1421 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 1458 1422 1459 - if (adev->mes.ring.sched.ready) 1423 + if (adev->mes.ring[0].sched.ready) 1460 1424 goto out; 1461 1425 1462 - if (!adev->enable_mes_kiq || adev->enable_uni_mes) { 1426 + if (!adev->enable_mes_kiq) { 1463 1427 if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) { 1464 1428 r = mes_v12_0_load_microcode(adev, 1465 1429 AMDGPU_MES_SCHED_PIPE, true); ··· 1479 1443 mes_v12_0_enable(adev, true); 1480 1444 } 1481 1445 1446 + /* Enable the MES to handle doorbell ring on unmapped queue */ 1447 + mes_v12_0_enable_unmapped_doorbell_handling(&adev->mes, true); 1448 + 1482 1449 r = mes_v12_0_queue_init(adev, AMDGPU_MES_SCHED_PIPE); 1483 1450 if (r) 1484 1451 goto failure; 1485 1452 1486 - r = mes_v12_0_set_hw_resources(&adev->mes); 1453 + r = mes_v12_0_set_hw_resources(&adev->mes, AMDGPU_MES_SCHED_PIPE); 1487 1454 if (r) 1488 1455 goto failure; 1489 1456 1490 1457 if (adev->enable_uni_mes) 1491 - mes_v12_0_set_hw_resources_1(&adev->mes); 1458 + mes_v12_0_set_hw_resources_1(&adev->mes, AMDGPU_MES_SCHED_PIPE); 1492 1459 1493 1460 mes_v12_0_init_aggregated_doorbell(&adev->mes); 1494 1461 1495 - /* Enable the MES to handle doorbell ring on unmapped queue */ 1496 - mes_v12_0_enable_unmapped_doorbell_handling(&adev->mes, true); 1497 - 1498 - r = mes_v12_0_query_sched_status(&adev->mes); 1462 + r = 
mes_v12_0_query_sched_status(&adev->mes, AMDGPU_MES_SCHED_PIPE); 1499 1463 if (r) { 1500 1464 DRM_ERROR("MES is busy\n"); 1501 1465 goto failure; ··· 1508 1472 * with MES enabled. 1509 1473 */ 1510 1474 adev->gfx.kiq[0].ring.sched.ready = false; 1511 - adev->mes.ring.sched.ready = true; 1475 + adev->mes.ring[0].sched.ready = true; 1512 1476 1513 1477 return 0; 1514 1478 ··· 1551 1515 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 1552 1516 int pipe, r; 1553 1517 1554 - if (adev->enable_uni_mes) { 1555 - r = amdgpu_mes_init_microcode(adev, AMDGPU_MES_SCHED_PIPE); 1556 - if (!r) 1557 - return 0; 1558 - 1559 - adev->enable_uni_mes = false; 1560 - } 1561 - 1562 1518 for (pipe = 0; pipe < AMDGPU_MAX_MES_PIPES; pipe++) { 1563 - if (!adev->enable_mes_kiq && pipe == AMDGPU_MES_KIQ_PIPE) 1564 - continue; 1565 1519 r = amdgpu_mes_init_microcode(adev, pipe); 1566 1520 if (r) 1567 1521 return r;
+6
drivers/gpu/drm/amd/amdgpu/soc15d.h
··· 76 76 ((cond & 0xF) << 24) | \ 77 77 ((type & 0xF) << 28)) 78 78 79 + #define CP_PACKETJ_NOP 0x60000000 80 + #define CP_PACKETJ_GET_REG(x) ((x) & 0x3FFFF) 81 + #define CP_PACKETJ_GET_RES(x) (((x) >> 18) & 0x3F) 82 + #define CP_PACKETJ_GET_COND(x) (((x) >> 24) & 0xF) 83 + #define CP_PACKETJ_GET_TYPE(x) (((x) >> 28) & 0xF) 84 + 79 85 /* Packet 3 types */ 80 86 #define PACKET3_NOP 0x10 81 87 #define PACKET3_SET_BASE 0x11
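The CP_PACKETJ_GET_* macros above slice a JPEG command dword into a register offset (bits 0-17), a reserved field (18-23), a condition code (24-27) and a packet type (28-31); the jpeg_v4_0_3 parser added earlier uses them to reject anything outside the 0x4000-0x41c2 register window. A small standalone C program, with the field layout mirrored locally for illustration, shows the pack/unpack round trip:

    #include <stdint.h>
    #include <stdio.h>

    /* Local mirror of the PACKETJ field layout used by the macros above */
    #define PACKETJ(reg, res, cond, type) (((uint32_t)(reg) & 0x3FFFF) | \
                                           (((uint32_t)(res) & 0x3F) << 18) | \
                                           (((uint32_t)(cond) & 0xF) << 24) | \
                                           (((uint32_t)(type) & 0xF) << 28))
    #define GET_REG(x)  ((x) & 0x3FFFF)
    #define GET_RES(x)  (((x) >> 18) & 0x3F)
    #define GET_COND(x) (((x) >> 24) & 0xF)
    #define GET_TYPE(x) (((x) >> 28) & 0xF)

    int main(void)
    {
        /* A TYPE0 register write to 0x4001, inside the allowed JPEG range */
        uint32_t dw = PACKETJ(0x4001, 0, 0, 0);

        printf("reg=0x%x res=%u cond=%u type=%u\n",
               (unsigned)GET_REG(dw), (unsigned)GET_RES(dw),
               (unsigned)GET_COND(dw), (unsigned)GET_TYPE(dw));
        return 0;
    }

The parser accepts TYPE0 and TYPE3 packets only when the condition matches and the decoded register stays within JPEG_REG_RANGE_START..JPEG_REG_RANGE_END, and TYPE6 only as a NOP.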
+2
drivers/gpu/drm/amd/amdgpu/soc24.c
··· 406 406 AMD_CG_SUPPORT_ATHUB_MGCG | 407 407 AMD_CG_SUPPORT_ATHUB_LS | 408 408 AMD_CG_SUPPORT_MC_MGCG | 409 + AMD_CG_SUPPORT_HDP_SD | 409 410 AMD_CG_SUPPORT_MC_LS; 410 411 adev->pg_flags = AMD_PG_SUPPORT_VCN | 411 412 AMD_PG_SUPPORT_JPEG | ··· 425 424 AMD_CG_SUPPORT_ATHUB_MGCG | 426 425 AMD_CG_SUPPORT_ATHUB_LS | 427 426 AMD_CG_SUPPORT_MC_MGCG | 427 + AMD_CG_SUPPORT_HDP_SD | 428 428 AMD_CG_SUPPORT_MC_LS; 429 429 430 430 adev->pg_flags = AMD_PG_SUPPORT_VCN |
+3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 2893 2893 2894 2894 hpd_rx_irq_work_suspend(dm); 2895 2895 2896 + if (adev->dm.dc->caps.ips_support) 2897 + dc_allow_idle_optimizations(adev->dm.dc, true); 2898 + 2896 2899 dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D3); 2897 2900 dc_dmub_srv_set_power_state(dm->dc->ctx->dmub_srv, DC_ACPI_CM_POWER_STATE_D3); 2898 2901
+24 -9
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
··· 804 804 }; 805 805 806 806 #if defined(CONFIG_DRM_AMD_DC_FP) 807 - static int kbps_to_peak_pbn(int kbps) 807 + static uint16_t get_fec_overhead_multiplier(struct dc_link *dc_link) 808 + { 809 + u8 link_coding_cap; 810 + uint16_t fec_overhead_multiplier_x1000 = PBN_FEC_OVERHEAD_MULTIPLIER_8B_10B; 811 + 812 + link_coding_cap = dc_link_dp_mst_decide_link_encoding_format(dc_link); 813 + if (link_coding_cap == DP_128b_132b_ENCODING) 814 + fec_overhead_multiplier_x1000 = PBN_FEC_OVERHEAD_MULTIPLIER_128B_132B; 815 + 816 + return fec_overhead_multiplier_x1000; 817 + } 818 + 819 + static int kbps_to_peak_pbn(int kbps, uint16_t fec_overhead_multiplier_x1000) 808 820 { 809 821 u64 peak_kbps = kbps; 810 822 811 823 peak_kbps *= 1006; 812 - peak_kbps = div_u64(peak_kbps, 1000); 824 + peak_kbps *= fec_overhead_multiplier_x1000; 825 + peak_kbps = div_u64(peak_kbps, 1000 * 1000); 813 826 return (int) DIV64_U64_ROUND_UP(peak_kbps * 64, (54 * 8 * 1000)); 814 827 } 815 828 ··· 923 910 int link_timeslots_used; 924 911 int fair_pbn_alloc; 925 912 int ret = 0; 913 + uint16_t fec_overhead_multiplier_x1000 = get_fec_overhead_multiplier(dc_link); 926 914 927 915 for (i = 0; i < count; i++) { 928 916 if (vars[i + k].dsc_enabled) { 929 917 initial_slack[i] = 930 - kbps_to_peak_pbn(params[i].bw_range.max_kbps) - vars[i + k].pbn; 918 + kbps_to_peak_pbn(params[i].bw_range.max_kbps, fec_overhead_multiplier_x1000) - vars[i + k].pbn; 931 919 bpp_increased[i] = false; 932 920 remaining_to_increase += 1; 933 921 } else { ··· 1024 1010 int next_index; 1025 1011 int remaining_to_try = 0; 1026 1012 int ret; 1013 + uint16_t fec_overhead_multiplier_x1000 = get_fec_overhead_multiplier(dc_link); 1027 1014 1028 1015 for (i = 0; i < count; i++) { 1029 1016 if (vars[i + k].dsc_enabled ··· 1054 1039 if (next_index == -1) 1055 1040 break; 1056 1041 1057 - vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.stream_kbps); 1042 + vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.stream_kbps, fec_overhead_multiplier_x1000); 1058 1043 ret = drm_dp_atomic_find_time_slots(state, 1059 1044 params[next_index].port->mgr, 1060 1045 params[next_index].port, ··· 1067 1052 vars[next_index].dsc_enabled = false; 1068 1053 vars[next_index].bpp_x16 = 0; 1069 1054 } else { 1070 - vars[next_index].pbn = kbps_to_peak_pbn( 1071 - params[next_index].bw_range.max_kbps); 1055 + vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.stream_kbps, fec_overhead_multiplier_x1000); 1072 1056 ret = drm_dp_atomic_find_time_slots(state, 1073 1057 params[next_index].port->mgr, 1074 1058 params[next_index].port, ··· 1096 1082 int count = 0; 1097 1083 int i, k, ret; 1098 1084 bool debugfs_overwrite = false; 1085 + uint16_t fec_overhead_multiplier_x1000 = get_fec_overhead_multiplier(dc_link); 1099 1086 1100 1087 memset(params, 0, sizeof(params)); 1101 1088 ··· 1161 1146 /* Try no compression */ 1162 1147 for (i = 0; i < count; i++) { 1163 1148 vars[i + k].aconnector = params[i].aconnector; 1164 - vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps); 1149 + vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps, fec_overhead_multiplier_x1000); 1165 1150 vars[i + k].dsc_enabled = false; 1166 1151 vars[i + k].bpp_x16 = 0; 1167 1152 ret = drm_dp_atomic_find_time_slots(state, params[i].port->mgr, params[i].port, ··· 1180 1165 /* Try max compression */ 1181 1166 for (i = 0; i < count; i++) { 1182 1167 if (params[i].compression_possible && params[i].clock_force_enable != DSC_CLK_FORCE_DISABLE) { 1183 - 
vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.min_kbps); 1168 + vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.min_kbps, fec_overhead_multiplier_x1000); 1184 1169 vars[i + k].dsc_enabled = true; 1185 1170 vars[i + k].bpp_x16 = params[i].bw_range.min_target_bpp_x16; 1186 1171 ret = drm_dp_atomic_find_time_slots(state, params[i].port->mgr, ··· 1188 1173 if (ret < 0) 1189 1174 return ret; 1190 1175 } else { 1191 - vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps); 1176 + vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps, fec_overhead_multiplier_x1000); 1192 1177 vars[i + k].dsc_enabled = false; 1193 1178 vars[i + k].bpp_x16 = 0; 1194 1179 ret = drm_dp_atomic_find_time_slots(state, params[i].port->mgr,
+3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
··· 46 46 #define SYNAPTICS_CASCADED_HUB_ID 0x5A 47 47 #define IS_SYNAPTICS_CASCADED_PANAMERA(devName, data) ((IS_SYNAPTICS_PANAMERA(devName) && ((int)data[2] == SYNAPTICS_CASCADED_HUB_ID)) ? 1 : 0) 48 48 49 + #define PBN_FEC_OVERHEAD_MULTIPLIER_8B_10B 1031 50 + #define PBN_FEC_OVERHEAD_MULTIPLIER_128B_132B 1000 51 + 49 52 enum mst_msg_ready_type { 50 53 NONE_MSG_RDY_EVENT = 0, 51 54 DOWN_REP_MSG_RDY_EVENT = 1,
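With the two multipliers above, the reworked kbps_to_peak_pbn() scales the stream rate by a fixed 1006/1000 factor plus the link-coding overhead factor (1031/1000 for 8b/10b, 1000/1000 for 128b/132b) and then converts to PBN, which is defined in units of 54/64 MB/s. A standalone C mirror of the arithmetic, fed with an illustrative ~12.5 Gbps stream rate:

    #include <stdint.h>
    #include <stdio.h>

    #define PBN_FEC_OVERHEAD_MULTIPLIER_8B_10B    1031
    #define PBN_FEC_OVERHEAD_MULTIPLIER_128B_132B 1000

    /* Same math as the driver: kbps * 1.006 * (fec_x1000 / 1000),
     * then PBN = ceil(peak_kbps * 64 / (54 * 8 * 1000)) */
    static int kbps_to_peak_pbn(int kbps, uint16_t fec_x1000)
    {
        uint64_t peak_kbps = (uint64_t)kbps * 1006 * fec_x1000 / (1000 * 1000);

        return (int)((peak_kbps * 64 + (54 * 8 * 1000) - 1) / (54 * 8 * 1000));
    }

    int main(void)
    {
        int kbps = 12500000;    /* illustrative stream bandwidth */

        printf("8b/10b:    %d PBN\n",
               kbps_to_peak_pbn(kbps, PBN_FEC_OVERHEAD_MULTIPLIER_8B_10B));
        printf("128b/132b: %d PBN\n",
               kbps_to_peak_pbn(kbps, PBN_FEC_OVERHEAD_MULTIPLIER_128B_132B));
        return 0;
    }

The 8b/10b figure comes out roughly 3% higher, which is the point of the change: the PBN estimate now accounts for the FEC overhead on 8b/10b MST links instead of ignoring it.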
+2 -2
drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
··· 3589 3589 (int)hubp->curs_attr.width || pos_cpy.x 3590 3590 <= (int)hubp->curs_attr.width + 3591 3591 pipe_ctx->plane_state->src_rect.x) { 3592 - pos_cpy.x = temp_x + viewport_width; 3592 + pos_cpy.x = 2 * viewport_width - temp_x; 3593 3593 } 3594 3594 } 3595 3595 } else { ··· 3682 3682 (int)hubp->curs_attr.width || pos_cpy.x 3683 3683 <= (int)hubp->curs_attr.width + 3684 3684 pipe_ctx->plane_state->src_rect.x) { 3685 - pos_cpy.x = 2 * viewport_width - temp_x; 3685 + pos_cpy.x = temp_x + viewport_width; 3686 3686 } 3687 3687 } 3688 3688 } else {
+3
drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
··· 1778 1778 dc->caps.color.mpc.ogam_rom_caps.hlg = 0; 1779 1779 dc->caps.color.mpc.ocsc = 1; 1780 1780 1781 + /* Use pipe context based otg sync logic */ 1782 + dc->config.use_pipe_ctx_sync_logic = true; 1783 + 1781 1784 dc->config.dc_mode_clk_limit_support = true; 1782 1785 dc->config.enable_windowed_mpo_odm = true; 1783 1786 /* read VBIOS LTTPR caps */
+6 -1
drivers/gpu/drm/amd/include/mes_v12_api_def.h
··· 97 97 MES_QUEUE_TYPE_SDMA, 98 98 99 99 MES_QUEUE_TYPE_MAX, 100 + MES_QUEUE_TYPE_SCHQ = MES_QUEUE_TYPE_MAX, 100 101 }; 101 102 102 103 struct MES_API_STATUS { ··· 243 242 uint32_t send_write_data : 1; 244 243 uint32_t os_tdr_timeout_override : 1; 245 244 uint32_t use_rs64mem_for_proc_gang_ctx : 1; 245 + uint32_t halt_on_misaligned_access : 1; 246 + uint32_t use_add_queue_unmap_flag_addr : 1; 247 + uint32_t enable_mes_sch_stb_log : 1; 248 + uint32_t limit_single_process : 1; 246 249 uint32_t unmapped_doorbell_handling: 2; 247 - uint32_t reserved : 15; 250 + uint32_t reserved : 11; 248 251 }; 249 252 uint32_t uint32_all; 250 253 };
+12
drivers/gpu/drm/drm_panel_orientation_quirks.c
··· 208 208 DMI_MATCH(DMI_BOARD_NAME, "KUN"), 209 209 }, 210 210 .driver_data = (void *)&lcd1600x2560_rightside_up, 211 + }, { /* AYN Loki Max */ 212 + .matches = { 213 + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ayn"), 214 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Loki Max"), 215 + }, 216 + .driver_data = (void *)&lcd1080x1920_leftside_up, 217 + }, { /* AYN Loki Zero */ 218 + .matches = { 219 + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ayn"), 220 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Loki Zero"), 221 + }, 222 + .driver_data = (void *)&lcd1080x1920_leftside_up, 211 223 }, { /* Chuwi HiBook (CWI514) */ 212 224 .matches = { 213 225 DMI_MATCH(DMI_BOARD_VENDOR, "Hampoo"),
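The two entries above match the AYN Loki Max and Loki Zero by exact DMI vendor/product strings and reuse the existing lcd1080x1920_leftside_up orientation data, so their portrait panels are reported with the correct default rotation. Adding another device follows the same shape; a hypothetical entry (vendor and product strings invented purely for illustration):

    }, {    /* Hypothetical portrait-panel handheld */
        .matches = {
            DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ExampleVendor"),
            DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Example Handheld"),
        },
        .driver_data = (void *)&lcd1080x1920_leftside_up,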
+2 -2
drivers/gpu/drm/mediatek/mtk_drm_drv.c
··· 539 539 } 540 540 541 541 /* IGT will check if the cursor size is configured */ 542 - drm->mode_config.cursor_width = drm->mode_config.max_width; 543 - drm->mode_config.cursor_height = drm->mode_config.max_height; 542 + drm->mode_config.cursor_width = 512; 543 + drm->mode_config.cursor_height = 512; 544 544 545 545 /* Use OVL device for all DMA memory allocations */ 546 546 crtc = drm_crtc_from_index(drm, 0);
+1 -3
drivers/gpu/drm/rockchip/inno_hdmi.c
··· 279 279 const u8 *buffer, size_t len) 280 280 { 281 281 struct inno_hdmi *hdmi = connector_to_inno_hdmi(connector); 282 - u8 packed_frame[HDMI_MAXIMUM_INFO_FRAME_SIZE]; 283 282 ssize_t i; 284 283 285 284 if (type != HDMI_INFOFRAME_TYPE_AVI) { ··· 290 291 inno_hdmi_disable_frame(connector, type); 291 292 292 293 for (i = 0; i < len; i++) 293 - hdmi_writeb(hdmi, HDMI_CONTROL_PACKET_ADDR + i, 294 - packed_frame[i]); 294 + hdmi_writeb(hdmi, HDMI_CONTROL_PACKET_ADDR + i, buffer[i]); 295 295 296 296 return 0; 297 297 }
+11 -3
drivers/gpu/drm/v3d/v3d_sched.c
··· 315 315 struct v3d_dev *v3d = job->base.v3d; 316 316 struct drm_device *dev = &v3d->drm; 317 317 struct dma_fence *fence; 318 - int i, csd_cfg0_reg, csd_cfg_reg_count; 318 + int i, csd_cfg0_reg; 319 319 320 320 v3d->csd_job = job; 321 321 ··· 335 335 v3d_switch_perfmon(v3d, &job->base); 336 336 337 337 csd_cfg0_reg = V3D_CSD_QUEUED_CFG0(v3d->ver); 338 - csd_cfg_reg_count = v3d->ver < 71 ? 6 : 7; 339 - for (i = 1; i <= csd_cfg_reg_count; i++) 338 + for (i = 1; i <= 6; i++) 340 339 V3D_CORE_WRITE(0, csd_cfg0_reg + 4 * i, job->args.cfg[i]); 340 + 341 + /* Although V3D 7.1 has an eighth configuration register, we are not 342 + * using it. Therefore, make sure it remains unused. 343 + * 344 + * XXX: Set the CFG7 register 345 + */ 346 + if (v3d->ver >= 71) 347 + V3D_CORE_WRITE(0, V3D_V7_CSD_QUEUED_CFG7, 0); 348 + 341 349 /* CFG0 write kicks off the job. */ 342 350 V3D_CORE_WRITE(0, csd_cfg0_reg, job->args.cfg[0]); 343 351
+50 -9
drivers/gpu/drm/xe/xe_device.c
··· 87 87 spin_unlock(&xe->clients.lock); 88 88 89 89 file->driver_priv = xef; 90 + kref_init(&xef->refcount); 91 + 90 92 return 0; 93 + } 94 + 95 + static void xe_file_destroy(struct kref *ref) 96 + { 97 + struct xe_file *xef = container_of(ref, struct xe_file, refcount); 98 + struct xe_device *xe = xef->xe; 99 + 100 + xa_destroy(&xef->exec_queue.xa); 101 + mutex_destroy(&xef->exec_queue.lock); 102 + xa_destroy(&xef->vm.xa); 103 + mutex_destroy(&xef->vm.lock); 104 + 105 + spin_lock(&xe->clients.lock); 106 + xe->clients.count--; 107 + spin_unlock(&xe->clients.lock); 108 + 109 + xe_drm_client_put(xef->client); 110 + kfree(xef); 111 + } 112 + 113 + /** 114 + * xe_file_get() - Take a reference to the xe file object 115 + * @xef: Pointer to the xe file 116 + * 117 + * Anyone with a pointer to xef must take a reference to the xe file 118 + * object using this call. 119 + * 120 + * Return: xe file pointer 121 + */ 122 + struct xe_file *xe_file_get(struct xe_file *xef) 123 + { 124 + kref_get(&xef->refcount); 125 + return xef; 126 + } 127 + 128 + /** 129 + * xe_file_put() - Drop a reference to the xe file object 130 + * @xef: Pointer to the xe file 131 + * 132 + * Used to drop reference to the xef object 133 + */ 134 + void xe_file_put(struct xe_file *xef) 135 + { 136 + kref_put(&xef->refcount, xe_file_destroy); 91 137 } 92 138 93 139 static void xe_file_close(struct drm_device *dev, struct drm_file *file) ··· 143 97 struct xe_vm *vm; 144 98 struct xe_exec_queue *q; 145 99 unsigned long idx; 100 + 101 + xe_pm_runtime_get(xe); 146 102 147 103 /* 148 104 * No need for exec_queue.lock here as there is no contention for it ··· 156 108 xe_exec_queue_kill(q); 157 109 xe_exec_queue_put(q); 158 110 } 159 - xa_destroy(&xef->exec_queue.xa); 160 - mutex_destroy(&xef->exec_queue.lock); 161 111 mutex_lock(&xef->vm.lock); 162 112 xa_for_each(&xef->vm.xa, idx, vm) 163 113 xe_vm_close_and_put(vm); 164 114 mutex_unlock(&xef->vm.lock); 165 - xa_destroy(&xef->vm.xa); 166 - mutex_destroy(&xef->vm.lock); 167 115 168 - spin_lock(&xe->clients.lock); 169 - xe->clients.count--; 170 - spin_unlock(&xe->clients.lock); 116 + xe_file_put(xef); 171 117 172 - xe_drm_client_put(xef->client); 173 - kfree(xef); 118 + xe_pm_runtime_put(xe); 174 119 } 175 120 176 121 static const struct drm_ioctl_desc xe_ioctls[] = {
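The xe hunks above give struct xe_file a reference count: kref_init() at open, xe_file_get()/xe_file_put() for anyone holding a pointer (exec queues, below), and xe_file_destroy() running exactly once when the last reference drops, which is what lets xe_file_close() stop tearing down the xarrays directly. This is the standard kref idiom; stripped to its shape with an illustrative object type:

    #include <linux/kref.h>
    #include <linux/slab.h>

    struct example_obj {
        struct kref refcount;   /* kref_init(&obj->refcount) where obj is allocated */
        /* ... payload ... */
    };

    static void example_destroy(struct kref *ref)
    {
        struct example_obj *obj = container_of(ref, struct example_obj, refcount);

        kfree(obj);             /* runs once, on the final put */
    }

    static struct example_obj *example_get(struct example_obj *obj)
    {
        kref_get(&obj->refcount);
        return obj;
    }

    static void example_put(struct example_obj *obj)
    {
        kref_put(&obj->refcount, example_destroy);
    }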
+3
drivers/gpu/drm/xe/xe_device.h
··· 170 170 171 171 void xe_device_declare_wedged(struct xe_device *xe); 172 172 173 + struct xe_file *xe_file_get(struct xe_file *xef); 174 + void xe_file_put(struct xe_file *xef); 175 + 173 176 #endif
+3
drivers/gpu/drm/xe/xe_device_types.h
··· 566 566 567 567 /** @client: drm client */ 568 568 struct xe_drm_client *client; 569 + 570 + /** @refcount: ref count of this xe file */ 571 + struct kref refcount; 569 572 }; 570 573 571 574 #endif
+1 -4
drivers/gpu/drm/xe/xe_drm_client.c
··· 251 251 252 252 /* Accumulate all the exec queues from this client */ 253 253 mutex_lock(&xef->exec_queue.lock); 254 - xa_for_each(&xef->exec_queue.xa, i, q) { 254 + xa_for_each(&xef->exec_queue.xa, i, q) 255 255 xe_exec_queue_update_run_ticks(q); 256 - xef->run_ticks[q->class] += q->run_ticks - q->old_run_ticks; 257 - q->old_run_ticks = q->run_ticks; 258 - } 259 256 mutex_unlock(&xef->exec_queue.lock); 260 257 261 258 /* Get the total GPU cycles */
+9 -1
drivers/gpu/drm/xe/xe_exec_queue.c
··· 37 37 { 38 38 if (q->vm) 39 39 xe_vm_put(q->vm); 40 + 41 + if (q->xef) 42 + xe_file_put(q->xef); 43 + 40 44 kfree(q); 41 45 } 42 46 ··· 653 649 goto kill_exec_queue; 654 650 655 651 args->exec_queue_id = id; 652 + q->xef = xe_file_get(xef); 656 653 657 654 return 0; 658 655 ··· 767 762 */ 768 763 void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q) 769 764 { 765 + struct xe_file *xef; 770 766 struct xe_lrc *lrc; 771 767 u32 old_ts, new_ts; 772 768 ··· 779 773 if (!q->vm || !q->vm->xef) 780 774 return; 781 775 776 + xef = q->vm->xef; 777 + 782 778 /* 783 779 * Only sample the first LRC. For parallel submission, all of them are 784 780 * scheduled together and we compensate that below by multiplying by ··· 791 783 */ 792 784 lrc = q->lrc[0]; 793 785 new_ts = xe_lrc_update_timestamp(lrc, &old_ts); 794 - q->run_ticks += (new_ts - old_ts) * q->width; 786 + xef->run_ticks[q->class] += (new_ts - old_ts) * q->width; 795 787 } 796 788 797 789 void xe_exec_queue_kill(struct xe_exec_queue *q)
+3 -4
drivers/gpu/drm/xe/xe_exec_queue_types.h
··· 38 38 * a kernel object. 39 39 */ 40 40 struct xe_exec_queue { 41 + /** @xef: Back pointer to xe file if this is user created exec queue */ 42 + struct xe_file *xef; 43 + 41 44 /** @gt: graphics tile this exec queue can submit to */ 42 45 struct xe_gt *gt; 43 46 /** ··· 142 139 * Protected by @vm's resv. Unused if @vm == NULL. 143 140 */ 144 141 u64 tlb_flush_seqno; 145 - /** @old_run_ticks: prior hw engine class run time in ticks for this exec queue */ 146 - u64 old_run_ticks; 147 - /** @run_ticks: hw engine class run time in ticks for this exec queue */ 148 - u64 run_ticks; 149 142 /** @lrc: logical ring context for this exec queue */ 150 143 struct xe_lrc *lrc[]; 151 144 };
+8 -3
drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
··· 1927 1927 { 1928 1928 struct xe_gt *primary_gt = gt_to_tile(gt)->primary_gt; 1929 1929 struct xe_device *xe = gt_to_xe(gt); 1930 + bool is_primary = !xe_gt_is_media_type(gt); 1930 1931 bool valid_ggtt, valid_ctxs, valid_dbs; 1931 1932 bool valid_any, valid_all; 1932 1933 ··· 1936 1935 valid_dbs = pf_get_vf_config_dbs(gt, vfid); 1937 1936 1938 1937 /* note that GuC doorbells are optional */ 1939 - valid_any = valid_ggtt || valid_ctxs || valid_dbs; 1940 - valid_all = valid_ggtt && valid_ctxs; 1938 + valid_any = valid_ctxs || valid_dbs; 1939 + valid_all = valid_ctxs; 1940 + 1941 + /* and GGTT/LMEM is configured on primary GT only */ 1942 + valid_all = valid_all && valid_ggtt; 1943 + valid_any = valid_any || (valid_ggtt && is_primary); 1941 1944 1942 1945 if (IS_DGFX(xe)) { 1943 1946 bool valid_lmem = pf_get_vf_config_ggtt(primary_gt, vfid); 1944 1947 1945 - valid_any = valid_any || valid_lmem; 1948 + valid_any = valid_any || (valid_lmem && is_primary); 1946 1949 valid_all = valid_all && valid_lmem; 1947 1950 } 1948 1951
+1 -1
drivers/gpu/drm/xe/xe_gt_sriov_vf.c
··· 850 850 851 851 xe_gt_assert(gt, IS_SRIOV_VF(gt_to_xe(gt))); 852 852 853 - return bsearch(&key, runtime->regs, runtime->regs_size, sizeof(key), 853 + return bsearch(&key, runtime->regs, runtime->num_regs, sizeof(key), 854 854 vf_runtime_reg_cmp); 855 855 } 856 856
+108 -91
drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
··· 13 13 #include "xe_guc.h" 14 14 #include "xe_guc_ct.h" 15 15 #include "xe_mmio.h" 16 + #include "xe_pm.h" 16 17 #include "xe_sriov.h" 17 18 #include "xe_trace.h" 18 19 #include "regs/xe_guc_regs.h" 20 + 21 + #define FENCE_STACK_BIT DMA_FENCE_FLAG_USER_BITS 19 22 20 23 /* 21 24 * TLB inval depends on pending commands in the CT queue and then the real ··· 36 33 return hw_tlb_timeout + 2 * delay; 37 34 } 38 35 36 + static void 37 + __invalidation_fence_signal(struct xe_device *xe, struct xe_gt_tlb_invalidation_fence *fence) 38 + { 39 + bool stack = test_bit(FENCE_STACK_BIT, &fence->base.flags); 40 + 41 + trace_xe_gt_tlb_invalidation_fence_signal(xe, fence); 42 + xe_gt_tlb_invalidation_fence_fini(fence); 43 + dma_fence_signal(&fence->base); 44 + if (!stack) 45 + dma_fence_put(&fence->base); 46 + } 47 + 48 + static void 49 + invalidation_fence_signal(struct xe_device *xe, struct xe_gt_tlb_invalidation_fence *fence) 50 + { 51 + list_del(&fence->link); 52 + __invalidation_fence_signal(xe, fence); 53 + } 39 54 40 55 static void xe_gt_tlb_fence_timeout(struct work_struct *work) 41 56 { ··· 75 54 xe_gt_err(gt, "TLB invalidation fence timeout, seqno=%d recv=%d", 76 55 fence->seqno, gt->tlb_invalidation.seqno_recv); 77 56 78 - list_del(&fence->link); 79 57 fence->base.error = -ETIME; 80 - dma_fence_signal(&fence->base); 81 - dma_fence_put(&fence->base); 58 + invalidation_fence_signal(xe, fence); 82 59 } 83 60 if (!list_empty(&gt->tlb_invalidation.pending_fences)) 84 61 queue_delayed_work(system_wq, ··· 106 87 return 0; 107 88 } 108 89 109 - static void 110 - __invalidation_fence_signal(struct xe_device *xe, struct xe_gt_tlb_invalidation_fence *fence) 111 - { 112 - trace_xe_gt_tlb_invalidation_fence_signal(xe, fence); 113 - dma_fence_signal(&fence->base); 114 - dma_fence_put(&fence->base); 115 - } 116 - 117 - static void 118 - invalidation_fence_signal(struct xe_device *xe, struct xe_gt_tlb_invalidation_fence *fence) 119 - { 120 - list_del(&fence->link); 121 - __invalidation_fence_signal(xe, fence); 122 - } 123 - 124 90 /** 125 91 * xe_gt_tlb_invalidation_reset - Initialize GT TLB invalidation reset 126 92 * @gt: graphics tile ··· 115 111 void xe_gt_tlb_invalidation_reset(struct xe_gt *gt) 116 112 { 117 113 struct xe_gt_tlb_invalidation_fence *fence, *next; 118 - struct xe_guc *guc = &gt->uc.guc; 119 114 int pending_seqno; 120 115 121 116 /* ··· 137 134 else 138 135 pending_seqno = gt->tlb_invalidation.seqno - 1; 139 136 WRITE_ONCE(gt->tlb_invalidation.seqno_recv, pending_seqno); 140 - wake_up_all(&guc->ct.wq); 141 137 142 138 list_for_each_entry_safe(fence, next, 143 139 &gt->tlb_invalidation.pending_fences, link) ··· 167 165 int seqno; 168 166 int ret; 169 167 168 + xe_gt_assert(gt, fence); 169 + 170 170 /* 171 171 * XXX: The seqno algorithm relies on TLB invalidation being processed 172 172 * in order which they currently are, if that changes the algorithm will ··· 177 173 178 174 mutex_lock(&guc->ct.lock); 179 175 seqno = gt->tlb_invalidation.seqno; 180 - if (fence) { 181 - fence->seqno = seqno; 182 - trace_xe_gt_tlb_invalidation_fence_send(xe, fence); 183 - } 176 + fence->seqno = seqno; 177 + trace_xe_gt_tlb_invalidation_fence_send(xe, fence); 184 178 action[1] = seqno; 185 179 ret = xe_guc_ct_send_locked(&guc->ct, action, len, 186 180 G2H_LEN_DW_TLB_INVALIDATE, 1); ··· 211 209 TLB_INVALIDATION_SEQNO_MAX; 212 210 if (!gt->tlb_invalidation.seqno) 213 211 gt->tlb_invalidation.seqno = 1; 214 - ret = seqno; 215 212 } 216 213 mutex_unlock(&guc->ct.lock); 217 214 ··· 224 223 /** 225 224 * 
xe_gt_tlb_invalidation_guc - Issue a TLB invalidation on this GT for the GuC 226 225 * @gt: graphics tile 226 + * @fence: invalidation fence which will be signal on TLB invalidation 227 + * completion 227 228 * 228 229 * Issue a TLB invalidation for the GuC. Completion of TLB is asynchronous and 229 - * caller can use seqno + xe_gt_tlb_invalidation_wait to wait for completion. 230 + * caller can use the invalidation fence to wait for completion. 230 231 * 231 - * Return: Seqno which can be passed to xe_gt_tlb_invalidation_wait on success, 232 - * negative error code on error. 232 + * Return: 0 on success, negative error code on error 233 233 */ 234 - static int xe_gt_tlb_invalidation_guc(struct xe_gt *gt) 234 + static int xe_gt_tlb_invalidation_guc(struct xe_gt *gt, 235 + struct xe_gt_tlb_invalidation_fence *fence) 235 236 { 236 237 u32 action[] = { 237 238 XE_GUC_ACTION_TLB_INVALIDATION, ··· 241 238 MAKE_INVAL_OP(XE_GUC_TLB_INVAL_GUC), 242 239 }; 243 240 244 - return send_tlb_invalidation(&gt->uc.guc, NULL, action, 241 + return send_tlb_invalidation(&gt->uc.guc, fence, action, 245 242 ARRAY_SIZE(action)); 246 243 } 247 244 ··· 260 257 261 258 if (xe_guc_ct_enabled(&gt->uc.guc.ct) && 262 259 gt->uc.guc.submission_state.enabled) { 263 - int seqno; 260 + struct xe_gt_tlb_invalidation_fence fence; 261 + int ret; 264 262 265 - seqno = xe_gt_tlb_invalidation_guc(gt); 266 - if (seqno <= 0) 267 - return seqno; 263 + xe_gt_tlb_invalidation_fence_init(gt, &fence, true); 264 + ret = xe_gt_tlb_invalidation_guc(gt, &fence); 265 + if (ret < 0) { 266 + xe_gt_tlb_invalidation_fence_fini(&fence); 267 + return ret; 268 + } 268 269 269 - xe_gt_tlb_invalidation_wait(gt, seqno); 270 + xe_gt_tlb_invalidation_fence_wait(&fence); 270 271 } else if (xe_device_uc_enabled(xe) && !xe_device_wedged(xe)) { 271 272 if (IS_SRIOV_VF(xe)) 272 273 return 0; ··· 297 290 * 298 291 * @gt: graphics tile 299 292 * @fence: invalidation fence which will be signal on TLB invalidation 300 - * completion, can be NULL 293 + * completion 301 294 * @start: start address 302 295 * @end: end address 303 296 * @asid: address space id 304 297 * 305 298 * Issue a range based TLB invalidation if supported, if not fallback to a full 306 - * TLB invalidation. Completion of TLB is asynchronous and caller can either use 307 - * the invalidation fence or seqno + xe_gt_tlb_invalidation_wait to wait for 308 - * completion. 299 + * TLB invalidation. Completion of TLB is asynchronous and caller can use 300 + * the invalidation fence to wait for completion. 309 301 * 310 - * Return: Seqno which can be passed to xe_gt_tlb_invalidation_wait on success, 311 - * negative error code on error. 302 + * Return: Negative error code on error, 0 on success 312 303 */ 313 304 int xe_gt_tlb_invalidation_range(struct xe_gt *gt, 314 305 struct xe_gt_tlb_invalidation_fence *fence, ··· 317 312 u32 action[MAX_TLB_INVALIDATION_LEN]; 318 313 int len = 0; 319 314 315 + xe_gt_assert(gt, fence); 316 + 320 317 /* Execlists not supported */ 321 318 if (gt_to_xe(gt)->info.force_execlist) { 322 - if (fence) 323 - __invalidation_fence_signal(xe, fence); 324 - 319 + __invalidation_fence_signal(xe, fence); 325 320 return 0; 326 321 } 327 322 ··· 387 382 * @vma: VMA to invalidate 388 383 * 389 384 * Issue a range based TLB invalidation if supported, if not fallback to a full 390 - * TLB invalidation. Completion of TLB is asynchronous and caller can either use 391 - * the invalidation fence or seqno + xe_gt_tlb_invalidation_wait to wait for 392 - * completion. 
385 + * TLB invalidation. Completion of TLB is asynchronous and caller can use 386 + * the invalidation fence to wait for completion. 393 387 * 394 - * Return: Seqno which can be passed to xe_gt_tlb_invalidation_wait on success, 395 - * negative error code on error. 388 + * Return: Negative error code on error, 0 on success 396 389 */ 397 390 int xe_gt_tlb_invalidation_vma(struct xe_gt *gt, 398 391 struct xe_gt_tlb_invalidation_fence *fence, ··· 401 398 return xe_gt_tlb_invalidation_range(gt, fence, xe_vma_start(vma), 402 399 xe_vma_end(vma), 403 400 xe_vma_vm(vma)->usm.asid); 404 - } 405 - 406 - /** 407 - * xe_gt_tlb_invalidation_wait - Wait for TLB to complete 408 - * @gt: graphics tile 409 - * @seqno: seqno to wait which was returned from xe_gt_tlb_invalidation 410 - * 411 - * Wait for tlb_timeout_jiffies() for a TLB invalidation to complete. 412 - * 413 - * Return: 0 on success, -ETIME on TLB invalidation timeout 414 - */ 415 - int xe_gt_tlb_invalidation_wait(struct xe_gt *gt, int seqno) 416 - { 417 - struct xe_guc *guc = &gt->uc.guc; 418 - int ret; 419 - 420 - /* Execlists not supported */ 421 - if (gt_to_xe(gt)->info.force_execlist) 422 - return 0; 423 - 424 - /* 425 - * XXX: See above, this algorithm only works if seqno are always in 426 - * order 427 - */ 428 - ret = wait_event_timeout(guc->ct.wq, 429 - tlb_invalidation_seqno_past(gt, seqno), 430 - tlb_timeout_jiffies(gt)); 431 - if (!ret) { 432 - struct drm_printer p = xe_gt_err_printer(gt); 433 - 434 - xe_gt_err(gt, "TLB invalidation time'd out, seqno=%d, recv=%d\n", 435 - seqno, gt->tlb_invalidation.seqno_recv); 436 - xe_guc_ct_print(&guc->ct, &p, true); 437 - return -ETIME; 438 - } 439 - 440 - return 0; 441 401 } 442 402 443 403 /** ··· 446 480 return 0; 447 481 } 448 482 449 - /* 450 - * wake_up_all() and wait_event_timeout() already have the correct 451 - * barriers. 452 - */ 453 483 WRITE_ONCE(gt->tlb_invalidation.seqno_recv, msg[0]); 454 - wake_up_all(&guc->ct.wq); 455 484 456 485 list_for_each_entry_safe(fence, next, 457 486 &gt->tlb_invalidation.pending_fences, link) { ··· 468 507 spin_unlock_irqrestore(&gt->tlb_invalidation.pending_lock, flags); 469 508 470 509 return 0; 510 + } 511 + 512 + static const char * 513 + invalidation_fence_get_driver_name(struct dma_fence *dma_fence) 514 + { 515 + return "xe"; 516 + } 517 + 518 + static const char * 519 + invalidation_fence_get_timeline_name(struct dma_fence *dma_fence) 520 + { 521 + return "invalidation_fence"; 522 + } 523 + 524 + static const struct dma_fence_ops invalidation_fence_ops = { 525 + .get_driver_name = invalidation_fence_get_driver_name, 526 + .get_timeline_name = invalidation_fence_get_timeline_name, 527 + }; 528 + 529 + /** 530 + * xe_gt_tlb_invalidation_fence_init - Initialize TLB invalidation fence 531 + * @gt: GT 532 + * @fence: TLB invalidation fence to initialize 533 + * @stack: fence is stack variable 534 + * 535 + * Initialize TLB invalidation fence for use. xe_gt_tlb_invalidation_fence_fini 536 + * must be called if fence is not signaled. 
537 + */ 538 + void xe_gt_tlb_invalidation_fence_init(struct xe_gt *gt, 539 + struct xe_gt_tlb_invalidation_fence *fence, 540 + bool stack) 541 + { 542 + xe_pm_runtime_get_noresume(gt_to_xe(gt)); 543 + 544 + spin_lock_irq(&gt->tlb_invalidation.lock); 545 + dma_fence_init(&fence->base, &invalidation_fence_ops, 546 + &gt->tlb_invalidation.lock, 547 + dma_fence_context_alloc(1), 1); 548 + spin_unlock_irq(&gt->tlb_invalidation.lock); 549 + INIT_LIST_HEAD(&fence->link); 550 + if (stack) 551 + set_bit(FENCE_STACK_BIT, &fence->base.flags); 552 + else 553 + dma_fence_get(&fence->base); 554 + fence->gt = gt; 555 + } 556 + 557 + /** 558 + * xe_gt_tlb_invalidation_fence_fini - Finalize TLB invalidation fence 559 + * @fence: TLB invalidation fence to finalize 560 + * 561 + * Drop PM ref which fence took durinig init. 562 + */ 563 + void xe_gt_tlb_invalidation_fence_fini(struct xe_gt_tlb_invalidation_fence *fence) 564 + { 565 + xe_pm_runtime_put(gt_to_xe(fence->gt)); 471 566 }
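With the seqno/wait interface gone, callers drive TLB invalidation through the new fence helpers: initialize a fence (on the stack for synchronous callers), issue the invalidation, then wait on the fence; if the send fails, the fence must be finalized by hand to drop the PM reference taken at init. A hedged sketch of that flow, using placeholder start/end/asid values and a hypothetical wrapper function:

    static int my_range_invalidate(struct xe_gt *gt, u64 start, u64 end, u32 asid)
    {
    	struct xe_gt_tlb_invalidation_fence fence;
    	int ret;

    	/* stack = true: fence lives on the stack, no extra dma_fence_get() */
    	xe_gt_tlb_invalidation_fence_init(gt, &fence, true);

    	ret = xe_gt_tlb_invalidation_range(gt, &fence, start, end, asid);
    	if (ret < 0) {
    		/* nothing was sent; drop the PM ref the fence took at init */
    		xe_gt_tlb_invalidation_fence_fini(&fence);
    		return ret;
    	}

    	/* signaled (and finalized) by the G2H handler or the timeout path */
    	xe_gt_tlb_invalidation_fence_wait(&fence);
    	return 0;
    }

The ggtt path above and the xe_vm.c conversion below follow this same pattern.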
+11 -1
drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
··· 23 23 int xe_gt_tlb_invalidation_range(struct xe_gt *gt, 24 24 struct xe_gt_tlb_invalidation_fence *fence, 25 25 u64 start, u64 end, u32 asid); 26 - int xe_gt_tlb_invalidation_wait(struct xe_gt *gt, int seqno); 27 26 int xe_guc_tlb_invalidation_done_handler(struct xe_guc *guc, u32 *msg, u32 len); 27 + 28 + void xe_gt_tlb_invalidation_fence_init(struct xe_gt *gt, 29 + struct xe_gt_tlb_invalidation_fence *fence, 30 + bool stack); 31 + void xe_gt_tlb_invalidation_fence_fini(struct xe_gt_tlb_invalidation_fence *fence); 32 + 33 + static inline void 34 + xe_gt_tlb_invalidation_fence_wait(struct xe_gt_tlb_invalidation_fence *fence) 35 + { 36 + dma_fence_wait(&fence->base, false); 37 + } 28 38 29 39 #endif /* _XE_GT_TLB_INVALIDATION_ */
+4
drivers/gpu/drm/xe/xe_gt_tlb_invalidation_types.h
··· 8 8 9 9 #include <linux/dma-fence.h> 10 10 11 + struct xe_gt; 12 + 11 13 /** 12 14 * struct xe_gt_tlb_invalidation_fence - XE GT TLB invalidation fence 13 15 * ··· 19 17 struct xe_gt_tlb_invalidation_fence { 20 18 /** @base: dma fence base */ 21 19 struct dma_fence base; 20 + /** @gt: GT which fence belong to */ 21 + struct xe_gt *gt; 22 22 /** @link: link into list of pending tlb fences */ 23 23 struct list_head link; 24 24 /** @seqno: seqno of TLB invalidation to signal fence one */
+9 -1
drivers/gpu/drm/xe/xe_guc_ct.c
··· 327 327 xe_gt_assert(ct_to_gt(ct), ct->g2h_outstanding == 0 || 328 328 state == XE_GUC_CT_STATE_STOPPED); 329 329 330 + if (ct->g2h_outstanding) 331 + xe_pm_runtime_put(ct_to_xe(ct)); 330 332 ct->g2h_outstanding = 0; 331 333 ct->state = state; 332 334 ··· 497 495 static void __g2h_reserve_space(struct xe_guc_ct *ct, u32 g2h_len, u32 num_g2h) 498 496 { 499 497 xe_gt_assert(ct_to_gt(ct), g2h_len <= ct->ctbs.g2h.info.space); 498 + xe_gt_assert(ct_to_gt(ct), (!g2h_len && !num_g2h) || 499 + (g2h_len && num_g2h)); 500 500 501 501 if (g2h_len) { 502 502 lockdep_assert_held(&ct->fast_lock); 503 + 504 + if (!ct->g2h_outstanding) 505 + xe_pm_runtime_get_noresume(ct_to_xe(ct)); 503 506 504 507 ct->ctbs.g2h.info.space -= g2h_len; 505 508 ct->g2h_outstanding += num_g2h; ··· 518 511 ct->ctbs.g2h.info.size - ct->ctbs.g2h.info.resv_space); 519 512 520 513 ct->ctbs.g2h.info.space += g2h_len; 521 - --ct->g2h_outstanding; 514 + if (!--ct->g2h_outstanding) 515 + xe_pm_runtime_put(ct_to_xe(ct)); 522 516 } 523 517 524 518 static void g2h_release_space(struct xe_guc_ct *ct, u32 g2h_len)
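The xe_guc_ct.c change ties a runtime-PM reference to the G2H credit accounting: the first outstanding response takes a reference with xe_pm_runtime_get_noresume() and releasing the last one drops it, so the device cannot runtime-suspend while responses are still expected. A condensed sketch of the counting pattern (the helper names and the bare u32 counter are illustrative; the real code operates on ct->g2h_outstanding under ct->fast_lock):

    static void g2h_outstanding_add(struct xe_device *xe, u32 *outstanding, u32 num)
    {
    	if (!*outstanding)
    		xe_pm_runtime_get_noresume(xe);	/* 0 -> N: take PM ref */
    	*outstanding += num;
    }

    static void g2h_outstanding_release(struct xe_device *xe, u32 *outstanding)
    {
    	if (!--(*outstanding))
    		xe_pm_runtime_put(xe);		/* N -> 0: drop PM ref */
    }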
+4
drivers/gpu/drm/xe/xe_guc_submit.c
··· 1393 1393 default: 1394 1394 XE_WARN_ON("Unknown message type"); 1395 1395 } 1396 + 1397 + xe_pm_runtime_put(guc_to_xe(exec_queue_to_guc(msg->private_data))); 1396 1398 } 1397 1399 1398 1400 static const struct drm_sched_backend_ops drm_sched_ops = { ··· 1484 1482 static void guc_exec_queue_add_msg(struct xe_exec_queue *q, struct xe_sched_msg *msg, 1485 1483 u32 opcode) 1486 1484 { 1485 + xe_pm_runtime_get_noresume(guc_to_xe(exec_queue_to_guc(q))); 1486 + 1487 1487 INIT_LIST_HEAD(&msg->link); 1488 1488 msg->opcode = opcode; 1489 1489 msg->private_data = q;
+1 -25
drivers/gpu/drm/xe/xe_pt.c
··· 1115 1115 u32 asid; 1116 1116 }; 1117 1117 1118 - static const char * 1119 - invalidation_fence_get_driver_name(struct dma_fence *dma_fence) 1120 - { 1121 - return "xe"; 1122 - } 1123 - 1124 - static const char * 1125 - invalidation_fence_get_timeline_name(struct dma_fence *dma_fence) 1126 - { 1127 - return "invalidation_fence"; 1128 - } 1129 - 1130 - static const struct dma_fence_ops invalidation_fence_ops = { 1131 - .get_driver_name = invalidation_fence_get_driver_name, 1132 - .get_timeline_name = invalidation_fence_get_timeline_name, 1133 - }; 1134 - 1135 1118 static void invalidation_fence_cb(struct dma_fence *fence, 1136 1119 struct dma_fence_cb *cb) 1137 1120 { ··· 1153 1170 1154 1171 trace_xe_gt_tlb_invalidation_fence_create(gt_to_xe(gt), &ifence->base); 1155 1172 1156 - spin_lock_irq(&gt->tlb_invalidation.lock); 1157 - dma_fence_init(&ifence->base.base, &invalidation_fence_ops, 1158 - &gt->tlb_invalidation.lock, 1159 - dma_fence_context_alloc(1), 1); 1160 - spin_unlock_irq(&gt->tlb_invalidation.lock); 1173 + xe_gt_tlb_invalidation_fence_init(gt, &ifence->base, false); 1161 1174 1162 - INIT_LIST_HEAD(&ifence->base.link); 1163 - 1164 - dma_fence_get(&ifence->base.base); /* Ref for caller */ 1165 1175 ifence->fence = fence; 1166 1176 ifence->gt = gt; 1167 1177 ifence->start = start;
+8 -4
drivers/gpu/drm/xe/xe_sync.c
··· 53 53 u64 value) 54 54 { 55 55 struct xe_user_fence *ufence; 56 + u64 __user *ptr = u64_to_user_ptr(addr); 57 + 58 + if (!access_ok(ptr, sizeof(ptr))) 59 + return ERR_PTR(-EFAULT); 56 60 57 61 ufence = kmalloc(sizeof(*ufence), GFP_KERNEL); 58 62 if (!ufence) 59 - return NULL; 63 + return ERR_PTR(-ENOMEM); 60 64 61 65 ufence->xe = xe; 62 66 kref_init(&ufence->refcount); 63 - ufence->addr = u64_to_user_ptr(addr); 67 + ufence->addr = ptr; 64 68 ufence->value = value; 65 69 ufence->mm = current->mm; 66 70 mmgrab(ufence->mm); ··· 187 183 } else { 188 184 sync->ufence = user_fence_create(xe, sync_in.addr, 189 185 sync_in.timeline_value); 190 - if (XE_IOCTL_DBG(xe, !sync->ufence)) 191 - return -ENOMEM; 186 + if (XE_IOCTL_DBG(xe, IS_ERR(sync->ufence))) 187 + return PTR_ERR(sync->ufence); 192 188 } 193 189 194 190 break;
+23 -15
drivers/gpu/drm/xe/xe_vm.c
··· 1601 1601 XE_WARN_ON(vm->pt_root[id]); 1602 1602 1603 1603 trace_xe_vm_free(vm); 1604 + 1605 + if (vm->xef) 1606 + xe_file_put(vm->xef); 1607 + 1604 1608 kfree(vm); 1605 1609 } 1606 1610 ··· 1920 1916 } 1921 1917 1922 1918 args->vm_id = id; 1923 - vm->xef = xef; 1919 + vm->xef = xe_file_get(xef); 1924 1920 1925 1921 /* Record BO memory for VM pagetable created against client */ 1926 1922 for_each_tile(tile, xe, id) ··· 3341 3337 { 3342 3338 struct xe_device *xe = xe_vma_vm(vma)->xe; 3343 3339 struct xe_tile *tile; 3340 + struct xe_gt_tlb_invalidation_fence fence[XE_MAX_TILES_PER_DEVICE]; 3344 3341 u32 tile_needs_invalidate = 0; 3345 - int seqno[XE_MAX_TILES_PER_DEVICE]; 3346 3342 u8 id; 3347 - int ret; 3343 + int ret = 0; 3348 3344 3349 3345 xe_assert(xe, !xe_vma_is_null(vma)); 3350 3346 trace_xe_vma_invalidate(vma); ··· 3369 3365 3370 3366 for_each_tile(tile, xe, id) { 3371 3367 if (xe_pt_zap_ptes(tile, vma)) { 3372 - tile_needs_invalidate |= BIT(id); 3373 3368 xe_device_wmb(xe); 3369 + xe_gt_tlb_invalidation_fence_init(tile->primary_gt, 3370 + &fence[id], true); 3371 + 3374 3372 /* 3375 3373 * FIXME: We potentially need to invalidate multiple 3376 3374 * GTs within the tile 3377 3375 */ 3378 - seqno[id] = xe_gt_tlb_invalidation_vma(tile->primary_gt, NULL, vma); 3379 - if (seqno[id] < 0) 3380 - return seqno[id]; 3376 + ret = xe_gt_tlb_invalidation_vma(tile->primary_gt, 3377 + &fence[id], vma); 3378 + if (ret < 0) { 3379 + xe_gt_tlb_invalidation_fence_fini(&fence[id]); 3380 + goto wait; 3381 + } 3382 + 3383 + tile_needs_invalidate |= BIT(id); 3381 3384 } 3382 3385 } 3383 3386 3384 - for_each_tile(tile, xe, id) { 3385 - if (tile_needs_invalidate & BIT(id)) { 3386 - ret = xe_gt_tlb_invalidation_wait(tile->primary_gt, seqno[id]); 3387 - if (ret < 0) 3388 - return ret; 3389 - } 3390 - } 3387 + wait: 3388 + for_each_tile(tile, xe, id) 3389 + if (tile_needs_invalidate & BIT(id)) 3390 + xe_gt_tlb_invalidation_fence_wait(&fence[id]); 3391 3391 3392 3392 vma->tile_invalidated = vma->tile_mask; 3393 3393 3394 - return 0; 3394 + return ret; 3395 3395 } 3396 3396 3397 3397 struct xe_vm_snapshot {
+3 -1
drivers/i2c/busses/i2c-qcom-geni.c
··· 986 986 return ret; 987 987 988 988 ret = clk_prepare_enable(gi2c->core_clk); 989 - if (ret) 989 + if (ret) { 990 + geni_icc_disable(&gi2c->se); 990 991 return ret; 992 + } 991 993 992 994 ret = geni_se_resources_on(&gi2c->se); 993 995 if (ret) {
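The i2c-qcom-geni fix addresses an unbalanced error path: once the interconnect path has been voted up earlier in the resume sequence, a failure to enable the core clock must undo that vote before returning. A hedged sketch of the resulting unwind order, written with goto labels for clarity; the preceding geni_icc_enable() call and the label names are assumptions inferred from the fix, not lines shown in the hunk:

    	ret = geni_icc_enable(&gi2c->se);
    	if (ret)
    		return ret;

    	ret = clk_prepare_enable(gi2c->core_clk);
    	if (ret)
    		goto err_icc;

    	ret = geni_se_resources_on(&gi2c->se);
    	if (ret)
    		goto err_clk;

    	return 0;

    err_clk:
    	clk_disable_unprepare(gi2c->core_clk);
    err_icc:
    	geni_icc_disable(&gi2c->se);
    	return ret;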
+2 -2
drivers/i2c/busses/i2c-tegra.c
··· 1802 1802 * domain. 1803 1803 * 1804 1804 * VI I2C device shouldn't be marked as IRQ-safe because VI I2C won't 1805 - * be used for atomic transfers. 1805 + * be used for atomic transfers. ACPI device is not IRQ safe also. 1806 1806 */ 1807 - if (!IS_VI(i2c_dev)) 1807 + if (!IS_VI(i2c_dev) && !has_acpi_companion(i2c_dev->dev)) 1808 1808 pm_runtime_irq_safe(i2c_dev->dev); 1809 1809 1810 1810 pm_runtime_enable(i2c_dev->dev);
+1
drivers/iommu/io-pgfault.c
··· 170 170 report_partial_fault(iopf_param, fault); 171 171 iopf_put_dev_fault_param(iopf_param); 172 172 /* A request that is not the last does not need to be ack'd */ 173 + return; 173 174 } 174 175 175 176 /*
+20 -2
drivers/md/dm-ioctl.c
··· 1181 1181 suspend_flags &= ~DM_SUSPEND_LOCKFS_FLAG; 1182 1182 if (param->flags & DM_NOFLUSH_FLAG) 1183 1183 suspend_flags |= DM_SUSPEND_NOFLUSH_FLAG; 1184 - if (!dm_suspended_md(md)) 1185 - dm_suspend(md, suspend_flags); 1184 + if (!dm_suspended_md(md)) { 1185 + r = dm_suspend(md, suspend_flags); 1186 + if (r) { 1187 + down_write(&_hash_lock); 1188 + hc = dm_get_mdptr(md); 1189 + if (hc && !hc->new_map) { 1190 + hc->new_map = new_map; 1191 + new_map = NULL; 1192 + } else { 1193 + r = -ENXIO; 1194 + } 1195 + up_write(&_hash_lock); 1196 + if (new_map) { 1197 + dm_sync_table(md); 1198 + dm_table_destroy(new_map); 1199 + } 1200 + dm_put(md); 1201 + return r; 1202 + } 1203 + } 1186 1204 1187 1205 old_size = dm_get_size(md); 1188 1206 old_map = dm_swap_table(md, new_map);
+2 -2
drivers/md/dm.c
··· 2737 2737 break; 2738 2738 2739 2739 if (signal_pending_state(task_state, current)) { 2740 - r = -EINTR; 2740 + r = -ERESTARTSYS; 2741 2741 break; 2742 2742 } 2743 2743 ··· 2762 2762 break; 2763 2763 2764 2764 if (signal_pending_state(task_state, current)) { 2765 - r = -EINTR; 2765 + r = -ERESTARTSYS; 2766 2766 break; 2767 2767 } 2768 2768
+2 -2
drivers/md/persistent-data/dm-space-map-metadata.c
··· 277 277 { 278 278 struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm); 279 279 280 - kfree(smm); 280 + kvfree(smm); 281 281 } 282 282 283 283 static int sm_metadata_get_nr_blocks(struct dm_space_map *sm, dm_block_t *count) ··· 772 772 { 773 773 struct sm_metadata *smm; 774 774 775 - smm = kmalloc(sizeof(*smm), GFP_KERNEL); 775 + smm = kvmalloc(sizeof(*smm), GFP_KERNEL); 776 776 if (!smm) 777 777 return ERR_PTR(-ENOMEM); 778 778
+10 -4
drivers/md/raid1.c
··· 617 617 return -1; 618 618 } 619 619 620 + static bool rdev_in_recovery(struct md_rdev *rdev, struct r1bio *r1_bio) 621 + { 622 + return !test_bit(In_sync, &rdev->flags) && 623 + rdev->recovery_offset < r1_bio->sector + r1_bio->sectors; 624 + } 625 + 620 626 static int choose_bb_rdev(struct r1conf *conf, struct r1bio *r1_bio, 621 627 int *max_sectors) 622 628 { ··· 641 635 642 636 rdev = conf->mirrors[disk].rdev; 643 637 if (!rdev || test_bit(Faulty, &rdev->flags) || 638 + rdev_in_recovery(rdev, r1_bio) || 644 639 test_bit(WriteMostly, &rdev->flags)) 645 640 continue; 646 641 ··· 680 673 681 674 rdev = conf->mirrors[disk].rdev; 682 675 if (!rdev || test_bit(Faulty, &rdev->flags) || 683 - !test_bit(WriteMostly, &rdev->flags)) 676 + !test_bit(WriteMostly, &rdev->flags) || 677 + rdev_in_recovery(rdev, r1_bio)) 684 678 continue; 685 679 686 680 /* there are no bad blocks, we can use this disk */ ··· 741 733 if (!rdev || test_bit(Faulty, &rdev->flags)) 742 734 return false; 743 735 744 - /* still in recovery */ 745 - if (!test_bit(In_sync, &rdev->flags) && 746 - rdev->recovery_offset < r1_bio->sector + r1_bio->sectors) 736 + if (rdev_in_recovery(rdev, r1_bio)) 747 737 return false; 748 738 749 739 /* don't read from slow disk unless have to */
+4 -31
drivers/media/usb/dvb-usb/dvb-usb-init.c
··· 23 23 module_param_named(force_pid_filter_usage, dvb_usb_force_pid_filter_usage, int, 0444); 24 24 MODULE_PARM_DESC(force_pid_filter_usage, "force all dvb-usb-devices to use a PID filter, if any (default: 0)."); 25 25 26 - static int dvb_usb_check_bulk_endpoint(struct dvb_usb_device *d, u8 endpoint) 27 - { 28 - if (endpoint) { 29 - int ret; 30 - 31 - ret = usb_pipe_type_check(d->udev, usb_sndbulkpipe(d->udev, endpoint)); 32 - if (ret) 33 - return ret; 34 - ret = usb_pipe_type_check(d->udev, usb_rcvbulkpipe(d->udev, endpoint)); 35 - if (ret) 36 - return ret; 37 - } 38 - return 0; 39 - } 40 - 41 - static void dvb_usb_clear_halt(struct dvb_usb_device *d, u8 endpoint) 42 - { 43 - if (endpoint) { 44 - usb_clear_halt(d->udev, usb_sndbulkpipe(d->udev, endpoint)); 45 - usb_clear_halt(d->udev, usb_rcvbulkpipe(d->udev, endpoint)); 46 - } 47 - } 48 - 49 26 static int dvb_usb_adapter_init(struct dvb_usb_device *d, short *adapter_nrs) 50 27 { 51 28 struct dvb_usb_adapter *adap; 52 29 int ret, n, o; 53 30 54 - ret = dvb_usb_check_bulk_endpoint(d, d->props.generic_bulk_ctrl_endpoint); 55 - if (ret) 56 - return ret; 57 - ret = dvb_usb_check_bulk_endpoint(d, d->props.generic_bulk_ctrl_endpoint_response); 58 - if (ret) 59 - return ret; 60 31 for (n = 0; n < d->props.num_adapters; n++) { 61 32 adap = &d->adapter[n]; 62 33 adap->dev = d; ··· 103 132 * when reloading the driver w/o replugging the device 104 133 * sometimes a timeout occurs, this helps 105 134 */ 106 - dvb_usb_clear_halt(d, d->props.generic_bulk_ctrl_endpoint); 107 - dvb_usb_clear_halt(d, d->props.generic_bulk_ctrl_endpoint_response); 135 + if (d->props.generic_bulk_ctrl_endpoint != 0) { 136 + usb_clear_halt(d->udev, usb_sndbulkpipe(d->udev, d->props.generic_bulk_ctrl_endpoint)); 137 + usb_clear_halt(d->udev, usb_rcvbulkpipe(d->udev, d->props.generic_bulk_ctrl_endpoint)); 138 + } 108 139 109 140 return 0; 110 141
+3 -19
drivers/misc/fastrpc.c
··· 2085 2085 return err; 2086 2086 } 2087 2087 2088 - static int is_attach_rejected(struct fastrpc_user *fl) 2089 - { 2090 - /* Check if the device node is non-secure */ 2091 - if (!fl->is_secure_dev) { 2092 - dev_dbg(&fl->cctx->rpdev->dev, "untrusted app trying to attach to privileged DSP PD\n"); 2093 - return -EACCES; 2094 - } 2095 - return 0; 2096 - } 2097 - 2098 2088 static long fastrpc_device_ioctl(struct file *file, unsigned int cmd, 2099 2089 unsigned long arg) 2100 2090 { ··· 2097 2107 err = fastrpc_invoke(fl, argp); 2098 2108 break; 2099 2109 case FASTRPC_IOCTL_INIT_ATTACH: 2100 - err = is_attach_rejected(fl); 2101 - if (!err) 2102 - err = fastrpc_init_attach(fl, ROOT_PD); 2110 + err = fastrpc_init_attach(fl, ROOT_PD); 2103 2111 break; 2104 2112 case FASTRPC_IOCTL_INIT_ATTACH_SNS: 2105 - err = is_attach_rejected(fl); 2106 - if (!err) 2107 - err = fastrpc_init_attach(fl, SENSORS_PD); 2113 + err = fastrpc_init_attach(fl, SENSORS_PD); 2108 2114 break; 2109 2115 case FASTRPC_IOCTL_INIT_CREATE_STATIC: 2110 - err = is_attach_rejected(fl); 2111 - if (!err) 2112 - err = fastrpc_init_create_static_process(fl, argp); 2116 + err = fastrpc_init_create_static_process(fl, argp); 2113 2117 break; 2114 2118 case FASTRPC_IOCTL_INIT_CREATE: 2115 2119 err = fastrpc_init_create_process(fl, argp);
+16
drivers/misc/lkdtm/refcount.c
··· 182 182 check_negative(&neg, 3); 183 183 } 184 184 185 + /* 186 + * A refcount_sub_and_test() by zero when the counter is at zero should act like 187 + * refcount_sub_and_test() above when going negative. 188 + */ 189 + static void lkdtm_REFCOUNT_SUB_AND_TEST_ZERO(void) 190 + { 191 + refcount_t neg = REFCOUNT_INIT(0); 192 + 193 + pr_info("attempting bad refcount_sub_and_test() at zero\n"); 194 + if (refcount_sub_and_test(0, &neg)) 195 + pr_warn("Weird: refcount_sub_and_test() reported zero\n"); 196 + 197 + check_negative(&neg, 0); 198 + } 199 + 185 200 static void check_from_zero(refcount_t *ref) 186 201 { 187 202 switch (refcount_read(ref)) { ··· 415 400 CRASHTYPE(REFCOUNT_DEC_NEGATIVE), 416 401 CRASHTYPE(REFCOUNT_DEC_AND_TEST_NEGATIVE), 417 402 CRASHTYPE(REFCOUNT_SUB_AND_TEST_NEGATIVE), 403 + CRASHTYPE(REFCOUNT_SUB_AND_TEST_ZERO), 418 404 CRASHTYPE(REFCOUNT_INC_ZERO), 419 405 CRASHTYPE(REFCOUNT_ADD_ZERO), 420 406 CRASHTYPE(REFCOUNT_INC_SATURATED),
+41 -13
drivers/net/dsa/vitesse-vsc73xx-core.c
··· 40 40 #define VSC73XX_BLOCK_ARBITER 0x5 /* Only subblock 0 */ 41 41 #define VSC73XX_BLOCK_SYSTEM 0x7 /* Only subblock 0 */ 42 42 43 + /* MII Block subblock */ 44 + #define VSC73XX_BLOCK_MII_INTERNAL 0x0 /* Internal MDIO subblock */ 45 + #define VSC73XX_BLOCK_MII_EXTERNAL 0x1 /* External MDIO subblock */ 46 + 43 47 #define CPU_PORT 6 /* CPU port */ 44 48 45 49 /* MAC Block registers */ ··· 229 225 #define VSC73XX_MII_CMD 0x1 230 226 #define VSC73XX_MII_DATA 0x2 231 227 228 + #define VSC73XX_MII_STAT_BUSY BIT(3) 229 + 232 230 /* Arbiter block 5 registers */ 233 231 #define VSC73XX_ARBEMPTY 0x0c 234 232 #define VSC73XX_ARBDISC 0x0e ··· 305 299 #define IS_739X(a) (IS_7395(a) || IS_7398(a)) 306 300 307 301 #define VSC73XX_POLL_SLEEP_US 1000 302 + #define VSC73XX_MDIO_POLL_SLEEP_US 5 308 303 #define VSC73XX_POLL_TIMEOUT_US 10000 309 304 310 305 struct vsc73xx_counter { ··· 534 527 return 0; 535 528 } 536 529 530 + static int vsc73xx_mdio_busy_check(struct vsc73xx *vsc) 531 + { 532 + int ret, err; 533 + u32 val; 534 + 535 + ret = read_poll_timeout(vsc73xx_read, err, 536 + err < 0 || !(val & VSC73XX_MII_STAT_BUSY), 537 + VSC73XX_MDIO_POLL_SLEEP_US, 538 + VSC73XX_POLL_TIMEOUT_US, false, vsc, 539 + VSC73XX_BLOCK_MII, VSC73XX_BLOCK_MII_INTERNAL, 540 + VSC73XX_MII_STAT, &val); 541 + if (ret) 542 + return ret; 543 + return err; 544 + } 545 + 537 546 static int vsc73xx_phy_read(struct dsa_switch *ds, int phy, int regnum) 538 547 { 539 548 struct vsc73xx *vsc = ds->priv; ··· 557 534 u32 val; 558 535 int ret; 559 536 537 + ret = vsc73xx_mdio_busy_check(vsc); 538 + if (ret) 539 + return ret; 540 + 560 541 /* Setting bit 26 means "read" */ 561 542 cmd = BIT(26) | (phy << 21) | (regnum << 16); 562 543 ret = vsc73xx_write(vsc, VSC73XX_BLOCK_MII, 0, 1, cmd); 563 544 if (ret) 564 545 return ret; 565 - msleep(2); 546 + 547 + ret = vsc73xx_mdio_busy_check(vsc); 548 + if (ret) 549 + return ret; 550 + 566 551 ret = vsc73xx_read(vsc, VSC73XX_BLOCK_MII, 0, 2, &val); 567 552 if (ret) 568 553 return ret; ··· 594 563 u32 cmd; 595 564 int ret; 596 565 597 - /* It was found through tedious experiments that this router 598 - * chip really hates to have it's PHYs reset. They 599 - * never recover if that happens: autonegotiation stops 600 - * working after a reset. Just filter out this command. 601 - * (Resetting the whole chip is OK.) 602 - */ 603 - if (regnum == 0 && (val & BIT(15))) { 604 - dev_info(vsc->dev, "reset PHY - disallowed\n"); 605 - return 0; 606 - } 566 + ret = vsc73xx_mdio_busy_check(vsc); 567 + if (ret) 568 + return ret; 607 569 608 - cmd = (phy << 21) | (regnum << 16); 570 + cmd = (phy << 21) | (regnum << 16) | val; 609 571 ret = vsc73xx_write(vsc, VSC73XX_BLOCK_MII, 0, 1, cmd); 610 572 if (ret) 611 573 return ret; ··· 981 957 982 958 if (duplex == DUPLEX_FULL) 983 959 val |= VSC73XX_MAC_CFG_FDX; 960 + else 961 + /* In datasheet description ("Port Mode Procedure" in 5.6.2) 962 + * this bit is configured only for half duplex. 963 + */ 964 + val |= VSC73XX_MAC_CFG_WEXC_DIS; 984 965 985 966 /* This routine is described in the datasheet (below ARBDISC register 986 967 * description) ··· 996 967 get_random_bytes(&seed, 1); 997 968 val |= seed << VSC73XX_MAC_CFG_SEED_OFFSET; 998 969 val |= VSC73XX_MAC_CFG_SEED_LOAD; 999 - val |= VSC73XX_MAC_CFG_WEXC_DIS; 1000 970 1001 971 /* Those bits are responsible for MTU only. Kernel takes care about MTU, 1002 972 * let's enable +8 bytes frame length unconditionally.
+2 -2
drivers/net/ethernet/cadence/macb_main.c
··· 5250 5250 if (bp->wol & MACB_WOL_ENABLED) { 5251 5251 /* Check for IP address in WOL ARP mode */ 5252 5252 idev = __in_dev_get_rcu(bp->dev); 5253 - if (idev && idev->ifa_list) 5254 - ifa = rcu_access_pointer(idev->ifa_list); 5253 + if (idev) 5254 + ifa = rcu_dereference(idev->ifa_list); 5255 5255 if ((bp->wolopts & WAKE_ARP) && !ifa) { 5256 5256 netdev_err(netdev, "IP address not assigned as required by WoL walk ARP\n"); 5257 5257 return -EOPNOTSUPP;
+21 -9
drivers/net/ethernet/cavium/thunder/thunder_bgx.c
··· 1054 1054 1055 1055 static int bgx_lmac_enable(struct bgx *bgx, u8 lmacid) 1056 1056 { 1057 - struct lmac *lmac, **priv; 1057 + struct lmac *lmac; 1058 1058 u64 cfg; 1059 1059 1060 1060 lmac = &bgx->lmac[lmacid]; 1061 1061 lmac->bgx = bgx; 1062 - 1063 - lmac->netdev = alloc_netdev_dummy(sizeof(struct lmac *)); 1064 - if (!lmac->netdev) 1065 - return -ENOMEM; 1066 - priv = netdev_priv(lmac->netdev); 1067 - *priv = lmac; 1068 1062 1069 1063 if ((lmac->lmac_type == BGX_MODE_SGMII) || 1070 1064 (lmac->lmac_type == BGX_MODE_QSGMII) || ··· 1185 1191 (lmac->lmac_type != BGX_MODE_10G_KR) && lmac->phydev) 1186 1192 phy_disconnect(lmac->phydev); 1187 1193 1188 - free_netdev(lmac->netdev); 1189 1194 lmac->phydev = NULL; 1190 1195 } 1191 1196 ··· 1646 1653 1647 1654 bgx_get_qlm_mode(bgx); 1648 1655 1656 + for (lmac = 0; lmac < bgx->lmac_count; lmac++) { 1657 + struct lmac *lmacp, **priv; 1658 + 1659 + lmacp = &bgx->lmac[lmac]; 1660 + lmacp->netdev = alloc_netdev_dummy(sizeof(struct lmac *)); 1661 + 1662 + if (!lmacp->netdev) { 1663 + for (int i = 0; i < lmac; i++) 1664 + free_netdev(bgx->lmac[i].netdev); 1665 + err = -ENOMEM; 1666 + goto err_enable; 1667 + } 1668 + 1669 + priv = netdev_priv(lmacp->netdev); 1670 + *priv = lmacp; 1671 + } 1672 + 1649 1673 err = bgx_init_phy(bgx); 1650 1674 if (err) 1651 1675 goto err_enable; ··· 1702 1692 u8 lmac; 1703 1693 1704 1694 /* Disable all LMACs */ 1705 - for (lmac = 0; lmac < bgx->lmac_count; lmac++) 1695 + for (lmac = 0; lmac < bgx->lmac_count; lmac++) { 1706 1696 bgx_lmac_disable(bgx, lmac); 1697 + free_netdev(bgx->lmac[lmac].netdev); 1698 + } 1707 1699 1708 1700 pci_free_irq(pdev, GMPX_GMI_TX_INT, bgx); 1709 1701
+3
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
··· 5724 5724 struct net_device *netdev = handle->kinfo.netdev; 5725 5725 struct hns3_nic_priv *priv = netdev_priv(netdev); 5726 5726 5727 + if (!test_bit(HNS3_NIC_STATE_DOWN, &priv->state)) 5728 + hns3_nic_net_stop(netdev); 5729 + 5727 5730 if (!test_and_clear_bit(HNS3_NIC_STATE_INITED, &priv->state)) { 5728 5731 netdev_warn(netdev, "already uninitialized\n"); 5729 5732 return 0;
+3 -3
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
··· 1598 1598 { 1599 1599 u32 loop_para[HCLGE_MOD_MSG_PARA_ARRAY_MAX_SIZE] = {0}; 1600 1600 struct hclge_mod_reg_common_msg msg; 1601 - u8 i, j, num; 1602 - u32 loop_time; 1601 + u8 i, j, num, loop_time; 1603 1602 1604 1603 num = ARRAY_SIZE(hclge_ssu_reg_common_msg); 1605 1604 for (i = 0; i < num; i++) { ··· 1608 1609 loop_time = 1; 1609 1610 loop_para[0] = 0; 1610 1611 if (msg.need_para) { 1611 - loop_time = hdev->ae_dev->dev_specs.tnl_num; 1612 + loop_time = min(hdev->ae_dev->dev_specs.tnl_num, 1613 + HCLGE_MOD_MSG_PARA_ARRAY_MAX_SIZE); 1612 1614 for (j = 0; j < loop_time; j++) 1613 1615 loop_para[j] = j + 1; 1614 1616 }
+21 -9
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 2653 2653 { 2654 2654 struct hclge_vport *vport = hclge_get_vport(handle); 2655 2655 struct hclge_dev *hdev = vport->back; 2656 + int ret; 2656 2657 2657 - return hclge_cfg_mac_speed_dup(hdev, speed, duplex, lane_num); 2658 + ret = hclge_cfg_mac_speed_dup(hdev, speed, duplex, lane_num); 2659 + 2660 + if (ret) 2661 + return ret; 2662 + 2663 + hdev->hw.mac.req_speed = speed; 2664 + hdev->hw.mac.req_duplex = duplex; 2665 + 2666 + return 0; 2658 2667 } 2659 2668 2660 2669 static int hclge_set_autoneg_en(struct hclge_dev *hdev, bool enable) ··· 2965 2956 if (!test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state)) 2966 2957 hdev->hw.mac.duplex = HCLGE_MAC_FULL; 2967 2958 2968 - ret = hclge_cfg_mac_speed_dup_hw(hdev, hdev->hw.mac.speed, 2969 - hdev->hw.mac.duplex, hdev->hw.mac.lane_num); 2970 - if (ret) 2971 - return ret; 2972 - 2973 2959 if (hdev->hw.mac.support_autoneg) { 2974 2960 ret = hclge_set_autoneg_en(hdev, hdev->hw.mac.autoneg); 2961 + if (ret) 2962 + return ret; 2963 + } 2964 + 2965 + if (!hdev->hw.mac.autoneg) { 2966 + ret = hclge_cfg_mac_speed_dup_hw(hdev, hdev->hw.mac.req_speed, 2967 + hdev->hw.mac.req_duplex, 2968 + hdev->hw.mac.lane_num); 2975 2969 if (ret) 2976 2970 return ret; 2977 2971 } ··· 11456 11444 11457 11445 pcim_iounmap(pdev, hdev->hw.hw.io_base); 11458 11446 pci_free_irq_vectors(pdev); 11459 - pci_release_mem_regions(pdev); 11447 + pci_release_regions(pdev); 11460 11448 pci_disable_device(pdev); 11461 11449 } 11462 11450 ··· 11528 11516 dev_err(&hdev->pdev->dev, "fail to rebuild, ret=%d\n", ret); 11529 11517 11530 11518 hdev->reset_type = HNAE3_NONE_RESET; 11531 - clear_bit(HCLGE_STATE_RST_HANDLING, &hdev->state); 11532 - up(&hdev->reset_sem); 11519 + if (test_and_clear_bit(HCLGE_STATE_RST_HANDLING, &hdev->state)) 11520 + up(&hdev->reset_sem); 11533 11521 } 11534 11522 11535 11523 static void hclge_clear_resetting_state(struct hclge_dev *hdev)
+3
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
··· 191 191 if (ret) 192 192 netdev_err(netdev, "failed to adjust link.\n"); 193 193 194 + hdev->hw.mac.req_speed = (u32)speed; 195 + hdev->hw.mac.req_duplex = (u8)duplex; 196 + 194 197 ret = hclge_cfg_flowctrl(hdev); 195 198 if (ret) 196 199 netdev_err(netdev, "failed to configure flow control.\n");
+2 -2
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
··· 1747 1747 ret); 1748 1748 1749 1749 hdev->reset_type = HNAE3_NONE_RESET; 1750 - clear_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state); 1751 - up(&hdev->reset_sem); 1750 + if (test_and_clear_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state)) 1751 + up(&hdev->reset_sem); 1752 1752 } 1753 1753 1754 1754 static u32 hclgevf_get_fw_version(struct hnae3_handle *handle)
+6
drivers/net/ethernet/intel/igc/igc_defines.h
··· 404 404 #define IGC_DTXMXPKTSZ_TSN 0x19 /* 1600 bytes of max TX DMA packet size */ 405 405 #define IGC_DTXMXPKTSZ_DEFAULT 0x98 /* 9728-byte Jumbo frames */ 406 406 407 + /* Retry Buffer Control */ 408 + #define IGC_RETX_CTL 0x041C 409 + #define IGC_RETX_CTL_WATERMARK_MASK 0xF 410 + #define IGC_RETX_CTL_QBVFULLTH_SHIFT 8 /* QBV Retry Buffer Full Threshold */ 411 + #define IGC_RETX_CTL_QBVFULLEN 0x1000 /* Enable QBV Retry Buffer Full Threshold */ 412 + 407 413 /* Transmit Scheduling Latency */ 408 414 /* Latency between transmission scheduling (LaunchTime) and the time 409 415 * the packet is transmitted to the network in nanosecond.
+6 -2
drivers/net/ethernet/intel/igc/igc_main.c
··· 6315 6315 if (!validate_schedule(adapter, qopt)) 6316 6316 return -EINVAL; 6317 6317 6318 + igc_ptp_read(adapter, &now); 6319 + 6320 + if (igc_tsn_is_taprio_activated_by_user(adapter) && 6321 + is_base_time_past(qopt->base_time, &now)) 6322 + adapter->qbv_config_change_errors++; 6323 + 6318 6324 adapter->cycle_time = qopt->cycle_time; 6319 6325 adapter->base_time = qopt->base_time; 6320 6326 adapter->taprio_offload_enable = true; 6321 - 6322 - igc_ptp_read(adapter, &now); 6323 6327 6324 6328 for (n = 0; n < qopt->num_entries; n++) { 6325 6329 struct tc_taprio_sched_entry *e = &qopt->entries[n];
+62 -14
drivers/net/ethernet/intel/igc/igc_tsn.c
··· 49 49 return new_flags; 50 50 } 51 51 52 + static bool igc_tsn_is_tx_mode_in_tsn(struct igc_adapter *adapter) 53 + { 54 + struct igc_hw *hw = &adapter->hw; 55 + 56 + return !!(rd32(IGC_TQAVCTRL) & IGC_TQAVCTRL_TRANSMIT_MODE_TSN); 57 + } 58 + 52 59 void igc_tsn_adjust_txtime_offset(struct igc_adapter *adapter) 53 60 { 54 61 struct igc_hw *hw = &adapter->hw; 55 62 u16 txoffset; 56 63 57 - if (!is_any_launchtime(adapter)) 64 + if (!igc_tsn_is_tx_mode_in_tsn(adapter)) 58 65 return; 59 66 60 67 switch (adapter->link_speed) { ··· 85 78 wr32(IGC_GTXOFFSET, txoffset); 86 79 } 87 80 81 + static void igc_tsn_restore_retx_default(struct igc_adapter *adapter) 82 + { 83 + struct igc_hw *hw = &adapter->hw; 84 + u32 retxctl; 85 + 86 + retxctl = rd32(IGC_RETX_CTL) & IGC_RETX_CTL_WATERMARK_MASK; 87 + wr32(IGC_RETX_CTL, retxctl); 88 + } 89 + 90 + bool igc_tsn_is_taprio_activated_by_user(struct igc_adapter *adapter) 91 + { 92 + struct igc_hw *hw = &adapter->hw; 93 + 94 + return (rd32(IGC_BASET_H) || rd32(IGC_BASET_L)) && 95 + adapter->taprio_offload_enable; 96 + } 97 + 88 98 /* Returns the TSN specific registers to their default values after 89 99 * the adapter is reset. 90 100 */ ··· 114 90 wr32(IGC_GTXOFFSET, 0); 115 91 wr32(IGC_TXPBS, I225_TXPBSIZE_DEFAULT); 116 92 wr32(IGC_DTXMXPKTSZ, IGC_DTXMXPKTSZ_DEFAULT); 93 + 94 + if (igc_is_device_id_i226(hw)) 95 + igc_tsn_restore_retx_default(adapter); 117 96 118 97 tqavctrl = rd32(IGC_TQAVCTRL); 119 98 tqavctrl &= ~(IGC_TQAVCTRL_TRANSMIT_MODE_TSN | ··· 138 111 return 0; 139 112 } 140 113 114 + /* To partially fix i226 HW errata, reduce MAC internal buffering from 192 Bytes 115 + * to 88 Bytes by setting RETX_CTL register using the recommendation from: 116 + * a) Ethernet Controller I225/I226 Specification Update Rev 2.1 117 + * Item 9: TSN: Packet Transmission Might Cross the Qbv Window 118 + * b) I225/6 SW User Manual Rev 1.2.4: Section 8.11.5 Retry Buffer Control 119 + */ 120 + static void igc_tsn_set_retx_qbvfullthreshold(struct igc_adapter *adapter) 121 + { 122 + struct igc_hw *hw = &adapter->hw; 123 + u32 retxctl, watermark; 124 + 125 + retxctl = rd32(IGC_RETX_CTL); 126 + watermark = retxctl & IGC_RETX_CTL_WATERMARK_MASK; 127 + /* Set QBVFULLTH value using watermark and set QBVFULLEN */ 128 + retxctl |= (watermark << IGC_RETX_CTL_QBVFULLTH_SHIFT) | 129 + IGC_RETX_CTL_QBVFULLEN; 130 + wr32(IGC_RETX_CTL, retxctl); 131 + } 132 + 141 133 static int igc_tsn_enable_offload(struct igc_adapter *adapter) 142 134 { 143 135 struct igc_hw *hw = &adapter->hw; ··· 168 122 wr32(IGC_TSAUXC, 0); 169 123 wr32(IGC_DTXMXPKTSZ, IGC_DTXMXPKTSZ_TSN); 170 124 wr32(IGC_TXPBS, IGC_TXPBSIZE_TSN); 125 + 126 + if (igc_is_device_id_i226(hw)) 127 + igc_tsn_set_retx_qbvfullthreshold(adapter); 171 128 172 129 for (i = 0; i < adapter->num_tx_queues; i++) { 173 130 struct igc_ring *ring = adapter->tx_ring[i]; ··· 311 262 s64 n = div64_s64(ktime_sub_ns(systim, base_time), cycle); 312 263 313 264 base_time = ktime_add_ns(base_time, (n + 1) * cycle); 314 - 315 - /* Increase the counter if scheduling into the past while 316 - * Gate Control List (GCL) is running. 
317 - */ 318 - if ((rd32(IGC_BASET_H) || rd32(IGC_BASET_L)) && 319 - (adapter->tc_setup_type == TC_SETUP_QDISC_TAPRIO) && 320 - (adapter->qbv_count > 1)) 321 - adapter->qbv_config_change_errors++; 322 265 } else { 323 266 if (igc_is_device_id_i226(hw)) { 324 267 ktime_t adjust_time, expires_time; ··· 372 331 return err; 373 332 } 374 333 334 + static bool igc_tsn_will_tx_mode_change(struct igc_adapter *adapter) 335 + { 336 + bool any_tsn_enabled = !!(igc_tsn_new_flags(adapter) & 337 + IGC_FLAG_TSN_ANY_ENABLED); 338 + 339 + return (any_tsn_enabled && !igc_tsn_is_tx_mode_in_tsn(adapter)) || 340 + (!any_tsn_enabled && igc_tsn_is_tx_mode_in_tsn(adapter)); 341 + } 342 + 375 343 int igc_tsn_offload_apply(struct igc_adapter *adapter) 376 344 { 377 - struct igc_hw *hw = &adapter->hw; 378 - 379 - /* Per I225/6 HW Design Section 7.5.2.1, transmit mode 380 - * cannot be changed dynamically. Require reset the adapter. 345 + /* Per I225/6 HW Design Section 7.5.2.1 guideline, if tx mode change 346 + * from legacy->tsn or tsn->legacy, then reset adapter is needed. 381 347 */ 382 348 if (netif_running(adapter->netdev) && 383 - (igc_is_device_id_i225(hw) || !adapter->qbv_count)) { 349 + igc_tsn_will_tx_mode_change(adapter)) { 384 350 schedule_work(&adapter->reset_task); 385 351 return 0; 386 352 }
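For reference, the RETX_CTL update above composes the new defines from igc_defines.h as follows; the starting register value is illustrative only:

    /*
     * Worked example of igc_tsn_set_retx_qbvfullthreshold():
     *
     *   rd32(IGC_RETX_CTL)                                  == 0x0006 (say)
     *   watermark = 0x0006 & IGC_RETX_CTL_WATERMARK_MASK    == 0x6
     *   retxctl  |= (0x6 << IGC_RETX_CTL_QBVFULLTH_SHIFT)
     *             |  IGC_RETX_CTL_QBVFULLEN
     *             = 0x0006 | 0x0600 | 0x1000                == 0x1606
     *   wr32(IGC_RETX_CTL, 0x1606);
     */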
+1
drivers/net/ethernet/intel/igc/igc_tsn.h
··· 7 7 int igc_tsn_offload_apply(struct igc_adapter *adapter); 8 8 int igc_tsn_reset(struct igc_adapter *adapter); 9 9 void igc_tsn_adjust_txtime_offset(struct igc_adapter *adapter); 10 + bool igc_tsn_is_taprio_activated_by_user(struct igc_adapter *adapter); 10 11 11 12 #endif /* _IGC_BASE_H */
+4 -6
drivers/net/ethernet/jme.c
··· 946 946 if (skb->protocol != htons(ETH_P_IP)) 947 947 return csum; 948 948 skb_set_network_header(skb, ETH_HLEN); 949 - if ((ip_hdr(skb)->protocol != IPPROTO_UDP) || 950 - (skb->len < (ETH_HLEN + 951 - (ip_hdr(skb)->ihl << 2) + 952 - sizeof(struct udphdr)))) { 949 + 950 + if (ip_hdr(skb)->protocol != IPPROTO_UDP || 951 + skb->len < (ETH_HLEN + ip_hdrlen(skb) + sizeof(struct udphdr))) { 953 952 skb_reset_network_header(skb); 954 953 return csum; 955 954 } 956 - skb_set_transport_header(skb, 957 - ETH_HLEN + (ip_hdr(skb)->ihl << 2)); 955 + skb_set_transport_header(skb, ETH_HLEN + ip_hdrlen(skb)); 958 956 csum = udp_hdr(skb)->check; 959 957 skb_reset_transport_header(skb); 960 958 skb_reset_network_header(skb);
+4 -2
drivers/net/ethernet/mediatek/mtk_wed.c
··· 2666 2666 { 2667 2667 struct mtk_wed_flow_block_priv *priv = cb_priv; 2668 2668 struct flow_cls_offload *cls = type_data; 2669 - struct mtk_wed_hw *hw = priv->hw; 2669 + struct mtk_wed_hw *hw = NULL; 2670 2670 2671 - if (!tc_can_offload(priv->dev)) 2671 + if (!priv || !tc_can_offload(priv->dev)) 2672 2672 return -EOPNOTSUPP; 2673 2673 2674 2674 if (type != TC_SETUP_CLSFLOWER) 2675 2675 return -EOPNOTSUPP; 2676 2676 2677 + hw = priv->hw; 2677 2678 return mtk_flow_offload_cmd(hw->eth, cls, hw->index); 2678 2679 } 2679 2680 ··· 2730 2729 flow_block_cb_remove(block_cb, f); 2731 2730 list_del(&block_cb->driver_list); 2732 2731 kfree(block_cb->cb_priv); 2732 + block_cb->cb_priv = NULL; 2733 2733 } 2734 2734 return 0; 2735 2735 default:
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 130 130 #define MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE_MPW 0x2 131 131 132 132 #define MLX5E_DEFAULT_LRO_TIMEOUT 32 133 - #define MLX5E_LRO_TIMEOUT_ARR_SIZE 4 133 + #define MLX5E_DEFAULT_SHAMPO_TIMEOUT 1024 134 134 135 135 #define MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_USEC 0x10 136 136 #define MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_USEC_FROM_CQE 0x3
+15 -1
drivers/net/ethernet/mellanox/mlx5/core/en/params.c
··· 928 928 MLX5_SET(wq, wq, log_headers_entry_size, 929 929 mlx5e_shampo_get_log_hd_entry_size(mdev, params)); 930 930 MLX5_SET(rqc, rqc, reservation_timeout, 931 - params->packet_merge.timeout); 931 + mlx5e_choose_lro_timeout(mdev, MLX5E_DEFAULT_SHAMPO_TIMEOUT)); 932 932 MLX5_SET(rqc, rqc, shampo_match_criteria_type, 933 933 params->packet_merge.shampo.match_criteria_type); 934 934 MLX5_SET(rqc, rqc, shampo_no_match_alignment_granularity, ··· 1085 1085 wqebbs += MLX5E_KSM_UMR_WQEBBS(rest); 1086 1086 wqebbs *= wq_size; 1087 1087 return wqebbs; 1088 + } 1089 + 1090 + #define MLX5E_LRO_TIMEOUT_ARR_SIZE 4 1091 + 1092 + u32 mlx5e_choose_lro_timeout(struct mlx5_core_dev *mdev, u32 wanted_timeout) 1093 + { 1094 + int i; 1095 + 1096 + /* The supported periods are organized in ascending order */ 1097 + for (i = 0; i < MLX5E_LRO_TIMEOUT_ARR_SIZE - 1; i++) 1098 + if (MLX5_CAP_ETH(mdev, lro_timer_supported_periods[i]) >= wanted_timeout) 1099 + break; 1100 + 1101 + return MLX5_CAP_ETH(mdev, lro_timer_supported_periods[i]); 1088 1102 } 1089 1103 1090 1104 static u32 mlx5e_mpwrq_total_umr_wqebbs(struct mlx5_core_dev *mdev,
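mlx5e_choose_lro_timeout() walks the ascending lro_timer_supported_periods[] capability and returns the first period that is at least the requested value, falling back to the largest supported period when the request exceeds them all. A small illustration with made-up capability values:

    /*
     * Example (supported periods are illustrative, read from
     * MLX5_CAP_ETH(mdev, lro_timer_supported_periods[])):
     *
     *   supported = { 8, 64, 512, 1024 }
     *   mlx5e_choose_lro_timeout(mdev, 1024) -> 1024   first period >= wanted
     *   mlx5e_choose_lro_timeout(mdev, 600)  -> 1024   rounds up to next period
     *   mlx5e_choose_lro_timeout(mdev, 4096) -> 1024   clamps to the largest
     */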
+1
drivers/net/ethernet/mellanox/mlx5/core/en/params.h
··· 108 108 u32 mlx5e_shampo_hd_per_wq(struct mlx5_core_dev *mdev, 109 109 struct mlx5e_params *params, 110 110 struct mlx5e_rq_param *rq_param); 111 + u32 mlx5e_choose_lro_timeout(struct mlx5_core_dev *mdev, u32 wanted_timeout); 111 112 u8 mlx5e_mpwqe_get_log_stride_size(struct mlx5_core_dev *mdev, 112 113 struct mlx5e_params *params, 113 114 struct mlx5e_xsk_param *xsk);
+2
drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
··· 146 146 return err; 147 147 } 148 148 149 + mutex_lock(&priv->state_lock); 149 150 err = mlx5e_safe_reopen_channels(priv); 151 + mutex_unlock(&priv->state_lock); 150 152 if (!err) { 151 153 to_ctx->status = 1; /* all channels recovered */ 152 154 return err;
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
··· 734 734 if (num_tuples <= 0) { 735 735 netdev_warn(priv->netdev, "%s: flow is not valid %d\n", 736 736 __func__, num_tuples); 737 - return num_tuples; 737 + return num_tuples < 0 ? num_tuples : -EINVAL; 738 738 } 739 739 740 740 eth_ft = get_flow_table(priv, fs, num_tuples);
+4 -13
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 5167 5167 #endif 5168 5168 }; 5169 5169 5170 - static u32 mlx5e_choose_lro_timeout(struct mlx5_core_dev *mdev, u32 wanted_timeout) 5171 - { 5172 - int i; 5173 - 5174 - /* The supported periods are organized in ascending order */ 5175 - for (i = 0; i < MLX5E_LRO_TIMEOUT_ARR_SIZE - 1; i++) 5176 - if (MLX5_CAP_ETH(mdev, lro_timer_supported_periods[i]) >= wanted_timeout) 5177 - break; 5178 - 5179 - return MLX5_CAP_ETH(mdev, lro_timer_supported_periods[i]); 5180 - } 5181 - 5182 5170 void mlx5e_build_nic_params(struct mlx5e_priv *priv, struct mlx5e_xsk *xsk, u16 mtu) 5183 5171 { 5184 5172 struct mlx5e_params *params = &priv->channels.params; ··· 5296 5308 struct mlx5e_rq_stats *rq_stats; 5297 5309 5298 5310 ASSERT_RTNL(); 5299 - if (mlx5e_is_uplink_rep(priv)) 5311 + if (mlx5e_is_uplink_rep(priv) || !priv->stats_nch) 5300 5312 return; 5301 5313 5302 5314 channel_stats = priv->channel_stats[i]; ··· 5316 5328 struct mlx5e_sq_stats *sq_stats; 5317 5329 5318 5330 ASSERT_RTNL(); 5331 + if (!priv->stats_nch) 5332 + return; 5333 + 5319 5334 /* no special case needed for ptp htb etc since txq2sq_stats is kept up 5320 5335 * to date for active sq_stats, otherwise get_base_stats takes care of 5321 5336 * inactive sqs.
+9 -9
drivers/net/ethernet/mellanox/mlx5/core/lib/sd.c
··· 126 126 } 127 127 128 128 static int mlx5_query_sd(struct mlx5_core_dev *dev, bool *sdm, 129 - u8 *host_buses, u8 *sd_group) 129 + u8 *host_buses) 130 130 { 131 131 u32 out[MLX5_ST_SZ_DW(mpir_reg)]; 132 132 int err; 133 133 134 134 err = mlx5_query_mpir_reg(dev, out); 135 - if (err) 136 - return err; 137 - 138 - err = mlx5_query_nic_vport_sd_group(dev, sd_group); 139 135 if (err) 140 136 return err; 141 137 ··· 162 166 if (mlx5_core_is_ecpf(dev)) 163 167 return 0; 164 168 169 + err = mlx5_query_nic_vport_sd_group(dev, &sd_group); 170 + if (err) 171 + return err; 172 + 173 + if (!sd_group) 174 + return 0; 175 + 165 176 if (!MLX5_CAP_MCAM_REG(dev, mpir)) 166 177 return 0; 167 178 168 - err = mlx5_query_sd(dev, &sdm, &host_buses, &sd_group); 179 + err = mlx5_query_sd(dev, &sdm, &host_buses); 169 180 if (err) 170 181 return err; 171 182 172 183 if (!sdm) 173 - return 0; 174 - 175 - if (!sd_group) 176 184 return 0; 177 185 178 186 group_id = mlx5_sd_group_id(dev, sd_group);
+8
drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige.h
··· 40 40 */ 41 41 #define MLXBF_GIGE_BCAST_MAC_FILTER_IDX 0 42 42 #define MLXBF_GIGE_LOCAL_MAC_FILTER_IDX 1 43 + #define MLXBF_GIGE_MAX_FILTER_IDX 3 43 44 44 45 /* Define for broadcast MAC literal */ 45 46 #define BCAST_MAC_ADDR 0xFFFFFFFFFFFF ··· 176 175 int mlxbf_gige_mdio_probe(struct platform_device *pdev, 177 176 struct mlxbf_gige *priv); 178 177 void mlxbf_gige_mdio_remove(struct mlxbf_gige *priv); 178 + 179 + void mlxbf_gige_enable_multicast_rx(struct mlxbf_gige *priv); 180 + void mlxbf_gige_disable_multicast_rx(struct mlxbf_gige *priv); 181 + void mlxbf_gige_enable_mac_rx_filter(struct mlxbf_gige *priv, 182 + unsigned int index); 183 + void mlxbf_gige_disable_mac_rx_filter(struct mlxbf_gige *priv, 184 + unsigned int index); 179 185 void mlxbf_gige_set_mac_rx_filter(struct mlxbf_gige *priv, 180 186 unsigned int index, u64 dmac); 181 187 void mlxbf_gige_get_mac_rx_filter(struct mlxbf_gige *priv,
+10
drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
··· 168 168 if (err) 169 169 goto napi_deinit; 170 170 171 + mlxbf_gige_enable_mac_rx_filter(priv, MLXBF_GIGE_BCAST_MAC_FILTER_IDX); 172 + mlxbf_gige_enable_mac_rx_filter(priv, MLXBF_GIGE_LOCAL_MAC_FILTER_IDX); 173 + mlxbf_gige_enable_multicast_rx(priv); 174 + 171 175 /* Set bits in INT_EN that we care about */ 172 176 int_en = MLXBF_GIGE_INT_EN_HW_ACCESS_ERROR | 173 177 MLXBF_GIGE_INT_EN_TX_CHECKSUM_INPUTS | ··· 383 379 void __iomem *plu_base; 384 380 void __iomem *base; 385 381 int addr, phy_irq; 382 + unsigned int i; 386 383 int err; 387 384 388 385 base = devm_platform_ioremap_resource(pdev, MLXBF_GIGE_RES_MAC); ··· 427 422 428 423 priv->rx_q_entries = MLXBF_GIGE_DEFAULT_RXQ_SZ; 429 424 priv->tx_q_entries = MLXBF_GIGE_DEFAULT_TXQ_SZ; 425 + 426 + for (i = 0; i <= MLXBF_GIGE_MAX_FILTER_IDX; i++) 427 + mlxbf_gige_disable_mac_rx_filter(priv, i); 428 + mlxbf_gige_disable_multicast_rx(priv); 429 + mlxbf_gige_disable_promisc(priv); 430 430 431 431 /* Write initial MAC address to hardware */ 432 432 mlxbf_gige_initial_mac(priv);
+2
drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_regs.h
··· 62 62 #define MLXBF_GIGE_TX_STATUS_DATA_FIFO_FULL BIT(1) 63 63 #define MLXBF_GIGE_RX_MAC_FILTER_DMAC_RANGE_START 0x0520 64 64 #define MLXBF_GIGE_RX_MAC_FILTER_DMAC_RANGE_END 0x0528 65 + #define MLXBF_GIGE_RX_MAC_FILTER_GENERAL 0x0530 66 + #define MLXBF_GIGE_RX_MAC_FILTER_EN_MULTICAST BIT(1) 65 67 #define MLXBF_GIGE_RX_MAC_FILTER_COUNT_DISC 0x0540 66 68 #define MLXBF_GIGE_RX_MAC_FILTER_COUNT_DISC_EN BIT(0) 67 69 #define MLXBF_GIGE_RX_MAC_FILTER_COUNT_PASS 0x0548
+44 -6
drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_rx.c
··· 11 11 #include "mlxbf_gige.h" 12 12 #include "mlxbf_gige_regs.h" 13 13 14 - void mlxbf_gige_set_mac_rx_filter(struct mlxbf_gige *priv, 15 - unsigned int index, u64 dmac) 14 + void mlxbf_gige_enable_multicast_rx(struct mlxbf_gige *priv) 15 + { 16 + void __iomem *base = priv->base; 17 + u64 data; 18 + 19 + data = readq(base + MLXBF_GIGE_RX_MAC_FILTER_GENERAL); 20 + data |= MLXBF_GIGE_RX_MAC_FILTER_EN_MULTICAST; 21 + writeq(data, base + MLXBF_GIGE_RX_MAC_FILTER_GENERAL); 22 + } 23 + 24 + void mlxbf_gige_disable_multicast_rx(struct mlxbf_gige *priv) 25 + { 26 + void __iomem *base = priv->base; 27 + u64 data; 28 + 29 + data = readq(base + MLXBF_GIGE_RX_MAC_FILTER_GENERAL); 30 + data &= ~MLXBF_GIGE_RX_MAC_FILTER_EN_MULTICAST; 31 + writeq(data, base + MLXBF_GIGE_RX_MAC_FILTER_GENERAL); 32 + } 33 + 34 + void mlxbf_gige_enable_mac_rx_filter(struct mlxbf_gige *priv, 35 + unsigned int index) 16 36 { 17 37 void __iomem *base = priv->base; 18 38 u64 control; 19 - 20 - /* Write destination MAC to specified MAC RX filter */ 21 - writeq(dmac, base + MLXBF_GIGE_RX_MAC_FILTER + 22 - (index * MLXBF_GIGE_RX_MAC_FILTER_STRIDE)); 23 39 24 40 /* Enable MAC receive filter mask for specified index */ 25 41 control = readq(base + MLXBF_GIGE_CONTROL); 26 42 control |= (MLXBF_GIGE_CONTROL_EN_SPECIFIC_MAC << index); 27 43 writeq(control, base + MLXBF_GIGE_CONTROL); 44 + } 45 + 46 + void mlxbf_gige_disable_mac_rx_filter(struct mlxbf_gige *priv, 47 + unsigned int index) 48 + { 49 + void __iomem *base = priv->base; 50 + u64 control; 51 + 52 + /* Disable MAC receive filter mask for specified index */ 53 + control = readq(base + MLXBF_GIGE_CONTROL); 54 + control &= ~(MLXBF_GIGE_CONTROL_EN_SPECIFIC_MAC << index); 55 + writeq(control, base + MLXBF_GIGE_CONTROL); 56 + } 57 + 58 + void mlxbf_gige_set_mac_rx_filter(struct mlxbf_gige *priv, 59 + unsigned int index, u64 dmac) 60 + { 61 + void __iomem *base = priv->base; 62 + 63 + /* Write destination MAC to specified MAC RX filter */ 64 + writeq(dmac, base + MLXBF_GIGE_RX_MAC_FILTER + 65 + (index * MLXBF_GIGE_RX_MAC_FILTER_STRIDE)); 28 66 } 29 67 30 68 void mlxbf_gige_get_mac_rx_filter(struct mlxbf_gige *priv,
+19 -9
drivers/net/ethernet/microsoft/mana/mana_en.c
··· 599 599 else 600 600 *headroom = XDP_PACKET_HEADROOM; 601 601 602 - *alloc_size = mtu + MANA_RXBUF_PAD + *headroom; 602 + *alloc_size = SKB_DATA_ALIGN(mtu + MANA_RXBUF_PAD + *headroom); 603 + 604 + /* Using page pool in this case, so alloc_size is PAGE_SIZE */ 605 + if (*alloc_size < PAGE_SIZE) 606 + *alloc_size = PAGE_SIZE; 603 607 604 608 *datasize = mtu + ETH_HLEN; 605 609 } ··· 1792 1788 static int mana_cq_handler(void *context, struct gdma_queue *gdma_queue) 1793 1789 { 1794 1790 struct mana_cq *cq = context; 1795 - u8 arm_bit; 1796 1791 int w; 1797 1792 1798 1793 WARN_ON_ONCE(cq->gdma_cq != gdma_queue); ··· 1802 1799 mana_poll_tx_cq(cq); 1803 1800 1804 1801 w = cq->work_done; 1802 + cq->work_done_since_doorbell += w; 1805 1803 1806 - if (w < cq->budget && 1807 - napi_complete_done(&cq->napi, w)) { 1808 - arm_bit = SET_ARM_BIT; 1809 - } else { 1810 - arm_bit = 0; 1804 + if (w < cq->budget) { 1805 + mana_gd_ring_cq(gdma_queue, SET_ARM_BIT); 1806 + cq->work_done_since_doorbell = 0; 1807 + napi_complete_done(&cq->napi, w); 1808 + } else if (cq->work_done_since_doorbell > 1809 + cq->gdma_cq->queue_size / COMP_ENTRY_SIZE * 4) { 1810 + /* MANA hardware requires at least one doorbell ring every 8 1811 + * wraparounds of CQ even if there is no need to arm the CQ. 1812 + * This driver rings the doorbell as soon as we have exceeded 1813 + * 4 wraparounds. 1814 + */ 1815 + mana_gd_ring_cq(gdma_queue, 0); 1816 + cq->work_done_since_doorbell = 0; 1811 1817 } 1812 - 1813 - mana_gd_ring_cq(gdma_queue, arm_bit); 1814 1818 1815 1819 return w; 1816 1820 }
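Aside: the work_done_since_doorbell counter above only matters while NAPI keeps exhausting its budget, because the arm-and-ring path in the first branch never runs in that case. A minimal userspace sketch of the threshold bookkeeping, with invented queue and entry sizes rather than the real MANA values, and without the arm/reset path:

#include <stdbool.h>
#include <stdio.h>

#define QUEUE_SIZE 4096   /* bytes of CQ memory (illustrative only) */
#define ENTRY_SIZE 64     /* bytes per completion entry (illustrative only) */

static unsigned int work_done_since_doorbell;

static bool must_ring_doorbell(unsigned int work_done)
{
        work_done_since_doorbell += work_done;
        /* Ring once roughly four queue wraparounds' worth of completions have
         * been processed, well before the 8-wraparound hardware limit noted in
         * the comment above. */
        if (work_done_since_doorbell > QUEUE_SIZE / ENTRY_SIZE * 4) {
                work_done_since_doorbell = 0;
                return true;
        }
        return false;
}

int main(void)
{
        for (int i = 0; i < 10; i++)
                printf("poll %d: ring=%d\n", i, must_ring_doorbell(100));
        return 0;
}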
+8 -8
drivers/net/ethernet/xilinx/xilinx_axienet.h
··· 160 160 #define XAE_RCW1_OFFSET 0x00000404 /* Rx Configuration Word 1 */ 161 161 #define XAE_TC_OFFSET 0x00000408 /* Tx Configuration */ 162 162 #define XAE_FCC_OFFSET 0x0000040C /* Flow Control Configuration */ 163 - #define XAE_EMMC_OFFSET 0x00000410 /* EMAC mode configuration */ 164 - #define XAE_PHYC_OFFSET 0x00000414 /* RGMII/SGMII configuration */ 163 + #define XAE_EMMC_OFFSET 0x00000410 /* MAC speed configuration */ 164 + #define XAE_PHYC_OFFSET 0x00000414 /* RX Max Frame Configuration */ 165 165 #define XAE_ID_OFFSET 0x000004F8 /* Identification register */ 166 - #define XAE_MDIO_MC_OFFSET 0x00000500 /* MII Management Config */ 167 - #define XAE_MDIO_MCR_OFFSET 0x00000504 /* MII Management Control */ 168 - #define XAE_MDIO_MWD_OFFSET 0x00000508 /* MII Management Write Data */ 169 - #define XAE_MDIO_MRD_OFFSET 0x0000050C /* MII Management Read Data */ 166 + #define XAE_MDIO_MC_OFFSET 0x00000500 /* MDIO Setup */ 167 + #define XAE_MDIO_MCR_OFFSET 0x00000504 /* MDIO Control */ 168 + #define XAE_MDIO_MWD_OFFSET 0x00000508 /* MDIO Write Data */ 169 + #define XAE_MDIO_MRD_OFFSET 0x0000050C /* MDIO Read Data */ 170 170 #define XAE_UAW0_OFFSET 0x00000700 /* Unicast address word 0 */ 171 171 #define XAE_UAW1_OFFSET 0x00000704 /* Unicast address word 1 */ 172 - #define XAE_FMI_OFFSET 0x00000708 /* Filter Mask Index */ 172 + #define XAE_FMI_OFFSET 0x00000708 /* Frame Filter Control */ 173 173 #define XAE_AF0_OFFSET 0x00000710 /* Address Filter 0 */ 174 174 #define XAE_AF1_OFFSET 0x00000714 /* Address Filter 1 */ 175 175 ··· 308 308 */ 309 309 #define XAE_UAW1_UNICASTADDR_MASK 0x0000FFFF 310 310 311 - /* Bit masks for Axi Ethernet FMI register */ 311 + /* Bit masks for Axi Ethernet FMC register */ 312 312 #define XAE_FMI_PM_MASK 0x80000000 /* Promis. mode enable */ 313 313 #define XAE_FMI_IND_MASK 0x00000003 /* Index Mask */ 314 314
+3
drivers/net/gtp.c
··· 1269 1269 if (skb_cow_head(skb, dev->needed_headroom)) 1270 1270 goto tx_err; 1271 1271 1272 + if (!pskb_inet_may_pull(skb)) 1273 + goto tx_err; 1274 + 1272 1275 skb_reset_inner_headers(skb); 1273 1276 1274 1277 /* PDP context lookups in gtp_build_skb_*() need rcu read-side lock. */
-14
drivers/net/phy/vitesse.c
··· 237 237 return 0; 238 238 } 239 239 240 - static int vsc73xx_config_aneg(struct phy_device *phydev) 241 - { 242 - /* The VSC73xx switches does not like to be instructed to 243 - * do autonegotiation in any way, it prefers that you just go 244 - * with the power-on/reset defaults. Writing some registers will 245 - * just make autonegotiation permanently fail. 246 - */ 247 - return 0; 248 - } 249 - 250 240 /* This adds a skew for both TX and RX clocks, so the skew should only be 251 241 * applied to "rgmii-id" interfaces. It may not work as expected 252 242 * on "rgmii-txid", "rgmii-rxid" or "rgmii" interfaces. ··· 434 444 .phy_id_mask = 0x000ffff0, 435 445 /* PHY_GBIT_FEATURES */ 436 446 .config_init = vsc738x_config_init, 437 - .config_aneg = vsc73xx_config_aneg, 438 447 .read_page = vsc73xx_read_page, 439 448 .write_page = vsc73xx_write_page, 440 449 }, { ··· 442 453 .phy_id_mask = 0x000ffff0, 443 454 /* PHY_GBIT_FEATURES */ 444 455 .config_init = vsc738x_config_init, 445 - .config_aneg = vsc73xx_config_aneg, 446 456 .read_page = vsc73xx_read_page, 447 457 .write_page = vsc73xx_write_page, 448 458 }, { ··· 450 462 .phy_id_mask = 0x000ffff0, 451 463 /* PHY_GBIT_FEATURES */ 452 464 .config_init = vsc739x_config_init, 453 - .config_aneg = vsc73xx_config_aneg, 454 465 .read_page = vsc73xx_read_page, 455 466 .write_page = vsc73xx_write_page, 456 467 }, { ··· 458 471 .phy_id_mask = 0x000ffff0, 459 472 /* PHY_GBIT_FEATURES */ 460 473 .config_init = vsc739x_config_init, 461 - .config_aneg = vsc73xx_config_aneg, 462 474 .read_page = vsc73xx_read_page, 463 475 .write_page = vsc73xx_write_page, 464 476 }, {
+8 -3
drivers/net/pse-pd/pse_core.c
··· 401 401 rdesc->ops = &pse_pi_ops; 402 402 rdesc->owner = pcdev->owner; 403 403 404 - rinit_data->constraints.valid_ops_mask = REGULATOR_CHANGE_STATUS | 405 - REGULATOR_CHANGE_CURRENT; 406 - rinit_data->constraints.max_uA = MAX_PI_CURRENT; 404 + rinit_data->constraints.valid_ops_mask = REGULATOR_CHANGE_STATUS; 405 + 406 + if (pcdev->ops->pi_set_current_limit) { 407 + rinit_data->constraints.valid_ops_mask |= 408 + REGULATOR_CHANGE_CURRENT; 409 + rinit_data->constraints.max_uA = MAX_PI_CURRENT; 410 + } 411 + 407 412 rinit_data->supply_regulator = "vpwr"; 408 413 409 414 rconfig.dev = pcdev->dev;
+11 -9
drivers/net/usb/ipheth.c
··· 286 286 return; 287 287 } 288 288 289 - if (urb->actual_length <= IPHETH_IP_ALIGN) { 290 - dev->net->stats.rx_length_errors++; 291 - return; 292 - } 289 + /* iPhone may periodically send URBs with no payload 290 + * on the "bulk in" endpoint. It is safe to ignore them. 291 + */ 292 + if (urb->actual_length == 0) 293 + goto rx_submit; 293 294 294 295 /* RX URBs starting with 0x00 0x01 do not encapsulate Ethernet frames, 295 296 * but rather are control frames. Their purpose is not documented, and ··· 299 298 * URB received from the bulk IN endpoint. 300 299 */ 301 300 if (unlikely 302 - (((char *)urb->transfer_buffer)[0] == 0 && 301 + (urb->actual_length == 4 && 302 + ((char *)urb->transfer_buffer)[0] == 0 && 303 303 ((char *)urb->transfer_buffer)[1] == 1)) 304 304 goto rx_submit; 305 305 ··· 308 306 if (retval != 0) { 309 307 dev_err(&dev->intf->dev, "%s: callback retval: %d\n", 310 308 __func__, retval); 311 - return; 312 309 } 313 310 314 311 rx_submit: ··· 355 354 0x02, /* index */ 356 355 dev->ctrl_buf, IPHETH_CTRL_BUF_SIZE, 357 356 IPHETH_CTRL_TIMEOUT); 358 - if (retval < 0) { 357 + if (retval <= 0) { 359 358 dev_err(&dev->intf->dev, "%s: usb_control_msg: %d\n", 360 359 __func__, retval); 361 360 return retval; 362 361 } 363 362 364 - if (dev->ctrl_buf[0] == IPHETH_CARRIER_ON) { 363 + if ((retval == 1 && dev->ctrl_buf[0] == IPHETH_CARRIER_ON) || 364 + (retval >= 2 && dev->ctrl_buf[1] == IPHETH_CARRIER_ON)) { 365 365 netif_carrier_on(dev->net); 366 366 if (dev->tx_urb->status != -EINPROGRESS) 367 367 netif_wake_queue(dev->net); ··· 477 475 { 478 476 struct ipheth_device *dev = netdev_priv(net); 479 477 480 - cancel_delayed_work_sync(&dev->carrier_work); 481 478 netif_stop_queue(net); 479 + cancel_delayed_work_sync(&dev->carrier_work); 482 480 return 0; 483 481 } 484 482
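Aside: the carrier test above now depends on how many bytes the control transfer actually returned: a one-byte reply is judged by byte 0, longer replies by byte 1, and a zero or negative return is treated as an error. A standalone sketch of that decision; the CARRIER_ON value here is a placeholder, not necessarily the driver's constant:

#include <stdbool.h>
#include <stdio.h>

#define CARRIER_ON 0x04   /* placeholder for IPHETH_CARRIER_ON */

static bool carrier_ok(int retval, const unsigned char *buf)
{
        if (retval <= 0)
                return false;                /* failed or empty control read */
        if (retval == 1)
                return buf[0] == CARRIER_ON; /* legacy single-byte reply */
        return buf[1] == CARRIER_ON;         /* longer replies: check byte 1 */
}

int main(void)
{
        unsigned char old_reply[] = { CARRIER_ON };
        unsigned char new_reply[] = { 0x00, CARRIER_ON };

        printf("%d %d\n", carrier_ok(1, old_reply), carrier_ok(2, new_reply));
        return 0;
}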
+72
drivers/net/wireless/ath/ath12k/dp_tx.c
··· 162 162 return 0; 163 163 } 164 164 165 + static void ath12k_dp_tx_move_payload(struct sk_buff *skb, 166 + unsigned long delta, 167 + bool head) 168 + { 169 + unsigned long len = skb->len; 170 + 171 + if (head) { 172 + skb_push(skb, delta); 173 + memmove(skb->data, skb->data + delta, len); 174 + skb_trim(skb, len); 175 + } else { 176 + skb_put(skb, delta); 177 + memmove(skb->data + delta, skb->data, len); 178 + skb_pull(skb, delta); 179 + } 180 + } 181 + 182 + static int ath12k_dp_tx_align_payload(struct ath12k_base *ab, 183 + struct sk_buff **pskb) 184 + { 185 + u32 iova_mask = ab->hw_params->iova_mask; 186 + unsigned long offset, delta1, delta2; 187 + struct sk_buff *skb2, *skb = *pskb; 188 + unsigned int headroom = skb_headroom(skb); 189 + int tailroom = skb_tailroom(skb); 190 + int ret = 0; 191 + 192 + offset = (unsigned long)skb->data & iova_mask; 193 + delta1 = offset; 194 + delta2 = iova_mask - offset + 1; 195 + 196 + if (headroom >= delta1) { 197 + ath12k_dp_tx_move_payload(skb, delta1, true); 198 + } else if (tailroom >= delta2) { 199 + ath12k_dp_tx_move_payload(skb, delta2, false); 200 + } else { 201 + skb2 = skb_realloc_headroom(skb, iova_mask); 202 + if (!skb2) { 203 + ret = -ENOMEM; 204 + goto out; 205 + } 206 + 207 + dev_kfree_skb_any(skb); 208 + 209 + offset = (unsigned long)skb2->data & iova_mask; 210 + if (offset) 211 + ath12k_dp_tx_move_payload(skb2, offset, true); 212 + *pskb = skb2; 213 + } 214 + 215 + out: 216 + return ret; 217 + } 218 + 165 219 int ath12k_dp_tx(struct ath12k *ar, struct ath12k_vif *arvif, 166 220 struct sk_buff *skb) 167 221 { ··· 238 184 bool tcl_ring_retry; 239 185 bool msdu_ext_desc = false; 240 186 bool add_htt_metadata = false; 187 + u32 iova_mask = ab->hw_params->iova_mask; 241 188 242 189 if (test_bit(ATH12K_FLAG_CRASH_FLUSH, &ar->ab->dev_flags)) 243 190 return -ESHUTDOWN; ··· 334 279 goto fail_remove_tx_buf; 335 280 } 336 281 282 + if (iova_mask && 283 + (unsigned long)skb->data & iova_mask) { 284 + ret = ath12k_dp_tx_align_payload(ab, &skb); 285 + if (ret) { 286 + ath12k_warn(ab, "failed to align TX buffer %d\n", ret); 287 + /* don't bail out, give original buffer 288 + * a chance even unaligned. 289 + */ 290 + goto map; 291 + } 292 + 293 + /* hdr is pointing to a wrong place after alignment, 294 + * so refresh it for later use. 295 + */ 296 + hdr = (void *)skb->data; 297 + } 298 + map: 337 299 ti.paddr = dma_map_single(ab->dev, skb->data, skb->len, DMA_TO_DEVICE); 338 300 if (dma_mapping_error(ab->dev, ti.paddr)) { 339 301 atomic_inc(&ab->soc_stats.tx_err.misc_fail);
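Aside: the alignment helper above either pulls the payload toward the head by offset bytes or pushes it toward the tail by iova_mask - offset + 1 bytes, whichever fits the available room; both choices land the buffer start on a payload-size boundary. A standalone sketch of the arithmetic with a made-up buffer address, using the 128-byte payload size from the hw.h hunk below:

#include <stdio.h>

#define IOVA_MASK (128 - 1)   /* ATH12K_PCIE_MAX_PAYLOAD_SIZE - 1 */

int main(void)
{
        unsigned long addr = 0x1000 + 40;                  /* hypothetical skb->data */
        unsigned long offset = addr & IOVA_MASK;           /* 40 */
        unsigned long delta_head = offset;                 /* move payload up by 40 */
        unsigned long delta_tail = IOVA_MASK - offset + 1; /* or down by 88 */

        printf("offset=%lu head_shift=%lu tail_shift=%lu\n",
               offset, delta_head, delta_tail);
        /* Both results are multiples of 128. */
        printf("aligned via head: %#lx\n", addr - delta_head);
        printf("aligned via tail: %#lx\n", addr + delta_tail);
        return 0;
}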
+6
drivers/net/wireless/ath/ath12k/hw.c
··· 924 924 925 925 .acpi_guid = NULL, 926 926 .supports_dynamic_smps_6ghz = true, 927 + 928 + .iova_mask = 0, 927 929 }, 928 930 { 929 931 .name = "wcn7850 hw2.0", ··· 1002 1000 1003 1001 .acpi_guid = &wcn7850_uuid, 1004 1002 .supports_dynamic_smps_6ghz = false, 1003 + 1004 + .iova_mask = ATH12K_PCIE_MAX_PAYLOAD_SIZE - 1, 1005 1005 }, 1006 1006 { 1007 1007 .name = "qcn9274 hw2.0", ··· 1076 1072 1077 1073 .acpi_guid = NULL, 1078 1074 .supports_dynamic_smps_6ghz = true, 1075 + 1076 + .iova_mask = 0, 1079 1077 }, 1080 1078 }; 1081 1079
+4
drivers/net/wireless/ath/ath12k/hw.h
··· 96 96 #define ATH12K_M3_FILE "m3.bin" 97 97 #define ATH12K_REGDB_FILE_NAME "regdb.bin" 98 98 99 + #define ATH12K_PCIE_MAX_PAYLOAD_SIZE 128 100 + 99 101 enum ath12k_hw_rate_cck { 100 102 ATH12K_HW_RATE_CCK_LP_11M = 0, 101 103 ATH12K_HW_RATE_CCK_LP_5_5M, ··· 217 215 218 216 const guid_t *acpi_guid; 219 217 bool supports_dynamic_smps_6ghz; 218 + 219 + u32 iova_mask; 220 220 }; 221 221 222 222 struct ath12k_hw_ops {
+1
drivers/net/wireless/ath/ath12k/mac.c
··· 9193 9193 9194 9194 hw->vif_data_size = sizeof(struct ath12k_vif); 9195 9195 hw->sta_data_size = sizeof(struct ath12k_sta); 9196 + hw->extra_tx_headroom = ab->hw_params->iova_mask; 9196 9197 9197 9198 wiphy_ext_feature_set(wiphy, NL80211_EXT_FEATURE_CQM_RSSI_LIST); 9198 9199 wiphy_ext_feature_set(wiphy, NL80211_EXT_FEATURE_STA_TX_PWR);
+10 -3
drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
··· 4320 4320 /* Single PMK operation */ 4321 4321 pmk_op->count = cpu_to_le16(1); 4322 4322 length += sizeof(struct brcmf_pmksa_v3); 4323 - memcpy(pmk_op->pmk[0].bssid, pmksa->bssid, ETH_ALEN); 4324 - memcpy(pmk_op->pmk[0].pmkid, pmksa->pmkid, WLAN_PMKID_LEN); 4325 - pmk_op->pmk[0].pmkid_len = WLAN_PMKID_LEN; 4323 + if (pmksa->bssid) 4324 + memcpy(pmk_op->pmk[0].bssid, pmksa->bssid, ETH_ALEN); 4325 + if (pmksa->pmkid) { 4326 + memcpy(pmk_op->pmk[0].pmkid, pmksa->pmkid, WLAN_PMKID_LEN); 4327 + pmk_op->pmk[0].pmkid_len = WLAN_PMKID_LEN; 4328 + } 4329 + if (pmksa->ssid && pmksa->ssid_len) { 4330 + memcpy(pmk_op->pmk[0].ssid.SSID, pmksa->ssid, pmksa->ssid_len); 4331 + pmk_op->pmk[0].ssid.SSID_len = pmksa->ssid_len; 4332 + } 4326 4333 pmk_op->pmk[0].time_left = cpu_to_le32(alive ? BRCMF_PMKSA_NO_EXPIRY : 0); 4327 4334 } 4328 4335
+2 -1
drivers/net/wireless/intel/iwlwifi/pcie/internal.h
··· 639 639 int iwl_pcie_txq_alloc(struct iwl_trans *trans, struct iwl_txq *txq, 640 640 int slots_num, bool cmd_queue); 641 641 642 - dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, void *addr); 642 + dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, unsigned int offset, 643 + unsigned int len); 643 644 struct sg_table *iwl_pcie_prep_tso(struct iwl_trans *trans, struct sk_buff *skb, 644 645 struct iwl_cmd_meta *cmd_meta, 645 646 u8 **hdr, unsigned int hdr_room);
+4 -1
drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
··· 168 168 struct ieee80211_hdr *hdr = (void *)skb->data; 169 169 unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room; 170 170 unsigned int mss = skb_shinfo(skb)->gso_size; 171 + unsigned int data_offset = 0; 171 172 dma_addr_t start_hdr_phys; 172 173 u16 length, amsdu_pad; 173 174 u8 *start_hdr; ··· 261 260 int ret; 262 261 263 262 tb_len = min_t(unsigned int, tso.size, data_left); 264 - tb_phys = iwl_pcie_get_sgt_tb_phys(sgt, tso.data); 263 + tb_phys = iwl_pcie_get_sgt_tb_phys(sgt, data_offset, 264 + tb_len); 265 265 /* Not a real mapping error, use direct comparison */ 266 266 if (unlikely(tb_phys == DMA_MAPPING_ERROR)) 267 267 goto out_err; ··· 274 272 goto out_err; 275 273 276 274 data_left -= tb_len; 275 + data_offset += tb_len; 277 276 tso_build_data(skb, &tso, tb_len); 278 277 } 279 278 }
+22 -10
drivers/net/wireless/intel/iwlwifi/pcie/tx.c
··· 1814 1814 /** 1815 1815 * iwl_pcie_get_sgt_tb_phys - Find TB address in mapped SG list 1816 1816 * @sgt: scatter gather table 1817 - * @addr: Virtual address 1817 + * @offset: Offset into the mapped memory (i.e. SKB payload data) 1818 + * @len: Length of the area 1818 1819 * 1819 - * Find the entry that includes the address for the given address and return 1820 - * correct physical address for the TB entry. 1820 + * Find the DMA address that corresponds to the SKB payload data at the 1821 + * position given by @offset. 1821 1822 * 1822 1823 * Returns: Address for TB entry 1823 1824 */ 1824 - dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, void *addr) 1825 + dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, unsigned int offset, 1826 + unsigned int len) 1825 1827 { 1826 1828 struct scatterlist *sg; 1829 + unsigned int sg_offset = 0; 1827 1830 int i; 1828 1831 1832 + /* 1833 + * Search the mapped DMA areas in the SG for the area that contains the 1834 + * data at offset with the given length. 1835 + */ 1829 1836 for_each_sgtable_dma_sg(sgt, sg, i) { 1830 - if (addr >= sg_virt(sg) && 1831 - (u8 *)addr < (u8 *)sg_virt(sg) + sg_dma_len(sg)) 1832 - return sg_dma_address(sg) + 1833 - ((unsigned long)addr - (unsigned long)sg_virt(sg)); 1837 + if (offset >= sg_offset && 1838 + offset + len <= sg_offset + sg_dma_len(sg)) 1839 + return sg_dma_address(sg) + offset - sg_offset; 1840 + 1841 + sg_offset += sg_dma_len(sg); 1834 1842 } 1835 1843 1836 1844 WARN_ON_ONCE(1); ··· 1883 1875 1884 1876 sg_init_table(sgt->sgl, skb_shinfo(skb)->nr_frags + 1); 1885 1877 1886 - sgt->orig_nents = skb_to_sgvec(skb, sgt->sgl, 0, skb->len); 1878 + /* Only map the data, not the header (it is copied to the TSO page) */ 1879 + sgt->orig_nents = skb_to_sgvec(skb, sgt->sgl, skb_headlen(skb), 1880 + skb->data_len); 1887 1881 if (WARN_ON_ONCE(sgt->orig_nents <= 0)) 1888 1882 return NULL; 1889 1883 ··· 1910 1900 struct ieee80211_hdr *hdr = (void *)skb->data; 1911 1901 unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room; 1912 1902 unsigned int mss = skb_shinfo(skb)->gso_size; 1903 + unsigned int data_offset = 0; 1913 1904 u16 length, iv_len, amsdu_pad; 1914 1905 dma_addr_t start_hdr_phys; 1915 1906 u8 *start_hdr, *pos_hdr; ··· 2011 2000 data_left); 2012 2001 dma_addr_t tb_phys; 2013 2002 2014 - tb_phys = iwl_pcie_get_sgt_tb_phys(sgt, tso.data); 2003 + tb_phys = iwl_pcie_get_sgt_tb_phys(sgt, data_offset, size); 2015 2004 /* Not a real mapping error, use direct comparison */ 2016 2005 if (unlikely(tb_phys == DMA_MAPPING_ERROR)) 2017 2006 return -EINVAL; ··· 2022 2011 tb_phys, size); 2023 2012 2024 2013 data_left -= size; 2014 + data_offset += size; 2025 2015 tso_build_data(skb, &tso, size); 2026 2016 } 2027 2017 }
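Aside: iwl_pcie_get_sgt_tb_phys() now resolves a byte offset into the mapped payload instead of a kernel virtual address, keeping a running total of the mapped lengths and requiring the whole span to fall inside one entry. A simplified userspace model of that walk, using a fake two-entry table rather than a real scatterlist:

#include <stdint.h>
#include <stdio.h>

struct fake_sg { uint64_t dma_addr; unsigned int len; };

#define MAPPING_ERROR ((uint64_t)-1)

static uint64_t sgt_tb_phys(const struct fake_sg *sg, int nents,
                            unsigned int offset, unsigned int len)
{
        unsigned int sg_offset = 0;

        for (int i = 0; i < nents; i++) {
                /* The requested span must sit entirely inside one mapped area. */
                if (offset >= sg_offset && offset + len <= sg_offset + sg[i].len)
                        return sg[i].dma_addr + offset - sg_offset;
                sg_offset += sg[i].len;
        }
        return MAPPING_ERROR;
}

int main(void)
{
        struct fake_sg sg[] = { { 0x80000000, 4096 }, { 0x90000000, 2048 } };

        printf("%#llx\n", (unsigned long long)sgt_tb_phys(sg, 2, 100, 64));   /* 0x80000064 */
        printf("%#llx\n", (unsigned long long)sgt_tb_phys(sg, 2, 4100, 100)); /* 0x90000004 */
        return 0;
}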
+1 -1
drivers/net/wireless/mediatek/mt76/mt7921/main.c
··· 1183 1183 struct inet6_dev *idev) 1184 1184 { 1185 1185 struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv; 1186 - struct mt792x_dev *dev = mvif->phy->dev; 1186 + struct mt792x_dev *dev = mt792x_hw_dev(hw); 1187 1187 struct inet6_ifaddr *ifa; 1188 1188 struct in6_addr ns_addrs[IEEE80211_BSS_ARP_ADDR_LIST_LEN]; 1189 1189 struct sk_buff *skb;
+1 -1
drivers/net/wireless/realtek/rtlwifi/rtl8192du/hw.c
··· 181 181 struct rtl_hal *rtlhal = rtl_hal(rtlpriv); 182 182 u32 txqpagenum, txqpageunit; 183 183 u32 txqremainingpage; 184 + u32 value32 = 0; 184 185 u32 numhq = 0; 185 186 u32 numlq = 0; 186 187 u32 numnq = 0; 187 188 u32 numpubq; 188 - u32 value32; 189 189 190 190 if (rtlhal->macphymode != SINGLEMAC_SINGLEPHY) { 191 191 numpubq = NORMAL_PAGE_NUM_PUBQ_92D_DUAL_MAC;
+1 -1
drivers/nvdimm/pmem.c
··· 498 498 } 499 499 if (fua) 500 500 lim.features |= BLK_FEAT_FUA; 501 - if (is_nd_pfn(dev)) 501 + if (is_nd_pfn(dev) || pmem_should_map_pages(dev)) 502 502 lim.features |= BLK_FEAT_DAX; 503 503 504 504 if (!devm_request_mem_region(dev, res->start, resource_size(res),
+11 -4
drivers/of/irq.c
··· 344 344 struct device_node *p; 345 345 const __be32 *addr; 346 346 u32 intsize; 347 - int i, res; 347 + int i, res, addr_len; 348 + __be32 addr_buf[3] = { 0 }; 348 349 349 350 pr_debug("of_irq_parse_one: dev=%pOF, index=%d\n", device, index); 350 351 ··· 354 353 return of_irq_parse_oldworld(device, index, out_irq); 355 354 356 355 /* Get the reg property (if any) */ 357 - addr = of_get_property(device, "reg", NULL); 356 + addr = of_get_property(device, "reg", &addr_len); 357 + 358 + /* Prevent out-of-bounds read in case of longer interrupt parent address size */ 359 + if (addr_len > (3 * sizeof(__be32))) 360 + addr_len = 3 * sizeof(__be32); 361 + if (addr) 362 + memcpy(addr_buf, addr, addr_len); 358 363 359 364 /* Try the new-style interrupts-extended first */ 360 365 res = of_parse_phandle_with_args(device, "interrupts-extended", 361 366 "#interrupt-cells", index, out_irq); 362 367 if (!res) 363 - return of_irq_parse_raw(addr, out_irq); 368 + return of_irq_parse_raw(addr_buf, out_irq); 364 369 365 370 /* Look for the interrupt parent. */ 366 371 p = of_irq_find_parent(device); ··· 396 389 397 390 398 391 /* Check if there are any interrupt-map translations to process */ 399 - res = of_irq_parse_raw(addr, out_irq); 392 + res = of_irq_parse_raw(addr_buf, out_irq); 400 393 out: 401 394 of_node_put(p); 402 395 return res;
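Aside: the fix above copies at most three cells of the "reg" property into a fixed local buffer before handing it to of_irq_parse_raw(), so a longer interrupt-parent address can no longer cause an out-of-bounds read. The clamping pattern in isolation, with invented property contents and endianness ignored (the kernel cells are __be32):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
        uint32_t reg_prop[5] = { 1, 2, 3, 4, 5 }; /* hypothetical "reg" property */
        int addr_len = sizeof(reg_prop);          /* length reported for the property */
        uint32_t addr_buf[3] = { 0 };

        if (addr_len > (int)sizeof(addr_buf))     /* clamp to the 3-cell buffer */
                addr_len = sizeof(addr_buf);
        memcpy(addr_buf, reg_prop, addr_len);

        printf("%u %u %u\n", addr_buf[0], addr_buf[1], addr_buf[2]); /* 1 2 3 */
        return 0;
}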
+1
drivers/platform/x86/Kconfig
··· 477 477 tristate "Lenovo Yoga Tablet Mode Control" 478 478 depends on ACPI_WMI 479 479 depends on INPUT 480 + depends on IDEAPAD_LAPTOP 480 481 select INPUT_SPARSEKMAP 481 482 help 482 483 This driver maps the Tablet Mode Control switch to SW_TABLET_MODE input
+11 -21
drivers/platform/x86/amd/pmf/spc.c
··· 150 150 return 0; 151 151 } 152 152 153 - static int amd_pmf_get_sensor_info(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in) 153 + static void amd_pmf_get_sensor_info(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in) 154 154 { 155 155 struct amd_sfh_info sfh_info; 156 - int ret; 156 + 157 + /* Get the latest information from SFH */ 158 + in->ev_info.user_present = false; 157 159 158 160 /* Get ALS data */ 159 - ret = amd_get_sfh_info(&sfh_info, MT_ALS); 160 - if (!ret) 161 + if (!amd_get_sfh_info(&sfh_info, MT_ALS)) 161 162 in->ev_info.ambient_light = sfh_info.ambient_light; 162 163 else 163 - return ret; 164 + dev_dbg(dev->dev, "ALS is not enabled/detected\n"); 164 165 165 166 /* get HPD data */ 166 - ret = amd_get_sfh_info(&sfh_info, MT_HPD); 167 - if (ret) 168 - return ret; 169 - 170 - switch (sfh_info.user_present) { 171 - case SFH_NOT_DETECTED: 172 - in->ev_info.user_present = 0xff; /* assume no sensors connected */ 173 - break; 174 - case SFH_USER_PRESENT: 175 - in->ev_info.user_present = 1; 176 - break; 177 - case SFH_USER_AWAY: 178 - in->ev_info.user_present = 0; 179 - break; 167 + if (!amd_get_sfh_info(&sfh_info, MT_HPD)) { 168 + if (sfh_info.user_present == SFH_USER_PRESENT) 169 + in->ev_info.user_present = true; 170 + } else { 171 + dev_dbg(dev->dev, "HPD is not enabled/detected\n"); 180 172 } 181 - 182 - return 0; 183 173 } 184 174 185 175 void amd_pmf_populate_ta_inputs(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in)
+132 -16
drivers/platform/x86/ideapad-laptop.c
··· 126 126 127 127 struct ideapad_private { 128 128 struct acpi_device *adev; 129 + struct mutex vpc_mutex; /* protects the VPC calls */ 129 130 struct rfkill *rfk[IDEAPAD_RFKILL_DEV_NUM]; 130 131 struct ideapad_rfk_priv rfk_priv[IDEAPAD_RFKILL_DEV_NUM]; 131 132 struct platform_device *platform_device; ··· 147 146 bool touchpad_ctrl_via_ec : 1; 148 147 bool ctrl_ps2_aux_port : 1; 149 148 bool usb_charging : 1; 149 + bool ymc_ec_trigger : 1; 150 150 } features; 151 151 struct { 152 152 bool initialized; ··· 195 193 MODULE_PARM_DESC(touchpad_ctrl_via_ec, 196 194 "Enable registering a 'touchpad' sysfs-attribute which can be used to manually " 197 195 "tell the EC to enable/disable the touchpad. This may not work on all models."); 196 + 197 + static bool ymc_ec_trigger __read_mostly; 198 + module_param(ymc_ec_trigger, bool, 0444); 199 + MODULE_PARM_DESC(ymc_ec_trigger, 200 + "Enable EC triggering work-around to force emitting tablet mode events. " 201 + "If you need this please report this to: platform-driver-x86@vger.kernel.org"); 198 202 199 203 /* 200 204 * shared data ··· 301 293 { 302 294 struct ideapad_private *priv = s->private; 303 295 unsigned long value; 296 + 297 + guard(mutex)(&priv->vpc_mutex); 304 298 305 299 if (!read_ec_data(priv->adev->handle, VPCCMD_R_BL_MAX, &value)) 306 300 seq_printf(s, "Backlight max: %lu\n", value); ··· 422 412 unsigned long result; 423 413 int err; 424 414 425 - err = read_ec_data(priv->adev->handle, VPCCMD_R_CAMERA, &result); 415 + scoped_guard(mutex, &priv->vpc_mutex) 416 + err = read_ec_data(priv->adev->handle, VPCCMD_R_CAMERA, &result); 426 417 if (err) 427 418 return err; 428 419 ··· 442 431 if (err) 443 432 return err; 444 433 445 - err = write_ec_cmd(priv->adev->handle, VPCCMD_W_CAMERA, state); 434 + scoped_guard(mutex, &priv->vpc_mutex) 435 + err = write_ec_cmd(priv->adev->handle, VPCCMD_W_CAMERA, state); 446 436 if (err) 447 437 return err; 448 438 ··· 496 484 unsigned long result; 497 485 int err; 498 486 499 - err = read_ec_data(priv->adev->handle, VPCCMD_R_FAN, &result); 487 + scoped_guard(mutex, &priv->vpc_mutex) 488 + err = read_ec_data(priv->adev->handle, VPCCMD_R_FAN, &result); 500 489 if (err) 501 490 return err; 502 491 ··· 519 506 if (state > 4 || state == 3) 520 507 return -EINVAL; 521 508 522 - err = write_ec_cmd(priv->adev->handle, VPCCMD_W_FAN, state); 509 + scoped_guard(mutex, &priv->vpc_mutex) 510 + err = write_ec_cmd(priv->adev->handle, VPCCMD_W_FAN, state); 523 511 if (err) 524 512 return err; 525 513 ··· 605 591 unsigned long result; 606 592 int err; 607 593 608 - err = read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &result); 594 + scoped_guard(mutex, &priv->vpc_mutex) 595 + err = read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &result); 609 596 if (err) 610 597 return err; 611 598 ··· 627 612 if (err) 628 613 return err; 629 614 630 - err = write_ec_cmd(priv->adev->handle, VPCCMD_W_TOUCHPAD, state); 615 + scoped_guard(mutex, &priv->vpc_mutex) 616 + err = write_ec_cmd(priv->adev->handle, VPCCMD_W_TOUCHPAD, state); 631 617 if (err) 632 618 return err; 633 619 ··· 1021 1005 struct ideapad_rfk_priv *priv = data; 1022 1006 int opcode = ideapad_rfk_data[priv->dev].opcode; 1023 1007 1008 + guard(mutex)(&priv->priv->vpc_mutex); 1009 + 1024 1010 return write_ec_cmd(priv->priv->adev->handle, opcode, !blocked); 1025 1011 } 1026 1012 ··· 1036 1018 int i; 1037 1019 1038 1020 if (priv->features.hw_rfkill_switch) { 1021 + guard(mutex)(&priv->vpc_mutex); 1022 + 1039 1023 if (read_ec_data(priv->adev->handle, VPCCMD_R_RF, 
&hw_blocked)) 1040 1024 return; 1041 1025 hw_blocked = !hw_blocked; ··· 1211 1191 { 1212 1192 unsigned long long_pressed; 1213 1193 1214 - if (read_ec_data(priv->adev->handle, VPCCMD_R_NOVO, &long_pressed)) 1215 - return; 1194 + scoped_guard(mutex, &priv->vpc_mutex) 1195 + if (read_ec_data(priv->adev->handle, VPCCMD_R_NOVO, &long_pressed)) 1196 + return; 1216 1197 1217 1198 if (long_pressed) 1218 1199 ideapad_input_report(priv, 17); ··· 1225 1204 { 1226 1205 unsigned long bit, value; 1227 1206 1228 - if (read_ec_data(priv->adev->handle, VPCCMD_R_SPECIAL_BUTTONS, &value)) 1229 - return; 1207 + scoped_guard(mutex, &priv->vpc_mutex) 1208 + if (read_ec_data(priv->adev->handle, VPCCMD_R_SPECIAL_BUTTONS, &value)) 1209 + return; 1230 1210 1231 1211 for_each_set_bit (bit, &value, 16) { 1232 1212 switch (bit) { ··· 1260 1238 unsigned long now; 1261 1239 int err; 1262 1240 1241 + guard(mutex)(&priv->vpc_mutex); 1242 + 1263 1243 err = read_ec_data(priv->adev->handle, VPCCMD_R_BL, &now); 1264 1244 if (err) 1265 1245 return err; ··· 1273 1249 { 1274 1250 struct ideapad_private *priv = bl_get_data(blightdev); 1275 1251 int err; 1252 + 1253 + guard(mutex)(&priv->vpc_mutex); 1276 1254 1277 1255 err = write_ec_cmd(priv->adev->handle, VPCCMD_W_BL, 1278 1256 blightdev->props.brightness); ··· 1353 1327 if (!blightdev) 1354 1328 return; 1355 1329 1330 + guard(mutex)(&priv->vpc_mutex); 1331 + 1356 1332 if (read_ec_data(priv->adev->handle, VPCCMD_R_BL_POWER, &power)) 1357 1333 return; 1358 1334 ··· 1367 1339 1368 1340 /* if we control brightness via acpi video driver */ 1369 1341 if (!priv->blightdev) 1370 - read_ec_data(priv->adev->handle, VPCCMD_R_BL, &now); 1342 + scoped_guard(mutex, &priv->vpc_mutex) 1343 + read_ec_data(priv->adev->handle, VPCCMD_R_BL, &now); 1371 1344 else 1372 1345 backlight_force_update(priv->blightdev, BACKLIGHT_UPDATE_HOTKEY); 1373 1346 } ··· 1593 1564 int ret; 1594 1565 1595 1566 /* Without reading from EC touchpad LED doesn't switch state */ 1596 - ret = read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &value); 1567 + scoped_guard(mutex, &priv->vpc_mutex) 1568 + ret = read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &value); 1597 1569 if (ret) 1598 1570 return; 1599 1571 ··· 1622 1592 priv->r_touchpad_val = value; 1623 1593 } 1624 1594 1595 + static const struct dmi_system_id ymc_ec_trigger_quirk_dmi_table[] = { 1596 + { 1597 + /* Lenovo Yoga 7 14ARB7 */ 1598 + .matches = { 1599 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 1600 + DMI_MATCH(DMI_PRODUCT_NAME, "82QF"), 1601 + }, 1602 + }, 1603 + { 1604 + /* Lenovo Yoga 7 14ACN6 */ 1605 + .matches = { 1606 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 1607 + DMI_MATCH(DMI_PRODUCT_NAME, "82N7"), 1608 + }, 1609 + }, 1610 + { } 1611 + }; 1612 + 1613 + static void ideapad_laptop_trigger_ec(void) 1614 + { 1615 + struct ideapad_private *priv; 1616 + int ret; 1617 + 1618 + guard(mutex)(&ideapad_shared_mutex); 1619 + 1620 + priv = ideapad_shared; 1621 + if (!priv) 1622 + return; 1623 + 1624 + if (!priv->features.ymc_ec_trigger) 1625 + return; 1626 + 1627 + scoped_guard(mutex, &priv->vpc_mutex) 1628 + ret = write_ec_cmd(priv->adev->handle, VPCCMD_W_YMC, 1); 1629 + if (ret) 1630 + dev_warn(&priv->platform_device->dev, "Could not write YMC: %d\n", ret); 1631 + } 1632 + 1633 + static int ideapad_laptop_nb_notify(struct notifier_block *nb, 1634 + unsigned long action, void *data) 1635 + { 1636 + switch (action) { 1637 + case IDEAPAD_LAPTOP_YMC_EVENT: 1638 + ideapad_laptop_trigger_ec(); 1639 + break; 1640 + } 1641 + 1642 + return 0; 1643 + } 1644 + 1645 + 
static struct notifier_block ideapad_laptop_notifier = { 1646 + .notifier_call = ideapad_laptop_nb_notify, 1647 + }; 1648 + 1649 + static BLOCKING_NOTIFIER_HEAD(ideapad_laptop_chain_head); 1650 + 1651 + int ideapad_laptop_register_notifier(struct notifier_block *nb) 1652 + { 1653 + return blocking_notifier_chain_register(&ideapad_laptop_chain_head, nb); 1654 + } 1655 + EXPORT_SYMBOL_NS_GPL(ideapad_laptop_register_notifier, IDEAPAD_LAPTOP); 1656 + 1657 + int ideapad_laptop_unregister_notifier(struct notifier_block *nb) 1658 + { 1659 + return blocking_notifier_chain_unregister(&ideapad_laptop_chain_head, nb); 1660 + } 1661 + EXPORT_SYMBOL_NS_GPL(ideapad_laptop_unregister_notifier, IDEAPAD_LAPTOP); 1662 + 1663 + void ideapad_laptop_call_notifier(unsigned long action, void *data) 1664 + { 1665 + blocking_notifier_call_chain(&ideapad_laptop_chain_head, action, data); 1666 + } 1667 + EXPORT_SYMBOL_NS_GPL(ideapad_laptop_call_notifier, IDEAPAD_LAPTOP); 1668 + 1625 1669 static void ideapad_acpi_notify(acpi_handle handle, u32 event, void *data) 1626 1670 { 1627 1671 struct ideapad_private *priv = data; 1628 1672 unsigned long vpc1, vpc2, bit; 1629 1673 1630 - if (read_ec_data(handle, VPCCMD_R_VPC1, &vpc1)) 1631 - return; 1674 + scoped_guard(mutex, &priv->vpc_mutex) { 1675 + if (read_ec_data(handle, VPCCMD_R_VPC1, &vpc1)) 1676 + return; 1632 1677 1633 - if (read_ec_data(handle, VPCCMD_R_VPC2, &vpc2)) 1634 - return; 1678 + if (read_ec_data(handle, VPCCMD_R_VPC2, &vpc2)) 1679 + return; 1680 + } 1635 1681 1636 1682 vpc1 = (vpc2 << 8) | vpc1; 1637 1683 ··· 1834 1728 priv->features.ctrl_ps2_aux_port = 1835 1729 ctrl_ps2_aux_port || dmi_check_system(ctrl_ps2_aux_port_list); 1836 1730 priv->features.touchpad_ctrl_via_ec = touchpad_ctrl_via_ec; 1731 + priv->features.ymc_ec_trigger = 1732 + ymc_ec_trigger || dmi_check_system(ymc_ec_trigger_quirk_dmi_table); 1837 1733 1838 1734 if (!read_ec_data(handle, VPCCMD_R_FAN, &val)) 1839 1735 priv->features.fan_mode = true; ··· 2014 1906 priv->adev = adev; 2015 1907 priv->platform_device = pdev; 2016 1908 1909 + err = devm_mutex_init(&pdev->dev, &priv->vpc_mutex); 1910 + if (err) 1911 + return err; 1912 + 2017 1913 ideapad_check_features(priv); 2018 1914 2019 1915 err = ideapad_sysfs_init(priv); ··· 2086 1974 if (err) 2087 1975 goto shared_init_failed; 2088 1976 1977 + ideapad_laptop_register_notifier(&ideapad_laptop_notifier); 1978 + 2089 1979 return 0; 2090 1980 2091 1981 shared_init_failed: ··· 2119 2005 { 2120 2006 struct ideapad_private *priv = dev_get_drvdata(&pdev->dev); 2121 2007 int i; 2008 + 2009 + ideapad_laptop_unregister_notifier(&ideapad_laptop_notifier); 2122 2010 2123 2011 ideapad_shared_exit(priv); 2124 2012
+9
drivers/platform/x86/ideapad-laptop.h
··· 12 12 #include <linux/acpi.h> 13 13 #include <linux/jiffies.h> 14 14 #include <linux/errno.h> 15 + #include <linux/notifier.h> 16 + 17 + enum ideapad_laptop_notifier_actions { 18 + IDEAPAD_LAPTOP_YMC_EVENT, 19 + }; 20 + 21 + int ideapad_laptop_register_notifier(struct notifier_block *nb); 22 + int ideapad_laptop_unregister_notifier(struct notifier_block *nb); 23 + void ideapad_laptop_call_notifier(unsigned long action, void *data); 15 24 16 25 enum { 17 26 VPCCMD_R_VPC1 = 0x10,
+2 -58
drivers/platform/x86/lenovo-ymc.c
··· 20 20 #define LENOVO_YMC_QUERY_INSTANCE 0 21 21 #define LENOVO_YMC_QUERY_METHOD 0x01 22 22 23 - static bool ec_trigger __read_mostly; 24 - module_param(ec_trigger, bool, 0444); 25 - MODULE_PARM_DESC(ec_trigger, "Enable EC triggering work-around to force emitting tablet mode events"); 26 - 27 23 static bool force; 28 24 module_param(force, bool, 0444); 29 25 MODULE_PARM_DESC(force, "Force loading on boards without a convertible DMI chassis-type"); 30 - 31 - static const struct dmi_system_id ec_trigger_quirk_dmi_table[] = { 32 - { 33 - /* Lenovo Yoga 7 14ARB7 */ 34 - .matches = { 35 - DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 36 - DMI_MATCH(DMI_PRODUCT_NAME, "82QF"), 37 - }, 38 - }, 39 - { 40 - /* Lenovo Yoga 7 14ACN6 */ 41 - .matches = { 42 - DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 43 - DMI_MATCH(DMI_PRODUCT_NAME, "82N7"), 44 - }, 45 - }, 46 - { } 47 - }; 48 26 49 27 static const struct dmi_system_id allowed_chasis_types_dmi_table[] = { 50 28 { ··· 40 62 41 63 struct lenovo_ymc_private { 42 64 struct input_dev *input_dev; 43 - struct acpi_device *ec_acpi_dev; 44 65 }; 45 - 46 - static void lenovo_ymc_trigger_ec(struct wmi_device *wdev, struct lenovo_ymc_private *priv) 47 - { 48 - int err; 49 - 50 - if (!priv->ec_acpi_dev) 51 - return; 52 - 53 - err = write_ec_cmd(priv->ec_acpi_dev->handle, VPCCMD_W_YMC, 1); 54 - if (err) 55 - dev_warn(&wdev->dev, "Could not write YMC: %d\n", err); 56 - } 57 66 58 67 static const struct key_entry lenovo_ymc_keymap[] = { 59 68 /* Laptop */ ··· 90 125 91 126 free_obj: 92 127 kfree(obj); 93 - lenovo_ymc_trigger_ec(wdev, priv); 128 + ideapad_laptop_call_notifier(IDEAPAD_LAPTOP_YMC_EVENT, &code); 94 129 } 95 - 96 - static void acpi_dev_put_helper(void *p) { acpi_dev_put(p); } 97 130 98 131 static int lenovo_ymc_probe(struct wmi_device *wdev, const void *ctx) 99 132 { ··· 106 143 return -ENODEV; 107 144 } 108 145 109 - ec_trigger |= dmi_check_system(ec_trigger_quirk_dmi_table); 110 - 111 146 priv = devm_kzalloc(&wdev->dev, sizeof(*priv), GFP_KERNEL); 112 147 if (!priv) 113 148 return -ENOMEM; 114 - 115 - if (ec_trigger) { 116 - pr_debug("Lenovo YMC enable EC triggering.\n"); 117 - priv->ec_acpi_dev = acpi_dev_get_first_match_dev("VPC2004", NULL, -1); 118 - 119 - if (!priv->ec_acpi_dev) { 120 - dev_err(&wdev->dev, "Could not find EC ACPI device.\n"); 121 - return -ENODEV; 122 - } 123 - err = devm_add_action_or_reset(&wdev->dev, 124 - acpi_dev_put_helper, priv->ec_acpi_dev); 125 - if (err) { 126 - dev_err(&wdev->dev, 127 - "Could not clean up EC ACPI device: %d\n", err); 128 - return err; 129 - } 130 - } 131 149 132 150 input_dev = devm_input_allocate_device(&wdev->dev); 133 151 if (!input_dev) ··· 136 192 dev_set_drvdata(&wdev->dev, priv); 137 193 138 194 /* Report the state for the first time on probe */ 139 - lenovo_ymc_trigger_ec(wdev, priv); 140 195 lenovo_ymc_notify(wdev, NULL); 141 196 return 0; 142 197 } ··· 160 217 MODULE_AUTHOR("Gergo Koteles <soyer@irl.hu>"); 161 218 MODULE_DESCRIPTION("Lenovo Yoga Mode Control driver"); 162 219 MODULE_LICENSE("GPL"); 220 + MODULE_IMPORT_NS(IDEAPAD_LAPTOP);
+23 -13
drivers/s390/block/dasd.c
··· 1601 1601 if (!sense) 1602 1602 return 0; 1603 1603 1604 - return !!(sense[1] & SNS1_NO_REC_FOUND) || 1605 - !!(sense[1] & SNS1_FILE_PROTECTED) || 1606 - scsw_cstat(&irb->scsw) == SCHN_STAT_INCORR_LEN; 1604 + if (sense[1] & SNS1_NO_REC_FOUND) 1605 + return 1; 1606 + 1607 + if ((sense[1] & SNS1_INV_TRACK_FORMAT) && 1608 + scsw_is_tm(&irb->scsw) && 1609 + !(sense[2] & SNS2_ENV_DATA_PRESENT)) 1610 + return 1; 1611 + 1612 + return 0; 1607 1613 } 1608 1614 1609 1615 static int dasd_ese_oos_cond(u8 *sense) ··· 1630 1624 struct dasd_device *device; 1631 1625 unsigned long now; 1632 1626 int nrf_suppressed = 0; 1633 - int fp_suppressed = 0; 1627 + int it_suppressed = 0; 1634 1628 struct request *req; 1635 1629 u8 *sense = NULL; 1636 1630 int expires; ··· 1685 1679 */ 1686 1680 sense = dasd_get_sense(irb); 1687 1681 if (sense) { 1688 - fp_suppressed = (sense[1] & SNS1_FILE_PROTECTED) && 1689 - test_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags); 1682 + it_suppressed = (sense[1] & SNS1_INV_TRACK_FORMAT) && 1683 + !(sense[2] & SNS2_ENV_DATA_PRESENT) && 1684 + test_bit(DASD_CQR_SUPPRESS_IT, &cqr->flags); 1690 1685 nrf_suppressed = (sense[1] & SNS1_NO_REC_FOUND) && 1691 1686 test_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags); 1692 1687 ··· 1702 1695 return; 1703 1696 } 1704 1697 } 1705 - if (!(fp_suppressed || nrf_suppressed)) 1698 + if (!(it_suppressed || nrf_suppressed)) 1706 1699 device->discipline->dump_sense_dbf(device, irb, "int"); 1707 1700 1708 1701 if (device->features & DASD_FEATURE_ERPLOG) ··· 2466 2459 rc = 0; 2467 2460 list_for_each_entry_safe(cqr, n, ccw_queue, blocklist) { 2468 2461 /* 2469 - * In some cases the 'File Protected' or 'Incorrect Length' 2470 - * error might be expected and error recovery would be 2471 - * unnecessary in these cases. Check if the according suppress 2472 - * bit is set. 2462 + * In some cases certain errors might be expected and 2463 + * error recovery would be unnecessary in these cases. 2464 + * Check if the according suppress bit is set. 2473 2465 */ 2474 2466 sense = dasd_get_sense(&cqr->irb); 2475 - if (sense && sense[1] & SNS1_FILE_PROTECTED && 2476 - test_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags)) 2467 + if (sense && (sense[1] & SNS1_INV_TRACK_FORMAT) && 2468 + !(sense[2] & SNS2_ENV_DATA_PRESENT) && 2469 + test_bit(DASD_CQR_SUPPRESS_IT, &cqr->flags)) 2470 + continue; 2471 + if (sense && (sense[1] & SNS1_NO_REC_FOUND) && 2472 + test_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags)) 2477 2473 continue; 2478 2474 if (scsw_cstat(&cqr->irb.scsw) == 0x40 && 2479 2475 test_bit(DASD_CQR_SUPPRESS_IL, &cqr->flags))
+2 -8
drivers/s390/block/dasd_3990_erp.c
··· 1386 1386 1387 1387 struct dasd_device *device = erp->startdev; 1388 1388 1389 - /* 1390 - * In some cases the 'File Protected' error might be expected and 1391 - * log messages shouldn't be written then. 1392 - * Check if the according suppress bit is set. 1393 - */ 1394 - if (!test_bit(DASD_CQR_SUPPRESS_FP, &erp->flags)) 1395 - dev_err(&device->cdev->dev, 1396 - "Accessing the DASD failed because of a hardware error\n"); 1389 + dev_err(&device->cdev->dev, 1390 + "Accessing the DASD failed because of a hardware error\n"); 1397 1391 1398 1392 return dasd_3990_erp_cleanup(erp, DASD_CQR_FAILED); 1399 1393
+25 -32
drivers/s390/block/dasd_eckd.c
··· 2275 2275 cqr->status = DASD_CQR_FILLED; 2276 2276 /* Set flags to suppress output for expected errors */ 2277 2277 set_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags); 2278 + set_bit(DASD_CQR_SUPPRESS_IT, &cqr->flags); 2278 2279 2279 2280 return cqr; 2280 2281 } ··· 2557 2556 cqr->buildclk = get_tod_clock(); 2558 2557 cqr->status = DASD_CQR_FILLED; 2559 2558 /* Set flags to suppress output for expected errors */ 2560 - set_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags); 2561 2559 set_bit(DASD_CQR_SUPPRESS_IL, &cqr->flags); 2562 2560 2563 2561 return cqr; ··· 4130 4130 4131 4131 /* Set flags to suppress output for expected errors */ 4132 4132 if (dasd_eckd_is_ese(basedev)) { 4133 - set_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags); 4134 - set_bit(DASD_CQR_SUPPRESS_IL, &cqr->flags); 4135 4133 set_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags); 4136 4134 } 4137 4135 ··· 4631 4633 4632 4634 /* Set flags to suppress output for expected errors */ 4633 4635 if (dasd_eckd_is_ese(basedev)) { 4634 - set_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags); 4635 - set_bit(DASD_CQR_SUPPRESS_IL, &cqr->flags); 4636 4636 set_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags); 4637 + set_bit(DASD_CQR_SUPPRESS_IT, &cqr->flags); 4637 4638 } 4638 4639 4639 4640 return cqr; ··· 5777 5780 { 5778 5781 u8 *sense = dasd_get_sense(irb); 5779 5782 5780 - if (scsw_is_tm(&irb->scsw)) { 5781 - /* 5782 - * In some cases the 'File Protected' or 'Incorrect Length' 5783 - * error might be expected and log messages shouldn't be written 5784 - * then. Check if the according suppress bit is set. 5785 - */ 5786 - if (sense && (sense[1] & SNS1_FILE_PROTECTED) && 5787 - test_bit(DASD_CQR_SUPPRESS_FP, &req->flags)) 5788 - return; 5789 - if (scsw_cstat(&irb->scsw) == 0x40 && 5790 - test_bit(DASD_CQR_SUPPRESS_IL, &req->flags)) 5791 - return; 5783 + /* 5784 + * In some cases certain errors might be expected and 5785 + * log messages shouldn't be written then. 5786 + * Check if the according suppress bit is set. 5787 + */ 5788 + if (sense && (sense[1] & SNS1_INV_TRACK_FORMAT) && 5789 + !(sense[2] & SNS2_ENV_DATA_PRESENT) && 5790 + test_bit(DASD_CQR_SUPPRESS_IT, &req->flags)) 5791 + return; 5792 5792 5793 + if (sense && sense[0] & SNS0_CMD_REJECT && 5794 + test_bit(DASD_CQR_SUPPRESS_CR, &req->flags)) 5795 + return; 5796 + 5797 + if (sense && sense[1] & SNS1_NO_REC_FOUND && 5798 + test_bit(DASD_CQR_SUPPRESS_NRF, &req->flags)) 5799 + return; 5800 + 5801 + if (scsw_cstat(&irb->scsw) == 0x40 && 5802 + test_bit(DASD_CQR_SUPPRESS_IL, &req->flags)) 5803 + return; 5804 + 5805 + if (scsw_is_tm(&irb->scsw)) 5793 5806 dasd_eckd_dump_sense_tcw(device, req, irb); 5794 - } else { 5795 - /* 5796 - * In some cases the 'Command Reject' or 'No Record Found' 5797 - * error might be expected and log messages shouldn't be 5798 - * written then. Check if the according suppress bit is set. 5799 - */ 5800 - if (sense && sense[0] & SNS0_CMD_REJECT && 5801 - test_bit(DASD_CQR_SUPPRESS_CR, &req->flags)) 5802 - return; 5803 - 5804 - if (sense && sense[1] & SNS1_NO_REC_FOUND && 5805 - test_bit(DASD_CQR_SUPPRESS_NRF, &req->flags)) 5806 - return; 5807 - 5807 + else 5808 5808 dasd_eckd_dump_sense_ccw(device, req, irb); 5809 - } 5810 5809 } 5811 5810 5812 5811 static int dasd_eckd_reload_device(struct dasd_device *device)
-1
drivers/s390/block/dasd_genhd.c
··· 41 41 */ 42 42 .max_segment_size = PAGE_SIZE, 43 43 .seg_boundary_mask = PAGE_SIZE - 1, 44 - .dma_alignment = PAGE_SIZE - 1, 45 44 .max_segments = USHRT_MAX, 46 45 }; 47 46 struct gendisk *gdp;
+1 -1
drivers/s390/block/dasd_int.h
··· 196 196 * The following flags are used to suppress output of certain errors. 197 197 */ 198 198 #define DASD_CQR_SUPPRESS_NRF 4 /* Suppress 'No Record Found' error */ 199 - #define DASD_CQR_SUPPRESS_FP 5 /* Suppress 'File Protected' error*/ 199 + #define DASD_CQR_SUPPRESS_IT 5 /* Suppress 'Invalid Track' error*/ 200 200 #define DASD_CQR_SUPPRESS_IL 6 /* Suppress 'Incorrect Length' error */ 201 201 #define DASD_CQR_SUPPRESS_CR 7 /* Suppress 'Command Reject' error */ 202 202
+8 -3
drivers/scsi/mpi3mr/mpi3mr_app.c
··· 100 100 dprint_init(mrioc, 101 101 "trying to allocate trace diag buffer of size = %dKB\n", 102 102 trace_size / 1024); 103 - if (mpi3mr_alloc_trace_buffer(mrioc, trace_size)) { 103 + if (get_order(trace_size) > MAX_PAGE_ORDER || 104 + mpi3mr_alloc_trace_buffer(mrioc, trace_size)) { 104 105 retry = true; 105 106 trace_size -= trace_dec_size; 106 107 dprint_init(mrioc, "trace diag buffer allocation failed\n" ··· 119 118 diag_buffer->type = MPI3_DIAG_BUFFER_TYPE_FW; 120 119 diag_buffer->status = MPI3MR_HDB_BUFSTATUS_NOT_ALLOCATED; 121 120 if ((mrioc->facts.diag_fw_sz < fw_size) && (fw_size >= fw_min_size)) { 122 - diag_buffer->addr = dma_alloc_coherent(&mrioc->pdev->dev, 123 - fw_size, &diag_buffer->dma_addr, GFP_KERNEL); 121 + if (get_order(fw_size) <= MAX_PAGE_ORDER) { 122 + diag_buffer->addr 123 + = dma_alloc_coherent(&mrioc->pdev->dev, fw_size, 124 + &diag_buffer->dma_addr, 125 + GFP_KERNEL); 126 + } 124 127 if (!retry) 125 128 dprint_init(mrioc, 126 129 "%s:trying to allocate firmware diag buffer of size = %dKB\n",
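Aside: the allocation attempts above are now skipped whenever the requested size would need a higher page order than the page allocator supports, so the retry loop shrinks the buffer instead of issuing a doomed request. A standalone sketch of that order check; PAGE_SHIFT and the order cap below are illustrative, not the kernel's configuration:

#include <stdio.h>

#define PAGE_SHIFT    12   /* 4 KiB pages (illustrative) */
#define MAX_ORDER_CAP 10   /* stand-in for MAX_PAGE_ORDER */

static int size_to_order(unsigned long size)
{
        unsigned long pages = (size + (1UL << PAGE_SHIFT) - 1) >> PAGE_SHIFT;
        int order = 0;

        while ((1UL << order) < pages)
                order++;
        return order;
}

int main(void)
{
        unsigned long sizes[] = { 64 * 1024, 4 * 1024 * 1024, 8 * 1024 * 1024 };

        for (int i = 0; i < 3; i++)
                printf("size=%lu order=%d ok=%d\n", sizes[i],
                       size_to_order(sizes[i]),
                       size_to_order(sizes[i]) <= MAX_ORDER_CAP);
        return 0;
}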
+1
drivers/scsi/mpi3mr/mpi3mr_os.c
··· 5234 5234 spin_lock_init(&mrioc->watchdog_lock); 5235 5235 spin_lock_init(&mrioc->chain_buf_lock); 5236 5236 spin_lock_init(&mrioc->sas_node_lock); 5237 + spin_lock_init(&mrioc->trigger_lock); 5237 5238 5238 5239 INIT_LIST_HEAD(&mrioc->fwevt_list); 5239 5240 INIT_LIST_HEAD(&mrioc->tgtdev_list);
-5
drivers/soc/fsl/qbman/qman.c
··· 2546 2546 } 2547 2547 EXPORT_SYMBOL(qman_delete_cgr); 2548 2548 2549 - struct cgr_comp { 2550 - struct qman_cgr *cgr; 2551 - struct completion completion; 2552 - }; 2553 - 2554 2549 static void qman_delete_cgr_smp_call(void *p) 2555 2550 { 2556 2551 qman_delete_cgr((struct qman_cgr *)p);
+6 -2
drivers/staging/media/atomisp/pci/ia_css_stream_public.h
··· 27 27 #include "ia_css_prbs.h" 28 28 #include "ia_css_input_port.h" 29 29 30 - /* Input modes, these enumerate all supported input modes. 31 - * Note that not all ISP modes support all input modes. 30 + /* 31 + * Input modes, these enumerate all supported input modes. 32 + * This enum is part of the atomisp firmware ABI and must 33 + * NOT be changed! 34 + * Note that not all ISP modes support all input modes. 32 35 */ 33 36 enum ia_css_input_mode { 34 37 IA_CSS_INPUT_MODE_SENSOR, /** data from sensor */ 35 38 IA_CSS_INPUT_MODE_FIFO, /** data from input-fifo */ 39 + IA_CSS_INPUT_MODE_TPG, /** data from test-pattern generator */ 36 40 IA_CSS_INPUT_MODE_PRBS, /** data from pseudo-random bit stream */ 37 41 IA_CSS_INPUT_MODE_MEMORY, /** data from a frame in memory */ 38 42 IA_CSS_INPUT_MODE_BUFFERED_SENSOR /** data is sent through mipi buffer */
+16 -3
drivers/staging/media/atomisp/pci/sh_css_internal.h
··· 344 344 345 345 #define IA_CSS_MIPI_SIZE_CHECK_MAX_NOF_ENTRIES_PER_PORT (3) 346 346 347 - /* SP configuration information */ 347 + /* 348 + * SP configuration information 349 + * 350 + * This struct is part of the atomisp firmware ABI and is directly copied 351 + * to ISP DRAM by sh_css_store_sp_group_to_ddr() 352 + * 353 + * Do NOT change this struct's layout or remove seemingly unused fields! 354 + */ 348 355 struct sh_css_sp_config { 349 356 u8 no_isp_sync; /* Signal host immediately after start */ 350 357 u8 enable_raw_pool_locking; /** Enable Raw Buffer Locking for HALv3 Support */ ··· 361 354 host (true) or when they are passed to the preview/video pipe 362 355 (false). */ 363 356 357 + /* 358 + * Note the fields below are only used on the ISP2400 not on the ISP2401, 359 + * sh_css_store_sp_group_to_ddr() skip copying these when run on the ISP2401. 360 + */ 364 361 struct { 365 362 u8 a_changed; 366 363 u8 b_changed; ··· 374 363 } input_formatter; 375 364 376 365 sync_generator_cfg_t sync_gen; 366 + tpg_cfg_t tpg; 377 367 prbs_cfg_t prbs; 378 368 input_system_cfg_t input_circuit; 379 369 u8 input_circuit_cfg_changed; 380 - u32 mipi_sizes_for_check[N_CSI_PORTS][IA_CSS_MIPI_SIZE_CHECK_MAX_NOF_ENTRIES_PER_PORT]; 381 - u8 enable_isys_event_queue; 370 + u32 mipi_sizes_for_check[N_CSI_PORTS][IA_CSS_MIPI_SIZE_CHECK_MAX_NOF_ENTRIES_PER_PORT]; 371 + /* These last 2 fields are used on both the ISP2400 and the ISP2401 */ 372 + u8 enable_isys_event_queue; 382 373 u8 disable_cont_vf; 383 374 }; 384 375
+66 -17
drivers/thermal/gov_bang_bang.c
··· 13 13 14 14 #include "thermal_core.h" 15 15 16 + static void bang_bang_set_instance_target(struct thermal_instance *instance, 17 + unsigned int target) 18 + { 19 + if (instance->target != 0 && instance->target != 1 && 20 + instance->target != THERMAL_NO_TARGET) 21 + pr_debug("Unexpected state %ld of thermal instance %s in bang-bang\n", 22 + instance->target, instance->name); 23 + 24 + /* 25 + * Enable the fan when the trip is crossed on the way up and disable it 26 + * when the trip is crossed on the way down. 27 + */ 28 + instance->target = target; 29 + instance->initialized = true; 30 + 31 + dev_dbg(&instance->cdev->device, "target=%ld\n", instance->target); 32 + 33 + mutex_lock(&instance->cdev->lock); 34 + __thermal_cdev_update(instance->cdev); 35 + mutex_unlock(&instance->cdev->lock); 36 + } 37 + 16 38 /** 17 39 * bang_bang_control - controls devices associated with the given zone 18 40 * @tz: thermal_zone_device ··· 76 54 tz->temperature, trip->hysteresis); 77 55 78 56 list_for_each_entry(instance, &tz->thermal_instances, tz_node) { 79 - if (instance->trip != trip) 57 + if (instance->trip == trip) 58 + bang_bang_set_instance_target(instance, crossed_up); 59 + } 60 + } 61 + 62 + static void bang_bang_manage(struct thermal_zone_device *tz) 63 + { 64 + const struct thermal_trip_desc *td; 65 + struct thermal_instance *instance; 66 + 67 + /* If the code below has run already, nothing needs to be done. */ 68 + if (tz->governor_data) 69 + return; 70 + 71 + for_each_trip_desc(tz, td) { 72 + const struct thermal_trip *trip = &td->trip; 73 + 74 + if (tz->temperature >= td->threshold || 75 + trip->temperature == THERMAL_TEMP_INVALID || 76 + trip->type == THERMAL_TRIP_CRITICAL || 77 + trip->type == THERMAL_TRIP_HOT) 80 78 continue; 81 79 82 - if (instance->target != 0 && instance->target != 1 && 83 - instance->target != THERMAL_NO_TARGET) 84 - pr_debug("Unexpected state %ld of thermal instance %s in bang-bang\n", 85 - instance->target, instance->name); 86 - 87 80 /* 88 - * Enable the fan when the trip is crossed on the way up and 89 - * disable it when the trip is crossed on the way down. 81 + * If the initial cooling device state is "on", but the zone 82 + * temperature is not above the trip point, the core will not 83 + * call bang_bang_control() until the zone temperature reaches 84 + * the trip point temperature which may be never. In those 85 + * cases, set the initial state of the cooling device to 0. 90 86 */ 91 - instance->target = crossed_up; 92 - 93 - dev_dbg(&instance->cdev->device, "target=%ld\n", instance->target); 94 - 95 - mutex_lock(&instance->cdev->lock); 96 - instance->cdev->updated = false; /* cdev needs update */ 97 - mutex_unlock(&instance->cdev->lock); 87 + list_for_each_entry(instance, &tz->thermal_instances, tz_node) { 88 + if (!instance->initialized && instance->trip == trip) 89 + bang_bang_set_instance_target(instance, 0); 90 + } 98 91 } 99 92 100 - list_for_each_entry(instance, &tz->thermal_instances, tz_node) 101 - thermal_cdev_update(instance->cdev); 93 + tz->governor_data = (void *)true; 94 + } 95 + 96 + static void bang_bang_update_tz(struct thermal_zone_device *tz, 97 + enum thermal_notify_event reason) 98 + { 99 + /* 100 + * Let bang_bang_manage() know that it needs to walk trips after binding 101 + * a new cdev and after system resume. 
101 + * a new cdev and after system resume. 102 + */ 103 + if (reason == THERMAL_TZ_BIND_CDEV || reason == THERMAL_TZ_RESUME) 104 + tz->governor_data = NULL; 102 105 } 103 106 104 107 static struct thermal_governor thermal_gov_bang_bang = { 105 108 .name = "bang_bang", 106 109 .trip_crossed = bang_bang_control, 110 + .manage = bang_bang_manage, 111 + .update_tz = bang_bang_update_tz, 107 112 }; 108 113 THERMAL_GOVERNOR_DECLARE(thermal_gov_bang_bang);
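Aside: bang_bang_set_instance_target() is driven by trip-crossing events, so the fan target becomes 1 when the trip is crossed on the way up and 0 when it is crossed on the way down, with the hysteresis deciding where the downward crossing is detected. A toy model of that behaviour with invented trip and hysteresis values; the crossing detection here is a crude stand-in for what the thermal core does:

#include <stdbool.h>
#include <stdio.h>

static unsigned int target;        /* 0 = fan off, 1 = fan on */

static void trip_crossed(bool crossed_up)
{
        /* Enable on the way up, disable on the way down. */
        target = crossed_up;
}

int main(void)
{
        int trip = 70000, hyst = 5000;  /* millidegrees, illustrative only */
        int temps[] = { 60000, 71000, 68000, 64000 };

        for (unsigned int i = 0; i < sizeof(temps) / sizeof(temps[0]); i++) {
                if (temps[i] >= trip)
                        trip_crossed(true);            /* crossed upward */
                else if (temps[i] <= trip - hyst)
                        trip_crossed(false);           /* crossed downward */
                /* Between trip and trip - hyst the target is left alone. */
                printf("temp=%d target=%u\n", temps[i], target);
        }
        return 0;
}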
+2 -1
drivers/thermal/thermal_core.c
··· 1728 1728 1729 1729 thermal_debug_tz_resume(tz); 1730 1730 thermal_zone_device_init(tz); 1731 - __thermal_zone_device_update(tz, THERMAL_EVENT_UNSPECIFIED); 1731 + thermal_governor_update_tz(tz, THERMAL_TZ_RESUME); 1732 + __thermal_zone_device_update(tz, THERMAL_TZ_RESUME); 1732 1733 1733 1734 complete(&tz->resume); 1734 1735 tz->resuming = false;
+6 -4
drivers/thunderbolt/debugfs.c
··· 323 323 324 324 if (mutex_lock_interruptible(&tb->lock)) { 325 325 ret = -ERESTARTSYS; 326 - goto out_rpm_put; 326 + goto out; 327 327 } 328 328 329 329 ret = sb_regs_write(port, port_sb_regs, ARRAY_SIZE(port_sb_regs), 330 330 USB4_SB_TARGET_ROUTER, 0, buf, count, ppos); 331 331 332 332 mutex_unlock(&tb->lock); 333 - out_rpm_put: 333 + out: 334 334 pm_runtime_mark_last_busy(&sw->dev); 335 335 pm_runtime_put_autosuspend(&sw->dev); 336 + free_page((unsigned long)buf); 336 337 337 338 return ret < 0 ? ret : count; 338 339 } ··· 356 355 357 356 if (mutex_lock_interruptible(&tb->lock)) { 358 357 ret = -ERESTARTSYS; 359 - goto out_rpm_put; 358 + goto out; 360 359 } 361 360 362 361 ret = sb_regs_write(rt->port, retimer_sb_regs, ARRAY_SIZE(retimer_sb_regs), 363 362 USB4_SB_TARGET_RETIMER, rt->index, buf, count, ppos); 364 363 365 364 mutex_unlock(&tb->lock); 366 - out_rpm_put: 365 + out: 367 366 pm_runtime_mark_last_busy(&rt->dev); 368 367 pm_runtime_put_autosuspend(&rt->dev); 368 + free_page((unsigned long)buf); 369 369 370 370 return ret < 0 ? ret : count; 371 371 }
+1
drivers/thunderbolt/switch.c
··· 3392 3392 tb_switch_remove(port->remote->sw); 3393 3393 port->remote = NULL; 3394 3394 } else if (port->xdomain) { 3395 + port->xdomain->is_unplugged = true; 3395 3396 tb_xdomain_remove(port->xdomain); 3396 3397 port->xdomain = NULL; 3397 3398 }
+5 -28
drivers/tty/serial/8250/8250_omap.c
··· 27 27 #include <linux/pm_wakeirq.h> 28 28 #include <linux/dma-mapping.h> 29 29 #include <linux/sys_soc.h> 30 - #include <linux/pm_domain.h> 31 30 32 31 #include "8250.h" 33 32 ··· 117 118 /* Timeout low and High */ 118 119 #define UART_OMAP_TO_L 0x26 119 120 #define UART_OMAP_TO_H 0x27 120 - 121 - /* 122 - * Copy of the genpd flags for the console. 123 - * Only used if console suspend is disabled 124 - */ 125 - static unsigned int genpd_flags_console; 126 121 127 122 struct omap8250_priv { 128 123 void __iomem *membase; ··· 1650 1657 { 1651 1658 struct omap8250_priv *priv = dev_get_drvdata(dev); 1652 1659 struct uart_8250_port *up = serial8250_get_port(priv->line); 1653 - struct generic_pm_domain *genpd = pd_to_genpd(dev->pm_domain); 1654 1660 int err = 0; 1655 1661 1656 1662 serial8250_suspend_port(priv->line); ··· 1660 1668 if (!device_may_wakeup(dev)) 1661 1669 priv->wer = 0; 1662 1670 serial_out(up, UART_OMAP_WER, priv->wer); 1663 - if (uart_console(&up->port)) { 1664 - if (console_suspend_enabled) 1665 - err = pm_runtime_force_suspend(dev); 1666 - else { 1667 - /* 1668 - * The pd shall not be powered-off (no console suspend). 1669 - * Make copy of genpd flags before to set it always on. 1670 - * The original value is restored during the resume. 1671 - */ 1672 - genpd_flags_console = genpd->flags; 1673 - genpd->flags |= GENPD_FLAG_ALWAYS_ON; 1674 - } 1675 - } 1671 + if (uart_console(&up->port) && console_suspend_enabled) 1672 + err = pm_runtime_force_suspend(dev); 1676 1673 flush_work(&priv->qos_work); 1677 1674 1678 1675 return err; ··· 1671 1690 { 1672 1691 struct omap8250_priv *priv = dev_get_drvdata(dev); 1673 1692 struct uart_8250_port *up = serial8250_get_port(priv->line); 1674 - struct generic_pm_domain *genpd = pd_to_genpd(dev->pm_domain); 1675 1693 int err; 1676 1694 1677 1695 if (uart_console(&up->port) && console_suspend_enabled) { 1678 - if (console_suspend_enabled) { 1679 - err = pm_runtime_force_resume(dev); 1680 - if (err) 1681 - return err; 1682 - } else 1683 - genpd->flags = genpd_flags_console; 1696 + err = pm_runtime_force_resume(dev); 1697 + if (err) 1698 + return err; 1684 1699 } 1685 1700 1686 1701 serial8250_resume_port(priv->line);
+1 -1
drivers/tty/serial/atmel_serial.c
··· 2514 2514 }; 2515 2515 2516 2516 static const struct serial_rs485 atmel_rs485_supported = { 2517 - .flags = SER_RS485_ENABLED | SER_RS485_RTS_AFTER_SEND | SER_RS485_RX_DURING_TX, 2517 + .flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND | SER_RS485_RX_DURING_TX, 2518 2518 .delay_rts_before_send = 1, 2519 2519 .delay_rts_after_send = 1, 2520 2520 };
+1
drivers/tty/serial/fsl_lpuart.c
··· 2923 2923 pm_runtime_set_autosuspend_delay(&pdev->dev, UART_AUTOSUSPEND_TIMEOUT); 2924 2924 pm_runtime_set_active(&pdev->dev); 2925 2925 pm_runtime_enable(&pdev->dev); 2926 + pm_runtime_mark_last_busy(&pdev->dev); 2926 2927 2927 2928 ret = lpuart_global_reset(sport); 2928 2929 if (ret)
+2 -10
drivers/tty/vt/conmakehash.c
··· 11 11 * Copyright (C) 1995-1997 H. Peter Anvin 12 12 */ 13 13 14 - #include <libgen.h> 15 - #include <linux/limits.h> 16 14 #include <stdio.h> 17 15 #include <stdlib.h> 18 16 #include <sysexits.h> ··· 77 79 { 78 80 FILE *ctbl; 79 81 const char *tblname; 80 - char base_tblname[PATH_MAX]; 81 82 char buffer[65536]; 82 83 int fontlen; 83 84 int i, nuni, nent; ··· 242 245 for ( i = 0 ; i < fontlen ; i++ ) 243 246 nuni += unicount[i]; 244 247 245 - strncpy(base_tblname, tblname, PATH_MAX); 246 - base_tblname[PATH_MAX - 1] = 0; 247 248 printf("\ 248 249 /*\n\ 249 - * Do not edit this file; it was automatically generated by\n\ 250 - *\n\ 251 - * conmakehash %s > [this file]\n\ 252 - *\n\ 250 + * Automatically generated file; Do not edit.\n\ 253 251 */\n\ 254 252 \n\ 255 253 #include <linux/types.h>\n\ 256 254 \n\ 257 255 u8 dfont_unicount[%d] = \n\ 258 - {\n\t", basename(base_tblname), fontlen); 256 + {\n\t", fontlen); 259 257 260 258 for ( i = 0 ; i < fontlen ; i++ ) 261 259 {
+1 -1
drivers/usb/host/xhci-mem.c
··· 1872 1872 1873 1873 cancel_delayed_work_sync(&xhci->cmd_timer); 1874 1874 1875 - for (i = 0; i < xhci->max_interrupters; i++) { 1875 + for (i = 0; xhci->interrupters && i < xhci->max_interrupters; i++) { 1876 1876 if (xhci->interrupters[i]) { 1877 1877 xhci_remove_interrupter(xhci, xhci->interrupters[i]); 1878 1878 xhci_free_interrupter(xhci, xhci->interrupters[i]);
+1
drivers/usb/host/xhci-ring.c
··· 2910 2910 process_isoc_td(xhci, ep, ep_ring, td, ep_trb, event); 2911 2911 else 2912 2912 process_bulk_intr_td(xhci, ep, ep_ring, td, ep_trb, event); 2913 + return 0; 2913 2914 2914 2915 check_endpoint_halted: 2915 2916 if (xhci_halted_host_endpoint(ep_ctx, trb_comp_code))
+5 -3
drivers/usb/host/xhci.c
··· 2837 2837 xhci->num_active_eps); 2838 2838 return -ENOMEM; 2839 2839 } 2840 - if ((xhci->quirks & XHCI_SW_BW_CHECKING) && 2840 + if ((xhci->quirks & XHCI_SW_BW_CHECKING) && !ctx_change && 2841 2841 xhci_reserve_bandwidth(xhci, virt_dev, command->in_ctx)) { 2842 2842 if ((xhci->quirks & XHCI_EP_LIMIT_QUIRK)) 2843 2843 xhci_free_host_resources(xhci, ctrl_ctx); ··· 4200 4200 mutex_unlock(&xhci->mutex); 4201 4201 ret = xhci_disable_slot(xhci, udev->slot_id); 4202 4202 xhci_free_virt_device(xhci, udev->slot_id); 4203 - if (!ret) 4204 - xhci_alloc_dev(hcd, udev); 4203 + if (!ret) { 4204 + if (xhci_alloc_dev(hcd, udev) == 1) 4205 + xhci_setup_addressable_virt_dev(xhci, udev); 4206 + } 4205 4207 kfree(command->completion); 4206 4208 kfree(command); 4207 4209 return -EPROTO;
+1
drivers/usb/misc/usb-ljca.c
··· 169 169 { "INTC1096" }, 170 170 { "INTC100B" }, 171 171 { "INTC10D1" }, 172 + { "INTC10B5" }, 172 173 {}, 173 174 }; 174 175
-1
drivers/usb/typec/tcpm/tcpm.c
··· 5655 5655 break; 5656 5656 case PORT_RESET: 5657 5657 tcpm_reset_port(port); 5658 - port->pd_events = 0; 5659 5658 if (port->self_powered) 5660 5659 tcpm_set_cc(port, TYPEC_CC_OPEN); 5661 5660 else
+1 -1
drivers/usb/typec/ucsi/ucsi.c
··· 137 137 if (ret) 138 138 return ret; 139 139 140 - return err; 140 + return err ?: UCSI_CCI_LENGTH(*cci); 141 141 } 142 142 143 143 static int ucsi_read_error(struct ucsi *ucsi, u8 connector_num)
+2 -1
fs/9p/vfs_addr.c
··· 75 75 76 76 /* if we just extended the file size, any portion not in 77 77 * cache won't be on server and is zeroes */ 78 - __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); 78 + if (subreq->rreq->origin != NETFS_DIO_READ) 79 + __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); 79 80 80 81 netfs_subreq_terminated(subreq, err ?: total, false); 81 82 }
+2 -1
fs/afs/file.c
··· 242 242 243 243 req->error = error; 244 244 if (subreq) { 245 - __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); 245 + if (subreq->rreq->origin != NETFS_DIO_READ) 246 + __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); 246 247 netfs_subreq_terminated(subreq, error ?: req->actual_len, false); 247 248 req->subreq = NULL; 248 249 } else if (req->done) {
+42 -35
fs/bcachefs/alloc_background.c
··· 196 196 return DIV_ROUND_UP(bytes, sizeof(u64)); 197 197 } 198 198 199 - int bch2_alloc_v1_invalid(struct bch_fs *c, struct bkey_s_c k, 200 - enum bch_validate_flags flags, 201 - struct printbuf *err) 199 + int bch2_alloc_v1_validate(struct bch_fs *c, struct bkey_s_c k, 200 + enum bch_validate_flags flags) 202 201 { 203 202 struct bkey_s_c_alloc a = bkey_s_c_to_alloc(k); 204 203 int ret = 0; 205 204 206 205 /* allow for unknown fields */ 207 - bkey_fsck_err_on(bkey_val_u64s(a.k) < bch_alloc_v1_val_u64s(a.v), c, err, 208 - alloc_v1_val_size_bad, 206 + bkey_fsck_err_on(bkey_val_u64s(a.k) < bch_alloc_v1_val_u64s(a.v), 207 + c, alloc_v1_val_size_bad, 209 208 "incorrect value size (%zu < %u)", 210 209 bkey_val_u64s(a.k), bch_alloc_v1_val_u64s(a.v)); 211 210 fsck_err: 212 211 return ret; 213 212 } 214 213 215 - int bch2_alloc_v2_invalid(struct bch_fs *c, struct bkey_s_c k, 216 - enum bch_validate_flags flags, 217 - struct printbuf *err) 214 + int bch2_alloc_v2_validate(struct bch_fs *c, struct bkey_s_c k, 215 + enum bch_validate_flags flags) 218 216 { 219 217 struct bkey_alloc_unpacked u; 220 218 int ret = 0; 221 219 222 - bkey_fsck_err_on(bch2_alloc_unpack_v2(&u, k), c, err, 223 - alloc_v2_unpack_error, 220 + bkey_fsck_err_on(bch2_alloc_unpack_v2(&u, k), 221 + c, alloc_v2_unpack_error, 224 222 "unpack error"); 225 223 fsck_err: 226 224 return ret; 227 225 } 228 226 229 - int bch2_alloc_v3_invalid(struct bch_fs *c, struct bkey_s_c k, 230 - enum bch_validate_flags flags, 231 - struct printbuf *err) 227 + int bch2_alloc_v3_validate(struct bch_fs *c, struct bkey_s_c k, 228 + enum bch_validate_flags flags) 232 229 { 233 230 struct bkey_alloc_unpacked u; 234 231 int ret = 0; 235 232 236 - bkey_fsck_err_on(bch2_alloc_unpack_v3(&u, k), c, err, 237 - alloc_v2_unpack_error, 233 + bkey_fsck_err_on(bch2_alloc_unpack_v3(&u, k), 234 + c, alloc_v2_unpack_error, 238 235 "unpack error"); 239 236 fsck_err: 240 237 return ret; 241 238 } 242 239 243 - int bch2_alloc_v4_invalid(struct bch_fs *c, struct bkey_s_c k, 244 - enum bch_validate_flags flags, struct printbuf *err) 240 + int bch2_alloc_v4_validate(struct bch_fs *c, struct bkey_s_c k, 241 + enum bch_validate_flags flags) 245 242 { 246 243 struct bkey_s_c_alloc_v4 a = bkey_s_c_to_alloc_v4(k); 247 244 int ret = 0; 248 245 249 - bkey_fsck_err_on(alloc_v4_u64s_noerror(a.v) > bkey_val_u64s(k.k), c, err, 250 - alloc_v4_val_size_bad, 246 + bkey_fsck_err_on(alloc_v4_u64s_noerror(a.v) > bkey_val_u64s(k.k), 247 + c, alloc_v4_val_size_bad, 251 248 "bad val size (%u > %zu)", 252 249 alloc_v4_u64s_noerror(a.v), bkey_val_u64s(k.k)); 253 250 254 251 bkey_fsck_err_on(!BCH_ALLOC_V4_BACKPOINTERS_START(a.v) && 255 - BCH_ALLOC_V4_NR_BACKPOINTERS(a.v), c, err, 256 - alloc_v4_backpointers_start_bad, 252 + BCH_ALLOC_V4_NR_BACKPOINTERS(a.v), 253 + c, alloc_v4_backpointers_start_bad, 257 254 "invalid backpointers_start"); 258 255 259 - bkey_fsck_err_on(alloc_data_type(*a.v, a.v->data_type) != a.v->data_type, c, err, 260 - alloc_key_data_type_bad, 256 + bkey_fsck_err_on(alloc_data_type(*a.v, a.v->data_type) != a.v->data_type, 257 + c, alloc_key_data_type_bad, 261 258 "invalid data type (got %u should be %u)", 262 259 a.v->data_type, alloc_data_type(*a.v, a.v->data_type)); 263 260 264 261 for (unsigned i = 0; i < 2; i++) 265 262 bkey_fsck_err_on(a.v->io_time[i] > LRU_TIME_MAX, 266 - c, err, 267 - alloc_key_io_time_bad, 263 + c, alloc_key_io_time_bad, 268 264 "invalid io_time[%s]: %llu, max %llu", 269 265 i == READ ? 
"read" : "write", 270 266 a.v->io_time[i], LRU_TIME_MAX); ··· 278 282 a.v->dirty_sectors || 279 283 a.v->cached_sectors || 280 284 a.v->stripe, 281 - c, err, alloc_key_empty_but_have_data, 285 + c, alloc_key_empty_but_have_data, 282 286 "empty data type free but have data %u.%u.%u %u", 283 287 stripe_sectors, 284 288 a.v->dirty_sectors, ··· 292 296 case BCH_DATA_parity: 293 297 bkey_fsck_err_on(!a.v->dirty_sectors && 294 298 !stripe_sectors, 295 - c, err, alloc_key_dirty_sectors_0, 299 + c, alloc_key_dirty_sectors_0, 296 300 "data_type %s but dirty_sectors==0", 297 301 bch2_data_type_str(a.v->data_type)); 298 302 break; ··· 301 305 a.v->dirty_sectors || 302 306 stripe_sectors || 303 307 a.v->stripe, 304 - c, err, alloc_key_cached_inconsistency, 308 + c, alloc_key_cached_inconsistency, 305 309 "data type inconsistency"); 306 310 307 311 bkey_fsck_err_on(!a.v->io_time[READ] && 308 312 c->curr_recovery_pass > BCH_RECOVERY_PASS_check_alloc_to_lru_refs, 309 - c, err, alloc_key_cached_but_read_time_zero, 313 + c, alloc_key_cached_but_read_time_zero, 310 314 "cached bucket with read_time == 0"); 311 315 break; 312 316 case BCH_DATA_stripe: ··· 509 513 : 0; 510 514 } 511 515 512 - int bch2_bucket_gens_invalid(struct bch_fs *c, struct bkey_s_c k, 513 - enum bch_validate_flags flags, 514 - struct printbuf *err) 516 + int bch2_bucket_gens_validate(struct bch_fs *c, struct bkey_s_c k, 517 + enum bch_validate_flags flags) 515 518 { 516 519 int ret = 0; 517 520 518 - bkey_fsck_err_on(bkey_val_bytes(k.k) != sizeof(struct bch_bucket_gens), c, err, 519 - bucket_gens_val_size_bad, 521 + bkey_fsck_err_on(bkey_val_bytes(k.k) != sizeof(struct bch_bucket_gens), 522 + c, bucket_gens_val_size_bad, 520 523 "bad val size (%zu != %zu)", 521 524 bkey_val_bytes(k.k), sizeof(struct bch_bucket_gens)); 522 525 fsck_err: ··· 824 829 825 830 struct bch_alloc_v4 old_a_convert; 826 831 const struct bch_alloc_v4 *old_a = bch2_alloc_to_v4(old, &old_a_convert); 827 - struct bch_alloc_v4 *new_a = bkey_s_to_alloc_v4(new).v; 832 + 833 + struct bch_alloc_v4 *new_a; 834 + if (likely(new.k->type == KEY_TYPE_alloc_v4)) { 835 + new_a = bkey_s_to_alloc_v4(new).v; 836 + } else { 837 + BUG_ON(!(flags & BTREE_TRIGGER_gc)); 838 + 839 + struct bkey_i_alloc_v4 *new_ka = bch2_alloc_to_v4_mut_inlined(trans, new.s_c); 840 + ret = PTR_ERR_OR_ZERO(new_ka); 841 + if (unlikely(ret)) 842 + goto err; 843 + new_a = &new_ka->v; 844 + } 828 845 829 846 if (flags & BTREE_TRIGGER_transactional) { 830 847 alloc_data_type_set(new_a, new_a->data_type);
+14 -16
fs/bcachefs/alloc_background.h
··· 150 150 151 151 static inline u64 alloc_lru_idx_read(struct bch_alloc_v4 a) 152 152 { 153 - return a.data_type == BCH_DATA_cached ? a.io_time[READ] : 0; 153 + return a.data_type == BCH_DATA_cached 154 + ? a.io_time[READ] & LRU_TIME_MAX 155 + : 0; 154 156 } 155 157 156 158 #define DATA_TYPES_MOVABLE \ ··· 242 240 243 241 int bch2_bucket_io_time_reset(struct btree_trans *, unsigned, size_t, int); 244 242 245 - int bch2_alloc_v1_invalid(struct bch_fs *, struct bkey_s_c, 246 - enum bch_validate_flags, struct printbuf *); 247 - int bch2_alloc_v2_invalid(struct bch_fs *, struct bkey_s_c, 248 - enum bch_validate_flags, struct printbuf *); 249 - int bch2_alloc_v3_invalid(struct bch_fs *, struct bkey_s_c, 250 - enum bch_validate_flags, struct printbuf *); 251 - int bch2_alloc_v4_invalid(struct bch_fs *, struct bkey_s_c, 252 - enum bch_validate_flags, struct printbuf *); 243 + int bch2_alloc_v1_validate(struct bch_fs *, struct bkey_s_c, enum bch_validate_flags); 244 + int bch2_alloc_v2_validate(struct bch_fs *, struct bkey_s_c, enum bch_validate_flags); 245 + int bch2_alloc_v3_validate(struct bch_fs *, struct bkey_s_c, enum bch_validate_flags); 246 + int bch2_alloc_v4_validate(struct bch_fs *, struct bkey_s_c, enum bch_validate_flags); 253 247 void bch2_alloc_v4_swab(struct bkey_s); 254 248 void bch2_alloc_to_text(struct printbuf *, struct bch_fs *, struct bkey_s_c); 255 249 256 250 #define bch2_bkey_ops_alloc ((struct bkey_ops) { \ 257 - .key_invalid = bch2_alloc_v1_invalid, \ 251 + .key_validate = bch2_alloc_v1_validate, \ 258 252 .val_to_text = bch2_alloc_to_text, \ 259 253 .trigger = bch2_trigger_alloc, \ 260 254 .min_val_size = 8, \ 261 255 }) 262 256 263 257 #define bch2_bkey_ops_alloc_v2 ((struct bkey_ops) { \ 264 - .key_invalid = bch2_alloc_v2_invalid, \ 258 + .key_validate = bch2_alloc_v2_validate, \ 265 259 .val_to_text = bch2_alloc_to_text, \ 266 260 .trigger = bch2_trigger_alloc, \ 267 261 .min_val_size = 8, \ 268 262 }) 269 263 270 264 #define bch2_bkey_ops_alloc_v3 ((struct bkey_ops) { \ 271 - .key_invalid = bch2_alloc_v3_invalid, \ 265 + .key_validate = bch2_alloc_v3_validate, \ 272 266 .val_to_text = bch2_alloc_to_text, \ 273 267 .trigger = bch2_trigger_alloc, \ 274 268 .min_val_size = 16, \ 275 269 }) 276 270 277 271 #define bch2_bkey_ops_alloc_v4 ((struct bkey_ops) { \ 278 - .key_invalid = bch2_alloc_v4_invalid, \ 272 + .key_validate = bch2_alloc_v4_validate, \ 279 273 .val_to_text = bch2_alloc_to_text, \ 280 274 .swab = bch2_alloc_v4_swab, \ 281 275 .trigger = bch2_trigger_alloc, \ 282 276 .min_val_size = 48, \ 283 277 }) 284 278 285 - int bch2_bucket_gens_invalid(struct bch_fs *, struct bkey_s_c, 286 - enum bch_validate_flags, struct printbuf *); 279 + int bch2_bucket_gens_validate(struct bch_fs *, struct bkey_s_c, 280 + enum bch_validate_flags); 287 281 void bch2_bucket_gens_to_text(struct printbuf *, struct bch_fs *, struct bkey_s_c); 288 282 289 283 #define bch2_bkey_ops_bucket_gens ((struct bkey_ops) { \ 290 - .key_invalid = bch2_bucket_gens_invalid, \ 284 + .key_validate = bch2_bucket_gens_validate, \ 291 285 .val_to_text = bch2_bucket_gens_to_text, \ 292 286 }) 293 287
+8 -15
fs/bcachefs/backpointers.c
··· 47 47 return false; 48 48 } 49 49 50 - int bch2_backpointer_invalid(struct bch_fs *c, struct bkey_s_c k, 51 - enum bch_validate_flags flags, 52 - struct printbuf *err) 50 + int bch2_backpointer_validate(struct bch_fs *c, struct bkey_s_c k, 51 + enum bch_validate_flags flags) 53 52 { 54 53 struct bkey_s_c_backpointer bp = bkey_s_c_to_backpointer(k); 55 54 ··· 67 68 68 69 bkey_fsck_err_on((bp.v->bucket_offset >> MAX_EXTENT_COMPRESS_RATIO_SHIFT) >= ca->mi.bucket_size || 69 70 !bpos_eq(bp.k->p, bp_pos), 70 - c, err, 71 - backpointer_bucket_offset_wrong, 71 + c, backpointer_bucket_offset_wrong, 72 72 "backpointer bucket_offset wrong"); 73 73 fsck_err: 74 74 return ret; ··· 761 763 btree < BTREE_ID_NR && !ret; 762 764 btree++) { 763 765 unsigned depth = (BIT_ULL(btree) & btree_leaf_mask) ? 0 : 1; 764 - struct btree_iter iter; 765 - struct btree *b; 766 766 767 767 if (!(BIT_ULL(btree) & btree_leaf_mask) && 768 768 !(BIT_ULL(btree) & btree_interior_mask)) 769 769 continue; 770 770 771 - bch2_trans_begin(trans); 772 - 773 - __for_each_btree_node(trans, iter, btree, 771 + ret = __for_each_btree_node(trans, iter, btree, 774 772 btree == start.btree ? start.pos : POS_MIN, 775 - 0, depth, BTREE_ITER_prefetch, b, ret) { 773 + 0, depth, BTREE_ITER_prefetch, b, ({ 776 774 mem_may_pin -= btree_buf_bytes(b); 777 775 if (mem_may_pin <= 0) { 778 776 c->btree_cache.pinned_nodes_end = *end = 779 777 BBPOS(btree, b->key.k.p); 780 - bch2_trans_iter_exit(trans, &iter); 781 - return 0; 778 + break; 782 779 } 783 - } 784 - bch2_trans_iter_exit(trans, &iter); 780 + 0; 781 + })); 785 782 } 786 783 787 784 return ret;
+2 -3
fs/bcachefs/backpointers.h
··· 18 18 ((x & 0xff00000000ULL) >> 32)); 19 19 } 20 20 21 - int bch2_backpointer_invalid(struct bch_fs *, struct bkey_s_c k, 22 - enum bch_validate_flags, struct printbuf *); 21 + int bch2_backpointer_validate(struct bch_fs *, struct bkey_s_c k, enum bch_validate_flags); 23 22 void bch2_backpointer_to_text(struct printbuf *, const struct bch_backpointer *); 24 23 void bch2_backpointer_k_to_text(struct printbuf *, struct bch_fs *, struct bkey_s_c); 25 24 void bch2_backpointer_swab(struct bkey_s); 26 25 27 26 #define bch2_bkey_ops_backpointer ((struct bkey_ops) { \ 28 - .key_invalid = bch2_backpointer_invalid, \ 27 + .key_validate = bch2_backpointer_validate, \ 29 28 .val_to_text = bch2_backpointer_k_to_text, \ 30 29 .swab = bch2_backpointer_swab, \ 31 30 .min_val_size = 32, \
+1
fs/bcachefs/bcachefs.h
··· 447 447 x(blocked_journal_low_on_space) \ 448 448 x(blocked_journal_low_on_pin) \ 449 449 x(blocked_journal_max_in_flight) \ 450 + x(blocked_key_cache_flush) \ 450 451 x(blocked_allocate) \ 451 452 x(blocked_allocate_open_bucket) \ 452 453 x(blocked_write_buffer_full) \
+2 -1
fs/bcachefs/bcachefs_format.h
··· 676 676 x(mi_btree_bitmap, BCH_VERSION(1, 7)) \ 677 677 x(bucket_stripe_sectors, BCH_VERSION(1, 8)) \ 678 678 x(disk_accounting_v2, BCH_VERSION(1, 9)) \ 679 - x(disk_accounting_v3, BCH_VERSION(1, 10)) 679 + x(disk_accounting_v3, BCH_VERSION(1, 10)) \ 680 + x(disk_accounting_inum, BCH_VERSION(1, 11)) 680 681 681 682 enum bcachefs_metadata_version { 682 683 bcachefs_metadata_version_min = 9,
+4 -3
fs/bcachefs/bkey.h
··· 10 10 #include "vstructs.h" 11 11 12 12 enum bch_validate_flags { 13 - BCH_VALIDATE_write = (1U << 0), 14 - BCH_VALIDATE_commit = (1U << 1), 15 - BCH_VALIDATE_journal = (1U << 2), 13 + BCH_VALIDATE_write = BIT(0), 14 + BCH_VALIDATE_commit = BIT(1), 15 + BCH_VALIDATE_journal = BIT(2), 16 + BCH_VALIDATE_silent = BIT(3), 16 17 }; 17 18 18 19 #if 0
+53 -56
fs/bcachefs/bkey_methods.c
··· 27 27 NULL 28 28 }; 29 29 30 - static int deleted_key_invalid(struct bch_fs *c, struct bkey_s_c k, 31 - enum bch_validate_flags flags, struct printbuf *err) 30 + static int deleted_key_validate(struct bch_fs *c, struct bkey_s_c k, 31 + enum bch_validate_flags flags) 32 32 { 33 33 return 0; 34 34 } 35 35 36 36 #define bch2_bkey_ops_deleted ((struct bkey_ops) { \ 37 - .key_invalid = deleted_key_invalid, \ 37 + .key_validate = deleted_key_validate, \ 38 38 }) 39 39 40 40 #define bch2_bkey_ops_whiteout ((struct bkey_ops) { \ 41 - .key_invalid = deleted_key_invalid, \ 41 + .key_validate = deleted_key_validate, \ 42 42 }) 43 43 44 - static int empty_val_key_invalid(struct bch_fs *c, struct bkey_s_c k, 45 - enum bch_validate_flags flags, struct printbuf *err) 44 + static int empty_val_key_validate(struct bch_fs *c, struct bkey_s_c k, 45 + enum bch_validate_flags flags) 46 46 { 47 47 int ret = 0; 48 48 49 - bkey_fsck_err_on(bkey_val_bytes(k.k), c, err, 50 - bkey_val_size_nonzero, 49 + bkey_fsck_err_on(bkey_val_bytes(k.k), 50 + c, bkey_val_size_nonzero, 51 51 "incorrect value size (%zu != 0)", 52 52 bkey_val_bytes(k.k)); 53 53 fsck_err: ··· 55 55 } 56 56 57 57 #define bch2_bkey_ops_error ((struct bkey_ops) { \ 58 - .key_invalid = empty_val_key_invalid, \ 58 + .key_validate = empty_val_key_validate, \ 59 59 }) 60 60 61 - static int key_type_cookie_invalid(struct bch_fs *c, struct bkey_s_c k, 62 - enum bch_validate_flags flags, struct printbuf *err) 61 + static int key_type_cookie_validate(struct bch_fs *c, struct bkey_s_c k, 62 + enum bch_validate_flags flags) 63 63 { 64 64 return 0; 65 65 } ··· 73 73 } 74 74 75 75 #define bch2_bkey_ops_cookie ((struct bkey_ops) { \ 76 - .key_invalid = key_type_cookie_invalid, \ 76 + .key_validate = key_type_cookie_validate, \ 77 77 .val_to_text = key_type_cookie_to_text, \ 78 78 .min_val_size = 8, \ 79 79 }) 80 80 81 81 #define bch2_bkey_ops_hash_whiteout ((struct bkey_ops) {\ 82 - .key_invalid = empty_val_key_invalid, \ 82 + .key_validate = empty_val_key_validate, \ 83 83 }) 84 84 85 - static int key_type_inline_data_invalid(struct bch_fs *c, struct bkey_s_c k, 86 - enum bch_validate_flags flags, struct printbuf *err) 85 + static int key_type_inline_data_validate(struct bch_fs *c, struct bkey_s_c k, 86 + enum bch_validate_flags flags) 87 87 { 88 88 return 0; 89 89 } ··· 98 98 datalen, min(datalen, 32U), d.v->data); 99 99 } 100 100 101 - #define bch2_bkey_ops_inline_data ((struct bkey_ops) { \ 102 - .key_invalid = key_type_inline_data_invalid, \ 103 - .val_to_text = key_type_inline_data_to_text, \ 101 + #define bch2_bkey_ops_inline_data ((struct bkey_ops) { \ 102 + .key_validate = key_type_inline_data_validate, \ 103 + .val_to_text = key_type_inline_data_to_text, \ 104 104 }) 105 105 106 106 static bool key_type_set_merge(struct bch_fs *c, struct bkey_s l, struct bkey_s_c r) ··· 110 110 } 111 111 112 112 #define bch2_bkey_ops_set ((struct bkey_ops) { \ 113 - .key_invalid = empty_val_key_invalid, \ 113 + .key_validate = empty_val_key_validate, \ 114 114 .key_merge = key_type_set_merge, \ 115 115 }) 116 116 ··· 123 123 const struct bkey_ops bch2_bkey_null_ops = { 124 124 }; 125 125 126 - int bch2_bkey_val_invalid(struct bch_fs *c, struct bkey_s_c k, 127 - enum bch_validate_flags flags, 128 - struct printbuf *err) 126 + int bch2_bkey_val_validate(struct bch_fs *c, struct bkey_s_c k, 127 + enum bch_validate_flags flags) 129 128 { 130 129 if (test_bit(BCH_FS_no_invalid_checks, &c->flags)) 131 130 return 0; ··· 132 133 const struct bkey_ops *ops = 
bch2_bkey_type_ops(k.k->type); 133 134 int ret = 0; 134 135 135 - bkey_fsck_err_on(bkey_val_bytes(k.k) < ops->min_val_size, c, err, 136 - bkey_val_size_too_small, 136 + bkey_fsck_err_on(bkey_val_bytes(k.k) < ops->min_val_size, 137 + c, bkey_val_size_too_small, 137 138 "bad val size (%zu < %u)", 138 139 bkey_val_bytes(k.k), ops->min_val_size); 139 140 140 - if (!ops->key_invalid) 141 + if (!ops->key_validate) 141 142 return 0; 142 143 143 - ret = ops->key_invalid(c, k, flags, err); 144 + ret = ops->key_validate(c, k, flags); 144 145 fsck_err: 145 146 return ret; 146 147 } ··· 160 161 return type == BKEY_TYPE_btree ? "internal btree node" : bch2_btree_id_str(type - 1); 161 162 } 162 163 163 - int __bch2_bkey_invalid(struct bch_fs *c, struct bkey_s_c k, 164 - enum btree_node_type type, 165 - enum bch_validate_flags flags, 166 - struct printbuf *err) 164 + int __bch2_bkey_validate(struct bch_fs *c, struct bkey_s_c k, 165 + enum btree_node_type type, 166 + enum bch_validate_flags flags) 167 167 { 168 168 if (test_bit(BCH_FS_no_invalid_checks, &c->flags)) 169 169 return 0; 170 170 171 171 int ret = 0; 172 172 173 - bkey_fsck_err_on(k.k->u64s < BKEY_U64s, c, err, 174 - bkey_u64s_too_small, 173 + bkey_fsck_err_on(k.k->u64s < BKEY_U64s, 174 + c, bkey_u64s_too_small, 175 175 "u64s too small (%u < %zu)", k.k->u64s, BKEY_U64s); 176 176 177 177 if (type >= BKEY_TYPE_NR) ··· 178 180 179 181 bkey_fsck_err_on(k.k->type < KEY_TYPE_MAX && 180 182 (type == BKEY_TYPE_btree || (flags & BCH_VALIDATE_commit)) && 181 - !(bch2_key_types_allowed[type] & BIT_ULL(k.k->type)), c, err, 182 - bkey_invalid_type_for_btree, 183 + !(bch2_key_types_allowed[type] & BIT_ULL(k.k->type)), 184 + c, bkey_invalid_type_for_btree, 183 185 "invalid key type for btree %s (%s)", 184 186 bch2_btree_node_type_str(type), 185 187 k.k->type < KEY_TYPE_MAX ··· 187 189 : "(unknown)"); 188 190 189 191 if (btree_node_type_is_extents(type) && !bkey_whiteout(k.k)) { 190 - bkey_fsck_err_on(k.k->size == 0, c, err, 191 - bkey_extent_size_zero, 192 + bkey_fsck_err_on(k.k->size == 0, 193 + c, bkey_extent_size_zero, 192 194 "size == 0"); 193 195 194 - bkey_fsck_err_on(k.k->size > k.k->p.offset, c, err, 195 - bkey_extent_size_greater_than_offset, 196 + bkey_fsck_err_on(k.k->size > k.k->p.offset, 197 + c, bkey_extent_size_greater_than_offset, 196 198 "size greater than offset (%u > %llu)", 197 199 k.k->size, k.k->p.offset); 198 200 } else { 199 - bkey_fsck_err_on(k.k->size, c, err, 200 - bkey_size_nonzero, 201 + bkey_fsck_err_on(k.k->size, 202 + c, bkey_size_nonzero, 201 203 "size != 0"); 202 204 } 203 205 ··· 205 207 enum btree_id btree = type - 1; 206 208 207 209 if (btree_type_has_snapshots(btree)) { 208 - bkey_fsck_err_on(!k.k->p.snapshot, c, err, 209 - bkey_snapshot_zero, 210 + bkey_fsck_err_on(!k.k->p.snapshot, 211 + c, bkey_snapshot_zero, 210 212 "snapshot == 0"); 211 213 } else if (!btree_type_has_snapshot_field(btree)) { 212 - bkey_fsck_err_on(k.k->p.snapshot, c, err, 213 - bkey_snapshot_nonzero, 214 + bkey_fsck_err_on(k.k->p.snapshot, 215 + c, bkey_snapshot_nonzero, 214 216 "nonzero snapshot"); 215 217 } else { 216 218 /* ··· 219 221 */ 220 222 } 221 223 222 - bkey_fsck_err_on(bkey_eq(k.k->p, POS_MAX), c, err, 223 - bkey_at_pos_max, 224 + bkey_fsck_err_on(bkey_eq(k.k->p, POS_MAX), 225 + c, bkey_at_pos_max, 224 226 "key at POS_MAX"); 225 227 } 226 228 fsck_err: 227 229 return ret; 228 230 } 229 231 230 - int bch2_bkey_invalid(struct bch_fs *c, struct bkey_s_c k, 232 + int bch2_bkey_validate(struct bch_fs *c, struct bkey_s_c k, 231 233 enum 
btree_node_type type, 232 - enum bch_validate_flags flags, 233 - struct printbuf *err) 234 + enum bch_validate_flags flags) 234 235 { 235 - return __bch2_bkey_invalid(c, k, type, flags, err) ?: 236 - bch2_bkey_val_invalid(c, k, flags, err); 236 + return __bch2_bkey_validate(c, k, type, flags) ?: 237 + bch2_bkey_val_validate(c, k, flags); 237 238 } 238 239 239 240 int bch2_bkey_in_btree_node(struct bch_fs *c, struct btree *b, 240 - struct bkey_s_c k, struct printbuf *err) 241 + struct bkey_s_c k, enum bch_validate_flags flags) 241 242 { 242 243 int ret = 0; 243 244 244 - bkey_fsck_err_on(bpos_lt(k.k->p, b->data->min_key), c, err, 245 - bkey_before_start_of_btree_node, 245 + bkey_fsck_err_on(bpos_lt(k.k->p, b->data->min_key), 246 + c, bkey_before_start_of_btree_node, 246 247 "key before start of btree node"); 247 248 248 - bkey_fsck_err_on(bpos_gt(k.k->p, b->data->max_key), c, err, 249 - bkey_after_end_of_btree_node, 249 + bkey_fsck_err_on(bpos_gt(k.k->p, b->data->max_key), 250 + c, bkey_after_end_of_btree_node, 250 251 "key past end of btree node"); 251 252 fsck_err: 252 253 return ret;
+10 -11
fs/bcachefs/bkey_methods.h
··· 14 14 extern const struct bkey_ops bch2_bkey_null_ops; 15 15 16 16 /* 17 - * key_invalid: checks validity of @k, returns 0 if good or -EINVAL if bad. If 17 + * key_validate: checks validity of @k, returns 0 if good or -EINVAL if bad. If 18 18 * invalid, entire key will be deleted. 19 19 * 20 20 * When invalid, error string is returned via @err. @rw indicates whether key is 21 21 * being read or written; more aggressive checks can be enabled when rw == WRITE. 22 22 */ 23 23 struct bkey_ops { 24 - int (*key_invalid)(struct bch_fs *c, struct bkey_s_c k, 25 - enum bch_validate_flags flags, struct printbuf *err); 24 + int (*key_validate)(struct bch_fs *c, struct bkey_s_c k, 25 + enum bch_validate_flags flags); 26 26 void (*val_to_text)(struct printbuf *, struct bch_fs *, 27 27 struct bkey_s_c); 28 28 void (*swab)(struct bkey_s); ··· 48 48 : &bch2_bkey_null_ops; 49 49 } 50 50 51 - int bch2_bkey_val_invalid(struct bch_fs *, struct bkey_s_c, 52 - enum bch_validate_flags, struct printbuf *); 53 - int __bch2_bkey_invalid(struct bch_fs *, struct bkey_s_c, enum btree_node_type, 54 - enum bch_validate_flags, struct printbuf *); 55 - int bch2_bkey_invalid(struct bch_fs *, struct bkey_s_c, enum btree_node_type, 56 - enum bch_validate_flags, struct printbuf *); 57 - int bch2_bkey_in_btree_node(struct bch_fs *, struct btree *, 58 - struct bkey_s_c, struct printbuf *); 51 + int bch2_bkey_val_validate(struct bch_fs *, struct bkey_s_c, enum bch_validate_flags); 52 + int __bch2_bkey_validate(struct bch_fs *, struct bkey_s_c, enum btree_node_type, 53 + enum bch_validate_flags); 54 + int bch2_bkey_validate(struct bch_fs *, struct bkey_s_c, enum btree_node_type, 55 + enum bch_validate_flags); 56 + int bch2_bkey_in_btree_node(struct bch_fs *, struct btree *, struct bkey_s_c, 57 + enum bch_validate_flags); 59 58 60 59 void bch2_bpos_to_text(struct printbuf *, struct bpos); 61 60 void bch2_bkey_to_text(struct printbuf *, const struct bkey *);
+1 -4
fs/bcachefs/btree_gc.c
··· 741 741 742 742 static int bch2_mark_superblocks(struct bch_fs *c) 743 743 { 744 - mutex_lock(&c->sb_lock); 745 744 gc_pos_set(c, gc_phase(GC_PHASE_sb)); 746 745 747 - int ret = bch2_trans_mark_dev_sbs_flags(c, BTREE_TRIGGER_gc); 748 - mutex_unlock(&c->sb_lock); 749 - return ret; 746 + return bch2_trans_mark_dev_sbs_flags(c, BTREE_TRIGGER_gc); 750 747 } 751 748 752 749 static void bch2_gc_free(struct bch_fs *c)
+22 -47
fs/bcachefs/btree_io.c
··· 836 836 return ret; 837 837 } 838 838 839 - static int bset_key_invalid(struct bch_fs *c, struct btree *b, 840 - struct bkey_s_c k, 841 - bool updated_range, int rw, 842 - struct printbuf *err) 839 + static int bset_key_validate(struct bch_fs *c, struct btree *b, 840 + struct bkey_s_c k, 841 + bool updated_range, int rw) 843 842 { 844 - return __bch2_bkey_invalid(c, k, btree_node_type(b), READ, err) ?: 845 - (!updated_range ? bch2_bkey_in_btree_node(c, b, k, err) : 0) ?: 846 - (rw == WRITE ? bch2_bkey_val_invalid(c, k, READ, err) : 0); 843 + return __bch2_bkey_validate(c, k, btree_node_type(b), 0) ?: 844 + (!updated_range ? bch2_bkey_in_btree_node(c, b, k, 0) : 0) ?: 845 + (rw == WRITE ? bch2_bkey_val_validate(c, k, 0) : 0); 847 846 } 848 847 849 848 static bool bkey_packed_valid(struct bch_fs *c, struct btree *b, ··· 857 858 if (!bkeyp_u64s_valid(&b->format, k)) 858 859 return false; 859 860 860 - struct printbuf buf = PRINTBUF; 861 861 struct bkey tmp; 862 862 struct bkey_s u = __bkey_disassemble(b, k, &tmp); 863 - bool ret = __bch2_bkey_invalid(c, u.s_c, btree_node_type(b), READ, &buf); 864 - printbuf_exit(&buf); 865 - return ret; 863 + return !__bch2_bkey_validate(c, u.s_c, btree_node_type(b), BCH_VALIDATE_silent); 866 864 } 867 865 868 866 static int validate_bset_keys(struct bch_fs *c, struct btree *b, ··· 911 915 912 916 u = __bkey_disassemble(b, k, &tmp); 913 917 914 - printbuf_reset(&buf); 915 - if (bset_key_invalid(c, b, u.s_c, updated_range, write, &buf)) { 916 - printbuf_reset(&buf); 917 - bset_key_invalid(c, b, u.s_c, updated_range, write, &buf); 918 - prt_printf(&buf, "\n "); 919 - bch2_bkey_val_to_text(&buf, c, u.s_c); 920 - 921 - btree_err(-BCH_ERR_btree_node_read_err_fixable, 922 - c, NULL, b, i, k, 923 - btree_node_bad_bkey, 924 - "invalid bkey: %s", buf.buf); 918 + ret = bset_key_validate(c, b, u.s_c, updated_range, write); 919 + if (ret == -BCH_ERR_fsck_delete_bkey) 925 920 goto drop_this_key; 926 - } 921 + if (ret) 922 + goto fsck_err; 927 923 928 924 if (write) 929 925 bch2_bkey_compat(b->c.level, b->c.btree_id, version, ··· 1216 1228 struct bkey tmp; 1217 1229 struct bkey_s u = __bkey_disassemble(b, k, &tmp); 1218 1230 1219 - printbuf_reset(&buf); 1220 - 1221 - if (bch2_bkey_val_invalid(c, u.s_c, READ, &buf) || 1231 + ret = bch2_bkey_val_validate(c, u.s_c, READ); 1232 + if (ret == -BCH_ERR_fsck_delete_bkey || 1222 1233 (bch2_inject_invalid_keys && 1223 1234 !bversion_cmp(u.k->version, MAX_VERSION))) { 1224 - printbuf_reset(&buf); 1225 - 1226 - prt_printf(&buf, "invalid bkey: "); 1227 - bch2_bkey_val_invalid(c, u.s_c, READ, &buf); 1228 - prt_printf(&buf, "\n "); 1229 - bch2_bkey_val_to_text(&buf, c, u.s_c); 1230 - 1231 - btree_err(-BCH_ERR_btree_node_read_err_fixable, 1232 - c, NULL, b, i, k, 1233 - btree_node_bad_bkey, 1234 - "%s", buf.buf); 1235 - 1236 1235 btree_keys_account_key_drop(&b->nr, 0, k); 1237 1236 1238 1237 i->u64s = cpu_to_le16(le16_to_cpu(i->u64s) - k->u64s); ··· 1228 1253 set_btree_bset_end(b, b->set); 1229 1254 continue; 1230 1255 } 1256 + if (ret) 1257 + goto fsck_err; 1231 1258 1232 1259 if (u.k->type == KEY_TYPE_btree_ptr_v2) { 1233 1260 struct bkey_s_btree_ptr_v2 bp = bkey_s_to_btree_ptr_v2(u); ··· 1744 1767 1745 1768 set_btree_node_read_in_flight(b); 1746 1769 1770 + /* we can't pass the trans to read_done() for fsck errors, so it must be unlocked */ 1771 + bch2_trans_unlock(trans); 1747 1772 bch2_btree_node_read(trans, b, true); 1748 1773 1749 1774 if (btree_node_read_error(b)) { ··· 1931 1952 static int validate_bset_for_write(struct 
bch_fs *c, struct btree *b, 1932 1953 struct bset *i, unsigned sectors) 1933 1954 { 1934 - struct printbuf buf = PRINTBUF; 1935 1955 bool saw_error; 1936 - int ret; 1937 1956 1938 - ret = bch2_bkey_invalid(c, bkey_i_to_s_c(&b->key), 1939 - BKEY_TYPE_btree, WRITE, &buf); 1940 - 1941 - if (ret) 1942 - bch2_fs_inconsistent(c, "invalid btree node key before write: %s", buf.buf); 1943 - printbuf_exit(&buf); 1944 - if (ret) 1957 + int ret = bch2_bkey_validate(c, bkey_i_to_s_c(&b->key), 1958 + BKEY_TYPE_btree, WRITE); 1959 + if (ret) { 1960 + bch2_fs_inconsistent(c, "invalid btree node key before write"); 1945 1961 return ret; 1962 + } 1946 1963 1947 1964 ret = validate_bset_keys(c, b, i, WRITE, false, &saw_error) ?: 1948 1965 validate_bset(c, NULL, b, i, b->written, sectors, WRITE, false, &saw_error);
+1
fs/bcachefs/btree_iter.c
··· 1900 1900 goto out; 1901 1901 } 1902 1902 1903 + /* Only kept for -tools */ 1903 1904 struct btree *bch2_btree_iter_peek_node_and_restart(struct btree_iter *iter) 1904 1905 { 1905 1906 struct btree *b;
+27 -15
fs/bcachefs/btree_iter.h
··· 600 600 601 601 u32 bch2_trans_begin(struct btree_trans *); 602 602 603 - /* 604 - * XXX 605 - * this does not handle transaction restarts from bch2_btree_iter_next_node() 606 - * correctly 607 - */ 608 - #define __for_each_btree_node(_trans, _iter, _btree_id, _start, \ 609 - _locks_want, _depth, _flags, _b, _ret) \ 610 - for (bch2_trans_node_iter_init((_trans), &(_iter), (_btree_id), \ 611 - _start, _locks_want, _depth, _flags); \ 612 - (_b) = bch2_btree_iter_peek_node_and_restart(&(_iter)), \ 613 - !((_ret) = PTR_ERR_OR_ZERO(_b)) && (_b); \ 614 - (_b) = bch2_btree_iter_next_node(&(_iter))) 603 + #define __for_each_btree_node(_trans, _iter, _btree_id, _start, \ 604 + _locks_want, _depth, _flags, _b, _do) \ 605 + ({ \ 606 + bch2_trans_begin((_trans)); \ 607 + \ 608 + struct btree_iter _iter; \ 609 + bch2_trans_node_iter_init((_trans), &_iter, (_btree_id), \ 610 + _start, _locks_want, _depth, _flags); \ 611 + int _ret3 = 0; \ 612 + do { \ 613 + _ret3 = lockrestart_do((_trans), ({ \ 614 + struct btree *_b = bch2_btree_iter_peek_node(&_iter); \ 615 + if (!_b) \ 616 + break; \ 617 + \ 618 + PTR_ERR_OR_ZERO(_b) ?: (_do); \ 619 + })) ?: \ 620 + lockrestart_do((_trans), \ 621 + PTR_ERR_OR_ZERO(bch2_btree_iter_next_node(&_iter))); \ 622 + } while (!_ret3); \ 623 + \ 624 + bch2_trans_iter_exit((_trans), &(_iter)); \ 625 + _ret3; \ 626 + }) 615 627 616 628 #define for_each_btree_node(_trans, _iter, _btree_id, _start, \ 617 - _flags, _b, _ret) \ 618 - __for_each_btree_node(_trans, _iter, _btree_id, _start, \ 619 - 0, 0, _flags, _b, _ret) 629 + _flags, _b, _do) \ 630 + __for_each_btree_node(_trans, _iter, _btree_id, _start, \ 631 + 0, 0, _flags, _b, _do) 620 632 621 633 static inline struct bkey_s_c bch2_btree_iter_peek_prev_type(struct btree_iter *iter, 622 634 unsigned flags)
-5
fs/bcachefs/btree_key_cache.c
··· 497 497 498 498 path->l[1].b = NULL; 499 499 500 - if (bch2_btree_node_relock_notrace(trans, path, 0)) { 501 - path->uptodate = BTREE_ITER_UPTODATE; 502 - return 0; 503 - } 504 - 505 500 int ret; 506 501 do { 507 502 ret = btree_path_traverse_cached_fast(trans, path);
+16 -2
fs/bcachefs/btree_key_cache.h
··· 11 11 return max_t(ssize_t, 0, nr_dirty - max_dirty); 12 12 } 13 13 14 - static inline bool bch2_btree_key_cache_must_wait(struct bch_fs *c) 14 + static inline ssize_t __bch2_btree_key_cache_must_wait(struct bch_fs *c) 15 15 { 16 16 size_t nr_dirty = atomic_long_read(&c->btree_key_cache.nr_dirty); 17 17 size_t nr_keys = atomic_long_read(&c->btree_key_cache.nr_keys); 18 18 size_t max_dirty = 4096 + (nr_keys * 3) / 4; 19 19 20 - return nr_dirty > max_dirty; 20 + return nr_dirty - max_dirty; 21 + } 22 + 23 + static inline bool bch2_btree_key_cache_must_wait(struct bch_fs *c) 24 + { 25 + return __bch2_btree_key_cache_must_wait(c) > 0; 26 + } 27 + 28 + static inline bool bch2_btree_key_cache_wait_done(struct bch_fs *c) 29 + { 30 + size_t nr_dirty = atomic_long_read(&c->btree_key_cache.nr_dirty); 31 + size_t nr_keys = atomic_long_read(&c->btree_key_cache.nr_keys); 32 + size_t max_dirty = 2048 + (nr_keys * 5) / 8; 33 + 34 + return nr_dirty <= max_dirty; 21 35 } 22 36 23 37 int bch2_btree_key_cache_journal_flush(struct journal *,
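To make the hysteresis concrete (numbers invented for illustration): with nr_keys = 100000, bch2_btree_key_cache_must_wait() trips once nr_dirty exceeds 4096 + 75000 = 79096, while bch2_btree_key_cache_wait_done() only reports completion once nr_dirty has fallen to 2048 + 62500 = 64548 or below, so blocked transactions are not woken the instant the trigger level is crossed again.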
+1 -1
fs/bcachefs/btree_node_scan.c
··· 530 530 bch_verbose(c, "%s(): recovering %s", __func__, buf.buf); 531 531 printbuf_exit(&buf); 532 532 533 - BUG_ON(bch2_bkey_invalid(c, bkey_i_to_s_c(&tmp.k), BKEY_TYPE_btree, 0, NULL)); 533 + BUG_ON(bch2_bkey_validate(c, bkey_i_to_s_c(&tmp.k), BKEY_TYPE_btree, 0)); 534 534 535 535 ret = bch2_journal_key_insert(c, btree, level + 1, &tmp.k); 536 536 if (ret)
+21 -61
fs/bcachefs/btree_trans_commit.c
··· 712 712 a->k.version = journal_pos_to_bversion(&trans->journal_res, 713 713 (u64 *) entry - (u64 *) trans->journal_entries); 714 714 BUG_ON(bversion_zero(a->k.version)); 715 - ret = bch2_accounting_mem_mod_locked(trans, accounting_i_to_s_c(a), false); 715 + ret = bch2_accounting_mem_mod_locked(trans, accounting_i_to_s_c(a), false, false); 716 716 if (ret) 717 717 goto revert_fs_usage; 718 718 } ··· 798 798 struct bkey_s_accounting a = bkey_i_to_s_accounting(entry2->start); 799 799 800 800 bch2_accounting_neg(a); 801 - bch2_accounting_mem_mod_locked(trans, a.c, false); 801 + bch2_accounting_mem_mod_locked(trans, a.c, false, false); 802 802 bch2_accounting_neg(a); 803 803 } 804 804 percpu_up_read(&c->mark_lock); ··· 816 816 trans_for_each_update(trans, i) 817 817 if (i->k->k.type != KEY_TYPE_accounting) 818 818 bch2_journal_key_overwritten(trans->c, i->btree_id, i->level, i->k->k.p); 819 - } 820 - 821 - static noinline int bch2_trans_commit_bkey_invalid(struct btree_trans *trans, 822 - enum bch_validate_flags flags, 823 - struct btree_insert_entry *i, 824 - struct printbuf *err) 825 - { 826 - struct bch_fs *c = trans->c; 827 - 828 - printbuf_reset(err); 829 - prt_printf(err, "invalid bkey on insert from %s -> %ps\n", 830 - trans->fn, (void *) i->ip_allocated); 831 - printbuf_indent_add(err, 2); 832 - 833 - bch2_bkey_val_to_text(err, c, bkey_i_to_s_c(i->k)); 834 - prt_newline(err); 835 - 836 - bch2_bkey_invalid(c, bkey_i_to_s_c(i->k), i->bkey_type, flags, err); 837 - bch2_print_string_as_lines(KERN_ERR, err->buf); 838 - 839 - bch2_inconsistent_error(c); 840 - bch2_dump_trans_updates(trans); 841 - 842 - return -EINVAL; 843 - } 844 - 845 - static noinline int bch2_trans_commit_journal_entry_invalid(struct btree_trans *trans, 846 - struct jset_entry *i) 847 - { 848 - struct bch_fs *c = trans->c; 849 - struct printbuf buf = PRINTBUF; 850 - 851 - prt_printf(&buf, "invalid bkey on insert from %s\n", trans->fn); 852 - printbuf_indent_add(&buf, 2); 853 - 854 - bch2_journal_entry_to_text(&buf, c, i); 855 - prt_newline(&buf); 856 - 857 - bch2_print_string_as_lines(KERN_ERR, buf.buf); 858 - 859 - bch2_inconsistent_error(c); 860 - bch2_dump_trans_updates(trans); 861 - 862 - return -EINVAL; 863 819 } 864 820 865 821 static int bch2_trans_commit_journal_pin_flush(struct journal *j, ··· 883 927 static int journal_reclaim_wait_done(struct bch_fs *c) 884 928 { 885 929 int ret = bch2_journal_error(&c->journal) ?: 886 - !bch2_btree_key_cache_must_wait(c); 930 + bch2_btree_key_cache_wait_done(c); 887 931 888 932 if (!ret) 889 933 journal_reclaim_kick(&c->journal); ··· 929 973 bch2_trans_unlock(trans); 930 974 931 975 trace_and_count(c, trans_blocked_journal_reclaim, trans, trace_ip); 976 + track_event_change(&c->times[BCH_TIME_blocked_key_cache_flush], true); 932 977 933 978 wait_event_freezable(c->journal.reclaim_wait, 934 979 (ret = journal_reclaim_wait_done(c))); 980 + 981 + track_event_change(&c->times[BCH_TIME_blocked_key_cache_flush], false); 982 + 935 983 if (ret < 0) 936 984 break; 937 985 ··· 1020 1060 goto out_reset; 1021 1061 1022 1062 trans_for_each_update(trans, i) { 1023 - struct printbuf buf = PRINTBUF; 1024 1063 enum bch_validate_flags invalid_flags = 0; 1025 1064 1026 1065 if (!(flags & BCH_TRANS_COMMIT_no_journal_res)) 1027 1066 invalid_flags |= BCH_VALIDATE_write|BCH_VALIDATE_commit; 1028 1067 1029 - if (unlikely(bch2_bkey_invalid(c, bkey_i_to_s_c(i->k), 1030 - i->bkey_type, invalid_flags, &buf))) 1031 - ret = bch2_trans_commit_bkey_invalid(trans, invalid_flags, i, &buf); 1032 - 
btree_insert_entry_checks(trans, i); 1033 - printbuf_exit(&buf); 1034 - 1035 - if (ret) 1068 + ret = bch2_bkey_validate(c, bkey_i_to_s_c(i->k), 1069 + i->bkey_type, invalid_flags); 1070 + if (unlikely(ret)){ 1071 + bch2_trans_inconsistent(trans, "invalid bkey on insert from %s -> %ps\n", 1072 + trans->fn, (void *) i->ip_allocated); 1036 1073 return ret; 1074 + } 1075 + btree_insert_entry_checks(trans, i); 1037 1076 } 1038 1077 1039 1078 for (struct jset_entry *i = trans->journal_entries; ··· 1043 1084 if (!(flags & BCH_TRANS_COMMIT_no_journal_res)) 1044 1085 invalid_flags |= BCH_VALIDATE_write|BCH_VALIDATE_commit; 1045 1086 1046 - if (unlikely(bch2_journal_entry_validate(c, NULL, i, 1047 - bcachefs_metadata_version_current, 1048 - CPU_BIG_ENDIAN, invalid_flags))) 1049 - ret = bch2_trans_commit_journal_entry_invalid(trans, i); 1050 - 1051 - if (ret) 1087 + ret = bch2_journal_entry_validate(c, NULL, i, 1088 + bcachefs_metadata_version_current, 1089 + CPU_BIG_ENDIAN, invalid_flags); 1090 + if (unlikely(ret)) { 1091 + bch2_trans_inconsistent(trans, "invalid journal entry on insert from %s\n", 1092 + trans->fn); 1052 1093 return ret; 1094 + } 1053 1095 } 1054 1096 1055 1097 if (unlikely(!test_bit(BCH_FS_may_go_rw, &c->flags))) {
+4 -12
fs/bcachefs/btree_update_interior.c
··· 1364 1364 if (unlikely(!test_bit(JOURNAL_replay_done, &c->journal.flags))) 1365 1365 bch2_journal_key_overwritten(c, b->c.btree_id, b->c.level, insert->k.p); 1366 1366 1367 - if (bch2_bkey_invalid(c, bkey_i_to_s_c(insert), 1368 - btree_node_type(b), WRITE, &buf) ?: 1369 - bch2_bkey_in_btree_node(c, b, bkey_i_to_s_c(insert), &buf)) { 1370 - printbuf_reset(&buf); 1371 - prt_printf(&buf, "inserting invalid bkey\n "); 1372 - bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(insert)); 1373 - prt_printf(&buf, "\n "); 1374 - bch2_bkey_invalid(c, bkey_i_to_s_c(insert), 1375 - btree_node_type(b), WRITE, &buf); 1376 - bch2_bkey_in_btree_node(c, b, bkey_i_to_s_c(insert), &buf); 1377 - 1378 - bch2_fs_inconsistent(c, "%s", buf.buf); 1367 + if (bch2_bkey_validate(c, bkey_i_to_s_c(insert), 1368 + btree_node_type(b), BCH_VALIDATE_write) ?: 1369 + bch2_bkey_in_btree_node(c, b, bkey_i_to_s_c(insert), BCH_VALIDATE_write)) { 1370 + bch2_fs_inconsistent(c, "%s: inserting invalid bkey", __func__); 1379 1371 dump_stack(); 1380 1372 } 1381 1373
+24 -6
fs/bcachefs/buckets.c
··· 810 810 ret = bch2_disk_accounting_mod(trans, &acc_btree_key, &replicas_sectors, 1, gc); 811 811 if (ret) 812 812 return ret; 813 + } else { 814 + bool insert = !(flags & BTREE_TRIGGER_overwrite); 815 + struct disk_accounting_pos acc_inum_key = { 816 + .type = BCH_DISK_ACCOUNTING_inum, 817 + .inum.inum = k.k->p.inode, 818 + }; 819 + s64 v[3] = { 820 + insert ? 1 : -1, 821 + insert ? k.k->size : -((s64) k.k->size), 822 + replicas_sectors, 823 + }; 824 + ret = bch2_disk_accounting_mod(trans, &acc_inum_key, v, ARRAY_SIZE(v), gc); 825 + if (ret) 826 + return ret; 813 827 } 814 828 815 829 if (bch2_bkey_rebalance_opts(k)) { ··· 915 901 enum bch_data_type type, 916 902 unsigned sectors) 917 903 { 918 - struct bch_fs *c = trans->c; 919 904 struct btree_iter iter; 920 905 int ret = 0; 921 906 ··· 924 911 return PTR_ERR(a); 925 912 926 913 if (a->v.data_type && type && a->v.data_type != type) { 927 - bch2_fsck_err(c, FSCK_CAN_IGNORE|FSCK_NEED_FSCK, 914 + bch2_fsck_err(trans, FSCK_CAN_IGNORE|FSCK_NEED_FSCK, 928 915 bucket_metadata_type_mismatch, 929 916 "bucket %llu:%llu gen %u different types of data in same bucket: %s, %s\n" 930 917 "while marking %s", ··· 1045 1032 static int __bch2_trans_mark_dev_sb(struct btree_trans *trans, struct bch_dev *ca, 1046 1033 enum btree_iter_update_trigger_flags flags) 1047 1034 { 1048 - struct bch_sb_layout *layout = &ca->disk_sb.sb->layout; 1035 + struct bch_fs *c = trans->c; 1036 + 1037 + mutex_lock(&c->sb_lock); 1038 + struct bch_sb_layout layout = ca->disk_sb.sb->layout; 1039 + mutex_unlock(&c->sb_lock); 1040 + 1049 1041 u64 bucket = 0; 1050 1042 unsigned i, bucket_sectors = 0; 1051 1043 int ret; 1052 1044 1053 - for (i = 0; i < layout->nr_superblocks; i++) { 1054 - u64 offset = le64_to_cpu(layout->sb_offset[i]); 1045 + for (i = 0; i < layout.nr_superblocks; i++) { 1046 + u64 offset = le64_to_cpu(layout.sb_offset[i]); 1055 1047 1056 1048 if (offset == BCH_SB_SECTOR) { 1057 1049 ret = bch2_trans_mark_metadata_sectors(trans, ca, ··· 1067 1049 } 1068 1050 1069 1051 ret = bch2_trans_mark_metadata_sectors(trans, ca, offset, 1070 - offset + (1 << layout->sb_max_size_bits), 1052 + offset + (1 << layout.sb_max_size_bits), 1071 1053 BCH_DATA_sb, &bucket, &bucket_sectors, flags); 1072 1054 if (ret) 1073 1055 return ret;
+9 -2
fs/bcachefs/buckets_waiting_for_journal.c
··· 93 93 .dev_bucket = (u64) dev << 56 | bucket, 94 94 .journal_seq = journal_seq, 95 95 }; 96 - size_t i, size, new_bits, nr_elements = 1, nr_rehashes = 0; 96 + size_t i, size, new_bits, nr_elements = 1, nr_rehashes = 0, nr_rehashes_this_size = 0; 97 97 int ret = 0; 98 98 99 99 mutex_lock(&b->lock); ··· 106 106 for (i = 0; i < size; i++) 107 107 nr_elements += t->d[i].journal_seq > flushed_seq; 108 108 109 - new_bits = t->bits + (nr_elements * 3 > size); 109 + new_bits = ilog2(roundup_pow_of_two(nr_elements * 3)); 110 110 111 111 n = kvmalloc(sizeof(*n) + (sizeof(n->d[0]) << new_bits), GFP_KERNEL); 112 112 if (!n) { ··· 115 115 } 116 116 117 117 retry_rehash: 118 + if (nr_rehashes_this_size == 3) { 119 + new_bits++; 120 + nr_rehashes_this_size = 0; 121 + } 122 + 118 123 nr_rehashes++; 124 + nr_rehashes_this_size++; 125 + 119 126 bucket_table_init(n, new_bits); 120 127 121 128 tmp = new;
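As a worked example of the new sizing (figures invented): if 1000 entries survive the flushed_seq filter, nr_elements * 3 = 3000 rounds up to 4096, so ilog2() picks a 12-bit table of 4096 slots, giving at least 3x headroom; and if rehashing still fails three times at that size, new_bits is bumped and the table doubles before the next attempt.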
+2 -4
fs/bcachefs/data_update.c
··· 250 250 * it's been hard to reproduce, so this should give us some more 251 251 * information when it does occur: 252 252 */ 253 - struct printbuf err = PRINTBUF; 254 - int invalid = bch2_bkey_invalid(c, bkey_i_to_s_c(insert), __btree_node_type(0, m->btree_id), 0, &err); 255 - printbuf_exit(&err); 256 - 253 + int invalid = bch2_bkey_validate(c, bkey_i_to_s_c(insert), __btree_node_type(0, m->btree_id), 254 + BCH_VALIDATE_commit); 257 255 if (invalid) { 258 256 struct printbuf buf = PRINTBUF; 259 257
+9 -29
fs/bcachefs/debug.c
··· 397 397 size_t size, loff_t *ppos) 398 398 { 399 399 struct dump_iter *i = file->private_data; 400 - struct btree_trans *trans; 401 - struct btree_iter iter; 402 - struct btree *b; 403 - ssize_t ret; 404 400 405 401 i->ubuf = buf; 406 402 i->size = size; 407 403 i->ret = 0; 408 404 409 - ret = flush_buf(i); 405 + ssize_t ret = flush_buf(i); 410 406 if (ret) 411 407 return ret; 412 408 413 409 if (bpos_eq(SPOS_MAX, i->from)) 414 410 return i->ret; 415 411 416 - trans = bch2_trans_get(i->c); 417 - retry: 418 - bch2_trans_begin(trans); 412 + return bch2_trans_run(i->c, 413 + for_each_btree_node(trans, iter, i->id, i->from, 0, b, ({ 414 + bch2_btree_node_to_text(&i->buf, i->c, b); 415 + i->from = !bpos_eq(SPOS_MAX, b->key.k.p) 416 + ? bpos_successor(b->key.k.p) 417 + : b->key.k.p; 419 418 420 - for_each_btree_node(trans, iter, i->id, i->from, 0, b, ret) { 421 - bch2_btree_node_to_text(&i->buf, i->c, b); 422 - i->from = !bpos_eq(SPOS_MAX, b->key.k.p) 423 - ? bpos_successor(b->key.k.p) 424 - : b->key.k.p; 425 - 426 - ret = drop_locks_do(trans, flush_buf(i)); 427 - if (ret) 428 - break; 429 - } 430 - bch2_trans_iter_exit(trans, &iter); 431 - 432 - if (bch2_err_matches(ret, BCH_ERR_transaction_restart)) 433 - goto retry; 434 - 435 - bch2_trans_put(trans); 436 - 437 - if (!ret) 438 - ret = flush_buf(i); 439 - 440 - return ret ?: i->ret; 419 + drop_locks_do(trans, flush_buf(i)); 420 + }))) ?: i->ret; 441 421 } 442 422 443 423 static const struct file_operations btree_format_debug_ops = {
+16 -17
fs/bcachefs/dirent.c
··· 100 100 .is_visible = dirent_is_visible, 101 101 }; 102 102 103 - int bch2_dirent_invalid(struct bch_fs *c, struct bkey_s_c k, 104 - enum bch_validate_flags flags, 105 - struct printbuf *err) 103 + int bch2_dirent_validate(struct bch_fs *c, struct bkey_s_c k, 104 + enum bch_validate_flags flags) 106 105 { 107 106 struct bkey_s_c_dirent d = bkey_s_c_to_dirent(k); 108 107 struct qstr d_name = bch2_dirent_get_name(d); 109 108 int ret = 0; 110 109 111 - bkey_fsck_err_on(!d_name.len, c, err, 112 - dirent_empty_name, 110 + bkey_fsck_err_on(!d_name.len, 111 + c, dirent_empty_name, 113 112 "empty name"); 114 113 115 - bkey_fsck_err_on(bkey_val_u64s(k.k) > dirent_val_u64s(d_name.len), c, err, 116 - dirent_val_too_big, 114 + bkey_fsck_err_on(bkey_val_u64s(k.k) > dirent_val_u64s(d_name.len), 115 + c, dirent_val_too_big, 117 116 "value too big (%zu > %u)", 118 117 bkey_val_u64s(k.k), dirent_val_u64s(d_name.len)); 119 118 ··· 120 121 * Check new keys don't exceed the max length 121 122 * (older keys may be larger.) 122 123 */ 123 - bkey_fsck_err_on((flags & BCH_VALIDATE_commit) && d_name.len > BCH_NAME_MAX, c, err, 124 - dirent_name_too_long, 124 + bkey_fsck_err_on((flags & BCH_VALIDATE_commit) && d_name.len > BCH_NAME_MAX, 125 + c, dirent_name_too_long, 125 126 "dirent name too big (%u > %u)", 126 127 d_name.len, BCH_NAME_MAX); 127 128 128 - bkey_fsck_err_on(d_name.len != strnlen(d_name.name, d_name.len), c, err, 129 - dirent_name_embedded_nul, 129 + bkey_fsck_err_on(d_name.len != strnlen(d_name.name, d_name.len), 130 + c, dirent_name_embedded_nul, 130 131 "dirent has stray data after name's NUL"); 131 132 132 133 bkey_fsck_err_on((d_name.len == 1 && !memcmp(d_name.name, ".", 1)) || 133 - (d_name.len == 2 && !memcmp(d_name.name, "..", 2)), c, err, 134 - dirent_name_dot_or_dotdot, 134 + (d_name.len == 2 && !memcmp(d_name.name, "..", 2)), 135 + c, dirent_name_dot_or_dotdot, 135 136 "invalid name"); 136 137 137 - bkey_fsck_err_on(memchr(d_name.name, '/', d_name.len), c, err, 138 - dirent_name_has_slash, 138 + bkey_fsck_err_on(memchr(d_name.name, '/', d_name.len), 139 + c, dirent_name_has_slash, 139 140 "name with /"); 140 141 141 142 bkey_fsck_err_on(d.v->d_type != DT_SUBVOL && 142 - le64_to_cpu(d.v->d_inum) == d.k->p.inode, c, err, 143 - dirent_to_itself, 143 + le64_to_cpu(d.v->d_inum) == d.k->p.inode, 144 + c, dirent_to_itself, 144 145 "dirent points to own directory"); 145 146 fsck_err: 146 147 return ret;
+2 -3
fs/bcachefs/dirent.h
··· 7 7 enum bch_validate_flags; 8 8 extern const struct bch_hash_desc bch2_dirent_hash_desc; 9 9 10 - int bch2_dirent_invalid(struct bch_fs *, struct bkey_s_c, 11 - enum bch_validate_flags, struct printbuf *); 10 + int bch2_dirent_validate(struct bch_fs *, struct bkey_s_c, enum bch_validate_flags); 12 11 void bch2_dirent_to_text(struct printbuf *, struct bch_fs *, struct bkey_s_c); 13 12 14 13 #define bch2_bkey_ops_dirent ((struct bkey_ops) { \ 15 - .key_invalid = bch2_dirent_invalid, \ 14 + .key_validate = bch2_dirent_validate, \ 16 15 .val_to_text = bch2_dirent_to_text, \ 17 16 .min_val_size = 16, \ 18 17 })
+21 -13
fs/bcachefs/disk_accounting.c
··· 126 126 127 127 #define field_end(p, member) (((void *) (&p.member)) + sizeof(p.member)) 128 128 129 - int bch2_accounting_invalid(struct bch_fs *c, struct bkey_s_c k, 130 - enum bch_validate_flags flags, 131 - struct printbuf *err) 129 + int bch2_accounting_validate(struct bch_fs *c, struct bkey_s_c k, 130 + enum bch_validate_flags flags) 132 131 { 133 132 struct disk_accounting_pos acc_k; 134 133 bpos_to_disk_accounting_pos(&acc_k, k.k->p); ··· 143 144 break; 144 145 case BCH_DISK_ACCOUNTING_replicas: 145 146 bkey_fsck_err_on(!acc_k.replicas.nr_devs, 146 - c, err, accounting_key_replicas_nr_devs_0, 147 + c, accounting_key_replicas_nr_devs_0, 147 148 "accounting key replicas entry with nr_devs=0"); 148 149 149 150 bkey_fsck_err_on(acc_k.replicas.nr_required > acc_k.replicas.nr_devs || 150 151 (acc_k.replicas.nr_required > 1 && 151 152 acc_k.replicas.nr_required == acc_k.replicas.nr_devs), 152 - c, err, accounting_key_replicas_nr_required_bad, 153 + c, accounting_key_replicas_nr_required_bad, 153 154 "accounting key replicas entry with bad nr_required"); 154 155 155 156 for (unsigned i = 0; i + 1 < acc_k.replicas.nr_devs; i++) 156 - bkey_fsck_err_on(acc_k.replicas.devs[i] > acc_k.replicas.devs[i + 1], 157 - c, err, accounting_key_replicas_devs_unsorted, 157 + bkey_fsck_err_on(acc_k.replicas.devs[i] >= acc_k.replicas.devs[i + 1], 158 + c, accounting_key_replicas_devs_unsorted, 158 159 "accounting key replicas entry with unsorted devs"); 159 160 160 161 end = (void *) &acc_k.replicas + replicas_entry_bytes(&acc_k.replicas); ··· 177 178 } 178 179 179 180 bkey_fsck_err_on(!is_zero(end, (void *) (&acc_k + 1)), 180 - c, err, accounting_key_junk_at_end, 181 + c, accounting_key_junk_at_end, 181 182 "junk at end of accounting key"); 182 183 fsck_err: 183 184 return ret; ··· 527 528 struct disk_accounting_pos acc_k; 528 529 bpos_to_disk_accounting_pos(&acc_k, e->pos); 529 530 531 + if (acc_k.type >= BCH_DISK_ACCOUNTING_TYPE_NR) 532 + continue; 533 + 530 534 u64 src_v[BCH_ACCOUNTING_MAX_COUNTERS]; 531 535 u64 dst_v[BCH_ACCOUNTING_MAX_COUNTERS]; 532 536 ··· 566 564 struct { __BKEY_PADDED(k, BCH_ACCOUNTING_MAX_COUNTERS); } k_i; 567 565 568 566 accounting_key_init(&k_i.k, &acc_k, src_v, nr); 569 - bch2_accounting_mem_mod_locked(trans, bkey_i_to_s_c_accounting(&k_i.k), false); 567 + bch2_accounting_mem_mod_locked(trans, bkey_i_to_s_c_accounting(&k_i.k), false, false); 570 568 571 569 preempt_disable(); 572 570 struct bch_fs_usage_base *dst = this_cpu_ptr(c->usage); ··· 595 593 return 0; 596 594 597 595 percpu_down_read(&c->mark_lock); 598 - int ret = __bch2_accounting_mem_mod(c, bkey_s_c_to_accounting(k), false); 596 + int ret = bch2_accounting_mem_mod_locked(trans, bkey_s_c_to_accounting(k), false, true); 599 597 percpu_up_read(&c->mark_lock); 600 598 601 599 if (bch2_accounting_key_is_zero(bkey_s_c_to_accounting(k)) && ··· 762 760 struct bkey_s_c_accounting a = bkey_s_c_to_accounting(k); 763 761 unsigned nr = bch2_accounting_counters(k.k); 764 762 763 + struct disk_accounting_pos acc_k; 764 + bpos_to_disk_accounting_pos(&acc_k, k.k->p); 765 + 766 + if (acc_k.type >= BCH_DISK_ACCOUNTING_TYPE_NR) 767 + continue; 768 + 769 + if (acc_k.type == BCH_DISK_ACCOUNTING_inum) 770 + continue; 771 + 765 772 bch2_accounting_mem_read(c, k.k->p, v, nr); 766 773 767 774 if (memcmp(a.v->d, v, nr * sizeof(u64))) { ··· 785 774 printbuf_exit(&buf); 786 775 mismatch = true; 787 776 } 788 - 789 - struct disk_accounting_pos acc_k; 790 - bpos_to_disk_accounting_pos(&acc_k, a.k->p); 791 777 792 778 switch (acc_k.type) { 793 
779 case BCH_DISK_ACCOUNTING_persistent_reserved:
+36 -40
fs/bcachefs/disk_accounting.h
··· 82 82 s64 *, unsigned, bool); 83 83 int bch2_mod_dev_cached_sectors(struct btree_trans *, unsigned, s64, bool); 84 84 85 - int bch2_accounting_invalid(struct bch_fs *, struct bkey_s_c, 86 - enum bch_validate_flags, struct printbuf *); 85 + int bch2_accounting_validate(struct bch_fs *, struct bkey_s_c, enum bch_validate_flags); 87 86 void bch2_accounting_key_to_text(struct printbuf *, struct disk_accounting_pos *); 88 87 void bch2_accounting_to_text(struct printbuf *, struct bch_fs *, struct bkey_s_c); 89 88 void bch2_accounting_swab(struct bkey_s); 90 89 91 90 #define bch2_bkey_ops_accounting ((struct bkey_ops) { \ 92 - .key_invalid = bch2_accounting_invalid, \ 91 + .key_validate = bch2_accounting_validate, \ 93 92 .val_to_text = bch2_accounting_to_text, \ 94 93 .swab = bch2_accounting_swab, \ 95 94 .min_val_size = 8, \ ··· 106 107 int bch2_accounting_mem_insert(struct bch_fs *, struct bkey_s_c_accounting, bool); 107 108 void bch2_accounting_mem_gc(struct bch_fs *); 108 109 109 - static inline int __bch2_accounting_mem_mod(struct bch_fs *c, struct bkey_s_c_accounting a, bool gc) 110 + /* 111 + * Update in memory counters so they match the btree update we're doing; called 112 + * from transaction commit path 113 + */ 114 + static inline int bch2_accounting_mem_mod_locked(struct btree_trans *trans, struct bkey_s_c_accounting a, bool gc, bool read) 110 115 { 116 + struct bch_fs *c = trans->c; 117 + struct disk_accounting_pos acc_k; 118 + bpos_to_disk_accounting_pos(&acc_k, a.k->p); 119 + 120 + if (acc_k.type == BCH_DISK_ACCOUNTING_inum) 121 + return 0; 122 + 123 + if (!gc && !read) { 124 + switch (acc_k.type) { 125 + case BCH_DISK_ACCOUNTING_persistent_reserved: 126 + trans->fs_usage_delta.reserved += acc_k.persistent_reserved.nr_replicas * a.v->d[0]; 127 + break; 128 + case BCH_DISK_ACCOUNTING_replicas: 129 + fs_usage_data_type_to_base(&trans->fs_usage_delta, acc_k.replicas.data_type, a.v->d[0]); 130 + break; 131 + case BCH_DISK_ACCOUNTING_dev_data_type: 132 + rcu_read_lock(); 133 + struct bch_dev *ca = bch2_dev_rcu(c, acc_k.dev_data_type.dev); 134 + if (ca) { 135 + this_cpu_add(ca->usage->d[acc_k.dev_data_type.data_type].buckets, a.v->d[0]); 136 + this_cpu_add(ca->usage->d[acc_k.dev_data_type.data_type].sectors, a.v->d[1]); 137 + this_cpu_add(ca->usage->d[acc_k.dev_data_type.data_type].fragmented, a.v->d[2]); 138 + } 139 + rcu_read_unlock(); 140 + break; 141 + } 142 + } 143 + 111 144 struct bch_accounting_mem *acc = &c->accounting; 112 145 unsigned idx; 113 146 ··· 161 130 return 0; 162 131 } 163 132 164 - /* 165 - * Update in memory counters so they match the btree update we're doing; called 166 - * from transaction commit path 167 - */ 168 - static inline int bch2_accounting_mem_mod_locked(struct btree_trans *trans, struct bkey_s_c_accounting a, bool gc) 169 - { 170 - struct bch_fs *c = trans->c; 171 - 172 - if (!gc) { 173 - struct disk_accounting_pos acc_k; 174 - bpos_to_disk_accounting_pos(&acc_k, a.k->p); 175 - 176 - switch (acc_k.type) { 177 - case BCH_DISK_ACCOUNTING_persistent_reserved: 178 - trans->fs_usage_delta.reserved += acc_k.persistent_reserved.nr_replicas * a.v->d[0]; 179 - break; 180 - case BCH_DISK_ACCOUNTING_replicas: 181 - fs_usage_data_type_to_base(&trans->fs_usage_delta, acc_k.replicas.data_type, a.v->d[0]); 182 - break; 183 - case BCH_DISK_ACCOUNTING_dev_data_type: 184 - rcu_read_lock(); 185 - struct bch_dev *ca = bch2_dev_rcu(c, acc_k.dev_data_type.dev); 186 - if (ca) { 187 - this_cpu_add(ca->usage->d[acc_k.dev_data_type.data_type].buckets, a.v->d[0]); 188 - 
this_cpu_add(ca->usage->d[acc_k.dev_data_type.data_type].sectors, a.v->d[1]); 189 - this_cpu_add(ca->usage->d[acc_k.dev_data_type.data_type].fragmented, a.v->d[2]); 190 - } 191 - rcu_read_unlock(); 192 - break; 193 - } 194 - } 195 - 196 - return __bch2_accounting_mem_mod(c, a, gc); 197 - } 198 - 199 133 static inline int bch2_accounting_mem_add(struct btree_trans *trans, struct bkey_s_c_accounting a, bool gc) 200 134 { 201 135 percpu_down_read(&trans->c->mark_lock); 202 - int ret = bch2_accounting_mem_mod_locked(trans, a, gc); 136 + int ret = bch2_accounting_mem_mod_locked(trans, a, gc, false); 203 137 percpu_up_read(&trans->c->mark_lock); 204 138 return ret; 205 139 }
+7 -1
fs/bcachefs/disk_accounting_format.h
··· 103 103 x(compression, 4) \ 104 104 x(snapshot, 5) \ 105 105 x(btree, 6) \ 106 - x(rebalance_work, 7) 106 + x(rebalance_work, 7) \ 107 + x(inum, 8) 107 108 108 109 enum disk_accounting_type { 109 110 #define x(f, nr) BCH_DISK_ACCOUNTING_##f = nr, ··· 137 136 __u32 id; 138 137 } __packed; 139 138 139 + struct bch_acct_inum { 140 + __u64 inum; 141 + } __packed; 142 + 140 143 struct bch_acct_rebalance_work { 141 144 }; 142 145 ··· 157 152 struct bch_acct_snapshot snapshot; 158 153 struct bch_acct_btree btree; 159 154 struct bch_acct_rebalance_work rebalance_work; 155 + struct bch_acct_inum inum; 160 156 } __packed; 161 157 } __packed; 162 158 struct bpos _pad;
+7 -8
fs/bcachefs/ec.c
··· 107 107 108 108 /* Stripes btree keys: */ 109 109 110 - int bch2_stripe_invalid(struct bch_fs *c, struct bkey_s_c k, 111 - enum bch_validate_flags flags, 112 - struct printbuf *err) 110 + int bch2_stripe_validate(struct bch_fs *c, struct bkey_s_c k, 111 + enum bch_validate_flags flags) 113 112 { 114 113 const struct bch_stripe *s = bkey_s_c_to_stripe(k).v; 115 114 int ret = 0; 116 115 117 116 bkey_fsck_err_on(bkey_eq(k.k->p, POS_MIN) || 118 - bpos_gt(k.k->p, POS(0, U32_MAX)), c, err, 119 - stripe_pos_bad, 117 + bpos_gt(k.k->p, POS(0, U32_MAX)), 118 + c, stripe_pos_bad, 120 119 "stripe at bad pos"); 121 120 122 - bkey_fsck_err_on(bkey_val_u64s(k.k) < stripe_val_u64s(s), c, err, 123 - stripe_val_size_bad, 121 + bkey_fsck_err_on(bkey_val_u64s(k.k) < stripe_val_u64s(s), 122 + c, stripe_val_size_bad, 124 123 "incorrect value size (%zu < %u)", 125 124 bkey_val_u64s(k.k), stripe_val_u64s(s)); 126 125 127 - ret = bch2_bkey_ptrs_invalid(c, k, flags, err); 126 + ret = bch2_bkey_ptrs_validate(c, k, flags); 128 127 fsck_err: 129 128 return ret; 130 129 }
+2 -3
fs/bcachefs/ec.h
··· 8 8 9 9 enum bch_validate_flags; 10 10 11 - int bch2_stripe_invalid(struct bch_fs *, struct bkey_s_c, 12 - enum bch_validate_flags, struct printbuf *); 11 + int bch2_stripe_validate(struct bch_fs *, struct bkey_s_c, enum bch_validate_flags); 13 12 void bch2_stripe_to_text(struct printbuf *, struct bch_fs *, 14 13 struct bkey_s_c); 15 14 int bch2_trigger_stripe(struct btree_trans *, enum btree_id, unsigned, ··· 16 17 enum btree_iter_update_trigger_flags); 17 18 18 19 #define bch2_bkey_ops_stripe ((struct bkey_ops) { \ 19 - .key_invalid = bch2_stripe_invalid, \ 20 + .key_validate = bch2_stripe_validate, \ 20 21 .val_to_text = bch2_stripe_to_text, \ 21 22 .swab = bch2_ptr_swab, \ 22 23 .trigger = bch2_trigger_stripe, \
+1
fs/bcachefs/errcode.h
··· 166 166 x(0, journal_reclaim_would_deadlock) \ 167 167 x(EINVAL, fsck) \ 168 168 x(BCH_ERR_fsck, fsck_fix) \ 169 + x(BCH_ERR_fsck, fsck_delete_bkey) \ 169 170 x(BCH_ERR_fsck, fsck_ignore) \ 170 171 x(BCH_ERR_fsck, fsck_errors_not_fixed) \ 171 172 x(BCH_ERR_fsck, fsck_repair_unimplemented) \
+22
fs/bcachefs/error.c
··· 416 416 return ret; 417 417 } 418 418 419 + int __bch2_bkey_fsck_err(struct bch_fs *c, 420 + struct bkey_s_c k, 421 + enum bch_fsck_flags flags, 422 + enum bch_sb_error_id err, 423 + const char *fmt, ...) 424 + { 425 + struct printbuf buf = PRINTBUF; 426 + va_list args; 427 + 428 + prt_str(&buf, "invalid bkey "); 429 + bch2_bkey_val_to_text(&buf, c, k); 430 + prt_str(&buf, "\n "); 431 + va_start(args, fmt); 432 + prt_vprintf(&buf, fmt, args); 433 + va_end(args); 434 + prt_str(&buf, ": delete?"); 435 + 436 + int ret = __bch2_fsck_err(c, NULL, flags, err, "%s", buf.buf); 437 + printbuf_exit(&buf); 438 + return ret; 439 + } 440 + 419 441 void bch2_flush_fsck_errs(struct bch_fs *c) 420 442 { 421 443 struct fsck_err_state *s, *n;
+23 -16
fs/bcachefs/error.h
··· 4 4 5 5 #include <linux/list.h> 6 6 #include <linux/printk.h> 7 + #include "bkey_types.h" 7 8 #include "sb-errors.h" 8 9 9 10 struct bch_dev; ··· 167 166 #define fsck_err_on(cond, c, _err_type, ...) \ 168 167 __fsck_err_on(cond, c, FSCK_CAN_FIX|FSCK_CAN_IGNORE, _err_type, __VA_ARGS__) 169 168 170 - __printf(4, 0) 171 - static inline void bch2_bkey_fsck_err(struct bch_fs *c, 172 - struct printbuf *err_msg, 173 - enum bch_sb_error_id err_type, 174 - const char *fmt, ...) 175 - { 176 - va_list args; 169 + __printf(5, 6) 170 + int __bch2_bkey_fsck_err(struct bch_fs *, 171 + struct bkey_s_c, 172 + enum bch_fsck_flags, 173 + enum bch_sb_error_id, 174 + const char *, ...); 177 175 178 - va_start(args, fmt); 179 - prt_vprintf(err_msg, fmt, args); 180 - va_end(args); 181 - } 182 - 183 - #define bkey_fsck_err(c, _err_msg, _err_type, ...) \ 176 + /* 177 + * for now, bkey fsck errors are always handled by deleting the entire key - 178 + * this will change at some point 179 + */ 180 + #define bkey_fsck_err(c, _err_type, _err_msg, ...) \ 184 181 do { \ 185 - prt_printf(_err_msg, __VA_ARGS__); \ 186 - bch2_sb_error_count(c, BCH_FSCK_ERR_##_err_type); \ 187 - ret = -BCH_ERR_invalid_bkey; \ 182 + if ((flags & BCH_VALIDATE_silent)) { \ 183 + ret = -BCH_ERR_fsck_delete_bkey; \ 184 + goto fsck_err; \ 185 + } \ 186 + int _ret = __bch2_bkey_fsck_err(c, k, FSCK_CAN_FIX, \ 187 + BCH_FSCK_ERR_##_err_type, \ 188 + _err_msg, ##__VA_ARGS__); \ 189 + if (_ret != -BCH_ERR_fsck_fix && \ 190 + _ret != -BCH_ERR_fsck_ignore) \ 191 + ret = _ret; \ 192 + ret = -BCH_ERR_fsck_delete_bkey; \ 188 193 goto fsck_err; \ 189 194 } while (0) 190 195
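For orientation, the reworked macro is meant to be called from the per-key-type _validate helpers that the rest of this series converts: the error type now comes before the format string, the printbuf argument is gone, and the surrounding function is expected to have c, k, flags, a local ret and an fsck_err label in scope. A minimal, illustrative sketch of that call-site shape (the helper name is made up and the error type is simply reused from the inode checks further down; this is not code from the commit):

	/*
	 * Sketch only: mirrors the call-site shape the new bkey_fsck_err_on()
	 * usage expects -- struct bch_fs *c, struct bkey_s_c k and the
	 * validate flags in scope, plus a ret/fsck_err: pair for the macro's
	 * goto on error.
	 */
	static int example_key_validate(struct bch_fs *c, struct bkey_s_c k,
					enum bch_validate_flags flags)
	{
		int ret = 0;

		bkey_fsck_err_on(k.k->p.inode,
				 c, inode_pos_inode_nonzero,
				 "nonzero k.p.inode");
	fsck_err:
		return ret;
	}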
+72 -72
fs/bcachefs/extents.c
··· 171 171 172 172 /* KEY_TYPE_btree_ptr: */ 173 173 174 - int bch2_btree_ptr_invalid(struct bch_fs *c, struct bkey_s_c k, 175 - enum bch_validate_flags flags, 176 - struct printbuf *err) 174 + int bch2_btree_ptr_validate(struct bch_fs *c, struct bkey_s_c k, 175 + enum bch_validate_flags flags) 177 176 { 178 177 int ret = 0; 179 178 180 - bkey_fsck_err_on(bkey_val_u64s(k.k) > BCH_REPLICAS_MAX, c, err, 181 - btree_ptr_val_too_big, 179 + bkey_fsck_err_on(bkey_val_u64s(k.k) > BCH_REPLICAS_MAX, 180 + c, btree_ptr_val_too_big, 182 181 "value too big (%zu > %u)", bkey_val_u64s(k.k), BCH_REPLICAS_MAX); 183 182 184 - ret = bch2_bkey_ptrs_invalid(c, k, flags, err); 183 + ret = bch2_bkey_ptrs_validate(c, k, flags); 185 184 fsck_err: 186 185 return ret; 187 186 } ··· 191 192 bch2_bkey_ptrs_to_text(out, c, k); 192 193 } 193 194 194 - int bch2_btree_ptr_v2_invalid(struct bch_fs *c, struct bkey_s_c k, 195 - enum bch_validate_flags flags, 196 - struct printbuf *err) 195 + int bch2_btree_ptr_v2_validate(struct bch_fs *c, struct bkey_s_c k, 196 + enum bch_validate_flags flags) 197 197 { 198 198 struct bkey_s_c_btree_ptr_v2 bp = bkey_s_c_to_btree_ptr_v2(k); 199 199 int ret = 0; 200 200 201 201 bkey_fsck_err_on(bkey_val_u64s(k.k) > BKEY_BTREE_PTR_VAL_U64s_MAX, 202 - c, err, btree_ptr_v2_val_too_big, 202 + c, btree_ptr_v2_val_too_big, 203 203 "value too big (%zu > %zu)", 204 204 bkey_val_u64s(k.k), BKEY_BTREE_PTR_VAL_U64s_MAX); 205 205 206 206 bkey_fsck_err_on(bpos_ge(bp.v->min_key, bp.k->p), 207 - c, err, btree_ptr_v2_min_key_bad, 207 + c, btree_ptr_v2_min_key_bad, 208 208 "min_key > key"); 209 209 210 210 if (flags & BCH_VALIDATE_write) 211 211 bkey_fsck_err_on(!bp.v->sectors_written, 212 - c, err, btree_ptr_v2_written_0, 212 + c, btree_ptr_v2_written_0, 213 213 "sectors_written == 0"); 214 214 215 - ret = bch2_bkey_ptrs_invalid(c, k, flags, err); 215 + ret = bch2_bkey_ptrs_validate(c, k, flags); 216 216 fsck_err: 217 217 return ret; 218 218 } ··· 397 399 398 400 /* KEY_TYPE_reservation: */ 399 401 400 - int bch2_reservation_invalid(struct bch_fs *c, struct bkey_s_c k, 401 - enum bch_validate_flags flags, 402 - struct printbuf *err) 402 + int bch2_reservation_validate(struct bch_fs *c, struct bkey_s_c k, 403 + enum bch_validate_flags flags) 403 404 { 404 405 struct bkey_s_c_reservation r = bkey_s_c_to_reservation(k); 405 406 int ret = 0; 406 407 407 - bkey_fsck_err_on(!r.v->nr_replicas || r.v->nr_replicas > BCH_REPLICAS_MAX, c, err, 408 - reservation_key_nr_replicas_invalid, 408 + bkey_fsck_err_on(!r.v->nr_replicas || r.v->nr_replicas > BCH_REPLICAS_MAX, 409 + c, reservation_key_nr_replicas_invalid, 409 410 "invalid nr_replicas (%u)", r.v->nr_replicas); 410 411 fsck_err: 411 412 return ret; ··· 1099 1102 } 1100 1103 } 1101 1104 1102 - 1103 - static int extent_ptr_invalid(struct bch_fs *c, 1104 - struct bkey_s_c k, 1105 - enum bch_validate_flags flags, 1106 - const struct bch_extent_ptr *ptr, 1107 - unsigned size_ondisk, 1108 - bool metadata, 1109 - struct printbuf *err) 1105 + static int extent_ptr_validate(struct bch_fs *c, 1106 + struct bkey_s_c k, 1107 + enum bch_validate_flags flags, 1108 + const struct bch_extent_ptr *ptr, 1109 + unsigned size_ondisk, 1110 + bool metadata) 1110 1111 { 1111 1112 int ret = 0; 1112 1113 ··· 1123 1128 1124 1129 struct bkey_ptrs_c ptrs = bch2_bkey_ptrs_c(k); 1125 1130 bkey_for_each_ptr(ptrs, ptr2) 1126 - bkey_fsck_err_on(ptr != ptr2 && ptr->dev == ptr2->dev, c, err, 1127 - ptr_to_duplicate_device, 1131 + bkey_fsck_err_on(ptr != ptr2 && ptr->dev == ptr2->dev, 1132 + c, 
ptr_to_duplicate_device, 1128 1133 "multiple pointers to same device (%u)", ptr->dev); 1129 1134 1130 1135 1131 - bkey_fsck_err_on(bucket >= nbuckets, c, err, 1132 - ptr_after_last_bucket, 1136 + bkey_fsck_err_on(bucket >= nbuckets, 1137 + c, ptr_after_last_bucket, 1133 1138 "pointer past last bucket (%llu > %llu)", bucket, nbuckets); 1134 - bkey_fsck_err_on(bucket < first_bucket, c, err, 1135 - ptr_before_first_bucket, 1139 + bkey_fsck_err_on(bucket < first_bucket, 1140 + c, ptr_before_first_bucket, 1136 1141 "pointer before first bucket (%llu < %u)", bucket, first_bucket); 1137 - bkey_fsck_err_on(bucket_offset + size_ondisk > bucket_size, c, err, 1138 - ptr_spans_multiple_buckets, 1142 + bkey_fsck_err_on(bucket_offset + size_ondisk > bucket_size, 1143 + c, ptr_spans_multiple_buckets, 1139 1144 "pointer spans multiple buckets (%u + %u > %u)", 1140 1145 bucket_offset, size_ondisk, bucket_size); 1141 1146 fsck_err: 1142 1147 return ret; 1143 1148 } 1144 1149 1145 - int bch2_bkey_ptrs_invalid(struct bch_fs *c, struct bkey_s_c k, 1146 - enum bch_validate_flags flags, 1147 - struct printbuf *err) 1150 + int bch2_bkey_ptrs_validate(struct bch_fs *c, struct bkey_s_c k, 1151 + enum bch_validate_flags flags) 1148 1152 { 1149 1153 struct bkey_ptrs_c ptrs = bch2_bkey_ptrs_c(k); 1150 1154 const union bch_extent_entry *entry; ··· 1158 1164 size_ondisk = btree_sectors(c); 1159 1165 1160 1166 bkey_extent_entry_for_each(ptrs, entry) { 1161 - bkey_fsck_err_on(__extent_entry_type(entry) >= BCH_EXTENT_ENTRY_MAX, c, err, 1162 - extent_ptrs_invalid_entry, 1163 - "invalid extent entry type (got %u, max %u)", 1164 - __extent_entry_type(entry), BCH_EXTENT_ENTRY_MAX); 1167 + bkey_fsck_err_on(__extent_entry_type(entry) >= BCH_EXTENT_ENTRY_MAX, 1168 + c, extent_ptrs_invalid_entry, 1169 + "invalid extent entry type (got %u, max %u)", 1170 + __extent_entry_type(entry), BCH_EXTENT_ENTRY_MAX); 1165 1171 1166 1172 bkey_fsck_err_on(bkey_is_btree_ptr(k.k) && 1167 - !extent_entry_is_ptr(entry), c, err, 1168 - btree_ptr_has_non_ptr, 1173 + !extent_entry_is_ptr(entry), 1174 + c, btree_ptr_has_non_ptr, 1169 1175 "has non ptr field"); 1170 1176 1171 1177 switch (extent_entry_type(entry)) { 1172 1178 case BCH_EXTENT_ENTRY_ptr: 1173 - ret = extent_ptr_invalid(c, k, flags, &entry->ptr, 1174 - size_ondisk, false, err); 1179 + ret = extent_ptr_validate(c, k, flags, &entry->ptr, size_ondisk, false); 1175 1180 if (ret) 1176 1181 return ret; 1177 1182 1178 - bkey_fsck_err_on(entry->ptr.cached && have_ec, c, err, 1179 - ptr_cached_and_erasure_coded, 1183 + bkey_fsck_err_on(entry->ptr.cached && have_ec, 1184 + c, ptr_cached_and_erasure_coded, 1180 1185 "cached, erasure coded ptr"); 1181 1186 1182 1187 if (!entry->ptr.unwritten) ··· 1192 1199 case BCH_EXTENT_ENTRY_crc128: 1193 1200 crc = bch2_extent_crc_unpack(k.k, entry_to_crc(entry)); 1194 1201 1195 - bkey_fsck_err_on(crc.offset + crc.live_size > crc.uncompressed_size, c, err, 1196 - ptr_crc_uncompressed_size_too_small, 1202 + bkey_fsck_err_on(crc.offset + crc.live_size > crc.uncompressed_size, 1203 + c, ptr_crc_uncompressed_size_too_small, 1197 1204 "checksum offset + key size > uncompressed size"); 1198 - bkey_fsck_err_on(!bch2_checksum_type_valid(c, crc.csum_type), c, err, 1199 - ptr_crc_csum_type_unknown, 1205 + bkey_fsck_err_on(!bch2_checksum_type_valid(c, crc.csum_type), 1206 + c, ptr_crc_csum_type_unknown, 1200 1207 "invalid checksum type"); 1201 - bkey_fsck_err_on(crc.compression_type >= BCH_COMPRESSION_TYPE_NR, c, err, 1202 - ptr_crc_compression_type_unknown, 1208 + 
bkey_fsck_err_on(crc.compression_type >= BCH_COMPRESSION_TYPE_NR, 1209 + c, ptr_crc_compression_type_unknown, 1203 1210 "invalid compression type"); 1204 1211 1205 1212 if (bch2_csum_type_is_encryption(crc.csum_type)) { 1206 1213 if (nonce == UINT_MAX) 1207 1214 nonce = crc.offset + crc.nonce; 1208 1215 else if (nonce != crc.offset + crc.nonce) 1209 - bkey_fsck_err(c, err, ptr_crc_nonce_mismatch, 1216 + bkey_fsck_err(c, ptr_crc_nonce_mismatch, 1210 1217 "incorrect nonce"); 1211 1218 } 1212 1219 1213 - bkey_fsck_err_on(crc_since_last_ptr, c, err, 1214 - ptr_crc_redundant, 1220 + bkey_fsck_err_on(crc_since_last_ptr, 1221 + c, ptr_crc_redundant, 1215 1222 "redundant crc entry"); 1216 1223 crc_since_last_ptr = true; 1217 1224 1218 1225 bkey_fsck_err_on(crc_is_encoded(crc) && 1219 1226 (crc.uncompressed_size > c->opts.encoded_extent_max >> 9) && 1220 - (flags & (BCH_VALIDATE_write|BCH_VALIDATE_commit)), c, err, 1221 - ptr_crc_uncompressed_size_too_big, 1227 + (flags & (BCH_VALIDATE_write|BCH_VALIDATE_commit)), 1228 + c, ptr_crc_uncompressed_size_too_big, 1222 1229 "too large encoded extent"); 1223 1230 1224 1231 size_ondisk = crc.compressed_size; 1225 1232 break; 1226 1233 case BCH_EXTENT_ENTRY_stripe_ptr: 1227 - bkey_fsck_err_on(have_ec, c, err, 1228 - ptr_stripe_redundant, 1234 + bkey_fsck_err_on(have_ec, 1235 + c, ptr_stripe_redundant, 1229 1236 "redundant stripe entry"); 1230 1237 have_ec = true; 1231 1238 break; 1232 1239 case BCH_EXTENT_ENTRY_rebalance: { 1240 + /* 1241 + * this shouldn't be a fsck error, for forward 1242 + * compatibility; the rebalance code should just refetch 1243 + * the compression opt if it's unknown 1244 + */ 1245 + #if 0 1233 1246 const struct bch_extent_rebalance *r = &entry->rebalance; 1234 1247 1235 1248 if (!bch2_compression_opt_valid(r->compression)) { ··· 1244 1245 opt.type, opt.level); 1245 1246 return -BCH_ERR_invalid_bkey; 1246 1247 } 1248 + #endif 1247 1249 break; 1248 1250 } 1249 1251 } 1250 1252 } 1251 1253 1252 - bkey_fsck_err_on(!nr_ptrs, c, err, 1253 - extent_ptrs_no_ptrs, 1254 + bkey_fsck_err_on(!nr_ptrs, 1255 + c, extent_ptrs_no_ptrs, 1254 1256 "no ptrs"); 1255 - bkey_fsck_err_on(nr_ptrs > BCH_BKEY_PTRS_MAX, c, err, 1256 - extent_ptrs_too_many_ptrs, 1257 + bkey_fsck_err_on(nr_ptrs > BCH_BKEY_PTRS_MAX, 1258 + c, extent_ptrs_too_many_ptrs, 1257 1259 "too many ptrs: %u > %u", nr_ptrs, BCH_BKEY_PTRS_MAX); 1258 - bkey_fsck_err_on(have_written && have_unwritten, c, err, 1259 - extent_ptrs_written_and_unwritten, 1260 + bkey_fsck_err_on(have_written && have_unwritten, 1261 + c, extent_ptrs_written_and_unwritten, 1260 1262 "extent with unwritten and written ptrs"); 1261 - bkey_fsck_err_on(k.k->type != KEY_TYPE_extent && have_unwritten, c, err, 1262 - extent_ptrs_unwritten, 1263 + bkey_fsck_err_on(k.k->type != KEY_TYPE_extent && have_unwritten, 1264 + c, extent_ptrs_unwritten, 1263 1265 "has unwritten ptrs"); 1264 - bkey_fsck_err_on(crc_since_last_ptr, c, err, 1265 - extent_ptrs_redundant_crc, 1266 + bkey_fsck_err_on(crc_since_last_ptr, 1267 + c, extent_ptrs_redundant_crc, 1266 1268 "redundant crc entry"); 1267 - bkey_fsck_err_on(have_ec, c, err, 1268 - extent_ptrs_redundant_stripe, 1269 + bkey_fsck_err_on(have_ec, 1270 + c, extent_ptrs_redundant_stripe, 1269 1271 "redundant stripe entry"); 1270 1272 fsck_err: 1271 1273 return ret;
+12 -12
fs/bcachefs/extents.h
··· 409 409 410 410 /* KEY_TYPE_btree_ptr: */ 411 411 412 - int bch2_btree_ptr_invalid(struct bch_fs *, struct bkey_s_c, 413 - enum bch_validate_flags, struct printbuf *); 412 + int bch2_btree_ptr_validate(struct bch_fs *, struct bkey_s_c, 413 + enum bch_validate_flags); 414 414 void bch2_btree_ptr_to_text(struct printbuf *, struct bch_fs *, 415 415 struct bkey_s_c); 416 416 417 - int bch2_btree_ptr_v2_invalid(struct bch_fs *, struct bkey_s_c, 418 - enum bch_validate_flags, struct printbuf *); 417 + int bch2_btree_ptr_v2_validate(struct bch_fs *, struct bkey_s_c, 418 + enum bch_validate_flags); 419 419 void bch2_btree_ptr_v2_to_text(struct printbuf *, struct bch_fs *, struct bkey_s_c); 420 420 void bch2_btree_ptr_v2_compat(enum btree_id, unsigned, unsigned, 421 421 int, struct bkey_s); 422 422 423 423 #define bch2_bkey_ops_btree_ptr ((struct bkey_ops) { \ 424 - .key_invalid = bch2_btree_ptr_invalid, \ 424 + .key_validate = bch2_btree_ptr_validate, \ 425 425 .val_to_text = bch2_btree_ptr_to_text, \ 426 426 .swab = bch2_ptr_swab, \ 427 427 .trigger = bch2_trigger_extent, \ 428 428 }) 429 429 430 430 #define bch2_bkey_ops_btree_ptr_v2 ((struct bkey_ops) { \ 431 - .key_invalid = bch2_btree_ptr_v2_invalid, \ 431 + .key_validate = bch2_btree_ptr_v2_validate, \ 432 432 .val_to_text = bch2_btree_ptr_v2_to_text, \ 433 433 .swab = bch2_ptr_swab, \ 434 434 .compat = bch2_btree_ptr_v2_compat, \ ··· 441 441 bool bch2_extent_merge(struct bch_fs *, struct bkey_s, struct bkey_s_c); 442 442 443 443 #define bch2_bkey_ops_extent ((struct bkey_ops) { \ 444 - .key_invalid = bch2_bkey_ptrs_invalid, \ 444 + .key_validate = bch2_bkey_ptrs_validate, \ 445 445 .val_to_text = bch2_bkey_ptrs_to_text, \ 446 446 .swab = bch2_ptr_swab, \ 447 447 .key_normalize = bch2_extent_normalize, \ ··· 451 451 452 452 /* KEY_TYPE_reservation: */ 453 453 454 - int bch2_reservation_invalid(struct bch_fs *, struct bkey_s_c, 455 - enum bch_validate_flags, struct printbuf *); 454 + int bch2_reservation_validate(struct bch_fs *, struct bkey_s_c, 455 + enum bch_validate_flags); 456 456 void bch2_reservation_to_text(struct printbuf *, struct bch_fs *, struct bkey_s_c); 457 457 bool bch2_reservation_merge(struct bch_fs *, struct bkey_s, struct bkey_s_c); 458 458 459 459 #define bch2_bkey_ops_reservation ((struct bkey_ops) { \ 460 - .key_invalid = bch2_reservation_invalid, \ 460 + .key_validate = bch2_reservation_validate, \ 461 461 .val_to_text = bch2_reservation_to_text, \ 462 462 .key_merge = bch2_reservation_merge, \ 463 463 .trigger = bch2_trigger_reservation, \ ··· 683 683 void bch2_extent_ptr_to_text(struct printbuf *out, struct bch_fs *, const struct bch_extent_ptr *); 684 684 void bch2_bkey_ptrs_to_text(struct printbuf *, struct bch_fs *, 685 685 struct bkey_s_c); 686 - int bch2_bkey_ptrs_invalid(struct bch_fs *, struct bkey_s_c, 687 - enum bch_validate_flags, struct printbuf *); 686 + int bch2_bkey_ptrs_validate(struct bch_fs *, struct bkey_s_c, 687 + enum bch_validate_flags); 688 688 689 689 void bch2_ptr_swab(struct bkey_s); 690 690
+1 -1
fs/bcachefs/fs.c
··· 193 193 * only insert fully created inodes in the inode hash table. But 194 194 * discard_new_inode() expects it to be set... 195 195 */ 196 - inode->v.i_flags |= I_NEW; 196 + inode->v.i_state |= I_NEW; 197 197 /* 198 198 * We don't want bch2_evict_inode() to delete the inode on disk, 199 199 * we just raced and had another inode in cache. Normally new
+37 -40
fs/bcachefs/inode.c
··· 434 434 return &inode_p->inode.k_i; 435 435 } 436 436 437 - static int __bch2_inode_invalid(struct bch_fs *c, struct bkey_s_c k, struct printbuf *err) 437 + static int __bch2_inode_validate(struct bch_fs *c, struct bkey_s_c k, 438 + enum bch_validate_flags flags) 438 439 { 439 440 struct bch_inode_unpacked unpacked; 440 441 int ret = 0; 441 442 442 - bkey_fsck_err_on(k.k->p.inode, c, err, 443 - inode_pos_inode_nonzero, 443 + bkey_fsck_err_on(k.k->p.inode, 444 + c, inode_pos_inode_nonzero, 444 445 "nonzero k.p.inode"); 445 446 446 - bkey_fsck_err_on(k.k->p.offset < BLOCKDEV_INODE_MAX, c, err, 447 - inode_pos_blockdev_range, 447 + bkey_fsck_err_on(k.k->p.offset < BLOCKDEV_INODE_MAX, 448 + c, inode_pos_blockdev_range, 448 449 "fs inode in blockdev range"); 449 450 450 - bkey_fsck_err_on(bch2_inode_unpack(k, &unpacked), c, err, 451 - inode_unpack_error, 451 + bkey_fsck_err_on(bch2_inode_unpack(k, &unpacked), 452 + c, inode_unpack_error, 452 453 "invalid variable length fields"); 453 454 454 - bkey_fsck_err_on(unpacked.bi_data_checksum >= BCH_CSUM_OPT_NR + 1, c, err, 455 - inode_checksum_type_invalid, 455 + bkey_fsck_err_on(unpacked.bi_data_checksum >= BCH_CSUM_OPT_NR + 1, 456 + c, inode_checksum_type_invalid, 456 457 "invalid data checksum type (%u >= %u", 457 458 unpacked.bi_data_checksum, BCH_CSUM_OPT_NR + 1); 458 459 459 460 bkey_fsck_err_on(unpacked.bi_compression && 460 - !bch2_compression_opt_valid(unpacked.bi_compression - 1), c, err, 461 - inode_compression_type_invalid, 461 + !bch2_compression_opt_valid(unpacked.bi_compression - 1), 462 + c, inode_compression_type_invalid, 462 463 "invalid compression opt %u", unpacked.bi_compression - 1); 463 464 464 465 bkey_fsck_err_on((unpacked.bi_flags & BCH_INODE_unlinked) && 465 - unpacked.bi_nlink != 0, c, err, 466 - inode_unlinked_but_nlink_nonzero, 466 + unpacked.bi_nlink != 0, 467 + c, inode_unlinked_but_nlink_nonzero, 467 468 "flagged as unlinked but bi_nlink != 0"); 468 469 469 - bkey_fsck_err_on(unpacked.bi_subvol && !S_ISDIR(unpacked.bi_mode), c, err, 470 - inode_subvol_root_but_not_dir, 470 + bkey_fsck_err_on(unpacked.bi_subvol && !S_ISDIR(unpacked.bi_mode), 471 + c, inode_subvol_root_but_not_dir, 471 472 "subvolume root but not a directory"); 472 473 fsck_err: 473 474 return ret; 474 475 } 475 476 476 - int bch2_inode_invalid(struct bch_fs *c, struct bkey_s_c k, 477 - enum bch_validate_flags flags, 478 - struct printbuf *err) 477 + int bch2_inode_validate(struct bch_fs *c, struct bkey_s_c k, 478 + enum bch_validate_flags flags) 479 479 { 480 480 struct bkey_s_c_inode inode = bkey_s_c_to_inode(k); 481 481 int ret = 0; 482 482 483 - bkey_fsck_err_on(INODE_STR_HASH(inode.v) >= BCH_STR_HASH_NR, c, err, 484 - inode_str_hash_invalid, 483 + bkey_fsck_err_on(INODE_STR_HASH(inode.v) >= BCH_STR_HASH_NR, 484 + c, inode_str_hash_invalid, 485 485 "invalid str hash type (%llu >= %u)", 486 486 INODE_STR_HASH(inode.v), BCH_STR_HASH_NR); 487 487 488 - ret = __bch2_inode_invalid(c, k, err); 488 + ret = __bch2_inode_validate(c, k, flags); 489 489 fsck_err: 490 490 return ret; 491 491 } 492 492 493 - int bch2_inode_v2_invalid(struct bch_fs *c, struct bkey_s_c k, 494 - enum bch_validate_flags flags, 495 - struct printbuf *err) 493 + int bch2_inode_v2_validate(struct bch_fs *c, struct bkey_s_c k, 494 + enum bch_validate_flags flags) 496 495 { 497 496 struct bkey_s_c_inode_v2 inode = bkey_s_c_to_inode_v2(k); 498 497 int ret = 0; 499 498 500 - bkey_fsck_err_on(INODEv2_STR_HASH(inode.v) >= BCH_STR_HASH_NR, c, err, 501 - inode_str_hash_invalid, 499 + 
bkey_fsck_err_on(INODEv2_STR_HASH(inode.v) >= BCH_STR_HASH_NR, 500 + c, inode_str_hash_invalid, 502 501 "invalid str hash type (%llu >= %u)", 503 502 INODEv2_STR_HASH(inode.v), BCH_STR_HASH_NR); 504 503 505 - ret = __bch2_inode_invalid(c, k, err); 504 + ret = __bch2_inode_validate(c, k, flags); 506 505 fsck_err: 507 506 return ret; 508 507 } 509 508 510 - int bch2_inode_v3_invalid(struct bch_fs *c, struct bkey_s_c k, 511 - enum bch_validate_flags flags, 512 - struct printbuf *err) 509 + int bch2_inode_v3_validate(struct bch_fs *c, struct bkey_s_c k, 510 + enum bch_validate_flags flags) 513 511 { 514 512 struct bkey_s_c_inode_v3 inode = bkey_s_c_to_inode_v3(k); 515 513 int ret = 0; 516 514 517 515 bkey_fsck_err_on(INODEv3_FIELDS_START(inode.v) < INODEv3_FIELDS_START_INITIAL || 518 - INODEv3_FIELDS_START(inode.v) > bkey_val_u64s(inode.k), c, err, 519 - inode_v3_fields_start_bad, 516 + INODEv3_FIELDS_START(inode.v) > bkey_val_u64s(inode.k), 517 + c, inode_v3_fields_start_bad, 520 518 "invalid fields_start (got %llu, min %u max %zu)", 521 519 INODEv3_FIELDS_START(inode.v), 522 520 INODEv3_FIELDS_START_INITIAL, 523 521 bkey_val_u64s(inode.k)); 524 522 525 - bkey_fsck_err_on(INODEv3_STR_HASH(inode.v) >= BCH_STR_HASH_NR, c, err, 526 - inode_str_hash_invalid, 523 + bkey_fsck_err_on(INODEv3_STR_HASH(inode.v) >= BCH_STR_HASH_NR, 524 + c, inode_str_hash_invalid, 527 525 "invalid str hash type (%llu >= %u)", 528 526 INODEv3_STR_HASH(inode.v), BCH_STR_HASH_NR); 529 527 530 - ret = __bch2_inode_invalid(c, k, err); 528 + ret = __bch2_inode_validate(c, k, flags); 531 529 fsck_err: 532 530 return ret; 533 531 } ··· 623 625 return 0; 624 626 } 625 627 626 - int bch2_inode_generation_invalid(struct bch_fs *c, struct bkey_s_c k, 627 - enum bch_validate_flags flags, 628 - struct printbuf *err) 628 + int bch2_inode_generation_validate(struct bch_fs *c, struct bkey_s_c k, 629 + enum bch_validate_flags flags) 629 630 { 630 631 int ret = 0; 631 632 632 - bkey_fsck_err_on(k.k->p.inode, c, err, 633 - inode_pos_inode_nonzero, 633 + bkey_fsck_err_on(k.k->p.inode, 634 + c, inode_pos_inode_nonzero, 634 635 "nonzero k.p.inode"); 635 636 fsck_err: 636 637 return ret;
+12 -12
fs/bcachefs/inode.h
··· 9 9 enum bch_validate_flags; 10 10 extern const char * const bch2_inode_opts[]; 11 11 12 - int bch2_inode_invalid(struct bch_fs *, struct bkey_s_c, 13 - enum bch_validate_flags, struct printbuf *); 14 - int bch2_inode_v2_invalid(struct bch_fs *, struct bkey_s_c, 15 - enum bch_validate_flags, struct printbuf *); 16 - int bch2_inode_v3_invalid(struct bch_fs *, struct bkey_s_c, 17 - enum bch_validate_flags, struct printbuf *); 12 + int bch2_inode_validate(struct bch_fs *, struct bkey_s_c, 13 + enum bch_validate_flags); 14 + int bch2_inode_v2_validate(struct bch_fs *, struct bkey_s_c, 15 + enum bch_validate_flags); 16 + int bch2_inode_v3_validate(struct bch_fs *, struct bkey_s_c, 17 + enum bch_validate_flags); 18 18 void bch2_inode_to_text(struct printbuf *, struct bch_fs *, struct bkey_s_c); 19 19 20 20 int bch2_trigger_inode(struct btree_trans *, enum btree_id, unsigned, ··· 22 22 enum btree_iter_update_trigger_flags); 23 23 24 24 #define bch2_bkey_ops_inode ((struct bkey_ops) { \ 25 - .key_invalid = bch2_inode_invalid, \ 25 + .key_validate = bch2_inode_validate, \ 26 26 .val_to_text = bch2_inode_to_text, \ 27 27 .trigger = bch2_trigger_inode, \ 28 28 .min_val_size = 16, \ 29 29 }) 30 30 31 31 #define bch2_bkey_ops_inode_v2 ((struct bkey_ops) { \ 32 - .key_invalid = bch2_inode_v2_invalid, \ 32 + .key_validate = bch2_inode_v2_validate, \ 33 33 .val_to_text = bch2_inode_to_text, \ 34 34 .trigger = bch2_trigger_inode, \ 35 35 .min_val_size = 32, \ 36 36 }) 37 37 38 38 #define bch2_bkey_ops_inode_v3 ((struct bkey_ops) { \ 39 - .key_invalid = bch2_inode_v3_invalid, \ 39 + .key_validate = bch2_inode_v3_validate, \ 40 40 .val_to_text = bch2_inode_to_text, \ 41 41 .trigger = bch2_trigger_inode, \ 42 42 .min_val_size = 48, \ ··· 49 49 k->type == KEY_TYPE_inode_v3; 50 50 } 51 51 52 - int bch2_inode_generation_invalid(struct bch_fs *, struct bkey_s_c, 53 - enum bch_validate_flags, struct printbuf *); 52 + int bch2_inode_generation_validate(struct bch_fs *, struct bkey_s_c, 53 + enum bch_validate_flags); 54 54 void bch2_inode_generation_to_text(struct printbuf *, struct bch_fs *, struct bkey_s_c); 55 55 56 56 #define bch2_bkey_ops_inode_generation ((struct bkey_ops) { \ 57 - .key_invalid = bch2_inode_generation_invalid, \ 57 + .key_validate = bch2_inode_generation_validate, \ 58 58 .val_to_text = bch2_inode_generation_to_text, \ 59 59 .min_val_size = 8, \ 60 60 })
+5 -19
fs/bcachefs/journal_io.c
··· 332 332 { 333 333 int write = flags & BCH_VALIDATE_write; 334 334 void *next = vstruct_next(entry); 335 - struct printbuf buf = PRINTBUF; 336 335 int ret = 0; 337 336 338 337 if (journal_entry_err_on(!k->k.u64s, ··· 367 368 bch2_bkey_compat(level, btree_id, version, big_endian, 368 369 write, NULL, bkey_to_packed(k)); 369 370 370 - if (bch2_bkey_invalid(c, bkey_i_to_s_c(k), 371 - __btree_node_type(level, btree_id), write, &buf)) { 372 - printbuf_reset(&buf); 373 - journal_entry_err_msg(&buf, version, jset, entry); 374 - prt_newline(&buf); 375 - printbuf_indent_add(&buf, 2); 376 - 377 - bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(k)); 378 - prt_newline(&buf); 379 - bch2_bkey_invalid(c, bkey_i_to_s_c(k), 380 - __btree_node_type(level, btree_id), write, &buf); 381 - 382 - mustfix_fsck_err(c, journal_entry_bkey_invalid, 383 - "%s", buf.buf); 384 - 371 + ret = bch2_bkey_validate(c, bkey_i_to_s_c(k), 372 + __btree_node_type(level, btree_id), write); 373 + if (ret == -BCH_ERR_fsck_delete_bkey) { 385 374 le16_add_cpu(&entry->u64s, -((u16) k->k.u64s)); 386 375 memmove(k, bkey_next(k), next - (void *) bkey_next(k)); 387 376 journal_entry_null_range(vstruct_next(entry), next); 388 - 389 - printbuf_exit(&buf); 390 377 return FSCK_DELETED_KEY; 391 378 } 379 + if (ret) 380 + goto fsck_err; 392 381 393 382 if (write) 394 383 bch2_bkey_compat(level, btree_id, version, big_endian, 395 384 write, NULL, bkey_to_packed(k)); 396 385 fsck_err: 397 - printbuf_exit(&buf); 398 386 return ret; 399 387 } 400 388
+4 -5
fs/bcachefs/lru.c
··· 10 10 #include "recovery.h" 11 11 12 12 /* KEY_TYPE_lru is obsolete: */ 13 - int bch2_lru_invalid(struct bch_fs *c, struct bkey_s_c k, 14 - enum bch_validate_flags flags, 15 - struct printbuf *err) 13 + int bch2_lru_validate(struct bch_fs *c, struct bkey_s_c k, 14 + enum bch_validate_flags flags) 16 15 { 17 16 int ret = 0; 18 17 19 - bkey_fsck_err_on(!lru_pos_time(k.k->p), c, err, 20 - lru_entry_at_time_0, 18 + bkey_fsck_err_on(!lru_pos_time(k.k->p), 19 + c, lru_entry_at_time_0, 21 20 "lru entry at time=0"); 22 21 fsck_err: 23 22 return ret;
+2 -3
fs/bcachefs/lru.h
··· 33 33 return BCH_LRU_read; 34 34 } 35 35 36 - int bch2_lru_invalid(struct bch_fs *, struct bkey_s_c, 37 - enum bch_validate_flags, struct printbuf *); 36 + int bch2_lru_validate(struct bch_fs *, struct bkey_s_c, enum bch_validate_flags); 38 37 void bch2_lru_to_text(struct printbuf *, struct bch_fs *, struct bkey_s_c); 39 38 40 39 void bch2_lru_pos_to_text(struct printbuf *, struct bpos); 41 40 42 41 #define bch2_bkey_ops_lru ((struct bkey_ops) { \ 43 - .key_invalid = bch2_lru_invalid, \ 42 + .key_validate = bch2_lru_validate, \ 44 43 .val_to_text = bch2_lru_to_text, \ 45 44 .min_val_size = 8, \ 46 45 })
+4 -4
fs/bcachefs/quota.c
··· 59 59 .to_text = bch2_sb_quota_to_text, 60 60 }; 61 61 62 - int bch2_quota_invalid(struct bch_fs *c, struct bkey_s_c k, 63 - enum bch_validate_flags flags, struct printbuf *err) 62 + int bch2_quota_validate(struct bch_fs *c, struct bkey_s_c k, 63 + enum bch_validate_flags flags) 64 64 { 65 65 int ret = 0; 66 66 67 - bkey_fsck_err_on(k.k->p.inode >= QTYP_NR, c, err, 68 - quota_type_invalid, 67 + bkey_fsck_err_on(k.k->p.inode >= QTYP_NR, 68 + c, quota_type_invalid, 69 69 "invalid quota type (%llu >= %u)", 70 70 k.k->p.inode, QTYP_NR); 71 71 fsck_err:
+2 -3
fs/bcachefs/quota.h
··· 8 8 enum bch_validate_flags; 9 9 extern const struct bch_sb_field_ops bch_sb_field_ops_quota; 10 10 11 - int bch2_quota_invalid(struct bch_fs *, struct bkey_s_c, 12 - enum bch_validate_flags, struct printbuf *); 11 + int bch2_quota_validate(struct bch_fs *, struct bkey_s_c, enum bch_validate_flags); 13 12 void bch2_quota_to_text(struct printbuf *, struct bch_fs *, struct bkey_s_c); 14 13 15 14 #define bch2_bkey_ops_quota ((struct bkey_ops) { \ 16 - .key_invalid = bch2_quota_invalid, \ 15 + .key_validate = bch2_quota_validate, \ 17 16 .val_to_text = bch2_quota_to_text, \ 18 17 .min_val_size = 32, \ 19 18 })
+8 -11
fs/bcachefs/reflink.c
··· 29 29 30 30 /* reflink pointers */ 31 31 32 - int bch2_reflink_p_invalid(struct bch_fs *c, struct bkey_s_c k, 33 - enum bch_validate_flags flags, 34 - struct printbuf *err) 32 + int bch2_reflink_p_validate(struct bch_fs *c, struct bkey_s_c k, 33 + enum bch_validate_flags flags) 35 34 { 36 35 struct bkey_s_c_reflink_p p = bkey_s_c_to_reflink_p(k); 37 36 int ret = 0; 38 37 39 38 bkey_fsck_err_on(le64_to_cpu(p.v->idx) < le32_to_cpu(p.v->front_pad), 40 - c, err, reflink_p_front_pad_bad, 39 + c, reflink_p_front_pad_bad, 41 40 "idx < front_pad (%llu < %u)", 42 41 le64_to_cpu(p.v->idx), le32_to_cpu(p.v->front_pad)); 43 42 fsck_err: ··· 255 256 256 257 /* indirect extents */ 257 258 258 - int bch2_reflink_v_invalid(struct bch_fs *c, struct bkey_s_c k, 259 - enum bch_validate_flags flags, 260 - struct printbuf *err) 259 + int bch2_reflink_v_validate(struct bch_fs *c, struct bkey_s_c k, 260 + enum bch_validate_flags flags) 261 261 { 262 - return bch2_bkey_ptrs_invalid(c, k, flags, err); 262 + return bch2_bkey_ptrs_validate(c, k, flags); 263 263 } 264 264 265 265 void bch2_reflink_v_to_text(struct printbuf *out, struct bch_fs *c, ··· 309 311 310 312 /* indirect inline data */ 311 313 312 - int bch2_indirect_inline_data_invalid(struct bch_fs *c, struct bkey_s_c k, 313 - enum bch_validate_flags flags, 314 - struct printbuf *err) 314 + int bch2_indirect_inline_data_validate(struct bch_fs *c, struct bkey_s_c k, 315 + enum bch_validate_flags flags) 315 316 { 316 317 return 0; 317 318 }
+9 -13
fs/bcachefs/reflink.h
··· 4 4 5 5 enum bch_validate_flags; 6 6 7 - int bch2_reflink_p_invalid(struct bch_fs *, struct bkey_s_c, 8 - enum bch_validate_flags, struct printbuf *); 9 - void bch2_reflink_p_to_text(struct printbuf *, struct bch_fs *, 10 - struct bkey_s_c); 7 + int bch2_reflink_p_validate(struct bch_fs *, struct bkey_s_c, enum bch_validate_flags); 8 + void bch2_reflink_p_to_text(struct printbuf *, struct bch_fs *, struct bkey_s_c); 11 9 bool bch2_reflink_p_merge(struct bch_fs *, struct bkey_s, struct bkey_s_c); 12 10 int bch2_trigger_reflink_p(struct btree_trans *, enum btree_id, unsigned, 13 11 struct bkey_s_c, struct bkey_s, 14 12 enum btree_iter_update_trigger_flags); 15 13 16 14 #define bch2_bkey_ops_reflink_p ((struct bkey_ops) { \ 17 - .key_invalid = bch2_reflink_p_invalid, \ 15 + .key_validate = bch2_reflink_p_validate, \ 18 16 .val_to_text = bch2_reflink_p_to_text, \ 19 17 .key_merge = bch2_reflink_p_merge, \ 20 18 .trigger = bch2_trigger_reflink_p, \ 21 19 .min_val_size = 16, \ 22 20 }) 23 21 24 - int bch2_reflink_v_invalid(struct bch_fs *, struct bkey_s_c, 25 - enum bch_validate_flags, struct printbuf *); 26 - void bch2_reflink_v_to_text(struct printbuf *, struct bch_fs *, 27 - struct bkey_s_c); 22 + int bch2_reflink_v_validate(struct bch_fs *, struct bkey_s_c, enum bch_validate_flags); 23 + void bch2_reflink_v_to_text(struct printbuf *, struct bch_fs *, struct bkey_s_c); 28 24 int bch2_trigger_reflink_v(struct btree_trans *, enum btree_id, unsigned, 29 25 struct bkey_s_c, struct bkey_s, 30 26 enum btree_iter_update_trigger_flags); 31 27 32 28 #define bch2_bkey_ops_reflink_v ((struct bkey_ops) { \ 33 - .key_invalid = bch2_reflink_v_invalid, \ 29 + .key_validate = bch2_reflink_v_validate, \ 34 30 .val_to_text = bch2_reflink_v_to_text, \ 35 31 .swab = bch2_ptr_swab, \ 36 32 .trigger = bch2_trigger_reflink_v, \ 37 33 .min_val_size = 8, \ 38 34 }) 39 35 40 - int bch2_indirect_inline_data_invalid(struct bch_fs *, struct bkey_s_c, 41 - enum bch_validate_flags, struct printbuf *); 36 + int bch2_indirect_inline_data_validate(struct bch_fs *, struct bkey_s_c, 37 + enum bch_validate_flags); 42 38 void bch2_indirect_inline_data_to_text(struct printbuf *, 43 39 struct bch_fs *, struct bkey_s_c); 44 40 int bch2_trigger_indirect_inline_data(struct btree_trans *, ··· 43 47 enum btree_iter_update_trigger_flags); 44 48 45 49 #define bch2_bkey_ops_indirect_inline_data ((struct bkey_ops) { \ 46 - .key_invalid = bch2_indirect_inline_data_invalid, \ 50 + .key_validate = bch2_indirect_inline_data_validate, \ 47 51 .val_to_text = bch2_indirect_inline_data_to_text, \ 48 52 .trigger = bch2_trigger_indirect_inline_data, \ 49 53 .min_val_size = 8, \
+5 -1
fs/bcachefs/sb-downgrade.c
··· 72 72 BCH_FSCK_ERR_accounting_key_replicas_nr_devs_0, \ 73 73 BCH_FSCK_ERR_accounting_key_replicas_nr_required_bad, \ 74 74 BCH_FSCK_ERR_accounting_key_replicas_devs_unsorted, \ 75 - BCH_FSCK_ERR_accounting_key_junk_at_end) 75 + BCH_FSCK_ERR_accounting_key_junk_at_end) \ 76 + x(disk_accounting_inum, \ 77 + BIT_ULL(BCH_RECOVERY_PASS_check_allocations), \ 78 + BCH_FSCK_ERR_accounting_mismatch) 76 79 77 80 #define DOWNGRADE_TABLE() \ 78 81 x(bucket_stripe_sectors, \ ··· 107 104 BCH_FSCK_ERR_fs_usage_nr_inodes_wrong, \ 108 105 BCH_FSCK_ERR_fs_usage_persistent_reserved_wrong, \ 109 106 BCH_FSCK_ERR_fs_usage_replicas_wrong, \ 107 + BCH_FSCK_ERR_accounting_replicas_not_marked, \ 110 108 BCH_FSCK_ERR_bkey_version_in_future) 111 109 112 110 struct upgrade_downgrade_entry {
+20 -22
fs/bcachefs/snapshot.c
··· 31 31 le32_to_cpu(t.v->root_snapshot)); 32 32 } 33 33 34 - int bch2_snapshot_tree_invalid(struct bch_fs *c, struct bkey_s_c k, 35 - enum bch_validate_flags flags, 36 - struct printbuf *err) 34 + int bch2_snapshot_tree_validate(struct bch_fs *c, struct bkey_s_c k, 35 + enum bch_validate_flags flags) 37 36 { 38 37 int ret = 0; 39 38 40 39 bkey_fsck_err_on(bkey_gt(k.k->p, POS(0, U32_MAX)) || 41 - bkey_lt(k.k->p, POS(0, 1)), c, err, 42 - snapshot_tree_pos_bad, 40 + bkey_lt(k.k->p, POS(0, 1)), 41 + c, snapshot_tree_pos_bad, 43 42 "bad pos"); 44 43 fsck_err: 45 44 return ret; ··· 224 225 le32_to_cpu(s.v->skip[2])); 225 226 } 226 227 227 - int bch2_snapshot_invalid(struct bch_fs *c, struct bkey_s_c k, 228 - enum bch_validate_flags flags, 229 - struct printbuf *err) 228 + int bch2_snapshot_validate(struct bch_fs *c, struct bkey_s_c k, 229 + enum bch_validate_flags flags) 230 230 { 231 231 struct bkey_s_c_snapshot s; 232 232 u32 i, id; 233 233 int ret = 0; 234 234 235 235 bkey_fsck_err_on(bkey_gt(k.k->p, POS(0, U32_MAX)) || 236 - bkey_lt(k.k->p, POS(0, 1)), c, err, 237 - snapshot_pos_bad, 236 + bkey_lt(k.k->p, POS(0, 1)), 237 + c, snapshot_pos_bad, 238 238 "bad pos"); 239 239 240 240 s = bkey_s_c_to_snapshot(k); 241 241 242 242 id = le32_to_cpu(s.v->parent); 243 - bkey_fsck_err_on(id && id <= k.k->p.offset, c, err, 244 - snapshot_parent_bad, 243 + bkey_fsck_err_on(id && id <= k.k->p.offset, 244 + c, snapshot_parent_bad, 245 245 "bad parent node (%u <= %llu)", 246 246 id, k.k->p.offset); 247 247 248 - bkey_fsck_err_on(le32_to_cpu(s.v->children[0]) < le32_to_cpu(s.v->children[1]), c, err, 249 - snapshot_children_not_normalized, 248 + bkey_fsck_err_on(le32_to_cpu(s.v->children[0]) < le32_to_cpu(s.v->children[1]), 249 + c, snapshot_children_not_normalized, 250 250 "children not normalized"); 251 251 252 - bkey_fsck_err_on(s.v->children[0] && s.v->children[0] == s.v->children[1], c, err, 253 - snapshot_child_duplicate, 252 + bkey_fsck_err_on(s.v->children[0] && s.v->children[0] == s.v->children[1], 253 + c, snapshot_child_duplicate, 254 254 "duplicate child nodes"); 255 255 256 256 for (i = 0; i < 2; i++) { 257 257 id = le32_to_cpu(s.v->children[i]); 258 258 259 - bkey_fsck_err_on(id >= k.k->p.offset, c, err, 260 - snapshot_child_bad, 259 + bkey_fsck_err_on(id >= k.k->p.offset, 260 + c, snapshot_child_bad, 261 261 "bad child node (%u >= %llu)", 262 262 id, k.k->p.offset); 263 263 } 264 264 265 265 if (bkey_val_bytes(k.k) > offsetof(struct bch_snapshot, skip)) { 266 266 bkey_fsck_err_on(le32_to_cpu(s.v->skip[0]) > le32_to_cpu(s.v->skip[1]) || 267 - le32_to_cpu(s.v->skip[1]) > le32_to_cpu(s.v->skip[2]), c, err, 268 - snapshot_skiplist_not_normalized, 267 + le32_to_cpu(s.v->skip[1]) > le32_to_cpu(s.v->skip[2]), 268 + c, snapshot_skiplist_not_normalized, 269 269 "skiplist not normalized"); 270 270 271 271 for (i = 0; i < ARRAY_SIZE(s.v->skip); i++) { 272 272 id = le32_to_cpu(s.v->skip[i]); 273 273 274 - bkey_fsck_err_on(id && id < le32_to_cpu(s.v->parent), c, err, 275 - snapshot_skiplist_bad, 274 + bkey_fsck_err_on(id && id < le32_to_cpu(s.v->parent), 275 + c, snapshot_skiplist_bad, 276 276 "bad skiplist node %u", id); 277 277 } 278 278 }
+5 -6
fs/bcachefs/snapshot.h
··· 5 5 enum bch_validate_flags; 6 6 7 7 void bch2_snapshot_tree_to_text(struct printbuf *, struct bch_fs *, struct bkey_s_c); 8 - int bch2_snapshot_tree_invalid(struct bch_fs *, struct bkey_s_c, 9 - enum bch_validate_flags, struct printbuf *); 8 + int bch2_snapshot_tree_validate(struct bch_fs *, struct bkey_s_c, 9 + enum bch_validate_flags); 10 10 11 11 #define bch2_bkey_ops_snapshot_tree ((struct bkey_ops) { \ 12 - .key_invalid = bch2_snapshot_tree_invalid, \ 12 + .key_validate = bch2_snapshot_tree_validate, \ 13 13 .val_to_text = bch2_snapshot_tree_to_text, \ 14 14 .min_val_size = 8, \ 15 15 }) ··· 19 19 int bch2_snapshot_tree_lookup(struct btree_trans *, u32, struct bch_snapshot_tree *); 20 20 21 21 void bch2_snapshot_to_text(struct printbuf *, struct bch_fs *, struct bkey_s_c); 22 - int bch2_snapshot_invalid(struct bch_fs *, struct bkey_s_c, 23 - enum bch_validate_flags, struct printbuf *); 22 + int bch2_snapshot_validate(struct bch_fs *, struct bkey_s_c, enum bch_validate_flags); 24 23 int bch2_mark_snapshot(struct btree_trans *, enum btree_id, unsigned, 25 24 struct bkey_s_c, struct bkey_s, 26 25 enum btree_iter_update_trigger_flags); 27 26 28 27 #define bch2_bkey_ops_snapshot ((struct bkey_ops) { \ 29 - .key_invalid = bch2_snapshot_invalid, \ 28 + .key_validate = bch2_snapshot_validate, \ 30 29 .val_to_text = bch2_snapshot_to_text, \ 31 30 .trigger = bch2_mark_snapshot, \ 32 31 .min_val_size = 24, \
+8 -8
fs/bcachefs/subvolume.c
··· 207 207 208 208 /* Subvolumes: */ 209 209 210 - int bch2_subvolume_invalid(struct bch_fs *c, struct bkey_s_c k, 211 - enum bch_validate_flags flags, struct printbuf *err) 210 + int bch2_subvolume_validate(struct bch_fs *c, struct bkey_s_c k, 211 + enum bch_validate_flags flags) 212 212 { 213 213 struct bkey_s_c_subvolume subvol = bkey_s_c_to_subvolume(k); 214 214 int ret = 0; 215 215 216 216 bkey_fsck_err_on(bkey_lt(k.k->p, SUBVOL_POS_MIN) || 217 - bkey_gt(k.k->p, SUBVOL_POS_MAX), c, err, 218 - subvol_pos_bad, 217 + bkey_gt(k.k->p, SUBVOL_POS_MAX), 218 + c, subvol_pos_bad, 219 219 "invalid pos"); 220 220 221 - bkey_fsck_err_on(!subvol.v->snapshot, c, err, 222 - subvol_snapshot_bad, 221 + bkey_fsck_err_on(!subvol.v->snapshot, 222 + c, subvol_snapshot_bad, 223 223 "invalid snapshot"); 224 224 225 - bkey_fsck_err_on(!subvol.v->inode, c, err, 226 - subvol_inode_bad, 225 + bkey_fsck_err_on(!subvol.v->inode, 226 + c, subvol_inode_bad, 227 227 "invalid inode"); 228 228 fsck_err: 229 229 return ret;
+2 -3
fs/bcachefs/subvolume.h
··· 10 10 int bch2_check_subvols(struct bch_fs *); 11 11 int bch2_check_subvol_children(struct bch_fs *); 12 12 13 - int bch2_subvolume_invalid(struct bch_fs *, struct bkey_s_c, 14 - enum bch_validate_flags, struct printbuf *); 13 + int bch2_subvolume_validate(struct bch_fs *, struct bkey_s_c, enum bch_validate_flags); 15 14 void bch2_subvolume_to_text(struct printbuf *, struct bch_fs *, struct bkey_s_c); 16 15 int bch2_subvolume_trigger(struct btree_trans *, enum btree_id, unsigned, 17 16 struct bkey_s_c, struct bkey_s, 18 17 enum btree_iter_update_trigger_flags); 19 18 20 19 #define bch2_bkey_ops_subvolume ((struct bkey_ops) { \ 21 - .key_invalid = bch2_subvolume_invalid, \ 20 + .key_validate = bch2_subvolume_validate, \ 22 21 .val_to_text = bch2_subvolume_to_text, \ 23 22 .trigger = bch2_subvolume_trigger, \ 24 23 .min_val_size = 16, \
+1
fs/bcachefs/trace.c
··· 4 4 #include "buckets.h" 5 5 #include "btree_cache.h" 6 6 #include "btree_iter.h" 7 + #include "btree_key_cache.h" 7 8 #include "btree_locking.h" 8 9 #include "btree_update_interior.h" 9 10 #include "keylist.h"
+25 -2
fs/bcachefs/trace.h
··· 988 988 __entry->u64s_remaining) 989 989 ); 990 990 991 - DEFINE_EVENT(transaction_event, trans_blocked_journal_reclaim, 991 + TRACE_EVENT(trans_blocked_journal_reclaim, 992 992 TP_PROTO(struct btree_trans *trans, 993 993 unsigned long caller_ip), 994 - TP_ARGS(trans, caller_ip) 994 + TP_ARGS(trans, caller_ip), 995 + 996 + TP_STRUCT__entry( 997 + __array(char, trans_fn, 32 ) 998 + __field(unsigned long, caller_ip ) 999 + 1000 + __field(unsigned long, key_cache_nr_keys ) 1001 + __field(unsigned long, key_cache_nr_dirty ) 1002 + __field(long, must_wait ) 1003 + ), 1004 + 1005 + TP_fast_assign( 1006 + strscpy(__entry->trans_fn, trans->fn, sizeof(__entry->trans_fn)); 1007 + __entry->caller_ip = caller_ip; 1008 + __entry->key_cache_nr_keys = atomic_long_read(&trans->c->btree_key_cache.nr_keys); 1009 + __entry->key_cache_nr_dirty = atomic_long_read(&trans->c->btree_key_cache.nr_dirty); 1010 + __entry->must_wait = __bch2_btree_key_cache_must_wait(trans->c); 1011 + ), 1012 + 1013 + TP_printk("%s %pS key cache keys %lu dirty %lu must_wait %li", 1014 + __entry->trans_fn, (void *) __entry->caller_ip, 1015 + __entry->key_cache_nr_keys, 1016 + __entry->key_cache_nr_dirty, 1017 + __entry->must_wait) 995 1018 ); 996 1019 997 1020 TRACE_EVENT(trans_restart_journal_preres_get,
+10 -11
fs/bcachefs/xattr.c
··· 70 70 .cmp_bkey = xattr_cmp_bkey, 71 71 }; 72 72 73 - int bch2_xattr_invalid(struct bch_fs *c, struct bkey_s_c k, 74 - enum bch_validate_flags flags, 75 - struct printbuf *err) 73 + int bch2_xattr_validate(struct bch_fs *c, struct bkey_s_c k, 74 + enum bch_validate_flags flags) 76 75 { 77 76 struct bkey_s_c_xattr xattr = bkey_s_c_to_xattr(k); 78 77 unsigned val_u64s = xattr_val_u64s(xattr.v->x_name_len, 79 78 le16_to_cpu(xattr.v->x_val_len)); 80 79 int ret = 0; 81 80 82 - bkey_fsck_err_on(bkey_val_u64s(k.k) < val_u64s, c, err, 83 - xattr_val_size_too_small, 81 + bkey_fsck_err_on(bkey_val_u64s(k.k) < val_u64s, 82 + c, xattr_val_size_too_small, 84 83 "value too small (%zu < %u)", 85 84 bkey_val_u64s(k.k), val_u64s); 86 85 ··· 87 88 val_u64s = xattr_val_u64s(xattr.v->x_name_len, 88 89 le16_to_cpu(xattr.v->x_val_len) + 4); 89 90 90 - bkey_fsck_err_on(bkey_val_u64s(k.k) > val_u64s, c, err, 91 - xattr_val_size_too_big, 91 + bkey_fsck_err_on(bkey_val_u64s(k.k) > val_u64s, 92 + c, xattr_val_size_too_big, 92 93 "value too big (%zu > %u)", 93 94 bkey_val_u64s(k.k), val_u64s); 94 95 95 - bkey_fsck_err_on(!bch2_xattr_type_to_handler(xattr.v->x_type), c, err, 96 - xattr_invalid_type, 96 + bkey_fsck_err_on(!bch2_xattr_type_to_handler(xattr.v->x_type), 97 + c, xattr_invalid_type, 97 98 "invalid type (%u)", xattr.v->x_type); 98 99 99 - bkey_fsck_err_on(memchr(xattr.v->x_name, '\0', xattr.v->x_name_len), c, err, 100 - xattr_name_invalid_chars, 100 + bkey_fsck_err_on(memchr(xattr.v->x_name, '\0', xattr.v->x_name_len), 101 + c, xattr_name_invalid_chars, 101 102 "xattr name has invalid characters"); 102 103 fsck_err: 103 104 return ret;
+2 -3
fs/bcachefs/xattr.h
··· 6 6 7 7 extern const struct bch_hash_desc bch2_xattr_hash_desc; 8 8 9 - int bch2_xattr_invalid(struct bch_fs *, struct bkey_s_c, 10 - enum bch_validate_flags, struct printbuf *); 9 + int bch2_xattr_validate(struct bch_fs *, struct bkey_s_c, enum bch_validate_flags); 11 10 void bch2_xattr_to_text(struct printbuf *, struct bch_fs *, struct bkey_s_c); 12 11 13 12 #define bch2_bkey_ops_xattr ((struct bkey_ops) { \ 14 - .key_invalid = bch2_xattr_invalid, \ 13 + .key_validate = bch2_xattr_validate, \ 15 14 .val_to_text = bch2_xattr_to_text, \ 16 15 .min_val_size = 8, \ 17 16 })
+3 -1
fs/binfmt_flat.c
··· 72 72 73 73 #ifdef CONFIG_BINFMT_FLAT_NO_DATA_START_OFFSET 74 74 #define DATA_START_OFFSET_WORDS (0) 75 + #define MAX_SHARED_LIBS_UPDATE (0) 75 76 #else 76 77 #define DATA_START_OFFSET_WORDS (MAX_SHARED_LIBS) 78 + #define MAX_SHARED_LIBS_UPDATE (MAX_SHARED_LIBS) 77 79 #endif 78 80 79 81 struct lib_info { ··· 882 880 return res; 883 881 884 882 /* Update data segment pointers for all libraries */ 885 - for (i = 0; i < MAX_SHARED_LIBS; i++) { 883 + for (i = 0; i < MAX_SHARED_LIBS_UPDATE; i++) { 886 884 if (!libinfo.lib_list[i].loaded) 887 885 continue; 888 886 for (j = 0; j < MAX_SHARED_LIBS; j++) {
+67
fs/btrfs/delayed-ref.c
··· 1134 1134 return find_ref_head(delayed_refs, bytenr, false); 1135 1135 } 1136 1136 1137 + static int find_comp(struct btrfs_delayed_ref_node *entry, u64 root, u64 parent) 1138 + { 1139 + int type = parent ? BTRFS_SHARED_BLOCK_REF_KEY : BTRFS_TREE_BLOCK_REF_KEY; 1140 + 1141 + if (type < entry->type) 1142 + return -1; 1143 + if (type > entry->type) 1144 + return 1; 1145 + 1146 + if (type == BTRFS_TREE_BLOCK_REF_KEY) { 1147 + if (root < entry->ref_root) 1148 + return -1; 1149 + if (root > entry->ref_root) 1150 + return 1; 1151 + } else { 1152 + if (parent < entry->parent) 1153 + return -1; 1154 + if (parent > entry->parent) 1155 + return 1; 1156 + } 1157 + return 0; 1158 + } 1159 + 1160 + /* 1161 + * Check to see if a given root/parent reference is attached to the head. This 1162 + * only checks for BTRFS_ADD_DELAYED_REF references that match, as that 1163 + * indicates the reference exists for the given root or parent. This is for 1164 + * tree blocks only. 1165 + * 1166 + * @head: the head of the bytenr we're searching. 1167 + * @root: the root objectid of the reference if it is a normal reference. 1168 + * @parent: the parent if this is a shared backref. 1169 + */ 1170 + bool btrfs_find_delayed_tree_ref(struct btrfs_delayed_ref_head *head, 1171 + u64 root, u64 parent) 1172 + { 1173 + struct rb_node *node; 1174 + bool found = false; 1175 + 1176 + lockdep_assert_held(&head->mutex); 1177 + 1178 + spin_lock(&head->lock); 1179 + node = head->ref_tree.rb_root.rb_node; 1180 + while (node) { 1181 + struct btrfs_delayed_ref_node *entry; 1182 + int ret; 1183 + 1184 + entry = rb_entry(node, struct btrfs_delayed_ref_node, ref_node); 1185 + ret = find_comp(entry, root, parent); 1186 + if (ret < 0) { 1187 + node = node->rb_left; 1188 + } else if (ret > 0) { 1189 + node = node->rb_right; 1190 + } else { 1191 + /* 1192 + * We only want to count ADD actions, as drops mean the 1193 + * ref doesn't exist. 1194 + */ 1195 + if (entry->action == BTRFS_ADD_DELAYED_REF) 1196 + found = true; 1197 + break; 1198 + } 1199 + } 1200 + spin_unlock(&head->lock); 1201 + return found; 1202 + } 1203 + 1137 1204 void __cold btrfs_delayed_ref_exit(void) 1138 1205 { 1139 1206 kmem_cache_destroy(btrfs_delayed_ref_head_cachep);
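The new helper only reports whether a matching BTRFS_ADD_DELAYED_REF entry is queued on an already-located ref head, and it has to be called with head->mutex held. A condensed, hedged sketch of the surrounding lookup protocol (the in-tree caller, including the refcount-and-retry path taken when the mutex is contended, is the extent-tree.c hunk below; the function name here is invented):

	/*
	 * Illustrative only: simplified from the extent-tree.c caller below,
	 * it gives up instead of retrying when head->mutex is contended.
	 */
	static bool tree_ref_queued(struct btrfs_trans_handle *trans,
				    u64 bytenr, u64 root, u64 parent)
	{
		struct btrfs_delayed_ref_root *delayed_refs =
			&trans->transaction->delayed_refs;
		struct btrfs_delayed_ref_head *head;
		bool exists = false;

		spin_lock(&delayed_refs->lock);
		head = btrfs_find_delayed_ref_head(delayed_refs, bytenr);
		if (head && mutex_trylock(&head->mutex)) {
			exists = btrfs_find_delayed_tree_ref(head, root, parent);
			mutex_unlock(&head->mutex);
		}
		spin_unlock(&delayed_refs->lock);
		return exists;
	}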
+2
fs/btrfs/delayed-ref.h
··· 389 389 int btrfs_delayed_refs_rsv_refill(struct btrfs_fs_info *fs_info, 390 390 enum btrfs_reserve_flush_enum flush); 391 391 bool btrfs_check_space_for_delayed_refs(struct btrfs_fs_info *fs_info); 392 + bool btrfs_find_delayed_tree_ref(struct btrfs_delayed_ref_head *head, 393 + u64 root, u64 parent); 392 394 393 395 static inline u64 btrfs_delayed_ref_owner(struct btrfs_delayed_ref_node *node) 394 396 {
+45 -6
fs/btrfs/extent-tree.c
··· 5472 5472 struct btrfs_root *root, u64 bytenr, u64 parent, 5473 5473 int level) 5474 5474 { 5475 + struct btrfs_delayed_ref_root *delayed_refs; 5476 + struct btrfs_delayed_ref_head *head; 5475 5477 struct btrfs_path *path; 5476 5478 struct btrfs_extent_inline_ref *iref; 5477 5479 int ret; 5480 + bool exists = false; 5478 5481 5479 5482 path = btrfs_alloc_path(); 5480 5483 if (!path) 5481 5484 return -ENOMEM; 5482 - 5485 + again: 5483 5486 ret = lookup_extent_backref(trans, path, &iref, bytenr, 5484 5487 root->fs_info->nodesize, parent, 5485 5488 btrfs_root_id(root), level, 0); 5489 + if (ret != -ENOENT) { 5490 + /* 5491 + * If we get 0 then we found our reference, return 1, else 5492 + * return the error if it's not -ENOENT; 5493 + */ 5494 + btrfs_free_path(path); 5495 + return (ret < 0 ) ? ret : 1; 5496 + } 5497 + 5498 + /* 5499 + * We could have a delayed ref with this reference, so look it up while 5500 + * we're holding the path open to make sure we don't race with the 5501 + * delayed ref running. 5502 + */ 5503 + delayed_refs = &trans->transaction->delayed_refs; 5504 + spin_lock(&delayed_refs->lock); 5505 + head = btrfs_find_delayed_ref_head(delayed_refs, bytenr); 5506 + if (!head) 5507 + goto out; 5508 + if (!mutex_trylock(&head->mutex)) { 5509 + /* 5510 + * We're contended, means that the delayed ref is running, get a 5511 + * reference and wait for the ref head to be complete and then 5512 + * try again. 5513 + */ 5514 + refcount_inc(&head->refs); 5515 + spin_unlock(&delayed_refs->lock); 5516 + 5517 + btrfs_release_path(path); 5518 + 5519 + mutex_lock(&head->mutex); 5520 + mutex_unlock(&head->mutex); 5521 + btrfs_put_delayed_ref_head(head); 5522 + goto again; 5523 + } 5524 + 5525 + exists = btrfs_find_delayed_tree_ref(head, root->root_key.objectid, parent); 5526 + mutex_unlock(&head->mutex); 5527 + out: 5528 + spin_unlock(&delayed_refs->lock); 5486 5529 btrfs_free_path(path); 5487 - if (ret == -ENOENT) 5488 - return 0; 5489 - if (ret < 0) 5490 - return ret; 5491 - return 1; 5530 + return exists ? 1 : 0; 5492 5531 } 5493 5532 5494 5533 /*
+7 -7
fs/btrfs/extent_io.c
··· 1496 1496 free_extent_map(em); 1497 1497 em = NULL; 1498 1498 1499 + /* 1500 + * Although the PageDirty bit might be cleared before entering 1501 + * this function, subpage dirty bit is not cleared. 1502 + * So clear subpage dirty bit here so next time we won't submit 1503 + * page for range already written to disk. 1504 + */ 1505 + btrfs_folio_clear_dirty(fs_info, page_folio(page), cur, iosize); 1499 1506 btrfs_set_range_writeback(inode, cur, cur + iosize - 1); 1500 1507 if (!PageWriteback(page)) { 1501 1508 btrfs_err(inode->root->fs_info, ··· 1510 1503 page->index, cur, end); 1511 1504 } 1512 1505 1513 - /* 1514 - * Although the PageDirty bit is cleared before entering this 1515 - * function, subpage dirty bit is not cleared. 1516 - * So clear subpage dirty bit here so next time we won't submit 1517 - * page for range already written to disk. 1518 - */ 1519 - btrfs_folio_clear_dirty(fs_info, page_folio(page), cur, iosize); 1520 1506 1521 1507 submit_extent_page(bio_ctrl, disk_bytenr, page, iosize, 1522 1508 cur - page_offset(page));
+6 -16
fs/btrfs/extent_map.c
··· 1147 1147 return 0; 1148 1148 1149 1149 /* 1150 - * We want to be fast because we can be called from any path trying to 1151 - * allocate memory, so if the lock is busy we don't want to spend time 1150 + * We want to be fast so if the lock is busy we don't want to spend time 1152 1151 * waiting for it - either some task is about to do IO for the inode or 1153 1152 * we may have another task shrinking extent maps, here in this code, so 1154 1153 * skip this inode. ··· 1190 1191 /* 1191 1192 * Stop if we need to reschedule or there's contention on the 1192 1193 * lock. This is to avoid slowing other tasks trying to take the 1193 - * lock and because the shrinker might be called during a memory 1194 - * allocation path and we want to avoid taking a very long time 1195 - * and slowing down all sorts of tasks. 1194 + * lock. 1196 1195 */ 1197 1196 if (need_resched() || rwlock_needbreak(&tree->lock)) 1198 1197 break; ··· 1219 1222 if (ctx->scanned >= ctx->nr_to_scan) 1220 1223 break; 1221 1224 1222 - /* 1223 - * We may be called from memory allocation paths, so we don't 1224 - * want to take too much time and slowdown tasks. 1225 - */ 1226 - if (need_resched()) 1227 - break; 1225 + cond_resched(); 1228 1226 1229 1227 inode = btrfs_find_first_inode(root, min_ino); 1230 1228 } ··· 1277 1285 ctx.last_ino); 1278 1286 } 1279 1287 1280 - /* 1281 - * We may be called from memory allocation paths, so we don't want to 1282 - * take too much time and slowdown tasks, so stop if we need reschedule. 1283 - */ 1284 - while (ctx.scanned < ctx.nr_to_scan && !need_resched()) { 1288 + while (ctx.scanned < ctx.nr_to_scan) { 1285 1289 struct btrfs_root *root; 1286 1290 unsigned long count; 1291 + 1292 + cond_resched(); 1287 1293 1288 1294 spin_lock(&fs_info->fs_roots_radix_lock); 1289 1295 count = radix_tree_gang_lookup(&fs_info->fs_roots_radix,
+8 -6
fs/btrfs/free-space-cache.c
··· 2697 2697 u64 offset = bytenr - block_group->start; 2698 2698 u64 to_free, to_unusable; 2699 2699 int bg_reclaim_threshold = 0; 2700 - bool initial = ((size == block_group->length) && (block_group->alloc_offset == 0)); 2700 + bool initial; 2701 2701 u64 reclaimable_unusable; 2702 2702 2703 - WARN_ON(!initial && offset + size > block_group->zone_capacity); 2703 + spin_lock(&block_group->lock); 2704 2704 2705 + initial = ((size == block_group->length) && (block_group->alloc_offset == 0)); 2706 + WARN_ON(!initial && offset + size > block_group->zone_capacity); 2705 2707 if (!initial) 2706 2708 bg_reclaim_threshold = READ_ONCE(sinfo->bg_reclaim_threshold); 2707 2709 2708 - spin_lock(&ctl->tree_lock); 2709 2710 if (!used) 2710 2711 to_free = size; 2711 2712 else if (initial) ··· 2719 2718 to_free = offset + size - block_group->alloc_offset; 2720 2719 to_unusable = size - to_free; 2721 2720 2721 + spin_lock(&ctl->tree_lock); 2722 2722 ctl->free_space += to_free; 2723 + spin_unlock(&ctl->tree_lock); 2723 2724 /* 2724 2725 * If the block group is read-only, we should account freed space into 2725 2726 * bytes_readonly. ··· 2730 2727 block_group->zone_unusable += to_unusable; 2731 2728 WARN_ON(block_group->zone_unusable > block_group->length); 2732 2729 } 2733 - spin_unlock(&ctl->tree_lock); 2734 2730 if (!used) { 2735 - spin_lock(&block_group->lock); 2736 2731 block_group->alloc_offset -= size; 2737 - spin_unlock(&block_group->lock); 2738 2732 } 2739 2733 2740 2734 reclaimable_unusable = block_group->zone_unusable - ··· 2744 2744 mult_perc(block_group->zone_capacity, bg_reclaim_threshold)) { 2745 2745 btrfs_mark_bg_to_reclaim(block_group); 2746 2746 } 2747 + 2748 + spin_unlock(&block_group->lock); 2747 2749 2748 2750 return 0; 2749 2751 }
+1
fs/btrfs/inode.c
··· 4195 4195 4196 4196 btrfs_i_size_write(dir, dir->vfs_inode.i_size - name->len * 2); 4197 4197 inode_inc_iversion(&inode->vfs_inode); 4198 + inode_set_ctime_current(&inode->vfs_inode); 4198 4199 inode_inc_iversion(&dir->vfs_inode); 4199 4200 inode_set_mtime_to_ts(&dir->vfs_inode, inode_set_ctime_current(&dir->vfs_inode)); 4200 4201 ret = btrfs_update_inode(trans, dir);
+40 -14
fs/btrfs/send.c
··· 347 347 int ret; 348 348 int need_later_update; 349 349 int name_len; 350 - char name[]; 350 + char name[] __counted_by(name_len); 351 351 }; 352 352 353 353 /* See the comment at lru_cache.h about struct btrfs_lru_cache_entry. */ ··· 6157 6157 u64 offset = key->offset; 6158 6158 u64 end; 6159 6159 u64 bs = sctx->send_root->fs_info->sectorsize; 6160 + struct btrfs_file_extent_item *ei; 6161 + u64 disk_byte; 6162 + u64 data_offset; 6163 + u64 num_bytes; 6164 + struct btrfs_inode_info info = { 0 }; 6160 6165 6161 6166 end = min_t(u64, btrfs_file_extent_end(path), sctx->cur_inode_size); 6162 6167 if (offset >= end) 6163 6168 return 0; 6164 6169 6165 - if (clone_root && IS_ALIGNED(end, bs)) { 6166 - struct btrfs_file_extent_item *ei; 6167 - u64 disk_byte; 6168 - u64 data_offset; 6170 + num_bytes = end - offset; 6169 6171 6170 - ei = btrfs_item_ptr(path->nodes[0], path->slots[0], 6171 - struct btrfs_file_extent_item); 6172 - disk_byte = btrfs_file_extent_disk_bytenr(path->nodes[0], ei); 6173 - data_offset = btrfs_file_extent_offset(path->nodes[0], ei); 6174 - ret = clone_range(sctx, path, clone_root, disk_byte, 6175 - data_offset, offset, end - offset); 6176 - } else { 6177 - ret = send_extent_data(sctx, path, offset, end - offset); 6178 - } 6172 + if (!clone_root) 6173 + goto write_data; 6174 + 6175 + if (IS_ALIGNED(end, bs)) 6176 + goto clone_data; 6177 + 6178 + /* 6179 + * If the extent end is not aligned, we can clone if the extent ends at 6180 + * the i_size of the inode and the clone range ends at the i_size of the 6181 + * source inode, otherwise the clone operation fails with -EINVAL. 6182 + */ 6183 + if (end != sctx->cur_inode_size) 6184 + goto write_data; 6185 + 6186 + ret = get_inode_info(clone_root->root, clone_root->ino, &info); 6187 + if (ret < 0) 6188 + return ret; 6189 + 6190 + if (clone_root->offset + num_bytes == info.size) 6191 + goto clone_data; 6192 + 6193 + write_data: 6194 + ret = send_extent_data(sctx, path, offset, num_bytes); 6195 + sctx->cur_inode_next_write_offset = end; 6196 + return ret; 6197 + 6198 + clone_data: 6199 + ei = btrfs_item_ptr(path->nodes[0], path->slots[0], 6200 + struct btrfs_file_extent_item); 6201 + disk_byte = btrfs_file_extent_disk_bytenr(path->nodes[0], ei); 6202 + data_offset = btrfs_file_extent_offset(path->nodes[0], ei); 6203 + ret = clone_range(sctx, path, clone_root, disk_byte, data_offset, offset, 6204 + num_bytes); 6179 6205 sctx->cur_inode_next_write_offset = end; 6180 6206 return ret; 6181 6207 }
+17 -1
fs/btrfs/super.c
··· 28 28 #include <linux/btrfs.h> 29 29 #include <linux/security.h> 30 30 #include <linux/fs_parser.h> 31 + #include <linux/swap.h> 31 32 #include "messages.h" 32 33 #include "delayed-inode.h" 33 34 #include "ctree.h" ··· 2402 2401 2403 2402 trace_btrfs_extent_map_shrinker_count(fs_info, nr); 2404 2403 2405 - return nr; 2404 + /* 2405 + * Only report the real number for DEBUG builds, as there are reports of 2406 + * serious performance degradation caused by too frequent shrinks. 2407 + */ 2408 + if (IS_ENABLED(CONFIG_BTRFS_DEBUG)) 2409 + return nr; 2410 + return 0; 2406 2411 } 2407 2412 2408 2413 static long btrfs_free_cached_objects(struct super_block *sb, struct shrink_control *sc) 2409 2414 { 2410 2415 const long nr_to_scan = min_t(unsigned long, LONG_MAX, sc->nr_to_scan); 2411 2416 struct btrfs_fs_info *fs_info = btrfs_sb(sb); 2417 + 2418 + /* 2419 + * We may be called from any task trying to allocate memory and we don't 2420 + * want to slow it down with scanning and dropping extent maps. It would 2421 + * also cause heavy lock contention if many tasks concurrently enter 2422 + * here. Therefore only allow kswapd tasks to scan and drop extent maps. 2423 + */ 2424 + if (!current_is_kswapd()) 2425 + return 0; 2412 2426 2413 2427 return btrfs_free_extent_maps(fs_info, nr_to_scan); 2414 2428 }
+72 -2
fs/btrfs/tree-checker.c
··· 569 569 570 570 /* dir type check */ 571 571 dir_type = btrfs_dir_ftype(leaf, di); 572 - if (unlikely(dir_type >= BTRFS_FT_MAX)) { 572 + if (unlikely(dir_type <= BTRFS_FT_UNKNOWN || 573 + dir_type >= BTRFS_FT_MAX)) { 573 574 dir_item_err(leaf, slot, 574 - "invalid dir item type, have %u expect [0, %u)", 575 + "invalid dir item type, have %u expect (0, %u)", 575 576 dir_type, BTRFS_FT_MAX); 576 577 return -EUCLEAN; 577 578 } ··· 1764 1763 return 0; 1765 1764 } 1766 1765 1766 + static int check_dev_extent_item(const struct extent_buffer *leaf, 1767 + const struct btrfs_key *key, 1768 + int slot, 1769 + struct btrfs_key *prev_key) 1770 + { 1771 + struct btrfs_dev_extent *de; 1772 + const u32 sectorsize = leaf->fs_info->sectorsize; 1773 + 1774 + de = btrfs_item_ptr(leaf, slot, struct btrfs_dev_extent); 1775 + /* Basic fixed member checks. */ 1776 + if (unlikely(btrfs_dev_extent_chunk_tree(leaf, de) != 1777 + BTRFS_CHUNK_TREE_OBJECTID)) { 1778 + generic_err(leaf, slot, 1779 + "invalid dev extent chunk tree id, has %llu expect %llu", 1780 + btrfs_dev_extent_chunk_tree(leaf, de), 1781 + BTRFS_CHUNK_TREE_OBJECTID); 1782 + return -EUCLEAN; 1783 + } 1784 + if (unlikely(btrfs_dev_extent_chunk_objectid(leaf, de) != 1785 + BTRFS_FIRST_CHUNK_TREE_OBJECTID)) { 1786 + generic_err(leaf, slot, 1787 + "invalid dev extent chunk objectid, has %llu expect %llu", 1788 + btrfs_dev_extent_chunk_objectid(leaf, de), 1789 + BTRFS_FIRST_CHUNK_TREE_OBJECTID); 1790 + return -EUCLEAN; 1791 + } 1792 + /* Alignment check. */ 1793 + if (unlikely(!IS_ALIGNED(key->offset, sectorsize))) { 1794 + generic_err(leaf, slot, 1795 + "invalid dev extent key.offset, has %llu not aligned to %u", 1796 + key->offset, sectorsize); 1797 + return -EUCLEAN; 1798 + } 1799 + if (unlikely(!IS_ALIGNED(btrfs_dev_extent_chunk_offset(leaf, de), 1800 + sectorsize))) { 1801 + generic_err(leaf, slot, 1802 + "invalid dev extent chunk offset, has %llu not aligned to %u", 1803 + btrfs_dev_extent_chunk_objectid(leaf, de), 1804 + sectorsize); 1805 + return -EUCLEAN; 1806 + } 1807 + if (unlikely(!IS_ALIGNED(btrfs_dev_extent_length(leaf, de), 1808 + sectorsize))) { 1809 + generic_err(leaf, slot, 1810 + "invalid dev extent length, has %llu not aligned to %u", 1811 + btrfs_dev_extent_length(leaf, de), sectorsize); 1812 + return -EUCLEAN; 1813 + } 1814 + /* Overlap check with previous dev extent. */ 1815 + if (slot && prev_key->objectid == key->objectid && 1816 + prev_key->type == key->type) { 1817 + struct btrfs_dev_extent *prev_de; 1818 + u64 prev_len; 1819 + 1820 + prev_de = btrfs_item_ptr(leaf, slot - 1, struct btrfs_dev_extent); 1821 + prev_len = btrfs_dev_extent_length(leaf, prev_de); 1822 + if (unlikely(prev_key->offset + prev_len > key->offset)) { 1823 + generic_err(leaf, slot, 1824 + "dev extent overlap, prev offset %llu len %llu current offset %llu", 1825 + prev_key->objectid, prev_len, key->offset); 1826 + return -EUCLEAN; 1827 + } 1828 + } 1829 + return 0; 1830 + } 1831 + 1767 1832 /* 1768 1833 * Common point to switch the item-specific validation. 1769 1834 */ ··· 1865 1798 break; 1866 1799 case BTRFS_DEV_ITEM_KEY: 1867 1800 ret = check_dev_item(leaf, key, slot); 1801 + break; 1802 + case BTRFS_DEV_EXTENT_KEY: 1803 + ret = check_dev_extent_item(leaf, key, slot, prev_key); 1868 1804 break; 1869 1805 case BTRFS_INODE_ITEM_KEY: 1870 1806 ret = check_inode_item(leaf, key, slot);
+25 -3
fs/ceph/addr.c
··· 246 246 if (err >= 0) { 247 247 if (sparse && err > 0) 248 248 err = ceph_sparse_ext_map_end(op); 249 - if (err < subreq->len) 249 + if (err < subreq->len && 250 + subreq->rreq->origin != NETFS_DIO_READ) 250 251 __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); 251 252 if (IS_ENCRYPTED(inode) && err > 0) { 252 253 err = ceph_fscrypt_decrypt_extents(inode, ··· 283 282 size_t len; 284 283 int mode; 285 284 286 - __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); 285 + if (rreq->origin != NETFS_DIO_READ) 286 + __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); 287 287 __clear_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags); 288 288 289 289 if (subreq->start >= inode->i_size) ··· 426 424 struct ceph_netfs_request_data *priv; 427 425 int ret = 0; 428 426 427 + /* [DEPRECATED] Use PG_private_2 to mark folio being written to the cache. */ 428 + __set_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags); 429 + 429 430 if (rreq->origin != NETFS_READAHEAD) 430 431 return 0; 431 432 ··· 503 498 }; 504 499 505 500 #ifdef CONFIG_CEPH_FSCACHE 501 + static void ceph_set_page_fscache(struct page *page) 502 + { 503 + folio_start_private_2(page_folio(page)); /* [DEPRECATED] */ 504 + } 505 + 506 506 static void ceph_fscache_write_terminated(void *priv, ssize_t error, bool was_async) 507 507 { 508 508 struct inode *inode = priv; ··· 525 515 ceph_fscache_write_terminated, inode, true, caching); 526 516 } 527 517 #else 518 + static inline void ceph_set_page_fscache(struct page *page) 519 + { 520 + } 521 + 528 522 static inline void ceph_fscache_write_to_cache(struct inode *inode, u64 off, u64 len, bool caching) 529 523 { 530 524 } ··· 720 706 len = wlen; 721 707 722 708 set_page_writeback(page); 709 + if (caching) 710 + ceph_set_page_fscache(page); 723 711 ceph_fscache_write_to_cache(inode, page_off, len, caching); 724 712 725 713 if (IS_ENCRYPTED(inode)) { ··· 804 788 redirty_page_for_writepage(wbc, page); 805 789 return AOP_WRITEPAGE_ACTIVATE; 806 790 } 791 + 792 + folio_wait_private_2(page_folio(page)); /* [DEPRECATED] */ 807 793 808 794 err = writepage_nounlock(page, wbc); 809 795 if (err == -ERESTARTSYS) { ··· 1080 1062 unlock_page(page); 1081 1063 break; 1082 1064 } 1083 - if (PageWriteback(page)) { 1065 + if (PageWriteback(page) || 1066 + PagePrivate2(page) /* [DEPRECATED] */) { 1084 1067 if (wbc->sync_mode == WB_SYNC_NONE) { 1085 1068 doutc(cl, "%p under writeback\n", page); 1086 1069 unlock_page(page); ··· 1089 1070 } 1090 1071 doutc(cl, "waiting on writeback %p\n", page); 1091 1072 wait_on_page_writeback(page); 1073 + folio_wait_private_2(page_folio(page)); /* [DEPRECATED] */ 1092 1074 } 1093 1075 1094 1076 if (!clear_page_dirty_for_io(page)) { ··· 1274 1254 } 1275 1255 1276 1256 set_page_writeback(page); 1257 + if (caching) 1258 + ceph_set_page_fscache(page); 1277 1259 len += thp_size(page); 1278 1260 } 1279 1261 ceph_fscache_write_to_cache(inode, offset, len, caching);
-2
fs/ceph/inode.c
··· 577 577 578 578 /* Set parameters for the netfs library */ 579 579 netfs_inode_init(&ci->netfs, &ceph_netfs_ops, false); 580 - /* [DEPRECATED] Use PG_private_2 to mark folio being written to the cache. */ 581 - __set_bit(NETFS_ICTX_USE_PGPRIV2, &ci->netfs.flags); 582 580 583 581 spin_lock_init(&ci->i_ceph_lock); 584 582
+7 -1
fs/exec.c
··· 1692 1692 unsigned int mode; 1693 1693 vfsuid_t vfsuid; 1694 1694 vfsgid_t vfsgid; 1695 + int err; 1695 1696 1696 1697 if (!mnt_may_suid(file->f_path.mnt)) 1697 1698 return; ··· 1709 1708 /* Be careful if suid/sgid is set */ 1710 1709 inode_lock(inode); 1711 1710 1712 - /* reload atomically mode/uid/gid now that lock held */ 1711 + /* Atomically reload and check mode/uid/gid now that lock held. */ 1713 1712 mode = inode->i_mode; 1714 1713 vfsuid = i_uid_into_vfsuid(idmap, inode); 1715 1714 vfsgid = i_gid_into_vfsgid(idmap, inode); 1715 + err = inode_permission(idmap, inode, MAY_EXEC); 1716 1716 inode_unlock(inode); 1717 + 1718 + /* Did the exec bit vanish out from under us? Give up. */ 1719 + if (err) 1720 + return; 1717 1721 1718 1722 /* We ignore suid/sgid if there are no mappings for them in the ns */ 1719 1723 if (!vfsuid_has_mapping(bprm->cred->user_ns, vfsuid) ||
+12 -16
fs/file.c
··· 46 46 #define BITBIT_NR(nr) BITS_TO_LONGS(BITS_TO_LONGS(nr)) 47 47 #define BITBIT_SIZE(nr) (BITBIT_NR(nr) * sizeof(long)) 48 48 49 + #define fdt_words(fdt) ((fdt)->max_fds / BITS_PER_LONG) // words in ->open_fds 49 50 /* 50 51 * Copy 'count' fd bits from the old table to the new table and clear the extra 51 52 * space if any. This does not copy the file pointers. Called with the files 52 53 * spinlock held for write. 53 54 */ 54 - static void copy_fd_bitmaps(struct fdtable *nfdt, struct fdtable *ofdt, 55 - unsigned int count) 55 + static inline void copy_fd_bitmaps(struct fdtable *nfdt, struct fdtable *ofdt, 56 + unsigned int copy_words) 56 57 { 57 - unsigned int cpy, set; 58 + unsigned int nwords = fdt_words(nfdt); 58 59 59 - cpy = count / BITS_PER_BYTE; 60 - set = (nfdt->max_fds - count) / BITS_PER_BYTE; 61 - memcpy(nfdt->open_fds, ofdt->open_fds, cpy); 62 - memset((char *)nfdt->open_fds + cpy, 0, set); 63 - memcpy(nfdt->close_on_exec, ofdt->close_on_exec, cpy); 64 - memset((char *)nfdt->close_on_exec + cpy, 0, set); 65 - 66 - cpy = BITBIT_SIZE(count); 67 - set = BITBIT_SIZE(nfdt->max_fds) - cpy; 68 - memcpy(nfdt->full_fds_bits, ofdt->full_fds_bits, cpy); 69 - memset((char *)nfdt->full_fds_bits + cpy, 0, set); 60 + bitmap_copy_and_extend(nfdt->open_fds, ofdt->open_fds, 61 + copy_words * BITS_PER_LONG, nwords * BITS_PER_LONG); 62 + bitmap_copy_and_extend(nfdt->close_on_exec, ofdt->close_on_exec, 63 + copy_words * BITS_PER_LONG, nwords * BITS_PER_LONG); 64 + bitmap_copy_and_extend(nfdt->full_fds_bits, ofdt->full_fds_bits, 65 + copy_words, nwords); 70 66 } 71 67 72 68 /* ··· 80 84 memcpy(nfdt->fd, ofdt->fd, cpy); 81 85 memset((char *)nfdt->fd + cpy, 0, set); 82 86 83 - copy_fd_bitmaps(nfdt, ofdt, ofdt->max_fds); 87 + copy_fd_bitmaps(nfdt, ofdt, fdt_words(ofdt)); 84 88 } 85 89 86 90 /* ··· 375 379 open_files = sane_fdtable_size(old_fdt, max_fds); 376 380 } 377 381 378 - copy_fd_bitmaps(new_fdt, old_fdt, open_files); 382 + copy_fd_bitmaps(new_fdt, old_fdt, open_files / BITS_PER_LONG); 379 383 380 384 old_fds = old_fdt->fd; 381 385 new_fds = new_fdt->fd;
+4 -2
fs/fuse/dev.c
··· 1618 1618 1619 1619 this_num = min_t(unsigned, num, PAGE_SIZE - offset); 1620 1620 err = fuse_copy_page(cs, &page, offset, this_num, 0); 1621 - if (!err && offset == 0 && 1622 - (this_num == PAGE_SIZE || file_size == end)) 1621 + if (!PageUptodate(page) && !err && offset == 0 && 1622 + (this_num == PAGE_SIZE || file_size == end)) { 1623 + zero_user_segment(page, this_num, PAGE_SIZE); 1623 1624 SetPageUptodate(page); 1625 + } 1624 1626 unlock_page(page); 1625 1627 put_page(page); 1626 1628
+37 -2
fs/inode.c
··· 488 488 this_cpu_dec(nr_unused); 489 489 } 490 490 491 + static void inode_pin_lru_isolating(struct inode *inode) 492 + { 493 + lockdep_assert_held(&inode->i_lock); 494 + WARN_ON(inode->i_state & (I_LRU_ISOLATING | I_FREEING | I_WILL_FREE)); 495 + inode->i_state |= I_LRU_ISOLATING; 496 + } 497 + 498 + static void inode_unpin_lru_isolating(struct inode *inode) 499 + { 500 + spin_lock(&inode->i_lock); 501 + WARN_ON(!(inode->i_state & I_LRU_ISOLATING)); 502 + inode->i_state &= ~I_LRU_ISOLATING; 503 + smp_mb(); 504 + wake_up_bit(&inode->i_state, __I_LRU_ISOLATING); 505 + spin_unlock(&inode->i_lock); 506 + } 507 + 508 + static void inode_wait_for_lru_isolating(struct inode *inode) 509 + { 510 + spin_lock(&inode->i_lock); 511 + if (inode->i_state & I_LRU_ISOLATING) { 512 + DEFINE_WAIT_BIT(wq, &inode->i_state, __I_LRU_ISOLATING); 513 + wait_queue_head_t *wqh; 514 + 515 + wqh = bit_waitqueue(&inode->i_state, __I_LRU_ISOLATING); 516 + spin_unlock(&inode->i_lock); 517 + __wait_on_bit(wqh, &wq, bit_wait, TASK_UNINTERRUPTIBLE); 518 + spin_lock(&inode->i_lock); 519 + WARN_ON(inode->i_state & I_LRU_ISOLATING); 520 + } 521 + spin_unlock(&inode->i_lock); 522 + } 523 + 491 524 /** 492 525 * inode_sb_list_add - add inode to the superblock list of inodes 493 526 * @inode: inode to add ··· 689 656 inode_io_list_del(inode); 690 657 691 658 inode_sb_list_del(inode); 659 + 660 + inode_wait_for_lru_isolating(inode); 692 661 693 662 /* 694 663 * Wait for flusher thread to be done with the inode so that filesystem ··· 890 855 * be under pressure before the cache inside the highmem zone. 891 856 */ 892 857 if (inode_has_buffers(inode) || !mapping_empty(&inode->i_data)) { 893 - __iget(inode); 858 + inode_pin_lru_isolating(inode); 894 859 spin_unlock(&inode->i_lock); 895 860 spin_unlock(lru_lock); 896 861 if (remove_inode_buffers(inode)) { ··· 902 867 __count_vm_events(PGINODESTEAL, reap); 903 868 mm_account_reclaimed_pages(reap); 904 869 } 905 - iput(inode); 870 + inode_unpin_lru_isolating(inode); 906 871 spin_lock(lru_lock); 907 872 return LRU_RETRY; 908 873 }
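The fs/inode.c change above pins an inode for LRU isolation with the new
I_LRU_ISOLATING state bit instead of grabbing an i_count reference, and
eviction now waits for that bit to clear through the kernel's bit-wait
machinery. A rough userspace analogue of the same "set a flag, wait for it
to clear" idea, sketched with pthreads rather than the kernel wait-bit API:

    /* Illustration only: mirrors the pin/wait/unpin flow, not the kernel code. */
    #include <pthread.h>
    #include <stdbool.h>

    struct pinned_obj {
            pthread_mutex_t lock;
            pthread_cond_t  cond;
            bool            isolating;        /* analogue of I_LRU_ISOLATING */
    };

    static struct pinned_obj obj = {
            .lock = PTHREAD_MUTEX_INITIALIZER,
            .cond = PTHREAD_COND_INITIALIZER,
    };

    static void pin_for_isolation(struct pinned_obj *o)
    {
            pthread_mutex_lock(&o->lock);
            o->isolating = true;              /* like inode_pin_lru_isolating() */
            pthread_mutex_unlock(&o->lock);
    }

    static void unpin_after_isolation(struct pinned_obj *o)
    {
            pthread_mutex_lock(&o->lock);
            o->isolating = false;             /* like inode_unpin_lru_isolating() */
            pthread_cond_broadcast(&o->cond); /* analogue of wake_up_bit() */
            pthread_mutex_unlock(&o->lock);
    }

    static void wait_for_isolation(struct pinned_obj *o)
    {
            pthread_mutex_lock(&o->lock);
            while (o->isolating)              /* analogue of __wait_on_bit() */
                    pthread_cond_wait(&o->cond, &o->lock);
            pthread_mutex_unlock(&o->lock);
    }

An evictor calling wait_for_isolation(&obj) cannot proceed while a shrinker
holds the pin, which is the guarantee inode_wait_for_lru_isolating() gives
without touching i_count.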
+24 -11
fs/libfs.c
··· 450 450 mtree_destroy(&octx->mt); 451 451 } 452 452 453 + static int offset_dir_open(struct inode *inode, struct file *file) 454 + { 455 + struct offset_ctx *ctx = inode->i_op->get_offset_ctx(inode); 456 + 457 + file->private_data = (void *)ctx->next_offset; 458 + return 0; 459 + } 460 + 453 461 /** 454 462 * offset_dir_llseek - Advance the read position of a directory descriptor 455 463 * @file: an open directory whose position is to be updated ··· 471 463 */ 472 464 static loff_t offset_dir_llseek(struct file *file, loff_t offset, int whence) 473 465 { 466 + struct inode *inode = file->f_inode; 467 + struct offset_ctx *ctx = inode->i_op->get_offset_ctx(inode); 468 + 474 469 switch (whence) { 475 470 case SEEK_CUR: 476 471 offset += file->f_pos; ··· 487 476 } 488 477 489 478 /* In this case, ->private_data is protected by f_pos_lock */ 490 - file->private_data = NULL; 479 + if (!offset) 480 + file->private_data = (void *)ctx->next_offset; 491 481 return vfs_setpos(file, offset, LONG_MAX); 492 482 } 493 483 ··· 519 507 inode->i_ino, fs_umode_to_dtype(inode->i_mode)); 520 508 } 521 509 522 - static void *offset_iterate_dir(struct inode *inode, struct dir_context *ctx) 510 + static void offset_iterate_dir(struct inode *inode, struct dir_context *ctx, long last_index) 523 511 { 524 512 struct offset_ctx *octx = inode->i_op->get_offset_ctx(inode); 525 513 struct dentry *dentry; ··· 527 515 while (true) { 528 516 dentry = offset_find_next(octx, ctx->pos); 529 517 if (!dentry) 530 - return ERR_PTR(-ENOENT); 518 + return; 519 + 520 + if (dentry2offset(dentry) >= last_index) { 521 + dput(dentry); 522 + return; 523 + } 531 524 532 525 if (!offset_dir_emit(ctx, dentry)) { 533 526 dput(dentry); 534 - break; 527 + return; 535 528 } 536 529 537 530 ctx->pos = dentry2offset(dentry) + 1; 538 531 dput(dentry); 539 532 } 540 - return NULL; 541 533 } 542 534 543 535 /** ··· 568 552 static int offset_readdir(struct file *file, struct dir_context *ctx) 569 553 { 570 554 struct dentry *dir = file->f_path.dentry; 555 + long last_index = (long)file->private_data; 571 556 572 557 lockdep_assert_held(&d_inode(dir)->i_rwsem); 573 558 574 559 if (!dir_emit_dots(file, ctx)) 575 560 return 0; 576 561 577 - /* In this case, ->private_data is protected by f_pos_lock */ 578 - if (ctx->pos == DIR_OFFSET_MIN) 579 - file->private_data = NULL; 580 - else if (file->private_data == ERR_PTR(-ENOENT)) 581 - return 0; 582 - file->private_data = offset_iterate_dir(d_inode(dir), ctx); 562 + offset_iterate_dir(d_inode(dir), ctx, last_index); 583 563 return 0; 584 564 } 585 565 586 566 const struct file_operations simple_offset_dir_operations = { 567 + .open = offset_dir_open, 587 568 .llseek = offset_dir_llseek, 588 569 .iterate_shared = offset_readdir, 589 570 .read = generic_read_dir,
+1 -1
fs/locks.c
··· 2984 2984 filelock_cache = kmem_cache_create("file_lock_cache", 2985 2985 sizeof(struct file_lock), 0, SLAB_PANIC, NULL); 2986 2986 2987 - filelease_cache = kmem_cache_create("file_lock_cache", 2987 + filelease_cache = kmem_cache_create("file_lease_cache", 2988 2988 sizeof(struct file_lease), 0, SLAB_PANIC, NULL); 2989 2989 2990 2990 for_each_possible_cpu(i) {
+1 -1
fs/netfs/Kconfig
··· 24 24 25 25 config NETFS_DEBUG 26 26 bool "Enable dynamic debugging netfslib and FS-Cache" 27 - depends on NETFS 27 + depends on NETFS_SUPPORT 28 28 help 29 29 This permits debugging to be dynamically enabled in the local caching 30 30 management module. If this is set, the debugging output may be
+109 -14
fs/netfs/buffered_read.c
··· 10 10 #include "internal.h" 11 11 12 12 /* 13 + * [DEPRECATED] Unlock the folios in a read operation for when the filesystem 14 + * is using PG_private_2 and direct writing to the cache from here rather than 15 + * marking the page for writeback. 16 + * 17 + * Note that we don't touch folio->private in this code. 18 + */ 19 + static void netfs_rreq_unlock_folios_pgpriv2(struct netfs_io_request *rreq, 20 + size_t *account) 21 + { 22 + struct netfs_io_subrequest *subreq; 23 + struct folio *folio; 24 + pgoff_t start_page = rreq->start / PAGE_SIZE; 25 + pgoff_t last_page = ((rreq->start + rreq->len) / PAGE_SIZE) - 1; 26 + bool subreq_failed = false; 27 + 28 + XA_STATE(xas, &rreq->mapping->i_pages, start_page); 29 + 30 + /* Walk through the pagecache and the I/O request lists simultaneously. 31 + * We may have a mixture of cached and uncached sections and we only 32 + * really want to write out the uncached sections. This is slightly 33 + * complicated by the possibility that we might have huge pages with a 34 + * mixture inside. 35 + */ 36 + subreq = list_first_entry(&rreq->subrequests, 37 + struct netfs_io_subrequest, rreq_link); 38 + subreq_failed = (subreq->error < 0); 39 + 40 + trace_netfs_rreq(rreq, netfs_rreq_trace_unlock_pgpriv2); 41 + 42 + rcu_read_lock(); 43 + xas_for_each(&xas, folio, last_page) { 44 + loff_t pg_end; 45 + bool pg_failed = false; 46 + bool folio_started = false; 47 + 48 + if (xas_retry(&xas, folio)) 49 + continue; 50 + 51 + pg_end = folio_pos(folio) + folio_size(folio) - 1; 52 + 53 + for (;;) { 54 + loff_t sreq_end; 55 + 56 + if (!subreq) { 57 + pg_failed = true; 58 + break; 59 + } 60 + 61 + if (!folio_started && 62 + test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags) && 63 + fscache_operation_valid(&rreq->cache_resources)) { 64 + trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache); 65 + folio_start_private_2(folio); 66 + folio_started = true; 67 + } 68 + 69 + pg_failed |= subreq_failed; 70 + sreq_end = subreq->start + subreq->len - 1; 71 + if (pg_end < sreq_end) 72 + break; 73 + 74 + *account += subreq->transferred; 75 + if (!list_is_last(&subreq->rreq_link, &rreq->subrequests)) { 76 + subreq = list_next_entry(subreq, rreq_link); 77 + subreq_failed = (subreq->error < 0); 78 + } else { 79 + subreq = NULL; 80 + subreq_failed = false; 81 + } 82 + 83 + if (pg_end == sreq_end) 84 + break; 85 + } 86 + 87 + if (!pg_failed) { 88 + flush_dcache_folio(folio); 89 + folio_mark_uptodate(folio); 90 + } 91 + 92 + if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) { 93 + if (folio->index == rreq->no_unlock_folio && 94 + test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags)) 95 + _debug("no unlock"); 96 + else 97 + folio_unlock(folio); 98 + } 99 + } 100 + rcu_read_unlock(); 101 + } 102 + 103 + /* 13 104 * Unlock the folios in a read operation. We need to set PG_writeback on any 14 105 * folios we're going to write back before we unlock them. 15 106 * ··· 126 35 } 127 36 } 128 37 38 + /* Handle deprecated PG_private_2 case. */ 39 + if (test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags)) { 40 + netfs_rreq_unlock_folios_pgpriv2(rreq, &account); 41 + goto out; 42 + } 43 + 129 44 /* Walk through the pagecache and the I/O request lists simultaneously. 130 45 * We may have a mixture of cached and uncached sections and we only 131 46 * really want to write out the uncached sections. 
This is slightly ··· 149 52 loff_t pg_end; 150 53 bool pg_failed = false; 151 54 bool wback_to_cache = false; 152 - bool folio_started = false; 153 55 154 56 if (xas_retry(&xas, folio)) 155 57 continue; ··· 162 66 pg_failed = true; 163 67 break; 164 68 } 165 - if (test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags)) { 166 - if (!folio_started && test_bit(NETFS_SREQ_COPY_TO_CACHE, 167 - &subreq->flags)) { 168 - trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache); 169 - folio_start_private_2(folio); 170 - folio_started = true; 171 - } 172 - } else { 173 - wback_to_cache |= 174 - test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags); 175 - } 69 + 70 + wback_to_cache |= test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags); 176 71 pg_failed |= subreq_failed; 177 72 sreq_end = subreq->start + subreq->len - 1; 178 73 if (pg_end < sreq_end) ··· 211 124 } 212 125 rcu_read_unlock(); 213 126 127 + out: 214 128 task_io_account_read(account); 215 129 if (rreq->netfs_ops->done) 216 130 rreq->netfs_ops->done(rreq); ··· 483 395 } 484 396 485 397 /** 486 - * netfs_write_begin - Helper to prepare for writing 398 + * netfs_write_begin - Helper to prepare for writing [DEPRECATED] 487 399 * @ctx: The netfs context 488 400 * @file: The file to read from 489 401 * @mapping: The mapping to read from ··· 514 426 * inode before calling this. 515 427 * 516 428 * This is usable whether or not caching is enabled. 429 + * 430 + * Note that this should be considered deprecated and netfs_perform_write() 431 + * used instead. 517 432 */ 518 433 int netfs_write_begin(struct netfs_inode *ctx, 519 434 struct file *file, struct address_space *mapping, ··· 557 466 if (!netfs_is_cache_enabled(ctx) && 558 467 netfs_skip_folio_read(folio, pos, len, false)) { 559 468 netfs_stat(&netfs_n_rh_write_zskip); 560 - goto have_folio; 469 + goto have_folio_no_wait; 561 470 } 562 471 563 472 rreq = netfs_alloc_request(mapping, file, ··· 598 507 netfs_put_request(rreq, false, netfs_rreq_trace_put_return); 599 508 600 509 have_folio: 510 + ret = folio_wait_private_2_killable(folio); 511 + if (ret < 0) 512 + goto error; 513 + have_folio_no_wait: 601 514 *_folio = folio; 602 515 _leave(" = 0"); 603 516 return 0;
+1 -1
fs/netfs/buffered_write.c
··· 184 184 unsigned int bdp_flags = (iocb->ki_flags & IOCB_NOWAIT) ? BDP_ASYNC : 0; 185 185 ssize_t written = 0, ret, ret2; 186 186 loff_t i_size, pos = iocb->ki_pos, from, to; 187 - size_t max_chunk = PAGE_SIZE << MAX_PAGECACHE_ORDER; 187 + size_t max_chunk = mapping_max_folio_size(mapping); 188 188 bool maybe_trouble = false; 189 189 190 190 if (unlikely(test_bit(NETFS_ICTX_WRITETHROUGH, &ctx->flags) ||
+4
fs/netfs/fscache_cookie.c
··· 741 741 spin_lock(&cookie->lock); 742 742 } 743 743 if (test_bit(FSCACHE_COOKIE_DO_LRU_DISCARD, &cookie->flags)) { 744 + if (atomic_read(&cookie->n_accesses) != 0) 745 + /* still being accessed: postpone it */ 746 + break; 747 + 744 748 __fscache_set_cookie_state(cookie, 745 749 FSCACHE_COOKIE_STATE_LRU_DISCARDING); 746 750 wake = true;
+155 -6
fs/netfs/io.c
··· 99 99 } 100 100 101 101 /* 102 + * [DEPRECATED] Deal with the completion of writing the data to the cache. We 103 + * have to clear the PG_fscache bits on the folios involved and release the 104 + * caller's ref. 105 + * 106 + * May be called in softirq mode and we inherit a ref from the caller. 107 + */ 108 + static void netfs_rreq_unmark_after_write(struct netfs_io_request *rreq, 109 + bool was_async) 110 + { 111 + struct netfs_io_subrequest *subreq; 112 + struct folio *folio; 113 + pgoff_t unlocked = 0; 114 + bool have_unlocked = false; 115 + 116 + rcu_read_lock(); 117 + 118 + list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { 119 + XA_STATE(xas, &rreq->mapping->i_pages, subreq->start / PAGE_SIZE); 120 + 121 + xas_for_each(&xas, folio, (subreq->start + subreq->len - 1) / PAGE_SIZE) { 122 + if (xas_retry(&xas, folio)) 123 + continue; 124 + 125 + /* We might have multiple writes from the same huge 126 + * folio, but we mustn't unlock a folio more than once. 127 + */ 128 + if (have_unlocked && folio->index <= unlocked) 129 + continue; 130 + unlocked = folio_next_index(folio) - 1; 131 + trace_netfs_folio(folio, netfs_folio_trace_end_copy); 132 + folio_end_private_2(folio); 133 + have_unlocked = true; 134 + } 135 + } 136 + 137 + rcu_read_unlock(); 138 + netfs_rreq_completed(rreq, was_async); 139 + } 140 + 141 + static void netfs_rreq_copy_terminated(void *priv, ssize_t transferred_or_error, 142 + bool was_async) /* [DEPRECATED] */ 143 + { 144 + struct netfs_io_subrequest *subreq = priv; 145 + struct netfs_io_request *rreq = subreq->rreq; 146 + 147 + if (IS_ERR_VALUE(transferred_or_error)) { 148 + netfs_stat(&netfs_n_rh_write_failed); 149 + trace_netfs_failure(rreq, subreq, transferred_or_error, 150 + netfs_fail_copy_to_cache); 151 + } else { 152 + netfs_stat(&netfs_n_rh_write_done); 153 + } 154 + 155 + trace_netfs_sreq(subreq, netfs_sreq_trace_write_term); 156 + 157 + /* If we decrement nr_copy_ops to 0, the ref belongs to us. */ 158 + if (atomic_dec_and_test(&rreq->nr_copy_ops)) 159 + netfs_rreq_unmark_after_write(rreq, was_async); 160 + 161 + netfs_put_subrequest(subreq, was_async, netfs_sreq_trace_put_terminated); 162 + } 163 + 164 + /* 165 + * [DEPRECATED] Perform any outstanding writes to the cache. We inherit a ref 166 + * from the caller. 167 + */ 168 + static void netfs_rreq_do_write_to_cache(struct netfs_io_request *rreq) 169 + { 170 + struct netfs_cache_resources *cres = &rreq->cache_resources; 171 + struct netfs_io_subrequest *subreq, *next, *p; 172 + struct iov_iter iter; 173 + int ret; 174 + 175 + trace_netfs_rreq(rreq, netfs_rreq_trace_copy); 176 + 177 + /* We don't want terminating writes trying to wake us up whilst we're 178 + * still going through the list. 
179 + */ 180 + atomic_inc(&rreq->nr_copy_ops); 181 + 182 + list_for_each_entry_safe(subreq, p, &rreq->subrequests, rreq_link) { 183 + if (!test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) { 184 + list_del_init(&subreq->rreq_link); 185 + netfs_put_subrequest(subreq, false, 186 + netfs_sreq_trace_put_no_copy); 187 + } 188 + } 189 + 190 + list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { 191 + /* Amalgamate adjacent writes */ 192 + while (!list_is_last(&subreq->rreq_link, &rreq->subrequests)) { 193 + next = list_next_entry(subreq, rreq_link); 194 + if (next->start != subreq->start + subreq->len) 195 + break; 196 + subreq->len += next->len; 197 + list_del_init(&next->rreq_link); 198 + netfs_put_subrequest(next, false, 199 + netfs_sreq_trace_put_merged); 200 + } 201 + 202 + ret = cres->ops->prepare_write(cres, &subreq->start, &subreq->len, 203 + subreq->len, rreq->i_size, true); 204 + if (ret < 0) { 205 + trace_netfs_failure(rreq, subreq, ret, netfs_fail_prepare_write); 206 + trace_netfs_sreq(subreq, netfs_sreq_trace_write_skip); 207 + continue; 208 + } 209 + 210 + iov_iter_xarray(&iter, ITER_SOURCE, &rreq->mapping->i_pages, 211 + subreq->start, subreq->len); 212 + 213 + atomic_inc(&rreq->nr_copy_ops); 214 + netfs_stat(&netfs_n_rh_write); 215 + netfs_get_subrequest(subreq, netfs_sreq_trace_get_copy_to_cache); 216 + trace_netfs_sreq(subreq, netfs_sreq_trace_write); 217 + cres->ops->write(cres, subreq->start, &iter, 218 + netfs_rreq_copy_terminated, subreq); 219 + } 220 + 221 + /* If we decrement nr_copy_ops to 0, the usage ref belongs to us. */ 222 + if (atomic_dec_and_test(&rreq->nr_copy_ops)) 223 + netfs_rreq_unmark_after_write(rreq, false); 224 + } 225 + 226 + static void netfs_rreq_write_to_cache_work(struct work_struct *work) /* [DEPRECATED] */ 227 + { 228 + struct netfs_io_request *rreq = 229 + container_of(work, struct netfs_io_request, work); 230 + 231 + netfs_rreq_do_write_to_cache(rreq); 232 + } 233 + 234 + static void netfs_rreq_write_to_cache(struct netfs_io_request *rreq) /* [DEPRECATED] */ 235 + { 236 + rreq->work.func = netfs_rreq_write_to_cache_work; 237 + if (!queue_work(system_unbound_wq, &rreq->work)) 238 + BUG(); 239 + } 240 + 241 + /* 102 242 * Handle a short read. 103 243 */ 104 244 static void netfs_rreq_short_read(struct netfs_io_request *rreq, ··· 415 275 clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags); 416 276 wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS); 417 277 278 + if (test_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags) && 279 + test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags)) 280 + return netfs_rreq_write_to_cache(rreq); 281 + 418 282 netfs_rreq_completed(rreq, was_async); 419 283 } 420 284 ··· 530 386 531 387 if (transferred_or_error == 0) { 532 388 if (__test_and_set_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags)) { 533 - subreq->error = -ENODATA; 389 + if (rreq->origin != NETFS_DIO_READ) 390 + subreq->error = -ENODATA; 534 391 goto failed; 535 392 } 536 393 } else { ··· 602 457 } 603 458 if (subreq->len > ictx->zero_point - subreq->start) 604 459 subreq->len = ictx->zero_point - subreq->start; 460 + 461 + /* We limit buffered reads to the EOF, but let the 462 + * server deal with larger-than-EOF DIO/unbuffered 463 + * reads. 
464 + */ 465 + if (subreq->len > rreq->i_size - subreq->start) 466 + subreq->len = rreq->i_size - subreq->start; 605 467 } 606 - if (subreq->len > rreq->i_size - subreq->start) 607 - subreq->len = rreq->i_size - subreq->start; 608 468 if (rreq->rsize && subreq->len > rreq->rsize) 609 469 subreq->len = rreq->rsize; 610 470 ··· 745 595 do { 746 596 _debug("submit %llx + %llx >= %llx", 747 597 rreq->start, rreq->submitted, rreq->i_size); 748 - if (rreq->origin == NETFS_DIO_READ && 749 - rreq->start + rreq->submitted >= rreq->i_size) 750 - break; 751 598 if (!netfs_rreq_submit_slice(rreq, &io_iter)) 599 + break; 600 + if (test_bit(NETFS_SREQ_NO_PROGRESS, &rreq->flags)) 752 601 break; 753 602 if (test_bit(NETFS_RREQ_BLOCKED, &rreq->flags) && 754 603 test_bit(NETFS_RREQ_NONBLOCK, &rreq->flags))
-10
fs/netfs/objects.c
··· 24 24 struct netfs_io_request *rreq; 25 25 mempool_t *mempool = ctx->ops->request_pool ?: &netfs_request_pool; 26 26 struct kmem_cache *cache = mempool->pool_data; 27 - bool is_unbuffered = (origin == NETFS_UNBUFFERED_WRITE || 28 - origin == NETFS_DIO_READ || 29 - origin == NETFS_DIO_WRITE); 30 - bool cached = !is_unbuffered && netfs_is_cache_enabled(ctx); 31 27 int ret; 32 28 33 29 for (;;) { ··· 52 56 refcount_set(&rreq->ref, 1); 53 57 54 58 __set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags); 55 - if (cached) { 56 - __set_bit(NETFS_RREQ_WRITE_TO_CACHE, &rreq->flags); 57 - if (test_bit(NETFS_ICTX_USE_PGPRIV2, &ctx->flags)) 58 - /* Filesystem uses deprecated PG_private_2 marking. */ 59 - __set_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags); 60 - } 61 59 if (file && file->f_flags & O_NONBLOCK) 62 60 __set_bit(NETFS_RREQ_NONBLOCK, &rreq->flags); 63 61 if (rreq->netfs_ops->init_request) {
+3 -1
fs/netfs/write_issue.c
··· 94 94 { 95 95 struct netfs_io_request *wreq; 96 96 struct netfs_inode *ictx; 97 + bool is_buffered = (origin == NETFS_WRITEBACK || 98 + origin == NETFS_WRITETHROUGH); 97 99 98 100 wreq = netfs_alloc_request(mapping, file, start, 0, origin); 99 101 if (IS_ERR(wreq)) ··· 104 102 _enter("R=%x", wreq->debug_id); 105 103 106 104 ictx = netfs_inode(wreq->inode); 107 - if (test_bit(NETFS_RREQ_WRITE_TO_CACHE, &wreq->flags)) 105 + if (is_buffered && netfs_is_cache_enabled(ictx)) 108 106 fscache_begin_write_operation(&wreq->cache_resources, netfs_i_cookie(ictx)); 109 107 110 108 wreq->contiguity = wreq->start;
+4 -1
fs/nfs/fscache.c
··· 265 265 { 266 266 rreq->netfs_priv = get_nfs_open_context(nfs_file_open_context(file)); 267 267 rreq->debug_id = atomic_inc_return(&nfs_netfs_debug_id); 268 + /* [DEPRECATED] Use PG_private_2 to mark folio being written to the cache. */ 269 + __set_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags); 268 270 269 271 return 0; 270 272 } ··· 363 361 return; 364 362 365 363 sreq = netfs->sreq; 366 - if (test_bit(NFS_IOHDR_EOF, &hdr->flags)) 364 + if (test_bit(NFS_IOHDR_EOF, &hdr->flags) && 365 + sreq->rreq->origin != NETFS_DIO_READ) 367 366 __set_bit(NETFS_SREQ_CLEAR_TAIL, &sreq->flags); 368 367 369 368 if (hdr->error)
-2
fs/nfs/fscache.h
··· 81 81 static inline void nfs_netfs_inode_init(struct nfs_inode *nfsi) 82 82 { 83 83 netfs_inode_init(&nfsi->netfs, &nfs_netfs_ops, false); 84 - /* [DEPRECATED] Use PG_private_2 to mark folio being written to the cache. */ 85 - __set_bit(NETFS_ICTX_USE_PGPRIV2, &nfsi->netfs.flags); 86 84 } 87 85 extern void nfs_netfs_initiate_read(struct nfs_pgio_header *hdr); 88 86 extern void nfs_netfs_read_completion(struct nfs_pgio_header *hdr);
+17 -7
fs/smb/client/file.c
··· 217 217 goto out; 218 218 } 219 219 220 - __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); 220 + if (subreq->rreq->origin != NETFS_DIO_READ) 221 + __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); 221 222 222 223 rc = rdata->server->ops->async_readv(rdata); 223 224 out: ··· 316 315 #endif 317 316 } 318 317 319 - if (rdata->credits.value != 0) 318 + if (rdata->credits.value != 0) { 320 319 trace_smb3_rw_credits(rdata->rreq->debug_id, 321 320 rdata->subreq.debug_index, 322 321 rdata->credits.value, ··· 324 323 rdata->server ? rdata->server->in_flight : 0, 325 324 -rdata->credits.value, 326 325 cifs_trace_rw_credits_free_subreq); 326 + if (rdata->server) 327 + add_credits_and_wake_if(rdata->server, &rdata->credits, 0); 328 + else 329 + rdata->credits.value = 0; 330 + } 327 331 328 - add_credits_and_wake_if(rdata->server, &rdata->credits, 0); 329 332 if (rdata->have_xid) 330 333 free_xid(rdata->xid); 331 334 } ··· 2754 2749 struct inode *inode = file->f_mapping->host; 2755 2750 struct cifsInodeInfo *cinode = CIFS_I(inode); 2756 2751 struct TCP_Server_Info *server = tlink_tcon(cfile->tlink)->ses->server; 2752 + struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb); 2757 2753 ssize_t rc; 2758 2754 2759 2755 rc = netfs_start_io_write(inode); ··· 2771 2765 if (rc <= 0) 2772 2766 goto out; 2773 2767 2774 - if (!cifs_find_lock_conflict(cfile, iocb->ki_pos, iov_iter_count(from), 2768 + if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) && 2769 + (cifs_find_lock_conflict(cfile, iocb->ki_pos, iov_iter_count(from), 2775 2770 server->vals->exclusive_lock_type, 0, 2776 - NULL, CIFS_WRITE_OP)) 2777 - rc = netfs_buffered_write_iter_locked(iocb, from, NULL); 2778 - else 2771 + NULL, CIFS_WRITE_OP))) { 2779 2772 rc = -EACCES; 2773 + goto out; 2774 + } 2775 + 2776 + rc = netfs_buffered_write_iter_locked(iocb, from, NULL); 2777 + 2780 2778 out: 2781 2779 up_read(&cinode->lock_sem); 2782 2780 netfs_end_io_write(inode);
+2
fs/smb/common/smb2pdu.h
··· 1216 1216 ); 1217 1217 __u8 Buffer[]; 1218 1218 } __packed; 1219 + static_assert(offsetof(struct create_context, Buffer) == sizeof(struct create_context_hdr), 1220 + "struct member likely outside of __struct_group()"); 1219 1221 1220 1222 struct smb2_create_req { 1221 1223 struct smb2_hdr hdr;
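The static_assert() added above pins the layout of struct create_context:
Buffer[] must start exactly where struct create_context_hdr ends, otherwise a
member was probably placed outside the __struct_group() region. A small
stand-alone sketch of the same compile-time layout check, with hypothetical
struct names:

    /* Illustration only: hypothetical hdr/msg pair, not the SMB2 structures. */
    #include <assert.h>   /* static_assert (C11) */
    #include <stddef.h>   /* offsetof */

    struct hdr {
            unsigned int type;
            unsigned int len;
    };

    struct msg {
            struct {                          /* anonymous copy of the header fields */
                    unsigned int type;
                    unsigned int len;
            };
            unsigned char payload[];          /* must start right after the header */
    };

    static_assert(offsetof(struct msg, payload) == sizeof(struct hdr),
                  "payload[] must follow the header with no padding");

If a later edit accidentally inserts a field after the anonymous group but
before payload[], the build fails instead of silently breaking the wire
layout.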
+12 -3
fs/smb/server/mgmt/share_config.c
··· 15 15 #include "share_config.h" 16 16 #include "user_config.h" 17 17 #include "user_session.h" 18 + #include "../connection.h" 18 19 #include "../transport_ipc.h" 19 20 #include "../misc.h" 20 21 ··· 121 120 return 0; 122 121 } 123 122 124 - static struct ksmbd_share_config *share_config_request(struct unicode_map *um, 123 + static struct ksmbd_share_config *share_config_request(struct ksmbd_work *work, 125 124 const char *name) 126 125 { 127 126 struct ksmbd_share_config_response *resp; 128 127 struct ksmbd_share_config *share = NULL; 129 128 struct ksmbd_share_config *lookup; 129 + struct unicode_map *um = work->conn->um; 130 130 int ret; 131 131 132 132 resp = ksmbd_ipc_share_config_request(name); ··· 183 181 KSMBD_SHARE_CONFIG_VETO_LIST(resp), 184 182 resp->veto_list_sz); 185 183 if (!ret && share->path) { 184 + if (__ksmbd_override_fsids(work, share)) { 185 + kill_share(share); 186 + share = NULL; 187 + goto out; 188 + } 189 + 186 190 ret = kern_path(share->path, 0, &share->vfs_path); 191 + ksmbd_revert_fsids(work); 187 192 if (ret) { 188 193 ksmbd_debug(SMB, "failed to access '%s'\n", 189 194 share->path); ··· 223 214 return share; 224 215 } 225 216 226 - struct ksmbd_share_config *ksmbd_share_config_get(struct unicode_map *um, 217 + struct ksmbd_share_config *ksmbd_share_config_get(struct ksmbd_work *work, 227 218 const char *name) 228 219 { 229 220 struct ksmbd_share_config *share; ··· 236 227 237 228 if (share) 238 229 return share; 239 - return share_config_request(um, name); 230 + return share_config_request(work, name); 240 231 } 241 232 242 233 bool ksmbd_share_veto_filename(struct ksmbd_share_config *share,
+3 -1
fs/smb/server/mgmt/share_config.h
··· 11 11 #include <linux/path.h> 12 12 #include <linux/unicode.h> 13 13 14 + struct ksmbd_work; 15 + 14 16 struct ksmbd_share_config { 15 17 char *name; 16 18 char *path; ··· 70 68 __ksmbd_share_config_put(share); 71 69 } 72 70 73 - struct ksmbd_share_config *ksmbd_share_config_get(struct unicode_map *um, 71 + struct ksmbd_share_config *ksmbd_share_config_get(struct ksmbd_work *work, 74 72 const char *name); 75 73 bool ksmbd_share_veto_filename(struct ksmbd_share_config *share, 76 74 const char *filename);
+5 -4
fs/smb/server/mgmt/tree_connect.c
··· 16 16 #include "user_session.h" 17 17 18 18 struct ksmbd_tree_conn_status 19 - ksmbd_tree_conn_connect(struct ksmbd_conn *conn, struct ksmbd_session *sess, 20 - const char *share_name) 19 + ksmbd_tree_conn_connect(struct ksmbd_work *work, const char *share_name) 21 20 { 22 21 struct ksmbd_tree_conn_status status = {-ENOENT, NULL}; 23 22 struct ksmbd_tree_connect_response *resp = NULL; 24 23 struct ksmbd_share_config *sc; 25 24 struct ksmbd_tree_connect *tree_conn = NULL; 26 25 struct sockaddr *peer_addr; 26 + struct ksmbd_conn *conn = work->conn; 27 + struct ksmbd_session *sess = work->sess; 27 28 int ret; 28 29 29 - sc = ksmbd_share_config_get(conn->um, share_name); 30 + sc = ksmbd_share_config_get(work, share_name); 30 31 if (!sc) 31 32 return status; 32 33 ··· 62 61 struct ksmbd_share_config *new_sc; 63 62 64 63 ksmbd_share_config_del(sc); 65 - new_sc = ksmbd_share_config_get(conn->um, share_name); 64 + new_sc = ksmbd_share_config_get(work, share_name); 66 65 if (!new_sc) { 67 66 pr_err("Failed to update stale share config\n"); 68 67 status.ret = -ESTALE;
+2 -2
fs/smb/server/mgmt/tree_connect.h
··· 13 13 struct ksmbd_share_config; 14 14 struct ksmbd_user; 15 15 struct ksmbd_conn; 16 + struct ksmbd_work; 16 17 17 18 enum { 18 19 TREE_NEW = 0, ··· 51 50 struct ksmbd_session; 52 51 53 52 struct ksmbd_tree_conn_status 54 - ksmbd_tree_conn_connect(struct ksmbd_conn *conn, struct ksmbd_session *sess, 55 - const char *share_name); 53 + ksmbd_tree_conn_connect(struct ksmbd_work *work, const char *share_name); 56 54 void ksmbd_tree_connect_put(struct ksmbd_tree_connect *tcon); 57 55 58 56 int ksmbd_tree_conn_disconnect(struct ksmbd_session *sess,
+8 -1
fs/smb/server/smb2pdu.c
··· 1955 1955 ksmbd_debug(SMB, "tree connect request for tree %s treename %s\n", 1956 1956 name, treename); 1957 1957 1958 - status = ksmbd_tree_conn_connect(conn, sess, name); 1958 + status = ksmbd_tree_conn_connect(work, name); 1959 1959 if (status.ret == KSMBD_TREE_CONN_STATUS_OK) 1960 1960 rsp->hdr.Id.SyncId.TreeId = cpu_to_le32(status.tree_conn->id); 1961 1961 else ··· 5596 5596 5597 5597 ksmbd_debug(SMB, "GOT query info request\n"); 5598 5598 5599 + if (ksmbd_override_fsids(work)) { 5600 + rc = -ENOMEM; 5601 + goto err_out; 5602 + } 5603 + 5599 5604 switch (req->InfoType) { 5600 5605 case SMB2_O_INFO_FILE: 5601 5606 ksmbd_debug(SMB, "GOT SMB2_O_INFO_FILE\n"); ··· 5619 5614 req->InfoType); 5620 5615 rc = -EOPNOTSUPP; 5621 5616 } 5617 + ksmbd_revert_fsids(work); 5622 5618 5623 5619 if (!rc) { 5624 5620 rsp->StructureSize = cpu_to_le16(9); ··· 5629 5623 le32_to_cpu(rsp->OutputBufferLength)); 5630 5624 } 5631 5625 5626 + err_out: 5632 5627 if (rc < 0) { 5633 5628 if (rc == -EACCES) 5634 5629 rsp->hdr.Status = STATUS_ACCESS_DENIED;
+7 -2
fs/smb/server/smb_common.c
··· 732 732 return p && p[0] == '*'; 733 733 } 734 734 735 - int ksmbd_override_fsids(struct ksmbd_work *work) 735 + int __ksmbd_override_fsids(struct ksmbd_work *work, 736 + struct ksmbd_share_config *share) 736 737 { 737 738 struct ksmbd_session *sess = work->sess; 738 - struct ksmbd_share_config *share = work->tcon->share_conf; 739 739 struct cred *cred; 740 740 struct group_info *gi; 741 741 unsigned int uid; ··· 773 773 return -EINVAL; 774 774 } 775 775 return 0; 776 + } 777 + 778 + int ksmbd_override_fsids(struct ksmbd_work *work) 779 + { 780 + return __ksmbd_override_fsids(work, work->tcon->share_conf); 776 781 } 777 782 778 783 void ksmbd_revert_fsids(struct ksmbd_work *work)
+2
fs/smb/server/smb_common.h
··· 447 447 int ksmbd_smb_negotiate_common(struct ksmbd_work *work, unsigned int command); 448 448 449 449 int ksmbd_smb_check_shared_mode(struct file *filp, struct ksmbd_file *curr_fp); 450 + int __ksmbd_override_fsids(struct ksmbd_work *work, 451 + struct ksmbd_share_config *share); 450 452 int ksmbd_override_fsids(struct ksmbd_work *work); 451 453 void ksmbd_revert_fsids(struct ksmbd_work *work); 452 454
+6 -1
fs/squashfs/inode.c
··· 279 279 if (err < 0) 280 280 goto failed_read; 281 281 282 - set_nlink(inode, le32_to_cpu(sqsh_ino->nlink)); 283 282 inode->i_size = le32_to_cpu(sqsh_ino->symlink_size); 283 + if (inode->i_size > PAGE_SIZE) { 284 + ERROR("Corrupted symlink\n"); 285 + return -EINVAL; 286 + } 287 + 288 + set_nlink(inode, le32_to_cpu(sqsh_ino->nlink)); 284 289 inode->i_op = &squashfs_symlink_inode_ops; 285 290 inode_nohighmem(inode); 286 291 inode->i_data.a_ops = &squashfs_symlink_aops;
+7 -1
fs/xfs/scrub/bmap.c
··· 938 938 } 939 939 break; 940 940 case XFS_ATTR_FORK: 941 - if (!xfs_has_attr(mp) && !xfs_has_attr2(mp)) 941 + /* 942 + * "attr" means that an attr fork was created at some point in 943 + * the life of this filesystem. "attr2" means that inodes have 944 + * variable-sized data/attr fork areas. Hence we only check 945 + * attr here. 946 + */ 947 + if (!xfs_has_attr(mp)) 942 948 xchk_ino_set_corrupt(sc, sc->ip->i_ino); 943 949 break; 944 950 default:
+11
fs/xfs/xfs_ioctl.c
··· 483 483 /* Can't change realtime flag if any extents are allocated. */ 484 484 if (ip->i_df.if_nextents || ip->i_delayed_blks) 485 485 return -EINVAL; 486 + 487 + /* 488 + * If S_DAX is enabled on this file, we can only switch the 489 + * device if both support fsdax. We can't update S_DAX because 490 + * there might be other threads walking down the access paths. 491 + */ 492 + if (IS_DAX(VFS_I(ip)) && 493 + (mp->m_ddev_targp->bt_daxdev == NULL || 494 + (mp->m_rtdev_targp && 495 + mp->m_rtdev_targp->bt_daxdev == NULL))) 496 + return -EINVAL; 486 497 } 487 498 488 499 if (rtflag) {
+6 -1
fs/xfs/xfs_trans_ail.c
··· 644 644 set_freezable(); 645 645 646 646 while (1) { 647 - if (tout) 647 + /* 648 + * Long waits of 50ms or more occur when we've run out of items 649 + * to push, so we only want uninterruptible state if we're 650 + * actually blocked on something. 651 + */ 652 + if (tout && tout <= 20) 648 653 set_current_state(TASK_KILLABLE|TASK_FREEZABLE); 649 654 else 650 655 set_current_state(TASK_INTERRUPTIBLE|TASK_FREEZABLE);
+1 -4
include/acpi/acpixf.h
··· 660 660 void *context)) 661 661 ACPI_EXTERNAL_RETURN_STATUS(acpi_status 662 662 acpi_execute_reg_methods(acpi_handle device, 663 + u32 nax_depth, 663 664 acpi_adr_space_type 664 665 space_id)) 665 - ACPI_EXTERNAL_RETURN_STATUS(acpi_status 666 - acpi_execute_orphan_reg_method(acpi_handle device, 667 - acpi_adr_space_type 668 - space_id)) 669 666 ACPI_EXTERNAL_RETURN_STATUS(acpi_status 670 667 acpi_remove_address_space_handler(acpi_handle 671 668 device,
+12
include/linux/bitmap.h
··· 270 270 dst[nbits / BITS_PER_LONG] &= BITMAP_LAST_WORD_MASK(nbits); 271 271 } 272 272 273 + static inline void bitmap_copy_and_extend(unsigned long *to, 274 + const unsigned long *from, 275 + unsigned int count, unsigned int size) 276 + { 277 + unsigned int copy = BITS_TO_LONGS(count); 278 + 279 + memcpy(to, from, copy * sizeof(long)); 280 + if (count % BITS_PER_LONG) 281 + to[copy - 1] &= BITMAP_LAST_WORD_MASK(count); 282 + memset(to + copy, 0, bitmap_size(size) - copy * sizeof(long)); 283 + } 284 + 273 285 /* 274 286 * On 32-bit systems bitmaps are represented as u32 arrays internally. On LE64 275 287 * machines the order of hi and lo parts of numbers match the bitmap structure.
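The new bitmap_copy_and_extend() helper copies the first 'count' bits of the
source bitmap and zero-fills the destination out to 'size' bits. A minimal
userspace sketch of those semantics (assuming 64-bit longs and size >= count;
the kernel version uses bitmap_size() and BITMAP_LAST_WORD_MASK() rather than
the open-coded mask below):

    #include <stdio.h>
    #include <string.h>

    #define BITS_PER_LONG    (8 * sizeof(unsigned long))
    #define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

    /* Copy the first 'count' bits of 'from' into 'to', clear up to 'size' bits. */
    static void copy_and_extend(unsigned long *to, const unsigned long *from,
                                unsigned int count, unsigned int size)
    {
            unsigned int copy = BITS_TO_LONGS(count);

            memcpy(to, from, copy * sizeof(long));
            if (count % BITS_PER_LONG)  /* mask off bits past 'count' in the last word */
                    to[copy - 1] &= ~0UL >> (BITS_PER_LONG - count % BITS_PER_LONG);
            memset(to + copy, 0, (BITS_TO_LONGS(size) - copy) * sizeof(long));
    }

    int main(void)
    {
            unsigned long src[1] = { 0xffUL };      /* bits 0..7 set */
            unsigned long dst[2];

            copy_and_extend(dst, src, 4, 2 * BITS_PER_LONG);
            printf("%#lx %#lx\n", dst[0], dst[1]);  /* only bits 0..3 survive, rest zeroed */
            return 0;
    }

The fs/file.c hunk earlier in this series switches copy_fd_bitmaps() to this
helper so the open_fds, close_on_exec and full_fds_bits bitmaps are copied and
cleared in whole-word units rather than hand-computed byte counts.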
+2 -2
include/linux/bpf_verifier.h
··· 856 856 /* only use after check_attach_btf_id() */ 857 857 static inline enum bpf_prog_type resolve_prog_type(const struct bpf_prog *prog) 858 858 { 859 - return (prog->type == BPF_PROG_TYPE_EXT && prog->aux->dst_prog) ? 860 - prog->aux->dst_prog->type : prog->type; 859 + return (prog->type == BPF_PROG_TYPE_EXT && prog->aux->saved_dst_prog_type) ? 860 + prog->aux->saved_dst_prog_type : prog->type; 861 861 } 862 862 863 863 static inline bool bpf_prog_check_recur(const struct bpf_prog *prog)
+1 -1
include/linux/file.h
··· 110 110 * 111 111 * f = dentry_open(&path, O_RDONLY, current_cred()); 112 112 * if (IS_ERR(f)) 113 - * return PTR_ERR(fd); 113 + * return PTR_ERR(f); 114 114 * 115 115 * fd_install(fd, f); 116 116 * return take_fd(fd);
+5
include/linux/fs.h
··· 2392 2392 * 2393 2393 * I_PINNING_FSCACHE_WB Inode is pinning an fscache object for writeback. 2394 2394 * 2395 + * I_LRU_ISOLATING Inode is pinned being isolated from LRU without holding 2396 + * i_count. 2397 + * 2395 2398 * Q: What is the difference between I_WILL_FREE and I_FREEING? 2396 2399 */ 2397 2400 #define I_DIRTY_SYNC (1 << 0) ··· 2418 2415 #define I_DONTCACHE (1 << 16) 2419 2416 #define I_SYNC_QUEUED (1 << 17) 2420 2417 #define I_PINNING_NETFS_WB (1 << 18) 2418 + #define __I_LRU_ISOLATING 19 2419 + #define I_LRU_ISOLATING (1 << __I_LRU_ISOLATING) 2421 2420 2422 2421 #define I_DIRTY_INODE (I_DIRTY_SYNC | I_DIRTY_DATASYNC) 2423 2422 #define I_DIRTY (I_DIRTY_INODE | I_DIRTY_PAGES)
+30 -3
include/linux/hugetlb.h
··· 944 944 static inline spinlock_t *huge_pte_lockptr(struct hstate *h, 945 945 struct mm_struct *mm, pte_t *pte) 946 946 { 947 - if (huge_page_size(h) == PMD_SIZE) 947 + const unsigned long size = huge_page_size(h); 948 + 949 + VM_WARN_ON(size == PAGE_SIZE); 950 + 951 + /* 952 + * hugetlb must use the exact same PT locks as core-mm page table 953 + * walkers would. When modifying a PTE table, hugetlb must take the 954 + * PTE PT lock, when modifying a PMD table, hugetlb must take the PMD 955 + * PT lock etc. 956 + * 957 + * The expectation is that any hugetlb folio smaller than a PMD is 958 + * always mapped into a single PTE table and that any hugetlb folio 959 + * smaller than a PUD (but at least as big as a PMD) is always mapped 960 + * into a single PMD table. 961 + * 962 + * If that does not hold for an architecture, then that architecture 963 + * must disable split PT locks such that all *_lockptr() functions 964 + * will give us the same result: the per-MM PT lock. 965 + * 966 + * Note that with e.g., CONFIG_PGTABLE_LEVELS=2 where 967 + * PGDIR_SIZE==P4D_SIZE==PUD_SIZE==PMD_SIZE, we'd use pud_lockptr() 968 + * and core-mm would use pmd_lockptr(). However, in such configurations 969 + * split PMD locks are disabled -- they don't make sense on a single 970 + * PGDIR page table -- and the end result is the same. 971 + */ 972 + if (size >= PUD_SIZE) 973 + return pud_lockptr(mm, (pud_t *) pte); 974 + else if (size >= PMD_SIZE || IS_ENABLED(CONFIG_HIGHPTE)) 948 975 return pmd_lockptr(mm, (pmd_t *) pte); 949 - VM_BUG_ON(huge_page_size(h) == PAGE_SIZE); 950 - return &mm->page_table_lock; 976 + /* pte_alloc_huge() only applies with !CONFIG_HIGHPTE */ 977 + return ptep_lockptr(mm, pte); 951 978 } 952 979 953 980 #ifndef hugepages_supported
+1 -1
include/linux/i2c.h
··· 1066 1066 struct acpi_resource; 1067 1067 struct acpi_resource_i2c_serialbus; 1068 1068 1069 - #if IS_ENABLED(CONFIG_ACPI) && IS_ENABLED(CONFIG_I2C) 1069 + #if IS_REACHABLE(CONFIG_ACPI) && IS_REACHABLE(CONFIG_I2C) 1070 1070 bool i2c_acpi_get_i2c_resource(struct acpi_resource *ares, 1071 1071 struct acpi_resource_i2c_serialbus **i2c); 1072 1072 int i2c_acpi_client_count(struct acpi_device *adev);
-2
include/linux/iommu.h
··· 795 795 struct device *dev); 796 796 extern void iommu_detach_device(struct iommu_domain *domain, 797 797 struct device *dev); 798 - extern int iommu_sva_unbind_gpasid(struct iommu_domain *domain, 799 - struct device *dev, ioasid_t pasid); 800 798 extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev); 801 799 extern struct iommu_domain *iommu_get_dma_domain(struct device *dev); 802 800 extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
+7
include/linux/kvm_host.h
··· 715 715 } 716 716 #endif 717 717 718 + #ifndef kvm_arch_has_readonly_mem 719 + static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm) 720 + { 721 + return IS_ENABLED(CONFIG_HAVE_KVM_READONLY_MEM); 722 + } 723 + #endif 724 + 718 725 struct kvm_memslots { 719 726 u64 generation; 720 727 atomic_long_t last_used_slot;
+11
include/linux/mm.h
··· 2920 2920 return ptlock_ptr(page_ptdesc(pmd_page(*pmd))); 2921 2921 } 2922 2922 2923 + static inline spinlock_t *ptep_lockptr(struct mm_struct *mm, pte_t *pte) 2924 + { 2925 + BUILD_BUG_ON(IS_ENABLED(CONFIG_HIGHPTE)); 2926 + BUILD_BUG_ON(MAX_PTRS_PER_PTE * sizeof(pte_t) > PAGE_SIZE); 2927 + return ptlock_ptr(virt_to_ptdesc(pte)); 2928 + } 2929 + 2923 2930 static inline bool ptlock_init(struct ptdesc *ptdesc) 2924 2931 { 2925 2932 /* ··· 2948 2941 * We use mm->page_table_lock to guard all pagetable pages of the mm. 2949 2942 */ 2950 2943 static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd) 2944 + { 2945 + return &mm->page_table_lock; 2946 + } 2947 + static inline spinlock_t *ptep_lockptr(struct mm_struct *mm, pte_t *pte) 2951 2948 { 2952 2949 return &mm->page_table_lock; 2953 2950 }
-2
include/linux/mmzone.h
··· 220 220 PGDEMOTE_KSWAPD, 221 221 PGDEMOTE_DIRECT, 222 222 PGDEMOTE_KHUGEPAGED, 223 - NR_MEMMAP, /* page metadata allocated through buddy allocator */ 224 - NR_MEMMAP_BOOT, /* page metadata allocated through boot allocator */ 225 223 NR_VM_NODE_STAT_ITEMS 226 224 }; 227 225
-3
include/linux/netfs.h
··· 73 73 #define NETFS_ICTX_ODIRECT 0 /* The file has DIO in progress */ 74 74 #define NETFS_ICTX_UNBUFFERED 1 /* I/O should not use the pagecache */ 75 75 #define NETFS_ICTX_WRITETHROUGH 2 /* Write-through caching */ 76 - #define NETFS_ICTX_USE_PGPRIV2 31 /* [DEPRECATED] Use PG_private_2 to mark 77 - * write to cache on read */ 78 76 }; 79 77 80 78 /* ··· 267 269 #define NETFS_RREQ_DONT_UNLOCK_FOLIOS 3 /* Don't unlock the folios on completion */ 268 270 #define NETFS_RREQ_FAILED 4 /* The request failed */ 269 271 #define NETFS_RREQ_IN_PROGRESS 5 /* Unlocked when the request completes */ 270 - #define NETFS_RREQ_WRITE_TO_CACHE 7 /* Need to write to the cache */ 271 272 #define NETFS_RREQ_UPLOAD_TO_SERVER 8 /* Need to write to the server */ 272 273 #define NETFS_RREQ_NONBLOCK 9 /* Don't block if possible (O_NONBLOCK) */ 273 274 #define NETFS_RREQ_BLOCKED 10 /* We blocked */
+13
include/linux/pgalloc_tag.h
··· 43 43 page_ext_put(page_ext_from_codetag_ref(ref)); 44 44 } 45 45 46 + static inline void clear_page_tag_ref(struct page *page) 47 + { 48 + if (mem_alloc_profiling_enabled()) { 49 + union codetag_ref *ref = get_page_tag_ref(page); 50 + 51 + if (ref) { 52 + set_codetag_empty(ref); 53 + put_page_tag_ref(ref); 54 + } 55 + } 56 + } 57 + 46 58 static inline void pgalloc_tag_add(struct page *page, struct task_struct *task, 47 59 unsigned int nr) 48 60 { ··· 138 126 139 127 static inline union codetag_ref *get_page_tag_ref(struct page *page) { return NULL; } 140 128 static inline void put_page_tag_ref(union codetag_ref *ref) {} 129 + static inline void clear_page_tag_ref(struct page *page) {} 141 130 static inline void pgalloc_tag_add(struct page *page, struct task_struct *task, 142 131 unsigned int nr) {} 143 132 static inline void pgalloc_tag_sub(struct page *page, unsigned int nr) {}
+2 -2
include/linux/refcount.h
··· 266 266 if (oldp) 267 267 *oldp = old; 268 268 269 - if (old == i) { 269 + if (old > 0 && old == i) { 270 270 smp_acquire__after_ctrl_dep(); 271 271 return true; 272 272 } 273 273 274 - if (unlikely(old < 0 || old - i < 0)) 274 + if (unlikely(old <= 0 || old - i < 0)) 275 275 refcount_warn_saturate(r, REFCOUNT_SUB_UAF); 276 276 277 277 return false;
+18 -1
include/linux/spi/spi.h
··· 902 902 struct spi_controller *ctlr); 903 903 extern void spi_unregister_controller(struct spi_controller *ctlr); 904 904 905 - #if IS_ENABLED(CONFIG_ACPI) 905 + #if IS_ENABLED(CONFIG_ACPI) && IS_ENABLED(CONFIG_SPI_MASTER) 906 906 extern struct spi_controller *acpi_spi_find_controller_by_adev(struct acpi_device *adev); 907 907 extern struct spi_device *acpi_spi_device_alloc(struct spi_controller *ctlr, 908 908 struct acpi_device *adev, 909 909 int index); 910 910 int acpi_spi_count_resources(struct acpi_device *adev); 911 + #else 912 + static inline struct spi_controller *acpi_spi_find_controller_by_adev(struct acpi_device *adev) 913 + { 914 + return NULL; 915 + } 916 + 917 + static inline struct spi_device *acpi_spi_device_alloc(struct spi_controller *ctlr, 918 + struct acpi_device *adev, 919 + int index) 920 + { 921 + return ERR_PTR(-ENODEV); 922 + } 923 + 924 + static inline int acpi_spi_count_resources(struct acpi_device *adev) 925 + { 926 + return 0; 927 + } 911 928 #endif 912 929 913 930 /*
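With the spi.h hunk above, the ACPI helpers gain static inline stubs when
CONFIG_SPI_MASTER (or CONFIG_ACPI) is disabled, so callers build without
sprinkling #ifdefs. A minimal sketch of that compile-out stub pattern, using
hypothetical CONFIG_FOO and foo_*() names:

    /* Illustration only: hypothetical subsystem, not the SPI/ACPI helpers. */
    #include <linux/kconfig.h>        /* IS_ENABLED() */

    struct foo_device;

    #if IS_ENABLED(CONFIG_FOO)
    struct foo_device *foo_find_device(int id);
    int foo_count_resources(struct foo_device *dev);
    #else
    static inline struct foo_device *foo_find_device(int id)
    {
            return NULL;
    }

    static inline int foo_count_resources(struct foo_device *dev)
    {
            return 0;
    }
    #endif

Returning NULL / 0 from the stubs keeps the callers' error handling identical
whether or not the subsystem is built, which is what the new acpi_spi_*()
stubs do (NULL, ERR_PTR(-ENODEV) and 0 respectively).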
+1
include/linux/thermal.h
··· 55 55 THERMAL_TZ_BIND_CDEV, /* Cooling dev is bind to the thermal zone */ 56 56 THERMAL_TZ_UNBIND_CDEV, /* Cooling dev is unbind from the thermal zone */ 57 57 THERMAL_INSTANCE_WEIGHT_CHANGED, /* Thermal instance weight changed */ 58 + THERMAL_TZ_RESUME, /* Thermal zone is resuming after system sleep */ 58 59 }; 59 60 60 61 /**
+8 -14
include/linux/vmstat.h
··· 34 34 unsigned nr_lazyfree_fail; 35 35 }; 36 36 37 - enum writeback_stat_item { 37 + /* Stat data for system wide items */ 38 + enum vm_stat_item { 38 39 NR_DIRTY_THRESHOLD, 39 40 NR_DIRTY_BG_THRESHOLD, 40 - NR_VM_WRITEBACK_STAT_ITEMS, 41 + NR_MEMMAP_PAGES, /* page metadata allocated through buddy allocator */ 42 + NR_MEMMAP_BOOT_PAGES, /* page metadata allocated through boot allocator */ 43 + NR_VM_STAT_ITEMS, 41 44 }; 42 45 43 46 #ifdef CONFIG_VM_EVENT_COUNTERS ··· 517 514 return node_stat_name(NR_LRU_BASE + lru) + 3; // skip "nr_" 518 515 } 519 516 520 - static inline const char *writeback_stat_name(enum writeback_stat_item item) 521 - { 522 - return vmstat_text[NR_VM_ZONE_STAT_ITEMS + 523 - NR_VM_NUMA_EVENT_ITEMS + 524 - NR_VM_NODE_STAT_ITEMS + 525 - item]; 526 - } 527 - 528 517 #if defined(CONFIG_VM_EVENT_COUNTERS) || defined(CONFIG_MEMCG) 529 518 static inline const char *vm_event_name(enum vm_event_item item) 530 519 { 531 520 return vmstat_text[NR_VM_ZONE_STAT_ITEMS + 532 521 NR_VM_NUMA_EVENT_ITEMS + 533 522 NR_VM_NODE_STAT_ITEMS + 534 - NR_VM_WRITEBACK_STAT_ITEMS + 523 + NR_VM_STAT_ITEMS + 535 524 item]; 536 525 } 537 526 #endif /* CONFIG_VM_EVENT_COUNTERS || CONFIG_MEMCG */ ··· 620 625 lruvec_stat_mod_folio(folio, idx, -folio_nr_pages(folio)); 621 626 } 622 627 623 - void __meminit mod_node_early_perpage_metadata(int nid, long delta); 624 - void __meminit store_early_perpage_metadata(void); 625 - 628 + void memmap_boot_pages_add(long delta); 629 + void memmap_pages_add(long delta); 626 630 #endif /* _LINUX_VMSTAT_H */
+4
include/net/af_vsock.h
··· 230 230 int vsock_add_tap(struct vsock_tap *vt); 231 231 int vsock_remove_tap(struct vsock_tap *vt); 232 232 void vsock_deliver_tap(struct sk_buff *build_skb(void *opaque), void *opaque); 233 + int __vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, 234 + int flags); 233 235 int vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, 234 236 int flags); 237 + int __vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg, 238 + size_t len, int flags); 235 239 int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg, 236 240 size_t len, int flags); 237 241
+1
include/net/mana/mana.h
··· 275 275 /* NAPI data */ 276 276 struct napi_struct napi; 277 277 int work_done; 278 + int work_done_since_doorbell; 278 279 int budget; 279 280 }; 280 281
+2
include/trace/events/netfs.h
··· 51 51 EM(netfs_rreq_trace_resubmit, "RESUBMT") \ 52 52 EM(netfs_rreq_trace_set_pause, "PAUSE ") \ 53 53 EM(netfs_rreq_trace_unlock, "UNLOCK ") \ 54 + EM(netfs_rreq_trace_unlock_pgpriv2, "UNLCK-2") \ 54 55 EM(netfs_rreq_trace_unmark, "UNMARK ") \ 55 56 EM(netfs_rreq_trace_wait_ip, "WAIT-IP") \ 56 57 EM(netfs_rreq_trace_wait_pause, "WT-PAUS") \ ··· 146 145 EM(netfs_folio_trace_clear_g, "clear-g") \ 147 146 EM(netfs_folio_trace_clear_s, "clear-s") \ 148 147 EM(netfs_folio_trace_copy_to_cache, "mark-copy") \ 148 + EM(netfs_folio_trace_end_copy, "end-copy") \ 149 149 EM(netfs_folio_trace_filled_gaps, "filled-gaps") \ 150 150 EM(netfs_folio_trace_kill, "kill") \ 151 151 EM(netfs_folio_trace_kill_cc, "kill-cc") \
+1 -1
include/uapi/linux/io_uring.h
··· 421 421 * IO completion data structure (Completion Queue Entry) 422 422 */ 423 423 struct io_uring_cqe { 424 - __u64 user_data; /* sqe->data submission passed back */ 424 + __u64 user_data; /* sqe->user_data value passed back */ 425 425 __s32 res; /* result code for this event */ 426 426 __u32 flags; 427 427
+2 -1
include/uapi/linux/nsfs.h
··· 3 3 #define __LINUX_NSFS_H 4 4 5 5 #include <linux/ioctl.h> 6 + #include <linux/types.h> 6 7 7 8 #define NSIO 0xb7 8 9 ··· 17 16 /* Get owner UID (in the caller's user namespace) for a user namespace */ 18 17 #define NS_GET_OWNER_UID _IO(NSIO, 0x4) 19 18 /* Get the id for a mount namespace */ 20 - #define NS_GET_MNTNS_ID _IO(NSIO, 0x5) 19 + #define NS_GET_MNTNS_ID _IOR(NSIO, 0x5, __u64) 21 20 /* Translate pid from target pid namespace into the caller's pid namespace. */ 22 21 #define NS_GET_PID_FROM_PIDNS _IOR(NSIO, 0x6, int) 23 22 /* Return thread-group leader id of pid in the callers pid namespace. */
+1
include/uapi/linux/psp-sev.h
··· 51 51 SEV_RET_INVALID_PLATFORM_STATE, 52 52 SEV_RET_INVALID_GUEST_STATE, 53 53 SEV_RET_INAVLID_CONFIG, 54 + SEV_RET_INVALID_CONFIG = SEV_RET_INAVLID_CONFIG, 54 55 SEV_RET_INVALID_LEN, 55 56 SEV_RET_ALREADY_OWNED, 56 57 SEV_RET_INVALID_CERTIFICATE,
-3
include/uapi/misc/fastrpc.h
··· 8 8 #define FASTRPC_IOCTL_ALLOC_DMA_BUFF _IOWR('R', 1, struct fastrpc_alloc_dma_buf) 9 9 #define FASTRPC_IOCTL_FREE_DMA_BUFF _IOWR('R', 2, __u32) 10 10 #define FASTRPC_IOCTL_INVOKE _IOWR('R', 3, struct fastrpc_invoke) 11 - /* This ioctl is only supported with secure device nodes */ 12 11 #define FASTRPC_IOCTL_INIT_ATTACH _IO('R', 4) 13 12 #define FASTRPC_IOCTL_INIT_CREATE _IOWR('R', 5, struct fastrpc_init_create) 14 13 #define FASTRPC_IOCTL_MMAP _IOWR('R', 6, struct fastrpc_req_mmap) 15 14 #define FASTRPC_IOCTL_MUNMAP _IOWR('R', 7, struct fastrpc_req_munmap) 16 - /* This ioctl is only supported with secure device nodes */ 17 15 #define FASTRPC_IOCTL_INIT_ATTACH_SNS _IO('R', 8) 18 - /* This ioctl is only supported with secure device nodes */ 19 16 #define FASTRPC_IOCTL_INIT_CREATE_STATIC _IOWR('R', 9, struct fastrpc_init_create_static) 20 17 #define FASTRPC_IOCTL_MEM_MAP _IOWR('R', 10, struct fastrpc_mem_map) 21 18 #define FASTRPC_IOCTL_MEM_UNMAP _IOWR('R', 11, struct fastrpc_mem_unmap)
+2 -2
init/Kconfig
··· 1920 1920 config RUSTC_VERSION_TEXT 1921 1921 string 1922 1922 depends on RUST 1923 - default $(shell,command -v $(RUSTC) >/dev/null 2>&1 && $(RUSTC) --version || echo n) 1923 + default "$(shell,$(RUSTC) --version 2>/dev/null)" 1924 1924 1925 1925 config BINDGEN_VERSION_TEXT 1926 1926 string ··· 1928 1928 # The dummy parameter `workaround-for-0.69.0` is required to support 0.69.0 1929 1929 # (https://github.com/rust-lang/rust-bindgen/pull/2678). It can be removed when 1930 1930 # the minimum version is upgraded past that (0.69.1 already fixed the issue). 1931 - default $(shell,command -v $(BINDGEN) >/dev/null 2>&1 && $(BINDGEN) --version workaround-for-0.69.0 || echo n) 1931 + default "$(shell,$(BINDGEN) --version workaround-for-0.69.0 2>/dev/null)" 1932 1932 1933 1933 # 1934 1934 # Place an empty function call at each tracepoint site. Can be
+1 -2
io_uring/napi.c
··· 26 26 hlist_for_each_entry_rcu(e, hash_list, node) { 27 27 if (e->napi_id != napi_id) 28 28 continue; 29 - e->timeout = jiffies + NAPI_TIMEOUT; 30 29 return e; 31 30 } 32 31 ··· 301 302 { 302 303 iowq->napi_prefer_busy_poll = READ_ONCE(ctx->napi_prefer_busy_poll); 303 304 304 - if (!(ctx->flags & IORING_SETUP_SQPOLL) && ctx->napi_enabled) 305 + if (!(ctx->flags & IORING_SETUP_SQPOLL)) 305 306 io_napi_blocking_busy_loop(ctx, iowq); 306 307 } 307 308
+1 -1
io_uring/napi.h
··· 55 55 struct io_ring_ctx *ctx = req->ctx; 56 56 struct socket *sock; 57 57 58 - if (!READ_ONCE(ctx->napi_busy_poll_dt)) 58 + if (!READ_ONCE(ctx->napi_enabled)) 59 59 return; 60 60 61 61 sock = sock_from_file(req->file);
+1 -1
io_uring/sqpoll.c
··· 44 44 void io_sq_thread_park(struct io_sq_data *sqd) 45 45 __acquires(&sqd->lock) 46 46 { 47 - WARN_ON_ONCE(sqd->thread == current); 47 + WARN_ON_ONCE(data_race(sqd->thread) == current); 48 48 49 49 atomic_inc(&sqd->park_pending); 50 50 set_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
+3 -2
kernel/bpf/verifier.c
··· 16884 16884 spi = i / BPF_REG_SIZE; 16885 16885 16886 16886 if (exact != NOT_EXACT && 16887 - old->stack[spi].slot_type[i % BPF_REG_SIZE] != 16888 - cur->stack[spi].slot_type[i % BPF_REG_SIZE]) 16887 + (i >= cur->allocated_stack || 16888 + old->stack[spi].slot_type[i % BPF_REG_SIZE] != 16889 + cur->stack[spi].slot_type[i % BPF_REG_SIZE])) 16889 16890 return false; 16890 16891 16891 16892 if (!(old->stack[spi].spilled_ptr.live & REG_LIVE_READ)
+11 -1
kernel/cpu.c
··· 2689 2689 return ret; 2690 2690 } 2691 2691 2692 + /** 2693 + * Check if the core a CPU belongs to is online 2694 + */ 2695 + #if !defined(topology_is_core_online) 2696 + static inline bool topology_is_core_online(unsigned int cpu) 2697 + { 2698 + return true; 2699 + } 2700 + #endif 2701 + 2692 2702 int cpuhp_smt_enable(void) 2693 2703 { 2694 2704 int cpu, ret = 0; ··· 2709 2699 /* Skip online CPUs and CPUs on offline nodes */ 2710 2700 if (cpu_online(cpu) || !node_online(cpu_to_node(cpu))) 2711 2701 continue; 2712 - if (!cpu_smt_thread_allowed(cpu)) 2702 + if (!cpu_smt_thread_allowed(cpu) || !topology_is_core_online(cpu)) 2713 2703 continue; 2714 2704 ret = _cpu_up(cpu, 0, CPUHP_ONLINE); 2715 2705 if (ret)
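cpuhp_smt_enable() now also skips CPUs whose core is offline, with the topology_is_core_online() fallback above preserving the old behaviour on architectures that do not define the hook. A rough sketch of how an architecture could provide its own definition from its asm/topology.h; this is purely illustrative, not the real arch implementation:

    #include <linux/cpumask.h>
    #include <linux/topology.h>

    /* Hypothetical arch-side override; once this macro exists, the generic
     * fallback in kernel/cpu.c is compiled out.
     */
    #define topology_is_core_online topology_is_core_online
    static inline bool topology_is_core_online(unsigned int cpu)
    {
            /* e.g. count the core as online if any sibling thread is online */
            return cpumask_intersects(topology_sibling_cpumask(cpu),
                                      cpu_online_mask);
    }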
+2 -1
kernel/crash_reserve.c
··· 423 423 if (high && search_end == CRASH_ADDR_HIGH_MAX) { 424 424 search_end = CRASH_ADDR_LOW_MAX; 425 425 search_base = 0; 426 - goto retry; 426 + if (search_end != CRASH_ADDR_HIGH_MAX) 427 + goto retry; 427 428 } 428 429 pr_warn("cannot allocate crashkernel (size:0x%llx)\n", 429 430 crash_size);
+2 -1
kernel/events/core.c
··· 9706 9706 9707 9707 ret = __perf_event_account_interrupt(event, throttle); 9708 9708 9709 - if (event->prog && !bpf_overflow_handler(event, data, regs)) 9709 + if (event->prog && event->prog->type == BPF_PROG_TYPE_PERF_EVENT && 9710 + !bpf_overflow_handler(event, data, regs)) 9710 9711 return ret; 9711 9712 9712 9713 /*
+22 -3
kernel/fork.c
··· 2053 2053 */ 2054 2054 int pidfd_prepare(struct pid *pid, unsigned int flags, struct file **ret) 2055 2055 { 2056 - bool thread = flags & PIDFD_THREAD; 2057 - 2058 - if (!pid || !pid_has_task(pid, thread ? PIDTYPE_PID : PIDTYPE_TGID)) 2056 + if (!pid) 2059 2057 return -EINVAL; 2058 + 2059 + scoped_guard(rcu) { 2060 + struct task_struct *tsk; 2061 + 2062 + if (flags & PIDFD_THREAD) 2063 + tsk = pid_task(pid, PIDTYPE_PID); 2064 + else 2065 + tsk = pid_task(pid, PIDTYPE_TGID); 2066 + if (!tsk) 2067 + return -EINVAL; 2068 + 2069 + /* Don't create pidfds for kernel threads for now. */ 2070 + if (tsk->flags & PF_KTHREAD) 2071 + return -EINVAL; 2072 + } 2060 2073 2061 2074 return __pidfd_prepare(pid, flags, ret); 2062 2075 } ··· 2415 2402 */ 2416 2403 if (clone_flags & CLONE_PIDFD) { 2417 2404 int flags = (clone_flags & CLONE_THREAD) ? PIDFD_THREAD : 0; 2405 + 2406 + /* Don't create pidfds for kernel threads for now. */ 2407 + if (args->kthread) { 2408 + retval = -EINVAL; 2409 + goto bad_fork_free_pid; 2410 + } 2418 2411 2419 2412 /* Note that no task has been attached to @pid yet. */ 2420 2413 retval = __pidfd_prepare(pid, flags, &pidfile);
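pidfd_prepare() now resolves the task under scoped_guard(rcu), so every early return inside the block drops the RCU read lock automatically. The same cleanup-guard pattern in isolation (a sketch; the wrapper function is invented):

    #include <linux/cleanup.h>
    #include <linux/errno.h>
    #include <linux/pid.h>
    #include <linux/rcupdate.h>

    static int pid_backed_by_task(struct pid *pid)
    {
            scoped_guard(rcu) {
                    /* rcu_read_unlock() runs when the block is left,
                     * including via this early return.
                     */
                    if (!pid_task(pid, PIDTYPE_PID))
                            return -ESRCH;
            }
            return 0;
    }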
+6 -49
kernel/kallsyms.c
··· 160 160 return kallsyms_relative_base - 1 - kallsyms_offsets[idx]; 161 161 } 162 162 163 - static void cleanup_symbol_name(char *s) 164 - { 165 - char *res; 166 - 167 - if (!IS_ENABLED(CONFIG_LTO_CLANG)) 168 - return; 169 - 170 - /* 171 - * LLVM appends various suffixes for local functions and variables that 172 - * must be promoted to global scope as part of LTO. This can break 173 - * hooking of static functions with kprobes. '.' is not a valid 174 - * character in an identifier in C. Suffixes only in LLVM LTO observed: 175 - * - foo.llvm.[0-9a-f]+ 176 - */ 177 - res = strstr(s, ".llvm."); 178 - if (res) 179 - *res = '\0'; 180 - 181 - return; 182 - } 183 - 184 - static int compare_symbol_name(const char *name, char *namebuf) 185 - { 186 - /* The kallsyms_seqs_of_names is sorted based on names after 187 - * cleanup_symbol_name() (see scripts/kallsyms.c) if clang lto is enabled. 188 - * To ensure correct bisection in kallsyms_lookup_names(), do 189 - * cleanup_symbol_name(namebuf) before comparing name and namebuf. 190 - */ 191 - cleanup_symbol_name(namebuf); 192 - return strcmp(name, namebuf); 193 - } 194 - 195 163 static unsigned int get_symbol_seq(int index) 196 164 { 197 165 unsigned int i, seq = 0; ··· 187 219 seq = get_symbol_seq(mid); 188 220 off = get_symbol_offset(seq); 189 221 kallsyms_expand_symbol(off, namebuf, ARRAY_SIZE(namebuf)); 190 - ret = compare_symbol_name(name, namebuf); 222 + ret = strcmp(name, namebuf); 191 223 if (ret > 0) 192 224 low = mid + 1; 193 225 else if (ret < 0) ··· 204 236 seq = get_symbol_seq(low - 1); 205 237 off = get_symbol_offset(seq); 206 238 kallsyms_expand_symbol(off, namebuf, ARRAY_SIZE(namebuf)); 207 - if (compare_symbol_name(name, namebuf)) 239 + if (strcmp(name, namebuf)) 208 240 break; 209 241 low--; 210 242 } ··· 216 248 seq = get_symbol_seq(high + 1); 217 249 off = get_symbol_offset(seq); 218 250 kallsyms_expand_symbol(off, namebuf, ARRAY_SIZE(namebuf)); 219 - if (compare_symbol_name(name, namebuf)) 251 + if (strcmp(name, namebuf)) 220 252 break; 221 253 high++; 222 254 } ··· 375 407 if (modbuildid) 376 408 *modbuildid = NULL; 377 409 378 - ret = strlen(namebuf); 379 - goto found; 410 + return strlen(namebuf); 380 411 } 381 412 382 413 /* See if it's in a module or a BPF JITed image. */ ··· 389 422 ret = ftrace_mod_address_lookup(addr, symbolsize, 390 423 offset, modname, namebuf); 391 424 392 - found: 393 - cleanup_symbol_name(namebuf); 394 425 return ret; 395 426 } 396 427 ··· 415 450 416 451 int lookup_symbol_name(unsigned long addr, char *symname) 417 452 { 418 - int res; 419 - 420 453 symname[0] = '\0'; 421 454 symname[KSYM_NAME_LEN - 1] = '\0'; 422 455 ··· 425 462 /* Grab name */ 426 463 kallsyms_expand_symbol(get_symbol_offset(pos), 427 464 symname, KSYM_NAME_LEN); 428 - goto found; 465 + return 0; 429 466 } 430 467 /* See if it's in a module. */ 431 - res = lookup_module_symbol_name(addr, symname); 432 - if (res) 433 - return res; 434 - 435 - found: 436 - cleanup_symbol_name(symname); 437 - return 0; 468 + return lookup_module_symbol_name(addr, symname); 438 469 } 439 470 440 471 /* Look up a kernel symbol and return it in a text buffer. */
+1 -21
kernel/kallsyms_selftest.c
··· 187 187 stat.min, stat.max, div_u64(stat.sum, stat.real_cnt)); 188 188 } 189 189 190 - static bool match_cleanup_name(const char *s, const char *name) 191 - { 192 - char *p; 193 - int len; 194 - 195 - if (!IS_ENABLED(CONFIG_LTO_CLANG)) 196 - return false; 197 - 198 - p = strstr(s, ".llvm."); 199 - if (!p) 200 - return false; 201 - 202 - len = strlen(name); 203 - if (p - s != len) 204 - return false; 205 - 206 - return !strncmp(s, name, len); 207 - } 208 - 209 190 static int find_symbol(void *data, const char *name, unsigned long addr) 210 191 { 211 192 struct test_stat *stat = (struct test_stat *)data; 212 193 213 - if (strcmp(name, stat->name) == 0 || 214 - (!stat->perf && match_cleanup_name(name, stat->name))) { 194 + if (!strcmp(name, stat->name)) { 215 195 stat->real_cnt++; 216 196 stat->addr = addr; 217 197
+1 -1
kernel/trace/trace.c
··· 7956 7956 trace_access_unlock(iter->cpu_file); 7957 7957 7958 7958 if (ret < 0) { 7959 - if (trace_empty(iter)) { 7959 + if (trace_empty(iter) && !iter->closed) { 7960 7960 if ((filp->f_flags & O_NONBLOCK)) 7961 7961 return -EAGAIN; 7962 7962
+2
lib/generic-radix-tree.c
··· 121 121 if ((v = cmpxchg_release(&radix->root, r, new_root)) == r) { 122 122 v = new_root; 123 123 new_node = NULL; 124 + } else { 125 + new_node->children[0] = NULL; 124 126 } 125 127 } 126 128
+1 -2
lib/overflow_kunit.c
··· 668 668 669 669 static void overflow_allocation_test(struct kunit *test) 670 670 { 671 - const char device_name[] = "overflow-test"; 672 671 struct device *dev; 673 672 int count = 0; 674 673 ··· 677 678 } while (0) 678 679 679 680 /* Create dummy device for devm_kmalloc()-family tests. */ 680 - dev = kunit_device_register(test, device_name); 681 + dev = kunit_device_register(test, "overflow-test"); 681 682 KUNIT_ASSERT_FALSE_MSG(test, IS_ERR(dev), 682 683 "Cannot register test device\n"); 683 684
+13 -16
mm/huge_memory.c
··· 1685 1685 vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd); 1686 1686 if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) { 1687 1687 spin_unlock(vmf->ptl); 1688 - goto out; 1688 + return 0; 1689 1689 } 1690 1690 1691 1691 pmd = pmd_modify(oldpmd, vma->vm_page_prot); ··· 1728 1728 if (!migrate_misplaced_folio(folio, vma, target_nid)) { 1729 1729 flags |= TNF_MIGRATED; 1730 1730 nid = target_nid; 1731 - } else { 1732 - flags |= TNF_MIGRATE_FAIL; 1733 - vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd); 1734 - if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) { 1735 - spin_unlock(vmf->ptl); 1736 - goto out; 1737 - } 1738 - goto out_map; 1731 + task_numa_fault(last_cpupid, nid, HPAGE_PMD_NR, flags); 1732 + return 0; 1739 1733 } 1740 1734 1741 - out: 1742 - if (nid != NUMA_NO_NODE) 1743 - task_numa_fault(last_cpupid, nid, HPAGE_PMD_NR, flags); 1744 - 1745 - return 0; 1746 - 1735 + flags |= TNF_MIGRATE_FAIL; 1736 + vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd); 1737 + if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) { 1738 + spin_unlock(vmf->ptl); 1739 + return 0; 1740 + } 1747 1741 out_map: 1748 1742 /* Restore the PMD */ 1749 1743 pmd = pmd_modify(oldpmd, vma->vm_page_prot); ··· 1747 1753 set_pmd_at(vma->vm_mm, haddr, vmf->pmd, pmd); 1748 1754 update_mmu_cache_pmd(vma, vmf->address, vmf->pmd); 1749 1755 spin_unlock(vmf->ptl); 1750 - goto out; 1756 + 1757 + if (nid != NUMA_NO_NODE) 1758 + task_numa_fault(last_cpupid, nid, HPAGE_PMD_NR, flags); 1759 + return 0; 1751 1760 } 1752 1761 1753 1762 /*
+5 -8
mm/hugetlb_vmemmap.c
··· 185 185 static inline void free_vmemmap_page(struct page *page) 186 186 { 187 187 if (PageReserved(page)) { 188 + memmap_boot_pages_add(-1); 188 189 free_bootmem_page(page); 189 - mod_node_page_state(page_pgdat(page), NR_MEMMAP_BOOT, -1); 190 190 } else { 191 + memmap_pages_add(-1); 191 192 __free_page(page); 192 - mod_node_page_state(page_pgdat(page), NR_MEMMAP, -1); 193 193 } 194 194 } 195 195 ··· 341 341 copy_page(page_to_virt(walk.reuse_page), 342 342 (void *)walk.reuse_addr); 343 343 list_add(&walk.reuse_page->lru, vmemmap_pages); 344 - mod_node_page_state(NODE_DATA(nid), NR_MEMMAP, 1); 344 + memmap_pages_add(1); 345 345 } 346 346 347 347 /* ··· 392 392 393 393 for (i = 0; i < nr_pages; i++) { 394 394 page = alloc_pages_node(nid, gfp_mask, 0); 395 - if (!page) { 396 - mod_node_page_state(NODE_DATA(nid), NR_MEMMAP, i); 395 + if (!page) 397 396 goto out; 398 - } 399 397 list_add(&page->lru, list); 400 398 } 401 - 402 - mod_node_page_state(NODE_DATA(nid), NR_MEMMAP, nr_pages); 399 + memmap_pages_add(nr_pages); 403 400 404 401 return 0; 405 402 out:
+5 -2
mm/memcontrol-v1.c
··· 1842 1842 buf = endp + 1; 1843 1843 1844 1844 cfd = simple_strtoul(buf, &endp, 10); 1845 - if ((*endp != ' ') && (*endp != '\0')) 1845 + if (*endp == '\0') 1846 + buf = endp; 1847 + else if (*endp == ' ') 1848 + buf = endp + 1; 1849 + else 1846 1850 return -EINVAL; 1847 - buf = endp + 1; 1848 1851 1849 1852 event = kzalloc(sizeof(*event), GFP_KERNEL); 1850 1853 if (!event)
+11 -9
mm/memory-failure.c
··· 2417 2417 struct memory_failure_cpu { 2418 2418 DECLARE_KFIFO(fifo, struct memory_failure_entry, 2419 2419 MEMORY_FAILURE_FIFO_SIZE); 2420 - spinlock_t lock; 2420 + raw_spinlock_t lock; 2421 2421 struct work_struct work; 2422 2422 }; 2423 2423 ··· 2443 2443 { 2444 2444 struct memory_failure_cpu *mf_cpu; 2445 2445 unsigned long proc_flags; 2446 + bool buffer_overflow; 2446 2447 struct memory_failure_entry entry = { 2447 2448 .pfn = pfn, 2448 2449 .flags = flags, 2449 2450 }; 2450 2451 2451 2452 mf_cpu = &get_cpu_var(memory_failure_cpu); 2452 - spin_lock_irqsave(&mf_cpu->lock, proc_flags); 2453 - if (kfifo_put(&mf_cpu->fifo, entry)) 2453 + raw_spin_lock_irqsave(&mf_cpu->lock, proc_flags); 2454 + buffer_overflow = !kfifo_put(&mf_cpu->fifo, entry); 2455 + if (!buffer_overflow) 2454 2456 schedule_work_on(smp_processor_id(), &mf_cpu->work); 2455 - else 2457 + raw_spin_unlock_irqrestore(&mf_cpu->lock, proc_flags); 2458 + put_cpu_var(memory_failure_cpu); 2459 + if (buffer_overflow) 2456 2460 pr_err("buffer overflow when queuing memory failure at %#lx\n", 2457 2461 pfn); 2458 - spin_unlock_irqrestore(&mf_cpu->lock, proc_flags); 2459 - put_cpu_var(memory_failure_cpu); 2460 2462 } 2461 2463 EXPORT_SYMBOL_GPL(memory_failure_queue); 2462 2464 ··· 2471 2469 2472 2470 mf_cpu = container_of(work, struct memory_failure_cpu, work); 2473 2471 for (;;) { 2474 - spin_lock_irqsave(&mf_cpu->lock, proc_flags); 2472 + raw_spin_lock_irqsave(&mf_cpu->lock, proc_flags); 2475 2473 gotten = kfifo_get(&mf_cpu->fifo, &entry); 2476 - spin_unlock_irqrestore(&mf_cpu->lock, proc_flags); 2474 + raw_spin_unlock_irqrestore(&mf_cpu->lock, proc_flags); 2477 2475 if (!gotten) 2478 2476 break; 2479 2477 if (entry.flags & MF_SOFT_OFFLINE) ··· 2503 2501 2504 2502 for_each_possible_cpu(cpu) { 2505 2503 mf_cpu = &per_cpu(memory_failure_cpu, cpu); 2506 - spin_lock_init(&mf_cpu->lock); 2504 + raw_spin_lock_init(&mf_cpu->lock); 2507 2505 INIT_KFIFO(mf_cpu->fifo); 2508 2506 INIT_WORK(&mf_cpu->work, memory_failure_work_func); 2509 2507 }
+16 -17
mm/memory.c
··· 5295 5295 5296 5296 if (unlikely(!pte_same(old_pte, vmf->orig_pte))) { 5297 5297 pte_unmap_unlock(vmf->pte, vmf->ptl); 5298 - goto out; 5298 + return 0; 5299 5299 } 5300 5300 5301 5301 pte = pte_modify(old_pte, vma->vm_page_prot); ··· 5358 5358 if (!migrate_misplaced_folio(folio, vma, target_nid)) { 5359 5359 nid = target_nid; 5360 5360 flags |= TNF_MIGRATED; 5361 - } else { 5362 - flags |= TNF_MIGRATE_FAIL; 5363 - vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, 5364 - vmf->address, &vmf->ptl); 5365 - if (unlikely(!vmf->pte)) 5366 - goto out; 5367 - if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) { 5368 - pte_unmap_unlock(vmf->pte, vmf->ptl); 5369 - goto out; 5370 - } 5371 - goto out_map; 5361 + task_numa_fault(last_cpupid, nid, nr_pages, flags); 5362 + return 0; 5372 5363 } 5373 5364 5374 - out: 5375 - if (nid != NUMA_NO_NODE) 5376 - task_numa_fault(last_cpupid, nid, nr_pages, flags); 5377 - return 0; 5365 + flags |= TNF_MIGRATE_FAIL; 5366 + vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, 5367 + vmf->address, &vmf->ptl); 5368 + if (unlikely(!vmf->pte)) 5369 + return 0; 5370 + if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) { 5371 + pte_unmap_unlock(vmf->pte, vmf->ptl); 5372 + return 0; 5373 + } 5378 5374 out_map: 5379 5375 /* 5380 5376 * Make it present again, depending on how arch implements ··· 5383 5387 numa_rebuild_single_mapping(vmf, vma, vmf->address, vmf->pte, 5384 5388 writable); 5385 5389 pte_unmap_unlock(vmf->pte, vmf->ptl); 5386 - goto out; 5390 + 5391 + if (nid != NUMA_NO_NODE) 5392 + task_numa_fault(last_cpupid, nid, nr_pages, flags); 5393 + return 0; 5387 5394 } 5388 5395 5389 5396 static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)
+11 -5
mm/migrate.c
··· 1479 1479 return rc; 1480 1480 } 1481 1481 1482 - static inline int try_split_folio(struct folio *folio, struct list_head *split_folios) 1482 + static inline int try_split_folio(struct folio *folio, struct list_head *split_folios, 1483 + enum migrate_mode mode) 1483 1484 { 1484 1485 int rc; 1485 1486 1486 - folio_lock(folio); 1487 + if (mode == MIGRATE_ASYNC) { 1488 + if (!folio_trylock(folio)) 1489 + return -EAGAIN; 1490 + } else { 1491 + folio_lock(folio); 1492 + } 1487 1493 rc = split_folio_to_list(folio, split_folios); 1488 1494 folio_unlock(folio); 1489 1495 if (!rc) ··· 1683 1677 */ 1684 1678 if (nr_pages > 2 && 1685 1679 !list_empty(&folio->_deferred_list)) { 1686 - if (try_split_folio(folio, split_folios) == 0) { 1680 + if (!try_split_folio(folio, split_folios, mode)) { 1687 1681 nr_failed++; 1688 1682 stats->nr_thp_failed += is_thp; 1689 1683 stats->nr_thp_split += is_thp; ··· 1705 1699 if (!thp_migration_supported() && is_thp) { 1706 1700 nr_failed++; 1707 1701 stats->nr_thp_failed++; 1708 - if (!try_split_folio(folio, split_folios)) { 1702 + if (!try_split_folio(folio, split_folios, mode)) { 1709 1703 stats->nr_thp_split++; 1710 1704 stats->nr_split++; 1711 1705 continue; ··· 1737 1731 stats->nr_thp_failed += is_thp; 1738 1732 /* Large folio NUMA faulting doesn't split to retry. */ 1739 1733 if (is_large && !nosplit) { 1740 - int ret = try_split_folio(folio, split_folios); 1734 + int ret = try_split_folio(folio, split_folios, mode); 1741 1735 1742 1736 if (!ret) { 1743 1737 stats->nr_thp_split += is_thp;
+4 -11
mm/mm_init.c
··· 1623 1623 panic("Failed to allocate %ld bytes for node %d memory map\n", 1624 1624 size, pgdat->node_id); 1625 1625 pgdat->node_mem_map = map + offset; 1626 - mod_node_early_perpage_metadata(pgdat->node_id, 1627 - DIV_ROUND_UP(size, PAGE_SIZE)); 1626 + memmap_boot_pages_add(DIV_ROUND_UP(size, PAGE_SIZE)); 1628 1627 pr_debug("%s: node %d, pgdat %08lx, node_mem_map %08lx\n", 1629 1628 __func__, pgdat->node_id, (unsigned long)pgdat, 1630 1629 (unsigned long)pgdat->node_mem_map); ··· 2244 2245 2245 2246 set_pageblock_migratetype(page, MIGRATE_CMA); 2246 2247 set_page_refcounted(page); 2248 + /* pages were reserved and not allocated */ 2249 + clear_page_tag_ref(page); 2247 2250 __free_pages(page, pageblock_order); 2248 2251 2249 2252 adjust_managed_page_count(page, pageblock_nr_pages); ··· 2461 2460 } 2462 2461 2463 2462 /* pages were reserved and not allocated */ 2464 - if (mem_alloc_profiling_enabled()) { 2465 - union codetag_ref *ref = get_page_tag_ref(page); 2466 - 2467 - if (ref) { 2468 - set_codetag_empty(ref); 2469 - put_page_tag_ref(ref); 2470 - } 2471 - } 2472 - 2463 + clear_page_tag_ref(page); 2473 2464 __free_pages_core(page, order, MEMINIT_EARLY); 2474 2465 } 2475 2466
+11 -3
mm/mseal.c
··· 40 40 41 41 static bool is_madv_discard(int behavior) 42 42 { 43 - return behavior & 44 - (MADV_FREE | MADV_DONTNEED | MADV_DONTNEED_LOCKED | 45 - MADV_REMOVE | MADV_DONTFORK | MADV_WIPEONFORK); 43 + switch (behavior) { 44 + case MADV_FREE: 45 + case MADV_DONTNEED: 46 + case MADV_DONTNEED_LOCKED: 47 + case MADV_REMOVE: 48 + case MADV_DONTFORK: 49 + case MADV_WIPEONFORK: 50 + return true; 51 + } 52 + 53 + return false; 46 54 } 47 55 48 56 static bool is_ro_anon(struct vm_area_struct *vma)
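The switch replaces a bitwise test: the MADV_* advice codes are small sequential integers rather than flag bits, so OR-ing them produces a mask that also matches unrelated advice values. A tiny userspace illustration of the false positive (restricted to constants exposed by current glibc headers):

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
            /* Subset of the discard commands from the old bitmask test. */
            int mask = MADV_FREE | MADV_DONTNEED | MADV_REMOVE |
                       MADV_DONTFORK | MADV_WIPEONFORK;

            /* MADV_WILLNEED discards nothing, yet it still matches. */
            printf("%d\n", MADV_WILLNEED & mask);   /* prints a non-zero value */
            return 0;
    }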
+21 -31
mm/page_alloc.c
··· 287 287 288 288 static bool page_contains_unaccepted(struct page *page, unsigned int order); 289 289 static void accept_page(struct page *page, unsigned int order); 290 - static bool try_to_accept_memory(struct zone *zone, unsigned int order); 290 + static bool cond_accept_memory(struct zone *zone, unsigned int order); 291 291 static inline bool has_unaccepted_memory(void); 292 292 static bool __free_unaccepted(struct page *page); 293 293 ··· 3072 3072 if (!(alloc_flags & ALLOC_CMA)) 3073 3073 unusable_free += zone_page_state(z, NR_FREE_CMA_PAGES); 3074 3074 #endif 3075 - #ifdef CONFIG_UNACCEPTED_MEMORY 3076 - unusable_free += zone_page_state(z, NR_UNACCEPTED); 3077 - #endif 3078 3075 3079 3076 return unusable_free; 3080 3077 } ··· 3365 3368 } 3366 3369 } 3367 3370 3371 + cond_accept_memory(zone, order); 3372 + 3368 3373 /* 3369 3374 * Detect whether the number of free pages is below high 3370 3375 * watermark. If so, we will decrease pcp->high and free ··· 3392 3393 gfp_mask)) { 3393 3394 int ret; 3394 3395 3395 - if (has_unaccepted_memory()) { 3396 - if (try_to_accept_memory(zone, order)) 3397 - goto try_this_zone; 3398 - } 3396 + if (cond_accept_memory(zone, order)) 3397 + goto try_this_zone; 3399 3398 3400 3399 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT 3401 3400 /* ··· 3447 3450 3448 3451 return page; 3449 3452 } else { 3450 - if (has_unaccepted_memory()) { 3451 - if (try_to_accept_memory(zone, order)) 3452 - goto try_this_zone; 3453 - } 3453 + if (cond_accept_memory(zone, order)) 3454 + goto try_this_zone; 3454 3455 3455 3456 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT 3456 3457 /* Try again if zone has deferred pages */ ··· 5750 5755 for_each_online_pgdat(pgdat) 5751 5756 pgdat->per_cpu_nodestats = 5752 5757 alloc_percpu(struct per_cpu_nodestat); 5753 - store_early_perpage_metadata(); 5754 5758 } 5755 5759 5756 5760 __meminit void zone_pcp_init(struct zone *zone) ··· 5815 5821 5816 5822 void free_reserved_page(struct page *page) 5817 5823 { 5818 - if (mem_alloc_profiling_enabled()) { 5819 - union codetag_ref *ref = get_page_tag_ref(page); 5820 - 5821 - if (ref) { 5822 - set_codetag_empty(ref); 5823 - put_page_tag_ref(ref); 5824 - } 5825 - } 5824 + clear_page_tag_ref(page); 5826 5825 ClearPageReserved(page); 5827 5826 init_page_count(page); 5828 5827 __free_page(page); ··· 6938 6951 struct page *page; 6939 6952 bool last; 6940 6953 6941 - if (list_empty(&zone->unaccepted_pages)) 6942 - return false; 6943 - 6944 6954 spin_lock_irqsave(&zone->lock, flags); 6945 6955 page = list_first_entry_or_null(&zone->unaccepted_pages, 6946 6956 struct page, lru); ··· 6963 6979 return true; 6964 6980 } 6965 6981 6966 - static bool try_to_accept_memory(struct zone *zone, unsigned int order) 6982 + static bool cond_accept_memory(struct zone *zone, unsigned int order) 6967 6983 { 6968 6984 long to_accept; 6969 - int ret = false; 6985 + bool ret = false; 6986 + 6987 + if (!has_unaccepted_memory()) 6988 + return false; 6989 + 6990 + if (list_empty(&zone->unaccepted_pages)) 6991 + return false; 6970 6992 6971 6993 /* How much to accept to get to high watermark? 
*/ 6972 6994 to_accept = high_wmark_pages(zone) - 6973 6995 (zone_page_state(zone, NR_FREE_PAGES) - 6974 - __zone_watermark_unusable_free(zone, order, 0)); 6996 + __zone_watermark_unusable_free(zone, order, 0) - 6997 + zone_page_state(zone, NR_UNACCEPTED)); 6975 6998 6976 - /* Accept at least one page */ 6977 - do { 6999 + while (to_accept > 0) { 6978 7000 if (!try_to_accept_memory_one(zone)) 6979 7001 break; 6980 7002 ret = true; 6981 7003 to_accept -= MAX_ORDER_NR_PAGES; 6982 - } while (to_accept > 0); 7004 + } 6983 7005 6984 7006 return ret; 6985 7007 } ··· 7028 7038 { 7029 7039 } 7030 7040 7031 - static bool try_to_accept_memory(struct zone *zone, unsigned int order) 7041 + static bool cond_accept_memory(struct zone *zone, unsigned int order) 7032 7042 { 7033 7043 return false; 7034 7044 }
+4 -14
mm/page_ext.c
··· 214 214 return -ENOMEM; 215 215 NODE_DATA(nid)->node_page_ext = base; 216 216 total_usage += table_size; 217 - mod_node_page_state(NODE_DATA(nid), NR_MEMMAP_BOOT, 218 - DIV_ROUND_UP(table_size, PAGE_SIZE)); 217 + memmap_boot_pages_add(DIV_ROUND_UP(table_size, PAGE_SIZE)); 219 218 return 0; 220 219 } 221 220 ··· 274 275 else 275 276 addr = vzalloc_node(size, nid); 276 277 277 - if (addr) { 278 - mod_node_page_state(NODE_DATA(nid), NR_MEMMAP, 279 - DIV_ROUND_UP(size, PAGE_SIZE)); 280 - } 278 + if (addr) 279 + memmap_pages_add(DIV_ROUND_UP(size, PAGE_SIZE)); 281 280 282 281 return addr; 283 282 } ··· 320 323 { 321 324 size_t table_size; 322 325 struct page *page; 323 - struct pglist_data *pgdat; 324 326 325 327 table_size = page_ext_size * PAGES_PER_SECTION; 328 + memmap_pages_add(-1L * (DIV_ROUND_UP(table_size, PAGE_SIZE))); 326 329 327 330 if (is_vmalloc_addr(addr)) { 328 - page = vmalloc_to_page(addr); 329 - pgdat = page_pgdat(page); 330 331 vfree(addr); 331 332 } else { 332 333 page = virt_to_page(addr); 333 - pgdat = page_pgdat(page); 334 334 BUG_ON(PageReserved(page)); 335 335 kmemleak_free(addr); 336 336 free_pages_exact(addr, table_size); 337 337 } 338 - 339 - mod_node_page_state(pgdat, NR_MEMMAP, 340 - -1L * (DIV_ROUND_UP(table_size, PAGE_SIZE))); 341 - 342 338 } 343 339 344 340 static void __free_page_ext(unsigned long pfn)
+4 -7
mm/sparse-vmemmap.c
··· 469 469 if (r < 0) 470 470 return NULL; 471 471 472 - if (system_state == SYSTEM_BOOTING) { 473 - mod_node_early_perpage_metadata(nid, DIV_ROUND_UP(end - start, 474 - PAGE_SIZE)); 475 - } else { 476 - mod_node_page_state(NODE_DATA(nid), NR_MEMMAP, 477 - DIV_ROUND_UP(end - start, PAGE_SIZE)); 478 - } 472 + if (system_state == SYSTEM_BOOTING) 473 + memmap_boot_pages_add(DIV_ROUND_UP(end - start, PAGE_SIZE)); 474 + else 475 + memmap_pages_add(DIV_ROUND_UP(end - start, PAGE_SIZE)); 479 476 480 477 return pfn_to_page(pfn); 481 478 }
+2 -3
mm/sparse.c
··· 463 463 sparsemap_buf = memmap_alloc(size, section_map_size(), addr, nid, true); 464 464 sparsemap_buf_end = sparsemap_buf + size; 465 465 #ifndef CONFIG_SPARSEMEM_VMEMMAP 466 - mod_node_early_perpage_metadata(nid, DIV_ROUND_UP(size, PAGE_SIZE)); 466 + memmap_boot_pages_add(DIV_ROUND_UP(size, PAGE_SIZE)); 467 467 #endif 468 468 } 469 469 ··· 643 643 unsigned long start = (unsigned long) pfn_to_page(pfn); 644 644 unsigned long end = start + nr_pages * sizeof(struct page); 645 645 646 - mod_node_page_state(page_pgdat(pfn_to_page(pfn)), NR_MEMMAP, 647 - -1L * (DIV_ROUND_UP(end - start, PAGE_SIZE))); 646 + memmap_pages_add(-1L * (DIV_ROUND_UP(end - start, PAGE_SIZE))); 648 647 vmemmap_free(start, end, altmap); 649 648 } 650 649 static void free_map_bootmem(struct page *memmap)
+2 -9
mm/vmalloc.c
··· 3584 3584 page = alloc_pages_noprof(alloc_gfp, order); 3585 3585 else 3586 3586 page = alloc_pages_node_noprof(nid, alloc_gfp, order); 3587 - if (unlikely(!page)) { 3588 - if (!nofail) 3589 - break; 3590 - 3591 - /* fall back to the zero order allocations */ 3592 - alloc_gfp |= __GFP_NOFAIL; 3593 - order = 0; 3594 - continue; 3595 - } 3587 + if (unlikely(!page)) 3588 + break; 3596 3589 3597 3590 /* 3598 3591 * Higher order allocations must be able to be treated as
+25 -27
mm/vmstat.c
··· 1033 1033 } 1034 1034 #endif 1035 1035 1036 + /* 1037 + * Count number of pages "struct page" and "struct page_ext" consume. 1038 + * nr_memmap_boot_pages: # of pages allocated by boot allocator 1039 + * nr_memmap_pages: # of pages that were allocated by buddy allocator 1040 + */ 1041 + static atomic_long_t nr_memmap_boot_pages = ATOMIC_LONG_INIT(0); 1042 + static atomic_long_t nr_memmap_pages = ATOMIC_LONG_INIT(0); 1043 + 1044 + void memmap_boot_pages_add(long delta) 1045 + { 1046 + atomic_long_add(delta, &nr_memmap_boot_pages); 1047 + } 1048 + 1049 + void memmap_pages_add(long delta) 1050 + { 1051 + atomic_long_add(delta, &nr_memmap_pages); 1052 + } 1053 + 1036 1054 #ifdef CONFIG_COMPACTION 1037 1055 1038 1056 struct contig_page_info { ··· 1273 1255 "pgdemote_kswapd", 1274 1256 "pgdemote_direct", 1275 1257 "pgdemote_khugepaged", 1276 - "nr_memmap", 1277 - "nr_memmap_boot", 1278 - /* enum writeback_stat_item counters */ 1258 + /* system-wide enum vm_stat_item counters */ 1279 1259 "nr_dirty_threshold", 1280 1260 "nr_dirty_background_threshold", 1261 + "nr_memmap_pages", 1262 + "nr_memmap_boot_pages", 1281 1263 1282 1264 #if defined(CONFIG_VM_EVENT_COUNTERS) || defined(CONFIG_MEMCG) 1283 1265 /* enum vm_event_item counters */ ··· 1808 1790 #define NR_VMSTAT_ITEMS (NR_VM_ZONE_STAT_ITEMS + \ 1809 1791 NR_VM_NUMA_EVENT_ITEMS + \ 1810 1792 NR_VM_NODE_STAT_ITEMS + \ 1811 - NR_VM_WRITEBACK_STAT_ITEMS + \ 1793 + NR_VM_STAT_ITEMS + \ 1812 1794 (IS_ENABLED(CONFIG_VM_EVENT_COUNTERS) ? \ 1813 1795 NR_VM_EVENT_ITEMS : 0)) 1814 1796 ··· 1845 1827 1846 1828 global_dirty_limits(v + NR_DIRTY_BG_THRESHOLD, 1847 1829 v + NR_DIRTY_THRESHOLD); 1848 - v += NR_VM_WRITEBACK_STAT_ITEMS; 1830 + v[NR_MEMMAP_PAGES] = atomic_long_read(&nr_memmap_pages); 1831 + v[NR_MEMMAP_BOOT_PAGES] = atomic_long_read(&nr_memmap_boot_pages); 1832 + v += NR_VM_STAT_ITEMS; 1849 1833 1850 1834 #ifdef CONFIG_VM_EVENT_COUNTERS 1851 1835 all_vm_events(v); ··· 2305 2285 module_init(extfrag_debug_init); 2306 2286 2307 2287 #endif 2308 - 2309 - /* 2310 - * Page metadata size (struct page and page_ext) in pages 2311 - */ 2312 - static unsigned long early_perpage_metadata[MAX_NUMNODES] __meminitdata; 2313 - 2314 - void __meminit mod_node_early_perpage_metadata(int nid, long delta) 2315 - { 2316 - early_perpage_metadata[nid] += delta; 2317 - } 2318 - 2319 - void __meminit store_early_perpage_metadata(void) 2320 - { 2321 - int nid; 2322 - struct pglist_data *pgdat; 2323 - 2324 - for_each_online_pgdat(pgdat) { 2325 - nid = pgdat->node_id; 2326 - mod_node_page_state(NODE_DATA(nid), NR_MEMMAP_BOOT, 2327 - early_perpage_metadata[nid]); 2328 - } 2329 - }
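nr_memmap_pages and nr_memmap_boot_pages are now plain global atomics rather than per-node vmstat items, which is why the hugetlb_vmemmap, mm_init, page_ext and sparse call sites above collapse to a single helper call. A minimal sketch of the allocation-side pattern, mirroring those call sites; the helper names here are invented:

    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <linux/vmstat.h>

    static struct page *alloc_metadata_page(int nid, gfp_t gfp)
    {
            struct page *page = alloc_pages_node(nid, gfp, 0);

            if (page)
                    memmap_pages_add(1);    /* reported as nr_memmap_pages */
            return page;
    }

    static void free_metadata_page(struct page *page)
    {
            memmap_pages_add(-1);
            __free_page(page);
    }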
+5 -1
net/bridge/br_netfilter_hooks.c
··· 622 622 if (likely(nf_ct_is_confirmed(ct))) 623 623 return NF_ACCEPT; 624 624 625 + if (WARN_ON_ONCE(refcount_read(&nfct->use) != 1)) { 626 + nf_reset_ct(skb); 627 + return NF_ACCEPT; 628 + } 629 + 625 630 WARN_ON_ONCE(skb_shared(skb)); 626 - WARN_ON_ONCE(refcount_read(&nfct->use) != 1); 627 631 628 632 /* We can't call nf_confirm here, it would create a dependency 629 633 * on nf_conntrack module.
+17 -9
net/core/dev.c
··· 9912 9912 } 9913 9913 } 9914 9914 9915 + static bool netdev_has_ip_or_hw_csum(netdev_features_t features) 9916 + { 9917 + netdev_features_t ip_csum_mask = NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM; 9918 + bool ip_csum = (features & ip_csum_mask) == ip_csum_mask; 9919 + bool hw_csum = features & NETIF_F_HW_CSUM; 9920 + 9921 + return ip_csum || hw_csum; 9922 + } 9923 + 9915 9924 static netdev_features_t netdev_fix_features(struct net_device *dev, 9916 9925 netdev_features_t features) 9917 9926 { ··· 10002 9993 features &= ~NETIF_F_LRO; 10003 9994 } 10004 9995 10005 - if (features & NETIF_F_HW_TLS_TX) { 10006 - bool ip_csum = (features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) == 10007 - (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM); 10008 - bool hw_csum = features & NETIF_F_HW_CSUM; 10009 - 10010 - if (!ip_csum && !hw_csum) { 10011 - netdev_dbg(dev, "Dropping TLS TX HW offload feature since no CSUM feature.\n"); 10012 - features &= ~NETIF_F_HW_TLS_TX; 10013 - } 9996 + if ((features & NETIF_F_HW_TLS_TX) && !netdev_has_ip_or_hw_csum(features)) { 9997 + netdev_dbg(dev, "Dropping TLS TX HW offload feature since no CSUM feature.\n"); 9998 + features &= ~NETIF_F_HW_TLS_TX; 10014 9999 } 10015 10000 10016 10001 if ((features & NETIF_F_HW_TLS_RX) && !(features & NETIF_F_RXCSUM)) { 10017 10002 netdev_dbg(dev, "Dropping TLS RX HW offload feature since no RXCSUM feature.\n"); 10018 10003 features &= ~NETIF_F_HW_TLS_RX; 10004 + } 10005 + 10006 + if ((features & NETIF_F_GSO_UDP_L4) && !netdev_has_ip_or_hw_csum(features)) { 10007 + netdev_dbg(dev, "Dropping USO feature since no CSUM feature.\n"); 10008 + features &= ~NETIF_F_GSO_UDP_L4; 10019 10009 } 10020 10010 10021 10011 return features;
+6 -2
net/ethtool/cmis_fw_update.c
··· 35 35 __be16 resv7; 36 36 }; 37 37 38 - #define CMIS_CDB_FW_WRITE_MECHANISM_LPL 0x01 38 + enum cmis_cdb_fw_write_mechanism { 39 + CMIS_CDB_FW_WRITE_MECHANISM_LPL = 0x01, 40 + CMIS_CDB_FW_WRITE_MECHANISM_BOTH = 0x11, 41 + }; 39 42 40 43 static int 41 44 cmis_fw_update_fw_mng_features_get(struct ethtool_cmis_cdb *cdb, ··· 67 64 } 68 65 69 66 rpl = (struct cmis_cdb_fw_mng_features_rpl *)args.req.payload; 70 - if (!(rpl->write_mechanism == CMIS_CDB_FW_WRITE_MECHANISM_LPL)) { 67 + if (!(rpl->write_mechanism == CMIS_CDB_FW_WRITE_MECHANISM_LPL || 68 + rpl->write_mechanism == CMIS_CDB_FW_WRITE_MECHANISM_BOTH)) { 71 69 ethnl_module_fw_flash_ntf_err(dev, ntf_params, 72 70 "Write LPL is not supported", 73 71 NULL);
+12 -16
net/ipv4/tcp_input.c
··· 238 238 */ 239 239 if (unlikely(len != icsk->icsk_ack.rcv_mss)) { 240 240 u64 val = (u64)skb->len << TCP_RMEM_TO_WIN_SCALE; 241 + u8 old_ratio = tcp_sk(sk)->scaling_ratio; 241 242 242 243 do_div(val, skb->truesize); 243 244 tcp_sk(sk)->scaling_ratio = val ? val : 1; 245 + 246 + if (old_ratio != tcp_sk(sk)->scaling_ratio) 247 + WRITE_ONCE(tcp_sk(sk)->window_clamp, 248 + tcp_win_from_space(sk, sk->sk_rcvbuf)); 244 249 } 245 250 icsk->icsk_ack.rcv_mss = min_t(unsigned int, len, 246 251 tcp_sk(sk)->advmss); ··· 759 754 * <prev RTT . ><current RTT .. ><next RTT .... > 760 755 */ 761 756 762 - if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf)) { 757 + if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf) && 758 + !(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) { 763 759 u64 rcvwin, grow; 764 760 int rcvbuf; 765 761 ··· 776 770 777 771 rcvbuf = min_t(u64, tcp_space_from_win(sk, rcvwin), 778 772 READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2])); 779 - if (!(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) { 780 - if (rcvbuf > sk->sk_rcvbuf) { 781 - WRITE_ONCE(sk->sk_rcvbuf, rcvbuf); 773 + if (rcvbuf > sk->sk_rcvbuf) { 774 + WRITE_ONCE(sk->sk_rcvbuf, rcvbuf); 782 775 783 - /* Make the window clamp follow along. */ 784 - WRITE_ONCE(tp->window_clamp, 785 - tcp_win_from_space(sk, rcvbuf)); 786 - } 787 - } else { 788 - /* Make the window clamp follow along while being bounded 789 - * by SO_RCVBUF. 790 - */ 791 - int clamp = tcp_win_from_space(sk, min(rcvbuf, sk->sk_rcvbuf)); 792 - 793 - if (clamp > tp->window_clamp) 794 - WRITE_ONCE(tp->window_clamp, clamp); 776 + /* Make the window clamp follow along. */ 777 + WRITE_ONCE(tp->window_clamp, 778 + tcp_win_from_space(sk, rcvbuf)); 795 779 } 796 780 } 797 781 tp->rcvq_space.space = copied;
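tp->scaling_ratio caches skb->len / skb->truesize scaled by 2^TCP_RMEM_TO_WIN_SCALE, and the change above re-derives window_clamp whenever that ratio moves. A back-of-the-envelope sketch of the arithmetic, assuming TCP_RMEM_TO_WIN_SCALE is 8 as in current kernels; the packet and buffer sizes are made up:

    #include <stdio.h>

    #define TCP_RMEM_TO_WIN_SCALE 8   /* assumed value */

    int main(void)
    {
            unsigned int skb_len = 1400, truesize = 2048, rcvbuf = 131072;
            unsigned int ratio, window;

            ratio  = (skb_len << TCP_RMEM_TO_WIN_SCALE) / truesize;                 /* 175 */
            window = ((unsigned long long)rcvbuf * ratio) >> TCP_RMEM_TO_WIN_SCALE; /* 89600 */

            printf("scaling_ratio=%u window_clamp=%u\n", ratio, window);
            return 0;
    }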
+6
net/ipv4/udp_offload.c
··· 282 282 skb_transport_header(gso_skb))) 283 283 return ERR_PTR(-EINVAL); 284 284 285 + /* We don't know if egress device can segment and checksum the packet 286 + * when IPv6 extension headers are present. Fall back to software GSO. 287 + */ 288 + if (gso_skb->ip_summed != CHECKSUM_PARTIAL) 289 + features &= ~(NETIF_F_GSO_UDP_L4 | NETIF_F_CSUM_MASK); 290 + 285 291 if (skb_gso_ok(gso_skb, features | NETIF_F_GSO_ROBUST)) { 286 292 /* Packet is from an untrusted source, reset gso_segs. */ 287 293 skb_shinfo(gso_skb)->gso_segs = DIV_ROUND_UP(gso_skb->len - sizeof(*uh),
+4
net/ipv6/netfilter/nf_conntrack_reasm.c
··· 154 154 }; 155 155 struct inet_frag_queue *q; 156 156 157 + if (!(ipv6_addr_type(&hdr->daddr) & (IPV6_ADDR_MULTICAST | 158 + IPV6_ADDR_LINKLOCAL))) 159 + key.iif = 0; 160 + 157 161 q = inet_frag_find(nf_frag->fqdir, &key); 158 162 if (!q) 159 163 return NULL;
+1 -1
net/mptcp/diag.c
··· 94 94 nla_total_size(4) + /* MPTCP_SUBFLOW_ATTR_RELWRITE_SEQ */ 95 95 nla_total_size_64bit(8) + /* MPTCP_SUBFLOW_ATTR_MAP_SEQ */ 96 96 nla_total_size(4) + /* MPTCP_SUBFLOW_ATTR_MAP_SFSEQ */ 97 - nla_total_size(2) + /* MPTCP_SUBFLOW_ATTR_SSN_OFFSET */ 97 + nla_total_size(4) + /* MPTCP_SUBFLOW_ATTR_SSN_OFFSET */ 98 98 nla_total_size(2) + /* MPTCP_SUBFLOW_ATTR_MAP_DATALEN */ 99 99 nla_total_size(4) + /* MPTCP_SUBFLOW_ATTR_FLAGS */ 100 100 nla_total_size(1) + /* MPTCP_SUBFLOW_ATTR_ID_REM */
+1 -1
net/netfilter/nf_flow_table_offload.c
··· 841 841 struct list_head *block_cb_list) 842 842 { 843 843 struct flow_cls_offload cls_flow = {}; 844 + struct netlink_ext_ack extack = {}; 844 845 struct flow_block_cb *block_cb; 845 - struct netlink_ext_ack extack; 846 846 __be16 proto = ETH_P_ALL; 847 847 int err, i = 0; 848 848
+110 -53
net/netfilter/nf_tables_api.c
··· 8020 8020 return skb->len; 8021 8021 } 8022 8022 8023 + static int nf_tables_dumpreset_obj(struct sk_buff *skb, 8024 + struct netlink_callback *cb) 8025 + { 8026 + struct nftables_pernet *nft_net = nft_pernet(sock_net(skb->sk)); 8027 + int ret; 8028 + 8029 + mutex_lock(&nft_net->commit_mutex); 8030 + ret = nf_tables_dump_obj(skb, cb); 8031 + mutex_unlock(&nft_net->commit_mutex); 8032 + 8033 + return ret; 8034 + } 8035 + 8023 8036 static int nf_tables_dump_obj_start(struct netlink_callback *cb) 8024 8037 { 8025 8038 struct nft_obj_dump_ctx *ctx = (void *)cb->ctx; ··· 8049 8036 if (nla[NFTA_OBJ_TYPE]) 8050 8037 ctx->type = ntohl(nla_get_be32(nla[NFTA_OBJ_TYPE])); 8051 8038 8052 - if (NFNL_MSG_TYPE(cb->nlh->nlmsg_type) == NFT_MSG_GETOBJ_RESET) 8053 - ctx->reset = true; 8054 - 8055 8039 return 0; 8040 + } 8041 + 8042 + static int nf_tables_dumpreset_obj_start(struct netlink_callback *cb) 8043 + { 8044 + struct nft_obj_dump_ctx *ctx = (void *)cb->ctx; 8045 + 8046 + ctx->reset = true; 8047 + 8048 + return nf_tables_dump_obj_start(cb); 8056 8049 } 8057 8050 8058 8051 static int nf_tables_dump_obj_done(struct netlink_callback *cb) ··· 8071 8052 } 8072 8053 8073 8054 /* called with rcu_read_lock held */ 8074 - static int nf_tables_getobj(struct sk_buff *skb, const struct nfnl_info *info, 8075 - const struct nlattr * const nla[]) 8055 + static struct sk_buff * 8056 + nf_tables_getobj_single(u32 portid, const struct nfnl_info *info, 8057 + const struct nlattr * const nla[], bool reset) 8076 8058 { 8077 8059 struct netlink_ext_ack *extack = info->extack; 8078 8060 u8 genmask = nft_genmask_cur(info->net); ··· 8082 8062 struct net *net = info->net; 8083 8063 struct nft_object *obj; 8084 8064 struct sk_buff *skb2; 8085 - bool reset = false; 8086 8065 u32 objtype; 8087 8066 int err; 8067 + 8068 + if (!nla[NFTA_OBJ_NAME] || 8069 + !nla[NFTA_OBJ_TYPE]) 8070 + return ERR_PTR(-EINVAL); 8071 + 8072 + table = nft_table_lookup(net, nla[NFTA_OBJ_TABLE], family, genmask, 0); 8073 + if (IS_ERR(table)) { 8074 + NL_SET_BAD_ATTR(extack, nla[NFTA_OBJ_TABLE]); 8075 + return ERR_CAST(table); 8076 + } 8077 + 8078 + objtype = ntohl(nla_get_be32(nla[NFTA_OBJ_TYPE])); 8079 + obj = nft_obj_lookup(net, table, nla[NFTA_OBJ_NAME], objtype, genmask); 8080 + if (IS_ERR(obj)) { 8081 + NL_SET_BAD_ATTR(extack, nla[NFTA_OBJ_NAME]); 8082 + return ERR_CAST(obj); 8083 + } 8084 + 8085 + skb2 = alloc_skb(NLMSG_GOODSIZE, GFP_ATOMIC); 8086 + if (!skb2) 8087 + return ERR_PTR(-ENOMEM); 8088 + 8089 + err = nf_tables_fill_obj_info(skb2, net, portid, 8090 + info->nlh->nlmsg_seq, NFT_MSG_NEWOBJ, 0, 8091 + family, table, obj, reset); 8092 + if (err < 0) { 8093 + kfree_skb(skb2); 8094 + return ERR_PTR(err); 8095 + } 8096 + 8097 + return skb2; 8098 + } 8099 + 8100 + static int nf_tables_getobj(struct sk_buff *skb, const struct nfnl_info *info, 8101 + const struct nlattr * const nla[]) 8102 + { 8103 + u32 portid = NETLINK_CB(skb).portid; 8104 + struct sk_buff *skb2; 8088 8105 8089 8106 if (info->nlh->nlmsg_flags & NLM_F_DUMP) { 8090 8107 struct netlink_dump_control c = { ··· 8135 8078 return nft_netlink_dump_start_rcu(info->sk, skb, info->nlh, &c); 8136 8079 } 8137 8080 8138 - if (!nla[NFTA_OBJ_NAME] || 8139 - !nla[NFTA_OBJ_TYPE]) 8081 + skb2 = nf_tables_getobj_single(portid, info, nla, false); 8082 + if (IS_ERR(skb2)) 8083 + return PTR_ERR(skb2); 8084 + 8085 + return nfnetlink_unicast(skb2, info->net, portid); 8086 + } 8087 + 8088 + static int nf_tables_getobj_reset(struct sk_buff *skb, 8089 + const struct nfnl_info *info, 8090 + const struct 
nlattr * const nla[]) 8091 + { 8092 + struct nftables_pernet *nft_net = nft_pernet(info->net); 8093 + u32 portid = NETLINK_CB(skb).portid; 8094 + struct net *net = info->net; 8095 + struct sk_buff *skb2; 8096 + char *buf; 8097 + 8098 + if (info->nlh->nlmsg_flags & NLM_F_DUMP) { 8099 + struct netlink_dump_control c = { 8100 + .start = nf_tables_dumpreset_obj_start, 8101 + .dump = nf_tables_dumpreset_obj, 8102 + .done = nf_tables_dump_obj_done, 8103 + .module = THIS_MODULE, 8104 + .data = (void *)nla, 8105 + }; 8106 + 8107 + return nft_netlink_dump_start_rcu(info->sk, skb, info->nlh, &c); 8108 + } 8109 + 8110 + if (!try_module_get(THIS_MODULE)) 8140 8111 return -EINVAL; 8112 + rcu_read_unlock(); 8113 + mutex_lock(&nft_net->commit_mutex); 8114 + skb2 = nf_tables_getobj_single(portid, info, nla, true); 8115 + mutex_unlock(&nft_net->commit_mutex); 8116 + rcu_read_lock(); 8117 + module_put(THIS_MODULE); 8141 8118 8142 - table = nft_table_lookup(net, nla[NFTA_OBJ_TABLE], family, genmask, 0); 8143 - if (IS_ERR(table)) { 8144 - NL_SET_BAD_ATTR(extack, nla[NFTA_OBJ_TABLE]); 8145 - return PTR_ERR(table); 8146 - } 8119 + if (IS_ERR(skb2)) 8120 + return PTR_ERR(skb2); 8147 8121 8148 - objtype = ntohl(nla_get_be32(nla[NFTA_OBJ_TYPE])); 8149 - obj = nft_obj_lookup(net, table, nla[NFTA_OBJ_NAME], objtype, genmask); 8150 - if (IS_ERR(obj)) { 8151 - NL_SET_BAD_ATTR(extack, nla[NFTA_OBJ_NAME]); 8152 - return PTR_ERR(obj); 8153 - } 8122 + buf = kasprintf(GFP_ATOMIC, "%.*s:%u", 8123 + nla_len(nla[NFTA_OBJ_TABLE]), 8124 + (char *)nla_data(nla[NFTA_OBJ_TABLE]), 8125 + nft_net->base_seq); 8126 + audit_log_nfcfg(buf, info->nfmsg->nfgen_family, 1, 8127 + AUDIT_NFT_OP_OBJ_RESET, GFP_ATOMIC); 8128 + kfree(buf); 8154 8129 8155 - skb2 = alloc_skb(NLMSG_GOODSIZE, GFP_ATOMIC); 8156 - if (!skb2) 8157 - return -ENOMEM; 8158 - 8159 - if (NFNL_MSG_TYPE(info->nlh->nlmsg_type) == NFT_MSG_GETOBJ_RESET) 8160 - reset = true; 8161 - 8162 - if (reset) { 8163 - const struct nftables_pernet *nft_net; 8164 - char *buf; 8165 - 8166 - nft_net = nft_pernet(net); 8167 - buf = kasprintf(GFP_ATOMIC, "%s:%u", table->name, nft_net->base_seq); 8168 - 8169 - audit_log_nfcfg(buf, 8170 - family, 8171 - 1, 8172 - AUDIT_NFT_OP_OBJ_RESET, 8173 - GFP_ATOMIC); 8174 - kfree(buf); 8175 - } 8176 - 8177 - err = nf_tables_fill_obj_info(skb2, net, NETLINK_CB(skb).portid, 8178 - info->nlh->nlmsg_seq, NFT_MSG_NEWOBJ, 0, 8179 - family, table, obj, reset); 8180 - if (err < 0) 8181 - goto err_fill_obj_info; 8182 - 8183 - return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid); 8184 - 8185 - err_fill_obj_info: 8186 - kfree_skb(skb2); 8187 - return err; 8130 + return nfnetlink_unicast(skb2, net, portid); 8188 8131 } 8189 8132 8190 8133 static void nft_obj_destroy(const struct nft_ctx *ctx, struct nft_object *obj) ··· 9467 9410 .policy = nft_obj_policy, 9468 9411 }, 9469 9412 [NFT_MSG_GETOBJ_RESET] = { 9470 - .call = nf_tables_getobj, 9413 + .call = nf_tables_getobj_reset, 9471 9414 .type = NFNL_CB_RCU, 9472 9415 .attr_count = NFTA_OBJ_MAX, 9473 9416 .policy = nft_obj_policy,
+4 -1
net/netfilter/nfnetlink.c
··· 427 427 428 428 nfnl_unlock(subsys_id); 429 429 430 - if (nlh->nlmsg_flags & NLM_F_ACK) 430 + if (nlh->nlmsg_flags & NLM_F_ACK) { 431 + memset(&extack, 0, sizeof(extack)); 431 432 nfnl_err_add(&err_list, nlh, 0, &extack); 433 + } 432 434 433 435 while (skb->len >= nlmsg_total_size(0)) { 434 436 int msglen, type; ··· 579 577 ss->abort(net, oskb, NFNL_ABORT_NONE); 580 578 netlink_ack(oskb, nlmsg_hdr(oskb), err, NULL); 581 579 } else if (nlh->nlmsg_flags & NLM_F_ACK) { 580 + memset(&extack, 0, sizeof(extack)); 582 581 nfnl_err_add(&err_list, nlh, 0, &extack); 583 582 } 584 583 } else {
+29 -21
net/vmw_vsock/af_vsock.c
··· 1270 1270 return err; 1271 1271 } 1272 1272 1273 + int __vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg, 1274 + size_t len, int flags) 1275 + { 1276 + struct sock *sk = sock->sk; 1277 + struct vsock_sock *vsk = vsock_sk(sk); 1278 + 1279 + return vsk->transport->dgram_dequeue(vsk, msg, len, flags); 1280 + } 1281 + 1273 1282 int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg, 1274 1283 size_t len, int flags) 1275 1284 { 1276 1285 #ifdef CONFIG_BPF_SYSCALL 1286 + struct sock *sk = sock->sk; 1277 1287 const struct proto *prot; 1278 - #endif 1279 - struct vsock_sock *vsk; 1280 - struct sock *sk; 1281 1288 1282 - sk = sock->sk; 1283 - vsk = vsock_sk(sk); 1284 - 1285 - #ifdef CONFIG_BPF_SYSCALL 1286 1289 prot = READ_ONCE(sk->sk_prot); 1287 1290 if (prot != &vsock_proto) 1288 1291 return prot->recvmsg(sk, msg, len, flags, NULL); 1289 1292 #endif 1290 1293 1291 - return vsk->transport->dgram_dequeue(vsk, msg, len, flags); 1294 + return __vsock_dgram_recvmsg(sock, msg, len, flags); 1292 1295 } 1293 1296 EXPORT_SYMBOL_GPL(vsock_dgram_recvmsg); 1294 1297 ··· 2177 2174 } 2178 2175 2179 2176 int 2180 - vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, 2181 - int flags) 2177 + __vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, 2178 + int flags) 2182 2179 { 2183 2180 struct sock *sk; 2184 2181 struct vsock_sock *vsk; 2185 2182 const struct vsock_transport *transport; 2186 - #ifdef CONFIG_BPF_SYSCALL 2187 - const struct proto *prot; 2188 - #endif 2189 2183 int err; 2190 2184 2191 2185 sk = sock->sk; ··· 2233 2233 goto out; 2234 2234 } 2235 2235 2236 - #ifdef CONFIG_BPF_SYSCALL 2237 - prot = READ_ONCE(sk->sk_prot); 2238 - if (prot != &vsock_proto) { 2239 - release_sock(sk); 2240 - return prot->recvmsg(sk, msg, len, flags, NULL); 2241 - } 2242 - #endif 2243 - 2244 2236 if (sk->sk_type == SOCK_STREAM) 2245 2237 err = __vsock_stream_recvmsg(sk, msg, len, flags); 2246 2238 else ··· 2241 2249 out: 2242 2250 release_sock(sk); 2243 2251 return err; 2252 + } 2253 + 2254 + int 2255 + vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, 2256 + int flags) 2257 + { 2258 + #ifdef CONFIG_BPF_SYSCALL 2259 + struct sock *sk = sock->sk; 2260 + const struct proto *prot; 2261 + 2262 + prot = READ_ONCE(sk->sk_prot); 2263 + if (prot != &vsock_proto) 2264 + return prot->recvmsg(sk, msg, len, flags, NULL); 2265 + #endif 2266 + 2267 + return __vsock_connectible_recvmsg(sock, msg, len, flags); 2244 2268 } 2245 2269 EXPORT_SYMBOL_GPL(vsock_connectible_recvmsg); 2246 2270
+2 -2
net/vmw_vsock/vsock_bpf.c
··· 64 64 int err; 65 65 66 66 if (sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET) 67 - err = vsock_connectible_recvmsg(sock, msg, len, flags); 67 + err = __vsock_connectible_recvmsg(sock, msg, len, flags); 68 68 else if (sk->sk_type == SOCK_DGRAM) 69 - err = vsock_dgram_recvmsg(sock, msg, len, flags); 69 + err = __vsock_dgram_recvmsg(sock, msg, len, flags); 70 70 else 71 71 err = -EPROTOTYPE; 72 72
+4 -4
rust/Makefile
··· 227 227 -fno-reorder-blocks -fno-allow-store-data-races -fasan-shadow-offset=% \ 228 228 -fzero-call-used-regs=% -fno-stack-clash-protection \ 229 229 -fno-inline-functions-called-once -fsanitize=bounds-strict \ 230 - -fstrict-flex-arrays=% \ 230 + -fstrict-flex-arrays=% -fmin-function-alignment=% \ 231 231 --param=% --param asan-% 232 232 233 233 # Derived from `scripts/Makefile.clang`. ··· 350 350 $(Q)$(srctree)/scripts/generate_rust_analyzer.py \ 351 351 --cfgs='core=$(core-cfgs)' --cfgs='alloc=$(alloc-cfgs)' \ 352 352 $(realpath $(srctree)) $(realpath $(objtree)) \ 353 - $(RUST_LIB_SRC) $(KBUILD_EXTMOD) > \ 353 + $(rustc_sysroot) $(RUST_LIB_SRC) $(KBUILD_EXTMOD) > \ 354 354 $(if $(KBUILD_EXTMOD),$(extmod_prefix),$(objtree))/rust-project.json 355 355 356 356 redirect-intrinsics = \ 357 - __addsf3 __eqsf2 __gesf2 __lesf2 __ltsf2 __mulsf3 __nesf2 __unordsf2 \ 358 - __adddf3 __ledf2 __ltdf2 __muldf3 __unorddf2 \ 357 + __addsf3 __eqsf2 __extendsfdf2 __gesf2 __lesf2 __ltsf2 __mulsf3 __nesf2 __truncdfsf2 __unordsf2 \ 358 + __adddf3 __eqdf2 __ledf2 __ltdf2 __muldf3 __unorddf2 \ 359 359 __muloti4 __multi3 \ 360 360 __udivmodti4 __udivti3 __umodti3 361 361
+3
rust/compiler_builtins.rs
··· 40 40 define_panicking_intrinsics!("`f32` should not be used", { 41 41 __addsf3, 42 42 __eqsf2, 43 + __extendsfdf2, 43 44 __gesf2, 44 45 __lesf2, 45 46 __ltsf2, 46 47 __mulsf3, 47 48 __nesf2, 49 + __truncdfsf2, 48 50 __unordsf2, 49 51 }); 50 52 51 53 define_panicking_intrinsics!("`f64` should not be used", { 52 54 __adddf3, 55 + __eqdf2, 53 56 __ledf2, 54 57 __ltdf2, 55 58 __muldf3,
+1 -1
rust/macros/lib.rs
··· 94 94 /// - `license`: ASCII string literal of the license of the kernel module (required). 95 95 /// - `alias`: array of ASCII string literals of the alias names of the kernel module. 96 96 /// - `firmware`: array of ASCII string literals of the firmware files of 97 - /// the kernel module. 97 + /// the kernel module. 98 98 #[proc_macro] 99 99 pub fn module(ts: TokenStream) -> TokenStream { 100 100 module::module(ts)
-4
scripts/gcc-plugins/randomize_layout_plugin.c
··· 19 19 #include "gcc-common.h" 20 20 #include "randomize_layout_seed.h" 21 21 22 - #if BUILDING_GCC_MAJOR < 4 || (BUILDING_GCC_MAJOR == 4 && BUILDING_GCC_MINOR < 7) 23 - #error "The RANDSTRUCT plugin requires GCC 4.7 or newer." 24 - #endif 25 - 26 22 #define ORIG_TYPE_NAME(node) \ 27 23 (TYPE_NAME(TYPE_MAIN_VARIANT(node)) != NULL_TREE ? ((const unsigned char *)IDENTIFIER_POINTER(TYPE_NAME(TYPE_MAIN_VARIANT(node)))) : (const unsigned char *)"anonymous") 28 24
+5 -1
scripts/generate_rust_analyzer.py
··· 145 145 parser.add_argument('--cfgs', action='append', default=[]) 146 146 parser.add_argument("srctree", type=pathlib.Path) 147 147 parser.add_argument("objtree", type=pathlib.Path) 148 + parser.add_argument("sysroot", type=pathlib.Path) 148 149 parser.add_argument("sysroot_src", type=pathlib.Path) 149 150 parser.add_argument("exttree", type=pathlib.Path, nargs="?") 150 151 args = parser.parse_args() ··· 155 154 level=logging.INFO if args.verbose else logging.WARNING 156 155 ) 157 156 157 + # Making sure that the `sysroot` and `sysroot_src` belong to the same toolchain. 158 + assert args.sysroot in args.sysroot_src.parents 159 + 158 160 rust_project = { 159 161 "crates": generate_crates(args.srctree, args.objtree, args.sysroot_src, args.exttree, args.cfgs), 160 - "sysroot_src": str(args.sysroot_src), 162 + "sysroot": str(args.sysroot), 161 163 } 162 164 163 165 json.dump(rust_project, sys.stdout, sort_keys=True, indent=4)
+2 -2
scripts/generate_rust_target.rs
··· 162 162 "data-layout", 163 163 "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128", 164 164 ); 165 - let mut features = "-3dnow,-3dnowa,-mmx,+soft-float".to_string(); 165 + let mut features = "-mmx,+soft-float".to_string(); 166 166 if cfg.has("MITIGATION_RETPOLINE") { 167 167 features += ",+retpoline-external-thunk"; 168 168 } ··· 179 179 "data-layout", 180 180 "e-m:e-p:32:32-p270:32:32-p271:32:32-p272:64:64-i128:128-f64:32:64-f80:32-n8:16:32-S128", 181 181 ); 182 - let mut features = "-3dnow,-3dnowa,-mmx,+soft-float".to_string(); 182 + let mut features = "-mmx,+soft-float".to_string(); 183 183 if cfg.has("MITIGATION_RETPOLINE") { 184 184 features += ",+retpoline-external-thunk"; 185 185 }
+2 -29
scripts/kallsyms.c
··· 5 5 * This software may be used and distributed according to the terms 6 6 * of the GNU General Public License, incorporated herein by reference. 7 7 * 8 - * Usage: kallsyms [--all-symbols] [--absolute-percpu] 9 - * [--lto-clang] in.map > out.S 8 + * Usage: kallsyms [--all-symbols] [--absolute-percpu] in.map > out.S 10 9 * 11 10 * Table compression uses all the unused char codes on the symbols and 12 11 * maps these to the most used substrings (tokens). For instance, it might ··· 61 62 static unsigned int table_size, table_cnt; 62 63 static int all_symbols; 63 64 static int absolute_percpu; 64 - static int lto_clang; 65 65 66 66 static int token_profit[0x10000]; 67 67 ··· 71 73 72 74 static void usage(void) 73 75 { 74 - fprintf(stderr, "Usage: kallsyms [--all-symbols] [--absolute-percpu] " 75 - "[--lto-clang] in.map > out.S\n"); 76 + fprintf(stderr, "Usage: kallsyms [--all-symbols] [--absolute-percpu] in.map > out.S\n"); 76 77 exit(1); 77 78 } 78 79 ··· 341 344 return s->percpu_absolute; 342 345 } 343 346 344 - static void cleanup_symbol_name(char *s) 345 - { 346 - char *p; 347 - 348 - /* 349 - * ASCII[.] = 2e 350 - * ASCII[0-9] = 30,39 351 - * ASCII[A-Z] = 41,5a 352 - * ASCII[_] = 5f 353 - * ASCII[a-z] = 61,7a 354 - * 355 - * As above, replacing the first '.' in ".llvm." with '\0' does not 356 - * affect the main sorting, but it helps us with subsorting. 357 - */ 358 - p = strstr(s, ".llvm."); 359 - if (p) 360 - *p = '\0'; 361 - } 362 - 363 347 static int compare_names(const void *a, const void *b) 364 348 { 365 349 int ret; ··· 503 525 output_label("kallsyms_relative_base"); 504 526 output_address(relative_base); 505 527 printf("\n"); 506 - 507 - if (lto_clang) 508 - for (i = 0; i < table_cnt; i++) 509 - cleanup_symbol_name((char *)table[i]->sym); 510 528 511 529 sort_symbols_by_name(); 512 530 output_label("kallsyms_seqs_of_names"); ··· 781 807 static const struct option long_options[] = { 782 808 {"all-symbols", no_argument, &all_symbols, 1}, 783 809 {"absolute-percpu", no_argument, &absolute_percpu, 1}, 784 - {"lto-clang", no_argument, &lto_clang, 1}, 785 810 {}, 786 811 }; 787 812
+22 -13
security/keys/trusted-keys/trusted_dcp.c
··· 186 186 return ret; 187 187 } 188 188 189 - static int decrypt_blob_key(u8 *key) 189 + static int decrypt_blob_key(u8 *encrypted_key, u8 *plain_key) 190 190 { 191 - return do_dcp_crypto(key, key, false); 191 + return do_dcp_crypto(encrypted_key, plain_key, false); 192 192 } 193 193 194 - static int encrypt_blob_key(u8 *key) 194 + static int encrypt_blob_key(u8 *plain_key, u8 *encrypted_key) 195 195 { 196 - return do_dcp_crypto(key, key, true); 196 + return do_dcp_crypto(plain_key, encrypted_key, true); 197 197 } 198 198 199 199 static int trusted_dcp_seal(struct trusted_key_payload *p, char *datablob) 200 200 { 201 201 struct dcp_blob_fmt *b = (struct dcp_blob_fmt *)p->blob; 202 202 int blen, ret; 203 + u8 plain_blob_key[AES_KEYSIZE_128]; 203 204 204 205 blen = calc_blob_len(p->key_len); 205 206 if (blen > MAX_BLOB_SIZE) ··· 208 207 209 208 b->fmt_version = DCP_BLOB_VERSION; 210 209 get_random_bytes(b->nonce, AES_KEYSIZE_128); 211 - get_random_bytes(b->blob_key, AES_KEYSIZE_128); 210 + get_random_bytes(plain_blob_key, AES_KEYSIZE_128); 212 211 213 - ret = do_aead_crypto(p->key, b->payload, p->key_len, b->blob_key, 212 + ret = do_aead_crypto(p->key, b->payload, p->key_len, plain_blob_key, 214 213 b->nonce, true); 215 214 if (ret) { 216 215 pr_err("Unable to encrypt blob payload: %i\n", ret); 217 - return ret; 216 + goto out; 218 217 } 219 218 220 - ret = encrypt_blob_key(b->blob_key); 219 + ret = encrypt_blob_key(plain_blob_key, b->blob_key); 221 220 if (ret) { 222 221 pr_err("Unable to encrypt blob key: %i\n", ret); 223 - return ret; 222 + goto out; 224 223 } 225 224 226 - b->payload_len = get_unaligned_le32(&p->key_len); 225 + put_unaligned_le32(p->key_len, &b->payload_len); 227 226 p->blob_len = blen; 228 - return 0; 227 + ret = 0; 228 + 229 + out: 230 + memzero_explicit(plain_blob_key, sizeof(plain_blob_key)); 231 + 232 + return ret; 229 233 } 230 234 231 235 static int trusted_dcp_unseal(struct trusted_key_payload *p, char *datablob) 232 236 { 233 237 struct dcp_blob_fmt *b = (struct dcp_blob_fmt *)p->blob; 234 238 int blen, ret; 239 + u8 plain_blob_key[AES_KEYSIZE_128]; 235 240 236 241 if (b->fmt_version != DCP_BLOB_VERSION) { 237 242 pr_err("DCP blob has bad version: %i, expected %i\n", ··· 255 248 goto out; 256 249 } 257 250 258 - ret = decrypt_blob_key(b->blob_key); 251 + ret = decrypt_blob_key(b->blob_key, plain_blob_key); 259 252 if (ret) { 260 253 pr_err("Unable to decrypt blob key: %i\n", ret); 261 254 goto out; 262 255 } 263 256 264 257 ret = do_aead_crypto(b->payload, p->key, p->key_len + DCP_BLOB_AUTHLEN, 265 - b->blob_key, b->nonce, false); 258 + plain_blob_key, b->nonce, false); 266 259 if (ret) { 267 260 pr_err("Unwrap of DCP payload failed: %i\n", ret); 268 261 goto out; ··· 270 263 271 264 ret = 0; 272 265 out: 266 + memzero_explicit(plain_blob_key, sizeof(plain_blob_key)); 267 + 273 268 return ret; 274 269 } 275 270
+6 -2
security/selinux/avc.c
··· 330 330 { 331 331 struct avc_xperms_decision_node *dest_xpd; 332 332 333 - node->ae.xp_node->xp.len++; 334 333 dest_xpd = avc_xperms_decision_alloc(src->used); 335 334 if (!dest_xpd) 336 335 return -ENOMEM; 337 336 avc_copy_xperms_decision(&dest_xpd->xpd, src); 338 337 list_add(&dest_xpd->xpd_list, &node->ae.xp_node->xpd_head); 338 + node->ae.xp_node->xp.len++; 339 339 return 0; 340 340 } 341 341 ··· 907 907 node->ae.avd.auditdeny &= ~perms; 908 908 break; 909 909 case AVC_CALLBACK_ADD_XPERMS: 910 - avc_add_xperms_decision(node, xpd); 910 + rc = avc_add_xperms_decision(node, xpd); 911 + if (rc) { 912 + avc_node_kill(node); 913 + goto out_unlock; 914 + } 911 915 break; 912 916 } 913 917 avc_node_replace(node, orig);
+11 -1
security/selinux/hooks.c
··· 3852 3852 if (default_noexec && 3853 3853 (prot & PROT_EXEC) && !(vma->vm_flags & VM_EXEC)) { 3854 3854 int rc = 0; 3855 - if (vma_is_initial_heap(vma)) { 3855 + /* 3856 + * We don't use the vma_is_initial_heap() helper as it has 3857 + * a history of problems and is currently broken on systems 3858 + * where there is no heap, e.g. brk == start_brk. Before 3859 + * replacing the conditional below with vma_is_initial_heap(), 3860 + * or something similar, please ensure that the logic is the 3861 + * same as what we have below or you have tested every possible 3862 + * corner case you can think to test. 3863 + */ 3864 + if (vma->vm_start >= vma->vm_mm->start_brk && 3865 + vma->vm_end <= vma->vm_mm->brk) { 3856 3866 rc = avc_has_perm(sid, sid, SECCLASS_PROCESS, 3857 3867 PROCESS__EXECHEAP, NULL); 3858 3868 } else if (!vma->vm_file && (vma_is_initial_stack(vma) ||
+1 -1
sound/core/timer.c
··· 547 547 /* check the actual time for the start tick; 548 548 * bail out as error if it's way too low (< 100us) 549 549 */ 550 - if (start) { 550 + if (start && !(timer->hw.flags & SNDRV_TIMER_HW_SLAVE)) { 551 551 if ((u64)snd_timer_hw_resolution(timer) * ticks < 100000) 552 552 return -EINVAL; 553 553 }
+1 -1
sound/pci/hda/cs35l41_hda.c
··· 134 134 }; 135 135 136 136 static const struct cs_dsp_client_ops client_ops = { 137 - .control_remove = hda_cs_dsp_control_remove, 137 + /* cs_dsp requires the client to provide this even if it is empty */ 138 138 }; 139 139 140 140 static int cs35l41_request_tuning_param_file(struct cs35l41_hda *cs35l41, char *tuning_filename,
+1 -1
sound/pci/hda/cs35l56_hda.c
··· 413 413 } 414 414 415 415 static const struct cs_dsp_client_ops cs35l56_hda_client_ops = { 416 - .control_remove = hda_cs_dsp_control_remove, 416 + /* cs_dsp requires the client to provide this even if it is empty */ 417 417 }; 418 418 419 419 static int cs35l56_hda_request_firmware_file(struct cs35l56_hda *cs35l56,
+99 -1
sound/pci/hda/patch_realtek.c
··· 11 11 */ 12 12 13 13 #include <linux/acpi.h> 14 + #include <linux/cleanup.h> 14 15 #include <linux/init.h> 15 16 #include <linux/delay.h> 16 17 #include <linux/slab.h> 17 18 #include <linux/pci.h> 18 19 #include <linux/dmi.h> 19 20 #include <linux/module.h> 21 + #include <linux/i2c.h> 20 22 #include <linux/input.h> 21 23 #include <linux/leds.h> 22 24 #include <linux/ctype.h> 25 + #include <linux/spi/spi.h> 23 26 #include <sound/core.h> 24 27 #include <sound/jack.h> 25 28 #include <sound/hda_codec.h> ··· 586 583 switch (codec->core.vendor_id) { 587 584 case 0x10ec0236: 588 585 case 0x10ec0256: 589 - case 0x10ec0257: 590 586 case 0x19e58326: 591 587 case 0x10ec0283: 592 588 case 0x10ec0285: ··· 6858 6856 } 6859 6857 } 6860 6858 6859 + static void cs35lxx_autodet_fixup(struct hda_codec *cdc, 6860 + const struct hda_fixup *fix, 6861 + int action) 6862 + { 6863 + struct device *dev = hda_codec_dev(cdc); 6864 + struct acpi_device *adev; 6865 + struct fwnode_handle *fwnode __free(fwnode_handle) = NULL; 6866 + const char *bus = NULL; 6867 + static const struct { 6868 + const char *hid; 6869 + const char *name; 6870 + } acpi_ids[] = {{ "CSC3554", "cs35l54-hda" }, 6871 + { "CSC3556", "cs35l56-hda" }, 6872 + { "CSC3557", "cs35l57-hda" }}; 6873 + char *match; 6874 + int i, count = 0, count_devindex = 0; 6875 + 6876 + switch (action) { 6877 + case HDA_FIXUP_ACT_PRE_PROBE: 6878 + for (i = 0; i < ARRAY_SIZE(acpi_ids); ++i) { 6879 + adev = acpi_dev_get_first_match_dev(acpi_ids[i].hid, NULL, -1); 6880 + if (adev) 6881 + break; 6882 + } 6883 + if (!adev) { 6884 + dev_err(dev, "Failed to find ACPI entry for a Cirrus Amp\n"); 6885 + return; 6886 + } 6887 + 6888 + count = i2c_acpi_client_count(adev); 6889 + if (count > 0) { 6890 + bus = "i2c"; 6891 + } else { 6892 + count = acpi_spi_count_resources(adev); 6893 + if (count > 0) 6894 + bus = "spi"; 6895 + } 6896 + 6897 + fwnode = fwnode_handle_get(acpi_fwnode_handle(adev)); 6898 + acpi_dev_put(adev); 6899 + 6900 + if (!bus) { 6901 + dev_err(dev, "Did not find any buses for %s\n", acpi_ids[i].hid); 6902 + return; 6903 + } 6904 + 6905 + if (!fwnode) { 6906 + dev_err(dev, "Could not get fwnode for %s\n", acpi_ids[i].hid); 6907 + return; 6908 + } 6909 + 6910 + /* 6911 + * When available the cirrus,dev-index property is an accurate 6912 + * count of the amps in a system and is used in preference to 6913 + * the count of bus devices that can contain additional address 6914 + * alias entries. 6915 + */ 6916 + count_devindex = fwnode_property_count_u32(fwnode, "cirrus,dev-index"); 6917 + if (count_devindex > 0) 6918 + count = count_devindex; 6919 + 6920 + match = devm_kasprintf(dev, GFP_KERNEL, "-%%s:00-%s.%%d", acpi_ids[i].name); 6921 + if (!match) 6922 + return; 6923 + dev_info(dev, "Found %d %s on %s (%s)\n", count, acpi_ids[i].hid, bus, match); 6924 + comp_generic_fixup(cdc, action, bus, acpi_ids[i].hid, match, count); 6925 + 6926 + break; 6927 + case HDA_FIXUP_ACT_FREE: 6928 + /* 6929 + * Pass the action on to comp_generic_fixup() so that 6930 + * hda_component_manager functions can be called in just once 6931 + * place. In this context the bus, hid, match_str or count 6932 + * values do not need to be calculated. 
6933 + */ 6934 + comp_generic_fixup(cdc, action, NULL, NULL, NULL, 0); 6935 + break; 6936 + } 6937 + } 6938 + 6861 6939 static void cs35l41_fixup_i2c_two(struct hda_codec *cdc, const struct hda_fixup *fix, int action) 6862 6940 { 6863 6941 comp_generic_fixup(cdc, action, "i2c", "CSC3551", "-%s:00-cs35l41-hda.%d", 2); ··· 7610 7528 ALC256_FIXUP_CHROME_BOOK, 7611 7529 ALC287_FIXUP_LENOVO_14ARP8_LEGION_IAH7, 7612 7530 ALC287_FIXUP_LENOVO_SSID_17AA3820, 7531 + ALCXXX_FIXUP_CS35LXX, 7613 7532 }; 7614 7533 7615 7534 /* A special fixup for Lenovo C940 and Yoga Duet 7; ··· 9940 9857 .type = HDA_FIXUP_FUNC, 9941 9858 .v.func = alc287_fixup_lenovo_ssid_17aa3820, 9942 9859 }, 9860 + [ALCXXX_FIXUP_CS35LXX] = { 9861 + .type = HDA_FIXUP_FUNC, 9862 + .v.func = cs35lxx_autodet_fixup, 9863 + }, 9943 9864 }; 9944 9865 9945 9866 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 10358 10271 SND_PCI_QUIRK(0x103c, 0x8cdf, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED), 10359 10272 SND_PCI_QUIRK(0x103c, 0x8ce0, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED), 10360 10273 SND_PCI_QUIRK(0x103c, 0x8cf5, "HP ZBook Studio 16", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED), 10274 + SND_PCI_QUIRK(0x103c, 0x8d01, "HP ZBook Power 14 G12", ALCXXX_FIXUP_CS35LXX), 10275 + SND_PCI_QUIRK(0x103c, 0x8d08, "HP EliteBook 1045 14 G12", ALCXXX_FIXUP_CS35LXX), 10276 + SND_PCI_QUIRK(0x103c, 0x8d85, "HP EliteBook 1040 14 G12", ALCXXX_FIXUP_CS35LXX), 10277 + SND_PCI_QUIRK(0x103c, 0x8d86, "HP Elite x360 1040 14 G12", ALCXXX_FIXUP_CS35LXX), 10278 + SND_PCI_QUIRK(0x103c, 0x8d8c, "HP EliteBook 830 13 G12", ALCXXX_FIXUP_CS35LXX), 10279 + SND_PCI_QUIRK(0x103c, 0x8d8d, "HP Elite x360 830 13 G12", ALCXXX_FIXUP_CS35LXX), 10280 + SND_PCI_QUIRK(0x103c, 0x8d8e, "HP EliteBook 840 14 G12", ALCXXX_FIXUP_CS35LXX), 10281 + SND_PCI_QUIRK(0x103c, 0x8d8f, "HP EliteBook 840 14 G12", ALCXXX_FIXUP_CS35LXX), 10282 + SND_PCI_QUIRK(0x103c, 0x8d90, "HP EliteBook 860 16 G12", ALCXXX_FIXUP_CS35LXX), 10283 + SND_PCI_QUIRK(0x103c, 0x8d91, "HP ZBook Firefly 14 G12", ALCXXX_FIXUP_CS35LXX), 10284 + SND_PCI_QUIRK(0x103c, 0x8d92, "HP ZBook Firefly 16 G12", ALCXXX_FIXUP_CS35LXX), 10361 10285 SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), 10362 10286 SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), 10363 10287 SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+9 -5
sound/pci/hda/tas2781_hda_i2c.c
··· 2 2 // 3 3 // TAS2781 HDA I2C driver 4 4 // 5 - // Copyright 2023 Texas Instruments, Inc. 5 + // Copyright 2023 - 2024 Texas Instruments, Inc. 6 6 // 7 7 // Author: Shenghao Ding <shenghao-ding@ti.com> 8 + // Current maintainer: Baojun Xu <baojun.xu@ti.com> 8 9 10 + #include <asm/unaligned.h> 9 11 #include <linux/acpi.h> 10 12 #include <linux/crc8.h> 11 13 #include <linux/crc32.h> ··· 521 519 static const unsigned char rgno_array[CALIB_MAX] = { 522 520 0x74, 0x0c, 0x14, 0x70, 0x7c, 523 521 }; 524 - unsigned char *data; 522 + int offset = 0; 525 523 int i, j, rc; 524 + __be32 data; 526 525 527 526 for (i = 0; i < tas_priv->ndev; i++) { 528 - data = tas_priv->cali_data.data + 529 - i * TASDEVICE_SPEAKER_CALIBRATION_SIZE; 530 527 for (j = 0; j < CALIB_MAX; j++) { 528 + data = cpu_to_be32( 529 + *(uint32_t *)&tas_priv->cali_data.data[offset]); 531 530 rc = tasdevice_dev_bulk_write(tas_priv, i, 532 531 TASDEVICE_REG(0, page_array[j], rgno_array[j]), 533 - &(data[4 * j]), 4); 532 + (unsigned char *)&data, 4); 534 533 if (rc < 0) 535 534 dev_err(tas_priv->dev, 536 535 "chn %d calib %d bulk_wr err = %d\n", 537 536 i, j, rc); 537 + offset += 4; 538 538 } 539 539 } 540 540 }
+1
sound/usb/quirks-table.h
··· 273 273 YAMAHA_DEVICE(0x105b, NULL), 274 274 YAMAHA_DEVICE(0x105c, NULL), 275 275 YAMAHA_DEVICE(0x105d, NULL), 276 + YAMAHA_DEVICE(0x1718, "P-125"), 276 277 { 277 278 USB_DEVICE(0x0499, 0x1503), 278 279 .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+2
sound/usb/quirks.c
··· 2221 2221 QUIRK_FLAG_GENERIC_IMPLICIT_FB), 2222 2222 DEVICE_FLG(0x2b53, 0x0031, /* Fiero SC-01 (firmware v1.1.0) */ 2223 2223 QUIRK_FLAG_GENERIC_IMPLICIT_FB), 2224 + DEVICE_FLG(0x2d95, 0x8021, /* VIVO USB-C-XE710 HEADSET */ 2225 + QUIRK_FLAG_CTL_MSG_DELAY_1M), 2224 2226 DEVICE_FLG(0x30be, 0x0101, /* Schiit Hel */ 2225 2227 QUIRK_FLAG_IGNORE_CTL_ERROR), 2226 2228 DEVICE_FLG(0x413c, 0xa506, /* Dell AE515 sound bar */
+10
tools/arch/arm64/include/asm/cputype.h
··· 86 86 #define ARM_CPU_PART_CORTEX_X2 0xD48 87 87 #define ARM_CPU_PART_NEOVERSE_N2 0xD49 88 88 #define ARM_CPU_PART_CORTEX_A78C 0xD4B 89 + #define ARM_CPU_PART_CORTEX_X1C 0xD4C 90 + #define ARM_CPU_PART_CORTEX_X3 0xD4E 89 91 #define ARM_CPU_PART_NEOVERSE_V2 0xD4F 92 + #define ARM_CPU_PART_CORTEX_A720 0xD81 90 93 #define ARM_CPU_PART_CORTEX_X4 0xD82 91 94 #define ARM_CPU_PART_NEOVERSE_V3 0xD84 95 + #define ARM_CPU_PART_CORTEX_X925 0xD85 96 + #define ARM_CPU_PART_CORTEX_A725 0xD87 92 97 93 98 #define APM_CPU_PART_XGENE 0x000 94 99 #define APM_CPU_VAR_POTENZA 0x00 ··· 167 162 #define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2) 168 163 #define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2) 169 164 #define MIDR_CORTEX_A78C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78C) 165 + #define MIDR_CORTEX_X1C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1C) 166 + #define MIDR_CORTEX_X3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X3) 170 167 #define MIDR_NEOVERSE_V2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V2) 168 + #define MIDR_CORTEX_A720 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A720) 171 169 #define MIDR_CORTEX_X4 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X4) 172 170 #define MIDR_NEOVERSE_V3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V3) 171 + #define MIDR_CORTEX_X925 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X925) 172 + #define MIDR_CORTEX_A725 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A725) 173 173 #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX) 174 174 #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX) 175 175 #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
+3
tools/arch/powerpc/include/uapi/asm/kvm.h
··· 645 645 #define KVM_REG_PPC_SIER3 (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xc3) 646 646 #define KVM_REG_PPC_DAWR1 (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xc4) 647 647 #define KVM_REG_PPC_DAWRX1 (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xc5) 648 + #define KVM_REG_PPC_DEXCR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xc6) 649 + #define KVM_REG_PPC_HASHKEYR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xc7) 650 + #define KVM_REG_PPC_HASHPKEYR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xc8) 648 651 649 652 /* Transactional Memory checkpointed state: 650 653 * This is all GPRs, all VSX regs and a subset of SPRs
+403 -400
tools/arch/x86/include/asm/cpufeatures.h
··· 18 18 19 19 /* 20 20 * Note: If the comment begins with a quoted string, that string is used 21 - * in /proc/cpuinfo instead of the macro name. If the string is "", 22 - * this feature bit is not displayed in /proc/cpuinfo at all. 21 + * in /proc/cpuinfo instead of the macro name. Otherwise, this feature 22 + * bit is not displayed in /proc/cpuinfo at all. 23 23 * 24 24 * When adding new features here that depend on other features, 25 25 * please update the table in kernel/cpu/cpuid-deps.c as well. 26 26 */ 27 27 28 28 /* Intel-defined CPU features, CPUID level 0x00000001 (EDX), word 0 */ 29 - #define X86_FEATURE_FPU ( 0*32+ 0) /* Onboard FPU */ 30 - #define X86_FEATURE_VME ( 0*32+ 1) /* Virtual Mode Extensions */ 31 - #define X86_FEATURE_DE ( 0*32+ 2) /* Debugging Extensions */ 32 - #define X86_FEATURE_PSE ( 0*32+ 3) /* Page Size Extensions */ 33 - #define X86_FEATURE_TSC ( 0*32+ 4) /* Time Stamp Counter */ 34 - #define X86_FEATURE_MSR ( 0*32+ 5) /* Model-Specific Registers */ 35 - #define X86_FEATURE_PAE ( 0*32+ 6) /* Physical Address Extensions */ 36 - #define X86_FEATURE_MCE ( 0*32+ 7) /* Machine Check Exception */ 37 - #define X86_FEATURE_CX8 ( 0*32+ 8) /* CMPXCHG8 instruction */ 38 - #define X86_FEATURE_APIC ( 0*32+ 9) /* Onboard APIC */ 39 - #define X86_FEATURE_SEP ( 0*32+11) /* SYSENTER/SYSEXIT */ 40 - #define X86_FEATURE_MTRR ( 0*32+12) /* Memory Type Range Registers */ 41 - #define X86_FEATURE_PGE ( 0*32+13) /* Page Global Enable */ 42 - #define X86_FEATURE_MCA ( 0*32+14) /* Machine Check Architecture */ 43 - #define X86_FEATURE_CMOV ( 0*32+15) /* CMOV instructions (plus FCMOVcc, FCOMI with FPU) */ 44 - #define X86_FEATURE_PAT ( 0*32+16) /* Page Attribute Table */ 45 - #define X86_FEATURE_PSE36 ( 0*32+17) /* 36-bit PSEs */ 46 - #define X86_FEATURE_PN ( 0*32+18) /* Processor serial number */ 47 - #define X86_FEATURE_CLFLUSH ( 0*32+19) /* CLFLUSH instruction */ 29 + #define X86_FEATURE_FPU ( 0*32+ 0) /* "fpu" Onboard FPU */ 30 + #define X86_FEATURE_VME ( 0*32+ 1) /* "vme" Virtual Mode Extensions */ 31 + #define X86_FEATURE_DE ( 0*32+ 2) /* "de" Debugging Extensions */ 32 + #define X86_FEATURE_PSE ( 0*32+ 3) /* "pse" Page Size Extensions */ 33 + #define X86_FEATURE_TSC ( 0*32+ 4) /* "tsc" Time Stamp Counter */ 34 + #define X86_FEATURE_MSR ( 0*32+ 5) /* "msr" Model-Specific Registers */ 35 + #define X86_FEATURE_PAE ( 0*32+ 6) /* "pae" Physical Address Extensions */ 36 + #define X86_FEATURE_MCE ( 0*32+ 7) /* "mce" Machine Check Exception */ 37 + #define X86_FEATURE_CX8 ( 0*32+ 8) /* "cx8" CMPXCHG8 instruction */ 38 + #define X86_FEATURE_APIC ( 0*32+ 9) /* "apic" Onboard APIC */ 39 + #define X86_FEATURE_SEP ( 0*32+11) /* "sep" SYSENTER/SYSEXIT */ 40 + #define X86_FEATURE_MTRR ( 0*32+12) /* "mtrr" Memory Type Range Registers */ 41 + #define X86_FEATURE_PGE ( 0*32+13) /* "pge" Page Global Enable */ 42 + #define X86_FEATURE_MCA ( 0*32+14) /* "mca" Machine Check Architecture */ 43 + #define X86_FEATURE_CMOV ( 0*32+15) /* "cmov" CMOV instructions (plus FCMOVcc, FCOMI with FPU) */ 44 + #define X86_FEATURE_PAT ( 0*32+16) /* "pat" Page Attribute Table */ 45 + #define X86_FEATURE_PSE36 ( 0*32+17) /* "pse36" 36-bit PSEs */ 46 + #define X86_FEATURE_PN ( 0*32+18) /* "pn" Processor serial number */ 47 + #define X86_FEATURE_CLFLUSH ( 0*32+19) /* "clflush" CLFLUSH instruction */ 48 48 #define X86_FEATURE_DS ( 0*32+21) /* "dts" Debug Store */ 49 - #define X86_FEATURE_ACPI ( 0*32+22) /* ACPI via MSR */ 50 - #define X86_FEATURE_MMX ( 0*32+23) /* Multimedia Extensions */ 51 - #define X86_FEATURE_FXSR 
( 0*32+24) /* FXSAVE/FXRSTOR, CR4.OSFXSR */ 49 + #define X86_FEATURE_ACPI ( 0*32+22) /* "acpi" ACPI via MSR */ 50 + #define X86_FEATURE_MMX ( 0*32+23) /* "mmx" Multimedia Extensions */ 51 + #define X86_FEATURE_FXSR ( 0*32+24) /* "fxsr" FXSAVE/FXRSTOR, CR4.OSFXSR */ 52 52 #define X86_FEATURE_XMM ( 0*32+25) /* "sse" */ 53 53 #define X86_FEATURE_XMM2 ( 0*32+26) /* "sse2" */ 54 54 #define X86_FEATURE_SELFSNOOP ( 0*32+27) /* "ss" CPU self snoop */ 55 - #define X86_FEATURE_HT ( 0*32+28) /* Hyper-Threading */ 55 + #define X86_FEATURE_HT ( 0*32+28) /* "ht" Hyper-Threading */ 56 56 #define X86_FEATURE_ACC ( 0*32+29) /* "tm" Automatic clock control */ 57 - #define X86_FEATURE_IA64 ( 0*32+30) /* IA-64 processor */ 58 - #define X86_FEATURE_PBE ( 0*32+31) /* Pending Break Enable */ 57 + #define X86_FEATURE_IA64 ( 0*32+30) /* "ia64" IA-64 processor */ 58 + #define X86_FEATURE_PBE ( 0*32+31) /* "pbe" Pending Break Enable */ 59 59 60 60 /* AMD-defined CPU features, CPUID level 0x80000001, word 1 */ 61 61 /* Don't duplicate feature flags which are redundant with Intel! */ 62 - #define X86_FEATURE_SYSCALL ( 1*32+11) /* SYSCALL/SYSRET */ 63 - #define X86_FEATURE_MP ( 1*32+19) /* MP Capable */ 64 - #define X86_FEATURE_NX ( 1*32+20) /* Execute Disable */ 65 - #define X86_FEATURE_MMXEXT ( 1*32+22) /* AMD MMX extensions */ 66 - #define X86_FEATURE_FXSR_OPT ( 1*32+25) /* FXSAVE/FXRSTOR optimizations */ 62 + #define X86_FEATURE_SYSCALL ( 1*32+11) /* "syscall" SYSCALL/SYSRET */ 63 + #define X86_FEATURE_MP ( 1*32+19) /* "mp" MP Capable */ 64 + #define X86_FEATURE_NX ( 1*32+20) /* "nx" Execute Disable */ 65 + #define X86_FEATURE_MMXEXT ( 1*32+22) /* "mmxext" AMD MMX extensions */ 66 + #define X86_FEATURE_FXSR_OPT ( 1*32+25) /* "fxsr_opt" FXSAVE/FXRSTOR optimizations */ 67 67 #define X86_FEATURE_GBPAGES ( 1*32+26) /* "pdpe1gb" GB pages */ 68 - #define X86_FEATURE_RDTSCP ( 1*32+27) /* RDTSCP */ 69 - #define X86_FEATURE_LM ( 1*32+29) /* Long Mode (x86-64, 64-bit support) */ 70 - #define X86_FEATURE_3DNOWEXT ( 1*32+30) /* AMD 3DNow extensions */ 71 - #define X86_FEATURE_3DNOW ( 1*32+31) /* 3DNow */ 68 + #define X86_FEATURE_RDTSCP ( 1*32+27) /* "rdtscp" RDTSCP */ 69 + #define X86_FEATURE_LM ( 1*32+29) /* "lm" Long Mode (x86-64, 64-bit support) */ 70 + #define X86_FEATURE_3DNOWEXT ( 1*32+30) /* "3dnowext" AMD 3DNow extensions */ 71 + #define X86_FEATURE_3DNOW ( 1*32+31) /* "3dnow" 3DNow */ 72 72 73 73 /* Transmeta-defined CPU features, CPUID level 0x80860001, word 2 */ 74 - #define X86_FEATURE_RECOVERY ( 2*32+ 0) /* CPU in recovery mode */ 75 - #define X86_FEATURE_LONGRUN ( 2*32+ 1) /* Longrun power control */ 76 - #define X86_FEATURE_LRTI ( 2*32+ 3) /* LongRun table interface */ 74 + #define X86_FEATURE_RECOVERY ( 2*32+ 0) /* "recovery" CPU in recovery mode */ 75 + #define X86_FEATURE_LONGRUN ( 2*32+ 1) /* "longrun" Longrun power control */ 76 + #define X86_FEATURE_LRTI ( 2*32+ 3) /* "lrti" LongRun table interface */ 77 77 78 78 /* Other features, Linux-defined mapping, word 3 */ 79 79 /* This range is used for feature bits which conflict or are synthesized */ 80 - #define X86_FEATURE_CXMMX ( 3*32+ 0) /* Cyrix MMX extensions */ 81 - #define X86_FEATURE_K6_MTRR ( 3*32+ 1) /* AMD K6 nonstandard MTRRs */ 82 - #define X86_FEATURE_CYRIX_ARR ( 3*32+ 2) /* Cyrix ARRs (= MTRRs) */ 83 - #define X86_FEATURE_CENTAUR_MCR ( 3*32+ 3) /* Centaur MCRs (= MTRRs) */ 84 - #define X86_FEATURE_K8 ( 3*32+ 4) /* "" Opteron, Athlon64 */ 85 - #define X86_FEATURE_ZEN5 ( 3*32+ 5) /* "" CPU based on Zen5 microarchitecture */ 86 - #define 
X86_FEATURE_P3 ( 3*32+ 6) /* "" P3 */ 87 - #define X86_FEATURE_P4 ( 3*32+ 7) /* "" P4 */ 88 - #define X86_FEATURE_CONSTANT_TSC ( 3*32+ 8) /* TSC ticks at a constant rate */ 89 - #define X86_FEATURE_UP ( 3*32+ 9) /* SMP kernel running on UP */ 90 - #define X86_FEATURE_ART ( 3*32+10) /* Always running timer (ART) */ 91 - #define X86_FEATURE_ARCH_PERFMON ( 3*32+11) /* Intel Architectural PerfMon */ 92 - #define X86_FEATURE_PEBS ( 3*32+12) /* Precise-Event Based Sampling */ 93 - #define X86_FEATURE_BTS ( 3*32+13) /* Branch Trace Store */ 94 - #define X86_FEATURE_SYSCALL32 ( 3*32+14) /* "" syscall in IA32 userspace */ 95 - #define X86_FEATURE_SYSENTER32 ( 3*32+15) /* "" sysenter in IA32 userspace */ 96 - #define X86_FEATURE_REP_GOOD ( 3*32+16) /* REP microcode works well */ 97 - #define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* AMD Last Branch Record Extension Version 2 */ 98 - #define X86_FEATURE_CLEAR_CPU_BUF ( 3*32+18) /* "" Clear CPU buffers using VERW */ 99 - #define X86_FEATURE_ACC_POWER ( 3*32+19) /* AMD Accumulated Power Mechanism */ 100 - #define X86_FEATURE_NOPL ( 3*32+20) /* The NOPL (0F 1F) instructions */ 101 - #define X86_FEATURE_ALWAYS ( 3*32+21) /* "" Always-present feature */ 102 - #define X86_FEATURE_XTOPOLOGY ( 3*32+22) /* CPU topology enum extensions */ 103 - #define X86_FEATURE_TSC_RELIABLE ( 3*32+23) /* TSC is known to be reliable */ 104 - #define X86_FEATURE_NONSTOP_TSC ( 3*32+24) /* TSC does not stop in C states */ 105 - #define X86_FEATURE_CPUID ( 3*32+25) /* CPU has CPUID instruction itself */ 106 - #define X86_FEATURE_EXTD_APICID ( 3*32+26) /* Extended APICID (8 bits) */ 107 - #define X86_FEATURE_AMD_DCM ( 3*32+27) /* AMD multi-node processor */ 108 - #define X86_FEATURE_APERFMPERF ( 3*32+28) /* P-State hardware coordination feedback capability (APERF/MPERF MSRs) */ 109 - #define X86_FEATURE_RAPL ( 3*32+29) /* AMD/Hygon RAPL interface */ 110 - #define X86_FEATURE_NONSTOP_TSC_S3 ( 3*32+30) /* TSC doesn't stop in S3 state */ 111 - #define X86_FEATURE_TSC_KNOWN_FREQ ( 3*32+31) /* TSC has known frequency */ 80 + #define X86_FEATURE_CXMMX ( 3*32+ 0) /* "cxmmx" Cyrix MMX extensions */ 81 + #define X86_FEATURE_K6_MTRR ( 3*32+ 1) /* "k6_mtrr" AMD K6 nonstandard MTRRs */ 82 + #define X86_FEATURE_CYRIX_ARR ( 3*32+ 2) /* "cyrix_arr" Cyrix ARRs (= MTRRs) */ 83 + #define X86_FEATURE_CENTAUR_MCR ( 3*32+ 3) /* "centaur_mcr" Centaur MCRs (= MTRRs) */ 84 + #define X86_FEATURE_K8 ( 3*32+ 4) /* Opteron, Athlon64 */ 85 + #define X86_FEATURE_ZEN5 ( 3*32+ 5) /* CPU based on Zen5 microarchitecture */ 86 + #define X86_FEATURE_P3 ( 3*32+ 6) /* P3 */ 87 + #define X86_FEATURE_P4 ( 3*32+ 7) /* P4 */ 88 + #define X86_FEATURE_CONSTANT_TSC ( 3*32+ 8) /* "constant_tsc" TSC ticks at a constant rate */ 89 + #define X86_FEATURE_UP ( 3*32+ 9) /* "up" SMP kernel running on UP */ 90 + #define X86_FEATURE_ART ( 3*32+10) /* "art" Always running timer (ART) */ 91 + #define X86_FEATURE_ARCH_PERFMON ( 3*32+11) /* "arch_perfmon" Intel Architectural PerfMon */ 92 + #define X86_FEATURE_PEBS ( 3*32+12) /* "pebs" Precise-Event Based Sampling */ 93 + #define X86_FEATURE_BTS ( 3*32+13) /* "bts" Branch Trace Store */ 94 + #define X86_FEATURE_SYSCALL32 ( 3*32+14) /* syscall in IA32 userspace */ 95 + #define X86_FEATURE_SYSENTER32 ( 3*32+15) /* sysenter in IA32 userspace */ 96 + #define X86_FEATURE_REP_GOOD ( 3*32+16) /* "rep_good" REP microcode works well */ 97 + #define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* "amd_lbr_v2" AMD Last Branch Record Extension Version 2 */ 98 + #define X86_FEATURE_CLEAR_CPU_BUF ( 3*32+18) /* Clear 
CPU buffers using VERW */ 99 + #define X86_FEATURE_ACC_POWER ( 3*32+19) /* "acc_power" AMD Accumulated Power Mechanism */ 100 + #define X86_FEATURE_NOPL ( 3*32+20) /* "nopl" The NOPL (0F 1F) instructions */ 101 + #define X86_FEATURE_ALWAYS ( 3*32+21) /* Always-present feature */ 102 + #define X86_FEATURE_XTOPOLOGY ( 3*32+22) /* "xtopology" CPU topology enum extensions */ 103 + #define X86_FEATURE_TSC_RELIABLE ( 3*32+23) /* "tsc_reliable" TSC is known to be reliable */ 104 + #define X86_FEATURE_NONSTOP_TSC ( 3*32+24) /* "nonstop_tsc" TSC does not stop in C states */ 105 + #define X86_FEATURE_CPUID ( 3*32+25) /* "cpuid" CPU has CPUID instruction itself */ 106 + #define X86_FEATURE_EXTD_APICID ( 3*32+26) /* "extd_apicid" Extended APICID (8 bits) */ 107 + #define X86_FEATURE_AMD_DCM ( 3*32+27) /* "amd_dcm" AMD multi-node processor */ 108 + #define X86_FEATURE_APERFMPERF ( 3*32+28) /* "aperfmperf" P-State hardware coordination feedback capability (APERF/MPERF MSRs) */ 109 + #define X86_FEATURE_RAPL ( 3*32+29) /* "rapl" AMD/Hygon RAPL interface */ 110 + #define X86_FEATURE_NONSTOP_TSC_S3 ( 3*32+30) /* "nonstop_tsc_s3" TSC doesn't stop in S3 state */ 111 + #define X86_FEATURE_TSC_KNOWN_FREQ ( 3*32+31) /* "tsc_known_freq" TSC has known frequency */ 112 112 113 113 /* Intel-defined CPU features, CPUID level 0x00000001 (ECX), word 4 */ 114 114 #define X86_FEATURE_XMM3 ( 4*32+ 0) /* "pni" SSE-3 */ 115 - #define X86_FEATURE_PCLMULQDQ ( 4*32+ 1) /* PCLMULQDQ instruction */ 116 - #define X86_FEATURE_DTES64 ( 4*32+ 2) /* 64-bit Debug Store */ 115 + #define X86_FEATURE_PCLMULQDQ ( 4*32+ 1) /* "pclmulqdq" PCLMULQDQ instruction */ 116 + #define X86_FEATURE_DTES64 ( 4*32+ 2) /* "dtes64" 64-bit Debug Store */ 117 117 #define X86_FEATURE_MWAIT ( 4*32+ 3) /* "monitor" MONITOR/MWAIT support */ 118 118 #define X86_FEATURE_DSCPL ( 4*32+ 4) /* "ds_cpl" CPL-qualified (filtered) Debug Store */ 119 - #define X86_FEATURE_VMX ( 4*32+ 5) /* Hardware virtualization */ 120 - #define X86_FEATURE_SMX ( 4*32+ 6) /* Safer Mode eXtensions */ 121 - #define X86_FEATURE_EST ( 4*32+ 7) /* Enhanced SpeedStep */ 122 - #define X86_FEATURE_TM2 ( 4*32+ 8) /* Thermal Monitor 2 */ 123 - #define X86_FEATURE_SSSE3 ( 4*32+ 9) /* Supplemental SSE-3 */ 124 - #define X86_FEATURE_CID ( 4*32+10) /* Context ID */ 125 - #define X86_FEATURE_SDBG ( 4*32+11) /* Silicon Debug */ 126 - #define X86_FEATURE_FMA ( 4*32+12) /* Fused multiply-add */ 127 - #define X86_FEATURE_CX16 ( 4*32+13) /* CMPXCHG16B instruction */ 128 - #define X86_FEATURE_XTPR ( 4*32+14) /* Send Task Priority Messages */ 129 - #define X86_FEATURE_PDCM ( 4*32+15) /* Perf/Debug Capabilities MSR */ 130 - #define X86_FEATURE_PCID ( 4*32+17) /* Process Context Identifiers */ 131 - #define X86_FEATURE_DCA ( 4*32+18) /* Direct Cache Access */ 119 + #define X86_FEATURE_VMX ( 4*32+ 5) /* "vmx" Hardware virtualization */ 120 + #define X86_FEATURE_SMX ( 4*32+ 6) /* "smx" Safer Mode eXtensions */ 121 + #define X86_FEATURE_EST ( 4*32+ 7) /* "est" Enhanced SpeedStep */ 122 + #define X86_FEATURE_TM2 ( 4*32+ 8) /* "tm2" Thermal Monitor 2 */ 123 + #define X86_FEATURE_SSSE3 ( 4*32+ 9) /* "ssse3" Supplemental SSE-3 */ 124 + #define X86_FEATURE_CID ( 4*32+10) /* "cid" Context ID */ 125 + #define X86_FEATURE_SDBG ( 4*32+11) /* "sdbg" Silicon Debug */ 126 + #define X86_FEATURE_FMA ( 4*32+12) /* "fma" Fused multiply-add */ 127 + #define X86_FEATURE_CX16 ( 4*32+13) /* "cx16" CMPXCHG16B instruction */ 128 + #define X86_FEATURE_XTPR ( 4*32+14) /* "xtpr" Send Task Priority Messages */ 129 + #define 
X86_FEATURE_PDCM ( 4*32+15) /* "pdcm" Perf/Debug Capabilities MSR */ 130 + #define X86_FEATURE_PCID ( 4*32+17) /* "pcid" Process Context Identifiers */ 131 + #define X86_FEATURE_DCA ( 4*32+18) /* "dca" Direct Cache Access */ 132 132 #define X86_FEATURE_XMM4_1 ( 4*32+19) /* "sse4_1" SSE-4.1 */ 133 133 #define X86_FEATURE_XMM4_2 ( 4*32+20) /* "sse4_2" SSE-4.2 */ 134 - #define X86_FEATURE_X2APIC ( 4*32+21) /* X2APIC */ 135 - #define X86_FEATURE_MOVBE ( 4*32+22) /* MOVBE instruction */ 136 - #define X86_FEATURE_POPCNT ( 4*32+23) /* POPCNT instruction */ 137 - #define X86_FEATURE_TSC_DEADLINE_TIMER ( 4*32+24) /* TSC deadline timer */ 138 - #define X86_FEATURE_AES ( 4*32+25) /* AES instructions */ 139 - #define X86_FEATURE_XSAVE ( 4*32+26) /* XSAVE/XRSTOR/XSETBV/XGETBV instructions */ 140 - #define X86_FEATURE_OSXSAVE ( 4*32+27) /* "" XSAVE instruction enabled in the OS */ 141 - #define X86_FEATURE_AVX ( 4*32+28) /* Advanced Vector Extensions */ 142 - #define X86_FEATURE_F16C ( 4*32+29) /* 16-bit FP conversions */ 143 - #define X86_FEATURE_RDRAND ( 4*32+30) /* RDRAND instruction */ 144 - #define X86_FEATURE_HYPERVISOR ( 4*32+31) /* Running on a hypervisor */ 134 + #define X86_FEATURE_X2APIC ( 4*32+21) /* "x2apic" X2APIC */ 135 + #define X86_FEATURE_MOVBE ( 4*32+22) /* "movbe" MOVBE instruction */ 136 + #define X86_FEATURE_POPCNT ( 4*32+23) /* "popcnt" POPCNT instruction */ 137 + #define X86_FEATURE_TSC_DEADLINE_TIMER ( 4*32+24) /* "tsc_deadline_timer" TSC deadline timer */ 138 + #define X86_FEATURE_AES ( 4*32+25) /* "aes" AES instructions */ 139 + #define X86_FEATURE_XSAVE ( 4*32+26) /* "xsave" XSAVE/XRSTOR/XSETBV/XGETBV instructions */ 140 + #define X86_FEATURE_OSXSAVE ( 4*32+27) /* XSAVE instruction enabled in the OS */ 141 + #define X86_FEATURE_AVX ( 4*32+28) /* "avx" Advanced Vector Extensions */ 142 + #define X86_FEATURE_F16C ( 4*32+29) /* "f16c" 16-bit FP conversions */ 143 + #define X86_FEATURE_RDRAND ( 4*32+30) /* "rdrand" RDRAND instruction */ 144 + #define X86_FEATURE_HYPERVISOR ( 4*32+31) /* "hypervisor" Running on a hypervisor */ 145 145 146 146 /* VIA/Cyrix/Centaur-defined CPU features, CPUID level 0xC0000001, word 5 */ 147 147 #define X86_FEATURE_XSTORE ( 5*32+ 2) /* "rng" RNG present (xstore) */ 148 148 #define X86_FEATURE_XSTORE_EN ( 5*32+ 3) /* "rng_en" RNG enabled */ 149 149 #define X86_FEATURE_XCRYPT ( 5*32+ 6) /* "ace" on-CPU crypto (xcrypt) */ 150 150 #define X86_FEATURE_XCRYPT_EN ( 5*32+ 7) /* "ace_en" on-CPU crypto enabled */ 151 - #define X86_FEATURE_ACE2 ( 5*32+ 8) /* Advanced Cryptography Engine v2 */ 152 - #define X86_FEATURE_ACE2_EN ( 5*32+ 9) /* ACE v2 enabled */ 153 - #define X86_FEATURE_PHE ( 5*32+10) /* PadLock Hash Engine */ 154 - #define X86_FEATURE_PHE_EN ( 5*32+11) /* PHE enabled */ 155 - #define X86_FEATURE_PMM ( 5*32+12) /* PadLock Montgomery Multiplier */ 156 - #define X86_FEATURE_PMM_EN ( 5*32+13) /* PMM enabled */ 151 + #define X86_FEATURE_ACE2 ( 5*32+ 8) /* "ace2" Advanced Cryptography Engine v2 */ 152 + #define X86_FEATURE_ACE2_EN ( 5*32+ 9) /* "ace2_en" ACE v2 enabled */ 153 + #define X86_FEATURE_PHE ( 5*32+10) /* "phe" PadLock Hash Engine */ 154 + #define X86_FEATURE_PHE_EN ( 5*32+11) /* "phe_en" PHE enabled */ 155 + #define X86_FEATURE_PMM ( 5*32+12) /* "pmm" PadLock Montgomery Multiplier */ 156 + #define X86_FEATURE_PMM_EN ( 5*32+13) /* "pmm_en" PMM enabled */ 157 157 158 158 /* More extended AMD flags: CPUID level 0x80000001, ECX, word 6 */ 159 - #define X86_FEATURE_LAHF_LM ( 6*32+ 0) /* LAHF/SAHF in long mode */ 160 - #define 
X86_FEATURE_CMP_LEGACY ( 6*32+ 1) /* If yes HyperThreading not valid */ 161 - #define X86_FEATURE_SVM ( 6*32+ 2) /* Secure Virtual Machine */ 162 - #define X86_FEATURE_EXTAPIC ( 6*32+ 3) /* Extended APIC space */ 163 - #define X86_FEATURE_CR8_LEGACY ( 6*32+ 4) /* CR8 in 32-bit mode */ 164 - #define X86_FEATURE_ABM ( 6*32+ 5) /* Advanced bit manipulation */ 165 - #define X86_FEATURE_SSE4A ( 6*32+ 6) /* SSE-4A */ 166 - #define X86_FEATURE_MISALIGNSSE ( 6*32+ 7) /* Misaligned SSE mode */ 167 - #define X86_FEATURE_3DNOWPREFETCH ( 6*32+ 8) /* 3DNow prefetch instructions */ 168 - #define X86_FEATURE_OSVW ( 6*32+ 9) /* OS Visible Workaround */ 169 - #define X86_FEATURE_IBS ( 6*32+10) /* Instruction Based Sampling */ 170 - #define X86_FEATURE_XOP ( 6*32+11) /* extended AVX instructions */ 171 - #define X86_FEATURE_SKINIT ( 6*32+12) /* SKINIT/STGI instructions */ 172 - #define X86_FEATURE_WDT ( 6*32+13) /* Watchdog timer */ 173 - #define X86_FEATURE_LWP ( 6*32+15) /* Light Weight Profiling */ 174 - #define X86_FEATURE_FMA4 ( 6*32+16) /* 4 operands MAC instructions */ 175 - #define X86_FEATURE_TCE ( 6*32+17) /* Translation Cache Extension */ 176 - #define X86_FEATURE_NODEID_MSR ( 6*32+19) /* NodeId MSR */ 177 - #define X86_FEATURE_TBM ( 6*32+21) /* Trailing Bit Manipulations */ 178 - #define X86_FEATURE_TOPOEXT ( 6*32+22) /* Topology extensions CPUID leafs */ 179 - #define X86_FEATURE_PERFCTR_CORE ( 6*32+23) /* Core performance counter extensions */ 180 - #define X86_FEATURE_PERFCTR_NB ( 6*32+24) /* NB performance counter extensions */ 181 - #define X86_FEATURE_BPEXT ( 6*32+26) /* Data breakpoint extension */ 182 - #define X86_FEATURE_PTSC ( 6*32+27) /* Performance time-stamp counter */ 183 - #define X86_FEATURE_PERFCTR_LLC ( 6*32+28) /* Last Level Cache performance counter extensions */ 184 - #define X86_FEATURE_MWAITX ( 6*32+29) /* MWAIT extension (MONITORX/MWAITX instructions) */ 159 + #define X86_FEATURE_LAHF_LM ( 6*32+ 0) /* "lahf_lm" LAHF/SAHF in long mode */ 160 + #define X86_FEATURE_CMP_LEGACY ( 6*32+ 1) /* "cmp_legacy" If yes HyperThreading not valid */ 161 + #define X86_FEATURE_SVM ( 6*32+ 2) /* "svm" Secure Virtual Machine */ 162 + #define X86_FEATURE_EXTAPIC ( 6*32+ 3) /* "extapic" Extended APIC space */ 163 + #define X86_FEATURE_CR8_LEGACY ( 6*32+ 4) /* "cr8_legacy" CR8 in 32-bit mode */ 164 + #define X86_FEATURE_ABM ( 6*32+ 5) /* "abm" Advanced bit manipulation */ 165 + #define X86_FEATURE_SSE4A ( 6*32+ 6) /* "sse4a" SSE-4A */ 166 + #define X86_FEATURE_MISALIGNSSE ( 6*32+ 7) /* "misalignsse" Misaligned SSE mode */ 167 + #define X86_FEATURE_3DNOWPREFETCH ( 6*32+ 8) /* "3dnowprefetch" 3DNow prefetch instructions */ 168 + #define X86_FEATURE_OSVW ( 6*32+ 9) /* "osvw" OS Visible Workaround */ 169 + #define X86_FEATURE_IBS ( 6*32+10) /* "ibs" Instruction Based Sampling */ 170 + #define X86_FEATURE_XOP ( 6*32+11) /* "xop" Extended AVX instructions */ 171 + #define X86_FEATURE_SKINIT ( 6*32+12) /* "skinit" SKINIT/STGI instructions */ 172 + #define X86_FEATURE_WDT ( 6*32+13) /* "wdt" Watchdog timer */ 173 + #define X86_FEATURE_LWP ( 6*32+15) /* "lwp" Light Weight Profiling */ 174 + #define X86_FEATURE_FMA4 ( 6*32+16) /* "fma4" 4 operands MAC instructions */ 175 + #define X86_FEATURE_TCE ( 6*32+17) /* "tce" Translation Cache Extension */ 176 + #define X86_FEATURE_NODEID_MSR ( 6*32+19) /* "nodeid_msr" NodeId MSR */ 177 + #define X86_FEATURE_TBM ( 6*32+21) /* "tbm" Trailing Bit Manipulations */ 178 + #define X86_FEATURE_TOPOEXT ( 6*32+22) /* "topoext" Topology extensions CPUID leafs */ 179 + 
#define X86_FEATURE_PERFCTR_CORE ( 6*32+23) /* "perfctr_core" Core performance counter extensions */ 180 + #define X86_FEATURE_PERFCTR_NB ( 6*32+24) /* "perfctr_nb" NB performance counter extensions */ 181 + #define X86_FEATURE_BPEXT ( 6*32+26) /* "bpext" Data breakpoint extension */ 182 + #define X86_FEATURE_PTSC ( 6*32+27) /* "ptsc" Performance time-stamp counter */ 183 + #define X86_FEATURE_PERFCTR_LLC ( 6*32+28) /* "perfctr_llc" Last Level Cache performance counter extensions */ 184 + #define X86_FEATURE_MWAITX ( 6*32+29) /* "mwaitx" MWAIT extension (MONITORX/MWAITX instructions) */ 185 185 186 186 /* 187 187 * Auxiliary flags: Linux defined - For features scattered in various ··· 189 189 * 190 190 * Reuse free bits when adding new feature flags! 191 191 */ 192 - #define X86_FEATURE_RING3MWAIT ( 7*32+ 0) /* Ring 3 MONITOR/MWAIT instructions */ 193 - #define X86_FEATURE_CPUID_FAULT ( 7*32+ 1) /* Intel CPUID faulting */ 194 - #define X86_FEATURE_CPB ( 7*32+ 2) /* AMD Core Performance Boost */ 195 - #define X86_FEATURE_EPB ( 7*32+ 3) /* IA32_ENERGY_PERF_BIAS support */ 196 - #define X86_FEATURE_CAT_L3 ( 7*32+ 4) /* Cache Allocation Technology L3 */ 197 - #define X86_FEATURE_CAT_L2 ( 7*32+ 5) /* Cache Allocation Technology L2 */ 198 - #define X86_FEATURE_CDP_L3 ( 7*32+ 6) /* Code and Data Prioritization L3 */ 199 - #define X86_FEATURE_TDX_HOST_PLATFORM ( 7*32+ 7) /* Platform supports being a TDX host */ 200 - #define X86_FEATURE_HW_PSTATE ( 7*32+ 8) /* AMD HW-PState */ 201 - #define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */ 202 - #define X86_FEATURE_XCOMPACTED ( 7*32+10) /* "" Use compacted XSTATE (XSAVES or XSAVEC) */ 203 - #define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */ 204 - #define X86_FEATURE_KERNEL_IBRS ( 7*32+12) /* "" Set/clear IBRS on kernel entry/exit */ 205 - #define X86_FEATURE_RSB_VMEXIT ( 7*32+13) /* "" Fill RSB on VM-Exit */ 206 - #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */ 207 - #define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 */ 208 - #define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* "" MSR SPEC_CTRL is implemented */ 209 - #define X86_FEATURE_SSBD ( 7*32+17) /* Speculative Store Bypass Disable */ 210 - #define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */ 211 - #define X86_FEATURE_RSB_CTXSW ( 7*32+19) /* "" Fill RSB on context switches */ 212 - #define X86_FEATURE_PERFMON_V2 ( 7*32+20) /* AMD Performance Monitoring Version 2 */ 213 - #define X86_FEATURE_USE_IBPB ( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */ 214 - #define X86_FEATURE_USE_IBRS_FW ( 7*32+22) /* "" Use IBRS during runtime firmware calls */ 215 - #define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* "" Disable Speculative Store Bypass. 
*/ 216 - #define X86_FEATURE_LS_CFG_SSBD ( 7*32+24) /* "" AMD SSBD implementation via LS_CFG MSR */ 217 - #define X86_FEATURE_IBRS ( 7*32+25) /* Indirect Branch Restricted Speculation */ 218 - #define X86_FEATURE_IBPB ( 7*32+26) /* Indirect Branch Prediction Barrier */ 219 - #define X86_FEATURE_STIBP ( 7*32+27) /* Single Thread Indirect Branch Predictors */ 220 - #define X86_FEATURE_ZEN ( 7*32+28) /* "" Generic flag for all Zen and newer */ 221 - #define X86_FEATURE_L1TF_PTEINV ( 7*32+29) /* "" L1TF workaround PTE inversion */ 222 - #define X86_FEATURE_IBRS_ENHANCED ( 7*32+30) /* Enhanced IBRS */ 223 - #define X86_FEATURE_MSR_IA32_FEAT_CTL ( 7*32+31) /* "" MSR IA32_FEAT_CTL configured */ 192 + #define X86_FEATURE_RING3MWAIT ( 7*32+ 0) /* "ring3mwait" Ring 3 MONITOR/MWAIT instructions */ 193 + #define X86_FEATURE_CPUID_FAULT ( 7*32+ 1) /* "cpuid_fault" Intel CPUID faulting */ 194 + #define X86_FEATURE_CPB ( 7*32+ 2) /* "cpb" AMD Core Performance Boost */ 195 + #define X86_FEATURE_EPB ( 7*32+ 3) /* "epb" IA32_ENERGY_PERF_BIAS support */ 196 + #define X86_FEATURE_CAT_L3 ( 7*32+ 4) /* "cat_l3" Cache Allocation Technology L3 */ 197 + #define X86_FEATURE_CAT_L2 ( 7*32+ 5) /* "cat_l2" Cache Allocation Technology L2 */ 198 + #define X86_FEATURE_CDP_L3 ( 7*32+ 6) /* "cdp_l3" Code and Data Prioritization L3 */ 199 + #define X86_FEATURE_TDX_HOST_PLATFORM ( 7*32+ 7) /* "tdx_host_platform" Platform supports being a TDX host */ 200 + #define X86_FEATURE_HW_PSTATE ( 7*32+ 8) /* "hw_pstate" AMD HW-PState */ 201 + #define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* "proc_feedback" AMD ProcFeedbackInterface */ 202 + #define X86_FEATURE_XCOMPACTED ( 7*32+10) /* Use compacted XSTATE (XSAVES or XSAVEC) */ 203 + #define X86_FEATURE_PTI ( 7*32+11) /* "pti" Kernel Page Table Isolation enabled */ 204 + #define X86_FEATURE_KERNEL_IBRS ( 7*32+12) /* Set/clear IBRS on kernel entry/exit */ 205 + #define X86_FEATURE_RSB_VMEXIT ( 7*32+13) /* Fill RSB on VM-Exit */ 206 + #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* "intel_ppin" Intel Processor Inventory Number */ 207 + #define X86_FEATURE_CDP_L2 ( 7*32+15) /* "cdp_l2" Code and Data Prioritization L2 */ 208 + #define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* MSR SPEC_CTRL is implemented */ 209 + #define X86_FEATURE_SSBD ( 7*32+17) /* "ssbd" Speculative Store Bypass Disable */ 210 + #define X86_FEATURE_MBA ( 7*32+18) /* "mba" Memory Bandwidth Allocation */ 211 + #define X86_FEATURE_RSB_CTXSW ( 7*32+19) /* Fill RSB on context switches */ 212 + #define X86_FEATURE_PERFMON_V2 ( 7*32+20) /* "perfmon_v2" AMD Performance Monitoring Version 2 */ 213 + #define X86_FEATURE_USE_IBPB ( 7*32+21) /* Indirect Branch Prediction Barrier enabled */ 214 + #define X86_FEATURE_USE_IBRS_FW ( 7*32+22) /* Use IBRS during runtime firmware calls */ 215 + #define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* Disable Speculative Store Bypass. 
*/ 216 + #define X86_FEATURE_LS_CFG_SSBD ( 7*32+24) /* AMD SSBD implementation via LS_CFG MSR */ 217 + #define X86_FEATURE_IBRS ( 7*32+25) /* "ibrs" Indirect Branch Restricted Speculation */ 218 + #define X86_FEATURE_IBPB ( 7*32+26) /* "ibpb" Indirect Branch Prediction Barrier */ 219 + #define X86_FEATURE_STIBP ( 7*32+27) /* "stibp" Single Thread Indirect Branch Predictors */ 220 + #define X86_FEATURE_ZEN ( 7*32+28) /* Generic flag for all Zen and newer */ 221 + #define X86_FEATURE_L1TF_PTEINV ( 7*32+29) /* L1TF workaround PTE inversion */ 222 + #define X86_FEATURE_IBRS_ENHANCED ( 7*32+30) /* "ibrs_enhanced" Enhanced IBRS */ 223 + #define X86_FEATURE_MSR_IA32_FEAT_CTL ( 7*32+31) /* MSR IA32_FEAT_CTL configured */ 224 224 225 225 /* Virtualization flags: Linux defined, word 8 */ 226 - #define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */ 227 - #define X86_FEATURE_FLEXPRIORITY ( 8*32+ 1) /* Intel FlexPriority */ 228 - #define X86_FEATURE_EPT ( 8*32+ 2) /* Intel Extended Page Table */ 229 - #define X86_FEATURE_VPID ( 8*32+ 3) /* Intel Virtual Processor ID */ 226 + #define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* "tpr_shadow" Intel TPR Shadow */ 227 + #define X86_FEATURE_FLEXPRIORITY ( 8*32+ 1) /* "flexpriority" Intel FlexPriority */ 228 + #define X86_FEATURE_EPT ( 8*32+ 2) /* "ept" Intel Extended Page Table */ 229 + #define X86_FEATURE_VPID ( 8*32+ 3) /* "vpid" Intel Virtual Processor ID */ 230 230 231 - #define X86_FEATURE_VMMCALL ( 8*32+15) /* Prefer VMMCALL to VMCALL */ 232 - #define X86_FEATURE_XENPV ( 8*32+16) /* "" Xen paravirtual guest */ 233 - #define X86_FEATURE_EPT_AD ( 8*32+17) /* Intel Extended Page Table access-dirty bit */ 234 - #define X86_FEATURE_VMCALL ( 8*32+18) /* "" Hypervisor supports the VMCALL instruction */ 235 - #define X86_FEATURE_VMW_VMMCALL ( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */ 236 - #define X86_FEATURE_PVUNLOCK ( 8*32+20) /* "" PV unlock function */ 237 - #define X86_FEATURE_VCPUPREEMPT ( 8*32+21) /* "" PV vcpu_is_preempted function */ 238 - #define X86_FEATURE_TDX_GUEST ( 8*32+22) /* Intel Trust Domain Extensions Guest */ 231 + #define X86_FEATURE_VMMCALL ( 8*32+15) /* "vmmcall" Prefer VMMCALL to VMCALL */ 232 + #define X86_FEATURE_XENPV ( 8*32+16) /* Xen paravirtual guest */ 233 + #define X86_FEATURE_EPT_AD ( 8*32+17) /* "ept_ad" Intel Extended Page Table access-dirty bit */ 234 + #define X86_FEATURE_VMCALL ( 8*32+18) /* Hypervisor supports the VMCALL instruction */ 235 + #define X86_FEATURE_VMW_VMMCALL ( 8*32+19) /* VMware prefers VMMCALL hypercall instruction */ 236 + #define X86_FEATURE_PVUNLOCK ( 8*32+20) /* PV unlock function */ 237 + #define X86_FEATURE_VCPUPREEMPT ( 8*32+21) /* PV vcpu_is_preempted function */ 238 + #define X86_FEATURE_TDX_GUEST ( 8*32+22) /* "tdx_guest" Intel Trust Domain Extensions Guest */ 239 239 240 240 /* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */ 241 - #define X86_FEATURE_FSGSBASE ( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/ 242 - #define X86_FEATURE_TSC_ADJUST ( 9*32+ 1) /* TSC adjustment MSR 0x3B */ 243 - #define X86_FEATURE_SGX ( 9*32+ 2) /* Software Guard Extensions */ 244 - #define X86_FEATURE_BMI1 ( 9*32+ 3) /* 1st group bit manipulation extensions */ 245 - #define X86_FEATURE_HLE ( 9*32+ 4) /* Hardware Lock Elision */ 246 - #define X86_FEATURE_AVX2 ( 9*32+ 5) /* AVX2 instructions */ 247 - #define X86_FEATURE_FDP_EXCPTN_ONLY ( 9*32+ 6) /* "" FPU data pointer updated only on x87 exceptions */ 248 - #define X86_FEATURE_SMEP ( 9*32+ 7) /* 
Supervisor Mode Execution Protection */ 249 - #define X86_FEATURE_BMI2 ( 9*32+ 8) /* 2nd group bit manipulation extensions */ 250 - #define X86_FEATURE_ERMS ( 9*32+ 9) /* Enhanced REP MOVSB/STOSB instructions */ 251 - #define X86_FEATURE_INVPCID ( 9*32+10) /* Invalidate Processor Context ID */ 252 - #define X86_FEATURE_RTM ( 9*32+11) /* Restricted Transactional Memory */ 253 - #define X86_FEATURE_CQM ( 9*32+12) /* Cache QoS Monitoring */ 254 - #define X86_FEATURE_ZERO_FCS_FDS ( 9*32+13) /* "" Zero out FPU CS and FPU DS */ 255 - #define X86_FEATURE_MPX ( 9*32+14) /* Memory Protection Extension */ 256 - #define X86_FEATURE_RDT_A ( 9*32+15) /* Resource Director Technology Allocation */ 257 - #define X86_FEATURE_AVX512F ( 9*32+16) /* AVX-512 Foundation */ 258 - #define X86_FEATURE_AVX512DQ ( 9*32+17) /* AVX-512 DQ (Double/Quad granular) Instructions */ 259 - #define X86_FEATURE_RDSEED ( 9*32+18) /* RDSEED instruction */ 260 - #define X86_FEATURE_ADX ( 9*32+19) /* ADCX and ADOX instructions */ 261 - #define X86_FEATURE_SMAP ( 9*32+20) /* Supervisor Mode Access Prevention */ 262 - #define X86_FEATURE_AVX512IFMA ( 9*32+21) /* AVX-512 Integer Fused Multiply-Add instructions */ 263 - #define X86_FEATURE_CLFLUSHOPT ( 9*32+23) /* CLFLUSHOPT instruction */ 264 - #define X86_FEATURE_CLWB ( 9*32+24) /* CLWB instruction */ 265 - #define X86_FEATURE_INTEL_PT ( 9*32+25) /* Intel Processor Trace */ 266 - #define X86_FEATURE_AVX512PF ( 9*32+26) /* AVX-512 Prefetch */ 267 - #define X86_FEATURE_AVX512ER ( 9*32+27) /* AVX-512 Exponential and Reciprocal */ 268 - #define X86_FEATURE_AVX512CD ( 9*32+28) /* AVX-512 Conflict Detection */ 269 - #define X86_FEATURE_SHA_NI ( 9*32+29) /* SHA1/SHA256 Instruction Extensions */ 270 - #define X86_FEATURE_AVX512BW ( 9*32+30) /* AVX-512 BW (Byte/Word granular) Instructions */ 271 - #define X86_FEATURE_AVX512VL ( 9*32+31) /* AVX-512 VL (128/256 Vector Length) Extensions */ 241 + #define X86_FEATURE_FSGSBASE ( 9*32+ 0) /* "fsgsbase" RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/ 242 + #define X86_FEATURE_TSC_ADJUST ( 9*32+ 1) /* "tsc_adjust" TSC adjustment MSR 0x3B */ 243 + #define X86_FEATURE_SGX ( 9*32+ 2) /* "sgx" Software Guard Extensions */ 244 + #define X86_FEATURE_BMI1 ( 9*32+ 3) /* "bmi1" 1st group bit manipulation extensions */ 245 + #define X86_FEATURE_HLE ( 9*32+ 4) /* "hle" Hardware Lock Elision */ 246 + #define X86_FEATURE_AVX2 ( 9*32+ 5) /* "avx2" AVX2 instructions */ 247 + #define X86_FEATURE_FDP_EXCPTN_ONLY ( 9*32+ 6) /* FPU data pointer updated only on x87 exceptions */ 248 + #define X86_FEATURE_SMEP ( 9*32+ 7) /* "smep" Supervisor Mode Execution Protection */ 249 + #define X86_FEATURE_BMI2 ( 9*32+ 8) /* "bmi2" 2nd group bit manipulation extensions */ 250 + #define X86_FEATURE_ERMS ( 9*32+ 9) /* "erms" Enhanced REP MOVSB/STOSB instructions */ 251 + #define X86_FEATURE_INVPCID ( 9*32+10) /* "invpcid" Invalidate Processor Context ID */ 252 + #define X86_FEATURE_RTM ( 9*32+11) /* "rtm" Restricted Transactional Memory */ 253 + #define X86_FEATURE_CQM ( 9*32+12) /* "cqm" Cache QoS Monitoring */ 254 + #define X86_FEATURE_ZERO_FCS_FDS ( 9*32+13) /* Zero out FPU CS and FPU DS */ 255 + #define X86_FEATURE_MPX ( 9*32+14) /* "mpx" Memory Protection Extension */ 256 + #define X86_FEATURE_RDT_A ( 9*32+15) /* "rdt_a" Resource Director Technology Allocation */ 257 + #define X86_FEATURE_AVX512F ( 9*32+16) /* "avx512f" AVX-512 Foundation */ 258 + #define X86_FEATURE_AVX512DQ ( 9*32+17) /* "avx512dq" AVX-512 DQ (Double/Quad granular) Instructions */ 259 + #define 
X86_FEATURE_RDSEED ( 9*32+18) /* "rdseed" RDSEED instruction */ 260 + #define X86_FEATURE_ADX ( 9*32+19) /* "adx" ADCX and ADOX instructions */ 261 + #define X86_FEATURE_SMAP ( 9*32+20) /* "smap" Supervisor Mode Access Prevention */ 262 + #define X86_FEATURE_AVX512IFMA ( 9*32+21) /* "avx512ifma" AVX-512 Integer Fused Multiply-Add instructions */ 263 + #define X86_FEATURE_CLFLUSHOPT ( 9*32+23) /* "clflushopt" CLFLUSHOPT instruction */ 264 + #define X86_FEATURE_CLWB ( 9*32+24) /* "clwb" CLWB instruction */ 265 + #define X86_FEATURE_INTEL_PT ( 9*32+25) /* "intel_pt" Intel Processor Trace */ 266 + #define X86_FEATURE_AVX512PF ( 9*32+26) /* "avx512pf" AVX-512 Prefetch */ 267 + #define X86_FEATURE_AVX512ER ( 9*32+27) /* "avx512er" AVX-512 Exponential and Reciprocal */ 268 + #define X86_FEATURE_AVX512CD ( 9*32+28) /* "avx512cd" AVX-512 Conflict Detection */ 269 + #define X86_FEATURE_SHA_NI ( 9*32+29) /* "sha_ni" SHA1/SHA256 Instruction Extensions */ 270 + #define X86_FEATURE_AVX512BW ( 9*32+30) /* "avx512bw" AVX-512 BW (Byte/Word granular) Instructions */ 271 + #define X86_FEATURE_AVX512VL ( 9*32+31) /* "avx512vl" AVX-512 VL (128/256 Vector Length) Extensions */ 272 272 273 273 /* Extended state features, CPUID level 0x0000000d:1 (EAX), word 10 */ 274 - #define X86_FEATURE_XSAVEOPT (10*32+ 0) /* XSAVEOPT instruction */ 275 - #define X86_FEATURE_XSAVEC (10*32+ 1) /* XSAVEC instruction */ 276 - #define X86_FEATURE_XGETBV1 (10*32+ 2) /* XGETBV with ECX = 1 instruction */ 277 - #define X86_FEATURE_XSAVES (10*32+ 3) /* XSAVES/XRSTORS instructions */ 278 - #define X86_FEATURE_XFD (10*32+ 4) /* "" eXtended Feature Disabling */ 274 + #define X86_FEATURE_XSAVEOPT (10*32+ 0) /* "xsaveopt" XSAVEOPT instruction */ 275 + #define X86_FEATURE_XSAVEC (10*32+ 1) /* "xsavec" XSAVEC instruction */ 276 + #define X86_FEATURE_XGETBV1 (10*32+ 2) /* "xgetbv1" XGETBV with ECX = 1 instruction */ 277 + #define X86_FEATURE_XSAVES (10*32+ 3) /* "xsaves" XSAVES/XRSTORS instructions */ 278 + #define X86_FEATURE_XFD (10*32+ 4) /* eXtended Feature Disabling */ 279 279 280 280 /* 281 281 * Extended auxiliary flags: Linux defined - for features scattered in various ··· 283 283 * 284 284 * Reuse free bits when adding new feature flags! 
285 285 */ 286 - #define X86_FEATURE_CQM_LLC (11*32+ 0) /* LLC QoS if 1 */ 287 - #define X86_FEATURE_CQM_OCCUP_LLC (11*32+ 1) /* LLC occupancy monitoring */ 288 - #define X86_FEATURE_CQM_MBM_TOTAL (11*32+ 2) /* LLC Total MBM monitoring */ 289 - #define X86_FEATURE_CQM_MBM_LOCAL (11*32+ 3) /* LLC Local MBM monitoring */ 290 - #define X86_FEATURE_FENCE_SWAPGS_USER (11*32+ 4) /* "" LFENCE in user entry SWAPGS path */ 291 - #define X86_FEATURE_FENCE_SWAPGS_KERNEL (11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */ 292 - #define X86_FEATURE_SPLIT_LOCK_DETECT (11*32+ 6) /* #AC for split lock */ 293 - #define X86_FEATURE_PER_THREAD_MBA (11*32+ 7) /* "" Per-thread Memory Bandwidth Allocation */ 294 - #define X86_FEATURE_SGX1 (11*32+ 8) /* "" Basic SGX */ 295 - #define X86_FEATURE_SGX2 (11*32+ 9) /* "" SGX Enclave Dynamic Memory Management (EDMM) */ 296 - #define X86_FEATURE_ENTRY_IBPB (11*32+10) /* "" Issue an IBPB on kernel entry */ 297 - #define X86_FEATURE_RRSBA_CTRL (11*32+11) /* "" RET prediction control */ 298 - #define X86_FEATURE_RETPOLINE (11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */ 299 - #define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spectre variant 2 */ 300 - #define X86_FEATURE_RETHUNK (11*32+14) /* "" Use REturn THUNK */ 301 - #define X86_FEATURE_UNRET (11*32+15) /* "" AMD BTB untrain return */ 302 - #define X86_FEATURE_USE_IBPB_FW (11*32+16) /* "" Use IBPB during runtime firmware calls */ 303 - #define X86_FEATURE_RSB_VMEXIT_LITE (11*32+17) /* "" Fill RSB on VM exit when EIBRS is enabled */ 304 - #define X86_FEATURE_SGX_EDECCSSA (11*32+18) /* "" SGX EDECCSSA user leaf function */ 305 - #define X86_FEATURE_CALL_DEPTH (11*32+19) /* "" Call depth tracking for RSB stuffing */ 306 - #define X86_FEATURE_MSR_TSX_CTRL (11*32+20) /* "" MSR IA32_TSX_CTRL (Intel) implemented */ 307 - #define X86_FEATURE_SMBA (11*32+21) /* "" Slow Memory Bandwidth Allocation */ 308 - #define X86_FEATURE_BMEC (11*32+22) /* "" Bandwidth Monitoring Event Configuration */ 309 - #define X86_FEATURE_USER_SHSTK (11*32+23) /* Shadow stack support for user mode applications */ 310 - #define X86_FEATURE_SRSO (11*32+24) /* "" AMD BTB untrain RETs */ 311 - #define X86_FEATURE_SRSO_ALIAS (11*32+25) /* "" AMD BTB untrain RETs through aliasing */ 312 - #define X86_FEATURE_IBPB_ON_VMEXIT (11*32+26) /* "" Issue an IBPB only on VMEXIT */ 313 - #define X86_FEATURE_APIC_MSRS_FENCE (11*32+27) /* "" IA32_TSC_DEADLINE and X2APIC MSRs need fencing */ 314 - #define X86_FEATURE_ZEN2 (11*32+28) /* "" CPU based on Zen2 microarchitecture */ 315 - #define X86_FEATURE_ZEN3 (11*32+29) /* "" CPU based on Zen3 microarchitecture */ 316 - #define X86_FEATURE_ZEN4 (11*32+30) /* "" CPU based on Zen4 microarchitecture */ 317 - #define X86_FEATURE_ZEN1 (11*32+31) /* "" CPU based on Zen1 microarchitecture */ 286 + #define X86_FEATURE_CQM_LLC (11*32+ 0) /* "cqm_llc" LLC QoS if 1 */ 287 + #define X86_FEATURE_CQM_OCCUP_LLC (11*32+ 1) /* "cqm_occup_llc" LLC occupancy monitoring */ 288 + #define X86_FEATURE_CQM_MBM_TOTAL (11*32+ 2) /* "cqm_mbm_total" LLC Total MBM monitoring */ 289 + #define X86_FEATURE_CQM_MBM_LOCAL (11*32+ 3) /* "cqm_mbm_local" LLC Local MBM monitoring */ 290 + #define X86_FEATURE_FENCE_SWAPGS_USER (11*32+ 4) /* LFENCE in user entry SWAPGS path */ 291 + #define X86_FEATURE_FENCE_SWAPGS_KERNEL (11*32+ 5) /* LFENCE in kernel entry SWAPGS path */ 292 + #define X86_FEATURE_SPLIT_LOCK_DETECT (11*32+ 6) /* "split_lock_detect" #AC for split lock */ 293 + #define X86_FEATURE_PER_THREAD_MBA 
(11*32+ 7) /* Per-thread Memory Bandwidth Allocation */ 294 + #define X86_FEATURE_SGX1 (11*32+ 8) /* Basic SGX */ 295 + #define X86_FEATURE_SGX2 (11*32+ 9) /* SGX Enclave Dynamic Memory Management (EDMM) */ 296 + #define X86_FEATURE_ENTRY_IBPB (11*32+10) /* Issue an IBPB on kernel entry */ 297 + #define X86_FEATURE_RRSBA_CTRL (11*32+11) /* RET prediction control */ 298 + #define X86_FEATURE_RETPOLINE (11*32+12) /* Generic Retpoline mitigation for Spectre variant 2 */ 299 + #define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* Use LFENCE for Spectre variant 2 */ 300 + #define X86_FEATURE_RETHUNK (11*32+14) /* Use REturn THUNK */ 301 + #define X86_FEATURE_UNRET (11*32+15) /* AMD BTB untrain return */ 302 + #define X86_FEATURE_USE_IBPB_FW (11*32+16) /* Use IBPB during runtime firmware calls */ 303 + #define X86_FEATURE_RSB_VMEXIT_LITE (11*32+17) /* Fill RSB on VM exit when EIBRS is enabled */ 304 + #define X86_FEATURE_SGX_EDECCSSA (11*32+18) /* SGX EDECCSSA user leaf function */ 305 + #define X86_FEATURE_CALL_DEPTH (11*32+19) /* Call depth tracking for RSB stuffing */ 306 + #define X86_FEATURE_MSR_TSX_CTRL (11*32+20) /* MSR IA32_TSX_CTRL (Intel) implemented */ 307 + #define X86_FEATURE_SMBA (11*32+21) /* Slow Memory Bandwidth Allocation */ 308 + #define X86_FEATURE_BMEC (11*32+22) /* Bandwidth Monitoring Event Configuration */ 309 + #define X86_FEATURE_USER_SHSTK (11*32+23) /* "user_shstk" Shadow stack support for user mode applications */ 310 + #define X86_FEATURE_SRSO (11*32+24) /* AMD BTB untrain RETs */ 311 + #define X86_FEATURE_SRSO_ALIAS (11*32+25) /* AMD BTB untrain RETs through aliasing */ 312 + #define X86_FEATURE_IBPB_ON_VMEXIT (11*32+26) /* Issue an IBPB only on VMEXIT */ 313 + #define X86_FEATURE_APIC_MSRS_FENCE (11*32+27) /* IA32_TSC_DEADLINE and X2APIC MSRs need fencing */ 314 + #define X86_FEATURE_ZEN2 (11*32+28) /* CPU based on Zen2 microarchitecture */ 315 + #define X86_FEATURE_ZEN3 (11*32+29) /* CPU based on Zen3 microarchitecture */ 316 + #define X86_FEATURE_ZEN4 (11*32+30) /* CPU based on Zen4 microarchitecture */ 317 + #define X86_FEATURE_ZEN1 (11*32+31) /* CPU based on Zen1 microarchitecture */ 318 318 319 319 /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */ 320 - #define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */ 321 - #define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* AVX512 BFLOAT16 instructions */ 322 - #define X86_FEATURE_CMPCCXADD (12*32+ 7) /* "" CMPccXADD instructions */ 323 - #define X86_FEATURE_ARCH_PERFMON_EXT (12*32+ 8) /* "" Intel Architectural PerfMon Extension */ 324 - #define X86_FEATURE_FZRM (12*32+10) /* "" Fast zero-length REP MOVSB */ 325 - #define X86_FEATURE_FSRS (12*32+11) /* "" Fast short REP STOSB */ 326 - #define X86_FEATURE_FSRC (12*32+12) /* "" Fast short REP {CMPSB,SCASB} */ 327 - #define X86_FEATURE_FRED (12*32+17) /* Flexible Return and Event Delivery */ 328 - #define X86_FEATURE_LKGS (12*32+18) /* "" Load "kernel" (userspace) GS */ 329 - #define X86_FEATURE_WRMSRNS (12*32+19) /* "" Non-serializing WRMSR */ 330 - #define X86_FEATURE_AMX_FP16 (12*32+21) /* "" AMX fp16 Support */ 331 - #define X86_FEATURE_AVX_IFMA (12*32+23) /* "" Support for VPMADD52[H,L]UQ */ 332 - #define X86_FEATURE_LAM (12*32+26) /* Linear Address Masking */ 320 + #define X86_FEATURE_AVX_VNNI (12*32+ 4) /* "avx_vnni" AVX VNNI instructions */ 321 + #define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* "avx512_bf16" AVX512 BFLOAT16 instructions */ 322 + #define X86_FEATURE_CMPCCXADD (12*32+ 7) /* CMPccXADD instructions */ 323 + #define 
X86_FEATURE_ARCH_PERFMON_EXT (12*32+ 8) /* Intel Architectural PerfMon Extension */ 324 + #define X86_FEATURE_FZRM (12*32+10) /* Fast zero-length REP MOVSB */ 325 + #define X86_FEATURE_FSRS (12*32+11) /* Fast short REP STOSB */ 326 + #define X86_FEATURE_FSRC (12*32+12) /* Fast short REP {CMPSB,SCASB} */ 327 + #define X86_FEATURE_FRED (12*32+17) /* "fred" Flexible Return and Event Delivery */ 328 + #define X86_FEATURE_LKGS (12*32+18) /* Load "kernel" (userspace) GS */ 329 + #define X86_FEATURE_WRMSRNS (12*32+19) /* Non-serializing WRMSR */ 330 + #define X86_FEATURE_AMX_FP16 (12*32+21) /* AMX fp16 Support */ 331 + #define X86_FEATURE_AVX_IFMA (12*32+23) /* Support for VPMADD52[H,L]UQ */ 332 + #define X86_FEATURE_LAM (12*32+26) /* "lam" Linear Address Masking */ 333 333 334 334 /* AMD-defined CPU features, CPUID level 0x80000008 (EBX), word 13 */ 335 - #define X86_FEATURE_CLZERO (13*32+ 0) /* CLZERO instruction */ 336 - #define X86_FEATURE_IRPERF (13*32+ 1) /* Instructions Retired Count */ 337 - #define X86_FEATURE_XSAVEERPTR (13*32+ 2) /* Always save/restore FP error pointers */ 338 - #define X86_FEATURE_RDPRU (13*32+ 4) /* Read processor register at user level */ 339 - #define X86_FEATURE_WBNOINVD (13*32+ 9) /* WBNOINVD instruction */ 340 - #define X86_FEATURE_AMD_IBPB (13*32+12) /* "" Indirect Branch Prediction Barrier */ 341 - #define X86_FEATURE_AMD_IBRS (13*32+14) /* "" Indirect Branch Restricted Speculation */ 342 - #define X86_FEATURE_AMD_STIBP (13*32+15) /* "" Single Thread Indirect Branch Predictors */ 343 - #define X86_FEATURE_AMD_STIBP_ALWAYS_ON (13*32+17) /* "" Single Thread Indirect Branch Predictors always-on preferred */ 344 - #define X86_FEATURE_AMD_PPIN (13*32+23) /* Protected Processor Inventory Number */ 345 - #define X86_FEATURE_AMD_SSBD (13*32+24) /* "" Speculative Store Bypass Disable */ 346 - #define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store Bypass Disable */ 347 - #define X86_FEATURE_AMD_SSB_NO (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. 
*/ 348 - #define X86_FEATURE_CPPC (13*32+27) /* Collaborative Processor Performance Control */ 349 - #define X86_FEATURE_AMD_PSFD (13*32+28) /* "" Predictive Store Forwarding Disable */ 350 - #define X86_FEATURE_BTC_NO (13*32+29) /* "" Not vulnerable to Branch Type Confusion */ 351 - #define X86_FEATURE_BRS (13*32+31) /* Branch Sampling available */ 335 + #define X86_FEATURE_CLZERO (13*32+ 0) /* "clzero" CLZERO instruction */ 336 + #define X86_FEATURE_IRPERF (13*32+ 1) /* "irperf" Instructions Retired Count */ 337 + #define X86_FEATURE_XSAVEERPTR (13*32+ 2) /* "xsaveerptr" Always save/restore FP error pointers */ 338 + #define X86_FEATURE_RDPRU (13*32+ 4) /* "rdpru" Read processor register at user level */ 339 + #define X86_FEATURE_WBNOINVD (13*32+ 9) /* "wbnoinvd" WBNOINVD instruction */ 340 + #define X86_FEATURE_AMD_IBPB (13*32+12) /* Indirect Branch Prediction Barrier */ 341 + #define X86_FEATURE_AMD_IBRS (13*32+14) /* Indirect Branch Restricted Speculation */ 342 + #define X86_FEATURE_AMD_STIBP (13*32+15) /* Single Thread Indirect Branch Predictors */ 343 + #define X86_FEATURE_AMD_STIBP_ALWAYS_ON (13*32+17) /* Single Thread Indirect Branch Predictors always-on preferred */ 344 + #define X86_FEATURE_AMD_PPIN (13*32+23) /* "amd_ppin" Protected Processor Inventory Number */ 345 + #define X86_FEATURE_AMD_SSBD (13*32+24) /* Speculative Store Bypass Disable */ 346 + #define X86_FEATURE_VIRT_SSBD (13*32+25) /* "virt_ssbd" Virtualized Speculative Store Bypass Disable */ 347 + #define X86_FEATURE_AMD_SSB_NO (13*32+26) /* Speculative Store Bypass is fixed in hardware. */ 348 + #define X86_FEATURE_CPPC (13*32+27) /* "cppc" Collaborative Processor Performance Control */ 349 + #define X86_FEATURE_AMD_PSFD (13*32+28) /* Predictive Store Forwarding Disable */ 350 + #define X86_FEATURE_BTC_NO (13*32+29) /* Not vulnerable to Branch Type Confusion */ 351 + #define X86_FEATURE_BRS (13*32+31) /* "brs" Branch Sampling available */ 352 352 353 353 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */ 354 - #define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */ 355 - #define X86_FEATURE_IDA (14*32+ 1) /* Intel Dynamic Acceleration */ 356 - #define X86_FEATURE_ARAT (14*32+ 2) /* Always Running APIC Timer */ 357 - #define X86_FEATURE_PLN (14*32+ 4) /* Intel Power Limit Notification */ 358 - #define X86_FEATURE_PTS (14*32+ 6) /* Intel Package Thermal Status */ 359 - #define X86_FEATURE_HWP (14*32+ 7) /* Intel Hardware P-states */ 360 - #define X86_FEATURE_HWP_NOTIFY (14*32+ 8) /* HWP Notification */ 361 - #define X86_FEATURE_HWP_ACT_WINDOW (14*32+ 9) /* HWP Activity Window */ 362 - #define X86_FEATURE_HWP_EPP (14*32+10) /* HWP Energy Perf. 
Preference */ 363 - #define X86_FEATURE_HWP_PKG_REQ (14*32+11) /* HWP Package Level Request */ 364 - #define X86_FEATURE_HFI (14*32+19) /* Hardware Feedback Interface */ 354 + #define X86_FEATURE_DTHERM (14*32+ 0) /* "dtherm" Digital Thermal Sensor */ 355 + #define X86_FEATURE_IDA (14*32+ 1) /* "ida" Intel Dynamic Acceleration */ 356 + #define X86_FEATURE_ARAT (14*32+ 2) /* "arat" Always Running APIC Timer */ 357 + #define X86_FEATURE_PLN (14*32+ 4) /* "pln" Intel Power Limit Notification */ 358 + #define X86_FEATURE_PTS (14*32+ 6) /* "pts" Intel Package Thermal Status */ 359 + #define X86_FEATURE_HWP (14*32+ 7) /* "hwp" Intel Hardware P-states */ 360 + #define X86_FEATURE_HWP_NOTIFY (14*32+ 8) /* "hwp_notify" HWP Notification */ 361 + #define X86_FEATURE_HWP_ACT_WINDOW (14*32+ 9) /* "hwp_act_window" HWP Activity Window */ 362 + #define X86_FEATURE_HWP_EPP (14*32+10) /* "hwp_epp" HWP Energy Perf. Preference */ 363 + #define X86_FEATURE_HWP_PKG_REQ (14*32+11) /* "hwp_pkg_req" HWP Package Level Request */ 364 + #define X86_FEATURE_HWP_HIGHEST_PERF_CHANGE (14*32+15) /* HWP Highest perf change */ 365 + #define X86_FEATURE_HFI (14*32+19) /* "hfi" Hardware Feedback Interface */ 365 366 366 367 /* AMD SVM Feature Identification, CPUID level 0x8000000a (EDX), word 15 */ 367 - #define X86_FEATURE_NPT (15*32+ 0) /* Nested Page Table support */ 368 - #define X86_FEATURE_LBRV (15*32+ 1) /* LBR Virtualization support */ 368 + #define X86_FEATURE_NPT (15*32+ 0) /* "npt" Nested Page Table support */ 369 + #define X86_FEATURE_LBRV (15*32+ 1) /* "lbrv" LBR Virtualization support */ 369 370 #define X86_FEATURE_SVML (15*32+ 2) /* "svm_lock" SVM locking MSR */ 370 371 #define X86_FEATURE_NRIPS (15*32+ 3) /* "nrip_save" SVM next_rip save */ 371 372 #define X86_FEATURE_TSCRATEMSR (15*32+ 4) /* "tsc_scale" TSC scaling support */ 372 373 #define X86_FEATURE_VMCBCLEAN (15*32+ 5) /* "vmcb_clean" VMCB clean bits support */ 373 - #define X86_FEATURE_FLUSHBYASID (15*32+ 6) /* flush-by-ASID support */ 374 - #define X86_FEATURE_DECODEASSISTS (15*32+ 7) /* Decode Assists support */ 375 - #define X86_FEATURE_PAUSEFILTER (15*32+10) /* filtered pause intercept */ 376 - #define X86_FEATURE_PFTHRESHOLD (15*32+12) /* pause filter threshold */ 377 - #define X86_FEATURE_AVIC (15*32+13) /* Virtual Interrupt Controller */ 378 - #define X86_FEATURE_V_VMSAVE_VMLOAD (15*32+15) /* Virtual VMSAVE VMLOAD */ 379 - #define X86_FEATURE_VGIF (15*32+16) /* Virtual GIF */ 380 - #define X86_FEATURE_X2AVIC (15*32+18) /* Virtual x2apic */ 381 - #define X86_FEATURE_V_SPEC_CTRL (15*32+20) /* Virtual SPEC_CTRL */ 382 - #define X86_FEATURE_VNMI (15*32+25) /* Virtual NMI */ 383 - #define X86_FEATURE_SVME_ADDR_CHK (15*32+28) /* "" SVME addr check */ 374 + #define X86_FEATURE_FLUSHBYASID (15*32+ 6) /* "flushbyasid" Flush-by-ASID support */ 375 + #define X86_FEATURE_DECODEASSISTS (15*32+ 7) /* "decodeassists" Decode Assists support */ 376 + #define X86_FEATURE_PAUSEFILTER (15*32+10) /* "pausefilter" Filtered pause intercept */ 377 + #define X86_FEATURE_PFTHRESHOLD (15*32+12) /* "pfthreshold" Pause filter threshold */ 378 + #define X86_FEATURE_AVIC (15*32+13) /* "avic" Virtual Interrupt Controller */ 379 + #define X86_FEATURE_V_VMSAVE_VMLOAD (15*32+15) /* "v_vmsave_vmload" Virtual VMSAVE VMLOAD */ 380 + #define X86_FEATURE_VGIF (15*32+16) /* "vgif" Virtual GIF */ 381 + #define X86_FEATURE_X2AVIC (15*32+18) /* "x2avic" Virtual x2apic */ 382 + #define X86_FEATURE_V_SPEC_CTRL (15*32+20) /* "v_spec_ctrl" Virtual SPEC_CTRL */ 383 + #define X86_FEATURE_VNMI 
(15*32+25) /* "vnmi" Virtual NMI */ 384 + #define X86_FEATURE_SVME_ADDR_CHK (15*32+28) /* SVME addr check */ 384 385 385 386 /* Intel-defined CPU features, CPUID level 0x00000007:0 (ECX), word 16 */ 386 - #define X86_FEATURE_AVX512VBMI (16*32+ 1) /* AVX512 Vector Bit Manipulation instructions*/ 387 - #define X86_FEATURE_UMIP (16*32+ 2) /* User Mode Instruction Protection */ 388 - #define X86_FEATURE_PKU (16*32+ 3) /* Protection Keys for Userspace */ 389 - #define X86_FEATURE_OSPKE (16*32+ 4) /* OS Protection Keys Enable */ 390 - #define X86_FEATURE_WAITPKG (16*32+ 5) /* UMONITOR/UMWAIT/TPAUSE Instructions */ 391 - #define X86_FEATURE_AVX512_VBMI2 (16*32+ 6) /* Additional AVX512 Vector Bit Manipulation Instructions */ 392 - #define X86_FEATURE_SHSTK (16*32+ 7) /* "" Shadow stack */ 393 - #define X86_FEATURE_GFNI (16*32+ 8) /* Galois Field New Instructions */ 394 - #define X86_FEATURE_VAES (16*32+ 9) /* Vector AES */ 395 - #define X86_FEATURE_VPCLMULQDQ (16*32+10) /* Carry-Less Multiplication Double Quadword */ 396 - #define X86_FEATURE_AVX512_VNNI (16*32+11) /* Vector Neural Network Instructions */ 397 - #define X86_FEATURE_AVX512_BITALG (16*32+12) /* Support for VPOPCNT[B,W] and VPSHUF-BITQMB instructions */ 398 - #define X86_FEATURE_TME (16*32+13) /* Intel Total Memory Encryption */ 399 - #define X86_FEATURE_AVX512_VPOPCNTDQ (16*32+14) /* POPCNT for vectors of DW/QW */ 400 - #define X86_FEATURE_LA57 (16*32+16) /* 5-level page tables */ 401 - #define X86_FEATURE_RDPID (16*32+22) /* RDPID instruction */ 402 - #define X86_FEATURE_BUS_LOCK_DETECT (16*32+24) /* Bus Lock detect */ 403 - #define X86_FEATURE_CLDEMOTE (16*32+25) /* CLDEMOTE instruction */ 404 - #define X86_FEATURE_MOVDIRI (16*32+27) /* MOVDIRI instruction */ 405 - #define X86_FEATURE_MOVDIR64B (16*32+28) /* MOVDIR64B instruction */ 406 - #define X86_FEATURE_ENQCMD (16*32+29) /* ENQCMD and ENQCMDS instructions */ 407 - #define X86_FEATURE_SGX_LC (16*32+30) /* Software Guard Extensions Launch Control */ 387 + #define X86_FEATURE_AVX512VBMI (16*32+ 1) /* "avx512vbmi" AVX512 Vector Bit Manipulation instructions*/ 388 + #define X86_FEATURE_UMIP (16*32+ 2) /* "umip" User Mode Instruction Protection */ 389 + #define X86_FEATURE_PKU (16*32+ 3) /* "pku" Protection Keys for Userspace */ 390 + #define X86_FEATURE_OSPKE (16*32+ 4) /* "ospke" OS Protection Keys Enable */ 391 + #define X86_FEATURE_WAITPKG (16*32+ 5) /* "waitpkg" UMONITOR/UMWAIT/TPAUSE Instructions */ 392 + #define X86_FEATURE_AVX512_VBMI2 (16*32+ 6) /* "avx512_vbmi2" Additional AVX512 Vector Bit Manipulation Instructions */ 393 + #define X86_FEATURE_SHSTK (16*32+ 7) /* Shadow stack */ 394 + #define X86_FEATURE_GFNI (16*32+ 8) /* "gfni" Galois Field New Instructions */ 395 + #define X86_FEATURE_VAES (16*32+ 9) /* "vaes" Vector AES */ 396 + #define X86_FEATURE_VPCLMULQDQ (16*32+10) /* "vpclmulqdq" Carry-Less Multiplication Double Quadword */ 397 + #define X86_FEATURE_AVX512_VNNI (16*32+11) /* "avx512_vnni" Vector Neural Network Instructions */ 398 + #define X86_FEATURE_AVX512_BITALG (16*32+12) /* "avx512_bitalg" Support for VPOPCNT[B,W] and VPSHUF-BITQMB instructions */ 399 + #define X86_FEATURE_TME (16*32+13) /* "tme" Intel Total Memory Encryption */ 400 + #define X86_FEATURE_AVX512_VPOPCNTDQ (16*32+14) /* "avx512_vpopcntdq" POPCNT for vectors of DW/QW */ 401 + #define X86_FEATURE_LA57 (16*32+16) /* "la57" 5-level page tables */ 402 + #define X86_FEATURE_RDPID (16*32+22) /* "rdpid" RDPID instruction */ 403 + #define X86_FEATURE_BUS_LOCK_DETECT (16*32+24) /* "bus_lock_detect" Bus 
Lock detect */ 404 + #define X86_FEATURE_CLDEMOTE (16*32+25) /* "cldemote" CLDEMOTE instruction */ 405 + #define X86_FEATURE_MOVDIRI (16*32+27) /* "movdiri" MOVDIRI instruction */ 406 + #define X86_FEATURE_MOVDIR64B (16*32+28) /* "movdir64b" MOVDIR64B instruction */ 407 + #define X86_FEATURE_ENQCMD (16*32+29) /* "enqcmd" ENQCMD and ENQCMDS instructions */ 408 + #define X86_FEATURE_SGX_LC (16*32+30) /* "sgx_lc" Software Guard Extensions Launch Control */ 408 409 409 410 /* AMD-defined CPU features, CPUID level 0x80000007 (EBX), word 17 */ 410 - #define X86_FEATURE_OVERFLOW_RECOV (17*32+ 0) /* MCA overflow recovery support */ 411 - #define X86_FEATURE_SUCCOR (17*32+ 1) /* Uncorrectable error containment and recovery */ 412 - #define X86_FEATURE_SMCA (17*32+ 3) /* Scalable MCA */ 411 + #define X86_FEATURE_OVERFLOW_RECOV (17*32+ 0) /* "overflow_recov" MCA overflow recovery support */ 412 + #define X86_FEATURE_SUCCOR (17*32+ 1) /* "succor" Uncorrectable error containment and recovery */ 413 + #define X86_FEATURE_SMCA (17*32+ 3) /* "smca" Scalable MCA */ 413 414 414 415 /* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */ 415 - #define X86_FEATURE_AVX512_4VNNIW (18*32+ 2) /* AVX-512 Neural Network Instructions */ 416 - #define X86_FEATURE_AVX512_4FMAPS (18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */ 417 - #define X86_FEATURE_FSRM (18*32+ 4) /* Fast Short Rep Mov */ 418 - #define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* AVX-512 Intersect for D/Q */ 419 - #define X86_FEATURE_SRBDS_CTRL (18*32+ 9) /* "" SRBDS mitigation MSR available */ 420 - #define X86_FEATURE_MD_CLEAR (18*32+10) /* VERW clears CPU buffers */ 421 - #define X86_FEATURE_RTM_ALWAYS_ABORT (18*32+11) /* "" RTM transaction always aborts */ 422 - #define X86_FEATURE_TSX_FORCE_ABORT (18*32+13) /* "" TSX_FORCE_ABORT */ 423 - #define X86_FEATURE_SERIALIZE (18*32+14) /* SERIALIZE instruction */ 424 - #define X86_FEATURE_HYBRID_CPU (18*32+15) /* "" This part has CPUs of more than one type */ 425 - #define X86_FEATURE_TSXLDTRK (18*32+16) /* TSX Suspend Load Address Tracking */ 426 - #define X86_FEATURE_PCONFIG (18*32+18) /* Intel PCONFIG */ 427 - #define X86_FEATURE_ARCH_LBR (18*32+19) /* Intel ARCH LBR */ 428 - #define X86_FEATURE_IBT (18*32+20) /* Indirect Branch Tracking */ 429 - #define X86_FEATURE_AMX_BF16 (18*32+22) /* AMX bf16 Support */ 430 - #define X86_FEATURE_AVX512_FP16 (18*32+23) /* AVX512 FP16 */ 431 - #define X86_FEATURE_AMX_TILE (18*32+24) /* AMX tile Support */ 432 - #define X86_FEATURE_AMX_INT8 (18*32+25) /* AMX int8 Support */ 433 - #define X86_FEATURE_SPEC_CTRL (18*32+26) /* "" Speculation Control (IBRS + IBPB) */ 434 - #define X86_FEATURE_INTEL_STIBP (18*32+27) /* "" Single Thread Indirect Branch Predictors */ 435 - #define X86_FEATURE_FLUSH_L1D (18*32+28) /* Flush L1D cache */ 436 - #define X86_FEATURE_ARCH_CAPABILITIES (18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */ 437 - #define X86_FEATURE_CORE_CAPABILITIES (18*32+30) /* "" IA32_CORE_CAPABILITIES MSR */ 438 - #define X86_FEATURE_SPEC_CTRL_SSBD (18*32+31) /* "" Speculative Store Bypass Disable */ 416 + #define X86_FEATURE_AVX512_4VNNIW (18*32+ 2) /* "avx512_4vnniw" AVX-512 Neural Network Instructions */ 417 + #define X86_FEATURE_AVX512_4FMAPS (18*32+ 3) /* "avx512_4fmaps" AVX-512 Multiply Accumulation Single precision */ 418 + #define X86_FEATURE_FSRM (18*32+ 4) /* "fsrm" Fast Short Rep Mov */ 419 + #define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* "avx512_vp2intersect" AVX-512 Intersect for D/Q */ 420 + #define 
X86_FEATURE_SRBDS_CTRL (18*32+ 9) /* SRBDS mitigation MSR available */ 421 + #define X86_FEATURE_MD_CLEAR (18*32+10) /* "md_clear" VERW clears CPU buffers */ 422 + #define X86_FEATURE_RTM_ALWAYS_ABORT (18*32+11) /* RTM transaction always aborts */ 423 + #define X86_FEATURE_TSX_FORCE_ABORT (18*32+13) /* TSX_FORCE_ABORT */ 424 + #define X86_FEATURE_SERIALIZE (18*32+14) /* "serialize" SERIALIZE instruction */ 425 + #define X86_FEATURE_HYBRID_CPU (18*32+15) /* This part has CPUs of more than one type */ 426 + #define X86_FEATURE_TSXLDTRK (18*32+16) /* "tsxldtrk" TSX Suspend Load Address Tracking */ 427 + #define X86_FEATURE_PCONFIG (18*32+18) /* "pconfig" Intel PCONFIG */ 428 + #define X86_FEATURE_ARCH_LBR (18*32+19) /* "arch_lbr" Intel ARCH LBR */ 429 + #define X86_FEATURE_IBT (18*32+20) /* "ibt" Indirect Branch Tracking */ 430 + #define X86_FEATURE_AMX_BF16 (18*32+22) /* "amx_bf16" AMX bf16 Support */ 431 + #define X86_FEATURE_AVX512_FP16 (18*32+23) /* "avx512_fp16" AVX512 FP16 */ 432 + #define X86_FEATURE_AMX_TILE (18*32+24) /* "amx_tile" AMX tile Support */ 433 + #define X86_FEATURE_AMX_INT8 (18*32+25) /* "amx_int8" AMX int8 Support */ 434 + #define X86_FEATURE_SPEC_CTRL (18*32+26) /* Speculation Control (IBRS + IBPB) */ 435 + #define X86_FEATURE_INTEL_STIBP (18*32+27) /* Single Thread Indirect Branch Predictors */ 436 + #define X86_FEATURE_FLUSH_L1D (18*32+28) /* "flush_l1d" Flush L1D cache */ 437 + #define X86_FEATURE_ARCH_CAPABILITIES (18*32+29) /* "arch_capabilities" IA32_ARCH_CAPABILITIES MSR (Intel) */ 438 + #define X86_FEATURE_CORE_CAPABILITIES (18*32+30) /* IA32_CORE_CAPABILITIES MSR */ 439 + #define X86_FEATURE_SPEC_CTRL_SSBD (18*32+31) /* Speculative Store Bypass Disable */ 439 440 440 441 /* AMD-defined memory encryption features, CPUID level 0x8000001f (EAX), word 19 */ 441 - #define X86_FEATURE_SME (19*32+ 0) /* AMD Secure Memory Encryption */ 442 - #define X86_FEATURE_SEV (19*32+ 1) /* AMD Secure Encrypted Virtualization */ 443 - #define X86_FEATURE_VM_PAGE_FLUSH (19*32+ 2) /* "" VM Page Flush MSR is supported */ 444 - #define X86_FEATURE_SEV_ES (19*32+ 3) /* AMD Secure Encrypted Virtualization - Encrypted State */ 445 - #define X86_FEATURE_SEV_SNP (19*32+ 4) /* AMD Secure Encrypted Virtualization - Secure Nested Paging */ 446 - #define X86_FEATURE_V_TSC_AUX (19*32+ 9) /* "" Virtual TSC_AUX */ 447 - #define X86_FEATURE_SME_COHERENT (19*32+10) /* "" AMD hardware-enforced cache coherency */ 448 - #define X86_FEATURE_DEBUG_SWAP (19*32+14) /* AMD SEV-ES full debug state swap support */ 442 + #define X86_FEATURE_SME (19*32+ 0) /* "sme" AMD Secure Memory Encryption */ 443 + #define X86_FEATURE_SEV (19*32+ 1) /* "sev" AMD Secure Encrypted Virtualization */ 444 + #define X86_FEATURE_VM_PAGE_FLUSH (19*32+ 2) /* VM Page Flush MSR is supported */ 445 + #define X86_FEATURE_SEV_ES (19*32+ 3) /* "sev_es" AMD Secure Encrypted Virtualization - Encrypted State */ 446 + #define X86_FEATURE_SEV_SNP (19*32+ 4) /* "sev_snp" AMD Secure Encrypted Virtualization - Secure Nested Paging */ 447 + #define X86_FEATURE_V_TSC_AUX (19*32+ 9) /* Virtual TSC_AUX */ 448 + #define X86_FEATURE_SME_COHERENT (19*32+10) /* AMD hardware-enforced cache coherency */ 449 + #define X86_FEATURE_DEBUG_SWAP (19*32+14) /* "debug_swap" AMD SEV-ES full debug state swap support */ 450 + #define X86_FEATURE_SVSM (19*32+28) /* "svsm" SVSM present */ 449 451 450 452 /* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */ 451 - #define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data 
Breakpoints */ 452 - #define X86_FEATURE_WRMSR_XX_BASE_NS (20*32+ 1) /* "" WRMSR to {FS,GS,KERNEL_GS}_BASE is non-serializing */ 453 - #define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */ 454 - #define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* "" Null Selector Clears Base */ 455 - #define X86_FEATURE_AUTOIBRS (20*32+ 8) /* "" Automatic IBRS */ 456 - #define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* "" SMM_CTL MSR is not present */ 453 + #define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* No Nested Data Breakpoints */ 454 + #define X86_FEATURE_WRMSR_XX_BASE_NS (20*32+ 1) /* WRMSR to {FS,GS,KERNEL_GS}_BASE is non-serializing */ 455 + #define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* LFENCE always serializing / synchronizes RDTSC */ 456 + #define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* Null Selector Clears Base */ 457 + #define X86_FEATURE_AUTOIBRS (20*32+ 8) /* Automatic IBRS */ 458 + #define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* SMM_CTL MSR is not present */ 457 459 458 - #define X86_FEATURE_SBPB (20*32+27) /* "" Selective Branch Prediction Barrier */ 459 - #define X86_FEATURE_IBPB_BRTYPE (20*32+28) /* "" MSR_PRED_CMD[IBPB] flushes all branch type predictions */ 460 - #define X86_FEATURE_SRSO_NO (20*32+29) /* "" CPU is not affected by SRSO */ 460 + #define X86_FEATURE_SBPB (20*32+27) /* Selective Branch Prediction Barrier */ 461 + #define X86_FEATURE_IBPB_BRTYPE (20*32+28) /* MSR_PRED_CMD[IBPB] flushes all branch type predictions */ 462 + #define X86_FEATURE_SRSO_NO (20*32+29) /* CPU is not affected by SRSO */ 461 463 462 464 /* 463 465 * Extended auxiliary flags: Linux defined - for features scattered in various ··· 467 465 * 468 466 * Reuse free bits when adding new feature flags! 469 467 */ 470 - #define X86_FEATURE_AMD_LBR_PMC_FREEZE (21*32+ 0) /* AMD LBR and PMC Freeze */ 471 - #define X86_FEATURE_CLEAR_BHB_LOOP (21*32+ 1) /* "" Clear branch history at syscall entry using SW loop */ 472 - #define X86_FEATURE_BHI_CTRL (21*32+ 2) /* "" BHI_DIS_S HW control available */ 473 - #define X86_FEATURE_CLEAR_BHB_HW (21*32+ 3) /* "" BHI_DIS_S HW control enabled */ 474 - #define X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT (21*32+ 4) /* "" Clear branch history at vmexit using SW loop */ 468 + #define X86_FEATURE_AMD_LBR_PMC_FREEZE (21*32+ 0) /* "amd_lbr_pmc_freeze" AMD LBR and PMC Freeze */ 469 + #define X86_FEATURE_CLEAR_BHB_LOOP (21*32+ 1) /* Clear branch history at syscall entry using SW loop */ 470 + #define X86_FEATURE_BHI_CTRL (21*32+ 2) /* BHI_DIS_S HW control available */ 471 + #define X86_FEATURE_CLEAR_BHB_HW (21*32+ 3) /* BHI_DIS_S HW control enabled */ 472 + #define X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT (21*32+ 4) /* Clear branch history at vmexit using SW loop */ 473 + #define X86_FEATURE_FAST_CPPC (21*32 + 5) /* AMD Fast CPPC */ 475 474 476 475 /* 477 476 * BUG word(s) 478 477 */ 479 478 #define X86_BUG(x) (NCAPINTS*32 + (x)) 480 479 481 - #define X86_BUG_F00F X86_BUG(0) /* Intel F00F */ 482 - #define X86_BUG_FDIV X86_BUG(1) /* FPU FDIV */ 483 - #define X86_BUG_COMA X86_BUG(2) /* Cyrix 6x86 coma */ 480 + #define X86_BUG_F00F X86_BUG(0) /* "f00f" Intel F00F */ 481 + #define X86_BUG_FDIV X86_BUG(1) /* "fdiv" FPU FDIV */ 482 + #define X86_BUG_COMA X86_BUG(2) /* "coma" Cyrix 6x86 coma */ 484 483 #define X86_BUG_AMD_TLB_MMATCH X86_BUG(3) /* "tlb_mmatch" AMD Erratum 383 */ 485 484 #define X86_BUG_AMD_APIC_C1E X86_BUG(4) /* "apic_c1e" AMD Erratum 400 */ 486 - #define X86_BUG_11AP X86_BUG(5) /* Bad local APIC aka 11AP */ 487 - #define 
X86_BUG_FXSAVE_LEAK X86_BUG(6) /* FXSAVE leaks FOP/FIP/FOP */ 488 - #define X86_BUG_CLFLUSH_MONITOR X86_BUG(7) /* AAI65, CLFLUSH required before MONITOR */ 489 - #define X86_BUG_SYSRET_SS_ATTRS X86_BUG(8) /* SYSRET doesn't fix up SS attrs */ 485 + #define X86_BUG_11AP X86_BUG(5) /* "11ap" Bad local APIC aka 11AP */ 486 + #define X86_BUG_FXSAVE_LEAK X86_BUG(6) /* "fxsave_leak" FXSAVE leaks FOP/FIP/FOP */ 487 + #define X86_BUG_CLFLUSH_MONITOR X86_BUG(7) /* "clflush_monitor" AAI65, CLFLUSH required before MONITOR */ 488 + #define X86_BUG_SYSRET_SS_ATTRS X86_BUG(8) /* "sysret_ss_attrs" SYSRET doesn't fix up SS attrs */ 490 489 #ifdef CONFIG_X86_32 491 490 /* 492 491 * 64-bit kernels don't use X86_BUG_ESPFIX. Make the define conditional 493 492 * to avoid confusion. 494 493 */ 495 - #define X86_BUG_ESPFIX X86_BUG(9) /* "" IRET to 16-bit SS corrupts ESP/RSP high bits */ 494 + #define X86_BUG_ESPFIX X86_BUG(9) /* IRET to 16-bit SS corrupts ESP/RSP high bits */ 496 495 #endif 497 - #define X86_BUG_NULL_SEG X86_BUG(10) /* Nulling a selector preserves the base */ 498 - #define X86_BUG_SWAPGS_FENCE X86_BUG(11) /* SWAPGS without input dep on GS */ 499 - #define X86_BUG_MONITOR X86_BUG(12) /* IPI required to wake up remote CPU */ 500 - #define X86_BUG_AMD_E400 X86_BUG(13) /* CPU is among the affected by Erratum 400 */ 501 - #define X86_BUG_CPU_MELTDOWN X86_BUG(14) /* CPU is affected by meltdown attack and needs kernel page table isolation */ 502 - #define X86_BUG_SPECTRE_V1 X86_BUG(15) /* CPU is affected by Spectre variant 1 attack with conditional branches */ 503 - #define X86_BUG_SPECTRE_V2 X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */ 504 - #define X86_BUG_SPEC_STORE_BYPASS X86_BUG(17) /* CPU is affected by speculative store bypass attack */ 505 - #define X86_BUG_L1TF X86_BUG(18) /* CPU is affected by L1 Terminal Fault */ 506 - #define X86_BUG_MDS X86_BUG(19) /* CPU is affected by Microarchitectural data sampling */ 507 - #define X86_BUG_MSBDS_ONLY X86_BUG(20) /* CPU is only affected by the MSDBS variant of BUG_MDS */ 508 - #define X86_BUG_SWAPGS X86_BUG(21) /* CPU is affected by speculation through SWAPGS */ 509 - #define X86_BUG_TAA X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */ 510 - #define X86_BUG_ITLB_MULTIHIT X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */ 511 - #define X86_BUG_SRBDS X86_BUG(24) /* CPU may leak RNG bits if not mitigated */ 512 - #define X86_BUG_MMIO_STALE_DATA X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */ 513 - #define X86_BUG_MMIO_UNKNOWN X86_BUG(26) /* CPU is too old and its MMIO Stale Data status is unknown */ 514 - #define X86_BUG_RETBLEED X86_BUG(27) /* CPU is affected by RETBleed */ 515 - #define X86_BUG_EIBRS_PBRSB X86_BUG(28) /* EIBRS is vulnerable to Post Barrier RSB Predictions */ 516 - #define X86_BUG_SMT_RSB X86_BUG(29) /* CPU is vulnerable to Cross-Thread Return Address Predictions */ 517 - #define X86_BUG_GDS X86_BUG(30) /* CPU is affected by Gather Data Sampling */ 518 - #define X86_BUG_TDX_PW_MCE X86_BUG(31) /* CPU may incur #MC if non-TD software does partial write to TDX private memory */ 496 + #define X86_BUG_NULL_SEG X86_BUG(10) /* "null_seg" Nulling a selector preserves the base */ 497 + #define X86_BUG_SWAPGS_FENCE X86_BUG(11) /* "swapgs_fence" SWAPGS without input dep on GS */ 498 + #define X86_BUG_MONITOR X86_BUG(12) /* "monitor" IPI required to wake up remote CPU */ 499 + #define X86_BUG_AMD_E400 X86_BUG(13) /* "amd_e400" CPU is among the 
affected by Erratum 400 */ 500 + #define X86_BUG_CPU_MELTDOWN X86_BUG(14) /* "cpu_meltdown" CPU is affected by meltdown attack and needs kernel page table isolation */ 501 + #define X86_BUG_SPECTRE_V1 X86_BUG(15) /* "spectre_v1" CPU is affected by Spectre variant 1 attack with conditional branches */ 502 + #define X86_BUG_SPECTRE_V2 X86_BUG(16) /* "spectre_v2" CPU is affected by Spectre variant 2 attack with indirect branches */ 503 + #define X86_BUG_SPEC_STORE_BYPASS X86_BUG(17) /* "spec_store_bypass" CPU is affected by speculative store bypass attack */ 504 + #define X86_BUG_L1TF X86_BUG(18) /* "l1tf" CPU is affected by L1 Terminal Fault */ 505 + #define X86_BUG_MDS X86_BUG(19) /* "mds" CPU is affected by Microarchitectural data sampling */ 506 + #define X86_BUG_MSBDS_ONLY X86_BUG(20) /* "msbds_only" CPU is only affected by the MSDBS variant of BUG_MDS */ 507 + #define X86_BUG_SWAPGS X86_BUG(21) /* "swapgs" CPU is affected by speculation through SWAPGS */ 508 + #define X86_BUG_TAA X86_BUG(22) /* "taa" CPU is affected by TSX Async Abort(TAA) */ 509 + #define X86_BUG_ITLB_MULTIHIT X86_BUG(23) /* "itlb_multihit" CPU may incur MCE during certain page attribute changes */ 510 + #define X86_BUG_SRBDS X86_BUG(24) /* "srbds" CPU may leak RNG bits if not mitigated */ 511 + #define X86_BUG_MMIO_STALE_DATA X86_BUG(25) /* "mmio_stale_data" CPU is affected by Processor MMIO Stale Data vulnerabilities */ 512 + #define X86_BUG_MMIO_UNKNOWN X86_BUG(26) /* "mmio_unknown" CPU is too old and its MMIO Stale Data status is unknown */ 513 + #define X86_BUG_RETBLEED X86_BUG(27) /* "retbleed" CPU is affected by RETBleed */ 514 + #define X86_BUG_EIBRS_PBRSB X86_BUG(28) /* "eibrs_pbrsb" EIBRS is vulnerable to Post Barrier RSB Predictions */ 515 + #define X86_BUG_SMT_RSB X86_BUG(29) /* "smt_rsb" CPU is vulnerable to Cross-Thread Return Address Predictions */ 516 + #define X86_BUG_GDS X86_BUG(30) /* "gds" CPU is affected by Gather Data Sampling */ 517 + #define X86_BUG_TDX_PW_MCE X86_BUG(31) /* "tdx_pw_mce" CPU may incur #MC if non-TD software does partial write to TDX private memory */ 519 518 520 519 /* BUG word 2 */ 521 - #define X86_BUG_SRSO X86_BUG(1*32 + 0) /* AMD SRSO bug */ 522 - #define X86_BUG_DIV0 X86_BUG(1*32 + 1) /* AMD DIV0 speculation bug */ 523 - #define X86_BUG_RFDS X86_BUG(1*32 + 2) /* CPU is vulnerable to Register File Data Sampling */ 524 - #define X86_BUG_BHI X86_BUG(1*32 + 3) /* CPU is affected by Branch History Injection */ 520 + #define X86_BUG_SRSO X86_BUG(1*32 + 0) /* "srso" AMD SRSO bug */ 521 + #define X86_BUG_DIV0 X86_BUG(1*32 + 1) /* "div0" AMD DIV0 speculation bug */ 522 + #define X86_BUG_RFDS X86_BUG(1*32 + 2) /* "rfds" CPU is vulnerable to Register File Data Sampling */ 523 + #define X86_BUG_BHI X86_BUG(1*32 + 3) /* "bhi" CPU is affected by Branch History Injection */ 525 524 #endif /* _ASM_X86_CPUFEATURES_H */
+11
tools/arch/x86/include/asm/msr-index.h
··· 566 566 #define MSR_RELOAD_PMC0 0x000014c1 567 567 #define MSR_RELOAD_FIXED_CTR0 0x00001309 568 568 569 + /* V6 PMON MSR range */ 570 + #define MSR_IA32_PMC_V6_GP0_CTR 0x1900 571 + #define MSR_IA32_PMC_V6_GP0_CFG_A 0x1901 572 + #define MSR_IA32_PMC_V6_FX0_CTR 0x1980 573 + #define MSR_IA32_PMC_V6_STEP 4 574 + 569 575 /* KeyID partitioning between MKTME and TDX */ 570 576 #define MSR_IA32_MKTME_KEYID_PARTITIONING 0x00000087 571 577 ··· 665 659 666 660 #define MSR_AMD64_RMP_BASE 0xc0010132 667 661 #define MSR_AMD64_RMP_END 0xc0010133 662 + 663 + #define MSR_SVSM_CAA 0xc001f000 668 664 669 665 /* AMD Collaborative Processor Performance Control MSRs */ 670 666 #define MSR_AMD_CPPC_CAP1 0xc00102b0 ··· 789 781 #define MSR_K7_HWCR_IRPERF_EN BIT_ULL(MSR_K7_HWCR_IRPERF_EN_BIT) 790 782 #define MSR_K7_FID_VID_CTL 0xc0010041 791 783 #define MSR_K7_FID_VID_STATUS 0xc0010042 784 + #define MSR_K7_HWCR_CPB_DIS_BIT 25 785 + #define MSR_K7_HWCR_CPB_DIS BIT_ULL(MSR_K7_HWCR_CPB_DIS_BIT) 792 786 793 787 /* K6 MSRs */ 794 788 #define MSR_K6_WHCR 0xc0000082 ··· 1174 1164 #define MSR_IA32_QM_CTR 0xc8e 1175 1165 #define MSR_IA32_PQR_ASSOC 0xc8f 1176 1166 #define MSR_IA32_L3_CBM_BASE 0xc90 1167 + #define MSR_RMID_SNC_CONFIG 0xca0 1177 1168 #define MSR_IA32_L2_CBM_BASE 0xd10 1178 1169 #define MSR_IA32_MBA_THRTL_BASE 0xd50 1179 1170
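The V6 PMON defines added above describe a fixed-stride MSR layout: one MSR_IA32_PMC_V6_STEP-sized block per counter, starting at the GP0/FX0 bases. A minimal sketch of deriving the MSR number for the Nth general-purpose counter, assuming that stride interpretation; the helper names are hypothetical and not taken from the header:

/* Sketch only: MSR numbers for general-purpose counter n and its config
 * register, assuming the stride-of-MSR_IA32_PMC_V6_STEP layout implied by
 * the defines above. */
#define MSR_IA32_PMC_V6_GP0_CTR    0x1900
#define MSR_IA32_PMC_V6_GP0_CFG_A  0x1901
#define MSR_IA32_PMC_V6_STEP       4

static inline unsigned int pmc_v6_gp_ctr_msr(unsigned int n)
{
        return MSR_IA32_PMC_V6_GP0_CTR + n * MSR_IA32_PMC_V6_STEP;
}

static inline unsigned int pmc_v6_gp_cfg_msr(unsigned int n)
{
        return MSR_IA32_PMC_V6_GP0_CFG_A + n * MSR_IA32_PMC_V6_STEP;
}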
+49
tools/arch/x86/include/uapi/asm/kvm.h
··· 106 106 107 107 #define KVM_RUN_X86_SMM (1 << 0) 108 108 #define KVM_RUN_X86_BUS_LOCK (1 << 1) 109 + #define KVM_RUN_X86_GUEST_MODE (1 << 2) 109 110 110 111 /* for KVM_GET_REGS and KVM_SET_REGS */ 111 112 struct kvm_regs { ··· 698 697 /* Second time is the charm; improved versions of the above ioctls. */ 699 698 KVM_SEV_INIT2, 700 699 700 + /* SNP-specific commands */ 701 + KVM_SEV_SNP_LAUNCH_START = 100, 702 + KVM_SEV_SNP_LAUNCH_UPDATE, 703 + KVM_SEV_SNP_LAUNCH_FINISH, 704 + 701 705 KVM_SEV_NR_MAX, 702 706 }; 703 707 ··· 830 824 __u32 pad2; 831 825 }; 832 826 827 + struct kvm_sev_snp_launch_start { 828 + __u64 policy; 829 + __u8 gosvw[16]; 830 + __u16 flags; 831 + __u8 pad0[6]; 832 + __u64 pad1[4]; 833 + }; 834 + 835 + /* Kept in sync with firmware values for simplicity. */ 836 + #define KVM_SEV_SNP_PAGE_TYPE_NORMAL 0x1 837 + #define KVM_SEV_SNP_PAGE_TYPE_ZERO 0x3 838 + #define KVM_SEV_SNP_PAGE_TYPE_UNMEASURED 0x4 839 + #define KVM_SEV_SNP_PAGE_TYPE_SECRETS 0x5 840 + #define KVM_SEV_SNP_PAGE_TYPE_CPUID 0x6 841 + 842 + struct kvm_sev_snp_launch_update { 843 + __u64 gfn_start; 844 + __u64 uaddr; 845 + __u64 len; 846 + __u8 type; 847 + __u8 pad0; 848 + __u16 flags; 849 + __u32 pad1; 850 + __u64 pad2[4]; 851 + }; 852 + 853 + #define KVM_SEV_SNP_ID_BLOCK_SIZE 96 854 + #define KVM_SEV_SNP_ID_AUTH_SIZE 4096 855 + #define KVM_SEV_SNP_FINISH_DATA_SIZE 32 856 + 857 + struct kvm_sev_snp_launch_finish { 858 + __u64 id_block_uaddr; 859 + __u64 id_auth_uaddr; 860 + __u8 id_block_en; 861 + __u8 auth_key_en; 862 + __u8 vcek_disabled; 863 + __u8 host_data[KVM_SEV_SNP_FINISH_DATA_SIZE]; 864 + __u8 pad0[3]; 865 + __u16 flags; 866 + __u64 pad1[4]; 867 + }; 868 + 833 869 #define KVM_X2APIC_API_USE_32BIT_IDS (1ULL << 0) 834 870 #define KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK (1ULL << 1) 835 871 ··· 922 874 #define KVM_X86_SW_PROTECTED_VM 1 923 875 #define KVM_X86_SEV_VM 2 924 876 #define KVM_X86_SEV_ES_VM 3 877 + #define KVM_X86_SNP_VM 4 925 878 926 879 #endif /* _ASM_X86_KVM_H */
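The new KVM_SEV_SNP_LAUNCH_* commands take the argument structs defined above. A hedged sketch of filling a launch-start and one launch-update request on the userspace side; the policy value and guest frame number are arbitrary examples, and handing the structs to KVM (through the existing SEV command ioctl on the VM fd) is assumed rather than shown:

#include <stdint.h>
#include <string.h>
#include <asm/kvm.h>    /* struct kvm_sev_snp_launch_* (new headers assumed) */

/* Sketch only: prepare SNP launch arguments for a single payload page. */
static void fill_snp_launch(void *payload, uint64_t payload_len,
                            struct kvm_sev_snp_launch_start *start,
                            struct kvm_sev_snp_launch_update *update)
{
        memset(start, 0, sizeof(*start));
        start->policy = 0x30000;                /* hypothetical SNP guest policy */

        memset(update, 0, sizeof(*update));
        update->gfn_start = 0x100;              /* hypothetical guest frame number */
        update->uaddr = (uint64_t)(unsigned long)payload;
        update->len = payload_len;
        update->type = KVM_SEV_SNP_PAGE_TYPE_NORMAL;
}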
+1
tools/arch/x86/include/uapi/asm/svm.h
··· 115 115 #define SVM_VMGEXIT_AP_CREATE_ON_INIT 0 116 116 #define SVM_VMGEXIT_AP_CREATE 1 117 117 #define SVM_VMGEXIT_AP_DESTROY 2 118 + #define SVM_VMGEXIT_SNP_RUN_VMPL 0x80000018 118 119 #define SVM_VMGEXIT_HV_FEATURES 0x8000fffd 119 120 #define SVM_VMGEXIT_TERM_REQUEST 0x8000fffe 120 121 #define SVM_VMGEXIT_TERM_REASON(reason_set, reason_code) \
+73
tools/include/uapi/README
··· 1 + Why we want a copy of kernel headers in tools? 2 + ============================================== 3 + 4 + There used to be no copies, with tools/ code using kernel headers 5 + directly. From time to time tools/perf/ broke due to legitimate kernel 6 + hacking. At some point Linus complained about such direct usage. Then we 7 + adopted the current model. 8 + 9 + The way these headers are used in perf is not restricted to just 10 + including them to compile something. 11 + 12 + They are sometimes used in scripts that convert defines into string 13 + tables, etc, so some change may break one of these scripts, or new MSRs 14 + may use some different #define pattern, etc. 15 + 16 + E.g.: 17 + 18 + $ ls -1 tools/perf/trace/beauty/*.sh | head -5 19 + tools/perf/trace/beauty/arch_errno_names.sh 20 + tools/perf/trace/beauty/drm_ioctl.sh 21 + tools/perf/trace/beauty/fadvise.sh 22 + tools/perf/trace/beauty/fsconfig.sh 23 + tools/perf/trace/beauty/fsmount.sh 24 + $ 25 + $ tools/perf/trace/beauty/fadvise.sh 26 + static const char *fadvise_advices[] = { 27 + [0] = "NORMAL", 28 + [1] = "RANDOM", 29 + [2] = "SEQUENTIAL", 30 + [3] = "WILLNEED", 31 + [4] = "DONTNEED", 32 + [5] = "NOREUSE", 33 + }; 34 + $ 35 + 36 + The tools/perf/check-headers.sh script, part of the tools/ build 37 + process, points out changes in the original files. 38 + 39 + So it's important not to touch the copies in tools/ when changing the 40 + original kernel headers; the copies will be updated later, when 41 + check-headers.sh informs the perf tools hackers about the change. 42 + 43 + Another explanation from Ingo Molnar: 44 + It's better than all the alternatives we tried so far: 45 + 46 + - Symbolic links and direct #includes: this was the original approach but 47 + was pushed back on from the kernel side, when tooling modified the 48 + headers and broke them accidentally for kernel builds. 49 + 50 + - Duplicate self-defined ABI headers like glibc: double the maintenance 51 + burden, double the chance for mistakes, plus there's no tech-driven 52 + notification mechanism to look at new kernel side changes. 53 + 54 + What we are doing now is a third option: 55 + 56 + - A software-enforced copy-on-write mechanism of kernel headers to 57 + tooling, driven by non-fatal warnings on the tooling side build when 58 + kernel headers get modified: 59 + 60 + Warning: Kernel ABI header differences: 61 + diff -u tools/include/uapi/drm/i915_drm.h include/uapi/drm/i915_drm.h 62 + diff -u tools/include/uapi/linux/fs.h include/uapi/linux/fs.h 63 + diff -u tools/include/uapi/linux/kvm.h include/uapi/linux/kvm.h 64 + ... 65 + 66 + The tooling policy is to always pick up the kernel side headers as-is, 67 + and integrate them into the tooling build. The warnings above serve as a 68 + notification to tooling maintainers that there are changes on the kernel 69 + side. 70 + 71 + We've been using this for many years now, and it might seem hacky, but 72 + it works surprisingly well. 73 +
+1 -1
tools/include/uapi/asm-generic/unistd.h
··· 737 737 #define __NR_ppoll_time64 414 738 738 __SC_COMP(__NR_ppoll_time64, sys_ppoll, compat_sys_ppoll_time64) 739 739 #define __NR_io_pgetevents_time64 416 740 - __SYSCALL(__NR_io_pgetevents_time64, sys_io_pgetevents) 740 + __SC_COMP(__NR_io_pgetevents_time64, sys_io_pgetevents, compat_sys_io_pgetevents_time64) 741 741 #define __NR_recvmmsg_time64 417 742 742 __SC_COMP(__NR_recvmmsg_time64, sys_recvmmsg, compat_sys_recvmmsg_time64) 743 743 #define __NR_mq_timedsend_time64 418
+27
tools/include/uapi/drm/i915_drm.h
··· 2163 2163 * supports this per context flag. 2164 2164 */ 2165 2165 #define I915_CONTEXT_PARAM_LOW_LATENCY 0xe 2166 + 2167 + /* 2168 + * I915_CONTEXT_PARAM_CONTEXT_IMAGE: 2169 + * 2170 + * Allows userspace to provide own context images. 2171 + * 2172 + * Note that this is a debug API not available on production kernel builds. 2173 + */ 2174 + #define I915_CONTEXT_PARAM_CONTEXT_IMAGE 0xf 2166 2175 /* Must be kept compact -- no holes and well documented */ 2167 2176 2168 2177 /** @value: Context parameter value to be set or queried */ ··· 2572 2563 __u64 extensions; \ 2573 2564 struct i915_engine_class_instance engines[N__]; \ 2574 2565 } __attribute__((packed)) name__ 2566 + 2567 + struct i915_gem_context_param_context_image { 2568 + /** @engine: Engine class & instance to be configured. */ 2569 + struct i915_engine_class_instance engine; 2570 + 2571 + /** @flags: One of the supported flags or zero. */ 2572 + __u32 flags; 2573 + #define I915_CONTEXT_IMAGE_FLAG_ENGINE_INDEX (1u << 0) 2574 + 2575 + /** @size: Size of the image blob pointed to by @image. */ 2576 + __u32 size; 2577 + 2578 + /** @mbz: Must be zero. */ 2579 + __u32 mbz; 2580 + 2581 + /** @image: Userspace memory containing the context image. */ 2582 + __u64 image; 2583 + } __attribute__((packed)); 2575 2584 2576 2585 /** 2577 2586 * struct drm_i915_gem_context_create_ext_setparam - Context parameter
+2
tools/include/uapi/linux/in.h
··· 81 81 #define IPPROTO_ETHERNET IPPROTO_ETHERNET 82 82 IPPROTO_RAW = 255, /* Raw IP packets */ 83 83 #define IPPROTO_RAW IPPROTO_RAW 84 + IPPROTO_SMC = 256, /* Shared Memory Communications */ 85 + #define IPPROTO_SMC IPPROTO_SMC 84 86 IPPROTO_MPTCP = 262, /* Multipath TCP connection */ 85 87 #define IPPROTO_MPTCP IPPROTO_MPTCP 86 88 IPPROTO_MAX
+16 -1
tools/include/uapi/linux/kvm.h
··· 192 192 /* Flags that describe what fields in emulation_failure hold valid data. */ 193 193 #define KVM_INTERNAL_ERROR_EMULATION_FLAG_INSTRUCTION_BYTES (1ULL << 0) 194 194 195 + /* 196 + * struct kvm_run can be modified by userspace at any time, so KVM must be 197 + * careful to avoid TOCTOU bugs. In order to protect KVM, HINT_UNSAFE_IN_KVM() 198 + * renames fields in struct kvm_run from <symbol> to <symbol>__unsafe when 199 + * compiled into the kernel, ensuring that any use within KVM is obvious and 200 + * gets extra scrutiny. 201 + */ 202 + #ifdef __KERNEL__ 203 + #define HINT_UNSAFE_IN_KVM(_symbol) _symbol##__unsafe 204 + #else 205 + #define HINT_UNSAFE_IN_KVM(_symbol) _symbol 206 + #endif 207 + 195 208 /* for KVM_RUN, returned by mmap(vcpu_fd, offset=0) */ 196 209 struct kvm_run { 197 210 /* in */ 198 211 __u8 request_interrupt_window; 199 - __u8 immediate_exit; 212 + __u8 HINT_UNSAFE_IN_KVM(immediate_exit); 200 213 __u8 padding1[6]; 201 214 202 215 /* out */ ··· 931 918 #define KVM_CAP_GUEST_MEMFD 234 932 919 #define KVM_CAP_VM_TYPES 235 933 920 #define KVM_CAP_PRE_FAULT_MEMORY 236 921 + #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237 922 + #define KVM_CAP_X86_GUEST_MODE 238 934 923 935 924 struct kvm_irq_routing_irqchip { 936 925 __u32 irqchip;
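The HINT_UNSAFE_IN_KVM() comment above describes a rename that only takes effect for kernel builds. A small sketch of the effect, assuming nothing beyond the hunk itself: userspace keeps addressing the field by its plain name, while in-kernel code has to spell out the __unsafe suffix.

#include <linux/kvm.h>

/* Sketch: from userspace (no __KERNEL__), the field keeps its plain name,
 * so existing callers that request an immediate exit are unaffected. */
static void request_immediate_exit(struct kvm_run *run)
{
        run->immediate_exit = 1;
}

/* When the same header is built with __KERNEL__ defined, the member is only
 * reachable as immediate_exit__unsafe, so every in-kernel read of this
 * userspace-writable field stands out and gets TOCTOU scrutiny. */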
+4 -2
tools/include/uapi/linux/perf_event.h
··· 1349 1349 #define PERF_MEM_LVLNUM_L2 0x02 /* L2 */ 1350 1350 #define PERF_MEM_LVLNUM_L3 0x03 /* L3 */ 1351 1351 #define PERF_MEM_LVLNUM_L4 0x04 /* L4 */ 1352 - /* 5-0x7 available */ 1352 + #define PERF_MEM_LVLNUM_L2_MHB 0x05 /* L2 Miss Handling Buffer */ 1353 + #define PERF_MEM_LVLNUM_MSC 0x06 /* Memory-side Cache */ 1354 + /* 0x7 available */ 1353 1355 #define PERF_MEM_LVLNUM_UNC 0x08 /* Uncached */ 1354 1356 #define PERF_MEM_LVLNUM_CXL 0x09 /* CXL */ 1355 1357 #define PERF_MEM_LVLNUM_IO 0x0a /* I/O */ 1356 1358 #define PERF_MEM_LVLNUM_ANY_CACHE 0x0b /* Any cache */ 1357 - #define PERF_MEM_LVLNUM_LFB 0x0c /* LFB */ 1359 + #define PERF_MEM_LVLNUM_LFB 0x0c /* LFB / L1 Miss Handling Buffer */ 1358 1360 #define PERF_MEM_LVLNUM_RAM 0x0d /* RAM */ 1359 1361 #define PERF_MEM_LVLNUM_PMEM 0x0e /* PMEM */ 1360 1362 #define PERF_MEM_LVLNUM_NA 0x0f /* N/A */
+10 -2
tools/include/uapi/linux/stat.h
··· 126 126 __u64 stx_mnt_id; 127 127 __u32 stx_dio_mem_align; /* Memory buffer alignment for direct I/O */ 128 128 __u32 stx_dio_offset_align; /* File offset alignment for direct I/O */ 129 - __u64 stx_subvol; /* Subvolume identifier */ 130 129 /* 0xa0 */ 131 - __u64 __spare3[11]; /* Spare space for future expansion */ 130 + __u64 stx_subvol; /* Subvolume identifier */ 131 + __u32 stx_atomic_write_unit_min; /* Min atomic write unit in bytes */ 132 + __u32 stx_atomic_write_unit_max; /* Max atomic write unit in bytes */ 133 + /* 0xb0 */ 134 + __u32 stx_atomic_write_segments_max; /* Max atomic write segment count */ 135 + __u32 __spare1[1]; 136 + /* 0xb8 */ 137 + __u64 __spare3[9]; /* Spare space for future expansion */ 132 138 /* 0x100 */ 133 139 }; 134 140 ··· 163 157 #define STATX_DIOALIGN 0x00002000U /* Want/got direct I/O alignment info */ 164 158 #define STATX_MNT_ID_UNIQUE 0x00004000U /* Want/got extended stx_mount_id */ 165 159 #define STATX_SUBVOL 0x00008000U /* Want/got stx_subvol */ 160 + #define STATX_WRITE_ATOMIC 0x00010000U /* Want/got atomic_write_* fields */ 166 161 167 162 #define STATX__RESERVED 0x80000000U /* Reserved for future struct statx expansion */ 168 163 ··· 199 192 #define STATX_ATTR_MOUNT_ROOT 0x00002000 /* Root of a mount */ 200 193 #define STATX_ATTR_VERITY 0x00100000 /* [I] Verity protected file */ 201 194 #define STATX_ATTR_DAX 0x00200000 /* File is currently in DAX state */ 195 + #define STATX_ATTR_WRITE_ATOMIC 0x00400000 /* File supports atomic write operations */ 202 196 203 197 204 198 #endif /* _UAPI_LINUX_STAT_H */
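The STATX_WRITE_ATOMIC plumbing above is queried through statx(2). A minimal sketch, assuming a libc that provides the statx() wrapper and headers that already carry the new mask and fields; the file name is just an example:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>

/* Sketch: query the atomic-write geometry for a file, assuming the running
 * kernel and installed headers already know about STATX_WRITE_ATOMIC. */
int main(void)
{
        struct statx stx;

        if (statx(AT_FDCWD, "testfile", 0, STATX_WRITE_ATOMIC, &stx) != 0) {
                perror("statx");
                return 1;
        }

        if (stx.stx_mask & STATX_WRITE_ATOMIC)
                printf("atomic write unit: min %u max %u, max segments %u\n",
                       stx.stx_atomic_write_unit_min,
                       stx.stx_atomic_write_unit_max,
                       stx.stx_atomic_write_segments_max);
        else
                printf("atomic writes not reported for this file\n");

        return 0;
}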
+5 -1
tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
··· 230 230 178 nospu rt_sigsuspend sys_rt_sigsuspend compat_sys_rt_sigsuspend 231 231 179 32 pread64 sys_ppc_pread64 compat_sys_ppc_pread64 232 232 179 64 pread64 sys_pread64 233 + 179 spu pread64 sys_pread64 233 234 180 32 pwrite64 sys_ppc_pwrite64 compat_sys_ppc_pwrite64 234 235 180 64 pwrite64 sys_pwrite64 236 + 180 spu pwrite64 sys_pwrite64 235 237 181 common chown sys_chown 236 238 182 common getcwd sys_getcwd 237 239 183 common capget sys_capget ··· 248 246 190 common ugetrlimit sys_getrlimit compat_sys_getrlimit 249 247 191 32 readahead sys_ppc_readahead compat_sys_ppc_readahead 250 248 191 64 readahead sys_readahead 249 + 191 spu readahead sys_readahead 251 250 192 32 mmap2 sys_mmap2 compat_sys_mmap2 252 251 193 32 truncate64 sys_ppc_truncate64 compat_sys_ppc_truncate64 253 252 194 32 ftruncate64 sys_ppc_ftruncate64 compat_sys_ppc_ftruncate64 ··· 296 293 232 nospu set_tid_address sys_set_tid_address 297 294 233 32 fadvise64 sys_ppc32_fadvise64 compat_sys_ppc32_fadvise64 298 295 233 64 fadvise64 sys_fadvise64 296 + 233 spu fadvise64 sys_fadvise64 299 297 234 nospu exit_group sys_exit_group 300 298 235 nospu lookup_dcookie sys_ni_syscall 301 299 236 common epoll_create sys_epoll_create ··· 506 502 412 32 utimensat_time64 sys_utimensat sys_utimensat 507 503 413 32 pselect6_time64 sys_pselect6 compat_sys_pselect6_time64 508 504 414 32 ppoll_time64 sys_ppoll compat_sys_ppoll_time64 509 - 416 32 io_pgetevents_time64 sys_io_pgetevents sys_io_pgetevents 505 + 416 32 io_pgetevents_time64 sys_io_pgetevents compat_sys_io_pgetevents_time64 510 506 417 32 recvmmsg_time64 sys_recvmmsg compat_sys_recvmmsg_time64 511 507 418 32 mq_timedsend_time64 sys_mq_timedsend sys_mq_timedsend 512 508 419 32 mq_timedreceive_time64 sys_mq_timedreceive sys_mq_timedreceive
+1 -1
tools/perf/arch/s390/entry/syscalls/syscall.tbl
··· 418 418 412 32 utimensat_time64 - sys_utimensat 419 419 413 32 pselect6_time64 - compat_sys_pselect6_time64 420 420 414 32 ppoll_time64 - compat_sys_ppoll_time64 421 - 416 32 io_pgetevents_time64 - sys_io_pgetevents 421 + 416 32 io_pgetevents_time64 - compat_sys_io_pgetevents_time64 422 422 417 32 recvmmsg_time64 - compat_sys_recvmmsg_time64 423 423 418 32 mq_timedsend_time64 - sys_mq_timedsend 424 424 419 32 mq_timedreceive_time64 - sys_mq_timedreceive
+5 -3
tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
··· 1 + # SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note 1 2 # 2 3 # 64-bit system call numbers and entry vectors 3 4 # 4 5 # The format is: 5 - # <number> <abi> <name> <entry point> 6 + # <number> <abi> <name> <entry point> [<compat entry point> [noreturn]] 6 7 # 7 8 # The __x64_sys_*() stubs are created on-the-fly for sys_*() system calls 8 9 # ··· 69 68 57 common fork sys_fork 70 69 58 common vfork sys_vfork 71 70 59 64 execve sys_execve 72 - 60 common exit sys_exit 71 + 60 common exit sys_exit - noreturn 73 72 61 common wait4 sys_wait4 74 73 62 common kill sys_kill 75 74 63 common uname sys_newuname ··· 240 239 228 common clock_gettime sys_clock_gettime 241 240 229 common clock_getres sys_clock_getres 242 241 230 common clock_nanosleep sys_clock_nanosleep 243 - 231 common exit_group sys_exit_group 242 + 231 common exit_group sys_exit_group - noreturn 244 243 232 common epoll_wait sys_epoll_wait 245 244 233 common epoll_ctl sys_epoll_ctl 246 245 234 common tgkill sys_tgkill ··· 344 343 332 common statx sys_statx 345 344 333 common io_pgetevents sys_io_pgetevents 346 345 334 common rseq sys_rseq 346 + 335 common uretprobe sys_uretprobe 347 347 # don't use numbers 387 through 423, add new calls after the last 348 348 # 'common' entry 349 349 424 common pidfd_send_signal sys_pidfd_send_signal
+5 -4
tools/perf/builtin-daemon.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #include <internal/lib.h> 3 + #include <inttypes.h> 3 4 #include <subcmd/parse-options.h> 4 5 #include <api/fd/array.h> 5 6 #include <api/fs/fs.h> ··· 689 688 /* lock */ 690 689 csv_sep, daemon->base, "lock"); 691 690 692 - fprintf(out, "%c%lu", 691 + fprintf(out, "%c%" PRIu64, 693 692 /* session up time */ 694 693 csv_sep, (curr - daemon->start) / 60); 695 694 ··· 701 700 daemon->base, SESSION_OUTPUT); 702 701 fprintf(out, " lock: %s/lock\n", 703 702 daemon->base); 704 - fprintf(out, " up: %lu minutes\n", 703 + fprintf(out, " up: %" PRIu64 " minutes\n", 705 704 (curr - daemon->start) / 60); 706 705 } 707 706 } ··· 728 727 /* session ack */ 729 728 csv_sep, session->base, SESSION_ACK); 730 729 731 - fprintf(out, "%c%lu", 730 + fprintf(out, "%c%" PRIu64, 732 731 /* session up time */ 733 732 csv_sep, (curr - session->start) / 60); 734 733 ··· 746 745 session->base, SESSION_CONTROL); 747 746 fprintf(out, " ack: %s/%s\n", 748 747 session->base, SESSION_ACK); 749 - fprintf(out, " up: %lu minutes\n", 748 + fprintf(out, " up: %" PRIu64 " minutes\n", 750 749 (curr - session->start) / 60); 751 750 } 752 751 }
+4 -1
tools/perf/trace/beauty/include/linux/socket.h
··· 76 76 __kernel_size_t msg_controllen; /* ancillary data buffer length */ 77 77 struct kiocb *msg_iocb; /* ptr to iocb for async requests */ 78 78 struct ubuf_info *msg_ubuf; 79 - int (*sg_from_iter)(struct sock *sk, struct sk_buff *skb, 79 + int (*sg_from_iter)(struct sk_buff *skb, 80 80 struct iov_iter *from, size_t length); 81 81 }; 82 82 ··· 442 442 extern int __sys_socket(int family, int type, int protocol); 443 443 extern struct file *__sys_socket_file(int family, int type, int protocol); 444 444 extern int __sys_bind(int fd, struct sockaddr __user *umyaddr, int addrlen); 445 + extern int __sys_bind_socket(struct socket *sock, struct sockaddr_storage *address, 446 + int addrlen); 445 447 extern int __sys_connect_file(struct file *file, struct sockaddr_storage *addr, 446 448 int addrlen, int file_flags); 447 449 extern int __sys_connect(int fd, struct sockaddr __user *uservaddr, 448 450 int addrlen); 449 451 extern int __sys_listen(int fd, int backlog); 452 + extern int __sys_listen_socket(struct socket *sock, int backlog); 450 453 extern int __sys_getsockname(int fd, struct sockaddr __user *usockaddr, 451 454 int __user *usockaddr_len); 452 455 extern int __sys_getpeername(int fd, struct sockaddr __user *usockaddr,
+161 -2
tools/perf/trace/beauty/include/uapi/linux/fs.h
··· 329 329 /* per-IO negation of O_APPEND */ 330 330 #define RWF_NOAPPEND ((__force __kernel_rwf_t)0x00000020) 331 331 332 + /* Atomic Write */ 333 + #define RWF_ATOMIC ((__force __kernel_rwf_t)0x00000040) 334 + 332 335 /* mask of flags supported by the kernel */ 333 336 #define RWF_SUPPORTED (RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\ 334 - RWF_APPEND | RWF_NOAPPEND) 337 + RWF_APPEND | RWF_NOAPPEND | RWF_ATOMIC) 338 + 339 + #define PROCFS_IOCTL_MAGIC 'f' 335 340 336 341 /* Pagemap ioctl */ 337 - #define PAGEMAP_SCAN _IOWR('f', 16, struct pm_scan_arg) 342 + #define PAGEMAP_SCAN _IOWR(PROCFS_IOCTL_MAGIC, 16, struct pm_scan_arg) 338 343 339 344 /* Bitmasks provided in pm_scan_args masks and reported in page_region.categories. */ 340 345 #define PAGE_IS_WPALLOWED (1 << 0) ··· 396 391 __u64 category_mask; 397 392 __u64 category_anyof_mask; 398 393 __u64 return_mask; 394 + }; 395 + 396 + /* /proc/<pid>/maps ioctl */ 397 + #define PROCMAP_QUERY _IOWR(PROCFS_IOCTL_MAGIC, 17, struct procmap_query) 398 + 399 + enum procmap_query_flags { 400 + /* 401 + * VMA permission flags. 402 + * 403 + * Can be used as part of procmap_query.query_flags field to look up 404 + * only VMAs satisfying specified subset of permissions. E.g., specifying 405 + * PROCMAP_QUERY_VMA_READABLE only will return both readable and read/write VMAs, 406 + * while having PROCMAP_QUERY_VMA_READABLE | PROCMAP_QUERY_VMA_WRITABLE will only 407 + * return read/write VMAs, though both executable/non-executable and 408 + * private/shared will be ignored. 409 + * 410 + * PROCMAP_QUERY_VMA_* flags are also returned in procmap_query.vma_flags 411 + * field to specify actual VMA permissions. 412 + */ 413 + PROCMAP_QUERY_VMA_READABLE = 0x01, 414 + PROCMAP_QUERY_VMA_WRITABLE = 0x02, 415 + PROCMAP_QUERY_VMA_EXECUTABLE = 0x04, 416 + PROCMAP_QUERY_VMA_SHARED = 0x08, 417 + /* 418 + * Query modifier flags. 419 + * 420 + * By default VMA that covers provided address is returned, or -ENOENT 421 + * is returned. With PROCMAP_QUERY_COVERING_OR_NEXT_VMA flag set, closest 422 + * VMA with vma_start > addr will be returned if no covering VMA is 423 + * found. 424 + * 425 + * PROCMAP_QUERY_FILE_BACKED_VMA instructs query to consider only VMAs that 426 + * have file backing. Can be combined with PROCMAP_QUERY_COVERING_OR_NEXT_VMA 427 + * to iterate all VMAs with file backing. 428 + */ 429 + PROCMAP_QUERY_COVERING_OR_NEXT_VMA = 0x10, 430 + PROCMAP_QUERY_FILE_BACKED_VMA = 0x20, 431 + }; 432 + 433 + /* 434 + * Input/output argument structured passed into ioctl() call. It can be used 435 + * to query a set of VMAs (Virtual Memory Areas) of a process. 436 + * 437 + * Each field can be one of three kinds, marked in a short comment to the 438 + * right of the field: 439 + * - "in", input argument, user has to provide this value, kernel doesn't modify it; 440 + * - "out", output argument, kernel sets this field with VMA data; 441 + * - "in/out", input and output argument; user provides initial value (used 442 + * to specify maximum allowable buffer size), and kernel sets it to actual 443 + * amount of data written (or zero, if there is no data). 444 + * 445 + * If matching VMA is found (according to criterias specified by 446 + * query_addr/query_flags, all the out fields are filled out, and ioctl() 447 + * returns 0. If there is no matching VMA, -ENOENT will be returned. 448 + * In case of any other error, negative error code other than -ENOENT is 449 + * returned. 
450 + * 451 + * Most of the data is similar to the one returned as text in /proc/<pid>/maps 452 + * file, but procmap_query provides more querying flexibility. There are no 453 + * consistency guarantees between subsequent ioctl() calls, but data returned 454 + * for matched VMA is self-consistent. 455 + */ 456 + struct procmap_query { 457 + /* Query struct size, for backwards/forward compatibility */ 458 + __u64 size; 459 + /* 460 + * Query flags, a combination of enum procmap_query_flags values. 461 + * Defines query filtering and behavior, see enum procmap_query_flags. 462 + * 463 + * Input argument, provided by user. Kernel doesn't modify it. 464 + */ 465 + __u64 query_flags; /* in */ 466 + /* 467 + * Query address. By default, VMA that covers this address will 468 + * be looked up. PROCMAP_QUERY_* flags above modify this default 469 + * behavior further. 470 + * 471 + * Input argument, provided by user. Kernel doesn't modify it. 472 + */ 473 + __u64 query_addr; /* in */ 474 + /* VMA starting (inclusive) and ending (exclusive) address, if VMA is found. */ 475 + __u64 vma_start; /* out */ 476 + __u64 vma_end; /* out */ 477 + /* VMA permissions flags. A combination of PROCMAP_QUERY_VMA_* flags. */ 478 + __u64 vma_flags; /* out */ 479 + /* VMA backing page size granularity. */ 480 + __u64 vma_page_size; /* out */ 481 + /* 482 + * VMA file offset. If VMA has file backing, this specifies offset 483 + * within the file that VMA's start address corresponds to. 484 + * Is set to zero if VMA has no backing file. 485 + */ 486 + __u64 vma_offset; /* out */ 487 + /* Backing file's inode number, or zero, if VMA has no backing file. */ 488 + __u64 inode; /* out */ 489 + /* Backing file's device major/minor number, or zero, if VMA has no backing file. */ 490 + __u32 dev_major; /* out */ 491 + __u32 dev_minor; /* out */ 492 + /* 493 + * If set to non-zero value, signals the request to return VMA name 494 + * (i.e., VMA's backing file's absolute path, with " (deleted)" suffix 495 + * appended, if file was unlinked from FS) for matched VMA. VMA name 496 + * can also be some special name (e.g., "[heap]", "[stack]") or could 497 + * be even user-supplied with prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME). 498 + * 499 + * Kernel will set this field to zero, if VMA has no associated name. 500 + * Otherwise kernel will return actual amount of bytes filled in 501 + * user-supplied buffer (see vma_name_addr field below), including the 502 + * terminating zero. 503 + * 504 + * If VMA name is longer that user-supplied maximum buffer size, 505 + * -E2BIG error is returned. 506 + * 507 + * If this field is set to non-zero value, vma_name_addr should point 508 + * to valid user space memory buffer of at least vma_name_size bytes. 509 + * If set to zero, vma_name_addr should be set to zero as well 510 + */ 511 + __u32 vma_name_size; /* in/out */ 512 + /* 513 + * If set to non-zero value, signals the request to extract and return 514 + * VMA's backing file's build ID, if the backing file is an ELF file 515 + * and it contains embedded build ID. 516 + * 517 + * Kernel will set this field to zero, if VMA has no backing file, 518 + * backing file is not an ELF file, or ELF file has no build ID 519 + * embedded. 520 + * 521 + * Build ID is a binary value (not a string). Kernel will set 522 + * build_id_size field to exact number of bytes used for build ID. 
523 + * If build ID is requested and present, but needs more bytes than 524 + * user-supplied maximum buffer size (see build_id_addr field below), 525 + * -E2BIG error will be returned. 526 + * 527 + * If this field is set to non-zero value, build_id_addr should point 528 + * to valid user space memory buffer of at least build_id_size bytes. 529 + * If set to zero, build_id_addr should be set to zero as well 530 + */ 531 + __u32 build_id_size; /* in/out */ 532 + /* 533 + * User-supplied address of a buffer of at least vma_name_size bytes 534 + * for kernel to fill with matched VMA's name (see vma_name_size field 535 + * description above for details). 536 + * 537 + * Should be set to zero if VMA name should not be returned. 538 + */ 539 + __u64 vma_name_addr; /* in */ 540 + /* 541 + * User-supplied address of a buffer of at least build_id_size bytes 542 + * for kernel to fill with matched VMA's ELF build ID, if available 543 + * (see build_id_size field description above for details). 544 + * 545 + * Should be set to zero if build ID should not be returned. 546 + */ 547 + __u64 build_id_addr; /* in */ 399 548 }; 400 549 401 550 #endif /* _UAPI_LINUX_FS_H */
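The PROCMAP_QUERY ioctl documented above can be driven as in the following sketch, which assumes only what the header states (query by address on an open /proc/<pid>/maps descriptor); the queried address is an arbitrary example supplied by the caller:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* PROCMAP_QUERY, struct procmap_query (new headers assumed) */

/* Sketch: find the VMA covering (or following) an address in the current
 * process and print its range, permission flags and backing file name. */
static int query_vma(unsigned long long addr)
{
        char name[256];
        struct procmap_query q;
        int fd, ret;

        memset(&q, 0, sizeof(q));
        q.size = sizeof(q);
        q.query_flags = PROCMAP_QUERY_COVERING_OR_NEXT_VMA;
        q.query_addr = addr;
        q.vma_name_size = sizeof(name);
        q.vma_name_addr = (unsigned long long)(unsigned long)name;

        fd = open("/proc/self/maps", O_RDONLY);
        if (fd < 0)
                return -1;

        ret = ioctl(fd, PROCMAP_QUERY, &q);
        if (ret == 0)
                printf("vma %llx-%llx flags %llx name %s\n",
                       (unsigned long long)q.vma_start,
                       (unsigned long long)q.vma_end,
                       (unsigned long long)q.vma_flags,
                       q.vma_name_size ? name : "<unnamed>");

        close(fd);
        return ret;
}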
+8 -2
tools/perf/trace/beauty/include/uapi/linux/mount.h
··· 154 154 */ 155 155 struct statmount { 156 156 __u32 size; /* Total size, including strings */ 157 - __u32 __spare1; 157 + __u32 mnt_opts; /* [str] Mount options of the mount */ 158 158 __u64 mask; /* What results were written */ 159 159 __u32 sb_dev_major; /* Device ID */ 160 160 __u32 sb_dev_minor; ··· 172 172 __u64 propagate_from; /* Propagation from in current namespace */ 173 173 __u32 mnt_root; /* [str] Root of mount relative to root of fs */ 174 174 __u32 mnt_point; /* [str] Mountpoint relative to current root */ 175 - __u64 __spare2[50]; 175 + __u64 mnt_ns_id; /* ID of the mount namespace */ 176 + __u64 __spare2[49]; 176 177 char str[]; /* Variable size part containing strings */ 177 178 }; 178 179 ··· 189 188 __u32 spare; 190 189 __u64 mnt_id; 191 190 __u64 param; 191 + __u64 mnt_ns_id; 192 192 }; 193 193 194 194 /* List of all mnt_id_req versions. */ 195 195 #define MNT_ID_REQ_SIZE_VER0 24 /* sizeof first published struct */ 196 + #define MNT_ID_REQ_SIZE_VER1 32 /* sizeof second published struct */ 196 197 197 198 /* 198 199 * @mask bits for statmount(2) ··· 205 202 #define STATMOUNT_MNT_ROOT 0x00000008U /* Want/got mnt_root */ 206 203 #define STATMOUNT_MNT_POINT 0x00000010U /* Want/got mnt_point */ 207 204 #define STATMOUNT_FS_TYPE 0x00000020U /* Want/got fs_type */ 205 + #define STATMOUNT_MNT_NS_ID 0x00000040U /* Want/got mnt_ns_id */ 206 + #define STATMOUNT_MNT_OPTS 0x00000080U /* Want/got mnt_opts */ 208 207 209 208 /* 210 209 * Special @mnt_id values that can be passed to listmount 211 210 */ 212 211 #define LSMT_ROOT 0xffffffffffffffff /* root mount */ 212 + #define LISTMOUNT_REVERSE (1 << 0) /* List later mounts first */ 213 213 214 214 #endif /* _UAPI_LINUX_MOUNT_H */
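The statmount() extensions above (mnt_opts, mnt_ns_id, and the larger mnt_id_req) could be exercised roughly as follows. This is a sketch under a couple of assumptions: statmount() is reached via a raw syscall because a libc wrapper may not be available, __NR_statmount comes from the installed headers, and the mount ID is whatever the caller previously obtained (e.g. from statx() or listmount()):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/mount.h>        /* struct mnt_id_req, struct statmount (new headers assumed) */

/* Sketch: request the mount options and mount namespace ID for one mount. */
static int show_mount_info(__u64 unique_mnt_id)
{
        union {
                struct statmount sm;
                char space[4096];       /* room for the variable-length strings */
        } resp;
        struct mnt_id_req req;

        memset(&req, 0, sizeof(req));
        req.size = MNT_ID_REQ_SIZE_VER1;        /* the new, larger request layout */
        req.mnt_id = unique_mnt_id;
        req.param = STATMOUNT_MNT_OPTS | STATMOUNT_MNT_NS_ID;  /* mask of wanted fields */

        if (syscall(__NR_statmount, &req, &resp.sm, sizeof(resp), 0) < 0)
                return -1;

        if (resp.sm.mask & STATMOUNT_MNT_OPTS)
                printf("options: %s\n", resp.sm.str + resp.sm.mnt_opts);
        if (resp.sm.mask & STATMOUNT_MNT_NS_ID)
                printf("mount ns id: %llu\n", (unsigned long long)resp.sm.mnt_ns_id);
        return 0;
}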
+10 -2
tools/perf/trace/beauty/include/uapi/linux/stat.h
··· 126 126 __u64 stx_mnt_id; 127 127 __u32 stx_dio_mem_align; /* Memory buffer alignment for direct I/O */ 128 128 __u32 stx_dio_offset_align; /* File offset alignment for direct I/O */ 129 - __u64 stx_subvol; /* Subvolume identifier */ 130 129 /* 0xa0 */ 131 - __u64 __spare3[11]; /* Spare space for future expansion */ 130 + __u64 stx_subvol; /* Subvolume identifier */ 131 + __u32 stx_atomic_write_unit_min; /* Min atomic write unit in bytes */ 132 + __u32 stx_atomic_write_unit_max; /* Max atomic write unit in bytes */ 133 + /* 0xb0 */ 134 + __u32 stx_atomic_write_segments_max; /* Max atomic write segment count */ 135 + __u32 __spare1[1]; 136 + /* 0xb8 */ 137 + __u64 __spare3[9]; /* Spare space for future expansion */ 132 138 /* 0x100 */ 133 139 }; 134 140 ··· 163 157 #define STATX_DIOALIGN 0x00002000U /* Want/got direct I/O alignment info */ 164 158 #define STATX_MNT_ID_UNIQUE 0x00004000U /* Want/got extended stx_mount_id */ 165 159 #define STATX_SUBVOL 0x00008000U /* Want/got stx_subvol */ 160 + #define STATX_WRITE_ATOMIC 0x00010000U /* Want/got atomic_write_* fields */ 166 161 167 162 #define STATX__RESERVED 0x80000000U /* Reserved for future struct statx expansion */ 168 163 ··· 199 192 #define STATX_ATTR_MOUNT_ROOT 0x00002000 /* Root of a mount */ 200 193 #define STATX_ATTR_VERITY 0x00100000 /* [I] Verity protected file */ 201 194 #define STATX_ATTR_DAX 0x00200000 /* File is currently in DAX state */ 195 + #define STATX_ATTR_WRITE_ATOMIC 0x00400000 /* File supports atomic write operations */ 202 196 203 197 204 198 #endif /* _UAPI_LINUX_STAT_H */
+5 -4
tools/perf/trace/beauty/include/uapi/sound/asound.h
··· 142 142 * * 143 143 *****************************************************************************/ 144 144 145 - #define SNDRV_PCM_VERSION SNDRV_PROTOCOL_VERSION(2, 0, 17) 145 + #define SNDRV_PCM_VERSION SNDRV_PROTOCOL_VERSION(2, 0, 18) 146 146 147 147 typedef unsigned long snd_pcm_uframes_t; 148 148 typedef signed long snd_pcm_sframes_t; ··· 334 334 unsigned char id[16]; 335 335 unsigned short id16[8]; 336 336 unsigned int id32[4]; 337 - }; 337 + } __attribute__((deprecated)); 338 338 339 339 struct snd_pcm_info { 340 340 unsigned int device; /* RO/WR (control): device number */ ··· 348 348 int dev_subclass; /* SNDRV_PCM_SUBCLASS_* */ 349 349 unsigned int subdevices_count; 350 350 unsigned int subdevices_avail; 351 - union snd_pcm_sync_id sync; /* hardware synchronization ID */ 351 + unsigned char pad1[16]; /* was: hardware synchronization ID */ 352 352 unsigned char reserved[64]; /* reserved for future... */ 353 353 }; 354 354 ··· 420 420 unsigned int rate_num; /* R: rate numerator */ 421 421 unsigned int rate_den; /* R: rate denominator */ 422 422 snd_pcm_uframes_t fifo_size; /* R: chip FIFO size in frames */ 423 - unsigned char reserved[64]; /* reserved for future */ 423 + unsigned char sync[16]; /* R: synchronization ID (perfect sync - one clock source) */ 424 + unsigned char reserved[48]; /* reserved for future */ 424 425 }; 425 426 426 427 enum {
+54
tools/testing/selftests/bpf/progs/iters.c
··· 1432 1432 return sum; 1433 1433 } 1434 1434 1435 + __u32 upper, select_n, result; 1436 + __u64 global; 1437 + 1438 + static __noinline bool nest_2(char *str) 1439 + { 1440 + /* some insns (including branch insns) to ensure stacksafe() is triggered 1441 + * in nest_2(). This way, stacksafe() can compare frame associated with nest_1(). 1442 + */ 1443 + if (str[0] == 't') 1444 + return true; 1445 + if (str[1] == 'e') 1446 + return true; 1447 + if (str[2] == 's') 1448 + return true; 1449 + if (str[3] == 't') 1450 + return true; 1451 + return false; 1452 + } 1453 + 1454 + static __noinline bool nest_1(int n) 1455 + { 1456 + /* case 0: allocate stack, case 1: no allocate stack */ 1457 + switch (n) { 1458 + case 0: { 1459 + char comm[16]; 1460 + 1461 + if (bpf_get_current_comm(comm, 16)) 1462 + return false; 1463 + return nest_2(comm); 1464 + } 1465 + case 1: 1466 + return nest_2((char *)&global); 1467 + default: 1468 + return false; 1469 + } 1470 + } 1471 + 1472 + SEC("raw_tp") 1473 + __success 1474 + int iter_subprog_check_stacksafe(const void *ctx) 1475 + { 1476 + long i; 1477 + 1478 + bpf_for(i, 0, upper) { 1479 + if (!nest_1(select_n)) { 1480 + result = 1; 1481 + return 0; 1482 + } 1483 + } 1484 + 1485 + result = 2; 1486 + return 0; 1487 + } 1488 + 1435 1489 char _license[] SEC("license") = "GPL";
+35
tools/testing/selftests/core/close_range_test.c
··· 589 589 EXPECT_EQ(close(fd3), 0); 590 590 } 591 591 592 + TEST(close_range_bitmap_corruption) 593 + { 594 + pid_t pid; 595 + int status; 596 + struct __clone_args args = { 597 + .flags = CLONE_FILES, 598 + .exit_signal = SIGCHLD, 599 + }; 600 + 601 + /* get the first 128 descriptors open */ 602 + for (int i = 2; i < 128; i++) 603 + EXPECT_GE(dup2(0, i), 0); 604 + 605 + /* get descriptor table shared */ 606 + pid = sys_clone3(&args, sizeof(args)); 607 + ASSERT_GE(pid, 0); 608 + 609 + if (pid == 0) { 610 + /* unshare and truncate descriptor table down to 64 */ 611 + if (sys_close_range(64, ~0U, CLOSE_RANGE_UNSHARE)) 612 + exit(EXIT_FAILURE); 613 + 614 + ASSERT_EQ(fcntl(64, F_GETFD), -1); 615 + /* ... and verify that the range 64..127 is not 616 + stuck "fully used" according to secondary bitmap */ 617 + EXPECT_EQ(dup(0), 64) 618 + exit(EXIT_FAILURE); 619 + exit(EXIT_SUCCESS); 620 + } 621 + 622 + EXPECT_EQ(waitpid(pid, &status, 0), pid); 623 + EXPECT_EQ(true, WIFEXITED(status)); 624 + EXPECT_EQ(0, WEXITSTATUS(status)); 625 + } 626 + 592 627 TEST_HARNESS_MAIN
+2 -2
tools/testing/selftests/kvm/aarch64/get-reg-list.c
··· 32 32 { 33 33 ARM64_SYS_REG(3, 0, 10, 2, 2), /* PIRE0_EL1 */ 34 34 ARM64_SYS_REG(3, 0, 0, 7, 3), /* ID_AA64MMFR3_EL1 */ 35 - 4, 35 + 8, 36 36 1 37 37 }, 38 38 { 39 39 ARM64_SYS_REG(3, 0, 10, 2, 3), /* PIR_EL1 */ 40 40 ARM64_SYS_REG(3, 0, 0, 7, 3), /* ID_AA64MMFR3_EL1 */ 41 - 4, 41 + 8, 42 42 1 43 43 } 44 44 };
+28
tools/testing/selftests/kvm/x86_64/xapic_state_test.c
··· 184 184 kvm_vm_free(vm); 185 185 } 186 186 187 + static void test_x2apic_id(void) 188 + { 189 + struct kvm_lapic_state lapic = {}; 190 + struct kvm_vcpu *vcpu; 191 + struct kvm_vm *vm; 192 + int i; 193 + 194 + vm = vm_create_with_one_vcpu(&vcpu, NULL); 195 + vcpu_set_msr(vcpu, MSR_IA32_APICBASE, MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE); 196 + 197 + /* 198 + * Try stuffing a modified x2APIC ID, KVM should ignore the value and 199 + * always return the vCPU's default/readonly x2APIC ID. 200 + */ 201 + for (i = 0; i <= 0xff; i++) { 202 + *(u32 *)(lapic.regs + APIC_ID) = i << 24; 203 + *(u32 *)(lapic.regs + APIC_SPIV) = APIC_SPIV_APIC_ENABLED; 204 + vcpu_ioctl(vcpu, KVM_SET_LAPIC, &lapic); 205 + 206 + vcpu_ioctl(vcpu, KVM_GET_LAPIC, &lapic); 207 + TEST_ASSERT(*((u32 *)&lapic.regs[APIC_ID]) == vcpu->id << 24, 208 + "x2APIC ID should be fully readonly"); 209 + } 210 + 211 + kvm_vm_free(vm); 212 + } 213 + 187 214 int main(int argc, char *argv[]) 188 215 { 189 216 struct xapic_vcpu x = { ··· 238 211 kvm_vm_free(vm); 239 212 240 213 test_apic_id(); 214 + test_x2apic_id(); 241 215 }
+2
tools/testing/selftests/mm/Makefile
··· 53 53 TEST_GEN_FILES += map_fixed_noreplace 54 54 TEST_GEN_FILES += map_hugetlb 55 55 TEST_GEN_FILES += map_populate 56 + ifneq (,$(filter $(ARCH),arm64 riscv riscv64 x86 x86_64)) 56 57 TEST_GEN_FILES += memfd_secret 58 + endif 57 59 TEST_GEN_FILES += migration 58 60 TEST_GEN_FILES += mkdirty 59 61 TEST_GEN_FILES += mlock-random-test
+3 -2
tools/testing/selftests/mm/compaction_test.c
··· 89 89 int fd, ret = -1; 90 90 int compaction_index = 0; 91 91 char nr_hugepages[20] = {0}; 92 - char init_nr_hugepages[20] = {0}; 92 + char init_nr_hugepages[24] = {0}; 93 93 94 - sprintf(init_nr_hugepages, "%lu", initial_nr_hugepages); 94 + snprintf(init_nr_hugepages, sizeof(init_nr_hugepages), 95 + "%lu", initial_nr_hugepages); 95 96 96 97 /* We want to test with 80% of available memory. Else, OOM killer comes 97 98 in to play */
+3
tools/testing/selftests/mm/run_vmtests.sh
··· 374 374 # MADV_POPULATE_READ and MADV_POPULATE_WRITE tests 375 375 CATEGORY="madv_populate" run_test ./madv_populate 376 376 377 + if [ -x ./memfd_secret ] 378 + then 377 379 (echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope 2>&1) | tap_prefix 378 380 CATEGORY="memfd_secret" run_test ./memfd_secret 381 + fi 379 382 380 383 # KSM KSM_MERGE_TIME_HUGE_PAGES test with size of 100 381 384 CATEGORY="ksm" run_test ./ksm_tests -H -s 100
+1 -1
tools/testing/selftests/net/af_unix/msg_oob.c
··· 209 209 210 210 static void __recvpair(struct __test_metadata *_metadata, 211 211 FIXTURE_DATA(msg_oob) *self, 212 - const void *expected_buf, int expected_len, 212 + const char *expected_buf, int expected_len, 213 213 int buf_len, int flags) 214 214 { 215 215 int i, ret[2], recv_errno[2], expected_errno = 0;
+1
tools/testing/selftests/net/lib.sh
··· 146 146 147 147 for ns in "$@"; do 148 148 [ -z "${ns}" ] && continue 149 + ip netns pids "${ns}" 2> /dev/null | xargs -r kill || true 149 150 ip netns delete "${ns}" &> /dev/null || true 150 151 if ! busywait $BUSYWAIT_TIMEOUT ip netns list \| grep -vq "^$ns$" &> /dev/null; then 151 152 echo "Warn: Failed to remove namespace $ns"
+1
tools/testing/selftests/net/netfilter/Makefile
··· 7 7 MNL_LDLIBS := $(shell $(HOSTPKG_CONFIG) --libs libmnl 2>/dev/null || echo -lmnl) 8 8 9 9 TEST_PROGS := br_netfilter.sh bridge_brouter.sh 10 + TEST_PROGS += br_netfilter_queue.sh 10 11 TEST_PROGS += conntrack_icmp_related.sh 11 12 TEST_PROGS += conntrack_ipip_mtu.sh 12 13 TEST_PROGS += conntrack_tcp_unreplied.sh
+78
tools/testing/selftests/net/netfilter/br_netfilter_queue.sh
··· 1 + #!/bin/bash 2 + 3 + source lib.sh 4 + 5 + checktool "nft --version" "run test without nft tool" 6 + 7 + cleanup() { 8 + cleanup_all_ns 9 + } 10 + 11 + setup_ns c1 c2 c3 sender 12 + 13 + trap cleanup EXIT 14 + 15 + nf_queue_wait() 16 + { 17 + grep -q "^ *$1 " "/proc/self/net/netfilter/nfnetlink_queue" 18 + } 19 + 20 + port_add() { 21 + ns="$1" 22 + dev="$2" 23 + a="$3" 24 + 25 + ip link add name "$dev" type veth peer name "$dev" netns "$ns" 26 + 27 + ip -net "$ns" addr add 192.168.1."$a"/24 dev "$dev" 28 + ip -net "$ns" link set "$dev" up 29 + 30 + ip link set "$dev" master br0 31 + ip link set "$dev" up 32 + } 33 + 34 + [ "${1}" != "run" ] && { unshare -n "${0}" run; exit $?; } 35 + 36 + ip link add br0 type bridge 37 + ip addr add 192.168.1.254/24 dev br0 38 + 39 + port_add "$c1" "c1" 1 40 + port_add "$c2" "c2" 2 41 + port_add "$c3" "c3" 3 42 + port_add "$sender" "sender" 253 43 + 44 + ip link set br0 up 45 + 46 + modprobe -q br_netfilter 47 + 48 + sysctl net.bridge.bridge-nf-call-iptables=1 || exit 1 49 + 50 + ip netns exec "$sender" ping -I sender -c1 192.168.1.1 || exit 1 51 + ip netns exec "$sender" ping -I sender -c1 192.168.1.2 || exit 2 52 + ip netns exec "$sender" ping -I sender -c1 192.168.1.3 || exit 3 53 + 54 + nft -f /dev/stdin <<EOF 55 + table ip filter { 56 + chain forward { 57 + type filter hook forward priority 0; policy accept; 58 + ct state new counter 59 + ip protocol icmp counter queue num 0 bypass 60 + } 61 + } 62 + EOF 63 + ./nf_queue -t 5 > /dev/null & 64 + 65 + busywait 5000 nf_queue_wait 66 + 67 + for i in $(seq 1 5); do conntrack -F > /dev/null 2> /dev/null; sleep 0.1 ; done & 68 + ip netns exec "$sender" ping -I sender -f -c 50 -b 192.168.1.255 69 + 70 + read t < /proc/sys/kernel/tainted 71 + if [ "$t" -eq 0 ];then 72 + echo PASS: kernel not tainted 73 + else 74 + echo ERROR: kernel is tainted 75 + exit 1 76 + fi 77 + 78 + exit 0
+24 -1
tools/testing/selftests/net/udpgso.c
··· 67 67 int gso_len; /* mss after applying gso */ 68 68 int r_num_mss; /* recv(): number of calls of full mss */ 69 69 int r_len_last; /* recv(): size of last non-mss dgram, if any */ 70 + bool v6_ext_hdr; /* send() dgrams with IPv6 extension headers */ 70 71 }; 71 72 72 73 const struct in6_addr addr6 = { ··· 77 76 const struct in_addr addr4 = { 78 77 __constant_htonl(0x0a000001), /* 10.0.0.1 */ 79 78 }; 79 + 80 + static const char ipv6_hopopts_pad1[8] = { 0 }; 80 81 81 82 struct testcase testcases_v4[] = { 82 83 { ··· 259 256 .r_num_mss = 2, 260 257 }, 261 258 { 259 + /* send 2 1B segments with extension headers */ 260 + .tlen = 2, 261 + .gso_len = 1, 262 + .r_num_mss = 2, 263 + .v6_ext_hdr = true, 264 + }, 265 + { 262 266 /* send 2B + 2B + 1B segments */ 263 267 .tlen = 5, 264 268 .gso_len = 2, ··· 406 396 int i, ret, val, mss; 407 397 bool sent; 408 398 409 - fprintf(stderr, "ipv%d tx:%d gso:%d %s\n", 399 + fprintf(stderr, "ipv%d tx:%d gso:%d %s%s\n", 410 400 addr->sa_family == AF_INET ? 4 : 6, 411 401 test->tlen, test->gso_len, 402 + test->v6_ext_hdr ? "ext-hdr " : "", 412 403 test->tfail ? "(fail)" : ""); 404 + 405 + if (test->v6_ext_hdr) { 406 + if (setsockopt(fdt, IPPROTO_IPV6, IPV6_HOPOPTS, 407 + ipv6_hopopts_pad1, sizeof(ipv6_hopopts_pad1))) 408 + error(1, errno, "setsockopt ipv6 hopopts"); 409 + } 413 410 414 411 val = test->gso_len; 415 412 if (cfg_do_setsockopt) { ··· 429 412 error(1, 0, "send succeeded while expecting failure"); 430 413 if (!sent && !test->tfail) 431 414 error(1, 0, "send failed while expecting success"); 415 + 416 + if (test->v6_ext_hdr) { 417 + if (setsockopt(fdt, IPPROTO_IPV6, IPV6_HOPOPTS, NULL, 0)) 418 + error(1, errno, "setsockopt ipv6 hopopts clear"); 419 + } 420 + 432 421 if (!sent) 433 422 return; 434 423
+4 -7
tools/tracing/rtla/src/osnoise_top.c
··· 651 651 return NULL; 652 652 653 653 tool->data = osnoise_alloc_top(nr_cpus); 654 - if (!tool->data) 655 - goto out_err; 654 + if (!tool->data) { 655 + osnoise_destroy_tool(tool); 656 + return NULL; 657 + } 656 658 657 659 tool->params = params; 658 660 ··· 662 660 osnoise_top_handler, NULL); 663 661 664 662 return tool; 665 - 666 - out_err: 667 - osnoise_free_top(tool->data); 668 - osnoise_destroy_tool(tool); 669 - return NULL; 670 663 } 671 664 672 665 static int stop_tracing;
+7 -6
virt/kvm/eventfd.c
··· 97 97 mutex_lock(&kvm->irqfds.resampler_lock); 98 98 99 99 list_del_rcu(&irqfd->resampler_link); 100 - synchronize_srcu(&kvm->irq_srcu); 101 100 102 101 if (list_empty(&resampler->list)) { 103 102 list_del_rcu(&resampler->link); 104 103 kvm_unregister_irq_ack_notifier(kvm, &resampler->notifier); 105 104 /* 106 - * synchronize_srcu(&kvm->irq_srcu) already called 105 + * synchronize_srcu_expedited(&kvm->irq_srcu) already called 107 106 * in kvm_unregister_irq_ack_notifier(). 108 107 */ 109 108 kvm_set_irq(kvm, KVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID, 110 109 resampler->notifier.gsi, 0, false); 111 110 kfree(resampler); 111 + } else { 112 + synchronize_srcu_expedited(&kvm->irq_srcu); 112 113 } 113 114 114 115 mutex_unlock(&kvm->irqfds.resampler_lock); ··· 127 126 u64 cnt; 128 127 129 128 /* Make sure irqfd has been initialized in assign path. */ 130 - synchronize_srcu(&kvm->irq_srcu); 129 + synchronize_srcu_expedited(&kvm->irq_srcu); 131 130 132 131 /* 133 132 * Synchronize with the wait-queue and unhook ourselves to prevent ··· 385 384 } 386 385 387 386 list_add_rcu(&irqfd->resampler_link, &irqfd->resampler->list); 388 - synchronize_srcu(&kvm->irq_srcu); 387 + synchronize_srcu_expedited(&kvm->irq_srcu); 389 388 390 389 mutex_unlock(&kvm->irqfds.resampler_lock); 391 390 } ··· 524 523 mutex_lock(&kvm->irq_lock); 525 524 hlist_del_init_rcu(&kian->link); 526 525 mutex_unlock(&kvm->irq_lock); 527 - synchronize_srcu(&kvm->irq_srcu); 526 + synchronize_srcu_expedited(&kvm->irq_srcu); 528 527 kvm_arch_post_irq_ack_notifier_list_update(kvm); 529 528 } 530 529 ··· 609 608 610 609 /* 611 610 * Take note of a change in irq routing. 612 - * Caller must invoke synchronize_srcu(&kvm->irq_srcu) afterwards. 611 + * Caller must invoke synchronize_srcu_expedited(&kvm->irq_srcu) afterwards. 613 612 */ 614 613 void kvm_irq_routing_update(struct kvm *kvm) 615 614 {
+2 -3
virt/kvm/kvm_main.c
··· 1578 1578 if (mem->flags & KVM_MEM_GUEST_MEMFD) 1579 1579 valid_flags &= ~KVM_MEM_LOG_DIRTY_PAGES; 1580 1580 1581 - #ifdef CONFIG_HAVE_KVM_READONLY_MEM 1582 1581 /* 1583 1582 * GUEST_MEMFD is incompatible with read-only memslots, as writes to 1584 1583 * read-only memslots have emulated MMIO, not page fault, semantics, 1585 1584 * and KVM doesn't allow emulated MMIO for private memory. 1586 1585 */ 1587 - if (!(mem->flags & KVM_MEM_GUEST_MEMFD)) 1586 + if (kvm_arch_has_readonly_mem(kvm) && 1587 + !(mem->flags & KVM_MEM_GUEST_MEMFD)) 1588 1588 valid_flags |= KVM_MEM_READONLY; 1589 - #endif 1590 1589 1591 1590 if (mem->flags & ~valid_flags) 1592 1591 return -EINVAL;