Merge tag 'i2c-host-fixes-6.15-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/andi.shyti/linux into i2c/for-current

i2c-host-fixes for v6.15-rc5

- imx-lpi2c: fix error handling sequence in probe

+3372 -2046
+8 -21
Documentation/admin-guide/xfs.rst
··· 562 Zoned Filesystems 563 ================= 564 565 - For zoned file systems, the following attribute is exposed in: 566 567 /sys/fs/xfs/<dev>/zoned/ 568 ··· 572 is limited by the capabilities of the backing zoned device, file system 573 size and the max_open_zones mount option. 574 575 - Zoned Filesystems 576 - ================= 577 - 578 - For zoned file systems, the following attributes are exposed in: 579 - 580 - /sys/fs/xfs/<dev>/zoned/ 581 - 582 - max_open_zones (Min: 1 Default: Varies Max: UINTMAX) 583 - This read-only attribute exposes the maximum number of open zones 584 - available for data placement. The value is determined at mount time and 585 - is limited by the capabilities of the backing zoned device, file system 586 - size and the max_open_zones mount option. 587 - 588 - zonegc_low_space (Min: 0 Default: 0 Max: 100) 589 - Define a percentage for how much of the unused space that GC should keep 590 - available for writing. A high value will reclaim more of the space 591 - occupied by unused blocks, creating a larger buffer against write 592 - bursts at the cost of increased write amplification. Regardless 593 - of this value, garbage collection will always aim to free a minimum 594 - amount of blocks to keep max_open_zones open for data placement purposes.
··· 562 Zoned Filesystems 563 ================= 564 565 + For zoned file systems, the following attributes are exposed in: 566 567 /sys/fs/xfs/<dev>/zoned/ 568 ··· 572 is limited by the capabilities of the backing zoned device, file system 573 size and the max_open_zones mount option. 574 575 + zonegc_low_space (Min: 0 Default: 0 Max: 100) 576 + Define a percentage for how much of the unused space that GC should keep 577 + available for writing. A high value will reclaim more of the space 578 + occupied by unused blocks, creating a larger buffer against write 579 + bursts at the cost of increased write amplification. Regardless 580 + of this value, garbage collection will always aim to free a minimum 581 + amount of blocks to keep max_open_zones open for data placement purposes.
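Both attributes are ordinary sysfs files, so they can be inspected and tuned from userspace. A minimal sketch follows, assuming a filesystem entry named "sda" under /sys/fs/xfs/ and assuming zonegc_low_space accepts writes as the text implies; the device name and paths are illustrative only.

```c
/* Illustrative only: read max_open_zones and set zonegc_low_space for a
 * zoned XFS mount. "sda" is a placeholder for the real /sys/fs/xfs/<dev>. */
#include <stdio.h>

int main(void)
{
	unsigned int max_open_zones;
	FILE *f;

	f = fopen("/sys/fs/xfs/sda/zoned/max_open_zones", "r");
	if (!f) {
		perror("max_open_zones");
		return 1;
	}
	if (fscanf(f, "%u", &max_open_zones) == 1)
		printf("max_open_zones: %u\n", max_open_zones);
	fclose(f);

	/* Ask GC to keep 10% of unused space available for write bursts. */
	f = fopen("/sys/fs/xfs/sda/zoned/zonegc_low_space", "w");
	if (!f) {
		perror("zonegc_low_space");
		return 1;
	}
	fprintf(f, "10\n");
	fclose(f);

	return 0;
}
```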
+6 -6
Documentation/arch/openrisc/openrisc_port.rst
··· 7 8 For information about OpenRISC processors and ongoing development: 9 10 - ======= ============================= 11 website https://openrisc.io 12 - email openrisc@lists.librecores.org 13 - ======= ============================= 14 15 --------------------------------------------------------------------- 16 ··· 27 Instructions for building the different toolchains can be found on openrisc.io 28 or Stafford's toolchain build and release scripts. 29 30 - ========== ================================================= 31 - binaries https://github.com/openrisc/or1k-gcc/releases 32 toolchains https://openrisc.io/software 33 building https://github.com/stffrdhrn/or1k-toolchain-build 34 - ========== ================================================= 35 36 2) Building 37
··· 7 8 For information about OpenRISC processors and ongoing development: 9 10 + ======= ============================== 11 website https://openrisc.io 12 + email linux-openrisc@vger.kernel.org 13 + ======= ============================== 14 15 --------------------------------------------------------------------- 16 ··· 27 Instructions for building the different toolchains can be found on openrisc.io 28 or Stafford's toolchain build and release scripts. 29 30 + ========== ========================================================== 31 + binaries https://github.com/stffrdhrn/or1k-toolchain-build/releases 32 toolchains https://openrisc.io/software 33 building https://github.com/stffrdhrn/or1k-toolchain-build 34 + ========== ========================================================== 35 36 2) Building 37
+8
Documentation/bpf/bpf_devel_QA.rst
··· 382 into the Linux kernel, please implement support into LLVM's BPF back 383 end. See LLVM_ section below for further information. 384 385 Stable submission 386 ================= 387
··· 382 into the Linux kernel, please implement support into LLVM's BPF back 383 end. See LLVM_ section below for further information. 384 385 + Q: What "BPF_INTERNAL" symbol namespace is for? 386 + ----------------------------------------------- 387 + A: Symbols exported as BPF_INTERNAL can only be used by BPF infrastructure 388 + like preload kernel modules with light skeleton. Most symbols outside 389 + of BPF_INTERNAL are not expected to be used by code outside of BPF either. 390 + Symbols may lack the designation because they predate the namespaces, 391 + or due to an oversight. 392 + 393 Stable submission 394 ================= 395
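For context, the sketch below shows how a namespaced export and its consumer pair up in general, using the string-literal namespace form current kernels expect; the helper name is hypothetical and is not an actual BPF_INTERNAL symbol.

```c
/* Hypothetical example of a namespaced export and its consumer. */

/* Exporting side (e.g. BPF core code): */
#include <linux/export.h>

void bpf_example_internal_helper(void)
{
	/* ... */
}
EXPORT_SYMBOL_NS_GPL(bpf_example_internal_helper, "BPF_INTERNAL");

/* Consuming side (e.g. a preload kernel module); modpost and the module
 * loader complain if the namespace import is missing. */
#include <linux/module.h>

MODULE_IMPORT_NS("BPF_INTERNAL");
```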
+1 -1
Documentation/devicetree/bindings/nvmem/layouts/fixed-cell.yaml
··· 27 $ref: /schemas/types.yaml#/definitions/uint32-array 28 items: 29 - minimum: 0 30 - maximum: 7 31 description: 32 Offset in bit within the address range specified by reg. 33 - minimum: 1
··· 27 $ref: /schemas/types.yaml#/definitions/uint32-array 28 items: 29 - minimum: 0 30 + maximum: 31 31 description: 32 Offset in bit within the address range specified by reg. 33 - minimum: 1
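For reference, the bit offset and width declared in a cell's bits property are applied by the nvmem core, so a consumer driver simply reads the cell. A hedged sketch, with a made-up "calibration" cell name and surrounding driver context:

```c
/* Hypothetical consumer: the nvmem core masks and shifts the cell
 * according to its "bits" = <offset width> description before
 * returning the value. */
#include <linux/device.h>
#include <linux/nvmem-consumer.h>

static int example_read_calibration(struct device *dev)
{
	u32 cal;
	int ret;

	ret = nvmem_cell_read_u32(dev, "calibration", &cal);
	if (ret)
		return ret;

	dev_info(dev, "calibration value: %#x\n", cal);
	return 0;
}
```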
+4
Documentation/devicetree/bindings/nvmem/qcom,qfprom.yaml
··· 19 - enum: 20 - qcom,apq8064-qfprom 21 - qcom,apq8084-qfprom 22 - qcom,ipq5332-qfprom 23 - qcom,ipq5424-qfprom 24 - qcom,ipq6018-qfprom ··· 29 - qcom,msm8226-qfprom 30 - qcom,msm8916-qfprom 31 - qcom,msm8917-qfprom 32 - qcom,msm8974-qfprom 33 - qcom,msm8976-qfprom 34 - qcom,msm8996-qfprom ··· 54 - qcom,sm8450-qfprom 55 - qcom,sm8550-qfprom 56 - qcom,sm8650-qfprom 57 - const: qcom,qfprom 58 59 reg:
··· 19 - enum: 20 - qcom,apq8064-qfprom 21 - qcom,apq8084-qfprom 22 + - qcom,ipq5018-qfprom 23 - qcom,ipq5332-qfprom 24 - qcom,ipq5424-qfprom 25 - qcom,ipq6018-qfprom ··· 28 - qcom,msm8226-qfprom 29 - qcom,msm8916-qfprom 30 - qcom,msm8917-qfprom 31 + - qcom,msm8937-qfprom 32 + - qcom,msm8960-qfprom 33 - qcom,msm8974-qfprom 34 - qcom,msm8976-qfprom 35 - qcom,msm8996-qfprom ··· 51 - qcom,sm8450-qfprom 52 - qcom,sm8550-qfprom 53 - qcom,sm8650-qfprom 54 + - qcom,x1e80100-qfprom 55 - const: qcom,qfprom 56 57 reg:
+25
Documentation/devicetree/bindings/nvmem/rockchip,otp.yaml
··· 14 enum: 15 - rockchip,px30-otp 16 - rockchip,rk3308-otp 17 - rockchip,rk3588-otp 18 19 reg: ··· 63 properties: 64 clocks: 65 maxItems: 3 66 resets: 67 maxItems: 1 68 reset-names: ··· 76 compatible: 77 contains: 78 enum: 79 - rockchip,rk3588-otp 80 then: 81 properties: 82 clocks: 83 minItems: 4 84 resets: 85 minItems: 3
··· 14 enum: 15 - rockchip,px30-otp 16 - rockchip,rk3308-otp 17 + - rockchip,rk3576-otp 18 - rockchip,rk3588-otp 19 20 reg: ··· 62 properties: 63 clocks: 64 maxItems: 3 65 + clock-names: 66 + maxItems: 3 67 resets: 68 maxItems: 1 69 reset-names: ··· 73 compatible: 74 contains: 75 enum: 76 + - rockchip,rk3576-otp 77 + then: 78 + properties: 79 + clocks: 80 + maxItems: 3 81 + clock-names: 82 + maxItems: 3 83 + resets: 84 + minItems: 2 85 + maxItems: 2 86 + reset-names: 87 + items: 88 + - const: otp 89 + - const: apb 90 + 91 + - if: 92 + properties: 93 + compatible: 94 + contains: 95 + enum: 96 - rockchip,rk3588-otp 97 then: 98 properties: 99 clocks: 100 + minItems: 4 101 + clock-names: 102 minItems: 4 103 resets: 104 minItems: 3
+6 -6
Documentation/translations/zh_CN/arch/openrisc/openrisc_port.rst
··· 17 18 关于OpenRISC处理器和正在进行中的开发的信息: 19 20 - ======= ============================= 21 网站 https://openrisc.io 22 - 邮箱 openrisc@lists.librecores.org 23 - ======= ============================= 24 25 --------------------------------------------------------------------- 26 ··· 36 工具链的构建指南可以在openrisc.io或Stafford的工具链构建和发布脚本 37 中找到。 38 39 - ====== ================================================= 40 - 二进制 https://github.com/openrisc/or1k-gcc/releases 41 工具链 https://openrisc.io/software 42 构建 https://github.com/stffrdhrn/or1k-toolchain-build 43 - ====== ================================================= 44 45 2) 构建 46
··· 17 18 关于OpenRISC处理器和正在进行中的开发的信息: 19 20 + ======= ============================== 21 网站 https://openrisc.io 22 + 邮箱 linux-openrisc@vger.kernel.org 23 + ======= ============================== 24 25 --------------------------------------------------------------------- 26 ··· 36 工具链的构建指南可以在openrisc.io或Stafford的工具链构建和发布脚本 37 中找到。 38 39 + ====== ========================================================== 40 + 二进制 https://github.com/stffrdhrn/or1k-toolchain-build/releases 41 工具链 https://openrisc.io/software 42 构建 https://github.com/stffrdhrn/or1k-toolchain-build 43 + ====== ========================================================== 44 45 2) 构建 46
+6 -6
Documentation/translations/zh_TW/arch/openrisc/openrisc_port.rst
··· 17 18 關於OpenRISC處理器和正在進行中的開發的信息: 19 20 - ======= ============================= 21 網站 https://openrisc.io 22 - 郵箱 openrisc@lists.librecores.org 23 - ======= ============================= 24 25 --------------------------------------------------------------------- 26 ··· 36 工具鏈的構建指南可以在openrisc.io或Stafford的工具鏈構建和發佈腳本 37 中找到。 38 39 - ====== ================================================= 40 - 二進制 https://github.com/openrisc/or1k-gcc/releases 41 工具鏈 https://openrisc.io/software 42 構建 https://github.com/stffrdhrn/or1k-toolchain-build 43 - ====== ================================================= 44 45 2) 構建 46
··· 17 18 關於OpenRISC處理器和正在進行中的開發的信息: 19 20 + ======= ============================== 21 網站 https://openrisc.io 22 + 郵箱 linux-openrisc@vger.kernel.org 23 + ======= ============================== 24 25 --------------------------------------------------------------------- 26 ··· 36 工具鏈的構建指南可以在openrisc.io或Stafford的工具鏈構建和發佈腳本 37 中找到。 38 39 + ====== ========================================================== 40 + 二進制 https://github.com/stffrdhrn/or1k-toolchain-build/releases 41 工具鏈 https://openrisc.io/software 42 構建 https://github.com/stffrdhrn/or1k-toolchain-build 43 + ====== ========================================================== 44 45 2) 構建 46
+38 -8
MAINTAINERS
··· 3191 S: Maintained 3192 F: drivers/clk/socfpga/ 3193 3194 ARM/SOCFPGA EDAC BINDINGS 3195 M: Matthew Gerlach <matthew.gerlach@altera.com> 3196 S: Maintained ··· 3873 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 3874 R: Dave Ertman <david.m.ertman@intel.com> 3875 R: Ira Weiny <ira.weiny@intel.com> 3876 S: Supported 3877 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git 3878 F: Documentation/driver-api/auxiliary_bus.rst 3879 F: drivers/base/auxiliary.c 3880 F: include/linux/auxiliary_bus.h ··· 7234 M: "Rafael J. Wysocki" <rafael@kernel.org> 7235 M: Danilo Krummrich <dakr@kernel.org> 7236 S: Supported 7237 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git 7238 F: Documentation/core-api/kobject.rst 7239 F: drivers/base/ 7240 F: fs/debugfs/ ··· 10464 F: drivers/infiniband/hw/hfi1 10465 10466 HFS FILESYSTEM 10467 L: linux-fsdevel@vger.kernel.org 10468 - S: Orphan 10469 F: Documentation/filesystems/hfs.rst 10470 F: fs/hfs/ 10471 10472 HFSPLUS FILESYSTEM 10473 L: linux-fsdevel@vger.kernel.org 10474 - S: Orphan 10475 F: Documentation/filesystems/hfsplus.rst 10476 F: fs/hfsplus/ 10477 ··· 13125 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 13126 M: Tejun Heo <tj@kernel.org> 13127 S: Supported 13128 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git 13129 F: fs/kernfs/ 13130 F: include/linux/kernfs.h 13131 ··· 16825 F: drivers/connector/ 16826 F: drivers/net/ 16827 F: drivers/ptp/ 16828 F: include/dt-bindings/net/ 16829 F: include/linux/cn_proc.h 16830 F: include/linux/etherdevice.h ··· 16835 F: include/linux/hippidevice.h 16836 F: include/linux/if_* 16837 F: include/linux/inetdevice.h 16838 F: include/linux/netdev* 16839 F: include/linux/platform_data/wiznet.h 16840 F: include/uapi/linux/cn_proc.h ··· 18704 PCI NATIVE HOST BRIDGE AND ENDPOINT DRIVERS 18705 M: Lorenzo Pieralisi <lpieralisi@kernel.org> 18706 M: Krzysztof Wilczyński <kw@linux.com> 18707 - R: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 18708 R: Rob Herring <robh@kernel.org> 18709 L: linux-pci@vger.kernel.org 18710 S: Supported ··· 18757 F: include/linux/of_pci.h 18758 F: include/linux/pci* 18759 F: include/uapi/linux/pci* 18760 F: rust/kernel/pci.rs 18761 F: samples/rust/rust_driver_pci.rs 18762 ··· 21337 L: netdev@vger.kernel.org 21338 S: Supported 21339 F: drivers/s390/net/ 21340 21341 S390 PCI SUBSYSTEM 21342 M: Niklas Schnelle <schnelle@linux.ibm.com> ··· 25210 F: drivers/usb/typec/mux/pi3usb30532.c 25211 25212 USB TYPEC PORT CONTROLLER DRIVERS 25213 L: linux-usb@vger.kernel.org 25214 - S: Orphan 25215 - F: drivers/usb/typec/tcpm/ 25216 25217 USB TYPEC TUSB1046 MUX DRIVER 25218 M: Romain Gantois <romain.gantois@bootlin.com>
··· 3191 S: Maintained 3192 F: drivers/clk/socfpga/ 3193 3194 + ARM/SOCFPGA DWMAC GLUE LAYER 3195 + M: Maxime Chevallier <maxime.chevallier@bootlin.com> 3196 + S: Maintained 3197 + F: Documentation/devicetree/bindings/net/socfpga-dwmac.txt 3198 + F: drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c 3199 + 3200 ARM/SOCFPGA EDAC BINDINGS 3201 M: Matthew Gerlach <matthew.gerlach@altera.com> 3202 S: Maintained ··· 3867 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 3868 R: Dave Ertman <david.m.ertman@intel.com> 3869 R: Ira Weiny <ira.weiny@intel.com> 3870 + R: Leon Romanovsky <leon@kernel.org> 3871 S: Supported 3872 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/driver-core/driver-core.git 3873 F: Documentation/driver-api/auxiliary_bus.rst 3874 F: drivers/base/auxiliary.c 3875 F: include/linux/auxiliary_bus.h ··· 7227 M: "Rafael J. Wysocki" <rafael@kernel.org> 7228 M: Danilo Krummrich <dakr@kernel.org> 7229 S: Supported 7230 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/driver-core/driver-core.git 7231 F: Documentation/core-api/kobject.rst 7232 F: drivers/base/ 7233 F: fs/debugfs/ ··· 10457 F: drivers/infiniband/hw/hfi1 10458 10459 HFS FILESYSTEM 10460 + M: Viacheslav Dubeyko <slava@dubeyko.com> 10461 + M: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> 10462 + M: Yangtao Li <frank.li@vivo.com> 10463 L: linux-fsdevel@vger.kernel.org 10464 + S: Maintained 10465 F: Documentation/filesystems/hfs.rst 10466 F: fs/hfs/ 10467 10468 HFSPLUS FILESYSTEM 10469 + M: Viacheslav Dubeyko <slava@dubeyko.com> 10470 + M: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> 10471 + M: Yangtao Li <frank.li@vivo.com> 10472 L: linux-fsdevel@vger.kernel.org 10473 + S: Maintained 10474 F: Documentation/filesystems/hfsplus.rst 10475 F: fs/hfsplus/ 10476 ··· 13112 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 13113 M: Tejun Heo <tj@kernel.org> 13114 S: Supported 13115 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/driver-core/driver-core.git 13116 F: fs/kernfs/ 13117 F: include/linux/kernfs.h 13118 ··· 16812 F: drivers/connector/ 16813 F: drivers/net/ 16814 F: drivers/ptp/ 16815 + F: drivers/s390/net/ 16816 F: include/dt-bindings/net/ 16817 F: include/linux/cn_proc.h 16818 F: include/linux/etherdevice.h ··· 16821 F: include/linux/hippidevice.h 16822 F: include/linux/if_* 16823 F: include/linux/inetdevice.h 16824 + F: include/linux/ism.h 16825 F: include/linux/netdev* 16826 F: include/linux/platform_data/wiznet.h 16827 F: include/uapi/linux/cn_proc.h ··· 18689 PCI NATIVE HOST BRIDGE AND ENDPOINT DRIVERS 18690 M: Lorenzo Pieralisi <lpieralisi@kernel.org> 18691 M: Krzysztof Wilczyński <kw@linux.com> 18692 + M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 18693 R: Rob Herring <robh@kernel.org> 18694 L: linux-pci@vger.kernel.org 18695 S: Supported ··· 18742 F: include/linux/of_pci.h 18743 F: include/linux/pci* 18744 F: include/uapi/linux/pci* 18745 + 18746 + PCI SUBSYSTEM [RUST] 18747 + M: Danilo Krummrich <dakr@kernel.org> 18748 + R: Bjorn Helgaas <bhelgaas@google.com> 18749 + R: Krzysztof Wilczyński <kwilczynski@kernel.org> 18750 + L: linux-pci@vger.kernel.org 18751 + S: Maintained 18752 + C: irc://irc.oftc.net/linux-pci 18753 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git 18754 + F: rust/helpers/pci.c 18755 F: rust/kernel/pci.rs 18756 F: samples/rust/rust_driver_pci.rs 18757 ··· 21312 L: netdev@vger.kernel.org 21313 S: Supported 21314 F: drivers/s390/net/ 21315 + F: include/linux/ism.h 21316 21317 S390 PCI SUBSYSTEM 21318 M: Niklas Schnelle <schnelle@linux.ibm.com> ··· 25184 F: drivers/usb/typec/mux/pi3usb30532.c 25185 25186 USB TYPEC PORT CONTROLLER DRIVERS 25187 + M: Badhri Jagan Sridharan <badhri@google.com> 25188 L: linux-usb@vger.kernel.org 25189 + S: Maintained 25190 + F: drivers/usb/typec/tcpm/tcpci.c 25191 + F: drivers/usb/typec/tcpm/tcpm.c 25192 + F: include/linux/usb/tcpci.h 25193 + F: include/linux/usb/tcpm.h 25194 25195 USB TYPEC TUSB1046 MUX DRIVER 25196 M: Romain Gantois <romain.gantois@bootlin.com>
+4 -4
Makefile
··· 2 VERSION = 6 3 PATCHLEVEL = 15 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc3 6 NAME = Baby Opossum Posse 7 8 # *DOCUMENTATION* ··· 1053 KBUILD_CFLAGS += $(call cc-option, -fstrict-flex-arrays=3) 1054 1055 #Currently, disable -Wstringop-overflow for GCC 11, globally. 1056 - KBUILD_CFLAGS-$(CONFIG_CC_NO_STRINGOP_OVERFLOW) += $(call cc-option, -Wno-stringop-overflow) 1057 KBUILD_CFLAGS-$(CONFIG_CC_STRINGOP_OVERFLOW) += $(call cc-option, -Wstringop-overflow) 1058 1059 - #Currently, disable -Wunterminated-string-initialization as an error 1060 - KBUILD_CFLAGS += $(call cc-option, -Wno-error=unterminated-string-initialization) 1061 1062 # disable invalid "can't wrap" optimizations for signed / pointers 1063 KBUILD_CFLAGS += -fno-strict-overflow
··· 2 VERSION = 6 3 PATCHLEVEL = 15 4 SUBLEVEL = 0 5 + EXTRAVERSION = -rc4 6 NAME = Baby Opossum Posse 7 8 # *DOCUMENTATION* ··· 1053 KBUILD_CFLAGS += $(call cc-option, -fstrict-flex-arrays=3) 1054 1055 #Currently, disable -Wstringop-overflow for GCC 11, globally. 1056 + KBUILD_CFLAGS-$(CONFIG_CC_NO_STRINGOP_OVERFLOW) += $(call cc-disable-warning, stringop-overflow) 1057 KBUILD_CFLAGS-$(CONFIG_CC_STRINGOP_OVERFLOW) += $(call cc-option, -Wstringop-overflow) 1058 1059 + #Currently, disable -Wunterminated-string-initialization as broken 1060 + KBUILD_CFLAGS += $(call cc-disable-warning, unterminated-string-initialization) 1061 1062 # disable invalid "can't wrap" optimizations for signed / pointers 1063 KBUILD_CFLAGS += -fno-strict-overflow
+5
arch/arm64/include/asm/kvm_host.h
··· 1588 #define kvm_has_s1poe(k) \ 1589 (kvm_has_feat((k), ID_AA64MMFR3_EL1, S1POE, IMP)) 1590 1591 #endif /* __ARM64_KVM_HOST_H__ */
··· 1588 #define kvm_has_s1poe(k) \ 1589 (kvm_has_feat((k), ID_AA64MMFR3_EL1, S1POE, IMP)) 1590 1591 + static inline bool kvm_arch_has_irq_bypass(void) 1592 + { 1593 + return true; 1594 + } 1595 + 1596 #endif /* __ARM64_KVM_HOST_H__ */
-11
arch/arm64/include/asm/mmu.h
··· 94 return false; 95 } 96 97 - /* 98 - * Systems affected by Cavium erratum 24756 are incompatible 99 - * with KPTI. 100 - */ 101 - if (IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) { 102 - extern const struct midr_range cavium_erratum_27456_cpus[]; 103 - 104 - if (is_midr_in_range_list(cavium_erratum_27456_cpus)) 105 - return false; 106 - } 107 - 108 return true; 109 } 110
··· 94 return false; 95 } 96 97 return true; 98 } 99
+1 -1
arch/arm64/kernel/cpu_errata.c
··· 335 #endif 336 337 #ifdef CONFIG_CAVIUM_ERRATUM_27456 338 - const struct midr_range cavium_erratum_27456_cpus[] = { 339 /* Cavium ThunderX, T88 pass 1.x - 2.1 */ 340 MIDR_RANGE(MIDR_THUNDERX, 0, 0, 1, 1), 341 /* Cavium ThunderX, T81 pass 1.0 */
··· 335 #endif 336 337 #ifdef CONFIG_CAVIUM_ERRATUM_27456 338 + static const struct midr_range cavium_erratum_27456_cpus[] = { 339 /* Cavium ThunderX, T88 pass 1.x - 2.1 */ 340 MIDR_RANGE(MIDR_THUNDERX, 0, 0, 1, 1), 341 /* Cavium ThunderX, T81 pass 1.0 */
-4
arch/arm64/kernel/image-vars.h
··· 47 PROVIDE(__pi_id_aa64zfr0_override = id_aa64zfr0_override); 48 PROVIDE(__pi_arm64_sw_feature_override = arm64_sw_feature_override); 49 PROVIDE(__pi_arm64_use_ng_mappings = arm64_use_ng_mappings); 50 - #ifdef CONFIG_CAVIUM_ERRATUM_27456 51 - PROVIDE(__pi_cavium_erratum_27456_cpus = cavium_erratum_27456_cpus); 52 - PROVIDE(__pi_is_midr_in_range_list = is_midr_in_range_list); 53 - #endif 54 PROVIDE(__pi__ctype = _ctype); 55 PROVIDE(__pi_memstart_offset_seed = memstart_offset_seed); 56
··· 47 PROVIDE(__pi_id_aa64zfr0_override = id_aa64zfr0_override); 48 PROVIDE(__pi_arm64_sw_feature_override = arm64_sw_feature_override); 49 PROVIDE(__pi_arm64_use_ng_mappings = arm64_use_ng_mappings); 50 PROVIDE(__pi__ctype = _ctype); 51 PROVIDE(__pi_memstart_offset_seed = memstart_offset_seed); 52
+24 -1
arch/arm64/kernel/pi/map_kernel.c
··· 207 dsb(ishst); 208 } 209 210 asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt) 211 { 212 static char const chosen_str[] __initconst = "/chosen"; ··· 269 u64 kaslr_seed = kaslr_early_init(fdt, chosen); 270 271 if (kaslr_seed && kaslr_requires_kpti()) 272 - arm64_use_ng_mappings = true; 273 274 kaslr_offset |= kaslr_seed & ~(MIN_KIMG_ALIGN - 1); 275 }
··· 207 dsb(ishst); 208 } 209 210 + /* 211 + * PI version of the Cavium Eratum 27456 detection, which makes it 212 + * impossible to use non-global mappings. 213 + */ 214 + static bool __init ng_mappings_allowed(void) 215 + { 216 + static const struct midr_range cavium_erratum_27456_cpus[] __initconst = { 217 + /* Cavium ThunderX, T88 pass 1.x - 2.1 */ 218 + MIDR_RANGE(MIDR_THUNDERX, 0, 0, 1, 1), 219 + /* Cavium ThunderX, T81 pass 1.0 */ 220 + MIDR_REV(MIDR_THUNDERX_81XX, 0, 0), 221 + {}, 222 + }; 223 + 224 + for (const struct midr_range *r = cavium_erratum_27456_cpus; r->model; r++) { 225 + if (midr_is_cpu_model_range(read_cpuid_id(), r->model, 226 + r->rv_min, r->rv_max)) 227 + return false; 228 + } 229 + 230 + return true; 231 + } 232 + 233 asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt) 234 { 235 static char const chosen_str[] __initconst = "/chosen"; ··· 246 u64 kaslr_seed = kaslr_early_init(fdt, chosen); 247 248 if (kaslr_seed && kaslr_requires_kpti()) 249 + arm64_use_ng_mappings = ng_mappings_allowed(); 250 251 kaslr_offset |= kaslr_seed & ~(MIN_KIMG_ALIGN - 1); 252 }
-5
arch/arm64/kvm/arm.c
··· 2743 return irqchip_in_kernel(kvm); 2744 } 2745 2746 - bool kvm_arch_has_irq_bypass(void) 2747 - { 2748 - return true; 2749 - } 2750 - 2751 int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons, 2752 struct irq_bypass_producer *prod) 2753 {
··· 2743 return irqchip_in_kernel(kvm); 2744 } 2745 2746 int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons, 2747 struct irq_bypass_producer *prod) 2748 {
+1
arch/loongarch/Kconfig
··· 73 select ARCH_SUPPORTS_RT 74 select ARCH_USE_BUILTIN_BSWAP 75 select ARCH_USE_CMPXCHG_LOCKREF 76 select ARCH_USE_QUEUED_RWLOCKS 77 select ARCH_USE_QUEUED_SPINLOCKS 78 select ARCH_WANT_DEFAULT_BPF_JIT
··· 73 select ARCH_SUPPORTS_RT 74 select ARCH_USE_BUILTIN_BSWAP 75 select ARCH_USE_CMPXCHG_LOCKREF 76 + select ARCH_USE_MEMTEST 77 select ARCH_USE_QUEUED_RWLOCKS 78 select ARCH_USE_QUEUED_SPINLOCKS 79 select ARCH_WANT_DEFAULT_BPF_JIT
+20 -13
arch/loongarch/include/asm/fpu.h
··· 22 struct sigcontext; 23 24 #define kernel_fpu_available() cpu_has_fpu 25 - extern void kernel_fpu_begin(void); 26 - extern void kernel_fpu_end(void); 27 28 - extern void _init_fpu(unsigned int); 29 - extern void _save_fp(struct loongarch_fpu *); 30 - extern void _restore_fp(struct loongarch_fpu *); 31 32 - extern void _save_lsx(struct loongarch_fpu *fpu); 33 - extern void _restore_lsx(struct loongarch_fpu *fpu); 34 - extern void _init_lsx_upper(void); 35 - extern void _restore_lsx_upper(struct loongarch_fpu *fpu); 36 37 - extern void _save_lasx(struct loongarch_fpu *fpu); 38 - extern void _restore_lasx(struct loongarch_fpu *fpu); 39 - extern void _init_lasx_upper(void); 40 - extern void _restore_lasx_upper(struct loongarch_fpu *fpu); 41 42 static inline void enable_lsx(void); 43 static inline void disable_lsx(void);
··· 22 struct sigcontext; 23 24 #define kernel_fpu_available() cpu_has_fpu 25 26 + void kernel_fpu_begin(void); 27 + void kernel_fpu_end(void); 28 29 + asmlinkage void _init_fpu(unsigned int); 30 + asmlinkage void _save_fp(struct loongarch_fpu *); 31 + asmlinkage void _restore_fp(struct loongarch_fpu *); 32 + asmlinkage int _save_fp_context(void __user *fpregs, void __user *fcc, void __user *csr); 33 + asmlinkage int _restore_fp_context(void __user *fpregs, void __user *fcc, void __user *csr); 34 35 + asmlinkage void _save_lsx(struct loongarch_fpu *fpu); 36 + asmlinkage void _restore_lsx(struct loongarch_fpu *fpu); 37 + asmlinkage void _init_lsx_upper(void); 38 + asmlinkage void _restore_lsx_upper(struct loongarch_fpu *fpu); 39 + asmlinkage int _save_lsx_context(void __user *fpregs, void __user *fcc, void __user *fcsr); 40 + asmlinkage int _restore_lsx_context(void __user *fpregs, void __user *fcc, void __user *fcsr); 41 + 42 + asmlinkage void _save_lasx(struct loongarch_fpu *fpu); 43 + asmlinkage void _restore_lasx(struct loongarch_fpu *fpu); 44 + asmlinkage void _init_lasx_upper(void); 45 + asmlinkage void _restore_lasx_upper(struct loongarch_fpu *fpu); 46 + asmlinkage int _save_lasx_context(void __user *fpregs, void __user *fcc, void __user *fcsr); 47 + asmlinkage int _restore_lasx_context(void __user *fpregs, void __user *fcc, void __user *fcsr); 48 49 static inline void enable_lsx(void); 50 static inline void disable_lsx(void);
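As a usage note, any kernel-mode floating-point work relying on these declarations is bracketed by kernel_fpu_begin()/kernel_fpu_end() and gated on kernel_fpu_available(); a minimal sketch with an illustrative function name:

```c
/* Illustrative only: kernel-mode FP work on LoongArch must be bracketed
 * by kernel_fpu_begin()/kernel_fpu_end() and attempted only when
 * kernel_fpu_available() reports FPU support. */
#include <asm/fpu.h>

static void example_fpu_section(void)
{
	if (!kernel_fpu_available())
		return;

	kernel_fpu_begin();
	/* ... floating-point or SIMD work goes here ... */
	kernel_fpu_end();
}
```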
+7 -3
arch/loongarch/include/asm/lbt.h
··· 12 #include <asm/loongarch.h> 13 #include <asm/processor.h> 14 15 - extern void _init_lbt(void); 16 - extern void _save_lbt(struct loongarch_lbt *); 17 - extern void _restore_lbt(struct loongarch_lbt *); 18 19 static inline int is_lbt_enabled(void) 20 {
··· 12 #include <asm/loongarch.h> 13 #include <asm/processor.h> 14 15 + asmlinkage void _init_lbt(void); 16 + asmlinkage void _save_lbt(struct loongarch_lbt *); 17 + asmlinkage void _restore_lbt(struct loongarch_lbt *); 18 + asmlinkage int _save_lbt_context(void __user *regs, void __user *eflags); 19 + asmlinkage int _restore_lbt_context(void __user *regs, void __user *eflags); 20 + asmlinkage int _save_ftop_context(void __user *ftop); 21 + asmlinkage int _restore_ftop_context(void __user *ftop); 22 23 static inline int is_lbt_enabled(void) 24 {
+2 -2
arch/loongarch/include/asm/ptrace.h
··· 33 unsigned long __last[]; 34 } __aligned(8); 35 36 - static inline int regs_irqs_disabled(struct pt_regs *regs) 37 { 38 - return arch_irqs_disabled_flags(regs->csr_prmd); 39 } 40 41 static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
··· 33 unsigned long __last[]; 34 } __aligned(8); 35 36 + static __always_inline bool regs_irqs_disabled(struct pt_regs *regs) 37 { 38 + return !(regs->csr_prmd & CSR_PRMD_PIE); 39 } 40 41 static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
+4 -4
arch/loongarch/kernel/Makefile
··· 21 22 obj-$(CONFIG_ARCH_STRICT_ALIGN) += unaligned.o 23 24 - CFLAGS_module.o += $(call cc-option,-Wno-override-init,) 25 - CFLAGS_syscall.o += $(call cc-option,-Wno-override-init,) 26 - CFLAGS_traps.o += $(call cc-option,-Wno-override-init,) 27 - CFLAGS_perf_event.o += $(call cc-option,-Wno-override-init,) 28 29 ifdef CONFIG_FUNCTION_TRACER 30 ifndef CONFIG_DYNAMIC_FTRACE
··· 21 22 obj-$(CONFIG_ARCH_STRICT_ALIGN) += unaligned.o 23 24 + CFLAGS_module.o += $(call cc-disable-warning, override-init) 25 + CFLAGS_syscall.o += $(call cc-disable-warning, override-init) 26 + CFLAGS_traps.o += $(call cc-disable-warning, override-init) 27 + CFLAGS_perf_event.o += $(call cc-disable-warning, override-init) 28 29 ifdef CONFIG_FUNCTION_TRACER 30 ifndef CONFIG_DYNAMIC_FTRACE
+6
arch/loongarch/kernel/fpu.S
··· 458 li.w a0, 0 # success 459 jr ra 460 SYM_FUNC_END(_save_fp_context) 461 462 /* 463 * a0: fpregs ··· 472 li.w a0, 0 # success 473 jr ra 474 SYM_FUNC_END(_restore_fp_context) 475 476 /* 477 * a0: fpregs ··· 486 li.w a0, 0 # success 487 jr ra 488 SYM_FUNC_END(_save_lsx_context) 489 490 /* 491 * a0: fpregs ··· 500 li.w a0, 0 # success 501 jr ra 502 SYM_FUNC_END(_restore_lsx_context) 503 504 /* 505 * a0: fpregs ··· 514 li.w a0, 0 # success 515 jr ra 516 SYM_FUNC_END(_save_lasx_context) 517 518 /* 519 * a0: fpregs ··· 528 li.w a0, 0 # success 529 jr ra 530 SYM_FUNC_END(_restore_lasx_context) 531 532 .L_fpu_fault: 533 li.w a0, -EFAULT # failure
··· 458 li.w a0, 0 # success 459 jr ra 460 SYM_FUNC_END(_save_fp_context) 461 + EXPORT_SYMBOL_GPL(_save_fp_context) 462 463 /* 464 * a0: fpregs ··· 471 li.w a0, 0 # success 472 jr ra 473 SYM_FUNC_END(_restore_fp_context) 474 + EXPORT_SYMBOL_GPL(_restore_fp_context) 475 476 /* 477 * a0: fpregs ··· 484 li.w a0, 0 # success 485 jr ra 486 SYM_FUNC_END(_save_lsx_context) 487 + EXPORT_SYMBOL_GPL(_save_lsx_context) 488 489 /* 490 * a0: fpregs ··· 497 li.w a0, 0 # success 498 jr ra 499 SYM_FUNC_END(_restore_lsx_context) 500 + EXPORT_SYMBOL_GPL(_restore_lsx_context) 501 502 /* 503 * a0: fpregs ··· 510 li.w a0, 0 # success 511 jr ra 512 SYM_FUNC_END(_save_lasx_context) 513 + EXPORT_SYMBOL_GPL(_save_lasx_context) 514 515 /* 516 * a0: fpregs ··· 523 li.w a0, 0 # success 524 jr ra 525 SYM_FUNC_END(_restore_lasx_context) 526 + EXPORT_SYMBOL_GPL(_restore_lasx_context) 527 528 .L_fpu_fault: 529 li.w a0, -EFAULT # failure
+4
arch/loongarch/kernel/lbt.S
··· 90 li.w a0, 0 # success 91 jr ra 92 SYM_FUNC_END(_save_lbt_context) 93 94 /* 95 * a0: scr ··· 111 li.w a0, 0 # success 112 jr ra 113 SYM_FUNC_END(_restore_lbt_context) 114 115 /* 116 * a0: ftop ··· 122 li.w a0, 0 # success 123 jr ra 124 SYM_FUNC_END(_save_ftop_context) 125 126 /* 127 * a0: ftop ··· 153 li.w a0, 0 # success 154 jr ra 155 SYM_FUNC_END(_restore_ftop_context) 156 157 .L_lbt_fault: 158 li.w a0, -EFAULT # failure
··· 90 li.w a0, 0 # success 91 jr ra 92 SYM_FUNC_END(_save_lbt_context) 93 + EXPORT_SYMBOL_GPL(_save_lbt_context) 94 95 /* 96 * a0: scr ··· 110 li.w a0, 0 # success 111 jr ra 112 SYM_FUNC_END(_restore_lbt_context) 113 + EXPORT_SYMBOL_GPL(_restore_lbt_context) 114 115 /* 116 * a0: ftop ··· 120 li.w a0, 0 # success 121 jr ra 122 SYM_FUNC_END(_save_ftop_context) 123 + EXPORT_SYMBOL_GPL(_save_ftop_context) 124 125 /* 126 * a0: ftop ··· 150 li.w a0, 0 # success 151 jr ra 152 SYM_FUNC_END(_restore_ftop_context) 153 + EXPORT_SYMBOL_GPL(_restore_ftop_context) 154 155 .L_lbt_fault: 156 li.w a0, -EFAULT # failure
-21
arch/loongarch/kernel/signal.c
··· 51 #define lock_lbt_owner() ({ preempt_disable(); pagefault_disable(); }) 52 #define unlock_lbt_owner() ({ pagefault_enable(); preempt_enable(); }) 53 54 - /* Assembly functions to move context to/from the FPU */ 55 - extern asmlinkage int 56 - _save_fp_context(void __user *fpregs, void __user *fcc, void __user *csr); 57 - extern asmlinkage int 58 - _restore_fp_context(void __user *fpregs, void __user *fcc, void __user *csr); 59 - extern asmlinkage int 60 - _save_lsx_context(void __user *fpregs, void __user *fcc, void __user *fcsr); 61 - extern asmlinkage int 62 - _restore_lsx_context(void __user *fpregs, void __user *fcc, void __user *fcsr); 63 - extern asmlinkage int 64 - _save_lasx_context(void __user *fpregs, void __user *fcc, void __user *fcsr); 65 - extern asmlinkage int 66 - _restore_lasx_context(void __user *fpregs, void __user *fcc, void __user *fcsr); 67 - 68 - #ifdef CONFIG_CPU_HAS_LBT 69 - extern asmlinkage int _save_lbt_context(void __user *regs, void __user *eflags); 70 - extern asmlinkage int _restore_lbt_context(void __user *regs, void __user *eflags); 71 - extern asmlinkage int _save_ftop_context(void __user *ftop); 72 - extern asmlinkage int _restore_ftop_context(void __user *ftop); 73 - #endif 74 - 75 struct rt_sigframe { 76 struct siginfo rs_info; 77 struct ucontext rs_uctx;
··· 51 #define lock_lbt_owner() ({ preempt_disable(); pagefault_disable(); }) 52 #define unlock_lbt_owner() ({ pagefault_enable(); preempt_enable(); }) 53 54 struct rt_sigframe { 55 struct siginfo rs_info; 56 struct ucontext rs_uctx;
+12 -8
arch/loongarch/kernel/traps.c
··· 553 die_if_kernel("Kernel ale access", regs); 554 force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)regs->csr_badvaddr); 555 #else 556 unsigned int *pc; 557 558 - if (regs->csr_prmd & CSR_PRMD_PIE) 559 local_irq_enable(); 560 561 perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, regs->csr_badvaddr); ··· 583 die_if_kernel("Kernel ale access", regs); 584 force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)regs->csr_badvaddr); 585 out: 586 - if (regs->csr_prmd & CSR_PRMD_PIE) 587 local_irq_disable(); 588 #endif 589 irqentry_exit(regs, state); ··· 622 asmlinkage void noinstr do_bce(struct pt_regs *regs) 623 { 624 bool user = user_mode(regs); 625 unsigned long era = exception_era(regs); 626 u64 badv = 0, lower = 0, upper = ULONG_MAX; 627 union loongarch_instruction insn; 628 irqentry_state_t state = irqentry_enter(regs); 629 630 - if (regs->csr_prmd & CSR_PRMD_PIE) 631 local_irq_enable(); 632 633 current->thread.trap_nr = read_csr_excode(); ··· 694 force_sig_bnderr((void __user *)badv, (void __user *)lower, (void __user *)upper); 695 696 out: 697 - if (regs->csr_prmd & CSR_PRMD_PIE) 698 local_irq_disable(); 699 700 irqentry_exit(regs, state); ··· 712 asmlinkage void noinstr do_bp(struct pt_regs *regs) 713 { 714 bool user = user_mode(regs); 715 unsigned int opcode, bcode; 716 unsigned long era = exception_era(regs); 717 irqentry_state_t state = irqentry_enter(regs); 718 719 - if (regs->csr_prmd & CSR_PRMD_PIE) 720 local_irq_enable(); 721 722 if (__get_inst(&opcode, (u32 *)era, user)) ··· 783 } 784 785 out: 786 - if (regs->csr_prmd & CSR_PRMD_PIE) 787 local_irq_disable(); 788 789 irqentry_exit(regs, state); ··· 1018 1019 asmlinkage void noinstr do_lbt(struct pt_regs *regs) 1020 { 1021 irqentry_state_t state = irqentry_enter(regs); 1022 1023 /* ··· 1028 * (including the user using 'MOVGR2GCSR' to turn on TM, which 1029 * will not trigger the BTE), we need to check PRMD first. 1030 */ 1031 - if (regs->csr_prmd & CSR_PRMD_PIE) 1032 local_irq_enable(); 1033 1034 if (!cpu_has_lbt) { ··· 1042 preempt_enable(); 1043 1044 out: 1045 - if (regs->csr_prmd & CSR_PRMD_PIE) 1046 local_irq_disable(); 1047 1048 irqentry_exit(regs, state);
··· 553 die_if_kernel("Kernel ale access", regs); 554 force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)regs->csr_badvaddr); 555 #else 556 + bool pie = regs_irqs_disabled(regs); 557 unsigned int *pc; 558 559 + if (!pie) 560 local_irq_enable(); 561 562 perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, regs->csr_badvaddr); ··· 582 die_if_kernel("Kernel ale access", regs); 583 force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)regs->csr_badvaddr); 584 out: 585 + if (!pie) 586 local_irq_disable(); 587 #endif 588 irqentry_exit(regs, state); ··· 621 asmlinkage void noinstr do_bce(struct pt_regs *regs) 622 { 623 bool user = user_mode(regs); 624 + bool pie = regs_irqs_disabled(regs); 625 unsigned long era = exception_era(regs); 626 u64 badv = 0, lower = 0, upper = ULONG_MAX; 627 union loongarch_instruction insn; 628 irqentry_state_t state = irqentry_enter(regs); 629 630 + if (!pie) 631 local_irq_enable(); 632 633 current->thread.trap_nr = read_csr_excode(); ··· 692 force_sig_bnderr((void __user *)badv, (void __user *)lower, (void __user *)upper); 693 694 out: 695 + if (!pie) 696 local_irq_disable(); 697 698 irqentry_exit(regs, state); ··· 710 asmlinkage void noinstr do_bp(struct pt_regs *regs) 711 { 712 bool user = user_mode(regs); 713 + bool pie = regs_irqs_disabled(regs); 714 unsigned int opcode, bcode; 715 unsigned long era = exception_era(regs); 716 irqentry_state_t state = irqentry_enter(regs); 717 718 + if (!pie) 719 local_irq_enable(); 720 721 if (__get_inst(&opcode, (u32 *)era, user)) ··· 780 } 781 782 out: 783 + if (!pie) 784 local_irq_disable(); 785 786 irqentry_exit(regs, state); ··· 1015 1016 asmlinkage void noinstr do_lbt(struct pt_regs *regs) 1017 { 1018 + bool pie = regs_irqs_disabled(regs); 1019 irqentry_state_t state = irqentry_enter(regs); 1020 1021 /* ··· 1024 * (including the user using 'MOVGR2GCSR' to turn on TM, which 1025 * will not trigger the BTE), we need to check PRMD first. 1026 */ 1027 + if (!pie) 1028 local_irq_enable(); 1029 1030 if (!cpu_has_lbt) { ··· 1038 preempt_enable(); 1039 1040 out: 1041 + if (!pie) 1042 local_irq_disable(); 1043 1044 irqentry_exit(regs, state);
+1 -1
arch/loongarch/kvm/Makefile
··· 21 kvm-y += intc/pch_pic.o 22 kvm-y += irqfd.o 23 24 - CFLAGS_exit.o += $(call cc-option,-Wno-override-init,)
··· 21 kvm-y += intc/pch_pic.o 22 kvm-y += irqfd.o 23 24 + CFLAGS_exit.o += $(call cc-disable-warning, override-init)
+2 -2
arch/loongarch/kvm/intc/ipi.c
··· 111 ret = kvm_io_bus_read(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val); 112 srcu_read_unlock(&vcpu->kvm->srcu, idx); 113 if (unlikely(ret)) { 114 - kvm_err("%s: : read date from addr %llx failed\n", __func__, addr); 115 return ret; 116 } 117 /* Construct the mask by scanning the bit 27-30 */ ··· 127 ret = kvm_io_bus_write(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val); 128 srcu_read_unlock(&vcpu->kvm->srcu, idx); 129 if (unlikely(ret)) 130 - kvm_err("%s: : write date to addr %llx failed\n", __func__, addr); 131 132 return ret; 133 }
··· 111 ret = kvm_io_bus_read(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val); 112 srcu_read_unlock(&vcpu->kvm->srcu, idx); 113 if (unlikely(ret)) { 114 + kvm_err("%s: : read data from addr %llx failed\n", __func__, addr); 115 return ret; 116 } 117 /* Construct the mask by scanning the bit 27-30 */ ··· 127 ret = kvm_io_bus_write(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val); 128 srcu_read_unlock(&vcpu->kvm->srcu, idx); 129 if (unlikely(ret)) 130 + kvm_err("%s: : write data to addr %llx failed\n", __func__, addr); 131 132 return ret; 133 }
+2 -2
arch/loongarch/kvm/main.c
··· 296 /* 297 * Enable virtualization features granting guest direct control of 298 * certain features: 299 - * GCI=2: Trap on init or unimplement cache instruction. 300 * TORU=0: Trap on Root Unimplement. 301 * CACTRL=1: Root control cache. 302 - * TOP=0: Trap on Previlege. 303 * TOE=0: Trap on Exception. 304 * TIT=0: Trap on Timer. 305 */
··· 296 /* 297 * Enable virtualization features granting guest direct control of 298 * certain features: 299 + * GCI=2: Trap on init or unimplemented cache instruction. 300 * TORU=0: Trap on Root Unimplement. 301 * CACTRL=1: Root control cache. 302 + * TOP=0: Trap on Privilege. 303 * TOE=0: Trap on Exception. 304 * TIT=0: Trap on Timer. 305 */
+8
arch/loongarch/kvm/vcpu.c
··· 294 vcpu->arch.aux_inuse &= ~KVM_LARCH_SWCSR_LATEST; 295 296 if (kvm_request_pending(vcpu) || xfer_to_guest_mode_work_pending()) { 297 /* make sure the vcpu mode has been written */ 298 smp_store_mb(vcpu->mode, OUTSIDE_GUEST_MODE); 299 local_irq_enable(); ··· 903 vcpu->arch.st.guest_addr = 0; 904 memset(&vcpu->arch.irq_pending, 0, sizeof(vcpu->arch.irq_pending)); 905 memset(&vcpu->arch.irq_clear, 0, sizeof(vcpu->arch.irq_clear)); 906 break; 907 default: 908 ret = -EINVAL;
··· 294 vcpu->arch.aux_inuse &= ~KVM_LARCH_SWCSR_LATEST; 295 296 if (kvm_request_pending(vcpu) || xfer_to_guest_mode_work_pending()) { 297 + kvm_lose_pmu(vcpu); 298 /* make sure the vcpu mode has been written */ 299 smp_store_mb(vcpu->mode, OUTSIDE_GUEST_MODE); 300 local_irq_enable(); ··· 902 vcpu->arch.st.guest_addr = 0; 903 memset(&vcpu->arch.irq_pending, 0, sizeof(vcpu->arch.irq_pending)); 904 memset(&vcpu->arch.irq_clear, 0, sizeof(vcpu->arch.irq_clear)); 905 + 906 + /* 907 + * When vCPU reset, clear the ESTAT and GINTC registers 908 + * Other CSR registers are cleared with function _kvm_setcsr(). 909 + */ 910 + kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_GINTC, 0); 911 + kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_ESTAT, 0); 912 break; 913 default: 914 ret = -EINVAL;
+1 -1
arch/loongarch/mm/hugetlbpage.c
··· 47 pmd = pmd_offset(pud, addr); 48 } 49 } 50 - return (pte_t *) pmd; 51 } 52 53 uint64_t pmd_to_entrylo(unsigned long pmd_val)
··· 47 pmd = pmd_offset(pud, addr); 48 } 49 } 50 + return pmd_none(pmdp_get(pmd)) ? NULL : (pte_t *) pmd; 51 } 52 53 uint64_t pmd_to_entrylo(unsigned long pmd_val)
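With this fix a caller can treat a NULL return as "no huge mapping present" instead of receiving a pointer to an empty PMD entry. A caller-side sketch, illustrative only and with locking omitted:

```c
/* Illustrative only: probe for a PMD-sized huge mapping at @addr.
 * Appropriate mmap/page-table locking is omitted for brevity. */
#include <linux/hugetlb.h>
#include <linux/mm.h>

static bool example_has_huge_mapping(struct mm_struct *mm, unsigned long addr)
{
	pte_t *ptep = huge_pte_offset(mm, addr, PMD_SIZE);

	/* An empty PMD now yields NULL rather than a pointer to a
	 * pmd_none() entry, so the NULL check alone is enough. */
	return ptep != NULL;
}
```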
-3
arch/loongarch/mm/init.c
··· 65 { 66 unsigned long max_zone_pfns[MAX_NR_ZONES]; 67 68 - #ifdef CONFIG_ZONE_DMA 69 - max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN; 70 - #endif 71 #ifdef CONFIG_ZONE_DMA32 72 max_zone_pfns[ZONE_DMA32] = MAX_DMA32_PFN; 73 #endif
··· 65 { 66 unsigned long max_zone_pfns[MAX_NR_ZONES]; 67 68 #ifdef CONFIG_ZONE_DMA32 69 max_zone_pfns[ZONE_DMA32] = MAX_DMA32_PFN; 70 #endif
+17
arch/openrisc/include/asm/cacheflush.h
··· 23 */ 24 extern void local_dcache_page_flush(struct page *page); 25 extern void local_icache_page_inv(struct page *page); 26 27 /* 28 * Data cache flushing always happen on the local cpu. Instruction cache ··· 40 #define icache_page_inv(page) smp_icache_page_inv(page) 41 extern void smp_icache_page_inv(struct page *page); 42 #endif /* CONFIG_SMP */ 43 44 /* 45 * Synchronizes caches. Whenever a cpu writes executable code to memory, this
··· 23 */ 24 extern void local_dcache_page_flush(struct page *page); 25 extern void local_icache_page_inv(struct page *page); 26 + extern void local_dcache_range_flush(unsigned long start, unsigned long end); 27 + extern void local_dcache_range_inv(unsigned long start, unsigned long end); 28 + extern void local_icache_range_inv(unsigned long start, unsigned long end); 29 30 /* 31 * Data cache flushing always happen on the local cpu. Instruction cache ··· 37 #define icache_page_inv(page) smp_icache_page_inv(page) 38 extern void smp_icache_page_inv(struct page *page); 39 #endif /* CONFIG_SMP */ 40 + 41 + /* 42 + * Even if the actual block size is larger than L1_CACHE_BYTES, paddr 43 + * can be incremented by L1_CACHE_BYTES. When paddr is written to the 44 + * invalidate register, the entire cache line encompassing this address 45 + * is invalidated. Each subsequent reference to the same cache line will 46 + * not affect the invalidation process. 47 + */ 48 + #define local_dcache_block_flush(addr) \ 49 + local_dcache_range_flush(addr, addr + L1_CACHE_BYTES) 50 + #define local_dcache_block_inv(addr) \ 51 + local_dcache_range_inv(addr, addr + L1_CACHE_BYTES) 52 + #define local_icache_block_inv(addr) \ 53 + local_icache_range_inv(addr, addr + L1_CACHE_BYTES) 54 55 /* 56 * Synchronizes caches. Whenever a cpu writes executable code to memory, this
+17 -7
arch/openrisc/include/asm/cpuinfo.h
··· 15 #ifndef __ASM_OPENRISC_CPUINFO_H 16 #define __ASM_OPENRISC_CPUINFO_H 17 18 struct cpuinfo_or1k { 19 u32 clock_frequency; 20 21 - u32 icache_size; 22 - u32 icache_block_size; 23 - u32 icache_ways; 24 - 25 - u32 dcache_size; 26 - u32 dcache_block_size; 27 - u32 dcache_ways; 28 29 u16 coreid; 30 }; 31 32 extern struct cpuinfo_or1k cpuinfo_or1k[NR_CPUS]; 33 extern void setup_cpuinfo(void); 34 35 #endif /* __ASM_OPENRISC_CPUINFO_H */
··· 15 #ifndef __ASM_OPENRISC_CPUINFO_H 16 #define __ASM_OPENRISC_CPUINFO_H 17 18 + #include <asm/spr.h> 19 + #include <asm/spr_defs.h> 20 + 21 + struct cache_desc { 22 + u32 size; 23 + u32 sets; 24 + u32 block_size; 25 + u32 ways; 26 + }; 27 + 28 struct cpuinfo_or1k { 29 u32 clock_frequency; 30 31 + struct cache_desc icache; 32 + struct cache_desc dcache; 33 34 u16 coreid; 35 }; 36 37 extern struct cpuinfo_or1k cpuinfo_or1k[NR_CPUS]; 38 extern void setup_cpuinfo(void); 39 + 40 + /* 41 + * Check if the cache component exists. 42 + */ 43 + extern bool cpu_cache_is_present(const unsigned int cache_type); 44 45 #endif /* __ASM_OPENRISC_CPUINFO_H */
+1 -1
arch/openrisc/kernel/Makefile
··· 7 8 obj-y := head.o setup.o or32_ksyms.o process.o dma.o \ 9 traps.o time.o irq.o entry.o ptrace.o signal.o \ 10 - sys_call_table.o unwinder.o 11 12 obj-$(CONFIG_SMP) += smp.o sync-timer.o 13 obj-$(CONFIG_STACKTRACE) += stacktrace.o
··· 7 8 obj-y := head.o setup.o or32_ksyms.o process.o dma.o \ 9 traps.o time.o irq.o entry.o ptrace.o signal.o \ 10 + sys_call_table.o unwinder.o cacheinfo.o 11 12 obj-$(CONFIG_SMP) += smp.o sync-timer.o 13 obj-$(CONFIG_STACKTRACE) += stacktrace.o
+104
arch/openrisc/kernel/cacheinfo.c
···
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * OpenRISC cacheinfo support 4 + * 5 + * Based on work done for MIPS and LoongArch. All original copyrights 6 + * apply as per the original source declaration. 7 + * 8 + * OpenRISC implementation: 9 + * Copyright (C) 2025 Sahil Siddiq <sahilcdq@proton.me> 10 + */ 11 + 12 + #include <linux/cacheinfo.h> 13 + #include <asm/cpuinfo.h> 14 + #include <asm/spr.h> 15 + #include <asm/spr_defs.h> 16 + 17 + static inline void ci_leaf_init(struct cacheinfo *this_leaf, enum cache_type type, 18 + unsigned int level, struct cache_desc *cache, int cpu) 19 + { 20 + this_leaf->type = type; 21 + this_leaf->level = level; 22 + this_leaf->coherency_line_size = cache->block_size; 23 + this_leaf->number_of_sets = cache->sets; 24 + this_leaf->ways_of_associativity = cache->ways; 25 + this_leaf->size = cache->size; 26 + cpumask_set_cpu(cpu, &this_leaf->shared_cpu_map); 27 + } 28 + 29 + int init_cache_level(unsigned int cpu) 30 + { 31 + struct cpuinfo_or1k *cpuinfo = &cpuinfo_or1k[smp_processor_id()]; 32 + struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu); 33 + int leaves = 0, levels = 0; 34 + unsigned long upr = mfspr(SPR_UPR); 35 + unsigned long iccfgr, dccfgr; 36 + 37 + if (!(upr & SPR_UPR_UP)) { 38 + printk(KERN_INFO 39 + "-- no UPR register... unable to detect configuration\n"); 40 + return -ENOENT; 41 + } 42 + 43 + if (cpu_cache_is_present(SPR_UPR_DCP)) { 44 + dccfgr = mfspr(SPR_DCCFGR); 45 + cpuinfo->dcache.ways = 1 << (dccfgr & SPR_DCCFGR_NCW); 46 + cpuinfo->dcache.sets = 1 << ((dccfgr & SPR_DCCFGR_NCS) >> 3); 47 + cpuinfo->dcache.block_size = 16 << ((dccfgr & SPR_DCCFGR_CBS) >> 7); 48 + cpuinfo->dcache.size = 49 + cpuinfo->dcache.sets * cpuinfo->dcache.ways * cpuinfo->dcache.block_size; 50 + leaves += 1; 51 + printk(KERN_INFO 52 + "-- dcache: %d bytes total, %d bytes/line, %d set(s), %d way(s)\n", 53 + cpuinfo->dcache.size, cpuinfo->dcache.block_size, 54 + cpuinfo->dcache.sets, cpuinfo->dcache.ways); 55 + } else 56 + printk(KERN_INFO "-- dcache disabled\n"); 57 + 58 + if (cpu_cache_is_present(SPR_UPR_ICP)) { 59 + iccfgr = mfspr(SPR_ICCFGR); 60 + cpuinfo->icache.ways = 1 << (iccfgr & SPR_ICCFGR_NCW); 61 + cpuinfo->icache.sets = 1 << ((iccfgr & SPR_ICCFGR_NCS) >> 3); 62 + cpuinfo->icache.block_size = 16 << ((iccfgr & SPR_ICCFGR_CBS) >> 7); 63 + cpuinfo->icache.size = 64 + cpuinfo->icache.sets * cpuinfo->icache.ways * cpuinfo->icache.block_size; 65 + leaves += 1; 66 + printk(KERN_INFO 67 + "-- icache: %d bytes total, %d bytes/line, %d set(s), %d way(s)\n", 68 + cpuinfo->icache.size, cpuinfo->icache.block_size, 69 + cpuinfo->icache.sets, cpuinfo->icache.ways); 70 + } else 71 + printk(KERN_INFO "-- icache disabled\n"); 72 + 73 + if (!leaves) 74 + return -ENOENT; 75 + 76 + levels = 1; 77 + 78 + this_cpu_ci->num_leaves = leaves; 79 + this_cpu_ci->num_levels = levels; 80 + 81 + return 0; 82 + } 83 + 84 + int populate_cache_leaves(unsigned int cpu) 85 + { 86 + struct cpuinfo_or1k *cpuinfo = &cpuinfo_or1k[smp_processor_id()]; 87 + struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu); 88 + struct cacheinfo *this_leaf = this_cpu_ci->info_list; 89 + int level = 1; 90 + 91 + if (cpu_cache_is_present(SPR_UPR_DCP)) { 92 + ci_leaf_init(this_leaf, CACHE_TYPE_DATA, level, &cpuinfo->dcache, cpu); 93 + this_leaf->attributes = ((mfspr(SPR_DCCFGR) & SPR_DCCFGR_CWS) >> 8) ? 94 + CACHE_WRITE_BACK : CACHE_WRITE_THROUGH; 95 + this_leaf++; 96 + } 97 + 98 + if (cpu_cache_is_present(SPR_UPR_ICP)) 99 + ci_leaf_init(this_leaf, CACHE_TYPE_INST, level, &cpuinfo->icache, cpu); 100 + 101 + this_cpu_ci->cpu_map_populated = true; 102 + 103 + return 0; 104 + }
+4 -14
arch/openrisc/kernel/dma.c
··· 17 #include <linux/pagewalk.h> 18 19 #include <asm/cpuinfo.h> 20 #include <asm/spr_defs.h> 21 #include <asm/tlbflush.h> 22 ··· 25 page_set_nocache(pte_t *pte, unsigned long addr, 26 unsigned long next, struct mm_walk *walk) 27 { 28 - unsigned long cl; 29 - struct cpuinfo_or1k *cpuinfo = &cpuinfo_or1k[smp_processor_id()]; 30 - 31 pte_val(*pte) |= _PAGE_CI; 32 33 /* ··· 34 flush_tlb_kernel_range(addr, addr + PAGE_SIZE); 35 36 /* Flush page out of dcache */ 37 - for (cl = __pa(addr); cl < __pa(next); cl += cpuinfo->dcache_block_size) 38 - mtspr(SPR_DCBFR, cl); 39 40 return 0; 41 } ··· 95 void arch_sync_dma_for_device(phys_addr_t addr, size_t size, 96 enum dma_data_direction dir) 97 { 98 - unsigned long cl; 99 - struct cpuinfo_or1k *cpuinfo = &cpuinfo_or1k[smp_processor_id()]; 100 - 101 switch (dir) { 102 case DMA_TO_DEVICE: 103 /* Flush the dcache for the requested range */ 104 - for (cl = addr; cl < addr + size; 105 - cl += cpuinfo->dcache_block_size) 106 - mtspr(SPR_DCBFR, cl); 107 break; 108 case DMA_FROM_DEVICE: 109 /* Invalidate the dcache for the requested range */ 110 - for (cl = addr; cl < addr + size; 111 - cl += cpuinfo->dcache_block_size) 112 - mtspr(SPR_DCBIR, cl); 113 break; 114 default: 115 /*
··· 17 #include <linux/pagewalk.h> 18 19 #include <asm/cpuinfo.h> 20 + #include <asm/cacheflush.h> 21 #include <asm/spr_defs.h> 22 #include <asm/tlbflush.h> 23 ··· 24 page_set_nocache(pte_t *pte, unsigned long addr, 25 unsigned long next, struct mm_walk *walk) 26 { 27 pte_val(*pte) |= _PAGE_CI; 28 29 /* ··· 36 flush_tlb_kernel_range(addr, addr + PAGE_SIZE); 37 38 /* Flush page out of dcache */ 39 + local_dcache_range_flush(__pa(addr), __pa(next)); 40 41 return 0; 42 } ··· 98 void arch_sync_dma_for_device(phys_addr_t addr, size_t size, 99 enum dma_data_direction dir) 100 { 101 switch (dir) { 102 case DMA_TO_DEVICE: 103 /* Flush the dcache for the requested range */ 104 + local_dcache_range_flush(addr, addr + size); 105 break; 106 case DMA_FROM_DEVICE: 107 /* Invalidate the dcache for the requested range */ 108 + local_dcache_range_inv(addr, addr + size); 109 break; 110 default: 111 /*
+3 -42
arch/openrisc/kernel/setup.c
··· 113 return; 114 } 115 116 - if (upr & SPR_UPR_DCP) 117 - printk(KERN_INFO 118 - "-- dcache: %4d bytes total, %2d bytes/line, %d way(s)\n", 119 - cpuinfo->dcache_size, cpuinfo->dcache_block_size, 120 - cpuinfo->dcache_ways); 121 - else 122 - printk(KERN_INFO "-- dcache disabled\n"); 123 - if (upr & SPR_UPR_ICP) 124 - printk(KERN_INFO 125 - "-- icache: %4d bytes total, %2d bytes/line, %d way(s)\n", 126 - cpuinfo->icache_size, cpuinfo->icache_block_size, 127 - cpuinfo->icache_ways); 128 - else 129 - printk(KERN_INFO "-- icache disabled\n"); 130 - 131 if (upr & SPR_UPR_DMP) 132 printk(KERN_INFO "-- dmmu: %4d entries, %lu way(s)\n", 133 1 << ((mfspr(SPR_DMMUCFGR) & SPR_DMMUCFGR_NTS) >> 2), ··· 140 void __init setup_cpuinfo(void) 141 { 142 struct device_node *cpu; 143 - unsigned long iccfgr, dccfgr; 144 - unsigned long cache_set_size; 145 int cpu_id = smp_processor_id(); 146 struct cpuinfo_or1k *cpuinfo = &cpuinfo_or1k[cpu_id]; 147 148 cpu = of_get_cpu_node(cpu_id, NULL); 149 if (!cpu) 150 panic("Couldn't find CPU%d in device tree...\n", cpu_id); 151 - 152 - iccfgr = mfspr(SPR_ICCFGR); 153 - cpuinfo->icache_ways = 1 << (iccfgr & SPR_ICCFGR_NCW); 154 - cache_set_size = 1 << ((iccfgr & SPR_ICCFGR_NCS) >> 3); 155 - cpuinfo->icache_block_size = 16 << ((iccfgr & SPR_ICCFGR_CBS) >> 7); 156 - cpuinfo->icache_size = 157 - cache_set_size * cpuinfo->icache_ways * cpuinfo->icache_block_size; 158 - 159 - dccfgr = mfspr(SPR_DCCFGR); 160 - cpuinfo->dcache_ways = 1 << (dccfgr & SPR_DCCFGR_NCW); 161 - cache_set_size = 1 << ((dccfgr & SPR_DCCFGR_NCS) >> 3); 162 - cpuinfo->dcache_block_size = 16 << ((dccfgr & SPR_DCCFGR_CBS) >> 7); 163 - cpuinfo->dcache_size = 164 - cache_set_size * cpuinfo->dcache_ways * cpuinfo->dcache_block_size; 165 166 if (of_property_read_u32(cpu, "clock-frequency", 167 &cpuinfo->clock_frequency)) { ··· 263 unsigned int vr, cpucfgr; 264 unsigned int avr; 265 unsigned int version; 266 struct cpuinfo_or1k *cpuinfo = v; 267 268 vr = mfspr(SPR_VR); 269 cpucfgr = mfspr(SPR_CPUCFGR); 270 271 - #ifdef CONFIG_SMP 272 - seq_printf(m, "processor\t\t: %d\n", cpuinfo->coreid); 273 - #endif 274 if (vr & SPR_VR_UVRP) { 275 vr = mfspr(SPR_VR2); 276 version = vr & SPR_VR2_VER; ··· 289 seq_printf(m, "revision\t\t: %d\n", vr & SPR_VR_REV); 290 } 291 seq_printf(m, "frequency\t\t: %ld\n", loops_per_jiffy * HZ); 292 - seq_printf(m, "dcache size\t\t: %d bytes\n", cpuinfo->dcache_size); 293 - seq_printf(m, "dcache block size\t: %d bytes\n", 294 - cpuinfo->dcache_block_size); 295 - seq_printf(m, "dcache ways\t\t: %d\n", cpuinfo->dcache_ways); 296 - seq_printf(m, "icache size\t\t: %d bytes\n", cpuinfo->icache_size); 297 - seq_printf(m, "icache block size\t: %d bytes\n", 298 - cpuinfo->icache_block_size); 299 - seq_printf(m, "icache ways\t\t: %d\n", cpuinfo->icache_ways); 300 seq_printf(m, "immu\t\t\t: %d entries, %lu ways\n", 301 1 << ((mfspr(SPR_DMMUCFGR) & SPR_DMMUCFGR_NTS) >> 2), 302 1 + (mfspr(SPR_DMMUCFGR) & SPR_DMMUCFGR_NTW));
··· 113 return; 114 } 115 116 if (upr & SPR_UPR_DMP) 117 printk(KERN_INFO "-- dmmu: %4d entries, %lu way(s)\n", 118 1 << ((mfspr(SPR_DMMUCFGR) & SPR_DMMUCFGR_NTS) >> 2), ··· 155 void __init setup_cpuinfo(void) 156 { 157 struct device_node *cpu; 158 int cpu_id = smp_processor_id(); 159 struct cpuinfo_or1k *cpuinfo = &cpuinfo_or1k[cpu_id]; 160 161 cpu = of_get_cpu_node(cpu_id, NULL); 162 if (!cpu) 163 panic("Couldn't find CPU%d in device tree...\n", cpu_id); 164 165 if (of_property_read_u32(cpu, "clock-frequency", 166 &cpuinfo->clock_frequency)) { ··· 294 unsigned int vr, cpucfgr; 295 unsigned int avr; 296 unsigned int version; 297 + #ifdef CONFIG_SMP 298 struct cpuinfo_or1k *cpuinfo = v; 299 + seq_printf(m, "processor\t\t: %d\n", cpuinfo->coreid); 300 + #endif 301 302 vr = mfspr(SPR_VR); 303 cpucfgr = mfspr(SPR_CPUCFGR); 304 305 if (vr & SPR_VR_UVRP) { 306 vr = mfspr(SPR_VR2); 307 version = vr & SPR_VR2_VER; ··· 320 seq_printf(m, "revision\t\t: %d\n", vr & SPR_VR_REV); 321 } 322 seq_printf(m, "frequency\t\t: %ld\n", loops_per_jiffy * HZ); 323 seq_printf(m, "immu\t\t\t: %d entries, %lu ways\n", 324 1 << ((mfspr(SPR_DMMUCFGR) & SPR_DMMUCFGR_NTS) >> 2), 325 1 + (mfspr(SPR_DMMUCFGR) & SPR_DMMUCFGR_NTW));
+47 -9
arch/openrisc/mm/cache.c
··· 14 #include <asm/spr_defs.h> 15 #include <asm/cache.h> 16 #include <asm/cacheflush.h> 17 #include <asm/tlbflush.h> 18 19 - static __always_inline void cache_loop(struct page *page, const unsigned int reg) 20 { 21 unsigned long paddr = page_to_pfn(page) << PAGE_SHIFT; 22 - unsigned long line = paddr & ~(L1_CACHE_BYTES - 1); 23 24 - while (line < paddr + PAGE_SIZE) { 25 - mtspr(reg, line); 26 - line += L1_CACHE_BYTES; 27 - } 28 } 29 30 void local_dcache_page_flush(struct page *page) 31 { 32 - cache_loop(page, SPR_DCBFR); 33 } 34 EXPORT_SYMBOL(local_dcache_page_flush); 35 36 void local_icache_page_inv(struct page *page) 37 { 38 - cache_loop(page, SPR_ICBIR); 39 } 40 EXPORT_SYMBOL(local_icache_page_inv); 41 42 void update_cache(struct vm_area_struct *vma, unsigned long address, 43 pte_t *pte) ··· 97 sync_icache_dcache(folio_page(folio, nr)); 98 } 99 } 100 -
··· 14 #include <asm/spr_defs.h> 15 #include <asm/cache.h> 16 #include <asm/cacheflush.h> 17 + #include <asm/cpuinfo.h> 18 #include <asm/tlbflush.h> 19 20 + /* 21 + * Check if the cache component exists. 22 + */ 23 + bool cpu_cache_is_present(const unsigned int cache_type) 24 + { 25 + unsigned long upr = mfspr(SPR_UPR); 26 + unsigned long mask = SPR_UPR_UP | cache_type; 27 + 28 + return !((upr & mask) ^ mask); 29 + } 30 + 31 + static __always_inline void cache_loop(unsigned long paddr, unsigned long end, 32 + const unsigned short reg, const unsigned int cache_type) 33 + { 34 + if (!cpu_cache_is_present(cache_type)) 35 + return; 36 + 37 + while (paddr < end) { 38 + mtspr(reg, paddr); 39 + paddr += L1_CACHE_BYTES; 40 + } 41 + } 42 + 43 + static __always_inline void cache_loop_page(struct page *page, const unsigned short reg, 44 + const unsigned int cache_type) 45 { 46 unsigned long paddr = page_to_pfn(page) << PAGE_SHIFT; 47 + unsigned long end = paddr + PAGE_SIZE; 48 49 + paddr &= ~(L1_CACHE_BYTES - 1); 50 + 51 + cache_loop(paddr, end, reg, cache_type); 52 } 53 54 void local_dcache_page_flush(struct page *page) 55 { 56 + cache_loop_page(page, SPR_DCBFR, SPR_UPR_DCP); 57 } 58 EXPORT_SYMBOL(local_dcache_page_flush); 59 60 void local_icache_page_inv(struct page *page) 61 { 62 + cache_loop_page(page, SPR_ICBIR, SPR_UPR_ICP); 63 } 64 EXPORT_SYMBOL(local_icache_page_inv); 65 + 66 + void local_dcache_range_flush(unsigned long start, unsigned long end) 67 + { 68 + cache_loop(start, end, SPR_DCBFR, SPR_UPR_DCP); 69 + } 70 + 71 + void local_dcache_range_inv(unsigned long start, unsigned long end) 72 + { 73 + cache_loop(start, end, SPR_DCBIR, SPR_UPR_DCP); 74 + } 75 + 76 + void local_icache_range_inv(unsigned long start, unsigned long end) 77 + { 78 + cache_loop(start, end, SPR_ICBIR, SPR_UPR_ICP); 79 + } 80 81 void update_cache(struct vm_area_struct *vma, unsigned long address, 82 pte_t *pte) ··· 58 sync_icache_dcache(folio_page(folio, nr)); 59 } 60 }
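The mask test in cpu_cache_is_present() is equivalent to requiring that both the UPR "present" bit and the cache-specific bit are set; a more explicit rendering of the same check, for illustration only:

```c
/* Equivalent, more explicit form of the cpu_cache_is_present() test:
 * the component exists only when SPR_UPR_UP and the cache-specific
 * bit are both set in the Unit Present Register. */
#include <linux/types.h>
#include <asm/spr.h>
#include <asm/spr_defs.h>

static bool cpu_cache_is_present_explicit(const unsigned int cache_type)
{
	unsigned long upr = mfspr(SPR_UPR);
	unsigned long mask = SPR_UPR_UP | cache_type;

	return (upr & mask) == mask;
}
```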
+3 -2
arch/openrisc/mm/init.c
··· 35 #include <asm/fixmap.h> 36 #include <asm/tlbflush.h> 37 #include <asm/sections.h> 38 39 int mem_init_done; 40 ··· 177 barrier(); 178 179 /* Invalidate instruction caches after code modification */ 180 - mtspr(SPR_ICBIR, 0x900); 181 - mtspr(SPR_ICBIR, 0xa00); 182 183 /* New TLB miss handlers and kernel page tables are in now place. 184 * Make sure that page flags get updated for all pages in TLB by
··· 35 #include <asm/fixmap.h> 36 #include <asm/tlbflush.h> 37 #include <asm/sections.h> 38 + #include <asm/cacheflush.h> 39 40 int mem_init_done; 41 ··· 176 barrier(); 177 178 /* Invalidate instruction caches after code modification */ 179 + local_icache_block_inv(0x900); 180 + local_icache_block_inv(0xa00); 181 182 /* New TLB miss handlers and kernel page tables are in now place. 183 * Make sure that page flags get updated for all pages in TLB by
+10 -5
arch/riscv/include/asm/cacheflush.h
··· 34 flush_dcache_folio(page_folio(page)); 35 } 36 37 - /* 38 - * RISC-V doesn't have an instruction to flush parts of the instruction cache, 39 - * so instead we just flush the whole thing. 40 - */ 41 - #define flush_icache_range(start, end) flush_icache_all() 42 #define flush_icache_user_page(vma, pg, addr, len) \ 43 do { \ 44 if (vma->vm_flags & VM_EXEC) \ ··· 72 void flush_icache_mm(struct mm_struct *mm, bool local); 73 74 #endif /* CONFIG_SMP */ 75 76 extern unsigned int riscv_cbom_block_size; 77 extern unsigned int riscv_cboz_block_size;
··· 34 flush_dcache_folio(page_folio(page)); 35 } 36 37 #define flush_icache_user_page(vma, pg, addr, len) \ 38 do { \ 39 if (vma->vm_flags & VM_EXEC) \ ··· 77 void flush_icache_mm(struct mm_struct *mm, bool local); 78 79 #endif /* CONFIG_SMP */ 80 + 81 + /* 82 + * RISC-V doesn't have an instruction to flush parts of the instruction cache, 83 + * so instead we just flush the whole thing. 84 + */ 85 + #define flush_icache_range flush_icache_range 86 + static inline void flush_icache_range(unsigned long start, unsigned long end) 87 + { 88 + flush_icache_all(); 89 + } 90 91 extern unsigned int riscv_cbom_block_size; 92 extern unsigned int riscv_cboz_block_size;
+2 -2
arch/riscv/kernel/Makefile
··· 9 CFLAGS_REMOVE_sbi.o = $(CC_FLAGS_FTRACE) 10 CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE) 11 endif 12 - CFLAGS_syscall_table.o += $(call cc-option,-Wno-override-init,) 13 - CFLAGS_compat_syscall_table.o += $(call cc-option,-Wno-override-init,) 14 15 ifdef CONFIG_KEXEC_CORE 16 AFLAGS_kexec_relocate.o := -mcmodel=medany $(call cc-option,-mno-relax)
··· 9 CFLAGS_REMOVE_sbi.o = $(CC_FLAGS_FTRACE) 10 CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE) 11 endif 12 + CFLAGS_syscall_table.o += $(call cc-disable-warning, override-init) 13 + CFLAGS_compat_syscall_table.o += $(call cc-disable-warning, override-init) 14 15 ifdef CONFIG_KEXEC_CORE 16 AFLAGS_kexec_relocate.o := -mcmodel=medany $(call cc-option,-mno-relax)
+2 -8
arch/riscv/kernel/probes/uprobes.c
··· 167 /* Initialize the slot */ 168 void *kaddr = kmap_atomic(page); 169 void *dst = kaddr + (vaddr & ~PAGE_MASK); 170 171 memcpy(dst, src, len); 172 ··· 177 *(uprobe_opcode_t *)dst = __BUG_INSN_32; 178 } 179 180 kunmap_atomic(kaddr); 181 - 182 - /* 183 - * We probably need flush_icache_user_page() but it needs vma. 184 - * This should work on most of architectures by default. If 185 - * architecture needs to do something different it can define 186 - * its own version of the function. 187 - */ 188 - flush_dcache_page(page); 189 }
··· 167 /* Initialize the slot */ 168 void *kaddr = kmap_atomic(page); 169 void *dst = kaddr + (vaddr & ~PAGE_MASK); 170 + unsigned long start = (unsigned long)dst; 171 172 memcpy(dst, src, len); 173 ··· 176 *(uprobe_opcode_t *)dst = __BUG_INSN_32; 177 } 178 179 + flush_icache_range(start, start + len); 180 kunmap_atomic(kaddr); 181 }
+1 -1
arch/x86/boot/Makefile
··· 59 $(obj)/bzImage: asflags-y := $(SVGA_MODE) 60 61 quiet_cmd_image = BUILD $@ 62 - cmd_image = cp $< $@; truncate -s %4K $@; cat $(obj)/vmlinux.bin >>$@ 63 64 $(obj)/bzImage: $(obj)/setup.bin $(obj)/vmlinux.bin FORCE 65 $(call if_changed,image)
··· 59 $(obj)/bzImage: asflags-y := $(SVGA_MODE) 60 61 quiet_cmd_image = BUILD $@ 62 + cmd_image = (dd if=$< bs=4k conv=sync status=none; cat $(filter-out $<,$(real-prereqs))) >$@ 63 64 $(obj)/bzImage: $(obj)/setup.bin $(obj)/vmlinux.bin FORCE 65 $(call if_changed,image)
+1 -1
arch/x86/events/core.c
··· 629 if (event->attr.type == event->pmu->type) 630 event->hw.config |= x86_pmu_get_event_config(event); 631 632 - if (!event->attr.freq && x86_pmu.limit_period) { 633 s64 left = event->attr.sample_period; 634 x86_pmu.limit_period(event, &left); 635 if (left > event->attr.sample_period)
··· 629 if (event->attr.type == event->pmu->type) 630 event->hw.config |= x86_pmu_get_event_config(event); 631 632 + if (is_sampling_event(event) && !event->attr.freq && x86_pmu.limit_period) { 633 s64 left = event->attr.sample_period; 634 x86_pmu.limit_period(event, &left); 635 if (left > event->attr.sample_period)
+6
arch/x86/include/asm/kvm_host.h
··· 35 #include <asm/mtrr.h> 36 #include <asm/msr-index.h> 37 #include <asm/asm.h> 38 #include <asm/kvm_page_track.h> 39 #include <asm/kvm_vcpu_regs.h> 40 #include <asm/reboot.h> ··· 2423 * remaining 31 lower bits must be 0 to preserve ABI. 2424 */ 2425 #define KVM_EXIT_HYPERCALL_MBZ GENMASK_ULL(31, 1) 2426 2427 #endif /* _ASM_X86_KVM_HOST_H */
··· 35 #include <asm/mtrr.h> 36 #include <asm/msr-index.h> 37 #include <asm/asm.h> 38 + #include <asm/irq_remapping.h> 39 #include <asm/kvm_page_track.h> 40 #include <asm/kvm_vcpu_regs.h> 41 #include <asm/reboot.h> ··· 2422 * remaining 31 lower bits must be 0 to preserve ABI. 2423 */ 2424 #define KVM_EXIT_HYPERCALL_MBZ GENMASK_ULL(31, 1) 2425 + 2426 + static inline bool kvm_arch_has_irq_bypass(void) 2427 + { 2428 + return enable_apicv && irq_remapping_cap(IRQ_POSTING_CAP); 2429 + } 2430 2431 #endif /* _ASM_X86_KVM_HOST_H */
+11 -8
arch/x86/include/asm/pgalloc.h
··· 6 #include <linux/mm.h> /* for struct page */ 7 #include <linux/pagemap.h> 8 9 #define __HAVE_ARCH_PTE_ALLOC_ONE 10 #define __HAVE_ARCH_PGD_FREE 11 #include <asm-generic/pgalloc.h> ··· 31 static inline void paravirt_release_p4d(unsigned long pfn) {} 32 #endif 33 34 - #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION 35 /* 36 - * Instead of one PGD, we acquire two PGDs. Being order-1, it is 37 - * both 8k in size and 8k-aligned. That lets us just flip bit 12 38 - * in a pointer to swap between the two 4k halves. 39 */ 40 - #define PGD_ALLOCATION_ORDER 1 41 - #else 42 - #define PGD_ALLOCATION_ORDER 0 43 - #endif 44 45 /* 46 * Allocate and free page tables.
··· 6 #include <linux/mm.h> /* for struct page */ 7 #include <linux/pagemap.h> 8 9 + #include <asm/cpufeature.h> 10 + 11 #define __HAVE_ARCH_PTE_ALLOC_ONE 12 #define __HAVE_ARCH_PGD_FREE 13 #include <asm-generic/pgalloc.h> ··· 29 static inline void paravirt_release_p4d(unsigned long pfn) {} 30 #endif 31 32 /* 33 + * In case of Page Table Isolation active, we acquire two PGDs instead of one. 34 + * Being order-1, it is both 8k in size and 8k-aligned. That lets us just 35 + * flip bit 12 in a pointer to swap between the two 4k halves. 36 */ 37 + static inline unsigned int pgd_allocation_order(void) 38 + { 39 + if (cpu_feature_enabled(X86_FEATURE_PTI)) 40 + return 1; 41 + return 0; 42 + } 43 44 /* 45 * Allocate and free page tables.
+8
arch/x86/kernel/e820.c
··· 1299 memblock_add(entry->addr, entry->size); 1300 } 1301 1302 /* Throw away partial pages: */ 1303 memblock_trim_memory(PAGE_SIZE); 1304
··· 1299 memblock_add(entry->addr, entry->size); 1300 } 1301 1302 + /* 1303 + * 32-bit systems are limited to 4GB of memory even with HIGHMEM and 1304 + * to even less without it. 1305 + * Discard memory after max_pfn - the actual limit detected at runtime. 1306 + */ 1307 + if (IS_ENABLED(CONFIG_X86_32)) 1308 + memblock_remove(PFN_PHYS(max_pfn), -1); 1309 + 1310 /* Throw away partial pages: */ 1311 memblock_trim_memory(PAGE_SIZE); 1312
+2 -2
arch/x86/kernel/machine_kexec_32.c
··· 42 43 static void machine_kexec_free_page_tables(struct kimage *image) 44 { 45 - free_pages((unsigned long)image->arch.pgd, PGD_ALLOCATION_ORDER); 46 image->arch.pgd = NULL; 47 #ifdef CONFIG_X86_PAE 48 free_page((unsigned long)image->arch.pmd0); ··· 59 static int machine_kexec_alloc_page_tables(struct kimage *image) 60 { 61 image->arch.pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 62 - PGD_ALLOCATION_ORDER); 63 #ifdef CONFIG_X86_PAE 64 image->arch.pmd0 = (pmd_t *)get_zeroed_page(GFP_KERNEL); 65 image->arch.pmd1 = (pmd_t *)get_zeroed_page(GFP_KERNEL);
··· 42 43 static void machine_kexec_free_page_tables(struct kimage *image) 44 { 45 + free_pages((unsigned long)image->arch.pgd, pgd_allocation_order()); 46 image->arch.pgd = NULL; 47 #ifdef CONFIG_X86_PAE 48 free_page((unsigned long)image->arch.pmd0); ··· 59 static int machine_kexec_alloc_page_tables(struct kimage *image) 60 { 61 image->arch.pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 62 + pgd_allocation_order()); 63 #ifdef CONFIG_X86_PAE 64 image->arch.pmd0 = (pmd_t *)get_zeroed_page(GFP_KERNEL); 65 image->arch.pmd1 = (pmd_t *)get_zeroed_page(GFP_KERNEL);
+37 -31
arch/x86/kvm/svm/avic.c
··· 796 struct amd_svm_iommu_ir *ir; 797 u64 entry; 798 799 /** 800 * In some cases, the existing irte is updated and re-set, 801 * so we need to check here if it's already been * added 802 * to the ir_list. 803 */ 804 - if (pi->ir_data && (pi->prev_ga_tag != 0)) { 805 struct kvm *kvm = svm->vcpu.kvm; 806 u32 vcpu_id = AVIC_GATAG_TO_VCPUID(pi->prev_ga_tag); 807 struct kvm_vcpu *prev_vcpu = kvm_get_vcpu_by_id(kvm, vcpu_id); ··· 823 * Allocating new amd_iommu_pi_data, which will get 824 * add to the per-vcpu ir_list. 825 */ 826 - ir = kzalloc(sizeof(struct amd_svm_iommu_ir), GFP_KERNEL_ACCOUNT); 827 if (!ir) { 828 ret = -ENOMEM; 829 goto out; ··· 899 { 900 struct kvm_kernel_irq_routing_entry *e; 901 struct kvm_irq_routing_table *irq_rt; 902 int idx, ret = 0; 903 904 - if (!kvm_arch_has_assigned_device(kvm) || 905 - !irq_remapping_cap(IRQ_POSTING_CAP)) 906 return 0; 907 908 pr_debug("SVM: %s: host_irq=%#x, guest_irq=%#x, set=%#x\n", ··· 936 kvm_vcpu_apicv_active(&svm->vcpu)) { 937 struct amd_iommu_pi_data pi; 938 939 /* Try to enable guest_mode in IRTE */ 940 pi.base = __sme_set(page_to_phys(svm->avic_backing_page) & 941 AVIC_HPA_MASK); ··· 956 */ 957 if (!ret && pi.is_guest_mode) 958 svm_ir_list_add(svm, &pi); 959 - } else { 960 - /* Use legacy mode in IRTE */ 961 - struct amd_iommu_pi_data pi; 962 - 963 - /** 964 - * Here, pi is used to: 965 - * - Tell IOMMU to use legacy mode for this interrupt. 966 - * - Retrieve ga_tag of prior interrupt remapping data. 967 - */ 968 - pi.prev_ga_tag = 0; 969 - pi.is_guest_mode = false; 970 - ret = irq_set_vcpu_affinity(host_irq, &pi); 971 - 972 - /** 973 - * Check if the posted interrupt was previously 974 - * setup with the guest_mode by checking if the ga_tag 975 - * was cached. If so, we need to clean up the per-vcpu 976 - * ir_list. 977 - */ 978 - if (!ret && pi.prev_ga_tag) { 979 - int id = AVIC_GATAG_TO_VCPUID(pi.prev_ga_tag); 980 - struct kvm_vcpu *vcpu; 981 - 982 - vcpu = kvm_get_vcpu_by_id(kvm, id); 983 - if (vcpu) 984 - svm_ir_list_del(to_svm(vcpu), &pi); 985 - } 986 } 987 988 if (!ret && svm) { ··· 971 } 972 973 ret = 0; 974 out: 975 srcu_read_unlock(&kvm->irq_srcu, idx); 976 return ret;
··· 796 struct amd_svm_iommu_ir *ir; 797 u64 entry; 798 799 + if (WARN_ON_ONCE(!pi->ir_data)) 800 + return -EINVAL; 801 + 802 /** 803 * In some cases, the existing irte is updated and re-set, 804 * so we need to check here if it's already been * added 805 * to the ir_list. 806 */ 807 + if (pi->prev_ga_tag) { 808 struct kvm *kvm = svm->vcpu.kvm; 809 u32 vcpu_id = AVIC_GATAG_TO_VCPUID(pi->prev_ga_tag); 810 struct kvm_vcpu *prev_vcpu = kvm_get_vcpu_by_id(kvm, vcpu_id); ··· 820 * Allocating new amd_iommu_pi_data, which will get 821 * add to the per-vcpu ir_list. 822 */ 823 + ir = kzalloc(sizeof(struct amd_svm_iommu_ir), GFP_ATOMIC | __GFP_ACCOUNT); 824 if (!ir) { 825 ret = -ENOMEM; 826 goto out; ··· 896 { 897 struct kvm_kernel_irq_routing_entry *e; 898 struct kvm_irq_routing_table *irq_rt; 899 + bool enable_remapped_mode = true; 900 int idx, ret = 0; 901 902 + if (!kvm_arch_has_assigned_device(kvm) || !kvm_arch_has_irq_bypass()) 903 return 0; 904 905 pr_debug("SVM: %s: host_irq=%#x, guest_irq=%#x, set=%#x\n", ··· 933 kvm_vcpu_apicv_active(&svm->vcpu)) { 934 struct amd_iommu_pi_data pi; 935 936 + enable_remapped_mode = false; 937 + 938 /* Try to enable guest_mode in IRTE */ 939 pi.base = __sme_set(page_to_phys(svm->avic_backing_page) & 940 AVIC_HPA_MASK); ··· 951 */ 952 if (!ret && pi.is_guest_mode) 953 svm_ir_list_add(svm, &pi); 954 } 955 956 if (!ret && svm) { ··· 993 } 994 995 ret = 0; 996 + if (enable_remapped_mode) { 997 + /* Use legacy mode in IRTE */ 998 + struct amd_iommu_pi_data pi; 999 + 1000 + /** 1001 + * Here, pi is used to: 1002 + * - Tell IOMMU to use legacy mode for this interrupt. 1003 + * - Retrieve ga_tag of prior interrupt remapping data. 1004 + */ 1005 + pi.prev_ga_tag = 0; 1006 + pi.is_guest_mode = false; 1007 + ret = irq_set_vcpu_affinity(host_irq, &pi); 1008 + 1009 + /** 1010 + * Check if the posted interrupt was previously 1011 + * setup with the guest_mode by checking if the ga_tag 1012 + * was cached. If so, we need to clean up the per-vcpu 1013 + * ir_list. 1014 + */ 1015 + if (!ret && pi.prev_ga_tag) { 1016 + int id = AVIC_GATAG_TO_VCPUID(pi.prev_ga_tag); 1017 + struct kvm_vcpu *vcpu; 1018 + 1019 + vcpu = kvm_get_vcpu_by_id(kvm, id); 1020 + if (vcpu) 1021 + svm_ir_list_del(to_svm(vcpu), &pi); 1022 + } 1023 + } 1024 out: 1025 srcu_read_unlock(&kvm->irq_srcu, idx); 1026 return ret;
+10 -3
arch/x86/kvm/trace.h
··· 11 #undef TRACE_SYSTEM 12 #define TRACE_SYSTEM kvm 13 14 /* 15 * Tracepoint for guest mode entry. 16 */ ··· 35 36 TP_fast_assign( 37 __entry->vcpu_id = vcpu->vcpu_id; 38 - __entry->rip = kvm_rip_read(vcpu); 39 __entry->immediate_exit = force_immediate_exit; 40 41 kvm_x86_call(get_entry_info)(vcpu, &__entry->intr_info, ··· 326 ), \ 327 \ 328 TP_fast_assign( \ 329 - __entry->guest_rip = kvm_rip_read(vcpu); \ 330 __entry->isa = isa; \ 331 __entry->vcpu_id = vcpu->vcpu_id; \ 332 __entry->requests = READ_ONCE(vcpu->requests); \ ··· 430 431 TP_fast_assign( 432 __entry->vcpu_id = vcpu->vcpu_id; 433 - __entry->guest_rip = kvm_rip_read(vcpu); 434 __entry->fault_address = fault_address; 435 __entry->error_code = error_code; 436 ),
··· 11 #undef TRACE_SYSTEM 12 #define TRACE_SYSTEM kvm 13 14 + #ifdef CREATE_TRACE_POINTS 15 + #define tracing_kvm_rip_read(vcpu) ({ \ 16 + typeof(vcpu) __vcpu = vcpu; \ 17 + __vcpu->arch.guest_state_protected ? 0 : kvm_rip_read(__vcpu); \ 18 + }) 19 + #endif 20 + 21 /* 22 * Tracepoint for guest mode entry. 23 */ ··· 28 29 TP_fast_assign( 30 __entry->vcpu_id = vcpu->vcpu_id; 31 + __entry->rip = tracing_kvm_rip_read(vcpu); 32 __entry->immediate_exit = force_immediate_exit; 33 34 kvm_x86_call(get_entry_info)(vcpu, &__entry->intr_info, ··· 319 ), \ 320 \ 321 TP_fast_assign( \ 322 + __entry->guest_rip = tracing_kvm_rip_read(vcpu); \ 323 __entry->isa = isa; \ 324 __entry->vcpu_id = vcpu->vcpu_id; \ 325 __entry->requests = READ_ONCE(vcpu->requests); \ ··· 423 424 TP_fast_assign( 425 __entry->vcpu_id = vcpu->vcpu_id; 426 + __entry->guest_rip = tracing_kvm_rip_read(vcpu); 427 __entry->fault_address = fault_address; 428 __entry->error_code = error_code; 429 ),
+10 -18
arch/x86/kvm/vmx/posted_intr.c
··· 297 { 298 struct kvm_kernel_irq_routing_entry *e; 299 struct kvm_irq_routing_table *irq_rt; 300 struct kvm_lapic_irq irq; 301 struct kvm_vcpu *vcpu; 302 struct vcpu_data vcpu_info; ··· 336 337 kvm_set_msi_irq(kvm, e, &irq); 338 if (!kvm_intr_is_single_vcpu(kvm, &irq, &vcpu) || 339 - !kvm_irq_is_postable(&irq)) { 340 - /* 341 - * Make sure the IRTE is in remapped mode if 342 - * we don't handle it in posted mode. 343 - */ 344 - ret = irq_set_vcpu_affinity(host_irq, NULL); 345 - if (ret < 0) { 346 - printk(KERN_INFO 347 - "failed to back to remapped mode, irq: %u\n", 348 - host_irq); 349 - goto out; 350 - } 351 - 352 continue; 353 - } 354 355 vcpu_info.pi_desc_addr = __pa(vcpu_to_pi_desc(vcpu)); 356 vcpu_info.vector = irq.vector; ··· 345 trace_kvm_pi_irte_update(host_irq, vcpu->vcpu_id, e->gsi, 346 vcpu_info.vector, vcpu_info.pi_desc_addr, set); 347 348 - if (set) 349 - ret = irq_set_vcpu_affinity(host_irq, &vcpu_info); 350 - else 351 - ret = irq_set_vcpu_affinity(host_irq, NULL); 352 353 if (ret < 0) { 354 printk(KERN_INFO "%s: failed to update PI IRTE\n", 355 __func__); 356 goto out; 357 } 358 } 359 360 ret = 0; 361 out:
··· 297 { 298 struct kvm_kernel_irq_routing_entry *e; 299 struct kvm_irq_routing_table *irq_rt; 300 + bool enable_remapped_mode = true; 301 struct kvm_lapic_irq irq; 302 struct kvm_vcpu *vcpu; 303 struct vcpu_data vcpu_info; ··· 335 336 kvm_set_msi_irq(kvm, e, &irq); 337 if (!kvm_intr_is_single_vcpu(kvm, &irq, &vcpu) || 338 + !kvm_irq_is_postable(&irq)) 339 continue; 340 341 vcpu_info.pi_desc_addr = __pa(vcpu_to_pi_desc(vcpu)); 342 vcpu_info.vector = irq.vector; ··· 357 trace_kvm_pi_irte_update(host_irq, vcpu->vcpu_id, e->gsi, 358 vcpu_info.vector, vcpu_info.pi_desc_addr, set); 359 360 + if (!set) 361 + continue; 362 363 + enable_remapped_mode = false; 364 + 365 + ret = irq_set_vcpu_affinity(host_irq, &vcpu_info); 366 if (ret < 0) { 367 printk(KERN_INFO "%s: failed to update PI IRTE\n", 368 __func__); 369 goto out; 370 } 371 } 372 + 373 + if (enable_remapped_mode) 374 + ret = irq_set_vcpu_affinity(host_irq, NULL); 375 376 ret = 0; 377 out:
+19 -9
arch/x86/kvm/x86.c
··· 11098 /* 11099 * Profile KVM exit RIPs: 11100 */ 11101 - if (unlikely(prof_on == KVM_PROFILING)) { 11102 unsigned long rip = kvm_rip_read(vcpu); 11103 profile_hit(KVM_PROFILING, (void *)rip); 11104 } ··· 13557 } 13558 EXPORT_SYMBOL_GPL(kvm_arch_has_noncoherent_dma); 13559 13560 - bool kvm_arch_has_irq_bypass(void) 13561 - { 13562 - return enable_apicv && irq_remapping_cap(IRQ_POSTING_CAP); 13563 - } 13564 - 13565 int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons, 13566 struct irq_bypass_producer *prod) 13567 { 13568 struct kvm_kernel_irqfd *irqfd = 13569 container_of(cons, struct kvm_kernel_irqfd, consumer); 13570 int ret; 13571 13572 - irqfd->producer = prod; 13573 kvm_arch_start_assignment(irqfd->kvm); 13574 ret = kvm_x86_call(pi_update_irte)(irqfd->kvm, 13575 prod->irq, irqfd->gsi, 1); 13576 if (ret) 13577 kvm_arch_end_assignment(irqfd->kvm); 13578 13579 return ret; 13580 } ··· 13587 int ret; 13588 struct kvm_kernel_irqfd *irqfd = 13589 container_of(cons, struct kvm_kernel_irqfd, consumer); 13590 13591 WARN_ON(irqfd->producer != prod); 13592 - irqfd->producer = NULL; 13593 13594 /* 13595 * When producer of consumer is unregistered, we change back to ··· 13597 * when the irq is masked/disabled or the consumer side (KVM 13598 * int this case doesn't want to receive the interrupts. 13599 */ 13600 ret = kvm_x86_call(pi_update_irte)(irqfd->kvm, 13601 prod->irq, irqfd->gsi, 0); 13602 if (ret) 13603 printk(KERN_INFO "irq bypass consumer (token %p) unregistration" 13604 " fails: %d\n", irqfd->consumer.token, ret); 13605 13606 kvm_arch_end_assignment(irqfd->kvm); 13607 } ··· 13621 bool kvm_arch_irqfd_route_changed(struct kvm_kernel_irq_routing_entry *old, 13622 struct kvm_kernel_irq_routing_entry *new) 13623 { 13624 - if (new->type != KVM_IRQ_ROUTING_MSI) 13625 return true; 13626 13627 return !!memcmp(&old->msi, &new->msi, sizeof(new->msi));
··· 11098 /* 11099 * Profile KVM exit RIPs: 11100 */ 11101 + if (unlikely(prof_on == KVM_PROFILING && 11102 + !vcpu->arch.guest_state_protected)) { 11103 unsigned long rip = kvm_rip_read(vcpu); 11104 profile_hit(KVM_PROFILING, (void *)rip); 11105 } ··· 13556 } 13557 EXPORT_SYMBOL_GPL(kvm_arch_has_noncoherent_dma); 13558 13559 int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons, 13560 struct irq_bypass_producer *prod) 13561 { 13562 struct kvm_kernel_irqfd *irqfd = 13563 container_of(cons, struct kvm_kernel_irqfd, consumer); 13564 + struct kvm *kvm = irqfd->kvm; 13565 int ret; 13566 13567 kvm_arch_start_assignment(irqfd->kvm); 13568 + 13569 + spin_lock_irq(&kvm->irqfds.lock); 13570 + irqfd->producer = prod; 13571 + 13572 ret = kvm_x86_call(pi_update_irte)(irqfd->kvm, 13573 prod->irq, irqfd->gsi, 1); 13574 if (ret) 13575 kvm_arch_end_assignment(irqfd->kvm); 13576 + 13577 + spin_unlock_irq(&kvm->irqfds.lock); 13578 + 13579 13580 return ret; 13581 } ··· 13584 int ret; 13585 struct kvm_kernel_irqfd *irqfd = 13586 container_of(cons, struct kvm_kernel_irqfd, consumer); 13587 + struct kvm *kvm = irqfd->kvm; 13588 13589 WARN_ON(irqfd->producer != prod); 13590 13591 /* 13592 * When producer of consumer is unregistered, we change back to ··· 13594 * when the irq is masked/disabled or the consumer side (KVM 13595 * int this case doesn't want to receive the interrupts. 13596 */ 13597 + spin_lock_irq(&kvm->irqfds.lock); 13598 + irqfd->producer = NULL; 13599 + 13600 ret = kvm_x86_call(pi_update_irte)(irqfd->kvm, 13601 prod->irq, irqfd->gsi, 0); 13602 if (ret) 13603 printk(KERN_INFO "irq bypass consumer (token %p) unregistration" 13604 " fails: %d\n", irqfd->consumer.token, ret); 13605 + 13606 + spin_unlock_irq(&kvm->irqfds.lock); 13607 + 13608 13609 kvm_arch_end_assignment(irqfd->kvm); 13610 } ··· 13612 bool kvm_arch_irqfd_route_changed(struct kvm_kernel_irq_routing_entry *old, 13613 struct kvm_kernel_irq_routing_entry *new) 13614 { 13615 + if (old->type != KVM_IRQ_ROUTING_MSI || 13616 + new->type != KVM_IRQ_ROUTING_MSI) 13617 return true; 13618 13619 return !!memcmp(&old->msi, &new->msi, sizeof(new->msi));
+2 -2
arch/x86/lib/x86-opcode-map.txt
··· 996 83: Grp1 Ev,Ib (1A),(es) 997 # CTESTSCC instructions are: CTESTB, CTESTBE, CTESTF, CTESTL, CTESTLE, CTESTNB, CTESTNBE, CTESTNL, 998 # CTESTNLE, CTESTNO, CTESTNS, CTESTNZ, CTESTO, CTESTS, CTESTT, CTESTZ 999 - 84: CTESTSCC (ev) 1000 - 85: CTESTSCC (es) | CTESTSCC (66),(es) 1001 88: POPCNT Gv,Ev (es) | POPCNT Gv,Ev (66),(es) 1002 8f: POP2 Bq,Rq (000),(11B),(ev) 1003 a5: SHLD Ev,Gv,CL (es) | SHLD Ev,Gv,CL (66),(es)
··· 996 83: Grp1 Ev,Ib (1A),(es) 997 # CTESTSCC instructions are: CTESTB, CTESTBE, CTESTF, CTESTL, CTESTLE, CTESTNB, CTESTNBE, CTESTNL, 998 # CTESTNLE, CTESTNO, CTESTNS, CTESTNZ, CTESTO, CTESTS, CTESTT, CTESTZ 999 + 84: CTESTSCC Eb,Gb (ev) 1000 + 85: CTESTSCC Ev,Gv (es) | CTESTSCC Ev,Gv (66),(es) 1001 88: POPCNT Gv,Ev (es) | POPCNT Gv,Ev (66),(es) 1002 8f: POP2 Bq,Rq (000),(11B),(ev) 1003 a5: SHLD Ev,Gv,CL (es) | SHLD Ev,Gv,CL (66),(es)
+2 -2
arch/x86/mm/pgtable.c
··· 360 * We allocate one page for pgd. 361 */ 362 if (!SHARED_KERNEL_PMD) 363 - return __pgd_alloc(mm, PGD_ALLOCATION_ORDER); 364 365 /* 366 * Now PAE kernel is not running as a Xen domain. We can allocate ··· 380 381 static inline pgd_t *_pgd_alloc(struct mm_struct *mm) 382 { 383 - return __pgd_alloc(mm, PGD_ALLOCATION_ORDER); 384 } 385 386 static inline void _pgd_free(struct mm_struct *mm, pgd_t *pgd)
··· 360 * We allocate one page for pgd. 361 */ 362 if (!SHARED_KERNEL_PMD) 363 + return __pgd_alloc(mm, pgd_allocation_order()); 364 365 /* 366 * Now PAE kernel is not running as a Xen domain. We can allocate ··· 380 381 static inline pgd_t *_pgd_alloc(struct mm_struct *mm) 382 { 383 + return __pgd_alloc(mm, pgd_allocation_order()); 384 } 385 386 static inline void _pgd_free(struct mm_struct *mm, pgd_t *pgd)
+2 -2
arch/x86/platform/efi/efi_64.c
··· 73 gfp_t gfp_mask; 74 75 gfp_mask = GFP_KERNEL | __GFP_ZERO; 76 - efi_pgd = (pgd_t *)__get_free_pages(gfp_mask, PGD_ALLOCATION_ORDER); 77 if (!efi_pgd) 78 goto fail; 79 ··· 96 if (pgtable_l5_enabled()) 97 free_page((unsigned long)pgd_page_vaddr(*pgd)); 98 free_pgd: 99 - free_pages((unsigned long)efi_pgd, PGD_ALLOCATION_ORDER); 100 fail: 101 return -ENOMEM; 102 }
··· 73 gfp_t gfp_mask; 74 75 gfp_mask = GFP_KERNEL | __GFP_ZERO; 76 + efi_pgd = (pgd_t *)__get_free_pages(gfp_mask, pgd_allocation_order()); 77 if (!efi_pgd) 78 goto fail; 79 ··· 96 if (pgtable_l5_enabled()) 97 free_page((unsigned long)pgd_page_vaddr(*pgd)); 98 free_pgd: 99 + free_pages((unsigned long)efi_pgd, pgd_allocation_order()); 100 fail: 101 return -ENOMEM; 102 }
+51 -16
block/bdev.c
··· 152 get_order(bsize)); 153 } 154 155 int set_blocksize(struct file *file, int size) 156 { 157 struct inode *inode = file->f_mapping->host; 158 struct block_device *bdev = I_BDEV(inode); 159 160 - if (blk_validate_block_size(size)) 161 - return -EINVAL; 162 - 163 - /* Size cannot be smaller than the size supported by the device */ 164 - if (size < bdev_logical_block_size(bdev)) 165 - return -EINVAL; 166 167 if (!file->private_data) 168 return -EINVAL; 169 170 /* Don't change the size if it is same as current */ 171 if (inode->i_blkbits != blksize_bits(size)) { 172 sync_blockdev(bdev); 173 inode->i_blkbits = blksize_bits(size); 174 mapping_set_folio_min_order(inode->i_mapping, get_order(size)); 175 kill_bdev(bdev); 176 } 177 return 0; 178 } ··· 815 blkdev_put_whole(whole); 816 } 817 818 - struct block_device *blkdev_get_no_open(dev_t dev) 819 { 820 struct block_device *bdev; 821 struct inode *inode; 822 823 inode = ilookup(blockdev_superblock, dev); 824 - if (!inode && IS_ENABLED(CONFIG_BLOCK_LEGACY_AUTOLOAD)) { 825 blk_request_module(dev); 826 inode = ilookup(blockdev_superblock, dev); 827 if (inode) ··· 1043 if (ret) 1044 return ERR_PTR(ret); 1045 1046 - bdev = blkdev_get_no_open(dev); 1047 if (!bdev) 1048 return ERR_PTR(-ENXIO); 1049 ··· 1312 */ 1313 void bdev_statx(const struct path *path, struct kstat *stat, u32 request_mask) 1314 { 1315 - struct inode *backing_inode; 1316 struct block_device *bdev; 1317 1318 - backing_inode = d_backing_inode(path->dentry); 1319 - 1320 /* 1321 - * Note that backing_inode is the inode of a block device node file, 1322 - * not the block device's internal inode. Therefore it is *not* valid 1323 - * to use I_BDEV() here; the block device has to be looked up by i_rdev 1324 * instead. 1325 */ 1326 - bdev = blkdev_get_no_open(backing_inode->i_rdev); 1327 if (!bdev) 1328 return; 1329
··· 152 get_order(bsize)); 153 } 154 155 + /** 156 + * bdev_validate_blocksize - check that this block size is acceptable 157 + * @bdev: blockdevice to check 158 + * @block_size: block size to check 159 + * 160 + * For block device users that do not use buffer heads or the block device 161 + * page cache, make sure that this block size can be used with the device. 162 + * 163 + * Return: On success zero is returned, negative error code on failure. 164 + */ 165 + int bdev_validate_blocksize(struct block_device *bdev, int block_size) 166 + { 167 + if (blk_validate_block_size(block_size)) 168 + return -EINVAL; 169 + 170 + /* Size cannot be smaller than the size supported by the device */ 171 + if (block_size < bdev_logical_block_size(bdev)) 172 + return -EINVAL; 173 + 174 + return 0; 175 + } 176 + EXPORT_SYMBOL_GPL(bdev_validate_blocksize); 177 + 178 int set_blocksize(struct file *file, int size) 179 { 180 struct inode *inode = file->f_mapping->host; 181 struct block_device *bdev = I_BDEV(inode); 182 + int ret; 183 184 + ret = bdev_validate_blocksize(bdev, size); 185 + if (ret) 186 + return ret; 187 188 if (!file->private_data) 189 return -EINVAL; 190 191 /* Don't change the size if it is same as current */ 192 if (inode->i_blkbits != blksize_bits(size)) { 193 + /* 194 + * Flush and truncate the pagecache before we reconfigure the 195 + * mapping geometry because folio sizes are variable now. If a 196 + * reader has already allocated a folio whose size is smaller 197 + * than the new min_order but invokes readahead after the new 198 + * min_order becomes visible, readahead will think there are 199 + * "zero" blocks per folio and crash. Take the inode and 200 + * invalidation locks to avoid racing with 201 + * read/write/fallocate. 202 + */ 203 + inode_lock(inode); 204 + filemap_invalidate_lock(inode->i_mapping); 205 + 206 sync_blockdev(bdev); 207 + kill_bdev(bdev); 208 + 209 inode->i_blkbits = blksize_bits(size); 210 mapping_set_folio_min_order(inode->i_mapping, get_order(size)); 211 kill_bdev(bdev); 212 + filemap_invalidate_unlock(inode->i_mapping); 213 + inode_unlock(inode); 214 } 215 return 0; 216 } ··· 777 blkdev_put_whole(whole); 778 } 779 780 + struct block_device *blkdev_get_no_open(dev_t dev, bool autoload) 781 { 782 struct block_device *bdev; 783 struct inode *inode; 784 785 inode = ilookup(blockdev_superblock, dev); 786 + if (!inode && autoload && IS_ENABLED(CONFIG_BLOCK_LEGACY_AUTOLOAD)) { 787 blk_request_module(dev); 788 inode = ilookup(blockdev_superblock, dev); 789 if (inode) ··· 1005 if (ret) 1006 return ERR_PTR(ret); 1007 1008 + bdev = blkdev_get_no_open(dev, true); 1009 if (!bdev) 1010 return ERR_PTR(-ENXIO); 1011 ··· 1274 */ 1275 void bdev_statx(const struct path *path, struct kstat *stat, u32 request_mask) 1276 { 1277 struct block_device *bdev; 1278 1279 /* 1280 + * Note that d_backing_inode() returns the block device node inode, not 1281 + * the block device's internal inode. Therefore it is *not* valid to 1282 + * use I_BDEV() here; the block device has to be looked up by i_rdev 1283 * instead. 1284 */ 1285 + bdev = blkdev_get_no_open(d_backing_inode(path->dentry)->i_rdev, false); 1286 if (!bdev) 1287 return; 1288
+1 -1
block/blk-cgroup.c
··· 797 return -EINVAL; 798 input = skip_spaces(input); 799 800 - bdev = blkdev_get_no_open(MKDEV(major, minor)); 801 if (!bdev) 802 return -ENODEV; 803 if (bdev_is_partition(bdev)) {
··· 797 return -EINVAL; 798 input = skip_spaces(input); 799 800 + bdev = blkdev_get_no_open(MKDEV(major, minor), false); 801 if (!bdev) 802 return -ENODEV; 803 if (bdev_is_partition(bdev)) {
+7 -1
block/blk-settings.c
··· 61 /* 62 * For read-ahead of large files to be effective, we need to read ahead 63 * at least twice the optimal I/O size. 64 */ 65 - bdi->ra_pages = max(lim->io_opt * 2 / PAGE_SIZE, VM_READAHEAD_PAGES); 66 bdi->io_pages = lim->max_sectors >> PAGE_SECTORS_SHIFT; 67 } 68
··· 61 /* 62 * For read-ahead of large files to be effective, we need to read ahead 63 * at least twice the optimal I/O size. 64 + * 65 + * There is no hardware limitation for the read-ahead size and the user 66 + * might have increased the read-ahead size through sysfs, so don't ever 67 + * decrease it. 68 */ 69 + bdi->ra_pages = max3(bdi->ra_pages, 70 + lim->io_opt * 2 / PAGE_SIZE, 71 + VM_READAHEAD_PAGES); 72 bdi->io_pages = lim->max_sectors >> PAGE_SECTORS_SHIFT; 73 } 74
+4 -1
block/blk-zoned.c
··· 343 op = REQ_OP_ZONE_RESET; 344 345 /* Invalidate the page cache, including dirty pages. */ 346 filemap_invalidate_lock(bdev->bd_mapping); 347 ret = blkdev_truncate_zone_range(bdev, mode, &zrange); 348 if (ret) ··· 365 ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors); 366 367 fail: 368 - if (cmd == BLKRESETZONE) 369 filemap_invalidate_unlock(bdev->bd_mapping); 370 371 return ret; 372 }
··· 343 op = REQ_OP_ZONE_RESET; 344 345 /* Invalidate the page cache, including dirty pages. */ 346 + inode_lock(bdev->bd_mapping->host); 347 filemap_invalidate_lock(bdev->bd_mapping); 348 ret = blkdev_truncate_zone_range(bdev, mode, &zrange); 349 if (ret) ··· 364 ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors); 365 366 fail: 367 + if (cmd == BLKRESETZONE) { 368 filemap_invalidate_unlock(bdev->bd_mapping); 369 + inode_unlock(bdev->bd_mapping->host); 370 + } 371 372 return ret; 373 }
+3
block/blk.h
··· 94 wait_for_completion_io(done); 95 } 96 97 #define BIO_INLINE_VECS 4 98 struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs, 99 gfp_t gfp_mask);
··· 94 wait_for_completion_io(done); 95 } 96 97 + struct block_device *blkdev_get_no_open(dev_t dev, bool autoload); 98 + void blkdev_put_no_open(struct block_device *bdev); 99 + 100 #define BIO_INLINE_VECS 4 101 struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs, 102 gfp_t gfp_mask);
+17 -1
block/fops.c
··· 642 if (ret) 643 return ret; 644 645 - bdev = blkdev_get_no_open(inode->i_rdev); 646 if (!bdev) 647 return -ENXIO; 648 ··· 746 ret = direct_write_fallback(iocb, from, ret, 747 blkdev_buffered_write(iocb, from)); 748 } else { 749 ret = blkdev_buffered_write(iocb, from); 750 } 751 752 if (ret > 0) ··· 764 765 static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to) 766 { 767 struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host); 768 loff_t size = bdev_nr_bytes(bdev); 769 loff_t pos = iocb->ki_pos; ··· 801 goto reexpand; 802 } 803 804 ret = filemap_read(iocb, to, ret); 805 806 reexpand: 807 if (unlikely(shorted)) ··· 850 if ((start | len) & (bdev_logical_block_size(bdev) - 1)) 851 return -EINVAL; 852 853 filemap_invalidate_lock(inode->i_mapping); 854 855 /* ··· 883 884 fail: 885 filemap_invalidate_unlock(inode->i_mapping); 886 return error; 887 } 888
··· 642 if (ret) 643 return ret; 644 645 + bdev = blkdev_get_no_open(inode->i_rdev, true); 646 if (!bdev) 647 return -ENXIO; 648 ··· 746 ret = direct_write_fallback(iocb, from, ret, 747 blkdev_buffered_write(iocb, from)); 748 } else { 749 + /* 750 + * Take i_rwsem and invalidate_lock to avoid racing with 751 + * set_blocksize changing i_blkbits/folio order and punching 752 + * out the pagecache. 753 + */ 754 + inode_lock_shared(bd_inode); 755 ret = blkdev_buffered_write(iocb, from); 756 + inode_unlock_shared(bd_inode); 757 } 758 759 if (ret > 0) ··· 757 758 static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to) 759 { 760 + struct inode *bd_inode = bdev_file_inode(iocb->ki_filp); 761 struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host); 762 loff_t size = bdev_nr_bytes(bdev); 763 loff_t pos = iocb->ki_pos; ··· 793 goto reexpand; 794 } 795 796 + /* 797 + * Take i_rwsem and invalidate_lock to avoid racing with set_blocksize 798 + * changing i_blkbits/folio order and punching out the pagecache. 799 + */ 800 + inode_lock_shared(bd_inode); 801 ret = filemap_read(iocb, to, ret); 802 + inode_unlock_shared(bd_inode); 803 804 reexpand: 805 if (unlikely(shorted)) ··· 836 if ((start | len) & (bdev_logical_block_size(bdev) - 1)) 837 return -EINVAL; 838 839 + inode_lock(inode); 840 filemap_invalidate_lock(inode->i_mapping); 841 842 /* ··· 868 869 fail: 870 filemap_invalidate_unlock(inode->i_mapping); 871 + inode_unlock(inode); 872 return error; 873 } 874
+6
block/ioctl.c
··· 142 if (err) 143 return err; 144 145 filemap_invalidate_lock(bdev->bd_mapping); 146 err = truncate_bdev_range(bdev, mode, start, start + len - 1); 147 if (err) ··· 175 blk_finish_plug(&plug); 176 fail: 177 filemap_invalidate_unlock(bdev->bd_mapping); 178 return err; 179 } 180 ··· 201 end > bdev_nr_bytes(bdev)) 202 return -EINVAL; 203 204 filemap_invalidate_lock(bdev->bd_mapping); 205 err = truncate_bdev_range(bdev, mode, start, end - 1); 206 if (!err) 207 err = blkdev_issue_secure_erase(bdev, start >> 9, len >> 9, 208 GFP_KERNEL); 209 filemap_invalidate_unlock(bdev->bd_mapping); 210 return err; 211 } 212 ··· 240 return -EINVAL; 241 242 /* Invalidate the page cache, including dirty pages */ 243 filemap_invalidate_lock(bdev->bd_mapping); 244 err = truncate_bdev_range(bdev, mode, start, end); 245 if (err) ··· 251 252 fail: 253 filemap_invalidate_unlock(bdev->bd_mapping); 254 return err; 255 } 256
··· 142 if (err) 143 return err; 144 145 + inode_lock(bdev->bd_mapping->host); 146 filemap_invalidate_lock(bdev->bd_mapping); 147 err = truncate_bdev_range(bdev, mode, start, start + len - 1); 148 if (err) ··· 174 blk_finish_plug(&plug); 175 fail: 176 filemap_invalidate_unlock(bdev->bd_mapping); 177 + inode_unlock(bdev->bd_mapping->host); 178 return err; 179 } 180 ··· 199 end > bdev_nr_bytes(bdev)) 200 return -EINVAL; 201 202 + inode_lock(bdev->bd_mapping->host); 203 filemap_invalidate_lock(bdev->bd_mapping); 204 err = truncate_bdev_range(bdev, mode, start, end - 1); 205 if (!err) 206 err = blkdev_issue_secure_erase(bdev, start >> 9, len >> 9, 207 GFP_KERNEL); 208 filemap_invalidate_unlock(bdev->bd_mapping); 209 + inode_unlock(bdev->bd_mapping->host); 210 return err; 211 } 212 ··· 236 return -EINVAL; 237 238 /* Invalidate the page cache, including dirty pages */ 239 + inode_lock(bdev->bd_mapping->host); 240 filemap_invalidate_lock(bdev->bd_mapping); 241 err = truncate_bdev_range(bdev, mode, start, end); 242 if (err) ··· 246 247 fail: 248 filemap_invalidate_unlock(bdev->bd_mapping); 249 + inode_unlock(bdev->bd_mapping->host); 250 return err; 251 } 252
+5 -5
crypto/scompress.c
··· 215 spage = nth_page(spage, soff / PAGE_SIZE); 216 soff = offset_in_page(soff); 217 218 - n = slen / PAGE_SIZE; 219 - n += (offset_in_page(slen) + soff - 1) / PAGE_SIZE; 220 if (PageHighMem(nth_page(spage, n)) && 221 size_add(soff, slen) > PAGE_SIZE) 222 break; ··· 243 dpage = nth_page(dpage, doff / PAGE_SIZE); 244 doff = offset_in_page(doff); 245 246 - n = dlen / PAGE_SIZE; 247 - n += (offset_in_page(dlen) + doff - 1) / PAGE_SIZE; 248 - if (PageHighMem(dpage + n) && 249 size_add(doff, dlen) > PAGE_SIZE) 250 break; 251 dst = kmap_local_page(dpage) + doff;
··· 215 spage = nth_page(spage, soff / PAGE_SIZE); 216 soff = offset_in_page(soff); 217 218 + n = (slen - 1) / PAGE_SIZE; 219 + n += (offset_in_page(slen - 1) + soff) / PAGE_SIZE; 220 if (PageHighMem(nth_page(spage, n)) && 221 size_add(soff, slen) > PAGE_SIZE) 222 break; ··· 243 dpage = nth_page(dpage, doff / PAGE_SIZE); 244 doff = offset_in_page(doff); 245 246 + n = (dlen - 1) / PAGE_SIZE; 247 + n += (offset_in_page(dlen - 1) + doff) / PAGE_SIZE; 248 + if (PageHighMem(nth_page(dpage, n)) && 249 size_add(doff, dlen) > PAGE_SIZE) 250 break; 251 dst = kmap_local_page(dpage) + doff;
+63 -82
crypto/testmgr.c
··· 58 MODULE_PARM_DESC(fuzz_iterations, "number of fuzz test iterations"); 59 #endif 60 61 - /* Multibuffer is unlimited. Set arbitrary limit for testing. */ 62 - #define MAX_MB_MSGS 16 63 - 64 #ifdef CONFIG_CRYPTO_MANAGER_DISABLE_TESTS 65 66 /* a perfect nop */ ··· 3326 int ctcount, int dtcount) 3327 { 3328 const char *algo = crypto_tfm_alg_driver_name(crypto_acomp_tfm(tfm)); 3329 - struct scatterlist *src = NULL, *dst = NULL; 3330 - struct acomp_req *reqs[MAX_MB_MSGS] = {}; 3331 - char *decomp_out[MAX_MB_MSGS] = {}; 3332 - char *output[MAX_MB_MSGS] = {}; 3333 - struct crypto_wait wait; 3334 - struct acomp_req *req; 3335 - int ret = -ENOMEM; 3336 unsigned int i; 3337 3338 - src = kmalloc_array(MAX_MB_MSGS, sizeof(*src), GFP_KERNEL); 3339 - if (!src) 3340 - goto out; 3341 - dst = kmalloc_array(MAX_MB_MSGS, sizeof(*dst), GFP_KERNEL); 3342 - if (!dst) 3343 - goto out; 3344 3345 - for (i = 0; i < MAX_MB_MSGS; i++) { 3346 - reqs[i] = acomp_request_alloc(tfm); 3347 - if (!reqs[i]) 3348 - goto out; 3349 - 3350 - acomp_request_set_callback(reqs[i], 3351 - CRYPTO_TFM_REQ_MAY_SLEEP | 3352 - CRYPTO_TFM_REQ_MAY_BACKLOG, 3353 - crypto_req_done, &wait); 3354 - if (i) 3355 - acomp_request_chain(reqs[i], reqs[0]); 3356 - 3357 - output[i] = kmalloc(COMP_BUF_SIZE, GFP_KERNEL); 3358 - if (!output[i]) 3359 - goto out; 3360 - 3361 - decomp_out[i] = kmalloc(COMP_BUF_SIZE, GFP_KERNEL); 3362 - if (!decomp_out[i]) 3363 - goto out; 3364 } 3365 3366 for (i = 0; i < ctcount; i++) { 3367 unsigned int dlen = COMP_BUF_SIZE; 3368 int ilen = ctemplate[i].inlen; 3369 void *input_vec; 3370 - int j; 3371 3372 input_vec = kmemdup(ctemplate[i].input, ilen, GFP_KERNEL); 3373 if (!input_vec) { ··· 3354 goto out; 3355 } 3356 3357 crypto_init_wait(&wait); 3358 - sg_init_one(src, input_vec, ilen); 3359 3360 - for (j = 0; j < MAX_MB_MSGS; j++) { 3361 - sg_init_one(dst + j, output[j], dlen); 3362 - acomp_request_set_params(reqs[j], src, dst + j, ilen, dlen); 3363 } 3364 3365 - req = reqs[0]; 3366 ret = crypto_wait_req(crypto_acomp_compress(req), &wait); 3367 if (ret) { 3368 pr_err("alg: acomp: compression failed on test %d for %s: ret=%d\n", 3369 i + 1, algo, -ret); 3370 kfree(input_vec); 3371 goto out; 3372 } 3373 3374 ilen = req->dlen; 3375 dlen = COMP_BUF_SIZE; 3376 crypto_init_wait(&wait); 3377 - for (j = 0; j < MAX_MB_MSGS; j++) { 3378 - sg_init_one(src + j, output[j], ilen); 3379 - sg_init_one(dst + j, decomp_out[j], dlen); 3380 - acomp_request_set_params(reqs[j], src + j, dst + j, ilen, dlen); 3381 } 3382 3383 - crypto_wait_req(crypto_acomp_decompress(req), &wait); 3384 - for (j = 0; j < MAX_MB_MSGS; j++) { 3385 - ret = reqs[j]->base.err; 3386 - if (ret) { 3387 - pr_err("alg: acomp: compression failed on test %d (%d) for %s: ret=%d\n", 3388 - i + 1, j, algo, -ret); 3389 - kfree(input_vec); 3390 - goto out; 3391 - } 3392 3393 - if (reqs[j]->dlen != ctemplate[i].inlen) { 3394 - pr_err("alg: acomp: Compression test %d (%d) failed for %s: output len = %d\n", 3395 - i + 1, j, algo, reqs[j]->dlen); 3396 - ret = -EINVAL; 3397 - kfree(input_vec); 3398 - goto out; 3399 - } 3400 - 3401 - if (memcmp(input_vec, decomp_out[j], reqs[j]->dlen)) { 3402 - pr_err("alg: acomp: Compression test %d (%d) failed for %s\n", 3403 - i + 1, j, algo); 3404 - hexdump(output[j], reqs[j]->dlen); 3405 - ret = -EINVAL; 3406 - kfree(input_vec); 3407 - goto out; 3408 - } 3409 } 3410 3411 kfree(input_vec); 3412 } 3413 3414 for (i = 0; i < dtcount; i++) { ··· 3431 goto out; 3432 } 3433 3434 crypto_init_wait(&wait); 3435 - sg_init_one(src, input_vec, ilen); 3436 - sg_init_one(dst, output[0], dlen); 3437 3438 req = acomp_request_alloc(tfm); 3439 if (!req) { ··· 3445 goto out; 3446 } 3447 3448 - acomp_request_set_params(req, src, dst, ilen, dlen); 3449 acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, 3450 crypto_req_done, &wait); 3451 ··· 3467 goto out; 3468 } 3469 3470 - if (memcmp(output[0], dtemplate[i].output, req->dlen)) { 3471 pr_err("alg: acomp: Decompression test %d failed for %s\n", 3472 i + 1, algo); 3473 - hexdump(output[0], req->dlen); 3474 ret = -EINVAL; 3475 kfree(input_vec); 3476 acomp_request_free(req); ··· 3484 ret = 0; 3485 3486 out: 3487 - acomp_request_free(reqs[0]); 3488 - for (i = 0; i < MAX_MB_MSGS; i++) { 3489 - kfree(output[i]); 3490 - kfree(decomp_out[i]); 3491 - } 3492 - kfree(dst); 3493 - kfree(src); 3494 return ret; 3495 } 3496
··· 58 MODULE_PARM_DESC(fuzz_iterations, "number of fuzz test iterations"); 59 #endif 60 61 #ifdef CONFIG_CRYPTO_MANAGER_DISABLE_TESTS 62 63 /* a perfect nop */ ··· 3329 int ctcount, int dtcount) 3330 { 3331 const char *algo = crypto_tfm_alg_driver_name(crypto_acomp_tfm(tfm)); 3332 unsigned int i; 3333 + char *output, *decomp_out; 3334 + int ret; 3335 + struct scatterlist src, dst; 3336 + struct acomp_req *req; 3337 + struct crypto_wait wait; 3338 3339 + output = kmalloc(COMP_BUF_SIZE, GFP_KERNEL); 3340 + if (!output) 3341 + return -ENOMEM; 3342 3343 + decomp_out = kmalloc(COMP_BUF_SIZE, GFP_KERNEL); 3344 + if (!decomp_out) { 3345 + kfree(output); 3346 + return -ENOMEM; 3347 } 3348 3349 for (i = 0; i < ctcount; i++) { 3350 unsigned int dlen = COMP_BUF_SIZE; 3351 int ilen = ctemplate[i].inlen; 3352 void *input_vec; 3353 3354 input_vec = kmemdup(ctemplate[i].input, ilen, GFP_KERNEL); 3355 if (!input_vec) { ··· 3378 goto out; 3379 } 3380 3381 + memset(output, 0, dlen); 3382 crypto_init_wait(&wait); 3383 + sg_init_one(&src, input_vec, ilen); 3384 + sg_init_one(&dst, output, dlen); 3385 3386 + req = acomp_request_alloc(tfm); 3387 + if (!req) { 3388 + pr_err("alg: acomp: request alloc failed for %s\n", 3389 + algo); 3390 + kfree(input_vec); 3391 + ret = -ENOMEM; 3392 + goto out; 3393 } 3394 3395 + acomp_request_set_params(req, &src, &dst, ilen, dlen); 3396 + acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, 3397 + crypto_req_done, &wait); 3398 + 3399 ret = crypto_wait_req(crypto_acomp_compress(req), &wait); 3400 if (ret) { 3401 pr_err("alg: acomp: compression failed on test %d for %s: ret=%d\n", 3402 i + 1, algo, -ret); 3403 kfree(input_vec); 3404 + acomp_request_free(req); 3405 goto out; 3406 } 3407 3408 ilen = req->dlen; 3409 dlen = COMP_BUF_SIZE; 3410 + sg_init_one(&src, output, ilen); 3411 + sg_init_one(&dst, decomp_out, dlen); 3412 crypto_init_wait(&wait); 3413 + acomp_request_set_params(req, &src, &dst, ilen, dlen); 3414 + 3415 + ret = crypto_wait_req(crypto_acomp_decompress(req), &wait); 3416 + if (ret) { 3417 + pr_err("alg: acomp: compression failed on test %d for %s: ret=%d\n", 3418 + i + 1, algo, -ret); 3419 + kfree(input_vec); 3420 + acomp_request_free(req); 3421 + goto out; 3422 } 3423 3424 + if (req->dlen != ctemplate[i].inlen) { 3425 + pr_err("alg: acomp: Compression test %d failed for %s: output len = %d\n", 3426 + i + 1, algo, req->dlen); 3427 + ret = -EINVAL; 3428 + kfree(input_vec); 3429 + acomp_request_free(req); 3430 + goto out; 3431 + } 3432 3433 + if (memcmp(input_vec, decomp_out, req->dlen)) { 3434 + pr_err("alg: acomp: Compression test %d failed for %s\n", 3435 + i + 1, algo); 3436 + hexdump(output, req->dlen); 3437 + ret = -EINVAL; 3438 + kfree(input_vec); 3439 + acomp_request_free(req); 3440 + goto out; 3441 } 3442 3443 kfree(input_vec); 3444 + acomp_request_free(req); 3445 } 3446 3447 for (i = 0; i < dtcount; i++) { ··· 3446 goto out; 3447 } 3448 3449 + memset(output, 0, dlen); 3450 crypto_init_wait(&wait); 3451 + sg_init_one(&src, input_vec, ilen); 3452 + sg_init_one(&dst, output, dlen); 3453 3454 req = acomp_request_alloc(tfm); 3455 if (!req) { ··· 3459 goto out; 3460 } 3461 3462 + acomp_request_set_params(req, &src, &dst, ilen, dlen); 3463 acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, 3464 crypto_req_done, &wait); 3465 ··· 3481 goto out; 3482 } 3483 3484 + if (memcmp(output, dtemplate[i].output, req->dlen)) { 3485 pr_err("alg: acomp: Decompression test %d failed for %s\n", 3486 i + 1, algo); 3487 + hexdump(output, req->dlen); 3488 ret = -EINVAL; 3489 kfree(input_vec); 3490 acomp_request_free(req); ··· 3498 ret = 0; 3499 3500 out: 3501 + kfree(decomp_out); 3502 + kfree(output); 3503 return ret; 3504 } 3505
+1 -1
drivers/acpi/tables.c
··· 396 } 397 398 /* All but ACPI_SIG_RSDP and ACPI_SIG_FACS: */ 399 - static const char table_sigs[][ACPI_NAMESEG_SIZE] __initconst __nonstring = { 400 ACPI_SIG_BERT, ACPI_SIG_BGRT, ACPI_SIG_CPEP, ACPI_SIG_ECDT, 401 ACPI_SIG_EINJ, ACPI_SIG_ERST, ACPI_SIG_HEST, ACPI_SIG_MADT, 402 ACPI_SIG_MSCT, ACPI_SIG_SBST, ACPI_SIG_SLIT, ACPI_SIG_SRAT,
··· 396 } 397 398 /* All but ACPI_SIG_RSDP and ACPI_SIG_FACS: */ 399 + static const char table_sigs[][ACPI_NAMESEG_SIZE] __initconst = { 400 ACPI_SIG_BERT, ACPI_SIG_BGRT, ACPI_SIG_CPEP, ACPI_SIG_ECDT, 401 ACPI_SIG_EINJ, ACPI_SIG_ERST, ACPI_SIG_HEST, ACPI_SIG_MADT, 402 ACPI_SIG_MSCT, ACPI_SIG_SBST, ACPI_SIG_SLIT, ACPI_SIG_SRAT,
+1 -1
drivers/android/binder.c
··· 6373 seq_printf(m, " node %d", buffer->target_node->debug_id); 6374 seq_printf(m, " size %zd:%zd offset %lx\n", 6375 buffer->data_size, buffer->offsets_size, 6376 - proc->alloc.vm_start - buffer->user_data); 6377 } 6378 6379 static void print_binder_work_ilocked(struct seq_file *m,
··· 6373 seq_printf(m, " node %d", buffer->target_node->debug_id); 6374 seq_printf(m, " size %zd:%zd offset %lx\n", 6375 buffer->data_size, buffer->offsets_size, 6376 + buffer->user_data - proc->alloc.vm_start); 6377 } 6378 6379 static void print_binder_work_ilocked(struct seq_file *m,
+17 -8
drivers/ata/libata-scsi.c
··· 2453 */ 2454 put_unaligned_be16(ATA_FEATURE_SUB_MPAGE_LEN - 4, &buf[2]); 2455 2456 - if (dev->flags & ATA_DFLAG_CDL) 2457 - buf[4] = 0x02; /* Support T2A and T2B pages */ 2458 else 2459 buf[4] = 0; 2460 ··· 3886 } 3887 3888 /* 3889 - * Translate MODE SELECT control mode page, sub-pages f2h (ATA feature mode 3890 * page) into a SET FEATURES command. 3891 */ 3892 - static unsigned int ata_mselect_control_ata_feature(struct ata_queued_cmd *qc, 3893 - const u8 *buf, int len, 3894 - u16 *fp) 3895 { 3896 struct ata_device *dev = qc->dev; 3897 struct ata_taskfile *tf = &qc->tf; ··· 3908 /* Check cdl_ctrl */ 3909 switch (buf[0] & 0x03) { 3910 case 0: 3911 - /* Disable CDL */ 3912 cdl_action = 0; 3913 dev->flags &= ~ATA_DFLAG_CDL_ENABLED; 3914 break; 3915 case 0x02: 3916 - /* Enable CDL T2A/T2B: NCQ priority must be disabled */ 3917 if (dev->flags & ATA_DFLAG_NCQ_PRIO_ENABLED) { 3918 ata_dev_err(dev, 3919 "NCQ priority must be disabled to enable CDL\n"); 3920 return -EINVAL; 3921 } 3922 cdl_action = 1; 3923 dev->flags |= ATA_DFLAG_CDL_ENABLED; 3924 break;
··· 2453 */ 2454 put_unaligned_be16(ATA_FEATURE_SUB_MPAGE_LEN - 4, &buf[2]); 2455 2456 + if (dev->flags & ATA_DFLAG_CDL_ENABLED) 2457 + buf[4] = 0x02; /* T2A and T2B pages enabled */ 2458 else 2459 buf[4] = 0; 2460 ··· 3886 } 3887 3888 /* 3889 + * Translate MODE SELECT control mode page, sub-page f2h (ATA feature mode 3890 * page) into a SET FEATURES command. 3891 */ 3892 + static int ata_mselect_control_ata_feature(struct ata_queued_cmd *qc, 3893 + const u8 *buf, int len, u16 *fp) 3894 { 3895 struct ata_device *dev = qc->dev; 3896 struct ata_taskfile *tf = &qc->tf; ··· 3909 /* Check cdl_ctrl */ 3910 switch (buf[0] & 0x03) { 3911 case 0: 3912 + /* Disable CDL if it is enabled */ 3913 + if (!(dev->flags & ATA_DFLAG_CDL_ENABLED)) 3914 + return 0; 3915 + ata_dev_dbg(dev, "Disabling CDL\n"); 3916 cdl_action = 0; 3917 dev->flags &= ~ATA_DFLAG_CDL_ENABLED; 3918 break; 3919 case 0x02: 3920 + /* 3921 + * Enable CDL if not already enabled. Since this is mutually 3922 + * exclusive with NCQ priority, allow this only if NCQ priority 3923 + * is disabled. 3924 + */ 3925 + if (dev->flags & ATA_DFLAG_CDL_ENABLED) 3926 + return 0; 3927 if (dev->flags & ATA_DFLAG_NCQ_PRIO_ENABLED) { 3928 ata_dev_err(dev, 3929 "NCQ priority must be disabled to enable CDL\n"); 3930 return -EINVAL; 3931 } 3932 + ata_dev_dbg(dev, "Enabling CDL\n"); 3933 cdl_action = 1; 3934 dev->flags |= ATA_DFLAG_CDL_ENABLED; 3935 break;
+10
drivers/base/auxiliary.c
··· 156 * }, 157 * .ops = my_custom_ops, 158 * }; 159 */ 160 161 static const struct auxiliary_device_id *auxiliary_match_id(const struct auxiliary_device_id *id,
··· 156 * }, 157 * .ops = my_custom_ops, 158 * }; 159 + * 160 + * Please note that such custom ops approach is valid, but it is hard to implement 161 + * it right without global locks per-device to protect from auxiliary_drv removal 162 + * during call to that ops. In addition, this implementation lacks proper module 163 + * dependency, which causes to load/unload races between auxiliary parent and devices 164 + * modules. 165 + * 166 + * The most easiest way to provide these ops reliably without needing to 167 + * have a lock is to EXPORT_SYMBOL*() them and rely on already existing 168 + * modules infrastructure for validity and correct dependencies chains. 169 */ 170 171 static const struct auxiliary_device_id *auxiliary_match_id(const struct auxiliary_device_id *id,
+17
drivers/base/base.h
··· 73 kset_put(&sp->subsys); 74 } 75 76 struct subsys_private *class_to_subsys(const struct class *class); 77 78 struct driver_private { ··· 180 int driver_add_groups(const struct device_driver *drv, const struct attribute_group **groups); 181 void driver_remove_groups(const struct device_driver *drv, const struct attribute_group **groups); 182 void device_driver_detach(struct device *dev); 183 184 int devres_release_all(struct device *dev); 185 void device_block_probing(void);
··· 73 kset_put(&sp->subsys); 74 } 75 76 + struct subsys_private *bus_to_subsys(const struct bus_type *bus); 77 struct subsys_private *class_to_subsys(const struct class *class); 78 79 struct driver_private { ··· 179 int driver_add_groups(const struct device_driver *drv, const struct attribute_group **groups); 180 void driver_remove_groups(const struct device_driver *drv, const struct attribute_group **groups); 181 void device_driver_detach(struct device *dev); 182 + 183 + static inline void device_set_driver(struct device *dev, const struct device_driver *drv) 184 + { 185 + /* 186 + * Majority (all?) read accesses to dev->driver happens either 187 + * while holding device lock or in bus/driver code that is only 188 + * invoked when the device is bound to a driver and there is no 189 + * concern of the pointer being changed while it is being read. 190 + * However when reading device's uevent file we read driver pointer 191 + * without taking device lock (so we do not block there for 192 + * arbitrary amount of time). We use WRITE_ONCE() here to prevent 193 + * tearing so that READ_ONCE() can safely be used in uevent code. 194 + */ 195 + // FIXME - this cast should not be needed "soon" 196 + WRITE_ONCE(dev->driver, (struct device_driver *)drv); 197 + } 198 199 int devres_release_all(struct device *dev); 200 void device_block_probing(void);
+1 -1
drivers/base/bus.c
··· 57 * NULL. A call to subsys_put() must be done when finished with the pointer in 58 * order for it to be properly freed. 59 */ 60 - static struct subsys_private *bus_to_subsys(const struct bus_type *bus) 61 { 62 struct subsys_private *sp = NULL; 63 struct kobject *kobj;
··· 57 * NULL. A call to subsys_put() must be done when finished with the pointer in 58 * order for it to be properly freed. 59 */ 60 + struct subsys_private *bus_to_subsys(const struct bus_type *bus) 61 { 62 struct subsys_private *sp = NULL; 63 struct kobject *kobj;
+32 -6
drivers/base/core.c
··· 2624 return NULL; 2625 } 2626 2627 static int dev_uevent(const struct kobject *kobj, struct kobj_uevent_env *env) 2628 { 2629 const struct device *dev = kobj_to_dev(kobj); ··· 2684 if (dev->type && dev->type->name) 2685 add_uevent_var(env, "DEVTYPE=%s", dev->type->name); 2686 2687 - if (dev->driver) 2688 - add_uevent_var(env, "DRIVER=%s", dev->driver->name); 2689 2690 /* Add common DT information about the device */ 2691 of_device_uevent(dev, env); ··· 2755 if (!env) 2756 return -ENOMEM; 2757 2758 - /* Synchronize with really_probe() */ 2759 - device_lock(dev); 2760 /* let the kset specific function add its keys */ 2761 retval = kset->uevent_ops->uevent(&dev->kobj, env); 2762 - device_unlock(dev); 2763 if (retval) 2764 goto out; 2765 ··· 3726 device_pm_remove(dev); 3727 dpm_sysfs_remove(dev); 3728 DPMError: 3729 - dev->driver = NULL; 3730 bus_remove_device(dev); 3731 BusError: 3732 device_remove_attrs(dev);
··· 2624 return NULL; 2625 } 2626 2627 + /* 2628 + * Try filling "DRIVER=<name>" uevent variable for a device. Because this 2629 + * function may race with binding and unbinding the device from a driver, 2630 + * we need to be careful. Binding is generally safe, at worst we miss the 2631 + * fact that the device is already bound to a driver (but the driver 2632 + * information that is delivered through uevents is best-effort, it may 2633 + * become obsolete as soon as it is generated anyways). Unbinding is more 2634 + * risky as driver pointer is transitioning to NULL, so READ_ONCE() should 2635 + * be used to make sure we are dealing with the same pointer, and to 2636 + * ensure that driver structure is not going to disappear from under us 2637 + * we take bus' drivers klist lock. The assumption that only registered 2638 + * driver can be bound to a device, and to unregister a driver bus code 2639 + * will take the same lock. 2640 + */ 2641 + static void dev_driver_uevent(const struct device *dev, struct kobj_uevent_env *env) 2642 + { 2643 + struct subsys_private *sp = bus_to_subsys(dev->bus); 2644 + 2645 + if (sp) { 2646 + scoped_guard(spinlock, &sp->klist_drivers.k_lock) { 2647 + struct device_driver *drv = READ_ONCE(dev->driver); 2648 + if (drv) 2649 + add_uevent_var(env, "DRIVER=%s", drv->name); 2650 + } 2651 + 2652 + subsys_put(sp); 2653 + } 2654 + } 2655 + 2656 static int dev_uevent(const struct kobject *kobj, struct kobj_uevent_env *env) 2657 { 2658 const struct device *dev = kobj_to_dev(kobj); ··· 2655 if (dev->type && dev->type->name) 2656 add_uevent_var(env, "DEVTYPE=%s", dev->type->name); 2657 2658 + /* Add "DRIVER=%s" variable if the device is bound to a driver */ 2659 + dev_driver_uevent(dev, env); 2660 2661 /* Add common DT information about the device */ 2662 of_device_uevent(dev, env); ··· 2726 if (!env) 2727 return -ENOMEM; 2728 2729 /* let the kset specific function add its keys */ 2730 retval = kset->uevent_ops->uevent(&dev->kobj, env); 2731 if (retval) 2732 goto out; 2733 ··· 3700 device_pm_remove(dev); 3701 dpm_sysfs_remove(dev); 3702 DPMError: 3703 + device_set_driver(dev, NULL); 3704 bus_remove_device(dev); 3705 BusError: 3706 device_remove_attrs(dev);
+3 -4
drivers/base/dd.c
··· 550 arch_teardown_dma_ops(dev); 551 kfree(dev->dma_range_map); 552 dev->dma_range_map = NULL; 553 - dev->driver = NULL; 554 dev_set_drvdata(dev, NULL); 555 if (dev->pm_domain && dev->pm_domain->dismiss) 556 dev->pm_domain->dismiss(dev); ··· 629 } 630 631 re_probe: 632 - // FIXME - this cast should not be needed "soon" 633 - dev->driver = (struct device_driver *)drv; 634 635 /* If using pinctrl, bind pins now before probing */ 636 ret = pinctrl_bind_pins(dev); ··· 1013 if (ret == 0) 1014 ret = 1; 1015 else { 1016 - dev->driver = NULL; 1017 ret = 0; 1018 } 1019 } else {
··· 550 arch_teardown_dma_ops(dev); 551 kfree(dev->dma_range_map); 552 dev->dma_range_map = NULL; 553 + device_set_driver(dev, NULL); 554 dev_set_drvdata(dev, NULL); 555 if (dev->pm_domain && dev->pm_domain->dismiss) 556 dev->pm_domain->dismiss(dev); ··· 629 } 630 631 re_probe: 632 + device_set_driver(dev, drv); 633 634 /* If using pinctrl, bind pins now before probing */ 635 ret = pinctrl_bind_pins(dev); ··· 1014 if (ret == 0) 1015 ret = 1; 1016 else { 1017 + device_set_driver(dev, NULL); 1018 ret = 0; 1019 } 1020 } else {
+9 -13
drivers/base/devtmpfs.c
··· 296 return err; 297 } 298 299 - static int dev_mynode(struct device *dev, struct inode *inode, struct kstat *stat) 300 { 301 /* did we create it */ 302 if (inode->i_private != &thread) ··· 304 305 /* does the dev_t match */ 306 if (is_blockdev(dev)) { 307 - if (!S_ISBLK(stat->mode)) 308 return 0; 309 } else { 310 - if (!S_ISCHR(stat->mode)) 311 return 0; 312 } 313 - if (stat->rdev != dev->devt) 314 return 0; 315 316 /* ours */ ··· 321 { 322 struct path parent; 323 struct dentry *dentry; 324 - struct kstat stat; 325 - struct path p; 326 int deleted = 0; 327 - int err; 328 329 dentry = kern_path_locked(nodename, &parent); 330 if (IS_ERR(dentry)) 331 return PTR_ERR(dentry); 332 333 - p.mnt = parent.mnt; 334 - p.dentry = dentry; 335 - err = vfs_getattr(&p, &stat, STATX_TYPE | STATX_MODE, 336 - AT_STATX_SYNC_AS_STAT); 337 - if (!err && dev_mynode(dev, d_inode(dentry), &stat)) { 338 struct iattr newattrs; 339 /* 340 * before unlinking this node, reset permissions ··· 338 */ 339 newattrs.ia_uid = GLOBAL_ROOT_UID; 340 newattrs.ia_gid = GLOBAL_ROOT_GID; 341 - newattrs.ia_mode = stat.mode & ~0777; 342 newattrs.ia_valid = 343 ATTR_UID|ATTR_GID|ATTR_MODE; 344 inode_lock(d_inode(dentry));
··· 296 return err; 297 } 298 299 + static int dev_mynode(struct device *dev, struct inode *inode) 300 { 301 /* did we create it */ 302 if (inode->i_private != &thread) ··· 304 305 /* does the dev_t match */ 306 if (is_blockdev(dev)) { 307 + if (!S_ISBLK(inode->i_mode)) 308 return 0; 309 } else { 310 + if (!S_ISCHR(inode->i_mode)) 311 return 0; 312 } 313 + if (inode->i_rdev != dev->devt) 314 return 0; 315 316 /* ours */ ··· 321 { 322 struct path parent; 323 struct dentry *dentry; 324 + struct inode *inode; 325 int deleted = 0; 326 + int err = 0; 327 328 dentry = kern_path_locked(nodename, &parent); 329 if (IS_ERR(dentry)) 330 return PTR_ERR(dentry); 331 332 + inode = d_inode(dentry); 333 + if (dev_mynode(dev, inode)) { 334 struct iattr newattrs; 335 /* 336 * before unlinking this node, reset permissions ··· 342 */ 343 newattrs.ia_uid = GLOBAL_ROOT_UID; 344 newattrs.ia_gid = GLOBAL_ROOT_GID; 345 + newattrs.ia_mode = inode->i_mode & ~0777; 346 newattrs.ia_valid = 347 ATTR_UID|ATTR_GID|ATTR_MODE; 348 inode_lock(d_inode(dentry));
+17 -24
drivers/base/memory.c
··· 816 return 0; 817 } 818 819 - static int __init add_boot_memory_block(unsigned long base_section_nr) 820 - { 821 - unsigned long nr; 822 - 823 - for_each_present_section_nr(base_section_nr, nr) { 824 - if (nr >= (base_section_nr + sections_per_block)) 825 - break; 826 - 827 - return add_memory_block(memory_block_id(base_section_nr), 828 - MEM_ONLINE, NULL, NULL); 829 - } 830 - 831 - return 0; 832 - } 833 - 834 static int add_hotplug_memory_block(unsigned long block_id, 835 struct vmem_altmap *altmap, 836 struct memory_group *group) ··· 942 void __init memory_dev_init(void) 943 { 944 int ret; 945 - unsigned long block_sz, nr; 946 947 /* Validate the configured memory block size */ 948 block_sz = memory_block_size_bytes(); ··· 955 panic("%s() failed to register subsystem: %d\n", __func__, ret); 956 957 /* 958 - * Create entries for memory sections that were found 959 - * during boot and have been initialized 960 */ 961 - for (nr = 0; nr <= __highest_present_section_nr; 962 - nr += sections_per_block) { 963 - ret = add_boot_memory_block(nr); 964 - if (ret) 965 - panic("%s() failed to add memory block: %d\n", __func__, 966 - ret); 967 } 968 } 969
··· 816 return 0; 817 } 818 819 static int add_hotplug_memory_block(unsigned long block_id, 820 struct vmem_altmap *altmap, 821 struct memory_group *group) ··· 957 void __init memory_dev_init(void) 958 { 959 int ret; 960 + unsigned long block_sz, block_id, nr; 961 962 /* Validate the configured memory block size */ 963 block_sz = memory_block_size_bytes(); ··· 970 panic("%s() failed to register subsystem: %d\n", __func__, ret); 971 972 /* 973 + * Create entries for memory sections that were found during boot 974 + * and have been initialized. Use @block_id to track the last 975 + * handled block and initialize it to an invalid value (ULONG_MAX) 976 + * to bypass the block ID matching check for the first present 977 + * block so that it can be covered. 978 */ 979 + block_id = ULONG_MAX; 980 + for_each_present_section_nr(0, nr) { 981 + if (block_id != ULONG_MAX && memory_block_id(nr) == block_id) 982 + continue; 983 + 984 + block_id = memory_block_id(nr); 985 + ret = add_memory_block(block_id, MEM_ONLINE, NULL, NULL); 986 + if (ret) { 987 + panic("%s() failed to add memory block: %d\n", 988 + __func__, ret); 989 + } 990 } 991 } 992
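The rewritten loop in memory_dev_init() walks every present section and creates one memory block per distinct block ID, using ULONG_MAX as the "no block yet" marker for the first iteration. A small userspace model of that dedup logic follows; the section list and SECTIONS_PER_BLOCK value are invented for the example.

#include <stdio.h>
#include <limits.h>

#define SECTIONS_PER_BLOCK 8UL   /* illustrative value */

static unsigned long memory_block_id(unsigned long section_nr)
{
	return section_nr / SECTIONS_PER_BLOCK;
}

int main(void)
{
	/* stand-in for the present sections discovered at boot, in order */
	unsigned long present[] = { 0, 1, 2, 9, 10, 17, 64, 65 };
	unsigned long block_id = ULONG_MAX;   /* invalid: always create the first block */
	size_t i;

	for (i = 0; i < sizeof(present) / sizeof(present[0]); i++) {
		unsigned long nr = present[i];

		/* skip sections that fall into the block we just created */
		if (block_id != ULONG_MAX && memory_block_id(nr) == block_id)
			continue;

		block_id = memory_block_id(nr);
		printf("add_memory_block(%lu)\n", block_id);   /* models add_memory_block() */
	}
	return 0;
}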
+1 -2
drivers/base/swnode.c
··· 1080 if (!swnode) 1081 return; 1082 1083 ret = sysfs_create_link(&dev->kobj, &swnode->kobj, "software_node"); 1084 if (ret) 1085 return; ··· 1090 sysfs_remove_link(&dev->kobj, "software_node"); 1091 return; 1092 } 1093 - 1094 - kobject_get(&swnode->kobj); 1095 } 1096 1097 void software_node_notify_remove(struct device *dev)
··· 1080 if (!swnode) 1081 return; 1082 1083 + kobject_get(&swnode->kobj); 1084 ret = sysfs_create_link(&dev->kobj, &swnode->kobj, "software_node"); 1085 if (ret) 1086 return; ··· 1089 sysfs_remove_link(&dev->kobj, "software_node"); 1090 return; 1091 } 1092 } 1093 1094 void software_node_notify_remove(struct device *dev)
+24 -17
drivers/block/ublk_drv.c
··· 1683 ublk_put_disk(disk); 1684 } 1685 1686 - static void ublk_cancel_cmd(struct ublk_queue *ubq, struct ublk_io *io, 1687 unsigned int issue_flags) 1688 { 1689 bool done; 1690 1691 if (!(io->flags & UBLK_IO_FLAG_ACTIVE)) 1692 return; 1693 1694 spin_lock(&ubq->cancel_lock); ··· 1739 struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd); 1740 struct ublk_queue *ubq = pdu->ubq; 1741 struct task_struct *task; 1742 - struct ublk_io *io; 1743 1744 if (WARN_ON_ONCE(!ubq)) 1745 return; ··· 1753 if (!ubq->canceling) 1754 ublk_start_cancel(ubq); 1755 1756 - io = &ubq->ios[pdu->tag]; 1757 - WARN_ON_ONCE(io->cmd != cmd); 1758 - ublk_cancel_cmd(ubq, io, issue_flags); 1759 } 1760 1761 static inline bool ublk_queue_ready(struct ublk_queue *ubq) ··· 1767 int i; 1768 1769 for (i = 0; i < ubq->q_depth; i++) 1770 - ublk_cancel_cmd(ubq, &ubq->ios[i], IO_URING_F_UNLOCKED); 1771 } 1772 1773 /* Cancel all pending commands, must be called after del_gendisk() returns */ ··· 1899 ublk_reset_io_flags(ub); 1900 complete_all(&ub->completion); 1901 } 1902 - } 1903 - 1904 - static void ublk_handle_need_get_data(struct ublk_device *ub, int q_id, 1905 - int tag) 1906 - { 1907 - struct ublk_queue *ubq = ublk_get_queue(ub, q_id); 1908 - struct request *req = blk_mq_tag_to_rq(ub->tag_set.tags[q_id], tag); 1909 - 1910 - ublk_queue_cmd(ubq, req); 1911 } 1912 1913 static inline int ublk_check_cmd_op(u32 cmd_op) ··· 2109 if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV)) 2110 goto out; 2111 ublk_fill_io_cmd(io, cmd, ub_cmd->addr); 2112 - ublk_handle_need_get_data(ub, ub_cmd->q_id, ub_cmd->tag); 2113 - break; 2114 default: 2115 goto out; 2116 }
··· 1683 ublk_put_disk(disk); 1684 } 1685 1686 + static void ublk_cancel_cmd(struct ublk_queue *ubq, unsigned tag, 1687 unsigned int issue_flags) 1688 { 1689 + struct ublk_io *io = &ubq->ios[tag]; 1690 + struct ublk_device *ub = ubq->dev; 1691 + struct request *req; 1692 bool done; 1693 1694 if (!(io->flags & UBLK_IO_FLAG_ACTIVE)) 1695 + return; 1696 + 1697 + /* 1698 + * Don't try to cancel this command if the request is started for 1699 + * avoiding race between io_uring_cmd_done() and 1700 + * io_uring_cmd_complete_in_task(). 1701 + * 1702 + * Either the started request will be aborted via __ublk_abort_rq(), 1703 + * then this uring_cmd is canceled next time, or it will be done in 1704 + * task work function ublk_dispatch_req() because io_uring guarantees 1705 + * that ublk_dispatch_req() is always called 1706 + */ 1707 + req = blk_mq_tag_to_rq(ub->tag_set.tags[ubq->q_id], tag); 1708 + if (req && blk_mq_request_started(req)) 1709 return; 1710 1711 spin_lock(&ubq->cancel_lock); ··· 1722 struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd); 1723 struct ublk_queue *ubq = pdu->ubq; 1724 struct task_struct *task; 1725 1726 if (WARN_ON_ONCE(!ubq)) 1727 return; ··· 1737 if (!ubq->canceling) 1738 ublk_start_cancel(ubq); 1739 1740 + WARN_ON_ONCE(ubq->ios[pdu->tag].cmd != cmd); 1741 + ublk_cancel_cmd(ubq, pdu->tag, issue_flags); 1742 } 1743 1744 static inline bool ublk_queue_ready(struct ublk_queue *ubq) ··· 1752 int i; 1753 1754 for (i = 0; i < ubq->q_depth; i++) 1755 + ublk_cancel_cmd(ubq, i, IO_URING_F_UNLOCKED); 1756 } 1757 1758 /* Cancel all pending commands, must be called after del_gendisk() returns */ ··· 1884 ublk_reset_io_flags(ub); 1885 complete_all(&ub->completion); 1886 } 1887 } 1888 1889 static inline int ublk_check_cmd_op(u32 cmd_op) ··· 2103 if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV)) 2104 goto out; 2105 ublk_fill_io_cmd(io, cmd, ub_cmd->addr); 2106 + req = blk_mq_tag_to_rq(ub->tag_set.tags[ub_cmd->q_id], tag); 2107 + ublk_dispatch_req(ubq, req, issue_flags); 2108 + return -EIOCBQUEUED; 2109 default: 2110 goto out; 2111 }
+1 -1
drivers/char/misc.c
··· 315 goto fail_remove; 316 317 err = -EIO; 318 - if (register_chrdev(MISC_MAJOR, "misc", &misc_fops)) 319 goto fail_printk; 320 return 0; 321
··· 315 goto fail_remove; 316 317 err = -EIO; 318 + if (__register_chrdev(MISC_MAJOR, 0, MINORMASK + 1, "misc", &misc_fops)) 319 goto fail_printk; 320 return 0; 321
+4 -3
drivers/char/virtio_console.c
··· 1576 break; 1577 case VIRTIO_CONSOLE_RESIZE: { 1578 struct { 1579 - __u16 rows; 1580 - __u16 cols; 1581 } size; 1582 1583 if (!is_console_port(port)) ··· 1585 1586 memcpy(&size, buf->buf + buf->offset + sizeof(*cpkt), 1587 sizeof(size)); 1588 - set_console_size(port, size.rows, size.cols); 1589 1590 port->cons.hvc->irq_requested = 1; 1591 resize_console(port);
··· 1576 break; 1577 case VIRTIO_CONSOLE_RESIZE: { 1578 struct { 1579 + __virtio16 cols; 1580 + __virtio16 rows; 1581 } size; 1582 1583 if (!is_console_port(port)) ··· 1585 1586 memcpy(&size, buf->buf + buf->offset + sizeof(*cpkt), 1587 sizeof(size)); 1588 + set_console_size(port, virtio16_to_cpu(vdev, size.rows), 1589 + virtio16_to_cpu(vdev, size.cols)); 1590 1591 port->cons.hvc->irq_requested = 1; 1592 resize_console(port);
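The virtio_console resize fix above switches to the cols-then-rows field order and converts each field from virtio byte order (little-endian for modern, non-legacy devices). A minimal userspace sketch of that conversion step, using glibc's le16toh() from <endian.h> as a stand-in for virtio16_to_cpu(); the wire bytes are made up.

#include <endian.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct resize_msg {            /* cols first, then rows, both little-endian on the wire */
	uint16_t cols;
	uint16_t rows;
};

int main(void)
{
	/* wire bytes for cols=80, rows=25, little-endian */
	unsigned char wire[4] = { 0x50, 0x00, 0x19, 0x00 };
	struct resize_msg size;

	memcpy(&size, wire, sizeof(size));
	printf("cols=%u rows=%u\n", le16toh(size.cols), le16toh(size.rows));
	return 0;
}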
+1 -1
drivers/comedi/drivers/jr3_pci.c
··· 758 struct jr3_pci_dev_private *devpriv = dev->private; 759 760 if (devpriv) 761 - timer_delete_sync(&devpriv->timer); 762 763 comedi_pci_detach(dev); 764 }
··· 758 struct jr3_pci_dev_private *devpriv = dev->private; 759 760 if (devpriv) 761 + timer_shutdown_sync(&devpriv->timer); 762 763 comedi_pci_detach(dev); 764 }
+10 -10
drivers/cpufreq/Kconfig.arm
··· 76 config ARM_BRCMSTB_AVS_CPUFREQ 77 tristate "Broadcom STB AVS CPUfreq driver" 78 depends on (ARCH_BRCMSTB && !ARM_SCMI_CPUFREQ) || COMPILE_TEST 79 - default y 80 help 81 Some Broadcom STB SoCs use a co-processor running proprietary firmware 82 ("AVS") to handle voltage and frequency scaling. This driver provides ··· 88 tristate "Calxeda Highbank-based" 89 depends on ARCH_HIGHBANK || COMPILE_TEST 90 depends on CPUFREQ_DT && REGULATOR && PL320_MBOX 91 - default m 92 help 93 This adds the CPUFreq driver for Calxeda Highbank SoC 94 based boards. ··· 133 config ARM_MEDIATEK_CPUFREQ_HW 134 tristate "MediaTek CPUFreq HW driver" 135 depends on ARCH_MEDIATEK || COMPILE_TEST 136 - default m 137 help 138 Support for the CPUFreq HW driver. 139 Some MediaTek chipsets have a HW engine to offload the steps ··· 181 config ARM_S3C64XX_CPUFREQ 182 bool "Samsung S3C64XX" 183 depends on CPU_S3C6410 || COMPILE_TEST 184 - default y 185 help 186 This adds the CPUFreq driver for Samsung S3C6410 SoC. 187 ··· 190 config ARM_S5PV210_CPUFREQ 191 bool "Samsung S5PV210 and S5PC110" 192 depends on CPU_S5PV210 || COMPILE_TEST 193 - default y 194 help 195 This adds the CPUFreq driver for Samsung S5PV210 and 196 S5PC110 SoCs. ··· 214 config ARM_SPEAR_CPUFREQ 215 bool "SPEAr CPUFreq support" 216 depends on PLAT_SPEAR || COMPILE_TEST 217 - default y 218 help 219 This adds the CPUFreq driver support for SPEAr SOCs. 220 ··· 233 tristate "Tegra20/30 CPUFreq support" 234 depends on ARCH_TEGRA || COMPILE_TEST 235 depends on CPUFREQ_DT 236 - default y 237 help 238 This adds the CPUFreq driver support for Tegra20/30 SOCs. 239 ··· 241 bool "Tegra124 CPUFreq support" 242 depends on ARCH_TEGRA || COMPILE_TEST 243 depends on CPUFREQ_DT 244 - default y 245 help 246 This adds the CPUFreq driver support for Tegra124 SOCs. 247 ··· 256 tristate "Tegra194 CPUFreq support" 257 depends on ARCH_TEGRA_194_SOC || ARCH_TEGRA_234_SOC || (64BIT && COMPILE_TEST) 258 depends on TEGRA_BPMP 259 - default y 260 help 261 This adds CPU frequency driver support for Tegra194 SOCs. 262 263 config ARM_TI_CPUFREQ 264 bool "Texas Instruments CPUFreq support" 265 depends on ARCH_OMAP2PLUS || ARCH_K3 || COMPILE_TEST 266 - default y 267 help 268 This driver enables valid OPPs on the running platform based on 269 values contained within the SoC in use. Enable this in order to
··· 76 config ARM_BRCMSTB_AVS_CPUFREQ 77 tristate "Broadcom STB AVS CPUfreq driver" 78 depends on (ARCH_BRCMSTB && !ARM_SCMI_CPUFREQ) || COMPILE_TEST 79 + default y if ARCH_BRCMSTB && !ARM_SCMI_CPUFREQ 80 help 81 Some Broadcom STB SoCs use a co-processor running proprietary firmware 82 ("AVS") to handle voltage and frequency scaling. This driver provides ··· 88 tristate "Calxeda Highbank-based" 89 depends on ARCH_HIGHBANK || COMPILE_TEST 90 depends on CPUFREQ_DT && REGULATOR && PL320_MBOX 91 + default m if ARCH_HIGHBANK 92 help 93 This adds the CPUFreq driver for Calxeda Highbank SoC 94 based boards. ··· 133 config ARM_MEDIATEK_CPUFREQ_HW 134 tristate "MediaTek CPUFreq HW driver" 135 depends on ARCH_MEDIATEK || COMPILE_TEST 136 + default m if ARCH_MEDIATEK 137 help 138 Support for the CPUFreq HW driver. 139 Some MediaTek chipsets have a HW engine to offload the steps ··· 181 config ARM_S3C64XX_CPUFREQ 182 bool "Samsung S3C64XX" 183 depends on CPU_S3C6410 || COMPILE_TEST 184 + default CPU_S3C6410 185 help 186 This adds the CPUFreq driver for Samsung S3C6410 SoC. 187 ··· 190 config ARM_S5PV210_CPUFREQ 191 bool "Samsung S5PV210 and S5PC110" 192 depends on CPU_S5PV210 || COMPILE_TEST 193 + default CPU_S5PV210 194 help 195 This adds the CPUFreq driver for Samsung S5PV210 and 196 S5PC110 SoCs. ··· 214 config ARM_SPEAR_CPUFREQ 215 bool "SPEAr CPUFreq support" 216 depends on PLAT_SPEAR || COMPILE_TEST 217 + default PLAT_SPEAR 218 help 219 This adds the CPUFreq driver support for SPEAr SOCs. 220 ··· 233 tristate "Tegra20/30 CPUFreq support" 234 depends on ARCH_TEGRA || COMPILE_TEST 235 depends on CPUFREQ_DT 236 + default ARCH_TEGRA 237 help 238 This adds the CPUFreq driver support for Tegra20/30 SOCs. 239 ··· 241 bool "Tegra124 CPUFreq support" 242 depends on ARCH_TEGRA || COMPILE_TEST 243 depends on CPUFREQ_DT 244 + default ARCH_TEGRA 245 help 246 This adds the CPUFreq driver support for Tegra124 SOCs. 247 ··· 256 tristate "Tegra194 CPUFreq support" 257 depends on ARCH_TEGRA_194_SOC || ARCH_TEGRA_234_SOC || (64BIT && COMPILE_TEST) 258 depends on TEGRA_BPMP 259 + default ARCH_TEGRA_194_SOC || ARCH_TEGRA_234_SOC 260 help 261 This adds CPU frequency driver support for Tegra194 SOCs. 262 263 config ARM_TI_CPUFREQ 264 bool "Texas Instruments CPUFreq support" 265 depends on ARCH_OMAP2PLUS || ARCH_K3 || COMPILE_TEST 266 + default ARCH_OMAP2PLUS || ARCH_K3 267 help 268 This driver enables valid OPPs on the running platform based on 269 values contained within the SoC in use. Enable this in order to
+8 -2
drivers/cpufreq/apple-soc-cpufreq.c
··· 134 135 static unsigned int apple_soc_cpufreq_get_rate(unsigned int cpu) 136 { 137 - struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu); 138 - struct apple_cpu_priv *priv = policy->driver_data; 139 struct cpufreq_frequency_table *p; 140 unsigned int pstate; 141 142 if (priv->info->cur_pstate_mask) { 143 u32 reg = readl_relaxed(priv->reg_base + APPLE_DVFS_STATUS);
··· 134 135 static unsigned int apple_soc_cpufreq_get_rate(unsigned int cpu) 136 { 137 + struct cpufreq_policy *policy; 138 + struct apple_cpu_priv *priv; 139 struct cpufreq_frequency_table *p; 140 unsigned int pstate; 141 + 142 + policy = cpufreq_cpu_get_raw(cpu); 143 + if (unlikely(!policy)) 144 + return 0; 145 + 146 + priv = policy->driver_data; 147 148 if (priv->info->cur_pstate_mask) { 149 u32 reg = readl_relaxed(priv->reg_base + APPLE_DVFS_STATUS);
+1 -1
drivers/cpufreq/cppc_cpufreq.c
··· 747 int ret; 748 749 if (!policy) 750 - return -ENODEV; 751 752 cpu_data = policy->driver_data; 753
··· 747 int ret; 748 749 if (!policy) 750 + return 0; 751 752 cpu_data = policy->driver_data; 753
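Returning 0 rather than -ENODEV matters because a cpufreq driver's ->get() callback returns an unsigned int, so a negative errno would be silently reinterpreted as an enormous "frequency"; 0 is the conventional "rate unknown" answer, and the neighbouring get-rate fixes in this series follow the same convention. A two-line illustration of the reinterpretation, with the errno value written out as a plain constant so the snippet stays self-contained:

#include <stdio.h>

#define ENODEV 19   /* same numeric value as the kernel's errno */

static unsigned int broken_get_rate(void)
{
	return -ENODEV;   /* what an error return from an unsigned-returning ->get() becomes */
}

int main(void)
{
	/* prints 4294967277 with a 32-bit unsigned int: nonsense as a rate in kHz */
	printf("%u\n", broken_get_rate());
	return 0;
}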
+1
drivers/cpufreq/cpufreq-dt-platdev.c
··· 175 { .compatible = "qcom,sm8350", }, 176 { .compatible = "qcom,sm8450", }, 177 { .compatible = "qcom,sm8550", }, 178 179 { .compatible = "st,stih407", }, 180 { .compatible = "st,stih410", },
··· 175 { .compatible = "qcom,sm8350", }, 176 { .compatible = "qcom,sm8450", }, 177 { .compatible = "qcom,sm8550", }, 178 + { .compatible = "qcom,sm8650", }, 179 180 { .compatible = "st,stih407", }, 181 { .compatible = "st,stih410", },
+8 -2
drivers/cpufreq/scmi-cpufreq.c
··· 37 38 static unsigned int scmi_cpufreq_get_rate(unsigned int cpu) 39 { 40 - struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu); 41 - struct scmi_data *priv = policy->driver_data; 42 unsigned long rate; 43 int ret; 44 45 ret = perf_ops->freq_get(ph, priv->domain_id, &rate, false); 46 if (ret)
··· 37 38 static unsigned int scmi_cpufreq_get_rate(unsigned int cpu) 39 { 40 + struct cpufreq_policy *policy; 41 + struct scmi_data *priv; 42 unsigned long rate; 43 int ret; 44 + 45 + policy = cpufreq_cpu_get_raw(cpu); 46 + if (unlikely(!policy)) 47 + return 0; 48 + 49 + priv = policy->driver_data; 50 51 ret = perf_ops->freq_get(ph, priv->domain_id, &rate, false); 52 if (ret)
+10 -3
drivers/cpufreq/scpi-cpufreq.c
··· 29 30 static unsigned int scpi_cpufreq_get_rate(unsigned int cpu) 31 { 32 - struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu); 33 - struct scpi_data *priv = policy->driver_data; 34 - unsigned long rate = clk_get_rate(priv->clk); 35 36 return rate / 1000; 37 }
··· 29 30 static unsigned int scpi_cpufreq_get_rate(unsigned int cpu) 31 { 32 + struct cpufreq_policy *policy; 33 + struct scpi_data *priv; 34 + unsigned long rate; 35 + 36 + policy = cpufreq_cpu_get_raw(cpu); 37 + if (unlikely(!policy)) 38 + return 0; 39 + 40 + priv = policy->driver_data; 41 + rate = clk_get_rate(priv->clk); 42 43 return rate / 1000; 44 }
+12 -6
drivers/cpufreq/sun50i-cpufreq-nvmem.c
··· 194 struct nvmem_cell *speedbin_nvmem; 195 const struct of_device_id *match; 196 struct device *cpu_dev; 197 - u32 *speedbin; 198 int ret; 199 200 cpu_dev = get_cpu_device(0); ··· 219 return dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem), 220 "Could not get nvmem cell\n"); 221 222 - speedbin = nvmem_cell_read(speedbin_nvmem, NULL); 223 nvmem_cell_put(speedbin_nvmem); 224 - if (IS_ERR(speedbin)) 225 - return PTR_ERR(speedbin); 226 227 - ret = opp_data->efuse_xlate(*speedbin); 228 229 - kfree(speedbin); 230 231 return ret; 232 };
··· 194 struct nvmem_cell *speedbin_nvmem; 195 const struct of_device_id *match; 196 struct device *cpu_dev; 197 + void *speedbin_ptr; 198 + u32 speedbin = 0; 199 + size_t len; 200 int ret; 201 202 cpu_dev = get_cpu_device(0); ··· 217 return dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem), 218 "Could not get nvmem cell\n"); 219 220 + speedbin_ptr = nvmem_cell_read(speedbin_nvmem, &len); 221 nvmem_cell_put(speedbin_nvmem); 222 + if (IS_ERR(speedbin_ptr)) 223 + return PTR_ERR(speedbin_ptr); 224 225 + if (len <= 4) 226 + memcpy(&speedbin, speedbin_ptr, len); 227 + speedbin = le32_to_cpu(speedbin); 228 229 + ret = opp_data->efuse_xlate(speedbin); 230 + 231 + kfree(speedbin_ptr); 232 233 return ret; 234 };
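The sun50i probe fix above copies at most four bytes of the NVMEM cell into a zero-initialised u32 and then fixes up the byte order, so short speed-bin cells are handled without over-reading. A short userspace sketch of that bounded-copy idea, with le32toh() standing in for le32_to_cpu() and an invented two-byte cell:

#include <endian.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* pretend the NVMEM cell is only two bytes long, value 0x0102 little-endian */
	unsigned char cell[2] = { 0x02, 0x01 };
	size_t len = sizeof(cell);
	uint32_t speedbin = 0;

	if (len <= sizeof(speedbin))        /* copy no more than 4 bytes */
		memcpy(&speedbin, cell, len);
	speedbin = le32toh(speedbin);       /* stand-in for le32_to_cpu() */

	printf("speedbin = 0x%x\n", speedbin);   /* 0x102 on any host endianness */
	return 0;
}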
+6
drivers/crypto/atmel-sha204a.c
··· 163 i2c_priv->hwrng.name = dev_name(&client->dev); 164 i2c_priv->hwrng.read = atmel_sha204a_rng_read; 165 166 ret = devm_hwrng_register(&client->dev, &i2c_priv->hwrng); 167 if (ret) 168 dev_warn(&client->dev, "failed to register RNG (%d)\n", ret);
··· 163 i2c_priv->hwrng.name = dev_name(&client->dev); 164 i2c_priv->hwrng.read = atmel_sha204a_rng_read; 165 166 + /* 167 + * According to review by Bill Cox [1], this HWRNG has very low entropy. 168 + * [1] https://www.metzdowd.com/pipermail/cryptography/2014-December/023858.html 169 + */ 170 + i2c_priv->hwrng.quality = 1; 171 + 172 ret = devm_hwrng_register(&client->dev, &i2c_priv->hwrng); 173 if (ret) 174 dev_warn(&client->dev, "failed to register RNG (%d)\n", ret);
+1 -1
drivers/cxl/core/core.h
··· 119 120 int cxl_ras_init(void); 121 void cxl_ras_exit(void); 122 - int cxl_gpf_port_setup(struct device *dport_dev, struct cxl_port *port); 123 int cxl_acpi_get_extended_linear_cache_size(struct resource *backing_res, 124 int nid, resource_size_t *size); 125
··· 119 120 int cxl_ras_init(void); 121 void cxl_ras_exit(void); 122 + int cxl_gpf_port_setup(struct cxl_dport *dport); 123 int cxl_acpi_get_extended_linear_cache_size(struct resource *backing_res, 124 int nid, resource_size_t *size); 125
+3 -3
drivers/cxl/core/features.c
··· 528 rc = cxl_set_feature(cxl_mbox, &feat_in->uuid, 529 feat_in->version, feat_in->feat_data, 530 data_size, flags, offset, &return_code); 531 if (rc) { 532 rpc_out->retval = return_code; 533 return no_free_ptr(rpc_out); 534 } 535 536 rpc_out->retval = CXL_MBOX_CMD_RC_SUCCESS; 537 - *out_len = sizeof(*rpc_out); 538 539 return no_free_ptr(rpc_out); 540 } ··· 677 fwctl_put(fwctl_dev); 678 } 679 680 - int devm_cxl_setup_fwctl(struct cxl_memdev *cxlmd) 681 { 682 struct cxl_dev_state *cxlds = cxlmd->cxlds; 683 struct cxl_features_state *cxlfs; ··· 700 if (rc) 701 return rc; 702 703 - return devm_add_action_or_reset(&cxlmd->dev, free_memdev_fwctl, 704 no_free_ptr(fwctl_dev)); 705 } 706 EXPORT_SYMBOL_NS_GPL(devm_cxl_setup_fwctl, "CXL");
··· 528 rc = cxl_set_feature(cxl_mbox, &feat_in->uuid, 529 feat_in->version, feat_in->feat_data, 530 data_size, flags, offset, &return_code); 531 + *out_len = sizeof(*rpc_out); 532 if (rc) { 533 rpc_out->retval = return_code; 534 return no_free_ptr(rpc_out); 535 } 536 537 rpc_out->retval = CXL_MBOX_CMD_RC_SUCCESS; 538 539 return no_free_ptr(rpc_out); 540 } ··· 677 fwctl_put(fwctl_dev); 678 } 679 680 + int devm_cxl_setup_fwctl(struct device *host, struct cxl_memdev *cxlmd) 681 { 682 struct cxl_dev_state *cxlds = cxlmd->cxlds; 683 struct cxl_features_state *cxlfs; ··· 700 if (rc) 701 return rc; 702 703 + return devm_add_action_or_reset(host, free_memdev_fwctl, 704 no_free_ptr(fwctl_dev)); 705 } 706 EXPORT_SYMBOL_NS_GPL(devm_cxl_setup_fwctl, "CXL");
+17 -13
drivers/cxl/core/pci.c
··· 1072 #define GPF_TIMEOUT_BASE_MAX 2 1073 #define GPF_TIMEOUT_SCALE_MAX 7 /* 10 seconds */ 1074 1075 - u16 cxl_gpf_get_dvsec(struct device *dev, bool is_port) 1076 { 1077 u16 dvsec; 1078 1079 if (!dev_is_pci(dev)) 1080 return 0; 1081 1082 - dvsec = pci_find_dvsec_capability(to_pci_dev(dev), PCI_VENDOR_ID_CXL, 1083 is_port ? CXL_DVSEC_PORT_GPF : CXL_DVSEC_DEVICE_GPF); 1084 if (!dvsec) 1085 dev_warn(dev, "%s GPF DVSEC not present\n", ··· 1134 return rc; 1135 } 1136 1137 - int cxl_gpf_port_setup(struct device *dport_dev, struct cxl_port *port) 1138 { 1139 - struct pci_dev *pdev; 1140 - 1141 - if (!port) 1142 return -EINVAL; 1143 1144 - if (!port->gpf_dvsec) { 1145 int dvsec; 1146 1147 - dvsec = cxl_gpf_get_dvsec(dport_dev, true); 1148 if (!dvsec) 1149 return -EINVAL; 1150 1151 - port->gpf_dvsec = dvsec; 1152 } 1153 - 1154 - pdev = to_pci_dev(dport_dev); 1155 - update_gpf_port_dvsec(pdev, port->gpf_dvsec, 1); 1156 - update_gpf_port_dvsec(pdev, port->gpf_dvsec, 2); 1157 1158 return 0; 1159 }
··· 1072 #define GPF_TIMEOUT_BASE_MAX 2 1073 #define GPF_TIMEOUT_SCALE_MAX 7 /* 10 seconds */ 1074 1075 + u16 cxl_gpf_get_dvsec(struct device *dev) 1076 { 1077 + struct pci_dev *pdev; 1078 + bool is_port = true; 1079 u16 dvsec; 1080 1081 if (!dev_is_pci(dev)) 1082 return 0; 1083 1084 + pdev = to_pci_dev(dev); 1085 + if (pci_pcie_type(pdev) == PCI_EXP_TYPE_ENDPOINT) 1086 + is_port = false; 1087 + 1088 + dvsec = pci_find_dvsec_capability(pdev, PCI_VENDOR_ID_CXL, 1089 is_port ? CXL_DVSEC_PORT_GPF : CXL_DVSEC_DEVICE_GPF); 1090 if (!dvsec) 1091 dev_warn(dev, "%s GPF DVSEC not present\n", ··· 1128 return rc; 1129 } 1130 1131 + int cxl_gpf_port_setup(struct cxl_dport *dport) 1132 { 1133 + if (!dport) 1134 return -EINVAL; 1135 1136 + if (!dport->gpf_dvsec) { 1137 + struct pci_dev *pdev; 1138 int dvsec; 1139 1140 + dvsec = cxl_gpf_get_dvsec(dport->dport_dev); 1141 if (!dvsec) 1142 return -EINVAL; 1143 1144 + dport->gpf_dvsec = dvsec; 1145 + pdev = to_pci_dev(dport->dport_dev); 1146 + update_gpf_port_dvsec(pdev, dport->gpf_dvsec, 1); 1147 + update_gpf_port_dvsec(pdev, dport->gpf_dvsec, 2); 1148 } 1149 1150 return 0; 1151 }
+1 -1
drivers/cxl/core/port.c
··· 1678 if (rc && rc != -EBUSY) 1679 return rc; 1680 1681 - cxl_gpf_port_setup(dport_dev, port); 1682 1683 /* Any more ports to add between this one and the root? */ 1684 if (!dev_is_cxl_root_child(&port->dev))
··· 1678 if (rc && rc != -EBUSY) 1679 return rc; 1680 1681 + cxl_gpf_port_setup(dport); 1682 1683 /* Any more ports to add between this one and the root? */ 1684 if (!dev_is_cxl_root_child(&port->dev))
-4
drivers/cxl/core/regs.c
··· 581 resource_size_t rcrb = ri->base; 582 void __iomem *addr; 583 u32 bar0, bar1; 584 - u16 cmd; 585 u32 id; 586 587 if (which == CXL_RCRB_UPSTREAM) ··· 602 } 603 604 id = readl(addr + PCI_VENDOR_ID); 605 - cmd = readw(addr + PCI_COMMAND); 606 bar0 = readl(addr + PCI_BASE_ADDRESS_0); 607 bar1 = readl(addr + PCI_BASE_ADDRESS_1); 608 iounmap(addr); ··· 616 dev_err(dev, "Failed to access Downstream Port RCRB\n"); 617 return CXL_RESOURCE_NONE; 618 } 619 - if (!(cmd & PCI_COMMAND_MEMORY)) 620 - return CXL_RESOURCE_NONE; 621 /* The RCRB is a Memory Window, and the MEM_TYPE_1M bit is obsolete */ 622 if (bar0 & (PCI_BASE_ADDRESS_MEM_TYPE_1M | PCI_BASE_ADDRESS_SPACE_IO)) 623 return CXL_RESOURCE_NONE;
··· 581 resource_size_t rcrb = ri->base; 582 void __iomem *addr; 583 u32 bar0, bar1; 584 u32 id; 585 586 if (which == CXL_RCRB_UPSTREAM) ··· 603 } 604 605 id = readl(addr + PCI_VENDOR_ID); 606 bar0 = readl(addr + PCI_BASE_ADDRESS_0); 607 bar1 = readl(addr + PCI_BASE_ADDRESS_1); 608 iounmap(addr); ··· 618 dev_err(dev, "Failed to access Downstream Port RCRB\n"); 619 return CXL_RESOURCE_NONE; 620 } 621 /* The RCRB is a Memory Window, and the MEM_TYPE_1M bit is obsolete */ 622 if (bar0 & (PCI_BASE_ADDRESS_MEM_TYPE_1M | PCI_BASE_ADDRESS_SPACE_IO)) 623 return CXL_RESOURCE_NONE;
+3 -3
drivers/cxl/cxl.h
··· 592 * @cdat: Cached CDAT data 593 * @cdat_available: Should a CDAT attribute be available in sysfs 594 * @pci_latency: Upstream latency in picoseconds 595 - * @gpf_dvsec: Cached GPF port DVSEC 596 */ 597 struct cxl_port { 598 struct device dev; ··· 615 } cdat; 616 bool cdat_available; 617 long pci_latency; 618 - int gpf_dvsec; 619 }; 620 621 /** ··· 662 * @regs: Dport parsed register blocks 663 * @coord: access coordinates (bandwidth and latency performance attributes) 664 * @link_latency: calculated PCIe downstream latency 665 */ 666 struct cxl_dport { 667 struct device *dport_dev; ··· 674 struct cxl_regs regs; 675 struct access_coordinate coord[ACCESS_COORDINATE_MAX]; 676 long link_latency; 677 }; 678 679 /** ··· 910 #define __mock static 911 #endif 912 913 - u16 cxl_gpf_get_dvsec(struct device *dev, bool is_port); 914 915 #endif /* __CXL_H__ */
··· 592 * @cdat: Cached CDAT data 593 * @cdat_available: Should a CDAT attribute be available in sysfs 594 * @pci_latency: Upstream latency in picoseconds 595 */ 596 struct cxl_port { 597 struct device dev; ··· 616 } cdat; 617 bool cdat_available; 618 long pci_latency; 619 }; 620 621 /** ··· 664 * @regs: Dport parsed register blocks 665 * @coord: access coordinates (bandwidth and latency performance attributes) 666 * @link_latency: calculated PCIe downstream latency 667 + * @gpf_dvsec: Cached GPF port DVSEC 668 */ 669 struct cxl_dport { 670 struct device *dport_dev; ··· 675 struct cxl_regs regs; 676 struct access_coordinate coord[ACCESS_COORDINATE_MAX]; 677 long link_latency; 678 + int gpf_dvsec; 679 }; 680 681 /** ··· 910 #define __mock static 911 #endif 912 913 + u16 cxl_gpf_get_dvsec(struct device *dev); 914 915 #endif /* __CXL_H__ */
+1 -1
drivers/cxl/pci.c
··· 1018 if (rc) 1019 return rc; 1020 1021 - rc = devm_cxl_setup_fwctl(cxlmd); 1022 if (rc) 1023 dev_dbg(&pdev->dev, "No CXL FWCTL setup\n"); 1024
··· 1018 if (rc) 1019 return rc; 1020 1021 + rc = devm_cxl_setup_fwctl(&pdev->dev, cxlmd); 1022 if (rc) 1023 dev_dbg(&pdev->dev, "No CXL FWCTL setup\n"); 1024
+1 -1
drivers/cxl/pmem.c
··· 108 return; 109 } 110 111 - if (!cxl_gpf_get_dvsec(cxlds->dev, false)) 112 return; 113 114 if (cxl_get_dirty_count(mds, &count)) {
··· 108 return; 109 } 110 111 + if (!cxl_gpf_get_dvsec(cxlds->dev)) 112 return; 113 114 if (cxl_get_dirty_count(mds, &count)) {
+11 -3
drivers/firmware/stratix10-svc.c
··· 1224 if (!svc->intel_svc_fcs) { 1225 dev_err(dev, "failed to allocate %s device\n", INTEL_FCS); 1226 ret = -ENOMEM; 1227 - goto err_unregister_dev; 1228 } 1229 1230 ret = platform_device_add(svc->intel_svc_fcs); 1231 if (ret) { 1232 platform_device_put(svc->intel_svc_fcs); 1233 - goto err_unregister_dev; 1234 } 1235 1236 dev_set_drvdata(dev, svc); 1237 ··· 1243 1244 return 0; 1245 1246 - err_unregister_dev: 1247 platform_device_unregister(svc->stratix10_svc_rsu); 1248 err_free_kfifo: 1249 kfifo_free(&controller->svc_fifo); ··· 1258 { 1259 struct stratix10_svc *svc = dev_get_drvdata(&pdev->dev); 1260 struct stratix10_svc_controller *ctrl = platform_get_drvdata(pdev); 1261 1262 platform_device_unregister(svc->intel_svc_fcs); 1263 platform_device_unregister(svc->stratix10_svc_rsu);
··· 1224 if (!svc->intel_svc_fcs) { 1225 dev_err(dev, "failed to allocate %s device\n", INTEL_FCS); 1226 ret = -ENOMEM; 1227 + goto err_unregister_rsu_dev; 1228 } 1229 1230 ret = platform_device_add(svc->intel_svc_fcs); 1231 if (ret) { 1232 platform_device_put(svc->intel_svc_fcs); 1233 + goto err_unregister_rsu_dev; 1234 } 1235 + 1236 + ret = of_platform_default_populate(dev_of_node(dev), NULL, dev); 1237 + if (ret) 1238 + goto err_unregister_fcs_dev; 1239 1240 dev_set_drvdata(dev, svc); 1241 ··· 1239 1240 return 0; 1241 1242 + err_unregister_fcs_dev: 1243 + platform_device_unregister(svc->intel_svc_fcs); 1244 + err_unregister_rsu_dev: 1245 platform_device_unregister(svc->stratix10_svc_rsu); 1246 err_free_kfifo: 1247 kfifo_free(&controller->svc_fifo); ··· 1252 { 1253 struct stratix10_svc *svc = dev_get_drvdata(&pdev->dev); 1254 struct stratix10_svc_controller *ctrl = platform_get_drvdata(pdev); 1255 + 1256 + of_platform_depopulate(ctrl->dev); 1257 1258 platform_device_unregister(svc->intel_svc_fcs); 1259 platform_device_unregister(svc->stratix10_svc_rsu);
+45 -7
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
··· 43 #include <linux/dma-fence-array.h> 44 #include <linux/pci-p2pdma.h> 45 46 /** 47 * amdgpu_dma_buf_attach - &dma_buf_ops.attach implementation 48 * ··· 77 static int amdgpu_dma_buf_attach(struct dma_buf *dmabuf, 78 struct dma_buf_attachment *attach) 79 { 80 struct drm_gem_object *obj = dmabuf->priv; 81 struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); 82 struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev); 83 84 - if (pci_p2pdma_distance(adev->pdev, attach->dev, false) < 0) 85 attach->peer2peer = false; 86 87 amdgpu_vm_bo_update_shared(bo); ··· 102 { 103 struct dma_buf *dmabuf = attach->dmabuf; 104 struct amdgpu_bo *bo = gem_to_amdgpu_bo(dmabuf->priv); 105 - u32 domains = bo->preferred_domains; 106 107 dma_resv_assert_held(dmabuf->resv); 108 109 - /* 110 - * Try pinning into VRAM to allow P2P with RDMA NICs without ODP 111 * support if all attachments can do P2P. If any attachment can't do 112 * P2P just pin into GTT instead. 113 */ 114 - list_for_each_entry(attach, &dmabuf->attachments, node) 115 - if (!attach->peer2peer) 116 - domains &= ~AMDGPU_GEM_DOMAIN_VRAM; 117 118 if (domains & AMDGPU_GEM_DOMAIN_VRAM) 119 bo->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED; 120 121 return amdgpu_bo_pin(bo, domains); 122 } ··· 504 { 505 struct drm_gem_object *obj = &bo->tbo.base; 506 struct drm_gem_object *gobj; 507 508 if (obj->import_attach) { 509 struct dma_buf *dma_buf = obj->import_attach->dmabuf;
··· 43 #include <linux/dma-fence-array.h> 44 #include <linux/pci-p2pdma.h> 45 46 + static const struct dma_buf_attach_ops amdgpu_dma_buf_attach_ops; 47 + 48 + /** 49 + * dma_buf_attach_adev - Helper to get adev of an attachment 50 + * 51 + * @attach: attachment 52 + * 53 + * Returns: 54 + * A struct amdgpu_device * if the attaching device is an amdgpu device or 55 + * partition, NULL otherwise. 56 + */ 57 + static struct amdgpu_device *dma_buf_attach_adev(struct dma_buf_attachment *attach) 58 + { 59 + if (attach->importer_ops == &amdgpu_dma_buf_attach_ops) { 60 + struct drm_gem_object *obj = attach->importer_priv; 61 + struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); 62 + 63 + return amdgpu_ttm_adev(bo->tbo.bdev); 64 + } 65 + 66 + return NULL; 67 + } 68 + 69 /** 70 * amdgpu_dma_buf_attach - &dma_buf_ops.attach implementation 71 * ··· 54 static int amdgpu_dma_buf_attach(struct dma_buf *dmabuf, 55 struct dma_buf_attachment *attach) 56 { 57 + struct amdgpu_device *attach_adev = dma_buf_attach_adev(attach); 58 struct drm_gem_object *obj = dmabuf->priv; 59 struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); 60 struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev); 61 62 + if (!amdgpu_dmabuf_is_xgmi_accessible(attach_adev, bo) && 63 + pci_p2pdma_distance(adev->pdev, attach->dev, false) < 0) 64 attach->peer2peer = false; 65 66 amdgpu_vm_bo_update_shared(bo); ··· 77 { 78 struct dma_buf *dmabuf = attach->dmabuf; 79 struct amdgpu_bo *bo = gem_to_amdgpu_bo(dmabuf->priv); 80 + u32 domains = bo->allowed_domains; 81 82 dma_resv_assert_held(dmabuf->resv); 83 84 + /* Try pinning into VRAM to allow P2P with RDMA NICs without ODP 85 * support if all attachments can do P2P. If any attachment can't do 86 * P2P just pin into GTT instead. 87 + * 88 + * To avoid with conflicting pinnings between GPUs and RDMA when move 89 + * notifiers are disabled, only allow pinning in VRAM when move 90 + * notiers are enabled. 91 */ 92 + if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) { 93 + domains &= ~AMDGPU_GEM_DOMAIN_VRAM; 94 + } else { 95 + list_for_each_entry(attach, &dmabuf->attachments, node) 96 + if (!attach->peer2peer) 97 + domains &= ~AMDGPU_GEM_DOMAIN_VRAM; 98 + } 99 100 if (domains & AMDGPU_GEM_DOMAIN_VRAM) 101 bo->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED; 102 + 103 + if (WARN_ON(!domains)) 104 + return -EINVAL; 105 106 return amdgpu_bo_pin(bo, domains); 107 } ··· 469 { 470 struct drm_gem_object *obj = &bo->tbo.base; 471 struct drm_gem_object *gobj; 472 + 473 + if (!adev) 474 + return false; 475 476 if (obj->import_attach) { 477 struct dma_buf *dma_buf = obj->import_attach->dmabuf;
+12 -29
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 1920 switch (amdgpu_ip_version(adev, DCE_HWIP, 0)) { 1921 case IP_VERSION(3, 5, 0): 1922 case IP_VERSION(3, 6, 0): 1923 - /* 1924 - * On DCN35 systems with Z8 enabled, it's possible for IPS2 + Z8 to 1925 - * cause a hard hang. A fix exists for newer PMFW. 1926 - * 1927 - * As a workaround, for non-fixed PMFW, force IPS1+RCG as the deepest 1928 - * IPS state in all cases, except for s0ix and all displays off (DPMS), 1929 - * where IPS2 is allowed. 1930 - * 1931 - * When checking pmfw version, use the major and minor only. 1932 - */ 1933 - if ((adev->pm.fw_version & 0x00FFFF00) < 0x005D6300) 1934 - ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF; 1935 - else if (amdgpu_ip_version(adev, GC_HWIP, 0) > IP_VERSION(11, 5, 0)) 1936 - /* 1937 - * Other ASICs with DCN35 that have residency issues with 1938 - * IPS2 in idle. 1939 - * We want them to use IPS2 only in display off cases. 1940 - */ 1941 - ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF; 1942 - break; 1943 case IP_VERSION(3, 5, 1): 1944 ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF; 1945 break; ··· 3335 for (k = 0; k < dc_state->stream_count; k++) { 3336 bundle->stream_update.stream = dc_state->streams[k]; 3337 3338 - for (m = 0; m < dc_state->stream_status->plane_count; m++) { 3339 bundle->surface_updates[m].surface = 3340 - dc_state->stream_status->plane_states[m]; 3341 bundle->surface_updates[m].surface->force_full_update = 3342 true; 3343 } 3344 3345 update_planes_and_stream_adapter(dm->dc, 3346 UPDATE_TYPE_FULL, 3347 - dc_state->stream_status->plane_count, 3348 dc_state->streams[k], 3349 &bundle->stream_update, 3350 bundle->surface_updates); ··· 6501 const struct drm_display_mode *native_mode, 6502 bool scale_enabled) 6503 { 6504 - if (scale_enabled) { 6505 - copy_crtc_timing_for_drm_display_mode(native_mode, drm_mode); 6506 - } else if (native_mode->clock == drm_mode->clock && 6507 - native_mode->htotal == drm_mode->htotal && 6508 - native_mode->vtotal == drm_mode->vtotal) { 6509 - copy_crtc_timing_for_drm_display_mode(native_mode, drm_mode); 6510 } else { 6511 /* no scaling nor amdgpu inserted, no need to patch */ 6512 } ··· 11021 */ 11022 if (amdgpu_ip_version(adev, DCE_HWIP, 0) < IP_VERSION(3, 2, 0) && 11023 state->allow_modeset) 11024 return true; 11025 11026 /* Exit early if we know that we're adding or removing the plane. */
··· 1920 switch (amdgpu_ip_version(adev, DCE_HWIP, 0)) { 1921 case IP_VERSION(3, 5, 0): 1922 case IP_VERSION(3, 6, 0): 1923 case IP_VERSION(3, 5, 1): 1924 ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF; 1925 break; ··· 3355 for (k = 0; k < dc_state->stream_count; k++) { 3356 bundle->stream_update.stream = dc_state->streams[k]; 3357 3358 + for (m = 0; m < dc_state->stream_status[k].plane_count; m++) { 3359 bundle->surface_updates[m].surface = 3360 + dc_state->stream_status[k].plane_states[m]; 3361 bundle->surface_updates[m].surface->force_full_update = 3362 true; 3363 } 3364 3365 update_planes_and_stream_adapter(dm->dc, 3366 UPDATE_TYPE_FULL, 3367 + dc_state->stream_status[k].plane_count, 3368 dc_state->streams[k], 3369 &bundle->stream_update, 3370 bundle->surface_updates); ··· 6521 const struct drm_display_mode *native_mode, 6522 bool scale_enabled) 6523 { 6524 + if (scale_enabled || ( 6525 + native_mode->clock == drm_mode->clock && 6526 + native_mode->htotal == drm_mode->htotal && 6527 + native_mode->vtotal == drm_mode->vtotal)) { 6528 + if (native_mode->crtc_clock) 6529 + copy_crtc_timing_for_drm_display_mode(native_mode, drm_mode); 6530 } else { 6531 /* no scaling nor amdgpu inserted, no need to patch */ 6532 } ··· 11041 */ 11042 if (amdgpu_ip_version(adev, DCE_HWIP, 0) < IP_VERSION(3, 2, 0) && 11043 state->allow_modeset) 11044 + return true; 11045 + 11046 + if (amdgpu_in_reset(adev) && state->allow_modeset) 11047 return true; 11048 11049 /* Exit early if we know that we're adding or removing the plane. */
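The dc_state->stream_status[k] change above is a pointer-vs-array fix: p->plane_count only ever reads element 0, while p[k].plane_count indexes the entry that belongs to stream k. A tiny standalone illustration of the difference, with the struct and values invented for the example:

#include <stdio.h>

struct stream_status { int plane_count; };

int main(void)
{
	struct stream_status status[3] = { { 1 }, { 2 }, { 4 } };
	struct stream_status *p = status;   /* like dc_state->stream_status */
	int k = 2;

	printf("p->plane_count   = %d\n", p->plane_count);   /* always element 0 -> 1 */
	printf("p[k].plane_count = %d\n", p[k].plane_count); /* entry for stream k -> 4 */
	return 0;
}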
+1 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
··· 918 { 919 struct drm_connector *connector = data; 920 struct acpi_device *acpidev = ACPI_COMPANION(connector->dev->dev); 921 - unsigned char start = block * EDID_LENGTH; 922 struct edid *edid; 923 int r; 924
··· 918 { 919 struct drm_connector *connector = data; 920 struct acpi_device *acpidev = ACPI_COMPANION(connector->dev->dev); 921 + unsigned short start = block * EDID_LENGTH; 922 struct edid *edid; 923 int r; 924
+2 -2
drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c
··· 195 .dcn_downspread_percent = 0.5, 196 .gpuvm_min_page_size_bytes = 4096, 197 .hostvm_min_page_size_bytes = 4096, 198 - .do_urgent_latency_adjustment = 0, 199 .urgent_latency_adjustment_fabric_clock_component_us = 0, 200 - .urgent_latency_adjustment_fabric_clock_reference_mhz = 0, 201 }; 202 203 void dcn35_build_wm_range_table_fpu(struct clk_mgr *clk_mgr)
··· 195 .dcn_downspread_percent = 0.5, 196 .gpuvm_min_page_size_bytes = 4096, 197 .hostvm_min_page_size_bytes = 4096, 198 + .do_urgent_latency_adjustment = 1, 199 .urgent_latency_adjustment_fabric_clock_component_us = 0, 200 + .urgent_latency_adjustment_fabric_clock_reference_mhz = 3000, 201 }; 202 203 void dcn35_build_wm_range_table_fpu(struct clk_mgr *clk_mgr)
+2 -2
drivers/gpu/drm/exynos/exynos7_drm_decon.c
··· 43 unsigned int wincon_burstlen_shift; 44 }; 45 46 - static struct decon_data exynos7_decon_data = { 47 .vidw_buf_start_base = 0x80, 48 .shadowcon_win_protect_shift = 10, 49 .wincon_burstlen_shift = 11, 50 }; 51 52 - static struct decon_data exynos7870_decon_data = { 53 .vidw_buf_start_base = 0x880, 54 .shadowcon_win_protect_shift = 8, 55 .wincon_burstlen_shift = 10,
··· 43 unsigned int wincon_burstlen_shift; 44 }; 45 46 + static const struct decon_data exynos7_decon_data = { 47 .vidw_buf_start_base = 0x80, 48 .shadowcon_win_protect_shift = 10, 49 .wincon_burstlen_shift = 11, 50 }; 51 52 + static const struct decon_data exynos7870_decon_data = { 53 .vidw_buf_start_base = 0x880, 54 .shadowcon_win_protect_shift = 8, 55 .wincon_burstlen_shift = 10,
+1 -2
drivers/gpu/drm/exynos/exynos_drm_drv.c
··· 355 { 356 struct drm_device *drm = platform_get_drvdata(pdev); 357 358 - if (drm) 359 - drm_atomic_helper_shutdown(drm); 360 } 361 362 static struct platform_driver exynos_drm_platform_driver = {
··· 355 { 356 struct drm_device *drm = platform_get_drvdata(pdev); 357 358 + drm_atomic_helper_shutdown(drm); 359 } 360 361 static struct platform_driver exynos_drm_platform_driver = {
+1 -1
drivers/gpu/drm/exynos/exynos_drm_fimc.c
··· 908 u32 buf_num; 909 u32 cfg; 910 911 - DRM_DEV_DEBUG_KMS(ctx->dev, "buf_id[%d]enqueu[%d]\n", buf_id, enqueue); 912 913 spin_lock_irqsave(&ctx->lock, flags); 914
··· 908 u32 buf_num; 909 u32 cfg; 910 911 + DRM_DEV_DEBUG_KMS(ctx->dev, "buf_id[%d]enqueue[%d]\n", buf_id, enqueue); 912 913 spin_lock_irqsave(&ctx->lock, flags); 914
+1 -1
drivers/gpu/drm/exynos/exynos_drm_fimd.c
··· 731 /* 732 * Setting dma-burst to 16Word causes permanent tearing for very small 733 * buffers, e.g. cursor buffer. Burst Mode switching which based on 734 - * plane size is not recommended as plane size varies alot towards the 735 * end of the screen and rapid movement causes unstable DMA, but it is 736 * still better to change dma-burst than displaying garbage. 737 */
··· 731 /* 732 * Setting dma-burst to 16Word causes permanent tearing for very small 733 * buffers, e.g. cursor buffer. Burst Mode switching which based on 734 + * plane size is not recommended as plane size varies a lot towards the 735 * end of the screen and rapid movement causes unstable DMA, but it is 736 * still better to change dma-burst than displaying garbage. 737 */
-3
drivers/gpu/drm/exynos/exynos_drm_vidi.c
··· 312 else 313 drm_edid = drm_edid_alloc(fake_edid_info, sizeof(fake_edid_info)); 314 315 - if (!drm_edid) 316 - return 0; 317 - 318 drm_edid_connector_update(connector, drm_edid); 319 320 count = drm_edid_connector_add_modes(connector);
··· 312 else 313 drm_edid = drm_edid_alloc(fake_edid_info, sizeof(fake_edid_info)); 314 315 drm_edid_connector_update(connector, drm_edid); 316 317 count = drm_edid_connector_add_modes(connector);
+1 -1
drivers/gpu/drm/meson/meson_drv.c
··· 169 /* S805X/S805Y HDMI PLL won't lock for HDMI PHY freq > 1,65GHz */ 170 { 171 .limits = { 172 - .max_hdmi_phy_freq = 1650000, 173 }, 174 .attrs = (const struct soc_device_attribute []) { 175 { .soc_id = "GXL (S805*)", },
··· 169 /* S805X/S805Y HDMI PLL won't lock for HDMI PHY freq > 1,65GHz */ 170 { 171 .limits = { 172 + .max_hdmi_phy_freq = 1650000000, 173 }, 174 .attrs = (const struct soc_device_attribute []) { 175 { .soc_id = "GXL (S805*)", },
+1 -1
drivers/gpu/drm/meson/meson_drv.h
··· 37 }; 38 39 struct meson_drm_soc_limits { 40 - unsigned int max_hdmi_phy_freq; 41 }; 42 43 struct meson_drm {
··· 37 }; 38 39 struct meson_drm_soc_limits { 40 + unsigned long long max_hdmi_phy_freq; 41 }; 42 43 struct meson_drm {
+16 -13
drivers/gpu/drm/meson/meson_encoder_hdmi.c
··· 70 { 71 struct meson_drm *priv = encoder_hdmi->priv; 72 int vic = drm_match_cea_mode(mode); 73 - unsigned int phy_freq; 74 - unsigned int vclk_freq; 75 - unsigned int venc_freq; 76 - unsigned int hdmi_freq; 77 78 - vclk_freq = mode->clock; 79 80 /* For 420, pixel clock is half unlike venc clock */ 81 if (encoder_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24) ··· 107 if (mode->flags & DRM_MODE_FLAG_DBLCLK) 108 venc_freq /= 2; 109 110 - dev_dbg(priv->dev, "vclk:%d phy=%d venc=%d hdmi=%d enci=%d\n", 111 phy_freq, vclk_freq, venc_freq, hdmi_freq, 112 priv->venc.hdmi_use_enci); 113 ··· 123 struct meson_encoder_hdmi *encoder_hdmi = bridge_to_meson_encoder_hdmi(bridge); 124 struct meson_drm *priv = encoder_hdmi->priv; 125 bool is_hdmi2_sink = display_info->hdmi.scdc.supported; 126 - unsigned int phy_freq; 127 - unsigned int vclk_freq; 128 - unsigned int venc_freq; 129 - unsigned int hdmi_freq; 130 int vic = drm_match_cea_mode(mode); 131 enum drm_mode_status status; 132 ··· 146 if (status != MODE_OK) 147 return status; 148 149 - return meson_vclk_dmt_supported_freq(priv, mode->clock); 150 /* Check against supported VIC modes */ 151 } else if (!meson_venc_hdmi_supported_vic(vic)) 152 return MODE_BAD; 153 154 - vclk_freq = mode->clock; 155 156 /* For 420, pixel clock is half unlike venc clock */ 157 if (drm_mode_is_420_only(display_info, mode) || ··· 181 if (mode->flags & DRM_MODE_FLAG_DBLCLK) 182 venc_freq /= 2; 183 184 - dev_dbg(priv->dev, "%s: vclk:%d phy=%d venc=%d hdmi=%d\n", 185 __func__, phy_freq, vclk_freq, venc_freq, hdmi_freq); 186 187 return meson_vclk_vic_supported_freq(priv, phy_freq, vclk_freq);
··· 70 { 71 struct meson_drm *priv = encoder_hdmi->priv; 72 int vic = drm_match_cea_mode(mode); 73 + unsigned long long phy_freq; 74 + unsigned long long vclk_freq; 75 + unsigned long long venc_freq; 76 + unsigned long long hdmi_freq; 77 78 + vclk_freq = mode->clock * 1000; 79 80 /* For 420, pixel clock is half unlike venc clock */ 81 if (encoder_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24) ··· 107 if (mode->flags & DRM_MODE_FLAG_DBLCLK) 108 venc_freq /= 2; 109 110 + dev_dbg(priv->dev, 111 + "vclk:%lluHz phy=%lluHz venc=%lluHz hdmi=%lluHz enci=%d\n", 112 phy_freq, vclk_freq, venc_freq, hdmi_freq, 113 priv->venc.hdmi_use_enci); 114 ··· 122 struct meson_encoder_hdmi *encoder_hdmi = bridge_to_meson_encoder_hdmi(bridge); 123 struct meson_drm *priv = encoder_hdmi->priv; 124 bool is_hdmi2_sink = display_info->hdmi.scdc.supported; 125 + unsigned long long clock = mode->clock * 1000; 126 + unsigned long long phy_freq; 127 + unsigned long long vclk_freq; 128 + unsigned long long venc_freq; 129 + unsigned long long hdmi_freq; 130 int vic = drm_match_cea_mode(mode); 131 enum drm_mode_status status; 132 ··· 144 if (status != MODE_OK) 145 return status; 146 147 + return meson_vclk_dmt_supported_freq(priv, clock); 148 /* Check against supported VIC modes */ 149 } else if (!meson_venc_hdmi_supported_vic(vic)) 150 return MODE_BAD; 151 152 + vclk_freq = clock; 153 154 /* For 420, pixel clock is half unlike venc clock */ 155 if (drm_mode_is_420_only(display_info, mode) || ··· 179 if (mode->flags & DRM_MODE_FLAG_DBLCLK) 180 venc_freq /= 2; 181 182 + dev_dbg(priv->dev, 183 + "%s: vclk:%lluHz phy=%lluHz venc=%lluHz hdmi=%lluHz\n", 184 __func__, phy_freq, vclk_freq, venc_freq, hdmi_freq); 185 186 return meson_vclk_vic_supported_freq(priv, phy_freq, vclk_freq);
+101 -94
drivers/gpu/drm/meson/meson_vclk.c
··· 110 #define HDMI_PLL_LOCK BIT(31) 111 #define HDMI_PLL_LOCK_G12A (3 << 30) 112 113 - #define FREQ_1000_1001(_freq) DIV_ROUND_CLOSEST(_freq * 1000, 1001) 114 115 /* VID PLL Dividers */ 116 enum { ··· 363 }; 364 365 struct meson_vclk_params { 366 - unsigned int pll_freq; 367 - unsigned int phy_freq; 368 - unsigned int vclk_freq; 369 - unsigned int venc_freq; 370 - unsigned int pixel_freq; 371 unsigned int pll_od1; 372 unsigned int pll_od2; 373 unsigned int pll_od3; ··· 375 unsigned int vclk_div; 376 } params[] = { 377 [MESON_VCLK_HDMI_ENCI_54000] = { 378 - .pll_freq = 4320000, 379 - .phy_freq = 270000, 380 - .vclk_freq = 54000, 381 - .venc_freq = 54000, 382 - .pixel_freq = 54000, 383 .pll_od1 = 4, 384 .pll_od2 = 4, 385 .pll_od3 = 1, ··· 387 .vclk_div = 1, 388 }, 389 [MESON_VCLK_HDMI_DDR_54000] = { 390 - .pll_freq = 4320000, 391 - .phy_freq = 270000, 392 - .vclk_freq = 54000, 393 - .venc_freq = 54000, 394 - .pixel_freq = 27000, 395 .pll_od1 = 4, 396 .pll_od2 = 4, 397 .pll_od3 = 1, ··· 399 .vclk_div = 1, 400 }, 401 [MESON_VCLK_HDMI_DDR_148500] = { 402 - .pll_freq = 2970000, 403 - .phy_freq = 742500, 404 - .vclk_freq = 148500, 405 - .venc_freq = 148500, 406 - .pixel_freq = 74250, 407 .pll_od1 = 4, 408 .pll_od2 = 1, 409 .pll_od3 = 1, ··· 411 .vclk_div = 1, 412 }, 413 [MESON_VCLK_HDMI_74250] = { 414 - .pll_freq = 2970000, 415 - .phy_freq = 742500, 416 - .vclk_freq = 74250, 417 - .venc_freq = 74250, 418 - .pixel_freq = 74250, 419 .pll_od1 = 2, 420 .pll_od2 = 2, 421 .pll_od3 = 2, ··· 423 .vclk_div = 1, 424 }, 425 [MESON_VCLK_HDMI_148500] = { 426 - .pll_freq = 2970000, 427 - .phy_freq = 1485000, 428 - .vclk_freq = 148500, 429 - .venc_freq = 148500, 430 - .pixel_freq = 148500, 431 .pll_od1 = 1, 432 .pll_od2 = 2, 433 .pll_od3 = 2, ··· 435 .vclk_div = 1, 436 }, 437 [MESON_VCLK_HDMI_297000] = { 438 - .pll_freq = 5940000, 439 - .phy_freq = 2970000, 440 - .venc_freq = 297000, 441 - .vclk_freq = 297000, 442 - .pixel_freq = 297000, 443 .pll_od1 = 2, 444 .pll_od2 = 1, 445 .pll_od3 = 1, ··· 447 .vclk_div = 2, 448 }, 449 [MESON_VCLK_HDMI_594000] = { 450 - .pll_freq = 5940000, 451 - .phy_freq = 5940000, 452 - .venc_freq = 594000, 453 - .vclk_freq = 594000, 454 - .pixel_freq = 594000, 455 .pll_od1 = 1, 456 .pll_od2 = 1, 457 .pll_od3 = 2, ··· 459 .vclk_div = 1, 460 }, 461 [MESON_VCLK_HDMI_594000_YUV420] = { 462 - .pll_freq = 5940000, 463 - .phy_freq = 2970000, 464 - .venc_freq = 594000, 465 - .vclk_freq = 594000, 466 - .pixel_freq = 297000, 467 .pll_od1 = 2, 468 .pll_od2 = 1, 469 .pll_od3 = 1, ··· 620 3 << 20, pll_od_to_reg(od3) << 20); 621 } 622 623 - #define XTAL_FREQ 24000 624 625 static unsigned int meson_hdmi_pll_get_m(struct meson_drm *priv, 626 - unsigned int pll_freq) 627 { 628 /* The GXBB PLL has a /2 pre-multiplier */ 629 if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) 630 - pll_freq /= 2; 631 632 - return pll_freq / XTAL_FREQ; 633 } 634 635 #define HDMI_FRAC_MAX_GXBB 4096 ··· 638 639 static unsigned int meson_hdmi_pll_get_frac(struct meson_drm *priv, 640 unsigned int m, 641 - unsigned int pll_freq) 642 { 643 - unsigned int parent_freq = XTAL_FREQ; 644 unsigned int frac_max = HDMI_FRAC_MAX_GXL; 645 unsigned int frac_m; 646 unsigned int frac; 647 648 /* The GXBB PLL has a /2 pre-multiplier and a larger FRAC width */ 649 if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) { ··· 656 frac_max = HDMI_FRAC_MAX_G12A; 657 658 /* We can have a perfect match !*/ 659 - if (pll_freq / m == parent_freq && 660 - pll_freq % m == 0) 661 return 0; 662 663 - frac = div_u64((u64)pll_freq * (u64)frac_max, 
parent_freq); 664 frac_m = m * frac_max; 665 if (frac_m > frac) 666 return frac_max; ··· 670 } 671 672 static bool meson_hdmi_pll_validate_params(struct meson_drm *priv, 673 - unsigned int m, 674 unsigned int frac) 675 { 676 if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) { ··· 698 } 699 700 static bool meson_hdmi_pll_find_params(struct meson_drm *priv, 701 - unsigned int freq, 702 unsigned int *m, 703 unsigned int *frac, 704 unsigned int *od) ··· 710 continue; 711 *frac = meson_hdmi_pll_get_frac(priv, *m, freq * *od); 712 713 - DRM_DEBUG_DRIVER("PLL params for %dkHz: m=%x frac=%x od=%d\n", 714 freq, *m, *frac, *od); 715 716 if (meson_hdmi_pll_validate_params(priv, *m, *frac)) ··· 722 723 /* pll_freq is the frequency after the OD dividers */ 724 enum drm_mode_status 725 - meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned int freq) 726 { 727 unsigned int od, m, frac; 728 ··· 745 746 /* pll_freq is the frequency after the OD dividers */ 747 static void meson_hdmi_pll_generic_set(struct meson_drm *priv, 748 - unsigned int pll_freq) 749 { 750 unsigned int od, m, frac, od1, od2, od3; 751 ··· 760 od1 = od / od2; 761 } 762 763 - DRM_DEBUG_DRIVER("PLL params for %dkHz: m=%x frac=%x od=%d/%d/%d\n", 764 pll_freq, m, frac, od1, od2, od3); 765 766 meson_hdmi_pll_set_params(priv, m, frac, od1, od2, od3); ··· 768 return; 769 } 770 771 - DRM_ERROR("Fatal, unable to find parameters for PLL freq %d\n", 772 pll_freq); 773 } 774 775 enum drm_mode_status 776 - meson_vclk_vic_supported_freq(struct meson_drm *priv, unsigned int phy_freq, 777 - unsigned int vclk_freq) 778 { 779 int i; 780 781 - DRM_DEBUG_DRIVER("phy_freq = %d vclk_freq = %d\n", 782 phy_freq, vclk_freq); 783 784 /* Check against soc revision/package limits */ ··· 790 } 791 792 for (i = 0 ; params[i].pixel_freq ; ++i) { 793 - DRM_DEBUG_DRIVER("i = %d pixel_freq = %d alt = %d\n", 794 i, params[i].pixel_freq, 795 - FREQ_1000_1001(params[i].pixel_freq)); 796 - DRM_DEBUG_DRIVER("i = %d phy_freq = %d alt = %d\n", 797 i, params[i].phy_freq, 798 - FREQ_1000_1001(params[i].phy_freq/1000)*1000); 799 /* Match strict frequency */ 800 if (phy_freq == params[i].phy_freq && 801 vclk_freq == params[i].vclk_freq) 802 return MODE_OK; 803 /* Match 1000/1001 variant */ 804 - if (phy_freq == (FREQ_1000_1001(params[i].phy_freq/1000)*1000) && 805 - vclk_freq == FREQ_1000_1001(params[i].vclk_freq)) 806 return MODE_OK; 807 } 808 ··· 810 } 811 EXPORT_SYMBOL_GPL(meson_vclk_vic_supported_freq); 812 813 - static void meson_vclk_set(struct meson_drm *priv, unsigned int pll_base_freq, 814 - unsigned int od1, unsigned int od2, unsigned int od3, 815 unsigned int vid_pll_div, unsigned int vclk_div, 816 unsigned int hdmi_tx_div, unsigned int venc_div, 817 bool hdmi_use_enci, bool vic_alternate_clock) ··· 832 meson_hdmi_pll_generic_set(priv, pll_base_freq); 833 } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) { 834 switch (pll_base_freq) { 835 - case 2970000: 836 m = 0x3d; 837 frac = vic_alternate_clock ? 0xd02 : 0xe00; 838 break; 839 - case 4320000: 840 m = vic_alternate_clock ? 0x59 : 0x5a; 841 frac = vic_alternate_clock ? 0xe8f : 0; 842 break; 843 - case 5940000: 844 m = 0x7b; 845 frac = vic_alternate_clock ? 0xa05 : 0xc00; 846 break; ··· 850 } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXM) || 851 meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXL)) { 852 switch (pll_base_freq) { 853 - case 2970000: 854 m = 0x7b; 855 frac = vic_alternate_clock ? 0x281 : 0x300; 856 break; 857 - case 4320000: 858 m = vic_alternate_clock ? 
0xb3 : 0xb4; 859 frac = vic_alternate_clock ? 0x347 : 0; 860 break; 861 - case 5940000: 862 m = 0xf7; 863 frac = vic_alternate_clock ? 0x102 : 0x200; 864 break; ··· 867 meson_hdmi_pll_set_params(priv, m, frac, od1, od2, od3); 868 } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) { 869 switch (pll_base_freq) { 870 - case 2970000: 871 m = 0x7b; 872 frac = vic_alternate_clock ? 0x140b4 : 0x18000; 873 break; 874 - case 4320000: 875 m = vic_alternate_clock ? 0xb3 : 0xb4; 876 frac = vic_alternate_clock ? 0x1a3ee : 0; 877 break; 878 - case 5940000: 879 m = 0xf7; 880 frac = vic_alternate_clock ? 0x8148 : 0x10000; 881 break; ··· 1031 } 1032 1033 void meson_vclk_setup(struct meson_drm *priv, unsigned int target, 1034 - unsigned int phy_freq, unsigned int vclk_freq, 1035 - unsigned int venc_freq, unsigned int dac_freq, 1036 bool hdmi_use_enci) 1037 { 1038 bool vic_alternate_clock = false; 1039 - unsigned int freq; 1040 - unsigned int hdmi_tx_div; 1041 - unsigned int venc_div; 1042 1043 if (target == MESON_VCLK_TARGET_CVBS) { 1044 meson_venci_cvbs_clock_config(priv); ··· 1058 return; 1059 } 1060 1061 - hdmi_tx_div = vclk_freq / dac_freq; 1062 1063 if (hdmi_tx_div == 0) { 1064 - pr_err("Fatal Error, invalid HDMI-TX freq %d\n", 1065 dac_freq); 1066 return; 1067 } 1068 1069 - venc_div = vclk_freq / venc_freq; 1070 1071 if (venc_div == 0) { 1072 - pr_err("Fatal Error, invalid HDMI venc freq %d\n", 1073 venc_freq); 1074 return; 1075 } 1076 1077 for (freq = 0 ; params[freq].pixel_freq ; ++freq) { 1078 if ((phy_freq == params[freq].phy_freq || 1079 - phy_freq == FREQ_1000_1001(params[freq].phy_freq/1000)*1000) && 1080 (vclk_freq == params[freq].vclk_freq || 1081 - vclk_freq == FREQ_1000_1001(params[freq].vclk_freq))) { 1082 if (vclk_freq != params[freq].vclk_freq) 1083 vic_alternate_clock = true; 1084 else ··· 1104 } 1105 1106 if (!params[freq].pixel_freq) { 1107 - pr_err("Fatal Error, invalid HDMI vclk freq %d\n", vclk_freq); 1108 return; 1109 } 1110
··· 110 #define HDMI_PLL_LOCK BIT(31) 111 #define HDMI_PLL_LOCK_G12A (3 << 30) 112 113 + #define PIXEL_FREQ_1000_1001(_freq) \ 114 + DIV_ROUND_CLOSEST_ULL((_freq) * 1000ULL, 1001ULL) 115 + #define PHY_FREQ_1000_1001(_freq) \ 116 + (PIXEL_FREQ_1000_1001(DIV_ROUND_DOWN_ULL(_freq, 10ULL)) * 10) 117 118 /* VID PLL Dividers */ 119 enum { ··· 360 }; 361 362 struct meson_vclk_params { 363 + unsigned long long pll_freq; 364 + unsigned long long phy_freq; 365 + unsigned long long vclk_freq; 366 + unsigned long long venc_freq; 367 + unsigned long long pixel_freq; 368 unsigned int pll_od1; 369 unsigned int pll_od2; 370 unsigned int pll_od3; ··· 372 unsigned int vclk_div; 373 } params[] = { 374 [MESON_VCLK_HDMI_ENCI_54000] = { 375 + .pll_freq = 4320000000, 376 + .phy_freq = 270000000, 377 + .vclk_freq = 54000000, 378 + .venc_freq = 54000000, 379 + .pixel_freq = 54000000, 380 .pll_od1 = 4, 381 .pll_od2 = 4, 382 .pll_od3 = 1, ··· 384 .vclk_div = 1, 385 }, 386 [MESON_VCLK_HDMI_DDR_54000] = { 387 + .pll_freq = 4320000000, 388 + .phy_freq = 270000000, 389 + .vclk_freq = 54000000, 390 + .venc_freq = 54000000, 391 + .pixel_freq = 27000000, 392 .pll_od1 = 4, 393 .pll_od2 = 4, 394 .pll_od3 = 1, ··· 396 .vclk_div = 1, 397 }, 398 [MESON_VCLK_HDMI_DDR_148500] = { 399 + .pll_freq = 2970000000, 400 + .phy_freq = 742500000, 401 + .vclk_freq = 148500000, 402 + .venc_freq = 148500000, 403 + .pixel_freq = 74250000, 404 .pll_od1 = 4, 405 .pll_od2 = 1, 406 .pll_od3 = 1, ··· 408 .vclk_div = 1, 409 }, 410 [MESON_VCLK_HDMI_74250] = { 411 + .pll_freq = 2970000000, 412 + .phy_freq = 742500000, 413 + .vclk_freq = 74250000, 414 + .venc_freq = 74250000, 415 + .pixel_freq = 74250000, 416 .pll_od1 = 2, 417 .pll_od2 = 2, 418 .pll_od3 = 2, ··· 420 .vclk_div = 1, 421 }, 422 [MESON_VCLK_HDMI_148500] = { 423 + .pll_freq = 2970000000, 424 + .phy_freq = 1485000000, 425 + .vclk_freq = 148500000, 426 + .venc_freq = 148500000, 427 + .pixel_freq = 148500000, 428 .pll_od1 = 1, 429 .pll_od2 = 2, 430 .pll_od3 = 2, ··· 432 .vclk_div = 1, 433 }, 434 [MESON_VCLK_HDMI_297000] = { 435 + .pll_freq = 5940000000, 436 + .phy_freq = 2970000000, 437 + .venc_freq = 297000000, 438 + .vclk_freq = 297000000, 439 + .pixel_freq = 297000000, 440 .pll_od1 = 2, 441 .pll_od2 = 1, 442 .pll_od3 = 1, ··· 444 .vclk_div = 2, 445 }, 446 [MESON_VCLK_HDMI_594000] = { 447 + .pll_freq = 5940000000, 448 + .phy_freq = 5940000000, 449 + .venc_freq = 594000000, 450 + .vclk_freq = 594000000, 451 + .pixel_freq = 594000000, 452 .pll_od1 = 1, 453 .pll_od2 = 1, 454 .pll_od3 = 2, ··· 456 .vclk_div = 1, 457 }, 458 [MESON_VCLK_HDMI_594000_YUV420] = { 459 + .pll_freq = 5940000000, 460 + .phy_freq = 2970000000, 461 + .venc_freq = 594000000, 462 + .vclk_freq = 594000000, 463 + .pixel_freq = 297000000, 464 .pll_od1 = 2, 465 .pll_od2 = 1, 466 .pll_od3 = 1, ··· 617 3 << 20, pll_od_to_reg(od3) << 20); 618 } 619 620 + #define XTAL_FREQ (24 * 1000 * 1000) 621 622 static unsigned int meson_hdmi_pll_get_m(struct meson_drm *priv, 623 + unsigned long long pll_freq) 624 { 625 /* The GXBB PLL has a /2 pre-multiplier */ 626 if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) 627 + pll_freq = DIV_ROUND_DOWN_ULL(pll_freq, 2); 628 629 + return DIV_ROUND_DOWN_ULL(pll_freq, XTAL_FREQ); 630 } 631 632 #define HDMI_FRAC_MAX_GXBB 4096 ··· 635 636 static unsigned int meson_hdmi_pll_get_frac(struct meson_drm *priv, 637 unsigned int m, 638 + unsigned long long pll_freq) 639 { 640 + unsigned long long parent_freq = XTAL_FREQ; 641 unsigned int frac_max = HDMI_FRAC_MAX_GXL; 642 unsigned int frac_m; 643 unsigned 
int frac; 644 + u32 remainder; 645 646 /* The GXBB PLL has a /2 pre-multiplier and a larger FRAC width */ 647 if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) { ··· 652 frac_max = HDMI_FRAC_MAX_G12A; 653 654 /* We can have a perfect match !*/ 655 + if (div_u64_rem(pll_freq, m, &remainder) == parent_freq && 656 + remainder == 0) 657 return 0; 658 659 + frac = mul_u64_u64_div_u64(pll_freq, frac_max, parent_freq); 660 frac_m = m * frac_max; 661 if (frac_m > frac) 662 return frac_max; ··· 666 } 667 668 static bool meson_hdmi_pll_validate_params(struct meson_drm *priv, 669 + unsigned long long m, 670 unsigned int frac) 671 { 672 if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) { ··· 694 } 695 696 static bool meson_hdmi_pll_find_params(struct meson_drm *priv, 697 + unsigned long long freq, 698 unsigned int *m, 699 unsigned int *frac, 700 unsigned int *od) ··· 706 continue; 707 *frac = meson_hdmi_pll_get_frac(priv, *m, freq * *od); 708 709 + DRM_DEBUG_DRIVER("PLL params for %lluHz: m=%x frac=%x od=%d\n", 710 freq, *m, *frac, *od); 711 712 if (meson_hdmi_pll_validate_params(priv, *m, *frac)) ··· 718 719 /* pll_freq is the frequency after the OD dividers */ 720 enum drm_mode_status 721 + meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned long long freq) 722 { 723 unsigned int od, m, frac; 724 ··· 741 742 /* pll_freq is the frequency after the OD dividers */ 743 static void meson_hdmi_pll_generic_set(struct meson_drm *priv, 744 + unsigned long long pll_freq) 745 { 746 unsigned int od, m, frac, od1, od2, od3; 747 ··· 756 od1 = od / od2; 757 } 758 759 + DRM_DEBUG_DRIVER("PLL params for %lluHz: m=%x frac=%x od=%d/%d/%d\n", 760 pll_freq, m, frac, od1, od2, od3); 761 762 meson_hdmi_pll_set_params(priv, m, frac, od1, od2, od3); ··· 764 return; 765 } 766 767 + DRM_ERROR("Fatal, unable to find parameters for PLL freq %lluHz\n", 768 pll_freq); 769 } 770 771 enum drm_mode_status 772 + meson_vclk_vic_supported_freq(struct meson_drm *priv, 773 + unsigned long long phy_freq, 774 + unsigned long long vclk_freq) 775 { 776 int i; 777 778 + DRM_DEBUG_DRIVER("phy_freq = %lluHz vclk_freq = %lluHz\n", 779 phy_freq, vclk_freq); 780 781 /* Check against soc revision/package limits */ ··· 785 } 786 787 for (i = 0 ; params[i].pixel_freq ; ++i) { 788 + DRM_DEBUG_DRIVER("i = %d pixel_freq = %lluHz alt = %lluHz\n", 789 i, params[i].pixel_freq, 790 + PIXEL_FREQ_1000_1001(params[i].pixel_freq)); 791 + DRM_DEBUG_DRIVER("i = %d phy_freq = %lluHz alt = %lluHz\n", 792 i, params[i].phy_freq, 793 + PHY_FREQ_1000_1001(params[i].phy_freq)); 794 /* Match strict frequency */ 795 if (phy_freq == params[i].phy_freq && 796 vclk_freq == params[i].vclk_freq) 797 return MODE_OK; 798 /* Match 1000/1001 variant */ 799 + if (phy_freq == PHY_FREQ_1000_1001(params[i].phy_freq) && 800 + vclk_freq == PIXEL_FREQ_1000_1001(params[i].vclk_freq)) 801 return MODE_OK; 802 } 803 ··· 805 } 806 EXPORT_SYMBOL_GPL(meson_vclk_vic_supported_freq); 807 808 + static void meson_vclk_set(struct meson_drm *priv, 809 + unsigned long long pll_base_freq, unsigned int od1, 810 + unsigned int od2, unsigned int od3, 811 unsigned int vid_pll_div, unsigned int vclk_div, 812 unsigned int hdmi_tx_div, unsigned int venc_div, 813 bool hdmi_use_enci, bool vic_alternate_clock) ··· 826 meson_hdmi_pll_generic_set(priv, pll_base_freq); 827 } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) { 828 switch (pll_base_freq) { 829 + case 2970000000: 830 m = 0x3d; 831 frac = vic_alternate_clock ? 
0xd02 : 0xe00; 832 break; 833 + case 4320000000: 834 m = vic_alternate_clock ? 0x59 : 0x5a; 835 frac = vic_alternate_clock ? 0xe8f : 0; 836 break; 837 + case 5940000000: 838 m = 0x7b; 839 frac = vic_alternate_clock ? 0xa05 : 0xc00; 840 break; ··· 844 } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXM) || 845 meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXL)) { 846 switch (pll_base_freq) { 847 + case 2970000000: 848 m = 0x7b; 849 frac = vic_alternate_clock ? 0x281 : 0x300; 850 break; 851 + case 4320000000: 852 m = vic_alternate_clock ? 0xb3 : 0xb4; 853 frac = vic_alternate_clock ? 0x347 : 0; 854 break; 855 + case 5940000000: 856 m = 0xf7; 857 frac = vic_alternate_clock ? 0x102 : 0x200; 858 break; ··· 861 meson_hdmi_pll_set_params(priv, m, frac, od1, od2, od3); 862 } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) { 863 switch (pll_base_freq) { 864 + case 2970000000: 865 m = 0x7b; 866 frac = vic_alternate_clock ? 0x140b4 : 0x18000; 867 break; 868 + case 4320000000: 869 m = vic_alternate_clock ? 0xb3 : 0xb4; 870 frac = vic_alternate_clock ? 0x1a3ee : 0; 871 break; 872 + case 5940000000: 873 m = 0xf7; 874 frac = vic_alternate_clock ? 0x8148 : 0x10000; 875 break; ··· 1025 } 1026 1027 void meson_vclk_setup(struct meson_drm *priv, unsigned int target, 1028 + unsigned long long phy_freq, unsigned long long vclk_freq, 1029 + unsigned long long venc_freq, unsigned long long dac_freq, 1030 bool hdmi_use_enci) 1031 { 1032 bool vic_alternate_clock = false; 1033 + unsigned long long freq; 1034 + unsigned long long hdmi_tx_div; 1035 + unsigned long long venc_div; 1036 1037 if (target == MESON_VCLK_TARGET_CVBS) { 1038 meson_venci_cvbs_clock_config(priv); ··· 1052 return; 1053 } 1054 1055 + hdmi_tx_div = DIV_ROUND_DOWN_ULL(vclk_freq, dac_freq); 1056 1057 if (hdmi_tx_div == 0) { 1058 + pr_err("Fatal Error, invalid HDMI-TX freq %lluHz\n", 1059 dac_freq); 1060 return; 1061 } 1062 1063 + venc_div = DIV_ROUND_DOWN_ULL(vclk_freq, venc_freq); 1064 1065 if (venc_div == 0) { 1066 + pr_err("Fatal Error, invalid HDMI venc freq %lluHz\n", 1067 venc_freq); 1068 return; 1069 } 1070 1071 for (freq = 0 ; params[freq].pixel_freq ; ++freq) { 1072 if ((phy_freq == params[freq].phy_freq || 1073 + phy_freq == PHY_FREQ_1000_1001(params[freq].phy_freq)) && 1074 (vclk_freq == params[freq].vclk_freq || 1075 + vclk_freq == PIXEL_FREQ_1000_1001(params[freq].vclk_freq))) { 1076 if (vclk_freq != params[freq].vclk_freq) 1077 vic_alternate_clock = true; 1078 else ··· 1098 } 1099 1100 if (!params[freq].pixel_freq) { 1101 + pr_err("Fatal Error, invalid HDMI vclk freq %lluHz\n", 1102 + vclk_freq); 1103 return; 1104 } 1105
+7 -6
drivers/gpu/drm/meson/meson_vclk.h
··· 20 }; 21 22 /* 27MHz is the CVBS Pixel Clock */ 23 - #define MESON_VCLK_CVBS 27000 24 25 enum drm_mode_status 26 - meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned int freq); 27 enum drm_mode_status 28 - meson_vclk_vic_supported_freq(struct meson_drm *priv, unsigned int phy_freq, 29 - unsigned int vclk_freq); 30 31 void meson_vclk_setup(struct meson_drm *priv, unsigned int target, 32 - unsigned int phy_freq, unsigned int vclk_freq, 33 - unsigned int venc_freq, unsigned int dac_freq, 34 bool hdmi_use_enci); 35 36 #endif /* __MESON_VCLK_H */
··· 20 }; 21 22 /* 27MHz is the CVBS Pixel Clock */ 23 + #define MESON_VCLK_CVBS (27 * 1000 * 1000) 24 25 enum drm_mode_status 26 + meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned long long freq); 27 enum drm_mode_status 28 + meson_vclk_vic_supported_freq(struct meson_drm *priv, 29 + unsigned long long phy_freq, 30 + unsigned long long vclk_freq); 31 32 void meson_vclk_setup(struct meson_drm *priv, unsigned int target, 33 + unsigned long long phy_freq, unsigned long long vclk_freq, 34 + unsigned long long venc_freq, unsigned long long dac_freq, 35 bool hdmi_use_enci); 36 37 #endif /* __MESON_VCLK_H */
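The meson_vclk changes above move every clock rate to 64-bit Hz values and rework the 1000/1001 helpers around DIV_ROUND_CLOSEST_ULL. As a rough aside, here is a minimal userspace sketch of that arithmetic (the helper below is my own stand-in for DIV_ROUND_CLOSEST_ULL, not driver code); it shows why the intermediate product no longer fits in 32 bits once rates are carried in Hz:

#include <stdio.h>
#include <stdint.h>

/* Stand-in for the kernel's DIV_ROUND_CLOSEST_ULL(). */
static uint64_t div_round_closest_u64(uint64_t n, uint64_t d)
{
	return (n + d / 2) / d;
}

/* Models PIXEL_FREQ_1000_1001(): the 59.94-style variant of a clock. */
static uint64_t pixel_freq_1000_1001(uint64_t freq)
{
	return div_round_closest_u64(freq * 1000ULL, 1001ULL);
}

int main(void)
{
	uint64_t vclk = 594000000ULL;	/* 594 MHz 4k pixel clock, in Hz */

	/* 594000000 * 1000 = 5.94e11, far beyond 32 bits, hence the
	 * driver's move to unsigned long long; prints 593406593.
	 */
	printf("%llu\n", (unsigned long long)pixel_freq_1000_1001(vclk));
	return 0;
}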
+2 -2
drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
··· 129 { 130 struct jadard *jadard = panel_to_jadard(panel); 131 132 - gpiod_set_value(jadard->reset, 1); 133 msleep(120); 134 135 if (jadard->desc->reset_before_power_off_vcioo) { 136 - gpiod_set_value(jadard->reset, 0); 137 138 usleep_range(1000, 2000); 139 }
··· 129 { 130 struct jadard *jadard = panel_to_jadard(panel); 131 132 + gpiod_set_value(jadard->reset, 0); 133 msleep(120); 134 135 if (jadard->desc->reset_before_power_off_vcioo) { 136 + gpiod_set_value(jadard->reset, 1); 137 138 usleep_range(1000, 2000); 139 }
+9
drivers/gpu/drm/virtio/virtgpu_drv.c
··· 128 drm_dev_put(dev); 129 } 130 131 static void virtio_gpu_config_changed(struct virtio_device *vdev) 132 { 133 struct drm_device *dev = vdev->priv; ··· 170 .id_table = id_table, 171 .probe = virtio_gpu_probe, 172 .remove = virtio_gpu_remove, 173 .config_changed = virtio_gpu_config_changed 174 }; 175
··· 128 drm_dev_put(dev); 129 } 130 131 + static void virtio_gpu_shutdown(struct virtio_device *vdev) 132 + { 133 + /* 134 + * drm does its own synchronization on shutdown. 135 + * Do nothing here, opt out of device reset. 136 + */ 137 + } 138 + 139 static void virtio_gpu_config_changed(struct virtio_device *vdev) 140 { 141 struct drm_device *dev = vdev->priv; ··· 162 .id_table = id_table, 163 .probe = virtio_gpu_probe, 164 .remove = virtio_gpu_remove, 165 + .shutdown = virtio_gpu_shutdown, 166 .config_changed = virtio_gpu_config_changed 167 }; 168
+1
drivers/hwtracing/intel_th/Kconfig
··· 60 61 config INTEL_TH_MSU 62 tristate "Intel(R) Trace Hub Memory Storage Unit" 63 help 64 Memory Storage Unit (MSU) trace output device enables 65 storing STP traces to system memory. It supports single
··· 60 61 config INTEL_TH_MSU 62 tristate "Intel(R) Trace Hub Memory Storage Unit" 63 + depends on MMU 64 help 65 Memory Storage Unit (MSU) trace output device enables 66 storing STP traces to system memory. It supports single
+7 -24
drivers/hwtracing/intel_th/msu.c
··· 19 #include <linux/io.h> 20 #include <linux/workqueue.h> 21 #include <linux/dma-mapping.h> 22 23 #ifdef CONFIG_X86 24 #include <asm/set_memory.h> ··· 977 for (off = 0; off < msc->nr_pages << PAGE_SHIFT; off += PAGE_SIZE) { 978 struct page *page = virt_to_page(msc->base + off); 979 980 - page->mapping = NULL; 981 __free_page(page); 982 } 983 ··· 1158 int i; 1159 1160 for_each_sg(win->sgt->sgl, sg, win->nr_segs, i) { 1161 - struct page *page = msc_sg_page(sg); 1162 - 1163 - page->mapping = NULL; 1164 dma_free_coherent(msc_dev(win->msc)->parent->parent, PAGE_SIZE, 1165 sg_virt(sg), sg_dma_address(sg)); 1166 } ··· 1598 { 1599 struct msc_iter *iter = vma->vm_file->private_data; 1600 struct msc *msc = iter->msc; 1601 - unsigned long pg; 1602 1603 if (!atomic_dec_and_mutex_lock(&msc->mmap_count, &msc->buf_mutex)) 1604 return; 1605 - 1606 - /* drop page _refcounts */ 1607 - for (pg = 0; pg < msc->nr_pages; pg++) { 1608 - struct page *page = msc_buffer_get_page(msc, pg); 1609 - 1610 - if (WARN_ON_ONCE(!page)) 1611 - continue; 1612 - 1613 - if (page->mapping) 1614 - page->mapping = NULL; 1615 - } 1616 1617 /* last mapping -- drop user_count */ 1618 atomic_dec(&msc->user_count); ··· 1611 { 1612 struct msc_iter *iter = vmf->vma->vm_file->private_data; 1613 struct msc *msc = iter->msc; 1614 1615 - vmf->page = msc_buffer_get_page(msc, vmf->pgoff); 1616 - if (!vmf->page) 1617 return VM_FAULT_SIGBUS; 1618 1619 - get_page(vmf->page); 1620 - vmf->page->mapping = vmf->vma->vm_file->f_mapping; 1621 - vmf->page->index = vmf->pgoff; 1622 - 1623 - return 0; 1624 } 1625 1626 static const struct vm_operations_struct msc_mmap_ops = { ··· 1659 atomic_dec(&msc->user_count); 1660 1661 vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 1662 - vm_flags_set(vma, VM_DONTEXPAND | VM_DONTCOPY); 1663 vma->vm_ops = &msc_mmap_ops; 1664 return ret; 1665 }
··· 19 #include <linux/io.h> 20 #include <linux/workqueue.h> 21 #include <linux/dma-mapping.h> 22 + #include <linux/pfn_t.h> 23 24 #ifdef CONFIG_X86 25 #include <asm/set_memory.h> ··· 976 for (off = 0; off < msc->nr_pages << PAGE_SHIFT; off += PAGE_SIZE) { 977 struct page *page = virt_to_page(msc->base + off); 978 979 __free_page(page); 980 } 981 ··· 1158 int i; 1159 1160 for_each_sg(win->sgt->sgl, sg, win->nr_segs, i) { 1161 dma_free_coherent(msc_dev(win->msc)->parent->parent, PAGE_SIZE, 1162 sg_virt(sg), sg_dma_address(sg)); 1163 } ··· 1601 { 1602 struct msc_iter *iter = vma->vm_file->private_data; 1603 struct msc *msc = iter->msc; 1604 1605 if (!atomic_dec_and_mutex_lock(&msc->mmap_count, &msc->buf_mutex)) 1606 return; 1607 1608 /* last mapping -- drop user_count */ 1609 atomic_dec(&msc->user_count); ··· 1626 { 1627 struct msc_iter *iter = vmf->vma->vm_file->private_data; 1628 struct msc *msc = iter->msc; 1629 + struct page *page; 1630 1631 + page = msc_buffer_get_page(msc, vmf->pgoff); 1632 + if (!page) 1633 return VM_FAULT_SIGBUS; 1634 1635 + get_page(page); 1636 + return vmf_insert_mixed(vmf->vma, vmf->address, page_to_pfn_t(page)); 1637 } 1638 1639 static const struct vm_operations_struct msc_mmap_ops = { ··· 1676 atomic_dec(&msc->user_count); 1677 1678 vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 1679 + vm_flags_set(vma, VM_DONTEXPAND | VM_DONTCOPY | VM_MIXEDMAP); 1680 vma->vm_ops = &msc_mmap_ops; 1681 return ret; 1682 }
+2 -2
drivers/i2c/busses/i2c-imx-lpi2c.c
··· 1380 return 0; 1381 1382 rpm_disable: 1383 - pm_runtime_put(&pdev->dev); 1384 - pm_runtime_disable(&pdev->dev); 1385 pm_runtime_dont_use_autosuspend(&pdev->dev); 1386 1387 return ret; 1388 }
··· 1380 return 0; 1381 1382 rpm_disable: 1383 pm_runtime_dont_use_autosuspend(&pdev->dev); 1384 + pm_runtime_put_sync(&pdev->dev); 1385 + pm_runtime_disable(&pdev->dev); 1386 1387 return ret; 1388 }
+4 -11
drivers/iommu/amd/iommu.c
··· 3869 struct irq_2_irte *irte_info = &ir_data->irq_2_irte; 3870 struct iommu_dev_data *dev_data; 3871 3872 if (ir_data->iommu == NULL) 3873 return -EINVAL; 3874 ··· 3882 * we should not modify the IRTE 3883 */ 3884 if (!dev_data || !dev_data->use_vapic) 3885 - return 0; 3886 3887 ir_data->cfg = irqd_cfg(data); 3888 pi_data->ir_data = ir_data; 3889 - 3890 - /* Note: 3891 - * SVM tries to set up for VAPIC mode, but we are in 3892 - * legacy mode. So, we force legacy mode instead. 3893 - */ 3894 - if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir)) { 3895 - pr_debug("%s: Fall back to using intr legacy remap\n", 3896 - __func__); 3897 - pi_data->is_guest_mode = false; 3898 - } 3899 3900 pi_data->prev_ga_tag = ir_data->cached_ga_tag; 3901 if (pi_data->is_guest_mode) {
··· 3869 struct irq_2_irte *irte_info = &ir_data->irq_2_irte; 3870 struct iommu_dev_data *dev_data; 3871 3872 + if (WARN_ON_ONCE(!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir))) 3873 + return -EINVAL; 3874 + 3875 if (ir_data->iommu == NULL) 3876 return -EINVAL; 3877 ··· 3879 * we should not modify the IRTE 3880 */ 3881 if (!dev_data || !dev_data->use_vapic) 3882 + return -EINVAL; 3883 3884 ir_data->cfg = irqd_cfg(data); 3885 pi_data->ir_data = ir_data; 3886 3887 pi_data->prev_ga_tag = ir_data->cached_ga_tag; 3888 if (pi_data->is_guest_mode) {
+1 -1
drivers/irqchip/irq-gic-v2m.c
··· 421 #ifdef CONFIG_ACPI 422 static int acpi_num_msi; 423 424 - static __init struct fwnode_handle *gicv2m_get_fwnode(struct device *dev) 425 { 426 struct v2m_data *data; 427
··· 421 #ifdef CONFIG_ACPI 422 static int acpi_num_msi; 423 424 + static struct fwnode_handle *gicv2m_get_fwnode(struct device *dev) 425 { 426 struct v2m_data *data; 427
+1 -1
drivers/mcb/mcb-parse.c
··· 96 97 ret = mcb_device_register(bus, mdev); 98 if (ret < 0) 99 - goto err; 100 101 return 0; 102
··· 96 97 ret = mcb_device_register(bus, mdev); 98 if (ret < 0) 99 + return ret; 100 101 return 0; 102
+6 -2
drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_gpio.c
··· 37 struct pci1xxxx_gpio { 38 struct auxiliary_device *aux_dev; 39 void __iomem *reg_base; 40 struct gpio_chip gpio; 41 spinlock_t lock; 42 int irq_base; ··· 168 unsigned long flags; 169 170 spin_lock_irqsave(&priv->lock, flags); 171 - pci1xxx_assign_bit(priv->reg_base, INTR_STAT_OFFSET(gpio), (gpio % 32), true); 172 spin_unlock_irqrestore(&priv->lock, flags); 173 } 174 ··· 258 struct pci1xxxx_gpio *priv = dev_id; 259 struct gpio_chip *gc = &priv->gpio; 260 unsigned long int_status = 0; 261 unsigned long flags; 262 u8 pincount; 263 int bit; ··· 282 writel(BIT(bit), priv->reg_base + INTR_STATUS_OFFSET(gpiobank)); 283 spin_unlock_irqrestore(&priv->lock, flags); 284 irq = irq_find_mapping(gc->irq.domain, (bit + (gpiobank * 32))); 285 - handle_nested_irq(irq); 286 } 287 } 288 spin_lock_irqsave(&priv->lock, flags);
··· 37 struct pci1xxxx_gpio { 38 struct auxiliary_device *aux_dev; 39 void __iomem *reg_base; 40 + raw_spinlock_t wa_lock; 41 struct gpio_chip gpio; 42 spinlock_t lock; 43 int irq_base; ··· 167 unsigned long flags; 168 169 spin_lock_irqsave(&priv->lock, flags); 170 + writel(BIT(gpio % 32), priv->reg_base + INTR_STAT_OFFSET(gpio)); 171 spin_unlock_irqrestore(&priv->lock, flags); 172 } 173 ··· 257 struct pci1xxxx_gpio *priv = dev_id; 258 struct gpio_chip *gc = &priv->gpio; 259 unsigned long int_status = 0; 260 + unsigned long wa_flags; 261 unsigned long flags; 262 u8 pincount; 263 int bit; ··· 280 writel(BIT(bit), priv->reg_base + INTR_STATUS_OFFSET(gpiobank)); 281 spin_unlock_irqrestore(&priv->lock, flags); 282 irq = irq_find_mapping(gc->irq.domain, (bit + (gpiobank * 32))); 283 + raw_spin_lock_irqsave(&priv->wa_lock, wa_flags); 284 + generic_handle_irq(irq); 285 + raw_spin_unlock_irqrestore(&priv->wa_lock, wa_flags); 286 } 287 } 288 spin_lock_irqsave(&priv->lock, flags);
+1
drivers/misc/mei/hw-me-regs.h
··· 117 118 #define MEI_DEV_ID_LNL_M 0xA870 /* Lunar Lake Point M */ 119 120 #define MEI_DEV_ID_PTL_P 0xE470 /* Panther Lake P */ 121 122 /*
··· 117 118 #define MEI_DEV_ID_LNL_M 0xA870 /* Lunar Lake Point M */ 119 120 + #define MEI_DEV_ID_PTL_H 0xE370 /* Panther Lake H */ 121 #define MEI_DEV_ID_PTL_P 0xE470 /* Panther Lake P */ 122 123 /*
+1
drivers/misc/mei/pci-me.c
··· 124 125 {MEI_PCI_DEVICE(MEI_DEV_ID_LNL_M, MEI_ME_PCH15_CFG)}, 126 127 {MEI_PCI_DEVICE(MEI_DEV_ID_PTL_P, MEI_ME_PCH15_CFG)}, 128 129 /* required last entry */
··· 124 125 {MEI_PCI_DEVICE(MEI_DEV_ID_LNL_M, MEI_ME_PCH15_CFG)}, 126 127 + {MEI_PCI_DEVICE(MEI_DEV_ID_PTL_H, MEI_ME_PCH15_CFG)}, 128 {MEI_PCI_DEVICE(MEI_DEV_ID_PTL_P, MEI_ME_PCH15_CFG)}, 129 130 /* required last entry */
+22 -18
drivers/misc/mei/vsc-tp.c
··· 36 #define VSC_TP_XFER_TIMEOUT_BYTES 700 37 #define VSC_TP_PACKET_PADDING_SIZE 1 38 #define VSC_TP_PACKET_SIZE(pkt) \ 39 - (sizeof(struct vsc_tp_packet) + le16_to_cpu((pkt)->len) + VSC_TP_CRC_SIZE) 40 #define VSC_TP_MAX_PACKET_SIZE \ 41 - (sizeof(struct vsc_tp_packet) + VSC_TP_MAX_MSG_SIZE + VSC_TP_CRC_SIZE) 42 #define VSC_TP_MAX_XFER_SIZE \ 43 (VSC_TP_MAX_PACKET_SIZE + VSC_TP_XFER_TIMEOUT_BYTES) 44 #define VSC_TP_NEXT_XFER_LEN(len, offset) \ 45 - (len + sizeof(struct vsc_tp_packet) + VSC_TP_CRC_SIZE - offset + VSC_TP_PACKET_PADDING_SIZE) 46 47 - struct vsc_tp_packet { 48 __u8 sync; 49 __u8 cmd; 50 __le16 len; 51 __le32 seq; 52 - __u8 buf[] __counted_by(len); 53 }; 54 55 struct vsc_tp { ··· 71 u32 seq; 72 73 /* command buffer */ 74 - void *tx_buf; 75 - void *rx_buf; 76 77 atomic_t assert_cnt; 78 wait_queue_head_t xfer_wait; ··· 162 static int vsc_tp_xfer_helper(struct vsc_tp *tp, struct vsc_tp_packet *pkt, 163 void *ibuf, u16 ilen) 164 { 165 - int ret, offset = 0, cpy_len, src_len, dst_len = sizeof(struct vsc_tp_packet); 166 int next_xfer_len = VSC_TP_PACKET_SIZE(pkt) + VSC_TP_XFER_TIMEOUT_BYTES; 167 - u8 *src, *crc_src, *rx_buf = tp->rx_buf; 168 int count_down = VSC_TP_MAX_XFER_COUNT; 169 u32 recv_crc = 0, crc = ~0; 170 - struct vsc_tp_packet ack; 171 u8 *dst = (u8 *)&ack; 172 bool synced = false; 173 ··· 284 285 guard(mutex)(&tp->mutex); 286 287 - pkt->sync = VSC_TP_PACKET_SYNC; 288 - pkt->cmd = cmd; 289 - pkt->len = cpu_to_le16(olen); 290 - pkt->seq = cpu_to_le32(++tp->seq); 291 memcpy(pkt->buf, obuf, olen); 292 293 crc = ~crc32(~0, (u8 *)pkt, sizeof(pkt) + olen); ··· 324 guard(mutex)(&tp->mutex); 325 326 /* rom xfer is big endian */ 327 - cpu_to_be32_array(tp->tx_buf, obuf, words); 328 329 ret = read_poll_timeout(gpiod_get_value_cansleep, ret, 330 !ret, VSC_TP_ROM_XFER_POLL_DELAY_US, ··· 340 return ret; 341 342 if (ibuf) 343 - be32_to_cpu_array(ibuf, tp->rx_buf, words); 344 345 return ret; 346 } ··· 494 if (!tp) 495 return -ENOMEM; 496 497 - tp->tx_buf = devm_kzalloc(dev, VSC_TP_MAX_XFER_SIZE, GFP_KERNEL); 498 if (!tp->tx_buf) 499 return -ENOMEM; 500 501 - tp->rx_buf = devm_kzalloc(dev, VSC_TP_MAX_XFER_SIZE, GFP_KERNEL); 502 if (!tp->rx_buf) 503 return -ENOMEM; 504
··· 36 #define VSC_TP_XFER_TIMEOUT_BYTES 700 37 #define VSC_TP_PACKET_PADDING_SIZE 1 38 #define VSC_TP_PACKET_SIZE(pkt) \ 39 + (sizeof(struct vsc_tp_packet_hdr) + le16_to_cpu((pkt)->hdr.len) + VSC_TP_CRC_SIZE) 40 #define VSC_TP_MAX_PACKET_SIZE \ 41 + (sizeof(struct vsc_tp_packet_hdr) + VSC_TP_MAX_MSG_SIZE + VSC_TP_CRC_SIZE) 42 #define VSC_TP_MAX_XFER_SIZE \ 43 (VSC_TP_MAX_PACKET_SIZE + VSC_TP_XFER_TIMEOUT_BYTES) 44 #define VSC_TP_NEXT_XFER_LEN(len, offset) \ 45 + (len + sizeof(struct vsc_tp_packet_hdr) + VSC_TP_CRC_SIZE - offset + VSC_TP_PACKET_PADDING_SIZE) 46 47 + struct vsc_tp_packet_hdr { 48 __u8 sync; 49 __u8 cmd; 50 __le16 len; 51 __le32 seq; 52 + }; 53 + 54 + struct vsc_tp_packet { 55 + struct vsc_tp_packet_hdr hdr; 56 + __u8 buf[VSC_TP_MAX_XFER_SIZE - sizeof(struct vsc_tp_packet_hdr)]; 57 }; 58 59 struct vsc_tp { ··· 67 u32 seq; 68 69 /* command buffer */ 70 + struct vsc_tp_packet *tx_buf; 71 + struct vsc_tp_packet *rx_buf; 72 73 atomic_t assert_cnt; 74 wait_queue_head_t xfer_wait; ··· 158 static int vsc_tp_xfer_helper(struct vsc_tp *tp, struct vsc_tp_packet *pkt, 159 void *ibuf, u16 ilen) 160 { 161 + int ret, offset = 0, cpy_len, src_len, dst_len = sizeof(struct vsc_tp_packet_hdr); 162 int next_xfer_len = VSC_TP_PACKET_SIZE(pkt) + VSC_TP_XFER_TIMEOUT_BYTES; 163 + u8 *src, *crc_src, *rx_buf = (u8 *)tp->rx_buf; 164 int count_down = VSC_TP_MAX_XFER_COUNT; 165 u32 recv_crc = 0, crc = ~0; 166 + struct vsc_tp_packet_hdr ack; 167 u8 *dst = (u8 *)&ack; 168 bool synced = false; 169 ··· 280 281 guard(mutex)(&tp->mutex); 282 283 + pkt->hdr.sync = VSC_TP_PACKET_SYNC; 284 + pkt->hdr.cmd = cmd; 285 + pkt->hdr.len = cpu_to_le16(olen); 286 + pkt->hdr.seq = cpu_to_le32(++tp->seq); 287 memcpy(pkt->buf, obuf, olen); 288 289 crc = ~crc32(~0, (u8 *)pkt, sizeof(pkt) + olen); ··· 320 guard(mutex)(&tp->mutex); 321 322 /* rom xfer is big endian */ 323 + cpu_to_be32_array((u32 *)tp->tx_buf, obuf, words); 324 325 ret = read_poll_timeout(gpiod_get_value_cansleep, ret, 326 !ret, VSC_TP_ROM_XFER_POLL_DELAY_US, ··· 336 return ret; 337 338 if (ibuf) 339 + be32_to_cpu_array(ibuf, (u32 *)tp->rx_buf, words); 340 341 return ret; 342 } ··· 490 if (!tp) 491 return -ENOMEM; 492 493 + tp->tx_buf = devm_kzalloc(dev, sizeof(*tp->tx_buf), GFP_KERNEL); 494 if (!tp->tx_buf) 495 return -ENOMEM; 496 497 + tp->rx_buf = devm_kzalloc(dev, sizeof(*tp->rx_buf), GFP_KERNEL); 498 if (!tp->rx_buf) 499 return -ENOMEM; 500
+1 -20
drivers/misc/pci_endpoint_test.c
··· 122 struct pci_endpoint_test_data { 123 enum pci_barno test_reg_bar; 124 size_t alignment; 125 - int irq_type; 126 }; 127 128 static inline u32 pci_endpoint_test_readl(struct pci_endpoint_test *test, ··· 947 test_reg_bar = data->test_reg_bar; 948 test->test_reg_bar = test_reg_bar; 949 test->alignment = data->alignment; 950 - test->irq_type = data->irq_type; 951 } 952 953 init_completion(&test->irq_raised); ··· 967 } 968 969 pci_set_master(pdev); 970 - 971 - ret = pci_endpoint_test_alloc_irq_vectors(test, test->irq_type); 972 - if (ret) 973 - goto err_disable_irq; 974 975 for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { 976 if (pci_resource_flags(pdev, bar) & IORESOURCE_MEM) { ··· 1003 goto err_ida_remove; 1004 } 1005 1006 - ret = pci_endpoint_test_request_irq(test); 1007 - if (ret) 1008 - goto err_kfree_test_name; 1009 - 1010 pci_endpoint_test_get_capabilities(test); 1011 1012 misc_device = &test->miscdev; ··· 1010 misc_device->name = kstrdup(name, GFP_KERNEL); 1011 if (!misc_device->name) { 1012 ret = -ENOMEM; 1013 - goto err_release_irq; 1014 } 1015 misc_device->parent = &pdev->dev; 1016 misc_device->fops = &pci_endpoint_test_fops; ··· 1026 err_kfree_name: 1027 kfree(misc_device->name); 1028 1029 - err_release_irq: 1030 - pci_endpoint_test_release_irq(test); 1031 - 1032 err_kfree_test_name: 1033 kfree(test->name); 1034 ··· 1038 pci_iounmap(pdev, test->bar[bar]); 1039 } 1040 1041 - err_disable_irq: 1042 - pci_endpoint_test_free_irq_vectors(test); 1043 pci_release_regions(pdev); 1044 1045 err_disable_pdev: ··· 1077 static const struct pci_endpoint_test_data default_data = { 1078 .test_reg_bar = BAR_0, 1079 .alignment = SZ_4K, 1080 - .irq_type = PCITEST_IRQ_TYPE_MSI, 1081 }; 1082 1083 static const struct pci_endpoint_test_data am654_data = { 1084 .test_reg_bar = BAR_2, 1085 .alignment = SZ_64K, 1086 - .irq_type = PCITEST_IRQ_TYPE_MSI, 1087 }; 1088 1089 static const struct pci_endpoint_test_data j721e_data = { 1090 .alignment = 256, 1091 - .irq_type = PCITEST_IRQ_TYPE_MSI, 1092 }; 1093 1094 static const struct pci_endpoint_test_data rk3588_data = { 1095 .alignment = SZ_64K, 1096 - .irq_type = PCITEST_IRQ_TYPE_MSI, 1097 }; 1098 1099 /*
··· 122 struct pci_endpoint_test_data { 123 enum pci_barno test_reg_bar; 124 size_t alignment; 125 }; 126 127 static inline u32 pci_endpoint_test_readl(struct pci_endpoint_test *test, ··· 948 test_reg_bar = data->test_reg_bar; 949 test->test_reg_bar = test_reg_bar; 950 test->alignment = data->alignment; 951 } 952 953 init_completion(&test->irq_raised); ··· 969 } 970 971 pci_set_master(pdev); 972 973 for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { 974 if (pci_resource_flags(pdev, bar) & IORESOURCE_MEM) { ··· 1009 goto err_ida_remove; 1010 } 1011 1012 pci_endpoint_test_get_capabilities(test); 1013 1014 misc_device = &test->miscdev; ··· 1020 misc_device->name = kstrdup(name, GFP_KERNEL); 1021 if (!misc_device->name) { 1022 ret = -ENOMEM; 1023 + goto err_kfree_test_name; 1024 } 1025 misc_device->parent = &pdev->dev; 1026 misc_device->fops = &pci_endpoint_test_fops; ··· 1036 err_kfree_name: 1037 kfree(misc_device->name); 1038 1039 err_kfree_test_name: 1040 kfree(test->name); 1041 ··· 1051 pci_iounmap(pdev, test->bar[bar]); 1052 } 1053 1054 pci_release_regions(pdev); 1055 1056 err_disable_pdev: ··· 1092 static const struct pci_endpoint_test_data default_data = { 1093 .test_reg_bar = BAR_0, 1094 .alignment = SZ_4K, 1095 }; 1096 1097 static const struct pci_endpoint_test_data am654_data = { 1098 .test_reg_bar = BAR_2, 1099 .alignment = SZ_64K, 1100 }; 1101 1102 static const struct pci_endpoint_test_data j721e_data = { 1103 .alignment = 256, 1104 }; 1105 1106 static const struct pci_endpoint_test_data rk3588_data = { 1107 .alignment = SZ_64K, 1108 }; 1109 1110 /*
+3 -3
drivers/net/dsa/mt7530.c
··· 2419 struct mt7530_priv *priv = ds->priv; 2420 int ret, i; 2421 2422 mt753x_trap_frames(priv); 2423 2424 /* Enable and reset MIB counters */ ··· 2573 ret = mt7531_setup_common(ds); 2574 if (ret) 2575 return ret; 2576 - 2577 - ds->assisted_learning_on_cpu_port = true; 2578 - ds->mtu_enforcement_ingress = true; 2579 2580 return 0; 2581 }
··· 2419 struct mt7530_priv *priv = ds->priv; 2420 int ret, i; 2421 2422 + ds->assisted_learning_on_cpu_port = true; 2423 + ds->mtu_enforcement_ingress = true; 2424 + 2425 mt753x_trap_frames(priv); 2426 2427 /* Enable and reset MIB counters */ ··· 2570 ret = mt7531_setup_common(ds); 2571 if (ret) 2572 return ret; 2573 2574 return 0; 2575 }
+14 -22
drivers/net/ethernet/amd/pds_core/adminq.c
··· 5 6 #include "core.h" 7 8 - struct pdsc_wait_context { 9 - struct pdsc_qcq *qcq; 10 - struct completion wait_completion; 11 - }; 12 - 13 static int pdsc_process_notifyq(struct pdsc_qcq *qcq) 14 { 15 union pds_core_notifyq_comp *comp; ··· 104 q_info = &q->info[q->tail_idx]; 105 q->tail_idx = (q->tail_idx + 1) & (q->num_descs - 1); 106 107 - /* Copy out the completion data */ 108 - memcpy(q_info->dest, comp, sizeof(*comp)); 109 - 110 - complete_all(&q_info->wc->wait_completion); 111 112 if (cq->tail_idx == cq->num_descs - 1) 113 cq->done_color = !cq->done_color; ··· 157 static int __pdsc_adminq_post(struct pdsc *pdsc, 158 struct pdsc_qcq *qcq, 159 union pds_core_adminq_cmd *cmd, 160 - union pds_core_adminq_comp *comp, 161 - struct pdsc_wait_context *wc) 162 { 163 struct pdsc_queue *q = &qcq->q; 164 struct pdsc_q_info *q_info; ··· 199 /* Post the request */ 200 index = q->head_idx; 201 q_info = &q->info[index]; 202 - q_info->wc = wc; 203 q_info->dest = comp; 204 memcpy(q_info->desc, cmd, sizeof(*cmd)); 205 206 dev_dbg(pdsc->dev, "head_idx %d tail_idx %d\n", 207 q->head_idx, q->tail_idx); ··· 225 union pds_core_adminq_comp *comp, 226 bool fast_poll) 227 { 228 - struct pdsc_wait_context wc = { 229 - .wait_completion = 230 - COMPLETION_INITIALIZER_ONSTACK(wc.wait_completion), 231 - }; 232 unsigned long poll_interval = 1; 233 unsigned long poll_jiffies; 234 unsigned long time_limit; 235 unsigned long time_start; 236 unsigned long time_done; 237 unsigned long remaining; 238 int err = 0; 239 int index; 240 ··· 241 return -ENXIO; 242 } 243 244 - wc.qcq = &pdsc->adminqcq; 245 - index = __pdsc_adminq_post(pdsc, &pdsc->adminqcq, cmd, comp, &wc); 246 if (index < 0) { 247 err = index; 248 goto err_out; 249 } 250 251 time_start = jiffies; 252 time_limit = time_start + HZ * pdsc->devcmd_timeout; 253 do { 254 /* Timeslice the actual wait to catch IO errors etc early */ 255 poll_jiffies = msecs_to_jiffies(poll_interval); 256 - remaining = wait_for_completion_timeout(&wc.wait_completion, 257 - poll_jiffies); 258 if (remaining) 259 break; 260 ··· 282 dev_dbg(pdsc->dev, "%s: elapsed %d msecs\n", 283 __func__, jiffies_to_msecs(time_done - time_start)); 284 285 - /* Check the results */ 286 - if (time_after_eq(time_done, time_limit)) 287 err = -ETIMEDOUT; 288 289 dev_dbg(pdsc->dev, "read admin queue completion idx %d:\n", index); 290 dynamic_hex_dump("comp ", DUMP_PREFIX_OFFSET, 16, 1,
··· 5 6 #include "core.h" 7 8 static int pdsc_process_notifyq(struct pdsc_qcq *qcq) 9 { 10 union pds_core_notifyq_comp *comp; ··· 109 q_info = &q->info[q->tail_idx]; 110 q->tail_idx = (q->tail_idx + 1) & (q->num_descs - 1); 111 112 + if (!completion_done(&q_info->completion)) { 113 + memcpy(q_info->dest, comp, sizeof(*comp)); 114 + complete(&q_info->completion); 115 + } 116 117 if (cq->tail_idx == cq->num_descs - 1) 118 cq->done_color = !cq->done_color; ··· 162 static int __pdsc_adminq_post(struct pdsc *pdsc, 163 struct pdsc_qcq *qcq, 164 union pds_core_adminq_cmd *cmd, 165 + union pds_core_adminq_comp *comp) 166 { 167 struct pdsc_queue *q = &qcq->q; 168 struct pdsc_q_info *q_info; ··· 205 /* Post the request */ 206 index = q->head_idx; 207 q_info = &q->info[index]; 208 q_info->dest = comp; 209 memcpy(q_info->desc, cmd, sizeof(*cmd)); 210 + reinit_completion(&q_info->completion); 211 212 dev_dbg(pdsc->dev, "head_idx %d tail_idx %d\n", 213 q->head_idx, q->tail_idx); ··· 231 union pds_core_adminq_comp *comp, 232 bool fast_poll) 233 { 234 unsigned long poll_interval = 1; 235 unsigned long poll_jiffies; 236 unsigned long time_limit; 237 unsigned long time_start; 238 unsigned long time_done; 239 unsigned long remaining; 240 + struct completion *wc; 241 int err = 0; 242 int index; 243 ··· 250 return -ENXIO; 251 } 252 253 + index = __pdsc_adminq_post(pdsc, &pdsc->adminqcq, cmd, comp); 254 if (index < 0) { 255 err = index; 256 goto err_out; 257 } 258 259 + wc = &pdsc->adminqcq.q.info[index].completion; 260 time_start = jiffies; 261 time_limit = time_start + HZ * pdsc->devcmd_timeout; 262 do { 263 /* Timeslice the actual wait to catch IO errors etc early */ 264 poll_jiffies = msecs_to_jiffies(poll_interval); 265 + remaining = wait_for_completion_timeout(wc, poll_jiffies); 266 if (remaining) 267 break; 268 ··· 292 dev_dbg(pdsc->dev, "%s: elapsed %d msecs\n", 293 __func__, jiffies_to_msecs(time_done - time_start)); 294 295 + /* Check the results and clear an un-completed timeout */ 296 + if (time_after_eq(time_done, time_limit) && !completion_done(wc)) { 297 err = -ETIMEDOUT; 298 + complete(wc); 299 + } 300 301 dev_dbg(pdsc->dev, "read admin queue completion idx %d:\n", index); 302 dynamic_hex_dump("comp ", DUMP_PREFIX_OFFSET, 16, 1,
-3
drivers/net/ethernet/amd/pds_core/auxbus.c
··· 107 dev_dbg(pf->dev, "%s: %s opcode %d\n", 108 __func__, dev_name(&padev->aux_dev.dev), req->opcode); 109 110 - if (pf->state) 111 - return -ENXIO; 112 - 113 /* Wrap the client's request */ 114 cmd.client_request.opcode = PDS_AQ_CMD_CLIENT_CMD; 115 cmd.client_request.client_id = cpu_to_le16(padev->client_id);
··· 107 dev_dbg(pf->dev, "%s: %s opcode %d\n", 108 __func__, dev_name(&padev->aux_dev.dev), req->opcode); 109 110 /* Wrap the client's request */ 111 cmd.client_request.opcode = PDS_AQ_CMD_CLIENT_CMD; 112 cmd.client_request.client_id = cpu_to_le16(padev->client_id);
+4 -5
drivers/net/ethernet/amd/pds_core/core.c
··· 167 q->base = base; 168 q->base_pa = base_pa; 169 170 - for (i = 0, cur = q->info; i < q->num_descs; i++, cur++) 171 cur->desc = base + (i * q->desc_size); 172 } 173 174 static void pdsc_cq_map(struct pdsc_cq *cq, void *base, dma_addr_t base_pa) ··· 327 size_t sz; 328 int err; 329 330 - /* Scale the descriptor ring length based on number of CPUs and VFs */ 331 - numdescs = max_t(int, PDSC_ADMINQ_MIN_LENGTH, num_online_cpus()); 332 - numdescs += 2 * pci_sriov_get_totalvfs(pdsc->pdev); 333 - numdescs = roundup_pow_of_two(numdescs); 334 err = pdsc_qcq_alloc(pdsc, PDS_CORE_QTYPE_ADMINQ, 0, "adminq", 335 PDS_CORE_QCQ_F_CORE | PDS_CORE_QCQ_F_INTR, 336 numdescs,
··· 167 q->base = base; 168 q->base_pa = base_pa; 169 170 + for (i = 0, cur = q->info; i < q->num_descs; i++, cur++) { 171 cur->desc = base + (i * q->desc_size); 172 + init_completion(&cur->completion); 173 + } 174 } 175 176 static void pdsc_cq_map(struct pdsc_cq *cq, void *base, dma_addr_t base_pa) ··· 325 size_t sz; 326 int err; 327 328 + numdescs = PDSC_ADMINQ_MAX_LENGTH; 329 err = pdsc_qcq_alloc(pdsc, PDS_CORE_QTYPE_ADMINQ, 0, "adminq", 330 PDS_CORE_QCQ_F_CORE | PDS_CORE_QCQ_F_INTR, 331 numdescs,
+2 -2
drivers/net/ethernet/amd/pds_core/core.h
··· 16 17 #define PDSC_WATCHDOG_SECS 5 18 #define PDSC_QUEUE_NAME_MAX_SZ 16 19 - #define PDSC_ADMINQ_MIN_LENGTH 16 /* must be a power of two */ 20 #define PDSC_NOTIFYQ_LENGTH 64 /* must be a power of two */ 21 #define PDSC_TEARDOWN_RECOVERY false 22 #define PDSC_TEARDOWN_REMOVING true ··· 96 unsigned int bytes; 97 unsigned int nbufs; 98 struct pdsc_buf_info bufs[PDS_CORE_MAX_FRAGS]; 99 - struct pdsc_wait_context *wc; 100 void *dest; 101 }; 102
··· 16 17 #define PDSC_WATCHDOG_SECS 5 18 #define PDSC_QUEUE_NAME_MAX_SZ 16 19 + #define PDSC_ADMINQ_MAX_LENGTH 16 /* must be a power of two */ 20 #define PDSC_NOTIFYQ_LENGTH 64 /* must be a power of two */ 21 #define PDSC_TEARDOWN_RECOVERY false 22 #define PDSC_TEARDOWN_REMOVING true ··· 96 unsigned int bytes; 97 unsigned int nbufs; 98 struct pdsc_buf_info bufs[PDS_CORE_MAX_FRAGS]; 99 + struct completion completion; 100 void *dest; 101 }; 102
+1 -3
drivers/net/ethernet/amd/pds_core/devlink.c
··· 105 .fw_control.opcode = PDS_CORE_CMD_FW_CONTROL, 106 .fw_control.oper = PDS_CORE_FW_GET_LIST, 107 }; 108 - struct pds_core_fw_list_info fw_list; 109 struct pdsc *pdsc = devlink_priv(dl); 110 union pds_core_dev_comp comp; 111 char buf[32]; ··· 118 if (!err) 119 memcpy_fromio(&fw_list, pdsc->cmd_regs->data, sizeof(fw_list)); 120 mutex_unlock(&pdsc->devcmd_lock); 121 - if (err && err != -EIO) 122 - return err; 123 124 listlen = min(fw_list.num_fw_slots, ARRAY_SIZE(fw_list.fw_names)); 125 for (i = 0; i < listlen; i++) {
··· 105 .fw_control.opcode = PDS_CORE_CMD_FW_CONTROL, 106 .fw_control.oper = PDS_CORE_FW_GET_LIST, 107 }; 108 + struct pds_core_fw_list_info fw_list = {}; 109 struct pdsc *pdsc = devlink_priv(dl); 110 union pds_core_dev_comp comp; 111 char buf[32]; ··· 118 if (!err) 119 memcpy_fromio(&fw_list, pdsc->cmd_regs->data, sizeof(fw_list)); 120 mutex_unlock(&pdsc->devcmd_lock); 121 122 listlen = min(fw_list.num_fw_slots, ARRAY_SIZE(fw_list.fw_names)); 123 for (i = 0; i < listlen; i++) {
+28 -17
drivers/net/ethernet/freescale/enetc/enetc.c
··· 1850 } 1851 } 1852 1853 static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring, 1854 struct napi_struct *napi, int work_limit, 1855 struct bpf_prog *prog) ··· 1878 1879 while (likely(rx_frm_cnt < work_limit)) { 1880 union enetc_rx_bd *rxbd, *orig_rxbd; 1881 - int orig_i, orig_cleaned_cnt; 1882 struct xdp_buff xdp_buff; 1883 struct sk_buff *skb; 1884 u32 bd_status; 1885 - int err; 1886 1887 rxbd = enetc_rxbd(rx_ring, i); 1888 bd_status = le32_to_cpu(rxbd->r.lstatus); ··· 1896 break; 1897 1898 orig_rxbd = rxbd; 1899 - orig_cleaned_cnt = cleaned_cnt; 1900 orig_i = i; 1901 1902 enetc_build_xdp_buff(rx_ring, bd_status, &rxbd, &i, ··· 1923 rx_ring->stats.xdp_drops++; 1924 break; 1925 case XDP_PASS: 1926 - rxbd = orig_rxbd; 1927 - cleaned_cnt = orig_cleaned_cnt; 1928 - i = orig_i; 1929 - 1930 - skb = enetc_build_skb(rx_ring, bd_status, &rxbd, 1931 - &i, &cleaned_cnt, 1932 - ENETC_RXB_DMA_SIZE_XDP); 1933 - if (unlikely(!skb)) 1934 goto out; 1935 1936 napi_gro_receive(napi, skb); 1937 break; ··· 1979 enetc_xdp_drop(rx_ring, orig_i, i); 1980 rx_ring->stats.xdp_redirect_failures++; 1981 } else { 1982 - while (orig_i != i) { 1983 - enetc_flip_rx_buff(rx_ring, 1984 - &rx_ring->rx_swbd[orig_i]); 1985 - enetc_bdr_idx_inc(rx_ring, &orig_i); 1986 - } 1987 xdp_redirect_frm_cnt++; 1988 rx_ring->stats.xdp_redirect++; 1989 } ··· 3372 bdr->buffer_offset = ENETC_RXB_PAD; 3373 priv->rx_ring[i] = bdr; 3374 3375 - err = xdp_rxq_info_reg(&bdr->xdp.rxq, priv->ndev, i, 0); 3376 if (err) 3377 goto free_vector; 3378
··· 1850 } 1851 } 1852 1853 + static void enetc_bulk_flip_buff(struct enetc_bdr *rx_ring, int rx_ring_first, 1854 + int rx_ring_last) 1855 + { 1856 + while (rx_ring_first != rx_ring_last) { 1857 + enetc_flip_rx_buff(rx_ring, 1858 + &rx_ring->rx_swbd[rx_ring_first]); 1859 + enetc_bdr_idx_inc(rx_ring, &rx_ring_first); 1860 + } 1861 + } 1862 + 1863 static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring, 1864 struct napi_struct *napi, int work_limit, 1865 struct bpf_prog *prog) ··· 1868 1869 while (likely(rx_frm_cnt < work_limit)) { 1870 union enetc_rx_bd *rxbd, *orig_rxbd; 1871 struct xdp_buff xdp_buff; 1872 struct sk_buff *skb; 1873 + int orig_i, err; 1874 u32 bd_status; 1875 1876 rxbd = enetc_rxbd(rx_ring, i); 1877 bd_status = le32_to_cpu(rxbd->r.lstatus); ··· 1887 break; 1888 1889 orig_rxbd = rxbd; 1890 orig_i = i; 1891 1892 enetc_build_xdp_buff(rx_ring, bd_status, &rxbd, &i, ··· 1915 rx_ring->stats.xdp_drops++; 1916 break; 1917 case XDP_PASS: 1918 + skb = xdp_build_skb_from_buff(&xdp_buff); 1919 + /* Probably under memory pressure, stop NAPI */ 1920 + if (unlikely(!skb)) { 1921 + enetc_xdp_drop(rx_ring, orig_i, i); 1922 + rx_ring->stats.xdp_drops++; 1923 goto out; 1924 + } 1925 + 1926 + enetc_get_offloads(rx_ring, orig_rxbd, skb); 1927 + 1928 + /* These buffers are about to be owned by the stack. 1929 + * Update our buffer cache (the rx_swbd array elements) 1930 + * with their other page halves. 1931 + */ 1932 + enetc_bulk_flip_buff(rx_ring, orig_i, i); 1933 1934 napi_gro_receive(napi, skb); 1935 break; ··· 1965 enetc_xdp_drop(rx_ring, orig_i, i); 1966 rx_ring->stats.xdp_redirect_failures++; 1967 } else { 1968 + enetc_bulk_flip_buff(rx_ring, orig_i, i); 1969 xdp_redirect_frm_cnt++; 1970 rx_ring->stats.xdp_redirect++; 1971 } ··· 3362 bdr->buffer_offset = ENETC_RXB_PAD; 3363 priv->rx_ring[i] = bdr; 3364 3365 + err = __xdp_rxq_info_reg(&bdr->xdp.rxq, priv->ndev, i, 0, 3366 + ENETC_RXB_DMA_SIZE_XDP); 3367 if (err) 3368 goto free_vector; 3369
+20 -4
drivers/net/ethernet/mediatek/mtk_eth_soc.c
··· 4043 mtk_w32(eth, 0x21021000, MTK_FE_INT_GRP); 4044 4045 if (mtk_is_netsys_v3_or_greater(eth)) { 4046 - /* PSE should not drop port1, port8 and port9 packets */ 4047 - mtk_w32(eth, 0x00000302, PSE_DROP_CFG); 4048 4049 /* GDM and CDM Threshold */ 4050 - mtk_w32(eth, 0x00000707, MTK_CDMW0_THRES); 4051 mtk_w32(eth, 0x00000077, MTK_CDMW1_THRES); 4052 4053 /* Disable GDM1 RX CRC stripping */ ··· 4080 mtk_w32(eth, 0x00000300, PSE_DROP_CFG); 4081 4082 /* PSE should drop packets to port 8/9 on WDMA Rx ring full */ 4083 - mtk_w32(eth, 0x00000300, PSE_PPE0_DROP); 4084 4085 /* PSE Free Queue Flow Control */ 4086 mtk_w32(eth, 0x01fa01f4, PSE_FQFC_CFG2);
··· 4043 mtk_w32(eth, 0x21021000, MTK_FE_INT_GRP); 4044 4045 if (mtk_is_netsys_v3_or_greater(eth)) { 4046 + /* PSE dummy page mechanism */ 4047 + mtk_w32(eth, PSE_DUMMY_WORK_GDM(1) | PSE_DUMMY_WORK_GDM(2) | 4048 + PSE_DUMMY_WORK_GDM(3) | DUMMY_PAGE_THR, PSE_DUMY_REQ); 4049 + 4050 + /* PSE free buffer drop threshold */ 4051 + mtk_w32(eth, 0x00600009, PSE_IQ_REV(8)); 4052 + 4053 + /* PSE should not drop port8, port9 and port13 packets from 4054 + * WDMA Tx 4055 + */ 4056 + mtk_w32(eth, 0x00002300, PSE_DROP_CFG); 4057 + 4058 + /* PSE should drop packets to port8, port9 and port13 on WDMA Rx 4059 + * ring full 4060 + */ 4061 + mtk_w32(eth, 0x00002300, PSE_PPE_DROP(0)); 4062 + mtk_w32(eth, 0x00002300, PSE_PPE_DROP(1)); 4063 + mtk_w32(eth, 0x00002300, PSE_PPE_DROP(2)); 4064 4065 /* GDM and CDM Threshold */ 4066 + mtk_w32(eth, 0x08000707, MTK_CDMW0_THRES); 4067 mtk_w32(eth, 0x00000077, MTK_CDMW1_THRES); 4068 4069 /* Disable GDM1 RX CRC stripping */ ··· 4064 mtk_w32(eth, 0x00000300, PSE_DROP_CFG); 4065 4066 /* PSE should drop packets to port 8/9 on WDMA Rx ring full */ 4067 + mtk_w32(eth, 0x00000300, PSE_PPE_DROP(0)); 4068 4069 /* PSE Free Queue Flow Control */ 4070 mtk_w32(eth, 0x01fa01f4, PSE_FQFC_CFG2);
+9 -1
drivers/net/ethernet/mediatek/mtk_eth_soc.h
··· 151 #define PSE_FQFC_CFG1 0x100 152 #define PSE_FQFC_CFG2 0x104 153 #define PSE_DROP_CFG 0x108 154 - #define PSE_PPE0_DROP 0x110 155 156 /* PSE Input Queue Reservation Register*/ 157 #define PSE_IQ_REV(x) (0x140 + (((x) - 1) << 2))
··· 151 #define PSE_FQFC_CFG1 0x100 152 #define PSE_FQFC_CFG2 0x104 153 #define PSE_DROP_CFG 0x108 154 + #define PSE_PPE_DROP(x) (0x110 + ((x) * 0x4)) 155 + 156 + /* PSE Last FreeQ Page Request Control */ 157 + #define PSE_DUMY_REQ 0x10C 158 + /* PSE_DUMY_REQ is not a typo but actually called like that also in 159 + * MediaTek's datasheet 160 + */ 161 + #define PSE_DUMMY_WORK_GDM(x) BIT(16 + (x)) 162 + #define DUMMY_PAGE_THR 0x1 163 164 /* PSE Input Queue Reservation Register*/ 165 #define PSE_IQ_REV(x) (0x140 + (((x) - 1) << 2))
+18 -8
drivers/net/ethernet/mellanox/mlx5/core/lib/fs_ttc.c
··· 637 bool use_l4_type; 638 int err; 639 640 - ttc = kvzalloc(sizeof(*ttc), GFP_KERNEL); 641 - if (!ttc) 642 - return ERR_PTR(-ENOMEM); 643 - 644 switch (params->ns_type) { 645 case MLX5_FLOW_NAMESPACE_PORT_SEL: 646 use_l4_type = MLX5_CAP_GEN_2(dev, pcc_ifa2) && ··· 650 return ERR_PTR(-EINVAL); 651 } 652 653 ns = mlx5_get_flow_namespace(dev, params->ns_type); 654 groups = use_l4_type ? &inner_ttc_groups[TTC_GROUPS_USE_L4_TYPE] : 655 &inner_ttc_groups[TTC_GROUPS_DEFAULT]; 656 ··· 715 bool use_l4_type; 716 int err; 717 718 - ttc = kvzalloc(sizeof(*ttc), GFP_KERNEL); 719 - if (!ttc) 720 - return ERR_PTR(-ENOMEM); 721 - 722 switch (params->ns_type) { 723 case MLX5_FLOW_NAMESPACE_PORT_SEL: 724 use_l4_type = MLX5_CAP_GEN_2(dev, pcc_ifa2) && ··· 728 return ERR_PTR(-EINVAL); 729 } 730 731 ns = mlx5_get_flow_namespace(dev, params->ns_type); 732 groups = use_l4_type ? &ttc_groups[TTC_GROUPS_USE_L4_TYPE] : 733 &ttc_groups[TTC_GROUPS_DEFAULT]; 734
··· 637 bool use_l4_type; 638 int err; 639 640 switch (params->ns_type) { 641 case MLX5_FLOW_NAMESPACE_PORT_SEL: 642 use_l4_type = MLX5_CAP_GEN_2(dev, pcc_ifa2) && ··· 654 return ERR_PTR(-EINVAL); 655 } 656 657 + ttc = kvzalloc(sizeof(*ttc), GFP_KERNEL); 658 + if (!ttc) 659 + return ERR_PTR(-ENOMEM); 660 + 661 ns = mlx5_get_flow_namespace(dev, params->ns_type); 662 + if (!ns) { 663 + kvfree(ttc); 664 + return ERR_PTR(-EOPNOTSUPP); 665 + } 666 + 667 groups = use_l4_type ? &inner_ttc_groups[TTC_GROUPS_USE_L4_TYPE] : 668 &inner_ttc_groups[TTC_GROUPS_DEFAULT]; 669 ··· 710 bool use_l4_type; 711 int err; 712 713 switch (params->ns_type) { 714 case MLX5_FLOW_NAMESPACE_PORT_SEL: 715 use_l4_type = MLX5_CAP_GEN_2(dev, pcc_ifa2) && ··· 727 return ERR_PTR(-EINVAL); 728 } 729 730 + ttc = kvzalloc(sizeof(*ttc), GFP_KERNEL); 731 + if (!ttc) 732 + return ERR_PTR(-ENOMEM); 733 + 734 ns = mlx5_get_flow_namespace(dev, params->ns_type); 735 + if (!ns) { 736 + kvfree(ttc); 737 + return ERR_PTR(-EOPNOTSUPP); 738 + } 739 + 740 groups = use_l4_type ? &ttc_groups[TTC_GROUPS_USE_L4_TYPE] : 741 &ttc_groups[TTC_GROUPS_DEFAULT]; 742
+2 -2
drivers/net/ethernet/stmicro/stmmac/dwmac1000.h
··· 320 321 /* PTP and timestamping registers */ 322 323 - #define GMAC3_X_ATSNS GENMASK(19, 16) 324 - #define GMAC3_X_ATSNS_SHIFT 16 325 326 #define GMAC_PTP_TCR_ATSFC BIT(24) 327 #define GMAC_PTP_TCR_ATSEN0 BIT(25)
··· 320 321 /* PTP and timestamping registers */ 322 323 + #define GMAC3_X_ATSNS GENMASK(29, 25) 324 + #define GMAC3_X_ATSNS_SHIFT 25 325 326 #define GMAC_PTP_TCR_ATSFC BIT(24) 327 #define GMAC_PTP_TCR_ATSEN0 BIT(25)
+1 -1
drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
··· 553 u64 ns; 554 555 ns = readl(ptpaddr + GMAC_PTP_ATNR); 556 - ns += readl(ptpaddr + GMAC_PTP_ATSR) * NSEC_PER_SEC; 557 558 *ptp_time = ns; 559 }
··· 553 u64 ns; 554 555 ns = readl(ptpaddr + GMAC_PTP_ATNR); 556 + ns += (u64)readl(ptpaddr + GMAC_PTP_ATSR) * NSEC_PER_SEC; 557 558 *ptp_time = ns; 559 }
+1 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
··· 222 u64 ns; 223 224 ns = readl(ptpaddr + PTP_ATNR); 225 - ns += readl(ptpaddr + PTP_ATSR) * NSEC_PER_SEC; 226 227 *ptp_time = ns; 228 }
··· 222 u64 ns; 223 224 ns = readl(ptpaddr + PTP_ATNR); 225 + ns += (u64)readl(ptpaddr + PTP_ATSR) * NSEC_PER_SEC; 226 227 *ptp_time = ns; 228 }
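Both stmmac hunks above apply the same class of fix: the seconds register is 32-bit, so without the (u64) cast the multiply by NSEC_PER_SEC can be performed in 32-bit arithmetic on 32-bit builds and wrap. A small standalone sketch of the difference, using an explicit 32-bit constant to force the wrap rather than the kernel's NSEC_PER_SEC definition:

#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

int main(void)
{
	uint32_t sec = 5;	/* pretend the ATSR seconds register read 5 */
	uint64_t wrapped, fixed;

	/* 32-bit multiply: 5 * 1e9 wraps modulo 2^32 -> 705032704 */
	wrapped = sec * UINT32_C(1000000000);

	/* widen first, as the patch does: 5000000000 */
	fixed = (uint64_t)sec * NSEC_PER_SEC;

	printf("wrapped=%llu fixed=%llu\n",
	       (unsigned long long)wrapped, (unsigned long long)fixed);
	return 0;
}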
+1 -1
drivers/net/phy/dp83822.c
··· 730 return phydev->drv->config_init(phydev); 731 } 732 733 - #ifdef CONFIG_OF_MDIO 734 static const u32 tx_amplitude_100base_tx_gain[] = { 735 80, 82, 83, 85, 87, 88, 90, 92, 736 93, 95, 97, 98, 100, 102, 103, 105,
··· 730 return phydev->drv->config_init(phydev); 731 } 732 733 + #if IS_ENABLED(CONFIG_OF_MDIO) 734 static const u32 tx_amplitude_100base_tx_gain[] = { 735 80, 82, 83, 85, 87, 88, 90, 92, 736 93, 95, 97, 98, 100, 102, 103, 105,
+3 -43
drivers/net/phy/microchip.c
··· 37 return __phy_write(phydev, LAN88XX_EXT_PAGE_ACCESS, page); 38 } 39 40 - static int lan88xx_phy_config_intr(struct phy_device *phydev) 41 - { 42 - int rc; 43 - 44 - if (phydev->interrupts == PHY_INTERRUPT_ENABLED) { 45 - /* unmask all source and clear them before enable */ 46 - rc = phy_write(phydev, LAN88XX_INT_MASK, 0x7FFF); 47 - rc = phy_read(phydev, LAN88XX_INT_STS); 48 - rc = phy_write(phydev, LAN88XX_INT_MASK, 49 - LAN88XX_INT_MASK_MDINTPIN_EN_ | 50 - LAN88XX_INT_MASK_LINK_CHANGE_); 51 - } else { 52 - rc = phy_write(phydev, LAN88XX_INT_MASK, 0); 53 - if (rc) 54 - return rc; 55 - 56 - /* Ack interrupts after they have been disabled */ 57 - rc = phy_read(phydev, LAN88XX_INT_STS); 58 - } 59 - 60 - return rc < 0 ? rc : 0; 61 - } 62 - 63 - static irqreturn_t lan88xx_handle_interrupt(struct phy_device *phydev) 64 - { 65 - int irq_status; 66 - 67 - irq_status = phy_read(phydev, LAN88XX_INT_STS); 68 - if (irq_status < 0) { 69 - phy_error(phydev); 70 - return IRQ_NONE; 71 - } 72 - 73 - if (!(irq_status & LAN88XX_INT_STS_LINK_CHANGE_)) 74 - return IRQ_NONE; 75 - 76 - phy_trigger_machine(phydev); 77 - 78 - return IRQ_HANDLED; 79 - } 80 - 81 static int lan88xx_suspend(struct phy_device *phydev) 82 { 83 struct lan88xx_priv *priv = phydev->priv; ··· 487 .config_aneg = lan88xx_config_aneg, 488 .link_change_notify = lan88xx_link_change_notify, 489 490 - .config_intr = lan88xx_phy_config_intr, 491 - .handle_interrupt = lan88xx_handle_interrupt, 492 493 .suspend = lan88xx_suspend, 494 .resume = genphy_resume,
··· 37 return __phy_write(phydev, LAN88XX_EXT_PAGE_ACCESS, page); 38 } 39 40 static int lan88xx_suspend(struct phy_device *phydev) 41 { 42 struct lan88xx_priv *priv = phydev->priv; ··· 528 .config_aneg = lan88xx_config_aneg, 529 .link_change_notify = lan88xx_link_change_notify, 530 531 + /* Interrupt handling is broken, do not define related 532 + * functions to force polling. 533 + */ 534 535 .suspend = lan88xx_suspend, 536 .resume = genphy_resume,
+13 -10
drivers/net/phy/phy_led_triggers.c
··· 93 if (!phy->phy_num_led_triggers) 94 return 0; 95 96 - phy->led_link_trigger = devm_kzalloc(&phy->mdio.dev, 97 - sizeof(*phy->led_link_trigger), 98 - GFP_KERNEL); 99 if (!phy->led_link_trigger) { 100 err = -ENOMEM; 101 goto out_clear; ··· 104 if (err) 105 goto out_free_link; 106 107 - phy->phy_led_triggers = devm_kcalloc(&phy->mdio.dev, 108 - phy->phy_num_led_triggers, 109 - sizeof(struct phy_led_trigger), 110 - GFP_KERNEL); 111 if (!phy->phy_led_triggers) { 112 err = -ENOMEM; 113 goto out_unreg_link; ··· 127 out_unreg: 128 while (i--) 129 phy_led_trigger_unregister(&phy->phy_led_triggers[i]); 130 - devm_kfree(&phy->mdio.dev, phy->phy_led_triggers); 131 out_unreg_link: 132 phy_led_trigger_unregister(phy->led_link_trigger); 133 out_free_link: 134 - devm_kfree(&phy->mdio.dev, phy->led_link_trigger); 135 phy->led_link_trigger = NULL; 136 out_clear: 137 phy->phy_num_led_triggers = 0; ··· 145 146 for (i = 0; i < phy->phy_num_led_triggers; i++) 147 phy_led_trigger_unregister(&phy->phy_led_triggers[i]); 148 149 - if (phy->led_link_trigger) 150 phy_led_trigger_unregister(phy->led_link_trigger); 151 } 152 EXPORT_SYMBOL_GPL(phy_led_triggers_unregister);
··· 93 if (!phy->phy_num_led_triggers) 94 return 0; 95 96 + phy->led_link_trigger = kzalloc(sizeof(*phy->led_link_trigger), 97 + GFP_KERNEL); 98 if (!phy->led_link_trigger) { 99 err = -ENOMEM; 100 goto out_clear; ··· 105 if (err) 106 goto out_free_link; 107 108 + phy->phy_led_triggers = kcalloc(phy->phy_num_led_triggers, 109 + sizeof(struct phy_led_trigger), 110 + GFP_KERNEL); 111 if (!phy->phy_led_triggers) { 112 err = -ENOMEM; 113 goto out_unreg_link; ··· 129 out_unreg: 130 while (i--) 131 phy_led_trigger_unregister(&phy->phy_led_triggers[i]); 132 + kfree(phy->phy_led_triggers); 133 out_unreg_link: 134 phy_led_trigger_unregister(phy->led_link_trigger); 135 out_free_link: 136 + kfree(phy->led_link_trigger); 137 phy->led_link_trigger = NULL; 138 out_clear: 139 phy->phy_num_led_triggers = 0; ··· 147 148 for (i = 0; i < phy->phy_num_led_triggers; i++) 149 phy_led_trigger_unregister(&phy->phy_led_triggers[i]); 150 + kfree(phy->phy_led_triggers); 151 + phy->phy_led_triggers = NULL; 152 153 + if (phy->led_link_trigger) { 154 phy_led_trigger_unregister(phy->led_link_trigger); 155 + kfree(phy->led_link_trigger); 156 + phy->led_link_trigger = NULL; 157 + } 158 } 159 EXPORT_SYMBOL_GPL(phy_led_triggers_unregister);
+22 -16
drivers/net/phy/phylink.c
··· 81 unsigned int pcs_state; 82 83 bool link_failed; 84 bool major_config_failed; 85 bool mac_supports_eee_ops; 86 bool mac_supports_eee; ··· 2546 /* Stop the resolver bringing the link up */ 2547 __set_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state); 2548 2549 - /* Disable the carrier, to prevent transmit timeouts, 2550 - * but one would hope all packets have been sent. This 2551 - * also means phylink_resolve() will do nothing. 2552 - */ 2553 - if (pl->netdev) 2554 - netif_carrier_off(pl->netdev); 2555 - else 2556 pl->old_link_state = false; 2557 2558 /* We do not call mac_link_down() here as we want the 2559 * link to remain up to receive the WoL packets. ··· 2606 if (test_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state)) { 2607 /* Wake-on-Lan enabled, MAC handling */ 2608 2609 - /* Call mac_link_down() so we keep the overall state balanced. 2610 - * Do this under the state_mutex lock for consistency. This 2611 - * will cause a "Link Down" message to be printed during 2612 - * resume, which is harmless - the true link state will be 2613 - * printed when we run a resolve. 2614 - */ 2615 - mutex_lock(&pl->state_mutex); 2616 - phylink_link_down(pl); 2617 - mutex_unlock(&pl->state_mutex); 2618 2619 /* Re-apply the link parameters so that all the settings get 2620 * restored to the MAC.
··· 81 unsigned int pcs_state; 82 83 bool link_failed; 84 + bool suspend_link_up; 85 bool major_config_failed; 86 bool mac_supports_eee_ops; 87 bool mac_supports_eee; ··· 2545 /* Stop the resolver bringing the link up */ 2546 __set_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state); 2547 2548 + pl->suspend_link_up = phylink_link_is_up(pl); 2549 + if (pl->suspend_link_up) { 2550 + /* Disable the carrier, to prevent transmit timeouts, 2551 + * but one would hope all packets have been sent. This 2552 + * also means phylink_resolve() will do nothing. 2553 + */ 2554 + if (pl->netdev) 2555 + netif_carrier_off(pl->netdev); 2556 pl->old_link_state = false; 2557 + } 2558 2559 /* We do not call mac_link_down() here as we want the 2560 * link to remain up to receive the WoL packets. ··· 2603 if (test_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state)) { 2604 /* Wake-on-Lan enabled, MAC handling */ 2605 2606 + if (pl->suspend_link_up) { 2607 + /* Call mac_link_down() so we keep the overall state 2608 + * balanced. Do this under the state_mutex lock for 2609 + * consistency. This will cause a "Link Down" message 2610 + * to be printed during resume, which is harmless - 2611 + * the true link state will be printed when we run a 2612 + * resolve. 2613 + */ 2614 + mutex_lock(&pl->state_mutex); 2615 + phylink_link_down(pl); 2616 + mutex_unlock(&pl->state_mutex); 2617 + } 2618 2619 /* Re-apply the link parameters so that all the settings get 2620 * restored to the MAC.
+57 -12
drivers/net/virtio_net.c
··· 3342 return NETDEV_TX_OK; 3343 } 3344 3345 - static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq) 3346 { 3347 bool running = netif_running(vi->dev); 3348 ··· 3353 } 3354 } 3355 3356 - static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq) 3357 { 3358 bool running = netif_running(vi->dev); 3359 3360 - if (!try_fill_recv(vi, rq, GFP_KERNEL)) 3361 schedule_delayed_work(&vi->refill, 0); 3362 3363 if (running) 3364 virtnet_napi_enable(rq); 3365 } 3366 3367 static int virtnet_rx_resize(struct virtnet_info *vi, ··· 6006 if (prog) 6007 bpf_prog_add(prog, vi->max_queue_pairs - 1); 6008 6009 /* Make sure NAPI is not using any XDP TX queues for RX. */ 6010 if (netif_running(dev)) { 6011 - for (i = 0; i < vi->max_queue_pairs; i++) { 6012 - virtnet_napi_disable(&vi->rq[i]); 6013 virtnet_napi_tx_disable(&vi->sq[i]); 6014 - } 6015 } 6016 6017 if (!prog) { ··· 6043 vi->xdp_enabled = false; 6044 } 6045 6046 for (i = 0; i < vi->max_queue_pairs; i++) { 6047 if (old_prog) 6048 bpf_prog_put(old_prog); 6049 - if (netif_running(dev)) { 6050 - virtnet_napi_enable(&vi->rq[i]); 6051 virtnet_napi_tx_enable(&vi->sq[i]); 6052 - } 6053 } 6054 6055 return 0; ··· 6060 rcu_assign_pointer(vi->rq[i].xdp_prog, old_prog); 6061 } 6062 6063 if (netif_running(dev)) { 6064 - for (i = 0; i < vi->max_queue_pairs; i++) { 6065 - virtnet_napi_enable(&vi->rq[i]); 6066 virtnet_napi_tx_enable(&vi->sq[i]); 6067 - } 6068 } 6069 if (prog) 6070 bpf_prog_sub(prog, vi->max_queue_pairs - 1);
··· 3342 return NETDEV_TX_OK; 3343 } 3344 3345 + static void __virtnet_rx_pause(struct virtnet_info *vi, 3346 + struct receive_queue *rq) 3347 { 3348 bool running = netif_running(vi->dev); 3349 ··· 3352 } 3353 } 3354 3355 + static void virtnet_rx_pause_all(struct virtnet_info *vi) 3356 + { 3357 + int i; 3358 + 3359 + /* 3360 + * Make sure refill_work does not run concurrently to 3361 + * avoid napi_disable race which leads to deadlock. 3362 + */ 3363 + disable_delayed_refill(vi); 3364 + cancel_delayed_work_sync(&vi->refill); 3365 + for (i = 0; i < vi->max_queue_pairs; i++) 3366 + __virtnet_rx_pause(vi, &vi->rq[i]); 3367 + } 3368 + 3369 + static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq) 3370 + { 3371 + /* 3372 + * Make sure refill_work does not run concurrently to 3373 + * avoid napi_disable race which leads to deadlock. 3374 + */ 3375 + disable_delayed_refill(vi); 3376 + cancel_delayed_work_sync(&vi->refill); 3377 + __virtnet_rx_pause(vi, rq); 3378 + } 3379 + 3380 + static void __virtnet_rx_resume(struct virtnet_info *vi, 3381 + struct receive_queue *rq, 3382 + bool refill) 3383 { 3384 bool running = netif_running(vi->dev); 3385 3386 + if (refill && !try_fill_recv(vi, rq, GFP_KERNEL)) 3387 schedule_delayed_work(&vi->refill, 0); 3388 3389 if (running) 3390 virtnet_napi_enable(rq); 3391 + } 3392 + 3393 + static void virtnet_rx_resume_all(struct virtnet_info *vi) 3394 + { 3395 + int i; 3396 + 3397 + enable_delayed_refill(vi); 3398 + for (i = 0; i < vi->max_queue_pairs; i++) { 3399 + if (i < vi->curr_queue_pairs) 3400 + __virtnet_rx_resume(vi, &vi->rq[i], true); 3401 + else 3402 + __virtnet_rx_resume(vi, &vi->rq[i], false); 3403 + } 3404 + } 3405 + 3406 + static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq) 3407 + { 3408 + enable_delayed_refill(vi); 3409 + __virtnet_rx_resume(vi, rq, true); 3410 } 3411 3412 static int virtnet_rx_resize(struct virtnet_info *vi, ··· 5959 if (prog) 5960 bpf_prog_add(prog, vi->max_queue_pairs - 1); 5961 5962 + virtnet_rx_pause_all(vi); 5963 + 5964 /* Make sure NAPI is not using any XDP TX queues for RX. */ 5965 if (netif_running(dev)) { 5966 + for (i = 0; i < vi->max_queue_pairs; i++) 5967 virtnet_napi_tx_disable(&vi->sq[i]); 5968 } 5969 5970 if (!prog) { ··· 5996 vi->xdp_enabled = false; 5997 } 5998 5999 + virtnet_rx_resume_all(vi); 6000 for (i = 0; i < vi->max_queue_pairs; i++) { 6001 if (old_prog) 6002 bpf_prog_put(old_prog); 6003 + if (netif_running(dev)) 6004 virtnet_napi_tx_enable(&vi->sq[i]); 6005 } 6006 6007 return 0; ··· 6014 rcu_assign_pointer(vi->rq[i].xdp_prog, old_prog); 6015 } 6016 6017 + virtnet_rx_resume_all(vi); 6018 if (netif_running(dev)) { 6019 + for (i = 0; i < vi->max_queue_pairs; i++) 6020 virtnet_napi_tx_enable(&vi->sq[i]); 6021 } 6022 if (prog) 6023 bpf_prog_sub(prog, vi->max_queue_pairs - 1);
+13 -6
drivers/net/xen-netfront.c
··· 985 act = bpf_prog_run_xdp(prog, xdp); 986 switch (act) { 987 case XDP_TX: 988 - get_page(pdata); 989 xdpf = xdp_convert_buff_to_frame(xdp); 990 - err = xennet_xdp_xmit(queue->info->netdev, 1, &xdpf, 0); 991 - if (unlikely(!err)) 992 - xdp_return_frame_rx_napi(xdpf); 993 - else if (unlikely(err < 0)) 994 trace_xdp_exception(queue->info->netdev, prog, act); 995 break; 996 case XDP_REDIRECT: 997 get_page(pdata); 998 err = xdp_do_redirect(queue->info->netdev, xdp, prog); 999 *need_xdp_flush = true; 1000 - if (unlikely(err)) 1001 trace_xdp_exception(queue->info->netdev, prog, act); 1002 break; 1003 case XDP_PASS: 1004 case XDP_DROP:
··· 985 act = bpf_prog_run_xdp(prog, xdp); 986 switch (act) { 987 case XDP_TX: 988 xdpf = xdp_convert_buff_to_frame(xdp); 989 + if (unlikely(!xdpf)) { 990 trace_xdp_exception(queue->info->netdev, prog, act); 991 + break; 992 + } 993 + get_page(pdata); 994 + err = xennet_xdp_xmit(queue->info->netdev, 1, &xdpf, 0); 995 + if (unlikely(err <= 0)) { 996 + if (err < 0) 997 + trace_xdp_exception(queue->info->netdev, prog, act); 998 + xdp_return_frame_rx_napi(xdpf); 999 + } 1000 break; 1001 case XDP_REDIRECT: 1002 get_page(pdata); 1003 err = xdp_do_redirect(queue->info->netdev, xdp, prog); 1004 *need_xdp_flush = true; 1005 + if (unlikely(err)) { 1006 trace_xdp_exception(queue->info->netdev, prog, act); 1007 + xdp_return_buff(xdp); 1008 + } 1009 break; 1010 case XDP_PASS: 1011 case XDP_DROP:
+3
drivers/nvme/target/core.c
··· 324 325 lockdep_assert_held(&nvmet_config_sem); 326 327 ops = nvmet_transports[port->disc_addr.trtype]; 328 if (!ops) { 329 up_write(&nvmet_config_sem);
··· 324 325 lockdep_assert_held(&nvmet_config_sem); 326 327 + if (port->disc_addr.trtype == NVMF_TRTYPE_MAX) 328 + return -EINVAL; 329 + 330 ops = nvmet_transports[port->disc_addr.trtype]; 331 if (!ops) { 332 up_write(&nvmet_config_sem);
+32 -8
drivers/nvmem/core.c
··· 594 cell->nbits = info->nbits; 595 cell->np = info->np; 596 597 - if (cell->nbits) 598 cell->bytes = DIV_ROUND_UP(cell->nbits + cell->bit_offset, 599 BITS_PER_BYTE); 600 601 if (!IS_ALIGNED(cell->offset, nvmem->stride)) { 602 dev_err(&nvmem->dev, 603 "cell %s unaligned to nvmem stride %d\n", 604 cell->name ?: "<unknown>", nvmem->stride); 605 return -EINVAL; 606 } 607 608 return 0; ··· 851 if (addr && len == (2 * sizeof(u32))) { 852 info.bit_offset = be32_to_cpup(addr++); 853 info.nbits = be32_to_cpup(addr); 854 - if (info.bit_offset >= BITS_PER_BYTE || info.nbits < 1) { 855 dev_err(dev, "nvmem: invalid bits on %pOF\n", child); 856 of_node_put(child); 857 return -EINVAL; ··· 1646 static void nvmem_shift_read_buffer_in_place(struct nvmem_cell_entry *cell, void *buf) 1647 { 1648 u8 *p, *b; 1649 - int i, extra, bit_offset = cell->bit_offset; 1650 1651 p = b = buf; 1652 - if (bit_offset) { 1653 /* First shift */ 1654 - *b++ >>= bit_offset; 1655 1656 /* setup rest of the bytes if any */ 1657 for (i = 1; i < cell->bytes; i++) { 1658 /* Get bits from next byte and shift them towards msb */ 1659 - *p |= *b << (BITS_PER_BYTE - bit_offset); 1660 1661 - p = b; 1662 - *b++ >>= bit_offset; 1663 } 1664 } else { 1665 /* point to the msb */ 1666 p += cell->bytes - 1;
··· 594 cell->nbits = info->nbits; 595 cell->np = info->np; 596 597 + if (cell->nbits) { 598 cell->bytes = DIV_ROUND_UP(cell->nbits + cell->bit_offset, 599 BITS_PER_BYTE); 600 + cell->raw_len = ALIGN(cell->bytes, nvmem->word_size); 601 + } 602 603 if (!IS_ALIGNED(cell->offset, nvmem->stride)) { 604 dev_err(&nvmem->dev, 605 "cell %s unaligned to nvmem stride %d\n", 606 cell->name ?: "<unknown>", nvmem->stride); 607 return -EINVAL; 608 + } 609 + 610 + if (!IS_ALIGNED(cell->raw_len, nvmem->word_size)) { 611 + dev_err(&nvmem->dev, 612 + "cell %s raw len %zd unaligned to nvmem word size %d\n", 613 + cell->name ?: "<unknown>", cell->raw_len, 614 + nvmem->word_size); 615 + 616 + if (info->raw_len) 617 + return -EINVAL; 618 + 619 + cell->raw_len = ALIGN(cell->raw_len, nvmem->word_size); 620 } 621 622 return 0; ··· 837 if (addr && len == (2 * sizeof(u32))) { 838 info.bit_offset = be32_to_cpup(addr++); 839 info.nbits = be32_to_cpup(addr); 840 + if (info.bit_offset >= BITS_PER_BYTE * info.bytes || 841 + info.nbits < 1 || 842 + info.bit_offset + info.nbits > BITS_PER_BYTE * info.bytes) { 843 dev_err(dev, "nvmem: invalid bits on %pOF\n", child); 844 of_node_put(child); 845 return -EINVAL; ··· 1630 static void nvmem_shift_read_buffer_in_place(struct nvmem_cell_entry *cell, void *buf) 1631 { 1632 u8 *p, *b; 1633 + int i, extra, bytes_offset; 1634 + int bit_offset = cell->bit_offset; 1635 1636 p = b = buf; 1637 + 1638 + bytes_offset = bit_offset / BITS_PER_BYTE; 1639 + b += bytes_offset; 1640 + bit_offset %= BITS_PER_BYTE; 1641 + 1642 + if (bit_offset % BITS_PER_BYTE) { 1643 /* First shift */ 1644 + *p = *b++ >> bit_offset; 1645 1646 /* setup rest of the bytes if any */ 1647 for (i = 1; i < cell->bytes; i++) { 1648 /* Get bits from next byte and shift them towards msb */ 1649 + *p++ |= *b << (BITS_PER_BYTE - bit_offset); 1650 1651 + *p = *b++ >> bit_offset; 1652 } 1653 + } else if (p != b) { 1654 + memmove(p, b, cell->bytes - bytes_offset); 1655 + p += cell->bytes - 1; 1656 } else { 1657 /* point to the msb */ 1658 p += cell->bytes - 1;
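Note: the reworked nvmem_shift_read_buffer_in_place() above now copes with bit offsets larger than one byte by skipping whole bytes first and then shifting the remaining bits down. As a rough illustration of that arithmetic only — a standalone sketch, not the kernel helper; shift_buffer() and the sample values are made up — the same in-place shift can be exercised in plain C:

    #include <stdio.h>
    #include <string.h>

    #define BITS_PER_BYTE 8

    /* Shift 'len' bytes of 'buf' right by 'bit_offset' bits, in place:
     * skip whole bytes first, then move the remaining bits down. */
    static void shift_buffer(unsigned char *buf, size_t len, unsigned int bit_offset)
    {
            unsigned int bytes_offset = bit_offset / BITS_PER_BYTE;
            unsigned char *p = buf, *b = buf + bytes_offset;
            size_t i;

            bit_offset %= BITS_PER_BYTE;

            if (bit_offset) {
                    *p = *b++ >> bit_offset;
                    for (i = 1; i < len - bytes_offset; i++) {
                            /* pull the low bits of the next byte into the high bits */
                            *p++ |= *b << (BITS_PER_BYTE - bit_offset);
                            *p = *b++ >> bit_offset;
                    }
            } else if (bytes_offset) {
                    memmove(p, b, len - bytes_offset);
            }
    }

    int main(void)
    {
            /* 0xABC stored starting at bit 10 of a little-endian byte buffer */
            unsigned char buf[4] = { 0x00, 0xf0, 0x2a, 0x00 };

            shift_buffer(buf, sizeof(buf), 10);
            printf("%02x %02x\n", buf[0], buf[1]);  /* prints: bc 0a */
            return 0;
    }

Run as-is this prints "bc 0a": the 12-bit value 0xABC originally stored at bit offset 10 ends up right-aligned at the start of the buffer.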
+20 -6
drivers/nvmem/qfprom.c
··· 321 unsigned int reg, void *_val, size_t bytes) 322 { 323 struct qfprom_priv *priv = context; 324 - u8 *val = _val; 325 - int i = 0, words = bytes; 326 void __iomem *base = priv->qfpcorrected; 327 328 if (read_raw_data && priv->qfpraw) 329 base = priv->qfpraw; 330 331 - while (words--) 332 - *val++ = readb(base + reg + i++); 333 334 return 0; 335 } 336 337 static void qfprom_runtime_disable(void *data) ··· 371 struct nvmem_config econfig = { 372 .name = "qfprom", 373 .add_legacy_fixed_of_cells = true, 374 - .stride = 1, 375 - .word_size = 1, 376 .id = NVMEM_DEVID_AUTO, 377 .reg_read = qfprom_reg_read, 378 }; 379 struct device *dev = &pdev->dev; 380 struct resource *res;
··· 321 unsigned int reg, void *_val, size_t bytes) 322 { 323 struct qfprom_priv *priv = context; 324 + u32 *val = _val; 325 void __iomem *base = priv->qfpcorrected; 326 + int words = DIV_ROUND_UP(bytes, sizeof(u32)); 327 + int i; 328 329 if (read_raw_data && priv->qfpraw) 330 base = priv->qfpraw; 331 332 + for (i = 0; i < words; i++) 333 + *val++ = readl(base + reg + i * sizeof(u32)); 334 335 return 0; 336 + } 337 + 338 + /* Align reads to word boundary */ 339 + static void qfprom_fixup_dt_cell_info(struct nvmem_device *nvmem, 340 + struct nvmem_cell_info *cell) 341 + { 342 + unsigned int byte_offset = cell->offset % sizeof(u32); 343 + 344 + cell->bit_offset += byte_offset * BITS_PER_BYTE; 345 + cell->offset -= byte_offset; 346 + if (byte_offset && !cell->nbits) 347 + cell->nbits = cell->bytes * BITS_PER_BYTE; 348 } 349 350 static void qfprom_runtime_disable(void *data) ··· 358 struct nvmem_config econfig = { 359 .name = "qfprom", 360 .add_legacy_fixed_of_cells = true, 361 + .stride = 4, 362 + .word_size = 4, 363 .id = NVMEM_DEVID_AUTO, 364 .reg_read = qfprom_reg_read, 365 + .fixup_dt_cell_info = qfprom_fixup_dt_cell_info, 366 }; 367 struct device *dev = &pdev->dev; 368 struct resource *res;
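Note: the new qfprom_fixup_dt_cell_info() hook folds a cell's sub-word byte offset into its bit offset so every read starts on a 4-byte boundary, matching the stride/word_size change to 4. A minimal standalone sketch of that fixup arithmetic, assuming a 32-bit word size and made-up cell values (struct cell and align_cell_to_word() are illustrative, not the nvmem API):

    #include <stdio.h>

    #define BITS_PER_BYTE 8
    #define WORD_SIZE     4   /* assumption: 32-bit fuse words, as in the driver */

    struct cell {
            unsigned int offset;     /* byte offset of the cell */
            unsigned int bytes;      /* byte length of the cell */
            unsigned int bit_offset; /* bit offset within the first byte */
            unsigned int nbits;      /* bit length, 0 means whole bytes */
    };

    /* Move the read window back to a word boundary and express the
     * skipped bytes as extra bit offset instead. */
    static void align_cell_to_word(struct cell *c)
    {
            unsigned int byte_offset = c->offset % WORD_SIZE;

            c->bit_offset += byte_offset * BITS_PER_BYTE;
            c->offset -= byte_offset;
            if (byte_offset && !c->nbits)
                    c->nbits = c->bytes * BITS_PER_BYTE;
    }

    int main(void)
    {
            struct cell c = { .offset = 0x12a, .bytes = 2, .bit_offset = 0, .nbits = 0 };

            align_cell_to_word(&c);
            /* prints: offset=0x128 bit_offset=16 nbits=16 */
            printf("offset=0x%x bit_offset=%u nbits=%u\n",
                   c.offset, c.bit_offset, c.nbits);
            return 0;
    }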
+15 -2
drivers/nvmem/rockchip-otp.c
··· 59 #define RK3588_OTPC_AUTO_EN 0x08 60 #define RK3588_OTPC_INT_ST 0x84 61 #define RK3588_OTPC_DOUT0 0x20 62 - #define RK3588_NO_SECURE_OFFSET 0x300 63 #define RK3588_NBYTES 4 64 #define RK3588_BURST_NUM 1 65 #define RK3588_BURST_SHIFT 8 ··· 68 69 struct rockchip_data { 70 int size; 71 const char * const *clks; 72 int num_clks; 73 nvmem_reg_read_t reg_read; ··· 196 addr_start = round_down(offset, RK3588_NBYTES) / RK3588_NBYTES; 197 addr_end = round_up(offset + bytes, RK3588_NBYTES) / RK3588_NBYTES; 198 addr_len = addr_end - addr_start; 199 - addr_start += RK3588_NO_SECURE_OFFSET; 200 201 buf = kzalloc(array_size(addr_len, RK3588_NBYTES), GFP_KERNEL); 202 if (!buf) ··· 274 .reg_read = px30_otp_read, 275 }; 276 277 static const char * const rk3588_otp_clocks[] = { 278 "otp", "apb_pclk", "phy", "arb", 279 }; 280 281 static const struct rockchip_data rk3588_data = { 282 .size = 0x400, 283 .clks = rk3588_otp_clocks, 284 .num_clks = ARRAY_SIZE(rk3588_otp_clocks), 285 .reg_read = rk3588_otp_read, ··· 302 { 303 .compatible = "rockchip,rk3308-otp", 304 .data = &px30_data, 305 }, 306 { 307 .compatible = "rockchip,rk3588-otp",
··· 59 #define RK3588_OTPC_AUTO_EN 0x08 60 #define RK3588_OTPC_INT_ST 0x84 61 #define RK3588_OTPC_DOUT0 0x20 62 #define RK3588_NBYTES 4 63 #define RK3588_BURST_NUM 1 64 #define RK3588_BURST_SHIFT 8 ··· 69 70 struct rockchip_data { 71 int size; 72 + int read_offset; 73 const char * const *clks; 74 int num_clks; 75 nvmem_reg_read_t reg_read; ··· 196 addr_start = round_down(offset, RK3588_NBYTES) / RK3588_NBYTES; 197 addr_end = round_up(offset + bytes, RK3588_NBYTES) / RK3588_NBYTES; 198 addr_len = addr_end - addr_start; 199 + addr_start += otp->data->read_offset / RK3588_NBYTES; 200 201 buf = kzalloc(array_size(addr_len, RK3588_NBYTES), GFP_KERNEL); 202 if (!buf) ··· 274 .reg_read = px30_otp_read, 275 }; 276 277 + static const struct rockchip_data rk3576_data = { 278 + .size = 0x100, 279 + .read_offset = 0x700, 280 + .clks = px30_otp_clocks, 281 + .num_clks = ARRAY_SIZE(px30_otp_clocks), 282 + .reg_read = rk3588_otp_read, 283 + }; 284 + 285 static const char * const rk3588_otp_clocks[] = { 286 "otp", "apb_pclk", "phy", "arb", 287 }; 288 289 static const struct rockchip_data rk3588_data = { 290 .size = 0x400, 291 + .read_offset = 0xc00, 292 .clks = rk3588_otp_clocks, 293 .num_clks = ARRAY_SIZE(rk3588_otp_clocks), 294 .reg_read = rk3588_otp_read, ··· 293 { 294 .compatible = "rockchip,rk3308-otp", 295 .data = &px30_data, 296 + }, 297 + { 298 + .compatible = "rockchip,rk3576-otp", 299 + .data = &rk3576_data, 300 }, 301 { 302 .compatible = "rockchip,rk3588-otp",
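Note: rk3588_otp_read() builds a word-aligned window from the requested byte range and then shifts it by the per-SoC read_offset (0x700 for the new rk3576 entry, 0xc00 for rk3588) in units of 4-byte words. A small standalone sketch of just that address math (otp_window() and the sample request are invented for illustration):

    #include <stdio.h>

    #define NBYTES 4   /* OTP words are 4 bytes wide */

    static void otp_window(unsigned int offset, unsigned int bytes,
                           unsigned int read_offset)
    {
            unsigned int addr_start = offset / NBYTES;                      /* round down */
            unsigned int addr_end = (offset + bytes + NBYTES - 1) / NBYTES; /* round up */
            unsigned int addr_len = addr_end - addr_start;

            addr_start += read_offset / NBYTES;   /* per-SoC non-secure region */

            printf("start word %u, %u words, %u bytes read\n",
                   addr_start, addr_len, addr_len * NBYTES);
    }

    int main(void)
    {
            /* read 5 bytes starting at byte 6, with the rk3576-style 0x700 offset */
            otp_window(6, 5, 0x700);   /* prints: start word 449, 2 words, 8 bytes read */
            return 0;
    }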
+4
drivers/pci/setup-bus.c
··· 187 panic("%s: kzalloc() failed!\n", __func__); 188 tmp->res = r; 189 tmp->dev = dev; 190 191 /* Fallback is smallest one or list is empty */ 192 n = head; ··· 548 pci_dbg(dev, "%s %pR: releasing\n", res_name, res); 549 550 release_resource(res); 551 } 552 /* Restore start/end/flags from saved list */ 553 list_for_each_entry(save_res, &save_head, list)
··· 187 panic("%s: kzalloc() failed!\n", __func__); 188 tmp->res = r; 189 tmp->dev = dev; 190 + tmp->start = r->start; 191 + tmp->end = r->end; 192 + tmp->flags = r->flags; 193 194 /* Fallback is smallest one or list is empty */ 195 n = head; ··· 545 pci_dbg(dev, "%s %pR: releasing\n", res_name, res); 546 547 release_resource(res); 548 + restore_dev_resource(dev_res); 549 } 550 /* Restore start/end/flags from saved list */ 551 list_for_each_entry(save_res, &save_head, list)
+1 -1
drivers/pps/generators/pps_gen_tio.c
··· 230 hrtimer_setup(&tio->timer, hrtimer_callback, CLOCK_REALTIME, 231 HRTIMER_MODE_ABS); 232 spin_lock_init(&tio->lock); 233 - platform_set_drvdata(pdev, &tio); 234 235 return 0; 236 }
··· 230 hrtimer_setup(&tio->timer, hrtimer_callback, CLOCK_REALTIME, 231 HRTIMER_MODE_ABS); 232 spin_lock_init(&tio->lock); 233 + platform_set_drvdata(pdev, tio); 234 235 return 0; 236 }
+7 -1
drivers/scsi/mpi3mr/mpi3mr_fw.c
··· 174 char *desc = NULL; 175 u16 event; 176 177 event = event_reply->event; 178 179 switch (event) { ··· 454 return 0; 455 } 456 457 reply_desc = (struct mpi3_default_reply_descriptor *)mrioc->admin_reply_base + 458 admin_reply_ci; 459 ··· 569 WRITE_ONCE(op_req_q->ci, le16_to_cpu(reply_desc->request_queue_ci)); 570 mpi3mr_process_op_reply_desc(mrioc, reply_desc, &reply_dma, 571 reply_qidx); 572 - atomic_dec(&op_reply_q->pend_ios); 573 if (reply_dma) 574 mpi3mr_repost_reply_buf(mrioc, reply_dma); 575 num_op_reply++; ··· 2929 mrioc->admin_reply_ci = 0; 2930 mrioc->admin_reply_ephase = 1; 2931 atomic_set(&mrioc->admin_reply_q_in_use, 0); 2932 2933 if (!mrioc->admin_req_base) { 2934 mrioc->admin_req_base = dma_alloc_coherent(&mrioc->pdev->dev, ··· 4658 if (mrioc->admin_reply_base) 4659 memset(mrioc->admin_reply_base, 0, mrioc->admin_reply_q_sz); 4660 atomic_set(&mrioc->admin_reply_q_in_use, 0); 4661 4662 if (mrioc->init_cmds.reply) { 4663 memset(mrioc->init_cmds.reply, 0, sizeof(*mrioc->init_cmds.reply));
··· 174 char *desc = NULL; 175 u16 event; 176 177 + if (!(mrioc->logging_level & MPI3_DEBUG_EVENT)) 178 + return; 179 + 180 event = event_reply->event; 181 182 switch (event) { ··· 451 return 0; 452 } 453 454 + atomic_set(&mrioc->admin_pend_isr, 0); 455 reply_desc = (struct mpi3_default_reply_descriptor *)mrioc->admin_reply_base + 456 admin_reply_ci; 457 ··· 565 WRITE_ONCE(op_req_q->ci, le16_to_cpu(reply_desc->request_queue_ci)); 566 mpi3mr_process_op_reply_desc(mrioc, reply_desc, &reply_dma, 567 reply_qidx); 568 + 569 if (reply_dma) 570 mpi3mr_repost_reply_buf(mrioc, reply_dma); 571 num_op_reply++; ··· 2925 mrioc->admin_reply_ci = 0; 2926 mrioc->admin_reply_ephase = 1; 2927 atomic_set(&mrioc->admin_reply_q_in_use, 0); 2928 + atomic_set(&mrioc->admin_pend_isr, 0); 2929 2930 if (!mrioc->admin_req_base) { 2931 mrioc->admin_req_base = dma_alloc_coherent(&mrioc->pdev->dev, ··· 4653 if (mrioc->admin_reply_base) 4654 memset(mrioc->admin_reply_base, 0, mrioc->admin_reply_q_sz); 4655 atomic_set(&mrioc->admin_reply_q_in_use, 0); 4656 + atomic_set(&mrioc->admin_pend_isr, 0); 4657 4658 if (mrioc->init_cmds.reply) { 4659 memset(mrioc->init_cmds.reply, 0, sizeof(*mrioc->init_cmds.reply));
+24 -12
drivers/scsi/scsi.c
··· 707 */ 708 int scsi_cdl_enable(struct scsi_device *sdev, bool enable) 709 { 710 - struct scsi_mode_data data; 711 - struct scsi_sense_hdr sshdr; 712 - struct scsi_vpd *vpd; 713 - bool is_ata = false; 714 char buf[64]; 715 int ret; 716 717 if (!sdev->cdl_supported) 718 return -EOPNOTSUPP; 719 720 rcu_read_lock(); 721 - vpd = rcu_dereference(sdev->vpd_pg89); 722 - if (vpd) 723 - is_ata = true; 724 rcu_read_unlock(); 725 726 /* 727 * For ATA devices, CDL needs to be enabled with a SET FEATURES command. 728 */ 729 if (is_ata) { 730 char *buf_data; 731 int len; 732 ··· 732 if (ret) 733 return -EINVAL; 734 735 - /* Enable CDL using the ATA feature page */ 736 len = min_t(size_t, sizeof(buf), 737 data.length - data.header_length - 738 data.block_descriptor_length); 739 buf_data = buf + data.header_length + 740 data.block_descriptor_length; 741 - if (enable) 742 - buf_data[4] = 0x02; 743 - else 744 - buf_data[4] = 0; 745 746 ret = scsi_mode_select(sdev, 1, 0, buf_data, len, 5 * HZ, 3, 747 &data, &sshdr); ··· 767 } 768 } 769 770 sdev->cdl_enable = enable; 771 772 return 0;
··· 707 */ 708 int scsi_cdl_enable(struct scsi_device *sdev, bool enable) 709 { 710 char buf[64]; 711 + bool is_ata; 712 int ret; 713 714 if (!sdev->cdl_supported) 715 return -EOPNOTSUPP; 716 717 rcu_read_lock(); 718 + is_ata = rcu_dereference(sdev->vpd_pg89); 719 rcu_read_unlock(); 720 721 /* 722 * For ATA devices, CDL needs to be enabled with a SET FEATURES command. 723 */ 724 if (is_ata) { 725 + struct scsi_mode_data data; 726 + struct scsi_sense_hdr sshdr; 727 char *buf_data; 728 int len; 729 ··· 735 if (ret) 736 return -EINVAL; 737 738 + /* Enable or disable CDL using the ATA feature page */ 739 len = min_t(size_t, sizeof(buf), 740 data.length - data.header_length - 741 data.block_descriptor_length); 742 buf_data = buf + data.header_length + 743 data.block_descriptor_length; 744 + 745 + /* 746 + * If we want to enable CDL and CDL is already enabled on the 747 + * device, do nothing. This avoids needlessly resetting the CDL 748 + * statistics on the device as that is implied by the CDL enable 749 + * action. Similar to this, there is no need to do anything if 750 + * we want to disable CDL and CDL is already disabled. 751 + */ 752 + if (enable) { 753 + if ((buf_data[4] & 0x03) == 0x02) 754 + goto out; 755 + buf_data[4] &= ~0x03; 756 + buf_data[4] |= 0x02; 757 + } else { 758 + if ((buf_data[4] & 0x03) == 0x00) 759 + goto out; 760 + buf_data[4] &= ~0x03; 761 + } 762 763 ret = scsi_mode_select(sdev, 1, 0, buf_data, len, 5 * HZ, 3, 764 &data, &sshdr); ··· 756 } 757 } 758 759 + out: 760 sdev->cdl_enable = enable; 761 762 return 0;
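Note: scsi_cdl_enable() now inspects the two-bit enable field in byte 4 of the ATA feature mode page and skips the MODE SELECT when the requested state is already in effect, so the device's CDL statistics are not reset needlessly. A standalone sketch of that bit handling on a plain byte, using the same 0x02/0x00 encoding as the hunk (set_cdl_enable() is illustrative, not a kernel function):

    #include <stdbool.h>
    #include <stdio.h>

    /* Returns true if the byte had to change, false if it already matched. */
    static bool set_cdl_enable(unsigned char *byte, bool enable)
    {
            unsigned char want = enable ? 0x02 : 0x00;

            if ((*byte & 0x03) == want)
                    return false;       /* nothing to do, keep the statistics */

            *byte &= ~0x03;             /* clear the two-bit field */
            *byte |= want;              /* then set the requested state */
            return true;
    }

    int main(void)
    {
            unsigned char b = 0xf0;     /* upper bits unrelated to CDL */

            printf("changed=%d byte=0x%02x\n", set_cdl_enable(&b, true), b);  /* 1, 0xf2 */
            printf("changed=%d byte=0x%02x\n", set_cdl_enable(&b, true), b);  /* 0, 0xf2 */
            printf("changed=%d byte=0x%02x\n", set_cdl_enable(&b, false), b); /* 1, 0xf0 */
            return 0;
    }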
+5 -1
drivers/scsi/scsi_lib.c
··· 1253 */ 1254 static void scsi_cleanup_rq(struct request *rq) 1255 { 1256 if (rq->rq_flags & RQF_DONTPREP) { 1257 - scsi_mq_uninit_cmd(blk_mq_rq_to_pdu(rq)); 1258 rq->rq_flags &= ~RQF_DONTPREP; 1259 } 1260 }
··· 1253 */ 1254 static void scsi_cleanup_rq(struct request *rq) 1255 { 1256 + struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq); 1257 + 1258 + cmd->flags = 0; 1259 + 1260 if (rq->rq_flags & RQF_DONTPREP) { 1261 + scsi_mq_uninit_cmd(cmd); 1262 rq->rq_flags &= ~RQF_DONTPREP; 1263 } 1264 }
+1 -1
drivers/target/iscsi/iscsi_target.c
··· 4263 spin_unlock(&iscsit_global->ts_bitmap_lock); 4264 4265 iscsit_stop_timers_for_cmds(conn); 4266 - iscsit_stop_nopin_response_timer(conn); 4267 iscsit_stop_nopin_timer(conn); 4268 4269 if (conn->conn_transport->iscsit_wait_conn) 4270 conn->conn_transport->iscsit_wait_conn(conn);
··· 4263 spin_unlock(&iscsit_global->ts_bitmap_lock); 4264 4265 iscsit_stop_timers_for_cmds(conn); 4266 iscsit_stop_nopin_timer(conn); 4267 + iscsit_stop_nopin_response_timer(conn); 4268 4269 if (conn->conn_transport->iscsit_wait_conn) 4270 conn->conn_transport->iscsit_wait_conn(conn);
+6
drivers/tty/serial/msm_serial.c
··· 1746 if (!device->port.membase) 1747 return -ENODEV; 1748 1749 device->con->write = msm_serial_early_write_dm; 1750 return 0; 1751 }
··· 1746 if (!device->port.membase) 1747 return -ENODEV; 1748 1749 + /* Disable DM / single-character modes */ 1750 + msm_write(&device->port, 0, UARTDM_DMEN); 1751 + msm_write(&device->port, MSM_UART_CR_CMD_RESET_RX, MSM_UART_CR); 1752 + msm_write(&device->port, MSM_UART_CR_CMD_RESET_TX, MSM_UART_CR); 1753 + msm_write(&device->port, MSM_UART_CR_TX_ENABLE, MSM_UART_CR); 1754 + 1755 device->con->write = msm_serial_early_write_dm; 1756 return 0; 1757 }
+6
drivers/tty/serial/sifive.c
··· 563 static int sifive_serial_startup(struct uart_port *port) 564 { 565 struct sifive_serial_port *ssp = port_to_sifive_serial_port(port); 566 567 __ssp_enable_rxwm(ssp); 568 569 return 0; 570 } ··· 575 static void sifive_serial_shutdown(struct uart_port *port) 576 { 577 struct sifive_serial_port *ssp = port_to_sifive_serial_port(port); 578 579 __ssp_disable_rxwm(ssp); 580 __ssp_disable_txwm(ssp); 581 } 582 583 /**
··· 563 static int sifive_serial_startup(struct uart_port *port) 564 { 565 struct sifive_serial_port *ssp = port_to_sifive_serial_port(port); 566 + unsigned long flags; 567 568 + uart_port_lock_irqsave(&ssp->port, &flags); 569 __ssp_enable_rxwm(ssp); 570 + uart_port_unlock_irqrestore(&ssp->port, flags); 571 572 return 0; 573 } ··· 572 static void sifive_serial_shutdown(struct uart_port *port) 573 { 574 struct sifive_serial_port *ssp = port_to_sifive_serial_port(port); 575 + unsigned long flags; 576 577 + uart_port_lock_irqsave(&ssp->port, &flags); 578 __ssp_disable_rxwm(ssp); 579 __ssp_disable_txwm(ssp); 580 + uart_port_unlock_irqrestore(&ssp->port, flags); 581 } 582 583 /**
+2 -3
drivers/tty/vt/selection.c
··· 193 return -EFAULT; 194 195 /* 196 - * TIOCL_SELCLEAR, TIOCL_SELPOINTER and TIOCL_SELMOUSEREPORT are OK to 197 - * use without CAP_SYS_ADMIN as they do not modify the selection. 198 */ 199 switch (v.sel_mode) { 200 case TIOCL_SELCLEAR: 201 case TIOCL_SELPOINTER: 202 - case TIOCL_SELMOUSEREPORT: 203 break; 204 default: 205 if (!capable(CAP_SYS_ADMIN))
··· 193 return -EFAULT; 194 195 /* 196 + * TIOCL_SELCLEAR and TIOCL_SELPOINTER are OK to use without 197 + * CAP_SYS_ADMIN as they do not modify the selection. 198 */ 199 switch (v.sel_mode) { 200 case TIOCL_SELCLEAR: 201 case TIOCL_SELPOINTER: 202 break; 203 default: 204 if (!capable(CAP_SYS_ADMIN))
+5 -7
drivers/ufs/core/ufs-mcq.c
··· 677 unsigned long flags; 678 int err; 679 680 - if (!ufshcd_cmd_inflight(lrbp->cmd)) { 681 - dev_err(hba->dev, 682 - "%s: skip abort. cmd at tag %d already completed.\n", 683 - __func__, tag); 684 - return FAILED; 685 - } 686 - 687 /* Skip task abort in case previous aborts failed and report failure */ 688 if (lrbp->req_abort_skip) { 689 dev_err(hba->dev, "%s: skip abort. tag %d failed earlier\n", ··· 685 } 686 687 hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd)); 688 689 if (ufshcd_mcq_sqe_search(hba, hwq, tag)) { 690 /*
··· 677 unsigned long flags; 678 int err; 679 680 /* Skip task abort in case previous aborts failed and report failure */ 681 if (lrbp->req_abort_skip) { 682 dev_err(hba->dev, "%s: skip abort. tag %d failed earlier\n", ··· 692 } 693 694 hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd)); 695 + if (!hwq) { 696 + dev_err(hba->dev, "%s: skip abort. cmd at tag %d already completed.\n", 697 + __func__, tag); 698 + return FAILED; 699 + } 700 701 if (ufshcd_mcq_sqe_search(hba, hwq, tag)) { 702 /*
+31
drivers/ufs/core/ufshcd.c
··· 278 .model = UFS_ANY_MODEL, 279 .quirk = UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM | 280 UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE | 281 UFS_DEVICE_QUIRK_RECOVERY_FROM_DL_NAC_ERRORS }, 282 { .wmanufacturerid = UFS_VENDOR_SKHYNIX, 283 .model = UFS_ANY_MODEL, ··· 5678 continue; 5679 5680 hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd)); 5681 5682 if (force_compl) { 5683 ufshcd_mcq_compl_all_cqes_lock(hba, hwq); ··· 8473 return ret; 8474 } 8475 8476 static void ufshcd_tune_unipro_params(struct ufs_hba *hba) 8477 { 8478 ufshcd_vops_apply_dev_quirks(hba); ··· 8508 8509 if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE) 8510 ufshcd_quirk_tune_host_pa_tactivate(hba); 8511 } 8512 8513 static void ufshcd_clear_dbg_ufs_stats(struct ufs_hba *hba)
··· 278 .model = UFS_ANY_MODEL, 279 .quirk = UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM | 280 UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE | 281 + UFS_DEVICE_QUIRK_PA_HIBER8TIME | 282 UFS_DEVICE_QUIRK_RECOVERY_FROM_DL_NAC_ERRORS }, 283 { .wmanufacturerid = UFS_VENDOR_SKHYNIX, 284 .model = UFS_ANY_MODEL, ··· 5677 continue; 5678 5679 hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd)); 5680 + if (!hwq) 5681 + continue; 5682 5683 if (force_compl) { 5684 ufshcd_mcq_compl_all_cqes_lock(hba, hwq); ··· 8470 return ret; 8471 } 8472 8473 + /** 8474 + * ufshcd_quirk_override_pa_h8time - Ensures proper adjustment of PA_HIBERN8TIME. 8475 + * @hba: per-adapter instance 8476 + * 8477 + * Some UFS devices require specific adjustments to the PA_HIBERN8TIME parameter 8478 + * to ensure proper hibernation timing. This function retrieves the current 8479 + * PA_HIBERN8TIME value and increments it by 100us. 8480 + */ 8481 + static void ufshcd_quirk_override_pa_h8time(struct ufs_hba *hba) 8482 + { 8483 + u32 pa_h8time; 8484 + int ret; 8485 + 8486 + ret = ufshcd_dme_get(hba, UIC_ARG_MIB(PA_HIBERN8TIME), &pa_h8time); 8487 + if (ret) { 8488 + dev_err(hba->dev, "Failed to get PA_HIBERN8TIME: %d\n", ret); 8489 + return; 8490 + } 8491 + 8492 + /* Increment by 1 to increase hibernation time by 100 µs */ 8493 + ret = ufshcd_dme_set(hba, UIC_ARG_MIB(PA_HIBERN8TIME), pa_h8time + 1); 8494 + if (ret) 8495 + dev_err(hba->dev, "Failed updating PA_HIBERN8TIME: %d\n", ret); 8496 + } 8497 + 8498 static void ufshcd_tune_unipro_params(struct ufs_hba *hba) 8499 { 8500 ufshcd_vops_apply_dev_quirks(hba); ··· 8480 8481 if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE) 8482 ufshcd_quirk_tune_host_pa_tactivate(hba); 8483 + 8484 + if (hba->dev_quirks & UFS_DEVICE_QUIRK_PA_HIBER8TIME) 8485 + ufshcd_quirk_override_pa_h8time(hba); 8486 } 8487 8488 static void ufshcd_clear_dbg_ufs_stats(struct ufs_hba *hba)
+43
drivers/ufs/host/ufs-qcom.c
··· 33 ((((c) >> 16) & MCQ_QCFGPTR_MASK) * MCQ_QCFGPTR_UNIT) 34 #define MCQ_QCFG_SIZE 0x40 35 36 enum { 37 TSTBUS_UAWM, 38 TSTBUS_UARM, ··· 799 return ufs_qcom_icc_set_bw(host, bw_table.mem_bw, bw_table.cfg_bw); 800 } 801 802 static int ufs_qcom_pwr_change_notify(struct ufs_hba *hba, 803 enum ufs_notify_change_status status, 804 const struct ufs_pa_layer_attr *dev_max_params, ··· 867 dev_req_params->gear_tx, 868 PA_INITIAL_ADAPT); 869 } 870 break; 871 case POST_CHANGE: 872 if (ufs_qcom_cfg_timers(hba, false)) { ··· 919 (pa_vs_config_reg1 | (1 << 12))); 920 } 921 922 static int ufs_qcom_apply_dev_quirks(struct ufs_hba *hba) 923 { 924 int err = 0; 925 926 if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME) 927 err = ufs_qcom_quirk_host_pa_saveconfigtime(hba); 928 929 return err; 930 } ··· 953 { .wmanufacturerid = UFS_VENDOR_WDC, 954 .model = UFS_ANY_MODEL, 955 .quirk = UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE }, 956 {} 957 }; 958
··· 33 ((((c) >> 16) & MCQ_QCFGPTR_MASK) * MCQ_QCFGPTR_UNIT) 34 #define MCQ_QCFG_SIZE 0x40 35 36 + /* De-emphasis for gear-5 */ 37 + #define DEEMPHASIS_3_5_dB 0x04 38 + #define NO_DEEMPHASIS 0x0 39 + 40 enum { 41 TSTBUS_UAWM, 42 TSTBUS_UARM, ··· 795 return ufs_qcom_icc_set_bw(host, bw_table.mem_bw, bw_table.cfg_bw); 796 } 797 798 + static void ufs_qcom_set_tx_hs_equalizer(struct ufs_hba *hba, u32 gear, u32 tx_lanes) 799 + { 800 + u32 equalizer_val; 801 + int ret, i; 802 + 803 + /* Determine the equalizer value based on the gear */ 804 + equalizer_val = (gear == 5) ? DEEMPHASIS_3_5_dB : NO_DEEMPHASIS; 805 + 806 + for (i = 0; i < tx_lanes; i++) { 807 + ret = ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(TX_HS_EQUALIZER, i), 808 + equalizer_val); 809 + if (ret) 810 + dev_err(hba->dev, "%s: failed equalizer lane %d\n", 811 + __func__, i); 812 + } 813 + } 814 + 815 static int ufs_qcom_pwr_change_notify(struct ufs_hba *hba, 816 enum ufs_notify_change_status status, 817 const struct ufs_pa_layer_attr *dev_max_params, ··· 846 dev_req_params->gear_tx, 847 PA_INITIAL_ADAPT); 848 } 849 + 850 + if (hba->dev_quirks & UFS_DEVICE_QUIRK_PA_TX_DEEMPHASIS_TUNING) 851 + ufs_qcom_set_tx_hs_equalizer(hba, 852 + dev_req_params->gear_tx, dev_req_params->lane_tx); 853 + 854 break; 855 case POST_CHANGE: 856 if (ufs_qcom_cfg_timers(hba, false)) { ··· 893 (pa_vs_config_reg1 | (1 << 12))); 894 } 895 896 + static void ufs_qcom_override_pa_tx_hsg1_sync_len(struct ufs_hba *hba) 897 + { 898 + int err; 899 + 900 + err = ufshcd_dme_peer_set(hba, UIC_ARG_MIB(PA_TX_HSG1_SYNC_LENGTH), 901 + PA_TX_HSG1_SYNC_LENGTH_VAL); 902 + if (err) 903 + dev_err(hba->dev, "Failed (%d) set PA_TX_HSG1_SYNC_LENGTH\n", err); 904 + } 905 + 906 static int ufs_qcom_apply_dev_quirks(struct ufs_hba *hba) 907 { 908 int err = 0; 909 910 if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME) 911 err = ufs_qcom_quirk_host_pa_saveconfigtime(hba); 912 + 913 + if (hba->dev_quirks & UFS_DEVICE_QUIRK_PA_TX_HSG1_SYNC_LENGTH) 914 + ufs_qcom_override_pa_tx_hsg1_sync_len(hba); 915 916 return err; 917 } ··· 914 { .wmanufacturerid = UFS_VENDOR_WDC, 915 .model = UFS_ANY_MODEL, 916 .quirk = UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE }, 917 + { .wmanufacturerid = UFS_VENDOR_SAMSUNG, 918 + .model = UFS_ANY_MODEL, 919 + .quirk = UFS_DEVICE_QUIRK_PA_TX_HSG1_SYNC_LENGTH | 920 + UFS_DEVICE_QUIRK_PA_TX_DEEMPHASIS_TUNING }, 921 {} 922 }; 923
+18
drivers/ufs/host/ufs-qcom.h
··· 122 TMRLUT_HW_CGC_EN | OCSC_HW_CGC_EN) 123 124 /* QUniPro Vendor specific attributes */ 125 #define PA_VS_CONFIG_REG1 0x9000 126 #define DME_VS_CORE_CLK_CTRL 0xD002 127 /* bit and mask definitions for DME_VS_CORE_CLK_CTRL attribute */ 128 #define CLK_1US_CYCLES_MASK_V4 GENMASK(27, 16) 129 #define CLK_1US_CYCLES_MASK GENMASK(7, 0) ··· 143 #define UNIPRO_CORE_CLK_FREQ_300_MHZ 300 144 #define UNIPRO_CORE_CLK_FREQ_201_5_MHZ 202 145 #define UNIPRO_CORE_CLK_FREQ_403_MHZ 403 146 147 /* ICE allocator type to share AES engines among TX stream and RX stream */ 148 #define ICE_ALLOCATOR_TYPE 2
··· 122 TMRLUT_HW_CGC_EN | OCSC_HW_CGC_EN) 123 124 /* QUniPro Vendor specific attributes */ 125 + #define PA_TX_HSG1_SYNC_LENGTH 0x1552 126 #define PA_VS_CONFIG_REG1 0x9000 127 #define DME_VS_CORE_CLK_CTRL 0xD002 128 + #define TX_HS_EQUALIZER 0x0037 129 + 130 /* bit and mask definitions for DME_VS_CORE_CLK_CTRL attribute */ 131 #define CLK_1US_CYCLES_MASK_V4 GENMASK(27, 16) 132 #define CLK_1US_CYCLES_MASK GENMASK(7, 0) ··· 140 #define UNIPRO_CORE_CLK_FREQ_300_MHZ 300 141 #define UNIPRO_CORE_CLK_FREQ_201_5_MHZ 202 142 #define UNIPRO_CORE_CLK_FREQ_403_MHZ 403 143 + 144 + /* TX_HSG1_SYNC_LENGTH attr value */ 145 + #define PA_TX_HSG1_SYNC_LENGTH_VAL 0x4A 146 + 147 + /* 148 + * Some ufs device vendors need a different TSync length. 149 + * Enable this quirk to give an additional TX_HS_SYNC_LENGTH. 150 + */ 151 + #define UFS_DEVICE_QUIRK_PA_TX_HSG1_SYNC_LENGTH BIT(16) 152 + 153 + /* 154 + * Some ufs device vendors need a different Deemphasis setting. 155 + * Enable this quirk to tune TX Deemphasis parameters. 156 + */ 157 + #define UFS_DEVICE_QUIRK_PA_TX_DEEMPHASIS_TUNING BIT(17) 158 159 /* ICE allocator type to share AES engines among TX stream and RX stream */ 160 #define ICE_ALLOCATOR_TYPE 2
+2
drivers/usb/cdns3/cdns3-gadget.c
··· 1963 unsigned int bit; 1964 unsigned long reg; 1965 1966 spin_lock_irqsave(&priv_dev->lock, flags); 1967 1968 reg = readl(&priv_dev->regs->usb_ists); ··· 2005 irqend: 2006 writel(~0, &priv_dev->regs->ep_ien); 2007 spin_unlock_irqrestore(&priv_dev->lock, flags); 2008 2009 return ret; 2010 }
··· 1963 unsigned int bit; 1964 unsigned long reg; 1965 1966 + local_bh_disable(); 1967 spin_lock_irqsave(&priv_dev->lock, flags); 1968 1969 reg = readl(&priv_dev->regs->usb_ists); ··· 2004 irqend: 2005 writel(~0, &priv_dev->regs->ep_ien); 2006 spin_unlock_irqrestore(&priv_dev->lock, flags); 2007 + local_bh_enable(); 2008 2009 return ret; 2010 }
+31 -13
drivers/usb/chipidea/ci_hdrc_imx.c
··· 336 return ret; 337 } 338 339 static int ci_hdrc_imx_probe(struct platform_device *pdev) 340 { 341 struct ci_hdrc_imx_data *data; ··· 401 "Failed to enable HSIC pad regulator\n"); 402 goto err_put; 403 } 404 } 405 } 406 ··· 446 447 ret = imx_get_clks(dev); 448 if (ret) 449 - goto disable_hsic_regulator; 450 451 ret = imx_prepare_enable_clks(dev); 452 if (ret) 453 - goto disable_hsic_regulator; 454 455 ret = clk_prepare_enable(data->clk_wakeup); 456 if (ret) ··· 484 of_usb_get_phy_mode(np) == USBPHY_INTERFACE_MODE_ULPI) { 485 pdata.flags |= CI_HDRC_OVERRIDE_PHY_CONTROL; 486 data->override_phy_control = true; 487 - usb_phy_init(pdata.usb_phy); 488 } 489 490 if (pdata.flags & CI_HDRC_SUPPORTS_RUNTIME_PM) ··· 497 ret = imx_usbmisc_init(data->usbmisc_data); 498 if (ret) { 499 dev_err(dev, "usbmisc init failed, ret=%d\n", ret); 500 - goto err_clk; 501 } 502 503 data->ci_pdev = ci_hdrc_add_device(dev, ··· 506 if (IS_ERR(data->ci_pdev)) { 507 ret = PTR_ERR(data->ci_pdev); 508 dev_err_probe(dev, ret, "ci_hdrc_add_device failed\n"); 509 - goto err_clk; 510 } 511 512 if (data->usbmisc_data) { ··· 540 541 disable_device: 542 ci_hdrc_remove_device(data->ci_pdev); 543 err_clk: 544 clk_disable_unprepare(data->clk_wakeup); 545 err_wakeup_clk: 546 imx_disable_unprepare_clks(dev); 547 - disable_hsic_regulator: 548 - if (data->hsic_pad_regulator) 549 - /* don't overwrite original ret (cf. EPROBE_DEFER) */ 550 - regulator_disable(data->hsic_pad_regulator); 551 if (pdata.flags & CI_HDRC_PMQOS) 552 cpu_latency_qos_remove_request(&data->pm_qos_req); 553 data->ci_pdev = NULL; 554 err_put: 555 - put_device(data->usbmisc_data->dev); 556 return ret; 557 } 558 ··· 575 clk_disable_unprepare(data->clk_wakeup); 576 if (data->plat_data->flags & CI_HDRC_PMQOS) 577 cpu_latency_qos_remove_request(&data->pm_qos_req); 578 - if (data->hsic_pad_regulator) 579 - regulator_disable(data->hsic_pad_regulator); 580 } 581 - put_device(data->usbmisc_data->dev); 582 } 583 584 static void ci_hdrc_imx_shutdown(struct platform_device *pdev)
··· 336 return ret; 337 } 338 339 + static void ci_hdrc_imx_disable_regulator(void *arg) 340 + { 341 + struct ci_hdrc_imx_data *data = arg; 342 + 343 + regulator_disable(data->hsic_pad_regulator); 344 + } 345 + 346 static int ci_hdrc_imx_probe(struct platform_device *pdev) 347 { 348 struct ci_hdrc_imx_data *data; ··· 394 "Failed to enable HSIC pad regulator\n"); 395 goto err_put; 396 } 397 + ret = devm_add_action_or_reset(dev, 398 + ci_hdrc_imx_disable_regulator, data); 399 + if (ret) { 400 + dev_err(dev, 401 + "Failed to add regulator devm action\n"); 402 + goto err_put; 403 + } 404 } 405 } 406 ··· 432 433 ret = imx_get_clks(dev); 434 if (ret) 435 + goto qos_remove_request; 436 437 ret = imx_prepare_enable_clks(dev); 438 if (ret) 439 + goto qos_remove_request; 440 441 ret = clk_prepare_enable(data->clk_wakeup); 442 if (ret) ··· 470 of_usb_get_phy_mode(np) == USBPHY_INTERFACE_MODE_ULPI) { 471 pdata.flags |= CI_HDRC_OVERRIDE_PHY_CONTROL; 472 data->override_phy_control = true; 473 + ret = usb_phy_init(pdata.usb_phy); 474 + if (ret) { 475 + dev_err(dev, "Failed to init phy\n"); 476 + goto err_clk; 477 + } 478 } 479 480 if (pdata.flags & CI_HDRC_SUPPORTS_RUNTIME_PM) ··· 479 ret = imx_usbmisc_init(data->usbmisc_data); 480 if (ret) { 481 dev_err(dev, "usbmisc init failed, ret=%d\n", ret); 482 + goto phy_shutdown; 483 } 484 485 data->ci_pdev = ci_hdrc_add_device(dev, ··· 488 if (IS_ERR(data->ci_pdev)) { 489 ret = PTR_ERR(data->ci_pdev); 490 dev_err_probe(dev, ret, "ci_hdrc_add_device failed\n"); 491 + goto phy_shutdown; 492 } 493 494 if (data->usbmisc_data) { ··· 522 523 disable_device: 524 ci_hdrc_remove_device(data->ci_pdev); 525 + phy_shutdown: 526 + if (data->override_phy_control) 527 + usb_phy_shutdown(data->phy); 528 err_clk: 529 clk_disable_unprepare(data->clk_wakeup); 530 err_wakeup_clk: 531 imx_disable_unprepare_clks(dev); 532 + qos_remove_request: 533 if (pdata.flags & CI_HDRC_PMQOS) 534 cpu_latency_qos_remove_request(&data->pm_qos_req); 535 data->ci_pdev = NULL; 536 err_put: 537 + if (data->usbmisc_data) 538 + put_device(data->usbmisc_data->dev); 539 return ret; 540 } 541 ··· 556 clk_disable_unprepare(data->clk_wakeup); 557 if (data->plat_data->flags & CI_HDRC_PMQOS) 558 cpu_latency_qos_remove_request(&data->pm_qos_req); 559 } 560 + if (data->usbmisc_data) 561 + put_device(data->usbmisc_data->dev); 562 } 563 564 static void ci_hdrc_imx_shutdown(struct platform_device *pdev)
+16 -5
drivers/usb/class/cdc-wdm.c
··· 726 rv = -EBUSY; 727 goto out; 728 } 729 - 730 rv = usb_autopm_get_interface(desc->intf); 731 if (rv < 0) { 732 dev_err(&desc->intf->dev, "Error autopm - %d\n", rv); ··· 829 static int wdm_wwan_port_start(struct wwan_port *port) 830 { 831 struct wdm_device *desc = wwan_port_get_drvdata(port); 832 833 /* The interface is both exposed via the WWAN framework and as a 834 * legacy usbmisc chardev. If chardev is already open, just fail ··· 849 wwan_port_txon(port); 850 851 /* Start getting events */ 852 - return usb_submit_urb(desc->validity, GFP_KERNEL); 853 } 854 855 static void wdm_wwan_port_stop(struct wwan_port *port) ··· 868 poison_urbs(desc); 869 desc->manage_power(desc->intf, 0); 870 clear_bit(WDM_READ, &desc->flags); 871 - clear_bit(WDM_WWAN_IN_USE, &desc->flags); 872 unpoison_urbs(desc); 873 } 874 875 static void wdm_wwan_port_tx_complete(struct urb *urb) ··· 879 struct sk_buff *skb = urb->context; 880 struct wdm_device *desc = skb_shinfo(skb)->destructor_arg; 881 882 - usb_autopm_put_interface(desc->intf); 883 wwan_port_txon(desc->wwanp); 884 kfree_skb(skb); 885 } ··· 909 req->bRequestType = (USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE); 910 req->bRequest = USB_CDC_SEND_ENCAPSULATED_COMMAND; 911 req->wValue = 0; 912 - req->wIndex = desc->inum; 913 req->wLength = cpu_to_le16(skb->len); 914 915 skb_shinfo(skb)->destructor_arg = desc;
··· 726 rv = -EBUSY; 727 goto out; 728 } 729 + smp_rmb(); /* ordered against wdm_wwan_port_stop() */ 730 rv = usb_autopm_get_interface(desc->intf); 731 if (rv < 0) { 732 dev_err(&desc->intf->dev, "Error autopm - %d\n", rv); ··· 829 static int wdm_wwan_port_start(struct wwan_port *port) 830 { 831 struct wdm_device *desc = wwan_port_get_drvdata(port); 832 + int rv; 833 834 /* The interface is both exposed via the WWAN framework and as a 835 * legacy usbmisc chardev. If chardev is already open, just fail ··· 848 wwan_port_txon(port); 849 850 /* Start getting events */ 851 + rv = usb_submit_urb(desc->validity, GFP_KERNEL); 852 + if (rv < 0) { 853 + wwan_port_txoff(port); 854 + desc->manage_power(desc->intf, 0); 855 + /* this must be last lest we race with chardev open */ 856 + clear_bit(WDM_WWAN_IN_USE, &desc->flags); 857 + } 858 + 859 + return rv; 860 } 861 862 static void wdm_wwan_port_stop(struct wwan_port *port) ··· 859 poison_urbs(desc); 860 desc->manage_power(desc->intf, 0); 861 clear_bit(WDM_READ, &desc->flags); 862 unpoison_urbs(desc); 863 + smp_wmb(); /* ordered against wdm_open() */ 864 + /* this must be last lest we open a poisoned device */ 865 + clear_bit(WDM_WWAN_IN_USE, &desc->flags); 866 } 867 868 static void wdm_wwan_port_tx_complete(struct urb *urb) ··· 868 struct sk_buff *skb = urb->context; 869 struct wdm_device *desc = skb_shinfo(skb)->destructor_arg; 870 871 + usb_autopm_put_interface_async(desc->intf); 872 wwan_port_txon(desc->wwanp); 873 kfree_skb(skb); 874 } ··· 898 req->bRequestType = (USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE); 899 req->bRequest = USB_CDC_SEND_ENCAPSULATED_COMMAND; 900 req->wValue = 0; 901 + req->wIndex = desc->inum; /* already converted */ 902 req->wLength = cpu_to_le16(skb->len); 903 904 skb_shinfo(skb)->destructor_arg = desc;
+9
drivers/usb/core/quirks.c
··· 369 { USB_DEVICE(0x0781, 0x5583), .driver_info = USB_QUIRK_NO_LPM }, 370 { USB_DEVICE(0x0781, 0x5591), .driver_info = USB_QUIRK_NO_LPM }, 371 372 /* Realforce 87U Keyboard */ 373 { USB_DEVICE(0x0853, 0x011b), .driver_info = USB_QUIRK_NO_LPM }, 374 ··· 385 USB_QUIRK_LINEAR_FRAME_INTR_BINTERVAL }, 386 { USB_DEVICE(0x0904, 0x6103), .driver_info = 387 USB_QUIRK_LINEAR_FRAME_INTR_BINTERVAL }, 388 389 /* Sound Devices USBPre2 */ 390 { USB_DEVICE(0x0926, 0x0202), .driver_info = ··· 544 /* Hauppauge HVR-950q */ 545 { USB_DEVICE(0x2040, 0x7200), .driver_info = 546 USB_QUIRK_CONFIG_INTF_STRINGS }, 547 548 /* Raydium Touchscreen */ 549 { USB_DEVICE(0x2386, 0x3114), .driver_info = USB_QUIRK_NO_LPM },
··· 369 { USB_DEVICE(0x0781, 0x5583), .driver_info = USB_QUIRK_NO_LPM }, 370 { USB_DEVICE(0x0781, 0x5591), .driver_info = USB_QUIRK_NO_LPM }, 371 372 + /* SanDisk Corp. SanDisk 3.2Gen1 */ 373 + { USB_DEVICE(0x0781, 0x55a3), .driver_info = USB_QUIRK_DELAY_INIT }, 374 + 375 /* Realforce 87U Keyboard */ 376 { USB_DEVICE(0x0853, 0x011b), .driver_info = USB_QUIRK_NO_LPM }, 377 ··· 382 USB_QUIRK_LINEAR_FRAME_INTR_BINTERVAL }, 383 { USB_DEVICE(0x0904, 0x6103), .driver_info = 384 USB_QUIRK_LINEAR_FRAME_INTR_BINTERVAL }, 385 + 386 + /* Silicon Motion Flash Drive */ 387 + { USB_DEVICE(0x090c, 0x1000), .driver_info = USB_QUIRK_DELAY_INIT }, 388 389 /* Sound Devices USBPre2 */ 390 { USB_DEVICE(0x0926, 0x0202), .driver_info = ··· 538 /* Hauppauge HVR-950q */ 539 { USB_DEVICE(0x2040, 0x7200), .driver_info = 540 USB_QUIRK_CONFIG_INTF_STRINGS }, 541 + 542 + /* VLI disk */ 543 + { USB_DEVICE(0x2109, 0x0711), .driver_info = USB_QUIRK_NO_LPM }, 544 545 /* Raydium Touchscreen */ 546 { USB_DEVICE(0x2386, 0x3114), .driver_info = USB_QUIRK_NO_LPM },
+1 -3
drivers/usb/dwc3/dwc3-xilinx.c
··· 207 208 skip_usb3_phy: 209 /* ulpi reset via gpio-modepin or gpio-framework driver */ 210 - reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW); 211 if (IS_ERR(reset_gpio)) { 212 return dev_err_probe(dev, PTR_ERR(reset_gpio), 213 "Failed to request reset GPIO\n"); 214 } 215 216 if (reset_gpio) { 217 - /* Toggle ulpi to reset the phy. */ 218 - gpiod_set_value_cansleep(reset_gpio, 1); 219 usleep_range(5000, 10000); 220 gpiod_set_value_cansleep(reset_gpio, 0); 221 usleep_range(5000, 10000);
··· 207 208 skip_usb3_phy: 209 /* ulpi reset via gpio-modepin or gpio-framework driver */ 210 + reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH); 211 if (IS_ERR(reset_gpio)) { 212 return dev_err_probe(dev, PTR_ERR(reset_gpio), 213 "Failed to request reset GPIO\n"); 214 } 215 216 if (reset_gpio) { 217 usleep_range(5000, 10000); 218 gpiod_set_value_cansleep(reset_gpio, 0); 219 usleep_range(5000, 10000);
+6
drivers/usb/dwc3/gadget.c
··· 4617 if (!count) 4618 return IRQ_NONE; 4619 4620 evt->count = count; 4621 evt->flags |= DWC3_EVENT_PENDING; 4622
··· 4617 if (!count) 4618 return IRQ_NONE; 4619 4620 + if (count > evt->length) { 4621 + dev_err_ratelimited(dwc->dev, "invalid count(%u) > evt->length(%u)\n", 4622 + count, evt->length); 4623 + return IRQ_NONE; 4624 + } 4625 + 4626 evt->count = count; 4627 evt->flags |= DWC3_EVENT_PENDING; 4628
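Note: the added check refuses to trust a hardware-reported event count that exceeds the event buffer length before caching it. A trivial standalone illustration of that guard (sanitize_count() and the sample sizes are made up):

    #include <stdio.h>

    /* Returns the number of bytes safe to process, or 0 if the
     * hardware-reported count cannot possibly be valid. */
    static unsigned int sanitize_count(unsigned int count, unsigned int buf_len)
    {
            if (!count)
                    return 0;
            if (count > buf_len) {
                    fprintf(stderr, "invalid count(%u) > length(%u)\n", count, buf_len);
                    return 0;
            }
            return count;
    }

    int main(void)
    {
            printf("%u\n", sanitize_count(64, 4096));   /* 64 */
            printf("%u\n", sanitize_count(8192, 4096)); /* 0, rejected */
            return 0;
    }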
+23
drivers/usb/host/ohci-pci.c
··· 165 return 0; 166 } 167 168 static int ohci_quirk_qemu(struct usb_hcd *hcd) 169 { 170 struct ohci_hcd *ohci = hcd_to_ohci(hcd); ··· 242 { 243 PCI_DEVICE(PCI_VENDOR_ID_ATI, 0x4399), 244 .driver_data = (unsigned long)ohci_quirk_amd700, 245 }, 246 { 247 .vendor = PCI_VENDOR_ID_APPLE,
··· 165 return 0; 166 } 167 168 + static int ohci_quirk_loongson(struct usb_hcd *hcd) 169 + { 170 + struct pci_dev *pdev = to_pci_dev(hcd->self.controller); 171 + 172 + /* 173 + * Loongson's LS7A OHCI controller (rev 0x02) has a 174 + * flaw. MMIO register with offset 0x60/64 is treated 175 + * as legacy PS2-compatible keyboard/mouse interface. 176 + * Since OHCI only use 4KB BAR resource, LS7A OHCI's 177 + * 32KB BAR is wrapped around (the 2nd 4KB BAR space 178 + * is the same as the 1st 4KB internally). So add 4KB 179 + * offset (0x1000) to the OHCI registers as a quirk. 180 + */ 181 + if (pdev->revision == 0x2) 182 + hcd->regs += SZ_4K; /* SZ_4K = 0x1000 */ 183 + 184 + return 0; 185 + } 186 + 187 static int ohci_quirk_qemu(struct usb_hcd *hcd) 188 { 189 struct ohci_hcd *ohci = hcd_to_ohci(hcd); ··· 223 { 224 PCI_DEVICE(PCI_VENDOR_ID_ATI, 0x4399), 225 .driver_data = (unsigned long)ohci_quirk_amd700, 226 + }, 227 + { 228 + PCI_DEVICE(PCI_VENDOR_ID_LOONGSON, 0x7a24), 229 + .driver_data = (unsigned long)ohci_quirk_loongson, 230 }, 231 { 232 .vendor = PCI_VENDOR_ID_APPLE,
+16 -14
drivers/usb/host/xhci-hub.c
··· 1878 int max_ports, port_index; 1879 int sret; 1880 u32 next_state; 1881 - u32 temp, portsc; 1882 struct xhci_hub *rhub; 1883 struct xhci_port **ports; 1884 1885 rhub = xhci_get_rhub(hcd); 1886 ports = rhub->ports; ··· 1897 return -ESHUTDOWN; 1898 } 1899 1900 - /* delay the irqs */ 1901 - temp = readl(&xhci->op_regs->command); 1902 - temp &= ~CMD_EIE; 1903 - writel(temp, &xhci->op_regs->command); 1904 - 1905 /* bus specific resume for ports we suspended at bus_suspend */ 1906 - if (hcd->speed >= HCD_USB3) 1907 next_state = XDEV_U0; 1908 - else 1909 next_state = XDEV_RESUME; 1910 - 1911 port_index = max_ports; 1912 while (port_index--) { 1913 portsc = readl(ports[port_index]->addr); ··· 1978 (void) readl(&xhci->op_regs->command); 1979 1980 bus_state->next_statechange = jiffies + msecs_to_jiffies(5); 1981 - /* re-enable irqs */ 1982 - temp = readl(&xhci->op_regs->command); 1983 - temp |= CMD_EIE; 1984 - writel(temp, &xhci->op_regs->command); 1985 - temp = readl(&xhci->op_regs->command); 1986 1987 spin_unlock_irqrestore(&xhci->lock, flags); 1988 return 0;
··· 1878 int max_ports, port_index; 1879 int sret; 1880 u32 next_state; 1881 + u32 portsc; 1882 struct xhci_hub *rhub; 1883 struct xhci_port **ports; 1884 + bool disabled_irq = false; 1885 1886 rhub = xhci_get_rhub(hcd); 1887 ports = rhub->ports; ··· 1896 return -ESHUTDOWN; 1897 } 1898 1899 /* bus specific resume for ports we suspended at bus_suspend */ 1900 + if (hcd->speed >= HCD_USB3) { 1901 next_state = XDEV_U0; 1902 + } else { 1903 next_state = XDEV_RESUME; 1904 + if (bus_state->bus_suspended) { 1905 + /* 1906 + * prevent port event interrupts from interfering 1907 + * with usb2 port resume process 1908 + */ 1909 + xhci_disable_interrupter(xhci->interrupters[0]); 1910 + disabled_irq = true; 1911 + } 1912 + } 1913 port_index = max_ports; 1914 while (port_index--) { 1915 portsc = readl(ports[port_index]->addr); ··· 1974 (void) readl(&xhci->op_regs->command); 1975 1976 bus_state->next_statechange = jiffies + msecs_to_jiffies(5); 1977 + /* re-enable interrupter */ 1978 + if (disabled_irq) 1979 + xhci_enable_interrupter(xhci->interrupters[0]); 1980 1981 spin_unlock_irqrestore(&xhci->lock, flags); 1982 return 0;
+4 -7
drivers/usb/host/xhci-ring.c
··· 561 * pointer command pending because the device can choose to start any 562 * stream once the endpoint is on the HW schedule. 563 */ 564 - if (ep_state & (EP_STOP_CMD_PENDING | SET_DEQ_PENDING | EP_HALTED | 565 - EP_CLEARING_TT | EP_STALLED)) 566 return; 567 568 trace_xhci_ring_ep_doorbell(slot_id, DB_VALUE(ep_index, stream_id)); ··· 2573 2574 xhci_handle_halted_endpoint(xhci, ep, td, EP_SOFT_RESET); 2575 return; 2576 - case COMP_STALL_ERROR: 2577 - ep->ep_state |= EP_STALLED; 2578 - break; 2579 default: 2580 /* do nothing */ 2581 break; ··· 2913 if (xhci_spurious_success_tx_event(xhci, ep_ring)) { 2914 xhci_dbg(xhci, "Spurious event dma %pad, comp_code %u after %u\n", 2915 &ep_trb_dma, trb_comp_code, ep_ring->old_trb_comp_code); 2916 - ep_ring->old_trb_comp_code = trb_comp_code; 2917 return 0; 2918 } 2919 ··· 3777 * enqueue a No Op TRB, this can prevent the Setup and Data Stage 3778 * TRB to be breaked by the Link TRB. 3779 */ 3780 - if (trb_is_link(ep_ring->enqueue + 1)) { 3781 field = TRB_TYPE(TRB_TR_NOOP) | ep_ring->cycle_state; 3782 queue_trb(xhci, ep_ring, false, 0, 0, 3783 TRB_INTR_TARGET(0), field);
··· 561 * pointer command pending because the device can choose to start any 562 * stream once the endpoint is on the HW schedule. 563 */ 564 + if ((ep_state & EP_STOP_CMD_PENDING) || (ep_state & SET_DEQ_PENDING) || 565 + (ep_state & EP_HALTED) || (ep_state & EP_CLEARING_TT)) 566 return; 567 568 trace_xhci_ring_ep_doorbell(slot_id, DB_VALUE(ep_index, stream_id)); ··· 2573 2574 xhci_handle_halted_endpoint(xhci, ep, td, EP_SOFT_RESET); 2575 return; 2576 default: 2577 /* do nothing */ 2578 break; ··· 2916 if (xhci_spurious_success_tx_event(xhci, ep_ring)) { 2917 xhci_dbg(xhci, "Spurious event dma %pad, comp_code %u after %u\n", 2918 &ep_trb_dma, trb_comp_code, ep_ring->old_trb_comp_code); 2919 + ep_ring->old_trb_comp_code = 0; 2920 return 0; 2921 } 2922 ··· 3780 * enqueue a No Op TRB, this can prevent the Setup and Data Stage 3781 * TRB to be breaked by the Link TRB. 3782 */ 3783 + if (last_trb_on_seg(ep_ring->enq_seg, ep_ring->enqueue + 1)) { 3784 field = TRB_TYPE(TRB_TR_NOOP) | ep_ring->cycle_state; 3785 queue_trb(xhci, ep_ring, false, 0, 0, 3786 TRB_INTR_TARGET(0), field);
+5 -13
drivers/usb/host/xhci.c
··· 322 xhci_info(xhci, "Fault detected\n"); 323 } 324 325 - static int xhci_enable_interrupter(struct xhci_interrupter *ir) 326 { 327 u32 iman; 328 ··· 335 return 0; 336 } 337 338 - static int xhci_disable_interrupter(struct xhci_interrupter *ir) 339 { 340 u32 iman; 341 ··· 1605 goto free_priv; 1606 } 1607 1608 - /* Class driver might not be aware ep halted due to async URB giveback */ 1609 - if (*ep_state & EP_STALLED) 1610 - dev_dbg(&urb->dev->dev, "URB %p queued before clearing halt\n", 1611 - urb); 1612 - 1613 switch (usb_endpoint_type(&urb->ep->desc)) { 1614 1615 case USB_ENDPOINT_XFER_CONTROL: ··· 1765 goto done; 1766 } 1767 1768 - /* In these cases no commands are pending but the endpoint is stopped */ 1769 - if (ep->ep_state & (EP_CLEARING_TT | EP_STALLED)) { 1770 /* and cancelled TDs can be given back right away */ 1771 xhci_dbg(xhci, "Invalidating TDs instantly on slot %d ep %d in state 0x%x\n", 1772 urb->dev->slot_id, ep_index, ep->ep_state); ··· 3204 3205 ep = &vdev->eps[ep_index]; 3206 3207 - spin_lock_irqsave(&xhci->lock, flags); 3208 - 3209 - ep->ep_state &= ~EP_STALLED; 3210 - 3211 /* Bail out if toggle is already being cleared by a endpoint reset */ 3212 if (ep->ep_state & EP_HARD_CLEAR_TOGGLE) { 3213 ep->ep_state &= ~EP_HARD_CLEAR_TOGGLE; 3214 spin_unlock_irqrestore(&xhci->lock, flags);
··· 322 xhci_info(xhci, "Fault detected\n"); 323 } 324 325 + int xhci_enable_interrupter(struct xhci_interrupter *ir) 326 { 327 u32 iman; 328 ··· 335 return 0; 336 } 337 338 + int xhci_disable_interrupter(struct xhci_interrupter *ir) 339 { 340 u32 iman; 341 ··· 1605 goto free_priv; 1606 } 1607 1608 switch (usb_endpoint_type(&urb->ep->desc)) { 1609 1610 case USB_ENDPOINT_XFER_CONTROL: ··· 1770 goto done; 1771 } 1772 1773 + /* In this case no commands are pending but the endpoint is stopped */ 1774 + if (ep->ep_state & EP_CLEARING_TT) { 1775 /* and cancelled TDs can be given back right away */ 1776 xhci_dbg(xhci, "Invalidating TDs instantly on slot %d ep %d in state 0x%x\n", 1777 urb->dev->slot_id, ep_index, ep->ep_state); ··· 3209 3210 ep = &vdev->eps[ep_index]; 3211 3212 /* Bail out if toggle is already being cleared by a endpoint reset */ 3213 + spin_lock_irqsave(&xhci->lock, flags); 3214 if (ep->ep_state & EP_HARD_CLEAR_TOGGLE) { 3215 ep->ep_state &= ~EP_HARD_CLEAR_TOGGLE; 3216 spin_unlock_irqrestore(&xhci->lock, flags);
+3 -2
drivers/usb/host/xhci.h
··· 664 unsigned int err_count; 665 unsigned int ep_state; 666 #define SET_DEQ_PENDING (1 << 0) 667 - #define EP_HALTED (1 << 1) /* Halted host ep handling */ 668 #define EP_STOP_CMD_PENDING (1 << 2) /* For URB cancellation */ 669 /* Transitioning the endpoint to using streams, don't enqueue URBs */ 670 #define EP_GETTING_STREAMS (1 << 3) ··· 675 #define EP_SOFT_CLEAR_TOGGLE (1 << 7) 676 /* usb_hub_clear_tt_buffer is in progress */ 677 #define EP_CLEARING_TT (1 << 8) 678 - #define EP_STALLED (1 << 9) /* For stall handling */ 679 /* ---- Related to URB cancellation ---- */ 680 struct list_head cancelled_td_list; 681 struct xhci_hcd *xhci; ··· 1890 struct usb_tt *tt, gfp_t mem_flags); 1891 int xhci_set_interrupter_moderation(struct xhci_interrupter *ir, 1892 u32 imod_interval); 1893 1894 /* xHCI ring, segment, TRB, and TD functions */ 1895 dma_addr_t xhci_trb_virt_to_dma(struct xhci_segment *seg, union xhci_trb *trb);
··· 664 unsigned int err_count; 665 unsigned int ep_state; 666 #define SET_DEQ_PENDING (1 << 0) 667 + #define EP_HALTED (1 << 1) /* For stall handling */ 668 #define EP_STOP_CMD_PENDING (1 << 2) /* For URB cancellation */ 669 /* Transitioning the endpoint to using streams, don't enqueue URBs */ 670 #define EP_GETTING_STREAMS (1 << 3) ··· 675 #define EP_SOFT_CLEAR_TOGGLE (1 << 7) 676 /* usb_hub_clear_tt_buffer is in progress */ 677 #define EP_CLEARING_TT (1 << 8) 678 /* ---- Related to URB cancellation ---- */ 679 struct list_head cancelled_td_list; 680 struct xhci_hcd *xhci; ··· 1891 struct usb_tt *tt, gfp_t mem_flags); 1892 int xhci_set_interrupter_moderation(struct xhci_interrupter *ir, 1893 u32 imod_interval); 1894 + int xhci_enable_interrupter(struct xhci_interrupter *ir); 1895 + int xhci_disable_interrupter(struct xhci_interrupter *ir); 1896 1897 /* xHCI ring, segment, TRB, and TD functions */ 1898 dma_addr_t xhci_trb_virt_to_dma(struct xhci_segment *seg, union xhci_trb *trb);
+2
drivers/usb/serial/ftdi_sio.c
··· 1093 { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 1) }, 1094 { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 2) }, 1095 { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 3) }, 1096 { } /* Terminating entry */ 1097 }; 1098
··· 1093 { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 1) }, 1094 { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 2) }, 1095 { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 3) }, 1096 + /* Abacus Electrics */ 1097 + { USB_DEVICE(FTDI_VID, ABACUS_OPTICAL_PROBE_PID) }, 1098 { } /* Terminating entry */ 1099 }; 1100
+5
drivers/usb/serial/ftdi_sio_ids.h
··· 443 #define LINX_FUTURE_2_PID 0xF44C /* Linx future device */ 444 445 /* 446 * Oceanic product ids 447 */ 448 #define FTDI_OCEANIC_PID 0xF460 /* Oceanic dive instrument */
··· 443 #define LINX_FUTURE_2_PID 0xF44C /* Linx future device */ 444 445 /* 446 + * Abacus Electrics 447 + */ 448 + #define ABACUS_OPTICAL_PROBE_PID 0xf458 /* ABACUS ELECTRICS Optical Probe */ 449 + 450 + /* 451 * Oceanic product ids 452 */ 453 #define FTDI_OCEANIC_PID 0xF460 /* Oceanic dive instrument */
+3
drivers/usb/serial/option.c
··· 611 /* Sierra Wireless products */ 612 #define SIERRA_VENDOR_ID 0x1199 613 #define SIERRA_PRODUCT_EM9191 0x90d3 614 615 /* UNISOC (Spreadtrum) products */ 616 #define UNISOC_VENDOR_ID 0x1782 ··· 2433 { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) }, 2434 { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x40) }, 2435 { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0, 0) }, 2436 { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) }, 2437 { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) }, 2438 { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0530, 0xff), /* TCL IK512 MBIM */
··· 611 /* Sierra Wireless products */ 612 #define SIERRA_VENDOR_ID 0x1199 613 #define SIERRA_PRODUCT_EM9191 0x90d3 614 + #define SIERRA_PRODUCT_EM9291 0x90e3 615 616 /* UNISOC (Spreadtrum) products */ 617 #define UNISOC_VENDOR_ID 0x1782 ··· 2432 { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) }, 2433 { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x40) }, 2434 { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0, 0) }, 2435 + { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9291, 0xff, 0xff, 0x30) }, 2436 + { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9291, 0xff, 0xff, 0x40) }, 2437 { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) }, 2438 { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) }, 2439 { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0530, 0xff), /* TCL IK512 MBIM */
+7
drivers/usb/serial/usb-serial-simple.c
··· 100 { USB_DEVICE(0x09d7, 0x0100) } /* NovAtel FlexPack GPS */ 101 DEVICE_N(novatel_gps, NOVATEL_IDS, 3); 102 103 /* Siemens USB/MPI adapter */ 104 #define SIEMENS_IDS() \ 105 { USB_DEVICE(0x908, 0x0004) } ··· 139 &motorola_tetra_device, 140 &nokia_device, 141 &novatel_gps_device, 142 &siemens_mpi_device, 143 &suunto_device, 144 &vivopay_device, ··· 159 MOTOROLA_TETRA_IDS(), 160 NOKIA_IDS(), 161 NOVATEL_IDS(), 162 SIEMENS_IDS(), 163 SUUNTO_IDS(), 164 VIVOPAY_IDS(),
··· 100 { USB_DEVICE(0x09d7, 0x0100) } /* NovAtel FlexPack GPS */ 101 DEVICE_N(novatel_gps, NOVATEL_IDS, 3); 102 103 + /* OWON electronic test and measurement equipment driver */ 104 + #define OWON_IDS() \ 105 + { USB_DEVICE(0x5345, 0x1234) } /* HDS200 oscilloscopes and others */ 106 + DEVICE(owon, OWON_IDS); 107 + 108 /* Siemens USB/MPI adapter */ 109 #define SIEMENS_IDS() \ 110 { USB_DEVICE(0x908, 0x0004) } ··· 134 &motorola_tetra_device, 135 &nokia_device, 136 &novatel_gps_device, 137 + &owon_device, 138 &siemens_mpi_device, 139 &suunto_device, 140 &vivopay_device, ··· 153 MOTOROLA_TETRA_IDS(), 154 NOKIA_IDS(), 155 NOVATEL_IDS(), 156 + OWON_IDS(), 157 SIEMENS_IDS(), 158 SUUNTO_IDS(), 159 VIVOPAY_IDS(),
+7
drivers/usb/storage/unusual_uas.h
··· 83 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 84 US_FL_NO_REPORT_LUNS), 85 86 /* Reported-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> */ 87 UNUSUAL_DEV(0x13fd, 0x3940, 0x0000, 0x9999, 88 "Initio Corporation",
··· 83 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 84 US_FL_NO_REPORT_LUNS), 85 86 + /* Reported-by: Oliver Neukum <oneukum@suse.com> */ 87 + UNUSUAL_DEV(0x125f, 0xa94a, 0x0160, 0x0160, 88 + "ADATA", 89 + "Portable HDD CH94", 90 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 91 + US_FL_NO_ATA_1X), 92 + 93 /* Reported-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> */ 94 UNUSUAL_DEV(0x13fd, 0x3940, 0x0000, 0x9999, 95 "Initio Corporation",
+20 -4
drivers/usb/typec/class.c
··· 1052 partner->usb_mode = USB_MODE_USB3; 1053 } 1054 1055 ret = device_register(&partner->dev); 1056 if (ret) { 1057 dev_err(&port->dev, "failed to register partner (%d)\n", ret); 1058 put_device(&partner->dev); 1059 return ERR_PTR(ret); 1060 } ··· 1065 typec_partner_link_device(partner, port->usb2_dev); 1066 if (port->usb3_dev) 1067 typec_partner_link_device(partner, port->usb3_dev); 1068 1069 return partner; 1070 } ··· 1086 1087 port = to_typec_port(partner->dev.parent); 1088 1089 - if (port->usb2_dev) 1090 typec_partner_unlink_device(partner, port->usb2_dev); 1091 - if (port->usb3_dev) 1092 typec_partner_unlink_device(partner, port->usb3_dev); 1093 1094 device_unregister(&partner->dev); 1095 } 1096 EXPORT_SYMBOL_GPL(typec_unregister_partner); 1097 ··· 2050 static void typec_partner_attach(struct typec_connector *con, struct device *dev) 2051 { 2052 struct typec_port *port = container_of(con, struct typec_port, con); 2053 - struct typec_partner *partner = typec_get_partner(port); 2054 struct usb_device *udev = to_usb_device(dev); 2055 enum usb_mode usb_mode; 2056 2057 if (udev->speed < USB_SPEED_SUPER) { 2058 usb_mode = USB_MODE_USB2; 2059 port->usb2_dev = dev; ··· 2063 port->usb3_dev = dev; 2064 } 2065 2066 if (partner) { 2067 typec_partner_set_usb_mode(partner, usb_mode); 2068 typec_partner_link_device(partner, dev); 2069 put_device(&partner->dev); 2070 } 2071 } 2072 2073 static void typec_partner_deattach(struct typec_connector *con, struct device *dev) 2074 { 2075 struct typec_port *port = container_of(con, struct typec_port, con); 2076 - struct typec_partner *partner = typec_get_partner(port); 2077 2078 if (partner) { 2079 typec_partner_unlink_device(partner, dev); 2080 put_device(&partner->dev); ··· 2088 port->usb2_dev = NULL; 2089 else if (port->usb3_dev == dev) 2090 port->usb3_dev = NULL; 2091 } 2092 2093 /** ··· 2629 2630 ida_init(&port->mode_ids); 2631 mutex_init(&port->port_type_lock); 2632 2633 port->id = id; 2634 port->ops = cap->ops;
··· 1052 partner->usb_mode = USB_MODE_USB3; 1053 } 1054 1055 + mutex_lock(&port->partner_link_lock); 1056 ret = device_register(&partner->dev); 1057 if (ret) { 1058 dev_err(&port->dev, "failed to register partner (%d)\n", ret); 1059 + mutex_unlock(&port->partner_link_lock); 1060 put_device(&partner->dev); 1061 return ERR_PTR(ret); 1062 } ··· 1063 typec_partner_link_device(partner, port->usb2_dev); 1064 if (port->usb3_dev) 1065 typec_partner_link_device(partner, port->usb3_dev); 1066 + mutex_unlock(&port->partner_link_lock); 1067 1068 return partner; 1069 } ··· 1083 1084 port = to_typec_port(partner->dev.parent); 1085 1086 + mutex_lock(&port->partner_link_lock); 1087 + if (port->usb2_dev) { 1088 typec_partner_unlink_device(partner, port->usb2_dev); 1089 + port->usb2_dev = NULL; 1090 + } 1091 + if (port->usb3_dev) { 1092 typec_partner_unlink_device(partner, port->usb3_dev); 1093 + port->usb3_dev = NULL; 1094 + } 1095 1096 device_unregister(&partner->dev); 1097 + mutex_unlock(&port->partner_link_lock); 1098 } 1099 EXPORT_SYMBOL_GPL(typec_unregister_partner); 1100 ··· 2041 static void typec_partner_attach(struct typec_connector *con, struct device *dev) 2042 { 2043 struct typec_port *port = container_of(con, struct typec_port, con); 2044 + struct typec_partner *partner; 2045 struct usb_device *udev = to_usb_device(dev); 2046 enum usb_mode usb_mode; 2047 2048 + mutex_lock(&port->partner_link_lock); 2049 if (udev->speed < USB_SPEED_SUPER) { 2050 usb_mode = USB_MODE_USB2; 2051 port->usb2_dev = dev; ··· 2053 port->usb3_dev = dev; 2054 } 2055 2056 + partner = typec_get_partner(port); 2057 if (partner) { 2058 typec_partner_set_usb_mode(partner, usb_mode); 2059 typec_partner_link_device(partner, dev); 2060 put_device(&partner->dev); 2061 } 2062 + mutex_unlock(&port->partner_link_lock); 2063 } 2064 2065 static void typec_partner_deattach(struct typec_connector *con, struct device *dev) 2066 { 2067 struct typec_port *port = container_of(con, struct typec_port, con); 2068 + struct typec_partner *partner; 2069 2070 + mutex_lock(&port->partner_link_lock); 2071 + partner = typec_get_partner(port); 2072 if (partner) { 2073 typec_partner_unlink_device(partner, dev); 2074 put_device(&partner->dev); ··· 2074 port->usb2_dev = NULL; 2075 else if (port->usb3_dev == dev) 2076 port->usb3_dev = NULL; 2077 + mutex_unlock(&port->partner_link_lock); 2078 } 2079 2080 /** ··· 2614 2615 ida_init(&port->mode_ids); 2616 mutex_init(&port->port_type_lock); 2617 + mutex_init(&port->partner_link_lock); 2618 2619 port->id = id; 2620 port->ops = cap->ops;
+1
drivers/usb/typec/class.h
··· 59 enum typec_port_type port_type; 60 enum usb_mode usb_mode; 61 struct mutex port_type_lock; 62 63 enum typec_orientation orientation; 64 struct typec_switch *sw;
··· 59 enum typec_port_type port_type; 60 enum usb_mode usb_mode; 61 struct mutex port_type_lock; 62 + struct mutex partner_link_lock; 63 64 enum typec_orientation orientation; 65 struct typec_switch *sw;
+56 -18
drivers/vhost/scsi.c
··· 627 int ret; 628 629 llnode = llist_del_all(&svq->completion_list); 630 llist_for_each_entry_safe(cmd, t, llnode, tvc_completion_list) { 631 se_cmd = &cmd->tvc_se_cmd; 632 ··· 662 663 vhost_scsi_release_cmd_res(se_cmd); 664 } 665 666 if (signal) 667 vhost_signal(&svq->vs->dev, &svq->vq); ··· 999 1000 static void 1001 vhost_scsi_send_status(struct vhost_scsi *vs, struct vhost_virtqueue *vq, 1002 - int head, unsigned int out, u8 status) 1003 { 1004 - struct virtio_scsi_cmd_resp __user *resp; 1005 struct virtio_scsi_cmd_resp rsp; 1006 int ret; 1007 1008 memset(&rsp, 0, sizeof(rsp)); 1009 rsp.status = status; 1010 - resp = vq->iov[out].iov_base; 1011 - ret = __copy_to_user(resp, &rsp, sizeof(rsp)); 1012 - if (!ret) 1013 - vhost_add_used_and_signal(&vs->dev, vq, head, 0); 1014 else 1015 pr_err("Faulted on virtio_scsi_cmd_resp\n"); 1016 } 1017 1018 static void 1019 vhost_scsi_send_bad_target(struct vhost_scsi *vs, 1020 struct vhost_virtqueue *vq, 1021 - int head, unsigned out) 1022 { 1023 - struct virtio_scsi_cmd_resp __user *resp; 1024 - struct virtio_scsi_cmd_resp rsp; 1025 int ret; 1026 1027 memset(&rsp, 0, sizeof(rsp)); 1028 - rsp.response = VIRTIO_SCSI_S_BAD_TARGET; 1029 - resp = vq->iov[out].iov_base; 1030 - ret = __copy_to_user(resp, &rsp, sizeof(rsp)); 1031 - if (!ret) 1032 - vhost_add_used_and_signal(&vs->dev, vq, head, 0); 1033 else 1034 - pr_err("Faulted on virtio_scsi_cmd_resp\n"); 1035 } 1036 1037 static int ··· 1422 if (ret == -ENXIO) 1423 break; 1424 else if (ret == -EIO) 1425 - vhost_scsi_send_bad_target(vs, vq, vc.head, vc.out); 1426 else if (ret == -ENOMEM) 1427 - vhost_scsi_send_status(vs, vq, vc.head, vc.out, 1428 SAM_STAT_TASK_SET_FULL); 1429 } while (likely(!vhost_exceeds_weight(vq, ++c, 0))); 1430 out: ··· 1464 else 1465 resp_code = VIRTIO_SCSI_S_FUNCTION_REJECTED; 1466 1467 vhost_scsi_send_tmf_resp(tmf->vhost, &tmf->svq->vq, tmf->in_iovs, 1468 tmf->vq_desc, &tmf->resp_iov, resp_code); 1469 vhost_scsi_release_tmf_res(tmf); 1470 } 1471 ··· 1658 if (ret == -ENXIO) 1659 break; 1660 else if (ret == -EIO) 1661 - vhost_scsi_send_bad_target(vs, vq, vc.head, vc.out); 1662 } while (likely(!vhost_exceeds_weight(vq, ++c, 0))); 1663 out: 1664 mutex_unlock(&vq->mutex);
··· 627 int ret; 628 629 llnode = llist_del_all(&svq->completion_list); 630 + 631 + mutex_lock(&svq->vq.mutex); 632 + 633 llist_for_each_entry_safe(cmd, t, llnode, tvc_completion_list) { 634 se_cmd = &cmd->tvc_se_cmd; 635 ··· 659 660 vhost_scsi_release_cmd_res(se_cmd); 661 } 662 + 663 + mutex_unlock(&svq->vq.mutex); 664 665 if (signal) 666 vhost_signal(&svq->vs->dev, &svq->vq); ··· 994 995 static void 996 vhost_scsi_send_status(struct vhost_scsi *vs, struct vhost_virtqueue *vq, 997 + struct vhost_scsi_ctx *vc, u8 status) 998 { 999 struct virtio_scsi_cmd_resp rsp; 1000 + struct iov_iter iov_iter; 1001 int ret; 1002 1003 memset(&rsp, 0, sizeof(rsp)); 1004 rsp.status = status; 1005 + 1006 + iov_iter_init(&iov_iter, ITER_DEST, &vq->iov[vc->out], vc->in, 1007 + sizeof(rsp)); 1008 + 1009 + ret = copy_to_iter(&rsp, sizeof(rsp), &iov_iter); 1010 + 1011 + if (likely(ret == sizeof(rsp))) 1012 + vhost_add_used_and_signal(&vs->dev, vq, vc->head, 0); 1013 else 1014 pr_err("Faulted on virtio_scsi_cmd_resp\n"); 1015 } 1016 1017 + #define TYPE_IO_CMD 0 1018 + #define TYPE_CTRL_TMF 1 1019 + #define TYPE_CTRL_AN 2 1020 + 1021 static void 1022 vhost_scsi_send_bad_target(struct vhost_scsi *vs, 1023 struct vhost_virtqueue *vq, 1024 + struct vhost_scsi_ctx *vc, int type) 1025 { 1026 + union { 1027 + struct virtio_scsi_cmd_resp cmd; 1028 + struct virtio_scsi_ctrl_tmf_resp tmf; 1029 + struct virtio_scsi_ctrl_an_resp an; 1030 + } rsp; 1031 + struct iov_iter iov_iter; 1032 + size_t rsp_size; 1033 int ret; 1034 1035 memset(&rsp, 0, sizeof(rsp)); 1036 + 1037 + if (type == TYPE_IO_CMD) { 1038 + rsp_size = sizeof(struct virtio_scsi_cmd_resp); 1039 + rsp.cmd.response = VIRTIO_SCSI_S_BAD_TARGET; 1040 + } else if (type == TYPE_CTRL_TMF) { 1041 + rsp_size = sizeof(struct virtio_scsi_ctrl_tmf_resp); 1042 + rsp.tmf.response = VIRTIO_SCSI_S_BAD_TARGET; 1043 + } else { 1044 + rsp_size = sizeof(struct virtio_scsi_ctrl_an_resp); 1045 + rsp.an.response = VIRTIO_SCSI_S_BAD_TARGET; 1046 + } 1047 + 1048 + iov_iter_init(&iov_iter, ITER_DEST, &vq->iov[vc->out], vc->in, 1049 + rsp_size); 1050 + 1051 + ret = copy_to_iter(&rsp, rsp_size, &iov_iter); 1052 + 1053 + if (likely(ret == rsp_size)) 1054 + vhost_add_used_and_signal(&vs->dev, vq, vc->head, 0); 1055 else 1056 + pr_err("Faulted on virtio scsi type=%d\n", type); 1057 } 1058 1059 static int ··· 1390 if (ret == -ENXIO) 1391 break; 1392 else if (ret == -EIO) 1393 + vhost_scsi_send_bad_target(vs, vq, &vc, TYPE_IO_CMD); 1394 else if (ret == -ENOMEM) 1395 + vhost_scsi_send_status(vs, vq, &vc, 1396 SAM_STAT_TASK_SET_FULL); 1397 } while (likely(!vhost_exceeds_weight(vq, ++c, 0))); 1398 out: ··· 1432 else 1433 resp_code = VIRTIO_SCSI_S_FUNCTION_REJECTED; 1434 1435 + mutex_lock(&tmf->svq->vq.mutex); 1436 vhost_scsi_send_tmf_resp(tmf->vhost, &tmf->svq->vq, tmf->in_iovs, 1437 tmf->vq_desc, &tmf->resp_iov, resp_code); 1438 + mutex_unlock(&tmf->svq->vq.mutex); 1439 + 1440 vhost_scsi_release_tmf_res(tmf); 1441 } 1442 ··· 1623 if (ret == -ENXIO) 1624 break; 1625 else if (ret == -EIO) 1626 + vhost_scsi_send_bad_target(vs, vq, &vc, 1627 + v_req.type == VIRTIO_SCSI_T_TMF ? 1628 + TYPE_CTRL_TMF : 1629 + TYPE_CTRL_AN); 1630 } while (likely(!vhost_exceeds_weight(vq, ++c, 0))); 1631 out: 1632 mutex_unlock(&vq->mutex);
+6
drivers/virtio/virtio.c
··· 407 if (!drv) 408 return; 409 410 /* 411 * Some devices get wedged if you kick them after they are 412 * reset. Mark all vqs as broken to make sure we don't.
··· 407 if (!drv) 408 return; 409 410 + /* If the driver has its own shutdown method, use that. */ 411 + if (drv->shutdown) { 412 + drv->shutdown(dev); 413 + return; 414 + } 415 + 416 /* 417 * Some devices get wedged if you kick them after they are 418 * reset. Mark all vqs as broken to make sure we don't.
+2 -2
drivers/virtio/virtio_pci_modern.c
··· 247 sg_init_one(&data_sg, get_data, sizeof(*get_data)); 248 sg_init_one(&result_sg, result, sizeof(*result)); 249 cmd.opcode = cpu_to_le16(VIRTIO_ADMIN_CMD_DEVICE_CAP_GET); 250 - cmd.group_type = cpu_to_le16(VIRTIO_ADMIN_GROUP_TYPE_SRIOV); 251 cmd.data_sg = &data_sg; 252 cmd.result_sg = &result_sg; 253 ret = vp_modern_admin_cmd_exec(virtio_dev, &cmd); ··· 305 306 sg_init_one(&result_sg, data, sizeof(*data)); 307 cmd.opcode = cpu_to_le16(VIRTIO_ADMIN_CMD_CAP_ID_LIST_QUERY); 308 - cmd.group_type = cpu_to_le16(VIRTIO_ADMIN_GROUP_TYPE_SRIOV); 309 cmd.result_sg = &result_sg; 310 311 ret = vp_modern_admin_cmd_exec(virtio_dev, &cmd);
··· 247 sg_init_one(&data_sg, get_data, sizeof(*get_data)); 248 sg_init_one(&result_sg, result, sizeof(*result)); 249 cmd.opcode = cpu_to_le16(VIRTIO_ADMIN_CMD_DEVICE_CAP_GET); 250 + cmd.group_type = cpu_to_le16(VIRTIO_ADMIN_GROUP_TYPE_SELF); 251 cmd.data_sg = &data_sg; 252 cmd.result_sg = &result_sg; 253 ret = vp_modern_admin_cmd_exec(virtio_dev, &cmd); ··· 305 306 sg_init_one(&result_sg, data, sizeof(*data)); 307 cmd.opcode = cpu_to_le16(VIRTIO_ADMIN_CMD_CAP_ID_LIST_QUERY); 308 + cmd.group_type = cpu_to_le16(VIRTIO_ADMIN_GROUP_TYPE_SELF); 309 cmd.result_sg = &result_sg; 310 311 ret = vp_modern_admin_cmd_exec(virtio_dev, &cmd);
+1 -1
drivers/virtio/virtio_ring.c
··· 2650 struct vring_virtqueue *vq = to_vvq(_vq); 2651 2652 if (vq->event_triggered) 2653 - vq->event_triggered = false; 2654 2655 return vq->packed_ring ? virtqueue_enable_cb_delayed_packed(_vq) : 2656 virtqueue_enable_cb_delayed_split(_vq);
··· 2650 struct vring_virtqueue *vq = to_vvq(_vq); 2651 2652 if (vq->event_triggered) 2653 + data_race(vq->event_triggered = false); 2654 2655 return vq->packed_ring ? virtqueue_enable_cb_delayed_packed(_vq) : 2656 virtqueue_enable_cb_delayed_split(_vq);
+2
fs/bcachefs/alloc_foreground.c
··· 1425 open_bucket_for_each(c, &wp->ptrs, ob, i) 1426 wp->sectors_free = min(wp->sectors_free, ob->sectors_free); 1427 1428 BUG_ON(!wp->sectors_free || wp->sectors_free == UINT_MAX); 1429 1430 return 0;
··· 1425 open_bucket_for_each(c, &wp->ptrs, ob, i) 1426 wp->sectors_free = min(wp->sectors_free, ob->sectors_free); 1427 1428 + wp->sectors_free = rounddown(wp->sectors_free, block_sectors(c)); 1429 + 1430 BUG_ON(!wp->sectors_free || wp->sectors_free == UINT_MAX); 1431 1432 return 0;
+3 -1
fs/bcachefs/alloc_foreground.h
··· 110 unsigned i; 111 112 open_bucket_for_each(c, &wp->ptrs, ob, i) 113 - ob_push(c, !ob->sectors_free ? &ptrs : &keep, ob); 114 wp->ptrs = keep; 115 116 mutex_unlock(&wp->lock);
··· 110 unsigned i; 111 112 open_bucket_for_each(c, &wp->ptrs, ob, i) 113 + ob_push(c, ob->sectors_free < block_sectors(c) 114 + ? &ptrs 115 + : &keep, ob); 116 wp->ptrs = keep; 117 118 mutex_unlock(&wp->lock);
+43 -38
fs/bcachefs/bcachefs_format.h
··· 366 #define __BKEY_PADDED(key, pad) \ 367 struct bkey_i key; __u64 key ## _pad[pad] 368 369 /* 370 * - DELETED keys are used internally to mark keys that should be ignored but 371 * override keys in composition order. Their version number is ignored. ··· 387 * 388 * - WHITEOUT: for hash table btrees 389 */ 390 - #define BCH_BKEY_TYPES() \ 391 - x(deleted, 0) \ 392 - x(whiteout, 1) \ 393 - x(error, 2) \ 394 - x(cookie, 3) \ 395 - x(hash_whiteout, 4) \ 396 - x(btree_ptr, 5) \ 397 - x(extent, 6) \ 398 - x(reservation, 7) \ 399 - x(inode, 8) \ 400 - x(inode_generation, 9) \ 401 - x(dirent, 10) \ 402 - x(xattr, 11) \ 403 - x(alloc, 12) \ 404 - x(quota, 13) \ 405 - x(stripe, 14) \ 406 - x(reflink_p, 15) \ 407 - x(reflink_v, 16) \ 408 - x(inline_data, 17) \ 409 - x(btree_ptr_v2, 18) \ 410 - x(indirect_inline_data, 19) \ 411 - x(alloc_v2, 20) \ 412 - x(subvolume, 21) \ 413 - x(snapshot, 22) \ 414 - x(inode_v2, 23) \ 415 - x(alloc_v3, 24) \ 416 - x(set, 25) \ 417 - x(lru, 26) \ 418 - x(alloc_v4, 27) \ 419 - x(backpointer, 28) \ 420 - x(inode_v3, 29) \ 421 - x(bucket_gens, 30) \ 422 - x(snapshot_tree, 31) \ 423 - x(logged_op_truncate, 32) \ 424 - x(logged_op_finsert, 33) \ 425 - x(accounting, 34) \ 426 - x(inode_alloc_cursor, 35) 427 428 enum bch_bkey_type { 429 - #define x(name, nr) KEY_TYPE_##name = nr, 430 BCH_BKEY_TYPES() 431 #undef x 432 KEY_TYPE_MAX, ··· 867 LE64_BITMASK(BCH_SB_SHARD_INUMS_NBITS, struct bch_sb, flags[6], 0, 4); 868 LE64_BITMASK(BCH_SB_WRITE_ERROR_TIMEOUT,struct bch_sb, flags[6], 4, 14); 869 LE64_BITMASK(BCH_SB_CSUM_ERR_RETRY_NR, struct bch_sb, flags[6], 14, 20); 870 871 static inline __u64 BCH_SB_COMPRESSION_TYPE(const struct bch_sb *sb) 872 {
··· 366 #define __BKEY_PADDED(key, pad) \ 367 struct bkey_i key; __u64 key ## _pad[pad] 368 369 + enum bch_bkey_type_flags { 370 + BKEY_TYPE_strict_btree_checks = BIT(0), 371 + }; 372 + 373 /* 374 * - DELETED keys are used internally to mark keys that should be ignored but 375 * override keys in composition order. Their version number is ignored. ··· 383 * 384 * - WHITEOUT: for hash table btrees 385 */ 386 + #define BCH_BKEY_TYPES() \ 387 + x(deleted, 0, 0) \ 388 + x(whiteout, 1, 0) \ 389 + x(error, 2, 0) \ 390 + x(cookie, 3, 0) \ 391 + x(hash_whiteout, 4, BKEY_TYPE_strict_btree_checks) \ 392 + x(btree_ptr, 5, BKEY_TYPE_strict_btree_checks) \ 393 + x(extent, 6, BKEY_TYPE_strict_btree_checks) \ 394 + x(reservation, 7, BKEY_TYPE_strict_btree_checks) \ 395 + x(inode, 8, BKEY_TYPE_strict_btree_checks) \ 396 + x(inode_generation, 9, BKEY_TYPE_strict_btree_checks) \ 397 + x(dirent, 10, BKEY_TYPE_strict_btree_checks) \ 398 + x(xattr, 11, BKEY_TYPE_strict_btree_checks) \ 399 + x(alloc, 12, BKEY_TYPE_strict_btree_checks) \ 400 + x(quota, 13, BKEY_TYPE_strict_btree_checks) \ 401 + x(stripe, 14, BKEY_TYPE_strict_btree_checks) \ 402 + x(reflink_p, 15, BKEY_TYPE_strict_btree_checks) \ 403 + x(reflink_v, 16, BKEY_TYPE_strict_btree_checks) \ 404 + x(inline_data, 17, BKEY_TYPE_strict_btree_checks) \ 405 + x(btree_ptr_v2, 18, BKEY_TYPE_strict_btree_checks) \ 406 + x(indirect_inline_data, 19, BKEY_TYPE_strict_btree_checks) \ 407 + x(alloc_v2, 20, BKEY_TYPE_strict_btree_checks) \ 408 + x(subvolume, 21, BKEY_TYPE_strict_btree_checks) \ 409 + x(snapshot, 22, BKEY_TYPE_strict_btree_checks) \ 410 + x(inode_v2, 23, BKEY_TYPE_strict_btree_checks) \ 411 + x(alloc_v3, 24, BKEY_TYPE_strict_btree_checks) \ 412 + x(set, 25, 0) \ 413 + x(lru, 26, BKEY_TYPE_strict_btree_checks) \ 414 + x(alloc_v4, 27, BKEY_TYPE_strict_btree_checks) \ 415 + x(backpointer, 28, BKEY_TYPE_strict_btree_checks) \ 416 + x(inode_v3, 29, BKEY_TYPE_strict_btree_checks) \ 417 + x(bucket_gens, 30, BKEY_TYPE_strict_btree_checks) \ 418 + x(snapshot_tree, 31, BKEY_TYPE_strict_btree_checks) \ 419 + x(logged_op_truncate, 32, BKEY_TYPE_strict_btree_checks) \ 420 + x(logged_op_finsert, 33, BKEY_TYPE_strict_btree_checks) \ 421 + x(accounting, 34, BKEY_TYPE_strict_btree_checks) \ 422 + x(inode_alloc_cursor, 35, BKEY_TYPE_strict_btree_checks) 423 424 enum bch_bkey_type { 425 + #define x(name, nr, ...) KEY_TYPE_##name = nr, 426 BCH_BKEY_TYPES() 427 #undef x 428 KEY_TYPE_MAX, ··· 863 LE64_BITMASK(BCH_SB_SHARD_INUMS_NBITS, struct bch_sb, flags[6], 0, 4); 864 LE64_BITMASK(BCH_SB_WRITE_ERROR_TIMEOUT,struct bch_sb, flags[6], 4, 14); 865 LE64_BITMASK(BCH_SB_CSUM_ERR_RETRY_NR, struct bch_sb, flags[6], 14, 20); 866 + LE64_BITMASK(BCH_SB_CASEFOLD, struct bch_sb, flags[6], 22, 23); 867 868 static inline __u64 BCH_SB_COMPRESSION_TYPE(const struct bch_sb *sb) 869 {
+20 -4
fs/bcachefs/bkey_methods.c
··· 21 #include "xattr.h" 22 23 const char * const bch2_bkey_types[] = { 24 - #define x(name, nr) #name, 25 BCH_BKEY_TYPES() 26 #undef x 27 NULL ··· 115 }) 116 117 const struct bkey_ops bch2_bkey_ops[] = { 118 - #define x(name, nr) [KEY_TYPE_##name] = bch2_bkey_ops_##name, 119 BCH_BKEY_TYPES() 120 #undef x 121 }; ··· 155 #undef x 156 }; 157 158 const char *bch2_btree_node_type_str(enum btree_node_type type) 159 { 160 return type == BKEY_TYPE_btree ? "internal btree node" : bch2_btree_id_str(type - 1); ··· 183 if (type >= BKEY_TYPE_NR) 184 return 0; 185 186 - bkey_fsck_err_on(k.k->type < KEY_TYPE_MAX && 187 - (type == BKEY_TYPE_btree || (from.flags & BCH_VALIDATE_commit)) && 188 !(bch2_key_types_allowed[type] & BIT_ULL(k.k->type)), 189 c, bkey_invalid_type_for_btree, 190 "invalid key type for btree %s (%s)",
··· 21 #include "xattr.h" 22 23 const char * const bch2_bkey_types[] = { 24 + #define x(name, nr, ...) #name, 25 BCH_BKEY_TYPES() 26 #undef x 27 NULL ··· 115 }) 116 117 const struct bkey_ops bch2_bkey_ops[] = { 118 + #define x(name, nr, ...) [KEY_TYPE_##name] = bch2_bkey_ops_##name, 119 BCH_BKEY_TYPES() 120 #undef x 121 }; ··· 155 #undef x 156 }; 157 158 + static const enum bch_bkey_type_flags bch2_bkey_type_flags[] = { 159 + #define x(name, nr, flags) [KEY_TYPE_##name] = flags, 160 + BCH_BKEY_TYPES() 161 + #undef x 162 + }; 163 + 164 const char *bch2_btree_node_type_str(enum btree_node_type type) 165 { 166 return type == BKEY_TYPE_btree ? "internal btree node" : bch2_btree_id_str(type - 1); ··· 177 if (type >= BKEY_TYPE_NR) 178 return 0; 179 180 + enum bch_bkey_type_flags bkey_flags = k.k->type < KEY_TYPE_MAX 181 + ? bch2_bkey_type_flags[k.k->type] 182 + : 0; 183 + 184 + bool strict_key_type_allowed = 185 + (from.flags & BCH_VALIDATE_commit) || 186 + type == BKEY_TYPE_btree || 187 + (from.btree < BTREE_ID_NR && 188 + (bkey_flags & BKEY_TYPE_strict_btree_checks)); 189 + 190 + bkey_fsck_err_on(strict_key_type_allowed && 191 + k.k->type < KEY_TYPE_MAX && 192 !(bch2_key_types_allowed[type] & BIT_ULL(k.k->type)), 193 c, bkey_invalid_type_for_btree, 194 "invalid key type for btree %s (%s)",
+5 -2
fs/bcachefs/btree_iter.c
··· 2577 struct bpos end) 2578 { 2579 if ((iter->flags & (BTREE_ITER_is_extents|BTREE_ITER_filter_snapshots)) && 2580 - !bkey_eq(iter->pos, POS_MAX)) { 2581 /* 2582 * bkey_start_pos(), for extents, is not monotonically 2583 * increasing until after filtering for snapshots: ··· 2605 2606 bch2_trans_verify_not_unlocked_or_in_restart(trans); 2607 bch2_btree_iter_verify_entry_exit(iter); 2608 - EBUG_ON((iter->flags & BTREE_ITER_filter_snapshots) && bpos_eq(end, POS_MIN)); 2609 2610 int ret = trans_maybe_inject_restart(trans, _RET_IP_); 2611 if (unlikely(ret)) {
··· 2577 struct bpos end) 2578 { 2579 if ((iter->flags & (BTREE_ITER_is_extents|BTREE_ITER_filter_snapshots)) && 2580 + !bkey_eq(iter->pos, POS_MAX) && 2581 + !((iter->flags & BTREE_ITER_is_extents) && 2582 + iter->pos.offset == U64_MAX)) { 2583 + 2584 /* 2585 * bkey_start_pos(), for extents, is not monotonically 2586 * increasing until after filtering for snapshots: ··· 2602 2603 bch2_trans_verify_not_unlocked_or_in_restart(trans); 2604 bch2_btree_iter_verify_entry_exit(iter); 2605 + EBUG_ON((iter->flags & BTREE_ITER_filter_snapshots) && iter->pos.inode != end.inode); 2606 2607 int ret = trans_maybe_inject_restart(trans, _RET_IP_); 2608 if (unlikely(ret)) {
+2 -14
fs/bcachefs/dirent.c
··· 13 14 #include <linux/dcache.h> 15 16 - static int bch2_casefold(struct btree_trans *trans, const struct bch_hash_info *info, 17 - const struct qstr *str, struct qstr *out_cf) 18 { 19 *out_cf = (struct qstr) QSTR_INIT(NULL, 0); 20 ··· 33 #else 34 return -EOPNOTSUPP; 35 #endif 36 - } 37 - 38 - static inline int bch2_maybe_casefold(struct btree_trans *trans, 39 - const struct bch_hash_info *info, 40 - const struct qstr *str, struct qstr *out_cf) 41 - { 42 - if (likely(!info->cf_encoding)) { 43 - *out_cf = *str; 44 - return 0; 45 - } else { 46 - return bch2_casefold(trans, info, str, out_cf); 47 - } 48 } 49 50 static unsigned bch2_dirent_name_bytes(struct bkey_s_c_dirent d)
··· 13 14 #include <linux/dcache.h> 15 16 + int bch2_casefold(struct btree_trans *trans, const struct bch_hash_info *info, 17 + const struct qstr *str, struct qstr *out_cf) 18 { 19 *out_cf = (struct qstr) QSTR_INIT(NULL, 0); 20 ··· 33 #else 34 return -EOPNOTSUPP; 35 #endif 36 } 37 38 static unsigned bch2_dirent_name_bytes(struct bkey_s_c_dirent d)
+15
fs/bcachefs/dirent.h
··· 23 struct bch_hash_info; 24 struct bch_inode_info; 25 26 struct qstr bch2_dirent_get_name(struct bkey_s_c_dirent d); 27 28 static inline unsigned dirent_val_u64s(unsigned len, unsigned cf_len)
··· 23 struct bch_hash_info; 24 struct bch_inode_info; 25 26 + int bch2_casefold(struct btree_trans *, const struct bch_hash_info *, 27 + const struct qstr *, struct qstr *); 28 + 29 + static inline int bch2_maybe_casefold(struct btree_trans *trans, 30 + const struct bch_hash_info *info, 31 + const struct qstr *str, struct qstr *out_cf) 32 + { 33 + if (likely(!info->cf_encoding)) { 34 + *out_cf = *str; 35 + return 0; 36 + } else { 37 + return bch2_casefold(trans, info, str, out_cf); 38 + } 39 + } 40 + 41 struct qstr bch2_dirent_get_name(struct bkey_s_c_dirent d); 42 43 static inline unsigned dirent_val_u64s(unsigned len, unsigned cf_len)
+12 -5
fs/bcachefs/error.c
··· 272 { 273 struct fsck_err_state *s; 274 275 - if (!test_bit(BCH_FS_fsck_running, &c->flags)) 276 - return NULL; 277 - 278 list_for_each_entry(s, &c->fsck_error_msgs, list) 279 if (s->id == id) { 280 /* ··· 636 return ret; 637 } 638 639 - void bch2_flush_fsck_errs(struct bch_fs *c) 640 { 641 struct fsck_err_state *s, *n; 642 643 mutex_lock(&c->fsck_error_msgs_lock); 644 645 list_for_each_entry_safe(s, n, &c->fsck_error_msgs, list) { 646 - if (s->ratelimited && s->last_msg) 647 bch_err(c, "Saw %llu errors like:\n %s", s->nr, s->last_msg); 648 649 list_del(&s->list); ··· 652 } 653 654 mutex_unlock(&c->fsck_error_msgs_lock); 655 } 656 657 int bch2_inum_offset_err_msg_trans(struct btree_trans *trans, struct printbuf *out,
··· 272 { 273 struct fsck_err_state *s; 274 275 list_for_each_entry(s, &c->fsck_error_msgs, list) 276 if (s->id == id) { 277 /* ··· 639 return ret; 640 } 641 642 + static void __bch2_flush_fsck_errs(struct bch_fs *c, bool print) 643 { 644 struct fsck_err_state *s, *n; 645 646 mutex_lock(&c->fsck_error_msgs_lock); 647 648 list_for_each_entry_safe(s, n, &c->fsck_error_msgs, list) { 649 + if (print && s->ratelimited && s->last_msg) 650 bch_err(c, "Saw %llu errors like:\n %s", s->nr, s->last_msg); 651 652 list_del(&s->list); ··· 655 } 656 657 mutex_unlock(&c->fsck_error_msgs_lock); 658 + } 659 + 660 + void bch2_flush_fsck_errs(struct bch_fs *c) 661 + { 662 + __bch2_flush_fsck_errs(c, true); 663 + } 664 + 665 + void bch2_free_fsck_errs(struct bch_fs *c) 666 + { 667 + __bch2_flush_fsck_errs(c, false); 668 } 669 670 int bch2_inum_offset_err_msg_trans(struct btree_trans *trans, struct printbuf *out,
+1
fs/bcachefs/error.h
··· 93 _flags, BCH_FSCK_ERR_##_err_type, __VA_ARGS__) 94 95 void bch2_flush_fsck_errs(struct bch_fs *); 96 97 #define fsck_err_wrap(_do) \ 98 ({ \
··· 93 _flags, BCH_FSCK_ERR_##_err_type, __VA_ARGS__) 94 95 void bch2_flush_fsck_errs(struct bch_fs *); 96 + void bch2_free_fsck_errs(struct bch_fs *); 97 98 #define fsck_err_wrap(_do) \ 99 ({ \
-217
fs/bcachefs/fs-ioctl.c
··· 21 #define FSOP_GOING_FLAGS_LOGFLUSH 0x1 /* flush log but not data */ 22 #define FSOP_GOING_FLAGS_NOLOGFLUSH 0x2 /* don't flush log nor data */ 23 24 - struct flags_set { 25 - unsigned mask; 26 - unsigned flags; 27 - 28 - unsigned projid; 29 - 30 - bool set_projinherit; 31 - bool projinherit; 32 - }; 33 - 34 - static int bch2_inode_flags_set(struct btree_trans *trans, 35 - struct bch_inode_info *inode, 36 - struct bch_inode_unpacked *bi, 37 - void *p) 38 - { 39 - struct bch_fs *c = inode->v.i_sb->s_fs_info; 40 - /* 41 - * We're relying on btree locking here for exclusion with other ioctl 42 - * calls - use the flags in the btree (@bi), not inode->i_flags: 43 - */ 44 - struct flags_set *s = p; 45 - unsigned newflags = s->flags; 46 - unsigned oldflags = bi->bi_flags & s->mask; 47 - 48 - if (((newflags ^ oldflags) & (BCH_INODE_append|BCH_INODE_immutable)) && 49 - !capable(CAP_LINUX_IMMUTABLE)) 50 - return -EPERM; 51 - 52 - if (!S_ISREG(bi->bi_mode) && 53 - !S_ISDIR(bi->bi_mode) && 54 - (newflags & (BCH_INODE_nodump|BCH_INODE_noatime)) != newflags) 55 - return -EINVAL; 56 - 57 - if ((newflags ^ oldflags) & BCH_INODE_casefolded) { 58 - #ifdef CONFIG_UNICODE 59 - int ret = 0; 60 - /* Not supported on individual files. */ 61 - if (!S_ISDIR(bi->bi_mode)) 62 - return -EOPNOTSUPP; 63 - 64 - /* 65 - * Make sure the dir is empty, as otherwise we'd need to 66 - * rehash everything and update the dirent keys. 67 - */ 68 - ret = bch2_empty_dir_trans(trans, inode_inum(inode)); 69 - if (ret < 0) 70 - return ret; 71 - 72 - ret = bch2_request_incompat_feature(c, bcachefs_metadata_version_casefolding); 73 - if (ret) 74 - return ret; 75 - 76 - bch2_check_set_feature(c, BCH_FEATURE_casefolding); 77 - #else 78 - printk(KERN_ERR "Cannot use casefolding on a kernel without CONFIG_UNICODE\n"); 79 - return -EOPNOTSUPP; 80 - #endif 81 - } 82 - 83 - if (s->set_projinherit) { 84 - bi->bi_fields_set &= ~(1 << Inode_opt_project); 85 - bi->bi_fields_set |= ((int) s->projinherit << Inode_opt_project); 86 - } 87 - 88 - bi->bi_flags &= ~s->mask; 89 - bi->bi_flags |= newflags; 90 - 91 - bi->bi_ctime = timespec_to_bch2_time(c, current_time(&inode->v)); 92 - return 0; 93 - } 94 - 95 - static int bch2_ioc_getflags(struct bch_inode_info *inode, int __user *arg) 96 - { 97 - unsigned flags = map_flags(bch_flags_to_uflags, inode->ei_inode.bi_flags); 98 - 99 - return put_user(flags, arg); 100 - } 101 - 102 - static int bch2_ioc_setflags(struct bch_fs *c, 103 - struct file *file, 104 - struct bch_inode_info *inode, 105 - void __user *arg) 106 - { 107 - struct flags_set s = { .mask = map_defined(bch_flags_to_uflags) }; 108 - unsigned uflags; 109 - int ret; 110 - 111 - if (get_user(uflags, (int __user *) arg)) 112 - return -EFAULT; 113 - 114 - s.flags = map_flags_rev(bch_flags_to_uflags, uflags); 115 - if (uflags) 116 - return -EOPNOTSUPP; 117 - 118 - ret = mnt_want_write_file(file); 119 - if (ret) 120 - return ret; 121 - 122 - inode_lock(&inode->v); 123 - if (!inode_owner_or_capable(file_mnt_idmap(file), &inode->v)) { 124 - ret = -EACCES; 125 - goto setflags_out; 126 - } 127 - 128 - mutex_lock(&inode->ei_update_lock); 129 - ret = bch2_subvol_is_ro(c, inode->ei_inum.subvol) ?: 130 - bch2_write_inode(c, inode, bch2_inode_flags_set, &s, 131 - ATTR_CTIME); 132 - mutex_unlock(&inode->ei_update_lock); 133 - 134 - setflags_out: 135 - inode_unlock(&inode->v); 136 - mnt_drop_write_file(file); 137 - return ret; 138 - } 139 - 140 - static int bch2_ioc_fsgetxattr(struct bch_inode_info *inode, 141 - struct fsxattr __user *arg) 142 - { 143 - 
struct fsxattr fa = { 0 }; 144 - 145 - fa.fsx_xflags = map_flags(bch_flags_to_xflags, inode->ei_inode.bi_flags); 146 - 147 - if (inode->ei_inode.bi_fields_set & (1 << Inode_opt_project)) 148 - fa.fsx_xflags |= FS_XFLAG_PROJINHERIT; 149 - 150 - fa.fsx_projid = inode->ei_qid.q[QTYP_PRJ]; 151 - 152 - if (copy_to_user(arg, &fa, sizeof(fa))) 153 - return -EFAULT; 154 - 155 - return 0; 156 - } 157 - 158 - static int fssetxattr_inode_update_fn(struct btree_trans *trans, 159 - struct bch_inode_info *inode, 160 - struct bch_inode_unpacked *bi, 161 - void *p) 162 - { 163 - struct flags_set *s = p; 164 - 165 - if (s->projid != bi->bi_project) { 166 - bi->bi_fields_set |= 1U << Inode_opt_project; 167 - bi->bi_project = s->projid; 168 - } 169 - 170 - return bch2_inode_flags_set(trans, inode, bi, p); 171 - } 172 - 173 - static int bch2_ioc_fssetxattr(struct bch_fs *c, 174 - struct file *file, 175 - struct bch_inode_info *inode, 176 - struct fsxattr __user *arg) 177 - { 178 - struct flags_set s = { .mask = map_defined(bch_flags_to_xflags) }; 179 - struct fsxattr fa; 180 - int ret; 181 - 182 - if (copy_from_user(&fa, arg, sizeof(fa))) 183 - return -EFAULT; 184 - 185 - s.set_projinherit = true; 186 - s.projinherit = (fa.fsx_xflags & FS_XFLAG_PROJINHERIT) != 0; 187 - fa.fsx_xflags &= ~FS_XFLAG_PROJINHERIT; 188 - 189 - s.flags = map_flags_rev(bch_flags_to_xflags, fa.fsx_xflags); 190 - if (fa.fsx_xflags) 191 - return -EOPNOTSUPP; 192 - 193 - if (fa.fsx_projid >= U32_MAX) 194 - return -EINVAL; 195 - 196 - /* 197 - * inode fields accessible via the xattr interface are stored with a +1 198 - * bias, so that 0 means unset: 199 - */ 200 - s.projid = fa.fsx_projid + 1; 201 - 202 - ret = mnt_want_write_file(file); 203 - if (ret) 204 - return ret; 205 - 206 - inode_lock(&inode->v); 207 - if (!inode_owner_or_capable(file_mnt_idmap(file), &inode->v)) { 208 - ret = -EACCES; 209 - goto err; 210 - } 211 - 212 - mutex_lock(&inode->ei_update_lock); 213 - ret = bch2_subvol_is_ro(c, inode->ei_inum.subvol) ?: 214 - bch2_set_projid(c, inode, fa.fsx_projid) ?: 215 - bch2_write_inode(c, inode, fssetxattr_inode_update_fn, &s, 216 - ATTR_CTIME); 217 - mutex_unlock(&inode->ei_update_lock); 218 - err: 219 - inode_unlock(&inode->v); 220 - mnt_drop_write_file(file); 221 - return ret; 222 - } 223 - 224 static int bch2_reinherit_attrs_fn(struct btree_trans *trans, 225 struct bch_inode_info *inode, 226 struct bch_inode_unpacked *bi, ··· 358 long ret; 359 360 switch (cmd) { 361 - case FS_IOC_GETFLAGS: 362 - ret = bch2_ioc_getflags(inode, (int __user *) arg); 363 - break; 364 - 365 - case FS_IOC_SETFLAGS: 366 - ret = bch2_ioc_setflags(c, file, inode, (int __user *) arg); 367 - break; 368 - 369 - case FS_IOC_FSGETXATTR: 370 - ret = bch2_ioc_fsgetxattr(inode, (void __user *) arg); 371 - break; 372 - 373 - case FS_IOC_FSSETXATTR: 374 - ret = bch2_ioc_fssetxattr(c, file, inode, 375 - (void __user *) arg); 376 - break; 377 - 378 case BCHFS_IOC_REINHERIT_ATTRS: 379 ret = bch2_ioc_reinherit_attrs(c, file, inode, 380 (void __user *) arg);
··· 21 #define FSOP_GOING_FLAGS_LOGFLUSH 0x1 /* flush log but not data */ 22 #define FSOP_GOING_FLAGS_NOLOGFLUSH 0x2 /* don't flush log nor data */ 23 24 static int bch2_reinherit_attrs_fn(struct btree_trans *trans, 25 struct bch_inode_info *inode, 26 struct bch_inode_unpacked *bi, ··· 558 long ret; 559 560 switch (cmd) { 561 case BCHFS_IOC_REINHERIT_ATTRS: 562 ret = bch2_ioc_reinherit_attrs(c, file, inode, 563 (void __user *) arg);
-75
fs/bcachefs/fs-ioctl.h
··· 2 #ifndef _BCACHEFS_FS_IOCTL_H 3 #define _BCACHEFS_FS_IOCTL_H 4 5 - /* Inode flags: */ 6 - 7 - /* bcachefs inode flags -> vfs inode flags: */ 8 - static const __maybe_unused unsigned bch_flags_to_vfs[] = { 9 - [__BCH_INODE_sync] = S_SYNC, 10 - [__BCH_INODE_immutable] = S_IMMUTABLE, 11 - [__BCH_INODE_append] = S_APPEND, 12 - [__BCH_INODE_noatime] = S_NOATIME, 13 - [__BCH_INODE_casefolded] = S_CASEFOLD, 14 - }; 15 - 16 - /* bcachefs inode flags -> FS_IOC_GETFLAGS: */ 17 - static const __maybe_unused unsigned bch_flags_to_uflags[] = { 18 - [__BCH_INODE_sync] = FS_SYNC_FL, 19 - [__BCH_INODE_immutable] = FS_IMMUTABLE_FL, 20 - [__BCH_INODE_append] = FS_APPEND_FL, 21 - [__BCH_INODE_nodump] = FS_NODUMP_FL, 22 - [__BCH_INODE_noatime] = FS_NOATIME_FL, 23 - [__BCH_INODE_casefolded] = FS_CASEFOLD_FL, 24 - }; 25 - 26 - /* bcachefs inode flags -> FS_IOC_FSGETXATTR: */ 27 - static const __maybe_unused unsigned bch_flags_to_xflags[] = { 28 - [__BCH_INODE_sync] = FS_XFLAG_SYNC, 29 - [__BCH_INODE_immutable] = FS_XFLAG_IMMUTABLE, 30 - [__BCH_INODE_append] = FS_XFLAG_APPEND, 31 - [__BCH_INODE_nodump] = FS_XFLAG_NODUMP, 32 - [__BCH_INODE_noatime] = FS_XFLAG_NOATIME, 33 - //[__BCH_INODE_PROJINHERIT] = FS_XFLAG_PROJINHERIT; 34 - }; 35 - 36 - #define set_flags(_map, _in, _out) \ 37 - do { \ 38 - unsigned _i; \ 39 - \ 40 - for (_i = 0; _i < ARRAY_SIZE(_map); _i++) \ 41 - if ((_in) & (1 << _i)) \ 42 - (_out) |= _map[_i]; \ 43 - else \ 44 - (_out) &= ~_map[_i]; \ 45 - } while (0) 46 - 47 - #define map_flags(_map, _in) \ 48 - ({ \ 49 - unsigned _out = 0; \ 50 - \ 51 - set_flags(_map, _in, _out); \ 52 - _out; \ 53 - }) 54 - 55 - #define map_flags_rev(_map, _in) \ 56 - ({ \ 57 - unsigned _i, _out = 0; \ 58 - \ 59 - for (_i = 0; _i < ARRAY_SIZE(_map); _i++) \ 60 - if ((_in) & _map[_i]) { \ 61 - (_out) |= 1 << _i; \ 62 - (_in) &= ~_map[_i]; \ 63 - } \ 64 - (_out); \ 65 - }) 66 - 67 - #define map_defined(_map) \ 68 - ({ \ 69 - unsigned _in = ~0; \ 70 - \ 71 - map_flags_rev(_map, _in); \ 72 - }) 73 - 74 - /* Set VFS inode flags from bcachefs inode: */ 75 - static inline void bch2_inode_flags_to_vfs(struct bch_inode_info *inode) 76 - { 77 - set_flags(bch_flags_to_vfs, inode->ei_inode.bi_flags, inode->v.i_flags); 78 - } 79 - 80 long bch2_fs_file_ioctl(struct file *, unsigned, unsigned long); 81 long bch2_compat_fs_ioctl(struct file *, unsigned, unsigned long); 82
··· 2 #ifndef _BCACHEFS_FS_IOCTL_H 3 #define _BCACHEFS_FS_IOCTL_H 4 5 long bch2_fs_file_ioctl(struct file *, unsigned, unsigned long); 6 long bch2_compat_fs_ioctl(struct file *, unsigned, unsigned long); 7
+393 -76
fs/bcachefs/fs.c
··· 33 #include <linux/backing-dev.h> 34 #include <linux/exportfs.h> 35 #include <linux/fiemap.h> 36 #include <linux/fs_context.h> 37 #include <linux/module.h> 38 #include <linux/pagemap.h> ··· 51 struct bch_inode_info *, 52 struct bch_inode_unpacked *, 53 struct bch_subvolume *); 54 55 void bch2_inode_update_after_write(struct btree_trans *trans, 56 struct bch_inode_info *inode, ··· 96 97 inode->ei_inode = *bi; 98 99 - bch2_inode_flags_to_vfs(inode); 100 } 101 102 int __must_check bch2_write_inode(struct bch_fs *c, ··· 648 const struct qstr *name) 649 { 650 struct bch_fs *c = trans->c; 651 - struct btree_iter dirent_iter = {}; 652 subvol_inum inum = {}; 653 struct printbuf buf = PRINTBUF; 654 655 struct bkey_s_c k = bch2_hash_lookup(trans, &dirent_iter, bch2_dirent_hash_desc, 656 - dir_hash_info, dir, name, 0); 657 - int ret = bkey_err(k); 658 if (ret) 659 return ERR_PTR(ret); 660 ··· 846 * the VFS that it's been deleted here: 847 */ 848 set_nlink(&inode->v, 0); 849 } 850 err: 851 bch2_trans_put(trans); ··· 1262 return finish_open_simple(file, 0); 1263 } 1264 1265 static int bch2_fill_extent(struct bch_fs *c, 1266 struct fiemap_extent_info *info, 1267 - struct bkey_s_c k, unsigned flags) 1268 { 1269 if (bkey_extent_is_direct_data(k.k)) { 1270 struct bkey_ptrs_c ptrs = bch2_bkey_ptrs_c(k); 1271 const union bch_extent_entry *entry; ··· 1328 } 1329 } 1330 1331 static int bch2_fiemap(struct inode *vinode, struct fiemap_extent_info *info, 1332 u64 start, u64 len) 1333 { 1334 struct bch_fs *c = vinode->i_sb->s_fs_info; 1335 struct bch_inode_info *ei = to_bch_ei(vinode); 1336 struct btree_trans *trans; 1337 - struct btree_iter iter; 1338 - struct bkey_s_c k; 1339 - struct bkey_buf cur, prev; 1340 - bool have_extent = false; 1341 int ret = 0; 1342 1343 - ret = fiemap_prep(&ei->v, info, start, &len, FIEMAP_FLAG_SYNC); 1344 if (ret) 1345 return ret; 1346 1347 - struct bpos end = POS(ei->v.i_ino, (start + len) >> 9); 1348 if (start + len < start) 1349 return -EINVAL; 1350 1351 start >>= 9; 1352 1353 - bch2_bkey_buf_init(&cur); 1354 - bch2_bkey_buf_init(&prev); 1355 trans = bch2_trans_get(c); 1356 1357 - bch2_trans_iter_init(trans, &iter, BTREE_ID_extents, 1358 - POS(ei->v.i_ino, start), 0); 1359 - 1360 - while (!ret || bch2_err_matches(ret, BCH_ERR_transaction_restart)) { 1361 - enum btree_id data_btree = BTREE_ID_extents; 1362 - 1363 - bch2_trans_begin(trans); 1364 - 1365 - u32 snapshot; 1366 - ret = bch2_subvolume_get_snapshot(trans, ei->ei_inum.subvol, &snapshot); 1367 if (ret) 1368 - continue; 1369 1370 - bch2_btree_iter_set_snapshot(trans, &iter, snapshot); 1371 1372 - k = bch2_btree_iter_peek_max(trans, &iter, end); 1373 - ret = bkey_err(k); 1374 - if (ret) 1375 - continue; 1376 - 1377 - if (!k.k) 1378 break; 1379 1380 - if (!bkey_extent_is_data(k.k) && 1381 - k.k->type != KEY_TYPE_reservation) { 1382 - bch2_btree_iter_advance(trans, &iter); 1383 - continue; 1384 - } 1385 1386 - s64 offset_into_extent = iter.pos.offset - bkey_start_offset(k.k); 1387 - unsigned sectors = k.k->size - offset_into_extent; 1388 - 1389 - bch2_bkey_buf_reassemble(&cur, c, k); 1390 - 1391 - ret = bch2_read_indirect_extent(trans, &data_btree, 1392 - &offset_into_extent, &cur); 1393 - if (ret) 1394 - continue; 1395 - 1396 - k = bkey_i_to_s_c(cur.k); 1397 - bch2_bkey_buf_realloc(&prev, c, k.k->u64s); 1398 - 1399 - sectors = min_t(unsigned, sectors, k.k->size - offset_into_extent); 1400 - 1401 - bch2_cut_front(POS(k.k->p.inode, 1402 - bkey_start_offset(k.k) + 1403 - offset_into_extent), 1404 - cur.k); 1405 - 
bch2_key_resize(&cur.k->k, sectors); 1406 - cur.k->k.p = iter.pos; 1407 - cur.k->k.p.offset += cur.k->k.size; 1408 - 1409 - if (have_extent) { 1410 bch2_trans_unlock(trans); 1411 - ret = bch2_fill_extent(c, info, 1412 - bkey_i_to_s_c(prev.k), 0); 1413 if (ret) 1414 - break; 1415 } 1416 1417 - bkey_copy(prev.k, cur.k); 1418 - have_extent = true; 1419 - 1420 - bch2_btree_iter_set_pos(trans, &iter, 1421 - POS(iter.pos.inode, iter.pos.offset + sectors)); 1422 } 1423 - bch2_trans_iter_exit(trans, &iter); 1424 1425 - if (!ret && have_extent) { 1426 bch2_trans_unlock(trans); 1427 - ret = bch2_fill_extent(c, info, bkey_i_to_s_c(prev.k), 1428 - FIEMAP_EXTENT_LAST); 1429 } 1430 - 1431 bch2_trans_put(trans); 1432 - bch2_bkey_buf_exit(&cur, c); 1433 - bch2_bkey_buf_exit(&prev, c); 1434 - return ret < 0 ? ret : 0; 1435 } 1436 1437 static const struct vm_operations_struct bch_vm_ops = { ··· 1599 return generic_file_open(vinode, file); 1600 } 1601 1602 static const struct file_operations bch_file_operations = { 1603 .open = bch2_open, 1604 .llseek = bch2_llseek, ··· 1785 .get_inode_acl = bch2_get_acl, 1786 .set_acl = bch2_set_acl, 1787 #endif 1788 }; 1789 1790 static const struct inode_operations bch_dir_inode_operations = { ··· 1807 .get_inode_acl = bch2_get_acl, 1808 .set_acl = bch2_set_acl, 1809 #endif 1810 }; 1811 1812 static const struct file_operations bch_dir_file_operations = { ··· 1831 .get_inode_acl = bch2_get_acl, 1832 .set_acl = bch2_set_acl, 1833 #endif 1834 }; 1835 1836 static const struct inode_operations bch_special_inode_operations = { ··· 1843 .get_inode_acl = bch2_get_acl, 1844 .set_acl = bch2_set_acl, 1845 #endif 1846 }; 1847 1848 static const struct address_space_operations bch_address_space_operations = {
··· 33 #include <linux/backing-dev.h> 34 #include <linux/exportfs.h> 35 #include <linux/fiemap.h> 36 + #include <linux/fileattr.h> 37 #include <linux/fs_context.h> 38 #include <linux/module.h> 39 #include <linux/pagemap.h> ··· 50 struct bch_inode_info *, 51 struct bch_inode_unpacked *, 52 struct bch_subvolume *); 53 + 54 + /* Set VFS inode flags from bcachefs inode: */ 55 + static inline void bch2_inode_flags_to_vfs(struct bch_fs *c, struct bch_inode_info *inode) 56 + { 57 + static const __maybe_unused unsigned bch_flags_to_vfs[] = { 58 + [__BCH_INODE_sync] = S_SYNC, 59 + [__BCH_INODE_immutable] = S_IMMUTABLE, 60 + [__BCH_INODE_append] = S_APPEND, 61 + [__BCH_INODE_noatime] = S_NOATIME, 62 + }; 63 + 64 + set_flags(bch_flags_to_vfs, inode->ei_inode.bi_flags, inode->v.i_flags); 65 + 66 + if (bch2_inode_casefold(c, &inode->ei_inode)) 67 + inode->v.i_flags |= S_CASEFOLD; 68 + } 69 70 void bch2_inode_update_after_write(struct btree_trans *trans, 71 struct bch_inode_info *inode, ··· 79 80 inode->ei_inode = *bi; 81 82 + bch2_inode_flags_to_vfs(c, inode); 83 } 84 85 int __must_check bch2_write_inode(struct bch_fs *c, ··· 631 const struct qstr *name) 632 { 633 struct bch_fs *c = trans->c; 634 subvol_inum inum = {}; 635 struct printbuf buf = PRINTBUF; 636 637 + struct qstr lookup_name; 638 + int ret = bch2_maybe_casefold(trans, dir_hash_info, name, &lookup_name); 639 + if (ret) 640 + return ERR_PTR(ret); 641 + 642 + struct btree_iter dirent_iter = {}; 643 struct bkey_s_c k = bch2_hash_lookup(trans, &dirent_iter, bch2_dirent_hash_desc, 644 + dir_hash_info, dir, &lookup_name, 0); 645 + ret = bkey_err(k); 646 if (ret) 647 return ERR_PTR(ret); 648 ··· 824 * the VFS that it's been deleted here: 825 */ 826 set_nlink(&inode->v, 0); 827 + } 828 + 829 + if (IS_CASEFOLDED(vdir)) { 830 + d_invalidate(dentry); 831 + d_prune_aliases(&inode->v); 832 } 833 err: 834 bch2_trans_put(trans); ··· 1235 return finish_open_simple(file, 0); 1236 } 1237 1238 + struct bch_fiemap_extent { 1239 + struct bkey_buf kbuf; 1240 + unsigned flags; 1241 + }; 1242 + 1243 static int bch2_fill_extent(struct bch_fs *c, 1244 struct fiemap_extent_info *info, 1245 + struct bch_fiemap_extent *fe) 1246 { 1247 + struct bkey_s_c k = bkey_i_to_s_c(fe->kbuf.k); 1248 + unsigned flags = fe->flags; 1249 + 1250 + BUG_ON(!k.k->size); 1251 + 1252 if (bkey_extent_is_direct_data(k.k)) { 1253 struct bkey_ptrs_c ptrs = bch2_bkey_ptrs_c(k); 1254 const union bch_extent_entry *entry; ··· 1291 } 1292 } 1293 1294 + /* 1295 + * Scan a range of an inode for data in pagecache. 1296 + * 1297 + * Intended to be retryable, so don't modify the output params until success is 1298 + * imminent. 1299 + */ 1300 + static int 1301 + bch2_fiemap_hole_pagecache(struct inode *vinode, u64 *start, u64 *end, 1302 + bool nonblock) 1303 + { 1304 + loff_t dstart, dend; 1305 + 1306 + dstart = bch2_seek_pagecache_data(vinode, *start, *end, 0, nonblock); 1307 + if (dstart < 0) 1308 + return dstart; 1309 + 1310 + if (dstart == *end) { 1311 + *start = dstart; 1312 + return 0; 1313 + } 1314 + 1315 + dend = bch2_seek_pagecache_hole(vinode, dstart, *end, 0, nonblock); 1316 + if (dend < 0) 1317 + return dend; 1318 + 1319 + /* race */ 1320 + BUG_ON(dstart == dend); 1321 + 1322 + *start = dstart; 1323 + *end = dend; 1324 + return 0; 1325 + } 1326 + 1327 + /* 1328 + * Scan a range of pagecache that corresponds to a file mapping hole in the 1329 + * extent btree. If data is found, fake up an extent key so it looks like a 1330 + * delalloc extent to the rest of the fiemap processing code. 
1331 + */ 1332 + static int 1333 + bch2_next_fiemap_pagecache_extent(struct btree_trans *trans, struct bch_inode_info *inode, 1334 + u64 start, u64 end, struct bch_fiemap_extent *cur) 1335 + { 1336 + struct bch_fs *c = trans->c; 1337 + struct bkey_i_extent *delextent; 1338 + struct bch_extent_ptr ptr = {}; 1339 + loff_t dstart = start << 9, dend = end << 9; 1340 + int ret; 1341 + 1342 + /* 1343 + * We hold btree locks here so we cannot block on folio locks without 1344 + * dropping trans locks first. Run a nonblocking scan for the common 1345 + * case of no folios over holes and fall back on failure. 1346 + * 1347 + * Note that dropping locks like this is technically racy against 1348 + * writeback inserting to the extent tree, but a non-sync fiemap scan is 1349 + * fundamentally racy with writeback anyways. Therefore, just report the 1350 + * range as delalloc regardless of whether we have to cycle trans locks. 1351 + */ 1352 + ret = bch2_fiemap_hole_pagecache(&inode->v, &dstart, &dend, true); 1353 + if (ret == -EAGAIN) 1354 + ret = drop_locks_do(trans, 1355 + bch2_fiemap_hole_pagecache(&inode->v, &dstart, &dend, false)); 1356 + if (ret < 0) 1357 + return ret; 1358 + 1359 + /* 1360 + * Create a fake extent key in the buffer. We have to add a dummy extent 1361 + * pointer for the fill code to add an extent entry. It's explicitly 1362 + * zeroed to reflect delayed allocation (i.e. phys offset 0). 1363 + */ 1364 + bch2_bkey_buf_realloc(&cur->kbuf, c, sizeof(*delextent) / sizeof(u64)); 1365 + delextent = bkey_extent_init(cur->kbuf.k); 1366 + delextent->k.p = POS(inode->ei_inum.inum, dend >> 9); 1367 + delextent->k.size = (dend - dstart) >> 9; 1368 + bch2_bkey_append_ptr(&delextent->k_i, ptr); 1369 + 1370 + cur->flags = FIEMAP_EXTENT_DELALLOC; 1371 + 1372 + return 0; 1373 + } 1374 + 1375 + static int bch2_next_fiemap_extent(struct btree_trans *trans, 1376 + struct bch_inode_info *inode, 1377 + u64 start, u64 end, 1378 + struct bch_fiemap_extent *cur) 1379 + { 1380 + u32 snapshot; 1381 + int ret = bch2_subvolume_get_snapshot(trans, inode->ei_inum.subvol, &snapshot); 1382 + if (ret) 1383 + return ret; 1384 + 1385 + struct btree_iter iter; 1386 + bch2_trans_iter_init(trans, &iter, BTREE_ID_extents, 1387 + SPOS(inode->ei_inum.inum, start, snapshot), 0); 1388 + 1389 + struct bkey_s_c k = 1390 + bch2_btree_iter_peek_max(trans, &iter, POS(inode->ei_inum.inum, end)); 1391 + ret = bkey_err(k); 1392 + if (ret) 1393 + goto err; 1394 + 1395 + ret = bch2_next_fiemap_pagecache_extent(trans, inode, start, end, cur); 1396 + if (ret) 1397 + goto err; 1398 + 1399 + struct bpos pagecache_start = bkey_start_pos(&cur->kbuf.k->k); 1400 + 1401 + /* 1402 + * Does the pagecache or the btree take precedence? 1403 + * 1404 + * It _should_ be the pagecache, so that we correctly report delalloc 1405 + * extents when dirty in the pagecache (we're COW, after all). 1406 + * 1407 + * But we'd have to add per-sector writeback tracking to 1408 + * bch_folio_state, otherwise we report delalloc extents for clean 1409 + * cached data in the pagecache. 1410 + * 1411 + * We should do this, but even then fiemap won't report stable mappings: 1412 + * on bcachefs data moves around in the background (copygc, rebalance) 1413 + * and we don't provide a way for userspace to lock that out. 
1414 + */ 1415 + if (k.k && 1416 + bkey_le(bpos_max(iter.pos, bkey_start_pos(k.k)), 1417 + pagecache_start)) { 1418 + bch2_bkey_buf_reassemble(&cur->kbuf, trans->c, k); 1419 + bch2_cut_front(iter.pos, cur->kbuf.k); 1420 + bch2_cut_back(POS(inode->ei_inum.inum, end), cur->kbuf.k); 1421 + cur->flags = 0; 1422 + } else if (k.k) { 1423 + bch2_cut_back(bkey_start_pos(k.k), cur->kbuf.k); 1424 + } 1425 + 1426 + if (cur->kbuf.k->k.type == KEY_TYPE_reflink_p) { 1427 + unsigned sectors = cur->kbuf.k->k.size; 1428 + s64 offset_into_extent = 0; 1429 + enum btree_id data_btree = BTREE_ID_extents; 1430 + int ret = bch2_read_indirect_extent(trans, &data_btree, &offset_into_extent, 1431 + &cur->kbuf); 1432 + if (ret) 1433 + goto err; 1434 + 1435 + struct bkey_i *k = cur->kbuf.k; 1436 + sectors = min_t(unsigned, sectors, k->k.size - offset_into_extent); 1437 + 1438 + bch2_cut_front(POS(k->k.p.inode, 1439 + bkey_start_offset(&k->k) + offset_into_extent), 1440 + k); 1441 + bch2_key_resize(&k->k, sectors); 1442 + k->k.p = iter.pos; 1443 + k->k.p.offset += k->k.size; 1444 + } 1445 + err: 1446 + bch2_trans_iter_exit(trans, &iter); 1447 + return ret; 1448 + } 1449 + 1450 static int bch2_fiemap(struct inode *vinode, struct fiemap_extent_info *info, 1451 u64 start, u64 len) 1452 { 1453 struct bch_fs *c = vinode->i_sb->s_fs_info; 1454 struct bch_inode_info *ei = to_bch_ei(vinode); 1455 struct btree_trans *trans; 1456 + struct bch_fiemap_extent cur, prev; 1457 int ret = 0; 1458 1459 + ret = fiemap_prep(&ei->v, info, start, &len, 0); 1460 if (ret) 1461 return ret; 1462 1463 if (start + len < start) 1464 return -EINVAL; 1465 1466 start >>= 9; 1467 + u64 end = (start + len) >> 9; 1468 1469 + bch2_bkey_buf_init(&cur.kbuf); 1470 + bch2_bkey_buf_init(&prev.kbuf); 1471 + bkey_init(&prev.kbuf.k->k); 1472 + 1473 trans = bch2_trans_get(c); 1474 1475 + while (start < end) { 1476 + ret = lockrestart_do(trans, 1477 + bch2_next_fiemap_extent(trans, ei, start, end, &cur)); 1478 if (ret) 1479 + goto err; 1480 1481 + BUG_ON(bkey_start_offset(&cur.kbuf.k->k) < start); 1482 + BUG_ON(cur.kbuf.k->k.p.offset > end); 1483 1484 + if (bkey_start_offset(&cur.kbuf.k->k) == end) 1485 break; 1486 1487 + start = cur.kbuf.k->k.p.offset; 1488 1489 + if (!bkey_deleted(&prev.kbuf.k->k)) { 1490 bch2_trans_unlock(trans); 1491 + ret = bch2_fill_extent(c, info, &prev); 1492 if (ret) 1493 + goto err; 1494 } 1495 1496 + bch2_bkey_buf_copy(&prev.kbuf, c, cur.kbuf.k); 1497 + prev.flags = cur.flags; 1498 } 1499 1500 + if (!bkey_deleted(&prev.kbuf.k->k)) { 1501 bch2_trans_unlock(trans); 1502 + prev.flags |= FIEMAP_EXTENT_LAST; 1503 + ret = bch2_fill_extent(c, info, &prev); 1504 } 1505 + err: 1506 bch2_trans_put(trans); 1507 + bch2_bkey_buf_exit(&cur.kbuf, c); 1508 + bch2_bkey_buf_exit(&prev.kbuf, c); 1509 + 1510 + return bch2_err_class(ret < 0 ? 
ret : 0); 1511 } 1512 1513 static const struct vm_operations_struct bch_vm_ops = { ··· 1449 return generic_file_open(vinode, file); 1450 } 1451 1452 + /* bcachefs inode flags -> FS_IOC_GETFLAGS: */ 1453 + static const __maybe_unused unsigned bch_flags_to_uflags[] = { 1454 + [__BCH_INODE_sync] = FS_SYNC_FL, 1455 + [__BCH_INODE_immutable] = FS_IMMUTABLE_FL, 1456 + [__BCH_INODE_append] = FS_APPEND_FL, 1457 + [__BCH_INODE_nodump] = FS_NODUMP_FL, 1458 + [__BCH_INODE_noatime] = FS_NOATIME_FL, 1459 + }; 1460 + 1461 + /* bcachefs inode flags -> FS_IOC_FSGETXATTR: */ 1462 + static const __maybe_unused unsigned bch_flags_to_xflags[] = { 1463 + [__BCH_INODE_sync] = FS_XFLAG_SYNC, 1464 + [__BCH_INODE_immutable] = FS_XFLAG_IMMUTABLE, 1465 + [__BCH_INODE_append] = FS_XFLAG_APPEND, 1466 + [__BCH_INODE_nodump] = FS_XFLAG_NODUMP, 1467 + [__BCH_INODE_noatime] = FS_XFLAG_NOATIME, 1468 + }; 1469 + 1470 + static int bch2_fileattr_get(struct dentry *dentry, 1471 + struct fileattr *fa) 1472 + { 1473 + struct bch_inode_info *inode = to_bch_ei(d_inode(dentry)); 1474 + struct bch_fs *c = inode->v.i_sb->s_fs_info; 1475 + 1476 + fileattr_fill_xflags(fa, map_flags(bch_flags_to_xflags, inode->ei_inode.bi_flags)); 1477 + 1478 + if (inode->ei_inode.bi_fields_set & (1 << Inode_opt_project)) 1479 + fa->fsx_xflags |= FS_XFLAG_PROJINHERIT; 1480 + 1481 + if (bch2_inode_casefold(c, &inode->ei_inode)) 1482 + fa->flags |= FS_CASEFOLD_FL; 1483 + 1484 + fa->fsx_projid = inode->ei_qid.q[QTYP_PRJ]; 1485 + return 0; 1486 + } 1487 + 1488 + struct flags_set { 1489 + unsigned mask; 1490 + unsigned flags; 1491 + unsigned projid; 1492 + bool set_project; 1493 + bool set_casefold; 1494 + bool casefold; 1495 + }; 1496 + 1497 + static int fssetxattr_inode_update_fn(struct btree_trans *trans, 1498 + struct bch_inode_info *inode, 1499 + struct bch_inode_unpacked *bi, 1500 + void *p) 1501 + { 1502 + struct bch_fs *c = trans->c; 1503 + struct flags_set *s = p; 1504 + 1505 + /* 1506 + * We're relying on btree locking here for exclusion with other ioctl 1507 + * calls - use the flags in the btree (@bi), not inode->i_flags: 1508 + */ 1509 + if (!S_ISREG(bi->bi_mode) && 1510 + !S_ISDIR(bi->bi_mode) && 1511 + (s->flags & (BCH_INODE_nodump|BCH_INODE_noatime)) != s->flags) 1512 + return -EINVAL; 1513 + 1514 + if (s->casefold != bch2_inode_casefold(c, bi)) { 1515 + #ifdef CONFIG_UNICODE 1516 + int ret = 0; 1517 + /* Not supported on individual files. */ 1518 + if (!S_ISDIR(bi->bi_mode)) 1519 + return -EOPNOTSUPP; 1520 + 1521 + /* 1522 + * Make sure the dir is empty, as otherwise we'd need to 1523 + * rehash everything and update the dirent keys. 
1524 + */ 1525 + ret = bch2_empty_dir_trans(trans, inode_inum(inode)); 1526 + if (ret < 0) 1527 + return ret; 1528 + 1529 + ret = bch2_request_incompat_feature(c, bcachefs_metadata_version_casefolding); 1530 + if (ret) 1531 + return ret; 1532 + 1533 + bch2_check_set_feature(c, BCH_FEATURE_casefolding); 1534 + 1535 + bi->bi_casefold = s->casefold + 1; 1536 + bi->bi_fields_set |= BIT(Inode_opt_casefold); 1537 + 1538 + #else 1539 + printk(KERN_ERR "Cannot use casefolding on a kernel without CONFIG_UNICODE\n"); 1540 + return -EOPNOTSUPP; 1541 + #endif 1542 + } 1543 + 1544 + if (s->set_project) { 1545 + bi->bi_project = s->projid; 1546 + bi->bi_fields_set |= BIT(Inode_opt_project); 1547 + } 1548 + 1549 + bi->bi_flags &= ~s->mask; 1550 + bi->bi_flags |= s->flags; 1551 + 1552 + bi->bi_ctime = timespec_to_bch2_time(c, current_time(&inode->v)); 1553 + return 0; 1554 + } 1555 + 1556 + static int bch2_fileattr_set(struct mnt_idmap *idmap, 1557 + struct dentry *dentry, 1558 + struct fileattr *fa) 1559 + { 1560 + struct bch_inode_info *inode = to_bch_ei(d_inode(dentry)); 1561 + struct bch_fs *c = inode->v.i_sb->s_fs_info; 1562 + struct flags_set s = {}; 1563 + int ret; 1564 + 1565 + if (fa->fsx_valid) { 1566 + fa->fsx_xflags &= ~FS_XFLAG_PROJINHERIT; 1567 + 1568 + s.mask = map_defined(bch_flags_to_xflags); 1569 + s.flags |= map_flags_rev(bch_flags_to_xflags, fa->fsx_xflags); 1570 + if (fa->fsx_xflags) 1571 + return -EOPNOTSUPP; 1572 + 1573 + if (fa->fsx_projid >= U32_MAX) 1574 + return -EINVAL; 1575 + 1576 + /* 1577 + * inode fields accessible via the xattr interface are stored with a +1 1578 + * bias, so that 0 means unset: 1579 + */ 1580 + if ((inode->ei_inode.bi_project || 1581 + fa->fsx_projid) && 1582 + inode->ei_inode.bi_project != fa->fsx_projid + 1) { 1583 + s.projid = fa->fsx_projid + 1; 1584 + s.set_project = true; 1585 + } 1586 + } 1587 + 1588 + if (fa->flags_valid) { 1589 + s.mask = map_defined(bch_flags_to_uflags); 1590 + 1591 + s.set_casefold = true; 1592 + s.casefold = (fa->flags & FS_CASEFOLD_FL) != 0; 1593 + fa->flags &= ~FS_CASEFOLD_FL; 1594 + 1595 + s.flags |= map_flags_rev(bch_flags_to_uflags, fa->flags); 1596 + if (fa->flags) 1597 + return -EOPNOTSUPP; 1598 + } 1599 + 1600 + mutex_lock(&inode->ei_update_lock); 1601 + ret = bch2_subvol_is_ro(c, inode->ei_inum.subvol) ?: 1602 + (s.set_project 1603 + ? 
bch2_set_projid(c, inode, fa->fsx_projid) 1604 + : 0) ?: 1605 + bch2_write_inode(c, inode, fssetxattr_inode_update_fn, &s, 1606 + ATTR_CTIME); 1607 + mutex_unlock(&inode->ei_update_lock); 1608 + return ret; 1609 + } 1610 + 1611 static const struct file_operations bch_file_operations = { 1612 .open = bch2_open, 1613 .llseek = bch2_llseek, ··· 1476 .get_inode_acl = bch2_get_acl, 1477 .set_acl = bch2_set_acl, 1478 #endif 1479 + .fileattr_get = bch2_fileattr_get, 1480 + .fileattr_set = bch2_fileattr_set, 1481 }; 1482 1483 static const struct inode_operations bch_dir_inode_operations = { ··· 1496 .get_inode_acl = bch2_get_acl, 1497 .set_acl = bch2_set_acl, 1498 #endif 1499 + .fileattr_get = bch2_fileattr_get, 1500 + .fileattr_set = bch2_fileattr_set, 1501 }; 1502 1503 static const struct file_operations bch_dir_file_operations = { ··· 1518 .get_inode_acl = bch2_get_acl, 1519 .set_acl = bch2_set_acl, 1520 #endif 1521 + .fileattr_get = bch2_fileattr_get, 1522 + .fileattr_set = bch2_fileattr_set, 1523 }; 1524 1525 static const struct inode_operations bch_special_inode_operations = { ··· 1528 .get_inode_acl = bch2_get_acl, 1529 .set_acl = bch2_set_acl, 1530 #endif 1531 + .fileattr_get = bch2_fileattr_get, 1532 + .fileattr_set = bch2_fileattr_set, 1533 }; 1534 1535 static const struct address_space_operations bch_address_space_operations = {
+8
fs/bcachefs/inode.h
··· 243 } 244 } 245 246 /* i_nlink: */ 247 248 static inline unsigned nlink_bias(umode_t mode)
··· 243 } 244 } 245 246 + static inline bool bch2_inode_casefold(struct bch_fs *c, const struct bch_inode_unpacked *bi) 247 + { 248 + /* inode opts are stored with a +1 bias: 0 means "unset, use fs opt" */ 249 + return bi->bi_casefold 250 + ? bi->bi_casefold - 1 251 + : c->opts.casefold; 252 + } 253 + 254 /* i_nlink: */ 255 256 static inline unsigned nlink_bias(umode_t mode)
+5 -4
fs/bcachefs/inode_format.h
··· 103 x(bi_parent_subvol, 32) \ 104 x(bi_nocow, 8) \ 105 x(bi_depth, 32) \ 106 - x(bi_inodes_32bit, 8) 107 108 /* subset of BCH_INODE_FIELDS */ 109 #define BCH_INODE_OPTS() \ ··· 118 x(background_target, 16) \ 119 x(erasure_code, 16) \ 120 x(nocow, 8) \ 121 - x(inodes_32bit, 8) 122 123 enum inode_opt_id { 124 #define x(name, ...) \ ··· 139 x(i_sectors_dirty, 6) \ 140 x(unlinked, 7) \ 141 x(backptr_untrusted, 8) \ 142 - x(has_child_snapshot, 9) \ 143 - x(casefolded, 10) 144 145 /* bits 20+ reserved for packed fields below: */ 146
··· 103 x(bi_parent_subvol, 32) \ 104 x(bi_nocow, 8) \ 105 x(bi_depth, 32) \ 106 + x(bi_inodes_32bit, 8) \ 107 + x(bi_casefold, 8) 108 109 /* subset of BCH_INODE_FIELDS */ 110 #define BCH_INODE_OPTS() \ ··· 117 x(background_target, 16) \ 118 x(erasure_code, 16) \ 119 x(nocow, 8) \ 120 + x(inodes_32bit, 8) \ 121 + x(casefold, 8) 122 123 enum inode_opt_id { 124 #define x(name, ...) \ ··· 137 x(i_sectors_dirty, 6) \ 138 x(unlinked, 7) \ 139 x(backptr_untrusted, 8) \ 140 + x(has_child_snapshot, 9) 141 142 /* bits 20+ reserved for packed fields below: */ 143
+33 -3
fs/bcachefs/journal.c
··· 281 282 sectors = vstruct_blocks_plus(buf->data, c->block_bits, 283 buf->u64s_reserved) << c->block_bits; 284 - BUG_ON(sectors > buf->sectors); 285 buf->sectors = sectors; 286 287 /* ··· 1479 j->last_empty_seq = cur_seq - 1; /* to match j->seq */ 1480 1481 spin_lock(&j->lock); 1482 - 1483 - set_bit(JOURNAL_running, &j->flags); 1484 j->last_flush_write = jiffies; 1485 1486 j->reservations.idx = journal_cur_seq(j); ··· 1487 spin_unlock(&j->lock); 1488 1489 return 0; 1490 } 1491 1492 /* init/exit: */
··· 281 282 sectors = vstruct_blocks_plus(buf->data, c->block_bits, 283 buf->u64s_reserved) << c->block_bits; 284 + if (unlikely(sectors > buf->sectors)) { 285 + struct printbuf err = PRINTBUF; 286 + err.atomic++; 287 + 288 + prt_printf(&err, "journal entry overran reserved space: %u > %u\n", 289 + sectors, buf->sectors); 290 + prt_printf(&err, "buf u64s %u u64s reserved %u cur_entry_u64s %u block_bits %u\n", 291 + le32_to_cpu(buf->data->u64s), buf->u64s_reserved, 292 + j->cur_entry_u64s, 293 + c->block_bits); 294 + prt_printf(&err, "fatal error - emergency read only"); 295 + bch2_journal_halt_locked(j); 296 + 297 + bch_err(c, "%s", err.buf); 298 + printbuf_exit(&err); 299 + return; 300 + } 301 + 302 buf->sectors = sectors; 303 304 /* ··· 1462 j->last_empty_seq = cur_seq - 1; /* to match j->seq */ 1463 1464 spin_lock(&j->lock); 1465 j->last_flush_write = jiffies; 1466 1467 j->reservations.idx = journal_cur_seq(j); ··· 1472 spin_unlock(&j->lock); 1473 1474 return 0; 1475 + } 1476 + 1477 + void bch2_journal_set_replay_done(struct journal *j) 1478 + { 1479 + /* 1480 + * journal_space_available must happen before setting JOURNAL_running 1481 + * JOURNAL_running must happen before JOURNAL_replay_done 1482 + */ 1483 + spin_lock(&j->lock); 1484 + bch2_journal_space_available(j); 1485 + 1486 + set_bit(JOURNAL_need_flush_write, &j->flags); 1487 + set_bit(JOURNAL_running, &j->flags); 1488 + set_bit(JOURNAL_replay_done, &j->flags); 1489 + spin_unlock(&j->lock); 1490 } 1491 1492 /* init/exit: */
+1 -6
fs/bcachefs/journal.h
··· 437 438 struct bch_dev; 439 440 - static inline void bch2_journal_set_replay_done(struct journal *j) 441 - { 442 - BUG_ON(!test_bit(JOURNAL_running, &j->flags)); 443 - set_bit(JOURNAL_replay_done, &j->flags); 444 - } 445 - 446 void bch2_journal_unblock(struct journal *); 447 void bch2_journal_block(struct journal *); 448 struct journal_buf *bch2_next_write_buffer_flush_journal_buf(struct journal *, u64, bool *); ··· 453 454 void bch2_fs_journal_stop(struct journal *); 455 int bch2_fs_journal_start(struct journal *, u64); 456 457 void bch2_dev_journal_exit(struct bch_dev *); 458 int bch2_dev_journal_init(struct bch_dev *, struct bch_sb *);
··· 437 438 struct bch_dev; 439 440 void bch2_journal_unblock(struct journal *); 441 void bch2_journal_block(struct journal *); 442 struct journal_buf *bch2_next_write_buffer_flush_journal_buf(struct journal *, u64, bool *); ··· 459 460 void bch2_fs_journal_stop(struct journal *); 461 int bch2_fs_journal_start(struct journal *, u64); 462 + void bch2_journal_set_replay_done(struct journal *); 463 464 void bch2_dev_journal_exit(struct bch_dev *); 465 int bch2_dev_journal_init(struct bch_dev *, struct bch_sb *);
+4 -1
fs/bcachefs/journal_reclaim.c
··· 252 253 bch2_journal_set_watermark(j); 254 out: 255 - j->cur_entry_sectors = !ret ? j->space[journal_space_discarded].next_entry : 0; 256 j->cur_entry_error = ret; 257 258 if (!ret)
··· 252 253 bch2_journal_set_watermark(j); 254 out: 255 + j->cur_entry_sectors = !ret 256 + ? round_down(j->space[journal_space_discarded].next_entry, 257 + block_sectors(c)) 258 + : 0; 259 j->cur_entry_error = ret; 260 261 if (!ret)
+7
fs/bcachefs/movinggc.c
··· 356 357 set_freezable(); 358 359 bch2_move_stats_init(&move_stats, "copygc"); 360 bch2_moving_ctxt_init(&ctxt, c, NULL, &move_stats, 361 writepoint_ptr(&c->copygc_write_point),
··· 356 357 set_freezable(); 358 359 + /* 360 + * Data move operations can't run until after check_snapshots has 361 + * completed, and bch2_snapshot_is_ancestor() is available. 362 + */ 363 + kthread_wait_freezable(c->recovery_pass_done > BCH_RECOVERY_PASS_check_snapshots || 364 + kthread_should_stop()); 365 + 366 bch2_move_stats_init(&move_stats, "copygc"); 367 bch2_moving_ctxt_init(&ctxt, c, NULL, &move_stats, 368 writepoint_ptr(&c->copygc_write_point),
+9
fs/bcachefs/movinggc.h
··· 5 unsigned long bch2_copygc_wait_amount(struct bch_fs *); 6 void bch2_copygc_wait_to_text(struct printbuf *, struct bch_fs *); 7 8 void bch2_copygc_stop(struct bch_fs *); 9 int bch2_copygc_start(struct bch_fs *); 10 void bch2_fs_copygc_init(struct bch_fs *);
··· 5 unsigned long bch2_copygc_wait_amount(struct bch_fs *); 6 void bch2_copygc_wait_to_text(struct printbuf *, struct bch_fs *); 7 8 + static inline void bch2_copygc_wakeup(struct bch_fs *c) 9 + { 10 + rcu_read_lock(); 11 + struct task_struct *p = rcu_dereference(c->copygc_thread); 12 + if (p) 13 + wake_up_process(p); 14 + rcu_read_unlock(); 15 + } 16 + 17 void bch2_copygc_stop(struct bch_fs *); 18 int bch2_copygc_start(struct bch_fs *); 19 void bch2_fs_copygc_init(struct bch_fs *);
-4
fs/bcachefs/namei.c
··· 47 if (ret) 48 goto err; 49 50 - /* Inherit casefold state from parent. */ 51 - if (S_ISDIR(mode)) 52 - new_inode->bi_flags |= dir_u->bi_flags & BCH_INODE_casefolded; 53 - 54 if (!(flags & BCH_CREATE_SNAPSHOT)) { 55 /* Normal create path - allocate a new inode: */ 56 bch2_inode_init_late(new_inode, now, uid, gid, mode, rdev, dir_u);
··· 47 if (ret) 48 goto err; 49 50 if (!(flags & BCH_CREATE_SNAPSHOT)) { 51 /* Normal create path - allocate a new inode: */ 52 bch2_inode_init_late(new_inode, now, uid, gid, mode, rdev, dir_u);
+5
fs/bcachefs/opts.h
··· 228 OPT_BOOL(), \ 229 BCH_SB_ERASURE_CODE, false, \ 230 NULL, "Enable erasure coding (DO NOT USE YET)") \ 231 x(inodes_32bit, u8, \ 232 OPT_FS|OPT_INODE|OPT_FORMAT|OPT_MOUNT|OPT_RUNTIME, \ 233 OPT_BOOL(), \
··· 228 OPT_BOOL(), \ 229 BCH_SB_ERASURE_CODE, false, \ 230 NULL, "Enable erasure coding (DO NOT USE YET)") \ 231 + x(casefold, u8, \ 232 + OPT_FS|OPT_INODE|OPT_FORMAT, \ 233 + OPT_BOOL(), \ 234 + BCH_SB_CASEFOLD, false, \ 235 + NULL, "Dirent lookups are casefolded") \ 236 x(inodes_32bit, u8, \ 237 OPT_FS|OPT_INODE|OPT_FORMAT|OPT_MOUNT|OPT_RUNTIME, \ 238 OPT_BOOL(), \
+9 -2
fs/bcachefs/rebalance.c
··· 262 int ret = bch2_trans_commit_do(c, NULL, NULL, 263 BCH_TRANS_COMMIT_no_enospc, 264 bch2_set_rebalance_needs_scan_trans(trans, inum)); 265 - rebalance_wakeup(c); 266 return ret; 267 } 268 ··· 581 582 set_freezable(); 583 584 bch2_moving_ctxt_init(&ctxt, c, NULL, &r->work_stats, 585 writepoint_ptr(&c->rebalance_write_point), 586 true); ··· 671 c->rebalance.thread = NULL; 672 673 if (p) { 674 - /* for sychronizing with rebalance_wakeup() */ 675 synchronize_rcu(); 676 677 kthread_stop(p);
··· 262 int ret = bch2_trans_commit_do(c, NULL, NULL, 263 BCH_TRANS_COMMIT_no_enospc, 264 bch2_set_rebalance_needs_scan_trans(trans, inum)); 265 + bch2_rebalance_wakeup(c); 266 return ret; 267 } 268 ··· 581 582 set_freezable(); 583 584 + /* 585 + * Data move operations can't run until after check_snapshots has 586 + * completed, and bch2_snapshot_is_ancestor() is available. 587 + */ 588 + kthread_wait_freezable(c->recovery_pass_done > BCH_RECOVERY_PASS_check_snapshots || 589 + kthread_should_stop()); 590 + 591 bch2_moving_ctxt_init(&ctxt, c, NULL, &r->work_stats, 592 writepoint_ptr(&c->rebalance_write_point), 593 true); ··· 664 c->rebalance.thread = NULL; 665 666 if (p) { 667 + /* for synchronizing with bch2_rebalance_wakeup() */ 668 synchronize_rcu(); 669 670 kthread_stop(p);
+1 -1
fs/bcachefs/rebalance.h
··· 37 int bch2_set_rebalance_needs_scan(struct bch_fs *, u64 inum); 38 int bch2_set_fs_needs_rebalance(struct bch_fs *); 39 40 - static inline void rebalance_wakeup(struct bch_fs *c) 41 { 42 struct task_struct *p; 43
··· 37 int bch2_set_rebalance_needs_scan(struct bch_fs *, u64 inum); 38 int bch2_set_fs_needs_rebalance(struct bch_fs *); 39 40 + static inline void bch2_rebalance_wakeup(struct bch_fs *c) 41 { 42 struct task_struct *p; 43
+7 -3
fs/bcachefs/recovery.c
··· 18 #include "journal_seq_blacklist.h" 19 #include "logged_ops.h" 20 #include "move.h" 21 #include "namei.h" 22 #include "quota.h" 23 #include "rebalance.h" ··· 1130 if (ret) 1131 goto err; 1132 1133 - set_bit(BCH_FS_accounting_replay_done, &c->flags); 1134 - bch2_journal_set_replay_done(&c->journal); 1135 - 1136 ret = bch2_fs_read_write_early(c); 1137 if (ret) 1138 goto err; 1139 1140 for_each_member_device(c, ca) { 1141 ret = bch2_dev_usage_init(ca, false); ··· 1194 goto err; 1195 1196 c->recovery_pass_done = BCH_RECOVERY_PASS_NR - 1; 1197 1198 if (enabled_qtypes(c)) { 1199 ret = bch2_fs_quota_read(c);
··· 18 #include "journal_seq_blacklist.h" 19 #include "logged_ops.h" 20 #include "move.h" 21 + #include "movinggc.h" 22 #include "namei.h" 23 #include "quota.h" 24 #include "rebalance.h" ··· 1129 if (ret) 1130 goto err; 1131 1132 ret = bch2_fs_read_write_early(c); 1133 if (ret) 1134 goto err; 1135 + 1136 + set_bit(BCH_FS_accounting_replay_done, &c->flags); 1137 + bch2_journal_set_replay_done(&c->journal); 1138 1139 for_each_member_device(c, ca) { 1140 ret = bch2_dev_usage_init(ca, false); ··· 1193 goto err; 1194 1195 c->recovery_pass_done = BCH_RECOVERY_PASS_NR - 1; 1196 + 1197 + bch2_copygc_wakeup(c); 1198 + bch2_rebalance_wakeup(c); 1199 1200 if (enabled_qtypes(c)) { 1201 ret = bch2_fs_quota_read(c);
+36 -32
fs/bcachefs/recovery_passes.c
··· 12 #include "journal.h" 13 #include "lru.h" 14 #include "logged_ops.h" 15 #include "rebalance.h" 16 #include "recovery.h" 17 #include "recovery_passes.h" ··· 263 */ 264 c->opts.recovery_passes_exclude &= ~BCH_RECOVERY_PASS_set_may_go_rw; 265 266 - while (c->curr_recovery_pass < ARRAY_SIZE(recovery_pass_fns) && !ret) { 267 - c->next_recovery_pass = c->curr_recovery_pass + 1; 268 269 - spin_lock_irq(&c->recovery_pass_lock); 270 unsigned pass = c->curr_recovery_pass; 271 272 if (c->opts.recovery_pass_last && 273 - c->curr_recovery_pass > c->opts.recovery_pass_last) { 274 - spin_unlock_irq(&c->recovery_pass_lock); 275 break; 276 - } 277 278 - if (!should_run_recovery_pass(c, pass)) { 279 - c->curr_recovery_pass++; 280 - c->recovery_pass_done = max(c->recovery_pass_done, pass); 281 spin_unlock_irq(&c->recovery_pass_lock); 282 - continue; 283 } 284 - spin_unlock_irq(&c->recovery_pass_lock); 285 286 - ret = bch2_run_recovery_pass(c, pass) ?: 287 - bch2_journal_flush(&c->journal); 288 - 289 - if (!ret && !test_bit(BCH_FS_error, &c->flags)) 290 - bch2_clear_recovery_pass_required(c, pass); 291 - 292 - spin_lock_irq(&c->recovery_pass_lock); 293 - if (c->next_recovery_pass < c->curr_recovery_pass) { 294 - /* 295 - * bch2_run_explicit_recovery_pass() was called: we 296 - * can't always catch -BCH_ERR_restart_recovery because 297 - * it may have been called from another thread (btree 298 - * node read completion) 299 - */ 300 - ret = 0; 301 - c->recovery_passes_complete &= ~(~0ULL << c->curr_recovery_pass); 302 - } else { 303 - c->recovery_passes_complete |= BIT_ULL(pass); 304 - c->recovery_pass_done = max(c->recovery_pass_done, pass); 305 - } 306 c->curr_recovery_pass = c->next_recovery_pass; 307 - spin_unlock_irq(&c->recovery_pass_lock); 308 } 309 310 return ret; 311 }
··· 12 #include "journal.h" 13 #include "lru.h" 14 #include "logged_ops.h" 15 + #include "movinggc.h" 16 #include "rebalance.h" 17 #include "recovery.h" 18 #include "recovery_passes.h" ··· 262 */ 263 c->opts.recovery_passes_exclude &= ~BCH_RECOVERY_PASS_set_may_go_rw; 264 265 + spin_lock_irq(&c->recovery_pass_lock); 266 267 + while (c->curr_recovery_pass < ARRAY_SIZE(recovery_pass_fns) && !ret) { 268 + unsigned prev_done = c->recovery_pass_done; 269 unsigned pass = c->curr_recovery_pass; 270 271 + c->next_recovery_pass = pass + 1; 272 + 273 if (c->opts.recovery_pass_last && 274 + c->curr_recovery_pass > c->opts.recovery_pass_last) 275 break; 276 277 + if (should_run_recovery_pass(c, pass)) { 278 spin_unlock_irq(&c->recovery_pass_lock); 279 + ret = bch2_run_recovery_pass(c, pass) ?: 280 + bch2_journal_flush(&c->journal); 281 + 282 + if (!ret && !test_bit(BCH_FS_error, &c->flags)) 283 + bch2_clear_recovery_pass_required(c, pass); 284 + spin_lock_irq(&c->recovery_pass_lock); 285 + 286 + if (c->next_recovery_pass < c->curr_recovery_pass) { 287 + /* 288 + * bch2_run_explicit_recovery_pass() was called: we 289 + * can't always catch -BCH_ERR_restart_recovery because 290 + * it may have been called from another thread (btree 291 + * node read completion) 292 + */ 293 + ret = 0; 294 + c->recovery_passes_complete &= ~(~0ULL << c->curr_recovery_pass); 295 + } else { 296 + c->recovery_passes_complete |= BIT_ULL(pass); 297 + c->recovery_pass_done = max(c->recovery_pass_done, pass); 298 + } 299 } 300 301 c->curr_recovery_pass = c->next_recovery_pass; 302 + 303 + if (prev_done <= BCH_RECOVERY_PASS_check_snapshots && 304 + c->recovery_pass_done > BCH_RECOVERY_PASS_check_snapshots) { 305 + bch2_copygc_wakeup(c); 306 + bch2_rebalance_wakeup(c); 307 + } 308 } 309 + 310 + spin_unlock_irq(&c->recovery_pass_lock); 311 312 return ret; 313 }
+1 -1
fs/bcachefs/snapshot.c
··· 396 u32 subvol = 0, s; 397 398 rcu_read_lock(); 399 - while (id) { 400 s = snapshot_t(c, id)->subvol; 401 402 if (s && (!subvol || s < subvol))
··· 396 u32 subvol = 0, s; 397 398 rcu_read_lock(); 399 + while (id && bch2_snapshot_exists(c, id)) { 400 s = snapshot_t(c, id)->subvol; 401 402 if (s && (!subvol || s < subvol))
+2 -3
fs/bcachefs/str_hash.h
··· 33 34 struct bch_hash_info { 35 u8 type; 36 - struct unicode_map *cf_encoding; 37 /* 38 * For crc32 or crc64 string hashes the first key value of 39 * the siphash_key (k0) is used as the key. ··· 44 static inline struct bch_hash_info 45 bch2_hash_info_init(struct bch_fs *c, const struct bch_inode_unpacked *bi) 46 { 47 - /* XXX ick */ 48 struct bch_hash_info info = { 49 .type = INODE_STR_HASH(bi), 50 #ifdef CONFIG_UNICODE 51 - .cf_encoding = !!(bi->bi_flags & BCH_INODE_casefolded) ? c->cf_encoding : NULL, 52 #endif 53 .siphash_key = { .k0 = bi->bi_hash_seed } 54 };
··· 33 34 struct bch_hash_info { 35 u8 type; 36 + struct unicode_map *cf_encoding; 37 /* 38 * For crc32 or crc64 string hashes the first key value of 39 * the siphash_key (k0) is used as the key. ··· 44 static inline struct bch_hash_info 45 bch2_hash_info_init(struct bch_fs *c, const struct bch_inode_unpacked *bi) 46 { 47 struct bch_hash_info info = { 48 .type = INODE_STR_HASH(bi), 49 #ifdef CONFIG_UNICODE 50 + .cf_encoding = bch2_inode_casefold(c, bi) ? c->cf_encoding : NULL, 51 #endif 52 .siphash_key = { .k0 = bi->bi_hash_seed } 53 };
+2 -1
fs/bcachefs/super-io.c
··· 1102 prt_str(&buf, ")"); 1103 bch2_fs_fatal_error(c, ": %s", buf.buf); 1104 printbuf_exit(&buf); 1105 - return -BCH_ERR_sb_not_downgraded; 1106 } 1107 1108 darray_for_each(online_devices, ca) {
··· 1102 prt_str(&buf, ")"); 1103 bch2_fs_fatal_error(c, ": %s", buf.buf); 1104 printbuf_exit(&buf); 1105 + ret = -BCH_ERR_sb_not_downgraded; 1106 + goto out; 1107 } 1108 1109 darray_for_each(online_devices, ca) {
+66 -90
fs/bcachefs/super.c
··· 418 return ret; 419 } 420 421 - static int bch2_fs_read_write_late(struct bch_fs *c) 422 - { 423 - int ret; 424 - 425 - /* 426 - * Data move operations can't run until after check_snapshots has 427 - * completed, and bch2_snapshot_is_ancestor() is available. 428 - * 429 - * Ideally we'd start copygc/rebalance earlier instead of waiting for 430 - * all of recovery/fsck to complete: 431 - */ 432 - ret = bch2_copygc_start(c); 433 - if (ret) { 434 - bch_err(c, "error starting copygc thread"); 435 - return ret; 436 - } 437 - 438 - ret = bch2_rebalance_start(c); 439 - if (ret) { 440 - bch_err(c, "error starting rebalance thread"); 441 - return ret; 442 - } 443 - 444 - return 0; 445 - } 446 - 447 static int __bch2_fs_read_write(struct bch_fs *c, bool early) 448 { 449 int ret; ··· 440 441 clear_bit(BCH_FS_clean_shutdown, &c->flags); 442 443 - /* 444 - * First journal write must be a flush write: after a clean shutdown we 445 - * don't read the journal, so the first journal write may end up 446 - * overwriting whatever was there previously, and there must always be 447 - * at least one non-flush write in the journal or recovery will fail: 448 - */ 449 - set_bit(JOURNAL_need_flush_write, &c->journal.flags); 450 - set_bit(JOURNAL_running, &c->journal.flags); 451 - 452 __for_each_online_member(c, ca, BIT(BCH_MEMBER_STATE_rw), READ) { 453 bch2_dev_allocator_add(c, ca); 454 percpu_ref_reinit(&ca->io_ref[WRITE]); 455 } 456 bch2_recalc_capacity(c); 457 458 ret = bch2_fs_mark_dirty(c); 459 if (ret) 460 goto err; 461 - 462 - spin_lock(&c->journal.lock); 463 - bch2_journal_space_available(&c->journal); 464 - spin_unlock(&c->journal.lock); 465 466 ret = bch2_journal_reclaim_start(&c->journal); 467 if (ret) ··· 477 atomic_long_inc(&c->writes[i]); 478 } 479 #endif 480 - if (!early) { 481 - ret = bch2_fs_read_write_late(c); 482 - if (ret) 483 - goto err; 484 } 485 486 bch2_do_discards(c); ··· 533 534 bch2_find_btree_nodes_exit(&c->found_btree_nodes); 535 bch2_free_pending_node_rewrites(c); 536 bch2_fs_accounting_exit(c); 537 bch2_fs_sb_errors_exit(c); 538 bch2_fs_counters_exit(c); ··· 1004 printbuf_exit(&p); 1005 } 1006 1007 int bch2_fs_start(struct bch_fs *c) 1008 { 1009 time64_t now = ktime_get_real_seconds(); 1010 int ret = 0; 1011 1012 print_mount_opts(c); 1013 1014 down_write(&c->state_lock); 1015 mutex_lock(&c->sb_lock); ··· 1100 wake_up(&c->ro_ref_wait); 1101 1102 down_write(&c->state_lock); 1103 - if (c->opts.read_only) { 1104 bch2_fs_read_only(c); 1105 - } else { 1106 - ret = !test_bit(BCH_FS_rw, &c->flags) 1107 - ? 
bch2_fs_read_write(c) 1108 - : bch2_fs_read_write_late(c); 1109 - } 1110 up_write(&c->state_lock); 1111 1112 err: ··· 1515 1516 printbuf_exit(&name); 1517 1518 - rebalance_wakeup(c); 1519 return 0; 1520 } 1521 ··· 1574 } 1575 } 1576 1577 - static bool bch2_fs_may_start(struct bch_fs *c) 1578 - { 1579 - struct bch_dev *ca; 1580 - unsigned i, flags = 0; 1581 - 1582 - if (c->opts.very_degraded) 1583 - flags |= BCH_FORCE_IF_DEGRADED|BCH_FORCE_IF_LOST; 1584 - 1585 - if (c->opts.degraded) 1586 - flags |= BCH_FORCE_IF_DEGRADED; 1587 - 1588 - if (!c->opts.degraded && 1589 - !c->opts.very_degraded) { 1590 - mutex_lock(&c->sb_lock); 1591 - 1592 - for (i = 0; i < c->disk_sb.sb->nr_devices; i++) { 1593 - if (!bch2_member_exists(c->disk_sb.sb, i)) 1594 - continue; 1595 - 1596 - ca = bch2_dev_locked(c, i); 1597 - 1598 - if (!bch2_dev_is_online(ca) && 1599 - (ca->mi.state == BCH_MEMBER_STATE_rw || 1600 - ca->mi.state == BCH_MEMBER_STATE_ro)) { 1601 - mutex_unlock(&c->sb_lock); 1602 - return false; 1603 - } 1604 - } 1605 - mutex_unlock(&c->sb_lock); 1606 - } 1607 - 1608 - return bch2_have_enough_devs(c, bch2_online_devs(c), flags, true); 1609 - } 1610 - 1611 static void __bch2_dev_read_only(struct bch_fs *c, struct bch_dev *ca) 1612 { 1613 bch2_dev_io_ref_stop(ca, WRITE); ··· 1627 if (new_state == BCH_MEMBER_STATE_rw) 1628 __bch2_dev_read_write(c, ca); 1629 1630 - rebalance_wakeup(c); 1631 1632 return ret; 1633 } ··· 2208 } 2209 } 2210 up_write(&c->state_lock); 2211 - 2212 - if (!bch2_fs_may_start(c)) { 2213 - ret = -BCH_ERR_insufficient_devices_to_start; 2214 - goto err_print; 2215 - } 2216 2217 if (!c->opts.nostart) { 2218 ret = bch2_fs_start(c);
··· 418 return ret; 419 } 420 421 static int __bch2_fs_read_write(struct bch_fs *c, bool early) 422 { 423 int ret; ··· 466 467 clear_bit(BCH_FS_clean_shutdown, &c->flags); 468 469 __for_each_online_member(c, ca, BIT(BCH_MEMBER_STATE_rw), READ) { 470 bch2_dev_allocator_add(c, ca); 471 percpu_ref_reinit(&ca->io_ref[WRITE]); 472 } 473 bch2_recalc_capacity(c); 474 475 + /* 476 + * First journal write must be a flush write: after a clean shutdown we 477 + * don't read the journal, so the first journal write may end up 478 + * overwriting whatever was there previously, and there must always be 479 + * at least one non-flush write in the journal or recovery will fail: 480 + */ 481 + spin_lock(&c->journal.lock); 482 + set_bit(JOURNAL_need_flush_write, &c->journal.flags); 483 + set_bit(JOURNAL_running, &c->journal.flags); 484 + bch2_journal_space_available(&c->journal); 485 + spin_unlock(&c->journal.lock); 486 + 487 ret = bch2_fs_mark_dirty(c); 488 if (ret) 489 goto err; 490 491 ret = bch2_journal_reclaim_start(&c->journal); 492 if (ret) ··· 504 atomic_long_inc(&c->writes[i]); 505 } 506 #endif 507 + 508 + ret = bch2_copygc_start(c); 509 + if (ret) { 510 + bch_err_msg(c, ret, "error starting copygc thread"); 511 + goto err; 512 + } 513 + 514 + ret = bch2_rebalance_start(c); 515 + if (ret) { 516 + bch_err_msg(c, ret, "error starting rebalance thread"); 517 + goto err; 518 } 519 520 bch2_do_discards(c); ··· 553 554 bch2_find_btree_nodes_exit(&c->found_btree_nodes); 555 bch2_free_pending_node_rewrites(c); 556 + bch2_free_fsck_errs(c); 557 bch2_fs_accounting_exit(c); 558 bch2_fs_sb_errors_exit(c); 559 bch2_fs_counters_exit(c); ··· 1023 printbuf_exit(&p); 1024 } 1025 1026 + static bool bch2_fs_may_start(struct bch_fs *c) 1027 + { 1028 + struct bch_dev *ca; 1029 + unsigned i, flags = 0; 1030 + 1031 + if (c->opts.very_degraded) 1032 + flags |= BCH_FORCE_IF_DEGRADED|BCH_FORCE_IF_LOST; 1033 + 1034 + if (c->opts.degraded) 1035 + flags |= BCH_FORCE_IF_DEGRADED; 1036 + 1037 + if (!c->opts.degraded && 1038 + !c->opts.very_degraded) { 1039 + mutex_lock(&c->sb_lock); 1040 + 1041 + for (i = 0; i < c->disk_sb.sb->nr_devices; i++) { 1042 + if (!bch2_member_exists(c->disk_sb.sb, i)) 1043 + continue; 1044 + 1045 + ca = bch2_dev_locked(c, i); 1046 + 1047 + if (!bch2_dev_is_online(ca) && 1048 + (ca->mi.state == BCH_MEMBER_STATE_rw || 1049 + ca->mi.state == BCH_MEMBER_STATE_ro)) { 1050 + mutex_unlock(&c->sb_lock); 1051 + return false; 1052 + } 1053 + } 1054 + mutex_unlock(&c->sb_lock); 1055 + } 1056 + 1057 + return bch2_have_enough_devs(c, bch2_online_devs(c), flags, true); 1058 + } 1059 + 1060 int bch2_fs_start(struct bch_fs *c) 1061 { 1062 time64_t now = ktime_get_real_seconds(); 1063 int ret = 0; 1064 1065 print_mount_opts(c); 1066 + 1067 + if (!bch2_fs_may_start(c)) 1068 + return -BCH_ERR_insufficient_devices_to_start; 1069 1070 down_write(&c->state_lock); 1071 mutex_lock(&c->sb_lock); ··· 1082 wake_up(&c->ro_ref_wait); 1083 1084 down_write(&c->state_lock); 1085 + if (c->opts.read_only) 1086 bch2_fs_read_only(c); 1087 + else if (!test_bit(BCH_FS_rw, &c->flags)) 1088 + ret = bch2_fs_read_write(c); 1089 up_write(&c->state_lock); 1090 1091 err: ··· 1500 1501 printbuf_exit(&name); 1502 1503 + bch2_rebalance_wakeup(c); 1504 return 0; 1505 } 1506 ··· 1559 } 1560 } 1561 1562 static void __bch2_dev_read_only(struct bch_fs *c, struct bch_dev *ca) 1563 { 1564 bch2_dev_io_ref_stop(ca, WRITE); ··· 1646 if (new_state == BCH_MEMBER_STATE_rw) 1647 __bch2_dev_read_write(c, ca); 1648 1649 + bch2_rebalance_wakeup(c); 1650 1651 return 
ret; 1652 } ··· 2227 } 2228 } 2229 up_write(&c->state_lock); 2230 2231 if (!c->opts.nostart) { 2232 ret = bch2_fs_start(c);
+3 -4
fs/bcachefs/sysfs.c
··· 654 bch2_set_rebalance_needs_scan(c, 0); 655 656 if (v && id == Opt_rebalance_enabled) 657 - rebalance_wakeup(c); 658 659 - if (v && id == Opt_copygc_enabled && 660 - c->copygc_thread) 661 - wake_up_process(c->copygc_thread); 662 663 if (id == Opt_discard && !ca) { 664 mutex_lock(&c->sb_lock);
··· 654 bch2_set_rebalance_needs_scan(c, 0); 655 656 if (v && id == Opt_rebalance_enabled) 657 + bch2_rebalance_wakeup(c); 658 659 + if (v && id == Opt_copygc_enabled) 660 + bch2_copygc_wakeup(c); 661 662 if (id == Opt_discard && !ca) { 663 mutex_lock(&c->sb_lock);
+4
fs/bcachefs/tests.c
··· 342 */ 343 static int test_peek_end(struct bch_fs *c, u64 nr) 344 { 345 struct btree_trans *trans = bch2_trans_get(c); 346 struct btree_iter iter; 347 struct bkey_s_c k; ··· 364 365 static int test_peek_end_extents(struct bch_fs *c, u64 nr) 366 { 367 struct btree_trans *trans = bch2_trans_get(c); 368 struct btree_iter iter; 369 struct bkey_s_c k;
··· 342 */ 343 static int test_peek_end(struct bch_fs *c, u64 nr) 344 { 345 + delete_test_keys(c); 346 + 347 struct btree_trans *trans = bch2_trans_get(c); 348 struct btree_iter iter; 349 struct bkey_s_c k; ··· 362 363 static int test_peek_end_extents(struct bch_fs *c, u64 nr) 364 { 365 + delete_test_keys(c); 366 + 367 struct btree_trans *trans = bch2_trans_get(c); 368 struct btree_iter iter; 369 struct bkey_s_c k;
+38
fs/bcachefs/util.h
··· 739 *--dst = *src++; 740 } 741 742 #endif /* _BCACHEFS_UTIL_H */
··· 739 *--dst = *src++; 740 } 741 742 + #define set_flags(_map, _in, _out) \ 743 + do { \ 744 + unsigned _i; \ 745 + \ 746 + for (_i = 0; _i < ARRAY_SIZE(_map); _i++) \ 747 + if ((_in) & (1 << _i)) \ 748 + (_out) |= _map[_i]; \ 749 + else \ 750 + (_out) &= ~_map[_i]; \ 751 + } while (0) 752 + 753 + #define map_flags(_map, _in) \ 754 + ({ \ 755 + unsigned _out = 0; \ 756 + \ 757 + set_flags(_map, _in, _out); \ 758 + _out; \ 759 + }) 760 + 761 + #define map_flags_rev(_map, _in) \ 762 + ({ \ 763 + unsigned _i, _out = 0; \ 764 + \ 765 + for (_i = 0; _i < ARRAY_SIZE(_map); _i++) \ 766 + if ((_in) & _map[_i]) { \ 767 + (_out) |= 1 << _i; \ 768 + (_in) &= ~_map[_i]; \ 769 + } \ 770 + (_out); \ 771 + }) 772 + 773 + #define map_defined(_map) \ 774 + ({ \ 775 + unsigned _in = ~0; \ 776 + \ 777 + map_flags_rev(_map, _in); \ 778 + }) 779 + 780 #endif /* _BCACHEFS_UTIL_H */
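The new helpers translate a bitmask through a per-bit lookup table: bit i of the input selects _map[i] in the output, and map_flags_rev() walks the table in the other direction. A rough standalone usage sketch (GCC/Clang statement expressions; the mapping table below is made up for illustration):

    #include <stdio.h>

    #define ARRAY_SIZE(a)   (sizeof(a) / sizeof((a)[0]))

    /* same shape as the set_flags()/map_flags() helpers above */
    #define set_flags(_map, _in, _out)                              \
    do {                                                            \
            unsigned _i;                                            \
                                                                    \
            for (_i = 0; _i < ARRAY_SIZE(_map); _i++)               \
                    if ((_in) & (1 << _i))                          \
                            (_out) |= _map[_i];                     \
                    else                                            \
                            (_out) &= ~_map[_i];                    \
    } while (0)

    #define map_flags(_map, _in)                                    \
    ({                                                              \
            unsigned _out = 0;                                      \
                                                                    \
            set_flags(_map, _in, _out);                             \
            _out;                                                   \
    })

    /* hypothetical table: in-memory flag bit i maps to an on-disk bit */
    static const unsigned disk_flag_map[] = {
            1 << 4,         /* bit 0 -> on-disk bit 4 */
            1 << 0,         /* bit 1 -> on-disk bit 0 */
            1 << 9,         /* bit 2 -> on-disk bit 9 */
    };

    int main(void)
    {
            unsigned in_memory = (1 << 0) | (1 << 2);

            /* prints 0x210: on-disk bits 4 and 9 */
            printf("0x%x\n", map_flags(disk_flag_map, in_memory));
            return 0;
    }

The statement-expression form lets map_flags() be used as an expression while set_flags() stays a statement that updates an existing output word in place.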
+7 -2
fs/btrfs/file.c
··· 2104 * will always return true. 2105 * So here we need to do extra page alignment for 2106 * filemap_range_has_page(). 2107 */ 2108 const u64 page_lockstart = round_up(lockstart, PAGE_SIZE); 2109 - const u64 page_lockend = round_down(lockend + 1, PAGE_SIZE) - 1; 2110 2111 while (1) { 2112 truncate_pagecache_range(inode, lockstart, lockend); 2113 2114 lock_extent(&BTRFS_I(inode)->io_tree, lockstart, lockend, 2115 cached_state); 2116 /* 2117 * We can't have ordered extents in the range, nor dirty/writeback 2118 * pages, because we have locked the inode's VFS lock in exclusive ··· 2129 * we do, unlock the range and retry. 2130 */ 2131 if (!filemap_range_has_page(inode->i_mapping, page_lockstart, 2132 - page_lockend)) 2133 break; 2134 2135 unlock_extent(&BTRFS_I(inode)->io_tree, lockstart, lockend,
··· 2104 * will always return true. 2105 * So here we need to do extra page alignment for 2106 * filemap_range_has_page(). 2107 + * 2108 + * And do not decrease page_lockend right now, as it can be 0. 2109 */ 2110 const u64 page_lockstart = round_up(lockstart, PAGE_SIZE); 2111 + const u64 page_lockend = round_down(lockend + 1, PAGE_SIZE); 2112 2113 while (1) { 2114 truncate_pagecache_range(inode, lockstart, lockend); 2115 2116 lock_extent(&BTRFS_I(inode)->io_tree, lockstart, lockend, 2117 cached_state); 2118 + /* The same page or adjacent pages. */ 2119 + if (page_lockend <= page_lockstart) 2120 + break; 2121 /* 2122 * We can't have ordered extents in the range, nor dirty/writeback 2123 * pages, because we have locked the inode's VFS lock in exclusive ··· 2124 * we do, unlock the range and retry. 2125 */ 2126 if (!filemap_range_has_page(inode->i_mapping, page_lockstart, 2127 + page_lockend - 1)) 2128 break; 2129 2130 unlock_extent(&BTRFS_I(inode)->io_tree, lockstart, lockend,
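The reason page_lockend is no longer decremented up front: round_down(lockend + 1, PAGE_SIZE) is 0 whenever the punched range ends inside the first page, and subtracting 1 from that would wrap a u64 to its maximum value, defeating the "same or adjacent pages" check. A small standalone sketch of the arithmetic, assuming 4 KiB pages:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE               4096ULL
    #define round_up(x, a)          ((((x) + (a) - 1) / (a)) * (a))
    #define round_down(x, a)        (((x) / (a)) * (a))

    int main(void)
    {
            /* hole punch entirely inside the first page */
            uint64_t lockstart = 512, lockend = 2047;

            uint64_t page_lockstart = round_up(lockstart, PAGE_SIZE);    /* 4096 */
            uint64_t page_lockend = round_down(lockend + 1, PAGE_SIZE);  /*    0 */

            /* the old "page_lockend - 1" here would wrap to UINT64_MAX */
            if (page_lockend <= page_lockstart)
                    printf("no full page in range, skip filemap_range_has_page()\n");
            return 0;
    }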
+1 -1
fs/btrfs/relocation.c
··· 3803 if (ret) { 3804 if (inode) 3805 iput(&inode->vfs_inode); 3806 - inode = ERR_PTR(ret); 3807 } 3808 return &inode->vfs_inode; 3809 }
··· 3803 if (ret) { 3804 if (inode) 3805 iput(&inode->vfs_inode); 3806 + return ERR_PTR(ret); 3807 } 3808 return &inode->vfs_inode; 3809 }
+2 -2
fs/btrfs/subpage.c
··· 204 btrfs_blocks_per_folio(fs_info, folio); \ 205 \ 206 btrfs_subpage_assert(fs_info, folio, start, len); \ 207 - __start_bit = offset_in_page(start) >> fs_info->sectorsize_bits; \ 208 __start_bit += blocks_per_folio * btrfs_bitmap_nr_##name; \ 209 __start_bit; \ 210 }) ··· 666 btrfs_blocks_per_folio(fs_info, folio); \ 667 const struct btrfs_subpage *subpage = folio_get_private(folio); \ 668 \ 669 - ASSERT(blocks_per_folio < BITS_PER_LONG); \ 670 *dst = bitmap_read(subpage->bitmaps, \ 671 blocks_per_folio * btrfs_bitmap_nr_##name, \ 672 blocks_per_folio); \
··· 204 btrfs_blocks_per_folio(fs_info, folio); \ 205 \ 206 btrfs_subpage_assert(fs_info, folio, start, len); \ 207 + __start_bit = offset_in_folio(folio, start) >> fs_info->sectorsize_bits; \ 208 __start_bit += blocks_per_folio * btrfs_bitmap_nr_##name; \ 209 __start_bit; \ 210 }) ··· 666 btrfs_blocks_per_folio(fs_info, folio); \ 667 const struct btrfs_subpage *subpage = folio_get_private(folio); \ 668 \ 669 + ASSERT(blocks_per_folio <= BITS_PER_LONG); \ 670 *dst = bitmap_read(subpage->bitmaps, \ 671 blocks_per_folio * btrfs_bitmap_nr_##name, \ 672 blocks_per_folio); \
+1 -1
fs/btrfs/tree-checker.c
··· 2235 btrfs_err(fs_info, 2236 "tree level mismatch detected, bytenr=%llu level expected=%u has=%u", 2237 eb->start, check->level, found_level); 2238 - return -EIO; 2239 } 2240 2241 if (!check->has_first_key)
··· 2235 btrfs_err(fs_info, 2236 "tree level mismatch detected, bytenr=%llu level expected=%u has=%u", 2237 eb->start, check->level, found_level); 2238 + return -EUCLEAN; 2239 } 2240 2241 if (!check->has_first_key)
+16 -3
fs/btrfs/zoned.c
··· 1277 1278 static int btrfs_load_zone_info(struct btrfs_fs_info *fs_info, int zone_idx, 1279 struct zone_info *info, unsigned long *active, 1280 - struct btrfs_chunk_map *map) 1281 { 1282 struct btrfs_dev_replace *dev_replace = &fs_info->dev_replace; 1283 struct btrfs_device *device; ··· 1307 return 0; 1308 } 1309 1310 /* This zone will be used for allocation, so mark this zone non-empty. */ 1311 btrfs_dev_clear_zone_empty(device, info->physical); 1312 ··· 1321 * to determine the allocation offset within the zone. 1322 */ 1323 WARN_ON(!IS_ALIGNED(info->physical, fs_info->zone_size)); 1324 nofs_flag = memalloc_nofs_save(); 1325 ret = btrfs_get_dev_zone(device, info->physical, &zone); 1326 memalloc_nofs_restore(nofs_flag); ··· 1602 } 1603 1604 for (i = 0; i < map->num_stripes; i++) { 1605 - ret = btrfs_load_zone_info(fs_info, i, &zone_info[i], active, map); 1606 if (ret) 1607 goto out; 1608 ··· 1673 * stripe. 1674 */ 1675 cache->alloc_offset = cache->zone_capacity; 1676 - ret = 0; 1677 } 1678 1679 out:
··· 1277 1278 static int btrfs_load_zone_info(struct btrfs_fs_info *fs_info, int zone_idx, 1279 struct zone_info *info, unsigned long *active, 1280 + struct btrfs_chunk_map *map, bool new) 1281 { 1282 struct btrfs_dev_replace *dev_replace = &fs_info->dev_replace; 1283 struct btrfs_device *device; ··· 1307 return 0; 1308 } 1309 1310 + ASSERT(!new || btrfs_dev_is_empty_zone(device, info->physical)); 1311 + 1312 /* This zone will be used for allocation, so mark this zone non-empty. */ 1313 btrfs_dev_clear_zone_empty(device, info->physical); 1314 ··· 1319 * to determine the allocation offset within the zone. 1320 */ 1321 WARN_ON(!IS_ALIGNED(info->physical, fs_info->zone_size)); 1322 + 1323 + if (new) { 1324 + sector_t capacity; 1325 + 1326 + capacity = bdev_zone_capacity(device->bdev, info->physical >> SECTOR_SHIFT); 1327 + up_read(&dev_replace->rwsem); 1328 + info->alloc_offset = 0; 1329 + info->capacity = capacity << SECTOR_SHIFT; 1330 + 1331 + return 0; 1332 + } 1333 + 1334 nofs_flag = memalloc_nofs_save(); 1335 ret = btrfs_get_dev_zone(device, info->physical, &zone); 1336 memalloc_nofs_restore(nofs_flag); ··· 1588 } 1589 1590 for (i = 0; i < map->num_stripes; i++) { 1591 + ret = btrfs_load_zone_info(fs_info, i, &zone_info[i], active, map, new); 1592 if (ret) 1593 goto out; 1594 ··· 1659 * stripe. 1660 */ 1661 cache->alloc_offset = cache->zone_capacity; 1662 } 1663 1664 out:
+54 -19
fs/buffer.c
··· 176 } 177 EXPORT_SYMBOL(end_buffer_write_sync); 178 179 - /* 180 - * Various filesystems appear to want __find_get_block to be non-blocking. 181 - * But it's the page lock which protects the buffers. To get around this, 182 - * we get exclusion from try_to_free_buffers with the blockdev mapping's 183 - * i_private_lock. 184 - * 185 - * Hack idea: for the blockdev mapping, i_private_lock contention 186 - * may be quite high. This code could TryLock the page, and if that 187 - * succeeds, there is no need to take i_private_lock. 188 - */ 189 static struct buffer_head * 190 - __find_get_block_slow(struct block_device *bdev, sector_t block) 191 { 192 struct address_space *bd_mapping = bdev->bd_mapping; 193 const int blkbits = bd_mapping->host->i_blkbits; ··· 194 if (IS_ERR(folio)) 195 goto out; 196 197 - spin_lock(&bd_mapping->i_private_lock); 198 head = folio_buffers(folio); 199 if (!head) 200 goto out_unlock; 201 bh = head; 202 do { 203 if (!buffer_mapped(bh)) ··· 244 1 << blkbits); 245 } 246 out_unlock: 247 - spin_unlock(&bd_mapping->i_private_lock); 248 folio_put(folio); 249 out: 250 return ret; ··· 667 void write_boundary_block(struct block_device *bdev, 668 sector_t bblock, unsigned blocksize) 669 { 670 - struct buffer_head *bh = __find_get_block(bdev, bblock + 1, blocksize); 671 if (bh) { 672 if (buffer_dirty(bh)) 673 write_dirty_buffer(bh, 0); ··· 1399 /* 1400 * Perform a pagecache lookup for the matching buffer. If it's there, refresh 1401 * it in the LRU and mark it as accessed. If it is not present then return 1402 - * NULL 1403 */ 1404 - struct buffer_head * 1405 - __find_get_block(struct block_device *bdev, sector_t block, unsigned size) 1406 { 1407 struct buffer_head *bh = lookup_bh_lru(bdev, block, size); 1408 1409 if (bh == NULL) { 1410 /* __find_get_block_slow will mark the page accessed */ 1411 - bh = __find_get_block_slow(bdev, block); 1412 if (bh) 1413 bh_lru_install(bh); 1414 } else ··· 1418 1419 return bh; 1420 } 1421 EXPORT_SYMBOL(__find_get_block); 1422 1423 /** 1424 * bdev_getblk - Get a buffer_head in a block device's buffer cache. ··· 1452 struct buffer_head *bdev_getblk(struct block_device *bdev, sector_t block, 1453 unsigned size, gfp_t gfp) 1454 { 1455 - struct buffer_head *bh = __find_get_block(bdev, block, size); 1456 1457 might_alloc(gfp); 1458 if (bh)
··· 176 } 177 EXPORT_SYMBOL(end_buffer_write_sync); 178 179 static struct buffer_head * 180 + __find_get_block_slow(struct block_device *bdev, sector_t block, bool atomic) 181 { 182 struct address_space *bd_mapping = bdev->bd_mapping; 183 const int blkbits = bd_mapping->host->i_blkbits; ··· 204 if (IS_ERR(folio)) 205 goto out; 206 207 + /* 208 + * Folio lock protects the buffers. Callers that cannot block 209 + * will fallback to serializing vs try_to_free_buffers() via 210 + * the i_private_lock. 211 + */ 212 + if (atomic) 213 + spin_lock(&bd_mapping->i_private_lock); 214 + else 215 + folio_lock(folio); 216 + 217 head = folio_buffers(folio); 218 if (!head) 219 goto out_unlock; 220 + /* 221 + * Upon a noref migration, the folio lock serializes here; 222 + * otherwise bail. 223 + */ 224 + if (test_bit_acquire(BH_Migrate, &head->b_state)) { 225 + WARN_ON(!atomic); 226 + goto out_unlock; 227 + } 228 + 229 bh = head; 230 do { 231 if (!buffer_mapped(bh)) ··· 236 1 << blkbits); 237 } 238 out_unlock: 239 + if (atomic) 240 + spin_unlock(&bd_mapping->i_private_lock); 241 + else 242 + folio_unlock(folio); 243 folio_put(folio); 244 out: 245 return ret; ··· 656 void write_boundary_block(struct block_device *bdev, 657 sector_t bblock, unsigned blocksize) 658 { 659 + struct buffer_head *bh; 660 + 661 + bh = __find_get_block_nonatomic(bdev, bblock + 1, blocksize); 662 if (bh) { 663 if (buffer_dirty(bh)) 664 write_dirty_buffer(bh, 0); ··· 1386 /* 1387 * Perform a pagecache lookup for the matching buffer. If it's there, refresh 1388 * it in the LRU and mark it as accessed. If it is not present then return 1389 + * NULL. Atomic context callers may also return NULL if the buffer is being 1390 + * migrated; similarly the page is not marked accessed either. 1391 */ 1392 + static struct buffer_head * 1393 + find_get_block_common(struct block_device *bdev, sector_t block, 1394 + unsigned size, bool atomic) 1395 { 1396 struct buffer_head *bh = lookup_bh_lru(bdev, block, size); 1397 1398 if (bh == NULL) { 1399 /* __find_get_block_slow will mark the page accessed */ 1400 + bh = __find_get_block_slow(bdev, block, atomic); 1401 if (bh) 1402 bh_lru_install(bh); 1403 } else ··· 1403 1404 return bh; 1405 } 1406 + 1407 + struct buffer_head * 1408 + __find_get_block(struct block_device *bdev, sector_t block, unsigned size) 1409 + { 1410 + return find_get_block_common(bdev, block, size, true); 1411 + } 1412 EXPORT_SYMBOL(__find_get_block); 1413 + 1414 + /* same as __find_get_block() but allows sleeping contexts */ 1415 + struct buffer_head * 1416 + __find_get_block_nonatomic(struct block_device *bdev, sector_t block, 1417 + unsigned size) 1418 + { 1419 + return find_get_block_common(bdev, block, size, false); 1420 + } 1421 + EXPORT_SYMBOL(__find_get_block_nonatomic); 1422 1423 /** 1424 * bdev_getblk - Get a buffer_head in a block device's buffer cache. ··· 1422 struct buffer_head *bdev_getblk(struct block_device *bdev, sector_t block, 1423 unsigned size, gfp_t gfp) 1424 { 1425 + struct buffer_head *bh; 1426 + 1427 + if (gfpflags_allow_blocking(gfp)) 1428 + bh = __find_get_block_nonatomic(bdev, block, size); 1429 + else 1430 + bh = __find_get_block(bdev, block, size); 1431 1432 might_alloc(gfp); 1433 if (bh)
+1 -1
fs/ceph/inode.c
··· 2367 2368 /* Try to writeback the dirty pagecaches */ 2369 if (issued & (CEPH_CAP_FILE_BUFFER)) { 2370 - loff_t lend = orig_pos + CEPH_FSCRYPT_BLOCK_SHIFT - 1; 2371 2372 ret = filemap_write_and_wait_range(inode->i_mapping, 2373 orig_pos, lend);
··· 2367 2368 /* Try to writeback the dirty pagecaches */ 2369 if (issued & (CEPH_CAP_FILE_BUFFER)) { 2370 + loff_t lend = orig_pos + CEPH_FSCRYPT_BLOCK_SIZE - 1; 2371 2372 ret = filemap_write_and_wait_range(inode->i_mapping, 2373 orig_pos, lend);
+2 -1
fs/ext4/ialloc.c
··· 691 if (!bh || !buffer_uptodate(bh)) 692 /* 693 * If the block is not in the buffer cache, then it 694 - * must have been written out. 695 */ 696 goto out; 697
··· 691 if (!bh || !buffer_uptodate(bh)) 692 /* 693 * If the block is not in the buffer cache, then it 694 + * must have been written out, or, most unlikely, is 695 + * being migrated - false failure should be OK here. 696 */ 697 goto out; 698
+2 -1
fs/ext4/mballoc.c
··· 6642 for (i = 0; i < count; i++) { 6643 cond_resched(); 6644 if (is_metadata) 6645 - bh = sb_find_get_block(inode->i_sb, block + i); 6646 ext4_forget(handle, is_metadata, inode, bh, block + i); 6647 } 6648 }
··· 6642 for (i = 0; i < count; i++) { 6643 cond_resched(); 6644 if (is_metadata) 6645 + bh = sb_find_get_block_nonatomic(inode->i_sb, 6646 + block + i); 6647 ext4_forget(handle, is_metadata, inode, bh, block + i); 6648 } 6649 }
+1 -1
fs/file.c
··· 26 27 #include "internal.h" 28 29 - bool __file_ref_put_badval(file_ref_t *ref, unsigned long cnt) 30 { 31 /* 32 * If the reference count was already in the dead zone, then this
··· 26 27 #include "internal.h" 28 29 + static noinline bool __file_ref_put_badval(file_ref_t *ref, unsigned long cnt) 30 { 31 /* 32 * If the reference count was already in the dead zone, then this
+9 -6
fs/jbd2/revoke.c
··· 345 bh = bh_in; 346 347 if (!bh) { 348 - bh = __find_get_block(bdev, blocknr, journal->j_blocksize); 349 if (bh) 350 BUFFER_TRACE(bh, "found on hash"); 351 } ··· 356 357 /* If there is a different buffer_head lying around in 358 * memory anywhere... */ 359 - bh2 = __find_get_block(bdev, blocknr, journal->j_blocksize); 360 if (bh2) { 361 /* ... and it has RevokeValid status... */ 362 if (bh2 != bh && buffer_revokevalid(bh2)) ··· 466 * state machine will get very upset later on. */ 467 if (need_cancel) { 468 struct buffer_head *bh2; 469 - bh2 = __find_get_block(bh->b_bdev, bh->b_blocknr, bh->b_size); 470 if (bh2) { 471 if (bh2 != bh) 472 clear_buffer_revoked(bh2); ··· 495 struct jbd2_revoke_record_s *record; 496 struct buffer_head *bh; 497 record = (struct jbd2_revoke_record_s *)list_entry; 498 - bh = __find_get_block(journal->j_fs_dev, 499 - record->blocknr, 500 - journal->j_blocksize); 501 if (bh) { 502 clear_buffer_revoked(bh); 503 __brelse(bh);
··· 345 bh = bh_in; 346 347 if (!bh) { 348 + bh = __find_get_block_nonatomic(bdev, blocknr, 349 + journal->j_blocksize); 350 if (bh) 351 BUFFER_TRACE(bh, "found on hash"); 352 } ··· 355 356 /* If there is a different buffer_head lying around in 357 * memory anywhere... */ 358 + bh2 = __find_get_block_nonatomic(bdev, blocknr, 359 + journal->j_blocksize); 360 if (bh2) { 361 /* ... and it has RevokeValid status... */ 362 if (bh2 != bh && buffer_revokevalid(bh2)) ··· 464 * state machine will get very upset later on. */ 465 if (need_cancel) { 466 struct buffer_head *bh2; 467 + bh2 = __find_get_block_nonatomic(bh->b_bdev, bh->b_blocknr, 468 + bh->b_size); 469 if (bh2) { 470 if (bh2 != bh) 471 clear_buffer_revoked(bh2); ··· 492 struct jbd2_revoke_record_s *record; 493 struct buffer_head *bh; 494 record = (struct jbd2_revoke_record_s *)list_entry; 495 + bh = __find_get_block_nonatomic(journal->j_fs_dev, 496 + record->blocknr, 497 + journal->j_blocksize); 498 if (bh) { 499 clear_buffer_revoked(bh); 500 __brelse(bh);
+36 -33
fs/namespace.c
··· 2826 struct vfsmount *mnt = path->mnt; 2827 struct dentry *dentry; 2828 struct mountpoint *mp = ERR_PTR(-ENOENT); 2829 2830 for (;;) { 2831 - struct mount *m; 2832 2833 if (beneath) { 2834 - m = real_mount(mnt); 2835 read_seqlock_excl(&mount_lock); 2836 - dentry = dget(m->mnt_mountpoint); 2837 read_sequnlock_excl(&mount_lock); 2838 } else { 2839 dentry = path->dentry; 2840 } 2841 2842 inode_lock(dentry->d_inode); 2843 - if (unlikely(cant_mount(dentry))) { 2844 - inode_unlock(dentry->d_inode); 2845 - goto out; 2846 - } 2847 - 2848 namespace_lock(); 2849 2850 - if (beneath && (!is_mounted(mnt) || m->mnt_mountpoint != dentry)) { 2851 namespace_unlock(); 2852 inode_unlock(dentry->d_inode); 2853 - goto out; 2854 } 2855 2856 mnt = lookup_mnt(path); 2857 - if (likely(!mnt)) 2858 break; 2859 - 2860 - namespace_unlock(); 2861 - inode_unlock(dentry->d_inode); 2862 - if (beneath) 2863 - dput(dentry); 2864 - path_put(path); 2865 - path->mnt = mnt; 2866 - path->dentry = dget(mnt->mnt_root); 2867 } 2868 - 2869 - mp = get_mountpoint(dentry); 2870 - if (IS_ERR(mp)) { 2871 - namespace_unlock(); 2872 - inode_unlock(dentry->d_inode); 2873 - } 2874 - 2875 - out: 2876 if (beneath) 2877 - dput(dentry); 2878 - 2879 return mp; 2880 } 2881 ··· 2892 2893 static void unlock_mount(struct mountpoint *where) 2894 { 2895 - struct dentry *dentry = where->m_dentry; 2896 - 2897 read_seqlock_excl(&mount_lock); 2898 put_mountpoint(where); 2899 read_sequnlock_excl(&mount_lock); 2900 - 2901 namespace_unlock(); 2902 - inode_unlock(dentry->d_inode); 2903 } 2904 2905 static int graft_tree(struct mount *mnt, struct mount *p, struct mountpoint *mp)
··· 2826 struct vfsmount *mnt = path->mnt; 2827 struct dentry *dentry; 2828 struct mountpoint *mp = ERR_PTR(-ENOENT); 2829 + struct path under = {}; 2830 2831 for (;;) { 2832 + struct mount *m = real_mount(mnt); 2833 2834 if (beneath) { 2835 + path_put(&under); 2836 read_seqlock_excl(&mount_lock); 2837 + under.mnt = mntget(&m->mnt_parent->mnt); 2838 + under.dentry = dget(m->mnt_mountpoint); 2839 read_sequnlock_excl(&mount_lock); 2840 + dentry = under.dentry; 2841 } else { 2842 dentry = path->dentry; 2843 } 2844 2845 inode_lock(dentry->d_inode); 2846 namespace_lock(); 2847 2848 + if (unlikely(cant_mount(dentry) || !is_mounted(mnt))) 2849 + break; // not to be mounted on 2850 + 2851 + if (beneath && unlikely(m->mnt_mountpoint != dentry || 2852 + &m->mnt_parent->mnt != under.mnt)) { 2853 namespace_unlock(); 2854 inode_unlock(dentry->d_inode); 2855 + continue; // got moved 2856 } 2857 2858 mnt = lookup_mnt(path); 2859 + if (unlikely(mnt)) { 2860 + namespace_unlock(); 2861 + inode_unlock(dentry->d_inode); 2862 + path_put(path); 2863 + path->mnt = mnt; 2864 + path->dentry = dget(mnt->mnt_root); 2865 + continue; // got overmounted 2866 + } 2867 + mp = get_mountpoint(dentry); 2868 + if (IS_ERR(mp)) 2869 break; 2870 + if (beneath) { 2871 + /* 2872 + * @under duplicates the references that will stay 2873 + * at least until namespace_unlock(), so the path_put() 2874 + * below is safe (and OK to do under namespace_lock - 2875 + * we are not dropping the final references here). 2876 + */ 2877 + path_put(&under); 2878 + } 2879 + return mp; 2880 } 2881 + namespace_unlock(); 2882 + inode_unlock(dentry->d_inode); 2883 if (beneath) 2884 + path_put(&under); 2885 return mp; 2886 } 2887 ··· 2886 2887 static void unlock_mount(struct mountpoint *where) 2888 { 2889 + inode_unlock(where->m_dentry->d_inode); 2890 read_seqlock_excl(&mount_lock); 2891 put_mountpoint(where); 2892 read_sequnlock_excl(&mount_lock); 2893 namespace_unlock(); 2894 } 2895 2896 static int graft_tree(struct mount *mnt, struct mount *p, struct mountpoint *mp)
+1 -1
fs/ocfs2/journal.c
··· 1249 } 1250 1251 for (i = 0; i < p_blocks; i++, p_blkno++) { 1252 - bh = __find_get_block(osb->sb->s_bdev, p_blkno, 1253 osb->sb->s_blocksize); 1254 /* block not cached. */ 1255 if (!bh)
··· 1249 } 1250 1251 for (i = 0; i < p_blocks; i++, p_blkno++) { 1252 + bh = __find_get_block_nonatomic(osb->sb->s_bdev, p_blkno, 1253 osb->sb->s_blocksize); 1254 /* block not cached. */ 1255 if (!bh)
+1 -1
fs/splice.c
··· 45 * here if set to avoid blocking other users of this pipe if splice is 46 * being done on it. 47 */ 48 - static noinline void noinline pipe_clear_nowait(struct file *file) 49 { 50 fmode_t fmode = READ_ONCE(file->f_mode); 51
··· 45 * here if set to avoid blocking other users of this pipe if splice is 46 * being done on it. 47 */ 48 + static noinline void pipe_clear_nowait(struct file *file) 49 { 50 fmode_t fmode = READ_ONCE(file->f_mode); 51
+2 -2
fs/xattr.c
··· 703 return error; 704 705 filename = getname_maybe_null(pathname, at_flags); 706 - if (!filename) { 707 CLASS(fd, f)(dfd); 708 if (fd_empty(f)) 709 error = -EBADF; ··· 847 return error; 848 849 filename = getname_maybe_null(pathname, at_flags); 850 - if (!filename) { 851 CLASS(fd, f)(dfd); 852 if (fd_empty(f)) 853 return -EBADF;
··· 703 return error; 704 705 filename = getname_maybe_null(pathname, at_flags); 706 + if (!filename && dfd >= 0) { 707 CLASS(fd, f)(dfd); 708 if (fd_empty(f)) 709 error = -EBADF; ··· 847 return error; 848 849 filename = getname_maybe_null(pathname, at_flags); 850 + if (!filename && dfd >= 0) { 851 CLASS(fd, f)(dfd); 852 if (fd_empty(f)) 853 return -EBADF;
+8 -2
fs/xfs/xfs_zone_gc.c
··· 170 xfs_zoned_need_gc( 171 struct xfs_mount *mp) 172 { 173 - s64 available, free; 174 175 if (!xfs_group_marked(mp, XG_TYPE_RTG, XFS_RTG_RECLAIMABLE)) 176 return false; ··· 184 return true; 185 186 free = xfs_estimate_freecounter(mp, XC_FREE_RTEXTENTS); 187 - if (available < mult_frac(free, mp->m_zonegc_low_space, 100)) 188 return true; 189 190 return false;
··· 170 xfs_zoned_need_gc( 171 struct xfs_mount *mp) 172 { 173 + s64 available, free, threshold; 174 + s32 remainder; 175 176 if (!xfs_group_marked(mp, XG_TYPE_RTG, XFS_RTG_RECLAIMABLE)) 177 return false; ··· 183 return true; 184 185 free = xfs_estimate_freecounter(mp, XC_FREE_RTEXTENTS); 186 + 187 + threshold = div_s64_rem(free, 100, &remainder); 188 + threshold = threshold * mp->m_zonegc_low_space + 189 + remainder * div_s64(mp->m_zonegc_low_space, 100); 190 + 191 + if (available < threshold) 192 return true; 193 194 return false;
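The replacement threshold is still roughly free * zonegc_low_space / 100; it is just assembled from div_s64_rem() and div_s64() so the signed 64-bit divisions go through the div64 helpers rather than open-coded s64 division, which 32-bit builds cannot lower without libgcc routines (the problem with using mult_frac() on an s64 here). A standalone sketch of the arithmetic, with userspace stand-ins for the two helpers and made-up numbers:

    #include <stdint.h>
    #include <stdio.h>

    /* userspace stand-ins for the kernel's div_s64_rem()/div_s64() */
    static int64_t div_s64_rem(int64_t a, int32_t b, int32_t *rem)
    {
            *rem = (int32_t)(a % b);
            return a / b;
    }

    static int64_t div_s64(int64_t a, int32_t b)
    {
            return a / b;
    }

    int main(void)
    {
            int64_t free_rtx = 123456789;   /* made-up free space figure */
            int32_t pct = 25;               /* zonegc_low_space */
            int32_t rem;

            int64_t threshold = div_s64_rem(free_rtx, 100, &rem);
            threshold = threshold * pct + rem * div_s64(pct, 100);

            /* close to the exact free_rtx * pct / 100, off by under ~100 units */
            printf("%lld ~ %lld\n", (long long)threshold,
                   (long long)(free_rtx * pct / 100));
            return 0;
    }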
+3 -2
include/cxl/features.h
··· 66 #ifdef CONFIG_CXL_FEATURES 67 inline struct cxl_features_state *to_cxlfs(struct cxl_dev_state *cxlds); 68 int devm_cxl_setup_features(struct cxl_dev_state *cxlds); 69 - int devm_cxl_setup_fwctl(struct cxl_memdev *cxlmd); 70 #else 71 static inline struct cxl_features_state *to_cxlfs(struct cxl_dev_state *cxlds) 72 { ··· 78 return -EOPNOTSUPP; 79 } 80 81 - static inline int devm_cxl_setup_fwctl(struct cxl_memdev *cxlmd) 82 { 83 return -EOPNOTSUPP; 84 }
··· 66 #ifdef CONFIG_CXL_FEATURES 67 inline struct cxl_features_state *to_cxlfs(struct cxl_dev_state *cxlds); 68 int devm_cxl_setup_features(struct cxl_dev_state *cxlds); 69 + int devm_cxl_setup_fwctl(struct device *host, struct cxl_memdev *cxlmd); 70 #else 71 static inline struct cxl_features_state *to_cxlfs(struct cxl_dev_state *cxlds) 72 { ··· 78 return -EOPNOTSUPP; 79 } 80 81 + static inline int devm_cxl_setup_fwctl(struct device *host, 82 + struct cxl_memdev *cxlmd) 83 { 84 return -EOPNOTSUPP; 85 }
+46 -26
include/linux/blkdev.h
··· 712 (q->limits.features & BLK_FEAT_ZONED); 713 } 714 715 - #ifdef CONFIG_BLK_DEV_ZONED 716 - static inline unsigned int disk_nr_zones(struct gendisk *disk) 717 - { 718 - return disk->nr_zones; 719 - } 720 - bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs); 721 - #else /* CONFIG_BLK_DEV_ZONED */ 722 - static inline unsigned int disk_nr_zones(struct gendisk *disk) 723 - { 724 - return 0; 725 - } 726 - static inline bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs) 727 - { 728 - return false; 729 - } 730 - #endif /* CONFIG_BLK_DEV_ZONED */ 731 - 732 static inline unsigned int disk_zone_no(struct gendisk *disk, sector_t sector) 733 { 734 if (!blk_queue_is_zoned(disk->queue)) 735 return 0; 736 return sector >> ilog2(disk->queue->limits.chunk_sectors); 737 - } 738 - 739 - static inline unsigned int bdev_nr_zones(struct block_device *bdev) 740 - { 741 - return disk_nr_zones(bdev->bd_disk); 742 } 743 744 static inline unsigned int bdev_max_open_zones(struct block_device *bdev) ··· 823 { 824 return bdev_nr_sectors(sb->s_bdev) >> 825 (sb->s_blocksize_bits - SECTOR_SHIFT); 826 } 827 828 int bdev_disk_changed(struct gendisk *disk, bool invalidate); ··· 1637 return bio_end_io_acct_remapped(bio, start_time, bio->bi_bdev); 1638 } 1639 1640 int set_blocksize(struct file *file, int size); 1641 1642 int lookup_bdev(const char *pathname, dev_t *dev); ··· 1693 int bd_prepare_to_claim(struct block_device *bdev, void *holder, 1694 const struct blk_holder_ops *hops); 1695 void bd_abort_claiming(struct block_device *bdev, void *holder); 1696 - 1697 - /* just for blk-cgroup, don't use elsewhere */ 1698 - struct block_device *blkdev_get_no_open(dev_t dev); 1699 - void blkdev_put_no_open(struct block_device *bdev); 1700 1701 struct block_device *I_BDEV(struct inode *inode); 1702 struct block_device *file_bdev(struct file *bdev_file);
··· 712 (q->limits.features & BLK_FEAT_ZONED); 713 } 714 715 static inline unsigned int disk_zone_no(struct gendisk *disk, sector_t sector) 716 { 717 if (!blk_queue_is_zoned(disk->queue)) 718 return 0; 719 return sector >> ilog2(disk->queue->limits.chunk_sectors); 720 } 721 722 static inline unsigned int bdev_max_open_zones(struct block_device *bdev) ··· 845 { 846 return bdev_nr_sectors(sb->s_bdev) >> 847 (sb->s_blocksize_bits - SECTOR_SHIFT); 848 + } 849 + 850 + #ifdef CONFIG_BLK_DEV_ZONED 851 + static inline unsigned int disk_nr_zones(struct gendisk *disk) 852 + { 853 + return disk->nr_zones; 854 + } 855 + bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs); 856 + 857 + /** 858 + * disk_zone_capacity - returns the zone capacity of zone containing @sector 859 + * @disk: disk to work with 860 + * @sector: sector number within the querying zone 861 + * 862 + * Returns the zone capacity of a zone containing @sector. @sector can be any 863 + * sector in the zone. 864 + */ 865 + static inline unsigned int disk_zone_capacity(struct gendisk *disk, 866 + sector_t sector) 867 + { 868 + sector_t zone_sectors = disk->queue->limits.chunk_sectors; 869 + 870 + if (sector + zone_sectors >= get_capacity(disk)) 871 + return disk->last_zone_capacity; 872 + return disk->zone_capacity; 873 + } 874 + static inline unsigned int bdev_zone_capacity(struct block_device *bdev, 875 + sector_t pos) 876 + { 877 + return disk_zone_capacity(bdev->bd_disk, pos); 878 + } 879 + #else /* CONFIG_BLK_DEV_ZONED */ 880 + static inline unsigned int disk_nr_zones(struct gendisk *disk) 881 + { 882 + return 0; 883 + } 884 + static inline bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs) 885 + { 886 + return false; 887 + } 888 + #endif /* CONFIG_BLK_DEV_ZONED */ 889 + 890 + static inline unsigned int bdev_nr_zones(struct block_device *bdev) 891 + { 892 + return disk_nr_zones(bdev->bd_disk); 893 } 894 895 int bdev_disk_changed(struct gendisk *disk, bool invalidate); ··· 1614 return bio_end_io_acct_remapped(bio, start_time, bio->bi_bdev); 1615 } 1616 1617 + int bdev_validate_blocksize(struct block_device *bdev, int block_size); 1618 int set_blocksize(struct file *file, int size); 1619 1620 int lookup_bdev(const char *pathname, dev_t *dev); ··· 1669 int bd_prepare_to_claim(struct block_device *bdev, void *holder, 1670 const struct blk_holder_ops *hops); 1671 void bd_abort_claiming(struct block_device *bdev, void *holder); 1672 1673 struct block_device *I_BDEV(struct inode *inode); 1674 struct block_device *file_bdev(struct file *bdev_file);
+9
include/linux/buffer_head.h
··· 34 BH_Meta, /* Buffer contains metadata */ 35 BH_Prio, /* Buffer should be submitted with REQ_PRIO */ 36 BH_Defer_Completion, /* Defer AIO completion to workqueue */ 37 38 BH_PrivateStart,/* not a state bit, but the first bit available 39 * for private allocation by other entities ··· 223 wait_queue_head_t *bh_waitq_head(struct buffer_head *bh); 224 struct buffer_head *__find_get_block(struct block_device *bdev, sector_t block, 225 unsigned size); 226 struct buffer_head *bdev_getblk(struct block_device *bdev, sector_t block, 227 unsigned size, gfp_t gfp); 228 void __brelse(struct buffer_head *); ··· 398 sb_find_get_block(struct super_block *sb, sector_t block) 399 { 400 return __find_get_block(sb->s_bdev, block, sb->s_blocksize); 401 } 402 403 static inline void
··· 34 BH_Meta, /* Buffer contains metadata */ 35 BH_Prio, /* Buffer should be submitted with REQ_PRIO */ 36 BH_Defer_Completion, /* Defer AIO completion to workqueue */ 37 + BH_Migrate, /* Buffer is being migrated (norefs) */ 38 39 BH_PrivateStart,/* not a state bit, but the first bit available 40 * for private allocation by other entities ··· 222 wait_queue_head_t *bh_waitq_head(struct buffer_head *bh); 223 struct buffer_head *__find_get_block(struct block_device *bdev, sector_t block, 224 unsigned size); 225 + struct buffer_head *__find_get_block_nonatomic(struct block_device *bdev, 226 + sector_t block, unsigned size); 227 struct buffer_head *bdev_getblk(struct block_device *bdev, sector_t block, 228 unsigned size, gfp_t gfp); 229 void __brelse(struct buffer_head *); ··· 395 sb_find_get_block(struct super_block *sb, sector_t block) 396 { 397 return __find_get_block(sb->s_bdev, block, sb->s_blocksize); 398 + } 399 + 400 + static inline struct buffer_head * 401 + sb_find_get_block_nonatomic(struct super_block *sb, sector_t block) 402 + { 403 + return __find_get_block_nonatomic(sb->s_bdev, block, sb->s_blocksize); 404 } 405 406 static inline void
-6
include/linux/ceph/osd_client.h
··· 490 struct page **pages, u64 length, 491 u32 alignment, bool pages_from_pool, 492 bool own_pages); 493 - extern void osd_req_op_extent_osd_data_pagelist(struct ceph_osd_request *, 494 - unsigned int which, 495 - struct ceph_pagelist *pagelist); 496 #ifdef CONFIG_BLOCK 497 void osd_req_op_extent_osd_data_bio(struct ceph_osd_request *osd_req, 498 unsigned int which, ··· 506 void osd_req_op_extent_osd_iter(struct ceph_osd_request *osd_req, 507 unsigned int which, struct iov_iter *iter); 508 509 - extern void osd_req_op_cls_request_data_pagelist(struct ceph_osd_request *, 510 - unsigned int which, 511 - struct ceph_pagelist *pagelist); 512 extern void osd_req_op_cls_request_data_pages(struct ceph_osd_request *, 513 unsigned int which, 514 struct page **pages, u64 length,
··· 490 struct page **pages, u64 length, 491 u32 alignment, bool pages_from_pool, 492 bool own_pages); 493 #ifdef CONFIG_BLOCK 494 void osd_req_op_extent_osd_data_bio(struct ceph_osd_request *osd_req, 495 unsigned int which, ··· 509 void osd_req_op_extent_osd_iter(struct ceph_osd_request *osd_req, 510 unsigned int which, struct iov_iter *iter); 511 512 extern void osd_req_op_cls_request_data_pages(struct ceph_osd_request *, 513 unsigned int which, 514 struct page **pages, u64 length,
+8 -4
include/linux/dma-mapping.h
··· 629 #else 630 #define DEFINE_DMA_UNMAP_ADDR(ADDR_NAME) 631 #define DEFINE_DMA_UNMAP_LEN(LEN_NAME) 632 - #define dma_unmap_addr(PTR, ADDR_NAME) (0) 633 - #define dma_unmap_addr_set(PTR, ADDR_NAME, VAL) do { } while (0) 634 - #define dma_unmap_len(PTR, LEN_NAME) (0) 635 - #define dma_unmap_len_set(PTR, LEN_NAME, VAL) do { } while (0) 636 #endif 637 638 #endif /* _LINUX_DMA_MAPPING_H */
··· 629 #else 630 #define DEFINE_DMA_UNMAP_ADDR(ADDR_NAME) 631 #define DEFINE_DMA_UNMAP_LEN(LEN_NAME) 632 + #define dma_unmap_addr(PTR, ADDR_NAME) \ 633 + ({ typeof(PTR) __p __maybe_unused = PTR; 0; }) 634 + #define dma_unmap_addr_set(PTR, ADDR_NAME, VAL) \ 635 + do { typeof(PTR) __p __maybe_unused = PTR; } while (0) 636 + #define dma_unmap_len(PTR, LEN_NAME) \ 637 + ({ typeof(PTR) __p __maybe_unused = PTR; 0; }) 638 + #define dma_unmap_len_set(PTR, LEN_NAME, VAL) \ 639 + do { typeof(PTR) __p __maybe_unused = PTR; } while (0) 640 #endif 641 642 #endif /* _LINUX_DMA_MAPPING_H */
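The point of the new stubs is that they still evaluate their PTR argument (into a __maybe_unused local) before discarding it, so a driver variable whose only use is as the PTR argument no longer trips unused-variable warnings when CONFIG_NEED_DMA_MAP_STATE is disabled. A minimal sketch of the idiom outside the kernel, with hypothetical macro names:

    #include <stdio.h>

    /* old-style stub: the argument is never evaluated at all */
    #define use_nothing(p)          (0)

    /* new-style stub: evaluate the argument, then throw the value away */
    #define use_and_discard(p)      \
            ({ __typeof__(p) __p __attribute__((unused)) = (p); 0; })

    static int fake_probe(void)
    {
            struct dma_state { int dummy; } *state = 0;

            /*
             * With use_nothing(state), 'state' would be flagged as an
             * unused variable; use_and_discard() references it without
             * generating any code for the discarded value.
             */
            return use_and_discard(state);
    }

    int main(void)
    {
            printf("%d\n", fake_probe());
            return 0;
    }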
+6 -13
include/linux/file_ref.h
··· 61 atomic_long_set(&ref->refcnt, cnt - 1); 62 } 63 64 - bool __file_ref_put_badval(file_ref_t *ref, unsigned long cnt); 65 bool __file_ref_put(file_ref_t *ref, unsigned long cnt); 66 67 /** ··· 177 */ 178 static __always_inline __must_check bool file_ref_put_close(file_ref_t *ref) 179 { 180 - long old, new; 181 182 old = atomic_long_read(&ref->refcnt); 183 - do { 184 - if (unlikely(old < 0)) 185 - return __file_ref_put_badval(ref, old); 186 - 187 - if (old == FILE_REF_ONEREF) 188 - new = FILE_REF_DEAD; 189 - else 190 - new = old - 1; 191 - } while (!atomic_long_try_cmpxchg(&ref->refcnt, &old, new)); 192 - 193 - return new == FILE_REF_DEAD; 194 } 195 196 /**
··· 61 atomic_long_set(&ref->refcnt, cnt - 1); 62 } 63 64 bool __file_ref_put(file_ref_t *ref, unsigned long cnt); 65 66 /** ··· 178 */ 179 static __always_inline __must_check bool file_ref_put_close(file_ref_t *ref) 180 { 181 + long old; 182 183 old = atomic_long_read(&ref->refcnt); 184 + if (likely(old == FILE_REF_ONEREF)) { 185 + if (likely(atomic_long_try_cmpxchg(&ref->refcnt, &old, FILE_REF_DEAD))) 186 + return true; 187 + } 188 + return file_ref_put(ref); 189 } 190 191 /**
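file_ref_put_close() now takes one optimistic shot at moving the counter straight from "one reference" to "dead" and otherwise defers to the ordinary put path. A standalone sketch of that fast-path-plus-fallback shape using C11 atomics; the encodings and the slow path below are illustrative, not the kernel's file_ref internals:

    #include <stdatomic.h>
    #include <stdbool.h>

    #define REF_ONEREF      0L      /* illustrative encodings */
    #define REF_DEAD        (-1L)

    static atomic_long refcnt = REF_ONEREF;

    /* stand-in for the generic slow path (file_ref_put()) */
    static bool ref_put_slow(void)
    {
            return atomic_fetch_sub(&refcnt, 1) == REF_ONEREF;
    }

    /* returns true when the caller dropped the last reference */
    static bool ref_put_close(void)
    {
            long old = REF_ONEREF;

            /* fast path: the caller usually holds the only reference */
            if (atomic_compare_exchange_strong(&refcnt, &old, REF_DEAD))
                    return true;
            return ref_put_slow();
    }

    int main(void)
    {
            return ref_put_close() ? 0 : 1;
    }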
+5
include/linux/fwnode.h
··· 2 /* 3 * fwnode.h - Firmware device node object handle type definition. 4 * 5 * Copyright (C) 2015, Intel Corporation 6 * Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com> 7 */
··· 2 /* 3 * fwnode.h - Firmware device node object handle type definition. 4 * 5 + * This header file provides low-level data types and definitions for firmware 6 + * and device property providers. The respective API header files supplied by 7 + * them should contain all of the requisite data types and definitions for end 8 + * users, so including it directly should not be necessary. 9 + * 10 * Copyright (C) 2015, Intel Corporation 11 * Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com> 12 */
+4 -4
include/linux/local_lock_internal.h
··· 102 l = (local_lock_t *)this_cpu_ptr(lock); \ 103 tl = (local_trylock_t *)l; \ 104 _Generic((lock), \ 105 - local_trylock_t *: ({ \ 106 lockdep_assert(tl->acquired == 0); \ 107 WRITE_ONCE(tl->acquired, 1); \ 108 }), \ 109 - default:(void)0); \ 110 local_lock_acquire(l); \ 111 } while (0) 112 ··· 171 tl = (local_trylock_t *)l; \ 172 local_lock_release(l); \ 173 _Generic((lock), \ 174 - local_trylock_t *: ({ \ 175 lockdep_assert(tl->acquired == 1); \ 176 WRITE_ONCE(tl->acquired, 0); \ 177 }), \ 178 - default:(void)0); \ 179 } while (0) 180 181 #define __local_unlock(lock) \
··· 102 l = (local_lock_t *)this_cpu_ptr(lock); \ 103 tl = (local_trylock_t *)l; \ 104 _Generic((lock), \ 105 + __percpu local_trylock_t *: ({ \ 106 lockdep_assert(tl->acquired == 0); \ 107 WRITE_ONCE(tl->acquired, 1); \ 108 }), \ 109 + __percpu local_lock_t *: (void)0); \ 110 local_lock_acquire(l); \ 111 } while (0) 112 ··· 171 tl = (local_trylock_t *)l; \ 172 local_lock_release(l); \ 173 _Generic((lock), \ 174 + __percpu local_trylock_t *: ({ \ 175 lockdep_assert(tl->acquired == 1); \ 176 WRITE_ONCE(tl->acquired, 0); \ 177 }), \ 178 + __percpu local_lock_t *: (void)0); \ 179 } while (0) 180 181 #define __local_unlock(lock) \
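Spelling out the __percpu address space in both _Generic arms keeps the macros working for per-CPU pointers under address-space checking while still accepting either lock type. A hedged sketch of the two call patterns the selectors distinguish, assuming the INIT_LOCAL_TRYLOCK() initializer that accompanies local_trylock_t; the structure and counters are hypothetical:

  #include <linux/local_lock.h>
  #include <linux/percpu.h>

  struct foo_pcpu {
  	local_trylock_t lock;	/* matches the "__percpu local_trylock_t *" arm */
  	unsigned int count;
  };

  static DEFINE_PER_CPU(struct foo_pcpu, foo_state) = {
  	.lock = INIT_LOCAL_TRYLOCK(lock),
  };

  static void foo_add(void)
  {
  	local_lock(&foo_state.lock);
  	this_cpu_inc(foo_state.count);
  	local_unlock(&foo_state.lock);
  }

  static bool foo_try_add(void)	/* e.g. from a context that must not spin */
  {
  	if (!local_trylock(&foo_state.lock))
  		return false;
  	this_cpu_inc(foo_state.count);
  	local_unlock(&foo_state.lock);
  	return true;
  }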
+20 -11
include/linux/phylink.h
··· 361 phy_interface_t iface); 362 363 /** 364 - * mac_link_down() - take the link down 365 * @config: a pointer to a &struct phylink_config. 366 * @mode: link autonegotiation mode 367 * @interface: link &typedef phy_interface_t mode 368 * 369 - * If @mode is not an in-band negotiation mode (as defined by 370 - * phylink_autoneg_inband()), force the link down and disable any 371 - * Energy Efficient Ethernet MAC configuration. Interface type 372 - * selection must be done in mac_config(). 373 */ 374 void mac_link_down(struct phylink_config *config, unsigned int mode, 375 phy_interface_t interface); 376 377 /** 378 - * mac_link_up() - allow the link to come up 379 * @config: a pointer to a &struct phylink_config. 380 - * @phy: any attached phy 381 * @mode: link autonegotiation mode 382 * @interface: link &typedef phy_interface_t mode 383 * @speed: link speed ··· 391 * @tx_pause: link transmit pause enablement status 392 * @rx_pause: link receive pause enablement status 393 * 394 - * Configure the MAC for an established link. 395 * 396 * @speed, @duplex, @tx_pause and @rx_pause indicate the finalised link 397 * settings, and should be used to configure the MAC block appropriately ··· 406 * that the user wishes to override the pause settings, and this should 407 * be allowed when considering the implementation of this method. 408 * 409 - * If in-band negotiation mode is disabled, allow the link to come up. If 410 - * @phy is non-%NULL, configure Energy Efficient Ethernet by calling 411 - * phy_init_eee() and perform appropriate MAC configuration for EEE. 412 * Interface type selection must be done in mac_config(). 413 */ 414 void mac_link_up(struct phylink_config *config, struct phy_device *phy,
··· 361 phy_interface_t iface); 362 363 /** 364 + * mac_link_down() - notification that the link has gone down 365 * @config: a pointer to a &struct phylink_config. 366 * @mode: link autonegotiation mode 367 * @interface: link &typedef phy_interface_t mode 368 * 369 + * Notifies the MAC that the link has gone down. This will not be called 370 + * unless mac_link_up() has been previously called. 371 + * 372 + * The MAC should stop processing packets for transmission and reception. 373 + * phylink will have called netif_carrier_off() to notify the networking 374 + * stack that the link has gone down, so MAC drivers should not make this 375 + * call. 376 + * 377 + * If @mode is %MLO_AN_INBAND, then this function must not prevent the 378 + * link coming up. 379 */ 380 void mac_link_down(struct phylink_config *config, unsigned int mode, 381 phy_interface_t interface); 382 383 /** 384 + * mac_link_up() - notification that the link has come up 385 * @config: a pointer to a &struct phylink_config. 386 + * @phy: any attached phy (deprecated - please use LPI interfaces) 387 * @mode: link autonegotiation mode 388 * @interface: link &typedef phy_interface_t mode 389 * @speed: link speed ··· 385 * @tx_pause: link transmit pause enablement status 386 * @rx_pause: link receive pause enablement status 387 * 388 + * Notifies the MAC that the link has come up, and the parameters of the 389 + * link as seen from the MACs point of view. If mac_link_up() has been 390 + * called previously, there will be an intervening call to mac_link_down() 391 + * before this method will be subsequently called. 392 * 393 * @speed, @duplex, @tx_pause and @rx_pause indicate the finalised link 394 * settings, and should be used to configure the MAC block appropriately ··· 397 * that the user wishes to override the pause settings, and this should 398 * be allowed when considering the implementation of this method. 399 * 400 + * Once configured, the MAC may begin to process packets for transmission 401 + * and reception. 402 + * 403 * Interface type selection must be done in mac_config(). 404 */ 405 void mac_link_up(struct phylink_config *config, struct phy_device *phy,
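The reworded kernel-doc frames mac_link_up()/mac_link_down() as notifications of resolved link state rather than requests to change it. A minimal sketch of a MAC driver wiring them up; the private structure and hardware helpers are hypothetical:

  static void foo_mac_link_down(struct phylink_config *config, unsigned int mode,
  			      phy_interface_t interface)
  {
  	struct foo_priv *priv = container_of(config, struct foo_priv, phylink_config);

  	/* Stop packet processing; phylink has already reported carrier off. */
  	foo_hw_stop_datapath(priv);
  }

  static void foo_mac_link_up(struct phylink_config *config, struct phy_device *phy,
  			    unsigned int mode, phy_interface_t interface,
  			    int speed, int duplex, bool tx_pause, bool rx_pause)
  {
  	struct foo_priv *priv = container_of(config, struct foo_priv, phylink_config);

  	/* Program the resolved parameters, then let traffic flow. */
  	foo_hw_set_link(priv, speed, duplex, tx_pause, rx_pause);
  	foo_hw_start_datapath(priv);
  }

  static const struct phylink_mac_ops foo_phylink_mac_ops = {
  	/* .mac_config and the remaining callbacks are omitted from this sketch. */
  	.mac_link_down	= foo_mac_link_down,
  	.mac_link_up	= foo_mac_link_up,
  };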
+3
include/linux/virtio.h
··· 220 * occurs. 221 * @reset_done: optional function to call after transport specific reset 222 * operation has finished. 223 */ 224 struct virtio_driver { 225 struct device_driver driver; ··· 239 int (*restore)(struct virtio_device *dev); 240 int (*reset_prepare)(struct virtio_device *dev); 241 int (*reset_done)(struct virtio_device *dev); 242 }; 243 244 #define drv_to_virtio(__drv) container_of_const(__drv, struct virtio_driver, driver)
··· 220 * occurs. 221 * @reset_done: optional function to call after transport specific reset 222 * operation has finished. 223 + * @shutdown: synchronize with the device on shutdown. If provided, replaces 224 + * the virtio core implementation. 225 */ 226 struct virtio_driver { 227 struct device_driver driver; ··· 237 int (*restore)(struct virtio_device *dev); 238 int (*reset_prepare)(struct virtio_device *dev); 239 int (*reset_done)(struct virtio_device *dev); 240 + void (*shutdown)(struct virtio_device *dev); 241 }; 242 243 #define drv_to_virtio(__drv) container_of_const(__drv, struct virtio_driver, driver)
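A driver that needs to quiesce the device itself on shutdown can now provide the callback instead of relying on the virtio core. A hedged sketch; the driver name, id table and helpers are hypothetical:

  static void foo_shutdown(struct virtio_device *vdev)
  {
  	/* Stop queuing new requests and wait for in-flight ones
  	 * (hypothetical helper), then reset the device so it no longer
  	 * touches guest memory across kexec/reboot.
  	 */
  	foo_quiesce(vdev->priv);
  	virtio_reset_device(vdev);
  }

  static struct virtio_driver foo_driver = {
  	.driver.name	= KBUILD_MODNAME,
  	.id_table	= foo_id_table,
  	.probe		= foo_probe,
  	.remove		= foo_remove,
  	.shutdown	= foo_shutdown,	/* replaces the virtio core implementation */
  };
  module_virtio_driver(foo_driver);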
+56 -29
include/uapi/linux/landlock.h
··· 53 __u64 scoped; 54 }; 55 56 - /* 57 - * sys_landlock_create_ruleset() flags: 58 * 59 - * - %LANDLOCK_CREATE_RULESET_VERSION: Get the highest supported Landlock ABI 60 - * version. 61 - * - %LANDLOCK_CREATE_RULESET_ERRATA: Get a bitmask of fixed issues. 62 */ 63 /* clang-format off */ 64 #define LANDLOCK_CREATE_RULESET_VERSION (1U << 0) 65 #define LANDLOCK_CREATE_RULESET_ERRATA (1U << 1) 66 /* clang-format on */ 67 68 - /* 69 - * sys_landlock_restrict_self() flags: 70 * 71 - * - %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF: Do not create any log related to the 72 - * enforced restrictions. This should only be set by tools launching unknown 73 - * or untrusted programs (e.g. a sandbox tool, container runtime, system 74 - * service manager). Because programs sandboxing themselves should fix any 75 - * denied access, they should not set this flag to be aware of potential 76 - * issues reported by system's logs (i.e. audit). 77 - * - %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON: Explicitly ask to continue 78 - * logging denied access requests even after an :manpage:`execve(2)` call. 79 - * This flag should only be set if all the programs than can legitimately be 80 - * executed will not try to request a denied access (which could spam audit 81 - * logs). 82 - * - %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF: Do not create any log related 83 - * to the enforced restrictions coming from future nested domains created by 84 - * the caller or its descendants. This should only be set according to a 85 - * runtime configuration (i.e. not hardcoded) by programs launching other 86 - * unknown or untrusted programs that may create their own Landlock domains 87 - * and spam logs. The main use case is for container runtimes to enable users 88 - * to mute buggy sandboxed programs for a specific container image. Other use 89 - * cases include sandboxer tools and init systems. Unlike 90 - * %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF, 91 - * %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF does not impact the requested 92 - * restriction (if any) but only the future nested domains. 93 */ 94 /* clang-format off */ 95 #define LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF (1U << 0)
··· 53 __u64 scoped; 54 }; 55 56 + /** 57 + * DOC: landlock_create_ruleset_flags 58 * 59 + * **Flags** 60 + * 61 + * %LANDLOCK_CREATE_RULESET_VERSION 62 + * Get the highest supported Landlock ABI version (starting at 1). 63 + * 64 + * %LANDLOCK_CREATE_RULESET_ERRATA 65 + * Get a bitmask of fixed issues for the current Landlock ABI version. 66 */ 67 /* clang-format off */ 68 #define LANDLOCK_CREATE_RULESET_VERSION (1U << 0) 69 #define LANDLOCK_CREATE_RULESET_ERRATA (1U << 1) 70 /* clang-format on */ 71 72 + /** 73 + * DOC: landlock_restrict_self_flags 74 * 75 + * **Flags** 76 + * 77 + * By default, denied accesses originating from programs that sandbox themselves 78 + * are logged via the audit subsystem. Such events typically indicate unexpected 79 + * behavior, such as bugs or exploitation attempts. However, to avoid excessive 80 + * logging, access requests denied by a domain not created by the originating 81 + * program are not logged by default. The rationale is that programs should know 82 + * their own behavior, but not necessarily the behavior of other programs. This 83 + * default configuration is suitable for most programs that sandbox themselves. 84 + * For specific use cases, the following flags allow programs to modify this 85 + * default logging behavior. 86 + * 87 + * The %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF and 88 + * %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON flags apply to the newly created 89 + * Landlock domain. 90 + * 91 + * %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF 92 + * Disables logging of denied accesses originating from the thread creating 93 + * the Landlock domain, as well as its children, as long as they continue 94 + * running the same executable code (i.e., without an intervening 95 + * :manpage:`execve(2)` call). This is intended for programs that execute 96 + * unknown code without invoking :manpage:`execve(2)`, such as script 97 + * interpreters. Programs that only sandbox themselves should not set this 98 + * flag, so users can be notified of unauthorized access attempts via system 99 + * logs. 100 + * 101 + * %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON 102 + * Enables logging of denied accesses after an :manpage:`execve(2)` call, 103 + * providing visibility into unauthorized access attempts by newly executed 104 + * programs within the created Landlock domain. This flag is recommended 105 + * only when all potential executables in the domain are expected to comply 106 + * with the access restrictions, as excessive audit log entries could make 107 + * it more difficult to identify critical events. 108 + * 109 + * %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF 110 + * Disables logging of denied accesses originating from nested Landlock 111 + * domains created by the caller or its descendants. This flag should be set 112 + * according to runtime configuration, not hardcoded, to avoid suppressing 113 + * important security events. It is useful for container runtimes or 114 + * sandboxing tools that may launch programs which themselves create 115 + * Landlock domains and could otherwise generate excessive logs. Unlike 116 + * ``LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF``, this flag only affects 117 + * future nested domains, not the one being created. It can also be used 118 + * with a @ruleset_fd value of -1 to mute subdomain logs without creating a 119 + * domain. 120 */ 121 /* clang-format off */ 122 #define LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF (1U << 0)
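A hedged userspace sketch of how the documented flags are used, issuing the raw syscalls directly since libc provides no wrappers:

  #define _GNU_SOURCE
  #include <linux/landlock.h>
  #include <sys/prctl.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int main(void)
  {
  	/* Query the ABI version and errata: attr must be NULL and size 0. */
  	long abi = syscall(__NR_landlock_create_ruleset, NULL, 0,
  			   LANDLOCK_CREATE_RULESET_VERSION);
  	long errata = syscall(__NR_landlock_create_ruleset, NULL, 0,
  			      LANDLOCK_CREATE_RULESET_ERRATA);

  	if (abi < 0)
  		return 1;	/* Landlock unsupported or disabled */

  	/* A launcher of untrusted code can mute logs from future nested
  	 * domains without enforcing anything itself, passing -1 as the
  	 * ruleset file descriptor as described above.
  	 */
  	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
  		return 1;
  	if (syscall(__NR_landlock_restrict_self, -1,
  		    LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF))
  		return 1;

  	(void)errata;
  	return 0;
  }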
+2 -2
include/uapi/linux/vhost.h
··· 28 29 /* Set current process as the (exclusive) owner of this file descriptor. This 30 * must be called before any other vhost command. Further calls to 31 - * VHOST_OWNER_SET fail until VHOST_OWNER_RESET is called. */ 32 #define VHOST_SET_OWNER _IO(VHOST_VIRTIO, 0x01) 33 /* Give up ownership, and reset the device to default values. 34 - * Allows subsequent call to VHOST_OWNER_SET to succeed. */ 35 #define VHOST_RESET_OWNER _IO(VHOST_VIRTIO, 0x02) 36 37 /* Set up/modify memory layout */
··· 28 29 /* Set current process as the (exclusive) owner of this file descriptor. This 30 * must be called before any other vhost command. Further calls to 31 + * VHOST_SET_OWNER fail until VHOST_RESET_OWNER is called. */ 32 #define VHOST_SET_OWNER _IO(VHOST_VIRTIO, 0x01) 33 /* Give up ownership, and reset the device to default values. 34 + * Allows subsequent call to VHOST_SET_OWNER to succeed. */ 35 #define VHOST_RESET_OWNER _IO(VHOST_VIRTIO, 0x02) 36 37 /* Set up/modify memory layout */
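The corrected comment names the real ioctls; a minimal userspace sketch of the ownership handshake (the vhost-net device node is just an example):

  #include <fcntl.h>
  #include <linux/vhost.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  int main(void)
  {
  	int fd = open("/dev/vhost-net", O_RDWR);

  	if (fd < 0)
  		return 1;

  	/* Must be the first vhost command on this file descriptor. */
  	if (ioctl(fd, VHOST_SET_OWNER))
  		return 1;

  	/* ... VHOST_SET_MEM_TABLE, vring setup, etc. would go here ... */

  	/* Drop ownership and reset; VHOST_SET_OWNER may be issued again. */
  	ioctl(fd, VHOST_RESET_OWNER);
  	close(fd);
  	return 0;
  }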
+1
include/uapi/linux/virtio_pci.h
··· 246 #define VIRTIO_ADMIN_CMD_LIST_USE 0x1 247 248 /* Admin command group type. */ 249 #define VIRTIO_ADMIN_GROUP_TYPE_SRIOV 0x1 250 251 /* Transitional device admin command. */
··· 246 #define VIRTIO_ADMIN_CMD_LIST_USE 0x1 247 248 /* Admin command group type. */ 249 + #define VIRTIO_ADMIN_GROUP_TYPE_SELF 0x0 250 #define VIRTIO_ADMIN_GROUP_TYPE_SRIOV 0x1 251 252 /* Transitional device admin command. */
+6
include/ufs/ufs_quirks.h
··· 107 */ 108 #define UFS_DEVICE_QUIRK_DELAY_AFTER_LPM (1 << 11) 109 110 #endif /* UFS_QUIRKS_H_ */
··· 107 */ 108 #define UFS_DEVICE_QUIRK_DELAY_AFTER_LPM (1 << 11) 109 110 + /* 111 + * Some ufs devices may need more time to be in hibern8 before exiting. 112 + * Enable this quirk to give it an additional 100us. 113 + */ 114 + #define UFS_DEVICE_QUIRK_PA_HIBER8TIME (1 << 12) 115 + 116 #endif /* UFS_QUIRKS_H_ */
+15 -9
io_uring/io_uring.c
··· 872 lockdep_assert(!io_wq_current_is_worker()); 873 lockdep_assert_held(&ctx->uring_lock); 874 875 - __io_cq_lock(ctx); 876 - posted = io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags); 877 ctx->submit_state.cq_flush = true; 878 - __io_cq_unlock_post(ctx); 879 return posted; 880 } 881 ··· 1083 while (node) { 1084 req = container_of(node, struct io_kiocb, io_task_work.node); 1085 node = node->next; 1086 - if (sync && last_ctx != req->ctx) { 1087 if (last_ctx) { 1088 - flush_delayed_work(&last_ctx->fallback_work); 1089 percpu_ref_put(&last_ctx->refs); 1090 } 1091 last_ctx = req->ctx; 1092 percpu_ref_get(&last_ctx->refs); 1093 } 1094 - if (llist_add(&req->io_task_work.node, 1095 - &req->ctx->fallback_llist)) 1096 - schedule_delayed_work(&req->ctx->fallback_work, 1); 1097 } 1098 1099 if (last_ctx) { 1100 - flush_delayed_work(&last_ctx->fallback_work); 1101 percpu_ref_put(&last_ctx->refs); 1102 } 1103 }
··· 872 lockdep_assert(!io_wq_current_is_worker()); 873 lockdep_assert_held(&ctx->uring_lock); 874 875 + if (!ctx->lockless_cq) { 876 + spin_lock(&ctx->completion_lock); 877 + posted = io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags); 878 + spin_unlock(&ctx->completion_lock); 879 + } else { 880 + posted = io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags); 881 + } 882 + 883 ctx->submit_state.cq_flush = true; 884 return posted; 885 } 886 ··· 1078 while (node) { 1079 req = container_of(node, struct io_kiocb, io_task_work.node); 1080 node = node->next; 1081 + if (last_ctx != req->ctx) { 1082 if (last_ctx) { 1083 + if (sync) 1084 + flush_delayed_work(&last_ctx->fallback_work); 1085 percpu_ref_put(&last_ctx->refs); 1086 } 1087 last_ctx = req->ctx; 1088 percpu_ref_get(&last_ctx->refs); 1089 } 1090 + if (llist_add(&req->io_task_work.node, &last_ctx->fallback_llist)) 1091 + schedule_delayed_work(&last_ctx->fallback_work, 1); 1092 } 1093 1094 if (last_ctx) { 1095 + if (sync) 1096 + flush_delayed_work(&last_ctx->fallback_work); 1097 percpu_ref_put(&last_ctx->refs); 1098 } 1099 }
+1 -1
kernel/bpf/hashtab.c
··· 2189 b = &htab->buckets[i]; 2190 rcu_read_lock(); 2191 head = &b->head; 2192 - hlist_nulls_for_each_entry_rcu(elem, n, head, hash_node) { 2193 key = elem->key; 2194 if (is_percpu) { 2195 /* current cpu value for percpu map */
··· 2189 b = &htab->buckets[i]; 2190 rcu_read_lock(); 2191 head = &b->head; 2192 + hlist_nulls_for_each_entry_safe(elem, n, head, hash_node) { 2193 key = elem->key; 2194 if (is_percpu) { 2195 /* current cpu value for percpu map */
+1
kernel/bpf/preload/bpf_preload_kern.c
··· 89 } 90 late_initcall(load); 91 module_exit(fini); 92 MODULE_LICENSE("GPL"); 93 MODULE_DESCRIPTION("Embedded BPF programs for introspection in bpffs");
··· 89 } 90 late_initcall(load); 91 module_exit(fini); 92 + MODULE_IMPORT_NS("BPF_INTERNAL"); 93 MODULE_LICENSE("GPL"); 94 MODULE_DESCRIPTION("Embedded BPF programs for introspection in bpffs");
+3 -3
kernel/bpf/syscall.c
··· 1583 1584 return map; 1585 } 1586 - EXPORT_SYMBOL(bpf_map_get); 1587 1588 struct bpf_map *bpf_map_get_with_uref(u32 ufd) 1589 { ··· 3364 bpf_link_inc(link); 3365 return link; 3366 } 3367 - EXPORT_SYMBOL(bpf_link_get_from_fd); 3368 3369 static void bpf_tracing_link_release(struct bpf_link *link) 3370 { ··· 6020 return ____bpf_sys_bpf(cmd, attr, size); 6021 } 6022 } 6023 - EXPORT_SYMBOL(kern_sys_bpf); 6024 6025 static const struct bpf_func_proto bpf_sys_bpf_proto = { 6026 .func = bpf_sys_bpf,
··· 1583 1584 return map; 1585 } 1586 + EXPORT_SYMBOL_NS(bpf_map_get, "BPF_INTERNAL"); 1587 1588 struct bpf_map *bpf_map_get_with_uref(u32 ufd) 1589 { ··· 3364 bpf_link_inc(link); 3365 return link; 3366 } 3367 + EXPORT_SYMBOL_NS(bpf_link_get_from_fd, "BPF_INTERNAL"); 3368 3369 static void bpf_tracing_link_release(struct bpf_link *link) 3370 { ··· 6020 return ____bpf_sys_bpf(cmd, attr, size); 6021 } 6022 } 6023 + EXPORT_SYMBOL_NS(kern_sys_bpf, "BPF_INTERNAL"); 6024 6025 static const struct bpf_func_proto bpf_sys_bpf_proto = { 6026 .func = bpf_sys_bpf,
+30 -1
kernel/cgroup/cgroup.c
··· 90 DEFINE_MUTEX(cgroup_mutex); 91 DEFINE_SPINLOCK(css_set_lock); 92 93 - #ifdef CONFIG_PROVE_RCU 94 EXPORT_SYMBOL_GPL(cgroup_mutex); 95 EXPORT_SYMBOL_GPL(css_set_lock); 96 #endif ··· 2353 }; 2354 2355 #ifdef CONFIG_CPUSETS_V1 2356 static const struct fs_context_operations cpuset_fs_context_ops = { 2357 .get_tree = cgroup1_get_tree, 2358 .free = cgroup_fs_context_free, 2359 }; 2360 2361 /* ··· 2420 static struct file_system_type cpuset_fs_type = { 2421 .name = "cpuset", 2422 .init_fs_context = cpuset_init_fs_context, 2423 .fs_flags = FS_USERNS_MOUNT, 2424 }; 2425 #endif
··· 90 DEFINE_MUTEX(cgroup_mutex); 91 DEFINE_SPINLOCK(css_set_lock); 92 93 + #if (defined CONFIG_PROVE_RCU || defined CONFIG_LOCKDEP) 94 EXPORT_SYMBOL_GPL(cgroup_mutex); 95 EXPORT_SYMBOL_GPL(css_set_lock); 96 #endif ··· 2353 }; 2354 2355 #ifdef CONFIG_CPUSETS_V1 2356 + enum cpuset_param { 2357 + Opt_cpuset_v2_mode, 2358 + }; 2359 + 2360 + static const struct fs_parameter_spec cpuset_fs_parameters[] = { 2361 + fsparam_flag ("cpuset_v2_mode", Opt_cpuset_v2_mode), 2362 + {} 2363 + }; 2364 + 2365 + static int cpuset_parse_param(struct fs_context *fc, struct fs_parameter *param) 2366 + { 2367 + struct cgroup_fs_context *ctx = cgroup_fc2context(fc); 2368 + struct fs_parse_result result; 2369 + int opt; 2370 + 2371 + opt = fs_parse(fc, cpuset_fs_parameters, param, &result); 2372 + if (opt < 0) 2373 + return opt; 2374 + 2375 + switch (opt) { 2376 + case Opt_cpuset_v2_mode: 2377 + ctx->flags |= CGRP_ROOT_CPUSET_V2_MODE; 2378 + return 0; 2379 + } 2380 + return -EINVAL; 2381 + } 2382 + 2383 static const struct fs_context_operations cpuset_fs_context_ops = { 2384 .get_tree = cgroup1_get_tree, 2385 .free = cgroup_fs_context_free, 2386 + .parse_param = cpuset_parse_param, 2387 }; 2388 2389 /* ··· 2392 static struct file_system_type cpuset_fs_type = { 2393 .name = "cpuset", 2394 .init_fs_context = cpuset_init_fs_context, 2395 + .parameters = cpuset_fs_parameters, 2396 .fs_flags = FS_USERNS_MOUNT, 2397 }; 2398 #endif
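With the parameter table restored, the v1 cpuset filesystem accepts its cpuset_v2_mode option again. A short hedged sketch using mount(2); the mount point is arbitrary:

  #include <sys/mount.h>

  /* Roughly: mount -t cpuset -o cpuset_v2_mode cpuset /sys/fs/cgroup/cpuset */
  static int mount_cpuset_v2_mode(void)
  {
  	return mount("cpuset", "/sys/fs/cgroup/cpuset", "cpuset", 0,
  		     "cpuset_v2_mode");
  }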
+9 -3
kernel/dma/coherent.c
··· 336 337 static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev) 338 { 339 - if (!rmem->priv) { 340 - struct dma_coherent_mem *mem; 341 342 mem = dma_init_coherent_memory(rmem->base, rmem->base, 343 rmem->size, true); 344 if (IS_ERR(mem)) 345 return PTR_ERR(mem); 346 rmem->priv = mem; 347 } 348 - dma_assign_coherent_memory(dev, rmem->priv); 349 return 0; 350 } 351
··· 336 337 static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev) 338 { 339 + struct dma_coherent_mem *mem = rmem->priv; 340 341 + if (!mem) { 342 mem = dma_init_coherent_memory(rmem->base, rmem->base, 343 rmem->size, true); 344 if (IS_ERR(mem)) 345 return PTR_ERR(mem); 346 rmem->priv = mem; 347 } 348 + 349 + /* Warn if the device potentially can't use the reserved memory */ 350 + if (mem->device_base + rmem->size - 1 > 351 + min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit)) 352 + dev_warn(dev, "reserved memory is beyond device's set DMA address range\n"); 353 + 354 + dma_assign_coherent_memory(dev, mem); 355 return 0; 356 } 357
+1 -2
kernel/dma/contiguous.c
··· 64 * Users, who want to set the size of global CMA area for their system 65 * should use cma= kernel parameter. 66 */ 67 - static const phys_addr_t size_bytes __initconst = 68 - (phys_addr_t)CMA_SIZE_MBYTES * SZ_1M; 69 static phys_addr_t size_cmdline __initdata = -1; 70 static phys_addr_t base_cmdline __initdata; 71 static phys_addr_t limit_cmdline __initdata;
··· 64 * Users, who want to set the size of global CMA area for their system 65 * should use cma= kernel parameter. 66 */ 67 + #define size_bytes ((phys_addr_t)CMA_SIZE_MBYTES * SZ_1M) 68 static phys_addr_t size_cmdline __initdata = -1; 69 static phys_addr_t base_cmdline __initdata; 70 static phys_addr_t limit_cmdline __initdata;
+18 -9
kernel/dma/mapping.c
··· 910 } 911 EXPORT_SYMBOL(dma_set_coherent_mask); 912 913 - /** 914 - * dma_addressing_limited - return if the device is addressing limited 915 - * @dev: device to check 916 - * 917 - * Return %true if the devices DMA mask is too small to address all memory in 918 - * the system, else %false. Lack of addressing bits is the prime reason for 919 - * bounce buffering, but might not be the only one. 920 - */ 921 - bool dma_addressing_limited(struct device *dev) 922 { 923 const struct dma_map_ops *ops = get_dma_ops(dev); 924 ··· 921 if (unlikely(ops) || use_dma_iommu(dev)) 922 return false; 923 return !dma_direct_all_ram_mapped(dev); 924 } 925 EXPORT_SYMBOL_GPL(dma_addressing_limited); 926
··· 910 } 911 EXPORT_SYMBOL(dma_set_coherent_mask); 912 913 + static bool __dma_addressing_limited(struct device *dev) 914 { 915 const struct dma_map_ops *ops = get_dma_ops(dev); 916 ··· 929 if (unlikely(ops) || use_dma_iommu(dev)) 930 return false; 931 return !dma_direct_all_ram_mapped(dev); 932 + } 933 + 934 + /** 935 + * dma_addressing_limited - return if the device is addressing limited 936 + * @dev: device to check 937 + * 938 + * Return %true if the devices DMA mask is too small to address all memory in 939 + * the system, else %false. Lack of addressing bits is the prime reason for 940 + * bounce buffering, but might not be the only one. 941 + */ 942 + bool dma_addressing_limited(struct device *dev) 943 + { 944 + if (!__dma_addressing_limited(dev)) 945 + return false; 946 + 947 + dev_dbg(dev, "device is DMA addressing limited\n"); 948 + return true; 949 } 950 EXPORT_SYMBOL_GPL(dma_addressing_limited); 951
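The split keeps the check in one helper and logs once when it trips. A typical hedged call site in a probe path; the device type and fallback policy are only illustrative:

  static int foo_probe_dma(struct pci_dev *pdev)
  {
  	int ret;

  	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
  	if (ret)
  		return ret;

  	/* The mask may still not cover all RAM; bounce buffering can kick
  	 * in, so scale back the queue depth (illustrative policy only).
  	 */
  	if (dma_addressing_limited(&pdev->dev))
  		foo_shrink_queues(pdev);

  	return 0;
  }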
+2 -2
kernel/events/core.c
··· 3943 perf_event_set_state(event, PERF_EVENT_STATE_ERROR); 3944 3945 if (*perf_event_fasync(event)) 3946 - event->pending_kill = POLL_HUP; 3947 3948 perf_event_wakeup(event); 3949 } else { ··· 6075 6076 if (unlikely(READ_ONCE(event->state) == PERF_EVENT_STATE_ERROR && 6077 event->attr.pinned)) 6078 - return events; 6079 6080 /* 6081 * Pin the event->rb by taking event->mmap_mutex; otherwise
··· 3943 perf_event_set_state(event, PERF_EVENT_STATE_ERROR); 3944 3945 if (*perf_event_fasync(event)) 3946 + event->pending_kill = POLL_ERR; 3947 3948 perf_event_wakeup(event); 3949 } else { ··· 6075 6076 if (unlikely(READ_ONCE(event->state) == PERF_EVENT_STATE_ERROR && 6077 event->attr.pinned)) 6078 + return EPOLLERR; 6079 6080 /* 6081 * Pin the event->rb by taking event->mmap_mutex; otherwise
+7 -43
kernel/sched/ext.c
··· 163 /* 164 * CPU cgroup support flags 165 */ 166 - SCX_OPS_HAS_CGROUP_WEIGHT = 1LLU << 16, /* cpu.weight */ 167 168 SCX_OPS_ALL_FLAGS = SCX_OPS_KEEP_BUILTIN_IDLE | 169 SCX_OPS_ENQ_LAST | ··· 3899 3900 DEFINE_STATIC_PERCPU_RWSEM(scx_cgroup_rwsem); 3901 static bool scx_cgroup_enabled; 3902 - static bool cgroup_warned_missing_weight; 3903 - static bool cgroup_warned_missing_idle; 3904 - 3905 - static void scx_cgroup_warn_missing_weight(struct task_group *tg) 3906 - { 3907 - if (scx_ops_enable_state() == SCX_OPS_DISABLED || 3908 - cgroup_warned_missing_weight) 3909 - return; 3910 - 3911 - if ((scx_ops.flags & SCX_OPS_HAS_CGROUP_WEIGHT) || !tg->css.parent) 3912 - return; 3913 - 3914 - pr_warn("sched_ext: \"%s\" does not implement cgroup cpu.weight\n", 3915 - scx_ops.name); 3916 - cgroup_warned_missing_weight = true; 3917 - } 3918 - 3919 - static void scx_cgroup_warn_missing_idle(struct task_group *tg) 3920 - { 3921 - if (!scx_cgroup_enabled || cgroup_warned_missing_idle) 3922 - return; 3923 - 3924 - if (!tg->idle) 3925 - return; 3926 - 3927 - pr_warn("sched_ext: \"%s\" does not implement cgroup cpu.idle\n", 3928 - scx_ops.name); 3929 - cgroup_warned_missing_idle = true; 3930 - } 3931 3932 int scx_tg_online(struct task_group *tg) 3933 { ··· 3907 WARN_ON_ONCE(tg->scx_flags & (SCX_TG_ONLINE | SCX_TG_INITED)); 3908 3909 percpu_down_read(&scx_cgroup_rwsem); 3910 - 3911 - scx_cgroup_warn_missing_weight(tg); 3912 3913 if (scx_cgroup_enabled) { 3914 if (SCX_HAS_OP(cgroup_init)) { ··· 4045 4046 void scx_group_set_idle(struct task_group *tg, bool idle) 4047 { 4048 - percpu_down_read(&scx_cgroup_rwsem); 4049 - scx_cgroup_warn_missing_idle(tg); 4050 - percpu_up_read(&scx_cgroup_rwsem); 4051 } 4052 4053 static void scx_cgroup_lock(void) ··· 4239 4240 percpu_rwsem_assert_held(&scx_cgroup_rwsem); 4241 4242 - cgroup_warned_missing_weight = false; 4243 - cgroup_warned_missing_idle = false; 4244 - 4245 /* 4246 * scx_tg_on/offline() are excluded through scx_cgroup_rwsem. If we walk 4247 * cgroups and init, all online cgroups are initialized. ··· 4247 css_for_each_descendant_pre(css, &root_task_group.css) { 4248 struct task_group *tg = css_tg(css); 4249 struct scx_cgroup_init_args args = { .weight = tg->scx_weight }; 4250 - 4251 - scx_cgroup_warn_missing_weight(tg); 4252 - scx_cgroup_warn_missing_idle(tg); 4253 4254 if ((tg->scx_flags & 4255 (SCX_TG_ONLINE | SCX_TG_INITED)) != SCX_TG_ONLINE) ··· 4584 4585 static void free_exit_info(struct scx_exit_info *ei) 4586 { 4587 - kfree(ei->dump); 4588 kfree(ei->msg); 4589 kfree(ei->bt); 4590 kfree(ei); ··· 4600 4601 ei->bt = kcalloc(SCX_EXIT_BT_LEN, sizeof(ei->bt[0]), GFP_KERNEL); 4602 ei->msg = kzalloc(SCX_EXIT_MSG_LEN, GFP_KERNEL); 4603 - ei->dump = kzalloc(exit_dump_len, GFP_KERNEL); 4604 4605 if (!ei->bt || !ei->msg || !ei->dump) { 4606 free_exit_info(ei); ··· 5212 scx_ops_error("SCX_OPS_BUILTIN_IDLE_PER_NODE requires CPU idle selection enabled"); 5213 return -EINVAL; 5214 } 5215 5216 return 0; 5217 }
··· 163 /* 164 * CPU cgroup support flags 165 */ 166 + SCX_OPS_HAS_CGROUP_WEIGHT = 1LLU << 16, /* DEPRECATED, will be removed on 6.18 */ 167 168 SCX_OPS_ALL_FLAGS = SCX_OPS_KEEP_BUILTIN_IDLE | 169 SCX_OPS_ENQ_LAST | ··· 3899 3900 DEFINE_STATIC_PERCPU_RWSEM(scx_cgroup_rwsem); 3901 static bool scx_cgroup_enabled; 3902 3903 int scx_tg_online(struct task_group *tg) 3904 { ··· 3936 WARN_ON_ONCE(tg->scx_flags & (SCX_TG_ONLINE | SCX_TG_INITED)); 3937 3938 percpu_down_read(&scx_cgroup_rwsem); 3939 3940 if (scx_cgroup_enabled) { 3941 if (SCX_HAS_OP(cgroup_init)) { ··· 4076 4077 void scx_group_set_idle(struct task_group *tg, bool idle) 4078 { 4079 + /* TODO: Implement ops->cgroup_set_idle() */ 4080 } 4081 4082 static void scx_cgroup_lock(void) ··· 4272 4273 percpu_rwsem_assert_held(&scx_cgroup_rwsem); 4274 4275 /* 4276 * scx_tg_on/offline() are excluded through scx_cgroup_rwsem. If we walk 4277 * cgroups and init, all online cgroups are initialized. ··· 4283 css_for_each_descendant_pre(css, &root_task_group.css) { 4284 struct task_group *tg = css_tg(css); 4285 struct scx_cgroup_init_args args = { .weight = tg->scx_weight }; 4286 4287 if ((tg->scx_flags & 4288 (SCX_TG_ONLINE | SCX_TG_INITED)) != SCX_TG_ONLINE) ··· 4623 4624 static void free_exit_info(struct scx_exit_info *ei) 4625 { 4626 + kvfree(ei->dump); 4627 kfree(ei->msg); 4628 kfree(ei->bt); 4629 kfree(ei); ··· 4639 4640 ei->bt = kcalloc(SCX_EXIT_BT_LEN, sizeof(ei->bt[0]), GFP_KERNEL); 4641 ei->msg = kzalloc(SCX_EXIT_MSG_LEN, GFP_KERNEL); 4642 + ei->dump = kvzalloc(exit_dump_len, GFP_KERNEL); 4643 4644 if (!ei->bt || !ei->msg || !ei->dump) { 4645 free_exit_info(ei); ··· 5251 scx_ops_error("SCX_OPS_BUILTIN_IDLE_PER_NODE requires CPU idle selection enabled"); 5252 return -EINVAL; 5253 } 5254 + 5255 + if (ops->flags & SCX_OPS_HAS_CGROUP_WEIGHT) 5256 + pr_warn("SCX_OPS_HAS_CGROUP_WEIGHT is deprecated and a noop\n"); 5257 5258 return 0; 5259 }
+1 -3
kernel/sched/fair.c
··· 7081 h_nr_idle = task_has_idle_policy(p); 7082 if (task_sleep || task_delayed || !se->sched_delayed) 7083 h_nr_runnable = 1; 7084 - } else { 7085 - cfs_rq = group_cfs_rq(se); 7086 - slice = cfs_rq_min_slice(cfs_rq); 7087 } 7088 7089 for_each_sched_entity(se) { ··· 7090 if (p && &p->se == se) 7091 return -1; 7092 7093 break; 7094 } 7095
··· 7081 h_nr_idle = task_has_idle_policy(p); 7082 if (task_sleep || task_delayed || !se->sched_delayed) 7083 h_nr_runnable = 1; 7084 } 7085 7086 for_each_sched_entity(se) { ··· 7093 if (p && &p->se == se) 7094 return -1; 7095 7096 + slice = cfs_rq_min_slice(cfs_rq); 7097 break; 7098 } 7099
+1 -1
kernel/vhost_task.c
··· 111 * @arg: data to be passed to fn and handled_kill 112 * @name: the thread's name 113 * 114 - * This returns a specialized task for use by the vhost layer or NULL on 115 * failure. The returned task is inactive, and the caller must fire it up 116 * through vhost_task_start(). 117 */
··· 111 * @arg: data to be passed to fn and handled_kill 112 * @name: the thread's name 113 * 114 + * This returns a specialized task for use by the vhost layer or ERR_PTR() on 115 * failure. The returned task is inactive, and the caller must fire it up 116 * through vhost_task_start(). 117 */
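The corrected kernel-doc matches what callers must actually check. A hedged sketch of the call pattern; the callbacks and worker object are hypothetical:

  struct vhost_task *vtsk;

  vtsk = vhost_task_create(foo_worker_fn, foo_handle_sigkill, worker, "vhost-foo");
  if (IS_ERR(vtsk))
  	return PTR_ERR(vtsk);	/* never NULL on failure */

  vhost_task_start(vtsk);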
+5 -3
mm/migrate.c
··· 845 return -EAGAIN; 846 847 if (check_refs) { 848 - bool busy; 849 bool invalidated = false; 850 851 recheck_buffers: 852 busy = false; 853 spin_lock(&mapping->i_private_lock); ··· 861 } 862 bh = bh->b_this_page; 863 } while (bh != head); 864 if (busy) { 865 if (invalidated) { 866 rc = -EAGAIN; 867 goto unlock_buffers; 868 } 869 - spin_unlock(&mapping->i_private_lock); 870 invalidate_bh_lrus(); 871 invalidated = true; 872 goto recheck_buffers; ··· 885 886 unlock_buffers: 887 if (check_refs) 888 - spin_unlock(&mapping->i_private_lock); 889 bh = head; 890 do { 891 unlock_buffer(bh);
··· 845 return -EAGAIN; 846 847 if (check_refs) { 848 + bool busy, migrating; 849 bool invalidated = false; 850 851 + migrating = test_and_set_bit_lock(BH_Migrate, &head->b_state); 852 + VM_WARN_ON_ONCE(migrating); 853 recheck_buffers: 854 busy = false; 855 spin_lock(&mapping->i_private_lock); ··· 859 } 860 bh = bh->b_this_page; 861 } while (bh != head); 862 + spin_unlock(&mapping->i_private_lock); 863 if (busy) { 864 if (invalidated) { 865 rc = -EAGAIN; 866 goto unlock_buffers; 867 } 868 invalidate_bh_lrus(); 869 invalidated = true; 870 goto recheck_buffers; ··· 883 884 unlock_buffers: 885 if (check_refs) 886 + clear_bit_unlock(BH_Migrate, &head->b_state); 887 bh = head; 888 do { 889 unlock_buffer(bh);
-23
net/ceph/osd_client.c
··· 220 } 221 EXPORT_SYMBOL(osd_req_op_extent_osd_data_pages); 222 223 - void osd_req_op_extent_osd_data_pagelist(struct ceph_osd_request *osd_req, 224 - unsigned int which, struct ceph_pagelist *pagelist) 225 - { 226 - struct ceph_osd_data *osd_data; 227 - 228 - osd_data = osd_req_op_data(osd_req, which, extent, osd_data); 229 - ceph_osd_data_pagelist_init(osd_data, pagelist); 230 - } 231 - EXPORT_SYMBOL(osd_req_op_extent_osd_data_pagelist); 232 - 233 #ifdef CONFIG_BLOCK 234 void osd_req_op_extent_osd_data_bio(struct ceph_osd_request *osd_req, 235 unsigned int which, ··· 286 osd_data = osd_req_op_data(osd_req, which, cls, request_info); 287 ceph_osd_data_pagelist_init(osd_data, pagelist); 288 } 289 - 290 - void osd_req_op_cls_request_data_pagelist( 291 - struct ceph_osd_request *osd_req, 292 - unsigned int which, struct ceph_pagelist *pagelist) 293 - { 294 - struct ceph_osd_data *osd_data; 295 - 296 - osd_data = osd_req_op_data(osd_req, which, cls, request_data); 297 - ceph_osd_data_pagelist_init(osd_data, pagelist); 298 - osd_req->r_ops[which].cls.indata_len += pagelist->length; 299 - osd_req->r_ops[which].indata_len += pagelist->length; 300 - } 301 - EXPORT_SYMBOL(osd_req_op_cls_request_data_pagelist); 302 303 void osd_req_op_cls_request_data_pages(struct ceph_osd_request *osd_req, 304 unsigned int which, struct page **pages, u64 length,
··· 220 } 221 EXPORT_SYMBOL(osd_req_op_extent_osd_data_pages); 222 223 #ifdef CONFIG_BLOCK 224 void osd_req_op_extent_osd_data_bio(struct ceph_osd_request *osd_req, 225 unsigned int which, ··· 296 osd_data = osd_req_op_data(osd_req, which, cls, request_info); 297 ceph_osd_data_pagelist_init(osd_data, pagelist); 298 } 299 300 void osd_req_op_cls_request_data_pages(struct ceph_osd_request *osd_req, 301 unsigned int which, struct page **pages, u64 length,
+20 -6
net/core/lwtunnel.c
··· 333 struct dst_entry *dst; 334 int ret; 335 336 if (dev_xmit_recursion()) { 337 net_crit_ratelimited("%s(): recursion limit reached on datapath\n", 338 __func__); ··· 350 lwtstate = dst->lwtstate; 351 352 if (lwtstate->type == LWTUNNEL_ENCAP_NONE || 353 - lwtstate->type > LWTUNNEL_ENCAP_MAX) 354 - return 0; 355 356 ret = -EOPNOTSUPP; 357 rcu_read_lock(); ··· 368 if (ret == -EOPNOTSUPP) 369 goto drop; 370 371 - return ret; 372 373 drop: 374 kfree_skb(skb); 375 376 return ret; 377 } 378 EXPORT_SYMBOL_GPL(lwtunnel_output); ··· 385 struct lwtunnel_state *lwtstate; 386 struct dst_entry *dst; 387 int ret; 388 389 if (dev_xmit_recursion()) { 390 net_crit_ratelimited("%s(): recursion limit reached on datapath\n", ··· 404 lwtstate = dst->lwtstate; 405 406 if (lwtstate->type == LWTUNNEL_ENCAP_NONE || 407 - lwtstate->type > LWTUNNEL_ENCAP_MAX) 408 - return 0; 409 410 ret = -EOPNOTSUPP; 411 rcu_read_lock(); ··· 422 if (ret == -EOPNOTSUPP) 423 goto drop; 424 425 - return ret; 426 427 drop: 428 kfree_skb(skb); 429 430 return ret; 431 } 432 EXPORT_SYMBOL_GPL(lwtunnel_xmit); ··· 439 struct lwtunnel_state *lwtstate; 440 struct dst_entry *dst; 441 int ret; 442 443 if (dev_xmit_recursion()) { 444 net_crit_ratelimited("%s(): recursion limit reached on datapath\n",
··· 333 struct dst_entry *dst; 334 int ret; 335 336 + local_bh_disable(); 337 + 338 if (dev_xmit_recursion()) { 339 net_crit_ratelimited("%s(): recursion limit reached on datapath\n", 340 __func__); ··· 348 lwtstate = dst->lwtstate; 349 350 if (lwtstate->type == LWTUNNEL_ENCAP_NONE || 351 + lwtstate->type > LWTUNNEL_ENCAP_MAX) { 352 + ret = 0; 353 + goto out; 354 + } 355 356 ret = -EOPNOTSUPP; 357 rcu_read_lock(); ··· 364 if (ret == -EOPNOTSUPP) 365 goto drop; 366 367 + goto out; 368 369 drop: 370 kfree_skb(skb); 371 372 + out: 373 + local_bh_enable(); 374 return ret; 375 } 376 EXPORT_SYMBOL_GPL(lwtunnel_output); ··· 379 struct lwtunnel_state *lwtstate; 380 struct dst_entry *dst; 381 int ret; 382 + 383 + local_bh_disable(); 384 385 if (dev_xmit_recursion()) { 386 net_crit_ratelimited("%s(): recursion limit reached on datapath\n", ··· 396 lwtstate = dst->lwtstate; 397 398 if (lwtstate->type == LWTUNNEL_ENCAP_NONE || 399 + lwtstate->type > LWTUNNEL_ENCAP_MAX) { 400 + ret = 0; 401 + goto out; 402 + } 403 404 ret = -EOPNOTSUPP; 405 rcu_read_lock(); ··· 412 if (ret == -EOPNOTSUPP) 413 goto drop; 414 415 + goto out; 416 417 drop: 418 kfree_skb(skb); 419 420 + out: 421 + local_bh_enable(); 422 return ret; 423 } 424 EXPORT_SYMBOL_GPL(lwtunnel_xmit); ··· 427 struct lwtunnel_state *lwtstate; 428 struct dst_entry *dst; 429 int ret; 430 + 431 + DEBUG_NET_WARN_ON_ONCE(!in_softirq()); 432 433 if (dev_xmit_recursion()) { 434 net_crit_ratelimited("%s(): recursion limit reached on datapath\n",
+6 -3
net/core/netdev-genl.c
··· 861 862 mutex_lock(&priv->lock); 863 864 netdev = netdev_get_by_index_lock(genl_info_net(info), ifindex); 865 - if (!netdev || !netif_device_present(netdev)) { 866 err = -ENODEV; 867 goto err_unlock_sock; 868 } 869 - 870 - if (!netdev_need_ops_lock(netdev)) { 871 err = -EOPNOTSUPP; 872 NL_SET_BAD_ATTR(info->extack, 873 info->attrs[NETDEV_A_DEV_IFINDEX]); 874 goto err_unlock;
··· 861 862 mutex_lock(&priv->lock); 863 864 + err = 0; 865 netdev = netdev_get_by_index_lock(genl_info_net(info), ifindex); 866 + if (!netdev) { 867 err = -ENODEV; 868 goto err_unlock_sock; 869 } 870 + if (!netif_device_present(netdev)) 871 + err = -ENODEV; 872 + else if (!netdev_need_ops_lock(netdev)) 873 err = -EOPNOTSUPP; 874 + if (err) { 875 NL_SET_BAD_ATTR(info->extack, 876 info->attrs[NETDEV_A_DEV_IFINDEX]); 877 goto err_unlock;
+13 -5
net/core/selftests.c
··· 100 ehdr->h_proto = htons(ETH_P_IP); 101 102 if (attr->tcp) { 103 thdr->source = htons(attr->sport); 104 thdr->dest = htons(attr->dport); 105 thdr->doff = sizeof(struct tcphdr) / 4; 106 - thdr->check = 0; 107 } else { 108 uhdr->source = htons(attr->sport); 109 uhdr->dest = htons(attr->dport); ··· 144 attr->id = net_test_next_id; 145 shdr->id = net_test_next_id++; 146 147 - if (attr->size) 148 - skb_put(skb, attr->size); 149 - if (attr->max_size && attr->max_size > skb->len) 150 - skb_put(skb, attr->max_size - skb->len); 151 152 skb->csum = 0; 153 skb->ip_summed = CHECKSUM_PARTIAL;
··· 100 ehdr->h_proto = htons(ETH_P_IP); 101 102 if (attr->tcp) { 103 + memset(thdr, 0, sizeof(*thdr)); 104 thdr->source = htons(attr->sport); 105 thdr->dest = htons(attr->dport); 106 thdr->doff = sizeof(struct tcphdr) / 4; 107 } else { 108 uhdr->source = htons(attr->sport); 109 uhdr->dest = htons(attr->dport); ··· 144 attr->id = net_test_next_id; 145 shdr->id = net_test_next_id++; 146 147 + if (attr->size) { 148 + void *payload = skb_put(skb, attr->size); 149 + 150 + memset(payload, 0, attr->size); 151 + } 152 + 153 + if (attr->max_size && attr->max_size > skb->len) { 154 + size_t pad_len = attr->max_size - skb->len; 155 + void *pad = skb_put(skb, pad_len); 156 + 157 + memset(pad, 0, pad_len); 158 + } 159 160 skb->csum = 0; 161 skb->ip_summed = CHECKSUM_PARTIAL;
+5 -1
net/mptcp/pm_userspace.c
··· 337 338 release_sock(sk); 339 340 - sock_kfree_s(sk, match, sizeof(*match)); 341 342 err = 0; 343 out:
··· 337 338 release_sock(sk); 339 340 + kfree_rcu_mightsleep(match); 341 + /* Adjust sk_omem_alloc like sock_kfree_s() does, to match 342 + * with allocation of this memory by sock_kmemdup() 343 + */ 344 + atomic_sub(sizeof(*match), &sk->sk_omem_alloc); 345 346 err = 0; 347 out:
+17 -6
net/sched/sch_hfsc.c
··· 961 962 if (cl != NULL) { 963 int old_flags; 964 965 if (parentid) { 966 if (cl->cl_parent && ··· 992 if (usc != NULL) 993 hfsc_change_usc(cl, usc, cur_time); 994 995 if (cl->qdisc->q.qlen != 0) { 996 - int len = qdisc_peek_len(cl->qdisc); 997 - 998 if (cl->cl_flags & HFSC_RSC) { 999 if (old_flags & HFSC_RSC) 1000 update_ed(cl, len); ··· 1641 if (cl->qdisc->q.qlen != 0) { 1642 /* update ed */ 1643 next_len = qdisc_peek_len(cl->qdisc); 1644 - if (realtime) 1645 - update_ed(cl, next_len); 1646 - else 1647 - update_d(cl, next_len); 1648 } else { 1649 /* the class becomes passive */ 1650 eltree_remove(cl);
··· 961 962 if (cl != NULL) { 963 int old_flags; 964 + int len = 0; 965 966 if (parentid) { 967 if (cl->cl_parent && ··· 991 if (usc != NULL) 992 hfsc_change_usc(cl, usc, cur_time); 993 994 + if (cl->qdisc->q.qlen != 0) 995 + len = qdisc_peek_len(cl->qdisc); 996 + /* Check queue length again since some qdisc implementations 997 + * (e.g., netem/codel) might empty the queue during the peek 998 + * operation. 999 + */ 1000 if (cl->qdisc->q.qlen != 0) { 1001 if (cl->cl_flags & HFSC_RSC) { 1002 if (old_flags & HFSC_RSC) 1003 update_ed(cl, len); ··· 1636 if (cl->qdisc->q.qlen != 0) { 1637 /* update ed */ 1638 next_len = qdisc_peek_len(cl->qdisc); 1639 + /* Check queue length again since some qdisc implementations 1640 + * (e.g., netem/codel) might empty the queue during the peek 1641 + * operation. 1642 + */ 1643 + if (cl->qdisc->q.qlen != 0) { 1644 + if (realtime) 1645 + update_ed(cl, next_len); 1646 + else 1647 + update_d(cl, next_len); 1648 + } 1649 } else { 1650 /* the class becomes passive */ 1651 eltree_remove(cl);
+1 -5
net/sunrpc/cache.c
··· 1536 * or by one second if it has already reached the current time. 1537 * Newly added cache entries will always have ->last_refresh greater 1538 * that ->flush_time, so they don't get flushed prematurely. 1539 - * 1540 - * If someone frequently calls the flush interface, we should 1541 - * immediately clean the corresponding cache_detail instead of 1542 - * continuously accumulating nextcheck. 1543 */ 1544 1545 - if (cd->flush_time >= now && cd->flush_time < (now + 5)) 1546 now = cd->flush_time + 1; 1547 1548 cd->flush_time = now;
··· 1536 * or by one second if it has already reached the current time. 1537 * Newly added cache entries will always have ->last_refresh greater 1538 * that ->flush_time, so they don't get flushed prematurely. 1539 */ 1540 1541 + if (cd->flush_time >= now) 1542 now = cd->flush_time + 1; 1543 1544 cd->flush_time = now;
+2 -1
net/tipc/monitor.c
··· 716 if (!mon) 717 continue; 718 write_lock_bh(&mon->lock); 719 - mon->self->addr = tipc_own_addr(net); 720 write_unlock_bh(&mon->lock); 721 } 722 }
··· 716 if (!mon) 717 continue; 718 write_lock_bh(&mon->lock); 719 + if (mon->self) 720 + mon->self->addr = tipc_own_addr(net); 721 write_unlock_bh(&mon->lock); 722 } 723 }
+6 -2
rust/kernel/firmware.rs
··· 4 //! 5 //! C header: [`include/linux/firmware.h`](srctree/include/linux/firmware.h) 6 7 - use crate::{bindings, device::Device, error::Error, error::Result, str::CStr}; 8 use core::ptr::NonNull; 9 10 /// # Invariants ··· 12 /// One of the following: `bindings::request_firmware`, `bindings::firmware_request_nowarn`, 13 /// `bindings::firmware_request_platform`, `bindings::request_firmware_direct`. 14 struct FwFunc( 15 - unsafe extern "C" fn(*mut *const bindings::firmware, *const u8, *mut bindings::device) -> i32, 16 ); 17 18 impl FwFunc {
··· 4 //! 5 //! C header: [`include/linux/firmware.h`](srctree/include/linux/firmware.h) 6 7 + use crate::{bindings, device::Device, error::Error, error::Result, ffi, str::CStr}; 8 use core::ptr::NonNull; 9 10 /// # Invariants ··· 12 /// One of the following: `bindings::request_firmware`, `bindings::firmware_request_nowarn`, 13 /// `bindings::firmware_request_platform`, `bindings::request_firmware_direct`. 14 struct FwFunc( 15 + unsafe extern "C" fn( 16 + *mut *const bindings::firmware, 17 + *const ffi::c_char, 18 + *mut bindings::device, 19 + ) -> i32, 20 ); 21 22 impl FwFunc {
+1 -1
samples/bpf/Makefile
··· 376 @echo " CLANG-bpf " $@ 377 $(Q)$(CLANG) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(BPF_EXTRA_CFLAGS) \ 378 -I$(obj) -I$(srctree)/tools/testing/selftests/bpf/ \ 379 - -I$(LIBBPF_INCLUDE) \ 380 -D__KERNEL__ -D__BPF_TRACING__ -Wno-unused-value -Wno-pointer-sign \ 381 -D__TARGET_ARCH_$(SRCARCH) -Wno-compare-distinct-pointer-types \ 382 -Wno-gnu-variable-sized-type-not-at-end \
··· 376 @echo " CLANG-bpf " $@ 377 $(Q)$(CLANG) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(BPF_EXTRA_CFLAGS) \ 378 -I$(obj) -I$(srctree)/tools/testing/selftests/bpf/ \ 379 + -I$(LIBBPF_INCLUDE) $(CLANG_SYS_INCLUDES) \ 380 -D__KERNEL__ -D__BPF_TRACING__ -Wno-unused-value -Wno-pointer-sign \ 381 -D__TARGET_ARCH_$(SRCARCH) -Wno-compare-distinct-pointer-types \ 382 -Wno-gnu-variable-sized-type-not-at-end \
+1 -1
scripts/Makefile.extrawarn
··· 15 KBUILD_CFLAGS += -Werror=strict-prototypes 16 KBUILD_CFLAGS += -Wno-format-security 17 KBUILD_CFLAGS += -Wno-trigraphs 18 - KBUILD_CFLAGS += $(call cc-disable-warning,frame-address,) 19 KBUILD_CFLAGS += $(call cc-disable-warning, address-of-packed-member) 20 KBUILD_CFLAGS += -Wmissing-declarations 21 KBUILD_CFLAGS += -Wmissing-prototypes
··· 15 KBUILD_CFLAGS += -Werror=strict-prototypes 16 KBUILD_CFLAGS += -Wno-format-security 17 KBUILD_CFLAGS += -Wno-trigraphs 18 + KBUILD_CFLAGS += $(call cc-disable-warning, frame-address) 19 KBUILD_CFLAGS += $(call cc-disable-warning, address-of-packed-member) 20 KBUILD_CFLAGS += -Wmissing-declarations 21 KBUILD_CFLAGS += -Wmissing-prototypes
+3 -1
security/integrity/ima/ima_main.c
··· 245 &allowed_algos); 246 violation_check = ((func == FILE_CHECK || func == MMAP_CHECK || 247 func == MMAP_CHECK_REQPROT) && 248 - (ima_policy_flag & IMA_MEASURE)); 249 if (!action && !violation_check) 250 return 0; 251
··· 245 &allowed_algos); 246 violation_check = ((func == FILE_CHECK || func == MMAP_CHECK || 247 func == MMAP_CHECK_REQPROT) && 248 + (ima_policy_flag & IMA_MEASURE) && 249 + ((action & IMA_MEASURE) || 250 + (file->f_mode & FMODE_WRITE))); 251 if (!action && !violation_check) 252 return 0; 253
+2 -2
security/landlock/domain.c
··· 16 #include <linux/path.h> 17 #include <linux/pid.h> 18 #include <linux/sched.h> 19 #include <linux/uidgid.h> 20 21 #include "access.h" ··· 100 return ERR_PTR(-ENOMEM); 101 102 memcpy(details->exe_path, path_str, path_size); 103 - WARN_ON_ONCE(current_cred() != current_real_cred()); 104 - details->pid = get_pid(task_pid(current)); 105 details->uid = from_kuid(&init_user_ns, current_uid()); 106 get_task_comm(details->comm, current); 107 return details;
··· 16 #include <linux/path.h> 17 #include <linux/pid.h> 18 #include <linux/sched.h> 19 + #include <linux/signal.h> 20 #include <linux/uidgid.h> 21 22 #include "access.h" ··· 99 return ERR_PTR(-ENOMEM); 100 101 memcpy(details->exe_path, path_str, path_size); 102 + details->pid = get_pid(task_tgid(current)); 103 details->uid = from_kuid(&init_user_ns, current_uid()); 104 get_task_comm(details->comm, current); 105 return details;
+1 -1
security/landlock/domain.h
··· 130 static inline void 131 landlock_free_hierarchy_details(struct landlock_hierarchy *const hierarchy) 132 { 133 - if (WARN_ON_ONCE(!hierarchy || !hierarchy->details)) 134 return; 135 136 put_pid(hierarchy->details->pid);
··· 130 static inline void 131 landlock_free_hierarchy_details(struct landlock_hierarchy *const hierarchy) 132 { 133 + if (!hierarchy || !hierarchy->details) 134 return; 135 136 put_pid(hierarchy->details->pid);
+13 -14
security/landlock/syscalls.c
··· 169 * the new ruleset. 170 * @size: Size of the pointed &struct landlock_ruleset_attr (needed for 171 * backward and forward compatibility). 172 - * @flags: Supported value: 173 * - %LANDLOCK_CREATE_RULESET_VERSION 174 * - %LANDLOCK_CREATE_RULESET_ERRATA 175 * 176 * This system call enables to create a new Landlock ruleset, and returns the 177 * related file descriptor on success. 178 * 179 - * If @flags is %LANDLOCK_CREATE_RULESET_VERSION and @attr is NULL and @size is 180 - * 0, then the returned value is the highest supported Landlock ABI version 181 - * (starting at 1). 182 - * 183 - * If @flags is %LANDLOCK_CREATE_RULESET_ERRATA and @attr is NULL and @size is 184 - * 0, then the returned value is a bitmask of fixed issues for the current 185 - * Landlock ABI version. 186 * 187 * Possible returned errors are: 188 * ··· 187 * - %E2BIG: @attr or @size inconsistencies; 188 * - %EFAULT: @attr or @size inconsistencies; 189 * - %ENOMSG: empty &landlock_ruleset_attr.handled_access_fs. 190 */ 191 SYSCALL_DEFINE3(landlock_create_ruleset, 192 const struct landlock_ruleset_attr __user *const, attr, ··· 451 * @ruleset_fd: File descriptor tied to the ruleset to merge with the target. 452 * @flags: Supported values: 453 * 454 - * - %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF 455 - * - %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON 456 - * - %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF 457 * 458 * This system call enables to enforce a Landlock ruleset on the current 459 * thread. Enforcing a ruleset requires that the task has %CAP_SYS_ADMIN in its 460 * namespace or is running with no_new_privs. This avoids scenarios where 461 * unprivileged tasks can affect the behavior of privileged children. 462 - * 463 - * It is allowed to only pass the %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF 464 - * flag with a @ruleset_fd value of -1. 465 * 466 * Possible returned errors are: 467 * ··· 471 * %CAP_SYS_ADMIN in its namespace. 472 * - %E2BIG: The maximum number of stacked rulesets is reached for the current 473 * thread. 474 */ 475 SYSCALL_DEFINE2(landlock_restrict_self, const int, ruleset_fd, const __u32, 476 flags)
··· 169 * the new ruleset. 170 * @size: Size of the pointed &struct landlock_ruleset_attr (needed for 171 * backward and forward compatibility). 172 + * @flags: Supported values: 173 + * 174 * - %LANDLOCK_CREATE_RULESET_VERSION 175 * - %LANDLOCK_CREATE_RULESET_ERRATA 176 * 177 * This system call enables to create a new Landlock ruleset, and returns the 178 * related file descriptor on success. 179 * 180 + * If %LANDLOCK_CREATE_RULESET_VERSION or %LANDLOCK_CREATE_RULESET_ERRATA is 181 + * set, then @attr must be NULL and @size must be 0. 182 * 183 * Possible returned errors are: 184 * ··· 191 * - %E2BIG: @attr or @size inconsistencies; 192 * - %EFAULT: @attr or @size inconsistencies; 193 * - %ENOMSG: empty &landlock_ruleset_attr.handled_access_fs. 194 + * 195 + * .. kernel-doc:: include/uapi/linux/landlock.h 196 + * :identifiers: landlock_create_ruleset_flags 197 */ 198 SYSCALL_DEFINE3(landlock_create_ruleset, 199 const struct landlock_ruleset_attr __user *const, attr, ··· 452 * @ruleset_fd: File descriptor tied to the ruleset to merge with the target. 453 * @flags: Supported values: 454 * 455 + * - %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF 456 + * - %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON 457 + * - %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF 458 * 459 * This system call enables to enforce a Landlock ruleset on the current 460 * thread. Enforcing a ruleset requires that the task has %CAP_SYS_ADMIN in its 461 * namespace or is running with no_new_privs. This avoids scenarios where 462 * unprivileged tasks can affect the behavior of privileged children. 463 * 464 * Possible returned errors are: 465 * ··· 475 * %CAP_SYS_ADMIN in its namespace. 476 * - %E2BIG: The maximum number of stacked rulesets is reached for the current 477 * thread. 478 + * 479 + * .. kernel-doc:: include/uapi/linux/landlock.h 480 + * :identifiers: landlock_restrict_self_flags 481 */ 482 SYSCALL_DEFINE2(landlock_restrict_self, const int, ruleset_fd, const __u32, 483 flags)
+2 -2
tools/arch/x86/lib/x86-opcode-map.txt
··· 996 83: Grp1 Ev,Ib (1A),(es) 997 # CTESTSCC instructions are: CTESTB, CTESTBE, CTESTF, CTESTL, CTESTLE, CTESTNB, CTESTNBE, CTESTNL, 998 # CTESTNLE, CTESTNO, CTESTNS, CTESTNZ, CTESTO, CTESTS, CTESTT, CTESTZ 999 - 84: CTESTSCC (ev) 1000 - 85: CTESTSCC (es) | CTESTSCC (66),(es) 1001 88: POPCNT Gv,Ev (es) | POPCNT Gv,Ev (66),(es) 1002 8f: POP2 Bq,Rq (000),(11B),(ev) 1003 a5: SHLD Ev,Gv,CL (es) | SHLD Ev,Gv,CL (66),(es)
··· 996 83: Grp1 Ev,Ib (1A),(es) 997 # CTESTSCC instructions are: CTESTB, CTESTBE, CTESTF, CTESTL, CTESTLE, CTESTNB, CTESTNBE, CTESTNL, 998 # CTESTNLE, CTESTNO, CTESTNS, CTESTNZ, CTESTO, CTESTS, CTESTT, CTESTZ 999 + 84: CTESTSCC Eb,Gb (ev) 1000 + 85: CTESTSCC Ev,Gv (es) | CTESTSCC Ev,Gv (66),(es) 1001 88: POPCNT Gv,Ev (es) | POPCNT Gv,Ev (66),(es) 1002 8f: POP2 Bq,Rq (000),(11B),(ev) 1003 a5: SHLD Ev,Gv,CL (es) | SHLD Ev,Gv,CL (66),(es)
+1 -1
tools/sched_ext/scx_flatcg.bpf.c
··· 950 .cgroup_move = (void *)fcg_cgroup_move, 951 .init = (void *)fcg_init, 952 .exit = (void *)fcg_exit, 953 - .flags = SCX_OPS_HAS_CGROUP_WEIGHT | SCX_OPS_ENQ_EXITING, 954 .name = "flatcg");
··· 950 .cgroup_move = (void *)fcg_cgroup_move, 951 .init = (void *)fcg_init, 952 .exit = (void *)fcg_exit, 953 + .flags = SCX_OPS_ENQ_EXITING, 954 .name = "flatcg");
+1 -1
tools/testing/cxl/test/mem.c
··· 1780 if (rc) 1781 return rc; 1782 1783 - rc = devm_cxl_setup_fwctl(cxlmd); 1784 if (rc) 1785 dev_dbg(dev, "No CXL FWCTL setup\n"); 1786
··· 1780 if (rc) 1781 return rc; 1782 1783 + rc = devm_cxl_setup_fwctl(&pdev->dev, cxlmd); 1784 if (rc) 1785 dev_dbg(dev, "No CXL FWCTL setup\n"); 1786
+2
tools/testing/kunit/configs/all_tests.config
··· 43 44 CONFIG_AUDIT=y 45 46 CONFIG_SECURITY=y 47 CONFIG_SECURITY_APPARMOR=y 48 CONFIG_SECURITY_LANDLOCK=y
··· 43 44 CONFIG_AUDIT=y 45 46 + CONFIG_PRIME_NUMBERS=y 47 + 48 CONFIG_SECURITY=y 49 CONFIG_SECURITY_APPARMOR=y 50 CONFIG_SECURITY_LANDLOCK=y
+37
tools/testing/selftests/bpf/prog_tests/for_each.c
··· 6 #include "for_each_array_map_elem.skel.h" 7 #include "for_each_map_elem_write_key.skel.h" 8 #include "for_each_multi_maps.skel.h" 9 10 static unsigned int duration; 11 ··· 204 for_each_multi_maps__destroy(skel); 205 } 206 207 void test_for_each(void) 208 { 209 if (test__start_subtest("hash_map")) ··· 248 test_write_map_key(); 249 if (test__start_subtest("multi_maps")) 250 test_multi_maps(); 251 }
··· 6 #include "for_each_array_map_elem.skel.h" 7 #include "for_each_map_elem_write_key.skel.h" 8 #include "for_each_multi_maps.skel.h" 9 + #include "for_each_hash_modify.skel.h" 10 11 static unsigned int duration; 12 ··· 203 for_each_multi_maps__destroy(skel); 204 } 205 206 + static void test_hash_modify(void) 207 + { 208 + struct for_each_hash_modify *skel; 209 + int max_entries, i, err; 210 + __u64 key, val; 211 + 212 + LIBBPF_OPTS(bpf_test_run_opts, topts, 213 + .data_in = &pkt_v4, 214 + .data_size_in = sizeof(pkt_v4), 215 + .repeat = 1 216 + ); 217 + 218 + skel = for_each_hash_modify__open_and_load(); 219 + if (!ASSERT_OK_PTR(skel, "for_each_hash_modify__open_and_load")) 220 + return; 221 + 222 + max_entries = bpf_map__max_entries(skel->maps.hashmap); 223 + for (i = 0; i < max_entries; i++) { 224 + key = i; 225 + val = i; 226 + err = bpf_map__update_elem(skel->maps.hashmap, &key, sizeof(key), 227 + &val, sizeof(val), BPF_ANY); 228 + if (!ASSERT_OK(err, "map_update")) 229 + goto out; 230 + } 231 + 232 + err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_pkt_access), &topts); 233 + ASSERT_OK(err, "bpf_prog_test_run_opts"); 234 + ASSERT_OK(topts.retval, "retval"); 235 + 236 + out: 237 + for_each_hash_modify__destroy(skel); 238 + } 239 + 240 void test_for_each(void) 241 { 242 if (test__start_subtest("hash_map")) ··· 213 test_write_map_key(); 214 if (test__start_subtest("multi_maps")) 215 test_multi_maps(); 216 + if (test__start_subtest("hash_modify")) 217 + test_hash_modify(); 218 }
-1
tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c
··· 68 goto close_cli; 69 70 err = disconnect(cli); 71 - ASSERT_OK(err, "disconnect"); 72 73 close_cli: 74 close(cli);
··· 68 goto close_cli; 69 70 err = disconnect(cli); 71 72 close_cli: 73 close(cli);
+1 -1
tools/testing/selftests/bpf/progs/bpf_misc.h
··· 221 #define CAN_USE_GOTOL 222 #endif 223 224 - #if _clang_major__ >= 18 225 #define CAN_USE_BPF_ST 226 #endif 227
··· 221 #define CAN_USE_GOTOL 222 #endif 223 224 + #if __clang_major__ >= 18 225 #define CAN_USE_BPF_ST 226 #endif 227
+30
tools/testing/selftests/bpf/progs/for_each_hash_modify.c
···
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2025 Intel Corporation */ 3 + #include "vmlinux.h" 4 + #include <bpf/bpf_helpers.h> 5 + 6 + char _license[] SEC("license") = "GPL"; 7 + 8 + struct { 9 + __uint(type, BPF_MAP_TYPE_HASH); 10 + __uint(max_entries, 128); 11 + __type(key, __u64); 12 + __type(value, __u64); 13 + } hashmap SEC(".maps"); 14 + 15 + static int cb(struct bpf_map *map, __u64 *key, __u64 *val, void *arg) 16 + { 17 + bpf_map_delete_elem(map, key); 18 + bpf_map_update_elem(map, key, val, 0); 19 + return 0; 20 + } 21 + 22 + SEC("tc") 23 + int test_pkt_access(struct __sk_buff *skb) 24 + { 25 + (void)skb; 26 + 27 + bpf_for_each_map_elem(&hashmap, cb, NULL, 0); 28 + 29 + return 0; 30 + }
+14 -7
tools/testing/selftests/landlock/audit.h
··· 300 return err; 301 } 302 303 - static int __maybe_unused matches_log_domain_allocated(int audit_fd, 304 __u64 *domain_id) 305 { 306 - return audit_match_record( 307 - audit_fd, AUDIT_LANDLOCK_DOMAIN, 308 - REGEX_LANDLOCK_PREFIX 309 - " status=allocated mode=enforcing pid=[0-9]\\+ uid=[0-9]\\+" 310 - " exe=\"[^\"]\\+\" comm=\".*_test\"$", 311 - domain_id); 312 } 313 314 static int __maybe_unused matches_log_domain_deallocated(
··· 300 return err; 301 } 302 303 + static int __maybe_unused matches_log_domain_allocated(int audit_fd, pid_t pid, 304 __u64 *domain_id) 305 { 306 + static const char log_template[] = REGEX_LANDLOCK_PREFIX 307 + " status=allocated mode=enforcing pid=%d uid=[0-9]\\+" 308 + " exe=\"[^\"]\\+\" comm=\".*_test\"$"; 309 + char log_match[sizeof(log_template) + 10]; 310 + int log_match_len; 311 + 312 + log_match_len = 313 + snprintf(log_match, sizeof(log_match), log_template, pid); 314 + if (log_match_len > sizeof(log_match)) 315 + return -E2BIG; 316 + 317 + return audit_match_record(audit_fd, AUDIT_LANDLOCK_DOMAIN, log_match, 318 + domain_id); 319 } 320 321 static int __maybe_unused matches_log_domain_deallocated(
+137 -17
tools/testing/selftests/landlock/audit_test.c
··· 9 #include <errno.h> 10 #include <limits.h> 11 #include <linux/landlock.h> 12 #include <stdlib.h> 13 #include <sys/mount.h> 14 #include <sys/prctl.h> ··· 41 { 42 struct audit_filter audit_filter; 43 int audit_fd; 44 - __u64(*domain_stack)[16]; 45 }; 46 47 FIXTURE_SETUP(audit) ··· 60 TH_LOG("Failed to initialize audit: %s", error_msg); 61 } 62 clear_cap(_metadata, CAP_AUDIT_CONTROL); 63 - 64 - self->domain_stack = mmap(NULL, sizeof(*self->domain_stack), 65 - PROT_READ | PROT_WRITE, 66 - MAP_SHARED | MAP_ANONYMOUS, -1, 0); 67 - ASSERT_NE(MAP_FAILED, self->domain_stack); 68 - memset(self->domain_stack, 0, sizeof(*self->domain_stack)); 69 } 70 71 FIXTURE_TEARDOWN(audit) 72 { 73 - EXPECT_EQ(0, munmap(self->domain_stack, sizeof(*self->domain_stack))); 74 - 75 set_cap(_metadata, CAP_AUDIT_CONTROL); 76 EXPECT_EQ(0, audit_cleanup(self->audit_fd, &self->audit_filter)); 77 clear_cap(_metadata, CAP_AUDIT_CONTROL); ··· 75 .scoped = LANDLOCK_SCOPE_SIGNAL, 76 }; 77 int status, ruleset_fd, i; 78 __u64 prev_dom = 3; 79 pid_t child; 80 81 ruleset_fd = 82 landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0); ··· 92 child = fork(); 93 ASSERT_LE(0, child); 94 if (child == 0) { 95 - for (i = 0; i < ARRAY_SIZE(*self->domain_stack); i++) { 96 __u64 denial_dom = 1; 97 __u64 allocated_dom = 2; 98 ··· 105 matches_log_signal(_metadata, self->audit_fd, 106 getppid(), &denial_dom)); 107 EXPECT_EQ(0, matches_log_domain_allocated( 108 - self->audit_fd, &allocated_dom)); 109 EXPECT_NE(denial_dom, 1); 110 EXPECT_NE(denial_dom, 0); 111 EXPECT_EQ(denial_dom, allocated_dom); ··· 114 /* Checks that the new domain is younger than the previous one. */ 115 EXPECT_GT(allocated_dom, prev_dom); 116 prev_dom = allocated_dom; 117 - (*self->domain_stack)[i] = allocated_dom; 118 } 119 120 /* Checks that we reached the maximum number of layers. */ ··· 141 /* Purges log from deallocated domains. */ 142 EXPECT_EQ(0, setsockopt(self->audit_fd, SOL_SOCKET, SO_RCVTIMEO, 143 &audit_tv_dom_drop, sizeof(audit_tv_dom_drop))); 144 - for (i = ARRAY_SIZE(*self->domain_stack) - 1; i >= 0; i--) { 145 __u64 deallocated_dom = 2; 146 147 EXPECT_EQ(0, matches_log_domain_deallocated(self->audit_fd, 1, 148 &deallocated_dom)); 149 - EXPECT_EQ((*self->domain_stack)[i], deallocated_dom) 150 { 151 TH_LOG("Failed to match domain %llx (#%d)", 152 - (*self->domain_stack)[i], i); 153 } 154 } 155 EXPECT_EQ(0, setsockopt(self->audit_fd, SOL_SOCKET, SO_RCVTIMEO, 156 &audit_tv_default, sizeof(audit_tv_default))); 157 - 158 EXPECT_EQ(0, close(ruleset_fd)); 159 } 160 161 FIXTURE(audit_flags) ··· 392 393 /* Checks domain information records. */ 394 EXPECT_EQ(0, matches_log_domain_allocated( 395 - self->audit_fd, &allocated_dom)); 396 EXPECT_NE(*self->domain_id, 1); 397 EXPECT_NE(*self->domain_id, 0); 398 EXPECT_EQ(*self->domain_id, allocated_dom);
··· 9 #include <errno.h> 10 #include <limits.h> 11 #include <linux/landlock.h> 12 + #include <pthread.h> 13 #include <stdlib.h> 14 #include <sys/mount.h> 15 #include <sys/prctl.h> ··· 40 { 41 struct audit_filter audit_filter; 42 int audit_fd; 43 }; 44 45 FIXTURE_SETUP(audit) ··· 60 TH_LOG("Failed to initialize audit: %s", error_msg); 61 } 62 clear_cap(_metadata, CAP_AUDIT_CONTROL); 63 } 64 65 FIXTURE_TEARDOWN(audit) 66 { 67 set_cap(_metadata, CAP_AUDIT_CONTROL); 68 EXPECT_EQ(0, audit_cleanup(self->audit_fd, &self->audit_filter)); 69 clear_cap(_metadata, CAP_AUDIT_CONTROL); ··· 83 .scoped = LANDLOCK_SCOPE_SIGNAL, 84 }; 85 int status, ruleset_fd, i; 86 + __u64(*domain_stack)[16]; 87 __u64 prev_dom = 3; 88 pid_t child; 89 + 90 + domain_stack = mmap(NULL, sizeof(*domain_stack), PROT_READ | PROT_WRITE, 91 + MAP_SHARED | MAP_ANONYMOUS, -1, 0); 92 + ASSERT_NE(MAP_FAILED, domain_stack); 93 + memset(domain_stack, 0, sizeof(*domain_stack)); 94 95 ruleset_fd = 96 landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0); ··· 94 child = fork(); 95 ASSERT_LE(0, child); 96 if (child == 0) { 97 + for (i = 0; i < ARRAY_SIZE(*domain_stack); i++) { 98 __u64 denial_dom = 1; 99 __u64 allocated_dom = 2; 100 ··· 107 matches_log_signal(_metadata, self->audit_fd, 108 getppid(), &denial_dom)); 109 EXPECT_EQ(0, matches_log_domain_allocated( 110 + self->audit_fd, getpid(), 111 + &allocated_dom)); 112 EXPECT_NE(denial_dom, 1); 113 EXPECT_NE(denial_dom, 0); 114 EXPECT_EQ(denial_dom, allocated_dom); ··· 115 /* Checks that the new domain is younger than the previous one. */ 116 EXPECT_GT(allocated_dom, prev_dom); 117 prev_dom = allocated_dom; 118 + (*domain_stack)[i] = allocated_dom; 119 } 120 121 /* Checks that we reached the maximum number of layers. */ ··· 142 /* Purges log from deallocated domains. */ 143 EXPECT_EQ(0, setsockopt(self->audit_fd, SOL_SOCKET, SO_RCVTIMEO, 144 &audit_tv_dom_drop, sizeof(audit_tv_dom_drop))); 145 + for (i = ARRAY_SIZE(*domain_stack) - 1; i >= 0; i--) { 146 __u64 deallocated_dom = 2; 147 148 EXPECT_EQ(0, matches_log_domain_deallocated(self->audit_fd, 1, 149 &deallocated_dom)); 150 + EXPECT_EQ((*domain_stack)[i], deallocated_dom) 151 { 152 TH_LOG("Failed to match domain %llx (#%d)", 153 + (*domain_stack)[i], i); 154 } 155 } 156 + EXPECT_EQ(0, munmap(domain_stack, sizeof(*domain_stack))); 157 EXPECT_EQ(0, setsockopt(self->audit_fd, SOL_SOCKET, SO_RCVTIMEO, 158 &audit_tv_default, sizeof(audit_tv_default))); 159 EXPECT_EQ(0, close(ruleset_fd)); 160 + } 161 + 162 + struct thread_data { 163 + pid_t parent_pid; 164 + int ruleset_fd, pipe_child, pipe_parent; 165 + }; 166 + 167 + static void *thread_audit_test(void *arg) 168 + { 169 + const struct thread_data *data = (struct thread_data *)arg; 170 + uintptr_t err = 0; 171 + char buffer; 172 + 173 + /* TGID and TID are different for a second thread. */ 174 + if (getpid() == gettid()) { 175 + err = 1; 176 + goto out; 177 + } 178 + 179 + if (landlock_restrict_self(data->ruleset_fd, 0)) { 180 + err = 2; 181 + goto out; 182 + } 183 + 184 + if (close(data->ruleset_fd)) { 185 + err = 3; 186 + goto out; 187 + } 188 + 189 + /* Creates a denial to get the domain ID. */ 190 + if (kill(data->parent_pid, 0) != -1) { 191 + err = 4; 192 + goto out; 193 + } 194 + 195 + if (EPERM != errno) { 196 + err = 5; 197 + goto out; 198 + } 199 + 200 + /* Signals the parent to read denial logs. */ 201 + if (write(data->pipe_child, ".", 1) != 1) { 202 + err = 6; 203 + goto out; 204 + } 205 + 206 + /* Waits for the parent to update audit filters. 
*/ 207 + if (read(data->pipe_parent, &buffer, 1) != 1) { 208 + err = 7; 209 + goto out; 210 + } 211 + 212 + out: 213 + close(data->pipe_child); 214 + close(data->pipe_parent); 215 + return (void *)err; 216 + } 217 + 218 + /* Checks that the PID tied to a domain is not a TID but the TGID. */ 219 + TEST_F(audit, thread) 220 + { 221 + const struct landlock_ruleset_attr ruleset_attr = { 222 + .scoped = LANDLOCK_SCOPE_SIGNAL, 223 + }; 224 + __u64 denial_dom = 1; 225 + __u64 allocated_dom = 2; 226 + __u64 deallocated_dom = 3; 227 + pthread_t thread; 228 + int pipe_child[2], pipe_parent[2]; 229 + char buffer; 230 + struct thread_data child_data; 231 + 232 + child_data.parent_pid = getppid(); 233 + ASSERT_EQ(0, pipe2(pipe_child, O_CLOEXEC)); 234 + child_data.pipe_child = pipe_child[1]; 235 + ASSERT_EQ(0, pipe2(pipe_parent, O_CLOEXEC)); 236 + child_data.pipe_parent = pipe_parent[0]; 237 + child_data.ruleset_fd = 238 + landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0); 239 + ASSERT_LE(0, child_data.ruleset_fd); 240 + 241 + /* TGID and TID are the same for the initial thread . */ 242 + EXPECT_EQ(getpid(), gettid()); 243 + EXPECT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)); 244 + ASSERT_EQ(0, pthread_create(&thread, NULL, thread_audit_test, 245 + &child_data)); 246 + 247 + /* Waits for the child to generate a denial. */ 248 + ASSERT_EQ(1, read(pipe_child[0], &buffer, 1)); 249 + EXPECT_EQ(0, close(pipe_child[0])); 250 + 251 + /* Matches the signal log to get the domain ID. */ 252 + EXPECT_EQ(0, matches_log_signal(_metadata, self->audit_fd, 253 + child_data.parent_pid, &denial_dom)); 254 + EXPECT_NE(denial_dom, 1); 255 + EXPECT_NE(denial_dom, 0); 256 + 257 + EXPECT_EQ(0, matches_log_domain_allocated(self->audit_fd, getpid(), 258 + &allocated_dom)); 259 + EXPECT_EQ(denial_dom, allocated_dom); 260 + 261 + /* Updates filter rules to match the drop record. */ 262 + set_cap(_metadata, CAP_AUDIT_CONTROL); 263 + EXPECT_EQ(0, audit_filter_drop(self->audit_fd, AUDIT_ADD_RULE)); 264 + EXPECT_EQ(0, audit_filter_exe(self->audit_fd, &self->audit_filter, 265 + AUDIT_DEL_RULE)); 266 + clear_cap(_metadata, CAP_AUDIT_CONTROL); 267 + 268 + /* Signals the thread to exit, which will generate a domain deallocation. */ 269 + ASSERT_EQ(1, write(pipe_parent[1], ".", 1)); 270 + EXPECT_EQ(0, close(pipe_parent[1])); 271 + ASSERT_EQ(0, pthread_join(thread, NULL)); 272 + 273 + EXPECT_EQ(0, setsockopt(self->audit_fd, SOL_SOCKET, SO_RCVTIMEO, 274 + &audit_tv_dom_drop, sizeof(audit_tv_dom_drop))); 275 + EXPECT_EQ(0, matches_log_domain_deallocated(self->audit_fd, 1, 276 + &deallocated_dom)); 277 + EXPECT_EQ(denial_dom, deallocated_dom); 278 + EXPECT_EQ(0, setsockopt(self->audit_fd, SOL_SOCKET, SO_RCVTIMEO, 279 + &audit_tv_default, sizeof(audit_tv_default))); 280 } 281 282 FIXTURE(audit_flags) ··· 273 274 /* Checks domain information records. */ 275 EXPECT_EQ(0, matches_log_domain_allocated( 276 + self->audit_fd, getpid(), 277 + &allocated_dom)); 278 EXPECT_NE(*self->domain_id, 1); 279 EXPECT_NE(*self->domain_id, 0); 280 EXPECT_EQ(*self->domain_id, allocated_dom);
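
The new "thread" test above hinges on the TGID/TID distinction: in a second thread gettid() differs from getpid(), and the audit record is expected to carry the latter. A minimal sketch outside the selftest harness (glibc 2.30+ for gettid(), build with -pthread):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *worker(void *arg)
    {
            /* Same TGID as the main thread, but a different TID. */
            printf("worker: tgid=%d tid=%d\n", getpid(), gettid());
            return NULL;
    }

    int main(void)
    {
            pthread_t thread;

            printf("main:   tgid=%d tid=%d\n", getpid(), gettid());
            if (pthread_create(&thread, NULL, worker, NULL))
                    return 1;
            return pthread_join(thread, NULL);
    }
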
+2 -1
tools/testing/selftests/landlock/fs_test.c
··· 5964 EXPECT_EQ(EXDEV, errno); 5965 EXPECT_EQ(0, matches_log_fs(_metadata, self->audit_fd, "fs\\.refer", 5966 dir_s1d1)); 5967 - EXPECT_EQ(0, matches_log_domain_allocated(self->audit_fd, NULL)); 5968 EXPECT_EQ(0, matches_log_fs(_metadata, self->audit_fd, "fs\\.refer", 5969 dir_s1d3)); 5970
··· 5964 EXPECT_EQ(EXDEV, errno); 5965 EXPECT_EQ(0, matches_log_fs(_metadata, self->audit_fd, "fs\\.refer", 5966 dir_s1d1)); 5967 + EXPECT_EQ(0, 5968 + matches_log_domain_allocated(self->audit_fd, getpid(), NULL)); 5969 EXPECT_EQ(0, matches_log_fs(_metadata, self->audit_fd, "fs\\.refer", 5970 dir_s1d3)); 5971
+2 -3
tools/testing/selftests/net/mptcp/diag.sh
··· 206 local token 207 local msg 208 209 - ss_token="$(ss -inmHMN $ns | grep 'token:' |\ 210 - head -n 1 |\ 211 - sed 's/.*token:\([0-9a-f]*\).*/\1/')" 212 213 token="$(ip netns exec $ns ./mptcp_diag -t $ss_token |\ 214 awk -F':[ \t]+' '/^token/ {print $2}')"
··· 206 local token 207 local msg 208 209 + ss_token="$(ss -inmHMN $ns | 210 + mptcp_lib_get_info_value "token" "token")" 211 212 token="$(ip netns exec $ns ./mptcp_diag -t $ss_token |\ 213 awk -F':[ \t]+' '/^token/ {print $2}')"
+2 -1
tools/testing/selftests/pcie_bwctrl/Makefile
··· 1 - TEST_PROGS = set_pcie_cooling_state.sh set_pcie_speed.sh 2 include ../lib.mk
··· 1 + TEST_PROGS = set_pcie_cooling_state.sh 2 + TEST_FILES = set_pcie_speed.sh 3 include ../lib.mk
+39
tools/testing/selftests/tc-testing/tc-tests/infra/qdiscs.json
··· 313 "$TC qdisc del dev $DUMMY handle 1: root", 314 "$IP addr del 10.10.10.10/24 dev $DUMMY || true" 315 ] 316 } 317 ]
··· 313 "$TC qdisc del dev $DUMMY handle 1: root", 314 "$IP addr del 10.10.10.10/24 dev $DUMMY || true" 315 ] 316 + }, 317 + { 318 + "id": "a4c3", 319 + "name": "Test HFSC with netem/blackhole - queue emptying during peek operation", 320 + "category": [ 321 + "qdisc", 322 + "hfsc", 323 + "netem", 324 + "blackhole" 325 + ], 326 + "plugins": { 327 + "requires": "nsPlugin" 328 + }, 329 + "setup": [ 330 + "$IP link set dev $DUMMY up || true", 331 + "$IP addr add 10.10.10.10/24 dev $DUMMY || true", 332 + "$TC qdisc add dev $DUMMY handle 1:0 root drr", 333 + "$TC class add dev $DUMMY parent 1:0 classid 1:1 drr", 334 + "$TC class add dev $DUMMY parent 1:0 classid 1:2 drr", 335 + "$TC qdisc add dev $DUMMY parent 1:1 handle 2:0 plug limit 1024", 336 + "$TC qdisc add dev $DUMMY parent 1:2 handle 3:0 hfsc default 1", 337 + "$TC class add dev $DUMMY parent 3:0 classid 3:1 hfsc rt m1 5Mbit d 10ms m2 10Mbit", 338 + "$TC qdisc add dev $DUMMY parent 3:1 handle 4:0 netem delay 1ms", 339 + "$TC qdisc add dev $DUMMY parent 4:1 handle 5:0 blackhole", 340 + "ping -c 3 -W 0.01 -i 0.001 -s 1 10.10.10.10 -I $DUMMY > /dev/null 2>&1 || true", 341 + "$TC class change dev $DUMMY parent 3:0 classid 3:1 hfsc sc m1 5Mbit d 10ms m2 10Mbit", 342 + "$TC class del dev $DUMMY parent 3:0 classid 3:1", 343 + "$TC class add dev $DUMMY parent 3:0 classid 3:1 hfsc rt m1 5Mbit d 10ms m2 10Mbit", 344 + "ping -c 3 -W 0.01 -i 0.001 -s 1 10.10.10.10 -I $DUMMY > /dev/null 2>&1 || true" 345 + ], 346 + "cmdUnderTest": "$TC class change dev $DUMMY parent 3:0 classid 3:1 hfsc sc m1 5Mbit d 10ms m2 10Mbit", 347 + "expExitCode": "0", 348 + "verifyCmd": "$TC -s qdisc show dev $DUMMY", 349 + "matchPattern": "qdisc hfsc 3:.*parent 1:2.*default 1", 350 + "matchCount": "1", 351 + "teardown": [ 352 + "$TC qdisc del dev $DUMMY handle 1:0 root", 353 + "$IP addr del 10.10.10.10/24 dev $DUMMY || true" 354 + ] 355 } 356 ]
+1
tools/testing/selftests/ublk/kublk.c
··· 1354 value = strtol(optarg, NULL, 10); 1355 if (value) 1356 ctx.flags |= UBLK_F_NEED_GET_DATA; 1357 case 0: 1358 if (!strcmp(longopts[option_idx].name, "debug_mask")) 1359 ublk_dbg_mask = strtol(optarg, NULL, 16);
··· 1354 value = strtol(optarg, NULL, 10); 1355 if (value) 1356 ctx.flags |= UBLK_F_NEED_GET_DATA; 1357 + break; 1358 case 0: 1359 if (!strcmp(longopts[option_idx].name, "debug_mask")) 1360 ublk_dbg_mask = strtol(optarg, NULL, 16);
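
The kublk.c fix is a missing break: without it, handling the UBLK_F_NEED_GET_DATA option fell through into the case 0: long-option branch. A small sketch of that hazard, not the kublk parser itself (the 'g' option letter is made up for the example):

    #include <stdio.h>

    static void handle(int opt)
    {
            switch (opt) {
            case 'g':
                    printf("short option handled\n");
                    break;  /* the fix: without this, control falls into case 0 */
            case 0:
                    printf("long-option handling runs as well\n");
                    break;
            }
    }

    int main(void)
    {
            handle('g');    /* with the break, only the first message prints */
            return 0;
    }
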
-3
tools/testing/selftests/ublk/kublk.h
··· 86 unsigned int fg:1; 87 unsigned int recovery:1; 88 89 - /* fault_inject */ 90 - long long delay_us; 91 - 92 int _evtfd; 93 int _shmid; 94
··· 86 unsigned int fg:1; 87 unsigned int recovery:1; 88 89 int _evtfd; 90 int _shmid; 91
+2 -2
tools/testing/selftests/ublk/test_common.sh
··· 17 local minor 18 19 dev=/dev/ublkb"${dev_id}" 20 - major=$(stat -c '%Hr' "$dev") 21 - minor=$(stat -c '%Lr' "$dev") 22 23 echo $(( (major & 0xfff) << 20 | (minor & 0xfffff) )) 24 }
··· 17 local minor 18 19 dev=/dev/ublkb"${dev_id}" 20 + major="0x"$(stat -c '%t' "$dev") 21 + minor="0x"$(stat -c '%T' "$dev") 22 23 echo $(( (major & 0xfff) << 20 | (minor & 0xfffff) )) 24 }
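
test_common.sh now reads the raw hex major/minor with stat -c '%t' / '%T' (printed without a 0x prefix, hence the manual "0x") before rebuilding the 12:20 major:minor value. A hedged C cross-check of the same arithmetic; the /dev/ublkb0 path is only an example and needs an existing ublk device:

    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/sysmacros.h>

    int main(void)
    {
            struct stat st;

            if (stat("/dev/ublkb0", &st))   /* example device node */
                    return 1;

            /* Same encoding as the shell: (major & 0xfff) << 20 | (minor & 0xfffff) */
            printf("%u\n", (major(st.st_rdev) & 0xfff) << 20 |
                           (minor(st.st_rdev) & 0xfffff));
            return 0;
    }
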
+1 -1
tools/testing/selftests/ublk/test_generic_05.sh
··· 3 4 . "$(cd "$(dirname "$0")" && pwd)"/test_common.sh 5 6 - TID="generic_04" 7 ERR_CODE=0 8 9 ublk_run_recover_test()
··· 3 4 . "$(cd "$(dirname "$0")" && pwd)"/test_common.sh 5 6 + TID="generic_05" 7 ERR_CODE=0 8 9 ublk_run_recover_test()