Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.15-rc5).

No conflicts or adjacent changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+4721 -2360
+8 -21
Documentation/admin-guide/xfs.rst
··· 562 562 Zoned Filesystems 563 563 ================= 564 564 565 - For zoned file systems, the following attribute is exposed in: 565 + For zoned file systems, the following attributes are exposed in: 566 566 567 567 /sys/fs/xfs/<dev>/zoned/ 568 568 ··· 572 572 is limited by the capabilities of the backing zoned device, file system 573 573 size and the max_open_zones mount option. 574 574 575 - Zoned Filesystems 576 - ================= 577 - 578 - For zoned file systems, the following attributes are exposed in: 579 - 580 - /sys/fs/xfs/<dev>/zoned/ 581 - 582 - max_open_zones (Min: 1 Default: Varies Max: UINTMAX) 583 - This read-only attribute exposes the maximum number of open zones 584 - available for data placement. The value is determined at mount time and 585 - is limited by the capabilities of the backing zoned device, file system 586 - size and the max_open_zones mount option. 587 - 588 - zonegc_low_space (Min: 0 Default: 0 Max: 100) 589 - Define a percentage for how much of the unused space that GC should keep 590 - available for writing. A high value will reclaim more of the space 591 - occupied by unused blocks, creating a larger buffer against write 592 - bursts at the cost of increased write amplification. Regardless 593 - of this value, garbage collection will always aim to free a minimum 594 - amount of blocks to keep max_open_zones open for data placement purposes. 575 + zonegc_low_space (Min: 0 Default: 0 Max: 100) 576 + Define a percentage for how much of the unused space that GC should keep 577 + available for writing. A high value will reclaim more of the space 578 + occupied by unused blocks, creating a larger buffer against write 579 + bursts at the cost of increased write amplification. Regardless 580 + of this value, garbage collection will always aim to free a minimum 581 + amount of blocks to keep max_open_zones open for data placement purposes.
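The zonegc_low_space semantics documented above reduce to simple percentage arithmetic; this is a minimal stand-alone sketch, where `zonegc_reserved_blocks` is a hypothetical helper name, not part of the patch:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical helper mirroring the documented semantics of
 * zonegc_low_space: GC keeps this percentage of the currently
 * unused space available for writing.
 */
static uint64_t zonegc_reserved_blocks(uint64_t unused_blocks,
				       unsigned int low_space_pct)
{
	/* Attribute range is 0..100 per the documentation above. */
	if (low_space_pct > 100)
		low_space_pct = 100;
	return unused_blocks * low_space_pct / 100;
}
```

A higher percentage reserves more of the unused space, trading write amplification for a larger buffer against write bursts, as the attribute description states.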
+6 -6
Documentation/arch/openrisc/openrisc_port.rst
··· 7 7 8 8 For information about OpenRISC processors and ongoing development: 9 9 10 - ======= ============================= 10 + ======= ============================== 11 11 website https://openrisc.io 12 - email openrisc@lists.librecores.org 13 - ======= ============================= 12 + email linux-openrisc@vger.kernel.org 13 + ======= ============================== 14 14 15 15 --------------------------------------------------------------------- 16 16 ··· 27 27 Instructions for building the different toolchains can be found on openrisc.io 28 28 or Stafford's toolchain build and release scripts. 29 29 30 - ========== ================================================= 31 - binaries https://github.com/openrisc/or1k-gcc/releases 30 + ========== ========================================================== 31 + binaries https://github.com/stffrdhrn/or1k-toolchain-build/releases 32 32 toolchains https://openrisc.io/software 33 33 building https://github.com/stffrdhrn/or1k-toolchain-build 34 - ========== ================================================= 34 + ========== ========================================================== 35 35 36 36 2) Building 37 37
+8
Documentation/bpf/bpf_devel_QA.rst
··· 382 382 into the Linux kernel, please implement support into LLVM's BPF back 383 383 end. See LLVM_ section below for further information. 384 384 385 + Q: What "BPF_INTERNAL" symbol namespace is for? 386 + ----------------------------------------------- 387 + A: Symbols exported as BPF_INTERNAL can only be used by BPF infrastructure 388 + like preload kernel modules with light skeleton. Most symbols outside 389 + of BPF_INTERNAL are not expected to be used by code outside of BPF either. 390 + Symbols may lack the designation because they predate the namespaces, 391 + or due to an oversight. 392 + 385 393 Stable submission 386 394 ================= 387 395
+1 -1
Documentation/devicetree/bindings/nvmem/layouts/fixed-cell.yaml
··· 27 27 $ref: /schemas/types.yaml#/definitions/uint32-array 28 28 items: 29 29 - minimum: 0 30 - maximum: 7 30 + maximum: 31 31 31 description: 32 32 Offset in bit within the address range specified by reg. 33 33 - minimum: 1
+4
Documentation/devicetree/bindings/nvmem/qcom,qfprom.yaml
··· 19 19 - enum: 20 20 - qcom,apq8064-qfprom 21 21 - qcom,apq8084-qfprom 22 + - qcom,ipq5018-qfprom 22 23 - qcom,ipq5332-qfprom 23 24 - qcom,ipq5424-qfprom 24 25 - qcom,ipq6018-qfprom ··· 29 28 - qcom,msm8226-qfprom 30 29 - qcom,msm8916-qfprom 31 30 - qcom,msm8917-qfprom 31 + - qcom,msm8937-qfprom 32 + - qcom,msm8960-qfprom 32 33 - qcom,msm8974-qfprom 33 34 - qcom,msm8976-qfprom 34 35 - qcom,msm8996-qfprom ··· 54 51 - qcom,sm8450-qfprom 55 52 - qcom,sm8550-qfprom 56 53 - qcom,sm8650-qfprom 54 + - qcom,x1e80100-qfprom 57 55 - const: qcom,qfprom 58 56 59 57 reg:
+25
Documentation/devicetree/bindings/nvmem/rockchip,otp.yaml
··· 14 14 enum: 15 15 - rockchip,px30-otp 16 16 - rockchip,rk3308-otp 17 + - rockchip,rk3576-otp 17 18 - rockchip,rk3588-otp 18 19 19 20 reg: ··· 63 62 properties: 64 63 clocks: 65 64 maxItems: 3 65 + clock-names: 66 + maxItems: 3 66 67 resets: 67 68 maxItems: 1 68 69 reset-names: ··· 76 73 compatible: 77 74 contains: 78 75 enum: 76 + - rockchip,rk3576-otp 77 + then: 78 + properties: 79 + clocks: 80 + maxItems: 3 81 + clock-names: 82 + maxItems: 3 83 + resets: 84 + minItems: 2 85 + maxItems: 2 86 + reset-names: 87 + items: 88 + - const: otp 89 + - const: apb 90 + 91 + - if: 92 + properties: 93 + compatible: 94 + contains: 95 + enum: 79 96 - rockchip,rk3588-otp 80 97 then: 81 98 properties: 82 99 clocks: 100 + minItems: 4 101 + clock-names: 83 102 minItems: 4 84 103 resets: 85 104 minItems: 3
+3 -1
Documentation/netlink/specs/ethtool.yaml
··· 89 89 doc: Group of short_detected states 90 90 - 91 91 name: phy-upstream-type 92 - enum-name: 92 + enum-name: phy-upstream 93 + header: linux/ethtool.h 93 94 type: enum 95 + name-prefix: phy-upstream 94 96 entries: [ mac, phy ] 95 97 - 96 98 name: tcp-data-split
+6 -6
Documentation/translations/zh_CN/arch/openrisc/openrisc_port.rst
··· 17 17 18 18 关于OpenRISC处理器和正在进行中的开发的信息: 19 19 20 - ======= ============================= 20 + ======= ============================== 21 21 网站 https://openrisc.io 22 - 邮箱 openrisc@lists.librecores.org 23 - ======= ============================= 22 + 邮箱 linux-openrisc@vger.kernel.org 23 + ======= ============================== 24 24 25 25 --------------------------------------------------------------------- 26 26 ··· 36 36 工具链的构建指南可以在openrisc.io或Stafford的工具链构建和发布脚本 37 37 中找到。 38 38 39 - ====== ================================================= 40 - 二进制 https://github.com/openrisc/or1k-gcc/releases 39 + ====== ========================================================== 40 + 二进制 https://github.com/stffrdhrn/or1k-toolchain-build/releases 41 41 工具链 https://openrisc.io/software 42 42 构建 https://github.com/stffrdhrn/or1k-toolchain-build 43 - ====== ================================================= 43 + ====== ========================================================== 44 44 45 45 2) 构建 46 46
+6 -6
Documentation/translations/zh_TW/arch/openrisc/openrisc_port.rst
··· 17 17 18 18 關於OpenRISC處理器和正在進行中的開發的信息: 19 19 20 - ======= ============================= 20 + ======= ============================== 21 21 網站 https://openrisc.io 22 - 郵箱 openrisc@lists.librecores.org 23 - ======= ============================= 22 + 郵箱 linux-openrisc@vger.kernel.org 23 + ======= ============================== 24 24 25 25 --------------------------------------------------------------------- 26 26 ··· 36 36 工具鏈的構建指南可以在openrisc.io或Stafford的工具鏈構建和發佈腳本 37 37 中找到。 38 38 39 - ====== ================================================= 40 - 二進制 https://github.com/openrisc/or1k-gcc/releases 39 + ====== ========================================================== 40 + 二進制 https://github.com/stffrdhrn/or1k-toolchain-build/releases 41 41 工具鏈 https://openrisc.io/software 42 42 構建 https://github.com/stffrdhrn/or1k-toolchain-build 43 - ====== ================================================= 43 + ====== ========================================================== 44 44 45 45 2) 構建 46 46
+29 -8
MAINTAINERS
··· 3873 3873 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 3874 3874 R: Dave Ertman <david.m.ertman@intel.com> 3875 3875 R: Ira Weiny <ira.weiny@intel.com> 3876 + R: Leon Romanovsky <leon@kernel.org> 3876 3877 S: Supported 3877 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git 3878 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/driver-core/driver-core.git 3878 3879 F: Documentation/driver-api/auxiliary_bus.rst 3879 3880 F: drivers/base/auxiliary.c 3880 3881 F: include/linux/auxiliary_bus.h ··· 7225 7224 M: "Rafael J. Wysocki" <rafael@kernel.org> 7226 7225 M: Danilo Krummrich <dakr@kernel.org> 7227 7226 S: Supported 7228 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git 7227 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/driver-core/driver-core.git 7229 7228 F: Documentation/core-api/kobject.rst 7230 7229 F: drivers/base/ 7231 7230 F: fs/debugfs/ ··· 10455 10454 F: drivers/infiniband/hw/hfi1 10456 10455 10457 10456 HFS FILESYSTEM 10457 + M: Viacheslav Dubeyko <slava@dubeyko.com> 10458 + M: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> 10459 + M: Yangtao Li <frank.li@vivo.com> 10458 10460 L: linux-fsdevel@vger.kernel.org 10459 - S: Orphan 10461 + S: Maintained 10460 10462 F: Documentation/filesystems/hfs.rst 10461 10463 F: fs/hfs/ 10462 10464 10463 10465 HFSPLUS FILESYSTEM 10466 + M: Viacheslav Dubeyko <slava@dubeyko.com> 10467 + M: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> 10468 + M: Yangtao Li <frank.li@vivo.com> 10464 10469 L: linux-fsdevel@vger.kernel.org 10465 - S: Orphan 10470 + S: Maintained 10466 10471 F: Documentation/filesystems/hfsplus.rst 10467 10472 F: fs/hfsplus/ 10468 10473 ··· 13116 13109 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 13117 13110 M: Tejun Heo <tj@kernel.org> 13118 13111 S: Supported 13119 13112 T: git git://git.kernel.org/pub/scm/linux/kernel/git/driver-core/driver-core.git 13120 13113 F: fs/kernfs/ 13121 13114 F: include/linux/kernfs.h 13122 13115 ··· 18706 18699 PCI NATIVE HOST BRIDGE AND ENDPOINT DRIVERS 18707 18700 M: Lorenzo Pieralisi <lpieralisi@kernel.org> 18708 18701 M: Krzysztof Wilczyński <kw@linux.com> 18709 - R: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 18702 + M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 18710 18703 R: Rob Herring <robh@kernel.org> 18711 18704 L: linux-pci@vger.kernel.org 18712 18705 S: Supported ··· 18759 18752 F: include/linux/of_pci.h 18760 18753 F: include/linux/pci* 18761 18754 F: include/uapi/linux/pci* 18755 + 18756 + PCI SUBSYSTEM [RUST] 18757 + M: Danilo Krummrich <dakr@kernel.org> 18758 + R: Bjorn Helgaas <bhelgaas@google.com> 18759 + R: Krzysztof Wilczyński <kwilczynski@kernel.org> 18760 + L: linux-pci@vger.kernel.org 18761 + S: Maintained 18762 + C: irc://irc.oftc.net/linux-pci 18763 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git 18764 + F: rust/helpers/pci.c 18762 18765 F: rust/kernel/pci.rs 18763 18766 F: samples/rust/rust_driver_pci.rs 18764 18767 ··· 25220 25203 F: drivers/usb/typec/mux/pi3usb30532.c 25221 25204 25222 25205 USB TYPEC PORT CONTROLLER DRIVERS 25206 + M: Badhri Jagan Sridharan <badhri@google.com> 25223 25207 L: linux-usb@vger.kernel.org 25224 - S: Orphan 25225 - F: drivers/usb/typec/tcpm/ 25208 + S: Maintained 25209 + F: drivers/usb/typec/tcpm/tcpci.c 25210 + F: drivers/usb/typec/tcpm/tcpm.c 25211 + F: include/linux/usb/tcpci.h 25212 + F: include/linux/usb/tcpm.h 25226 25213 25227 25214 USB TYPEC TUSB1046 MUX DRIVER 25228 25215 M: Romain Gantois <romain.gantois@bootlin.com>
+1 -8
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 15 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc3 5 + EXTRAVERSION = -rc4 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION* ··· 1051 1051 # arrays. Enforce this for everything that may examine structure sizes and 1052 1052 # perform bounds checking. 1053 1053 KBUILD_CFLAGS += $(call cc-option, -fstrict-flex-arrays=3) 1054 - 1055 - #Currently, disable -Wstringop-overflow for GCC 11, globally. 1056 - KBUILD_CFLAGS-$(CONFIG_CC_NO_STRINGOP_OVERFLOW) += $(call cc-disable-warning, stringop-overflow) 1057 - KBUILD_CFLAGS-$(CONFIG_CC_STRINGOP_OVERFLOW) += $(call cc-option, -Wstringop-overflow) 1058 - 1059 - #Currently, disable -Wunterminated-string-initialization as broken 1060 - KBUILD_CFLAGS += $(call cc-disable-warning, unterminated-string-initialization) 1061 1054 1062 1055 # disable invalid "can't wrap" optimizations for signed / pointers 1063 1056 KBUILD_CFLAGS += -fno-strict-overflow
+5
arch/arm64/include/asm/kvm_host.h
··· 1588 1588 #define kvm_has_s1poe(k) \ 1589 1589 (kvm_has_feat((k), ID_AA64MMFR3_EL1, S1POE, IMP)) 1590 1590 1591 + static inline bool kvm_arch_has_irq_bypass(void) 1592 + { 1593 + return true; 1594 + } 1595 + 1591 1596 #endif /* __ARM64_KVM_HOST_H__ */
-11
arch/arm64/include/asm/mmu.h
··· 94 94 return false; 95 95 } 96 96 97 - /* 98 - * Systems affected by Cavium erratum 24756 are incompatible 99 - * with KPTI. 100 - */ 101 - if (IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) { 102 - extern const struct midr_range cavium_erratum_27456_cpus[]; 103 - 104 - if (is_midr_in_range_list(cavium_erratum_27456_cpus)) 105 - return false; 106 - } 107 - 108 97 return true; 109 98 } 110 99
+1 -1
arch/arm64/kernel/cpu_errata.c
··· 335 335 #endif 336 336 337 337 #ifdef CONFIG_CAVIUM_ERRATUM_27456 338 - const struct midr_range cavium_erratum_27456_cpus[] = { 338 + static const struct midr_range cavium_erratum_27456_cpus[] = { 339 339 /* Cavium ThunderX, T88 pass 1.x - 2.1 */ 340 340 MIDR_RANGE(MIDR_THUNDERX, 0, 0, 1, 1), 341 341 /* Cavium ThunderX, T81 pass 1.0 */
-4
arch/arm64/kernel/image-vars.h
··· 47 47 PROVIDE(__pi_id_aa64zfr0_override = id_aa64zfr0_override); 48 48 PROVIDE(__pi_arm64_sw_feature_override = arm64_sw_feature_override); 49 49 PROVIDE(__pi_arm64_use_ng_mappings = arm64_use_ng_mappings); 50 - #ifdef CONFIG_CAVIUM_ERRATUM_27456 51 - PROVIDE(__pi_cavium_erratum_27456_cpus = cavium_erratum_27456_cpus); 52 - PROVIDE(__pi_is_midr_in_range_list = is_midr_in_range_list); 53 - #endif 54 50 PROVIDE(__pi__ctype = _ctype); 55 51 PROVIDE(__pi_memstart_offset_seed = memstart_offset_seed); 56 52
+24 -1
arch/arm64/kernel/pi/map_kernel.c
··· 207 207 dsb(ishst); 208 208 } 209 209 210 + /* 211 + * PI version of the Cavium Eratum 27456 detection, which makes it 212 + * impossible to use non-global mappings. 213 + */ 214 + static bool __init ng_mappings_allowed(void) 215 + { 216 + static const struct midr_range cavium_erratum_27456_cpus[] __initconst = { 217 + /* Cavium ThunderX, T88 pass 1.x - 2.1 */ 218 + MIDR_RANGE(MIDR_THUNDERX, 0, 0, 1, 1), 219 + /* Cavium ThunderX, T81 pass 1.0 */ 220 + MIDR_REV(MIDR_THUNDERX_81XX, 0, 0), 221 + {}, 222 + }; 223 + 224 + for (const struct midr_range *r = cavium_erratum_27456_cpus; r->model; r++) { 225 + if (midr_is_cpu_model_range(read_cpuid_id(), r->model, 226 + r->rv_min, r->rv_max)) 227 + return false; 228 + } 229 + 230 + return true; 231 + } 232 + 210 233 asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt) 211 234 { 212 235 static char const chosen_str[] __initconst = "/chosen"; ··· 269 246 u64 kaslr_seed = kaslr_early_init(fdt, chosen); 270 247 271 248 if (kaslr_seed && kaslr_requires_kpti()) 272 - arm64_use_ng_mappings = true; 249 + arm64_use_ng_mappings = ng_mappings_allowed(); 273 250 274 251 kaslr_offset |= kaslr_seed & ~(MIN_KIMG_ALIGN - 1); 275 252 }
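The sentinel-terminated table walk added in ng_mappings_allowed() above can be sketched stand-alone; the struct layout and MIDR values below are simplified stand-ins for illustration, not the kernel's real midr_range definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the kernel's struct midr_range. */
struct midr_range {
	uint32_t model;   /* 0 terminates the table, as in the patch */
	uint32_t rv_min;
	uint32_t rv_max;
};

/* Sample table in the shape of cavium_erratum_27456_cpus (values made up). */
static const struct midr_range sample_table[] = {
	{ 0x88, 0, 1 },  /* model 0x88, revisions 0..1 */
	{ 0x81, 0, 0 },  /* model 0x81, revision 0 only */
	{ 0 },
};

/*
 * Mirrors the walk in ng_mappings_allowed(): scan the
 * sentinel-terminated table and refuse non-global mappings on a match.
 */
static int ng_mappings_allowed(uint32_t model, uint32_t rev,
			       const struct midr_range *table)
{
	for (const struct midr_range *r = table; r->model; r++)
		if (model == r->model && rev >= r->rv_min && rev <= r->rv_max)
			return 0;
	return 1;
}
```

Duplicating the table in the PI (position-independent) early boot code is what lets the image-vars.h PROVIDE entries and the non-static cpu_errata.c array be dropped in the hunks above.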
-5
arch/arm64/kvm/arm.c
··· 2743 2743 return irqchip_in_kernel(kvm); 2744 2744 } 2745 2745 2746 - bool kvm_arch_has_irq_bypass(void) 2747 - { 2748 - return true; 2749 - } 2750 - 2751 2746 int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons, 2752 2747 struct irq_bypass_producer *prod) 2753 2748 {
+1
arch/loongarch/Kconfig
··· 73 73 select ARCH_SUPPORTS_RT 74 74 select ARCH_USE_BUILTIN_BSWAP 75 75 select ARCH_USE_CMPXCHG_LOCKREF 76 + select ARCH_USE_MEMTEST 76 77 select ARCH_USE_QUEUED_RWLOCKS 77 78 select ARCH_USE_QUEUED_SPINLOCKS 78 79 select ARCH_WANT_DEFAULT_BPF_JIT
+20 -13
arch/loongarch/include/asm/fpu.h
··· 22 22 struct sigcontext; 23 23 24 24 #define kernel_fpu_available() cpu_has_fpu 25 - extern void kernel_fpu_begin(void); 26 - extern void kernel_fpu_end(void); 27 25 28 - extern void _init_fpu(unsigned int); 29 - extern void _save_fp(struct loongarch_fpu *); 30 - extern void _restore_fp(struct loongarch_fpu *); 26 + void kernel_fpu_begin(void); 27 + void kernel_fpu_end(void); 31 28 32 - extern void _save_lsx(struct loongarch_fpu *fpu); 33 - extern void _restore_lsx(struct loongarch_fpu *fpu); 34 - extern void _init_lsx_upper(void); 35 - extern void _restore_lsx_upper(struct loongarch_fpu *fpu); 29 + asmlinkage void _init_fpu(unsigned int); 30 + asmlinkage void _save_fp(struct loongarch_fpu *); 31 + asmlinkage void _restore_fp(struct loongarch_fpu *); 32 + asmlinkage int _save_fp_context(void __user *fpregs, void __user *fcc, void __user *csr); 33 + asmlinkage int _restore_fp_context(void __user *fpregs, void __user *fcc, void __user *csr); 36 34 37 - extern void _save_lasx(struct loongarch_fpu *fpu); 38 - extern void _restore_lasx(struct loongarch_fpu *fpu); 39 - extern void _init_lasx_upper(void); 40 - extern void _restore_lasx_upper(struct loongarch_fpu *fpu); 35 + asmlinkage void _save_lsx(struct loongarch_fpu *fpu); 36 + asmlinkage void _restore_lsx(struct loongarch_fpu *fpu); 37 + asmlinkage void _init_lsx_upper(void); 38 + asmlinkage void _restore_lsx_upper(struct loongarch_fpu *fpu); 39 + asmlinkage int _save_lsx_context(void __user *fpregs, void __user *fcc, void __user *fcsr); 40 + asmlinkage int _restore_lsx_context(void __user *fpregs, void __user *fcc, void __user *fcsr); 41 + 42 + asmlinkage void _save_lasx(struct loongarch_fpu *fpu); 43 + asmlinkage void _restore_lasx(struct loongarch_fpu *fpu); 44 + asmlinkage void _init_lasx_upper(void); 45 + asmlinkage void _restore_lasx_upper(struct loongarch_fpu *fpu); 46 + asmlinkage int _save_lasx_context(void __user *fpregs, void __user *fcc, void __user *fcsr); 47 + asmlinkage int _restore_lasx_context(void __user *fpregs, void __user *fcc, void __user *fcsr); 41 48 42 49 static inline void enable_lsx(void); 43 50 static inline void disable_lsx(void);
+7 -3
arch/loongarch/include/asm/lbt.h
··· 12 12 #include <asm/loongarch.h> 13 13 #include <asm/processor.h> 14 14 15 - extern void _init_lbt(void); 16 - extern void _save_lbt(struct loongarch_lbt *); 17 - extern void _restore_lbt(struct loongarch_lbt *); 15 + asmlinkage void _init_lbt(void); 16 + asmlinkage void _save_lbt(struct loongarch_lbt *); 17 + asmlinkage void _restore_lbt(struct loongarch_lbt *); 18 + asmlinkage int _save_lbt_context(void __user *regs, void __user *eflags); 19 + asmlinkage int _restore_lbt_context(void __user *regs, void __user *eflags); 20 + asmlinkage int _save_ftop_context(void __user *ftop); 21 + asmlinkage int _restore_ftop_context(void __user *ftop); 18 22 19 23 static inline int is_lbt_enabled(void) 20 24 {
+2 -2
arch/loongarch/include/asm/ptrace.h
··· 33 33 unsigned long __last[]; 34 34 } __aligned(8); 35 35 36 - static inline int regs_irqs_disabled(struct pt_regs *regs) 36 + static __always_inline bool regs_irqs_disabled(struct pt_regs *regs) 37 37 { 38 - return arch_irqs_disabled_flags(regs->csr_prmd); 38 + return !(regs->csr_prmd & CSR_PRMD_PIE); 39 39 } 40 40 41 41 static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
+6
arch/loongarch/kernel/fpu.S
··· 458 458 li.w a0, 0 # success 459 459 jr ra 460 460 SYM_FUNC_END(_save_fp_context) 461 + EXPORT_SYMBOL_GPL(_save_fp_context) 461 462 462 463 /* 463 464 * a0: fpregs ··· 472 471 li.w a0, 0 # success 473 472 jr ra 474 473 SYM_FUNC_END(_restore_fp_context) 474 + EXPORT_SYMBOL_GPL(_restore_fp_context) 475 475 476 476 /* 477 477 * a0: fpregs ··· 486 484 li.w a0, 0 # success 487 485 jr ra 488 486 SYM_FUNC_END(_save_lsx_context) 487 + EXPORT_SYMBOL_GPL(_save_lsx_context) 489 488 490 489 /* 491 490 * a0: fpregs ··· 500 497 li.w a0, 0 # success 501 498 jr ra 502 499 SYM_FUNC_END(_restore_lsx_context) 500 + EXPORT_SYMBOL_GPL(_restore_lsx_context) 503 501 504 502 /* 505 503 * a0: fpregs ··· 514 510 li.w a0, 0 # success 515 511 jr ra 516 512 SYM_FUNC_END(_save_lasx_context) 513 + EXPORT_SYMBOL_GPL(_save_lasx_context) 517 514 518 515 /* 519 516 * a0: fpregs ··· 528 523 li.w a0, 0 # success 529 524 jr ra 530 525 SYM_FUNC_END(_restore_lasx_context) 526 + EXPORT_SYMBOL_GPL(_restore_lasx_context) 531 527 532 528 .L_fpu_fault: 533 529 li.w a0, -EFAULT # failure
+4
arch/loongarch/kernel/lbt.S
··· 90 90 li.w a0, 0 # success 91 91 jr ra 92 92 SYM_FUNC_END(_save_lbt_context) 93 + EXPORT_SYMBOL_GPL(_save_lbt_context) 93 94 94 95 /* 95 96 * a0: scr ··· 111 110 li.w a0, 0 # success 112 111 jr ra 113 112 SYM_FUNC_END(_restore_lbt_context) 113 + EXPORT_SYMBOL_GPL(_restore_lbt_context) 114 114 115 115 /* 116 116 * a0: ftop ··· 122 120 li.w a0, 0 # success 123 121 jr ra 124 122 SYM_FUNC_END(_save_ftop_context) 123 + EXPORT_SYMBOL_GPL(_save_ftop_context) 125 124 126 125 /* 127 126 * a0: ftop ··· 153 150 li.w a0, 0 # success 154 151 jr ra 155 152 SYM_FUNC_END(_restore_ftop_context) 153 + EXPORT_SYMBOL_GPL(_restore_ftop_context) 156 154 157 155 .L_lbt_fault: 158 156 li.w a0, -EFAULT # failure
-21
arch/loongarch/kernel/signal.c
··· 51 51 #define lock_lbt_owner() ({ preempt_disable(); pagefault_disable(); }) 52 52 #define unlock_lbt_owner() ({ pagefault_enable(); preempt_enable(); }) 53 53 54 - /* Assembly functions to move context to/from the FPU */ 55 - extern asmlinkage int 56 - _save_fp_context(void __user *fpregs, void __user *fcc, void __user *csr); 57 - extern asmlinkage int 58 - _restore_fp_context(void __user *fpregs, void __user *fcc, void __user *csr); 59 - extern asmlinkage int 60 - _save_lsx_context(void __user *fpregs, void __user *fcc, void __user *fcsr); 61 - extern asmlinkage int 62 - _restore_lsx_context(void __user *fpregs, void __user *fcc, void __user *fcsr); 63 - extern asmlinkage int 64 - _save_lasx_context(void __user *fpregs, void __user *fcc, void __user *fcsr); 65 - extern asmlinkage int 66 - _restore_lasx_context(void __user *fpregs, void __user *fcc, void __user *fcsr); 67 - 68 - #ifdef CONFIG_CPU_HAS_LBT 69 - extern asmlinkage int _save_lbt_context(void __user *regs, void __user *eflags); 70 - extern asmlinkage int _restore_lbt_context(void __user *regs, void __user *eflags); 71 - extern asmlinkage int _save_ftop_context(void __user *ftop); 72 - extern asmlinkage int _restore_ftop_context(void __user *ftop); 73 - #endif 74 - 75 54 struct rt_sigframe { 76 55 struct siginfo rs_info; 77 56 struct ucontext rs_uctx;
+12 -8
arch/loongarch/kernel/traps.c
··· 553 553 die_if_kernel("Kernel ale access", regs); 554 554 force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)regs->csr_badvaddr); 555 555 #else 556 + bool pie = regs_irqs_disabled(regs); 556 557 unsigned int *pc; 557 558 558 - if (regs->csr_prmd & CSR_PRMD_PIE) 559 + if (!pie) 559 560 local_irq_enable(); 560 561 561 562 perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, regs->csr_badvaddr); ··· 583 582 die_if_kernel("Kernel ale access", regs); 584 583 force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)regs->csr_badvaddr); 585 584 out: 586 - if (regs->csr_prmd & CSR_PRMD_PIE) 585 + if (!pie) 587 586 local_irq_disable(); 588 587 #endif 589 588 irqentry_exit(regs, state); ··· 622 621 asmlinkage void noinstr do_bce(struct pt_regs *regs) 623 622 { 624 623 bool user = user_mode(regs); 625 + bool pie = regs_irqs_disabled(regs); 625 626 unsigned long era = exception_era(regs); 626 627 u64 badv = 0, lower = 0, upper = ULONG_MAX; 627 628 union loongarch_instruction insn; 628 629 irqentry_state_t state = irqentry_enter(regs); 629 630 630 - if (regs->csr_prmd & CSR_PRMD_PIE) 631 + if (!pie) 631 632 local_irq_enable(); 632 633 633 634 current->thread.trap_nr = read_csr_excode(); ··· 694 692 force_sig_bnderr((void __user *)badv, (void __user *)lower, (void __user *)upper); 695 693 696 694 out: 697 - if (regs->csr_prmd & CSR_PRMD_PIE) 695 + if (!pie) 698 696 local_irq_disable(); 699 697 700 698 irqentry_exit(regs, state); ··· 712 710 asmlinkage void noinstr do_bp(struct pt_regs *regs) 713 711 { 714 712 bool user = user_mode(regs); 715 + bool pie = regs_irqs_disabled(regs); 715 714 unsigned int opcode, bcode; 716 715 unsigned long era = exception_era(regs); 717 716 irqentry_state_t state = irqentry_enter(regs); 718 717 719 - if (regs->csr_prmd & CSR_PRMD_PIE) 718 + if (!pie) 720 719 local_irq_enable(); 721 720 722 721 if (__get_inst(&opcode, (u32 *)era, user)) ··· 783 780 } 784 781 785 782 out: 786 - if (regs->csr_prmd & CSR_PRMD_PIE) 783 + if (!pie) 787 784 local_irq_disable(); 788 785 789 786 irqentry_exit(regs, state); ··· 1018 1015 1019 1016 asmlinkage void noinstr do_lbt(struct pt_regs *regs) 1020 1017 { 1018 + bool pie = regs_irqs_disabled(regs); 1021 1019 irqentry_state_t state = irqentry_enter(regs); 1022 1020 1023 1021 /* ··· 1028 1024 * (including the user using 'MOVGR2GCSR' to turn on TM, which 1029 1025 * will not trigger the BTE), we need to check PRMD first. 1030 1026 */ 1031 - if (regs->csr_prmd & CSR_PRMD_PIE) 1027 + if (!pie) 1032 1028 local_irq_enable(); 1033 1029 1034 1030 if (!cpu_has_lbt) { ··· 1042 1038 preempt_enable(); 1043 1039 1044 1040 out: 1045 - if (regs->csr_prmd & CSR_PRMD_PIE) 1041 + if (!pie) 1046 1042 local_irq_disable(); 1047 1043 1048 1044 irqentry_exit(regs, state);
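The traps.c hunks replace repeated `regs->csr_prmd & CSR_PRMD_PIE` tests with a single `pie` flag captured at handler entry and reused symmetrically at exit. A minimal stand-alone sketch of the reworked helper; the PIE bit value here is illustrative only, assumed to stand in for the real definition in loongarch.h:

```c
#include <assert.h>
#include <stdbool.h>

#define CSR_PRMD_PIE 0x4  /* assumed bit position, for illustration only */

struct pt_regs { unsigned long csr_prmd; };

/* Matches the reworked helper: interrupts are disabled when PIE is clear. */
static bool regs_irqs_disabled(const struct pt_regs *regs)
{
	return !(regs->csr_prmd & CSR_PRMD_PIE);
}

/* Sample register states for the two cases. */
static const struct pt_regs regs_pie_set = { CSR_PRMD_PIE };
static const struct pt_regs regs_pie_clear = { 0 };
```

Caching the flag once means the enable at entry and the disable at `out:` always pair up, even on paths where `csr_prmd` might otherwise be consulted twice.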
+2 -2
arch/loongarch/kvm/intc/ipi.c
··· 111 111 ret = kvm_io_bus_read(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val); 112 112 srcu_read_unlock(&vcpu->kvm->srcu, idx); 113 113 if (unlikely(ret)) { 114 - kvm_err("%s: : read date from addr %llx failed\n", __func__, addr); 114 + kvm_err("%s: : read data from addr %llx failed\n", __func__, addr); 115 115 return ret; 116 116 } 117 117 /* Construct the mask by scanning the bit 27-30 */ ··· 127 127 ret = kvm_io_bus_write(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val); 128 128 srcu_read_unlock(&vcpu->kvm->srcu, idx); 129 129 if (unlikely(ret)) 130 - kvm_err("%s: : write date to addr %llx failed\n", __func__, addr); 130 + kvm_err("%s: : write data to addr %llx failed\n", __func__, addr); 131 131 132 132 return ret; 133 133 }
+2 -2
arch/loongarch/kvm/main.c
··· 296 296 /* 297 297 * Enable virtualization features granting guest direct control of 298 298 * certain features: 299 - * GCI=2: Trap on init or unimplement cache instruction. 299 + * GCI=2: Trap on init or unimplemented cache instruction. 300 300 * TORU=0: Trap on Root Unimplement. 301 301 * CACTRL=1: Root control cache. 302 - * TOP=0: Trap on Previlege. 302 + * TOP=0: Trap on Privilege. 303 303 * TOE=0: Trap on Exception. 304 304 * TIT=0: Trap on Timer. 305 305 */
+8
arch/loongarch/kvm/vcpu.c
··· 294 294 vcpu->arch.aux_inuse &= ~KVM_LARCH_SWCSR_LATEST; 295 295 296 296 if (kvm_request_pending(vcpu) || xfer_to_guest_mode_work_pending()) { 297 + kvm_lose_pmu(vcpu); 297 298 /* make sure the vcpu mode has been written */ 298 299 smp_store_mb(vcpu->mode, OUTSIDE_GUEST_MODE); 299 300 local_irq_enable(); ··· 903 902 vcpu->arch.st.guest_addr = 0; 904 903 memset(&vcpu->arch.irq_pending, 0, sizeof(vcpu->arch.irq_pending)); 905 904 memset(&vcpu->arch.irq_clear, 0, sizeof(vcpu->arch.irq_clear)); 905 + 906 + /* 907 + * When vCPU reset, clear the ESTAT and GINTC registers 908 + * Other CSR registers are cleared with function _kvm_setcsr(). 909 + */ 910 + kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_GINTC, 0); 911 + kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_ESTAT, 0); 906 912 break; 907 913 default: 908 914 ret = -EINVAL;
+1 -1
arch/loongarch/mm/hugetlbpage.c
··· 47 47 pmd = pmd_offset(pud, addr); 48 48 } 49 49 } 50 - return (pte_t *) pmd; 50 + return pmd_none(pmdp_get(pmd)) ? NULL : (pte_t *) pmd; 51 51 } 52 52 53 53 uint64_t pmd_to_entrylo(unsigned long pmd_val)
-3
arch/loongarch/mm/init.c
··· 65 65 { 66 66 unsigned long max_zone_pfns[MAX_NR_ZONES]; 67 67 68 - #ifdef CONFIG_ZONE_DMA 69 - max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN; 70 - #endif 71 68 #ifdef CONFIG_ZONE_DMA32 72 69 max_zone_pfns[ZONE_DMA32] = MAX_DMA32_PFN; 73 70 #endif
+17
arch/openrisc/include/asm/cacheflush.h
··· 23 23 */ 24 24 extern void local_dcache_page_flush(struct page *page); 25 25 extern void local_icache_page_inv(struct page *page); 26 + extern void local_dcache_range_flush(unsigned long start, unsigned long end); 27 + extern void local_dcache_range_inv(unsigned long start, unsigned long end); 28 + extern void local_icache_range_inv(unsigned long start, unsigned long end); 26 29 27 30 /* 28 31 * Data cache flushing always happen on the local cpu. Instruction cache ··· 40 37 #define icache_page_inv(page) smp_icache_page_inv(page) 41 38 extern void smp_icache_page_inv(struct page *page); 42 39 #endif /* CONFIG_SMP */ 40 + 41 + /* 42 + * Even if the actual block size is larger than L1_CACHE_BYTES, paddr 43 + * can be incremented by L1_CACHE_BYTES. When paddr is written to the 44 + * invalidate register, the entire cache line encompassing this address 45 + * is invalidated. Each subsequent reference to the same cache line will 46 + * not affect the invalidation process. 47 + */ 48 + #define local_dcache_block_flush(addr) \ 49 + local_dcache_range_flush(addr, addr + L1_CACHE_BYTES) 50 + #define local_dcache_block_inv(addr) \ 51 + local_dcache_range_inv(addr, addr + L1_CACHE_BYTES) 52 + #define local_icache_block_inv(addr) \ 53 + local_icache_range_inv(addr, addr + L1_CACHE_BYTES) 43 54 44 55 /* 45 56 * Synchronizes caches. Whenever a cpu writes executable code to memory, this
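The comment added above explains why the block helpers can step by L1_CACHE_BYTES even when the hardware line is larger: any address in a line invalidates the whole line. A stand-alone sketch, assuming a 16-byte line size purely for illustration, that counts how many lines such a range walk touches:

```c
#include <assert.h>

#define L1_CACHE_BYTES 16  /* illustrative line size, not OpenRISC-specific */

/*
 * Model of the walk implied by local_dcache_range_*() above: align the
 * start down to a line boundary, then visit one address per line until
 * the end of the range is covered.
 */
static unsigned int cache_lines_touched(unsigned long start, unsigned long end)
{
	unsigned long addr;
	unsigned int lines = 0;

	for (addr = start & ~(L1_CACHE_BYTES - 1UL); addr < end;
	     addr += L1_CACHE_BYTES)
		lines++;
	return lines;
}
```

This is why `local_dcache_block_flush(addr)` can simply be defined as a one-line-wide range flush of `[addr, addr + L1_CACHE_BYTES)`.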
+17 -7
arch/openrisc/include/asm/cpuinfo.h
··· 15 15 #ifndef __ASM_OPENRISC_CPUINFO_H 16 16 #define __ASM_OPENRISC_CPUINFO_H 17 17 18 + #include <asm/spr.h> 19 + #include <asm/spr_defs.h> 20 + 21 + struct cache_desc { 22 + u32 size; 23 + u32 sets; 24 + u32 block_size; 25 + u32 ways; 26 + }; 27 + 18 28 struct cpuinfo_or1k { 19 29 u32 clock_frequency; 20 30 21 - u32 icache_size; 22 - u32 icache_block_size; 23 - u32 icache_ways; 24 - 25 - u32 dcache_size; 26 - u32 dcache_block_size; 27 - u32 dcache_ways; 31 + struct cache_desc icache; 32 + struct cache_desc dcache; 28 33 29 34 u16 coreid; 30 35 }; 31 36 32 37 extern struct cpuinfo_or1k cpuinfo_or1k[NR_CPUS]; 33 38 extern void setup_cpuinfo(void); 39 + 40 + /* 41 + * Check if the cache component exists. 42 + */ 43 + extern bool cpu_cache_is_present(const unsigned int cache_type); 34 44 35 45 #endif /* __ASM_OPENRISC_CPUINFO_H */
+1 -1
arch/openrisc/kernel/Makefile
··· 7 7 8 8 obj-y := head.o setup.o or32_ksyms.o process.o dma.o \ 9 9 traps.o time.o irq.o entry.o ptrace.o signal.o \ 10 - sys_call_table.o unwinder.o 10 + sys_call_table.o unwinder.o cacheinfo.o 11 11 12 12 obj-$(CONFIG_SMP) += smp.o sync-timer.o 13 13 obj-$(CONFIG_STACKTRACE) += stacktrace.o
+104
arch/openrisc/kernel/cacheinfo.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * OpenRISC cacheinfo support 4 + * 5 + * Based on work done for MIPS and LoongArch. All original copyrights 6 + * apply as per the original source declaration. 7 + * 8 + * OpenRISC implementation: 9 + * Copyright (C) 2025 Sahil Siddiq <sahilcdq@proton.me> 10 + */ 11 + 12 + #include <linux/cacheinfo.h> 13 + #include <asm/cpuinfo.h> 14 + #include <asm/spr.h> 15 + #include <asm/spr_defs.h> 16 + 17 + static inline void ci_leaf_init(struct cacheinfo *this_leaf, enum cache_type type, 18 + unsigned int level, struct cache_desc *cache, int cpu) 19 + { 20 + this_leaf->type = type; 21 + this_leaf->level = level; 22 + this_leaf->coherency_line_size = cache->block_size; 23 + this_leaf->number_of_sets = cache->sets; 24 + this_leaf->ways_of_associativity = cache->ways; 25 + this_leaf->size = cache->size; 26 + cpumask_set_cpu(cpu, &this_leaf->shared_cpu_map); 27 + } 28 + 29 + int init_cache_level(unsigned int cpu) 30 + { 31 + struct cpuinfo_or1k *cpuinfo = &cpuinfo_or1k[smp_processor_id()]; 32 + struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu); 33 + int leaves = 0, levels = 0; 34 + unsigned long upr = mfspr(SPR_UPR); 35 + unsigned long iccfgr, dccfgr; 36 + 37 + if (!(upr & SPR_UPR_UP)) { 38 + printk(KERN_INFO 39 + "-- no UPR register... unable to detect configuration\n"); 40 + return -ENOENT; 41 + } 42 + 43 + if (cpu_cache_is_present(SPR_UPR_DCP)) { 44 + dccfgr = mfspr(SPR_DCCFGR); 45 + cpuinfo->dcache.ways = 1 << (dccfgr & SPR_DCCFGR_NCW); 46 + cpuinfo->dcache.sets = 1 << ((dccfgr & SPR_DCCFGR_NCS) >> 3); 47 + cpuinfo->dcache.block_size = 16 << ((dccfgr & SPR_DCCFGR_CBS) >> 7); 48 + cpuinfo->dcache.size = 49 + cpuinfo->dcache.sets * cpuinfo->dcache.ways * cpuinfo->dcache.block_size; 50 + leaves += 1; 51 + printk(KERN_INFO 52 + "-- dcache: %d bytes total, %d bytes/line, %d set(s), %d way(s)\n", 53 + cpuinfo->dcache.size, cpuinfo->dcache.block_size, 54 + cpuinfo->dcache.sets, cpuinfo->dcache.ways); 55 + } else 56 + printk(KERN_INFO "-- dcache disabled\n"); 57 + 58 + if (cpu_cache_is_present(SPR_UPR_ICP)) { 59 + iccfgr = mfspr(SPR_ICCFGR); 60 + cpuinfo->icache.ways = 1 << (iccfgr & SPR_ICCFGR_NCW); 61 + cpuinfo->icache.sets = 1 << ((iccfgr & SPR_ICCFGR_NCS) >> 3); 62 + cpuinfo->icache.block_size = 16 << ((iccfgr & SPR_ICCFGR_CBS) >> 7); 63 + cpuinfo->icache.size = 64 + cpuinfo->icache.sets * cpuinfo->icache.ways * cpuinfo->icache.block_size; 65 + leaves += 1; 66 + printk(KERN_INFO 67 + "-- icache: %d bytes total, %d bytes/line, %d set(s), %d way(s)\n", 68 + cpuinfo->icache.size, cpuinfo->icache.block_size, 69 + cpuinfo->icache.sets, cpuinfo->icache.ways); 70 + } else 71 + printk(KERN_INFO "-- icache disabled\n"); 72 + 73 + if (!leaves) 74 + return -ENOENT; 75 + 76 + levels = 1; 77 + 78 + this_cpu_ci->num_leaves = leaves; 79 + this_cpu_ci->num_levels = levels; 80 + 81 + return 0; 82 + } 83 + 84 + int populate_cache_leaves(unsigned int cpu) 85 + { 86 + struct cpuinfo_or1k *cpuinfo = &cpuinfo_or1k[smp_processor_id()]; 87 + struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu); 88 + struct cacheinfo *this_leaf = this_cpu_ci->info_list; 89 + int level = 1; 90 + 91 + if (cpu_cache_is_present(SPR_UPR_DCP)) { 92 + ci_leaf_init(this_leaf, CACHE_TYPE_DATA, level, &cpuinfo->dcache, cpu); 93 + this_leaf->attributes = ((mfspr(SPR_DCCFGR) & SPR_DCCFGR_CWS) >> 8) ? 94 + CACHE_WRITE_BACK : CACHE_WRITE_THROUGH; 95 + this_leaf++; 96 + } 97 + 98 + if (cpu_cache_is_present(SPR_UPR_ICP)) 99 + ci_leaf_init(this_leaf, CACHE_TYPE_INST, level, &cpuinfo->icache, cpu); 100 + 101 + this_cpu_ci->cpu_map_populated = true; 102 + 103 + return 0; 104 + }
+4 -14
arch/openrisc/kernel/dma.c
··· 17 17 #include <linux/pagewalk.h> 18 18 19 19 #include <asm/cpuinfo.h> 20 + #include <asm/cacheflush.h> 20 21 #include <asm/spr_defs.h> 21 22 #include <asm/tlbflush.h> 22 23 ··· 25 24 page_set_nocache(pte_t *pte, unsigned long addr, 26 25 unsigned long next, struct mm_walk *walk) 27 26 { 28 - unsigned long cl; 29 - struct cpuinfo_or1k *cpuinfo = &cpuinfo_or1k[smp_processor_id()]; 30 - 31 27 pte_val(*pte) |= _PAGE_CI; 32 28 33 29 /* ··· 34 36 flush_tlb_kernel_range(addr, addr + PAGE_SIZE); 35 37 36 38 /* Flush page out of dcache */ 37 - for (cl = __pa(addr); cl < __pa(next); cl += cpuinfo->dcache_block_size) 38 - mtspr(SPR_DCBFR, cl); 39 + local_dcache_range_flush(__pa(addr), __pa(next)); 39 40 40 41 return 0; 41 42 } ··· 95 98 void arch_sync_dma_for_device(phys_addr_t addr, size_t size, 96 99 enum dma_data_direction dir) 97 100 { 98 - unsigned long cl; 99 - struct cpuinfo_or1k *cpuinfo = &cpuinfo_or1k[smp_processor_id()]; 100 - 101 101 switch (dir) { 102 102 case DMA_TO_DEVICE: 103 103 /* Flush the dcache for the requested range */ 104 - for (cl = addr; cl < addr + size; 105 - cl += cpuinfo->dcache_block_size) 106 - mtspr(SPR_DCBFR, cl); 104 + local_dcache_range_flush(addr, addr + size); 107 105 break; 108 106 case DMA_FROM_DEVICE: 109 107 /* Invalidate the dcache for the requested range */ 110 - for (cl = addr; cl < addr + size; 111 - cl += cpuinfo->dcache_block_size) 112 - mtspr(SPR_DCBIR, cl); 108 + local_dcache_range_inv(addr, addr + size); 113 109 break; 114 110 default: 115 111 /*
+3 -42
arch/openrisc/kernel/setup.c
··· 113 113 return; 114 114 } 115 115 116 - if (upr & SPR_UPR_DCP) 117 - printk(KERN_INFO 118 - "-- dcache: %4d bytes total, %2d bytes/line, %d way(s)\n", 119 - cpuinfo->dcache_size, cpuinfo->dcache_block_size, 120 - cpuinfo->dcache_ways); 121 - else 122 - printk(KERN_INFO "-- dcache disabled\n"); 123 - if (upr & SPR_UPR_ICP) 124 - printk(KERN_INFO 125 - "-- icache: %4d bytes total, %2d bytes/line, %d way(s)\n", 126 - cpuinfo->icache_size, cpuinfo->icache_block_size, 127 - cpuinfo->icache_ways); 128 - else 129 - printk(KERN_INFO "-- icache disabled\n"); 130 - 131 116 if (upr & SPR_UPR_DMP) 132 117 printk(KERN_INFO "-- dmmu: %4d entries, %lu way(s)\n", 133 118 1 << ((mfspr(SPR_DMMUCFGR) & SPR_DMMUCFGR_NTS) >> 2), ··· 140 155 void __init setup_cpuinfo(void) 141 156 { 142 157 struct device_node *cpu; 143 - unsigned long iccfgr, dccfgr; 144 - unsigned long cache_set_size; 145 158 int cpu_id = smp_processor_id(); 146 159 struct cpuinfo_or1k *cpuinfo = &cpuinfo_or1k[cpu_id]; 147 160 148 161 cpu = of_get_cpu_node(cpu_id, NULL); 149 162 if (!cpu) 150 163 panic("Couldn't find CPU%d in device tree...\n", cpu_id); 151 - 152 - iccfgr = mfspr(SPR_ICCFGR); 153 - cpuinfo->icache_ways = 1 << (iccfgr & SPR_ICCFGR_NCW); 154 - cache_set_size = 1 << ((iccfgr & SPR_ICCFGR_NCS) >> 3); 155 - cpuinfo->icache_block_size = 16 << ((iccfgr & SPR_ICCFGR_CBS) >> 7); 156 - cpuinfo->icache_size = 157 - cache_set_size * cpuinfo->icache_ways * cpuinfo->icache_block_size; 158 - 159 - dccfgr = mfspr(SPR_DCCFGR); 160 - cpuinfo->dcache_ways = 1 << (dccfgr & SPR_DCCFGR_NCW); 161 - cache_set_size = 1 << ((dccfgr & SPR_DCCFGR_NCS) >> 3); 162 - cpuinfo->dcache_block_size = 16 << ((dccfgr & SPR_DCCFGR_CBS) >> 7); 163 - cpuinfo->dcache_size = 164 - cache_set_size * cpuinfo->dcache_ways * cpuinfo->dcache_block_size; 165 164 166 165 if (of_property_read_u32(cpu, "clock-frequency", 167 166 &cpuinfo->clock_frequency)) { ··· 263 294 unsigned int vr, cpucfgr; 264 295 unsigned int avr; 265 296 unsigned int version; 
297 + #ifdef CONFIG_SMP 266 298 struct cpuinfo_or1k *cpuinfo = v; 299 + seq_printf(m, "processor\t\t: %d\n", cpuinfo->coreid); 300 + #endif 267 301 268 302 vr = mfspr(SPR_VR); 269 303 cpucfgr = mfspr(SPR_CPUCFGR); 270 304 271 - #ifdef CONFIG_SMP 272 - seq_printf(m, "processor\t\t: %d\n", cpuinfo->coreid); 273 - #endif 274 305 if (vr & SPR_VR_UVRP) { 275 306 vr = mfspr(SPR_VR2); 276 307 version = vr & SPR_VR2_VER; ··· 289 320 seq_printf(m, "revision\t\t: %d\n", vr & SPR_VR_REV); 290 321 } 291 322 seq_printf(m, "frequency\t\t: %ld\n", loops_per_jiffy * HZ); 292 - seq_printf(m, "dcache size\t\t: %d bytes\n", cpuinfo->dcache_size); 293 - seq_printf(m, "dcache block size\t: %d bytes\n", 294 - cpuinfo->dcache_block_size); 295 - seq_printf(m, "dcache ways\t\t: %d\n", cpuinfo->dcache_ways); 296 - seq_printf(m, "icache size\t\t: %d bytes\n", cpuinfo->icache_size); 297 - seq_printf(m, "icache block size\t: %d bytes\n", 298 - cpuinfo->icache_block_size); 299 - seq_printf(m, "icache ways\t\t: %d\n", cpuinfo->icache_ways); 300 323 seq_printf(m, "immu\t\t\t: %d entries, %lu ways\n", 301 324 1 << ((mfspr(SPR_DMMUCFGR) & SPR_DMMUCFGR_NTS) >> 2), 302 325 1 + (mfspr(SPR_DMMUCFGR) & SPR_DMMUCFGR_NTW));
+47 -9
arch/openrisc/mm/cache.c
··· 14 14 #include <asm/spr_defs.h> 15 15 #include <asm/cache.h> 16 16 #include <asm/cacheflush.h> 17 + #include <asm/cpuinfo.h> 17 18 #include <asm/tlbflush.h> 18 19 19 - static __always_inline void cache_loop(struct page *page, const unsigned int reg) 20 + /* 21 + * Check if the cache component exists. 22 + */ 23 + bool cpu_cache_is_present(const unsigned int cache_type) 24 + { 25 + unsigned long upr = mfspr(SPR_UPR); 26 + unsigned long mask = SPR_UPR_UP | cache_type; 27 + 28 + return !((upr & mask) ^ mask); 29 + } 30 + 31 + static __always_inline void cache_loop(unsigned long paddr, unsigned long end, 32 + const unsigned short reg, const unsigned int cache_type) 33 + { 34 + if (!cpu_cache_is_present(cache_type)) 35 + return; 36 + 37 + while (paddr < end) { 38 + mtspr(reg, paddr); 39 + paddr += L1_CACHE_BYTES; 40 + } 41 + } 42 + 43 + static __always_inline void cache_loop_page(struct page *page, const unsigned short reg, 44 + const unsigned int cache_type) 20 45 { 21 46 unsigned long paddr = page_to_pfn(page) << PAGE_SHIFT; 22 - unsigned long line = paddr & ~(L1_CACHE_BYTES - 1); 47 + unsigned long end = paddr + PAGE_SIZE; 23 48 24 - while (line < paddr + PAGE_SIZE) { 25 - mtspr(reg, line); 26 - line += L1_CACHE_BYTES; 27 - } 49 + paddr &= ~(L1_CACHE_BYTES - 1); 50 + 51 + cache_loop(paddr, end, reg, cache_type); 28 52 } 29 53 30 54 void local_dcache_page_flush(struct page *page) 31 55 { 32 - cache_loop(page, SPR_DCBFR); 56 + cache_loop_page(page, SPR_DCBFR, SPR_UPR_DCP); 33 57 } 34 58 EXPORT_SYMBOL(local_dcache_page_flush); 35 59 36 60 void local_icache_page_inv(struct page *page) 37 61 { 38 - cache_loop(page, SPR_ICBIR); 62 + cache_loop_page(page, SPR_ICBIR, SPR_UPR_ICP); 39 63 } 40 64 EXPORT_SYMBOL(local_icache_page_inv); 65 + 66 + void local_dcache_range_flush(unsigned long start, unsigned long end) 67 + { 68 + cache_loop(start, end, SPR_DCBFR, SPR_UPR_DCP); 69 + } 70 + 71 + void local_dcache_range_inv(unsigned long start, unsigned long end) 72 + { 73 + cache_loop(start, end, SPR_DCBIR, SPR_UPR_DCP); 74 + } 75 + 76 + void local_icache_range_inv(unsigned long start, unsigned long end) 77 + { 78 + cache_loop(start, end, SPR_ICBIR, SPR_UPR_ICP); 79 + } 41 80 42 81 void update_cache(struct vm_area_struct *vma, unsigned long address, 43 82 pte_t *pte) ··· 97 58 sync_icache_dcache(folio_page(folio, nr)); 98 59 } 99 60 } 100 -
+3 -2
arch/openrisc/mm/init.c
··· 35 35 #include <asm/fixmap.h> 36 36 #include <asm/tlbflush.h> 37 37 #include <asm/sections.h> 38 + #include <asm/cacheflush.h> 38 39 39 40 int mem_init_done; 40 41 ··· 177 176 barrier(); 178 177 179 178 /* Invalidate instruction caches after code modification */ 180 - mtspr(SPR_ICBIR, 0x900); 181 - mtspr(SPR_ICBIR, 0xa00); 179 + local_icache_block_inv(0x900); 180 + local_icache_block_inv(0xa00); 182 181 183 182 /* New TLB miss handlers and kernel page tables are now in place. 184 183 * Make sure that page flags get updated for all pages in TLB by
+2 -4
arch/powerpc/boot/wrapper
··· 234 234 235 235 # suppress some warnings in recent ld versions 236 236 nowarn="-z noexecstack" 237 - if ! ld_is_lld; then 238 - if [ "$LD_VERSION" -ge "$(echo 2.39 | ld_version)" ]; then 239 - nowarn="$nowarn --no-warn-rwx-segments" 240 - fi 237 + if "${CROSS}ld" -v --no-warn-rwx-segments >/dev/null 2>&1; then 238 + nowarn="$nowarn --no-warn-rwx-segments" 241 239 fi 242 240 243 241 platformo=$object/"$platform".o
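The wrapper change above swaps version parsing for a direct feature probe: run the linker with the candidate flag and keep the flag only if the invocation succeeds. A generic sketch of the pattern; tool_accepts is a hypothetical helper, and the real script probes "${CROSS}ld":

```shell
# Feature-probe a tool instead of parsing its version string: invoke it
# with the candidate flag and keep the flag only on success.
tool_accepts() {
    "$@" >/dev/null 2>&1
}

nowarn="-z noexecstack"
# In the real wrapper the probe is: "${CROSS}ld" -v --no-warn-rwx-segments
if tool_accepts ld -v --no-warn-rwx-segments; then
    nowarn="$nowarn --no-warn-rwx-segments"
fi
echo "linker flags: $nowarn"
```

Probing is more robust than version comparison because it also handles linkers (such as lld) whose version numbering does not track binutils.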
-4
arch/powerpc/kernel/module_64.c
··· 258 258 break; 259 259 } 260 260 } 261 - if (i == hdr->e_shnum) { 262 - pr_err("%s: doesn't contain __patchable_function_entries.\n", me->name); 263 - return -ENOEXEC; 264 - } 265 261 #endif 266 262 267 263 pr_debug("Looks like a total of %lu stubs, max\n", relocs);
+17 -3
arch/powerpc/mm/book3s64/radix_pgtable.c
··· 976 976 return 0; 977 977 } 978 978 979 - 979 + #ifdef CONFIG_ARCH_WANT_OPTIMIZE_DAX_VMEMMAP 980 980 bool vmemmap_can_optimize(struct vmem_altmap *altmap, struct dev_pagemap *pgmap) 981 981 { 982 982 if (radix_enabled()) ··· 984 984 985 985 return false; 986 986 } 987 + #endif 987 988 988 989 int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node, 989 990 unsigned long addr, unsigned long next) ··· 1121 1120 pmd_t *pmd; 1122 1121 pte_t *pte; 1123 1122 1123 + /* 1124 + * Make sure we align the start vmemmap addr so that we calculate 1125 + * the correct start_pfn in altmap boundary check to decide whether 1126 + * we should use altmap or RAM based backing memory allocation. Also 1127 + * the address needs to be aligned for set_pte operation. 1128 + 1129 + * If the start addr is already PMD_SIZE aligned we will try to use 1130 + * a pmd mapping. We don't want to be too aggressive here because 1131 + * that will cause more allocations in RAM. So only if the namespace 1132 + * vmemmap start addr is PMD_SIZE aligned we will use PMD mapping. 1133 + */ 1134 + 1135 + start = ALIGN_DOWN(start, PAGE_SIZE); 1124 1136 for (addr = start; addr < end; addr = next) { 1125 1137 next = pmd_addr_end(addr, end); 1126 1138 ··· 1159 1145 * in altmap block allocation failures, in which case 1160 1146 * we fallback to RAM for vmemmap allocation. 1161 1147 */ 1162 - if (altmap && (!IS_ALIGNED(addr, PMD_SIZE) || 1163 - altmap_cross_boundary(altmap, addr, PMD_SIZE))) { 1148 + if (!IS_ALIGNED(addr, PMD_SIZE) || (altmap && 1149 + altmap_cross_boundary(altmap, addr, PMD_SIZE))) { 1164 1150 /* 1165 1151 * make sure we don't create altmap mappings 1166 1152 * covering things outside the device.
+1 -1
arch/powerpc/platforms/powernv/Kconfig
··· 17 17 select MMU_NOTIFIER 18 18 select FORCE_SMP 19 19 select ARCH_SUPPORTS_PER_VMA_LOCK 20 - select PPC_RADIX_BROADCAST_TLBIE 20 + select PPC_RADIX_BROADCAST_TLBIE if PPC_RADIX_MMU 21 21 default y 22 22 23 23 config OPAL_PRD
+1 -1
arch/powerpc/platforms/pseries/Kconfig
··· 23 23 select FORCE_SMP 24 24 select SWIOTLB 25 25 select ARCH_SUPPORTS_PER_VMA_LOCK 26 - select PPC_RADIX_BROADCAST_TLBIE 26 + select PPC_RADIX_BROADCAST_TLBIE if PPC_RADIX_MMU 27 27 default y 28 28 29 29 config PARAVIRT
+10 -5
arch/riscv/include/asm/cacheflush.h
··· 34 34 flush_dcache_folio(page_folio(page)); 35 35 } 36 36 37 - /* 38 - * RISC-V doesn't have an instruction to flush parts of the instruction cache, 39 - * so instead we just flush the whole thing. 40 - */ 41 - #define flush_icache_range(start, end) flush_icache_all() 42 37 #define flush_icache_user_page(vma, pg, addr, len) \ 43 38 do { \ 44 39 if (vma->vm_flags & VM_EXEC) \ ··· 72 77 void flush_icache_mm(struct mm_struct *mm, bool local); 73 78 74 79 #endif /* CONFIG_SMP */ 80 + 81 + /* 82 + * RISC-V doesn't have an instruction to flush parts of the instruction cache, 83 + * so instead we just flush the whole thing. 84 + */ 85 + #define flush_icache_range flush_icache_range 86 + static inline void flush_icache_range(unsigned long start, unsigned long end) 87 + { 88 + flush_icache_all(); 89 + } 75 90 76 91 extern unsigned int riscv_cbom_block_size; 77 92 extern unsigned int riscv_cboz_block_size;
+2 -8
arch/riscv/kernel/probes/uprobes.c
··· 167 167 /* Initialize the slot */ 168 168 void *kaddr = kmap_atomic(page); 169 169 void *dst = kaddr + (vaddr & ~PAGE_MASK); 170 + unsigned long start = (unsigned long)dst; 170 171 171 172 memcpy(dst, src, len); 172 173 ··· 177 176 *(uprobe_opcode_t *)dst = __BUG_INSN_32; 178 177 } 179 178 179 + flush_icache_range(start, start + len); 180 180 kunmap_atomic(kaddr); 181 - 182 - /* 183 - * We probably need flush_icache_user_page() but it needs vma. 184 - * This should work on most of architectures by default. If 185 - * architecture needs to do something different it can define 186 - * its own version of the function. 187 - */ 188 - flush_dcache_page(page); 189 181 }
+1 -1
arch/x86/boot/Makefile
··· 59 59 $(obj)/bzImage: asflags-y := $(SVGA_MODE) 60 60 61 61 quiet_cmd_image = BUILD $@ 62 - cmd_image = cp $< $@; truncate -s %4K $@; cat $(obj)/vmlinux.bin >>$@ 62 + cmd_image = (dd if=$< bs=4k conv=sync status=none; cat $(filter-out $<,$(real-prereqs))) >$@ 63 63 64 64 $(obj)/bzImage: $(obj)/setup.bin $(obj)/vmlinux.bin FORCE 65 65 $(call if_changed,image)
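The new cmd_image builds the padded setup image and the concatenation in one pipeline: dd with conv=sync zero-pads the final partial input block up to bs, which is what the old `truncate -s %4K` achieved before appending vmlinux.bin. A small demonstration with stand-in files (status=none is a GNU dd option):

```shell
# conv=sync pads the last (partial) input block with zeroes, so the 5-byte
# stand-in for setup.bin becomes one full 4096-byte block before the
# stand-in vmlinux.bin is appended.
printf 'SETUP'  > setup.bin
printf 'KERNEL' > vmlinux.bin
(dd if=setup.bin bs=4k conv=sync status=none; cat vmlinux.bin) > bzImage
wc -c < bzImage   # 4096 + 6 = 4102
```

Unlike the cp-then-truncate sequence, the pipeline never modifies its inputs and writes the output file in a single pass.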
+1 -1
arch/x86/events/core.c
··· 629 629 if (event->attr.type == event->pmu->type) 630 630 event->hw.config |= x86_pmu_get_event_config(event); 631 631 632 - if (!event->attr.freq && x86_pmu.limit_period) { 632 + if (is_sampling_event(event) && !event->attr.freq && x86_pmu.limit_period) { 633 633 s64 left = event->attr.sample_period; 634 634 x86_pmu.limit_period(event, &left); 635 635 if (left > event->attr.sample_period)
+6
arch/x86/include/asm/kvm_host.h
··· 35 35 #include <asm/mtrr.h> 36 36 #include <asm/msr-index.h> 37 37 #include <asm/asm.h> 38 + #include <asm/irq_remapping.h> 38 39 #include <asm/kvm_page_track.h> 39 40 #include <asm/kvm_vcpu_regs.h> 40 41 #include <asm/reboot.h> ··· 2423 2422 * remaining 31 lower bits must be 0 to preserve ABI. 2424 2423 */ 2425 2424 #define KVM_EXIT_HYPERCALL_MBZ GENMASK_ULL(31, 1) 2425 + 2426 + static inline bool kvm_arch_has_irq_bypass(void) 2427 + { 2428 + return enable_apicv && irq_remapping_cap(IRQ_POSTING_CAP); 2429 + } 2426 2430 2427 2431 #endif /* _ASM_X86_KVM_HOST_H */
+11 -8
arch/x86/include/asm/pgalloc.h
··· 6 6 #include <linux/mm.h> /* for struct page */ 7 7 #include <linux/pagemap.h> 8 8 9 + #include <asm/cpufeature.h> 10 + 9 11 #define __HAVE_ARCH_PTE_ALLOC_ONE 10 12 #define __HAVE_ARCH_PGD_FREE 11 13 #include <asm-generic/pgalloc.h> ··· 31 29 static inline void paravirt_release_p4d(unsigned long pfn) {} 32 30 #endif 33 31 34 - #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION 35 32 /* 36 - * Instead of one PGD, we acquire two PGDs. Being order-1, it is 37 - * both 8k in size and 8k-aligned. That lets us just flip bit 12 38 - * in a pointer to swap between the two 4k halves. 33 + * In case of Page Table Isolation active, we acquire two PGDs instead of one. 34 + * Being order-1, it is both 8k in size and 8k-aligned. That lets us just 35 + * flip bit 12 in a pointer to swap between the two 4k halves. 39 36 */ 40 - #define PGD_ALLOCATION_ORDER 1 41 - #else 42 - #define PGD_ALLOCATION_ORDER 0 43 - #endif 37 + static inline unsigned int pgd_allocation_order(void) 38 + { 39 + if (cpu_feature_enabled(X86_FEATURE_PTI)) 40 + return 1; 41 + return 0; 42 + } 44 43 45 44 /* 46 45 * Allocate and free page tables.
+8
arch/x86/kernel/e820.c
··· 1299 1299 memblock_add(entry->addr, entry->size); 1300 1300 } 1301 1301 1302 + /* 1303 + * 32-bit systems are limited to 4 GB of memory even with HIGHMEM and 1304 + * to even less without it. 1305 + * Discard memory after max_pfn - the actual limit detected at runtime. 1306 + */ 1307 + if (IS_ENABLED(CONFIG_X86_32)) 1308 + memblock_remove(PFN_PHYS(max_pfn), -1); 1309 + 1302 1310 /* Throw away partial pages: */ 1303 1311 memblock_trim_memory(PAGE_SIZE); 1304 1312
+2 -2
arch/x86/kernel/machine_kexec_32.c
··· 42 42 43 43 static void machine_kexec_free_page_tables(struct kimage *image) 44 44 { 45 - free_pages((unsigned long)image->arch.pgd, PGD_ALLOCATION_ORDER); 45 + free_pages((unsigned long)image->arch.pgd, pgd_allocation_order()); 46 46 image->arch.pgd = NULL; 47 47 #ifdef CONFIG_X86_PAE 48 48 free_page((unsigned long)image->arch.pmd0); ··· 59 59 static int machine_kexec_alloc_page_tables(struct kimage *image) 60 60 { 61 61 image->arch.pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 62 - PGD_ALLOCATION_ORDER); 62 + pgd_allocation_order()); 63 63 #ifdef CONFIG_X86_PAE 64 64 image->arch.pmd0 = (pmd_t *)get_zeroed_page(GFP_KERNEL); 65 65 image->arch.pmd1 = (pmd_t *)get_zeroed_page(GFP_KERNEL);
+37 -31
arch/x86/kvm/svm/avic.c
··· 796 796 struct amd_svm_iommu_ir *ir; 797 797 u64 entry; 798 798 799 + if (WARN_ON_ONCE(!pi->ir_data)) 800 + return -EINVAL; 801 + 799 802 /** 800 803 * In some cases, the existing irte is updated and re-set, 801 804 * so we need to check here if it's already been added 802 805 * to the ir_list. 803 806 */ 804 - if (pi->ir_data && (pi->prev_ga_tag != 0)) { 807 + if (pi->prev_ga_tag) { 805 808 struct kvm *kvm = svm->vcpu.kvm; 806 809 u32 vcpu_id = AVIC_GATAG_TO_VCPUID(pi->prev_ga_tag); 807 810 struct kvm_vcpu *prev_vcpu = kvm_get_vcpu_by_id(kvm, vcpu_id); ··· 823 820 * Allocating new amd_iommu_pi_data, which will get 824 821 * added to the per-vcpu ir_list. 825 822 */ 826 - ir = kzalloc(sizeof(struct amd_svm_iommu_ir), GFP_KERNEL_ACCOUNT); 823 + ir = kzalloc(sizeof(struct amd_svm_iommu_ir), GFP_ATOMIC | __GFP_ACCOUNT); 827 824 if (!ir) { 828 825 ret = -ENOMEM; 829 826 goto out; ··· 899 896 { 900 897 struct kvm_kernel_irq_routing_entry *e; 901 898 struct kvm_irq_routing_table *irq_rt; 899 + bool enable_remapped_mode = true; 902 900 int idx, ret = 0; 903 901 904 - if (!kvm_arch_has_assigned_device(kvm) || 905 - !irq_remapping_cap(IRQ_POSTING_CAP)) 902 + if (!kvm_arch_has_assigned_device(kvm) || !kvm_arch_has_irq_bypass()) 906 903 return 0; 907 904 908 905 pr_debug("SVM: %s: host_irq=%#x, guest_irq=%#x, set=%#x\n", ··· 936 933 kvm_vcpu_apicv_active(&svm->vcpu)) { 937 934 struct amd_iommu_pi_data pi; 938 935 936 + enable_remapped_mode = false; 937 + 939 938 /* Try to enable guest_mode in IRTE */ 940 939 pi.base = __sme_set(page_to_phys(svm->avic_backing_page) & 941 940 AVIC_HPA_MASK); ··· 956 951 */ 957 952 if (!ret && pi.is_guest_mode) 958 953 svm_ir_list_add(svm, &pi); 959 - } else { 960 - /* Use legacy mode in IRTE */ 961 - struct amd_iommu_pi_data pi; 962 - 963 - /** 964 - * Here, pi is used to: 965 - * - Tell IOMMU to use legacy mode for this interrupt. 966 - * - Retrieve ga_tag of prior interrupt remapping data. 967 - */ 968 - pi.prev_ga_tag = 0; 969 - pi.is_guest_mode = false; 970 - ret = irq_set_vcpu_affinity(host_irq, &pi); 971 - 972 - /** 973 - * Check if the posted interrupt was previously 974 - * setup with the guest_mode by checking if the ga_tag 975 - * was cached. If so, we need to clean up the per-vcpu 976 - * ir_list. 977 - */ 978 - if (!ret && pi.prev_ga_tag) { 979 - int id = AVIC_GATAG_TO_VCPUID(pi.prev_ga_tag); 980 - struct kvm_vcpu *vcpu; 981 - 982 - vcpu = kvm_get_vcpu_by_id(kvm, id); 983 - if (vcpu) 984 - svm_ir_list_del(to_svm(vcpu), &pi); 985 - } 986 954 } 987 955 988 956 if (!ret && svm) { ··· 971 993 } 972 994 973 995 ret = 0; 996 + if (enable_remapped_mode) { 997 + /* Use legacy mode in IRTE */ 998 + struct amd_iommu_pi_data pi; 999 + 1000 + /** 1001 + * Here, pi is used to: 1002 + * - Tell IOMMU to use legacy mode for this interrupt. 1003 + * - Retrieve ga_tag of prior interrupt remapping data. 1004 + */ 1005 + pi.prev_ga_tag = 0; 1006 + pi.is_guest_mode = false; 1007 + ret = irq_set_vcpu_affinity(host_irq, &pi); 1008 + 1009 + /** 1010 + * Check if the posted interrupt was previously 1011 + * setup with the guest_mode by checking if the ga_tag 1012 + * was cached. If so, we need to clean up the per-vcpu 1013 + * ir_list. 1014 + */ 1015 + if (!ret && pi.prev_ga_tag) { 1016 + int id = AVIC_GATAG_TO_VCPUID(pi.prev_ga_tag); 1017 + struct kvm_vcpu *vcpu; 1018 + 1019 + vcpu = kvm_get_vcpu_by_id(kvm, id); 1020 + if (vcpu) 1021 + svm_ir_list_del(to_svm(vcpu), &pi); 1022 + } 1023 + } 974 1024 out: 975 1025 srcu_read_unlock(&kvm->irq_srcu, idx); 976 1026 return ret;
+10 -3
arch/x86/kvm/trace.h
··· 11 11 #undef TRACE_SYSTEM 12 12 #define TRACE_SYSTEM kvm 13 13 14 + #ifdef CREATE_TRACE_POINTS 15 + #define tracing_kvm_rip_read(vcpu) ({ \ 16 + typeof(vcpu) __vcpu = vcpu; \ 17 + __vcpu->arch.guest_state_protected ? 0 : kvm_rip_read(__vcpu); \ 18 + }) 19 + #endif 20 + 14 21 /* 15 22 * Tracepoint for guest mode entry. 16 23 */ ··· 35 28 36 29 TP_fast_assign( 37 30 __entry->vcpu_id = vcpu->vcpu_id; 38 - __entry->rip = kvm_rip_read(vcpu); 31 + __entry->rip = tracing_kvm_rip_read(vcpu); 39 32 __entry->immediate_exit = force_immediate_exit; 40 33 41 34 kvm_x86_call(get_entry_info)(vcpu, &__entry->intr_info, ··· 326 319 ), \ 327 320 \ 328 321 TP_fast_assign( \ 329 - __entry->guest_rip = kvm_rip_read(vcpu); \ 322 + __entry->guest_rip = tracing_kvm_rip_read(vcpu); \ 330 323 __entry->isa = isa; \ 331 324 __entry->vcpu_id = vcpu->vcpu_id; \ 332 325 __entry->requests = READ_ONCE(vcpu->requests); \ ··· 430 423 431 424 TP_fast_assign( 432 425 __entry->vcpu_id = vcpu->vcpu_id; 433 - __entry->guest_rip = kvm_rip_read(vcpu); 426 + __entry->guest_rip = tracing_kvm_rip_read(vcpu); 434 427 __entry->fault_address = fault_address; 435 428 __entry->error_code = error_code; 436 429 ),
+10 -18
arch/x86/kvm/vmx/posted_intr.c
··· 297 297 { 298 298 struct kvm_kernel_irq_routing_entry *e; 299 299 struct kvm_irq_routing_table *irq_rt; 300 + bool enable_remapped_mode = true; 300 301 struct kvm_lapic_irq irq; 301 302 struct kvm_vcpu *vcpu; 302 303 struct vcpu_data vcpu_info; ··· 336 335 337 336 kvm_set_msi_irq(kvm, e, &irq); 338 337 if (!kvm_intr_is_single_vcpu(kvm, &irq, &vcpu) || 339 - !kvm_irq_is_postable(&irq)) { 340 - /* 341 - * Make sure the IRTE is in remapped mode if 342 - * we don't handle it in posted mode. 343 - */ 344 - ret = irq_set_vcpu_affinity(host_irq, NULL); 345 - if (ret < 0) { 346 - printk(KERN_INFO 347 - "failed to back to remapped mode, irq: %u\n", 348 - host_irq); 349 - goto out; 350 - } 351 - 338 + !kvm_irq_is_postable(&irq)) 352 339 continue; 353 - } 354 340 355 341 vcpu_info.pi_desc_addr = __pa(vcpu_to_pi_desc(vcpu)); 356 342 vcpu_info.vector = irq.vector; ··· 345 357 trace_kvm_pi_irte_update(host_irq, vcpu->vcpu_id, e->gsi, 346 358 vcpu_info.vector, vcpu_info.pi_desc_addr, set); 347 359 348 - if (set) 349 - ret = irq_set_vcpu_affinity(host_irq, &vcpu_info); 350 - else 351 - ret = irq_set_vcpu_affinity(host_irq, NULL); 360 + if (!set) 361 + continue; 352 362 363 + enable_remapped_mode = false; 364 + 365 + ret = irq_set_vcpu_affinity(host_irq, &vcpu_info); 353 366 if (ret < 0) { 354 367 printk(KERN_INFO "%s: failed to update PI IRTE\n", 355 368 __func__); 356 369 goto out; 357 370 } 358 371 } 372 + 373 + if (enable_remapped_mode) 374 + ret = irq_set_vcpu_affinity(host_irq, NULL); 359 375 360 376 ret = 0; 361 377 out:
+19 -9
arch/x86/kvm/x86.c
··· 11098 11098 /* 11099 11099 * Profile KVM exit RIPs: 11100 11100 */ 11101 - if (unlikely(prof_on == KVM_PROFILING)) { 11101 + if (unlikely(prof_on == KVM_PROFILING && 11102 + !vcpu->arch.guest_state_protected)) { 11102 11103 unsigned long rip = kvm_rip_read(vcpu); 11103 11104 profile_hit(KVM_PROFILING, (void *)rip); 11104 11105 } ··· 13557 13556 } 13558 13557 EXPORT_SYMBOL_GPL(kvm_arch_has_noncoherent_dma); 13559 13558 13560 - bool kvm_arch_has_irq_bypass(void) 13561 - { 13562 - return enable_apicv && irq_remapping_cap(IRQ_POSTING_CAP); 13563 - } 13564 - 13565 13559 int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons, 13566 13560 struct irq_bypass_producer *prod) 13567 13561 { 13568 13562 struct kvm_kernel_irqfd *irqfd = 13569 13563 container_of(cons, struct kvm_kernel_irqfd, consumer); 13564 + struct kvm *kvm = irqfd->kvm; 13570 13565 int ret; 13571 13566 13572 - irqfd->producer = prod; 13573 13567 kvm_arch_start_assignment(irqfd->kvm); 13568 + 13569 + spin_lock_irq(&kvm->irqfds.lock); 13570 + irqfd->producer = prod; 13571 + 13574 13572 ret = kvm_x86_call(pi_update_irte)(irqfd->kvm, 13575 13573 prod->irq, irqfd->gsi, 1); 13576 13574 if (ret) 13577 13575 kvm_arch_end_assignment(irqfd->kvm); 13576 + 13577 + spin_unlock_irq(&kvm->irqfds.lock); 13578 + 13578 13579 13579 13580 return ret; 13580 13581 } ··· 13587 13584 int ret; 13588 13585 struct kvm_kernel_irqfd *irqfd = 13589 13586 container_of(cons, struct kvm_kernel_irqfd, consumer); 13587 + struct kvm *kvm = irqfd->kvm; 13590 13588 13591 13589 WARN_ON(irqfd->producer != prod); 13592 - irqfd->producer = NULL; 13593 13590 13594 13591 /* 13595 13592 * When the producer of a consumer is unregistered, we change back to ··· 13597 13594 * when the irq is masked/disabled or the consumer side (KVM 13598 13595 * in this case) doesn't want to receive the interrupts. 13599 13596 */ 13597 + spin_lock_irq(&kvm->irqfds.lock); 13598 + irqfd->producer = NULL; 13599 + 13600 13600 ret = kvm_x86_call(pi_update_irte)(irqfd->kvm, 13601 13601 prod->irq, irqfd->gsi, 0); 13602 13602 if (ret) 13603 13603 printk(KERN_INFO "irq bypass consumer (token %p) unregistration" 13604 13604 " fails: %d\n", irqfd->consumer.token, ret); 13605 + 13606 + spin_unlock_irq(&kvm->irqfds.lock); 13607 + 13605 13608 13606 13609 kvm_arch_end_assignment(irqfd->kvm); 13607 13610 } ··· 13621 13612 bool kvm_arch_irqfd_route_changed(struct kvm_kernel_irq_routing_entry *old, 13622 13613 struct kvm_kernel_irq_routing_entry *new) 13623 13614 { 13624 - if (new->type != KVM_IRQ_ROUTING_MSI) 13615 + if (old->type != KVM_IRQ_ROUTING_MSI || 13616 + new->type != KVM_IRQ_ROUTING_MSI) 13625 13617 return true; 13626 13618 13627 13619 return !!memcmp(&old->msi, &new->msi, sizeof(new->msi));
+2 -2
arch/x86/lib/x86-opcode-map.txt
··· 996 996 83: Grp1 Ev,Ib (1A),(es) 997 997 # CTESTSCC instructions are: CTESTB, CTESTBE, CTESTF, CTESTL, CTESTLE, CTESTNB, CTESTNBE, CTESTNL, 998 998 # CTESTNLE, CTESTNO, CTESTNS, CTESTNZ, CTESTO, CTESTS, CTESTT, CTESTZ 999 - 84: CTESTSCC (ev) 1000 - 85: CTESTSCC (es) | CTESTSCC (66),(es) 999 + 84: CTESTSCC Eb,Gb (ev) 1000 + 85: CTESTSCC Ev,Gv (es) | CTESTSCC Ev,Gv (66),(es) 1001 1001 88: POPCNT Gv,Ev (es) | POPCNT Gv,Ev (66),(es) 1002 1002 8f: POP2 Bq,Rq (000),(11B),(ev) 1003 1003 a5: SHLD Ev,Gv,CL (es) | SHLD Ev,Gv,CL (66),(es)
+2 -2
arch/x86/mm/pgtable.c
··· 360 360 * We allocate one page for pgd. 361 361 */ 362 362 if (!SHARED_KERNEL_PMD) 363 - return __pgd_alloc(mm, PGD_ALLOCATION_ORDER); 363 + return __pgd_alloc(mm, pgd_allocation_order()); 364 364 365 365 /* 366 366 * Now PAE kernel is not running as a Xen domain. We can allocate ··· 380 380 381 381 static inline pgd_t *_pgd_alloc(struct mm_struct *mm) 382 382 { 383 - return __pgd_alloc(mm, PGD_ALLOCATION_ORDER); 383 + return __pgd_alloc(mm, pgd_allocation_order()); 384 384 } 385 385 386 386 static inline void _pgd_free(struct mm_struct *mm, pgd_t *pgd)
+2 -2
arch/x86/platform/efi/efi_64.c
··· 73 73 gfp_t gfp_mask; 74 74 75 75 gfp_mask = GFP_KERNEL | __GFP_ZERO; 76 - efi_pgd = (pgd_t *)__get_free_pages(gfp_mask, PGD_ALLOCATION_ORDER); 76 + efi_pgd = (pgd_t *)__get_free_pages(gfp_mask, pgd_allocation_order()); 77 77 if (!efi_pgd) 78 78 goto fail; 79 79 ··· 96 96 if (pgtable_l5_enabled()) 97 97 free_page((unsigned long)pgd_page_vaddr(*pgd)); 98 98 free_pgd: 99 - free_pages((unsigned long)efi_pgd, PGD_ALLOCATION_ORDER); 99 + free_pages((unsigned long)efi_pgd, pgd_allocation_order()); 100 100 fail: 101 101 return -ENOMEM; 102 102 }
+51 -16
block/bdev.c
··· 152 152 get_order(bsize)); 153 153 } 154 154 155 + /** 156 + * bdev_validate_blocksize - check that this block size is acceptable 157 + * @bdev: blockdevice to check 158 + * @block_size: block size to check 159 + * 160 + * For block device users that do not use buffer heads or the block device 161 + * page cache, make sure that this block size can be used with the device. 162 + * 163 + * Return: On success zero is returned, negative error code on failure. 164 + */ 165 + int bdev_validate_blocksize(struct block_device *bdev, int block_size) 166 + { 167 + if (blk_validate_block_size(block_size)) 168 + return -EINVAL; 169 + 170 + /* Size cannot be smaller than the size supported by the device */ 171 + if (block_size < bdev_logical_block_size(bdev)) 172 + return -EINVAL; 173 + 174 + return 0; 175 + } 176 + EXPORT_SYMBOL_GPL(bdev_validate_blocksize); 177 + 155 178 int set_blocksize(struct file *file, int size) 156 179 { 157 180 struct inode *inode = file->f_mapping->host; 158 181 struct block_device *bdev = I_BDEV(inode); 182 + int ret; 159 183 160 - if (blk_validate_block_size(size)) 161 - return -EINVAL; 162 - 163 - /* Size cannot be smaller than the size supported by the device */ 164 - if (size < bdev_logical_block_size(bdev)) 165 - return -EINVAL; 184 + ret = bdev_validate_blocksize(bdev, size); 185 + if (ret) 186 + return ret; 166 187 167 188 if (!file->private_data) 168 189 return -EINVAL; 169 190 170 191 /* Don't change the size if it is same as current */ 171 192 if (inode->i_blkbits != blksize_bits(size)) { 193 + /* 194 + * Flush and truncate the pagecache before we reconfigure the 195 + * mapping geometry because folio sizes are variable now. If a 196 + * reader has already allocated a folio whose size is smaller 197 + * than the new min_order but invokes readahead after the new 198 + * min_order becomes visible, readahead will think there are 199 + * "zero" blocks per folio and crash. Take the inode and 200 + * invalidation locks to avoid racing with 201 + * read/write/fallocate. 202 + */ 203 + inode_lock(inode); 204 + filemap_invalidate_lock(inode->i_mapping); 205 + 172 206 sync_blockdev(bdev); 207 + kill_bdev(bdev); 208 + 173 209 inode->i_blkbits = blksize_bits(size); 174 210 mapping_set_folio_min_order(inode->i_mapping, get_order(size)); 175 211 kill_bdev(bdev); 212 + filemap_invalidate_unlock(inode->i_mapping); 213 + inode_unlock(inode); 176 214 } 177 215 return 0; 178 216 } ··· 815 777 blkdev_put_whole(whole); 816 778 } 817 779 818 - struct block_device *blkdev_get_no_open(dev_t dev) 780 + struct block_device *blkdev_get_no_open(dev_t dev, bool autoload) 819 781 { 820 782 struct block_device *bdev; 821 783 struct inode *inode; 822 784 823 785 inode = ilookup(blockdev_superblock, dev); 824 - if (!inode && IS_ENABLED(CONFIG_BLOCK_LEGACY_AUTOLOAD)) { 786 + if (!inode && autoload && IS_ENABLED(CONFIG_BLOCK_LEGACY_AUTOLOAD)) { 825 787 blk_request_module(dev); 826 788 inode = ilookup(blockdev_superblock, dev); 827 789 if (inode) ··· 1043 1005 if (ret) 1044 1006 return ERR_PTR(ret); 1045 1007 1046 - bdev = blkdev_get_no_open(dev); 1008 + bdev = blkdev_get_no_open(dev, true); 1047 1009 if (!bdev) 1048 1010 return ERR_PTR(-ENXIO); 1049 1011 ··· 1312 1274 */ 1313 1275 void bdev_statx(const struct path *path, struct kstat *stat, u32 request_mask) 1314 1276 { 1315 - struct inode *backing_inode; 1316 1277 struct block_device *bdev; 1317 1278 1318 - backing_inode = d_backing_inode(path->dentry); 1319 - 1320 1279 /* 1321 - * Note that backing_inode is the inode of a block device node file, 1322 - * not the block device's internal inode. Therefore it is *not* valid 1323 - * to use I_BDEV() here; the block device has to be looked up by i_rdev 1280 + * Note that d_backing_inode() returns the block device node inode, not 1281 + * the block device's internal inode. Therefore it is *not* valid to 1282 + * use I_BDEV() here; the block device has to be looked up by i_rdev 1324 1283 * instead. 1325 1284 */ 1326 - bdev = blkdev_get_no_open(backing_inode->i_rdev); 1285 + bdev = blkdev_get_no_open(d_backing_inode(path->dentry)->i_rdev, false); 1327 1286 if (!bdev) 1328 1287 return; 1329 1288
+1 -1
block/blk-cgroup.c
··· 797 797 return -EINVAL; 798 798 input = skip_spaces(input); 799 799 800 - bdev = blkdev_get_no_open(MKDEV(major, minor)); 800 + bdev = blkdev_get_no_open(MKDEV(major, minor), false); 801 801 if (!bdev) 802 802 return -ENODEV; 803 803 if (bdev_is_partition(bdev)) {
+7 -1
block/blk-settings.c
··· 61 61 /* 62 62 * For read-ahead of large files to be effective, we need to read ahead 63 63 * at least twice the optimal I/O size. 64 + * 65 + * There is no hardware limitation for the read-ahead size and the user 66 + * might have increased the read-ahead size through sysfs, so don't ever 67 + * decrease it. 64 68 */ 65 - bdi->ra_pages = max(lim->io_opt * 2 / PAGE_SIZE, VM_READAHEAD_PAGES); 69 + bdi->ra_pages = max3(bdi->ra_pages, 70 + lim->io_opt * 2 / PAGE_SIZE, 71 + VM_READAHEAD_PAGES); 66 72 bdi->io_pages = lim->max_sectors >> PAGE_SECTORS_SHIFT; 67 73 } 68 74
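The blk-settings change above stops shrinking a read-ahead window the user may have raised through sysfs: the new value is the maximum of the current setting, twice the optimal I/O size in pages, and the default. A standalone sketch of that clamp (the constants below are illustrative stand-ins, not taken from a real device):

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel constants. */
#define PAGE_SIZE 4096UL
#define VM_READAHEAD_PAGES 32UL

static unsigned long max3ul(unsigned long a, unsigned long b, unsigned long c)
{
	unsigned long m = a > b ? a : b;

	return m > c ? m : c;
}

/* Mirrors the update: never decrease an already-raised ra_pages. */
static unsigned long update_ra_pages(unsigned long cur_ra_pages,
				     unsigned long io_opt)
{
	return max3ul(cur_ra_pages, io_opt * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);
}
```

With an io_opt of 1 MiB this yields 512 pages, while a user-raised value of 1024 pages is left untouched.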
+4 -1
block/blk-zoned.c
··· 343 343 op = REQ_OP_ZONE_RESET; 344 344 345 345 /* Invalidate the page cache, including dirty pages. */ 346 + inode_lock(bdev->bd_mapping->host); 346 347 filemap_invalidate_lock(bdev->bd_mapping); 347 348 ret = blkdev_truncate_zone_range(bdev, mode, &zrange); 348 349 if (ret) ··· 365 364 ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors); 366 365 367 366 fail: 368 - if (cmd == BLKRESETZONE) 367 + if (cmd == BLKRESETZONE) { 369 368 filemap_invalidate_unlock(bdev->bd_mapping); 369 + inode_unlock(bdev->bd_mapping->host); 370 + } 370 371 371 372 return ret; 372 373 }
+3
block/blk.h
··· 94 94 wait_for_completion_io(done); 95 95 } 96 96 97 + struct block_device *blkdev_get_no_open(dev_t dev, bool autoload); 98 + void blkdev_put_no_open(struct block_device *bdev); 99 + 97 100 #define BIO_INLINE_VECS 4 98 101 struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs, 99 102 gfp_t gfp_mask);
+17 -1
block/fops.c
··· 642 642 if (ret) 643 643 return ret; 644 644 645 - bdev = blkdev_get_no_open(inode->i_rdev); 645 + bdev = blkdev_get_no_open(inode->i_rdev, true); 646 646 if (!bdev) 647 647 return -ENXIO; 648 648 ··· 746 746 ret = direct_write_fallback(iocb, from, ret, 747 747 blkdev_buffered_write(iocb, from)); 748 748 } else { 749 + /* 750 + * Take i_rwsem and invalidate_lock to avoid racing with 751 + * set_blocksize changing i_blkbits/folio order and punching 752 + * out the pagecache. 753 + */ 754 + inode_lock_shared(bd_inode); 749 755 ret = blkdev_buffered_write(iocb, from); 756 + inode_unlock_shared(bd_inode); 750 757 } 751 758 752 759 if (ret > 0) ··· 764 757 765 758 static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to) 766 759 { 760 + struct inode *bd_inode = bdev_file_inode(iocb->ki_filp); 767 761 struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host); 768 762 loff_t size = bdev_nr_bytes(bdev); 769 763 loff_t pos = iocb->ki_pos; ··· 801 793 goto reexpand; 802 794 } 803 795 796 + /* 797 + * Take i_rwsem and invalidate_lock to avoid racing with set_blocksize 798 + * changing i_blkbits/folio order and punching out the pagecache. 799 + */ 800 + inode_lock_shared(bd_inode); 804 801 ret = filemap_read(iocb, to, ret); 802 + inode_unlock_shared(bd_inode); 805 803 806 804 reexpand: 807 805 if (unlikely(shorted)) ··· 850 836 if ((start | len) & (bdev_logical_block_size(bdev) - 1)) 851 837 return -EINVAL; 852 838 839 + inode_lock(inode); 853 840 filemap_invalidate_lock(inode->i_mapping); 854 841 855 842 /* ··· 883 868 884 869 fail: 885 870 filemap_invalidate_unlock(inode->i_mapping); 871 + inode_unlock(inode); 886 872 return error; 887 873 } 888 874
+6
block/ioctl.c
··· 142 142 if (err) 143 143 return err; 144 144 145 + inode_lock(bdev->bd_mapping->host); 145 146 filemap_invalidate_lock(bdev->bd_mapping); 146 147 err = truncate_bdev_range(bdev, mode, start, start + len - 1); 147 148 if (err) ··· 175 174 blk_finish_plug(&plug); 176 175 fail: 177 176 filemap_invalidate_unlock(bdev->bd_mapping); 177 + inode_unlock(bdev->bd_mapping->host); 178 178 return err; 179 179 } 180 180 ··· 201 199 end > bdev_nr_bytes(bdev)) 202 200 return -EINVAL; 203 201 202 + inode_lock(bdev->bd_mapping->host); 204 203 filemap_invalidate_lock(bdev->bd_mapping); 205 204 err = truncate_bdev_range(bdev, mode, start, end - 1); 206 205 if (!err) 207 206 err = blkdev_issue_secure_erase(bdev, start >> 9, len >> 9, 208 207 GFP_KERNEL); 209 208 filemap_invalidate_unlock(bdev->bd_mapping); 209 + inode_unlock(bdev->bd_mapping->host); 210 210 return err; 211 211 } 212 212 ··· 240 236 return -EINVAL; 241 237 242 238 /* Invalidate the page cache, including dirty pages */ 239 + inode_lock(bdev->bd_mapping->host); 243 240 filemap_invalidate_lock(bdev->bd_mapping); 244 241 err = truncate_bdev_range(bdev, mode, start, end); 245 242 if (err) ··· 251 246 252 247 fail: 253 248 filemap_invalidate_unlock(bdev->bd_mapping); 249 + inode_unlock(bdev->bd_mapping->host); 254 250 return err; 255 251 } 256 252
+2 -3
crypto/scompress.c
··· 163 163 if (ret) 164 164 goto unlock; 165 165 } 166 - if (!scomp_scratch_users) { 166 + if (!scomp_scratch_users++) { 167 167 ret = crypto_scomp_alloc_scratches(); 168 168 if (ret) 169 - goto unlock; 170 - scomp_scratch_users++; 169 + scomp_scratch_users--; 171 170 } 172 171 unlock: 173 172 mutex_unlock(&scomp_lock);
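The scompress fix above replaces a test-then-increment with a post-increment under scomp_lock: the first user triggers the scratch allocation, and a failed allocation rolls the counter back so a later open retries it. A standalone sketch of that first-user pattern (hypothetical helper names, locking omitted):

```c
#include <assert.h>
#include <stdbool.h>

static int scratch_users;
static bool alloc_should_fail;	/* lets the sketch simulate allocation failure */

/* Stand-in for crypto_scomp_alloc_scratches(): 0 on success. */
static int alloc_scratches(void)
{
	return alloc_should_fail ? -1 : 0;
}

/*
 * First caller performs the allocation; on failure the count is rolled
 * back so the next caller retries. The real code runs under scomp_lock.
 */
static int get_scratch_user(void)
{
	int ret = 0;

	if (!scratch_users++) {
		ret = alloc_scratches();
		if (ret)
			scratch_users--;
	}
	return ret;
}
```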
+1 -1
drivers/android/binder.c
··· 6373 6373 seq_printf(m, " node %d", buffer->target_node->debug_id); 6374 6374 seq_printf(m, " size %zd:%zd offset %lx\n", 6375 6375 buffer->data_size, buffer->offsets_size, 6376 - proc->alloc.vm_start - buffer->user_data); 6376 + buffer->user_data - proc->alloc.vm_start); 6377 6377 } 6378 6378 6379 6379 static void print_binder_work_ilocked(struct seq_file *m,
+17 -8
drivers/ata/libata-scsi.c
··· 2453 2453 */ 2454 2454 put_unaligned_be16(ATA_FEATURE_SUB_MPAGE_LEN - 4, &buf[2]); 2455 2455 2456 - if (dev->flags & ATA_DFLAG_CDL) 2457 - buf[4] = 0x02; /* Support T2A and T2B pages */ 2456 + if (dev->flags & ATA_DFLAG_CDL_ENABLED) 2457 + buf[4] = 0x02; /* T2A and T2B pages enabled */ 2458 2458 else 2459 2459 buf[4] = 0; 2460 2460 ··· 3886 3886 } 3887 3887 3888 3888 /* 3889 - * Translate MODE SELECT control mode page, sub-pages f2h (ATA feature mode 3889 + * Translate MODE SELECT control mode page, sub-page f2h (ATA feature mode 3890 3890 * page) into a SET FEATURES command. 3891 3891 */ 3892 - static unsigned int ata_mselect_control_ata_feature(struct ata_queued_cmd *qc, 3893 - const u8 *buf, int len, 3894 - u16 *fp) 3892 + static int ata_mselect_control_ata_feature(struct ata_queued_cmd *qc, 3893 + const u8 *buf, int len, u16 *fp) 3895 3894 { 3896 3895 struct ata_device *dev = qc->dev; 3897 3896 struct ata_taskfile *tf = &qc->tf; ··· 3908 3909 /* Check cdl_ctrl */ 3909 3910 switch (buf[0] & 0x03) { 3910 3911 case 0: 3911 - /* Disable CDL */ 3912 + /* Disable CDL if it is enabled */ 3913 + if (!(dev->flags & ATA_DFLAG_CDL_ENABLED)) 3914 + return 0; 3915 + ata_dev_dbg(dev, "Disabling CDL\n"); 3912 3916 cdl_action = 0; 3913 3917 dev->flags &= ~ATA_DFLAG_CDL_ENABLED; 3914 3918 break; 3915 3919 case 0x02: 3916 - /* Enable CDL T2A/T2B: NCQ priority must be disabled */ 3920 + /* 3921 + * Enable CDL if not already enabled. Since this is mutually 3922 + * exclusive with NCQ priority, allow this only if NCQ priority 3923 + * is disabled. 3924 + */ 3925 + if (dev->flags & ATA_DFLAG_CDL_ENABLED) 3926 + return 0; 3917 3927 if (dev->flags & ATA_DFLAG_NCQ_PRIO_ENABLED) { 3918 3928 ata_dev_err(dev, 3919 3929 "NCQ priority must be disabled to enable CDL\n"); 3920 3930 return -EINVAL; 3921 3931 } 3932 + ata_dev_dbg(dev, "Enabling CDL\n"); 3922 3933 cdl_action = 1; 3923 3934 dev->flags |= ATA_DFLAG_CDL_ENABLED; 3924 3935 break;
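The rewritten ata_mselect_control_ata_feature() above returns early when the requested CDL state already matches the current one, so no SET FEATURES command is issued, and it still rejects enabling CDL while NCQ priority is active. A minimal sketch of that state logic (hypothetical names, not the libata API):

```c
#include <assert.h>
#include <stdbool.h>

static bool cdl_enabled;
static bool ncq_prio_enabled;
static int set_features_calls;	/* counts commands sent to the device */

/* Returns 0 on success (including a no-op request), -1 on conflict. */
static int set_cdl(bool enable)
{
	if (enable == cdl_enabled)
		return 0;	/* already in the requested state: skip the command */
	if (enable && ncq_prio_enabled)
		return -1;	/* CDL and NCQ priority are mutually exclusive */
	cdl_enabled = enable;
	set_features_calls++;	/* only now would SET FEATURES be issued */
	return 0;
}
```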
+10
drivers/base/auxiliary.c
··· 156 156 * }, 157 157 * .ops = my_custom_ops, 158 158 * }; 159 + * 160 + * Please note that such a custom ops approach is valid, but it is hard to 161 + * implement correctly without per-device global locks to protect against 162 + * auxiliary_drv removal during a call to those ops. In addition, such an 163 + * implementation lacks a proper module dependency, which leads to load/unload 164 + * races between the auxiliary parent and device modules. 165 + * 166 + * The easiest way to provide these ops reliably without needing a lock is to 167 + * EXPORT_SYMBOL*() them and rely on the already existing module 168 + * infrastructure for validity and correct dependency chains. 159 169 */ 160 170 161 171 static const struct auxiliary_device_id *auxiliary_match_id(const struct auxiliary_device_id *id,
+17
drivers/base/base.h
··· 73 73 kset_put(&sp->subsys); 74 74 } 75 75 76 + struct subsys_private *bus_to_subsys(const struct bus_type *bus); 76 77 struct subsys_private *class_to_subsys(const struct class *class); 77 78 78 79 struct driver_private { ··· 180 179 int driver_add_groups(const struct device_driver *drv, const struct attribute_group **groups); 181 180 void driver_remove_groups(const struct device_driver *drv, const struct attribute_group **groups); 182 181 void device_driver_detach(struct device *dev); 182 + 183 + static inline void device_set_driver(struct device *dev, const struct device_driver *drv) 184 + { 185 + /* 186 + * Majority (all?) read accesses to dev->driver happens either 187 + * while holding device lock or in bus/driver code that is only 188 + * invoked when the device is bound to a driver and there is no 189 + * concern of the pointer being changed while it is being read. 190 + * However when reading device's uevent file we read driver pointer 191 + * without taking device lock (so we do not block there for 192 + * arbitrary amount of time). We use WRITE_ONCE() here to prevent 193 + * tearing so that READ_ONCE() can safely be used in uevent code. 194 + */ 195 + // FIXME - this cast should not be needed "soon" 196 + WRITE_ONCE(dev->driver, (struct device_driver *)drv); 197 + } 183 198 184 199 int devres_release_all(struct device *dev); 185 200 void device_block_probing(void);
+1 -1
drivers/base/bus.c
··· 57 57 * NULL. A call to subsys_put() must be done when finished with the pointer in 58 58 * order for it to be properly freed. 59 59 */ 60 - static struct subsys_private *bus_to_subsys(const struct bus_type *bus) 60 + struct subsys_private *bus_to_subsys(const struct bus_type *bus) 61 61 { 62 62 struct subsys_private *sp = NULL; 63 63 struct kobject *kobj;
+32 -6
drivers/base/core.c
··· 2624 2624 return NULL; 2625 2625 } 2626 2626 2627 + /* 2628 + * Try filling the "DRIVER=<name>" uevent variable for a device. Because 2629 + * this function may race with binding and unbinding the device from a 2630 + * driver, we need to be careful. Binding is generally safe: at worst we 2631 + * miss the fact that the device is already bound to a driver (the driver 2632 + * information delivered through uevents is best-effort anyway; it may 2633 + * become obsolete as soon as it is generated). Unbinding is riskier, as 2634 + * the driver pointer is transitioning to NULL, so READ_ONCE() should be 2635 + * used to make sure we are dealing with a single consistent pointer, and 2636 + * to ensure that the driver structure is not going to disappear from 2637 + * under us we take the bus' drivers klist lock. We rely on the assumption 2638 + * that only a registered driver can be bound to a device, and that the 2639 + * bus code takes the same lock to unregister a driver. 2640 + */ 2641 + static void dev_driver_uevent(const struct device *dev, struct kobj_uevent_env *env) 2642 + { 2643 + struct subsys_private *sp = bus_to_subsys(dev->bus); 2644 + 2645 + if (sp) { 2646 + scoped_guard(spinlock, &sp->klist_drivers.k_lock) { 2647 + struct device_driver *drv = READ_ONCE(dev->driver); 2648 + if (drv) 2649 + add_uevent_var(env, "DRIVER=%s", drv->name); 2650 + } 2651 + 2652 + subsys_put(sp); 2653 + } 2654 + } 2655 + 2627 2656 static int dev_uevent(const struct kobject *kobj, struct kobj_uevent_env *env) 2628 2657 { 2629 2658 const struct device *dev = kobj_to_dev(kobj); ··· 2684 2655 if (dev->type && dev->type->name) 2685 2656 add_uevent_var(env, "DEVTYPE=%s", dev->type->name); 2686 2657 2687 - if (dev->driver) 2688 - add_uevent_var(env, "DRIVER=%s", dev->driver->name); 2658 + /* Add "DRIVER=%s" variable if the device is bound to a driver */ 2659 + dev_driver_uevent(dev, env); 2689 2660 2690 2661 /* Add common DT information about the device */ 2691 2662 of_device_uevent(dev, env); ··· 2755 2726 if
(!env) 2756 2727 return -ENOMEM; 2757 2728 2758 - /* Synchronize with really_probe() */ 2759 - device_lock(dev); 2760 2729 /* let the kset specific function add its keys */ 2761 2730 retval = kset->uevent_ops->uevent(&dev->kobj, env); 2762 - device_unlock(dev); 2763 2731 if (retval) 2764 2732 goto out; 2765 2733 ··· 3726 3700 device_pm_remove(dev); 3727 3701 dpm_sysfs_remove(dev); 3728 3702 DPMError: 3729 - dev->driver = NULL; 3703 + device_set_driver(dev, NULL); 3730 3704 bus_remove_device(dev); 3731 3705 BusError: 3732 3706 device_remove_attrs(dev);
+3 -4
drivers/base/dd.c
··· 550 550 arch_teardown_dma_ops(dev); 551 551 kfree(dev->dma_range_map); 552 552 dev->dma_range_map = NULL; 553 - dev->driver = NULL; 553 + device_set_driver(dev, NULL); 554 554 dev_set_drvdata(dev, NULL); 555 555 if (dev->pm_domain && dev->pm_domain->dismiss) 556 556 dev->pm_domain->dismiss(dev); ··· 629 629 } 630 630 631 631 re_probe: 632 - // FIXME - this cast should not be needed "soon" 633 - dev->driver = (struct device_driver *)drv; 632 + device_set_driver(dev, drv); 634 633 635 634 /* If using pinctrl, bind pins now before probing */ 636 635 ret = pinctrl_bind_pins(dev); ··· 1013 1014 if (ret == 0) 1014 1015 ret = 1; 1015 1016 else { 1016 - dev->driver = NULL; 1017 + device_set_driver(dev, NULL); 1017 1018 ret = 0; 1018 1019 } 1019 1020 } else {
+9 -13
drivers/base/devtmpfs.c
··· 296 296 return err; 297 297 } 298 298 299 - static int dev_mynode(struct device *dev, struct inode *inode, struct kstat *stat) 299 + static int dev_mynode(struct device *dev, struct inode *inode) 300 300 { 301 301 /* did we create it */ 302 302 if (inode->i_private != &thread) ··· 304 304 305 305 /* does the dev_t match */ 306 306 if (is_blockdev(dev)) { 307 - if (!S_ISBLK(stat->mode)) 307 + if (!S_ISBLK(inode->i_mode)) 308 308 return 0; 309 309 } else { 310 - if (!S_ISCHR(stat->mode)) 310 + if (!S_ISCHR(inode->i_mode)) 311 311 return 0; 312 312 } 313 - if (stat->rdev != dev->devt) 313 + if (inode->i_rdev != dev->devt) 314 314 return 0; 315 315 316 316 /* ours */ ··· 321 321 { 322 322 struct path parent; 323 323 struct dentry *dentry; 324 - struct kstat stat; 325 - struct path p; 324 + struct inode *inode; 326 325 int deleted = 0; 327 - int err; 326 + int err = 0; 328 327 329 328 dentry = kern_path_locked(nodename, &parent); 330 329 if (IS_ERR(dentry)) 331 330 return PTR_ERR(dentry); 332 331 333 - p.mnt = parent.mnt; 334 - p.dentry = dentry; 335 - err = vfs_getattr(&p, &stat, STATX_TYPE | STATX_MODE, 336 - AT_STATX_SYNC_AS_STAT); 337 - if (!err && dev_mynode(dev, d_inode(dentry), &stat)) { 332 + inode = d_inode(dentry); 333 + if (dev_mynode(dev, inode)) { 338 334 struct iattr newattrs; 339 335 /* 340 336 * before unlinking this node, reset permissions ··· 338 342 */ 339 343 newattrs.ia_uid = GLOBAL_ROOT_UID; 340 344 newattrs.ia_gid = GLOBAL_ROOT_GID; 341 - newattrs.ia_mode = stat.mode & ~0777; 345 + newattrs.ia_mode = inode->i_mode & ~0777; 342 346 newattrs.ia_valid = 343 347 ATTR_UID|ATTR_GID|ATTR_MODE; 344 348 inode_lock(d_inode(dentry));
+17 -24
drivers/base/memory.c
··· 816 816 return 0; 817 817 } 818 818 819 - static int __init add_boot_memory_block(unsigned long base_section_nr) 820 - { 821 - unsigned long nr; 822 - 823 - for_each_present_section_nr(base_section_nr, nr) { 824 - if (nr >= (base_section_nr + sections_per_block)) 825 - break; 826 - 827 - return add_memory_block(memory_block_id(base_section_nr), 828 - MEM_ONLINE, NULL, NULL); 829 - } 830 - 831 - return 0; 832 - } 833 - 834 819 static int add_hotplug_memory_block(unsigned long block_id, 835 820 struct vmem_altmap *altmap, 836 821 struct memory_group *group) ··· 942 957 void __init memory_dev_init(void) 943 958 { 944 959 int ret; 945 - unsigned long block_sz, nr; 960 + unsigned long block_sz, block_id, nr; 946 961 947 962 /* Validate the configured memory block size */ 948 963 block_sz = memory_block_size_bytes(); ··· 955 970 panic("%s() failed to register subsystem: %d\n", __func__, ret); 956 971 957 972 /* 958 - * Create entries for memory sections that were found 959 - * during boot and have been initialized 973 + * Create entries for memory sections that were found during boot 974 + * and have been initialized. Use @block_id to track the last 975 + * handled block and initialize it to an invalid value (ULONG_MAX) 976 + * to bypass the block ID matching check for the first present 977 + * block so that it can be covered. 960 978 */ 961 - for (nr = 0; nr <= __highest_present_section_nr; 962 - nr += sections_per_block) { 963 - ret = add_boot_memory_block(nr); 964 - if (ret) 965 - panic("%s() failed to add memory block: %d\n", __func__, 966 - ret); 979 + block_id = ULONG_MAX; 980 + for_each_present_section_nr(0, nr) { 981 + if (block_id != ULONG_MAX && memory_block_id(nr) == block_id) 982 + continue; 983 + 984 + block_id = memory_block_id(nr); 985 + ret = add_memory_block(block_id, MEM_ONLINE, NULL, NULL); 986 + if (ret) { 987 + panic("%s() failed to add memory block: %d\n", 988 + __func__, ret); 989 + } 967 990 } 968 991 } 969 992
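The new memory_dev_init() loop walks present sections directly and registers one block per distinct block ID, seeding the tracker with an invalid ID so the first present section is never skipped. The dedup logic can be sketched on its own (SECTIONS_PER_BLOCK here is an arbitrary illustrative value):

```c
#include <assert.h>

#define SECTIONS_PER_BLOCK 8UL
#define INVALID_BLOCK_ID (~0UL)

static unsigned long block_id_of(unsigned long section_nr)
{
	return section_nr / SECTIONS_PER_BLOCK;
}

/*
 * Given a sorted array of present section numbers, count how many
 * distinct memory blocks the dedup loop would register.
 */
static int count_added_blocks(const unsigned long *present, int n)
{
	unsigned long block_id = INVALID_BLOCK_ID;
	int added = 0;

	for (int i = 0; i < n; i++) {
		if (block_id != INVALID_BLOCK_ID &&
		    block_id_of(present[i]) == block_id)
			continue;

		block_id = block_id_of(present[i]);
		added++;	/* one add_memory_block() per block */
	}
	return added;
}
```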
+5 -8
drivers/base/module.c
··· 42 42 if (mod) 43 43 mk = &mod->mkobj; 44 44 else if (drv->mod_name) { 45 - struct kobject *mkobj; 46 - 47 - /* Lookup built-in module entry in /sys/modules */ 48 - mkobj = kset_find_obj(module_kset, drv->mod_name); 49 - if (mkobj) { 50 - mk = container_of(mkobj, struct module_kobject, kobj); 45 + /* Lookup or create built-in module entry in /sys/modules */ 46 + mk = lookup_or_create_module_kobject(drv->mod_name); 47 + if (mk) { 51 48 /* remember our module structure */ 52 49 drv->p->mkobj = mk; 53 - /* kset_find_obj took a reference */ 54 - kobject_put(mkobj); 50 + /* lookup_or_create_module_kobject took a reference */ 51 + kobject_put(&mk->kobj); 55 52 } 56 53 } 57 54
+1 -2
drivers/base/swnode.c
··· 1080 1080 if (!swnode) 1081 1081 return; 1082 1082 1083 + kobject_get(&swnode->kobj); 1083 1084 ret = sysfs_create_link(&dev->kobj, &swnode->kobj, "software_node"); 1084 1085 if (ret) 1085 1086 return; ··· 1090 1089 sysfs_remove_link(&dev->kobj, "software_node"); 1091 1090 return; 1092 1091 } 1093 - 1094 - kobject_get(&swnode->kobj); 1095 1092 } 1096 1093 1097 1094 void software_node_notify_remove(struct device *dev)
+24 -17
drivers/block/ublk_drv.c
··· 1683 1683 ublk_put_disk(disk); 1684 1684 } 1685 1685 1686 - static void ublk_cancel_cmd(struct ublk_queue *ubq, struct ublk_io *io, 1686 + static void ublk_cancel_cmd(struct ublk_queue *ubq, unsigned tag, 1687 1687 unsigned int issue_flags) 1688 1688 { 1689 + struct ublk_io *io = &ubq->ios[tag]; 1690 + struct ublk_device *ub = ubq->dev; 1691 + struct request *req; 1689 1692 bool done; 1690 1693 1691 1694 if (!(io->flags & UBLK_IO_FLAG_ACTIVE)) 1695 + return; 1696 + 1697 + /* 1698 + * Don't try to cancel this command if the request is started for 1699 + * avoiding race between io_uring_cmd_done() and 1700 + * io_uring_cmd_complete_in_task(). 1701 + * 1702 + * Either the started request will be aborted via __ublk_abort_rq(), 1703 + * then this uring_cmd is canceled next time, or it will be done in 1704 + * task work function ublk_dispatch_req() because io_uring guarantees 1705 + * that ublk_dispatch_req() is always called 1706 + */ 1707 + req = blk_mq_tag_to_rq(ub->tag_set.tags[ubq->q_id], tag); 1708 + if (req && blk_mq_request_started(req)) 1692 1709 return; 1693 1710 1694 1711 spin_lock(&ubq->cancel_lock); ··· 1739 1722 struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd); 1740 1723 struct ublk_queue *ubq = pdu->ubq; 1741 1724 struct task_struct *task; 1742 - struct ublk_io *io; 1743 1725 1744 1726 if (WARN_ON_ONCE(!ubq)) 1745 1727 return; ··· 1753 1737 if (!ubq->canceling) 1754 1738 ublk_start_cancel(ubq); 1755 1739 1756 - io = &ubq->ios[pdu->tag]; 1757 - WARN_ON_ONCE(io->cmd != cmd); 1758 - ublk_cancel_cmd(ubq, io, issue_flags); 1740 + WARN_ON_ONCE(ubq->ios[pdu->tag].cmd != cmd); 1741 + ublk_cancel_cmd(ubq, pdu->tag, issue_flags); 1759 1742 } 1760 1743 1761 1744 static inline bool ublk_queue_ready(struct ublk_queue *ubq) ··· 1767 1752 int i; 1768 1753 1769 1754 for (i = 0; i < ubq->q_depth; i++) 1770 - ublk_cancel_cmd(ubq, &ubq->ios[i], IO_URING_F_UNLOCKED); 1755 + ublk_cancel_cmd(ubq, i, IO_URING_F_UNLOCKED); 1771 1756 } 1772 1757 1773 1758 /* Cancel 
all pending commands, must be called after del_gendisk() returns */ ··· 1899 1884 ublk_reset_io_flags(ub); 1900 1885 complete_all(&ub->completion); 1901 1886 } 1902 - } 1903 - 1904 - static void ublk_handle_need_get_data(struct ublk_device *ub, int q_id, 1905 - int tag) 1906 - { 1907 - struct ublk_queue *ubq = ublk_get_queue(ub, q_id); 1908 - struct request *req = blk_mq_tag_to_rq(ub->tag_set.tags[q_id], tag); 1909 - 1910 - ublk_queue_cmd(ubq, req); 1911 1887 } 1912 1888 1913 1889 static inline int ublk_check_cmd_op(u32 cmd_op) ··· 2109 2103 if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV)) 2110 2104 goto out; 2111 2105 ublk_fill_io_cmd(io, cmd, ub_cmd->addr); 2112 - ublk_handle_need_get_data(ub, ub_cmd->q_id, ub_cmd->tag); 2113 - break; 2106 + req = blk_mq_tag_to_rq(ub->tag_set.tags[ub_cmd->q_id], tag); 2107 + ublk_dispatch_req(ubq, req, issue_flags); 2108 + return -EIOCBQUEUED; 2114 2109 default: 2115 2110 goto out; 2116 2111 }
+31 -26
drivers/bluetooth/btintel_pcie.c
··· 957 957 /* This is a debug event that comes from IML and OP image when it 958 958 * starts execution. There is no need pass this event to stack. 959 959 */ 960 - if (skb->data[2] == 0x97) 960 + if (skb->data[2] == 0x97) { 961 + hci_recv_diag(hdev, skb); 961 962 return 0; 963 + } 962 964 } 963 965 964 966 return hci_recv_frame(hdev, skb); ··· 976 974 u8 pkt_type; 977 975 u16 plen; 978 976 u32 pcie_pkt_type; 979 - struct sk_buff *new_skb; 980 977 void *pdata; 981 978 struct hci_dev *hdev = data->hdev; 982 979 ··· 1052 1051 1053 1052 bt_dev_dbg(hdev, "pkt_type: 0x%2.2x len: %u", pkt_type, plen); 1054 1053 1055 - new_skb = bt_skb_alloc(plen, GFP_ATOMIC); 1056 - if (!new_skb) { 1057 - bt_dev_err(hdev, "Failed to allocate memory for skb of len: %u", 1058 - skb->len); 1059 - ret = -ENOMEM; 1060 - goto exit_error; 1061 - } 1062 - 1063 - hci_skb_pkt_type(new_skb) = pkt_type; 1064 - skb_put_data(new_skb, skb->data, plen); 1054 + hci_skb_pkt_type(skb) = pkt_type; 1065 1055 hdev->stat.byte_rx += plen; 1056 + skb_trim(skb, plen); 1066 1057 1067 1058 if (pcie_pkt_type == BTINTEL_PCIE_HCI_EVT_PKT) 1068 - ret = btintel_pcie_recv_event(hdev, new_skb); 1059 + ret = btintel_pcie_recv_event(hdev, skb); 1069 1060 else 1070 - ret = hci_recv_frame(hdev, new_skb); 1061 + ret = hci_recv_frame(hdev, skb); 1062 + skb = NULL; /* skb is freed in the callee */ 1071 1063 1072 1064 exit_error: 1065 + if (skb) 1066 + kfree_skb(skb); 1067 + 1073 1068 if (ret) 1074 1069 hdev->stat.err_rx++; 1075 1070 ··· 1199 1202 struct btintel_pcie_data *data = container_of(work, 1200 1203 struct btintel_pcie_data, rx_work); 1201 1204 struct sk_buff *skb; 1202 - int err; 1203 - struct hci_dev *hdev = data->hdev; 1204 1205 1205 1206 if (test_bit(BTINTEL_PCIE_HWEXP_INPROGRESS, &data->flags)) { 1206 1207 /* Unlike usb products, controller will not send hardware ··· 1219 1224 1220 1225 /* Process the sk_buf in queue and send to the HCI layer */ 1221 1226 while ((skb = skb_dequeue(&data->rx_skb_q))) { 1222 - err = 
btintel_pcie_recv_frame(data, skb); 1223 - if (err) 1224 - bt_dev_err(hdev, "Failed to send received frame: %d", 1225 - err); 1226 - kfree_skb(skb); 1227 + btintel_pcie_recv_frame(data, skb); 1227 1228 } 1228 1229 } 1229 1230 ··· 1272 1281 bt_dev_dbg(hdev, "RXQ: cr_hia: %u cr_tia: %u", cr_hia, cr_tia); 1273 1282 1274 1283 /* Check CR_TIA and CR_HIA for change */ 1275 - if (cr_tia == cr_hia) { 1276 - bt_dev_warn(hdev, "RXQ: no new CD found"); 1284 + if (cr_tia == cr_hia) 1277 1285 return; 1278 - } 1279 1286 1280 1287 rxq = &data->rxq; 1281 1288 ··· 1309 1320 return IRQ_WAKE_THREAD; 1310 1321 } 1311 1322 1323 + static inline bool btintel_pcie_is_rxq_empty(struct btintel_pcie_data *data) 1324 + { 1325 + return data->ia.cr_hia[BTINTEL_PCIE_RXQ_NUM] == data->ia.cr_tia[BTINTEL_PCIE_RXQ_NUM]; 1326 + } 1327 + 1328 + static inline bool btintel_pcie_is_txackq_empty(struct btintel_pcie_data *data) 1329 + { 1330 + return data->ia.cr_tia[BTINTEL_PCIE_TXQ_NUM] == data->ia.cr_hia[BTINTEL_PCIE_TXQ_NUM]; 1331 + } 1332 + 1312 1333 static irqreturn_t btintel_pcie_irq_msix_handler(int irq, void *dev_id) 1313 1334 { 1314 1335 struct msix_entry *entry = dev_id; ··· 1350 1351 btintel_pcie_msix_gp0_handler(data); 1351 1352 1352 1353 /* For TX */ 1353 - if (intr_fh & BTINTEL_PCIE_MSIX_FH_INT_CAUSES_0) 1354 + if (intr_fh & BTINTEL_PCIE_MSIX_FH_INT_CAUSES_0) { 1354 1355 btintel_pcie_msix_tx_handle(data); 1356 + if (!btintel_pcie_is_rxq_empty(data)) 1357 + btintel_pcie_msix_rx_handle(data); 1358 + } 1355 1359 1356 1360 /* For RX */ 1357 - if (intr_fh & BTINTEL_PCIE_MSIX_FH_INT_CAUSES_1) 1361 + if (intr_fh & BTINTEL_PCIE_MSIX_FH_INT_CAUSES_1) { 1358 1362 btintel_pcie_msix_rx_handle(data); 1363 + if (!btintel_pcie_is_txackq_empty(data)) 1364 + btintel_pcie_msix_tx_handle(data); 1365 + } 1359 1366 1360 1367 /* 1361 1368 * Before sending the interrupt the HW disables it to prevent a nested
+10 -2
drivers/bluetooth/btmtksdio.c
··· 723 723 { 724 724 struct btmtksdio_dev *bdev = hci_get_drvdata(hdev); 725 725 726 + /* Skip btmtksdio_close if BTMTKSDIO_FUNC_ENABLED isn't set */ 727 + if (!test_bit(BTMTKSDIO_FUNC_ENABLED, &bdev->tx_state)) 728 + return 0; 729 + 726 730 sdio_claim_host(bdev->func); 727 731 728 732 /* Disable interrupt */ ··· 1447 1443 if (!bdev) 1448 1444 return; 1449 1445 1446 + hdev = bdev->hdev; 1447 + 1448 + /* Make sure to call btmtksdio_close before removing sdio card */ 1449 + if (test_bit(BTMTKSDIO_FUNC_ENABLED, &bdev->tx_state)) 1450 + btmtksdio_close(hdev); 1451 + 1450 1452 /* Be consistent the state in btmtksdio_probe */ 1451 1453 pm_runtime_get_noresume(bdev->dev); 1452 - 1453 - hdev = bdev->hdev; 1454 1454 1455 1455 sdio_set_drvdata(func, NULL); 1456 1456 hci_unregister_dev(hdev);
+73 -28
drivers/bluetooth/btusb.c
··· 3010 3010 bt_dev_err(hdev, "%s: triggle crash failed (%d)", __func__, err); 3011 3011 } 3012 3012 3013 - /* 3014 - * ==0: not a dump pkt. 3015 - * < 0: fails to handle a dump pkt 3016 - * > 0: otherwise. 3017 - */ 3013 + /* Return: 0 on success, negative errno on failure. */ 3018 3014 static int handle_dump_pkt_qca(struct hci_dev *hdev, struct sk_buff *skb) 3019 3015 { 3020 - int ret = 1; 3016 + int ret = 0; 3021 3017 u8 pkt_type; 3022 3018 u8 *sk_ptr; 3023 3019 unsigned int sk_len; 3024 3020 u16 seqno; 3025 3021 u32 dump_size; 3026 3022 3027 - struct hci_event_hdr *event_hdr; 3028 - struct hci_acl_hdr *acl_hdr; 3029 3023 struct qca_dump_hdr *dump_hdr; 3030 3024 struct btusb_data *btdata = hci_get_drvdata(hdev); 3031 3025 struct usb_device *udev = btdata->udev; ··· 3029 3035 sk_len = skb->len; 3030 3036 3031 3037 if (pkt_type == HCI_ACLDATA_PKT) { 3032 - acl_hdr = hci_acl_hdr(skb); 3033 - if (le16_to_cpu(acl_hdr->handle) != QCA_MEMDUMP_ACL_HANDLE) 3034 - return 0; 3035 3038 sk_ptr += HCI_ACL_HDR_SIZE; 3036 3039 sk_len -= HCI_ACL_HDR_SIZE; 3037 - event_hdr = (struct hci_event_hdr *)sk_ptr; 3038 - } else { 3039 - event_hdr = hci_event_hdr(skb); 3040 3040 } 3041 - 3042 - if ((event_hdr->evt != HCI_VENDOR_PKT) 3043 - || (event_hdr->plen != (sk_len - HCI_EVENT_HDR_SIZE))) 3044 - return 0; 3045 3041 3046 3042 sk_ptr += HCI_EVENT_HDR_SIZE; 3047 3043 sk_len -= HCI_EVENT_HDR_SIZE; 3048 3044 3049 3045 dump_hdr = (struct qca_dump_hdr *)sk_ptr; 3050 - if ((sk_len < offsetof(struct qca_dump_hdr, data)) 3051 - || (dump_hdr->vse_class != QCA_MEMDUMP_VSE_CLASS) 3052 - || (dump_hdr->msg_type != QCA_MEMDUMP_MSG_TYPE)) 3053 - return 0; 3054 - 3055 - /*it is dump pkt now*/ 3056 3046 seqno = le16_to_cpu(dump_hdr->seqno); 3057 3047 if (seqno == 0) { 3058 3048 set_bit(BTUSB_HW_SSR_ACTIVE, &btdata->flags); ··· 3110 3132 return ret; 3111 3133 } 3112 3134 3135 + /* Return: true if the ACL packet is a dump packet, false otherwise. 
*/ 3136 + static bool acl_pkt_is_dump_qca(struct hci_dev *hdev, struct sk_buff *skb) 3137 + { 3138 + u8 *sk_ptr; 3139 + unsigned int sk_len; 3140 + 3141 + struct hci_event_hdr *event_hdr; 3142 + struct hci_acl_hdr *acl_hdr; 3143 + struct qca_dump_hdr *dump_hdr; 3144 + 3145 + sk_ptr = skb->data; 3146 + sk_len = skb->len; 3147 + 3148 + acl_hdr = hci_acl_hdr(skb); 3149 + if (le16_to_cpu(acl_hdr->handle) != QCA_MEMDUMP_ACL_HANDLE) 3150 + return false; 3151 + 3152 + sk_ptr += HCI_ACL_HDR_SIZE; 3153 + sk_len -= HCI_ACL_HDR_SIZE; 3154 + event_hdr = (struct hci_event_hdr *)sk_ptr; 3155 + 3156 + if ((event_hdr->evt != HCI_VENDOR_PKT) || 3157 + (event_hdr->plen != (sk_len - HCI_EVENT_HDR_SIZE))) 3158 + return false; 3159 + 3160 + sk_ptr += HCI_EVENT_HDR_SIZE; 3161 + sk_len -= HCI_EVENT_HDR_SIZE; 3162 + 3163 + dump_hdr = (struct qca_dump_hdr *)sk_ptr; 3164 + if ((sk_len < offsetof(struct qca_dump_hdr, data)) || 3165 + (dump_hdr->vse_class != QCA_MEMDUMP_VSE_CLASS) || 3166 + (dump_hdr->msg_type != QCA_MEMDUMP_MSG_TYPE)) 3167 + return false; 3168 + 3169 + return true; 3170 + } 3171 + 3172 + /* Return: true if the event packet is a dump packet, false otherwise. 
*/ 3173 + static bool evt_pkt_is_dump_qca(struct hci_dev *hdev, struct sk_buff *skb) 3174 + { 3175 + u8 *sk_ptr; 3176 + unsigned int sk_len; 3177 + 3178 + struct hci_event_hdr *event_hdr; 3179 + struct qca_dump_hdr *dump_hdr; 3180 + 3181 + sk_ptr = skb->data; 3182 + sk_len = skb->len; 3183 + 3184 + event_hdr = hci_event_hdr(skb); 3185 + 3186 + if ((event_hdr->evt != HCI_VENDOR_PKT) 3187 + || (event_hdr->plen != (sk_len - HCI_EVENT_HDR_SIZE))) 3188 + return false; 3189 + 3190 + sk_ptr += HCI_EVENT_HDR_SIZE; 3191 + sk_len -= HCI_EVENT_HDR_SIZE; 3192 + 3193 + dump_hdr = (struct qca_dump_hdr *)sk_ptr; 3194 + if ((sk_len < offsetof(struct qca_dump_hdr, data)) || 3195 + (dump_hdr->vse_class != QCA_MEMDUMP_VSE_CLASS) || 3196 + (dump_hdr->msg_type != QCA_MEMDUMP_MSG_TYPE)) 3197 + return false; 3198 + 3199 + return true; 3200 + } 3201 + 3113 3202 static int btusb_recv_acl_qca(struct hci_dev *hdev, struct sk_buff *skb) 3114 3203 { 3115 - if (handle_dump_pkt_qca(hdev, skb)) 3116 - return 0; 3204 + if (acl_pkt_is_dump_qca(hdev, skb)) 3205 + return handle_dump_pkt_qca(hdev, skb); 3117 3206 return hci_recv_frame(hdev, skb); 3118 3207 } 3119 3208 3120 3209 static int btusb_recv_evt_qca(struct hci_dev *hdev, struct sk_buff *skb) 3121 3210 { 3122 - if (handle_dump_pkt_qca(hdev, skb)) 3123 - return 0; 3211 + if (evt_pkt_is_dump_qca(hdev, skb)) 3212 + return handle_dump_pkt_qca(hdev, skb); 3124 3213 return hci_recv_frame(hdev, skb); 3125 3214 } 3126 3215
+1 -1
drivers/char/misc.c
··· 315 315 goto fail_remove; 316 316 317 317 err = -EIO; 318 - if (register_chrdev(MISC_MAJOR, "misc", &misc_fops)) 318 + if (__register_chrdev(MISC_MAJOR, 0, MINORMASK + 1, "misc", &misc_fops)) 319 319 goto fail_printk; 320 320 return 0; 321 321
+1 -1
drivers/comedi/drivers/jr3_pci.c
··· 758 758 struct jr3_pci_dev_private *devpriv = dev->private; 759 759 760 760 if (devpriv) 761 - timer_delete_sync(&devpriv->timer); 761 + timer_shutdown_sync(&devpriv->timer); 762 762 763 763 comedi_pci_detach(dev); 764 764 }
+10 -10
drivers/cpufreq/Kconfig.arm
··· 76 76 config ARM_BRCMSTB_AVS_CPUFREQ 77 77 tristate "Broadcom STB AVS CPUfreq driver" 78 78 depends on (ARCH_BRCMSTB && !ARM_SCMI_CPUFREQ) || COMPILE_TEST 79 - default y 79 + default y if ARCH_BRCMSTB && !ARM_SCMI_CPUFREQ 80 80 help 81 81 Some Broadcom STB SoCs use a co-processor running proprietary firmware 82 82 ("AVS") to handle voltage and frequency scaling. This driver provides ··· 88 88 tristate "Calxeda Highbank-based" 89 89 depends on ARCH_HIGHBANK || COMPILE_TEST 90 90 depends on CPUFREQ_DT && REGULATOR && PL320_MBOX 91 - default m 91 + default m if ARCH_HIGHBANK 92 92 help 93 93 This adds the CPUFreq driver for Calxeda Highbank SoC 94 94 based boards. ··· 133 133 config ARM_MEDIATEK_CPUFREQ_HW 134 134 tristate "MediaTek CPUFreq HW driver" 135 135 depends on ARCH_MEDIATEK || COMPILE_TEST 136 - default m 136 + default m if ARCH_MEDIATEK 137 137 help 138 138 Support for the CPUFreq HW driver. 139 139 Some MediaTek chipsets have a HW engine to offload the steps ··· 181 181 config ARM_S3C64XX_CPUFREQ 182 182 bool "Samsung S3C64XX" 183 183 depends on CPU_S3C6410 || COMPILE_TEST 184 - default y 184 + default CPU_S3C6410 185 185 help 186 186 This adds the CPUFreq driver for Samsung S3C6410 SoC. 187 187 ··· 190 190 config ARM_S5PV210_CPUFREQ 191 191 bool "Samsung S5PV210 and S5PC110" 192 192 depends on CPU_S5PV210 || COMPILE_TEST 193 - default y 193 + default CPU_S5PV210 194 194 help 195 195 This adds the CPUFreq driver for Samsung S5PV210 and 196 196 S5PC110 SoCs. ··· 214 214 config ARM_SPEAR_CPUFREQ 215 215 bool "SPEAr CPUFreq support" 216 216 depends on PLAT_SPEAR || COMPILE_TEST 217 - default y 217 + default PLAT_SPEAR 218 218 help 219 219 This adds the CPUFreq driver support for SPEAr SOCs. 220 220 ··· 233 233 tristate "Tegra20/30 CPUFreq support" 234 234 depends on ARCH_TEGRA || COMPILE_TEST 235 235 depends on CPUFREQ_DT 236 - default y 236 + default ARCH_TEGRA 237 237 help 238 238 This adds the CPUFreq driver support for Tegra20/30 SOCs. 
239 239 ··· 241 241 bool "Tegra124 CPUFreq support" 242 242 depends on ARCH_TEGRA || COMPILE_TEST 243 243 depends on CPUFREQ_DT 244 - default y 244 + default ARCH_TEGRA 245 245 help 246 246 This adds the CPUFreq driver support for Tegra124 SOCs. 247 247 ··· 256 256 tristate "Tegra194 CPUFreq support" 257 257 depends on ARCH_TEGRA_194_SOC || ARCH_TEGRA_234_SOC || (64BIT && COMPILE_TEST) 258 258 depends on TEGRA_BPMP 259 - default y 259 + default ARCH_TEGRA_194_SOC || ARCH_TEGRA_234_SOC 260 260 help 261 261 This adds CPU frequency driver support for Tegra194 SOCs. 262 262 263 263 config ARM_TI_CPUFREQ 264 264 bool "Texas Instruments CPUFreq support" 265 265 depends on ARCH_OMAP2PLUS || ARCH_K3 || COMPILE_TEST 266 - default y 266 + default ARCH_OMAP2PLUS || ARCH_K3 267 267 help 268 268 This driver enables valid OPPs on the running platform based on 269 269 values contained within the SoC in use. Enable this in order to
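The Kconfig hunks above all follow one pattern: a bare `default y`/`default m` also turns the driver on for `COMPILE_TEST`-only builds, while `default <symbol>` (or `default y if <symbol>`) takes the tristate value of the platform symbol, so the driver only defaults on when its platform is actually selected. A minimal sketch with a hypothetical `ARCH_FOO` platform symbol:

```kconfig
config ARM_FOO_CPUFREQ
	tristate "Foo SoC CPUfreq driver"
	depends on ARCH_FOO || COMPILE_TEST
	# "default y" would also enable this driver in COMPILE_TEST-only
	# builds; "default ARCH_FOO" evaluates to ARCH_FOO's own value,
	# i.e. y only when the platform itself is enabled.
	default ARCH_FOO
```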
+8 -2
drivers/cpufreq/apple-soc-cpufreq.c
··· 134 134 135 135 static unsigned int apple_soc_cpufreq_get_rate(unsigned int cpu) 136 136 { 137 - struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu); 138 - struct apple_cpu_priv *priv = policy->driver_data; 137 + struct cpufreq_policy *policy; 138 + struct apple_cpu_priv *priv; 139 139 struct cpufreq_frequency_table *p; 140 140 unsigned int pstate; 141 + 142 + policy = cpufreq_cpu_get_raw(cpu); 143 + if (unlikely(!policy)) 144 + return 0; 145 + 146 + priv = policy->driver_data; 141 147 142 148 if (priv->info->cur_pstate_mask) { 143 149 u32 reg = readl_relaxed(priv->reg_base + APPLE_DVFS_STATUS);
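The fix above checks the result of `cpufreq_cpu_get_raw()` before touching `policy->driver_data`, since the lookup can return NULL (e.g. before the policy is initialized). A userspace sketch of the pattern, with stand-in types and a hypothetical lookup function since the real kernel structures are much larger:

```c
#include <stddef.h>

/* Stand-ins for the kernel types; names are illustrative only. */
struct cpufreq_policy { void *driver_data; };
struct apple_cpu_priv { unsigned int cur_khz; };

static struct apple_cpu_priv priv0 = { .cur_khz = 2064000 };
static struct cpufreq_policy policy0 = { .driver_data = &priv0 };

/* Hypothetical lookup: CPU 0 has a policy, CPU 1 does not, mirroring
 * what cpufreq_cpu_get_raw() may return for an uninitialized CPU. */
static struct cpufreq_policy *lookup_policy(unsigned int cpu)
{
	return cpu == 0 ? &policy0 : NULL;
}

/* The fixed pattern: fetch the policy first, bail out with 0
 * ("rate unknown") instead of dereferencing a NULL pointer. */
static unsigned int get_rate(unsigned int cpu)
{
	struct cpufreq_policy *policy = lookup_policy(cpu);
	struct apple_cpu_priv *priv;

	if (!policy)
		return 0;

	priv = policy->driver_data;
	return priv->cur_khz;
}
```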
+1 -1
drivers/cpufreq/cppc_cpufreq.c
··· 747 747 int ret; 748 748 749 749 if (!policy) 750 - return -ENODEV; 750 + return 0; 751 751 752 752 cpu_data = policy->driver_data; 753 753
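Returning 0 instead of `-ENODEV` matters because cpufreq's `->get()` callback returns `unsigned int`: a negative errno cannot be propagated and is silently converted into a huge bogus frequency, whereas 0 is the conventional "frequency unknown" answer. A small demonstration of the conversion:

```c
/* cpufreq's ->get() returns unsigned int, so a negative errno does not
 * survive the return: the bit pattern of -ENODEV (-19) becomes a huge
 * "frequency" instead of an error. */
#define ENODEV 19

static unsigned int get_rate_broken(void)
{
	return -ENODEV;	/* wraps to 2^32 - 19 on 32-bit unsigned int */
}

static unsigned int get_rate_fixed(void)
{
	return 0;	/* "frequency could not be determined" */
}
```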
+1
drivers/cpufreq/cpufreq-dt-platdev.c
··· 175 175 { .compatible = "qcom,sm8350", }, 176 176 { .compatible = "qcom,sm8450", }, 177 177 { .compatible = "qcom,sm8550", }, 178 + { .compatible = "qcom,sm8650", }, 178 179 179 180 { .compatible = "st,stih407", }, 180 181 { .compatible = "st,stih410", },
+8 -2
drivers/cpufreq/scmi-cpufreq.c
··· 37 37 38 38 static unsigned int scmi_cpufreq_get_rate(unsigned int cpu) 39 39 { 40 - struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu); 41 - struct scmi_data *priv = policy->driver_data; 40 + struct cpufreq_policy *policy; 41 + struct scmi_data *priv; 42 42 unsigned long rate; 43 43 int ret; 44 + 45 + policy = cpufreq_cpu_get_raw(cpu); 46 + if (unlikely(!policy)) 47 + return 0; 48 + 49 + priv = policy->driver_data; 44 50 45 51 ret = perf_ops->freq_get(ph, priv->domain_id, &rate, false); 46 52 if (ret)
+10 -3
drivers/cpufreq/scpi-cpufreq.c
··· 29 29 30 30 static unsigned int scpi_cpufreq_get_rate(unsigned int cpu) 31 31 { 32 - struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu); 33 - struct scpi_data *priv = policy->driver_data; 34 - unsigned long rate = clk_get_rate(priv->clk); 32 + struct cpufreq_policy *policy; 33 + struct scpi_data *priv; 34 + unsigned long rate; 35 + 36 + policy = cpufreq_cpu_get_raw(cpu); 37 + if (unlikely(!policy)) 38 + return 0; 39 + 40 + priv = policy->driver_data; 41 + rate = clk_get_rate(priv->clk); 35 42 36 43 return rate / 1000; 37 44 }
+12 -6
drivers/cpufreq/sun50i-cpufreq-nvmem.c
··· 194 194 struct nvmem_cell *speedbin_nvmem; 195 195 const struct of_device_id *match; 196 196 struct device *cpu_dev; 197 - u32 *speedbin; 197 + void *speedbin_ptr; 198 + u32 speedbin = 0; 199 + size_t len; 198 200 int ret; 199 201 200 202 cpu_dev = get_cpu_device(0); ··· 219 217 return dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem), 220 218 "Could not get nvmem cell\n"); 221 219 222 - speedbin = nvmem_cell_read(speedbin_nvmem, NULL); 220 + speedbin_ptr = nvmem_cell_read(speedbin_nvmem, &len); 223 221 nvmem_cell_put(speedbin_nvmem); 224 - if (IS_ERR(speedbin)) 225 - return PTR_ERR(speedbin); 222 + if (IS_ERR(speedbin_ptr)) 223 + return PTR_ERR(speedbin_ptr); 226 224 227 - ret = opp_data->efuse_xlate(*speedbin); 225 + if (len <= 4) 226 + memcpy(&speedbin, speedbin_ptr, len); 227 + speedbin = le32_to_cpu(speedbin); 228 228 229 - kfree(speedbin); 229 + ret = opp_data->efuse_xlate(speedbin); 230 + 231 + kfree(speedbin_ptr); 230 232 231 233 return ret; 232 234 };
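The rework above stops dereferencing the nvmem buffer as a `u32` directly: the cell may be shorter than 4 bytes, so the value is copied into a zero-initialized `u32` bounded by the length reported by `nvmem_cell_read()`. A userspace sketch of the bounded copy (assuming a little-endian host, where the kernel's `le32_to_cpu()` would be a no-op):

```c
#include <stdint.h>
#include <string.h>

/* Bounded copy of a variable-length cell into a u32: copying at most
 * sizeof(u32) bytes into a zeroed value never reads past the end of a
 * short cell, and an oversized cell is rejected (left as 0). */
static uint32_t speedbin_from_cell(const void *cell, size_t len)
{
	uint32_t speedbin = 0;

	if (len <= sizeof(speedbin))
		memcpy(&speedbin, cell, len);

	return speedbin;
}
```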
+1 -1
drivers/cxl/core/core.h
··· 119 119 120 120 int cxl_ras_init(void); 121 121 void cxl_ras_exit(void); 122 - int cxl_gpf_port_setup(struct device *dport_dev, struct cxl_port *port); 122 + int cxl_gpf_port_setup(struct cxl_dport *dport); 123 123 int cxl_acpi_get_extended_linear_cache_size(struct resource *backing_res, 124 124 int nid, resource_size_t *size); 125 125
+3 -3
drivers/cxl/core/features.c
··· 528 528 rc = cxl_set_feature(cxl_mbox, &feat_in->uuid, 529 529 feat_in->version, feat_in->feat_data, 530 530 data_size, flags, offset, &return_code); 531 + *out_len = sizeof(*rpc_out); 531 532 if (rc) { 532 533 rpc_out->retval = return_code; 533 534 return no_free_ptr(rpc_out); 534 535 } 535 536 536 537 rpc_out->retval = CXL_MBOX_CMD_RC_SUCCESS; 537 - *out_len = sizeof(*rpc_out); 538 538 539 539 return no_free_ptr(rpc_out); 540 540 } ··· 677 677 fwctl_put(fwctl_dev); 678 678 } 679 679 680 - int devm_cxl_setup_fwctl(struct cxl_memdev *cxlmd) 680 + int devm_cxl_setup_fwctl(struct device *host, struct cxl_memdev *cxlmd) 681 681 { 682 682 struct cxl_dev_state *cxlds = cxlmd->cxlds; 683 683 struct cxl_features_state *cxlfs; ··· 700 700 if (rc) 701 701 return rc; 702 702 703 - return devm_add_action_or_reset(&cxlmd->dev, free_memdev_fwctl, 703 + return devm_add_action_or_reset(host, free_memdev_fwctl, 704 704 no_free_ptr(fwctl_dev)); 705 705 } 706 706 EXPORT_SYMBOL_NS_GPL(devm_cxl_setup_fwctl, "CXL");
+17 -13
drivers/cxl/core/pci.c
··· 1072 1072 #define GPF_TIMEOUT_BASE_MAX 2 1073 1073 #define GPF_TIMEOUT_SCALE_MAX 7 /* 10 seconds */ 1074 1074 1075 - u16 cxl_gpf_get_dvsec(struct device *dev, bool is_port) 1075 + u16 cxl_gpf_get_dvsec(struct device *dev) 1076 1076 { 1077 + struct pci_dev *pdev; 1078 + bool is_port = true; 1077 1079 u16 dvsec; 1078 1080 1079 1081 if (!dev_is_pci(dev)) 1080 1082 return 0; 1081 1083 1082 - dvsec = pci_find_dvsec_capability(to_pci_dev(dev), PCI_VENDOR_ID_CXL, 1084 + pdev = to_pci_dev(dev); 1085 + if (pci_pcie_type(pdev) == PCI_EXP_TYPE_ENDPOINT) 1086 + is_port = false; 1087 + 1088 + dvsec = pci_find_dvsec_capability(pdev, PCI_VENDOR_ID_CXL, 1083 1089 is_port ? CXL_DVSEC_PORT_GPF : CXL_DVSEC_DEVICE_GPF); 1084 1090 if (!dvsec) 1085 1091 dev_warn(dev, "%s GPF DVSEC not present\n", ··· 1134 1128 return rc; 1135 1129 } 1136 1130 1137 - int cxl_gpf_port_setup(struct device *dport_dev, struct cxl_port *port) 1131 + int cxl_gpf_port_setup(struct cxl_dport *dport) 1138 1132 { 1139 - struct pci_dev *pdev; 1140 - 1141 - if (!port) 1133 + if (!dport) 1142 1134 return -EINVAL; 1143 1135 1144 - if (!port->gpf_dvsec) { 1136 + if (!dport->gpf_dvsec) { 1137 + struct pci_dev *pdev; 1145 1138 int dvsec; 1146 1139 1147 - dvsec = cxl_gpf_get_dvsec(dport_dev, true); 1140 + dvsec = cxl_gpf_get_dvsec(dport->dport_dev); 1148 1141 if (!dvsec) 1149 1142 return -EINVAL; 1150 1143 1151 - port->gpf_dvsec = dvsec; 1144 + dport->gpf_dvsec = dvsec; 1145 + pdev = to_pci_dev(dport->dport_dev); 1146 + update_gpf_port_dvsec(pdev, dport->gpf_dvsec, 1); 1147 + update_gpf_port_dvsec(pdev, dport->gpf_dvsec, 2); 1152 1148 } 1153 - 1154 - pdev = to_pci_dev(dport_dev); 1155 - update_gpf_port_dvsec(pdev, port->gpf_dvsec, 1); 1156 - update_gpf_port_dvsec(pdev, port->gpf_dvsec, 2); 1157 1149 1158 1150 return 0; 1159 1151 }
+1 -1
drivers/cxl/core/port.c
··· 1678 1678 if (rc && rc != -EBUSY) 1679 1679 return rc; 1680 1680 1681 - cxl_gpf_port_setup(dport_dev, port); 1681 + cxl_gpf_port_setup(dport); 1682 1682 1683 1683 /* Any more ports to add between this one and the root? */ 1684 1684 if (!dev_is_cxl_root_child(&port->dev))
-4
drivers/cxl/core/regs.c
··· 581 581 resource_size_t rcrb = ri->base; 582 582 void __iomem *addr; 583 583 u32 bar0, bar1; 584 - u16 cmd; 585 584 u32 id; 586 585 587 586 if (which == CXL_RCRB_UPSTREAM) ··· 602 603 } 603 604 604 605 id = readl(addr + PCI_VENDOR_ID); 605 - cmd = readw(addr + PCI_COMMAND); 606 606 bar0 = readl(addr + PCI_BASE_ADDRESS_0); 607 607 bar1 = readl(addr + PCI_BASE_ADDRESS_1); 608 608 iounmap(addr); ··· 616 618 dev_err(dev, "Failed to access Downstream Port RCRB\n"); 617 619 return CXL_RESOURCE_NONE; 618 620 } 619 - if (!(cmd & PCI_COMMAND_MEMORY)) 620 - return CXL_RESOURCE_NONE; 621 621 /* The RCRB is a Memory Window, and the MEM_TYPE_1M bit is obsolete */ 622 622 if (bar0 & (PCI_BASE_ADDRESS_MEM_TYPE_1M | PCI_BASE_ADDRESS_SPACE_IO)) 623 623 return CXL_RESOURCE_NONE;
+3 -3
drivers/cxl/cxl.h
··· 592 592 * @cdat: Cached CDAT data 593 593 * @cdat_available: Should a CDAT attribute be available in sysfs 594 594 * @pci_latency: Upstream latency in picoseconds 595 - * @gpf_dvsec: Cached GPF port DVSEC 596 595 */ 597 596 struct cxl_port { 598 597 struct device dev; ··· 615 616 } cdat; 616 617 bool cdat_available; 617 618 long pci_latency; 618 - int gpf_dvsec; 619 619 }; 620 620 621 621 /** ··· 662 664 * @regs: Dport parsed register blocks 663 665 * @coord: access coordinates (bandwidth and latency performance attributes) 664 666 * @link_latency: calculated PCIe downstream latency 667 + * @gpf_dvsec: Cached GPF port DVSEC 665 668 */ 666 669 struct cxl_dport { 667 670 struct device *dport_dev; ··· 674 675 struct cxl_regs regs; 675 676 struct access_coordinate coord[ACCESS_COORDINATE_MAX]; 676 677 long link_latency; 678 + int gpf_dvsec; 677 679 }; 678 680 679 681 /** ··· 910 910 #define __mock static 911 911 #endif 912 912 913 - u16 cxl_gpf_get_dvsec(struct device *dev, bool is_port); 913 + u16 cxl_gpf_get_dvsec(struct device *dev); 914 914 915 915 #endif /* __CXL_H__ */
+1 -1
drivers/cxl/pci.c
··· 1018 1018 if (rc) 1019 1019 return rc; 1020 1020 1021 - rc = devm_cxl_setup_fwctl(cxlmd); 1021 + rc = devm_cxl_setup_fwctl(&pdev->dev, cxlmd); 1022 1022 if (rc) 1023 1023 dev_dbg(&pdev->dev, "No CXL FWCTL setup\n"); 1024 1024
+1 -1
drivers/cxl/pmem.c
··· 108 108 return; 109 109 } 110 110 111 - if (!cxl_gpf_get_dvsec(cxlds->dev, false)) 111 + if (!cxl_gpf_get_dvsec(cxlds->dev)) 112 112 return; 113 113 114 114 if (cxl_get_dirty_count(mds, &count)) {
+11 -3
drivers/firmware/stratix10-svc.c
··· 1224 1224 if (!svc->intel_svc_fcs) { 1225 1225 dev_err(dev, "failed to allocate %s device\n", INTEL_FCS); 1226 1226 ret = -ENOMEM; 1227 - goto err_unregister_dev; 1227 + goto err_unregister_rsu_dev; 1228 1228 } 1229 1229 1230 1230 ret = platform_device_add(svc->intel_svc_fcs); 1231 1231 if (ret) { 1232 1232 platform_device_put(svc->intel_svc_fcs); 1233 - goto err_unregister_dev; 1233 + goto err_unregister_rsu_dev; 1234 1234 } 1235 + 1236 + ret = of_platform_default_populate(dev_of_node(dev), NULL, dev); 1237 + if (ret) 1238 + goto err_unregister_fcs_dev; 1235 1239 1236 1240 dev_set_drvdata(dev, svc); 1237 1241 ··· 1243 1239 1244 1240 return 0; 1245 1241 1246 - err_unregister_dev: 1242 + err_unregister_fcs_dev: 1243 + platform_device_unregister(svc->intel_svc_fcs); 1244 + err_unregister_rsu_dev: 1247 1245 platform_device_unregister(svc->stratix10_svc_rsu); 1248 1246 err_free_kfifo: 1249 1247 kfifo_free(&controller->svc_fifo); ··· 1258 1252 { 1259 1253 struct stratix10_svc *svc = dev_get_drvdata(&pdev->dev); 1260 1254 struct stratix10_svc_controller *ctrl = platform_get_drvdata(pdev); 1255 + 1256 + of_platform_depopulate(ctrl->dev); 1261 1257 1262 1258 platform_device_unregister(svc->intel_svc_fcs); 1263 1259 platform_device_unregister(svc->stratix10_svc_rsu);
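The renamed labels make the unwind order explicit: each failure point jumps to a label that undoes every previously completed step, newest first. A compact sketch of that goto-ladder discipline, with illustrative step names rather than the real driver functions:

```c
#include <string.h>

/* Records teardown order so the unwind sequence can be inspected. */
static char order[8];
static int n;

static void undo(char step)
{
	order[n++] = step;
}

/* Sketch of the probe flow: when populating child devices (step 3)
 * fails, the "fcs" and "rsu" devices registered earlier are torn down
 * in reverse order, like the relabeled goto ladder above. */
static int fake_probe(int fail_step)
{
	/* ... rsu device registered ... */
	if (fail_step == 2)
		goto err_unregister_rsu_dev;
	/* ... fcs device registered ... */
	if (fail_step == 3)
		goto err_unregister_fcs_dev;
	return 0;

err_unregister_fcs_dev:
	undo('F');	/* platform_device_unregister(fcs) */
err_unregister_rsu_dev:
	undo('R');	/* platform_device_unregister(rsu) */
	return -1;
}
```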
+45 -7
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
··· 43 43 #include <linux/dma-fence-array.h>
44 44 #include <linux/pci-p2pdma.h>
45 45 
46 + static const struct dma_buf_attach_ops amdgpu_dma_buf_attach_ops;
47 + 
48 + /**
49 + * dma_buf_attach_adev - Helper to get adev of an attachment
50 + *
51 + * @attach: attachment
52 + *
53 + * Returns:
54 + * A struct amdgpu_device * if the attaching device is an amdgpu device or
55 + * partition, NULL otherwise.
56 + */
57 + static struct amdgpu_device *dma_buf_attach_adev(struct dma_buf_attachment *attach)
58 + {
59 + if (attach->importer_ops == &amdgpu_dma_buf_attach_ops) {
60 + struct drm_gem_object *obj = attach->importer_priv;
61 + struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
62 + 
63 + return amdgpu_ttm_adev(bo->tbo.bdev);
64 + }
65 + 
66 + return NULL;
67 + }
68 + 
46 69 /**
47 70 * amdgpu_dma_buf_attach - &dma_buf_ops.attach implementation
48 71 *
··· 77 54 static int amdgpu_dma_buf_attach(struct dma_buf *dmabuf,
78 55 struct dma_buf_attachment *attach)
79 56 {
57 + struct amdgpu_device *attach_adev = dma_buf_attach_adev(attach);
80 58 struct drm_gem_object *obj = dmabuf->priv;
81 59 struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
82 60 struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
83 61 
84 - if (pci_p2pdma_distance(adev->pdev, attach->dev, false) < 0)
62 + if (!amdgpu_dmabuf_is_xgmi_accessible(attach_adev, bo) &&
63 + pci_p2pdma_distance(adev->pdev, attach->dev, false) < 0)
85 64 attach->peer2peer = false;
86 65 
87 66 amdgpu_vm_bo_update_shared(bo);
··· 102 77 {
103 78 struct dma_buf *dmabuf = attach->dmabuf;
104 79 struct amdgpu_bo *bo = gem_to_amdgpu_bo(dmabuf->priv);
105 - u32 domains = bo->preferred_domains;
80 + u32 domains = bo->allowed_domains;
106 81 
107 82 dma_resv_assert_held(dmabuf->resv);
108 83 
109 - /*
110 - * Try pinning into VRAM to allow P2P with RDMA NICs without ODP
84 + /* Try pinning into VRAM to allow P2P with RDMA NICs without ODP
111 85 * support if all attachments can do P2P. If any attachment can't do
112 86 * P2P just pin into GTT instead.
87 + *
88 + * To avoid conflicting pinnings between GPUs and RDMA when move
89 + * notifiers are disabled, only allow pinning in VRAM when move
90 + * notifiers are enabled.
113 91 */
114 - list_for_each_entry(attach, &dmabuf->attachments, node)
115 - if (!attach->peer2peer)
116 - domains &= ~AMDGPU_GEM_DOMAIN_VRAM;
92 + if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) {
93 + domains &= ~AMDGPU_GEM_DOMAIN_VRAM;
94 + } else {
95 + list_for_each_entry(attach, &dmabuf->attachments, node)
96 + if (!attach->peer2peer)
97 + domains &= ~AMDGPU_GEM_DOMAIN_VRAM;
98 + }
117 99 
118 100 if (domains & AMDGPU_GEM_DOMAIN_VRAM)
119 101 bo->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
102 + 
103 + if (WARN_ON(!domains))
104 + return -EINVAL;
120 105 
121 106 return amdgpu_bo_pin(bo, domains);
122 107 }
··· 504 469 {
505 470 struct drm_gem_object *obj = &bo->tbo.base;
506 471 struct drm_gem_object *gobj;
472 + 
473 + if (!adev)
474 + return false;
507 475 
508 476 if (obj->import_attach) {
509 477 struct dma_buf *dma_buf = obj->import_attach->dmabuf;
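The new pinning logic can be read as a small pure function over the attachment list: without `CONFIG_DMABUF_MOVE_NOTIFY` VRAM is never used, and with it VRAM survives only if every importer is P2P-capable. A sketch (the domain bits are illustrative values, not the real `AMDGPU_GEM_DOMAIN_*` definitions):

```c
#include <stdbool.h>

/* Illustrative domain bits, not the real AMDGPU_GEM_DOMAIN_* values. */
#define DOMAIN_GTT	0x1u
#define DOMAIN_VRAM	0x2u

/* Without move notifiers VRAM is never pinned; with them, VRAM is kept
 * only if every importer on the attachment list supports P2P. */
static unsigned int pin_domains(unsigned int allowed, bool move_notify,
				const bool *peer2peer, int n_attach)
{
	int i;

	if (!move_notify) {
		allowed &= ~DOMAIN_VRAM;
	} else {
		for (i = 0; i < n_attach; i++)
			if (!peer2peer[i])
				allowed &= ~DOMAIN_VRAM;
	}

	return allowed;
}
```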
+12 -29
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 1920 1920 switch (amdgpu_ip_version(adev, DCE_HWIP, 0)) {
1921 1921 case IP_VERSION(3, 5, 0):
1922 1922 case IP_VERSION(3, 6, 0):
1923 - /*
1924 - * On DCN35 systems with Z8 enabled, it's possible for IPS2 + Z8 to
1925 - * cause a hard hang. A fix exists for newer PMFW.
1926 - *
1927 - * As a workaround, for non-fixed PMFW, force IPS1+RCG as the deepest
1928 - * IPS state in all cases, except for s0ix and all displays off (DPMS),
1929 - * where IPS2 is allowed.
1930 - *
1931 - * When checking pmfw version, use the major and minor only.
1932 - */
1933 - if ((adev->pm.fw_version & 0x00FFFF00) < 0x005D6300)
1934 - ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;
1935 - else if (amdgpu_ip_version(adev, GC_HWIP, 0) > IP_VERSION(11, 5, 0))
1936 - /*
1937 - * Other ASICs with DCN35 that have residency issues with
1938 - * IPS2 in idle.
1939 - * We want them to use IPS2 only in display off cases.
1940 - */
1941 - ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;
1942 - break;
1943 1923 case IP_VERSION(3, 5, 1):
1944 1924 ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;
1945 1925 break;
··· 3335 3355 for (k = 0; k < dc_state->stream_count; k++) {
3336 3356 bundle->stream_update.stream = dc_state->streams[k];
3337 3357 
3338 - for (m = 0; m < dc_state->stream_status->plane_count; m++) {
3358 + for (m = 0; m < dc_state->stream_status[k].plane_count; m++) {
3339 3359 bundle->surface_updates[m].surface =
3340 - dc_state->stream_status->plane_states[m];
3360 + dc_state->stream_status[k].plane_states[m];
3341 3361 bundle->surface_updates[m].surface->force_full_update =
3342 3362 true;
3343 3363 }
3344 3364 
3345 3365 update_planes_and_stream_adapter(dm->dc,
3346 3366 UPDATE_TYPE_FULL,
3347 3367 dc_state->stream_status[k].plane_count,
3348 3368 dc_state->streams[k],
3349 3369 &bundle->stream_update,
3350 3370 bundle->surface_updates);
··· 6501 6521 const struct drm_display_mode *native_mode,
6502 6522 bool scale_enabled)
6503 6523 {
6504 - if (scale_enabled) {
6505 - copy_crtc_timing_for_drm_display_mode(native_mode, drm_mode);
6506 - } else if (native_mode->clock == drm_mode->clock &&
6507 - native_mode->htotal == drm_mode->htotal &&
6508 - native_mode->vtotal == drm_mode->vtotal) {
6509 - copy_crtc_timing_for_drm_display_mode(native_mode, drm_mode);
6524 + if (scale_enabled || (
6525 + native_mode->clock == drm_mode->clock &&
6526 + native_mode->htotal == drm_mode->htotal &&
6527 + native_mode->vtotal == drm_mode->vtotal)) {
6528 + if (native_mode->crtc_clock)
6529 + copy_crtc_timing_for_drm_display_mode(native_mode, drm_mode);
6510 6530 } else {
6511 6531 /* no scaling nor amdgpu inserted, no need to patch */
6512 6532 }
··· 11021 11041 */
11022 11042 if (amdgpu_ip_version(adev, DCE_HWIP, 0) < IP_VERSION(3, 2, 0) &&
11023 11043 state->allow_modeset)
11044 + return true;
11045 + 
11046 + if (amdgpu_in_reset(adev) && state->allow_modeset)
11024 11047 return true;
11025 11048 
11026 11049 /* Exit early if we know that we're adding or removing the plane. */
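The `stream_status[k]` change in the hunk above fixes a classic pointer-vs-array slip: `dc_state->stream_status->plane_count` always reads element 0, even inside the loop over `k`. A minimal reproduction of the difference:

```c
struct stream_status { int plane_count; };

/* "status->plane_count" on a pointer into an array is element 0; in a
 * loop over k this silently reuses the first stream's data. */
static int plane_count_buggy(const struct stream_status *status, int k)
{
	(void)k;			/* k was never applied */
	return status->plane_count;	/* == status[0].plane_count */
}

static int plane_count_fixed(const struct stream_status *status, int k)
{
	return status[k].plane_count;
}
```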
+1 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
··· 918 918 { 919 919 struct drm_connector *connector = data; 920 920 struct acpi_device *acpidev = ACPI_COMPANION(connector->dev->dev); 921 - unsigned char start = block * EDID_LENGTH; 921 + unsigned short start = block * EDID_LENGTH; 922 922 struct edid *edid; 923 923 int r; 924 924
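The `unsigned char` to `unsigned short` change matters because EDID blocks are 128 bytes: for extension block 2 the byte offset `2 * EDID_LENGTH` is 256, which wraps to 0 in an 8-bit variable, so the read would silently fetch block 0 again. A demonstration of the truncation:

```c
#define EDID_LENGTH 128

/* 2 * 128 = 256 does not fit in 8 bits: an unsigned char wraps the
 * offset of extension block 2 back to 0. */
static unsigned int block_start_8bit(unsigned int block)
{
	unsigned char start = block * EDID_LENGTH;

	return start;
}

/* unsigned short comfortably holds every valid EDID block offset. */
static unsigned int block_start_16bit(unsigned int block)
{
	unsigned short start = block * EDID_LENGTH;

	return start;
}
```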
+2 -2
drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c
··· 195 195 .dcn_downspread_percent = 0.5, 196 196 .gpuvm_min_page_size_bytes = 4096, 197 197 .hostvm_min_page_size_bytes = 4096, 198 - .do_urgent_latency_adjustment = 0, 198 + .do_urgent_latency_adjustment = 1, 199 199 .urgent_latency_adjustment_fabric_clock_component_us = 0, 200 - .urgent_latency_adjustment_fabric_clock_reference_mhz = 0, 200 + .urgent_latency_adjustment_fabric_clock_reference_mhz = 3000, 201 201 }; 202 202 203 203 void dcn35_build_wm_range_table_fpu(struct clk_mgr *clk_mgr)
+2 -2
drivers/gpu/drm/exynos/exynos7_drm_decon.c
··· 43 43 unsigned int wincon_burstlen_shift; 44 44 }; 45 45 46 - static struct decon_data exynos7_decon_data = { 46 + static const struct decon_data exynos7_decon_data = { 47 47 .vidw_buf_start_base = 0x80, 48 48 .shadowcon_win_protect_shift = 10, 49 49 .wincon_burstlen_shift = 11, 50 50 }; 51 51 52 - static struct decon_data exynos7870_decon_data = { 52 + static const struct decon_data exynos7870_decon_data = { 53 53 .vidw_buf_start_base = 0x880, 54 54 .shadowcon_win_protect_shift = 8, 55 55 .wincon_burstlen_shift = 10,
+1 -2
drivers/gpu/drm/exynos/exynos_drm_drv.c
··· 355 355 { 356 356 struct drm_device *drm = platform_get_drvdata(pdev); 357 357 358 - if (drm) 359 - drm_atomic_helper_shutdown(drm); 358 + drm_atomic_helper_shutdown(drm); 360 359 } 361 360 362 361 static struct platform_driver exynos_drm_platform_driver = {
+1 -1
drivers/gpu/drm/exynos/exynos_drm_fimc.c
··· 908 908 u32 buf_num; 909 909 u32 cfg; 910 910 911 - DRM_DEV_DEBUG_KMS(ctx->dev, "buf_id[%d]enqueu[%d]\n", buf_id, enqueue); 911 + DRM_DEV_DEBUG_KMS(ctx->dev, "buf_id[%d]enqueue[%d]\n", buf_id, enqueue); 912 912 913 913 spin_lock_irqsave(&ctx->lock, flags); 914 914
+1 -1
drivers/gpu/drm/exynos/exynos_drm_fimd.c
··· 731 731 /* 732 732 * Setting dma-burst to 16Word causes permanent tearing for very small 733 733 * buffers, e.g. cursor buffer. Burst Mode switching which based on 734 - * plane size is not recommended as plane size varies alot towards the 734 + * plane size is not recommended as plane size varies a lot towards the 735 735 * end of the screen and rapid movement causes unstable DMA, but it is 736 736 * still better to change dma-burst than displaying garbage. 737 737 */
-3
drivers/gpu/drm/exynos/exynos_drm_vidi.c
··· 312 312 else 313 313 drm_edid = drm_edid_alloc(fake_edid_info, sizeof(fake_edid_info)); 314 314 315 - if (!drm_edid) 316 - return 0; 317 - 318 315 drm_edid_connector_update(connector, drm_edid); 319 316 320 317 count = drm_edid_connector_add_modes(connector);
+1 -1
drivers/gpu/drm/meson/meson_drv.c
··· 169 169 /* S805X/S805Y HDMI PLL won't lock for HDMI PHY freq > 1,65GHz */ 170 170 { 171 171 .limits = { 172 - .max_hdmi_phy_freq = 1650000, 172 + .max_hdmi_phy_freq = 1650000000, 173 173 }, 174 174 .attrs = (const struct soc_device_attribute []) { 175 175 { .soc_id = "GXL (S805*)", },
+1 -1
drivers/gpu/drm/meson/meson_drv.h
··· 37 37 }; 38 38 39 39 struct meson_drm_soc_limits { 40 - unsigned int max_hdmi_phy_freq; 40 + unsigned long long max_hdmi_phy_freq; 41 41 }; 42 42 43 43 struct meson_drm {
+16 -13
drivers/gpu/drm/meson/meson_encoder_hdmi.c
··· 70 70 {
71 71 struct meson_drm *priv = encoder_hdmi->priv;
72 72 int vic = drm_match_cea_mode(mode);
73 - unsigned int phy_freq;
74 - unsigned int vclk_freq;
75 - unsigned int venc_freq;
76 - unsigned int hdmi_freq;
73 + unsigned long long phy_freq;
74 + unsigned long long vclk_freq;
75 + unsigned long long venc_freq;
76 + unsigned long long hdmi_freq;
77 77 
78 - vclk_freq = mode->clock;
78 + vclk_freq = mode->clock * 1000;
79 79 
80 80 /* For 420, pixel clock is half unlike venc clock */
81 81 if (encoder_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24)
··· 107 107 if (mode->flags & DRM_MODE_FLAG_DBLCLK)
108 108 venc_freq /= 2;
109 109 
110 - dev_dbg(priv->dev, "vclk:%d phy=%d venc=%d hdmi=%d enci=%d\n",
110 + dev_dbg(priv->dev,
111 + "vclk:%lluHz phy=%lluHz venc=%lluHz hdmi=%lluHz enci=%d\n",
111 112 phy_freq, vclk_freq, venc_freq, hdmi_freq,
112 113 priv->venc.hdmi_use_enci);
113 114 
··· 123 122 struct meson_encoder_hdmi *encoder_hdmi = bridge_to_meson_encoder_hdmi(bridge);
124 123 struct meson_drm *priv = encoder_hdmi->priv;
125 124 bool is_hdmi2_sink = display_info->hdmi.scdc.supported;
126 - unsigned int phy_freq;
127 - unsigned int vclk_freq;
128 - unsigned int venc_freq;
129 - unsigned int hdmi_freq;
125 + unsigned long long clock = mode->clock * 1000;
126 + unsigned long long phy_freq;
127 + unsigned long long vclk_freq;
128 + unsigned long long venc_freq;
129 + unsigned long long hdmi_freq;
130 130 int vic = drm_match_cea_mode(mode);
131 131 enum drm_mode_status status;
132 132 
··· 146 144 if (status != MODE_OK)
147 145 return status;
148 146 
149 - return meson_vclk_dmt_supported_freq(priv, mode->clock);
147 + return meson_vclk_dmt_supported_freq(priv, clock);
150 148 /* Check against supported VIC modes */
151 149 } else if (!meson_venc_hdmi_supported_vic(vic))
152 150 return MODE_BAD;
153 151 
154 - vclk_freq = mode->clock;
152 + vclk_freq = clock;
155 153 
156 154 /* For 420, pixel clock is half unlike venc clock */
157 155 if (drm_mode_is_420_only(display_info, mode) ||
··· 181 179 if (mode->flags & DRM_MODE_FLAG_DBLCLK)
182 180 venc_freq /= 2;
183 181 
184 - dev_dbg(priv->dev, "%s: vclk:%d phy=%d venc=%d hdmi=%d\n",
182 + dev_dbg(priv->dev,
183 + "%s: vclk:%lluHz phy=%lluHz venc=%lluHz hdmi=%lluHz\n",
185 184 __func__, phy_freq, vclk_freq, venc_freq, hdmi_freq);
186 185 
187 186 return meson_vclk_vic_supported_freq(priv, phy_freq, vclk_freq);
+101 -94
drivers/gpu/drm/meson/meson_vclk.c
··· 110 110 #define HDMI_PLL_LOCK BIT(31)
111 111 #define HDMI_PLL_LOCK_G12A (3 << 30)
112 112 
113 - #define FREQ_1000_1001(_freq) DIV_ROUND_CLOSEST(_freq * 1000, 1001)
113 + #define PIXEL_FREQ_1000_1001(_freq) \
114 + DIV_ROUND_CLOSEST_ULL((_freq) * 1000ULL, 1001ULL)
115 + #define PHY_FREQ_1000_1001(_freq) \
116 + (PIXEL_FREQ_1000_1001(DIV_ROUND_DOWN_ULL(_freq, 10ULL)) * 10)
114 117 
115 118 /* VID PLL Dividers */
116 119 enum {
··· 363 360 };
364 361 
365 362 struct meson_vclk_params {
366 - unsigned int pll_freq;
367 - unsigned int phy_freq;
368 - unsigned int vclk_freq;
369 - unsigned int venc_freq;
370 - unsigned int pixel_freq;
363 + unsigned long long pll_freq;
364 + unsigned long long phy_freq;
365 + unsigned long long vclk_freq;
366 + unsigned long long venc_freq;
367 + unsigned long long pixel_freq;
371 368 unsigned int pll_od1;
372 369 unsigned int pll_od2;
373 370 unsigned int pll_od3;
··· 375 372 unsigned int vclk_div;
376 373 } params[] = {
377 374 [MESON_VCLK_HDMI_ENCI_54000] = {
378 - .pll_freq = 4320000,
379 - .phy_freq = 270000,
380 - .vclk_freq = 54000,
381 - .venc_freq = 54000,
382 - .pixel_freq = 54000,
375 + .pll_freq = 4320000000,
376 + .phy_freq = 270000000,
377 + .vclk_freq = 54000000,
378 + .venc_freq = 54000000,
379 + .pixel_freq = 54000000,
383 380 .pll_od1 = 4,
384 381 .pll_od2 = 4,
385 382 .pll_od3 = 1,
··· 387 384 .vclk_div = 1,
388 385 },
389 386 [MESON_VCLK_HDMI_DDR_54000] = {
390 - .pll_freq = 4320000,
391 - .phy_freq = 270000,
392 - .vclk_freq = 54000,
393 - .venc_freq = 54000,
394 - .pixel_freq = 27000,
387 + .pll_freq = 4320000000,
388 + .phy_freq = 270000000,
389 + .vclk_freq = 54000000,
390 + .venc_freq = 54000000,
391 + .pixel_freq = 27000000,
395 392 .pll_od1 = 4,
396 393 .pll_od2 = 4,
397 394 .pll_od3 = 1,
··· 399 396 .vclk_div = 1,
400 397 },
401 398 [MESON_VCLK_HDMI_DDR_148500] = {
402 - .pll_freq = 2970000,
403 - .phy_freq = 742500,
404 - .vclk_freq = 148500,
405 - .venc_freq = 148500,
406 - .pixel_freq = 74250,
399 + .pll_freq = 2970000000,
400 + .phy_freq = 742500000,
401 + .vclk_freq = 148500000,
402 + .venc_freq = 148500000,
403 + .pixel_freq = 74250000,
407 404 .pll_od1 = 4,
408 405 .pll_od2 = 1,
409 406 .pll_od3 = 1,
··· 411 408 .vclk_div = 1,
412 409 },
413 410 [MESON_VCLK_HDMI_74250] = {
414 - .pll_freq = 2970000,
415 - .phy_freq = 742500,
416 - .vclk_freq = 74250,
417 - .venc_freq = 74250,
418 - .pixel_freq = 74250,
411 + .pll_freq = 2970000000,
412 + .phy_freq = 742500000,
413 + .vclk_freq = 74250000,
414 + .venc_freq = 74250000,
415 + .pixel_freq = 74250000,
419 416 .pll_od1 = 2,
420 417 .pll_od2 = 2,
421 418 .pll_od3 = 2,
··· 423 420 .vclk_div = 1,
424 421 },
425 422 [MESON_VCLK_HDMI_148500] = {
426 - .pll_freq = 2970000,
427 - .phy_freq = 1485000,
428 - .vclk_freq = 148500,
429 - .venc_freq = 148500,
430 - .pixel_freq = 148500,
423 + .pll_freq = 2970000000,
424 + .phy_freq = 1485000000,
425 + .vclk_freq = 148500000,
426 + .venc_freq = 148500000,
427 + .pixel_freq = 148500000,
431 428 .pll_od1 = 1,
432 429 .pll_od2 = 2,
433 430 .pll_od3 = 2,
··· 435 432 .vclk_div = 1,
436 433 },
437 434 [MESON_VCLK_HDMI_297000] = {
438 - .pll_freq = 5940000,
439 - .phy_freq = 2970000,
440 - .venc_freq = 297000,
441 - .vclk_freq = 297000,
442 - .pixel_freq = 297000,
435 + .pll_freq = 5940000000,
436 + .phy_freq = 2970000000,
437 + .venc_freq = 297000000,
438 + .vclk_freq = 297000000,
439 + .pixel_freq = 297000000,
443 440 .pll_od1 = 2,
444 441 .pll_od2 = 1,
445 442 .pll_od3 = 1,
··· 447 444 .vclk_div = 2,
448 445 },
449 446 [MESON_VCLK_HDMI_594000] = {
450 - .pll_freq = 5940000,
451 - .phy_freq = 5940000,
452 - .venc_freq = 594000,
453 - .vclk_freq = 594000,
454 - .pixel_freq = 594000,
447 + .pll_freq = 5940000000,
448 + .phy_freq = 5940000000,
449 + .venc_freq = 594000000,
450 + .vclk_freq = 594000000,
451 + .pixel_freq = 594000000,
455 452 .pll_od1 = 1,
456 453 .pll_od2 = 1,
457 454 .pll_od3 = 2,
··· 459 456 .vclk_div = 1,
460 457 },
461 458 [MESON_VCLK_HDMI_594000_YUV420] = {
462 - .pll_freq = 5940000,
463 - .phy_freq = 2970000,
464 - .venc_freq = 594000,
465 - .vclk_freq = 594000,
466 - .pixel_freq = 297000,
459 + .pll_freq = 5940000000,
460 + .phy_freq = 2970000000,
461 + .venc_freq = 594000000,
462 + .vclk_freq = 594000000,
463 + .pixel_freq = 297000000,
467 464 .pll_od1 = 2,
468 465 .pll_od2 = 1,
469 466 .pll_od3 = 1,
··· 620 617 3 << 20, pll_od_to_reg(od3) << 20);
621 618 }
622 619 
623 - #define XTAL_FREQ 24000
620 + #define XTAL_FREQ (24 * 1000 * 1000)
624 621 
625 622 static unsigned int meson_hdmi_pll_get_m(struct meson_drm *priv,
626 - unsigned int pll_freq)
623 + unsigned long long pll_freq)
627 624 {
628 625 /* The GXBB PLL has a /2 pre-multiplier */
629 626 if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB))
630 - pll_freq /= 2;
627 + pll_freq = DIV_ROUND_DOWN_ULL(pll_freq, 2);
631 628 
632 - return pll_freq / XTAL_FREQ;
629 + return DIV_ROUND_DOWN_ULL(pll_freq, XTAL_FREQ);
633 630 }
634 631 
635 632 #define HDMI_FRAC_MAX_GXBB 4096
··· 638 635 
639 636 static unsigned int meson_hdmi_pll_get_frac(struct meson_drm *priv,
640 637 unsigned int m,
641 - unsigned int pll_freq)
638 + unsigned long long pll_freq)
642 639 {
643 - unsigned int parent_freq = XTAL_FREQ;
640 + unsigned long long parent_freq = XTAL_FREQ;
644 641 unsigned int frac_max = HDMI_FRAC_MAX_GXL;
645 642 unsigned int frac_m;
646 643 unsigned int frac;
644 + u32 remainder;
647 645 
648 646 /* The GXBB PLL has a /2 pre-multiplier and a larger FRAC width */
649 647 if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) {
··· 656 652 frac_max = HDMI_FRAC_MAX_G12A;
657 653 
658 654 /* We can have a perfect match !*/
659 - if (pll_freq / m == parent_freq &&
660 - pll_freq % m == 0)
655 + if (div_u64_rem(pll_freq, m, &remainder) == parent_freq &&
656 + remainder == 0)
661 657 return 0;
662 658 
663 - frac = div_u64((u64)pll_freq * (u64)frac_max, parent_freq);
659 + frac = mul_u64_u64_div_u64(pll_freq, frac_max, parent_freq);
664 660 frac_m = m * frac_max;
665 661 if (frac_m > frac)
666 662 return frac_max;
··· 670 666 }
671 667 
672 668 static bool meson_hdmi_pll_validate_params(struct meson_drm *priv,
673 - unsigned int m,
669 + unsigned long long m,
674 670 unsigned int frac)
675 671 {
676 672 if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) {
··· 698 694 }
699 695 
700 696 static bool meson_hdmi_pll_find_params(struct meson_drm *priv,
701 - unsigned int freq,
697 + unsigned long long freq,
702 698 unsigned int *m,
703 699 unsigned int *frac,
704 700 unsigned int *od)
··· 710 706 continue;
711 707 *frac = meson_hdmi_pll_get_frac(priv, *m, freq * *od);
712 708 
713 - DRM_DEBUG_DRIVER("PLL params for %dkHz: m=%x frac=%x od=%d\n",
709 + DRM_DEBUG_DRIVER("PLL params for %lluHz: m=%x frac=%x od=%d\n",
714 710 freq, *m, *frac, *od);
715 711 
716 712 if (meson_hdmi_pll_validate_params(priv, *m, *frac))
··· 722 718 
723 719 /* pll_freq is the frequency after the OD dividers */
724 720 enum drm_mode_status
725 - meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned int freq)
721 + meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned long long freq)
726 722 {
727 723 unsigned int od, m, frac;
728 724 
··· 745 741 
746 742 /* pll_freq is the frequency after the OD dividers */
747 743 static void meson_hdmi_pll_generic_set(struct meson_drm *priv,
748 - unsigned int pll_freq)
744 + unsigned long long pll_freq)
749 745 {
750 746 unsigned int od, m, frac, od1, od2, od3;
751 747 
··· 760 756 od1 = od / od2;
761 757 }
762 758 
763 - DRM_DEBUG_DRIVER("PLL params for %dkHz: m=%x frac=%x od=%d/%d/%d\n",
759 + DRM_DEBUG_DRIVER("PLL params for %lluHz: m=%x frac=%x od=%d/%d/%d\n",
764 760 pll_freq, m, frac, od1, od2, od3);
765 761 
766 762 meson_hdmi_pll_set_params(priv, m, frac, od1, od2, od3);
··· 768 764 return;
769 765 }
770 766 
771 - DRM_ERROR("Fatal, unable to find parameters for PLL freq %d\n",
767 + DRM_ERROR("Fatal, unable to find parameters for PLL freq %lluHz\n",
772 768 pll_freq);
773 769 }
774 770 
775 771 enum drm_mode_status
776 - meson_vclk_vic_supported_freq(struct meson_drm *priv, unsigned int phy_freq, 777 - unsigned int vclk_freq) 772 + meson_vclk_vic_supported_freq(struct meson_drm *priv, 773 + unsigned long long phy_freq, 774 + unsigned long long vclk_freq) 778 775 { 779 776 int i; 780 777 781 - DRM_DEBUG_DRIVER("phy_freq = %d vclk_freq = %d\n", 778 + DRM_DEBUG_DRIVER("phy_freq = %lluHz vclk_freq = %lluHz\n", 782 779 phy_freq, vclk_freq); 783 780 784 781 /* Check against soc revision/package limits */ ··· 790 785 } 791 786 792 787 for (i = 0 ; params[i].pixel_freq ; ++i) { 793 - DRM_DEBUG_DRIVER("i = %d pixel_freq = %d alt = %d\n", 788 + DRM_DEBUG_DRIVER("i = %d pixel_freq = %lluHz alt = %lluHz\n", 794 789 i, params[i].pixel_freq, 795 - FREQ_1000_1001(params[i].pixel_freq)); 796 - DRM_DEBUG_DRIVER("i = %d phy_freq = %d alt = %d\n", 790 + PIXEL_FREQ_1000_1001(params[i].pixel_freq)); 791 + DRM_DEBUG_DRIVER("i = %d phy_freq = %lluHz alt = %lluHz\n", 797 792 i, params[i].phy_freq, 798 - FREQ_1000_1001(params[i].phy_freq/1000)*1000); 793 + PHY_FREQ_1000_1001(params[i].phy_freq)); 799 794 /* Match strict frequency */ 800 795 if (phy_freq == params[i].phy_freq && 801 796 vclk_freq == params[i].vclk_freq) 802 797 return MODE_OK; 803 798 /* Match 1000/1001 variant */ 804 - if (phy_freq == (FREQ_1000_1001(params[i].phy_freq/1000)*1000) && 805 - vclk_freq == FREQ_1000_1001(params[i].vclk_freq)) 799 + if (phy_freq == PHY_FREQ_1000_1001(params[i].phy_freq) && 800 + vclk_freq == PIXEL_FREQ_1000_1001(params[i].vclk_freq)) 806 801 return MODE_OK; 807 802 } 808 803 ··· 810 805 } 811 806 EXPORT_SYMBOL_GPL(meson_vclk_vic_supported_freq); 812 807 813 - static void meson_vclk_set(struct meson_drm *priv, unsigned int pll_base_freq, 814 - unsigned int od1, unsigned int od2, unsigned int od3, 808 + static void meson_vclk_set(struct meson_drm *priv, 809 + unsigned long long pll_base_freq, unsigned int od1, 810 + unsigned int od2, unsigned int od3, 815 811 unsigned int vid_pll_div, unsigned int vclk_div, 
816 812 unsigned int hdmi_tx_div, unsigned int venc_div, 817 813 bool hdmi_use_enci, bool vic_alternate_clock) ··· 832 826 meson_hdmi_pll_generic_set(priv, pll_base_freq); 833 827 } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) { 834 828 switch (pll_base_freq) { 835 - case 2970000: 829 + case 2970000000: 836 830 m = 0x3d; 837 831 frac = vic_alternate_clock ? 0xd02 : 0xe00; 838 832 break; 839 - case 4320000: 833 + case 4320000000: 840 834 m = vic_alternate_clock ? 0x59 : 0x5a; 841 835 frac = vic_alternate_clock ? 0xe8f : 0; 842 836 break; 843 - case 5940000: 837 + case 5940000000: 844 838 m = 0x7b; 845 839 frac = vic_alternate_clock ? 0xa05 : 0xc00; 846 840 break; ··· 850 844 } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXM) || 851 845 meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXL)) { 852 846 switch (pll_base_freq) { 853 - case 2970000: 847 + case 2970000000: 854 848 m = 0x7b; 855 849 frac = vic_alternate_clock ? 0x281 : 0x300; 856 850 break; 857 - case 4320000: 851 + case 4320000000: 858 852 m = vic_alternate_clock ? 0xb3 : 0xb4; 859 853 frac = vic_alternate_clock ? 0x347 : 0; 860 854 break; 861 - case 5940000: 855 + case 5940000000: 862 856 m = 0xf7; 863 857 frac = vic_alternate_clock ? 0x102 : 0x200; 864 858 break; ··· 867 861 meson_hdmi_pll_set_params(priv, m, frac, od1, od2, od3); 868 862 } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) { 869 863 switch (pll_base_freq) { 870 - case 2970000: 864 + case 2970000000: 871 865 m = 0x7b; 872 866 frac = vic_alternate_clock ? 0x140b4 : 0x18000; 873 867 break; 874 - case 4320000: 868 + case 4320000000: 875 869 m = vic_alternate_clock ? 0xb3 : 0xb4; 876 870 frac = vic_alternate_clock ? 0x1a3ee : 0; 877 871 break; 878 - case 5940000: 872 + case 5940000000: 879 873 m = 0xf7; 880 874 frac = vic_alternate_clock ? 
0x8148 : 0x10000; 881 875 break; ··· 1031 1025 } 1032 1026 1033 1027 void meson_vclk_setup(struct meson_drm *priv, unsigned int target, 1034 - unsigned int phy_freq, unsigned int vclk_freq, 1035 - unsigned int venc_freq, unsigned int dac_freq, 1028 + unsigned long long phy_freq, unsigned long long vclk_freq, 1029 + unsigned long long venc_freq, unsigned long long dac_freq, 1036 1030 bool hdmi_use_enci) 1037 1031 { 1038 1032 bool vic_alternate_clock = false; 1039 - unsigned int freq; 1040 - unsigned int hdmi_tx_div; 1041 - unsigned int venc_div; 1033 + unsigned long long freq; 1034 + unsigned long long hdmi_tx_div; 1035 + unsigned long long venc_div; 1042 1036 1043 1037 if (target == MESON_VCLK_TARGET_CVBS) { 1044 1038 meson_venci_cvbs_clock_config(priv); ··· 1058 1052 return; 1059 1053 } 1060 1054 1061 - hdmi_tx_div = vclk_freq / dac_freq; 1055 + hdmi_tx_div = DIV_ROUND_DOWN_ULL(vclk_freq, dac_freq); 1062 1056 1063 1057 if (hdmi_tx_div == 0) { 1064 - pr_err("Fatal Error, invalid HDMI-TX freq %d\n", 1058 + pr_err("Fatal Error, invalid HDMI-TX freq %lluHz\n", 1065 1059 dac_freq); 1066 1060 return; 1067 1061 } 1068 1062 1069 - venc_div = vclk_freq / venc_freq; 1063 + venc_div = DIV_ROUND_DOWN_ULL(vclk_freq, venc_freq); 1070 1064 1071 1065 if (venc_div == 0) { 1072 - pr_err("Fatal Error, invalid HDMI venc freq %d\n", 1066 + pr_err("Fatal Error, invalid HDMI venc freq %lluHz\n", 1073 1067 venc_freq); 1074 1068 return; 1075 1069 } 1076 1070 1077 1071 for (freq = 0 ; params[freq].pixel_freq ; ++freq) { 1078 1072 if ((phy_freq == params[freq].phy_freq || 1079 - phy_freq == FREQ_1000_1001(params[freq].phy_freq/1000)*1000) && 1073 + phy_freq == PHY_FREQ_1000_1001(params[freq].phy_freq)) && 1080 1074 (vclk_freq == params[freq].vclk_freq || 1081 - vclk_freq == FREQ_1000_1001(params[freq].vclk_freq))) { 1075 + vclk_freq == PIXEL_FREQ_1000_1001(params[freq].vclk_freq))) { 1082 1076 if (vclk_freq != params[freq].vclk_freq) 1083 1077 vic_alternate_clock = true; 1084 1078 else ··· 
1104 1098 } 1105 1099 1106 1100 if (!params[freq].pixel_freq) { 1107 - pr_err("Fatal Error, invalid HDMI vclk freq %d\n", vclk_freq); 1101 + pr_err("Fatal Error, invalid HDMI vclk freq %lluHz\n", 1102 + vclk_freq); 1108 1103 return; 1109 1104 } 1110 1105
+7 -6
drivers/gpu/drm/meson/meson_vclk.h
··· 20 20 }; 21 21 22 22 /* 27MHz is the CVBS Pixel Clock */ 23 - #define MESON_VCLK_CVBS 27000 23 + #define MESON_VCLK_CVBS (27 * 1000 * 1000) 24 24 25 25 enum drm_mode_status 26 - meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned int freq); 26 + meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned long long freq); 27 27 enum drm_mode_status 28 - meson_vclk_vic_supported_freq(struct meson_drm *priv, unsigned int phy_freq, 29 - unsigned int vclk_freq); 28 + meson_vclk_vic_supported_freq(struct meson_drm *priv, 29 + unsigned long long phy_freq, 30 + unsigned long long vclk_freq); 30 31 31 32 void meson_vclk_setup(struct meson_drm *priv, unsigned int target, 32 - unsigned int phy_freq, unsigned int vclk_freq, 33 - unsigned int venc_freq, unsigned int dac_freq, 33 + unsigned long long phy_freq, unsigned long long vclk_freq, 34 + unsigned long long venc_freq, unsigned long long dac_freq, 34 35 bool hdmi_use_enci); 35 36 36 37 #endif /* __MESON_VCLK_H */
+2 -2
drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
··· 129 129 { 130 130 struct jadard *jadard = panel_to_jadard(panel); 131 131 132 - gpiod_set_value(jadard->reset, 1); 132 + gpiod_set_value(jadard->reset, 0); 133 133 msleep(120); 134 134 135 135 if (jadard->desc->reset_before_power_off_vcioo) { 136 - gpiod_set_value(jadard->reset, 0); 136 + gpiod_set_value(jadard->reset, 1); 137 137 138 138 usleep_range(1000, 2000); 139 139 }
+6 -4
drivers/hv/hv_common.c
··· 307 307 308 308 local_irq_save(flags); 309 309 output = *this_cpu_ptr(hyperv_pcpu_input_arg); 310 - status = hv_do_hypercall(HVCALL_GET_PARTITION_ID, NULL, &output); 310 + status = hv_do_hypercall(HVCALL_GET_PARTITION_ID, NULL, output); 311 311 pt_id = output->partition_id; 312 312 local_irq_restore(flags); 313 313 ··· 566 566 * originally allocated memory is reused in hv_common_cpu_init(). 567 567 */ 568 568 569 - synic_eventring_tail = this_cpu_ptr(hv_synic_eventring_tail); 570 - kfree(*synic_eventring_tail); 571 - *synic_eventring_tail = NULL; 569 + if (hv_root_partition()) { 570 + synic_eventring_tail = this_cpu_ptr(hv_synic_eventring_tail); 571 + kfree(*synic_eventring_tail); 572 + *synic_eventring_tail = NULL; 573 + } 572 574 573 575 return 0; 574 576 }
+1
drivers/hwtracing/intel_th/Kconfig
··· 60 60 61 61 config INTEL_TH_MSU 62 62 tristate "Intel(R) Trace Hub Memory Storage Unit" 63 + depends on MMU 63 64 help 64 65 Memory Storage Unit (MSU) trace output device enables 65 66 storing STP traces to system memory. It supports single
+7 -24
drivers/hwtracing/intel_th/msu.c
··· 19 19 #include <linux/io.h> 20 20 #include <linux/workqueue.h> 21 21 #include <linux/dma-mapping.h> 22 + #include <linux/pfn_t.h> 22 23 23 24 #ifdef CONFIG_X86 24 25 #include <asm/set_memory.h> ··· 977 976 for (off = 0; off < msc->nr_pages << PAGE_SHIFT; off += PAGE_SIZE) { 978 977 struct page *page = virt_to_page(msc->base + off); 979 978 980 - page->mapping = NULL; 981 979 __free_page(page); 982 980 } 983 981 ··· 1158 1158 int i; 1159 1159 1160 1160 for_each_sg(win->sgt->sgl, sg, win->nr_segs, i) { 1161 - struct page *page = msc_sg_page(sg); 1162 - 1163 - page->mapping = NULL; 1164 1161 dma_free_coherent(msc_dev(win->msc)->parent->parent, PAGE_SIZE, 1165 1162 sg_virt(sg), sg_dma_address(sg)); 1166 1163 } ··· 1598 1601 { 1599 1602 struct msc_iter *iter = vma->vm_file->private_data; 1600 1603 struct msc *msc = iter->msc; 1601 - unsigned long pg; 1602 1604 1603 1605 if (!atomic_dec_and_mutex_lock(&msc->mmap_count, &msc->buf_mutex)) 1604 1606 return; 1605 - 1606 - /* drop page _refcounts */ 1607 - for (pg = 0; pg < msc->nr_pages; pg++) { 1608 - struct page *page = msc_buffer_get_page(msc, pg); 1609 - 1610 - if (WARN_ON_ONCE(!page)) 1611 - continue; 1612 - 1613 - if (page->mapping) 1614 - page->mapping = NULL; 1615 - } 1616 1607 1617 1608 /* last mapping -- drop user_count */ 1618 1609 atomic_dec(&msc->user_count); ··· 1611 1626 { 1612 1627 struct msc_iter *iter = vmf->vma->vm_file->private_data; 1613 1628 struct msc *msc = iter->msc; 1629 + struct page *page; 1614 1630 1615 - vmf->page = msc_buffer_get_page(msc, vmf->pgoff); 1616 - if (!vmf->page) 1631 + page = msc_buffer_get_page(msc, vmf->pgoff); 1632 + if (!page) 1617 1633 return VM_FAULT_SIGBUS; 1618 1634 1619 - get_page(vmf->page); 1620 - vmf->page->mapping = vmf->vma->vm_file->f_mapping; 1621 - vmf->page->index = vmf->pgoff; 1622 - 1623 - return 0; 1635 + get_page(page); 1636 + return vmf_insert_mixed(vmf->vma, vmf->address, page_to_pfn_t(page)); 1624 1637 } 1625 1638 1626 1639 static const struct 
vm_operations_struct msc_mmap_ops = { ··· 1659 1676 atomic_dec(&msc->user_count); 1660 1677 1661 1678 vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 1662 - vm_flags_set(vma, VM_DONTEXPAND | VM_DONTCOPY); 1679 + vm_flags_set(vma, VM_DONTEXPAND | VM_DONTCOPY | VM_MIXEDMAP); 1663 1680 vma->vm_ops = &msc_mmap_ops; 1664 1681 return ret; 1665 1682 }
+4 -11
drivers/iommu/amd/iommu.c
··· 3869 3869 struct irq_2_irte *irte_info = &ir_data->irq_2_irte; 3870 3870 struct iommu_dev_data *dev_data; 3871 3871 3872 + if (WARN_ON_ONCE(!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir))) 3873 + return -EINVAL; 3874 + 3872 3875 if (ir_data->iommu == NULL) 3873 3876 return -EINVAL; 3874 3877 ··· 3882 3879 * we should not modify the IRTE 3883 3880 */ 3884 3881 if (!dev_data || !dev_data->use_vapic) 3885 - return 0; 3882 + return -EINVAL; 3886 3883 3887 3884 ir_data->cfg = irqd_cfg(data); 3888 3885 pi_data->ir_data = ir_data; 3889 - 3890 - /* Note: 3891 - * SVM tries to set up for VAPIC mode, but we are in 3892 - * legacy mode. So, we force legacy mode instead. 3893 - */ 3894 - if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir)) { 3895 - pr_debug("%s: Fall back to using intr legacy remap\n", 3896 - __func__); 3897 - pi_data->is_guest_mode = false; 3898 - } 3899 3886 3900 3887 pi_data->prev_ga_tag = ir_data->cached_ga_tag; 3901 3888 if (pi_data->is_guest_mode) {
+1 -1
drivers/irqchip/irq-gic-v2m.c
··· 421 421 #ifdef CONFIG_ACPI 422 422 static int acpi_num_msi; 423 423 424 - static __init struct fwnode_handle *gicv2m_get_fwnode(struct device *dev) 424 + static struct fwnode_handle *gicv2m_get_fwnode(struct device *dev) 425 425 { 426 426 struct v2m_data *data; 427 427
+1 -1
drivers/mcb/mcb-parse.c
··· 96 96 97 97 ret = mcb_device_register(bus, mdev); 98 98 if (ret < 0) 99 - goto err; 99 + return ret; 100 100 101 101 return 0; 102 102
+8 -1
drivers/md/dm-bufio.c
··· 68 68 #define LIST_DIRTY 1 69 69 #define LIST_SIZE 2 70 70 71 + #define SCAN_RESCHED_CYCLE 16 72 + 71 73 /*--------------------------------------------------------------*/ 72 74 73 75 /* ··· 2426 2424 2427 2425 atomic_long_dec(&c->need_shrink); 2428 2426 freed++; 2429 - cond_resched(); 2427 + 2428 + if (unlikely(freed % SCAN_RESCHED_CYCLE == 0)) { 2429 + dm_bufio_unlock(c); 2430 + cond_resched(); 2431 + dm_bufio_lock(c); 2432 + } 2430 2433 } 2431 2434 } 2432 2435 }
+1 -1
drivers/md/dm-integrity.c
··· 5164 5164 BUG_ON(!RB_EMPTY_ROOT(&ic->in_progress)); 5165 5165 BUG_ON(!list_empty(&ic->wait_list)); 5166 5166 5167 - if (ic->mode == 'B') 5167 + if (ic->mode == 'B' && ic->bitmap_flush_work.work.func) 5168 5168 cancel_delayed_work_sync(&ic->bitmap_flush_work); 5169 5169 if (ic->metadata_wq) 5170 5170 destroy_workqueue(ic->metadata_wq);
+3 -5
drivers/md/dm-table.c
··· 523 523 gfp = GFP_NOIO; 524 524 } 525 525 argv = kmalloc_array(new_size, sizeof(*argv), gfp); 526 - if (argv && old_argv) { 527 - memcpy(argv, old_argv, *size * sizeof(*argv)); 526 + if (argv) { 528 527 *size = new_size; 528 + if (old_argv) 529 + memcpy(argv, old_argv, *size * sizeof(*argv)); 529 530 } 530 531 531 532 kfree(old_argv); ··· 1050 1049 unsigned int min_pool_size = 0, pool_size; 1051 1050 struct dm_md_mempools *pools; 1052 1051 unsigned int bioset_flags = 0; 1053 - bool mempool_needs_integrity = t->integrity_supported; 1054 1052 1055 1053 if (unlikely(type == DM_TYPE_NONE)) { 1056 1054 DMERR("no table type is set, can't allocate mempools"); ··· 1074 1074 1075 1075 per_io_data_size = max(per_io_data_size, ti->per_io_data_size); 1076 1076 min_pool_size = max(min_pool_size, ti->num_flush_bios); 1077 - 1078 - mempool_needs_integrity |= ti->mempool_needs_integrity; 1079 1077 } 1080 1078 pool_size = max(dm_get_reserved_bio_based_ios(), min_pool_size); 1081 1079 front_pad = roundup(per_io_data_size,
+6 -2
drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_gpio.c
··· 37 37 struct pci1xxxx_gpio { 38 38 struct auxiliary_device *aux_dev; 39 39 void __iomem *reg_base; 40 + raw_spinlock_t wa_lock; 40 41 struct gpio_chip gpio; 41 42 spinlock_t lock; 42 43 int irq_base; ··· 168 167 unsigned long flags; 169 168 170 169 spin_lock_irqsave(&priv->lock, flags); 171 - pci1xxx_assign_bit(priv->reg_base, INTR_STAT_OFFSET(gpio), (gpio % 32), true); 170 + writel(BIT(gpio % 32), priv->reg_base + INTR_STAT_OFFSET(gpio)); 172 171 spin_unlock_irqrestore(&priv->lock, flags); 173 172 } 174 173 ··· 258 257 struct pci1xxxx_gpio *priv = dev_id; 259 258 struct gpio_chip *gc = &priv->gpio; 260 259 unsigned long int_status = 0; 260 + unsigned long wa_flags; 261 261 unsigned long flags; 262 262 u8 pincount; 263 263 int bit; ··· 282 280 writel(BIT(bit), priv->reg_base + INTR_STATUS_OFFSET(gpiobank)); 283 281 spin_unlock_irqrestore(&priv->lock, flags); 284 282 irq = irq_find_mapping(gc->irq.domain, (bit + (gpiobank * 32))); 285 - handle_nested_irq(irq); 283 + raw_spin_lock_irqsave(&priv->wa_lock, wa_flags); 284 + generic_handle_irq(irq); 285 + raw_spin_unlock_irqrestore(&priv->wa_lock, wa_flags); 286 286 } 287 287 } 288 288 spin_lock_irqsave(&priv->lock, flags);
+1
drivers/misc/mei/hw-me-regs.h
··· 117 117 118 118 #define MEI_DEV_ID_LNL_M 0xA870 /* Lunar Lake Point M */ 119 119 120 + #define MEI_DEV_ID_PTL_H 0xE370 /* Panther Lake H */ 120 121 #define MEI_DEV_ID_PTL_P 0xE470 /* Panther Lake P */ 121 122 122 123 /*
+1
drivers/misc/mei/pci-me.c
··· 124 124 125 125 {MEI_PCI_DEVICE(MEI_DEV_ID_LNL_M, MEI_ME_PCH15_CFG)}, 126 126 127 + {MEI_PCI_DEVICE(MEI_DEV_ID_PTL_H, MEI_ME_PCH15_CFG)}, 127 128 {MEI_PCI_DEVICE(MEI_DEV_ID_PTL_P, MEI_ME_PCH15_CFG)}, 128 129 129 130 /* required last entry */
+22 -18
drivers/misc/mei/vsc-tp.c
··· 36 36 #define VSC_TP_XFER_TIMEOUT_BYTES 700 37 37 #define VSC_TP_PACKET_PADDING_SIZE 1 38 38 #define VSC_TP_PACKET_SIZE(pkt) \ 39 - (sizeof(struct vsc_tp_packet) + le16_to_cpu((pkt)->len) + VSC_TP_CRC_SIZE) 39 + (sizeof(struct vsc_tp_packet_hdr) + le16_to_cpu((pkt)->hdr.len) + VSC_TP_CRC_SIZE) 40 40 #define VSC_TP_MAX_PACKET_SIZE \ 41 - (sizeof(struct vsc_tp_packet) + VSC_TP_MAX_MSG_SIZE + VSC_TP_CRC_SIZE) 41 + (sizeof(struct vsc_tp_packet_hdr) + VSC_TP_MAX_MSG_SIZE + VSC_TP_CRC_SIZE) 42 42 #define VSC_TP_MAX_XFER_SIZE \ 43 43 (VSC_TP_MAX_PACKET_SIZE + VSC_TP_XFER_TIMEOUT_BYTES) 44 44 #define VSC_TP_NEXT_XFER_LEN(len, offset) \ 45 - (len + sizeof(struct vsc_tp_packet) + VSC_TP_CRC_SIZE - offset + VSC_TP_PACKET_PADDING_SIZE) 45 + (len + sizeof(struct vsc_tp_packet_hdr) + VSC_TP_CRC_SIZE - offset + VSC_TP_PACKET_PADDING_SIZE) 46 46 47 - struct vsc_tp_packet { 47 + struct vsc_tp_packet_hdr { 48 48 __u8 sync; 49 49 __u8 cmd; 50 50 __le16 len; 51 51 __le32 seq; 52 - __u8 buf[] __counted_by(len); 52 + }; 53 + 54 + struct vsc_tp_packet { 55 + struct vsc_tp_packet_hdr hdr; 56 + __u8 buf[VSC_TP_MAX_XFER_SIZE - sizeof(struct vsc_tp_packet_hdr)]; 53 57 }; 54 58 55 59 struct vsc_tp { ··· 71 67 u32 seq; 72 68 73 69 /* command buffer */ 74 - void *tx_buf; 75 - void *rx_buf; 70 + struct vsc_tp_packet *tx_buf; 71 + struct vsc_tp_packet *rx_buf; 76 72 77 73 atomic_t assert_cnt; 78 74 wait_queue_head_t xfer_wait; ··· 162 158 static int vsc_tp_xfer_helper(struct vsc_tp *tp, struct vsc_tp_packet *pkt, 163 159 void *ibuf, u16 ilen) 164 160 { 165 - int ret, offset = 0, cpy_len, src_len, dst_len = sizeof(struct vsc_tp_packet); 161 + int ret, offset = 0, cpy_len, src_len, dst_len = sizeof(struct vsc_tp_packet_hdr); 166 162 int next_xfer_len = VSC_TP_PACKET_SIZE(pkt) + VSC_TP_XFER_TIMEOUT_BYTES; 167 - u8 *src, *crc_src, *rx_buf = tp->rx_buf; 163 + u8 *src, *crc_src, *rx_buf = (u8 *)tp->rx_buf; 168 164 int count_down = VSC_TP_MAX_XFER_COUNT; 169 165 u32 recv_crc = 0, crc = ~0; 170 - 
struct vsc_tp_packet ack; 166 + struct vsc_tp_packet_hdr ack; 171 167 u8 *dst = (u8 *)&ack; 172 168 bool synced = false; 173 169 ··· 284 280 285 281 guard(mutex)(&tp->mutex); 286 282 287 - pkt->sync = VSC_TP_PACKET_SYNC; 288 - pkt->cmd = cmd; 289 - pkt->len = cpu_to_le16(olen); 290 - pkt->seq = cpu_to_le32(++tp->seq); 283 + pkt->hdr.sync = VSC_TP_PACKET_SYNC; 284 + pkt->hdr.cmd = cmd; 285 + pkt->hdr.len = cpu_to_le16(olen); 286 + pkt->hdr.seq = cpu_to_le32(++tp->seq); 291 287 memcpy(pkt->buf, obuf, olen); 292 288 293 289 crc = ~crc32(~0, (u8 *)pkt, sizeof(pkt) + olen); ··· 324 320 guard(mutex)(&tp->mutex); 325 321 326 322 /* rom xfer is big endian */ 327 - cpu_to_be32_array(tp->tx_buf, obuf, words); 323 + cpu_to_be32_array((u32 *)tp->tx_buf, obuf, words); 328 324 329 325 ret = read_poll_timeout(gpiod_get_value_cansleep, ret, 330 326 !ret, VSC_TP_ROM_XFER_POLL_DELAY_US, ··· 340 336 return ret; 341 337 342 338 if (ibuf) 343 - be32_to_cpu_array(ibuf, tp->rx_buf, words); 339 + be32_to_cpu_array(ibuf, (u32 *)tp->rx_buf, words); 344 340 345 341 return ret; 346 342 } ··· 494 490 if (!tp) 495 491 return -ENOMEM; 496 492 497 - tp->tx_buf = devm_kzalloc(dev, VSC_TP_MAX_XFER_SIZE, GFP_KERNEL); 493 + tp->tx_buf = devm_kzalloc(dev, sizeof(*tp->tx_buf), GFP_KERNEL); 498 494 if (!tp->tx_buf) 499 495 return -ENOMEM; 500 496 501 - tp->rx_buf = devm_kzalloc(dev, VSC_TP_MAX_XFER_SIZE, GFP_KERNEL); 497 + tp->rx_buf = devm_kzalloc(dev, sizeof(*tp->rx_buf), GFP_KERNEL); 502 498 if (!tp->rx_buf) 503 499 return -ENOMEM; 504 500
+1 -20
drivers/misc/pci_endpoint_test.c
··· 122 122 struct pci_endpoint_test_data { 123 123 enum pci_barno test_reg_bar; 124 124 size_t alignment; 125 - int irq_type; 126 125 }; 127 126 128 127 static inline u32 pci_endpoint_test_readl(struct pci_endpoint_test *test, ··· 947 948 test_reg_bar = data->test_reg_bar; 948 949 test->test_reg_bar = test_reg_bar; 949 950 test->alignment = data->alignment; 950 - test->irq_type = data->irq_type; 951 951 } 952 952 953 953 init_completion(&test->irq_raised); ··· 967 969 } 968 970 969 971 pci_set_master(pdev); 970 - 971 - ret = pci_endpoint_test_alloc_irq_vectors(test, test->irq_type); 972 - if (ret) 973 - goto err_disable_irq; 974 972 975 973 for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { 976 974 if (pci_resource_flags(pdev, bar) & IORESOURCE_MEM) { ··· 1003 1009 goto err_ida_remove; 1004 1010 } 1005 1011 1006 - ret = pci_endpoint_test_request_irq(test); 1007 - if (ret) 1008 - goto err_kfree_test_name; 1009 - 1010 1012 pci_endpoint_test_get_capabilities(test); 1011 1013 1012 1014 misc_device = &test->miscdev; ··· 1010 1020 misc_device->name = kstrdup(name, GFP_KERNEL); 1011 1021 if (!misc_device->name) { 1012 1022 ret = -ENOMEM; 1013 - goto err_release_irq; 1023 + goto err_kfree_test_name; 1014 1024 } 1015 1025 misc_device->parent = &pdev->dev; 1016 1026 misc_device->fops = &pci_endpoint_test_fops; ··· 1026 1036 err_kfree_name: 1027 1037 kfree(misc_device->name); 1028 1038 1029 - err_release_irq: 1030 - pci_endpoint_test_release_irq(test); 1031 - 1032 1039 err_kfree_test_name: 1033 1040 kfree(test->name); 1034 1041 ··· 1038 1051 pci_iounmap(pdev, test->bar[bar]); 1039 1052 } 1040 1053 1041 - err_disable_irq: 1042 - pci_endpoint_test_free_irq_vectors(test); 1043 1054 pci_release_regions(pdev); 1044 1055 1045 1056 err_disable_pdev: ··· 1077 1092 static const struct pci_endpoint_test_data default_data = { 1078 1093 .test_reg_bar = BAR_0, 1079 1094 .alignment = SZ_4K, 1080 - .irq_type = PCITEST_IRQ_TYPE_MSI, 1081 1095 }; 1082 1096 1083 1097 static const struct 
pci_endpoint_test_data am654_data = { 1084 1098 .test_reg_bar = BAR_2, 1085 1099 .alignment = SZ_64K, 1086 - .irq_type = PCITEST_IRQ_TYPE_MSI, 1087 1100 }; 1088 1101 1089 1102 static const struct pci_endpoint_test_data j721e_data = { 1090 1103 .alignment = 256, 1091 - .irq_type = PCITEST_IRQ_TYPE_MSI, 1092 1104 }; 1093 1105 1094 1106 static const struct pci_endpoint_test_data rk3588_data = { 1095 1107 .alignment = SZ_64K, 1096 - .irq_type = PCITEST_IRQ_TYPE_MSI, 1097 1108 }; 1098 1109 1099 1110 /*
+1 -1
drivers/mmc/host/Kconfig
··· 691 691 config MMC_SDHI 692 692 tristate "Renesas SDHI SD/SDIO controller support" 693 693 depends on SUPERH || ARCH_RENESAS || COMPILE_TEST 694 + depends on (RESET_CONTROLLER && REGULATOR) || !OF 694 695 select MMC_TMIO_CORE 695 - select RESET_CONTROLLER if ARCH_RENESAS 696 696 help 697 697 This provides support for the SDHI SD/SDIO controller found in 698 698 Renesas SuperH, ARM and ARM64 based SoCs
+5 -7
drivers/mmc/host/renesas_sdhi_core.c
··· 1179 1179 if (IS_ERR(rdev)) { 1180 1180 dev_err(dev, "regulator register failed err=%ld", PTR_ERR(rdev)); 1181 1181 ret = PTR_ERR(rdev); 1182 - goto efree; 1182 + goto edisclk; 1183 1183 } 1184 1184 priv->rdev = rdev; 1185 1185 } ··· 1243 1243 num_irqs = platform_irq_count(pdev); 1244 1244 if (num_irqs < 0) { 1245 1245 ret = num_irqs; 1246 - goto eirq; 1246 + goto edisclk; 1247 1247 } 1248 1248 1249 1249 /* There must be at least one IRQ source */ 1250 1250 if (!num_irqs) { 1251 1251 ret = -ENXIO; 1252 - goto eirq; 1252 + goto edisclk; 1253 1253 } 1254 1254 1255 1255 for (i = 0; i < num_irqs; i++) { 1256 1256 irq = platform_get_irq(pdev, i); 1257 1257 if (irq < 0) { 1258 1258 ret = irq; 1259 - goto eirq; 1259 + goto edisclk; 1260 1260 } 1261 1261 1262 1262 ret = devm_request_irq(&pdev->dev, irq, tmio_mmc_irq, 0, 1263 1263 dev_name(&pdev->dev), host); 1264 1264 if (ret) 1265 - goto eirq; 1265 + goto edisclk; 1266 1266 } 1267 1267 1268 1268 ret = tmio_mmc_host_probe(host); ··· 1274 1274 1275 1275 return ret; 1276 1276 1277 - eirq: 1278 - tmio_mmc_host_remove(host); 1279 1277 edisclk: 1280 1278 renesas_sdhi_clk_disable(host); 1281 1279 efree:
+4 -1
drivers/net/dsa/ocelot/felix_vsc9959.c
··· 1543 1543 struct tc_taprio_qopt_offload *taprio; 1544 1544 struct ocelot_port *ocelot_port; 1545 1545 struct timespec64 base_ts; 1546 - int port; 1546 + int i, port; 1547 1547 u32 val; 1548 1548 1549 1549 mutex_lock(&ocelot->fwd_domain_lock); ··· 1574 1574 QSYS_PARAM_CFG_REG_3_BASE_TIME_SEC_MSB(val), 1575 1575 QSYS_PARAM_CFG_REG_3_BASE_TIME_SEC_MSB_M, 1576 1576 QSYS_PARAM_CFG_REG_3); 1577 + 1578 + for (i = 0; i < taprio->num_entries; i++) 1579 + vsc9959_tas_gcl_set(ocelot, i, &taprio->entries[i]); 1577 1580 1578 1581 ocelot_rmw(ocelot, QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE, 1579 1582 QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE,
-1
drivers/net/ethernet/amd/pds_core/auxbus.c
··· 186 186 pds_client_unregister(pf, padev->client_id); 187 187 auxiliary_device_delete(&padev->aux_dev); 188 188 auxiliary_device_uninit(&padev->aux_dev); 189 - padev->client_id = 0; 190 189 *pd_ptr = NULL; 191 190 192 191 mutex_unlock(&pf->config_lock);
+7 -2
drivers/net/ethernet/amd/xgbe/xgbe-desc.c
··· 264 264 } 265 265 266 266 /* Set up the header page info */ 267 - xgbe_set_buffer_data(&rdata->rx.hdr, &ring->rx_hdr_pa, 268 - XGBE_SKB_ALLOC_SIZE); 267 + if (pdata->netdev->features & NETIF_F_RXCSUM) { 268 + xgbe_set_buffer_data(&rdata->rx.hdr, &ring->rx_hdr_pa, 269 + XGBE_SKB_ALLOC_SIZE); 270 + } else { 271 + xgbe_set_buffer_data(&rdata->rx.hdr, &ring->rx_hdr_pa, 272 + pdata->rx_buf_size); 273 + } 269 274 270 275 /* Set up the buffer page info */ 271 276 xgbe_set_buffer_data(&rdata->rx.buf, &ring->rx_buf_pa,
+22 -2
drivers/net/ethernet/amd/xgbe/xgbe-dev.c
··· 211 211 XGMAC_IOWRITE_BITS(pdata, MAC_RCR, HDSMS, XGBE_SPH_HDSMS_SIZE); 212 212 } 213 213 214 + static void xgbe_disable_sph_mode(struct xgbe_prv_data *pdata) 215 + { 216 + unsigned int i; 217 + 218 + for (i = 0; i < pdata->channel_count; i++) { 219 + if (!pdata->channel[i]->rx_ring) 220 + break; 221 + 222 + XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_CR, SPH, 0); 223 + } 224 + } 225 + 214 226 static int xgbe_write_rss_reg(struct xgbe_prv_data *pdata, unsigned int type, 215 227 unsigned int index, unsigned int val) 216 228 { ··· 3448 3436 xgbe_config_tx_coalesce(pdata); 3449 3437 xgbe_config_rx_buffer_size(pdata); 3450 3438 xgbe_config_tso_mode(pdata); 3451 - xgbe_config_sph_mode(pdata); 3452 - xgbe_config_rss(pdata); 3439 + 3440 + if (pdata->netdev->features & NETIF_F_RXCSUM) { 3441 + xgbe_config_sph_mode(pdata); 3442 + xgbe_config_rss(pdata); 3443 + } 3444 + 3453 3445 desc_if->wrapper_tx_desc_init(pdata); 3454 3446 desc_if->wrapper_rx_desc_init(pdata); 3455 3447 xgbe_enable_dma_interrupts(pdata); ··· 3608 3592 hw_if->enable_vxlan = xgbe_enable_vxlan; 3609 3593 hw_if->disable_vxlan = xgbe_disable_vxlan; 3610 3594 hw_if->set_vxlan_id = xgbe_set_vxlan_id; 3595 + 3596 + /* For Split Header*/ 3597 + hw_if->enable_sph = xgbe_config_sph_mode; 3598 + hw_if->disable_sph = xgbe_disable_sph_mode; 3611 3599 3612 3600 DBGPR("<--xgbe_init_function_ptrs\n"); 3613 3601 }
+9 -2
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
··· 2148 2148 if (ret) 2149 2149 return ret; 2150 2150 2151 - if ((features & NETIF_F_RXCSUM) && !rxcsum) 2151 + if ((features & NETIF_F_RXCSUM) && !rxcsum) { 2152 + hw_if->enable_sph(pdata); 2153 + hw_if->enable_vxlan(pdata); 2152 2154 hw_if->enable_rx_csum(pdata); 2153 - else if (!(features & NETIF_F_RXCSUM) && rxcsum) 2155 + schedule_work(&pdata->restart_work); 2156 + } else if (!(features & NETIF_F_RXCSUM) && rxcsum) { 2157 + hw_if->disable_sph(pdata); 2158 + hw_if->disable_vxlan(pdata); 2154 2159 hw_if->disable_rx_csum(pdata); 2160 + schedule_work(&pdata->restart_work); 2161 + } 2155 2162 2156 2163 if ((features & NETIF_F_HW_VLAN_CTAG_RX) && !rxvlan) 2157 2164 hw_if->enable_rx_vlan_stripping(pdata);
+4
drivers/net/ethernet/amd/xgbe/xgbe.h
··· 756 756 void (*enable_vxlan)(struct xgbe_prv_data *); 757 757 void (*disable_vxlan)(struct xgbe_prv_data *); 758 758 void (*set_vxlan_id)(struct xgbe_prv_data *); 759 + 760 + /* For Split Header */ 761 + void (*enable_sph)(struct xgbe_prv_data *pdata); 762 + void (*disable_sph)(struct xgbe_prv_data *pdata); 759 763 }; 760 764 761 765 /* This structure represents implementation specific routines for an
+20 -15
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 2044 2044 } 2045 2045 return skb; 2046 2046 vlan_err: 2047 + skb_mark_for_recycle(skb); 2047 2048 dev_kfree_skb(skb); 2048 2049 return NULL; 2049 2050 } ··· 3446 3445 3447 3446 bnxt_free_one_tx_ring_skbs(bp, txr, i); 3448 3447 } 3448 + 3449 + if (bp->ptp_cfg && !(bp->fw_cap & BNXT_FW_CAP_TX_TS_CMP)) 3450 + bnxt_ptp_free_txts_skbs(bp->ptp_cfg); 3449 3451 } 3450 3452 3451 3453 static void bnxt_free_one_rx_ring(struct bnxt *bp, struct bnxt_rx_ring_info *rxr) ··· 11645 11641 poll_fn = bnxt_poll_p5; 11646 11642 else if (BNXT_CHIP_TYPE_NITRO_A0(bp)) 11647 11643 cp_nr_rings--; 11644 + 11645 + set_bit(BNXT_STATE_NAPI_DISABLED, &bp->state); 11646 + 11648 11647 for (i = 0; i < cp_nr_rings; i++) { 11649 11648 bnapi = bp->bnapi[i]; 11650 11649 netif_napi_add_config_locked(bp->dev, &bnapi->napi, poll_fn, ··· 12367 12360 { 12368 12361 struct hwrm_func_drv_if_change_output *resp; 12369 12362 struct hwrm_func_drv_if_change_input *req; 12370 - bool fw_reset = !bp->irq_tbl; 12371 12363 bool resc_reinit = false; 12372 12364 bool caps_change = false; 12373 12365 int rc, retry = 0; 12366 + bool fw_reset; 12374 12367 u32 flags = 0; 12368 + 12369 + fw_reset = (bp->fw_reset_state == BNXT_FW_RESET_STATE_ABORT); 12370 + bp->fw_reset_state = 0; 12375 12371 12376 12372 if (!(bp->fw_cap & BNXT_FW_CAP_IF_CHANGE)) 12377 12373 return 0; ··· 12444 12434 set_bit(BNXT_STATE_ABORT_ERR, &bp->state); 12445 12435 return rc; 12446 12436 } 12437 + /* IRQ will be initialized later in bnxt_request_irq()*/ 12447 12438 bnxt_clear_int_mode(bp); 12448 - rc = bnxt_init_int_mode(bp); 12449 - if (rc) { 12450 - clear_bit(BNXT_STATE_FW_RESET_DET, &bp->state); 12451 - netdev_err(bp->dev, "init int mode failed\n"); 12452 - return rc; 12453 - } 12454 12439 } 12455 12440 rc = bnxt_cancel_reservations(bp, fw_reset); 12456 12441 } ··· 12844 12839 /* VF-reps may need to be re-opened after the PF is re-opened */ 12845 12840 if (BNXT_PF(bp)) 12846 12841 bnxt_vf_reps_open(bp); 12847 - if (bp->ptp_cfg && !(bp->fw_cap & 
BNXT_FW_CAP_TX_TS_CMP)) 12848 - WRITE_ONCE(bp->ptp_cfg->tx_avail, BNXT_MAX_TX_TS); 12849 12842 bnxt_ptp_init_rtc(bp, true); 12850 12843 bnxt_ptp_cfg_tstamp_filters(bp); 12851 12844 if (BNXT_SUPPORTS_MULTI_RSS_CTX(bp)) ··· 14878 14875 clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state); 14879 14876 if (bp->fw_reset_state != BNXT_FW_RESET_STATE_POLL_VF) 14880 14877 bnxt_dl_health_fw_status_update(bp, false); 14881 - bp->fw_reset_state = 0; 14878 + bp->fw_reset_state = BNXT_FW_RESET_STATE_ABORT; 14882 14879 netif_close(bp->dev); 14883 14880 } 14884 14881 ··· 16050 16047 16051 16048 bnxt_rdma_aux_device_del(bp); 16052 16049 16053 - bnxt_ptp_clear(bp); 16054 16050 unregister_netdev(dev); 16051 + bnxt_ptp_clear(bp); 16055 16052 16056 16053 bnxt_rdma_aux_device_uninit(bp); 16057 16054 ··· 16978 16975 if (!err) 16979 16976 result = PCI_ERS_RESULT_RECOVERED; 16980 16977 16978 + /* IRQ will be initialized later in bnxt_io_resume */ 16981 16979 bnxt_ulp_irq_stop(bp); 16982 16980 bnxt_clear_int_mode(bp); 16983 - err = bnxt_init_int_mode(bp); 16984 - bnxt_ulp_irq_restart(bp, err); 16985 16981 } 16986 16982 16987 16983 reset_exit: ··· 17009 17007 17010 17008 err = bnxt_hwrm_func_qcaps(bp); 17011 17009 if (!err) { 17012 - if (netif_running(netdev)) 17010 + if (netif_running(netdev)) { 17013 17011 err = bnxt_open(netdev); 17014 - else 17012 + } else { 17015 17013 err = bnxt_reserve_rings(bp, true); 17014 + if (!err) 17015 + err = bnxt_init_int_mode(bp); 17016 + } 17016 17017 } 17017 17018 17018 17019 if (!err)
+1
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 2615 2615 #define BNXT_FW_RESET_STATE_POLL_FW 4 2616 2616 #define BNXT_FW_RESET_STATE_OPENING 5 2617 2617 #define BNXT_FW_RESET_STATE_POLL_FW_DOWN 6 2618 + #define BNXT_FW_RESET_STATE_ABORT 7 2618 2619 2619 2620 u16 fw_reset_min_dsecs; 2620 2621 #define BNXT_DFLT_FW_RST_MIN_DSECS 20
+20 -10
drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.c
··· 110 110 } 111 111 } 112 112 113 - if (info->dest_buf) { 114 - if ((info->seg_start + off + len) <= 115 - BNXT_COREDUMP_BUF_LEN(info->buf_len)) { 116 - memcpy(info->dest_buf + off, dma_buf, len); 117 - } else { 118 - rc = -ENOBUFS; 119 - break; 120 - } 121 - } 122 - 123 113 if (cmn_req->req_type == 124 114 cpu_to_le16(HWRM_DBG_COREDUMP_RETRIEVE)) 125 115 info->dest_buf_size += len; 116 + 117 + if (info->dest_buf) { 118 + if ((info->seg_start + off + len) <= 119 + BNXT_COREDUMP_BUF_LEN(info->buf_len)) { 120 + u16 copylen = min_t(u16, len, 121 + info->dest_buf_size - off); 122 + 123 + memcpy(info->dest_buf + off, dma_buf, copylen); 124 + if (copylen < len) 125 + break; 126 + } else { 127 + rc = -ENOBUFS; 128 + if (cmn_req->req_type == 129 + cpu_to_le16(HWRM_DBG_COREDUMP_LIST)) { 130 + kfree(info->dest_buf); 131 + info->dest_buf = NULL; 132 + } 133 + break; 134 + } 135 + } 126 136 127 137 if (!(cmn_resp->flags & HWRM_DBG_CMN_FLAGS_MORE)) 128 138 break;
+32 -6
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
··· 2062 2062 return reg_len; 2063 2063 } 2064 2064 2065 + #define BNXT_PCIE_32B_ENTRY(start, end) \ 2066 + { offsetof(struct pcie_ctx_hw_stats, start), \ 2067 + offsetof(struct pcie_ctx_hw_stats, end) } 2068 + 2069 + static const struct { 2070 + u16 start; 2071 + u16 end; 2072 + } bnxt_pcie_32b_entries[] = { 2073 + BNXT_PCIE_32B_ENTRY(pcie_ltssm_histogram[0], pcie_ltssm_histogram[3]), 2074 + }; 2075 + 2065 2076 static void bnxt_get_regs(struct net_device *dev, struct ethtool_regs *regs, 2066 2077 void *_p) 2067 2078 { ··· 2105 2094 req->pcie_stat_host_addr = cpu_to_le64(hw_pcie_stats_addr); 2106 2095 rc = hwrm_req_send(bp, req); 2107 2096 if (!rc) { 2108 - __le64 *src = (__le64 *)hw_pcie_stats; 2109 - u64 *dst = (u64 *)(_p + BNXT_PXP_REG_LEN); 2110 - int i; 2097 + u8 *dst = (u8 *)(_p + BNXT_PXP_REG_LEN); 2098 + u8 *src = (u8 *)hw_pcie_stats; 2099 + int i, j; 2111 2100 2112 - for (i = 0; i < sizeof(*hw_pcie_stats) / sizeof(__le64); i++) 2113 - dst[i] = le64_to_cpu(src[i]); 2101 + for (i = 0, j = 0; i < sizeof(*hw_pcie_stats); ) { 2102 + if (i >= bnxt_pcie_32b_entries[j].start && 2103 + i <= bnxt_pcie_32b_entries[j].end) { 2104 + u32 *dst32 = (u32 *)(dst + i); 2105 + 2106 + *dst32 = le32_to_cpu(*(__le32 *)(src + i)); 2107 + i += 4; 2108 + if (i > bnxt_pcie_32b_entries[j].end && 2109 + j < ARRAY_SIZE(bnxt_pcie_32b_entries) - 1) 2110 + j++; 2111 + } else { 2112 + u64 *dst64 = (u64 *)(dst + i); 2113 + 2114 + *dst64 = le64_to_cpu(*(__le64 *)(src + i)); 2115 + i += 8; 2116 + } 2117 + } 2114 2118 } 2115 2119 hwrm_req_drop(bp, req); 2116 2120 } ··· 5017 4991 if (!bp->num_tests || !BNXT_PF(bp)) 5018 4992 return; 5019 4993 4994 + memset(buf, 0, sizeof(u64) * bp->num_tests); 5020 4995 if (etest->flags & ETH_TEST_FL_OFFLINE && 5021 4996 bnxt_ulp_registered(bp->edev)) { 5022 4997 etest->flags |= ETH_TEST_FL_FAILED; ··· 5025 4998 return; 5026 4999 } 5027 5000 5028 - memset(buf, 0, sizeof(u64) * bp->num_tests); 5029 5001 if (!netif_running(dev)) { 5030 5002 etest->flags |= ETH_TEST_FL_FAILED; 5031 5003 return;
+21 -8
drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
··· 794 794 return HZ; 795 795 } 796 796 797 + void bnxt_ptp_free_txts_skbs(struct bnxt_ptp_cfg *ptp) 798 + { 799 + struct bnxt_ptp_tx_req *txts_req; 800 + u16 cons = ptp->txts_cons; 801 + 802 + /* make sure ptp aux worker finished with 803 + * possible BNXT_STATE_OPEN set 804 + */ 805 + ptp_cancel_worker_sync(ptp->ptp_clock); 806 + 807 + ptp->tx_avail = BNXT_MAX_TX_TS; 808 + while (cons != ptp->txts_prod) { 809 + txts_req = &ptp->txts_req[cons]; 810 + if (!IS_ERR_OR_NULL(txts_req->tx_skb)) 811 + dev_kfree_skb_any(txts_req->tx_skb); 812 + cons = NEXT_TXTS(cons); 813 + } 814 + ptp->txts_cons = cons; 815 + ptp_schedule_worker(ptp->ptp_clock, 0); 816 + } 817 + 797 818 int bnxt_ptp_get_txts_prod(struct bnxt_ptp_cfg *ptp, u16 *prod) 798 819 { 799 820 spin_lock_bh(&ptp->ptp_tx_lock); ··· 1126 1105 void bnxt_ptp_clear(struct bnxt *bp) 1127 1106 { 1128 1107 struct bnxt_ptp_cfg *ptp = bp->ptp_cfg; 1129 - int i; 1130 1108 1131 1109 if (!ptp) 1132 1110 return; ··· 1136 1116 ptp->ptp_clock = NULL; 1137 1117 kfree(ptp->ptp_info.pin_config); 1138 1118 ptp->ptp_info.pin_config = NULL; 1139 - 1140 - for (i = 0; i < BNXT_MAX_TX_TS; i++) { 1141 - if (ptp->txts_req[i].tx_skb) { 1142 - dev_kfree_skb_any(ptp->txts_req[i].tx_skb); 1143 - ptp->txts_req[i].tx_skb = NULL; 1144 - } 1145 - } 1146 1119 1147 1120 bnxt_unmap_ptp_regs(bp); 1148 1121 }
+1
drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
··· 162 162 void bnxt_ptp_reapply_pps(struct bnxt *bp); 163 163 int bnxt_hwtstamp_set(struct net_device *dev, struct ifreq *ifr); 164 164 int bnxt_hwtstamp_get(struct net_device *dev, struct ifreq *ifr); 165 + void bnxt_ptp_free_txts_skbs(struct bnxt_ptp_cfg *ptp); 165 166 int bnxt_ptp_get_txts_prod(struct bnxt_ptp_cfg *ptp, u16 *prod); 166 167 void bnxt_get_tx_ts_p5(struct bnxt *bp, struct sk_buff *skb, u16 prod); 167 168 int bnxt_get_rx_ts_p5(struct bnxt *bp, u64 *ts, u32 pkt_ts);
+1 -1
drivers/net/ethernet/dlink/dl2k.c
··· 352 352 eth_hw_addr_set(dev, psrom->mac_addr); 353 353 354 354 if (np->chip_id == CHIP_IP1000A) { 355 - np->led_mode = psrom->led_mode; 355 + np->led_mode = le16_to_cpu(psrom->led_mode); 356 356 return 0; 357 357 } 358 358
+1 -1
drivers/net/ethernet/dlink/dl2k.h
··· 335 335 u16 sub_system_id; /* 0x06 */ 336 336 u16 pci_base_1; /* 0x08 (IP1000A only) */ 337 337 u16 pci_base_2; /* 0x0a (IP1000A only) */ 338 - u16 led_mode; /* 0x0c (IP1000A only) */ 338 + __le16 led_mode; /* 0x0c (IP1000A only) */ 339 339 u16 reserved1[9]; /* 0x0e-0x1f */ 340 340 u8 mac_addr[6]; /* 0x20-0x25 */ 341 341 u8 reserved2[10]; /* 0x26-0x2f */
+6 -1
drivers/net/ethernet/freescale/fec_main.c
··· 714 714 txq->bd.cur = bdp; 715 715 716 716 /* Trigger transmission start */ 717 - writel(0, txq->bd.reg_desc_active); 717 + if (!(fep->quirks & FEC_QUIRK_ERR007885) || 718 + !readl(txq->bd.reg_desc_active) || 719 + !readl(txq->bd.reg_desc_active) || 720 + !readl(txq->bd.reg_desc_active) || 721 + !readl(txq->bd.reg_desc_active)) 722 + writel(0, txq->bd.reg_desc_active); 718 723 719 724 return 0; 720 725 }
+1 -1
drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
··· 61 61 .name = "tm_qset", 62 62 .cmd = HNAE3_DBG_CMD_TM_QSET, 63 63 .dentry = HNS3_DBG_DENTRY_TM, 64 - .buf_len = HNS3_DBG_READ_LEN, 64 + .buf_len = HNS3_DBG_READ_LEN_1MB, 65 65 .init = hns3_dbg_common_file_init, 66 66 }, 67 67 {
+39 -43
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
··· 473 473 writel(mask_en, tqp_vector->mask_addr); 474 474 } 475 475 476 - static void hns3_vector_enable(struct hns3_enet_tqp_vector *tqp_vector) 476 + static void hns3_irq_enable(struct hns3_enet_tqp_vector *tqp_vector) 477 477 { 478 478 napi_enable(&tqp_vector->napi); 479 479 enable_irq(tqp_vector->vector_irq); 480 - 481 - /* enable vector */ 482 - hns3_mask_vector_irq(tqp_vector, 1); 483 480 } 484 481 485 - static void hns3_vector_disable(struct hns3_enet_tqp_vector *tqp_vector) 482 + static void hns3_irq_disable(struct hns3_enet_tqp_vector *tqp_vector) 486 483 { 487 - /* disable vector */ 488 - hns3_mask_vector_irq(tqp_vector, 0); 489 - 490 484 disable_irq(tqp_vector->vector_irq); 491 485 napi_disable(&tqp_vector->napi); 492 486 cancel_work_sync(&tqp_vector->rx_group.dim.work); ··· 701 707 return 0; 702 708 } 703 709 710 + static void hns3_enable_irqs_and_tqps(struct net_device *netdev) 711 + { 712 + struct hns3_nic_priv *priv = netdev_priv(netdev); 713 + struct hnae3_handle *h = priv->ae_handle; 714 + u16 i; 715 + 716 + for (i = 0; i < priv->vector_num; i++) 717 + hns3_irq_enable(&priv->tqp_vector[i]); 718 + 719 + for (i = 0; i < priv->vector_num; i++) 720 + hns3_mask_vector_irq(&priv->tqp_vector[i], 1); 721 + 722 + for (i = 0; i < h->kinfo.num_tqps; i++) 723 + hns3_tqp_enable(h->kinfo.tqp[i]); 724 + } 725 + 726 + static void hns3_disable_irqs_and_tqps(struct net_device *netdev) 727 + { 728 + struct hns3_nic_priv *priv = netdev_priv(netdev); 729 + struct hnae3_handle *h = priv->ae_handle; 730 + u16 i; 731 + 732 + for (i = 0; i < h->kinfo.num_tqps; i++) 733 + hns3_tqp_disable(h->kinfo.tqp[i]); 734 + 735 + for (i = 0; i < priv->vector_num; i++) 736 + hns3_mask_vector_irq(&priv->tqp_vector[i], 0); 737 + 738 + for (i = 0; i < priv->vector_num; i++) 739 + hns3_irq_disable(&priv->tqp_vector[i]); 740 + } 741 + 704 742 static int hns3_nic_net_up(struct net_device *netdev) 705 743 { 706 744 struct hns3_nic_priv *priv = netdev_priv(netdev); 707 745 struct hnae3_handle *h = priv->ae_handle; 708 - int i, j; 709 746 int ret; 710 747 711 748 ret = hns3_nic_reset_all_ring(h); ··· 745 720 746 721 clear_bit(HNS3_NIC_STATE_DOWN, &priv->state); 747 722 748 - /* enable the vectors */ 749 - for (i = 0; i < priv->vector_num; i++) 750 - hns3_vector_enable(&priv->tqp_vector[i]); 751 - 752 - /* enable rcb */ 753 - for (j = 0; j < h->kinfo.num_tqps; j++) 754 - hns3_tqp_enable(h->kinfo.tqp[j]); 723 + hns3_enable_irqs_and_tqps(netdev); 755 724 756 725 /* start the ae_dev */ 757 726 ret = h->ae_algo->ops->start ? h->ae_algo->ops->start(h) : 0; 758 727 if (ret) { 759 728 set_bit(HNS3_NIC_STATE_DOWN, &priv->state); 760 - while (j--) 761 - hns3_tqp_disable(h->kinfo.tqp[j]); 762 - 763 - for (j = i - 1; j >= 0; j--) 764 - hns3_vector_disable(&priv->tqp_vector[j]); 729 + hns3_disable_irqs_and_tqps(netdev); 765 730 } 766 731 767 732 return ret; ··· 838 823 static void hns3_nic_net_down(struct net_device *netdev) 839 824 { 840 825 struct hns3_nic_priv *priv = netdev_priv(netdev); 841 - struct hnae3_handle *h = hns3_get_handle(netdev); 842 826 const struct hnae3_ae_ops *ops; 843 - int i; 844 827 845 - /* disable vectors */ 846 - for (i = 0; i < priv->vector_num; i++) 847 - hns3_vector_disable(&priv->tqp_vector[i]); 848 - 849 - /* disable rcb */ 850 - for (i = 0; i < h->kinfo.num_tqps; i++) 851 - hns3_tqp_disable(h->kinfo.tqp[i]); 828 + hns3_disable_irqs_and_tqps(netdev); 852 829 853 830 /* stop ae_dev */ 854 831 ops = priv->ae_handle->ae_algo->ops; ··· 5871 5864 void hns3_external_lb_prepare(struct net_device *ndev, bool if_running) 5872 5865 { 5873 5866 struct hns3_nic_priv *priv = netdev_priv(ndev); 5874 - struct hnae3_handle *h = priv->ae_handle; 5875 - int i; 5876 5867 5877 5868 if (!if_running) 5878 5869 return; ··· 5881 5876 netif_carrier_off(ndev); 5882 5877 netif_tx_disable(ndev); 5883 5878 5884 - for (i = 0; i < priv->vector_num; i++) 5885 - hns3_vector_disable(&priv->tqp_vector[i]); 5886 - 5887 - for (i = 0; i < h->kinfo.num_tqps; i++) 5888 - hns3_tqp_disable(h->kinfo.tqp[i]); 5879 + hns3_disable_irqs_and_tqps(ndev); 5889 5880 5890 5881 /* delay ring buffer clearing to hns3_reset_notify_uninit_enet 5891 5882 * during reset process, because driver may not be able ··· 5897 5896 { 5898 5897 struct hns3_nic_priv *priv = netdev_priv(ndev); 5899 5898 struct hnae3_handle *h = priv->ae_handle; 5900 - int i; 5901 5899 5902 5900 if (!if_running) 5903 5901 return; ··· 5912 5912 5913 5913 clear_bit(HNS3_NIC_STATE_DOWN, &priv->state); 5914 5914 5915 - for (i = 0; i < priv->vector_num; i++) 5916 - hns3_vector_enable(&priv->tqp_vector[i]); 5917 - 5918 - for (i = 0; i < h->kinfo.num_tqps; i++) 5919 - hns3_tqp_enable(h->kinfo.tqp[i]); 5915 + hns3_enable_irqs_and_tqps(ndev); 5920 5916 5921 5917 netif_tx_wake_all_queues(ndev); 5922 5918
+7 -6
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
··· 440 440 ptp->info.settime64 = hclge_ptp_settime; 441 441 442 442 ptp->info.n_alarm = 0; 443 + 444 + spin_lock_init(&ptp->lock); 445 + ptp->io_base = hdev->hw.hw.io_base + HCLGE_PTP_REG_OFFSET; 446 + ptp->ts_cfg.rx_filter = HWTSTAMP_FILTER_NONE; 447 + ptp->ts_cfg.tx_type = HWTSTAMP_TX_OFF; 448 + hdev->ptp = ptp; 449 + 443 450 ptp->clock = ptp_clock_register(&ptp->info, &hdev->pdev->dev); 444 451 if (IS_ERR(ptp->clock)) { 445 452 dev_err(&hdev->pdev->dev, ··· 457 450 dev_err(&hdev->pdev->dev, "failed to register ptp clock\n"); 458 451 return -ENODEV; 459 452 } 460 - 461 - spin_lock_init(&ptp->lock); 462 - ptp->io_base = hdev->hw.hw.io_base + HCLGE_PTP_REG_OFFSET; 463 - ptp->ts_cfg.rx_filter = HWTSTAMP_FILTER_NONE; 464 - ptp->ts_cfg.tx_type = HWTSTAMP_TX_OFF; 465 - hdev->ptp = ptp; 466 453 467 454 return 0; 468 455 }
+19 -6
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
··· 1292 1292 rtnl_unlock(); 1293 1293 } 1294 1294 1295 - static int hclgevf_en_hw_strip_rxvtag(struct hnae3_handle *handle, bool enable) 1295 + static int hclgevf_en_hw_strip_rxvtag_cmd(struct hclgevf_dev *hdev, bool enable) 1296 1296 { 1297 - struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 1298 1297 struct hclge_vf_to_pf_msg send_msg; 1299 1298 1300 1299 hclgevf_build_send_msg(&send_msg, HCLGE_MBX_SET_VLAN, 1301 1300 HCLGE_MBX_VLAN_RX_OFF_CFG); 1302 1301 send_msg.data[0] = enable ? 1 : 0; 1303 1302 return hclgevf_send_mbx_msg(hdev, &send_msg, false, NULL, 0); 1303 + } 1304 + 1305 + static int hclgevf_en_hw_strip_rxvtag(struct hnae3_handle *handle, bool enable) 1306 + { 1307 + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 1308 + int ret; 1309 + 1310 + ret = hclgevf_en_hw_strip_rxvtag_cmd(hdev, enable); 1311 + if (ret) 1312 + return ret; 1313 + 1314 + hdev->rxvtag_strip_en = enable; 1315 + return 0; 1304 1316 } 1305 1317 1306 1318 static int hclgevf_reset_tqp(struct hnae3_handle *handle) ··· 2216 2204 tc_valid, tc_size); 2217 2205 } 2218 2206 2219 - static int hclgevf_init_vlan_config(struct hclgevf_dev *hdev) 2207 + static int hclgevf_init_vlan_config(struct hclgevf_dev *hdev, 2208 + bool rxvtag_strip_en) 2220 2209 { 2221 2210 struct hnae3_handle *nic = &hdev->nic; 2222 2211 int ret; 2223 2212 2224 - ret = hclgevf_en_hw_strip_rxvtag(nic, true); 2213 + ret = hclgevf_en_hw_strip_rxvtag(nic, rxvtag_strip_en); 2225 2214 if (ret) { 2226 2215 dev_err(&hdev->pdev->dev, 2227 2216 "failed to enable rx vlan offload, ret = %d\n", ret); ··· 2892 2879 if (ret) 2893 2880 return ret; 2894 2881 2895 - ret = hclgevf_init_vlan_config(hdev); 2882 + ret = hclgevf_init_vlan_config(hdev, hdev->rxvtag_strip_en); 2896 2883 if (ret) { 2897 2884 dev_err(&hdev->pdev->dev, 2898 2885 "failed(%d) to initialize VLAN config\n", ret); ··· 3007 2994 goto err_config; 3008 2995 } 3009 2996 3010 - ret = hclgevf_init_vlan_config(hdev); 2997 + ret = hclgevf_init_vlan_config(hdev, true); 3011 2998 if (ret) { 3012 2999 dev_err(&hdev->pdev->dev, 3013 3000 "failed(%d) to initialize VLAN config\n", ret);
+1
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
··· 253 253 int *vector_irq; 254 254 255 255 bool gro_en; 256 + bool rxvtag_strip_en; 256 257 257 258 unsigned long vlan_del_fail_bmap[BITS_TO_LONGS(VLAN_N_VID)]; 258 259
+5 -5
drivers/net/ethernet/intel/ice/ice_ddp.c
··· 2345 2345 cmd->set_flags |= ICE_AQC_TX_TOPO_FLAGS_SRC_RAM | 2346 2346 ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW; 2347 2347 2348 - if (hw->mac_type == ICE_MAC_GENERIC_3K_E825) 2349 - desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD); 2348 + desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD); 2350 2349 } else { 2351 2350 ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_tx_topo); 2352 2351 cmd->get_flags = ICE_AQC_TX_TOPO_GET_RAM; 2353 - } 2354 2352 2355 - if (hw->mac_type != ICE_MAC_GENERIC_3K_E825) 2356 - desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD); 2353 + if (hw->mac_type == ICE_MAC_E810 || 2354 + hw->mac_type == ICE_MAC_GENERIC) 2355 + desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD); 2356 + } 2357 2357 2358 2358 status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd); 2359 2359 if (status)
+5
drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
··· 2097 2097 pf = vf->pf; 2098 2098 dev = ice_pf_to_dev(pf); 2099 2099 vf_vsi = ice_get_vf_vsi(vf); 2100 + if (!vf_vsi) { 2101 + dev_err(dev, "Can not get FDIR vf_vsi for VF %u\n", vf->vf_id); 2102 + v_ret = VIRTCHNL_STATUS_ERR_PARAM; 2103 + goto err_exit; 2104 + } 2100 2105 2101 2106 #define ICE_VF_MAX_FDIR_FILTERS 128 2102 2107 if (!ice_fdir_num_avail_fltr(&pf->hw, vf_vsi) ||
+8 -10
drivers/net/ethernet/intel/idpf/idpf.h
··· 629 629 VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4 |\ 630 630 VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6) 631 631 632 - #define IDPF_CAP_RX_CSUM_L4V4 (\ 633 - VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP |\ 634 - VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP) 632 + #define IDPF_CAP_TX_CSUM_L4V4 (\ 633 + VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP |\ 634 + VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP) 635 635 636 - #define IDPF_CAP_RX_CSUM_L4V6 (\ 637 - VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |\ 638 - VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP) 636 + #define IDPF_CAP_TX_CSUM_L4V6 (\ 637 + VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP |\ 638 + VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP) 639 639 640 640 #define IDPF_CAP_RX_CSUM (\ 641 641 VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 |\ ··· 644 644 VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |\ 645 645 VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP) 646 646 647 - #define IDPF_CAP_SCTP_CSUM (\ 647 + #define IDPF_CAP_TX_SCTP_CSUM (\ 648 648 VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP |\ 649 - VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP |\ 650 - VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP |\ 651 - VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP) 649 + VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP) 652 650 653 651 #define IDPF_CAP_TUNNEL_TX_CSUM (\ 654 652 VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL |\
+30 -46
drivers/net/ethernet/intel/idpf/idpf_lib.c
··· 703 703 { 704 704 struct idpf_adapter *adapter = vport->adapter; 705 705 struct idpf_vport_config *vport_config; 706 + netdev_features_t other_offloads = 0; 707 + netdev_features_t csum_offloads = 0; 708 + netdev_features_t tso_offloads = 0; 706 709 netdev_features_t dflt_features; 707 - netdev_features_t offloads = 0; 708 710 struct idpf_netdev_priv *np; 709 711 struct net_device *netdev; 710 712 u16 idx = vport->idx; ··· 768 766 769 767 if (idpf_is_cap_ena_all(adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS)) 770 768 dflt_features |= NETIF_F_RXHASH; 771 - if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_RX_CSUM_L4V4)) 772 - dflt_features |= NETIF_F_IP_CSUM; 773 - if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_RX_CSUM_L4V6)) 774 - dflt_features |= NETIF_F_IPV6_CSUM; 769 + if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_TX_CSUM_L4V4)) 770 + csum_offloads |= NETIF_F_IP_CSUM; 771 + if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_TX_CSUM_L4V6)) 772 + csum_offloads |= NETIF_F_IPV6_CSUM; 775 773 if (idpf_is_cap_ena(adapter, IDPF_CSUM_CAPS, IDPF_CAP_RX_CSUM)) 776 - dflt_features |= NETIF_F_RXCSUM; 777 - if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_SCTP_CSUM)) 778 - dflt_features |= NETIF_F_SCTP_CRC; 774 + csum_offloads |= NETIF_F_RXCSUM; 775 + if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_TX_SCTP_CSUM)) 776 + csum_offloads |= NETIF_F_SCTP_CRC; 779 777 780 778 if (idpf_is_cap_ena(adapter, IDPF_SEG_CAPS, VIRTCHNL2_CAP_SEG_IPV4_TCP)) 781 - dflt_features |= NETIF_F_TSO; 779 + tso_offloads |= NETIF_F_TSO; 782 780 if (idpf_is_cap_ena(adapter, IDPF_SEG_CAPS, VIRTCHNL2_CAP_SEG_IPV6_TCP)) 783 - dflt_features |= NETIF_F_TSO6; 781 + tso_offloads |= NETIF_F_TSO6; 784 782 if (idpf_is_cap_ena_all(adapter, IDPF_SEG_CAPS, 785 783 VIRTCHNL2_CAP_SEG_IPV4_UDP | 786 784 VIRTCHNL2_CAP_SEG_IPV6_UDP)) 787 - dflt_features |= NETIF_F_GSO_UDP_L4; 785 + tso_offloads |= NETIF_F_GSO_UDP_L4; 788 786 if (idpf_is_cap_ena_all(adapter, IDPF_RSC_CAPS, IDPF_CAP_RSC)) 789 - offloads |= NETIF_F_GRO_HW; 790 - /* advertise to stack only if offloads for encapsulated packets is 791 - * supported 792 - */ 793 - if (idpf_is_cap_ena(vport->adapter, IDPF_SEG_CAPS, 794 - VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL)) { 795 - offloads |= NETIF_F_GSO_UDP_TUNNEL | 796 - NETIF_F_GSO_GRE | 797 - NETIF_F_GSO_GRE_CSUM | 798 - NETIF_F_GSO_PARTIAL | 799 - NETIF_F_GSO_UDP_TUNNEL_CSUM | 800 - NETIF_F_GSO_IPXIP4 | 801 - NETIF_F_GSO_IPXIP6 | 802 - 0; 803 - 804 - if (!idpf_is_cap_ena_all(vport->adapter, IDPF_CSUM_CAPS, 805 - IDPF_CAP_TUNNEL_TX_CSUM)) 806 - netdev->gso_partial_features |= 807 - NETIF_F_GSO_UDP_TUNNEL_CSUM; 808 - 809 - netdev->gso_partial_features |= NETIF_F_GSO_GRE_CSUM; 810 - offloads |= NETIF_F_TSO_MANGLEID; 811 - } 787 + other_offloads |= NETIF_F_GRO_HW; 812 788 if (idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_LOOPBACK)) 813 - offloads |= NETIF_F_LOOPBACK; 789 + other_offloads |= NETIF_F_LOOPBACK; 814 790 815 - netdev->features |= dflt_features; 816 - netdev->hw_features |= dflt_features | offloads; 817 - netdev->hw_enc_features |= dflt_features | offloads; 791 + netdev->features |= dflt_features | csum_offloads | tso_offloads; 792 + netdev->hw_features |= netdev->features | other_offloads; 793 + netdev->vlan_features |= netdev->features | other_offloads; 794 + netdev->hw_enc_features |= dflt_features | other_offloads; 818 795 idpf_set_ethtool_ops(netdev); 819 796 netif_set_affinity_auto(netdev); 820 797 SET_NETDEV_DEV(netdev, &adapter->pdev->dev); ··· 1113 1132 1114 1133 num_max_q = max(max_q->max_txq, max_q->max_rxq); 1115 1134 vport->q_vector_idxs = kcalloc(num_max_q, sizeof(u16), GFP_KERNEL); 1116 - if (!vport->q_vector_idxs) { 1117 - kfree(vport); 1135 + if (!vport->q_vector_idxs) 1136 + goto free_vport; 1118 1137 1119 - return NULL; 1120 - } 1121 1138 idpf_vport_init(vport, max_q); 1122 1139 1123 1140 /* This alloc is done separate from the LUT because it's not strictly ··· 1125 1146 rss_data = &adapter->vport_config[idx]->user_config.rss_data; 1127 1148 rss_data->rss_key = kzalloc(rss_data->rss_key_size, GFP_KERNEL); 1128 - if (!rss_data->rss_key) { 1129 - kfree(vport); 1149 + if (!rss_data->rss_key) 1150 + goto free_vector_idxs; 1130 1151 1131 - return NULL; 1132 - } 1133 1152 /* Initialize default rss key */ 1134 1153 netdev_rss_key_fill((void *)rss_data->rss_key, rss_data->rss_key_size); ··· 1140 1163 adapter->next_vport = idpf_get_free_slot(adapter); 1141 1164 1142 1165 return vport; 1166 + 1167 + free_vector_idxs: 1168 + kfree(vport->q_vector_idxs); 1169 + free_vport: 1170 + kfree(vport); 1171 + 1172 + return NULL; 1143 1173 } 1144 1174 1145 1175 /**
+1
drivers/net/ethernet/intel/idpf/idpf_main.c
··· 89 89 { 90 90 struct idpf_adapter *adapter = pci_get_drvdata(pdev); 91 91 92 + cancel_delayed_work_sync(&adapter->serv_task); 92 93 cancel_delayed_work_sync(&adapter->vc_event_task); 93 94 idpf_vc_core_deinit(adapter); 94 95 idpf_deinit_dflt_mbx(adapter);
+4 -2
drivers/net/ethernet/intel/igc/igc_ptp.c
··· 1282 1282 /* reset the tstamp_config */ 1283 1283 igc_ptp_set_timestamp_mode(adapter, &adapter->tstamp_config); 1284 1284 1285 + mutex_lock(&adapter->ptm_lock); 1286 + 1285 1287 spin_lock_irqsave(&adapter->tmreg_lock, flags); 1286 1288 1287 1289 switch (adapter->hw.mac.type) { ··· 1302 1300 if (!igc_is_crosststamp_supported(adapter)) 1303 1301 break; 1304 1302 1305 - mutex_lock(&adapter->ptm_lock); 1306 1303 wr32(IGC_PCIE_DIG_DELAY, IGC_PCIE_DIG_DELAY_DEFAULT); 1307 1304 wr32(IGC_PCIE_PHY_DELAY, IGC_PCIE_PHY_DELAY_DEFAULT); 1308 1305 ··· 1325 1324 netdev_err(adapter->netdev, "Timeout reading IGC_PTM_STAT register\n"); 1326 1325 1327 1326 igc_ptm_reset(hw); 1328 - mutex_unlock(&adapter->ptm_lock); 1329 1327 break; 1330 1328 default: 1331 1329 /* No work to do. */ ··· 1340 1340 } 1341 1341 out: 1342 1342 spin_unlock_irqrestore(&adapter->tmreg_lock, flags); 1343 + 1344 + mutex_unlock(&adapter->ptm_lock); 1343 1345 1344 1346 wrfl(); 1345 1347 }
+1 -1
drivers/net/ethernet/marvell/octeon_ep/octep_main.c
··· 1223 1223 miss_cnt); 1224 1224 rtnl_lock(); 1225 1225 if (netif_running(oct->netdev)) 1226 - octep_stop(oct->netdev); 1226 + dev_close(oct->netdev); 1227 1227 rtnl_unlock(); 1228 1228 } 1229 1229
+3 -1
drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
··· 833 833 struct octep_vf_device *oct = netdev_priv(netdev); 834 834 835 835 netdev_hold(netdev, NULL, GFP_ATOMIC); 836 - schedule_work(&oct->tx_timeout_task); 836 + if (!schedule_work(&oct->tx_timeout_task)) 837 + netdev_put(netdev, NULL); 838 + 837 839 } 838 840 839 841 static int octep_vf_set_mac(struct net_device *netdev, void *p)
+9 -9
drivers/net/ethernet/mediatek/mtk_eth_soc.c
··· 269 269 "ethwarp_wocpu2", 270 270 "ethwarp_wocpu1", 271 271 "ethwarp_wocpu0", 272 - "top_usxgmii0_sel", 273 - "top_usxgmii1_sel", 274 272 "top_sgm0_sel", 275 273 "top_sgm1_sel", 276 - "top_xfi_phy0_xtal_sel", 277 - "top_xfi_phy1_xtal_sel", 278 274 "top_eth_gmii_sel", 279 275 "top_eth_refck_50m_sel", 280 276 "top_eth_sys_200m_sel", ··· 2248 2252 ring->data[idx] = new_data; 2249 2253 rxd->rxd1 = (unsigned int)dma_addr; 2250 2254 release_desc: 2255 + if (MTK_HAS_CAPS(eth->soc->caps, MTK_36BIT_DMA)) { 2256 + if (unlikely(dma_addr == DMA_MAPPING_ERROR)) 2257 + addr64 = FIELD_GET(RX_DMA_ADDR64_MASK, 2258 + rxd->rxd2); 2259 + else 2260 + addr64 = RX_DMA_PREP_ADDR64(dma_addr); 2261 + } 2262 + 2251 2263 if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) 2252 2264 rxd->rxd2 = RX_DMA_LSO; 2253 2265 else 2254 - rxd->rxd2 = RX_DMA_PREP_PLEN0(ring->buf_size); 2255 - 2256 - if (MTK_HAS_CAPS(eth->soc->caps, MTK_36BIT_DMA) && 2257 - likely(dma_addr != DMA_MAPPING_ERROR)) 2258 - rxd->rxd2 |= RX_DMA_PREP_ADDR64(dma_addr); 2266 + rxd->rxd2 = RX_DMA_PREP_PLEN0(ring->buf_size) | addr64; 2259 2267 2260 2268 ring->calc_idx = idx; 2261 2269 done++;
+7 -6
drivers/net/ethernet/mediatek/mtk_star_emac.c
··· 1163 1163 struct net_device *ndev = priv->ndev; 1164 1164 unsigned int head = ring->head; 1165 1165 unsigned int entry = ring->tail; 1166 + unsigned long flags; 1166 1167 1167 1168 while (entry != head && count < (MTK_STAR_RING_NUM_DESCS - 1)) { 1168 1169 ret = mtk_star_tx_complete_one(priv); ··· 1183 1182 netif_wake_queue(ndev); 1184 1183 1185 1184 if (napi_complete(napi)) { 1186 - spin_lock(&priv->lock); 1185 + spin_lock_irqsave(&priv->lock, flags); 1187 1186 mtk_star_enable_dma_irq(priv, false, true); 1188 - spin_unlock(&priv->lock); 1187 + spin_unlock_irqrestore(&priv->lock, flags); 1189 1188 } 1190 1189 1191 1190 return 0; ··· 1342 1341 static int mtk_star_rx_poll(struct napi_struct *napi, int budget) 1343 1342 { 1344 1343 struct mtk_star_priv *priv; 1344 + unsigned long flags; 1345 1345 int work_done = 0; 1346 1346 1347 1347 priv = container_of(napi, struct mtk_star_priv, rx_napi); 1348 1348 1349 1349 work_done = mtk_star_rx(priv, budget); 1350 - if (work_done < budget) { 1351 - napi_complete_done(napi, work_done); 1352 - spin_lock(&priv->lock); 1350 + if (work_done < budget && napi_complete_done(napi, work_done)) { 1351 + spin_lock_irqsave(&priv->lock, flags); 1353 1352 mtk_star_enable_dma_irq(priv, true, false); 1354 - spin_unlock(&priv->lock); 1353 + spin_unlock_irqrestore(&priv->lock, flags); 1355 1354 } 1356 1355 1357 1356 return work_done;
+2 -4
drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
··· 176 176 177 177 priv = ptpsq->txqsq.priv; 178 178 179 + rtnl_lock(); 179 180 mutex_lock(&priv->state_lock); 180 181 chs = &priv->channels; 181 182 netdev = priv->netdev; ··· 184 183 carrier_ok = netif_carrier_ok(netdev); 185 184 netif_carrier_off(netdev); 186 185 187 - rtnl_lock(); 188 186 mlx5e_deactivate_priv_channels(priv); 189 - rtnl_unlock(); 190 187 191 188 mlx5e_ptp_close(chs->ptp); 192 189 err = mlx5e_ptp_open(priv, &chs->params, chs->c[0]->lag_port, &chs->ptp); 193 190 194 - rtnl_lock(); 195 191 mlx5e_activate_priv_channels(priv); 196 - rtnl_unlock(); 197 192 198 193 /* return carrier back if needed */ 199 194 if (carrier_ok) 200 195 netif_carrier_on(netdev); 201 196 202 197 mutex_unlock(&priv->state_lock); 198 + rtnl_unlock(); 203 199 204 200 return err; 205 201 }
+29 -3
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c
··· 165 165 struct flow_match_enc_keyid enc_keyid; 166 166 void *misc_c, *misc_v; 167 167 168 - misc_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters); 169 - misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters); 170 - 171 168 if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID)) 172 169 return 0; 173 170 ··· 179 182 err = mlx5e_tc_tun_parse_vxlan_gbp_option(priv, spec, f); 180 183 if (err) 181 184 return err; 185 + 186 + /* We can't mix custom tunnel headers with symbolic ones and we 187 + * don't have a symbolic field name for GBP, so we use custom 188 + * tunnel headers in this case. We need hardware support to 189 + * match on custom tunnel headers, but we already know it's 190 + * supported because the previous call successfully checked for 191 + * that. 192 + */ 193 + misc_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, 194 + misc_parameters_5); 195 + misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, 196 + misc_parameters_5); 197 + 198 + /* Shift by 8 to account for the reserved bits in the vxlan 199 + * header after the VNI. 200 + */ 201 + MLX5_SET(fte_match_set_misc5, misc_c, tunnel_header_1, 202 + be32_to_cpu(enc_keyid.mask->keyid) << 8); 203 + MLX5_SET(fte_match_set_misc5, misc_v, tunnel_header_1, 204 + be32_to_cpu(enc_keyid.key->keyid) << 8); 205 + 206 + spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS_5; 207 + 208 + return 0; 182 209 } 183 210 184 211 /* match on VNI is required */ ··· 215 194 "Matching on VXLAN VNI is not supported\n"); 216 195 return -EOPNOTSUPP; 217 196 } 197 + 198 + misc_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, 199 + misc_parameters); 200 + misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, 201 + misc_parameters); 218 202 219 203 MLX5_SET(fte_match_set_misc, misc_c, vxlan_vni, 220 204 be32_to_cpu(enc_keyid.mask->keyid));
+1 -4
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 1750 1750 !list_is_first(&attr->list, &flow->attrs)) 1751 1751 return 0; 1752 1752 1753 - if (flow_flag_test(flow, SLOW)) 1754 - return 0; 1755 - 1756 1753 esw_attr = attr->esw_attr; 1757 1754 if (!esw_attr->split_count || 1758 1755 esw_attr->split_count == esw_attr->out_count - 1) ··· 1763 1766 for (i = esw_attr->split_count; i < esw_attr->out_count; i++) { 1764 1767 /* external dest with encap is considered as internal by firmware */ 1765 1768 if (esw_attr->dests[i].vport == MLX5_VPORT_UPLINK && 1766 - !(esw_attr->dests[i].flags & MLX5_ESW_DEST_ENCAP_VALID)) 1769 + !(esw_attr->dests[i].flags & MLX5_ESW_DEST_ENCAP)) 1767 1770 ext_dest = true; 1768 1771 else 1769 1772 int_dest = true;
+4 -1
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 3533 3533 int err; 3534 3534 3535 3535 mutex_init(&esw->offloads.termtbl_mutex); 3536 - mlx5_rdma_enable_roce(esw->dev); 3536 + err = mlx5_rdma_enable_roce(esw->dev); 3537 + if (err) 3538 + goto err_roce; 3537 3539 3538 3540 err = mlx5_esw_host_number_init(esw); 3539 3541 if (err) ··· 3596 3594 esw_offloads_metadata_uninit(esw); 3597 3595 err_metadata: 3598 3596 mlx5_rdma_disable_roce(esw->dev); 3597 + err_roce: 3599 3598 mutex_destroy(&esw->offloads.termtbl_mutex); 3600 3599 return err; 3601 3600 }
+6 -5
drivers/net/ethernet/mellanox/mlx5/core/rdma.c
··· 118 118 119 119 static int mlx5_rdma_add_roce_addr(struct mlx5_core_dev *dev) 120 120 { 121 + u8 mac[ETH_ALEN] = {}; 121 122 union ib_gid gid; 122 - u8 mac[ETH_ALEN]; 123 123 124 124 mlx5_rdma_make_default_gid(dev, &gid); 125 125 return mlx5_core_roce_gid_set(dev, 0, ··· 140 140 mlx5_nic_vport_disable_roce(dev); 141 141 } 142 142 143 - void mlx5_rdma_enable_roce(struct mlx5_core_dev *dev) 143 + int mlx5_rdma_enable_roce(struct mlx5_core_dev *dev) 144 144 { 145 145 int err; 146 146 147 147 if (!MLX5_CAP_GEN(dev, roce)) 148 - return; 148 + return 0; 149 149 150 150 err = mlx5_nic_vport_enable_roce(dev); 151 151 if (err) { 152 152 mlx5_core_err(dev, "Failed to enable RoCE: %d\n", err); 153 - return; 153 + return err; 154 154 } 155 155 156 156 err = mlx5_rdma_add_roce_addr(dev); ··· 165 165 goto del_roce_addr; 166 166 } 167 167 168 - return; 168 + return err; 169 169 170 170 del_roce_addr: 171 171 mlx5_rdma_del_roce_addr(dev); 172 172 disable_roce: 173 173 mlx5_nic_vport_disable_roce(dev); 174 + return err; 174 175 }
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/rdma.h
··· 8 8 9 9 #ifdef CONFIG_MLX5_ESWITCH 10 10 11 - void mlx5_rdma_enable_roce(struct mlx5_core_dev *dev); 11 + int mlx5_rdma_enable_roce(struct mlx5_core_dev *dev); 12 12 void mlx5_rdma_disable_roce(struct mlx5_core_dev *dev); 13 13 14 14 #else /* CONFIG_MLX5_ESWITCH */ 15 15 16 - static inline void mlx5_rdma_enable_roce(struct mlx5_core_dev *dev) {} 16 + static inline int mlx5_rdma_enable_roce(struct mlx5_core_dev *dev) { return 0; } 17 17 static inline void mlx5_rdma_disable_roce(struct mlx5_core_dev *dev) {} 18 18 19 19 #endif /* CONFIG_MLX5_ESWITCH */
+6 -2
drivers/net/ethernet/microchip/lan743x_main.c
··· 1815 1815 if (nr_frags <= 0) { 1816 1816 tx->frame_data0 |= TX_DESC_DATA0_LS_; 1817 1817 tx->frame_data0 |= TX_DESC_DATA0_IOC_; 1818 + tx->frame_last = tx->frame_first; 1818 1819 } 1819 1820 tx_descriptor = &tx->ring_cpu_ptr[tx->frame_tail]; 1820 1821 tx_descriptor->data0 = cpu_to_le32(tx->frame_data0); ··· 1885 1884 tx->frame_first = 0; 1886 1885 tx->frame_data0 = 0; 1887 1886 tx->frame_tail = 0; 1887 + tx->frame_last = 0; 1888 1888 return -ENOMEM; 1889 1889 } 1890 1890 ··· 1926 1924 TX_DESC_DATA0_DTYPE_DATA_) { 1927 1925 tx->frame_data0 |= TX_DESC_DATA0_LS_; 1928 1926 tx->frame_data0 |= TX_DESC_DATA0_IOC_; 1927 + tx->frame_last = tx->frame_tail; 1929 1928 } 1930 1929 1931 - tx_descriptor = &tx->ring_cpu_ptr[tx->frame_tail]; 1932 - buffer_info = &tx->buffer_info[tx->frame_tail]; 1930 + tx_descriptor = &tx->ring_cpu_ptr[tx->frame_last]; 1931 + buffer_info = &tx->buffer_info[tx->frame_last]; 1933 1932 buffer_info->skb = skb; 1934 1933 if (time_stamp) 1935 1934 buffer_info->flags |= TX_BUFFER_INFO_FLAG_TIMESTAMP_REQUESTED; 1936 1935 if (ignore_sync) 1937 1936 buffer_info->flags |= TX_BUFFER_INFO_FLAG_IGNORE_SYNC; 1938 1937 1938 + tx_descriptor = &tx->ring_cpu_ptr[tx->frame_tail]; 1939 1939 tx_descriptor->data0 = cpu_to_le32(tx->frame_data0); 1940 1940 tx->frame_tail = lan743x_tx_next_index(tx, tx->frame_tail); 1941 1941 tx->last_tail = tx->frame_tail;
+1
drivers/net/ethernet/microchip/lan743x_main.h
··· 980 980 u32 frame_first; 981 981 u32 frame_data0; 982 982 u32 frame_tail; 983 + u32 frame_last; 983 984 984 985 struct lan743x_tx_buffer_info *buffer_info; 985 986
+6
drivers/net/ethernet/mscc/ocelot.c
··· 830 830 int ocelot_vlan_add(struct ocelot *ocelot, int port, u16 vid, bool pvid, 831 831 bool untagged) 832 832 { 833 + struct ocelot_port *ocelot_port = ocelot->ports[port]; 833 834 int err; 834 835 835 836 /* Ignore VID 0 added to our RX filter by the 8021q module, since ··· 848 847 if (pvid) { 849 848 err = ocelot_port_set_pvid(ocelot, port, 850 849 ocelot_bridge_vlan_find(ocelot, vid)); 850 + if (err) 851 + return err; 852 + } else if (ocelot_port->pvid_vlan && 853 + ocelot_bridge_vlan_find(ocelot, vid) == ocelot_port->pvid_vlan) { 854 + err = ocelot_port_set_pvid(ocelot, port, NULL); 851 855 if (err) 852 856 return err; 853 857 }
+2 -2
drivers/net/ethernet/realtek/rtase/rtase_main.c
··· 1985 1985 1986 1986 time_us = min_t(int, time_us, RTASE_MITI_MAX_TIME); 1987 1987 1988 - msb = fls(time_us); 1989 - if (msb >= RTASE_MITI_COUNT_BIT_NUM) { 1988 + if (time_us > RTASE_MITI_TIME_COUNT_MASK) { 1989 + msb = fls(time_us); 1990 1990 time_unit = msb - RTASE_MITI_COUNT_BIT_NUM; 1991 1991 time_count = time_us >> (msb - RTASE_MITI_COUNT_BIT_NUM); 1992 1992 } else {
+29 -7
drivers/net/ethernet/vertexcom/mse102x.c
··· 6 6 7 7 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 8 8 9 + #include <linux/if_vlan.h> 9 10 #include <linux/interrupt.h> 10 11 #include <linux/module.h> 11 12 #include <linux/kernel.h> ··· 34 33 #define CMD_CTR (0x2 << CMD_SHIFT) 35 34 36 35 #define CMD_MASK GENMASK(15, CMD_SHIFT) 37 - #define LEN_MASK GENMASK(CMD_SHIFT - 1, 0) 36 + #define LEN_MASK GENMASK(CMD_SHIFT - 2, 0) 38 37 39 38 #define DET_CMD_LEN 4 40 39 #define DET_SOF_LEN 2 ··· 263 262 } 264 263 265 264 static int mse102x_rx_frame_spi(struct mse102x_net *mse, u8 *buff, 266 - unsigned int frame_len) 265 + unsigned int frame_len, bool drop) 267 266 { 268 267 struct mse102x_net_spi *mses = to_mse102x_spi(mse); 269 268 struct spi_transfer *xfer = &mses->spi_xfer; ··· 281 280 netdev_err(mse->ndev, "%s: spi_sync() failed: %d\n", 282 281 __func__, ret); 283 282 mse->stats.xfer_err++; 283 + } else if (drop) { 284 + netdev_dbg(mse->ndev, "%s: Drop frame\n", __func__); 285 + ret = -EINVAL; 284 286 } else if (*sof != cpu_to_be16(DET_SOF)) { 285 287 netdev_dbg(mse->ndev, "%s: SPI start of frame is invalid (0x%04x)\n", 286 288 __func__, *sof); ··· 311 307 struct sk_buff *skb; 312 308 unsigned int rxalign; 313 309 unsigned int rxlen; 310 + bool drop = false; 314 311 __be16 rx = 0; 315 312 u16 cmd_resp; 316 313 u8 *rxpkt; ··· 334 329 net_dbg_ratelimited("%s: Unexpected response (0x%04x)\n", 335 330 __func__, cmd_resp); 336 331 mse->stats.invalid_rts++; 337 - return; 332 + drop = true; 333 + goto drop; 338 334 } 339 335 340 336 net_dbg_ratelimited("%s: Unexpected response to first CMD\n", ··· 343 337 } 344 338 345 339 rxlen = cmd_resp & LEN_MASK; 346 - if (!rxlen) { 347 - net_dbg_ratelimited("%s: No frame length defined\n", __func__); 340 + if (rxlen < ETH_ZLEN || rxlen > VLAN_ETH_FRAME_LEN) { 341 + net_dbg_ratelimited("%s: Invalid frame length: %d\n", __func__, 342 + rxlen); 348 343 mse->stats.invalid_len++; 349 - return; 344 + drop = true; 350 345 } 346 + 347 + /* In case of a invalid CMD_RTS, the frame must be consumed anyway. 348 + * So assume the maximum possible frame length. 349 + */ 350 + drop: 351 + if (drop) 352 + rxlen = VLAN_ETH_FRAME_LEN; 351 353 352 354 rxalign = ALIGN(rxlen + DET_SOF_LEN + DET_DFT_LEN, 4); 353 355 skb = netdev_alloc_skb_ip_align(mse->ndev, rxalign); ··· 367 353 * They are copied, but ignored. 368 354 */ 369 355 rxpkt = skb_put(skb, rxlen) - DET_SOF_LEN; 370 - if (mse102x_rx_frame_spi(mse, rxpkt, rxlen)) { 356 + if (mse102x_rx_frame_spi(mse, rxpkt, rxlen, drop)) { 371 357 mse->ndev->stats.rx_errors++; 372 358 dev_kfree_skb(skb); 373 359 return; ··· 523 509 static int mse102x_net_open(struct net_device *ndev) 524 510 { 525 511 struct mse102x_net *mse = netdev_priv(ndev); 512 + struct mse102x_net_spi *mses = to_mse102x_spi(mse); 526 513 int ret; 527 514 528 515 ret = request_threaded_irq(ndev->irq, NULL, mse102x_irq, IRQF_ONESHOT, ··· 538 523 netif_start_queue(ndev); 539 524 540 525 netif_carrier_on(ndev); 526 + 527 + /* The SPI interrupt can stuck in case of pending packet(s). 528 + * So poll for possible packet(s) to re-arm the interrupt. 529 + */ 530 + mutex_lock(&mses->lock); 531 + mse102x_rx_pkt_spi(mse); 532 + mutex_unlock(&mses->lock); 541 533 542 534 netif_dbg(mse, ifup, ndev, "network device up\n"); 543 535
+2 -1
drivers/net/mdio/mdio-mux-meson-gxl.c
··· 17 17 #define REG2_LEDACT GENMASK(23, 22) 18 18 #define REG2_LEDLINK GENMASK(25, 24) 19 19 #define REG2_DIV4SEL BIT(27) 20 + #define REG2_REVERSED BIT(28) 20 21 #define REG2_ADCBYPASS BIT(30) 21 22 #define REG2_CLKINSEL BIT(31) 22 23 #define ETH_REG3 0x4 ··· 66 65 * The only constraint is that it must match the one in 67 66 * drivers/net/phy/meson-gxl.c to properly match the PHY. 68 67 */ 69 - writel(FIELD_PREP(REG2_PHYID, EPHY_GXL_ID), 68 + writel(REG2_REVERSED | FIELD_PREP(REG2_PHYID, EPHY_GXL_ID), 70 69 priv->regs + ETH_REG2); 71 70 72 71 /* Enable the internal phy */
+2 -14
drivers/net/usb/rndis_host.c
··· 630 630 .tx_fixup = rndis_tx_fixup, 631 631 }; 632 632 633 - static const struct driver_info wwan_rndis_info = { 634 - .description = "Mobile Broadband RNDIS device", 635 - .flags = FLAG_WWAN | FLAG_POINTTOPOINT | FLAG_FRAMING_RN | FLAG_NO_SETINT, 636 - .bind = rndis_bind, 637 - .unbind = rndis_unbind, 638 - .status = rndis_status, 639 - .rx_fixup = rndis_rx_fixup, 640 - .tx_fixup = rndis_tx_fixup, 641 - }; 642 - 643 633 /*-------------------------------------------------------------------------*/ 644 634 645 635 static const struct usb_device_id products [] = { ··· 666 676 USB_INTERFACE_INFO(USB_CLASS_WIRELESS_CONTROLLER, 1, 3), 667 677 .driver_info = (unsigned long) &rndis_info, 668 678 }, { 669 - /* Mobile Broadband Modem, seen in Novatel Verizon USB730L and 670 - * Telit FN990A (RNDIS) 671 - */ 679 + /* Novatel Verizon USB730L */ 672 680 USB_INTERFACE_INFO(USB_CLASS_MISC, 4, 1), 673 - .driver_info = (unsigned long)&wwan_rndis_info, 681 + .driver_info = (unsigned long) &rndis_info, 674 682 }, 675 683 { }, // END 676 684 };
+1 -1
drivers/net/vmxnet3/vmxnet3_xdp.c
··· 397 397 398 398 xdp_init_buff(&xdp, PAGE_SIZE, &rq->xdp_rxq); 399 399 xdp_prepare_buff(&xdp, page_address(page), rq->page_pool->p.offset, 400 - rbi->len, false); 400 + rcd->len, false); 401 401 xdp_buff_clear_frags_flag(&xdp); 402 402 403 403 xdp_prog = rcu_dereference(rq->adapter->xdp_bpf_prog);
+3
drivers/nvme/target/core.c
··· 324 324 325 325 lockdep_assert_held(&nvmet_config_sem); 326 326 327 + if (port->disc_addr.trtype == NVMF_TRTYPE_MAX) 328 + return -EINVAL; 329 + 327 330 ops = nvmet_transports[port->disc_addr.trtype]; 328 331 if (!ops) { 329 332 up_write(&nvmet_config_sem);
+32 -8
drivers/nvmem/core.c
··· 594 594 cell->nbits = info->nbits; 595 595 cell->np = info->np; 596 596 597 - if (cell->nbits) 597 + if (cell->nbits) { 598 598 cell->bytes = DIV_ROUND_UP(cell->nbits + cell->bit_offset, 599 599 BITS_PER_BYTE); 600 + cell->raw_len = ALIGN(cell->bytes, nvmem->word_size); 601 + } 600 602 601 603 if (!IS_ALIGNED(cell->offset, nvmem->stride)) { 602 604 dev_err(&nvmem->dev, 603 605 "cell %s unaligned to nvmem stride %d\n", 604 606 cell->name ?: "<unknown>", nvmem->stride); 605 607 return -EINVAL; 608 + } 609 + 610 + if (!IS_ALIGNED(cell->raw_len, nvmem->word_size)) { 611 + dev_err(&nvmem->dev, 612 + "cell %s raw len %zd unaligned to nvmem word size %d\n", 613 + cell->name ?: "<unknown>", cell->raw_len, 614 + nvmem->word_size); 615 + 616 + if (info->raw_len) 617 + return -EINVAL; 618 + 619 + cell->raw_len = ALIGN(cell->raw_len, nvmem->word_size); 606 620 } 607 621 608 622 return 0; ··· 851 837 if (addr && len == (2 * sizeof(u32))) { 852 838 info.bit_offset = be32_to_cpup(addr++); 853 839 info.nbits = be32_to_cpup(addr); 854 - if (info.bit_offset >= BITS_PER_BYTE || info.nbits < 1) { 840 + if (info.bit_offset >= BITS_PER_BYTE * info.bytes || 841 + info.nbits < 1 || 842 + info.bit_offset + info.nbits > BITS_PER_BYTE * info.bytes) { 855 843 dev_err(dev, "nvmem: invalid bits on %pOF\n", child); 856 844 of_node_put(child); 857 845 return -EINVAL; ··· 1646 1630 static void nvmem_shift_read_buffer_in_place(struct nvmem_cell_entry *cell, void *buf) 1647 1631 { 1648 1632 u8 *p, *b; 1649 - int i, extra, bit_offset = cell->bit_offset; 1633 + int i, extra, bytes_offset; 1634 + int bit_offset = cell->bit_offset; 1650 1635 1651 1636 p = b = buf; 1652 - if (bit_offset) { 1637 + 1638 + bytes_offset = bit_offset / BITS_PER_BYTE; 1639 + b += bytes_offset; 1640 + bit_offset %= BITS_PER_BYTE; 1641 + 1642 + if (bit_offset % BITS_PER_BYTE) { 1653 1643 /* First shift */ 1654 - *b++ >>= bit_offset; 1644 + *p = *b++ >> bit_offset; 1655 1645 1656 1646 /* setup rest of the bytes if any */ 1657 
1647 for (i = 1; i < cell->bytes; i++) { 1658 1648 /* Get bits from next byte and shift them towards msb */ 1659 - *p |= *b << (BITS_PER_BYTE - bit_offset); 1649 + *p++ |= *b << (BITS_PER_BYTE - bit_offset); 1660 1650 1661 - p = b; 1662 - *b++ >>= bit_offset; 1651 + *p = *b++ >> bit_offset; 1663 1652 } 1653 + } else if (p != b) { 1654 + memmove(p, b, cell->bytes - bytes_offset); 1655 + p += cell->bytes - 1; 1664 1656 } else { 1665 1657 /* point to the msb */ 1666 1658 p += cell->bytes - 1;
+20 -6
drivers/nvmem/qfprom.c
··· 321 321 unsigned int reg, void *_val, size_t bytes) 322 322 { 323 323 struct qfprom_priv *priv = context; 324 - u8 *val = _val; 325 - int i = 0, words = bytes; 324 + u32 *val = _val; 326 325 void __iomem *base = priv->qfpcorrected; 326 + int words = DIV_ROUND_UP(bytes, sizeof(u32)); 327 + int i; 327 328 328 329 if (read_raw_data && priv->qfpraw) 329 330 base = priv->qfpraw; 330 331 331 - while (words--) 332 - *val++ = readb(base + reg + i++); 332 + for (i = 0; i < words; i++) 333 + *val++ = readl(base + reg + i * sizeof(u32)); 333 334 334 335 return 0; 336 + } 337 + 338 + /* Align reads to word boundary */ 339 + static void qfprom_fixup_dt_cell_info(struct nvmem_device *nvmem, 340 + struct nvmem_cell_info *cell) 341 + { 342 + unsigned int byte_offset = cell->offset % sizeof(u32); 343 + 344 + cell->bit_offset += byte_offset * BITS_PER_BYTE; 345 + cell->offset -= byte_offset; 346 + if (byte_offset && !cell->nbits) 347 + cell->nbits = cell->bytes * BITS_PER_BYTE; 335 348 } 336 349 337 350 static void qfprom_runtime_disable(void *data) ··· 371 358 struct nvmem_config econfig = { 372 359 .name = "qfprom", 373 360 .add_legacy_fixed_of_cells = true, 374 - .stride = 1, 375 - .word_size = 1, 361 + .stride = 4, 362 + .word_size = 4, 376 363 .id = NVMEM_DEVID_AUTO, 377 364 .reg_read = qfprom_reg_read, 365 + .fixup_dt_cell_info = qfprom_fixup_dt_cell_info, 378 366 }; 379 367 struct device *dev = &pdev->dev; 380 368 struct resource *res;
+15 -2
drivers/nvmem/rockchip-otp.c
··· 59 59 #define RK3588_OTPC_AUTO_EN 0x08 60 60 #define RK3588_OTPC_INT_ST 0x84 61 61 #define RK3588_OTPC_DOUT0 0x20 62 - #define RK3588_NO_SECURE_OFFSET 0x300 63 62 #define RK3588_NBYTES 4 64 63 #define RK3588_BURST_NUM 1 65 64 #define RK3588_BURST_SHIFT 8 ··· 68 69 69 70 struct rockchip_data { 70 71 int size; 72 + int read_offset; 71 73 const char * const *clks; 72 74 int num_clks; 73 75 nvmem_reg_read_t reg_read; ··· 196 196 addr_start = round_down(offset, RK3588_NBYTES) / RK3588_NBYTES; 197 197 addr_end = round_up(offset + bytes, RK3588_NBYTES) / RK3588_NBYTES; 198 198 addr_len = addr_end - addr_start; 199 - addr_start += RK3588_NO_SECURE_OFFSET; 199 + addr_start += otp->data->read_offset / RK3588_NBYTES; 200 200 201 201 buf = kzalloc(array_size(addr_len, RK3588_NBYTES), GFP_KERNEL); 202 202 if (!buf) ··· 274 274 .reg_read = px30_otp_read, 275 275 }; 276 276 277 + static const struct rockchip_data rk3576_data = { 278 + .size = 0x100, 279 + .read_offset = 0x700, 280 + .clks = px30_otp_clocks, 281 + .num_clks = ARRAY_SIZE(px30_otp_clocks), 282 + .reg_read = rk3588_otp_read, 283 + }; 284 + 277 285 static const char * const rk3588_otp_clocks[] = { 278 286 "otp", "apb_pclk", "phy", "arb", 279 287 }; 280 288 281 289 static const struct rockchip_data rk3588_data = { 282 290 .size = 0x400, 291 + .read_offset = 0xc00, 283 292 .clks = rk3588_otp_clocks, 284 293 .num_clks = ARRAY_SIZE(rk3588_otp_clocks), 285 294 .reg_read = rk3588_otp_read, ··· 302 293 { 303 294 .compatible = "rockchip,rk3308-otp", 304 295 .data = &px30_data, 296 + }, 297 + { 298 + .compatible = "rockchip,rk3576-otp", 299 + .data = &rk3576_data, 305 300 }, 306 301 { 307 302 .compatible = "rockchip,rk3588-otp",
+4
drivers/pci/setup-bus.c
··· 187 187 panic("%s: kzalloc() failed!\n", __func__); 188 188 tmp->res = r; 189 189 tmp->dev = dev; 190 + tmp->start = r->start; 191 + tmp->end = r->end; 192 + tmp->flags = r->flags; 190 193 191 194 /* Fallback is smallest one or list is empty */ 192 195 n = head; ··· 548 545 pci_dbg(dev, "%s %pR: releasing\n", res_name, res); 549 546 550 547 release_resource(res); 548 + restore_dev_resource(dev_res); 551 549 } 552 550 /* Restore start/end/flags from saved list */ 553 551 list_for_each_entry(save_res, &save_head, list)
+3 -4
drivers/platform/x86/amd/pmc/pmc.c
··· 644 644 struct smu_metrics table; 645 645 int rc; 646 646 647 - /* CZN: Ensure that future s0i3 entry attempts at least 10ms passed */ 648 - if (pdev->cpu_id == AMD_CPU_ID_CZN && !get_metrics_table(pdev, &table) && 649 - table.s0i3_last_entry_status) 650 - usleep_range(10000, 20000); 647 + /* Avoid triggering OVP */ 648 + if (!get_metrics_table(pdev, &table) && table.s0i3_last_entry_status) 649 + msleep(2500); 651 650 652 651 /* Dump the IdleMask before we add to the STB */ 653 652 amd_pmc_idlemask_read(pdev, pdev->dev, NULL);
+10 -1
drivers/platform/x86/asus-wmi.c
··· 304 304 305 305 u32 kbd_rgb_dev; 306 306 bool kbd_rgb_state_available; 307 + bool oobe_state_available; 307 308 308 309 u8 throttle_thermal_policy_mode; 309 310 u32 throttle_thermal_policy_dev; ··· 1827 1826 goto error; 1828 1827 } 1829 1828 1830 - if (asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_OOBE)) { 1829 + if (asus->oobe_state_available) { 1831 1830 /* 1832 1831 * Disable OOBE state, so that e.g. the keyboard backlight 1833 1832 * works. ··· 4724 4723 asus->egpu_enable_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_EGPU); 4725 4724 asus->dgpu_disable_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_DGPU); 4726 4725 asus->kbd_rgb_state_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_TUF_RGB_STATE); 4726 + asus->oobe_state_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_OOBE); 4727 4727 asus->ally_mcu_usb_switch = acpi_has_method(NULL, ASUS_USB0_PWR_EC0_CSEE) 4728 4728 && dmi_check_system(asus_ally_mcu_quirk); 4729 4729 ··· 4972 4970 } 4973 4971 if (!IS_ERR_OR_NULL(asus->kbd_led.dev)) 4974 4972 kbd_led_update(asus); 4973 + if (asus->oobe_state_available) { 4974 + /* 4975 + * Disable OOBE state, so that e.g. the keyboard backlight 4976 + * works. 4977 + */ 4978 + asus_wmi_set_devstate(ASUS_WMI_DEVID_OOBE, 1, NULL); 4979 + } 4975 4980 4976 4981 if (asus_wmi_has_fnlock_key(asus)) 4977 4982 asus_wmi_fnlock_update(asus);
+10 -4
drivers/platform/x86/dell/alienware-wmi-wmax.c
··· 70 70 .driver_data = &generic_quirks, 71 71 }, 72 72 { 73 + .ident = "Alienware m15 R7", 74 + .matches = { 75 + DMI_MATCH(DMI_SYS_VENDOR, "Alienware"), 76 + DMI_MATCH(DMI_PRODUCT_NAME, "Alienware m15 R7"), 77 + }, 78 + .driver_data = &generic_quirks, 79 + }, 80 + { 73 81 .ident = "Alienware m16 R1", 74 82 .matches = { 75 83 DMI_MATCH(DMI_SYS_VENDOR, "Alienware"), ··· 663 655 for (u32 i = 0; i < sys_desc[3]; i++) { 664 656 ret = wmax_thermal_information(priv->wdev, WMAX_OPERATION_LIST_IDS, 665 657 i + first_mode, &out_data); 666 - 667 - if (ret == -EIO) 668 - return ret; 669 - 670 658 if (ret == -EBADRQC) 671 659 break; 660 + if (ret) 661 + return ret; 672 662 673 663 if (!is_wmax_thermal_code(out_data)) 674 664 continue;
+16
drivers/platform/x86/ideapad-laptop.c
··· 1294 1294 /* Specific to some newer models */ 1295 1295 { KE_KEY, 0x3e | IDEAPAD_WMI_KEY, { KEY_MICMUTE } }, 1296 1296 { KE_KEY, 0x3f | IDEAPAD_WMI_KEY, { KEY_RFKILL } }, 1297 + /* Star- (User Assignable Key) */ 1298 + { KE_KEY, 0x44 | IDEAPAD_WMI_KEY, { KEY_PROG1 } }, 1299 + /* Eye */ 1300 + { KE_KEY, 0x45 | IDEAPAD_WMI_KEY, { KEY_PROG3 } }, 1301 + /* Performance toggle also Fn+Q, handled inside ideapad_wmi_notify() */ 1302 + { KE_KEY, 0x3d | IDEAPAD_WMI_KEY, { KEY_PROG4 } }, 1303 + /* shift + prtsc */ 1304 + { KE_KEY, 0x2d | IDEAPAD_WMI_KEY, { KEY_CUT } }, 1305 + { KE_KEY, 0x29 | IDEAPAD_WMI_KEY, { KEY_TOUCHPAD_TOGGLE } }, 1306 + { KE_KEY, 0x2a | IDEAPAD_WMI_KEY, { KEY_ROOT_MENU } }, 1297 1307 1298 1308 { KE_END }, 1299 1309 }; ··· 2089 2079 2090 2080 dev_dbg(&wdev->dev, "WMI fn-key event: 0x%llx\n", 2091 2081 data->integer.value); 2082 + 2083 + /* performance button triggered by 0x3d */ 2084 + if (data->integer.value == 0x3d && priv->dytc) { 2085 + platform_profile_cycle(); 2086 + break; 2087 + } 2092 2088 2093 2089 /* 0x02 FnLock, 0x03 Esc */ 2094 2090 if (data->integer.value == 0x02 || data->integer.value == 0x03)
+11 -10
drivers/platform/x86/intel/hid.c
··· 44 44 MODULE_AUTHOR("Alex Hung"); 45 45 46 46 static const struct acpi_device_id intel_hid_ids[] = { 47 - {"INT33D5", 0}, 48 - {"INTC1051", 0}, 49 - {"INTC1054", 0}, 50 - {"INTC1070", 0}, 51 - {"INTC1076", 0}, 52 - {"INTC1077", 0}, 53 - {"INTC1078", 0}, 54 - {"INTC107B", 0}, 55 - {"INTC10CB", 0}, 56 - {"", 0}, 47 + { "INT33D5" }, 48 + { "INTC1051" }, 49 + { "INTC1054" }, 50 + { "INTC1070" }, 51 + { "INTC1076" }, 52 + { "INTC1077" }, 53 + { "INTC1078" }, 54 + { "INTC107B" }, 55 + { "INTC10CB" }, 56 + { "INTC10CC" }, 57 + { } 57 58 }; 58 59 MODULE_DEVICE_TABLE(acpi, intel_hid_ids); 59 60
+9 -4
drivers/platform/x86/intel/uncore-frequency/uncore-frequency.c
··· 146 146 { 147 147 struct uncore_data *data; 148 148 int target; 149 + int ret; 149 150 150 151 /* Check if there is an online cpu in the package for uncore MSR */ 151 152 target = cpumask_any_and(&uncore_cpu_mask, topology_die_cpumask(cpu)); 152 153 if (target < nr_cpu_ids) 153 154 return 0; 154 - 155 - /* Use this CPU on this die as a control CPU */ 156 - cpumask_set_cpu(cpu, &uncore_cpu_mask); 157 155 158 156 data = uncore_get_instance(cpu); 159 157 if (!data) ··· 161 163 data->die_id = topology_die_id(cpu); 162 164 data->domain_id = UNCORE_DOMAIN_ID_INVALID; 163 165 164 - return uncore_freq_add_entry(data, cpu); 166 + ret = uncore_freq_add_entry(data, cpu); 167 + if (ret) 168 + return ret; 169 + 170 + /* Use this CPU on this die as a control CPU */ 171 + cpumask_set_cpu(cpu, &uncore_cpu_mask); 172 + 173 + return 0; 165 174 } 166 175 167 176 static int uncore_event_cpu_offline(unsigned int cpu)
+1 -1
drivers/pps/generators/pps_gen_tio.c
··· 230 230 hrtimer_setup(&tio->timer, hrtimer_callback, CLOCK_REALTIME, 231 231 HRTIMER_MODE_ABS); 232 232 spin_lock_init(&tio->lock); 233 - platform_set_drvdata(pdev, &tio); 233 + platform_set_drvdata(pdev, tio); 234 234 235 235 return 0; 236 236 }
+50 -2
drivers/ptp/ptp_ocp.c
··· 2578 2578 .set_output = ptp_ocp_sma_fb_set_output, 2579 2579 }; 2580 2580 2581 + static int 2582 + ptp_ocp_sma_adva_set_output(struct ptp_ocp *bp, int sma_nr, u32 val) 2583 + { 2584 + u32 reg, mask, shift; 2585 + unsigned long flags; 2586 + u32 __iomem *gpio; 2587 + 2588 + gpio = sma_nr > 2 ? &bp->sma_map1->gpio2 : &bp->sma_map2->gpio2; 2589 + shift = sma_nr & 1 ? 0 : 16; 2590 + 2591 + mask = 0xffff << (16 - shift); 2592 + 2593 + spin_lock_irqsave(&bp->lock, flags); 2594 + 2595 + reg = ioread32(gpio); 2596 + reg = (reg & mask) | (val << shift); 2597 + 2598 + iowrite32(reg, gpio); 2599 + 2600 + spin_unlock_irqrestore(&bp->lock, flags); 2601 + 2602 + return 0; 2603 + } 2604 + 2605 + static int 2606 + ptp_ocp_sma_adva_set_inputs(struct ptp_ocp *bp, int sma_nr, u32 val) 2607 + { 2608 + u32 reg, mask, shift; 2609 + unsigned long flags; 2610 + u32 __iomem *gpio; 2611 + 2612 + gpio = sma_nr > 2 ? &bp->sma_map2->gpio1 : &bp->sma_map1->gpio1; 2613 + shift = sma_nr & 1 ? 0 : 16; 2614 + 2615 + mask = 0xffff << (16 - shift); 2616 + 2617 + spin_lock_irqsave(&bp->lock, flags); 2618 + 2619 + reg = ioread32(gpio); 2620 + reg = (reg & mask) | (val << shift); 2621 + 2622 + iowrite32(reg, gpio); 2623 + 2624 + spin_unlock_irqrestore(&bp->lock, flags); 2625 + 2626 + return 0; 2627 + } 2628 + 2581 2629 static const struct ocp_sma_op ocp_adva_sma_op = { 2582 2630 .tbl = { ptp_ocp_adva_sma_in, ptp_ocp_adva_sma_out }, 2583 2631 .init = ptp_ocp_sma_fb_init, 2584 2632 .get = ptp_ocp_sma_fb_get, 2585 - .set_inputs = ptp_ocp_sma_fb_set_inputs, 2586 - .set_output = ptp_ocp_sma_fb_set_output, 2633 + .set_inputs = ptp_ocp_sma_adva_set_inputs, 2634 + .set_output = ptp_ocp_sma_adva_set_output, 2587 2635 }; 2588 2636 2589 2637 static int
+7 -1
drivers/scsi/mpi3mr/mpi3mr_fw.c
··· 174 174 char *desc = NULL; 175 175 u16 event; 176 176 177 + if (!(mrioc->logging_level & MPI3_DEBUG_EVENT)) 178 + return; 179 + 177 180 event = event_reply->event; 178 181 179 182 switch (event) { ··· 454 451 return 0; 455 452 } 456 453 454 + atomic_set(&mrioc->admin_pend_isr, 0); 457 455 reply_desc = (struct mpi3_default_reply_descriptor *)mrioc->admin_reply_base + 458 456 admin_reply_ci; 459 457 ··· 569 565 WRITE_ONCE(op_req_q->ci, le16_to_cpu(reply_desc->request_queue_ci)); 570 566 mpi3mr_process_op_reply_desc(mrioc, reply_desc, &reply_dma, 571 567 reply_qidx); 572 - atomic_dec(&op_reply_q->pend_ios); 568 + 573 569 if (reply_dma) 574 570 mpi3mr_repost_reply_buf(mrioc, reply_dma); 575 571 num_op_reply++; ··· 2929 2925 mrioc->admin_reply_ci = 0; 2930 2926 mrioc->admin_reply_ephase = 1; 2931 2927 atomic_set(&mrioc->admin_reply_q_in_use, 0); 2928 + atomic_set(&mrioc->admin_pend_isr, 0); 2932 2929 2933 2930 if (!mrioc->admin_req_base) { 2934 2931 mrioc->admin_req_base = dma_alloc_coherent(&mrioc->pdev->dev, ··· 4658 4653 if (mrioc->admin_reply_base) 4659 4654 memset(mrioc->admin_reply_base, 0, mrioc->admin_reply_q_sz); 4660 4655 atomic_set(&mrioc->admin_reply_q_in_use, 0); 4656 + atomic_set(&mrioc->admin_pend_isr, 0); 4661 4657 4662 4658 if (mrioc->init_cmds.reply) { 4663 4659 memset(mrioc->init_cmds.reply, 0, sizeof(*mrioc->init_cmds.reply));
+24 -12
drivers/scsi/scsi.c
··· 707 707 */ 708 708 int scsi_cdl_enable(struct scsi_device *sdev, bool enable) 709 709 { 710 - struct scsi_mode_data data; 711 - struct scsi_sense_hdr sshdr; 712 - struct scsi_vpd *vpd; 713 - bool is_ata = false; 714 710 char buf[64]; 711 + bool is_ata; 715 712 int ret; 716 713 717 714 if (!sdev->cdl_supported) 718 715 return -EOPNOTSUPP; 719 716 720 717 rcu_read_lock(); 721 - vpd = rcu_dereference(sdev->vpd_pg89); 722 - if (vpd) 723 - is_ata = true; 718 + is_ata = rcu_dereference(sdev->vpd_pg89); 724 719 rcu_read_unlock(); 725 720 726 721 /* 727 722 * For ATA devices, CDL needs to be enabled with a SET FEATURES command. 728 723 */ 729 724 if (is_ata) { 725 + struct scsi_mode_data data; 726 + struct scsi_sense_hdr sshdr; 730 727 char *buf_data; 731 728 int len; 732 729 ··· 732 735 if (ret) 733 736 return -EINVAL; 734 737 735 - /* Enable CDL using the ATA feature page */ 738 + /* Enable or disable CDL using the ATA feature page */ 736 739 len = min_t(size_t, sizeof(buf), 737 740 data.length - data.header_length - 738 741 data.block_descriptor_length); 739 742 buf_data = buf + data.header_length + 740 743 data.block_descriptor_length; 741 - if (enable) 742 - buf_data[4] = 0x02; 743 - else 744 - buf_data[4] = 0; 744 + 745 + /* 746 + * If we want to enable CDL and CDL is already enabled on the 747 + * device, do nothing. This avoids needlessly resetting the CDL 748 + * statistics on the device as that is implied by the CDL enable 749 + * action. Similar to this, there is no need to do anything if 750 + * we want to disable CDL and CDL is already disabled. 
751 + */ 752 + if (enable) { 753 + if ((buf_data[4] & 0x03) == 0x02) 754 + goto out; 755 + buf_data[4] &= ~0x03; 756 + buf_data[4] |= 0x02; 757 + } else { 758 + if ((buf_data[4] & 0x03) == 0x00) 759 + goto out; 760 + buf_data[4] &= ~0x03; 761 + } 745 762 746 763 ret = scsi_mode_select(sdev, 1, 0, buf_data, len, 5 * HZ, 3, 747 764 &data, &sshdr); ··· 767 756 } 768 757 } 769 758 759 + out: 770 760 sdev->cdl_enable = enable; 771 761 772 762 return 0;
+5 -1
drivers/scsi/scsi_lib.c
··· 1253 1253 */ 1254 1254 static void scsi_cleanup_rq(struct request *rq) 1255 1255 { 1256 + struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq); 1257 + 1258 + cmd->flags = 0; 1259 + 1256 1260 if (rq->rq_flags & RQF_DONTPREP) { 1257 - scsi_mq_uninit_cmd(blk_mq_rq_to_pdu(rq)); 1261 + scsi_mq_uninit_cmd(cmd); 1258 1262 rq->rq_flags &= ~RQF_DONTPREP; 1259 1263 } 1260 1264 }
+1 -1
drivers/target/iscsi/iscsi_target.c
··· 4263 4263 spin_unlock(&iscsit_global->ts_bitmap_lock); 4264 4264 4265 4265 iscsit_stop_timers_for_cmds(conn); 4266 - iscsit_stop_nopin_response_timer(conn); 4267 4266 iscsit_stop_nopin_timer(conn); 4267 + iscsit_stop_nopin_response_timer(conn); 4268 4268 4269 4269 if (conn->conn_transport->iscsit_wait_conn) 4270 4270 conn->conn_transport->iscsit_wait_conn(conn);
+6
drivers/tty/serial/msm_serial.c
··· 1746 1746 if (!device->port.membase) 1747 1747 return -ENODEV; 1748 1748 1749 + /* Disable DM / single-character modes */ 1750 + msm_write(&device->port, 0, UARTDM_DMEN); 1751 + msm_write(&device->port, MSM_UART_CR_CMD_RESET_RX, MSM_UART_CR); 1752 + msm_write(&device->port, MSM_UART_CR_CMD_RESET_TX, MSM_UART_CR); 1753 + msm_write(&device->port, MSM_UART_CR_TX_ENABLE, MSM_UART_CR); 1754 + 1749 1755 device->con->write = msm_serial_early_write_dm; 1750 1756 return 0; 1751 1757 }
+6
drivers/tty/serial/sifive.c
··· 563 563 static int sifive_serial_startup(struct uart_port *port) 564 564 { 565 565 struct sifive_serial_port *ssp = port_to_sifive_serial_port(port); 566 + unsigned long flags; 566 567 568 + uart_port_lock_irqsave(&ssp->port, &flags); 567 569 __ssp_enable_rxwm(ssp); 570 + uart_port_unlock_irqrestore(&ssp->port, flags); 568 571 569 572 return 0; 570 573 } ··· 575 572 static void sifive_serial_shutdown(struct uart_port *port) 576 573 { 577 574 struct sifive_serial_port *ssp = port_to_sifive_serial_port(port); 575 + unsigned long flags; 578 576 577 + uart_port_lock_irqsave(&ssp->port, &flags); 579 578 __ssp_disable_rxwm(ssp); 580 579 __ssp_disable_txwm(ssp); 580 + uart_port_unlock_irqrestore(&ssp->port, flags); 581 581 } 582 582 583 583 /**
+2 -3
drivers/tty/vt/selection.c
··· 193 193 return -EFAULT; 194 194 195 195 /* 196 - * TIOCL_SELCLEAR, TIOCL_SELPOINTER and TIOCL_SELMOUSEREPORT are OK to 197 - * use without CAP_SYS_ADMIN as they do not modify the selection. 196 + * TIOCL_SELCLEAR and TIOCL_SELPOINTER are OK to use without 197 + * CAP_SYS_ADMIN as they do not modify the selection. 198 198 */ 199 199 switch (v.sel_mode) { 200 200 case TIOCL_SELCLEAR: 201 201 case TIOCL_SELPOINTER: 202 - case TIOCL_SELMOUSEREPORT: 203 202 break; 204 203 default: 205 204 if (!capable(CAP_SYS_ADMIN))
+5 -7
drivers/ufs/core/ufs-mcq.c
··· 677 677 unsigned long flags; 678 678 int err; 679 679 680 - if (!ufshcd_cmd_inflight(lrbp->cmd)) { 681 - dev_err(hba->dev, 682 - "%s: skip abort. cmd at tag %d already completed.\n", 683 - __func__, tag); 684 - return FAILED; 685 - } 686 - 687 680 /* Skip task abort in case previous aborts failed and report failure */ 688 681 if (lrbp->req_abort_skip) { 689 682 dev_err(hba->dev, "%s: skip abort. tag %d failed earlier\n", ··· 685 692 } 686 693 687 694 hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd)); 695 + if (!hwq) { 696 + dev_err(hba->dev, "%s: skip abort. cmd at tag %d already completed.\n", 697 + __func__, tag); 698 + return FAILED; 699 + } 688 700 689 701 if (ufshcd_mcq_sqe_search(hba, hwq, tag)) { 690 702 /*
+31
drivers/ufs/core/ufshcd.c
··· 278 278 .model = UFS_ANY_MODEL, 279 279 .quirk = UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM | 280 280 UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE | 281 + UFS_DEVICE_QUIRK_PA_HIBER8TIME | 281 282 UFS_DEVICE_QUIRK_RECOVERY_FROM_DL_NAC_ERRORS }, 282 283 { .wmanufacturerid = UFS_VENDOR_SKHYNIX, 283 284 .model = UFS_ANY_MODEL, ··· 5678 5677 continue; 5679 5678 5680 5679 hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd)); 5680 + if (!hwq) 5681 + continue; 5681 5682 5682 5683 if (force_compl) { 5683 5684 ufshcd_mcq_compl_all_cqes_lock(hba, hwq); ··· 8473 8470 return ret; 8474 8471 } 8475 8472 8473 + /** 8474 + * ufshcd_quirk_override_pa_h8time - Ensures proper adjustment of PA_HIBERN8TIME. 8475 + * @hba: per-adapter instance 8476 + * 8477 + * Some UFS devices require specific adjustments to the PA_HIBERN8TIME parameter 8478 + * to ensure proper hibernation timing. This function retrieves the current 8479 + * PA_HIBERN8TIME value and increments it by 100us. 8480 + */ 8481 + static void ufshcd_quirk_override_pa_h8time(struct ufs_hba *hba) 8482 + { 8483 + u32 pa_h8time; 8484 + int ret; 8485 + 8486 + ret = ufshcd_dme_get(hba, UIC_ARG_MIB(PA_HIBERN8TIME), &pa_h8time); 8487 + if (ret) { 8488 + dev_err(hba->dev, "Failed to get PA_HIBERN8TIME: %d\n", ret); 8489 + return; 8490 + } 8491 + 8492 + /* Increment by 1 to increase hibernation time by 100 µs */ 8493 + ret = ufshcd_dme_set(hba, UIC_ARG_MIB(PA_HIBERN8TIME), pa_h8time + 1); 8494 + if (ret) 8495 + dev_err(hba->dev, "Failed updating PA_HIBERN8TIME: %d\n", ret); 8496 + } 8497 + 8476 8498 static void ufshcd_tune_unipro_params(struct ufs_hba *hba) 8477 8499 { 8478 8500 ufshcd_vops_apply_dev_quirks(hba); ··· 8508 8480 8509 8481 if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE) 8510 8482 ufshcd_quirk_tune_host_pa_tactivate(hba); 8483 + 8484 + if (hba->dev_quirks & UFS_DEVICE_QUIRK_PA_HIBER8TIME) 8485 + ufshcd_quirk_override_pa_h8time(hba); 8511 8486 } 8512 8487 8513 8488 static void ufshcd_clear_dbg_ufs_stats(struct ufs_hba *hba)
+43
drivers/ufs/host/ufs-qcom.c
··· 33 33 ((((c) >> 16) & MCQ_QCFGPTR_MASK) * MCQ_QCFGPTR_UNIT) 34 34 #define MCQ_QCFG_SIZE 0x40 35 35 36 + /* De-emphasis for gear-5 */ 37 + #define DEEMPHASIS_3_5_dB 0x04 38 + #define NO_DEEMPHASIS 0x0 39 + 36 40 enum { 37 41 TSTBUS_UAWM, 38 42 TSTBUS_UARM, ··· 799 795 return ufs_qcom_icc_set_bw(host, bw_table.mem_bw, bw_table.cfg_bw); 800 796 } 801 797 798 + static void ufs_qcom_set_tx_hs_equalizer(struct ufs_hba *hba, u32 gear, u32 tx_lanes) 799 + { 800 + u32 equalizer_val; 801 + int ret, i; 802 + 803 + /* Determine the equalizer value based on the gear */ 804 + equalizer_val = (gear == 5) ? DEEMPHASIS_3_5_dB : NO_DEEMPHASIS; 805 + 806 + for (i = 0; i < tx_lanes; i++) { 807 + ret = ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(TX_HS_EQUALIZER, i), 808 + equalizer_val); 809 + if (ret) 810 + dev_err(hba->dev, "%s: failed equalizer lane %d\n", 811 + __func__, i); 812 + } 813 + } 814 + 802 815 static int ufs_qcom_pwr_change_notify(struct ufs_hba *hba, 803 816 enum ufs_notify_change_status status, 804 817 const struct ufs_pa_layer_attr *dev_max_params, ··· 867 846 dev_req_params->gear_tx, 868 847 PA_INITIAL_ADAPT); 869 848 } 849 + 850 + if (hba->dev_quirks & UFS_DEVICE_QUIRK_PA_TX_DEEMPHASIS_TUNING) 851 + ufs_qcom_set_tx_hs_equalizer(hba, 852 + dev_req_params->gear_tx, dev_req_params->lane_tx); 853 + 870 854 break; 871 855 case POST_CHANGE: 872 856 if (ufs_qcom_cfg_timers(hba, false)) { ··· 919 893 (pa_vs_config_reg1 | (1 << 12))); 920 894 } 921 895 896 + static void ufs_qcom_override_pa_tx_hsg1_sync_len(struct ufs_hba *hba) 897 + { 898 + int err; 899 + 900 + err = ufshcd_dme_peer_set(hba, UIC_ARG_MIB(PA_TX_HSG1_SYNC_LENGTH), 901 + PA_TX_HSG1_SYNC_LENGTH_VAL); 902 + if (err) 903 + dev_err(hba->dev, "Failed (%d) set PA_TX_HSG1_SYNC_LENGTH\n", err); 904 + } 905 + 922 906 static int ufs_qcom_apply_dev_quirks(struct ufs_hba *hba) 923 907 { 924 908 int err = 0; 925 909 926 910 if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME) 927 911 err = 
ufs_qcom_quirk_host_pa_saveconfigtime(hba); 912 + 913 + if (hba->dev_quirks & UFS_DEVICE_QUIRK_PA_TX_HSG1_SYNC_LENGTH) 914 + ufs_qcom_override_pa_tx_hsg1_sync_len(hba); 928 915 929 916 return err; 930 917 } ··· 953 914 { .wmanufacturerid = UFS_VENDOR_WDC, 954 915 .model = UFS_ANY_MODEL, 955 916 .quirk = UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE }, 917 + { .wmanufacturerid = UFS_VENDOR_SAMSUNG, 918 + .model = UFS_ANY_MODEL, 919 + .quirk = UFS_DEVICE_QUIRK_PA_TX_HSG1_SYNC_LENGTH | 920 + UFS_DEVICE_QUIRK_PA_TX_DEEMPHASIS_TUNING }, 956 921 {} 957 922 }; 958 923
+18
drivers/ufs/host/ufs-qcom.h
··· 122 122 TMRLUT_HW_CGC_EN | OCSC_HW_CGC_EN) 123 123 124 124 /* QUniPro Vendor specific attributes */ 125 + #define PA_TX_HSG1_SYNC_LENGTH 0x1552 125 126 #define PA_VS_CONFIG_REG1 0x9000 126 127 #define DME_VS_CORE_CLK_CTRL 0xD002 128 + #define TX_HS_EQUALIZER 0x0037 129 + 127 130 /* bit and mask definitions for DME_VS_CORE_CLK_CTRL attribute */ 128 131 #define CLK_1US_CYCLES_MASK_V4 GENMASK(27, 16) 129 132 #define CLK_1US_CYCLES_MASK GENMASK(7, 0) ··· 143 140 #define UNIPRO_CORE_CLK_FREQ_300_MHZ 300 144 141 #define UNIPRO_CORE_CLK_FREQ_201_5_MHZ 202 145 142 #define UNIPRO_CORE_CLK_FREQ_403_MHZ 403 143 + 144 + /* TX_HSG1_SYNC_LENGTH attr value */ 145 + #define PA_TX_HSG1_SYNC_LENGTH_VAL 0x4A 146 + 147 + /* 148 + * Some ufs device vendors need a different TSync length. 149 + * Enable this quirk to give an additional TX_HS_SYNC_LENGTH. 150 + */ 151 + #define UFS_DEVICE_QUIRK_PA_TX_HSG1_SYNC_LENGTH BIT(16) 152 + 153 + /* 154 + * Some ufs device vendors need a different Deemphasis setting. 155 + * Enable this quirk to tune TX Deemphasis parameters. 156 + */ 157 + #define UFS_DEVICE_QUIRK_PA_TX_DEEMPHASIS_TUNING BIT(17) 146 158 147 159 /* ICE allocator type to share AES engines among TX stream and RX stream */ 148 160 #define ICE_ALLOCATOR_TYPE 2
+2
drivers/usb/cdns3/cdns3-gadget.c
··· 1963 1963 unsigned int bit; 1964 1964 unsigned long reg; 1965 1965 1966 + local_bh_disable(); 1966 1967 spin_lock_irqsave(&priv_dev->lock, flags); 1967 1968 1968 1969 reg = readl(&priv_dev->regs->usb_ists); ··· 2005 2004 irqend: 2006 2005 writel(~0, &priv_dev->regs->ep_ien); 2007 2006 spin_unlock_irqrestore(&priv_dev->lock, flags); 2007 + local_bh_enable(); 2008 2008 2009 2009 return ret; 2010 2010 }
+31 -13
drivers/usb/chipidea/ci_hdrc_imx.c
··· 336 336 return ret; 337 337 } 338 338 339 + static void ci_hdrc_imx_disable_regulator(void *arg) 340 + { 341 + struct ci_hdrc_imx_data *data = arg; 342 + 343 + regulator_disable(data->hsic_pad_regulator); 344 + } 345 + 339 346 static int ci_hdrc_imx_probe(struct platform_device *pdev) 340 347 { 341 348 struct ci_hdrc_imx_data *data; ··· 401 394 "Failed to enable HSIC pad regulator\n"); 402 395 goto err_put; 403 396 } 397 + ret = devm_add_action_or_reset(dev, 398 + ci_hdrc_imx_disable_regulator, data); 399 + if (ret) { 400 + dev_err(dev, 401 + "Failed to add regulator devm action\n"); 402 + goto err_put; 403 + } 404 404 } 405 405 } 406 406 ··· 446 432 447 433 ret = imx_get_clks(dev); 448 434 if (ret) 449 - goto disable_hsic_regulator; 435 + goto qos_remove_request; 450 436 451 437 ret = imx_prepare_enable_clks(dev); 452 438 if (ret) 453 - goto disable_hsic_regulator; 439 + goto qos_remove_request; 454 440 455 441 ret = clk_prepare_enable(data->clk_wakeup); 456 442 if (ret) ··· 484 470 of_usb_get_phy_mode(np) == USBPHY_INTERFACE_MODE_ULPI) { 485 471 pdata.flags |= CI_HDRC_OVERRIDE_PHY_CONTROL; 486 472 data->override_phy_control = true; 487 - usb_phy_init(pdata.usb_phy); 473 + ret = usb_phy_init(pdata.usb_phy); 474 + if (ret) { 475 + dev_err(dev, "Failed to init phy\n"); 476 + goto err_clk; 477 + } 488 478 } 489 479 490 480 if (pdata.flags & CI_HDRC_SUPPORTS_RUNTIME_PM) ··· 497 479 ret = imx_usbmisc_init(data->usbmisc_data); 498 480 if (ret) { 499 481 dev_err(dev, "usbmisc init failed, ret=%d\n", ret); 500 - goto err_clk; 482 + goto phy_shutdown; 501 483 } 502 484 503 485 data->ci_pdev = ci_hdrc_add_device(dev, ··· 506 488 if (IS_ERR(data->ci_pdev)) { 507 489 ret = PTR_ERR(data->ci_pdev); 508 490 dev_err_probe(dev, ret, "ci_hdrc_add_device failed\n"); 509 - goto err_clk; 491 + goto phy_shutdown; 510 492 } 511 493 512 494 if (data->usbmisc_data) { ··· 540 522 541 523 disable_device: 542 524 ci_hdrc_remove_device(data->ci_pdev); 525 + phy_shutdown: 526 + if (data->override_phy_control) 527 + usb_phy_shutdown(data->phy); 543 528 err_clk: 544 529 clk_disable_unprepare(data->clk_wakeup); 545 530 err_wakeup_clk: 546 531 imx_disable_unprepare_clks(dev); 547 - disable_hsic_regulator: 548 - if (data->hsic_pad_regulator) 549 - /* don't overwrite original ret (cf. EPROBE_DEFER) */ 550 - regulator_disable(data->hsic_pad_regulator); 532 + qos_remove_request: 551 533 if (pdata.flags & CI_HDRC_PMQOS) 552 534 cpu_latency_qos_remove_request(&data->pm_qos_req); 553 535 data->ci_pdev = NULL; 554 536 err_put: 555 - put_device(data->usbmisc_data->dev); 537 + if (data->usbmisc_data) 538 + put_device(data->usbmisc_data->dev); 556 539 return ret; 557 540 } 558 541 ··· 575 556 clk_disable_unprepare(data->clk_wakeup); 576 557 if (data->plat_data->flags & CI_HDRC_PMQOS) 577 558 cpu_latency_qos_remove_request(&data->pm_qos_req); 578 - if (data->hsic_pad_regulator) 579 - regulator_disable(data->hsic_pad_regulator); 580 559 } 581 - put_device(data->usbmisc_data->dev); 560 + if (data->usbmisc_data) 561 + put_device(data->usbmisc_data->dev); 582 562 } 583 563 584 564 static void ci_hdrc_imx_shutdown(struct platform_device *pdev)
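The ci_hdrc_imx hunk above drops the manual `disable_hsic_regulator:` error label in favour of `devm_add_action_or_reset()`, which runs registered cleanup actions in reverse (LIFO) order at device release, and runs the action immediately when registration itself fails. A minimal userspace model of that contract — `add_action_or_reset()`, `release_all()` and the regulator counter are illustrative names, not the kernel API:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of devm_add_action_or_reset(): cleanup actions run in
 * reverse (LIFO) order when the device is released, and the action is
 * executed immediately if it cannot be recorded. */
#define MAX_ACTIONS 8

struct action { void (*fn)(void *); void *arg; };

static struct action action_stack[MAX_ACTIONS];
static int action_depth;

static int add_action_or_reset(void (*fn)(void *), void *arg)
{
	if (action_depth == MAX_ACTIONS) {
		fn(arg);	/* like the kernel helper: run the cleanup now */
		return -1;
	}
	action_stack[action_depth].fn = fn;
	action_stack[action_depth].arg = arg;
	action_depth++;
	return 0;
}

static void release_all(void)
{
	while (action_depth > 0) {
		action_depth--;
		action_stack[action_depth].fn(action_stack[action_depth].arg);
	}
}

/* Stand-in for the HSIC pad regulator's enable count. */
static int enable_count;

static void disable_regulator(void *arg)
{
	int *count = arg;
	(*count)--;
}
```

Because the devres stack unwinds in registration-reverse order, the probe error path no longer needs a dedicated label for the regulator, which is what lets the diff rename `disable_hsic_regulator:` to `qos_remove_request:`.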
+16 -5
drivers/usb/class/cdc-wdm.c
··· 726 726 rv = -EBUSY; 727 727 goto out; 728 728 } 729 - 729 + smp_rmb(); /* ordered against wdm_wwan_port_stop() */ 730 730 rv = usb_autopm_get_interface(desc->intf); 731 731 if (rv < 0) { 732 732 dev_err(&desc->intf->dev, "Error autopm - %d\n", rv); ··· 829 829 static int wdm_wwan_port_start(struct wwan_port *port) 830 830 { 831 831 struct wdm_device *desc = wwan_port_get_drvdata(port); 832 + int rv; 832 833 833 834 /* The interface is both exposed via the WWAN framework and as a 834 835 * legacy usbmisc chardev. If chardev is already open, just fail ··· 849 848 wwan_port_txon(port); 850 849 851 850 /* Start getting events */ 852 - return usb_submit_urb(desc->validity, GFP_KERNEL); 851 + rv = usb_submit_urb(desc->validity, GFP_KERNEL); 852 + if (rv < 0) { 853 + wwan_port_txoff(port); 854 + desc->manage_power(desc->intf, 0); 855 + /* this must be last lest we race with chardev open */ 856 + clear_bit(WDM_WWAN_IN_USE, &desc->flags); 857 + } 858 + 859 + return rv; 853 860 } 854 861 855 862 static void wdm_wwan_port_stop(struct wwan_port *port) ··· 868 859 poison_urbs(desc); 869 860 desc->manage_power(desc->intf, 0); 870 861 clear_bit(WDM_READ, &desc->flags); 871 - clear_bit(WDM_WWAN_IN_USE, &desc->flags); 872 862 unpoison_urbs(desc); 863 + smp_wmb(); /* ordered against wdm_open() */ 864 + /* this must be last lest we open a poisoned device */ 865 + clear_bit(WDM_WWAN_IN_USE, &desc->flags); 873 866 } 874 867 875 868 static void wdm_wwan_port_tx_complete(struct urb *urb) ··· 879 868 struct sk_buff *skb = urb->context; 880 869 struct wdm_device *desc = skb_shinfo(skb)->destructor_arg; 881 870 882 - usb_autopm_put_interface(desc->intf); 871 + usb_autopm_put_interface_async(desc->intf); 883 872 wwan_port_txon(desc->wwanp); 884 873 kfree_skb(skb); 885 874 } ··· 909 898 req->bRequestType = (USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE); 910 899 req->bRequest = USB_CDC_SEND_ENCAPSULATED_COMMAND; 911 900 req->wValue = 0; 912 - req->wIndex = desc->inum; 901 + req->wIndex = desc->inum; /* already converted */ 913 902 req->wLength = cpu_to_le16(skb->len); 914 903 915 904 skb_shinfo(skb)->destructor_arg = desc;
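The cdc-wdm hunk above moves `clear_bit(WDM_WWAN_IN_USE, ...)` after `unpoison_urbs()` and pairs an `smp_wmb()` in the stop path with an `smp_rmb()` in `wdm_open()`: a reader that wins the in-use bit must also observe that the URBs were already unpoisoned. A userspace sketch of that ordering using C11 release/acquire fences in place of the kernel barriers — flag and helper names are illustrative:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch of the wdm_wwan_port_stop()/wdm_open() ordering: publish all
 * teardown state before clearing the in-use bit, and read the bit
 * before touching that state. Names are simplified, not the driver's. */
static _Atomic bool in_use;
static int urbs_poisoned;	/* stand-in for poison_urbs()/unpoison_urbs() state */

static void port_stop(void)
{
	urbs_poisoned = 0;			   /* unpoison_urbs() */
	atomic_thread_fence(memory_order_release); /* smp_wmb() */
	atomic_store_explicit(&in_use, false, memory_order_relaxed);
}

static int chardev_open(void)
{
	if (atomic_exchange_explicit(&in_use, true, memory_order_relaxed))
		return -16;			   /* -EBUSY: WWAN port still active */
	atomic_thread_fence(memory_order_acquire); /* smp_rmb() */
	return urbs_poisoned ? -5 : 0;		   /* must never see poisoned URBs */
}
```

This is why the diff's comment says the clear "must be last lest we open a poisoned device": with the bit cleared first, an opener could slip in between `poison_urbs()` and `unpoison_urbs()`.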
+9
drivers/usb/core/quirks.c
··· 369 369 { USB_DEVICE(0x0781, 0x5583), .driver_info = USB_QUIRK_NO_LPM }, 370 370 { USB_DEVICE(0x0781, 0x5591), .driver_info = USB_QUIRK_NO_LPM }, 371 371 372 + /* SanDisk Corp. SanDisk 3.2Gen1 */ 373 + { USB_DEVICE(0x0781, 0x55a3), .driver_info = USB_QUIRK_DELAY_INIT }, 374 + 372 375 /* Realforce 87U Keyboard */ 373 376 { USB_DEVICE(0x0853, 0x011b), .driver_info = USB_QUIRK_NO_LPM }, 374 377 ··· 385 382 USB_QUIRK_LINEAR_FRAME_INTR_BINTERVAL }, 386 383 { USB_DEVICE(0x0904, 0x6103), .driver_info = 387 384 USB_QUIRK_LINEAR_FRAME_INTR_BINTERVAL }, 385 + 386 + /* Silicon Motion Flash Drive */ 387 + { USB_DEVICE(0x090c, 0x1000), .driver_info = USB_QUIRK_DELAY_INIT }, 388 388 389 389 /* Sound Devices USBPre2 */ 390 390 { USB_DEVICE(0x0926, 0x0202), .driver_info = ··· 544 538 /* Hauppauge HVR-950q */ 545 539 { USB_DEVICE(0x2040, 0x7200), .driver_info = 546 540 USB_QUIRK_CONFIG_INTF_STRINGS }, 541 + 542 + /* VLI disk */ 543 + { USB_DEVICE(0x2109, 0x0711), .driver_info = USB_QUIRK_NO_LPM }, 547 544 548 545 /* Raydium Touchscreen */ 549 546 { USB_DEVICE(0x2386, 0x3114), .driver_info = USB_QUIRK_NO_LPM },
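The quirks.c hunk above only adds table entries; the mechanism behind them is a linear scan of `{ idVendor, idProduct, driver_info }` triples at enumeration time. A self-contained sketch of that lookup, in the spirit of the core's quirk detection — the flag values and entries here are illustrative, copied from the diff's additions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal model of a USB quirk-table lookup keyed on idVendor/idProduct.
 * Flag bit values are illustrative, not the kernel's. */
#define QUIRK_NO_LPM     (1u << 0)
#define QUIRK_DELAY_INIT (1u << 1)

struct quirk_entry {
	uint16_t vid, pid;
	uint32_t quirks;
};

static const struct quirk_entry quirk_table[] = {
	{ 0x0781, 0x55a3, QUIRK_DELAY_INIT }, /* SanDisk 3.2Gen1 */
	{ 0x090c, 0x1000, QUIRK_DELAY_INIT }, /* Silicon Motion Flash Drive */
	{ 0x2109, 0x0711, QUIRK_NO_LPM },     /* VLI disk */
};

static uint32_t lookup_quirks(uint16_t vid, uint16_t pid)
{
	for (size_t i = 0; i < sizeof(quirk_table) / sizeof(quirk_table[0]); i++)
		if (quirk_table[i].vid == vid && quirk_table[i].pid == pid)
			return quirk_table[i].quirks;
	return 0;	/* no quirk: normal enumeration */
}
```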
+1 -3
drivers/usb/dwc3/dwc3-xilinx.c
··· 207 207 208 208 skip_usb3_phy: 209 209 /* ulpi reset via gpio-modepin or gpio-framework driver */ 210 - reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW); 210 + reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH); 211 211 if (IS_ERR(reset_gpio)) { 212 212 return dev_err_probe(dev, PTR_ERR(reset_gpio), 213 213 "Failed to request reset GPIO\n"); 214 214 } 215 215 216 216 if (reset_gpio) { 217 - /* Toggle ulpi to reset the phy. */ 218 - gpiod_set_value_cansleep(reset_gpio, 1); 219 217 usleep_range(5000, 10000); 220 218 gpiod_set_value_cansleep(reset_gpio, 0); 221 219 usleep_range(5000, 10000);
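The dwc3-xilinx hunk above requests the reset GPIO with `GPIOD_OUT_HIGH` instead of `GPIOD_OUT_LOW` followed by a separate assert, so the line is driven to the asserted level atomically at request time and the ULPI PHY never sees a spurious deasserted window. A toy model of that request-with-initial-value pattern — the struct and helpers are illustrative, not the gpiod API:

```c
#include <assert.h>

/* Toy model of requesting a GPIO with an initial output value: the line
 * is driven to that level as part of the request, so no separate "set
 * high" step (and no glitch window) is needed before the deassert. */
enum { GPIOD_OUT_LOW = 0, GPIOD_OUT_HIGH = 1 };

struct gpio_desc {
	int value;
	int transitions;	/* edges seen after the request */
};

static void gpiod_request(struct gpio_desc *d, int initial)
{
	d->value = initial;	/* driven at request time, no edge counted */
	d->transitions = 0;
}

static void gpiod_set(struct gpio_desc *d, int value)
{
	if (d->value != value)
		d->transitions++;
	d->value = value;
}

/* Reset pulse as in the fixed driver: request asserted, then deassert. */
static int reset_pulse_edge_count(void)
{
	struct gpio_desc d;

	gpiod_request(&d, GPIOD_OUT_HIGH);	/* asserted from the start */
	gpiod_set(&d, 0);			/* deassert after the delay */
	return d.transitions;			/* exactly one edge */
}
```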
+6
drivers/usb/dwc3/gadget.c
··· 4617 4617 if (!count) 4618 4618 return IRQ_NONE; 4619 4619 4620 + if (count > evt->length) { 4621 + dev_err_ratelimited(dwc->dev, "invalid count(%u) > evt->length(%u)\n", 4622 + count, evt->length); 4623 + return IRQ_NONE; 4624 + } 4625 + 4620 4626 evt->count = count; 4621 4627 evt->flags |= DWC3_EVENT_PENDING; 4622 4628
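The dwc3 gadget hunk above refuses to trust a hardware-reported pending-event count that exceeds the event buffer's size, returning `IRQ_NONE` instead of indexing past the buffer. The underlying rule — bounds-check any DMA'd or device-reported length before using it — can be sketched as a pure function (names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of validating a device-reported byte count against the event
 * buffer capacity before accepting it, as the dwc3 fix does.
 * Returns the accepted count, or -1 for an implausible value. */
static int accept_event_count(uint32_t buf_length, uint32_t hw_count)
{
	if (hw_count == 0 || hw_count > buf_length)
		return -1;	/* like IRQ_NONE: never index past the buffer */
	return (int)hw_count;
}
```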
+23
drivers/usb/host/ohci-pci.c
··· 165 165 return 0; 166 166 } 167 167 168 + static int ohci_quirk_loongson(struct usb_hcd *hcd) 169 + { 170 + struct pci_dev *pdev = to_pci_dev(hcd->self.controller); 171 + 172 + /* 173 + * Loongson's LS7A OHCI controller (rev 0x02) has a 174 + * flaw. MMIO register with offset 0x60/64 is treated 175 + * as legacy PS2-compatible keyboard/mouse interface. 176 + * Since OHCI only use 4KB BAR resource, LS7A OHCI's 177 + * 32KB BAR is wrapped around (the 2nd 4KB BAR space 178 + * is the same as the 1st 4KB internally). So add 4KB 179 + * offset (0x1000) to the OHCI registers as a quirk. 180 + */ 181 + if (pdev->revision == 0x2) 182 + hcd->regs += SZ_4K; /* SZ_4K = 0x1000 */ 183 + 184 + return 0; 185 + } 186 + 168 187 static int ohci_quirk_qemu(struct usb_hcd *hcd) 169 188 { 170 189 struct ohci_hcd *ohci = hcd_to_ohci(hcd); ··· 242 223 { 243 224 PCI_DEVICE(PCI_VENDOR_ID_ATI, 0x4399), 244 225 .driver_data = (unsigned long)ohci_quirk_amd700, 226 + }, 227 + { 228 + PCI_DEVICE(PCI_VENDOR_ID_LOONGSON, 0x7a24), 229 + .driver_data = (unsigned long)ohci_quirk_loongson, 245 230 }, 246 231 { 247 232 .vendor = PCI_VENDOR_ID_APPLE,
+16 -14
drivers/usb/host/xhci-hub.c
··· 1878 1878 int max_ports, port_index; 1879 1879 int sret; 1880 1880 u32 next_state; 1881 - u32 temp, portsc; 1881 + u32 portsc; 1882 1882 struct xhci_hub *rhub; 1883 1883 struct xhci_port **ports; 1884 + bool disabled_irq = false; 1884 1885 1885 1886 rhub = xhci_get_rhub(hcd); 1886 1887 ports = rhub->ports; ··· 1897 1896 return -ESHUTDOWN; 1898 1897 } 1899 1898 1900 - /* delay the irqs */ 1901 - temp = readl(&xhci->op_regs->command); 1902 - temp &= ~CMD_EIE; 1903 - writel(temp, &xhci->op_regs->command); 1904 - 1905 1899 /* bus specific resume for ports we suspended at bus_suspend */ 1906 - if (hcd->speed >= HCD_USB3) 1900 + if (hcd->speed >= HCD_USB3) { 1907 1901 next_state = XDEV_U0; 1908 - else 1902 + } else { 1909 1903 next_state = XDEV_RESUME; 1910 - 1904 + if (bus_state->bus_suspended) { 1905 + /* 1906 + * prevent port event interrupts from interfering 1907 + * with usb2 port resume process 1908 + */ 1909 + xhci_disable_interrupter(xhci->interrupters[0]); 1910 + disabled_irq = true; 1911 + } 1912 + } 1911 1913 port_index = max_ports; 1912 1914 while (port_index--) { 1913 1915 portsc = readl(ports[port_index]->addr); ··· 1978 1974 (void) readl(&xhci->op_regs->command); 1979 1975 1980 1976 bus_state->next_statechange = jiffies + msecs_to_jiffies(5); 1981 - /* re-enable irqs */ 1982 - temp = readl(&xhci->op_regs->command); 1983 - temp |= CMD_EIE; 1984 - writel(temp, &xhci->op_regs->command); 1985 - temp = readl(&xhci->op_regs->command); 1977 + /* re-enable interrupter */ 1978 + if (disabled_irq) 1979 + xhci_enable_interrupter(xhci->interrupters[0]); 1986 1980 1987 1981 spin_unlock_irqrestore(&xhci->lock, flags); 1988 1982 return 0;
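The xhci-hub hunk above stops blanket-masking interrupts with `CMD_EIE` and instead disables interrupter 0 only when USB2 ports were actually suspended, recording that decision in a local `disabled_irq` so the resume path re-enables exactly what it disabled. A compact sketch of that "remember what you disabled" pattern, with simplified names:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the xhci bus-resume fix: disable the interrupter only when
 * the USB2 resume sequence needs protection, track that in a local
 * flag, and re-enable only if this function did the disabling. */
static bool irq_enabled = true;
static int disable_calls;

static void interrupter_disable(void) { irq_enabled = false; disable_calls++; }
static void interrupter_enable(void)  { irq_enabled = true; }

static void bus_resume(bool usb3_hcd, bool bus_suspended)
{
	bool disabled_irq = false;

	if (!usb3_hcd && bus_suspended) {
		/* keep port events from racing the USB2 resume sequence */
		interrupter_disable();
		disabled_irq = true;
	}

	/* ... per-port resume work happens here ... */

	if (disabled_irq)
		interrupter_enable();
}
```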
+4 -7
drivers/usb/host/xhci-ring.c
··· 561 561 * pointer command pending because the device can choose to start any 562 562 * stream once the endpoint is on the HW schedule. 563 563 */ 564 - if (ep_state & (EP_STOP_CMD_PENDING | SET_DEQ_PENDING | EP_HALTED | 565 - EP_CLEARING_TT | EP_STALLED)) 564 + if ((ep_state & EP_STOP_CMD_PENDING) || (ep_state & SET_DEQ_PENDING) || 565 + (ep_state & EP_HALTED) || (ep_state & EP_CLEARING_TT)) 566 566 return; 567 567 568 568 trace_xhci_ring_ep_doorbell(slot_id, DB_VALUE(ep_index, stream_id)); ··· 2573 2573 2574 2574 xhci_handle_halted_endpoint(xhci, ep, td, EP_SOFT_RESET); 2575 2575 return; 2576 - case COMP_STALL_ERROR: 2577 - ep->ep_state |= EP_STALLED; 2578 - break; 2579 2576 default: 2580 2577 /* do nothing */ 2581 2578 break; ··· 2913 2916 if (xhci_spurious_success_tx_event(xhci, ep_ring)) { 2914 2917 xhci_dbg(xhci, "Spurious event dma %pad, comp_code %u after %u\n", 2915 2918 &ep_trb_dma, trb_comp_code, ep_ring->old_trb_comp_code); 2916 - ep_ring->old_trb_comp_code = trb_comp_code; 2919 + ep_ring->old_trb_comp_code = 0; 2917 2920 return 0; 2918 2921 } 2919 2922 ··· 3777 3780 * enqueue a No Op TRB, this can prevent the Setup and Data Stage 3778 3781 * TRB to be breaked by the Link TRB. 3779 3782 */ 3780 - if (trb_is_link(ep_ring->enqueue + 1)) { 3783 + if (last_trb_on_seg(ep_ring->enq_seg, ep_ring->enqueue + 1)) { 3781 3784 field = TRB_TYPE(TRB_TR_NOOP) | ep_ring->cycle_state; 3782 3785 queue_trb(xhci, ep_ring, false, 0, 0, 3783 3786 TRB_INTR_TARGET(0), field);
+5 -13
drivers/usb/host/xhci.c
··· 322 322 xhci_info(xhci, "Fault detected\n"); 323 323 } 324 324 325 - static int xhci_enable_interrupter(struct xhci_interrupter *ir) 325 + int xhci_enable_interrupter(struct xhci_interrupter *ir) 326 326 { 327 327 u32 iman; 328 328 ··· 335 335 return 0; 336 336 } 337 337 338 - static int xhci_disable_interrupter(struct xhci_interrupter *ir) 338 + int xhci_disable_interrupter(struct xhci_interrupter *ir) 339 339 { 340 340 u32 iman; 341 341 ··· 1605 1605 goto free_priv; 1606 1606 } 1607 1607 1608 - /* Class driver might not be aware ep halted due to async URB giveback */ 1609 - if (*ep_state & EP_STALLED) 1610 - dev_dbg(&urb->dev->dev, "URB %p queued before clearing halt\n", 1611 - urb); 1612 - 1613 1608 switch (usb_endpoint_type(&urb->ep->desc)) { 1614 1609 1615 1610 case USB_ENDPOINT_XFER_CONTROL: ··· 1765 1770 goto done; 1766 1771 } 1767 1772 1768 - /* In these cases no commands are pending but the endpoint is stopped */ 1769 - if (ep->ep_state & (EP_CLEARING_TT | EP_STALLED)) { 1773 + /* In this case no commands are pending but the endpoint is stopped */ 1774 + if (ep->ep_state & EP_CLEARING_TT) { 1770 1775 /* and cancelled TDs can be given back right away */ 1771 1776 xhci_dbg(xhci, "Invalidating TDs instantly on slot %d ep %d in state 0x%x\n", 1772 1777 urb->dev->slot_id, ep_index, ep->ep_state); ··· 3204 3209 3205 3210 ep = &vdev->eps[ep_index]; 3206 3211 3207 - spin_lock_irqsave(&xhci->lock, flags); 3208 - 3209 - ep->ep_state &= ~EP_STALLED; 3210 - 3211 3212 /* Bail out if toggle is already being cleared by a endpoint reset */ 3213 + spin_lock_irqsave(&xhci->lock, flags); 3212 3214 if (ep->ep_state & EP_HARD_CLEAR_TOGGLE) { 3213 3215 ep->ep_state &= ~EP_HARD_CLEAR_TOGGLE; 3214 3216 spin_unlock_irqrestore(&xhci->lock, flags);
+3 -2
drivers/usb/host/xhci.h
··· 664 664 unsigned int err_count; 665 665 unsigned int ep_state; 666 666 #define SET_DEQ_PENDING (1 << 0) 667 - #define EP_HALTED (1 << 1) /* Halted host ep handling */ 667 + #define EP_HALTED (1 << 1) /* For stall handling */ 668 668 #define EP_STOP_CMD_PENDING (1 << 2) /* For URB cancellation */ 669 669 /* Transitioning the endpoint to using streams, don't enqueue URBs */ 670 670 #define EP_GETTING_STREAMS (1 << 3) ··· 675 675 #define EP_SOFT_CLEAR_TOGGLE (1 << 7) 676 676 /* usb_hub_clear_tt_buffer is in progress */ 677 677 #define EP_CLEARING_TT (1 << 8) 678 - #define EP_STALLED (1 << 9) /* For stall handling */ 679 678 /* ---- Related to URB cancellation ---- */ 680 679 struct list_head cancelled_td_list; 681 680 struct xhci_hcd *xhci; ··· 1890 1891 struct usb_tt *tt, gfp_t mem_flags); 1891 1892 int xhci_set_interrupter_moderation(struct xhci_interrupter *ir, 1892 1893 u32 imod_interval); 1894 + int xhci_enable_interrupter(struct xhci_interrupter *ir); 1895 + int xhci_disable_interrupter(struct xhci_interrupter *ir); 1893 1896 1894 1897 /* xHCI ring, segment, TRB, and TD functions */ 1895 1898 dma_addr_t xhci_trb_virt_to_dma(struct xhci_segment *seg, union xhci_trb *trb);
+2
drivers/usb/serial/ftdi_sio.c
··· 1093 1093 { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 1) }, 1094 1094 { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 2) }, 1095 1095 { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 3) }, 1096 + /* Abacus Electrics */ 1097 + { USB_DEVICE(FTDI_VID, ABACUS_OPTICAL_PROBE_PID) }, 1096 1098 { } /* Terminating entry */ 1097 1099 }; 1098 1100
+5
drivers/usb/serial/ftdi_sio_ids.h
··· 443 443 #define LINX_FUTURE_2_PID 0xF44C /* Linx future device */ 444 444 445 445 /* 446 + * Abacus Electrics 447 + */ 448 + #define ABACUS_OPTICAL_PROBE_PID 0xf458 /* ABACUS ELECTRICS Optical Probe */ 449 + 450 + /* 446 451 * Oceanic product ids 447 452 */ 448 453 #define FTDI_OCEANIC_PID 0xF460 /* Oceanic dive instrument */
+3
drivers/usb/serial/option.c
··· 611 611 /* Sierra Wireless products */ 612 612 #define SIERRA_VENDOR_ID 0x1199 613 613 #define SIERRA_PRODUCT_EM9191 0x90d3 614 + #define SIERRA_PRODUCT_EM9291 0x90e3 614 615 615 616 /* UNISOC (Spreadtrum) products */ 616 617 #define UNISOC_VENDOR_ID 0x1782 ··· 2433 2432 { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) }, 2434 2433 { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x40) }, 2435 2434 { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0, 0) }, 2435 + { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9291, 0xff, 0xff, 0x30) }, 2436 + { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9291, 0xff, 0xff, 0x40) }, 2436 2437 { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) }, 2437 2438 { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) }, 2438 2439 { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0530, 0xff), /* TCL IK512 MBIM */
+7
drivers/usb/serial/usb-serial-simple.c
··· 100 100 { USB_DEVICE(0x09d7, 0x0100) } /* NovAtel FlexPack GPS */ 101 101 DEVICE_N(novatel_gps, NOVATEL_IDS, 3); 102 102 103 + /* OWON electronic test and measurement equipment driver */ 104 + #define OWON_IDS() \ 105 + { USB_DEVICE(0x5345, 0x1234) } /* HDS200 oscilloscopes and others */ 106 + DEVICE(owon, OWON_IDS); 107 + 103 108 /* Siemens USB/MPI adapter */ 104 109 #define SIEMENS_IDS() \ 105 110 { USB_DEVICE(0x908, 0x0004) } ··· 139 134 &motorola_tetra_device, 140 135 &nokia_device, 141 136 &novatel_gps_device, 137 + &owon_device, 142 138 &siemens_mpi_device, 143 139 &suunto_device, 144 140 &vivopay_device, ··· 159 153 MOTOROLA_TETRA_IDS(), 160 154 NOKIA_IDS(), 161 155 NOVATEL_IDS(), 156 + OWON_IDS(), 162 157 SIEMENS_IDS(), 163 158 SUUNTO_IDS(), 164 159 VIVOPAY_IDS(),
+7
drivers/usb/storage/unusual_uas.h
··· 83 83 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 84 84 US_FL_NO_REPORT_LUNS), 85 85 86 + /* Reported-by: Oliver Neukum <oneukum@suse.com> */ 87 + UNUSUAL_DEV(0x125f, 0xa94a, 0x0160, 0x0160, 88 + "ADATA", 89 + "Portable HDD CH94", 90 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 91 + US_FL_NO_ATA_1X), 92 + 86 93 /* Reported-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> */ 87 94 UNUSUAL_DEV(0x13fd, 0x3940, 0x0000, 0x9999, 88 95 "Initio Corporation",
+20 -4
drivers/usb/typec/class.c
··· 1052 1052 partner->usb_mode = USB_MODE_USB3; 1053 1053 } 1054 1054 1055 + mutex_lock(&port->partner_link_lock); 1055 1056 ret = device_register(&partner->dev); 1056 1057 if (ret) { 1057 1058 dev_err(&port->dev, "failed to register partner (%d)\n", ret); 1059 + mutex_unlock(&port->partner_link_lock); 1058 1060 put_device(&partner->dev); 1059 1061 return ERR_PTR(ret); 1060 1062 } ··· 1065 1063 typec_partner_link_device(partner, port->usb2_dev); 1066 1064 if (port->usb3_dev) 1067 1065 typec_partner_link_device(partner, port->usb3_dev); 1066 + mutex_unlock(&port->partner_link_lock); 1068 1067 1069 1068 return partner; 1070 1069 } ··· 1086 1083 1087 1084 port = to_typec_port(partner->dev.parent); 1088 1085 1089 - if (port->usb2_dev) 1086 + mutex_lock(&port->partner_link_lock); 1087 + if (port->usb2_dev) { 1090 1088 typec_partner_unlink_device(partner, port->usb2_dev); 1091 - if (port->usb3_dev) 1089 + port->usb2_dev = NULL; 1090 + } 1091 + if (port->usb3_dev) { 1092 1092 typec_partner_unlink_device(partner, port->usb3_dev); 1093 + port->usb3_dev = NULL; 1094 + } 1093 1095 1094 1096 device_unregister(&partner->dev); 1097 + mutex_unlock(&port->partner_link_lock); 1095 1098 } 1096 1099 EXPORT_SYMBOL_GPL(typec_unregister_partner); 1097 1100 ··· 2050 2041 static void typec_partner_attach(struct typec_connector *con, struct device *dev) 2051 2042 { 2052 2043 struct typec_port *port = container_of(con, struct typec_port, con); 2053 - struct typec_partner *partner = typec_get_partner(port); 2044 + struct typec_partner *partner; 2054 2045 struct usb_device *udev = to_usb_device(dev); 2055 2046 enum usb_mode usb_mode; 2056 2047 2048 + mutex_lock(&port->partner_link_lock); 2057 2049 if (udev->speed < USB_SPEED_SUPER) { 2058 2050 usb_mode = USB_MODE_USB2; 2059 2051 port->usb2_dev = dev; ··· 2063 2053 port->usb3_dev = dev; 2064 2054 } 2065 2055 2056 + partner = typec_get_partner(port); 2066 2057 if (partner) { 2067 2058 typec_partner_set_usb_mode(partner, usb_mode); 2068 2059 typec_partner_link_device(partner, dev); 2069 2060 put_device(&partner->dev); 2070 2061 } 2062 + mutex_unlock(&port->partner_link_lock); 2071 2063 } 2072 2064 2073 2065 static void typec_partner_deattach(struct typec_connector *con, struct device *dev) 2074 2066 { 2075 2067 struct typec_port *port = container_of(con, struct typec_port, con); 2076 - struct typec_partner *partner = typec_get_partner(port); 2068 + struct typec_partner *partner; 2077 2069 2070 + mutex_lock(&port->partner_link_lock); 2071 + partner = typec_get_partner(port); 2078 2072 if (partner) { 2079 2073 typec_partner_unlink_device(partner, dev); 2080 2074 put_device(&partner->dev); ··· 2088 2074 port->usb2_dev = NULL; 2089 2075 else if (port->usb3_dev == dev) 2090 2076 port->usb3_dev = NULL; 2077 + mutex_unlock(&port->partner_link_lock); 2091 2078 } 2092 2079 2093 2080 /** ··· 2629 2614 2630 2615 ida_init(&port->mode_ids); 2631 2616 mutex_init(&port->port_type_lock); 2632 2617 + mutex_init(&port->partner_link_lock); 2632 2618 2633 2619 port->id = id; 2634 2620 port->ops = cap->ops;
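The typec class.c hunk above introduces `partner_link_lock` so that partner register/unregister and per-speed device attach/detach all update `usb2_dev`/`usb3_dev` and the partner links under one mutex, instead of racing. A reduced userspace sketch of that discipline, with the mutex modelled as a held-flag so the invariant is checkable — struct and function names are simplified, not the class driver's:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the typec fix: one lock guards the partner pointer and the
 * per-speed device links, so attach/detach cannot race registration.
 * The "mutex" is a held-flag here to keep the sketch self-contained. */
struct port {
	int lock_held;
	void *partner;
	void *usb2_dev;
	void *usb3_dev;
};

static void port_lock(struct port *p)   { assert(!p->lock_held); p->lock_held = 1; }
static void port_unlock(struct port *p) { assert(p->lock_held);  p->lock_held = 0; }

static void port_init(struct port *p)
{
	p->lock_held = 0;
	p->partner = p->usb2_dev = p->usb3_dev = NULL;
}

static void partner_attach(struct port *p, void *dev, int superspeed)
{
	port_lock(p);
	if (superspeed)
		p->usb3_dev = dev;
	else
		p->usb2_dev = dev;
	/* link dev to p->partner here if one is registered */
	port_unlock(p);
}

static void partner_unregister(struct port *p)
{
	port_lock(p);
	/* unlink and clear both pointers under the same lock, as the fix does */
	p->usb2_dev = NULL;
	p->usb3_dev = NULL;
	p->partner = NULL;
	port_unlock(p);
}

static struct port g_port;
```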
+1
drivers/usb/typec/class.h
··· 59 59 enum typec_port_type port_type; 60 60 enum usb_mode usb_mode; 61 61 struct mutex port_type_lock; 62 + struct mutex partner_link_lock; 62 63 63 64 enum typec_orientation orientation; 64 65 struct typec_switch *sw;
+2
fs/bcachefs/alloc_foreground.c
··· 1425 1425 open_bucket_for_each(c, &wp->ptrs, ob, i) 1426 1426 wp->sectors_free = min(wp->sectors_free, ob->sectors_free); 1427 1427 1428 + wp->sectors_free = rounddown(wp->sectors_free, block_sectors(c)); 1429 + 1428 1430 BUG_ON(!wp->sectors_free || wp->sectors_free == UINT_MAX); 1429 1431 1430 1432 return 0;
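The alloc_foreground.c hunk above rounds the write point's `sectors_free` down to the filesystem block size, so the allocator never hands out a partial block (the matching header change then treats buckets with less than one block free as full). The `rounddown()` semantics it relies on can be sketched as:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the kernel's rounddown() as used by the bcachefs fix:
 * clamp a sector count to a whole number of filesystem blocks.
 * Assumes y != 0, like the kernel macro. */
static uint64_t rounddown_u64(uint64_t x, uint64_t y)
{
	return x - (x % y);	/* largest multiple of y that is <= x */
}
```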
+3 -1
fs/bcachefs/alloc_foreground.h
··· 110 110 unsigned i; 111 111 112 112 open_bucket_for_each(c, &wp->ptrs, ob, i) 113 - ob_push(c, !ob->sectors_free ? &ptrs : &keep, ob); 113 + ob_push(c, ob->sectors_free < block_sectors(c) 114 + ? &ptrs 115 + : &keep, ob); 114 116 wp->ptrs = keep; 115 117 116 118 mutex_unlock(&wp->lock);
+43 -38
fs/bcachefs/bcachefs_format.h
··· 366 366 #define __BKEY_PADDED(key, pad) \ 367 367 struct bkey_i key; __u64 key ## _pad[pad] 368 368 369 + enum bch_bkey_type_flags { 370 + BKEY_TYPE_strict_btree_checks = BIT(0), 371 + }; 372 + 369 373 /* 370 374 * - DELETED keys are used internally to mark keys that should be ignored but 371 375 * override keys in composition order. Their version number is ignored. ··· 387 383 * 388 384 * - WHITEOUT: for hash table btrees 389 385 */ 390 - #define BCH_BKEY_TYPES() \ 391 - x(deleted, 0) \ 392 - x(whiteout, 1) \ 393 - x(error, 2) \ 394 - x(cookie, 3) \ 395 - x(hash_whiteout, 4) \ 396 - x(btree_ptr, 5) \ 397 - x(extent, 6) \ 398 - x(reservation, 7) \ 399 - x(inode, 8) \ 400 - x(inode_generation, 9) \ 401 - x(dirent, 10) \ 402 - x(xattr, 11) \ 403 - x(alloc, 12) \ 404 - x(quota, 13) \ 405 - x(stripe, 14) \ 406 - x(reflink_p, 15) \ 407 - x(reflink_v, 16) \ 408 - x(inline_data, 17) \ 409 - x(btree_ptr_v2, 18) \ 410 - x(indirect_inline_data, 19) \ 411 - x(alloc_v2, 20) \ 412 - x(subvolume, 21) \ 413 - x(snapshot, 22) \ 414 - x(inode_v2, 23) \ 415 - x(alloc_v3, 24) \ 416 - x(set, 25) \ 417 - x(lru, 26) \ 418 - x(alloc_v4, 27) \ 419 - x(backpointer, 28) \ 420 - x(inode_v3, 29) \ 421 - x(bucket_gens, 30) \ 422 - x(snapshot_tree, 31) \ 423 - x(logged_op_truncate, 32) \ 424 - x(logged_op_finsert, 33) \ 425 - x(accounting, 34) \ 426 - x(inode_alloc_cursor, 35) 386 + #define BCH_BKEY_TYPES() \ 387 + x(deleted, 0, 0) \ 388 + x(whiteout, 1, 0) \ 389 + x(error, 2, 0) \ 390 + x(cookie, 3, 0) \ 391 + x(hash_whiteout, 4, BKEY_TYPE_strict_btree_checks) \ 392 + x(btree_ptr, 5, BKEY_TYPE_strict_btree_checks) \ 393 + x(extent, 6, BKEY_TYPE_strict_btree_checks) \ 394 + x(reservation, 7, BKEY_TYPE_strict_btree_checks) \ 395 + x(inode, 8, BKEY_TYPE_strict_btree_checks) \ 396 + x(inode_generation, 9, BKEY_TYPE_strict_btree_checks) \ 397 + x(dirent, 10, BKEY_TYPE_strict_btree_checks) \ 398 + x(xattr, 11, BKEY_TYPE_strict_btree_checks) \ 399 + x(alloc, 12, BKEY_TYPE_strict_btree_checks) \ 400 + x(quota, 13, BKEY_TYPE_strict_btree_checks) \ 401 + x(stripe, 14, BKEY_TYPE_strict_btree_checks) \ 402 + x(reflink_p, 15, BKEY_TYPE_strict_btree_checks) \ 403 + x(reflink_v, 16, BKEY_TYPE_strict_btree_checks) \ 404 + x(inline_data, 17, BKEY_TYPE_strict_btree_checks) \ 405 + x(btree_ptr_v2, 18, BKEY_TYPE_strict_btree_checks) \ 406 + x(indirect_inline_data, 19, BKEY_TYPE_strict_btree_checks) \ 407 + x(alloc_v2, 20, BKEY_TYPE_strict_btree_checks) \ 408 + x(subvolume, 21, BKEY_TYPE_strict_btree_checks) \ 409 + x(snapshot, 22, BKEY_TYPE_strict_btree_checks) \ 410 + x(inode_v2, 23, BKEY_TYPE_strict_btree_checks) \ 411 + x(alloc_v3, 24, BKEY_TYPE_strict_btree_checks) \ 412 + x(set, 25, 0) \ 413 + x(lru, 26, BKEY_TYPE_strict_btree_checks) \ 414 + x(alloc_v4, 27, BKEY_TYPE_strict_btree_checks) \ 415 + x(backpointer, 28, BKEY_TYPE_strict_btree_checks) \ 416 + x(inode_v3, 29, BKEY_TYPE_strict_btree_checks) \ 417 + x(bucket_gens, 30, BKEY_TYPE_strict_btree_checks) \ 418 + x(snapshot_tree, 31, BKEY_TYPE_strict_btree_checks) \ 419 + x(logged_op_truncate, 32, BKEY_TYPE_strict_btree_checks) \ 420 + x(logged_op_finsert, 33, BKEY_TYPE_strict_btree_checks) \ 421 + x(accounting, 34, BKEY_TYPE_strict_btree_checks) \ 422 + x(inode_alloc_cursor, 35, BKEY_TYPE_strict_btree_checks) 427 423 428 424 enum bch_bkey_type { 429 - #define x(name, nr) KEY_TYPE_##name = nr, 425 + #define x(name, nr, ...) KEY_TYPE_##name = nr, 430 426 BCH_BKEY_TYPES() 431 427 #undef x 432 428 KEY_TYPE_MAX, ··· 867 863 LE64_BITMASK(BCH_SB_SHARD_INUMS_NBITS, struct bch_sb, flags[6], 0, 4); 868 864 LE64_BITMASK(BCH_SB_WRITE_ERROR_TIMEOUT,struct bch_sb, flags[6], 4, 14); 869 865 LE64_BITMASK(BCH_SB_CSUM_ERR_RETRY_NR, struct bch_sb, flags[6], 14, 20); 866 + LE64_BITMASK(BCH_SB_CASEFOLD, struct bch_sb, flags[6], 22, 23); 870 867 static inline __u64 BCH_SB_COMPRESSION_TYPE(const struct bch_sb *sb) 872 869 {
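The bcachefs_format.h hunk above adds a third column to the `BCH_BKEY_TYPES()` x-macro list — a per-type flags field — while expansions that only need name and number swallow it with a variadic `x(name, nr, ...)`. A self-contained demo of that pattern with an illustrative three-entry list (not the real key types):

```c
#include <assert.h>
#include <string.h>

/* Demo of the x-macro flags-column change: one list, three expansions.
 * Entries and flag names are illustrative, not bcachefs's. */
#define TYPE_STRICT_CHECKS (1 << 0)

#define DEMO_TYPES() \
	x(deleted,   0, 0)                  \
	x(cookie,    3, 0)                  \
	x(btree_ptr, 5, TYPE_STRICT_CHECKS)

enum demo_type {
#define x(name, nr, ...) DT_##name = nr,	/* flags column ignored via ... */
	DEMO_TYPES()
#undef x
	DT_MAX,
};

static const char * const demo_type_names[] = {
#define x(name, nr, ...) [DT_##name] = #name,
	DEMO_TYPES()
#undef x
};

static const unsigned demo_type_flags[] = {
#define x(name, nr, flags) [DT_##name] = flags,	/* this expansion wants flags */
	DEMO_TYPES()
#undef x
};
```

Keeping one authoritative list means the enum, the name table, and the new flags table in bkey_methods.c can never drift out of sync.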
+20 -4
fs/bcachefs/bkey_methods.c
··· 21 21 #include "xattr.h" 22 22 23 23 const char * const bch2_bkey_types[] = { 24 - #define x(name, nr) #name, 24 + #define x(name, nr, ...) #name, 25 25 BCH_BKEY_TYPES() 26 26 #undef x 27 27 NULL ··· 115 115 }) 116 116 117 117 const struct bkey_ops bch2_bkey_ops[] = { 118 - #define x(name, nr) [KEY_TYPE_##name] = bch2_bkey_ops_##name, 118 + #define x(name, nr, ...) [KEY_TYPE_##name] = bch2_bkey_ops_##name, 119 119 BCH_BKEY_TYPES() 120 120 #undef x 121 121 }; ··· 155 155 #undef x 156 156 }; 157 157 158 + static const enum bch_bkey_type_flags bch2_bkey_type_flags[] = { 159 + #define x(name, nr, flags) [KEY_TYPE_##name] = flags, 160 + BCH_BKEY_TYPES() 161 + #undef x 162 + }; 163 + 158 164 const char *bch2_btree_node_type_str(enum btree_node_type type) 159 165 { 160 166 return type == BKEY_TYPE_btree ? "internal btree node" : bch2_btree_id_str(type - 1); ··· 183 177 if (type >= BKEY_TYPE_NR) 184 178 return 0; 185 179 186 - bkey_fsck_err_on(k.k->type < KEY_TYPE_MAX && 187 - (type == BKEY_TYPE_btree || (from.flags & BCH_VALIDATE_commit)) && 180 + enum bch_bkey_type_flags bkey_flags = k.k->type < KEY_TYPE_MAX 181 + ? bch2_bkey_type_flags[k.k->type] 182 + : 0; 183 + 184 + bool strict_key_type_allowed = 185 + (from.flags & BCH_VALIDATE_commit) || 186 + type == BKEY_TYPE_btree || 187 + (from.btree < BTREE_ID_NR && 188 + (bkey_flags & BKEY_TYPE_strict_btree_checks)); 189 + 190 + bkey_fsck_err_on(strict_key_type_allowed && 191 + k.k->type < KEY_TYPE_MAX && 188 192 !(bch2_key_types_allowed[type] & BIT_ULL(k.k->type)), 189 193 c, bkey_invalid_type_for_btree, 190 194 "invalid key type for btree %s (%s)",
+5 -2
fs/bcachefs/btree_iter.c
··· 2577 2577 struct bpos end) 2578 2578 { 2579 2579 if ((iter->flags & (BTREE_ITER_is_extents|BTREE_ITER_filter_snapshots)) && 2580 - !bkey_eq(iter->pos, POS_MAX)) { 2580 + !bkey_eq(iter->pos, POS_MAX) && 2581 + !((iter->flags & BTREE_ITER_is_extents) && 2582 + iter->pos.offset == U64_MAX)) { 2583 + 2581 2584 /* 2582 2585 * bkey_start_pos(), for extents, is not monotonically 2583 2586 * increasing until after filtering for snapshots: ··· 2605 2602 2606 2603 bch2_trans_verify_not_unlocked_or_in_restart(trans); 2607 2604 bch2_btree_iter_verify_entry_exit(iter); 2608 - EBUG_ON((iter->flags & BTREE_ITER_filter_snapshots) && bpos_eq(end, POS_MIN)); 2605 + EBUG_ON((iter->flags & BTREE_ITER_filter_snapshots) && iter->pos.inode != end.inode); 2609 2606 2610 2607 int ret = trans_maybe_inject_restart(trans, _RET_IP_); 2611 2608 if (unlikely(ret)) {
+2 -14
fs/bcachefs/dirent.c
··· 13 13 14 14 #include <linux/dcache.h> 15 15 16 - static int bch2_casefold(struct btree_trans *trans, const struct bch_hash_info *info, 17 - const struct qstr *str, struct qstr *out_cf) 16 + int bch2_casefold(struct btree_trans *trans, const struct bch_hash_info *info, 17 + const struct qstr *str, struct qstr *out_cf) 18 18 { 19 19 *out_cf = (struct qstr) QSTR_INIT(NULL, 0); 20 20 ··· 33 33 #else 34 34 return -EOPNOTSUPP; 35 35 #endif 36 - } 37 - 38 - static inline int bch2_maybe_casefold(struct btree_trans *trans, 39 - const struct bch_hash_info *info, 40 - const struct qstr *str, struct qstr *out_cf) 41 - { 42 - if (likely(!info->cf_encoding)) { 43 - *out_cf = *str; 44 - return 0; 45 - } else { 46 - return bch2_casefold(trans, info, str, out_cf); 47 - } 48 36 } 49 37 50 38 static unsigned bch2_dirent_name_bytes(struct bkey_s_c_dirent d)
+15
fs/bcachefs/dirent.h
··· 23 23 struct bch_hash_info; 24 24 struct bch_inode_info; 25 25 26 + int bch2_casefold(struct btree_trans *, const struct bch_hash_info *, 27 + const struct qstr *, struct qstr *); 28 + 29 + static inline int bch2_maybe_casefold(struct btree_trans *trans, 30 + const struct bch_hash_info *info, 31 + const struct qstr *str, struct qstr *out_cf) 32 + { 33 + if (likely(!info->cf_encoding)) { 34 + *out_cf = *str; 35 + return 0; 36 + } else { 37 + return bch2_casefold(trans, info, str, out_cf); 38 + } 39 + } 40 + 26 41 struct qstr bch2_dirent_get_name(struct bkey_s_c_dirent d); 27 42 28 43 static inline unsigned dirent_val_u64s(unsigned len, unsigned cf_len)
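The dirent.h hunk above moves `bch2_maybe_casefold()` into the header as a `static inline`, so the common "no casefolding" case is a branch plus a copy with no out-of-line call; only casefolded directories pay for the real transcoding. A sketch of that fast-path-inline pattern, simplified to ASCII lowercasing for illustration (names and semantics here are not bcachefs's):

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/* Slow path: actual case transformation (ASCII-only stand-in for the
 * real UTF-8 casefold). */
static int casefold_slow(const char *in, char *out, size_t n)
{
	for (size_t i = 0; i < n; i++)
		out[i] = (char)tolower((unsigned char)in[i]);
	out[n] = '\0';
	return 0;
}

/* Header-style inline: the likely() no-casefold case just passes the
 * name through; only casefolded dirs call the out-of-line helper. */
static inline int maybe_casefold(int cf_enabled, const char *in,
				 char *out, size_t n)
{
	if (!cf_enabled) {
		memcpy(out, in, n + 1);
		return 0;
	}
	return casefold_slow(in, out, n);
}
```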
+12 -5
fs/bcachefs/error.c
··· 272 272 { 273 273 struct fsck_err_state *s; 274 274 275 - if (!test_bit(BCH_FS_fsck_running, &c->flags)) 276 - return NULL; 277 - 278 275 list_for_each_entry(s, &c->fsck_error_msgs, list) 279 276 if (s->id == id) { 280 277 /* ··· 636 639 return ret; 637 640 } 638 641 639 - void bch2_flush_fsck_errs(struct bch_fs *c) 642 + static void __bch2_flush_fsck_errs(struct bch_fs *c, bool print) 640 643 { 641 644 struct fsck_err_state *s, *n; 642 645 643 646 mutex_lock(&c->fsck_error_msgs_lock); 644 647 645 648 list_for_each_entry_safe(s, n, &c->fsck_error_msgs, list) { 646 - if (s->ratelimited && s->last_msg) 649 + if (print && s->ratelimited && s->last_msg) 647 650 bch_err(c, "Saw %llu errors like:\n %s", s->nr, s->last_msg); 648 651 649 652 list_del(&s->list); ··· 652 655 } 653 656 654 657 mutex_unlock(&c->fsck_error_msgs_lock); 658 + } 659 + 660 + void bch2_flush_fsck_errs(struct bch_fs *c) 661 + { 662 + __bch2_flush_fsck_errs(c, true); 663 + } 664 + 665 + void bch2_free_fsck_errs(struct bch_fs *c) 666 + { 667 + __bch2_flush_fsck_errs(c, false); 655 668 } 656 669 657 670 int bch2_inum_offset_err_msg_trans(struct btree_trans *trans, struct printbuf *out,
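The error.c hunk above splits one teardown routine into two entry points sharing a `__bch2_flush_fsck_errs(c, bool print)` helper: flushing prints the ratelimited "Saw %llu errors like:" summaries first, while freeing discards them silently. A reduced sketch of that print-flag refactor, with counters standing in for the message list and the log:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the flush/free split: one internal helper tears down the
 * ratelimited-message list; a bool decides whether summaries are
 * printed first. Counters model the list and the log output. */
static int pending_msgs;
static int printed_summaries;

static void __flush_errs(bool print)
{
	if (print && pending_msgs)
		printed_summaries++;	/* "Saw %llu errors like: ..." */
	pending_msgs = 0;		/* list_del() + free for every entry */
}

static void flush_errs(void) { __flush_errs(true);  }
static void free_errs(void)  { __flush_errs(false); }
```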
+1
fs/bcachefs/error.h
··· 93 93 _flags, BCH_FSCK_ERR_##_err_type, __VA_ARGS__) 94 94 95 95 void bch2_flush_fsck_errs(struct bch_fs *); 96 + void bch2_free_fsck_errs(struct bch_fs *); 96 97 97 98 #define fsck_err_wrap(_do) \ 98 99 ({ \
-217
fs/bcachefs/fs-ioctl.c
··· 21 21 #define FSOP_GOING_FLAGS_LOGFLUSH 0x1 /* flush log but not data */ 22 22 #define FSOP_GOING_FLAGS_NOLOGFLUSH 0x2 /* don't flush log nor data */ 23 23 24 - struct flags_set { 25 - unsigned mask; 26 - unsigned flags; 27 - 28 - unsigned projid; 29 - 30 - bool set_projinherit; 31 - bool projinherit; 32 - }; 33 - 34 - static int bch2_inode_flags_set(struct btree_trans *trans, 35 - struct bch_inode_info *inode, 36 - struct bch_inode_unpacked *bi, 37 - void *p) 38 - { 39 - struct bch_fs *c = inode->v.i_sb->s_fs_info; 40 - /* 41 - * We're relying on btree locking here for exclusion with other ioctl 42 - * calls - use the flags in the btree (@bi), not inode->i_flags: 43 - */ 44 - struct flags_set *s = p; 45 - unsigned newflags = s->flags; 46 - unsigned oldflags = bi->bi_flags & s->mask; 47 - 48 - if (((newflags ^ oldflags) & (BCH_INODE_append|BCH_INODE_immutable)) && 49 - !capable(CAP_LINUX_IMMUTABLE)) 50 - return -EPERM; 51 - 52 - if (!S_ISREG(bi->bi_mode) && 53 - !S_ISDIR(bi->bi_mode) && 54 - (newflags & (BCH_INODE_nodump|BCH_INODE_noatime)) != newflags) 55 - return -EINVAL; 56 - 57 - if ((newflags ^ oldflags) & BCH_INODE_casefolded) { 58 - #ifdef CONFIG_UNICODE 59 - int ret = 0; 60 - /* Not supported on individual files. */ 61 - if (!S_ISDIR(bi->bi_mode)) 62 - return -EOPNOTSUPP; 63 - 64 - /* 65 - * Make sure the dir is empty, as otherwise we'd need to 66 - * rehash everything and update the dirent keys. 67 - */ 68 - ret = bch2_empty_dir_trans(trans, inode_inum(inode)); 69 - if (ret < 0) 70 - return ret; 71 - 72 - ret = bch2_request_incompat_feature(c, bcachefs_metadata_version_casefolding); 73 - if (ret) 74 - return ret; 75 - 76 - bch2_check_set_feature(c, BCH_FEATURE_casefolding); 77 - #else 78 - printk(KERN_ERR "Cannot use casefolding on a kernel without CONFIG_UNICODE\n"); 79 - return -EOPNOTSUPP; 80 - #endif 81 - } 82 - 83 - if (s->set_projinherit) { 84 - bi->bi_fields_set &= ~(1 << Inode_opt_project); 85 - bi->bi_fields_set |= ((int) s->projinherit << Inode_opt_project); 86 - } 87 - 88 - bi->bi_flags &= ~s->mask; 89 - bi->bi_flags |= newflags; 90 - 91 - bi->bi_ctime = timespec_to_bch2_time(c, current_time(&inode->v)); 92 - return 0; 93 - } 94 - 95 - static int bch2_ioc_getflags(struct bch_inode_info *inode, int __user *arg) 96 - { 97 - unsigned flags = map_flags(bch_flags_to_uflags, inode->ei_inode.bi_flags); 98 - 99 - return put_user(flags, arg); 100 - } 101 - 102 - static int bch2_ioc_setflags(struct bch_fs *c, 103 - struct file *file, 104 - struct bch_inode_info *inode, 105 - void __user *arg) 106 - { 107 - struct flags_set s = { .mask = map_defined(bch_flags_to_uflags) }; 108 - unsigned uflags; 109 - int ret; 110 - 111 - if (get_user(uflags, (int __user *) arg)) 112 - return -EFAULT; 113 - 114 - s.flags = map_flags_rev(bch_flags_to_uflags, uflags); 115 - if (uflags) 116 - return -EOPNOTSUPP; 117 - 118 - ret = mnt_want_write_file(file); 119 - if (ret) 120 - return ret; 121 - 122 - inode_lock(&inode->v); 123 - if (!inode_owner_or_capable(file_mnt_idmap(file), &inode->v)) { 124 - ret = -EACCES; 125 - goto setflags_out; 126 - } 127 - 128 - mutex_lock(&inode->ei_update_lock); 129 - ret = bch2_subvol_is_ro(c, inode->ei_inum.subvol) ?: 130 - bch2_write_inode(c, inode, bch2_inode_flags_set, &s, 131 - ATTR_CTIME); 132 - mutex_unlock(&inode->ei_update_lock); 133 - 134 - setflags_out: 135 - inode_unlock(&inode->v); 136 - mnt_drop_write_file(file); 137 - return ret; 138 - } 139 - 140 - static int bch2_ioc_fsgetxattr(struct bch_inode_info *inode, 141 - struct fsxattr __user *arg) 142 - { 143 - struct fsxattr fa = { 0 }; 144 - 145 - fa.fsx_xflags = map_flags(bch_flags_to_xflags, inode->ei_inode.bi_flags); 146 - 147 - if (inode->ei_inode.bi_fields_set & (1 << Inode_opt_project)) 148 - fa.fsx_xflags |= FS_XFLAG_PROJINHERIT; 149 - 150 - fa.fsx_projid = inode->ei_qid.q[QTYP_PRJ]; 151 - 152 - if (copy_to_user(arg, &fa, sizeof(fa))) 153 - return -EFAULT; 154 - 155 - return 0; 156 - } 157 - 158 - static int fssetxattr_inode_update_fn(struct btree_trans *trans, 159 - struct bch_inode_info *inode, 160 - struct bch_inode_unpacked *bi, 161 - void *p) 162 - { 163 - struct flags_set *s = p; 164 - 165 - if (s->projid != bi->bi_project) { 166 - bi->bi_fields_set |= 1U << Inode_opt_project; 167 - bi->bi_project = s->projid; 168 - } 169 - 170 - return bch2_inode_flags_set(trans, inode, bi, p); 171 - } 172 - 173 - static int bch2_ioc_fssetxattr(struct bch_fs *c, 174 - struct file *file, 175 - struct bch_inode_info *inode, 176 - struct fsxattr __user *arg) 177 - { 178 - struct flags_set s = { .mask = map_defined(bch_flags_to_xflags) }; 179 - struct fsxattr fa; 180 - int ret; 181 - 182 - if (copy_from_user(&fa, arg, sizeof(fa))) 183 - return -EFAULT; 184 - 185 - s.set_projinherit = true; 186 - s.projinherit = (fa.fsx_xflags & FS_XFLAG_PROJINHERIT) != 0; 187 - fa.fsx_xflags &= ~FS_XFLAG_PROJINHERIT; 188 - 189 - s.flags = map_flags_rev(bch_flags_to_xflags, fa.fsx_xflags); 190 - if (fa.fsx_xflags) 191 - return -EOPNOTSUPP; 192 - 193 - if (fa.fsx_projid >= U32_MAX) 194 - return -EINVAL; 195 - 196 - /* 197 - * inode fields accessible via the xattr interface are stored with a +1 198 - * bias, so that 0 means unset: 199 - */ 200 - s.projid = fa.fsx_projid + 1; 201 - 202 - ret = mnt_want_write_file(file); 203 - if (ret) 204 - return ret; 205 - 206 - inode_lock(&inode->v); 207 - if (!inode_owner_or_capable(file_mnt_idmap(file), &inode->v)) { 208 - ret = -EACCES; 209 - goto err; 210 - } 211 - 212 - mutex_lock(&inode->ei_update_lock); 213 - ret = bch2_subvol_is_ro(c, inode->ei_inum.subvol) ?: 214 - bch2_set_projid(c, inode, fa.fsx_projid) ?: 215 - bch2_write_inode(c, inode, fssetxattr_inode_update_fn, &s, 216 - ATTR_CTIME); 217 - mutex_unlock(&inode->ei_update_lock); 218 - err: 219 - inode_unlock(&inode->v); 220 - mnt_drop_write_file(file); 221 - return ret; 222 - } 223 - 224 24 static int bch2_reinherit_attrs_fn(struct btree_trans *trans, 225 25 struct bch_inode_info *inode, 226 26 struct bch_inode_unpacked *bi, ··· 358 558 long ret; 359 559 360 560 switch (cmd) { 361 - case FS_IOC_GETFLAGS: 362 - ret = bch2_ioc_getflags(inode, (int __user *) arg); 363 - break; 364 - 365 - case FS_IOC_SETFLAGS: 366 - ret = bch2_ioc_setflags(c, file, inode, (int __user *) arg); 367 - break; 368 - 369 - case FS_IOC_FSGETXATTR: 370 - ret = bch2_ioc_fsgetxattr(inode, (void __user *) arg); 371 - break; 372 - 373 - case FS_IOC_FSSETXATTR: 374 - ret = bch2_ioc_fssetxattr(c, file, inode, 375 - (void __user *) arg); 376 - break; 377 - 378 561 case BCHFS_IOC_REINHERIT_ATTRS: 379 562 ret = bch2_ioc_reinherit_attrs(c, file, inode, 380 563 (void __user *) arg);
-75
fs/bcachefs/fs-ioctl.h
··· 2 2 #ifndef _BCACHEFS_FS_IOCTL_H 3 3 #define _BCACHEFS_FS_IOCTL_H 4 4 5 - /* Inode flags: */ 6 - 7 - /* bcachefs inode flags -> vfs inode flags: */ 8 - static const __maybe_unused unsigned bch_flags_to_vfs[] = { 9 - [__BCH_INODE_sync] = S_SYNC, 10 - [__BCH_INODE_immutable] = S_IMMUTABLE, 11 - [__BCH_INODE_append] = S_APPEND, 12 - [__BCH_INODE_noatime] = S_NOATIME, 13 - [__BCH_INODE_casefolded] = S_CASEFOLD, 14 - }; 15 - 16 - /* bcachefs inode flags -> FS_IOC_GETFLAGS: */ 17 - static const __maybe_unused unsigned bch_flags_to_uflags[] = { 18 - [__BCH_INODE_sync] = FS_SYNC_FL, 19 - [__BCH_INODE_immutable] = FS_IMMUTABLE_FL, 20 - [__BCH_INODE_append] = FS_APPEND_FL, 21 - [__BCH_INODE_nodump] = FS_NODUMP_FL, 22 - [__BCH_INODE_noatime] = FS_NOATIME_FL, 23 - [__BCH_INODE_casefolded] = FS_CASEFOLD_FL, 24 - }; 25 - 26 - /* bcachefs inode flags -> FS_IOC_FSGETXATTR: */ 27 - static const __maybe_unused unsigned bch_flags_to_xflags[] = { 28 - [__BCH_INODE_sync] = FS_XFLAG_SYNC, 29 - [__BCH_INODE_immutable] = FS_XFLAG_IMMUTABLE, 30 - [__BCH_INODE_append] = FS_XFLAG_APPEND, 31 - [__BCH_INODE_nodump] = FS_XFLAG_NODUMP, 32 - [__BCH_INODE_noatime] = FS_XFLAG_NOATIME, 33 - //[__BCH_INODE_PROJINHERIT] = FS_XFLAG_PROJINHERIT; 34 - }; 35 - 36 - #define set_flags(_map, _in, _out) \ 37 - do { \ 38 - unsigned _i; \ 39 - \ 40 - for (_i = 0; _i < ARRAY_SIZE(_map); _i++) \ 41 - if ((_in) & (1 << _i)) \ 42 - (_out) |= _map[_i]; \ 43 - else \ 44 - (_out) &= ~_map[_i]; \ 45 - } while (0) 46 - 47 - #define map_flags(_map, _in) \ 48 - ({ \ 49 - unsigned _out = 0; \ 50 - \ 51 - set_flags(_map, _in, _out); \ 52 - _out; \ 53 - }) 54 - 55 - #define map_flags_rev(_map, _in) \ 56 - ({ \ 57 - unsigned _i, _out = 0; \ 58 - \ 59 - for (_i = 0; _i < ARRAY_SIZE(_map); _i++) \ 60 - if ((_in) & _map[_i]) { \ 61 - (_out) |= 1 << _i; \ 62 - (_in) &= ~_map[_i]; \ 63 - } \ 64 - (_out); \ 65 - }) 66 - 67 - #define map_defined(_map) \ 68 - ({ \ 69 - unsigned _in = ~0; \ 70 - \ 71 - map_flags_rev(_map, _in); \ 
72 - }) 73 - 74 - /* Set VFS inode flags from bcachefs inode: */ 75 - static inline void bch2_inode_flags_to_vfs(struct bch_inode_info *inode) 76 - { 77 - set_flags(bch_flags_to_vfs, inode->ei_inode.bi_flags, inode->v.i_flags); 78 - } 79 - 80 5 long bch2_fs_file_ioctl(struct file *, unsigned, unsigned long); 81 6 long bch2_compat_fs_ioctl(struct file *, unsigned, unsigned long); 82 7
+393 -76
fs/bcachefs/fs.c
··· 33 33 #include <linux/backing-dev.h> 34 34 #include <linux/exportfs.h> 35 35 #include <linux/fiemap.h> 36 + #include <linux/fileattr.h> 36 37 #include <linux/fs_context.h> 37 38 #include <linux/module.h> 38 39 #include <linux/pagemap.h> ··· 51 50 struct bch_inode_info *, 52 51 struct bch_inode_unpacked *, 53 52 struct bch_subvolume *); 53 + 54 + /* Set VFS inode flags from bcachefs inode: */ 55 + static inline void bch2_inode_flags_to_vfs(struct bch_fs *c, struct bch_inode_info *inode) 56 + { 57 + static const __maybe_unused unsigned bch_flags_to_vfs[] = { 58 + [__BCH_INODE_sync] = S_SYNC, 59 + [__BCH_INODE_immutable] = S_IMMUTABLE, 60 + [__BCH_INODE_append] = S_APPEND, 61 + [__BCH_INODE_noatime] = S_NOATIME, 62 + }; 63 + 64 + set_flags(bch_flags_to_vfs, inode->ei_inode.bi_flags, inode->v.i_flags); 65 + 66 + if (bch2_inode_casefold(c, &inode->ei_inode)) 67 + inode->v.i_flags |= S_CASEFOLD; 68 + } 54 69 55 70 void bch2_inode_update_after_write(struct btree_trans *trans, 56 71 struct bch_inode_info *inode, ··· 96 79 97 80 inode->ei_inode = *bi; 98 81 99 - bch2_inode_flags_to_vfs(inode); 82 + bch2_inode_flags_to_vfs(c, inode); 100 83 } 101 84 102 85 int __must_check bch2_write_inode(struct bch_fs *c, ··· 648 631 const struct qstr *name) 649 632 { 650 633 struct bch_fs *c = trans->c; 651 - struct btree_iter dirent_iter = {}; 652 634 subvol_inum inum = {}; 653 635 struct printbuf buf = PRINTBUF; 654 636 637 + struct qstr lookup_name; 638 + int ret = bch2_maybe_casefold(trans, dir_hash_info, name, &lookup_name); 639 + if (ret) 640 + return ERR_PTR(ret); 641 + 642 + struct btree_iter dirent_iter = {}; 655 643 struct bkey_s_c k = bch2_hash_lookup(trans, &dirent_iter, bch2_dirent_hash_desc, 656 - dir_hash_info, dir, name, 0); 657 - int ret = bkey_err(k); 644 + dir_hash_info, dir, &lookup_name, 0); 645 + ret = bkey_err(k); 658 646 if (ret) 659 647 return ERR_PTR(ret); 660 648 ··· 846 824 * the VFS that it's been deleted here: 847 825 */ 848 826 set_nlink(&inode->v, 0); 
827 + } 828 + 829 + if (IS_CASEFOLDED(vdir)) { 830 + d_invalidate(dentry); 831 + d_prune_aliases(&inode->v); 849 832 } 850 833 err: 851 834 bch2_trans_put(trans); ··· 1262 1235 return finish_open_simple(file, 0); 1263 1236 } 1264 1237 1238 + struct bch_fiemap_extent { 1239 + struct bkey_buf kbuf; 1240 + unsigned flags; 1241 + }; 1242 + 1265 1243 static int bch2_fill_extent(struct bch_fs *c, 1266 1244 struct fiemap_extent_info *info, 1267 - struct bkey_s_c k, unsigned flags) 1245 + struct bch_fiemap_extent *fe) 1268 1246 { 1247 + struct bkey_s_c k = bkey_i_to_s_c(fe->kbuf.k); 1248 + unsigned flags = fe->flags; 1249 + 1250 + BUG_ON(!k.k->size); 1251 + 1269 1252 if (bkey_extent_is_direct_data(k.k)) { 1270 1253 struct bkey_ptrs_c ptrs = bch2_bkey_ptrs_c(k); 1271 1254 const union bch_extent_entry *entry; ··· 1328 1291 } 1329 1292 } 1330 1293 1294 + /* 1295 + * Scan a range of an inode for data in pagecache. 1296 + * 1297 + * Intended to be retryable, so don't modify the output params until success is 1298 + * imminent. 1299 + */ 1300 + static int 1301 + bch2_fiemap_hole_pagecache(struct inode *vinode, u64 *start, u64 *end, 1302 + bool nonblock) 1303 + { 1304 + loff_t dstart, dend; 1305 + 1306 + dstart = bch2_seek_pagecache_data(vinode, *start, *end, 0, nonblock); 1307 + if (dstart < 0) 1308 + return dstart; 1309 + 1310 + if (dstart == *end) { 1311 + *start = dstart; 1312 + return 0; 1313 + } 1314 + 1315 + dend = bch2_seek_pagecache_hole(vinode, dstart, *end, 0, nonblock); 1316 + if (dend < 0) 1317 + return dend; 1318 + 1319 + /* race */ 1320 + BUG_ON(dstart == dend); 1321 + 1322 + *start = dstart; 1323 + *end = dend; 1324 + return 0; 1325 + } 1326 + 1327 + /* 1328 + * Scan a range of pagecache that corresponds to a file mapping hole in the 1329 + * extent btree. If data is found, fake up an extent key so it looks like a 1330 + * delalloc extent to the rest of the fiemap processing code. 
1331 + */ 1332 + static int 1333 + bch2_next_fiemap_pagecache_extent(struct btree_trans *trans, struct bch_inode_info *inode, 1334 + u64 start, u64 end, struct bch_fiemap_extent *cur) 1335 + { 1336 + struct bch_fs *c = trans->c; 1337 + struct bkey_i_extent *delextent; 1338 + struct bch_extent_ptr ptr = {}; 1339 + loff_t dstart = start << 9, dend = end << 9; 1340 + int ret; 1341 + 1342 + /* 1343 + * We hold btree locks here so we cannot block on folio locks without 1344 + * dropping trans locks first. Run a nonblocking scan for the common 1345 + * case of no folios over holes and fall back on failure. 1346 + * 1347 + * Note that dropping locks like this is technically racy against 1348 + * writeback inserting to the extent tree, but a non-sync fiemap scan is 1349 + * fundamentally racy with writeback anyways. Therefore, just report the 1350 + * range as delalloc regardless of whether we have to cycle trans locks. 1351 + */ 1352 + ret = bch2_fiemap_hole_pagecache(&inode->v, &dstart, &dend, true); 1353 + if (ret == -EAGAIN) 1354 + ret = drop_locks_do(trans, 1355 + bch2_fiemap_hole_pagecache(&inode->v, &dstart, &dend, false)); 1356 + if (ret < 0) 1357 + return ret; 1358 + 1359 + /* 1360 + * Create a fake extent key in the buffer. We have to add a dummy extent 1361 + * pointer for the fill code to add an extent entry. It's explicitly 1362 + * zeroed to reflect delayed allocation (i.e. phys offset 0). 
1363 + */ 1364 + bch2_bkey_buf_realloc(&cur->kbuf, c, sizeof(*delextent) / sizeof(u64)); 1365 + delextent = bkey_extent_init(cur->kbuf.k); 1366 + delextent->k.p = POS(inode->ei_inum.inum, dend >> 9); 1367 + delextent->k.size = (dend - dstart) >> 9; 1368 + bch2_bkey_append_ptr(&delextent->k_i, ptr); 1369 + 1370 + cur->flags = FIEMAP_EXTENT_DELALLOC; 1371 + 1372 + return 0; 1373 + } 1374 + 1375 + static int bch2_next_fiemap_extent(struct btree_trans *trans, 1376 + struct bch_inode_info *inode, 1377 + u64 start, u64 end, 1378 + struct bch_fiemap_extent *cur) 1379 + { 1380 + u32 snapshot; 1381 + int ret = bch2_subvolume_get_snapshot(trans, inode->ei_inum.subvol, &snapshot); 1382 + if (ret) 1383 + return ret; 1384 + 1385 + struct btree_iter iter; 1386 + bch2_trans_iter_init(trans, &iter, BTREE_ID_extents, 1387 + SPOS(inode->ei_inum.inum, start, snapshot), 0); 1388 + 1389 + struct bkey_s_c k = 1390 + bch2_btree_iter_peek_max(trans, &iter, POS(inode->ei_inum.inum, end)); 1391 + ret = bkey_err(k); 1392 + if (ret) 1393 + goto err; 1394 + 1395 + ret = bch2_next_fiemap_pagecache_extent(trans, inode, start, end, cur); 1396 + if (ret) 1397 + goto err; 1398 + 1399 + struct bpos pagecache_start = bkey_start_pos(&cur->kbuf.k->k); 1400 + 1401 + /* 1402 + * Does the pagecache or the btree take precedence? 1403 + * 1404 + * It _should_ be the pagecache, so that we correctly report delalloc 1405 + * extents when dirty in the pagecache (we're COW, after all). 1406 + * 1407 + * But we'd have to add per-sector writeback tracking to 1408 + * bch_folio_state, otherwise we report delalloc extents for clean 1409 + * cached data in the pagecache. 1410 + * 1411 + * We should do this, but even then fiemap won't report stable mappings: 1412 + * on bcachefs data moves around in the background (copygc, rebalance) 1413 + * and we don't provide a way for userspace to lock that out. 
1414 + */ 1415 + if (k.k && 1416 + bkey_le(bpos_max(iter.pos, bkey_start_pos(k.k)), 1417 + pagecache_start)) { 1418 + bch2_bkey_buf_reassemble(&cur->kbuf, trans->c, k); 1419 + bch2_cut_front(iter.pos, cur->kbuf.k); 1420 + bch2_cut_back(POS(inode->ei_inum.inum, end), cur->kbuf.k); 1421 + cur->flags = 0; 1422 + } else if (k.k) { 1423 + bch2_cut_back(bkey_start_pos(k.k), cur->kbuf.k); 1424 + } 1425 + 1426 + if (cur->kbuf.k->k.type == KEY_TYPE_reflink_p) { 1427 + unsigned sectors = cur->kbuf.k->k.size; 1428 + s64 offset_into_extent = 0; 1429 + enum btree_id data_btree = BTREE_ID_extents; 1430 + int ret = bch2_read_indirect_extent(trans, &data_btree, &offset_into_extent, 1431 + &cur->kbuf); 1432 + if (ret) 1433 + goto err; 1434 + 1435 + struct bkey_i *k = cur->kbuf.k; 1436 + sectors = min_t(unsigned, sectors, k->k.size - offset_into_extent); 1437 + 1438 + bch2_cut_front(POS(k->k.p.inode, 1439 + bkey_start_offset(&k->k) + offset_into_extent), 1440 + k); 1441 + bch2_key_resize(&k->k, sectors); 1442 + k->k.p = iter.pos; 1443 + k->k.p.offset += k->k.size; 1444 + } 1445 + err: 1446 + bch2_trans_iter_exit(trans, &iter); 1447 + return ret; 1448 + } 1449 + 1331 1450 static int bch2_fiemap(struct inode *vinode, struct fiemap_extent_info *info, 1332 1451 u64 start, u64 len) 1333 1452 { 1334 1453 struct bch_fs *c = vinode->i_sb->s_fs_info; 1335 1454 struct bch_inode_info *ei = to_bch_ei(vinode); 1336 1455 struct btree_trans *trans; 1337 - struct btree_iter iter; 1338 - struct bkey_s_c k; 1339 - struct bkey_buf cur, prev; 1340 - bool have_extent = false; 1456 + struct bch_fiemap_extent cur, prev; 1341 1457 int ret = 0; 1342 1458 1343 - ret = fiemap_prep(&ei->v, info, start, &len, FIEMAP_FLAG_SYNC); 1459 + ret = fiemap_prep(&ei->v, info, start, &len, 0); 1344 1460 if (ret) 1345 1461 return ret; 1346 1462 1347 - struct bpos end = POS(ei->v.i_ino, (start + len) >> 9); 1348 1463 if (start + len < start) 1349 1464 return -EINVAL; 1350 1465 1351 1466 start >>= 9; 1467 + u64 end = (start 
+ len) >> 9; 1352 1468 1353 - bch2_bkey_buf_init(&cur); 1354 - bch2_bkey_buf_init(&prev); 1469 + bch2_bkey_buf_init(&cur.kbuf); 1470 + bch2_bkey_buf_init(&prev.kbuf); 1471 + bkey_init(&prev.kbuf.k->k); 1472 + 1355 1473 trans = bch2_trans_get(c); 1356 1474 1357 - bch2_trans_iter_init(trans, &iter, BTREE_ID_extents, 1358 - POS(ei->v.i_ino, start), 0); 1359 - 1360 - while (!ret || bch2_err_matches(ret, BCH_ERR_transaction_restart)) { 1361 - enum btree_id data_btree = BTREE_ID_extents; 1362 - 1363 - bch2_trans_begin(trans); 1364 - 1365 - u32 snapshot; 1366 - ret = bch2_subvolume_get_snapshot(trans, ei->ei_inum.subvol, &snapshot); 1475 + while (start < end) { 1476 + ret = lockrestart_do(trans, 1477 + bch2_next_fiemap_extent(trans, ei, start, end, &cur)); 1367 1478 if (ret) 1368 - continue; 1479 + goto err; 1369 1480 1370 - bch2_btree_iter_set_snapshot(trans, &iter, snapshot); 1481 + BUG_ON(bkey_start_offset(&cur.kbuf.k->k) < start); 1482 + BUG_ON(cur.kbuf.k->k.p.offset > end); 1371 1483 1372 - k = bch2_btree_iter_peek_max(trans, &iter, end); 1373 - ret = bkey_err(k); 1374 - if (ret) 1375 - continue; 1376 - 1377 - if (!k.k) 1484 + if (bkey_start_offset(&cur.kbuf.k->k) == end) 1378 1485 break; 1379 1486 1380 - if (!bkey_extent_is_data(k.k) && 1381 - k.k->type != KEY_TYPE_reservation) { 1382 - bch2_btree_iter_advance(trans, &iter); 1383 - continue; 1384 - } 1487 + start = cur.kbuf.k->k.p.offset; 1385 1488 1386 - s64 offset_into_extent = iter.pos.offset - bkey_start_offset(k.k); 1387 - unsigned sectors = k.k->size - offset_into_extent; 1388 - 1389 - bch2_bkey_buf_reassemble(&cur, c, k); 1390 - 1391 - ret = bch2_read_indirect_extent(trans, &data_btree, 1392 - &offset_into_extent, &cur); 1393 - if (ret) 1394 - continue; 1395 - 1396 - k = bkey_i_to_s_c(cur.k); 1397 - bch2_bkey_buf_realloc(&prev, c, k.k->u64s); 1398 - 1399 - sectors = min_t(unsigned, sectors, k.k->size - offset_into_extent); 1400 - 1401 - bch2_cut_front(POS(k.k->p.inode, 1402 - bkey_start_offset(k.k) + 1403 - 
offset_into_extent), 1404 - cur.k); 1405 - bch2_key_resize(&cur.k->k, sectors); 1406 - cur.k->k.p = iter.pos; 1407 - cur.k->k.p.offset += cur.k->k.size; 1408 - 1409 - if (have_extent) { 1489 + if (!bkey_deleted(&prev.kbuf.k->k)) { 1410 1490 bch2_trans_unlock(trans); 1411 - ret = bch2_fill_extent(c, info, 1412 - bkey_i_to_s_c(prev.k), 0); 1491 + ret = bch2_fill_extent(c, info, &prev); 1413 1492 if (ret) 1414 - break; 1493 + goto err; 1415 1494 } 1416 1495 1417 - bkey_copy(prev.k, cur.k); 1418 - have_extent = true; 1419 - 1420 - bch2_btree_iter_set_pos(trans, &iter, 1421 - POS(iter.pos.inode, iter.pos.offset + sectors)); 1496 + bch2_bkey_buf_copy(&prev.kbuf, c, cur.kbuf.k); 1497 + prev.flags = cur.flags; 1422 1498 } 1423 - bch2_trans_iter_exit(trans, &iter); 1424 1499 1425 - if (!ret && have_extent) { 1500 + if (!bkey_deleted(&prev.kbuf.k->k)) { 1426 1501 bch2_trans_unlock(trans); 1427 - ret = bch2_fill_extent(c, info, bkey_i_to_s_c(prev.k), 1428 - FIEMAP_EXTENT_LAST); 1502 + prev.flags |= FIEMAP_EXTENT_LAST; 1503 + ret = bch2_fill_extent(c, info, &prev); 1429 1504 } 1430 - 1505 + err: 1431 1506 bch2_trans_put(trans); 1432 - bch2_bkey_buf_exit(&cur, c); 1433 - bch2_bkey_buf_exit(&prev, c); 1434 - return ret < 0 ? ret : 0; 1507 + bch2_bkey_buf_exit(&cur.kbuf, c); 1508 + bch2_bkey_buf_exit(&prev.kbuf, c); 1509 + 1510 + return bch2_err_class(ret < 0 ? 
ret : 0); 1435 1511 } 1436 1512 1437 1513 static const struct vm_operations_struct bch_vm_ops = { ··· 1599 1449 return generic_file_open(vinode, file); 1600 1450 } 1601 1451 1452 + /* bcachefs inode flags -> FS_IOC_GETFLAGS: */ 1453 + static const __maybe_unused unsigned bch_flags_to_uflags[] = { 1454 + [__BCH_INODE_sync] = FS_SYNC_FL, 1455 + [__BCH_INODE_immutable] = FS_IMMUTABLE_FL, 1456 + [__BCH_INODE_append] = FS_APPEND_FL, 1457 + [__BCH_INODE_nodump] = FS_NODUMP_FL, 1458 + [__BCH_INODE_noatime] = FS_NOATIME_FL, 1459 + }; 1460 + 1461 + /* bcachefs inode flags -> FS_IOC_FSGETXATTR: */ 1462 + static const __maybe_unused unsigned bch_flags_to_xflags[] = { 1463 + [__BCH_INODE_sync] = FS_XFLAG_SYNC, 1464 + [__BCH_INODE_immutable] = FS_XFLAG_IMMUTABLE, 1465 + [__BCH_INODE_append] = FS_XFLAG_APPEND, 1466 + [__BCH_INODE_nodump] = FS_XFLAG_NODUMP, 1467 + [__BCH_INODE_noatime] = FS_XFLAG_NOATIME, 1468 + }; 1469 + 1470 + static int bch2_fileattr_get(struct dentry *dentry, 1471 + struct fileattr *fa) 1472 + { 1473 + struct bch_inode_info *inode = to_bch_ei(d_inode(dentry)); 1474 + struct bch_fs *c = inode->v.i_sb->s_fs_info; 1475 + 1476 + fileattr_fill_xflags(fa, map_flags(bch_flags_to_xflags, inode->ei_inode.bi_flags)); 1477 + 1478 + if (inode->ei_inode.bi_fields_set & (1 << Inode_opt_project)) 1479 + fa->fsx_xflags |= FS_XFLAG_PROJINHERIT; 1480 + 1481 + if (bch2_inode_casefold(c, &inode->ei_inode)) 1482 + fa->flags |= FS_CASEFOLD_FL; 1483 + 1484 + fa->fsx_projid = inode->ei_qid.q[QTYP_PRJ]; 1485 + return 0; 1486 + } 1487 + 1488 + struct flags_set { 1489 + unsigned mask; 1490 + unsigned flags; 1491 + unsigned projid; 1492 + bool set_project; 1493 + bool set_casefold; 1494 + bool casefold; 1495 + }; 1496 + 1497 + static int fssetxattr_inode_update_fn(struct btree_trans *trans, 1498 + struct bch_inode_info *inode, 1499 + struct bch_inode_unpacked *bi, 1500 + void *p) 1501 + { 1502 + struct bch_fs *c = trans->c; 1503 + struct flags_set *s = p; 1504 + 1505 + /* 1506 + * We're 
relying on btree locking here for exclusion with other ioctl 1507 + * calls - use the flags in the btree (@bi), not inode->i_flags: 1508 + */ 1509 + if (!S_ISREG(bi->bi_mode) && 1510 + !S_ISDIR(bi->bi_mode) && 1511 + (s->flags & (BCH_INODE_nodump|BCH_INODE_noatime)) != s->flags) 1512 + return -EINVAL; 1513 + 1514 + if (s->casefold != bch2_inode_casefold(c, bi)) { 1515 + #ifdef CONFIG_UNICODE 1516 + int ret = 0; 1517 + /* Not supported on individual files. */ 1518 + if (!S_ISDIR(bi->bi_mode)) 1519 + return -EOPNOTSUPP; 1520 + 1521 + /* 1522 + * Make sure the dir is empty, as otherwise we'd need to 1523 + * rehash everything and update the dirent keys. 1524 + */ 1525 + ret = bch2_empty_dir_trans(trans, inode_inum(inode)); 1526 + if (ret < 0) 1527 + return ret; 1528 + 1529 + ret = bch2_request_incompat_feature(c, bcachefs_metadata_version_casefolding); 1530 + if (ret) 1531 + return ret; 1532 + 1533 + bch2_check_set_feature(c, BCH_FEATURE_casefolding); 1534 + 1535 + bi->bi_casefold = s->casefold + 1; 1536 + bi->bi_fields_set |= BIT(Inode_opt_casefold); 1537 + 1538 + #else 1539 + printk(KERN_ERR "Cannot use casefolding on a kernel without CONFIG_UNICODE\n"); 1540 + return -EOPNOTSUPP; 1541 + #endif 1542 + } 1543 + 1544 + if (s->set_project) { 1545 + bi->bi_project = s->projid; 1546 + bi->bi_fields_set |= BIT(Inode_opt_project); 1547 + } 1548 + 1549 + bi->bi_flags &= ~s->mask; 1550 + bi->bi_flags |= s->flags; 1551 + 1552 + bi->bi_ctime = timespec_to_bch2_time(c, current_time(&inode->v)); 1553 + return 0; 1554 + } 1555 + 1556 + static int bch2_fileattr_set(struct mnt_idmap *idmap, 1557 + struct dentry *dentry, 1558 + struct fileattr *fa) 1559 + { 1560 + struct bch_inode_info *inode = to_bch_ei(d_inode(dentry)); 1561 + struct bch_fs *c = inode->v.i_sb->s_fs_info; 1562 + struct flags_set s = {}; 1563 + int ret; 1564 + 1565 + if (fa->fsx_valid) { 1566 + fa->fsx_xflags &= ~FS_XFLAG_PROJINHERIT; 1567 + 1568 + s.mask = map_defined(bch_flags_to_xflags); 1569 + s.flags |= 
map_flags_rev(bch_flags_to_xflags, fa->fsx_xflags); 1570 + if (fa->fsx_xflags) 1571 + return -EOPNOTSUPP; 1572 + 1573 + if (fa->fsx_projid >= U32_MAX) 1574 + return -EINVAL; 1575 + 1576 + /* 1577 + * inode fields accessible via the xattr interface are stored with a +1 1578 + * bias, so that 0 means unset: 1579 + */ 1580 + if ((inode->ei_inode.bi_project || 1581 + fa->fsx_projid) && 1582 + inode->ei_inode.bi_project != fa->fsx_projid + 1) { 1583 + s.projid = fa->fsx_projid + 1; 1584 + s.set_project = true; 1585 + } 1586 + } 1587 + 1588 + if (fa->flags_valid) { 1589 + s.mask = map_defined(bch_flags_to_uflags); 1590 + 1591 + s.set_casefold = true; 1592 + s.casefold = (fa->flags & FS_CASEFOLD_FL) != 0; 1593 + fa->flags &= ~FS_CASEFOLD_FL; 1594 + 1595 + s.flags |= map_flags_rev(bch_flags_to_uflags, fa->flags); 1596 + if (fa->flags) 1597 + return -EOPNOTSUPP; 1598 + } 1599 + 1600 + mutex_lock(&inode->ei_update_lock); 1601 + ret = bch2_subvol_is_ro(c, inode->ei_inum.subvol) ?: 1602 + (s.set_project 1603 + ? 
bch2_set_projid(c, inode, fa->fsx_projid) 1604 + : 0) ?: 1605 + bch2_write_inode(c, inode, fssetxattr_inode_update_fn, &s, 1606 + ATTR_CTIME); 1607 + mutex_unlock(&inode->ei_update_lock); 1608 + return ret; 1609 + } 1610 + 1602 1611 static const struct file_operations bch_file_operations = { 1603 1612 .open = bch2_open, 1604 1613 .llseek = bch2_llseek, ··· 1785 1476 .get_inode_acl = bch2_get_acl, 1786 1477 .set_acl = bch2_set_acl, 1787 1478 #endif 1479 + .fileattr_get = bch2_fileattr_get, 1480 + .fileattr_set = bch2_fileattr_set, 1788 1481 }; 1789 1482 1790 1483 static const struct inode_operations bch_dir_inode_operations = { ··· 1807 1496 .get_inode_acl = bch2_get_acl, 1808 1497 .set_acl = bch2_set_acl, 1809 1498 #endif 1499 + .fileattr_get = bch2_fileattr_get, 1500 + .fileattr_set = bch2_fileattr_set, 1810 1501 }; 1811 1502 1812 1503 static const struct file_operations bch_dir_file_operations = { ··· 1831 1518 .get_inode_acl = bch2_get_acl, 1832 1519 .set_acl = bch2_set_acl, 1833 1520 #endif 1521 + .fileattr_get = bch2_fileattr_get, 1522 + .fileattr_set = bch2_fileattr_set, 1834 1523 }; 1835 1524 1836 1525 static const struct inode_operations bch_special_inode_operations = { ··· 1843 1528 .get_inode_acl = bch2_get_acl, 1844 1529 .set_acl = bch2_set_acl, 1845 1530 #endif 1531 + .fileattr_get = bch2_fileattr_get, 1532 + .fileattr_set = bch2_fileattr_set, 1846 1533 }; 1847 1534 1848 1535 static const struct address_space_operations bch_address_space_operations = {
+8
fs/bcachefs/inode.h
··· 243 243 } 244 244 } 245 245 246 + static inline bool bch2_inode_casefold(struct bch_fs *c, const struct bch_inode_unpacked *bi) 247 + { 248 + /* inode opts are stored with a +1 bias: 0 means "unset, use fs opt" */ 249 + return bi->bi_casefold 250 + ? bi->bi_casefold - 1 251 + : c->opts.casefold; 252 + } 253 + 246 254 /* i_nlink: */ 247 255 248 256 static inline unsigned nlink_bias(umode_t mode)
+5 -4
fs/bcachefs/inode_format.h
··· 103 103 x(bi_parent_subvol, 32) \ 104 104 x(bi_nocow, 8) \ 105 105 x(bi_depth, 32) \ 106 - x(bi_inodes_32bit, 8) 106 + x(bi_inodes_32bit, 8) \ 107 + x(bi_casefold, 8) 107 108 108 109 /* subset of BCH_INODE_FIELDS */ 109 110 #define BCH_INODE_OPTS() \ ··· 118 117 x(background_target, 16) \ 119 118 x(erasure_code, 16) \ 120 119 x(nocow, 8) \ 121 - x(inodes_32bit, 8) 120 + x(inodes_32bit, 8) \ 121 + x(casefold, 8) 122 122 123 123 enum inode_opt_id { 124 124 #define x(name, ...) \ ··· 139 137 x(i_sectors_dirty, 6) \ 140 138 x(unlinked, 7) \ 141 139 x(backptr_untrusted, 8) \ 142 - x(has_child_snapshot, 9) \ 143 - x(casefolded, 10) 140 + x(has_child_snapshot, 9) 144 141 145 142 /* bits 20+ reserved for packed fields below: */ 146 143
+33 -3
fs/bcachefs/journal.c
··· 281 281 282 282 sectors = vstruct_blocks_plus(buf->data, c->block_bits, 283 283 buf->u64s_reserved) << c->block_bits; 284 - BUG_ON(sectors > buf->sectors); 284 + if (unlikely(sectors > buf->sectors)) { 285 + struct printbuf err = PRINTBUF; 286 + err.atomic++; 287 + 288 + prt_printf(&err, "journal entry overran reserved space: %u > %u\n", 289 + sectors, buf->sectors); 290 + prt_printf(&err, "buf u64s %u u64s reserved %u cur_entry_u64s %u block_bits %u\n", 291 + le32_to_cpu(buf->data->u64s), buf->u64s_reserved, 292 + j->cur_entry_u64s, 293 + c->block_bits); 294 + prt_printf(&err, "fatal error - emergency read only"); 295 + bch2_journal_halt_locked(j); 296 + 297 + bch_err(c, "%s", err.buf); 298 + printbuf_exit(&err); 299 + return; 300 + } 301 + 285 302 buf->sectors = sectors; 286 303 287 304 /* ··· 1479 1462 j->last_empty_seq = cur_seq - 1; /* to match j->seq */ 1480 1463 1481 1464 spin_lock(&j->lock); 1482 - 1483 - set_bit(JOURNAL_running, &j->flags); 1484 1465 j->last_flush_write = jiffies; 1485 1466 1486 1467 j->reservations.idx = journal_cur_seq(j); ··· 1487 1472 spin_unlock(&j->lock); 1488 1473 1489 1474 return 0; 1475 + } 1476 + 1477 + void bch2_journal_set_replay_done(struct journal *j) 1478 + { 1479 + /* 1480 + * journal_space_available must happen before setting JOURNAL_running 1481 + * JOURNAL_running must happen before JOURNAL_replay_done 1482 + */ 1483 + spin_lock(&j->lock); 1484 + bch2_journal_space_available(j); 1485 + 1486 + set_bit(JOURNAL_need_flush_write, &j->flags); 1487 + set_bit(JOURNAL_running, &j->flags); 1488 + set_bit(JOURNAL_replay_done, &j->flags); 1489 + spin_unlock(&j->lock); 1490 1490 } 1491 1491 1492 1492 /* init/exit: */
+1 -6
fs/bcachefs/journal.h
··· 437 437 438 438 struct bch_dev; 439 439 440 - static inline void bch2_journal_set_replay_done(struct journal *j) 441 - { 442 - BUG_ON(!test_bit(JOURNAL_running, &j->flags)); 443 - set_bit(JOURNAL_replay_done, &j->flags); 444 - } 445 - 446 440 void bch2_journal_unblock(struct journal *); 447 441 void bch2_journal_block(struct journal *); 448 442 struct journal_buf *bch2_next_write_buffer_flush_journal_buf(struct journal *, u64, bool *); ··· 453 459 454 460 void bch2_fs_journal_stop(struct journal *); 455 461 int bch2_fs_journal_start(struct journal *, u64); 462 + void bch2_journal_set_replay_done(struct journal *); 456 463 457 464 void bch2_dev_journal_exit(struct bch_dev *); 458 465 int bch2_dev_journal_init(struct bch_dev *, struct bch_sb *);
+4 -1
fs/bcachefs/journal_reclaim.c
··· 252 252 253 253 bch2_journal_set_watermark(j); 254 254 out: 255 - j->cur_entry_sectors = !ret ? j->space[journal_space_discarded].next_entry : 0; 255 + j->cur_entry_sectors = !ret 256 + ? round_down(j->space[journal_space_discarded].next_entry, 257 + block_sectors(c)) 258 + : 0; 256 259 j->cur_entry_error = ret; 257 260 258 261 if (!ret)
+7
fs/bcachefs/movinggc.c
··· 356 356 357 357 set_freezable(); 358 358 359 + /* 360 + * Data move operations can't run until after check_snapshots has 361 + * completed, and bch2_snapshot_is_ancestor() is available. 362 + */ 363 + kthread_wait_freezable(c->recovery_pass_done > BCH_RECOVERY_PASS_check_snapshots || 364 + kthread_should_stop()); 365 + 359 366 bch2_move_stats_init(&move_stats, "copygc"); 360 367 bch2_moving_ctxt_init(&ctxt, c, NULL, &move_stats, 361 368 writepoint_ptr(&c->copygc_write_point),
+9
fs/bcachefs/movinggc.h
··· 5 5 unsigned long bch2_copygc_wait_amount(struct bch_fs *); 6 6 void bch2_copygc_wait_to_text(struct printbuf *, struct bch_fs *); 7 7 8 + static inline void bch2_copygc_wakeup(struct bch_fs *c) 9 + { 10 + rcu_read_lock(); 11 + struct task_struct *p = rcu_dereference(c->copygc_thread); 12 + if (p) 13 + wake_up_process(p); 14 + rcu_read_unlock(); 15 + } 16 + 8 17 void bch2_copygc_stop(struct bch_fs *); 9 18 int bch2_copygc_start(struct bch_fs *); 10 19 void bch2_fs_copygc_init(struct bch_fs *);
-4
fs/bcachefs/namei.c
··· 47 47 if (ret) 48 48 goto err; 49 49 50 - /* Inherit casefold state from parent. */ 51 - if (S_ISDIR(mode)) 52 - new_inode->bi_flags |= dir_u->bi_flags & BCH_INODE_casefolded; 53 - 54 50 if (!(flags & BCH_CREATE_SNAPSHOT)) { 55 51 /* Normal create path - allocate a new inode: */ 56 52 bch2_inode_init_late(new_inode, now, uid, gid, mode, rdev, dir_u);
+5
fs/bcachefs/opts.h
··· 228 228 OPT_BOOL(), \ 229 229 BCH_SB_ERASURE_CODE, false, \ 230 230 NULL, "Enable erasure coding (DO NOT USE YET)") \ 231 + x(casefold, u8, \ 232 + OPT_FS|OPT_INODE|OPT_FORMAT, \ 233 + OPT_BOOL(), \ 234 + BCH_SB_CASEFOLD, false, \ 235 + NULL, "Dirent lookups are casefolded") \ 231 236 x(inodes_32bit, u8, \ 232 237 OPT_FS|OPT_INODE|OPT_FORMAT|OPT_MOUNT|OPT_RUNTIME, \ 233 238 OPT_BOOL(), \
+9 -2
fs/bcachefs/rebalance.c
··· 262 262 int ret = bch2_trans_commit_do(c, NULL, NULL, 263 263 BCH_TRANS_COMMIT_no_enospc, 264 264 bch2_set_rebalance_needs_scan_trans(trans, inum)); 265 - rebalance_wakeup(c); 265 + bch2_rebalance_wakeup(c); 266 266 return ret; 267 267 } 268 268 ··· 581 581 582 582 set_freezable(); 583 583 584 + /* 585 + * Data move operations can't run until after check_snapshots has 586 + * completed, and bch2_snapshot_is_ancestor() is available. 587 + */ 588 + kthread_wait_freezable(c->recovery_pass_done > BCH_RECOVERY_PASS_check_snapshots || 589 + kthread_should_stop()); 590 + 584 591 bch2_moving_ctxt_init(&ctxt, c, NULL, &r->work_stats, 585 592 writepoint_ptr(&c->rebalance_write_point), 586 593 true); ··· 671 664 c->rebalance.thread = NULL; 672 665 673 666 if (p) { 674 - /* for sychronizing with rebalance_wakeup() */ 667 + /* for sychronizing with bch2_rebalance_wakeup() */ 675 668 synchronize_rcu(); 676 669 677 670 kthread_stop(p);
+1 -1
fs/bcachefs/rebalance.h
··· 37 37 int bch2_set_rebalance_needs_scan(struct bch_fs *, u64 inum); 38 38 int bch2_set_fs_needs_rebalance(struct bch_fs *); 39 39 40 - static inline void rebalance_wakeup(struct bch_fs *c) 40 + static inline void bch2_rebalance_wakeup(struct bch_fs *c) 41 41 { 42 42 struct task_struct *p; 43 43
+7 -3
fs/bcachefs/recovery.c
··· 18 18 #include "journal_seq_blacklist.h" 19 19 #include "logged_ops.h" 20 20 #include "move.h" 21 + #include "movinggc.h" 21 22 #include "namei.h" 22 23 #include "quota.h" 23 24 #include "rebalance.h" ··· 1130 1129 if (ret) 1131 1130 goto err; 1132 1131 1133 - set_bit(BCH_FS_accounting_replay_done, &c->flags); 1134 - bch2_journal_set_replay_done(&c->journal); 1135 - 1136 1132 ret = bch2_fs_read_write_early(c); 1137 1133 if (ret) 1138 1134 goto err; 1135 + 1136 + set_bit(BCH_FS_accounting_replay_done, &c->flags); 1137 + bch2_journal_set_replay_done(&c->journal); 1139 1138 1140 1139 for_each_member_device(c, ca) { 1141 1140 ret = bch2_dev_usage_init(ca, false); ··· 1194 1193 goto err; 1195 1194 1196 1195 c->recovery_pass_done = BCH_RECOVERY_PASS_NR - 1; 1196 + 1197 + bch2_copygc_wakeup(c); 1198 + bch2_rebalance_wakeup(c); 1197 1199 1198 1200 if (enabled_qtypes(c)) { 1199 1201 ret = bch2_fs_quota_read(c);
+36 -32
fs/bcachefs/recovery_passes.c
··· 12 12 #include "journal.h" 13 13 #include "lru.h" 14 14 #include "logged_ops.h" 15 + #include "movinggc.h" 15 16 #include "rebalance.h" 16 17 #include "recovery.h" 17 18 #include "recovery_passes.h" ··· 263 262 */ 264 263 c->opts.recovery_passes_exclude &= ~BCH_RECOVERY_PASS_set_may_go_rw; 265 264 266 - while (c->curr_recovery_pass < ARRAY_SIZE(recovery_pass_fns) && !ret) { 267 - c->next_recovery_pass = c->curr_recovery_pass + 1; 265 + spin_lock_irq(&c->recovery_pass_lock); 268 266 269 - spin_lock_irq(&c->recovery_pass_lock); 267 + while (c->curr_recovery_pass < ARRAY_SIZE(recovery_pass_fns) && !ret) { 268 + unsigned prev_done = c->recovery_pass_done; 270 269 unsigned pass = c->curr_recovery_pass; 271 270 271 + c->next_recovery_pass = pass + 1; 272 + 272 273 if (c->opts.recovery_pass_last && 273 - c->curr_recovery_pass > c->opts.recovery_pass_last) { 274 - spin_unlock_irq(&c->recovery_pass_lock); 274 + c->curr_recovery_pass > c->opts.recovery_pass_last) 275 275 break; 276 - } 277 276 278 - if (!should_run_recovery_pass(c, pass)) { 279 - c->curr_recovery_pass++; 280 - c->recovery_pass_done = max(c->recovery_pass_done, pass); 277 + if (should_run_recovery_pass(c, pass)) { 281 278 spin_unlock_irq(&c->recovery_pass_lock); 282 - continue; 279 + ret = bch2_run_recovery_pass(c, pass) ?: 280 + bch2_journal_flush(&c->journal); 281 + 282 + if (!ret && !test_bit(BCH_FS_error, &c->flags)) 283 + bch2_clear_recovery_pass_required(c, pass); 284 + spin_lock_irq(&c->recovery_pass_lock); 285 + 286 + if (c->next_recovery_pass < c->curr_recovery_pass) { 287 + /* 288 + * bch2_run_explicit_recovery_pass() was called: we 289 + * can't always catch -BCH_ERR_restart_recovery because 290 + * it may have been called from another thread (btree 291 + * node read completion) 292 + */ 293 + ret = 0; 294 + c->recovery_passes_complete &= ~(~0ULL << c->curr_recovery_pass); 295 + } else { 296 + c->recovery_passes_complete |= BIT_ULL(pass); 297 + c->recovery_pass_done = max(c->recovery_pass_done, pass); 298 + } 283 299 } 284 - spin_unlock_irq(&c->recovery_pass_lock); 285 300 286 - ret = bch2_run_recovery_pass(c, pass) ?: 287 - bch2_journal_flush(&c->journal); 288 - 289 - if (!ret && !test_bit(BCH_FS_error, &c->flags)) 290 - bch2_clear_recovery_pass_required(c, pass); 291 - 292 - spin_lock_irq(&c->recovery_pass_lock); 293 - if (c->next_recovery_pass < c->curr_recovery_pass) { 294 - /* 295 - * bch2_run_explicit_recovery_pass() was called: we 296 - * can't always catch -BCH_ERR_restart_recovery because 297 - * it may have been called from another thread (btree 298 - * node read completion) 299 - */ 300 - ret = 0; 301 - c->recovery_passes_complete &= ~(~0ULL << c->curr_recovery_pass); 302 - } else { 303 - c->recovery_passes_complete |= BIT_ULL(pass); 304 - c->recovery_pass_done = max(c->recovery_pass_done, pass); 305 - } 306 301 c->curr_recovery_pass = c->next_recovery_pass; 307 - spin_unlock_irq(&c->recovery_pass_lock); 302 + 303 + if (prev_done <= BCH_RECOVERY_PASS_check_snapshots && 304 + c->recovery_pass_done > BCH_RECOVERY_PASS_check_snapshots) { 305 + bch2_copygc_wakeup(c); 306 + bch2_rebalance_wakeup(c); 307 + } 308 308 } 309 + 310 + spin_unlock_irq(&c->recovery_pass_lock); 309 311 310 312 return ret; 311 313 }
+1 -1
fs/bcachefs/snapshot.c
··· 396 396 u32 subvol = 0, s; 397 397 398 398 rcu_read_lock(); 399 - while (id) { 399 + while (id && bch2_snapshot_exists(c, id)) { 400 400 s = snapshot_t(c, id)->subvol; 401 401 402 402 if (s && (!subvol || s < subvol))
+2 -3
fs/bcachefs/str_hash.h
··· 33 33 34 34 struct bch_hash_info { 35 35 u8 type; 36 - struct unicode_map *cf_encoding; 36 + struct unicode_map *cf_encoding; 37 37 /* 38 38 * For crc32 or crc64 string hashes the first key value of 39 39 * the siphash_key (k0) is used as the key. ··· 44 44 static inline struct bch_hash_info 45 45 bch2_hash_info_init(struct bch_fs *c, const struct bch_inode_unpacked *bi) 46 46 { 47 - /* XXX ick */ 48 47 struct bch_hash_info info = { 49 48 .type = INODE_STR_HASH(bi), 50 49 #ifdef CONFIG_UNICODE 51 - .cf_encoding = !!(bi->bi_flags & BCH_INODE_casefolded) ? c->cf_encoding : NULL, 50 + .cf_encoding = bch2_inode_casefold(c, bi) ? c->cf_encoding : NULL, 52 51 #endif 53 52 .siphash_key = { .k0 = bi->bi_hash_seed } 54 53 };
+2 -1
fs/bcachefs/super-io.c
··· 1102 1102 prt_str(&buf, ")"); 1103 1103 bch2_fs_fatal_error(c, ": %s", buf.buf); 1104 1104 printbuf_exit(&buf); 1105 - return -BCH_ERR_sb_not_downgraded; 1105 + ret = -BCH_ERR_sb_not_downgraded; 1106 + goto out; 1106 1107 } 1107 1108 1108 1109 darray_for_each(online_devices, ca) {
+66 -90
fs/bcachefs/super.c
··· 418 418 return ret; 419 419 } 420 420 421 - static int bch2_fs_read_write_late(struct bch_fs *c) 422 - { 423 - int ret; 424 - 425 - /* 426 - * Data move operations can't run until after check_snapshots has 427 - * completed, and bch2_snapshot_is_ancestor() is available. 428 - * 429 - * Ideally we'd start copygc/rebalance earlier instead of waiting for 430 - * all of recovery/fsck to complete: 431 - */ 432 - ret = bch2_copygc_start(c); 433 - if (ret) { 434 - bch_err(c, "error starting copygc thread"); 435 - return ret; 436 - } 437 - 438 - ret = bch2_rebalance_start(c); 439 - if (ret) { 440 - bch_err(c, "error starting rebalance thread"); 441 - return ret; 442 - } 443 - 444 - return 0; 445 - } 446 - 447 421 static int __bch2_fs_read_write(struct bch_fs *c, bool early) 448 422 { 449 423 int ret; ··· 440 466 441 467 clear_bit(BCH_FS_clean_shutdown, &c->flags); 442 468 443 - /* 444 - * First journal write must be a flush write: after a clean shutdown we 445 - * don't read the journal, so the first journal write may end up 446 - * overwriting whatever was there previously, and there must always be 447 - * at least one non-flush write in the journal or recovery will fail: 448 - */ 449 - set_bit(JOURNAL_need_flush_write, &c->journal.flags); 450 - set_bit(JOURNAL_running, &c->journal.flags); 451 - 452 469 __for_each_online_member(c, ca, BIT(BCH_MEMBER_STATE_rw), READ) { 453 470 bch2_dev_allocator_add(c, ca); 454 471 percpu_ref_reinit(&ca->io_ref[WRITE]); 455 472 } 456 473 bch2_recalc_capacity(c); 457 474 475 + /* 476 + * First journal write must be a flush write: after a clean shutdown we 477 + * don't read the journal, so the first journal write may end up 478 + * overwriting whatever was there previously, and there must always be 479 + * at least one non-flush write in the journal or recovery will fail: 480 + */ 481 + spin_lock(&c->journal.lock); 482 + set_bit(JOURNAL_need_flush_write, &c->journal.flags); 483 + set_bit(JOURNAL_running, &c->journal.flags); 484 + bch2_journal_space_available(&c->journal); 485 + spin_unlock(&c->journal.lock); 486 + 458 487 ret = bch2_fs_mark_dirty(c); 459 488 if (ret) 460 489 goto err; 461 - 462 - spin_lock(&c->journal.lock); 463 - bch2_journal_space_available(&c->journal); 464 - spin_unlock(&c->journal.lock); 465 490 466 491 ret = bch2_journal_reclaim_start(&c->journal); 467 492 if (ret) ··· 477 504 atomic_long_inc(&c->writes[i]); 478 505 } 479 506 #endif 480 - if (!early) { 481 - ret = bch2_fs_read_write_late(c); 482 - if (ret) 483 - goto err; 507 + 508 + ret = bch2_copygc_start(c); 509 + if (ret) { 510 + bch_err_msg(c, ret, "error starting copygc thread"); 511 + goto err; 512 + } 513 + 514 + ret = bch2_rebalance_start(c); 515 + if (ret) { 516 + bch_err_msg(c, ret, "error starting rebalance thread"); 517 + goto err; 484 518 } 485 519 486 520 bch2_do_discards(c); ··· 533 553 534 554 bch2_find_btree_nodes_exit(&c->found_btree_nodes); 535 555 bch2_free_pending_node_rewrites(c); 556 + bch2_free_fsck_errs(c); 536 557 bch2_fs_accounting_exit(c); 537 558 bch2_fs_sb_errors_exit(c); 538 559 bch2_fs_counters_exit(c); ··· 1004 1023 printbuf_exit(&p); 1005 1024 } 1006 1025 1026 + static bool bch2_fs_may_start(struct bch_fs *c) 1027 + { 1028 + struct bch_dev *ca; 1029 + unsigned i, flags = 0; 1030 + 1031 + if (c->opts.very_degraded) 1032 + flags |= BCH_FORCE_IF_DEGRADED|BCH_FORCE_IF_LOST; 1033 + 1034 + if (c->opts.degraded) 1035 + flags |= BCH_FORCE_IF_DEGRADED; 1036 + 1037 + if (!c->opts.degraded && 1038 + !c->opts.very_degraded) { 1039 + mutex_lock(&c->sb_lock); 1040 + 1041 + for (i = 0; i < c->disk_sb.sb->nr_devices; i++) { 1042 + if (!bch2_member_exists(c->disk_sb.sb, i)) 1043 + continue; 1044 + 1045 + ca = bch2_dev_locked(c, i); 1046 + 1047 + if (!bch2_dev_is_online(ca) && 1048 + (ca->mi.state == BCH_MEMBER_STATE_rw || 1049 + ca->mi.state == BCH_MEMBER_STATE_ro)) { 1050 + mutex_unlock(&c->sb_lock); 1051 + return false; 1052 + } 1053 + } 1054 + mutex_unlock(&c->sb_lock); 1055 + } 1056 + 1057 + return bch2_have_enough_devs(c, bch2_online_devs(c), flags, true); 1058 + } 1059 + 1007 1060 int bch2_fs_start(struct bch_fs *c) 1008 1061 { 1009 1062 time64_t now = ktime_get_real_seconds(); 1010 1063 int ret = 0; 1011 1064 1012 1065 print_mount_opts(c); 1066 + 1067 + if (!bch2_fs_may_start(c)) 1068 + return -BCH_ERR_insufficient_devices_to_start; 1013 1069 1014 1070 down_write(&c->state_lock); 1015 1071 mutex_lock(&c->sb_lock); ··· 1100 1082 wake_up(&c->ro_ref_wait); 1101 1083 1102 1084 down_write(&c->state_lock); 1103 - if (c->opts.read_only) { 1085 + if (c->opts.read_only) 1104 1086 bch2_fs_read_only(c); 1105 - } else { 1106 - ret = !test_bit(BCH_FS_rw, &c->flags) 1107 - ? bch2_fs_read_write(c) 1108 - : bch2_fs_read_write_late(c); 1109 - } 1087 + else if (!test_bit(BCH_FS_rw, &c->flags)) 1088 + ret = bch2_fs_read_write(c); 1110 1089 up_write(&c->state_lock); 1111 1090 1112 1091 err: ··· 1515 1500 1516 1501 printbuf_exit(&name); 1517 1502 1518 - rebalance_wakeup(c); 1503 + bch2_rebalance_wakeup(c); 1519 1504 return 0; 1520 1505 } ··· 1574 1559 } 1575 1560 } 1576 1561 1577 - static bool bch2_fs_may_start(struct bch_fs *c) 1578 - { 1579 - struct bch_dev *ca; 1580 - unsigned i, flags = 0; 1581 - 1582 - if (c->opts.very_degraded) 1583 - flags |= BCH_FORCE_IF_DEGRADED|BCH_FORCE_IF_LOST; 1584 - 1585 - if (c->opts.degraded) 1586 - flags |= BCH_FORCE_IF_DEGRADED; 1587 - 1588 - if (!c->opts.degraded && 1589 - !c->opts.very_degraded) { 1590 - mutex_lock(&c->sb_lock); 1591 - 1592 - for (i = 0; i < c->disk_sb.sb->nr_devices; i++) { 1593 - if (!bch2_member_exists(c->disk_sb.sb, i)) 1594 - continue; 1595 - 1596 - ca = bch2_dev_locked(c, i); 1597 - 1598 - if (!bch2_dev_is_online(ca) && 1599 - (ca->mi.state == BCH_MEMBER_STATE_rw || 1600 - ca->mi.state == BCH_MEMBER_STATE_ro)) { 1601 - mutex_unlock(&c->sb_lock); 1602 - return false; 1603 - } 1604 - } 1605 - mutex_unlock(&c->sb_lock); 1606 - } 1607 - 1608 - return bch2_have_enough_devs(c, bch2_online_devs(c), flags, true); 1609 - } 1610 - 1611 1562 static void __bch2_dev_read_only(struct bch_fs *c, struct bch_dev *ca) 1612 1563 { 1613 1564 bch2_dev_io_ref_stop(ca, WRITE); ··· 1627 1646 if (new_state == BCH_MEMBER_STATE_rw) 1628 1647 __bch2_dev_read_write(c, ca); 1629 1648 1630 - rebalance_wakeup(c); 1649 + bch2_rebalance_wakeup(c); 1631 1650 1632 1651 return ret; 1633 1652 } ··· 2208 2227 } 2209 2228 } 2210 2229 up_write(&c->state_lock); 2211 - 2212 - if (!bch2_fs_may_start(c)) { 2213 - ret = -BCH_ERR_insufficient_devices_to_start; 2214 - goto err_print; 2215 - } 2216 2230 2217 2231 if (!c->opts.nostart) { 2218 2232 ret = bch2_fs_start(c);
+3 -4
fs/bcachefs/sysfs.c
··· 654 654 bch2_set_rebalance_needs_scan(c, 0); 655 655 656 656 if (v && id == Opt_rebalance_enabled) 657 - rebalance_wakeup(c); 657 + bch2_rebalance_wakeup(c); 658 658 659 - if (v && id == Opt_copygc_enabled && 660 - c->copygc_thread) 661 - wake_up_process(c->copygc_thread); 659 + if (v && id == Opt_copygc_enabled) 660 + bch2_copygc_wakeup(c); 662 661 663 662 if (id == Opt_discard && !ca) { 664 663 mutex_lock(&c->sb_lock);
+4
fs/bcachefs/tests.c
··· 342 342 */ 343 343 static int test_peek_end(struct bch_fs *c, u64 nr) 344 344 { 345 + delete_test_keys(c); 346 + 345 347 struct btree_trans *trans = bch2_trans_get(c); 346 348 struct btree_iter iter; 347 349 struct bkey_s_c k; ··· 364 362 365 363 static int test_peek_end_extents(struct bch_fs *c, u64 nr) 366 364 { 365 + delete_test_keys(c); 366 + 367 367 struct btree_trans *trans = bch2_trans_get(c); 368 368 struct btree_iter iter; 369 369 struct bkey_s_c k;
+38
fs/bcachefs/util.h
··· 739 739 *--dst = *src++; 740 740 } 741 741 742 + #define set_flags(_map, _in, _out) \ 743 + do { \ 744 + unsigned _i; \ 745 + \ 746 + for (_i = 0; _i < ARRAY_SIZE(_map); _i++) \ 747 + if ((_in) & (1 << _i)) \ 748 + (_out) |= _map[_i]; \ 749 + else \ 750 + (_out) &= ~_map[_i]; \ 751 + } while (0) 752 + 753 + #define map_flags(_map, _in) \ 754 + ({ \ 755 + unsigned _out = 0; \ 756 + \ 757 + set_flags(_map, _in, _out); \ 758 + _out; \ 759 + }) 760 + 761 + #define map_flags_rev(_map, _in) \ 762 + ({ \ 763 + unsigned _i, _out = 0; \ 764 + \ 765 + for (_i = 0; _i < ARRAY_SIZE(_map); _i++) \ 766 + if ((_in) & _map[_i]) { \ 767 + (_out) |= 1 << _i; \ 768 + (_in) &= ~_map[_i]; \ 769 + } \ 770 + (_out); \ 771 + }) 772 + 773 + #define map_defined(_map) \ 774 + ({ \ 775 + unsigned _in = ~0; \ 776 + \ 777 + map_flags_rev(_map, _in); \ 778 + }) 779 + 742 780 #endif /* _BCACHEFS_UTIL_H */
+1 -1
fs/btrfs/extent_io.c
··· 2047 2047 subpage->bitmaps)) { 2048 2048 spin_unlock_irqrestore(&subpage->lock, flags); 2049 2049 spin_unlock(&folio->mapping->i_private_lock); 2050 - bit_start++; 2050 + bit_start += sectors_per_node; 2051 2051 continue; 2052 2052 } 2053 2053
+8 -5
fs/btrfs/inode.c
··· 2129 2129 2130 2130 /* 2131 2131 * If the found extent starts after requested offset, then 2132 - * adjust extent_end to be right before this extent begins 2132 + * adjust cur_offset to be right before this extent begins. 2133 2133 */ 2134 2134 if (found_key.offset > cur_offset) { 2135 - extent_end = found_key.offset; 2136 - extent_type = 0; 2137 - goto must_cow; 2135 + if (cow_start == (u64)-1) 2136 + cow_start = cur_offset; 2137 + cur_offset = found_key.offset; 2138 + goto next_slot; 2138 2139 } 2139 2140 2140 2141 /* ··· 5682 5681 return inode; 5683 5682 5684 5683 path = btrfs_alloc_path(); 5685 - if (!path) 5684 + if (!path) { 5685 + iget_failed(&inode->vfs_inode); 5686 5686 return ERR_PTR(-ENOMEM); 5687 + } 5687 5688 5688 5689 ret = btrfs_read_locked_inode(inode, path); 5689 5690 btrfs_free_path(path);
+54 -19
fs/buffer.c
··· 176 176 } 177 177 EXPORT_SYMBOL(end_buffer_write_sync); 178 178 179 - /* 180 - * Various filesystems appear to want __find_get_block to be non-blocking. 181 - * But it's the page lock which protects the buffers. To get around this, 182 - * we get exclusion from try_to_free_buffers with the blockdev mapping's 183 - * i_private_lock. 184 - * 185 - * Hack idea: for the blockdev mapping, i_private_lock contention 186 - * may be quite high. This code could TryLock the page, and if that 187 - * succeeds, there is no need to take i_private_lock. 188 - */ 189 179 static struct buffer_head * 190 - __find_get_block_slow(struct block_device *bdev, sector_t block) 180 + __find_get_block_slow(struct block_device *bdev, sector_t block, bool atomic) 191 181 { 192 182 struct address_space *bd_mapping = bdev->bd_mapping; 193 183 const int blkbits = bd_mapping->host->i_blkbits; ··· 194 204 if (IS_ERR(folio)) 195 205 goto out; 196 206 197 - spin_lock(&bd_mapping->i_private_lock); 207 + /* 208 + * Folio lock protects the buffers. Callers that cannot block 209 + * will fallback to serializing vs try_to_free_buffers() via 210 + * the i_private_lock. 211 + */ 212 + if (atomic) 213 + spin_lock(&bd_mapping->i_private_lock); 214 + else 215 + folio_lock(folio); 216 + 198 217 head = folio_buffers(folio); 199 218 if (!head) 200 219 goto out_unlock; 220 + /* 221 + * Upon a noref migration, the folio lock serializes here; 222 + * otherwise bail. 223 + */ 224 + if (test_bit_acquire(BH_Migrate, &head->b_state)) { 225 + WARN_ON(!atomic); 226 + goto out_unlock; 227 + } 228 + 201 229 bh = head; 202 230 do { 203 231 if (!buffer_mapped(bh)) ··· 244 236 1 << blkbits); 245 237 } 246 238 out_unlock: 247 - spin_unlock(&bd_mapping->i_private_lock); 239 + if (atomic) 240 + spin_unlock(&bd_mapping->i_private_lock); 241 + else 242 + folio_unlock(folio); 248 243 folio_put(folio); 249 244 out: 250 245 return ret; ··· 667 656 void write_boundary_block(struct block_device *bdev, 668 657 sector_t bblock, unsigned blocksize) 669 658 { 670 - struct buffer_head *bh = __find_get_block(bdev, bblock + 1, blocksize); 659 + struct buffer_head *bh; 660 + 661 + bh = __find_get_block_nonatomic(bdev, bblock + 1, blocksize); 671 662 if (bh) { 672 663 if (buffer_dirty(bh)) 673 664 write_dirty_buffer(bh, 0); ··· 1399 1386 /* 1400 1387 * Perform a pagecache lookup for the matching buffer. If it's there, refresh 1401 1388 * it in the LRU and mark it as accessed. If it is not present then return 1402 - * NULL 1389 + * NULL. Atomic context callers may also return NULL if the buffer is being 1390 + * migrated; similarly the page is not marked accessed either. 1403 1391 */ 1404 - struct buffer_head * 1405 - __find_get_block(struct block_device *bdev, sector_t block, unsigned size) 1392 + static struct buffer_head * 1393 + find_get_block_common(struct block_device *bdev, sector_t block, 1394 + unsigned size, bool atomic) 1406 1395 { 1407 1396 struct buffer_head *bh = lookup_bh_lru(bdev, block, size); 1408 1397 1409 1398 if (bh == NULL) { 1410 1399 /* __find_get_block_slow will mark the page accessed */ 1411 - bh = __find_get_block_slow(bdev, block); 1400 + bh = __find_get_block_slow(bdev, block, atomic); 1412 1401 if (bh) 1413 1402 bh_lru_install(bh); 1414 1403 } else ··· 1418 1403 1419 1404 return bh; 1420 1405 } 1406 + 1407 + struct buffer_head * 1408 + __find_get_block(struct block_device *bdev, sector_t block, unsigned size) 1409 + { 1410 + return find_get_block_common(bdev, block, size, true); 1411 + } 1421 1412 EXPORT_SYMBOL(__find_get_block); 1413 + 1414 + /* same as __find_get_block() but allows sleeping contexts */ 1415 + struct buffer_head * 1416 + __find_get_block_nonatomic(struct block_device *bdev, sector_t block, 1417 + unsigned size) 1418 + { 1419 + return find_get_block_common(bdev, block, size, false); 1420 + } 1421 + EXPORT_SYMBOL(__find_get_block_nonatomic); 1422 1422 1423 1423 /** 1424 1424 * bdev_getblk - Get a buffer_head in a block device's buffer cache. ··· 1452 1422 struct buffer_head *bdev_getblk(struct block_device *bdev, sector_t block, 1453 1423 unsigned size, gfp_t gfp) 1454 1424 { 1455 - struct buffer_head *bh = __find_get_block(bdev, block, size); 1425 + struct buffer_head *bh; 1426 + 1427 + if (gfpflags_allow_blocking(gfp)) 1428 + bh = __find_get_block_nonatomic(bdev, block, size); 1429 + else 1430 + bh = __find_get_block(bdev, block, size); 1456 1431 1457 1432 might_alloc(gfp); 1458 1433 if (bh)
+1 -1
fs/ceph/inode.c
··· 2367 2367 2368 2368 /* Try to writeback the dirty pagecaches */ 2369 2369 if (issued & (CEPH_CAP_FILE_BUFFER)) { 2370 - loff_t lend = orig_pos + CEPH_FSCRYPT_BLOCK_SHIFT - 1; 2370 + loff_t lend = orig_pos + CEPH_FSCRYPT_BLOCK_SIZE - 1; 2371 2371 2372 2372 ret = filemap_write_and_wait_range(inode->i_mapping, 2373 2373 orig_pos, lend);
+2 -1
fs/ext4/ialloc.c
··· 691 691 if (!bh || !buffer_uptodate(bh)) 692 692 /* 693 693 * If the block is not in the buffer cache, then it 694 - * must have been written out. 694 + * must have been written out, or, most unlikely, is 695 + * being migrated - false failure should be OK here. 695 696 */ 696 697 goto out; 697 698
+2 -1
fs/ext4/mballoc.c
··· 6642 6642 for (i = 0; i < count; i++) { 6643 6643 cond_resched(); 6644 6644 if (is_metadata) 6645 - bh = sb_find_get_block(inode->i_sb, block + i); 6645 + bh = sb_find_get_block_nonatomic(inode->i_sb, 6646 + block + i); 6646 6647 ext4_forget(handle, is_metadata, inode, bh, block + i); 6647 6648 } 6648 6649 }
+1 -1
fs/file.c
··· 26 26 27 27 #include "internal.h" 28 28 29 - bool __file_ref_put_badval(file_ref_t *ref, unsigned long cnt) 29 + static noinline bool __file_ref_put_badval(file_ref_t *ref, unsigned long cnt) 30 30 { 31 31 /* 32 32 * If the reference count was already in the dead zone, then this
+9 -6
fs/jbd2/revoke.c
··· 345 345 bh = bh_in; 346 346 347 347 if (!bh) { 348 - bh = __find_get_block(bdev, blocknr, journal->j_blocksize); 348 + bh = __find_get_block_nonatomic(bdev, blocknr, 349 + journal->j_blocksize); 349 350 if (bh) 350 351 BUFFER_TRACE(bh, "found on hash"); 351 352 } ··· 356 355 357 356 /* If there is a different buffer_head lying around in 358 357 * memory anywhere... */ 359 - bh2 = __find_get_block(bdev, blocknr, journal->j_blocksize); 358 + bh2 = __find_get_block_nonatomic(bdev, blocknr, 359 + journal->j_blocksize); 360 360 if (bh2) { 361 361 /* ... and it has RevokeValid status... */ 362 362 if (bh2 != bh && buffer_revokevalid(bh2)) ··· 466 464 * state machine will get very upset later on. */ 467 465 if (need_cancel) { 468 466 struct buffer_head *bh2; 469 - bh2 = __find_get_block(bh->b_bdev, bh->b_blocknr, bh->b_size); 467 + bh2 = __find_get_block_nonatomic(bh->b_bdev, bh->b_blocknr, 468 + bh->b_size); 470 469 if (bh2) { 471 470 if (bh2 != bh) 472 471 clear_buffer_revoked(bh2); ··· 495 492 struct jbd2_revoke_record_s *record; 496 493 struct buffer_head *bh; 497 494 record = (struct jbd2_revoke_record_s *)list_entry; 498 - bh = __find_get_block(journal->j_fs_dev, 499 - record->blocknr, 500 - journal->j_blocksize); 495 + bh = __find_get_block_nonatomic(journal->j_fs_dev, 496 + record->blocknr, 497 + journal->j_blocksize); 501 498 if (bh) { 502 499 clear_buffer_revoked(bh); 503 500 __brelse(bh);
+36 -33
fs/namespace.c
··· 2826 2826 struct vfsmount *mnt = path->mnt; 2827 2827 struct dentry *dentry; 2828 2828 struct mountpoint *mp = ERR_PTR(-ENOENT); 2829 + struct path under = {}; 2829 2830 2830 2831 for (;;) { 2831 - struct mount *m; 2832 + struct mount *m = real_mount(mnt); 2832 2833 2833 2834 if (beneath) { 2834 - m = real_mount(mnt); 2835 + path_put(&under); 2835 2836 read_seqlock_excl(&mount_lock); 2836 - dentry = dget(m->mnt_mountpoint); 2837 + under.mnt = mntget(&m->mnt_parent->mnt); 2838 + under.dentry = dget(m->mnt_mountpoint); 2837 2839 read_sequnlock_excl(&mount_lock); 2840 + dentry = under.dentry; 2838 2841 } else { 2839 2842 dentry = path->dentry; 2840 2843 } 2841 2844 2842 2845 inode_lock(dentry->d_inode); 2843 - if (unlikely(cant_mount(dentry))) { 2844 - inode_unlock(dentry->d_inode); 2845 - goto out; 2846 - } 2847 - 2848 2846 namespace_lock(); 2849 2847 2850 - if (beneath && (!is_mounted(mnt) || m->mnt_mountpoint != dentry)) { 2848 + if (unlikely(cant_mount(dentry) || !is_mounted(mnt))) 2849 + break; // not to be mounted on 2850 + 2851 + if (beneath && unlikely(m->mnt_mountpoint != dentry || 2852 + &m->mnt_parent->mnt != under.mnt)) { 2851 2853 namespace_unlock(); 2852 2854 inode_unlock(dentry->d_inode); 2853 - goto out; 2855 + continue; // got moved 2854 2856 } 2855 2857 2856 2858 mnt = lookup_mnt(path); 2857 - if (likely(!mnt)) 2859 + if (unlikely(mnt)) { 2860 + namespace_unlock(); 2861 + inode_unlock(dentry->d_inode); 2862 + path_put(path); 2863 + path->mnt = mnt; 2864 + path->dentry = dget(mnt->mnt_root); 2865 + continue; // got overmounted 2866 + } 2867 + mp = get_mountpoint(dentry); 2868 + if (IS_ERR(mp)) 2858 2869 break; 2859 - 2860 - namespace_unlock(); 2861 - inode_unlock(dentry->d_inode); 2862 - if (beneath) 2863 - dput(dentry); 2864 - path_put(path); 2865 - path->mnt = mnt; 2866 - path->dentry = dget(mnt->mnt_root); 2870 + if (beneath) { 2871 + /* 2872 + * @under duplicates the references that will stay 2873 + * at least until namespace_unlock(), so the path_put() 2874 + * below is safe (and OK to do under namespace_lock - 2875 + * we are not dropping the final references here). 2876 + */ 2877 + path_put(&under); 2878 + } 2879 + return mp; 2867 2880 } 2868 - 2869 - mp = get_mountpoint(dentry); 2870 - if (IS_ERR(mp)) { 2871 - namespace_unlock(); 2872 - inode_unlock(dentry->d_inode); 2873 - } 2874 - 2875 - out: 2881 + namespace_unlock(); 2882 + inode_unlock(dentry->d_inode); 2876 2883 if (beneath) 2877 - dput(dentry); 2878 - 2884 + path_put(&under); 2879 2885 return mp; 2880 2886 } ··· 2892 2886 2893 2887 static void unlock_mount(struct mountpoint *where) 2894 2888 { 2895 - struct dentry *dentry = where->m_dentry; 2896 - 2889 + inode_unlock(where->m_dentry->d_inode); 2897 2890 read_seqlock_excl(&mount_lock); 2898 2891 put_mountpoint(where); 2899 2892 read_sequnlock_excl(&mount_lock); 2900 - 2901 2893 namespace_unlock(); 2902 - inode_unlock(dentry->d_inode); 2903 2894 } 2904 2895 2905 2896 static int graft_tree(struct mount *mnt, struct mount *p, struct mountpoint *mp)
+1 -6
fs/notify/fanotify/fanotify_user.c
··· 1961 1961 return -EINVAL; 1962 1962 1963 1963 if (mark_cmd == FAN_MARK_FLUSH) { 1964 - if (mark_type == FAN_MARK_MOUNT) 1965 - fsnotify_clear_vfsmount_marks_by_group(group); 1966 - else if (mark_type == FAN_MARK_FILESYSTEM) 1967 - fsnotify_clear_sb_marks_by_group(group); 1968 - else 1969 - fsnotify_clear_inode_marks_by_group(group); 1964 + fsnotify_clear_marks_by_group(group, obj_type); 1970 1965 return 0; 1971 1966 } 1972 1967
+1 -1
fs/ocfs2/journal.c
··· 1249 1249 } 1250 1250 1251 1251 for (i = 0; i < p_blocks; i++, p_blkno++) { 1252 - bh = __find_get_block(osb->sb->s_bdev, p_blkno, 1252 + bh = __find_get_block_nonatomic(osb->sb->s_bdev, p_blkno, 1253 1253 osb->sb->s_blocksize); 1254 1254 /* block not cached. */ 1255 1255 if (!bh)
+13 -1
fs/smb/server/auth.c
··· 550 550 retval = -ENOMEM; 551 551 goto out; 552 552 } 553 - sess->user = user; 553 + 554 + if (!sess->user) { 555 + /* First successful authentication */ 556 + sess->user = user; 557 + } else { 558 + if (!ksmbd_compare_user(sess->user, user)) { 559 + ksmbd_debug(AUTH, "different user tried to reuse session\n"); 560 + retval = -EPERM; 561 + ksmbd_free_user(user); 562 + goto out; 563 + } 564 + ksmbd_free_user(user); 565 + } 554 566 555 567 memcpy(sess->sess_key, resp->payload, resp->session_key_len); 556 568 memcpy(out_blob, resp->payload + resp->session_key_len,
+14 -6
fs/smb/server/mgmt/user_session.c
··· 59 59 struct ksmbd_session_rpc *entry; 60 60 long index; 61 61 62 + down_write(&sess->rpc_lock); 62 63 xa_for_each(&sess->rpc_handle_list, index, entry) { 63 64 xa_erase(&sess->rpc_handle_list, index); 64 65 __session_rpc_close(sess, entry); 65 66 } 67 + up_write(&sess->rpc_lock); 66 68 67 69 xa_destroy(&sess->rpc_handle_list); 68 70 } ··· 94 92 { 95 93 struct ksmbd_session_rpc *entry, *old; 96 94 struct ksmbd_rpc_command *resp; 97 - int method; 95 + int method, id; 98 96 99 97 method = __rpc_method(rpc_name); 100 98 if (!method) ··· 104 102 if (!entry) 105 103 return -ENOMEM; 106 104 105 + down_read(&sess->rpc_lock); 107 106 entry->method = method; 108 - entry->id = ksmbd_ipc_id_alloc(); 107 + entry->id = id = ksmbd_ipc_id_alloc(); 109 - if (entry->id < 0) 108 + if (id < 0) 110 109 goto free_entry; 111 - old = xa_store(&sess->rpc_handle_list, entry->id, entry, KSMBD_DEFAULT_GFP); 110 + old = xa_store(&sess->rpc_handle_list, id, entry, KSMBD_DEFAULT_GFP); 112 111 if (xa_is_err(old)) 113 112 goto free_id; 114 113 115 - resp = ksmbd_rpc_open(sess, entry->id); 114 + resp = ksmbd_rpc_open(sess, id); 116 115 if (!resp) 117 116 goto erase_xa; 118 117 118 + up_read(&sess->rpc_lock); 119 119 kvfree(resp); 120 - return entry->id; 120 + return id; 121 121 erase_xa: 122 122 xa_erase(&sess->rpc_handle_list, entry->id); 123 123 free_id: 124 124 ksmbd_rpc_id_free(entry->id); 125 125 free_entry: 126 126 kfree(entry); 127 + up_read(&sess->rpc_lock); 127 128 return -EINVAL; 128 129 } 129 130 ··· 134 129 { 135 130 struct ksmbd_session_rpc *entry; 136 131 132 + down_write(&sess->rpc_lock); 137 133 entry = xa_erase(&sess->rpc_handle_list, id); 138 134 if (entry) 139 135 __session_rpc_close(sess, entry); 136 + up_write(&sess->rpc_lock); 140 137 } 141 138 142 139 int ksmbd_session_rpc_method(struct ksmbd_session *sess, int id) ··· 446 439 sess->sequence_number = 1; 447 440 rwlock_init(&sess->tree_conns_lock); 448 441 atomic_set(&sess->refcnt, 2); 442 + init_rwsem(&sess->rpc_lock); 449 443 450 444 ret = __init_smb2_session(sess); 451 445 if (ret)
+1
fs/smb/server/mgmt/user_session.h
··· 63 63 rwlock_t tree_conns_lock; 64 64 65 65 atomic_t refcnt; 66 + struct rw_semaphore rpc_lock; 66 67 }; 67 68 68 69 static inline int test_session_flag(struct ksmbd_session *sess, int bit)
+7 -11
fs/smb/server/smb2pdu.c
··· 1445 1445 { 1446 1446 struct ksmbd_conn *conn = work->conn; 1447 1447 struct ksmbd_session *sess = work->sess; 1448 - struct channel *chann = NULL; 1448 + struct channel *chann = NULL, *old; 1449 1449 struct ksmbd_user *user; 1450 1450 u64 prev_id; 1451 1451 int sz, rc; ··· 1557 1557 return -ENOMEM; 1558 1558 1559 1559 chann->conn = conn; 1560 - xa_store(&sess->ksmbd_chann_list, (long)conn, chann, KSMBD_DEFAULT_GFP); 1560 + old = xa_store(&sess->ksmbd_chann_list, (long)conn, chann, 1561 + KSMBD_DEFAULT_GFP); 1562 + if (xa_is_err(old)) { 1563 + kfree(chann); 1564 + return xa_err(old); 1565 + } 1561 1566 } 1562 1567 } 1563 1568 ··· 1606 1601 prev_sess_id = le64_to_cpu(req->PreviousSessionId); 1607 1602 if (prev_sess_id && prev_sess_id != sess->id) 1608 1603 destroy_previous_session(conn, sess->user, prev_sess_id); 1609 - 1610 - if (sess->state == SMB2_SESSION_VALID) { 1611 - ksmbd_free_user(sess->user); 1612 - sess->user = NULL; 1613 - } 1614 1604 1615 1605 retval = ksmbd_krb5_authenticate(sess, in_blob, in_len, 1616 1606 out_blob, &out_len); ··· 2249 2249 sess->state = SMB2_SESSION_EXPIRED; 2250 2250 up_write(&conn->session_lock); 2251 2251 2252 - if (sess->user) { 2253 - ksmbd_free_user(sess->user); 2254 - sess->user = NULL; 2255 - } 2256 2252 ksmbd_all_conn_set_status(sess_id, KSMBD_SESS_NEED_SETUP); 2257 2253 2258 2254 rsp->StructureSize = cpu_to_le16(4);
+1 -1
fs/splice.c
··· 45 45 * here if set to avoid blocking other users of this pipe if splice is 46 46 * being done on it. 47 47 */ 48 - static noinline void noinline pipe_clear_nowait(struct file *file) 48 + static noinline void pipe_clear_nowait(struct file *file) 49 49 { 50 50 fmode_t fmode = READ_ONCE(file->f_mode); 51 51
+2 -2
fs/xattr.c
··· 703 703 return error; 704 704 705 705 filename = getname_maybe_null(pathname, at_flags); 706 - if (!filename) { 706 + if (!filename && dfd >= 0) { 707 707 CLASS(fd, f)(dfd); 708 708 if (fd_empty(f)) 709 709 error = -EBADF; ··· 847 847 return error; 848 848 849 849 filename = getname_maybe_null(pathname, at_flags); 850 - if (!filename) { 850 + if (!filename && dfd >= 0) { 851 851 CLASS(fd, f)(dfd); 852 852 if (fd_empty(f)) 853 853 return -EBADF;
+8 -2
fs/xfs/xfs_zone_gc.c
··· 170 170 xfs_zoned_need_gc( 171 171 struct xfs_mount *mp) 172 172 { 173 - s64 available, free; 173 + s64 available, free, threshold; 174 + s32 remainder; 174 175 175 176 if (!xfs_group_marked(mp, XG_TYPE_RTG, XFS_RTG_RECLAIMABLE)) 176 177 return false; ··· 184 183 return true; 185 184 186 185 free = xfs_estimate_freecounter(mp, XC_FREE_RTEXTENTS); 187 - if (available < mult_frac(free, mp->m_zonegc_low_space, 100)) 186 + 187 + threshold = div_s64_rem(free, 100, &remainder); 188 + threshold = threshold * mp->m_zonegc_low_space + 189 + remainder * div_s64(mp->m_zonegc_low_space, 100); 190 + 191 + if (available < threshold) 188 192 return true; 189 193 190 194 return false;
+3 -2
include/cxl/features.h
··· 66 66 #ifdef CONFIG_CXL_FEATURES 67 67 inline struct cxl_features_state *to_cxlfs(struct cxl_dev_state *cxlds); 68 68 int devm_cxl_setup_features(struct cxl_dev_state *cxlds); 69 - int devm_cxl_setup_fwctl(struct cxl_memdev *cxlmd); 69 + int devm_cxl_setup_fwctl(struct device *host, struct cxl_memdev *cxlmd); 70 70 #else 71 71 static inline struct cxl_features_state *to_cxlfs(struct cxl_dev_state *cxlds) 72 72 { ··· 78 78 return -EOPNOTSUPP; 79 79 } 80 80 81 - static inline int devm_cxl_setup_fwctl(struct cxl_memdev *cxlmd) 81 + static inline int devm_cxl_setup_fwctl(struct device *host, 82 + struct cxl_memdev *cxlmd) 82 83 { 83 84 return -EOPNOTSUPP; 84 85 }
+1 -4
include/linux/blkdev.h
··· 1637 1637 return bio_end_io_acct_remapped(bio, start_time, bio->bi_bdev); 1638 1638 } 1639 1639 1640 + int bdev_validate_blocksize(struct block_device *bdev, int block_size); 1640 1641 int set_blocksize(struct file *file, int size); 1641 1642 1642 1643 int lookup_bdev(const char *pathname, dev_t *dev); ··· 1693 1692 int bd_prepare_to_claim(struct block_device *bdev, void *holder, 1694 1693 const struct blk_holder_ops *hops); 1695 1694 void bd_abort_claiming(struct block_device *bdev, void *holder); 1696 - 1697 - /* just for blk-cgroup, don't use elsewhere */ 1698 - struct block_device *blkdev_get_no_open(dev_t dev); 1699 - void blkdev_put_no_open(struct block_device *bdev); 1700 1695 1701 1696 struct block_device *I_BDEV(struct inode *inode); 1702 1697 struct block_device *file_bdev(struct file *bdev_file);
+9
include/linux/buffer_head.h
··· 34 34 BH_Meta, /* Buffer contains metadata */ 35 35 BH_Prio, /* Buffer should be submitted with REQ_PRIO */ 36 36 BH_Defer_Completion, /* Defer AIO completion to workqueue */ 37 + BH_Migrate, /* Buffer is being migrated (norefs) */ 37 38 38 39 BH_PrivateStart,/* not a state bit, but the first bit available 39 40 * for private allocation by other entities ··· 223 222 wait_queue_head_t *bh_waitq_head(struct buffer_head *bh); 224 223 struct buffer_head *__find_get_block(struct block_device *bdev, sector_t block, 225 224 unsigned size); 225 + struct buffer_head *__find_get_block_nonatomic(struct block_device *bdev, 226 + sector_t block, unsigned size); 226 227 struct buffer_head *bdev_getblk(struct block_device *bdev, sector_t block, 227 228 unsigned size, gfp_t gfp); 228 229 void __brelse(struct buffer_head *); ··· 398 395 sb_find_get_block(struct super_block *sb, sector_t block) 399 396 { 400 397 return __find_get_block(sb->s_bdev, block, sb->s_blocksize); 398 + } 399 + 400 + static inline struct buffer_head * 401 + sb_find_get_block_nonatomic(struct super_block *sb, sector_t block) 402 + { 403 + return __find_get_block_nonatomic(sb->s_bdev, block, sb->s_blocksize); 401 404 } 402 405 403 406 static inline void
-6
include/linux/ceph/osd_client.h
··· 490 490 struct page **pages, u64 length, 491 491 u32 alignment, bool pages_from_pool, 492 492 bool own_pages); 493 - extern void osd_req_op_extent_osd_data_pagelist(struct ceph_osd_request *, 494 - unsigned int which, 495 - struct ceph_pagelist *pagelist); 496 493 #ifdef CONFIG_BLOCK 497 494 void osd_req_op_extent_osd_data_bio(struct ceph_osd_request *osd_req, 498 495 unsigned int which, ··· 506 509 void osd_req_op_extent_osd_iter(struct ceph_osd_request *osd_req, 507 510 unsigned int which, struct iov_iter *iter); 508 511 509 - extern void osd_req_op_cls_request_data_pagelist(struct ceph_osd_request *, 510 - unsigned int which, 511 - struct ceph_pagelist *pagelist); 512 512 extern void osd_req_op_cls_request_data_pages(struct ceph_osd_request *, 513 513 unsigned int which, 514 514 struct page **pages, u64 length,
+8 -4
include/linux/dma-mapping.h
··· 629 629 #else 630 630 #define DEFINE_DMA_UNMAP_ADDR(ADDR_NAME) 631 631 #define DEFINE_DMA_UNMAP_LEN(LEN_NAME) 632 - #define dma_unmap_addr(PTR, ADDR_NAME) (0) 633 - #define dma_unmap_addr_set(PTR, ADDR_NAME, VAL) do { } while (0) 634 - #define dma_unmap_len(PTR, LEN_NAME) (0) 635 - #define dma_unmap_len_set(PTR, LEN_NAME, VAL) do { } while (0) 632 + #define dma_unmap_addr(PTR, ADDR_NAME) \ 633 + ({ typeof(PTR) __p __maybe_unused = PTR; 0; }) 634 + #define dma_unmap_addr_set(PTR, ADDR_NAME, VAL) \ 635 + do { typeof(PTR) __p __maybe_unused = PTR; } while (0) 636 + #define dma_unmap_len(PTR, LEN_NAME) \ 637 + ({ typeof(PTR) __p __maybe_unused = PTR; 0; }) 638 + #define dma_unmap_len_set(PTR, LEN_NAME, VAL) \ 639 + do { typeof(PTR) __p __maybe_unused = PTR; } while (0) 636 640 #endif 637 641 638 642 #endif /* _LINUX_DMA_MAPPING_H */
+6 -13
include/linux/file_ref.h
··· 61 61 atomic_long_set(&ref->refcnt, cnt - 1); 62 62 } 63 63 64 - bool __file_ref_put_badval(file_ref_t *ref, unsigned long cnt); 65 64 bool __file_ref_put(file_ref_t *ref, unsigned long cnt); 66 65 67 66 /** ··· 177 178 */ 178 179 static __always_inline __must_check bool file_ref_put_close(file_ref_t *ref) 179 180 { 180 - long old, new; 181 + long old; 181 182 182 183 old = atomic_long_read(&ref->refcnt); 183 - do { 184 - if (unlikely(old < 0)) 185 - return __file_ref_put_badval(ref, old); 186 - 187 - if (old == FILE_REF_ONEREF) 188 - new = FILE_REF_DEAD; 189 - else 190 - new = old - 1; 191 - } while (!atomic_long_try_cmpxchg(&ref->refcnt, &old, new)); 192 - 193 - return new == FILE_REF_DEAD; 184 + if (likely(old == FILE_REF_ONEREF)) { 185 + if (likely(atomic_long_try_cmpxchg(&ref->refcnt, &old, FILE_REF_DEAD))) 186 + return true; 187 + } 188 + return file_ref_put(ref); 194 189 } 195 190 196 191 /**
-15
include/linux/fsnotify_backend.h
··· 907 907 /* Clear all of the marks of a group attached to a given object type */ 908 908 extern void fsnotify_clear_marks_by_group(struct fsnotify_group *group, 909 909 unsigned int obj_type); 910 - /* run all the marks in a group, and clear all of the vfsmount marks */ 911 - static inline void fsnotify_clear_vfsmount_marks_by_group(struct fsnotify_group *group) 912 - { 913 - fsnotify_clear_marks_by_group(group, FSNOTIFY_OBJ_TYPE_VFSMOUNT); 914 - } 915 - /* run all the marks in a group, and clear all of the inode marks */ 916 - static inline void fsnotify_clear_inode_marks_by_group(struct fsnotify_group *group) 917 - { 918 - fsnotify_clear_marks_by_group(group, FSNOTIFY_OBJ_TYPE_INODE); 919 - } 920 - /* run all the marks in a group, and clear all of the sn marks */ 921 - static inline void fsnotify_clear_sb_marks_by_group(struct fsnotify_group *group) 922 - { 923 - fsnotify_clear_marks_by_group(group, FSNOTIFY_OBJ_TYPE_SB); 924 - } 925 910 extern void fsnotify_get_mark(struct fsnotify_mark *mark); 926 911 extern void fsnotify_put_mark(struct fsnotify_mark *mark); 927 912 extern void fsnotify_finish_user_wait(struct fsnotify_iter_info *iter_info);
+5
include/linux/fwnode.h
··· 2 2 /* 3 3 * fwnode.h - Firmware device node object handle type definition. 4 4 * 5 + * This header file provides low-level data types and definitions for firmware 6 + * and device property providers. The respective API header files supplied by 7 + * them should contain all of the requisite data types and definitions for end 8 + * users, so including it directly should not be necessary. 9 + * 5 10 * Copyright (C) 2015, Intel Corporation 6 11 * Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com> 7 12 */
+2
include/linux/module.h
··· 162 162 #define __INITRODATA_OR_MODULE __INITRODATA 163 163 #endif /*CONFIG_MODULES*/ 164 164 165 + struct module_kobject *lookup_or_create_module_kobject(const char *name); 166 + 165 167 /* Generic info of form tag = "info" */ 166 168 #define MODULE_INFO(tag, info) __MODULE_INFO(tag, tag, info) 167 169
+3 -1
include/net/bluetooth/hci.h
··· 1931 1931 __u8 sync_cte_type; 1932 1932 } __packed; 1933 1933 1934 + #define HCI_OP_LE_PA_CREATE_SYNC_CANCEL 0x2045 1935 + 1934 1936 #define HCI_OP_LE_PA_TERM_SYNC 0x2046 1935 1937 struct hci_cp_le_pa_term_sync { 1936 1938 __le16 handle; ··· 2832 2830 __le16 bis_handle[]; 2833 2831 } __packed; 2834 2832 2835 - #define HCI_EVT_LE_BIG_SYNC_ESTABILISHED 0x1d 2833 + #define HCI_EVT_LE_BIG_SYNC_ESTABLISHED 0x1d 2836 2834 struct hci_evt_le_big_sync_estabilished { 2837 2835 __u8 status; 2838 2836 __u8 handle;
+9 -11
include/net/bluetooth/hci_core.h
··· 1113 1113 return NULL; 1114 1114 } 1115 1115 1116 - static inline struct hci_conn *hci_conn_hash_lookup_sid(struct hci_dev *hdev, 1117 - __u8 sid, 1118 - bdaddr_t *dst, 1119 - __u8 dst_type) 1116 + static inline struct hci_conn * 1117 + hci_conn_hash_lookup_create_pa_sync(struct hci_dev *hdev) 1120 1118 { 1121 1119 struct hci_conn_hash *h = &hdev->conn_hash; 1122 1120 struct hci_conn *c; ··· 1122 1124 rcu_read_lock(); 1123 1125 1124 1126 list_for_each_entry_rcu(c, &h->list, list) { 1125 - if (c->type != ISO_LINK || bacmp(&c->dst, dst) || 1126 - c->dst_type != dst_type || c->sid != sid) 1127 + if (c->type != ISO_LINK) 1128 + continue; 1129 + 1130 + if (!test_bit(HCI_CONN_CREATE_PA_SYNC, &c->flags)) 1127 1131 continue; 1128 1132 1129 1133 rcu_read_unlock(); ··· 1524 1524 void hci_sco_setup(struct hci_conn *conn, __u8 status); 1525 1525 bool hci_iso_setup_path(struct hci_conn *conn); 1526 1526 int hci_le_create_cis_pending(struct hci_dev *hdev); 1527 - int hci_pa_create_sync_pending(struct hci_dev *hdev); 1528 - int hci_le_big_create_sync_pending(struct hci_dev *hdev); 1529 1527 int hci_conn_check_create_cis(struct hci_conn *conn); 1530 1528 1531 1529 struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst, ··· 1564 1566 __u8 data_len, __u8 *data); 1565 1567 struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst, 1566 1568 __u8 dst_type, __u8 sid, struct bt_iso_qos *qos); 1567 - int hci_le_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon, 1568 - struct bt_iso_qos *qos, 1569 - __u16 sync_handle, __u8 num_bis, __u8 bis[]); 1569 + int hci_conn_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon, 1570 + struct bt_iso_qos *qos, __u16 sync_handle, 1571 + __u8 num_bis, __u8 bis[]); 1570 1572 int hci_conn_check_link_mode(struct hci_conn *conn); 1571 1573 int hci_conn_check_secure(struct hci_conn *conn, __u8 sec_level); 1572 1574 int hci_conn_security(struct hci_conn *conn, __u8 sec_level, __u8 auth_type,
+3
include/net/bluetooth/hci_sync.h
··· 185 185 int hci_cancel_connect_sync(struct hci_dev *hdev, struct hci_conn *conn); 186 186 int hci_le_conn_update_sync(struct hci_dev *hdev, struct hci_conn *conn, 187 187 struct hci_conn_params *params); 188 + 189 + int hci_connect_pa_sync(struct hci_dev *hdev, struct hci_conn *conn); 190 + int hci_connect_big_sync(struct hci_dev *hdev, struct hci_conn *conn);
-3
include/net/xdp_sock.h
··· 71 71 */ 72 72 u32 tx_budget_spent; 73 73 74 - /* Protects generic receive. */ 75 - spinlock_t rx_lock; 76 - 77 74 /* Statistics */ 78 75 u64 rx_dropped; 79 76 u64 rx_queue_full;
+3 -1
include/net/xsk_buff_pool.h
··· 53 53 refcount_t users; 54 54 struct xdp_umem *umem; 55 55 struct work_struct work; 56 + /* Protects generic receive in shared and non-shared umem mode. */ 57 + spinlock_t rx_lock; 56 58 struct list_head free_list; 57 59 struct list_head xskb_list; 58 60 u32 heads_cnt; ··· 240 238 return orig_addr; 241 239 242 240 offset = xskb->xdp.data - xskb->xdp.data_hard_start; 243 - orig_addr -= offset; 244 241 offset += pool->headroom; 242 + orig_addr -= offset; 245 243 return orig_addr + (offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT); 246 244 } 247 245
+56 -29
include/uapi/linux/landlock.h
··· 53 53 __u64 scoped; 54 54 }; 55 55 56 - /* 57 - * sys_landlock_create_ruleset() flags: 56 + /** 57 + * DOC: landlock_create_ruleset_flags 58 58 * 59 - * - %LANDLOCK_CREATE_RULESET_VERSION: Get the highest supported Landlock ABI 60 - * version. 61 - * - %LANDLOCK_CREATE_RULESET_ERRATA: Get a bitmask of fixed issues. 59 + * **Flags** 60 + * 61 + * %LANDLOCK_CREATE_RULESET_VERSION 62 + * Get the highest supported Landlock ABI version (starting at 1). 63 + * 64 + * %LANDLOCK_CREATE_RULESET_ERRATA 65 + * Get a bitmask of fixed issues for the current Landlock ABI version. 62 66 */ 63 67 /* clang-format off */ 64 68 #define LANDLOCK_CREATE_RULESET_VERSION (1U << 0) 65 69 #define LANDLOCK_CREATE_RULESET_ERRATA (1U << 1) 66 70 /* clang-format on */ 67 71 68 - /* 69 - * sys_landlock_restrict_self() flags: 72 + /** 73 + * DOC: landlock_restrict_self_flags 70 74 * 71 - * - %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF: Do not create any log related to the 72 - * enforced restrictions. This should only be set by tools launching unknown 73 - * or untrusted programs (e.g. a sandbox tool, container runtime, system 74 - * service manager). Because programs sandboxing themselves should fix any 75 - * denied access, they should not set this flag to be aware of potential 76 - * issues reported by system's logs (i.e. audit). 77 - * - %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON: Explicitly ask to continue 78 - * logging denied access requests even after an :manpage:`execve(2)` call. 79 - * This flag should only be set if all the programs than can legitimately be 80 - * executed will not try to request a denied access (which could spam audit 81 - * logs). 82 - * - %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF: Do not create any log related 83 - * to the enforced restrictions coming from future nested domains created by 84 - * the caller or its descendants. This should only be set according to a 85 - * runtime configuration (i.e. not hardcoded) by programs launching other 86 - * unknown or untrusted programs that may create their own Landlock domains 87 - * and spam logs. The main use case is for container runtimes to enable users 88 - * to mute buggy sandboxed programs for a specific container image. Other use 89 - * cases include sandboxer tools and init systems. Unlike 90 - * %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF, 91 - * %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF does not impact the requested 92 - * restriction (if any) but only the future nested domains. 75 + * **Flags** 76 + * 77 + * By default, denied accesses originating from programs that sandbox themselves 78 + * are logged via the audit subsystem. Such events typically indicate unexpected 79 + * behavior, such as bugs or exploitation attempts. However, to avoid excessive 80 + * logging, access requests denied by a domain not created by the originating 81 + * program are not logged by default. The rationale is that programs should know 82 + * their own behavior, but not necessarily the behavior of other programs. This 83 + * default configuration is suitable for most programs that sandbox themselves. 84 + * For specific use cases, the following flags allow programs to modify this 85 + * default logging behavior. 86 + * 87 + * The %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF and 88 + * %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON flags apply to the newly created 89 + * Landlock domain. 90 + * 91 + * %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF 92 + * Disables logging of denied accesses originating from the thread creating 93 + * the Landlock domain, as well as its children, as long as they continue 94 + * running the same executable code (i.e., without an intervening 95 + * :manpage:`execve(2)` call). This is intended for programs that execute 96 + * unknown code without invoking :manpage:`execve(2)`, such as script 97 + * interpreters. Programs that only sandbox themselves should not set this 98 + * flag, so users can be notified of unauthorized access attempts via system 99 + * logs. 100 + * 101 + * %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON 102 + * Enables logging of denied accesses after an :manpage:`execve(2)` call, 103 + * providing visibility into unauthorized access attempts by newly executed 104 + * programs within the created Landlock domain. This flag is recommended 105 + * only when all potential executables in the domain are expected to comply 106 + * with the access restrictions, as excessive audit log entries could make 107 + * it more difficult to identify critical events. 108 + * 109 + * %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF 110 + * Disables logging of denied accesses originating from nested Landlock 111 + * domains created by the caller or its descendants. This flag should be set 112 + * according to runtime configuration, not hardcoded, to avoid suppressing 113 + * important security events. It is useful for container runtimes or 114 + * sandboxing tools that may launch programs which themselves create 115 + * Landlock domains and could otherwise generate excessive logs. Unlike 116 + * ``LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF``, this flag only affects 117 + * future nested domains, not the one being created. It can also be used 118 + * with a @ruleset_fd value of -1 to mute subdomain logs without creating a 119 + * domain. 93 120 */ 94 121 /* clang-format off */ 95 122 #define LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF (1U << 0)
+6
include/ufs/ufs_quirks.h
··· 107 107 */ 108 108 #define UFS_DEVICE_QUIRK_DELAY_AFTER_LPM (1 << 11) 109 109 110 + /* 111 + * Some ufs devices may need more time to be in hibern8 before exiting. 112 + * Enable this quirk to give it an additional 100us. 113 + */ 114 + #define UFS_DEVICE_QUIRK_PA_HIBER8TIME (1 << 12) 115 + 110 116 #endif /* UFS_QUIRKS_H_ */
+15 -9
io_uring/io_uring.c
··· 872 872 lockdep_assert(!io_wq_current_is_worker()); 873 873 lockdep_assert_held(&ctx->uring_lock); 874 874 875 - __io_cq_lock(ctx); 876 - posted = io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags); 875 + if (!ctx->lockless_cq) { 876 + spin_lock(&ctx->completion_lock); 877 + posted = io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags); 878 + spin_unlock(&ctx->completion_lock); 879 + } else { 880 + posted = io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags); 881 + } 882 + 877 883 ctx->submit_state.cq_flush = true; 878 - __io_cq_unlock_post(ctx); 879 884 return posted; 880 885 } 881 886 ··· 1083 1078 while (node) { 1084 1079 req = container_of(node, struct io_kiocb, io_task_work.node); 1085 1080 node = node->next; 1086 - if (sync && last_ctx != req->ctx) { 1081 + if (last_ctx != req->ctx) { 1087 1082 if (last_ctx) { 1088 - flush_delayed_work(&last_ctx->fallback_work); 1083 + if (sync) 1084 + flush_delayed_work(&last_ctx->fallback_work); 1089 1085 percpu_ref_put(&last_ctx->refs); 1090 1086 } 1091 1087 last_ctx = req->ctx; 1092 1088 percpu_ref_get(&last_ctx->refs); 1093 1089 } 1094 - if (llist_add(&req->io_task_work.node, 1095 - &req->ctx->fallback_llist)) 1096 - schedule_delayed_work(&req->ctx->fallback_work, 1); 1090 + if (llist_add(&req->io_task_work.node, &last_ctx->fallback_llist)) 1091 + schedule_delayed_work(&last_ctx->fallback_work, 1); 1097 1092 } 1098 1093 1099 1094 if (last_ctx) { 1100 - flush_delayed_work(&last_ctx->fallback_work); 1095 + if (sync) 1096 + flush_delayed_work(&last_ctx->fallback_work); 1101 1097 percpu_ref_put(&last_ctx->refs); 1102 1098 } 1103 1099 }
+1 -1
kernel/bpf/hashtab.c
··· 2189 2189 b = &htab->buckets[i]; 2190 2190 rcu_read_lock(); 2191 2191 head = &b->head; 2192 - hlist_nulls_for_each_entry_rcu(elem, n, head, hash_node) { 2192 + hlist_nulls_for_each_entry_safe(elem, n, head, hash_node) { 2193 2193 key = elem->key; 2194 2194 if (is_percpu) { 2195 2195 /* current cpu value for percpu map */
+1
kernel/bpf/preload/bpf_preload_kern.c
··· 89 89 } 90 90 late_initcall(load); 91 91 module_exit(fini); 92 + MODULE_IMPORT_NS("BPF_INTERNAL"); 92 93 MODULE_LICENSE("GPL"); 93 94 MODULE_DESCRIPTION("Embedded BPF programs for introspection in bpffs");
+3 -3
kernel/bpf/syscall.c
··· 1583 1583 1584 1584 return map; 1585 1585 } 1586 - EXPORT_SYMBOL(bpf_map_get); 1586 + EXPORT_SYMBOL_NS(bpf_map_get, "BPF_INTERNAL"); 1587 1587 1588 1588 struct bpf_map *bpf_map_get_with_uref(u32 ufd) 1589 1589 { ··· 3364 3364 bpf_link_inc(link); 3365 3365 return link; 3366 3366 } 3367 - EXPORT_SYMBOL(bpf_link_get_from_fd); 3367 + EXPORT_SYMBOL_NS(bpf_link_get_from_fd, "BPF_INTERNAL"); 3368 3368 3369 3369 static void bpf_tracing_link_release(struct bpf_link *link) 3370 3370 { ··· 6020 6020 return ____bpf_sys_bpf(cmd, attr, size); 6021 6021 } 6022 6022 } 6023 - EXPORT_SYMBOL(kern_sys_bpf); 6023 + EXPORT_SYMBOL_NS(kern_sys_bpf, "BPF_INTERNAL"); 6024 6024 6025 6025 static const struct bpf_func_proto bpf_sys_bpf_proto = { 6026 6026 .func = bpf_sys_bpf,
+9 -3
kernel/dma/coherent.c
··· 336 336 337 337 static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev) 338 338 { 339 - if (!rmem->priv) { 340 - struct dma_coherent_mem *mem; 339 + struct dma_coherent_mem *mem = rmem->priv; 341 340 341 + if (!mem) { 342 342 mem = dma_init_coherent_memory(rmem->base, rmem->base, 343 343 rmem->size, true); 344 344 if (IS_ERR(mem)) 345 345 return PTR_ERR(mem); 346 346 rmem->priv = mem; 347 347 } 348 - dma_assign_coherent_memory(dev, rmem->priv); 348 + 349 + /* Warn if the device potentially can't use the reserved memory */ 350 + if (mem->device_base + rmem->size - 1 > 351 + min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit)) 352 + dev_warn(dev, "reserved memory is beyond device's set DMA address range\n"); 353 + 354 + dma_assign_coherent_memory(dev, mem); 349 355 return 0; 350 356 } 351 357
+1 -2
kernel/dma/contiguous.c
··· 64 64 * Users, who want to set the size of global CMA area for their system 65 65 * should use cma= kernel parameter. 66 66 */ 67 - static const phys_addr_t size_bytes __initconst = 68 - (phys_addr_t)CMA_SIZE_MBYTES * SZ_1M; 67 + #define size_bytes ((phys_addr_t)CMA_SIZE_MBYTES * SZ_1M) 69 68 static phys_addr_t size_cmdline __initdata = -1; 70 69 static phys_addr_t base_cmdline __initdata; 71 70 static phys_addr_t limit_cmdline __initdata;
+18 -9
kernel/dma/mapping.c
··· 910 910 } 911 911 EXPORT_SYMBOL(dma_set_coherent_mask); 912 912 913 - /** 914 - * dma_addressing_limited - return if the device is addressing limited 915 - * @dev: device to check 916 - * 917 - * Return %true if the devices DMA mask is too small to address all memory in 918 - * the system, else %false. Lack of addressing bits is the prime reason for 919 - * bounce buffering, but might not be the only one. 920 - */ 921 - bool dma_addressing_limited(struct device *dev) 913 + static bool __dma_addressing_limited(struct device *dev) 922 914 { 923 915 const struct dma_map_ops *ops = get_dma_ops(dev); 924 916 ··· 921 929 if (unlikely(ops) || use_dma_iommu(dev)) 922 930 return false; 923 931 return !dma_direct_all_ram_mapped(dev); 932 + } 933 + 934 + /** 935 + * dma_addressing_limited - return if the device is addressing limited 936 + * @dev: device to check 937 + * 938 + * Return %true if the devices DMA mask is too small to address all memory in 939 + * the system, else %false. Lack of addressing bits is the prime reason for 940 + * bounce buffering, but might not be the only one. 941 + */ 942 + bool dma_addressing_limited(struct device *dev) 943 + { 944 + if (!__dma_addressing_limited(dev)) 945 + return false; 946 + 947 + dev_dbg(dev, "device is DMA addressing limited\n"); 948 + return true; 924 949 } 925 950 EXPORT_SYMBOL_GPL(dma_addressing_limited); 926 951
+2 -2
kernel/events/core.c
··· 3943 3943 perf_event_set_state(event, PERF_EVENT_STATE_ERROR); 3944 3944 3945 3945 if (*perf_event_fasync(event)) 3946 - event->pending_kill = POLL_HUP; 3946 + event->pending_kill = POLL_ERR; 3947 3947 3948 3948 perf_event_wakeup(event); 3949 3949 } else { ··· 6075 6075 6076 6076 if (unlikely(READ_ONCE(event->state) == PERF_EVENT_STATE_ERROR && 6077 6077 event->attr.pinned)) 6078 - return events; 6078 + return EPOLLERR; 6079 6079 6080 6080 /* 6081 6081 * Pin the event->rb by taking event->mmap_mutex; otherwise
+21 -24
kernel/params.c
··· 760 760 params[i].ops->free(params[i].arg); 761 761 } 762 762 763 - static struct module_kobject * __init locate_module_kobject(const char *name) 763 + struct module_kobject __modinit * lookup_or_create_module_kobject(const char *name) 764 764 { 765 765 struct module_kobject *mk; 766 766 struct kobject *kobj; 767 767 int err; 768 768 769 769 kobj = kset_find_obj(module_kset, name); 770 - if (kobj) { 771 - mk = to_module_kobject(kobj); 772 - } else { 773 - mk = kzalloc(sizeof(struct module_kobject), GFP_KERNEL); 774 - BUG_ON(!mk); 770 + if (kobj) 771 + return to_module_kobject(kobj); 775 772 776 - mk->mod = THIS_MODULE; 777 - mk->kobj.kset = module_kset; 778 - err = kobject_init_and_add(&mk->kobj, &module_ktype, NULL, 779 - "%s", name); 780 - #ifdef CONFIG_MODULES 781 - if (!err) 782 - err = sysfs_create_file(&mk->kobj, &module_uevent.attr); 783 - #endif 784 - if (err) { 785 - kobject_put(&mk->kobj); 786 - pr_crit("Adding module '%s' to sysfs failed (%d), the system may be unstable.\n", 787 - name, err); 788 - return NULL; 789 - } 773 + mk = kzalloc(sizeof(struct module_kobject), GFP_KERNEL); 774 + if (!mk) 775 + return NULL; 790 776 791 - /* So that we hold reference in both cases. */ 792 - kobject_get(&mk->kobj); 777 + mk->mod = THIS_MODULE; 778 + mk->kobj.kset = module_kset; 779 + err = kobject_init_and_add(&mk->kobj, &module_ktype, NULL, "%s", name); 780 + if (IS_ENABLED(CONFIG_MODULES) && !err) 781 + err = sysfs_create_file(&mk->kobj, &module_uevent.attr); 782 + if (err) { 783 + kobject_put(&mk->kobj); 784 + pr_crit("Adding module '%s' to sysfs failed (%d), the system may be unstable.\n", 785 + name, err); 786 + return NULL; 787 + } 788 + 789 + /* So that we hold reference in both cases. */ 790 + kobject_get(&mk->kobj); 794 791 795 792 return mk; 796 793 } ··· 799 802 struct module_kobject *mk; 800 803 int err; 801 804 802 - mk = locate_module_kobject(name); 805 + mk = lookup_or_create_module_kobject(name); 803 806 if (!mk) 804 807 return; ··· 870 873 int err; 871 874 872 875 for (vattr = __start___modver; vattr < __stop___modver; vattr++) { 873 - mk = locate_module_kobject(vattr->module_name); 876 + mk = lookup_or_create_module_kobject(vattr->module_name); 874 877 if (mk) { 875 878 err = sysfs_create_file(&mk->kobj, &vattr->mattr.attr); 876 879 WARN_ON_ONCE(err);
+1 -3
kernel/sched/fair.c
··· 7081 7081 h_nr_idle = task_has_idle_policy(p); 7082 7082 if (task_sleep || task_delayed || !se->sched_delayed) 7083 7083 h_nr_runnable = 1; 7084 - } else { 7085 - cfs_rq = group_cfs_rq(se); 7086 - slice = cfs_rq_min_slice(cfs_rq); 7087 7084 } 7088 7085 7089 7086 for_each_sched_entity(se) { ··· 7090 7093 if (p && &p->se == se) 7091 7094 return -1; 7092 7095 7096 + slice = cfs_rq_min_slice(cfs_rq); 7093 7097 break; 7094 7098 } 7095 7099
+11 -1
mm/memblock.c
··· 2183 2183 struct memblock_region *region; 2184 2184 phys_addr_t start, end; 2185 2185 int nid; 2186 + unsigned long max_reserved; 2186 2187 2187 2188 /* 2188 2189 * set nid on all reserved pages and also treat struct 2189 2190 * pages for the NOMAP regions as PageReserved 2190 2191 */ 2192 + repeat: 2193 + max_reserved = memblock.reserved.max; 2191 2194 for_each_mem_region(region) { 2192 2195 nid = memblock_get_region_node(region); 2193 2196 start = region->base; ··· 2199 2196 if (memblock_is_nomap(region)) 2200 2197 reserve_bootmem_region(start, end, nid); 2201 2198 2202 - memblock_set_node(start, end, &memblock.reserved, nid); 2199 + memblock_set_node(start, region->size, &memblock.reserved, nid); 2203 2200 } 2201 + /* 2202 + * 'max' is changed means memblock.reserved has been doubled its 2203 + * array, which may result a new reserved region before current 2204 + * 'start'. Now we should repeat the procedure to set its node id. 2205 + */ 2206 + if (max_reserved != memblock.reserved.max) 2207 + goto repeat; 2204 2208 2205 2209 /* 2206 2210 * initialize struct pages for reserved regions that don't have
+5 -3
mm/migrate.c
··· 845 845 return -EAGAIN; 846 846 847 847 if (check_refs) { 848 - bool busy; 848 + bool busy, migrating; 849 849 bool invalidated = false; 850 850 851 + migrating = test_and_set_bit_lock(BH_Migrate, &head->b_state); 852 + VM_WARN_ON_ONCE(migrating); 851 853 recheck_buffers: 852 854 busy = false; 853 855 spin_lock(&mapping->i_private_lock); ··· 861 859 } 862 860 bh = bh->b_this_page; 863 861 } while (bh != head); 862 + spin_unlock(&mapping->i_private_lock); 864 863 if (busy) { 865 864 if (invalidated) { 866 865 rc = -EAGAIN; 867 866 goto unlock_buffers; 868 867 } 869 - spin_unlock(&mapping->i_private_lock); 870 868 invalidate_bh_lrus(); 871 869 invalidated = true; 872 870 goto recheck_buffers; ··· 885 883 886 884 unlock_buffers: 887 885 if (check_refs) 888 - spin_unlock(&mapping->i_private_lock); 886 + clear_bit_unlock(BH_Migrate, &head->b_state); 889 887 bh = head; 890 888 do { 891 889 unlock_buffer(bh);
+7 -174
net/bluetooth/hci_conn.c
··· 2064 2064 return hci_le_create_big(conn, &conn->iso_qos); 2065 2065 } 2066 2066 2067 - static void create_pa_complete(struct hci_dev *hdev, void *data, int err) 2068 - { 2069 - bt_dev_dbg(hdev, ""); 2070 - 2071 - if (err) 2072 - bt_dev_err(hdev, "Unable to create PA: %d", err); 2073 - } 2074 - 2075 - static bool hci_conn_check_create_pa_sync(struct hci_conn *conn) 2076 - { 2077 - if (conn->type != ISO_LINK || conn->sid == HCI_SID_INVALID) 2078 - return false; 2079 - 2080 - return true; 2081 - } 2082 - 2083 - static int create_pa_sync(struct hci_dev *hdev, void *data) 2084 - { 2085 - struct hci_cp_le_pa_create_sync cp = {0}; 2086 - struct hci_conn *conn; 2087 - int err = 0; 2088 - 2089 - hci_dev_lock(hdev); 2090 - 2091 - rcu_read_lock(); 2092 - 2093 - /* The spec allows only one pending LE Periodic Advertising Create 2094 - * Sync command at a time. If the command is pending now, don't do 2095 - * anything. We check for pending connections after each PA Sync 2096 - * Established event. 2097 - * 2098 - * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E 2099 - * page 2493: 2100 - * 2101 - * If the Host issues this command when another HCI_LE_Periodic_ 2102 - * Advertising_Create_Sync command is pending, the Controller shall 2103 - * return the error code Command Disallowed (0x0C). 
2104 - */ 2105 - list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) { 2106 - if (test_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags)) 2107 - goto unlock; 2108 - } 2109 - 2110 - list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) { 2111 - if (hci_conn_check_create_pa_sync(conn)) { 2112 - struct bt_iso_qos *qos = &conn->iso_qos; 2113 - 2114 - cp.options = qos->bcast.options; 2115 - cp.sid = conn->sid; 2116 - cp.addr_type = conn->dst_type; 2117 - bacpy(&cp.addr, &conn->dst); 2118 - cp.skip = cpu_to_le16(qos->bcast.skip); 2119 - cp.sync_timeout = cpu_to_le16(qos->bcast.sync_timeout); 2120 - cp.sync_cte_type = qos->bcast.sync_cte_type; 2121 - 2122 - break; 2123 - } 2124 - } 2125 - 2126 - unlock: 2127 - rcu_read_unlock(); 2128 - 2129 - hci_dev_unlock(hdev); 2130 - 2131 - if (bacmp(&cp.addr, BDADDR_ANY)) { 2132 - hci_dev_set_flag(hdev, HCI_PA_SYNC); 2133 - set_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags); 2134 - 2135 - err = __hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC, 2136 - sizeof(cp), &cp, HCI_CMD_TIMEOUT); 2137 - if (!err) 2138 - err = hci_update_passive_scan_sync(hdev); 2139 - 2140 - if (err) { 2141 - hci_dev_clear_flag(hdev, HCI_PA_SYNC); 2142 - clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags); 2143 - } 2144 - } 2145 - 2146 - return err; 2147 - } 2148 - 2149 - int hci_pa_create_sync_pending(struct hci_dev *hdev) 2150 - { 2151 - /* Queue start pa_create_sync and scan */ 2152 - return hci_cmd_sync_queue(hdev, create_pa_sync, 2153 - NULL, create_pa_complete); 2154 - } 2155 - 2156 2067 struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst, 2157 2068 __u8 dst_type, __u8 sid, 2158 2069 struct bt_iso_qos *qos) ··· 2078 2167 conn->dst_type = dst_type; 2079 2168 conn->sid = sid; 2080 2169 conn->state = BT_LISTEN; 2170 + conn->conn_timeout = msecs_to_jiffies(qos->bcast.sync_timeout * 10); 2081 2171 2082 2172 hci_conn_hold(conn); 2083 2173 2084 - hci_pa_create_sync_pending(hdev); 2174 + hci_connect_pa_sync(hdev, conn); 2085 2175 2086 2176 
return conn; 2087 2177 } 2088 2178 2089 - static bool hci_conn_check_create_big_sync(struct hci_conn *conn) 2090 - { 2091 - if (!conn->num_bis) 2092 - return false; 2093 - 2094 - return true; 2095 - } 2096 - 2097 - static void big_create_sync_complete(struct hci_dev *hdev, void *data, int err) 2098 - { 2099 - bt_dev_dbg(hdev, ""); 2100 - 2101 - if (err) 2102 - bt_dev_err(hdev, "Unable to create BIG sync: %d", err); 2103 - } 2104 - 2105 - static int big_create_sync(struct hci_dev *hdev, void *data) 2106 - { 2107 - DEFINE_FLEX(struct hci_cp_le_big_create_sync, pdu, bis, num_bis, 0x11); 2108 - struct hci_conn *conn; 2109 - 2110 - rcu_read_lock(); 2111 - 2112 - pdu->num_bis = 0; 2113 - 2114 - /* The spec allows only one pending LE BIG Create Sync command at 2115 - * a time. If the command is pending now, don't do anything. We 2116 - * check for pending connections after each BIG Sync Established 2117 - * event. 2118 - * 2119 - * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E 2120 - * page 2586: 2121 - * 2122 - * If the Host sends this command when the Controller is in the 2123 - * process of synchronizing to any BIG, i.e. the HCI_LE_BIG_Sync_ 2124 - * Established event has not been generated, the Controller shall 2125 - * return the error code Command Disallowed (0x0C). 
2126 - */ 2127 - list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) { 2128 - if (test_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags)) 2129 - goto unlock; 2130 - } 2131 - 2132 - list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) { 2133 - if (hci_conn_check_create_big_sync(conn)) { 2134 - struct bt_iso_qos *qos = &conn->iso_qos; 2135 - 2136 - set_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags); 2137 - 2138 - pdu->handle = qos->bcast.big; 2139 - pdu->sync_handle = cpu_to_le16(conn->sync_handle); 2140 - pdu->encryption = qos->bcast.encryption; 2141 - memcpy(pdu->bcode, qos->bcast.bcode, 2142 - sizeof(pdu->bcode)); 2143 - pdu->mse = qos->bcast.mse; 2144 - pdu->timeout = cpu_to_le16(qos->bcast.timeout); 2145 - pdu->num_bis = conn->num_bis; 2146 - memcpy(pdu->bis, conn->bis, conn->num_bis); 2147 - 2148 - break; 2149 - } 2150 - } 2151 - 2152 - unlock: 2153 - rcu_read_unlock(); 2154 - 2155 - if (!pdu->num_bis) 2156 - return 0; 2157 - 2158 - return hci_send_cmd(hdev, HCI_OP_LE_BIG_CREATE_SYNC, 2159 - struct_size(pdu, bis, pdu->num_bis), pdu); 2160 - } 2161 - 2162 - int hci_le_big_create_sync_pending(struct hci_dev *hdev) 2163 - { 2164 - /* Queue big_create_sync */ 2165 - return hci_cmd_sync_queue_once(hdev, big_create_sync, 2166 - NULL, big_create_sync_complete); 2167 - } 2168 - 2169 - int hci_le_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon, 2170 - struct bt_iso_qos *qos, 2171 - __u16 sync_handle, __u8 num_bis, __u8 bis[]) 2179 + int hci_conn_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon, 2180 + struct bt_iso_qos *qos, __u16 sync_handle, 2181 + __u8 num_bis, __u8 bis[]) 2172 2182 { 2173 2183 int err; 2174 2184 ··· 2106 2274 2107 2275 hcon->num_bis = num_bis; 2108 2276 memcpy(hcon->bis, bis, num_bis); 2277 + hcon->conn_timeout = msecs_to_jiffies(qos->bcast.timeout * 10); 2109 2278 } 2110 2279 2111 - return hci_le_big_create_sync_pending(hdev); 2280 + return hci_connect_big_sync(hdev, hcon); 2112 2281 } 2113 2282 2114 2283 static void 
create_big_complete(struct hci_dev *hdev, void *data, int err)
+4 -11
net/bluetooth/hci_event.c
··· 6378 6378 6379 6379 hci_dev_clear_flag(hdev, HCI_PA_SYNC); 6380 6380 6381 - conn = hci_conn_hash_lookup_sid(hdev, ev->sid, &ev->bdaddr, 6382 - ev->bdaddr_type); 6381 + conn = hci_conn_hash_lookup_create_pa_sync(hdev); 6383 6382 if (!conn) { 6384 6383 bt_dev_err(hdev, 6385 6384 "Unable to find connection for dst %pMR sid 0x%2.2x", ··· 6417 6418 } 6418 6419 6419 6420 unlock: 6420 - /* Handle any other pending PA sync command */ 6421 - hci_pa_create_sync_pending(hdev); 6422 - 6423 6421 hci_dev_unlock(hdev); 6424 6422 } 6425 6423 ··· 6928 6932 6929 6933 bt_dev_dbg(hdev, "status 0x%2.2x", ev->status); 6930 6934 6931 - if (!hci_le_ev_skb_pull(hdev, skb, HCI_EVT_LE_BIG_SYNC_ESTABILISHED, 6935 + if (!hci_le_ev_skb_pull(hdev, skb, HCI_EVT_LE_BIG_SYNC_ESTABLISHED, 6932 6936 flex_array_size(ev, bis, ev->num_bis))) 6933 6937 return; 6934 6938 ··· 6999 7003 } 7000 7004 7001 7005 unlock: 7002 - /* Handle any other pending BIG sync command */ 7003 - hci_le_big_create_sync_pending(hdev); 7004 - 7005 7006 hci_dev_unlock(hdev); 7006 7007 } 7007 7008 ··· 7120 7127 hci_le_create_big_complete_evt, 7121 7128 sizeof(struct hci_evt_le_create_big_complete), 7122 7129 HCI_MAX_EVENT_SIZE), 7123 - /* [0x1d = HCI_EV_LE_BIG_SYNC_ESTABILISHED] */ 7124 - HCI_LE_EV_VL(HCI_EVT_LE_BIG_SYNC_ESTABILISHED, 7130 + /* [0x1d = HCI_EV_LE_BIG_SYNC_ESTABLISHED] */ 7131 + HCI_LE_EV_VL(HCI_EVT_LE_BIG_SYNC_ESTABLISHED, 7125 7132 hci_le_big_sync_established_evt, 7126 7133 sizeof(struct hci_evt_le_big_sync_estabilished), 7127 7134 HCI_MAX_EVENT_SIZE),
+145 -5
net/bluetooth/hci_sync.c
··· 2693 2693 2694 2694 /* Force address filtering if PA Sync is in progress */ 2695 2695 if (hci_dev_test_flag(hdev, HCI_PA_SYNC)) { 2696 - struct hci_cp_le_pa_create_sync *sent; 2696 + struct hci_conn *conn; 2697 2697 2698 - sent = hci_sent_cmd_data(hdev, HCI_OP_LE_PA_CREATE_SYNC); 2699 - if (sent) { 2698 + conn = hci_conn_hash_lookup_create_pa_sync(hdev); 2699 + if (conn) { 2700 2700 struct conn_params pa; 2701 2701 2702 2702 memset(&pa, 0, sizeof(pa)); 2703 2703 2704 - bacpy(&pa.addr, &sent->addr); 2705 - pa.addr_type = sent->addr_type; 2704 + bacpy(&pa.addr, &conn->dst); 2705 + pa.addr_type = conn->dst_type; 2706 2706 2707 2707 /* Clear first since there could be addresses left 2708 2708 * behind. ··· 6894 6894 6895 6895 return __hci_cmd_sync_status(hdev, HCI_OP_LE_CONN_UPDATE, 6896 6896 sizeof(cp), &cp, HCI_CMD_TIMEOUT); 6897 + } 6898 + 6899 + static void create_pa_complete(struct hci_dev *hdev, void *data, int err) 6900 + { 6901 + bt_dev_dbg(hdev, "err %d", err); 6902 + 6903 + if (!err) 6904 + return; 6905 + 6906 + hci_dev_clear_flag(hdev, HCI_PA_SYNC); 6907 + 6908 + if (err == -ECANCELED) 6909 + return; 6910 + 6911 + hci_dev_lock(hdev); 6912 + 6913 + hci_update_passive_scan_sync(hdev); 6914 + 6915 + hci_dev_unlock(hdev); 6916 + } 6917 + 6918 + static int hci_le_pa_create_sync(struct hci_dev *hdev, void *data) 6919 + { 6920 + struct hci_cp_le_pa_create_sync cp; 6921 + struct hci_conn *conn = data; 6922 + struct bt_iso_qos *qos = &conn->iso_qos; 6923 + int err; 6924 + 6925 + if (!hci_conn_valid(hdev, conn)) 6926 + return -ECANCELED; 6927 + 6928 + if (hci_dev_test_and_set_flag(hdev, HCI_PA_SYNC)) 6929 + return -EBUSY; 6930 + 6931 + /* Mark HCI_CONN_CREATE_PA_SYNC so hci_update_passive_scan_sync can 6932 + * program the address in the allow list so PA advertisements can be 6933 + * received. 
6934 + */ 6935 + set_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags); 6936 + 6937 + hci_update_passive_scan_sync(hdev); 6938 + 6939 + memset(&cp, 0, sizeof(cp)); 6940 + cp.options = qos->bcast.options; 6941 + cp.sid = conn->sid; 6942 + cp.addr_type = conn->dst_type; 6943 + bacpy(&cp.addr, &conn->dst); 6944 + cp.skip = cpu_to_le16(qos->bcast.skip); 6945 + cp.sync_timeout = cpu_to_le16(qos->bcast.sync_timeout); 6946 + cp.sync_cte_type = qos->bcast.sync_cte_type; 6947 + 6948 + /* The spec allows only one pending LE Periodic Advertising Create 6949 + * Sync command at a time so we forcefully wait for PA Sync Established 6950 + * event since cmd_work can only schedule one command at a time. 6951 + * 6952 + * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E 6953 + * page 2493: 6954 + * 6955 + * If the Host issues this command when another HCI_LE_Periodic_ 6956 + * Advertising_Create_Sync command is pending, the Controller shall 6957 + * return the error code Command Disallowed (0x0C). 6958 + */ 6959 + err = __hci_cmd_sync_status_sk(hdev, HCI_OP_LE_PA_CREATE_SYNC, 6960 + sizeof(cp), &cp, 6961 + HCI_EV_LE_PA_SYNC_ESTABLISHED, 6962 + conn->conn_timeout, NULL); 6963 + if (err == -ETIMEDOUT) 6964 + __hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC_CANCEL, 6965 + 0, NULL, HCI_CMD_TIMEOUT); 6966 + 6967 + return err; 6968 + } 6969 + 6970 + int hci_connect_pa_sync(struct hci_dev *hdev, struct hci_conn *conn) 6971 + { 6972 + return hci_cmd_sync_queue_once(hdev, hci_le_pa_create_sync, conn, 6973 + create_pa_complete); 6974 + } 6975 + 6976 + static void create_big_complete(struct hci_dev *hdev, void *data, int err) 6977 + { 6978 + struct hci_conn *conn = data; 6979 + 6980 + bt_dev_dbg(hdev, "err %d", err); 6981 + 6982 + if (err == -ECANCELED) 6983 + return; 6984 + 6985 + if (hci_conn_valid(hdev, conn)) 6986 + clear_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags); 6987 + } 6988 + 6989 + static int hci_le_big_create_sync(struct hci_dev *hdev, void *data) 6990 + { 6991 + 
DEFINE_FLEX(struct hci_cp_le_big_create_sync, cp, bis, num_bis, 0x11); 6992 + struct hci_conn *conn = data; 6993 + struct bt_iso_qos *qos = &conn->iso_qos; 6994 + int err; 6995 + 6996 + if (!hci_conn_valid(hdev, conn)) 6997 + return -ECANCELED; 6998 + 6999 + set_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags); 7000 + 7001 + memset(cp, 0, sizeof(*cp)); 7002 + cp->handle = qos->bcast.big; 7003 + cp->sync_handle = cpu_to_le16(conn->sync_handle); 7004 + cp->encryption = qos->bcast.encryption; 7005 + memcpy(cp->bcode, qos->bcast.bcode, sizeof(cp->bcode)); 7006 + cp->mse = qos->bcast.mse; 7007 + cp->timeout = cpu_to_le16(qos->bcast.timeout); 7008 + cp->num_bis = conn->num_bis; 7009 + memcpy(cp->bis, conn->bis, conn->num_bis); 7010 + 7011 + /* The spec allows only one pending LE BIG Create Sync command at 7012 + * a time, so we forcefully wait for BIG Sync Established event since 7013 + * cmd_work can only schedule one command at a time. 7014 + * 7015 + * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E 7016 + * page 2586: 7017 + * 7018 + * If the Host sends this command when the Controller is in the 7019 + * process of synchronizing to any BIG, i.e. the HCI_LE_BIG_Sync_ 7020 + * Established event has not been generated, the Controller shall 7021 + * return the error code Command Disallowed (0x0C). 7022 + */ 7023 + err = __hci_cmd_sync_status_sk(hdev, HCI_OP_LE_BIG_CREATE_SYNC, 7024 + struct_size(cp, bis, cp->num_bis), cp, 7025 + HCI_EVT_LE_BIG_SYNC_ESTABLISHED, 7026 + conn->conn_timeout, NULL); 7027 + if (err == -ETIMEDOUT) 7028 + hci_le_big_terminate_sync(hdev, cp->handle); 7029 + 7030 + return err; 7031 + } 7032 + 7033 + int hci_connect_big_sync(struct hci_dev *hdev, struct hci_conn *conn) 7034 + { 7035 + return hci_cmd_sync_queue_once(hdev, hci_le_big_create_sync, conn, 7036 + create_big_complete); 6897 7037 }
+12 -14
net/bluetooth/iso.c
··· 1462 1462 lock_sock(sk); 1463 1463 1464 1464 if (!test_and_set_bit(BT_SK_BIG_SYNC, &iso_pi(sk)->flags)) { 1465 - err = hci_le_big_create_sync(hdev, iso_pi(sk)->conn->hcon, 1466 - &iso_pi(sk)->qos, 1467 - iso_pi(sk)->sync_handle, 1468 - iso_pi(sk)->bc_num_bis, 1469 - iso_pi(sk)->bc_bis); 1465 + err = hci_conn_big_create_sync(hdev, iso_pi(sk)->conn->hcon, 1466 + &iso_pi(sk)->qos, 1467 + iso_pi(sk)->sync_handle, 1468 + iso_pi(sk)->bc_num_bis, 1469 + iso_pi(sk)->bc_bis); 1470 1470 if (err) 1471 - bt_dev_err(hdev, "hci_le_big_create_sync: %d", 1472 - err); 1471 + bt_dev_err(hdev, "hci_big_create_sync: %d", err); 1473 1472 } 1474 1473 1475 1474 release_sock(sk); ··· 1921 1922 hcon); 1922 1923 } else if (test_bit(HCI_CONN_BIG_SYNC_FAILED, &hcon->flags)) { 1923 1924 ev = hci_recv_event_data(hcon->hdev, 1924 - HCI_EVT_LE_BIG_SYNC_ESTABILISHED); 1925 + HCI_EVT_LE_BIG_SYNC_ESTABLISHED); 1925 1926 1926 1927 /* Get reference to PA sync parent socket, if it exists */ 1927 1928 parent = iso_get_sock(&hcon->src, &hcon->dst, ··· 2112 2113 2113 2114 if (!test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags) && 2114 2115 !test_and_set_bit(BT_SK_BIG_SYNC, &iso_pi(sk)->flags)) { 2115 - err = hci_le_big_create_sync(hdev, 2116 - hcon, 2117 - &iso_pi(sk)->qos, 2118 - iso_pi(sk)->sync_handle, 2119 - iso_pi(sk)->bc_num_bis, 2120 - iso_pi(sk)->bc_bis); 2116 + err = hci_conn_big_create_sync(hdev, hcon, 2117 + &iso_pi(sk)->qos, 2118 + iso_pi(sk)->sync_handle, 2119 + iso_pi(sk)->bc_num_bis, 2120 + iso_pi(sk)->bc_bis); 2121 2121 if (err) { 2122 2122 bt_dev_err(hdev, "hci_le_big_create_sync: %d", 2123 2123 err);
+3
net/bluetooth/l2cap_core.c
··· 7415 7415 return -ENOMEM; 7416 7416 /* Init rx_len */ 7417 7417 conn->rx_len = len; 7418 + 7419 + skb_set_delivery_time(conn->rx_skb, skb->tstamp, 7420 + skb->tstamp_type); 7418 7421 } 7419 7422 7420 7423 /* Copy as much as the rx_skb can hold */
-23
net/ceph/osd_client.c
··· 220 220 } 221 221 EXPORT_SYMBOL(osd_req_op_extent_osd_data_pages); 222 222 223 - void osd_req_op_extent_osd_data_pagelist(struct ceph_osd_request *osd_req, 224 - unsigned int which, struct ceph_pagelist *pagelist) 225 - { 226 - struct ceph_osd_data *osd_data; 227 - 228 - osd_data = osd_req_op_data(osd_req, which, extent, osd_data); 229 - ceph_osd_data_pagelist_init(osd_data, pagelist); 230 - } 231 - EXPORT_SYMBOL(osd_req_op_extent_osd_data_pagelist); 232 - 233 223 #ifdef CONFIG_BLOCK 234 224 void osd_req_op_extent_osd_data_bio(struct ceph_osd_request *osd_req, 235 225 unsigned int which, ··· 286 296 osd_data = osd_req_op_data(osd_req, which, cls, request_info); 287 297 ceph_osd_data_pagelist_init(osd_data, pagelist); 288 298 } 289 - 290 - void osd_req_op_cls_request_data_pagelist( 291 - struct ceph_osd_request *osd_req, 292 - unsigned int which, struct ceph_pagelist *pagelist) 293 - { 294 - struct ceph_osd_data *osd_data; 295 - 296 - osd_data = osd_req_op_data(osd_req, which, cls, request_data); 297 - ceph_osd_data_pagelist_init(osd_data, pagelist); 298 - osd_req->r_ops[which].cls.indata_len += pagelist->length; 299 - osd_req->r_ops[which].indata_len += pagelist->length; 300 - } 301 - EXPORT_SYMBOL(osd_req_op_cls_request_data_pagelist); 302 299 303 300 void osd_req_op_cls_request_data_pages(struct ceph_osd_request *osd_req, 304 301 unsigned int which, struct page **pages, u64 length,
+1 -1
net/ipv4/tcp_offload.c
··· 439 439 iif, sdif); 440 440 NAPI_GRO_CB(skb)->is_flist = !sk; 441 441 if (sk) 442 - sock_put(sk); 442 + sock_gen_put(sk); 443 443 } 444 444 445 445 INDIRECT_CALLABLE_SCOPE
+60 -1
net/ipv4/udp_offload.c
··· 410 410 return segs; 411 411 } 412 412 413 + static void __udpv6_gso_segment_csum(struct sk_buff *seg, 414 + struct in6_addr *oldip, 415 + const struct in6_addr *newip, 416 + __be16 *oldport, __be16 newport) 417 + { 418 + struct udphdr *uh = udp_hdr(seg); 419 + 420 + if (ipv6_addr_equal(oldip, newip) && *oldport == newport) 421 + return; 422 + 423 + if (uh->check) { 424 + inet_proto_csum_replace16(&uh->check, seg, oldip->s6_addr32, 425 + newip->s6_addr32, true); 426 + 427 + inet_proto_csum_replace2(&uh->check, seg, *oldport, newport, 428 + false); 429 + if (!uh->check) 430 + uh->check = CSUM_MANGLED_0; 431 + } 432 + 433 + *oldip = *newip; 434 + *oldport = newport; 435 + } 436 + 437 + static struct sk_buff *__udpv6_gso_segment_list_csum(struct sk_buff *segs) 438 + { 439 + const struct ipv6hdr *iph; 440 + const struct udphdr *uh; 441 + struct ipv6hdr *iph2; 442 + struct sk_buff *seg; 443 + struct udphdr *uh2; 444 + 445 + seg = segs; 446 + uh = udp_hdr(seg); 447 + iph = ipv6_hdr(seg); 448 + uh2 = udp_hdr(seg->next); 449 + iph2 = ipv6_hdr(seg->next); 450 + 451 + if (!(*(const u32 *)&uh->source ^ *(const u32 *)&uh2->source) && 452 + ipv6_addr_equal(&iph->saddr, &iph2->saddr) && 453 + ipv6_addr_equal(&iph->daddr, &iph2->daddr)) 454 + return segs; 455 + 456 + while ((seg = seg->next)) { 457 + uh2 = udp_hdr(seg); 458 + iph2 = ipv6_hdr(seg); 459 + 460 + __udpv6_gso_segment_csum(seg, &iph2->saddr, &iph->saddr, 461 + &uh2->source, uh->source); 462 + __udpv6_gso_segment_csum(seg, &iph2->daddr, &iph->daddr, 463 + &uh2->dest, uh->dest); 464 + } 465 + 466 + return segs; 467 + } 468 + 413 469 static struct sk_buff *__udp_gso_segment_list(struct sk_buff *skb, 414 470 netdev_features_t features, 415 471 bool is_ipv6) ··· 478 422 479 423 udp_hdr(skb)->len = htons(sizeof(struct udphdr) + mss); 480 424 481 - return is_ipv6 ? 
skb : __udpv4_gso_segment_list_csum(skb); 425 + if (is_ipv6) 426 + return __udpv6_gso_segment_list_csum(skb); 427 + else 428 + return __udpv4_gso_segment_list_csum(skb); 482 429 } 483 430 484 431 struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+1 -1
net/ipv6/tcpv6_offload.c
··· 42 42 iif, sdif); 43 43 NAPI_GRO_CB(skb)->is_flist = !sk; 44 44 if (sk) 45 - sock_put(sk); 45 + sock_gen_put(sk); 46 46 #endif /* IS_ENABLED(CONFIG_IPV6) */ 47 47 } 48 48
+6 -3
net/sched/sch_drr.c
··· 35 35 struct Qdisc_class_hash clhash; 36 36 }; 37 37 38 + static bool cl_is_active(struct drr_class *cl) 39 + { 40 + return !list_empty(&cl->alist); 41 + } 42 + 38 43 static struct drr_class *drr_find_class(struct Qdisc *sch, u32 classid) 39 44 { 40 45 struct drr_sched *q = qdisc_priv(sch); ··· 342 337 struct drr_sched *q = qdisc_priv(sch); 343 338 struct drr_class *cl; 344 339 int err = 0; 345 - bool first; 346 340 347 341 cl = drr_classify(skb, sch, &err); 348 342 if (cl == NULL) { ··· 351 347 return err; 352 348 } 353 349 354 - first = !cl->qdisc->q.qlen; 355 350 err = qdisc_enqueue(skb, cl->qdisc, to_free); 356 351 if (unlikely(err != NET_XMIT_SUCCESS)) { 357 352 if (net_xmit_drop_count(err)) { ··· 360 357 return err; 361 358 } 362 359 363 - if (first) { 360 + if (!cl_is_active(cl)) { 364 361 list_add_tail(&cl->alist, &q->active); 365 362 cl->deficit = cl->quantum; 366 363 }
+6 -3
net/sched/sch_ets.c
··· 74 74 [TCA_ETS_QUANTA_BAND] = { .type = NLA_U32 }, 75 75 }; 76 76 77 + static bool cl_is_active(struct ets_class *cl) 78 + { 79 + return !list_empty(&cl->alist); 80 + } 81 + 77 82 static int ets_quantum_parse(struct Qdisc *sch, const struct nlattr *attr, 78 83 unsigned int *quantum, 79 84 struct netlink_ext_ack *extack) ··· 421 416 struct ets_sched *q = qdisc_priv(sch); 422 417 struct ets_class *cl; 423 418 int err = 0; 424 - bool first; 425 419 426 420 cl = ets_classify(skb, sch, &err); 427 421 if (!cl) { ··· 430 426 return err; 431 427 } 432 428 433 - first = !cl->qdisc->q.qlen; 434 429 err = qdisc_enqueue(skb, cl->qdisc, to_free); 435 430 if (unlikely(err != NET_XMIT_SUCCESS)) { 436 431 if (net_xmit_drop_count(err)) { ··· 439 436 return err; 440 437 } 441 438 442 - if (first && !ets_class_is_strict(q, cl)) { 439 + if (!cl_is_active(cl) && !ets_class_is_strict(q, cl)) { 443 440 list_add_tail(&cl->alist, &q->active); 444 441 cl->deficit = cl->quantum; 445 442 }
+1 -1
net/sched/sch_hfsc.c
··· 1569 1569 return err; 1570 1570 } 1571 1571 1572 - if (first) { 1572 + if (first && !cl->cl_nactive) { 1573 1573 if (cl->cl_flags & HFSC_RSC) 1574 1574 init_ed(cl, len); 1575 1575 if (cl->cl_flags & HFSC_FSC)
+7 -4
net/sched/sch_qfq.c
··· 202 202 */ 203 203 enum update_reason {enqueue, requeue}; 204 204 205 + static bool cl_is_active(struct qfq_class *cl) 206 + { 207 + return !list_empty(&cl->alist); 208 + } 209 + 205 210 static struct qfq_class *qfq_find_class(struct Qdisc *sch, u32 classid) 206 211 { 207 212 struct qfq_sched *q = qdisc_priv(sch); ··· 1220 1215 struct qfq_class *cl; 1221 1216 struct qfq_aggregate *agg; 1222 1217 int err = 0; 1223 - bool first; 1224 1218 1225 1219 cl = qfq_classify(skb, sch, &err); 1226 1220 if (cl == NULL) { ··· 1241 1237 } 1242 1238 1243 1239 gso_segs = skb_is_gso(skb) ? skb_shinfo(skb)->gso_segs : 1; 1244 - first = !cl->qdisc->q.qlen; 1245 1240 err = qdisc_enqueue(skb, cl->qdisc, to_free); 1246 1241 if (unlikely(err != NET_XMIT_SUCCESS)) { 1247 1242 pr_debug("qfq_enqueue: enqueue failed %d\n", err); ··· 1256 1253 ++sch->q.qlen; 1257 1254 1258 1255 agg = cl->agg; 1259 - /* if the queue was not empty, then done here */ 1260 - if (!first) { 1256 + /* if the class is active, then done here */ 1257 + if (cl_is_active(cl)) { 1261 1258 if (unlikely(skb == cl->qdisc->ops->peek(cl->qdisc)) && 1262 1259 list_first_entry(&agg->active, struct qfq_class, alist) 1263 1260 == cl && cl->deficit < len)
+1 -5
net/sunrpc/cache.c
··· 1536 1536 * or by one second if it has already reached the current time. 1537 1537 * Newly added cache entries will always have ->last_refresh greater 1538 1538 * that ->flush_time, so they don't get flushed prematurely. 1539 - * 1540 - * If someone frequently calls the flush interface, we should 1541 - * immediately clean the corresponding cache_detail instead of 1542 - * continuously accumulating nextcheck. 1543 1539 */ 1544 1540 1545 - if (cd->flush_time >= now && cd->flush_time < (now + 5)) 1541 + if (cd->flush_time >= now) 1546 1542 now = cd->flush_time + 1; 1547 1543 1548 1544 cd->flush_time = now;
+3 -3
net/xdp/xsk.c
··· 338 338 u32 len = xdp_get_buff_len(xdp); 339 339 int err; 340 340 341 - spin_lock_bh(&xs->rx_lock); 342 341 err = xsk_rcv_check(xs, xdp, len); 343 342 if (!err) { 343 + spin_lock_bh(&xs->pool->rx_lock); 344 344 err = __xsk_rcv(xs, xdp, len); 345 345 xsk_flush(xs); 346 + spin_unlock_bh(&xs->pool->rx_lock); 346 347 } 347 - spin_unlock_bh(&xs->rx_lock); 348 + 348 349 return err; 349 350 } 350 351 ··· 1735 1734 xs = xdp_sk(sk); 1736 1735 xs->state = XSK_READY; 1737 1736 mutex_init(&xs->mutex); 1738 - spin_lock_init(&xs->rx_lock); 1739 1737 1740 1738 INIT_LIST_HEAD(&xs->map_list); 1741 1739 spin_lock_init(&xs->map_list_lock);
+1
net/xdp/xsk_buff_pool.c
··· 89 89 pool->addrs = umem->addrs; 90 90 pool->tx_metadata_len = umem->tx_metadata_len; 91 91 pool->tx_sw_csum = umem->flags & XDP_UMEM_TX_SW_CSUM; 92 + spin_lock_init(&pool->rx_lock); 92 93 INIT_LIST_HEAD(&pool->free_list); 93 94 INIT_LIST_HEAD(&pool->xskb_list); 94 95 INIT_LIST_HEAD(&pool->xsk_tx_list);
+6 -2
rust/kernel/firmware.rs
··· 4 4 //! 5 5 //! C header: [`include/linux/firmware.h`](srctree/include/linux/firmware.h) 6 6 7 - use crate::{bindings, device::Device, error::Error, error::Result, str::CStr}; 7 + use crate::{bindings, device::Device, error::Error, error::Result, ffi, str::CStr}; 8 8 use core::ptr::NonNull; 9 9 10 10 /// # Invariants ··· 12 12 /// One of the following: `bindings::request_firmware`, `bindings::firmware_request_nowarn`, 13 13 /// `bindings::firmware_request_platform`, `bindings::request_firmware_direct`. 14 14 struct FwFunc( 15 - unsafe extern "C" fn(*mut *const bindings::firmware, *const u8, *mut bindings::device) -> i32, 15 + unsafe extern "C" fn( 16 + *mut *const bindings::firmware, 17 + *const ffi::c_char, 18 + *mut bindings::device, 19 + ) -> i32, 16 20 ); 17 21 18 22 impl FwFunc {
+1 -1
samples/bpf/Makefile
··· 376 376 @echo " CLANG-bpf " $@ 377 377 $(Q)$(CLANG) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(BPF_EXTRA_CFLAGS) \ 378 378 -I$(obj) -I$(srctree)/tools/testing/selftests/bpf/ \ 379 - -I$(LIBBPF_INCLUDE) \ 379 + -I$(LIBBPF_INCLUDE) $(CLANG_SYS_INCLUDES) \ 380 380 -D__KERNEL__ -D__BPF_TRACING__ -Wno-unused-value -Wno-pointer-sign \ 381 381 -D__TARGET_ARCH_$(SRCARCH) -Wno-compare-distinct-pointer-types \ 382 382 -Wno-gnu-variable-sized-type-not-at-end \
+8 -1
scripts/Makefile.extrawarn
··· 8 8 9 9 # Default set of warnings, always enabled 10 10 KBUILD_CFLAGS += -Wall 11 + KBUILD_CFLAGS += -Wextra 11 12 KBUILD_CFLAGS += -Wundef 12 13 KBUILD_CFLAGS += -Werror=implicit-function-declaration 13 14 KBUILD_CFLAGS += -Werror=implicit-int ··· 57 56 # globally built with -Wcast-function-type. 58 57 KBUILD_CFLAGS += $(call cc-option, -Wcast-function-type) 59 58 59 + # Currently, disable -Wstringop-overflow for GCC 11, globally. 60 + KBUILD_CFLAGS-$(CONFIG_CC_NO_STRINGOP_OVERFLOW) += $(call cc-disable-warning, stringop-overflow) 61 + KBUILD_CFLAGS-$(CONFIG_CC_STRINGOP_OVERFLOW) += $(call cc-option, -Wstringop-overflow) 62 + 63 + # Currently, disable -Wunterminated-string-initialization as broken 64 + KBUILD_CFLAGS += $(call cc-disable-warning, unterminated-string-initialization) 65 + 60 66 # The allocators already balk at large sizes, so silence the compiler 61 67 # warnings for bounds checks involving those possible values. While 62 68 # -Wno-alloc-size-larger-than would normally be used here, earlier versions ··· 90 82 # Warn if there is an enum types mismatch 91 83 KBUILD_CFLAGS += $(call cc-option,-Wenum-conversion) 92 84 93 - KBUILD_CFLAGS += -Wextra 94 85 KBUILD_CFLAGS += -Wunused 95 86 96 87 #
+2 -2
security/landlock/domain.c
··· 16 16 #include <linux/path.h> 17 17 #include <linux/pid.h> 18 18 #include <linux/sched.h> 19 + #include <linux/signal.h> 19 20 #include <linux/uidgid.h> 20 21 21 22 #include "access.h" ··· 100 99 return ERR_PTR(-ENOMEM); 101 100 102 101 memcpy(details->exe_path, path_str, path_size); 103 - WARN_ON_ONCE(current_cred() != current_real_cred()); 104 - details->pid = get_pid(task_pid(current)); 102 + details->pid = get_pid(task_tgid(current)); 105 103 details->uid = from_kuid(&init_user_ns, current_uid()); 106 104 get_task_comm(details->comm, current); 107 105 return details;
+1 -1
security/landlock/domain.h
··· 130 130 static inline void 131 131 landlock_free_hierarchy_details(struct landlock_hierarchy *const hierarchy) 132 132 { 133 - if (WARN_ON_ONCE(!hierarchy || !hierarchy->details)) 133 + if (!hierarchy || !hierarchy->details) 134 134 return; 135 135 136 136 put_pid(hierarchy->details->pid);
+13 -14
security/landlock/syscalls.c
··· 169 169 * the new ruleset. 170 170 * @size: Size of the pointed &struct landlock_ruleset_attr (needed for 171 171 * backward and forward compatibility). 172 - * @flags: Supported value: 172 + * @flags: Supported values: 173 + * 173 174 * - %LANDLOCK_CREATE_RULESET_VERSION 174 175 * - %LANDLOCK_CREATE_RULESET_ERRATA 175 176 * 176 177 * This system call enables to create a new Landlock ruleset, and returns the 177 178 * related file descriptor on success. 178 179 * 179 - * If @flags is %LANDLOCK_CREATE_RULESET_VERSION and @attr is NULL and @size is 180 - * 0, then the returned value is the highest supported Landlock ABI version 181 - * (starting at 1). 182 - * 183 - * If @flags is %LANDLOCK_CREATE_RULESET_ERRATA and @attr is NULL and @size is 184 - * 0, then the returned value is a bitmask of fixed issues for the current 185 - * Landlock ABI version. 180 + * If %LANDLOCK_CREATE_RULESET_VERSION or %LANDLOCK_CREATE_RULESET_ERRATA is 181 + * set, then @attr must be NULL and @size must be 0. 186 182 * 187 183 * Possible returned errors are: 188 184 * ··· 187 191 * - %E2BIG: @attr or @size inconsistencies; 188 192 * - %EFAULT: @attr or @size inconsistencies; 189 193 * - %ENOMSG: empty &landlock_ruleset_attr.handled_access_fs. 194 + * 195 + * .. kernel-doc:: include/uapi/linux/landlock.h 196 + * :identifiers: landlock_create_ruleset_flags 190 197 */ 191 198 SYSCALL_DEFINE3(landlock_create_ruleset, 192 199 const struct landlock_ruleset_attr __user *const, attr, ··· 451 452 * @ruleset_fd: File descriptor tied to the ruleset to merge with the target. 
452 453 * @flags: Supported values: 453 454 * 454 - * - %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF 455 - * - %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON 456 - * - %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF 455 + * - %LANDLOCK_RESTRICT_SELF_LOG_SAME_EXEC_OFF 456 + * - %LANDLOCK_RESTRICT_SELF_LOG_NEW_EXEC_ON 457 + * - %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF 457 458 * 458 459 * This system call enables to enforce a Landlock ruleset on the current 459 460 * thread. Enforcing a ruleset requires that the task has %CAP_SYS_ADMIN in its 460 461 * namespace or is running with no_new_privs. This avoids scenarios where 461 462 * unprivileged tasks can affect the behavior of privileged children. 462 - * 463 - * It is allowed to only pass the %LANDLOCK_RESTRICT_SELF_LOG_SUBDOMAINS_OFF 464 - * flag with a @ruleset_fd value of -1. 465 463 * 466 464 * Possible returned errors are: 467 465 * ··· 471 475 * %CAP_SYS_ADMIN in its namespace. 472 476 * - %E2BIG: The maximum number of stacked rulesets is reached for the current 473 477 * thread. 478 + * 479 + * .. kernel-doc:: include/uapi/linux/landlock.h 480 + * :identifiers: landlock_restrict_self_flags 474 481 */ 475 482 SYSCALL_DEFINE2(landlock_restrict_self, const int, ruleset_fd, const __u32, 476 483 flags)
+2 -2
tools/arch/x86/lib/x86-opcode-map.txt
··· 996 996 83: Grp1 Ev,Ib (1A),(es) 997 997 # CTESTSCC instructions are: CTESTB, CTESTBE, CTESTF, CTESTL, CTESTLE, CTESTNB, CTESTNBE, CTESTNL, 998 998 # CTESTNLE, CTESTNO, CTESTNS, CTESTNZ, CTESTO, CTESTS, CTESTT, CTESTZ 999 - 84: CTESTSCC (ev) 1000 - 85: CTESTSCC (es) | CTESTSCC (66),(es) 999 + 84: CTESTSCC Eb,Gb (ev) 1000 + 85: CTESTSCC Ev,Gv (es) | CTESTSCC Ev,Gv (66),(es) 1001 1001 88: POPCNT Gv,Ev (es) | POPCNT Gv,Ev (66),(es) 1002 1002 8f: POP2 Bq,Rq (000),(11B),(ev) 1003 1003 a5: SHLD Ev,Gv,CL (es) | SHLD Ev,Gv,CL (66),(es)
+84 -24
tools/hv/hv_kvp_daemon.c
··· 24 24 25 25 #include <sys/poll.h> 26 26 #include <sys/utsname.h> 27 + #include <stdbool.h> 27 28 #include <stdio.h> 28 29 #include <stdlib.h> 29 30 #include <unistd.h> ··· 678 677 pclose(file); 679 678 } 680 679 680 + static bool kvp_verify_ip_address(const void *address_string) 681 + { 682 + char verify_buf[sizeof(struct in6_addr)]; 683 + 684 + if (inet_pton(AF_INET, address_string, verify_buf) == 1) 685 + return true; 686 + if (inet_pton(AF_INET6, address_string, verify_buf) == 1) 687 + return true; 688 + return false; 689 + } 690 + 691 + static void kvp_extract_routes(const char *line, void **output, size_t *remaining) 692 + { 693 + static const char needle[] = "via "; 694 + const char *match, *haystack = line; 695 + 696 + while ((match = strstr(haystack, needle))) { 697 + const char *address, *next_char; 698 + 699 + /* Address starts after needle. */ 700 + address = match + strlen(needle); 701 + 702 + /* The char following address is a space or end of line. */ 703 + next_char = strpbrk(address, " \t\\"); 704 + if (!next_char) 705 + next_char = address + strlen(address) + 1; 706 + 707 + /* Enough room for address and semicolon. */ 708 + if (*remaining >= (next_char - address) + 1) { 709 + memcpy(*output, address, next_char - address); 710 + /* Terminate string for verification. */ 711 + memcpy(*output + (next_char - address), "", 1); 712 + if (kvp_verify_ip_address(*output)) { 713 + /* Advance output buffer. */ 714 + *output += next_char - address; 715 + *remaining -= next_char - address; 716 + 717 + /* Each address needs a trailing semicolon. 
*/ 718 + memcpy(*output, ";", 1); 719 + *output += 1; 720 + *remaining -= 1; 721 + } 722 + } 723 + haystack = next_char; 724 + } 725 + } 726 + 727 + static void kvp_get_gateway(void *buffer, size_t buffer_len) 728 + { 729 + static const char needle[] = "default "; 730 + FILE *f; 731 + void *output = buffer; 732 + char *line = NULL; 733 + size_t alloc_size = 0, remaining = buffer_len - 1; 734 + ssize_t num_chars; 735 + 736 + /* Show route information in a single line, for each address family */ 737 + f = popen("ip --oneline -4 route show;ip --oneline -6 route show", "r"); 738 + if (!f) { 739 + /* Convert buffer into C-String. */ 740 + memcpy(output, "", 1); 741 + return; 742 + } 743 + while ((num_chars = getline(&line, &alloc_size, f)) > 0) { 744 + /* Skip short lines. */ 745 + if (num_chars <= strlen(needle)) 746 + continue; 747 + /* Skip lines without default route. */ 748 + if (memcmp(line, needle, strlen(needle))) 749 + continue; 750 + /* Remove trailing newline to simplify further parsing. */ 751 + if (line[num_chars - 1] == '\n') 752 + line[num_chars - 1] = '\0'; 753 + /* Search routes after match. */ 754 + kvp_extract_routes(line + strlen(needle), &output, &remaining); 755 + } 756 + /* Convert buffer into C-String. */ 757 + memcpy(output, "", 1); 758 + free(line); 759 + pclose(f); 760 + } 761 + 681 762 static void kvp_get_ipconfig_info(char *if_name, 682 763 struct hv_kvp_ipaddr_value *buffer) 683 764 { ··· 768 685 char *p; 769 686 FILE *file; 770 687 771 - /* 772 - * Get the address of default gateway (ipv4). 773 - */ 774 - sprintf(cmd, "%s %s", "ip route show dev", if_name); 775 - strcat(cmd, " | awk '/default/ {print $3 }'"); 776 - 777 - /* 778 - * Execute the command to gather gateway info. 779 - */ 780 - kvp_process_ipconfig_file(cmd, (char *)buffer->gate_way, 781 - (MAX_GATEWAY_SIZE * 2), INET_ADDRSTRLEN, 0); 782 - 783 - /* 784 - * Get the address of default gateway (ipv6). 
785 - */ 786 - sprintf(cmd, "%s %s", "ip -f inet6 route show dev", if_name); 787 - strcat(cmd, " | awk '/default/ {print $3 }'"); 788 - 789 - /* 790 - * Execute the command to gather gateway info (ipv6). 791 - */ 792 - kvp_process_ipconfig_file(cmd, (char *)buffer->gate_way, 793 - (MAX_GATEWAY_SIZE * 2), INET6_ADDRSTRLEN, 1); 794 - 688 + kvp_get_gateway(buffer->gate_way, sizeof(buffer->gate_way)); 795 689 796 690 /* 797 691 * Gather the DNS state.
+1 -1
tools/testing/cxl/test/mem.c
··· 1780 1780 if (rc) 1781 1781 return rc; 1782 1782 1783 - rc = devm_cxl_setup_fwctl(cxlmd); 1783 + rc = devm_cxl_setup_fwctl(&pdev->dev, cxlmd); 1784 1784 if (rc) 1785 1785 dev_dbg(dev, "No CXL FWCTL setup\n"); 1786 1786
+2
tools/testing/kunit/configs/all_tests.config
··· 43 43 44 44 CONFIG_AUDIT=y 45 45 46 + CONFIG_PRIME_NUMBERS=y 47 + 46 48 CONFIG_SECURITY=y 47 49 CONFIG_SECURITY_APPARMOR=y 48 50 CONFIG_SECURITY_LANDLOCK=y
+102
tools/testing/memblock/tests/basic_api.c
··· 2434 2434 return 0; 2435 2435 } 2436 2436 2437 + #ifdef CONFIG_NUMA 2438 + static int memblock_set_node_check(void) 2439 + { 2440 + unsigned long i, max_reserved; 2441 + struct memblock_region *rgn; 2442 + void *orig_region; 2443 + 2444 + PREFIX_PUSH(); 2445 + 2446 + reset_memblock_regions(); 2447 + memblock_allow_resize(); 2448 + 2449 + dummy_physical_memory_init(); 2450 + memblock_add(dummy_physical_memory_base(), MEM_SIZE); 2451 + orig_region = memblock.reserved.regions; 2452 + 2453 + /* Equally Split range to node 0 and 1*/ 2454 + memblock_set_node(memblock_start_of_DRAM(), 2455 + memblock_phys_mem_size() / 2, &memblock.memory, 0); 2456 + memblock_set_node(memblock_start_of_DRAM() + memblock_phys_mem_size() / 2, 2457 + memblock_phys_mem_size() / 2, &memblock.memory, 1); 2458 + 2459 + ASSERT_EQ(memblock.memory.cnt, 2); 2460 + rgn = &memblock.memory.regions[0]; 2461 + ASSERT_EQ(rgn->base, memblock_start_of_DRAM()); 2462 + ASSERT_EQ(rgn->size, memblock_phys_mem_size() / 2); 2463 + ASSERT_EQ(memblock_get_region_node(rgn), 0); 2464 + rgn = &memblock.memory.regions[1]; 2465 + ASSERT_EQ(rgn->base, memblock_start_of_DRAM() + memblock_phys_mem_size() / 2); 2466 + ASSERT_EQ(rgn->size, memblock_phys_mem_size() / 2); 2467 + ASSERT_EQ(memblock_get_region_node(rgn), 1); 2468 + 2469 + /* Reserve 126 regions with the last one across node boundary */ 2470 + for (i = 0; i < 125; i++) 2471 + memblock_reserve(memblock_start_of_DRAM() + SZ_16 * i, SZ_8); 2472 + 2473 + memblock_reserve(memblock_start_of_DRAM() + memblock_phys_mem_size() / 2 - SZ_8, 2474 + SZ_16); 2475 + 2476 + /* 2477 + * Commit 61167ad5fecd ("mm: pass nid to reserve_bootmem_region()") 2478 + * do following process to set nid to each memblock.reserved region. 2479 + * But it may miss some region if memblock_set_node() double the 2480 + * array. 2481 + * 2482 + * By checking 'max', we make sure all region nid is set properly. 
2483 + */ 2484 + repeat: 2485 + max_reserved = memblock.reserved.max; 2486 + for_each_mem_region(rgn) { 2487 + int nid = memblock_get_region_node(rgn); 2488 + 2489 + memblock_set_node(rgn->base, rgn->size, &memblock.reserved, nid); 2490 + } 2491 + if (max_reserved != memblock.reserved.max) 2492 + goto repeat; 2493 + 2494 + /* Confirm each region has valid node set */ 2495 + for_each_reserved_mem_region(rgn) { 2496 + ASSERT_TRUE(numa_valid_node(memblock_get_region_node(rgn))); 2497 + if (rgn == (memblock.reserved.regions + memblock.reserved.cnt - 1)) 2498 + ASSERT_EQ(1, memblock_get_region_node(rgn)); 2499 + else 2500 + ASSERT_EQ(0, memblock_get_region_node(rgn)); 2501 + } 2502 + 2503 + dummy_physical_memory_cleanup(); 2504 + 2505 + /* 2506 + * The current reserved.regions is occupying a range of memory that 2507 + * allocated from dummy_physical_memory_init(). After free the memory, 2508 + * we must not use it. So restore the origin memory region to make sure 2509 + * the tests can run as normal and not affected by the double array. 2510 + */ 2511 + memblock.reserved.regions = orig_region; 2512 + memblock.reserved.cnt = INIT_MEMBLOCK_RESERVED_REGIONS; 2513 + 2514 + test_pass_pop(); 2515 + 2516 + return 0; 2517 + } 2518 + 2519 + static int memblock_set_node_checks(void) 2520 + { 2521 + prefix_reset(); 2522 + prefix_push("memblock_set_node"); 2523 + test_print("Running memblock_set_node tests...\n"); 2524 + 2525 + memblock_set_node_check(); 2526 + 2527 + prefix_pop(); 2528 + 2529 + return 0; 2530 + } 2531 + #else 2532 + static int memblock_set_node_checks(void) 2533 + { 2534 + return 0; 2535 + } 2536 + #endif 2537 + 2437 2538 int memblock_basic_checks(void) 2438 2539 { 2439 2540 memblock_initialization_check(); ··· 2545 2444 memblock_bottom_up_checks(); 2546 2445 memblock_trim_memory_checks(); 2547 2446 memblock_overlaps_region_checks(); 2447 + memblock_set_node_checks(); 2548 2448 2549 2449 return 0; 2550 2450 }
+37
tools/testing/selftests/bpf/prog_tests/for_each.c
··· 6 6 #include "for_each_array_map_elem.skel.h" 7 7 #include "for_each_map_elem_write_key.skel.h" 8 8 #include "for_each_multi_maps.skel.h" 9 + #include "for_each_hash_modify.skel.h" 9 10 10 11 static unsigned int duration; 11 12 ··· 204 203 for_each_multi_maps__destroy(skel); 205 204 } 206 205 206 + static void test_hash_modify(void) 207 + { 208 + struct for_each_hash_modify *skel; 209 + int max_entries, i, err; 210 + __u64 key, val; 211 + 212 + LIBBPF_OPTS(bpf_test_run_opts, topts, 213 + .data_in = &pkt_v4, 214 + .data_size_in = sizeof(pkt_v4), 215 + .repeat = 1 216 + ); 217 + 218 + skel = for_each_hash_modify__open_and_load(); 219 + if (!ASSERT_OK_PTR(skel, "for_each_hash_modify__open_and_load")) 220 + return; 221 + 222 + max_entries = bpf_map__max_entries(skel->maps.hashmap); 223 + for (i = 0; i < max_entries; i++) { 224 + key = i; 225 + val = i; 226 + err = bpf_map__update_elem(skel->maps.hashmap, &key, sizeof(key), 227 + &val, sizeof(val), BPF_ANY); 228 + if (!ASSERT_OK(err, "map_update")) 229 + goto out; 230 + } 231 + 232 + err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_pkt_access), &topts); 233 + ASSERT_OK(err, "bpf_prog_test_run_opts"); 234 + ASSERT_OK(topts.retval, "retval"); 235 + 236 + out: 237 + for_each_hash_modify__destroy(skel); 238 + } 239 + 207 240 void test_for_each(void) 208 241 { 209 242 if (test__start_subtest("hash_map")) ··· 248 213 test_write_map_key(); 249 214 if (test__start_subtest("multi_maps")) 250 215 test_multi_maps(); 216 + if (test__start_subtest("hash_modify")) 217 + test_hash_modify(); 251 218 }
-1
tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c
··· 68 68 goto close_cli; 69 69 70 70 err = disconnect(cli); 71 - ASSERT_OK(err, "disconnect"); 72 71 73 72 close_cli: 74 73 close(cli);
+1 -1
tools/testing/selftests/bpf/progs/bpf_misc.h
··· 221 221 #define CAN_USE_GOTOL 222 222 #endif 223 223 224 - #if _clang_major__ >= 18 224 + #if __clang_major__ >= 18 225 225 #define CAN_USE_BPF_ST 226 226 #endif 227 227
+30
tools/testing/selftests/bpf/progs/for_each_hash_modify.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2025 Intel Corporation */ 3 + #include "vmlinux.h" 4 + #include <bpf/bpf_helpers.h> 5 + 6 + char _license[] SEC("license") = "GPL"; 7 + 8 + struct { 9 + __uint(type, BPF_MAP_TYPE_HASH); 10 + __uint(max_entries, 128); 11 + __type(key, __u64); 12 + __type(value, __u64); 13 + } hashmap SEC(".maps"); 14 + 15 + static int cb(struct bpf_map *map, __u64 *key, __u64 *val, void *arg) 16 + { 17 + bpf_map_delete_elem(map, key); 18 + bpf_map_update_elem(map, key, val, 0); 19 + return 0; 20 + } 21 + 22 + SEC("tc") 23 + int test_pkt_access(struct __sk_buff *skb) 24 + { 25 + (void)skb; 26 + 27 + bpf_for_each_map_elem(&hashmap, cb, NULL, 0); 28 + 29 + return 0; 30 + }
+2 -6
tools/testing/selftests/drivers/net/ocelot/psfp.sh
··· 266 266 "${base_time}" \ 267 267 "${CYCLE_TIME_NS}" \ 268 268 "${SHIFT_TIME_NS}" \ 269 + "${GATE_DURATION_NS}" \ 269 270 "${NUM_PKTS}" \ 270 271 "${STREAM_VID}" \ 271 272 "${STREAM_PRIO}" \ 272 273 "" \ 273 274 "${isochron_dat}" 274 275 275 - # Count all received packets by looking at the non-zero RX timestamps 276 - received=$(isochron report \ 277 - --input-file "${isochron_dat}" \ 278 - --printf-format "%u\n" --printf-args "R" | \ 279 - grep -w -v '0' | wc -l) 280 - 276 + received=$(isochron_report_num_received "${isochron_dat}") 281 277 if [ "${received}" = "${expected}" ]; then 282 278 RET=0 283 279 else
+46 -11
tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c
··· 48 48 49 49 static const char root_mntpoint_templ[] = "/tmp/mount-notify_test_root.XXXXXX"; 50 50 51 + static const int mark_cmds[] = { 52 + FAN_MARK_ADD, 53 + FAN_MARK_REMOVE, 54 + FAN_MARK_FLUSH 55 + }; 56 + 57 + #define NUM_FAN_FDS ARRAY_SIZE(mark_cmds) 58 + 51 59 FIXTURE(fanotify) { 52 - int fan_fd; 60 + int fan_fd[NUM_FAN_FDS]; 53 61 char buf[256]; 54 62 unsigned int rem; 55 63 void *next; ··· 69 61 70 62 FIXTURE_SETUP(fanotify) 71 63 { 72 - int ret; 64 + int i, ret; 73 65 74 66 ASSERT_EQ(unshare(CLONE_NEWNS), 0); 75 67 ··· 97 89 self->root_id = get_mnt_id(_metadata, "/"); 98 90 ASSERT_NE(self->root_id, 0); 99 91 100 - self->fan_fd = fanotify_init(FAN_REPORT_MNT, 0); 101 - ASSERT_GE(self->fan_fd, 0); 102 - 103 - ret = fanotify_mark(self->fan_fd, FAN_MARK_ADD | FAN_MARK_MNTNS, 104 - FAN_MNT_ATTACH | FAN_MNT_DETACH, self->ns_fd, NULL); 105 - ASSERT_EQ(ret, 0); 92 + for (i = 0; i < NUM_FAN_FDS; i++) { 93 + self->fan_fd[i] = fanotify_init(FAN_REPORT_MNT | FAN_NONBLOCK, 94 + 0); 95 + ASSERT_GE(self->fan_fd[i], 0); 96 + ret = fanotify_mark(self->fan_fd[i], FAN_MARK_ADD | 97 + FAN_MARK_MNTNS, 98 + FAN_MNT_ATTACH | FAN_MNT_DETACH, 99 + self->ns_fd, NULL); 100 + ASSERT_EQ(ret, 0); 101 + // On fd[0] we do an extra ADD that changes nothing. 102 + // On fd[1]/fd[2] we REMOVE/FLUSH which removes the mark. 
103 + ret = fanotify_mark(self->fan_fd[i], mark_cmds[i] | 104 + FAN_MARK_MNTNS, 105 + FAN_MNT_ATTACH | FAN_MNT_DETACH, 106 + self->ns_fd, NULL); 107 + ASSERT_EQ(ret, 0); 108 + } 106 109 107 110 self->rem = 0; 108 111 } 109 112 110 113 FIXTURE_TEARDOWN(fanotify) 111 114 { 115 + int i; 116 + 112 117 ASSERT_EQ(self->rem, 0); 113 - close(self->fan_fd); 118 + for (i = 0; i < NUM_FAN_FDS; i++) 119 + close(self->fan_fd[i]); 114 120 115 121 ASSERT_EQ(fchdir(self->orig_root), 0); 116 122 ··· 145 123 unsigned int thislen; 146 124 147 125 if (!self->rem) { 148 - ssize_t len = read(self->fan_fd, self->buf, sizeof(self->buf)); 149 - ASSERT_GT(len, 0); 126 + ssize_t len; 127 + int i; 128 + 129 + for (i = NUM_FAN_FDS - 1; i >= 0; i--) { 130 + len = read(self->fan_fd[i], self->buf, 131 + sizeof(self->buf)); 132 + if (i > 0) { 133 + // Groups 1,2 should get EAGAIN 134 + ASSERT_EQ(len, -1); 135 + ASSERT_EQ(errno, EAGAIN); 136 + } else { 137 + // Group 0 should get events 138 + ASSERT_GT(len, 0); 139 + } 140 + } 150 141 151 142 self->rem = len; 152 143 self->next = (void *) self->buf;
+14 -7
tools/testing/selftests/landlock/audit.h
··· 300 300 return err; 301 301 } 302 302 303 - static int __maybe_unused matches_log_domain_allocated(int audit_fd, 303 + static int __maybe_unused matches_log_domain_allocated(int audit_fd, pid_t pid, 304 304 __u64 *domain_id) 305 305 { 306 - return audit_match_record( 307 - audit_fd, AUDIT_LANDLOCK_DOMAIN, 308 - REGEX_LANDLOCK_PREFIX 309 - " status=allocated mode=enforcing pid=[0-9]\\+ uid=[0-9]\\+" 310 - " exe=\"[^\"]\\+\" comm=\".*_test\"$", 311 - domain_id); 306 + static const char log_template[] = REGEX_LANDLOCK_PREFIX 307 + " status=allocated mode=enforcing pid=%d uid=[0-9]\\+" 308 + " exe=\"[^\"]\\+\" comm=\".*_test\"$"; 309 + char log_match[sizeof(log_template) + 10]; 310 + int log_match_len; 311 + 312 + log_match_len = 313 + snprintf(log_match, sizeof(log_match), log_template, pid); 314 + if (log_match_len > sizeof(log_match)) 315 + return -E2BIG; 316 + 317 + return audit_match_record(audit_fd, AUDIT_LANDLOCK_DOMAIN, log_match, 318 + domain_id); 312 319 } 313 320 314 321 static int __maybe_unused matches_log_domain_deallocated(
+137 -17
tools/testing/selftests/landlock/audit_test.c
··· 9 9 #include <errno.h> 10 10 #include <limits.h> 11 11 #include <linux/landlock.h> 12 + #include <pthread.h> 12 13 #include <stdlib.h> 13 14 #include <sys/mount.h> 14 15 #include <sys/prctl.h> ··· 41 40 { 42 41 struct audit_filter audit_filter; 43 42 int audit_fd; 44 - __u64(*domain_stack)[16]; 45 43 }; 46 44 47 45 FIXTURE_SETUP(audit) ··· 60 60 TH_LOG("Failed to initialize audit: %s", error_msg); 61 61 } 62 62 clear_cap(_metadata, CAP_AUDIT_CONTROL); 63 - 64 - self->domain_stack = mmap(NULL, sizeof(*self->domain_stack), 65 - PROT_READ | PROT_WRITE, 66 - MAP_SHARED | MAP_ANONYMOUS, -1, 0); 67 - ASSERT_NE(MAP_FAILED, self->domain_stack); 68 - memset(self->domain_stack, 0, sizeof(*self->domain_stack)); 69 63 } 70 64 71 65 FIXTURE_TEARDOWN(audit) 72 66 { 73 - EXPECT_EQ(0, munmap(self->domain_stack, sizeof(*self->domain_stack))); 74 - 75 67 set_cap(_metadata, CAP_AUDIT_CONTROL); 76 68 EXPECT_EQ(0, audit_cleanup(self->audit_fd, &self->audit_filter)); 77 69 clear_cap(_metadata, CAP_AUDIT_CONTROL); ··· 75 83 .scoped = LANDLOCK_SCOPE_SIGNAL, 76 84 }; 77 85 int status, ruleset_fd, i; 86 + __u64(*domain_stack)[16]; 78 87 __u64 prev_dom = 3; 79 88 pid_t child; 89 + 90 + domain_stack = mmap(NULL, sizeof(*domain_stack), PROT_READ | PROT_WRITE, 91 + MAP_SHARED | MAP_ANONYMOUS, -1, 0); 92 + ASSERT_NE(MAP_FAILED, domain_stack); 93 + memset(domain_stack, 0, sizeof(*domain_stack)); 80 94 81 95 ruleset_fd = 82 96 landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0); ··· 92 94 child = fork(); 93 95 ASSERT_LE(0, child); 94 96 if (child == 0) { 95 - for (i = 0; i < ARRAY_SIZE(*self->domain_stack); i++) { 97 + for (i = 0; i < ARRAY_SIZE(*domain_stack); i++) { 96 98 __u64 denial_dom = 1; 97 99 __u64 allocated_dom = 2; 98 100 ··· 105 107 matches_log_signal(_metadata, self->audit_fd, 106 108 getppid(), &denial_dom)); 107 109 EXPECT_EQ(0, matches_log_domain_allocated( 108 - self->audit_fd, &allocated_dom)); 110 + self->audit_fd, getpid(), 111 + &allocated_dom)); 109 112 
EXPECT_NE(denial_dom, 1); 110 113 EXPECT_NE(denial_dom, 0); 111 114 EXPECT_EQ(denial_dom, allocated_dom); ··· 114 115 /* Checks that the new domain is younger than the previous one. */ 115 116 EXPECT_GT(allocated_dom, prev_dom); 116 117 prev_dom = allocated_dom; 117 - (*self->domain_stack)[i] = allocated_dom; 118 + (*domain_stack)[i] = allocated_dom; 118 119 } 119 120 120 121 /* Checks that we reached the maximum number of layers. */ ··· 141 142 /* Purges log from deallocated domains. */ 142 143 EXPECT_EQ(0, setsockopt(self->audit_fd, SOL_SOCKET, SO_RCVTIMEO, 143 144 &audit_tv_dom_drop, sizeof(audit_tv_dom_drop))); 144 - for (i = ARRAY_SIZE(*self->domain_stack) - 1; i >= 0; i--) { 145 + for (i = ARRAY_SIZE(*domain_stack) - 1; i >= 0; i--) { 145 146 __u64 deallocated_dom = 2; 146 147 147 148 EXPECT_EQ(0, matches_log_domain_deallocated(self->audit_fd, 1, 148 149 &deallocated_dom)); 149 - EXPECT_EQ((*self->domain_stack)[i], deallocated_dom) 150 + EXPECT_EQ((*domain_stack)[i], deallocated_dom) 150 151 { 151 152 TH_LOG("Failed to match domain %llx (#%d)", 152 - (*self->domain_stack)[i], i); 153 + (*domain_stack)[i], i); 153 154 } 154 155 } 156 + EXPECT_EQ(0, munmap(domain_stack, sizeof(*domain_stack))); 155 157 EXPECT_EQ(0, setsockopt(self->audit_fd, SOL_SOCKET, SO_RCVTIMEO, 156 158 &audit_tv_default, sizeof(audit_tv_default))); 157 - 158 159 EXPECT_EQ(0, close(ruleset_fd)); 160 + } 161 + 162 + struct thread_data { 163 + pid_t parent_pid; 164 + int ruleset_fd, pipe_child, pipe_parent; 165 + }; 166 + 167 + static void *thread_audit_test(void *arg) 168 + { 169 + const struct thread_data *data = (struct thread_data *)arg; 170 + uintptr_t err = 0; 171 + char buffer; 172 + 173 + /* TGID and TID are different for a second thread. 
*/ 174 + if (getpid() == gettid()) { 175 + err = 1; 176 + goto out; 177 + } 178 + 179 + if (landlock_restrict_self(data->ruleset_fd, 0)) { 180 + err = 2; 181 + goto out; 182 + } 183 + 184 + if (close(data->ruleset_fd)) { 185 + err = 3; 186 + goto out; 187 + } 188 + 189 + /* Creates a denial to get the domain ID. */ 190 + if (kill(data->parent_pid, 0) != -1) { 191 + err = 4; 192 + goto out; 193 + } 194 + 195 + if (EPERM != errno) { 196 + err = 5; 197 + goto out; 198 + } 199 + 200 + /* Signals the parent to read denial logs. */ 201 + if (write(data->pipe_child, ".", 1) != 1) { 202 + err = 6; 203 + goto out; 204 + } 205 + 206 + /* Waits for the parent to update audit filters. */ 207 + if (read(data->pipe_parent, &buffer, 1) != 1) { 208 + err = 7; 209 + goto out; 210 + } 211 + 212 + out: 213 + close(data->pipe_child); 214 + close(data->pipe_parent); 215 + return (void *)err; 216 + } 217 + 218 + /* Checks that the PID tied to a domain is not a TID but the TGID. */ 219 + TEST_F(audit, thread) 220 + { 221 + const struct landlock_ruleset_attr ruleset_attr = { 222 + .scoped = LANDLOCK_SCOPE_SIGNAL, 223 + }; 224 + __u64 denial_dom = 1; 225 + __u64 allocated_dom = 2; 226 + __u64 deallocated_dom = 3; 227 + pthread_t thread; 228 + int pipe_child[2], pipe_parent[2]; 229 + char buffer; 230 + struct thread_data child_data; 231 + 232 + child_data.parent_pid = getppid(); 233 + ASSERT_EQ(0, pipe2(pipe_child, O_CLOEXEC)); 234 + child_data.pipe_child = pipe_child[1]; 235 + ASSERT_EQ(0, pipe2(pipe_parent, O_CLOEXEC)); 236 + child_data.pipe_parent = pipe_parent[0]; 237 + child_data.ruleset_fd = 238 + landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0); 239 + ASSERT_LE(0, child_data.ruleset_fd); 240 + 241 + /* TGID and TID are the same for the initial thread. 
*/ 242 + EXPECT_EQ(getpid(), gettid()); 243 + EXPECT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)); 244 + ASSERT_EQ(0, pthread_create(&thread, NULL, thread_audit_test, 245 + &child_data)); 246 + 247 + /* Waits for the child to generate a denial. */ 248 + ASSERT_EQ(1, read(pipe_child[0], &buffer, 1)); 249 + EXPECT_EQ(0, close(pipe_child[0])); 250 + 251 + /* Matches the signal log to get the domain ID. */ 252 + EXPECT_EQ(0, matches_log_signal(_metadata, self->audit_fd, 253 + child_data.parent_pid, &denial_dom)); 254 + EXPECT_NE(denial_dom, 1); 255 + EXPECT_NE(denial_dom, 0); 256 + 257 + EXPECT_EQ(0, matches_log_domain_allocated(self->audit_fd, getpid(), 258 + &allocated_dom)); 259 + EXPECT_EQ(denial_dom, allocated_dom); 260 + 261 + /* Updates filter rules to match the drop record. */ 262 + set_cap(_metadata, CAP_AUDIT_CONTROL); 263 + EXPECT_EQ(0, audit_filter_drop(self->audit_fd, AUDIT_ADD_RULE)); 264 + EXPECT_EQ(0, audit_filter_exe(self->audit_fd, &self->audit_filter, 265 + AUDIT_DEL_RULE)); 266 + clear_cap(_metadata, CAP_AUDIT_CONTROL); 267 + 268 + /* Signals the thread to exit, which will generate a domain deallocation. */ 269 + ASSERT_EQ(1, write(pipe_parent[1], ".", 1)); 270 + EXPECT_EQ(0, close(pipe_parent[1])); 271 + ASSERT_EQ(0, pthread_join(thread, NULL)); 272 + 273 + EXPECT_EQ(0, setsockopt(self->audit_fd, SOL_SOCKET, SO_RCVTIMEO, 274 + &audit_tv_dom_drop, sizeof(audit_tv_dom_drop))); 275 + EXPECT_EQ(0, matches_log_domain_deallocated(self->audit_fd, 1, 276 + &deallocated_dom)); 277 + EXPECT_EQ(denial_dom, deallocated_dom); 278 + EXPECT_EQ(0, setsockopt(self->audit_fd, SOL_SOCKET, SO_RCVTIMEO, 279 + &audit_tv_default, sizeof(audit_tv_default))); 159 280 } 160 281 161 282 FIXTURE(audit_flags) ··· 392 273 393 274 /* Checks domain information records. 
*/ 394 275 EXPECT_EQ(0, matches_log_domain_allocated( 395 - self->audit_fd, &allocated_dom)); 276 + self->audit_fd, getpid(), 277 + &allocated_dom)); 396 278 EXPECT_NE(*self->domain_id, 1); 397 279 EXPECT_NE(*self->domain_id, 0); 398 280 EXPECT_EQ(*self->domain_id, allocated_dom);
+2 -1
tools/testing/selftests/landlock/fs_test.c
··· 5964 5964 EXPECT_EQ(EXDEV, errno); 5965 5965 EXPECT_EQ(0, matches_log_fs(_metadata, self->audit_fd, "fs\\.refer", 5966 5966 dir_s1d1)); 5967 - EXPECT_EQ(0, matches_log_domain_allocated(self->audit_fd, NULL)); 5967 + EXPECT_EQ(0, 5968 + matches_log_domain_allocated(self->audit_fd, getpid(), NULL)); 5968 5969 EXPECT_EQ(0, matches_log_fs(_metadata, self->audit_fd, "fs\\.refer", 5969 5970 dir_s1d3)); 5970 5971
+95 -1
tools/testing/selftests/net/forwarding/bridge_vlan_aware.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 - ALL_TESTS="ping_ipv4 ping_ipv6 learning flooding vlan_deletion extern_learn other_tpid" 4 + ALL_TESTS="ping_ipv4 ping_ipv6 learning flooding vlan_deletion extern_learn other_tpid 8021p drop_untagged" 5 5 NUM_NETIFS=4 6 6 CHECK_TC="yes" 7 7 source lib.sh ··· 190 190 log_test "Reception of VLAN with other TPID as untagged (no PVID)" 191 191 192 192 bridge vlan add dev $swp1 vid 1 pvid untagged 193 + ip link set $h2 promisc off 194 + tc qdisc del dev $h2 clsact 195 + } 196 + 197 + 8021p_do() 198 + { 199 + local should_fail=$1; shift 200 + local mac=de:ad:be:ef:13:37 201 + 202 + tc filter add dev $h2 ingress protocol all pref 1 handle 101 \ 203 + flower dst_mac $mac action drop 204 + 205 + $MZ -q $h1 -c 1 -b $mac -a own "81:00 00:00 08:00 aa-aa-aa-aa-aa-aa-aa-aa-aa" 206 + sleep 1 207 + 208 + tc -j -s filter show dev $h2 ingress \ 209 + | jq -e ".[] | select(.options.handle == 101) \ 210 + | select(.options.actions[0].stats.packets == 1)" &> /dev/null 211 + check_err_fail $should_fail $? "802.1p-tagged reception" 212 + 213 + tc filter del dev $h2 ingress pref 1 214 + } 215 + 216 + 8021p() 217 + { 218 + RET=0 219 + 220 + tc qdisc add dev $h2 clsact 221 + ip link set $h2 promisc on 222 + 223 + # Test that with the default_pvid, 1, packets tagged with VID 0 are 224 + # accepted. 225 + 8021p_do 0 226 + 227 + # Test that packets tagged with VID 0 are still accepted after changing 228 + # the default_pvid. 229 + ip link set br0 type bridge vlan_default_pvid 10 230 + 8021p_do 0 231 + 232 + log_test "Reception of 802.1p-tagged traffic" 233 + 234 + ip link set $h2 promisc off 235 + tc qdisc del dev $h2 clsact 236 + } 237 + 238 + send_untagged_and_8021p() 239 + { 240 + ping_do $h1 192.0.2.2 241 + check_fail $? 
242 + 243 + 8021p_do 1 244 + } 245 + 246 + drop_untagged() 247 + { 248 + RET=0 249 + 250 + tc qdisc add dev $h2 clsact 251 + ip link set $h2 promisc on 252 + 253 + # Test that with no PVID, untagged and 802.1p-tagged traffic is 254 + # dropped. 255 + ip link set br0 type bridge vlan_default_pvid 1 256 + 257 + # First we reconfigure the default_pvid, 1, as a non-PVID VLAN. 258 + bridge vlan add dev $swp1 vid 1 untagged 259 + send_untagged_and_8021p 260 + bridge vlan add dev $swp1 vid 1 pvid untagged 261 + 262 + # Next we try to delete VID 1 altogether 263 + bridge vlan del dev $swp1 vid 1 264 + send_untagged_and_8021p 265 + bridge vlan add dev $swp1 vid 1 pvid untagged 266 + 267 + # Set up the bridge without a default_pvid, then check that the 8021q 268 + # module, when the bridge port goes down and then up again, does not 269 + # accidentally re-enable untagged packet reception. 270 + ip link set br0 type bridge vlan_default_pvid 0 271 + ip link set $swp1 down 272 + ip link set $swp1 up 273 + setup_wait 274 + send_untagged_and_8021p 275 + 276 + # Remove swp1 as a bridge port and let it rejoin the bridge while it 277 + # has no default_pvid. 278 + ip link set $swp1 nomaster 279 + ip link set $swp1 master br0 280 + send_untagged_and_8021p 281 + 282 + # Restore settings 283 + ip link set br0 type bridge vlan_default_pvid 1 284 + 285 + log_test "Dropping of untagged and 802.1p-tagged traffic with no PVID" 286 + 193 287 ip link set $h2 promisc off 194 288 tc qdisc del dev $h2 clsact 195 289 }
+421
tools/testing/selftests/net/forwarding/tc_taprio.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + ALL_TESTS=" \ 5 + test_clock_jump_backward \ 6 + test_taprio_after_ptp \ 7 + test_max_sdu \ 8 + test_clock_jump_backward_forward \ 9 + " 10 + NUM_NETIFS=4 11 + source tc_common.sh 12 + source lib.sh 13 + source tsn_lib.sh 14 + 15 + require_command python3 16 + 17 + # The test assumes the usual topology from the README, where h1 is connected to 18 + # swp1, h2 to swp2, and swp1 and swp2 are together in a bridge. 19 + # Additional assumption: h1 and h2 use the same PHC, and so do swp1 and swp2. 20 + # By synchronizing h1 to swp1 via PTP, h2 is also implicitly synchronized to 21 + # swp1 (and both to CLOCK_REALTIME). 22 + h1=${NETIFS[p1]} 23 + swp1=${NETIFS[p2]} 24 + swp2=${NETIFS[p3]} 25 + h2=${NETIFS[p4]} 26 + 27 + UDS_ADDRESS_H1="/var/run/ptp4l_h1" 28 + UDS_ADDRESS_SWP1="/var/run/ptp4l_swp1" 29 + 30 + H1_IPV4="192.0.2.1" 31 + H2_IPV4="192.0.2.2" 32 + H1_IPV6="2001:db8:1::1" 33 + H2_IPV6="2001:db8:1::2" 34 + 35 + # Tunables 36 + NUM_PKTS=100 37 + STREAM_VID=10 38 + STREAM_PRIO_1=6 39 + STREAM_PRIO_2=5 40 + STREAM_PRIO_3=4 41 + # PTP uses TC 0 42 + ALL_GATES=$((1 << 0 | 1 << STREAM_PRIO_1 | 1 << STREAM_PRIO_2)) 43 + # Use a conservative cycle of 10 ms to allow the test to still pass when the 44 + # kernel has some extra overhead like lockdep etc 45 + CYCLE_TIME_NS=10000000 46 + # Create two Gate Control List entries, one OPEN and one CLOSE, of equal 47 + # durations 48 + GATE_DURATION_NS=$((CYCLE_TIME_NS / 2)) 49 + # Give 2/3 of the cycle time to user space and 1/3 to the kernel 50 + FUDGE_FACTOR=$((CYCLE_TIME_NS / 3)) 51 + # Shift the isochron base time by half the gate time, so that packets are 52 + # always received by swp1 close to the middle of the time slot, to minimize 53 + # inaccuracies due to network sync 54 + SHIFT_TIME_NS=$((GATE_DURATION_NS / 2)) 55 + 56 + path_delay= 57 + 58 + h1_create() 59 + { 60 + simple_if_init $h1 $H1_IPV4/24 $H1_IPV6/64 61 + } 62 + 63 + h1_destroy() 64 + { 65 + 
simple_if_fini $h1 $H1_IPV4/24 $H1_IPV6/64 66 + } 67 + 68 + h2_create() 69 + { 70 + simple_if_init $h2 $H2_IPV4/24 $H2_IPV6/64 71 + } 72 + 73 + h2_destroy() 74 + { 75 + simple_if_fini $h2 $H2_IPV4/24 $H2_IPV6/64 76 + } 77 + 78 + switch_create() 79 + { 80 + local h2_mac_addr=$(mac_get $h2) 81 + 82 + ip link set $swp1 up 83 + ip link set $swp2 up 84 + 85 + ip link add br0 type bridge vlan_filtering 1 86 + ip link set $swp1 master br0 87 + ip link set $swp2 master br0 88 + ip link set br0 up 89 + 90 + bridge vlan add dev $swp2 vid $STREAM_VID 91 + bridge vlan add dev $swp1 vid $STREAM_VID 92 + bridge fdb add dev $swp2 \ 93 + $h2_mac_addr vlan $STREAM_VID static master 94 + } 95 + 96 + switch_destroy() 97 + { 98 + ip link del br0 99 + } 100 + 101 + ptp_setup() 102 + { 103 + # Set up swp1 as a master PHC for h1, synchronized to the local 104 + # CLOCK_REALTIME. 105 + phc2sys_start $UDS_ADDRESS_SWP1 106 + ptp4l_start $h1 true $UDS_ADDRESS_H1 107 + ptp4l_start $swp1 false $UDS_ADDRESS_SWP1 108 + } 109 + 110 + ptp_cleanup() 111 + { 112 + ptp4l_stop $swp1 113 + ptp4l_stop $h1 114 + phc2sys_stop 115 + } 116 + 117 + txtime_setup() 118 + { 119 + local if_name=$1 120 + 121 + tc qdisc add dev $if_name clsact 122 + # Classify PTP on TC 7 and isochron on TC 6 123 + tc filter add dev $if_name egress protocol 0x88f7 \ 124 + flower action skbedit priority 7 125 + tc filter add dev $if_name egress protocol 802.1Q \ 126 + flower vlan_ethtype 0xdead action skbedit priority 6 127 + tc qdisc add dev $if_name handle 100: parent root mqprio num_tc 8 \ 128 + queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \ 129 + map 0 1 2 3 4 5 6 7 \ 130 + hw 1 131 + # Set up TC 5, 6, 7 for SO_TXTIME. tc-mqprio queues count from 1. 
132 + tc qdisc replace dev $if_name parent 100:$((STREAM_PRIO_1 + 1)) etf \ 133 + clockid CLOCK_TAI offload delta $FUDGE_FACTOR 134 + tc qdisc replace dev $if_name parent 100:$((STREAM_PRIO_2 + 1)) etf \ 135 + clockid CLOCK_TAI offload delta $FUDGE_FACTOR 136 + tc qdisc replace dev $if_name parent 100:$((STREAM_PRIO_3 + 1)) etf \ 137 + clockid CLOCK_TAI offload delta $FUDGE_FACTOR 138 + } 139 + 140 + txtime_cleanup() 141 + { 142 + local if_name=$1 143 + 144 + tc qdisc del dev $if_name clsact 145 + tc qdisc del dev $if_name root 146 + } 147 + 148 + taprio_replace() 149 + { 150 + local if_name="$1"; shift 151 + local extra_args="$1"; shift 152 + 153 + # STREAM_PRIO_1 always has an open gate. 154 + # STREAM_PRIO_2 has a gate open for GATE_DURATION_NS (half the cycle time) 155 + # STREAM_PRIO_3 always has a closed gate. 156 + tc qdisc replace dev $if_name root stab overhead 24 taprio num_tc 8 \ 157 + queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \ 158 + map 0 1 2 3 4 5 6 7 \ 159 + sched-entry S $(printf "%x" $ALL_GATES) $GATE_DURATION_NS \ 160 + sched-entry S $(printf "%x" $((ALL_GATES & ~(1 << STREAM_PRIO_2)))) $GATE_DURATION_NS \ 161 + base-time 0 flags 0x2 $extra_args 162 + taprio_wait_for_admin $if_name 163 + } 164 + 165 + taprio_cleanup() 166 + { 167 + local if_name=$1 168 + 169 + tc qdisc del dev $if_name root 170 + } 171 + 172 + probe_path_delay() 173 + { 174 + local isochron_dat="$(mktemp)" 175 + local received 176 + 177 + log_info "Probing path delay" 178 + 179 + isochron_do "$h1" "$h2" "$UDS_ADDRESS_H1" "" 0 \ 180 + "$CYCLE_TIME_NS" "" "" "$NUM_PKTS" \ 181 + "$STREAM_VID" "$STREAM_PRIO_1" "" "$isochron_dat" 182 + 183 + received=$(isochron_report_num_received "$isochron_dat") 184 + if [ "$received" != "$NUM_PKTS" ]; then 185 + echo "Cannot establish basic data path between $h1 and $h2" 186 + exit $ksft_fail 187 + fi 188 + 189 + printf "pdelay = {}\n" > isochron_data.py 190 + isochron report --input-file "$isochron_dat" \ 191 + --printf-format "pdelay[%u] = %d - %d\n" 
\ 192 + --printf-args "qRT" \ 193 + >> isochron_data.py 194 + cat <<-'EOF' > isochron_postprocess.py 195 + #!/usr/bin/env python3 196 + 197 + from isochron_data import pdelay 198 + import numpy as np 199 + 200 + w = np.array(list(pdelay.values())) 201 + print("{}".format(np.max(w))) 202 + EOF 203 + path_delay=$(python3 ./isochron_postprocess.py) 204 + 205 + log_info "Path delay from $h1 to $h2 estimated at $path_delay ns" 206 + 207 + if [ "$path_delay" -gt "$GATE_DURATION_NS" ]; then 208 + echo "Path delay larger than gate duration, aborting" 209 + exit $ksft_fail 210 + fi 211 + 212 + rm -f ./isochron_data.py 2> /dev/null 213 + rm -f ./isochron_postprocess.py 2> /dev/null 214 + rm -f "$isochron_dat" 2> /dev/null 215 + } 216 + 217 + setup_prepare() 218 + { 219 + vrf_prepare 220 + 221 + h1_create 222 + h2_create 223 + switch_create 224 + 225 + txtime_setup $h1 226 + 227 + # Temporarily set up PTP just to probe the end-to-end path delay. 228 + ptp_setup 229 + probe_path_delay 230 + ptp_cleanup 231 + } 232 + 233 + cleanup() 234 + { 235 + pre_cleanup 236 + 237 + isochron_recv_stop 238 + txtime_cleanup $h1 239 + 240 + switch_destroy 241 + h2_destroy 242 + h1_destroy 243 + 244 + vrf_cleanup 245 + } 246 + 247 + run_test() 248 + { 249 + local base_time=$1; shift 250 + local stream_prio=$1; shift 251 + local expected_delay=$1; shift 252 + local should_fail=$1; shift 253 + local test_name=$1; shift 254 + local isochron_dat="$(mktemp)" 255 + local received 256 + local median_delay 257 + 258 + RET=0 259 + 260 + # Set the shift time equal to the cycle time, which effectively 261 + # cancels the default advance time. Packets won't be sent early in 262 + # software, which ensures that they won't prematurely enter through 263 + # the open gate in __test_out_of_band(). Also, the gate is open for 264 + # long enough that this won't cause a problem in __test_in_band(). 
265 + isochron_do "$h1" "$h2" "$UDS_ADDRESS_H1" "" "$base_time" \ 266 + "$CYCLE_TIME_NS" "$SHIFT_TIME_NS" "$GATE_DURATION_NS" \ 267 + "$NUM_PKTS" "$STREAM_VID" "$stream_prio" "" "$isochron_dat" 268 + 269 + received=$(isochron_report_num_received "$isochron_dat") 270 + [ "$received" = "$NUM_PKTS" ] 271 + check_err_fail $should_fail $? "Reception of $NUM_PKTS packets" 272 + 273 + if [ $should_fail = 0 ] && [ "$received" = "$NUM_PKTS" ]; then 274 + printf "pdelay = {}\n" > isochron_data.py 275 + isochron report --input-file "$isochron_dat" \ 276 + --printf-format "pdelay[%u] = %d - %d\n" \ 277 + --printf-args "qRT" \ 278 + >> isochron_data.py 279 + cat <<-'EOF' > isochron_postprocess.py 280 + #!/usr/bin/env python3 281 + 282 + from isochron_data import pdelay 283 + import numpy as np 284 + 285 + w = np.array(list(pdelay.values())) 286 + print("{}".format(int(np.median(w)))) 287 + EOF 288 + median_delay=$(python3 ./isochron_postprocess.py) 289 + 290 + # If the condition below is true, packets were delayed by a closed gate 291 + [ "$median_delay" -gt $((path_delay + expected_delay)) ] 292 + check_fail $? "Median delay $median_delay is greater than expected delay $expected_delay plus path delay $path_delay" 293 + 294 + # If the condition below is true, packets were sent expecting them to 295 + # hit a closed gate in the switch, but were not delayed 296 + [ "$expected_delay" -gt 0 ] && [ "$median_delay" -lt "$expected_delay" ] 297 + check_fail $? 
"Median delay $median_delay is less than expected delay $expected_delay" 298 + fi 299 + 300 + log_test "$test_name" 301 + 302 + rm -f ./isochron_data.py 2> /dev/null 303 + rm -f ./isochron_postprocess.py 2> /dev/null 304 + rm -f "$isochron_dat" 2> /dev/null 305 + } 306 + 307 + __test_always_open() 308 + { 309 + run_test 0.000000000 $STREAM_PRIO_1 0 0 "Gate always open" 310 + } 311 + 312 + __test_always_closed() 313 + { 314 + run_test 0.000000000 $STREAM_PRIO_3 0 1 "Gate always closed" 315 + } 316 + 317 + __test_in_band() 318 + { 319 + # Send packets in-band with the OPEN gate entry 320 + run_test 0.000000000 $STREAM_PRIO_2 0 0 "In band with gate" 321 + } 322 + 323 + __test_out_of_band() 324 + { 325 + # Send packets in-band with the CLOSE gate entry 326 + run_test 0.005000000 $STREAM_PRIO_2 \ 327 + $((GATE_DURATION_NS - SHIFT_TIME_NS)) 0 \ 328 + "Out of band with gate" 329 + } 330 + 331 + run_subtests() 332 + { 333 + __test_always_open 334 + __test_always_closed 335 + __test_in_band 336 + __test_out_of_band 337 + } 338 + 339 + test_taprio_after_ptp() 340 + { 341 + log_info "Setting up taprio after PTP" 342 + ptp_setup 343 + taprio_replace $swp2 344 + run_subtests 345 + taprio_cleanup $swp2 346 + ptp_cleanup 347 + } 348 + 349 + __test_under_max_sdu() 350 + { 351 + # Limit max-sdu for STREAM_PRIO_1 352 + taprio_replace "$swp2" "max-sdu 0 0 0 0 0 0 100 0" 353 + run_test 0.000000000 $STREAM_PRIO_1 0 0 "Under maximum SDU" 354 + } 355 + 356 + __test_over_max_sdu() 357 + { 358 + # Limit max-sdu for STREAM_PRIO_1 359 + taprio_replace "$swp2" "max-sdu 0 0 0 0 0 0 20 0" 360 + run_test 0.000000000 $STREAM_PRIO_1 0 1 "Over maximum SDU" 361 + } 362 + 363 + test_max_sdu() 364 + { 365 + ptp_setup 366 + __test_under_max_sdu 367 + __test_over_max_sdu 368 + taprio_cleanup $swp2 369 + ptp_cleanup 370 + } 371 + 372 + # Perform a clock jump in the past without synchronization running, so that the 373 + # time base remains where it was set by phc_ctl. 
374 + test_clock_jump_backward() 375 + { 376 + # This is a more complex schedule specifically crafted in a way that 377 + # has been problematic on NXP LS1028A. Not much to test with it other 378 + # than the fact that it passes traffic. 379 + tc qdisc replace dev $swp2 root stab overhead 24 taprio num_tc 8 \ 380 + queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 map 0 1 2 3 4 5 6 7 \ 381 + base-time 0 sched-entry S 20 300000 sched-entry S 10 200000 \ 382 + sched-entry S 20 300000 sched-entry S 48 200000 \ 383 + sched-entry S 20 300000 sched-entry S 83 200000 \ 384 + sched-entry S 40 300000 sched-entry S 00 200000 flags 2 385 + 386 + log_info "Forcing a backward clock jump" 387 + phc_ctl $swp1 set 0 388 + 389 + ping_test $h1 192.0.2.2 390 + taprio_cleanup $swp2 391 + } 392 + 393 + # Test that taprio tolerates clock jumps. 394 + # Since ptp4l and phc2sys are running, it is expected for the time to 395 + # eventually recover (through yet another clock jump). Isochron waits 396 + # until that is the case. 397 + test_clock_jump_backward_forward() 398 + { 399 + log_info "Forcing a backward and a forward clock jump" 400 + taprio_replace $swp2 401 + phc_ctl $swp1 set 0 402 + ptp_setup 403 + ping_test $h1 192.0.2.2 404 + run_subtests 405 + ptp_cleanup 406 + taprio_cleanup $swp2 407 + } 408 + 409 + tc_offload_check 410 + if [[ $? -ne 0 ]]; then 411 + log_test_skip "Could not test offloaded functionality" 412 + exit $EXIT_STATUS 413 + fi 414 + 415 + trap cleanup EXIT 416 + 417 + setup_prepare 418 + setup_wait 419 + tests_run 420 + 421 + exit $EXIT_STATUS
+26
tools/testing/selftests/net/forwarding/tsn_lib.sh
··· 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 # Copyright 2021-2022 NXP 4 4 5 + tc_testing_scripts_dir=$(dirname $0)/../../tc-testing/scripts 6 + 5 7 REQUIRE_ISOCHRON=${REQUIRE_ISOCHRON:=yes} 6 8 REQUIRE_LINUXPTP=${REQUIRE_LINUXPTP:=yes} 7 9 ··· 20 18 if [[ "$REQUIRE_LINUXPTP" = "yes" ]]; then 21 19 require_command phc2sys 22 20 require_command ptp4l 21 + require_command phc_ctl 23 22 fi 24 23 25 24 phc2sys_start() ··· 185 182 local base_time=$1; shift 186 183 local cycle_time=$1; shift 187 184 local shift_time=$1; shift 185 + local window_size=$1; shift 188 186 local num_pkts=$1; shift 189 187 local vid=$1; shift 190 188 local priority=$1; shift ··· 214 210 215 211 if ! [ -z "${shift_time}" ]; then 216 212 extra_args="${extra_args} --shift-time=${shift_time}" 213 + fi 214 + 215 + if ! [ -z "${window_size}" ]; then 216 + extra_args="${extra_args} --window-size=${window_size}" 217 217 fi 218 218 219 219 if [ "${use_l2}" = "true" ]; then ··· 254 246 isochron_recv_stop 5000 255 247 256 248 cpufreq_restore ${ISOCHRON_CPU} 249 + } 250 + 251 + isochron_report_num_received() 252 + { 253 + local isochron_dat=$1; shift 254 + 255 + # Count all received packets by looking at the non-zero RX timestamps 256 + isochron report \ 257 + --input-file "${isochron_dat}" \ 258 + --printf-format "%u\n" --printf-args "R" | \ 259 + grep -w -v '0' | wc -l 260 + } 261 + 262 + taprio_wait_for_admin() 263 + { 264 + local if_name="$1"; shift 265 + 266 + "$tc_testing_scripts_dir/taprio_wait_for_admin.sh" "$(which tc)" "$if_name" 257 267 }
+2 -1
tools/testing/selftests/pcie_bwctrl/Makefile
··· 1 - TEST_PROGS = set_pcie_cooling_state.sh set_pcie_speed.sh 1 + TEST_PROGS = set_pcie_cooling_state.sh 2 + TEST_FILES = set_pcie_speed.sh 2 3 include ../lib.mk
+186
tools/testing/selftests/tc-testing/tc-tests/infra/qdiscs.json
··· 352 352 "$TC qdisc del dev $DUMMY handle 1:0 root", 353 353 "$IP addr del 10.10.10.10/24 dev $DUMMY || true" 354 354 ] 355 + }, 356 + { 357 + "id": "90ec", 358 + "name": "Test DRR's enqueue reentrant behaviour with netem", 359 + "category": [ 360 + "qdisc", 361 + "drr" 362 + ], 363 + "plugins": { 364 + "requires": "nsPlugin" 365 + }, 366 + "setup": [ 367 + "$IP link set dev $DUMMY up || true", 368 + "$IP addr add 10.10.10.10/24 dev $DUMMY || true", 369 + "$TC qdisc add dev $DUMMY handle 1:0 root drr", 370 + "$TC class replace dev $DUMMY parent 1:0 classid 1:1 drr", 371 + "$TC qdisc add dev $DUMMY parent 1:1 handle 2:0 netem duplicate 100%", 372 + "$TC filter add dev $DUMMY parent 1:0 protocol ip prio 1 u32 match ip protocol 1 0xff flowid 1:1" 373 + ], 374 + "cmdUnderTest": "ping -c 1 -I $DUMMY 10.10.10.1 > /dev/null || true", 375 + "expExitCode": "0", 376 + "verifyCmd": "$TC -j -s qdisc ls dev $DUMMY handle 1:0", 377 + "matchJSON": [ 378 + { 379 + "kind": "drr", 380 + "handle": "1:", 381 + "bytes": 196, 382 + "packets": 2 383 + } 384 + ], 385 + "matchCount": "1", 386 + "teardown": [ 387 + "$TC qdisc del dev $DUMMY handle 1:0 root", 388 + "$IP addr del 10.10.10.10/24 dev $DUMMY || true" 389 + ] 390 + }, 391 + { 392 + "id": "1f1f", 393 + "name": "Test ETS's enqueue reentrant behaviour with netem", 394 + "category": [ 395 + "qdisc", 396 + "ets" 397 + ], 398 + "plugins": { 399 + "requires": "nsPlugin" 400 + }, 401 + "setup": [ 402 + "$IP link set dev $DUMMY up || true", 403 + "$IP addr add 10.10.10.10/24 dev $DUMMY || true", 404 + "$TC qdisc add dev $DUMMY handle 1:0 root ets bands 2", 405 + "$TC class replace dev $DUMMY parent 1:0 classid 1:1 ets quantum 1500", 406 + "$TC qdisc add dev $DUMMY parent 1:1 handle 2:0 netem duplicate 100%", 407 + "$TC filter add dev $DUMMY parent 1:0 protocol ip prio 1 u32 match ip protocol 1 0xff flowid 1:1" 408 + ], 409 + "cmdUnderTest": "ping -c 1 -I $DUMMY 10.10.10.1 > /dev/null || true", 410 + "expExitCode": "0",
411 + "verifyCmd": "$TC -j -s class show dev $DUMMY", 412 + "matchJSON": [ 413 + { 414 + "class": "ets", 415 + "handle": "1:1", 416 + "stats": { 417 + "bytes": 196, 418 + "packets": 2 419 + } 420 + } 421 + ], 422 + "matchCount": "1", 423 + "teardown": [ 424 + "$TC qdisc del dev $DUMMY handle 1:0 root", 425 + "$IP addr del 10.10.10.10/24 dev $DUMMY || true" 426 + ] 427 + }, 428 + { 429 + "id": "5e6d", 430 + "name": "Test QFQ's enqueue reentrant behaviour with netem", 431 + "category": [ 432 + "qdisc", 433 + "qfq" 434 + ], 435 + "plugins": { 436 + "requires": "nsPlugin" 437 + }, 438 + "setup": [ 439 + "$IP link set dev $DUMMY up || true", 440 + "$IP addr add 10.10.10.10/24 dev $DUMMY || true", 441 + "$TC qdisc add dev $DUMMY handle 1:0 root qfq", 442 + "$TC class replace dev $DUMMY parent 1:0 classid 1:1 qfq weight 100 maxpkt 1500", 443 + "$TC qdisc add dev $DUMMY parent 1:1 handle 2:0 netem duplicate 100%", 444 + "$TC filter add dev $DUMMY parent 1:0 protocol ip prio 1 u32 match ip protocol 1 0xff flowid 1:1" 445 + ], 446 + "cmdUnderTest": "ping -c 1 -I $DUMMY 10.10.10.1 > /dev/null || true", 447 + "expExitCode": "0", 448 + "verifyCmd": "$TC -j -s qdisc ls dev $DUMMY handle 1:0", 449 + "matchJSON": [ 450 + { 451 + "kind": "qfq", 452 + "handle": "1:", 453 + "bytes": 196, 454 + "packets": 2 455 + } 456 + ], 457 + "matchCount": "1", 458 + "teardown": [ 459 + "$TC qdisc del dev $DUMMY handle 1:0 root", 460 + "$IP addr del 10.10.10.10/24 dev $DUMMY || true" 461 + ] 462 + }, 463 + { 464 + "id": "bf1d", 465 + "name": "Test HFSC's enqueue reentrant behaviour with netem", 466 + "category": [ 467 + "qdisc", 468 + "hfsc" 469 + ], 470 + "plugins": { 471 + "requires": "nsPlugin" 472 + }, 473 + "setup": [ 474 + "$IP link set dev $DUMMY up || true", 475 + "$IP addr add 10.10.10.10/24 dev $DUMMY || true", 476 + "$TC qdisc add dev $DUMMY handle 1:0 root hfsc", 477 + "$TC class add dev $DUMMY parent 1:0 classid 1:1 hfsc ls m2 10Mbit",
478 + "$TC qdisc add dev $DUMMY parent 1:1 handle 2:0 netem duplicate 100%", 479 + "$TC filter add dev $DUMMY parent 1:0 protocol ip prio 1 u32 match ip dst 10.10.10.1/32 flowid 1:1", 480 + "$TC class add dev $DUMMY parent 1:0 classid 1:2 hfsc ls m2 10Mbit", 481 + "$TC qdisc add dev $DUMMY parent 1:2 handle 3:0 netem duplicate 100%", 482 + "$TC filter add dev $DUMMY parent 1:0 protocol ip prio 2 u32 match ip dst 10.10.10.2/32 flowid 1:2", 483 + "ping -c 1 10.10.10.1 -I$DUMMY > /dev/null || true", 484 + "$TC filter del dev $DUMMY parent 1:0 protocol ip prio 1", 485 + "$TC class del dev $DUMMY classid 1:1" 486 + ], 487 + "cmdUnderTest": "ping -c 1 10.10.10.2 -I$DUMMY > /dev/null || true", 488 + "expExitCode": "0", 489 + "verifyCmd": "$TC -j -s qdisc ls dev $DUMMY handle 1:0", 490 + "matchJSON": [ 491 + { 492 + "kind": "hfsc", 493 + "handle": "1:", 494 + "bytes": 392, 495 + "packets": 4 496 + } 497 + ], 498 + "matchCount": "1", 499 + "teardown": [ 500 + "$TC qdisc del dev $DUMMY handle 1:0 root", 501 + "$IP addr del 10.10.10.10/24 dev $DUMMY || true" 502 + ] 503 + }, 504 + { 505 + "id": "7c3b", 506 + "name": "Test nested DRR's enqueue reentrant behaviour with netem", 507 + "category": [ 508 + "qdisc", 509 + "drr" 510 + ], 511 + "plugins": { 512 + "requires": "nsPlugin" 513 + }, 514 + "setup": [ 515 + "$IP link set dev $DUMMY up || true", 516 + "$IP addr add 10.10.10.10/24 dev $DUMMY || true", 517 + "$TC qdisc add dev $DUMMY handle 1:0 root drr", 518 + "$TC class add dev $DUMMY parent 1:0 classid 1:1 drr", 519 + "$TC filter add dev $DUMMY parent 1:0 protocol ip prio 1 u32 match ip protocol 1 0xff flowid 1:1", 520 + "$TC qdisc add dev $DUMMY handle 2:0 parent 1:1 drr", 521 + "$TC class add dev $DUMMY classid 2:1 parent 2:0 drr", 522 + "$TC filter add dev $DUMMY parent 2:0 protocol ip prio 1 u32 match ip protocol 1 0xff flowid 2:1", 523 + "$TC qdisc add dev $DUMMY parent 2:1 handle 3:0 netem duplicate 100%" 524 + ], 525 + "cmdUnderTest": "ping -c 1 -I $DUMMY 10.10.10.1 > /dev/null || true", 526 + "expExitCode": "0",
527 + "verifyCmd": "$TC -j -s qdisc ls dev $DUMMY handle 1:0", 528 + "matchJSON": [ 529 + { 530 + "kind": "drr", 531 + "handle": "1:", 532 + "bytes": 196, 533 + "packets": 2 534 + } 535 + ], 536 + "matchCount": "1", 537 + "teardown": [ 538 + "$TC qdisc del dev $DUMMY handle 1:0 root", 539 + "$IP addr del 10.10.10.10/24 dev $DUMMY || true" 540 + ] 355 541 } 356 542 ]
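The reentrancy tests above all send a single ping through a class whose child is `netem duplicate 100%`, then expect the parent qdisc's counters to read `"bytes": 196, "packets": 2` (twice that for the HFSC case, which pings twice). A sketch of where those numbers come from, assuming the default `ping` payload size:

```shell
# One default IPv4 ping carries 56 bytes of ICMP payload; adding the
# ICMP, IPv4 and Ethernet headers gives 98 bytes per packet on the wire.
# "netem duplicate 100%" makes the parent qdisc account the packet twice.
payload=56 icmp_hdr=8 ip_hdr=20 eth_hdr=14
pkt=$((payload + icmp_hdr + ip_hdr + eth_hdr))	# 98 bytes per packet
echo "$((2 * pkt)) bytes, 2 packets"
```

This is only the byte accounting behind the `matchJSON` expectations; the tests themselves exercise the enqueue path, where the duplicate is re-enqueued into the same qdisc tree while the original enqueue is still in progress.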
+1
tools/testing/selftests/ublk/kublk.c
··· 1354 1354 value = strtol(optarg, NULL, 10); 1355 1355 if (value) 1356 1356 ctx.flags |= UBLK_F_NEED_GET_DATA; 1357 + break; 1357 1358 case 0: 1358 1359 if (!strcmp(longopts[option_idx].name, "debug_mask")) 1359 1360 ublk_dbg_mask = strtol(optarg, NULL, 16);
-3
tools/testing/selftests/ublk/kublk.h
··· 86 86 unsigned int fg:1; 87 87 unsigned int recovery:1; 88 88 89 - /* fault_inject */ 90 - long long delay_us; 91 - 92 89 int _evtfd; 93 90 int _shmid; 94 91
+2 -2
tools/testing/selftests/ublk/test_common.sh
··· 17 17 local minor 18 18 19 19 dev=/dev/ublkb"${dev_id}" 20 - major=$(stat -c '%Hr' "$dev") 21 - minor=$(stat -c '%Lr' "$dev") 20 + major="0x"$(stat -c '%t' "$dev") 21 + minor="0x"$(stat -c '%T' "$dev") 22 22 23 23 echo $(( (major & 0xfff) << 20 | (minor & 0xfffff) )) 24 24 }
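The fix above switches from the `%Hr`/`%Lr` `stat` format specifiers to `%t`/`%T`, which print the device node's major and minor numbers in hex, so the helper prepends `0x` before feeding them into the arithmetic expansion. A sketch of that packing with made-up major/minor values (259 and 5; real values come from `stat` on the ublk block device):

```shell
# Hypothetical values for illustration: stat -c '%t' would print "103"
# (hex for major 259) and stat -c '%T' would print "5"; the helper adds
# the "0x" prefix so shell arithmetic parses them as hex.
major="0x103"
minor="0x5"
# Pack major/minor into the kernel's dev_t layout used by the helper:
# 12 bits of major shifted above 20 bits of minor.
echo $(( (major & 0xfff) << 20 | (minor & 0xfffff) ))
```

For major 259, minor 5 this prints 271581189 (259 << 20 | 5).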
+1 -1
tools/testing/selftests/ublk/test_generic_05.sh
··· 3 3 4 4 . "$(cd "$(dirname "$0")" && pwd)"/test_common.sh 5 5 6 - TID="generic_04" 6 + TID="generic_05" 7 7 ERR_CODE=0 8 8 9 9 ublk_run_recover_test()