Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'kvmarm-fixes-6.17-2' of https://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD

KVM/arm64 changes for 6.17, round #3

- Invalidate nested MMUs upon freeing the PGD to avoid WARNs when they
are visited from an MMU notifier

- Fixes to the TLB match process and TLB invalidation range for
managing the VNCR pseudo-TLB

- Prevent SPE from erroneously profiling guests due to UNKNOWN reset
values in PMSCR_EL1

- Fix save/restore of host MDCR_EL2 to account for it being eagerly
programmed at vcpu_load() on VHE systems

- Correct lock ordering when dealing with VGIC LPIs, avoiding scenarios
where the xarray's (non-raw) spinlock was taken inside a *raw* spinlock

- Permit stage-2 read permission aborts, which are possible under NV
depending on the guest hypervisor's stage-2 translation

- Call raw_spin_unlock() instead of the internal spinlock API

- Fix parameter ordering when assigning VBAR_EL1

[Pull into kvm/master to fix conflicts. - Paolo]

+3024 -2057
+7
CREDITS
··· 3222 3222 D: Starter of Linux1394 effort 3223 3223 S: ask per mail for current address 3224 3224 3225 + N: Boris Pismenny 3226 + E: borisp@mellanox.com 3227 + D: Kernel TLS implementation and offload support. 3228 + 3225 3229 N: Nicolas Pitre 3226 3230 E: nico@fluxnic.net 3227 3231 D: StrongARM SA1100 support integrator & hacker ··· 4171 4167 S: 1513 Brewster Dr. 4172 4168 S: Carrollton, TX 75010 4173 4169 S: USA 4170 + 4171 + N: Dave Watson 4172 + D: Kernel TLS implementation. 4174 4173 4175 4174 N: Tim Waugh 4176 4175 E: tim@cyberelk.net
+1 -4
Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
··· 215 215 Spectre_v2_user X X * (Note 1) 216 216 SRBDS X X X X 217 217 SRSO X X X X 218 - SSB (Note 4) 218 + SSB X 219 219 TAA X X X X * (Note 2) 220 220 TSA X X X X 221 221 =============== ============== ============ ============= ============== ============ ======== ··· 228 228 229 229 3 -- Disables SMT if cross-thread mitigations are fully enabled, the CPU is 230 230 vulnerable, and STIBP is not supported 231 - 232 - 4 -- Speculative store bypass is always enabled by default (no kernel 233 - mitigation applied) unless overridden with spec_store_bypass_disable option 234 231 235 232 When an attack-vector is disabled, all mitigations for the vulnerabilities 236 233 listed in the above table are disabled, unless mitigation is required for a
-1
Documentation/devicetree/bindings/display/msm/qcom,mdp5.yaml
··· 60 60 - const: bus 61 61 - const: core 62 62 - const: vsync 63 - - const: lut 64 63 - const: tbu 65 64 - const: tbu_rt 66 65 # MSM8996 has additional iommu clock
+2
Documentation/devicetree/bindings/vendor-prefixes.yaml
··· 507 507 description: Espressif Systems Co. Ltd. 508 508 "^est,.*": 509 509 description: ESTeem Wireless Modems 510 + "^eswin,.*": 511 + description: Beijing ESWIN Technology Group Co. Ltd. 510 512 "^ettus,.*": 511 513 description: NI Ettus Research 512 514 "^eukrea,.*":
+6 -7
MAINTAINERS
··· 931 931 F: drivers/dma/altera-msgdma.c 932 932 933 933 ALTERA PIO DRIVER 934 - M: Mun Yew Tham <mun.yew.tham@intel.com> 934 + M: Adrian Ng <adrianhoyin.ng@altera.com> 935 935 L: linux-gpio@vger.kernel.org 936 936 S: Maintained 937 937 F: drivers/gpio/gpio-altera.c 938 938 939 939 ALTERA TRIPLE SPEED ETHERNET DRIVER 940 - M: Joyce Ooi <joyce.ooi@intel.com> 940 + M: Boon Khai Ng <boon.khai.ng@altera.com> 941 941 L: netdev@vger.kernel.org 942 942 S: Maintained 943 943 F: drivers/net/ethernet/altera/ ··· 4205 4205 F: drivers/net/hamradio/baycom* 4206 4206 4207 4207 BCACHE (BLOCK LAYER CACHE) 4208 - M: Coly Li <colyli@kernel.org> 4208 + M: Coly Li <colyli@fnnas.com> 4209 4209 M: Kent Overstreet <kent.overstreet@linux.dev> 4210 4210 L: linux-bcache@vger.kernel.org 4211 4211 S: Maintained ··· 4216 4216 BCACHEFS 4217 4217 M: Kent Overstreet <kent.overstreet@linux.dev> 4218 4218 L: linux-bcachefs@vger.kernel.org 4219 - S: Supported 4219 + S: Externally maintained 4220 4220 C: irc://irc.oftc.net/bcache 4221 4221 P: Documentation/filesystems/bcachefs/SubmittingPatches.rst 4222 4222 T: git https://evilpiepirate.org/git/bcachefs.git ··· 17848 17848 F: net/ipv6/tcp*.c 17849 17849 17850 17850 NETWORKING [TLS] 17851 - M: Boris Pismenny <borisp@nvidia.com> 17852 17851 M: John Fastabend <john.fastabend@gmail.com> 17853 17852 M: Jakub Kicinski <kuba@kernel.org> 17854 17853 L: netdev@vger.kernel.org ··· 20877 20878 F: drivers/firmware/qcom/qcom_qseecom_uefisecapp.c 20878 20879 20879 20880 QUALCOMM RMNET DRIVER 20880 - M: Subash Abhinov Kasiviswanathan <quic_subashab@quicinc.com> 20881 - M: Sean Tranchetti <quic_stranche@quicinc.com> 20881 + M: Subash Abhinov Kasiviswanathan <subash.a.kasiviswanathan@oss.qualcomm.com> 20882 + M: Sean Tranchetti <sean.tranchetti@oss.qualcomm.com> 20882 20883 L: netdev@vger.kernel.org 20883 20884 S: Maintained 20884 20885 F: Documentation/networking/device_drivers/cellular/qualcomm/rmnet.rst
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 17 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc3 5 + EXTRAVERSION = -rc4 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
+2 -1
arch/arm/include/asm/stacktrace.h
··· 2 2 #ifndef __ASM_STACKTRACE_H 3 3 #define __ASM_STACKTRACE_H 4 4 5 - #include <asm/ptrace.h> 6 5 #include <linux/llist.h> 6 + #include <asm/ptrace.h> 7 + #include <asm/sections.h> 7 8 8 9 struct stackframe { 9 10 /*
+3 -109
arch/arm64/include/asm/kvm_host.h
··· 1160 1160 __v; \ 1161 1161 }) 1162 1162 1163 - u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg); 1164 - void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg); 1165 - 1166 - static inline bool __vcpu_read_sys_reg_from_cpu(int reg, u64 *val) 1167 - { 1168 - /* 1169 - * *** VHE ONLY *** 1170 - * 1171 - * System registers listed in the switch are not saved on every 1172 - * exit from the guest but are only saved on vcpu_put. 1173 - * 1174 - * SYSREGS_ON_CPU *MUST* be checked before using this helper. 1175 - * 1176 - * Note that MPIDR_EL1 for the guest is set by KVM via VMPIDR_EL2 but 1177 - * should never be listed below, because the guest cannot modify its 1178 - * own MPIDR_EL1 and MPIDR_EL1 is accessed for VCPU A from VCPU B's 1179 - * thread when emulating cross-VCPU communication. 1180 - */ 1181 - if (!has_vhe()) 1182 - return false; 1183 - 1184 - switch (reg) { 1185 - case SCTLR_EL1: *val = read_sysreg_s(SYS_SCTLR_EL12); break; 1186 - case CPACR_EL1: *val = read_sysreg_s(SYS_CPACR_EL12); break; 1187 - case TTBR0_EL1: *val = read_sysreg_s(SYS_TTBR0_EL12); break; 1188 - case TTBR1_EL1: *val = read_sysreg_s(SYS_TTBR1_EL12); break; 1189 - case TCR_EL1: *val = read_sysreg_s(SYS_TCR_EL12); break; 1190 - case TCR2_EL1: *val = read_sysreg_s(SYS_TCR2_EL12); break; 1191 - case PIR_EL1: *val = read_sysreg_s(SYS_PIR_EL12); break; 1192 - case PIRE0_EL1: *val = read_sysreg_s(SYS_PIRE0_EL12); break; 1193 - case POR_EL1: *val = read_sysreg_s(SYS_POR_EL12); break; 1194 - case ESR_EL1: *val = read_sysreg_s(SYS_ESR_EL12); break; 1195 - case AFSR0_EL1: *val = read_sysreg_s(SYS_AFSR0_EL12); break; 1196 - case AFSR1_EL1: *val = read_sysreg_s(SYS_AFSR1_EL12); break; 1197 - case FAR_EL1: *val = read_sysreg_s(SYS_FAR_EL12); break; 1198 - case MAIR_EL1: *val = read_sysreg_s(SYS_MAIR_EL12); break; 1199 - case VBAR_EL1: *val = read_sysreg_s(SYS_VBAR_EL12); break; 1200 - case CONTEXTIDR_EL1: *val = read_sysreg_s(SYS_CONTEXTIDR_EL12);break; 1201 - case TPIDR_EL0: *val = read_sysreg_s(SYS_TPIDR_EL0); break; 1202 - case TPIDRRO_EL0: *val = read_sysreg_s(SYS_TPIDRRO_EL0); break; 1203 - case TPIDR_EL1: *val = read_sysreg_s(SYS_TPIDR_EL1); break; 1204 - case AMAIR_EL1: *val = read_sysreg_s(SYS_AMAIR_EL12); break; 1205 - case CNTKCTL_EL1: *val = read_sysreg_s(SYS_CNTKCTL_EL12); break; 1206 - case ELR_EL1: *val = read_sysreg_s(SYS_ELR_EL12); break; 1207 - case SPSR_EL1: *val = read_sysreg_s(SYS_SPSR_EL12); break; 1208 - case PAR_EL1: *val = read_sysreg_par(); break; 1209 - case DACR32_EL2: *val = read_sysreg_s(SYS_DACR32_EL2); break; 1210 - case IFSR32_EL2: *val = read_sysreg_s(SYS_IFSR32_EL2); break; 1211 - case DBGVCR32_EL2: *val = read_sysreg_s(SYS_DBGVCR32_EL2); break; 1212 - case ZCR_EL1: *val = read_sysreg_s(SYS_ZCR_EL12); break; 1213 - case SCTLR2_EL1: *val = read_sysreg_s(SYS_SCTLR2_EL12); break; 1214 - default: return false; 1215 - } 1216 - 1217 - return true; 1218 - } 1219 - 1220 - static inline bool __vcpu_write_sys_reg_to_cpu(u64 val, int reg) 1221 - { 1222 - /* 1223 - * *** VHE ONLY *** 1224 - * 1225 - * System registers listed in the switch are not restored on every 1226 - * entry to the guest but are only restored on vcpu_load. 1227 - * 1228 - * SYSREGS_ON_CPU *MUST* be checked before using this helper. 1229 - * 1230 - * Note that MPIDR_EL1 for the guest is set by KVM via VMPIDR_EL2 but 1231 - * should never be listed below, because the MPIDR should only be set 1232 - * once, before running the VCPU, and never changed later. 
1233 - */ 1234 - if (!has_vhe()) 1235 - return false; 1236 - 1237 - switch (reg) { 1238 - case SCTLR_EL1: write_sysreg_s(val, SYS_SCTLR_EL12); break; 1239 - case CPACR_EL1: write_sysreg_s(val, SYS_CPACR_EL12); break; 1240 - case TTBR0_EL1: write_sysreg_s(val, SYS_TTBR0_EL12); break; 1241 - case TTBR1_EL1: write_sysreg_s(val, SYS_TTBR1_EL12); break; 1242 - case TCR_EL1: write_sysreg_s(val, SYS_TCR_EL12); break; 1243 - case TCR2_EL1: write_sysreg_s(val, SYS_TCR2_EL12); break; 1244 - case PIR_EL1: write_sysreg_s(val, SYS_PIR_EL12); break; 1245 - case PIRE0_EL1: write_sysreg_s(val, SYS_PIRE0_EL12); break; 1246 - case POR_EL1: write_sysreg_s(val, SYS_POR_EL12); break; 1247 - case ESR_EL1: write_sysreg_s(val, SYS_ESR_EL12); break; 1248 - case AFSR0_EL1: write_sysreg_s(val, SYS_AFSR0_EL12); break; 1249 - case AFSR1_EL1: write_sysreg_s(val, SYS_AFSR1_EL12); break; 1250 - case FAR_EL1: write_sysreg_s(val, SYS_FAR_EL12); break; 1251 - case MAIR_EL1: write_sysreg_s(val, SYS_MAIR_EL12); break; 1252 - case VBAR_EL1: write_sysreg_s(val, SYS_VBAR_EL12); break; 1253 - case CONTEXTIDR_EL1: write_sysreg_s(val, SYS_CONTEXTIDR_EL12);break; 1254 - case TPIDR_EL0: write_sysreg_s(val, SYS_TPIDR_EL0); break; 1255 - case TPIDRRO_EL0: write_sysreg_s(val, SYS_TPIDRRO_EL0); break; 1256 - case TPIDR_EL1: write_sysreg_s(val, SYS_TPIDR_EL1); break; 1257 - case AMAIR_EL1: write_sysreg_s(val, SYS_AMAIR_EL12); break; 1258 - case CNTKCTL_EL1: write_sysreg_s(val, SYS_CNTKCTL_EL12); break; 1259 - case ELR_EL1: write_sysreg_s(val, SYS_ELR_EL12); break; 1260 - case SPSR_EL1: write_sysreg_s(val, SYS_SPSR_EL12); break; 1261 - case PAR_EL1: write_sysreg_s(val, SYS_PAR_EL1); break; 1262 - case DACR32_EL2: write_sysreg_s(val, SYS_DACR32_EL2); break; 1263 - case IFSR32_EL2: write_sysreg_s(val, SYS_IFSR32_EL2); break; 1264 - case DBGVCR32_EL2: write_sysreg_s(val, SYS_DBGVCR32_EL2); break; 1265 - case ZCR_EL1: write_sysreg_s(val, SYS_ZCR_EL12); break; 1266 - case SCTLR2_EL1: write_sysreg_s(val, SYS_SCTLR2_EL12); break; 1267 - default: return false; 1268 - } 1269 - 1270 - return true; 1271 - } 1163 + u64 vcpu_read_sys_reg(const struct kvm_vcpu *, enum vcpu_sysreg); 1164 + void vcpu_write_sys_reg(struct kvm_vcpu *, u64, enum vcpu_sysreg); 1272 1165 1273 1166 struct kvm_vm_stat { 1274 1167 struct kvm_vm_stat_generic generic; ··· 1369 1476 } 1370 1477 1371 1478 void kvm_init_host_debug_data(void); 1479 + void kvm_debug_init_vhe(void); 1372 1480 void kvm_vcpu_load_debug(struct kvm_vcpu *vcpu); 1373 1481 void kvm_vcpu_put_debug(struct kvm_vcpu *vcpu); 1374 1482 void kvm_debug_set_guest_ownership(struct kvm_vcpu *vcpu);
+1
arch/arm64/include/asm/kvm_mmu.h
··· 180 180 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa, 181 181 phys_addr_t pa, unsigned long size, bool writable); 182 182 183 + int kvm_handle_guest_sea(struct kvm_vcpu *vcpu); 183 184 int kvm_handle_guest_abort(struct kvm_vcpu *vcpu); 184 185 185 186 phys_addr_t kvm_mmu_get_httbr(void);
-25
arch/arm64/include/asm/kvm_ras.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* Copyright (C) 2018 - Arm Ltd */ 3 - 4 - #ifndef __ARM64_KVM_RAS_H__ 5 - #define __ARM64_KVM_RAS_H__ 6 - 7 - #include <linux/acpi.h> 8 - #include <linux/errno.h> 9 - #include <linux/types.h> 10 - 11 - #include <asm/acpi.h> 12 - 13 - /* 14 - * Was this synchronous external abort a RAS notification? 15 - * Returns '0' for errors handled by some RAS subsystem, or -ENOENT. 16 - */ 17 - static inline int kvm_handle_guest_sea(void) 18 - { 19 - /* apei_claim_sea(NULL) expects to mask interrupts itself */ 20 - lockdep_assert_irqs_enabled(); 21 - 22 - return apei_claim_sea(NULL); 23 - } 24 - 25 - #endif /* __ARM64_KVM_RAS_H__ */
+7
arch/arm64/include/asm/mmu.h
··· 17 17 #include <linux/refcount.h> 18 18 #include <asm/cpufeature.h> 19 19 20 + enum pgtable_type { 21 + TABLE_PTE, 22 + TABLE_PMD, 23 + TABLE_PUD, 24 + TABLE_P4D, 25 + }; 26 + 20 27 typedef struct { 21 28 atomic64_t id; 22 29 #ifdef CONFIG_COMPAT
-3
arch/arm64/include/asm/sysreg.h
··· 1142 1142 1143 1143 #define ARM64_FEATURE_FIELD_BITS 4 1144 1144 1145 - /* Defined for compatibility only, do not add new users. */ 1146 - #define ARM64_FEATURE_MASK(x) (x##_MASK) 1147 - 1148 1145 #ifdef __ASSEMBLY__ 1149 1146 1150 1147 .macro mrs_s, rt, sreg
+27 -2
arch/arm64/kernel/cpufeature.c
··· 84 84 #include <asm/hwcap.h> 85 85 #include <asm/insn.h> 86 86 #include <asm/kvm_host.h> 87 + #include <asm/mmu.h> 87 88 #include <asm/mmu_context.h> 88 89 #include <asm/mte.h> 89 90 #include <asm/hypervisor.h> ··· 1946 1945 extern 1947 1946 void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt, 1948 1947 phys_addr_t size, pgprot_t prot, 1949 - phys_addr_t (*pgtable_alloc)(int), int flags); 1948 + phys_addr_t (*pgtable_alloc)(enum pgtable_type), int flags); 1950 1949 1951 1950 static phys_addr_t __initdata kpti_ng_temp_alloc; 1952 1951 1953 - static phys_addr_t __init kpti_ng_pgd_alloc(int shift) 1952 + static phys_addr_t __init kpti_ng_pgd_alloc(enum pgtable_type type) 1954 1953 { 1955 1954 kpti_ng_temp_alloc -= PAGE_SIZE; 1956 1955 return kpti_ng_temp_alloc; ··· 2269 2268 { 2270 2269 /* Firmware may have left a deferred SError in this register. */ 2271 2270 write_sysreg_s(0, SYS_DISR_EL1); 2271 + } 2272 + static bool has_rasv1p1(const struct arm64_cpu_capabilities *__unused, int scope) 2273 + { 2274 + const struct arm64_cpu_capabilities rasv1p1_caps[] = { 2275 + { 2276 + ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, RAS, V1P1) 2277 + }, 2278 + { 2279 + ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, RAS, IMP) 2280 + }, 2281 + { 2282 + ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, RAS_frac, RASv1p1) 2283 + }, 2284 + }; 2285 + 2286 + return (has_cpuid_feature(&rasv1p1_caps[0], scope) || 2287 + (has_cpuid_feature(&rasv1p1_caps[1], scope) && 2288 + has_cpuid_feature(&rasv1p1_caps[2], scope))); 2272 2289 } 2273 2290 #endif /* CONFIG_ARM64_RAS_EXTN */ 2274 2291 ··· 2705 2686 .matches = has_cpuid_feature, 2706 2687 .cpu_enable = cpu_clear_disr, 2707 2688 ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, RAS, IMP) 2689 + }, 2690 + { 2691 + .desc = "RASv1p1 Extension Support", 2692 + .capability = ARM64_HAS_RASV1P1_EXTN, 2693 + .type = ARM64_CPUCAP_SYSTEM_FEATURE, 2694 + .matches = has_rasv1p1, 2708 2695 }, 2709 2696 #endif /* CONFIG_ARM64_RAS_EXTN */ 2710 2697 #ifdef CONFIG_ARM64_AMU_EXTN
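The new has_rasv1p1() above accepts either ID_AA64PFR0_EL1.RAS at V1P1 or higher, or RAS at IMP combined with ID_AA64PFR1_EL1.RAS_frac at RASv1p1. A minimal standalone sketch of that predicate follows; the field offsets are taken from the Arm ARM, while the register values fed into it would be purely hypothetical (this is not the kernel's cpufeature code):

#include <stdbool.h>
#include <stdint.h>

/*
 * ID_AA64PFR0_EL1.RAS lives in bits [31:28] (1 = IMP, 2 = V1P1);
 * ID_AA64PFR1_EL1.RAS_frac lives in bits [15:12] (1 = RASv1p1).
 */
#define ID_FIELD(reg, shift)    (((reg) >> (shift)) & 0xfULL)

bool sketch_has_rasv1p1(uint64_t pfr0, uint64_t pfr1)
{
    uint64_t ras      = ID_FIELD(pfr0, 28);
    uint64_t ras_frac = ID_FIELD(pfr1, 12);

    /* Either the full v1p1 feature, or v1.0 plus the RAS_frac upgrade. */
    return ras >= 2 || (ras == 1 && ras_frac >= 1);
}

The same two-pronged test reappears in the sys_regs.c hunk further down, where the additional ERX*_EL1 registers are only exposed to the guest when one of the two conditions holds.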
+7 -5
arch/arm64/kvm/arm.c
··· 2113 2113 { 2114 2114 cpu_set_hyp_vector(); 2115 2115 2116 - if (is_kernel_in_hyp_mode()) 2116 + if (is_kernel_in_hyp_mode()) { 2117 2117 kvm_timer_init_vhe(); 2118 + kvm_debug_init_vhe(); 2119 + } 2118 2120 2119 2121 if (vgic_present) 2120 2122 kvm_vgic_init_cpu_hardware(); ··· 2410 2408 */ 2411 2409 u64 val = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); 2412 2410 2413 - val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | 2414 - ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3)); 2411 + val &= ~(ID_AA64PFR0_EL1_CSV2 | 2412 + ID_AA64PFR0_EL1_CSV3); 2415 2413 2416 - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 2414 + val |= FIELD_PREP(ID_AA64PFR0_EL1_CSV2, 2417 2415 arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED); 2418 - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 2416 + val |= FIELD_PREP(ID_AA64PFR0_EL1_CSV3, 2419 2417 arm64_get_meltdown_state() == SPECTRE_UNAFFECTED); 2420 2418 2421 2419 return val;
+3 -3
arch/arm64/kvm/at.c
··· 1420 1420 return; 1421 1421 1422 1422 /* 1423 - * If we only have a single stage of translation (E2H=0 or 1424 - * TGE=1), exit early. Same thing if {VM,DC}=={0,0}. 1423 + * If we only have a single stage of translation (EL2&0), exit 1424 + * early. Same thing if {VM,DC}=={0,0}. 1425 1425 */ 1426 - if (!vcpu_el2_e2h_is_set(vcpu) || vcpu_el2_tge_is_set(vcpu) || 1426 + if (compute_translation_regime(vcpu, op) == TR_EL20 || 1427 1427 !(vcpu_read_sys_reg(vcpu, HCR_EL2) & (HCR_VM | HCR_DC))) 1428 1428 return; 1429 1429
+13
arch/arm64/kvm/debug.c
··· 96 96 } 97 97 } 98 98 99 + void kvm_debug_init_vhe(void) 100 + { 101 + /* Clear PMSCR_EL1.E{0,1}SPE which reset to UNKNOWN values. */ 102 + if (SYS_FIELD_GET(ID_AA64DFR0_EL1, PMSVer, read_sysreg(id_aa64dfr0_el1))) 103 + write_sysreg_el1(0, SYS_PMSCR); 104 + } 105 + 99 106 /* 100 107 * Configures the 'external' MDSCR_EL1 value for the guest, i.e. when the host 101 108 * has taken over MDSCR_EL1. ··· 144 137 145 138 /* Must be called before kvm_vcpu_load_vhe() */ 146 139 KVM_BUG_ON(vcpu_get_flag(vcpu, SYSREGS_ON_CPU), vcpu->kvm); 140 + 141 + if (has_vhe()) 142 + *host_data_ptr(host_debug_state.mdcr_el2) = read_sysreg(mdcr_el2); 147 143 148 144 /* 149 145 * Determine which of the possible debug states we're in: ··· 194 184 195 185 void kvm_vcpu_put_debug(struct kvm_vcpu *vcpu) 196 186 { 187 + if (has_vhe()) 188 + write_sysreg(*host_data_ptr(host_debug_state.mdcr_el2), mdcr_el2); 189 + 197 190 if (likely(!(vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP))) 198 191 return; 199 192
+1 -1
arch/arm64/kvm/emulate-nested.c
··· 2833 2833 iabt ? ESR_ELx_EC_IABT_LOW : ESR_ELx_EC_DABT_LOW); 2834 2834 esr |= ESR_ELx_FSC_EXTABT | ESR_ELx_IL; 2835 2835 2836 - vcpu_write_sys_reg(vcpu, FAR_EL2, addr); 2836 + vcpu_write_sys_reg(vcpu, addr, FAR_EL2); 2837 2837 2838 2838 if (__vcpu_sys_reg(vcpu, SCTLR2_EL2) & SCTLR2_EL1_EASE) 2839 2839 return kvm_inject_nested(vcpu, esr, except_type_serror);
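For context on the one-liner above: vcpu_write_sys_reg() takes the value before the register index, and the kvm_host.h hunk earlier in this series tightens the index argument to enum vcpu_sysreg. The old call therefore passed the FAR_EL2 constant as the value and the faulting address as the register number. The call sites below are quoted from the diff; the prototype is paraphrased with parameter names added for readability:

/* Prototype (arch/arm64/include/asm/kvm_host.h): value first, register last. */
void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, enum vcpu_sysreg reg);

vcpu_write_sys_reg(vcpu, FAR_EL2, addr);    /* before: arguments swapped */
vcpu_write_sys_reg(vcpu, addr, FAR_EL2);    /* after: address lands in FAR_EL2 */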
+6 -14
arch/arm64/kvm/hyp/exception.c
··· 22 22 23 23 static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg) 24 24 { 25 - u64 val; 26 - 27 - if (unlikely(vcpu_has_nv(vcpu))) 25 + if (has_vhe()) 28 26 return vcpu_read_sys_reg(vcpu, reg); 29 - else if (vcpu_get_flag(vcpu, SYSREGS_ON_CPU) && 30 - __vcpu_read_sys_reg_from_cpu(reg, &val)) 31 - return val; 32 27 33 28 return __vcpu_sys_reg(vcpu, reg); 34 29 } 35 30 36 31 static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg) 37 32 { 38 - if (unlikely(vcpu_has_nv(vcpu))) 33 + if (has_vhe()) 39 34 vcpu_write_sys_reg(vcpu, val, reg); 40 - else if (!vcpu_get_flag(vcpu, SYSREGS_ON_CPU) || 41 - !__vcpu_write_sys_reg_to_cpu(val, reg)) 35 + else 42 36 __vcpu_assign_sys_reg(vcpu, reg, val); 43 37 } 44 38 45 39 static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, unsigned long target_mode, 46 40 u64 val) 47 41 { 48 - if (unlikely(vcpu_has_nv(vcpu))) { 42 + if (has_vhe()) { 49 43 if (target_mode == PSR_MODE_EL1h) 50 44 vcpu_write_sys_reg(vcpu, val, SPSR_EL1); 51 45 else 52 46 vcpu_write_sys_reg(vcpu, val, SPSR_EL2); 53 - } else if (has_vhe()) { 54 - write_sysreg_el1(val, SYS_SPSR); 55 47 } else { 56 48 __vcpu_assign_sys_reg(vcpu, SPSR_EL1, val); 57 49 } ··· 51 59 52 60 static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val) 53 61 { 54 - if (has_vhe()) 62 + if (has_vhe() && vcpu_get_flag(vcpu, SYSREGS_ON_CPU)) 55 63 write_sysreg(val, spsr_abt); 56 64 else 57 65 vcpu->arch.ctxt.spsr_abt = val; ··· 59 67 60 68 static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val) 61 69 { 62 - if (has_vhe()) 70 + if (has_vhe() && vcpu_get_flag(vcpu, SYSREGS_ON_CPU)) 63 71 write_sysreg(val, spsr_und); 64 72 else 65 73 vcpu->arch.ctxt.spsr_und = val;
-5
arch/arm64/kvm/hyp/include/hyp/switch.h
··· 431 431 vcpu_set_flag(vcpu, PMUSERENR_ON_CPU); 432 432 } 433 433 434 - *host_data_ptr(host_debug_state.mdcr_el2) = read_sysreg(mdcr_el2); 435 - write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); 436 - 437 434 if (cpus_have_final_cap(ARM64_HAS_HCX)) { 438 435 u64 hcrx = vcpu->arch.hcrx_el2; 439 436 if (is_nested_ctxt(vcpu)) { ··· 450 453 static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu) 451 454 { 452 455 struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt); 453 - 454 - write_sysreg(*host_data_ptr(host_debug_state.mdcr_el2), mdcr_el2); 455 456 456 457 write_sysreg(0, hstr_el2); 457 458 if (system_supports_pmuv3()) {
+1 -1
arch/arm64/kvm/hyp/nvhe/list_debug.c
··· 17 17 bool corruption = unlikely(condition); \ 18 18 if (corruption) { \ 19 19 if (IS_ENABLED(CONFIG_BUG_ON_DATA_CORRUPTION)) { \ 20 - BUG_ON(1); \ 20 + BUG(); \ 21 21 } else \ 22 22 WARN_ON(1); \ 23 23 } \
+6
arch/arm64/kvm/hyp/nvhe/switch.c
··· 50 50 static void __activate_traps(struct kvm_vcpu *vcpu) 51 51 { 52 52 ___activate_traps(vcpu, vcpu->arch.hcr_el2); 53 + 54 + *host_data_ptr(host_debug_state.mdcr_el2) = read_sysreg(mdcr_el2); 55 + write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); 56 + 53 57 __activate_traps_common(vcpu); 54 58 __activate_cptr_traps(vcpu); 55 59 ··· 96 92 write_sysreg_el1(val | SCTLR_ELx_M, SYS_SCTLR); 97 93 isb(); 98 94 } 95 + 96 + write_sysreg(*host_data_ptr(host_debug_state.mdcr_el2), mdcr_el2); 99 97 100 98 __deactivate_traps_common(vcpu); 101 99
+5
arch/arm64/kvm/hyp/nvhe/sys_regs.c
··· 253 253 254 254 *vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR); 255 255 *vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR); 256 + __vcpu_assign_sys_reg(vcpu, VBAR_EL1, read_sysreg_el1(SYS_VBAR)); 256 257 257 258 kvm_pend_exception(vcpu, EXCEPT_AA64_EL1_SYNC); 258 259 ··· 373 372 374 373 /* Debug and Trace Registers are restricted. */ 375 374 375 + /* Group 1 ID registers */ 376 + HOST_HANDLED(SYS_REVIDR_EL1), 377 + 376 378 /* AArch64 mappings of the AArch32 ID registers */ 377 379 /* CRm=1 */ 378 380 AARCH32(SYS_ID_PFR0_EL1), ··· 464 460 465 461 HOST_HANDLED(SYS_CCSIDR_EL1), 466 462 HOST_HANDLED(SYS_CLIDR_EL1), 463 + HOST_HANDLED(SYS_AIDR_EL1), 467 464 HOST_HANDLED(SYS_CSSELR_EL1), 468 465 HOST_HANDLED(SYS_CTR_EL0), 469 466
+1 -1
arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
··· 20 20 if (vcpu_mode_is_32bit(vcpu)) 21 21 return !!(read_sysreg_el2(SYS_SPSR) & PSR_AA32_E_BIT); 22 22 23 - return !!(read_sysreg(SCTLR_EL1) & SCTLR_ELx_EE); 23 + return !!(read_sysreg_el1(SYS_SCTLR) & SCTLR_ELx_EE); 24 24 } 25 25 26 26 /*
+4 -1
arch/arm64/kvm/hyp/vhe/switch.c
··· 43 43 * 44 44 * - API/APK: they are already accounted for by vcpu_load(), and can 45 45 * only take effect across a load/put cycle (such as ERET) 46 + * 47 + * - FIEN: no way we let a guest have access to the RAS "Common Fault 48 + * Injection" thing, whatever that does 46 49 */ 47 - #define NV_HCR_GUEST_EXCLUDE (HCR_TGE | HCR_API | HCR_APK) 50 + #define NV_HCR_GUEST_EXCLUDE (HCR_TGE | HCR_API | HCR_APK | HCR_FIEN) 48 51 49 52 static u64 __compute_hcr(struct kvm_vcpu *vcpu) 50 53 {
+21 -12
arch/arm64/kvm/mmu.c
··· 4 4 * Author: Christoffer Dall <c.dall@virtualopensystems.com> 5 5 */ 6 6 7 + #include <linux/acpi.h> 7 8 #include <linux/mman.h> 8 9 #include <linux/kvm_host.h> 9 10 #include <linux/io.h> 10 11 #include <linux/hugetlb.h> 11 12 #include <linux/sched/signal.h> 12 13 #include <trace/events/kvm.h> 14 + #include <asm/acpi.h> 13 15 #include <asm/pgalloc.h> 14 16 #include <asm/cacheflush.h> 15 17 #include <asm/kvm_arm.h> 16 18 #include <asm/kvm_mmu.h> 17 19 #include <asm/kvm_pgtable.h> 18 20 #include <asm/kvm_pkvm.h> 19 - #include <asm/kvm_ras.h> 20 21 #include <asm/kvm_asm.h> 21 22 #include <asm/kvm_emulate.h> 22 23 #include <asm/virt.h> ··· 1074 1073 mmu->pgt = NULL; 1075 1074 free_percpu(mmu->last_vcpu_ran); 1076 1075 } 1076 + 1077 + if (kvm_is_nested_s2_mmu(kvm, mmu)) 1078 + kvm_init_nested_s2_mmu(mmu); 1079 + 1077 1080 write_unlock(&kvm->mmu_lock); 1078 1081 1079 1082 if (pgt) { ··· 1897 1892 read_unlock(&vcpu->kvm->mmu_lock); 1898 1893 } 1899 1894 1895 + int kvm_handle_guest_sea(struct kvm_vcpu *vcpu) 1896 + { 1897 + /* 1898 + * Give APEI the opportunity to claim the abort before handling it 1899 + * within KVM. apei_claim_sea() expects to be called with IRQs enabled. 1900 + */ 1901 + lockdep_assert_irqs_enabled(); 1902 + if (apei_claim_sea(NULL) == 0) 1903 + return 1; 1904 + 1905 + return kvm_inject_serror(vcpu); 1906 + } 1907 + 1900 1908 /** 1901 1909 * kvm_handle_guest_abort - handles all 2nd stage aborts 1902 1910 * @vcpu: the VCPU pointer ··· 1933 1915 gfn_t gfn; 1934 1916 int ret, idx; 1935 1917 1936 - /* Synchronous External Abort? */ 1937 - if (kvm_vcpu_abt_issea(vcpu)) { 1938 - /* 1939 - * For RAS the host kernel may handle this abort. 1940 - * There is no need to pass the error into the guest. 1941 - */ 1942 - if (kvm_handle_guest_sea()) 1943 - return kvm_inject_serror(vcpu); 1944 - 1945 - return 1; 1946 - } 1918 + if (kvm_vcpu_abt_issea(vcpu)) 1919 + return kvm_handle_guest_sea(vcpu); 1947 1920 1948 1921 esr = kvm_vcpu_get_esr(vcpu); 1949 1922
+7 -4
arch/arm64/kvm/nested.c
··· 847 847 848 848 ipa_size = ttl_to_size(pgshift_level_to_ttl(vt->wi.pgshift, 849 849 vt->wr.level)); 850 - ipa_start = vt->wr.pa & (ipa_size - 1); 850 + ipa_start = vt->wr.pa & ~(ipa_size - 1); 851 851 ipa_end = ipa_start + ipa_size; 852 852 853 853 if (ipa_end <= start || ipa_start >= end) ··· 887 887 888 888 va_size = ttl_to_size(pgshift_level_to_ttl(vt->wi.pgshift, 889 889 vt->wr.level)); 890 - va_start = vt->gva & (va_size - 1); 890 + va_start = vt->gva & ~(va_size - 1); 891 891 va_end = va_start + va_size; 892 892 893 893 switch (scope->type) { ··· 1292 1292 !(tcr & TCR_ASID16)) 1293 1293 asid &= GENMASK(7, 0); 1294 1294 1295 - return asid != vt->wr.asid; 1295 + return asid == vt->wr.asid; 1296 1296 } 1297 1297 1298 1298 return true; ··· 1303 1303 struct vncr_tlb *vt = vcpu->arch.vncr_tlb; 1304 1304 u64 esr = kvm_vcpu_get_esr(vcpu); 1305 1305 1306 - BUG_ON(!(esr & ESR_ELx_VNCR_SHIFT)); 1306 + WARN_ON_ONCE(!(esr & ESR_ELx_VNCR)); 1307 + 1308 + if (kvm_vcpu_abt_issea(vcpu)) 1309 + return kvm_handle_guest_sea(vcpu); 1307 1310 1308 1311 if (esr_fsc_is_permission_fault(esr)) { 1309 1312 inject_vncr_perm(vcpu);
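The effect of the mask flip above is easiest to see with concrete numbers: AND-ing with (size - 1) keeps the offset inside a block, while AND-ing with ~(size - 1) keeps the block-aligned base that the [start, end) overlap test needs. A small self-contained sketch with hypothetical values (4KiB block size, arbitrary address):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t ipa_size = 0x1000;      /* hypothetical block size (4KiB) */
    uint64_t pa       = 0x12345678;  /* hypothetical address */

    uint64_t offset    = pa & (ipa_size - 1);   /* 0x678: old expression */
    uint64_t ipa_start = pa & ~(ipa_size - 1);  /* 0x12345000: aligned base */
    uint64_t ipa_end   = ipa_start + ipa_size;  /* 0x12346000 */

    printf("offset=%#" PRIx64 " start=%#" PRIx64 " end=%#" PRIx64 "\n",
           offset, ipa_start, ipa_end);
    return 0;
}

With the old expression the computed range started at the in-block offset rather than the block base, so the overlap test compared the wrong interval; the same applies to the VA-based check in the second hunk.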
+293 -138
arch/arm64/kvm/sys_regs.c
··· 82 82 "sys_reg write to read-only register"); 83 83 } 84 84 85 - #define PURE_EL2_SYSREG(el2) \ 86 - case el2: { \ 87 - *el1r = el2; \ 88 - return true; \ 89 - } 85 + enum sr_loc_attr { 86 + SR_LOC_MEMORY = 0, /* Register definitely in memory */ 87 + SR_LOC_LOADED = BIT(0), /* Register on CPU, unless it cannot */ 88 + SR_LOC_MAPPED = BIT(1), /* Register in a different CPU register */ 89 + SR_LOC_XLATED = BIT(2), /* Register translated to fit another reg */ 90 + SR_LOC_SPECIAL = BIT(3), /* Demanding register, implies loaded */ 91 + }; 90 92 91 - #define MAPPED_EL2_SYSREG(el2, el1, fn) \ 92 - case el2: { \ 93 - *xlate = fn; \ 94 - *el1r = el1; \ 95 - return true; \ 96 - } 93 + struct sr_loc { 94 + enum sr_loc_attr loc; 95 + enum vcpu_sysreg map_reg; 96 + u64 (*xlate)(u64); 97 + }; 97 98 98 - static bool get_el2_to_el1_mapping(unsigned int reg, 99 - unsigned int *el1r, u64 (**xlate)(u64)) 99 + static enum sr_loc_attr locate_direct_register(const struct kvm_vcpu *vcpu, 100 + enum vcpu_sysreg reg) 100 101 { 101 102 switch (reg) { 102 - PURE_EL2_SYSREG( VPIDR_EL2 ); 103 - PURE_EL2_SYSREG( VMPIDR_EL2 ); 104 - PURE_EL2_SYSREG( ACTLR_EL2 ); 105 - PURE_EL2_SYSREG( HCR_EL2 ); 106 - PURE_EL2_SYSREG( MDCR_EL2 ); 107 - PURE_EL2_SYSREG( HSTR_EL2 ); 108 - PURE_EL2_SYSREG( HACR_EL2 ); 109 - PURE_EL2_SYSREG( VTTBR_EL2 ); 110 - PURE_EL2_SYSREG( VTCR_EL2 ); 111 - PURE_EL2_SYSREG( TPIDR_EL2 ); 112 - PURE_EL2_SYSREG( HPFAR_EL2 ); 113 - PURE_EL2_SYSREG( HCRX_EL2 ); 114 - PURE_EL2_SYSREG( HFGRTR_EL2 ); 115 - PURE_EL2_SYSREG( HFGWTR_EL2 ); 116 - PURE_EL2_SYSREG( HFGITR_EL2 ); 117 - PURE_EL2_SYSREG( HDFGRTR_EL2 ); 118 - PURE_EL2_SYSREG( HDFGWTR_EL2 ); 119 - PURE_EL2_SYSREG( HAFGRTR_EL2 ); 120 - PURE_EL2_SYSREG( CNTVOFF_EL2 ); 121 - PURE_EL2_SYSREG( CNTHCTL_EL2 ); 103 + case SCTLR_EL1: 104 + case CPACR_EL1: 105 + case TTBR0_EL1: 106 + case TTBR1_EL1: 107 + case TCR_EL1: 108 + case TCR2_EL1: 109 + case PIR_EL1: 110 + case PIRE0_EL1: 111 + case POR_EL1: 112 + case ESR_EL1: 113 + case AFSR0_EL1: 114 + case AFSR1_EL1: 115 + case FAR_EL1: 116 + case MAIR_EL1: 117 + case VBAR_EL1: 118 + case CONTEXTIDR_EL1: 119 + case AMAIR_EL1: 120 + case CNTKCTL_EL1: 121 + case ELR_EL1: 122 + case SPSR_EL1: 123 + case ZCR_EL1: 124 + case SCTLR2_EL1: 125 + /* 126 + * EL1 registers which have an ELx2 mapping are loaded if 127 + * we're not in hypervisor context. 128 + */ 129 + return is_hyp_ctxt(vcpu) ? SR_LOC_MEMORY : SR_LOC_LOADED; 130 + 131 + case TPIDR_EL0: 132 + case TPIDRRO_EL0: 133 + case TPIDR_EL1: 134 + case PAR_EL1: 135 + case DACR32_EL2: 136 + case IFSR32_EL2: 137 + case DBGVCR32_EL2: 138 + /* These registers are always loaded, no matter what */ 139 + return SR_LOC_LOADED; 140 + 141 + default: 142 + /* Non-mapped EL2 registers are by definition in memory. 
*/ 143 + return SR_LOC_MEMORY; 144 + } 145 + } 146 + 147 + static void locate_mapped_el2_register(const struct kvm_vcpu *vcpu, 148 + enum vcpu_sysreg reg, 149 + enum vcpu_sysreg map_reg, 150 + u64 (*xlate)(u64), 151 + struct sr_loc *loc) 152 + { 153 + if (!is_hyp_ctxt(vcpu)) { 154 + loc->loc = SR_LOC_MEMORY; 155 + return; 156 + } 157 + 158 + loc->loc = SR_LOC_LOADED | SR_LOC_MAPPED; 159 + loc->map_reg = map_reg; 160 + 161 + WARN_ON(locate_direct_register(vcpu, map_reg) != SR_LOC_MEMORY); 162 + 163 + if (xlate != NULL && !vcpu_el2_e2h_is_set(vcpu)) { 164 + loc->loc |= SR_LOC_XLATED; 165 + loc->xlate = xlate; 166 + } 167 + } 168 + 169 + #define MAPPED_EL2_SYSREG(r, m, t) \ 170 + case r: { \ 171 + locate_mapped_el2_register(vcpu, r, m, t, loc); \ 172 + break; \ 173 + } 174 + 175 + static void locate_register(const struct kvm_vcpu *vcpu, enum vcpu_sysreg reg, 176 + struct sr_loc *loc) 177 + { 178 + if (!vcpu_get_flag(vcpu, SYSREGS_ON_CPU)) { 179 + loc->loc = SR_LOC_MEMORY; 180 + return; 181 + } 182 + 183 + switch (reg) { 122 184 MAPPED_EL2_SYSREG(SCTLR_EL2, SCTLR_EL1, 123 185 translate_sctlr_el2_to_sctlr_el1 ); 124 186 MAPPED_EL2_SYSREG(CPTR_EL2, CPACR_EL1, ··· 206 144 MAPPED_EL2_SYSREG(ZCR_EL2, ZCR_EL1, NULL ); 207 145 MAPPED_EL2_SYSREG(CONTEXTIDR_EL2, CONTEXTIDR_EL1, NULL ); 208 146 MAPPED_EL2_SYSREG(SCTLR2_EL2, SCTLR2_EL1, NULL ); 147 + case CNTHCTL_EL2: 148 + /* CNTHCTL_EL2 is super special, until we support NV2.1 */ 149 + loc->loc = ((is_hyp_ctxt(vcpu) && vcpu_el2_e2h_is_set(vcpu)) ? 150 + SR_LOC_SPECIAL : SR_LOC_MEMORY); 151 + break; 209 152 default: 210 - return false; 153 + loc->loc = locate_direct_register(vcpu, reg); 211 154 } 212 155 } 213 156 214 - u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg) 157 + static u64 read_sr_from_cpu(enum vcpu_sysreg reg) 215 158 { 216 159 u64 val = 0x8badf00d8badf00d; 217 - u64 (*xlate)(u64) = NULL; 218 - unsigned int el1r; 219 160 220 - if (!vcpu_get_flag(vcpu, SYSREGS_ON_CPU)) 221 - goto memory_read; 161 + switch (reg) { 162 + case SCTLR_EL1: val = read_sysreg_s(SYS_SCTLR_EL12); break; 163 + case CPACR_EL1: val = read_sysreg_s(SYS_CPACR_EL12); break; 164 + case TTBR0_EL1: val = read_sysreg_s(SYS_TTBR0_EL12); break; 165 + case TTBR1_EL1: val = read_sysreg_s(SYS_TTBR1_EL12); break; 166 + case TCR_EL1: val = read_sysreg_s(SYS_TCR_EL12); break; 167 + case TCR2_EL1: val = read_sysreg_s(SYS_TCR2_EL12); break; 168 + case PIR_EL1: val = read_sysreg_s(SYS_PIR_EL12); break; 169 + case PIRE0_EL1: val = read_sysreg_s(SYS_PIRE0_EL12); break; 170 + case POR_EL1: val = read_sysreg_s(SYS_POR_EL12); break; 171 + case ESR_EL1: val = read_sysreg_s(SYS_ESR_EL12); break; 172 + case AFSR0_EL1: val = read_sysreg_s(SYS_AFSR0_EL12); break; 173 + case AFSR1_EL1: val = read_sysreg_s(SYS_AFSR1_EL12); break; 174 + case FAR_EL1: val = read_sysreg_s(SYS_FAR_EL12); break; 175 + case MAIR_EL1: val = read_sysreg_s(SYS_MAIR_EL12); break; 176 + case VBAR_EL1: val = read_sysreg_s(SYS_VBAR_EL12); break; 177 + case CONTEXTIDR_EL1: val = read_sysreg_s(SYS_CONTEXTIDR_EL12);break; 178 + case AMAIR_EL1: val = read_sysreg_s(SYS_AMAIR_EL12); break; 179 + case CNTKCTL_EL1: val = read_sysreg_s(SYS_CNTKCTL_EL12); break; 180 + case ELR_EL1: val = read_sysreg_s(SYS_ELR_EL12); break; 181 + case SPSR_EL1: val = read_sysreg_s(SYS_SPSR_EL12); break; 182 + case ZCR_EL1: val = read_sysreg_s(SYS_ZCR_EL12); break; 183 + case SCTLR2_EL1: val = read_sysreg_s(SYS_SCTLR2_EL12); break; 184 + case TPIDR_EL0: val = read_sysreg_s(SYS_TPIDR_EL0); break; 185 + case TPIDRRO_EL0: val = 
read_sysreg_s(SYS_TPIDRRO_EL0); break; 186 + case TPIDR_EL1: val = read_sysreg_s(SYS_TPIDR_EL1); break; 187 + case PAR_EL1: val = read_sysreg_par(); break; 188 + case DACR32_EL2: val = read_sysreg_s(SYS_DACR32_EL2); break; 189 + case IFSR32_EL2: val = read_sysreg_s(SYS_IFSR32_EL2); break; 190 + case DBGVCR32_EL2: val = read_sysreg_s(SYS_DBGVCR32_EL2); break; 191 + default: WARN_ON_ONCE(1); 192 + } 222 193 223 - if (unlikely(get_el2_to_el1_mapping(reg, &el1r, &xlate))) { 224 - if (!is_hyp_ctxt(vcpu)) 225 - goto memory_read; 194 + return val; 195 + } 196 + 197 + static void write_sr_to_cpu(enum vcpu_sysreg reg, u64 val) 198 + { 199 + switch (reg) { 200 + case SCTLR_EL1: write_sysreg_s(val, SYS_SCTLR_EL12); break; 201 + case CPACR_EL1: write_sysreg_s(val, SYS_CPACR_EL12); break; 202 + case TTBR0_EL1: write_sysreg_s(val, SYS_TTBR0_EL12); break; 203 + case TTBR1_EL1: write_sysreg_s(val, SYS_TTBR1_EL12); break; 204 + case TCR_EL1: write_sysreg_s(val, SYS_TCR_EL12); break; 205 + case TCR2_EL1: write_sysreg_s(val, SYS_TCR2_EL12); break; 206 + case PIR_EL1: write_sysreg_s(val, SYS_PIR_EL12); break; 207 + case PIRE0_EL1: write_sysreg_s(val, SYS_PIRE0_EL12); break; 208 + case POR_EL1: write_sysreg_s(val, SYS_POR_EL12); break; 209 + case ESR_EL1: write_sysreg_s(val, SYS_ESR_EL12); break; 210 + case AFSR0_EL1: write_sysreg_s(val, SYS_AFSR0_EL12); break; 211 + case AFSR1_EL1: write_sysreg_s(val, SYS_AFSR1_EL12); break; 212 + case FAR_EL1: write_sysreg_s(val, SYS_FAR_EL12); break; 213 + case MAIR_EL1: write_sysreg_s(val, SYS_MAIR_EL12); break; 214 + case VBAR_EL1: write_sysreg_s(val, SYS_VBAR_EL12); break; 215 + case CONTEXTIDR_EL1: write_sysreg_s(val, SYS_CONTEXTIDR_EL12);break; 216 + case AMAIR_EL1: write_sysreg_s(val, SYS_AMAIR_EL12); break; 217 + case CNTKCTL_EL1: write_sysreg_s(val, SYS_CNTKCTL_EL12); break; 218 + case ELR_EL1: write_sysreg_s(val, SYS_ELR_EL12); break; 219 + case SPSR_EL1: write_sysreg_s(val, SYS_SPSR_EL12); break; 220 + case ZCR_EL1: write_sysreg_s(val, SYS_ZCR_EL12); break; 221 + case SCTLR2_EL1: write_sysreg_s(val, SYS_SCTLR2_EL12); break; 222 + case TPIDR_EL0: write_sysreg_s(val, SYS_TPIDR_EL0); break; 223 + case TPIDRRO_EL0: write_sysreg_s(val, SYS_TPIDRRO_EL0); break; 224 + case TPIDR_EL1: write_sysreg_s(val, SYS_TPIDR_EL1); break; 225 + case PAR_EL1: write_sysreg_s(val, SYS_PAR_EL1); break; 226 + case DACR32_EL2: write_sysreg_s(val, SYS_DACR32_EL2); break; 227 + case IFSR32_EL2: write_sysreg_s(val, SYS_IFSR32_EL2); break; 228 + case DBGVCR32_EL2: write_sysreg_s(val, SYS_DBGVCR32_EL2); break; 229 + default: WARN_ON_ONCE(1); 230 + } 231 + } 232 + 233 + u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, enum vcpu_sysreg reg) 234 + { 235 + struct sr_loc loc = {}; 236 + 237 + locate_register(vcpu, reg, &loc); 238 + 239 + WARN_ON_ONCE(!has_vhe() && loc.loc != SR_LOC_MEMORY); 240 + 241 + if (loc.loc & SR_LOC_SPECIAL) { 242 + u64 val; 243 + 244 + WARN_ON_ONCE(loc.loc & ~SR_LOC_SPECIAL); 226 245 227 246 /* 228 - * CNTHCTL_EL2 requires some special treatment to 229 - * account for the bits that can be set via CNTKCTL_EL1. 247 + * CNTHCTL_EL2 requires some special treatment to account 248 + * for the bits that can be set via CNTKCTL_EL1 when E2H==1. 
230 249 */ 231 250 switch (reg) { 232 251 case CNTHCTL_EL2: 233 - if (vcpu_el2_e2h_is_set(vcpu)) { 234 - val = read_sysreg_el1(SYS_CNTKCTL); 235 - val &= CNTKCTL_VALID_BITS; 236 - val |= __vcpu_sys_reg(vcpu, reg) & ~CNTKCTL_VALID_BITS; 237 - return val; 238 - } 239 - break; 252 + val = read_sysreg_el1(SYS_CNTKCTL); 253 + val &= CNTKCTL_VALID_BITS; 254 + val |= __vcpu_sys_reg(vcpu, reg) & ~CNTKCTL_VALID_BITS; 255 + return val; 256 + default: 257 + WARN_ON_ONCE(1); 240 258 } 241 - 242 - /* 243 - * If this register does not have an EL1 counterpart, 244 - * then read the stored EL2 version. 245 - */ 246 - if (reg == el1r) 247 - goto memory_read; 248 - 249 - /* 250 - * If we have a non-VHE guest and that the sysreg 251 - * requires translation to be used at EL1, use the 252 - * in-memory copy instead. 253 - */ 254 - if (!vcpu_el2_e2h_is_set(vcpu) && xlate) 255 - goto memory_read; 256 - 257 - /* Get the current version of the EL1 counterpart. */ 258 - WARN_ON(!__vcpu_read_sys_reg_from_cpu(el1r, &val)); 259 - if (reg >= __SANITISED_REG_START__) 260 - val = kvm_vcpu_apply_reg_masks(vcpu, reg, val); 261 - 262 - return val; 263 259 } 264 260 265 - /* EL1 register can't be on the CPU if the guest is in vEL2. */ 266 - if (unlikely(is_hyp_ctxt(vcpu))) 267 - goto memory_read; 261 + if (loc.loc & SR_LOC_LOADED) { 262 + enum vcpu_sysreg map_reg = reg; 268 263 269 - if (__vcpu_read_sys_reg_from_cpu(reg, &val)) 270 - return val; 264 + if (loc.loc & SR_LOC_MAPPED) 265 + map_reg = loc.map_reg; 271 266 272 - memory_read: 267 + if (!(loc.loc & SR_LOC_XLATED)) { 268 + u64 val = read_sr_from_cpu(map_reg); 269 + 270 + if (reg >= __SANITISED_REG_START__) 271 + val = kvm_vcpu_apply_reg_masks(vcpu, reg, val); 272 + 273 + return val; 274 + } 275 + } 276 + 273 277 return __vcpu_sys_reg(vcpu, reg); 274 278 } 275 279 276 - void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg) 280 + void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, enum vcpu_sysreg reg) 277 281 { 278 - u64 (*xlate)(u64) = NULL; 279 - unsigned int el1r; 282 + struct sr_loc loc = {}; 280 283 281 - if (!vcpu_get_flag(vcpu, SYSREGS_ON_CPU)) 282 - goto memory_write; 284 + locate_register(vcpu, reg, &loc); 283 285 284 - if (unlikely(get_el2_to_el1_mapping(reg, &el1r, &xlate))) { 285 - if (!is_hyp_ctxt(vcpu)) 286 - goto memory_write; 286 + WARN_ON_ONCE(!has_vhe() && loc.loc != SR_LOC_MEMORY); 287 287 288 - /* 289 - * Always store a copy of the write to memory to avoid having 290 - * to reverse-translate virtual EL2 system registers for a 291 - * non-VHE guest hypervisor. 292 - */ 293 - __vcpu_assign_sys_reg(vcpu, reg, val); 288 + if (loc.loc & SR_LOC_SPECIAL) { 289 + 290 + WARN_ON_ONCE(loc.loc & ~SR_LOC_SPECIAL); 294 291 295 292 switch (reg) { 296 293 case CNTHCTL_EL2: 297 294 /* 298 - * If E2H=0, CNHTCTL_EL2 is a pure shadow register. 299 - * Otherwise, some of the bits are backed by 295 + * If E2H=1, some of the bits are backed by 300 296 * CNTKCTL_EL1, while the rest is kept in memory. 301 297 * Yes, this is fun stuff. 302 298 */ 303 - if (vcpu_el2_e2h_is_set(vcpu)) 304 - write_sysreg_el1(val, SYS_CNTKCTL); 305 - return; 299 + write_sysreg_el1(val, SYS_CNTKCTL); 300 + break; 301 + default: 302 + WARN_ON_ONCE(1); 306 303 } 307 - 308 - /* No EL1 counterpart? We're done here.? */ 309 - if (reg == el1r) 310 - return; 311 - 312 - if (!vcpu_el2_e2h_is_set(vcpu) && xlate) 313 - val = xlate(val); 314 - 315 - /* Redirect this to the EL1 version of the register. 
*/ 316 - WARN_ON(!__vcpu_write_sys_reg_to_cpu(val, el1r)); 317 - return; 318 304 } 319 305 320 - /* EL1 register can't be on the CPU if the guest is in vEL2. */ 321 - if (unlikely(is_hyp_ctxt(vcpu))) 322 - goto memory_write; 306 + if (loc.loc & SR_LOC_LOADED) { 307 + enum vcpu_sysreg map_reg = reg; 308 + u64 xlated_val; 323 309 324 - if (__vcpu_write_sys_reg_to_cpu(val, reg)) 325 - return; 310 + if (reg >= __SANITISED_REG_START__) 311 + val = kvm_vcpu_apply_reg_masks(vcpu, reg, val); 326 312 327 - memory_write: 313 + if (loc.loc & SR_LOC_MAPPED) 314 + map_reg = loc.map_reg; 315 + 316 + if (loc.loc & SR_LOC_XLATED) 317 + xlated_val = loc.xlate(val); 318 + else 319 + xlated_val = val; 320 + 321 + write_sr_to_cpu(map_reg, xlated_val); 322 + 323 + /* 324 + * Fall through to write the backing store anyway, which 325 + * allows translated registers to be directly read without a 326 + * reverse translation. 327 + */ 328 + } 329 + 328 330 __vcpu_assign_sys_reg(vcpu, reg, val); 329 331 } 330 332 ··· 1710 1584 } 1711 1585 1712 1586 static u64 sanitise_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu, u64 val); 1587 + static u64 sanitise_id_aa64pfr1_el1(const struct kvm_vcpu *vcpu, u64 val); 1713 1588 static u64 sanitise_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val); 1714 1589 1715 1590 /* Read a sanitised cpufeature ID register by sys_reg_desc */ ··· 1733 1606 val = sanitise_id_aa64pfr0_el1(vcpu, val); 1734 1607 break; 1735 1608 case SYS_ID_AA64PFR1_EL1: 1736 - if (!kvm_has_mte(vcpu->kvm)) { 1737 - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE); 1738 - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac); 1739 - } 1740 - 1741 - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME); 1742 - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_RNDR_trap); 1743 - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_NMI); 1744 - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_GCS); 1745 - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_THE); 1746 - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTEX); 1747 - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_PFAR); 1748 - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MPAM_frac); 1609 + val = sanitise_id_aa64pfr1_el1(vcpu, val); 1749 1610 break; 1750 1611 case SYS_ID_AA64PFR2_EL1: 1751 1612 val &= ID_AA64PFR2_EL1_FPMR | ··· 1743 1628 break; 1744 1629 case SYS_ID_AA64ISAR1_EL1: 1745 1630 if (!vcpu_has_ptrauth(vcpu)) 1746 - val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA) | 1747 - ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API) | 1748 - ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPA) | 1749 - ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI)); 1631 + val &= ~(ID_AA64ISAR1_EL1_APA | 1632 + ID_AA64ISAR1_EL1_API | 1633 + ID_AA64ISAR1_EL1_GPA | 1634 + ID_AA64ISAR1_EL1_GPI); 1750 1635 break; 1751 1636 case SYS_ID_AA64ISAR2_EL1: 1752 1637 if (!vcpu_has_ptrauth(vcpu)) 1753 - val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) | 1754 - ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3)); 1638 + val &= ~(ID_AA64ISAR2_EL1_APA3 | 1639 + ID_AA64ISAR2_EL1_GPA3); 1755 1640 if (!cpus_have_final_cap(ARM64_HAS_WFXT) || 1756 1641 has_broken_cntvoff()) 1757 - val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT); 1642 + val &= ~ID_AA64ISAR2_EL1_WFxT; 1758 1643 break; 1759 1644 case SYS_ID_AA64ISAR3_EL1: 1760 1645 val &= ID_AA64ISAR3_EL1_FPRCVT | ID_AA64ISAR3_EL1_FAMINMAX; ··· 1770 1655 ID_AA64MMFR3_EL1_S1PIE; 1771 1656 break; 1772 1657 case SYS_ID_MMFR4_EL1: 1773 - val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX); 1658 + val &= ~ID_MMFR4_EL1_CCIDX; 1774 1659 break; 1775 1660 } 1776 1661 ··· 1947 1832 * older kernels let the guest see the ID bit. 
1948 1833 */ 1949 1834 val &= ~ID_AA64PFR0_EL1_MPAM_MASK; 1835 + 1836 + return val; 1837 + } 1838 + 1839 + static u64 sanitise_id_aa64pfr1_el1(const struct kvm_vcpu *vcpu, u64 val) 1840 + { 1841 + u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); 1842 + 1843 + if (!kvm_has_mte(vcpu->kvm)) { 1844 + val &= ~ID_AA64PFR1_EL1_MTE; 1845 + val &= ~ID_AA64PFR1_EL1_MTE_frac; 1846 + } 1847 + 1848 + if (!(cpus_have_final_cap(ARM64_HAS_RASV1P1_EXTN) && 1849 + SYS_FIELD_GET(ID_AA64PFR0_EL1, RAS, pfr0) == ID_AA64PFR0_EL1_RAS_IMP)) 1850 + val &= ~ID_AA64PFR1_EL1_RAS_frac; 1851 + 1852 + val &= ~ID_AA64PFR1_EL1_SME; 1853 + val &= ~ID_AA64PFR1_EL1_RNDR_trap; 1854 + val &= ~ID_AA64PFR1_EL1_NMI; 1855 + val &= ~ID_AA64PFR1_EL1_GCS; 1856 + val &= ~ID_AA64PFR1_EL1_THE; 1857 + val &= ~ID_AA64PFR1_EL1_MTEX; 1858 + val &= ~ID_AA64PFR1_EL1_PFAR; 1859 + val &= ~ID_AA64PFR1_EL1_MPAM_frac; 1950 1860 1951 1861 return val; 1952 1862 } ··· 2837 2697 struct kvm *kvm = vcpu->kvm; 2838 2698 2839 2699 switch(reg_to_encoding(r)) { 2700 + case SYS_ERXPFGCDN_EL1: 2701 + case SYS_ERXPFGCTL_EL1: 2702 + case SYS_ERXPFGF_EL1: 2703 + case SYS_ERXMISC2_EL1: 2704 + case SYS_ERXMISC3_EL1: 2705 + if (!(kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, V1P1) || 2706 + (kvm_has_feat_enum(kvm, ID_AA64PFR0_EL1, RAS, IMP) && 2707 + kvm_has_feat(kvm, ID_AA64PFR1_EL1, RAS_frac, RASv1p1)))) { 2708 + kvm_inject_undefined(vcpu); 2709 + return false; 2710 + } 2711 + break; 2840 2712 default: 2841 2713 if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP)) { 2842 2714 kvm_inject_undefined(vcpu); ··· 3081 2929 ~(ID_AA64PFR0_EL1_AMU | 3082 2930 ID_AA64PFR0_EL1_MPAM | 3083 2931 ID_AA64PFR0_EL1_SVE | 3084 - ID_AA64PFR0_EL1_RAS | 3085 2932 ID_AA64PFR0_EL1_AdvSIMD | 3086 2933 ID_AA64PFR0_EL1_FP)), 3087 2934 ID_FILTERED(ID_AA64PFR1_EL1, id_aa64pfr1_el1, ··· 3094 2943 ID_AA64PFR1_EL1_SME | 3095 2944 ID_AA64PFR1_EL1_RES0 | 3096 2945 ID_AA64PFR1_EL1_MPAM_frac | 3097 - ID_AA64PFR1_EL1_RAS_frac | 3098 2946 ID_AA64PFR1_EL1_MTE)), 3099 2947 ID_WRITABLE(ID_AA64PFR2_EL1, 3100 2948 ID_AA64PFR2_EL1_FPMR | ··· 3213 3063 { SYS_DESC(SYS_ERXCTLR_EL1), access_ras }, 3214 3064 { SYS_DESC(SYS_ERXSTATUS_EL1), access_ras }, 3215 3065 { SYS_DESC(SYS_ERXADDR_EL1), access_ras }, 3066 + { SYS_DESC(SYS_ERXPFGF_EL1), access_ras }, 3067 + { SYS_DESC(SYS_ERXPFGCTL_EL1), access_ras }, 3068 + { SYS_DESC(SYS_ERXPFGCDN_EL1), access_ras }, 3216 3069 { SYS_DESC(SYS_ERXMISC0_EL1), access_ras }, 3217 3070 { SYS_DESC(SYS_ERXMISC1_EL1), access_ras }, 3071 + { SYS_DESC(SYS_ERXMISC2_EL1), access_ras }, 3072 + { SYS_DESC(SYS_ERXMISC3_EL1), access_ras }, 3218 3073 3219 3074 MTE_REG(TFSR_EL1), 3220 3075 MTE_REG(TFSRE0_EL1),
+1 -1
arch/arm64/kvm/vgic/vgic-debug.c
··· 69 69 int nr_lpis = 0; 70 70 71 71 xa_for_each(&dist->lpi_xa, intid, irq) { 72 - if (!vgic_try_get_irq_kref(irq)) 72 + if (!vgic_try_get_irq_ref(irq)) 73 73 continue; 74 74 75 75 xa_set_mark(&dist->lpi_xa, intid, LPI_XA_MARK_DEBUG_ITER);
+3 -3
arch/arm64/kvm/vgic/vgic-init.c
··· 53 53 { 54 54 struct vgic_dist *dist = &kvm->arch.vgic; 55 55 56 - xa_init_flags(&dist->lpi_xa, XA_FLAGS_LOCK_IRQ); 56 + xa_init(&dist->lpi_xa); 57 57 } 58 58 59 59 /* CREATION */ ··· 208 208 raw_spin_lock_init(&irq->irq_lock); 209 209 irq->vcpu = NULL; 210 210 irq->target_vcpu = vcpu0; 211 - kref_init(&irq->refcount); 211 + refcount_set(&irq->refcount, 0); 212 212 switch (dist->vgic_model) { 213 213 case KVM_DEV_TYPE_ARM_VGIC_V2: 214 214 irq->targets = 0; ··· 277 277 irq->intid = i; 278 278 irq->vcpu = NULL; 279 279 irq->target_vcpu = vcpu; 280 - kref_init(&irq->refcount); 280 + refcount_set(&irq->refcount, 0); 281 281 if (vgic_irq_is_sgi(i)) { 282 282 /* SGIs */ 283 283 irq->enabled = 1;
+7 -8
arch/arm64/kvm/vgic/vgic-its.c
··· 78 78 { 79 79 struct vgic_dist *dist = &kvm->arch.vgic; 80 80 struct vgic_irq *irq = vgic_get_irq(kvm, intid), *oldirq; 81 - unsigned long flags; 82 81 int ret; 83 82 84 83 /* In this case there is no put, since we keep the reference. */ ··· 88 89 if (!irq) 89 90 return ERR_PTR(-ENOMEM); 90 91 91 - ret = xa_reserve_irq(&dist->lpi_xa, intid, GFP_KERNEL_ACCOUNT); 92 + ret = xa_reserve(&dist->lpi_xa, intid, GFP_KERNEL_ACCOUNT); 92 93 if (ret) { 93 94 kfree(irq); 94 95 return ERR_PTR(ret); ··· 98 99 raw_spin_lock_init(&irq->irq_lock); 99 100 100 101 irq->config = VGIC_CONFIG_EDGE; 101 - kref_init(&irq->refcount); 102 + refcount_set(&irq->refcount, 1); 102 103 irq->intid = intid; 103 104 irq->target_vcpu = vcpu; 104 105 irq->group = 1; 105 106 106 - xa_lock_irqsave(&dist->lpi_xa, flags); 107 + xa_lock(&dist->lpi_xa); 107 108 108 109 /* 109 110 * There could be a race with another vgic_add_lpi(), so we need to 110 111 * check that we don't add a second list entry with the same LPI. 111 112 */ 112 113 oldirq = xa_load(&dist->lpi_xa, intid); 113 - if (vgic_try_get_irq_kref(oldirq)) { 114 + if (vgic_try_get_irq_ref(oldirq)) { 114 115 /* Someone was faster with adding this LPI, lets use that. */ 115 116 kfree(irq); 116 117 irq = oldirq; ··· 125 126 } 126 127 127 128 out_unlock: 128 - xa_unlock_irqrestore(&dist->lpi_xa, flags); 129 + xa_unlock(&dist->lpi_xa); 129 130 130 131 if (ret) 131 132 return ERR_PTR(ret); ··· 546 547 rcu_read_lock(); 547 548 548 549 irq = xa_load(&its->translation_cache, cache_key); 549 - if (!vgic_try_get_irq_kref(irq)) 550 + if (!vgic_try_get_irq_ref(irq)) 550 551 irq = NULL; 551 552 552 553 rcu_read_unlock(); ··· 570 571 * its_lock, as the ITE (and the reference it holds) cannot be freed. 571 572 */ 572 573 lockdep_assert_held(&its->its_lock); 573 - vgic_get_irq_kref(irq); 574 + vgic_get_irq_ref(irq); 574 575 575 576 old = xa_store(&its->translation_cache, cache_key, irq, GFP_KERNEL_ACCOUNT); 576 577
+8
arch/arm64/kvm/vgic/vgic-mmio-v3.c
··· 50 50 51 51 bool vgic_supports_direct_msis(struct kvm *kvm) 52 52 { 53 + /* 54 + * Deliberately conflate vLPI and vSGI support on GICv4.1 hardware, 55 + * indirectly allowing userspace to control whether or not vPEs are 56 + * allocated for the VM. 57 + */ 58 + if (system_supports_direct_sgis() && !vgic_supports_direct_sgis(kvm)) 59 + return false; 60 + 53 61 return kvm_vgic_global_state.has_gicv4 && vgic_has_its(kvm); 54 62 } 55 63
+1 -1
arch/arm64/kvm/vgic/vgic-mmio.c
··· 1091 1091 len = vgic_v3_init_dist_iodev(io_device); 1092 1092 break; 1093 1093 default: 1094 - BUG_ON(1); 1094 + BUG(); 1095 1095 } 1096 1096 1097 1097 io_device->base_addr = dist_base_address;
+1 -1
arch/arm64/kvm/vgic/vgic-v4.c
··· 518 518 if (!irq->hw || irq->host_irq != host_irq) 519 519 continue; 520 520 521 - if (!vgic_try_get_irq_kref(irq)) 521 + if (!vgic_try_get_irq_ref(irq)) 522 522 return NULL; 523 523 524 524 return irq;
+58 -22
arch/arm64/kvm/vgic/vgic.c
··· 28 28 * kvm->arch.config_lock (mutex) 29 29 * its->cmd_lock (mutex) 30 30 * its->its_lock (mutex) 31 - * vgic_cpu->ap_list_lock must be taken with IRQs disabled 32 - * vgic_dist->lpi_xa.xa_lock must be taken with IRQs disabled 31 + * vgic_dist->lpi_xa.xa_lock 32 + * vgic_cpu->ap_list_lock must be taken with IRQs disabled 33 33 * vgic_irq->irq_lock must be taken with IRQs disabled 34 34 * 35 35 * As the ap_list_lock might be taken from the timer interrupt handler, ··· 71 71 rcu_read_lock(); 72 72 73 73 irq = xa_load(&dist->lpi_xa, intid); 74 - if (!vgic_try_get_irq_kref(irq)) 74 + if (!vgic_try_get_irq_ref(irq)) 75 75 irq = NULL; 76 76 77 77 rcu_read_unlock(); ··· 114 114 return vgic_get_irq(vcpu->kvm, intid); 115 115 } 116 116 117 - /* 118 - * We can't do anything in here, because we lack the kvm pointer to 119 - * lock and remove the item from the lpi_list. So we keep this function 120 - * empty and use the return value of kref_put() to trigger the freeing. 121 - */ 122 - static void vgic_irq_release(struct kref *ref) 117 + static void vgic_release_lpi_locked(struct vgic_dist *dist, struct vgic_irq *irq) 123 118 { 119 + lockdep_assert_held(&dist->lpi_xa.xa_lock); 120 + __xa_erase(&dist->lpi_xa, irq->intid); 121 + kfree_rcu(irq, rcu); 122 + } 123 + 124 + static __must_check bool __vgic_put_irq(struct kvm *kvm, struct vgic_irq *irq) 125 + { 126 + if (irq->intid < VGIC_MIN_LPI) 127 + return false; 128 + 129 + return refcount_dec_and_test(&irq->refcount); 130 + } 131 + 132 + static __must_check bool vgic_put_irq_norelease(struct kvm *kvm, struct vgic_irq *irq) 133 + { 134 + if (!__vgic_put_irq(kvm, irq)) 135 + return false; 136 + 137 + irq->pending_release = true; 138 + return true; 124 139 } 125 140 126 141 void vgic_put_irq(struct kvm *kvm, struct vgic_irq *irq) 127 142 { 128 143 struct vgic_dist *dist = &kvm->arch.vgic; 129 - unsigned long flags; 130 144 131 - if (irq->intid < VGIC_MIN_LPI) 145 + if (irq->intid >= VGIC_MIN_LPI) 146 + might_lock(&dist->lpi_xa.xa_lock); 147 + 148 + if (!__vgic_put_irq(kvm, irq)) 132 149 return; 133 150 134 - if (!kref_put(&irq->refcount, vgic_irq_release)) 135 - return; 151 + xa_lock(&dist->lpi_xa); 152 + vgic_release_lpi_locked(dist, irq); 153 + xa_unlock(&dist->lpi_xa); 154 + } 136 155 137 - xa_lock_irqsave(&dist->lpi_xa, flags); 138 - __xa_erase(&dist->lpi_xa, irq->intid); 139 - xa_unlock_irqrestore(&dist->lpi_xa, flags); 156 + static void vgic_release_deleted_lpis(struct kvm *kvm) 157 + { 158 + struct vgic_dist *dist = &kvm->arch.vgic; 159 + unsigned long intid; 160 + struct vgic_irq *irq; 140 161 141 - kfree_rcu(irq, rcu); 162 + xa_lock(&dist->lpi_xa); 163 + 164 + xa_for_each(&dist->lpi_xa, intid, irq) { 165 + if (irq->pending_release) 166 + vgic_release_lpi_locked(dist, irq); 167 + } 168 + 169 + xa_unlock(&dist->lpi_xa); 142 170 } 143 171 144 172 void vgic_flush_pending_lpis(struct kvm_vcpu *vcpu) 145 173 { 146 174 struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu; 147 175 struct vgic_irq *irq, *tmp; 176 + bool deleted = false; 148 177 unsigned long flags; 149 178 150 179 raw_spin_lock_irqsave(&vgic_cpu->ap_list_lock, flags); ··· 184 155 list_del(&irq->ap_list); 185 156 irq->vcpu = NULL; 186 157 raw_spin_unlock(&irq->irq_lock); 187 - vgic_put_irq(vcpu->kvm, irq); 158 + deleted |= vgic_put_irq_norelease(vcpu->kvm, irq); 188 159 } 189 160 } 190 161 191 162 raw_spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags); 163 + 164 + if (deleted) 165 + vgic_release_deleted_lpis(vcpu->kvm); 192 166 } 193 167 194 168 void vgic_irq_set_phys_pending(struct 
vgic_irq *irq, bool pending) ··· 431 399 * now in the ap_list. This is safe as the caller must already hold a 432 400 * reference on the irq. 433 401 */ 434 - vgic_get_irq_kref(irq); 402 + vgic_get_irq_ref(irq); 435 403 list_add_tail(&irq->ap_list, &vcpu->arch.vgic_cpu.ap_list_head); 436 404 irq->vcpu = vcpu; 437 405 ··· 662 630 { 663 631 struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu; 664 632 struct vgic_irq *irq, *tmp; 633 + bool deleted_lpis = false; 665 634 666 635 DEBUG_SPINLOCK_BUG_ON(!irqs_disabled()); 667 636 ··· 690 657 691 658 /* 692 659 * This vgic_put_irq call matches the 693 - * vgic_get_irq_kref in vgic_queue_irq_unlock, 660 + * vgic_get_irq_ref in vgic_queue_irq_unlock, 694 661 * where we added the LPI to the ap_list. As 695 662 * we remove the irq from the list, we drop 696 663 * also drop the refcount. 697 664 */ 698 - vgic_put_irq(vcpu->kvm, irq); 665 + deleted_lpis |= vgic_put_irq_norelease(vcpu->kvm, irq); 699 666 continue; 700 667 } 701 668 ··· 758 725 } 759 726 760 727 raw_spin_unlock(&vgic_cpu->ap_list_lock); 728 + 729 + if (unlikely(deleted_lpis)) 730 + vgic_release_deleted_lpis(vcpu->kvm); 761 731 } 762 732 763 733 static inline void vgic_fold_lr_state(struct kvm_vcpu *vcpu) ··· 854 818 * the AP list has been sorted already. 855 819 */ 856 820 if (multi_sgi && irq->priority > prio) { 857 - _raw_spin_unlock(&irq->irq_lock); 821 + raw_spin_unlock(&irq->irq_lock); 858 822 break; 859 823 } 860 824
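Much of the churn above implements the lock-ordering fix from the merge description: vgic_put_irq() may need the LPI xarray's spinlock to erase a freed entry, but several callers hold ap_list_lock, a raw spinlock, and on PREEMPT_RT a regular spinlock must not be taken inside a raw one. The fix only drops the reference and marks the entry while the raw lock is held, then erases marked entries after it is released. A stripped-down, single-threaded sketch of that mark-then-release pattern, with hypothetical types rather than the kernel's own:

#include <stdbool.h>
#include <stdlib.h>

struct lpi {
    struct lpi *next;
    int refcount;
    bool pending_release;
};

/* Runs with the raw lock held: only mark the entry, never free it here. */
bool put_norelease(struct lpi *irq)
{
    if (--irq->refcount)
        return false;
    irq->pending_release = true;
    return true;
}

/* Runs after the raw lock is dropped: taking the regular lock is now legal. */
struct lpi *release_marked(struct lpi *head)
{
    struct lpi **pp = &head;

    while (*pp) {
        struct lpi *irq = *pp;

        if (irq->pending_release) {
            *pp = irq->next;    /* unlink the dead entry */
            free(irq);
        } else {
            pp = &irq->next;
        }
    }
    return head;
}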
+5 -13
arch/arm64/kvm/vgic/vgic.h
··· 267 267 void vgic_v2_save_state(struct kvm_vcpu *vcpu); 268 268 void vgic_v2_restore_state(struct kvm_vcpu *vcpu); 269 269 270 - static inline bool vgic_try_get_irq_kref(struct vgic_irq *irq) 270 + static inline bool vgic_try_get_irq_ref(struct vgic_irq *irq) 271 271 { 272 272 if (!irq) 273 273 return false; ··· 275 275 if (irq->intid < VGIC_MIN_LPI) 276 276 return true; 277 277 278 - return kref_get_unless_zero(&irq->refcount); 278 + return refcount_inc_not_zero(&irq->refcount); 279 279 } 280 280 281 - static inline void vgic_get_irq_kref(struct vgic_irq *irq) 281 + static inline void vgic_get_irq_ref(struct vgic_irq *irq) 282 282 { 283 - WARN_ON_ONCE(!vgic_try_get_irq_kref(irq)); 283 + WARN_ON_ONCE(!vgic_try_get_irq_ref(irq)); 284 284 } 285 285 286 286 void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu); ··· 396 396 397 397 static inline bool vgic_supports_direct_irqs(struct kvm *kvm) 398 398 { 399 - /* 400 - * Deliberately conflate vLPI and vSGI support on GICv4.1 hardware, 401 - * indirectly allowing userspace to control whether or not vPEs are 402 - * allocated for the VM. 403 - */ 404 - if (system_supports_direct_sgis()) 405 - return vgic_supports_direct_sgis(kvm); 406 - 407 - return vgic_supports_direct_msis(kvm); 399 + return vgic_supports_direct_msis(kvm) || vgic_supports_direct_sgis(kvm); 408 400 } 409 401 410 402 int vgic_v4_init(struct kvm *kvm);
-7
arch/arm64/mm/mmu.c
··· 47 47 #define NO_CONT_MAPPINGS BIT(1) 48 48 #define NO_EXEC_MAPPINGS BIT(2) /* assumes FEAT_HPDS is not used */ 49 49 50 - enum pgtable_type { 51 - TABLE_PTE, 52 - TABLE_PMD, 53 - TABLE_PUD, 54 - TABLE_P4D, 55 - }; 56 - 57 50 u64 kimage_voffset __ro_after_init; 58 51 EXPORT_SYMBOL(kimage_voffset); 59 52
+1
arch/arm64/tools/cpucaps
··· 53 53 HAS_S1POE 54 54 HAS_SCTLR2 55 55 HAS_RAS_EXTN 56 + HAS_RASV1P1_EXTN 56 57 HAS_RNG 57 58 HAS_SB 58 59 HAS_STAGE2_FWB
+3 -3
arch/powerpc/boot/Makefile
··· 243 243 hostprogs := addnote hack-coff mktree 244 244 245 245 targets += $(patsubst $(obj)/%,%,$(obj-boot) wrapper.a) zImage.lds 246 - extra-y := $(obj)/wrapper.a $(obj-plat) $(obj)/empty.o \ 246 + always-y := $(obj)/wrapper.a $(obj-plat) $(obj)/empty.o \ 247 247 $(obj)/zImage.lds $(obj)/zImage.coff.lds $(obj)/zImage.ps3.lds 248 248 249 249 dtstree := $(src)/dts 250 250 251 251 wrapper := $(src)/wrapper 252 - wrapperbits := $(extra-y) $(addprefix $(obj)/,addnote hack-coff mktree) \ 252 + wrapperbits := $(always-y) $(addprefix $(obj)/,addnote hack-coff mktree) \ 253 253 $(wrapper) FORCE 254 254 255 255 ############# ··· 456 456 WRAPPER_BINDIR := /usr/sbin 457 457 INSTALL := install 458 458 459 - extra-installed := $(patsubst $(obj)/%, $(DESTDIR)$(WRAPPER_OBJDIR)/%, $(extra-y)) 459 + extra-installed := $(patsubst $(obj)/%, $(DESTDIR)$(WRAPPER_OBJDIR)/%, $(always-y)) 460 460 hostprogs-installed := $(patsubst %, $(DESTDIR)$(WRAPPER_BINDIR)/%, $(hostprogs)) 461 461 wrapper-installed := $(DESTDIR)$(WRAPPER_BINDIR)/wrapper 462 462 dts-installed := $(patsubst $(dtstree)/%, $(DESTDIR)$(WRAPPER_DTSDIR)/%, $(wildcard $(dtstree)/*.dts))
+7 -7
arch/powerpc/boot/install.sh
··· 19 19 set -e 20 20 21 21 # this should work for both the pSeries zImage and the iSeries vmlinux.sm 22 - image_name=`basename $2` 22 + image_name=$(basename "$2") 23 23 24 24 25 25 echo "Warning: '${INSTALLKERNEL}' command not available... Copying" \ 26 26 "directly to $4/$image_name-$1" >&2 27 27 28 - if [ -f $4/$image_name-$1 ]; then 29 - mv $4/$image_name-$1 $4/$image_name-$1.old 28 + if [ -f "$4"/"$image_name"-"$1" ]; then 29 + mv "$4"/"$image_name"-"$1" "$4"/"$image_name"-"$1".old 30 30 fi 31 31 32 - if [ -f $4/System.map-$1 ]; then 33 - mv $4/System.map-$1 $4/System-$1.old 32 + if [ -f "$4"/System.map-"$1" ]; then 33 + mv "$4"/System.map-"$1" "$4"/System-"$1".old 34 34 fi 35 35 36 - cat $2 > $4/$image_name-$1 37 - cp $3 $4/System.map-$1 36 + cat "$2" > "$4"/"$image_name"-"$1" 37 + cp "$3" "$4"/System.map-"$1"
+3 -1
arch/powerpc/kernel/Makefile
··· 199 199 200 200 obj-$(CONFIG_PPC_OF_BOOT_TRAMPOLINE) += prom_init.o 201 201 obj64-$(CONFIG_PPC_OF_BOOT_TRAMPOLINE) += prom_entry_64.o 202 - extra-$(CONFIG_PPC_OF_BOOT_TRAMPOLINE) += prom_init_check 202 + ifdef KBUILD_BUILTIN 203 + always-$(CONFIG_PPC_OF_BOOT_TRAMPOLINE) += prom_init_check 204 + endif 203 205 204 206 obj-$(CONFIG_PPC64) += $(obj64-y) 205 207 obj-$(CONFIG_PPC32) += $(obj32-y)
+4 -4
arch/powerpc/kernel/kvm.c
··· 632 632 #endif 633 633 } 634 634 635 - switch (inst_no_rt & ~KVM_MASK_RB) { 636 635 #ifdef CONFIG_PPC_BOOK3S_32 636 + switch (inst_no_rt & ~KVM_MASK_RB) { 637 637 case KVM_INST_MTSRIN: 638 638 if (features & KVM_MAGIC_FEAT_SR) { 639 639 u32 inst_rb = _inst & KVM_MASK_RB; 640 640 kvm_patch_ins_mtsrin(inst, inst_rt, inst_rb); 641 641 } 642 642 break; 643 - #endif 644 643 } 644 + #endif 645 645 646 - switch (_inst) { 647 646 #ifdef CONFIG_BOOKE 647 + switch (_inst) { 648 648 case KVM_INST_WRTEEI_0: 649 649 kvm_patch_ins_wrteei_0(inst); 650 650 break; ··· 652 652 case KVM_INST_WRTEEI_1: 653 653 kvm_patch_ins_wrtee(inst, 0, 1); 654 654 break; 655 - #endif 656 655 } 656 + #endif 657 657 } 658 658 659 659 extern u32 kvm_template_start[];
+8 -8
arch/powerpc/kernel/prom_init_check.sh
··· 15 15 16 16 has_renamed_memintrinsics() 17 17 { 18 - grep -q "^CONFIG_KASAN=y$" ${KCONFIG_CONFIG} && \ 19 - ! grep -q "^CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX=y" ${KCONFIG_CONFIG} 18 + grep -q "^CONFIG_KASAN=y$" "${KCONFIG_CONFIG}" && \ 19 + ! grep -q "^CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX=y" "${KCONFIG_CONFIG}" 20 20 } 21 21 22 22 if has_renamed_memintrinsics ··· 42 42 { 43 43 file=$1 44 44 section=$2 45 - size=$(objdump -h -j $section $file 2>/dev/null | awk "\$2 == \"$section\" {print \$3}") 45 + size=$(objdump -h -j "$section" "$file" 2>/dev/null | awk "\$2 == \"$section\" {print \$3}") 46 46 size=${size:-0} 47 - if [ $size -ne 0 ]; then 47 + if [ "$size" -ne 0 ]; then 48 48 ERROR=1 49 49 echo "Error: Section $section not empty in prom_init.c" >&2 50 50 fi 51 51 } 52 52 53 - for UNDEF in $($NM -u $OBJ | awk '{print $2}') 53 + for UNDEF in $($NM -u "$OBJ" | awk '{print $2}') 54 54 do 55 55 # On 64-bit nm gives us the function descriptors, which have 56 56 # a leading . on the name, so strip it off here. ··· 87 87 fi 88 88 done 89 89 90 - check_section $OBJ .data 91 - check_section $OBJ .bss 92 - check_section $OBJ .init.data 90 + check_section "$OBJ" .data 91 + check_section "$OBJ" .bss 92 + check_section "$OBJ" .init.data 93 93 94 94 exit $ERROR
+1 -4
arch/powerpc/kernel/setup_64.c
··· 141 141 smt_enabled_at_boot = 0; 142 142 else { 143 143 int smt; 144 - int rc; 145 - 146 - rc = kstrtoint(smt_enabled_cmdline, 10, &smt); 147 - if (!rc) 144 + if (!kstrtoint(smt_enabled_cmdline, 10, &smt)) 148 145 smt_enabled_at_boot = 149 146 min(threads_per_core, smt); 150 147 }
+1 -1
arch/powerpc/kvm/powerpc.c
··· 69 69 70 70 /* 71 71 * Common checks before entering the guest world. Call with interrupts 72 - * disabled. 72 + * enabled. 73 73 * 74 74 * returns: 75 75 *
+1 -2
arch/powerpc/platforms/8xx/cpm1-ic.c
··· 110 110 111 111 out_be32(&data->reg->cpic_cimr, 0); 112 112 113 - data->host = irq_domain_create_linear(of_fwnode_handle(dev->of_node), 114 - 64, &cpm_pic_host_ops, data); 113 + data->host = irq_domain_create_linear(dev_fwnode(dev), 64, &cpm_pic_host_ops, data); 115 114 if (!data->host) 116 115 return -ENODEV; 117 116
+4 -9
arch/powerpc/platforms/Kconfig.cputype
··· 122 122 If unsure, select Generic. 123 123 124 124 config POWERPC64_CPU 125 - bool "Generic (POWER5 and PowerPC 970 and above)" 126 - depends on PPC_BOOK3S_64 && !CPU_LITTLE_ENDIAN 125 + bool "Generic 64 bits powerpc" 126 + depends on PPC_BOOK3S_64 127 + select ARCH_HAS_FAST_MULTIPLIER if CPU_LITTLE_ENDIAN 127 128 select PPC_64S_HASH_MMU 128 - 129 - config POWERPC64_CPU 130 - bool "Generic (POWER8 and above)" 131 - depends on PPC_BOOK3S_64 && CPU_LITTLE_ENDIAN 132 - select ARCH_HAS_FAST_MULTIPLIER 133 - select PPC_64S_HASH_MMU 134 - select PPC_HAS_LBARX_LHARX 129 + select PPC_HAS_LBARX_LHARX if CPU_LITTLE_ENDIAN 135 130 136 131 config POWERPC_CPU 137 132 bool "Generic 32 bits powerpc"
+2 -3
arch/powerpc/sysdev/fsl_msi.c
··· 412 412 } 413 413 platform_set_drvdata(dev, msi); 414 414 415 - msi->irqhost = irq_domain_create_linear(of_fwnode_handle(dev->dev.of_node), 416 - NR_MSI_IRQS_MAX, &fsl_msi_host_ops, msi); 417 - 415 + msi->irqhost = irq_domain_create_linear(dev_fwnode(&dev->dev), NR_MSI_IRQS_MAX, 416 + &fsl_msi_host_ops, msi); 418 417 if (msi->irqhost == NULL) { 419 418 dev_err(&dev->dev, "No memory for MSI irqhost\n"); 420 419 err = -ENOMEM;
+4 -1
arch/riscv/kvm/mmu.c
··· 39 39 unsigned long size, bool writable, bool in_atomic) 40 40 { 41 41 int ret = 0; 42 + pgprot_t prot; 42 43 unsigned long pfn; 43 44 phys_addr_t addr, end; 44 45 struct kvm_mmu_memory_cache pcache = { ··· 56 55 57 56 end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK; 58 57 pfn = __phys_to_pfn(hpa); 58 + prot = pgprot_noncached(PAGE_WRITE); 59 59 60 60 for (addr = gpa; addr < end; addr += PAGE_SIZE) { 61 61 map.addr = addr; 62 - map.pte = pfn_pte(pfn, PAGE_KERNEL_IO); 62 + map.pte = pfn_pte(pfn, prot); 63 + map.pte = pte_mkdirty(map.pte); 63 64 map.level = 0; 64 65 65 66 if (!writable)
+1 -1
arch/riscv/kvm/vcpu.c
··· 683 683 } 684 684 685 685 /** 686 - * check_vcpu_requests - check and handle pending vCPU requests 686 + * kvm_riscv_check_vcpu_requests - check and handle pending vCPU requests 687 687 * @vcpu: the VCPU pointer 688 688 * 689 689 * Return: 1 if we should enter the guest
+2
arch/riscv/kvm/vcpu_vector.c
··· 182 182 struct kvm_cpu_context *cntx = &vcpu->arch.guest_context; 183 183 unsigned long reg_val; 184 184 185 + if (reg_size != sizeof(reg_val)) 186 + return -EINVAL; 185 187 if (copy_from_user(&reg_val, uaddr, reg_size)) 186 188 return -EFAULT; 187 189 if (reg_val != cntx->vector.vlenb)
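The two lines added above are the usual guard before copying a fixed-size value from userspace: reject any request whose size does not match the backing field, so a short or oversized copy can never happen. A minimal, self-contained sketch of the same idea (the helper name is hypothetical):

	static int copy_fixed_size_reg(void __user *uaddr, unsigned long reg_size,
				       unsigned long *out)
	{
		unsigned long reg_val;

		/* reject anything that is not exactly sizeof(reg_val) bytes */
		if (reg_size != sizeof(reg_val))
			return -EINVAL;
		if (copy_from_user(&reg_val, uaddr, reg_size))
			return -EFAULT;

		*out = reg_val;
		return 0;
	}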
+9
arch/x86/kernel/cpu/bugs.c
··· 416 416 cpu_attack_vector_mitigated(CPU_MITIGATE_USER_USER) || 417 417 cpu_attack_vector_mitigated(CPU_MITIGATE_GUEST_GUEST) || 418 418 (smt_mitigations != SMT_MITIGATIONS_OFF); 419 + 420 + case X86_BUG_SPEC_STORE_BYPASS: 421 + return cpu_attack_vector_mitigated(CPU_MITIGATE_USER_USER); 422 + 419 423 default: 420 424 WARN(1, "Unknown bug %x\n", bug); 421 425 return false; ··· 2714 2710 ssb_mode = SPEC_STORE_BYPASS_DISABLE; 2715 2711 break; 2716 2712 case SPEC_STORE_BYPASS_CMD_AUTO: 2713 + if (should_mitigate_vuln(X86_BUG_SPEC_STORE_BYPASS)) 2714 + ssb_mode = SPEC_STORE_BYPASS_PRCTL; 2715 + else 2716 + ssb_mode = SPEC_STORE_BYPASS_NONE; 2717 + break; 2717 2718 case SPEC_STORE_BYPASS_CMD_PRCTL: 2718 2719 ssb_mode = SPEC_STORE_BYPASS_PRCTL; 2719 2720 break;
+1 -1
arch/x86/kernel/cpu/intel.c
··· 262 262 if (c->x86_power & (1 << 8)) { 263 263 set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC); 264 264 set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC); 265 - } else if ((c->x86_vfm >= INTEL_P4_PRESCOTT && c->x86_vfm <= INTEL_P4_WILLAMETTE) || 265 + } else if ((c->x86_vfm >= INTEL_P4_PRESCOTT && c->x86_vfm <= INTEL_P4_CEDARMILL) || 266 266 (c->x86_vfm >= INTEL_CORE_YONAH && c->x86_vfm <= INTEL_IVYBRIDGE)) { 267 267 set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC); 268 268 }
+20 -2
arch/x86/kernel/cpu/microcode/amd.c
··· 171 171 return 1; 172 172 } 173 173 174 + static u32 cpuid_to_ucode_rev(unsigned int val) 175 + { 176 + union zen_patch_rev p = {}; 177 + union cpuid_1_eax c; 178 + 179 + c.full = val; 180 + 181 + p.stepping = c.stepping; 182 + p.model = c.model; 183 + p.ext_model = c.ext_model; 184 + p.ext_fam = c.ext_fam; 185 + 186 + return p.ucode_rev; 187 + } 188 + 174 189 static bool need_sha_check(u32 cur_rev) 175 190 { 191 + if (!cur_rev) { 192 + cur_rev = cpuid_to_ucode_rev(bsp_cpuid_1_eax); 193 + pr_info_once("No current revision, generating the lowest one: 0x%x\n", cur_rev); 194 + } 195 + 176 196 switch (cur_rev >> 8) { 177 197 case 0x80012: return cur_rev <= 0x800126f; break; 178 198 case 0x80082: return cur_rev <= 0x800820f; break; ··· 768 748 769 749 n.equiv_cpu = equiv_cpu; 770 750 n.patch_id = uci->cpu_sig.rev; 771 - 772 - WARN_ON_ONCE(!n.patch_id); 773 751 774 752 list_for_each_entry(p, &microcode_cache, plist) 775 753 if (patch_cpus_equivalent(p, &n, false))
+16 -11
arch/x86/kernel/cpu/topology_amd.c
··· 81 81 82 82 cpuid_leaf(0x8000001e, &leaf); 83 83 84 - tscan->c->topo.initial_apicid = leaf.ext_apic_id; 85 - 86 84 /* 87 - * If leaf 0xb is available, then the domain shifts are set 88 - * already and nothing to do here. Only valid for family >= 0x17. 85 + * If leaf 0xb/0x26 is available, then the APIC ID and the domain 86 + * shifts are set already. 89 87 */ 90 - if (!has_topoext && tscan->c->x86 >= 0x17) { 91 - /* 92 - * Leaf 0x80000008 set the CORE domain shift already. 93 - * Update the SMT domain, but do not propagate it. 94 - */ 95 - unsigned int nthreads = leaf.core_nthreads + 1; 88 + if (!has_topoext) { 89 + tscan->c->topo.initial_apicid = leaf.ext_apic_id; 96 90 97 - topology_update_dom(tscan, TOPO_SMT_DOMAIN, get_count_order(nthreads), nthreads); 91 + /* 92 + * Leaf 0x8000008 sets the CORE domain shift but not the 93 + * SMT domain shift. On CPUs with family >= 0x17, there 94 + * might be hyperthreads. 95 + */ 96 + if (tscan->c->x86 >= 0x17) { 97 + /* Update the SMT domain, but do not propagate it. */ 98 + unsigned int nthreads = leaf.core_nthreads + 1; 99 + 100 + topology_update_dom(tscan, TOPO_SMT_DOMAIN, 101 + get_count_order(nthreads), nthreads); 102 + } 98 103 } 99 104 100 105 store_node(tscan, leaf.nnodes_per_socket + 1, leaf.node_id);
+8 -5
block/blk-rq-qos.h
··· 149 149 q = bdev_get_queue(bio->bi_bdev); 150 150 151 151 /* 152 - * If a bio has BIO_QOS_xxx set, it implicitly implies that 153 - * q->rq_qos is present. So, we skip re-checking q->rq_qos 154 - * here as an extra optimization and directly call 155 - * __rq_qos_done_bio(). 152 + * A BIO may carry BIO_QOS_* flags even if the associated request_queue 153 + * does not have rq_qos enabled. This can happen with stacked block 154 + * devices — for example, NVMe multipath, where it's possible that the 155 + * bottom device has QoS enabled but the top device does not. Therefore, 156 + * always verify that q->rq_qos is present and QoS is enabled before 157 + * calling __rq_qos_done_bio(). 156 158 */ 157 - __rq_qos_done_bio(q->rq_qos, bio); 159 + if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) 160 + __rq_qos_done_bio(q->rq_qos, bio); 158 161 } 159 162 160 163 static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio)
+6 -5
block/blk-zoned.c
··· 1286 1286 struct block_device *bdev; 1287 1287 unsigned long flags; 1288 1288 struct bio *bio; 1289 + bool prepared; 1289 1290 1290 1291 /* 1291 1292 * Submit the next plugged BIO. If we do not have any, clear 1292 1293 * the plugged flag. 1293 1294 */ 1294 - spin_lock_irqsave(&zwplug->lock, flags); 1295 - 1296 1295 again: 1296 + spin_lock_irqsave(&zwplug->lock, flags); 1297 1297 bio = bio_list_pop(&zwplug->bio_list); 1298 1298 if (!bio) { 1299 1299 zwplug->flags &= ~BLK_ZONE_WPLUG_PLUGGED; ··· 1304 1304 trace_blk_zone_wplug_bio(zwplug->disk->queue, zwplug->zone_no, 1305 1305 bio->bi_iter.bi_sector, bio_sectors(bio)); 1306 1306 1307 - if (!blk_zone_wplug_prepare_bio(zwplug, bio)) { 1307 + prepared = blk_zone_wplug_prepare_bio(zwplug, bio); 1308 + spin_unlock_irqrestore(&zwplug->lock, flags); 1309 + 1310 + if (!prepared) { 1308 1311 blk_zone_wplug_bio_io_error(zwplug, bio); 1309 1312 goto again; 1310 1313 } 1311 - 1312 - spin_unlock_irqrestore(&zwplug->lock, flags); 1313 1314 1314 1315 bdev = bio->bi_bdev; 1315 1316
+40 -17
drivers/ata/ahci.c
··· 689 689 "where <pci_dev> is the PCI ID of an AHCI controller in the " 690 690 "form \"domain:bus:dev.func\""); 691 691 692 - static void ahci_apply_port_map_mask(struct device *dev, 693 - struct ahci_host_priv *hpriv, char *mask_s) 692 + static char *ahci_mask_port_ext; 693 + module_param_named(mask_port_ext, ahci_mask_port_ext, charp, 0444); 694 + MODULE_PARM_DESC(mask_port_ext, 695 + "32-bits mask to ignore the external/hotplug capability of ports. " 696 + "Valid values are: " 697 + "\"<mask>\" to apply the same mask to all AHCI controller " 698 + "devices, and \"<pci_dev>=<mask>,<pci_dev>=<mask>,...\" to " 699 + "specify different masks for the controllers specified, " 700 + "where <pci_dev> is the PCI ID of an AHCI controller in the " 701 + "form \"domain:bus:dev.func\""); 702 + 703 + static u32 ahci_port_mask(struct device *dev, char *mask_s) 694 704 { 695 705 unsigned int mask; 696 706 697 707 if (kstrtouint(mask_s, 0, &mask)) { 698 708 dev_err(dev, "Invalid port map mask\n"); 699 - return; 709 + return 0; 700 710 } 701 711 702 - hpriv->mask_port_map = mask; 712 + return mask; 703 713 } 704 714 705 - static void ahci_get_port_map_mask(struct device *dev, 706 - struct ahci_host_priv *hpriv) 715 + static u32 ahci_get_port_mask(struct device *dev, char *mask_p) 707 716 { 708 717 char *param, *end, *str, *mask_s; 709 718 char *name; 719 + u32 mask = 0; 710 720 711 - if (!strlen(ahci_mask_port_map)) 712 - return; 721 + if (!mask_p || !strlen(mask_p)) 722 + return 0; 713 723 714 - str = kstrdup(ahci_mask_port_map, GFP_KERNEL); 724 + str = kstrdup(mask_p, GFP_KERNEL); 715 725 if (!str) 716 - return; 726 + return 0; 717 727 718 728 /* Handle single mask case */ 719 729 if (!strchr(str, '=')) { 720 - ahci_apply_port_map_mask(dev, hpriv, str); 730 + mask = ahci_port_mask(dev, str); 721 731 goto free; 722 732 } 723 733 724 734 /* 725 - * Mask list case: parse the parameter to apply the mask only if 735 + * Mask list case: parse the parameter to get the mask only if 726 736 * the device name matches. 727 737 */ 728 738 param = str; ··· 762 752 param++; 763 753 } 764 754 765 - ahci_apply_port_map_mask(dev, hpriv, mask_s); 755 + mask = ahci_port_mask(dev, mask_s); 766 756 } 767 757 768 758 free: 769 759 kfree(str); 760 + 761 + return mask; 770 762 } 771 763 772 764 static void ahci_pci_save_initial_config(struct pci_dev *pdev, ··· 794 782 } 795 783 796 784 /* Handle port map masks passed as module parameter. */ 797 - if (ahci_mask_port_map) 798 - ahci_get_port_map_mask(&pdev->dev, hpriv); 785 + hpriv->mask_port_map = 786 + ahci_get_port_mask(&pdev->dev, ahci_mask_port_map); 787 + hpriv->mask_port_ext = 788 + ahci_get_port_mask(&pdev->dev, ahci_mask_port_ext); 799 789 800 790 ahci_save_initial_config(&pdev->dev, hpriv); 801 791 } ··· 1771 1757 void __iomem *port_mmio = ahci_port_base(ap); 1772 1758 u32 tmp; 1773 1759 1774 - /* mark external ports (hotplug-capable, eSATA) */ 1760 + /* 1761 + * Mark external ports (hotplug-capable, eSATA), unless we were asked to 1762 + * ignore this feature. 1763 + */ 1775 1764 tmp = readl(port_mmio + PORT_CMD); 1776 1765 if (((tmp & PORT_CMD_ESP) && (hpriv->cap & HOST_CAP_SXS)) || 1777 - (tmp & PORT_CMD_HPCP)) 1766 + (tmp & PORT_CMD_HPCP)) { 1767 + if (hpriv->mask_port_ext & (1U << ap->port_no)) { 1768 + ata_port_info(ap, 1769 + "Ignoring external/hotplug capability\n"); 1770 + return; 1771 + } 1778 1772 ap->pflags |= ATA_PFLAG_EXTERNAL; 1773 + } 1779 1774 } 1780 1775 1781 1776 static void ahci_update_initial_lpm_policy(struct ata_port *ap)
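The new mask_port_ext module parameter accepts the same syntax as the existing mask_port_map parameter: either a single mask applied to every controller, or a comma-separated <pci_dev>=<mask> list. As a hedged usage sketch — the PCI addresses below are placeholders only:

	ahci.mask_port_ext=0x3
	ahci.mask_port_ext=0000:00:1f.2=0x1,0000:03:00.0=0x4

The first form ignores the external/hotplug capability on ports 0 and 1 of every AHCI controller; the second applies a per-controller mask, one bit per port number.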
+1
drivers/ata/ahci.h
··· 330 330 /* Input fields */ 331 331 unsigned int flags; /* AHCI_HFLAG_* */ 332 332 u32 mask_port_map; /* Mask of valid ports */ 333 + u32 mask_port_ext; /* Mask of ports ext capability */ 333 334 334 335 void __iomem * mmio; /* bus-independent mem map */ 335 336 u32 cap; /* cap to use */
+2 -5
drivers/ata/ahci_xgene.c
··· 450 450 { 451 451 int pmp = sata_srst_pmp(link); 452 452 struct ata_port *ap = link->ap; 453 - u32 rc; 454 453 void __iomem *port_mmio = ahci_port_base(ap); 455 454 u32 port_fbs; 456 455 ··· 462 463 port_fbs |= pmp << PORT_FBS_DEV_OFFSET; 463 464 writel(port_fbs, port_mmio + PORT_FBS); 464 465 465 - rc = ahci_do_softreset(link, class, pmp, deadline, ahci_check_ready); 466 - 467 - return rc; 466 + return ahci_do_softreset(link, class, pmp, deadline, ahci_check_ready); 468 467 } 469 468 470 469 /** ··· 497 500 u32 port_fbs; 498 501 u32 port_fbs_save; 499 502 u32 retry = 1; 500 - u32 rc; 503 + int rc; 501 504 502 505 port_fbs_save = readl(port_mmio + PORT_FBS); 503 506
+14 -3
drivers/atm/atmtcp.c
··· 279 279 return NULL; 280 280 } 281 281 282 + static int atmtcp_c_pre_send(struct atm_vcc *vcc, struct sk_buff *skb) 283 + { 284 + struct atmtcp_hdr *hdr; 285 + 286 + if (skb->len < sizeof(struct atmtcp_hdr)) 287 + return -EINVAL; 288 + 289 + hdr = (struct atmtcp_hdr *)skb->data; 290 + if (hdr->length == ATMTCP_HDR_MAGIC) 291 + return -EINVAL; 292 + 293 + return 0; 294 + } 282 295 283 296 static int atmtcp_c_send(struct atm_vcc *vcc,struct sk_buff *skb) 284 297 { ··· 300 287 struct atm_vcc *out_vcc; 301 288 struct sk_buff *new_skb; 302 289 int result = 0; 303 - 304 - if (skb->len < sizeof(struct atmtcp_hdr)) 305 - goto done; 306 290 307 291 dev = vcc->dev_data; 308 292 hdr = (struct atmtcp_hdr *) skb->data; ··· 357 347 358 348 static const struct atmdev_ops atmtcp_c_dev_ops = { 359 349 .close = atmtcp_c_close, 350 + .pre_send = atmtcp_c_pre_send, 360 351 .send = atmtcp_c_send 361 352 }; 362 353
+2 -2
drivers/base/power/main.c
··· 675 675 idx = device_links_read_lock(); 676 676 677 677 /* Start processing the device's "async" consumers. */ 678 - list_for_each_entry_rcu(link, &dev->links.consumers, s_node) 678 + list_for_each_entry_rcu_locked(link, &dev->links.consumers, s_node) 679 679 if (READ_ONCE(link->status) != DL_STATE_DORMANT) 680 680 dpm_async_with_cleanup(link->consumer, func); 681 681 ··· 1330 1330 idx = device_links_read_lock(); 1331 1331 1332 1332 /* Start processing the device's "async" suppliers. */ 1333 - list_for_each_entry_rcu(link, &dev->links.suppliers, c_node) 1333 + list_for_each_entry_rcu_locked(link, &dev->links.suppliers, c_node) 1334 1334 if (READ_ONCE(link->status) != DL_STATE_DORMANT) 1335 1335 dpm_async_with_cleanup(link->supplier, func); 1336 1336
+16 -10
drivers/block/loop.c
··· 139 139 140 140 static loff_t lo_calculate_size(struct loop_device *lo, struct file *file) 141 141 { 142 - struct kstat stat; 143 142 loff_t loopsize; 144 143 int ret; 145 144 146 - /* 147 - * Get the accurate file size. This provides better results than 148 - * cached inode data, particularly for network filesystems where 149 - * metadata may be stale. 150 - */ 151 - ret = vfs_getattr_nosec(&file->f_path, &stat, STATX_SIZE, 0); 152 - if (ret) 153 - return 0; 145 + if (S_ISBLK(file_inode(file)->i_mode)) { 146 + loopsize = i_size_read(file->f_mapping->host); 147 + } else { 148 + struct kstat stat; 154 149 155 - loopsize = stat.size; 150 + /* 151 + * Get the accurate file size. This provides better results than 152 + * cached inode data, particularly for network filesystems where 153 + * metadata may be stale. 154 + */ 155 + ret = vfs_getattr_nosec(&file->f_path, &stat, STATX_SIZE, 0); 156 + if (ret) 157 + return 0; 158 + 159 + loopsize = stat.size; 160 + } 161 + 156 162 if (lo->lo_offset > 0) 157 163 loopsize -= lo->lo_offset; 158 164 /* offset is beyond i_size, weird but possible */
+70 -2
drivers/block/ublk_drv.c
··· 239 239 struct mutex cancel_mutex; 240 240 bool canceling; 241 241 pid_t ublksrv_tgid; 242 + struct delayed_work exit_work; 242 243 }; 243 244 244 245 /* header of ublk_params */ ··· 1596 1595 ublk_get_queue(ub, i)->canceling = canceling; 1597 1596 } 1598 1597 1599 - static int ublk_ch_release(struct inode *inode, struct file *filp) 1598 + static bool ublk_check_and_reset_active_ref(struct ublk_device *ub) 1600 1599 { 1601 - struct ublk_device *ub = filp->private_data; 1600 + int i, j; 1601 + 1602 + if (!(ub->dev_info.flags & (UBLK_F_SUPPORT_ZERO_COPY | 1603 + UBLK_F_AUTO_BUF_REG))) 1604 + return false; 1605 + 1606 + for (i = 0; i < ub->dev_info.nr_hw_queues; i++) { 1607 + struct ublk_queue *ubq = ublk_get_queue(ub, i); 1608 + 1609 + for (j = 0; j < ubq->q_depth; j++) { 1610 + struct ublk_io *io = &ubq->ios[j]; 1611 + unsigned int refs = refcount_read(&io->ref) + 1612 + io->task_registered_buffers; 1613 + 1614 + /* 1615 + * UBLK_REFCOUNT_INIT or zero means no active 1616 + * reference 1617 + */ 1618 + if (refs != UBLK_REFCOUNT_INIT && refs != 0) 1619 + return true; 1620 + 1621 + /* reset to zero if the io hasn't active references */ 1622 + refcount_set(&io->ref, 0); 1623 + io->task_registered_buffers = 0; 1624 + } 1625 + } 1626 + return false; 1627 + } 1628 + 1629 + static void ublk_ch_release_work_fn(struct work_struct *work) 1630 + { 1631 + struct ublk_device *ub = 1632 + container_of(work, struct ublk_device, exit_work.work); 1602 1633 struct gendisk *disk; 1603 1634 int i; 1635 + 1636 + /* 1637 + * For zero-copy and auto buffer register modes, I/O references 1638 + * might not be dropped naturally when the daemon is killed, but 1639 + * io_uring guarantees that registered bvec kernel buffers are 1640 + * unregistered finally when freeing io_uring context, then the 1641 + * active references are dropped. 1642 + * 1643 + * Wait until active references are dropped for avoiding use-after-free 1644 + * 1645 + * registered buffer may be unregistered in io_ring's release hander, 1646 + * so have to wait by scheduling work function for avoiding the two 1647 + * file release dependency. 1648 + */ 1649 + if (ublk_check_and_reset_active_ref(ub)) { 1650 + schedule_delayed_work(&ub->exit_work, 1); 1651 + return; 1652 + } 1604 1653 1605 1654 /* 1606 1655 * disk isn't attached yet, either device isn't live, or it has ··· 1724 1673 ublk_reset_ch_dev(ub); 1725 1674 out: 1726 1675 clear_bit(UB_STATE_OPEN, &ub->state); 1676 + 1677 + /* put the reference grabbed in ublk_ch_release() */ 1678 + ublk_put_device(ub); 1679 + } 1680 + 1681 + static int ublk_ch_release(struct inode *inode, struct file *filp) 1682 + { 1683 + struct ublk_device *ub = filp->private_data; 1684 + 1685 + /* 1686 + * Grab ublk device reference, so it won't be gone until we are 1687 + * really released from work function. 1688 + */ 1689 + ublk_get_device(ub); 1690 + 1691 + INIT_DELAYED_WORK(&ub->exit_work, ublk_ch_release_work_fn); 1692 + schedule_delayed_work(&ub->exit_work, 0); 1727 1693 return 0; 1728 1694 } 1729 1695
+27 -34
drivers/firmware/efi/stmm/tee_stmm_efi.c
··· 143 143 return var_hdr->ret_status; 144 144 } 145 145 146 + #define COMM_BUF_SIZE(__payload_size) (MM_COMMUNICATE_HEADER_SIZE + \ 147 + MM_VARIABLE_COMMUNICATE_SIZE + \ 148 + (__payload_size)) 149 + 146 150 /** 147 151 * setup_mm_hdr() - Allocate a buffer for StandAloneMM and initialize the 148 152 * header data. ··· 154 150 * @dptr: pointer address to store allocated buffer 155 151 * @payload_size: payload size 156 152 * @func: standAloneMM function number 157 - * @ret: EFI return code 158 153 * Return: pointer to corresponding StandAloneMM function buffer or NULL 159 154 */ 160 - static void *setup_mm_hdr(u8 **dptr, size_t payload_size, size_t func, 161 - efi_status_t *ret) 155 + static void *setup_mm_hdr(u8 **dptr, size_t payload_size, size_t func) 162 156 { 163 157 const efi_guid_t mm_var_guid = EFI_MM_VARIABLE_GUID; 164 158 struct efi_mm_communicate_header *mm_hdr; ··· 171 169 if (max_buffer_size && 172 170 max_buffer_size < (MM_COMMUNICATE_HEADER_SIZE + 173 171 MM_VARIABLE_COMMUNICATE_SIZE + payload_size)) { 174 - *ret = EFI_INVALID_PARAMETER; 175 172 return NULL; 176 173 } 177 174 178 - comm_buf = kzalloc(MM_COMMUNICATE_HEADER_SIZE + 179 - MM_VARIABLE_COMMUNICATE_SIZE + payload_size, 180 - GFP_KERNEL); 181 - if (!comm_buf) { 182 - *ret = EFI_OUT_OF_RESOURCES; 175 + comm_buf = alloc_pages_exact(COMM_BUF_SIZE(payload_size), 176 + GFP_KERNEL | __GFP_ZERO); 177 + if (!comm_buf) 183 178 return NULL; 184 - } 185 179 186 180 mm_hdr = (struct efi_mm_communicate_header *)comm_buf; 187 181 memcpy(&mm_hdr->header_guid, &mm_var_guid, sizeof(mm_hdr->header_guid)); ··· 185 187 186 188 var_hdr = (struct smm_variable_communicate_header *)mm_hdr->data; 187 189 var_hdr->function = func; 188 - if (dptr) 189 - *dptr = comm_buf; 190 - *ret = EFI_SUCCESS; 190 + *dptr = comm_buf; 191 191 192 192 return var_hdr->data; 193 193 } ··· 208 212 209 213 payload_size = sizeof(*var_payload); 210 214 var_payload = setup_mm_hdr(&comm_buf, payload_size, 211 - SMM_VARIABLE_FUNCTION_GET_PAYLOAD_SIZE, 212 - &ret); 215 + SMM_VARIABLE_FUNCTION_GET_PAYLOAD_SIZE); 213 216 if (!var_payload) 214 - return EFI_OUT_OF_RESOURCES; 217 + return EFI_DEVICE_ERROR; 215 218 216 219 ret = mm_communicate(comm_buf, payload_size); 217 220 if (ret != EFI_SUCCESS) ··· 234 239 */ 235 240 *size -= 2; 236 241 out: 237 - kfree(comm_buf); 242 + free_pages_exact(comm_buf, COMM_BUF_SIZE(payload_size)); 238 243 return ret; 239 244 } 240 245 ··· 254 259 255 260 smm_property = setup_mm_hdr( 256 261 &comm_buf, payload_size, 257 - SMM_VARIABLE_FUNCTION_VAR_CHECK_VARIABLE_PROPERTY_GET, &ret); 262 + SMM_VARIABLE_FUNCTION_VAR_CHECK_VARIABLE_PROPERTY_GET); 258 263 if (!smm_property) 259 - return EFI_OUT_OF_RESOURCES; 264 + return EFI_DEVICE_ERROR; 260 265 261 266 memcpy(&smm_property->guid, vendor, sizeof(smm_property->guid)); 262 267 smm_property->name_size = name_size; ··· 277 282 memcpy(var_property, &smm_property->property, sizeof(*var_property)); 278 283 279 284 out: 280 - kfree(comm_buf); 285 + free_pages_exact(comm_buf, COMM_BUF_SIZE(payload_size)); 281 286 return ret; 282 287 } 283 288 ··· 310 315 311 316 payload_size = MM_VARIABLE_ACCESS_HEADER_SIZE + name_size + tmp_dsize; 312 317 var_acc = setup_mm_hdr(&comm_buf, payload_size, 313 - SMM_VARIABLE_FUNCTION_GET_VARIABLE, &ret); 318 + SMM_VARIABLE_FUNCTION_GET_VARIABLE); 314 319 if (!var_acc) 315 - return EFI_OUT_OF_RESOURCES; 320 + return EFI_DEVICE_ERROR; 316 321 317 322 /* Fill in contents */ 318 323 memcpy(&var_acc->guid, vendor, sizeof(var_acc->guid)); ··· 342 347 memcpy(data, (u8 
*)var_acc->name + var_acc->name_size, 343 348 var_acc->data_size); 344 349 out: 345 - kfree(comm_buf); 350 + free_pages_exact(comm_buf, COMM_BUF_SIZE(payload_size)); 346 351 return ret; 347 352 } 348 353 ··· 375 380 376 381 payload_size = MM_VARIABLE_GET_NEXT_HEADER_SIZE + out_name_size; 377 382 var_getnext = setup_mm_hdr(&comm_buf, payload_size, 378 - SMM_VARIABLE_FUNCTION_GET_NEXT_VARIABLE_NAME, 379 - &ret); 383 + SMM_VARIABLE_FUNCTION_GET_NEXT_VARIABLE_NAME); 380 384 if (!var_getnext) 381 - return EFI_OUT_OF_RESOURCES; 385 + return EFI_DEVICE_ERROR; 382 386 383 387 /* Fill in contents */ 384 388 memcpy(&var_getnext->guid, guid, sizeof(var_getnext->guid)); ··· 398 404 memcpy(name, var_getnext->name, var_getnext->name_size); 399 405 400 406 out: 401 - kfree(comm_buf); 407 + free_pages_exact(comm_buf, COMM_BUF_SIZE(payload_size)); 402 408 return ret; 403 409 } 404 410 ··· 431 437 * the properties, if the allocation fails 432 438 */ 433 439 var_acc = setup_mm_hdr(&comm_buf, payload_size, 434 - SMM_VARIABLE_FUNCTION_SET_VARIABLE, &ret); 440 + SMM_VARIABLE_FUNCTION_SET_VARIABLE); 435 441 if (!var_acc) 436 - return EFI_OUT_OF_RESOURCES; 442 + return EFI_DEVICE_ERROR; 437 443 438 444 /* 439 445 * The API has the ability to override RO flags. If no RO check was ··· 461 467 ret = mm_communicate(comm_buf, payload_size); 462 468 dev_dbg(pvt_data.dev, "Set Variable %s %d %lx\n", __FILE__, __LINE__, ret); 463 469 out: 464 - kfree(comm_buf); 470 + free_pages_exact(comm_buf, COMM_BUF_SIZE(payload_size)); 465 471 return ret; 466 472 } 467 473 ··· 486 492 487 493 payload_size = sizeof(*mm_query_info); 488 494 mm_query_info = setup_mm_hdr(&comm_buf, payload_size, 489 - SMM_VARIABLE_FUNCTION_QUERY_VARIABLE_INFO, 490 - &ret); 495 + SMM_VARIABLE_FUNCTION_QUERY_VARIABLE_INFO); 491 496 if (!mm_query_info) 492 - return EFI_OUT_OF_RESOURCES; 497 + return EFI_DEVICE_ERROR; 493 498 494 499 mm_query_info->attr = attributes; 495 500 ret = mm_communicate(comm_buf, payload_size); ··· 500 507 *max_variable_size = mm_query_info->max_variable_size; 501 508 502 509 out: 503 - kfree(comm_buf); 510 + free_pages_exact(comm_buf, COMM_BUF_SIZE(payload_size)); 504 511 return ret; 505 512 } 506 513
+1 -1
drivers/gpio/gpio-timberdale.c
··· 137 137 u32 ver; 138 138 int ret = 0; 139 139 140 - if (offset < 0 || offset > tgpio->gpio.ngpio) 140 + if (offset < 0 || offset >= tgpio->gpio.ngpio) 141 141 return -EINVAL; 142 142 143 143 ver = ioread32(tgpio->membase + TGPIO_VER);
+14
drivers/gpio/gpiolib-acpi-quirks.c
··· 344 344 .ignore_interrupt = "AMDI0030:00@8", 345 345 }, 346 346 }, 347 + { 348 + /* 349 + * Spurious wakeups from TP_ATTN# pin 350 + * Found in BIOS 5.35 351 + * https://gitlab.freedesktop.org/drm/amd/-/issues/4482 352 + */ 353 + .matches = { 354 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 355 + DMI_MATCH(DMI_PRODUCT_FAMILY, "ProArt PX13"), 356 + }, 357 + .driver_data = &(struct acpi_gpiolib_dmi_quirk) { 358 + .ignore_wake = "ASCP1A00:00@8", 359 + }, 360 + }, 347 361 {} /* Terminating entry */ 348 362 }; 349 363
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
··· 88 88 } 89 89 90 90 r = amdgpu_vm_bo_map(adev, *bo_va, csa_addr, 0, size, 91 - AMDGPU_VM_PAGE_READABLE | AMDGPU_VM_PAGE_WRITEABLE | 92 - AMDGPU_VM_PAGE_EXECUTABLE); 91 + AMDGPU_PTE_READABLE | AMDGPU_PTE_WRITEABLE | 92 + AMDGPU_PTE_EXECUTABLE); 93 93 94 94 if (r) { 95 95 DRM_ERROR("failed to do bo_map on static CSA, err=%d\n", r);
+32 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
··· 285 285 return ret; 286 286 } 287 287 288 + static int amdgpu_dma_buf_vmap(struct dma_buf *dma_buf, struct iosys_map *map) 289 + { 290 + struct drm_gem_object *obj = dma_buf->priv; 291 + struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); 292 + int ret; 293 + 294 + /* 295 + * Pin to keep buffer in place while it's vmap'ed. The actual 296 + * domain is not that important as long as it's mapable. Using 297 + * GTT and VRAM should be compatible with most use cases. 298 + */ 299 + ret = amdgpu_bo_pin(bo, AMDGPU_GEM_DOMAIN_GTT | AMDGPU_GEM_DOMAIN_VRAM); 300 + if (ret) 301 + return ret; 302 + ret = drm_gem_dmabuf_vmap(dma_buf, map); 303 + if (ret) 304 + amdgpu_bo_unpin(bo); 305 + 306 + return ret; 307 + } 308 + 309 + static void amdgpu_dma_buf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map) 310 + { 311 + struct drm_gem_object *obj = dma_buf->priv; 312 + struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); 313 + 314 + drm_gem_dmabuf_vunmap(dma_buf, map); 315 + amdgpu_bo_unpin(bo); 316 + } 317 + 288 318 const struct dma_buf_ops amdgpu_dmabuf_ops = { 289 319 .attach = amdgpu_dma_buf_attach, 290 320 .pin = amdgpu_dma_buf_pin, ··· 324 294 .release = drm_gem_dmabuf_release, 325 295 .begin_cpu_access = amdgpu_dma_buf_begin_cpu_access, 326 296 .mmap = drm_gem_dmabuf_mmap, 327 - .vmap = drm_gem_dmabuf_vmap, 328 - .vunmap = drm_gem_dmabuf_vunmap, 297 + .vmap = amdgpu_dma_buf_vmap, 298 + .vunmap = amdgpu_dma_buf_vunmap, 329 299 }; 330 300 331 301 /**
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c
··· 471 471 if (index == (uint64_t)-EINVAL) { 472 472 drm_file_err(uq_mgr->file, "Failed to get doorbell for queue\n"); 473 473 kfree(queue); 474 + r = -EINVAL; 474 475 goto unlock; 475 476 } 476 477
+9 -5
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
··· 1612 1612 case IP_VERSION(11, 0, 2): 1613 1613 case IP_VERSION(11, 0, 3): 1614 1614 if (!adev->gfx.disable_uq && 1615 - adev->gfx.me_fw_version >= 2390 && 1616 - adev->gfx.pfp_fw_version >= 2530 && 1617 - adev->gfx.mec_fw_version >= 2600 && 1615 + adev->gfx.me_fw_version >= 2420 && 1616 + adev->gfx.pfp_fw_version >= 2580 && 1617 + adev->gfx.mec_fw_version >= 2650 && 1618 1618 adev->mes.fw_version[0] >= 120) { 1619 1619 adev->userq_funcs[AMDGPU_HW_IP_GFX] = &userq_mes_funcs; 1620 1620 adev->userq_funcs[AMDGPU_HW_IP_COMPUTE] = &userq_mes_funcs; ··· 4129 4129 #endif 4130 4130 if (prop->tmz_queue) 4131 4131 tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, TMZ_MATCH, 1); 4132 + if (!prop->kernel_queue) 4133 + tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, RB_NON_PRIV, 1); 4132 4134 mqd->cp_gfx_hqd_cntl = tmp; 4133 4135 4134 4136 /* set up cp_doorbell_control */ ··· 4283 4281 tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 1); 4284 4282 tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH, 4285 4283 prop->allow_tunneling); 4286 - tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1); 4287 - tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1); 4284 + if (prop->kernel_queue) { 4285 + tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1); 4286 + tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1); 4287 + } 4288 4288 if (prop->tmz_queue) 4289 4289 tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TMZ, 1); 4290 4290 mqd->cp_hqd_pq_control = tmp;
+6 -2
drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
··· 3026 3026 #endif 3027 3027 if (prop->tmz_queue) 3028 3028 tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, TMZ_MATCH, 1); 3029 + if (!prop->kernel_queue) 3030 + tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, RB_NON_PRIV, 1); 3029 3031 mqd->cp_gfx_hqd_cntl = tmp; 3030 3032 3031 3033 /* set up cp_doorbell_control */ ··· 3177 3175 (order_base_2(AMDGPU_GPU_PAGE_SIZE / 4) - 1)); 3178 3176 tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 1); 3179 3177 tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH, 0); 3180 - tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1); 3181 - tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1); 3178 + if (prop->kernel_queue) { 3179 + tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1); 3180 + tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1); 3181 + } 3182 3182 if (prop->tmz_queue) 3183 3183 tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TMZ, 1); 3184 3184 mqd->cp_hqd_pq_control = tmp;
+10 -8
drivers/gpu/drm/amd/pm/amdgpu_pm.c
··· 3458 3458 effective_mode &= ~S_IWUSR; 3459 3459 3460 3460 /* not implemented yet for APUs other than GC 10.3.1 (vangogh) and 9.4.3 */ 3461 - if (((adev->family == AMDGPU_FAMILY_SI) || 3462 - ((adev->flags & AMD_IS_APU) && (gc_ver != IP_VERSION(10, 3, 1)) && 3463 - (gc_ver != IP_VERSION(9, 4, 3) && gc_ver != IP_VERSION(9, 4, 4)))) && 3464 - (attr == &sensor_dev_attr_power1_cap_max.dev_attr.attr || 3465 - attr == &sensor_dev_attr_power1_cap_min.dev_attr.attr || 3466 - attr == &sensor_dev_attr_power1_cap.dev_attr.attr || 3467 - attr == &sensor_dev_attr_power1_cap_default.dev_attr.attr)) 3468 - return 0; 3461 + if (attr == &sensor_dev_attr_power1_cap_max.dev_attr.attr || 3462 + attr == &sensor_dev_attr_power1_cap_min.dev_attr.attr || 3463 + attr == &sensor_dev_attr_power1_cap.dev_attr.attr || 3464 + attr == &sensor_dev_attr_power1_cap_default.dev_attr.attr) { 3465 + if (adev->family == AMDGPU_FAMILY_SI || 3466 + ((adev->flags & AMD_IS_APU) && gc_ver != IP_VERSION(10, 3, 1) && 3467 + (gc_ver != IP_VERSION(9, 4, 3) && gc_ver != IP_VERSION(9, 4, 4))) || 3468 + (amdgpu_sriov_vf(adev) && gc_ver == IP_VERSION(11, 0, 3))) 3469 + return 0; 3470 + } 3469 3471 3470 3472 /* not implemented yet for APUs having < GC 9.3.0 (Renoir) */ 3471 3473 if (((adev->family == AMDGPU_FAMILY_SI) ||
+40 -40
drivers/gpu/drm/drm_gpuvm.c
··· 40 40 * mapping's backing &drm_gem_object buffers. 41 41 * 42 42 * &drm_gem_object buffers maintain a list of &drm_gpuva objects representing 43 - * all existent GPU VA mappings using this &drm_gem_object as backing buffer. 43 + * all existing GPU VA mappings using this &drm_gem_object as backing buffer. 44 44 * 45 45 * GPU VAs can be flagged as sparse, such that drivers may use GPU VAs to also 46 46 * keep track of sparse PTEs in order to support Vulkan 'Sparse Resources'. ··· 72 72 * but it can also be a 'dummy' object, which can be allocated with 73 73 * drm_gpuvm_resv_object_alloc(). 74 74 * 75 - * In order to connect a struct drm_gpuva its backing &drm_gem_object each 75 + * In order to connect a struct drm_gpuva to its backing &drm_gem_object each 76 76 * &drm_gem_object maintains a list of &drm_gpuvm_bo structures, and each 77 77 * &drm_gpuvm_bo contains a list of &drm_gpuva structures. 78 78 * ··· 81 81 * This is ensured by the API through drm_gpuvm_bo_obtain() and 82 82 * drm_gpuvm_bo_obtain_prealloc() which first look into the corresponding 83 83 * &drm_gem_object list of &drm_gpuvm_bos for an existing instance of this 84 - * particular combination. If not existent a new instance is created and linked 84 + * particular combination. If not present, a new instance is created and linked 85 85 * to the &drm_gem_object. 86 86 * 87 87 * &drm_gpuvm_bo structures, since unique for a given &drm_gpuvm, are also used ··· 108 108 * sequence of operations to satisfy a given map or unmap request. 109 109 * 110 110 * Therefore the DRM GPU VA manager provides an algorithm implementing splitting 111 - * and merging of existent GPU VA mappings with the ones that are requested to 111 + * and merging of existing GPU VA mappings with the ones that are requested to 112 112 * be mapped or unmapped. This feature is required by the Vulkan API to 113 113 * implement Vulkan 'Sparse Memory Bindings' - drivers UAPIs often refer to this 114 114 * as VM BIND. ··· 119 119 * execute in order to integrate the new mapping cleanly into the current state 120 120 * of the GPU VA space. 121 121 * 122 - * Depending on how the new GPU VA mapping intersects with the existent mappings 122 + * Depending on how the new GPU VA mapping intersects with the existing mappings 123 123 * of the GPU VA space the &drm_gpuvm_ops callbacks contain an arbitrary amount 124 124 * of unmap operations, a maximum of two remap operations and a single map 125 125 * operation. The caller might receive no callback at all if no operation is ··· 139 139 * one unmap operation and one or two map operations, such that drivers can 140 140 * derive the page table update delta accordingly. 141 141 * 142 - * Note that there can't be more than two existent mappings to split up, one at 142 + * Note that there can't be more than two existing mappings to split up, one at 143 143 * the beginning and one at the end of the new mapping, hence there is a 144 144 * maximum of two remap operations. 145 145 * 146 146 * Analogous to drm_gpuvm_sm_map() drm_gpuvm_sm_unmap() uses &drm_gpuvm_ops to 147 147 * call back into the driver in order to unmap a range of GPU VA space. The 148 - * logic behind this function is way simpler though: For all existent mappings 148 + * logic behind this function is way simpler though: For all existing mappings 149 149 * enclosed by the given range unmap operations are created. 
For mappings which 150 - * are only partically located within the given range, remap operations are 151 - * created such that those mappings are split up and re-mapped partically. 150 + * are only partially located within the given range, remap operations are 151 + * created such that those mappings are split up and re-mapped partially. 152 152 * 153 153 * As an alternative to drm_gpuvm_sm_map() and drm_gpuvm_sm_unmap(), 154 154 * drm_gpuvm_sm_map_ops_create() and drm_gpuvm_sm_unmap_ops_create() can be used ··· 168 168 * provided helper functions drm_gpuva_map(), drm_gpuva_remap() and 169 169 * drm_gpuva_unmap() instead. 170 170 * 171 - * The following diagram depicts the basic relationships of existent GPU VA 171 + * The following diagram depicts the basic relationships of existing GPU VA 172 172 * mappings, a newly requested mapping and the resulting mappings as implemented 173 173 * by drm_gpuvm_sm_map() - it doesn't cover any arbitrary combinations of these. 174 174 * ··· 218 218 * 219 219 * 220 220 * 4) Existent mapping is a left aligned subset of the requested one, hence 221 - * replace the existent one. 221 + * replace the existing one. 222 222 * 223 223 * :: 224 224 * ··· 236 236 * and/or non-contiguous BO offset. 237 237 * 238 238 * 239 - * 5) Requested mapping's range is a left aligned subset of the existent one, 239 + * 5) Requested mapping's range is a left aligned subset of the existing one, 240 240 * but backed by a different BO. Hence, map the requested mapping and split 241 - * the existent one adjusting its BO offset. 241 + * the existing one adjusting its BO offset. 242 242 * 243 243 * :: 244 244 * ··· 271 271 * new: |-----|-----| (a.bo_offset=n, a'.bo_offset=n+1) 272 272 * 273 273 * 274 - * 7) Requested mapping's range is a right aligned subset of the existent one, 274 + * 7) Requested mapping's range is a right aligned subset of the existing one, 275 275 * but backed by a different BO. Hence, map the requested mapping and split 276 - * the existent one, without adjusting the BO offset. 276 + * the existing one, without adjusting the BO offset. 277 277 * 278 278 * :: 279 279 * ··· 304 304 * 305 305 * 9) Existent mapping is overlapped at the end by the requested mapping backed 306 306 * by a different BO. Hence, map the requested mapping and split up the 307 - * existent one, without adjusting the BO offset. 307 + * existing one, without adjusting the BO offset. 308 308 * 309 309 * :: 310 310 * ··· 334 334 * new: |-----|-----------| (a'.bo_offset=n, a.bo_offset=n+1) 335 335 * 336 336 * 337 - * 11) Requested mapping's range is a centered subset of the existent one 337 + * 11) Requested mapping's range is a centered subset of the existing one 338 338 * having a different backing BO. Hence, map the requested mapping and split 339 - * up the existent one in two mappings, adjusting the BO offset of the right 339 + * up the existing one in two mappings, adjusting the BO offset of the right 340 340 * one accordingly. 341 341 * 342 342 * :: ··· 351 351 * new: |-----|-----|-----| (a.bo_offset=n,b.bo_offset=m,a'.bo_offset=n+2) 352 352 * 353 353 * 354 - * 12) Requested mapping is a contiguous subset of the existent one. Split it 354 + * 12) Requested mapping is a contiguous subset of the existing one. Split it 355 355 * up, but indicate that the backing PTEs could be kept. 356 356 * 357 357 * :: ··· 367 367 * 368 368 * 369 369 * 13) Existent mapping is a right aligned subset of the requested one, hence 370 - * replace the existent one. 370 + * replace the existing one. 
371 371 * 372 372 * :: 373 373 * ··· 386 386 * 387 387 * 388 388 * 14) Existent mapping is a centered subset of the requested one, hence 389 - * replace the existent one. 389 + * replace the existing one. 390 390 * 391 391 * :: 392 392 * ··· 406 406 * 407 407 * 15) Existent mappings is overlapped at the beginning by the requested mapping 408 408 * backed by a different BO. Hence, map the requested mapping and split up 409 - * the existent one, adjusting its BO offset accordingly. 409 + * the existing one, adjusting its BO offset accordingly. 410 410 * 411 411 * :: 412 412 * ··· 469 469 * make use of them. 470 470 * 471 471 * The below code is strictly limited to illustrate the generic usage pattern. 472 - * To maintain simplicitly, it doesn't make use of any abstractions for common 473 - * code, different (asyncronous) stages with fence signalling critical paths, 472 + * To maintain simplicity, it doesn't make use of any abstractions for common 473 + * code, different (asynchronous) stages with fence signalling critical paths, 474 474 * any other helpers or error handling in terms of freeing memory and dropping 475 475 * previously taken locks. 476 476 * ··· 479 479 * // Allocates a new &drm_gpuva. 480 480 * struct drm_gpuva * driver_gpuva_alloc(void); 481 481 * 482 - * // Typically drivers would embedd the &drm_gpuvm and &drm_gpuva 482 + * // Typically drivers would embed the &drm_gpuvm and &drm_gpuva 483 483 * // structure in individual driver structures and lock the dma-resv with 484 484 * // drm_exec or similar helpers. 485 485 * int driver_mapping_create(struct drm_gpuvm *gpuvm, ··· 582 582 * .sm_step_unmap = driver_gpuva_unmap, 583 583 * }; 584 584 * 585 - * // Typically drivers would embedd the &drm_gpuvm and &drm_gpuva 585 + * // Typically drivers would embed the &drm_gpuvm and &drm_gpuva 586 586 * // structure in individual driver structures and lock the dma-resv with 587 587 * // drm_exec or similar helpers. 588 588 * int driver_mapping_create(struct drm_gpuvm *gpuvm, ··· 680 680 * 681 681 * This helper is here to provide lockless list iteration. Lockless as in, the 682 682 * iterator releases the lock immediately after picking the first element from 683 - * the list, so list insertion deletion can happen concurrently. 683 + * the list, so list insertion and deletion can happen concurrently. 684 684 * 685 685 * Elements popped from the original list are kept in a local list, so removal 686 686 * and is_empty checks can still happen while we're iterating the list. ··· 1160 1160 } 1161 1161 1162 1162 /** 1163 - * drm_gpuvm_prepare_objects() - prepare all assoiciated BOs 1163 + * drm_gpuvm_prepare_objects() - prepare all associated BOs 1164 1164 * @gpuvm: the &drm_gpuvm 1165 1165 * @exec: the &drm_exec locking context 1166 1166 * @num_fences: the amount of &dma_fences to reserve ··· 1230 1230 EXPORT_SYMBOL_GPL(drm_gpuvm_prepare_range); 1231 1231 1232 1232 /** 1233 - * drm_gpuvm_exec_lock() - lock all dma-resv of all assoiciated BOs 1233 + * drm_gpuvm_exec_lock() - lock all dma-resv of all associated BOs 1234 1234 * @vm_exec: the &drm_gpuvm_exec wrapper 1235 1235 * 1236 1236 * Acquires all dma-resv locks of all &drm_gem_objects the given 1237 1237 * &drm_gpuvm contains mappings of. 
1238 1238 * 1239 - * Addionally, when calling this function with struct drm_gpuvm_exec::extra 1239 + * Additionally, when calling this function with struct drm_gpuvm_exec::extra 1240 1240 * being set the driver receives the given @fn callback to lock additional 1241 1241 * dma-resv in the context of the &drm_gpuvm_exec instance. Typically, drivers 1242 1242 * would call drm_exec_prepare_obj() from within this callback. ··· 1293 1293 } 1294 1294 1295 1295 /** 1296 - * drm_gpuvm_exec_lock_array() - lock all dma-resv of all assoiciated BOs 1296 + * drm_gpuvm_exec_lock_array() - lock all dma-resv of all associated BOs 1297 1297 * @vm_exec: the &drm_gpuvm_exec wrapper 1298 1298 * @objs: additional &drm_gem_objects to lock 1299 1299 * @num_objs: the number of additional &drm_gem_objects to lock ··· 1588 1588 EXPORT_SYMBOL_GPL(drm_gpuvm_bo_find); 1589 1589 1590 1590 /** 1591 - * drm_gpuvm_bo_obtain() - obtains and instance of the &drm_gpuvm_bo for the 1591 + * drm_gpuvm_bo_obtain() - obtains an instance of the &drm_gpuvm_bo for the 1592 1592 * given &drm_gpuvm and &drm_gem_object 1593 1593 * @gpuvm: The &drm_gpuvm the @obj is mapped in. 1594 1594 * @obj: The &drm_gem_object being mapped in the @gpuvm. ··· 1624 1624 EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain); 1625 1625 1626 1626 /** 1627 - * drm_gpuvm_bo_obtain_prealloc() - obtains and instance of the &drm_gpuvm_bo 1627 + * drm_gpuvm_bo_obtain_prealloc() - obtains an instance of the &drm_gpuvm_bo 1628 1628 * for the given &drm_gpuvm and &drm_gem_object 1629 1629 * @__vm_bo: A pre-allocated struct drm_gpuvm_bo. 1630 1630 * ··· 1688 1688 * @vm_bo: the &drm_gpuvm_bo to add or remove 1689 1689 * @evict: indicates whether the object is evicted 1690 1690 * 1691 - * Adds a &drm_gpuvm_bo to or removes it from the &drm_gpuvms evicted list. 1691 + * Adds a &drm_gpuvm_bo to or removes it from the &drm_gpuvm's evicted list. 1692 1692 */ 1693 1693 void 1694 1694 drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, bool evict) ··· 1790 1790 * drm_gpuva_remove() - remove a &drm_gpuva 1791 1791 * @va: the &drm_gpuva to remove 1792 1792 * 1793 - * This removes the given &va from the underlaying tree. 1793 + * This removes the given &va from the underlying tree. 1794 1794 * 1795 1795 * It is safe to use this function using the safe versions of iterating the GPU 1796 1796 * VA space, such as drm_gpuvm_for_each_va_safe() and ··· 2358 2358 * 2359 2359 * This function iterates the given range of the GPU VA space. It utilizes the 2360 2360 * &drm_gpuvm_ops to call back into the driver providing the operations to 2361 - * unmap and, if required, split existent mappings. 2361 + * unmap and, if required, split existing mappings. 2362 2362 * 2363 2363 * Drivers may use these callbacks to update the GPU VA space right away within 2364 2364 * the callback. In case the driver decides to copy and store the operations for ··· 2430 2430 * remapped, and locks+prepares (drm_exec_prepare_object()) objects that 2431 2431 * will be newly mapped. 2432 2432 * 2433 - * The expected usage is: 2433 + * The expected usage is:: 2434 2434 * 2435 2435 * .. code-block:: c 2436 2436 * ··· 2475 2475 * required without the earlier DRIVER_OP_MAP. This is safe because we've 2476 2476 * already locked the GEM object in the earlier DRIVER_OP_MAP step. 
2477 2477 * 2478 - * Returns: 0 on success or a negative error codec 2478 + * Returns: 0 on success or a negative error code 2479 2479 */ 2480 2480 int 2481 2481 drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm, ··· 2619 2619 * @req_offset: the offset within the &drm_gem_object 2620 2620 * 2621 2621 * This function creates a list of operations to perform splitting and merging 2622 - * of existent mapping(s) with the newly requested one. 2622 + * of existing mapping(s) with the newly requested one. 2623 2623 * 2624 2624 * The list can be iterated with &drm_gpuva_for_each_op and must be processed 2625 2625 * in the given order. It can contain map, unmap and remap operations, but it 2626 2626 * also can be empty if no operation is required, e.g. if the requested mapping 2627 - * already exists is the exact same way. 2627 + * already exists in the exact same way. 2628 2628 * 2629 2629 * There can be an arbitrary amount of unmap operations, a maximum of two remap 2630 2630 * operations and a single map operation. The latter one represents the original
+14 -7
drivers/gpu/drm/mediatek/mtk_drm_drv.c
··· 387 387 388 388 of_id = of_match_node(mtk_drm_of_ids, node); 389 389 if (!of_id) 390 - continue; 390 + goto next_put_node; 391 391 392 392 pdev = of_find_device_by_node(node); 393 393 if (!pdev) 394 - continue; 394 + goto next_put_node; 395 395 396 396 drm_dev = device_find_child(&pdev->dev, NULL, mtk_drm_match); 397 397 if (!drm_dev) 398 - continue; 398 + goto next_put_device_pdev_dev; 399 399 400 400 temp_drm_priv = dev_get_drvdata(drm_dev); 401 401 if (!temp_drm_priv) 402 - continue; 402 + goto next_put_device_drm_dev; 403 403 404 404 if (temp_drm_priv->data->main_len) 405 405 all_drm_priv[CRTC_MAIN] = temp_drm_priv; ··· 411 411 if (temp_drm_priv->mtk_drm_bound) 412 412 cnt++; 413 413 414 - if (cnt == MAX_CRTC) { 415 - of_node_put(node); 414 + next_put_device_drm_dev: 415 + put_device(drm_dev); 416 + 417 + next_put_device_pdev_dev: 418 + put_device(&pdev->dev); 419 + 420 + next_put_node: 421 + of_node_put(node); 422 + 423 + if (cnt == MAX_CRTC) 416 424 break; 417 - } 418 425 } 419 426 420 427 if (drm_priv->data->mmsys_dev_num == cnt) {
+6
drivers/gpu/drm/mediatek/mtk_dsi.c
··· 1002 1002 return PTR_ERR(dsi->next_bridge); 1003 1003 } 1004 1004 1005 + /* 1006 + * set flag to request the DSI host bridge be pre-enabled before device bridge 1007 + * in the chain, so the DSI host is ready when the device bridge is pre-enabled 1008 + */ 1009 + dsi->next_bridge->pre_enable_prev_first = true; 1010 + 1005 1011 drm_bridge_add(&dsi->bridge); 1006 1012 1007 1013 ret = component_add(host->dev, &mtk_dsi_component_ops);
+4 -4
drivers/gpu/drm/mediatek/mtk_hdmi.c
··· 182 182 183 183 static void mtk_hdmi_hw_vid_black(struct mtk_hdmi *hdmi, bool black) 184 184 { 185 - regmap_update_bits(hdmi->regs, VIDEO_SOURCE_SEL, 186 - VIDEO_CFG_4, black ? GEN_RGB : NORMAL_PATH); 185 + regmap_update_bits(hdmi->regs, VIDEO_CFG_4, 186 + VIDEO_SOURCE_SEL, black ? GEN_RGB : NORMAL_PATH); 187 187 } 188 188 189 189 static void mtk_hdmi_hw_make_reg_writable(struct mtk_hdmi *hdmi, bool enable) ··· 310 310 311 311 static void mtk_hdmi_hw_send_aud_packet(struct mtk_hdmi *hdmi, bool enable) 312 312 { 313 - regmap_update_bits(hdmi->regs, AUDIO_PACKET_OFF, 314 - GRL_SHIFT_R2, enable ? 0 : AUDIO_PACKET_OFF); 313 + regmap_update_bits(hdmi->regs, GRL_SHIFT_R2, 314 + AUDIO_PACKET_OFF, enable ? 0 : AUDIO_PACKET_OFF); 315 315 } 316 316 317 317 static void mtk_hdmi_hw_config_sys(struct mtk_hdmi *hdmi)
+2 -1
drivers/gpu/drm/mediatek/mtk_plane.c
··· 292 292 wmb(); /* Make sure the above parameter is set before update */ 293 293 mtk_plane_state->pending.dirty = true; 294 294 295 - mtk_crtc_plane_disable(old_state->crtc, plane); 295 + if (old_state && old_state->crtc) 296 + mtk_crtc_plane_disable(old_state->crtc, plane); 296 297 } 297 298 298 299 static void mtk_plane_atomic_update(struct drm_plane *plane,
+33 -14
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
··· 11 11 static const unsigned int *gen7_0_0_external_core_regs[] __always_unused; 12 12 static const unsigned int *gen7_2_0_external_core_regs[] __always_unused; 13 13 static const unsigned int *gen7_9_0_external_core_regs[] __always_unused; 14 - static struct gen7_sptp_cluster_registers gen7_9_0_sptp_clusters[] __always_unused; 14 + static const struct gen7_sptp_cluster_registers gen7_9_0_sptp_clusters[] __always_unused; 15 15 static const u32 gen7_9_0_cx_debugbus_blocks[] __always_unused; 16 16 17 17 #include "adreno_gen7_0_0_snapshot.h" ··· 174 174 static int debugbus_read(struct msm_gpu *gpu, u32 block, u32 offset, 175 175 u32 *data) 176 176 { 177 - u32 reg = A6XX_DBGC_CFG_DBGBUS_SEL_D_PING_INDEX(offset) | 178 - A6XX_DBGC_CFG_DBGBUS_SEL_D_PING_BLK_SEL(block); 177 + u32 reg; 178 + 179 + if (to_adreno_gpu(gpu)->info->family >= ADRENO_7XX_GEN1) { 180 + reg = A7XX_DBGC_CFG_DBGBUS_SEL_D_PING_INDEX(offset) | 181 + A7XX_DBGC_CFG_DBGBUS_SEL_D_PING_BLK_SEL(block); 182 + } else { 183 + reg = A6XX_DBGC_CFG_DBGBUS_SEL_D_PING_INDEX(offset) | 184 + A6XX_DBGC_CFG_DBGBUS_SEL_D_PING_BLK_SEL(block); 185 + } 179 186 180 187 gpu_write(gpu, REG_A6XX_DBGC_CFG_DBGBUS_SEL_A, reg); 181 188 gpu_write(gpu, REG_A6XX_DBGC_CFG_DBGBUS_SEL_B, reg); ··· 205 198 readl((ptr) + ((offset) << 2)) 206 199 207 200 /* read a value from the CX debug bus */ 208 - static int cx_debugbus_read(void __iomem *cxdbg, u32 block, u32 offset, 201 + static int cx_debugbus_read(struct msm_gpu *gpu, void __iomem *cxdbg, u32 block, u32 offset, 209 202 u32 *data) 210 203 { 211 - u32 reg = A6XX_CX_DBGC_CFG_DBGBUS_SEL_A_PING_INDEX(offset) | 212 - A6XX_CX_DBGC_CFG_DBGBUS_SEL_A_PING_BLK_SEL(block); 204 + u32 reg; 205 + 206 + if (to_adreno_gpu(gpu)->info->family >= ADRENO_7XX_GEN1) { 207 + reg = A7XX_CX_DBGC_CFG_DBGBUS_SEL_A_PING_INDEX(offset) | 208 + A7XX_CX_DBGC_CFG_DBGBUS_SEL_A_PING_BLK_SEL(block); 209 + } else { 210 + reg = A6XX_CX_DBGC_CFG_DBGBUS_SEL_A_PING_INDEX(offset) | 211 + A6XX_CX_DBGC_CFG_DBGBUS_SEL_A_PING_BLK_SEL(block); 212 + } 213 213 214 214 cxdbg_write(cxdbg, REG_A6XX_CX_DBGC_CFG_DBGBUS_SEL_A, reg); 215 215 cxdbg_write(cxdbg, REG_A6XX_CX_DBGC_CFG_DBGBUS_SEL_B, reg); ··· 329 315 ptr += debugbus_read(gpu, block->id, i, ptr); 330 316 } 331 317 332 - static void a6xx_get_cx_debugbus_block(void __iomem *cxdbg, 318 + static void a6xx_get_cx_debugbus_block(struct msm_gpu *gpu, 319 + void __iomem *cxdbg, 333 320 struct a6xx_gpu_state *a6xx_state, 334 321 const struct a6xx_debugbus_block *block, 335 322 struct a6xx_gpu_state_obj *obj) ··· 345 330 obj->handle = block; 346 331 347 332 for (ptr = obj->data, i = 0; i < block->count; i++) 348 - ptr += cx_debugbus_read(cxdbg, block->id, i, ptr); 333 + ptr += cx_debugbus_read(gpu, cxdbg, block->id, i, ptr); 349 334 } 350 335 351 336 static void a6xx_get_debugbus_blocks(struct msm_gpu *gpu, ··· 438 423 a6xx_state, &a7xx_debugbus_blocks[gbif_debugbus_blocks[i]], 439 424 &a6xx_state->debugbus[i + debugbus_blocks_count]); 440 425 } 441 - } 442 426 427 + a6xx_state->nr_debugbus = total_debugbus_blocks; 428 + } 443 429 } 444 430 445 431 static void a6xx_get_debugbus(struct msm_gpu *gpu, ··· 542 526 int i; 543 527 544 528 for (i = 0; i < nr_cx_debugbus_blocks; i++) 545 - a6xx_get_cx_debugbus_block(cxdbg, 529 + a6xx_get_cx_debugbus_block(gpu, 530 + cxdbg, 546 531 a6xx_state, 547 532 &cx_debugbus_blocks[i], 548 533 &a6xx_state->cx_debugbus[i]); ··· 776 759 size_t datasize; 777 760 int i, regcount = 0; 778 761 779 - /* Some clusters need a selector register to be programmed too */ 780 - if 
(cluster->sel) 781 - in += CRASHDUMP_WRITE(in, cluster->sel->cd_reg, cluster->sel->val); 782 - 783 762 in += CRASHDUMP_WRITE(in, REG_A7XX_CP_APERTURE_CNTL_CD, 784 763 A7XX_CP_APERTURE_CNTL_CD_PIPE(cluster->pipe_id) | 785 764 A7XX_CP_APERTURE_CNTL_CD_CLUSTER(cluster->cluster_id) | 786 765 A7XX_CP_APERTURE_CNTL_CD_CONTEXT(cluster->context_id)); 766 + 767 + /* Some clusters need a selector register to be programmed too */ 768 + if (cluster->sel) 769 + in += CRASHDUMP_WRITE(in, cluster->sel->cd_reg, cluster->sel->val); 787 770 788 771 for (i = 0; cluster->regs[i] != UINT_MAX; i += 2) { 789 772 int count = RANGE(cluster->regs, i); ··· 1813 1796 1814 1797 print_name(p, " - type: ", a7xx_statetype_names[block->statetype]); 1815 1798 print_name(p, " - pipe: ", a7xx_pipe_names[block->pipeid]); 1799 + drm_printf(p, " - location: %d\n", block->location); 1816 1800 1817 1801 for (i = 0; i < block->num_sps; i++) { 1818 1802 drm_printf(p, " - sp: %d\n", i); ··· 1891 1873 print_name(p, " - pipe: ", a7xx_pipe_names[dbgahb->pipe_id]); 1892 1874 print_name(p, " - cluster-name: ", a7xx_cluster_names[dbgahb->cluster_id]); 1893 1875 drm_printf(p, " - context: %d\n", dbgahb->context_id); 1876 + drm_printf(p, " - location: %d\n", dbgahb->location_id); 1894 1877 a7xx_show_registers_indented(dbgahb->regs, obj->data, p, 4); 1895 1878 } 1896 1879 }
+19 -19
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h
··· 419 419 REG_A6XX_CP_SQE_STAT_DATA, 0x33, NULL }, 420 420 { "CP_DRAW_STATE", REG_A6XX_CP_DRAW_STATE_ADDR, 421 421 REG_A6XX_CP_DRAW_STATE_DATA, 0x100, NULL }, 422 - { "CP_UCODE_DBG_DATA", REG_A6XX_CP_SQE_UCODE_DBG_ADDR, 422 + { "CP_SQE_UCODE_DBG", REG_A6XX_CP_SQE_UCODE_DBG_ADDR, 423 423 REG_A6XX_CP_SQE_UCODE_DBG_DATA, 0x8000, NULL }, 424 - { "CP_ROQ", REG_A6XX_CP_ROQ_DBG_ADDR, 424 + { "CP_ROQ_DBG", REG_A6XX_CP_ROQ_DBG_ADDR, 425 425 REG_A6XX_CP_ROQ_DBG_DATA, 0, a6xx_get_cp_roq_size}, 426 426 }; 427 427 428 428 static const struct a6xx_indexed_registers a7xx_indexed_reglist[] = { 429 429 { "CP_SQE_STAT", REG_A6XX_CP_SQE_STAT_ADDR, 430 - REG_A6XX_CP_SQE_STAT_DATA, 0x33, NULL }, 430 + REG_A6XX_CP_SQE_STAT_DATA, 0x40, NULL }, 431 431 { "CP_DRAW_STATE", REG_A6XX_CP_DRAW_STATE_ADDR, 432 432 REG_A6XX_CP_DRAW_STATE_DATA, 0x100, NULL }, 433 - { "CP_UCODE_DBG_DATA", REG_A6XX_CP_SQE_UCODE_DBG_ADDR, 433 + { "CP_SQE_UCODE_DBG", REG_A6XX_CP_SQE_UCODE_DBG_ADDR, 434 434 REG_A6XX_CP_SQE_UCODE_DBG_DATA, 0x8000, NULL }, 435 - { "CP_BV_SQE_STAT_ADDR", REG_A7XX_CP_BV_SQE_STAT_ADDR, 436 - REG_A7XX_CP_BV_SQE_STAT_DATA, 0x33, NULL }, 437 - { "CP_BV_DRAW_STATE_ADDR", REG_A7XX_CP_BV_DRAW_STATE_ADDR, 435 + { "CP_BV_SQE_STAT", REG_A7XX_CP_BV_SQE_STAT_ADDR, 436 + REG_A7XX_CP_BV_SQE_STAT_DATA, 0x40, NULL }, 437 + { "CP_BV_DRAW_STATE", REG_A7XX_CP_BV_DRAW_STATE_ADDR, 438 438 REG_A7XX_CP_BV_DRAW_STATE_DATA, 0x100, NULL }, 439 - { "CP_BV_SQE_UCODE_DBG_ADDR", REG_A7XX_CP_BV_SQE_UCODE_DBG_ADDR, 439 + { "CP_BV_SQE_UCODE_DBG", REG_A7XX_CP_BV_SQE_UCODE_DBG_ADDR, 440 440 REG_A7XX_CP_BV_SQE_UCODE_DBG_DATA, 0x8000, NULL }, 441 - { "CP_SQE_AC_STAT_ADDR", REG_A7XX_CP_SQE_AC_STAT_ADDR, 442 - REG_A7XX_CP_SQE_AC_STAT_DATA, 0x33, NULL }, 443 - { "CP_LPAC_DRAW_STATE_ADDR", REG_A7XX_CP_LPAC_DRAW_STATE_ADDR, 441 + { "CP_SQE_AC_STAT", REG_A7XX_CP_SQE_AC_STAT_ADDR, 442 + REG_A7XX_CP_SQE_AC_STAT_DATA, 0x40, NULL }, 443 + { "CP_LPAC_DRAW_STATE", REG_A7XX_CP_LPAC_DRAW_STATE_ADDR, 444 444 REG_A7XX_CP_LPAC_DRAW_STATE_DATA, 0x100, NULL }, 445 - { "CP_SQE_AC_UCODE_DBG_ADDR", REG_A7XX_CP_SQE_AC_UCODE_DBG_ADDR, 445 + { "CP_SQE_AC_UCODE_DBG", REG_A7XX_CP_SQE_AC_UCODE_DBG_ADDR, 446 446 REG_A7XX_CP_SQE_AC_UCODE_DBG_DATA, 0x8000, NULL }, 447 - { "CP_LPAC_FIFO_DBG_ADDR", REG_A7XX_CP_LPAC_FIFO_DBG_ADDR, 447 + { "CP_LPAC_FIFO_DBG", REG_A7XX_CP_LPAC_FIFO_DBG_ADDR, 448 448 REG_A7XX_CP_LPAC_FIFO_DBG_DATA, 0x40, NULL }, 449 - { "CP_ROQ", REG_A6XX_CP_ROQ_DBG_ADDR, 449 + { "CP_ROQ_DBG", REG_A6XX_CP_ROQ_DBG_ADDR, 450 450 REG_A6XX_CP_ROQ_DBG_DATA, 0, a7xx_get_cp_roq_size }, 451 451 }; 452 452 453 453 static const struct a6xx_indexed_registers a6xx_cp_mempool_indexed = { 454 - "CP_MEMPOOL", REG_A6XX_CP_MEM_POOL_DBG_ADDR, 454 + "CP_MEM_POOL_DBG", REG_A6XX_CP_MEM_POOL_DBG_ADDR, 455 455 REG_A6XX_CP_MEM_POOL_DBG_DATA, 0x2060, NULL, 456 456 }; 457 457 458 458 static const struct a6xx_indexed_registers a7xx_cp_bv_mempool_indexed[] = { 459 - { "CP_MEMPOOL", REG_A6XX_CP_MEM_POOL_DBG_ADDR, 460 - REG_A6XX_CP_MEM_POOL_DBG_DATA, 0x2100, NULL }, 461 - { "CP_BV_MEMPOOL", REG_A7XX_CP_BV_MEM_POOL_DBG_ADDR, 462 - REG_A7XX_CP_BV_MEM_POOL_DBG_DATA, 0x2100, NULL }, 459 + { "CP_MEM_POOL_DBG", REG_A6XX_CP_MEM_POOL_DBG_ADDR, 460 + REG_A6XX_CP_MEM_POOL_DBG_DATA, 0x2200, NULL }, 461 + { "CP_BV_MEM_POOL_DBG", REG_A7XX_CP_BV_MEM_POOL_DBG_ADDR, 462 + REG_A7XX_CP_BV_MEM_POOL_DBG_DATA, 0x2200, NULL }, 463 463 }; 464 464 465 465 #define DEBUGBUS(_id, _count) { .id = _id, .name = #_id, .count = _count }
+13 -6
drivers/gpu/drm/msm/adreno/adreno_gen7_0_0_snapshot.h
··· 81 81 A7XX_DBGBUS_USPTP_7, 82 82 }; 83 83 84 - static struct gen7_shader_block gen7_0_0_shader_blocks[] = { 84 + static const struct gen7_shader_block gen7_0_0_shader_blocks[] = { 85 85 {A7XX_TP0_TMO_DATA, 0x200, 4, 2, A7XX_PIPE_BR, A7XX_USPTP}, 86 86 {A7XX_TP0_SMO_DATA, 0x80, 4, 2, A7XX_PIPE_BR, A7XX_USPTP}, 87 87 {A7XX_TP0_MIPMAP_BASE_DATA, 0x3c0, 4, 2, A7XX_PIPE_BR, A7XX_USPTP}, ··· 668 668 }; 669 669 static_assert(IS_ALIGNED(sizeof(gen7_0_0_sp_noncontext_pipe_lpac_usptp_registers), 8)); 670 670 671 - /* Block: TPl1 Cluster: noncontext Pipeline: A7XX_PIPE_BR */ 672 - static const u32 gen7_0_0_tpl1_noncontext_pipe_br_registers[] = { 671 + /* Block: TPl1 Cluster: noncontext Pipeline: A7XX_PIPE_NONE */ 672 + static const u32 gen7_0_0_tpl1_noncontext_pipe_none_registers[] = { 673 673 0x0b600, 0x0b600, 0x0b602, 0x0b602, 0x0b604, 0x0b604, 0x0b608, 0x0b60c, 674 674 0x0b60f, 0x0b621, 0x0b630, 0x0b633, 675 675 UINT_MAX, UINT_MAX, 676 + }; 677 + static_assert(IS_ALIGNED(sizeof(gen7_0_0_tpl1_noncontext_pipe_none_registers), 8)); 678 + 679 + /* Block: TPl1 Cluster: noncontext Pipeline: A7XX_PIPE_BR */ 680 + static const u32 gen7_0_0_tpl1_noncontext_pipe_br_registers[] = { 681 + 0x0b600, 0x0b600, 682 + UINT_MAX, UINT_MAX, 676 683 }; 677 684 static_assert(IS_ALIGNED(sizeof(gen7_0_0_tpl1_noncontext_pipe_br_registers), 8)); 678 685 ··· 702 695 .val = 0x9, 703 696 }; 704 697 705 - static struct gen7_cluster_registers gen7_0_0_clusters[] = { 698 + static const struct gen7_cluster_registers gen7_0_0_clusters[] = { 706 699 { A7XX_CLUSTER_NONE, A7XX_PIPE_BR, STATE_NON_CONTEXT, 707 700 gen7_0_0_noncontext_pipe_br_registers, }, 708 701 { A7XX_CLUSTER_NONE, A7XX_PIPE_BV, STATE_NON_CONTEXT, ··· 771 764 gen7_0_0_vpc_cluster_vpc_ps_pipe_bv_registers, }, 772 765 }; 773 766 774 - static struct gen7_sptp_cluster_registers gen7_0_0_sptp_clusters[] = { 767 + static const struct gen7_sptp_cluster_registers gen7_0_0_sptp_clusters[] = { 775 768 { A7XX_CLUSTER_NONE, A7XX_SP_NCTX_REG, A7XX_PIPE_BR, 0, A7XX_HLSQ_STATE, 776 769 gen7_0_0_sp_noncontext_pipe_br_hlsq_state_registers, 0xae00 }, 777 770 { A7XX_CLUSTER_NONE, A7XX_SP_NCTX_REG, A7XX_PIPE_BR, 0, A7XX_SP_TOP, ··· 921 914 }; 922 915 static_assert(IS_ALIGNED(sizeof(gen7_0_0_dpm_registers), 8)); 923 916 924 - static struct gen7_reg_list gen7_0_0_reg_list[] = { 917 + static const struct gen7_reg_list gen7_0_0_reg_list[] = { 925 918 { gen7_0_0_gpu_registers, NULL }, 926 919 { gen7_0_0_cx_misc_registers, NULL }, 927 920 { gen7_0_0_dpm_registers, NULL },
+6 -4
drivers/gpu/drm/msm/adreno/adreno_gen7_2_0_snapshot.h
··· 95 95 A7XX_DBGBUS_CCHE_2, 96 96 }; 97 97 98 - static struct gen7_shader_block gen7_2_0_shader_blocks[] = { 98 + static const struct gen7_shader_block gen7_2_0_shader_blocks[] = { 99 99 {A7XX_TP0_TMO_DATA, 0x200, 6, 2, A7XX_PIPE_BR, A7XX_USPTP}, 100 100 {A7XX_TP0_SMO_DATA, 0x80, 6, 2, A7XX_PIPE_BR, A7XX_USPTP}, 101 101 {A7XX_TP0_MIPMAP_BASE_DATA, 0x3c0, 6, 2, A7XX_PIPE_BR, A7XX_USPTP}, ··· 489 489 .val = 0x9, 490 490 }; 491 491 492 - static struct gen7_cluster_registers gen7_2_0_clusters[] = { 492 + static const struct gen7_cluster_registers gen7_2_0_clusters[] = { 493 493 { A7XX_CLUSTER_NONE, A7XX_PIPE_BR, STATE_NON_CONTEXT, 494 494 gen7_2_0_noncontext_pipe_br_registers, }, 495 495 { A7XX_CLUSTER_NONE, A7XX_PIPE_BV, STATE_NON_CONTEXT, ··· 558 558 gen7_0_0_vpc_cluster_vpc_ps_pipe_bv_registers, }, 559 559 }; 560 560 561 - static struct gen7_sptp_cluster_registers gen7_2_0_sptp_clusters[] = { 561 + static const struct gen7_sptp_cluster_registers gen7_2_0_sptp_clusters[] = { 562 562 { A7XX_CLUSTER_NONE, A7XX_SP_NCTX_REG, A7XX_PIPE_BR, 0, A7XX_HLSQ_STATE, 563 563 gen7_0_0_sp_noncontext_pipe_br_hlsq_state_registers, 0xae00 }, 564 564 { A7XX_CLUSTER_NONE, A7XX_SP_NCTX_REG, A7XX_PIPE_BR, 0, A7XX_SP_TOP, ··· 573 573 gen7_0_0_sp_noncontext_pipe_lpac_usptp_registers, 0xaf80 }, 574 574 { A7XX_CLUSTER_NONE, A7XX_TP0_NCTX_REG, A7XX_PIPE_BR, 0, A7XX_USPTP, 575 575 gen7_0_0_tpl1_noncontext_pipe_br_registers, 0xb600 }, 576 + { A7XX_CLUSTER_NONE, A7XX_TP0_NCTX_REG, A7XX_PIPE_NONE, 0, A7XX_USPTP, 577 + gen7_0_0_tpl1_noncontext_pipe_none_registers, 0xb600 }, 576 578 { A7XX_CLUSTER_NONE, A7XX_TP0_NCTX_REG, A7XX_PIPE_LPAC, 0, A7XX_USPTP, 577 579 gen7_0_0_tpl1_noncontext_pipe_lpac_registers, 0xb780 }, 578 580 { A7XX_CLUSTER_SP_PS, A7XX_SP_CTX0_3D_CPS_REG, A7XX_PIPE_BR, 0, A7XX_HLSQ_STATE, ··· 739 737 }; 740 738 static_assert(IS_ALIGNED(sizeof(gen7_2_0_dpm_registers), 8)); 741 739 742 - static struct gen7_reg_list gen7_2_0_reg_list[] = { 740 + static const struct gen7_reg_list gen7_2_0_reg_list[] = { 743 741 { gen7_2_0_gpu_registers, NULL }, 744 742 { gen7_2_0_cx_misc_registers, NULL }, 745 743 { gen7_2_0_dpm_registers, NULL },
+17 -17
drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h
··· 117 117 A7XX_DBGBUS_GBIF_CX, 118 118 }; 119 119 120 - static struct gen7_shader_block gen7_9_0_shader_blocks[] = { 120 + static const struct gen7_shader_block gen7_9_0_shader_blocks[] = { 121 121 { A7XX_TP0_TMO_DATA, 0x0200, 6, 2, A7XX_PIPE_BR, A7XX_USPTP }, 122 122 { A7XX_TP0_SMO_DATA, 0x0080, 6, 2, A7XX_PIPE_BR, A7XX_USPTP }, 123 123 { A7XX_TP0_MIPMAP_BASE_DATA, 0x03C0, 6, 2, A7XX_PIPE_BR, A7XX_USPTP }, ··· 1116 1116 .val = 0x9, 1117 1117 }; 1118 1118 1119 - static struct gen7_cluster_registers gen7_9_0_clusters[] = { 1119 + static const struct gen7_cluster_registers gen7_9_0_clusters[] = { 1120 1120 { A7XX_CLUSTER_NONE, A7XX_PIPE_BR, STATE_NON_CONTEXT, 1121 1121 gen7_9_0_non_context_pipe_br_registers, }, 1122 1122 { A7XX_CLUSTER_NONE, A7XX_PIPE_BV, STATE_NON_CONTEXT, ··· 1185 1185 gen7_9_0_vpc_pipe_bv_cluster_vpc_ps_registers, }, 1186 1186 }; 1187 1187 1188 - static struct gen7_sptp_cluster_registers gen7_9_0_sptp_clusters[] = { 1188 + static const struct gen7_sptp_cluster_registers gen7_9_0_sptp_clusters[] = { 1189 1189 { A7XX_CLUSTER_NONE, A7XX_SP_NCTX_REG, A7XX_PIPE_BR, 0, A7XX_HLSQ_STATE, 1190 1190 gen7_9_0_non_context_sp_pipe_br_hlsq_state_registers, 0xae00}, 1191 1191 { A7XX_CLUSTER_NONE, A7XX_SP_NCTX_REG, A7XX_PIPE_BR, 0, A7XX_SP_TOP, ··· 1294 1294 gen7_9_0_tpl1_pipe_br_cluster_sp_ps_usptp_registers, 0xb000}, 1295 1295 }; 1296 1296 1297 - static struct a6xx_indexed_registers gen7_9_0_cp_indexed_reg_list[] = { 1297 + static const struct a6xx_indexed_registers gen7_9_0_cp_indexed_reg_list[] = { 1298 1298 { "CP_SQE_STAT", REG_A6XX_CP_SQE_STAT_ADDR, 1299 1299 REG_A6XX_CP_SQE_STAT_DATA, 0x00040}, 1300 1300 { "CP_DRAW_STATE", REG_A6XX_CP_DRAW_STATE_ADDR, 1301 1301 REG_A6XX_CP_DRAW_STATE_DATA, 0x00200}, 1302 - { "CP_ROQ", REG_A6XX_CP_ROQ_DBG_ADDR, 1302 + { "CP_ROQ_DBG", REG_A6XX_CP_ROQ_DBG_ADDR, 1303 1303 REG_A6XX_CP_ROQ_DBG_DATA, 0x00800}, 1304 - { "CP_UCODE_DBG_DATA", REG_A6XX_CP_SQE_UCODE_DBG_ADDR, 1304 + { "CP_SQE_UCODE_DBG", REG_A6XX_CP_SQE_UCODE_DBG_ADDR, 1305 1305 REG_A6XX_CP_SQE_UCODE_DBG_DATA, 0x08000}, 1306 - { "CP_BV_DRAW_STATE_ADDR", REG_A7XX_CP_BV_DRAW_STATE_ADDR, 1306 + { "CP_BV_DRAW_STATE", REG_A7XX_CP_BV_DRAW_STATE_ADDR, 1307 1307 REG_A7XX_CP_BV_DRAW_STATE_DATA, 0x00200}, 1308 - { "CP_BV_ROQ_DBG_ADDR", REG_A7XX_CP_BV_ROQ_DBG_ADDR, 1308 + { "CP_BV_ROQ_DBG", REG_A7XX_CP_BV_ROQ_DBG_ADDR, 1309 1309 REG_A7XX_CP_BV_ROQ_DBG_DATA, 0x00800}, 1310 - { "CP_BV_SQE_UCODE_DBG_ADDR", REG_A7XX_CP_BV_SQE_UCODE_DBG_ADDR, 1310 + { "CP_BV_SQE_UCODE_DBG", REG_A7XX_CP_BV_SQE_UCODE_DBG_ADDR, 1311 1311 REG_A7XX_CP_BV_SQE_UCODE_DBG_DATA, 0x08000}, 1312 - { "CP_BV_SQE_STAT_ADDR", REG_A7XX_CP_BV_SQE_STAT_ADDR, 1312 + { "CP_BV_SQE_STAT", REG_A7XX_CP_BV_SQE_STAT_ADDR, 1313 1313 REG_A7XX_CP_BV_SQE_STAT_DATA, 0x00040}, 1314 - { "CP_RESOURCE_TBL", REG_A7XX_CP_RESOURCE_TABLE_DBG_ADDR, 1314 + { "CP_RESOURCE_TABLE_DBG", REG_A7XX_CP_RESOURCE_TABLE_DBG_ADDR, 1315 1315 REG_A7XX_CP_RESOURCE_TABLE_DBG_DATA, 0x04100}, 1316 - { "CP_LPAC_DRAW_STATE_ADDR", REG_A7XX_CP_LPAC_DRAW_STATE_ADDR, 1316 + { "CP_LPAC_DRAW_STATE", REG_A7XX_CP_LPAC_DRAW_STATE_ADDR, 1317 1317 REG_A7XX_CP_LPAC_DRAW_STATE_DATA, 0x00200}, 1318 - { "CP_LPAC_ROQ", REG_A7XX_CP_LPAC_ROQ_DBG_ADDR, 1318 + { "CP_LPAC_ROQ_DBG", REG_A7XX_CP_LPAC_ROQ_DBG_ADDR, 1319 1319 REG_A7XX_CP_LPAC_ROQ_DBG_DATA, 0x00200}, 1320 - { "CP_SQE_AC_UCODE_DBG_ADDR", REG_A7XX_CP_SQE_AC_UCODE_DBG_ADDR, 1320 + { "CP_SQE_AC_UCODE_DBG", REG_A7XX_CP_SQE_AC_UCODE_DBG_ADDR, 1321 1321 REG_A7XX_CP_SQE_AC_UCODE_DBG_DATA, 0x08000}, 1322 - { "CP_SQE_AC_STAT_ADDR", 
REG_A7XX_CP_SQE_AC_STAT_ADDR, 1322 + { "CP_SQE_AC_STAT", REG_A7XX_CP_SQE_AC_STAT_ADDR, 1323 1323 REG_A7XX_CP_SQE_AC_STAT_DATA, 0x00040}, 1324 - { "CP_LPAC_FIFO_DBG_ADDR", REG_A7XX_CP_LPAC_FIFO_DBG_ADDR, 1324 + { "CP_LPAC_FIFO_DBG", REG_A7XX_CP_LPAC_FIFO_DBG_ADDR, 1325 1325 REG_A7XX_CP_LPAC_FIFO_DBG_DATA, 0x00040}, 1326 1326 { "CP_AQE_ROQ_0", REG_A7XX_CP_AQE_ROQ_DBG_ADDR_0, 1327 1327 REG_A7XX_CP_AQE_ROQ_DBG_DATA_0, 0x00100}, ··· 1337 1337 REG_A7XX_CP_AQE_STAT_DATA_1, 0x00040}, 1338 1338 }; 1339 1339 1340 - static struct gen7_reg_list gen7_9_0_reg_list[] = { 1340 + static const struct gen7_reg_list gen7_9_0_reg_list[] = { 1341 1341 { gen7_9_0_gpu_registers, NULL}, 1342 1342 { gen7_9_0_cx_misc_registers, NULL}, 1343 1343 { gen7_9_0_cx_dbgc_registers, NULL},
+1 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
··· 596 596 597 597 spin_lock_irqsave(&dev->event_lock, flags); 598 598 if (dpu_crtc->event) { 599 - DRM_DEBUG_VBL("%s: send event: %pK\n", dpu_crtc->name, 599 + DRM_DEBUG_VBL("%s: send event: %p\n", dpu_crtc->name, 600 600 dpu_crtc->event); 601 601 trace_dpu_crtc_complete_flip(DRMID(crtc)); 602 602 drm_crtc_send_vblank_event(crtc, dpu_crtc->event);
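Note on the %pK to %p switch above (the same substitution appears in dpu_hw_dspp.c, dpu_kms.c and msm_mdss.c below): plain %p output has been hashed by default for years, so the extra %pK/kptr_restrict special-casing adds nothing in these debug prints. A minimal sketch of the preferred form, with an illustrative function name:

#include <linux/printk.h>

static void demo_log_mapping(void *ptr)
{
        /* %p prints a hashed value by default, so no raw kernel address leaks. */
        pr_debug("mapped address space @%p\n", ptr);
}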
+2
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
··· 730 730 return false; 731 731 732 732 conn_state = drm_atomic_get_new_connector_state(state, connector); 733 + if (!conn_state) 734 + return false; 733 735 734 736 /** 735 737 * These checks are duplicated from dpu_encoder_update_topology() since
+2 -2
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.c
··· 31 31 u32 base; 32 32 33 33 if (!ctx) { 34 - DRM_ERROR("invalid ctx %pK\n", ctx); 34 + DRM_ERROR("invalid ctx %p\n", ctx); 35 35 return; 36 36 } 37 37 38 38 base = ctx->cap->sblk->pcc.base; 39 39 40 40 if (!base) { 41 - DRM_ERROR("invalid ctx %pK pcc base 0x%x\n", ctx, base); 41 + DRM_ERROR("invalid ctx %p pcc base 0x%x\n", ctx, base); 42 42 return; 43 43 } 44 44
+2 -2
drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
··· 1345 1345 dpu_kms->mmio = NULL; 1346 1346 return ret; 1347 1347 } 1348 - DRM_DEBUG("mapped dpu address space @%pK\n", dpu_kms->mmio); 1348 + DRM_DEBUG("mapped dpu address space @%p\n", dpu_kms->mmio); 1349 1349 1350 1350 dpu_kms->vbif[VBIF_RT] = msm_ioremap_mdss(mdss_dev, 1351 1351 dpu_kms->pdev, ··· 1380 1380 dpu_kms->mmio = NULL; 1381 1381 return ret; 1382 1382 } 1383 - DRM_DEBUG("mapped dpu address space @%pK\n", dpu_kms->mmio); 1383 + DRM_DEBUG("mapped dpu address space @%p\n", dpu_kms->mmio); 1384 1384 1385 1385 dpu_kms->vbif[VBIF_RT] = msm_ioremap(pdev, "vbif"); 1386 1386 if (IS_ERR(dpu_kms->vbif[VBIF_RT])) {
+2 -2
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
··· 1129 1129 struct drm_plane_state *old_plane_state = 1130 1130 drm_atomic_get_old_plane_state(state, plane); 1131 1131 struct dpu_plane_state *pstate = to_dpu_plane_state(plane_state); 1132 - struct drm_crtc_state *crtc_state; 1132 + struct drm_crtc_state *crtc_state = NULL; 1133 1133 int ret; 1134 1134 1135 1135 if (IS_ERR(plane_state)) ··· 1162 1162 if (!old_plane_state || !old_plane_state->fb || 1163 1163 old_plane_state->src_w != plane_state->src_w || 1164 1164 old_plane_state->src_h != plane_state->src_h || 1165 - old_plane_state->src_w != plane_state->src_w || 1165 + old_plane_state->crtc_w != plane_state->crtc_w || 1166 1166 old_plane_state->crtc_h != plane_state->crtc_h || 1167 1167 msm_framebuffer_format(old_plane_state->fb) != 1168 1168 msm_framebuffer_format(plane_state->fb))
+18 -41
drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
··· 5 5 6 6 #include <linux/clk-provider.h> 7 7 #include <linux/platform_device.h> 8 + #include <linux/pm_clock.h> 9 + #include <linux/pm_runtime.h> 8 10 #include <dt-bindings/phy/phy.h> 9 11 10 12 #include "dsi_phy.h" ··· 513 511 return 0; 514 512 } 515 513 516 - static int dsi_phy_enable_resource(struct msm_dsi_phy *phy) 517 - { 518 - struct device *dev = &phy->pdev->dev; 519 - int ret; 520 - 521 - ret = pm_runtime_resume_and_get(dev); 522 - if (ret) 523 - return ret; 524 - 525 - ret = clk_prepare_enable(phy->ahb_clk); 526 - if (ret) { 527 - DRM_DEV_ERROR(dev, "%s: can't enable ahb clk, %d\n", __func__, ret); 528 - pm_runtime_put_sync(dev); 529 - } 530 - 531 - return ret; 532 - } 533 - 534 - static void dsi_phy_disable_resource(struct msm_dsi_phy *phy) 535 - { 536 - clk_disable_unprepare(phy->ahb_clk); 537 - pm_runtime_put(&phy->pdev->dev); 538 - } 539 - 540 514 static const struct of_device_id dsi_phy_dt_match[] = { 541 515 #ifdef CONFIG_DRM_MSM_DSI_28NM_PHY 542 516 { .compatible = "qcom,dsi-phy-28nm-hpm", ··· 676 698 if (ret) 677 699 return ret; 678 700 679 - phy->ahb_clk = msm_clk_get(pdev, "iface"); 680 - if (IS_ERR(phy->ahb_clk)) 681 - return dev_err_probe(dev, PTR_ERR(phy->ahb_clk), 682 - "Unable to get ahb clk\n"); 701 + platform_set_drvdata(pdev, phy); 683 702 684 - ret = devm_pm_runtime_enable(&pdev->dev); 703 + ret = devm_pm_runtime_enable(dev); 685 704 if (ret) 686 705 return ret; 687 706 688 - /* PLL init will call into clk_register which requires 689 - * register access, so we need to enable power and ahb clock. 690 - */ 691 - ret = dsi_phy_enable_resource(phy); 707 + ret = devm_pm_clk_create(dev); 692 708 if (ret) 693 709 return ret; 710 + 711 + ret = pm_clk_add(dev, "iface"); 712 + if (ret < 0) 713 + return dev_err_probe(dev, ret, "Unable to get iface clk\n"); 694 714 695 715 if (phy->cfg->ops.pll_init) { 696 716 ret = phy->cfg->ops.pll_init(phy); ··· 703 727 return dev_err_probe(dev, ret, 704 728 "Failed to register clk provider\n"); 705 729 706 - dsi_phy_disable_resource(phy); 707 - 708 - platform_set_drvdata(pdev, phy); 709 - 710 730 return 0; 711 731 } 732 + 733 + static const struct dev_pm_ops dsi_phy_pm_ops = { 734 + SET_RUNTIME_PM_OPS(pm_clk_suspend, pm_clk_resume, NULL) 735 + }; 712 736 713 737 static struct platform_driver dsi_phy_platform_driver = { 714 738 .probe = dsi_phy_driver_probe, 715 739 .driver = { 716 740 .name = "msm_dsi_phy", 717 741 .of_match_table = dsi_phy_dt_match, 742 + .pm = &dsi_phy_pm_ops, 718 743 }, 719 744 }; 720 745 ··· 741 764 742 765 dev = &phy->pdev->dev; 743 766 744 - ret = dsi_phy_enable_resource(phy); 767 + ret = pm_runtime_resume_and_get(dev); 745 768 if (ret) { 746 - DRM_DEV_ERROR(dev, "%s: resource enable failed, %d\n", 769 + DRM_DEV_ERROR(dev, "%s: resume failed, %d\n", 747 770 __func__, ret); 748 771 goto res_en_fail; 749 772 } ··· 787 810 phy_en_fail: 788 811 regulator_bulk_disable(phy->cfg->num_regulators, phy->supplies); 789 812 reg_en_fail: 790 - dsi_phy_disable_resource(phy); 813 + pm_runtime_put(dev); 791 814 res_en_fail: 792 815 return ret; 793 816 } ··· 800 823 phy->cfg->ops.disable(phy); 801 824 802 825 regulator_bulk_disable(phy->cfg->num_regulators, phy->supplies); 803 - dsi_phy_disable_resource(phy); 826 + pm_runtime_put(&phy->pdev->dev); 804 827 } 805 828 806 829 void msm_dsi_phy_set_usecase(struct msm_dsi_phy *phy,
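The dsi_phy.c rework above drops the open-coded iface clock handling in favour of the pm_clk helpers, letting runtime PM gate the clock on suspend/resume. A minimal sketch of that pattern for a hypothetical platform driver (driver and device names are illustrative, not taken from the MSM code):

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm_clock.h>
#include <linux/pm_runtime.h>

static int demo_probe(struct platform_device *pdev)
{
        struct device *dev = &pdev->dev;
        int ret;

        ret = devm_pm_runtime_enable(dev);
        if (ret)
                return ret;

        /* Let the PM core manage the clock list for this device. */
        ret = devm_pm_clk_create(dev);
        if (ret)
                return ret;

        /* The clock is looked up by con-id and gated in runtime suspend/resume. */
        ret = pm_clk_add(dev, "iface");
        if (ret < 0)
                return dev_err_probe(dev, ret, "Unable to get iface clk\n");

        return 0;
}

static const struct dev_pm_ops demo_pm_ops = {
        SET_RUNTIME_PM_OPS(pm_clk_suspend, pm_clk_resume, NULL)
};

static struct platform_driver demo_driver = {
        .probe = demo_probe,
        .driver = {
                .name = "demo-pm-clk",
                .pm = &demo_pm_ops,
        },
};
module_platform_driver(demo_driver);
MODULE_LICENSE("GPL");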
-1
drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
··· 104 104 phys_addr_t lane_size; 105 105 int id; 106 106 107 - struct clk *ahb_clk; 108 107 struct regulator_bulk_data *supplies; 109 108 110 109 struct msm_dsi_dphy_timing timing;
+7 -4
drivers/gpu/drm/msm/msm_debugfs.c
··· 325 325 326 326 static int late_init_minor(struct drm_minor *minor) 327 327 { 328 - struct drm_device *dev = minor->dev; 329 - struct msm_drm_private *priv = dev->dev_private; 328 + struct drm_device *dev; 329 + struct msm_drm_private *priv; 330 330 int ret; 331 331 332 332 if (!minor) 333 333 return 0; 334 + 335 + dev = minor->dev; 336 + priv = dev->dev_private; 334 337 335 338 if (!priv->gpu_pdev) 336 339 return 0; 337 340 338 341 ret = msm_rd_debugfs_init(minor); 339 342 if (ret) { 340 - DRM_DEV_ERROR(minor->dev->dev, "could not install rd debugfs\n"); 343 + DRM_DEV_ERROR(dev->dev, "could not install rd debugfs\n"); 341 344 return ret; 342 345 } 343 346 344 347 ret = msm_perf_debugfs_init(minor); 345 348 if (ret) { 346 - DRM_DEV_ERROR(minor->dev->dev, "could not install perf debugfs\n"); 349 + DRM_DEV_ERROR(dev->dev, "could not install perf debugfs\n"); 347 350 return ret; 348 351 } 349 352
+11 -2
drivers/gpu/drm/msm/msm_gem.c
··· 95 95 void msm_gem_vma_put(struct drm_gem_object *obj) 96 96 { 97 97 struct msm_drm_private *priv = obj->dev->dev_private; 98 - struct drm_exec exec; 99 98 100 99 if (atomic_dec_return(&to_msm_bo(obj)->vma_ref)) 101 100 return; ··· 102 103 if (!priv->kms) 103 104 return; 104 105 106 + #ifdef CONFIG_DRM_MSM_KMS 107 + struct drm_exec exec; 108 + 105 109 msm_gem_lock_vm_and_obj(&exec, obj, priv->kms->vm); 106 110 put_iova_spaces(obj, priv->kms->vm, true, "vma_put"); 107 111 drm_exec_fini(&exec); /* drop locks */ 112 + #endif 108 113 } 109 114 110 115 /* ··· 666 663 667 664 static bool is_kms_vm(struct drm_gpuvm *vm) 668 665 { 666 + #ifdef CONFIG_DRM_MSM_KMS 669 667 struct msm_drm_private *priv = vm->drm->dev_private; 670 668 671 669 return priv->kms && (priv->kms->vm == vm); 670 + #else 671 + return false; 672 + #endif 672 673 } 673 674 674 675 /* ··· 1120 1113 put_pages(obj); 1121 1114 } 1122 1115 1123 - if (msm_obj->flags & MSM_BO_NO_SHARE) { 1116 + if (obj->resv != &obj->_resv) { 1124 1117 struct drm_gem_object *r_obj = 1125 1118 container_of(obj->resv, struct drm_gem_object, _resv); 1119 + 1120 + WARN_ON(!(msm_obj->flags & MSM_BO_NO_SHARE)); 1126 1121 1127 1122 /* Drop reference we hold to shared resv obj: */ 1128 1123 drm_gem_object_put(r_obj);
+1 -1
drivers/gpu/drm/msm/msm_gem.h
··· 100 100 * 101 101 * Only used for kernel managed VMs, unused for user managed VMs. 102 102 * 103 - * Protected by @mm_lock. 103 + * Protected by vm lock. See msm_gem_lock_vm_and_obj(), for ex. 104 104 */ 105 105 struct drm_mm mm; 106 106
+41 -35
drivers/gpu/drm/msm/msm_gem_submit.c
··· 271 271 return ret; 272 272 } 273 273 274 + static int submit_lock_objects_vmbind(struct msm_gem_submit *submit) 275 + { 276 + unsigned flags = DRM_EXEC_INTERRUPTIBLE_WAIT | DRM_EXEC_IGNORE_DUPLICATES; 277 + struct drm_exec *exec = &submit->exec; 278 + int ret = 0; 279 + 280 + drm_exec_init(&submit->exec, flags, submit->nr_bos); 281 + 282 + drm_exec_until_all_locked (&submit->exec) { 283 + ret = drm_gpuvm_prepare_vm(submit->vm, exec, 1); 284 + drm_exec_retry_on_contention(exec); 285 + if (ret) 286 + break; 287 + 288 + ret = drm_gpuvm_prepare_objects(submit->vm, exec, 1); 289 + drm_exec_retry_on_contention(exec); 290 + if (ret) 291 + break; 292 + } 293 + 294 + return ret; 295 + } 296 + 274 297 /* This is where we make sure all the bo's are reserved and pin'd: */ 275 298 static int submit_lock_objects(struct msm_gem_submit *submit) 276 299 { 277 300 unsigned flags = DRM_EXEC_INTERRUPTIBLE_WAIT; 278 - struct drm_exec *exec = &submit->exec; 279 - int ret; 301 + int ret = 0; 280 302 281 - if (msm_context_is_vmbind(submit->queue->ctx)) { 282 - flags |= DRM_EXEC_IGNORE_DUPLICATES; 283 - 284 - drm_exec_init(&submit->exec, flags, submit->nr_bos); 285 - 286 - drm_exec_until_all_locked (&submit->exec) { 287 - ret = drm_gpuvm_prepare_vm(submit->vm, exec, 1); 288 - drm_exec_retry_on_contention(exec); 289 - if (ret) 290 - return ret; 291 - 292 - ret = drm_gpuvm_prepare_objects(submit->vm, exec, 1); 293 - drm_exec_retry_on_contention(exec); 294 - if (ret) 295 - return ret; 296 - } 297 - 298 - return 0; 299 - } 303 + if (msm_context_is_vmbind(submit->queue->ctx)) 304 + return submit_lock_objects_vmbind(submit); 300 305 301 306 drm_exec_init(&submit->exec, flags, submit->nr_bos); 302 307 ··· 310 305 drm_gpuvm_resv_obj(submit->vm)); 311 306 drm_exec_retry_on_contention(&submit->exec); 312 307 if (ret) 313 - return ret; 308 + break; 314 309 for (unsigned i = 0; i < submit->nr_bos; i++) { 315 310 struct drm_gem_object *obj = submit->bos[i].obj; 316 311 ret = drm_exec_prepare_obj(&submit->exec, obj, 1); 317 312 drm_exec_retry_on_contention(&submit->exec); 318 313 if (ret) 319 - return ret; 314 + break; 320 315 } 321 316 } 322 317 323 - return 0; 318 + return ret; 324 319 } 325 320 326 321 static int submit_fence_sync(struct msm_gem_submit *submit) ··· 519 514 */ 520 515 static void submit_cleanup(struct msm_gem_submit *submit, bool error) 521 516 { 517 + if (error) 518 + submit_unpin_objects(submit); 519 + 522 520 if (submit->exec.objects) 523 521 drm_exec_fini(&submit->exec); 524 522 525 - if (error) { 526 - submit_unpin_objects(submit); 527 - /* job wasn't enqueued to scheduler, so early retirement: */ 523 + /* if job wasn't enqueued to scheduler, early retirement: */ 524 + if (error) 528 525 msm_submit_retire(submit); 529 - } 530 526 } 531 527 532 528 void msm_submit_retire(struct msm_gem_submit *submit) ··· 775 769 776 770 if (ret == 0 && args->flags & MSM_SUBMIT_FENCE_FD_OUT) { 777 771 sync_file = sync_file_create(submit->user_fence); 778 - if (!sync_file) { 772 + if (!sync_file) 779 773 ret = -ENOMEM; 780 - } else { 781 - fd_install(out_fence_fd, sync_file->file); 782 - args->fence_fd = out_fence_fd; 783 - } 784 774 } 785 775 786 776 if (ret) ··· 814 812 out_unlock: 815 813 mutex_unlock(&queue->lock); 816 814 out_post_unlock: 817 - if (ret && (out_fence_fd >= 0)) { 818 - put_unused_fd(out_fence_fd); 815 + if (ret) { 816 + if (out_fence_fd >= 0) 817 + put_unused_fd(out_fence_fd); 819 818 if (sync_file) 820 819 fput(sync_file->file); 820 + } else if (sync_file) { 821 + fd_install(out_fence_fd, 
sync_file->file); 822 + args->fence_fd = out_fence_fd; 821 823 } 822 824 823 825 if (!IS_ERR_OR_NULL(submit)) {
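In the msm_gem_submit.c hunk above (and again in msm_gem_vma.c below), fd_install() is deferred until every failure path has been ruled out, because an installed fd is immediately visible to userspace and cannot be taken back. A hedged sketch of the general shape; the helper name is made up for illustration:

#include <linux/dma-fence.h>
#include <linux/fcntl.h>
#include <linux/file.h>
#include <linux/sync_file.h>

/* Sketch: expose a fence as a sync_file fd, installing the fd only on success. */
static int expose_fence_fd(struct dma_fence *fence, int *out_fd)
{
        struct sync_file *sync_file;
        int fd, ret = 0;

        fd = get_unused_fd_flags(O_CLOEXEC);
        if (fd < 0)
                return fd;

        sync_file = sync_file_create(fence);
        if (!sync_file)
                ret = -ENOMEM;

        /* ... further setup that may still fail goes here ... */

        if (ret) {
                put_unused_fd(fd);               /* a reserved fd can still be returned */
                if (sync_file)
                        fput(sync_file->file);
        } else {
                fd_install(fd, sync_file->file); /* point of no return */
                *out_fd = fd;
        }

        return ret;
}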
+45 -15
drivers/gpu/drm/msm/msm_gem_vma.c
··· 319 319 mutex_lock(&vm->mmu_lock); 320 320 321 321 /* 322 - * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold 322 + * NOTE: if not using pgtable preallocation, we cannot hold 323 323 * a lock across map/unmap which is also used in the job_run() 324 324 * path, as this can cause deadlock in job_run() vs shrinker/ 325 325 * reclaim. 326 - * 327 - * Revisit this if we can come up with a scheme to pre-alloc pages 328 - * for the pgtable in map/unmap ops. 329 326 */ 330 327 ret = vm_map_op(vm, &(struct msm_vm_map_op){ 331 328 .iova = vma->va.addr, ··· 451 454 struct op_arg { 452 455 unsigned flags; 453 456 struct msm_vm_bind_job *job; 457 + const struct msm_vm_bind_op *op; 458 + bool kept; 454 459 }; 455 460 456 461 static void ··· 474 475 } 475 476 476 477 static int 477 - msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg) 478 + msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *_arg) 478 479 { 479 - struct msm_vm_bind_job *job = ((struct op_arg *)arg)->job; 480 + struct op_arg *arg = _arg; 481 + struct msm_vm_bind_job *job = arg->job; 480 482 struct drm_gem_object *obj = op->map.gem.obj; 481 483 struct drm_gpuva *vma; 482 484 struct sg_table *sgt; 483 485 unsigned prot; 486 + 487 + if (arg->kept) 488 + return 0; 484 489 485 490 vma = vma_from_op(arg, &op->map); 486 491 if (WARN_ON(IS_ERR(vma))) ··· 605 602 } 606 603 607 604 static int 608 - msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *arg) 605 + msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *_arg) 609 606 { 610 - struct msm_vm_bind_job *job = ((struct op_arg *)arg)->job; 607 + struct op_arg *arg = _arg; 608 + struct msm_vm_bind_job *job = arg->job; 611 609 struct drm_gpuva *vma = op->unmap.va; 612 610 struct msm_gem_vma *msm_vma = to_msm_vma(vma); 613 611 614 612 vm_dbg("%p:%p:%p: %016llx %016llx", vma->vm, vma, vma->gem.obj, 615 613 vma->va.addr, vma->va.range); 614 + 615 + /* 616 + * Detect in-place remap. Turnip does this to change the vma flags, 617 + * in particular MSM_VMA_DUMP. In this case we want to avoid actually 618 + * touching the page tables, as that would require synchronization 619 + * against SUBMIT jobs running on the GPU. 
620 + */ 621 + if (op->unmap.keep && 622 + (arg->op->op == MSM_VM_BIND_OP_MAP) && 623 + (vma->gem.obj == arg->op->obj) && 624 + (vma->gem.offset == arg->op->obj_offset) && 625 + (vma->va.addr == arg->op->iova) && 626 + (vma->va.range == arg->op->range)) { 627 + /* We are only expecting a single in-place unmap+map cb pair: */ 628 + WARN_ON(arg->kept); 629 + 630 + /* Leave the existing VMA in place, but signal that to the map cb: */ 631 + arg->kept = true; 632 + 633 + /* Only flags are changing, so update that in-place: */ 634 + unsigned orig_flags = vma->flags & (DRM_GPUVA_USERBITS - 1); 635 + vma->flags = orig_flags | arg->flags; 636 + 637 + return 0; 638 + } 616 639 617 640 if (!msm_vma->mapped) 618 641 goto out_close; ··· 1300 1271 const struct msm_vm_bind_op *op = &job->ops[i]; 1301 1272 struct op_arg arg = { 1302 1273 .job = job, 1274 + .op = op, 1303 1275 }; 1304 1276 1305 1277 switch (op->op) { ··· 1490 1460 1491 1461 if (args->flags & MSM_VM_BIND_FENCE_FD_OUT) { 1492 1462 sync_file = sync_file_create(job->fence); 1493 - if (!sync_file) { 1463 + if (!sync_file) 1494 1464 ret = -ENOMEM; 1495 - } else { 1496 - fd_install(out_fence_fd, sync_file->file); 1497 - args->fence_fd = out_fence_fd; 1498 - } 1499 1465 } 1500 1466 1501 1467 if (ret) ··· 1520 1494 out_unlock: 1521 1495 mutex_unlock(&queue->lock); 1522 1496 out_post_unlock: 1523 - if (ret && (out_fence_fd >= 0)) { 1524 - put_unused_fd(out_fence_fd); 1497 + if (ret) { 1498 + if (out_fence_fd >= 0) 1499 + put_unused_fd(out_fence_fd); 1525 1500 if (sync_file) 1526 1501 fput(sync_file->file); 1502 + } else if (sync_file) { 1503 + fd_install(out_fence_fd, sync_file->file); 1504 + args->fence_fd = out_fence_fd; 1527 1505 } 1528 1506 1529 1507 if (!IS_ERR_OR_NULL(job)) {
+16 -4
drivers/gpu/drm/msm/msm_gpu.c
··· 465 465 struct msm_gem_submit *submit; 466 466 struct msm_ringbuffer *cur_ring = gpu->funcs->active_ring(gpu); 467 467 char *comm = NULL, *cmd = NULL; 468 + struct task_struct *task; 468 469 int i; 469 470 470 471 mutex_lock(&gpu->lock); ··· 483 482 484 483 /* Increment the fault counts */ 485 484 submit->queue->faults++; 486 - if (submit->vm) { 485 + 486 + task = get_pid_task(submit->pid, PIDTYPE_PID); 487 + if (!task) 488 + gpu->global_faults++; 489 + else { 487 490 struct msm_gem_vm *vm = to_msm_vm(submit->vm); 488 491 489 492 vm->faults++; 490 493 491 494 /* 492 495 * If userspace has opted-in to VM_BIND (and therefore userspace 493 - * management of the VM), faults mark the VM as unusuable. This 496 + * management of the VM), faults mark the VM as unusable. This 494 497 * matches vulkan expectations (vulkan is the main target for 495 - * VM_BIND) 498 + * VM_BIND). 496 499 */ 497 500 if (!vm->managed) 498 501 msm_gem_vm_unusable(submit->vm); ··· 558 553 unsigned long flags; 559 554 560 555 spin_lock_irqsave(&ring->submit_lock, flags); 561 - list_for_each_entry(submit, &ring->submits, node) 556 + list_for_each_entry(submit, &ring->submits, node) { 557 + /* 558 + * If the submit uses an unusable vm make sure 559 + * we don't actually run it 560 + */ 561 + if (to_msm_vm(submit->vm)->unusable) 562 + submit->nr_cmds = 0; 562 563 gpu->funcs->submit(gpu, submit); 564 + } 563 565 spin_unlock_irqrestore(&ring->submit_lock, flags); 564 566 } 565 567 }
+12 -4
drivers/gpu/drm/msm/msm_iommu.c
··· 14 14 struct msm_iommu { 15 15 struct msm_mmu base; 16 16 struct iommu_domain *domain; 17 - atomic_t pagetables; 17 + 18 + struct mutex init_lock; /* protects pagetables counter and prr_page */ 19 + int pagetables; 18 20 struct page *prr_page; 19 21 20 22 struct kmem_cache *pt_cache; ··· 229 227 * If this is the last attached pagetable for the parent, 230 228 * disable TTBR0 in the arm-smmu driver 231 229 */ 232 - if (atomic_dec_return(&iommu->pagetables) == 0) { 230 + mutex_lock(&iommu->init_lock); 231 + if (--iommu->pagetables == 0) { 233 232 adreno_smmu->set_ttbr0_cfg(adreno_smmu->cookie, NULL); 234 233 235 234 if (adreno_smmu->set_prr_bit) { ··· 239 236 iommu->prr_page = NULL; 240 237 } 241 238 } 239 + mutex_unlock(&iommu->init_lock); 242 240 243 241 free_io_pgtable_ops(pagetable->pgtbl_ops); 244 242 kfree(pagetable); ··· 572 568 * If this is the first pagetable that we've allocated, send it back to 573 569 * the arm-smmu driver as a trigger to set up TTBR0 574 570 */ 575 - if (atomic_inc_return(&iommu->pagetables) == 1) { 571 + mutex_lock(&iommu->init_lock); 572 + if (iommu->pagetables++ == 0) { 576 573 ret = adreno_smmu->set_ttbr0_cfg(adreno_smmu->cookie, &ttbr0_cfg); 577 574 if (ret) { 575 + iommu->pagetables--; 576 + mutex_unlock(&iommu->init_lock); 578 577 free_io_pgtable_ops(pagetable->pgtbl_ops); 579 578 kfree(pagetable); 580 579 return ERR_PTR(ret); ··· 602 595 adreno_smmu->set_prr_bit(adreno_smmu->cookie, true); 603 596 } 604 597 } 598 + mutex_unlock(&iommu->init_lock); 605 599 606 600 /* Needed later for TLB flush */ 607 601 pagetable->parent = parent; ··· 738 730 iommu->domain = domain; 739 731 msm_mmu_init(&iommu->base, dev, &funcs, MSM_MMU_IOMMU); 740 732 741 - atomic_set(&iommu->pagetables, 0); 733 + mutex_init(&iommu->init_lock); 742 734 743 735 ret = iommu_attach_device(iommu->domain, dev); 744 736 if (ret) {
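The msm_iommu.c change above turns the atomic pagetable counter into a plain int under a mutex: the first-user and last-user transitions also set up and tear down prr_page, and those two steps have to happen as one unit, which a lone atomic_t cannot guarantee. A minimal sketch of the same first-user/last-user pattern with illustrative names:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/mutex.h>

struct shared_ctx {
        struct mutex lock;      /* protects users and page together */
        int users;
        struct page *page;
};

static int ctx_get(struct shared_ctx *ctx)
{
        int ret = 0;

        mutex_lock(&ctx->lock);
        if (ctx->users++ == 0) {
                ctx->page = alloc_page(GFP_KERNEL);
                if (!ctx->page) {
                        ctx->users--;   /* roll back the count on failure */
                        ret = -ENOMEM;
                }
        }
        mutex_unlock(&ctx->lock);

        return ret;
}

static void ctx_put(struct shared_ctx *ctx)
{
        mutex_lock(&ctx->lock);
        if (--ctx->users == 0) {
                __free_page(ctx->page);
                ctx->page = NULL;
        }
        mutex_unlock(&ctx->lock);
}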
+6 -4
drivers/gpu/drm/msm/msm_kms.c
··· 275 275 if (ret) 276 276 return ret; 277 277 278 + ret = msm_disp_snapshot_init(ddev); 279 + if (ret) { 280 + DRM_DEV_ERROR(dev, "msm_disp_snapshot_init failed ret = %d\n", ret); 281 + return ret; 282 + } 283 + 278 284 ret = priv->kms_init(ddev); 279 285 if (ret) { 280 286 DRM_DEV_ERROR(dev, "failed to load kms\n"); ··· 332 326 DRM_DEV_ERROR(dev, "failed to install IRQ handler\n"); 333 327 goto err_msm_uninit; 334 328 } 335 - 336 - ret = msm_disp_snapshot_init(ddev); 337 - if (ret) 338 - DRM_DEV_ERROR(dev, "msm_disp_snapshot_init failed ret = %d\n", ret); 339 329 340 330 drm_mode_config_reset(ddev); 341 331
+1 -1
drivers/gpu/drm/msm/msm_mdss.c
··· 423 423 if (IS_ERR(msm_mdss->mmio)) 424 424 return ERR_CAST(msm_mdss->mmio); 425 425 426 - dev_dbg(&pdev->dev, "mapped mdss address space @%pK\n", msm_mdss->mmio); 426 + dev_dbg(&pdev->dev, "mapped mdss address space @%p\n", msm_mdss->mmio); 427 427 428 428 ret = msm_mdss_parse_data_bus_icc_path(&pdev->dev, msm_mdss); 429 429 if (ret)
+13 -1
drivers/gpu/drm/msm/registers/adreno/a6xx.xml
··· 594 594 <reg32 offset="0x0600" name="DBGC_CFG_DBGBUS_SEL_A"/> 595 595 <reg32 offset="0x0601" name="DBGC_CFG_DBGBUS_SEL_B"/> 596 596 <reg32 offset="0x0602" name="DBGC_CFG_DBGBUS_SEL_C"/> 597 - <reg32 offset="0x0603" name="DBGC_CFG_DBGBUS_SEL_D"> 597 + <reg32 offset="0x0603" name="DBGC_CFG_DBGBUS_SEL_D" variants="A6XX"> 598 598 <bitfield high="7" low="0" name="PING_INDEX"/> 599 599 <bitfield high="15" low="8" name="PING_BLK_SEL"/> 600 + </reg32> 601 + <reg32 offset="0x0603" name="DBGC_CFG_DBGBUS_SEL_D" variants="A7XX-"> 602 + <bitfield high="7" low="0" name="PING_INDEX"/> 603 + <bitfield high="24" low="16" name="PING_BLK_SEL"/> 600 604 </reg32> 601 605 <reg32 offset="0x0604" name="DBGC_CFG_DBGBUS_CNTLT"> 602 606 <bitfield high="5" low="0" name="TRACEEN"/> ··· 3798 3794 3799 3795 <reg32 offset="0x002f" name="CFG_DBGBUS_TRACE_BUF1"/> 3800 3796 <reg32 offset="0x0030" name="CFG_DBGBUS_TRACE_BUF2"/> 3797 + </domain> 3798 + 3799 + <domain name="A7XX_CX_DBGC" width="32"> 3800 + <!-- Bitfields shifted, but otherwise the same: --> 3801 + <reg32 offset="0x0000" name="CFG_DBGBUS_SEL_A" variants="A7XX-"> 3802 + <bitfield high="7" low="0" name="PING_INDEX"/> 3803 + <bitfield high="24" low="16" name="PING_BLK_SEL"/> 3804 + </reg32> 3801 3805 </domain> 3802 3806 3803 3807 <domain name="A6XX_CX_MISC" width="32" prefix="variant" varset="chip">
+14 -14
drivers/gpu/drm/msm/registers/display/dsi.xml
··· 159 159 <bitfield name="RGB_SWAP" low="12" high="14" type="dsi_rgb_swap"/> 160 160 </reg32> 161 161 <reg32 offset="0x00020" name="ACTIVE_H"> 162 - <bitfield name="START" low="0" high="11" type="uint"/> 163 - <bitfield name="END" low="16" high="27" type="uint"/> 162 + <bitfield name="START" low="0" high="15" type="uint"/> 163 + <bitfield name="END" low="16" high="31" type="uint"/> 164 164 </reg32> 165 165 <reg32 offset="0x00024" name="ACTIVE_V"> 166 - <bitfield name="START" low="0" high="11" type="uint"/> 167 - <bitfield name="END" low="16" high="27" type="uint"/> 166 + <bitfield name="START" low="0" high="15" type="uint"/> 167 + <bitfield name="END" low="16" high="31" type="uint"/> 168 168 </reg32> 169 169 <reg32 offset="0x00028" name="TOTAL"> 170 - <bitfield name="H_TOTAL" low="0" high="11" type="uint"/> 171 - <bitfield name="V_TOTAL" low="16" high="27" type="uint"/> 170 + <bitfield name="H_TOTAL" low="0" high="15" type="uint"/> 171 + <bitfield name="V_TOTAL" low="16" high="31" type="uint"/> 172 172 </reg32> 173 173 <reg32 offset="0x0002c" name="ACTIVE_HSYNC"> 174 - <bitfield name="START" low="0" high="11" type="uint"/> 175 - <bitfield name="END" low="16" high="27" type="uint"/> 174 + <bitfield name="START" low="0" high="15" type="uint"/> 175 + <bitfield name="END" low="16" high="31" type="uint"/> 176 176 </reg32> 177 177 <reg32 offset="0x00030" name="ACTIVE_VSYNC_HPOS"> 178 - <bitfield name="START" low="0" high="11" type="uint"/> 179 - <bitfield name="END" low="16" high="27" type="uint"/> 178 + <bitfield name="START" low="0" high="15" type="uint"/> 179 + <bitfield name="END" low="16" high="31" type="uint"/> 180 180 </reg32> 181 181 <reg32 offset="0x00034" name="ACTIVE_VSYNC_VPOS"> 182 - <bitfield name="START" low="0" high="11" type="uint"/> 183 - <bitfield name="END" low="16" high="27" type="uint"/> 182 + <bitfield name="START" low="0" high="15" type="uint"/> 183 + <bitfield name="END" low="16" high="31" type="uint"/> 184 184 </reg32> 185 185 186 186 <reg32 offset="0x00038" name="CMD_DMA_CTRL"> ··· 209 209 <bitfield name="WORD_COUNT" low="16" high="31" type="uint"/> 210 210 </reg32> 211 211 <reg32 offset="0x00058" name="CMD_MDP_STREAM0_TOTAL"> 212 - <bitfield name="H_TOTAL" low="0" high="11" type="uint"/> 213 - <bitfield name="V_TOTAL" low="16" high="27" type="uint"/> 212 + <bitfield name="H_TOTAL" low="0" high="15" type="uint"/> 213 + <bitfield name="V_TOTAL" low="16" high="31" type="uint"/> 214 214 </reg32> 215 215 <reg32 offset="0x0005c" name="CMD_MDP_STREAM1_CTRL"> 216 216 <bitfield name="DATA_TYPE" low="0" high="5" type="uint"/>
+4
drivers/gpu/drm/nouveau/dispnv50/wndw.c
··· 795 795 struct nouveau_drm *drm = nouveau_drm(plane->dev); 796 796 uint8_t i; 797 797 798 + /* All chipsets can display all formats in linear layout */ 799 + if (modifier == DRM_FORMAT_MOD_LINEAR) 800 + return true; 801 + 798 802 if (drm->client.device.info.chipset < 0xc0) { 799 803 const struct drm_format_info *info = drm_format_info(format); 800 804 const uint8_t kind = (modifier >> 12) & 0xff;
+4 -11
drivers/gpu/drm/nouveau/nvkm/falcon/gm200.c
··· 103 103 static void 104 104 gm200_flcn_pio_imem_wr(struct nvkm_falcon *falcon, u8 port, const u8 *img, int len, u16 tag) 105 105 { 106 - nvkm_falcon_wr32(falcon, 0x188 + (port * 0x10), tag++); 106 + nvkm_falcon_wr32(falcon, 0x188 + (port * 0x10), tag); 107 107 while (len >= 4) { 108 108 nvkm_falcon_wr32(falcon, 0x184 + (port * 0x10), *(u32 *)img); 109 109 img += 4; ··· 249 249 gm200_flcn_fw_load(struct nvkm_falcon_fw *fw) 250 250 { 251 251 struct nvkm_falcon *falcon = fw->falcon; 252 - int target, ret; 252 + int ret; 253 253 254 254 if (fw->inst) { 255 + int target; 256 + 255 257 nvkm_falcon_mask(falcon, 0x048, 0x00000001, 0x00000001); 256 258 257 259 switch (nvkm_memory_target(fw->inst)) { ··· 287 285 } 288 286 289 287 if (fw->boot) { 290 - switch (nvkm_memory_target(&fw->fw.mem.memory)) { 291 - case NVKM_MEM_TARGET_VRAM: target = 4; break; 292 - case NVKM_MEM_TARGET_HOST: target = 5; break; 293 - case NVKM_MEM_TARGET_NCOH: target = 6; break; 294 - default: 295 - WARN_ON(1); 296 - return -EINVAL; 297 - } 298 - 299 288 ret = nvkm_falcon_pio_wr(falcon, fw->boot, 0, 0, 300 289 IMEM, falcon->code.limit - fw->boot_size, fw->boot_size, 301 290 fw->boot_addr >> 8, false);
+3 -2
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/fwsec.c
··· 209 209 fw->boot_addr = bld->start_tag << 8; 210 210 fw->boot_size = bld->code_size; 211 211 fw->boot = kmemdup(bl->data + hdr->data_offset + bld->code_off, fw->boot_size, GFP_KERNEL); 212 - if (!fw->boot) 213 - ret = -ENOMEM; 214 212 215 213 nvkm_firmware_put(bl); 214 + 215 + if (!fw->boot) 216 + return -ENOMEM; 216 217 217 218 /* Patch in interface data. */ 218 219 return nvkm_gsp_fwsec_patch(gsp, fw, desc->InterfaceOffset, init_cmd);
+1 -1
drivers/gpu/drm/tegra/gem.c
··· 526 526 if (drm_gem_is_imported(gem)) { 527 527 dma_buf_unmap_attachment_unlocked(gem->import_attach, bo->sgt, 528 528 DMA_TO_DEVICE); 529 - dma_buf_detach(gem->dma_buf, gem->import_attach); 529 + dma_buf_detach(gem->import_attach->dmabuf, gem->import_attach); 530 530 } 531 531 } 532 532
+4 -4
drivers/gpu/drm/xe/xe_bo.c
··· 812 812 } 813 813 814 814 if (ttm_bo->type == ttm_bo_type_sg) { 815 - ret = xe_bo_move_notify(bo, ctx); 815 + if (new_mem->mem_type == XE_PL_SYSTEM) 816 + ret = xe_bo_move_notify(bo, ctx); 816 817 if (!ret) 817 818 ret = xe_bo_move_dmabuf(ttm_bo, new_mem); 818 819 return ret; ··· 2439 2438 .no_wait_gpu = false, 2440 2439 .gfp_retry_mayfail = true, 2441 2440 }; 2442 - struct pin_cookie cookie; 2443 2441 int ret; 2444 2442 2445 2443 if (vm) { ··· 2449 2449 ctx.resv = xe_vm_resv(vm); 2450 2450 } 2451 2451 2452 - cookie = xe_vm_set_validating(vm, allow_res_evict); 2452 + xe_vm_set_validating(vm, allow_res_evict); 2453 2453 trace_xe_bo_validate(bo); 2454 2454 ret = ttm_bo_validate(&bo->ttm, &bo->placement, &ctx); 2455 - xe_vm_clear_validating(vm, allow_res_evict, cookie); 2455 + xe_vm_clear_validating(vm, allow_res_evict); 2456 2456 2457 2457 return ret; 2458 2458 }
+9 -1
drivers/gpu/drm/xe/xe_gen_wa_oob.c
··· 123 123 return 0; 124 124 } 125 125 126 + /* Avoid GNU vs POSIX basename() discrepancy, just use our own */ 127 + static const char *xbasename(const char *s) 128 + { 129 + const char *p = strrchr(s, '/'); 130 + 131 + return p ? p + 1 : s; 132 + } 133 + 126 134 static int fn_to_prefix(const char *fn, char *prefix, size_t size) 127 135 { 128 136 size_t len; 129 137 130 - fn = basename(fn); 138 + fn = xbasename(fn); 131 139 len = strlen(fn); 132 140 133 141 if (len > size - 1)
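The xe_gen_wa_oob.c helper above sidesteps the basename() portability trap in this host tool: the POSIX version from <libgen.h> may modify its argument, while the GNU variant from <string.h> does not, and which one a build picks up depends on the includes. A small userspace sketch of the same approach (the path is just an example):

#include <stdio.h>
#include <string.h>

/* Scan for the last '/' ourselves so the result does not depend on which
 * basename() variant the host libc would have supplied. */
static const char *xbasename(const char *s)
{
        const char *p = strrchr(s, '/');

        return p ? p + 1 : s;
}

int main(void)
{
        printf("%s\n", xbasename("/some/dir/file.rules"));   /* prints "file.rules" */
        return 0;
}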
+1 -1
drivers/gpu/drm/xe/xe_sync.c
··· 77 77 { 78 78 struct xe_user_fence *ufence = container_of(w, struct xe_user_fence, worker); 79 79 80 + WRITE_ONCE(ufence->signalled, 1); 80 81 if (mmget_not_zero(ufence->mm)) { 81 82 kthread_use_mm(ufence->mm); 82 83 if (copy_to_user(ufence->addr, &ufence->value, sizeof(ufence->value))) ··· 92 91 * Wake up waiters only after updating the ufence state, allowing the UMD 93 92 * to safely reuse the same ufence without encountering -EBUSY errors. 94 93 */ 95 - WRITE_ONCE(ufence->signalled, 1); 96 94 wake_up_all(&ufence->xe->ufence_wq); 97 95 user_fence_put(ufence); 98 96 }
+6 -2
drivers/gpu/drm/xe/xe_vm.c
··· 1610 1610 1611 1611 for (i = MAX_HUGEPTE_LEVEL; i < vm->pt_root[id]->level; i++) { 1612 1612 vm->scratch_pt[id][i] = xe_pt_create(vm, tile, i); 1613 - if (IS_ERR(vm->scratch_pt[id][i])) 1614 - return PTR_ERR(vm->scratch_pt[id][i]); 1613 + if (IS_ERR(vm->scratch_pt[id][i])) { 1614 + int err = PTR_ERR(vm->scratch_pt[id][i]); 1615 + 1616 + vm->scratch_pt[id][i] = NULL; 1617 + return err; 1618 + } 1615 1619 1616 1620 xe_pt_populate_empty(tile, vm, vm->scratch_pt[id][i]); 1617 1621 }
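The xe_vm.c fix above writes NULL back into the scratch page-table slot when creation fails, so teardown code walking the array never mistakes an ERR_PTR-encoded value for a live pointer. A minimal sketch of the pattern; object_create() and the structures are placeholders, not Xe API:

#include <linux/err.h>

struct object;
/* Placeholder for any constructor that returns an ERR_PTR() on failure. */
struct object *object_create(void);

struct slot_owner {
        struct object *slots[8];
};

static int fill_slot(struct slot_owner *o, int i)
{
        o->slots[i] = object_create();
        if (IS_ERR(o->slots[i])) {
                int err = PTR_ERR(o->slots[i]);

                /* Leave NULL, not an ERR_PTR, for later cleanup loops to test. */
                o->slots[i] = NULL;
                return err;
        }

        return 0;
}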
+2 -13
drivers/gpu/drm/xe/xe_vm.h
··· 315 315 * Register this task as currently making bos resident for the vm. Intended 316 316 * to avoid eviction by the same task of shared bos bound to the vm. 317 317 * Call with the vm's resv lock held. 318 - * 319 - * Return: A pin cookie that should be used for xe_vm_clear_validating(). 320 318 */ 321 - static inline struct pin_cookie xe_vm_set_validating(struct xe_vm *vm, 322 - bool allow_res_evict) 319 + static inline void xe_vm_set_validating(struct xe_vm *vm, bool allow_res_evict) 323 320 { 324 - struct pin_cookie cookie = {}; 325 - 326 321 if (vm && !allow_res_evict) { 327 322 xe_vm_assert_held(vm); 328 - cookie = lockdep_pin_lock(&xe_vm_resv(vm)->lock.base); 329 323 /* Pairs with READ_ONCE in xe_vm_is_validating() */ 330 324 WRITE_ONCE(vm->validating, current); 331 325 } 332 - 333 - return cookie; 334 326 } 335 327 336 328 /** ··· 330 338 * @vm: Pointer to the vm or NULL 331 339 * @allow_res_evict: Eviction from @vm was allowed. Must be set to the same 332 340 * value as for xe_vm_set_validation(). 333 - * @cookie: Cookie obtained from xe_vm_set_validating(). 334 341 * 335 342 * Register this task as currently making bos resident for the vm. Intended 336 343 * to avoid eviction by the same task of shared bos bound to the vm. 337 344 * Call with the vm's resv lock held. 338 345 */ 339 - static inline void xe_vm_clear_validating(struct xe_vm *vm, bool allow_res_evict, 340 - struct pin_cookie cookie) 346 + static inline void xe_vm_clear_validating(struct xe_vm *vm, bool allow_res_evict) 341 347 { 342 348 if (vm && !allow_res_evict) { 343 - lockdep_unpin_lock(&xe_vm_resv(vm)->lock.base, cookie); 344 349 /* Pairs with READ_ONCE in xe_vm_is_validating() */ 345 350 WRITE_ONCE(vm->validating, NULL); 346 351 }
+1 -1
drivers/hid/Kconfig
··· 1243 1243 1244 1244 U2F Zero supports custom commands for blinking the LED 1245 1245 and getting data from the internal hardware RNG. 1246 - The internal hardware can be used to feed the enthropy pool. 1246 + The internal hardware can be used to feed the entropy pool. 1247 1247 1248 1248 U2F Zero only supports blinking its LED, so this driver doesn't 1249 1249 allow setting the brightness to anything but 1, which will
+7 -1
drivers/hid/hid-asus.c
··· 1213 1213 return ret; 1214 1214 } 1215 1215 1216 - if (!drvdata->input) { 1216 + /* 1217 + * Check that input registration succeeded. Checking that 1218 + * HID_CLAIMED_INPUT is set prevents a UAF when all input devices 1219 + * were freed during registration due to no usages being mapped, 1220 + * leaving drvdata->input pointing to freed memory. 1221 + */ 1222 + if (!drvdata->input || !(hdev->claimed & HID_CLAIMED_INPUT)) { 1217 1223 hid_err(hdev, "Asus input not registered\n"); 1218 1224 ret = -ENOMEM; 1219 1225 goto err_stop_hw;
+2
drivers/hid/hid-elecom.c
··· 101 101 */ 102 102 mouse_button_fixup(hdev, rdesc, *rsize, 12, 30, 14, 20, 8); 103 103 break; 104 + case USB_DEVICE_ID_ELECOM_M_DT2DRBK: 104 105 case USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C: 105 106 /* 106 107 * Report descriptor format: ··· 124 123 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT4DRBK) }, 125 124 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1URBK) }, 126 125 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1DRBK) }, 126 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT2DRBK) }, 127 127 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK_010C) }, 128 128 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK_019B) }, 129 129 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D) },
+4
drivers/hid/hid-ids.h
··· 451 451 #define USB_DEVICE_ID_ELECOM_M_XT4DRBK 0x00fd 452 452 #define USB_DEVICE_ID_ELECOM_M_DT1URBK 0x00fe 453 453 #define USB_DEVICE_ID_ELECOM_M_DT1DRBK 0x00ff 454 + #define USB_DEVICE_ID_ELECOM_M_DT2DRBK 0x018d 454 455 #define USB_DEVICE_ID_ELECOM_M_HT1URBK_010C 0x010c 455 456 #define USB_DEVICE_ID_ELECOM_M_HT1URBK_019B 0x019b 456 457 #define USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D 0x010d ··· 835 834 #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019 0x6019 836 835 #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_602E 0x602e 837 836 #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6093 0x6093 837 + #define USB_DEVICE_ID_LENOVO_LEGION_GO_DUAL_DINPUT 0x6184 838 + #define USB_DEVICE_ID_LENOVO_LEGION_GO2_DUAL_DINPUT 0x61ed 838 839 839 840 #define USB_VENDOR_ID_LETSKETCH 0x6161 840 841 #define USB_DEVICE_ID_WP9620N 0x4d15 ··· 910 907 #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_2 0xc534 911 908 #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1 0xc539 912 909 #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1_1 0xc53f 910 + #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1_2 0xc543 913 911 #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_POWERPLAY 0xc53a 914 912 #define USB_DEVICE_ID_LOGITECH_BOLT_RECEIVER 0xc548 915 913 #define USB_DEVICE_ID_SPACETRAVELLER 0xc623
+5 -5
drivers/hid/hid-input-test.c
··· 7 7 8 8 #include <kunit/test.h> 9 9 10 - static void hid_test_input_set_battery_charge_status(struct kunit *test) 10 + static void hid_test_input_update_battery_charge_status(struct kunit *test) 11 11 { 12 12 struct hid_device *dev; 13 13 bool handled; ··· 15 15 dev = kunit_kzalloc(test, sizeof(*dev), GFP_KERNEL); 16 16 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev); 17 17 18 - handled = hidinput_set_battery_charge_status(dev, HID_DG_HEIGHT, 0); 18 + handled = hidinput_update_battery_charge_status(dev, HID_DG_HEIGHT, 0); 19 19 KUNIT_EXPECT_FALSE(test, handled); 20 20 KUNIT_EXPECT_EQ(test, dev->battery_charge_status, POWER_SUPPLY_STATUS_UNKNOWN); 21 21 22 - handled = hidinput_set_battery_charge_status(dev, HID_BAT_CHARGING, 0); 22 + handled = hidinput_update_battery_charge_status(dev, HID_BAT_CHARGING, 0); 23 23 KUNIT_EXPECT_TRUE(test, handled); 24 24 KUNIT_EXPECT_EQ(test, dev->battery_charge_status, POWER_SUPPLY_STATUS_DISCHARGING); 25 25 26 - handled = hidinput_set_battery_charge_status(dev, HID_BAT_CHARGING, 1); 26 + handled = hidinput_update_battery_charge_status(dev, HID_BAT_CHARGING, 1); 27 27 KUNIT_EXPECT_TRUE(test, handled); 28 28 KUNIT_EXPECT_EQ(test, dev->battery_charge_status, POWER_SUPPLY_STATUS_CHARGING); 29 29 } ··· 63 63 } 64 64 65 65 static struct kunit_case hid_input_tests[] = { 66 - KUNIT_CASE(hid_test_input_set_battery_charge_status), 66 + KUNIT_CASE(hid_test_input_update_battery_charge_status), 67 67 KUNIT_CASE(hid_test_input_get_battery_property), 68 68 { } 69 69 };
+24 -27
drivers/hid/hid-input.c
··· 595 595 dev->battery = NULL; 596 596 } 597 597 598 - static void hidinput_update_battery(struct hid_device *dev, int value) 598 + static bool hidinput_update_battery_charge_status(struct hid_device *dev, 599 + unsigned int usage, int value) 600 + { 601 + switch (usage) { 602 + case HID_BAT_CHARGING: 603 + dev->battery_charge_status = value ? 604 + POWER_SUPPLY_STATUS_CHARGING : 605 + POWER_SUPPLY_STATUS_DISCHARGING; 606 + return true; 607 + } 608 + 609 + return false; 610 + } 611 + 612 + static void hidinput_update_battery(struct hid_device *dev, unsigned int usage, 613 + int value) 599 614 { 600 615 int capacity; 601 616 602 617 if (!dev->battery) 603 618 return; 619 + 620 + if (hidinput_update_battery_charge_status(dev, usage, value)) { 621 + power_supply_changed(dev->battery); 622 + return; 623 + } 604 624 605 625 if (value == 0 || value < dev->battery_min || value > dev->battery_max) 606 626 return; ··· 637 617 power_supply_changed(dev->battery); 638 618 } 639 619 } 640 - 641 - static bool hidinput_set_battery_charge_status(struct hid_device *dev, 642 - unsigned int usage, int value) 643 - { 644 - switch (usage) { 645 - case HID_BAT_CHARGING: 646 - dev->battery_charge_status = value ? 647 - POWER_SUPPLY_STATUS_CHARGING : 648 - POWER_SUPPLY_STATUS_DISCHARGING; 649 - return true; 650 - } 651 - 652 - return false; 653 - } 654 620 #else /* !CONFIG_HID_BATTERY_STRENGTH */ 655 621 static int hidinput_setup_battery(struct hid_device *dev, unsigned report_type, 656 622 struct hid_field *field, bool is_percentage) ··· 648 642 { 649 643 } 650 644 651 - static void hidinput_update_battery(struct hid_device *dev, int value) 645 + static void hidinput_update_battery(struct hid_device *dev, unsigned int usage, 646 + int value) 652 647 { 653 - } 654 - 655 - static bool hidinput_set_battery_charge_status(struct hid_device *dev, 656 - unsigned int usage, int value) 657 - { 658 - return false; 659 648 } 660 649 #endif /* CONFIG_HID_BATTERY_STRENGTH */ 661 650 ··· 1516 1515 return; 1517 1516 1518 1517 if (usage->type == EV_PWR) { 1519 - bool handled = hidinput_set_battery_charge_status(hid, usage->hid, value); 1520 - 1521 - if (!handled) 1522 - hidinput_update_battery(hid, value); 1523 - 1518 + hidinput_update_battery(hid, usage->hid, value); 1524 1519 return; 1525 1520 } 1526 1521
+4
drivers/hid/hid-logitech-dj.c
··· 1983 1983 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 1984 1984 USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1_1), 1985 1985 .driver_data = recvr_type_gaming_hidpp}, 1986 + { /* Logitech lightspeed receiver (0xc543) */ 1987 + HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 1988 + USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1_2), 1989 + .driver_data = recvr_type_gaming_hidpp}, 1986 1990 1987 1991 { /* Logitech 27 MHz HID++ 1.0 receiver (0xc513) */ 1988 1992 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_MX3000_RECEIVER),
+2
drivers/hid/hid-logitech-hidpp.c
··· 4596 4596 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC094) }, 4597 4597 { /* Logitech G Pro X Superlight 2 Gaming Mouse over USB */ 4598 4598 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC09b) }, 4599 + { /* Logitech G PRO 2 LIGHTSPEED Wireless Mouse over USB */ 4600 + HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xc09a) }, 4599 4601 4600 4602 { /* G935 Gaming Headset */ 4601 4603 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0x0a87),
+4
drivers/hid/hid-mcp2221.c
··· 906 906 } 907 907 if (data[2] == MCP2221_I2C_READ_COMPL || 908 908 data[2] == MCP2221_I2C_READ_PARTIAL) { 909 + if (!mcp->rxbuf || mcp->rxbuf_idx < 0 || data[3] > 60) { 910 + mcp->status = -EINVAL; 911 + break; 912 + } 909 913 buf = mcp->rxbuf; 910 914 memcpy(&buf[mcp->rxbuf_idx], &data[4], data[3]); 911 915 mcp->rxbuf_idx = mcp->rxbuf_idx + data[3];
+8
drivers/hid/hid-multitouch.c
··· 1503 1503 if (hdev->vendor == I2C_VENDOR_ID_GOODIX && 1504 1504 (hdev->product == I2C_DEVICE_ID_GOODIX_01E8 || 1505 1505 hdev->product == I2C_DEVICE_ID_GOODIX_01E9)) { 1506 + if (*size < 608) { 1507 + dev_info( 1508 + &hdev->dev, 1509 + "GT7868Q fixup: report descriptor is only %u bytes, skipping\n", 1510 + *size); 1511 + return rdesc; 1512 + } 1513 + 1506 1514 if (rdesc[607] == 0x15) { 1507 1515 rdesc[607] = 0x25; 1508 1516 dev_info(
+3
drivers/hid/hid-ntrig.c
··· 144 144 struct usb_device *usb_dev = hid_to_usb_dev(hdev); 145 145 unsigned char *data = kmalloc(8, GFP_KERNEL); 146 146 147 + if (!hid_is_usb(hdev)) 148 + return; 149 + 147 150 if (!data) 148 151 goto err_free; 149 152
+3
drivers/hid/hid-quirks.c
··· 124 124 { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_MOUSEPEN_I608X_V2), HID_QUIRK_MULTI_INPUT }, 125 125 { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_PENSKETCH_T609A), HID_QUIRK_MULTI_INPUT }, 126 126 { HID_USB_DEVICE(USB_VENDOR_ID_LABTEC, USB_DEVICE_ID_LABTEC_ODDOR_HANDBRAKE), HID_QUIRK_ALWAYS_POLL }, 127 + { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_LEGION_GO_DUAL_DINPUT), HID_QUIRK_MULTI_INPUT }, 128 + { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_LEGION_GO2_DUAL_DINPUT), HID_QUIRK_MULTI_INPUT }, 127 129 { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_OPTICAL_USB_MOUSE_600E), HID_QUIRK_ALWAYS_POLL }, 128 130 { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D), HID_QUIRK_ALWAYS_POLL }, 129 131 { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019), HID_QUIRK_ALWAYS_POLL }, ··· 413 411 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT4DRBK) }, 414 412 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1URBK) }, 415 413 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1DRBK) }, 414 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT2DRBK) }, 416 415 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK_010C) }, 417 416 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK_019B) }, 418 417 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D) },
-3
drivers/hid/intel-ish-hid/ipc/pci-ish.c
··· 264 264 265 265 static struct device __maybe_unused *ish_resume_device; 266 266 267 - /* 50ms to get resume response */ 268 - #define WAIT_FOR_RESUME_ACK_MS 50 269 - 270 267 /** 271 268 * ish_resume_handler() - Work function to complete resume 272 269 * @work: work struct
+3
drivers/hid/intel-ish-hid/ishtp-hid-client.c
··· 759 759 if (ishtp_wait_resume(ishtp_get_ishtp_device(hid_ishtp_cl))) { 760 760 client_data->suspended = false; 761 761 wake_up_interruptible(&client_data->ishtp_resume_wait); 762 + } else { 763 + hid_ishtp_trace(client_data, "hid client: wait for resume timed out"); 764 + dev_err(cl_data_to_dev(client_data), "wait for resume timed out"); 762 765 } 763 766 } 764 767
-3
drivers/hid/intel-ish-hid/ishtp/bus.c
··· 852 852 */ 853 853 bool ishtp_wait_resume(struct ishtp_device *dev) 854 854 { 855 - /* 50ms to get resume response */ 856 - #define WAIT_FOR_RESUME_ACK_MS 50 857 - 858 855 /* Waiting to get resume response */ 859 856 if (dev->resume_flag) 860 857 wait_event_interruptible_timeout(dev->resume_wait,
+3
drivers/hid/intel-ish-hid/ishtp/ishtp-dev.h
··· 47 47 48 48 #define MAX_DMA_DELAY 20 49 49 50 + /* 300ms to get resume response */ 51 + #define WAIT_FOR_RESUME_ACK_MS 300 52 + 50 53 /* ISHTP device states */ 51 54 enum ishtp_dev_state { 52 55 ISHTP_DEV_INITIALIZING = 0,
+1
drivers/hid/intel-thc-hid/intel-quicki2c/pci-quicki2c.c
··· 419 419 */ 420 420 static void quicki2c_dev_deinit(struct quicki2c_device *qcdev) 421 421 { 422 + thc_interrupt_quiesce(qcdev->thc_hw, true); 422 423 thc_interrupt_enable(qcdev->thc_hw, false); 423 424 thc_ltr_unconfig(qcdev->thc_hw); 424 425 thc_wot_unconfig(qcdev->thc_hw);
+2
drivers/hid/intel-thc-hid/intel-quicki2c/quicki2c-dev.h
··· 77 77 u16 device_address; 78 78 u64 connection_speed; 79 79 u8 addressing_mode; 80 + u8 reserved; 80 81 } __packed; 81 82 82 83 /** ··· 127 126 u64 HMTD; 128 127 u64 HMRD; 129 128 u64 HMSL; 129 + u8 reserved; 130 130 }; 131 131 132 132 /**
+2 -2
drivers/hid/intel-thc-hid/intel-thc/intel-thc-dev.c
··· 1540 1540 1541 1541 for (int i = 0; i < ARRAY_SIZE(i2c_subip_regs); i++) { 1542 1542 ret = thc_i2c_subip_pio_read(dev, i2c_subip_regs[i], 1543 - &read_size, (u32 *)&dev->i2c_subip_regs + i); 1543 + &read_size, &dev->i2c_subip_regs[i]); 1544 1544 if (ret < 0) 1545 1545 return ret; 1546 1546 } ··· 1563 1563 1564 1564 for (int i = 0; i < ARRAY_SIZE(i2c_subip_regs); i++) { 1565 1565 ret = thc_i2c_subip_pio_write(dev, i2c_subip_regs[i], 1566 - write_size, (u32 *)&dev->i2c_subip_regs + i); 1566 + write_size, &dev->i2c_subip_regs[i]); 1567 1567 if (ret < 0) 1568 1568 return ret; 1569 1569 }
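The two intel-thc hunks above replace (u32 *)&dev->i2c_subip_regs + i with &dev->i2c_subip_regs[i]. Casting before adding makes the pointer step in 4-byte units regardless of the element type, so the two forms only agree if the field really is a flat array of u32; the hunk does not show the declaration, so the standalone sketch below only illustrates the general hazard the change avoids (all names are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t regs[4] = { 0 };

        /* correct: array indexing steps by the element size (8 bytes here) */
        uint64_t *ok = &regs[2];

        /* hazard: casting first makes the +2 step in 4-byte units, so the
         * result points into the middle of regs[1] rather than at regs[2] */
        uint64_t *bad = (uint64_t *)((uint32_t *)&regs + 2);

        printf("&regs[2] is %td bytes past regs, the cast form is %td bytes past regs\n",
               (char *)ok - (char *)regs, (char *)bad - (char *)regs);
        return 0;
    }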
+1
drivers/hid/wacom_wac.c
··· 684 684 case 0x885: /* Intuos3 Marker Pen */ 685 685 case 0x804: /* Intuos4/5 13HD/24HD Marker Pen */ 686 686 case 0x10804: /* Intuos4/5 13HD/24HD Art Pen */ 687 + case 0x204: /* Art Pen 2 */ 687 688 is_art_pen = true; 688 689 break; 689 690 }
+1 -1
drivers/irqchip/irq-atmel-aic.c
··· 188 188 189 189 gc = dgc->gc[idx]; 190 190 191 - guard(raw_spinlock_irq)(&gc->lock); 191 + guard(raw_spinlock_irqsave)(&gc->lock); 192 192 smr = irq_reg_readl(gc, AT91_AIC_SMR(*out_hwirq)); 193 193 aic_common_set_priority(intspec[2], &smr); 194 194 irq_reg_writel(gc, smr, AT91_AIC_SMR(*out_hwirq));
+1 -1
drivers/irqchip/irq-atmel-aic5.c
··· 279 279 if (ret) 280 280 return ret; 281 281 282 - guard(raw_spinlock_irq)(&bgc->lock); 282 + guard(raw_spinlock_irqsave)(&bgc->lock); 283 283 irq_reg_writel(bgc, *out_hwirq, AT91_AIC5_SSR); 284 284 smr = irq_reg_readl(bgc, AT91_AIC5_SMR); 285 285 aic_common_set_priority(intspec[2], &smr);
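Both atmel-aic hunks switch the scoped lock guard from the plain irq-disabling variant to the irqsave variant, so the previous interrupt state is restored on exit instead of interrupts being unconditionally re-enabled. A kernel-context sketch of the two forms (assumes <linux/cleanup.h> and <linux/spinlock.h>; not compilable outside the kernel):

    static void touch_regs(struct irq_chip_generic *gc)
    {
        /* saves the current IRQ state, takes the lock, and restores both
         * automatically when the function returns */
        guard(raw_spinlock_irqsave)(&gc->lock);
        /* ... register read-modify-write under the lock ... */
    }

    /* open-coded equivalent of the guard above */
    static void touch_regs_open_coded(struct irq_chip_generic *gc)
    {
        unsigned long flags;

        raw_spin_lock_irqsave(&gc->lock, flags);
        /* ... */
        raw_spin_unlock_irqrestore(&gc->lock, flags);
    }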
+7 -2
drivers/irqchip/irq-gic-v5-irs.c
··· 5 5 6 6 #define pr_fmt(fmt) "GICv5 IRS: " fmt 7 7 8 + #include <linux/kmemleak.h> 8 9 #include <linux/log2.h> 9 10 #include <linux/of.h> 10 11 #include <linux/of_address.h> ··· 118 117 kfree(ist); 119 118 return ret; 120 119 } 120 + kmemleak_ignore(ist); 121 121 122 122 return 0; 123 123 } ··· 234 232 kfree(l2ist); 235 233 return ret; 236 234 } 235 + kmemleak_ignore(l2ist); 237 236 238 237 /* 239 238 * Make sure we invalidate the cache line pulled before the IRS ··· 626 623 int cpu; 627 624 628 625 cpu_node = of_parse_phandle(node, "cpus", i); 629 - if (WARN_ON(!cpu_node)) 626 + if (!cpu_node) { 627 + pr_warn(FW_BUG "Erroneous CPU node phandle\n"); 630 628 continue; 629 + } 631 630 632 631 cpu = of_cpu_node_to_id(cpu_node); 633 632 of_node_put(cpu_node); 634 - if (WARN_ON(cpu < 0)) 633 + if (cpu < 0) 635 634 continue; 636 635 637 636 if (iaffids[i] & ~iaffid_mask) {
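The GICv5 IRS hunk above does two things: it marks the newly allocated translation tables with kmemleak_ignore(), presumably because the only remaining reference after setup is the physical address handed to the IRS, which the leak scanner cannot see; and it downgrades bad firmware-provided CPU phandles from WARN_ON() to an FW_BUG-tagged pr_warn(), treating them as firmware bugs rather than kernel ones. A hedged kernel-context sketch of the kmemleak pattern (the register name and the write itself are hypothetical):

    /* assumes <linux/slab.h>, <linux/kmemleak.h>, <linux/io.h> */
    static int alloc_hw_table(void __iomem *base, size_t sz)
    {
        void *tbl = kzalloc(sz, GFP_KERNEL);

        if (!tbl)
            return -ENOMEM;

        /* hand the table to the device; the kernel keeps no scannable pointer
         * afterwards, so tell kmemleak not to report it as leaked */
        writeq_relaxed(virt_to_phys(tbl), base + HYPOTHETICAL_TABLE_BASE_REG);
        kmemleak_ignore(tbl);

        return 0;
    }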
+1 -1
drivers/irqchip/irq-mvebu-gicp.c
··· 238 238 } 239 239 240 240 base = ioremap(gicp->res->start, resource_size(gicp->res)); 241 - if (IS_ERR(base)) { 241 + if (!base) { 242 242 dev_err(&pdev->dev, "ioremap() failed. Unable to clear pending interrupts.\n"); 243 243 } else { 244 244 for (i = 0; i < 64; i++)
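The mvebu-gicp fix replaces IS_ERR() with a NULL check: ioremap() reports failure by returning NULL, never an ERR_PTR, so the old test could not catch a failed mapping. A kernel-context sketch of the convention (illustrative function, assumes <linux/io.h>):

    static int map_and_clear(struct resource *res)
    {
        void __iomem *base = ioremap(res->start, resource_size(res));

        if (!base)          /* NULL on failure -- IS_ERR() would never trigger */
            return -ENOMEM;

        /* ... clear pending interrupt registers ... */

        iounmap(base);
        return 0;
    }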
+5 -7
drivers/isdn/hardware/mISDN/hfcpci.c
··· 39 39 40 40 #include "hfc_pci.h" 41 41 42 + static void hfcpci_softirq(struct timer_list *unused); 42 43 static const char *hfcpci_revision = "2.0"; 43 44 44 45 static int HFC_cnt; 45 46 static uint debug; 46 47 static uint poll, tics; 47 - static struct timer_list hfc_tl; 48 + static DEFINE_TIMER(hfc_tl, hfcpci_softirq); 48 49 static unsigned long hfc_jiffies; 49 50 50 51 MODULE_AUTHOR("Karsten Keil"); ··· 2306 2305 hfc_jiffies = jiffies + 1; 2307 2306 else 2308 2307 hfc_jiffies += tics; 2309 - hfc_tl.expires = hfc_jiffies; 2310 - add_timer(&hfc_tl); 2308 + mod_timer(&hfc_tl, hfc_jiffies); 2311 2309 } 2312 2310 2313 2311 static int __init ··· 2332 2332 if (poll != HFCPCI_BTRANS_THRESHOLD) { 2333 2333 printk(KERN_INFO "%s: Using alternative poll value of %d\n", 2334 2334 __func__, poll); 2335 - timer_setup(&hfc_tl, hfcpci_softirq, 0); 2336 - hfc_tl.expires = jiffies + tics; 2337 - hfc_jiffies = hfc_tl.expires; 2338 - add_timer(&hfc_tl); 2335 + hfc_jiffies = jiffies + tics; 2336 + mod_timer(&hfc_tl, hfc_jiffies); 2339 2337 } else 2340 2338 tics = 0; /* indicate the use of controller's timer */ 2341 2339
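The hfcpci hunks replace a runtime timer_setup()/add_timer() pair with a statically initialized DEFINE_TIMER() plus mod_timer(). mod_timer() arms an inactive timer and re-arms a pending one in a single call, and because the timer object is initialized at build time it is always in a valid state, even on paths that could previously run before timer_setup(). A kernel-context sketch of the pattern (names are illustrative, assumes <linux/timer.h>):

    static void poll_fn(struct timer_list *unused);
    static DEFINE_TIMER(poll_timer, poll_fn);       /* statically initialized */

    static void poll_fn(struct timer_list *unused)
    {
        /* ... periodic work ... */
        mod_timer(&poll_timer, jiffies + 1);        /* re-arm for the next tick */
    }

    static void start_polling(unsigned long tics)
    {
        mod_timer(&poll_timer, jiffies + tics);     /* arms or re-arms in one call */
    }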
-1
drivers/media/i2c/alvium-csi2.c
··· 1841 1841 1842 1842 } else { 1843 1843 alvium_set_stream_mipi(alvium, enable); 1844 - pm_runtime_mark_last_busy(&client->dev); 1845 1844 pm_runtime_put_autosuspend(&client->dev); 1846 1845 } 1847 1846
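This alvium hunk and the long run of media/i2c and media/platform hunks that follow all make the same mechanical change: the explicit pm_runtime_mark_last_busy() before a pm_runtime_put_autosuspend() (or put_sync_autosuspend) call is dropped. That is only safe if the put helpers refresh the last-busy timestamp themselves, which is presumably the premise of this treewide cleanup. A kernel-context sketch (assumes <linux/pm_runtime.h>):

    static void finish_transaction(struct device *dev)
    {
        /* Before this cleanup, callers refreshed the timestamp by hand so the
         * autosuspend delay restarts from "now":
         *
         *     pm_runtime_mark_last_busy(dev);
         *     pm_runtime_put_autosuspend(dev);
         *
         * After, the put helper is assumed to mark the device busy itself,
         * so the explicit call above is redundant: */
        pm_runtime_put_autosuspend(dev);
    }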
+1 -6
drivers/media/i2c/ccs/ccs-core.c
··· 787 787 rval = -EINVAL; 788 788 } 789 789 790 - if (pm_status > 0) { 791 - pm_runtime_mark_last_busy(&client->dev); 790 + if (pm_status > 0) 792 791 pm_runtime_put_autosuspend(&client->dev); 793 - } 794 792 795 793 return rval; 796 794 } ··· 1912 1914 if (!enable) { 1913 1915 ccs_stop_streaming(sensor); 1914 1916 sensor->streaming = false; 1915 - pm_runtime_mark_last_busy(&client->dev); 1916 1917 pm_runtime_put_autosuspend(&client->dev); 1917 1918 1918 1919 return 0; ··· 1926 1929 rval = ccs_start_streaming(sensor); 1927 1930 if (rval < 0) { 1928 1931 sensor->streaming = false; 1929 - pm_runtime_mark_last_busy(&client->dev); 1930 1932 pm_runtime_put_autosuspend(&client->dev); 1931 1933 } 1932 1934 ··· 2673 2677 return -ENODEV; 2674 2678 } 2675 2679 2676 - pm_runtime_mark_last_busy(&client->dev); 2677 2680 pm_runtime_put_autosuspend(&client->dev); 2678 2681 2679 2682 /*
-1
drivers/media/i2c/dw9768.c
··· 374 374 375 375 static int dw9768_close(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh) 376 376 { 377 - pm_runtime_mark_last_busy(sd->dev); 378 377 pm_runtime_put_autosuspend(sd->dev); 379 378 380 379 return 0;
-3
drivers/media/i2c/gc0308.c
··· 974 974 if (ret) 975 975 dev_err(gc0308->dev, "failed to set control: %d\n", ret); 976 976 977 - pm_runtime_mark_last_busy(gc0308->dev); 978 977 pm_runtime_put_autosuspend(gc0308->dev); 979 978 980 979 return ret; ··· 1156 1157 return 0; 1157 1158 1158 1159 disable_pm: 1159 - pm_runtime_mark_last_busy(gc0308->dev); 1160 1160 pm_runtime_put_autosuspend(gc0308->dev); 1161 1161 return ret; 1162 1162 } 1163 1163 1164 1164 static int gc0308_stop_stream(struct gc0308 *gc0308) 1165 1165 { 1166 - pm_runtime_mark_last_busy(gc0308->dev); 1167 1166 pm_runtime_put_autosuspend(gc0308->dev); 1168 1167 return 0; 1169 1168 }
-3
drivers/media/i2c/gc2145.c
··· 963 963 return 0; 964 964 965 965 err_rpm_put: 966 - pm_runtime_mark_last_busy(&client->dev); 967 966 pm_runtime_put_autosuspend(&client->dev); 968 967 return ret; 969 968 } ··· 984 985 if (ret) 985 986 dev_err(&client->dev, "%s failed to write regs\n", __func__); 986 987 987 - pm_runtime_mark_last_busy(&client->dev); 988 988 pm_runtime_put_autosuspend(&client->dev); 989 989 990 990 return ret; ··· 1191 1193 break; 1192 1194 } 1193 1195 1194 - pm_runtime_mark_last_busy(&client->dev); 1195 1196 pm_runtime_put_autosuspend(&client->dev); 1196 1197 1197 1198 return ret;
-2
drivers/media/i2c/imx219.c
··· 771 771 return 0; 772 772 773 773 err_rpm_put: 774 - pm_runtime_mark_last_busy(&client->dev); 775 774 pm_runtime_put_autosuspend(&client->dev); 776 775 return ret; 777 776 } ··· 792 793 __v4l2_ctrl_grab(imx219->vflip, false); 793 794 __v4l2_ctrl_grab(imx219->hflip, false); 794 795 795 - pm_runtime_mark_last_busy(&client->dev); 796 796 pm_runtime_put_autosuspend(&client->dev); 797 797 798 798 return ret;
-3
drivers/media/i2c/imx283.c
··· 1143 1143 return 0; 1144 1144 1145 1145 err_rpm_put: 1146 - pm_runtime_mark_last_busy(imx283->dev); 1147 1146 pm_runtime_put_autosuspend(imx283->dev); 1148 1147 1149 1148 return ret; ··· 1162 1163 if (ret) 1163 1164 dev_err(imx283->dev, "Failed to stop stream\n"); 1164 1165 1165 - pm_runtime_mark_last_busy(imx283->dev); 1166 1166 pm_runtime_put_autosuspend(imx283->dev); 1167 1167 1168 1168 return ret; ··· 1556 1558 * Decrease the PM usage count. The device will get suspended after the 1557 1559 * autosuspend delay, turning the power off. 1558 1560 */ 1559 - pm_runtime_mark_last_busy(imx283->dev); 1560 1561 pm_runtime_put_autosuspend(imx283->dev); 1561 1562 1562 1563 return 0;
-3
drivers/media/i2c/imx290.c
··· 869 869 break; 870 870 } 871 871 872 - pm_runtime_mark_last_busy(imx290->dev); 873 872 pm_runtime_put_autosuspend(imx290->dev); 874 873 875 874 return ret; ··· 1098 1099 } 1099 1100 } else { 1100 1101 imx290_stop_streaming(imx290); 1101 - pm_runtime_mark_last_busy(imx290->dev); 1102 1102 pm_runtime_put_autosuspend(imx290->dev); 1103 1103 } 1104 1104 ··· 1292 1294 * will already be prevented even before the delay. 1293 1295 */ 1294 1296 v4l2_i2c_subdev_init(&imx290->sd, client, &imx290_subdev_ops); 1295 - pm_runtime_mark_last_busy(imx290->dev); 1296 1297 pm_runtime_put_autosuspend(imx290->dev); 1297 1298 1298 1299 imx290->sd.internal_ops = &imx290_internal_ops;
-1
drivers/media/i2c/imx296.c
··· 604 604 if (!enable) { 605 605 ret = imx296_stream_off(sensor); 606 606 607 - pm_runtime_mark_last_busy(sensor->dev); 608 607 pm_runtime_put_autosuspend(sensor->dev); 609 608 610 609 goto unlock;
-1
drivers/media/i2c/imx415.c
··· 952 952 if (!enable) { 953 953 ret = imx415_stream_off(sensor); 954 954 955 - pm_runtime_mark_last_busy(sensor->dev); 956 955 pm_runtime_put_autosuspend(sensor->dev); 957 956 958 957 goto unlock;
-6
drivers/media/i2c/mt9m114.c
··· 974 974 return 0; 975 975 976 976 error: 977 - pm_runtime_mark_last_busy(&sensor->client->dev); 978 977 pm_runtime_put_autosuspend(&sensor->client->dev); 979 978 980 979 return ret; ··· 987 988 988 989 ret = mt9m114_set_state(sensor, MT9M114_SYS_STATE_ENTER_SUSPEND); 989 990 990 - pm_runtime_mark_last_busy(&sensor->client->dev); 991 991 pm_runtime_put_autosuspend(&sensor->client->dev); 992 992 993 993 return ret; ··· 1044 1046 break; 1045 1047 } 1046 1048 1047 - pm_runtime_mark_last_busy(&sensor->client->dev); 1048 1049 pm_runtime_put_autosuspend(&sensor->client->dev); 1049 1050 1050 1051 return ret; ··· 1110 1113 break; 1111 1114 } 1112 1115 1113 - pm_runtime_mark_last_busy(&sensor->client->dev); 1114 1116 pm_runtime_put_autosuspend(&sensor->client->dev); 1115 1117 1116 1118 return ret; ··· 1561 1565 break; 1562 1566 } 1563 1567 1564 - pm_runtime_mark_last_busy(&sensor->client->dev); 1565 1568 pm_runtime_put_autosuspend(&sensor->client->dev); 1566 1569 1567 1570 return ret; ··· 2467 2472 * Decrease the PM usage count. The device will get suspended after the 2468 2473 * autosuspend delay, turning the power off. 2469 2474 */ 2470 - pm_runtime_mark_last_busy(dev); 2471 2475 pm_runtime_put_autosuspend(dev); 2472 2476 2473 2477 return 0;
-3
drivers/media/i2c/ov4689.c
··· 497 497 } else { 498 498 cci_write(ov4689->regmap, OV4689_REG_CTRL_MODE, 499 499 OV4689_MODE_SW_STANDBY, NULL); 500 - pm_runtime_mark_last_busy(dev); 501 500 pm_runtime_put_autosuspend(dev); 502 501 } 503 502 ··· 701 702 break; 702 703 } 703 704 704 - pm_runtime_mark_last_busy(dev); 705 705 pm_runtime_put_autosuspend(dev); 706 706 707 707 return ret; ··· 997 999 goto err_clean_subdev_pm; 998 1000 } 999 1001 1000 - pm_runtime_mark_last_busy(dev); 1001 1002 pm_runtime_put_autosuspend(dev); 1002 1003 1003 1004 return 0;
-4
drivers/media/i2c/ov5640.c
··· 3341 3341 break; 3342 3342 } 3343 3343 3344 - pm_runtime_mark_last_busy(&sensor->i2c_client->dev); 3345 3344 pm_runtime_put_autosuspend(&sensor->i2c_client->dev); 3346 3345 3347 3346 return 0; ··· 3416 3417 break; 3417 3418 } 3418 3419 3419 - pm_runtime_mark_last_busy(&sensor->i2c_client->dev); 3420 3420 pm_runtime_put_autosuspend(&sensor->i2c_client->dev); 3421 3421 3422 3422 return ret; ··· 3752 3754 mutex_unlock(&sensor->lock); 3753 3755 3754 3756 if (!enable || ret) { 3755 - pm_runtime_mark_last_busy(&sensor->i2c_client->dev); 3756 3757 pm_runtime_put_autosuspend(&sensor->i2c_client->dev); 3757 3758 } 3758 3759 ··· 3962 3965 3963 3966 pm_runtime_set_autosuspend_delay(dev, 1000); 3964 3967 pm_runtime_use_autosuspend(dev); 3965 - pm_runtime_mark_last_busy(dev); 3966 3968 pm_runtime_put_autosuspend(dev); 3967 3969 3968 3970 return 0;
-3
drivers/media/i2c/ov5645.c
··· 808 808 break; 809 809 } 810 810 811 - pm_runtime_mark_last_busy(ov5645->dev); 812 811 pm_runtime_put_autosuspend(ov5645->dev); 813 812 814 813 return ret; ··· 978 979 OV5645_SYSTEM_CTRL0_STOP); 979 980 980 981 rpm_put: 981 - pm_runtime_mark_last_busy(ov5645->dev); 982 982 pm_runtime_put_autosuspend(ov5645->dev); 983 983 984 984 return ret; ··· 1194 1196 1195 1197 pm_runtime_set_autosuspend_delay(dev, 1000); 1196 1198 pm_runtime_use_autosuspend(dev); 1197 - pm_runtime_mark_last_busy(dev); 1198 1199 pm_runtime_put_autosuspend(dev); 1199 1200 1200 1201 return 0;
+1 -6
drivers/media/i2c/ov64a40.c
··· 2990 2990 return 0; 2991 2991 2992 2992 error_power_off: 2993 - pm_runtime_mark_last_busy(ov64a40->dev); 2994 2993 pm_runtime_put_autosuspend(ov64a40->dev); 2995 2994 2996 2995 return ret; ··· 2999 3000 struct v4l2_subdev_state *state) 3000 3001 { 3001 3002 cci_update_bits(ov64a40->cci, OV64A40_REG_SMIA, BIT(0), 0, NULL); 3002 - pm_runtime_mark_last_busy(ov64a40->dev); 3003 3003 pm_runtime_put_autosuspend(ov64a40->dev); 3004 3004 3005 3005 __v4l2_ctrl_grab(ov64a40->link_freq, false); ··· 3327 3329 break; 3328 3330 } 3329 3331 3330 - if (pm_status > 0) { 3331 - pm_runtime_mark_last_busy(ov64a40->dev); 3332 + if (pm_status > 0) 3332 3333 pm_runtime_put_autosuspend(ov64a40->dev); 3333 - } 3334 3334 3335 3335 return ret; 3336 3336 } ··· 3618 3622 goto error_subdev_cleanup; 3619 3623 } 3620 3624 3621 - pm_runtime_mark_last_busy(&client->dev); 3622 3625 pm_runtime_put_autosuspend(&client->dev); 3623 3626 3624 3627 return 0;
-2
drivers/media/i2c/ov8858.c
··· 1391 1391 } 1392 1392 } else { 1393 1393 ov8858_stop_stream(ov8858); 1394 - pm_runtime_mark_last_busy(&client->dev); 1395 1394 pm_runtime_put_autosuspend(&client->dev); 1396 1395 } 1397 1396 ··· 1944 1945 goto err_power_off; 1945 1946 } 1946 1947 1947 - pm_runtime_mark_last_busy(dev); 1948 1948 pm_runtime_put_autosuspend(dev); 1949 1949 1950 1950 return 0;
-2
drivers/media/i2c/st-mipid02.c
··· 465 465 if (ret) 466 466 goto error; 467 467 468 - pm_runtime_mark_last_busy(&client->dev); 469 468 pm_runtime_put_autosuspend(&client->dev); 470 469 471 470 error: ··· 541 542 cci_write(bridge->regmap, MIPID02_DATA_LANE0_REG1, 0, &ret); 542 543 cci_write(bridge->regmap, MIPID02_DATA_LANE1_REG1, 0, &ret); 543 544 544 - pm_runtime_mark_last_busy(&client->dev); 545 545 pm_runtime_put_autosuspend(&client->dev); 546 546 return ret; 547 547 }
-5
drivers/media/i2c/tc358746.c
··· 816 816 return 0; 817 817 818 818 err_out: 819 - pm_runtime_mark_last_busy(sd->dev); 820 819 pm_runtime_put_sync_autosuspend(sd->dev); 821 820 822 821 return err; ··· 837 838 if (err) 838 839 return err; 839 840 840 - pm_runtime_mark_last_busy(sd->dev); 841 841 pm_runtime_put_sync_autosuspend(sd->dev); 842 842 843 843 return v4l2_subdev_call(src, video, s_stream, 0); ··· 1014 1016 err = tc358746_read(tc358746, reg->reg, &val); 1015 1017 reg->val = val; 1016 1018 1017 - pm_runtime_mark_last_busy(sd->dev); 1018 1019 pm_runtime_put_sync_autosuspend(sd->dev); 1019 1020 1020 1021 return err; ··· 1029 1032 1030 1033 tc358746_write(tc358746, (u32)reg->reg, (u32)reg->val); 1031 1034 1032 - pm_runtime_mark_last_busy(sd->dev); 1033 1035 pm_runtime_put_sync_autosuspend(sd->dev); 1034 1036 1035 1037 return 0; ··· 1391 1395 } 1392 1396 1393 1397 err = tc358746_read(tc358746, CHIPID_REG, &val); 1394 - pm_runtime_mark_last_busy(dev); 1395 1398 pm_runtime_put_sync_autosuspend(dev); 1396 1399 if (err) 1397 1400 return -ENODEV;
-4
drivers/media/i2c/thp7312.c
··· 808 808 if (!enable) { 809 809 thp7312_stream_enable(thp7312, false); 810 810 811 - pm_runtime_mark_last_busy(thp7312->dev); 812 811 pm_runtime_put_autosuspend(thp7312->dev); 813 812 814 813 v4l2_subdev_unlock_state(sd_state); ··· 838 839 goto finish_unlock; 839 840 840 841 finish_pm: 841 - pm_runtime_mark_last_busy(thp7312->dev); 842 842 pm_runtime_put_autosuspend(thp7312->dev); 843 843 finish_unlock: 844 844 v4l2_subdev_unlock_state(sd_state); ··· 1145 1147 break; 1146 1148 } 1147 1149 1148 - pm_runtime_mark_last_busy(thp7312->dev); 1149 1150 pm_runtime_put_autosuspend(thp7312->dev); 1150 1151 1151 1152 return ret; ··· 2180 2183 * Decrease the PM usage count. The device will get suspended after the 2181 2184 * autosuspend delay, turning the power off. 2182 2185 */ 2183 - pm_runtime_mark_last_busy(dev); 2184 2186 pm_runtime_put_autosuspend(dev); 2185 2187 2186 2188 dev_info(dev, "THP7312 firmware version %02u.%02u\n",
-4
drivers/media/i2c/vd55g1.c
··· 1104 1104 1105 1105 vd55g1_grab_ctrls(sensor, false); 1106 1106 1107 - pm_runtime_mark_last_busy(sensor->dev); 1108 1107 pm_runtime_put_autosuspend(sensor->dev); 1109 1108 1110 1109 return ret; ··· 1337 1338 break; 1338 1339 } 1339 1340 1340 - pm_runtime_mark_last_busy(sensor->dev); 1341 1341 pm_runtime_put_autosuspend(sensor->dev); 1342 1342 1343 1343 return ret; ··· 1431 1433 break; 1432 1434 } 1433 1435 1434 - pm_runtime_mark_last_busy(sensor->dev); 1435 1436 pm_runtime_put_autosuspend(sensor->dev); 1436 1437 1437 1438 return ret; ··· 1892 1895 pm_runtime_enable(dev); 1893 1896 pm_runtime_set_autosuspend_delay(dev, 4000); 1894 1897 pm_runtime_use_autosuspend(dev); 1895 - pm_runtime_mark_last_busy(dev); 1896 1898 pm_runtime_put_autosuspend(dev); 1897 1899 1898 1900 ret = vd55g1_subdev_init(sensor);
-4
drivers/media/i2c/vd56g3.c
··· 493 493 break; 494 494 } 495 495 496 - pm_runtime_mark_last_busy(sensor->dev); 497 496 pm_runtime_put_autosuspend(sensor->dev); 498 497 499 498 return ret; ··· 576 577 break; 577 578 } 578 579 579 - pm_runtime_mark_last_busy(sensor->dev); 580 580 pm_runtime_put_autosuspend(sensor->dev); 581 581 582 582 return ret; ··· 1019 1021 __v4l2_ctrl_grab(sensor->vflip_ctrl, false); 1020 1022 __v4l2_ctrl_grab(sensor->patgen_ctrl, false); 1021 1023 1022 - pm_runtime_mark_last_busy(sensor->dev); 1023 1024 pm_runtime_put_autosuspend(sensor->dev); 1024 1025 1025 1026 return ret; ··· 1524 1527 } 1525 1528 1526 1529 /* Sensor could now be powered off (after the autosuspend delay) */ 1527 - pm_runtime_mark_last_busy(dev); 1528 1530 pm_runtime_put_autosuspend(dev); 1529 1531 1530 1532 dev_dbg(dev, "Successfully probe %s sensor\n",
-4
drivers/media/i2c/video-i2c.c
··· 288 288 return tmp; 289 289 290 290 tmp = regmap_bulk_read(data->regmap, AMG88XX_REG_TTHL, &buf, 2); 291 - pm_runtime_mark_last_busy(regmap_get_device(data->regmap)); 292 291 pm_runtime_put_autosuspend(regmap_get_device(data->regmap)); 293 292 if (tmp) 294 293 return tmp; ··· 526 527 return 0; 527 528 528 529 error_rpm_put: 529 - pm_runtime_mark_last_busy(dev); 530 530 pm_runtime_put_autosuspend(dev); 531 531 error_del_list: 532 532 video_i2c_del_list(vq, VB2_BUF_STATE_QUEUED); ··· 542 544 543 545 kthread_stop(data->kthread_vid_cap); 544 546 data->kthread_vid_cap = NULL; 545 - pm_runtime_mark_last_busy(regmap_get_device(data->regmap)); 546 547 pm_runtime_put_autosuspend(regmap_get_device(data->regmap)); 547 548 548 549 video_i2c_del_list(vq, VB2_BUF_STATE_ERROR); ··· 850 853 if (ret < 0) 851 854 goto error_pm_disable; 852 855 853 - pm_runtime_mark_last_busy(&client->dev); 854 856 pm_runtime_put_autosuspend(&client->dev); 855 857 856 858 return 0;
-4
drivers/media/platform/chips-media/wave5/wave5-vpu-dec.c
··· 451 451 if (q_status.report_queue_count == 0 && 452 452 (q_status.instance_queue_count == 0 || dec_info.sequence_changed)) { 453 453 dev_dbg(inst->dev->dev, "%s: finishing job.\n", __func__); 454 - pm_runtime_mark_last_busy(inst->dev->dev); 455 454 pm_runtime_put_autosuspend(inst->dev->dev); 456 455 v4l2_m2m_job_finish(inst->v4l2_m2m_dev, m2m_ctx); 457 456 } ··· 1363 1364 } 1364 1365 1365 1366 } 1366 - pm_runtime_mark_last_busy(inst->dev->dev); 1367 1367 pm_runtime_put_autosuspend(inst->dev->dev); 1368 1368 return ret; 1369 1369 ··· 1496 1498 else 1497 1499 streamoff_capture(q); 1498 1500 1499 - pm_runtime_mark_last_busy(inst->dev->dev); 1500 1501 pm_runtime_put_autosuspend(inst->dev->dev); 1501 1502 } 1502 1503 ··· 1659 1662 1660 1663 finish_job_and_return: 1661 1664 dev_dbg(inst->dev->dev, "%s: leave and finish job", __func__); 1662 - pm_runtime_mark_last_busy(inst->dev->dev); 1663 1665 pm_runtime_put_autosuspend(inst->dev->dev); 1664 1666 v4l2_m2m_job_finish(inst->v4l2_m2m_dev, m2m_ctx); 1665 1667 }
-5
drivers/media/platform/chips-media/wave5/wave5-vpu-enc.c
··· 1391 1391 if (ret) 1392 1392 goto return_buffers; 1393 1393 1394 - pm_runtime_mark_last_busy(inst->dev->dev); 1395 1394 pm_runtime_put_autosuspend(inst->dev->dev); 1396 1395 return 0; 1397 1396 return_buffers: 1398 1397 wave5_return_bufs(q, VB2_BUF_STATE_QUEUED); 1399 - pm_runtime_mark_last_busy(inst->dev->dev); 1400 1398 pm_runtime_put_autosuspend(inst->dev->dev); 1401 1399 return ret; 1402 1400 } ··· 1463 1465 else 1464 1466 streamoff_capture(inst, q); 1465 1467 1466 - pm_runtime_mark_last_busy(inst->dev->dev); 1467 1468 pm_runtime_put_autosuspend(inst->dev->dev); 1468 1469 } 1469 1470 ··· 1517 1520 break; 1518 1521 } 1519 1522 dev_dbg(inst->dev->dev, "%s: leave with active job", __func__); 1520 - pm_runtime_mark_last_busy(inst->dev->dev); 1521 1523 pm_runtime_put_autosuspend(inst->dev->dev); 1522 1524 return; 1523 1525 default: ··· 1525 1529 break; 1526 1530 } 1527 1531 dev_dbg(inst->dev->dev, "%s: leave and finish job", __func__); 1528 - pm_runtime_mark_last_busy(inst->dev->dev); 1529 1532 pm_runtime_put_autosuspend(inst->dev->dev); 1530 1533 v4l2_m2m_job_finish(inst->v4l2_m2m_dev, m2m_ctx); 1531 1534 }
-2
drivers/media/platform/nvidia/tegra-vde/h264.c
··· 585 585 return 0; 586 586 587 587 put_runtime_pm: 588 - pm_runtime_mark_last_busy(dev); 589 588 pm_runtime_put_autosuspend(dev); 590 589 591 590 unlock: ··· 611 612 if (err) 612 613 dev_err(dev, "DEC end: Failed to assert HW reset: %d\n", err); 613 614 614 - pm_runtime_mark_last_busy(dev); 615 615 pm_runtime_put_autosuspend(dev); 616 616 617 617 mutex_unlock(&vde->lock);
-1
drivers/media/platform/qcom/iris/iris_hfi_queue.c
··· 142 142 } 143 143 mutex_unlock(&core->lock); 144 144 145 - pm_runtime_mark_last_busy(core->dev); 146 145 pm_runtime_put_autosuspend(core->dev); 147 146 148 147 return 0;
-2
drivers/media/platform/raspberrypi/pisp_be/pisp_be.c
··· 950 950 kfree(job); 951 951 } 952 952 953 - pm_runtime_mark_last_busy(pispbe->dev); 954 953 pm_runtime_put_autosuspend(pispbe->dev); 955 954 956 955 dev_dbg(pispbe->dev, "Nodes streaming now 0x%x\n", ··· 1741 1742 if (ret) 1742 1743 goto disable_devs_err; 1743 1744 1744 - pm_runtime_mark_last_busy(pispbe->dev); 1745 1745 pm_runtime_put_autosuspend(pispbe->dev); 1746 1746 1747 1747 return 0;
+9 -8
drivers/media/platform/rockchip/rkvdec/rkvdec.c
··· 765 765 { 766 766 struct rkvdec_dev *rkvdec = ctx->dev; 767 767 768 - pm_runtime_mark_last_busy(rkvdec->dev); 769 768 pm_runtime_put_autosuspend(rkvdec->dev); 770 769 rkvdec_job_finish_no_pm(ctx, result); 771 770 } ··· 1158 1159 return ret; 1159 1160 } 1160 1161 1161 - if (iommu_get_domain_for_dev(&pdev->dev)) { 1162 - rkvdec->empty_domain = iommu_paging_domain_alloc(rkvdec->dev); 1163 - 1164 - if (!rkvdec->empty_domain) 1165 - dev_warn(rkvdec->dev, "cannot alloc new empty domain\n"); 1166 - } 1167 - 1168 1162 vb2_dma_contig_set_max_seg_size(&pdev->dev, DMA_BIT_MASK(32)); 1169 1163 1170 1164 irq = platform_get_irq(pdev, 0); ··· 1179 1187 ret = rkvdec_v4l2_init(rkvdec); 1180 1188 if (ret) 1181 1189 goto err_disable_runtime_pm; 1190 + 1191 + if (iommu_get_domain_for_dev(&pdev->dev)) { 1192 + rkvdec->empty_domain = iommu_paging_domain_alloc(rkvdec->dev); 1193 + 1194 + if (IS_ERR(rkvdec->empty_domain)) { 1195 + rkvdec->empty_domain = NULL; 1196 + dev_warn(rkvdec->dev, "cannot alloc new empty domain\n"); 1197 + } 1198 + } 1182 1199 1183 1200 return 0; 1184 1201
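Besides moving the allocation after v4l2 init, the rkvdec hunk fixes the error check: iommu_paging_domain_alloc() reports failure as an ERR_PTR, so the old NULL test could never fire and a failed allocation would have been treated as a usable domain. A kernel-context sketch of treating such an allocator as optional (struct mydev and the field name are illustrative; assumes <linux/iommu.h> and <linux/err.h>):

    static void maybe_alloc_empty_domain(struct mydev *md, struct device *dev)
    {
        if (!iommu_get_domain_for_dev(dev))
            return;                         /* no IOMMU attached, nothing to do */

        md->empty_domain = iommu_paging_domain_alloc(dev);
        if (IS_ERR(md->empty_domain)) {
            md->empty_domain = NULL;        /* optional feature: warn and continue */
            dev_warn(dev, "cannot alloc new empty domain\n");
        }
    }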
-1
drivers/media/platform/verisilicon/hantro_drv.c
··· 89 89 struct hantro_ctx *ctx, 90 90 enum vb2_buffer_state result) 91 91 { 92 - pm_runtime_mark_last_busy(vpu->dev); 93 92 pm_runtime_put_autosuspend(vpu->dev); 94 93 95 94 clk_bulk_disable(vpu->variant->num_clocks, vpu->clocks);
+1 -3
drivers/media/rc/gpio-ir-recv.c
··· 48 48 if (val >= 0) 49 49 ir_raw_event_store_edge(gpio_dev->rcdev, val == 1); 50 50 51 - if (pmdev) { 52 - pm_runtime_mark_last_busy(pmdev); 51 + if (pmdev) 53 52 pm_runtime_put_autosuspend(pmdev); 54 - } 55 53 56 54 return IRQ_HANDLED; 57 55 }
+30 -6
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 8016 8016 } 8017 8017 rx_rings = min_t(int, rx_rings, hwr.grp); 8018 8018 hwr.cp = min_t(int, hwr.cp, bp->cp_nr_rings); 8019 - if (hwr.stat > bnxt_get_ulp_stat_ctxs(bp)) 8019 + if (bnxt_ulp_registered(bp->edev) && 8020 + hwr.stat > bnxt_get_ulp_stat_ctxs(bp)) 8020 8021 hwr.stat -= bnxt_get_ulp_stat_ctxs(bp); 8021 8022 hwr.cp = min_t(int, hwr.cp, hwr.stat); 8022 8023 rc = bnxt_trim_rings(bp, &rx_rings, &hwr.tx, hwr.cp, sh); ··· 8025 8024 hwr.rx = rx_rings << 1; 8026 8025 tx_cp = bnxt_num_tx_to_cp(bp, hwr.tx); 8027 8026 hwr.cp = sh ? max_t(int, tx_cp, rx_rings) : tx_cp + rx_rings; 8027 + if (hwr.tx != bp->tx_nr_rings) { 8028 + netdev_warn(bp->dev, 8029 + "Able to reserve only %d out of %d requested TX rings\n", 8030 + hwr.tx, bp->tx_nr_rings); 8031 + } 8028 8032 bp->tx_nr_rings = hwr.tx; 8029 8033 8030 8034 /* If we cannot reserve all the RX rings, reset the RSS map only ··· 12857 12851 return rc; 12858 12852 } 12859 12853 12854 + static int bnxt_tx_nr_rings(struct bnxt *bp) 12855 + { 12856 + return bp->num_tc ? bp->tx_nr_rings_per_tc * bp->num_tc : 12857 + bp->tx_nr_rings_per_tc; 12858 + } 12859 + 12860 + static int bnxt_tx_nr_rings_per_tc(struct bnxt *bp) 12861 + { 12862 + return bp->num_tc ? bp->tx_nr_rings / bp->num_tc : bp->tx_nr_rings; 12863 + } 12864 + 12860 12865 static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init) 12861 12866 { 12862 12867 int rc = 0; ··· 12885 12868 if (rc) 12886 12869 return rc; 12887 12870 12871 + /* Make adjustments if reserved TX rings are less than requested */ 12872 + bp->tx_nr_rings -= bp->tx_nr_rings_xdp; 12873 + bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp); 12874 + if (bp->tx_nr_rings_xdp) { 12875 + bp->tx_nr_rings_xdp = bp->tx_nr_rings_per_tc; 12876 + bp->tx_nr_rings += bp->tx_nr_rings_xdp; 12877 + } 12888 12878 rc = bnxt_alloc_mem(bp, irq_re_init); 12889 12879 if (rc) { 12890 12880 netdev_err(bp->dev, "bnxt_alloc_mem err: %x\n", rc); ··· 16349 16325 bp->cp_nr_rings = min_t(int, bp->tx_nr_rings_per_tc, bp->rx_nr_rings); 16350 16326 bp->rx_nr_rings = bp->cp_nr_rings; 16351 16327 bp->tx_nr_rings_per_tc = bp->cp_nr_rings; 16352 - bp->tx_nr_rings = bp->tx_nr_rings_per_tc; 16328 + bp->tx_nr_rings = bnxt_tx_nr_rings(bp); 16353 16329 } 16354 16330 16355 16331 static int bnxt_set_dflt_rings(struct bnxt *bp, bool sh) ··· 16381 16357 bnxt_trim_dflt_sh_rings(bp); 16382 16358 else 16383 16359 bp->cp_nr_rings = bp->tx_nr_rings_per_tc + bp->rx_nr_rings; 16384 - bp->tx_nr_rings = bp->tx_nr_rings_per_tc; 16360 + bp->tx_nr_rings = bnxt_tx_nr_rings(bp); 16385 16361 16386 16362 avail_msix = bnxt_get_max_func_irqs(bp) - bp->cp_nr_rings; 16387 16363 if (avail_msix >= BNXT_MIN_ROCE_CP_RINGS) { ··· 16394 16370 rc = __bnxt_reserve_rings(bp); 16395 16371 if (rc && rc != -ENODEV) 16396 16372 netdev_warn(bp->dev, "Unable to reserve tx rings\n"); 16397 - bp->tx_nr_rings_per_tc = bp->tx_nr_rings; 16373 + bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp); 16398 16374 if (sh) 16399 16375 bnxt_trim_dflt_sh_rings(bp); 16400 16376 ··· 16403 16379 rc = __bnxt_reserve_rings(bp); 16404 16380 if (rc && rc != -ENODEV) 16405 16381 netdev_warn(bp->dev, "2nd rings reservation failed.\n"); 16406 - bp->tx_nr_rings_per_tc = bp->tx_nr_rings; 16382 + bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp); 16407 16383 } 16408 16384 if (BNXT_CHIP_TYPE_NITRO_A0(bp)) { 16409 16385 bp->rx_nr_rings++; ··· 16437 16413 if (rc) 16438 16414 goto init_dflt_ring_err; 16439 16415 16440 - bp->tx_nr_rings_per_tc = bp->tx_nr_rings; 16416 + bp->tx_nr_rings_per_tc = 
bnxt_tx_nr_rings_per_tc(bp); 16441 16417 16442 16418 bnxt_set_dflt_rfs(bp); 16443 16419
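The bnxt hunk warns when firmware grants fewer TX rings than requested and adds two helpers so the total and per-TC ring counts stay consistent whether or not mqprio traffic classes are configured. A standalone sketch of the arithmetic those helpers encode (illustrative, not the driver code):

    #include <stdio.h>

    /* total TX rings needed for a given per-TC count (num_tc == 0: no mqprio) */
    static int tx_nr_rings(int per_tc, int num_tc)
    {
        return num_tc ? per_tc * num_tc : per_tc;
    }

    /* per-TC count recovered from a possibly trimmed total */
    static int tx_nr_rings_per_tc(int total, int num_tc)
    {
        return num_tc ? total / num_tc : total;
    }

    int main(void)
    {
        int granted = 30, num_tc = 4;                     /* 32 requested, 30 granted */
        int per_tc = tx_nr_rings_per_tc(granted, num_tc);

        printf("per-TC %d, usable total %d of %d granted\n",
               per_tc, tx_nr_rings(per_tc, num_tc), granted);
        return 0;
    }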
+4 -7
drivers/net/ethernet/cadence/macb_main.c
··· 3090 3090 /* Add GEM_OCTTXH, GEM_OCTRXH */ 3091 3091 val = bp->macb_reg_readl(bp, offset + 4); 3092 3092 bp->ethtool_stats[i] += ((u64)val) << 32; 3093 - *(p++) += ((u64)val) << 32; 3093 + *p += ((u64)val) << 32; 3094 3094 } 3095 3095 } 3096 3096 ··· 5399 5399 5400 5400 if (dev) { 5401 5401 bp = netdev_priv(dev); 5402 + unregister_netdev(dev); 5402 5403 phy_exit(bp->sgmii_phy); 5403 5404 mdiobus_unregister(bp->mii_bus); 5404 5405 mdiobus_free(bp->mii_bus); 5405 5406 5406 - unregister_netdev(dev); 5407 + device_set_wakeup_enable(&bp->pdev->dev, 0); 5407 5408 cancel_work_sync(&bp->hresp_err_bh_work); 5408 5409 pm_runtime_disable(&pdev->dev); 5409 5410 pm_runtime_dont_use_autosuspend(&pdev->dev); 5410 - if (!pm_runtime_suspended(&pdev->dev)) { 5411 - macb_clks_disable(bp->pclk, bp->hclk, bp->tx_clk, 5412 - bp->rx_clk, bp->tsu_clk); 5413 - pm_runtime_set_suspended(&pdev->dev); 5414 - } 5411 + pm_runtime_set_suspended(&pdev->dev); 5415 5412 phylink_destroy(bp->phylink); 5416 5413 free_netdev(dev); 5417 5414 }
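In the macb statistics hunk, *(p++) += becomes *p += for the high 32 bits of a 64-bit counter. The surrounding loop appears to advance the cursor once per statistic already, so the extra post-increment folded the high word into the next statistic's slot and skipped an entry. A standalone sketch of the accumulation pattern with the register reads faked:

    #include <stdint.h>
    #include <stdio.h>

    #define NSTATS 3

    /* stat 1 is a 64-bit counter split across a low and a high 32-bit register */
    static const uint32_t lo_regs[NSTATS] = { 10, 0xffffffff, 7 };
    static const uint32_t hi_regs[NSTATS] = {  0, 2,          0 };
    static const int      has_hi[NSTATS]  = {  0, 1,          0 };

    int main(void)
    {
        uint64_t stats[NSTATS] = { 0 };
        uint64_t *p = stats;

        for (int i = 0; i < NSTATS; i++, p++) {
            *p += lo_regs[i];
            if (has_hi[i])
                *p += (uint64_t)hi_regs[i] << 32;   /* same slot: no extra p++ here */
        }

        printf("%llu %llu %llu\n", (unsigned long long)stats[0],
               (unsigned long long)stats[1], (unsigned long long)stats[2]);
        return 0;
    }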
+1 -1
drivers/net/ethernet/dlink/dl2k.c
··· 1099 1099 dev->stats.rx_bytes += dr32(OctetRcvOk); 1100 1100 dev->stats.tx_bytes += dr32(OctetXmtOk); 1101 1101 1102 - dev->stats.multicast = dr32(McstFramesRcvdOk); 1102 + dev->stats.multicast += dr32(McstFramesRcvdOk); 1103 1103 dev->stats.collisions += dr32(SingleColFrames) 1104 1104 + dr32(MultiColFrames); 1105 1105
+1
drivers/net/ethernet/intel/ice/ice.h
··· 510 510 ICE_FLAG_LINK_LENIENT_MODE_ENA, 511 511 ICE_FLAG_PLUG_AUX_DEV, 512 512 ICE_FLAG_UNPLUG_AUX_DEV, 513 + ICE_FLAG_AUX_DEV_CREATED, 513 514 ICE_FLAG_MTU_CHANGED, 514 515 ICE_FLAG_GNSS, /* GNSS successfully initialized */ 515 516 ICE_FLAG_DPLL, /* SyncE/PTP dplls initialized */
+38 -11
drivers/net/ethernet/intel/ice/ice_adapter.c
··· 13 13 static DEFINE_XARRAY(ice_adapters); 14 14 static DEFINE_MUTEX(ice_adapters_mutex); 15 15 16 - static unsigned long ice_adapter_index(u64 dsn) 16 + #define ICE_ADAPTER_FIXED_INDEX BIT_ULL(63) 17 + 18 + #define ICE_ADAPTER_INDEX_E825C \ 19 + (ICE_DEV_ID_E825C_BACKPLANE | ICE_ADAPTER_FIXED_INDEX) 20 + 21 + static u64 ice_adapter_index(struct pci_dev *pdev) 17 22 { 23 + switch (pdev->device) { 24 + case ICE_DEV_ID_E825C_BACKPLANE: 25 + case ICE_DEV_ID_E825C_QSFP: 26 + case ICE_DEV_ID_E825C_SFP: 27 + case ICE_DEV_ID_E825C_SGMII: 28 + /* E825C devices have multiple NACs which are connected to the 29 + * same clock source, and which must share the same 30 + * ice_adapter structure. We can't use the serial number since 31 + * each NAC has its own NVM generated with its own unique 32 + * Device Serial Number. Instead, rely on the embedded nature 33 + * of the E825C devices, and use a fixed index. This relies on 34 + * the fact that all E825C physical functions in a given 35 + * system are part of the same overall device. 36 + */ 37 + return ICE_ADAPTER_INDEX_E825C; 38 + default: 39 + return pci_get_dsn(pdev) & ~ICE_ADAPTER_FIXED_INDEX; 40 + } 41 + } 42 + 43 + static unsigned long ice_adapter_xa_index(struct pci_dev *pdev) 44 + { 45 + u64 index = ice_adapter_index(pdev); 46 + 18 47 #if BITS_PER_LONG == 64 19 - return dsn; 48 + return index; 20 49 #else 21 - return (u32)dsn ^ (u32)(dsn >> 32); 50 + return (u32)index ^ (u32)(index >> 32); 22 51 #endif 23 52 } 24 53 25 - static struct ice_adapter *ice_adapter_new(u64 dsn) 54 + static struct ice_adapter *ice_adapter_new(struct pci_dev *pdev) 26 55 { 27 56 struct ice_adapter *adapter; 28 57 ··· 59 30 if (!adapter) 60 31 return NULL; 61 32 62 - adapter->device_serial_number = dsn; 33 + adapter->index = ice_adapter_index(pdev); 63 34 spin_lock_init(&adapter->ptp_gltsyn_time_lock); 64 35 spin_lock_init(&adapter->txq_ctx_lock); 65 36 refcount_set(&adapter->refcount, 1); ··· 93 64 */ 94 65 struct ice_adapter *ice_adapter_get(struct pci_dev *pdev) 95 66 { 96 - u64 dsn = pci_get_dsn(pdev); 97 67 struct ice_adapter *adapter; 98 68 unsigned long index; 99 69 int err; 100 70 101 - index = ice_adapter_index(dsn); 71 + index = ice_adapter_xa_index(pdev); 102 72 scoped_guard(mutex, &ice_adapters_mutex) { 103 73 err = xa_insert(&ice_adapters, index, NULL, GFP_KERNEL); 104 74 if (err == -EBUSY) { 105 75 adapter = xa_load(&ice_adapters, index); 106 76 refcount_inc(&adapter->refcount); 107 - WARN_ON_ONCE(adapter->device_serial_number != dsn); 77 + WARN_ON_ONCE(adapter->index != ice_adapter_index(pdev)); 108 78 return adapter; 109 79 } 110 80 if (err) 111 81 return ERR_PTR(err); 112 82 113 - adapter = ice_adapter_new(dsn); 83 + adapter = ice_adapter_new(pdev); 114 84 if (!adapter) 115 85 return ERR_PTR(-ENOMEM); 116 86 xa_store(&ice_adapters, index, adapter, GFP_KERNEL); ··· 128 100 */ 129 101 void ice_adapter_put(struct pci_dev *pdev) 130 102 { 131 - u64 dsn = pci_get_dsn(pdev); 132 103 struct ice_adapter *adapter; 133 104 unsigned long index; 134 105 135 - index = ice_adapter_index(dsn); 106 + index = ice_adapter_xa_index(pdev); 136 107 scoped_guard(mutex, &ice_adapters_mutex) { 137 108 adapter = xa_load(&ice_adapters, index); 138 109 if (WARN_ON(!adapter))
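The ice_adapter hunk stops keying the shared adapter structure purely by Device Serial Number: the E825C device IDs map to one fixed index with bit 63 set (their NACs have distinct DSNs but must share a single ice_adapter), while all other devices keep using the DSN with bit 63 masked off so the two index spaces cannot collide. On 32-bit kernels the 64-bit index is folded to fit the xarray key, and the WARN_ON_ONCE() on the stored full index catches fold collisions. A standalone sketch of the fold (the DSN value is illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* fold a 64-bit adapter index into an unsigned long xarray key; on 32-bit
     * builds collisions are possible, so the full index is kept for checking */
    static unsigned long xa_index(uint64_t index)
    {
        if (sizeof(unsigned long) == 8)
            return (unsigned long)index;
        return (uint32_t)index ^ (uint32_t)(index >> 32);
    }

    int main(void)
    {
        uint64_t dsn = 0x11223344aabbccddULL;

        printf("xarray key: %#lx\n", xa_index(dsn));
        return 0;
    }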
+2 -2
drivers/net/ethernet/intel/ice/ice_adapter.h
··· 33 33 * @txq_ctx_lock: Spinlock protecting access to the GLCOMM_QTX_CNTX_CTL register 34 34 * @ctrl_pf: Control PF of the adapter 35 35 * @ports: Ports list 36 - * @device_serial_number: DSN cached for collision detection on 32bit systems 36 + * @index: 64-bit index cached for collision detection on 32bit systems 37 37 */ 38 38 struct ice_adapter { 39 39 refcount_t refcount; ··· 44 44 45 45 struct ice_pf *ctrl_pf; 46 46 struct ice_port_list ports; 47 - u64 device_serial_number; 47 + u64 index; 48 48 }; 49 49 50 50 struct ice_adapter *ice_adapter_get(struct pci_dev *pdev);
+32 -12
drivers/net/ethernet/intel/ice/ice_ddp.c
··· 2377 2377 * The function will apply the new Tx topology from the package buffer 2378 2378 * if available. 2379 2379 * 2380 - * Return: zero when update was successful, negative values otherwise. 2380 + * Return: 2381 + * * 0 - Successfully applied topology configuration. 2382 + * * -EBUSY - Failed to acquire global configuration lock. 2383 + * * -EEXIST - Topology configuration has already been applied. 2384 + * * -EIO - Unable to apply topology configuration. 2385 + * * -ENODEV - Failed to re-initialize device after applying configuration. 2386 + * * Other negative error codes indicate unexpected failures. 2381 2387 */ 2382 2388 int ice_cfg_tx_topo(struct ice_hw *hw, const void *buf, u32 len) 2383 2389 { ··· 2416 2410 2417 2411 if (status) { 2418 2412 ice_debug(hw, ICE_DBG_INIT, "Get current topology is failed\n"); 2419 - return status; 2413 + return -EIO; 2420 2414 } 2421 2415 2422 2416 /* Is default topology already applied ? */ ··· 2503 2497 ICE_GLOBAL_CFG_LOCK_TIMEOUT); 2504 2498 if (status) { 2505 2499 ice_debug(hw, ICE_DBG_INIT, "Failed to acquire global lock\n"); 2506 - return status; 2500 + return -EBUSY; 2507 2501 } 2508 2502 2509 2503 /* Check if reset was triggered already. */ 2510 2504 reg = rd32(hw, GLGEN_RSTAT); 2511 2505 if (reg & GLGEN_RSTAT_DEVSTATE_M) { 2512 - /* Reset is in progress, re-init the HW again */ 2513 2506 ice_debug(hw, ICE_DBG_INIT, "Reset is in progress. Layer topology might be applied already\n"); 2514 2507 ice_check_reset(hw); 2515 - return 0; 2508 + /* Reset is in progress, re-init the HW again */ 2509 + goto reinit_hw; 2516 2510 } 2517 2511 2518 2512 /* Set new topology */ 2519 2513 status = ice_get_set_tx_topo(hw, new_topo, size, NULL, NULL, true); 2520 2514 if (status) { 2521 - ice_debug(hw, ICE_DBG_INIT, "Failed setting Tx topology\n"); 2522 - return status; 2515 + ice_debug(hw, ICE_DBG_INIT, "Failed to set Tx topology, status %pe\n", 2516 + ERR_PTR(status)); 2517 + /* only report -EIO here as the caller checks the error value 2518 + * and reports an informational error message informing that 2519 + * the driver failed to program Tx topology. 2520 + */ 2521 + status = -EIO; 2523 2522 } 2524 2523 2525 - /* New topology is updated, delay 1 second before issuing the CORER */ 2524 + /* Even if Tx topology config failed, we need to CORE reset here to 2525 + * clear the global configuration lock. Delay 1 second to allow 2526 + * hardware to settle then issue a CORER 2527 + */ 2526 2528 msleep(1000); 2527 2529 ice_reset(hw, ICE_RESET_CORER); 2528 - /* CORER will clear the global lock, so no explicit call 2529 - * required for release. 2530 - */ 2530 + ice_check_reset(hw); 2531 2531 2532 - return 0; 2532 + reinit_hw: 2533 + /* Since we triggered a CORER, re-initialize hardware */ 2534 + ice_deinit_hw(hw); 2535 + if (ice_init_hw(hw)) { 2536 + ice_debug(hw, ICE_DBG_INIT, "Failed to re-init hardware after setting Tx topology\n"); 2537 + return -ENODEV; 2538 + } 2539 + 2540 + return status; 2533 2541 }
+6 -4
drivers/net/ethernet/intel/ice/ice_idc.c
··· 336 336 mutex_lock(&pf->adev_mutex); 337 337 cdev->adev = adev; 338 338 mutex_unlock(&pf->adev_mutex); 339 + set_bit(ICE_FLAG_AUX_DEV_CREATED, pf->flags); 339 340 340 341 return 0; 341 342 } ··· 348 347 { 349 348 struct auxiliary_device *adev; 350 349 350 + if (!test_and_clear_bit(ICE_FLAG_AUX_DEV_CREATED, pf->flags)) 351 + return; 352 + 351 353 mutex_lock(&pf->adev_mutex); 352 354 adev = pf->cdev_info->adev; 353 355 pf->cdev_info->adev = NULL; 354 356 mutex_unlock(&pf->adev_mutex); 355 357 356 - if (adev) { 357 - auxiliary_device_delete(adev); 358 - auxiliary_device_uninit(adev); 359 - } 358 + auxiliary_device_delete(adev); 359 + auxiliary_device_uninit(adev); 360 360 } 361 361 362 362 /**
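Together with the ICE_FLAG_AUX_DEV_CREATED bit added in ice.h above, the ice_idc hunk makes auxiliary-device teardown run at most once: test_and_clear_bit() atomically consumes the flag, so a second unplug call returns early instead of deleting an already-removed device. A kernel-context sketch of the guard pattern (the plug/unplug wrappers, flag name and struct are illustrative; assumes <linux/bitops.h> and the auxiliary bus API):

    static void my_plug(struct my_pf *pf)
    {
        /* ... auxiliary_device_init() + auxiliary_device_add() ... */
        set_bit(MY_FLAG_AUX_DEV_CREATED, pf->flags);
    }

    static void my_unplug(struct my_pf *pf)
    {
        /* atomic test-and-clear: only the first caller gets to tear down */
        if (!test_and_clear_bit(MY_FLAG_AUX_DEV_CREATED, pf->flags))
            return;

        auxiliary_device_delete(pf->adev);
        auxiliary_device_uninit(pf->adev);
    }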
+11 -5
drivers/net/ethernet/intel/ice/ice_main.c
··· 4536 4536 dev_info(dev, "Tx scheduling layers switching feature disabled\n"); 4537 4537 else 4538 4538 dev_info(dev, "Tx scheduling layers switching feature enabled\n"); 4539 - /* if there was a change in topology ice_cfg_tx_topo triggered 4540 - * a CORER and we need to re-init hw 4539 + return 0; 4540 + } else if (err == -ENODEV) { 4541 + /* If we failed to re-initialize the device, we can no longer 4542 + * continue loading. 4541 4543 */ 4542 - ice_deinit_hw(hw); 4543 - err = ice_init_hw(hw); 4544 - 4544 + dev_warn(dev, "Failed to initialize hardware after applying Tx scheduling configuration.\n"); 4545 4545 return err; 4546 4546 } else if (err == -EIO) { 4547 4547 dev_info(dev, "DDP package does not support Tx scheduling layers switching feature - please update to the latest DDP package and try again\n"); 4548 + return 0; 4549 + } else if (err == -EEXIST) { 4550 + return 0; 4548 4551 } 4549 4552 4553 + /* Do not treat this as a fatal error. */ 4554 + dev_info(dev, "Failed to apply Tx scheduling configuration, err %pe\n", 4555 + ERR_PTR(err)); 4550 4556 return 0; 4551 4557 } 4552 4558
+1 -1
drivers/net/ethernet/intel/ice/ice_txrx.c
··· 1352 1352 skb = ice_construct_skb(rx_ring, xdp); 1353 1353 /* exit if we failed to retrieve a buffer */ 1354 1354 if (!skb) { 1355 - rx_ring->ring_stats->rx_stats.alloc_page_failed++; 1355 + rx_ring->ring_stats->rx_stats.alloc_buf_failed++; 1356 1356 xdp_verdict = ICE_XDP_CONSUMED; 1357 1357 } 1358 1358 ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
+57 -4
drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
··· 180 180 } 181 181 182 182 /** 183 + * idpf_tx_singleq_dma_map_error - handle TX DMA map errors 184 + * @txq: queue to send buffer on 185 + * @skb: send buffer 186 + * @first: original first buffer info buffer for packet 187 + * @idx: starting point on ring to unwind 188 + */ 189 + static void idpf_tx_singleq_dma_map_error(struct idpf_tx_queue *txq, 190 + struct sk_buff *skb, 191 + struct idpf_tx_buf *first, u16 idx) 192 + { 193 + struct libeth_sq_napi_stats ss = { }; 194 + struct libeth_cq_pp cp = { 195 + .dev = txq->dev, 196 + .ss = &ss, 197 + }; 198 + 199 + u64_stats_update_begin(&txq->stats_sync); 200 + u64_stats_inc(&txq->q_stats.dma_map_errs); 201 + u64_stats_update_end(&txq->stats_sync); 202 + 203 + /* clear dma mappings for failed tx_buf map */ 204 + for (;;) { 205 + struct idpf_tx_buf *tx_buf; 206 + 207 + tx_buf = &txq->tx_buf[idx]; 208 + libeth_tx_complete(tx_buf, &cp); 209 + if (tx_buf == first) 210 + break; 211 + if (idx == 0) 212 + idx = txq->desc_count; 213 + idx--; 214 + } 215 + 216 + if (skb_is_gso(skb)) { 217 + union idpf_tx_flex_desc *tx_desc; 218 + 219 + /* If we failed a DMA mapping for a TSO packet, we will have 220 + * used one additional descriptor for a context 221 + * descriptor. Reset that here. 222 + */ 223 + tx_desc = &txq->flex_tx[idx]; 224 + memset(tx_desc, 0, sizeof(*tx_desc)); 225 + if (idx == 0) 226 + idx = txq->desc_count; 227 + idx--; 228 + } 229 + 230 + /* Update tail in case netdev_xmit_more was previously true */ 231 + idpf_tx_buf_hw_update(txq, idx, false); 232 + } 233 + 234 + /** 183 235 * idpf_tx_singleq_map - Build the Tx base descriptor 184 236 * @tx_q: queue to send buffer on 185 237 * @first: first buffer info buffer to use ··· 271 219 for (frag = &skb_shinfo(skb)->frags[0];; frag++) { 272 220 unsigned int max_data = IDPF_TX_MAX_DESC_DATA_ALIGNED; 273 221 274 - if (dma_mapping_error(tx_q->dev, dma)) 275 - return idpf_tx_dma_map_error(tx_q, skb, first, i); 222 + if (unlikely(dma_mapping_error(tx_q->dev, dma))) 223 + return idpf_tx_singleq_dma_map_error(tx_q, skb, 224 + first, i); 276 225 277 226 /* record length, and DMA address */ 278 227 dma_unmap_len_set(tx_buf, len, size); ··· 415 362 { 416 363 struct idpf_tx_offload_params offload = { }; 417 364 struct idpf_tx_buf *first; 365 + u32 count, buf_count = 1; 418 366 int csum, tso, needed; 419 - unsigned int count; 420 367 __be16 protocol; 421 368 422 - count = idpf_tx_desc_count_required(tx_q, skb); 369 + count = idpf_tx_res_count_required(tx_q, skb, &buf_count); 423 370 if (unlikely(!count)) 424 371 return idpf_tx_drop_skb(tx_q, skb); 425 372
+281 -472
drivers/net/ethernet/intel/idpf/idpf_txrx.c
··· 8 8 #include "idpf_ptp.h" 9 9 #include "idpf_virtchnl.h" 10 10 11 - struct idpf_tx_stash { 12 - struct hlist_node hlist; 13 - struct libeth_sqe buf; 14 - }; 15 - 16 - #define idpf_tx_buf_compl_tag(buf) (*(u32 *)&(buf)->priv) 11 + #define idpf_tx_buf_next(buf) (*(u32 *)&(buf)->priv) 17 12 LIBETH_SQE_CHECK_PRIV(u32); 18 13 19 14 static bool idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs, 20 15 unsigned int count); 21 - 22 - /** 23 - * idpf_buf_lifo_push - push a buffer pointer onto stack 24 - * @stack: pointer to stack struct 25 - * @buf: pointer to buf to push 26 - * 27 - * Returns 0 on success, negative on failure 28 - **/ 29 - static int idpf_buf_lifo_push(struct idpf_buf_lifo *stack, 30 - struct idpf_tx_stash *buf) 31 - { 32 - if (unlikely(stack->top == stack->size)) 33 - return -ENOSPC; 34 - 35 - stack->bufs[stack->top++] = buf; 36 - 37 - return 0; 38 - } 39 - 40 - /** 41 - * idpf_buf_lifo_pop - pop a buffer pointer from stack 42 - * @stack: pointer to stack struct 43 - **/ 44 - static struct idpf_tx_stash *idpf_buf_lifo_pop(struct idpf_buf_lifo *stack) 45 - { 46 - if (unlikely(!stack->top)) 47 - return NULL; 48 - 49 - return stack->bufs[--stack->top]; 50 - } 51 16 52 17 /** 53 18 * idpf_tx_timeout - Respond to a Tx Hang ··· 42 77 static void idpf_tx_buf_rel_all(struct idpf_tx_queue *txq) 43 78 { 44 79 struct libeth_sq_napi_stats ss = { }; 45 - struct idpf_buf_lifo *buf_stack; 46 - struct idpf_tx_stash *stash; 47 80 struct libeth_cq_pp cp = { 48 81 .dev = txq->dev, 49 82 .ss = &ss, 50 83 }; 51 - struct hlist_node *tmp; 52 - u32 i, tag; 84 + u32 i; 53 85 54 86 /* Buffers already cleared, nothing to do */ 55 87 if (!txq->tx_buf) 56 88 return; 57 89 58 90 /* Free all the Tx buffer sk_buffs */ 59 - for (i = 0; i < txq->desc_count; i++) 91 + for (i = 0; i < txq->buf_pool_size; i++) 60 92 libeth_tx_complete(&txq->tx_buf[i], &cp); 61 93 62 94 kfree(txq->tx_buf); 63 95 txq->tx_buf = NULL; 64 - 65 - if (!idpf_queue_has(FLOW_SCH_EN, txq)) 66 - return; 67 - 68 - buf_stack = &txq->stash->buf_stack; 69 - if (!buf_stack->bufs) 70 - return; 71 - 72 - /* 73 - * If a Tx timeout occurred, there are potentially still bufs in the 74 - * hash table, free them here. 75 - */ 76 - hash_for_each_safe(txq->stash->sched_buf_hash, tag, tmp, stash, 77 - hlist) { 78 - if (!stash) 79 - continue; 80 - 81 - libeth_tx_complete(&stash->buf, &cp); 82 - hash_del(&stash->hlist); 83 - idpf_buf_lifo_push(buf_stack, stash); 84 - } 85 - 86 - for (i = 0; i < buf_stack->size; i++) 87 - kfree(buf_stack->bufs[i]); 88 - 89 - kfree(buf_stack->bufs); 90 - buf_stack->bufs = NULL; 91 96 } 92 97 93 98 /** ··· 73 138 74 139 if (!txq->desc_ring) 75 140 return; 141 + 142 + if (txq->refillq) 143 + kfree(txq->refillq->ring); 76 144 77 145 dmam_free_coherent(txq->dev, txq->size, txq->desc_ring, txq->dma); 78 146 txq->desc_ring = NULL; ··· 133 195 */ 134 196 static int idpf_tx_buf_alloc_all(struct idpf_tx_queue *tx_q) 135 197 { 136 - struct idpf_buf_lifo *buf_stack; 137 - int buf_size; 138 - int i; 139 - 140 198 /* Allocate book keeping buffers only. 
Buffers to be supplied to HW 141 199 * are allocated by kernel network stack and received as part of skb 142 200 */ 143 - buf_size = sizeof(struct idpf_tx_buf) * tx_q->desc_count; 144 - tx_q->tx_buf = kzalloc(buf_size, GFP_KERNEL); 201 + if (idpf_queue_has(FLOW_SCH_EN, tx_q)) 202 + tx_q->buf_pool_size = U16_MAX; 203 + else 204 + tx_q->buf_pool_size = tx_q->desc_count; 205 + tx_q->tx_buf = kcalloc(tx_q->buf_pool_size, sizeof(*tx_q->tx_buf), 206 + GFP_KERNEL); 145 207 if (!tx_q->tx_buf) 146 208 return -ENOMEM; 147 - 148 - if (!idpf_queue_has(FLOW_SCH_EN, tx_q)) 149 - return 0; 150 - 151 - buf_stack = &tx_q->stash->buf_stack; 152 - 153 - /* Initialize tx buf stack for out-of-order completions if 154 - * flow scheduling offload is enabled 155 - */ 156 - buf_stack->bufs = kcalloc(tx_q->desc_count, sizeof(*buf_stack->bufs), 157 - GFP_KERNEL); 158 - if (!buf_stack->bufs) 159 - return -ENOMEM; 160 - 161 - buf_stack->size = tx_q->desc_count; 162 - buf_stack->top = tx_q->desc_count; 163 - 164 - for (i = 0; i < tx_q->desc_count; i++) { 165 - buf_stack->bufs[i] = kzalloc(sizeof(*buf_stack->bufs[i]), 166 - GFP_KERNEL); 167 - if (!buf_stack->bufs[i]) 168 - return -ENOMEM; 169 - } 170 209 171 210 return 0; 172 211 } ··· 159 244 struct idpf_tx_queue *tx_q) 160 245 { 161 246 struct device *dev = tx_q->dev; 247 + struct idpf_sw_queue *refillq; 162 248 int err; 163 249 164 250 err = idpf_tx_buf_alloc_all(tx_q); ··· 182 266 tx_q->next_to_use = 0; 183 267 tx_q->next_to_clean = 0; 184 268 idpf_queue_set(GEN_CHK, tx_q); 269 + 270 + if (!idpf_queue_has(FLOW_SCH_EN, tx_q)) 271 + return 0; 272 + 273 + refillq = tx_q->refillq; 274 + refillq->desc_count = tx_q->buf_pool_size; 275 + refillq->ring = kcalloc(refillq->desc_count, sizeof(u32), 276 + GFP_KERNEL); 277 + if (!refillq->ring) { 278 + err = -ENOMEM; 279 + goto err_alloc; 280 + } 281 + 282 + for (unsigned int i = 0; i < refillq->desc_count; i++) 283 + refillq->ring[i] = 284 + FIELD_PREP(IDPF_RFL_BI_BUFID_M, i) | 285 + FIELD_PREP(IDPF_RFL_BI_GEN_M, 286 + idpf_queue_has(GEN_CHK, refillq)); 287 + 288 + /* Go ahead and flip the GEN bit since this counts as filling 289 + * up the ring, i.e. we already ring wrapped. 290 + */ 291 + idpf_queue_change(GEN_CHK, refillq); 292 + 293 + tx_q->last_re = tx_q->desc_count - IDPF_TX_SPLITQ_RE_MIN_GAP; 185 294 186 295 return 0; 187 296 ··· 258 317 for (i = 0; i < vport->num_txq_grp; i++) { 259 318 for (j = 0; j < vport->txq_grps[i].num_txq; j++) { 260 319 struct idpf_tx_queue *txq = vport->txq_grps[i].txqs[j]; 261 - u8 gen_bits = 0; 262 - u16 bufidx_mask; 263 320 264 321 err = idpf_tx_desc_alloc(vport, txq); 265 322 if (err) { ··· 266 327 i); 267 328 goto err_out; 268 329 } 269 - 270 - if (!idpf_is_queue_model_split(vport->txq_model)) 271 - continue; 272 - 273 - txq->compl_tag_cur_gen = 0; 274 - 275 - /* Determine the number of bits in the bufid 276 - * mask and add one to get the start of the 277 - * generation bits 278 - */ 279 - bufidx_mask = txq->desc_count - 1; 280 - while (bufidx_mask >> 1) { 281 - txq->compl_tag_gen_s++; 282 - bufidx_mask = bufidx_mask >> 1; 283 - } 284 - txq->compl_tag_gen_s++; 285 - 286 - gen_bits = IDPF_TX_SPLITQ_COMPL_TAG_WIDTH - 287 - txq->compl_tag_gen_s; 288 - txq->compl_tag_gen_max = GETMAXVAL(gen_bits); 289 - 290 - /* Set bufid mask based on location of first 291 - * gen bit; it cannot simply be the descriptor 292 - * ring size-1 since we can have size values 293 - * where not all of those bits are set. 
294 - */ 295 - txq->compl_tag_bufid_m = 296 - GETMAXVAL(txq->compl_tag_gen_s); 297 330 } 298 331 299 332 if (!idpf_is_queue_model_split(vport->txq_model)) ··· 514 603 } 515 604 516 605 /** 517 - * idpf_rx_post_buf_refill - Post buffer id to refill queue 606 + * idpf_post_buf_refill - Post buffer id to refill queue 518 607 * @refillq: refill queue to post to 519 608 * @buf_id: buffer id to post 520 609 */ 521 - static void idpf_rx_post_buf_refill(struct idpf_sw_queue *refillq, u16 buf_id) 610 + static void idpf_post_buf_refill(struct idpf_sw_queue *refillq, u16 buf_id) 522 611 { 523 612 u32 nta = refillq->next_to_use; 524 613 525 614 /* store the buffer ID and the SW maintained GEN bit to the refillq */ 526 615 refillq->ring[nta] = 527 - FIELD_PREP(IDPF_RX_BI_BUFID_M, buf_id) | 528 - FIELD_PREP(IDPF_RX_BI_GEN_M, 616 + FIELD_PREP(IDPF_RFL_BI_BUFID_M, buf_id) | 617 + FIELD_PREP(IDPF_RFL_BI_GEN_M, 529 618 idpf_queue_has(GEN_CHK, refillq)); 530 619 531 620 if (unlikely(++nta == refillq->desc_count)) { ··· 906 995 struct idpf_txq_group *txq_grp = &vport->txq_grps[i]; 907 996 908 997 for (j = 0; j < txq_grp->num_txq; j++) { 998 + if (flow_sch_en) { 999 + kfree(txq_grp->txqs[j]->refillq); 1000 + txq_grp->txqs[j]->refillq = NULL; 1001 + } 1002 + 909 1003 kfree(txq_grp->txqs[j]); 910 1004 txq_grp->txqs[j] = NULL; 911 1005 } ··· 920 1004 921 1005 kfree(txq_grp->complq); 922 1006 txq_grp->complq = NULL; 923 - 924 - if (flow_sch_en) 925 - kfree(txq_grp->stashes); 926 1007 } 927 1008 kfree(vport->txq_grps); 928 1009 vport->txq_grps = NULL; ··· 1280 1367 for (i = 0; i < vport->num_txq_grp; i++) { 1281 1368 struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; 1282 1369 struct idpf_adapter *adapter = vport->adapter; 1283 - struct idpf_txq_stash *stashes; 1284 1370 int j; 1285 1371 1286 1372 tx_qgrp->vport = vport; ··· 1290 1378 GFP_KERNEL); 1291 1379 if (!tx_qgrp->txqs[j]) 1292 1380 goto err_alloc; 1293 - } 1294 - 1295 - if (split && flow_sch_en) { 1296 - stashes = kcalloc(num_txq, sizeof(*stashes), 1297 - GFP_KERNEL); 1298 - if (!stashes) 1299 - goto err_alloc; 1300 - 1301 - tx_qgrp->stashes = stashes; 1302 1381 } 1303 1382 1304 1383 for (j = 0; j < tx_qgrp->num_txq; j++) { ··· 1311 1408 if (!flow_sch_en) 1312 1409 continue; 1313 1410 1314 - if (split) { 1315 - q->stash = &stashes[j]; 1316 - hash_init(q->stash->sched_buf_hash); 1317 - } 1318 - 1319 1411 idpf_queue_set(FLOW_SCH_EN, q); 1412 + 1413 + q->refillq = kzalloc(sizeof(*q->refillq), GFP_KERNEL); 1414 + if (!q->refillq) 1415 + goto err_alloc; 1416 + 1417 + idpf_queue_set(GEN_CHK, q->refillq); 1418 + idpf_queue_set(RFL_GEN_CHK, q->refillq); 1320 1419 } 1321 1420 1322 1421 if (!split) ··· 1602 1697 spin_unlock_bh(&tx_tstamp_caps->status_lock); 1603 1698 } 1604 1699 1605 - /** 1606 - * idpf_tx_clean_stashed_bufs - clean bufs that were stored for 1607 - * out of order completions 1608 - * @txq: queue to clean 1609 - * @compl_tag: completion tag of packet to clean (from completion descriptor) 1610 - * @cleaned: pointer to stats struct to track cleaned packets/bytes 1611 - * @budget: Used to determine if we are in netpoll 1612 - */ 1613 - static void idpf_tx_clean_stashed_bufs(struct idpf_tx_queue *txq, 1614 - u16 compl_tag, 1615 - struct libeth_sq_napi_stats *cleaned, 1616 - int budget) 1617 - { 1618 - struct idpf_tx_stash *stash; 1619 - struct hlist_node *tmp_buf; 1620 - struct libeth_cq_pp cp = { 1621 - .dev = txq->dev, 1622 - .ss = cleaned, 1623 - .napi = budget, 1624 - }; 1625 - 1626 - /* Buffer completion */ 1627 - 
hash_for_each_possible_safe(txq->stash->sched_buf_hash, stash, tmp_buf, 1628 - hlist, compl_tag) { 1629 - if (unlikely(idpf_tx_buf_compl_tag(&stash->buf) != compl_tag)) 1630 - continue; 1631 - 1632 - hash_del(&stash->hlist); 1633 - 1634 - if (stash->buf.type == LIBETH_SQE_SKB && 1635 - (skb_shinfo(stash->buf.skb)->tx_flags & SKBTX_IN_PROGRESS)) 1636 - idpf_tx_read_tstamp(txq, stash->buf.skb); 1637 - 1638 - libeth_tx_complete(&stash->buf, &cp); 1639 - 1640 - /* Push shadow buf back onto stack */ 1641 - idpf_buf_lifo_push(&txq->stash->buf_stack, stash); 1642 - } 1643 - } 1644 - 1645 - /** 1646 - * idpf_stash_flow_sch_buffers - store buffer parameters info to be freed at a 1647 - * later time (only relevant for flow scheduling mode) 1648 - * @txq: Tx queue to clean 1649 - * @tx_buf: buffer to store 1650 - */ 1651 - static int idpf_stash_flow_sch_buffers(struct idpf_tx_queue *txq, 1652 - struct idpf_tx_buf *tx_buf) 1653 - { 1654 - struct idpf_tx_stash *stash; 1655 - 1656 - if (unlikely(tx_buf->type <= LIBETH_SQE_CTX)) 1657 - return 0; 1658 - 1659 - stash = idpf_buf_lifo_pop(&txq->stash->buf_stack); 1660 - if (unlikely(!stash)) { 1661 - net_err_ratelimited("%s: No out-of-order TX buffers left!\n", 1662 - netdev_name(txq->netdev)); 1663 - 1664 - return -ENOMEM; 1665 - } 1666 - 1667 - /* Store buffer params in shadow buffer */ 1668 - stash->buf.skb = tx_buf->skb; 1669 - stash->buf.bytes = tx_buf->bytes; 1670 - stash->buf.packets = tx_buf->packets; 1671 - stash->buf.type = tx_buf->type; 1672 - stash->buf.nr_frags = tx_buf->nr_frags; 1673 - dma_unmap_addr_set(&stash->buf, dma, dma_unmap_addr(tx_buf, dma)); 1674 - dma_unmap_len_set(&stash->buf, len, dma_unmap_len(tx_buf, len)); 1675 - idpf_tx_buf_compl_tag(&stash->buf) = idpf_tx_buf_compl_tag(tx_buf); 1676 - 1677 - /* Add buffer to buf_hash table to be freed later */ 1678 - hash_add(txq->stash->sched_buf_hash, &stash->hlist, 1679 - idpf_tx_buf_compl_tag(&stash->buf)); 1680 - 1681 - tx_buf->type = LIBETH_SQE_EMPTY; 1682 - 1683 - return 0; 1684 - } 1685 - 1686 1700 #define idpf_tx_splitq_clean_bump_ntc(txq, ntc, desc, buf) \ 1687 1701 do { \ 1688 1702 if (unlikely(++(ntc) == (txq)->desc_count)) { \ ··· 1629 1805 * Separate packet completion events will be reported on the completion queue, 1630 1806 * and the buffers will be cleaned separately. The stats are not updated from 1631 1807 * this function when using flow-based scheduling. 1632 - * 1633 - * Furthermore, in flow scheduling mode, check to make sure there are enough 1634 - * reserve buffers to stash the packet. If there are not, return early, which 1635 - * will leave next_to_clean pointing to the packet that failed to be stashed. 1636 - * 1637 - * Return: false in the scenario above, true otherwise. 1638 1808 */ 1639 - static bool idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end, 1809 + static void idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end, 1640 1810 int napi_budget, 1641 1811 struct libeth_sq_napi_stats *cleaned, 1642 1812 bool descs_only) ··· 1644 1826 .napi = napi_budget, 1645 1827 }; 1646 1828 struct idpf_tx_buf *tx_buf; 1647 - bool clean_complete = true; 1829 + 1830 + if (descs_only) { 1831 + /* Bump ring index to mark as cleaned. 
*/ 1832 + tx_q->next_to_clean = end; 1833 + return; 1834 + } 1648 1835 1649 1836 tx_desc = &tx_q->flex_tx[ntc]; 1650 1837 next_pending_desc = &tx_q->flex_tx[end]; ··· 1669 1846 break; 1670 1847 1671 1848 eop_idx = tx_buf->rs_idx; 1849 + libeth_tx_complete(tx_buf, &cp); 1672 1850 1673 - if (descs_only) { 1674 - if (IDPF_TX_BUF_RSV_UNUSED(tx_q) < tx_buf->nr_frags) { 1675 - clean_complete = false; 1676 - goto tx_splitq_clean_out; 1677 - } 1851 + /* unmap remaining buffers */ 1852 + while (ntc != eop_idx) { 1853 + idpf_tx_splitq_clean_bump_ntc(tx_q, ntc, 1854 + tx_desc, tx_buf); 1678 1855 1679 - idpf_stash_flow_sch_buffers(tx_q, tx_buf); 1680 - 1681 - while (ntc != eop_idx) { 1682 - idpf_tx_splitq_clean_bump_ntc(tx_q, ntc, 1683 - tx_desc, tx_buf); 1684 - idpf_stash_flow_sch_buffers(tx_q, tx_buf); 1685 - } 1686 - } else { 1856 + /* unmap any remaining paged data */ 1687 1857 libeth_tx_complete(tx_buf, &cp); 1688 - 1689 - /* unmap remaining buffers */ 1690 - while (ntc != eop_idx) { 1691 - idpf_tx_splitq_clean_bump_ntc(tx_q, ntc, 1692 - tx_desc, tx_buf); 1693 - 1694 - /* unmap any remaining paged data */ 1695 - libeth_tx_complete(tx_buf, &cp); 1696 - } 1697 1858 } 1698 1859 1699 1860 fetch_next_txq_desc: 1700 1861 idpf_tx_splitq_clean_bump_ntc(tx_q, ntc, tx_desc, tx_buf); 1701 1862 } 1702 1863 1703 - tx_splitq_clean_out: 1704 1864 tx_q->next_to_clean = ntc; 1705 - 1706 - return clean_complete; 1707 1865 } 1708 1866 1709 - #define idpf_tx_clean_buf_ring_bump_ntc(txq, ntc, buf) \ 1710 - do { \ 1711 - (buf)++; \ 1712 - (ntc)++; \ 1713 - if (unlikely((ntc) == (txq)->desc_count)) { \ 1714 - buf = (txq)->tx_buf; \ 1715 - ntc = 0; \ 1716 - } \ 1717 - } while (0) 1718 - 1719 1867 /** 1720 - * idpf_tx_clean_buf_ring - clean flow scheduling TX queue buffers 1868 + * idpf_tx_clean_bufs - clean flow scheduling TX queue buffers 1721 1869 * @txq: queue to clean 1722 - * @compl_tag: completion tag of packet to clean (from completion descriptor) 1870 + * @buf_id: packet's starting buffer ID, from completion descriptor 1723 1871 * @cleaned: pointer to stats struct to track cleaned packets/bytes 1724 1872 * @budget: Used to determine if we are in netpoll 1725 1873 * 1726 - * Cleans all buffers associated with the input completion tag either from the 1727 - * TX buffer ring or from the hash table if the buffers were previously 1728 - * stashed. Returns the byte/segment count for the cleaned packet associated 1729 - * this completion tag. 1874 + * Clean all buffers associated with the packet starting at buf_id. Returns the 1875 + * byte/segment count for the cleaned packet. 
1730 1876 */ 1731 - static bool idpf_tx_clean_buf_ring(struct idpf_tx_queue *txq, u16 compl_tag, 1732 - struct libeth_sq_napi_stats *cleaned, 1733 - int budget) 1877 + static void idpf_tx_clean_bufs(struct idpf_tx_queue *txq, u32 buf_id, 1878 + struct libeth_sq_napi_stats *cleaned, 1879 + int budget) 1734 1880 { 1735 - u16 idx = compl_tag & txq->compl_tag_bufid_m; 1736 1881 struct idpf_tx_buf *tx_buf = NULL; 1737 1882 struct libeth_cq_pp cp = { 1738 1883 .dev = txq->dev, 1739 1884 .ss = cleaned, 1740 1885 .napi = budget, 1741 1886 }; 1742 - u16 ntc, orig_idx = idx; 1743 1887 1744 - tx_buf = &txq->tx_buf[idx]; 1745 - 1746 - if (unlikely(tx_buf->type <= LIBETH_SQE_CTX || 1747 - idpf_tx_buf_compl_tag(tx_buf) != compl_tag)) 1748 - return false; 1749 - 1888 + tx_buf = &txq->tx_buf[buf_id]; 1750 1889 if (tx_buf->type == LIBETH_SQE_SKB) { 1751 1890 if (skb_shinfo(tx_buf->skb)->tx_flags & SKBTX_IN_PROGRESS) 1752 1891 idpf_tx_read_tstamp(txq, tx_buf->skb); 1753 1892 1754 1893 libeth_tx_complete(tx_buf, &cp); 1894 + idpf_post_buf_refill(txq->refillq, buf_id); 1755 1895 } 1756 1896 1757 - idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf); 1897 + while (idpf_tx_buf_next(tx_buf) != IDPF_TXBUF_NULL) { 1898 + buf_id = idpf_tx_buf_next(tx_buf); 1758 1899 1759 - while (idpf_tx_buf_compl_tag(tx_buf) == compl_tag) { 1900 + tx_buf = &txq->tx_buf[buf_id]; 1760 1901 libeth_tx_complete(tx_buf, &cp); 1761 - idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf); 1902 + idpf_post_buf_refill(txq->refillq, buf_id); 1762 1903 } 1763 - 1764 - /* 1765 - * It's possible the packet we just cleaned was an out of order 1766 - * completion, which means we can stash the buffers starting from 1767 - * the original next_to_clean and reuse the descriptors. We need 1768 - * to compare the descriptor ring next_to_clean packet's "first" buffer 1769 - * to the "first" buffer of the packet we just cleaned to determine if 1770 - * this is the case. Howevever, next_to_clean can point to either a 1771 - * reserved buffer that corresponds to a context descriptor used for the 1772 - * next_to_clean packet (TSO packet) or the "first" buffer (single 1773 - * packet). The orig_idx from the packet we just cleaned will always 1774 - * point to the "first" buffer. If next_to_clean points to a reserved 1775 - * buffer, let's bump ntc once and start the comparison from there. 1776 - */ 1777 - ntc = txq->next_to_clean; 1778 - tx_buf = &txq->tx_buf[ntc]; 1779 - 1780 - if (tx_buf->type == LIBETH_SQE_CTX) 1781 - idpf_tx_clean_buf_ring_bump_ntc(txq, ntc, tx_buf); 1782 - 1783 - /* 1784 - * If ntc still points to a different "first" buffer, clean the 1785 - * descriptor ring and stash all of the buffers for later cleaning. If 1786 - * we cannot stash all of the buffers, next_to_clean will point to the 1787 - * "first" buffer of the packet that could not be stashed and cleaning 1788 - * will start there next time. 1789 - */ 1790 - if (unlikely(tx_buf != &txq->tx_buf[orig_idx] && 1791 - !idpf_tx_splitq_clean(txq, orig_idx, budget, cleaned, 1792 - true))) 1793 - return true; 1794 - 1795 - /* 1796 - * Otherwise, update next_to_clean to reflect the cleaning that was 1797 - * done above. 1798 - */ 1799 - txq->next_to_clean = idx; 1800 - 1801 - return true; 1802 1904 } 1803 1905 1804 1906 /** ··· 1742 1994 struct libeth_sq_napi_stats *cleaned, 1743 1995 int budget) 1744 1996 { 1745 - u16 compl_tag; 1997 + /* RS completion contains queue head for queue based scheduling or 1998 + * completion tag for flow based scheduling. 
1999 + */ 2000 + u16 rs_compl_val = le16_to_cpu(desc->q_head_compl_tag.q_head); 1746 2001 1747 2002 if (!idpf_queue_has(FLOW_SCH_EN, txq)) { 1748 - u16 head = le16_to_cpu(desc->q_head_compl_tag.q_head); 1749 - 1750 - idpf_tx_splitq_clean(txq, head, budget, cleaned, false); 2003 + idpf_tx_splitq_clean(txq, rs_compl_val, budget, cleaned, false); 1751 2004 return; 1752 2005 } 1753 2006 1754 - compl_tag = le16_to_cpu(desc->q_head_compl_tag.compl_tag); 1755 - 1756 - /* If we didn't clean anything on the ring, this packet must be 1757 - * in the hash table. Go clean it there. 1758 - */ 1759 - if (!idpf_tx_clean_buf_ring(txq, compl_tag, cleaned, budget)) 1760 - idpf_tx_clean_stashed_bufs(txq, compl_tag, cleaned, budget); 2007 + idpf_tx_clean_bufs(txq, rs_compl_val, cleaned, budget); 1761 2008 } 1762 2009 1763 2010 /** ··· 1869 2126 /* Update BQL */ 1870 2127 nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx); 1871 2128 1872 - dont_wake = !complq_ok || IDPF_TX_BUF_RSV_LOW(tx_q) || 1873 - np->state != __IDPF_VPORT_UP || 2129 + dont_wake = !complq_ok || np->state != __IDPF_VPORT_UP || 1874 2130 !netif_carrier_ok(tx_q->netdev); 1875 2131 /* Check if the TXQ needs to and can be restarted */ 1876 2132 __netif_txq_completed_wake(nq, tx_q->cleaned_pkts, tx_q->cleaned_bytes, ··· 1926 2184 desc->flow.qw1.compl_tag = cpu_to_le16(params->compl_tag); 1927 2185 } 1928 2186 1929 - /* Global conditions to tell whether the txq (and related resources) 1930 - * has room to allow the use of "size" descriptors. 2187 + /** 2188 + * idpf_tx_splitq_has_room - check if enough Tx splitq resources are available 2189 + * @tx_q: the queue to be checked 2190 + * @descs_needed: number of descriptors required for this packet 2191 + * @bufs_needed: number of Tx buffers required for this packet 2192 + * 2193 + * Return: 0 if no room available, 1 otherwise 1931 2194 */ 1932 - static int idpf_txq_has_room(struct idpf_tx_queue *tx_q, u32 size) 2195 + static int idpf_txq_has_room(struct idpf_tx_queue *tx_q, u32 descs_needed, 2196 + u32 bufs_needed) 1933 2197 { 1934 - if (IDPF_DESC_UNUSED(tx_q) < size || 2198 + if (IDPF_DESC_UNUSED(tx_q) < descs_needed || 1935 2199 IDPF_TX_COMPLQ_PENDING(tx_q->txq_grp) > 1936 2200 IDPF_TX_COMPLQ_OVERFLOW_THRESH(tx_q->txq_grp->complq) || 1937 - IDPF_TX_BUF_RSV_LOW(tx_q)) 2201 + idpf_tx_splitq_get_free_bufs(tx_q->refillq) < bufs_needed) 1938 2202 return 0; 1939 2203 return 1; 1940 2204 } ··· 1949 2201 * idpf_tx_maybe_stop_splitq - 1st level check for Tx splitq stop conditions 1950 2202 * @tx_q: the queue to be checked 1951 2203 * @descs_needed: number of descriptors required for this packet 2204 + * @bufs_needed: number of buffers needed for this packet 1952 2205 * 1953 - * Returns 0 if stop is not needed 2206 + * Return: 0 if stop is not needed 1954 2207 */ 1955 2208 static int idpf_tx_maybe_stop_splitq(struct idpf_tx_queue *tx_q, 1956 - unsigned int descs_needed) 2209 + u32 descs_needed, 2210 + u32 bufs_needed) 1957 2211 { 2212 + /* Since we have multiple resources to check for splitq, our 2213 + * start,stop_thrs becomes a boolean check instead of a count 2214 + * threshold. 
2215 + */ 1958 2216 if (netif_subqueue_maybe_stop(tx_q->netdev, tx_q->idx, 1959 - idpf_txq_has_room(tx_q, descs_needed), 2217 + idpf_txq_has_room(tx_q, descs_needed, 2218 + bufs_needed), 1960 2219 1, 1)) 1961 2220 return 0; 1962 2221 ··· 2005 2250 } 2006 2251 2007 2252 /** 2008 - * idpf_tx_desc_count_required - calculate number of Tx descriptors needed 2253 + * idpf_tx_res_count_required - get number of Tx resources needed for this pkt 2009 2254 * @txq: queue to send buffer on 2010 2255 * @skb: send buffer 2256 + * @bufs_needed: (output) number of buffers needed for this skb. 2011 2257 * 2012 - * Returns number of data descriptors needed for this skb. 2258 + * Return: number of data descriptors and buffers needed for this skb. 2013 2259 */ 2014 - unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq, 2015 - struct sk_buff *skb) 2260 + unsigned int idpf_tx_res_count_required(struct idpf_tx_queue *txq, 2261 + struct sk_buff *skb, 2262 + u32 *bufs_needed) 2016 2263 { 2017 2264 const struct skb_shared_info *shinfo; 2018 2265 unsigned int count = 0, i; ··· 2025 2268 return count; 2026 2269 2027 2270 shinfo = skb_shinfo(skb); 2271 + *bufs_needed += shinfo->nr_frags; 2028 2272 for (i = 0; i < shinfo->nr_frags; i++) { 2029 2273 unsigned int size; 2030 2274 ··· 2055 2297 } 2056 2298 2057 2299 /** 2058 - * idpf_tx_dma_map_error - handle TX DMA map errors 2059 - * @txq: queue to send buffer on 2060 - * @skb: send buffer 2061 - * @first: original first buffer info buffer for packet 2062 - * @idx: starting point on ring to unwind 2063 - */ 2064 - void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb, 2065 - struct idpf_tx_buf *first, u16 idx) 2066 - { 2067 - struct libeth_sq_napi_stats ss = { }; 2068 - struct libeth_cq_pp cp = { 2069 - .dev = txq->dev, 2070 - .ss = &ss, 2071 - }; 2072 - 2073 - u64_stats_update_begin(&txq->stats_sync); 2074 - u64_stats_inc(&txq->q_stats.dma_map_errs); 2075 - u64_stats_update_end(&txq->stats_sync); 2076 - 2077 - /* clear dma mappings for failed tx_buf map */ 2078 - for (;;) { 2079 - struct idpf_tx_buf *tx_buf; 2080 - 2081 - tx_buf = &txq->tx_buf[idx]; 2082 - libeth_tx_complete(tx_buf, &cp); 2083 - if (tx_buf == first) 2084 - break; 2085 - if (idx == 0) 2086 - idx = txq->desc_count; 2087 - idx--; 2088 - } 2089 - 2090 - if (skb_is_gso(skb)) { 2091 - union idpf_tx_flex_desc *tx_desc; 2092 - 2093 - /* If we failed a DMA mapping for a TSO packet, we will have 2094 - * used one additional descriptor for a context 2095 - * descriptor. Reset that here. 
2096 - */ 2097 - tx_desc = &txq->flex_tx[idx]; 2098 - memset(tx_desc, 0, sizeof(*tx_desc)); 2099 - if (idx == 0) 2100 - idx = txq->desc_count; 2101 - idx--; 2102 - } 2103 - 2104 - /* Update tail in case netdev_xmit_more was previously true */ 2105 - idpf_tx_buf_hw_update(txq, idx, false); 2106 - } 2107 - 2108 - /** 2109 2300 * idpf_tx_splitq_bump_ntu - adjust NTU and generation 2110 2301 * @txq: the tx ring to wrap 2111 2302 * @ntu: ring index to bump ··· 2063 2356 { 2064 2357 ntu++; 2065 2358 2066 - if (ntu == txq->desc_count) { 2359 + if (ntu == txq->desc_count) 2067 2360 ntu = 0; 2068 - txq->compl_tag_cur_gen = IDPF_TX_ADJ_COMPL_TAG_GEN(txq); 2069 - } 2070 2361 2071 2362 return ntu; 2363 + } 2364 + 2365 + /** 2366 + * idpf_tx_get_free_buf_id - get a free buffer ID from the refill queue 2367 + * @refillq: refill queue to get buffer ID from 2368 + * @buf_id: return buffer ID 2369 + * 2370 + * Return: true if a buffer ID was found, false if not 2371 + */ 2372 + static bool idpf_tx_get_free_buf_id(struct idpf_sw_queue *refillq, 2373 + u32 *buf_id) 2374 + { 2375 + u32 ntc = refillq->next_to_clean; 2376 + u32 refill_desc; 2377 + 2378 + refill_desc = refillq->ring[ntc]; 2379 + 2380 + if (unlikely(idpf_queue_has(RFL_GEN_CHK, refillq) != 2381 + !!(refill_desc & IDPF_RFL_BI_GEN_M))) 2382 + return false; 2383 + 2384 + *buf_id = FIELD_GET(IDPF_RFL_BI_BUFID_M, refill_desc); 2385 + 2386 + if (unlikely(++ntc == refillq->desc_count)) { 2387 + idpf_queue_change(RFL_GEN_CHK, refillq); 2388 + ntc = 0; 2389 + } 2390 + 2391 + refillq->next_to_clean = ntc; 2392 + 2393 + return true; 2394 + } 2395 + 2396 + /** 2397 + * idpf_tx_splitq_pkt_err_unmap - Unmap buffers and bump tail in case of error 2398 + * @txq: Tx queue to unwind 2399 + * @params: pointer to splitq params struct 2400 + * @first: starting buffer for packet to unmap 2401 + */ 2402 + static void idpf_tx_splitq_pkt_err_unmap(struct idpf_tx_queue *txq, 2403 + struct idpf_tx_splitq_params *params, 2404 + struct idpf_tx_buf *first) 2405 + { 2406 + struct idpf_sw_queue *refillq = txq->refillq; 2407 + struct libeth_sq_napi_stats ss = { }; 2408 + struct idpf_tx_buf *tx_buf = first; 2409 + struct libeth_cq_pp cp = { 2410 + .dev = txq->dev, 2411 + .ss = &ss, 2412 + }; 2413 + 2414 + u64_stats_update_begin(&txq->stats_sync); 2415 + u64_stats_inc(&txq->q_stats.dma_map_errs); 2416 + u64_stats_update_end(&txq->stats_sync); 2417 + 2418 + libeth_tx_complete(tx_buf, &cp); 2419 + while (idpf_tx_buf_next(tx_buf) != IDPF_TXBUF_NULL) { 2420 + tx_buf = &txq->tx_buf[idpf_tx_buf_next(tx_buf)]; 2421 + libeth_tx_complete(tx_buf, &cp); 2422 + } 2423 + 2424 + /* Update tail in case netdev_xmit_more was previously true. */ 2425 + idpf_tx_buf_hw_update(txq, params->prev_ntu, false); 2426 + 2427 + if (!refillq) 2428 + return; 2429 + 2430 + /* Restore refillq state to avoid leaking tags. 
*/ 2431 + if (params->prev_refill_gen != idpf_queue_has(RFL_GEN_CHK, refillq)) 2432 + idpf_queue_change(RFL_GEN_CHK, refillq); 2433 + refillq->next_to_clean = params->prev_refill_ntc; 2072 2434 } 2073 2435 2074 2436 /** ··· 2161 2385 struct netdev_queue *nq; 2162 2386 struct sk_buff *skb; 2163 2387 skb_frag_t *frag; 2388 + u32 next_buf_id; 2164 2389 u16 td_cmd = 0; 2165 2390 dma_addr_t dma; 2166 2391 ··· 2179 2402 tx_buf = first; 2180 2403 first->nr_frags = 0; 2181 2404 2182 - params->compl_tag = 2183 - (tx_q->compl_tag_cur_gen << tx_q->compl_tag_gen_s) | i; 2184 - 2185 2405 for (frag = &skb_shinfo(skb)->frags[0];; frag++) { 2186 2406 unsigned int max_data = IDPF_TX_MAX_DESC_DATA_ALIGNED; 2187 2407 2188 - if (dma_mapping_error(tx_q->dev, dma)) 2189 - return idpf_tx_dma_map_error(tx_q, skb, first, i); 2408 + if (unlikely(dma_mapping_error(tx_q->dev, dma))) { 2409 + idpf_tx_buf_next(tx_buf) = IDPF_TXBUF_NULL; 2410 + return idpf_tx_splitq_pkt_err_unmap(tx_q, params, 2411 + first); 2412 + } 2190 2413 2191 2414 first->nr_frags++; 2192 - idpf_tx_buf_compl_tag(tx_buf) = params->compl_tag; 2193 2415 tx_buf->type = LIBETH_SQE_FRAG; 2194 2416 2195 2417 /* record length, and DMA address */ ··· 2244 2468 max_data); 2245 2469 2246 2470 if (unlikely(++i == tx_q->desc_count)) { 2247 - tx_buf = tx_q->tx_buf; 2248 2471 tx_desc = &tx_q->flex_tx[0]; 2249 2472 i = 0; 2250 - tx_q->compl_tag_cur_gen = 2251 - IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q); 2252 2473 } else { 2253 - tx_buf++; 2254 2474 tx_desc++; 2255 2475 } 2256 - 2257 - /* Since this packet has a buffer that is going to span 2258 - * multiple descriptors, it's going to leave holes in 2259 - * to the TX buffer ring. To ensure these holes do not 2260 - * cause issues in the cleaning routines, we will clear 2261 - * them of any stale data and assign them the same 2262 - * completion tag as the current packet. Then when the 2263 - * packet is being cleaned, the cleaning routines will 2264 - * simply pass over these holes and finish cleaning the 2265 - * rest of the packet. 2266 - */ 2267 - tx_buf->type = LIBETH_SQE_EMPTY; 2268 - idpf_tx_buf_compl_tag(tx_buf) = params->compl_tag; 2269 2476 2270 2477 /* Adjust the DMA offset and the remaining size of the 2271 2478 * fragment. 
On the first iteration of this loop, ··· 2274 2515 idpf_tx_splitq_build_desc(tx_desc, params, td_cmd, size); 2275 2516 2276 2517 if (unlikely(++i == tx_q->desc_count)) { 2277 - tx_buf = tx_q->tx_buf; 2278 2518 tx_desc = &tx_q->flex_tx[0]; 2279 2519 i = 0; 2280 - tx_q->compl_tag_cur_gen = IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q); 2281 2520 } else { 2282 - tx_buf++; 2283 2521 tx_desc++; 2284 2522 } 2523 + 2524 + if (idpf_queue_has(FLOW_SCH_EN, tx_q)) { 2525 + if (unlikely(!idpf_tx_get_free_buf_id(tx_q->refillq, 2526 + &next_buf_id))) { 2527 + idpf_tx_buf_next(tx_buf) = IDPF_TXBUF_NULL; 2528 + return idpf_tx_splitq_pkt_err_unmap(tx_q, params, 2529 + first); 2530 + } 2531 + } else { 2532 + next_buf_id = i; 2533 + } 2534 + idpf_tx_buf_next(tx_buf) = next_buf_id; 2535 + tx_buf = &tx_q->tx_buf[next_buf_id]; 2285 2536 2286 2537 size = skb_frag_size(frag); 2287 2538 data_len -= size; ··· 2307 2538 2308 2539 /* write last descriptor with RS and EOP bits */ 2309 2540 first->rs_idx = i; 2541 + idpf_tx_buf_next(tx_buf) = IDPF_TXBUF_NULL; 2310 2542 td_cmd |= params->eop_cmd; 2311 2543 idpf_tx_splitq_build_desc(tx_desc, params, td_cmd, size); 2312 2544 i = idpf_tx_splitq_bump_ntu(tx_q, i); ··· 2516 2746 union idpf_flex_tx_ctx_desc *desc; 2517 2747 int i = txq->next_to_use; 2518 2748 2519 - txq->tx_buf[i].type = LIBETH_SQE_CTX; 2520 - 2521 2749 /* grab the next descriptor */ 2522 2750 desc = &txq->flex_ctx[i]; 2523 2751 txq->next_to_use = idpf_tx_splitq_bump_ntu(txq, i); ··· 2609 2841 #endif /* CONFIG_PTP_1588_CLOCK */ 2610 2842 2611 2843 /** 2844 + * idpf_tx_splitq_need_re - check whether RE bit needs to be set 2845 + * @tx_q: pointer to Tx queue 2846 + * 2847 + * Return: true if RE bit needs to be set, false otherwise 2848 + */ 2849 + static bool idpf_tx_splitq_need_re(struct idpf_tx_queue *tx_q) 2850 + { 2851 + int gap = tx_q->next_to_use - tx_q->last_re; 2852 + 2853 + gap += (gap < 0) ? 
tx_q->desc_count : 0; 2854 + 2855 + return gap >= IDPF_TX_SPLITQ_RE_MIN_GAP; 2856 + } 2857 + 2858 + /** 2612 2859 * idpf_tx_splitq_frame - Sends buffer on Tx ring using flex descriptors 2613 2860 * @skb: send buffer 2614 2861 * @tx_q: queue to send buffer on ··· 2633 2850 static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb, 2634 2851 struct idpf_tx_queue *tx_q) 2635 2852 { 2636 - struct idpf_tx_splitq_params tx_params = { }; 2853 + struct idpf_tx_splitq_params tx_params = { 2854 + .prev_ntu = tx_q->next_to_use, 2855 + }; 2637 2856 union idpf_flex_tx_ctx_desc *ctx_desc; 2638 2857 struct idpf_tx_buf *first; 2639 - unsigned int count; 2858 + u32 count, buf_count = 1; 2640 2859 int tso, idx; 2860 + u32 buf_id; 2641 2861 2642 - count = idpf_tx_desc_count_required(tx_q, skb); 2862 + count = idpf_tx_res_count_required(tx_q, skb, &buf_count); 2643 2863 if (unlikely(!count)) 2644 2864 return idpf_tx_drop_skb(tx_q, skb); 2645 2865 ··· 2652 2866 2653 2867 /* Check for splitq specific TX resources */ 2654 2868 count += (IDPF_TX_DESCS_PER_CACHE_LINE + tso); 2655 - if (idpf_tx_maybe_stop_splitq(tx_q, count)) { 2869 + if (idpf_tx_maybe_stop_splitq(tx_q, count, buf_count)) { 2656 2870 idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false); 2657 2871 2658 2872 return NETDEV_TX_BUSY; ··· 2684 2898 idpf_tx_set_tstamp_desc(ctx_desc, idx); 2685 2899 } 2686 2900 2687 - /* record the location of the first descriptor for this packet */ 2688 - first = &tx_q->tx_buf[tx_q->next_to_use]; 2901 + if (idpf_queue_has(FLOW_SCH_EN, tx_q)) { 2902 + struct idpf_sw_queue *refillq = tx_q->refillq; 2903 + 2904 + /* Save refillq state in case of a packet rollback. Otherwise, 2905 + * the tags will be leaked since they will be popped from the 2906 + * refillq but never reposted during cleaning. 2907 + */ 2908 + tx_params.prev_refill_gen = 2909 + idpf_queue_has(RFL_GEN_CHK, refillq); 2910 + tx_params.prev_refill_ntc = refillq->next_to_clean; 2911 + 2912 + if (unlikely(!idpf_tx_get_free_buf_id(tx_q->refillq, 2913 + &buf_id))) { 2914 + if (tx_params.prev_refill_gen != 2915 + idpf_queue_has(RFL_GEN_CHK, refillq)) 2916 + idpf_queue_change(RFL_GEN_CHK, refillq); 2917 + refillq->next_to_clean = tx_params.prev_refill_ntc; 2918 + 2919 + tx_q->next_to_use = tx_params.prev_ntu; 2920 + return idpf_tx_drop_skb(tx_q, skb); 2921 + } 2922 + tx_params.compl_tag = buf_id; 2923 + 2924 + tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE; 2925 + tx_params.eop_cmd = IDPF_TXD_FLEX_FLOW_CMD_EOP; 2926 + /* Set the RE bit to periodically "clean" the descriptor ring. 2927 + * MIN_GAP is set to MIN_RING size to ensure it will be set at 2928 + * least once each time around the ring. 
2929 + */ 2930 + if (idpf_tx_splitq_need_re(tx_q)) { 2931 + tx_params.eop_cmd |= IDPF_TXD_FLEX_FLOW_CMD_RE; 2932 + tx_q->txq_grp->num_completions_pending++; 2933 + tx_q->last_re = tx_q->next_to_use; 2934 + } 2935 + 2936 + if (skb->ip_summed == CHECKSUM_PARTIAL) 2937 + tx_params.offload.td_cmd |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN; 2938 + 2939 + } else { 2940 + buf_id = tx_q->next_to_use; 2941 + 2942 + tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2; 2943 + tx_params.eop_cmd = IDPF_TXD_LAST_DESC_CMD; 2944 + 2945 + if (skb->ip_summed == CHECKSUM_PARTIAL) 2946 + tx_params.offload.td_cmd |= IDPF_TX_FLEX_DESC_CMD_CS_EN; 2947 + } 2948 + 2949 + first = &tx_q->tx_buf[buf_id]; 2689 2950 first->skb = skb; 2690 2951 2691 2952 if (tso) { ··· 2742 2909 } else { 2743 2910 first->packets = 1; 2744 2911 first->bytes = max_t(unsigned int, skb->len, ETH_ZLEN); 2745 - } 2746 - 2747 - if (idpf_queue_has(FLOW_SCH_EN, tx_q)) { 2748 - tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE; 2749 - tx_params.eop_cmd = IDPF_TXD_FLEX_FLOW_CMD_EOP; 2750 - /* Set the RE bit to catch any packets that may have not been 2751 - * stashed during RS completion cleaning. MIN_GAP is set to 2752 - * MIN_RING size to ensure it will be set at least once each 2753 - * time around the ring. 2754 - */ 2755 - if (!(tx_q->next_to_use % IDPF_TX_SPLITQ_RE_MIN_GAP)) { 2756 - tx_params.eop_cmd |= IDPF_TXD_FLEX_FLOW_CMD_RE; 2757 - tx_q->txq_grp->num_completions_pending++; 2758 - } 2759 - 2760 - if (skb->ip_summed == CHECKSUM_PARTIAL) 2761 - tx_params.offload.td_cmd |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN; 2762 - 2763 - } else { 2764 - tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2; 2765 - tx_params.eop_cmd = IDPF_TXD_LAST_DESC_CMD; 2766 - 2767 - if (skb->ip_summed == CHECKSUM_PARTIAL) 2768 - tx_params.offload.td_cmd |= IDPF_TX_FLEX_DESC_CMD_CS_EN; 2769 2912 } 2770 2913 2771 2914 idpf_tx_splitq_map(tx_q, &tx_params, first); ··· 3281 3472 skip_data: 3282 3473 rx_buf->netmem = 0; 3283 3474 3284 - idpf_rx_post_buf_refill(refillq, buf_id); 3475 + idpf_post_buf_refill(refillq, buf_id); 3285 3476 IDPF_RX_BUMP_NTC(rxq, ntc); 3286 3477 3287 3478 /* skip if it is non EOP desc */ ··· 3389 3580 bool failure; 3390 3581 3391 3582 if (idpf_queue_has(RFL_GEN_CHK, refillq) != 3392 - !!(refill_desc & IDPF_RX_BI_GEN_M)) 3583 + !!(refill_desc & IDPF_RFL_BI_GEN_M)) 3393 3584 break; 3394 3585 3395 - buf_id = FIELD_GET(IDPF_RX_BI_BUFID_M, refill_desc); 3586 + buf_id = FIELD_GET(IDPF_RFL_BI_BUFID_M, refill_desc); 3396 3587 failure = idpf_rx_update_bufq_desc(bufq, buf_id, buf_desc); 3397 3588 if (failure) 3398 3589 break;
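The idpf hunks above drop the out-of-order completion stash (hash table plus buffer LIFO) and instead link each packet's buffers through a "next buffer id" field terminated by IDPF_TXBUF_NULL, so an RS completion can clean the whole packet by walking that chain. A minimal userspace sketch of the walk, assuming a small fixed ring; demo_tx_buf, DEMO_TXBUF_NULL and demo_clean_pkt are invented names, not driver code:

#include <stdint.h>
#include <stdio.h>

#define DEMO_TXBUF_NULL UINT32_MAX
#define DEMO_RING_SIZE  8

struct demo_tx_buf {
    uint32_t next;    /* buffer id of the next fragment, or DEMO_TXBUF_NULL */
    int      in_use;
};

static void demo_clean_pkt(struct demo_tx_buf *bufs, uint32_t buf_id)
{
    while (buf_id != DEMO_TXBUF_NULL) {
        uint32_t next = bufs[buf_id].next;

        bufs[buf_id].in_use = 0;                 /* "complete" this buffer */
        printf("cleaned buf %u\n", (unsigned)buf_id);
        buf_id = next;
    }
}

int main(void)
{
    struct demo_tx_buf bufs[DEMO_RING_SIZE] = { 0 };

    /* A three-fragment packet whose buffers are 2 -> 5 -> 6. */
    bufs[2] = (struct demo_tx_buf){ .next = 5, .in_use = 1 };
    bufs[5] = (struct demo_tx_buf){ .next = 6, .in_use = 1 };
    bufs[6] = (struct demo_tx_buf){ .next = DEMO_TXBUF_NULL, .in_use = 1 };

    demo_clean_pkt(bufs, 2);
    return 0;
}

The same chain gives the error path a cheap unwind: idpf_tx_splitq_pkt_err_unmap() above walks the links from the first buffer instead of stepping backwards through ring indices.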
+33 -54
drivers/net/ethernet/intel/idpf/idpf_txrx.h
··· 108 108 */ 109 109 #define IDPF_TX_SPLITQ_RE_MIN_GAP 64 110 110 111 - #define IDPF_RX_BI_GEN_M BIT(16) 112 - #define IDPF_RX_BI_BUFID_M GENMASK(15, 0) 111 + #define IDPF_RFL_BI_GEN_M BIT(16) 112 + #define IDPF_RFL_BI_BUFID_M GENMASK(15, 0) 113 113 114 114 #define IDPF_RXD_EOF_SPLITQ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_M 115 115 #define IDPF_RXD_EOF_SINGLEQ VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_M ··· 117 117 #define IDPF_DESC_UNUSED(txq) \ 118 118 ((((txq)->next_to_clean > (txq)->next_to_use) ? 0 : (txq)->desc_count) + \ 119 119 (txq)->next_to_clean - (txq)->next_to_use - 1) 120 - 121 - #define IDPF_TX_BUF_RSV_UNUSED(txq) ((txq)->stash->buf_stack.top) 122 - #define IDPF_TX_BUF_RSV_LOW(txq) (IDPF_TX_BUF_RSV_UNUSED(txq) < \ 123 - (txq)->desc_count >> 2) 124 120 125 121 #define IDPF_TX_COMPLQ_OVERFLOW_THRESH(txcq) ((txcq)->desc_count >> 1) 126 122 /* Determine the absolute number of completions pending, i.e. the number of ··· 127 131 0 : U32_MAX) + \ 128 132 (txq)->num_completions_pending - (txq)->complq->num_completions) 129 133 130 - #define IDPF_TX_SPLITQ_COMPL_TAG_WIDTH 16 131 - /* Adjust the generation for the completion tag and wrap if necessary */ 132 - #define IDPF_TX_ADJ_COMPL_TAG_GEN(txq) \ 133 - ((++(txq)->compl_tag_cur_gen) >= (txq)->compl_tag_gen_max ? \ 134 - 0 : (txq)->compl_tag_cur_gen) 134 + #define IDPF_TXBUF_NULL U32_MAX 135 135 136 136 #define IDPF_TXD_LAST_DESC_CMD (IDPF_TX_DESC_CMD_EOP | IDPF_TX_DESC_CMD_RS) 137 137 ··· 143 151 }; 144 152 145 153 #define idpf_tx_buf libeth_sqe 146 - 147 - /** 148 - * struct idpf_buf_lifo - LIFO for managing OOO completions 149 - * @top: Used to know how many buffers are left 150 - * @size: Total size of LIFO 151 - * @bufs: Backing array 152 - */ 153 - struct idpf_buf_lifo { 154 - u16 top; 155 - u16 size; 156 - struct idpf_tx_stash **bufs; 157 - }; 158 154 159 155 /** 160 156 * struct idpf_tx_offload_params - Offload parameters for a given packet ··· 176 196 * @compl_tag: Associated tag for completion 177 197 * @td_tag: Descriptor tunneling tag 178 198 * @offload: Offload parameters 199 + * @prev_ntu: stored TxQ next_to_use in case of rollback 200 + * @prev_refill_ntc: stored refillq next_to_clean in case of packet rollback 201 + * @prev_refill_gen: stored refillq generation bit in case of packet rollback 179 202 */ 180 203 struct idpf_tx_splitq_params { 181 204 enum idpf_tx_desc_dtype_value dtype; ··· 189 206 }; 190 207 191 208 struct idpf_tx_offload_params offload; 209 + 210 + u16 prev_ntu; 211 + u16 prev_refill_ntc; 212 + bool prev_refill_gen; 192 213 }; 193 214 194 215 enum idpf_tx_ctx_desc_eipt_offload { ··· 455 468 #define IDPF_DIM_DEFAULT_PROFILE_IX 1 456 469 457 470 /** 458 - * struct idpf_txq_stash - Tx buffer stash for Flow-based scheduling mode 459 - * @buf_stack: Stack of empty buffers to store buffer info for out of order 460 - * buffer completions. 
See struct idpf_buf_lifo 461 - * @sched_buf_hash: Hash table to store buffers 462 - */ 463 - struct idpf_txq_stash { 464 - struct idpf_buf_lifo buf_stack; 465 - DECLARE_HASHTABLE(sched_buf_hash, 12); 466 - } ____cacheline_aligned; 467 - 468 - /** 469 471 * struct idpf_rx_queue - software structure representing a receive queue 470 472 * @rx: universal receive descriptor array 471 473 * @single_buf: buffer descriptor array in singleq ··· 586 610 * @netdev: &net_device corresponding to this queue 587 611 * @next_to_use: Next descriptor to use 588 612 * @next_to_clean: Next descriptor to clean 613 + * @last_re: last descriptor index that RE bit was set 614 + * @tx_max_bufs: Max buffers that can be transmitted with scatter-gather 589 615 * @cleaned_bytes: Splitq only, TXQ only: When a TX completion is received on 590 616 * the TX completion queue, it can be for any TXQ associated 591 617 * with that completion queue. This means we can clean up to ··· 598 620 * only once at the end of the cleaning routine. 599 621 * @clean_budget: singleq only, queue cleaning budget 600 622 * @cleaned_pkts: Number of packets cleaned for the above said case 601 - * @tx_max_bufs: Max buffers that can be transmitted with scatter-gather 602 - * @stash: Tx buffer stash for Flow-based scheduling mode 603 - * @compl_tag_bufid_m: Completion tag buffer id mask 604 - * @compl_tag_cur_gen: Used to keep track of current completion tag generation 605 - * @compl_tag_gen_max: To determine when compl_tag_cur_gen should be reset 623 + * @refillq: Pointer to refill queue 606 624 * @cached_tstamp_caps: Tx timestamp capabilities negotiated with the CP 607 625 * @tstamp_task: Work that handles Tx timestamp read 608 626 * @stats_sync: See struct u64_stats_sync ··· 607 633 * @size: Length of descriptor ring in bytes 608 634 * @dma: Physical address of ring 609 635 * @q_vector: Backreference to associated vector 636 + * @buf_pool_size: Total number of idpf_tx_buf 610 637 */ 611 638 struct idpf_tx_queue { 612 639 __cacheline_group_begin_aligned(read_mostly); ··· 629 654 u16 desc_count; 630 655 631 656 u16 tx_min_pkt_len; 632 - u16 compl_tag_gen_s; 633 657 634 658 struct net_device *netdev; 635 659 __cacheline_group_end_aligned(read_mostly); ··· 636 662 __cacheline_group_begin_aligned(read_write); 637 663 u16 next_to_use; 638 664 u16 next_to_clean; 665 + u16 last_re; 666 + u16 tx_max_bufs; 639 667 640 668 union { 641 669 u32 cleaned_bytes; ··· 645 669 }; 646 670 u16 cleaned_pkts; 647 671 648 - u16 tx_max_bufs; 649 - struct idpf_txq_stash *stash; 650 - 651 - u16 compl_tag_bufid_m; 652 - u16 compl_tag_cur_gen; 653 - u16 compl_tag_gen_max; 672 + struct idpf_sw_queue *refillq; 654 673 655 674 struct idpf_ptp_vport_tx_tstamp_caps *cached_tstamp_caps; 656 675 struct work_struct *tstamp_task; ··· 660 689 dma_addr_t dma; 661 690 662 691 struct idpf_q_vector *q_vector; 692 + u32 buf_pool_size; 663 693 __cacheline_group_end_aligned(cold); 664 694 }; 665 695 libeth_cacheline_set_assert(struct idpf_tx_queue, 64, 666 - 112 + sizeof(struct u64_stats_sync), 667 - 24); 696 + 104 + sizeof(struct u64_stats_sync), 697 + 32); 668 698 669 699 /** 670 700 * struct idpf_buf_queue - software structure representing a buffer queue ··· 875 903 * @vport: Vport back pointer 876 904 * @num_txq: Number of TX queues associated 877 905 * @txqs: Array of TX queue pointers 878 - * @stashes: array of OOO stashes for the queues 879 906 * @complq: Associated completion queue pointer, split queue only 880 907 * @num_completions_pending: Total number of completions pending for 
the 881 908 * completion queue, acculumated for all TX queues ··· 889 918 890 919 u16 num_txq; 891 920 struct idpf_tx_queue *txqs[IDPF_LARGE_MAX_Q]; 892 - struct idpf_txq_stash *stashes; 893 921 894 922 struct idpf_compl_queue *complq; 895 923 ··· 981 1011 reg->dyn_ctl); 982 1012 } 983 1013 1014 + /** 1015 + * idpf_tx_splitq_get_free_bufs - get number of free buf_ids in refillq 1016 + * @refillq: pointer to refillq containing buf_ids 1017 + */ 1018 + static inline u32 idpf_tx_splitq_get_free_bufs(struct idpf_sw_queue *refillq) 1019 + { 1020 + return (refillq->next_to_use > refillq->next_to_clean ? 1021 + 0 : refillq->desc_count) + 1022 + refillq->next_to_use - refillq->next_to_clean - 1; 1023 + } 1024 + 984 1025 int idpf_vport_singleq_napi_poll(struct napi_struct *napi, int budget); 985 1026 void idpf_vport_init_num_qs(struct idpf_vport *vport, 986 1027 struct virtchnl2_create_vport *vport_msg); ··· 1019 1038 bool xmit_more); 1020 1039 unsigned int idpf_size_to_txd_count(unsigned int size); 1021 1040 netdev_tx_t idpf_tx_drop_skb(struct idpf_tx_queue *tx_q, struct sk_buff *skb); 1022 - void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb, 1023 - struct idpf_tx_buf *first, u16 ring_idx); 1024 - unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq, 1025 - struct sk_buff *skb); 1041 + unsigned int idpf_tx_res_count_required(struct idpf_tx_queue *txq, 1042 + struct sk_buff *skb, u32 *buf_count); 1026 1043 void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue); 1027 1044 netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb, 1028 1045 struct idpf_tx_queue *tx_q);
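idpf_tx_splitq_get_free_bufs() above is the usual wrap-aware ring arithmetic: count the entries between next_to_use and next_to_clean, keeping one slot in reserve so a full queue and an empty one do not look alike (with ntu == ntc it reports desc_count - 1, which appears to match a refill queue that starts fully populated with buffer IDs). A standalone check of just that expression; the harness is illustrative only:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t ring_unused(uint32_t ntu, uint32_t ntc, uint32_t count)
{
    return (ntu > ntc ? 0 : count) + ntu - ntc - 1;
}

int main(void)
{
    assert(ring_unused(5, 2, 8) == 2);   /* 0 + 5 - 2 - 1 */
    assert(ring_unused(2, 5, 8) == 4);   /* 8 + 2 - 5 - 1 (wrapped) */
    assert(ring_unused(3, 3, 8) == 7);   /* equal indices: count - 1 */
    printf("ring arithmetic ok\n");
    return 0;
}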
+1 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_e610.c
··· 3125 3125 if (err) 3126 3126 return err; 3127 3127 3128 - combo_ver = le32_to_cpu(civd.combo_ver); 3128 + combo_ver = get_unaligned_le32(&civd.combo_ver); 3129 3129 3130 3130 orom->major = (u8)FIELD_GET(IXGBE_OROM_VER_MASK, combo_ver); 3131 3131 orom->patch = (u8)FIELD_GET(IXGBE_OROM_VER_PATCH_MASK, combo_ver);
+1 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_type_e610.h
··· 932 932 __le32 combo_ver; /* Combo Image Version number */ 933 933 u8 combo_name_len; /* Length of the unicode combo image version string, max of 32 */ 934 934 __le16 combo_name[32]; /* Unicode string representing the Combo Image version */ 935 - }; 935 + } __packed; 936 936 937 937 /* Function specific capabilities */ 938 938 struct ixgbe_hw_func_caps {
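The two ixgbe hunks above go together: once ixgbe_orom_civd_info is marked __packed, its __le32 field can end up at an unaligned offset, so the reader switches from le32_to_cpu() on a plain load to get_unaligned_le32(), which does not assume natural alignment. A standalone illustration of that kind of accessor; demo_get_unaligned_le32() mimics the helper, and the buffer layout is made up:

#include <assert.h>
#include <stdint.h>

static uint32_t demo_get_unaligned_le32(const void *p)
{
    const uint8_t *b = p;

    return (uint32_t)b[0] | (uint32_t)b[1] << 8 |
           (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24;
}

int main(void)
{
    /* One leading byte pushes the 32-bit field to an unaligned offset. */
    uint8_t raw[5] = { 0xAA, 0x78, 0x56, 0x34, 0x12 };

    assert(demo_get_unaligned_le32(&raw[1]) == 0x12345678u);
    return 0;
}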
+7
drivers/net/ethernet/marvell/octeontx2/af/cgx.c
··· 1978 1978 goto err_release_regions; 1979 1979 } 1980 1980 1981 + if (!is_cn20k(pdev) && 1982 + !is_cgx_mapped_to_nix(pdev->subsystem_device, cgx->cgx_id)) { 1983 + dev_notice(dev, "CGX %d not mapped to NIX, skipping probe\n", 1984 + cgx->cgx_id); 1985 + goto err_release_regions; 1986 + } 1987 + 1981 1988 cgx->lmac_count = cgx->mac_ops->get_nr_lmacs(cgx); 1982 1989 if (!cgx->lmac_count) { 1983 1990 dev_notice(dev, "CGX %d LMAC count is zero, skipping probe\n", cgx->cgx_id);
+14
drivers/net/ethernet/marvell/octeontx2/af/rvu.h
··· 783 783 return false; 784 784 } 785 785 786 + static inline bool is_cgx_mapped_to_nix(unsigned short id, u8 cgx_id) 787 + { 788 + /* On CNF10KA and CNF10KB silicons only two CGX blocks are connected 789 + * to NIX. 790 + */ 791 + if (id == PCI_SUBSYS_DEVID_CNF10K_A || id == PCI_SUBSYS_DEVID_CNF10K_B) 792 + return cgx_id <= 1; 793 + 794 + return !(cgx_id && !(id == PCI_SUBSYS_DEVID_96XX || 795 + id == PCI_SUBSYS_DEVID_98XX || 796 + id == PCI_SUBSYS_DEVID_CN10K_A || 797 + id == PCI_SUBSYS_DEVID_CN10K_B)); 798 + } 799 + 786 800 static inline bool is_rvu_npc_hash_extract_en(struct rvu *rvu) 787 801 { 788 802 u64 npc_const3;
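is_cgx_mapped_to_nix() above encodes which CGX indices are actually wired to the NIX block on each silicon, so probe can skip unmapped instances. A standalone sketch of the same shape of check; the device IDs and their meanings below are invented for illustration and do not correspond to the real PCI subsystem IDs:

#include <stdbool.h>
#include <stdio.h>

#define DEMO_DEVID_SILICON_X 0x1000   /* every CGX index mapped */
#define DEMO_DEVID_SILICON_Y 0x2000   /* only CGX 0 and 1 mapped */
#define DEMO_DEVID_SILICON_Z 0x3000   /* only CGX 0 mapped */

static bool demo_cgx_mapped_to_nix(unsigned short devid, unsigned char cgx_id)
{
    if (devid == DEMO_DEVID_SILICON_Y)
        return cgx_id <= 1;

    /* Elsewhere a non-zero CGX index is only valid on silicon X. */
    return !(cgx_id && devid != DEMO_DEVID_SILICON_X);
}

int main(void)
{
    printf("%d %d %d\n",
           demo_cgx_mapped_to_nix(DEMO_DEVID_SILICON_X, 2),   /* 1 */
           demo_cgx_mapped_to_nix(DEMO_DEVID_SILICON_Y, 2),   /* 0 */
           demo_cgx_mapped_to_nix(DEMO_DEVID_SILICON_Z, 1));  /* 0 */
    return 0;
}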
+3 -1
drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
··· 124 124 dev_stats->rx_ucast_frames; 125 125 126 126 dev_stats->tx_bytes = OTX2_GET_TX_STATS(TX_OCTS); 127 - dev_stats->tx_drops = OTX2_GET_TX_STATS(TX_DROP); 127 + dev_stats->tx_drops = OTX2_GET_TX_STATS(TX_DROP) + 128 + (unsigned long)atomic_long_read(&dev_stats->tx_discards); 129 + 128 130 dev_stats->tx_bcast_frames = OTX2_GET_TX_STATS(TX_BCAST); 129 131 dev_stats->tx_mcast_frames = OTX2_GET_TX_STATS(TX_MCAST); 130 132 dev_stats->tx_ucast_frames = OTX2_GET_TX_STATS(TX_UCAST);
+1
drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
··· 153 153 u64 tx_bcast_frames; 154 154 u64 tx_mcast_frames; 155 155 u64 tx_drops; 156 + atomic_long_t tx_discards; 156 157 }; 157 158 158 159 /* Driver counted stats */
+3
drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
··· 2220 2220 { 2221 2221 struct otx2_nic *pf = netdev_priv(netdev); 2222 2222 int qidx = skb_get_queue_mapping(skb); 2223 + struct otx2_dev_stats *dev_stats; 2223 2224 struct otx2_snd_queue *sq; 2224 2225 struct netdev_queue *txq; 2225 2226 int sq_idx; ··· 2233 2232 /* Check for minimum and maximum packet length */ 2234 2233 if (skb->len <= ETH_HLEN || 2235 2234 (!skb_shinfo(skb)->gso_size && skb->len > pf->tx_max_pktlen)) { 2235 + dev_stats = &pf->hw.dev_stats; 2236 + atomic_long_inc(&dev_stats->tx_discards); 2236 2237 dev_kfree_skb(skb); 2237 2238 return NETDEV_TX_OK; 2238 2239 }
+10
drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
··· 417 417 { 418 418 struct otx2_nic *vf = netdev_priv(netdev); 419 419 int qidx = skb_get_queue_mapping(skb); 420 + struct otx2_dev_stats *dev_stats; 420 421 struct otx2_snd_queue *sq; 421 422 struct netdev_queue *txq; 423 + 424 + /* Check for minimum and maximum packet length */ 425 + if (skb->len <= ETH_HLEN || 426 + (!skb_shinfo(skb)->gso_size && skb->len > vf->tx_max_pktlen)) { 427 + dev_stats = &vf->hw.dev_stats; 428 + atomic_long_inc(&dev_stats->tx_discards); 429 + dev_kfree_skb(skb); 430 + return NETDEV_TX_OK; 431 + } 422 432 423 433 sq = &vf->qset.sq[qidx]; 424 434 txq = netdev_get_tx_queue(netdev, qidx);
+12 -1
drivers/net/ethernet/marvell/octeontx2/nic/rep.c
··· 371 371 stats->rx_mcast_frames = rsp->rx.mcast; 372 372 stats->tx_bytes = rsp->tx.octs; 373 373 stats->tx_frames = rsp->tx.ucast + rsp->tx.bcast + rsp->tx.mcast; 374 - stats->tx_drops = rsp->tx.drop; 374 + stats->tx_drops = rsp->tx.drop + 375 + (unsigned long)atomic_long_read(&stats->tx_discards); 375 376 exit: 376 377 mutex_unlock(&priv->mbox.lock); 377 378 } ··· 419 418 struct otx2_nic *pf = rep->mdev; 420 419 struct otx2_snd_queue *sq; 421 420 struct netdev_queue *txq; 421 + struct rep_stats *stats; 422 + 423 + /* Check for minimum and maximum packet length */ 424 + if (skb->len <= ETH_HLEN || 425 + (!skb_shinfo(skb)->gso_size && skb->len > pf->tx_max_pktlen)) { 426 + stats = &rep->stats; 427 + atomic_long_inc(&stats->tx_discards); 428 + dev_kfree_skb(skb); 429 + return NETDEV_TX_OK; 430 + } 422 431 423 432 sq = &pf->qset.sq[rep->rep_id]; 424 433 txq = netdev_get_tx_queue(dev, 0);
+1
drivers/net/ethernet/marvell/octeontx2/nic/rep.h
··· 27 27 u64 tx_bytes; 28 28 u64 tx_frames; 29 29 u64 tx_drops; 30 + atomic_long_t tx_discards; 30 31 }; 31 32 32 33 struct rep_dev {
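The octeontx2 hunks above (otx2_common.c/h, otx2_pf.c, otx2_vf.c, rep.c, rep.h) all add the same accounting: frames rejected by the xmit handler before they reach the hardware bump an atomic tx_discards counter, and readers fold it into tx_drops alongside the hardware count. A userspace analogue of that pattern using C11 atomics in place of the kernel's atomic_long_t; the struct and helpers are illustrative:

#include <stdatomic.h>
#include <stdio.h>

struct demo_dev_stats {
    unsigned long tx_drops;        /* value reported to the user */
    atomic_ulong  tx_discards;     /* early software drops */
};

static int demo_xmit(struct demo_dev_stats *st, unsigned int len,
                     unsigned int max_len)
{
    if (len <= 14 || len > max_len) {          /* malformed frame */
        atomic_fetch_add(&st->tx_discards, 1);
        return 0;                              /* consumed, never sent */
    }
    return 1;                                  /* would be queued to HW */
}

static unsigned long demo_read_tx_drops(struct demo_dev_stats *st,
                                        unsigned long hw_drops)
{
    return hw_drops + atomic_load(&st->tx_discards);
}

int main(void)
{
    struct demo_dev_stats st = { .tx_discards = 0 };

    demo_xmit(&st, 4, 1500);      /* too short: discarded in software */
    demo_xmit(&st, 9000, 1500);   /* too long: discarded in software */
    printf("tx_drops = %lu\n", demo_read_tx_drops(&st, 0));
    return 0;
}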
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/devlink.c
··· 160 160 if (err) 161 161 return err; 162 162 163 - mlx5_unload_one_devl_locked(dev, true); 163 + mlx5_sync_reset_unload_flow(dev, true); 164 164 err = mlx5_health_wait_pci_up(dev); 165 165 if (err) 166 166 NL_SET_ERR_MSG_MOD(extack, "FW activate aborted, PCI reads fail after reset");
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
··· 575 575 if (err) 576 576 return err; 577 577 } 578 - priv->dcbx.xoff = xoff; 579 578 580 579 /* Apply the settings */ 581 580 if (update_buffer) { ··· 582 583 if (err) 583 584 return err; 584 585 } 586 + 587 + priv->dcbx.xoff = xoff; 585 588 586 589 if (update_prio2buffer) 587 590 err = mlx5e_port_set_priority2buffer(priv->mdev, prio2buffer);
+12
drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h
··· 66 66 struct mlx5e_bufferx_reg buffer[MLX5E_MAX_NETWORK_BUFFER]; 67 67 }; 68 68 69 + #ifdef CONFIG_MLX5_CORE_EN_DCB 69 70 int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv, 70 71 u32 change, unsigned int mtu, 71 72 struct ieee_pfc *pfc, 72 73 u32 *buffer_size, 73 74 u8 *prio2buffer); 75 + #else 76 + static inline int 77 + mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv, 78 + u32 change, unsigned int mtu, 79 + void *pfc, 80 + u32 *buffer_size, 81 + u8 *prio2buffer) 82 + { 83 + return 0; 84 + } 85 + #endif 74 86 75 87 int mlx5e_port_query_buffer(struct mlx5e_priv *priv, 76 88 struct mlx5e_port_buffer *port_buffer);
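The port_buffer.h hunk above follows the usual compiled-out-feature idiom: when CONFIG_MLX5_CORE_EN_DCB is off, the header supplies a static inline stub with the same calling convention (taking void * where the DCB type would be), so callers such as en_main.c need no #ifdefs. A generic sketch of the idiom, with FEATURE_FOO and foo_configure() as invented names:

#include <stdio.h>

#ifdef FEATURE_FOO
int foo_configure(int value)
{
    printf("configuring foo with %d\n", value);
    return 0;
}
#else
static inline int foo_configure(int value)
{
    (void)value;
    return 0;                      /* feature absent: nothing to do */
}
#endif

int main(void)
{
    /* Builds and runs the same way with or without -DFEATURE_FOO. */
    printf("foo_configure -> %d\n", foo_configure(42));
    return 0;
}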
+18 -1
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 49 49 #include "en.h" 50 50 #include "en/dim.h" 51 51 #include "en/txrx.h" 52 + #include "en/port_buffer.h" 52 53 #include "en_tc.h" 53 54 #include "en_rep.h" 54 55 #include "en_accel/ipsec.h" ··· 139 138 if (up) { 140 139 netdev_info(priv->netdev, "Link up\n"); 141 140 netif_carrier_on(priv->netdev); 141 + mlx5e_port_manual_buffer_config(priv, 0, priv->netdev->mtu, 142 + NULL, NULL, NULL); 142 143 } else { 143 144 netdev_info(priv->netdev, "Link down\n"); 144 145 netif_carrier_off(priv->netdev); ··· 3043 3040 struct mlx5e_params *params = &priv->channels.params; 3044 3041 struct net_device *netdev = priv->netdev; 3045 3042 struct mlx5_core_dev *mdev = priv->mdev; 3046 - u16 mtu; 3043 + u16 mtu, prev_mtu; 3047 3044 int err; 3045 + 3046 + mlx5e_query_mtu(mdev, params, &prev_mtu); 3048 3047 3049 3048 err = mlx5e_set_mtu(mdev, params, params->sw_mtu); 3050 3049 if (err) ··· 3056 3051 if (mtu != params->sw_mtu) 3057 3052 netdev_warn(netdev, "%s: VPort MTU %d is different than netdev mtu %d\n", 3058 3053 __func__, mtu, params->sw_mtu); 3054 + 3055 + if (mtu != prev_mtu && MLX5_BUFFER_SUPPORTED(mdev)) { 3056 + err = mlx5e_port_manual_buffer_config(priv, 0, mtu, 3057 + NULL, NULL, NULL); 3058 + if (err) { 3059 + netdev_warn(netdev, "%s: Failed to set Xon/Xoff values with MTU %d (err %d), setting back to previous MTU %d\n", 3060 + __func__, mtu, err, prev_mtu); 3061 + 3062 + mlx5e_set_mtu(mdev, params, prev_mtu); 3063 + return err; 3064 + } 3065 + } 3059 3066 3060 3067 params->sw_mtu = mtu; 3061 3068 return 0;
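The en_main.c MTU hunk above treats the MTU and the port buffer configuration as one transaction: it records the previous MTU, applies the new one, and if reprogramming the buffers fails it restores the old MTU and returns the error. A small standalone sketch of that rollback shape; the device struct, the buffer-cost model and all names are invented:

#include <stdio.h>

struct demo_dev {
    unsigned int mtu;
    unsigned int buffer_budget;
};

static int demo_config_buffers(struct demo_dev *dev, unsigned int mtu)
{
    unsigned int need = mtu * 4;          /* pretend per-MTU buffer cost */

    return need > dev->buffer_budget ? -1 : 0;
}

static int demo_set_mtu(struct demo_dev *dev, unsigned int new_mtu)
{
    unsigned int prev_mtu = dev->mtu;
    int err;

    dev->mtu = new_mtu;
    err = demo_config_buffers(dev, new_mtu);
    if (err) {
        fprintf(stderr, "buffer config failed, reverting to MTU %u\n",
                prev_mtu);
        dev->mtu = prev_mtu;
        return err;
    }
    return 0;
}

int main(void)
{
    struct demo_dev dev = { .mtu = 1500, .buffer_budget = 16000 };

    demo_set_mtu(&dev, 3000);    /* fits the budget, sticks */
    demo_set_mtu(&dev, 9000);    /* does not fit, rolled back to 3000 */
    printf("final MTU: %u\n", dev.mtu);
    return 0;
}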
+7 -8
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
··· 3734 3734 char *value = val.vstr; 3735 3735 u8 eswitch_mode; 3736 3736 3737 + eswitch_mode = mlx5_eswitch_mode(dev); 3738 + if (eswitch_mode == MLX5_ESWITCH_OFFLOADS) { 3739 + NL_SET_ERR_MSG_FMT_MOD(extack, 3740 + "Changing fs mode is not supported when eswitch offloads enabled."); 3741 + return -EOPNOTSUPP; 3742 + } 3743 + 3737 3744 if (!strcmp(value, "dmfs")) 3738 3745 return 0; 3739 3746 ··· 3764 3757 NL_SET_ERR_MSG_MOD(extack, 3765 3758 "Bad parameter: supported values are [\"dmfs\", \"smfs\", \"hmfs\"]"); 3766 3759 return -EINVAL; 3767 - } 3768 - 3769 - eswitch_mode = mlx5_eswitch_mode(dev); 3770 - if (eswitch_mode == MLX5_ESWITCH_OFFLOADS) { 3771 - NL_SET_ERR_MSG_FMT_MOD(extack, 3772 - "Moving to %s is not supported when eswitch offloads enabled.", 3773 - value); 3774 - return -EOPNOTSUPP; 3775 3760 } 3776 3761 3777 3762 return 0;
+76 -56
drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
··· 6 6 #include "fw_reset.h" 7 7 #include "diag/fw_tracer.h" 8 8 #include "lib/tout.h" 9 + #include "sf/sf.h" 9 10 10 11 enum { 11 12 MLX5_FW_RESET_FLAGS_RESET_REQUESTED, 12 13 MLX5_FW_RESET_FLAGS_NACK_RESET_REQUEST, 13 14 MLX5_FW_RESET_FLAGS_PENDING_COMP, 14 15 MLX5_FW_RESET_FLAGS_DROP_NEW_REQUESTS, 15 - MLX5_FW_RESET_FLAGS_RELOAD_REQUIRED 16 + MLX5_FW_RESET_FLAGS_RELOAD_REQUIRED, 17 + MLX5_FW_RESET_FLAGS_UNLOAD_EVENT, 16 18 }; 17 19 18 20 struct mlx5_fw_reset { ··· 221 219 return mlx5_reg_mfrl_set(dev, MLX5_MFRL_REG_RESET_LEVEL0, 0, 0, false); 222 220 } 223 221 224 - static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev, bool unloaded) 222 + static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev) 225 223 { 226 224 struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset; 227 225 struct devlink *devlink = priv_to_devlink(dev); ··· 230 228 if (test_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags)) { 231 229 complete(&fw_reset->done); 232 230 } else { 233 - if (!unloaded) 234 - mlx5_unload_one(dev, false); 231 + mlx5_sync_reset_unload_flow(dev, false); 235 232 if (mlx5_health_wait_pci_up(dev)) 236 233 mlx5_core_err(dev, "reset reload flow aborted, PCI reads still not working\n"); 237 234 else ··· 273 272 274 273 mlx5_sync_reset_clear_reset_requested(dev, false); 275 274 mlx5_enter_error_state(dev, true); 276 - mlx5_fw_reset_complete_reload(dev, false); 275 + mlx5_fw_reset_complete_reload(dev); 277 276 } 278 277 279 278 #define MLX5_RESET_POLL_INTERVAL (HZ / 10) ··· 426 425 427 426 if (!MLX5_CAP_GEN(dev, fast_teardown)) { 428 427 mlx5_core_warn(dev, "fast teardown is not supported by firmware\n"); 428 + return false; 429 + } 430 + 431 + if (!mlx5_core_is_ecpf(dev) && !mlx5_sf_table_empty(dev)) { 432 + mlx5_core_warn(dev, "SFs should be removed before reset\n"); 429 433 return false; 430 434 } 431 435 ··· 592 586 return err; 593 587 } 594 588 595 - static void mlx5_sync_reset_now_event(struct work_struct *work) 589 + void mlx5_sync_reset_unload_flow(struct mlx5_core_dev *dev, bool locked) 596 590 { 597 - struct mlx5_fw_reset *fw_reset = container_of(work, struct mlx5_fw_reset, 598 - reset_now_work); 599 - struct mlx5_core_dev *dev = fw_reset->dev; 600 - int err; 601 - 602 - if (mlx5_sync_reset_clear_reset_requested(dev, false)) 603 - return; 604 - 605 - mlx5_core_warn(dev, "Sync Reset now. 
Device is going to reset.\n"); 606 - 607 - err = mlx5_cmd_fast_teardown_hca(dev); 608 - if (err) { 609 - mlx5_core_warn(dev, "Fast teardown failed, no reset done, err %d\n", err); 610 - goto done; 611 - } 612 - 613 - err = mlx5_sync_pci_reset(dev, fw_reset->reset_method); 614 - if (err) { 615 - mlx5_core_warn(dev, "mlx5_sync_pci_reset failed, no reset done, err %d\n", err); 616 - set_bit(MLX5_FW_RESET_FLAGS_RELOAD_REQUIRED, &fw_reset->reset_flags); 617 - } 618 - 619 - mlx5_enter_error_state(dev, true); 620 - done: 621 - fw_reset->ret = err; 622 - mlx5_fw_reset_complete_reload(dev, false); 623 - } 624 - 625 - static void mlx5_sync_reset_unload_event(struct work_struct *work) 626 - { 627 - struct mlx5_fw_reset *fw_reset; 628 - struct mlx5_core_dev *dev; 591 + struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset; 629 592 unsigned long timeout; 630 593 int poll_freq = 20; 631 594 bool reset_action; 632 595 u8 rst_state; 633 596 int err; 634 597 635 - fw_reset = container_of(work, struct mlx5_fw_reset, reset_unload_work); 636 - dev = fw_reset->dev; 637 - 638 - if (mlx5_sync_reset_clear_reset_requested(dev, false)) 639 - return; 640 - 641 - mlx5_core_warn(dev, "Sync Reset Unload. Function is forced down.\n"); 642 - 643 - err = mlx5_cmd_fast_teardown_hca(dev); 644 - if (err) 645 - mlx5_core_warn(dev, "Fast teardown failed, unloading, err %d\n", err); 646 - else 647 - mlx5_enter_error_state(dev, true); 648 - 649 - if (test_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags)) 598 + if (locked) 650 599 mlx5_unload_one_devl_locked(dev, false); 651 600 else 652 601 mlx5_unload_one(dev, false); 602 + 603 + if (!test_bit(MLX5_FW_RESET_FLAGS_UNLOAD_EVENT, &fw_reset->reset_flags)) 604 + return; 653 605 654 606 mlx5_set_fw_rst_ack(dev); 655 607 mlx5_core_warn(dev, "Sync Reset Unload done, device reset expected\n"); ··· 636 672 goto done; 637 673 } 638 674 639 - mlx5_core_warn(dev, "Sync Reset, got reset action. rst_state = %u\n", rst_state); 675 + mlx5_core_warn(dev, "Sync Reset, got reset action. rst_state = %u\n", 676 + rst_state); 640 677 if (rst_state == MLX5_FW_RST_STATE_TOGGLE_REQ) { 641 678 err = mlx5_sync_pci_reset(dev, fw_reset->reset_method); 642 679 if (err) { 643 - mlx5_core_warn(dev, "mlx5_sync_pci_reset failed, err %d\n", err); 680 + mlx5_core_warn(dev, "mlx5_sync_pci_reset failed, err %d\n", 681 + err); 644 682 fw_reset->ret = err; 645 683 } 646 684 } 647 685 648 686 done: 649 - mlx5_fw_reset_complete_reload(dev, true); 687 + clear_bit(MLX5_FW_RESET_FLAGS_UNLOAD_EVENT, &fw_reset->reset_flags); 688 + } 689 + 690 + static void mlx5_sync_reset_now_event(struct work_struct *work) 691 + { 692 + struct mlx5_fw_reset *fw_reset = container_of(work, struct mlx5_fw_reset, 693 + reset_now_work); 694 + struct mlx5_core_dev *dev = fw_reset->dev; 695 + int err; 696 + 697 + if (mlx5_sync_reset_clear_reset_requested(dev, false)) 698 + return; 699 + 700 + mlx5_core_warn(dev, "Sync Reset now. 
Device is going to reset.\n"); 701 + 702 + err = mlx5_cmd_fast_teardown_hca(dev); 703 + if (err) { 704 + mlx5_core_warn(dev, "Fast teardown failed, no reset done, err %d\n", err); 705 + goto done; 706 + } 707 + 708 + err = mlx5_sync_pci_reset(dev, fw_reset->reset_method); 709 + if (err) { 710 + mlx5_core_warn(dev, "mlx5_sync_pci_reset failed, no reset done, err %d\n", err); 711 + set_bit(MLX5_FW_RESET_FLAGS_RELOAD_REQUIRED, &fw_reset->reset_flags); 712 + } 713 + 714 + mlx5_enter_error_state(dev, true); 715 + done: 716 + fw_reset->ret = err; 717 + mlx5_fw_reset_complete_reload(dev); 718 + } 719 + 720 + static void mlx5_sync_reset_unload_event(struct work_struct *work) 721 + { 722 + struct mlx5_fw_reset *fw_reset; 723 + struct mlx5_core_dev *dev; 724 + int err; 725 + 726 + fw_reset = container_of(work, struct mlx5_fw_reset, reset_unload_work); 727 + dev = fw_reset->dev; 728 + 729 + if (mlx5_sync_reset_clear_reset_requested(dev, false)) 730 + return; 731 + 732 + set_bit(MLX5_FW_RESET_FLAGS_UNLOAD_EVENT, &fw_reset->reset_flags); 733 + mlx5_core_warn(dev, "Sync Reset Unload. Function is forced down.\n"); 734 + 735 + err = mlx5_cmd_fast_teardown_hca(dev); 736 + if (err) 737 + mlx5_core_warn(dev, "Fast teardown failed, unloading, err %d\n", err); 738 + else 739 + mlx5_enter_error_state(dev, true); 740 + 741 + mlx5_fw_reset_complete_reload(dev); 650 742 } 651 743 652 744 static void mlx5_sync_reset_abort_event(struct work_struct *work)
+1
drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h
··· 12 12 int mlx5_fw_reset_set_live_patch(struct mlx5_core_dev *dev); 13 13 14 14 int mlx5_fw_reset_wait_reset_done(struct mlx5_core_dev *dev); 15 + void mlx5_sync_reset_unload_flow(struct mlx5_core_dev *dev, bool locked); 15 16 int mlx5_fw_reset_verify_fw_complete(struct mlx5_core_dev *dev, 16 17 struct netlink_ext_ack *extack); 17 18 void mlx5_fw_reset_events_start(struct mlx5_core_dev *dev);
+10
drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c
··· 518 518 WARN_ON(!xa_empty(&table->function_ids)); 519 519 kfree(table); 520 520 } 521 + 522 + bool mlx5_sf_table_empty(const struct mlx5_core_dev *dev) 523 + { 524 + struct mlx5_sf_table *table = dev->priv.sf_table; 525 + 526 + if (!table) 527 + return true; 528 + 529 + return xa_empty(&table->function_ids); 530 + }
+6
drivers/net/ethernet/mellanox/mlx5/core/sf/sf.h
··· 17 17 18 18 int mlx5_sf_table_init(struct mlx5_core_dev *dev); 19 19 void mlx5_sf_table_cleanup(struct mlx5_core_dev *dev); 20 + bool mlx5_sf_table_empty(const struct mlx5_core_dev *dev); 20 21 21 22 int mlx5_devlink_sf_port_new(struct devlink *devlink, 22 23 const struct devlink_port_new_attrs *add_attr, ··· 60 59 61 60 static inline void mlx5_sf_table_cleanup(struct mlx5_core_dev *dev) 62 61 { 62 + } 63 + 64 + static inline bool mlx5_sf_table_empty(const struct mlx5_core_dev *dev) 65 + { 66 + return true; 63 67 } 64 68 65 69 #endif
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
··· 117 117 mlx5hws_err(ctx, "No such stc_type: %d\n", stc_type); 118 118 pr_warn("HWS: Invalid stc_type: %d\n", stc_type); 119 119 ret = -EINVAL; 120 - goto unlock_and_out; 120 + goto free_shared_stc; 121 121 } 122 122 123 123 ret = mlx5hws_action_alloc_single_stc(ctx, &stc_attr, tbl_type,
+4 -2
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pat_arg.c
··· 279 279 return ret; 280 280 281 281 clean_pattern: 282 - mlx5hws_cmd_header_modify_pattern_destroy(ctx->mdev, *pattern_id); 282 + mlx5hws_cmd_header_modify_pattern_destroy(ctx->mdev, ptrn_id); 283 283 out_unlock: 284 284 mutex_unlock(&ctx->pattern_cache->lock); 285 285 return ret; ··· 527 527 u32 *nop_locations, __be64 *new_pat) 528 528 { 529 529 u16 prev_src_field = INVALID_FIELD, prev_dst_field = INVALID_FIELD; 530 - u16 src_field, dst_field; 531 530 u8 action_type; 532 531 bool dependent; 533 532 size_t i, j; ··· 538 539 return 0; 539 540 540 541 for (i = 0, j = 0; i < num_actions; i++, j++) { 542 + u16 src_field = INVALID_FIELD; 543 + u16 dst_field = INVALID_FIELD; 544 + 541 545 if (j >= max_actions) 542 546 return -EINVAL; 543 547
+1
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c
··· 124 124 mlx5hws_err(pool->ctx, "Failed to create resource type: %d size %zu\n", 125 125 pool->type, pool->alloc_log_sz); 126 126 mlx5hws_buddy_cleanup(buddy); 127 + kfree(buddy); 127 128 return -ENOMEM; 128 129 } 129 130
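The pool.c hunk above closes a leak on the failure path: mlx5hws_buddy_cleanup() tears down what the buddy allocator set up, but the struct itself was allocated separately and also has to be freed before returning the error. A generic sketch of that create/unwind shape; the allocator and names are illustrative:

#include <stdio.h>
#include <stdlib.h>

struct demo_buddy {
    unsigned long *bitmap;
};

static int demo_buddy_init(struct demo_buddy *b, unsigned int order)
{
    b->bitmap = calloc(1UL << order, sizeof(*b->bitmap));
    return b->bitmap ? 0 : -1;
}

static void demo_buddy_cleanup(struct demo_buddy *b)
{
    free(b->bitmap);
}

static struct demo_buddy *demo_resource_create(unsigned int order, int fw_fails)
{
    struct demo_buddy *buddy = malloc(sizeof(*buddy));

    if (!buddy)
        return NULL;
    if (demo_buddy_init(buddy, order)) {
        free(buddy);
        return NULL;
    }
    if (fw_fails) {                  /* e.g. device object creation failed */
        demo_buddy_cleanup(buddy);
        free(buddy);                 /* without this line, the struct leaks */
        return NULL;
    }
    return buddy;
}

int main(void)
{
    struct demo_buddy *ok = demo_resource_create(4, 0);

    printf("failed create -> %p\n", (void *)demo_resource_create(4, 1));
    if (ok) {
        demo_buddy_cleanup(ok);
        free(ok);
    }
    return 0;
}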
+4
drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
··· 52 52 fbnic_bmc_rpc_init(fbd); 53 53 fbnic_rss_reinit(fbd, fbn); 54 54 55 + phylink_resume(fbn->phylink); 56 + 55 57 return 0; 56 58 time_stop: 57 59 fbnic_time_stop(fbn); ··· 85 83 static int fbnic_stop(struct net_device *netdev) 86 84 { 87 85 struct fbnic_net *fbn = netdev_priv(netdev); 86 + 87 + phylink_suspend(fbn->phylink, fbnic_bmc_present(fbn->fbd)); 88 88 89 89 fbnic_down(fbn); 90 90 fbnic_pcs_free_irq(fbn->fbd);
+6 -9
drivers/net/ethernet/meta/fbnic/fbnic_pci.c
··· 118 118 struct fbnic_dev *fbd = fbn->fbd; 119 119 120 120 schedule_delayed_work(&fbd->service_task, HZ); 121 - phylink_resume(fbn->phylink); 122 121 } 123 122 124 123 static void fbnic_service_task_stop(struct fbnic_net *fbn) 125 124 { 126 125 struct fbnic_dev *fbd = fbn->fbd; 127 126 128 - phylink_suspend(fbn->phylink, fbnic_bmc_present(fbd)); 129 127 cancel_delayed_work(&fbd->service_task); 130 128 } 131 129 ··· 441 443 442 444 /* Re-enable mailbox */ 443 445 err = fbnic_fw_request_mbx(fbd); 446 + devl_unlock(priv_to_devlink(fbd)); 444 447 if (err) 445 448 goto err_free_irqs; 446 - 447 - devl_unlock(priv_to_devlink(fbd)); 448 449 449 450 /* Only send log history if log buffer is empty to prevent duplicate 450 451 * log entries. ··· 461 464 462 465 rtnl_lock(); 463 466 464 - if (netif_running(netdev)) { 467 + if (netif_running(netdev)) 465 468 err = __fbnic_open(fbn); 466 - if (err) 467 - goto err_free_mbx; 468 - } 469 469 470 470 rtnl_unlock(); 471 + if (err) 472 + goto err_free_mbx; 471 473 472 474 return 0; 473 475 err_free_mbx: 474 476 fbnic_fw_log_disable(fbd); 475 477 476 - rtnl_unlock(); 478 + devl_lock(priv_to_devlink(fbd)); 477 479 fbnic_fw_free_mbx(fbd); 480 + devl_unlock(priv_to_devlink(fbd)); 478 481 err_free_irqs: 479 482 fbnic_free_irqs(fbd); 480 483 err_invalidate_uc_addr:
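The fbnic_pci.c hunk above rebalances the locks on the resume error paths: the devlink lock is dropped right after re-enabling the mailbox so the error branch is never taken with it held, rtnl is released before the error check, and the cleanup label re-takes the devlink lock only around the mailbox free. A pthread-based sketch of the same discipline, with every path leaving the lock balanced; the names and failure knobs are invented:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;

static int demo_request_mbx(int fail) { return fail ? -1 : 0; }
static void demo_free_mbx(void)       { /* assumed to need demo_lock */ }

static int demo_resume(int fail_mbx, int fail_open)
{
    int err;

    pthread_mutex_lock(&demo_lock);
    err = demo_request_mbx(fail_mbx);
    pthread_mutex_unlock(&demo_lock);    /* drop the lock before branching */
    if (err)
        goto err_out;

    if (fail_open) {
        err = -1;
        goto err_free_mbx;
    }
    return 0;

err_free_mbx:
    pthread_mutex_lock(&demo_lock);      /* re-take only around the cleanup */
    demo_free_mbx();
    pthread_mutex_unlock(&demo_lock);
err_out:
    return err;
}

int main(void)
{
    printf("%d %d %d\n", demo_resume(0, 0), demo_resume(1, 0),
           demo_resume(0, 1));
    return 0;
}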
+11 -2
drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
··· 49 49 writel(XGMAC_INT_DEFAULT_EN, ioaddr + XGMAC_INT_EN); 50 50 } 51 51 52 + static void dwxgmac2_update_caps(struct stmmac_priv *priv) 53 + { 54 + if (!priv->dma_cap.mbps_10_100) 55 + priv->hw->link.caps &= ~(MAC_10 | MAC_100); 56 + else if (!priv->dma_cap.half_duplex) 57 + priv->hw->link.caps &= ~(MAC_10HD | MAC_100HD); 58 + } 59 + 52 60 static void dwxgmac2_set_mac(void __iomem *ioaddr, bool enable) 53 61 { 54 62 u32 tx = readl(ioaddr + XGMAC_TX_CONFIG); ··· 1432 1424 1433 1425 const struct stmmac_ops dwxgmac210_ops = { 1434 1426 .core_init = dwxgmac2_core_init, 1427 + .update_caps = dwxgmac2_update_caps, 1435 1428 .set_mac = dwxgmac2_set_mac, 1436 1429 .rx_ipc = dwxgmac2_rx_ipc, 1437 1430 .rx_queue_enable = dwxgmac2_rx_queue_enable, ··· 1541 1532 mac->mcast_bits_log2 = ilog2(mac->multicast_filter_bins); 1542 1533 1543 1534 mac->link.caps = MAC_ASYM_PAUSE | MAC_SYM_PAUSE | 1544 - MAC_1000FD | MAC_2500FD | MAC_5000FD | 1545 - MAC_10000FD; 1535 + MAC_10 | MAC_100 | MAC_1000FD | 1536 + MAC_2500FD | MAC_5000FD | MAC_10000FD; 1546 1537 mac->link.duplex = 0; 1547 1538 mac->link.speed10 = XGMAC_CONFIG_SS_10_MII; 1548 1539 mac->link.speed100 = XGMAC_CONFIG_SS_100_MII;
+5 -4
drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
··· 203 203 } 204 204 205 205 writel(value, ioaddr + XGMAC_MTL_RXQ_OPMODE(channel)); 206 - 207 - /* Enable MTL RX overflow */ 208 - value = readl(ioaddr + XGMAC_MTL_QINTEN(channel)); 209 - writel(value | XGMAC_RXOIE, ioaddr + XGMAC_MTL_QINTEN(channel)); 210 206 } 211 207 212 208 static void dwxgmac2_dma_tx_mode(struct stmmac_priv *priv, void __iomem *ioaddr, ··· 382 386 static int dwxgmac2_get_hw_feature(void __iomem *ioaddr, 383 387 struct dma_features *dma_cap) 384 388 { 389 + struct stmmac_priv *priv; 385 390 u32 hw_cap; 391 + 392 + priv = container_of(dma_cap, struct stmmac_priv, dma_cap); 386 393 387 394 /* MAC HW feature 0 */ 388 395 hw_cap = readl(ioaddr + XGMAC_HW_FEATURE0); ··· 409 410 dma_cap->vlhash = (hw_cap & XGMAC_HWFEAT_VLHASH) >> 4; 410 411 dma_cap->half_duplex = (hw_cap & XGMAC_HWFEAT_HDSEL) >> 3; 411 412 dma_cap->mbps_1000 = (hw_cap & XGMAC_HWFEAT_GMIISEL) >> 1; 413 + if (dma_cap->mbps_1000 && priv->synopsys_id >= DWXGMAC_CORE_2_20) 414 + dma_cap->mbps_10_100 = 1; 412 415 413 416 /* MAC HW feature 1 */ 414 417 hw_cap = readl(ioaddr + XGMAC_HW_FEATURE1);
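dwxgmac2_get_hw_feature() above recovers the stmmac private struct from the dma_cap pointer it is handed, via container_of(), so it can consult the core version without changing the function signature. A standalone illustration of that pointer arithmetic; the structs and the version threshold are invented:

#include <assert.h>
#include <stddef.h>

#define demo_container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct demo_caps {
    int mbps_1000;
    int mbps_10_100;
};

struct demo_priv {
    int core_version;
    struct demo_caps dma_cap;
};

static void demo_fill_caps(struct demo_caps *cap)
{
    struct demo_priv *priv =
        demo_container_of(cap, struct demo_priv, dma_cap);

    cap->mbps_1000 = 1;
    /* Only newer cores also expose 10/100 support on this path. */
    cap->mbps_10_100 = priv->core_version >= 0x21;
}

int main(void)
{
    struct demo_priv priv = { .core_version = 0x22 };

    demo_fill_caps(&priv.dma_cap);
    assert(priv.dma_cap.mbps_10_100 == 1);
    return 0;
}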
+4 -2
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 2584 2584 struct netdev_queue *nq = netdev_get_tx_queue(priv->dev, queue); 2585 2585 struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; 2586 2586 struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[queue]; 2587 + bool csum = !priv->plat->tx_queues_cfg[queue].coe_unsupported; 2587 2588 struct xsk_buff_pool *pool = tx_q->xsk_pool; 2588 2589 unsigned int entry = tx_q->cur_tx; 2589 2590 struct dma_desc *tx_desc = NULL; ··· 2672 2671 } 2673 2672 2674 2673 stmmac_prepare_tx_desc(priv, tx_desc, 1, xdp_desc.len, 2675 - true, priv->mode, true, true, 2674 + csum, priv->mode, true, true, 2676 2675 xdp_desc.len); 2677 2676 2678 2677 stmmac_enable_dma_transmission(priv, priv->ioaddr, queue); ··· 4984 4983 { 4985 4984 struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[queue]; 4986 4985 struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue]; 4986 + bool csum = !priv->plat->tx_queues_cfg[queue].coe_unsupported; 4987 4987 unsigned int entry = tx_q->cur_tx; 4988 4988 struct dma_desc *tx_desc; 4989 4989 dma_addr_t dma_addr; ··· 5036 5034 stmmac_set_desc_addr(priv, tx_desc, dma_addr); 5037 5035 5038 5036 stmmac_prepare_tx_desc(priv, tx_desc, 1, xdpf->len, 5039 - true, priv->mode, true, true, 5037 + csum, priv->mode, true, true, 5040 5038 xdpf->len); 5041 5039 5042 5040 tx_q->tx_count_frames++;
+8 -9
drivers/net/hyperv/netvsc.c
··· 1812 1812 1813 1813 /* Enable NAPI handler before init callbacks */ 1814 1814 netif_napi_add(ndev, &net_device->chan_table[0].napi, netvsc_poll); 1815 + napi_enable(&net_device->chan_table[0].napi); 1816 + netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_RX, 1817 + &net_device->chan_table[0].napi); 1818 + netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_TX, 1819 + &net_device->chan_table[0].napi); 1815 1820 1816 1821 /* Open the channel */ 1817 1822 device->channel->next_request_id_callback = vmbus_next_request_id; ··· 1836 1831 /* Channel is opened */ 1837 1832 netdev_dbg(ndev, "hv_netvsc channel opened successfully\n"); 1838 1833 1839 - napi_enable(&net_device->chan_table[0].napi); 1840 - netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_RX, 1841 - &net_device->chan_table[0].napi); 1842 - netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_TX, 1843 - &net_device->chan_table[0].napi); 1844 - 1845 1834 /* Connect with the NetVsp */ 1846 1835 ret = netvsc_connect_vsp(device, net_device, device_info); 1847 1836 if (ret != 0) { ··· 1853 1854 1854 1855 close: 1855 1856 RCU_INIT_POINTER(net_device_ctx->nvdev, NULL); 1856 - netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_TX, NULL); 1857 - netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_RX, NULL); 1858 - napi_disable(&net_device->chan_table[0].napi); 1859 1857 1860 1858 /* Now, we can close the channel safely */ 1861 1859 vmbus_close(device->channel); 1862 1860 1863 1861 cleanup: 1862 + netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_TX, NULL); 1863 + netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_RX, NULL); 1864 + napi_disable(&net_device->chan_table[0].napi); 1864 1865 netif_napi_del(&net_device->chan_table[0].napi); 1865 1866 1866 1867 cleanup2:
+16 -7
drivers/net/hyperv/rndis_filter.c
··· 1252 1252 new_sc->rqstor_size = netvsc_rqstor_size(netvsc_ring_bytes); 1253 1253 new_sc->max_pkt_size = NETVSC_MAX_PKT_SIZE; 1254 1254 1255 + /* Enable napi before opening the vmbus channel to avoid races 1256 + * as the host placing data on the host->guest ring may be left 1257 + * out if napi was not enabled. 1258 + */ 1259 + napi_enable(&nvchan->napi); 1260 + netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_RX, 1261 + &nvchan->napi); 1262 + netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_TX, 1263 + &nvchan->napi); 1264 + 1255 1265 ret = vmbus_open(new_sc, netvsc_ring_bytes, 1256 1266 netvsc_ring_bytes, NULL, 0, 1257 1267 netvsc_channel_cb, nvchan); 1258 - if (ret == 0) { 1259 - napi_enable(&nvchan->napi); 1260 - netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_RX, 1261 - &nvchan->napi); 1262 - netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_TX, 1263 - &nvchan->napi); 1264 - } else { 1268 + if (ret != 0) { 1265 1269 netdev_notice(ndev, "sub channel open failed: %d\n", ret); 1270 + netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_TX, 1271 + NULL); 1272 + netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_RX, 1273 + NULL); 1274 + napi_disable(&nvchan->napi); 1266 1275 } 1267 1276 1268 1277 if (atomic_inc_return(&nvscdev->open_chn) == nvscdev->num_chn)
+4
drivers/net/phy/mscc/mscc.h
··· 481 481 void vsc85xx_link_change_notify(struct phy_device *phydev); 482 482 void vsc8584_config_ts_intr(struct phy_device *phydev); 483 483 int vsc8584_ptp_init(struct phy_device *phydev); 484 + void vsc8584_ptp_deinit(struct phy_device *phydev); 484 485 int vsc8584_ptp_probe_once(struct phy_device *phydev); 485 486 int vsc8584_ptp_probe(struct phy_device *phydev); 486 487 irqreturn_t vsc8584_handle_ts_interrupt(struct phy_device *phydev); ··· 495 494 static inline int vsc8584_ptp_init(struct phy_device *phydev) 496 495 { 497 496 return 0; 497 + } 498 + static inline void vsc8584_ptp_deinit(struct phy_device *phydev) 499 + { 498 500 } 499 501 static inline int vsc8584_ptp_probe_once(struct phy_device *phydev) 500 502 {
+1 -3
drivers/net/phy/mscc/mscc_main.c
··· 2337 2337 2338 2338 static void vsc85xx_remove(struct phy_device *phydev) 2339 2339 { 2340 - struct vsc8531_private *priv = phydev->priv; 2341 - 2342 - skb_queue_purge(&priv->rx_skbs_list); 2340 + vsc8584_ptp_deinit(phydev); 2343 2341 } 2344 2342 2345 2343 /* Microsemi VSC85xx PHYs */
+21 -13
drivers/net/phy/mscc/mscc_ptp.c
··· 1298 1298 1299 1299 static int __vsc8584_init_ptp(struct phy_device *phydev) 1300 1300 { 1301 - struct vsc8531_private *vsc8531 = phydev->priv; 1302 1301 static const u32 ltc_seq_e[] = { 0, 400000, 0, 0, 0 }; 1303 1302 static const u8 ltc_seq_a[] = { 8, 6, 5, 4, 2 }; 1304 1303 u32 val; ··· 1514 1515 1515 1516 vsc85xx_ts_eth_cmp1_sig(phydev); 1516 1517 1517 - vsc8531->mii_ts.rxtstamp = vsc85xx_rxtstamp; 1518 - vsc8531->mii_ts.txtstamp = vsc85xx_txtstamp; 1519 - vsc8531->mii_ts.hwtstamp = vsc85xx_hwtstamp; 1520 - vsc8531->mii_ts.ts_info = vsc85xx_ts_info; 1521 - phydev->mii_ts = &vsc8531->mii_ts; 1522 - 1523 - memcpy(&vsc8531->ptp->caps, &vsc85xx_clk_caps, sizeof(vsc85xx_clk_caps)); 1524 - 1525 - vsc8531->ptp->ptp_clock = ptp_clock_register(&vsc8531->ptp->caps, 1526 - &phydev->mdio.dev); 1527 - return PTR_ERR_OR_ZERO(vsc8531->ptp->ptp_clock); 1518 + return 0; 1528 1519 } 1529 1520 1530 1521 void vsc8584_config_ts_intr(struct phy_device *phydev) ··· 1539 1550 } 1540 1551 1541 1552 return 0; 1553 + } 1554 + 1555 + void vsc8584_ptp_deinit(struct phy_device *phydev) 1556 + { 1557 + struct vsc8531_private *vsc8531 = phydev->priv; 1558 + 1559 + if (vsc8531->ptp->ptp_clock) { 1560 + ptp_clock_unregister(vsc8531->ptp->ptp_clock); 1561 + skb_queue_purge(&vsc8531->rx_skbs_list); 1562 + } 1542 1563 } 1543 1564 1544 1565 irqreturn_t vsc8584_handle_ts_interrupt(struct phy_device *phydev) ··· 1611 1612 1612 1613 vsc8531->ptp->phydev = phydev; 1613 1614 1614 - return 0; 1615 + vsc8531->mii_ts.rxtstamp = vsc85xx_rxtstamp; 1616 + vsc8531->mii_ts.txtstamp = vsc85xx_txtstamp; 1617 + vsc8531->mii_ts.hwtstamp = vsc85xx_hwtstamp; 1618 + vsc8531->mii_ts.ts_info = vsc85xx_ts_info; 1619 + phydev->mii_ts = &vsc8531->mii_ts; 1620 + 1621 + memcpy(&vsc8531->ptp->caps, &vsc85xx_clk_caps, sizeof(vsc85xx_clk_caps)); 1622 + vsc8531->ptp->ptp_clock = ptp_clock_register(&vsc8531->ptp->caps, 1623 + &phydev->mdio.dev); 1624 + return PTR_ERR_OR_ZERO(vsc8531->ptp->ptp_clock); 1615 1625 } 1616 1626 1617 1627 int vsc8584_ptp_probe_once(struct phy_device *phydev)
+3
drivers/net/usb/qmi_wwan.c
··· 1355 1355 {QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */ 1356 1356 {QMI_FIXED_INTF(0x2357, 0x9000, 4)}, /* TP-LINK MA260 */ 1357 1357 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1031, 3)}, /* Telit LE910C1-EUX */ 1358 + {QMI_QUIRK_SET_DTR(0x1bc7, 0x1034, 2)}, /* Telit LE910C4-WWX */ 1359 + {QMI_QUIRK_SET_DTR(0x1bc7, 0x1037, 4)}, /* Telit LE910C4-WWX */ 1360 + {QMI_QUIRK_SET_DTR(0x1bc7, 0x1038, 3)}, /* Telit LE910C4-WWX */ 1358 1361 {QMI_QUIRK_SET_DTR(0x1bc7, 0x103a, 0)}, /* Telit LE910C4-WWX */ 1359 1362 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)}, /* Telit LE922A */ 1360 1363 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)}, /* Telit FN980 */
+4 -3
drivers/net/virtio_net.c
··· 5758 5758 disable_rx_mode_work(vi); 5759 5759 flush_work(&vi->rx_mode_work); 5760 5760 5761 - netif_tx_lock_bh(vi->dev); 5762 - netif_device_detach(vi->dev); 5763 - netif_tx_unlock_bh(vi->dev); 5764 5761 if (netif_running(vi->dev)) { 5765 5762 rtnl_lock(); 5766 5763 virtnet_close(vi->dev); 5767 5764 rtnl_unlock(); 5768 5765 } 5766 + 5767 + netif_tx_lock_bh(vi->dev); 5768 + netif_device_detach(vi->dev); 5769 + netif_tx_unlock_bh(vi->dev); 5769 5770 } 5770 5771 5771 5772 static int init_vqs(struct virtnet_info *vi);
+2 -2
drivers/of/device.c
··· 17 17 18 18 /** 19 19 * of_match_device - Tell if a struct device matches an of_device_id list 20 - * @matches: array of of device match structures to search in 21 - * @dev: the of device structure to match against 20 + * @matches: array of of_device_id match structures to search in 21 + * @dev: the OF device structure to match against 22 22 * 23 23 * Used by a driver to check whether an platform_device present in the 24 24 * system is in its list of supported devices.
+7 -2
drivers/of/dynamic.c
··· 935 935 return -ENOMEM; 936 936 937 937 ret = of_changeset_add_property(ocs, np, new_pp); 938 - if (ret) 938 + if (ret) { 939 939 __of_prop_free(new_pp); 940 + return ret; 941 + } 940 942 941 - return ret; 943 + new_pp->next = np->deadprops; 944 + np->deadprops = new_pp; 945 + 946 + return 0; 942 947 } 943 948 944 949 /**
+13 -4
drivers/of/of_reserved_mem.c
··· 25 25 #include <linux/memblock.h> 26 26 #include <linux/kmemleak.h> 27 27 #include <linux/cma.h> 28 + #include <linux/dma-map-ops.h> 28 29 29 30 #include "of_private.h" 30 31 ··· 176 175 base = dt_mem_next_cell(dt_root_addr_cells, &prop); 177 176 size = dt_mem_next_cell(dt_root_size_cells, &prop); 178 177 179 - if (size && 180 - early_init_dt_reserve_memory(base, size, nomap) == 0) 178 + if (size && early_init_dt_reserve_memory(base, size, nomap) == 0) { 179 + /* Architecture specific contiguous memory fixup. */ 180 + if (of_flat_dt_is_compatible(node, "shared-dma-pool") && 181 + of_get_flat_dt_prop(node, "reusable", NULL)) 182 + dma_contiguous_early_fixup(base, size); 181 183 pr_debug("Reserved memory: reserved region for node '%s': base %pa, size %lu MiB\n", 182 184 uname, &base, (unsigned long)(size / SZ_1M)); 183 - else 185 + } else { 184 186 pr_err("Reserved memory: failed to reserve memory for node '%s': base %pa, size %lu MiB\n", 185 187 uname, &base, (unsigned long)(size / SZ_1M)); 188 + } 186 189 187 190 len -= t_len; 188 191 } ··· 477 472 uname, (unsigned long)(size / SZ_1M)); 478 473 return -ENOMEM; 479 474 } 480 - 475 + /* Architecture specific contiguous memory fixup. */ 476 + if (of_flat_dt_is_compatible(node, "shared-dma-pool") && 477 + of_get_flat_dt_prop(node, "reusable", NULL)) 478 + dma_contiguous_early_fixup(base, size); 481 479 /* Save region in the reserved_mem array */ 482 480 fdt_reserved_mem_save_node(node, uname, base, size); 483 481 return 0; ··· 779 771 return -EINVAL; 780 772 781 773 resource_set_range(res, rmem->base, rmem->size); 774 + res->flags = IORESOURCE_MEM; 782 775 res->name = rmem->name; 783 776 return 0; 784 777 }
+1
drivers/pinctrl/Kconfig
··· 539 539 tristate "STMicroelectronics STMFX GPIO expander pinctrl driver" 540 540 depends on I2C 541 541 depends on OF_GPIO 542 + depends on HAS_IOMEM 542 543 select GENERIC_PINCONF 543 544 select GPIOLIB_IRQCHIP 544 545 select MFD_STMFX
+4 -4
drivers/pinctrl/mediatek/pinctrl-airoha.c
··· 2696 2696 arg = 1; 2697 2697 break; 2698 2698 default: 2699 - return -EOPNOTSUPP; 2699 + return -ENOTSUPP; 2700 2700 } 2701 2701 2702 2702 *config = pinconf_to_config_packed(param, arg); ··· 2788 2788 break; 2789 2789 } 2790 2790 default: 2791 - return -EOPNOTSUPP; 2791 + return -ENOTSUPP; 2792 2792 } 2793 2793 } 2794 2794 ··· 2805 2805 if (airoha_pinconf_get(pctrl_dev, 2806 2806 airoha_pinctrl_groups[group].pins[i], 2807 2807 config)) 2808 - return -EOPNOTSUPP; 2808 + return -ENOTSUPP; 2809 2809 2810 2810 if (i && cur_config != *config) 2811 - return -EOPNOTSUPP; 2811 + return -ENOTSUPP; 2812 2812 2813 2813 cur_config = *config; 2814 2814 }
+1 -1
drivers/pinctrl/meson/pinctrl-amlogic-a4.c
··· 1093 1093 { .compatible = "amlogic,pinctrl-s6", .data = &s6_priv_data, }, 1094 1094 { /* sentinel */ } 1095 1095 }; 1096 - MODULE_DEVICE_TABLE(of, aml_pctl_dt_match); 1096 + MODULE_DEVICE_TABLE(of, aml_pctl_of_match); 1097 1097 1098 1098 static struct platform_driver aml_pctl_driver = { 1099 1099 .driver = {
+6
drivers/platform/x86/intel/int3472/discrete.c
··· 193 193 *con_id = "privacy-led"; 194 194 *gpio_flags = GPIO_ACTIVE_HIGH; 195 195 break; 196 + case INT3472_GPIO_TYPE_HOTPLUG_DETECT: 197 + *con_id = "hpd"; 198 + *gpio_flags = GPIO_ACTIVE_HIGH; 199 + break; 196 200 case INT3472_GPIO_TYPE_POWER_ENABLE: 197 201 *con_id = "avdd"; 198 202 *gpio_flags = GPIO_ACTIVE_HIGH; ··· 227 223 * 0x0b Power enable 228 224 * 0x0c Clock enable 229 225 * 0x0d Privacy LED 226 + * 0x13 Hotplug detect 230 227 * 231 228 * There are some known platform specific quirks where that does not quite 232 229 * hold up; for example where a pin with type 0x01 (Power down) is mapped to ··· 297 292 switch (type) { 298 293 case INT3472_GPIO_TYPE_RESET: 299 294 case INT3472_GPIO_TYPE_POWERDOWN: 295 + case INT3472_GPIO_TYPE_HOTPLUG_DETECT: 300 296 ret = skl_int3472_map_gpio_to_sensor(int3472, agpio, con_id, gpio_flags); 301 297 if (ret) 302 298 err_msg = "Failed to map GPIO pin to sensor\n";
+1 -1
drivers/regulator/qcom-pm8008-regulator.c
··· 96 96 97 97 uV = le16_to_cpu(val) * 1000; 98 98 99 - return (uV - preg->desc.min_uV) / preg->desc.uV_step; 99 + return regulator_map_voltage_linear_range(rdev, uV, INT_MAX); 100 100 } 101 101 102 102 static const struct regulator_ops pm8008_regulator_ops = {
+20 -3
drivers/soc/qcom/ubwc_config.c
··· 12 12 13 13 #include <linux/soc/qcom/ubwc.h> 14 14 15 + static const struct qcom_ubwc_cfg_data no_ubwc_data = { 16 + /* no UBWC, no HBB */ 17 + }; 18 + 15 19 static const struct qcom_ubwc_cfg_data msm8937_data = { 16 20 .ubwc_enc_version = UBWC_1_0, 17 21 .ubwc_dec_version = UBWC_1_0, ··· 219 215 }; 220 216 221 217 static const struct of_device_id qcom_ubwc_configs[] __maybe_unused = { 218 + { .compatible = "qcom,apq8016", .data = &no_ubwc_data }, 219 + { .compatible = "qcom,apq8026", .data = &no_ubwc_data }, 220 + { .compatible = "qcom,apq8074", .data = &no_ubwc_data }, 222 221 { .compatible = "qcom,apq8096", .data = &msm8998_data }, 223 - { .compatible = "qcom,msm8917", .data = &msm8937_data }, 222 + { .compatible = "qcom,msm8226", .data = &no_ubwc_data }, 223 + { .compatible = "qcom,msm8916", .data = &no_ubwc_data }, 224 + { .compatible = "qcom,msm8917", .data = &no_ubwc_data }, 224 225 { .compatible = "qcom,msm8937", .data = &msm8937_data }, 226 + { .compatible = "qcom,msm8929", .data = &no_ubwc_data }, 227 + { .compatible = "qcom,msm8939", .data = &no_ubwc_data }, 225 228 { .compatible = "qcom,msm8953", .data = &msm8937_data }, 226 - { .compatible = "qcom,msm8956", .data = &msm8937_data }, 227 - { .compatible = "qcom,msm8976", .data = &msm8937_data }, 229 + { .compatible = "qcom,msm8956", .data = &no_ubwc_data }, 230 + { .compatible = "qcom,msm8974", .data = &no_ubwc_data }, 231 + { .compatible = "qcom,msm8976", .data = &no_ubwc_data }, 228 232 { .compatible = "qcom,msm8996", .data = &msm8998_data }, 229 233 { .compatible = "qcom,msm8998", .data = &msm8998_data }, 230 234 { .compatible = "qcom,qcm2290", .data = &qcm2290_data, }, ··· 245 233 { .compatible = "qcom,sc7280", .data = &sc7280_data, }, 246 234 { .compatible = "qcom,sc8180x", .data = &sc8180x_data, }, 247 235 { .compatible = "qcom,sc8280xp", .data = &sc8280xp_data, }, 236 + { .compatible = "qcom,sda660", .data = &msm8937_data }, 237 + { .compatible = "qcom,sdm450", .data = &msm8937_data }, 248 238 { .compatible = "qcom,sdm630", .data = &msm8937_data }, 239 + { .compatible = "qcom,sdm632", .data = &msm8937_data }, 249 240 { .compatible = "qcom,sdm636", .data = &msm8937_data }, 250 241 { .compatible = "qcom,sdm660", .data = &msm8937_data }, 251 242 { .compatible = "qcom,sdm670", .data = &sdm670_data, }, ··· 261 246 { .compatible = "qcom,sm6375", .data = &sm6350_data, }, 262 247 { .compatible = "qcom,sm7125", .data = &sc7180_data }, 263 248 { .compatible = "qcom,sm7150", .data = &sm7150_data, }, 249 + { .compatible = "qcom,sm7225", .data = &sm6350_data, }, 250 + { .compatible = "qcom,sm7325", .data = &sc7280_data, }, 264 251 { .compatible = "qcom,sm8150", .data = &sm8150_data, }, 265 252 { .compatible = "qcom,sm8250", .data = &sm8250_data, }, 266 253 { .compatible = "qcom,sm8350", .data = &sm8350_data, },
+7 -2
drivers/vhost/net.c
··· 99 99 atomic_t refcount; 100 100 wait_queue_head_t wait; 101 101 struct vhost_virtqueue *vq; 102 + struct rcu_head rcu; 102 103 }; 103 104 104 105 #define VHOST_NET_BATCH 64 ··· 251 250 252 251 static int vhost_net_ubuf_put(struct vhost_net_ubuf_ref *ubufs) 253 252 { 254 - int r = atomic_sub_return(1, &ubufs->refcount); 253 + int r; 254 + 255 + rcu_read_lock(); 256 + r = atomic_sub_return(1, &ubufs->refcount); 255 257 if (unlikely(!r)) 256 258 wake_up(&ubufs->wait); 259 + rcu_read_unlock(); 257 260 return r; 258 261 } 259 262 ··· 270 265 static void vhost_net_ubuf_put_wait_and_free(struct vhost_net_ubuf_ref *ubufs) 271 266 { 272 267 vhost_net_ubuf_put_and_wait(ubufs); 273 - kfree(ubufs); 268 + kfree_rcu(ubufs, rcu); 274 269 } 275 270 276 271 static void vhost_net_clear_ubuf_info(struct vhost_net *n)
+4
drivers/virtio/virtio_input.c
··· 360 360 { 361 361 struct virtio_input *vi = vdev->priv; 362 362 unsigned long flags; 363 + void *buf; 363 364 364 365 spin_lock_irqsave(&vi->lock, flags); 365 366 vi->ready = false; 366 367 spin_unlock_irqrestore(&vi->lock, flags); 367 368 369 + virtio_reset_device(vdev); 370 + while ((buf = virtqueue_detach_unused_buf(vi->sts)) != NULL) 371 + kfree(buf); 368 372 vdev->config->del_vqs(vdev); 369 373 return 0; 370 374 }
+2 -2
drivers/virtio/virtio_pci_legacy_dev.c
··· 140 140 * vp_legacy_queue_vector - set the MSIX vector for a specific virtqueue 141 141 * @ldev: the legacy virtio-pci device 142 142 * @index: queue index 143 - * @vector: the config vector 143 + * @vector: the queue vector 144 144 * 145 - * Returns the config vector read from the device 145 + * Returns the queue vector read from the device 146 146 */ 147 147 u16 vp_legacy_queue_vector(struct virtio_pci_legacy_device *ldev, 148 148 u16 index, u16 vector)
+2 -2
drivers/virtio/virtio_pci_modern_dev.c
··· 546 546 * vp_modern_queue_vector - set the MSIX vector for a specific virtqueue 547 547 * @mdev: the modern virtio-pci device 548 548 * @index: queue index 549 - * @vector: the config vector 549 + * @vector: the queue vector 550 550 * 551 - * Returns the config vector read from the device 551 + * Returns the queue vector read from the device 552 552 */ 553 553 u16 vp_modern_queue_vector(struct virtio_pci_modern_device *mdev, 554 554 u16 index, u16 vector)
+4
fs/efivarfs/super.c
··· 152 152 { 153 153 int guid = len - EFI_VARIABLE_GUID_LEN; 154 154 155 + /* Parallel lookups may produce a temporary invalid filename */ 156 + if (guid <= 0) 157 + return 1; 158 + 155 159 if (name->len != len) 156 160 return 1; 157 161
+14
fs/smb/client/cifsfs.c
··· 1358 1358 truncate_setsize(target_inode, new_size); 1359 1359 fscache_resize_cookie(cifs_inode_cookie(target_inode), 1360 1360 new_size); 1361 + } else if (rc == -EOPNOTSUPP) { 1362 + /* 1363 + * copy_file_range syscall man page indicates EINVAL 1364 + * is returned e.g when "fd_in and fd_out refer to the 1365 + * same file and the source and target ranges overlap." 1366 + * Test generic/157 was what showed these cases where 1367 + * we need to remap EOPNOTSUPP to EINVAL 1368 + */ 1369 + if (off >= src_inode->i_size) { 1370 + rc = -EINVAL; 1371 + } else if (src_inode == target_inode) { 1372 + if (off + len > destoff) 1373 + rc = -EINVAL; 1374 + } 1361 1375 } 1362 1376 if (rc == 0 && new_size > target_cifsi->netfs.zero_point) 1363 1377 target_cifsi->netfs.zero_point = new_size;
+5 -2
fs/smb/client/smb2inode.c
··· 207 207 server = cifs_pick_channel(ses); 208 208 209 209 vars = kzalloc(sizeof(*vars), GFP_ATOMIC); 210 - if (vars == NULL) 211 - return -ENOMEM; 210 + if (vars == NULL) { 211 + rc = -ENOMEM; 212 + goto out; 213 + } 212 214 rqst = &vars->rqst[0]; 213 215 rsp_iov = &vars->rsp_iov[0]; 214 216 ··· 866 864 smb2_should_replay(tcon, &retries, &cur_sleep)) 867 865 goto replay_again; 868 866 867 + out: 869 868 if (cfile) 870 869 cifsFileInfo_put(cfile); 871 870
+1
fs/xfs/Kconfig
··· 105 105 config XFS_RT 106 106 bool "XFS Realtime subvolume support" 107 107 depends on XFS_FS 108 + default BLK_DEV_ZONED 108 109 help 109 110 If you say Y here you will be able to mount and use XFS filesystems 110 111 which contain a realtime subvolume. The realtime subvolume is a
+7
fs/xfs/libxfs/xfs_attr_remote.c
··· 435 435 0, &bp, &xfs_attr3_rmt_buf_ops); 436 436 if (xfs_metadata_is_sick(error)) 437 437 xfs_dirattr_mark_sick(args->dp, XFS_ATTR_FORK); 438 + /* 439 + * ENODATA from disk implies a disk medium failure; 440 + * ENODATA for xattrs means attribute not found, so 441 + * disambiguate that here. 442 + */ 443 + if (error == -ENODATA) 444 + error = -EIO; 438 445 if (error) 439 446 return error; 440 447
+6
fs/xfs/libxfs/xfs_da_btree.c
··· 2833 2833 &bp, ops); 2834 2834 if (xfs_metadata_is_sick(error)) 2835 2835 xfs_dirattr_mark_sick(dp, whichfork); 2836 + /* 2837 + * ENODATA from disk implies a disk medium failure; ENODATA for 2838 + * xattrs means attribute not found, so disambiguate that here. 2839 + */ 2840 + if (error == -ENODATA && whichfork == XFS_ATTR_FORK) 2841 + error = -EIO; 2836 2842 if (error) 2837 2843 goto out_free; 2838 2844
+3
fs/xfs/xfs_aops.c
··· 760 760 { 761 761 struct xfs_inode *ip = XFS_I(file_inode(swap_file)); 762 762 763 + if (xfs_is_zoned_inode(ip)) 764 + return -EINVAL; 765 + 763 766 /* 764 767 * Swap file activation can race against concurrent shared extent 765 768 * removal in files that have been cloned. If this happens,
+2 -43
fs/xfs/xfs_zone_alloc.c
··· 374 374 return 0; 375 375 } 376 376 377 - /* 378 - * Check if the zone containing the data just before the offset we are 379 - * writing to is still open and has space. 380 - */ 381 - static struct xfs_open_zone * 382 - xfs_last_used_zone( 383 - struct iomap_ioend *ioend) 384 - { 385 - struct xfs_inode *ip = XFS_I(ioend->io_inode); 386 - struct xfs_mount *mp = ip->i_mount; 387 - xfs_fileoff_t offset_fsb = XFS_B_TO_FSB(mp, ioend->io_offset); 388 - struct xfs_rtgroup *rtg = NULL; 389 - struct xfs_open_zone *oz = NULL; 390 - struct xfs_iext_cursor icur; 391 - struct xfs_bmbt_irec got; 392 - 393 - xfs_ilock(ip, XFS_ILOCK_SHARED); 394 - if (!xfs_iext_lookup_extent_before(ip, &ip->i_df, &offset_fsb, 395 - &icur, &got)) { 396 - xfs_iunlock(ip, XFS_ILOCK_SHARED); 397 - return NULL; 398 - } 399 - xfs_iunlock(ip, XFS_ILOCK_SHARED); 400 - 401 - rtg = xfs_rtgroup_grab(mp, xfs_rtb_to_rgno(mp, got.br_startblock)); 402 - if (!rtg) 403 - return NULL; 404 - 405 - xfs_ilock(rtg_rmap(rtg), XFS_ILOCK_SHARED); 406 - oz = READ_ONCE(rtg->rtg_open_zone); 407 - if (oz && (oz->oz_is_gc || !atomic_inc_not_zero(&oz->oz_ref))) 408 - oz = NULL; 409 - xfs_iunlock(rtg_rmap(rtg), XFS_ILOCK_SHARED); 410 - 411 - xfs_rtgroup_rele(rtg); 412 - return oz; 413 - } 414 - 415 377 static struct xfs_group * 416 378 xfs_find_free_zone( 417 379 struct xfs_mount *mp, ··· 880 918 goto out_error; 881 919 882 920 /* 883 - * If we don't have a cached zone in this write context, see if the 884 - * last extent before the one we are writing to points to an active 885 - * zone. If so, just continue writing to it. 921 + * If we don't have a locally cached zone in this write context, see if 922 + * the inode is still associated with a zone and use that if so. 886 923 */ 887 - if (!*oz && ioend->io_offset) 888 - *oz = xfs_last_used_zone(ioend); 889 924 if (!*oz) 890 925 *oz = xfs_cached_zone(mp, ip); 891 926
+6
fs/xfs/xfs_zone_space_resv.c
··· 10 10 #include "xfs_mount.h" 11 11 #include "xfs_inode.h" 12 12 #include "xfs_rtbitmap.h" 13 + #include "xfs_icache.h" 13 14 #include "xfs_zone_alloc.h" 14 15 #include "xfs_zone_priv.h" 15 16 #include "xfs_zones.h" ··· 231 230 232 231 error = xfs_dec_freecounter(mp, XC_FREE_RTEXTENTS, count_fsb, 233 232 flags & XFS_ZR_RESERVED); 233 + if (error == -ENOSPC && !(flags & XFS_ZR_NOWAIT)) { 234 + xfs_inodegc_flush(mp); 235 + error = xfs_dec_freecounter(mp, XC_FREE_RTEXTENTS, count_fsb, 236 + flags & XFS_ZR_RESERVED); 237 + } 234 238 if (error == -ENOSPC && (flags & XFS_ZR_GREEDY) && count_fsb > 1) 235 239 error = xfs_zoned_reserve_extents_greedy(mp, &count_fsb, flags); 236 240 if (error)
+5 -5
include/drm/drm_gpuvm.h
··· 103 103 } va; 104 104 105 105 /** 106 - * @gem: structure containing the &drm_gem_object and it's offset 106 + * @gem: structure containing the &drm_gem_object and its offset 107 107 */ 108 108 struct { 109 109 /** ··· 843 843 } va; 844 844 845 845 /** 846 - * @gem: structure containing the &drm_gem_object and it's offset 846 + * @gem: structure containing the &drm_gem_object and its offset 847 847 */ 848 848 struct { 849 849 /** ··· 1189 1189 1190 1190 /** 1191 1191 * @sm_step_unmap: called from &drm_gpuvm_sm_map and 1192 - * &drm_gpuvm_sm_unmap to unmap an existent mapping 1192 + * &drm_gpuvm_sm_unmap to unmap an existing mapping 1193 1193 * 1194 - * This callback is called when existent mapping needs to be unmapped. 1194 + * This callback is called when existing mapping needs to be unmapped. 1195 1195 * This is the case when either a newly requested mapping encloses an 1196 - * existent mapping or an unmap of an existent mapping is requested. 1196 + * existing mapping or an unmap of an existing mapping is requested. 1197 1197 * 1198 1198 * The &priv pointer matches the one the driver passed to 1199 1199 * &drm_gpuvm_sm_map or &drm_gpuvm_sm_unmap, respectively.
+6 -3
include/kvm/arm_vgic.h
··· 8 8 #include <linux/bits.h> 9 9 #include <linux/kvm.h> 10 10 #include <linux/irqreturn.h> 11 - #include <linux/kref.h> 12 11 #include <linux/mutex.h> 12 + #include <linux/refcount.h> 13 13 #include <linux/spinlock.h> 14 14 #include <linux/static_key.h> 15 15 #include <linux/types.h> ··· 139 139 bool pending_latch; /* The pending latch state used to calculate 140 140 * the pending state for both level 141 141 * and edge triggered IRQs. */ 142 - bool active; /* not used for LPIs */ 142 + bool active; 143 + bool pending_release; /* Used for LPIs only, unreferenced IRQ 144 + * pending a release */ 145 + 143 146 bool enabled; 144 147 bool hw; /* Tied to HW IRQ */ 145 - struct kref refcount; /* Used for LPIs */ 148 + refcount_t refcount; /* Used for LPIs */ 146 149 u32 hwintid; /* HW INTID number */ 147 150 unsigned int host_irq; /* linux irq corresponding to hwintid */ 148 151 union {
+1
include/linux/atmdev.h
··· 185 185 int (*compat_ioctl)(struct atm_dev *dev,unsigned int cmd, 186 186 void __user *arg); 187 187 #endif 188 + int (*pre_send)(struct atm_vcc *vcc, struct sk_buff *skb); 188 189 int (*send)(struct atm_vcc *vcc,struct sk_buff *skb); 189 190 int (*send_bh)(struct atm_vcc *vcc, struct sk_buff *skb); 190 191 int (*send_oam)(struct atm_vcc *vcc,void *cell,int flags);
+3
include/linux/dma-map-ops.h
··· 153 153 { 154 154 __free_pages(page, get_order(size)); 155 155 } 156 + static inline void dma_contiguous_early_fixup(phys_addr_t base, unsigned long size) 157 + { 158 + } 156 159 #endif /* CONFIG_DMA_CMA*/ 157 160 158 161 #ifdef CONFIG_DMA_DECLARE_COHERENT
+3 -2
include/linux/memblock.h
··· 40 40 * via a driver, and never indicated in the firmware-provided memory map as 41 41 * system RAM. This corresponds to IORESOURCE_SYSRAM_DRIVER_MANAGED in the 42 42 * kernel resource tree. 43 - * @MEMBLOCK_RSRV_NOINIT: memory region for which struct pages are 44 - * not initialized (only for reserved regions). 43 + * @MEMBLOCK_RSRV_NOINIT: reserved memory region for which struct pages are not 44 + * fully initialized. Users of this flag are responsible to properly initialize 45 + * struct pages of this region 45 46 * @MEMBLOCK_RSRV_KERN: memory region that is reserved for kernel use, 46 47 * either explictitly with memblock_reserve_kern() or via memblock 47 48 * allocation APIs. All memblock allocations set this flag.
+1
include/linux/platform_data/x86/int3472.h
··· 27 27 #define INT3472_GPIO_TYPE_CLK_ENABLE 0x0c 28 28 #define INT3472_GPIO_TYPE_PRIVACY_LED 0x0d 29 29 #define INT3472_GPIO_TYPE_HANDSHAKE 0x12 30 + #define INT3472_GPIO_TYPE_HOTPLUG_DETECT 0x13 30 31 31 32 #define INT3472_PDEV_MAX_NAME_LEN 23 32 33 #define INT3472_MAX_SENSOR_GPIOS 3
+2
include/linux/skbuff.h
··· 4172 4172 struct iov_iter *to, int len, u32 *crcp); 4173 4173 int skb_copy_datagram_from_iter(struct sk_buff *skb, int offset, 4174 4174 struct iov_iter *from, int len); 4175 + int skb_copy_datagram_from_iter_full(struct sk_buff *skb, int offset, 4176 + struct iov_iter *from, int len); 4175 4177 int zerocopy_sg_from_iter(struct sk_buff *skb, struct iov_iter *frm); 4176 4178 void skb_free_datagram(struct sock *sk, struct sk_buff *skb); 4177 4179 int skb_kill_datagram(struct sock *sk, struct sk_buff *skb, unsigned int flags);
-2
include/linux/virtio_config.h
··· 328 328 bool virtio_get_shm_region(struct virtio_device *vdev, 329 329 struct virtio_shm_region *region, u8 id) 330 330 { 331 - if (!region->len) 332 - return false; 333 331 if (!vdev->config->get_shm_region) 334 332 return false; 335 333 return vdev->config->get_shm_region(vdev, region, id);
+1 -1
include/net/bluetooth/hci_sync.h
··· 93 93 94 94 int hci_update_eir_sync(struct hci_dev *hdev); 95 95 int hci_update_class_sync(struct hci_dev *hdev); 96 - int hci_update_name_sync(struct hci_dev *hdev); 96 + int hci_update_name_sync(struct hci_dev *hdev, const u8 *name); 97 97 int hci_write_ssp_mode_sync(struct hci_dev *hdev, u8 mode); 98 98 99 99 int hci_get_random_address(struct hci_dev *hdev, bool require_privacy,
+17 -1
include/net/rose.h
··· 8 8 #ifndef _ROSE_H 9 9 #define _ROSE_H 10 10 11 + #include <linux/refcount.h> 11 12 #include <linux/rose.h> 12 13 #include <net/ax25.h> 13 14 #include <net/sock.h> ··· 97 96 ax25_cb *ax25; 98 97 struct net_device *dev; 99 98 unsigned short count; 100 - unsigned short use; 99 + refcount_t use; 101 100 unsigned int number; 102 101 char restarted; 103 102 char dce_mode; ··· 151 150 }; 152 151 153 152 #define rose_sk(sk) ((struct rose_sock *)(sk)) 153 + 154 + static inline void rose_neigh_hold(struct rose_neigh *rose_neigh) 155 + { 156 + refcount_inc(&rose_neigh->use); 157 + } 158 + 159 + static inline void rose_neigh_put(struct rose_neigh *rose_neigh) 160 + { 161 + if (refcount_dec_and_test(&rose_neigh->use)) { 162 + if (rose_neigh->ax25) 163 + ax25_cb_put(rose_neigh->ax25); 164 + kfree(rose_neigh->digipeat); 165 + kfree(rose_neigh); 166 + } 167 + } 154 168 155 169 /* af_rose.c */ 156 170 extern ax25_address rose_callsign;
+2 -2
include/uapi/linux/vhost.h
··· 260 260 * When fork_owner is set to VHOST_FORK_OWNER_KTHREAD: 261 261 * - Vhost will create vhost workers as kernel threads. 262 262 */ 263 - #define VHOST_SET_FORK_FROM_OWNER _IOW(VHOST_VIRTIO, 0x83, __u8) 263 + #define VHOST_SET_FORK_FROM_OWNER _IOW(VHOST_VIRTIO, 0x84, __u8) 264 264 265 265 /** 266 266 * VHOST_GET_FORK_OWNER - Get the current fork_owner flag for the vhost device. ··· 268 268 * 269 269 * @return: An 8-bit value indicating the current thread mode. 270 270 */ 271 - #define VHOST_GET_FORK_FROM_OWNER _IOR(VHOST_VIRTIO, 0x84, __u8) 271 + #define VHOST_GET_FORK_FROM_OWNER _IOR(VHOST_VIRTIO, 0x85, __u8) 272 272 273 273 #endif
+5 -4
init/Kconfig
··· 117 117 118 118 config CC_HAS_COUNTED_BY 119 119 bool 120 - # clang needs to be at least 19.1.3 to avoid __bdos miscalculations 121 - # https://github.com/llvm/llvm-project/pull/110497 122 - # https://github.com/llvm/llvm-project/pull/112636 123 - default y if CC_IS_CLANG && CLANG_VERSION >= 190103 120 + # clang needs to be at least 20.1.0 to avoid potential crashes 121 + # when building structures that contain __counted_by 122 + # https://github.com/ClangBuiltLinux/linux/issues/2114 123 + # https://github.com/llvm/llvm-project/commit/160fb1121cdf703c3ef5e61fb26c5659eb581489 124 + default y if CC_IS_CLANG && CLANG_VERSION >= 200100 124 125 # supported since gcc 15.1.0 125 126 # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108896 126 127 default y if CC_IS_GCC && GCC_VERSION >= 150100
+13 -7
io_uring/kbuf.c
··· 36 36 { 37 37 while (len) { 38 38 struct io_uring_buf *buf; 39 - u32 this_len; 39 + u32 buf_len, this_len; 40 40 41 41 buf = io_ring_head_to_buf(bl->buf_ring, bl->head, bl->mask); 42 - this_len = min_t(int, len, buf->len); 43 - buf->len -= this_len; 44 - if (buf->len) { 42 + buf_len = READ_ONCE(buf->len); 43 + this_len = min_t(u32, len, buf_len); 44 + buf_len -= this_len; 45 + /* Stop looping for invalid buffer length of 0 */ 46 + if (buf_len || !this_len) { 45 47 buf->addr += this_len; 48 + buf->len = buf_len; 46 49 return false; 47 50 } 51 + buf->len = 0; 48 52 bl->head++; 49 53 len -= this_len; 50 54 } ··· 163 159 __u16 tail, head = bl->head; 164 160 struct io_uring_buf *buf; 165 161 void __user *ret; 162 + u32 buf_len; 166 163 167 164 tail = smp_load_acquire(&br->tail); 168 165 if (unlikely(tail == head)) ··· 173 168 req->flags |= REQ_F_BL_EMPTY; 174 169 175 170 buf = io_ring_head_to_buf(br, head, bl->mask); 176 - if (*len == 0 || *len > buf->len) 177 - *len = buf->len; 171 + buf_len = READ_ONCE(buf->len); 172 + if (*len == 0 || *len > buf_len) 173 + *len = buf_len; 178 174 req->flags |= REQ_F_BUFFER_RING | REQ_F_BUFFERS_COMMIT; 179 175 req->buf_list = bl; 180 176 req->buf_index = buf->bid; ··· 271 265 272 266 req->buf_index = buf->bid; 273 267 do { 274 - u32 len = buf->len; 268 + u32 len = READ_ONCE(buf->len); 275 269 276 270 /* truncate end piece, if needed, for non partial buffers */ 277 271 if (len > arg->max_len) {
-2
kernel/dma/contiguous.c
··· 483 483 pr_err("Reserved memory: unable to setup CMA region\n"); 484 484 return err; 485 485 } 486 - /* Architecture specific contiguous memory fixup. */ 487 - dma_contiguous_early_fixup(rmem->base, rmem->size); 488 486 489 487 if (default_cma) 490 488 dma_contiguous_default_area = cma;
+2 -2
kernel/dma/pool.c
··· 102 102 103 103 #ifdef CONFIG_DMA_DIRECT_REMAP 104 104 addr = dma_common_contiguous_remap(page, pool_size, 105 - pgprot_dmacoherent(PAGE_KERNEL), 106 - __builtin_return_address(0)); 105 + pgprot_decrypted(pgprot_dmacoherent(PAGE_KERNEL)), 106 + __builtin_return_address(0)); 107 107 if (!addr) 108 108 goto free_page; 109 109 #else
+12 -6
kernel/sched/deadline.c
··· 1496 1496 } 1497 1497 1498 1498 if (unlikely(is_dl_boosted(dl_se) || !start_dl_timer(dl_se))) { 1499 - if (dl_server(dl_se)) 1500 - enqueue_dl_entity(dl_se, ENQUEUE_REPLENISH); 1501 - else 1499 + if (dl_server(dl_se)) { 1500 + replenish_dl_new_period(dl_se, rq); 1501 + start_dl_timer(dl_se); 1502 + } else { 1502 1503 enqueue_task_dl(rq, dl_task_of(dl_se), ENQUEUE_REPLENISH); 1504 + } 1503 1505 } 1504 1506 1505 1507 if (!is_leftmost(dl_se, &rq->dl)) ··· 1613 1611 static bool dl_server_stopped(struct sched_dl_entity *dl_se) 1614 1612 { 1615 1613 if (!dl_se->dl_server_active) 1616 - return false; 1614 + return true; 1617 1615 1618 1616 if (dl_se->dl_server_idle) { 1619 1617 dl_server_stop(dl_se); ··· 1851 1849 u64 deadline = dl_se->deadline; 1852 1850 1853 1851 dl_rq->dl_nr_running++; 1854 - add_nr_running(rq_of_dl_rq(dl_rq), 1); 1852 + 1853 + if (!dl_server(dl_se)) 1854 + add_nr_running(rq_of_dl_rq(dl_rq), 1); 1855 1855 1856 1856 inc_dl_deadline(dl_rq, deadline); 1857 1857 } ··· 1863 1859 { 1864 1860 WARN_ON(!dl_rq->dl_nr_running); 1865 1861 dl_rq->dl_nr_running--; 1866 - sub_nr_running(rq_of_dl_rq(dl_rq), 1); 1862 + 1863 + if (!dl_server(dl_se)) 1864 + sub_nr_running(rq_of_dl_rq(dl_rq), 1); 1867 1865 1868 1866 dec_dl_deadline(dl_rq, dl_se->deadline); 1869 1867 }
+2 -4
kernel/sched/debug.c
··· 376 376 return -EINVAL; 377 377 } 378 378 379 - if (rq->cfs.h_nr_queued) { 380 - update_rq_clock(rq); 381 - dl_server_stop(&rq->fair_server); 382 - } 379 + update_rq_clock(rq); 380 + dl_server_stop(&rq->fair_server); 383 381 384 382 retval = dl_server_apply_params(&rq->fair_server, runtime, period, 0); 385 383 if (retval)
+3 -3
lib/ubsan.c
··· 333 333 void __ubsan_handle_divrem_overflow(void *_data, void *lhs, void *rhs) 334 334 { 335 335 struct overflow_data *data = _data; 336 - char rhs_val_str[VALUE_LENGTH]; 336 + char lhs_val_str[VALUE_LENGTH]; 337 337 338 338 if (suppress_report(&data->location)) 339 339 return; 340 340 341 341 ubsan_prologue(&data->location, "division-overflow"); 342 342 343 - val_to_string(rhs_val_str, sizeof(rhs_val_str), data->type, rhs); 343 + val_to_string(lhs_val_str, sizeof(lhs_val_str), data->type, lhs); 344 344 345 345 if (type_is_signed(data->type) && get_signed_val(data->type, rhs) == -1) 346 346 pr_err("division of %s by -1 cannot be represented in type %s\n", 347 - rhs_val_str, data->type->type_name); 347 + lhs_val_str, data->type->type_name); 348 348 else 349 349 pr_err("division by zero\n"); 350 350
+13 -6
mm/memblock.c
··· 780 780 } 781 781 782 782 if ((nr_pages << PAGE_SHIFT) > threshold_bytes) { 783 - mem_size_mb = memblock_phys_mem_size() >> 20; 783 + mem_size_mb = memblock_phys_mem_size() / SZ_1M; 784 784 pr_err("NUMA: no nodes coverage for %luMB of %luMB RAM\n", 785 - (nr_pages << PAGE_SHIFT) >> 20, mem_size_mb); 785 + (nr_pages << PAGE_SHIFT) / SZ_1M, mem_size_mb); 786 786 return false; 787 787 } 788 788 ··· 1091 1091 1092 1092 /** 1093 1093 * memblock_reserved_mark_noinit - Mark a reserved memory region with flag 1094 - * MEMBLOCK_RSRV_NOINIT which results in the struct pages not being initialized 1095 - * for this region. 1094 + * MEMBLOCK_RSRV_NOINIT 1095 + * 1096 1096 * @base: the base phys addr of the region 1097 1097 * @size: the size of the region 1098 1098 * 1099 - * struct pages will not be initialized for reserved memory regions marked with 1100 - * %MEMBLOCK_RSRV_NOINIT. 1099 + * The struct pages for the reserved regions marked %MEMBLOCK_RSRV_NOINIT will 1100 + * not be fully initialized to allow the caller optimize their initialization. 1101 + * 1102 + * When %CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, setting this flag 1103 + * completely bypasses the initialization of struct pages for such region. 1104 + * 1105 + * When %CONFIG_DEFERRED_STRUCT_PAGE_INIT is disabled, struct pages in this 1106 + * region will be initialized with default values but won't be marked as 1107 + * reserved. 1101 1108 * 1102 1109 * Return: 0 on success, -errno on failure. 1103 1110 */
+2 -2
mm/numa_emulation.c
··· 73 73 } 74 74 75 75 printk(KERN_INFO "Faking node %d at [mem %#018Lx-%#018Lx] (%LuMB)\n", 76 - nid, eb->start, eb->end - 1, (eb->end - eb->start) >> 20); 76 + nid, eb->start, eb->end - 1, (eb->end - eb->start) / SZ_1M); 77 77 return 0; 78 78 } 79 79 ··· 264 264 min_size = ALIGN(max(min_size, FAKE_NODE_MIN_SIZE), FAKE_NODE_MIN_SIZE); 265 265 if (size < min_size) { 266 266 pr_err("Fake node size %LuMB too small, increasing to %LuMB\n", 267 - size >> 20, min_size >> 20); 267 + size / SZ_1M, min_size / SZ_1M); 268 268 size = min_size; 269 269 } 270 270 size = ALIGN_DOWN(size, FAKE_NODE_MIN_SIZE);
+3 -3
mm/numa_memblks.c
··· 76 76 for (j = 0; j < cnt; j++) 77 77 numa_distance[i * cnt + j] = i == j ? 78 78 LOCAL_DISTANCE : REMOTE_DISTANCE; 79 - printk(KERN_DEBUG "NUMA: Initialized distance table, cnt=%d\n", cnt); 79 + pr_debug("NUMA: Initialized distance table, cnt=%d\n", cnt); 80 80 81 81 return 0; 82 82 } ··· 427 427 unsigned long pfn_align = node_map_pfn_alignment(); 428 428 429 429 if (pfn_align && pfn_align < PAGES_PER_SECTION) { 430 - unsigned long node_align_mb = PFN_PHYS(pfn_align) >> 20; 430 + unsigned long node_align_mb = PFN_PHYS(pfn_align) / SZ_1M; 431 431 432 - unsigned long sect_align_mb = PFN_PHYS(PAGES_PER_SECTION) >> 20; 432 + unsigned long sect_align_mb = PFN_PHYS(PAGES_PER_SECTION) / SZ_1M; 433 433 434 434 pr_warn("Node alignment %luMB < min %luMB, rejecting NUMA config\n", 435 435 node_align_mb, sect_align_mb);
+12 -3
net/atm/common.c
··· 635 635 636 636 skb->dev = NULL; /* for paths shared with net_device interfaces */ 637 637 if (!copy_from_iter_full(skb_put(skb, size), size, &m->msg_iter)) { 638 - atm_return_tx(vcc, skb); 639 - kfree_skb(skb); 640 638 error = -EFAULT; 641 - goto out; 639 + goto free_skb; 642 640 } 643 641 if (eff != size) 644 642 memset(skb->data + size, 0, eff-size); 643 + 644 + if (vcc->dev->ops->pre_send) { 645 + error = vcc->dev->ops->pre_send(vcc, skb); 646 + if (error) 647 + goto free_skb; 648 + } 649 + 645 650 error = vcc->dev->ops->send(vcc, skb); 646 651 error = error ? error : size; 647 652 out: 648 653 release_sock(sk); 649 654 return error; 655 + free_skb: 656 + atm_return_tx(vcc, skb); 657 + kfree_skb(skb); 658 + goto out; 650 659 } 651 660 652 661 __poll_t vcc_poll(struct file *file, struct socket *sock, poll_table *wait)
+41 -17
net/bluetooth/hci_conn.c
··· 149 149 150 150 hci_chan_list_flush(conn); 151 151 152 - hci_conn_hash_del(hdev, conn); 153 - 154 152 if (HCI_CONN_HANDLE_UNSET(conn->handle)) 155 153 ida_free(&hdev->unset_handle_ida, conn->handle); 156 154 ··· 1150 1152 disable_delayed_work_sync(&conn->auto_accept_work); 1151 1153 disable_delayed_work_sync(&conn->idle_work); 1152 1154 1153 - if (conn->type == ACL_LINK) { 1154 - /* Unacked frames */ 1155 - hdev->acl_cnt += conn->sent; 1156 - } else if (conn->type == LE_LINK) { 1157 - cancel_delayed_work(&conn->le_conn_timeout); 1155 + /* Remove the connection from the list so unacked logic can detect when 1156 + * a certain pool is not being utilized. 1157 + */ 1158 + hci_conn_hash_del(hdev, conn); 1158 1159 1159 - if (hdev->le_pkts) 1160 - hdev->le_cnt += conn->sent; 1160 + /* Handle unacked frames: 1161 + * 1162 + * - In case there are no connection, or if restoring the buffers 1163 + * considered in transist would overflow, restore all buffers to the 1164 + * pool. 1165 + * - Otherwise restore just the buffers considered in transit for the 1166 + * hci_conn 1167 + */ 1168 + switch (conn->type) { 1169 + case ACL_LINK: 1170 + if (!hci_conn_num(hdev, ACL_LINK) || 1171 + hdev->acl_cnt + conn->sent > hdev->acl_pkts) 1172 + hdev->acl_cnt = hdev->acl_pkts; 1161 1173 else 1162 1174 hdev->acl_cnt += conn->sent; 1163 - } else { 1164 - /* Unacked ISO frames */ 1165 - if (conn->type == CIS_LINK || 1166 - conn->type == BIS_LINK || 1167 - conn->type == PA_LINK) { 1168 - if (hdev->iso_pkts) 1169 - hdev->iso_cnt += conn->sent; 1170 - else if (hdev->le_pkts) 1175 + break; 1176 + case LE_LINK: 1177 + cancel_delayed_work(&conn->le_conn_timeout); 1178 + 1179 + if (hdev->le_pkts) { 1180 + if (!hci_conn_num(hdev, LE_LINK) || 1181 + hdev->le_cnt + conn->sent > hdev->le_pkts) 1182 + hdev->le_cnt = hdev->le_pkts; 1183 + else 1171 1184 hdev->le_cnt += conn->sent; 1185 + } else { 1186 + if ((!hci_conn_num(hdev, LE_LINK) && 1187 + !hci_conn_num(hdev, ACL_LINK)) || 1188 + hdev->acl_cnt + conn->sent > hdev->acl_pkts) 1189 + hdev->acl_cnt = hdev->acl_pkts; 1172 1190 else 1173 1191 hdev->acl_cnt += conn->sent; 1174 1192 } 1193 + break; 1194 + case CIS_LINK: 1195 + case BIS_LINK: 1196 + case PA_LINK: 1197 + if (!hci_iso_count(hdev) || 1198 + hdev->iso_cnt + conn->sent > hdev->iso_pkts) 1199 + hdev->iso_cnt = hdev->iso_pkts; 1200 + else 1201 + hdev->iso_cnt += conn->sent; 1202 + break; 1175 1203 } 1176 1204 1177 1205 skb_queue_purge(&conn->data_q);
+23 -2
net/bluetooth/hci_event.c
··· 2703 2703 if (!conn) 2704 2704 goto unlock; 2705 2705 2706 - if (status) { 2706 + if (status && status != HCI_ERROR_UNKNOWN_CONN_ID) { 2707 2707 mgmt_disconnect_failed(hdev, &conn->dst, conn->type, 2708 2708 conn->dst_type, status); 2709 2709 ··· 2717 2717 2718 2718 goto done; 2719 2719 } 2720 + 2721 + /* During suspend, mark connection as closed immediately 2722 + * since we might not receive HCI_EV_DISCONN_COMPLETE 2723 + */ 2724 + if (hdev->suspended) 2725 + conn->state = BT_CLOSED; 2720 2726 2721 2727 mgmt_conn = test_and_clear_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags); 2722 2728 ··· 4404 4398 if (!conn) 4405 4399 continue; 4406 4400 4407 - conn->sent -= count; 4401 + /* Check if there is really enough packets outstanding before 4402 + * attempting to decrease the sent counter otherwise it could 4403 + * underflow.. 4404 + */ 4405 + if (conn->sent >= count) { 4406 + conn->sent -= count; 4407 + } else { 4408 + bt_dev_warn(hdev, "hcon %p sent %u < count %u", 4409 + conn, conn->sent, count); 4410 + conn->sent = 0; 4411 + } 4408 4412 4409 4413 for (i = 0; i < count; ++i) 4410 4414 hci_conn_tx_dequeue(conn); ··· 7024 7008 { 7025 7009 struct hci_evt_le_big_sync_lost *ev = data; 7026 7010 struct hci_conn *bis, *conn; 7011 + bool mgmt_conn; 7027 7012 7028 7013 bt_dev_dbg(hdev, "big handle 0x%2.2x", ev->handle); 7029 7014 ··· 7043 7026 while ((bis = hci_conn_hash_lookup_big_state(hdev, ev->handle, 7044 7027 BT_CONNECTED, 7045 7028 HCI_ROLE_SLAVE))) { 7029 + mgmt_conn = test_and_clear_bit(HCI_CONN_MGMT_CONNECTED, &bis->flags); 7030 + mgmt_device_disconnected(hdev, &bis->dst, bis->type, bis->dst_type, 7031 + ev->reason, mgmt_conn); 7032 + 7046 7033 clear_bit(HCI_CONN_BIG_SYNC, &bis->flags); 7047 7034 hci_disconn_cfm(bis, ev->reason); 7048 7035 hci_conn_del(bis);
+3 -3
net/bluetooth/hci_sync.c
··· 3481 3481 return hci_write_scan_enable_sync(hdev, scan); 3482 3482 } 3483 3483 3484 - int hci_update_name_sync(struct hci_dev *hdev) 3484 + int hci_update_name_sync(struct hci_dev *hdev, const u8 *name) 3485 3485 { 3486 3486 struct hci_cp_write_local_name cp; 3487 3487 3488 3488 memset(&cp, 0, sizeof(cp)); 3489 3489 3490 - memcpy(cp.name, hdev->dev_name, sizeof(cp.name)); 3490 + memcpy(cp.name, name, sizeof(cp.name)); 3491 3491 3492 3492 return __hci_cmd_sync_status(hdev, HCI_OP_WRITE_LOCAL_NAME, 3493 3493 sizeof(cp), &cp, ··· 3540 3540 hci_write_fast_connectable_sync(hdev, false); 3541 3541 hci_update_scan_sync(hdev); 3542 3542 hci_update_class_sync(hdev); 3543 - hci_update_name_sync(hdev); 3543 + hci_update_name_sync(hdev, hdev->dev_name); 3544 3544 hci_update_eir_sync(hdev); 3545 3545 } 3546 3546
+7 -2
net/bluetooth/mgmt.c
··· 3892 3892 3893 3893 static int set_name_sync(struct hci_dev *hdev, void *data) 3894 3894 { 3895 + struct mgmt_pending_cmd *cmd = data; 3896 + struct mgmt_cp_set_local_name *cp = cmd->param; 3897 + 3895 3898 if (lmp_bredr_capable(hdev)) { 3896 - hci_update_name_sync(hdev); 3899 + hci_update_name_sync(hdev, cp->name); 3897 3900 hci_update_eir_sync(hdev); 3898 3901 } 3899 3902 ··· 9708 9705 if (!mgmt_connected) 9709 9706 return; 9710 9707 9711 - if (link_type != ACL_LINK && link_type != LE_LINK) 9708 + if (link_type != ACL_LINK && 9709 + link_type != LE_LINK && 9710 + link_type != BIS_LINK) 9712 9711 return; 9713 9712 9714 9713 bacpy(&ev.addr.bdaddr, bdaddr);
+14
net/core/datagram.c
··· 618 618 } 619 619 EXPORT_SYMBOL(skb_copy_datagram_from_iter); 620 620 621 + int skb_copy_datagram_from_iter_full(struct sk_buff *skb, int offset, 622 + struct iov_iter *from, int len) 623 + { 624 + struct iov_iter_state state; 625 + int ret; 626 + 627 + iov_iter_save_state(from, &state); 628 + ret = skb_copy_datagram_from_iter(skb, offset, from, len); 629 + if (ret) 630 + iov_iter_restore(from, &state); 631 + return ret; 632 + } 633 + EXPORT_SYMBOL(skb_copy_datagram_from_iter_full); 634 + 621 635 int zerocopy_fill_skb_from_iter(struct sk_buff *skb, 622 636 struct iov_iter *from, size_t length) 623 637 {
+4 -2
net/core/page_pool.c
··· 287 287 } 288 288 289 289 if (pool->mp_ops) { 290 - if (!pool->dma_map || !pool->dma_sync) 291 - return -EOPNOTSUPP; 290 + if (!pool->dma_map || !pool->dma_sync) { 291 + err = -EOPNOTSUPP; 292 + goto free_ptr_ring; 293 + } 292 294 293 295 if (WARN_ON(!is_kernel_rodata((unsigned long)pool->mp_ops))) { 294 296 err = -EFAULT;
+7 -3
net/ipv4/route.c
··· 2575 2575 !netif_is_l3_master(dev_out)) 2576 2576 return ERR_PTR(-EINVAL); 2577 2577 2578 - if (ipv4_is_lbcast(fl4->daddr)) 2578 + if (ipv4_is_lbcast(fl4->daddr)) { 2579 2579 type = RTN_BROADCAST; 2580 - else if (ipv4_is_multicast(fl4->daddr)) 2580 + 2581 + /* reset fi to prevent gateway resolution */ 2582 + fi = NULL; 2583 + } else if (ipv4_is_multicast(fl4->daddr)) { 2581 2584 type = RTN_MULTICAST; 2582 - else if (ipv4_is_zeronet(fl4->daddr)) 2585 + } else if (ipv4_is_zeronet(fl4->daddr)) { 2583 2586 return ERR_PTR(-EINVAL); 2587 + } 2584 2588 2585 2589 if (dev_out->flags & IFF_LOOPBACK) 2586 2590 flags |= RTCF_LOCAL;
+8 -17
net/l2tp/l2tp_ppp.c
··· 129 129 130 130 static const struct proto_ops pppol2tp_ops; 131 131 132 - /* Retrieves the pppol2tp socket associated to a session. 133 - * A reference is held on the returned socket, so this function must be paired 134 - * with sock_put(). 135 - */ 132 + /* Retrieves the pppol2tp socket associated to a session. */ 136 133 static struct sock *pppol2tp_session_get_sock(struct l2tp_session *session) 137 134 { 138 135 struct pppol2tp_session *ps = l2tp_session_priv(session); 139 - struct sock *sk; 140 136 141 - rcu_read_lock(); 142 - sk = rcu_dereference(ps->sk); 143 - if (sk) 144 - sock_hold(sk); 145 - rcu_read_unlock(); 146 - 147 - return sk; 137 + return rcu_dereference(ps->sk); 148 138 } 149 139 150 140 /* Helpers to obtain tunnel/session contexts from sockets. ··· 196 206 197 207 static void pppol2tp_recv(struct l2tp_session *session, struct sk_buff *skb, int data_len) 198 208 { 199 - struct pppol2tp_session *ps = l2tp_session_priv(session); 200 - struct sock *sk = NULL; 209 + struct sock *sk; 201 210 202 211 /* If the socket is bound, send it in to PPP's input queue. Otherwise 203 212 * queue it on the session socket. 204 213 */ 205 214 rcu_read_lock(); 206 - sk = rcu_dereference(ps->sk); 215 + sk = pppol2tp_session_get_sock(session); 207 216 if (!sk) 208 217 goto no_sock; 209 218 ··· 499 510 struct l2tp_session *session = arg; 500 511 struct sock *sk; 501 512 513 + rcu_read_lock(); 502 514 sk = pppol2tp_session_get_sock(session); 503 515 if (sk) { 504 516 struct pppox_sock *po = pppox_sk(sk); 505 517 506 518 seq_printf(m, " interface %s\n", ppp_dev_name(&po->chan)); 507 - sock_put(sk); 508 519 } 520 + rcu_read_unlock(); 509 521 } 510 522 511 523 static void pppol2tp_session_init(struct l2tp_session *session) ··· 1520 1530 port = ntohs(inet->inet_sport); 1521 1531 } 1522 1532 1533 + rcu_read_lock(); 1523 1534 sk = pppol2tp_session_get_sock(session); 1524 1535 if (sk) { 1525 1536 state = sk->sk_state; ··· 1556 1565 struct pppox_sock *po = pppox_sk(sk); 1557 1566 1558 1567 seq_printf(m, " interface %s\n", ppp_dev_name(&po->chan)); 1559 - sock_put(sk); 1560 1568 } 1569 + rcu_read_unlock(); 1561 1570 } 1562 1571 1563 1572 static int pppol2tp_seq_show(struct seq_file *m, void *v)
+7 -6
net/rose/af_rose.c
··· 170 170 171 171 if (rose->neighbour == neigh) { 172 172 rose_disconnect(s, ENETUNREACH, ROSE_OUT_OF_ORDER, 0); 173 - rose->neighbour->use--; 173 + rose_neigh_put(rose->neighbour); 174 174 rose->neighbour = NULL; 175 175 } 176 176 } ··· 212 212 if (rose->device == dev) { 213 213 rose_disconnect(sk, ENETUNREACH, ROSE_OUT_OF_ORDER, 0); 214 214 if (rose->neighbour) 215 - rose->neighbour->use--; 215 + rose_neigh_put(rose->neighbour); 216 216 netdev_put(rose->device, &rose->dev_tracker); 217 217 rose->device = NULL; 218 218 } ··· 655 655 break; 656 656 657 657 case ROSE_STATE_2: 658 - rose->neighbour->use--; 658 + rose_neigh_put(rose->neighbour); 659 659 release_sock(sk); 660 660 rose_disconnect(sk, 0, -1, -1); 661 661 lock_sock(sk); ··· 823 823 rose->lci = rose_new_lci(rose->neighbour); 824 824 if (!rose->lci) { 825 825 err = -ENETUNREACH; 826 + rose_neigh_put(rose->neighbour); 826 827 goto out_release; 827 828 } 828 829 ··· 835 834 dev = rose_dev_first(); 836 835 if (!dev) { 837 836 err = -ENETUNREACH; 837 + rose_neigh_put(rose->neighbour); 838 838 goto out_release; 839 839 } 840 840 841 841 user = ax25_findbyuid(current_euid()); 842 842 if (!user) { 843 843 err = -EINVAL; 844 + rose_neigh_put(rose->neighbour); 844 845 dev_put(dev); 845 846 goto out_release; 846 847 } ··· 876 873 sk->sk_state = TCP_SYN_SENT; 877 874 878 875 rose->state = ROSE_STATE_1; 879 - 880 - rose->neighbour->use++; 881 876 882 877 rose_write_internal(sk, ROSE_CALL_REQUEST); 883 878 rose_start_heartbeat(sk); ··· 1078 1077 GFP_ATOMIC); 1079 1078 make_rose->facilities = facilities; 1080 1079 1081 - make_rose->neighbour->use++; 1080 + rose_neigh_hold(make_rose->neighbour); 1082 1081 1083 1082 if (rose_sk(sk)->defer) { 1084 1083 make_rose->state = ROSE_STATE_5;
+6 -6
net/rose/rose_in.c
··· 56 56 case ROSE_CLEAR_REQUEST: 57 57 rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION); 58 58 rose_disconnect(sk, ECONNREFUSED, skb->data[3], skb->data[4]); 59 - rose->neighbour->use--; 59 + rose_neigh_put(rose->neighbour); 60 60 break; 61 61 62 62 default: ··· 79 79 case ROSE_CLEAR_REQUEST: 80 80 rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION); 81 81 rose_disconnect(sk, 0, skb->data[3], skb->data[4]); 82 - rose->neighbour->use--; 82 + rose_neigh_put(rose->neighbour); 83 83 break; 84 84 85 85 case ROSE_CLEAR_CONFIRMATION: 86 86 rose_disconnect(sk, 0, -1, -1); 87 - rose->neighbour->use--; 87 + rose_neigh_put(rose->neighbour); 88 88 break; 89 89 90 90 default: ··· 121 121 case ROSE_CLEAR_REQUEST: 122 122 rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION); 123 123 rose_disconnect(sk, 0, skb->data[3], skb->data[4]); 124 - rose->neighbour->use--; 124 + rose_neigh_put(rose->neighbour); 125 125 break; 126 126 127 127 case ROSE_RR: ··· 234 234 case ROSE_CLEAR_REQUEST: 235 235 rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION); 236 236 rose_disconnect(sk, 0, skb->data[3], skb->data[4]); 237 - rose->neighbour->use--; 237 + rose_neigh_put(rose->neighbour); 238 238 break; 239 239 240 240 default: ··· 254 254 if (frametype == ROSE_CLEAR_REQUEST) { 255 255 rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION); 256 256 rose_disconnect(sk, 0, skb->data[3], skb->data[4]); 257 - rose_sk(sk)->neighbour->use--; 257 + rose_neigh_put(rose_sk(sk)->neighbour); 258 258 } 259 259 260 260 return 0;
+38 -24
net/rose/rose_route.c
··· 93 93 rose_neigh->ax25 = NULL; 94 94 rose_neigh->dev = dev; 95 95 rose_neigh->count = 0; 96 - rose_neigh->use = 0; 97 96 rose_neigh->dce_mode = 0; 98 97 rose_neigh->loopback = 0; 99 98 rose_neigh->number = rose_neigh_no++; 100 99 rose_neigh->restarted = 0; 100 + refcount_set(&rose_neigh->use, 1); 101 101 102 102 skb_queue_head_init(&rose_neigh->queue); 103 103 ··· 178 178 } 179 179 } 180 180 rose_neigh->count++; 181 + rose_neigh_hold(rose_neigh); 181 182 182 183 goto out; 183 184 } ··· 188 187 rose_node->neighbour[rose_node->count] = rose_neigh; 189 188 rose_node->count++; 190 189 rose_neigh->count++; 190 + rose_neigh_hold(rose_neigh); 191 191 } 192 192 193 193 out: ··· 236 234 237 235 if ((s = rose_neigh_list) == rose_neigh) { 238 236 rose_neigh_list = rose_neigh->next; 239 - if (rose_neigh->ax25) 240 - ax25_cb_put(rose_neigh->ax25); 241 - kfree(rose_neigh->digipeat); 242 - kfree(rose_neigh); 243 237 return; 244 238 } 245 239 246 240 while (s != NULL && s->next != NULL) { 247 241 if (s->next == rose_neigh) { 248 242 s->next = rose_neigh->next; 249 - if (rose_neigh->ax25) 250 - ax25_cb_put(rose_neigh->ax25); 251 - kfree(rose_neigh->digipeat); 252 - kfree(rose_neigh); 253 243 return; 254 244 } 255 245 ··· 257 263 struct rose_route *s; 258 264 259 265 if (rose_route->neigh1 != NULL) 260 - rose_route->neigh1->use--; 266 + rose_neigh_put(rose_route->neigh1); 261 267 262 268 if (rose_route->neigh2 != NULL) 263 - rose_route->neigh2->use--; 269 + rose_neigh_put(rose_route->neigh2); 264 270 265 271 if ((s = rose_route_list) == rose_route) { 266 272 rose_route_list = rose_route->next; ··· 324 330 for (i = 0; i < rose_node->count; i++) { 325 331 if (rose_node->neighbour[i] == rose_neigh) { 326 332 rose_neigh->count--; 333 + rose_neigh_put(rose_neigh); 327 334 328 335 - if (rose_neigh->count == 0 && rose_neigh->use == 0) 335 336 + if (rose_neigh->count == 0) { 329 336 rose_remove_neigh(rose_neigh); 337 + rose_neigh_put(rose_neigh); 338 + } 330 339 331 340 rose_node->count--; 332 341 ··· 378 381 sn->ax25 = NULL; 379 382 sn->dev = NULL; 380 383 sn->count = 0; 381 - sn->use = 0; 382 384 sn->dce_mode = 1; 383 385 sn->loopback = 1; 384 386 sn->number = rose_neigh_no++; 385 387 sn->restarted = 1; 388 + refcount_set(&sn->use, 1); 386 389 387 390 skb_queue_head_init(&sn->queue); 388 391 ··· 433 436 rose_node_list = rose_node; 434 437 435 438 rose_loopback_neigh->count++; 439 + rose_neigh_hold(rose_loopback_neigh); 436 440 437 441 out: 438 442 spin_unlock_bh(&rose_node_list_lock); ··· 465 467 rose_remove_node(rose_node); 466 468 467 469 rose_loopback_neigh->count--; 470 + rose_neigh_put(rose_loopback_neigh); 468 471 469 472 out: 470 473 spin_unlock_bh(&rose_node_list_lock); ··· 505 506 memmove(&t->neighbour[i], &t->neighbour[i + 1], 506 507 sizeof(t->neighbour[0]) * 507 508 (t->count - i)); 509 + rose_neigh_put(s); 508 510 } 509 511 510 512 if (t->count <= 0) ··· 513 513 } 514 514 515 515 rose_remove_neigh(s); 516 + rose_neigh_put(s); 516 517 } 517 518 spin_unlock_bh(&rose_neigh_list_lock); 518 519 spin_unlock_bh(&rose_node_list_lock); ··· 549 548 { 550 549 struct rose_neigh *s, *rose_neigh; 551 550 struct rose_node *t, *rose_node; 552 + int i; 552 553 553 554 spin_lock_bh(&rose_node_list_lock); 554 555 spin_lock_bh(&rose_neigh_list_lock); ··· 560 558 while (rose_node != NULL) { 561 559 t = rose_node; 562 560 rose_node = rose_node->next; 563 - if (!t->loopback) 561 562 + if (!t->loopback) { 562 563 + for (i = 0; i < t->count; i++) 563 564 + rose_neigh_put(t->neighbour[i]); 564 565 rose_remove_node(t); 566 + } 566 568 567 569 while (rose_neigh != NULL) { 568 570 s = rose_neigh; 569 571 rose_neigh = rose_neigh->next; 570 572 571 573 - if (s->use == 0 && !s->loopback) { 572 574 - s->count = 0; 573 + if (!s->loopback) { 574 + rose_remove_neigh(s); 575 + rose_neigh_put(s); 574 576 } 575 577 ··· 690 684 for (i = 0; i < node->count; i++) { 691 685 if (node->neighbour[i]->restarted) { 692 686 res = node->neighbour[i]; 687 + rose_neigh_hold(node->neighbour[i]); 693 688 goto out; 694 689 } 695 690 } ··· 702 695 for (i = 0; i < node->count; i++) { 703 696 if (!rose_ftimer_running(node->neighbour[i])) { 704 697 res = node->neighbour[i]; 698 + rose_neigh_hold(node->neighbour[i]); 705 699 goto out; 706 700 } 707 701 failed = 1; ··· 792 784 } 793 785 794 786 if (rose_route->neigh1 == rose_neigh) { 795 - rose_route->neigh1->use--; 787 + rose_neigh_put(rose_route->neigh1); 796 788 rose_route->neigh1 = NULL; 797 789 rose_transmit_clear_request(rose_route->neigh2, rose_route->lci2, ROSE_OUT_OF_ORDER, 0); 798 790 } 799 791 800 792 if (rose_route->neigh2 == rose_neigh) { 801 - rose_route->neigh2->use--; 793 + rose_neigh_put(rose_route->neigh2); 802 794 rose_route->neigh2 = NULL; 803 795 rose_transmit_clear_request(rose_route->neigh1, rose_route->lci1, ROSE_OUT_OF_ORDER, 0); 804 796 } ··· 927 919 rose_clear_queues(sk); 928 920 rose->cause = ROSE_NETWORK_CONGESTION; 929 921 rose->diagnostic = 0; 930 - rose->neighbour->use--; 922 + rose_neigh_put(rose->neighbour); 931 923 rose->neighbour = NULL; 932 924 rose->lci = 0; 933 925 rose->state = ROSE_STATE_0; ··· 1052 1044 1053 1045 if ((new_lci = rose_new_lci(new_neigh)) == 0) { 1054 1046 rose_transmit_clear_request(rose_neigh, lci, ROSE_NETWORK_CONGESTION, 71); 1055 - goto out; 1047 + goto put_neigh; 1056 1048 } 1057 1049 1058 1050 if ((rose_route = kmalloc(sizeof(*rose_route), GFP_ATOMIC)) == NULL) { 1059 1051 rose_transmit_clear_request(rose_neigh, lci, ROSE_NETWORK_CONGESTION, 120); 1060 - goto out; 1052 + goto put_neigh; 1061 1053 } 1062 1054 1063 1055 rose_route->lci1 = lci; ··· 1070 1062 rose_route->lci2 = new_lci; 1071 1063 rose_route->neigh2 = new_neigh; 1072 1064 1073 - rose_route->neigh1->use++; 1074 - rose_route->neigh2->use++; 1065 + rose_neigh_hold(rose_route->neigh1); 1066 + rose_neigh_hold(rose_route->neigh2); 1075 1067 1076 1068 rose_route->next = rose_route_list; 1077 1069 rose_route_list = rose_route; ··· 1083 1075 rose_transmit_link(skb, rose_route->neigh2); 1084 1076 res = 1; 1085 1077 1078 + put_neigh: 1079 + rose_neigh_put(new_neigh); 1086 1080 out: 1087 1081 spin_unlock_bh(&rose_route_list_lock); 1088 1082 spin_unlock_bh(&rose_neigh_list_lock); ··· 1200 1190 (rose_neigh->loopback) ? "RSLOOP-0" : ax2asc(buf, &rose_neigh->callsign), 1201 1191 rose_neigh->dev ? rose_neigh->dev->name : "???", 1202 1192 rose_neigh->count, 1203 - rose_neigh->use, 1193 + refcount_read(&rose_neigh->use) - rose_neigh->count - 1, 1204 1194 (rose_neigh->dce_mode) ? "DCE" : "DTE", 1205 1195 (rose_neigh->restarted) ? "yes" : "no", 1206 1196 ax25_display_timer(&rose_neigh->t0timer) / HZ, ··· 1305 1295 struct rose_neigh *s, *rose_neigh = rose_neigh_list; 1306 1296 struct rose_node *t, *rose_node = rose_node_list; 1307 1297 struct rose_route *u, *rose_route = rose_route_list; 1298 + int i; 1308 1299 1309 1300 while (rose_neigh != NULL) { 1310 1301 s = rose_neigh; 1311 1302 rose_neigh = rose_neigh->next; 1312 1303 1313 1304 rose_remove_neigh(s); 1305 + rose_neigh_put(s); 1314 1306 } 1315 1307 1316 1308 while (rose_node != NULL) { 1317 1309 t = rose_node; 1318 1310 rose_node = rose_node->next; 1319 1311 1312 + for (i = 0; i < t->count; i++) 1313 + rose_neigh_put(t->neighbour[i]); 1320 1314 rose_remove_node(t); 1321 1315 } 1322 1316
+1 -1
net/rose/rose_timer.c
··· 180 180 break; 181 181 182 182 case ROSE_STATE_2: /* T3 */ 183 - rose->neighbour->use--; 183 + rose_neigh_put(rose->neighbour); 184 184 rose_disconnect(sk, ETIMEDOUT, -1, -1); 185 185 break; 186 186
+2
net/sctp/ipv6.c
··· 547 547 { 548 548 addr->v6.sin6_family = AF_INET6; 549 549 addr->v6.sin6_port = 0; 550 + addr->v6.sin6_flowinfo = 0; 550 551 addr->v6.sin6_addr = sk->sk_v6_rcv_saddr; 552 + addr->v6.sin6_scope_id = 0; 551 553 } 552 554 553 555 /* Initialize sk->sk_rcv_saddr from sctp_addr. */
+5 -3
net/vmw_vsock/virtio_transport_common.c
··· 105 105 size_t len, 106 106 bool zcopy) 107 107 { 108 + struct msghdr *msg = info->msg; 109 + 108 110 if (zcopy) 109 - return __zerocopy_sg_from_iter(info->msg, NULL, skb, 110 - &info->msg->msg_iter, len, NULL); 111 + return __zerocopy_sg_from_iter(msg, NULL, skb, 112 + &msg->msg_iter, len, NULL); 111 113 112 114 virtio_vsock_skb_put(skb, len); 113 - return skb_copy_datagram_from_iter(skb, 0, &info->msg->msg_iter, len); 115 + return skb_copy_datagram_from_iter_full(skb, 0, &msg->msg_iter, len); 114 116 } 115 117 116 118 static void virtio_transport_init_hdr(struct sk_buff *skb,
+28
tools/arch/arm64/include/asm/cputype.h
··· 75 75 #define ARM_CPU_PART_CORTEX_A76 0xD0B 76 76 #define ARM_CPU_PART_NEOVERSE_N1 0xD0C 77 77 #define ARM_CPU_PART_CORTEX_A77 0xD0D 78 + #define ARM_CPU_PART_CORTEX_A76AE 0xD0E 78 79 #define ARM_CPU_PART_NEOVERSE_V1 0xD40 79 80 #define ARM_CPU_PART_CORTEX_A78 0xD41 80 81 #define ARM_CPU_PART_CORTEX_A78AE 0xD42 81 82 #define ARM_CPU_PART_CORTEX_X1 0xD44 82 83 #define ARM_CPU_PART_CORTEX_A510 0xD46 84 + #define ARM_CPU_PART_CORTEX_X1C 0xD4C 83 85 #define ARM_CPU_PART_CORTEX_A520 0xD80 84 86 #define ARM_CPU_PART_CORTEX_A710 0xD47 85 87 #define ARM_CPU_PART_CORTEX_A715 0xD4D ··· 121 119 #define QCOM_CPU_PART_KRYO 0x200 122 120 #define QCOM_CPU_PART_KRYO_2XX_GOLD 0x800 123 121 #define QCOM_CPU_PART_KRYO_2XX_SILVER 0x801 122 + #define QCOM_CPU_PART_KRYO_3XX_GOLD 0x802 124 123 #define QCOM_CPU_PART_KRYO_3XX_SILVER 0x803 125 124 #define QCOM_CPU_PART_KRYO_4XX_GOLD 0x804 126 125 #define QCOM_CPU_PART_KRYO_4XX_SILVER 0x805 126 + #define QCOM_CPU_PART_ORYON_X1 0x001 127 127 128 128 #define NVIDIA_CPU_PART_DENVER 0x003 129 129 #define NVIDIA_CPU_PART_CARMEL 0x004 ··· 133 129 #define FUJITSU_CPU_PART_A64FX 0x001 134 130 135 131 #define HISI_CPU_PART_TSV110 0xD01 132 + #define HISI_CPU_PART_HIP09 0xD02 136 133 #define HISI_CPU_PART_HIP12 0xD06 137 134 138 135 #define APPLE_CPU_PART_M1_ICESTORM 0x022 ··· 164 159 #define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76) 165 160 #define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1) 166 161 #define MIDR_CORTEX_A77 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77) 162 + #define MIDR_CORTEX_A76AE MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76AE) 167 163 #define MIDR_NEOVERSE_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V1) 168 164 #define MIDR_CORTEX_A78 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78) 169 165 #define MIDR_CORTEX_A78AE MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78AE) 170 166 #define MIDR_CORTEX_X1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1) 171 167 #define MIDR_CORTEX_A510 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A510) 168 + #define MIDR_CORTEX_X1C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1C) 172 169 #define MIDR_CORTEX_A520 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A520) 173 170 #define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710) 174 171 #define MIDR_CORTEX_A715 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A715) ··· 203 196 #define MIDR_QCOM_KRYO MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO) 204 197 #define MIDR_QCOM_KRYO_2XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_2XX_GOLD) 205 198 #define MIDR_QCOM_KRYO_2XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_2XX_SILVER) 199 + #define MIDR_QCOM_KRYO_3XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_3XX_GOLD) 206 200 #define MIDR_QCOM_KRYO_3XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_3XX_SILVER) 207 201 #define MIDR_QCOM_KRYO_4XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_4XX_GOLD) 208 202 #define MIDR_QCOM_KRYO_4XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_4XX_SILVER) 203 + #define MIDR_QCOM_ORYON_X1 MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_ORYON_X1) 204 + 205 + /* 206 + * NOTES: 207 + * - Qualcomm Kryo 5XX Prime / Gold ID themselves as MIDR_CORTEX_A77 208 + * - Qualcomm Kryo 5XX Silver IDs itself as MIDR_QCOM_KRYO_4XX_SILVER 209 + * - Qualcomm Kryo 6XX Prime IDs itself as MIDR_CORTEX_X1 210 + * - Qualcomm Kryo 
6XX Gold IDs itself as ARM_CPU_PART_CORTEX_A78 211 + * - Qualcomm Kryo 6XX Silver IDs itself as MIDR_CORTEX_A55 212 + */ 213 + 209 214 #define MIDR_NVIDIA_DENVER MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_DENVER) 210 215 #define MIDR_NVIDIA_CARMEL MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_CARMEL) 211 216 #define MIDR_FUJITSU_A64FX MIDR_CPU_MODEL(ARM_CPU_IMP_FUJITSU, FUJITSU_CPU_PART_A64FX) 212 217 #define MIDR_HISI_TSV110 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_TSV110) 218 + #define MIDR_HISI_HIP09 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_HIP09) 213 219 #define MIDR_HISI_HIP12 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_HIP12) 214 220 #define MIDR_APPLE_M1_ICESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM) 215 221 #define MIDR_APPLE_M1_FIRESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM) ··· 310 290 { 311 291 return read_cpuid(MIDR_EL1); 312 292 } 293 + 294 + struct target_impl_cpu { 295 + u64 midr; 296 + u64 revidr; 297 + u64 aidr; 298 + }; 299 + 300 + bool cpu_errata_set_target_impl(u64 num, void *impl_cpus); 313 301 314 302 static inline u64 __attribute_const__ read_cpuid_mpidr(void) 315 303 {
-3
tools/arch/arm64/include/asm/sysreg.h
··· 1080 1080 1081 1081 #define ARM64_FEATURE_FIELD_BITS 4 1082 1082 1083 - /* Defined for compatibility only, do not add new users. */ 1084 - #define ARM64_FEATURE_MASK(x) (x##_MASK) 1085 - 1086 1083 #ifdef __ASSEMBLY__ 1087 1084 1088 1085 .macro mrs_s, rt, sreg
-13
tools/arch/powerpc/include/uapi/asm/kvm.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 2 /* 3 - * This program is free software; you can redistribute it and/or modify 4 - * it under the terms of the GNU General Public License, version 2, as 5 - * published by the Free Software Foundation. 6 - * 7 - * This program is distributed in the hope that it will be useful, 8 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 - * GNU General Public License for more details. 11 - * 12 - * You should have received a copy of the GNU General Public License 13 - * along with this program; if not, write to the Free Software 14 - * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 15 - * 16 3 * Copyright IBM Corp. 2007 17 4 * 18 5 * Authors: Hollis Blanchard <hollisb@us.ibm.com>
+9 -1
tools/arch/x86/include/asm/cpufeatures.h
··· 218 218 #define X86_FEATURE_FLEXPRIORITY ( 8*32+ 1) /* "flexpriority" Intel FlexPriority */ 219 219 #define X86_FEATURE_EPT ( 8*32+ 2) /* "ept" Intel Extended Page Table */ 220 220 #define X86_FEATURE_VPID ( 8*32+ 3) /* "vpid" Intel Virtual Processor ID */ 221 + #define X86_FEATURE_COHERENCY_SFW_NO ( 8*32+ 4) /* SNP cache coherency software work around not needed */ 221 222 222 223 #define X86_FEATURE_VMMCALL ( 8*32+15) /* "vmmcall" Prefer VMMCALL to VMCALL */ 223 224 #define X86_FEATURE_XENPV ( 8*32+16) /* Xen paravirtual guest */ ··· 457 456 #define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* No Nested Data Breakpoints */ 458 457 #define X86_FEATURE_WRMSR_XX_BASE_NS (20*32+ 1) /* WRMSR to {FS,GS,KERNEL_GS}_BASE is non-serializing */ 459 458 #define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* LFENCE always serializing / synchronizes RDTSC */ 459 + #define X86_FEATURE_VERW_CLEAR (20*32+ 5) /* The memory form of VERW mitigates TSA */ 460 460 #define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* Null Selector Clears Base */ 461 + 461 462 #define X86_FEATURE_AUTOIBRS (20*32+ 8) /* Automatic IBRS */ 462 463 #define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* SMM_CTL MSR is not present */ 464 + 465 + #define X86_FEATURE_GP_ON_USER_CPUID (20*32+17) /* User CPUID faulting */ 463 466 464 467 #define X86_FEATURE_PREFETCHI (20*32+20) /* Prefetch Data/Instruction to Cache Level */ 465 468 #define X86_FEATURE_SBPB (20*32+27) /* Selective Branch Prediction Barrier */ ··· 492 487 #define X86_FEATURE_PREFER_YMM (21*32+ 8) /* Avoid ZMM registers due to downclocking */ 493 488 #define X86_FEATURE_APX (21*32+ 9) /* Advanced Performance Extensions */ 494 489 #define X86_FEATURE_INDIRECT_THUNK_ITS (21*32+10) /* Use thunk for indirect branches in lower half of cacheline */ 490 + #define X86_FEATURE_TSA_SQ_NO (21*32+11) /* AMD CPU not vulnerable to TSA-SQ */ 491 + #define X86_FEATURE_TSA_L1_NO (21*32+12) /* AMD CPU not vulnerable to TSA-L1 */ 492 + #define X86_FEATURE_CLEAR_CPU_BUF_VM (21*32+13) /* Clear CPU buffers using VERW before VMRUN */ 495 493 496 494 /* 497 495 * BUG word(s) ··· 550 542 #define X86_BUG_OLD_MICROCODE X86_BUG( 1*32+ 6) /* "old_microcode" CPU has old microcode, it is surely vulnerable to something */ 551 543 #define X86_BUG_ITS X86_BUG( 1*32+ 7) /* "its" CPU is affected by Indirect Target Selection */ 552 544 #define X86_BUG_ITS_NATIVE_ONLY X86_BUG( 1*32+ 8) /* "its_native_only" CPU is affected by ITS, VMX is not affected */ 553 - 545 + #define X86_BUG_TSA X86_BUG( 1*32+ 9) /* "tsa" CPU is affected by Transient Scheduler Attacks */ 554 546 #endif /* _ASM_X86_CPUFEATURES_H */
+7
tools/arch/x86/include/asm/msr-index.h
··· 419 419 #define DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI (1UL << 12) 420 420 #define DEBUGCTLMSR_FREEZE_IN_SMM_BIT 14 421 421 #define DEBUGCTLMSR_FREEZE_IN_SMM (1UL << DEBUGCTLMSR_FREEZE_IN_SMM_BIT) 422 + #define DEBUGCTLMSR_RTM_DEBUG BIT(15) 422 423 423 424 #define MSR_PEBS_FRONTEND 0x000003f7 424 425 ··· 734 733 #define MSR_AMD64_PERF_CNTR_GLOBAL_CTL 0xc0000301 735 734 #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR 0xc0000302 736 735 736 + /* AMD Hardware Feedback Support MSRs */ 737 + #define MSR_AMD_WORKLOAD_CLASS_CONFIG 0xc0000500 738 + #define MSR_AMD_WORKLOAD_CLASS_ID 0xc0000501 739 + #define MSR_AMD_WORKLOAD_HRST 0xc0000502 740 + 737 741 /* AMD Last Branch Record MSRs */ 738 742 #define MSR_AMD64_LBR_SELECT 0xc000010e 739 743 ··· 837 831 #define MSR_K7_HWCR_SMMLOCK BIT_ULL(MSR_K7_HWCR_SMMLOCK_BIT) 838 832 #define MSR_K7_HWCR_IRPERF_EN_BIT 30 839 833 #define MSR_K7_HWCR_IRPERF_EN BIT_ULL(MSR_K7_HWCR_IRPERF_EN_BIT) 834 + #define MSR_K7_HWCR_CPUID_USER_DIS_BIT 35 840 835 #define MSR_K7_FID_VID_CTL 0xc0010041 841 836 #define MSR_K7_FID_VID_STATUS 0xc0010042 842 837 #define MSR_K7_HWCR_CPB_DIS_BIT 25
+7 -1
tools/arch/x86/include/uapi/asm/kvm.h
··· 965 965 struct kvm_tdx_capabilities { 966 966 __u64 supported_attrs; 967 967 __u64 supported_xfam; 968 - __u64 reserved[254]; 968 + 969 + __u64 kernel_tdvmcallinfo_1_r11; 970 + __u64 user_tdvmcallinfo_1_r11; 971 + __u64 kernel_tdvmcallinfo_1_r12; 972 + __u64 user_tdvmcallinfo_1_r12; 973 + 974 + __u64 reserved[250]; 969 975 970 976 /* Configurable CPUID bits for userspace */ 971 977 struct kvm_cpuid2 cpuid;
+6 -23
tools/include/linux/bits.h
··· 2 2 #ifndef __LINUX_BITS_H 3 3 #define __LINUX_BITS_H 4 4 5 - #include <linux/const.h> 6 5 #include <vdso/bits.h> 7 6 #include <uapi/linux/bits.h> 8 - #include <asm/bitsperlong.h> 9 7 10 8 #define BIT_MASK(nr) (UL(1) << ((nr) % BITS_PER_LONG)) 11 9 #define BIT_WORD(nr) ((nr) / BITS_PER_LONG) ··· 48 50 (type_max(t) << (l) & \ 49 51 type_max(t) >> (BITS_PER_TYPE(t) - 1 - (h))))) 50 52 53 + #define GENMASK(h, l) GENMASK_TYPE(unsigned long, h, l) 54 + #define GENMASK_ULL(h, l) GENMASK_TYPE(unsigned long long, h, l) 55 + 51 56 #define GENMASK_U8(h, l) GENMASK_TYPE(u8, h, l) 52 57 #define GENMASK_U16(h, l) GENMASK_TYPE(u16, h, l) 53 58 #define GENMASK_U32(h, l) GENMASK_TYPE(u32, h, l) 54 59 #define GENMASK_U64(h, l) GENMASK_TYPE(u64, h, l) 60 + #define GENMASK_U128(h, l) GENMASK_TYPE(u128, h, l) 55 61 56 62 /* 57 63 * Fixed-type variants of BIT(), with additional checks like GENMASK_TYPE(). The ··· 81 79 * BUILD_BUG_ON_ZERO is not available in h files included from asm files, 82 80 * disable the input check if that is the case. 83 81 */ 84 - #define GENMASK_INPUT_CHECK(h, l) 0 82 + #define GENMASK(h, l) __GENMASK(h, l) 83 + #define GENMASK_ULL(h, l) __GENMASK_ULL(h, l) 85 84 86 85 #endif /* !defined(__ASSEMBLY__) */ 87 - 88 - #define GENMASK(h, l) \ 89 - (GENMASK_INPUT_CHECK(h, l) + __GENMASK(h, l)) 90 - #define GENMASK_ULL(h, l) \ 91 - (GENMASK_INPUT_CHECK(h, l) + __GENMASK_ULL(h, l)) 92 - 93 - #if !defined(__ASSEMBLY__) 94 - /* 95 - * Missing asm support 96 - * 97 - * __GENMASK_U128() depends on _BIT128() which would not work 98 - * in the asm code, as it shifts an 'unsigned __int128' data 99 - * type instead of direct representation of 128 bit constants 100 - * such as long and unsigned long. The fundamental problem is 101 - * that a 128 bit constant will get silently truncated by the 102 - * gcc compiler. 103 - */ 104 - #define GENMASK_U128(h, l) \ 105 - (GENMASK_INPUT_CHECK(h, l) + __GENMASK_U128(h, l)) 106 - #endif 107 86 108 87 #endif /* __LINUX_BITS_H */
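With the tools/ copy of bits.h now routed through GENMASK_TYPE(), the classic macros keep their meaning: bits l through h inclusive are set. A couple of literal values for orientation (host-side illustration, assuming the tools include path provides this header):

#include <linux/bits.h>

static const unsigned long      lo_byte  = GENMASK(7, 0);        /* 0xff */
static const unsigned long      mid_byte = GENMASK(15, 8);       /* 0xff00 */
static const unsigned long long hi_word  = GENMASK_ULL(63, 32);  /* 0xffffffff00000000 */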
+23
tools/include/linux/cfi_types.h
··· 41 41 SYM_TYPED_START(name, SYM_L_GLOBAL, SYM_A_ALIGN) 42 42 #endif 43 43 44 + #else /* __ASSEMBLY__ */ 45 + 46 + #ifdef CONFIG_CFI_CLANG 47 + #define DEFINE_CFI_TYPE(name, func) \ 48 + /* \ 49 + * Force a reference to the function so the compiler generates \ 50 + * __kcfi_typeid_<func>. \ 51 + */ \ 52 + __ADDRESSABLE(func); \ 53 + /* u32 name __ro_after_init = __kcfi_typeid_<func> */ \ 54 + extern u32 name; \ 55 + asm ( \ 56 + " .pushsection .data..ro_after_init,\"aw\",\%progbits \n" \ 57 + " .type " #name ",\%object \n" \ 58 + " .globl " #name " \n" \ 59 + " .p2align 2, 0x0 \n" \ 60 + #name ": \n" \ 61 + " .4byte __kcfi_typeid_" #func " \n" \ 62 + " .size " #name ", 4 \n" \ 63 + " .popsection \n" \ 64 + ); 65 + #endif 66 + 44 67 #endif /* __ASSEMBLY__ */ 45 68 #endif /* _LINUX_CFI_TYPES_H */
+7 -1
tools/include/uapi/asm-generic/unistd.h
··· 852 852 #define __NR_open_tree_attr 467 853 853 __SYSCALL(__NR_open_tree_attr, sys_open_tree_attr) 854 854 855 + /* fs/inode.c */ 856 + #define __NR_file_getattr 468 857 + __SYSCALL(__NR_file_getattr, sys_file_getattr) 858 + #define __NR_file_setattr 469 859 + __SYSCALL(__NR_file_setattr, sys_file_setattr) 860 + 855 861 #undef __NR_syscalls 856 - #define __NR_syscalls 468 862 + #define __NR_syscalls 470 857 863 858 864 /* 859 865 * 32 bit systems traditionally used different
+27
tools/include/uapi/linux/kvm.h
··· 178 178 #define KVM_EXIT_NOTIFY 37 179 179 #define KVM_EXIT_LOONGARCH_IOCSR 38 180 180 #define KVM_EXIT_MEMORY_FAULT 39 181 + #define KVM_EXIT_TDX 40 181 182 182 183 /* For KVM_EXIT_INTERNAL_ERROR */ 183 184 /* Emulate instruction failed. */ ··· 448 447 __u64 gpa; 449 448 __u64 size; 450 449 } memory_fault; 450 + /* KVM_EXIT_TDX */ 451 + struct { 452 + __u64 flags; 453 + __u64 nr; 454 + union { 455 + struct { 456 + __u64 ret; 457 + __u64 data[5]; 458 + } unknown; 459 + struct { 460 + __u64 ret; 461 + __u64 gpa; 462 + __u64 size; 463 + } get_quote; 464 + struct { 465 + __u64 ret; 466 + __u64 leaf; 467 + __u64 r11, r12, r13, r14; 468 + } get_tdvmcall_info; 469 + struct { 470 + __u64 ret; 471 + __u64 vector; 472 + } setup_event_notify; 473 + }; 474 + } tdx; 451 475 /* Fix the size of the union. */ 452 476 char padding[256]; 453 477 }; ··· 961 935 #define KVM_CAP_ARM_EL2 240 962 936 #define KVM_CAP_ARM_EL2_E2H0 241 963 937 #define KVM_CAP_RISCV_MP_STATE_RESET 242 938 + #define KVM_CAP_ARM_CACHEABLE_PFNMAP_SUPPORTED 243 964 939 965 940 struct kvm_irq_routing_irqchip { 966 941 __u32 irqchip;
+2
tools/perf/arch/arm/entry/syscalls/syscall.tbl
··· 482 482 465 common listxattrat sys_listxattrat 483 483 466 common removexattrat sys_removexattrat 484 484 467 common open_tree_attr sys_open_tree_attr 485 + 468 common file_getattr sys_file_getattr 486 + 469 common file_setattr sys_file_setattr
+2
tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl
··· 382 382 465 n64 listxattrat sys_listxattrat 383 383 466 n64 removexattrat sys_removexattrat 384 384 467 n64 open_tree_attr sys_open_tree_attr 385 + 468 n64 file_getattr sys_file_getattr 386 + 469 n64 file_setattr sys_file_setattr
+2
tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
··· 558 558 465 common listxattrat sys_listxattrat 559 559 466 common removexattrat sys_removexattrat 560 560 467 common open_tree_attr sys_open_tree_attr 561 + 468 common file_getattr sys_file_getattr 562 + 469 common file_setattr sys_file_setattr
+2
tools/perf/arch/s390/entry/syscalls/syscall.tbl
··· 470 470 465 common listxattrat sys_listxattrat sys_listxattrat 471 471 466 common removexattrat sys_removexattrat sys_removexattrat 472 472 467 common open_tree_attr sys_open_tree_attr sys_open_tree_attr 473 + 468 common file_getattr sys_file_getattr sys_file_getattr 474 + 469 common file_setattr sys_file_setattr sys_file_setattr
+2
tools/perf/arch/sh/entry/syscalls/syscall.tbl
··· 471 471 465 common listxattrat sys_listxattrat 472 472 466 common removexattrat sys_removexattrat 473 473 467 common open_tree_attr sys_open_tree_attr 474 + 468 common file_getattr sys_file_getattr 475 + 469 common file_setattr sys_file_setattr
+2
tools/perf/arch/sparc/entry/syscalls/syscall.tbl
··· 513 513 465 common listxattrat sys_listxattrat 514 514 466 common removexattrat sys_removexattrat 515 515 467 common open_tree_attr sys_open_tree_attr 516 + 468 common file_getattr sys_file_getattr 517 + 469 common file_setattr sys_file_setattr
+2
tools/perf/arch/x86/entry/syscalls/syscall_32.tbl
··· 473 473 465 i386 listxattrat sys_listxattrat 474 474 466 i386 removexattrat sys_removexattrat 475 475 467 i386 open_tree_attr sys_open_tree_attr 476 + 468 i386 file_getattr sys_file_getattr 477 + 469 i386 file_setattr sys_file_setattr
+2
tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
··· 391 391 465 common listxattrat sys_listxattrat 392 392 466 common removexattrat sys_removexattrat 393 393 467 common open_tree_attr sys_open_tree_attr 394 + 468 common file_getattr sys_file_getattr 395 + 469 common file_setattr sys_file_setattr 394 396 395 397 # 396 398 # Due to a historical design error, certain syscalls are numbered differently
+1
tools/perf/arch/x86/tests/topdown.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #include "arch-tests.h" 3 3 #include "../util/topdown.h" 4 + #include "debug.h" 4 5 #include "evlist.h" 5 6 #include "parse-events.h" 6 7 #include "pmu.h"
+2
tools/perf/arch/xtensa/entry/syscalls/syscall.tbl
··· 438 438 465 common listxattrat sys_listxattrat 439 439 466 common removexattrat sys_removexattrat 440 440 467 common open_tree_attr sys_open_tree_attr 441 + 468 common file_getattr sys_file_getattr 442 + 469 common file_setattr sys_file_setattr
+1 -1
tools/perf/bench/inject-buildid.c
··· 85 85 if (typeflag == FTW_D || typeflag == FTW_SL) 86 86 return 0; 87 87 88 - if (filename__read_build_id(fpath, &bid) < 0) 88 + if (filename__read_build_id(fpath, &bid, /*block=*/true) < 0) 89 89 return 0; 90 90 91 91 dso->name = realpath(fpath, NULL);
+4 -4
tools/perf/builtin-buildid-cache.c
··· 180 180 struct nscookie nsc; 181 181 182 182 nsinfo__mountns_enter(nsi, &nsc); 183 - err = filename__read_build_id(filename, &bid); 183 + err = filename__read_build_id(filename, &bid, /*block=*/true); 184 184 nsinfo__mountns_exit(&nsc); 185 185 if (err < 0) { 186 186 pr_debug("Couldn't read a build-id in %s\n", filename); ··· 204 204 int err; 205 205 206 206 nsinfo__mountns_enter(nsi, &nsc); 207 - err = filename__read_build_id(filename, &bid); 207 + err = filename__read_build_id(filename, &bid, /*block=*/true); 208 208 nsinfo__mountns_exit(&nsc); 209 209 if (err < 0) { 210 210 pr_debug("Couldn't read a build-id in %s\n", filename); ··· 280 280 if (!dso__build_id_filename(dso, filename, sizeof(filename), false)) 281 281 return true; 282 282 283 - if (filename__read_build_id(filename, &bid) == -1) { 283 + if (filename__read_build_id(filename, &bid, /*block=*/true) == -1) { 284 284 if (errno == ENOENT) 285 285 return false; 286 286 ··· 309 309 int err; 310 310 311 311 nsinfo__mountns_enter(nsi, &nsc); 312 - err = filename__read_build_id(filename, &bid); 312 + err = filename__read_build_id(filename, &bid, /*block=*/true); 313 313 nsinfo__mountns_exit(&nsc); 314 314 if (err < 0) { 315 315 pr_debug("Couldn't read a build-id in %s\n", filename);
+2 -2
tools/perf/builtin-inject.c
··· 680 680 681 681 mutex_lock(dso__lock(dso)); 682 682 nsinfo__mountns_enter(dso__nsinfo(dso), &nsc); 683 - if (filename__read_build_id(dso__long_name(dso), &bid) > 0) 683 + if (filename__read_build_id(dso__long_name(dso), &bid, /*block=*/true) > 0) 684 684 dso__set_build_id(dso, &bid); 685 685 else if (dso__nsinfo(dso)) { 686 686 char *new_name = dso__filename_with_chroot(dso, dso__long_name(dso)); 687 687 688 - if (new_name && filename__read_build_id(new_name, &bid) > 0) 688 + if (new_name && filename__read_build_id(new_name, &bid, /*block=*/true) > 0) 689 689 dso__set_build_id(dso, &bid); 690 690 free(new_name); 691 691 }
+1 -1
tools/perf/tests/sdt.c
··· 31 31 struct build_id bid = { .size = 0, }; 32 32 int err; 33 33 34 - err = filename__read_build_id(filename, &bid); 34 + err = filename__read_build_id(filename, &bid, /*block=*/true); 35 35 if (err < 0) { 36 36 pr_debug("Failed to read build id of %s\n", filename); 37 37 return err;
+18
tools/perf/trace/beauty/include/uapi/linux/fcntl.h
··· 90 90 #define DN_ATTRIB 0x00000020 /* File changed attibutes */ 91 91 #define DN_MULTISHOT 0x80000000 /* Don't remove notifier */ 92 92 93 + /* Reserved kernel ranges [-100], [-10000, -40000]. */ 93 94 #define AT_FDCWD -100 /* Special value for dirfd used to 94 95 indicate openat should use the 95 96 current working directory. */ 96 97 98 + /* 99 + * The concept of process and threads in userland and the kernel is a confusing 100 + * one - within the kernel every thread is a 'task' with its own individual PID, 101 + * however from userland's point of view threads are grouped by a single PID, 102 + * which is that of the 'thread group leader', typically the first thread 103 + * spawned. 104 + * 105 + * To cut the Gideon knot, for internal kernel usage, we refer to 106 + * PIDFD_SELF_THREAD to refer to the current thread (or task from a kernel 107 + * perspective), and PIDFD_SELF_THREAD_GROUP to refer to the current thread 108 + * group leader... 109 + */ 110 + #define PIDFD_SELF_THREAD -10000 /* Current thread. */ 111 + #define PIDFD_SELF_THREAD_GROUP -10001 /* Current thread group leader. */ 112 + 113 + #define FD_PIDFS_ROOT -10002 /* Root of the pidfs filesystem */ 114 + #define FD_INVALID -10009 /* Invalid file descriptor: -10000 - EBADF = -10009 */ 97 115 98 116 /* Generic flags for the *at(2) family of syscalls. */ 99 117
+88
tools/perf/trace/beauty/include/uapi/linux/fs.h
··· 60 60 #define RENAME_EXCHANGE (1 << 1) /* Exchange source and dest */ 61 61 #define RENAME_WHITEOUT (1 << 2) /* Whiteout source */ 62 62 63 + /* 64 + * The root inode of procfs is guaranteed to always have the same inode number. 65 + * For programs that make heavy use of procfs, verifying that the root is a 66 + * real procfs root and using openat2(RESOLVE_{NO_{XDEV,MAGICLINKS},BENEATH}) 67 + * will allow you to make sure you are never tricked into operating on the 68 + * wrong procfs file. 69 + */ 70 + enum procfs_ino { 71 + PROCFS_ROOT_INO = 1, 72 + }; 73 + 63 74 struct file_clone_range { 64 75 __s64 src_fd; 65 76 __u64 src_offset; ··· 100 89 struct fs_sysfs_path { 101 90 __u8 len; 102 91 __u8 name[128]; 92 + }; 93 + 94 + /* Protection info capability flags */ 95 + #define LBMD_PI_CAP_INTEGRITY (1 << 0) 96 + #define LBMD_PI_CAP_REFTAG (1 << 1) 97 + 98 + /* Checksum types for Protection Information */ 99 + #define LBMD_PI_CSUM_NONE 0 100 + #define LBMD_PI_CSUM_IP 1 101 + #define LBMD_PI_CSUM_CRC16_T10DIF 2 102 + #define LBMD_PI_CSUM_CRC64_NVME 4 103 + 104 + /* sizeof first published struct */ 105 + #define LBMD_SIZE_VER0 16 106 + 107 + /* 108 + * Logical block metadata capability descriptor 109 + * If the device does not support metadata, all the fields will be zero. 110 + * Applications must check lbmd_flags to determine whether metadata is 111 + * supported or not. 112 + */ 113 + struct logical_block_metadata_cap { 114 + /* Bitmask of logical block metadata capability flags */ 115 + __u32 lbmd_flags; 116 + /* 117 + * The amount of data described by each unit of logical block 118 + * metadata 119 + */ 120 + __u16 lbmd_interval; 121 + /* 122 + * Size in bytes of the logical block metadata associated with each 123 + * interval 124 + */ 125 + __u8 lbmd_size; 126 + /* 127 + * Size in bytes of the opaque block tag associated with each 128 + * interval 129 + */ 130 + __u8 lbmd_opaque_size; 131 + /* 132 + * Offset in bytes of the opaque block tag within the logical block 133 + * metadata 134 + */ 135 + __u8 lbmd_opaque_offset; 136 + /* Size in bytes of the T10 PI tuple associated with each interval */ 137 + __u8 lbmd_pi_size; 138 + /* Offset in bytes of T10 PI tuple within the logical block metadata */ 139 + __u8 lbmd_pi_offset; 140 + /* T10 PI guard tag type */ 141 + __u8 lbmd_guard_tag_type; 142 + /* Size in bytes of the T10 PI application tag */ 143 + __u8 lbmd_app_tag_size; 144 + /* Size in bytes of the T10 PI reference tag */ 145 + __u8 lbmd_ref_tag_size; 146 + /* Size in bytes of the T10 PI storage tag */ 147 + __u8 lbmd_storage_tag_size; 148 + __u8 pad; 103 149 }; 104 150 105 151 /* extent-same (dedupe) ioctls; these MUST match the btrfs ioctl definitions */ ··· 215 147 __u32 fsx_cowextsize; /* CoW extsize field value (get/set)*/ 216 148 unsigned char fsx_pad[8]; 217 149 }; 150 + 151 + /* 152 + * Variable size structure for file_[sg]et_attr(). 153 + * 154 + * Note. This is alternative to the structure 'struct file_kattr'/'struct fsxattr'. 155 + * As this structure is passed to/from userspace with its size, this can 156 + * be versioned based on the size. 
157 + */ 158 + struct file_attr { 159 + __u64 fa_xflags; /* xflags field value (get/set) */ 160 + __u32 fa_extsize; /* extsize field value (get/set)*/ 161 + __u32 fa_nextents; /* nextents field value (get) */ 162 + __u32 fa_projid; /* project identifier (get/set) */ 163 + __u32 fa_cowextsize; /* CoW extsize field value (get/set) */ 164 + }; 165 + 166 + #define FILE_ATTR_SIZE_VER0 24 167 + #define FILE_ATTR_SIZE_LATEST FILE_ATTR_SIZE_VER0 218 168 219 169 /* 220 170 * Flags for the fsx_xflags field ··· 333 247 * also /sys/kernel/debug/ for filesystems with debugfs exports 334 248 */ 335 249 #define FS_IOC_GETFSSYSFSPATH _IOR(0x15, 1, struct fs_sysfs_path) 250 + /* Get logical block metadata capability details */ 251 + #define FS_IOC_GETLBMD_CAP _IOWR(0x15, 2, struct logical_block_metadata_cap) 336 252 337 253 /* 338 254 * Inode flags (FS_IOC_GETFLAGS / FS_IOC_SETFLAGS)
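The new FS_IOC_GETLBMD_CAP ioctl returns the logical_block_metadata_cap descriptor added above. A minimal userspace probe follows; the device path is a placeholder, the ioctl is issued on a block device here, and per the header comment a device without metadata support reports all-zero fields rather than an error:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(void)
{
        struct logical_block_metadata_cap cap = { 0 };
        int fd = open("/dev/nvme0n1", O_RDONLY);   /* placeholder device */

        if (fd < 0 || ioctl(fd, FS_IOC_GETLBMD_CAP, &cap) < 0)
                return 1;

        printf("lbmd_flags=%#x interval=%u metadata=%u bytes, PI size=%u\n",
               cap.lbmd_flags, cap.lbmd_interval, cap.lbmd_size, cap.lbmd_pi_size);
        close(fd);
        return 0;
}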
+8 -1
tools/perf/trace/beauty/include/uapi/linux/prctl.h
··· 244 244 # define PR_MTE_TAG_MASK (0xffffUL << PR_MTE_TAG_SHIFT) 245 245 /* Unused; kept only for source compatibility */ 246 246 # define PR_MTE_TCF_SHIFT 1 247 + /* MTE tag check store only */ 248 + # define PR_MTE_STORE_ONLY (1UL << 19) 247 249 /* RISC-V pointer masking tag length */ 248 250 # define PR_PMLEN_SHIFT 24 249 251 # define PR_PMLEN_MASK (0x7fUL << PR_PMLEN_SHIFT) ··· 257 255 /* Dispatch syscalls to a userspace handler */ 258 256 #define PR_SET_SYSCALL_USER_DISPATCH 59 259 257 # define PR_SYS_DISPATCH_OFF 0 260 - # define PR_SYS_DISPATCH_ON 1 258 + /* Enable dispatch except for the specified range */ 259 + # define PR_SYS_DISPATCH_EXCLUSIVE_ON 1 260 + /* Enable dispatch for the specified range */ 261 + # define PR_SYS_DISPATCH_INCLUSIVE_ON 2 262 + /* Legacy name for backwards compatibility */ 263 + # define PR_SYS_DISPATCH_ON PR_SYS_DISPATCH_EXCLUSIVE_ON 261 264 /* The control values for the user space selector when dispatch is enabled */ 262 265 # define SYSCALL_DISPATCH_FILTER_ALLOW 0 263 266 # define SYSCALL_DISPATCH_FILTER_BLOCK 1
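PR_SYS_DISPATCH_INCLUSIVE_ON inverts the meaning of the range passed to PR_SET_SYSCALL_USER_DISPATCH: only syscalls issued from inside [offset, offset+len) are dispatched, whereas the legacy EXCLUSIVE mode dispatches everything outside that range. A hedged sketch of enabling the new mode (assumes a linux/prctl.h recent enough to carry the new constant; error handling elided):

#include <stddef.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

/* Selector byte the kernel consults on each syscall; start in ALLOW mode. */
static volatile char sud_selector = SYSCALL_DISPATCH_FILTER_ALLOW;

static int enable_sud_for_range(void *start, size_t len)
{
        return prctl(PR_SET_SYSCALL_USER_DISPATCH, PR_SYS_DISPATCH_INCLUSIVE_ON,
                     (unsigned long)start, len, &sud_selector);
}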
+35
tools/perf/trace/beauty/include/uapi/linux/vhost.h
··· 235 235 */ 236 236 #define VHOST_VDPA_GET_VRING_SIZE _IOWR(VHOST_VIRTIO, 0x82, \ 237 237 struct vhost_vring_state) 238 + 239 + /* Extended features manipulation */ 240 + #define VHOST_GET_FEATURES_ARRAY _IOR(VHOST_VIRTIO, 0x83, \ 241 + struct vhost_features_array) 242 + #define VHOST_SET_FEATURES_ARRAY _IOW(VHOST_VIRTIO, 0x83, \ 243 + struct vhost_features_array) 244 + 245 + /* fork_owner values for vhost */ 246 + #define VHOST_FORK_OWNER_KTHREAD 0 247 + #define VHOST_FORK_OWNER_TASK 1 248 + 249 + /** 250 + * VHOST_SET_FORK_FROM_OWNER - Set the fork_owner flag for the vhost device, 251 + * This ioctl must called before VHOST_SET_OWNER. 252 + * Only available when CONFIG_VHOST_ENABLE_FORK_OWNER_CONTROL=y 253 + * 254 + * @param fork_owner: An 8-bit value that determines the vhost thread mode 255 + * 256 + * When fork_owner is set to VHOST_FORK_OWNER_TASK(default value): 257 + * - Vhost will create vhost worker as tasks forked from the owner, 258 + * inheriting all of the owner's attributes. 259 + * 260 + * When fork_owner is set to VHOST_FORK_OWNER_KTHREAD: 261 + * - Vhost will create vhost workers as kernel threads. 262 + */ 263 + #define VHOST_SET_FORK_FROM_OWNER _IOW(VHOST_VIRTIO, 0x84, __u8) 264 + 265 + /** 266 + * VHOST_GET_FORK_OWNER - Get the current fork_owner flag for the vhost device. 267 + * Only available when CONFIG_VHOST_ENABLE_FORK_OWNER_CONTROL=y 268 + * 269 + * @return: An 8-bit value indicating the current thread mode. 270 + */ 271 + #define VHOST_GET_FORK_FROM_OWNER _IOR(VHOST_VIRTIO, 0x85, __u8) 272 + 238 273 #endif
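The new fork_owner ioctls let a VMM choose between task-based and kthread-based vhost workers. A short usage sketch based on the comments above (the vhost-net device node is used as an example; the ioctls are only present with CONFIG_VHOST_ENABLE_FORK_OWNER_CONTROL=y, and VHOST_SET_FORK_FROM_OWNER must precede VHOST_SET_OWNER):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int use_kthread_workers(void)
{
        __u8 mode = VHOST_FORK_OWNER_KTHREAD;
        int fd = open("/dev/vhost-net", O_RDWR);   /* example device node */

        if (fd < 0)
                return -1;
        if (ioctl(fd, VHOST_SET_FORK_FROM_OWNER, &mode) < 0)
                return -1;
        return ioctl(fd, VHOST_SET_OWNER);
}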
+2 -2
tools/perf/util/build-id.c
··· 115 115 struct build_id bid = { .size = 0, }; 116 116 int ret; 117 117 118 - ret = filename__read_build_id(pathname, &bid); 118 + ret = filename__read_build_id(pathname, &bid, /*block=*/true); 119 119 if (ret < 0) 120 120 return ret; 121 121 ··· 841 841 int ret; 842 842 843 843 nsinfo__mountns_enter(nsi, &nsc); 844 - ret = filename__read_build_id(filename, bid); 844 + ret = filename__read_build_id(filename, bid, /*block=*/true); 845 845 nsinfo__mountns_exit(&nsc); 846 846 847 847 return ret;
+6 -2
tools/perf/util/debuginfo.c
··· 110 110 if (!dso) 111 111 goto out; 112 112 113 - /* Set the build id for DSO_BINARY_TYPE__BUILDID_DEBUGINFO */ 114 - if (is_regular_file(path) && filename__read_build_id(path, &bid) > 0) 113 + /* 114 + * Set the build id for DSO_BINARY_TYPE__BUILDID_DEBUGINFO. Don't block 115 + * incase the path isn't for a regular file. 116 + */ 117 + assert(!dso__has_build_id(dso)); 118 + if (filename__read_build_id(path, &bid, /*block=*/false) > 0) 115 119 dso__set_build_id(dso, &bid); 116 120 117 121 for (type = distro_dwarf_types;
+2 -2
tools/perf/util/dsos.c
··· 81 81 return 0; 82 82 } 83 83 nsinfo__mountns_enter(dso__nsinfo(dso), &nsc); 84 - if (filename__read_build_id(dso__long_name(dso), &bid) > 0) { 84 + if (filename__read_build_id(dso__long_name(dso), &bid, /*block=*/true) > 0) { 85 85 dso__set_build_id(dso, &bid); 86 86 args->have_build_id = true; 87 87 } else if (errno == ENOENT && dso__nsinfo(dso)) { 88 88 char *new_name = dso__filename_with_chroot(dso, dso__long_name(dso)); 89 89 90 - if (new_name && filename__read_build_id(new_name, &bid) > 0) { 90 + if (new_name && filename__read_build_id(new_name, &bid, /*block=*/true) > 0) { 91 91 dso__set_build_id(dso, &bid); 92 92 args->have_build_id = true; 93 93 }
+5 -4
tools/perf/util/symbol-elf.c
··· 902 902 903 903 #else // HAVE_LIBBFD_BUILDID_SUPPORT 904 904 905 - static int read_build_id(const char *filename, struct build_id *bid) 905 + static int read_build_id(const char *filename, struct build_id *bid, bool block) 906 906 { 907 907 size_t size = sizeof(bid->data); 908 908 int fd, err = -1; ··· 911 911 if (size < BUILD_ID_SIZE) 912 912 goto out; 913 913 914 - fd = open(filename, O_RDONLY); 914 + fd = open(filename, block ? O_RDONLY : (O_RDONLY | O_NONBLOCK)); 915 915 if (fd < 0) 916 916 goto out; 917 917 ··· 934 934 935 935 #endif // HAVE_LIBBFD_BUILDID_SUPPORT 936 936 937 - int filename__read_build_id(const char *filename, struct build_id *bid) 937 + int filename__read_build_id(const char *filename, struct build_id *bid, bool block) 938 938 { 939 939 struct kmod_path m = { .name = NULL, }; 940 940 char path[PATH_MAX]; ··· 958 958 } 959 959 close(fd); 960 960 filename = path; 961 + block = true; 961 962 } 962 963 963 - err = read_build_id(filename, bid); 964 + err = read_build_id(filename, bid, block); 964 965 965 966 if (m.comp) 966 967 unlink(filename);
+29 -30
tools/perf/util/symbol-minimal.c
··· 4 4 5 5 #include <errno.h> 6 6 #include <unistd.h> 7 - #include <stdio.h> 8 7 #include <fcntl.h> 9 8 #include <string.h> 10 9 #include <stdlib.h> ··· 85 86 /* 86 87 * Just try PT_NOTE header otherwise fails 87 88 */ 88 - int filename__read_build_id(const char *filename, struct build_id *bid) 89 + int filename__read_build_id(const char *filename, struct build_id *bid, bool block) 89 90 { 90 - FILE *fp; 91 - int ret = -1; 91 + int fd, ret = -1; 92 92 bool need_swap = false, elf32; 93 - u8 e_ident[EI_NIDENT]; 94 - int i; 95 93 union { 96 94 struct { 97 95 Elf32_Ehdr ehdr32; ··· 99 103 Elf64_Phdr *phdr64; 100 104 }; 101 105 } hdrs; 102 - void *phdr; 103 - size_t phdr_size; 104 - void *buf = NULL; 105 - size_t buf_size = 0; 106 + void *phdr, *buf = NULL; 107 + ssize_t phdr_size, ehdr_size, buf_size = 0; 106 108 107 - fp = fopen(filename, "r"); 108 - if (fp == NULL) 109 + fd = open(filename, block ? O_RDONLY : (O_RDONLY | O_NONBLOCK)); 110 + if (fd < 0) 109 111 return -1; 110 112 111 - if (fread(e_ident, sizeof(e_ident), 1, fp) != 1) 113 + if (read(fd, hdrs.ehdr32.e_ident, EI_NIDENT) != EI_NIDENT) 112 114 goto out; 113 115 114 - if (memcmp(e_ident, ELFMAG, SELFMAG) || 115 - e_ident[EI_VERSION] != EV_CURRENT) 116 + if (memcmp(hdrs.ehdr32.e_ident, ELFMAG, SELFMAG) || 117 + hdrs.ehdr32.e_ident[EI_VERSION] != EV_CURRENT) 116 118 goto out; 117 119 118 - need_swap = check_need_swap(e_ident[EI_DATA]); 119 - elf32 = e_ident[EI_CLASS] == ELFCLASS32; 120 + need_swap = check_need_swap(hdrs.ehdr32.e_ident[EI_DATA]); 121 + elf32 = hdrs.ehdr32.e_ident[EI_CLASS] == ELFCLASS32; 122 + ehdr_size = (elf32 ? sizeof(hdrs.ehdr32) : sizeof(hdrs.ehdr64)) - EI_NIDENT; 120 123 121 - if (fread(elf32 ? (void *)&hdrs.ehdr32 : (void *)&hdrs.ehdr64, 122 - elf32 ? sizeof(hdrs.ehdr32) : sizeof(hdrs.ehdr64), 123 - 1, fp) != 1) 124 + if (read(fd, 125 + (elf32 ? (void *)&hdrs.ehdr32 : (void *)&hdrs.ehdr64) + EI_NIDENT, 126 + ehdr_size) != ehdr_size) 124 127 goto out; 125 128 126 129 if (need_swap) { ··· 133 138 hdrs.ehdr64.e_phnum = bswap_16(hdrs.ehdr64.e_phnum); 134 139 } 135 140 } 136 - phdr_size = elf32 ? hdrs.ehdr32.e_phentsize * hdrs.ehdr32.e_phnum 137 - : hdrs.ehdr64.e_phentsize * hdrs.ehdr64.e_phnum; 141 + if ((elf32 && hdrs.ehdr32.e_phentsize != sizeof(Elf32_Phdr)) || 142 + (!elf32 && hdrs.ehdr64.e_phentsize != sizeof(Elf64_Phdr))) 143 + goto out; 144 + 145 + phdr_size = elf32 ? sizeof(Elf32_Phdr) * hdrs.ehdr32.e_phnum 146 + : sizeof(Elf64_Phdr) * hdrs.ehdr64.e_phnum; 138 147 phdr = malloc(phdr_size); 139 148 if (phdr == NULL) 140 149 goto out; 141 150 142 - fseek(fp, elf32 ? hdrs.ehdr32.e_phoff : hdrs.ehdr64.e_phoff, SEEK_SET); 143 - if (fread(phdr, phdr_size, 1, fp) != 1) 151 + lseek(fd, elf32 ? hdrs.ehdr32.e_phoff : hdrs.ehdr64.e_phoff, SEEK_SET); 152 + if (read(fd, phdr, phdr_size) != phdr_size) 144 153 goto out_free; 145 154 146 155 if (elf32) ··· 152 153 else 153 154 hdrs.phdr64 = phdr; 154 155 155 - for (i = 0; i < elf32 ? hdrs.ehdr32.e_phnum : hdrs.ehdr64.e_phnum; i++) { 156 - size_t p_filesz; 156 + for (int i = 0; i < (elf32 ? hdrs.ehdr32.e_phnum : hdrs.ehdr64.e_phnum); i++) { 157 + ssize_t p_filesz; 157 158 158 159 if (need_swap) { 159 160 if (elf32) { ··· 179 180 goto out_free; 180 181 buf = tmp; 181 182 } 182 - fseek(fp, elf32 ? hdrs.phdr32[i].p_offset : hdrs.phdr64[i].p_offset, SEEK_SET); 183 - if (fread(buf, p_filesz, 1, fp) != 1) 183 + lseek(fd, elf32 ? 
hdrs.phdr32[i].p_offset : hdrs.phdr64[i].p_offset, SEEK_SET); 184 + if (read(fd, buf, p_filesz) != p_filesz) 184 185 goto out_free; 185 186 186 187 ret = read_build_id(buf, p_filesz, bid, need_swap); ··· 193 194 free(buf); 194 195 free(phdr); 195 196 out: 196 - fclose(fp); 197 + close(fd); 197 198 return ret; 198 199 } 199 200 ··· 323 324 if (ret >= 0) 324 325 RC_CHK_ACCESS(dso)->is_64_bit = ret; 325 326 326 - if (filename__read_build_id(ss->name, &bid) > 0) 327 + if (filename__read_build_id(ss->name, &bid, /*block=*/true) > 0) 327 328 dso__set_build_id(dso, &bid); 328 329 return 0; 329 330 }
+4 -4
tools/perf/util/symbol.c
··· 1869 1869 1870 1870 /* 1871 1871 * Read the build id if possible. This is required for 1872 - * DSO_BINARY_TYPE__BUILDID_DEBUGINFO to work 1872 + * DSO_BINARY_TYPE__BUILDID_DEBUGINFO to work. Don't block in case path 1873 + * isn't for a regular file. 1873 1874 */ 1874 - if (!dso__has_build_id(dso) && 1875 - is_regular_file(dso__long_name(dso))) { 1875 + if (!dso__has_build_id(dso)) { 1876 1876 struct build_id bid = { .size = 0, }; 1877 1877 1878 1878 __symbol__join_symfs(name, PATH_MAX, dso__long_name(dso)); 1879 - if (filename__read_build_id(name, &bid) > 0) 1879 + if (filename__read_build_id(name, &bid, /*block=*/false) > 0) 1880 1880 dso__set_build_id(dso, &bid); 1881 1881 } 1882 1882
+1 -1
tools/perf/util/symbol.h
··· 140 140 141 141 enum dso_type dso__type_fd(int fd); 142 142 143 - int filename__read_build_id(const char *filename, struct build_id *id); 143 + int filename__read_build_id(const char *filename, struct build_id *id, bool block); 144 144 int sysfs__read_build_id(const char *filename, struct build_id *bid); 145 145 int modules__parse(const char *filename, void *arg, 146 146 int (*process_module)(void *arg, const char *name,
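The extra `block` argument threaded through the perf call sites above lets callers on paths that may point at pipes or other special files opt into an O_NONBLOCK open. A short usage sketch against the updated prototype (perf-internal headers assumed):

#include "util/build-id.h"
#include "util/symbol.h"

/* Probe a path for a build-id without risking a hang on a non-regular file. */
static int try_read_build_id(const char *path, struct build_id *bid)
{
        bid->size = 0;
        return filename__read_build_id(path, bid, /*block=*/false) > 0 ? 0 : -1;
}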
+1 -1
tools/perf/util/synthetic-events.c
··· 401 401 nsi = nsinfo__new(event->pid); 402 402 nsinfo__mountns_enter(nsi, &nc); 403 403 404 - rc = filename__read_build_id(event->filename, &bid) > 0 ? 0 : -1; 404 + rc = filename__read_build_id(event->filename, &bid, /*block=*/false) > 0 ? 0 : -1; 405 405 406 406 nsinfo__mountns_exit(&nc); 407 407 nsinfo__put(nsi);
+2
tools/scripts/syscall.tbl
··· 408 408 465 common listxattrat sys_listxattrat 409 409 466 common removexattrat sys_removexattrat 410 410 467 common open_tree_attr sys_open_tree_attr 411 + 468 common file_getattr sys_file_getattr 412 + 469 common file_setattr sys_file_setattr
+2 -3
tools/testing/selftests/arm64/fp/fp-ptrace.c
··· 1187 1187 if (!vl) 1188 1188 return; 1189 1189 1190 - iov.iov_len = SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, SVE_PT_REGS_SVE); 1190 + iov.iov_len = SVE_PT_SIZE(vq, SVE_PT_REGS_SVE); 1191 1191 iov.iov_base = malloc(iov.iov_len); 1192 1192 if (!iov.iov_base) { 1193 1193 ksft_print_msg("Failed allocating %lu byte SVE write buffer\n", ··· 1234 1234 if (!vl) 1235 1235 return; 1236 1236 1237 - iov.iov_len = SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, 1238 - SVE_PT_REGS_FPSIMD); 1237 + iov.iov_len = SVE_PT_SIZE(vq, SVE_PT_REGS_FPSIMD); 1239 1238 iov.iov_base = malloc(iov.iov_len); 1240 1239 if (!iov.iov_base) { 1241 1240 ksft_print_msg("Failed allocating %lu byte SVE write buffer\n",
+1
tools/testing/selftests/kvm/Makefile.kvm
··· 169 169 TEST_GEN_PROGS_arm64 += arm64/vgic_lpi_stress 170 170 TEST_GEN_PROGS_arm64 += arm64/vpmu_counter_access 171 171 TEST_GEN_PROGS_arm64 += arm64/no-vgic-v3 172 + TEST_GEN_PROGS_arm64 += arm64/kvm-uuid 172 173 TEST_GEN_PROGS_arm64 += access_tracking_perf_test 173 174 TEST_GEN_PROGS_arm64 += arch_timer 174 175 TEST_GEN_PROGS_arm64 += coalesced_io_test
+1 -1
tools/testing/selftests/kvm/arm64/aarch32_id_regs.c
··· 146 146 147 147 val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1)); 148 148 149 - el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), val); 149 + el0 = FIELD_GET(ID_AA64PFR0_EL1_EL0, val); 150 150 return el0 == ID_AA64PFR0_EL1_EL0_IMP; 151 151 } 152 152
+6 -6
tools/testing/selftests/kvm/arm64/debug-exceptions.c
··· 116 116 117 117 /* Reset all bcr/bvr/wcr/wvr registers */ 118 118 dfr0 = read_sysreg(id_aa64dfr0_el1); 119 - brps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_BRPs), dfr0); 119 + brps = FIELD_GET(ID_AA64DFR0_EL1_BRPs, dfr0); 120 120 for (i = 0; i <= brps; i++) { 121 121 write_dbgbcr(i, 0); 122 122 write_dbgbvr(i, 0); 123 123 } 124 - wrps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_WRPs), dfr0); 124 + wrps = FIELD_GET(ID_AA64DFR0_EL1_WRPs, dfr0); 125 125 for (i = 0; i <= wrps; i++) { 126 126 write_dbgwcr(i, 0); 127 127 write_dbgwvr(i, 0); ··· 418 418 419 419 static int debug_version(uint64_t id_aa64dfr0) 420 420 { 421 - return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), id_aa64dfr0); 421 + return FIELD_GET(ID_AA64DFR0_EL1_DebugVer, id_aa64dfr0); 422 422 } 423 423 424 424 static void test_guest_debug_exceptions(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn) ··· 539 539 int b, w, c; 540 540 541 541 /* Number of breakpoints */ 542 - brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_BRPs), aa64dfr0) + 1; 542 + brp_num = FIELD_GET(ID_AA64DFR0_EL1_BRPs, aa64dfr0) + 1; 543 543 __TEST_REQUIRE(brp_num >= 2, "At least two breakpoints are required"); 544 544 545 545 /* Number of watchpoints */ 546 - wrp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_WRPs), aa64dfr0) + 1; 546 + wrp_num = FIELD_GET(ID_AA64DFR0_EL1_WRPs, aa64dfr0) + 1; 547 547 548 548 /* Number of context aware breakpoints */ 549 - ctx_brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_CTX_CMPs), aa64dfr0) + 1; 549 + ctx_brp_num = FIELD_GET(ID_AA64DFR0_EL1_CTX_CMPs, aa64dfr0) + 1; 550 550 551 551 pr_debug("%s brp_num:%d, wrp_num:%d, ctx_brp_num:%d\n", __func__, 552 552 brp_num, wrp_num, ctx_brp_num);
+70
tools/testing/selftests/kvm/arm64/kvm-uuid.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + // Check that nobody has tampered with KVM's UID 4 + 5 + #include <errno.h> 6 + #include <linux/arm-smccc.h> 7 + #include <asm/kvm.h> 8 + #include <kvm_util.h> 9 + 10 + #include "processor.h" 11 + 12 + /* 13 + * Do NOT redefine these constants, or try to replace them with some 14 + * "common" version. They are hardcoded here to detect any potential 15 + * breakage happening in the rest of the kernel. 16 + * 17 + * KVM UID value: 28b46fb6-2ec5-11e9-a9ca-4b564d003a74 18 + */ 19 + #define ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_0 0xb66fb428U 20 + #define ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_1 0xe911c52eU 21 + #define ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_2 0x564bcaa9U 22 + #define ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_3 0x743a004dU 23 + 24 + static void guest_code(void) 25 + { 26 + struct arm_smccc_res res = {}; 27 + 28 + smccc_hvc(ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res); 29 + 30 + __GUEST_ASSERT(res.a0 == ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_0 && 31 + res.a1 == ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_1 && 32 + res.a2 == ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_2 && 33 + res.a3 == ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_3, 34 + "Unexpected KVM-specific UID %lx %lx %lx %lx\n", res.a0, res.a1, res.a2, res.a3); 35 + GUEST_DONE(); 36 + } 37 + 38 + int main (int argc, char *argv[]) 39 + { 40 + struct kvm_vcpu *vcpu; 41 + struct kvm_vm *vm; 42 + struct ucall uc; 43 + bool guest_done = false; 44 + 45 + vm = vm_create_with_one_vcpu(&vcpu, guest_code); 46 + 47 + while (!guest_done) { 48 + vcpu_run(vcpu); 49 + 50 + switch (get_ucall(vcpu, &uc)) { 51 + case UCALL_SYNC: 52 + break; 53 + case UCALL_DONE: 54 + guest_done = true; 55 + break; 56 + case UCALL_ABORT: 57 + REPORT_GUEST_ASSERT(uc); 58 + break; 59 + case UCALL_PRINTF: 60 + printf("%s", uc.buffer); 61 + break; 62 + default: 63 + TEST_FAIL("Unexpected guest exit"); 64 + } 65 + } 66 + 67 + kvm_vm_free(vm); 68 + 69 + return 0; 70 + }
+2 -2
tools/testing/selftests/kvm/arm64/no-vgic-v3.c
··· 54 54 * Check that we advertise that ID_AA64PFR0_EL1.GIC == 0, having 55 55 * hidden the feature at runtime without any other userspace action. 56 56 */ 57 - __GUEST_ASSERT(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 57 + __GUEST_ASSERT(FIELD_GET(ID_AA64PFR0_EL1_GIC, 58 58 read_sysreg(id_aa64pfr0_el1)) == 0, 59 59 "GICv3 wrongly advertised"); 60 60 ··· 165 165 166 166 vm = vm_create_with_one_vcpu(&vcpu, NULL); 167 167 pfr0 = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1)); 168 - __TEST_REQUIRE(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), pfr0), 168 + __TEST_REQUIRE(FIELD_GET(ID_AA64PFR0_EL1_GIC, pfr0), 169 169 "GICv3 not supported."); 170 170 kvm_vm_free(vm); 171 171
+3 -3
tools/testing/selftests/kvm/arm64/page_fault_test.c
··· 95 95 uint64_t isar0 = read_sysreg(id_aa64isar0_el1); 96 96 uint64_t atomic; 97 97 98 - atomic = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_ATOMIC), isar0); 98 + atomic = FIELD_GET(ID_AA64ISAR0_EL1_ATOMIC, isar0); 99 99 return atomic >= 2; 100 100 } 101 101 102 102 static bool guest_check_dc_zva(void) 103 103 { 104 104 uint64_t dczid = read_sysreg(dczid_el0); 105 - uint64_t dzp = FIELD_GET(ARM64_FEATURE_MASK(DCZID_EL0_DZP), dczid); 105 + uint64_t dzp = FIELD_GET(DCZID_EL0_DZP, dczid); 106 106 107 107 return dzp == 0; 108 108 } ··· 195 195 uint64_t hadbs, tcr; 196 196 197 197 /* Skip if HA is not supported. */ 198 - hadbs = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_HAFDBS), mmfr1); 198 + hadbs = FIELD_GET(ID_AA64MMFR1_EL1_HAFDBS, mmfr1); 199 199 if (hadbs == 0) 200 200 return false; 201 201
+5 -4
tools/testing/selftests/kvm/arm64/set_id_regs.c
··· 243 243 GUEST_REG_SYNC(SYS_ID_AA64MMFR0_EL1); 244 244 GUEST_REG_SYNC(SYS_ID_AA64MMFR1_EL1); 245 245 GUEST_REG_SYNC(SYS_ID_AA64MMFR2_EL1); 246 + GUEST_REG_SYNC(SYS_ID_AA64MMFR3_EL1); 246 247 GUEST_REG_SYNC(SYS_ID_AA64ZFR0_EL1); 247 248 GUEST_REG_SYNC(SYS_CTR_EL0); 248 249 GUEST_REG_SYNC(SYS_MIDR_EL1); ··· 595 594 */ 596 595 val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1)); 597 596 598 - mte = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE), val); 599 - mte_frac = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac), val); 597 + mte = FIELD_GET(ID_AA64PFR1_EL1_MTE, val); 598 + mte_frac = FIELD_GET(ID_AA64PFR1_EL1_MTE_frac, val); 600 599 if (mte != ID_AA64PFR1_EL1_MTE_MTE2 || 601 600 mte_frac != ID_AA64PFR1_EL1_MTE_frac_NI) { 602 601 ksft_test_result_skip("MTE_ASYNC or MTE_ASYMM are supported, nothing to test\n"); ··· 613 612 } 614 613 615 614 val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1)); 616 - mte_frac = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac), val); 615 + mte_frac = FIELD_GET(ID_AA64PFR1_EL1_MTE_frac, val); 617 616 if (mte_frac == ID_AA64PFR1_EL1_MTE_frac_NI) 618 617 ksft_test_result_pass("ID_AA64PFR1_EL1.MTE_frac=0 accepted and still 0xF\n"); 619 618 else ··· 775 774 776 775 /* Check for AARCH64 only system */ 777 776 val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1)); 778 - el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), val); 777 + el0 = FIELD_GET(ID_AA64PFR0_EL1_EL0, val); 779 778 aarch64_only = (el0 == ID_AA64PFR0_EL1_EL0_IMP); 780 779 781 780 ksft_print_header();
+1 -1
tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
··· 441 441 442 442 /* Make sure that PMUv3 support is indicated in the ID register */ 443 443 dfr0 = vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1)); 444 - pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), dfr0); 444 + pmuver = FIELD_GET(ID_AA64DFR0_EL1_PMUVer, dfr0); 445 445 TEST_ASSERT(pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && 446 446 pmuver >= ID_AA64DFR0_EL1_PMUVer_IMP, 447 447 "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
+3 -3
tools/testing/selftests/kvm/lib/arm64/processor.c
··· 573 573 err = ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg); 574 574 TEST_ASSERT(err == 0, KVM_IOCTL_ERROR(KVM_GET_ONE_REG, vcpu_fd)); 575 575 576 - gran = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN4), val); 576 + gran = FIELD_GET(ID_AA64MMFR0_EL1_TGRAN4, val); 577 577 *ipa4k = max_ipa_for_page_size(ipa, gran, ID_AA64MMFR0_EL1_TGRAN4_NI, 578 578 ID_AA64MMFR0_EL1_TGRAN4_52_BIT); 579 579 580 - gran = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN64), val); 580 + gran = FIELD_GET(ID_AA64MMFR0_EL1_TGRAN64, val); 581 581 *ipa64k = max_ipa_for_page_size(ipa, gran, ID_AA64MMFR0_EL1_TGRAN64_NI, 582 582 ID_AA64MMFR0_EL1_TGRAN64_IMP); 583 583 584 - gran = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN16), val); 584 + gran = FIELD_GET(ID_AA64MMFR0_EL1_TGRAN16, val); 585 585 *ipa16k = max_ipa_for_page_size(ipa, gran, ID_AA64MMFR0_EL1_TGRAN16_NI, 586 586 ID_AA64MMFR0_EL1_TGRAN16_52_BIT); 587 587
+5 -5
tools/testing/selftests/ublk/file_backed.c
··· 20 20 struct io_uring_sqe *sqe[1]; 21 21 22 22 ublk_io_alloc_sqes(t, sqe, 1); 23 - io_uring_prep_fsync(sqe[0], 1 /*fds[1]*/, IORING_FSYNC_DATASYNC); 23 + io_uring_prep_fsync(sqe[0], ublk_get_registered_fd(q, 1) /*fds[1]*/, IORING_FSYNC_DATASYNC); 24 24 io_uring_sqe_set_flags(sqe[0], IOSQE_FIXED_FILE); 25 25 /* bit63 marks us as tgt io */ 26 26 sqe[0]->user_data = build_user_data(tag, ublk_op, 0, q->q_id, 1); ··· 42 42 if (!sqe[0]) 43 43 return -ENOMEM; 44 44 45 - io_uring_prep_rw(op, sqe[0], 1 /*fds[1]*/, 45 + io_uring_prep_rw(op, sqe[0], ublk_get_registered_fd(q, 1) /*fds[1]*/, 46 46 addr, 47 47 iod->nr_sectors << 9, 48 48 iod->start_sector << 9); ··· 56 56 57 57 ublk_io_alloc_sqes(t, sqe, 3); 58 58 59 - io_uring_prep_buf_register(sqe[0], 0, tag, q->q_id, ublk_get_io(q, tag)->buf_index); 59 + io_uring_prep_buf_register(sqe[0], q, tag, q->q_id, ublk_get_io(q, tag)->buf_index); 60 60 sqe[0]->flags |= IOSQE_CQE_SKIP_SUCCESS | IOSQE_IO_HARDLINK; 61 61 sqe[0]->user_data = build_user_data(tag, 62 62 ublk_cmd_op_nr(sqe[0]->cmd_op), 0, q->q_id, 1); 63 63 64 - io_uring_prep_rw(op, sqe[1], 1 /*fds[1]*/, 0, 64 + io_uring_prep_rw(op, sqe[1], ublk_get_registered_fd(q, 1) /*fds[1]*/, 0, 65 65 iod->nr_sectors << 9, 66 66 iod->start_sector << 9); 67 67 sqe[1]->buf_index = tag; 68 68 sqe[1]->flags |= IOSQE_FIXED_FILE | IOSQE_IO_HARDLINK; 69 69 sqe[1]->user_data = build_user_data(tag, ublk_op, 0, q->q_id, 1); 70 70 71 - io_uring_prep_buf_unregister(sqe[2], 0, tag, q->q_id, ublk_get_io(q, tag)->buf_index); 71 + io_uring_prep_buf_unregister(sqe[2], q, tag, q->q_id, ublk_get_io(q, tag)->buf_index); 72 72 sqe[2]->user_data = build_user_data(tag, ublk_cmd_op_nr(sqe[2]->cmd_op), 0, q->q_id, 1); 73 73 74 74 return 2;
+31 -7
tools/testing/selftests/ublk/kublk.c
··· 432 432 } 433 433 } 434 434 435 - static int ublk_queue_init(struct ublk_queue *q, unsigned extra_flags) 435 + static int ublk_queue_init(struct ublk_queue *q, unsigned long long extra_flags) 436 436 { 437 437 struct ublk_dev *dev = q->dev; 438 438 int depth = dev->dev_info.queue_depth; ··· 445 445 q->q_depth = depth; 446 446 q->flags = dev->dev_info.flags; 447 447 q->flags |= extra_flags; 448 + 449 + /* Cache fd in queue for fast path access */ 450 + q->ublk_fd = dev->fds[0]; 448 451 449 452 cmd_buf_size = ublk_queue_cmd_buf_sz(q); 450 453 off = UBLKSRV_CMD_BUF_OFFSET + q->q_id * ublk_queue_max_cmd_buf_sz(); ··· 484 481 return -ENOMEM; 485 482 } 486 483 487 - static int ublk_thread_init(struct ublk_thread *t) 484 + static int ublk_thread_init(struct ublk_thread *t, unsigned long long extra_flags) 488 485 { 489 486 struct ublk_dev *dev = t->dev; 487 + unsigned long long flags = dev->dev_info.flags | extra_flags; 490 488 int ring_depth = dev->tgt.sq_depth, cq_depth = dev->tgt.cq_depth; 491 489 int ret; 492 490 ··· 516 512 517 513 io_uring_register_ring_fd(&t->ring); 518 514 519 - ret = io_uring_register_files(&t->ring, dev->fds, dev->nr_fds); 515 + if (flags & UBLKS_Q_NO_UBLK_FIXED_FD) { 516 + /* Register only backing files starting from index 1, exclude ublk control device */ 517 + if (dev->nr_fds > 1) { 518 + ret = io_uring_register_files(&t->ring, &dev->fds[1], dev->nr_fds - 1); 519 + } else { 520 + /* No backing files to register, skip file registration */ 521 + ret = 0; 522 + } 523 + } else { 524 + ret = io_uring_register_files(&t->ring, dev->fds, dev->nr_fds); 525 + } 520 526 if (ret) { 521 527 ublk_err("ublk dev %d thread %d register files failed %d\n", 522 528 t->dev->dev_info.dev_id, t->idx, ret); ··· 640 626 641 627 /* These fields should be written once, never change */ 642 628 ublk_set_sqe_cmd_op(sqe[0], cmd_op); 643 - sqe[0]->fd = 0; /* dev->fds[0] */ 629 + sqe[0]->fd = ublk_get_registered_fd(q, 0); /* dev->fds[0] */ 644 630 sqe[0]->opcode = IORING_OP_URING_CMD; 645 - sqe[0]->flags = IOSQE_FIXED_FILE; 631 + if (q->flags & UBLKS_Q_NO_UBLK_FIXED_FD) 632 + sqe[0]->flags = 0; /* Use raw FD, not fixed file */ 633 + else 634 + sqe[0]->flags = IOSQE_FIXED_FILE; 646 635 sqe[0]->rw_flags = 0; 647 636 cmd->tag = io->tag; 648 637 cmd->q_id = q->q_id; ··· 849 832 unsigned idx; 850 833 sem_t *ready; 851 834 cpu_set_t *affinity; 835 + unsigned long long extra_flags; 852 836 }; 853 837 854 838 static void *ublk_io_handler_fn(void *data) ··· 862 844 t->dev = info->dev; 863 845 t->idx = info->idx; 864 846 865 - ret = ublk_thread_init(t); 847 + ret = ublk_thread_init(t, info->extra_flags); 866 848 if (ret) { 867 849 ublk_err("ublk dev %d thread %u init failed\n", 868 850 dev_id, t->idx); ··· 952 934 953 935 if (ctx->auto_zc_fallback) 954 936 extra_flags = UBLKS_Q_AUTO_BUF_REG_FALLBACK; 937 + if (ctx->no_ublk_fixed_fd) 938 + extra_flags |= UBLKS_Q_NO_UBLK_FIXED_FD; 955 939 956 940 for (i = 0; i < dinfo->nr_hw_queues; i++) { 957 941 dev->q[i].dev = dev; ··· 971 951 tinfo[i].dev = dev; 972 952 tinfo[i].idx = i; 973 953 tinfo[i].ready = &ready; 954 + tinfo[i].extra_flags = extra_flags; 974 955 975 956 /* 976 957 * If threads are not tied 1:1 to queues, setting thread ··· 1492 1471 printf("%s %s -t [null|loop|stripe|fault_inject] [-q nr_queues] [-d depth] [-n dev_id]\n", 1493 1472 exe, recovery ? 
"recover" : "add"); 1494 1473 printf("\t[--foreground] [--quiet] [-z] [--auto_zc] [--auto_zc_fallback] [--debug_mask mask] [-r 0|1 ] [-g]\n"); 1495 - printf("\t[-e 0|1 ] [-i 0|1]\n"); 1474 + printf("\t[-e 0|1 ] [-i 0|1] [--no_ublk_fixed_fd]\n"); 1496 1475 printf("\t[--nthreads threads] [--per_io_tasks]\n"); 1497 1476 printf("\t[target options] [backfile1] [backfile2] ...\n"); 1498 1477 printf("\tdefault: nr_queues=2(max 32), depth=128(max 1024), dev_id=-1(auto allocation)\n"); ··· 1555 1534 { "size", 1, NULL, 's'}, 1556 1535 { "nthreads", 1, NULL, 0 }, 1557 1536 { "per_io_tasks", 0, NULL, 0 }, 1537 + { "no_ublk_fixed_fd", 0, NULL, 0 }, 1558 1538 { 0, 0, 0, 0 } 1559 1539 }; 1560 1540 const struct ublk_tgt_ops *ops = NULL; ··· 1635 1613 ctx.nthreads = strtol(optarg, NULL, 10); 1636 1614 if (!strcmp(longopts[option_idx].name, "per_io_tasks")) 1637 1615 ctx.per_io_tasks = 1; 1616 + if (!strcmp(longopts[option_idx].name, "no_ublk_fixed_fd")) 1617 + ctx.no_ublk_fixed_fd = 1; 1638 1618 break; 1639 1619 case '?': 1640 1620 /*
+31 -14
tools/testing/selftests/ublk/kublk.h
··· 77 77 unsigned int recovery:1; 78 78 unsigned int auto_zc_fallback:1; 79 79 unsigned int per_io_tasks:1; 80 + unsigned int no_ublk_fixed_fd:1; 80 81 81 82 int _evtfd; 82 83 int _shmid; ··· 167 166 168 167 /* borrow one bit of ublk uapi flags, which may never be used */ 169 168 #define UBLKS_Q_AUTO_BUF_REG_FALLBACK (1ULL << 63) 169 + #define UBLKS_Q_NO_UBLK_FIXED_FD (1ULL << 62) 170 170 __u64 flags; 171 + int ublk_fd; /* cached ublk char device fd */ 171 172 struct ublk_io ios[UBLK_QUEUE_DEPTH]; 172 173 }; 173 174 ··· 276 273 return nr_sqes; 277 274 } 278 275 279 - static inline void io_uring_prep_buf_register(struct io_uring_sqe *sqe, 280 - int dev_fd, int tag, int q_id, __u64 index) 276 + static inline int ublk_get_registered_fd(struct ublk_queue *q, int fd_index) 277 + { 278 + if (q->flags & UBLKS_Q_NO_UBLK_FIXED_FD) { 279 + if (fd_index == 0) 280 + /* Return the raw ublk FD for index 0 */ 281 + return q->ublk_fd; 282 + /* Adjust index for backing files (index 1 becomes 0, etc.) */ 283 + return fd_index - 1; 284 + } 285 + return fd_index; 286 + } 287 + 288 + static inline void __io_uring_prep_buf_reg_unreg(struct io_uring_sqe *sqe, 289 + struct ublk_queue *q, int tag, int q_id, __u64 index) 281 290 { 282 291 struct ublksrv_io_cmd *cmd = (struct ublksrv_io_cmd *)sqe->cmd; 292 + int dev_fd = ublk_get_registered_fd(q, 0); 283 293 284 294 io_uring_prep_read(sqe, dev_fd, 0, 0, 0); 285 295 sqe->opcode = IORING_OP_URING_CMD; 286 - sqe->flags |= IOSQE_FIXED_FILE; 287 - sqe->cmd_op = UBLK_U_IO_REGISTER_IO_BUF; 296 + if (q->flags & UBLKS_Q_NO_UBLK_FIXED_FD) 297 + sqe->flags &= ~IOSQE_FIXED_FILE; 298 + else 299 + sqe->flags |= IOSQE_FIXED_FILE; 288 300 289 301 cmd->tag = tag; 290 302 cmd->addr = index; 291 303 cmd->q_id = q_id; 292 304 } 293 305 294 - static inline void io_uring_prep_buf_unregister(struct io_uring_sqe *sqe, 295 - int dev_fd, int tag, int q_id, __u64 index) 306 + static inline void io_uring_prep_buf_register(struct io_uring_sqe *sqe, 307 + struct ublk_queue *q, int tag, int q_id, __u64 index) 296 308 { 297 - struct ublksrv_io_cmd *cmd = (struct ublksrv_io_cmd *)sqe->cmd; 309 + __io_uring_prep_buf_reg_unreg(sqe, q, tag, q_id, index); 310 + sqe->cmd_op = UBLK_U_IO_REGISTER_IO_BUF; 311 + } 298 312 299 - io_uring_prep_read(sqe, dev_fd, 0, 0, 0); 300 - sqe->opcode = IORING_OP_URING_CMD; 301 - sqe->flags |= IOSQE_FIXED_FILE; 313 + static inline void io_uring_prep_buf_unregister(struct io_uring_sqe *sqe, 314 + struct ublk_queue *q, int tag, int q_id, __u64 index) 315 + { 316 + __io_uring_prep_buf_reg_unreg(sqe, q, tag, q_id, index); 302 317 sqe->cmd_op = UBLK_U_IO_UNREGISTER_IO_BUF; 303 - 304 - cmd->tag = tag; 305 - cmd->addr = index; 306 - cmd->q_id = q_id; 307 318 } 308 319 309 320 static inline void *ublk_get_sqe_cmd(const struct io_uring_sqe *sqe)
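For orientation, the index remapping performed by ublk_get_registered_fd() when UBLKS_Q_NO_UBLK_FIXED_FD is set: only the backing files are registered, so their fixed-file slots shift down by one while index 0 falls back to the cached raw char-device fd.

/* ublk_get_registered_fd(q, 0) -> q->ublk_fd  (raw fd, no IOSQE_FIXED_FILE)
 * ublk_get_registered_fd(q, 1) -> 0           (first backing file)
 * ublk_get_registered_fd(q, 2) -> 1           (second backing file)
 */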
+2 -2
tools/testing/selftests/ublk/null.c
··· 63 63 64 64 ublk_io_alloc_sqes(t, sqe, 3); 65 65 66 - io_uring_prep_buf_register(sqe[0], 0, tag, q->q_id, ublk_get_io(q, tag)->buf_index); 66 + io_uring_prep_buf_register(sqe[0], q, tag, q->q_id, ublk_get_io(q, tag)->buf_index); 67 67 sqe[0]->user_data = build_user_data(tag, 68 68 ublk_cmd_op_nr(sqe[0]->cmd_op), 0, q->q_id, 1); 69 69 sqe[0]->flags |= IOSQE_CQE_SKIP_SUCCESS | IOSQE_IO_HARDLINK; ··· 71 71 __setup_nop_io(tag, iod, sqe[1], q->q_id); 72 72 sqe[1]->flags |= IOSQE_IO_HARDLINK; 73 73 74 - io_uring_prep_buf_unregister(sqe[2], 0, tag, q->q_id, ublk_get_io(q, tag)->buf_index); 74 + io_uring_prep_buf_unregister(sqe[2], q, tag, q->q_id, ublk_get_io(q, tag)->buf_index); 75 75 sqe[2]->user_data = build_user_data(tag, ublk_cmd_op_nr(sqe[2]->cmd_op), 0, q->q_id, 1); 76 76 77 77 // buf register is marked as IOSQE_CQE_SKIP_SUCCESS
+2 -2
tools/testing/selftests/ublk/stripe.c
··· 142 142 ublk_io_alloc_sqes(t, sqe, s->nr + extra); 143 143 144 144 if (zc) { 145 - io_uring_prep_buf_register(sqe[0], 0, tag, q->q_id, io->buf_index); 145 + io_uring_prep_buf_register(sqe[0], q, tag, q->q_id, io->buf_index); 146 146 sqe[0]->flags |= IOSQE_CQE_SKIP_SUCCESS | IOSQE_IO_HARDLINK; 147 147 sqe[0]->user_data = build_user_data(tag, 148 148 ublk_cmd_op_nr(sqe[0]->cmd_op), 0, q->q_id, 1); ··· 168 168 if (zc) { 169 169 struct io_uring_sqe *unreg = sqe[s->nr + 1]; 170 170 171 - io_uring_prep_buf_unregister(unreg, 0, tag, q->q_id, io->buf_index); 171 + io_uring_prep_buf_unregister(unreg, q, tag, q->q_id, io->buf_index); 172 172 unreg->user_data = build_user_data( 173 173 tag, ublk_cmd_op_nr(unreg->cmd_op), 0, q->q_id, 1); 174 174 }
+3 -3
tools/testing/selftests/ublk/test_stress_04.sh
··· 28 28 _create_backfile 1 128M 29 29 _create_backfile 2 128M 30 30 31 - ublk_io_and_kill_daemon 8G -t null -q 4 -z & 32 - ublk_io_and_kill_daemon 256M -t loop -q 4 -z "${UBLK_BACKFILES[0]}" & 31 + ublk_io_and_kill_daemon 8G -t null -q 4 -z --no_ublk_fixed_fd & 32 + ublk_io_and_kill_daemon 256M -t loop -q 4 -z --no_ublk_fixed_fd "${UBLK_BACKFILES[0]}" & 33 33 ublk_io_and_kill_daemon 256M -t stripe -q 4 -z "${UBLK_BACKFILES[1]}" "${UBLK_BACKFILES[2]}" & 34 34 35 35 if _have_feature "AUTO_BUF_REG"; then 36 36 ublk_io_and_kill_daemon 8G -t null -q 4 --auto_zc & 37 37 ublk_io_and_kill_daemon 256M -t loop -q 4 --auto_zc "${UBLK_BACKFILES[0]}" & 38 - ublk_io_and_kill_daemon 256M -t stripe -q 4 --auto_zc "${UBLK_BACKFILES[1]}" "${UBLK_BACKFILES[2]}" & 38 + ublk_io_and_kill_daemon 256M -t stripe -q 4 --auto_zc --no_ublk_fixed_fd "${UBLK_BACKFILES[1]}" "${UBLK_BACKFILES[2]}" & 39 39 ublk_io_and_kill_daemon 8G -t null -q 4 -z --auto_zc --auto_zc_fallback & 40 40 fi 41 41