
Merge branches 'for-next/doc', 'for-next/sve', 'for-next/sysreg', 'for-next/gettimeofday', 'for-next/stacktrace', 'for-next/atomics', 'for-next/el1-exceptions', 'for-next/a510-erratum-2658417', 'for-next/defconfig', 'for-next/tpidr2_el0' and 'for-next/ftrace', remote-tracking branch 'arm64/for-next/perf' into for-next/core

* arm64/for-next/perf:
arm64: asm/perf_regs.h: Avoid C++-style comment in UAPI header
arm64/sve: Add Perf extensions documentation
perf: arm64: Add SVE vector granule register to user regs
MAINTAINERS: add maintainers for Alibaba's T-Head PMU driver
drivers/perf: add DDR Sub-System Driveway PMU driver for Yitian 710 SoC
docs: perf: Add description for Alibaba's T-Head PMU driver

* for-next/doc:
: Documentation/arm64 updates
arm64/sve: Document our actual ABI for clearing registers on syscall

* for-next/sve:
: SVE updates
arm64/sysreg: Add hwcap for SVE EBF16

* for-next/sysreg: (35 commits)
: arm64 system registers generation (more conversions)
arm64/sysreg: Fix a few missed conversions
arm64/sysreg: Convert ID_AA64AFRn_EL1 to automatic generation
arm64/sysreg: Convert ID_AA64DFR1_EL1 to automatic generation
arm64/sysreg: Convert ID_AA64DFR0_EL1 to automatic generation
arm64/sysreg: Use feature numbering for PMU and SPE revisions
arm64/sysreg: Add _EL1 into ID_AA64DFR0_EL1 definition names
arm64/sysreg: Align field names in ID_AA64DFR0_EL1 with architecture
arm64/sysreg: Add definition for ALLINT
arm64/sysreg: Convert SCXTNUM_EL1 to automatic generation
arm64/sysreg: Convert TPIDR_EL1 to automatic generation
arm64/sysreg: Convert ID_AA64PFR1_EL1 to automatic generation
arm64/sysreg: Convert ID_AA64PFR0_EL1 to automatic generation
arm64/sysreg: Convert ID_AA64MMFR2_EL1 to automatic generation
arm64/sysreg: Convert ID_AA64MMFR1_EL1 to automatic generation
arm64/sysreg: Convert ID_AA64MMFR0_EL1 to automatic generation
arm64/sysreg: Convert HCRX_EL2 to automatic generation
arm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 SME enumeration
arm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 BTI enumeration
arm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 fractional version fields
arm64/sysreg: Standardise naming for MTE feature enumeration
...

* for-next/gettimeofday:
: Use self-synchronising counter access in gettimeofday() (if FEAT_ECV)
arm64: vdso: use SYS_CNTVCTSS_EL0 for gettimeofday
arm64: alternative: patch alternatives in the vDSO
arm64: module: move find_section to header

* for-next/stacktrace:
: arm64 stacktrace cleanups and improvements
arm64: stacktrace: track hyp stacks in unwinder's address space
arm64: stacktrace: track all stack boundaries explicitly
arm64: stacktrace: remove stack type from fp translator
arm64: stacktrace: rework stack boundary discovery
arm64: stacktrace: add stackinfo_on_stack() helper
arm64: stacktrace: move SDEI stack helpers to stacktrace code
arm64: stacktrace: rename unwind_next_common() -> unwind_next_frame_record()
arm64: stacktrace: simplify unwind_next_common()
arm64: stacktrace: fix kerneldoc comments

* for-next/atomics:
: arm64 atomics improvements
arm64: atomic: always inline the assembly
arm64: atomics: remove LL/SC trampolines

* for-next/el1-exceptions:
: Improve the reporting of EL1 exceptions
arm64: rework BTI exception handling
arm64: rework FPAC exception handling
arm64: consistently pass ESR_ELx to die()
arm64: die(): pass 'err' as long
arm64: report EL1 UNDEFs better

* for-next/a510-erratum-2658417:
: Cortex-A510: 2658417: remove BF16 support due to incorrect result
arm64: errata: remove BF16 HWCAP due to incorrect result on Cortex-A510
arm64: cpufeature: Expose get_arm64_ftr_reg() outside cpufeature.c
arm64: cpufeature: Force HWCAP to be based on the sysreg visible to user-space

* for-next/defconfig:
: arm64 defconfig updates
arm64: defconfig: Add Coresight as module
arm64: Enable docker support in defconfig
arm64: defconfig: Enable memory hotplug and hotremove config
arm64: configs: Enable all PMUs provided by Arm

* for-next/tpidr2_el0:
: arm64 ptrace() support for TPIDR2_EL0
kselftest/arm64: Add coverage of TPIDR2_EL0 ptrace interface
arm64/ptrace: Support access to TPIDR2_EL0
arm64/ptrace: Document extension of NT_ARM_TLS to cover TPIDR2_EL0
kselftest/arm64: Add test coverage for NT_ARM_TLS

* for-next/ftrace:
: arm64 ftrace updates/fixes
arm64: ftrace: fix module PLTs with mcount
arm64: module: Remove unused plt_entry_is_initialized()
arm64: module: Make plt_equals_entry() static

+1651 -929
+3
Documentation/arm64/elf_hwcaps.rst
···
 HWCAP2_EBF16
     Functionality implied by ID_AA64ISAR1_EL1.BF16 == 0b0010.
 
+HWCAP2_SVE_EBF16
+    Functionality implied by ID_AA64ZFR0_EL1.BF16 == 0b0010.
+
 4. Unused AT_HWCAP bits
 -----------------------
 
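As a rough illustration of how user-space might consume the new hwcap, here is a minimal sketch using getauxval(); it assumes a toolchain whose <asm/hwcap.h> already carries the HWCAP2_SVE_EBF16 definition (otherwise the bit value would have to be taken from the kernel's uapi header).

	#include <stdio.h>
	#include <sys/auxv.h>
	#include <asm/hwcap.h>	/* assumed to provide HWCAP2_SVE_EBF16 */

	int main(void)
	{
		/* AT_HWCAP2 carries the second set of arm64 hwcap bits */
		unsigned long hwcap2 = getauxval(AT_HWCAP2);

		if (hwcap2 & HWCAP2_SVE_EBF16)
			printf("SVE EBF16 supported\n");
		else
			printf("SVE EBF16 not reported\n");

		return 0;
	}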
+2
Documentation/arm64/silicon-errata.rst
···
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A510     | #2441009        | ARM64_ERRATUM_2441009       |
 +----------------+-----------------+-----------------+-----------------------------+
+| ARM            | Cortex-A510     | #2658417        | ARM64_ERRATUM_2658417       |
++----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A710     | #2119858        | ARM64_ERRATUM_2119858       |
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A710     | #2054223        | ARM64_ERRATUM_2054223       |
+3
Documentation/arm64/sme.rst
···
 been read if a PTRACE_GETREGSET of NT_ARM_ZA were executed for each thread
 when the coredump was generated.
 
+* The NT_ARM_TLS note will be extended to two registers, the second register
+  will contain TPIDR2_EL0 on systems that support SME and will be read as
+  zero with writes ignored otherwise.
 
 9. System runtime configuration
 --------------------------------
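For illustration only, a minimal sketch of how a debugger might read the extended NT_ARM_TLS regset described above; attach/stop handling is omitted, and the two-element layout (TPIDR_EL0 followed by TPIDR2_EL0) follows the documentation change, with the second element reading as zero on systems without SME.

	#include <elf.h>	/* NT_ARM_TLS */
	#include <stdint.h>
	#include <sys/ptrace.h>
	#include <sys/types.h>
	#include <sys/uio.h>

	/* Read TPIDR_EL0 and (on SME systems) TPIDR2_EL0 from a stopped tracee. */
	static long read_tls_regs(pid_t pid, uint64_t tls[2])
	{
		struct iovec iov = {
			.iov_base = tls,
			.iov_len  = 2 * sizeof(uint64_t),
		};

		/*
		 * Older kernels may only return the first register; the kernel
		 * updates iov.iov_len to reflect how much data was provided.
		 */
		return ptrace(PTRACE_GETREGSET, pid, NT_ARM_TLS, &iov);
	}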
+1 -1
Documentation/arm64/sve.rst
···
 
 * On syscall, V0..V31 are preserved (as without SVE). Thus, bits [127:0] of
   Z0..Z31 are preserved. All other bits of Z0..Z31, and all of P0..P15 and FFR
-  become unspecified on return from a syscall.
+  become zero on return from a syscall.
 
 * The SVE registers are not used to pass arguments to or receive results from
   any syscall.
+13
arch/arm64/Kconfig
···
 
 	  If unsure, say Y.
 
+config ARM64_ERRATUM_2658417
+	bool "Cortex-A510: 2658417: remove BF16 support due to incorrect result"
+	default y
+	help
+	  This option adds the workaround for ARM Cortex-A510 erratum 2658417.
+	  Affected Cortex-A510 (r0p0 to r1p1) may produce the wrong result for
+	  BFMMLA or VMMLA instructions in rare circumstances when a pair of
+	  A510 CPUs are using shared neon hardware. As the sharing is not
+	  discoverable by the kernel, hide the BF16 HWCAP to indicate that
+	  user-space should not be using these instructions.
+
+	  If unsure, say Y.
+
 config ARM64_ERRATUM_2119858
 	bool "Cortex-A710/X2: 2119858: workaround TRBE overwriting trace data in FILL mode"
 	default y
+23
arch/arm64/configs/defconfig
··· 18 18 CONFIG_MEMCG=y 19 19 CONFIG_BLK_CGROUP=y 20 20 CONFIG_CGROUP_PIDS=y 21 + CONFIG_CGROUP_FREEZER=y 21 22 CONFIG_CGROUP_HUGETLB=y 22 23 CONFIG_CPUSETS=y 23 24 CONFIG_CGROUP_DEVICE=y ··· 103 102 CONFIG_ARM_TEGRA186_CPUFREQ=y 104 103 CONFIG_QORIQ_CPUFREQ=y 105 104 CONFIG_ACPI=y 105 + CONFIG_ACPI_HOTPLUG_MEMORY=y 106 + CONFIG_ACPI_HMAT=y 106 107 CONFIG_ACPI_APEI=y 107 108 CONFIG_ACPI_APEI_GHES=y 108 109 CONFIG_ACPI_APEI_PCIEAER=y ··· 129 126 CONFIG_MODULE_UNLOAD=y 130 127 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set 131 128 # CONFIG_COMPAT_BRK is not set 129 + CONFIG_MEMORY_HOTPLUG=y 130 + CONFIG_MEMORY_HOTREMOVE=y 132 131 CONFIG_KSM=y 133 132 CONFIG_MEMORY_FAILURE=y 134 133 CONFIG_TRANSPARENT_HUGEPAGE=y ··· 144 139 CONFIG_IP_PNP_BOOTP=y 145 140 CONFIG_IPV6=m 146 141 CONFIG_NETFILTER=y 142 + CONFIG_BRIDGE_NETFILTER=m 147 143 CONFIG_NF_CONNTRACK=m 148 144 CONFIG_NF_CONNTRACK_EVENTS=y 145 + CONFIG_NETFILTER_XT_MARK=m 149 146 CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m 150 147 CONFIG_NETFILTER_XT_TARGET_LOG=m 151 148 CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m 152 149 CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m 150 + CONFIG_NETFILTER_XT_MATCH_IPVS=m 151 + CONFIG_IP_VS=m 153 152 CONFIG_IP_NF_IPTABLES=m 154 153 CONFIG_IP_NF_FILTER=m 155 154 CONFIG_IP_NF_TARGET_REJECT=m ··· 1238 1229 CONFIG_PHY_TEGRA_XUSB=y 1239 1230 CONFIG_PHY_AM654_SERDES=m 1240 1231 CONFIG_PHY_J721E_WIZ=m 1232 + CONFIG_ARM_CCI_PMU=m 1233 + CONFIG_ARM_CCN=m 1234 + CONFIG_ARM_CMN=m 1241 1235 CONFIG_ARM_SMMU_V3_PMU=m 1236 + CONFIG_ARM_DSU_PMU=m 1242 1237 CONFIG_FSL_IMX8_DDR_PMU=m 1238 + CONFIG_ARM_SPE_PMU=m 1239 + CONFIG_ARM_DMC620_PMU=m 1243 1240 CONFIG_QCOM_L2_PMU=y 1244 1241 CONFIG_QCOM_L3_PMU=y 1245 1242 CONFIG_HISI_PMU=y ··· 1340 1325 # CONFIG_SCHED_DEBUG is not set 1341 1326 # CONFIG_DEBUG_PREEMPT is not set 1342 1327 # CONFIG_FTRACE is not set 1328 + CONFIG_CORESIGHT=m 1329 + CONFIG_CORESIGHT_LINK_AND_SINK_TMC=m 1330 + CONFIG_CORESIGHT_CATU=m 1331 + CONFIG_CORESIGHT_SINK_TPIU=m 1332 + CONFIG_CORESIGHT_SINK_ETBV10=m 1333 + CONFIG_CORESIGHT_STM=m 1334 + CONFIG_CORESIGHT_CPU_DEBUG=m 1335 + CONFIG_CORESIGHT_CTI=m 1343 1336 CONFIG_MEMTEST=y
+5 -5
arch/arm64/include/asm/assembler.h
··· 384 384 .macro tcr_compute_pa_size, tcr, pos, tmp0, tmp1 385 385 mrs \tmp0, ID_AA64MMFR0_EL1 386 386 // Narrow PARange to fit the PS field in TCR_ELx 387 - ubfx \tmp0, \tmp0, #ID_AA64MMFR0_PARANGE_SHIFT, #3 388 - mov \tmp1, #ID_AA64MMFR0_PARANGE_MAX 387 + ubfx \tmp0, \tmp0, #ID_AA64MMFR0_EL1_PARANGE_SHIFT, #3 388 + mov \tmp1, #ID_AA64MMFR0_EL1_PARANGE_MAX 389 389 cmp \tmp0, \tmp1 390 390 csel \tmp0, \tmp1, \tmp0, hi 391 391 bfi \tcr, \tmp0, \pos, #3 ··· 512 512 */ 513 513 .macro reset_pmuserenr_el0, tmpreg 514 514 mrs \tmpreg, id_aa64dfr0_el1 515 - sbfx \tmpreg, \tmpreg, #ID_AA64DFR0_PMUVER_SHIFT, #4 515 + sbfx \tmpreg, \tmpreg, #ID_AA64DFR0_EL1_PMUVer_SHIFT, #4 516 516 cmp \tmpreg, #1 // Skip if no PMU present 517 517 b.lt 9000f 518 518 msr pmuserenr_el0, xzr // Disable PMU access from EL0 ··· 524 524 */ 525 525 .macro reset_amuserenr_el0, tmpreg 526 526 mrs \tmpreg, id_aa64pfr0_el1 // Check ID_AA64PFR0_EL1 527 - ubfx \tmpreg, \tmpreg, #ID_AA64PFR0_AMU_SHIFT, #4 527 + ubfx \tmpreg, \tmpreg, #ID_AA64PFR0_EL1_AMU_SHIFT, #4 528 528 cbz \tmpreg, .Lskip_\@ // Skip if no AMU present 529 529 msr_s SYS_AMUSERENR_EL0, xzr // Disable AMU access from EL0 530 530 .Lskip_\@: ··· 612 612 .macro offset_ttbr1, ttbr, tmp 613 613 #ifdef CONFIG_ARM64_VA_BITS_52 614 614 mrs_s \tmp, SYS_ID_AA64MMFR2_EL1 615 - and \tmp, \tmp, #(0xf << ID_AA64MMFR2_LVA_SHIFT) 615 + and \tmp, \tmp, #(0xf << ID_AA64MMFR2_EL1_VARange_SHIFT) 616 616 cbnz \tmp, .Lskipoffs_\@ 617 617 orr \ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET 618 618 .Lskipoffs_\@ :
+18 -40
arch/arm64/include/asm/atomic_ll_sc.h
··· 12 12 13 13 #include <linux/stringify.h> 14 14 15 - #ifdef CONFIG_ARM64_LSE_ATOMICS 16 - #define __LL_SC_FALLBACK(asm_ops) \ 17 - " b 3f\n" \ 18 - " .subsection 1\n" \ 19 - "3:\n" \ 20 - asm_ops "\n" \ 21 - " b 4f\n" \ 22 - " .previous\n" \ 23 - "4:\n" 24 - #else 25 - #define __LL_SC_FALLBACK(asm_ops) asm_ops 26 - #endif 27 - 28 15 #ifndef CONFIG_CC_HAS_K_CONSTRAINT 29 16 #define K 30 17 #endif ··· 23 36 */ 24 37 25 38 #define ATOMIC_OP(op, asm_op, constraint) \ 26 - static inline void \ 39 + static __always_inline void \ 27 40 __ll_sc_atomic_##op(int i, atomic_t *v) \ 28 41 { \ 29 42 unsigned long tmp; \ 30 43 int result; \ 31 44 \ 32 45 asm volatile("// atomic_" #op "\n" \ 33 - __LL_SC_FALLBACK( \ 34 46 " prfm pstl1strm, %2\n" \ 35 47 "1: ldxr %w0, %2\n" \ 36 48 " " #asm_op " %w0, %w0, %w3\n" \ 37 49 " stxr %w1, %w0, %2\n" \ 38 - " cbnz %w1, 1b\n") \ 50 + " cbnz %w1, 1b\n" \ 39 51 : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ 40 52 : __stringify(constraint) "r" (i)); \ 41 53 } 42 54 43 55 #define ATOMIC_OP_RETURN(name, mb, acq, rel, cl, op, asm_op, constraint)\ 44 - static inline int \ 56 + static __always_inline int \ 45 57 __ll_sc_atomic_##op##_return##name(int i, atomic_t *v) \ 46 58 { \ 47 59 unsigned long tmp; \ 48 60 int result; \ 49 61 \ 50 62 asm volatile("// atomic_" #op "_return" #name "\n" \ 51 - __LL_SC_FALLBACK( \ 52 63 " prfm pstl1strm, %2\n" \ 53 64 "1: ld" #acq "xr %w0, %2\n" \ 54 65 " " #asm_op " %w0, %w0, %w3\n" \ 55 66 " st" #rel "xr %w1, %w0, %2\n" \ 56 67 " cbnz %w1, 1b\n" \ 57 - " " #mb ) \ 68 + " " #mb \ 58 69 : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ 59 70 : __stringify(constraint) "r" (i) \ 60 71 : cl); \ ··· 61 76 } 62 77 63 78 #define ATOMIC_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint) \ 64 - static inline int \ 79 + static __always_inline int \ 65 80 __ll_sc_atomic_fetch_##op##name(int i, atomic_t *v) \ 66 81 { \ 67 82 unsigned long tmp; \ 68 83 int val, result; \ 69 84 \ 70 85 asm volatile("// atomic_fetch_" #op #name "\n" \ 71 - __LL_SC_FALLBACK( \ 72 86 " prfm pstl1strm, %3\n" \ 73 87 "1: ld" #acq "xr %w0, %3\n" \ 74 88 " " #asm_op " %w1, %w0, %w4\n" \ 75 89 " st" #rel "xr %w2, %w1, %3\n" \ 76 90 " cbnz %w2, 1b\n" \ 77 - " " #mb ) \ 91 + " " #mb \ 78 92 : "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \ 79 93 : __stringify(constraint) "r" (i) \ 80 94 : cl); \ ··· 119 135 #undef ATOMIC_OP 120 136 121 137 #define ATOMIC64_OP(op, asm_op, constraint) \ 122 - static inline void \ 138 + static __always_inline void \ 123 139 __ll_sc_atomic64_##op(s64 i, atomic64_t *v) \ 124 140 { \ 125 141 s64 result; \ 126 142 unsigned long tmp; \ 127 143 \ 128 144 asm volatile("// atomic64_" #op "\n" \ 129 - __LL_SC_FALLBACK( \ 130 145 " prfm pstl1strm, %2\n" \ 131 146 "1: ldxr %0, %2\n" \ 132 147 " " #asm_op " %0, %0, %3\n" \ 133 148 " stxr %w1, %0, %2\n" \ 134 - " cbnz %w1, 1b") \ 149 + " cbnz %w1, 1b" \ 135 150 : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ 136 151 : __stringify(constraint) "r" (i)); \ 137 152 } 138 153 139 154 #define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op, constraint)\ 140 - static inline long \ 155 + static __always_inline long \ 141 156 __ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v) \ 142 157 { \ 143 158 s64 result; \ 144 159 unsigned long tmp; \ 145 160 \ 146 161 asm volatile("// atomic64_" #op "_return" #name "\n" \ 147 - __LL_SC_FALLBACK( \ 148 162 " prfm pstl1strm, %2\n" \ 149 163 "1: ld" #acq "xr %0, %2\n" \ 150 164 " " #asm_op " %0, %0, %3\n" \ 151 165 " st" #rel "xr %w1, %0, 
%2\n" \ 152 166 " cbnz %w1, 1b\n" \ 153 - " " #mb ) \ 167 + " " #mb \ 154 168 : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ 155 169 : __stringify(constraint) "r" (i) \ 156 170 : cl); \ ··· 157 175 } 158 176 159 177 #define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint)\ 160 - static inline long \ 178 + static __always_inline long \ 161 179 __ll_sc_atomic64_fetch_##op##name(s64 i, atomic64_t *v) \ 162 180 { \ 163 181 s64 result, val; \ 164 182 unsigned long tmp; \ 165 183 \ 166 184 asm volatile("// atomic64_fetch_" #op #name "\n" \ 167 - __LL_SC_FALLBACK( \ 168 185 " prfm pstl1strm, %3\n" \ 169 186 "1: ld" #acq "xr %0, %3\n" \ 170 187 " " #asm_op " %1, %0, %4\n" \ 171 188 " st" #rel "xr %w2, %1, %3\n" \ 172 189 " cbnz %w2, 1b\n" \ 173 - " " #mb ) \ 190 + " " #mb \ 174 191 : "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \ 175 192 : __stringify(constraint) "r" (i) \ 176 193 : cl); \ ··· 214 233 #undef ATOMIC64_OP_RETURN 215 234 #undef ATOMIC64_OP 216 235 217 - static inline s64 236 + static __always_inline s64 218 237 __ll_sc_atomic64_dec_if_positive(atomic64_t *v) 219 238 { 220 239 s64 result; 221 240 unsigned long tmp; 222 241 223 242 asm volatile("// atomic64_dec_if_positive\n" 224 - __LL_SC_FALLBACK( 225 243 " prfm pstl1strm, %2\n" 226 244 "1: ldxr %0, %2\n" 227 245 " subs %0, %0, #1\n" ··· 228 248 " stlxr %w1, %0, %2\n" 229 249 " cbnz %w1, 1b\n" 230 250 " dmb ish\n" 231 - "2:") 251 + "2:" 232 252 : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) 233 253 : 234 254 : "cc", "memory"); ··· 237 257 } 238 258 239 259 #define __CMPXCHG_CASE(w, sfx, name, sz, mb, acq, rel, cl, constraint) \ 240 - static inline u##sz \ 260 + static __always_inline u##sz \ 241 261 __ll_sc__cmpxchg_case_##name##sz(volatile void *ptr, \ 242 262 unsigned long old, \ 243 263 u##sz new) \ ··· 254 274 old = (u##sz)old; \ 255 275 \ 256 276 asm volatile( \ 257 - __LL_SC_FALLBACK( \ 258 277 " prfm pstl1strm, %[v]\n" \ 259 278 "1: ld" #acq "xr" #sfx "\t%" #w "[oldval], %[v]\n" \ 260 279 " eor %" #w "[tmp], %" #w "[oldval], %" #w "[old]\n" \ ··· 261 282 " st" #rel "xr" #sfx "\t%w[tmp], %" #w "[new], %[v]\n" \ 262 283 " cbnz %w[tmp], 1b\n" \ 263 284 " " #mb "\n" \ 264 - "2:") \ 285 + "2:" \ 265 286 : [tmp] "=&r" (tmp), [oldval] "=&r" (oldval), \ 266 287 [v] "+Q" (*(u##sz *)ptr) \ 267 288 : [old] __stringify(constraint) "r" (old), [new] "r" (new) \ ··· 295 316 #undef __CMPXCHG_CASE 296 317 297 318 #define __CMPXCHG_DBL(name, mb, rel, cl) \ 298 - static inline long \ 319 + static __always_inline long \ 299 320 __ll_sc__cmpxchg_double##name(unsigned long old1, \ 300 321 unsigned long old2, \ 301 322 unsigned long new1, \ ··· 305 326 unsigned long tmp, ret; \ 306 327 \ 307 328 asm volatile("// __cmpxchg_double" #name "\n" \ 308 - __LL_SC_FALLBACK( \ 309 329 " prfm pstl1strm, %2\n" \ 310 330 "1: ldxp %0, %1, %2\n" \ 311 331 " eor %0, %0, %3\n" \ ··· 314 336 " st" #rel "xp %w0, %5, %6, %2\n" \ 315 337 " cbnz %w0, 1b\n" \ 316 338 " " #mb "\n" \ 317 - "2:") \ 339 + "2:" \ 318 340 : "=&r" (tmp), "=&r" (ret), "+Q" (*(unsigned long *)ptr) \ 319 341 : "r" (old1), "r" (old2), "r" (new1), "r" (new2) \ 320 342 : cl); \
+29 -17
arch/arm64/include/asm/atomic_lse.h
··· 11 11 #define __ASM_ATOMIC_LSE_H 12 12 13 13 #define ATOMIC_OP(op, asm_op) \ 14 - static inline void __lse_atomic_##op(int i, atomic_t *v) \ 14 + static __always_inline void \ 15 + __lse_atomic_##op(int i, atomic_t *v) \ 15 16 { \ 16 17 asm volatile( \ 17 18 __LSE_PREAMBLE \ ··· 26 25 ATOMIC_OP(xor, steor) 27 26 ATOMIC_OP(add, stadd) 28 27 29 - static inline void __lse_atomic_sub(int i, atomic_t *v) 28 + static __always_inline void __lse_atomic_sub(int i, atomic_t *v) 30 29 { 31 30 __lse_atomic_add(-i, v); 32 31 } ··· 34 33 #undef ATOMIC_OP 35 34 36 35 #define ATOMIC_FETCH_OP(name, mb, op, asm_op, cl...) \ 37 - static inline int __lse_atomic_fetch_##op##name(int i, atomic_t *v) \ 36 + static __always_inline int \ 37 + __lse_atomic_fetch_##op##name(int i, atomic_t *v) \ 38 38 { \ 39 39 int old; \ 40 40 \ ··· 65 63 #undef ATOMIC_FETCH_OPS 66 64 67 65 #define ATOMIC_FETCH_OP_SUB(name) \ 68 - static inline int __lse_atomic_fetch_sub##name(int i, atomic_t *v) \ 66 + static __always_inline int \ 67 + __lse_atomic_fetch_sub##name(int i, atomic_t *v) \ 69 68 { \ 70 69 return __lse_atomic_fetch_add##name(-i, v); \ 71 70 } ··· 79 76 #undef ATOMIC_FETCH_OP_SUB 80 77 81 78 #define ATOMIC_OP_ADD_SUB_RETURN(name) \ 82 - static inline int __lse_atomic_add_return##name(int i, atomic_t *v) \ 79 + static __always_inline int \ 80 + __lse_atomic_add_return##name(int i, atomic_t *v) \ 83 81 { \ 84 82 return __lse_atomic_fetch_add##name(i, v) + i; \ 85 83 } \ 86 84 \ 87 - static inline int __lse_atomic_sub_return##name(int i, atomic_t *v) \ 85 + static __always_inline int \ 86 + __lse_atomic_sub_return##name(int i, atomic_t *v) \ 88 87 { \ 89 88 return __lse_atomic_fetch_sub(i, v) - i; \ 90 89 } ··· 98 93 99 94 #undef ATOMIC_OP_ADD_SUB_RETURN 100 95 101 - static inline void __lse_atomic_and(int i, atomic_t *v) 96 + static __always_inline void __lse_atomic_and(int i, atomic_t *v) 102 97 { 103 98 return __lse_atomic_andnot(~i, v); 104 99 } 105 100 106 101 #define ATOMIC_FETCH_OP_AND(name, mb, cl...) \ 107 - static inline int __lse_atomic_fetch_and##name(int i, atomic_t *v) \ 102 + static __always_inline int \ 103 + __lse_atomic_fetch_and##name(int i, atomic_t *v) \ 108 104 { \ 109 105 return __lse_atomic_fetch_andnot##name(~i, v); \ 110 106 } ··· 118 112 #undef ATOMIC_FETCH_OP_AND 119 113 120 114 #define ATOMIC64_OP(op, asm_op) \ 121 - static inline void __lse_atomic64_##op(s64 i, atomic64_t *v) \ 115 + static __always_inline void \ 116 + __lse_atomic64_##op(s64 i, atomic64_t *v) \ 122 117 { \ 123 118 asm volatile( \ 124 119 __LSE_PREAMBLE \ ··· 133 126 ATOMIC64_OP(xor, steor) 134 127 ATOMIC64_OP(add, stadd) 135 128 136 - static inline void __lse_atomic64_sub(s64 i, atomic64_t *v) 129 + static __always_inline void __lse_atomic64_sub(s64 i, atomic64_t *v) 137 130 { 138 131 __lse_atomic64_add(-i, v); 139 132 } ··· 141 134 #undef ATOMIC64_OP 142 135 143 136 #define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...) 
\ 144 - static inline long __lse_atomic64_fetch_##op##name(s64 i, atomic64_t *v)\ 137 + static __always_inline long \ 138 + __lse_atomic64_fetch_##op##name(s64 i, atomic64_t *v) \ 145 139 { \ 146 140 s64 old; \ 147 141 \ ··· 172 164 #undef ATOMIC64_FETCH_OPS 173 165 174 166 #define ATOMIC64_FETCH_OP_SUB(name) \ 175 - static inline long __lse_atomic64_fetch_sub##name(s64 i, atomic64_t *v) \ 167 + static __always_inline long \ 168 + __lse_atomic64_fetch_sub##name(s64 i, atomic64_t *v) \ 176 169 { \ 177 170 return __lse_atomic64_fetch_add##name(-i, v); \ 178 171 } ··· 186 177 #undef ATOMIC64_FETCH_OP_SUB 187 178 188 179 #define ATOMIC64_OP_ADD_SUB_RETURN(name) \ 189 - static inline long __lse_atomic64_add_return##name(s64 i, atomic64_t *v)\ 180 + static __always_inline long \ 181 + __lse_atomic64_add_return##name(s64 i, atomic64_t *v) \ 190 182 { \ 191 183 return __lse_atomic64_fetch_add##name(i, v) + i; \ 192 184 } \ 193 185 \ 194 - static inline long __lse_atomic64_sub_return##name(s64 i, atomic64_t *v)\ 186 + static __always_inline long \ 187 + __lse_atomic64_sub_return##name(s64 i, atomic64_t *v) \ 195 188 { \ 196 189 return __lse_atomic64_fetch_sub##name(i, v) - i; \ 197 190 } ··· 205 194 206 195 #undef ATOMIC64_OP_ADD_SUB_RETURN 207 196 208 - static inline void __lse_atomic64_and(s64 i, atomic64_t *v) 197 + static __always_inline void __lse_atomic64_and(s64 i, atomic64_t *v) 209 198 { 210 199 return __lse_atomic64_andnot(~i, v); 211 200 } 212 201 213 202 #define ATOMIC64_FETCH_OP_AND(name, mb, cl...) \ 214 - static inline long __lse_atomic64_fetch_and##name(s64 i, atomic64_t *v) \ 203 + static __always_inline long \ 204 + __lse_atomic64_fetch_and##name(s64 i, atomic64_t *v) \ 215 205 { \ 216 206 return __lse_atomic64_fetch_andnot##name(~i, v); \ 217 207 } ··· 224 212 225 213 #undef ATOMIC64_FETCH_OP_AND 226 214 227 - static inline s64 __lse_atomic64_dec_if_positive(atomic64_t *v) 215 + static __always_inline s64 __lse_atomic64_dec_if_positive(atomic64_t *v) 228 216 { 229 217 unsigned long tmp; 230 218
-4
arch/arm64/include/asm/cache.h
··· 45 45 #define arch_slab_minalign() arch_slab_minalign() 46 46 #endif 47 47 48 - #define CTR_CACHE_MINLINE_MASK \ 49 - (0xf << CTR_EL0_DMINLINE_SHIFT | \ 50 - CTR_EL0_IMINLINE_MASK << CTR_EL0_IMINLINE_SHIFT) 51 - 52 48 #define CTR_L1IP(ctr) SYS_FIELD_GET(CTR_EL0, L1Ip, ctr) 53 49 54 50 #define ICACHEF_ALIASING 0
+35 -33
arch/arm64/include/asm/cpufeature.h
··· 553 553 u64 mask = GENMASK_ULL(field + 3, field); 554 554 555 555 /* Treat IMPLEMENTATION DEFINED functionality as unimplemented */ 556 - if (val == ID_AA64DFR0_PMUVER_IMP_DEF) 556 + if (val == ID_AA64DFR0_EL1_PMUVer_IMP_DEF) 557 557 val = 0; 558 558 559 559 if (val > cap) { ··· 597 597 598 598 static inline bool id_aa64mmfr0_mixed_endian_el0(u64 mmfr0) 599 599 { 600 - return cpuid_feature_extract_unsigned_field(mmfr0, ID_AA64MMFR0_BIGENDEL_SHIFT) == 0x1 || 601 - cpuid_feature_extract_unsigned_field(mmfr0, ID_AA64MMFR0_BIGENDEL0_SHIFT) == 0x1; 600 + return cpuid_feature_extract_unsigned_field(mmfr0, ID_AA64MMFR0_EL1_BIGEND_SHIFT) == 0x1 || 601 + cpuid_feature_extract_unsigned_field(mmfr0, ID_AA64MMFR0_EL1_BIGENDEL0_SHIFT) == 0x1; 602 602 } 603 603 604 604 static inline bool id_aa64pfr0_32bit_el1(u64 pfr0) 605 605 { 606 - u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_SHIFT); 606 + u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_EL1_SHIFT); 607 607 608 - return val == ID_AA64PFR0_ELx_32BIT_64BIT; 608 + return val == ID_AA64PFR0_EL1_ELx_32BIT_64BIT; 609 609 } 610 610 611 611 static inline bool id_aa64pfr0_32bit_el0(u64 pfr0) 612 612 { 613 - u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL0_SHIFT); 613 + u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_EL0_SHIFT); 614 614 615 - return val == ID_AA64PFR0_ELx_32BIT_64BIT; 615 + return val == ID_AA64PFR0_EL1_ELx_32BIT_64BIT; 616 616 } 617 617 618 618 static inline bool id_aa64pfr0_sve(u64 pfr0) 619 619 { 620 - u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_SVE_SHIFT); 620 + u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_SVE_SHIFT); 621 621 622 622 return val > 0; 623 623 } 624 624 625 625 static inline bool id_aa64pfr1_sme(u64 pfr1) 626 626 { 627 - u32 val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_SME_SHIFT); 627 + u32 val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_EL1_SME_SHIFT); 628 628 629 629 return val > 0; 630 630 } 631 631 632 632 static inline bool id_aa64pfr1_mte(u64 pfr1) 633 633 { 634 - u32 val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_MTE_SHIFT); 634 + u32 val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_EL1_MTE_SHIFT); 635 635 636 - return val >= ID_AA64PFR1_MTE; 636 + return val >= ID_AA64PFR1_EL1_MTE_MTE2; 637 637 } 638 638 639 639 void __init setup_cpu_features(void); ··· 659 659 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); 660 660 661 661 csv2_val = cpuid_feature_extract_unsigned_field(pfr0, 662 - ID_AA64PFR0_CSV2_SHIFT); 662 + ID_AA64PFR0_EL1_CSV2_SHIFT); 663 663 return csv2_val == 3; 664 664 } 665 665 ··· 694 694 695 695 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); 696 696 val = cpuid_feature_extract_unsigned_field(mmfr0, 697 - ID_AA64MMFR0_TGRAN4_SHIFT); 697 + ID_AA64MMFR0_EL1_TGRAN4_SHIFT); 698 698 699 - return (val >= ID_AA64MMFR0_TGRAN4_SUPPORTED_MIN) && 700 - (val <= ID_AA64MMFR0_TGRAN4_SUPPORTED_MAX); 699 + return (val >= ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN) && 700 + (val <= ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MAX); 701 701 } 702 702 703 703 static inline bool system_supports_64kb_granule(void) ··· 707 707 708 708 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); 709 709 val = cpuid_feature_extract_unsigned_field(mmfr0, 710 - ID_AA64MMFR0_TGRAN64_SHIFT); 710 + ID_AA64MMFR0_EL1_TGRAN64_SHIFT); 711 711 712 - return (val >= ID_AA64MMFR0_TGRAN64_SUPPORTED_MIN) && 713 - (val <= ID_AA64MMFR0_TGRAN64_SUPPORTED_MAX); 712 + return 
(val >= ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MIN) && 713 + (val <= ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MAX); 714 714 } 715 715 716 716 static inline bool system_supports_16kb_granule(void) ··· 720 720 721 721 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); 722 722 val = cpuid_feature_extract_unsigned_field(mmfr0, 723 - ID_AA64MMFR0_TGRAN16_SHIFT); 723 + ID_AA64MMFR0_EL1_TGRAN16_SHIFT); 724 724 725 - return (val >= ID_AA64MMFR0_TGRAN16_SUPPORTED_MIN) && 726 - (val <= ID_AA64MMFR0_TGRAN16_SUPPORTED_MAX); 725 + return (val >= ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MIN) && 726 + (val <= ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MAX); 727 727 } 728 728 729 729 static inline bool system_supports_mixed_endian_el0(void) ··· 738 738 739 739 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); 740 740 val = cpuid_feature_extract_unsigned_field(mmfr0, 741 - ID_AA64MMFR0_BIGENDEL_SHIFT); 741 + ID_AA64MMFR0_EL1_BIGEND_SHIFT); 742 742 743 743 return val == 0x1; 744 744 } ··· 840 840 static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange) 841 841 { 842 842 switch (parange) { 843 - case ID_AA64MMFR0_PARANGE_32: return 32; 844 - case ID_AA64MMFR0_PARANGE_36: return 36; 845 - case ID_AA64MMFR0_PARANGE_40: return 40; 846 - case ID_AA64MMFR0_PARANGE_42: return 42; 847 - case ID_AA64MMFR0_PARANGE_44: return 44; 848 - case ID_AA64MMFR0_PARANGE_48: return 48; 849 - case ID_AA64MMFR0_PARANGE_52: return 52; 843 + case ID_AA64MMFR0_EL1_PARANGE_32: return 32; 844 + case ID_AA64MMFR0_EL1_PARANGE_36: return 36; 845 + case ID_AA64MMFR0_EL1_PARANGE_40: return 40; 846 + case ID_AA64MMFR0_EL1_PARANGE_42: return 42; 847 + case ID_AA64MMFR0_EL1_PARANGE_44: return 44; 848 + case ID_AA64MMFR0_EL1_PARANGE_48: return 48; 849 + case ID_AA64MMFR0_EL1_PARANGE_52: return 52; 850 850 /* 851 851 * A future PE could use a value unknown to the kernel. 852 852 * However, by the "D10.1.4 Principles of the ID scheme ··· 868 868 869 869 mmfr1 = read_cpuid(ID_AA64MMFR1_EL1); 870 870 return cpuid_feature_extract_unsigned_field(mmfr1, 871 - ID_AA64MMFR1_HADBS_SHIFT); 871 + ID_AA64MMFR1_EL1_HAFDBS_SHIFT); 872 872 } 873 873 874 874 static inline bool cpu_has_pan(void) 875 875 { 876 876 u64 mmfr1 = read_cpuid(ID_AA64MMFR1_EL1); 877 877 return cpuid_feature_extract_unsigned_field(mmfr1, 878 - ID_AA64MMFR1_PAN_SHIFT); 878 + ID_AA64MMFR1_EL1_PAN_SHIFT); 879 879 } 880 880 881 881 #ifdef CONFIG_ARM64_AMU_EXTN ··· 896 896 int vmid_bits; 897 897 898 898 vmid_bits = cpuid_feature_extract_unsigned_field(mmfr1, 899 - ID_AA64MMFR1_VMIDBITS_SHIFT); 900 - if (vmid_bits == ID_AA64MMFR1_VMIDBITS_16) 899 + ID_AA64MMFR1_EL1_VMIDBits_SHIFT); 900 + if (vmid_bits == ID_AA64MMFR1_EL1_VMIDBits_16) 901 901 return 16; 902 902 903 903 /* ··· 906 906 */ 907 907 return 8; 908 908 } 909 + 910 + struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id); 909 911 910 912 extern struct arm64_ftr_override id_aa64mmfr1_override; 911 913 extern struct arm64_ftr_override id_aa64pfr0_override;
+9 -9
arch/arm64/include/asm/el2_setup.h
··· 40 40 41 41 .macro __init_el2_debug 42 42 mrs x1, id_aa64dfr0_el1 43 - sbfx x0, x1, #ID_AA64DFR0_PMUVER_SHIFT, #4 43 + sbfx x0, x1, #ID_AA64DFR0_EL1_PMUVer_SHIFT, #4 44 44 cmp x0, #1 45 45 b.lt .Lskip_pmu_\@ // Skip if no PMU present 46 46 mrs x0, pmcr_el0 // Disable debug access traps ··· 49 49 csel x2, xzr, x0, lt // all PMU counters from EL1 50 50 51 51 /* Statistical profiling */ 52 - ubfx x0, x1, #ID_AA64DFR0_PMSVER_SHIFT, #4 52 + ubfx x0, x1, #ID_AA64DFR0_EL1_PMSVer_SHIFT, #4 53 53 cbz x0, .Lskip_spe_\@ // Skip if SPE not present 54 54 55 55 mrs_s x0, SYS_PMBIDR_EL1 // If SPE available at EL2, ··· 65 65 66 66 .Lskip_spe_\@: 67 67 /* Trace buffer */ 68 - ubfx x0, x1, #ID_AA64DFR0_TRBE_SHIFT, #4 68 + ubfx x0, x1, #ID_AA64DFR0_EL1_TraceBuffer_SHIFT, #4 69 69 cbz x0, .Lskip_trace_\@ // Skip if TraceBuffer is not present 70 70 71 71 mrs_s x0, SYS_TRBIDR_EL1 ··· 83 83 /* LORegions */ 84 84 .macro __init_el2_lor 85 85 mrs x1, id_aa64mmfr1_el1 86 - ubfx x0, x1, #ID_AA64MMFR1_LOR_SHIFT, 4 86 + ubfx x0, x1, #ID_AA64MMFR1_EL1_LO_SHIFT, 4 87 87 cbz x0, .Lskip_lor_\@ 88 88 msr_s SYS_LORC_EL1, xzr 89 89 .Lskip_lor_\@: ··· 97 97 /* GICv3 system register access */ 98 98 .macro __init_el2_gicv3 99 99 mrs x0, id_aa64pfr0_el1 100 - ubfx x0, x0, #ID_AA64PFR0_GIC_SHIFT, #4 100 + ubfx x0, x0, #ID_AA64PFR0_EL1_GIC_SHIFT, #4 101 101 cbz x0, .Lskip_gicv3_\@ 102 102 103 103 mrs_s x0, SYS_ICC_SRE_EL2 ··· 132 132 /* Disable any fine grained traps */ 133 133 .macro __init_el2_fgt 134 134 mrs x1, id_aa64mmfr0_el1 135 - ubfx x1, x1, #ID_AA64MMFR0_FGT_SHIFT, #4 135 + ubfx x1, x1, #ID_AA64MMFR0_EL1_FGT_SHIFT, #4 136 136 cbz x1, .Lskip_fgt_\@ 137 137 138 138 mov x0, xzr 139 139 mrs x1, id_aa64dfr0_el1 140 - ubfx x1, x1, #ID_AA64DFR0_PMSVER_SHIFT, #4 140 + ubfx x1, x1, #ID_AA64DFR0_EL1_PMSVer_SHIFT, #4 141 141 cmp x1, #3 142 142 b.lt .Lset_debug_fgt_\@ 143 143 /* Disable PMSNEVFR_EL1 read and write traps */ ··· 149 149 150 150 mov x0, xzr 151 151 mrs x1, id_aa64pfr1_el1 152 - ubfx x1, x1, #ID_AA64PFR1_SME_SHIFT, #4 152 + ubfx x1, x1, #ID_AA64PFR1_EL1_SME_SHIFT, #4 153 153 cbz x1, .Lset_fgt_\@ 154 154 155 155 /* Disable nVHE traps of TPIDR2 and SMPRI */ ··· 162 162 msr_s SYS_HFGITR_EL2, xzr 163 163 164 164 mrs x1, id_aa64pfr0_el1 // AMU traps UNDEF without AMU 165 - ubfx x1, x1, #ID_AA64PFR0_AMU_SHIFT, #4 165 + ubfx x1, x1, #ID_AA64PFR0_EL1_AMU_SHIFT, #4 166 166 cbz x1, .Lskip_fgt_\@ 167 167 168 168 msr_s SYS_HAFGRTR_EL2, xzr
+5 -3
arch/arm64/include/asm/exception.h
··· 58 58 asmlinkage void asm_exit_to_user_mode(struct pt_regs *regs); 59 59 60 60 void do_mem_abort(unsigned long far, unsigned long esr, struct pt_regs *regs); 61 - void do_undefinstr(struct pt_regs *regs); 62 - void do_bti(struct pt_regs *regs); 61 + void do_undefinstr(struct pt_regs *regs, unsigned long esr); 62 + void do_el0_bti(struct pt_regs *regs); 63 + void do_el1_bti(struct pt_regs *regs, unsigned long esr); 63 64 void do_debug_exception(unsigned long addr_if_watchpoint, unsigned long esr, 64 65 struct pt_regs *regs); 65 66 void do_fpsimd_acc(unsigned long esr, struct pt_regs *regs); ··· 73 72 void do_cp15instr(unsigned long esr, struct pt_regs *regs); 74 73 void do_el0_svc(struct pt_regs *regs); 75 74 void do_el0_svc_compat(struct pt_regs *regs); 76 - void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr); 75 + void do_el0_fpac(struct pt_regs *regs, unsigned long esr); 76 + void do_el1_fpac(struct pt_regs *regs, unsigned long esr); 77 77 void do_serror(struct pt_regs *regs, unsigned long esr); 78 78 void do_notify_resume(struct pt_regs *regs, unsigned long thread_flags); 79 79
+2 -2
arch/arm64/include/asm/hw_breakpoint.h
··· 142 142 u64 dfr0 = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1); 143 143 return 1 + 144 144 cpuid_feature_extract_unsigned_field(dfr0, 145 - ID_AA64DFR0_BRPS_SHIFT); 145 + ID_AA64DFR0_EL1_BRPs_SHIFT); 146 146 } 147 147 148 148 /* Determine number of WRP registers available. */ ··· 151 151 u64 dfr0 = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1); 152 152 return 1 + 153 153 cpuid_feature_extract_unsigned_field(dfr0, 154 - ID_AA64DFR0_WRPS_SHIFT); 154 + ID_AA64DFR0_EL1_WRPs_SHIFT); 155 155 } 156 156 157 157 #endif /* __ASM_BREAKPOINT_H */
+1
arch/arm64/include/asm/hwcap.h
··· 119 119 #define KERNEL_HWCAP_SME_FA64 __khwcap2_feature(SME_FA64) 120 120 #define KERNEL_HWCAP_WFXT __khwcap2_feature(WFXT) 121 121 #define KERNEL_HWCAP_EBF16 __khwcap2_feature(EBF16) 122 + #define KERNEL_HWCAP_SVE_EBF16 __khwcap2_feature(SVE_EBF16) 122 123 123 124 /* 124 125 * This yields a mask that user programs can use to figure out what
+3 -3
arch/arm64/include/asm/kvm_pgtable.h
··· 16 16 static inline u64 kvm_get_parange(u64 mmfr0) 17 17 { 18 18 u64 parange = cpuid_feature_extract_unsigned_field(mmfr0, 19 - ID_AA64MMFR0_PARANGE_SHIFT); 20 - if (parange > ID_AA64MMFR0_PARANGE_MAX) 21 - parange = ID_AA64MMFR0_PARANGE_MAX; 19 + ID_AA64MMFR0_EL1_PARANGE_SHIFT); 20 + if (parange > ID_AA64MMFR0_EL1_PARANGE_MAX) 21 + parange = ID_AA64MMFR0_EL1_PARANGE_MAX; 22 22 23 23 return parange; 24 24 }
+12 -3
arch/arm64/include/asm/module.h
··· 58 58 } 59 59 60 60 struct plt_entry get_plt_entry(u64 dst, void *pc); 61 - bool plt_entries_equal(const struct plt_entry *a, const struct plt_entry *b); 62 61 63 - static inline bool plt_entry_is_initialized(const struct plt_entry *e) 62 + static inline const Elf_Shdr *find_section(const Elf_Ehdr *hdr, 63 + const Elf_Shdr *sechdrs, 64 + const char *name) 64 65 { 65 - return e->adrp || e->add || e->br; 66 + const Elf_Shdr *s, *se; 67 + const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset; 68 + 69 + for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) { 70 + if (strcmp(name, secstrs + s->sh_name) == 0) 71 + return s; 72 + } 73 + 74 + return NULL; 66 75 } 67 76 68 77 #endif /* __ASM_MODULE_H */
+1 -1
arch/arm64/include/asm/processor.h
··· 410 410 * The top of the current task's task stack 411 411 */ 412 412 #define current_top_of_stack() ((unsigned long)current->stack + THREAD_SIZE) 413 - #define on_thread_stack() (on_task_stack(current, current_stack_pointer, 1, NULL)) 413 + #define on_thread_stack() (on_task_stack(current, current_stack_pointer, 1)) 414 414 415 415 #endif /* __ASSEMBLY__ */ 416 416 #endif /* __ASM_PROCESSOR_H */
-17
arch/arm64/include/asm/sdei.h
··· 43 43 unsigned long sdei_arch_get_entry_point(int conduit); 44 44 #define sdei_arch_get_entry_point(x) sdei_arch_get_entry_point(x) 45 45 46 - struct stack_info; 47 - 48 - bool _on_sdei_stack(unsigned long sp, unsigned long size, 49 - struct stack_info *info); 50 - static inline bool on_sdei_stack(unsigned long sp, unsigned long size, 51 - struct stack_info *info) 52 - { 53 - if (!IS_ENABLED(CONFIG_VMAP_STACK)) 54 - return false; 55 - if (!IS_ENABLED(CONFIG_ARM_SDE_INTERFACE)) 56 - return false; 57 - if (in_nmi()) 58 - return _on_sdei_stack(sp, size, info); 59 - 60 - return false; 61 - } 62 - 63 46 #endif /* __ASSEMBLY__ */ 64 47 #endif /* __ASM_SDEI_H */
+59 -12
arch/arm64/include/asm/stacktrace.h
··· 22 22 23 23 DECLARE_PER_CPU(unsigned long *, irq_stack_ptr); 24 24 25 - static inline bool on_irq_stack(unsigned long sp, unsigned long size, 26 - struct stack_info *info) 25 + static inline struct stack_info stackinfo_get_irq(void) 27 26 { 28 27 unsigned long low = (unsigned long)raw_cpu_read(irq_stack_ptr); 29 28 unsigned long high = low + IRQ_STACK_SIZE; 30 29 31 - return on_stack(sp, size, low, high, STACK_TYPE_IRQ, info); 30 + return (struct stack_info) { 31 + .low = low, 32 + .high = high, 33 + }; 32 34 } 33 35 34 - static inline bool on_task_stack(const struct task_struct *tsk, 35 - unsigned long sp, unsigned long size, 36 - struct stack_info *info) 36 + static inline bool on_irq_stack(unsigned long sp, unsigned long size) 37 + { 38 + struct stack_info info = stackinfo_get_irq(); 39 + return stackinfo_on_stack(&info, sp, size); 40 + } 41 + 42 + static inline struct stack_info stackinfo_get_task(const struct task_struct *tsk) 37 43 { 38 44 unsigned long low = (unsigned long)task_stack_page(tsk); 39 45 unsigned long high = low + THREAD_SIZE; 40 46 41 - return on_stack(sp, size, low, high, STACK_TYPE_TASK, info); 47 + return (struct stack_info) { 48 + .low = low, 49 + .high = high, 50 + }; 51 + } 52 + 53 + static inline bool on_task_stack(const struct task_struct *tsk, 54 + unsigned long sp, unsigned long size) 55 + { 56 + struct stack_info info = stackinfo_get_task(tsk); 57 + return stackinfo_on_stack(&info, sp, size); 42 58 } 43 59 44 60 #ifdef CONFIG_VMAP_STACK 45 61 DECLARE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack); 46 62 47 - static inline bool on_overflow_stack(unsigned long sp, unsigned long size, 48 - struct stack_info *info) 63 + static inline struct stack_info stackinfo_get_overflow(void) 49 64 { 50 65 unsigned long low = (unsigned long)raw_cpu_ptr(overflow_stack); 51 66 unsigned long high = low + OVERFLOW_STACK_SIZE; 52 67 53 - return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info); 68 + return (struct stack_info) { 69 + .low = low, 70 + .high = high, 71 + }; 54 72 } 55 73 #else 56 - static inline bool on_overflow_stack(unsigned long sp, unsigned long size, 57 - struct stack_info *info) { return false; } 74 + #define stackinfo_get_overflow() stackinfo_get_unknown() 75 + #endif 76 + 77 + #if defined(CONFIG_ARM_SDE_INTERFACE) && defined(CONFIG_VMAP_STACK) 78 + DECLARE_PER_CPU(unsigned long *, sdei_stack_normal_ptr); 79 + DECLARE_PER_CPU(unsigned long *, sdei_stack_critical_ptr); 80 + 81 + static inline struct stack_info stackinfo_get_sdei_normal(void) 82 + { 83 + unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_normal_ptr); 84 + unsigned long high = low + SDEI_STACK_SIZE; 85 + 86 + return (struct stack_info) { 87 + .low = low, 88 + .high = high, 89 + }; 90 + } 91 + 92 + static inline struct stack_info stackinfo_get_sdei_critical(void) 93 + { 94 + unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_critical_ptr); 95 + unsigned long high = low + SDEI_STACK_SIZE; 96 + 97 + return (struct stack_info) { 98 + .low = low, 99 + .high = high, 100 + }; 101 + } 102 + #else 103 + #define stackinfo_get_sdei_normal() stackinfo_get_unknown() 104 + #define stackinfo_get_sdei_critical() stackinfo_get_unknown() 58 105 #endif 59 106 60 107 #endif /* __ASM_STACKTRACE_H */
+107 -130
arch/arm64/include/asm/stacktrace/common.h
··· 2 2 /* 3 3 * Common arm64 stack unwinder code. 4 4 * 5 - * To implement a new arm64 stack unwinder: 6 - * 1) Include this header 7 - * 8 - * 2) Call into unwind_next_common() from your top level unwind 9 - * function, passing it the validation and translation callbacks 10 - * (though the later can be NULL if no translation is required). 11 - * 12 5 * See: arch/arm64/kernel/stacktrace.c for the reference implementation. 13 6 * 14 7 * Copyright (C) 2012 ARM Ltd. ··· 9 16 #ifndef __ASM_STACKTRACE_COMMON_H 10 17 #define __ASM_STACKTRACE_COMMON_H 11 18 12 - #include <linux/bitmap.h> 13 - #include <linux/bitops.h> 14 19 #include <linux/kprobes.h> 15 20 #include <linux/types.h> 16 - 17 - enum stack_type { 18 - STACK_TYPE_UNKNOWN, 19 - STACK_TYPE_TASK, 20 - STACK_TYPE_IRQ, 21 - STACK_TYPE_OVERFLOW, 22 - STACK_TYPE_SDEI_NORMAL, 23 - STACK_TYPE_SDEI_CRITICAL, 24 - STACK_TYPE_HYP, 25 - __NR_STACK_TYPES 26 - }; 27 21 28 22 struct stack_info { 29 23 unsigned long low; 30 24 unsigned long high; 31 - enum stack_type type; 32 25 }; 33 26 34 - /* 35 - * A snapshot of a frame record or fp/lr register values, along with some 36 - * accounting information necessary for robust unwinding. 27 + /** 28 + * struct unwind_state - state used for robust unwinding. 37 29 * 38 30 * @fp: The fp value in the frame record (or the real fp) 39 31 * @pc: The lr value in the frame record (or the real lr) 40 - * 41 - * @stacks_done: Stacks which have been entirely unwound, for which it is no 42 - * longer valid to unwind to. 43 - * 44 - * @prev_fp: The fp that pointed to this frame record, or a synthetic value 45 - * of 0. This is used to ensure that within a stack, each 46 - * subsequent frame record is at an increasing address. 47 - * @prev_type: The type of stack this frame record was on, or a synthetic 48 - * value of STACK_TYPE_UNKNOWN. This is used to detect a 49 - * transition from one stack to another. 50 32 * 51 33 * @kr_cur: When KRETPROBES is selected, holds the kretprobe instance 52 34 * associated with the most recently encountered replacement lr 53 35 * value. 54 36 * 55 37 * @task: The task being unwound. 38 + * 39 + * @stack: The stack currently being unwound. 40 + * @stacks: An array of stacks which can be unwound. 41 + * @nr_stacks: The number of stacks in @stacks. 
56 42 */ 57 43 struct unwind_state { 58 44 unsigned long fp; 59 45 unsigned long pc; 60 - DECLARE_BITMAP(stacks_done, __NR_STACK_TYPES); 61 - unsigned long prev_fp; 62 - enum stack_type prev_type; 63 46 #ifdef CONFIG_KRETPROBES 64 47 struct llist_node *kr_cur; 65 48 #endif 66 49 struct task_struct *task; 50 + 51 + struct stack_info stack; 52 + struct stack_info *stacks; 53 + int nr_stacks; 67 54 }; 68 55 69 - static inline bool on_stack(unsigned long sp, unsigned long size, 70 - unsigned long low, unsigned long high, 71 - enum stack_type type, struct stack_info *info) 56 + static inline struct stack_info stackinfo_get_unknown(void) 72 57 { 73 - if (!low) 58 + return (struct stack_info) { 59 + .low = 0, 60 + .high = 0, 61 + }; 62 + } 63 + 64 + static inline bool stackinfo_on_stack(const struct stack_info *info, 65 + unsigned long sp, unsigned long size) 66 + { 67 + if (!info->low) 74 68 return false; 75 69 76 - if (sp < low || sp + size < sp || sp + size > high) 70 + if (sp < info->low || sp + size < sp || sp + size > info->high) 77 71 return false; 78 72 79 - if (info) { 80 - info->low = low; 81 - info->high = high; 82 - info->type = type; 83 - } 84 73 return true; 85 74 } 86 75 ··· 74 99 state->kr_cur = NULL; 75 100 #endif 76 101 77 - /* 78 - * Prime the first unwind. 79 - * 80 - * In unwind_next() we'll check that the FP points to a valid stack, 81 - * which can't be STACK_TYPE_UNKNOWN, and the first unwind will be 82 - * treated as a transition to whichever stack that happens to be. The 83 - * prev_fp value won't be used, but we set it to 0 such that it is 84 - * definitely not an accessible stack address. 85 - */ 86 - bitmap_zero(state->stacks_done, __NR_STACK_TYPES); 87 - state->prev_fp = 0; 88 - state->prev_type = STACK_TYPE_UNKNOWN; 102 + state->stack = stackinfo_get_unknown(); 89 103 } 90 104 91 - /* 92 - * stack_trace_translate_fp_fn() - Translates a non-kernel frame pointer to 93 - * a kernel address. 94 - * 95 - * @fp: the frame pointer to be updated to its kernel address. 96 - * @type: the stack type associated with frame pointer @fp 97 - * 98 - * Returns true and success and @fp is updated to the corresponding 99 - * kernel virtual address; otherwise returns false. 100 - */ 101 - typedef bool (*stack_trace_translate_fp_fn)(unsigned long *fp, 102 - enum stack_type type); 103 - 104 - /* 105 - * on_accessible_stack_fn() - Check whether a stack range is on any 106 - * of the possible stacks. 
107 - * 108 - * @tsk: task whose stack is being unwound 109 - * @sp: stack address being checked 110 - * @size: size of the stack range being checked 111 - * @info: stack unwinding context 112 - */ 113 - typedef bool (*on_accessible_stack_fn)(const struct task_struct *tsk, 114 - unsigned long sp, unsigned long size, 115 - struct stack_info *info); 116 - 117 - static inline int unwind_next_common(struct unwind_state *state, 118 - struct stack_info *info, 119 - on_accessible_stack_fn accessible, 120 - stack_trace_translate_fp_fn translate_fp) 105 + static struct stack_info *unwind_find_next_stack(const struct unwind_state *state, 106 + unsigned long sp, 107 + unsigned long size) 121 108 { 122 - unsigned long fp = state->fp, kern_fp = fp; 123 - struct task_struct *tsk = state->task; 109 + for (int i = 0; i < state->nr_stacks; i++) { 110 + struct stack_info *info = &state->stacks[i]; 111 + 112 + if (stackinfo_on_stack(info, sp, size)) 113 + return info; 114 + } 115 + 116 + return NULL; 117 + } 118 + 119 + /** 120 + * unwind_consume_stack() - Check if an object is on an accessible stack, 121 + * updating stack boundaries so that future unwind steps cannot consume this 122 + * object again. 123 + * 124 + * @state: the current unwind state. 125 + * @sp: the base address of the object. 126 + * @size: the size of the object. 127 + * 128 + * Return: 0 upon success, an error code otherwise. 129 + */ 130 + static inline int unwind_consume_stack(struct unwind_state *state, 131 + unsigned long sp, 132 + unsigned long size) 133 + { 134 + struct stack_info *next; 135 + 136 + if (stackinfo_on_stack(&state->stack, sp, size)) 137 + goto found; 138 + 139 + next = unwind_find_next_stack(state, sp, size); 140 + if (!next) 141 + return -EINVAL; 142 + 143 + /* 144 + * Stack transitions are strictly one-way, and once we've 145 + * transitioned from one stack to another, it's never valid to 146 + * unwind back to the old stack. 147 + * 148 + * Remove the current stack from the list of stacks so that it cannot 149 + * be found on a subsequent transition. 150 + * 151 + * Note that stacks can nest in several valid orders, e.g. 152 + * 153 + * TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL 154 + * TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW 155 + * HYP -> OVERFLOW 156 + * 157 + * ... so we do not check the specific order of stack 158 + * transitions. 159 + */ 160 + state->stack = *next; 161 + *next = stackinfo_get_unknown(); 162 + 163 + found: 164 + /* 165 + * Future unwind steps can only consume stack above this frame record. 166 + * Update the current stack to start immediately above it. 167 + */ 168 + state->stack.low = sp + size; 169 + return 0; 170 + } 171 + 172 + /** 173 + * unwind_next_frame_record() - Unwind to the next frame record. 174 + * 175 + * @state: the current unwind state. 176 + * 177 + * Return: 0 upon success, an error code otherwise. 178 + */ 179 + static inline int 180 + unwind_next_frame_record(struct unwind_state *state) 181 + { 182 + unsigned long fp = state->fp; 183 + int err; 124 184 125 185 if (fp & 0x7) 126 186 return -EINVAL; 127 187 128 - if (!accessible(tsk, fp, 16, info)) 129 - return -EINVAL; 130 - 131 - if (test_bit(info->type, state->stacks_done)) 132 - return -EINVAL; 188 + err = unwind_consume_stack(state, fp, 16); 189 + if (err) 190 + return err; 133 191 134 192 /* 135 - * If fp is not from the current address space perform the necessary 136 - * translation before dereferencing it to get the next fp. 193 + * Record this frame record's values. 
137 194 */ 138 - if (translate_fp && !translate_fp(&kern_fp, info->type)) 139 - return -EINVAL; 140 - 141 - /* 142 - * As stacks grow downward, any valid record on the same stack must be 143 - * at a strictly higher address than the prior record. 144 - * 145 - * Stacks can nest in several valid orders, e.g. 146 - * 147 - * TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL 148 - * TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW 149 - * HYP -> OVERFLOW 150 - * 151 - * ... but the nesting itself is strict. Once we transition from one 152 - * stack to another, it's never valid to unwind back to that first 153 - * stack. 154 - */ 155 - if (info->type == state->prev_type) { 156 - if (fp <= state->prev_fp) 157 - return -EINVAL; 158 - } else { 159 - __set_bit(state->prev_type, state->stacks_done); 160 - } 161 - 162 - /* 163 - * Record this frame record's values and location. The prev_fp and 164 - * prev_type are only meaningful to the next unwind_next() invocation. 165 - */ 166 - state->fp = READ_ONCE(*(unsigned long *)(kern_fp)); 167 - state->pc = READ_ONCE(*(unsigned long *)(kern_fp + 8)); 168 - state->prev_fp = fp; 169 - state->prev_type = info->type; 195 + state->fp = READ_ONCE(*(unsigned long *)(fp)); 196 + state->pc = READ_ONCE(*(unsigned long *)(fp + 8)); 170 197 171 198 return 0; 172 199 }
+2 -2
arch/arm64/include/asm/stacktrace/nvhe.h
··· 20 20 21 21 #include <asm/stacktrace/common.h> 22 22 23 - /* 24 - * kvm_nvhe_unwind_init - Start an unwind from the given nVHE HYP fp and pc 23 + /** 24 + * kvm_nvhe_unwind_init() - Start an unwind from the given nVHE HYP fp and pc 25 25 * 26 26 * @state : unwind_state to initialize 27 27 * @fp : frame pointer at which to start the unwinding.
+26 -185
arch/arm64/include/asm/sysreg.h
··· 190 190 #define SYS_MVFR1_EL1 sys_reg(3, 0, 0, 3, 1) 191 191 #define SYS_MVFR2_EL1 sys_reg(3, 0, 0, 3, 2) 192 192 193 - #define SYS_ID_AA64PFR0_EL1 sys_reg(3, 0, 0, 4, 0) 194 - #define SYS_ID_AA64PFR1_EL1 sys_reg(3, 0, 0, 4, 1) 195 - 196 - #define SYS_ID_AA64DFR0_EL1 sys_reg(3, 0, 0, 5, 0) 197 - #define SYS_ID_AA64DFR1_EL1 sys_reg(3, 0, 0, 5, 1) 198 - 199 - #define SYS_ID_AA64AFR0_EL1 sys_reg(3, 0, 0, 5, 4) 200 - #define SYS_ID_AA64AFR1_EL1 sys_reg(3, 0, 0, 5, 5) 201 - 202 - #define SYS_ID_AA64MMFR0_EL1 sys_reg(3, 0, 0, 7, 0) 203 - #define SYS_ID_AA64MMFR1_EL1 sys_reg(3, 0, 0, 7, 1) 204 - #define SYS_ID_AA64MMFR2_EL1 sys_reg(3, 0, 0, 7, 2) 205 - 206 193 #define SYS_ACTLR_EL1 sys_reg(3, 0, 1, 0, 1) 207 194 #define SYS_RGSR_EL1 sys_reg(3, 0, 1, 0, 5) 208 195 #define SYS_GCR_EL1 sys_reg(3, 0, 1, 0, 6) ··· 423 436 #define SYS_ICC_IGRPEN0_EL1 sys_reg(3, 0, 12, 12, 6) 424 437 #define SYS_ICC_IGRPEN1_EL1 sys_reg(3, 0, 12, 12, 7) 425 438 426 - #define SYS_TPIDR_EL1 sys_reg(3, 0, 13, 0, 4) 427 - 428 - #define SYS_SCXTNUM_EL1 sys_reg(3, 0, 13, 0, 7) 429 - 430 439 #define SYS_CNTKCTL_EL1 sys_reg(3, 0, 14, 1, 0) 431 440 432 441 #define SYS_CCSIDR_EL1 sys_reg(3, 1, 0, 0, 0) 433 442 #define SYS_AIDR_EL1 sys_reg(3, 1, 0, 0, 7) 434 - 435 - #define SMIDR_EL1_IMPLEMENTER_SHIFT 24 436 - #define SMIDR_EL1_SMPS_SHIFT 15 437 - #define SMIDR_EL1_AFFINITY_SHIFT 0 438 443 439 444 #define SYS_RNDR_EL0 sys_reg(3, 3, 2, 4, 0) 440 445 #define SYS_RNDRRS_EL0 sys_reg(3, 3, 2, 4, 1) ··· 516 537 #define SYS_HFGWTR_EL2 sys_reg(3, 4, 1, 1, 5) 517 538 #define SYS_HFGITR_EL2 sys_reg(3, 4, 1, 1, 6) 518 539 #define SYS_TRFCR_EL2 sys_reg(3, 4, 1, 2, 1) 519 - #define SYS_HCRX_EL2 sys_reg(3, 4, 1, 2, 2) 520 540 #define SYS_HDFGRTR_EL2 sys_reg(3, 4, 3, 1, 4) 521 541 #define SYS_HDFGWTR_EL2 sys_reg(3, 4, 3, 1, 5) 522 542 #define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6) ··· 668 690 #define MAIR_ATTRIDX(attr, idx) ((attr) << ((idx) * 8)) 669 691 670 692 /* id_aa64pfr0 */ 671 - #define ID_AA64PFR0_CSV3_SHIFT 60 672 - #define ID_AA64PFR0_CSV2_SHIFT 56 673 - #define ID_AA64PFR0_DIT_SHIFT 48 674 - #define ID_AA64PFR0_AMU_SHIFT 44 675 - #define ID_AA64PFR0_MPAM_SHIFT 40 676 - #define ID_AA64PFR0_SEL2_SHIFT 36 677 - #define ID_AA64PFR0_SVE_SHIFT 32 678 - #define ID_AA64PFR0_RAS_SHIFT 28 679 - #define ID_AA64PFR0_GIC_SHIFT 24 680 - #define ID_AA64PFR0_ASIMD_SHIFT 20 681 - #define ID_AA64PFR0_FP_SHIFT 16 682 - #define ID_AA64PFR0_EL3_SHIFT 12 683 - #define ID_AA64PFR0_EL2_SHIFT 8 684 - #define ID_AA64PFR0_EL1_SHIFT 4 685 - #define ID_AA64PFR0_EL0_SHIFT 0 686 - 687 - #define ID_AA64PFR0_AMU 0x1 688 - #define ID_AA64PFR0_SVE 0x1 689 - #define ID_AA64PFR0_RAS_V1 0x1 690 - #define ID_AA64PFR0_RAS_V1P1 0x2 691 - #define ID_AA64PFR0_FP_NI 0xf 692 - #define ID_AA64PFR0_FP_SUPPORTED 0x0 693 - #define ID_AA64PFR0_ASIMD_NI 0xf 694 - #define ID_AA64PFR0_ASIMD_SUPPORTED 0x0 695 - #define ID_AA64PFR0_ELx_64BIT_ONLY 0x1 696 - #define ID_AA64PFR0_ELx_32BIT_64BIT 0x2 697 - 698 - /* id_aa64pfr1 */ 699 - #define ID_AA64PFR1_SME_SHIFT 24 700 - #define ID_AA64PFR1_MPAMFRAC_SHIFT 16 701 - #define ID_AA64PFR1_RASFRAC_SHIFT 12 702 - #define ID_AA64PFR1_MTE_SHIFT 8 703 - #define ID_AA64PFR1_SSBS_SHIFT 4 704 - #define ID_AA64PFR1_BT_SHIFT 0 705 - 706 - #define ID_AA64PFR1_SSBS_PSTATE_NI 0 707 - #define ID_AA64PFR1_SSBS_PSTATE_ONLY 1 708 - #define ID_AA64PFR1_SSBS_PSTATE_INSNS 2 709 - #define ID_AA64PFR1_BT_BTI 0x1 710 - #define ID_AA64PFR1_SME 1 711 - 712 - #define ID_AA64PFR1_MTE_NI 0x0 713 - #define ID_AA64PFR1_MTE_EL0 0x1 714 - #define ID_AA64PFR1_MTE 0x2 715 - 
#define ID_AA64PFR1_MTE_ASYMM 0x3 693 + #define ID_AA64PFR0_EL1_ELx_64BIT_ONLY 0x1 694 + #define ID_AA64PFR0_EL1_ELx_32BIT_64BIT 0x2 716 695 717 696 /* id_aa64mmfr0 */ 718 - #define ID_AA64MMFR0_ECV_SHIFT 60 719 - #define ID_AA64MMFR0_FGT_SHIFT 56 720 - #define ID_AA64MMFR0_EXS_SHIFT 44 721 - #define ID_AA64MMFR0_TGRAN4_2_SHIFT 40 722 - #define ID_AA64MMFR0_TGRAN64_2_SHIFT 36 723 - #define ID_AA64MMFR0_TGRAN16_2_SHIFT 32 724 - #define ID_AA64MMFR0_TGRAN4_SHIFT 28 725 - #define ID_AA64MMFR0_TGRAN64_SHIFT 24 726 - #define ID_AA64MMFR0_TGRAN16_SHIFT 20 727 - #define ID_AA64MMFR0_BIGENDEL0_SHIFT 16 728 - #define ID_AA64MMFR0_SNSMEM_SHIFT 12 729 - #define ID_AA64MMFR0_BIGENDEL_SHIFT 8 730 - #define ID_AA64MMFR0_ASID_SHIFT 4 731 - #define ID_AA64MMFR0_PARANGE_SHIFT 0 732 - 733 - #define ID_AA64MMFR0_ASID_8 0x0 734 - #define ID_AA64MMFR0_ASID_16 0x2 735 - 736 - #define ID_AA64MMFR0_TGRAN4_NI 0xf 737 - #define ID_AA64MMFR0_TGRAN4_SUPPORTED_MIN 0x0 738 - #define ID_AA64MMFR0_TGRAN4_SUPPORTED_MAX 0x7 739 - #define ID_AA64MMFR0_TGRAN64_NI 0xf 740 - #define ID_AA64MMFR0_TGRAN64_SUPPORTED_MIN 0x0 741 - #define ID_AA64MMFR0_TGRAN64_SUPPORTED_MAX 0x7 742 - #define ID_AA64MMFR0_TGRAN16_NI 0x0 743 - #define ID_AA64MMFR0_TGRAN16_SUPPORTED_MIN 0x1 744 - #define ID_AA64MMFR0_TGRAN16_SUPPORTED_MAX 0xf 745 - 746 - #define ID_AA64MMFR0_PARANGE_32 0x0 747 - #define ID_AA64MMFR0_PARANGE_36 0x1 748 - #define ID_AA64MMFR0_PARANGE_40 0x2 749 - #define ID_AA64MMFR0_PARANGE_42 0x3 750 - #define ID_AA64MMFR0_PARANGE_44 0x4 751 - #define ID_AA64MMFR0_PARANGE_48 0x5 752 - #define ID_AA64MMFR0_PARANGE_52 0x6 697 + #define ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN 0x0 698 + #define ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MAX 0x7 699 + #define ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MIN 0x0 700 + #define ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MAX 0x7 701 + #define ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MIN 0x1 702 + #define ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MAX 0xf 753 703 754 704 #define ARM64_MIN_PARANGE_BITS 32 755 705 756 - #define ID_AA64MMFR0_TGRAN_2_SUPPORTED_DEFAULT 0x0 757 - #define ID_AA64MMFR0_TGRAN_2_SUPPORTED_NONE 0x1 758 - #define ID_AA64MMFR0_TGRAN_2_SUPPORTED_MIN 0x2 759 - #define ID_AA64MMFR0_TGRAN_2_SUPPORTED_MAX 0x7 706 + #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_DEFAULT 0x0 707 + #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_NONE 0x1 708 + #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MIN 0x2 709 + #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MAX 0x7 760 710 761 711 #ifdef CONFIG_ARM64_PA_BITS_52 762 - #define ID_AA64MMFR0_PARANGE_MAX ID_AA64MMFR0_PARANGE_52 712 + #define ID_AA64MMFR0_EL1_PARANGE_MAX ID_AA64MMFR0_EL1_PARANGE_52 763 713 #else 764 - #define ID_AA64MMFR0_PARANGE_MAX ID_AA64MMFR0_PARANGE_48 714 + #define ID_AA64MMFR0_EL1_PARANGE_MAX ID_AA64MMFR0_EL1_PARANGE_48 765 715 #endif 766 - 767 - /* id_aa64mmfr1 */ 768 - #define ID_AA64MMFR1_ECBHB_SHIFT 60 769 - #define ID_AA64MMFR1_TIDCP1_SHIFT 52 770 - #define ID_AA64MMFR1_HCX_SHIFT 40 771 - #define ID_AA64MMFR1_AFP_SHIFT 44 772 - #define ID_AA64MMFR1_ETS_SHIFT 36 773 - #define ID_AA64MMFR1_TWED_SHIFT 32 774 - #define ID_AA64MMFR1_XNX_SHIFT 28 775 - #define ID_AA64MMFR1_SPECSEI_SHIFT 24 776 - #define ID_AA64MMFR1_PAN_SHIFT 20 777 - #define ID_AA64MMFR1_LOR_SHIFT 16 778 - #define ID_AA64MMFR1_HPD_SHIFT 12 779 - #define ID_AA64MMFR1_VHE_SHIFT 8 780 - #define ID_AA64MMFR1_VMIDBITS_SHIFT 4 781 - #define ID_AA64MMFR1_HADBS_SHIFT 0 782 - 783 - #define ID_AA64MMFR1_VMIDBITS_8 0 784 - #define ID_AA64MMFR1_VMIDBITS_16 2 785 - 786 - #define ID_AA64MMFR1_TIDCP1_NI 0 787 - #define 
ID_AA64MMFR1_TIDCP1_IMP 1 788 - 789 - /* id_aa64mmfr2 */ 790 - #define ID_AA64MMFR2_E0PD_SHIFT 60 791 - #define ID_AA64MMFR2_EVT_SHIFT 56 792 - #define ID_AA64MMFR2_BBM_SHIFT 52 793 - #define ID_AA64MMFR2_TTL_SHIFT 48 794 - #define ID_AA64MMFR2_FWB_SHIFT 40 795 - #define ID_AA64MMFR2_IDS_SHIFT 36 796 - #define ID_AA64MMFR2_AT_SHIFT 32 797 - #define ID_AA64MMFR2_ST_SHIFT 28 798 - #define ID_AA64MMFR2_NV_SHIFT 24 799 - #define ID_AA64MMFR2_CCIDX_SHIFT 20 800 - #define ID_AA64MMFR2_LVA_SHIFT 16 801 - #define ID_AA64MMFR2_IESB_SHIFT 12 802 - #define ID_AA64MMFR2_LSM_SHIFT 8 803 - #define ID_AA64MMFR2_UAO_SHIFT 4 804 - #define ID_AA64MMFR2_CNP_SHIFT 0 805 - 806 - /* id_aa64dfr0 */ 807 - #define ID_AA64DFR0_MTPMU_SHIFT 48 808 - #define ID_AA64DFR0_TRBE_SHIFT 44 809 - #define ID_AA64DFR0_TRACE_FILT_SHIFT 40 810 - #define ID_AA64DFR0_DOUBLELOCK_SHIFT 36 811 - #define ID_AA64DFR0_PMSVER_SHIFT 32 812 - #define ID_AA64DFR0_CTX_CMPS_SHIFT 28 813 - #define ID_AA64DFR0_WRPS_SHIFT 20 814 - #define ID_AA64DFR0_BRPS_SHIFT 12 815 - #define ID_AA64DFR0_PMUVER_SHIFT 8 816 - #define ID_AA64DFR0_TRACEVER_SHIFT 4 817 - #define ID_AA64DFR0_DEBUGVER_SHIFT 0 818 - 819 - #define ID_AA64DFR0_PMUVER_8_0 0x1 820 - #define ID_AA64DFR0_PMUVER_8_1 0x4 821 - #define ID_AA64DFR0_PMUVER_8_4 0x5 822 - #define ID_AA64DFR0_PMUVER_8_5 0x6 823 - #define ID_AA64DFR0_PMUVER_8_7 0x7 824 - #define ID_AA64DFR0_PMUVER_IMP_DEF 0xf 825 - 826 - #define ID_AA64DFR0_PMSVER_8_2 0x1 827 - #define ID_AA64DFR0_PMSVER_8_3 0x2 828 716 829 717 #define ID_DFR0_PERFMON_SHIFT 24 830 718 ··· 799 955 #define ID_PFR1_PROGMOD_SHIFT 0 800 956 801 957 #if defined(CONFIG_ARM64_4K_PAGES) 802 - #define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN4_SHIFT 803 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN4_SUPPORTED_MIN 804 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_TGRAN4_SUPPORTED_MAX 805 - #define ID_AA64MMFR0_TGRAN_2_SHIFT ID_AA64MMFR0_TGRAN4_2_SHIFT 958 + #define ID_AA64MMFR0_EL1_TGRAN_SHIFT ID_AA64MMFR0_EL1_TGRAN4_SHIFT 959 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN 960 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MAX 961 + #define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT ID_AA64MMFR0_EL1_TGRAN4_2_SHIFT 806 962 #elif defined(CONFIG_ARM64_16K_PAGES) 807 - #define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN16_SHIFT 808 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN16_SUPPORTED_MIN 809 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_TGRAN16_SUPPORTED_MAX 810 - #define ID_AA64MMFR0_TGRAN_2_SHIFT ID_AA64MMFR0_TGRAN16_2_SHIFT 963 + #define ID_AA64MMFR0_EL1_TGRAN_SHIFT ID_AA64MMFR0_EL1_TGRAN16_SHIFT 964 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MIN 965 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MAX 966 + #define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT ID_AA64MMFR0_EL1_TGRAN16_2_SHIFT 811 967 #elif defined(CONFIG_ARM64_64K_PAGES) 812 - #define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN64_SHIFT 813 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN64_SUPPORTED_MIN 814 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_TGRAN64_SUPPORTED_MAX 815 - #define ID_AA64MMFR0_TGRAN_2_SHIFT ID_AA64MMFR0_TGRAN64_2_SHIFT 968 + #define ID_AA64MMFR0_EL1_TGRAN_SHIFT ID_AA64MMFR0_EL1_TGRAN64_SHIFT 969 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MIN 970 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MAX 971 + 
#define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT ID_AA64MMFR0_EL1_TGRAN64_2_SHIFT 816 972 #endif 817 973 818 974 #define MVFR2_FPMISC_SHIFT 4 ··· 871 1027 #define TRFCR_EL2_CX BIT(3) 872 1028 #define TRFCR_ELx_ExTRE BIT(1) 873 1029 #define TRFCR_ELx_E0TRE BIT(0) 874 - 875 - /* HCRX_EL2 definitions */ 876 - #define HCRX_EL2_SMPME_MASK (1 << 5) 877 1030 878 1031 /* GIC Hypervisor interface registers */ 879 1032 /* ICH_MISR_EL2 bit definitions */
+1 -1
arch/arm64/include/asm/system_misc.h
··· 18 18 19 19 struct pt_regs; 20 20 21 - void die(const char *msg, struct pt_regs *regs, int err); 21 + void die(const char *msg, struct pt_regs *regs, long err); 22 22 23 23 struct siginfo; 24 24 void arm64_notify_die(const char *str, struct pt_regs *regs,
+3
arch/arm64/include/asm/vdso.h
··· 26 26 (void *)(vdso_offset_##name - VDSO_LBASE + (unsigned long)(base)); \ 27 27 }) 28 28 29 + extern char vdso_start[], vdso_end[]; 30 + extern char vdso32_start[], vdso32_end[]; 31 + 29 32 #endif /* !__ASSEMBLY__ */ 30 33 31 34 #endif /* __ASM_VDSO_H */
+15 -4
arch/arm64/include/asm/vdso/gettimeofday.h
··· 7 7 8 8 #ifndef __ASSEMBLY__ 9 9 10 + #include <asm/alternative.h> 10 11 #include <asm/barrier.h> 11 12 #include <asm/unistd.h> 13 + #include <asm/sysreg.h> 12 14 13 15 #define VDSO_HAS_CLOCK_GETRES 1 14 16 ··· 80 78 return 0; 81 79 82 80 /* 83 - * This isb() is required to prevent that the counter value 81 + * If FEAT_ECV is available, use the self-synchronizing counter. 82 + * Otherwise the isb is required to prevent that the counter value 84 83 * is speculated. 85 - */ 86 - isb(); 87 - asm volatile("mrs %0, cntvct_el0" : "=r" (res) :: "memory"); 84 + */ 85 + asm volatile( 86 + ALTERNATIVE("isb\n" 87 + "mrs %0, cntvct_el0", 88 + "nop\n" 89 + __mrs_s("%0", SYS_CNTVCTSS_EL0), 90 + ARM64_HAS_ECV) 91 + : "=r" (res) 92 + : 93 + : "memory"); 94 + 88 95 arch_counter_enforce_ordering(res); 89 96 90 97 return res;
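Note: a minimal user-space sketch of how this path is exercised - any clock_gettime() call that libc routes through the vDSO lands in the sequence above, and whether it runs the isb+CNTVCT_EL0 pair or the self-synchronising CNTVCTSS_EL0 read is decided by the boot-time alternatives patching, not by the caller. Illustrative only; assumes nothing beyond standard libc.

#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec ts;

	/* Served from the arm64 vDSO fast path; no syscall in the common case. */
	if (clock_gettime(CLOCK_MONOTONIC, &ts) == 0)
		printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
	return 0;
}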
+1
arch/arm64/include/uapi/asm/hwcap.h
··· 92 92 #define HWCAP2_SME_FA64 (1 << 30) 93 93 #define HWCAP2_WFXT (1UL << 31) 94 94 #define HWCAP2_EBF16 (1UL << 32) 95 + #define HWCAP2_SVE_EBF16 (1UL << 33) 95 96 96 97 #endif /* _UAPI__ASM_HWCAP_H */
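Note: a hedged sketch of how user space would probe the new capability via the auxiliary vector; the fallback #define is only for building against older headers and mirrors the value added above.

#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP2_SVE_EBF16
#define HWCAP2_SVE_EBF16	(1UL << 33)	/* mirrors the UAPI addition above */
#endif

int main(void)
{
	unsigned long hwcap2 = getauxval(AT_HWCAP2);

	printf("SVE EBF16: %s\n",
	       (hwcap2 & HWCAP2_SVE_EBF16) ? "present" : "not reported");
	return 0;
}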
+28
arch/arm64/kernel/alternative.c
··· 10 10 11 11 #include <linux/init.h> 12 12 #include <linux/cpu.h> 13 + #include <linux/elf.h> 13 14 #include <asm/cacheflush.h> 14 15 #include <asm/alternative.h> 15 16 #include <asm/cpufeature.h> 16 17 #include <asm/insn.h> 18 + #include <asm/module.h> 17 19 #include <asm/sections.h> 20 + #include <asm/vdso.h> 18 21 #include <linux/stop_machine.h> 19 22 20 23 #define __ALT_PTR(a, f) ((void *)&(a)->f + (a)->f) ··· 195 192 } 196 193 } 197 194 195 + void apply_alternatives_vdso(void) 196 + { 197 + struct alt_region region; 198 + const struct elf64_hdr *hdr; 199 + const struct elf64_shdr *shdr; 200 + const struct elf64_shdr *alt; 201 + DECLARE_BITMAP(all_capabilities, ARM64_NPATCHABLE); 202 + 203 + bitmap_fill(all_capabilities, ARM64_NPATCHABLE); 204 + 205 + hdr = (struct elf64_hdr *)vdso_start; 206 + shdr = (void *)hdr + hdr->e_shoff; 207 + alt = find_section(hdr, shdr, ".altinstructions"); 208 + if (!alt) 209 + return; 210 + 211 + region = (struct alt_region){ 212 + .begin = (void *)hdr + alt->sh_offset, 213 + .end = (void *)hdr + alt->sh_offset + alt->sh_size, 214 + }; 215 + 216 + __apply_alternatives(&region, false, &all_capabilities[0]); 217 + } 218 + 198 219 /* 199 220 * We might be patching the stop_machine state machine, so implement a 200 221 * really simple polling protocol here. ··· 252 225 253 226 void __init apply_alternatives_all(void) 254 227 { 228 + apply_alternatives_vdso(); 255 229 /* better not try code patching on a live SMP system */ 256 230 stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask); 257 231 }
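Note: apply_alternatives_vdso() relies on a find_section() helper that this series moves out of module.c (its old body is visible in the module.c hunk further down). For reference, a minimal equivalent is sketched below; the name and the Elf64_* type spellings are assumptions made so the example is self-contained.

#include <linux/elf.h>
#include <linux/string.h>

/* Reference-only sketch mirroring the helper removed from module.c below. */
static const Elf64_Shdr *find_section_sketch(const Elf64_Ehdr *hdr,
					     const Elf64_Shdr *sechdrs,
					     const char *name)
{
	const Elf64_Shdr *s, *se;
	const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;

	for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
		if (strcmp(name, secstrs + s->sh_name) == 0)
			return s;
	}

	return NULL;
}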
+26
arch/arm64/kernel/cpu_errata.c
··· 121 121 sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCI, 0); 122 122 } 123 123 124 + static DEFINE_RAW_SPINLOCK(reg_user_mask_modification); 125 + static void __maybe_unused 126 + cpu_clear_bf16_from_user_emulation(const struct arm64_cpu_capabilities *__unused) 127 + { 128 + struct arm64_ftr_reg *regp; 129 + 130 + regp = get_arm64_ftr_reg(SYS_ID_AA64ISAR1_EL1); 131 + if (!regp) 132 + return; 133 + 134 + raw_spin_lock(&reg_user_mask_modification); 135 + if (regp->user_mask & ID_AA64ISAR1_EL1_BF16_MASK) 136 + regp->user_mask &= ~ID_AA64ISAR1_EL1_BF16_MASK; 137 + raw_spin_unlock(&reg_user_mask_modification); 138 + } 139 + 124 140 #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max) \ 125 141 .matches = is_affected_midr_range, \ 126 142 .midr_range = MIDR_RANGE(model, v_min, r_min, v_max, r_max) ··· 706 690 .capability = ARM64_WORKAROUND_1742098, 707 691 CAP_MIDR_RANGE_LIST(broken_aarch32_aes), 708 692 .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, 693 + }, 694 + #endif 695 + #ifdef CONFIG_ARM64_ERRATUM_2658417 696 + { 697 + .desc = "ARM erratum 2658417", 698 + .capability = ARM64_WORKAROUND_2658417, 699 + /* Cortex-A510 r0p0 - r1p1 */ 700 + ERRATA_MIDR_RANGE(MIDR_CORTEX_A510, 0, 0, 1, 1), 701 + MIDR_FIXED(MIDR_CPU_VAR_REV(1,1), BIT(25)), 702 + .cpu_enable = cpu_clear_bf16_from_user_emulation, 709 703 }, 710 704 #endif 711 705 {
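Note: the net effect of this workaround is that affected Cortex-A510 parts stop advertising BF16 to user space (through the emulated ID register and, with the has_user_cpuid_feature() change in cpufeature.c below, the hwcap). A hedged user-space sketch of the recommended gating; the fallback #define mirrors the existing UAPI value.

#include <stdbool.h>
#include <sys/auxv.h>

#ifndef HWCAP2_BF16
#define HWCAP2_BF16	(1UL << 14)	/* existing UAPI bit, fallback for old headers */
#endif

/* Key BF16 usage off the hwcap, which the workaround hides on affected parts. */
static bool can_use_bf16(void)
{
	return getauxval(AT_HWCAP2) & HWCAP2_BF16;
}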
+153 -129
arch/arm64/kernel/cpufeature.c
··· 243 243 }; 244 244 245 245 static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = { 246 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0), 247 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0), 248 - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_DIT_SHIFT, 4, 0), 249 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_AMU_SHIFT, 4, 0), 250 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_MPAM_SHIFT, 4, 0), 251 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SEL2_SHIFT, 4, 0), 246 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_CSV3_SHIFT, 4, 0), 247 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_CSV2_SHIFT, 4, 0), 248 + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_DIT_SHIFT, 4, 0), 249 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_AMU_SHIFT, 4, 0), 250 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_MPAM_SHIFT, 4, 0), 251 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SEL2_SHIFT, 4, 0), 252 252 ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE), 253 - FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0), 254 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0), 255 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0), 256 - S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI), 257 - S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI), 258 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0), 259 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0), 260 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_ELx_64BIT_ONLY), 261 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_ELx_64BIT_ONLY), 253 + FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SVE_SHIFT, 4, 0), 254 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_RAS_SHIFT, 4, 0), 255 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_GIC_SHIFT, 4, 0), 256 + S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_AdvSIMD_SHIFT, 4, ID_AA64PFR0_EL1_AdvSIMD_NI), 257 + S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_FP_SHIFT, 4, ID_AA64PFR0_EL1_FP_NI), 258 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_EL3_SHIFT, 4, 0), 259 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_EL2_SHIFT, 4, 0), 260 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_EL1_SHIFT, 4, ID_AA64PFR0_EL1_ELx_64BIT_ONLY), 261 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_EL0_SHIFT, 4, ID_AA64PFR0_EL1_ELx_64BIT_ONLY), 262 262 ARM64_FTR_END, 263 263 }; 264 264 265 265 static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = { 266 266 ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME), 267 - FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_SME_SHIFT, 4, 0), 268 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_MPAMFRAC_SHIFT, 4, 0), 269 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_AA64PFR1_RASFRAC_SHIFT, 4, 0), 267 + FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_SME_SHIFT, 4, 0), 268 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_MPAM_frac_SHIFT, 4, 0), 269 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_RAS_frac_SHIFT, 4, 0), 270 270 ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_MTE), 271 - FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_MTE_SHIFT, 4, ID_AA64PFR1_MTE_NI), 272 - ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR1_SSBS_SHIFT, 4, ID_AA64PFR1_SSBS_PSTATE_NI), 271 + FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_MTE_SHIFT, 4, ID_AA64PFR1_EL1_MTE_NI), 272 + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_SSBS_SHIFT, 4, ID_AA64PFR1_EL1_SSBS_NI), 273 273 ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_BTI), 274 - FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_BT_SHIFT, 4, 0), 274 + FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_BT_SHIFT, 4, 0), 275 275 ARM64_FTR_END, 276 276 }; 277 277 ··· 316 316 }; 317 317 318 318 static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = { 319 - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ECV_SHIFT, 4, 0), 320 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_FGT_SHIFT, 4, 0), 321 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EXS_SHIFT, 4, 0), 319 + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_ECV_SHIFT, 4, 0), 320 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_FGT_SHIFT, 4, 0), 321 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_EXS_SHIFT, 4, 0), 322 322 /* 323 323 * Page size not being supported at Stage-2 is not fatal. You 324 324 * just give up KVM if PAGE_SIZE isn't supported there. Go fix ··· 334 334 * fields are inconsistent across vCPUs, then it isn't worth 335 335 * trying to bring KVM up. 336 336 */ 337 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN4_2_SHIFT, 4, 1), 338 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN64_2_SHIFT, 4, 1), 339 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN16_2_SHIFT, 4, 1), 337 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_EL1_TGRAN4_2_SHIFT, 4, 1), 338 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_EL1_TGRAN64_2_SHIFT, 4, 1), 339 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_EL1_TGRAN16_2_SHIFT, 4, 1), 340 340 /* 341 341 * We already refuse to boot CPUs that don't support our configured 342 342 * page size, so we can only detect mismatches for a page size other ··· 344 344 * exist in the wild so, even though we don't like it, we'll have to go 345 345 * along with it and treat them as non-strict. 
346 346 */ 347 - S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI), 348 - S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI), 349 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN16_SHIFT, 4, ID_AA64MMFR0_TGRAN16_NI), 347 + S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_TGRAN4_SHIFT, 4, ID_AA64MMFR0_EL1_TGRAN4_NI), 348 + S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_TGRAN64_SHIFT, 4, ID_AA64MMFR0_EL1_TGRAN64_NI), 349 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_TGRAN16_SHIFT, 4, ID_AA64MMFR0_EL1_TGRAN16_NI), 350 350 351 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL0_SHIFT, 4, 0), 351 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_BIGENDEL0_SHIFT, 4, 0), 352 352 /* Linux shouldn't care about secure memory */ 353 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_SNSMEM_SHIFT, 4, 0), 354 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL_SHIFT, 4, 0), 355 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ASID_SHIFT, 4, 0), 353 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_SNSMEM_SHIFT, 4, 0), 354 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_BIGEND_SHIFT, 4, 0), 355 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_ASIDBITS_SHIFT, 4, 0), 356 356 /* 357 357 * Differing PARange is fine as long as all peripherals and memory are mapped 358 358 * within the minimum PARange of all CPUs 359 359 */ 360 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_PARANGE_SHIFT, 4, 0), 360 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_PARANGE_SHIFT, 4, 0), 361 361 ARM64_FTR_END, 362 362 }; 363 363 364 364 static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = { 365 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_TIDCP1_SHIFT, 4, 0), 366 - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_AFP_SHIFT, 4, 0), 367 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_ETS_SHIFT, 4, 0), 368 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_TWED_SHIFT, 4, 0), 369 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_XNX_SHIFT, 4, 0), 370 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64MMFR1_SPECSEI_SHIFT, 4, 0), 371 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_PAN_SHIFT, 4, 0), 372 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_LOR_SHIFT, 4, 0), 373 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HPD_SHIFT, 4, 0), 374 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VHE_SHIFT, 4, 0), 375 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VMIDBITS_SHIFT, 4, 0), 376 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0), 365 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_TIDCP1_SHIFT, 4, 0), 366 + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_AFP_SHIFT, 4, 0), 367 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_ETS_SHIFT, 4, 0), 368 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, 
FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_TWED_SHIFT, 4, 0), 369 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_XNX_SHIFT, 4, 0), 370 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64MMFR1_EL1_SpecSEI_SHIFT, 4, 0), 371 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_PAN_SHIFT, 4, 0), 372 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_LO_SHIFT, 4, 0), 373 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_HPDS_SHIFT, 4, 0), 374 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_VH_SHIFT, 4, 0), 375 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_VMIDBits_SHIFT, 4, 0), 376 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_HAFDBS_SHIFT, 4, 0), 377 377 ARM64_FTR_END, 378 378 }; 379 379 380 380 static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = { 381 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_E0PD_SHIFT, 4, 0), 382 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EVT_SHIFT, 4, 0), 383 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_BBM_SHIFT, 4, 0), 384 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_TTL_SHIFT, 4, 0), 385 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_FWB_SHIFT, 4, 0), 386 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IDS_SHIFT, 4, 0), 387 - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0), 388 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_ST_SHIFT, 4, 0), 389 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_NV_SHIFT, 4, 0), 390 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_CCIDX_SHIFT, 4, 0), 391 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0), 392 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0), 393 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LSM_SHIFT, 4, 0), 394 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_UAO_SHIFT, 4, 0), 395 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_CNP_SHIFT, 4, 0), 381 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EL1_E0PD_SHIFT, 4, 0), 382 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EL1_EVT_SHIFT, 4, 0), 383 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EL1_BBM_SHIFT, 4, 0), 384 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EL1_TTL_SHIFT, 4, 0), 385 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EL1_FWB_SHIFT, 4, 0), 386 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EL1_IDS_SHIFT, 4, 0), 387 + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EL1_AT_SHIFT, 4, 0), 388 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EL1_ST_SHIFT, 4, 0), 389 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EL1_NV_SHIFT, 4, 0), 390 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EL1_CCIDX_SHIFT, 4, 0), 391 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EL1_VARange_SHIFT, 4, 0), 392 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EL1_IESB_SHIFT, 4, 0), 393 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_AA64MMFR2_EL1_LSM_SHIFT, 4, 0), 394 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EL1_UAO_SHIFT, 4, 0), 395 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EL1_CnP_SHIFT, 4, 0), 396 396 ARM64_FTR_END, 397 397 }; 398 398 ··· 434 434 }; 435 435 436 436 static const struct arm64_ftr_bits ftr_id_aa64dfr0[] = { 437 - S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_DOUBLELOCK_SHIFT, 4, 0), 438 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64DFR0_PMSVER_SHIFT, 4, 0), 439 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_CTX_CMPS_SHIFT, 4, 0), 440 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_WRPS_SHIFT, 4, 0), 441 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_BRPS_SHIFT, 4, 0), 437 + S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_DoubleLock_SHIFT, 4, 0), 438 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_PMSVer_SHIFT, 4, 0), 439 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_CTX_CMPs_SHIFT, 4, 0), 440 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_WRPs_SHIFT, 4, 0), 441 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_BRPs_SHIFT, 4, 0), 442 442 /* 443 443 * We can instantiate multiple PMU instances with different levels 444 444 * of support. 445 445 */ 446 - S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64DFR0_PMUVER_SHIFT, 4, 0), 447 - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64DFR0_DEBUGVER_SHIFT, 4, 0x6), 446 + S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64DFR0_EL1_PMUVer_SHIFT, 4, 0), 447 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64DFR0_EL1_DebugVer_SHIFT, 4, 0x6), 448 448 ARM64_FTR_END, 449 449 }; 450 450 ··· 750 750 * returns - Upon success, matching ftr_reg entry for id. 751 751 * - NULL on failure but with an WARN_ON(). 
752 752 */ 753 - static struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id) 753 + struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id) 754 754 { 755 755 struct arm64_ftr_reg *reg; 756 756 ··· 1401 1401 return val >= entry->min_field_value; 1402 1402 } 1403 1403 1404 + static u64 1405 + read_scoped_sysreg(const struct arm64_cpu_capabilities *entry, int scope) 1406 + { 1407 + WARN_ON(scope == SCOPE_LOCAL_CPU && preemptible()); 1408 + if (scope == SCOPE_SYSTEM) 1409 + return read_sanitised_ftr_reg(entry->sys_reg); 1410 + else 1411 + return __read_sysreg_by_encoding(entry->sys_reg); 1412 + } 1413 + 1414 + static bool 1415 + has_user_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope) 1416 + { 1417 + int mask; 1418 + struct arm64_ftr_reg *regp; 1419 + u64 val = read_scoped_sysreg(entry, scope); 1420 + 1421 + regp = get_arm64_ftr_reg(entry->sys_reg); 1422 + if (!regp) 1423 + return false; 1424 + 1425 + mask = cpuid_feature_extract_unsigned_field_width(regp->user_mask, 1426 + entry->field_pos, 1427 + entry->field_width); 1428 + if (!mask) 1429 + return false; 1430 + 1431 + return feature_matches(val, entry); 1432 + } 1433 + 1404 1434 static bool 1405 1435 has_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope) 1406 1436 { 1407 - u64 val; 1408 - 1409 - WARN_ON(scope == SCOPE_LOCAL_CPU && preemptible()); 1410 - if (scope == SCOPE_SYSTEM) 1411 - val = read_sanitised_ftr_reg(entry->sys_reg); 1412 - else 1413 - val = __read_sysreg_by_encoding(entry->sys_reg); 1414 - 1437 + u64 val = read_scoped_sysreg(entry, scope); 1415 1438 return feature_matches(val, entry); 1416 1439 } 1417 1440 ··· 1515 1492 u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); 1516 1493 1517 1494 return cpuid_feature_extract_signed_field(pfr0, 1518 - ID_AA64PFR0_FP_SHIFT) < 0; 1495 + ID_AA64PFR0_EL1_FP_SHIFT) < 0; 1519 1496 } 1520 1497 1521 1498 static bool has_cache_idc(const struct arm64_cpu_capabilities *entry, ··· 1594 1571 if (IS_ENABLED(CONFIG_ARM64_E0PD)) { 1595 1572 u64 mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1); 1596 1573 if (cpuid_feature_extract_unsigned_field(mmfr2, 1597 - ID_AA64MMFR2_E0PD_SHIFT)) 1574 + ID_AA64MMFR2_EL1_E0PD_SHIFT)) 1598 1575 return false; 1599 1576 } 1600 1577 ··· 2116 2093 .type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE, 2117 2094 .matches = has_useable_gicv3_cpuif, 2118 2095 .sys_reg = SYS_ID_AA64PFR0_EL1, 2119 - .field_pos = ID_AA64PFR0_GIC_SHIFT, 2096 + .field_pos = ID_AA64PFR0_EL1_GIC_SHIFT, 2120 2097 .field_width = 4, 2121 2098 .sign = FTR_UNSIGNED, 2122 2099 .min_field_value = 1, ··· 2127 2104 .type = ARM64_CPUCAP_SYSTEM_FEATURE, 2128 2105 .matches = has_cpuid_feature, 2129 2106 .sys_reg = SYS_ID_AA64MMFR0_EL1, 2130 - .field_pos = ID_AA64MMFR0_ECV_SHIFT, 2107 + .field_pos = ID_AA64MMFR0_EL1_ECV_SHIFT, 2131 2108 .field_width = 4, 2132 2109 .sign = FTR_UNSIGNED, 2133 2110 .min_field_value = 1, ··· 2139 2116 .type = ARM64_CPUCAP_SYSTEM_FEATURE, 2140 2117 .matches = has_cpuid_feature, 2141 2118 .sys_reg = SYS_ID_AA64MMFR1_EL1, 2142 - .field_pos = ID_AA64MMFR1_PAN_SHIFT, 2119 + .field_pos = ID_AA64MMFR1_EL1_PAN_SHIFT, 2143 2120 .field_width = 4, 2144 2121 .sign = FTR_UNSIGNED, 2145 2122 .min_field_value = 1, ··· 2153 2130 .type = ARM64_CPUCAP_SYSTEM_FEATURE, 2154 2131 .matches = has_cpuid_feature, 2155 2132 .sys_reg = SYS_ID_AA64MMFR1_EL1, 2156 - .field_pos = ID_AA64MMFR1_PAN_SHIFT, 2133 + .field_pos = ID_AA64MMFR1_EL1_PAN_SHIFT, 2157 2134 .field_width = 4, 2158 2135 .sign = FTR_UNSIGNED, 2159 2136 .min_field_value = 3, ··· 2191 2168 .matches = has_32bit_el0, 
2192 2169 .sys_reg = SYS_ID_AA64PFR0_EL1, 2193 2170 .sign = FTR_UNSIGNED, 2194 - .field_pos = ID_AA64PFR0_EL0_SHIFT, 2171 + .field_pos = ID_AA64PFR0_EL1_EL0_SHIFT, 2195 2172 .field_width = 4, 2196 - .min_field_value = ID_AA64PFR0_ELx_32BIT_64BIT, 2173 + .min_field_value = ID_AA64PFR0_EL1_ELx_32BIT_64BIT, 2197 2174 }, 2198 2175 #ifdef CONFIG_KVM 2199 2176 { ··· 2203 2180 .matches = has_cpuid_feature, 2204 2181 .sys_reg = SYS_ID_AA64PFR0_EL1, 2205 2182 .sign = FTR_UNSIGNED, 2206 - .field_pos = ID_AA64PFR0_EL1_SHIFT, 2183 + .field_pos = ID_AA64PFR0_EL1_EL1_SHIFT, 2207 2184 .field_width = 4, 2208 - .min_field_value = ID_AA64PFR0_ELx_32BIT_64BIT, 2185 + .min_field_value = ID_AA64PFR0_EL1_ELx_32BIT_64BIT, 2209 2186 }, 2210 2187 { 2211 2188 .desc = "Protected KVM", ··· 2224 2201 * more details. 2225 2202 */ 2226 2203 .sys_reg = SYS_ID_AA64PFR0_EL1, 2227 - .field_pos = ID_AA64PFR0_CSV3_SHIFT, 2204 + .field_pos = ID_AA64PFR0_EL1_CSV3_SHIFT, 2228 2205 .field_width = 4, 2229 2206 .min_field_value = 1, 2230 2207 .matches = unmap_kernel_at_el0, ··· 2267 2244 .capability = ARM64_SVE, 2268 2245 .sys_reg = SYS_ID_AA64PFR0_EL1, 2269 2246 .sign = FTR_UNSIGNED, 2270 - .field_pos = ID_AA64PFR0_SVE_SHIFT, 2247 + .field_pos = ID_AA64PFR0_EL1_SVE_SHIFT, 2271 2248 .field_width = 4, 2272 - .min_field_value = ID_AA64PFR0_SVE, 2249 + .min_field_value = ID_AA64PFR0_EL1_SVE_IMP, 2273 2250 .matches = has_cpuid_feature, 2274 2251 .cpu_enable = sve_kernel_enable, 2275 2252 }, ··· 2282 2259 .matches = has_cpuid_feature, 2283 2260 .sys_reg = SYS_ID_AA64PFR0_EL1, 2284 2261 .sign = FTR_UNSIGNED, 2285 - .field_pos = ID_AA64PFR0_RAS_SHIFT, 2262 + .field_pos = ID_AA64PFR0_EL1_RAS_SHIFT, 2286 2263 .field_width = 4, 2287 - .min_field_value = ID_AA64PFR0_RAS_V1, 2264 + .min_field_value = ID_AA64PFR0_EL1_RAS_IMP, 2288 2265 .cpu_enable = cpu_clear_disr, 2289 2266 }, 2290 2267 #endif /* CONFIG_ARM64_RAS_EXTN */ ··· 2301 2278 .matches = has_amu, 2302 2279 .sys_reg = SYS_ID_AA64PFR0_EL1, 2303 2280 .sign = FTR_UNSIGNED, 2304 - .field_pos = ID_AA64PFR0_AMU_SHIFT, 2281 + .field_pos = ID_AA64PFR0_EL1_AMU_SHIFT, 2305 2282 .field_width = 4, 2306 - .min_field_value = ID_AA64PFR0_AMU, 2283 + .min_field_value = ID_AA64PFR0_EL1_AMU_IMP, 2307 2284 .cpu_enable = cpu_amu_enable, 2308 2285 }, 2309 2286 #endif /* CONFIG_ARM64_AMU_EXTN */ ··· 2326 2303 .capability = ARM64_HAS_STAGE2_FWB, 2327 2304 .sys_reg = SYS_ID_AA64MMFR2_EL1, 2328 2305 .sign = FTR_UNSIGNED, 2329 - .field_pos = ID_AA64MMFR2_FWB_SHIFT, 2306 + .field_pos = ID_AA64MMFR2_EL1_FWB_SHIFT, 2330 2307 .field_width = 4, 2331 2308 .min_field_value = 1, 2332 2309 .matches = has_cpuid_feature, ··· 2337 2314 .capability = ARM64_HAS_ARMv8_4_TTL, 2338 2315 .sys_reg = SYS_ID_AA64MMFR2_EL1, 2339 2316 .sign = FTR_UNSIGNED, 2340 - .field_pos = ID_AA64MMFR2_TTL_SHIFT, 2317 + .field_pos = ID_AA64MMFR2_EL1_TTL_SHIFT, 2341 2318 .field_width = 4, 2342 2319 .min_field_value = 1, 2343 2320 .matches = has_cpuid_feature, ··· 2367 2344 .capability = ARM64_HW_DBM, 2368 2345 .sys_reg = SYS_ID_AA64MMFR1_EL1, 2369 2346 .sign = FTR_UNSIGNED, 2370 - .field_pos = ID_AA64MMFR1_HADBS_SHIFT, 2347 + .field_pos = ID_AA64MMFR1_EL1_HAFDBS_SHIFT, 2371 2348 .field_width = 4, 2372 2349 .min_field_value = 2, 2373 2350 .matches = has_hw_dbm, ··· 2390 2367 .type = ARM64_CPUCAP_SYSTEM_FEATURE, 2391 2368 .matches = has_cpuid_feature, 2392 2369 .sys_reg = SYS_ID_AA64PFR1_EL1, 2393 - .field_pos = ID_AA64PFR1_SSBS_SHIFT, 2370 + .field_pos = ID_AA64PFR1_EL1_SSBS_SHIFT, 2394 2371 .field_width = 4, 2395 2372 .sign = FTR_UNSIGNED, 2396 - 
.min_field_value = ID_AA64PFR1_SSBS_PSTATE_ONLY, 2373 + .min_field_value = ID_AA64PFR1_EL1_SSBS_IMP, 2397 2374 }, 2398 2375 #ifdef CONFIG_ARM64_CNP 2399 2376 { ··· 2403 2380 .matches = has_useable_cnp, 2404 2381 .sys_reg = SYS_ID_AA64MMFR2_EL1, 2405 2382 .sign = FTR_UNSIGNED, 2406 - .field_pos = ID_AA64MMFR2_CNP_SHIFT, 2383 + .field_pos = ID_AA64MMFR2_EL1_CnP_SHIFT, 2407 2384 .field_width = 4, 2408 2385 .min_field_value = 1, 2409 2386 .cpu_enable = cpu_enable_cnp, ··· 2508 2485 .type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE, 2509 2486 .matches = can_use_gic_priorities, 2510 2487 .sys_reg = SYS_ID_AA64PFR0_EL1, 2511 - .field_pos = ID_AA64PFR0_GIC_SHIFT, 2488 + .field_pos = ID_AA64PFR0_EL1_GIC_SHIFT, 2512 2489 .field_width = 4, 2513 2490 .sign = FTR_UNSIGNED, 2514 2491 .min_field_value = 1, ··· 2522 2499 .sys_reg = SYS_ID_AA64MMFR2_EL1, 2523 2500 .sign = FTR_UNSIGNED, 2524 2501 .field_width = 4, 2525 - .field_pos = ID_AA64MMFR2_E0PD_SHIFT, 2502 + .field_pos = ID_AA64MMFR2_EL1_E0PD_SHIFT, 2526 2503 .matches = has_cpuid_feature, 2527 2504 .min_field_value = 1, 2528 2505 .cpu_enable = cpu_enable_e0pd, ··· 2551 2528 .matches = has_cpuid_feature, 2552 2529 .cpu_enable = bti_enable, 2553 2530 .sys_reg = SYS_ID_AA64PFR1_EL1, 2554 - .field_pos = ID_AA64PFR1_BT_SHIFT, 2531 + .field_pos = ID_AA64PFR1_EL1_BT_SHIFT, 2555 2532 .field_width = 4, 2556 - .min_field_value = ID_AA64PFR1_BT_BTI, 2533 + .min_field_value = ID_AA64PFR1_EL1_BT_IMP, 2557 2534 .sign = FTR_UNSIGNED, 2558 2535 }, 2559 2536 #endif ··· 2564 2541 .type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE, 2565 2542 .matches = has_cpuid_feature, 2566 2543 .sys_reg = SYS_ID_AA64PFR1_EL1, 2567 - .field_pos = ID_AA64PFR1_MTE_SHIFT, 2544 + .field_pos = ID_AA64PFR1_EL1_MTE_SHIFT, 2568 2545 .field_width = 4, 2569 - .min_field_value = ID_AA64PFR1_MTE, 2546 + .min_field_value = ID_AA64PFR1_EL1_MTE_MTE2, 2570 2547 .sign = FTR_UNSIGNED, 2571 2548 .cpu_enable = cpu_enable_mte, 2572 2549 }, ··· 2576 2553 .type = ARM64_CPUCAP_BOOT_CPU_FEATURE, 2577 2554 .matches = has_cpuid_feature, 2578 2555 .sys_reg = SYS_ID_AA64PFR1_EL1, 2579 - .field_pos = ID_AA64PFR1_MTE_SHIFT, 2556 + .field_pos = ID_AA64PFR1_EL1_MTE_SHIFT, 2580 2557 .field_width = 4, 2581 - .min_field_value = ID_AA64PFR1_MTE_ASYMM, 2558 + .min_field_value = ID_AA64PFR1_EL1_MTE_MTE3, 2582 2559 .sign = FTR_UNSIGNED, 2583 2560 }, 2584 2561 #endif /* CONFIG_ARM64_MTE */ ··· 2600 2577 .capability = ARM64_SME, 2601 2578 .sys_reg = SYS_ID_AA64PFR1_EL1, 2602 2579 .sign = FTR_UNSIGNED, 2603 - .field_pos = ID_AA64PFR1_SME_SHIFT, 2580 + .field_pos = ID_AA64PFR1_EL1_SME_SHIFT, 2604 2581 .field_width = 4, 2605 - .min_field_value = ID_AA64PFR1_SME, 2582 + .min_field_value = ID_AA64PFR1_EL1_SME_IMP, 2606 2583 .matches = has_cpuid_feature, 2607 2584 .cpu_enable = sme_kernel_enable, 2608 2585 }, ··· 2637 2614 .type = ARM64_CPUCAP_SYSTEM_FEATURE, 2638 2615 .sys_reg = SYS_ID_AA64MMFR1_EL1, 2639 2616 .sign = FTR_UNSIGNED, 2640 - .field_pos = ID_AA64MMFR1_TIDCP1_SHIFT, 2617 + .field_pos = ID_AA64MMFR1_EL1_TIDCP1_SHIFT, 2641 2618 .field_width = 4, 2642 - .min_field_value = ID_AA64MMFR1_TIDCP1_IMP, 2619 + .min_field_value = ID_AA64MMFR1_EL1_TIDCP1_IMP, 2643 2620 .matches = has_cpuid_feature, 2644 2621 .cpu_enable = cpu_trap_el0_impdef, 2645 2622 }, ··· 2647 2624 }; 2648 2625 2649 2626 #define HWCAP_CPUID_MATCH(reg, field, width, s, min_value) \ 2650 - .matches = has_cpuid_feature, \ 2627 + .matches = has_user_cpuid_feature, \ 2651 2628 .sys_reg = reg, \ 2652 2629 .field_pos = field, \ 2653 2630 .field_width = width, \ ··· 2731 2708 
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_EL1_TS_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FLAGM), 2732 2709 HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_EL1_TS_SHIFT, 4, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_FLAGM2), 2733 2710 HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_EL1_RNDR_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_RNG), 2734 - HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, 4, FTR_SIGNED, 0, CAP_HWCAP, KERNEL_HWCAP_FP), 2735 - HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, 4, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FPHP), 2736 - HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, 4, FTR_SIGNED, 0, CAP_HWCAP, KERNEL_HWCAP_ASIMD), 2737 - HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, 4, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDHP), 2738 - HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_DIT_SHIFT, 4, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DIT), 2711 + HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_EL1_FP_SHIFT, 4, FTR_SIGNED, 0, CAP_HWCAP, KERNEL_HWCAP_FP), 2712 + HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_EL1_FP_SHIFT, 4, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FPHP), 2713 + HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_EL1_AdvSIMD_SHIFT, 4, FTR_SIGNED, 0, CAP_HWCAP, KERNEL_HWCAP_ASIMD), 2714 + HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_EL1_AdvSIMD_SHIFT, 4, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDHP), 2715 + HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_EL1_DIT_SHIFT, 4, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DIT), 2739 2716 HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_EL1_DPB_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DCPOP), 2740 2717 HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_EL1_DPB_SHIFT, 4, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_DCPODP), 2741 2718 HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_EL1_JSCVT_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_JSCVT), ··· 2748 2725 HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_EL1_BF16_SHIFT, 4, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_EBF16), 2749 2726 HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_EL1_DGH_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DGH), 2750 2727 HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_EL1_I8MM_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_I8MM), 2751 - HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_USCAT), 2728 + HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_EL1_AT_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_USCAT), 2752 2729 #ifdef CONFIG_ARM64_SVE 2753 - HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, KERNEL_HWCAP_SVE), 2730 + HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_EL1_SVE_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR0_EL1_SVE_IMP, CAP_HWCAP, KERNEL_HWCAP_SVE), 2754 2731 HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_EL1_SVEver_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_EL1_SVEver_SVE2, CAP_HWCAP, KERNEL_HWCAP_SVE2), 2755 2732 HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_EL1_AES_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_EL1_AES_IMP, CAP_HWCAP, KERNEL_HWCAP_SVEAES), 2756 2733 HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_EL1_AES_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_EL1_AES_PMULL128, CAP_HWCAP, KERNEL_HWCAP_SVEPMULL), 2757 2734 HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_EL1_BitPerm_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_EL1_BitPerm_IMP, CAP_HWCAP, KERNEL_HWCAP_SVEBITPERM), 2758 2735 HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_EL1_BF16_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_EL1_BF16_IMP, CAP_HWCAP, KERNEL_HWCAP_SVEBF16), 2736 + HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, 
ID_AA64ZFR0_EL1_BF16_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_EL1_BF16_EBF16, CAP_HWCAP, KERNEL_HWCAP_SVE_EBF16), 2759 2737 HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_EL1_SHA3_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_EL1_SHA3_IMP, CAP_HWCAP, KERNEL_HWCAP_SVESHA3), 2760 2738 HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_EL1_SM4_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_EL1_SM4_IMP, CAP_HWCAP, KERNEL_HWCAP_SVESM4), 2761 2739 HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_EL1_I8MM_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_EL1_I8MM_IMP, CAP_HWCAP, KERNEL_HWCAP_SVEI8MM), 2762 2740 HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_EL1_F32MM_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_EL1_F32MM_IMP, CAP_HWCAP, KERNEL_HWCAP_SVEF32MM), 2763 2741 HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_EL1_F64MM_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_EL1_F64MM_IMP, CAP_HWCAP, KERNEL_HWCAP_SVEF64MM), 2764 2742 #endif 2765 - HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, KERNEL_HWCAP_SSBS), 2743 + HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_EL1_SSBS_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR1_EL1_SSBS_SSBS2, CAP_HWCAP, KERNEL_HWCAP_SSBS), 2766 2744 #ifdef CONFIG_ARM64_BTI 2767 - HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_BT_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR1_BT_BTI, CAP_HWCAP, KERNEL_HWCAP_BTI), 2745 + HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_EL1_BT_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR1_EL1_BT_IMP, CAP_HWCAP, KERNEL_HWCAP_BTI), 2768 2746 #endif 2769 2747 #ifdef CONFIG_ARM64_PTR_AUTH 2770 2748 HWCAP_MULTI_CAP(ptr_auth_hwcap_addr_matches, CAP_HWCAP, KERNEL_HWCAP_PACA), 2771 2749 HWCAP_MULTI_CAP(ptr_auth_hwcap_gen_matches, CAP_HWCAP, KERNEL_HWCAP_PACG), 2772 2750 #endif 2773 2751 #ifdef CONFIG_ARM64_MTE 2774 - HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_MTE_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR1_MTE, CAP_HWCAP, KERNEL_HWCAP_MTE), 2775 - HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_MTE_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR1_MTE_ASYMM, CAP_HWCAP, KERNEL_HWCAP_MTE3), 2752 + HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_EL1_MTE_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR1_EL1_MTE_MTE2, CAP_HWCAP, KERNEL_HWCAP_MTE), 2753 + HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_EL1_MTE_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR1_EL1_MTE_MTE3, CAP_HWCAP, KERNEL_HWCAP_MTE3), 2776 2754 #endif /* CONFIG_ARM64_MTE */ 2777 - HWCAP_CAP(SYS_ID_AA64MMFR0_EL1, ID_AA64MMFR0_ECV_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ECV), 2778 - HWCAP_CAP(SYS_ID_AA64MMFR1_EL1, ID_AA64MMFR1_AFP_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_AFP), 2755 + HWCAP_CAP(SYS_ID_AA64MMFR0_EL1, ID_AA64MMFR0_EL1_ECV_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ECV), 2756 + HWCAP_CAP(SYS_ID_AA64MMFR1_EL1, ID_AA64MMFR1_EL1_AFP_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_AFP), 2779 2757 HWCAP_CAP(SYS_ID_AA64ISAR2_EL1, ID_AA64ISAR2_EL1_RPRES_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_RPRES), 2780 2758 HWCAP_CAP(SYS_ID_AA64ISAR2_EL1, ID_AA64ISAR2_EL1_WFxT_SHIFT, 4, FTR_UNSIGNED, ID_AA64ISAR2_EL1_WFxT_IMP, CAP_HWCAP, KERNEL_HWCAP_WFXT), 2781 2759 #ifdef CONFIG_ARM64_SME 2782 - HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SME_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR1_SME, CAP_HWCAP, KERNEL_HWCAP_SME), 2760 + HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_EL1_SME_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR1_EL1_SME_IMP, CAP_HWCAP, KERNEL_HWCAP_SME), 2783 2761 HWCAP_CAP(SYS_ID_AA64SMFR0_EL1, ID_AA64SMFR0_EL1_FA64_SHIFT, 1, FTR_UNSIGNED, ID_AA64SMFR0_EL1_FA64_IMP, CAP_HWCAP, KERNEL_HWCAP_SME_FA64), 2784 2762 HWCAP_CAP(SYS_ID_AA64SMFR0_EL1, ID_AA64SMFR0_EL1_I16I64_SHIFT, 4, 
FTR_UNSIGNED, ID_AA64SMFR0_EL1_I16I64_IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I64), 2785 2763 HWCAP_CAP(SYS_ID_AA64SMFR0_EL1, ID_AA64SMFR0_EL1_F64F64_SHIFT, 1, FTR_UNSIGNED, ID_AA64SMFR0_EL1_F64F64_IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F64F64), ··· 3126 3102 3127 3103 /* Verify IPA range */ 3128 3104 parange = cpuid_feature_extract_unsigned_field(mmfr0, 3129 - ID_AA64MMFR0_PARANGE_SHIFT); 3105 + ID_AA64MMFR0_EL1_PARANGE_SHIFT); 3130 3106 ipa_max = id_aa64mmfr0_parange_to_phys_shift(parange); 3131 3107 if (ipa_max < get_kvm_ipa_limit()) { 3132 3108 pr_crit("CPU%d: IPA range mismatch\n", smp_processor_id());
+1
arch/arm64/kernel/cpuinfo.c
··· 115 115 [KERNEL_HWCAP_SME_FA64] = "smefa64", 116 116 [KERNEL_HWCAP_WFXT] = "wfxt", 117 117 [KERNEL_HWCAP_EBF16] = "ebf16", 118 + [KERNEL_HWCAP_SVE_EBF16] = "sveebf16", 118 119 }; 119 120 120 121 #ifdef CONFIG_COMPAT
+1 -1
arch/arm64/kernel/debug-monitors.c
··· 28 28 u8 debug_monitors_arch(void) 29 29 { 30 30 return cpuid_feature_extract_unsigned_field(read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1), 31 - ID_AA64DFR0_DEBUGVER_SHIFT); 31 + ID_AA64DFR0_EL1_DebugVer_SHIFT); 32 32 } 33 33 34 34 /*
+22 -10
arch/arm64/kernel/entry-common.c
··· 379 379 exit_to_kernel_mode(regs); 380 380 } 381 381 382 - static void noinstr el1_undef(struct pt_regs *regs) 382 + static void noinstr el1_undef(struct pt_regs *regs, unsigned long esr) 383 383 { 384 384 enter_from_kernel_mode(regs); 385 385 local_daif_inherit(regs); 386 - do_undefinstr(regs); 386 + do_undefinstr(regs, esr); 387 + local_daif_mask(); 388 + exit_to_kernel_mode(regs); 389 + } 390 + 391 + static void noinstr el1_bti(struct pt_regs *regs, unsigned long esr) 392 + { 393 + enter_from_kernel_mode(regs); 394 + local_daif_inherit(regs); 395 + do_el1_bti(regs, esr); 387 396 local_daif_mask(); 388 397 exit_to_kernel_mode(regs); 389 398 } ··· 411 402 { 412 403 enter_from_kernel_mode(regs); 413 404 local_daif_inherit(regs); 414 - do_ptrauth_fault(regs, esr); 405 + do_el1_fpac(regs, esr); 415 406 local_daif_mask(); 416 407 exit_to_kernel_mode(regs); 417 408 } ··· 434 425 break; 435 426 case ESR_ELx_EC_SYS64: 436 427 case ESR_ELx_EC_UNKNOWN: 437 - el1_undef(regs); 428 + el1_undef(regs, esr); 429 + break; 430 + case ESR_ELx_EC_BTI: 431 + el1_bti(regs, esr); 438 432 break; 439 433 case ESR_ELx_EC_BREAKPT_CUR: 440 434 case ESR_ELx_EC_SOFTSTP_CUR: ··· 594 582 exit_to_user_mode(regs); 595 583 } 596 584 597 - static void noinstr el0_undef(struct pt_regs *regs) 585 + static void noinstr el0_undef(struct pt_regs *regs, unsigned long esr) 598 586 { 599 587 enter_from_user_mode(regs); 600 588 local_daif_restore(DAIF_PROCCTX); 601 - do_undefinstr(regs); 589 + do_undefinstr(regs, esr); 602 590 exit_to_user_mode(regs); 603 591 } 604 592 ··· 606 594 { 607 595 enter_from_user_mode(regs); 608 596 local_daif_restore(DAIF_PROCCTX); 609 - do_bti(regs); 597 + do_el0_bti(regs); 610 598 exit_to_user_mode(regs); 611 599 } 612 600 ··· 641 629 { 642 630 enter_from_user_mode(regs); 643 631 local_daif_restore(DAIF_PROCCTX); 644 - do_ptrauth_fault(regs, esr); 632 + do_el0_fpac(regs, esr); 645 633 exit_to_user_mode(regs); 646 634 } 647 635 ··· 682 670 el0_pc(regs, esr); 683 671 break; 684 672 case ESR_ELx_EC_UNKNOWN: 685 - el0_undef(regs); 673 + el0_undef(regs, esr); 686 674 break; 687 675 case ESR_ELx_EC_BTI: 688 676 el0_bti(regs); ··· 800 788 case ESR_ELx_EC_CP14_MR: 801 789 case ESR_ELx_EC_CP14_LS: 802 790 case ESR_ELx_EC_CP14_64: 803 - el0_undef(regs); 791 + el0_undef(regs, esr); 804 792 break; 805 793 case ESR_ELx_EC_CP15_32: 806 794 case ESR_ELx_EC_CP15_64:
+16 -1
arch/arm64/kernel/ftrace.c
··· 217 217 unsigned long pc = rec->ip; 218 218 u32 old = 0, new; 219 219 220 + new = aarch64_insn_gen_nop(); 221 + 222 + /* 223 + * When using mcount, callsites in modules may have been initialized to 224 + * call an arbitrary module PLT (which redirects to the _mcount stub) 225 + * rather than the ftrace PLT we'll use at runtime (which redirects to 226 + * the ftrace trampoline). We can ignore the old PLT when initializing 227 + * the callsite. 228 + * 229 + * Note: 'mod' is only set at module load time. 230 + */ 231 + if (!IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) && 232 + IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) && mod) { 233 + return aarch64_insn_patch_text_nosync((void *)pc, new); 234 + } 235 + 220 236 if (!ftrace_find_callable_addr(rec, mod, &addr)) 221 237 return -EINVAL; 222 238 223 239 old = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK); 224 - new = aarch64_insn_gen_nop(); 225 240 226 241 return ftrace_modify_code(pc, old, new, true); 227 242 }
+5 -5
arch/arm64/kernel/head.S
··· 99 99 */ 100 100 #if VA_BITS > 48 101 101 mrs_s x0, SYS_ID_AA64MMFR2_EL1 102 - tst x0, #0xf << ID_AA64MMFR2_LVA_SHIFT 102 + tst x0, #0xf << ID_AA64MMFR2_EL1_VARange_SHIFT 103 103 mov x0, #VA_BITS 104 104 mov x25, #VA_BITS_MIN 105 105 csel x25, x25, x0, eq ··· 656 656 */ 657 657 SYM_FUNC_START(__enable_mmu) 658 658 mrs x3, ID_AA64MMFR0_EL1 659 - ubfx x3, x3, #ID_AA64MMFR0_TGRAN_SHIFT, 4 660 - cmp x3, #ID_AA64MMFR0_TGRAN_SUPPORTED_MIN 659 + ubfx x3, x3, #ID_AA64MMFR0_EL1_TGRAN_SHIFT, 4 660 + cmp x3, #ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN 661 661 b.lt __no_granule_support 662 - cmp x3, #ID_AA64MMFR0_TGRAN_SUPPORTED_MAX 662 + cmp x3, #ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX 663 663 b.gt __no_granule_support 664 664 phys_to_ttbr x2, x2 665 665 msr ttbr0_el1, x2 // load TTBR0 ··· 677 677 b.ne 2f 678 678 679 679 mrs_s x0, SYS_ID_AA64MMFR2_EL1 680 - and x0, x0, #(0xf << ID_AA64MMFR2_LVA_SHIFT) 680 + and x0, x0, #(0xf << ID_AA64MMFR2_EL1_VARange_SHIFT) 681 681 cbnz x0, 2f 682 682 683 683 update_early_cpu_boot_status \
+4 -4
arch/arm64/kernel/hyp-stub.S
··· 98 98 SYM_CODE_END(elx_sync) 99 99 100 100 SYM_CODE_START_LOCAL(__finalise_el2) 101 - check_override id_aa64pfr0 ID_AA64PFR0_SVE_SHIFT .Linit_sve .Lskip_sve 101 + check_override id_aa64pfr0 ID_AA64PFR0_EL1_SVE_SHIFT .Linit_sve .Lskip_sve 102 102 103 103 .Linit_sve: /* SVE register access */ 104 104 mrs x0, cptr_el2 // Disable SVE traps ··· 109 109 msr_s SYS_ZCR_EL2, x1 // length for EL1. 110 110 111 111 .Lskip_sve: 112 - check_override id_aa64pfr1 ID_AA64PFR1_SME_SHIFT .Linit_sme .Lskip_sme 112 + check_override id_aa64pfr1 ID_AA64PFR1_EL1_SME_SHIFT .Linit_sme .Lskip_sme 113 113 114 114 .Linit_sme: /* SME register access and priority mapping */ 115 115 mrs x0, cptr_el2 // Disable SME traps ··· 142 142 msr_s SYS_SMPRIMAP_EL2, xzr // Make all priorities equal 143 143 144 144 mrs x1, id_aa64mmfr1_el1 // HCRX_EL2 present? 145 - ubfx x1, x1, #ID_AA64MMFR1_HCX_SHIFT, #4 145 + ubfx x1, x1, #ID_AA64MMFR1_EL1_HCX_SHIFT, #4 146 146 cbz x1, .Lskip_sme 147 147 148 148 mrs_s x1, SYS_HCRX_EL2 ··· 157 157 tbnz x1, #0, 1f 158 158 159 159 // Needs to be VHE capable, obviously 160 - check_override id_aa64mmfr1 ID_AA64MMFR1_VHE_SHIFT 2f 1f 160 + check_override id_aa64mmfr1 ID_AA64MMFR1_EL1_VH_SHIFT 2f 1f 161 161 162 162 1: mov_q x0, HVC_STUB_ERR 163 163 eret
+5 -5
arch/arm64/kernel/idreg-override.c
··· 50 50 .name = "id_aa64mmfr1", 51 51 .override = &id_aa64mmfr1_override, 52 52 .fields = { 53 - FIELD("vh", ID_AA64MMFR1_VHE_SHIFT, mmfr1_vh_filter), 53 + FIELD("vh", ID_AA64MMFR1_EL1_VH_SHIFT, mmfr1_vh_filter), 54 54 {} 55 55 }, 56 56 }; ··· 74 74 .name = "id_aa64pfr0", 75 75 .override = &id_aa64pfr0_override, 76 76 .fields = { 77 - FIELD("sve", ID_AA64PFR0_SVE_SHIFT, pfr0_sve_filter), 77 + FIELD("sve", ID_AA64PFR0_EL1_SVE_SHIFT, pfr0_sve_filter), 78 78 {} 79 79 }, 80 80 }; ··· 98 98 .name = "id_aa64pfr1", 99 99 .override = &id_aa64pfr1_override, 100 100 .fields = { 101 - FIELD("bt", ID_AA64PFR1_BT_SHIFT, NULL ), 102 - FIELD("mte", ID_AA64PFR1_MTE_SHIFT, NULL), 103 - FIELD("sme", ID_AA64PFR1_SME_SHIFT, pfr1_sme_filter), 101 + FIELD("bt", ID_AA64PFR1_EL1_BT_SHIFT, NULL ), 102 + FIELD("mte", ID_AA64PFR1_EL1_MTE_SHIFT, NULL), 103 + FIELD("sme", ID_AA64PFR1_EL1_SME_SHIFT, pfr1_sme_filter), 104 104 {} 105 105 }, 106 106 };
+2 -1
arch/arm64/kernel/module-plts.c
··· 37 37 return plt; 38 38 } 39 39 40 - bool plt_entries_equal(const struct plt_entry *a, const struct plt_entry *b) 40 + static bool plt_entries_equal(const struct plt_entry *a, 41 + const struct plt_entry *b) 41 42 { 42 43 u64 p, q; 43 44
-15
arch/arm64/kernel/module.c
··· 476 476 return -ENOEXEC; 477 477 } 478 478 479 - static const Elf_Shdr *find_section(const Elf_Ehdr *hdr, 480 - const Elf_Shdr *sechdrs, 481 - const char *name) 482 - { 483 - const Elf_Shdr *s, *se; 484 - const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset; 485 - 486 - for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) { 487 - if (strcmp(name, secstrs + s->sh_name) == 0) 488 - return s; 489 - } 490 - 491 - return NULL; 492 - } 493 - 494 479 static inline void __init_plt(struct plt_entry *plt, unsigned long addr) 495 480 { 496 481 *plt = get_plt_entry(addr, plt);
+4 -4
arch/arm64/kernel/perf_event.c
··· 390 390 */ 391 391 static bool armv8pmu_has_long_event(struct arm_pmu *cpu_pmu) 392 392 { 393 - return (cpu_pmu->pmuver >= ID_AA64DFR0_PMUVER_8_5); 393 + return (cpu_pmu->pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P5); 394 394 } 395 395 396 396 static inline bool armv8pmu_event_has_user_read(struct perf_event *event) ··· 1145 1145 1146 1146 dfr0 = read_sysreg(id_aa64dfr0_el1); 1147 1147 pmuver = cpuid_feature_extract_unsigned_field(dfr0, 1148 - ID_AA64DFR0_PMUVER_SHIFT); 1149 - if (pmuver == ID_AA64DFR0_PMUVER_IMP_DEF || pmuver == 0) 1148 + ID_AA64DFR0_EL1_PMUVer_SHIFT); 1149 + if (pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF || pmuver == 0) 1150 1150 return; 1151 1151 1152 1152 cpu_pmu->pmuver = pmuver; ··· 1172 1172 pmceid, ARMV8_PMUV3_MAX_COMMON_EVENTS); 1173 1173 1174 1174 /* store PMMIR_EL1 register for sysfs */ 1175 - if (pmuver >= ID_AA64DFR0_PMUVER_8_4 && (pmceid_raw[1] & BIT(31))) 1175 + if (pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P4 && (pmceid_raw[1] & BIT(31))) 1176 1176 cpu_pmu->reg_pmmir = read_cpuid(PMMIR_EL1); 1177 1177 else 1178 1178 cpu_pmu->reg_pmmir = 0;
+2 -2
arch/arm64/kernel/proton-pack.c
··· 168 168 169 169 /* If the CPU has CSV2 set, we're safe */ 170 170 pfr0 = read_cpuid(ID_AA64PFR0_EL1); 171 - if (cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT)) 171 + if (cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_CSV2_SHIFT)) 172 172 return SPECTRE_UNAFFECTED; 173 173 174 174 /* Alternatively, we have a list of unaffected CPUs */ ··· 945 945 mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); 946 946 947 947 return cpuid_feature_extract_unsigned_field(mmfr1, 948 - ID_AA64MMFR1_ECBHB_SHIFT); 948 + ID_AA64MMFR1_EL1_ECBHB_SHIFT); 949 949 } 950 950 951 951 bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry,
+21 -6
arch/arm64/kernel/ptrace.c
··· 121 121 { 122 122 return ((addr & ~(THREAD_SIZE - 1)) == 123 123 (kernel_stack_pointer(regs) & ~(THREAD_SIZE - 1))) || 124 - on_irq_stack(addr, sizeof(unsigned long), NULL); 124 + on_irq_stack(addr, sizeof(unsigned long)); 125 125 } 126 126 127 127 /** ··· 666 666 static int tls_get(struct task_struct *target, const struct user_regset *regset, 667 667 struct membuf to) 668 668 { 669 + int ret; 670 + 669 671 if (target == current) 670 672 tls_preserve_current_state(); 671 673 672 - return membuf_store(&to, target->thread.uw.tp_value); 674 + ret = membuf_store(&to, target->thread.uw.tp_value); 675 + if (system_supports_tpidr2()) 676 + ret = membuf_store(&to, target->thread.tpidr2_el0); 677 + else 678 + ret = membuf_zero(&to, sizeof(u64)); 679 + 680 + return ret; 673 681 } 674 682 675 683 static int tls_set(struct task_struct *target, const struct user_regset *regset, ··· 685 677 const void *kbuf, const void __user *ubuf) 686 678 { 687 679 int ret; 688 - unsigned long tls = target->thread.uw.tp_value; 680 + unsigned long tls[2]; 689 681 690 - ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &tls, 0, -1); 682 + tls[0] = target->thread.uw.tp_value; 683 + if (system_supports_sme()) 684 + tls[1] = target->thread.tpidr2_el0; 685 + 686 + ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, tls, 0, count); 691 687 if (ret) 692 688 return ret; 693 689 694 - target->thread.uw.tp_value = tls; 690 + target->thread.uw.tp_value = tls[0]; 691 + if (system_supports_sme()) 692 + target->thread.tpidr2_el0 = tls[1]; 693 + 695 694 return ret; 696 695 } 697 696 ··· 1407 1392 }, 1408 1393 [REGSET_TLS] = { 1409 1394 .core_note_type = NT_ARM_TLS, 1410 - .n = 1, 1395 + .n = 2, 1411 1396 .size = sizeof(void *), 1412 1397 .align = sizeof(void *), 1413 1398 .regset_get = tls_get,
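Note: with this change the NT_ARM_TLS regset carries two values - TPIDR_EL0 followed by TPIDR2_EL0 (reported as zero when TPIDR2 is not supported). A hedged debugger-side sketch, with 'pid' standing in for an already-attached tracee:

#include <elf.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

static void dump_tls(pid_t pid)
{
	uint64_t tls[2] = { 0, 0 };
	struct iovec iov = { .iov_base = tls, .iov_len = sizeof(tls) };

	/* PTRACE_GETREGSET updates iov_len to the amount actually filled in. */
	if (ptrace(PTRACE_GETREGSET, pid, NT_ARM_TLS, &iov) == 0)
		printf("TPIDR_EL0=%#llx TPIDR2_EL0=%#llx\n",
		       (unsigned long long)tls[0],
		       (unsigned long long)tls[1]);
}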
-32
arch/arm64/kernel/sdei.c
··· 162 162 return err; 163 163 } 164 164 165 - static bool on_sdei_normal_stack(unsigned long sp, unsigned long size, 166 - struct stack_info *info) 167 - { 168 - unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_normal_ptr); 169 - unsigned long high = low + SDEI_STACK_SIZE; 170 - 171 - return on_stack(sp, size, low, high, STACK_TYPE_SDEI_NORMAL, info); 172 - } 173 - 174 - static bool on_sdei_critical_stack(unsigned long sp, unsigned long size, 175 - struct stack_info *info) 176 - { 177 - unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_critical_ptr); 178 - unsigned long high = low + SDEI_STACK_SIZE; 179 - 180 - return on_stack(sp, size, low, high, STACK_TYPE_SDEI_CRITICAL, info); 181 - } 182 - 183 - bool _on_sdei_stack(unsigned long sp, unsigned long size, struct stack_info *info) 184 - { 185 - if (!IS_ENABLED(CONFIG_VMAP_STACK)) 186 - return false; 187 - 188 - if (on_sdei_critical_stack(sp, size, info)) 189 - return true; 190 - 191 - if (on_sdei_normal_stack(sp, size, info)) 192 - return true; 193 - 194 - return false; 195 - } 196 - 197 165 unsigned long sdei_arch_get_entry_point(int conduit) 198 166 { 199 167 /*
+38 -28
arch/arm64/kernel/stacktrace.c
··· 68 68 } 69 69 70 70 /* 71 - * We can only safely access per-cpu stacks from current in a non-preemptible 72 - * context. 73 - */ 74 - static bool on_accessible_stack(const struct task_struct *tsk, 75 - unsigned long sp, unsigned long size, 76 - struct stack_info *info) 77 - { 78 - if (info) 79 - info->type = STACK_TYPE_UNKNOWN; 80 - 81 - if (on_task_stack(tsk, sp, size, info)) 82 - return true; 83 - if (tsk != current || preemptible()) 84 - return false; 85 - if (on_irq_stack(sp, size, info)) 86 - return true; 87 - if (on_overflow_stack(sp, size, info)) 88 - return true; 89 - if (on_sdei_stack(sp, size, info)) 90 - return true; 91 - 92 - return false; 93 - } 94 - 95 - /* 96 71 * Unwind from one frame record (A) to the next frame record (B). 97 72 * 98 73 * We terminate early if the location of B indicates a malformed chain of frame ··· 78 103 { 79 104 struct task_struct *tsk = state->task; 80 105 unsigned long fp = state->fp; 81 - struct stack_info info; 82 106 int err; 83 107 84 108 /* Final frame; nothing to unwind */ 85 109 if (fp == (unsigned long)task_pt_regs(tsk)->stackframe) 86 110 return -ENOENT; 87 111 88 - err = unwind_next_common(state, &info, on_accessible_stack, NULL); 112 + err = unwind_next_frame_record(state); 89 113 if (err) 90 114 return err; 91 115 ··· 164 190 barrier(); 165 191 } 166 192 193 + /* 194 + * Per-cpu stacks are only accessible when unwinding the current task in a 195 + * non-preemptible context. 196 + */ 197 + #define STACKINFO_CPU(name) \ 198 + ({ \ 199 + ((task == current) && !preemptible()) \ 200 + ? stackinfo_get_##name() \ 201 + : stackinfo_get_unknown(); \ 202 + }) 203 + 204 + /* 205 + * SDEI stacks are only accessible when unwinding the current task in an NMI 206 + * context. 207 + */ 208 + #define STACKINFO_SDEI(name) \ 209 + ({ \ 210 + ((task == current) && in_nmi()) \ 211 + ? stackinfo_get_sdei_##name() \ 212 + : stackinfo_get_unknown(); \ 213 + }) 214 + 167 215 noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry, 168 216 void *cookie, struct task_struct *task, 169 217 struct pt_regs *regs) 170 218 { 171 - struct unwind_state state; 219 + struct stack_info stacks[] = { 220 + stackinfo_get_task(task), 221 + STACKINFO_CPU(irq), 222 + #if defined(CONFIG_VMAP_STACK) 223 + STACKINFO_CPU(overflow), 224 + #endif 225 + #if defined(CONFIG_VMAP_STACK) && defined(CONFIG_ARM_SDE_INTERFACE) 226 + STACKINFO_SDEI(normal), 227 + STACKINFO_SDEI(critical), 228 + #endif 229 + }; 230 + struct unwind_state state = { 231 + .stacks = stacks, 232 + .nr_stacks = ARRAY_SIZE(stacks), 233 + }; 172 234 173 235 if (regs) { 174 236 if (task != current)
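Note: callers are unchanged by this rework - the unwinder is still driven through the generic arch_stack_walk() interface, and the stacks[] table above is what now bounds each frame transition. A purely illustrative kernel-side sketch of a caller (stack_trace_save() is the usual in-tree entry point):

#include <linux/printk.h>
#include <linux/sched.h>
#include <linux/stacktrace.h>
#include <linux/types.h>

/* Illustrative consumer: print each return address and keep unwinding. */
static bool print_frame(void *cookie, unsigned long addr)
{
	pr_info(" %pS\n", (void *)addr);
	return true;
}

static void dump_current_stack(void)
{
	arch_stack_walk(print_frame, NULL, current, NULL);
}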
+30 -20
arch/arm64/kernel/traps.c
··· 180 180 181 181 #define S_SMP " SMP" 182 182 183 - static int __die(const char *str, int err, struct pt_regs *regs) 183 + static int __die(const char *str, long err, struct pt_regs *regs) 184 184 { 185 185 static int die_counter; 186 186 int ret; 187 187 188 - pr_emerg("Internal error: %s: %x [#%d]" S_PREEMPT S_SMP "\n", 188 + pr_emerg("Internal error: %s: %016lx [#%d]" S_PREEMPT S_SMP "\n", 189 189 str, err, ++die_counter); 190 190 191 191 /* trap and error numbers are mostly meaningless on ARM */ ··· 206 206 /* 207 207 * This function is protected against re-entrancy. 208 208 */ 209 - void die(const char *str, struct pt_regs *regs, int err) 209 + void die(const char *str, struct pt_regs *regs, long err) 210 210 { 211 211 int ret; 212 212 unsigned long flags; ··· 485 485 force_signal_inject(SIGSEGV, code, addr, 0); 486 486 } 487 487 488 - void do_undefinstr(struct pt_regs *regs) 488 + void do_undefinstr(struct pt_regs *regs, unsigned long esr) 489 489 { 490 490 /* check for AArch32 breakpoint instructions */ 491 491 if (!aarch32_break_handler(regs)) ··· 494 494 if (call_undef_hook(regs) == 0) 495 495 return; 496 496 497 - BUG_ON(!user_mode(regs)); 497 + if (!user_mode(regs)) 498 + die("Oops - Undefined instruction", regs, esr); 499 + 498 500 force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0); 499 501 } 500 502 NOKPROBE_SYMBOL(do_undefinstr); 501 503 502 - void do_bti(struct pt_regs *regs) 504 + void do_el0_bti(struct pt_regs *regs) 503 505 { 504 - BUG_ON(!user_mode(regs)); 505 506 force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0); 506 507 } 507 - NOKPROBE_SYMBOL(do_bti); 508 508 509 - void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr) 509 + void do_el1_bti(struct pt_regs *regs, unsigned long esr) 510 510 { 511 - /* 512 - * Unexpected FPAC exception or pointer authentication failure in 513 - * the kernel: kill the task before it does any more harm. 514 - */ 515 - BUG_ON(!user_mode(regs)); 511 + die("Oops - BTI", regs, esr); 512 + } 513 + NOKPROBE_SYMBOL(do_el1_bti); 514 + 515 + void do_el0_fpac(struct pt_regs *regs, unsigned long esr) 516 + { 516 517 force_signal_inject(SIGILL, ILL_ILLOPN, regs->pc, esr); 517 518 } 518 - NOKPROBE_SYMBOL(do_ptrauth_fault); 519 + 520 + void do_el1_fpac(struct pt_regs *regs, unsigned long esr) 521 + { 522 + /* 523 + * Unexpected FPAC exception in the kernel: kill the task before it 524 + * does any more harm. 525 + */ 526 + die("Oops - FPAC", regs, esr); 527 + } 528 + NOKPROBE_SYMBOL(do_el1_fpac) 519 529 520 530 #define __user_cache_maint(insn, address, res) \ 521 531 if (address >= TASK_SIZE_MAX) { \ ··· 768 758 hook_base = cp15_64_hooks; 769 759 break; 770 760 default: 771 - do_undefinstr(regs); 761 + do_undefinstr(regs, esr); 772 762 return; 773 763 } 774 764 ··· 783 773 * EL0. Fall back to our usual undefined instruction handler 784 774 * so that we handle these consistently. 785 775 */ 786 - do_undefinstr(regs); 776 + do_undefinstr(regs, esr); 787 777 } 788 778 NOKPROBE_SYMBOL(do_cp15instr); 789 779 #endif ··· 803 793 * back to our usual undefined instruction handler so that we handle 804 794 * these consistently. 805 795 */ 806 - do_undefinstr(regs); 796 + do_undefinstr(regs, esr); 807 797 } 808 798 NOKPROBE_SYMBOL(do_sysinstr); 809 799 ··· 980 970 { 981 971 switch (report_bug(regs->pc, regs)) { 982 972 case BUG_TRAP_TYPE_BUG: 983 - die("Oops - BUG", regs, 0); 973 + die("Oops - BUG", regs, esr); 984 974 break; 985 975 986 976 case BUG_TRAP_TYPE_WARN: ··· 1048 1038 * This is something that might be fixed at some point in the future. 
1049 1039 */ 1050 1040 if (!recover) 1051 - die("Oops - KASAN", regs, 0); 1041 + die("Oops - KASAN", regs, esr); 1052 1042 1053 1043 /* If thread survives, skip over the brk instruction and continue: */ 1054 1044 arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
-3
arch/arm64/kernel/vdso.c
··· 29 29 #include <asm/signal32.h> 30 30 #include <asm/vdso.h> 31 31 32 - extern char vdso_start[], vdso_end[]; 33 - extern char vdso32_start[], vdso32_end[]; 34 - 35 32 enum vdso_abi { 36 33 VDSO_ABI_AA64, 37 34 VDSO_ABI_AA32,
+7
arch/arm64/kernel/vdso/vdso.lds.S
··· 48 48 PROVIDE (_etext = .); 49 49 PROVIDE (etext = .); 50 50 51 + . = ALIGN(4); 52 + .altinstructions : { 53 + __alt_instructions = .; 54 + *(.altinstructions) 55 + __alt_instructions_end = .; 56 + } 57 + 51 58 .dynamic : { *(.dynamic) } :text :dynamic 52 59 53 60 .rela.dyn : ALIGN(8) { *(.rela .rela*) }
+2 -2
arch/arm64/kvm/debug.c
··· 295 295 * If SPE is present on this CPU and is available at current EL, 296 296 * we may need to check if the host state needs to be saved. 297 297 */ 298 - if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_PMSVER_SHIFT) && 298 + if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_PMSVer_SHIFT) && 299 299 !(read_sysreg_s(SYS_PMBIDR_EL1) & BIT(SYS_PMBIDR_EL1_P_SHIFT))) 300 300 vcpu_set_flag(vcpu, DEBUG_STATE_SAVE_SPE); 301 301 302 302 /* Check if we have TRBE implemented and available at the host */ 303 - if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_TRBE_SHIFT) && 303 + if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_TraceBuffer_SHIFT) && 304 304 !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_PROG)) 305 305 vcpu_set_flag(vcpu, DEBUG_STATE_SAVE_TRBE); 306 306 }
+30 -30
arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
··· 35 35 * - Data Independent Timing 36 36 */ 37 37 #define PVM_ID_AA64PFR0_ALLOW (\ 38 - ARM64_FEATURE_MASK(ID_AA64PFR0_FP) | \ 39 - ARM64_FEATURE_MASK(ID_AA64PFR0_ASIMD) | \ 40 - ARM64_FEATURE_MASK(ID_AA64PFR0_DIT) \ 38 + ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_FP) | \ 39 + ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD) | \ 40 + ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_DIT) \ 41 41 ) 42 42 43 43 /* ··· 49 49 * Supported by KVM 50 50 */ 51 51 #define PVM_ID_AA64PFR0_RESTRICT_UNSIGNED (\ 52 - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0), ID_AA64PFR0_ELx_64BIT_ONLY) | \ 53 - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1), ID_AA64PFR0_ELx_64BIT_ONLY) | \ 54 - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL2), ID_AA64PFR0_ELx_64BIT_ONLY) | \ 55 - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL3), ID_AA64PFR0_ELx_64BIT_ONLY) | \ 56 - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_RAS), ID_AA64PFR0_RAS_V1) \ 52 + FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), ID_AA64PFR0_EL1_ELx_64BIT_ONLY) | \ 53 + FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1), ID_AA64PFR0_EL1_ELx_64BIT_ONLY) | \ 54 + FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL2), ID_AA64PFR0_EL1_ELx_64BIT_ONLY) | \ 55 + FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL3), ID_AA64PFR0_EL1_ELx_64BIT_ONLY) | \ 56 + FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_RAS), ID_AA64PFR0_EL1_RAS_IMP) \ 57 57 ) 58 58 59 59 /* ··· 62 62 * - Speculative Store Bypassing 63 63 */ 64 64 #define PVM_ID_AA64PFR1_ALLOW (\ 65 - ARM64_FEATURE_MASK(ID_AA64PFR1_BT) | \ 66 - ARM64_FEATURE_MASK(ID_AA64PFR1_SSBS) \ 65 + ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_BT) | \ 66 + ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SSBS) \ 67 67 ) 68 68 69 69 /* ··· 74 74 * - Non-context synchronizing exception entry and exit 75 75 */ 76 76 #define PVM_ID_AA64MMFR0_ALLOW (\ 77 - ARM64_FEATURE_MASK(ID_AA64MMFR0_BIGENDEL) | \ 78 - ARM64_FEATURE_MASK(ID_AA64MMFR0_SNSMEM) | \ 79 - ARM64_FEATURE_MASK(ID_AA64MMFR0_BIGENDEL0) | \ 80 - ARM64_FEATURE_MASK(ID_AA64MMFR0_EXS) \ 77 + ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_BIGEND) | \ 78 + ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_SNSMEM) | \ 79 + ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_BIGENDEL0) | \ 80 + ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_EXS) \ 81 81 ) 82 82 83 83 /* ··· 86 86 * - 16-bit ASID 87 87 */ 88 88 #define PVM_ID_AA64MMFR0_RESTRICT_UNSIGNED (\ 89 - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_PARANGE), ID_AA64MMFR0_PARANGE_40) | \ 90 - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_ASID), ID_AA64MMFR0_ASID_16) \ 89 + FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_PARANGE), ID_AA64MMFR0_EL1_PARANGE_40) | \ 90 + FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_ASIDBITS), ID_AA64MMFR0_EL1_ASIDBITS_16) \ 91 91 ) 92 92 93 93 /* ··· 100 100 * - Enhanced Translation Synchronization 101 101 */ 102 102 #define PVM_ID_AA64MMFR1_ALLOW (\ 103 - ARM64_FEATURE_MASK(ID_AA64MMFR1_HADBS) | \ 104 - ARM64_FEATURE_MASK(ID_AA64MMFR1_VMIDBITS) | \ 105 - ARM64_FEATURE_MASK(ID_AA64MMFR1_HPD) | \ 106 - ARM64_FEATURE_MASK(ID_AA64MMFR1_PAN) | \ 107 - ARM64_FEATURE_MASK(ID_AA64MMFR1_SPECSEI) | \ 108 - ARM64_FEATURE_MASK(ID_AA64MMFR1_ETS) \ 103 + ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_HAFDBS) | \ 104 + ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_VMIDBits) | \ 105 + ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_HPDS) | \ 106 + ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_PAN) | \ 107 + ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_SpecSEI) | \ 108 + ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_ETS) \ 109 109 ) 110 110 111 111 /* ··· 120 120 * - E0PDx mechanism 121 121 */ 122 122 #define PVM_ID_AA64MMFR2_ALLOW (\ 123 - ARM64_FEATURE_MASK(ID_AA64MMFR2_CNP) | \ 
124 - ARM64_FEATURE_MASK(ID_AA64MMFR2_UAO) | \ 125 - ARM64_FEATURE_MASK(ID_AA64MMFR2_IESB) | \ 126 - ARM64_FEATURE_MASK(ID_AA64MMFR2_AT) | \ 127 - ARM64_FEATURE_MASK(ID_AA64MMFR2_IDS) | \ 128 - ARM64_FEATURE_MASK(ID_AA64MMFR2_TTL) | \ 129 - ARM64_FEATURE_MASK(ID_AA64MMFR2_BBM) | \ 130 - ARM64_FEATURE_MASK(ID_AA64MMFR2_E0PD) \ 123 + ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_CnP) | \ 124 + ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_UAO) | \ 125 + ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_IESB) | \ 126 + ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_AT) | \ 127 + ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_IDS) | \ 128 + ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_TTL) | \ 129 + ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_BBM) | \ 130 + ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_E0PD) \ 131 131 ) 132 132 133 133 /*
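
For reference, these masks are consumed with the generic bitfield helpers, as the pKVM hunks below also do; a minimal sketch (illustrative only, assuming ARM64_FEATURE_MASK() expands to the 4-bit GENMASK at the field's _SHIFT, as in asm/sysreg.h):

	#include <linux/bitfield.h>
	#include <asm/sysreg.h>

	/* Illustrative check: does an ID_AA64PFR0_EL1 value advertise SVE? */
	static inline bool pfr0_has_sve(u64 pfr0)
	{
		return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE), pfr0) != 0;
	}
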
+19 -19
arch/arm64/kvm/hyp/nvhe/pkvm.c
··· 20 20 u64 cptr_set = 0; 21 21 22 22 /* Protected KVM does not support AArch32 guests. */ 23 - BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0), 24 - PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) != ID_AA64PFR0_ELx_64BIT_ONLY); 25 - BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1), 26 - PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) != ID_AA64PFR0_ELx_64BIT_ONLY); 23 + BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), 24 + PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) != ID_AA64PFR0_EL1_ELx_64BIT_ONLY); 25 + BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1), 26 + PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) != ID_AA64PFR0_EL1_ELx_64BIT_ONLY); 27 27 28 28 /* 29 29 * Linux guests assume support for floating-point and Advanced SIMD. Do 30 30 * not change the trapping behavior for these from the KVM default. 31 31 */ 32 - BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_FP), 32 + BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_FP), 33 33 PVM_ID_AA64PFR0_ALLOW)); 34 - BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_ASIMD), 34 + BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD), 35 35 PVM_ID_AA64PFR0_ALLOW)); 36 36 37 37 /* Trap RAS unless all current versions are supported */ 38 - if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_RAS), feature_ids) < 39 - ID_AA64PFR0_RAS_V1P1) { 38 + if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_RAS), feature_ids) < 39 + ID_AA64PFR0_EL1_RAS_V1P1) { 40 40 hcr_set |= HCR_TERR | HCR_TEA; 41 41 hcr_clear |= HCR_FIEN; 42 42 } 43 43 44 44 /* Trap AMU */ 45 - if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_AMU), feature_ids)) { 45 + if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU), feature_ids)) { 46 46 hcr_clear |= HCR_AMVOFFEN; 47 47 cptr_set |= CPTR_EL2_TAM; 48 48 } 49 49 50 50 /* Trap SVE */ 51 - if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_SVE), feature_ids)) 51 + if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE), feature_ids)) 52 52 cptr_set |= CPTR_EL2_TZ; 53 53 54 54 vcpu->arch.hcr_el2 |= hcr_set; ··· 66 66 u64 hcr_clear = 0; 67 67 68 68 /* Memory Tagging: Trap and Treat as Untagged if not supported. 
*/ 69 - if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_MTE), feature_ids)) { 69 + if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE), feature_ids)) { 70 70 hcr_set |= HCR_TID5; 71 71 hcr_clear |= HCR_DCT | HCR_ATA; 72 72 } ··· 86 86 u64 cptr_set = 0; 87 87 88 88 /* Trap/constrain PMU */ 89 - if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), feature_ids)) { 89 + if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), feature_ids)) { 90 90 mdcr_set |= MDCR_EL2_TPM | MDCR_EL2_TPMCR; 91 91 mdcr_clear |= MDCR_EL2_HPME | MDCR_EL2_MTPME | 92 92 MDCR_EL2_HPMN_MASK; 93 93 } 94 94 95 95 /* Trap Debug */ 96 - if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER), feature_ids)) 96 + if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), feature_ids)) 97 97 mdcr_set |= MDCR_EL2_TDRA | MDCR_EL2_TDA | MDCR_EL2_TDE; 98 98 99 99 /* Trap OS Double Lock */ 100 - if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_DOUBLELOCK), feature_ids)) 100 + if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DoubleLock), feature_ids)) 101 101 mdcr_set |= MDCR_EL2_TDOSA; 102 102 103 103 /* Trap SPE */ 104 - if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMSVER), feature_ids)) { 104 + if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer), feature_ids)) { 105 105 mdcr_set |= MDCR_EL2_TPMS; 106 106 mdcr_clear |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT; 107 107 } 108 108 109 109 /* Trap Trace Filter */ 110 - if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_TRACE_FILT), feature_ids)) 110 + if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceFilt), feature_ids)) 111 111 mdcr_set |= MDCR_EL2_TTRF; 112 112 113 113 /* Trap Trace */ 114 - if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_TRACEVER), feature_ids)) 114 + if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceVer), feature_ids)) 115 115 cptr_set |= CPTR_EL2_TTA; 116 116 117 117 vcpu->arch.mdcr_el2 |= mdcr_set; ··· 128 128 u64 mdcr_set = 0; 129 129 130 130 /* Trap Debug Communications Channel registers */ 131 - if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_FGT), feature_ids)) 131 + if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_FGT), feature_ids)) 132 132 mdcr_set |= MDCR_EL2_TDCC; 133 133 134 134 vcpu->arch.mdcr_el2 |= mdcr_set; ··· 143 143 u64 hcr_set = 0; 144 144 145 145 /* Trap LOR */ 146 - if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_LOR), feature_ids)) 146 + if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_LO), feature_ids)) 147 147 hcr_set |= HCR_TLOR; 148 148 149 149 vcpu->arch.hcr_el2 |= hcr_set;
+19 -21
arch/arm64/kvm/hyp/nvhe/stacktrace.c
··· 39 39 40 40 DEFINE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace); 41 41 42 - static bool on_overflow_stack(unsigned long sp, unsigned long size, 43 - struct stack_info *info) 42 + static struct stack_info stackinfo_get_overflow(void) 44 43 { 45 44 unsigned long low = (unsigned long)this_cpu_ptr(overflow_stack); 46 45 unsigned long high = low + OVERFLOW_STACK_SIZE; 47 46 48 - return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info); 47 + return (struct stack_info) { 48 + .low = low, 49 + .high = high, 50 + }; 49 51 } 50 52 51 - static bool on_hyp_stack(unsigned long sp, unsigned long size, 52 - struct stack_info *info) 53 + static struct stack_info stackinfo_get_hyp(void) 53 54 { 54 55 struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params); 55 56 unsigned long high = params->stack_hyp_va; 56 57 unsigned long low = high - PAGE_SIZE; 57 58 58 - return on_stack(sp, size, low, high, STACK_TYPE_HYP, info); 59 - } 60 - 61 - static bool on_accessible_stack(const struct task_struct *tsk, 62 - unsigned long sp, unsigned long size, 63 - struct stack_info *info) 64 - { 65 - if (info) 66 - info->type = STACK_TYPE_UNKNOWN; 67 - 68 - return (on_overflow_stack(sp, size, info) || 69 - on_hyp_stack(sp, size, info)); 59 + return (struct stack_info) { 60 + .low = low, 61 + .high = high, 62 + }; 70 63 } 71 64 72 65 static int unwind_next(struct unwind_state *state) 73 66 { 74 - struct stack_info info; 75 - 76 - return unwind_next_common(state, &info, on_accessible_stack, NULL); 67 + return unwind_next_frame_record(state); 77 68 } 78 69 79 70 static void notrace unwind(struct unwind_state *state, ··· 120 129 */ 121 130 static void pkvm_save_backtrace(unsigned long fp, unsigned long pc) 122 131 { 123 - struct unwind_state state; 132 + struct stack_info stacks[] = { 133 + stackinfo_get_overflow(), 134 + stackinfo_get_hyp(), 135 + }; 136 + struct unwind_state state = { 137 + .stacks = stacks, 138 + .nr_stacks = ARRAY_SIZE(stacks), 139 + }; 124 140 int idx = 0; 125 141 126 142 kvm_nvhe_unwind_init(&state, fp, pc);
+5 -5
arch/arm64/kvm/hyp/nvhe/sys_regs.c
··· 92 92 PVM_ID_AA64PFR0_RESTRICT_UNSIGNED); 93 93 94 94 /* Spectre and Meltdown mitigation in KVM */ 95 - set_mask |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV2), 95 + set_mask |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 96 96 (u64)kvm->arch.pfr0_csv2); 97 - set_mask |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV3), 97 + set_mask |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 98 98 (u64)kvm->arch.pfr0_csv3); 99 99 100 100 return (id_aa64pfr0_el1_sys_val & allow_mask) | set_mask; ··· 106 106 u64 allow_mask = PVM_ID_AA64PFR1_ALLOW; 107 107 108 108 if (!kvm_has_mte(kvm)) 109 - allow_mask &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_MTE); 109 + allow_mask &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE); 110 110 111 111 return id_aa64pfr1_el1_sys_val & allow_mask; 112 112 } ··· 281 281 * No support for AArch32 guests, therefore, pKVM has no sanitized copy 282 282 * of AArch32 feature id registers. 283 283 */ 284 - BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1), 285 - PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) > ID_AA64PFR0_ELx_64BIT_ONLY); 284 + BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1), 285 + PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) > ID_AA64PFR0_EL1_ELx_64BIT_ONLY); 286 286 287 287 return pvm_access_raz_wi(vcpu, p, r); 288 288 }
+1 -1
arch/arm64/kvm/hyp/pgtable.c
··· 61 61 62 62 static bool kvm_phys_is_valid(u64 phys) 63 63 { 64 - return phys < BIT(id_aa64mmfr0_parange_to_phys_shift(ID_AA64MMFR0_PARANGE_MAX)); 64 + return phys < BIT(id_aa64mmfr0_parange_to_phys_shift(ID_AA64MMFR0_EL1_PARANGE_MAX)); 65 65 } 66 66 67 67 static bool kvm_block_mapping_supported(u64 addr, u64 end, u64 phys, u32 level)
+8 -8
arch/arm64/kvm/pmu-emul.c
··· 33 33 pmuver = kvm->arch.arm_pmu->pmuver; 34 34 35 35 switch (pmuver) { 36 - case ID_AA64DFR0_PMUVER_8_0: 36 + case ID_AA64DFR0_EL1_PMUVer_IMP: 37 37 return GENMASK(9, 0); 38 - case ID_AA64DFR0_PMUVER_8_1: 39 - case ID_AA64DFR0_PMUVER_8_4: 40 - case ID_AA64DFR0_PMUVER_8_5: 41 - case ID_AA64DFR0_PMUVER_8_7: 38 + case ID_AA64DFR0_EL1_PMUVer_V3P1: 39 + case ID_AA64DFR0_EL1_PMUVer_V3P4: 40 + case ID_AA64DFR0_EL1_PMUVer_V3P5: 41 + case ID_AA64DFR0_EL1_PMUVer_V3P7: 42 42 return GENMASK(15, 0); 43 43 default: /* Shouldn't be here, just for sanity */ 44 44 WARN_ONCE(1, "Unknown PMU version %d\n", pmuver); ··· 774 774 { 775 775 struct arm_pmu_entry *entry; 776 776 777 - if (pmu->pmuver == 0 || pmu->pmuver == ID_AA64DFR0_PMUVER_IMP_DEF) 777 + if (pmu->pmuver == 0 || pmu->pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF) 778 778 return; 779 779 780 780 mutex_lock(&arm_pmus_lock); ··· 828 828 if (event->pmu) { 829 829 pmu = to_arm_pmu(event->pmu); 830 830 if (pmu->pmuver == 0 || 831 - pmu->pmuver == ID_AA64DFR0_PMUVER_IMP_DEF) 831 + pmu->pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF) 832 832 pmu = NULL; 833 833 } 834 834 ··· 856 856 * Don't advertise STALL_SLOT, as PMMIR_EL0 is handled 857 857 * as RAZ 858 858 */ 859 - if (vcpu->kvm->arch.arm_pmu->pmuver >= ID_AA64DFR0_PMUVER_8_4) 859 + if (vcpu->kvm->arch.arm_pmu->pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P4) 860 860 val &= ~BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32); 861 861 base = 32; 862 862 }
+6 -6
arch/arm64/kvm/reset.c
··· 359 359 360 360 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); 361 361 parange = cpuid_feature_extract_unsigned_field(mmfr0, 362 - ID_AA64MMFR0_PARANGE_SHIFT); 362 + ID_AA64MMFR0_EL1_PARANGE_SHIFT); 363 363 /* 364 364 * IPA size beyond 48 bits could not be supported 365 365 * on either 4K or 16K page size. Hence let's cap ··· 367 367 * on the system. 368 368 */ 369 369 if (PAGE_SIZE != SZ_64K) 370 - parange = min(parange, (unsigned int)ID_AA64MMFR0_PARANGE_48); 370 + parange = min(parange, (unsigned int)ID_AA64MMFR0_EL1_PARANGE_48); 371 371 372 372 /* 373 373 * Check with ARMv8.5-GTG that our PAGE_SIZE is supported at 374 374 * Stage-2. If not, things will stop very quickly. 375 375 */ 376 - switch (cpuid_feature_extract_unsigned_field(mmfr0, ID_AA64MMFR0_TGRAN_2_SHIFT)) { 377 - case ID_AA64MMFR0_TGRAN_2_SUPPORTED_NONE: 376 + switch (cpuid_feature_extract_unsigned_field(mmfr0, ID_AA64MMFR0_EL1_TGRAN_2_SHIFT)) { 377 + case ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_NONE: 378 378 kvm_err("PAGE_SIZE not supported at Stage-2, giving up\n"); 379 379 return -EINVAL; 380 - case ID_AA64MMFR0_TGRAN_2_SUPPORTED_DEFAULT: 380 + case ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_DEFAULT: 381 381 kvm_debug("PAGE_SIZE supported at Stage-2 (default)\n"); 382 382 break; 383 - case ID_AA64MMFR0_TGRAN_2_SUPPORTED_MIN ... ID_AA64MMFR0_TGRAN_2_SUPPORTED_MAX: 383 + case ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MIN ... ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MAX: 384 384 kvm_debug("PAGE_SIZE supported at Stage-2 (advertised)\n"); 385 385 break; 386 386 default:
+81 -54
arch/arm64/kvm/stacktrace.c
··· 21 21 22 22 #include <asm/stacktrace/nvhe.h> 23 23 24 + static struct stack_info stackinfo_get_overflow(void) 25 + { 26 + struct kvm_nvhe_stacktrace_info *stacktrace_info 27 + = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info); 28 + unsigned long low = (unsigned long)stacktrace_info->overflow_stack_base; 29 + unsigned long high = low + OVERFLOW_STACK_SIZE; 30 + 31 + return (struct stack_info) { 32 + .low = low, 33 + .high = high, 34 + }; 35 + } 36 + 37 + static struct stack_info stackinfo_get_overflow_kern_va(void) 38 + { 39 + unsigned long low = (unsigned long)this_cpu_ptr_nvhe_sym(overflow_stack); 40 + unsigned long high = low + OVERFLOW_STACK_SIZE; 41 + 42 + return (struct stack_info) { 43 + .low = low, 44 + .high = high, 45 + }; 46 + } 47 + 48 + static struct stack_info stackinfo_get_hyp(void) 49 + { 50 + struct kvm_nvhe_stacktrace_info *stacktrace_info 51 + = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info); 52 + unsigned long low = (unsigned long)stacktrace_info->stack_base; 53 + unsigned long high = low + PAGE_SIZE; 54 + 55 + return (struct stack_info) { 56 + .low = low, 57 + .high = high, 58 + }; 59 + } 60 + 61 + static struct stack_info stackinfo_get_hyp_kern_va(void) 62 + { 63 + unsigned long low = (unsigned long)*this_cpu_ptr(&kvm_arm_hyp_stack_page); 64 + unsigned long high = low + PAGE_SIZE; 65 + 66 + return (struct stack_info) { 67 + .low = low, 68 + .high = high, 69 + }; 70 + } 71 + 24 72 /* 25 73 * kvm_nvhe_stack_kern_va - Convert KVM nVHE HYP stack addresses to a kernel VAs 26 74 * ··· 82 34 * Returns true on success and updates @addr to its corresponding kernel VA; 83 35 * otherwise returns false. 84 36 */ 85 - static bool kvm_nvhe_stack_kern_va(unsigned long *addr, 86 - enum stack_type type) 37 + static bool kvm_nvhe_stack_kern_va(unsigned long *addr, unsigned long size) 87 38 { 88 - struct kvm_nvhe_stacktrace_info *stacktrace_info; 89 - unsigned long hyp_base, kern_base, hyp_offset; 39 + struct stack_info stack_hyp, stack_kern; 90 40 91 - stacktrace_info = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info); 41 + stack_hyp = stackinfo_get_hyp(); 42 + stack_kern = stackinfo_get_hyp_kern_va(); 43 + if (stackinfo_on_stack(&stack_hyp, *addr, size)) 44 + goto found; 92 45 93 - switch (type) { 94 - case STACK_TYPE_HYP: 95 - kern_base = (unsigned long)*this_cpu_ptr(&kvm_arm_hyp_stack_page); 96 - hyp_base = (unsigned long)stacktrace_info->stack_base; 97 - break; 98 - case STACK_TYPE_OVERFLOW: 99 - kern_base = (unsigned long)this_cpu_ptr_nvhe_sym(overflow_stack); 100 - hyp_base = (unsigned long)stacktrace_info->overflow_stack_base; 101 - break; 102 - default: 103 - return false; 104 - } 46 + stack_hyp = stackinfo_get_overflow(); 47 + stack_kern = stackinfo_get_overflow_kern_va(); 48 + if (stackinfo_on_stack(&stack_hyp, *addr, size)) 49 + goto found; 105 50 106 - hyp_offset = *addr - hyp_base; 51 + return false; 107 52 108 - *addr = kern_base + hyp_offset; 109 - 53 + found: 54 + *addr = *addr - stack_hyp.low + stack_kern.low; 110 55 return true; 111 56 } 112 57 113 - static bool on_overflow_stack(unsigned long sp, unsigned long size, 114 - struct stack_info *info) 58 + /* 59 + * Convert a KVN nVHE HYP frame record address to a kernel VA 60 + */ 61 + static bool kvm_nvhe_stack_kern_record_va(unsigned long *addr) 115 62 { 116 - struct kvm_nvhe_stacktrace_info *stacktrace_info 117 - = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info); 118 - unsigned long low = (unsigned long)stacktrace_info->overflow_stack_base; 119 - unsigned long high = low + OVERFLOW_STACK_SIZE; 120 - 121 - return on_stack(sp, size, 
low, high, STACK_TYPE_OVERFLOW, info); 122 - } 123 - 124 - static bool on_hyp_stack(unsigned long sp, unsigned long size, 125 - struct stack_info *info) 126 - { 127 - struct kvm_nvhe_stacktrace_info *stacktrace_info 128 - = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info); 129 - unsigned long low = (unsigned long)stacktrace_info->stack_base; 130 - unsigned long high = low + PAGE_SIZE; 131 - 132 - return on_stack(sp, size, low, high, STACK_TYPE_HYP, info); 133 - } 134 - 135 - static bool on_accessible_stack(const struct task_struct *tsk, 136 - unsigned long sp, unsigned long size, 137 - struct stack_info *info) 138 - { 139 - if (info) 140 - info->type = STACK_TYPE_UNKNOWN; 141 - 142 - return (on_overflow_stack(sp, size, info) || 143 - on_hyp_stack(sp, size, info)); 63 + return kvm_nvhe_stack_kern_va(addr, 16); 144 64 } 145 65 146 66 static int unwind_next(struct unwind_state *state) 147 67 { 148 - struct stack_info info; 68 + /* 69 + * The FP is in the hypervisor VA space. Convert it to the kernel VA 70 + * space so it can be unwound by the regular unwind functions. 71 + */ 72 + if (!kvm_nvhe_stack_kern_record_va(&state->fp)) 73 + return -EINVAL; 149 74 150 - return unwind_next_common(state, &info, on_accessible_stack, 151 - kvm_nvhe_stack_kern_va); 75 + return unwind_next_frame_record(state); 152 76 } 153 77 154 78 static void unwind(struct unwind_state *state, ··· 178 158 static void hyp_dump_backtrace(unsigned long hyp_offset) 179 159 { 180 160 struct kvm_nvhe_stacktrace_info *stacktrace_info; 181 - struct unwind_state state; 161 + struct stack_info stacks[] = { 162 + stackinfo_get_overflow_kern_va(), 163 + stackinfo_get_hyp_kern_va(), 164 + }; 165 + struct unwind_state state = { 166 + .stacks = stacks, 167 + .nr_stacks = ARRAY_SIZE(stacks), 168 + }; 182 169 183 170 stacktrace_info = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info); 184 171
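
The 16-byte size passed to kvm_nvhe_stack_kern_va() above corresponds to one AArch64 frame record, i.e. the pair of saved registers the unwinder reads at each step (a descriptive sketch, not a structure defined by this code):

	struct frame_record {
		u64 fp;	/* saved x29: link to the previous frame record */
		u64 lr;	/* saved x30: return address for this frame */
	};	/* 2 * sizeof(u64) == 16 bytes */
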
+24 -24
arch/arm64/kvm/sys_regs.c
··· 273 273 u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); 274 274 u32 sr = reg_to_encoding(r); 275 275 276 - if (!(val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))) { 276 + if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) { 277 277 kvm_inject_undefined(vcpu); 278 278 return false; 279 279 } ··· 1077 1077 switch (id) { 1078 1078 case SYS_ID_AA64PFR0_EL1: 1079 1079 if (!vcpu_has_sve(vcpu)) 1080 - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_SVE); 1081 - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_AMU); 1082 - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_CSV2); 1083 - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2); 1084 - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_CSV3); 1085 - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3); 1080 + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE); 1081 + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); 1082 + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); 1083 + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2); 1084 + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); 1085 + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3); 1086 1086 if (kvm_vgic_global_state.type == VGIC_V3) { 1087 - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_GIC); 1088 - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_GIC), 1); 1087 + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); 1088 + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); 1089 1089 } 1090 1090 break; 1091 1091 case SYS_ID_AA64PFR1_EL1: 1092 1092 if (!kvm_has_mte(vcpu->kvm)) 1093 - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_MTE); 1093 + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE); 1094 1094 1095 - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_SME); 1095 + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME); 1096 1096 break; 1097 1097 case SYS_ID_AA64ISAR1_EL1: 1098 1098 if (!vcpu_has_ptrauth(vcpu)) ··· 1110 1110 break; 1111 1111 case SYS_ID_AA64DFR0_EL1: 1112 1112 /* Limit debug to ARMv8.0 */ 1113 - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER); 1114 - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER), 6); 1113 + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer); 1114 + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6); 1115 1115 /* Limit guests to PMUv3 for ARMv8.4 */ 1116 1116 val = cpuid_feature_cap_perfmon_field(val, 1117 - ID_AA64DFR0_PMUVER_SHIFT, 1118 - kvm_vcpu_has_pmu(vcpu) ? ID_AA64DFR0_PMUVER_8_4 : 0); 1117 + ID_AA64DFR0_EL1_PMUVer_SHIFT, 1118 + kvm_vcpu_has_pmu(vcpu) ? ID_AA64DFR0_EL1_PMUVer_V3P4 : 0); 1119 1119 /* Hide SPE from guests */ 1120 - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_PMSVER); 1120 + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer); 1121 1121 break; 1122 1122 case SYS_ID_DFR0_EL1: 1123 1123 /* Limit guests to PMUv3 for ARMv8.4 */ ··· 1196 1196 * it doesn't promise more than what is actually provided (the 1197 1197 * guest could otherwise be covered in ectoplasmic residue). 
1198 1198 */ 1199 - csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_CSV2_SHIFT); 1199 + csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV2_SHIFT); 1200 1200 if (csv2 > 1 || 1201 1201 (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED)) 1202 1202 return -EINVAL; 1203 1203 1204 1204 /* Same thing for CSV3 */ 1205 - csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_CSV3_SHIFT); 1205 + csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV3_SHIFT); 1206 1206 if (csv3 > 1 || 1207 1207 (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED)) 1208 1208 return -EINVAL; 1209 1209 1210 1210 /* We can only differ with CSV[23], and anything else is an error */ 1211 1211 val ^= read_id_reg(vcpu, rd, false); 1212 - val &= ~((0xFUL << ID_AA64PFR0_CSV2_SHIFT) | 1213 - (0xFUL << ID_AA64PFR0_CSV3_SHIFT)); 1212 + val &= ~((0xFUL << ID_AA64PFR0_EL1_CSV2_SHIFT) | 1213 + (0xFUL << ID_AA64PFR0_EL1_CSV3_SHIFT)); 1214 1214 if (val) 1215 1215 return -EINVAL; 1216 1216 ··· 1825 1825 } else { 1826 1826 u64 dfr = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1); 1827 1827 u64 pfr = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); 1828 - u32 el3 = !!cpuid_feature_extract_unsigned_field(pfr, ID_AA64PFR0_EL3_SHIFT); 1828 + u32 el3 = !!cpuid_feature_extract_unsigned_field(pfr, ID_AA64PFR0_EL1_EL3_SHIFT); 1829 1829 1830 - p->regval = ((((dfr >> ID_AA64DFR0_WRPS_SHIFT) & 0xf) << 28) | 1831 - (((dfr >> ID_AA64DFR0_BRPS_SHIFT) & 0xf) << 24) | 1832 - (((dfr >> ID_AA64DFR0_CTX_CMPS_SHIFT) & 0xf) << 20) 1830 + p->regval = ((((dfr >> ID_AA64DFR0_EL1_WRPs_SHIFT) & 0xf) << 28) | 1831 + (((dfr >> ID_AA64DFR0_EL1_BRPs_SHIFT) & 0xf) << 24) | 1832 + (((dfr >> ID_AA64DFR0_EL1_CTX_CMPs_SHIFT) & 0xf) << 20) 1833 1833 | (6 << 16) | (1 << 15) | (el3 << 14) | (el3 << 12)); 1834 1834 return true; 1835 1835 }
+3 -3
arch/arm64/mm/context.c
··· 43 43 { 44 44 u32 asid; 45 45 int fld = cpuid_feature_extract_unsigned_field(read_cpuid(ID_AA64MMFR0_EL1), 46 - ID_AA64MMFR0_ASID_SHIFT); 46 + ID_AA64MMFR0_EL1_ASIDBITS_SHIFT); 47 47 48 48 switch (fld) { 49 49 default: 50 50 pr_warn("CPU%d: Unknown ASID size (%d); assuming 8-bit\n", 51 51 smp_processor_id(), fld); 52 52 fallthrough; 53 - case ID_AA64MMFR0_ASID_8: 53 + case ID_AA64MMFR0_EL1_ASIDBITS_8: 54 54 asid = 8; 55 55 break; 56 - case ID_AA64MMFR0_ASID_16: 56 + case ID_AA64MMFR0_EL1_ASIDBITS_16: 57 57 asid = 16; 58 58 } 59 59
+1 -1
arch/arm64/mm/init.c
··· 360 360 extern u16 memstart_offset_seed; 361 361 u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1); 362 362 int parange = cpuid_feature_extract_unsigned_field( 363 - mmfr0, ID_AA64MMFR0_PARANGE_SHIFT); 363 + mmfr0, ID_AA64MMFR0_EL1_PARANGE_SHIFT); 364 364 s64 range = linear_region_size - 365 365 BIT(id_aa64mmfr0_parange_to_phys_shift(parange)); 366 366
+1 -1
arch/arm64/mm/mmu.c
··· 686 686 687 687 pfr1 = __read_sysreg_by_encoding(SYS_ID_AA64PFR1_EL1); 688 688 return cpuid_feature_extract_unsigned_field(pfr1, 689 - ID_AA64PFR1_BT_SHIFT); 689 + ID_AA64PFR1_EL1_BT_SHIFT); 690 690 } 691 691 692 692 /*
+2 -2
arch/arm64/mm/proc.S
··· 434 434 * (ID_AA64PFR1_EL1[11:8] > 1). 435 435 */ 436 436 mrs x10, ID_AA64PFR1_EL1 437 - ubfx x10, x10, #ID_AA64PFR1_MTE_SHIFT, #4 438 - cmp x10, #ID_AA64PFR1_MTE 437 + ubfx x10, x10, #ID_AA64PFR1_EL1_MTE_SHIFT, #4 438 + cmp x10, #ID_AA64PFR1_EL1_MTE_MTE2 439 439 b.lt 1f 440 440 441 441 /* Normal Tagged memory type at the corresponding MAIR index */
+1
arch/arm64/tools/cpucaps
··· 68 68 WORKAROUND_2064142 69 69 WORKAROUND_2077057 70 70 WORKAROUND_2457168 71 + WORKAROUND_2658417 71 72 WORKAROUND_TRBE_OVERWRITE_FILL_MODE 72 73 WORKAROUND_TSB_FLUSH_FAILURE 73 74 WORKAROUND_TRBE_WRITE_OUT_OF_RANGE
+448 -1
arch/arm64/tools/sysreg
··· 46 46 # feature that introduces them (eg, FEAT_LS64_ACCDATA introduces enumeration 47 47 # item ACCDATA) though it may be more taseful to do something else. 48 48 49 + Sysreg ID_AA64PFR0_EL1 3 0 0 4 0 50 + Enum 63:60 CSV3 51 + 0b0000 NI 52 + 0b0001 IMP 53 + EndEnum 54 + Enum 59:56 CSV2 55 + 0b0000 NI 56 + 0b0001 IMP 57 + 0b0010 CSV2_2 58 + 0b0011 CSV2_3 59 + EndEnum 60 + Enum 55:52 RME 61 + 0b0000 NI 62 + 0b0001 IMP 63 + EndEnum 64 + Enum 51:48 DIT 65 + 0b0000 NI 66 + 0b0001 IMP 67 + EndEnum 68 + Enum 47:44 AMU 69 + 0b0000 NI 70 + 0b0001 IMP 71 + 0b0010 V1P1 72 + EndEnum 73 + Enum 43:40 MPAM 74 + 0b0000 0 75 + 0b0001 1 76 + EndEnum 77 + Enum 39:36 SEL2 78 + 0b0000 NI 79 + 0b0001 IMP 80 + EndEnum 81 + Enum 35:32 SVE 82 + 0b0000 NI 83 + 0b0001 IMP 84 + EndEnum 85 + Enum 31:28 RAS 86 + 0b0000 NI 87 + 0b0001 IMP 88 + 0b0010 V1P1 89 + EndEnum 90 + Enum 27:24 GIC 91 + 0b0000 NI 92 + 0b0001 IMP 93 + 0b0010 V4P1 94 + EndEnum 95 + Enum 23:20 AdvSIMD 96 + 0b0000 IMP 97 + 0b0001 FP16 98 + 0b1111 NI 99 + EndEnum 100 + Enum 19:16 FP 101 + 0b0000 IMP 102 + 0b0001 FP16 103 + 0b1111 NI 104 + EndEnum 105 + Enum 15:12 EL3 106 + 0b0000 NI 107 + 0b0001 IMP 108 + 0b0010 AARCH32 109 + EndEnum 110 + Enum 11:8 EL2 111 + 0b0000 NI 112 + 0b0001 IMP 113 + 0b0010 AARCH32 114 + EndEnum 115 + Enum 7:4 EL1 116 + 0b0001 IMP 117 + 0b0010 AARCH32 118 + EndEnum 119 + Enum 3:0 EL0 120 + 0b0001 IMP 121 + 0b0010 AARCH32 122 + EndEnum 123 + EndSysreg 124 + 125 + Sysreg ID_AA64PFR1_EL1 3 0 0 4 1 126 + Res0 63:40 127 + Enum 39:36 NMI 128 + 0b0000 NI 129 + 0b0001 IMP 130 + EndEnum 131 + Enum 35:32 CSV2_frac 132 + 0b0000 NI 133 + 0b0001 CSV2_1p1 134 + 0b0010 CSV2_1p2 135 + EndEnum 136 + Enum 31:28 RNDR_trap 137 + 0b0000 NI 138 + 0b0001 IMP 139 + EndEnum 140 + Enum 27:24 SME 141 + 0b0000 NI 142 + 0b0001 IMP 143 + EndEnum 144 + Res0 23:20 145 + Enum 19:16 MPAM_frac 146 + 0b0000 MINOR_0 147 + 0b0001 MINOR_1 148 + EndEnum 149 + Enum 15:12 RAS_frac 150 + 0b0000 NI 151 + 0b0001 RASv1p1 152 + EndEnum 153 + Enum 11:8 MTE 154 + 0b0000 NI 155 + 0b0001 IMP 156 + 0b0010 MTE2 157 + 0b0011 MTE3 158 + EndEnum 159 + Enum 7:4 SSBS 160 + 0b0000 NI 161 + 0b0001 IMP 162 + 0b0010 SSBS2 163 + EndEnum 164 + Enum 3:0 BT 165 + 0b0000 NI 166 + 0b0001 IMP 167 + EndEnum 168 + EndSysreg 169 + 49 170 Sysreg ID_AA64ZFR0_EL1 3 0 0 4 4 50 171 Res0 63:60 51 172 Enum 59:56 F64MM ··· 219 98 0b1 IMP 220 99 EndEnum 221 100 Res0 62:60 222 - Field 59:56 SMEver 101 + Enum 59:56 SMEver 102 + 0b0000 IMP 103 + EndEnum 223 104 Enum 55:52 I16I64 224 105 0b0000 NI 225 106 0b1111 IMP ··· 250 127 0b1 IMP 251 128 EndEnum 252 129 Res0 31:0 130 + EndSysreg 131 + 132 + Sysreg ID_AA64DFR0_EL1 3 0 0 5 0 133 + Enum 63:60 HPMN0 134 + 0b0000 UNPREDICTABLE 135 + 0b0001 DEF 136 + EndEnum 137 + Res0 59:56 138 + Enum 55:52 BRBE 139 + 0b0000 NI 140 + 0b0001 IMP 141 + 0b0010 BRBE_V1P1 142 + EndEnum 143 + Enum 51:48 MTPMU 144 + 0b0000 NI_IMPDEF 145 + 0b0001 IMP 146 + 0b1111 NI 147 + EndEnum 148 + Enum 47:44 TraceBuffer 149 + 0b0000 NI 150 + 0b0001 IMP 151 + EndEnum 152 + Enum 43:40 TraceFilt 153 + 0b0000 NI 154 + 0b0001 IMP 155 + EndEnum 156 + Enum 39:36 DoubleLock 157 + 0b0000 IMP 158 + 0b1111 NI 159 + EndEnum 160 + Enum 35:32 PMSVer 161 + 0b0000 NI 162 + 0b0001 IMP 163 + 0b0010 V1P1 164 + 0b0011 V1P2 165 + 0b0100 V1P3 166 + EndEnum 167 + Field 31:28 CTX_CMPs 168 + Res0 27:24 169 + Field 23:20 WRPs 170 + Res0 19:16 171 + Field 15:12 BRPs 172 + Enum 11:8 PMUVer 173 + 0b0000 NI 174 + 0b0001 IMP 175 + 0b0100 V3P1 176 + 0b0101 V3P4 177 + 0b0110 V3P5 178 + 0b0111 V3P7 179 + 0b1000 V3P8 180 + 
0b1111 IMP_DEF 181 + EndEnum 182 + Enum 7:4 TraceVer 183 + 0b0000 NI 184 + 0b0001 IMP 185 + EndEnum 186 + Enum 3:0 DebugVer 187 + 0b0110 IMP 188 + 0b0111 VHE 189 + 0b1000 V8P2 190 + 0b1001 V8P4 191 + 0b1010 V8P8 192 + EndEnum 193 + EndSysreg 194 + 195 + Sysreg ID_AA64DFR1_EL1 3 0 0 5 1 196 + Res0 63:0 197 + EndSysreg 198 + 199 + Sysreg ID_AA64AFR0_EL1 3 0 0 5 4 200 + Res0 63:32 201 + Field 31:28 IMPDEF7 202 + Field 27:24 IMPDEF6 203 + Field 23:20 IMPDEF5 204 + Field 19:16 IMPDEF4 205 + Field 15:12 IMPDEF3 206 + Field 11:8 IMPDEF2 207 + Field 7:4 IMPDEF1 208 + Field 3:0 IMPDEF0 209 + EndSysreg 210 + 211 + Sysreg ID_AA64AFR1_EL1 3 0 0 5 5 212 + Res0 63:0 253 213 EndSysreg 254 214 255 215 Sysreg ID_AA64ISAR0_EL1 3 0 0 6 0 ··· 519 313 EndEnum 520 314 EndSysreg 521 315 316 + Sysreg ID_AA64MMFR0_EL1 3 0 0 7 0 317 + Enum 63:60 ECV 318 + 0b0000 NI 319 + 0b0001 IMP 320 + 0b0010 CNTPOFF 321 + EndEnum 322 + Enum 59:56 FGT 323 + 0b0000 NI 324 + 0b0001 IMP 325 + EndEnum 326 + Res0 55:48 327 + Enum 47:44 EXS 328 + 0b0000 NI 329 + 0b0001 IMP 330 + EndEnum 331 + Enum 43:40 TGRAN4_2 332 + 0b0000 TGRAN4 333 + 0b0001 NI 334 + 0b0010 IMP 335 + 0b0011 52_BIT 336 + EndEnum 337 + Enum 39:36 TGRAN64_2 338 + 0b0000 TGRAN64 339 + 0b0001 NI 340 + 0b0010 IMP 341 + EndEnum 342 + Enum 35:32 TGRAN16_2 343 + 0b0000 TGRAN16 344 + 0b0001 NI 345 + 0b0010 IMP 346 + 0b0011 52_BIT 347 + EndEnum 348 + Enum 31:28 TGRAN4 349 + 0b0000 IMP 350 + 0b0001 52_BIT 351 + 0b1111 NI 352 + EndEnum 353 + Enum 27:24 TGRAN64 354 + 0b0000 IMP 355 + 0b1111 NI 356 + EndEnum 357 + Enum 23:20 TGRAN16 358 + 0b0000 NI 359 + 0b0001 IMP 360 + 0b0010 52_BIT 361 + EndEnum 362 + Enum 19:16 BIGENDEL0 363 + 0b0000 NI 364 + 0b0001 IMP 365 + EndEnum 366 + Enum 15:12 SNSMEM 367 + 0b0000 NI 368 + 0b0001 IMP 369 + EndEnum 370 + Enum 11:8 BIGEND 371 + 0b0000 NI 372 + 0b0001 IMP 373 + EndEnum 374 + Enum 7:4 ASIDBITS 375 + 0b0000 8 376 + 0b0010 16 377 + EndEnum 378 + Enum 3:0 PARANGE 379 + 0b0000 32 380 + 0b0001 36 381 + 0b0010 40 382 + 0b0011 42 383 + 0b0100 44 384 + 0b0101 48 385 + 0b0110 52 386 + EndEnum 387 + EndSysreg 388 + 389 + Sysreg ID_AA64MMFR1_EL1 3 0 0 7 1 390 + Enum 63:60 ECBHB 391 + 0b0000 NI 392 + 0b0001 IMP 393 + EndEnum 394 + Enum 59:56 CMOW 395 + 0b0000 NI 396 + 0b0001 IMP 397 + EndEnum 398 + Enum 55:52 TIDCP1 399 + 0b0000 NI 400 + 0b0001 IMP 401 + EndEnum 402 + Enum 51:48 nTLBPA 403 + 0b0000 NI 404 + 0b0001 IMP 405 + EndEnum 406 + Enum 47:44 AFP 407 + 0b0000 NI 408 + 0b0001 IMP 409 + EndEnum 410 + Enum 43:40 HCX 411 + 0b0000 NI 412 + 0b0001 IMP 413 + EndEnum 414 + Enum 39:36 ETS 415 + 0b0000 NI 416 + 0b0001 IMP 417 + EndEnum 418 + Enum 35:32 TWED 419 + 0b0000 NI 420 + 0b0001 IMP 421 + EndEnum 422 + Enum 31:28 XNX 423 + 0b0000 NI 424 + 0b0001 IMP 425 + EndEnum 426 + Enum 27:24 SpecSEI 427 + 0b0000 NI 428 + 0b0001 IMP 429 + EndEnum 430 + Enum 23:20 PAN 431 + 0b0000 NI 432 + 0b0001 IMP 433 + 0b0010 PAN2 434 + 0b0011 PAN3 435 + EndEnum 436 + Enum 19:16 LO 437 + 0b0000 NI 438 + 0b0001 IMP 439 + EndEnum 440 + Enum 15:12 HPDS 441 + 0b0000 NI 442 + 0b0001 IMP 443 + 0b0010 HPDS2 444 + EndEnum 445 + Enum 11:8 VH 446 + 0b0000 NI 447 + 0b0001 IMP 448 + EndEnum 449 + Enum 7:4 VMIDBits 450 + 0b0000 8 451 + 0b0010 16 452 + EndEnum 453 + Enum 3:0 HAFDBS 454 + 0b0000 NI 455 + 0b0001 AF 456 + 0b0010 DBM 457 + EndEnum 458 + EndSysreg 459 + 460 + Sysreg ID_AA64MMFR2_EL1 3 0 0 7 2 461 + Enum 63:60 E0PD 462 + 0b0000 NI 463 + 0b0001 IMP 464 + EndEnum 465 + Enum 59:56 EVT 466 + 0b0000 NI 467 + 0b0001 IMP 468 + 0b0010 TTLBxS 469 + EndEnum 470 + Enum 55:52 BBM 471 + 
0b0000 0 472 + 0b0001 1 473 + 0b0010 2 474 + EndEnum 475 + Enum 51:48 TTL 476 + 0b0000 NI 477 + 0b0001 IMP 478 + EndEnum 479 + Res0 47:44 480 + Enum 43:40 FWB 481 + 0b0000 NI 482 + 0b0001 IMP 483 + EndEnum 484 + Enum 39:36 IDS 485 + 0b0000 0x0 486 + 0b0001 0x18 487 + EndEnum 488 + Enum 35:32 AT 489 + 0b0000 NI 490 + 0b0001 IMP 491 + EndEnum 492 + Enum 31:28 ST 493 + 0b0000 39 494 + 0b0001 48_47 495 + EndEnum 496 + Enum 27:24 NV 497 + 0b0000 NI 498 + 0b0001 IMP 499 + 0b0010 NV2 500 + EndEnum 501 + Enum 23:20 CCIDX 502 + 0b0000 32 503 + 0b0001 64 504 + EndEnum 505 + Enum 19:16 VARange 506 + 0b0000 48 507 + 0b0001 52 508 + EndEnum 509 + Enum 15:12 IESB 510 + 0b0000 NI 511 + 0b0001 IMP 512 + EndEnum 513 + Enum 11:8 LSM 514 + 0b0000 NI 515 + 0b0001 IMP 516 + EndEnum 517 + Enum 7:4 UAO 518 + 0b0000 NI 519 + 0b0001 IMP 520 + EndEnum 521 + Enum 3:0 CnP 522 + 0b0000 NI 523 + 0b0001 IMP 524 + EndEnum 525 + EndSysreg 526 + 522 527 Sysreg SCTLR_EL1 3 0 1 0 0 523 528 Field 63 TIDCP 524 529 Field 62 SPINMASK ··· 844 427 Fields SMCR_ELx 845 428 EndSysreg 846 429 430 + Sysreg ALLINT 3 0 4 3 0 431 + Res0 63:14 432 + Field 13 ALLINT 433 + Res0 12:0 434 + EndSysreg 435 + 847 436 Sysreg FAR_EL1 3 0 6 0 0 848 437 Field 63:0 ADDR 849 438 EndSysreg ··· 861 438 862 439 Sysreg CONTEXTIDR_EL1 3 0 13 0 1 863 440 Fields CONTEXTIDR_ELx 441 + EndSysreg 442 + 443 + Sysreg TPIDR_EL1 3 0 13 0 4 444 + Field 63:0 ThreadID 445 + EndSysreg 446 + 447 + Sysreg SCXTNUM_EL1 3 0 13 0 7 448 + Field 63:0 SoftwareContextNumber 864 449 EndSysreg 865 450 866 451 Sysreg CLIDR_EL1 3 1 0 0 1 ··· 943 512 944 513 Sysreg ZCR_EL2 3 4 1 2 0 945 514 Fields ZCR_ELx 515 + EndSysreg 516 + 517 + Sysreg HCRX_EL2 3 4 1 2 2 518 + Res0 63:12 519 + Field 11 MSCEn 520 + Field 10 MCE2 521 + Field 9 CMOW 522 + Field 8 VFNMI 523 + Field 7 VINMI 524 + Field 6 TALLINT 525 + Field 5 SMPME 526 + Field 4 FGTnXS 527 + Field 3 FnXS 528 + Field 2 EnASR 529 + Field 1 EnALS 530 + Field 0 EnAS0 946 531 EndSysreg 947 532 948 533 Sysreg SMPRIMAP_EL2 3 4 1 2 5
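
Each Sysreg/Enum block above is fed to the sysreg generator, which emits the C definitions used elsewhere in this merge. Roughly, for the SVE field of ID_AA64PFR0_EL1 (a sketch of the expected output; the exact spelling is determined by gen-sysreg.awk):

	#define ID_AA64PFR0_EL1_SVE_SHIFT	32
	#define ID_AA64PFR0_EL1_SVE_MASK	GENMASK(35, 32)
	#define ID_AA64PFR0_EL1_SVE_NI		0x0
	#define ID_AA64PFR0_EL1_SVE_IMP		0x1
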
+2 -2
drivers/firmware/efi/libstub/arm64-stub.c
··· 23 23 if (IS_ENABLED(CONFIG_ARM64_4K_PAGES)) 24 24 return EFI_SUCCESS; 25 25 26 - tg = (read_cpuid(ID_AA64MMFR0_EL1) >> ID_AA64MMFR0_TGRAN_SHIFT) & 0xf; 27 - if (tg < ID_AA64MMFR0_TGRAN_SUPPORTED_MIN || tg > ID_AA64MMFR0_TGRAN_SUPPORTED_MAX) { 26 + tg = (read_cpuid(ID_AA64MMFR0_EL1) >> ID_AA64MMFR0_EL1_TGRAN_SHIFT) & 0xf; 27 + if (tg < ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN || tg > ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX) { 28 28 if (IS_ENABLED(CONFIG_ARM64_64K_PAGES)) 29 29 efi_err("This 64 KB granular kernel is not supported by your CPU\n"); 30 30 else
+2 -2
drivers/hwtracing/coresight/coresight-etm4x-core.c
··· 966 966 { 967 967 u64 dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1); 968 968 969 - return ((dfr0 >> ID_AA64DFR0_TRACEVER_SHIFT) & 0xfUL) > 0; 969 + return ((dfr0 >> ID_AA64DFR0_EL1_TraceVer_SHIFT) & 0xfUL) > 0; 970 970 } 971 971 972 972 static bool etm4_init_sysreg_access(struct etmv4_drvdata *drvdata, ··· 1054 1054 u64 trfcr; 1055 1055 1056 1056 drvdata->trfcr = 0; 1057 - if (!cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_TRACE_FILT_SHIFT)) 1057 + if (!cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_TraceFilt_SHIFT)) 1058 1058 return; 1059 1059 1060 1060 /*
+2 -1
drivers/hwtracing/coresight/coresight-trbe.h
··· 20 20 static inline bool is_trbe_available(void) 21 21 { 22 22 u64 aa64dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1); 23 - unsigned int trbe = cpuid_feature_extract_unsigned_field(aa64dfr0, ID_AA64DFR0_TRBE_SHIFT); 23 + unsigned int trbe = cpuid_feature_extract_unsigned_field(aa64dfr0, 24 + ID_AA64DFR0_EL1_TraceBuffer_SHIFT); 24 25 25 26 return trbe >= 0b0001; 26 27 }
+3 -3
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
··· 150 150 } 151 151 152 152 reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); 153 - par = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_PARANGE_SHIFT); 153 + par = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_EL1_PARANGE_SHIFT); 154 154 tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_IPS, par); 155 155 156 156 cd->ttbr = virt_to_phys(mm->pgd); ··· 425 425 * addresses larger than what we support. 426 426 */ 427 427 reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); 428 - fld = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_PARANGE_SHIFT); 428 + fld = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_EL1_PARANGE_SHIFT); 429 429 oas = id_aa64mmfr0_parange_to_phys_shift(fld); 430 430 if (smmu->oas < oas) 431 431 return false; 432 432 433 433 /* We can support bigger ASIDs than the CPU, but not smaller */ 434 - fld = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_ASID_SHIFT); 434 + fld = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_EL1_ASIDBITS_SHIFT); 435 435 asid_bits = fld ? 16 : 8; 436 436 if (smmu->asid_bits < asid_bits) 437 437 return false;
+1 -1
drivers/irqchip/irq-gic-v4.c
··· 94 94 { 95 95 unsigned long fld, reg = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); 96 96 97 - fld = cpuid_feature_extract_unsigned_field(reg, ID_AA64PFR0_GIC_SHIFT); 97 + fld = cpuid_feature_extract_unsigned_field(reg, ID_AA64PFR0_EL1_GIC_SHIFT); 98 98 99 99 return fld >= 0x3; 100 100 }
+3 -3
drivers/perf/arm_spe_pmu.c
··· 674 674 static u64 arm_spe_pmsevfr_res0(u16 pmsver) 675 675 { 676 676 switch (pmsver) { 677 - case ID_AA64DFR0_PMSVER_8_2: 677 + case ID_AA64DFR0_EL1_PMSVer_IMP: 678 678 return SYS_PMSEVFR_EL1_RES0_8_2; 679 - case ID_AA64DFR0_PMSVER_8_3: 679 + case ID_AA64DFR0_EL1_PMSVer_V1P1: 680 680 /* Return the highest version we support in default */ 681 681 default: 682 682 return SYS_PMSEVFR_EL1_RES0_8_3; ··· 958 958 struct device *dev = &spe_pmu->pdev->dev; 959 959 960 960 fld = cpuid_feature_extract_unsigned_field(read_cpuid(ID_AA64DFR0_EL1), 961 - ID_AA64DFR0_PMSVER_SHIFT); 961 + ID_AA64DFR0_EL1_PMSVer_SHIFT); 962 962 if (!fld) { 963 963 dev_err(dev, 964 964 "unsupported ID_AA64DFR0_EL1.PMSVer [%d] on CPU %d\n",
+1
tools/testing/selftests/arm64/abi/.gitignore
··· 1 + ptrace 1 2 syscall-abi 2 3 tpidr2
+1 -1
tools/testing/selftests/arm64/abi/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 # Copyright (C) 2021 ARM Limited 3 3 4 - TEST_GEN_PROGS := syscall-abi tpidr2 4 + TEST_GEN_PROGS := ptrace syscall-abi tpidr2 5 5 6 6 include ../../lib.mk 7 7
+241
tools/testing/selftests/arm64/abi/ptrace.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (C) 2022 ARM Limited. 4 + */ 5 + #include <errno.h> 6 + #include <stdbool.h> 7 + #include <stddef.h> 8 + #include <stdio.h> 9 + #include <stdlib.h> 10 + #include <string.h> 11 + #include <unistd.h> 12 + #include <sys/auxv.h> 13 + #include <sys/prctl.h> 14 + #include <sys/ptrace.h> 15 + #include <sys/types.h> 16 + #include <sys/uio.h> 17 + #include <sys/wait.h> 18 + #include <asm/sigcontext.h> 19 + #include <asm/ptrace.h> 20 + 21 + #include "../../kselftest.h" 22 + 23 + #define EXPECTED_TESTS 7 24 + 25 + #define MAX_TPIDRS 2 26 + 27 + static bool have_sme(void) 28 + { 29 + return getauxval(AT_HWCAP2) & HWCAP2_SME; 30 + } 31 + 32 + static void test_tpidr(pid_t child) 33 + { 34 + uint64_t read_val[MAX_TPIDRS]; 35 + uint64_t write_val[MAX_TPIDRS]; 36 + struct iovec read_iov, write_iov; 37 + bool test_tpidr2 = false; 38 + int ret, i; 39 + 40 + read_iov.iov_base = read_val; 41 + write_iov.iov_base = write_val; 42 + 43 + /* Should be able to read a single TPIDR... */ 44 + read_iov.iov_len = sizeof(uint64_t); 45 + ret = ptrace(PTRACE_GETREGSET, child, NT_ARM_TLS, &read_iov); 46 + ksft_test_result(ret == 0, "read_tpidr_one\n"); 47 + 48 + /* ...write a new value.. */ 49 + write_iov.iov_len = sizeof(uint64_t); 50 + write_val[0] = read_val[0]++; 51 + ret = ptrace(PTRACE_SETREGSET, child, NT_ARM_TLS, &write_iov); 52 + ksft_test_result(ret == 0, "write_tpidr_one\n"); 53 + 54 + /* ...then read it back */ 55 + ret = ptrace(PTRACE_GETREGSET, child, NT_ARM_TLS, &read_iov); 56 + ksft_test_result(ret == 0 && write_val[0] == read_val[0], 57 + "verify_tpidr_one\n"); 58 + 59 + /* If we have TPIDR2 we should be able to read it */ 60 + read_iov.iov_len = sizeof(read_val); 61 + ret = ptrace(PTRACE_GETREGSET, child, NT_ARM_TLS, &read_iov); 62 + if (ret == 0) { 63 + /* If we have SME there should be two TPIDRs */ 64 + if (read_iov.iov_len >= sizeof(read_val)) 65 + test_tpidr2 = true; 66 + 67 + if (have_sme() && test_tpidr2) { 68 + ksft_test_result(test_tpidr2, "count_tpidrs\n"); 69 + } else { 70 + ksft_test_result(read_iov.iov_len % sizeof(uint64_t) == 0, 71 + "count_tpidrs\n"); 72 + } 73 + } else { 74 + ksft_test_result_fail("count_tpidrs\n"); 75 + } 76 + 77 + if (test_tpidr2) { 78 + /* Try to write new values to all known TPIDRs... */ 79 + write_iov.iov_len = sizeof(write_val); 80 + for (i = 0; i < MAX_TPIDRS; i++) 81 + write_val[i] = read_val[i] + 1; 82 + ret = ptrace(PTRACE_SETREGSET, child, NT_ARM_TLS, &write_iov); 83 + 84 + ksft_test_result(ret == 0 && 85 + write_iov.iov_len == sizeof(write_val), 86 + "tpidr2_write\n"); 87 + 88 + /* ...then read them back */ 89 + read_iov.iov_len = sizeof(read_val); 90 + ret = ptrace(PTRACE_GETREGSET, child, NT_ARM_TLS, &read_iov); 91 + 92 + if (have_sme()) { 93 + /* Should read back the written value */ 94 + ksft_test_result(ret == 0 && 95 + read_iov.iov_len >= sizeof(read_val) && 96 + memcmp(read_val, write_val, 97 + sizeof(read_val)) == 0, 98 + "tpidr2_read\n"); 99 + } else { 100 + /* TPIDR2 should read as zero */ 101 + ksft_test_result(ret == 0 && 102 + read_iov.iov_len >= sizeof(read_val) && 103 + read_val[0] == write_val[0] && 104 + read_val[1] == 0, 105 + "tpidr2_read\n"); 106 + } 107 + 108 + /* Writing only TPIDR... 
*/ 109 + write_iov.iov_len = sizeof(uint64_t); 110 + memcpy(write_val, read_val, sizeof(read_val)); 111 + write_val[0] += 1; 112 + ret = ptrace(PTRACE_SETREGSET, child, NT_ARM_TLS, &write_iov); 113 + 114 + if (ret == 0) { 115 + /* ...should leave TPIDR2 untouched */ 116 + read_iov.iov_len = sizeof(read_val); 117 + ret = ptrace(PTRACE_GETREGSET, child, NT_ARM_TLS, 118 + &read_iov); 119 + 120 + ksft_test_result(ret == 0 && 121 + read_iov.iov_len >= sizeof(read_val) && 122 + memcmp(read_val, write_val, 123 + sizeof(read_val)) == 0, 124 + "write_tpidr_only\n"); 125 + } else { 126 + ksft_test_result_fail("write_tpidr_only\n"); 127 + } 128 + } else { 129 + ksft_test_result_skip("tpidr2_write\n"); 130 + ksft_test_result_skip("tpidr2_read\n"); 131 + ksft_test_result_skip("write_tpidr_only\n"); 132 + } 133 + } 134 + 135 + static int do_child(void) 136 + { 137 + if (ptrace(PTRACE_TRACEME, -1, NULL, NULL)) 138 + ksft_exit_fail_msg("PTRACE_TRACEME", strerror(errno)); 139 + 140 + if (raise(SIGSTOP)) 141 + ksft_exit_fail_msg("raise(SIGSTOP)", strerror(errno)); 142 + 143 + return EXIT_SUCCESS; 144 + } 145 + 146 + static int do_parent(pid_t child) 147 + { 148 + int ret = EXIT_FAILURE; 149 + pid_t pid; 150 + int status; 151 + siginfo_t si; 152 + 153 + /* Attach to the child */ 154 + while (1) { 155 + int sig; 156 + 157 + pid = wait(&status); 158 + if (pid == -1) { 159 + perror("wait"); 160 + goto error; 161 + } 162 + 163 + /* 164 + * This should never happen but it's hard to flag in 165 + * the framework. 166 + */ 167 + if (pid != child) 168 + continue; 169 + 170 + if (WIFEXITED(status) || WIFSIGNALED(status)) 171 + ksft_exit_fail_msg("Child died unexpectedly\n"); 172 + 173 + if (!WIFSTOPPED(status)) 174 + goto error; 175 + 176 + sig = WSTOPSIG(status); 177 + 178 + if (ptrace(PTRACE_GETSIGINFO, pid, NULL, &si)) { 179 + if (errno == ESRCH) 180 + goto disappeared; 181 + 182 + if (errno == EINVAL) { 183 + sig = 0; /* bust group-stop */ 184 + goto cont; 185 + } 186 + 187 + ksft_test_result_fail("PTRACE_GETSIGINFO: %s\n", 188 + strerror(errno)); 189 + goto error; 190 + } 191 + 192 + if (sig == SIGSTOP && si.si_code == SI_TKILL && 193 + si.si_pid == pid) 194 + break; 195 + 196 + cont: 197 + if (ptrace(PTRACE_CONT, pid, NULL, sig)) { 198 + if (errno == ESRCH) 199 + goto disappeared; 200 + 201 + ksft_test_result_fail("PTRACE_CONT: %s\n", 202 + strerror(errno)); 203 + goto error; 204 + } 205 + } 206 + 207 + ksft_print_msg("Parent is %d, child is %d\n", getpid(), child); 208 + 209 + test_tpidr(child); 210 + 211 + ret = EXIT_SUCCESS; 212 + 213 + error: 214 + kill(child, SIGKILL); 215 + 216 + disappeared: 217 + return ret; 218 + } 219 + 220 + int main(void) 221 + { 222 + int ret = EXIT_SUCCESS; 223 + pid_t child; 224 + 225 + srandom(getpid()); 226 + 227 + ksft_print_header(); 228 + 229 + ksft_set_plan(EXPECTED_TESTS); 230 + 231 + child = fork(); 232 + if (!child) 233 + return do_child(); 234 + 235 + if (do_parent(child)) 236 + ret = EXIT_FAILURE; 237 + 238 + ksft_print_cnts(); 239 + 240 + return ret; 241 + }
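
The new ptrace test is wired into the arm64 kselftests by the Makefile change above; with standard kselftest usage it should build via "make -C tools/testing/selftests TARGETS=arm64" and the resulting binary can be run directly from tools/testing/selftests/arm64/abi/.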