Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'kvmarm-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD

KVM/arm64 updates for 6.7

- Generalized infrastructure for 'writable' ID registers, effectively
allowing userspace to opt out of certain vCPU features for its guest

- Optimization for vSGI injection, opportunistically compressing the MPIDR
to vCPU mapping into a table

- Improvements to KVM's PMU emulation, allowing userspace to select
the number of PMCs available to a VM

- Guest support for memory operation instructions (FEAT_MOPS)

- Cleanups to the handling of feature flags in KVM_ARM_VCPU_INIT, squashing
bugs and getting rid of useless code

- Changes to the way the SMCCC filter is constructed, avoiding wasted
memory allocations when not in use

- Load the stage-2 MMU context at vcpu_load() for VHE systems, reducing
the overhead of errata mitigations

- Miscellaneous kernel and selftest fixes

+3024 -1182
+52
Documentation/virt/kvm/api.rst
··· 3422 3422 indicate that the attribute can be read or written in the device's 3423 3423 current state. "addr" is ignored. 3424 3424 3425 + .. _KVM_ARM_VCPU_INIT: 3426 + 3425 3427 4.82 KVM_ARM_VCPU_INIT 3426 3428 ---------------------- 3427 3429 ··· 6141 6139 writes to the CNTVCT_EL0 and CNTPCT_EL0 registers using the SET_ONE_REG 6142 6140 interface. No error will be returned, but the resulting offset will not be 6143 6141 applied. 6142 + 6143 + .. _KVM_ARM_GET_REG_WRITABLE_MASKS: 6144 + 6145 + 4.139 KVM_ARM_GET_REG_WRITABLE_MASKS 6146 + ------------------------------------------- 6147 + 6148 + :Capability: KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES 6149 + :Architectures: arm64 6150 + :Type: vm ioctl 6151 + :Parameters: struct reg_mask_range (in/out) 6152 + :Returns: 0 on success, < 0 on error 6153 + 6154 + 6155 + :: 6156 + 6157 + #define KVM_ARM_FEATURE_ID_RANGE 0 6158 + #define KVM_ARM_FEATURE_ID_RANGE_SIZE (3 * 8 * 8) 6159 + 6160 + struct reg_mask_range { 6161 + __u64 addr; /* Pointer to mask array */ 6162 + __u32 range; /* Requested range */ 6163 + __u32 reserved[13]; 6164 + }; 6165 + 6166 + This ioctl copies the writable masks for a selected range of registers to 6167 + userspace. 6168 + 6169 + The ``addr`` field is a pointer to the destination array where KVM copies 6170 + the writable masks. 6171 + 6172 + The ``range`` field indicates the requested range of registers. 6173 + ``KVM_CHECK_EXTENSION`` for the ``KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES`` 6174 + capability returns the supported ranges, expressed as a set of flags. Each 6175 + flag's bit index represents a possible value for the ``range`` field. 6176 + All other values are reserved for future use and KVM may return an error. 6177 + 6178 + The ``reserved[13]`` array is reserved for future use and should be 0, or 6179 + KVM may return an error. 6180 + 6181 + KVM_ARM_FEATURE_ID_RANGE (0) 6182 + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 6183 + 6184 + The Feature ID range is defined as the AArch64 System register space with 6185 + op0==3, op1=={0, 1, 3}, CRn==0, CRm=={0-7}, op2=={0-7}. 6186 + 6187 + The mask returned array pointed to by ``addr`` is indexed by the macro 6188 + ``ARM64_FEATURE_ID_RANGE_IDX(op0, op1, crn, crm, op2)``, allowing userspace 6189 + to know what fields can be changed for the system register described by 6190 + ``op0, op1, crn, crm, op2``. KVM rejects ID register values that describe a 6191 + superset of the features supported by the system. 6144 6192 6145 6193 5. The kvm_run structure 6146 6194 ========================
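As a rough illustration of the ioctl documented above, the sketch below queries the Feature ID writable masks from userspace. It assumes a VM file descriptor (vm_fd) already obtained via KVM_CREATE_VM and UAPI headers from a kernel containing this series; the helper name is made up for the example and error handling is abbreviated.

    /* Sketch: fetch the writable masks for the Feature ID register range. */
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int get_feature_id_masks(int vm_fd, __u64 masks[KVM_ARM_FEATURE_ID_RANGE_SIZE])
    {
            struct reg_mask_range range;
            int supported;

            /* Bit 0 of the capability value corresponds to range 0. */
            supported = ioctl(vm_fd, KVM_CHECK_EXTENSION,
                              KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES);
            if (supported < 0 || !(supported & (1 << KVM_ARM_FEATURE_ID_RANGE)))
                    return -1;

            memset(&range, 0, sizeof(range));       /* reserved[] must be zero */
            range.range = KVM_ARM_FEATURE_ID_RANGE;
            range.addr = (__u64)(unsigned long)masks;

            return ioctl(vm_fd, KVM_ARM_GET_REG_WRITABLE_MASKS, &range);
    }

Each of the 192 returned entries is a bitmask of the fields userspace may change in the corresponding Feature ID register, indexed with the KVM_ARM_FEATURE_ID_RANGE_IDX() macro from the arm64 UAPI header further down.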
+1
Documentation/virt/kvm/arm/index.rst
··· 11 11 hypercalls 12 12 pvtime 13 13 ptp_kvm 14 + vcpu-features
+48
Documentation/virt/kvm/arm/vcpu-features.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + =============================== 4 + vCPU feature selection on arm64 5 + =============================== 6 + 7 + KVM/arm64 provides two mechanisms that allow userspace to configure 8 + the CPU features presented to the guest. 9 + 10 + KVM_ARM_VCPU_INIT 11 + ================= 12 + 13 + The ``KVM_ARM_VCPU_INIT`` ioctl accepts a bitmap of feature flags 14 + (``struct kvm_vcpu_init::features``). Features enabled by this interface are 15 + *opt-in* and may change/extend UAPI. See :ref:`KVM_ARM_VCPU_INIT` for complete 16 + documentation of the features controlled by the ioctl. 17 + 18 + Otherwise, all CPU features supported by KVM are described by the architected 19 + ID registers. 20 + 21 + The ID Registers 22 + ================ 23 + 24 + The Arm architecture specifies a range of *ID Registers* that describe the set 25 + of architectural features supported by the CPU implementation. KVM initializes 26 + the guest's ID registers to the maximum set of CPU features supported by the 27 + system. The ID register values may be VM-scoped in KVM, meaning that the 28 + values could be shared for all vCPUs in a VM. 29 + 30 + KVM allows userspace to *opt-out* of certain CPU features described by the ID 31 + registers by writing values to them via the ``KVM_SET_ONE_REG`` ioctl. The ID 32 + registers are mutable until the VM has started, i.e. userspace has called 33 + ``KVM_RUN`` on at least one vCPU in the VM. Userspace can discover what fields 34 + are mutable in the ID registers using the ``KVM_ARM_GET_REG_WRITABLE_MASKS``. 35 + See the :ref:`ioctl documentation <KVM_ARM_GET_REG_WRITABLE_MASKS>` for more 36 + details. 37 + 38 + Userspace is allowed to *limit* or *mask* CPU features according to the rules 39 + outlined by the architecture in DDI0487J.a D19.1.3 'Principles of the ID 40 + scheme for fields in ID register'. KVM does not allow ID register values that 41 + exceed the capabilities of the system. 42 + 43 + .. warning:: 44 + It is **strongly recommended** that userspace modify the ID register values 45 + before accessing the rest of the vCPU's CPU register state. KVM may use the 46 + ID register values to control feature emulation. Interleaving ID register 47 + modification with other system register accesses may lead to unpredictable 48 + behavior.
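The opt-out flow described above amounts to a read-modify-write of an ID register through KVM_GET_ONE_REG/KVM_SET_ONE_REG before the first call to KVM_RUN. Below is a hedged sketch that hides FEAT_DotProd from the guest by clearing ID_AA64ISAR0_EL1.DP (bits [47:44]); vcpu_fd and the masks array filled by the previous sketch are assumptions, and the chosen field is only an example of one the mask may report as writable.

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* ID_AA64ISAR0_EL1 is encoded as op0=3, op1=0, CRn=0, CRm=6, op2=0. */
    #define REG_ID_AA64ISAR0_EL1    ARM64_SYS_REG(3, 0, 0, 6, 0)
    #define AA64ISAR0_DP_MASK       (0xfULL << 44)  /* FEAT_DotProd */

    static int hide_dotprod(int vcpu_fd, const __u64 *masks)
    {
            int idx = KVM_ARM_FEATURE_ID_RANGE_IDX(3, 0, 0, 6, 0);
            struct kvm_one_reg reg = { .id = REG_ID_AA64ISAR0_EL1 };
            __u64 val;

            /* Only fields reported as writable can be cleared. */
            if (!(masks[idx] & AA64ISAR0_DP_MASK))
                    return -1;

            reg.addr = (__u64)(unsigned long)&val;
            if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
                    return -1;

            val &= ~AA64ISAR0_DP_MASK;      /* DP == 0b0000: not implemented */
            return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
    }

Per the warning above, this should be done before touching the rest of the vCPU register state, and KVM will reject a value that describes a superset of what the host supports.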
+7
Documentation/virt/kvm/devices/arm-vgic-v3.rst
··· 59 59 It is invalid to mix calls with KVM_VGIC_V3_ADDR_TYPE_REDIST and 60 60 KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION attributes. 61 61 62 + Note that to obtain reproducible results (the same VCPU being associated 63 + with the same redistributor across a save/restore operation), VCPU creation 64 + order, redistributor region creation order as well as the respective 65 + interleaves of VCPU and region creation MUST be preserved. Any change in 66 + either ordering may result in a different vcpu_id/redistributor association, 67 + resulting in a VM that will fail to run at restore time. 68 + 62 69 Errors: 63 70 64 71 ======= =============================================================
+3 -1
arch/arm64/include/asm/kvm_arm.h
··· 102 102 #define HCR_HOST_NVHE_PROTECTED_FLAGS (HCR_HOST_NVHE_FLAGS | HCR_TSC) 103 103 #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H) 104 104 105 - #define HCRX_GUEST_FLAGS (HCRX_EL2_SMPME | HCRX_EL2_TCR2En) 105 + #define HCRX_GUEST_FLAGS \ 106 + (HCRX_EL2_SMPME | HCRX_EL2_TCR2En | \ 107 + (cpus_have_final_cap(ARM64_HAS_MOPS) ? (HCRX_EL2_MSCEn | HCRX_EL2_MCE2) : 0)) 106 108 #define HCRX_HOST_FLAGS (HCRX_EL2_MSCEn | HCRX_EL2_TCR2En) 107 109 108 110 /* TCR_EL2 Registers bits */
+7 -8
arch/arm64/include/asm/kvm_emulate.h
··· 54 54 int kvm_inject_nested_sync(struct kvm_vcpu *vcpu, u64 esr_el2); 55 55 int kvm_inject_nested_irq(struct kvm_vcpu *vcpu); 56 56 57 + static inline bool vcpu_has_feature(const struct kvm_vcpu *vcpu, int feature) 58 + { 59 + return test_bit(feature, vcpu->kvm->arch.vcpu_features); 60 + } 61 + 57 62 #if defined(__KVM_VHE_HYPERVISOR__) || defined(__KVM_NVHE_HYPERVISOR__) 58 63 static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu) 59 64 { ··· 67 62 #else 68 63 static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu) 69 64 { 70 - return test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features); 65 + return vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT); 71 66 } 72 67 #endif 73 68 ··· 470 465 471 466 static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu) 472 467 { 473 - return vcpu_read_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK; 468 + return __vcpu_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK; 474 469 } 475 470 476 471 static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu) ··· 569 564 vcpu_set_flag((v), PENDING_EXCEPTION); \ 570 565 vcpu_set_flag((v), e); \ 571 566 } while (0) 572 - 573 - 574 - static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature) 575 - { 576 - return test_bit(feature, vcpu->arch.features); 577 - } 578 567 579 568 static __always_inline void kvm_write_cptr_el2(u64 val) 580 569 {
+48 -13
arch/arm64/include/asm/kvm_host.h
··· 78 78 int __init kvm_arm_init_sve(void); 79 79 80 80 u32 __attribute_const__ kvm_target_cpu(void); 81 - int kvm_reset_vcpu(struct kvm_vcpu *vcpu); 81 + void kvm_reset_vcpu(struct kvm_vcpu *vcpu); 82 82 void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu); 83 83 84 84 struct kvm_hyp_memcache { ··· 158 158 phys_addr_t pgd_phys; 159 159 struct kvm_pgtable *pgt; 160 160 161 + /* 162 + * VTCR value used on the host. For a non-NV guest (or a NV 163 + * guest that runs in a context where its own S2 doesn't 164 + * apply), its T0SZ value reflects that of the IPA size. 165 + * 166 + * For a shadow S2 MMU, T0SZ reflects the PARange exposed to 167 + * the guest. 168 + */ 169 + u64 vtcr; 170 + 161 171 /* The last vcpu id that ran on each physical CPU */ 162 172 int __percpu *last_vcpu_ran; 163 173 ··· 212 202 struct kvm_hyp_memcache teardown_mc; 213 203 }; 214 204 205 + struct kvm_mpidr_data { 206 + u64 mpidr_mask; 207 + DECLARE_FLEX_ARRAY(u16, cmpidr_to_idx); 208 + }; 209 + 210 + static inline u16 kvm_mpidr_index(struct kvm_mpidr_data *data, u64 mpidr) 211 + { 212 + unsigned long mask = data->mpidr_mask; 213 + u64 aff = mpidr & MPIDR_HWID_BITMASK; 214 + int nbits, bit, bit_idx = 0; 215 + u16 index = 0; 216 + 217 + /* 218 + * If this looks like RISC-V's BEXT or x86's PEXT 219 + * instructions, it isn't by accident. 220 + */ 221 + nbits = fls(mask); 222 + for_each_set_bit(bit, &mask, nbits) { 223 + index |= (aff & BIT(bit)) >> (bit - bit_idx); 224 + bit_idx++; 225 + } 226 + 227 + return index; 228 + } 229 + 215 230 struct kvm_arch { 216 231 struct kvm_s2_mmu mmu; 217 - 218 - /* VTCR_EL2 value for this VM */ 219 - u64 vtcr; 220 232 221 233 /* Interrupt controller */ 222 234 struct vgic_dist vgic; ··· 271 239 #define KVM_ARCH_FLAG_VM_COUNTER_OFFSET 5 272 240 /* Timer PPIs made immutable */ 273 241 #define KVM_ARCH_FLAG_TIMER_PPIS_IMMUTABLE 6 274 - /* SMCCC filter initialized for the VM */ 275 - #define KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED 7 276 242 /* Initial ID reg values loaded */ 277 - #define KVM_ARCH_FLAG_ID_REGS_INITIALIZED 8 243 + #define KVM_ARCH_FLAG_ID_REGS_INITIALIZED 7 278 244 unsigned long flags; 279 245 280 246 /* VM-wide vCPU feature set */ 281 247 DECLARE_BITMAP(vcpu_features, KVM_VCPU_MAX_FEATURES); 248 + 249 + /* MPIDR to vcpu index mapping, optional */ 250 + struct kvm_mpidr_data *mpidr_data; 282 251 283 252 /* 284 253 * VM-wide PMU filter, implemented as a bitmap and big enough for ··· 289 256 struct arm_pmu *arm_pmu; 290 257 291 258 cpumask_var_t supported_cpus; 259 + 260 + /* PMCR_EL0.N value for the guest */ 261 + u8 pmcr_n; 292 262 293 263 /* Hypercall features firmware registers' descriptor */ 294 264 struct kvm_smccc_features smccc_feat; ··· 609 573 610 574 /* Cache some mmu pages needed inside spinlock regions */ 611 575 struct kvm_mmu_memory_cache mmu_page_cache; 612 - 613 - /* feature flags */ 614 - DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES); 615 576 616 577 /* Virtual SError ESR to restore when HCR_EL2.VSE is set */ 617 578 u64 vsesr_el2; ··· 1058 1025 extern unsigned int __ro_after_init kvm_arm_vmid_bits; 1059 1026 int __init kvm_arm_vmid_alloc_init(void); 1060 1027 void __init kvm_arm_vmid_alloc_free(void); 1061 - void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid); 1028 + bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid); 1062 1029 void kvm_arm_vmid_clear_active(void); 1063 1030 1064 1031 static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch) ··· 1111 1078 struct kvm_arm_copy_mte_tags *copy_tags); 1112 1079 int 
kvm_vm_ioctl_set_counter_offset(struct kvm *kvm, 1113 1080 struct kvm_arm_counter_offset *offset); 1081 + int kvm_vm_ioctl_get_reg_writable_masks(struct kvm *kvm, 1082 + struct reg_mask_range *range); 1114 1083 1115 1084 /* Guest/host FPSIMD coordination helpers */ 1116 1085 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu); ··· 1144 1109 } 1145 1110 #endif 1146 1111 1147 - void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu); 1148 - void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu); 1112 + void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu); 1113 + void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu); 1149 1114 1150 1115 int __init kvm_set_ipa_limit(void); 1151 1116
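For intuition about the mpidr_data table introduced in kvm_host.h above, here is a small, self-contained userspace illustration (with hypothetical values) of the same compression that kvm_mpidr_index() performs: only the affinity bits that actually differ across the VM's vCPUs are kept and packed into a dense table index.

    #include <stdio.h>
    #include <stdint.h>

    /* Pack the bits of @aff selected by @mask into a contiguous index. */
    static uint16_t mpidr_index(uint64_t mask, uint64_t aff)
    {
            uint16_t index = 0;
            int bit_idx = 0;

            for (int bit = 0; bit < 64; bit++) {
                    if (!(mask & (1ULL << bit)))
                            continue;
                    index |= ((aff >> bit) & 1) << bit_idx;
                    bit_idx++;
            }
            return index;
    }

    int main(void)
    {
            /*
             * Hypothetical VM: 8 vCPUs with Aff0 in {0..3} and Aff1 in {0, 1}.
             * Only bits 0-1 and bit 8 of the MPIDR vary, so mask = 0x103 and
             * the lookup table needs just 2^3 = 8 entries.
             */
            uint64_t mask = 0x103;

            printf("MPIDR 0x102 -> index %u\n", mpidr_index(mask, 0x102)); /* 6 */
            return 0;
    }

With such a table in place, the MPIDR to vCPU lookup in arm.c (further down) becomes a single table access instead of a linear scan over all vCPUs, which is the vSGI injection optimization mentioned in the merge description.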
+2 -5
arch/arm64/include/asm/kvm_hyp.h
··· 93 93 void __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt); 94 94 void __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt); 95 95 #else 96 + void __vcpu_load_switch_sysregs(struct kvm_vcpu *vcpu); 97 + void __vcpu_put_switch_sysregs(struct kvm_vcpu *vcpu); 96 98 void sysreg_save_host_state_vhe(struct kvm_cpu_context *ctxt); 97 99 void sysreg_restore_host_state_vhe(struct kvm_cpu_context *ctxt); 98 100 void sysreg_save_guest_state_vhe(struct kvm_cpu_context *ctxt); ··· 112 110 void __fpsimd_save_state(struct user_fpsimd_state *fp_regs); 113 111 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs); 114 112 void __sve_restore_state(void *sve_pffr, u32 *fpsr); 115 - 116 - #ifndef __KVM_NVHE_HYPERVISOR__ 117 - void activate_traps_vhe_load(struct kvm_vcpu *vcpu); 118 - void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu); 119 - #endif 120 113 121 114 u64 __guest_enter(struct kvm_vcpu *vcpu); 122 115
+35 -10
arch/arm64/include/asm/kvm_mmu.h
··· 150 150 */ 151 151 #define KVM_PHYS_SHIFT (40) 152 152 153 - #define kvm_phys_shift(kvm) VTCR_EL2_IPA(kvm->arch.vtcr) 154 - #define kvm_phys_size(kvm) (_AC(1, ULL) << kvm_phys_shift(kvm)) 155 - #define kvm_phys_mask(kvm) (kvm_phys_size(kvm) - _AC(1, ULL)) 153 + #define kvm_phys_shift(mmu) VTCR_EL2_IPA((mmu)->vtcr) 154 + #define kvm_phys_size(mmu) (_AC(1, ULL) << kvm_phys_shift(mmu)) 155 + #define kvm_phys_mask(mmu) (kvm_phys_size(mmu) - _AC(1, ULL)) 156 156 157 157 #include <asm/kvm_pgtable.h> 158 158 #include <asm/stage2_pgtable.h> ··· 224 224 kvm_flush_dcache_to_poc(va, size); 225 225 } 226 226 227 + static inline size_t __invalidate_icache_max_range(void) 228 + { 229 + u8 iminline; 230 + u64 ctr; 231 + 232 + asm volatile(ALTERNATIVE_CB("movz %0, #0\n" 233 + "movk %0, #0, lsl #16\n" 234 + "movk %0, #0, lsl #32\n" 235 + "movk %0, #0, lsl #48\n", 236 + ARM64_ALWAYS_SYSTEM, 237 + kvm_compute_final_ctr_el0) 238 + : "=r" (ctr)); 239 + 240 + iminline = SYS_FIELD_GET(CTR_EL0, IminLine, ctr) + 2; 241 + return MAX_DVM_OPS << iminline; 242 + } 243 + 227 244 static inline void __invalidate_icache_guest_page(void *va, size_t size) 228 245 { 229 - if (icache_is_aliasing()) { 230 - /* any kind of VIPT cache */ 246 + /* 247 + * VPIPT I-cache maintenance must be done from EL2. See comment in the 248 + * nVHE flavor of __kvm_tlb_flush_vmid_ipa(). 249 + */ 250 + if (icache_is_vpipt() && read_sysreg(CurrentEL) != CurrentEL_EL2) 251 + return; 252 + 253 + /* 254 + * Blow the whole I-cache if it is aliasing (i.e. VIPT) or the 255 + * invalidation range exceeds our arbitrary limit on invadations by 256 + * cache line. 257 + */ 258 + if (icache_is_aliasing() || size > __invalidate_icache_max_range()) 231 259 icache_inval_all_pou(); 232 - } else if (read_sysreg(CurrentEL) != CurrentEL_EL1 || 233 - !icache_is_vpipt()) { 234 - /* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */ 260 + else 235 261 icache_inval_pou((unsigned long)va, (unsigned long)va + size); 236 - } 237 262 } 238 263 239 264 void kvm_set_way_flush(struct kvm_vcpu *vcpu); ··· 324 299 static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, 325 300 struct kvm_arch *arch) 326 301 { 327 - write_sysreg(arch->vtcr, vtcr_el2); 302 + write_sysreg(mmu->vtcr, vtcr_el2); 328 303 write_sysreg(kvm_get_vttbr(mmu), vttbr_el2); 329 304 330 305 /*
+2 -1
arch/arm64/include/asm/kvm_nested.h
··· 2 2 #ifndef __ARM64_KVM_NESTED_H 3 3 #define __ARM64_KVM_NESTED_H 4 4 5 + #include <asm/kvm_emulate.h> 5 6 #include <linux/kvm_host.h> 6 7 7 8 static inline bool vcpu_has_nv(const struct kvm_vcpu *vcpu) 8 9 { 9 10 return (!__is_defined(__KVM_NVHE_HYPERVISOR__) && 10 11 cpus_have_final_cap(ARM64_HAS_NESTED_VIRT) && 11 - test_bit(KVM_ARM_VCPU_HAS_EL2, vcpu->arch.features)); 12 + vcpu_has_feature(vcpu, KVM_ARM_VCPU_HAS_EL2)); 12 13 } 13 14 14 15 extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);
+2 -2
arch/arm64/include/asm/stage2_pgtable.h
··· 21 21 * (IPA_SHIFT - 4). 22 22 */ 23 23 #define stage2_pgtable_levels(ipa) ARM64_HW_PGTABLE_LEVELS((ipa) - 4) 24 - #define kvm_stage2_levels(kvm) VTCR_EL2_LVLS(kvm->arch.vtcr) 24 + #define kvm_stage2_levels(mmu) VTCR_EL2_LVLS((mmu)->vtcr) 25 25 26 26 /* 27 27 * kvm_mmmu_cache_min_pages() is the number of pages required to install 28 28 * a stage-2 translation. We pre-allocate the entry level page table at 29 29 * the VM creation. 30 30 */ 31 - #define kvm_mmu_cache_min_pages(kvm) (kvm_stage2_levels(kvm) - 1) 31 + #define kvm_mmu_cache_min_pages(mmu) (kvm_stage2_levels(mmu) - 1) 32 32 33 33 #endif /* __ARM64_S2_PGTABLE_H_ */
+45
arch/arm64/include/asm/sysreg.h
··· 270 270 /* ETM */ 271 271 #define SYS_TRCOSLAR sys_reg(2, 1, 1, 0, 4) 272 272 273 + #define SYS_BRBCR_EL2 sys_reg(2, 4, 9, 0, 0) 274 + 273 275 #define SYS_MIDR_EL1 sys_reg(3, 0, 0, 0, 0) 274 276 #define SYS_MPIDR_EL1 sys_reg(3, 0, 0, 0, 5) 275 277 #define SYS_REVIDR_EL1 sys_reg(3, 0, 0, 0, 6) ··· 486 484 487 485 #define SYS_SCTLR_EL2 sys_reg(3, 4, 1, 0, 0) 488 486 #define SYS_ACTLR_EL2 sys_reg(3, 4, 1, 0, 1) 487 + #define SYS_SCTLR2_EL2 sys_reg(3, 4, 1, 0, 3) 489 488 #define SYS_HCR_EL2 sys_reg(3, 4, 1, 1, 0) 490 489 #define SYS_MDCR_EL2 sys_reg(3, 4, 1, 1, 1) 491 490 #define SYS_CPTR_EL2 sys_reg(3, 4, 1, 1, 2) ··· 500 497 #define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2) 501 498 502 499 #define SYS_TRFCR_EL2 sys_reg(3, 4, 1, 2, 1) 500 + #define SYS_VNCR_EL2 sys_reg(3, 4, 2, 2, 0) 503 501 #define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6) 504 502 #define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0) 505 503 #define SYS_ELR_EL2 sys_reg(3, 4, 4, 0, 1) 506 504 #define SYS_SP_EL1 sys_reg(3, 4, 4, 1, 0) 505 + #define SYS_SPSR_irq sys_reg(3, 4, 4, 3, 0) 506 + #define SYS_SPSR_abt sys_reg(3, 4, 4, 3, 1) 507 + #define SYS_SPSR_und sys_reg(3, 4, 4, 3, 2) 508 + #define SYS_SPSR_fiq sys_reg(3, 4, 4, 3, 3) 507 509 #define SYS_IFSR32_EL2 sys_reg(3, 4, 5, 0, 1) 508 510 #define SYS_AFSR0_EL2 sys_reg(3, 4, 5, 1, 0) 509 511 #define SYS_AFSR1_EL2 sys_reg(3, 4, 5, 1, 1) ··· 522 514 523 515 #define SYS_MAIR_EL2 sys_reg(3, 4, 10, 2, 0) 524 516 #define SYS_AMAIR_EL2 sys_reg(3, 4, 10, 3, 0) 517 + #define SYS_MPAMHCR_EL2 sys_reg(3, 4, 10, 4, 0) 518 + #define SYS_MPAMVPMV_EL2 sys_reg(3, 4, 10, 4, 1) 519 + #define SYS_MPAM2_EL2 sys_reg(3, 4, 10, 5, 0) 520 + #define __SYS__MPAMVPMx_EL2(x) sys_reg(3, 4, 10, 6, x) 521 + #define SYS_MPAMVPM0_EL2 __SYS__MPAMVPMx_EL2(0) 522 + #define SYS_MPAMVPM1_EL2 __SYS__MPAMVPMx_EL2(1) 523 + #define SYS_MPAMVPM2_EL2 __SYS__MPAMVPMx_EL2(2) 524 + #define SYS_MPAMVPM3_EL2 __SYS__MPAMVPMx_EL2(3) 525 + #define SYS_MPAMVPM4_EL2 __SYS__MPAMVPMx_EL2(4) 526 + #define SYS_MPAMVPM5_EL2 __SYS__MPAMVPMx_EL2(5) 527 + #define SYS_MPAMVPM6_EL2 __SYS__MPAMVPMx_EL2(6) 528 + #define SYS_MPAMVPM7_EL2 __SYS__MPAMVPMx_EL2(7) 525 529 526 530 #define SYS_VBAR_EL2 sys_reg(3, 4, 12, 0, 0) 527 531 #define SYS_RVBAR_EL2 sys_reg(3, 4, 12, 0, 1) ··· 582 562 583 563 #define SYS_CONTEXTIDR_EL2 sys_reg(3, 4, 13, 0, 1) 584 564 #define SYS_TPIDR_EL2 sys_reg(3, 4, 13, 0, 2) 565 + #define SYS_SCXTNUM_EL2 sys_reg(3, 4, 13, 0, 7) 566 + 567 + #define __AMEV_op2(m) (m & 0x7) 568 + #define __AMEV_CRm(n, m) (n | ((m & 0x8) >> 3)) 569 + #define __SYS__AMEVCNTVOFF0n_EL2(m) sys_reg(3, 4, 13, __AMEV_CRm(0x8, m), __AMEV_op2(m)) 570 + #define SYS_AMEVCNTVOFF0n_EL2(m) __SYS__AMEVCNTVOFF0n_EL2(m) 571 + #define __SYS__AMEVCNTVOFF1n_EL2(m) sys_reg(3, 4, 13, __AMEV_CRm(0xA, m), __AMEV_op2(m)) 572 + #define SYS_AMEVCNTVOFF1n_EL2(m) __SYS__AMEVCNTVOFF1n_EL2(m) 585 573 586 574 #define SYS_CNTVOFF_EL2 sys_reg(3, 4, 14, 0, 3) 587 575 #define SYS_CNTHCTL_EL2 sys_reg(3, 4, 14, 1, 0) 576 + #define SYS_CNTHP_TVAL_EL2 sys_reg(3, 4, 14, 2, 0) 577 + #define SYS_CNTHP_CTL_EL2 sys_reg(3, 4, 14, 2, 1) 578 + #define SYS_CNTHP_CVAL_EL2 sys_reg(3, 4, 14, 2, 2) 579 + #define SYS_CNTHV_TVAL_EL2 sys_reg(3, 4, 14, 3, 0) 580 + #define SYS_CNTHV_CTL_EL2 sys_reg(3, 4, 14, 3, 1) 581 + #define SYS_CNTHV_CVAL_EL2 sys_reg(3, 4, 14, 3, 2) 588 582 589 583 /* VHE encodings for architectural EL0/1 system registers */ 584 + #define SYS_BRBCR_EL12 sys_reg(2, 5, 9, 0, 0) 590 585 #define SYS_SCTLR_EL12 sys_reg(3, 5, 1, 0, 0) 586 + #define SYS_CPACR_EL12 sys_reg(3, 5, 1, 0, 2) 587 + 
#define SYS_SCTLR2_EL12 sys_reg(3, 5, 1, 0, 3) 588 + #define SYS_ZCR_EL12 sys_reg(3, 5, 1, 2, 0) 589 + #define SYS_TRFCR_EL12 sys_reg(3, 5, 1, 2, 1) 590 + #define SYS_SMCR_EL12 sys_reg(3, 5, 1, 2, 6) 591 591 #define SYS_TTBR0_EL12 sys_reg(3, 5, 2, 0, 0) 592 592 #define SYS_TTBR1_EL12 sys_reg(3, 5, 2, 0, 1) 593 593 #define SYS_TCR_EL12 sys_reg(3, 5, 2, 0, 2) 594 + #define SYS_TCR2_EL12 sys_reg(3, 5, 2, 0, 3) 594 595 #define SYS_SPSR_EL12 sys_reg(3, 5, 4, 0, 0) 595 596 #define SYS_ELR_EL12 sys_reg(3, 5, 4, 0, 1) 596 597 #define SYS_AFSR0_EL12 sys_reg(3, 5, 5, 1, 0) 597 598 #define SYS_AFSR1_EL12 sys_reg(3, 5, 5, 1, 1) 598 599 #define SYS_ESR_EL12 sys_reg(3, 5, 5, 2, 0) 599 600 #define SYS_TFSR_EL12 sys_reg(3, 5, 5, 6, 0) 601 + #define SYS_FAR_EL12 sys_reg(3, 5, 6, 0, 0) 602 + #define SYS_PMSCR_EL12 sys_reg(3, 5, 9, 9, 0) 600 603 #define SYS_MAIR_EL12 sys_reg(3, 5, 10, 2, 0) 601 604 #define SYS_AMAIR_EL12 sys_reg(3, 5, 10, 3, 0) 602 605 #define SYS_VBAR_EL12 sys_reg(3, 5, 12, 0, 0) 606 + #define SYS_CONTEXTIDR_EL12 sys_reg(3, 5, 13, 0, 1) 607 + #define SYS_SCXTNUM_EL12 sys_reg(3, 5, 13, 0, 7) 603 608 #define SYS_CNTKCTL_EL12 sys_reg(3, 5, 14, 1, 0) 604 609 #define SYS_CNTP_TVAL_EL02 sys_reg(3, 5, 14, 2, 0) 605 610 #define SYS_CNTP_CTL_EL02 sys_reg(3, 5, 14, 2, 1)
+4 -4
arch/arm64/include/asm/tlbflush.h
··· 333 333 * This is meant to avoid soft lock-ups on large TLB flushing ranges and not 334 334 * necessarily a performance improvement. 335 335 */ 336 - #define MAX_TLBI_OPS PTRS_PER_PTE 336 + #define MAX_DVM_OPS PTRS_PER_PTE 337 337 338 338 /* 339 339 * __flush_tlb_range_op - Perform TLBI operation upon a range ··· 413 413 414 414 /* 415 415 * When not uses TLB range ops, we can handle up to 416 - * (MAX_TLBI_OPS - 1) pages; 416 + * (MAX_DVM_OPS - 1) pages; 417 417 * When uses TLB range ops, we can handle up to 418 418 * (MAX_TLBI_RANGE_PAGES - 1) pages. 419 419 */ 420 420 if ((!system_supports_tlb_range() && 421 - (end - start) >= (MAX_TLBI_OPS * stride)) || 421 + (end - start) >= (MAX_DVM_OPS * stride)) || 422 422 pages >= MAX_TLBI_RANGE_PAGES) { 423 423 flush_tlb_mm(vma->vm_mm); 424 424 return; ··· 451 451 { 452 452 unsigned long addr; 453 453 454 - if ((end - start) > (MAX_TLBI_OPS * PAGE_SIZE)) { 454 + if ((end - start) > (MAX_DVM_OPS * PAGE_SIZE)) { 455 455 flush_tlb_all(); 456 456 return; 457 457 }
+52 -2
arch/arm64/include/asm/traps.h
··· 9 9 10 10 #include <linux/list.h> 11 11 #include <asm/esr.h> 12 + #include <asm/ptrace.h> 12 13 #include <asm/sections.h> 13 - 14 - struct pt_regs; 15 14 16 15 #ifdef CONFIG_ARMV8_DEPRECATED 17 16 bool try_emulate_armv8_deprecated(struct pt_regs *regs, u32 insn); ··· 100 101 101 102 bool arm64_is_fatal_ras_serror(struct pt_regs *regs, unsigned long esr); 102 103 void __noreturn arm64_serror_panic(struct pt_regs *regs, unsigned long esr); 104 + 105 + static inline void arm64_mops_reset_regs(struct user_pt_regs *regs, unsigned long esr) 106 + { 107 + bool wrong_option = esr & ESR_ELx_MOPS_ISS_WRONG_OPTION; 108 + bool option_a = esr & ESR_ELx_MOPS_ISS_OPTION_A; 109 + int dstreg = ESR_ELx_MOPS_ISS_DESTREG(esr); 110 + int srcreg = ESR_ELx_MOPS_ISS_SRCREG(esr); 111 + int sizereg = ESR_ELx_MOPS_ISS_SIZEREG(esr); 112 + unsigned long dst, src, size; 113 + 114 + dst = regs->regs[dstreg]; 115 + src = regs->regs[srcreg]; 116 + size = regs->regs[sizereg]; 117 + 118 + /* 119 + * Put the registers back in the original format suitable for a 120 + * prologue instruction, using the generic return routine from the 121 + * Arm ARM (DDI 0487I.a) rules CNTMJ and MWFQH. 122 + */ 123 + if (esr & ESR_ELx_MOPS_ISS_MEM_INST) { 124 + /* SET* instruction */ 125 + if (option_a ^ wrong_option) { 126 + /* Format is from Option A; forward set */ 127 + regs->regs[dstreg] = dst + size; 128 + regs->regs[sizereg] = -size; 129 + } 130 + } else { 131 + /* CPY* instruction */ 132 + if (!(option_a ^ wrong_option)) { 133 + /* Format is from Option B */ 134 + if (regs->pstate & PSR_N_BIT) { 135 + /* Backward copy */ 136 + regs->regs[dstreg] = dst - size; 137 + regs->regs[srcreg] = src - size; 138 + } 139 + } else { 140 + /* Format is from Option A */ 141 + if (size & BIT(63)) { 142 + /* Forward copy */ 143 + regs->regs[dstreg] = dst + size; 144 + regs->regs[srcreg] = src + size; 145 + regs->regs[sizereg] = -size; 146 + } 147 + } 148 + } 149 + 150 + if (esr & ESR_ELx_MOPS_ISS_FROM_EPILOGUE) 151 + regs->pc -= 8; 152 + else 153 + regs->pc -= 4; 154 + } 103 155 #endif
+32
arch/arm64/include/uapi/asm/kvm.h
··· 505 505 #define KVM_HYPERCALL_EXIT_SMC (1U << 0) 506 506 #define KVM_HYPERCALL_EXIT_16BIT (1U << 1) 507 507 508 + /* 509 + * Get feature ID registers userspace writable mask. 510 + * 511 + * From DDI0487J.a, D19.2.66 ("ID_AA64MMFR2_EL1, AArch64 Memory Model 512 + * Feature Register 2"): 513 + * 514 + * "The Feature ID space is defined as the System register space in 515 + * AArch64 with op0==3, op1=={0, 1, 3}, CRn==0, CRm=={0-7}, 516 + * op2=={0-7}." 517 + * 518 + * This covers all currently known R/O registers that indicate 519 + * anything useful feature wise, including the ID registers. 520 + * 521 + * If we ever need to introduce a new range, it will be described as 522 + * such in the range field. 523 + */ 524 + #define KVM_ARM_FEATURE_ID_RANGE_IDX(op0, op1, crn, crm, op2) \ 525 + ({ \ 526 + __u64 __op1 = (op1) & 3; \ 527 + __op1 -= (__op1 == 3); \ 528 + (__op1 << 6 | ((crm) & 7) << 3 | (op2)); \ 529 + }) 530 + 531 + #define KVM_ARM_FEATURE_ID_RANGE 0 532 + #define KVM_ARM_FEATURE_ID_RANGE_SIZE (3 * 8 * 8) 533 + 534 + struct reg_mask_range { 535 + __u64 addr; /* Pointer to mask array */ 536 + __u32 range; /* Requested range */ 537 + __u32 reserved[13]; 538 + }; 539 + 508 540 #endif 509 541 510 542 #endif /* __ARM_KVM_H__ */
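Two hand-computed examples of the index macro above, written as a tiny standalone check (assuming UAPI headers that include this series); the register encodings are architectural and the expected values follow directly from the macro definition:

    #include <assert.h>
    #include <linux/kvm.h>

    int main(void)
    {
            /* ID_AA64PFR0_EL1 (op0=3, op1=0, CRn=0, CRm=4, op2=0) -> (0 << 6) | (4 << 3) | 0 */
            assert(KVM_ARM_FEATURE_ID_RANGE_IDX(3, 0, 0, 4, 0) == 32);

            /* CTR_EL0 (op0=3, op1=3, CRn=0, CRm=0, op2=1): op1 == 3 is folded onto slot 2 */
            assert(KVM_ARM_FEATURE_ID_RANGE_IDX(3, 3, 0, 0, 1) == ((2 << 6) | 1));

            return 0;
    }

The folding of op1 == 3 onto slot 2 is what lets op1 in {0, 1, 3} map to three consecutive slots, so the whole Feature ID space fits in KVM_ARM_FEATURE_ID_RANGE_SIZE (3 * 8 * 8 = 192) mask entries.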
+1 -47
arch/arm64/kernel/traps.c
··· 516 516 517 517 void do_el0_mops(struct pt_regs *regs, unsigned long esr) 518 518 { 519 - bool wrong_option = esr & ESR_ELx_MOPS_ISS_WRONG_OPTION; 520 - bool option_a = esr & ESR_ELx_MOPS_ISS_OPTION_A; 521 - int dstreg = ESR_ELx_MOPS_ISS_DESTREG(esr); 522 - int srcreg = ESR_ELx_MOPS_ISS_SRCREG(esr); 523 - int sizereg = ESR_ELx_MOPS_ISS_SIZEREG(esr); 524 - unsigned long dst, src, size; 525 - 526 - dst = pt_regs_read_reg(regs, dstreg); 527 - src = pt_regs_read_reg(regs, srcreg); 528 - size = pt_regs_read_reg(regs, sizereg); 529 - 530 - /* 531 - * Put the registers back in the original format suitable for a 532 - * prologue instruction, using the generic return routine from the 533 - * Arm ARM (DDI 0487I.a) rules CNTMJ and MWFQH. 534 - */ 535 - if (esr & ESR_ELx_MOPS_ISS_MEM_INST) { 536 - /* SET* instruction */ 537 - if (option_a ^ wrong_option) { 538 - /* Format is from Option A; forward set */ 539 - pt_regs_write_reg(regs, dstreg, dst + size); 540 - pt_regs_write_reg(regs, sizereg, -size); 541 - } 542 - } else { 543 - /* CPY* instruction */ 544 - if (!(option_a ^ wrong_option)) { 545 - /* Format is from Option B */ 546 - if (regs->pstate & PSR_N_BIT) { 547 - /* Backward copy */ 548 - pt_regs_write_reg(regs, dstreg, dst - size); 549 - pt_regs_write_reg(regs, srcreg, src - size); 550 - } 551 - } else { 552 - /* Format is from Option A */ 553 - if (size & BIT(63)) { 554 - /* Forward copy */ 555 - pt_regs_write_reg(regs, dstreg, dst + size); 556 - pt_regs_write_reg(regs, srcreg, src + size); 557 - pt_regs_write_reg(regs, sizereg, -size); 558 - } 559 - } 560 - } 561 - 562 - if (esr & ESR_ELx_MOPS_ISS_FROM_EPILOGUE) 563 - regs->pc -= 8; 564 - else 565 - regs->pc -= 4; 519 + arm64_mops_reset_regs(&regs->user_regs, esr); 566 520 567 521 /* 568 522 * If single stepping then finish the step before executing the
+2 -4
arch/arm64/kvm/arch_timer.c
··· 453 453 timer_ctx->irq.level); 454 454 455 455 if (!userspace_irqchip(vcpu->kvm)) { 456 - ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, 456 + ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu, 457 457 timer_irq(timer_ctx), 458 458 timer_ctx->irq.level, 459 459 timer_ctx); ··· 936 936 unmask_vtimer_irq_user(vcpu); 937 937 } 938 938 939 - int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu) 939 + void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu) 940 940 { 941 941 struct arch_timer_cpu *timer = vcpu_timer(vcpu); 942 942 struct timer_map map; ··· 980 980 soft_timer_cancel(&map.emul_vtimer->hrtimer); 981 981 if (map.emul_ptimer) 982 982 soft_timer_cancel(&map.emul_ptimer->hrtimer); 983 - 984 - return 0; 985 983 } 986 984 987 985 static void timer_context_init(struct kvm_vcpu *vcpu, int timerid)
+162 -38
arch/arm64/kvm/arm.c
··· 205 205 if (is_protected_kvm_enabled()) 206 206 pkvm_destroy_hyp_vm(kvm); 207 207 208 + kfree(kvm->arch.mpidr_data); 208 209 kvm_destroy_vcpus(kvm); 209 210 210 211 kvm_unshare_hyp(kvm, kvm + 1); ··· 318 317 case KVM_CAP_ARM_SUPPORTED_BLOCK_SIZES: 319 318 r = kvm_supported_block_sizes(); 320 319 break; 320 + case KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES: 321 + r = BIT(0); 322 + break; 321 323 default: 322 324 r = 0; 323 325 } ··· 371 367 372 368 /* Force users to call KVM_ARM_VCPU_INIT */ 373 369 vcpu_clear_flag(vcpu, VCPU_INITIALIZED); 374 - bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES); 375 370 376 371 vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO; 377 372 ··· 441 438 * We might get preempted before the vCPU actually runs, but 442 439 * over-invalidation doesn't affect correctness. 443 440 */ 444 - if (*last_ran != vcpu->vcpu_id) { 441 + if (*last_ran != vcpu->vcpu_idx) { 445 442 kvm_call_hyp(__kvm_flush_cpu_context, mmu); 446 - *last_ran = vcpu->vcpu_id; 443 + *last_ran = vcpu->vcpu_idx; 447 444 } 448 445 449 446 vcpu->cpu = cpu; ··· 451 448 kvm_vgic_load(vcpu); 452 449 kvm_timer_vcpu_load(vcpu); 453 450 if (has_vhe()) 454 - kvm_vcpu_load_sysregs_vhe(vcpu); 451 + kvm_vcpu_load_vhe(vcpu); 455 452 kvm_arch_vcpu_load_fp(vcpu); 456 453 kvm_vcpu_pmu_restore_guest(vcpu); 457 454 if (kvm_arm_is_pvtime_enabled(&vcpu->arch)) ··· 475 472 kvm_arch_vcpu_put_debug_state_flags(vcpu); 476 473 kvm_arch_vcpu_put_fp(vcpu); 477 474 if (has_vhe()) 478 - kvm_vcpu_put_sysregs_vhe(vcpu); 475 + kvm_vcpu_put_vhe(vcpu); 479 476 kvm_timer_vcpu_put(vcpu); 480 477 kvm_vgic_put(vcpu); 481 478 kvm_vcpu_pmu_restore_host(vcpu); ··· 581 578 return vcpu_get_flag(vcpu, VCPU_INITIALIZED); 582 579 } 583 580 581 + static void kvm_init_mpidr_data(struct kvm *kvm) 582 + { 583 + struct kvm_mpidr_data *data = NULL; 584 + unsigned long c, mask, nr_entries; 585 + u64 aff_set = 0, aff_clr = ~0UL; 586 + struct kvm_vcpu *vcpu; 587 + 588 + mutex_lock(&kvm->arch.config_lock); 589 + 590 + if (kvm->arch.mpidr_data || atomic_read(&kvm->online_vcpus) == 1) 591 + goto out; 592 + 593 + kvm_for_each_vcpu(c, vcpu, kvm) { 594 + u64 aff = kvm_vcpu_get_mpidr_aff(vcpu); 595 + aff_set |= aff; 596 + aff_clr &= aff; 597 + } 598 + 599 + /* 600 + * A significant bit can be either 0 or 1, and will only appear in 601 + * aff_set. Use aff_clr to weed out the useless stuff. 602 + */ 603 + mask = aff_set ^ aff_clr; 604 + nr_entries = BIT_ULL(hweight_long(mask)); 605 + 606 + /* 607 + * Don't let userspace fool us. If we need more than a single page 608 + * to describe the compressed MPIDR array, just fall back to the 609 + * iterative method. Single vcpu VMs do not need this either. 
610 + */ 611 + if (struct_size(data, cmpidr_to_idx, nr_entries) <= PAGE_SIZE) 612 + data = kzalloc(struct_size(data, cmpidr_to_idx, nr_entries), 613 + GFP_KERNEL_ACCOUNT); 614 + 615 + if (!data) 616 + goto out; 617 + 618 + data->mpidr_mask = mask; 619 + 620 + kvm_for_each_vcpu(c, vcpu, kvm) { 621 + u64 aff = kvm_vcpu_get_mpidr_aff(vcpu); 622 + u16 index = kvm_mpidr_index(data, aff); 623 + 624 + data->cmpidr_to_idx[index] = c; 625 + } 626 + 627 + kvm->arch.mpidr_data = data; 628 + out: 629 + mutex_unlock(&kvm->arch.config_lock); 630 + } 631 + 584 632 /* 585 633 * Handle both the initialisation that is being done when the vcpu is 586 634 * run for the first time, as well as the updates that must be ··· 654 600 655 601 if (likely(vcpu_has_run_once(vcpu))) 656 602 return 0; 603 + 604 + kvm_init_mpidr_data(kvm); 657 605 658 606 kvm_arm_vcpu_init_debug(vcpu); 659 607 ··· 857 801 } 858 802 859 803 if (kvm_check_request(KVM_REQ_RELOAD_PMU, vcpu)) 860 - kvm_pmu_handle_pmcr(vcpu, 861 - __vcpu_sys_reg(vcpu, PMCR_EL0)); 804 + kvm_vcpu_reload_pmu(vcpu); 862 805 863 806 if (kvm_check_request(KVM_REQ_RESYNC_PMU_EL0, vcpu)) 864 807 kvm_vcpu_pmu_restore_guest(vcpu); ··· 1005 950 * making a thread's VMID inactive. So we need to call 1006 951 * kvm_arm_vmid_update() in non-premptible context. 1007 952 */ 1008 - kvm_arm_vmid_update(&vcpu->arch.hw_mmu->vmid); 953 + if (kvm_arm_vmid_update(&vcpu->arch.hw_mmu->vmid) && 954 + has_vhe()) 955 + __load_stage2(vcpu->arch.hw_mmu, 956 + vcpu->arch.hw_mmu->arch); 1009 957 1010 958 kvm_pmu_flush_hwstate(vcpu); 1011 959 ··· 1192 1134 bool line_status) 1193 1135 { 1194 1136 u32 irq = irq_level->irq; 1195 - unsigned int irq_type, vcpu_idx, irq_num; 1196 - int nrcpus = atomic_read(&kvm->online_vcpus); 1137 + unsigned int irq_type, vcpu_id, irq_num; 1197 1138 struct kvm_vcpu *vcpu = NULL; 1198 1139 bool level = irq_level->level; 1199 1140 1200 1141 irq_type = (irq >> KVM_ARM_IRQ_TYPE_SHIFT) & KVM_ARM_IRQ_TYPE_MASK; 1201 - vcpu_idx = (irq >> KVM_ARM_IRQ_VCPU_SHIFT) & KVM_ARM_IRQ_VCPU_MASK; 1202 - vcpu_idx += ((irq >> KVM_ARM_IRQ_VCPU2_SHIFT) & KVM_ARM_IRQ_VCPU2_MASK) * (KVM_ARM_IRQ_VCPU_MASK + 1); 1142 + vcpu_id = (irq >> KVM_ARM_IRQ_VCPU_SHIFT) & KVM_ARM_IRQ_VCPU_MASK; 1143 + vcpu_id += ((irq >> KVM_ARM_IRQ_VCPU2_SHIFT) & KVM_ARM_IRQ_VCPU2_MASK) * (KVM_ARM_IRQ_VCPU_MASK + 1); 1203 1144 irq_num = (irq >> KVM_ARM_IRQ_NUM_SHIFT) & KVM_ARM_IRQ_NUM_MASK; 1204 1145 1205 - trace_kvm_irq_line(irq_type, vcpu_idx, irq_num, irq_level->level); 1146 + trace_kvm_irq_line(irq_type, vcpu_id, irq_num, irq_level->level); 1206 1147 1207 1148 switch (irq_type) { 1208 1149 case KVM_ARM_IRQ_TYPE_CPU: 1209 1150 if (irqchip_in_kernel(kvm)) 1210 1151 return -ENXIO; 1211 1152 1212 - if (vcpu_idx >= nrcpus) 1213 - return -EINVAL; 1214 - 1215 - vcpu = kvm_get_vcpu(kvm, vcpu_idx); 1153 + vcpu = kvm_get_vcpu_by_id(kvm, vcpu_id); 1216 1154 if (!vcpu) 1217 1155 return -EINVAL; 1218 1156 ··· 1220 1166 if (!irqchip_in_kernel(kvm)) 1221 1167 return -ENXIO; 1222 1168 1223 - if (vcpu_idx >= nrcpus) 1224 - return -EINVAL; 1225 - 1226 - vcpu = kvm_get_vcpu(kvm, vcpu_idx); 1169 + vcpu = kvm_get_vcpu_by_id(kvm, vcpu_id); 1227 1170 if (!vcpu) 1228 1171 return -EINVAL; 1229 1172 1230 1173 if (irq_num < VGIC_NR_SGIS || irq_num >= VGIC_NR_PRIVATE_IRQS) 1231 1174 return -EINVAL; 1232 1175 1233 - return kvm_vgic_inject_irq(kvm, vcpu->vcpu_id, irq_num, level, NULL); 1176 + return kvm_vgic_inject_irq(kvm, vcpu, irq_num, level, NULL); 1234 1177 case KVM_ARM_IRQ_TYPE_SPI: 1235 1178 if (!irqchip_in_kernel(kvm)) 1236 1179 
return -ENXIO; ··· 1235 1184 if (irq_num < VGIC_NR_PRIVATE_IRQS) 1236 1185 return -EINVAL; 1237 1186 1238 - return kvm_vgic_inject_irq(kvm, 0, irq_num, level, NULL); 1187 + return kvm_vgic_inject_irq(kvm, NULL, irq_num, level, NULL); 1239 1188 } 1240 1189 1241 1190 return -EINVAL; 1191 + } 1192 + 1193 + static unsigned long system_supported_vcpu_features(void) 1194 + { 1195 + unsigned long features = KVM_VCPU_VALID_FEATURES; 1196 + 1197 + if (!cpus_have_final_cap(ARM64_HAS_32BIT_EL1)) 1198 + clear_bit(KVM_ARM_VCPU_EL1_32BIT, &features); 1199 + 1200 + if (!kvm_arm_support_pmu_v3()) 1201 + clear_bit(KVM_ARM_VCPU_PMU_V3, &features); 1202 + 1203 + if (!system_supports_sve()) 1204 + clear_bit(KVM_ARM_VCPU_SVE, &features); 1205 + 1206 + if (!system_has_full_ptr_auth()) { 1207 + clear_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, &features); 1208 + clear_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, &features); 1209 + } 1210 + 1211 + if (!cpus_have_final_cap(ARM64_HAS_NESTED_VIRT)) 1212 + clear_bit(KVM_ARM_VCPU_HAS_EL2, &features); 1213 + 1214 + return features; 1242 1215 } 1243 1216 1244 1217 static int kvm_vcpu_init_check_features(struct kvm_vcpu *vcpu, ··· 1279 1204 return -ENOENT; 1280 1205 } 1281 1206 1207 + if (features & ~system_supported_vcpu_features()) 1208 + return -EINVAL; 1209 + 1210 + /* 1211 + * For now make sure that both address/generic pointer authentication 1212 + * features are requested by the userspace together. 1213 + */ 1214 + if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, &features) != 1215 + test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, &features)) 1216 + return -EINVAL; 1217 + 1218 + /* Disallow NV+SVE for the time being */ 1219 + if (test_bit(KVM_ARM_VCPU_HAS_EL2, &features) && 1220 + test_bit(KVM_ARM_VCPU_SVE, &features)) 1221 + return -EINVAL; 1222 + 1282 1223 if (!test_bit(KVM_ARM_VCPU_EL1_32BIT, &features)) 1283 1224 return 0; 1284 - 1285 - if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1)) 1286 - return -EINVAL; 1287 1225 1288 1226 /* MTE is incompatible with AArch32 */ 1289 1227 if (kvm_has_mte(vcpu->kvm)) ··· 1314 1226 { 1315 1227 unsigned long features = init->features[0]; 1316 1228 1317 - return !bitmap_equal(vcpu->arch.features, &features, KVM_VCPU_MAX_FEATURES); 1229 + return !bitmap_equal(vcpu->kvm->arch.vcpu_features, &features, 1230 + KVM_VCPU_MAX_FEATURES); 1231 + } 1232 + 1233 + static int kvm_setup_vcpu(struct kvm_vcpu *vcpu) 1234 + { 1235 + struct kvm *kvm = vcpu->kvm; 1236 + int ret = 0; 1237 + 1238 + /* 1239 + * When the vCPU has a PMU, but no PMU is set for the guest 1240 + * yet, set the default one. 1241 + */ 1242 + if (kvm_vcpu_has_pmu(vcpu) && !kvm->arch.arm_pmu) 1243 + ret = kvm_arm_set_default_pmu(kvm); 1244 + 1245 + return ret; 1318 1246 } 1319 1247 1320 1248 static int __kvm_vcpu_set_target(struct kvm_vcpu *vcpu, ··· 1343 1239 mutex_lock(&kvm->arch.config_lock); 1344 1240 1345 1241 if (test_bit(KVM_ARCH_FLAG_VCPU_FEATURES_CONFIGURED, &kvm->arch.flags) && 1346 - !bitmap_equal(kvm->arch.vcpu_features, &features, KVM_VCPU_MAX_FEATURES)) 1242 + kvm_vcpu_init_changed(vcpu, init)) 1347 1243 goto out_unlock; 1348 - 1349 - bitmap_copy(vcpu->arch.features, &features, KVM_VCPU_MAX_FEATURES); 1350 - 1351 - /* Now we know what it is, we can reset it. 
*/ 1352 - ret = kvm_reset_vcpu(vcpu); 1353 - if (ret) { 1354 - bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES); 1355 - goto out_unlock; 1356 - } 1357 1244 1358 1245 bitmap_copy(kvm->arch.vcpu_features, &features, KVM_VCPU_MAX_FEATURES); 1246 + 1247 + ret = kvm_setup_vcpu(vcpu); 1248 + if (ret) 1249 + goto out_unlock; 1250 + 1251 + /* Now we know what it is, we can reset it. */ 1252 + kvm_reset_vcpu(vcpu); 1253 + 1359 1254 set_bit(KVM_ARCH_FLAG_VCPU_FEATURES_CONFIGURED, &kvm->arch.flags); 1360 1255 vcpu_set_flag(vcpu, VCPU_INITIALIZED); 1256 + ret = 0; 1361 1257 out_unlock: 1362 1258 mutex_unlock(&kvm->arch.config_lock); 1363 1259 return ret; ··· 1382 1278 if (kvm_vcpu_init_changed(vcpu, init)) 1383 1279 return -EINVAL; 1384 1280 1385 - return kvm_reset_vcpu(vcpu); 1281 + kvm_reset_vcpu(vcpu); 1282 + return 0; 1386 1283 } 1387 1284 1388 1285 static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu, ··· 1733 1628 return -EFAULT; 1734 1629 1735 1630 return kvm_vm_set_attr(kvm, &attr); 1631 + } 1632 + case KVM_ARM_GET_REG_WRITABLE_MASKS: { 1633 + struct reg_mask_range range; 1634 + 1635 + if (copy_from_user(&range, argp, sizeof(range))) 1636 + return -EFAULT; 1637 + return kvm_vm_ioctl_get_reg_writable_masks(kvm, &range); 1736 1638 } 1737 1639 default: 1738 1640 return -EINVAL; ··· 2453 2341 unsigned long i; 2454 2342 2455 2343 mpidr &= MPIDR_HWID_BITMASK; 2344 + 2345 + if (kvm->arch.mpidr_data) { 2346 + u16 idx = kvm_mpidr_index(kvm->arch.mpidr_data, mpidr); 2347 + 2348 + vcpu = kvm_get_vcpu(kvm, 2349 + kvm->arch.mpidr_data->cmpidr_to_idx[idx]); 2350 + if (mpidr != kvm_vcpu_get_mpidr_aff(vcpu)) 2351 + vcpu = NULL; 2352 + 2353 + return vcpu; 2354 + } 2355 + 2456 2356 kvm_for_each_vcpu(i, vcpu, kvm) { 2457 2357 if (mpidr == kvm_vcpu_get_mpidr_aff(vcpu)) 2458 2358 return vcpu;
+71 -6
arch/arm64/kvm/emulate-nested.c
··· 648 648 SR_TRAP(SYS_APGAKEYLO_EL1, CGT_HCR_APK), 649 649 SR_TRAP(SYS_APGAKEYHI_EL1, CGT_HCR_APK), 650 650 /* All _EL2 registers */ 651 - SR_RANGE_TRAP(sys_reg(3, 4, 0, 0, 0), 652 - sys_reg(3, 4, 3, 15, 7), CGT_HCR_NV), 651 + SR_TRAP(SYS_BRBCR_EL2, CGT_HCR_NV), 652 + SR_TRAP(SYS_VPIDR_EL2, CGT_HCR_NV), 653 + SR_TRAP(SYS_VMPIDR_EL2, CGT_HCR_NV), 654 + SR_TRAP(SYS_SCTLR_EL2, CGT_HCR_NV), 655 + SR_TRAP(SYS_ACTLR_EL2, CGT_HCR_NV), 656 + SR_TRAP(SYS_SCTLR2_EL2, CGT_HCR_NV), 657 + SR_RANGE_TRAP(SYS_HCR_EL2, 658 + SYS_HCRX_EL2, CGT_HCR_NV), 659 + SR_TRAP(SYS_SMPRIMAP_EL2, CGT_HCR_NV), 660 + SR_TRAP(SYS_SMCR_EL2, CGT_HCR_NV), 661 + SR_RANGE_TRAP(SYS_TTBR0_EL2, 662 + SYS_TCR2_EL2, CGT_HCR_NV), 663 + SR_TRAP(SYS_VTTBR_EL2, CGT_HCR_NV), 664 + SR_TRAP(SYS_VTCR_EL2, CGT_HCR_NV), 665 + SR_TRAP(SYS_VNCR_EL2, CGT_HCR_NV), 666 + SR_RANGE_TRAP(SYS_HDFGRTR_EL2, 667 + SYS_HAFGRTR_EL2, CGT_HCR_NV), 653 668 /* Skip the SP_EL1 encoding... */ 654 669 SR_TRAP(SYS_SPSR_EL2, CGT_HCR_NV), 655 670 SR_TRAP(SYS_ELR_EL2, CGT_HCR_NV), 656 - SR_RANGE_TRAP(sys_reg(3, 4, 4, 1, 1), 657 - sys_reg(3, 4, 10, 15, 7), CGT_HCR_NV), 658 - SR_RANGE_TRAP(sys_reg(3, 4, 12, 0, 0), 659 - sys_reg(3, 4, 14, 15, 7), CGT_HCR_NV), 671 + /* Skip SPSR_irq, SPSR_abt, SPSR_und, SPSR_fiq */ 672 + SR_TRAP(SYS_AFSR0_EL2, CGT_HCR_NV), 673 + SR_TRAP(SYS_AFSR1_EL2, CGT_HCR_NV), 674 + SR_TRAP(SYS_ESR_EL2, CGT_HCR_NV), 675 + SR_TRAP(SYS_VSESR_EL2, CGT_HCR_NV), 676 + SR_TRAP(SYS_TFSR_EL2, CGT_HCR_NV), 677 + SR_TRAP(SYS_FAR_EL2, CGT_HCR_NV), 678 + SR_TRAP(SYS_HPFAR_EL2, CGT_HCR_NV), 679 + SR_TRAP(SYS_PMSCR_EL2, CGT_HCR_NV), 680 + SR_TRAP(SYS_MAIR_EL2, CGT_HCR_NV), 681 + SR_TRAP(SYS_AMAIR_EL2, CGT_HCR_NV), 682 + SR_TRAP(SYS_MPAMHCR_EL2, CGT_HCR_NV), 683 + SR_TRAP(SYS_MPAMVPMV_EL2, CGT_HCR_NV), 684 + SR_TRAP(SYS_MPAM2_EL2, CGT_HCR_NV), 685 + SR_RANGE_TRAP(SYS_MPAMVPM0_EL2, 686 + SYS_MPAMVPM7_EL2, CGT_HCR_NV), 687 + /* 688 + * Note that the spec. describes a group of MEC registers 689 + * whose access should not trap, therefore skip the following: 690 + * MECID_A0_EL2, MECID_A1_EL2, MECID_P0_EL2, 691 + * MECID_P1_EL2, MECIDR_EL2, VMECID_A_EL2, 692 + * VMECID_P_EL2. 693 + */ 694 + SR_RANGE_TRAP(SYS_VBAR_EL2, 695 + SYS_RMR_EL2, CGT_HCR_NV), 696 + SR_TRAP(SYS_VDISR_EL2, CGT_HCR_NV), 697 + /* ICH_AP0R<m>_EL2 */ 698 + SR_RANGE_TRAP(SYS_ICH_AP0R0_EL2, 699 + SYS_ICH_AP0R3_EL2, CGT_HCR_NV), 700 + /* ICH_AP1R<m>_EL2 */ 701 + SR_RANGE_TRAP(SYS_ICH_AP1R0_EL2, 702 + SYS_ICH_AP1R3_EL2, CGT_HCR_NV), 703 + SR_TRAP(SYS_ICC_SRE_EL2, CGT_HCR_NV), 704 + SR_RANGE_TRAP(SYS_ICH_HCR_EL2, 705 + SYS_ICH_EISR_EL2, CGT_HCR_NV), 706 + SR_TRAP(SYS_ICH_ELRSR_EL2, CGT_HCR_NV), 707 + SR_TRAP(SYS_ICH_VMCR_EL2, CGT_HCR_NV), 708 + /* ICH_LR<m>_EL2 */ 709 + SR_RANGE_TRAP(SYS_ICH_LR0_EL2, 710 + SYS_ICH_LR15_EL2, CGT_HCR_NV), 711 + SR_TRAP(SYS_CONTEXTIDR_EL2, CGT_HCR_NV), 712 + SR_TRAP(SYS_TPIDR_EL2, CGT_HCR_NV), 713 + SR_TRAP(SYS_SCXTNUM_EL2, CGT_HCR_NV), 714 + /* AMEVCNTVOFF0<n>_EL2, AMEVCNTVOFF1<n>_EL2 */ 715 + SR_RANGE_TRAP(SYS_AMEVCNTVOFF0n_EL2(0), 716 + SYS_AMEVCNTVOFF1n_EL2(15), CGT_HCR_NV), 717 + /* CNT*_EL2 */ 718 + SR_TRAP(SYS_CNTVOFF_EL2, CGT_HCR_NV), 719 + SR_TRAP(SYS_CNTPOFF_EL2, CGT_HCR_NV), 720 + SR_TRAP(SYS_CNTHCTL_EL2, CGT_HCR_NV), 721 + SR_RANGE_TRAP(SYS_CNTHP_TVAL_EL2, 722 + SYS_CNTHP_CVAL_EL2, CGT_HCR_NV), 723 + SR_RANGE_TRAP(SYS_CNTHV_TVAL_EL2, 724 + SYS_CNTHV_CVAL_EL2, CGT_HCR_NV), 660 725 /* All _EL02, _EL12 registers */ 661 726 SR_RANGE_TRAP(sys_reg(3, 5, 0, 0, 0), 662 727 sys_reg(3, 5, 10, 15, 7), CGT_HCR_NV),
+17
arch/arm64/kvm/hyp/include/hyp/switch.h
··· 30 30 #include <asm/fpsimd.h> 31 31 #include <asm/debug-monitors.h> 32 32 #include <asm/processor.h> 33 + #include <asm/traps.h> 33 34 34 35 struct kvm_exception_table_entry { 35 36 int insn, fixup; ··· 264 263 static inline bool __populate_fault_info(struct kvm_vcpu *vcpu) 265 264 { 266 265 return __get_fault_info(vcpu->arch.fault.esr_el2, &vcpu->arch.fault); 266 + } 267 + 268 + static bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code) 269 + { 270 + *vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR); 271 + arm64_mops_reset_regs(vcpu_gp_regs(vcpu), vcpu->arch.fault.esr_el2); 272 + write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR); 273 + 274 + /* 275 + * Finish potential single step before executing the prologue 276 + * instruction. 277 + */ 278 + *vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS; 279 + write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR); 280 + 281 + return true; 267 282 } 268 283 269 284 static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
+2 -1
arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
··· 197 197 198 198 #define PVM_ID_AA64ISAR2_ALLOW (\ 199 199 ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3) | \ 200 - ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) \ 200 + ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) | \ 201 + ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_MOPS) \ 201 202 ) 202 203 203 204 u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
+4 -4
arch/arm64/kvm/hyp/nvhe/mem_protect.c
··· 129 129 parange = kvm_get_parange(id_aa64mmfr0_el1_sys_val); 130 130 phys_shift = id_aa64mmfr0_parange_to_phys_shift(parange); 131 131 132 - host_mmu.arch.vtcr = kvm_get_vtcr(id_aa64mmfr0_el1_sys_val, 133 - id_aa64mmfr1_el1_sys_val, phys_shift); 132 + host_mmu.arch.mmu.vtcr = kvm_get_vtcr(id_aa64mmfr0_el1_sys_val, 133 + id_aa64mmfr1_el1_sys_val, phys_shift); 134 134 } 135 135 136 136 static bool host_stage2_force_pte_cb(u64 addr, u64 end, enum kvm_pgtable_prot prot); ··· 235 235 unsigned long nr_pages; 236 236 int ret; 237 237 238 - nr_pages = kvm_pgtable_stage2_pgd_size(vm->kvm.arch.vtcr) >> PAGE_SHIFT; 238 + nr_pages = kvm_pgtable_stage2_pgd_size(mmu->vtcr) >> PAGE_SHIFT; 239 239 ret = hyp_pool_init(&vm->pool, hyp_virt_to_pfn(pgd), nr_pages, 0); 240 240 if (ret) 241 241 return ret; ··· 295 295 return -EPERM; 296 296 297 297 params->vttbr = kvm_get_vttbr(mmu); 298 - params->vtcr = host_mmu.arch.vtcr; 298 + params->vtcr = mmu->vtcr; 299 299 params->hcr_el2 |= HCR_VM; 300 300 301 301 /*
+2 -2
arch/arm64/kvm/hyp/nvhe/pkvm.c
··· 303 303 { 304 304 hyp_vm->host_kvm = host_kvm; 305 305 hyp_vm->kvm.created_vcpus = nr_vcpus; 306 - hyp_vm->kvm.arch.vtcr = host_mmu.arch.vtcr; 306 + hyp_vm->kvm.arch.mmu.vtcr = host_mmu.arch.mmu.vtcr; 307 307 } 308 308 309 309 static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu, ··· 483 483 } 484 484 485 485 vm_size = pkvm_get_hyp_vm_size(nr_vcpus); 486 - pgd_size = kvm_pgtable_stage2_pgd_size(host_mmu.arch.vtcr); 486 + pgd_size = kvm_pgtable_stage2_pgd_size(host_mmu.arch.mmu.vtcr); 487 487 488 488 ret = -ENOMEM; 489 489
+2
arch/arm64/kvm/hyp/nvhe/switch.c
··· 192 192 [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, 193 193 [ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low, 194 194 [ESR_ELx_EC_PAC] = kvm_hyp_handle_ptrauth, 195 + [ESR_ELx_EC_MOPS] = kvm_hyp_handle_mops, 195 196 }; 196 197 197 198 static const exit_handler_fn pvm_exit_handlers[] = { ··· 204 203 [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, 205 204 [ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low, 206 205 [ESR_ELx_EC_PAC] = kvm_hyp_handle_ptrauth, 206 + [ESR_ELx_EC_MOPS] = kvm_hyp_handle_mops, 207 207 }; 208 208 209 209 static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu)
+2 -2
arch/arm64/kvm/hyp/pgtable.c
··· 1314 1314 ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level, 1315 1315 KVM_PGTABLE_WALK_HANDLE_FAULT | 1316 1316 KVM_PGTABLE_WALK_SHARED); 1317 - if (!ret) 1317 + if (!ret || ret == -EAGAIN) 1318 1318 kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, pgt->mmu, addr, level); 1319 1319 return ret; 1320 1320 } ··· 1511 1511 kvm_pgtable_force_pte_cb_t force_pte_cb) 1512 1512 { 1513 1513 size_t pgd_sz; 1514 - u64 vtcr = mmu->arch->vtcr; 1514 + u64 vtcr = mmu->vtcr; 1515 1515 u32 ia_bits = VTCR_EL2_IPA(vtcr); 1516 1516 u32 sl0 = FIELD_GET(VTCR_EL2_SL0_MASK, vtcr); 1517 1517 u32 start_level = VTCR_EL2_TGRAN_SL0_BASE - sl0;
+21 -13
arch/arm64/kvm/hyp/vhe/switch.c
··· 137 137 NOKPROBE_SYMBOL(__deactivate_traps); 138 138 139 139 /* 140 - * Disable IRQs in {activate,deactivate}_traps_vhe_{load,put}() to 140 + * Disable IRQs in __vcpu_{load,put}_{activate,deactivate}_traps() to 141 141 * prevent a race condition between context switching of PMUSERENR_EL0 142 142 * in __{activate,deactivate}_traps_common() and IPIs that attempts to 143 143 * update PMUSERENR_EL0. See also kvm_set_pmuserenr(). 144 144 */ 145 - void activate_traps_vhe_load(struct kvm_vcpu *vcpu) 145 + static void __vcpu_load_activate_traps(struct kvm_vcpu *vcpu) 146 146 { 147 147 unsigned long flags; 148 148 ··· 151 151 local_irq_restore(flags); 152 152 } 153 153 154 - void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu) 154 + static void __vcpu_put_deactivate_traps(struct kvm_vcpu *vcpu) 155 155 { 156 156 unsigned long flags; 157 157 158 158 local_irq_save(flags); 159 159 __deactivate_traps_common(vcpu); 160 160 local_irq_restore(flags); 161 + } 162 + 163 + void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu) 164 + { 165 + __vcpu_load_switch_sysregs(vcpu); 166 + __vcpu_load_activate_traps(vcpu); 167 + __load_stage2(vcpu->arch.hw_mmu, vcpu->arch.hw_mmu->arch); 168 + } 169 + 170 + void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu) 171 + { 172 + __vcpu_put_deactivate_traps(vcpu); 173 + __vcpu_put_switch_sysregs(vcpu); 161 174 } 162 175 163 176 static const exit_handler_fn hyp_exit_handlers[] = { ··· 183 170 [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, 184 171 [ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low, 185 172 [ESR_ELx_EC_PAC] = kvm_hyp_handle_ptrauth, 173 + [ESR_ELx_EC_MOPS] = kvm_hyp_handle_mops, 186 174 }; 187 175 188 176 static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu) ··· 228 214 sysreg_save_host_state_vhe(host_ctxt); 229 215 230 216 /* 231 - * ARM erratum 1165522 requires us to configure both stage 1 and 232 - * stage 2 translation for the guest context before we clear 233 - * HCR_EL2.TGE. 234 - * 235 - * We have already configured the guest's stage 1 translation in 236 - * kvm_vcpu_load_sysregs_vhe above. We must now call 237 - * __load_stage2 before __activate_traps, because 238 - * __load_stage2 configures stage 2 translation, and 239 - * __activate_traps clear HCR_EL2.TGE (among other things). 217 + * Note that ARM erratum 1165522 requires us to configure both stage 1 218 + * and stage 2 translation for the guest context before we clear 219 + * HCR_EL2.TGE. The stage 1 and stage 2 guest context has already been 220 + * loaded on the CPU in kvm_vcpu_load_vhe(). 240 221 */ 241 - __load_stage2(vcpu->arch.hw_mmu, vcpu->arch.hw_mmu->arch); 242 222 __activate_traps(vcpu); 243 223 244 224 __kvm_adjust_pc(vcpu);
+4 -7
arch/arm64/kvm/hyp/vhe/sysreg-sr.c
··· 52 52 NOKPROBE_SYMBOL(sysreg_restore_guest_state_vhe); 53 53 54 54 /** 55 - * kvm_vcpu_load_sysregs_vhe - Load guest system registers to the physical CPU 55 + * __vcpu_load_switch_sysregs - Load guest system registers to the physical CPU 56 56 * 57 57 * @vcpu: The VCPU pointer 58 58 * ··· 62 62 * and loading system register state early avoids having to load them on 63 63 * every entry to the VM. 64 64 */ 65 - void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu) 65 + void __vcpu_load_switch_sysregs(struct kvm_vcpu *vcpu) 66 66 { 67 67 struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt; 68 68 struct kvm_cpu_context *host_ctxt; ··· 92 92 __sysreg_restore_el1_state(guest_ctxt); 93 93 94 94 vcpu_set_flag(vcpu, SYSREGS_ON_CPU); 95 - 96 - activate_traps_vhe_load(vcpu); 97 95 } 98 96 99 97 /** 100 - * kvm_vcpu_put_sysregs_vhe - Restore host system registers to the physical CPU 98 + * __vcpu_put_switch_syregs - Restore host system registers to the physical CPU 101 99 * 102 100 * @vcpu: The VCPU pointer 103 101 * ··· 105 107 * and deferring saving system register state until we're no longer running the 106 108 * VCPU avoids having to save them on every exit from the VM. 107 109 */ 108 - void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu) 110 + void __vcpu_put_switch_sysregs(struct kvm_vcpu *vcpu) 109 111 { 110 112 struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt; 111 113 struct kvm_cpu_context *host_ctxt; 112 114 113 115 host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; 114 - deactivate_traps_vhe_put(vcpu); 115 116 116 117 __sysreg_save_el1_state(guest_ctxt); 117 118 __sysreg_save_user_state(guest_ctxt);
+14 -4
arch/arm64/kvm/hyp/vhe/tlb.c
··· 11 11 #include <asm/tlbflush.h> 12 12 13 13 struct tlb_inv_context { 14 - unsigned long flags; 15 - u64 tcr; 16 - u64 sctlr; 14 + struct kvm_s2_mmu *mmu; 15 + unsigned long flags; 16 + u64 tcr; 17 + u64 sctlr; 17 18 }; 18 19 19 20 static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu, 20 21 struct tlb_inv_context *cxt) 21 22 { 23 + struct kvm_vcpu *vcpu = kvm_get_running_vcpu(); 22 24 u64 val; 23 25 24 26 local_irq_save(cxt->flags); 27 + 28 + if (vcpu && mmu != vcpu->arch.hw_mmu) 29 + cxt->mmu = vcpu->arch.hw_mmu; 30 + else 31 + cxt->mmu = NULL; 25 32 26 33 if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { 27 34 /* ··· 73 66 * We're done with the TLB operation, let's restore the host's 74 67 * view of HCR_EL2. 75 68 */ 76 - write_sysreg(0, vttbr_el2); 77 69 write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2); 78 70 isb(); 71 + 72 + /* ... and the stage-2 MMU context that we switched away from */ 73 + if (cxt->mmu) 74 + __load_stage2(cxt->mmu, cxt->mmu->arch); 79 75 80 76 if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { 81 77 /* Restore the registers to what they were */
+23 -13
arch/arm64/kvm/hypercalls.c
··· 133 133 ARM_SMCCC_SMC_64, \ 134 134 0, ARM_SMCCC_FUNC_MASK) 135 135 136 - static void init_smccc_filter(struct kvm *kvm) 136 + static int kvm_smccc_filter_insert_reserved(struct kvm *kvm) 137 137 { 138 138 int r; 139 - 140 - mt_init(&kvm->arch.smccc_filter); 141 139 142 140 /* 143 141 * Prevent userspace from handling any SMCCC calls in the architecture ··· 146 148 SMC32_ARCH_RANGE_BEGIN, SMC32_ARCH_RANGE_END, 147 149 xa_mk_value(KVM_SMCCC_FILTER_HANDLE), 148 150 GFP_KERNEL_ACCOUNT); 149 - WARN_ON_ONCE(r); 151 + if (r) 152 + goto out_destroy; 150 153 151 154 r = mtree_insert_range(&kvm->arch.smccc_filter, 152 155 SMC64_ARCH_RANGE_BEGIN, SMC64_ARCH_RANGE_END, 153 156 xa_mk_value(KVM_SMCCC_FILTER_HANDLE), 154 157 GFP_KERNEL_ACCOUNT); 155 - WARN_ON_ONCE(r); 158 + if (r) 159 + goto out_destroy; 156 160 161 + return 0; 162 + out_destroy: 163 + mtree_destroy(&kvm->arch.smccc_filter); 164 + return r; 165 + } 166 + 167 + static bool kvm_smccc_filter_configured(struct kvm *kvm) 168 + { 169 + return !mtree_empty(&kvm->arch.smccc_filter); 157 170 } 158 171 159 172 static int kvm_smccc_set_filter(struct kvm *kvm, struct kvm_smccc_filter __user *uaddr) ··· 193 184 goto out_unlock; 194 185 } 195 186 187 + if (!kvm_smccc_filter_configured(kvm)) { 188 + r = kvm_smccc_filter_insert_reserved(kvm); 189 + if (WARN_ON_ONCE(r)) 190 + goto out_unlock; 191 + } 192 + 196 193 r = mtree_insert_range(&kvm->arch.smccc_filter, start, end, 197 194 xa_mk_value(filter.action), GFP_KERNEL_ACCOUNT); 198 - if (r) 199 - goto out_unlock; 200 - 201 - set_bit(KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED, &kvm->arch.flags); 202 - 203 195 out_unlock: 204 196 mutex_unlock(&kvm->arch.config_lock); 205 197 return r; ··· 211 201 unsigned long idx = func_id; 212 202 void *val; 213 203 214 - if (!test_bit(KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED, &kvm->arch.flags)) 204 + if (!kvm_smccc_filter_configured(kvm)) 215 205 return KVM_SMCCC_FILTER_HANDLE; 216 206 217 207 /* ··· 397 387 smccc_feat->std_hyp_bmap = KVM_ARM_SMCCC_STD_HYP_FEATURES; 398 388 smccc_feat->vendor_hyp_bmap = KVM_ARM_SMCCC_VENDOR_HYP_FEATURES; 399 389 400 - init_smccc_filter(kvm); 390 + mt_init(&kvm->arch.smccc_filter); 401 391 } 402 392 403 393 void kvm_arm_teardown_hypercalls(struct kvm *kvm) ··· 564 554 { 565 555 bool wants_02; 566 556 567 - wants_02 = test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features); 557 + wants_02 = vcpu_has_feature(vcpu, KVM_ARM_VCPU_PSCI_0_2); 568 558 569 559 switch (val) { 570 560 case KVM_ARM_PSCI_0_1:
+3 -1
arch/arm64/kvm/mmio.c
··· 135 135 * volunteered to do so, and bail out otherwise. 136 136 */ 137 137 if (!kvm_vcpu_dabt_isvalid(vcpu)) { 138 + trace_kvm_mmio_nisv(*vcpu_pc(vcpu), kvm_vcpu_get_esr(vcpu), 139 + kvm_vcpu_get_hfar(vcpu), fault_ipa); 140 + 138 141 if (test_bit(KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER, 139 142 &vcpu->kvm->arch.flags)) { 140 143 run->exit_reason = KVM_EXIT_ARM_NISV; ··· 146 143 return 0; 147 144 } 148 145 149 - kvm_pr_unimpl("Data abort outside memslots with no valid syndrome info\n"); 150 146 return -ENOSYS; 151 147 } 152 148
+7 -26
arch/arm64/kvm/mmu.c
··· 892 892 893 893 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); 894 894 mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); 895 - kvm->arch.vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift); 895 + mmu->vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift); 896 896 897 897 if (mmu->pgt != NULL) { 898 898 kvm_err("kvm_arch already initialized?\n"); ··· 1067 1067 phys_addr_t addr; 1068 1068 int ret = 0; 1069 1069 struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO }; 1070 - struct kvm_pgtable *pgt = kvm->arch.mmu.pgt; 1070 + struct kvm_s2_mmu *mmu = &kvm->arch.mmu; 1071 + struct kvm_pgtable *pgt = mmu->pgt; 1071 1072 enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE | 1072 1073 KVM_PGTABLE_PROT_R | 1073 1074 (writable ? KVM_PGTABLE_PROT_W : 0); ··· 1081 1080 1082 1081 for (addr = guest_ipa; addr < guest_ipa + size; addr += PAGE_SIZE) { 1083 1082 ret = kvm_mmu_topup_memory_cache(&cache, 1084 - kvm_mmu_cache_min_pages(kvm)); 1083 + kvm_mmu_cache_min_pages(mmu)); 1085 1084 if (ret) 1086 1085 break; 1087 1086 ··· 1299 1298 if (sz < PMD_SIZE) 1300 1299 return PAGE_SIZE; 1301 1300 1302 - /* 1303 - * The address we faulted on is backed by a transparent huge 1304 - * page. However, because we map the compound huge page and 1305 - * not the individual tail page, we need to transfer the 1306 - * refcount to the head page. We have to be careful that the 1307 - * THP doesn't start to split while we are adjusting the 1308 - * refcounts. 1309 - * 1310 - * We are sure this doesn't happen, because mmu_invalidate_retry 1311 - * was successful and we are holding the mmu_lock, so if this 1312 - * THP is trying to split, it will be blocked in the mmu 1313 - * notifier before touching any of the pages, specifically 1314 - * before being able to call __split_huge_page_refcount(). 1315 - * 1316 - * We can therefore safely transfer the refcount from PG_tail 1317 - * to PG_head and switch the pfn from a tail page to the head 1318 - * page accordingly. 1319 - */ 1320 1301 *ipap &= PMD_MASK; 1321 - kvm_release_pfn_clean(pfn); 1322 1302 pfn &= ~(PTRS_PER_PMD - 1); 1323 - get_page(pfn_to_page(pfn)); 1324 1303 *pfnp = pfn; 1325 1304 1326 1305 return PMD_SIZE; ··· 1412 1431 if (fault_status != ESR_ELx_FSC_PERM || 1413 1432 (logging_active && write_fault)) { 1414 1433 ret = kvm_mmu_topup_memory_cache(memcache, 1415 - kvm_mmu_cache_min_pages(kvm)); 1434 + kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu)); 1416 1435 if (ret) 1417 1436 return ret; 1418 1437 } ··· 1728 1747 } 1729 1748 1730 1749 /* Userspace should not be able to register out-of-bounds IPAs */ 1731 - VM_BUG_ON(fault_ipa >= kvm_phys_size(vcpu->kvm)); 1750 + VM_BUG_ON(fault_ipa >= kvm_phys_size(vcpu->arch.hw_mmu)); 1732 1751 1733 1752 if (fault_status == ESR_ELx_FSC_ACCESS) { 1734 1753 handle_access_fault(vcpu, fault_ipa); ··· 2002 2021 * Prevent userspace from creating a memory region outside of the IPA 2003 2022 * space addressable by the KVM guest IPA space. 2004 2023 */ 2005 - if ((new->base_gfn + new->npages) > (kvm_phys_size(kvm) >> PAGE_SHIFT)) 2024 + if ((new->base_gfn + new->npages) > (kvm_phys_size(&kvm->arch.mmu) >> PAGE_SHIFT)) 2006 2025 return -EFAULT; 2007 2026 2008 2027 hva = new->userspace_addr;
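The IPA limit enforced in kvm_arch_prepare_memory_region() above (now read through the per-MMU VTCR) is still the one userspace picks at VM creation time; a minimal sketch, with a 40-bit IPA space chosen arbitrarily and the KVM_CAP_ARM_VM_IPA_SIZE capability check elided:

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int create_vm_with_40bit_ipa(void)
    {
            int kvm_fd = open("/dev/kvm", O_RDWR);

            if (kvm_fd < 0)
                    return -1;

            /*
             * Memslots ending beyond 1 << 40 will then be rejected with
             * -EFAULT by the check shown in the hunk above.
             */
            return ioctl(kvm_fd, KVM_CREATE_VM, KVM_VM_TYPE_ARM_IPA_SIZE(40));
    }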
+1 -1
arch/arm64/kvm/pkvm.c
··· 123 123 if (host_kvm->created_vcpus < 1) 124 124 return -EINVAL; 125 125 126 - pgd_sz = kvm_pgtable_stage2_pgd_size(host_kvm->arch.vtcr); 126 + pgd_sz = kvm_pgtable_stage2_pgd_size(host_kvm->arch.mmu.vtcr); 127 127 128 128 /* 129 129 * The PGD pages will be reclaimed using a hyp_memcache which implies
+107 -38
arch/arm64/kvm/pmu-emul.c
··· 60 60 return __kvm_pmu_event_mask(pmuver); 61 61 } 62 62 63 + u64 kvm_pmu_evtyper_mask(struct kvm *kvm) 64 + { 65 + u64 mask = ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 | 66 + kvm_pmu_event_mask(kvm); 67 + u64 pfr0 = IDREG(kvm, SYS_ID_AA64PFR0_EL1); 68 + 69 + if (SYS_FIELD_GET(ID_AA64PFR0_EL1, EL2, pfr0)) 70 + mask |= ARMV8_PMU_INCLUDE_EL2; 71 + 72 + if (SYS_FIELD_GET(ID_AA64PFR0_EL1, EL3, pfr0)) 73 + mask |= ARMV8_PMU_EXCLUDE_NS_EL0 | 74 + ARMV8_PMU_EXCLUDE_NS_EL1 | 75 + ARMV8_PMU_EXCLUDE_EL3; 76 + 77 + return mask; 78 + } 79 + 63 80 /** 64 81 * kvm_pmc_is_64bit - determine if counter is 64bit 65 82 * @pmc: counter context ··· 89 72 90 73 static bool kvm_pmc_has_64bit_overflow(struct kvm_pmc *pmc) 91 74 { 92 - u64 val = __vcpu_sys_reg(kvm_pmc_to_vcpu(pmc), PMCR_EL0); 75 + u64 val = kvm_vcpu_read_pmcr(kvm_pmc_to_vcpu(pmc)); 93 76 94 77 return (pmc->idx < ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LP)) || 95 78 (pmc->idx == ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LC)); ··· 267 250 268 251 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu) 269 252 { 270 - u64 val = __vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT; 253 + u64 val = kvm_vcpu_read_pmcr(vcpu) >> ARMV8_PMU_PMCR_N_SHIFT; 271 254 272 255 val &= ARMV8_PMU_PMCR_N_MASK; 273 256 if (val == 0) ··· 289 272 if (!kvm_vcpu_has_pmu(vcpu)) 290 273 return; 291 274 292 - if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val) 275 + if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) || !val) 293 276 return; 294 277 295 278 for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) { ··· 341 324 { 342 325 u64 reg = 0; 343 326 344 - if ((__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) { 327 + if ((kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E)) { 345 328 reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0); 346 329 reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0); 347 330 reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1); ··· 365 348 pmu->irq_level = overflow; 366 349 367 350 if (likely(irqchip_in_kernel(vcpu->kvm))) { 368 - int ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, 351 + int ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu, 369 352 pmu->irq_num, overflow, pmu); 370 353 WARN_ON(ret); 371 354 } ··· 443 426 { 444 427 int i; 445 428 446 - if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) 429 + if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E)) 447 430 return; 448 431 449 432 /* Weed out disabled counters */ ··· 586 569 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc) 587 570 { 588 571 struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc); 589 - return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) && 572 + return (kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) && 590 573 (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(pmc->idx)); 591 574 } 592 575 ··· 601 584 struct perf_event *event; 602 585 struct perf_event_attr attr; 603 586 u64 eventsel, reg, data; 587 + bool p, u, nsk, nsu; 604 588 605 589 reg = counter_index_to_evtreg(pmc->idx); 606 590 data = __vcpu_sys_reg(vcpu, reg); ··· 628 610 !test_bit(eventsel, vcpu->kvm->arch.pmu_filter)) 629 611 return; 630 612 613 + p = data & ARMV8_PMU_EXCLUDE_EL1; 614 + u = data & ARMV8_PMU_EXCLUDE_EL0; 615 + nsk = data & ARMV8_PMU_EXCLUDE_NS_EL1; 616 + nsu = data & ARMV8_PMU_EXCLUDE_NS_EL0; 617 + 631 618 memset(&attr, 0, sizeof(struct perf_event_attr)); 632 619 attr.type = arm_pmu->pmu.type; 633 620 attr.size = sizeof(attr); 634 621 attr.pinned = 1; 635 622 attr.disabled = !kvm_pmu_counter_is_enabled(pmc); 636 - attr.exclude_user = data & ARMV8_PMU_EXCLUDE_EL0 ? 
1 : 0; 637 - attr.exclude_kernel = data & ARMV8_PMU_EXCLUDE_EL1 ? 1 : 0; 623 + attr.exclude_user = (u != nsu); 624 + attr.exclude_kernel = (p != nsk); 638 625 attr.exclude_hv = 1; /* Don't count EL2 events */ 639 626 attr.exclude_host = 1; /* Don't count host events */ 640 627 attr.config = eventsel; ··· 680 657 u64 select_idx) 681 658 { 682 659 struct kvm_pmc *pmc = kvm_vcpu_idx_to_pmc(vcpu, select_idx); 683 - u64 reg, mask; 660 + u64 reg; 684 661 685 662 if (!kvm_vcpu_has_pmu(vcpu)) 686 663 return; 687 664 688 - mask = ARMV8_PMU_EVTYPE_MASK; 689 - mask &= ~ARMV8_PMU_EVTYPE_EVENT; 690 - mask |= kvm_pmu_event_mask(vcpu->kvm); 691 - 692 665 reg = counter_index_to_evtreg(pmc->idx); 693 - 694 - __vcpu_sys_reg(vcpu, reg) = data & mask; 666 + __vcpu_sys_reg(vcpu, reg) = data & kvm_pmu_evtyper_mask(vcpu->kvm); 695 667 696 668 kvm_pmu_create_perf_event(pmc); 697 669 } ··· 735 717 * It is still necessary to get a valid cpu, though, to probe for the 736 718 * default PMU instance as userspace is not required to specify a PMU 737 719 * type. In order to uphold the preexisting behavior KVM selects the 738 - * PMU instance for the core where the first call to the 739 - * KVM_ARM_VCPU_PMU_V3_CTRL attribute group occurs. A dependent use case 740 - * would be a user with disdain of all things big.LITTLE that affines 741 - * the VMM to a particular cluster of cores. 720 + * PMU instance for the core during vcpu init. A dependent use 721 + * case would be a user with disdain of all things big.LITTLE that 722 + * affines the VMM to a particular cluster of cores. 742 723 * 743 724 * In any case, userspace should just do the sane thing and use the UAPI 744 725 * to select a PMU type directly. But, be wary of the baggage being ··· 801 784 } 802 785 803 786 return val & mask; 787 + } 788 + 789 + void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) 790 + { 791 + u64 mask = kvm_pmu_valid_counter_mask(vcpu); 792 + 793 + kvm_pmu_handle_pmcr(vcpu, kvm_vcpu_read_pmcr(vcpu)); 794 + 795 + __vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= mask; 796 + __vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= mask; 797 + __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= mask; 804 798 } 805 799 806 800 int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu) ··· 902 874 return true; 903 875 } 904 876 877 + /** 878 + * kvm_arm_pmu_get_max_counters - Return the max number of PMU counters. 879 + * @kvm: The kvm pointer 880 + */ 881 + u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm) 882 + { 883 + struct arm_pmu *arm_pmu = kvm->arch.arm_pmu; 884 + 885 + /* 886 + * The arm_pmu->num_events considers the cycle counter as well. 887 + * Ignore that and return only the general-purpose counters. 888 + */ 889 + return arm_pmu->num_events - 1; 890 + } 891 + 892 + static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu) 893 + { 894 + lockdep_assert_held(&kvm->arch.config_lock); 895 + 896 + kvm->arch.arm_pmu = arm_pmu; 897 + kvm->arch.pmcr_n = kvm_arm_pmu_get_max_counters(kvm); 898 + } 899 + 900 + /** 901 + * kvm_arm_set_default_pmu - No PMU set, get the default one. 902 + * @kvm: The kvm pointer 903 + * 904 + * The observant among you will notice that the supported_cpus 905 + * mask does not get updated for the default PMU even though it 906 + * is quite possible the selected instance supports only a 907 + * subset of cores in the system. This is intentional, and 908 + * upholds the preexisting behavior on heterogeneous systems 909 + * where vCPUs can be scheduled on any core but the guest 910 + * counters could stop working. 
911 + */ 912 + int kvm_arm_set_default_pmu(struct kvm *kvm) 913 + { 914 + struct arm_pmu *arm_pmu = kvm_pmu_probe_armpmu(); 915 + 916 + if (!arm_pmu) 917 + return -ENODEV; 918 + 919 + kvm_arm_set_pmu(kvm, arm_pmu); 920 + return 0; 921 + } 922 + 905 923 static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id) 906 924 { 907 925 struct kvm *kvm = vcpu->kvm; ··· 967 893 break; 968 894 } 969 895 970 - kvm->arch.arm_pmu = arm_pmu; 896 + kvm_arm_set_pmu(kvm, arm_pmu); 971 897 cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus); 972 898 ret = 0; 973 899 break; ··· 989 915 990 916 if (vcpu->arch.pmu.created) 991 917 return -EBUSY; 992 - 993 - if (!kvm->arch.arm_pmu) { 994 - /* 995 - * No PMU set, get the default one. 996 - * 997 - * The observant among you will notice that the supported_cpus 998 - * mask does not get updated for the default PMU even though it 999 - * is quite possible the selected instance supports only a 1000 - * subset of cores in the system. This is intentional, and 1001 - * upholds the preexisting behavior on heterogeneous systems 1002 - * where vCPUs can be scheduled on any core but the guest 1003 - * counters could stop working. 1004 - */ 1005 - kvm->arch.arm_pmu = kvm_pmu_probe_armpmu(); 1006 - if (!kvm->arch.arm_pmu) 1007 - return -ENODEV; 1008 - } 1009 918 1010 919 switch (attr->attr) { 1011 920 case KVM_ARM_VCPU_PMU_V3_IRQ: { ··· 1128 1071 ID_AA64DFR0_EL1_PMUVer_SHIFT, 1129 1072 ID_AA64DFR0_EL1_PMUVer_V3P5); 1130 1073 return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), tmp); 1074 + } 1075 + 1076 + /** 1077 + * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU 1078 + * @vcpu: The vcpu pointer 1079 + */ 1080 + u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu) 1081 + { 1082 + u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0) & 1083 + ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT); 1084 + 1085 + return pmcr | ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT); 1131 1086 }
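With kvm->arch.pmcr_n now driving the guest's view of PMCR_EL0.N, userspace can shrink the number of general-purpose counters exposed to a VM by rewriting PMCR_EL0 before the first KVM_RUN. A rough sketch, assuming the standard ONE_REG encoding for PMCR_EL0; the requested count is just an example:

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* PMCR_EL0: op0=3, op1=3, CRn=9, CRm=12, op2=0 */
    #define PMCR_EL0_REG    ARM64_SYS_REG(3, 3, 9, 12, 0)

    static int limit_guest_pmu_counters(int vcpu_fd, __u64 n)
    {
            __u64 pmcr;
            struct kvm_one_reg reg = {
                    .id   = PMCR_EL0_REG,
                    .addr = (__u64)(unsigned long)&pmcr,
            };

            if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
                    return -1;

            pmcr &= ~(0x1fULL << 11);       /* PMCR_EL0.N is bits [15:11] */
            pmcr |= n << 11;

            /* Values above kvm_arm_pmu_get_max_counters() are silently ignored. */
            return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
    }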
+10 -46
arch/arm64/kvm/reset.c
··· 73 73 return 0; 74 74 } 75 75 76 - static int kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) 76 + static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) 77 77 { 78 - if (!system_supports_sve()) 79 - return -EINVAL; 80 - 81 78 vcpu->arch.sve_max_vl = kvm_sve_max_vl; 82 79 83 80 /* ··· 83 86 * kvm_arm_vcpu_finalize(), which freezes the configuration. 84 87 */ 85 88 vcpu_set_flag(vcpu, GUEST_HAS_SVE); 86 - 87 - return 0; 88 89 } 89 90 90 91 /* ··· 165 170 memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu)); 166 171 } 167 172 168 - static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu) 173 + static void kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu) 169 174 { 170 - /* 171 - * For now make sure that both address/generic pointer authentication 172 - * features are requested by the userspace together and the system 173 - * supports these capabilities. 174 - */ 175 - if (!test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) || 176 - !test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features) || 177 - !system_has_full_ptr_auth()) 178 - return -EINVAL; 179 - 180 175 vcpu_set_flag(vcpu, GUEST_HAS_PTRAUTH); 181 - return 0; 182 176 } 183 177 184 178 /** ··· 188 204 * disable preemption around the vcpu reset as we would otherwise race with 189 205 * preempt notifiers which also call put/load. 190 206 */ 191 - int kvm_reset_vcpu(struct kvm_vcpu *vcpu) 207 + void kvm_reset_vcpu(struct kvm_vcpu *vcpu) 192 208 { 193 209 struct vcpu_reset_state reset_state; 194 - int ret; 195 210 bool loaded; 196 211 u32 pstate; 197 212 ··· 207 224 if (loaded) 208 225 kvm_arch_vcpu_put(vcpu); 209 226 210 - /* Disallow NV+SVE for the time being */ 211 - if (vcpu_has_nv(vcpu) && vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) { 212 - ret = -EINVAL; 213 - goto out; 214 - } 215 - 216 227 if (!kvm_arm_vcpu_sve_finalized(vcpu)) { 217 - if (test_bit(KVM_ARM_VCPU_SVE, vcpu->arch.features)) { 218 - ret = kvm_vcpu_enable_sve(vcpu); 219 - if (ret) 220 - goto out; 221 - } 228 + if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) 229 + kvm_vcpu_enable_sve(vcpu); 222 230 } else { 223 231 kvm_vcpu_reset_sve(vcpu); 224 232 } 225 233 226 - if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) || 227 - test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) { 228 - if (kvm_vcpu_enable_ptrauth(vcpu)) { 229 - ret = -EINVAL; 230 - goto out; 231 - } 232 - } 234 + if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_ADDRESS) || 235 + vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_GENERIC)) 236 + kvm_vcpu_enable_ptrauth(vcpu); 233 237 234 238 if (vcpu_el1_is_32bit(vcpu)) 235 239 pstate = VCPU_RESET_PSTATE_SVC; ··· 224 254 pstate = VCPU_RESET_PSTATE_EL2; 225 255 else 226 256 pstate = VCPU_RESET_PSTATE_EL1; 227 - 228 - if (kvm_vcpu_has_pmu(vcpu) && !kvm_arm_support_pmu_v3()) { 229 - ret = -EINVAL; 230 - goto out; 231 - } 232 257 233 258 /* Reset core registers */ 234 259 memset(vcpu_gp_regs(vcpu), 0, sizeof(*vcpu_gp_regs(vcpu))); ··· 259 294 } 260 295 261 296 /* Reset timer */ 262 - ret = kvm_timer_vcpu_reset(vcpu); 263 - out: 297 + kvm_timer_vcpu_reset(vcpu); 298 + 264 299 if (loaded) 265 300 kvm_arch_vcpu_load(vcpu, smp_processor_id()); 266 301 preempt_enable(); 267 - return ret; 268 302 } 269 303 270 304 u32 get_kvm_ipa_limit(void)
+286 -67
arch/arm64/kvm/sys_regs.c
··· 379 379 struct sys_reg_params *p, 380 380 const struct sys_reg_desc *r) 381 381 { 382 - u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); 382 + u64 val = IDREG(vcpu->kvm, SYS_ID_AA64MMFR1_EL1); 383 383 u32 sr = reg_to_encoding(r); 384 384 385 385 if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) { ··· 719 719 720 720 static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) 721 721 { 722 - u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX); 722 + u64 mask = BIT(ARMV8_PMU_CYCLE_IDX); 723 + u8 n = vcpu->kvm->arch.pmcr_n; 723 724 724 - /* No PMU available, any PMU reg may UNDEF... */ 725 - if (!kvm_arm_support_pmu_v3()) 726 - return 0; 727 - 728 - n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT; 729 - n &= ARMV8_PMU_PMCR_N_MASK; 730 725 if (n) 731 726 mask |= GENMASK(n - 1, 0); 732 727 ··· 741 746 742 747 static u64 reset_pmevtyper(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) 743 748 { 749 + /* This thing will UNDEF, who cares about the reset value? */ 750 + if (!kvm_vcpu_has_pmu(vcpu)) 751 + return 0; 752 + 744 753 reset_unknown(vcpu, r); 745 - __vcpu_sys_reg(vcpu, r->reg) &= ARMV8_PMU_EVTYPE_MASK; 754 + __vcpu_sys_reg(vcpu, r->reg) &= kvm_pmu_evtyper_mask(vcpu->kvm); 746 755 747 756 return __vcpu_sys_reg(vcpu, r->reg); 748 757 } ··· 761 762 762 763 static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) 763 764 { 764 - u64 pmcr; 765 + u64 pmcr = 0; 765 766 766 - /* No PMU available, PMCR_EL0 may UNDEF... */ 767 - if (!kvm_arm_support_pmu_v3()) 768 - return 0; 769 - 770 - /* Only preserve PMCR_EL0.N, and reset the rest to 0 */ 771 - pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT); 772 767 if (!kvm_supports_32bit_el0()) 773 768 pmcr |= ARMV8_PMU_PMCR_LC; 774 769 770 + /* 771 + * The value of PMCR.N field is included when the 772 + * vCPU register is read via kvm_vcpu_read_pmcr(). 
773 + */ 775 774 __vcpu_sys_reg(vcpu, r->reg) = pmcr; 776 775 777 776 return __vcpu_sys_reg(vcpu, r->reg); ··· 819 822 * Only update writeable bits of PMCR (continuing into 820 823 * kvm_pmu_handle_pmcr() as well) 821 824 */ 822 - val = __vcpu_sys_reg(vcpu, PMCR_EL0); 825 + val = kvm_vcpu_read_pmcr(vcpu); 823 826 val &= ~ARMV8_PMU_PMCR_MASK; 824 827 val |= p->regval & ARMV8_PMU_PMCR_MASK; 825 828 if (!kvm_supports_32bit_el0()) ··· 827 830 kvm_pmu_handle_pmcr(vcpu, val); 828 831 } else { 829 832 /* PMCR.P & PMCR.C are RAZ */ 830 - val = __vcpu_sys_reg(vcpu, PMCR_EL0) 833 + val = kvm_vcpu_read_pmcr(vcpu) 831 834 & ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C); 832 835 p->regval = val; 833 836 } ··· 876 879 { 877 880 u64 pmcr, val; 878 881 879 - pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0); 882 + pmcr = kvm_vcpu_read_pmcr(vcpu); 880 883 val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK; 881 884 if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) { 882 885 kvm_inject_undefined(vcpu); ··· 985 988 kvm_pmu_set_counter_event_type(vcpu, p->regval, idx); 986 989 kvm_vcpu_pmu_restore_guest(vcpu); 987 990 } else { 988 - p->regval = __vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_MASK; 991 + p->regval = __vcpu_sys_reg(vcpu, reg); 989 992 } 990 993 991 994 return true; 995 + } 996 + 997 + static int set_pmreg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, u64 val) 998 + { 999 + bool set; 1000 + 1001 + val &= kvm_pmu_valid_counter_mask(vcpu); 1002 + 1003 + switch (r->reg) { 1004 + case PMOVSSET_EL0: 1005 + /* CRm[1] being set indicates a SET register, and CLR otherwise */ 1006 + set = r->CRm & 2; 1007 + break; 1008 + default: 1009 + /* Op2[0] being set indicates a SET register, and CLR otherwise */ 1010 + set = r->Op2 & 1; 1011 + break; 1012 + } 1013 + 1014 + if (set) 1015 + __vcpu_sys_reg(vcpu, r->reg) |= val; 1016 + else 1017 + __vcpu_sys_reg(vcpu, r->reg) &= ~val; 1018 + 1019 + return 0; 1020 + } 1021 + 1022 + static int get_pmreg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, u64 *val) 1023 + { 1024 + u64 mask = kvm_pmu_valid_counter_mask(vcpu); 1025 + 1026 + *val = __vcpu_sys_reg(vcpu, r->reg) & mask; 1027 + return 0; 992 1028 } 993 1029 994 1030 static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p, ··· 1131 1101 } 1132 1102 1133 1103 return true; 1104 + } 1105 + 1106 + static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, 1107 + u64 *val) 1108 + { 1109 + *val = kvm_vcpu_read_pmcr(vcpu); 1110 + return 0; 1111 + } 1112 + 1113 + static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, 1114 + u64 val) 1115 + { 1116 + u8 new_n = (val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK; 1117 + struct kvm *kvm = vcpu->kvm; 1118 + 1119 + mutex_lock(&kvm->arch.config_lock); 1120 + 1121 + /* 1122 + * The vCPU can't have more counters than the PMU hardware 1123 + * implements. Ignore this error to maintain compatibility 1124 + * with the existing KVM behavior. 1125 + */ 1126 + if (!kvm_vm_has_ran_once(kvm) && 1127 + new_n <= kvm_arm_pmu_get_max_counters(kvm)) 1128 + kvm->arch.pmcr_n = new_n; 1129 + 1130 + mutex_unlock(&kvm->arch.config_lock); 1131 + 1132 + /* 1133 + * Ignore writes to RES0 bits, read only bits that are cleared on 1134 + * vCPU reset, and writable bits that KVM doesn't support yet. 1135 + * (i.e. only PMCR.N and bits [7:0] are mutable from userspace) 1136 + * The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU. 
1137 + * But, we leave the bit as it is here, as the vCPU's PMUver might 1138 + * be changed later (NOTE: the bit will be cleared on first vCPU run 1139 + * if necessary). 1140 + */ 1141 + val &= ARMV8_PMU_PMCR_MASK; 1142 + 1143 + /* The LC bit is RES1 when AArch32 is not supported */ 1144 + if (!kvm_supports_32bit_el0()) 1145 + val |= ARMV8_PMU_PMCR_LC; 1146 + 1147 + __vcpu_sys_reg(vcpu, r->reg) = val; 1148 + return 0; 1134 1149 } 1135 1150 1136 1151 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */ ··· 1291 1216 /* Some features have different safe value type in KVM than host features */ 1292 1217 switch (id) { 1293 1218 case SYS_ID_AA64DFR0_EL1: 1294 - if (kvm_ftr.shift == ID_AA64DFR0_EL1_PMUVer_SHIFT) 1219 + switch (kvm_ftr.shift) { 1220 + case ID_AA64DFR0_EL1_PMUVer_SHIFT: 1295 1221 kvm_ftr.type = FTR_LOWER_SAFE; 1222 + break; 1223 + case ID_AA64DFR0_EL1_DebugVer_SHIFT: 1224 + kvm_ftr.type = FTR_LOWER_SAFE; 1225 + break; 1226 + } 1296 1227 break; 1297 1228 case SYS_ID_DFR0_EL1: 1298 1229 if (kvm_ftr.shift == ID_DFR0_EL1_PerfMon_SHIFT) ··· 1309 1228 return arm64_ftr_safe_value(&kvm_ftr, new, cur); 1310 1229 } 1311 1230 1312 - /** 1231 + /* 1313 1232 * arm64_check_features() - Check if a feature register value constitutes 1314 1233 * a subset of features indicated by the idreg's KVM sanitised limit. 1315 1234 * ··· 1419 1338 ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3)); 1420 1339 if (!cpus_have_final_cap(ARM64_HAS_WFXT)) 1421 1340 val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT); 1422 - val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_MOPS); 1423 1341 break; 1424 1342 case SYS_ID_AA64MMFR2_EL1: 1425 1343 val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK; ··· 1451 1371 return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 && 1452 1372 sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 && 1453 1373 sys_reg_CRm(id) < 8); 1374 + } 1375 + 1376 + static inline bool is_aa32_id_reg(u32 id) 1377 + { 1378 + return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 && 1379 + sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 && 1380 + sys_reg_CRm(id) <= 3); 1454 1381 } 1455 1382 1456 1383 static unsigned int id_visibility(const struct kvm_vcpu *vcpu, ··· 1556 1469 return val; 1557 1470 } 1558 1471 1472 + #define ID_REG_LIMIT_FIELD_ENUM(val, reg, field, limit) \ 1473 + ({ \ 1474 + u64 __f_val = FIELD_GET(reg##_##field##_MASK, val); \ 1475 + (val) &= ~reg##_##field##_MASK; \ 1476 + (val) |= FIELD_PREP(reg##_##field##_MASK, \ 1477 + min(__f_val, (u64)reg##_##field##_##limit)); \ 1478 + (val); \ 1479 + }) 1480 + 1559 1481 static u64 read_sanitised_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, 1560 1482 const struct sys_reg_desc *rd) 1561 1483 { 1562 1484 u64 val = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1); 1563 1485 1564 - /* Limit debug to ARMv8.0 */ 1565 - val &= ~ID_AA64DFR0_EL1_DebugVer_MASK; 1566 - val |= SYS_FIELD_PREP_ENUM(ID_AA64DFR0_EL1, DebugVer, IMP); 1486 + val = ID_REG_LIMIT_FIELD_ENUM(val, ID_AA64DFR0_EL1, DebugVer, V8P8); 1567 1487 1568 1488 /* 1569 1489 * Only initialize the PMU version if the vCPU was configured with one. ··· 1590 1496 const struct sys_reg_desc *rd, 1591 1497 u64 val) 1592 1498 { 1499 + u8 debugver = SYS_FIELD_GET(ID_AA64DFR0_EL1, DebugVer, val); 1593 1500 u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, val); 1594 1501 1595 1502 /* ··· 1610 1515 if (pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF) 1611 1516 val &= ~ID_AA64DFR0_EL1_PMUVer_MASK; 1612 1517 1518 + /* 1519 + * ID_AA64DFR0_EL1.DebugVer is one of those awkward fields with a 1520 + * nonzero minimum safe value. 
1521 + */ 1522 + if (debugver < ID_AA64DFR0_EL1_DebugVer_IMP) 1523 + return -EINVAL; 1524 + 1613 1525 return set_id_reg(vcpu, rd, val); 1614 1526 } 1615 1527 ··· 1630 1528 if (kvm_vcpu_has_pmu(vcpu)) 1631 1529 val |= SYS_FIELD_PREP(ID_DFR0_EL1, PerfMon, perfmon); 1632 1530 1531 + val = ID_REG_LIMIT_FIELD_ENUM(val, ID_DFR0_EL1, CopDbg, Debugv8p8); 1532 + 1633 1533 return val; 1634 1534 } 1635 1535 ··· 1640 1536 u64 val) 1641 1537 { 1642 1538 u8 perfmon = SYS_FIELD_GET(ID_DFR0_EL1, PerfMon, val); 1539 + u8 copdbg = SYS_FIELD_GET(ID_DFR0_EL1, CopDbg, val); 1643 1540 1644 1541 if (perfmon == ID_DFR0_EL1_PerfMon_IMPDEF) { 1645 1542 val &= ~ID_DFR0_EL1_PerfMon_MASK; ··· 1654 1549 * that this is a PMUv3. 1655 1550 */ 1656 1551 if (perfmon != 0 && perfmon < ID_DFR0_EL1_PerfMon_PMUv3) 1552 + return -EINVAL; 1553 + 1554 + if (copdbg < ID_DFR0_EL1_CopDbg_Armv8) 1657 1555 return -EINVAL; 1658 1556 1659 1557 return set_id_reg(vcpu, rd, val); ··· 1899 1791 * HCR_EL2.E2H==1, and only in the sysreg table for convenience of 1900 1792 * handling traps. Given that, they are always hidden from userspace. 1901 1793 */ 1902 - static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu, 1903 - const struct sys_reg_desc *rd) 1794 + static unsigned int hidden_user_visibility(const struct kvm_vcpu *vcpu, 1795 + const struct sys_reg_desc *rd) 1904 1796 { 1905 1797 return REG_HIDDEN_USER; 1906 1798 } ··· 1911 1803 .reset = rst, \ 1912 1804 .reg = name##_EL1, \ 1913 1805 .val = v, \ 1914 - .visibility = elx2_visibility, \ 1806 + .visibility = hidden_user_visibility, \ 1915 1807 } 1916 1808 1917 1809 /* ··· 1925 1817 * from userspace. 1926 1818 */ 1927 1819 1928 - /* sys_reg_desc initialiser for known cpufeature ID registers */ 1929 - #define ID_SANITISED(name) { \ 1820 + #define ID_DESC(name) \ 1930 1821 SYS_DESC(SYS_##name), \ 1931 1822 .access = access_id_reg, \ 1932 - .get_user = get_id_reg, \ 1823 + .get_user = get_id_reg \ 1824 + 1825 + /* sys_reg_desc initialiser for known cpufeature ID registers */ 1826 + #define ID_SANITISED(name) { \ 1827 + ID_DESC(name), \ 1933 1828 .set_user = set_id_reg, \ 1934 1829 .visibility = id_visibility, \ 1935 1830 .reset = kvm_read_sanitised_id_reg, \ ··· 1941 1830 1942 1831 /* sys_reg_desc initialiser for known cpufeature ID registers */ 1943 1832 #define AA32_ID_SANITISED(name) { \ 1944 - SYS_DESC(SYS_##name), \ 1945 - .access = access_id_reg, \ 1946 - .get_user = get_id_reg, \ 1833 + ID_DESC(name), \ 1947 1834 .set_user = set_id_reg, \ 1948 1835 .visibility = aa32_id_visibility, \ 1949 1836 .reset = kvm_read_sanitised_id_reg, \ 1950 1837 .val = 0, \ 1838 + } 1839 + 1840 + /* sys_reg_desc initialiser for writable ID registers */ 1841 + #define ID_WRITABLE(name, mask) { \ 1842 + ID_DESC(name), \ 1843 + .set_user = set_id_reg, \ 1844 + .visibility = id_visibility, \ 1845 + .reset = kvm_read_sanitised_id_reg, \ 1846 + .val = mask, \ 1951 1847 } 1952 1848 1953 1849 /* ··· 1978 1860 * RAZ for the guest. 
1979 1861 */ 1980 1862 #define ID_HIDDEN(name) { \ 1981 - SYS_DESC(SYS_##name), \ 1982 - .access = access_id_reg, \ 1983 - .get_user = get_id_reg, \ 1863 + ID_DESC(name), \ 1984 1864 .set_user = set_id_reg, \ 1985 1865 .visibility = raz_visibility, \ 1986 1866 .reset = kvm_read_sanitised_id_reg, \ ··· 2077 1961 // DBGDTR[TR]X_EL0 share the same encoding 2078 1962 { SYS_DESC(SYS_DBGDTRTX_EL0), trap_raz_wi }, 2079 1963 2080 - { SYS_DESC(SYS_DBGVCR32_EL2), NULL, reset_val, DBGVCR32_EL2, 0 }, 1964 + { SYS_DESC(SYS_DBGVCR32_EL2), trap_undef, reset_val, DBGVCR32_EL2, 0 }, 2081 1965 2082 1966 { SYS_DESC(SYS_MPIDR_EL1), NULL, reset_mpidr, MPIDR_EL1 }, 2083 1967 ··· 2096 1980 .set_user = set_id_dfr0_el1, 2097 1981 .visibility = aa32_id_visibility, 2098 1982 .reset = read_sanitised_id_dfr0_el1, 2099 - .val = ID_DFR0_EL1_PerfMon_MASK, }, 1983 + .val = ID_DFR0_EL1_PerfMon_MASK | 1984 + ID_DFR0_EL1_CopDbg_MASK, }, 2100 1985 ID_HIDDEN(ID_AFR0_EL1), 2101 1986 AA32_ID_SANITISED(ID_MMFR0_EL1), 2102 1987 AA32_ID_SANITISED(ID_MMFR1_EL1), ··· 2131 2014 .get_user = get_id_reg, 2132 2015 .set_user = set_id_reg, 2133 2016 .reset = read_sanitised_id_aa64pfr0_el1, 2134 - .val = ID_AA64PFR0_EL1_CSV2_MASK | ID_AA64PFR0_EL1_CSV3_MASK, }, 2017 + .val = ~(ID_AA64PFR0_EL1_AMU | 2018 + ID_AA64PFR0_EL1_MPAM | 2019 + ID_AA64PFR0_EL1_SVE | 2020 + ID_AA64PFR0_EL1_RAS | 2021 + ID_AA64PFR0_EL1_GIC | 2022 + ID_AA64PFR0_EL1_AdvSIMD | 2023 + ID_AA64PFR0_EL1_FP), }, 2135 2024 ID_SANITISED(ID_AA64PFR1_EL1), 2136 2025 ID_UNALLOCATED(4,2), 2137 2026 ID_UNALLOCATED(4,3), 2138 - ID_SANITISED(ID_AA64ZFR0_EL1), 2027 + ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0), 2139 2028 ID_HIDDEN(ID_AA64SMFR0_EL1), 2140 2029 ID_UNALLOCATED(4,6), 2141 2030 ID_UNALLOCATED(4,7), ··· 2152 2029 .get_user = get_id_reg, 2153 2030 .set_user = set_id_aa64dfr0_el1, 2154 2031 .reset = read_sanitised_id_aa64dfr0_el1, 2155 - .val = ID_AA64DFR0_EL1_PMUVer_MASK, }, 2032 + .val = ID_AA64DFR0_EL1_PMUVer_MASK | 2033 + ID_AA64DFR0_EL1_DebugVer_MASK, }, 2156 2034 ID_SANITISED(ID_AA64DFR1_EL1), 2157 2035 ID_UNALLOCATED(5,2), 2158 2036 ID_UNALLOCATED(5,3), ··· 2163 2039 ID_UNALLOCATED(5,7), 2164 2040 2165 2041 /* CRm=6 */ 2166 - ID_SANITISED(ID_AA64ISAR0_EL1), 2167 - ID_SANITISED(ID_AA64ISAR1_EL1), 2168 - ID_SANITISED(ID_AA64ISAR2_EL1), 2042 + ID_WRITABLE(ID_AA64ISAR0_EL1, ~ID_AA64ISAR0_EL1_RES0), 2043 + ID_WRITABLE(ID_AA64ISAR1_EL1, ~(ID_AA64ISAR1_EL1_GPI | 2044 + ID_AA64ISAR1_EL1_GPA | 2045 + ID_AA64ISAR1_EL1_API | 2046 + ID_AA64ISAR1_EL1_APA)), 2047 + ID_WRITABLE(ID_AA64ISAR2_EL1, ~(ID_AA64ISAR2_EL1_RES0 | 2048 + ID_AA64ISAR2_EL1_APA3 | 2049 + ID_AA64ISAR2_EL1_GPA3)), 2169 2050 ID_UNALLOCATED(6,3), 2170 2051 ID_UNALLOCATED(6,4), 2171 2052 ID_UNALLOCATED(6,5), ··· 2178 2049 ID_UNALLOCATED(6,7), 2179 2050 2180 2051 /* CRm=7 */ 2181 - ID_SANITISED(ID_AA64MMFR0_EL1), 2182 - ID_SANITISED(ID_AA64MMFR1_EL1), 2183 - ID_SANITISED(ID_AA64MMFR2_EL1), 2052 + ID_WRITABLE(ID_AA64MMFR0_EL1, ~(ID_AA64MMFR0_EL1_RES0 | 2053 + ID_AA64MMFR0_EL1_TGRAN4_2 | 2054 + ID_AA64MMFR0_EL1_TGRAN64_2 | 2055 + ID_AA64MMFR0_EL1_TGRAN16_2)), 2056 + ID_WRITABLE(ID_AA64MMFR1_EL1, ~(ID_AA64MMFR1_EL1_RES0 | 2057 + ID_AA64MMFR1_EL1_HCX | 2058 + ID_AA64MMFR1_EL1_XNX | 2059 + ID_AA64MMFR1_EL1_TWED | 2060 + ID_AA64MMFR1_EL1_XNX | 2061 + ID_AA64MMFR1_EL1_VH | 2062 + ID_AA64MMFR1_EL1_VMIDBits)), 2063 + ID_WRITABLE(ID_AA64MMFR2_EL1, ~(ID_AA64MMFR2_EL1_RES0 | 2064 + ID_AA64MMFR2_EL1_EVT | 2065 + ID_AA64MMFR2_EL1_FWB | 2066 + ID_AA64MMFR2_EL1_IDS | 2067 + ID_AA64MMFR2_EL1_NV | 2068 + ID_AA64MMFR2_EL1_CCIDX)), 
2184 2069 ID_SANITISED(ID_AA64MMFR3_EL1), 2185 2070 ID_UNALLOCATED(7,4), 2186 2071 ID_UNALLOCATED(7,5), ··· 2259 2116 /* PMBIDR_EL1 is not trapped */ 2260 2117 2261 2118 { PMU_SYS_REG(PMINTENSET_EL1), 2262 - .access = access_pminten, .reg = PMINTENSET_EL1 }, 2119 + .access = access_pminten, .reg = PMINTENSET_EL1, 2120 + .get_user = get_pmreg, .set_user = set_pmreg }, 2263 2121 { PMU_SYS_REG(PMINTENCLR_EL1), 2264 - .access = access_pminten, .reg = PMINTENSET_EL1 }, 2122 + .access = access_pminten, .reg = PMINTENSET_EL1, 2123 + .get_user = get_pmreg, .set_user = set_pmreg }, 2265 2124 { SYS_DESC(SYS_PMMIR_EL1), trap_raz_wi }, 2266 2125 2267 2126 { SYS_DESC(SYS_MAIR_EL1), access_vm_reg, reset_unknown, MAIR_EL1 }, ··· 2311 2166 { SYS_DESC(SYS_CTR_EL0), access_ctr }, 2312 2167 { SYS_DESC(SYS_SVCR), undef_access }, 2313 2168 2314 - { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, 2315 - .reset = reset_pmcr, .reg = PMCR_EL0 }, 2169 + { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr, 2170 + .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr }, 2316 2171 { PMU_SYS_REG(PMCNTENSET_EL0), 2317 - .access = access_pmcnten, .reg = PMCNTENSET_EL0 }, 2172 + .access = access_pmcnten, .reg = PMCNTENSET_EL0, 2173 + .get_user = get_pmreg, .set_user = set_pmreg }, 2318 2174 { PMU_SYS_REG(PMCNTENCLR_EL0), 2319 - .access = access_pmcnten, .reg = PMCNTENSET_EL0 }, 2175 + .access = access_pmcnten, .reg = PMCNTENSET_EL0, 2176 + .get_user = get_pmreg, .set_user = set_pmreg }, 2320 2177 { PMU_SYS_REG(PMOVSCLR_EL0), 2321 - .access = access_pmovs, .reg = PMOVSSET_EL0 }, 2178 + .access = access_pmovs, .reg = PMOVSSET_EL0, 2179 + .get_user = get_pmreg, .set_user = set_pmreg }, 2322 2180 /* 2323 2181 * PM_SWINC_EL0 is exposed to userspace as RAZ/WI, as it was 2324 2182 * previously (and pointlessly) advertised in the past... 
··· 2349 2201 { PMU_SYS_REG(PMUSERENR_EL0), .access = access_pmuserenr, 2350 2202 .reset = reset_val, .reg = PMUSERENR_EL0, .val = 0 }, 2351 2203 { PMU_SYS_REG(PMOVSSET_EL0), 2352 - .access = access_pmovs, .reg = PMOVSSET_EL0 }, 2204 + .access = access_pmovs, .reg = PMOVSSET_EL0, 2205 + .get_user = get_pmreg, .set_user = set_pmreg }, 2353 2206 2354 2207 { SYS_DESC(SYS_TPIDR_EL0), NULL, reset_unknown, TPIDR_EL0 }, 2355 2208 { SYS_DESC(SYS_TPIDRRO_EL0), NULL, reset_unknown, TPIDRRO_EL0 }, ··· 2529 2380 EL2_REG(VTTBR_EL2, access_rw, reset_val, 0), 2530 2381 EL2_REG(VTCR_EL2, access_rw, reset_val, 0), 2531 2382 2532 - { SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 }, 2383 + { SYS_DESC(SYS_DACR32_EL2), trap_undef, reset_unknown, DACR32_EL2 }, 2533 2384 EL2_REG(HDFGRTR_EL2, access_rw, reset_val, 0), 2534 2385 EL2_REG(HDFGWTR_EL2, access_rw, reset_val, 0), 2535 2386 EL2_REG(SPSR_EL2, access_rw, reset_val, 0), 2536 2387 EL2_REG(ELR_EL2, access_rw, reset_val, 0), 2537 2388 { SYS_DESC(SYS_SP_EL1), access_sp_el1}, 2538 2389 2539 - { SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 }, 2390 + /* AArch32 SPSR_* are RES0 if trapped from a NV guest */ 2391 + { SYS_DESC(SYS_SPSR_irq), .access = trap_raz_wi, 2392 + .visibility = hidden_user_visibility }, 2393 + { SYS_DESC(SYS_SPSR_abt), .access = trap_raz_wi, 2394 + .visibility = hidden_user_visibility }, 2395 + { SYS_DESC(SYS_SPSR_und), .access = trap_raz_wi, 2396 + .visibility = hidden_user_visibility }, 2397 + { SYS_DESC(SYS_SPSR_fiq), .access = trap_raz_wi, 2398 + .visibility = hidden_user_visibility }, 2399 + 2400 + { SYS_DESC(SYS_IFSR32_EL2), trap_undef, reset_unknown, IFSR32_EL2 }, 2540 2401 EL2_REG(AFSR0_EL2, access_rw, reset_val, 0), 2541 2402 EL2_REG(AFSR1_EL2, access_rw, reset_val, 0), 2542 2403 EL2_REG(ESR_EL2, access_rw, reset_val, 0), 2543 - { SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x700 }, 2404 + { SYS_DESC(SYS_FPEXC32_EL2), trap_undef, reset_val, FPEXC32_EL2, 0x700 }, 2544 2405 2545 2406 EL2_REG(FAR_EL2, access_rw, reset_val, 0), 2546 2407 EL2_REG(HPFAR_EL2, access_rw, reset_val, 0), ··· 2597 2438 if (p->is_write) { 2598 2439 return ignore_write(vcpu, p); 2599 2440 } else { 2600 - u64 dfr = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1); 2601 - u64 pfr = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); 2602 - u32 el3 = !!cpuid_feature_extract_unsigned_field(pfr, ID_AA64PFR0_EL1_EL3_SHIFT); 2441 + u64 dfr = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1); 2442 + u64 pfr = IDREG(vcpu->kvm, SYS_ID_AA64PFR0_EL1); 2443 + u32 el3 = !!SYS_FIELD_GET(ID_AA64PFR0_EL1, EL3, pfr); 2603 2444 2604 - p->regval = ((((dfr >> ID_AA64DFR0_EL1_WRPs_SHIFT) & 0xf) << 28) | 2605 - (((dfr >> ID_AA64DFR0_EL1_BRPs_SHIFT) & 0xf) << 24) | 2606 - (((dfr >> ID_AA64DFR0_EL1_CTX_CMPs_SHIFT) & 0xf) << 20) 2607 - | (6 << 16) | (1 << 15) | (el3 << 14) | (el3 << 12)); 2445 + p->regval = ((SYS_FIELD_GET(ID_AA64DFR0_EL1, WRPs, dfr) << 28) | 2446 + (SYS_FIELD_GET(ID_AA64DFR0_EL1, BRPs, dfr) << 24) | 2447 + (SYS_FIELD_GET(ID_AA64DFR0_EL1, CTX_CMPs, dfr) << 20) | 2448 + (SYS_FIELD_GET(ID_AA64DFR0_EL1, DebugVer, dfr) << 16) | 2449 + (1 << 15) | (el3 << 14) | (el3 << 12)); 2608 2450 return true; 2609 2451 } 2610 2452 } ··· 3730 3570 uindices += err; 3731 3571 3732 3572 return write_demux_regids(uindices); 3573 + } 3574 + 3575 + #define KVM_ARM_FEATURE_ID_RANGE_INDEX(r) \ 3576 + KVM_ARM_FEATURE_ID_RANGE_IDX(sys_reg_Op0(r), \ 3577 + sys_reg_Op1(r), \ 3578 + sys_reg_CRn(r), \ 3579 + sys_reg_CRm(r), \ 3580 + sys_reg_Op2(r)) 3581 + 3582 + static bool 
is_feature_id_reg(u32 encoding) 3583 + { 3584 + return (sys_reg_Op0(encoding) == 3 && 3585 + (sys_reg_Op1(encoding) < 2 || sys_reg_Op1(encoding) == 3) && 3586 + sys_reg_CRn(encoding) == 0 && 3587 + sys_reg_CRm(encoding) <= 7); 3588 + } 3589 + 3590 + int kvm_vm_ioctl_get_reg_writable_masks(struct kvm *kvm, struct reg_mask_range *range) 3591 + { 3592 + const void *zero_page = page_to_virt(ZERO_PAGE(0)); 3593 + u64 __user *masks = (u64 __user *)range->addr; 3594 + 3595 + /* Only feature id range is supported, reserved[13] must be zero. */ 3596 + if (range->range || 3597 + memcmp(range->reserved, zero_page, sizeof(range->reserved))) 3598 + return -EINVAL; 3599 + 3600 + /* Wipe the whole thing first */ 3601 + if (clear_user(masks, KVM_ARM_FEATURE_ID_RANGE_SIZE * sizeof(__u64))) 3602 + return -EFAULT; 3603 + 3604 + for (int i = 0; i < ARRAY_SIZE(sys_reg_descs); i++) { 3605 + const struct sys_reg_desc *reg = &sys_reg_descs[i]; 3606 + u32 encoding = reg_to_encoding(reg); 3607 + u64 val; 3608 + 3609 + if (!is_feature_id_reg(encoding) || !reg->set_user) 3610 + continue; 3611 + 3612 + /* 3613 + * For ID registers, we return the writable mask. Other feature 3614 + * registers return a full 64bit mask. That's not necessary 3615 + * compliant with a given revision of the architecture, but the 3616 + * RES0/RES1 definitions allow us to do that. 3617 + */ 3618 + if (is_id_reg(encoding)) { 3619 + if (!reg->val || 3620 + (is_aa32_id_reg(encoding) && !kvm_supports_32bit_el0())) 3621 + continue; 3622 + val = reg->val; 3623 + } else { 3624 + val = ~0UL; 3625 + } 3626 + 3627 + if (put_user(val, (masks + KVM_ARM_FEATURE_ID_RANGE_INDEX(encoding)))) 3628 + return -EFAULT; 3629 + } 3630 + 3631 + return 0; 3733 3632 } 3734 3633 3735 3634 int __init kvm_sys_reg_table_init(void)
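A short userspace sketch of consuming the masks that kvm_vm_ioctl_get_reg_writable_masks() fills in above, here checking whether a particular ID-register field may be overridden before attempting KVM_SET_ONE_REG; the macros and struct come from the UAPI header, but treat the exact usage as illustrative:

    #include <stdbool.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static bool id_field_writable(int vm_fd, __u32 op0, __u32 op1, __u32 crn,
                                  __u32 crm, __u32 op2, __u64 field_mask)
    {
            __u64 masks[KVM_ARM_FEATURE_ID_RANGE_SIZE];
            struct reg_mask_range range = {
                    .addr  = (__u64)(unsigned long)masks,
                    .range = KVM_ARM_FEATURE_ID_RANGE,
            };

            if (ioctl(vm_fd, KVM_ARM_GET_REG_WRITABLE_MASKS, &range))
                    return false;

            return masks[KVM_ARM_FEATURE_ID_RANGE_IDX(op0, op1, crn, crm, op2)] &
                   field_mask;
    }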
+25
arch/arm64/kvm/trace_arm.h
··· 136 136 __entry->vcpu_pc, __entry->instr, __entry->cpsr) 137 137 ); 138 138 139 + TRACE_EVENT(kvm_mmio_nisv, 140 + TP_PROTO(unsigned long vcpu_pc, unsigned long esr, 141 + unsigned long far, unsigned long ipa), 142 + TP_ARGS(vcpu_pc, esr, far, ipa), 143 + 144 + TP_STRUCT__entry( 145 + __field( unsigned long, vcpu_pc ) 146 + __field( unsigned long, esr ) 147 + __field( unsigned long, far ) 148 + __field( unsigned long, ipa ) 149 + ), 150 + 151 + TP_fast_assign( 152 + __entry->vcpu_pc = vcpu_pc; 153 + __entry->esr = esr; 154 + __entry->far = far; 155 + __entry->ipa = ipa; 156 + ), 157 + 158 + TP_printk("ipa %#016lx, esr %#016lx, far %#016lx, pc %#016lx", 159 + __entry->ipa, __entry->esr, 160 + __entry->far, __entry->vcpu_pc) 161 + ); 162 + 163 + 139 164 TRACE_EVENT(kvm_set_way_flush, 140 165 TP_PROTO(unsigned long vcpu_pc, bool cache), 141 166 TP_ARGS(vcpu_pc, cache),
+3 -3
arch/arm64/kvm/vgic/vgic-debug.c
··· 166 166 167 167 if (vcpu) { 168 168 hdr = "VCPU"; 169 - id = vcpu->vcpu_id; 169 + id = vcpu->vcpu_idx; 170 170 } 171 171 172 172 seq_printf(s, "\n"); ··· 212 212 " %2d " 213 213 "\n", 214 214 type, irq->intid, 215 - (irq->target_vcpu) ? irq->target_vcpu->vcpu_id : -1, 215 + (irq->target_vcpu) ? irq->target_vcpu->vcpu_idx : -1, 216 216 pending, 217 217 irq->line_level, 218 218 irq->active, ··· 224 224 irq->mpidr, 225 225 irq->source, 226 226 irq->priority, 227 - (irq->vcpu) ? irq->vcpu->vcpu_id : -1); 227 + (irq->vcpu) ? irq->vcpu->vcpu_idx : -1); 228 228 } 229 229 230 230 static int vgic_debug_show(struct seq_file *s, void *v)
+1 -1
arch/arm64/kvm/vgic/vgic-irqfd.c
··· 23 23 24 24 if (!vgic_valid_spi(kvm, spi_id)) 25 25 return -EINVAL; 26 - return kvm_vgic_inject_irq(kvm, 0, spi_id, level, NULL); 26 + return kvm_vgic_inject_irq(kvm, NULL, spi_id, level, NULL); 27 27 } 28 28 29 29 /**
+27 -22
arch/arm64/kvm/vgic/vgic-its.c
··· 378 378 return ret; 379 379 } 380 380 381 + static struct kvm_vcpu *collection_to_vcpu(struct kvm *kvm, 382 + struct its_collection *col) 383 + { 384 + return kvm_get_vcpu_by_id(kvm, col->target_addr); 385 + } 386 + 381 387 /* 382 388 * Promotes the ITS view of affinity of an ITTE (which redistributor this LPI 383 389 * is targeting) to the VGIC's view, which deals with target VCPUs. ··· 397 391 if (!its_is_collection_mapped(ite->collection)) 398 392 return; 399 393 400 - vcpu = kvm_get_vcpu(kvm, ite->collection->target_addr); 394 + vcpu = collection_to_vcpu(kvm, ite->collection); 401 395 update_affinity(ite->irq, vcpu); 402 396 } 403 397 ··· 685 679 if (!ite || !its_is_collection_mapped(ite->collection)) 686 680 return E_ITS_INT_UNMAPPED_INTERRUPT; 687 681 688 - vcpu = kvm_get_vcpu(kvm, ite->collection->target_addr); 682 + vcpu = collection_to_vcpu(kvm, ite->collection); 689 683 if (!vcpu) 690 684 return E_ITS_INT_UNMAPPED_INTERRUPT; 691 685 ··· 893 887 return E_ITS_MOVI_UNMAPPED_COLLECTION; 894 888 895 889 ite->collection = collection; 896 - vcpu = kvm_get_vcpu(kvm, collection->target_addr); 890 + vcpu = collection_to_vcpu(kvm, collection); 897 891 898 892 vgic_its_invalidate_cache(kvm); 899 893 ··· 1127 1121 } 1128 1122 1129 1123 if (its_is_collection_mapped(collection)) 1130 - vcpu = kvm_get_vcpu(kvm, collection->target_addr); 1124 + vcpu = collection_to_vcpu(kvm, collection); 1131 1125 1132 1126 irq = vgic_add_lpi(kvm, lpi_nr, vcpu); 1133 1127 if (IS_ERR(irq)) { ··· 1248 1242 u64 *its_cmd) 1249 1243 { 1250 1244 u16 coll_id; 1251 - u32 target_addr; 1252 1245 struct its_collection *collection; 1253 1246 bool valid; 1254 1247 1255 1248 valid = its_cmd_get_validbit(its_cmd); 1256 1249 coll_id = its_cmd_get_collection(its_cmd); 1257 - target_addr = its_cmd_get_target_addr(its_cmd); 1258 - 1259 - if (target_addr >= atomic_read(&kvm->online_vcpus)) 1260 - return E_ITS_MAPC_PROCNUM_OOR; 1261 1250 1262 1251 if (!valid) { 1263 1252 vgic_its_free_collection(its, coll_id); 1264 1253 vgic_its_invalidate_cache(kvm); 1265 1254 } else { 1255 + struct kvm_vcpu *vcpu; 1256 + 1257 + vcpu = kvm_get_vcpu_by_id(kvm, its_cmd_get_target_addr(its_cmd)); 1258 + if (!vcpu) 1259 + return E_ITS_MAPC_PROCNUM_OOR; 1260 + 1266 1261 collection = find_collection(its, coll_id); 1267 1262 1268 1263 if (!collection) { ··· 1277 1270 coll_id); 1278 1271 if (ret) 1279 1272 return ret; 1280 - collection->target_addr = target_addr; 1273 + collection->target_addr = vcpu->vcpu_id; 1281 1274 } else { 1282 - collection->target_addr = target_addr; 1275 + collection->target_addr = vcpu->vcpu_id; 1283 1276 update_affinity_collection(kvm, its, collection); 1284 1277 } 1285 1278 } ··· 1389 1382 if (!its_is_collection_mapped(collection)) 1390 1383 return E_ITS_INVALL_UNMAPPED_COLLECTION; 1391 1384 1392 - vcpu = kvm_get_vcpu(kvm, collection->target_addr); 1385 + vcpu = collection_to_vcpu(kvm, collection); 1393 1386 vgic_its_invall(vcpu); 1394 1387 1395 1388 return 0; ··· 1406 1399 static int vgic_its_cmd_handle_movall(struct kvm *kvm, struct vgic_its *its, 1407 1400 u64 *its_cmd) 1408 1401 { 1409 - u32 target1_addr = its_cmd_get_target_addr(its_cmd); 1410 - u32 target2_addr = its_cmd_mask_field(its_cmd, 3, 16, 32); 1411 1402 struct kvm_vcpu *vcpu1, *vcpu2; 1412 1403 struct vgic_irq *irq; 1413 1404 u32 *intids; 1414 1405 int irq_count, i; 1415 1406 1416 - if (target1_addr >= atomic_read(&kvm->online_vcpus) || 1417 - target2_addr >= atomic_read(&kvm->online_vcpus)) 1407 + /* We advertise GITS_TYPER.PTA==0, making the address the vcpu 
ID */ 1408 + vcpu1 = kvm_get_vcpu_by_id(kvm, its_cmd_get_target_addr(its_cmd)); 1409 + vcpu2 = kvm_get_vcpu_by_id(kvm, its_cmd_mask_field(its_cmd, 3, 16, 32)); 1410 + 1411 + if (!vcpu1 || !vcpu2) 1418 1412 return E_ITS_MOVALL_PROCNUM_OOR; 1419 1413 1420 - if (target1_addr == target2_addr) 1414 + if (vcpu1 == vcpu2) 1421 1415 return 0; 1422 - 1423 - vcpu1 = kvm_get_vcpu(kvm, target1_addr); 1424 - vcpu2 = kvm_get_vcpu(kvm, target2_addr); 1425 1416 1426 1417 irq_count = vgic_copy_lpi_list(kvm, vcpu1, &intids); 1427 1418 if (irq_count < 0) ··· 2263 2258 return PTR_ERR(ite); 2264 2259 2265 2260 if (its_is_collection_mapped(collection)) 2266 - vcpu = kvm_get_vcpu(kvm, collection->target_addr); 2261 + vcpu = kvm_get_vcpu_by_id(kvm, collection->target_addr); 2267 2262 2268 2263 irq = vgic_add_lpi(kvm, lpi_id, vcpu); 2269 2264 if (IS_ERR(irq)) { ··· 2578 2573 coll_id = val & KVM_ITS_CTE_ICID_MASK; 2579 2574 2580 2575 if (target_addr != COLLECTION_NOT_MAPPED && 2581 - target_addr >= atomic_read(&kvm->online_vcpus)) 2576 + !kvm_get_vcpu_by_id(kvm, target_addr)) 2582 2577 return -EINVAL; 2583 2578 2584 2579 collection = find_collection(its, coll_id);
+4 -7
arch/arm64/kvm/vgic/vgic-kvm-device.c
··· 27 27 if (addr + size < addr) 28 28 return -EINVAL; 29 29 30 - if (addr & ~kvm_phys_mask(kvm) || addr + size > kvm_phys_size(kvm)) 30 + if (addr & ~kvm_phys_mask(&kvm->arch.mmu) || 31 + (addr + size) > kvm_phys_size(&kvm->arch.mmu)) 31 32 return -E2BIG; 32 33 33 34 return 0; ··· 340 339 { 341 340 int cpuid; 342 341 343 - cpuid = (attr->attr & KVM_DEV_ARM_VGIC_CPUID_MASK) >> 344 - KVM_DEV_ARM_VGIC_CPUID_SHIFT; 342 + cpuid = FIELD_GET(KVM_DEV_ARM_VGIC_CPUID_MASK, attr->attr); 345 343 346 - if (cpuid >= atomic_read(&dev->kvm->online_vcpus)) 347 - return -EINVAL; 348 - 349 - reg_attr->vcpu = kvm_get_vcpu(dev->kvm, cpuid); 344 + reg_attr->vcpu = kvm_get_vcpu_by_id(dev->kvm, cpuid); 350 345 reg_attr->addr = attr->attr & KVM_DEV_ARM_VGIC_OFFSET_MASK; 351 346 352 347 return 0;
+61 -95
arch/arm64/kvm/vgic/vgic-mmio-v3.c
··· 1013 1013 1014 1014 return 0; 1015 1015 } 1016 - /* 1017 - * Compare a given affinity (level 1-3 and a level 0 mask, from the SGI 1018 - * generation register ICC_SGI1R_EL1) with a given VCPU. 1019 - * If the VCPU's MPIDR matches, return the level0 affinity, otherwise 1020 - * return -1. 1021 - */ 1022 - static int match_mpidr(u64 sgi_aff, u16 sgi_cpu_mask, struct kvm_vcpu *vcpu) 1023 - { 1024 - unsigned long affinity; 1025 - int level0; 1026 - 1027 - /* 1028 - * Split the current VCPU's MPIDR into affinity level 0 and the 1029 - * rest as this is what we have to compare against. 1030 - */ 1031 - affinity = kvm_vcpu_get_mpidr_aff(vcpu); 1032 - level0 = MPIDR_AFFINITY_LEVEL(affinity, 0); 1033 - affinity &= ~MPIDR_LEVEL_MASK; 1034 - 1035 - /* bail out if the upper three levels don't match */ 1036 - if (sgi_aff != affinity) 1037 - return -1; 1038 - 1039 - /* Is this VCPU's bit set in the mask ? */ 1040 - if (!(sgi_cpu_mask & BIT(level0))) 1041 - return -1; 1042 - 1043 - return level0; 1044 - } 1045 1016 1046 1017 /* 1047 1018 * The ICC_SGI* registers encode the affinity differently from the MPIDR, ··· 1022 1051 #define SGI_AFFINITY_LEVEL(reg, level) \ 1023 1052 ((((reg) & ICC_SGI1R_AFFINITY_## level ##_MASK) \ 1024 1053 >> ICC_SGI1R_AFFINITY_## level ##_SHIFT) << MPIDR_LEVEL_SHIFT(level)) 1054 + 1055 + static void vgic_v3_queue_sgi(struct kvm_vcpu *vcpu, u32 sgi, bool allow_group1) 1056 + { 1057 + struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, sgi); 1058 + unsigned long flags; 1059 + 1060 + raw_spin_lock_irqsave(&irq->irq_lock, flags); 1061 + 1062 + /* 1063 + * An access targeting Group0 SGIs can only generate 1064 + * those, while an access targeting Group1 SGIs can 1065 + * generate interrupts of either group. 1066 + */ 1067 + if (!irq->group || allow_group1) { 1068 + if (!irq->hw) { 1069 + irq->pending_latch = true; 1070 + vgic_queue_irq_unlock(vcpu->kvm, irq, flags); 1071 + } else { 1072 + /* HW SGI? Ask the GIC to inject it */ 1073 + int err; 1074 + err = irq_set_irqchip_state(irq->host_irq, 1075 + IRQCHIP_STATE_PENDING, 1076 + true); 1077 + WARN_RATELIMIT(err, "IRQ %d", irq->host_irq); 1078 + raw_spin_unlock_irqrestore(&irq->irq_lock, flags); 1079 + } 1080 + } else { 1081 + raw_spin_unlock_irqrestore(&irq->irq_lock, flags); 1082 + } 1083 + 1084 + vgic_put_irq(vcpu->kvm, irq); 1085 + } 1025 1086 1026 1087 /** 1027 1088 * vgic_v3_dispatch_sgi - handle SGI requests from VCPUs ··· 1065 1062 * This will trap in sys_regs.c and call this function. 1066 1063 * This ICC_SGI1R_EL1 register contains the upper three affinity levels of the 1067 1064 * target processors as well as a bitmask of 16 Aff0 CPUs. 1068 - * If the interrupt routing mode bit is not set, we iterate over all VCPUs to 1069 - * check for matching ones. If this bit is set, we signal all, but not the 1070 - * calling VCPU. 1065 + * 1066 + * If the interrupt routing mode bit is not set, we iterate over the Aff0 1067 + * bits and signal the VCPUs matching the provided Aff{3,2,1}. 1068 + * 1069 + * If this bit is set, we signal all, but not the calling VCPU. 
1071 1070 */ 1072 1071 void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg, bool allow_group1) 1073 1072 { 1074 1073 struct kvm *kvm = vcpu->kvm; 1075 1074 struct kvm_vcpu *c_vcpu; 1076 - u16 target_cpus; 1075 + unsigned long target_cpus; 1077 1076 u64 mpidr; 1078 - int sgi; 1079 - int vcpu_id = vcpu->vcpu_id; 1080 - bool broadcast; 1081 - unsigned long c, flags; 1077 + u32 sgi, aff0; 1078 + unsigned long c; 1082 1079 1083 - sgi = (reg & ICC_SGI1R_SGI_ID_MASK) >> ICC_SGI1R_SGI_ID_SHIFT; 1084 - broadcast = reg & BIT_ULL(ICC_SGI1R_IRQ_ROUTING_MODE_BIT); 1085 - target_cpus = (reg & ICC_SGI1R_TARGET_LIST_MASK) >> ICC_SGI1R_TARGET_LIST_SHIFT; 1080 + sgi = FIELD_GET(ICC_SGI1R_SGI_ID_MASK, reg); 1081 + 1082 + /* Broadcast */ 1083 + if (unlikely(reg & BIT_ULL(ICC_SGI1R_IRQ_ROUTING_MODE_BIT))) { 1084 + kvm_for_each_vcpu(c, c_vcpu, kvm) { 1085 + /* Don't signal the calling VCPU */ 1086 + if (c_vcpu == vcpu) 1087 + continue; 1088 + 1089 + vgic_v3_queue_sgi(c_vcpu, sgi, allow_group1); 1090 + } 1091 + 1092 + return; 1093 + } 1094 + 1095 + /* We iterate over affinities to find the corresponding vcpus */ 1086 1096 mpidr = SGI_AFFINITY_LEVEL(reg, 3); 1087 1097 mpidr |= SGI_AFFINITY_LEVEL(reg, 2); 1088 1098 mpidr |= SGI_AFFINITY_LEVEL(reg, 1); 1099 + target_cpus = FIELD_GET(ICC_SGI1R_TARGET_LIST_MASK, reg); 1089 1100 1090 - /* 1091 - * We iterate over all VCPUs to find the MPIDRs matching the request. 1092 - * If we have handled one CPU, we clear its bit to detect early 1093 - * if we are already finished. This avoids iterating through all 1094 - * VCPUs when most of the times we just signal a single VCPU. 1095 - */ 1096 - kvm_for_each_vcpu(c, c_vcpu, kvm) { 1097 - struct vgic_irq *irq; 1098 - 1099 - /* Exit early if we have dealt with all requested CPUs */ 1100 - if (!broadcast && target_cpus == 0) 1101 - break; 1102 - 1103 - /* Don't signal the calling VCPU */ 1104 - if (broadcast && c == vcpu_id) 1105 - continue; 1106 - 1107 - if (!broadcast) { 1108 - int level0; 1109 - 1110 - level0 = match_mpidr(mpidr, target_cpus, c_vcpu); 1111 - if (level0 == -1) 1112 - continue; 1113 - 1114 - /* remove this matching VCPU from the mask */ 1115 - target_cpus &= ~BIT(level0); 1116 - } 1117 - 1118 - irq = vgic_get_irq(vcpu->kvm, c_vcpu, sgi); 1119 - 1120 - raw_spin_lock_irqsave(&irq->irq_lock, flags); 1121 - 1122 - /* 1123 - * An access targeting Group0 SGIs can only generate 1124 - * those, while an access targeting Group1 SGIs can 1125 - * generate interrupts of either group. 1126 - */ 1127 - if (!irq->group || allow_group1) { 1128 - if (!irq->hw) { 1129 - irq->pending_latch = true; 1130 - vgic_queue_irq_unlock(vcpu->kvm, irq, flags); 1131 - } else { 1132 - /* HW SGI? Ask the GIC to inject it */ 1133 - int err; 1134 - err = irq_set_irqchip_state(irq->host_irq, 1135 - IRQCHIP_STATE_PENDING, 1136 - true); 1137 - WARN_RATELIMIT(err, "IRQ %d", irq->host_irq); 1138 - raw_spin_unlock_irqrestore(&irq->irq_lock, flags); 1139 - } 1140 - } else { 1141 - raw_spin_unlock_irqrestore(&irq->irq_lock, flags); 1142 - } 1143 - 1144 - vgic_put_irq(vcpu->kvm, irq); 1101 + for_each_set_bit(aff0, &target_cpus, hweight_long(ICC_SGI1R_TARGET_LIST_MASK)) { 1102 + c_vcpu = kvm_mpidr_to_vcpu(kvm, mpidr | aff0); 1103 + if (c_vcpu) 1104 + vgic_v3_queue_sgi(c_vcpu, sgi, allow_group1); 1145 1105 } 1146 1106 } 1147 1107
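To make the field slicing in vgic_v3_dispatch_sgi() easier to follow, here is a small standalone decode of an ICC_SGI1R_EL1 value using the architectural field positions (constants spelled out locally rather than taken from kernel headers; the register value is an arbitrary example):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            /* SGI 5, Aff3.Aff2.Aff1 = 0.0.1, Aff0 targets {0, 2} */
            uint64_t reg = (5ULL << 24) | (1ULL << 16) | 0x5;

            uint16_t target_list = reg & 0xffff;            /* bits [15:0]  */
            uint8_t  aff1        = (reg >> 16) & 0xff;      /* bits [23:16] */
            uint8_t  intid       = (reg >> 24) & 0xf;       /* bits [27:24] */
            uint8_t  aff2        = (reg >> 32) & 0xff;      /* bits [39:32] */
            uint8_t  irm         = (reg >> 40) & 0x1;       /* broadcast bit */
            uint8_t  aff3        = (reg >> 48) & 0xff;      /* bits [55:48] */

            printf("SGI %u -> %u.%u.%u, Aff0 mask %#x, broadcast %u\n",
                   intid, aff3, aff2, aff1, target_list, irm);
            return 0;
    }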
+5 -7
arch/arm64/kvm/vgic/vgic.c
··· 422 422 /** 423 423 * kvm_vgic_inject_irq - Inject an IRQ from a device to the vgic 424 424 * @kvm: The VM structure pointer 425 - * @cpuid: The CPU for PPIs 425 + * @vcpu: The CPU for PPIs or NULL for global interrupts 426 426 * @intid: The INTID to inject a new state to. 427 427 * @level: Edge-triggered: true: to trigger the interrupt 428 428 * false: to ignore the call ··· 436 436 * level-sensitive interrupts. You can think of the level parameter as 1 437 437 * being HIGH and 0 being LOW and all devices being active-HIGH. 438 438 */ 439 - int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int intid, 440 - bool level, void *owner) 439 + int kvm_vgic_inject_irq(struct kvm *kvm, struct kvm_vcpu *vcpu, 440 + unsigned int intid, bool level, void *owner) 441 441 { 442 - struct kvm_vcpu *vcpu; 443 442 struct vgic_irq *irq; 444 443 unsigned long flags; 445 444 int ret; 446 - 447 - trace_vgic_update_irq_pending(cpuid, intid, level); 448 445 449 446 ret = vgic_lazy_init(kvm); 450 447 if (ret) 451 448 return ret; 452 449 453 - vcpu = kvm_get_vcpu(kvm, cpuid); 454 450 if (!vcpu && intid < VGIC_NR_PRIVATE_IRQS) 455 451 return -EINVAL; 452 + 453 + trace_vgic_update_irq_pending(vcpu ? vcpu->vcpu_idx : 0, intid, level); 456 454 457 455 irq = vgic_get_irq(kvm, vcpu, intid); 458 456 if (!irq)
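From userspace, the usual way of reaching kvm_vgic_inject_irq() for a shared interrupt remains the KVM_IRQ_LINE ioctl, which with the new signature presumably ends up passing a NULL vcpu, as the irqfd path above now does; a rough sketch (the INTID is an arbitrary SPI, i.e. >= 32 on GICv3):

    #include <stdbool.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int set_spi_level(int vm_fd, __u32 intid, bool level)
    {
            struct kvm_irq_level irq = {
                    .irq   = (KVM_ARM_IRQ_TYPE_SPI << KVM_ARM_IRQ_TYPE_SHIFT) |
                             (intid & KVM_ARM_IRQ_NUM_MASK),
                    .level = level,
            };

            return ioctl(vm_fd, KVM_IRQ_LINE, &irq);
    }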
+8 -3
arch/arm64/kvm/vmid.c
··· 135 135 atomic64_set(this_cpu_ptr(&active_vmids), VMID_ACTIVE_INVALID); 136 136 } 137 137 138 - void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid) 138 + bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid) 139 139 { 140 140 unsigned long flags; 141 141 u64 vmid, old_active_vmid; 142 + bool updated = false; 142 143 143 144 vmid = atomic64_read(&kvm_vmid->id); 144 145 ··· 157 156 if (old_active_vmid != 0 && vmid_gen_match(vmid) && 158 157 0 != atomic64_cmpxchg_relaxed(this_cpu_ptr(&active_vmids), 159 158 old_active_vmid, vmid)) 160 - return; 159 + return false; 161 160 162 161 raw_spin_lock_irqsave(&cpu_vmid_lock, flags); 163 162 164 163 /* Check that our VMID belongs to the current generation. */ 165 164 vmid = atomic64_read(&kvm_vmid->id); 166 - if (!vmid_gen_match(vmid)) 165 + if (!vmid_gen_match(vmid)) { 167 166 vmid = new_vmid(kvm_vmid); 167 + updated = true; 168 + } 168 169 169 170 atomic64_set(this_cpu_ptr(&active_vmids), vmid); 170 171 raw_spin_unlock_irqrestore(&cpu_vmid_lock, flags); 172 + 173 + return updated; 171 174 } 172 175 173 176 /*
+1 -1
include/kvm/arm_arch_timer.h
··· 96 96 97 97 int __init kvm_timer_hyp_init(bool has_gic); 98 98 int kvm_timer_enable(struct kvm_vcpu *vcpu); 99 - int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu); 99 + void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu); 100 100 void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu); 101 101 void kvm_timer_sync_user(struct kvm_vcpu *vcpu); 102 102 bool kvm_timer_should_notify_user(struct kvm_vcpu *vcpu);
+26 -2
include/kvm/arm_pmu.h
··· 13 13 #define ARMV8_PMU_CYCLE_IDX (ARMV8_PMU_MAX_COUNTERS - 1) 14 14 15 15 #if IS_ENABLED(CONFIG_HW_PERF_EVENTS) && IS_ENABLED(CONFIG_KVM) 16 - 17 16 struct kvm_pmc { 18 17 u8 idx; /* index into the pmu->pmc array */ 19 18 struct perf_event *perf_event; ··· 62 63 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val); 63 64 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data, 64 65 u64 select_idx); 66 + void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu); 65 67 int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, 66 68 struct kvm_device_attr *attr); 67 69 int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, ··· 77 77 void kvm_vcpu_pmu_resync_el0(void); 78 78 79 79 #define kvm_vcpu_has_pmu(vcpu) \ 80 - (test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features)) 80 + (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3)) 81 81 82 82 /* 83 83 * Updates the vcpu's view of the pmu events for this cpu. ··· 101 101 }) 102 102 103 103 u8 kvm_arm_pmu_get_pmuver_limit(void); 104 + u64 kvm_pmu_evtyper_mask(struct kvm *kvm); 105 + int kvm_arm_set_default_pmu(struct kvm *kvm); 106 + u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm); 104 107 108 + u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu); 105 109 #else 106 110 struct kvm_pmu { 107 111 }; ··· 172 168 static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {} 173 169 static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {} 174 170 static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {} 171 + static inline void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) {} 175 172 static inline u8 kvm_arm_pmu_get_pmuver_limit(void) 176 173 { 177 174 return 0; 178 175 } 176 + static inline u64 kvm_pmu_evtyper_mask(struct kvm *kvm) 177 + { 178 + return 0; 179 + } 179 180 static inline void kvm_vcpu_pmu_resync_el0(void) {} 181 + 182 + static inline int kvm_arm_set_default_pmu(struct kvm *kvm) 183 + { 184 + return -ENODEV; 185 + } 186 + 187 + static inline u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm) 188 + { 189 + return 0; 190 + } 191 + 192 + static inline u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu) 193 + { 194 + return 0; 195 + } 180 196 181 197 #endif 182 198
+1 -1
include/kvm/arm_psci.h
··· 26 26 * revisions. It is thus safe to return the latest, unless 27 27 * userspace has instructed us otherwise. 28 28 */ 29 - if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features)) { 29 + if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PSCI_0_2)) { 30 30 if (vcpu->kvm->arch.psci_version) 31 31 return vcpu->kvm->arch.psci_version; 32 32
+2 -2
include/kvm/arm_vgic.h
··· 375 375 int kvm_vgic_hyp_init(void); 376 376 void kvm_vgic_init_cpu_hardware(void); 377 377 378 - int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int intid, 379 - bool level, void *owner); 378 + int kvm_vgic_inject_irq(struct kvm *kvm, struct kvm_vcpu *vcpu, 379 + unsigned int intid, bool level, void *owner); 380 380 int kvm_vgic_map_phys_irq(struct kvm_vcpu *vcpu, unsigned int host_irq, 381 381 u32 vintid, struct irq_ops *ops); 382 382 int kvm_vgic_unmap_phys_irq(struct kvm_vcpu *vcpu, unsigned int vintid);
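Editor's note: a hedged example of the reworked kvm_vgic_inject_irq() prototype — the target is now identified by a vCPU pointer rather than a cpuid. The wrapper below is purely illustrative; for a shared interrupt (SPI) the vCPU argument would be NULL instead.

    /*
     * Sketch: inject a per-vCPU (private) interrupt by passing the target
     * vCPU directly.
     */
    static int example_inject_private_irq(struct kvm_vcpu *vcpu,
                                          unsigned int intid, bool level)
    {
            return kvm_vgic_inject_irq(vcpu->kvm, vcpu, intid, level, NULL);
    }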
+6 -3
include/linux/perf/arm_pmuv3.h
··· 234 234 /* 235 235 * Event filters for PMUv3 236 236 */ 237 - #define ARMV8_PMU_EXCLUDE_EL1 (1U << 31) 238 - #define ARMV8_PMU_EXCLUDE_EL0 (1U << 30) 239 - #define ARMV8_PMU_INCLUDE_EL2 (1U << 27) 237 + #define ARMV8_PMU_EXCLUDE_EL1 (1U << 31) 238 + #define ARMV8_PMU_EXCLUDE_EL0 (1U << 30) 239 + #define ARMV8_PMU_EXCLUDE_NS_EL1 (1U << 29) 240 + #define ARMV8_PMU_EXCLUDE_NS_EL0 (1U << 28) 241 + #define ARMV8_PMU_INCLUDE_EL2 (1U << 27) 242 + #define ARMV8_PMU_EXCLUDE_EL3 (1U << 26) 240 243 241 244 /* 242 245 * PMUSERENR: user enable reg
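Editor's note: an illustrative use of the PMUv3 event-filter bits completed above, composing an event type that counts at EL0 only (EL2 remains excluded unless ARMV8_PMU_INCLUDE_EL2 is set). This is a sketch of the bit usage, not code from the series.

    static inline u64 example_evtype_count_el0_only(u64 event)
    {
            /* Exclude EL1 and EL3 so the event is counted at EL0 only. */
            return event | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL3;
    }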
+2
include/uapi/linux/kvm.h
··· 1200 1200 #define KVM_CAP_COUNTER_OFFSET 227 1201 1201 #define KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE 228 1202 1202 #define KVM_CAP_ARM_SUPPORTED_BLOCK_SIZES 229 1203 + #define KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES 230 1203 1204 1204 1205 #ifdef KVM_CAP_IRQ_ROUTING 1205 1206 ··· 1572 1571 #define KVM_ARM_MTE_COPY_TAGS _IOR(KVMIO, 0xb4, struct kvm_arm_copy_mte_tags) 1573 1572 /* Available with KVM_CAP_COUNTER_OFFSET */ 1574 1573 #define KVM_ARM_SET_COUNTER_OFFSET _IOW(KVMIO, 0xb5, struct kvm_arm_counter_offset) 1574 + #define KVM_ARM_GET_REG_WRITABLE_MASKS _IOR(KVMIO, 0xb6, struct reg_mask_range) 1575 1575 1576 1576 /* ioctl for vm fd */ 1577 1577 #define KVM_CREATE_DEVICE _IOWR(KVMIO, 0xe0, struct kvm_create_device)
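Editor's note: a userspace sketch of the new ioctl, with error handling elided. It assumes struct reg_mask_range and the KVM_ARM_FEATURE_ID_RANGE* constants from the UAPI header; the caller is expected to provide an array of KVM_ARM_FEATURE_ID_RANGE_SIZE u64 entries.

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Fetch the writable masks for the feature ID register range. */
    static int example_get_writable_masks(int vm_fd, __u64 *masks)
    {
            struct reg_mask_range range = {
                    .addr  = (__u64)(unsigned long)masks,
                    .range = KVM_ARM_FEATURE_ID_RANGE,
            };
            int supported = ioctl(vm_fd, KVM_CHECK_EXTENSION,
                                  KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES);

            if (!(supported & (1 << KVM_ARM_FEATURE_ID_RANGE)))
                    return -1;

            return ioctl(vm_fd, KVM_ARM_GET_REG_WRITABLE_MASKS, &range);
    }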
+1
tools/arch/arm64/include/.gitignore
··· 1 + generated/
+26
tools/arch/arm64/include/asm/gpr-num.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + #ifndef __ASM_GPR_NUM_H 3 + #define __ASM_GPR_NUM_H 4 + 5 + #ifdef __ASSEMBLY__ 6 + 7 + .irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30 8 + .equ .L__gpr_num_x\num, \num 9 + .equ .L__gpr_num_w\num, \num 10 + .endr 11 + .equ .L__gpr_num_xzr, 31 12 + .equ .L__gpr_num_wzr, 31 13 + 14 + #else /* __ASSEMBLY__ */ 15 + 16 + #define __DEFINE_ASM_GPR_NUMS \ 17 + " .irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30\n" \ 18 + " .equ .L__gpr_num_x\\num, \\num\n" \ 19 + " .equ .L__gpr_num_w\\num, \\num\n" \ 20 + " .endr\n" \ 21 + " .equ .L__gpr_num_xzr, 31\n" \ 22 + " .equ .L__gpr_num_wzr, 31\n" 23 + 24 + #endif /* __ASSEMBLY__ */ 25 + 26 + #endif /* __ASM_GPR_NUM_H */
+193 -648
tools/arch/arm64/include/asm/sysreg.h
··· 12 12 #include <linux/bits.h> 13 13 #include <linux/stringify.h> 14 14 15 + #include <asm/gpr-num.h> 16 + 15 17 /* 16 18 * ARMv8 ARM reserves the following encoding for system registers: 17 19 * (Ref: ARMv8 ARM, Section: "System instruction class encoding overview", ··· 89 87 */ 90 88 #define pstate_field(op1, op2) ((op1) << Op1_shift | (op2) << Op2_shift) 91 89 #define PSTATE_Imm_shift CRm_shift 90 + #define SET_PSTATE(x, r) __emit_inst(0xd500401f | PSTATE_ ## r | ((!!x) << PSTATE_Imm_shift)) 92 91 93 92 #define PSTATE_PAN pstate_field(0, 4) 94 93 #define PSTATE_UAO pstate_field(0, 3) 95 94 #define PSTATE_SSBS pstate_field(3, 1) 95 + #define PSTATE_DIT pstate_field(3, 2) 96 96 #define PSTATE_TCO pstate_field(3, 4) 97 97 98 - #define SET_PSTATE_PAN(x) __emit_inst(0xd500401f | PSTATE_PAN | ((!!x) << PSTATE_Imm_shift)) 99 - #define SET_PSTATE_UAO(x) __emit_inst(0xd500401f | PSTATE_UAO | ((!!x) << PSTATE_Imm_shift)) 100 - #define SET_PSTATE_SSBS(x) __emit_inst(0xd500401f | PSTATE_SSBS | ((!!x) << PSTATE_Imm_shift)) 101 - #define SET_PSTATE_TCO(x) __emit_inst(0xd500401f | PSTATE_TCO | ((!!x) << PSTATE_Imm_shift)) 98 + #define SET_PSTATE_PAN(x) SET_PSTATE((x), PAN) 99 + #define SET_PSTATE_UAO(x) SET_PSTATE((x), UAO) 100 + #define SET_PSTATE_SSBS(x) SET_PSTATE((x), SSBS) 101 + #define SET_PSTATE_DIT(x) SET_PSTATE((x), DIT) 102 + #define SET_PSTATE_TCO(x) SET_PSTATE((x), TCO) 102 103 103 104 #define set_pstate_pan(x) asm volatile(SET_PSTATE_PAN(x)) 104 105 #define set_pstate_uao(x) asm volatile(SET_PSTATE_UAO(x)) 105 106 #define set_pstate_ssbs(x) asm volatile(SET_PSTATE_SSBS(x)) 107 + #define set_pstate_dit(x) asm volatile(SET_PSTATE_DIT(x)) 106 108 107 109 #define __SYS_BARRIER_INSN(CRm, op2, Rt) \ 108 110 __emit_inst(0xd5000000 | sys_insn(0, 3, 3, (CRm), (op2)) | ((Rt) & 0x1f)) ··· 114 108 #define SB_BARRIER_INSN __SYS_BARRIER_INSN(0, 7, 31) 115 109 116 110 #define SYS_DC_ISW sys_insn(1, 0, 7, 6, 2) 111 + #define SYS_DC_IGSW sys_insn(1, 0, 7, 6, 4) 112 + #define SYS_DC_IGDSW sys_insn(1, 0, 7, 6, 6) 117 113 #define SYS_DC_CSW sys_insn(1, 0, 7, 10, 2) 114 + #define SYS_DC_CGSW sys_insn(1, 0, 7, 10, 4) 115 + #define SYS_DC_CGDSW sys_insn(1, 0, 7, 10, 6) 118 116 #define SYS_DC_CISW sys_insn(1, 0, 7, 14, 2) 117 + #define SYS_DC_CIGSW sys_insn(1, 0, 7, 14, 4) 118 + #define SYS_DC_CIGDSW sys_insn(1, 0, 7, 14, 6) 119 + 120 + /* 121 + * Automatically generated definitions for system registers, the 122 + * manual encodings below are in the process of being converted to 123 + * come from here. The header relies on the definition of sys_reg() 124 + * earlier in this file. 125 + */ 126 + #include "asm/sysreg-defs.h" 119 127 120 128 /* 121 129 * System registers, organised loosely by encoding but grouped together 122 130 * where the architected name contains an index. e.g. ID_MMFR<n>_EL1. 
123 131 */ 124 - #define SYS_OSDTRRX_EL1 sys_reg(2, 0, 0, 0, 2) 125 - #define SYS_MDCCINT_EL1 sys_reg(2, 0, 0, 2, 0) 126 - #define SYS_MDSCR_EL1 sys_reg(2, 0, 0, 2, 2) 127 - #define SYS_OSDTRTX_EL1 sys_reg(2, 0, 0, 3, 2) 128 - #define SYS_OSECCR_EL1 sys_reg(2, 0, 0, 6, 2) 132 + #define SYS_SVCR_SMSTOP_SM_EL0 sys_reg(0, 3, 4, 2, 3) 133 + #define SYS_SVCR_SMSTART_SM_EL0 sys_reg(0, 3, 4, 3, 3) 134 + #define SYS_SVCR_SMSTOP_SMZA_EL0 sys_reg(0, 3, 4, 6, 3) 135 + 129 136 #define SYS_DBGBVRn_EL1(n) sys_reg(2, 0, 0, n, 4) 130 137 #define SYS_DBGBCRn_EL1(n) sys_reg(2, 0, 0, n, 5) 131 138 #define SYS_DBGWVRn_EL1(n) sys_reg(2, 0, 0, n, 6) 132 139 #define SYS_DBGWCRn_EL1(n) sys_reg(2, 0, 0, n, 7) 133 140 #define SYS_MDRAR_EL1 sys_reg(2, 0, 1, 0, 0) 134 - #define SYS_OSLAR_EL1 sys_reg(2, 0, 1, 0, 4) 141 + 135 142 #define SYS_OSLSR_EL1 sys_reg(2, 0, 1, 1, 4) 143 + #define OSLSR_EL1_OSLM_MASK (BIT(3) | BIT(0)) 144 + #define OSLSR_EL1_OSLM_NI 0 145 + #define OSLSR_EL1_OSLM_IMPLEMENTED BIT(3) 146 + #define OSLSR_EL1_OSLK BIT(1) 147 + 136 148 #define SYS_OSDLR_EL1 sys_reg(2, 0, 1, 3, 4) 137 149 #define SYS_DBGPRCR_EL1 sys_reg(2, 0, 1, 4, 4) 138 150 #define SYS_DBGCLAIMSET_EL1 sys_reg(2, 0, 7, 8, 6) ··· 166 142 #define SYS_MPIDR_EL1 sys_reg(3, 0, 0, 0, 5) 167 143 #define SYS_REVIDR_EL1 sys_reg(3, 0, 0, 0, 6) 168 144 169 - #define SYS_ID_PFR0_EL1 sys_reg(3, 0, 0, 1, 0) 170 - #define SYS_ID_PFR1_EL1 sys_reg(3, 0, 0, 1, 1) 171 - #define SYS_ID_PFR2_EL1 sys_reg(3, 0, 0, 3, 4) 172 - #define SYS_ID_DFR0_EL1 sys_reg(3, 0, 0, 1, 2) 173 - #define SYS_ID_DFR1_EL1 sys_reg(3, 0, 0, 3, 5) 174 - #define SYS_ID_AFR0_EL1 sys_reg(3, 0, 0, 1, 3) 175 - #define SYS_ID_MMFR0_EL1 sys_reg(3, 0, 0, 1, 4) 176 - #define SYS_ID_MMFR1_EL1 sys_reg(3, 0, 0, 1, 5) 177 - #define SYS_ID_MMFR2_EL1 sys_reg(3, 0, 0, 1, 6) 178 - #define SYS_ID_MMFR3_EL1 sys_reg(3, 0, 0, 1, 7) 179 - #define SYS_ID_MMFR4_EL1 sys_reg(3, 0, 0, 2, 6) 180 - #define SYS_ID_MMFR5_EL1 sys_reg(3, 0, 0, 3, 6) 181 - 182 - #define SYS_ID_ISAR0_EL1 sys_reg(3, 0, 0, 2, 0) 183 - #define SYS_ID_ISAR1_EL1 sys_reg(3, 0, 0, 2, 1) 184 - #define SYS_ID_ISAR2_EL1 sys_reg(3, 0, 0, 2, 2) 185 - #define SYS_ID_ISAR3_EL1 sys_reg(3, 0, 0, 2, 3) 186 - #define SYS_ID_ISAR4_EL1 sys_reg(3, 0, 0, 2, 4) 187 - #define SYS_ID_ISAR5_EL1 sys_reg(3, 0, 0, 2, 5) 188 - #define SYS_ID_ISAR6_EL1 sys_reg(3, 0, 0, 2, 7) 189 - 190 - #define SYS_MVFR0_EL1 sys_reg(3, 0, 0, 3, 0) 191 - #define SYS_MVFR1_EL1 sys_reg(3, 0, 0, 3, 1) 192 - #define SYS_MVFR2_EL1 sys_reg(3, 0, 0, 3, 2) 193 - 194 - #define SYS_ID_AA64PFR0_EL1 sys_reg(3, 0, 0, 4, 0) 195 - #define SYS_ID_AA64PFR1_EL1 sys_reg(3, 0, 0, 4, 1) 196 - #define SYS_ID_AA64ZFR0_EL1 sys_reg(3, 0, 0, 4, 4) 197 - 198 - #define SYS_ID_AA64DFR0_EL1 sys_reg(3, 0, 0, 5, 0) 199 - #define SYS_ID_AA64DFR1_EL1 sys_reg(3, 0, 0, 5, 1) 200 - 201 - #define SYS_ID_AA64AFR0_EL1 sys_reg(3, 0, 0, 5, 4) 202 - #define SYS_ID_AA64AFR1_EL1 sys_reg(3, 0, 0, 5, 5) 203 - 204 - #define SYS_ID_AA64ISAR0_EL1 sys_reg(3, 0, 0, 6, 0) 205 - #define SYS_ID_AA64ISAR1_EL1 sys_reg(3, 0, 0, 6, 1) 206 - 207 - #define SYS_ID_AA64MMFR0_EL1 sys_reg(3, 0, 0, 7, 0) 208 - #define SYS_ID_AA64MMFR1_EL1 sys_reg(3, 0, 0, 7, 1) 209 - #define SYS_ID_AA64MMFR2_EL1 sys_reg(3, 0, 0, 7, 2) 210 - 211 - #define SYS_SCTLR_EL1 sys_reg(3, 0, 1, 0, 0) 212 145 #define SYS_ACTLR_EL1 sys_reg(3, 0, 1, 0, 1) 213 - #define SYS_CPACR_EL1 sys_reg(3, 0, 1, 0, 2) 214 146 #define SYS_RGSR_EL1 sys_reg(3, 0, 1, 0, 5) 215 147 #define SYS_GCR_EL1 sys_reg(3, 0, 1, 0, 6) 216 148 217 - #define SYS_ZCR_EL1 sys_reg(3, 0, 1, 2, 0) 218 149 
#define SYS_TRFCR_EL1 sys_reg(3, 0, 1, 2, 1) 219 150 220 - #define SYS_TTBR0_EL1 sys_reg(3, 0, 2, 0, 0) 221 - #define SYS_TTBR1_EL1 sys_reg(3, 0, 2, 0, 1) 222 151 #define SYS_TCR_EL1 sys_reg(3, 0, 2, 0, 2) 223 152 224 153 #define SYS_APIAKEYLO_EL1 sys_reg(3, 0, 2, 1, 0) ··· 207 230 #define SYS_TFSR_EL1 sys_reg(3, 0, 5, 6, 0) 208 231 #define SYS_TFSRE0_EL1 sys_reg(3, 0, 5, 6, 1) 209 232 210 - #define SYS_FAR_EL1 sys_reg(3, 0, 6, 0, 0) 211 233 #define SYS_PAR_EL1 sys_reg(3, 0, 7, 4, 0) 212 234 213 235 #define SYS_PAR_EL1_F BIT(0) 214 236 #define SYS_PAR_EL1_FST GENMASK(6, 1) 215 237 216 238 /*** Statistical Profiling Extension ***/ 217 - /* ID registers */ 218 - #define SYS_PMSIDR_EL1 sys_reg(3, 0, 9, 9, 7) 219 - #define SYS_PMSIDR_EL1_FE_SHIFT 0 220 - #define SYS_PMSIDR_EL1_FT_SHIFT 1 221 - #define SYS_PMSIDR_EL1_FL_SHIFT 2 222 - #define SYS_PMSIDR_EL1_ARCHINST_SHIFT 3 223 - #define SYS_PMSIDR_EL1_LDS_SHIFT 4 224 - #define SYS_PMSIDR_EL1_ERND_SHIFT 5 225 - #define SYS_PMSIDR_EL1_INTERVAL_SHIFT 8 226 - #define SYS_PMSIDR_EL1_INTERVAL_MASK 0xfUL 227 - #define SYS_PMSIDR_EL1_MAXSIZE_SHIFT 12 228 - #define SYS_PMSIDR_EL1_MAXSIZE_MASK 0xfUL 229 - #define SYS_PMSIDR_EL1_COUNTSIZE_SHIFT 16 230 - #define SYS_PMSIDR_EL1_COUNTSIZE_MASK 0xfUL 231 - 232 - #define SYS_PMBIDR_EL1 sys_reg(3, 0, 9, 10, 7) 233 - #define SYS_PMBIDR_EL1_ALIGN_SHIFT 0 234 - #define SYS_PMBIDR_EL1_ALIGN_MASK 0xfU 235 - #define SYS_PMBIDR_EL1_P_SHIFT 4 236 - #define SYS_PMBIDR_EL1_F_SHIFT 5 237 - 238 - /* Sampling controls */ 239 - #define SYS_PMSCR_EL1 sys_reg(3, 0, 9, 9, 0) 240 - #define SYS_PMSCR_EL1_E0SPE_SHIFT 0 241 - #define SYS_PMSCR_EL1_E1SPE_SHIFT 1 242 - #define SYS_PMSCR_EL1_CX_SHIFT 3 243 - #define SYS_PMSCR_EL1_PA_SHIFT 4 244 - #define SYS_PMSCR_EL1_TS_SHIFT 5 245 - #define SYS_PMSCR_EL1_PCT_SHIFT 6 246 - 247 - #define SYS_PMSCR_EL2 sys_reg(3, 4, 9, 9, 0) 248 - #define SYS_PMSCR_EL2_E0HSPE_SHIFT 0 249 - #define SYS_PMSCR_EL2_E2SPE_SHIFT 1 250 - #define SYS_PMSCR_EL2_CX_SHIFT 3 251 - #define SYS_PMSCR_EL2_PA_SHIFT 4 252 - #define SYS_PMSCR_EL2_TS_SHIFT 5 253 - #define SYS_PMSCR_EL2_PCT_SHIFT 6 254 - 255 - #define SYS_PMSICR_EL1 sys_reg(3, 0, 9, 9, 2) 256 - 257 - #define SYS_PMSIRR_EL1 sys_reg(3, 0, 9, 9, 3) 258 - #define SYS_PMSIRR_EL1_RND_SHIFT 0 259 - #define SYS_PMSIRR_EL1_INTERVAL_SHIFT 8 260 - #define SYS_PMSIRR_EL1_INTERVAL_MASK 0xffffffUL 261 - 262 - /* Filtering controls */ 263 - #define SYS_PMSNEVFR_EL1 sys_reg(3, 0, 9, 9, 1) 264 - 265 - #define SYS_PMSFCR_EL1 sys_reg(3, 0, 9, 9, 4) 266 - #define SYS_PMSFCR_EL1_FE_SHIFT 0 267 - #define SYS_PMSFCR_EL1_FT_SHIFT 1 268 - #define SYS_PMSFCR_EL1_FL_SHIFT 2 269 - #define SYS_PMSFCR_EL1_B_SHIFT 16 270 - #define SYS_PMSFCR_EL1_LD_SHIFT 17 271 - #define SYS_PMSFCR_EL1_ST_SHIFT 18 272 - 273 - #define SYS_PMSEVFR_EL1 sys_reg(3, 0, 9, 9, 5) 274 - #define SYS_PMSEVFR_EL1_RES0_8_2 \ 239 + #define PMSEVFR_EL1_RES0_IMP \ 275 240 (GENMASK_ULL(47, 32) | GENMASK_ULL(23, 16) | GENMASK_ULL(11, 8) |\ 276 241 BIT_ULL(6) | BIT_ULL(4) | BIT_ULL(2) | BIT_ULL(0)) 277 - #define SYS_PMSEVFR_EL1_RES0_8_3 \ 278 - (SYS_PMSEVFR_EL1_RES0_8_2 & ~(BIT_ULL(18) | BIT_ULL(17) | BIT_ULL(11))) 279 - 280 - #define SYS_PMSLATFR_EL1 sys_reg(3, 0, 9, 9, 6) 281 - #define SYS_PMSLATFR_EL1_MINLAT_SHIFT 0 282 - 283 - /* Buffer controls */ 284 - #define SYS_PMBLIMITR_EL1 sys_reg(3, 0, 9, 10, 0) 285 - #define SYS_PMBLIMITR_EL1_E_SHIFT 0 286 - #define SYS_PMBLIMITR_EL1_FM_SHIFT 1 287 - #define SYS_PMBLIMITR_EL1_FM_MASK 0x3UL 288 - #define SYS_PMBLIMITR_EL1_FM_STOP_IRQ (0 << SYS_PMBLIMITR_EL1_FM_SHIFT) 289 - 
290 - #define SYS_PMBPTR_EL1 sys_reg(3, 0, 9, 10, 1) 242 + #define PMSEVFR_EL1_RES0_V1P1 \ 243 + (PMSEVFR_EL1_RES0_IMP & ~(BIT_ULL(18) | BIT_ULL(17) | BIT_ULL(11))) 244 + #define PMSEVFR_EL1_RES0_V1P2 \ 245 + (PMSEVFR_EL1_RES0_V1P1 & ~BIT_ULL(6)) 291 246 292 247 /* Buffer error reporting */ 293 - #define SYS_PMBSR_EL1 sys_reg(3, 0, 9, 10, 3) 294 - #define SYS_PMBSR_EL1_COLL_SHIFT 16 295 - #define SYS_PMBSR_EL1_S_SHIFT 17 296 - #define SYS_PMBSR_EL1_EA_SHIFT 18 297 - #define SYS_PMBSR_EL1_DL_SHIFT 19 298 - #define SYS_PMBSR_EL1_EC_SHIFT 26 299 - #define SYS_PMBSR_EL1_EC_MASK 0x3fUL 248 + #define PMBSR_EL1_FAULT_FSC_SHIFT PMBSR_EL1_MSS_SHIFT 249 + #define PMBSR_EL1_FAULT_FSC_MASK PMBSR_EL1_MSS_MASK 300 250 301 - #define SYS_PMBSR_EL1_EC_BUF (0x0UL << SYS_PMBSR_EL1_EC_SHIFT) 302 - #define SYS_PMBSR_EL1_EC_FAULT_S1 (0x24UL << SYS_PMBSR_EL1_EC_SHIFT) 303 - #define SYS_PMBSR_EL1_EC_FAULT_S2 (0x25UL << SYS_PMBSR_EL1_EC_SHIFT) 251 + #define PMBSR_EL1_BUF_BSC_SHIFT PMBSR_EL1_MSS_SHIFT 252 + #define PMBSR_EL1_BUF_BSC_MASK PMBSR_EL1_MSS_MASK 304 253 305 - #define SYS_PMBSR_EL1_FAULT_FSC_SHIFT 0 306 - #define SYS_PMBSR_EL1_FAULT_FSC_MASK 0x3fUL 307 - 308 - #define SYS_PMBSR_EL1_BUF_BSC_SHIFT 0 309 - #define SYS_PMBSR_EL1_BUF_BSC_MASK 0x3fUL 310 - 311 - #define SYS_PMBSR_EL1_BUF_BSC_FULL (0x1UL << SYS_PMBSR_EL1_BUF_BSC_SHIFT) 254 + #define PMBSR_EL1_BUF_BSC_FULL 0x1UL 312 255 313 256 /*** End of Statistical Profiling Extension ***/ 314 257 315 - /* 316 - * TRBE Registers 317 - */ 318 - #define SYS_TRBLIMITR_EL1 sys_reg(3, 0, 9, 11, 0) 319 - #define SYS_TRBPTR_EL1 sys_reg(3, 0, 9, 11, 1) 320 - #define SYS_TRBBASER_EL1 sys_reg(3, 0, 9, 11, 2) 321 - #define SYS_TRBSR_EL1 sys_reg(3, 0, 9, 11, 3) 322 - #define SYS_TRBMAR_EL1 sys_reg(3, 0, 9, 11, 4) 323 - #define SYS_TRBTRG_EL1 sys_reg(3, 0, 9, 11, 6) 324 - #define SYS_TRBIDR_EL1 sys_reg(3, 0, 9, 11, 7) 325 - 326 - #define TRBLIMITR_LIMIT_MASK GENMASK_ULL(51, 0) 327 - #define TRBLIMITR_LIMIT_SHIFT 12 328 - #define TRBLIMITR_NVM BIT(5) 329 - #define TRBLIMITR_TRIG_MODE_MASK GENMASK(1, 0) 330 - #define TRBLIMITR_TRIG_MODE_SHIFT 3 331 - #define TRBLIMITR_FILL_MODE_MASK GENMASK(1, 0) 332 - #define TRBLIMITR_FILL_MODE_SHIFT 1 333 - #define TRBLIMITR_ENABLE BIT(0) 334 - #define TRBPTR_PTR_MASK GENMASK_ULL(63, 0) 335 - #define TRBPTR_PTR_SHIFT 0 336 - #define TRBBASER_BASE_MASK GENMASK_ULL(51, 0) 337 - #define TRBBASER_BASE_SHIFT 12 338 - #define TRBSR_EC_MASK GENMASK(5, 0) 339 - #define TRBSR_EC_SHIFT 26 340 - #define TRBSR_IRQ BIT(22) 341 - #define TRBSR_TRG BIT(21) 342 - #define TRBSR_WRAP BIT(20) 343 - #define TRBSR_ABORT BIT(18) 344 - #define TRBSR_STOP BIT(17) 345 - #define TRBSR_MSS_MASK GENMASK(15, 0) 346 - #define TRBSR_MSS_SHIFT 0 347 - #define TRBSR_BSC_MASK GENMASK(5, 0) 348 - #define TRBSR_BSC_SHIFT 0 349 - #define TRBSR_FSC_MASK GENMASK(5, 0) 350 - #define TRBSR_FSC_SHIFT 0 351 - #define TRBMAR_SHARE_MASK GENMASK(1, 0) 352 - #define TRBMAR_SHARE_SHIFT 8 353 - #define TRBMAR_OUTER_MASK GENMASK(3, 0) 354 - #define TRBMAR_OUTER_SHIFT 4 355 - #define TRBMAR_INNER_MASK GENMASK(3, 0) 356 - #define TRBMAR_INNER_SHIFT 0 357 - #define TRBTRG_TRG_MASK GENMASK(31, 0) 358 - #define TRBTRG_TRG_SHIFT 0 359 - #define TRBIDR_FLAG BIT(5) 360 - #define TRBIDR_PROG BIT(4) 361 - #define TRBIDR_ALIGN_MASK GENMASK(3, 0) 362 - #define TRBIDR_ALIGN_SHIFT 0 258 + #define TRBSR_EL1_BSC_MASK GENMASK(5, 0) 259 + #define TRBSR_EL1_BSC_SHIFT 0 363 260 364 261 #define SYS_PMINTENSET_EL1 sys_reg(3, 0, 9, 14, 1) 365 262 #define SYS_PMINTENCLR_EL1 sys_reg(3, 0, 9, 14, 2) ··· 242 391 
243 392 #define SYS_MAIR_EL1 sys_reg(3, 0, 10, 2, 0) 244 393 #define SYS_AMAIR_EL1 sys_reg(3, 0, 10, 3, 0) 245 - 246 - #define SYS_LORSA_EL1 sys_reg(3, 0, 10, 4, 0) 247 - #define SYS_LOREA_EL1 sys_reg(3, 0, 10, 4, 1) 248 - #define SYS_LORN_EL1 sys_reg(3, 0, 10, 4, 2) 249 - #define SYS_LORC_EL1 sys_reg(3, 0, 10, 4, 3) 250 - #define SYS_LORID_EL1 sys_reg(3, 0, 10, 4, 7) 251 394 252 395 #define SYS_VBAR_EL1 sys_reg(3, 0, 12, 0, 0) 253 396 #define SYS_DISR_EL1 sys_reg(3, 0, 12, 1, 1) ··· 274 429 #define SYS_ICC_IGRPEN0_EL1 sys_reg(3, 0, 12, 12, 6) 275 430 #define SYS_ICC_IGRPEN1_EL1 sys_reg(3, 0, 12, 12, 7) 276 431 277 - #define SYS_CONTEXTIDR_EL1 sys_reg(3, 0, 13, 0, 1) 278 - #define SYS_TPIDR_EL1 sys_reg(3, 0, 13, 0, 4) 279 - 280 - #define SYS_SCXTNUM_EL1 sys_reg(3, 0, 13, 0, 7) 281 - 282 432 #define SYS_CNTKCTL_EL1 sys_reg(3, 0, 14, 1, 0) 283 433 284 - #define SYS_CCSIDR_EL1 sys_reg(3, 1, 0, 0, 0) 285 - #define SYS_CLIDR_EL1 sys_reg(3, 1, 0, 0, 1) 286 - #define SYS_GMID_EL1 sys_reg(3, 1, 0, 0, 4) 287 434 #define SYS_AIDR_EL1 sys_reg(3, 1, 0, 0, 7) 288 - 289 - #define SYS_CSSELR_EL1 sys_reg(3, 2, 0, 0, 0) 290 - 291 - #define SYS_CTR_EL0 sys_reg(3, 3, 0, 0, 1) 292 - #define SYS_DCZID_EL0 sys_reg(3, 3, 0, 0, 7) 293 435 294 436 #define SYS_RNDR_EL0 sys_reg(3, 3, 2, 4, 0) 295 437 #define SYS_RNDRRS_EL0 sys_reg(3, 3, 2, 4, 1) ··· 297 465 298 466 #define SYS_TPIDR_EL0 sys_reg(3, 3, 13, 0, 2) 299 467 #define SYS_TPIDRRO_EL0 sys_reg(3, 3, 13, 0, 3) 468 + #define SYS_TPIDR2_EL0 sys_reg(3, 3, 13, 0, 5) 300 469 301 470 #define SYS_SCXTNUM_EL0 sys_reg(3, 3, 13, 0, 7) 302 471 ··· 339 506 340 507 #define SYS_CNTFRQ_EL0 sys_reg(3, 3, 14, 0, 0) 341 508 509 + #define SYS_CNTPCT_EL0 sys_reg(3, 3, 14, 0, 1) 510 + #define SYS_CNTPCTSS_EL0 sys_reg(3, 3, 14, 0, 5) 511 + #define SYS_CNTVCTSS_EL0 sys_reg(3, 3, 14, 0, 6) 512 + 342 513 #define SYS_CNTP_TVAL_EL0 sys_reg(3, 3, 14, 2, 0) 343 514 #define SYS_CNTP_CTL_EL0 sys_reg(3, 3, 14, 2, 1) 344 515 #define SYS_CNTP_CVAL_EL0 sys_reg(3, 3, 14, 2, 2) ··· 352 515 353 516 #define SYS_AARCH32_CNTP_TVAL sys_reg(0, 0, 14, 2, 0) 354 517 #define SYS_AARCH32_CNTP_CTL sys_reg(0, 0, 14, 2, 1) 518 + #define SYS_AARCH32_CNTPCT sys_reg(0, 0, 0, 14, 0) 355 519 #define SYS_AARCH32_CNTP_CVAL sys_reg(0, 2, 0, 14, 0) 520 + #define SYS_AARCH32_CNTPCTSS sys_reg(0, 8, 0, 14, 0) 356 521 357 522 #define __PMEV_op2(n) ((n) & 0x7) 358 523 #define __CNTR_CRm(n) (0x8 | (((n) >> 3) & 0x3)) ··· 364 525 365 526 #define SYS_PMCCFILTR_EL0 sys_reg(3, 3, 14, 15, 7) 366 527 528 + #define SYS_VPIDR_EL2 sys_reg(3, 4, 0, 0, 0) 529 + #define SYS_VMPIDR_EL2 sys_reg(3, 4, 0, 0, 5) 530 + 367 531 #define SYS_SCTLR_EL2 sys_reg(3, 4, 1, 0, 0) 368 - #define SYS_HFGRTR_EL2 sys_reg(3, 4, 1, 1, 4) 369 - #define SYS_HFGWTR_EL2 sys_reg(3, 4, 1, 1, 5) 370 - #define SYS_HFGITR_EL2 sys_reg(3, 4, 1, 1, 6) 371 - #define SYS_ZCR_EL2 sys_reg(3, 4, 1, 2, 0) 532 + #define SYS_ACTLR_EL2 sys_reg(3, 4, 1, 0, 1) 533 + #define SYS_HCR_EL2 sys_reg(3, 4, 1, 1, 0) 534 + #define SYS_MDCR_EL2 sys_reg(3, 4, 1, 1, 1) 535 + #define SYS_CPTR_EL2 sys_reg(3, 4, 1, 1, 2) 536 + #define SYS_HSTR_EL2 sys_reg(3, 4, 1, 1, 3) 537 + #define SYS_HACR_EL2 sys_reg(3, 4, 1, 1, 7) 538 + 539 + #define SYS_TTBR0_EL2 sys_reg(3, 4, 2, 0, 0) 540 + #define SYS_TTBR1_EL2 sys_reg(3, 4, 2, 0, 1) 541 + #define SYS_TCR_EL2 sys_reg(3, 4, 2, 0, 2) 542 + #define SYS_VTTBR_EL2 sys_reg(3, 4, 2, 1, 0) 543 + #define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2) 544 + 372 545 #define SYS_TRFCR_EL2 sys_reg(3, 4, 1, 2, 1) 373 - #define SYS_DACR32_EL2 sys_reg(3, 4, 3, 0, 0) 374 546 
#define SYS_HDFGRTR_EL2 sys_reg(3, 4, 3, 1, 4) 375 547 #define SYS_HDFGWTR_EL2 sys_reg(3, 4, 3, 1, 5) 376 548 #define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6) 377 549 #define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0) 378 550 #define SYS_ELR_EL2 sys_reg(3, 4, 4, 0, 1) 551 + #define SYS_SP_EL1 sys_reg(3, 4, 4, 1, 0) 379 552 #define SYS_IFSR32_EL2 sys_reg(3, 4, 5, 0, 1) 553 + #define SYS_AFSR0_EL2 sys_reg(3, 4, 5, 1, 0) 554 + #define SYS_AFSR1_EL2 sys_reg(3, 4, 5, 1, 1) 380 555 #define SYS_ESR_EL2 sys_reg(3, 4, 5, 2, 0) 381 556 #define SYS_VSESR_EL2 sys_reg(3, 4, 5, 2, 3) 382 557 #define SYS_FPEXC32_EL2 sys_reg(3, 4, 5, 3, 0) 383 558 #define SYS_TFSR_EL2 sys_reg(3, 4, 5, 6, 0) 384 - #define SYS_FAR_EL2 sys_reg(3, 4, 6, 0, 0) 385 559 386 - #define SYS_VDISR_EL2 sys_reg(3, 4, 12, 1, 1) 560 + #define SYS_FAR_EL2 sys_reg(3, 4, 6, 0, 0) 561 + #define SYS_HPFAR_EL2 sys_reg(3, 4, 6, 0, 4) 562 + 563 + #define SYS_MAIR_EL2 sys_reg(3, 4, 10, 2, 0) 564 + #define SYS_AMAIR_EL2 sys_reg(3, 4, 10, 3, 0) 565 + 566 + #define SYS_VBAR_EL2 sys_reg(3, 4, 12, 0, 0) 567 + #define SYS_RVBAR_EL2 sys_reg(3, 4, 12, 0, 1) 568 + #define SYS_RMR_EL2 sys_reg(3, 4, 12, 0, 2) 569 + #define SYS_VDISR_EL2 sys_reg(3, 4, 12, 1, 1) 387 570 #define __SYS__AP0Rx_EL2(x) sys_reg(3, 4, 12, 8, x) 388 571 #define SYS_ICH_AP0R0_EL2 __SYS__AP0Rx_EL2(0) 389 572 #define SYS_ICH_AP0R1_EL2 __SYS__AP0Rx_EL2(1) ··· 447 586 #define SYS_ICH_LR14_EL2 __SYS__LR8_EL2(6) 448 587 #define SYS_ICH_LR15_EL2 __SYS__LR8_EL2(7) 449 588 589 + #define SYS_CONTEXTIDR_EL2 sys_reg(3, 4, 13, 0, 1) 590 + #define SYS_TPIDR_EL2 sys_reg(3, 4, 13, 0, 2) 591 + 592 + #define SYS_CNTVOFF_EL2 sys_reg(3, 4, 14, 0, 3) 593 + #define SYS_CNTHCTL_EL2 sys_reg(3, 4, 14, 1, 0) 594 + 450 595 /* VHE encodings for architectural EL0/1 system registers */ 451 596 #define SYS_SCTLR_EL12 sys_reg(3, 5, 1, 0, 0) 452 - #define SYS_CPACR_EL12 sys_reg(3, 5, 1, 0, 2) 453 - #define SYS_ZCR_EL12 sys_reg(3, 5, 1, 2, 0) 454 597 #define SYS_TTBR0_EL12 sys_reg(3, 5, 2, 0, 0) 455 598 #define SYS_TTBR1_EL12 sys_reg(3, 5, 2, 0, 1) 456 599 #define SYS_TCR_EL12 sys_reg(3, 5, 2, 0, 2) ··· 464 599 #define SYS_AFSR1_EL12 sys_reg(3, 5, 5, 1, 1) 465 600 #define SYS_ESR_EL12 sys_reg(3, 5, 5, 2, 0) 466 601 #define SYS_TFSR_EL12 sys_reg(3, 5, 5, 6, 0) 467 - #define SYS_FAR_EL12 sys_reg(3, 5, 6, 0, 0) 468 602 #define SYS_MAIR_EL12 sys_reg(3, 5, 10, 2, 0) 469 603 #define SYS_AMAIR_EL12 sys_reg(3, 5, 10, 3, 0) 470 604 #define SYS_VBAR_EL12 sys_reg(3, 5, 12, 0, 0) 471 - #define SYS_CONTEXTIDR_EL12 sys_reg(3, 5, 13, 0, 1) 472 605 #define SYS_CNTKCTL_EL12 sys_reg(3, 5, 14, 1, 0) 473 606 #define SYS_CNTP_TVAL_EL02 sys_reg(3, 5, 14, 2, 0) 474 607 #define SYS_CNTP_CTL_EL02 sys_reg(3, 5, 14, 2, 1) ··· 475 612 #define SYS_CNTV_CTL_EL02 sys_reg(3, 5, 14, 3, 1) 476 613 #define SYS_CNTV_CVAL_EL02 sys_reg(3, 5, 14, 3, 2) 477 614 615 + #define SYS_SP_EL2 sys_reg(3, 6, 4, 1, 0) 616 + 478 617 /* Common SCTLR_ELx flags. 
*/ 618 + #define SCTLR_ELx_ENTP2 (BIT(60)) 479 619 #define SCTLR_ELx_DSSBS (BIT(44)) 480 620 #define SCTLR_ELx_ATA (BIT(43)) 481 621 482 - #define SCTLR_ELx_TCF_SHIFT 40 483 - #define SCTLR_ELx_TCF_NONE (UL(0x0) << SCTLR_ELx_TCF_SHIFT) 484 - #define SCTLR_ELx_TCF_SYNC (UL(0x1) << SCTLR_ELx_TCF_SHIFT) 485 - #define SCTLR_ELx_TCF_ASYNC (UL(0x2) << SCTLR_ELx_TCF_SHIFT) 486 - #define SCTLR_ELx_TCF_MASK (UL(0x3) << SCTLR_ELx_TCF_SHIFT) 487 - 622 + #define SCTLR_ELx_EE_SHIFT 25 488 623 #define SCTLR_ELx_ENIA_SHIFT 31 489 624 490 - #define SCTLR_ELx_ITFSB (BIT(37)) 491 - #define SCTLR_ELx_ENIA (BIT(SCTLR_ELx_ENIA_SHIFT)) 492 - #define SCTLR_ELx_ENIB (BIT(30)) 493 - #define SCTLR_ELx_ENDA (BIT(27)) 494 - #define SCTLR_ELx_EE (BIT(25)) 495 - #define SCTLR_ELx_IESB (BIT(21)) 496 - #define SCTLR_ELx_WXN (BIT(19)) 497 - #define SCTLR_ELx_ENDB (BIT(13)) 498 - #define SCTLR_ELx_I (BIT(12)) 499 - #define SCTLR_ELx_SA (BIT(3)) 500 - #define SCTLR_ELx_C (BIT(2)) 501 - #define SCTLR_ELx_A (BIT(1)) 502 - #define SCTLR_ELx_M (BIT(0)) 625 + #define SCTLR_ELx_ITFSB (BIT(37)) 626 + #define SCTLR_ELx_ENIA (BIT(SCTLR_ELx_ENIA_SHIFT)) 627 + #define SCTLR_ELx_ENIB (BIT(30)) 628 + #define SCTLR_ELx_LSMAOE (BIT(29)) 629 + #define SCTLR_ELx_nTLSMD (BIT(28)) 630 + #define SCTLR_ELx_ENDA (BIT(27)) 631 + #define SCTLR_ELx_EE (BIT(SCTLR_ELx_EE_SHIFT)) 632 + #define SCTLR_ELx_EIS (BIT(22)) 633 + #define SCTLR_ELx_IESB (BIT(21)) 634 + #define SCTLR_ELx_TSCXT (BIT(20)) 635 + #define SCTLR_ELx_WXN (BIT(19)) 636 + #define SCTLR_ELx_ENDB (BIT(13)) 637 + #define SCTLR_ELx_I (BIT(12)) 638 + #define SCTLR_ELx_EOS (BIT(11)) 639 + #define SCTLR_ELx_SA (BIT(3)) 640 + #define SCTLR_ELx_C (BIT(2)) 641 + #define SCTLR_ELx_A (BIT(1)) 642 + #define SCTLR_ELx_M (BIT(0)) 503 643 504 644 /* SCTLR_EL2 specific flags. */ 505 645 #define SCTLR_EL2_RES1 ((BIT(4)) | (BIT(5)) | (BIT(11)) | (BIT(16)) | \ 506 646 (BIT(18)) | (BIT(22)) | (BIT(23)) | (BIT(28)) | \ 507 647 (BIT(29))) 508 648 649 + #define SCTLR_EL2_BT (BIT(36)) 509 650 #ifdef CONFIG_CPU_BIG_ENDIAN 510 651 #define ENDIAN_SET_EL2 SCTLR_ELx_EE 511 652 #else ··· 525 658 (SCTLR_EL2_RES1 | ENDIAN_SET_EL2) 526 659 527 660 /* SCTLR_EL1 specific flags. 
*/ 528 - #define SCTLR_EL1_EPAN (BIT(57)) 529 - #define SCTLR_EL1_ATA0 (BIT(42)) 530 - 531 - #define SCTLR_EL1_TCF0_SHIFT 38 532 - #define SCTLR_EL1_TCF0_NONE (UL(0x0) << SCTLR_EL1_TCF0_SHIFT) 533 - #define SCTLR_EL1_TCF0_SYNC (UL(0x1) << SCTLR_EL1_TCF0_SHIFT) 534 - #define SCTLR_EL1_TCF0_ASYNC (UL(0x2) << SCTLR_EL1_TCF0_SHIFT) 535 - #define SCTLR_EL1_TCF0_MASK (UL(0x3) << SCTLR_EL1_TCF0_SHIFT) 536 - 537 - #define SCTLR_EL1_BT1 (BIT(36)) 538 - #define SCTLR_EL1_BT0 (BIT(35)) 539 - #define SCTLR_EL1_UCI (BIT(26)) 540 - #define SCTLR_EL1_E0E (BIT(24)) 541 - #define SCTLR_EL1_SPAN (BIT(23)) 542 - #define SCTLR_EL1_NTWE (BIT(18)) 543 - #define SCTLR_EL1_NTWI (BIT(16)) 544 - #define SCTLR_EL1_UCT (BIT(15)) 545 - #define SCTLR_EL1_DZE (BIT(14)) 546 - #define SCTLR_EL1_UMA (BIT(9)) 547 - #define SCTLR_EL1_SED (BIT(8)) 548 - #define SCTLR_EL1_ITD (BIT(7)) 549 - #define SCTLR_EL1_CP15BEN (BIT(5)) 550 - #define SCTLR_EL1_SA0 (BIT(4)) 551 - 552 - #define SCTLR_EL1_RES1 ((BIT(11)) | (BIT(20)) | (BIT(22)) | (BIT(28)) | \ 553 - (BIT(29))) 554 - 555 661 #ifdef CONFIG_CPU_BIG_ENDIAN 556 662 #define ENDIAN_SET_EL1 (SCTLR_EL1_E0E | SCTLR_ELx_EE) 557 663 #else ··· 532 692 #endif 533 693 534 694 #define INIT_SCTLR_EL1_MMU_OFF \ 535 - (ENDIAN_SET_EL1 | SCTLR_EL1_RES1) 695 + (ENDIAN_SET_EL1 | SCTLR_EL1_LSMAOE | SCTLR_EL1_nTLSMD | \ 696 + SCTLR_EL1_EIS | SCTLR_EL1_TSCXT | SCTLR_EL1_EOS) 536 697 537 698 #define INIT_SCTLR_EL1_MMU_ON \ 538 - (SCTLR_ELx_M | SCTLR_ELx_C | SCTLR_ELx_SA | SCTLR_EL1_SA0 | \ 539 - SCTLR_EL1_SED | SCTLR_ELx_I | SCTLR_EL1_DZE | SCTLR_EL1_UCT | \ 540 - SCTLR_EL1_NTWE | SCTLR_ELx_IESB | SCTLR_EL1_SPAN | SCTLR_ELx_ITFSB | \ 541 - SCTLR_ELx_ATA | SCTLR_EL1_ATA0 | ENDIAN_SET_EL1 | SCTLR_EL1_UCI | \ 542 - SCTLR_EL1_EPAN | SCTLR_EL1_RES1) 699 + (SCTLR_ELx_M | SCTLR_ELx_C | SCTLR_ELx_SA | \ 700 + SCTLR_EL1_SA0 | SCTLR_EL1_SED | SCTLR_ELx_I | \ 701 + SCTLR_EL1_DZE | SCTLR_EL1_UCT | SCTLR_EL1_nTWE | \ 702 + SCTLR_ELx_IESB | SCTLR_EL1_SPAN | SCTLR_ELx_ITFSB | \ 703 + ENDIAN_SET_EL1 | SCTLR_EL1_UCI | SCTLR_EL1_EPAN | \ 704 + SCTLR_EL1_LSMAOE | SCTLR_EL1_nTLSMD | SCTLR_EL1_EIS | \ 705 + SCTLR_EL1_TSCXT | SCTLR_EL1_EOS) 543 706 544 707 /* MAIR_ELx memory attributes (used by Linux) */ 545 708 #define MAIR_ATTR_DEVICE_nGnRnE UL(0x00) ··· 555 712 /* Position the attr at the correct index */ 556 713 #define MAIR_ATTRIDX(attr, idx) ((attr) << ((idx) * 8)) 557 714 558 - /* id_aa64isar0 */ 559 - #define ID_AA64ISAR0_RNDR_SHIFT 60 560 - #define ID_AA64ISAR0_TLB_SHIFT 56 561 - #define ID_AA64ISAR0_TS_SHIFT 52 562 - #define ID_AA64ISAR0_FHM_SHIFT 48 563 - #define ID_AA64ISAR0_DP_SHIFT 44 564 - #define ID_AA64ISAR0_SM4_SHIFT 40 565 - #define ID_AA64ISAR0_SM3_SHIFT 36 566 - #define ID_AA64ISAR0_SHA3_SHIFT 32 567 - #define ID_AA64ISAR0_RDM_SHIFT 28 568 - #define ID_AA64ISAR0_ATOMICS_SHIFT 20 569 - #define ID_AA64ISAR0_CRC32_SHIFT 16 570 - #define ID_AA64ISAR0_SHA2_SHIFT 12 571 - #define ID_AA64ISAR0_SHA1_SHIFT 8 572 - #define ID_AA64ISAR0_AES_SHIFT 4 573 - 574 - #define ID_AA64ISAR0_TLB_RANGE_NI 0x0 575 - #define ID_AA64ISAR0_TLB_RANGE 0x2 576 - 577 - /* id_aa64isar1 */ 578 - #define ID_AA64ISAR1_I8MM_SHIFT 52 579 - #define ID_AA64ISAR1_DGH_SHIFT 48 580 - #define ID_AA64ISAR1_BF16_SHIFT 44 581 - #define ID_AA64ISAR1_SPECRES_SHIFT 40 582 - #define ID_AA64ISAR1_SB_SHIFT 36 583 - #define ID_AA64ISAR1_FRINTTS_SHIFT 32 584 - #define ID_AA64ISAR1_GPI_SHIFT 28 585 - #define ID_AA64ISAR1_GPA_SHIFT 24 586 - #define ID_AA64ISAR1_LRCPC_SHIFT 20 587 - #define ID_AA64ISAR1_FCMA_SHIFT 16 588 - #define ID_AA64ISAR1_JSCVT_SHIFT 12 
589 - #define ID_AA64ISAR1_API_SHIFT 8 590 - #define ID_AA64ISAR1_APA_SHIFT 4 591 - #define ID_AA64ISAR1_DPB_SHIFT 0 592 - 593 - #define ID_AA64ISAR1_APA_NI 0x0 594 - #define ID_AA64ISAR1_APA_ARCHITECTED 0x1 595 - #define ID_AA64ISAR1_APA_ARCH_EPAC 0x2 596 - #define ID_AA64ISAR1_APA_ARCH_EPAC2 0x3 597 - #define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC 0x4 598 - #define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC_CMB 0x5 599 - #define ID_AA64ISAR1_API_NI 0x0 600 - #define ID_AA64ISAR1_API_IMP_DEF 0x1 601 - #define ID_AA64ISAR1_API_IMP_DEF_EPAC 0x2 602 - #define ID_AA64ISAR1_API_IMP_DEF_EPAC2 0x3 603 - #define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC 0x4 604 - #define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC_CMB 0x5 605 - #define ID_AA64ISAR1_GPA_NI 0x0 606 - #define ID_AA64ISAR1_GPA_ARCHITECTED 0x1 607 - #define ID_AA64ISAR1_GPI_NI 0x0 608 - #define ID_AA64ISAR1_GPI_IMP_DEF 0x1 609 - 610 715 /* id_aa64pfr0 */ 611 - #define ID_AA64PFR0_CSV3_SHIFT 60 612 - #define ID_AA64PFR0_CSV2_SHIFT 56 613 - #define ID_AA64PFR0_DIT_SHIFT 48 614 - #define ID_AA64PFR0_AMU_SHIFT 44 615 - #define ID_AA64PFR0_MPAM_SHIFT 40 616 - #define ID_AA64PFR0_SEL2_SHIFT 36 617 - #define ID_AA64PFR0_SVE_SHIFT 32 618 - #define ID_AA64PFR0_RAS_SHIFT 28 619 - #define ID_AA64PFR0_GIC_SHIFT 24 620 - #define ID_AA64PFR0_ASIMD_SHIFT 20 621 - #define ID_AA64PFR0_FP_SHIFT 16 622 - #define ID_AA64PFR0_EL3_SHIFT 12 623 - #define ID_AA64PFR0_EL2_SHIFT 8 624 - #define ID_AA64PFR0_EL1_SHIFT 4 625 - #define ID_AA64PFR0_EL0_SHIFT 0 626 - 627 - #define ID_AA64PFR0_AMU 0x1 628 - #define ID_AA64PFR0_SVE 0x1 629 - #define ID_AA64PFR0_RAS_V1 0x1 630 - #define ID_AA64PFR0_RAS_V1P1 0x2 631 - #define ID_AA64PFR0_FP_NI 0xf 632 - #define ID_AA64PFR0_FP_SUPPORTED 0x0 633 - #define ID_AA64PFR0_ASIMD_NI 0xf 634 - #define ID_AA64PFR0_ASIMD_SUPPORTED 0x0 635 - #define ID_AA64PFR0_ELx_64BIT_ONLY 0x1 636 - #define ID_AA64PFR0_ELx_32BIT_64BIT 0x2 637 - 638 - /* id_aa64pfr1 */ 639 - #define ID_AA64PFR1_MPAMFRAC_SHIFT 16 640 - #define ID_AA64PFR1_RASFRAC_SHIFT 12 641 - #define ID_AA64PFR1_MTE_SHIFT 8 642 - #define ID_AA64PFR1_SSBS_SHIFT 4 643 - #define ID_AA64PFR1_BT_SHIFT 0 644 - 645 - #define ID_AA64PFR1_SSBS_PSTATE_NI 0 646 - #define ID_AA64PFR1_SSBS_PSTATE_ONLY 1 647 - #define ID_AA64PFR1_SSBS_PSTATE_INSNS 2 648 - #define ID_AA64PFR1_BT_BTI 0x1 649 - 650 - #define ID_AA64PFR1_MTE_NI 0x0 651 - #define ID_AA64PFR1_MTE_EL0 0x1 652 - #define ID_AA64PFR1_MTE 0x2 653 - 654 - /* id_aa64zfr0 */ 655 - #define ID_AA64ZFR0_F64MM_SHIFT 56 656 - #define ID_AA64ZFR0_F32MM_SHIFT 52 657 - #define ID_AA64ZFR0_I8MM_SHIFT 44 658 - #define ID_AA64ZFR0_SM4_SHIFT 40 659 - #define ID_AA64ZFR0_SHA3_SHIFT 32 660 - #define ID_AA64ZFR0_BF16_SHIFT 20 661 - #define ID_AA64ZFR0_BITPERM_SHIFT 16 662 - #define ID_AA64ZFR0_AES_SHIFT 4 663 - #define ID_AA64ZFR0_SVEVER_SHIFT 0 664 - 665 - #define ID_AA64ZFR0_F64MM 0x1 666 - #define ID_AA64ZFR0_F32MM 0x1 667 - #define ID_AA64ZFR0_I8MM 0x1 668 - #define ID_AA64ZFR0_BF16 0x1 669 - #define ID_AA64ZFR0_SM4 0x1 670 - #define ID_AA64ZFR0_SHA3 0x1 671 - #define ID_AA64ZFR0_BITPERM 0x1 672 - #define ID_AA64ZFR0_AES 0x1 673 - #define ID_AA64ZFR0_AES_PMULL 0x2 674 - #define ID_AA64ZFR0_SVEVER_SVE2 0x1 716 + #define ID_AA64PFR0_EL1_ELx_64BIT_ONLY 0x1 717 + #define ID_AA64PFR0_EL1_ELx_32BIT_64BIT 0x2 675 718 676 719 /* id_aa64mmfr0 */ 677 - #define ID_AA64MMFR0_ECV_SHIFT 60 678 - #define ID_AA64MMFR0_FGT_SHIFT 56 679 - #define ID_AA64MMFR0_EXS_SHIFT 44 680 - #define ID_AA64MMFR0_TGRAN4_2_SHIFT 40 681 - #define ID_AA64MMFR0_TGRAN64_2_SHIFT 36 682 - #define 
ID_AA64MMFR0_TGRAN16_2_SHIFT 32 683 - #define ID_AA64MMFR0_TGRAN4_SHIFT 28 684 - #define ID_AA64MMFR0_TGRAN64_SHIFT 24 685 - #define ID_AA64MMFR0_TGRAN16_SHIFT 20 686 - #define ID_AA64MMFR0_BIGENDEL0_SHIFT 16 687 - #define ID_AA64MMFR0_SNSMEM_SHIFT 12 688 - #define ID_AA64MMFR0_BIGENDEL_SHIFT 8 689 - #define ID_AA64MMFR0_ASID_SHIFT 4 690 - #define ID_AA64MMFR0_PARANGE_SHIFT 0 691 - 692 - #define ID_AA64MMFR0_ASID_8 0x0 693 - #define ID_AA64MMFR0_ASID_16 0x2 694 - 695 - #define ID_AA64MMFR0_TGRAN4_NI 0xf 696 - #define ID_AA64MMFR0_TGRAN4_SUPPORTED_MIN 0x0 697 - #define ID_AA64MMFR0_TGRAN4_SUPPORTED_MAX 0x7 698 - #define ID_AA64MMFR0_TGRAN64_NI 0xf 699 - #define ID_AA64MMFR0_TGRAN64_SUPPORTED_MIN 0x0 700 - #define ID_AA64MMFR0_TGRAN64_SUPPORTED_MAX 0x7 701 - #define ID_AA64MMFR0_TGRAN16_NI 0x0 702 - #define ID_AA64MMFR0_TGRAN16_SUPPORTED_MIN 0x1 703 - #define ID_AA64MMFR0_TGRAN16_SUPPORTED_MAX 0xf 704 - 705 - #define ID_AA64MMFR0_PARANGE_32 0x0 706 - #define ID_AA64MMFR0_PARANGE_36 0x1 707 - #define ID_AA64MMFR0_PARANGE_40 0x2 708 - #define ID_AA64MMFR0_PARANGE_42 0x3 709 - #define ID_AA64MMFR0_PARANGE_44 0x4 710 - #define ID_AA64MMFR0_PARANGE_48 0x5 711 - #define ID_AA64MMFR0_PARANGE_52 0x6 720 + #define ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN 0x0 721 + #define ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MAX 0x7 722 + #define ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MIN 0x0 723 + #define ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MAX 0x7 724 + #define ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MIN 0x1 725 + #define ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MAX 0xf 712 726 713 727 #define ARM64_MIN_PARANGE_BITS 32 714 728 715 - #define ID_AA64MMFR0_TGRAN_2_SUPPORTED_DEFAULT 0x0 716 - #define ID_AA64MMFR0_TGRAN_2_SUPPORTED_NONE 0x1 717 - #define ID_AA64MMFR0_TGRAN_2_SUPPORTED_MIN 0x2 718 - #define ID_AA64MMFR0_TGRAN_2_SUPPORTED_MAX 0x7 729 + #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_DEFAULT 0x0 730 + #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_NONE 0x1 731 + #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MIN 0x2 732 + #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MAX 0x7 719 733 720 734 #ifdef CONFIG_ARM64_PA_BITS_52 721 - #define ID_AA64MMFR0_PARANGE_MAX ID_AA64MMFR0_PARANGE_52 735 + #define ID_AA64MMFR0_EL1_PARANGE_MAX ID_AA64MMFR0_EL1_PARANGE_52 722 736 #else 723 - #define ID_AA64MMFR0_PARANGE_MAX ID_AA64MMFR0_PARANGE_48 737 + #define ID_AA64MMFR0_EL1_PARANGE_MAX ID_AA64MMFR0_EL1_PARANGE_48 724 738 #endif 725 - 726 - /* id_aa64mmfr1 */ 727 - #define ID_AA64MMFR1_ETS_SHIFT 36 728 - #define ID_AA64MMFR1_TWED_SHIFT 32 729 - #define ID_AA64MMFR1_XNX_SHIFT 28 730 - #define ID_AA64MMFR1_SPECSEI_SHIFT 24 731 - #define ID_AA64MMFR1_PAN_SHIFT 20 732 - #define ID_AA64MMFR1_LOR_SHIFT 16 733 - #define ID_AA64MMFR1_HPD_SHIFT 12 734 - #define ID_AA64MMFR1_VHE_SHIFT 8 735 - #define ID_AA64MMFR1_VMIDBITS_SHIFT 4 736 - #define ID_AA64MMFR1_HADBS_SHIFT 0 737 - 738 - #define ID_AA64MMFR1_VMIDBITS_8 0 739 - #define ID_AA64MMFR1_VMIDBITS_16 2 740 - 741 - /* id_aa64mmfr2 */ 742 - #define ID_AA64MMFR2_E0PD_SHIFT 60 743 - #define ID_AA64MMFR2_EVT_SHIFT 56 744 - #define ID_AA64MMFR2_BBM_SHIFT 52 745 - #define ID_AA64MMFR2_TTL_SHIFT 48 746 - #define ID_AA64MMFR2_FWB_SHIFT 40 747 - #define ID_AA64MMFR2_IDS_SHIFT 36 748 - #define ID_AA64MMFR2_AT_SHIFT 32 749 - #define ID_AA64MMFR2_ST_SHIFT 28 750 - #define ID_AA64MMFR2_NV_SHIFT 24 751 - #define ID_AA64MMFR2_CCIDX_SHIFT 20 752 - #define ID_AA64MMFR2_LVA_SHIFT 16 753 - #define ID_AA64MMFR2_IESB_SHIFT 12 754 - #define ID_AA64MMFR2_LSM_SHIFT 8 755 - #define ID_AA64MMFR2_UAO_SHIFT 4 756 - #define ID_AA64MMFR2_CNP_SHIFT 
0 757 - 758 - /* id_aa64dfr0 */ 759 - #define ID_AA64DFR0_MTPMU_SHIFT 48 760 - #define ID_AA64DFR0_TRBE_SHIFT 44 761 - #define ID_AA64DFR0_TRACE_FILT_SHIFT 40 762 - #define ID_AA64DFR0_DOUBLELOCK_SHIFT 36 763 - #define ID_AA64DFR0_PMSVER_SHIFT 32 764 - #define ID_AA64DFR0_CTX_CMPS_SHIFT 28 765 - #define ID_AA64DFR0_WRPS_SHIFT 20 766 - #define ID_AA64DFR0_BRPS_SHIFT 12 767 - #define ID_AA64DFR0_PMUVER_SHIFT 8 768 - #define ID_AA64DFR0_TRACEVER_SHIFT 4 769 - #define ID_AA64DFR0_DEBUGVER_SHIFT 0 770 - 771 - #define ID_AA64DFR0_PMUVER_8_0 0x1 772 - #define ID_AA64DFR0_PMUVER_8_1 0x4 773 - #define ID_AA64DFR0_PMUVER_8_4 0x5 774 - #define ID_AA64DFR0_PMUVER_8_5 0x6 775 - #define ID_AA64DFR0_PMUVER_IMP_DEF 0xf 776 - 777 - #define ID_AA64DFR0_PMSVER_8_2 0x1 778 - #define ID_AA64DFR0_PMSVER_8_3 0x2 779 - 780 - #define ID_DFR0_PERFMON_SHIFT 24 781 - 782 - #define ID_DFR0_PERFMON_8_0 0x3 783 - #define ID_DFR0_PERFMON_8_1 0x4 784 - #define ID_DFR0_PERFMON_8_4 0x5 785 - #define ID_DFR0_PERFMON_8_5 0x6 786 - 787 - #define ID_ISAR4_SWP_FRAC_SHIFT 28 788 - #define ID_ISAR4_PSR_M_SHIFT 24 789 - #define ID_ISAR4_SYNCH_PRIM_FRAC_SHIFT 20 790 - #define ID_ISAR4_BARRIER_SHIFT 16 791 - #define ID_ISAR4_SMC_SHIFT 12 792 - #define ID_ISAR4_WRITEBACK_SHIFT 8 793 - #define ID_ISAR4_WITHSHIFTS_SHIFT 4 794 - #define ID_ISAR4_UNPRIV_SHIFT 0 795 - 796 - #define ID_DFR1_MTPMU_SHIFT 0 797 - 798 - #define ID_ISAR0_DIVIDE_SHIFT 24 799 - #define ID_ISAR0_DEBUG_SHIFT 20 800 - #define ID_ISAR0_COPROC_SHIFT 16 801 - #define ID_ISAR0_CMPBRANCH_SHIFT 12 802 - #define ID_ISAR0_BITFIELD_SHIFT 8 803 - #define ID_ISAR0_BITCOUNT_SHIFT 4 804 - #define ID_ISAR0_SWAP_SHIFT 0 805 - 806 - #define ID_ISAR5_RDM_SHIFT 24 807 - #define ID_ISAR5_CRC32_SHIFT 16 808 - #define ID_ISAR5_SHA2_SHIFT 12 809 - #define ID_ISAR5_SHA1_SHIFT 8 810 - #define ID_ISAR5_AES_SHIFT 4 811 - #define ID_ISAR5_SEVL_SHIFT 0 812 - 813 - #define ID_ISAR6_I8MM_SHIFT 24 814 - #define ID_ISAR6_BF16_SHIFT 20 815 - #define ID_ISAR6_SPECRES_SHIFT 16 816 - #define ID_ISAR6_SB_SHIFT 12 817 - #define ID_ISAR6_FHM_SHIFT 8 818 - #define ID_ISAR6_DP_SHIFT 4 819 - #define ID_ISAR6_JSCVT_SHIFT 0 820 - 821 - #define ID_MMFR0_INNERSHR_SHIFT 28 822 - #define ID_MMFR0_FCSE_SHIFT 24 823 - #define ID_MMFR0_AUXREG_SHIFT 20 824 - #define ID_MMFR0_TCM_SHIFT 16 825 - #define ID_MMFR0_SHARELVL_SHIFT 12 826 - #define ID_MMFR0_OUTERSHR_SHIFT 8 827 - #define ID_MMFR0_PMSA_SHIFT 4 828 - #define ID_MMFR0_VMSA_SHIFT 0 829 - 830 - #define ID_MMFR4_EVT_SHIFT 28 831 - #define ID_MMFR4_CCIDX_SHIFT 24 832 - #define ID_MMFR4_LSM_SHIFT 20 833 - #define ID_MMFR4_HPDS_SHIFT 16 834 - #define ID_MMFR4_CNP_SHIFT 12 835 - #define ID_MMFR4_XNX_SHIFT 8 836 - #define ID_MMFR4_AC2_SHIFT 4 837 - #define ID_MMFR4_SPECSEI_SHIFT 0 838 - 839 - #define ID_MMFR5_ETS_SHIFT 0 840 - 841 - #define ID_PFR0_DIT_SHIFT 24 842 - #define ID_PFR0_CSV2_SHIFT 16 843 - #define ID_PFR0_STATE3_SHIFT 12 844 - #define ID_PFR0_STATE2_SHIFT 8 845 - #define ID_PFR0_STATE1_SHIFT 4 846 - #define ID_PFR0_STATE0_SHIFT 0 847 - 848 - #define ID_DFR0_PERFMON_SHIFT 24 849 - #define ID_DFR0_MPROFDBG_SHIFT 20 850 - #define ID_DFR0_MMAPTRC_SHIFT 16 851 - #define ID_DFR0_COPTRC_SHIFT 12 852 - #define ID_DFR0_MMAPDBG_SHIFT 8 853 - #define ID_DFR0_COPSDBG_SHIFT 4 854 - #define ID_DFR0_COPDBG_SHIFT 0 855 - 856 - #define ID_PFR2_SSBS_SHIFT 4 857 - #define ID_PFR2_CSV3_SHIFT 0 858 - 859 - #define MVFR0_FPROUND_SHIFT 28 860 - #define MVFR0_FPSHVEC_SHIFT 24 861 - #define MVFR0_FPSQRT_SHIFT 20 862 - #define MVFR0_FPDIVIDE_SHIFT 16 863 - #define 
MVFR0_FPTRAP_SHIFT 12 864 - #define MVFR0_FPDP_SHIFT 8 865 - #define MVFR0_FPSP_SHIFT 4 866 - #define MVFR0_SIMD_SHIFT 0 867 - 868 - #define MVFR1_SIMDFMAC_SHIFT 28 869 - #define MVFR1_FPHP_SHIFT 24 870 - #define MVFR1_SIMDHP_SHIFT 20 871 - #define MVFR1_SIMDSP_SHIFT 16 872 - #define MVFR1_SIMDINT_SHIFT 12 873 - #define MVFR1_SIMDLS_SHIFT 8 874 - #define MVFR1_FPDNAN_SHIFT 4 875 - #define MVFR1_FPFTZ_SHIFT 0 876 - 877 - #define ID_PFR1_GIC_SHIFT 28 878 - #define ID_PFR1_VIRT_FRAC_SHIFT 24 879 - #define ID_PFR1_SEC_FRAC_SHIFT 20 880 - #define ID_PFR1_GENTIMER_SHIFT 16 881 - #define ID_PFR1_VIRTUALIZATION_SHIFT 12 882 - #define ID_PFR1_MPROGMOD_SHIFT 8 883 - #define ID_PFR1_SECURITY_SHIFT 4 884 - #define ID_PFR1_PROGMOD_SHIFT 0 885 739 886 740 #if defined(CONFIG_ARM64_4K_PAGES) 887 - #define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN4_SHIFT 888 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN4_SUPPORTED_MIN 889 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_TGRAN4_SUPPORTED_MAX 890 - #define ID_AA64MMFR0_TGRAN_2_SHIFT ID_AA64MMFR0_TGRAN4_2_SHIFT 741 + #define ID_AA64MMFR0_EL1_TGRAN_SHIFT ID_AA64MMFR0_EL1_TGRAN4_SHIFT 742 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN 743 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MAX 744 + #define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT ID_AA64MMFR0_EL1_TGRAN4_2_SHIFT 891 745 #elif defined(CONFIG_ARM64_16K_PAGES) 892 - #define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN16_SHIFT 893 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN16_SUPPORTED_MIN 894 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_TGRAN16_SUPPORTED_MAX 895 - #define ID_AA64MMFR0_TGRAN_2_SHIFT ID_AA64MMFR0_TGRAN16_2_SHIFT 746 + #define ID_AA64MMFR0_EL1_TGRAN_SHIFT ID_AA64MMFR0_EL1_TGRAN16_SHIFT 747 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MIN 748 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MAX 749 + #define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT ID_AA64MMFR0_EL1_TGRAN16_2_SHIFT 896 750 #elif defined(CONFIG_ARM64_64K_PAGES) 897 - #define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN64_SHIFT 898 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN64_SUPPORTED_MIN 899 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_TGRAN64_SUPPORTED_MAX 900 - #define ID_AA64MMFR0_TGRAN_2_SHIFT ID_AA64MMFR0_TGRAN64_2_SHIFT 751 + #define ID_AA64MMFR0_EL1_TGRAN_SHIFT ID_AA64MMFR0_EL1_TGRAN64_SHIFT 752 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MIN 753 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MAX 754 + #define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT ID_AA64MMFR0_EL1_TGRAN64_2_SHIFT 901 755 #endif 902 756 903 - #define MVFR2_FPMISC_SHIFT 4 904 - #define MVFR2_SIMDMISC_SHIFT 0 757 + #define CPACR_EL1_FPEN_EL1EN (BIT(20)) /* enable EL1 access */ 758 + #define CPACR_EL1_FPEN_EL0EN (BIT(21)) /* enable EL0 access, if EL1EN set */ 905 759 906 - #define DCZID_DZP_SHIFT 4 907 - #define DCZID_BS_SHIFT 0 908 - 909 - /* 910 - * The ZCR_ELx_LEN_* definitions intentionally include bits [8:4] which 911 - * are reserved by the SVE architecture for future expansion of the LEN 912 - * field, with compatible semantics. 
913 - */ 914 - #define ZCR_ELx_LEN_SHIFT 0 915 - #define ZCR_ELx_LEN_SIZE 9 916 - #define ZCR_ELx_LEN_MASK 0x1ff 760 + #define CPACR_EL1_SMEN_EL1EN (BIT(24)) /* enable EL1 access */ 761 + #define CPACR_EL1_SMEN_EL0EN (BIT(25)) /* enable EL0 access, if EL1EN set */ 917 762 918 763 #define CPACR_EL1_ZEN_EL1EN (BIT(16)) /* enable EL1 access */ 919 764 #define CPACR_EL1_ZEN_EL0EN (BIT(17)) /* enable EL0 access, if EL1EN set */ 920 - #define CPACR_EL1_ZEN (CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN) 921 - 922 - /* TCR EL1 Bit Definitions */ 923 - #define SYS_TCR_EL1_TCMA1 (BIT(58)) 924 - #define SYS_TCR_EL1_TCMA0 (BIT(57)) 925 765 926 766 /* GCR_EL1 Definitions */ 927 767 #define SYS_GCR_EL1_RRND (BIT(16)) 928 768 #define SYS_GCR_EL1_EXCL_MASK 0xffffUL 929 769 770 + #define KERNEL_GCR_EL1 (SYS_GCR_EL1_RRND | KERNEL_GCR_EL1_EXCL) 771 + 930 772 /* RGSR_EL1 Definitions */ 931 773 #define SYS_RGSR_EL1_TAG_MASK 0xfUL 932 774 #define SYS_RGSR_EL1_SEED_SHIFT 8 933 775 #define SYS_RGSR_EL1_SEED_MASK 0xffffUL 934 - 935 - /* GMID_EL1 field definitions */ 936 - #define SYS_GMID_EL1_BS_SHIFT 0 937 - #define SYS_GMID_EL1_BS_SIZE 4 938 776 939 777 /* TFSR{,E0}_EL1 bit definitions */ 940 778 #define SYS_TFSR_EL1_TF0_SHIFT 0 ··· 627 1103 #define SYS_MPIDR_SAFE_VAL (BIT(31)) 628 1104 629 1105 #define TRFCR_ELx_TS_SHIFT 5 1106 + #define TRFCR_ELx_TS_MASK ((0x3UL) << TRFCR_ELx_TS_SHIFT) 630 1107 #define TRFCR_ELx_TS_VIRTUAL ((0x1UL) << TRFCR_ELx_TS_SHIFT) 631 1108 #define TRFCR_ELx_TS_GUEST_PHYSICAL ((0x2UL) << TRFCR_ELx_TS_SHIFT) 632 1109 #define TRFCR_ELx_TS_PHYSICAL ((0x3UL) << TRFCR_ELx_TS_SHIFT) 633 1110 #define TRFCR_EL2_CX BIT(3) 634 1111 #define TRFCR_ELx_ExTRE BIT(1) 635 1112 #define TRFCR_ELx_E0TRE BIT(0) 636 - 637 1113 638 1114 /* GIC Hypervisor interface registers */ 639 1115 /* ICH_MISR_EL2 bit definitions */ ··· 661 1137 #define ICH_HCR_TC (1 << 10) 662 1138 #define ICH_HCR_TALL0 (1 << 11) 663 1139 #define ICH_HCR_TALL1 (1 << 12) 1140 + #define ICH_HCR_TDIR (1 << 14) 664 1141 #define ICH_HCR_EOIcount_SHIFT 27 665 1142 #define ICH_HCR_EOIcount_MASK (0x1f << ICH_HCR_EOIcount_SHIFT) 666 1143 ··· 694 1169 #define ICH_VTR_SEIS_MASK (1 << ICH_VTR_SEIS_SHIFT) 695 1170 #define ICH_VTR_A3V_SHIFT 21 696 1171 #define ICH_VTR_A3V_MASK (1 << ICH_VTR_A3V_SHIFT) 1172 + #define ICH_VTR_TDS_SHIFT 19 1173 + #define ICH_VTR_TDS_MASK (1 << ICH_VTR_TDS_SHIFT) 1174 + 1175 + /* 1176 + * Permission Indirection Extension (PIE) permission encodings. 1177 + * Encodings with the _O suffix, have overlays applied (Permission Overlay Extension). 1178 + */ 1179 + #define PIE_NONE_O 0x0 1180 + #define PIE_R_O 0x1 1181 + #define PIE_X_O 0x2 1182 + #define PIE_RX_O 0x3 1183 + #define PIE_RW_O 0x5 1184 + #define PIE_RWnX_O 0x6 1185 + #define PIE_RWX_O 0x7 1186 + #define PIE_R 0x8 1187 + #define PIE_GCS 0x9 1188 + #define PIE_RX 0xa 1189 + #define PIE_RW 0xc 1190 + #define PIE_RWX 0xe 1191 + 1192 + #define PIRx_ELx_PERM(idx, perm) ((perm) << ((idx) * 4)) 697 1193 698 1194 #define ARM64_FEATURE_FIELD_BITS 4 699 1195 700 - /* Create a mask for the feature bits of the specified feature. */ 701 - #define ARM64_FEATURE_MASK(x) (GENMASK_ULL(x##_SHIFT + ARM64_FEATURE_FIELD_BITS - 1, x##_SHIFT)) 1196 + /* Defined for compatibility only, do not add new users. 
*/ 1197 + #define ARM64_FEATURE_MASK(x) (x##_MASK) 702 1198 703 1199 #ifdef __ASSEMBLY__ 704 1200 705 - .irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30 706 - .equ .L__reg_num_x\num, \num 707 - .endr 708 - .equ .L__reg_num_xzr, 31 709 - 710 1201 .macro mrs_s, rt, sreg 711 - __emit_inst(0xd5200000|(\sreg)|(.L__reg_num_\rt)) 1202 + __emit_inst(0xd5200000|(\sreg)|(.L__gpr_num_\rt)) 712 1203 .endm 713 1204 714 1205 .macro msr_s, sreg, rt 715 - __emit_inst(0xd5000000|(\sreg)|(.L__reg_num_\rt)) 1206 + __emit_inst(0xd5000000|(\sreg)|(.L__gpr_num_\rt)) 716 1207 .endm 717 1208 718 1209 #else 719 1210 1211 + #include <linux/bitfield.h> 720 1212 #include <linux/build_bug.h> 721 1213 #include <linux/types.h> 722 1214 #include <asm/alternative.h> 723 1215 724 - #define __DEFINE_MRS_MSR_S_REGNUM \ 725 - " .irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30\n" \ 726 - " .equ .L__reg_num_x\\num, \\num\n" \ 727 - " .endr\n" \ 728 - " .equ .L__reg_num_xzr, 31\n" 729 - 730 1216 #define DEFINE_MRS_S \ 731 - __DEFINE_MRS_MSR_S_REGNUM \ 1217 + __DEFINE_ASM_GPR_NUMS \ 732 1218 " .macro mrs_s, rt, sreg\n" \ 733 - __emit_inst(0xd5200000|(\\sreg)|(.L__reg_num_\\rt)) \ 1219 + __emit_inst(0xd5200000|(\\sreg)|(.L__gpr_num_\\rt)) \ 734 1220 " .endm\n" 735 1221 736 1222 #define DEFINE_MSR_S \ 737 - __DEFINE_MRS_MSR_S_REGNUM \ 1223 + __DEFINE_ASM_GPR_NUMS \ 738 1224 " .macro msr_s, sreg, rt\n" \ 739 - __emit_inst(0xd5000000|(\\sreg)|(.L__reg_num_\\rt)) \ 1225 + __emit_inst(0xd5000000|(\\sreg)|(.L__gpr_num_\\rt)) \ 740 1226 " .endm\n" 741 1227 742 1228 #define UNDEFINE_MRS_S \ ··· 826 1290 asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412)); \ 827 1291 par; \ 828 1292 }) 1293 + 1294 + #define SYS_FIELD_GET(reg, field, val) \ 1295 + FIELD_GET(reg##_##field##_MASK, val) 1296 + 1297 + #define SYS_FIELD_PREP(reg, field, val) \ 1298 + FIELD_PREP(reg##_##field##_MASK, val) 1299 + 1300 + #define SYS_FIELD_PREP_ENUM(reg, field, val) \ 1301 + FIELD_PREP(reg##_##field##_MASK, reg##_##field##_##val) 829 1302 830 1303 #endif 831 1304
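Editor's note: a small sketch of the SYS_FIELD_GET() accessor added at the end of this header, assuming the generated ID_AA64MMFR0_EL1_TGRAN4_MASK definition from sysreg-defs.h. Any register/field pair described there works the same way.

    static inline u64 example_tgran4_field(u64 mmfr0)
    {
            /* Expands to FIELD_GET(ID_AA64MMFR0_EL1_TGRAN4_MASK, mmfr0). */
            return SYS_FIELD_GET(ID_AA64MMFR0_EL1, TGRAN4, mmfr0);
    }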
+38
tools/arch/arm64/tools/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + 3 + ifeq ($(top_srcdir),) 4 + top_srcdir := $(patsubst %/,%,$(dir $(CURDIR))) 5 + top_srcdir := $(patsubst %/,%,$(dir $(top_srcdir))) 6 + top_srcdir := $(patsubst %/,%,$(dir $(top_srcdir))) 7 + top_srcdir := $(patsubst %/,%,$(dir $(top_srcdir))) 8 + endif 9 + 10 + include $(top_srcdir)/tools/scripts/Makefile.include 11 + 12 + AWK ?= awk 13 + MKDIR ?= mkdir 14 + RM ?= rm 15 + 16 + ifeq ($(V),1) 17 + Q = 18 + else 19 + Q = @ 20 + endif 21 + 22 + arm64_tools_dir = $(top_srcdir)/arch/arm64/tools 23 + arm64_sysreg_tbl = $(arm64_tools_dir)/sysreg 24 + arm64_gen_sysreg = $(arm64_tools_dir)/gen-sysreg.awk 25 + arm64_generated_dir = $(top_srcdir)/tools/arch/arm64/include/generated 26 + arm64_sysreg_defs = $(arm64_generated_dir)/asm/sysreg-defs.h 27 + 28 + all: $(arm64_sysreg_defs) 29 + @: 30 + 31 + $(arm64_sysreg_defs): $(arm64_gen_sysreg) $(arm64_sysreg_tbl) 32 + $(Q)$(MKDIR) -p $(dir $@) 33 + $(QUIET_GEN)$(AWK) -f $^ > $@ 34 + 35 + clean: 36 + $(Q)$(RM) -rf $(arm64_generated_dir) 37 + 38 + .PHONY: all clean
+308
tools/include/perf/arm_pmuv3.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (C) 2012 ARM Ltd. 4 + */ 5 + 6 + #ifndef __PERF_ARM_PMUV3_H 7 + #define __PERF_ARM_PMUV3_H 8 + 9 + #include <assert.h> 10 + #include <asm/bug.h> 11 + 12 + #define ARMV8_PMU_MAX_COUNTERS 32 13 + #define ARMV8_PMU_COUNTER_MASK (ARMV8_PMU_MAX_COUNTERS - 1) 14 + 15 + /* 16 + * Common architectural and microarchitectural event numbers. 17 + */ 18 + #define ARMV8_PMUV3_PERFCTR_SW_INCR 0x0000 19 + #define ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL 0x0001 20 + #define ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL 0x0002 21 + #define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL 0x0003 22 + #define ARMV8_PMUV3_PERFCTR_L1D_CACHE 0x0004 23 + #define ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL 0x0005 24 + #define ARMV8_PMUV3_PERFCTR_LD_RETIRED 0x0006 25 + #define ARMV8_PMUV3_PERFCTR_ST_RETIRED 0x0007 26 + #define ARMV8_PMUV3_PERFCTR_INST_RETIRED 0x0008 27 + #define ARMV8_PMUV3_PERFCTR_EXC_TAKEN 0x0009 28 + #define ARMV8_PMUV3_PERFCTR_EXC_RETURN 0x000A 29 + #define ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED 0x000B 30 + #define ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED 0x000C 31 + #define ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED 0x000D 32 + #define ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED 0x000E 33 + #define ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED 0x000F 34 + #define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED 0x0010 35 + #define ARMV8_PMUV3_PERFCTR_CPU_CYCLES 0x0011 36 + #define ARMV8_PMUV3_PERFCTR_BR_PRED 0x0012 37 + #define ARMV8_PMUV3_PERFCTR_MEM_ACCESS 0x0013 38 + #define ARMV8_PMUV3_PERFCTR_L1I_CACHE 0x0014 39 + #define ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB 0x0015 40 + #define ARMV8_PMUV3_PERFCTR_L2D_CACHE 0x0016 41 + #define ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL 0x0017 42 + #define ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB 0x0018 43 + #define ARMV8_PMUV3_PERFCTR_BUS_ACCESS 0x0019 44 + #define ARMV8_PMUV3_PERFCTR_MEMORY_ERROR 0x001A 45 + #define ARMV8_PMUV3_PERFCTR_INST_SPEC 0x001B 46 + #define ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED 0x001C 47 + #define ARMV8_PMUV3_PERFCTR_BUS_CYCLES 0x001D 48 + #define ARMV8_PMUV3_PERFCTR_CHAIN 0x001E 49 + #define ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE 0x001F 50 + #define ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE 0x0020 51 + #define ARMV8_PMUV3_PERFCTR_BR_RETIRED 0x0021 52 + #define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED_RETIRED 0x0022 53 + #define ARMV8_PMUV3_PERFCTR_STALL_FRONTEND 0x0023 54 + #define ARMV8_PMUV3_PERFCTR_STALL_BACKEND 0x0024 55 + #define ARMV8_PMUV3_PERFCTR_L1D_TLB 0x0025 56 + #define ARMV8_PMUV3_PERFCTR_L1I_TLB 0x0026 57 + #define ARMV8_PMUV3_PERFCTR_L2I_CACHE 0x0027 58 + #define ARMV8_PMUV3_PERFCTR_L2I_CACHE_REFILL 0x0028 59 + #define ARMV8_PMUV3_PERFCTR_L3D_CACHE_ALLOCATE 0x0029 60 + #define ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL 0x002A 61 + #define ARMV8_PMUV3_PERFCTR_L3D_CACHE 0x002B 62 + #define ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB 0x002C 63 + #define ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL 0x002D 64 + #define ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL 0x002E 65 + #define ARMV8_PMUV3_PERFCTR_L2D_TLB 0x002F 66 + #define ARMV8_PMUV3_PERFCTR_L2I_TLB 0x0030 67 + #define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS 0x0031 68 + #define ARMV8_PMUV3_PERFCTR_LL_CACHE 0x0032 69 + #define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS 0x0033 70 + #define ARMV8_PMUV3_PERFCTR_DTLB_WALK 0x0034 71 + #define ARMV8_PMUV3_PERFCTR_ITLB_WALK 0x0035 72 + #define ARMV8_PMUV3_PERFCTR_LL_CACHE_RD 0x0036 73 + #define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS_RD 0x0037 74 + #define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS_RD 0x0038 75 + #define ARMV8_PMUV3_PERFCTR_L1D_CACHE_LMISS_RD 0x0039 76 + #define ARMV8_PMUV3_PERFCTR_OP_RETIRED 
0x003A 77 + #define ARMV8_PMUV3_PERFCTR_OP_SPEC 0x003B 78 + #define ARMV8_PMUV3_PERFCTR_STALL 0x003C 79 + #define ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND 0x003D 80 + #define ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND 0x003E 81 + #define ARMV8_PMUV3_PERFCTR_STALL_SLOT 0x003F 82 + 83 + /* Statistical profiling extension microarchitectural events */ 84 + #define ARMV8_SPE_PERFCTR_SAMPLE_POP 0x4000 85 + #define ARMV8_SPE_PERFCTR_SAMPLE_FEED 0x4001 86 + #define ARMV8_SPE_PERFCTR_SAMPLE_FILTRATE 0x4002 87 + #define ARMV8_SPE_PERFCTR_SAMPLE_COLLISION 0x4003 88 + 89 + /* AMUv1 architecture events */ 90 + #define ARMV8_AMU_PERFCTR_CNT_CYCLES 0x4004 91 + #define ARMV8_AMU_PERFCTR_STALL_BACKEND_MEM 0x4005 92 + 93 + /* long-latency read miss events */ 94 + #define ARMV8_PMUV3_PERFCTR_L1I_CACHE_LMISS 0x4006 95 + #define ARMV8_PMUV3_PERFCTR_L2D_CACHE_LMISS_RD 0x4009 96 + #define ARMV8_PMUV3_PERFCTR_L2I_CACHE_LMISS 0x400A 97 + #define ARMV8_PMUV3_PERFCTR_L3D_CACHE_LMISS_RD 0x400B 98 + 99 + /* Trace buffer events */ 100 + #define ARMV8_PMUV3_PERFCTR_TRB_WRAP 0x400C 101 + #define ARMV8_PMUV3_PERFCTR_TRB_TRIG 0x400E 102 + 103 + /* Trace unit events */ 104 + #define ARMV8_PMUV3_PERFCTR_TRCEXTOUT0 0x4010 105 + #define ARMV8_PMUV3_PERFCTR_TRCEXTOUT1 0x4011 106 + #define ARMV8_PMUV3_PERFCTR_TRCEXTOUT2 0x4012 107 + #define ARMV8_PMUV3_PERFCTR_TRCEXTOUT3 0x4013 108 + #define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT4 0x4018 109 + #define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT5 0x4019 110 + #define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT6 0x401A 111 + #define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT7 0x401B 112 + 113 + /* additional latency from alignment events */ 114 + #define ARMV8_PMUV3_PERFCTR_LDST_ALIGN_LAT 0x4020 115 + #define ARMV8_PMUV3_PERFCTR_LD_ALIGN_LAT 0x4021 116 + #define ARMV8_PMUV3_PERFCTR_ST_ALIGN_LAT 0x4022 117 + 118 + /* Armv8.5 Memory Tagging Extension events */ 119 + #define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED 0x4024 120 + #define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_RD 0x4025 121 + #define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_WR 0x4026 122 + 123 + /* ARMv8 recommended implementation defined event types */ 124 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD 0x0040 125 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR 0x0041 126 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD 0x0042 127 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR 0x0043 128 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_INNER 0x0044 129 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_OUTER 0x0045 130 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_VICTIM 0x0046 131 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_CLEAN 0x0047 132 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_INVAL 0x0048 133 + 134 + #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD 0x004C 135 + #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR 0x004D 136 + #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD 0x004E 137 + #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR 0x004F 138 + #define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_RD 0x0050 139 + #define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WR 0x0051 140 + #define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_RD 0x0052 141 + #define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_WR 0x0053 142 + 143 + #define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_VICTIM 0x0056 144 + #define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_CLEAN 0x0057 145 + #define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_INVAL 0x0058 146 + 147 + #define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_RD 0x005C 148 + #define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_WR 0x005D 149 + #define ARMV8_IMPDEF_PERFCTR_L2D_TLB_RD 0x005E 150 + #define ARMV8_IMPDEF_PERFCTR_L2D_TLB_WR 0x005F 151 + #define 
ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD 0x0060 152 + #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR 0x0061 153 + #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_SHARED 0x0062 154 + #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NOT_SHARED 0x0063 155 + #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NORMAL 0x0064 156 + #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_PERIPH 0x0065 157 + #define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_RD 0x0066 158 + #define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_WR 0x0067 159 + #define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LD_SPEC 0x0068 160 + #define ARMV8_IMPDEF_PERFCTR_UNALIGNED_ST_SPEC 0x0069 161 + #define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LDST_SPEC 0x006A 162 + 163 + #define ARMV8_IMPDEF_PERFCTR_LDREX_SPEC 0x006C 164 + #define ARMV8_IMPDEF_PERFCTR_STREX_PASS_SPEC 0x006D 165 + #define ARMV8_IMPDEF_PERFCTR_STREX_FAIL_SPEC 0x006E 166 + #define ARMV8_IMPDEF_PERFCTR_STREX_SPEC 0x006F 167 + #define ARMV8_IMPDEF_PERFCTR_LD_SPEC 0x0070 168 + #define ARMV8_IMPDEF_PERFCTR_ST_SPEC 0x0071 169 + #define ARMV8_IMPDEF_PERFCTR_LDST_SPEC 0x0072 170 + #define ARMV8_IMPDEF_PERFCTR_DP_SPEC 0x0073 171 + #define ARMV8_IMPDEF_PERFCTR_ASE_SPEC 0x0074 172 + #define ARMV8_IMPDEF_PERFCTR_VFP_SPEC 0x0075 173 + #define ARMV8_IMPDEF_PERFCTR_PC_WRITE_SPEC 0x0076 174 + #define ARMV8_IMPDEF_PERFCTR_CRYPTO_SPEC 0x0077 175 + #define ARMV8_IMPDEF_PERFCTR_BR_IMMED_SPEC 0x0078 176 + #define ARMV8_IMPDEF_PERFCTR_BR_RETURN_SPEC 0x0079 177 + #define ARMV8_IMPDEF_PERFCTR_BR_INDIRECT_SPEC 0x007A 178 + 179 + #define ARMV8_IMPDEF_PERFCTR_ISB_SPEC 0x007C 180 + #define ARMV8_IMPDEF_PERFCTR_DSB_SPEC 0x007D 181 + #define ARMV8_IMPDEF_PERFCTR_DMB_SPEC 0x007E 182 + 183 + #define ARMV8_IMPDEF_PERFCTR_EXC_UNDEF 0x0081 184 + #define ARMV8_IMPDEF_PERFCTR_EXC_SVC 0x0082 185 + #define ARMV8_IMPDEF_PERFCTR_EXC_PABORT 0x0083 186 + #define ARMV8_IMPDEF_PERFCTR_EXC_DABORT 0x0084 187 + 188 + #define ARMV8_IMPDEF_PERFCTR_EXC_IRQ 0x0086 189 + #define ARMV8_IMPDEF_PERFCTR_EXC_FIQ 0x0087 190 + #define ARMV8_IMPDEF_PERFCTR_EXC_SMC 0x0088 191 + 192 + #define ARMV8_IMPDEF_PERFCTR_EXC_HVC 0x008A 193 + #define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_PABORT 0x008B 194 + #define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_DABORT 0x008C 195 + #define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_OTHER 0x008D 196 + #define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_IRQ 0x008E 197 + #define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_FIQ 0x008F 198 + #define ARMV8_IMPDEF_PERFCTR_RC_LD_SPEC 0x0090 199 + #define ARMV8_IMPDEF_PERFCTR_RC_ST_SPEC 0x0091 200 + 201 + #define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_RD 0x00A0 202 + #define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WR 0x00A1 203 + #define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_RD 0x00A2 204 + #define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_WR 0x00A3 205 + 206 + #define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_VICTIM 0x00A6 207 + #define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_CLEAN 0x00A7 208 + #define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_INVAL 0x00A8 209 + 210 + /* 211 + * Per-CPU PMCR: config reg 212 + */ 213 + #define ARMV8_PMU_PMCR_E (1 << 0) /* Enable all counters */ 214 + #define ARMV8_PMU_PMCR_P (1 << 1) /* Reset all counters */ 215 + #define ARMV8_PMU_PMCR_C (1 << 2) /* Cycle counter reset */ 216 + #define ARMV8_PMU_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */ 217 + #define ARMV8_PMU_PMCR_X (1 << 4) /* Export to ETM */ 218 + #define ARMV8_PMU_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/ 219 + #define ARMV8_PMU_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */ 220 + #define ARMV8_PMU_PMCR_LP (1 << 7) /* Long event counter enable */ 221 + #define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */ 
222 + #define ARMV8_PMU_PMCR_N_MASK 0x1f 223 + #define ARMV8_PMU_PMCR_MASK 0xff /* Mask for writable bits */ 224 + 225 + /* 226 + * PMOVSR: counters overflow flag status reg 227 + */ 228 + #define ARMV8_PMU_OVSR_MASK 0xffffffff /* Mask for writable bits */ 229 + #define ARMV8_PMU_OVERFLOWED_MASK ARMV8_PMU_OVSR_MASK 230 + 231 + /* 232 + * PMXEVTYPER: Event selection reg 233 + */ 234 + #define ARMV8_PMU_EVTYPE_MASK 0xc800ffff /* Mask for writable bits */ 235 + #define ARMV8_PMU_EVTYPE_EVENT 0xffff /* Mask for EVENT bits */ 236 + 237 + /* 238 + * Event filters for PMUv3 239 + */ 240 + #define ARMV8_PMU_EXCLUDE_EL1 (1U << 31) 241 + #define ARMV8_PMU_EXCLUDE_EL0 (1U << 30) 242 + #define ARMV8_PMU_INCLUDE_EL2 (1U << 27) 243 + 244 + /* 245 + * PMUSERENR: user enable reg 246 + */ 247 + #define ARMV8_PMU_USERENR_MASK 0xf /* Mask for writable bits */ 248 + #define ARMV8_PMU_USERENR_EN (1 << 0) /* PMU regs can be accessed at EL0 */ 249 + #define ARMV8_PMU_USERENR_SW (1 << 1) /* PMSWINC can be written at EL0 */ 250 + #define ARMV8_PMU_USERENR_CR (1 << 2) /* Cycle counter can be read at EL0 */ 251 + #define ARMV8_PMU_USERENR_ER (1 << 3) /* Event counter can be read at EL0 */ 252 + 253 + /* PMMIR_EL1.SLOTS mask */ 254 + #define ARMV8_PMU_SLOTS_MASK 0xff 255 + 256 + #define ARMV8_PMU_BUS_SLOTS_SHIFT 8 257 + #define ARMV8_PMU_BUS_SLOTS_MASK 0xff 258 + #define ARMV8_PMU_BUS_WIDTH_SHIFT 16 259 + #define ARMV8_PMU_BUS_WIDTH_MASK 0xf 260 + 261 + /* 262 + * This code is really good 263 + */ 264 + 265 + #define PMEVN_CASE(n, case_macro) \ 266 + case n: case_macro(n); break 267 + 268 + #define PMEVN_SWITCH(x, case_macro) \ 269 + do { \ 270 + switch (x) { \ 271 + PMEVN_CASE(0, case_macro); \ 272 + PMEVN_CASE(1, case_macro); \ 273 + PMEVN_CASE(2, case_macro); \ 274 + PMEVN_CASE(3, case_macro); \ 275 + PMEVN_CASE(4, case_macro); \ 276 + PMEVN_CASE(5, case_macro); \ 277 + PMEVN_CASE(6, case_macro); \ 278 + PMEVN_CASE(7, case_macro); \ 279 + PMEVN_CASE(8, case_macro); \ 280 + PMEVN_CASE(9, case_macro); \ 281 + PMEVN_CASE(10, case_macro); \ 282 + PMEVN_CASE(11, case_macro); \ 283 + PMEVN_CASE(12, case_macro); \ 284 + PMEVN_CASE(13, case_macro); \ 285 + PMEVN_CASE(14, case_macro); \ 286 + PMEVN_CASE(15, case_macro); \ 287 + PMEVN_CASE(16, case_macro); \ 288 + PMEVN_CASE(17, case_macro); \ 289 + PMEVN_CASE(18, case_macro); \ 290 + PMEVN_CASE(19, case_macro); \ 291 + PMEVN_CASE(20, case_macro); \ 292 + PMEVN_CASE(21, case_macro); \ 293 + PMEVN_CASE(22, case_macro); \ 294 + PMEVN_CASE(23, case_macro); \ 295 + PMEVN_CASE(24, case_macro); \ 296 + PMEVN_CASE(25, case_macro); \ 297 + PMEVN_CASE(26, case_macro); \ 298 + PMEVN_CASE(27, case_macro); \ 299 + PMEVN_CASE(28, case_macro); \ 300 + PMEVN_CASE(29, case_macro); \ 301 + PMEVN_CASE(30, case_macro); \ 302 + default: \ 303 + WARN(1, "Invalid PMEV* index\n"); \ 304 + assert(0); \ 305 + } \ 306 + } while (0) 307 + 308 + #endif
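As an aside, here is a minimal sketch (not part of the diff) of how the PMCR and event-type constants above are typically combined in guest code; the helper names are illustrative only and assume this header is included.

	#include <stdint.h>

	/* Illustrative helpers only; rely on the ARMV8_PMU_* constants above. */
	static inline unsigned int pmcr_num_event_counters(uint64_t pmcr)
	{
		/* PMCR_EL0.N (number of event counters) lives at bits [15:11] */
		return (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
	}

	static inline uint64_t pmu_evtyper_el0_only(uint16_t event)
	{
		/* Count 'event' at EL0 only by filtering out EL1 */
		return ARMV8_PMU_EXCLUDE_EL1 | (event & ARMV8_PMU_EVTYPE_EVENT);
	}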
+13 -2
tools/perf/Makefile.perf
··· 443 443 # Create output directory if not already present 444 444 _dummy := $(shell [ -d '$(beauty_ioctl_outdir)' ] || mkdir -p '$(beauty_ioctl_outdir)') 445 445 446 + arm64_gen_sysreg_dir := $(srctree)/tools/arch/arm64/tools 447 + 448 + arm64-sysreg-defs: FORCE 449 + $(Q)$(MAKE) -C $(arm64_gen_sysreg_dir) 450 + 451 + arm64-sysreg-defs-clean: 452 + $(call QUIET_CLEAN,arm64-sysreg-defs) 453 + $(Q)$(MAKE) -C $(arm64_gen_sysreg_dir) clean > /dev/null 454 + 446 455 $(drm_ioctl_array): $(drm_hdr_dir)/drm.h $(drm_hdr_dir)/i915_drm.h $(drm_ioctl_tbl) 447 456 $(Q)$(SHELL) '$(drm_ioctl_tbl)' $(drm_hdr_dir) > $@ 448 457 ··· 725 716 __build-dir = $(subst $(OUTPUT),,$(dir $@)) 726 717 build-dir = $(or $(__build-dir),.) 727 718 728 - prepare: $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)common-cmds.h archheaders $(drm_ioctl_array) \ 719 + prepare: $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)common-cmds.h archheaders \ 720 + arm64-sysreg-defs \ 721 + $(drm_ioctl_array) \ 729 722 $(fadvise_advice_array) \ 730 723 $(fsconfig_arrays) \ 731 724 $(fsmount_arrays) \ ··· 1136 1125 bpf-skel-clean: 1137 1126 $(call QUIET_CLEAN, bpf-skel) $(RM) -r $(SKEL_TMP_OUT) $(SKELETONS) 1138 1127 1139 - clean:: $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clean $(LIBSYMBOL)-clean $(LIBPERF)-clean fixdep-clean python-clean bpf-skel-clean tests-coresight-targets-clean 1128 + clean:: $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clean $(LIBSYMBOL)-clean $(LIBPERF)-clean arm64-sysreg-defs-clean fixdep-clean python-clean bpf-skel-clean tests-coresight-targets-clean 1140 1129 $(call QUIET_CLEAN, core-objs) $(RM) $(LIBPERF_A) $(OUTPUT)perf-archive $(OUTPUT)perf-iostat $(LANG_BINDINGS) 1141 1130 $(Q)find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.cmd' -delete -o -name '\.*.d' -delete 1142 1131 $(Q)$(RM) $(OUTPUT).config-detected
+1 -1
tools/perf/util/Build
··· 345 345 CFLAGS_libstring.o += -Wno-unused-parameter -DETC_PERFCONFIG="BUILD_STR($(ETC_PERFCONFIG_SQ))" 346 346 CFLAGS_hweight.o += -Wno-unused-parameter -DETC_PERFCONFIG="BUILD_STR($(ETC_PERFCONFIG_SQ))" 347 347 CFLAGS_header.o += -include $(OUTPUT)PERF-VERSION-FILE 348 - CFLAGS_arm-spe.o += -I$(srctree)/tools/arch/arm64/include/ 348 + CFLAGS_arm-spe.o += -I$(srctree)/tools/arch/arm64/include/ -I$(srctree)/tools/arch/arm64/include/generated/ 349 349 350 350 $(OUTPUT)util/argv_split.o: ../lib/argv_split.c FORCE 351 351 $(call rule_mkdir)
+21 -3
tools/testing/selftests/kvm/Makefile
··· 17 17 ARCH_DIR := $(ARCH) 18 18 endif 19 19 20 + ifeq ($(ARCH),arm64) 21 + arm64_tools_dir := $(top_srcdir)/tools/arch/arm64/tools/ 22 + GEN_HDRS := $(top_srcdir)/tools/arch/arm64/include/generated/ 23 + CFLAGS += -I$(GEN_HDRS) 24 + 25 + $(GEN_HDRS): $(wildcard $(arm64_tools_dir)/*) 26 + $(MAKE) -C $(arm64_tools_dir) 27 + endif 28 + 20 29 LIBKVM += lib/assert.c 21 30 LIBKVM += lib/elf.c 22 31 LIBKVM += lib/guest_modes.c ··· 155 146 TEST_GEN_PROGS_aarch64 += aarch64/hypercalls 156 147 TEST_GEN_PROGS_aarch64 += aarch64/page_fault_test 157 148 TEST_GEN_PROGS_aarch64 += aarch64/psci_test 149 + TEST_GEN_PROGS_aarch64 += aarch64/set_id_regs 158 150 TEST_GEN_PROGS_aarch64 += aarch64/smccc_filter 159 151 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config 160 152 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init 161 153 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq 154 + TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access 162 155 TEST_GEN_PROGS_aarch64 += access_tracking_perf_test 163 156 TEST_GEN_PROGS_aarch64 += demand_paging_test 164 157 TEST_GEN_PROGS_aarch64 += dirty_log_test ··· 268 257 $(SPLIT_TESTS_TARGETS): %: %.o $(SPLIT_TESTS_OBJS) 269 258 $(CC) $(CFLAGS) $(CPPFLAGS) $(LDFLAGS) $(TARGET_ARCH) $^ $(LDLIBS) -o $@ 270 259 271 - EXTRA_CLEAN += $(LIBKVM_OBJS) $(TEST_DEP_FILES) $(TEST_GEN_OBJ) $(SPLIT_TESTS_OBJS) cscope.* 260 + EXTRA_CLEAN += $(GEN_HDRS) \ 261 + $(LIBKVM_OBJS) \ 262 + $(SPLIT_TESTS_OBJS) \ 263 + $(TEST_DEP_FILES) \ 264 + $(TEST_GEN_OBJ) \ 265 + cscope.* 272 266 273 267 x := $(shell mkdir -p $(sort $(dir $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ)))) 274 - $(LIBKVM_C_OBJ): $(OUTPUT)/%.o: %.c 268 + $(LIBKVM_C_OBJ): $(OUTPUT)/%.o: %.c $(GEN_HDRS) 275 269 $(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c $< -o $@ 276 270 277 - $(LIBKVM_S_OBJ): $(OUTPUT)/%.o: %.S 271 + $(LIBKVM_S_OBJ): $(OUTPUT)/%.o: %.S $(GEN_HDRS) 278 272 $(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c $< -o $@ 279 273 280 274 # Compile the string overrides as freestanding to prevent the compiler from ··· 289 273 $(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c -ffreestanding $< -o $@ 290 274 291 275 x := $(shell mkdir -p $(sort $(dir $(TEST_GEN_PROGS)))) 276 + $(SPLIT_TESTS_OBJS): $(GEN_HDRS) 292 277 $(TEST_GEN_PROGS): $(LIBKVM_OBJS) 293 278 $(TEST_GEN_PROGS_EXTENDED): $(LIBKVM_OBJS) 279 + $(TEST_GEN_OBJ): $(GEN_HDRS) 294 280 295 281 cscope: include_paths = $(LINUX_TOOL_INCLUDE) $(LINUX_HDR_PATH) include lib .. 296 282 cscope:
+2 -2
tools/testing/selftests/kvm/aarch64/aarch32_id_regs.c
··· 146 146 147 147 vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), &val); 148 148 149 - el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0), val); 150 - return el0 == ID_AA64PFR0_ELx_64BIT_ONLY; 149 + el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), val); 150 + return el0 == ID_AA64PFR0_EL1_ELx_64BIT_ONLY; 151 151 } 152 152 153 153 int main(void)
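The FIELD_GET(ARM64_FEATURE_MASK(...)) idiom above, now spelled with the generated ID_*_EL1 field names, recurs in every selftest touched below; a generic sketch of the pattern, assuming the selftest processor.h and linux/bitfield.h are available:

	#include <linux/bitfield.h>

	/* Sketch: does EL0 support AArch32? (inverse of the check in the diff) */
	static inline bool el0_has_aarch32(uint64_t pfr0)
	{
		uint64_t el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), pfr0);

		return el0 != ID_AA64PFR0_EL1_ELx_64BIT_ONLY;
	}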
+6 -6
tools/testing/selftests/kvm/aarch64/debug-exceptions.c
··· 116 116 117 117 /* Reset all bcr/bvr/wcr/wvr registers */ 118 118 dfr0 = read_sysreg(id_aa64dfr0_el1); 119 - brps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_BRPS), dfr0); 119 + brps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_BRPs), dfr0); 120 120 for (i = 0; i <= brps; i++) { 121 121 write_dbgbcr(i, 0); 122 122 write_dbgbvr(i, 0); 123 123 } 124 - wrps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_WRPS), dfr0); 124 + wrps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_WRPs), dfr0); 125 125 for (i = 0; i <= wrps; i++) { 126 126 write_dbgwcr(i, 0); 127 127 write_dbgwvr(i, 0); ··· 418 418 419 419 static int debug_version(uint64_t id_aa64dfr0) 420 420 { 421 - return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER), id_aa64dfr0); 421 + return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), id_aa64dfr0); 422 422 } 423 423 424 424 static void test_guest_debug_exceptions(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn) ··· 539 539 int b, w, c; 540 540 541 541 /* Number of breakpoints */ 542 - brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_BRPS), aa64dfr0) + 1; 542 + brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_BRPs), aa64dfr0) + 1; 543 543 __TEST_REQUIRE(brp_num >= 2, "At least two breakpoints are required"); 544 544 545 545 /* Number of watchpoints */ 546 - wrp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_WRPS), aa64dfr0) + 1; 546 + wrp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_WRPs), aa64dfr0) + 1; 547 547 548 548 /* Number of context aware breakpoints */ 549 - ctx_brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_CTX_CMPS), aa64dfr0) + 1; 549 + ctx_brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_CTX_CMPs), aa64dfr0) + 1; 550 550 551 551 pr_debug("%s brp_num:%d, wrp_num:%d, ctx_brp_num:%d\n", __func__, 552 552 brp_num, wrp_num, ctx_brp_num);
+7 -4
tools/testing/selftests/kvm/aarch64/page_fault_test.c
··· 96 96 uint64_t isar0 = read_sysreg(id_aa64isar0_el1); 97 97 uint64_t atomic; 98 98 99 - atomic = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR0_ATOMICS), isar0); 99 + atomic = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_ATOMIC), isar0); 100 100 return atomic >= 2; 101 101 } 102 102 103 103 static bool guest_check_dc_zva(void) 104 104 { 105 105 uint64_t dczid = read_sysreg(dczid_el0); 106 - uint64_t dzp = FIELD_GET(ARM64_FEATURE_MASK(DCZID_DZP), dczid); 106 + uint64_t dzp = FIELD_GET(ARM64_FEATURE_MASK(DCZID_EL0_DZP), dczid); 107 107 108 108 return dzp == 0; 109 109 } ··· 135 135 uint64_t par; 136 136 137 137 asm volatile("at s1e1r, %0" :: "r" (guest_test_memory)); 138 - par = read_sysreg(par_el1); 139 138 isb(); 139 + par = read_sysreg(par_el1); 140 140 141 141 /* Bit 1 indicates whether the AT was successful */ 142 142 GUEST_ASSERT_EQ(par & 1, 0); ··· 196 196 uint64_t hadbs, tcr; 197 197 198 198 /* Skip if HA is not supported. */ 199 - hadbs = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_HADBS), mmfr1); 199 + hadbs = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_HAFDBS), mmfr1); 200 200 if (hadbs == 0) 201 201 return false; 202 202 ··· 842 842 .name = SCAT2(ro_memslot_no_syndrome, _access), \ 843 843 .data_memslot_flags = KVM_MEM_READONLY, \ 844 844 .pt_memslot_flags = KVM_MEM_READONLY, \ 845 + .guest_prepare = { _PREPARE(_access) }, \ 845 846 .guest_test = _access, \ 846 847 .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ 847 848 .expected_events = { .fail_vcpu_runs = 1 }, \ ··· 866 865 .name = SCAT2(ro_memslot_no_syn_and_dlog, _access), \ 867 866 .data_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ 868 867 .pt_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ 868 + .guest_prepare = { _PREPARE(_access) }, \ 869 869 .guest_test = _access, \ 870 870 .guest_test_check = { _test_check }, \ 871 871 .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ ··· 896 894 .data_memslot_flags = KVM_MEM_READONLY, \ 897 895 .pt_memslot_flags = KVM_MEM_READONLY, \ 898 896 .mem_mark_cmd = CMD_HOLE_DATA | CMD_HOLE_PT, \ 897 + .guest_prepare = { _PREPARE(_access) }, \ 899 898 .guest_test = _access, \ 900 899 .uffd_data_handler = _uffd_data_handler, \ 901 900 .uffd_pt_handler = uffd_pt_handler, \
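The one-line reorder above is subtle: PAR_EL1 is only guaranteed to hold the result of the address translation instruction after a context synchronization event, so the ISB must sit between the AT and the PAR_EL1 read. A sketch of the intended sequence (guest-side, reusing the selftests' read_sysreg()/isb() helpers as an assumption):

	/* Sketch: stage-1 translate a VA from EL1 and test PAR_EL1.F (bit 0). */
	static bool stage1_read_translation_ok(uint64_t va)
	{
		uint64_t par;

		asm volatile("at s1e1r, %0" :: "r" (va));
		isb();			/* complete the AT before sampling PAR_EL1 */
		par = read_sysreg(par_el1);

		return (par & 1) == 0;	/* F == 0: translation succeeded */
	}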
+481
tools/testing/selftests/kvm/aarch64/set_id_regs.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * set_id_regs - Test for setting ID register from usersapce. 4 + * 5 + * Copyright (c) 2023 Google LLC. 6 + * 7 + * 8 + * Test that KVM supports setting ID registers from userspace and handles the 9 + * feature set correctly. 10 + */ 11 + 12 + #include <stdint.h> 13 + #include "kvm_util.h" 14 + #include "processor.h" 15 + #include "test_util.h" 16 + #include <linux/bitfield.h> 17 + 18 + enum ftr_type { 19 + FTR_EXACT, /* Use a predefined safe value */ 20 + FTR_LOWER_SAFE, /* Smaller value is safe */ 21 + FTR_HIGHER_SAFE, /* Bigger value is safe */ 22 + FTR_HIGHER_OR_ZERO_SAFE, /* Bigger value is safe, but 0 is biggest */ 23 + FTR_END, /* Mark the last ftr bits */ 24 + }; 25 + 26 + #define FTR_SIGNED true /* Value should be treated as signed */ 27 + #define FTR_UNSIGNED false /* Value should be treated as unsigned */ 28 + 29 + struct reg_ftr_bits { 30 + char *name; 31 + bool sign; 32 + enum ftr_type type; 33 + uint8_t shift; 34 + uint64_t mask; 35 + int64_t safe_val; 36 + }; 37 + 38 + struct test_feature_reg { 39 + uint32_t reg; 40 + const struct reg_ftr_bits *ftr_bits; 41 + }; 42 + 43 + #define __REG_FTR_BITS(NAME, SIGNED, TYPE, SHIFT, MASK, SAFE_VAL) \ 44 + { \ 45 + .name = #NAME, \ 46 + .sign = SIGNED, \ 47 + .type = TYPE, \ 48 + .shift = SHIFT, \ 49 + .mask = MASK, \ 50 + .safe_val = SAFE_VAL, \ 51 + } 52 + 53 + #define REG_FTR_BITS(type, reg, field, safe_val) \ 54 + __REG_FTR_BITS(reg##_##field, FTR_UNSIGNED, type, reg##_##field##_SHIFT, \ 55 + reg##_##field##_MASK, safe_val) 56 + 57 + #define S_REG_FTR_BITS(type, reg, field, safe_val) \ 58 + __REG_FTR_BITS(reg##_##field, FTR_SIGNED, type, reg##_##field##_SHIFT, \ 59 + reg##_##field##_MASK, safe_val) 60 + 61 + #define REG_FTR_END \ 62 + { \ 63 + .type = FTR_END, \ 64 + } 65 + 66 + static const struct reg_ftr_bits ftr_id_aa64dfr0_el1[] = { 67 + S_REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64DFR0_EL1, PMUVer, 0), 68 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64DFR0_EL1, DebugVer, 0), 69 + REG_FTR_END, 70 + }; 71 + 72 + static const struct reg_ftr_bits ftr_id_dfr0_el1[] = { 73 + S_REG_FTR_BITS(FTR_LOWER_SAFE, ID_DFR0_EL1, PerfMon, 0), 74 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_DFR0_EL1, CopDbg, 0), 75 + REG_FTR_END, 76 + }; 77 + 78 + static const struct reg_ftr_bits ftr_id_aa64isar0_el1[] = { 79 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, RNDR, 0), 80 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, TLB, 0), 81 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, TS, 0), 82 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, FHM, 0), 83 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, DP, 0), 84 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SM4, 0), 85 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SM3, 0), 86 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SHA3, 0), 87 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, RDM, 0), 88 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, TME, 0), 89 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, ATOMIC, 0), 90 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, CRC32, 0), 91 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SHA2, 0), 92 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SHA1, 0), 93 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, AES, 0), 94 + REG_FTR_END, 95 + }; 96 + 97 + static const struct reg_ftr_bits ftr_id_aa64isar1_el1[] = { 98 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, LS64, 0), 99 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, XS, 0), 100 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, I8MM, 0), 101 + 
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, DGH, 0), 102 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, BF16, 0), 103 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, SPECRES, 0), 104 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, SB, 0), 105 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, FRINTTS, 0), 106 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, LRCPC, 0), 107 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, FCMA, 0), 108 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, JSCVT, 0), 109 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, DPB, 0), 110 + REG_FTR_END, 111 + }; 112 + 113 + static const struct reg_ftr_bits ftr_id_aa64isar2_el1[] = { 114 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR2_EL1, BC, 0), 115 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR2_EL1, RPRES, 0), 116 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR2_EL1, WFxT, 0), 117 + REG_FTR_END, 118 + }; 119 + 120 + static const struct reg_ftr_bits ftr_id_aa64pfr0_el1[] = { 121 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, CSV3, 0), 122 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, CSV2, 0), 123 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, DIT, 0), 124 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, SEL2, 0), 125 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL3, 0), 126 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL2, 0), 127 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL1, 0), 128 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL0, 0), 129 + REG_FTR_END, 130 + }; 131 + 132 + static const struct reg_ftr_bits ftr_id_aa64mmfr0_el1[] = { 133 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, ECV, 0), 134 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, EXS, 0), 135 + S_REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, TGRAN4, 0), 136 + S_REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, TGRAN64, 0), 137 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, TGRAN16, 0), 138 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, BIGENDEL0, 0), 139 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, SNSMEM, 0), 140 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, BIGEND, 0), 141 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, ASIDBITS, 0), 142 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, PARANGE, 0), 143 + REG_FTR_END, 144 + }; 145 + 146 + static const struct reg_ftr_bits ftr_id_aa64mmfr1_el1[] = { 147 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, TIDCP1, 0), 148 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, AFP, 0), 149 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, ETS, 0), 150 + REG_FTR_BITS(FTR_HIGHER_SAFE, ID_AA64MMFR1_EL1, SpecSEI, 0), 151 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, PAN, 0), 152 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, LO, 0), 153 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, HPDS, 0), 154 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, HAFDBS, 0), 155 + REG_FTR_END, 156 + }; 157 + 158 + static const struct reg_ftr_bits ftr_id_aa64mmfr2_el1[] = { 159 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, E0PD, 0), 160 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, BBM, 0), 161 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, TTL, 0), 162 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, AT, 0), 163 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, ST, 0), 164 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, VARange, 0), 165 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, IESB, 0), 166 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, LSM, 0), 167 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, UAO, 0), 168 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, CnP, 0), 169 + REG_FTR_END, 170 + 
}; 171 + 172 + static const struct reg_ftr_bits ftr_id_aa64zfr0_el1[] = { 173 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, F64MM, 0), 174 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, F32MM, 0), 175 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, I8MM, 0), 176 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, SM4, 0), 177 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, SHA3, 0), 178 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, BF16, 0), 179 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, BitPerm, 0), 180 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, AES, 0), 181 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, SVEver, 0), 182 + REG_FTR_END, 183 + }; 184 + 185 + #define TEST_REG(id, table) \ 186 + { \ 187 + .reg = id, \ 188 + .ftr_bits = &((table)[0]), \ 189 + } 190 + 191 + static struct test_feature_reg test_regs[] = { 192 + TEST_REG(SYS_ID_AA64DFR0_EL1, ftr_id_aa64dfr0_el1), 193 + TEST_REG(SYS_ID_DFR0_EL1, ftr_id_dfr0_el1), 194 + TEST_REG(SYS_ID_AA64ISAR0_EL1, ftr_id_aa64isar0_el1), 195 + TEST_REG(SYS_ID_AA64ISAR1_EL1, ftr_id_aa64isar1_el1), 196 + TEST_REG(SYS_ID_AA64ISAR2_EL1, ftr_id_aa64isar2_el1), 197 + TEST_REG(SYS_ID_AA64PFR0_EL1, ftr_id_aa64pfr0_el1), 198 + TEST_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0_el1), 199 + TEST_REG(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1_el1), 200 + TEST_REG(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2_el1), 201 + TEST_REG(SYS_ID_AA64ZFR0_EL1, ftr_id_aa64zfr0_el1), 202 + }; 203 + 204 + #define GUEST_REG_SYNC(id) GUEST_SYNC_ARGS(0, id, read_sysreg_s(id), 0, 0); 205 + 206 + static void guest_code(void) 207 + { 208 + GUEST_REG_SYNC(SYS_ID_AA64DFR0_EL1); 209 + GUEST_REG_SYNC(SYS_ID_DFR0_EL1); 210 + GUEST_REG_SYNC(SYS_ID_AA64ISAR0_EL1); 211 + GUEST_REG_SYNC(SYS_ID_AA64ISAR1_EL1); 212 + GUEST_REG_SYNC(SYS_ID_AA64ISAR2_EL1); 213 + GUEST_REG_SYNC(SYS_ID_AA64PFR0_EL1); 214 + GUEST_REG_SYNC(SYS_ID_AA64MMFR0_EL1); 215 + GUEST_REG_SYNC(SYS_ID_AA64MMFR1_EL1); 216 + GUEST_REG_SYNC(SYS_ID_AA64MMFR2_EL1); 217 + GUEST_REG_SYNC(SYS_ID_AA64ZFR0_EL1); 218 + 219 + GUEST_DONE(); 220 + } 221 + 222 + /* Return a safe value for a given ftr_bits and ftr value */ 223 + uint64_t get_safe_value(const struct reg_ftr_bits *ftr_bits, uint64_t ftr) 224 + { 225 + uint64_t ftr_max = GENMASK_ULL(ARM64_FEATURE_FIELD_BITS - 1, 0); 226 + 227 + if (ftr_bits->sign == FTR_UNSIGNED) { 228 + switch (ftr_bits->type) { 229 + case FTR_EXACT: 230 + ftr = ftr_bits->safe_val; 231 + break; 232 + case FTR_LOWER_SAFE: 233 + if (ftr > 0) 234 + ftr--; 235 + break; 236 + case FTR_HIGHER_SAFE: 237 + if (ftr < ftr_max) 238 + ftr++; 239 + break; 240 + case FTR_HIGHER_OR_ZERO_SAFE: 241 + if (ftr == ftr_max) 242 + ftr = 0; 243 + else if (ftr != 0) 244 + ftr++; 245 + break; 246 + default: 247 + break; 248 + } 249 + } else if (ftr != ftr_max) { 250 + switch (ftr_bits->type) { 251 + case FTR_EXACT: 252 + ftr = ftr_bits->safe_val; 253 + break; 254 + case FTR_LOWER_SAFE: 255 + if (ftr > 0) 256 + ftr--; 257 + break; 258 + case FTR_HIGHER_SAFE: 259 + if (ftr < ftr_max - 1) 260 + ftr++; 261 + break; 262 + case FTR_HIGHER_OR_ZERO_SAFE: 263 + if (ftr != 0 && ftr != ftr_max - 1) 264 + ftr++; 265 + break; 266 + default: 267 + break; 268 + } 269 + } 270 + 271 + return ftr; 272 + } 273 + 274 + /* Return an invalid value for a given ftr_bits and ftr value */ 275 + uint64_t get_invalid_value(const struct reg_ftr_bits *ftr_bits, uint64_t ftr) 276 + { 277 + uint64_t ftr_max = GENMASK_ULL(ARM64_FEATURE_FIELD_BITS - 1, 0); 278 + 279 + if (ftr_bits->sign == FTR_UNSIGNED) { 280 + switch (ftr_bits->type) { 281 + case FTR_EXACT: 282 + ftr =
max((uint64_t)ftr_bits->safe_val + 1, ftr + 1); 283 + break; 284 + case FTR_LOWER_SAFE: 285 + ftr++; 286 + break; 287 + case FTR_HIGHER_SAFE: 288 + ftr--; 289 + break; 290 + case FTR_HIGHER_OR_ZERO_SAFE: 291 + if (ftr == 0) 292 + ftr = ftr_max; 293 + else 294 + ftr--; 295 + break; 296 + default: 297 + break; 298 + } 299 + } else if (ftr != ftr_max) { 300 + switch (ftr_bits->type) { 301 + case FTR_EXACT: 302 + ftr = max((uint64_t)ftr_bits->safe_val + 1, ftr + 1); 303 + break; 304 + case FTR_LOWER_SAFE: 305 + ftr++; 306 + break; 307 + case FTR_HIGHER_SAFE: 308 + ftr--; 309 + break; 310 + case FTR_HIGHER_OR_ZERO_SAFE: 311 + if (ftr == 0) 312 + ftr = ftr_max - 1; 313 + else 314 + ftr--; 315 + break; 316 + default: 317 + break; 318 + } 319 + } else { 320 + ftr = 0; 321 + } 322 + 323 + return ftr; 324 + } 325 + 326 + static void test_reg_set_success(struct kvm_vcpu *vcpu, uint64_t reg, 327 + const struct reg_ftr_bits *ftr_bits) 328 + { 329 + uint8_t shift = ftr_bits->shift; 330 + uint64_t mask = ftr_bits->mask; 331 + uint64_t val, new_val, ftr; 332 + 333 + vcpu_get_reg(vcpu, reg, &val); 334 + ftr = (val & mask) >> shift; 335 + 336 + ftr = get_safe_value(ftr_bits, ftr); 337 + 338 + ftr <<= shift; 339 + val &= ~mask; 340 + val |= ftr; 341 + 342 + vcpu_set_reg(vcpu, reg, val); 343 + vcpu_get_reg(vcpu, reg, &new_val); 344 + TEST_ASSERT_EQ(new_val, val); 345 + } 346 + 347 + static void test_reg_set_fail(struct kvm_vcpu *vcpu, uint64_t reg, 348 + const struct reg_ftr_bits *ftr_bits) 349 + { 350 + uint8_t shift = ftr_bits->shift; 351 + uint64_t mask = ftr_bits->mask; 352 + uint64_t val, old_val, ftr; 353 + int r; 354 + 355 + vcpu_get_reg(vcpu, reg, &val); 356 + ftr = (val & mask) >> shift; 357 + 358 + ftr = get_invalid_value(ftr_bits, ftr); 359 + 360 + old_val = val; 361 + ftr <<= shift; 362 + val &= ~mask; 363 + val |= ftr; 364 + 365 + r = __vcpu_set_reg(vcpu, reg, val); 366 + TEST_ASSERT(r < 0 && errno == EINVAL, 367 + "Unexpected KVM_SET_ONE_REG error: r=%d, errno=%d", r, errno); 368 + 369 + vcpu_get_reg(vcpu, reg, &val); 370 + TEST_ASSERT_EQ(val, old_val); 371 + } 372 + 373 + static void test_user_set_reg(struct kvm_vcpu *vcpu, bool aarch64_only) 374 + { 375 + uint64_t masks[KVM_ARM_FEATURE_ID_RANGE_SIZE]; 376 + struct reg_mask_range range = { 377 + .addr = (__u64)masks, 378 + }; 379 + int ret; 380 + 381 + /* KVM should return error when reserved field is not zero */ 382 + range.reserved[0] = 1; 383 + ret = __vm_ioctl(vcpu->vm, KVM_ARM_GET_REG_WRITABLE_MASKS, &range); 384 + TEST_ASSERT(ret, "KVM doesn't check invalid parameters."); 385 + 386 + /* Get writable masks for feature ID registers */ 387 + memset(range.reserved, 0, sizeof(range.reserved)); 388 + vm_ioctl(vcpu->vm, KVM_ARM_GET_REG_WRITABLE_MASKS, &range); 389 + 390 + for (int i = 0; i < ARRAY_SIZE(test_regs); i++) { 391 + const struct reg_ftr_bits *ftr_bits = test_regs[i].ftr_bits; 392 + uint32_t reg_id = test_regs[i].reg; 393 + uint64_t reg = KVM_ARM64_SYS_REG(reg_id); 394 + int idx; 395 + 396 + /* Get the index to masks array for the idreg */ 397 + idx = KVM_ARM_FEATURE_ID_RANGE_IDX(sys_reg_Op0(reg_id), sys_reg_Op1(reg_id), 398 + sys_reg_CRn(reg_id), sys_reg_CRm(reg_id), 399 + sys_reg_Op2(reg_id)); 400 + 401 + for (int j = 0; ftr_bits[j].type != FTR_END; j++) { 402 + /* Skip aarch32 reg on aarch64 only system, since they are RAZ/WI. 
*/ 403 + if (aarch64_only && sys_reg_CRm(reg_id) < 4) { 404 + ksft_test_result_skip("%s on AARCH64 only system\n", 405 + ftr_bits[j].name); 406 + continue; 407 + } 408 + 409 + /* Make sure the feature field is writable */ 410 + TEST_ASSERT_EQ(masks[idx] & ftr_bits[j].mask, ftr_bits[j].mask); 411 + 412 + test_reg_set_fail(vcpu, reg, &ftr_bits[j]); 413 + test_reg_set_success(vcpu, reg, &ftr_bits[j]); 414 + 415 + ksft_test_result_pass("%s\n", ftr_bits[j].name); 416 + } 417 + } 418 + } 419 + 420 + static void test_guest_reg_read(struct kvm_vcpu *vcpu) 421 + { 422 + bool done = false; 423 + struct ucall uc; 424 + uint64_t val; 425 + 426 + while (!done) { 427 + vcpu_run(vcpu); 428 + 429 + switch (get_ucall(vcpu, &uc)) { 430 + case UCALL_ABORT: 431 + REPORT_GUEST_ASSERT(uc); 432 + break; 433 + case UCALL_SYNC: 434 + /* Make sure the written values are seen by guest */ 435 + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(uc.args[2]), &val); 436 + TEST_ASSERT_EQ(val, uc.args[3]); 437 + break; 438 + case UCALL_DONE: 439 + done = true; 440 + break; 441 + default: 442 + TEST_FAIL("Unexpected ucall: %lu", uc.cmd); 443 + } 444 + } 445 + } 446 + 447 + int main(void) 448 + { 449 + struct kvm_vcpu *vcpu; 450 + struct kvm_vm *vm; 451 + bool aarch64_only; 452 + uint64_t val, el0; 453 + int ftr_cnt; 454 + 455 + TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES)); 456 + 457 + vm = vm_create_with_one_vcpu(&vcpu, guest_code); 458 + 459 + /* Check for AARCH64 only system */ 460 + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), &val); 461 + el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), val); 462 + aarch64_only = (el0 == ID_AA64PFR0_EL1_ELx_64BIT_ONLY); 463 + 464 + ksft_print_header(); 465 + 466 + ftr_cnt = ARRAY_SIZE(ftr_id_aa64dfr0_el1) + ARRAY_SIZE(ftr_id_dfr0_el1) + 467 + ARRAY_SIZE(ftr_id_aa64isar0_el1) + ARRAY_SIZE(ftr_id_aa64isar1_el1) + 468 + ARRAY_SIZE(ftr_id_aa64isar2_el1) + ARRAY_SIZE(ftr_id_aa64pfr0_el1) + 469 + ARRAY_SIZE(ftr_id_aa64mmfr0_el1) + ARRAY_SIZE(ftr_id_aa64mmfr1_el1) + 470 + ARRAY_SIZE(ftr_id_aa64mmfr2_el1) + ARRAY_SIZE(ftr_id_aa64zfr0_el1) - 471 + ARRAY_SIZE(test_regs); 472 + 473 + ksft_set_plan(ftr_cnt); 474 + 475 + test_user_set_reg(vcpu, aarch64_only); 476 + test_guest_reg_read(vcpu); 477 + 478 + kvm_vm_free(vm); 479 + 480 + ksft_finished(); 481 + }
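Outside the selftest harness, the same KVM_ARM_GET_REG_WRITABLE_MASKS query looks roughly like this; a minimal sketch assuming a <linux/kvm.h> that carries the new uapi definitions and an already-created VM fd:

	#include <stdint.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Sketch: fetch the writable masks for the Feature ID register range. */
	static int query_feature_id_masks(int vm_fd,
					  uint64_t masks[KVM_ARM_FEATURE_ID_RANGE_SIZE])
	{
		struct reg_mask_range range;

		memset(&range, 0, sizeof(range));	/* reserved[] must be zero */
		range.addr = (uint64_t)masks;
		range.range = KVM_ARM_FEATURE_ID_RANGE;

		return ioctl(vm_fd, KVM_ARM_GET_REG_WRITABLE_MASKS, &range);
	}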
+670
tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * vpmu_counter_access - Test vPMU event counter access 4 + * 5 + * Copyright (c) 2023 Google LLC. 6 + * 7 + * This test checks if the guest can see the same number of the PMU event 8 + * counters (PMCR_EL0.N) that userspace sets, if the guest can access 9 + * those counters, and if the guest is prevented from accessing any 10 + * other counters. 11 + * It also checks if the userspace accesses to the PMU regsisters honor the 12 + * PMCR.N value that's set for the guest. 13 + * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host. 14 + */ 15 + #include <kvm_util.h> 16 + #include <processor.h> 17 + #include <test_util.h> 18 + #include <vgic.h> 19 + #include <perf/arm_pmuv3.h> 20 + #include <linux/bitfield.h> 21 + 22 + /* The max number of the PMU event counters (excluding the cycle counter) */ 23 + #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1) 24 + 25 + /* The cycle counter bit position that's common among the PMU registers */ 26 + #define ARMV8_PMU_CYCLE_IDX 31 27 + 28 + struct vpmu_vm { 29 + struct kvm_vm *vm; 30 + struct kvm_vcpu *vcpu; 31 + int gic_fd; 32 + }; 33 + 34 + static struct vpmu_vm vpmu_vm; 35 + 36 + struct pmreg_sets { 37 + uint64_t set_reg_id; 38 + uint64_t clr_reg_id; 39 + }; 40 + 41 + #define PMREG_SET(set, clr) {.set_reg_id = set, .clr_reg_id = clr} 42 + 43 + static uint64_t get_pmcr_n(uint64_t pmcr) 44 + { 45 + return (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK; 46 + } 47 + 48 + static void set_pmcr_n(uint64_t *pmcr, uint64_t pmcr_n) 49 + { 50 + *pmcr = *pmcr & ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT); 51 + *pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT); 52 + } 53 + 54 + static uint64_t get_counters_mask(uint64_t n) 55 + { 56 + uint64_t mask = BIT(ARMV8_PMU_CYCLE_IDX); 57 + 58 + if (n) 59 + mask |= GENMASK(n - 1, 0); 60 + return mask; 61 + } 62 + 63 + /* Read PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */ 64 + static inline unsigned long read_sel_evcntr(int sel) 65 + { 66 + write_sysreg(sel, pmselr_el0); 67 + isb(); 68 + return read_sysreg(pmxevcntr_el0); 69 + } 70 + 71 + /* Write PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */ 72 + static inline void write_sel_evcntr(int sel, unsigned long val) 73 + { 74 + write_sysreg(sel, pmselr_el0); 75 + isb(); 76 + write_sysreg(val, pmxevcntr_el0); 77 + isb(); 78 + } 79 + 80 + /* Read PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */ 81 + static inline unsigned long read_sel_evtyper(int sel) 82 + { 83 + write_sysreg(sel, pmselr_el0); 84 + isb(); 85 + return read_sysreg(pmxevtyper_el0); 86 + } 87 + 88 + /* Write PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */ 89 + static inline void write_sel_evtyper(int sel, unsigned long val) 90 + { 91 + write_sysreg(sel, pmselr_el0); 92 + isb(); 93 + write_sysreg(val, pmxevtyper_el0); 94 + isb(); 95 + } 96 + 97 + static inline void enable_counter(int idx) 98 + { 99 + uint64_t v = read_sysreg(pmcntenset_el0); 100 + 101 + write_sysreg(BIT(idx) | v, pmcntenset_el0); 102 + isb(); 103 + } 104 + 105 + static inline void disable_counter(int idx) 106 + { 107 + uint64_t v = read_sysreg(pmcntenset_el0); 108 + 109 + write_sysreg(BIT(idx) | v, pmcntenclr_el0); 110 + isb(); 111 + } 112 + 113 + static void pmu_disable_reset(void) 114 + { 115 + uint64_t pmcr = read_sysreg(pmcr_el0); 116 + 117 + /* Reset all counters, disabling them */ 118 + pmcr &= ~ARMV8_PMU_PMCR_E; 119 + write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0); 120 + isb(); 121 + } 122 + 123 + #define RETURN_READ_PMEVCNTRN(n) \ 124 + return 
read_sysreg(pmevcntr##n##_el0) 125 + static unsigned long read_pmevcntrn(int n) 126 + { 127 + PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN); 128 + return 0; 129 + } 130 + 131 + #define WRITE_PMEVCNTRN(n) \ 132 + write_sysreg(val, pmevcntr##n##_el0) 133 + static void write_pmevcntrn(int n, unsigned long val) 134 + { 135 + PMEVN_SWITCH(n, WRITE_PMEVCNTRN); 136 + isb(); 137 + } 138 + 139 + #define READ_PMEVTYPERN(n) \ 140 + return read_sysreg(pmevtyper##n##_el0) 141 + static unsigned long read_pmevtypern(int n) 142 + { 143 + PMEVN_SWITCH(n, READ_PMEVTYPERN); 144 + return 0; 145 + } 146 + 147 + #define WRITE_PMEVTYPERN(n) \ 148 + write_sysreg(val, pmevtyper##n##_el0) 149 + static void write_pmevtypern(int n, unsigned long val) 150 + { 151 + PMEVN_SWITCH(n, WRITE_PMEVTYPERN); 152 + isb(); 153 + } 154 + 155 + /* 156 + * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}<n>_EL0 157 + * accessors that test cases will use. Each of the accessors will 158 + * either directly reads/writes PMEV{CNTR,TYPER}<n>_EL0 159 + * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through 160 + * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()). 161 + * 162 + * This is used to test that combinations of those accessors provide 163 + * the consistent behavior. 164 + */ 165 + struct pmc_accessor { 166 + /* A function to be used to read PMEVTCNTR<n>_EL0 */ 167 + unsigned long (*read_cntr)(int idx); 168 + /* A function to be used to write PMEVTCNTR<n>_EL0 */ 169 + void (*write_cntr)(int idx, unsigned long val); 170 + /* A function to be used to read PMEVTYPER<n>_EL0 */ 171 + unsigned long (*read_typer)(int idx); 172 + /* A function to be used to write PMEVTYPER<n>_EL0 */ 173 + void (*write_typer)(int idx, unsigned long val); 174 + }; 175 + 176 + struct pmc_accessor pmc_accessors[] = { 177 + /* test with all direct accesses */ 178 + { read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern }, 179 + /* test with all indirect accesses */ 180 + { read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper }, 181 + /* read with direct accesses, and write with indirect accesses */ 182 + { read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper }, 183 + /* read with indirect accesses, and write with direct accesses */ 184 + { read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern }, 185 + }; 186 + 187 + /* 188 + * Convert a pointer of pmc_accessor to an index in pmc_accessors[], 189 + * assuming that the pointer is one of the entries in pmc_accessors[]. 190 + */ 191 + #define PMC_ACC_TO_IDX(acc) (acc - &pmc_accessors[0]) 192 + 193 + #define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected) \ 194 + { \ 195 + uint64_t _tval = read_sysreg(regname); \ 196 + \ 197 + if (set_expected) \ 198 + __GUEST_ASSERT((_tval & mask), \ 199 + "tval: 0x%lx; mask: 0x%lx; set_expected: 0x%lx", \ 200 + _tval, mask, set_expected); \ 201 + else \ 202 + __GUEST_ASSERT(!(_tval & mask), \ 203 + "tval: 0x%lx; mask: 0x%lx; set_expected: 0x%lx", \ 204 + _tval, mask, set_expected); \ 205 + } 206 + 207 + /* 208 + * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers 209 + * are set or cleared as specified in @set_expected. 
210 + */ 211 + static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected) 212 + { 213 + GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected); 214 + GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected); 215 + GUEST_ASSERT_BITMAP_REG(pmintenset_el1, mask, set_expected); 216 + GUEST_ASSERT_BITMAP_REG(pmintenclr_el1, mask, set_expected); 217 + GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected); 218 + GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected); 219 + } 220 + 221 + /* 222 + * Check if the bit in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers corresponding 223 + * to the specified counter (@pmc_idx) can be read/written as expected. 224 + * When @set_op is true, it tries to set the bit for the counter in 225 + * those registers by writing the SET registers (the bit won't be set 226 + * if the counter is not implemented though). 227 + * Otherwise, it tries to clear the bits in the registers by writing 228 + * the CLR registers. 229 + * Then, it checks if the values indicated in the registers are as expected. 230 + */ 231 + static void test_bitmap_pmu_regs(int pmc_idx, bool set_op) 232 + { 233 + uint64_t pmcr_n, test_bit = BIT(pmc_idx); 234 + bool set_expected = false; 235 + 236 + if (set_op) { 237 + write_sysreg(test_bit, pmcntenset_el0); 238 + write_sysreg(test_bit, pmintenset_el1); 239 + write_sysreg(test_bit, pmovsset_el0); 240 + 241 + /* The bit will be set only if the counter is implemented */ 242 + pmcr_n = get_pmcr_n(read_sysreg(pmcr_el0)); 243 + set_expected = (pmc_idx < pmcr_n) ? true : false; 244 + } else { 245 + write_sysreg(test_bit, pmcntenclr_el0); 246 + write_sysreg(test_bit, pmintenclr_el1); 247 + write_sysreg(test_bit, pmovsclr_el0); 248 + } 249 + check_bitmap_pmu_regs(test_bit, set_expected); 250 + } 251 + 252 + /* 253 + * Tests for reading/writing registers for the (implemented) event counter 254 + * specified by @pmc_idx. 255 + */ 256 + static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx) 257 + { 258 + uint64_t write_data, read_data; 259 + 260 + /* Disable all PMCs and reset all PMCs to zero. */ 261 + pmu_disable_reset(); 262 + 263 + /* 264 + * Tests for reading/writing {PMCNTEN,PMINTEN,PMOVS}{SET,CLR}_EL1. 265 + */ 266 + 267 + /* Make sure that the bit in those registers are set to 0 */ 268 + test_bitmap_pmu_regs(pmc_idx, false); 269 + /* Test if setting the bit in those registers works */ 270 + test_bitmap_pmu_regs(pmc_idx, true); 271 + /* Test if clearing the bit in those registers works */ 272 + test_bitmap_pmu_regs(pmc_idx, false); 273 + 274 + /* 275 + * Tests for reading/writing the event type register. 276 + */ 277 + 278 + /* 279 + * Set the event type register to an arbitrary value just for testing 280 + * of reading/writing the register. 281 + * Arm ARM says that for the event from 0x0000 to 0x003F, 282 + * the value indicated in the PMEVTYPER<n>_EL0.evtCount field is 283 + * the value written to the field even when the specified event 284 + * is not supported. 285 + */ 286 + write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED); 287 + acc->write_typer(pmc_idx, write_data); 288 + read_data = acc->read_typer(pmc_idx); 289 + __GUEST_ASSERT(read_data == write_data, 290 + "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx; write_data: 0x%lx", 291 + pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data); 292 + 293 + /* 294 + * Tests for reading/writing the event count register. 
295 + */ 296 + 297 + read_data = acc->read_cntr(pmc_idx); 298 + 299 + /* The count value must be 0, as it is disabled and reset */ 300 + __GUEST_ASSERT(read_data == 0, 301 + "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx", 302 + pmc_idx, PMC_ACC_TO_IDX(acc), read_data); 303 + 304 + write_data = read_data + pmc_idx + 0x12345; 305 + acc->write_cntr(pmc_idx, write_data); 306 + read_data = acc->read_cntr(pmc_idx); 307 + __GUEST_ASSERT(read_data == write_data, 308 + "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx; write_data: 0x%lx", 309 + pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data); 310 + } 311 + 312 + #define INVALID_EC (-1ul) 313 + uint64_t expected_ec = INVALID_EC; 314 + 315 + static void guest_sync_handler(struct ex_regs *regs) 316 + { 317 + uint64_t esr, ec; 318 + 319 + esr = read_sysreg(esr_el1); 320 + ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK; 321 + 322 + __GUEST_ASSERT(expected_ec == ec, 323 + "PC: 0x%lx; ESR: 0x%lx; EC: 0x%lx; EC expected: 0x%lx", 324 + regs->pc, esr, ec, expected_ec); 325 + 326 + /* skip the trapping instruction */ 327 + regs->pc += 4; 328 + 329 + /* Use INVALID_EC to indicate an exception occurred */ 330 + expected_ec = INVALID_EC; 331 + } 332 + 333 + /* 334 + * Run the given operation that should trigger an exception with the 335 + * given exception class. The exception handler (guest_sync_handler) 336 + * will reset op_end_addr to 0, expected_ec to INVALID_EC, and skip 337 + * the instruction that trapped. 338 + */ 339 + #define TEST_EXCEPTION(ec, ops) \ 340 + ({ \ 341 + GUEST_ASSERT(ec != INVALID_EC); \ 342 + WRITE_ONCE(expected_ec, ec); \ 343 + dsb(ish); \ 344 + ops; \ 345 + GUEST_ASSERT(expected_ec == INVALID_EC); \ 346 + }) 347 + 348 + /* 349 + * Tests for reading/writing registers for the unimplemented event counter 350 + * specified by @pmc_idx (>= PMCR_EL0.N). 351 + */ 352 + static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx) 353 + { 354 + /* 355 + * Reading/writing the event count/type registers should cause 356 + * an UNDEFINED exception. 357 + */ 358 + TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_cntr(pmc_idx)); 359 + TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0)); 360 + TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_typer(pmc_idx)); 361 + TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_typer(pmc_idx, 0)); 362 + /* 363 + * The bit corresponding to the (unimplemented) counter in 364 + * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers should be RAZ. 365 + */ 366 + test_bitmap_pmu_regs(pmc_idx, 1); 367 + test_bitmap_pmu_regs(pmc_idx, 0); 368 + } 369 + 370 + /* 371 + * The guest is configured with PMUv3 with @expected_pmcr_n number of 372 + * event counters. 373 + * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and 374 + * if reading/writing PMU registers for implemented or unimplemented 375 + * counters works as expected. 
376 + */ 377 + static void guest_code(uint64_t expected_pmcr_n) 378 + { 379 + uint64_t pmcr, pmcr_n, unimp_mask; 380 + int i, pmc; 381 + 382 + __GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS, 383 + "Expected PMCR.N: 0x%lx; ARMv8 general counters: 0x%lx", 384 + expected_pmcr_n, ARMV8_PMU_MAX_GENERAL_COUNTERS); 385 + 386 + pmcr = read_sysreg(pmcr_el0); 387 + pmcr_n = get_pmcr_n(pmcr); 388 + 389 + /* Make sure that PMCR_EL0.N indicates the value userspace set */ 390 + __GUEST_ASSERT(pmcr_n == expected_pmcr_n, 391 + "Expected PMCR.N: 0x%lx, PMCR.N: 0x%lx", 392 + expected_pmcr_n, pmcr_n); 393 + 394 + /* 395 + * Make sure that (RAZ) bits corresponding to unimplemented event 396 + * counters in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers are reset 397 + * to zero. 398 + * (NOTE: bits for implemented event counters are reset to UNKNOWN) 399 + */ 400 + unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n); 401 + check_bitmap_pmu_regs(unimp_mask, false); 402 + 403 + /* 404 + * Tests for reading/writing PMU registers for implemented counters. 405 + * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions. 406 + */ 407 + for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) { 408 + for (pmc = 0; pmc < pmcr_n; pmc++) 409 + test_access_pmc_regs(&pmc_accessors[i], pmc); 410 + } 411 + 412 + /* 413 + * Tests for reading/writing PMU registers for unimplemented counters. 414 + * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions. 415 + */ 416 + for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) { 417 + for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++) 418 + test_access_invalid_pmc_regs(&pmc_accessors[i], pmc); 419 + } 420 + 421 + GUEST_DONE(); 422 + } 423 + 424 + #define GICD_BASE_GPA 0x8000000ULL 425 + #define GICR_BASE_GPA 0x80A0000ULL 426 + 427 + /* Create a VM that has one vCPU with PMUv3 configured. */ 428 + static void create_vpmu_vm(void *guest_code) 429 + { 430 + struct kvm_vcpu_init init; 431 + uint8_t pmuver, ec; 432 + uint64_t dfr0, irq = 23; 433 + struct kvm_device_attr irq_attr = { 434 + .group = KVM_ARM_VCPU_PMU_V3_CTRL, 435 + .attr = KVM_ARM_VCPU_PMU_V3_IRQ, 436 + .addr = (uint64_t)&irq, 437 + }; 438 + struct kvm_device_attr init_attr = { 439 + .group = KVM_ARM_VCPU_PMU_V3_CTRL, 440 + .attr = KVM_ARM_VCPU_PMU_V3_INIT, 441 + }; 442 + 443 + /* The test creates the vpmu_vm multiple times. 
Ensure a clean state */ 444 + memset(&vpmu_vm, 0, sizeof(vpmu_vm)); 445 + 446 + vpmu_vm.vm = vm_create(1); 447 + vm_init_descriptor_tables(vpmu_vm.vm); 448 + for (ec = 0; ec < ESR_EC_NUM; ec++) { 449 + vm_install_sync_handler(vpmu_vm.vm, VECTOR_SYNC_CURRENT, ec, 450 + guest_sync_handler); 451 + } 452 + 453 + /* Create vCPU with PMUv3 */ 454 + vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init); 455 + init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3); 456 + vpmu_vm.vcpu = aarch64_vcpu_add(vpmu_vm.vm, 0, &init, guest_code); 457 + vcpu_init_descriptor_tables(vpmu_vm.vcpu); 458 + vpmu_vm.gic_fd = vgic_v3_setup(vpmu_vm.vm, 1, 64, 459 + GICD_BASE_GPA, GICR_BASE_GPA); 460 + __TEST_REQUIRE(vpmu_vm.gic_fd >= 0, 461 + "Failed to create vgic-v3, skipping"); 462 + 463 + /* Make sure that PMUv3 support is indicated in the ID register */ 464 + vcpu_get_reg(vpmu_vm.vcpu, 465 + KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0); 466 + pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), dfr0); 467 + TEST_ASSERT(pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && 468 + pmuver >= ID_AA64DFR0_EL1_PMUVer_IMP, 469 + "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver); 470 + 471 + /* Initialize vPMU */ 472 + vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &irq_attr); 473 + vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &init_attr); 474 + } 475 + 476 + static void destroy_vpmu_vm(void) 477 + { 478 + close(vpmu_vm.gic_fd); 479 + kvm_vm_free(vpmu_vm.vm); 480 + } 481 + 482 + static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n) 483 + { 484 + struct ucall uc; 485 + 486 + vcpu_args_set(vcpu, 1, pmcr_n); 487 + vcpu_run(vcpu); 488 + switch (get_ucall(vcpu, &uc)) { 489 + case UCALL_ABORT: 490 + REPORT_GUEST_ASSERT(uc); 491 + break; 492 + case UCALL_DONE: 493 + break; 494 + default: 495 + TEST_FAIL("Unknown ucall %lu", uc.cmd); 496 + break; 497 + } 498 + } 499 + 500 + static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, bool expect_fail) 501 + { 502 + struct kvm_vcpu *vcpu; 503 + uint64_t pmcr, pmcr_orig; 504 + 505 + create_vpmu_vm(guest_code); 506 + vcpu = vpmu_vm.vcpu; 507 + 508 + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig); 509 + pmcr = pmcr_orig; 510 + 511 + /* 512 + * Setting a larger value of PMCR.N should not modify the field, and 513 + * return a success. 514 + */ 515 + set_pmcr_n(&pmcr, pmcr_n); 516 + vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr); 517 + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr); 518 + 519 + if (expect_fail) 520 + TEST_ASSERT(pmcr_orig == pmcr, 521 + "PMCR.N modified by KVM to a larger value (PMCR: 0x%lx) for pmcr_n: 0x%lx\n", 522 + pmcr, pmcr_n); 523 + else 524 + TEST_ASSERT(pmcr_n == get_pmcr_n(pmcr), 525 + "Failed to update PMCR.N to %lu (received: %lu)\n", 526 + pmcr_n, get_pmcr_n(pmcr)); 527 + } 528 + 529 + /* 530 + * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n, 531 + * and run the test. 532 + */ 533 + static void run_access_test(uint64_t pmcr_n) 534 + { 535 + uint64_t sp; 536 + struct kvm_vcpu *vcpu; 537 + struct kvm_vcpu_init init; 538 + 539 + pr_debug("Test with pmcr_n %lu\n", pmcr_n); 540 + 541 + test_create_vpmu_vm_with_pmcr_n(pmcr_n, false); 542 + vcpu = vpmu_vm.vcpu; 543 + 544 + /* Save the initial sp to restore them later to run the guest again */ 545 + vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp); 546 + 547 + run_vcpu(vcpu, pmcr_n); 548 + 549 + /* 550 + * Reset and re-initialize the vCPU, and run the guest code again to 551 + * check if PMCR_EL0.N is preserved. 
552 + */ 553 + vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init); 554 + init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3); 555 + aarch64_vcpu_setup(vcpu, &init); 556 + vcpu_init_descriptor_tables(vcpu); 557 + vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp); 558 + vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code); 559 + 560 + run_vcpu(vcpu, pmcr_n); 561 + 562 + destroy_vpmu_vm(); 563 + } 564 + 565 + static struct pmreg_sets validity_check_reg_sets[] = { 566 + PMREG_SET(SYS_PMCNTENSET_EL0, SYS_PMCNTENCLR_EL0), 567 + PMREG_SET(SYS_PMINTENSET_EL1, SYS_PMINTENCLR_EL1), 568 + PMREG_SET(SYS_PMOVSSET_EL0, SYS_PMOVSCLR_EL0), 569 + }; 570 + 571 + /* 572 + * Create a VM, and check if KVM handles the userspace accesses of 573 + * the PMU register sets in @validity_check_reg_sets[] correctly. 574 + */ 575 + static void run_pmregs_validity_test(uint64_t pmcr_n) 576 + { 577 + int i; 578 + struct kvm_vcpu *vcpu; 579 + uint64_t set_reg_id, clr_reg_id, reg_val; 580 + uint64_t valid_counters_mask, max_counters_mask; 581 + 582 + test_create_vpmu_vm_with_pmcr_n(pmcr_n, false); 583 + vcpu = vpmu_vm.vcpu; 584 + 585 + valid_counters_mask = get_counters_mask(pmcr_n); 586 + max_counters_mask = get_counters_mask(ARMV8_PMU_MAX_COUNTERS); 587 + 588 + for (i = 0; i < ARRAY_SIZE(validity_check_reg_sets); i++) { 589 + set_reg_id = validity_check_reg_sets[i].set_reg_id; 590 + clr_reg_id = validity_check_reg_sets[i].clr_reg_id; 591 + 592 + /* 593 + * Test if the 'set' and 'clr' variants of the registers 594 + * are initialized based on the number of valid counters. 595 + */ 596 + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(set_reg_id), &reg_val); 597 + TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0, 598 + "Initial read of set_reg: 0x%llx has unimplemented counters enabled: 0x%lx\n", 599 + KVM_ARM64_SYS_REG(set_reg_id), reg_val); 600 + 601 + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(clr_reg_id), &reg_val); 602 + TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0, 603 + "Initial read of clr_reg: 0x%llx has unimplemented counters enabled: 0x%lx\n", 604 + KVM_ARM64_SYS_REG(clr_reg_id), reg_val); 605 + 606 + /* 607 + * Using the 'set' variant, force-set the register to the 608 + * max number of possible counters and test if KVM discards 609 + * the bits for unimplemented counters as it should. 610 + */ 611 + vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(set_reg_id), max_counters_mask); 612 + 613 + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(set_reg_id), &reg_val); 614 + TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0, 615 + "Read of set_reg: 0x%llx has unimplemented counters enabled: 0x%lx\n", 616 + KVM_ARM64_SYS_REG(set_reg_id), reg_val); 617 + 618 + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(clr_reg_id), &reg_val); 619 + TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0, 620 + "Read of clr_reg: 0x%llx has unimplemented counters enabled: 0x%lx\n", 621 + KVM_ARM64_SYS_REG(clr_reg_id), reg_val); 622 + } 623 + 624 + destroy_vpmu_vm(); 625 + } 626 + 627 + /* 628 + * Create a guest with one vCPU, and attempt to set the PMCR_EL0.N for 629 + * the vCPU to @pmcr_n, which is larger than the host value. 630 + * The attempt should fail as @pmcr_n is too big to set for the vCPU. 631 + */ 632 + static void run_error_test(uint64_t pmcr_n) 633 + { 634 + pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n); 635 + 636 + test_create_vpmu_vm_with_pmcr_n(pmcr_n, true); 637 + destroy_vpmu_vm(); 638 + } 639 + 640 + /* 641 + * Return the default number of implemented PMU event counters excluding 642 + * the cycle counter (i.e. 
PMCR_EL0.N value) for the guest. 643 + */ 644 + static uint64_t get_pmcr_n_limit(void) 645 + { 646 + uint64_t pmcr; 647 + 648 + create_vpmu_vm(guest_code); 649 + vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr); 650 + destroy_vpmu_vm(); 651 + return get_pmcr_n(pmcr); 652 + } 653 + 654 + int main(void) 655 + { 656 + uint64_t i, pmcr_n; 657 + 658 + TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3)); 659 + 660 + pmcr_n = get_pmcr_n_limit(); 661 + for (i = 0; i <= pmcr_n; i++) { 662 + run_access_test(i); 663 + run_pmregs_validity_test(i); 664 + } 665 + 666 + for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++) 667 + run_error_test(i); 668 + 669 + return 0; 670 + }
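For a VMM, the flow this test exercises boils down to: create the vCPU with the PMUv3 feature and initialize the PMU, then rewrite PMCR_EL0.N through the ONE_REG interface before running the guest. A compressed sketch using the selftest helpers from this file (illustrative, not a drop-in VMM implementation):

	/* Sketch: cap the guest's PMU at 'n' event counters, keeping other PMCR bits. */
	static void limit_guest_pmcs(struct kvm_vcpu *vcpu, uint64_t n)
	{
		uint64_t pmcr;

		vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
		set_pmcr_n(&pmcr, n);		/* rewrite only the PMCR_EL0.N field */
		vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
	}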
+1
tools/testing/selftests/kvm/include/aarch64/processor.h
··· 104 104 #define ESR_EC_SHIFT 26 105 105 #define ESR_EC_MASK (ESR_EC_NUM - 1) 106 106 107 + #define ESR_EC_UNKNOWN 0x0 107 108 #define ESR_EC_SVC64 0x15 108 109 #define ESR_EC_IABT 0x21 109 110 #define ESR_EC_DABT 0x25
+3 -3
tools/testing/selftests/kvm/lib/aarch64/processor.c
··· 518 518 err = ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg); 519 519 TEST_ASSERT(err == 0, KVM_IOCTL_ERROR(KVM_GET_ONE_REG, vcpu_fd)); 520 520 521 - *ps4k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN4), val) != 0xf; 522 - *ps64k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN64), val) == 0; 523 - *ps16k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN16), val) != 0; 521 + *ps4k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN4), val) != 0xf; 522 + *ps64k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN64), val) == 0; 523 + *ps16k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN16), val) != 0; 524 524 525 525 close(vcpu_fd); 526 526 close(vm_fd);