Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'kvm-arm-for-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD

KVM/ARM Changes for v4.12.

Changes include:
- Using the common sysreg definitions between KVM and arm64
- Improved hyp-stub implementation with support for kexec and kdump on the 32-bit side
- Proper PMU exception handling
- Performance improvements to our GIC handling
- Support for irqchip in userspace with in-kernel arch-timers and PMU support
- A fix for a race condition in our PSCI code

Conflicts:
Documentation/virtual/kvm/api.txt
include/uapi/linux/kvm.h

+1163 -927
+41
Documentation/virtual/kvm/api.txt
··· 4120 4120 (MWAIT/MWAITX) to stop the virtual CPU will not cause a VM exit. As such time 4121 4121 spent while virtual CPU is halted in this way will then be accounted for as 4122 4122 guest running time on the host (as opposed to e.g. HLT). 4123 + 4124 + 8.9 KVM_CAP_ARM_USER_IRQ 4125 + 4126 + Architectures: arm, arm64 4127 + This capability, if KVM_CHECK_EXTENSION indicates that it is available, means 4128 + that if userspace creates a VM without an in-kernel interrupt controller, it 4129 + will be notified of changes to the output level of in-kernel emulated devices, 4130 + which can generate virtual interrupts, presented to the VM. 4131 + For such VMs, on every return to userspace, the kernel 4132 + updates the vcpu's run->s.regs.device_irq_level field to represent the actual 4133 + output level of the device. 4134 + 4135 + Whenever kvm detects a change in the device output level, kvm guarantees at 4136 + least one return to userspace before running the VM. This exit could either 4137 + be a KVM_EXIT_INTR or any other exit event, like KVM_EXIT_MMIO. This way, 4138 + userspace can always sample the device output level and re-compute the state of 4139 + the userspace interrupt controller. Userspace should always check the state 4140 + of run->s.regs.device_irq_level on every kvm exit. 4141 + The value in run->s.regs.device_irq_level can represent both level and edge 4142 + triggered interrupt signals, depending on the device. Edge triggered interrupt 4143 + signals will exit to userspace with the bit in run->s.regs.device_irq_level 4144 + set exactly once per edge signal. 4145 + 4146 + The field run->s.regs.device_irq_level is available independent of 4147 + run->kvm_valid_regs or run->kvm_dirty_regs bits. 
4148 + 4149 + If KVM_CAP_ARM_USER_IRQ is supported, the KVM_CHECK_EXTENSION ioctl returns a 4150 + number larger than 0 indicating the version of this capability is implemented 4151 + and thereby which bits in run->s.regs.device_irq_level can signal values. 4152 + 4153 + Currently the following bits are defined for the device_irq_level bitmap: 4154 + 4155 + KVM_CAP_ARM_USER_IRQ >= 1: 4156 + 4157 + KVM_ARM_DEV_EL1_VTIMER - EL1 virtual timer 4158 + KVM_ARM_DEV_EL1_PTIMER - EL1 physical timer 4159 + KVM_ARM_DEV_PMU - ARM PMU overflow interrupt signal 4160 + 4161 + Future versions of kvm may implement additional events. These will get 4162 + indicated by returning a higher number from KVM_CHECK_EXTENSION and will be 4163 + listed above.
+53
Documentation/virtual/kvm/arm/hyp-abi.txt
··· 1 + * Internal ABI between the kernel and HYP 2 + 3 + This file documents the interaction between the Linux kernel and the 4 + hypervisor layer when running Linux as a hypervisor (for example 5 + KVM). It doesn't cover the interaction of the kernel with the 6 + hypervisor when running as a guest (under Xen, KVM or any other 7 + hypervisor), or any hypervisor-specific interaction when the kernel is 8 + used as a host. 9 + 10 + On arm and arm64 (without VHE), the kernel doesn't run in hypervisor 11 + mode, but still needs to interact with it, allowing a built-in 12 + hypervisor to be either installed or torn down. 13 + 14 + In order to achieve this, the kernel must be booted at HYP (arm) or 15 + EL2 (arm64), allowing it to install a set of stubs before dropping to 16 + SVC/EL1. These stubs are accessible by using a 'hvc #0' instruction, 17 + and only act on individual CPUs. 18 + 19 + Unless specified otherwise, any built-in hypervisor must implement 20 + these functions (see arch/arm{,64}/include/asm/virt.h): 21 + 22 + * r0/x0 = HVC_SET_VECTORS 23 + r1/x1 = vectors 24 + 25 + Set HVBAR/VBAR_EL2 to 'vectors' to enable a hypervisor. 'vectors' 26 + must be a physical address, and respect the alignment requirements 27 + of the architecture. Only implemented by the initial stubs, not by 28 + Linux hypervisors. 29 + 30 + * r0/x0 = HVC_RESET_VECTORS 31 + 32 + Turn HYP/EL2 MMU off, and reset HVBAR/VBAR_EL2 to the initial 33 + stubs' exception vector value. This effectively disables an existing 34 + hypervisor. 35 + 36 + * r0/x0 = HVC_SOFT_RESTART 37 + r1/x1 = restart address 38 + x2 = x0's value when entering the next payload (arm64) 39 + x3 = x1's value when entering the next payload (arm64) 40 + x4 = x2's value when entering the next payload (arm64) 41 + 42 + Mask all exceptions, disable the MMU, move the arguments into place 43 + (arm64 only), and jump to the restart address while at HYP/EL2. This 44 + hypercall is not expected to return to its caller. 
45 + 46 + Any other value of r0/x0 triggers a hypervisor-specific handling, 47 + which is not documented here. 48 + 49 + The return value of a stub hypercall is held by r0/x0, and is 0 on 50 + success, and HVC_STUB_ERR on error. A stub hypercall is allowed to 51 + clobber any of the caller-saved registers (x0-x18 on arm64, r0-r3 and 52 + ip on arm). It is thus recommended to use a function call to perform 53 + the hypercall.
+11 -1
arch/arm/boot/compressed/head.S
··· 422 422 cmp r0, #HYP_MODE 423 423 bne 1f 424 424 425 - bl __hyp_get_vectors 425 + /* 426 + * Compute the address of the hyp vectors after relocation. 427 + * This requires some arithmetic since we cannot directly 428 + * reference __hyp_stub_vectors in a PC-relative way. 429 + * Call __hyp_set_vectors with the new address so that we 430 + * can HVC again after the copy. 431 + */ 432 + 0: adr r0, 0b 433 + movw r1, #:lower16:__hyp_stub_vectors - 0b 434 + movt r1, #:upper16:__hyp_stub_vectors - 0b 435 + add r0, r0, r1 426 436 sub r0, r0, r5 427 437 add r0, r0, r10 428 438 bl __hyp_set_vectors
+4 -3
arch/arm/include/asm/kvm_asm.h
··· 33 33 #define ARM_EXCEPTION_IRQ 5 34 34 #define ARM_EXCEPTION_FIQ 6 35 35 #define ARM_EXCEPTION_HVC 7 36 - 36 + #define ARM_EXCEPTION_HYP_GONE HVC_STUB_ERR 37 37 /* 38 38 * The rr_lo_hi macro swaps a pair of registers depending on 39 39 * current endianness. It is used in conjunction with ldrd and strd ··· 72 72 73 73 extern void __init_stage2_translation(void); 74 74 75 - extern void __kvm_hyp_reset(unsigned long); 76 - 77 75 extern u64 __vgic_v3_get_ich_vtr_el2(void); 76 + extern u64 __vgic_v3_read_vmcr(void); 77 + extern void __vgic_v3_write_vmcr(u32 vmcr); 78 78 extern void __vgic_v3_init_lrs(void); 79 + 79 80 #endif 80 81 81 82 #endif /* __ARM_KVM_ASM_H__ */
-6
arch/arm/include/asm/kvm_host.h
··· 269 269 kvm_call_hyp(__init_stage2_translation); 270 270 } 271 271 272 - static inline void __cpu_reset_hyp_mode(unsigned long vector_ptr, 273 - phys_addr_t phys_idmap_start) 274 - { 275 - kvm_call_hyp((void *)virt_to_idmap(__kvm_hyp_reset), vector_ptr); 276 - } 277 - 278 272 static inline int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext) 279 273 { 280 274 return 0;
-1
arch/arm/include/asm/kvm_mmu.h
··· 56 56 57 57 phys_addr_t kvm_mmu_get_httbr(void); 58 58 phys_addr_t kvm_get_idmap_vector(void); 59 - phys_addr_t kvm_get_idmap_start(void); 60 59 int kvm_mmu_init(void); 61 60 void kvm_clear_hyp_idmap(void); 62 61
+2 -2
arch/arm/include/asm/proc-fns.h
··· 43 43 /* 44 44 * Special stuff for a reset 45 45 */ 46 - void (*reset)(unsigned long addr) __attribute__((noreturn)); 46 + void (*reset)(unsigned long addr, bool hvc) __attribute__((noreturn)); 47 47 /* 48 48 * Idle the processor 49 49 */ ··· 88 88 #else 89 89 extern void cpu_set_pte_ext(pte_t *ptep, pte_t pte, unsigned int ext); 90 90 #endif 91 - extern void cpu_reset(unsigned long addr) __attribute__((noreturn)); 91 + extern void cpu_reset(unsigned long addr, bool hvc) __attribute__((noreturn)); 92 92 93 93 /* These three are private to arch/arm/kernel/suspend.c */ 94 94 extern void cpu_do_suspend(void *);
+13 -1
arch/arm/include/asm/virt.h
··· 53 53 } 54 54 55 55 void __hyp_set_vectors(unsigned long phys_vector_base); 56 - unsigned long __hyp_get_vectors(void); 56 + void __hyp_reset_vectors(void); 57 57 #else 58 58 #define __boot_cpu_mode (SVC_MODE) 59 59 #define sync_boot_mode() ··· 94 94 extern char __hyp_text_end[]; 95 95 #endif 96 96 97 + #else 98 + 99 + /* Only assembly code should need those */ 100 + 101 + #define HVC_SET_VECTORS 0 102 + #define HVC_SOFT_RESTART 1 103 + #define HVC_RESET_VECTORS 2 104 + 105 + #define HVC_STUB_HCALL_NR 3 106 + 97 107 #endif /* __ASSEMBLY__ */ 108 + 109 + #define HVC_STUB_ERR 0xbadca11 98 110 99 111 #endif /* ! VIRT_H */
+2
arch/arm/include/uapi/asm/kvm.h
··· 116 116 }; 117 117 118 118 struct kvm_sync_regs { 119 + /* Used with KVM_CAP_ARM_USER_IRQ */ 120 + __u64 device_irq_level; 119 121 }; 120 122 121 123 struct kvm_arch_memory_slot {
+34 -9
arch/arm/kernel/hyp-stub.S
··· 125 125 * (see safe_svcmode_maskall). 126 126 */ 127 127 @ Now install the hypervisor stub: 128 - adr r7, __hyp_stub_vectors 128 + W(adr) r7, __hyp_stub_vectors 129 129 mcr p15, 4, r7, c12, c0, 0 @ set hypervisor vector base (HVBAR) 130 130 131 131 @ Disable all traps, so we don't get any nasty surprise ··· 202 202 ENDPROC(__hyp_stub_install_secondary) 203 203 204 204 __hyp_stub_do_trap: 205 - cmp r0, #-1 206 - mrceq p15, 4, r0, c12, c0, 0 @ get HVBAR 207 - mcrne p15, 4, r0, c12, c0, 0 @ set HVBAR 205 + teq r0, #HVC_SET_VECTORS 206 + bne 1f 207 + mcr p15, 4, r1, c12, c0, 0 @ set HVBAR 208 + b __hyp_stub_exit 209 + 210 + 1: teq r0, #HVC_SOFT_RESTART 211 + bne 1f 212 + bx r1 213 + 214 + 1: teq r0, #HVC_RESET_VECTORS 215 + beq __hyp_stub_exit 216 + 217 + ldr r0, =HVC_STUB_ERR 218 + __ERET 219 + 220 + __hyp_stub_exit: 221 + mov r0, #0 208 222 __ERET 209 223 ENDPROC(__hyp_stub_do_trap) 210 224 ··· 244 230 * so you will need to set that to something sensible at the new hypervisor's 245 231 * initialisation entry point. 246 232 */ 247 - ENTRY(__hyp_get_vectors) 248 - mov r0, #-1 249 - ENDPROC(__hyp_get_vectors) 250 - @ fall through 251 233 ENTRY(__hyp_set_vectors) 234 + mov r1, r0 235 + mov r0, #HVC_SET_VECTORS 252 236 __HVC(0) 253 237 ret lr 254 238 ENDPROC(__hyp_set_vectors) 239 + 240 + ENTRY(__hyp_soft_restart) 241 + mov r1, r0 242 + mov r0, #HVC_SOFT_RESTART 243 + __HVC(0) 244 + ret lr 245 + ENDPROC(__hyp_soft_restart) 246 + 247 + ENTRY(__hyp_reset_vectors) 248 + mov r0, #HVC_RESET_VECTORS 249 + __HVC(0) 250 + ret lr 251 + ENDPROC(__hyp_reset_vectors) 255 252 256 253 #ifndef ZIMAGE 257 254 .align 2 ··· 271 246 #endif 272 247 273 248 .align 5 274 - __hyp_stub_vectors: 249 + ENTRY(__hyp_stub_vectors) 275 250 __hyp_stub_reset: W(b) . 276 251 __hyp_stub_und: W(b) . 277 252 __hyp_stub_svc: W(b) .
+5 -2
arch/arm/kernel/reboot.c
··· 12 12 13 13 #include <asm/cacheflush.h> 14 14 #include <asm/idmap.h> 15 + #include <asm/virt.h> 15 16 16 17 #include "reboot.h" 17 18 18 - typedef void (*phys_reset_t)(unsigned long); 19 + typedef void (*phys_reset_t)(unsigned long, bool); 19 20 20 21 /* 21 22 * Function pointers to optional machine specific functions ··· 52 51 53 52 /* Switch to the identity mapping. */ 54 53 phys_reset = (phys_reset_t)virt_to_idmap(cpu_reset); 55 - phys_reset((unsigned long)addr); 54 + 55 + /* original stub should be restored by kvm */ 56 + phys_reset((unsigned long)addr, is_hyp_mode_available()); 56 57 57 58 /* Should never get here. */ 58 59 BUG();
+36 -30
arch/arm/kvm/arm.c
··· 53 53 54 54 static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page); 55 55 static kvm_cpu_context_t __percpu *kvm_host_cpu_state; 56 - static unsigned long hyp_default_vectors; 57 56 58 57 /* Per-CPU variable containing the currently running vcpu. */ 59 58 static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_arm_running_vcpu); ··· 226 227 else 227 228 r = kvm->arch.vgic.msis_require_devid; 228 229 break; 230 + case KVM_CAP_ARM_USER_IRQ: 231 + /* 232 + * 1: EL1_VTIMER, EL1_PTIMER, and PMU. 233 + * (bump this number if adding more devices) 234 + */ 235 + r = 1; 236 + break; 229 237 default: 230 238 r = kvm_arch_dev_ioctl_check_extension(kvm, ext); 231 239 break; ··· 354 348 vcpu->arch.host_cpu_context = this_cpu_ptr(kvm_host_cpu_state); 355 349 356 350 kvm_arm_set_running_vcpu(vcpu); 351 + 352 + kvm_vgic_load(vcpu); 357 353 } 358 354 359 355 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) 360 356 { 361 - /* 362 - * The arch-generic KVM code expects the cpu field of a vcpu to be -1 363 - * if the vcpu is no longer assigned to a cpu. This is used for the 364 - * optimized make_all_cpus_request path. 365 - */ 357 + kvm_vgic_put(vcpu); 358 + 366 359 vcpu->cpu = -1; 367 360 368 361 kvm_arm_set_running_vcpu(NULL); ··· 519 514 return ret; 520 515 } 521 516 522 - /* 523 - * Enable the arch timers only if we have an in-kernel VGIC 524 - * and it has been properly initialized, since we cannot handle 525 - * interrupts from the virtual timer with a userspace gic. 526 - */ 527 - if (irqchip_in_kernel(kvm) && vgic_initialized(kvm)) 528 - ret = kvm_timer_enable(vcpu); 517 + ret = kvm_timer_enable(vcpu); 529 518 530 519 return ret; 531 520 } ··· 629 630 * non-preemptible context. 
630 631 */ 631 632 preempt_disable(); 633 + 632 634 kvm_pmu_flush_hwstate(vcpu); 635 + 633 636 kvm_timer_flush_hwstate(vcpu); 634 637 kvm_vgic_flush_hwstate(vcpu); 635 638 636 639 local_irq_disable(); 637 640 638 641 /* 639 - * Re-check atomic conditions 642 + * If we have a signal pending, or need to notify a userspace 643 + * irqchip about timer or PMU level changes, then we exit (and 644 + * update the timer level state in kvm_timer_update_run 645 + * below). 640 646 */ 641 - if (signal_pending(current)) { 647 + if (signal_pending(current) || 648 + kvm_timer_should_notify_user(vcpu) || 649 + kvm_pmu_should_notify_user(vcpu)) { 642 650 ret = -EINTR; 643 651 run->exit_reason = KVM_EXIT_INTR; 644 652 } ··· 715 709 preempt_enable(); 716 710 717 711 ret = handle_exit(vcpu, run, ret); 712 + } 713 + 714 + /* Tell userspace about in-kernel device output levels */ 715 + if (unlikely(!irqchip_in_kernel(vcpu->kvm))) { 716 + kvm_timer_update_run(vcpu); 717 + kvm_pmu_update_run(vcpu); 718 718 } 719 719 720 720 if (vcpu->sigset_active) ··· 1121 1109 kvm_arm_init_debug(); 1122 1110 } 1123 1111 1112 + static void cpu_hyp_reset(void) 1113 + { 1114 + if (!is_kernel_in_hyp_mode()) 1115 + __hyp_reset_vectors(); 1116 + } 1117 + 1124 1118 static void cpu_hyp_reinit(void) 1125 1119 { 1120 + cpu_hyp_reset(); 1121 + 1126 1122 if (is_kernel_in_hyp_mode()) { 1127 1123 /* 1128 1124 * __cpu_init_stage2() is safe to call even if the PM ··· 1138 1118 */ 1139 1119 __cpu_init_stage2(); 1140 1120 } else { 1141 - if (__hyp_get_vectors() == hyp_default_vectors) 1142 - cpu_init_hyp_mode(NULL); 1121 + cpu_init_hyp_mode(NULL); 1143 1122 } 1144 - } 1145 - 1146 - static void cpu_hyp_reset(void) 1147 - { 1148 - if (!is_kernel_in_hyp_mode()) 1149 - __cpu_reset_hyp_mode(hyp_default_vectors, 1150 - kvm_get_idmap_start()); 1151 1123 } 1152 1124 1153 1125 static void _kvm_arch_hardware_enable(void *discard) ··· 1323 1311 err = kvm_mmu_init(); 1324 1312 if (err) 1325 1313 goto out_err; 1326 - 1327 - /* 1328 - 
* It is probably enough to obtain the default on one 1329 - * CPU. It's unlikely to be different on the others. 1330 - */ 1331 - hyp_default_vectors = __hyp_get_vectors(); 1332 1314 1333 1315 /* 1334 1316 * Allocate stack pages for Hypervisor-mode
+21 -3
arch/arm/kvm/coproc.c
··· 40 40 * Co-processor emulation 41 41 *****************************************************************************/ 42 42 43 + static bool write_to_read_only(struct kvm_vcpu *vcpu, 44 + const struct coproc_params *params) 45 + { 46 + WARN_ONCE(1, "CP15 write to read-only register\n"); 47 + print_cp_instr(params); 48 + kvm_inject_undefined(vcpu); 49 + return false; 50 + } 51 + 52 + static bool read_from_write_only(struct kvm_vcpu *vcpu, 53 + const struct coproc_params *params) 54 + { 55 + WARN_ONCE(1, "CP15 read to write-only register\n"); 56 + print_cp_instr(params); 57 + kvm_inject_undefined(vcpu); 58 + return false; 59 + } 60 + 43 61 /* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */ 44 62 static u32 cache_levels; 45 63 ··· 520 502 if (likely(r->access(vcpu, params, r))) { 521 503 /* Skip instruction, since it was emulated */ 522 504 kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu)); 523 - return 1; 524 505 } 525 - /* If access function fails, it should complain. */ 526 506 } else { 507 + /* If access function fails, it should complain. */ 527 508 kvm_err("Unsupported guest CP15 access at: %08lx\n", 528 509 *vcpu_pc(vcpu)); 529 510 print_cp_instr(params); 511 + kvm_inject_undefined(vcpu); 530 512 } 531 - kvm_inject_undefined(vcpu); 513 + 532 514 return 1; 533 515 } 534 516
-18
arch/arm/kvm/coproc.h
··· 81 81 return true; 82 82 } 83 83 84 - static inline bool write_to_read_only(struct kvm_vcpu *vcpu, 85 - const struct coproc_params *params) 86 - { 87 - kvm_debug("CP15 write to read-only register at: %08lx\n", 88 - *vcpu_pc(vcpu)); 89 - print_cp_instr(params); 90 - return false; 91 - } 92 - 93 - static inline bool read_from_write_only(struct kvm_vcpu *vcpu, 94 - const struct coproc_params *params) 95 - { 96 - kvm_debug("CP15 read to write-only register at: %08lx\n", 97 - *vcpu_pc(vcpu)); 98 - print_cp_instr(params); 99 - return false; 100 - } 101 - 102 84 /* Reset functions */ 103 85 static inline void reset_unknown(struct kvm_vcpu *vcpu, 104 86 const struct coproc_reg *r)
+8
arch/arm/kvm/handle_exit.c
··· 160 160 case ARM_EXCEPTION_DATA_ABORT: 161 161 kvm_inject_vabt(vcpu); 162 162 return 1; 163 + case ARM_EXCEPTION_HYP_GONE: 164 + /* 165 + * HYP has been reset to the hyp-stub. This happens 166 + * when a guest is pre-empted by kvm_reboot()'s 167 + * shutdown call. 168 + */ 169 + run->exit_reason = KVM_EXIT_FAIL_ENTRY; 170 + return 0; 163 171 default: 164 172 kvm_pr_unimpl("Unsupported exception type: %d", 165 173 exception_index);
+23 -5
arch/arm/kvm/hyp/hyp-entry.S
··· 126 126 */ 127 127 pop {r0, r1, r2} 128 128 129 - /* Check for __hyp_get_vectors */ 130 - cmp r0, #-1 131 - mrceq p15, 4, r0, c12, c0, 0 @ get HVBAR 132 - beq 1f 129 + /* 130 + * Check if we have a kernel function, which is guaranteed to be 131 + * bigger than the maximum hyp stub hypercall 132 + */ 133 + cmp r0, #HVC_STUB_HCALL_NR 134 + bhs 1f 133 135 136 + /* 137 + * Not a kernel function, treat it as a stub hypercall. 138 + * Compute the physical address for __kvm_handle_stub_hvc 139 + * (as the code lives in the idmaped page) and branch there. 140 + * We hijack ip (r12) as a tmp register. 141 + */ 142 + push {r1} 143 + ldr r1, =kimage_voffset 144 + ldr r1, [r1] 145 + ldr ip, =__kvm_handle_stub_hvc 146 + sub ip, ip, r1 147 + pop {r1} 148 + 149 + bx ip 150 + 151 + 1: 134 152 push {lr} 135 153 136 154 mov lr, r0 ··· 160 142 blx lr @ Call the HYP function 161 143 162 144 pop {lr} 163 - 1: eret 145 + eret 164 146 165 147 guest_trap: 166 148 load_vcpu r0 @ Load VCPU pointer to r0
+43 -8
arch/arm/kvm/init.S
··· 23 23 #include <asm/kvm_asm.h> 24 24 #include <asm/kvm_arm.h> 25 25 #include <asm/kvm_mmu.h> 26 + #include <asm/virt.h> 26 27 27 28 /******************************************************************** 28 29 * Hypervisor initialization ··· 40 39 * - Setup the page tables 41 40 * - Enable the MMU 42 41 * - Profit! (or eret, if you only care about the code). 42 + * 43 + * Another possibility is to get a HYP stub hypercall. 44 + * We discriminate between the two by checking if r0 contains a value 45 + * that is less than HVC_STUB_HCALL_NR. 43 46 */ 44 47 45 48 .text ··· 63 58 W(b) . 64 59 65 60 __do_hyp_init: 61 + @ Check for a stub hypercall 62 + cmp r0, #HVC_STUB_HCALL_NR 63 + blo __kvm_handle_stub_hvc 64 + 66 65 @ Set stack pointer 67 66 mov sp, r0 68 67 ··· 121 112 122 113 eret 123 114 124 - @ r0 : stub vectors address 125 - ENTRY(__kvm_hyp_reset) 115 + ENTRY(__kvm_handle_stub_hvc) 116 + cmp r0, #HVC_SOFT_RESTART 117 + bne 1f 118 + 119 + /* The target is expected in r1 */ 120 + msr ELR_hyp, r1 121 + mrs r0, cpsr 122 + bic r0, r0, #MODE_MASK 123 + orr r0, r0, #HYP_MODE 124 + THUMB( orr r0, r0, #PSR_T_BIT ) 125 + msr spsr_cxsf, r0 126 + b reset 127 + 128 + 1: cmp r0, #HVC_RESET_VECTORS 129 + bne 1f 130 + 131 + reset: 126 132 /* We're now in idmap, disable MMU */ 127 133 mrc p15, 4, r1, c1, c0, 0 @ HSCTLR 128 - ldr r2, =(HSCTLR_M | HSCTLR_A | HSCTLR_C | HSCTLR_I) 129 - bic r1, r1, r2 134 + ldr r0, =(HSCTLR_M | HSCTLR_A | HSCTLR_C | HSCTLR_I) 135 + bic r1, r1, r0 130 136 mcr p15, 4, r1, c1, c0, 0 @ HSCTLR 131 137 132 - /* Install stub vectors */ 133 - mcr p15, 4, r0, c12, c0, 0 @ HVBAR 134 - isb 138 + /* 139 + * Install stub vectors, using ardb's VA->PA trick. 
140 + */ 141 + 0: adr r0, 0b @ PA(0) 142 + movw r1, #:lower16:__hyp_stub_vectors - 0b @ VA(stub) - VA(0) 143 + movt r1, #:upper16:__hyp_stub_vectors - 0b 144 + add r1, r1, r0 @ PA(stub) 145 + mcr p15, 4, r1, c12, c0, 0 @ HVBAR 146 + b exit 135 147 148 + 1: ldr r0, =HVC_STUB_ERR 136 149 eret 137 - ENDPROC(__kvm_hyp_reset) 150 + 151 + exit: 152 + mov r0, #0 153 + eret 154 + ENDPROC(__kvm_handle_stub_hvc) 138 155 139 156 .ltorg 140 157
-4
arch/arm/kvm/interrupts.S
··· 37 37 * in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c). Return values are 38 38 * passed in r0 (strictly 32bit). 39 39 * 40 - * A function pointer with a value of 0xffffffff has a special meaning, 41 - * and is used to implement __hyp_get_vectors in the same way as in 42 - * arch/arm/kernel/hyp_stub.S. 43 - * 44 40 * The calling convention follows the standard AAPCS: 45 41 * r0 - r3: caller save 46 42 * r12: caller save
+13 -23
arch/arm/kvm/mmu.c
··· 1512 1512 unsigned long start, 1513 1513 unsigned long end, 1514 1514 int (*handler)(struct kvm *kvm, 1515 - gpa_t gpa, void *data), 1515 + gpa_t gpa, u64 size, 1516 + void *data), 1516 1517 void *data) 1517 1518 { 1518 1519 struct kvm_memslots *slots; ··· 1525 1524 /* we only care about the pages that the guest sees */ 1526 1525 kvm_for_each_memslot(memslot, slots) { 1527 1526 unsigned long hva_start, hva_end; 1528 - gfn_t gfn, gfn_end; 1527 + gfn_t gpa; 1529 1528 1530 1529 hva_start = max(start, memslot->userspace_addr); 1531 1530 hva_end = min(end, memslot->userspace_addr + ··· 1533 1532 if (hva_start >= hva_end) 1534 1533 continue; 1535 1534 1536 - /* 1537 - * {gfn(page) | page intersects with [hva_start, hva_end)} = 1538 - * {gfn_start, gfn_start+1, ..., gfn_end-1}. 1539 - */ 1540 - gfn = hva_to_gfn_memslot(hva_start, memslot); 1541 - gfn_end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, memslot); 1542 - 1543 - for (; gfn < gfn_end; ++gfn) { 1544 - gpa_t gpa = gfn << PAGE_SHIFT; 1545 - ret |= handler(kvm, gpa, data); 1546 - } 1535 + gpa = hva_to_gfn_memslot(hva_start, memslot) << PAGE_SHIFT; 1536 + ret |= handler(kvm, gpa, (u64)(hva_end - hva_start), data); 1547 1537 } 1548 1538 1549 1539 return ret; 1550 1540 } 1551 1541 1552 - static int kvm_unmap_hva_handler(struct kvm *kvm, gpa_t gpa, void *data) 1542 + static int kvm_unmap_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data) 1553 1543 { 1554 - unmap_stage2_range(kvm, gpa, PAGE_SIZE); 1544 + unmap_stage2_range(kvm, gpa, size); 1555 1545 return 0; 1556 1546 } 1557 1547 ··· 1569 1577 return 0; 1570 1578 } 1571 1579 1572 - static int kvm_set_spte_handler(struct kvm *kvm, gpa_t gpa, void *data) 1580 + static int kvm_set_spte_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data) 1573 1581 { 1574 1582 pte_t *pte = (pte_t *)data; 1575 1583 1584 + WARN_ON(size != PAGE_SIZE); 1576 1585 /* 1577 1586 * We can always call stage2_set_pte with KVM_S2PTE_FLAG_LOGGING_ACTIVE 1578 1587 * flag clear because 
MMU notifiers will have unmapped a huge PMD before ··· 1599 1606 handle_hva_to_gpa(kvm, hva, end, &kvm_set_spte_handler, &stage2_pte); 1600 1607 } 1601 1608 1602 - static int kvm_age_hva_handler(struct kvm *kvm, gpa_t gpa, void *data) 1609 + static int kvm_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data) 1603 1610 { 1604 1611 pmd_t *pmd; 1605 1612 pte_t *pte; 1606 1613 1614 + WARN_ON(size != PAGE_SIZE && size != PMD_SIZE); 1607 1615 pmd = stage2_get_pmd(kvm, NULL, gpa); 1608 1616 if (!pmd || pmd_none(*pmd)) /* Nothing there */ 1609 1617 return 0; ··· 1619 1625 return stage2_ptep_test_and_clear_young(pte); 1620 1626 } 1621 1627 1622 - static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, void *data) 1628 + static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data) 1623 1629 { 1624 1630 pmd_t *pmd; 1625 1631 pte_t *pte; 1626 1632 1633 + WARN_ON(size != PAGE_SIZE && size != PMD_SIZE); 1627 1634 pmd = stage2_get_pmd(kvm, NULL, gpa); 1628 1635 if (!pmd || pmd_none(*pmd)) /* Nothing there */ 1629 1636 return 0; ··· 1667 1672 phys_addr_t kvm_get_idmap_vector(void) 1668 1673 { 1669 1674 return hyp_idmap_vector; 1670 - } 1671 - 1672 - phys_addr_t kvm_get_idmap_start(void) 1673 - { 1674 - return hyp_idmap_start; 1675 1675 } 1676 1676 1677 1677 static int kvm_map_idmap_text(pgd_t *pgd)
+7 -1
arch/arm/kvm/psci.c
··· 208 208 209 209 static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu) 210 210 { 211 - int ret = 1; 211 + struct kvm *kvm = vcpu->kvm; 212 212 unsigned long psci_fn = vcpu_get_reg(vcpu, 0) & ~((u32) 0); 213 213 unsigned long val; 214 + int ret = 1; 214 215 215 216 switch (psci_fn) { 216 217 case PSCI_0_2_FN_PSCI_VERSION: ··· 231 230 break; 232 231 case PSCI_0_2_FN_CPU_ON: 233 232 case PSCI_0_2_FN64_CPU_ON: 233 + mutex_lock(&kvm->lock); 234 234 val = kvm_psci_vcpu_on(vcpu); 235 + mutex_unlock(&kvm->lock); 235 236 break; 236 237 case PSCI_0_2_FN_AFFINITY_INFO: 237 238 case PSCI_0_2_FN64_AFFINITY_INFO: ··· 282 279 283 280 static int kvm_psci_0_1_call(struct kvm_vcpu *vcpu) 284 281 { 282 + struct kvm *kvm = vcpu->kvm; 285 283 unsigned long psci_fn = vcpu_get_reg(vcpu, 0) & ~((u32) 0); 286 284 unsigned long val; 287 285 ··· 292 288 val = PSCI_RET_SUCCESS; 293 289 break; 294 290 case KVM_PSCI_FN_CPU_ON: 291 + mutex_lock(&kvm->lock); 295 292 val = kvm_psci_vcpu_on(vcpu); 293 + mutex_unlock(&kvm->lock); 296 294 break; 297 295 default: 298 296 val = PSCI_RET_NOT_SUPPORTED;
+5
arch/arm/mm/mmu.c
··· 87 87 #define s2_policy(policy) 0 88 88 #endif 89 89 90 + unsigned long kimage_voffset __ro_after_init; 91 + 90 92 static struct cachepolicy cache_policies[] __initdata = { 91 93 { 92 94 .policy = "uncached", ··· 1637 1635 1638 1636 empty_zero_page = virt_to_page(zero_page); 1639 1637 __flush_dcache_page(NULL, empty_zero_page); 1638 + 1639 + /* Compute the virt/idmap offset, mostly for the sake of KVM */ 1640 + kimage_voffset = (unsigned long)&kimage_voffset - virt_to_idmap(&kimage_voffset); 1640 1641 }
+10 -5
arch/arm/mm/proc-v7.S
··· 39 39 ENDPROC(cpu_v7_proc_fin) 40 40 41 41 /* 42 - * cpu_v7_reset(loc) 42 + * cpu_v7_reset(loc, hyp) 43 43 * 44 44 * Perform a soft reset of the system. Put the CPU into the 45 45 * same state as it would be if it had been reset, and branch 46 46 * to what would be the reset vector. 47 47 * 48 48 * - loc - location to jump to for soft reset 49 + * - hyp - indicate if restart occurs in HYP mode 49 50 * 50 51 * This code must be executed using a flat identity mapping with 51 52 * caches disabled. ··· 54 53 .align 5 55 54 .pushsection .idmap.text, "ax" 56 55 ENTRY(cpu_v7_reset) 57 - mrc p15, 0, r1, c1, c0, 0 @ ctrl register 58 - bic r1, r1, #0x1 @ ...............m 59 - THUMB( bic r1, r1, #1 << 30 ) @ SCTLR.TE (Thumb exceptions) 60 - mcr p15, 0, r1, c1, c0, 0 @ disable MMU 56 + mrc p15, 0, r2, c1, c0, 0 @ ctrl register 57 + bic r2, r2, #0x1 @ ...............m 58 + THUMB( bic r2, r2, #1 << 30 ) @ SCTLR.TE (Thumb exceptions) 59 + mcr p15, 0, r2, c1, c0, 0 @ disable MMU 61 60 isb 61 + #ifdef CONFIG_ARM_VIRT_EXT 62 + teq r1, #0 63 + bne __hyp_soft_restart 64 + #endif 62 65 bx r0 63 66 ENDPROC(cpu_v7_reset) 64 67 .popsection
+13 -68
arch/arm64/include/asm/arch_gicv3.h
··· 20 20 21 21 #include <asm/sysreg.h> 22 22 23 - #define ICC_EOIR1_EL1 sys_reg(3, 0, 12, 12, 1) 24 - #define ICC_DIR_EL1 sys_reg(3, 0, 12, 11, 1) 25 - #define ICC_IAR1_EL1 sys_reg(3, 0, 12, 12, 0) 26 - #define ICC_SGI1R_EL1 sys_reg(3, 0, 12, 11, 5) 27 - #define ICC_PMR_EL1 sys_reg(3, 0, 4, 6, 0) 28 - #define ICC_CTLR_EL1 sys_reg(3, 0, 12, 12, 4) 29 - #define ICC_SRE_EL1 sys_reg(3, 0, 12, 12, 5) 30 - #define ICC_GRPEN1_EL1 sys_reg(3, 0, 12, 12, 7) 31 - #define ICC_BPR1_EL1 sys_reg(3, 0, 12, 12, 3) 32 - 33 - #define ICC_SRE_EL2 sys_reg(3, 4, 12, 9, 5) 34 - 35 - /* 36 - * System register definitions 37 - */ 38 - #define ICH_VSEIR_EL2 sys_reg(3, 4, 12, 9, 4) 39 - #define ICH_HCR_EL2 sys_reg(3, 4, 12, 11, 0) 40 - #define ICH_VTR_EL2 sys_reg(3, 4, 12, 11, 1) 41 - #define ICH_MISR_EL2 sys_reg(3, 4, 12, 11, 2) 42 - #define ICH_EISR_EL2 sys_reg(3, 4, 12, 11, 3) 43 - #define ICH_ELSR_EL2 sys_reg(3, 4, 12, 11, 5) 44 - #define ICH_VMCR_EL2 sys_reg(3, 4, 12, 11, 7) 45 - 46 - #define __LR0_EL2(x) sys_reg(3, 4, 12, 12, x) 47 - #define __LR8_EL2(x) sys_reg(3, 4, 12, 13, x) 48 - 49 - #define ICH_LR0_EL2 __LR0_EL2(0) 50 - #define ICH_LR1_EL2 __LR0_EL2(1) 51 - #define ICH_LR2_EL2 __LR0_EL2(2) 52 - #define ICH_LR3_EL2 __LR0_EL2(3) 53 - #define ICH_LR4_EL2 __LR0_EL2(4) 54 - #define ICH_LR5_EL2 __LR0_EL2(5) 55 - #define ICH_LR6_EL2 __LR0_EL2(6) 56 - #define ICH_LR7_EL2 __LR0_EL2(7) 57 - #define ICH_LR8_EL2 __LR8_EL2(0) 58 - #define ICH_LR9_EL2 __LR8_EL2(1) 59 - #define ICH_LR10_EL2 __LR8_EL2(2) 60 - #define ICH_LR11_EL2 __LR8_EL2(3) 61 - #define ICH_LR12_EL2 __LR8_EL2(4) 62 - #define ICH_LR13_EL2 __LR8_EL2(5) 63 - #define ICH_LR14_EL2 __LR8_EL2(6) 64 - #define ICH_LR15_EL2 __LR8_EL2(7) 65 - 66 - #define __AP0Rx_EL2(x) sys_reg(3, 4, 12, 8, x) 67 - #define ICH_AP0R0_EL2 __AP0Rx_EL2(0) 68 - #define ICH_AP0R1_EL2 __AP0Rx_EL2(1) 69 - #define ICH_AP0R2_EL2 __AP0Rx_EL2(2) 70 - #define ICH_AP0R3_EL2 __AP0Rx_EL2(3) 71 - 72 - #define __AP1Rx_EL2(x) sys_reg(3, 4, 12, 9, x) 73 - #define 
ICH_AP1R0_EL2 __AP1Rx_EL2(0) 74 - #define ICH_AP1R1_EL2 __AP1Rx_EL2(1) 75 - #define ICH_AP1R2_EL2 __AP1Rx_EL2(2) 76 - #define ICH_AP1R3_EL2 __AP1Rx_EL2(3) 77 - 78 23 #ifndef __ASSEMBLY__ 79 24 80 25 #include <linux/stringify.h> 81 26 #include <asm/barrier.h> 82 27 #include <asm/cacheflush.h> 83 28 84 - #define read_gicreg read_sysreg_s 85 - #define write_gicreg write_sysreg_s 29 + #define read_gicreg(r) read_sysreg_s(SYS_ ## r) 30 + #define write_gicreg(v, r) write_sysreg_s(v, SYS_ ## r) 86 31 87 32 /* 88 33 * Low-level accessors ··· 38 93 39 94 static inline void gic_write_eoir(u32 irq) 40 95 { 41 - write_sysreg_s(irq, ICC_EOIR1_EL1); 96 + write_sysreg_s(irq, SYS_ICC_EOIR1_EL1); 42 97 isb(); 43 98 } 44 99 45 100 static inline void gic_write_dir(u32 irq) 46 101 { 47 - write_sysreg_s(irq, ICC_DIR_EL1); 102 + write_sysreg_s(irq, SYS_ICC_DIR_EL1); 48 103 isb(); 49 104 } 50 105 ··· 52 107 { 53 108 u64 irqstat; 54 109 55 - irqstat = read_sysreg_s(ICC_IAR1_EL1); 110 + irqstat = read_sysreg_s(SYS_ICC_IAR1_EL1); 56 111 dsb(sy); 57 112 return irqstat; 58 113 } ··· 69 124 u64 irqstat; 70 125 71 126 nops(8); 72 - irqstat = read_sysreg_s(ICC_IAR1_EL1); 127 + irqstat = read_sysreg_s(SYS_ICC_IAR1_EL1); 73 128 nops(4); 74 129 mb(); 75 130 ··· 78 133 79 134 static inline void gic_write_pmr(u32 val) 80 135 { 81 - write_sysreg_s(val, ICC_PMR_EL1); 136 + write_sysreg_s(val, SYS_ICC_PMR_EL1); 82 137 } 83 138 84 139 static inline void gic_write_ctlr(u32 val) 85 140 { 86 - write_sysreg_s(val, ICC_CTLR_EL1); 141 + write_sysreg_s(val, SYS_ICC_CTLR_EL1); 87 142 isb(); 88 143 } 89 144 90 145 static inline void gic_write_grpen1(u32 val) 91 146 { 92 - write_sysreg_s(val, ICC_GRPEN1_EL1); 147 + write_sysreg_s(val, SYS_ICC_GRPEN1_EL1); 93 148 isb(); 94 149 } 95 150 96 151 static inline void gic_write_sgi1r(u64 val) 97 152 { 98 - write_sysreg_s(val, ICC_SGI1R_EL1); 153 + write_sysreg_s(val, SYS_ICC_SGI1R_EL1); 99 154 } 100 155 101 156 static inline u32 gic_read_sre(void) 102 157 { 103 - return 
read_sysreg_s(ICC_SRE_EL1); 158 + return read_sysreg_s(SYS_ICC_SRE_EL1); 104 159 } 105 160 106 161 static inline void gic_write_sre(u32 val) 107 162 { 108 - write_sysreg_s(val, ICC_SRE_EL1); 163 + write_sysreg_s(val, SYS_ICC_SRE_EL1); 109 164 isb(); 110 165 } 111 166 112 167 static inline void gic_write_bpr1(u32 val) 113 168 { 114 - asm volatile("msr_s " __stringify(ICC_BPR1_EL1) ", %0" : : "r" (val)); 169 + write_sysreg_s(val, SYS_ICC_BPR1_EL1); 115 170 } 116 171 117 172 #define gic_read_typer(c) readq_relaxed(c)
+3 -2
arch/arm64/include/asm/kvm_asm.h
··· 28 28 #define ARM_EXCEPTION_EL1_SERROR 1 29 29 #define ARM_EXCEPTION_TRAP 2 30 30 /* The hyp-stub will return this for any kvm_call_hyp() call */ 31 - #define ARM_EXCEPTION_HYP_GONE 3 31 + #define ARM_EXCEPTION_HYP_GONE HVC_STUB_ERR 32 32 33 33 #define KVM_ARM64_DEBUG_DIRTY_SHIFT 0 34 34 #define KVM_ARM64_DEBUG_DIRTY (1 << KVM_ARM64_DEBUG_DIRTY_SHIFT) ··· 47 47 48 48 extern char __kvm_hyp_init[]; 49 49 extern char __kvm_hyp_init_end[]; 50 - extern char __kvm_hyp_reset[]; 51 50 52 51 extern char __kvm_hyp_vector[]; 53 52 ··· 58 59 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu); 59 60 60 61 extern u64 __vgic_v3_get_ich_vtr_el2(void); 62 + extern u64 __vgic_v3_read_vmcr(void); 63 + extern void __vgic_v3_write_vmcr(u32 vmcr); 61 64 extern void __vgic_v3_init_lrs(void); 62 65 63 66 extern u32 __kvm_get_mdcr_el2(void);
-7
arch/arm64/include/asm/kvm_host.h
··· 361 361 __kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr); 362 362 } 363 363 364 - void __kvm_hyp_teardown(void); 365 - static inline void __cpu_reset_hyp_mode(unsigned long vector_ptr, 366 - phys_addr_t phys_idmap_start) 367 - { 368 - kvm_call_hyp(__kvm_hyp_teardown, phys_idmap_start); 369 - } 370 - 371 364 static inline void kvm_arch_hardware_unsetup(void) {} 372 365 static inline void kvm_arch_sync_events(struct kvm *kvm) {} 373 366 static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
-1
arch/arm64/include/asm/kvm_mmu.h
··· 155 155 156 156 phys_addr_t kvm_mmu_get_httbr(void); 157 157 phys_addr_t kvm_get_idmap_vector(void); 158 - phys_addr_t kvm_get_idmap_start(void); 159 158 int kvm_mmu_init(void); 160 159 void kvm_clear_hyp_idmap(void); 161 160
+155 -7
arch/arm64/include/asm/sysreg.h
··· 48 48 ((crn) << CRn_shift) | ((crm) << CRm_shift) | \ 49 49 ((op2) << Op2_shift)) 50 50 51 + #define sys_insn sys_reg 52 + 51 53 #define sys_reg_Op0(id) (((id) >> Op0_shift) & Op0_mask) 52 54 #define sys_reg_Op1(id) (((id) >> Op1_shift) & Op1_mask) 53 55 #define sys_reg_CRn(id) (((id) >> CRn_shift) & CRn_mask) ··· 83 81 84 82 #endif /* CONFIG_BROKEN_GAS_INST */ 85 83 84 + #define REG_PSTATE_PAN_IMM sys_reg(0, 0, 4, 0, 4) 85 + #define REG_PSTATE_UAO_IMM sys_reg(0, 0, 4, 0, 3) 86 + 87 + #define SET_PSTATE_PAN(x) __emit_inst(0xd5000000 | REG_PSTATE_PAN_IMM | \ 88 + (!!x)<<8 | 0x1f) 89 + #define SET_PSTATE_UAO(x) __emit_inst(0xd5000000 | REG_PSTATE_UAO_IMM | \ 90 + (!!x)<<8 | 0x1f) 91 + 92 + #define SYS_DC_ISW sys_insn(1, 0, 7, 6, 2) 93 + #define SYS_DC_CSW sys_insn(1, 0, 7, 10, 2) 94 + #define SYS_DC_CISW sys_insn(1, 0, 7, 14, 2) 95 + 96 + #define SYS_OSDTRRX_EL1 sys_reg(2, 0, 0, 0, 2) 97 + #define SYS_MDCCINT_EL1 sys_reg(2, 0, 0, 2, 0) 98 + #define SYS_MDSCR_EL1 sys_reg(2, 0, 0, 2, 2) 99 + #define SYS_OSDTRTX_EL1 sys_reg(2, 0, 0, 3, 2) 100 + #define SYS_OSECCR_EL1 sys_reg(2, 0, 0, 6, 2) 101 + #define SYS_DBGBVRn_EL1(n) sys_reg(2, 0, 0, n, 4) 102 + #define SYS_DBGBCRn_EL1(n) sys_reg(2, 0, 0, n, 5) 103 + #define SYS_DBGWVRn_EL1(n) sys_reg(2, 0, 0, n, 6) 104 + #define SYS_DBGWCRn_EL1(n) sys_reg(2, 0, 0, n, 7) 105 + #define SYS_MDRAR_EL1 sys_reg(2, 0, 1, 0, 0) 106 + #define SYS_OSLAR_EL1 sys_reg(2, 0, 1, 0, 4) 107 + #define SYS_OSLSR_EL1 sys_reg(2, 0, 1, 1, 4) 108 + #define SYS_OSDLR_EL1 sys_reg(2, 0, 1, 3, 4) 109 + #define SYS_DBGPRCR_EL1 sys_reg(2, 0, 1, 4, 4) 110 + #define SYS_DBGCLAIMSET_EL1 sys_reg(2, 0, 7, 8, 6) 111 + #define SYS_DBGCLAIMCLR_EL1 sys_reg(2, 0, 7, 9, 6) 112 + #define SYS_DBGAUTHSTATUS_EL1 sys_reg(2, 0, 7, 14, 6) 113 + #define SYS_MDCCSR_EL0 sys_reg(2, 3, 0, 1, 0) 114 + #define SYS_DBGDTR_EL0 sys_reg(2, 3, 0, 4, 0) 115 + #define SYS_DBGDTRRX_EL0 sys_reg(2, 3, 0, 5, 0) 116 + #define SYS_DBGDTRTX_EL0 sys_reg(2, 3, 0, 5, 0) 117 + #define 
SYS_DBGVCR32_EL2 sys_reg(2, 4, 0, 7, 0) 118 + 86 119 #define SYS_MIDR_EL1 sys_reg(3, 0, 0, 0, 0) 87 120 #define SYS_MPIDR_EL1 sys_reg(3, 0, 0, 0, 5) 88 121 #define SYS_REVIDR_EL1 sys_reg(3, 0, 0, 0, 6) ··· 125 88 #define SYS_ID_PFR0_EL1 sys_reg(3, 0, 0, 1, 0) 126 89 #define SYS_ID_PFR1_EL1 sys_reg(3, 0, 0, 1, 1) 127 90 #define SYS_ID_DFR0_EL1 sys_reg(3, 0, 0, 1, 2) 91 + #define SYS_ID_AFR0_EL1 sys_reg(3, 0, 0, 1, 3) 128 92 #define SYS_ID_MMFR0_EL1 sys_reg(3, 0, 0, 1, 4) 129 93 #define SYS_ID_MMFR1_EL1 sys_reg(3, 0, 0, 1, 5) 130 94 #define SYS_ID_MMFR2_EL1 sys_reg(3, 0, 0, 1, 6) ··· 156 118 #define SYS_ID_AA64MMFR1_EL1 sys_reg(3, 0, 0, 7, 1) 157 119 #define SYS_ID_AA64MMFR2_EL1 sys_reg(3, 0, 0, 7, 2) 158 120 159 - #define SYS_CNTFRQ_EL0 sys_reg(3, 3, 14, 0, 0) 121 + #define SYS_SCTLR_EL1 sys_reg(3, 0, 1, 0, 0) 122 + #define SYS_ACTLR_EL1 sys_reg(3, 0, 1, 0, 1) 123 + #define SYS_CPACR_EL1 sys_reg(3, 0, 1, 0, 2) 124 + 125 + #define SYS_TTBR0_EL1 sys_reg(3, 0, 2, 0, 0) 126 + #define SYS_TTBR1_EL1 sys_reg(3, 0, 2, 0, 1) 127 + #define SYS_TCR_EL1 sys_reg(3, 0, 2, 0, 2) 128 + 129 + #define SYS_ICC_PMR_EL1 sys_reg(3, 0, 4, 6, 0) 130 + 131 + #define SYS_AFSR0_EL1 sys_reg(3, 0, 5, 1, 0) 132 + #define SYS_AFSR1_EL1 sys_reg(3, 0, 5, 1, 1) 133 + #define SYS_ESR_EL1 sys_reg(3, 0, 5, 2, 0) 134 + #define SYS_FAR_EL1 sys_reg(3, 0, 6, 0, 0) 135 + #define SYS_PAR_EL1 sys_reg(3, 0, 7, 4, 0) 136 + 137 + #define SYS_PMINTENSET_EL1 sys_reg(3, 0, 9, 14, 1) 138 + #define SYS_PMINTENCLR_EL1 sys_reg(3, 0, 9, 14, 2) 139 + 140 + #define SYS_MAIR_EL1 sys_reg(3, 0, 10, 2, 0) 141 + #define SYS_AMAIR_EL1 sys_reg(3, 0, 10, 3, 0) 142 + 143 + #define SYS_VBAR_EL1 sys_reg(3, 0, 12, 0, 0) 144 + 145 + #define SYS_ICC_DIR_EL1 sys_reg(3, 0, 12, 11, 1) 146 + #define SYS_ICC_SGI1R_EL1 sys_reg(3, 0, 12, 11, 5) 147 + #define SYS_ICC_IAR1_EL1 sys_reg(3, 0, 12, 12, 0) 148 + #define SYS_ICC_EOIR1_EL1 sys_reg(3, 0, 12, 12, 1) 149 + #define SYS_ICC_BPR1_EL1 sys_reg(3, 0, 12, 12, 3) 150 + #define SYS_ICC_CTLR_EL1 
sys_reg(3, 0, 12, 12, 4) 151 + #define SYS_ICC_SRE_EL1 sys_reg(3, 0, 12, 12, 5) 152 + #define SYS_ICC_GRPEN1_EL1 sys_reg(3, 0, 12, 12, 7) 153 + 154 + #define SYS_CONTEXTIDR_EL1 sys_reg(3, 0, 13, 0, 1) 155 + #define SYS_TPIDR_EL1 sys_reg(3, 0, 13, 0, 4) 156 + 157 + #define SYS_CNTKCTL_EL1 sys_reg(3, 0, 14, 1, 0) 158 + 159 + #define SYS_CLIDR_EL1 sys_reg(3, 1, 0, 0, 1) 160 + #define SYS_AIDR_EL1 sys_reg(3, 1, 0, 0, 7) 161 + 162 + #define SYS_CSSELR_EL1 sys_reg(3, 2, 0, 0, 0) 163 + 160 164 #define SYS_CTR_EL0 sys_reg(3, 3, 0, 0, 1) 161 165 #define SYS_DCZID_EL0 sys_reg(3, 3, 0, 0, 7) 162 166 163 - #define REG_PSTATE_PAN_IMM sys_reg(0, 0, 4, 0, 4) 164 - #define REG_PSTATE_UAO_IMM sys_reg(0, 0, 4, 0, 3) 167 + #define SYS_PMCR_EL0 sys_reg(3, 3, 9, 12, 0) 168 + #define SYS_PMCNTENSET_EL0 sys_reg(3, 3, 9, 12, 1) 169 + #define SYS_PMCNTENCLR_EL0 sys_reg(3, 3, 9, 12, 2) 170 + #define SYS_PMOVSCLR_EL0 sys_reg(3, 3, 9, 12, 3) 171 + #define SYS_PMSWINC_EL0 sys_reg(3, 3, 9, 12, 4) 172 + #define SYS_PMSELR_EL0 sys_reg(3, 3, 9, 12, 5) 173 + #define SYS_PMCEID0_EL0 sys_reg(3, 3, 9, 12, 6) 174 + #define SYS_PMCEID1_EL0 sys_reg(3, 3, 9, 12, 7) 175 + #define SYS_PMCCNTR_EL0 sys_reg(3, 3, 9, 13, 0) 176 + #define SYS_PMXEVTYPER_EL0 sys_reg(3, 3, 9, 13, 1) 177 + #define SYS_PMXEVCNTR_EL0 sys_reg(3, 3, 9, 13, 2) 178 + #define SYS_PMUSERENR_EL0 sys_reg(3, 3, 9, 14, 0) 179 + #define SYS_PMOVSSET_EL0 sys_reg(3, 3, 9, 14, 3) 165 180 166 - #define SET_PSTATE_PAN(x) __emit_inst(0xd5000000 | REG_PSTATE_PAN_IMM | \ 167 - (!!x)<<8 | 0x1f) 168 - #define SET_PSTATE_UAO(x) __emit_inst(0xd5000000 | REG_PSTATE_UAO_IMM | \ 169 - (!!x)<<8 | 0x1f) 181 + #define SYS_TPIDR_EL0 sys_reg(3, 3, 13, 0, 2) 182 + #define SYS_TPIDRRO_EL0 sys_reg(3, 3, 13, 0, 3) 183 + 184 + #define SYS_CNTFRQ_EL0 sys_reg(3, 3, 14, 0, 0) 185 + 186 + #define SYS_CNTP_TVAL_EL0 sys_reg(3, 3, 14, 2, 0) 187 + #define SYS_CNTP_CTL_EL0 sys_reg(3, 3, 14, 2, 1) 188 + #define SYS_CNTP_CVAL_EL0 sys_reg(3, 3, 14, 2, 2) 189 + 190 + #define 
__PMEV_op2(n) ((n) & 0x7) 191 + #define __CNTR_CRm(n) (0x8 | (((n) >> 3) & 0x3)) 192 + #define SYS_PMEVCNTRn_EL0(n) sys_reg(3, 3, 14, __CNTR_CRm(n), __PMEV_op2(n)) 193 + #define __TYPER_CRm(n) (0xc | (((n) >> 3) & 0x3)) 194 + #define SYS_PMEVTYPERn_EL0(n) sys_reg(3, 3, 14, __TYPER_CRm(n), __PMEV_op2(n)) 195 + 196 + #define SYS_PMCCFILTR_EL0 sys_reg (3, 3, 14, 15, 7) 197 + 198 + #define SYS_DACR32_EL2 sys_reg(3, 4, 3, 0, 0) 199 + #define SYS_IFSR32_EL2 sys_reg(3, 4, 5, 0, 1) 200 + #define SYS_FPEXC32_EL2 sys_reg(3, 4, 5, 3, 0) 201 + 202 + #define __SYS__AP0Rx_EL2(x) sys_reg(3, 4, 12, 8, x) 203 + #define SYS_ICH_AP0R0_EL2 __SYS__AP0Rx_EL2(0) 204 + #define SYS_ICH_AP0R1_EL2 __SYS__AP0Rx_EL2(1) 205 + #define SYS_ICH_AP0R2_EL2 __SYS__AP0Rx_EL2(2) 206 + #define SYS_ICH_AP0R3_EL2 __SYS__AP0Rx_EL2(3) 207 + 208 + #define __SYS__AP1Rx_EL2(x) sys_reg(3, 4, 12, 9, x) 209 + #define SYS_ICH_AP1R0_EL2 __SYS__AP1Rx_EL2(0) 210 + #define SYS_ICH_AP1R1_EL2 __SYS__AP1Rx_EL2(1) 211 + #define SYS_ICH_AP1R2_EL2 __SYS__AP1Rx_EL2(2) 212 + #define SYS_ICH_AP1R3_EL2 __SYS__AP1Rx_EL2(3) 213 + 214 + #define SYS_ICH_VSEIR_EL2 sys_reg(3, 4, 12, 9, 4) 215 + #define SYS_ICC_SRE_EL2 sys_reg(3, 4, 12, 9, 5) 216 + #define SYS_ICH_HCR_EL2 sys_reg(3, 4, 12, 11, 0) 217 + #define SYS_ICH_VTR_EL2 sys_reg(3, 4, 12, 11, 1) 218 + #define SYS_ICH_MISR_EL2 sys_reg(3, 4, 12, 11, 2) 219 + #define SYS_ICH_EISR_EL2 sys_reg(3, 4, 12, 11, 3) 220 + #define SYS_ICH_ELSR_EL2 sys_reg(3, 4, 12, 11, 5) 221 + #define SYS_ICH_VMCR_EL2 sys_reg(3, 4, 12, 11, 7) 222 + 223 + #define __SYS__LR0_EL2(x) sys_reg(3, 4, 12, 12, x) 224 + #define SYS_ICH_LR0_EL2 __SYS__LR0_EL2(0) 225 + #define SYS_ICH_LR1_EL2 __SYS__LR0_EL2(1) 226 + #define SYS_ICH_LR2_EL2 __SYS__LR0_EL2(2) 227 + #define SYS_ICH_LR3_EL2 __SYS__LR0_EL2(3) 228 + #define SYS_ICH_LR4_EL2 __SYS__LR0_EL2(4) 229 + #define SYS_ICH_LR5_EL2 __SYS__LR0_EL2(5) 230 + #define SYS_ICH_LR6_EL2 __SYS__LR0_EL2(6) 231 + #define SYS_ICH_LR7_EL2 __SYS__LR0_EL2(7) 232 + 233 + #define 
__SYS__LR8_EL2(x) sys_reg(3, 4, 12, 13, x) 234 + #define SYS_ICH_LR8_EL2 __SYS__LR8_EL2(0) 235 + #define SYS_ICH_LR9_EL2 __SYS__LR8_EL2(1) 236 + #define SYS_ICH_LR10_EL2 __SYS__LR8_EL2(2) 237 + #define SYS_ICH_LR11_EL2 __SYS__LR8_EL2(3) 238 + #define SYS_ICH_LR12_EL2 __SYS__LR8_EL2(4) 239 + #define SYS_ICH_LR13_EL2 __SYS__LR8_EL2(5) 240 + #define SYS_ICH_LR14_EL2 __SYS__LR8_EL2(6) 241 + #define SYS_ICH_LR15_EL2 __SYS__LR8_EL2(7) 170 242 171 243 /* Common SCTLR_ELx flags. */ 172 244 #define SCTLR_ELx_EE (1 << 25)
+22 -9
arch/arm64/include/asm/virt.h
··· 19 19 #define __ASM__VIRT_H 20 20 21 21 /* 22 - * The arm64 hcall implementation uses x0 to specify the hcall type. A value 23 - * less than 0xfff indicates a special hcall, such as get/set vector. 24 - * Any other value is used as a pointer to the function to call. 22 + * The arm64 hcall implementation uses x0 to specify the hcall 23 + * number. A value less than HVC_STUB_HCALL_NR indicates a special 24 + * hcall, such as set vector. Any other value is handled in a 25 + * hypervisor specific way. 26 + * 27 + * The hypercall is allowed to clobber any of the caller-saved 28 + * registers (x0-x18), so it is advisable to use it through the 29 + * indirection of a function call (as implemented in hyp-stub.S). 25 30 */ 26 - 27 - /* HVC_GET_VECTORS - Return the value of the vbar_el2 register. */ 28 - #define HVC_GET_VECTORS 0 29 31 30 32 /* 31 33 * HVC_SET_VECTORS - Set the value of the vbar_el2 register. 32 34 * 33 35 * @x1: Physical address of the new vector table. 34 36 */ 35 - #define HVC_SET_VECTORS 1 37 + #define HVC_SET_VECTORS 0 36 38 37 39 /* 38 40 * HVC_SOFT_RESTART - CPU soft reset, used by the cpu_soft_restart routine. 39 41 */ 40 - #define HVC_SOFT_RESTART 2 42 + #define HVC_SOFT_RESTART 1 43 + 44 + /* 45 + * HVC_RESET_VECTORS - Restore the vectors to the original HYP stubs 46 + */ 47 + #define HVC_RESET_VECTORS 2 48 + 49 + /* Max number of HYP stub hypercalls */ 50 + #define HVC_STUB_HCALL_NR 3 51 + 52 + /* Error returned when an invalid stub number is passed into x0 */ 53 + #define HVC_STUB_ERR 0xbadca11 41 54 42 55 #define BOOT_CPU_MODE_EL1 (0xe11) 43 56 #define BOOT_CPU_MODE_EL2 (0xe12) ··· 74 61 extern u32 __boot_cpu_mode[2]; 75 62 76 63 void __hyp_set_vectors(phys_addr_t phys_vector_base); 77 - phys_addr_t __hyp_get_vectors(void); 64 + void __hyp_reset_vectors(void); 78 65 79 66 /* Reports the availability of HYP mode */ 80 67 static inline bool is_hyp_mode_available(void)
+2
arch/arm64/include/uapi/asm/kvm.h
··· 145 145 #define KVM_GUESTDBG_USE_HW (1 << 17) 146 146 147 147 struct kvm_sync_regs { 148 + /* Used with KVM_CAP_ARM_USER_IRQ */ 149 + __u64 device_irq_level; 148 150 }; 149 151 150 152 struct kvm_arch_memory_slot {
+4 -4
arch/arm64/kernel/head.S
··· 594 594 cmp x0, #1 595 595 b.ne 3f 596 596 597 - mrs_s x0, ICC_SRE_EL2 597 + mrs_s x0, SYS_ICC_SRE_EL2 598 598 orr x0, x0, #ICC_SRE_EL2_SRE // Set ICC_SRE_EL2.SRE==1 599 599 orr x0, x0, #ICC_SRE_EL2_ENABLE // Set ICC_SRE_EL2.Enable==1 600 - msr_s ICC_SRE_EL2, x0 600 + msr_s SYS_ICC_SRE_EL2, x0 601 601 isb // Make sure SRE is now set 602 - mrs_s x0, ICC_SRE_EL2 // Read SRE back, 602 + mrs_s x0, SYS_ICC_SRE_EL2 // Read SRE back, 603 603 tbz x0, #0, 3f // and check that it sticks 604 - msr_s ICH_HCR_EL2, xzr // Reset ICC_HCR_EL2 to defaults 604 + msr_s SYS_ICH_HCR_EL2, xzr // Reset ICC_HCR_EL2 to defaults 605 605 606 606 3: 607 607 #endif
+15 -25
arch/arm64/kernel/hyp-stub.S
··· 55 55 .align 11 56 56 57 57 el1_sync: 58 - mrs x30, esr_el2 59 - lsr x30, x30, #ESR_ELx_EC_SHIFT 60 - 61 - cmp x30, #ESR_ELx_EC_HVC64 62 - b.ne 9f // Not an HVC trap 63 - 64 - cmp x0, #HVC_GET_VECTORS 65 - b.ne 1f 66 - mrs x0, vbar_el2 67 - b 9f 68 - 69 - 1: cmp x0, #HVC_SET_VECTORS 58 + cmp x0, #HVC_SET_VECTORS 70 59 b.ne 2f 71 60 msr vbar_el2, x1 72 61 b 9f ··· 68 79 mov x1, x3 69 80 br x4 // no return 70 81 71 - /* Someone called kvm_call_hyp() against the hyp-stub... */ 72 - 3: mov x0, #ARM_EXCEPTION_HYP_GONE 82 + 3: cmp x0, #HVC_RESET_VECTORS 83 + beq 9f // Nothing to reset! 73 84 74 - 9: eret 85 + /* Someone called kvm_call_hyp() against the hyp-stub... */ 86 + ldr x0, =HVC_STUB_ERR 87 + eret 88 + 89 + 9: mov x0, xzr 90 + eret 75 91 ENDPROC(el1_sync) 76 92 77 93 .macro invalid_vector label ··· 115 121 * initialisation entry point. 116 122 */ 117 123 118 - ENTRY(__hyp_get_vectors) 119 - str lr, [sp, #-16]! 120 - mov x0, #HVC_GET_VECTORS 121 - hvc #0 122 - ldr lr, [sp], #16 123 - ret 124 - ENDPROC(__hyp_get_vectors) 125 - 126 124 ENTRY(__hyp_set_vectors) 127 - str lr, [sp, #-16]! 128 125 mov x1, x0 129 126 mov x0, #HVC_SET_VECTORS 130 127 hvc #0 131 - ldr lr, [sp], #16 132 128 ret 133 129 ENDPROC(__hyp_set_vectors) 130 + 131 + ENTRY(__hyp_reset_vectors) 132 + mov x0, #HVC_RESET_VECTORS 133 + hvc #0 134 + ret 135 + ENDPROC(__hyp_reset_vectors)
+37 -11
arch/arm64/kvm/hyp-init.S
··· 22 22 #include <asm/kvm_mmu.h> 23 23 #include <asm/pgtable-hwdef.h> 24 24 #include <asm/sysreg.h> 25 + #include <asm/virt.h> 25 26 26 27 .text 27 28 .pushsection .hyp.idmap.text, "ax" ··· 59 58 * x2: HYP vectors 60 59 */ 61 60 __do_hyp_init: 61 + /* Check for a stub HVC call */ 62 + cmp x0, #HVC_STUB_HCALL_NR 63 + b.lo __kvm_handle_stub_hvc 62 64 63 65 msr ttbr0_el2, x0 64 66 ··· 123 119 eret 124 120 ENDPROC(__kvm_hyp_init) 125 121 122 + ENTRY(__kvm_handle_stub_hvc) 123 + cmp x0, #HVC_SOFT_RESTART 124 + b.ne 1f 125 + 126 + /* This is where we're about to jump, staying at EL2 */ 127 + msr elr_el2, x1 128 + mov x0, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT | PSR_MODE_EL2h) 129 + msr spsr_el2, x0 130 + 131 + /* Shuffle the arguments, and don't come back */ 132 + mov x0, x2 133 + mov x1, x3 134 + mov x2, x4 135 + b reset 136 + 137 + 1: cmp x0, #HVC_RESET_VECTORS 138 + b.ne 1f 139 + reset: 126 140 /* 127 - * Reset kvm back to the hyp stub. 141 + * Reset kvm back to the hyp stub. Do not clobber x0-x4 in 142 + * case we coming via HVC_SOFT_RESTART. 128 143 */ 129 - ENTRY(__kvm_hyp_reset) 130 - /* We're now in idmap, disable MMU */ 131 - mrs x0, sctlr_el2 132 - ldr x1, =SCTLR_ELx_FLAGS 133 - bic x0, x0, x1 // Clear SCTL_M and etc 134 - msr sctlr_el2, x0 144 + mrs x5, sctlr_el2 145 + ldr x6, =SCTLR_ELx_FLAGS 146 + bic x5, x5, x6 // Clear SCTL_M and etc 147 + msr sctlr_el2, x5 135 148 isb 136 149 137 150 /* Install stub vectors */ 138 - adr_l x0, __hyp_stub_vectors 139 - msr vbar_el2, x0 140 - 151 + adr_l x5, __hyp_stub_vectors 152 + msr vbar_el2, x5 153 + mov x0, xzr 141 154 eret 142 - ENDPROC(__kvm_hyp_reset) 155 + 156 + 1: /* Bad stub call */ 157 + ldr x0, =HVC_STUB_ERR 158 + eret 159 + 160 + ENDPROC(__kvm_handle_stub_hvc) 143 161 144 162 .ltorg 145 163
+1 -4
arch/arm64/kvm/hyp.S
··· 36 36 * passed in x0. 37 37 * 38 38 * A function pointer with a value less than 0xfff has a special meaning, 39 - * and is used to implement __hyp_get_vectors in the same way as in 39 + * and is used to implement hyp stubs in the same way as in 40 40 * arch/arm64/kernel/hyp_stub.S. 41 - * HVC behaves as a 'bl' call and will clobber lr. 42 41 */ 43 42 ENTRY(__kvm_call_hyp) 44 43 alternative_if_not ARM64_HAS_VIRT_HOST_EXTN 45 - str lr, [sp, #-16]! 46 44 hvc #0 47 - ldr lr, [sp], #16 48 45 ret 49 46 alternative_else_nop_endif 50 47 b __vhe_hyp_call
+21 -22
arch/arm64/kvm/hyp/hyp-entry.S
··· 32 32 * Shuffle the parameters before calling the function 33 33 * pointed to in x0. Assumes parameters in x[1,2,3]. 34 34 */ 35 + str lr, [sp, #-16]! 35 36 mov lr, x0 36 37 mov x0, x1 37 38 mov x1, x2 38 39 mov x2, x3 39 40 blr lr 41 + ldr lr, [sp], #16 40 42 .endm 41 43 42 44 ENTRY(__vhe_hyp_call) 43 - str lr, [sp, #-16]! 44 45 do_el2_call 45 - ldr lr, [sp], #16 46 46 /* 47 47 * We used to rely on having an exception return to get 48 48 * an implicit isb. In the E2H case, we don't have it anymore. ··· 53 53 ret 54 54 ENDPROC(__vhe_hyp_call) 55 55 56 - /* 57 - * Compute the idmap address of __kvm_hyp_reset based on the idmap 58 - * start passed as a parameter, and jump there. 59 - * 60 - * x0: HYP phys_idmap_start 61 - */ 62 - ENTRY(__kvm_hyp_teardown) 63 - mov x4, x0 64 - adr_l x3, __kvm_hyp_reset 65 - 66 - /* insert __kvm_hyp_reset()s offset into phys_idmap_start */ 67 - bfi x4, x3, #0, #PAGE_SHIFT 68 - br x4 69 - ENDPROC(__kvm_hyp_teardown) 70 - 71 56 el1_sync: // Guest trapped into EL2 72 57 stp x0, x1, [sp, #-16]! 73 58 ··· 72 87 /* Here, we're pretty sure the host called HVC. */ 73 88 ldp x0, x1, [sp], #16 74 89 75 - cmp x0, #HVC_GET_VECTORS 76 - b.ne 1f 77 - mrs x0, vbar_el2 78 - b 2f 90 + /* Check for a stub HVC call */ 91 + cmp x0, #HVC_STUB_HCALL_NR 92 + b.hs 1f 93 + 94 + /* 95 + * Compute the idmap address of __kvm_handle_stub_hvc and 96 + * jump there. Since we use kimage_voffset, do not use the 97 + * HYP VA for __kvm_handle_stub_hvc, but the kernel VA instead 98 + * (by loading it from the constant pool). 99 + * 100 + * Preserve x0-x4, which may contain stub parameters. 101 + */ 102 + ldr x5, =__kvm_handle_stub_hvc 103 + ldr_l x6, kimage_voffset 104 + 105 + /* x5 = __pa(x5) */ 106 + sub x5, x5, x6 107 + br x5 79 108 80 109 1: 81 110 /* ··· 98 99 kern_hyp_va x0 99 100 do_el2_call 100 101 101 - 2: eret 102 + eret 102 103 103 104 el1_trap: 104 105 /*
+165 -301
arch/arm64/kvm/sys_regs.c
··· 55 55 * 64bit interface. 56 56 */ 57 57 58 + static bool read_from_write_only(struct kvm_vcpu *vcpu, 59 + const struct sys_reg_params *params) 60 + { 61 + WARN_ONCE(1, "Unexpected sys_reg read to write-only register\n"); 62 + print_sys_reg_instr(params); 63 + kvm_inject_undefined(vcpu); 64 + return false; 65 + } 66 + 58 67 /* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */ 59 68 static u32 cache_levels; 60 69 ··· 469 460 vcpu_sys_reg(vcpu, PMCR_EL0) = val; 470 461 } 471 462 472 - static bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu) 463 + static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags) 473 464 { 474 465 u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0); 466 + bool enabled = (reg & flags) || vcpu_mode_priv(vcpu); 475 467 476 - return !((reg & ARMV8_PMU_USERENR_EN) || vcpu_mode_priv(vcpu)); 468 + if (!enabled) 469 + kvm_inject_undefined(vcpu); 470 + 471 + return !enabled; 472 + } 473 + 474 + static bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu) 475 + { 476 + return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_EN); 477 477 } 478 478 479 479 static bool pmu_write_swinc_el0_disabled(struct kvm_vcpu *vcpu) 480 480 { 481 - u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0); 482 - 483 - return !((reg & (ARMV8_PMU_USERENR_SW | ARMV8_PMU_USERENR_EN)) 484 - || vcpu_mode_priv(vcpu)); 481 + return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_SW | ARMV8_PMU_USERENR_EN); 485 482 } 486 483 487 484 static bool pmu_access_cycle_counter_el0_disabled(struct kvm_vcpu *vcpu) 488 485 { 489 - u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0); 490 - 491 - return !((reg & (ARMV8_PMU_USERENR_CR | ARMV8_PMU_USERENR_EN)) 492 - || vcpu_mode_priv(vcpu)); 486 + return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_CR | ARMV8_PMU_USERENR_EN); 493 487 } 494 488 495 489 static bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu) 496 490 { 497 - u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0); 498 - 499 - return !((reg & 
(ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_EN)) 500 - || vcpu_mode_priv(vcpu)); 491 + return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_EN); 501 492 } 502 493 503 494 static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p, ··· 576 567 577 568 pmcr = vcpu_sys_reg(vcpu, PMCR_EL0); 578 569 val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK; 579 - if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) 570 + if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) { 571 + kvm_inject_undefined(vcpu); 580 572 return false; 573 + } 581 574 582 575 return true; 583 576 } ··· 718 707 if (!kvm_arm_pmu_v3_ready(vcpu)) 719 708 return trap_raz_wi(vcpu, p, r); 720 709 721 - if (!vcpu_mode_priv(vcpu)) 710 + if (!vcpu_mode_priv(vcpu)) { 711 + kvm_inject_undefined(vcpu); 722 712 return false; 713 + } 723 714 724 715 if (p->is_write) { 725 716 u64 val = p->regval & mask; ··· 772 759 if (!kvm_arm_pmu_v3_ready(vcpu)) 773 760 return trap_raz_wi(vcpu, p, r); 774 761 762 + if (!p->is_write) 763 + return read_from_write_only(vcpu, p); 764 + 775 765 if (pmu_write_swinc_el0_disabled(vcpu)) 776 766 return false; 777 767 778 - if (p->is_write) { 779 - mask = kvm_pmu_valid_counter_mask(vcpu); 780 - kvm_pmu_software_increment(vcpu, p->regval & mask); 781 - return true; 782 - } 783 - 784 - return false; 768 + mask = kvm_pmu_valid_counter_mask(vcpu); 769 + kvm_pmu_software_increment(vcpu, p->regval & mask); 770 + return true; 785 771 } 786 772 787 773 static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p, ··· 790 778 return trap_raz_wi(vcpu, p, r); 791 779 792 780 if (p->is_write) { 793 - if (!vcpu_mode_priv(vcpu)) 781 + if (!vcpu_mode_priv(vcpu)) { 782 + kvm_inject_undefined(vcpu); 794 783 return false; 784 + } 795 785 796 786 vcpu_sys_reg(vcpu, PMUSERENR_EL0) = p->regval 797 787 & ARMV8_PMU_USERENR_MASK; ··· 807 793 808 794 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */ 809 795 #define 
DBG_BCR_BVR_WCR_WVR_EL1(n) \ 810 - /* DBGBVRn_EL1 */ \ 811 - { Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b100), \ 796 + { SYS_DESC(SYS_DBGBVRn_EL1(n)), \ 812 797 trap_bvr, reset_bvr, n, 0, get_bvr, set_bvr }, \ 813 - /* DBGBCRn_EL1 */ \ 814 - { Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b101), \ 798 + { SYS_DESC(SYS_DBGBCRn_EL1(n)), \ 815 799 trap_bcr, reset_bcr, n, 0, get_bcr, set_bcr }, \ 816 - /* DBGWVRn_EL1 */ \ 817 - { Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b110), \ 800 + { SYS_DESC(SYS_DBGWVRn_EL1(n)), \ 818 801 trap_wvr, reset_wvr, n, 0, get_wvr, set_wvr }, \ 819 - /* DBGWCRn_EL1 */ \ 820 - { Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b111), \ 802 + { SYS_DESC(SYS_DBGWCRn_EL1(n)), \ 821 803 trap_wcr, reset_wcr, n, 0, get_wcr, set_wcr } 822 804 823 805 /* Macro to expand the PMEVCNTRn_EL0 register */ 824 806 #define PMU_PMEVCNTR_EL0(n) \ 825 - /* PMEVCNTRn_EL0 */ \ 826 - { Op0(0b11), Op1(0b011), CRn(0b1110), \ 827 - CRm((0b1000 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \ 807 + { SYS_DESC(SYS_PMEVCNTRn_EL0(n)), \ 828 808 access_pmu_evcntr, reset_unknown, (PMEVCNTR0_EL0 + n), } 829 809 830 810 /* Macro to expand the PMEVTYPERn_EL0 register */ 831 811 #define PMU_PMEVTYPER_EL0(n) \ 832 - /* PMEVTYPERn_EL0 */ \ 833 - { Op0(0b11), Op1(0b011), CRn(0b1110), \ 834 - CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \ 812 + { SYS_DESC(SYS_PMEVTYPERn_EL0(n)), \ 835 813 access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), } 836 814 837 815 static bool access_cntp_tval(struct kvm_vcpu *vcpu, ··· 893 887 * more demanding guest... 
894 888 */ 895 889 static const struct sys_reg_desc sys_reg_descs[] = { 896 - /* DC ISW */ 897 - { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b0110), Op2(0b010), 898 - access_dcsw }, 899 - /* DC CSW */ 900 - { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b1010), Op2(0b010), 901 - access_dcsw }, 902 - /* DC CISW */ 903 - { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b1110), Op2(0b010), 904 - access_dcsw }, 890 + { SYS_DESC(SYS_DC_ISW), access_dcsw }, 891 + { SYS_DESC(SYS_DC_CSW), access_dcsw }, 892 + { SYS_DESC(SYS_DC_CISW), access_dcsw }, 905 893 906 894 DBG_BCR_BVR_WCR_WVR_EL1(0), 907 895 DBG_BCR_BVR_WCR_WVR_EL1(1), 908 - /* MDCCINT_EL1 */ 909 - { Op0(0b10), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000), 910 - trap_debug_regs, reset_val, MDCCINT_EL1, 0 }, 911 - /* MDSCR_EL1 */ 912 - { Op0(0b10), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010), 913 - trap_debug_regs, reset_val, MDSCR_EL1, 0 }, 896 + { SYS_DESC(SYS_MDCCINT_EL1), trap_debug_regs, reset_val, MDCCINT_EL1, 0 }, 897 + { SYS_DESC(SYS_MDSCR_EL1), trap_debug_regs, reset_val, MDSCR_EL1, 0 }, 914 898 DBG_BCR_BVR_WCR_WVR_EL1(2), 915 899 DBG_BCR_BVR_WCR_WVR_EL1(3), 916 900 DBG_BCR_BVR_WCR_WVR_EL1(4), ··· 916 920 DBG_BCR_BVR_WCR_WVR_EL1(14), 917 921 DBG_BCR_BVR_WCR_WVR_EL1(15), 918 922 919 - /* MDRAR_EL1 */ 920 - { Op0(0b10), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000), 921 - trap_raz_wi }, 922 - /* OSLAR_EL1 */ 923 - { Op0(0b10), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b100), 924 - trap_raz_wi }, 925 - /* OSLSR_EL1 */ 926 - { Op0(0b10), Op1(0b000), CRn(0b0001), CRm(0b0001), Op2(0b100), 927 - trap_oslsr_el1 }, 928 - /* OSDLR_EL1 */ 929 - { Op0(0b10), Op1(0b000), CRn(0b0001), CRm(0b0011), Op2(0b100), 930 - trap_raz_wi }, 931 - /* DBGPRCR_EL1 */ 932 - { Op0(0b10), Op1(0b000), CRn(0b0001), CRm(0b0100), Op2(0b100), 933 - trap_raz_wi }, 934 - /* DBGCLAIMSET_EL1 */ 935 - { Op0(0b10), Op1(0b000), CRn(0b0111), CRm(0b1000), Op2(0b110), 936 - trap_raz_wi }, 937 - /* DBGCLAIMCLR_EL1 */ 938 - { Op0(0b10), Op1(0b000), 
CRn(0b0111), CRm(0b1001), Op2(0b110), 939 - trap_raz_wi }, 940 - /* DBGAUTHSTATUS_EL1 */ 941 - { Op0(0b10), Op1(0b000), CRn(0b0111), CRm(0b1110), Op2(0b110), 942 - trap_dbgauthstatus_el1 }, 923 + { SYS_DESC(SYS_MDRAR_EL1), trap_raz_wi }, 924 + { SYS_DESC(SYS_OSLAR_EL1), trap_raz_wi }, 925 + { SYS_DESC(SYS_OSLSR_EL1), trap_oslsr_el1 }, 926 + { SYS_DESC(SYS_OSDLR_EL1), trap_raz_wi }, 927 + { SYS_DESC(SYS_DBGPRCR_EL1), trap_raz_wi }, 928 + { SYS_DESC(SYS_DBGCLAIMSET_EL1), trap_raz_wi }, 929 + { SYS_DESC(SYS_DBGCLAIMCLR_EL1), trap_raz_wi }, 930 + { SYS_DESC(SYS_DBGAUTHSTATUS_EL1), trap_dbgauthstatus_el1 }, 943 931 944 - /* MDCCSR_EL1 */ 945 - { Op0(0b10), Op1(0b011), CRn(0b0000), CRm(0b0001), Op2(0b000), 946 - trap_raz_wi }, 947 - /* DBGDTR_EL0 */ 948 - { Op0(0b10), Op1(0b011), CRn(0b0000), CRm(0b0100), Op2(0b000), 949 - trap_raz_wi }, 950 - /* DBGDTR[TR]X_EL0 */ 951 - { Op0(0b10), Op1(0b011), CRn(0b0000), CRm(0b0101), Op2(0b000), 952 - trap_raz_wi }, 932 + { SYS_DESC(SYS_MDCCSR_EL0), trap_raz_wi }, 933 + { SYS_DESC(SYS_DBGDTR_EL0), trap_raz_wi }, 934 + // DBGDTR[TR]X_EL0 share the same encoding 935 + { SYS_DESC(SYS_DBGDTRTX_EL0), trap_raz_wi }, 953 936 954 - /* DBGVCR32_EL2 */ 955 - { Op0(0b10), Op1(0b100), CRn(0b0000), CRm(0b0111), Op2(0b000), 956 - NULL, reset_val, DBGVCR32_EL2, 0 }, 937 + { SYS_DESC(SYS_DBGVCR32_EL2), NULL, reset_val, DBGVCR32_EL2, 0 }, 957 938 958 - /* MPIDR_EL1 */ 959 - { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b101), 960 - NULL, reset_mpidr, MPIDR_EL1 }, 961 - /* SCTLR_EL1 */ 962 - { Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000), 963 - access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 }, 964 - /* CPACR_EL1 */ 965 - { Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b010), 966 - NULL, reset_val, CPACR_EL1, 0 }, 967 - /* TTBR0_EL1 */ 968 - { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b000), 969 - access_vm_reg, reset_unknown, TTBR0_EL1 }, 970 - /* TTBR1_EL1 */ 971 - { Op0(0b11), Op1(0b000), CRn(0b0010), 
-	  CRm(0b0000), Op2(0b001),
-	  access_vm_reg, reset_unknown, TTBR1_EL1 },
-	/* TCR_EL1 */
-	{ Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b010),
-	  access_vm_reg, reset_val, TCR_EL1, 0 },
+	{ SYS_DESC(SYS_MPIDR_EL1), NULL, reset_mpidr, MPIDR_EL1 },
+	{ SYS_DESC(SYS_SCTLR_EL1), access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 },
+	{ SYS_DESC(SYS_CPACR_EL1), NULL, reset_val, CPACR_EL1, 0 },
+	{ SYS_DESC(SYS_TTBR0_EL1), access_vm_reg, reset_unknown, TTBR0_EL1 },
+	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
+	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },

-	/* AFSR0_EL1 */
-	{ Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b000),
-	  access_vm_reg, reset_unknown, AFSR0_EL1 },
-	/* AFSR1_EL1 */
-	{ Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b001),
-	  access_vm_reg, reset_unknown, AFSR1_EL1 },
-	/* ESR_EL1 */
-	{ Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0010), Op2(0b000),
-	  access_vm_reg, reset_unknown, ESR_EL1 },
-	/* FAR_EL1 */
-	{ Op0(0b11), Op1(0b000), CRn(0b0110), CRm(0b0000), Op2(0b000),
-	  access_vm_reg, reset_unknown, FAR_EL1 },
-	/* PAR_EL1 */
-	{ Op0(0b11), Op1(0b000), CRn(0b0111), CRm(0b0100), Op2(0b000),
-	  NULL, reset_unknown, PAR_EL1 },
+	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
+	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
+	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
+	{ SYS_DESC(SYS_FAR_EL1), access_vm_reg, reset_unknown, FAR_EL1 },
+	{ SYS_DESC(SYS_PAR_EL1), NULL, reset_unknown, PAR_EL1 },

-	/* PMINTENSET_EL1 */
-	{ Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b001),
-	  access_pminten, reset_unknown, PMINTENSET_EL1 },
-	/* PMINTENCLR_EL1 */
-	{ Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b010),
-	  access_pminten, NULL, PMINTENSET_EL1 },
+	{ SYS_DESC(SYS_PMINTENSET_EL1), access_pminten, reset_unknown, PMINTENSET_EL1 },
+	{ SYS_DESC(SYS_PMINTENCLR_EL1), access_pminten, NULL, PMINTENSET_EL1 },

-	/* MAIR_EL1 */
-	{ Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0010), Op2(0b000),
-	  access_vm_reg, reset_unknown, MAIR_EL1 },
-	/* AMAIR_EL1 */
-	{ Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0011), Op2(0b000),
-	  access_vm_reg, reset_amair_el1, AMAIR_EL1 },
+	{ SYS_DESC(SYS_MAIR_EL1), access_vm_reg, reset_unknown, MAIR_EL1 },
+	{ SYS_DESC(SYS_AMAIR_EL1), access_vm_reg, reset_amair_el1, AMAIR_EL1 },

-	/* VBAR_EL1 */
-	{ Op0(0b11), Op1(0b000), CRn(0b1100), CRm(0b0000), Op2(0b000),
-	  NULL, reset_val, VBAR_EL1, 0 },
+	{ SYS_DESC(SYS_VBAR_EL1), NULL, reset_val, VBAR_EL1, 0 },

-	/* ICC_SGI1R_EL1 */
-	{ Op0(0b11), Op1(0b000), CRn(0b1100), CRm(0b1011), Op2(0b101),
-	  access_gic_sgi },
-	/* ICC_SRE_EL1 */
-	{ Op0(0b11), Op1(0b000), CRn(0b1100), CRm(0b1100), Op2(0b101),
-	  access_gic_sre },
+	{ SYS_DESC(SYS_ICC_SGI1R_EL1), access_gic_sgi },
+	{ SYS_DESC(SYS_ICC_SRE_EL1), access_gic_sre },

-	/* CONTEXTIDR_EL1 */
-	{ Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b001),
-	  access_vm_reg, reset_val, CONTEXTIDR_EL1, 0 },
-	/* TPIDR_EL1 */
-	{ Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b100),
-	  NULL, reset_unknown, TPIDR_EL1 },
+	{ SYS_DESC(SYS_CONTEXTIDR_EL1), access_vm_reg, reset_val, CONTEXTIDR_EL1, 0 },
+	{ SYS_DESC(SYS_TPIDR_EL1), NULL, reset_unknown, TPIDR_EL1 },

-	/* CNTKCTL_EL1 */
-	{ Op0(0b11), Op1(0b000), CRn(0b1110), CRm(0b0001), Op2(0b000),
-	  NULL, reset_val, CNTKCTL_EL1, 0},
+	{ SYS_DESC(SYS_CNTKCTL_EL1), NULL, reset_val, CNTKCTL_EL1, 0},

-	/* CSSELR_EL1 */
-	{ Op0(0b11), Op1(0b010), CRn(0b0000), CRm(0b0000), Op2(0b000),
-	  NULL, reset_unknown, CSSELR_EL1 },
+	{ SYS_DESC(SYS_CSSELR_EL1), NULL, reset_unknown, CSSELR_EL1 },

-	/* PMCR_EL0 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b000),
-	  access_pmcr, reset_pmcr, },
-	/* PMCNTENSET_EL0 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
-	  access_pmcnten, reset_unknown, PMCNTENSET_EL0 },
-	/* PMCNTENCLR_EL0 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b010),
-	  access_pmcnten, NULL, PMCNTENSET_EL0 },
-	/* PMOVSCLR_EL0 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
-	  access_pmovs, NULL, PMOVSSET_EL0 },
-	/* PMSWINC_EL0 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b100),
-	  access_pmswinc, reset_unknown, PMSWINC_EL0 },
-	/* PMSELR_EL0 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101),
-	  access_pmselr, reset_unknown, PMSELR_EL0 },
-	/* PMCEID0_EL0 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
-	  access_pmceid },
-	/* PMCEID1_EL0 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b111),
-	  access_pmceid },
-	/* PMCCNTR_EL0 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
-	  access_pmu_evcntr, reset_unknown, PMCCNTR_EL0 },
-	/* PMXEVTYPER_EL0 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001),
-	  access_pmu_evtyper },
-	/* PMXEVCNTR_EL0 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
-	  access_pmu_evcntr },
-	/* PMUSERENR_EL0
-	 * This register resets as unknown in 64bit mode while it resets as zero
+	{ SYS_DESC(SYS_PMCR_EL0), access_pmcr, reset_pmcr, },
+	{ SYS_DESC(SYS_PMCNTENSET_EL0), access_pmcnten, reset_unknown, PMCNTENSET_EL0 },
+	{ SYS_DESC(SYS_PMCNTENCLR_EL0), access_pmcnten, NULL, PMCNTENSET_EL0 },
+	{ SYS_DESC(SYS_PMOVSCLR_EL0), access_pmovs, NULL, PMOVSSET_EL0 },
+	{ SYS_DESC(SYS_PMSWINC_EL0), access_pmswinc, reset_unknown, PMSWINC_EL0 },
+	{ SYS_DESC(SYS_PMSELR_EL0), access_pmselr, reset_unknown, PMSELR_EL0 },
+	{ SYS_DESC(SYS_PMCEID0_EL0), access_pmceid },
+	{ SYS_DESC(SYS_PMCEID1_EL0), access_pmceid },
+	{ SYS_DESC(SYS_PMCCNTR_EL0), access_pmu_evcntr, reset_unknown, PMCCNTR_EL0 },
+	{ SYS_DESC(SYS_PMXEVTYPER_EL0), access_pmu_evtyper },
+	{ SYS_DESC(SYS_PMXEVCNTR_EL0), access_pmu_evcntr },
+	/*
+	 * PMUSERENR_EL0 resets as unknown in 64bit mode while it resets as zero
 	 * in 32bit mode. Here we choose to reset it as zero for consistency.
 	 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
-	  access_pmuserenr, reset_val, PMUSERENR_EL0, 0 },
-	/* PMOVSSET_EL0 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
-	  access_pmovs, reset_unknown, PMOVSSET_EL0 },
+	{ SYS_DESC(SYS_PMUSERENR_EL0), access_pmuserenr, reset_val, PMUSERENR_EL0, 0 },
+	{ SYS_DESC(SYS_PMOVSSET_EL0), access_pmovs, reset_unknown, PMOVSSET_EL0 },

-	/* TPIDR_EL0 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b010),
-	  NULL, reset_unknown, TPIDR_EL0 },
-	/* TPIDRRO_EL0 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
-	  NULL, reset_unknown, TPIDRRO_EL0 },
+	{ SYS_DESC(SYS_TPIDR_EL0), NULL, reset_unknown, TPIDR_EL0 },
+	{ SYS_DESC(SYS_TPIDRRO_EL0), NULL, reset_unknown, TPIDRRO_EL0 },

-	/* CNTP_TVAL_EL0 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b0010), Op2(0b000),
-	  access_cntp_tval },
-	/* CNTP_CTL_EL0 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b0010), Op2(0b001),
-	  access_cntp_ctl },
-	/* CNTP_CVAL_EL0 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b0010), Op2(0b010),
-	  access_cntp_cval },
+	{ SYS_DESC(SYS_CNTP_TVAL_EL0), access_cntp_tval },
+	{ SYS_DESC(SYS_CNTP_CTL_EL0), access_cntp_ctl },
+	{ SYS_DESC(SYS_CNTP_CVAL_EL0), access_cntp_cval },

 	/* PMEVCNTRn_EL0 */
 	PMU_PMEVCNTR_EL0(0),
···
 	PMU_PMEVTYPER_EL0(28),
 	PMU_PMEVTYPER_EL0(29),
 	PMU_PMEVTYPER_EL0(30),
-	/* PMCCFILTR_EL0
-	 * This register resets as unknown in 64bit mode while it resets as zero
+	/*
+	 * PMCCFILTR_EL0 resets as unknown in 64bit mode while it resets as zero
 	 * in 32bit mode. Here we choose to reset it as zero for consistency.
 	 */
-	{ Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b1111), Op2(0b111),
-	  access_pmu_evtyper, reset_val, PMCCFILTR_EL0, 0 },
+	{ SYS_DESC(SYS_PMCCFILTR_EL0), access_pmu_evtyper, reset_val, PMCCFILTR_EL0, 0 },

-	/* DACR32_EL2 */
-	{ Op0(0b11), Op1(0b100), CRn(0b0011), CRm(0b0000), Op2(0b000),
-	  NULL, reset_unknown, DACR32_EL2 },
-	/* IFSR32_EL2 */
-	{ Op0(0b11), Op1(0b100), CRn(0b0101), CRm(0b0000), Op2(0b001),
-	  NULL, reset_unknown, IFSR32_EL2 },
-	/* FPEXC32_EL2 */
-	{ Op0(0b11), Op1(0b100), CRn(0b0101), CRm(0b0011), Op2(0b000),
-	  NULL, reset_val, FPEXC32_EL2, 0x70 },
+	{ SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 },
+	{ SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 },
+	{ SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x70 },
 };

 static bool trap_dbgidr(struct kvm_vcpu *vcpu,
···
 	return 1;
 }

+static void perform_access(struct kvm_vcpu *vcpu,
+			   struct sys_reg_params *params,
+			   const struct sys_reg_desc *r)
+{
+	/*
+	 * Not having an accessor means that we have configured a trap
+	 * that we don't know how to handle. This certainly qualifies
+	 * as a gross bug that should be fixed right away.
+	 */
+	BUG_ON(!r->access);
+
+	/* Skip instruction if instructed so */
+	if (likely(r->access(vcpu, params, r)))
+		kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+}
+
 /*
  * emulate_cp -- tries to match a sys_reg access in a handling table, and
  * call the corresponding trap handler.
···
 	r = find_reg(params, table, num);

 	if (r) {
-		/*
-		 * Not having an accessor means that we have
-		 * configured a trap that we don't know how to
-		 * handle. This certainly qualifies as a gross bug
-		 * that should be fixed right away.
-		 */
-		BUG_ON(!r->access);
-
-		if (likely(r->access(vcpu, params, r))) {
-			/* Skip instruction, since it was emulated */
-			kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
-			/* Handled */
-			return 0;
-		}
+		perform_access(vcpu, params, r);
+		return 0;
 	}

 	/* Not handled */
···
 		params.regval |= vcpu_get_reg(vcpu, Rt2) << 32;
 	}

-	if (!emulate_cp(vcpu, &params, target_specific, nr_specific))
-		goto out;
-	if (!emulate_cp(vcpu, &params, global, nr_global))
-		goto out;
+	/*
+	 * Try to emulate the coprocessor access using the target
+	 * specific table first, and using the global table afterwards.
+	 * If either of the tables contains a handler, handle the
+	 * potential register operation in the case of a read and return
+	 * with success.
+	 */
+	if (!emulate_cp(vcpu, &params, target_specific, nr_specific) ||
+	    !emulate_cp(vcpu, &params, global, nr_global)) {
+		/* Split up the value between registers for the read side */
+		if (!params.is_write) {
+			vcpu_set_reg(vcpu, Rt, lower_32_bits(params.regval));
+			vcpu_set_reg(vcpu, Rt2, upper_32_bits(params.regval));
+		}

-	unhandled_cp_access(vcpu, &params);
-
-out:
-	/* Split up the value between registers for the read side */
-	if (!params.is_write) {
-		vcpu_set_reg(vcpu, Rt, lower_32_bits(params.regval));
-		vcpu_set_reg(vcpu, Rt2, upper_32_bits(params.regval));
+		return 1;
 	}

+	unhandled_cp_access(vcpu, &params);
 	return 1;
 }
···
 	r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));

 	if (likely(r)) {
-		/*
-		 * Not having an accessor means that we have
-		 * configured a trap that we don't know how to
-		 * handle. This certainly qualifies as a gross bug
-		 * that should be fixed right away.
-		 */
-		BUG_ON(!r->access);
-
-		if (likely(r->access(vcpu, params, r))) {
-			/* Skip instruction, since it was emulated */
-			kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
-			return 1;
-		}
-		/* If access function fails, it should complain. */
+		perform_access(vcpu, params, r);
 	} else {
 		kvm_err("Unsupported guest sys_reg access at: %lx\n",
 			*vcpu_pc(vcpu));
 		print_sys_reg_instr(params);
+		kvm_inject_undefined(vcpu);
 	}
-	kvm_inject_undefined(vcpu);
 	return 1;
 }
···

 /* ->val is filled in by kvm_sys_reg_table_init() */
 static struct sys_reg_desc invariant_sys_regs[] = {
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
-	  NULL, get_midr_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
-	  NULL, get_revidr_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
-	  NULL, get_id_pfr0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
-	  NULL, get_id_pfr1_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
-	  NULL, get_id_dfr0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
-	  NULL, get_id_afr0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
-	  NULL, get_id_mmfr0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
-	  NULL, get_id_mmfr1_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
-	  NULL, get_id_mmfr2_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
-	  NULL, get_id_mmfr3_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
-	  NULL, get_id_isar0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
-	  NULL, get_id_isar1_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
-	  NULL, get_id_isar2_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
-	  NULL, get_id_isar3_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
-	  NULL, get_id_isar4_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
-	  NULL, get_id_isar5_el1 },
-	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
-	  NULL, get_clidr_el1 },
-	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
-	  NULL, get_aidr_el1 },
-	{ Op0(0b11), Op1(0b011), CRn(0b0000), CRm(0b0000), Op2(0b001),
-	  NULL, get_ctr_el0 },
+	{ SYS_DESC(SYS_MIDR_EL1), NULL, get_midr_el1 },
+	{ SYS_DESC(SYS_REVIDR_EL1), NULL, get_revidr_el1 },
+	{ SYS_DESC(SYS_ID_PFR0_EL1), NULL, get_id_pfr0_el1 },
+	{ SYS_DESC(SYS_ID_PFR1_EL1), NULL, get_id_pfr1_el1 },
+	{ SYS_DESC(SYS_ID_DFR0_EL1), NULL, get_id_dfr0_el1 },
+	{ SYS_DESC(SYS_ID_AFR0_EL1), NULL, get_id_afr0_el1 },
+	{ SYS_DESC(SYS_ID_MMFR0_EL1), NULL, get_id_mmfr0_el1 },
+	{ SYS_DESC(SYS_ID_MMFR1_EL1), NULL, get_id_mmfr1_el1 },
+	{ SYS_DESC(SYS_ID_MMFR2_EL1), NULL, get_id_mmfr2_el1 },
+	{ SYS_DESC(SYS_ID_MMFR3_EL1), NULL, get_id_mmfr3_el1 },
+	{ SYS_DESC(SYS_ID_ISAR0_EL1), NULL, get_id_isar0_el1 },
+	{ SYS_DESC(SYS_ID_ISAR1_EL1), NULL, get_id_isar1_el1 },
+	{ SYS_DESC(SYS_ID_ISAR2_EL1), NULL, get_id_isar2_el1 },
+	{ SYS_DESC(SYS_ID_ISAR3_EL1), NULL, get_id_isar3_el1 },
+	{ SYS_DESC(SYS_ID_ISAR4_EL1), NULL, get_id_isar4_el1 },
+	{ SYS_DESC(SYS_ID_ISAR5_EL1), NULL, get_id_isar5_el1 },
+	{ SYS_DESC(SYS_CLIDR_EL1), NULL, get_clidr_el1 },
+	{ SYS_DESC(SYS_AIDR_EL1), NULL, get_aidr_el1 },
+	{ SYS_DESC(SYS_CTR_EL0), NULL, get_ctr_el0 },
 };

 static int reg_from_user(u64 *val, const void __user *uaddr, u64 id)
+5 -18
arch/arm64/kvm/sys_regs.h
···
 		return true;
 	}

-static inline bool write_to_read_only(struct kvm_vcpu *vcpu,
-				      const struct sys_reg_params *params)
-{
-	kvm_debug("sys_reg write to read-only register at: %lx\n",
-		  *vcpu_pc(vcpu));
-	print_sys_reg_instr(params);
-	return false;
-}
-
-static inline bool read_from_write_only(struct kvm_vcpu *vcpu,
-					const struct sys_reg_params *params)
-{
-	kvm_debug("sys_reg read to write-only register at: %lx\n",
-		  *vcpu_pc(vcpu));
-	print_sys_reg_instr(params);
-	return false;
-}
-
 /* Reset functions */
 static inline void reset_unknown(struct kvm_vcpu *vcpu,
 				 const struct sys_reg_desc *r)
···
 #define CRn(_x)		.CRn = _x
 #define CRm(_x)		.CRm = _x
 #define Op2(_x)		.Op2 = _x
+
+#define SYS_DESC(reg)					\
+	Op0(sys_reg_Op0(reg)), Op1(sys_reg_Op1(reg)),	\
+	CRn(sys_reg_CRn(reg)), CRm(sys_reg_CRm(reg)),	\
+	Op2(sys_reg_Op2(reg))

 #endif /* __ARM64_KVM_SYS_REGS_LOCAL_H__ */
+1 -3
arch/arm64/kvm/sys_regs_generic_v8.c
···
  * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
  */
 static const struct sys_reg_desc genericv8_sys_regs[] = {
-	/* ACTLR_EL1 */
-	{ Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b001),
-	  access_actlr, reset_actlr, ACTLR_EL1 },
+	{ SYS_DESC(SYS_ACTLR_EL1), access_actlr, reset_actlr, ACTLR_EL1 },
 };

 static const struct sys_reg_desc genericv8_cp15_regs[] = {
+2
include/kvm/arm_arch_timer.h
···
 void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
 void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu);
 void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu);
+bool kvm_timer_should_notify_user(struct kvm_vcpu *vcpu);
+void kvm_timer_update_run(struct kvm_vcpu *vcpu);
 void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu);

 u64 kvm_arm_timer_get_reg(struct kvm_vcpu *, u64 regid);
+7
include/kvm/arm_pmu.h
···
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
 void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
+bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu);
+void kvm_pmu_update_run(struct kvm_vcpu *vcpu);
 void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
···
 static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
+static inline bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+static inline void kvm_pmu_update_run(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
+3 -6
include/kvm/arm_vgic.h
···
 struct vgic_v2_cpu_if {
 	u32		vgic_hcr;
 	u32		vgic_vmcr;
-	u32		vgic_misr;	/* Saved only */
-	u64		vgic_eisr;	/* Saved only */
 	u64		vgic_elrsr;	/* Saved only */
 	u32		vgic_apr;
 	u32		vgic_lr[VGIC_V2_MAX_LRS];
···
 	u32		vgic_hcr;
 	u32		vgic_vmcr;
 	u32		vgic_sre;	/* Restored only, change ignored */
-	u32		vgic_misr;	/* Saved only */
-	u32		vgic_eisr;	/* Saved only */
 	u32		vgic_elrsr;	/* Saved only */
 	u32		vgic_ap0r[4];
 	u32		vgic_ap1r[4];
···
 	 * VCPU.
 	 */
 	struct list_head ap_list_head;
-
-	u64 live_lrs;

 	/*
 	 * Members below are used with GICv3 emulation only and represent
···
 bool kvm_vgic_map_is_active(struct kvm_vcpu *vcpu, unsigned int virt_irq);

 int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu);
+
+void kvm_vgic_load(struct kvm_vcpu *vcpu);
+void kvm_vgic_put(struct kvm_vcpu *vcpu);

 #define irqchip_in_kernel(k)	(!!((k)->arch.vgic.in_kernel))
 #define vgic_initialized(k)	((k)->arch.vgic.initialized)
+8
include/uapi/linux/kvm.h
···
 #define KVM_CAP_S390_AIS 141
 #define KVM_CAP_SPAPR_TCE_VFIO 142
 #define KVM_CAP_X86_GUEST_MWAIT 143
+#define KVM_CAP_ARM_USER_IRQ 144

 #ifdef KVM_CAP_IRQ_ROUTING
···
 #define KVM_X2APIC_API_USE_32BIT_IDS            (1ULL << 0)
 #define KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK  (1ULL << 1)
+
+/* Available with KVM_CAP_ARM_USER_IRQ */
+
+/* Bits for run->s.regs.device_irq_level */
+#define KVM_ARM_DEV_EL1_VTIMER		(1 << 0)
+#define KVM_ARM_DEV_EL1_PTIMER		(1 << 1)
+#define KVM_ARM_DEV_PMU			(1 << 2)

 #endif /* __LINUX_KVM_H */
+98 -26
virt/kvm/arm/arch_timer.c
···
 	return cval <= now;
 }

+/*
+ * Reflect the timer output level into the kvm_run structure
+ */
+void kvm_timer_update_run(struct kvm_vcpu *vcpu)
+{
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+	struct kvm_sync_regs *regs = &vcpu->run->s.regs;
+
+	/* Populate the device bitmap with the timer states */
+	regs->device_irq_level &= ~(KVM_ARM_DEV_EL1_VTIMER |
+				    KVM_ARM_DEV_EL1_PTIMER);
+	if (vtimer->irq.level)
+		regs->device_irq_level |= KVM_ARM_DEV_EL1_VTIMER;
+	if (ptimer->irq.level)
+		regs->device_irq_level |= KVM_ARM_DEV_EL1_PTIMER;
+}
+
 static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
 				 struct arch_timer_context *timer_ctx)
 {
 	int ret;
-
-	BUG_ON(!vgic_initialized(vcpu->kvm));

 	timer_ctx->active_cleared_last = false;
 	timer_ctx->irq.level = new_level;
 	trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_ctx->irq.irq,
 				   timer_ctx->irq.level);

-	ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, timer_ctx->irq.irq,
-				  timer_ctx->irq.level);
-	WARN_ON(ret);
+	if (likely(irqchip_in_kernel(vcpu->kvm))) {
+		ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
+					  timer_ctx->irq.irq,
+					  timer_ctx->irq.level);
+		WARN_ON(ret);
+	}
 }

 /*
  * Check if there was a change in the timer state (should we raise or lower
  * the line level to the GIC).
  */
-static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
+static void kvm_timer_update_state(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
 	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
···
 	 * because the guest would never see the interrupt. Instead wait
 	 * until we call this function from kvm_timer_flush_hwstate.
 	 */
-	if (!vgic_initialized(vcpu->kvm) || !timer->enabled)
-		return -ENODEV;
+	if (unlikely(!timer->enabled))
+		return;

 	if (kvm_timer_should_fire(vtimer) != vtimer->irq.level)
 		kvm_timer_update_irq(vcpu, !vtimer->irq.level, vtimer);

 	if (kvm_timer_should_fire(ptimer) != ptimer->irq.level)
 		kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer);
-
-	return 0;
 }

 /* Schedule the background timer for the emulated timer. */
···
 	timer_disarm(timer);
 }

-/**
- * kvm_timer_flush_hwstate - prepare to move the virt timer to the cpu
- * @vcpu: The vcpu pointer
- *
- * Check if the virtual timer has expired while we were running in the host,
- * and inject an interrupt if that was the case.
- */
-void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
+static void kvm_timer_flush_hwstate_vgic(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 	bool phys_active;
 	int ret;
-
-	if (kvm_timer_update_state(vcpu))
-		return;
-
-	/* Set the background timer for the physical timer emulation. */
-	kvm_timer_emulate(vcpu, vcpu_ptimer(vcpu));

 	/*
 	 * If we enter the guest with the virtual input level to the VGIC
···
 	vtimer->active_cleared_last = !phys_active;
 }

+bool kvm_timer_should_notify_user(struct kvm_vcpu *vcpu)
+{
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+	struct kvm_sync_regs *sregs = &vcpu->run->s.regs;
+	bool vlevel, plevel;
+
+	if (likely(irqchip_in_kernel(vcpu->kvm)))
+		return false;
+
+	vlevel = sregs->device_irq_level & KVM_ARM_DEV_EL1_VTIMER;
+	plevel = sregs->device_irq_level & KVM_ARM_DEV_EL1_PTIMER;
+
+	return vtimer->irq.level != vlevel ||
+	       ptimer->irq.level != plevel;
+}
+
+static void kvm_timer_flush_hwstate_user(struct kvm_vcpu *vcpu)
+{
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+
+	/*
+	 * To prevent continuously exiting from the guest, we mask the
+	 * physical interrupt such that the guest can make forward progress.
+	 * Once we detect the output level being deasserted, we unmask the
+	 * interrupt again so that we exit from the guest when the timer
+	 * fires.
+	 */
+	if (vtimer->irq.level)
+		disable_percpu_irq(host_vtimer_irq);
+	else
+		enable_percpu_irq(host_vtimer_irq, 0);
+}
+
+/**
+ * kvm_timer_flush_hwstate - prepare timers before running the vcpu
+ * @vcpu: The vcpu pointer
+ *
+ * Check if the virtual timer has expired while we were running in the host,
+ * and inject an interrupt if that was the case, making sure the timer is
+ * masked or disabled on the host so that we keep executing.  Also schedule a
+ * software timer for the physical timer if it is enabled.
+ */
+void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
+{
+	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+
+	if (unlikely(!timer->enabled))
+		return;
+
+	kvm_timer_update_state(vcpu);
+
+	/* Set the background timer for the physical timer emulation. */
+	kvm_timer_emulate(vcpu, vcpu_ptimer(vcpu));
+
+	if (unlikely(!irqchip_in_kernel(vcpu->kvm)))
+		kvm_timer_flush_hwstate_user(vcpu);
+	else
+		kvm_timer_flush_hwstate_vgic(vcpu);
+}
+
 /**
  * kvm_timer_sync_hwstate - sync timer state from cpu
  * @vcpu: The vcpu pointer
  *
- * Check if the virtual timer has expired while we were running in the guest,
+ * Check if any of the timers have expired while we were running in the guest,
  * and inject an interrupt if that was the case.
  */
 void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu)
···
 	if (timer->enabled)
 		return 0;

+	/* Without a VGIC we do not map virtual IRQs to physical IRQs */
+	if (!irqchip_in_kernel(vcpu->kvm))
+		goto no_vgic;
+
+	if (!vgic_initialized(vcpu->kvm))
+		return -ENODEV;
+
 	/*
 	 * Find the physical IRQ number corresponding to the host_vtimer_irq
 	 */
···
 	if (ret)
 		return ret;

+no_vgic:
 	timer->enabled = 1;
-
 	return 0;
 }
+7 -71
virt/kvm/arm/hyp/vgic-v2-sr.c
···
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_hyp.h>

-static void __hyp_text save_maint_int_state(struct kvm_vcpu *vcpu,
-					    void __iomem *base)
-{
-	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
-	int nr_lr = (kern_hyp_va(&kvm_vgic_global_state))->nr_lr;
-	u32 eisr0, eisr1;
-	int i;
-	bool expect_mi;
-
-	expect_mi = !!(cpu_if->vgic_hcr & GICH_HCR_UIE);
-
-	for (i = 0; i < nr_lr; i++) {
-		if (!(vcpu->arch.vgic_cpu.live_lrs & (1UL << i)))
-			continue;
-
-		expect_mi |= (!(cpu_if->vgic_lr[i] & GICH_LR_HW) &&
-			      (cpu_if->vgic_lr[i] & GICH_LR_EOI));
-	}
-
-	if (expect_mi) {
-		cpu_if->vgic_misr = readl_relaxed(base + GICH_MISR);
-
-		if (cpu_if->vgic_misr & GICH_MISR_EOI) {
-			eisr0 = readl_relaxed(base + GICH_EISR0);
-			if (unlikely(nr_lr > 32))
-				eisr1 = readl_relaxed(base + GICH_EISR1);
-			else
-				eisr1 = 0;
-		} else {
-			eisr0 = eisr1 = 0;
-		}
-	} else {
-		cpu_if->vgic_misr = 0;
-		eisr0 = eisr1 = 0;
-	}
-
-#ifdef CONFIG_CPU_BIG_ENDIAN
-	cpu_if->vgic_eisr = ((u64)eisr0 << 32) | eisr1;
-#else
-	cpu_if->vgic_eisr = ((u64)eisr1 << 32) | eisr0;
-#endif
-}
-
 static void __hyp_text save_elrsr(struct kvm_vcpu *vcpu, void __iomem *base)
 {
 	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
···
 static void __hyp_text save_lrs(struct kvm_vcpu *vcpu, void __iomem *base)
 {
 	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
-	int nr_lr = (kern_hyp_va(&kvm_vgic_global_state))->nr_lr;
 	int i;
+	u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;

-	for (i = 0; i < nr_lr; i++) {
-		if (!(vcpu->arch.vgic_cpu.live_lrs & (1UL << i)))
-			continue;
-
+	for (i = 0; i < used_lrs; i++) {
 		if (cpu_if->vgic_elrsr & (1UL << i))
 			cpu_if->vgic_lr[i] &= ~GICH_LR_STATE;
 		else
···
 	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
 	struct vgic_dist *vgic = &kvm->arch.vgic;
 	void __iomem *base = kern_hyp_va(vgic->vctrl_base);
+	u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;

 	if (!base)
 		return;

-	cpu_if->vgic_vmcr = readl_relaxed(base + GICH_VMCR);
-
-	if (vcpu->arch.vgic_cpu.live_lrs) {
+	if (used_lrs) {
 		cpu_if->vgic_apr = readl_relaxed(base + GICH_APR);

-		save_maint_int_state(vcpu, base);
 		save_elrsr(vcpu, base);
 		save_lrs(vcpu, base);

 		writel_relaxed(0, base + GICH_HCR);
-
-		vcpu->arch.vgic_cpu.live_lrs = 0;
 	} else {
-		cpu_if->vgic_eisr = 0;
 		cpu_if->vgic_elrsr = ~0UL;
-		cpu_if->vgic_misr = 0;
 		cpu_if->vgic_apr = 0;
 	}
 }
···
 	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
 	struct vgic_dist *vgic = &kvm->arch.vgic;
 	void __iomem *base = kern_hyp_va(vgic->vctrl_base);
-	int nr_lr = (kern_hyp_va(&kvm_vgic_global_state))->nr_lr;
 	int i;
-	u64 live_lrs = 0;
+	u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;

 	if (!base)
 		return;

-
-	for (i = 0; i < nr_lr; i++)
-		if (cpu_if->vgic_lr[i] & GICH_LR_STATE)
-			live_lrs |= 1UL << i;
-
-	if (live_lrs) {
+	if (used_lrs) {
 		writel_relaxed(cpu_if->vgic_hcr, base + GICH_HCR);
 		writel_relaxed(cpu_if->vgic_apr, base + GICH_APR);
-		for (i = 0; i < nr_lr; i++) {
-			if (!(live_lrs & (1UL << i)))
-				continue;
-
+		for (i = 0; i < used_lrs; i++) {
 			writel_relaxed(cpu_if->vgic_lr[i],
 				       base + GICH_LR0 + (i * 4));
 		}
 	}
-
-	writel_relaxed(cpu_if->vgic_vmcr, base + GICH_VMCR);
-	vcpu->arch.vgic_cpu.live_lrs = live_lrs;
 }

 #ifdef CONFIG_ARM64
+24 -63
virt/kvm/arm/hyp/vgic-v3-sr.c
···
 	}
 }

-static void __hyp_text save_maint_int_state(struct kvm_vcpu *vcpu, int nr_lr)
-{
-	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
-	int i;
-	bool expect_mi;
-
-	expect_mi = !!(cpu_if->vgic_hcr & ICH_HCR_UIE);
-
-	for (i = 0; i < nr_lr; i++) {
-		if (!(vcpu->arch.vgic_cpu.live_lrs & (1UL << i)))
-			continue;
-
-		expect_mi |= (!(cpu_if->vgic_lr[i] & ICH_LR_HW) &&
-			      (cpu_if->vgic_lr[i] & ICH_LR_EOI));
-	}
-
-	if (expect_mi) {
-		cpu_if->vgic_misr = read_gicreg(ICH_MISR_EL2);
-
-		if (cpu_if->vgic_misr & ICH_MISR_EOI)
-			cpu_if->vgic_eisr = read_gicreg(ICH_EISR_EL2);
-		else
-			cpu_if->vgic_eisr = 0;
-	} else {
-		cpu_if->vgic_misr = 0;
-		cpu_if->vgic_eisr = 0;
-	}
-}
-
 void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+	u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
 	u64 val;

 	/*
 	 * Make sure stores to the GIC via the memory mapped interface
 	 * are now visible to the system register interface.
 	 */
-	if (!cpu_if->vgic_sre)
+	if (!cpu_if->vgic_sre) {
 		dsb(st);
+		cpu_if->vgic_vmcr = read_gicreg(ICH_VMCR_EL2);
+	}

-	cpu_if->vgic_vmcr = read_gicreg(ICH_VMCR_EL2);
-
-	if (vcpu->arch.vgic_cpu.live_lrs) {
+	if (used_lrs) {
 		int i;
-		u32 max_lr_idx, nr_pri_bits;
+		u32 nr_pri_bits;

 		cpu_if->vgic_elrsr = read_gicreg(ICH_ELSR_EL2);

 		write_gicreg(0, ICH_HCR_EL2);
 		val = read_gicreg(ICH_VTR_EL2);
-		max_lr_idx = vtr_to_max_lr_idx(val);
 		nr_pri_bits = vtr_to_nr_pri_bits(val);

-		save_maint_int_state(vcpu, max_lr_idx + 1);
-
-		for (i = 0; i <= max_lr_idx; i++) {
-			if (!(vcpu->arch.vgic_cpu.live_lrs & (1UL << i)))
-				continue;
-
+		for (i = 0; i < used_lrs; i++) {
 			if (cpu_if->vgic_elrsr & (1 << i))
 				cpu_if->vgic_lr[i] &= ~ICH_LR_STATE;
 			else
···
 		default:
 			cpu_if->vgic_ap1r[0] = read_gicreg(ICH_AP1R0_EL2);
 		}
-
-		vcpu->arch.vgic_cpu.live_lrs = 0;
 	} else {
-		cpu_if->vgic_misr = 0;
-		cpu_if->vgic_eisr = 0;
 		cpu_if->vgic_elrsr = 0xffff;
 		cpu_if->vgic_ap0r[0] = 0;
 		cpu_if->vgic_ap0r[1] = 0;
···
 void __hyp_text __vgic_v3_restore_state(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+	u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
 	u64 val;
-	u32 max_lr_idx, nr_pri_bits;
-	u16 live_lrs = 0;
+	u32 nr_pri_bits;
 	int i;

 	/*
···
 	 * delivered as a FIQ to the guest, with potentially fatal
 	 * consequences. So we must make sure that ICC_SRE_EL1 has
 	 * been actually programmed with the value we want before
-	 * starting to mess with the rest of the GIC.
+	 * starting to mess with the rest of the GIC, and VMCR_EL2 in
+	 * particular.
 	 */
 	if (!cpu_if->vgic_sre) {
 		write_gicreg(0, ICC_SRE_EL1);
 		isb();
+		write_gicreg(cpu_if->vgic_vmcr, ICH_VMCR_EL2);
 	}

 	val = read_gicreg(ICH_VTR_EL2);
-	max_lr_idx = vtr_to_max_lr_idx(val);
 	nr_pri_bits = vtr_to_nr_pri_bits(val);

-	for (i = 0; i <= max_lr_idx; i++) {
-		if (cpu_if->vgic_lr[i] & ICH_LR_STATE)
-			live_lrs |= (1 << i);
-	}
-
-	write_gicreg(cpu_if->vgic_vmcr, ICH_VMCR_EL2);
-
-	if (live_lrs) {
+	if (used_lrs) {
 		write_gicreg(cpu_if->vgic_hcr, ICH_HCR_EL2);

 		switch (nr_pri_bits) {
···
 			write_gicreg(cpu_if->vgic_ap1r[0], ICH_AP1R0_EL2);
 		}

-		for (i = 0; i <= max_lr_idx; i++) {
-			if (!(live_lrs & (1 << i)))
-				continue;
-
+		for (i = 0; i < used_lrs; i++)
 			__gic_v3_set_lr(cpu_if->vgic_lr[i], i);
-		}
 	}

 	/*
···
 		isb();
 		dsb(sy);
 	}
-	vcpu->arch.vgic_cpu.live_lrs = live_lrs;

 	/*
 	 * Prevent the guest from touching the GIC system registers if
···
 u64 __hyp_text __vgic_v3_get_ich_vtr_el2(void)
 {
 	return read_gicreg(ICH_VTR_EL2);
+}
+
+u64 __hyp_text __vgic_v3_read_vmcr(void)
+{
+	return read_gicreg(ICH_VMCR_EL2);
+}
+
+void __hyp_text __vgic_v3_write_vmcr(u32 vmcr)
+{
+	write_gicreg(vmcr, ICH_VMCR_EL2);
 }
+35 -4
virt/kvm/arm/pmu.c
··· 230 230 return; 231 231 232 232 overflow = !!kvm_pmu_overflow_status(vcpu); 233 - if (pmu->irq_level != overflow) { 234 - pmu->irq_level = overflow; 235 - kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, 236 - pmu->irq_num, overflow); 233 + if (pmu->irq_level == overflow) 234 + return; 235 + 236 + pmu->irq_level = overflow; 237 + 238 + if (likely(irqchip_in_kernel(vcpu->kvm))) { 239 + int ret; 240 + ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, 241 + pmu->irq_num, overflow); 242 + WARN_ON(ret); 237 243 } 244 + } 245 + 246 + bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu) 247 + { 248 + struct kvm_pmu *pmu = &vcpu->arch.pmu; 249 + struct kvm_sync_regs *sregs = &vcpu->run->s.regs; 250 + bool run_level = sregs->device_irq_level & KVM_ARM_DEV_PMU; 251 + 252 + if (likely(irqchip_in_kernel(vcpu->kvm))) 253 + return false; 254 + 255 + return pmu->irq_level != run_level; 256 + } 257 + 258 + /* 259 + * Reflect the PMU overflow interrupt output level into the kvm_run structure 260 + */ 261 + void kvm_pmu_update_run(struct kvm_vcpu *vcpu) 262 + { 263 + struct kvm_sync_regs *regs = &vcpu->run->s.regs; 264 + 265 + /* Populate the PMU overflow bit for user space */ 266 + regs->device_irq_level &= ~KVM_ARM_DEV_PMU; 267 + if (vcpu->arch.pmu.irq_level) 268 + regs->device_irq_level |= KVM_ARM_DEV_PMU; 238 269 } 239 270 240 271 /**
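For VMs without an in-kernel irqchip, kvm_pmu_update_run() above mirrors the PMU output line into run->s.regs.device_irq_level on every exit. A sketch of the consuming side in a userspace VMM; DEV_PMU and struct run_regs are simplified stand-ins for the KVM_ARM_DEV_PMU uapi bit and struct kvm_run:

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for KVM_ARM_DEV_PMU from include/uapi/linux/kvm.h; the bit
 * position here is assumed, check the header for the real value. */
#define DEV_PMU  (1U << 2)

/* Stand-in for the device_irq_level field of struct kvm_run. */
struct run_regs {
	uint64_t device_irq_level;
};

static uint64_t last_level;   /* level sampled at the previous exit */

/* Returns true when the PMU output line changed since the last exit,
 * i.e. when the userspace irqchip must update its interrupt input. */
static bool pmu_line_changed(const struct run_regs *regs)
{
	bool now  = regs->device_irq_level & DEV_PMU;
	bool prev = last_level & DEV_PMU;

	last_level = regs->device_irq_level;
	return now != prev;
}
```

As the api.txt addition notes, userspace should sample the field on every exit rather than rely on a particular exit reason.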
+68 -40
virt/kvm/arm/vgic/vgic-init.c
··· 24 24 25 25 /* 26 26 * Initialization rules: there are multiple stages to the vgic 27 - * initialization, both for the distributor and the CPU interfaces. 27 + * initialization, both for the distributor and the CPU interfaces. The basic 28 + idea is that even though the VGIC is not functional or not requested from 29 + user space, the critical path of the run loop can still call VGIC functions 30 + that just won't do anything, without them having to check additional 31 + initialization flags to ensure they don't look at uninitialized data 32 + structures. 28 33 * 29 34 * Distributor: 30 35 * ··· 44 39 * 45 40 * CPU Interface: 46 41 * 47 - * - kvm_vgic_cpu_early_init(): initialization of static data that 42 + * - kvm_vgic_vcpu_early_init(): initialization of static data that 48 43 * doesn't depend on any sizing information or emulation type. No 49 44 * allocation is allowed there. 50 45 */ 51 46 52 47 /* EARLY INIT */ 53 48 54 - /* 55 - * Those 2 functions should not be needed anymore but they 56 - * still are called from arm.c 49 + /** 50 + * kvm_vgic_early_init() - Initialize static VGIC VM data structures 51 + * @kvm: The VM whose VGIC distributor should be initialized 52 + * 53 + * Only do initialization of static structures that don't require any 54 + * allocation or sizing information from userspace. vgic_init() calls 55 + * kvm_vgic_dist_init(), which takes care of the rest. 57 56 */ 58 57 void kvm_vgic_early_init(struct kvm *kvm) 59 58 { 59 + struct vgic_dist *dist = &kvm->arch.vgic; 60 + 61 + INIT_LIST_HEAD(&dist->lpi_list_head); 62 + spin_lock_init(&dist->lpi_list_lock); 60 63 } 61 64 65 + /** 66 + * kvm_vgic_vcpu_early_init() - Initialize static VGIC VCPU data structures 67 + * @vcpu: The VCPU whose VGIC data structures should be initialized 68 + * 69 + * Only do initialization, but do not actually enable the VGIC CPU interface 70 + * yet.
71 + */ 62 72 void kvm_vgic_vcpu_early_init(struct kvm_vcpu *vcpu) 63 73 { 74 + struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu; 75 + int i; 76 + 77 + INIT_LIST_HEAD(&vgic_cpu->ap_list_head); 78 + spin_lock_init(&vgic_cpu->ap_list_lock); 79 + 80 + /* 81 + * Enable and configure all SGIs to be edge-triggered and 82 + * configure all PPIs as level-triggered. 83 + */ 84 + for (i = 0; i < VGIC_NR_PRIVATE_IRQS; i++) { 85 + struct vgic_irq *irq = &vgic_cpu->private_irqs[i]; 86 + 87 + INIT_LIST_HEAD(&irq->ap_list); 88 + spin_lock_init(&irq->irq_lock); 89 + irq->intid = i; 90 + irq->vcpu = NULL; 91 + irq->target_vcpu = vcpu; 92 + irq->targets = 1U << vcpu->vcpu_id; 93 + kref_init(&irq->refcount); 94 + if (vgic_irq_is_sgi(i)) { 95 + /* SGIs */ 96 + irq->enabled = 1; 97 + irq->config = VGIC_CONFIG_EDGE; 98 + } else { 99 + /* PPIs */ 100 + irq->config = VGIC_CONFIG_LEVEL; 101 + } 102 + } 64 103 } 65 104 66 105 /* CREATION */ ··· 197 148 struct kvm_vcpu *vcpu0 = kvm_get_vcpu(kvm, 0); 198 149 int i; 199 150 200 - INIT_LIST_HEAD(&dist->lpi_list_head); 201 - spin_lock_init(&dist->lpi_list_lock); 202 - 203 151 dist->spis = kcalloc(nr_spis, sizeof(struct vgic_irq), GFP_KERNEL); 204 152 if (!dist->spis) 205 153 return -ENOMEM; ··· 227 181 } 228 182 229 183 /** 230 - * kvm_vgic_vcpu_init: initialize the vcpu data structures and 231 - * enable the VCPU interface 232 - * @vcpu: the VCPU which's VGIC should be initialized 184 + * kvm_vgic_vcpu_init() - Enable the VCPU interface 185 + * @vcpu: the VCPU whose VGIC should be enabled 233 186 */ 234 187 static void kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu) 235 188 { 236 - struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu; 237 - int i; 238 - 239 - INIT_LIST_HEAD(&vgic_cpu->ap_list_head); 240 - spin_lock_init(&vgic_cpu->ap_list_lock); 241 - 242 - /* 243 - * Enable and configure all SGIs to be edge-triggered and 244 - * configure all PPIs as level-triggered.
245 - */ 246 - for (i = 0; i < VGIC_NR_PRIVATE_IRQS; i++) { 247 - struct vgic_irq *irq = &vgic_cpu->private_irqs[i]; 248 - 249 - INIT_LIST_HEAD(&irq->ap_list); 250 - spin_lock_init(&irq->irq_lock); 251 - irq->intid = i; 252 - irq->vcpu = NULL; 253 - irq->target_vcpu = vcpu; 254 - irq->targets = 1U << vcpu->vcpu_id; 255 - kref_init(&irq->refcount); 256 - if (vgic_irq_is_sgi(i)) { 257 - /* SGIs */ 258 - irq->enabled = 1; 259 - irq->config = VGIC_CONFIG_EDGE; 260 - } else { 261 - /* PPIs */ 262 - irq->config = VGIC_CONFIG_LEVEL; 263 - } 264 - } 265 189 if (kvm_vgic_global_state.type == VGIC_V2) 266 190 vgic_v2_enable(vcpu); 267 191 else ··· 278 262 vgic_debug_init(kvm); 279 263 280 264 dist->initialized = true; 265 + 266 + /* 267 + * If we're initializing GICv2 on-demand when first running the VCPU 268 + * then we need to load the VGIC state onto the CPU. We can detect 269 + * this easily by checking if we are in between vcpu_load and vcpu_put 270 + * when we just initialized the VGIC. 271 + */ 272 + preempt_disable(); 273 + vcpu = kvm_arm_get_running_vcpu(); 274 + if (vcpu) 275 + kvm_vgic_load(vcpu); 276 + preempt_enable(); 281 277 out: 282 278 return ret; 283 279 }
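kvm_vgic_vcpu_early_init() now configures the private interrupts at VCPU creation time instead of at vgic_init() time. The SGI/PPI split it applies can be modelled in isolation; the struct and constant names below are illustrative, not the kernel's:

```c
#include <stdbool.h>

#define NR_SGIS          16   /* SGIs occupy INTIDs 0..15          */
#define NR_PRIVATE_IRQS  32   /* SGIs + PPIs occupy INTIDs 0..31   */

enum trigger { EDGE, LEVEL };

struct irq_cfg {
	bool enabled;
	enum trigger config;
};

/* Mirrors the early-init loop: SGIs come up enabled and edge-triggered,
 * PPIs start disabled and level-triggered. */
static void init_private_irqs(struct irq_cfg irqs[NR_PRIVATE_IRQS])
{
	for (int i = 0; i < NR_PRIVATE_IRQS; i++) {
		irqs[i].enabled = (i < NR_SGIS);
		irqs[i].config  = (i < NR_SGIS) ? EDGE : LEVEL;
	}
}
```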
+40 -52
virt/kvm/arm/vgic/vgic-v2.c
··· 22 22 23 23 #include "vgic.h" 24 24 25 - /* 26 - * Call this function to convert a u64 value to an unsigned long * bitmask 27 - * in a way that works on both 32-bit and 64-bit LE and BE platforms. 28 - * 29 - * Warning: Calling this function may modify *val. 30 - */ 31 - static unsigned long *u64_to_bitmask(u64 *val) 32 - { 33 - #if defined(CONFIG_CPU_BIG_ENDIAN) && BITS_PER_LONG == 32 34 - *val = (*val >> 32) | (*val << 32); 35 - #endif 36 - return (unsigned long *)val; 37 - } 38 - 39 - void vgic_v2_process_maintenance(struct kvm_vcpu *vcpu) 40 - { 41 - struct vgic_v2_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v2; 42 - 43 - if (cpuif->vgic_misr & GICH_MISR_EOI) { 44 - u64 eisr = cpuif->vgic_eisr; 45 - unsigned long *eisr_bmap = u64_to_bitmask(&eisr); 46 - int lr; 47 - 48 - for_each_set_bit(lr, eisr_bmap, kvm_vgic_global_state.nr_lr) { 49 - u32 intid = cpuif->vgic_lr[lr] & GICH_LR_VIRTUALID; 50 - 51 - WARN_ON(cpuif->vgic_lr[lr] & GICH_LR_STATE); 52 - 53 - /* Only SPIs require notification */ 54 - if (vgic_valid_spi(vcpu->kvm, intid)) 55 - kvm_notify_acked_irq(vcpu->kvm, 0, 56 - intid - VGIC_NR_PRIVATE_IRQS); 57 - } 58 - } 59 - 60 - /* check and disable underflow maintenance IRQ */ 61 - cpuif->vgic_hcr &= ~GICH_HCR_UIE; 62 - 63 - /* 64 - * In the next iterations of the vcpu loop, if we sync the 65 - * vgic state after flushing it, but before entering the guest 66 - * (this happens for pending signals and vmid rollovers), then 67 - * make sure we don't pick up any old maintenance interrupts 68 - * here. 
69 - */ 70 - cpuif->vgic_eisr = 0; 71 - } 72 - 73 25 void vgic_v2_set_underflow(struct kvm_vcpu *vcpu) 74 26 { 75 27 struct vgic_v2_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v2; 76 28 77 29 cpuif->vgic_hcr |= GICH_HCR_UIE; 30 + } 31 + 32 + static bool lr_signals_eoi_mi(u32 lr_val) 33 + { 34 + return !(lr_val & GICH_LR_STATE) && (lr_val & GICH_LR_EOI) && 35 + !(lr_val & GICH_LR_HW); 78 36 } 79 37 80 38 /* ··· 44 86 */ 45 87 void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu) 46 88 { 47 - struct vgic_v2_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v2; 89 + struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu; 90 + struct vgic_v2_cpu_if *cpuif = &vgic_cpu->vgic_v2; 48 91 int lr; 49 92 50 - for (lr = 0; lr < vcpu->arch.vgic_cpu.used_lrs; lr++) { 93 + cpuif->vgic_hcr &= ~GICH_HCR_UIE; 94 + 95 + for (lr = 0; lr < vgic_cpu->used_lrs; lr++) { 51 96 u32 val = cpuif->vgic_lr[lr]; 52 97 u32 intid = val & GICH_LR_VIRTUALID; 53 98 struct vgic_irq *irq; 99 + 100 + /* Notify fds when the guest EOI'ed a level-triggered SPI */ 101 + if (lr_signals_eoi_mi(val) && vgic_valid_spi(vcpu->kvm, intid)) 102 + kvm_notify_acked_irq(vcpu->kvm, 0, 103 + intid - VGIC_NR_PRIVATE_IRQS); 54 104 55 105 irq = vgic_get_irq(vcpu->kvm, vcpu, intid); 56 106 ··· 92 126 spin_unlock(&irq->irq_lock); 93 127 vgic_put_irq(vcpu->kvm, irq); 94 128 } 129 + 130 + vgic_cpu->used_lrs = 0; 95 131 } 96 132 97 133 /* ··· 152 184 153 185 void vgic_v2_set_vmcr(struct kvm_vcpu *vcpu, struct vgic_vmcr *vmcrp) 154 186 { 187 + struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2; 155 188 u32 vmcr; 156 189 157 190 vmcr = (vmcrp->ctlr << GICH_VMCR_CTRL_SHIFT) & GICH_VMCR_CTRL_MASK; ··· 163 194 vmcr |= (vmcrp->pmr << GICH_VMCR_PRIMASK_SHIFT) & 164 195 GICH_VMCR_PRIMASK_MASK; 165 196 166 - vcpu->arch.vgic_cpu.vgic_v2.vgic_vmcr = vmcr; 197 + cpu_if->vgic_vmcr = vmcr; 167 198 } 168 199 169 200 void vgic_v2_get_vmcr(struct kvm_vcpu *vcpu, struct vgic_vmcr *vmcrp) 170 201 { 171 - u32 vmcr = vcpu->arch.vgic_cpu.vgic_v2.vgic_vmcr; 202 
+ struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2; 203 + u32 vmcr; 204 + 205 + vmcr = cpu_if->vgic_vmcr; 172 206 173 207 vmcrp->ctlr = (vmcr & GICH_VMCR_CTRL_MASK) >> 174 208 GICH_VMCR_CTRL_SHIFT; ··· 346 374 iounmap(kvm_vgic_global_state.vcpu_base_va); 347 375 348 376 return ret; 377 + } 378 + 379 + void vgic_v2_load(struct kvm_vcpu *vcpu) 380 + { 381 + struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2; 382 + struct vgic_dist *vgic = &vcpu->kvm->arch.vgic; 383 + 384 + writel_relaxed(cpu_if->vgic_vmcr, vgic->vctrl_base + GICH_VMCR); 385 + } 386 + 387 + void vgic_v2_put(struct kvm_vcpu *vcpu) 388 + { 389 + struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2; 390 + struct vgic_dist *vgic = &vcpu->kvm->arch.vgic; 391 + 392 + cpu_if->vgic_vmcr = readl_relaxed(vgic->vctrl_base + GICH_VMCR); 349 393 }
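vgic_v2_set_vmcr()/vgic_v2_get_vmcr() above are a shift-and-mask pack/unpack pair over the VMCR word. A standalone sketch of the idiom; the shift and mask values here are illustrative stand-ins, the authoritative GICH_VMCR_* layout lives in include/linux/irqchip/arm-gic.h:

```c
#include <stdint.h>

/* Illustrative field layout in the style of the GICH_VMCR_* macros. */
#define CTLR_SHIFT  0
#define CTLR_MASK   (0x1fU << CTLR_SHIFT)
#define PMR_SHIFT   27
#define PMR_MASK    (0x1fU << PMR_SHIFT)

struct vmcr_fields {
	uint32_t ctlr;
	uint32_t pmr;
};

/* Pack the decomposed fields into one register word (set_vmcr style). */
static uint32_t vmcr_pack(const struct vmcr_fields *f)
{
	return ((f->ctlr << CTLR_SHIFT) & CTLR_MASK) |
	       ((f->pmr  << PMR_SHIFT)  & PMR_MASK);
}

/* Split the register word back into fields (get_vmcr style). */
static void vmcr_unpack(uint32_t vmcr, struct vmcr_fields *f)
{
	f->ctlr = (vmcr & CTLR_MASK) >> CTLR_SHIFT;
	f->pmr  = (vmcr & PMR_MASK)  >> PMR_SHIFT;
}
```

The pair round-trips any in-range field values, which is exactly the property the new vgic_v2_load()/vgic_v2_put() hooks rely on when caching VMCR across vcpu_put/vcpu_load.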
+46 -43
virt/kvm/arm/vgic/vgic-v3.c
··· 21 21 22 22 #include "vgic.h" 23 23 24 - void vgic_v3_process_maintenance(struct kvm_vcpu *vcpu) 25 - { 26 - struct vgic_v3_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v3; 27 - u32 model = vcpu->kvm->arch.vgic.vgic_model; 28 - 29 - if (cpuif->vgic_misr & ICH_MISR_EOI) { 30 - unsigned long eisr_bmap = cpuif->vgic_eisr; 31 - int lr; 32 - 33 - for_each_set_bit(lr, &eisr_bmap, kvm_vgic_global_state.nr_lr) { 34 - u32 intid; 35 - u64 val = cpuif->vgic_lr[lr]; 36 - 37 - if (model == KVM_DEV_TYPE_ARM_VGIC_V3) 38 - intid = val & ICH_LR_VIRTUAL_ID_MASK; 39 - else 40 - intid = val & GICH_LR_VIRTUALID; 41 - 42 - WARN_ON(cpuif->vgic_lr[lr] & ICH_LR_STATE); 43 - 44 - /* Only SPIs require notification */ 45 - if (vgic_valid_spi(vcpu->kvm, intid)) 46 - kvm_notify_acked_irq(vcpu->kvm, 0, 47 - intid - VGIC_NR_PRIVATE_IRQS); 48 - } 49 - 50 - /* 51 - * In the next iterations of the vcpu loop, if we sync 52 - * the vgic state after flushing it, but before 53 - * entering the guest (this happens for pending 54 - * signals and vmid rollovers), then make sure we 55 - * don't pick up any old maintenance interrupts here. 
56 - */ 57 - cpuif->vgic_eisr = 0; 58 - } 59 - 60 - cpuif->vgic_hcr &= ~ICH_HCR_UIE; 61 - } 62 - 63 24 void vgic_v3_set_underflow(struct kvm_vcpu *vcpu) 64 25 { 65 26 struct vgic_v3_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v3; ··· 28 67 cpuif->vgic_hcr |= ICH_HCR_UIE; 29 68 } 30 69 70 + static bool lr_signals_eoi_mi(u64 lr_val) 71 + { 72 + return !(lr_val & ICH_LR_STATE) && (lr_val & ICH_LR_EOI) && 73 + !(lr_val & ICH_LR_HW); 74 + } 75 + 31 76 void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu) 32 77 { 33 - struct vgic_v3_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v3; 78 + struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu; 79 + struct vgic_v3_cpu_if *cpuif = &vgic_cpu->vgic_v3; 34 80 u32 model = vcpu->kvm->arch.vgic.vgic_model; 35 81 int lr; 36 82 37 - for (lr = 0; lr < vcpu->arch.vgic_cpu.used_lrs; lr++) { 83 + cpuif->vgic_hcr &= ~ICH_HCR_UIE; 84 + 85 + for (lr = 0; lr < vgic_cpu->used_lrs; lr++) { 38 86 u64 val = cpuif->vgic_lr[lr]; 39 87 u32 intid; 40 88 struct vgic_irq *irq; ··· 52 82 intid = val & ICH_LR_VIRTUAL_ID_MASK; 53 83 else 54 84 intid = val & GICH_LR_VIRTUALID; 85 + 86 + /* Notify fds when the guest EOI'ed a level-triggered IRQ */ 87 + if (lr_signals_eoi_mi(val) && vgic_valid_spi(vcpu->kvm, intid)) 88 + kvm_notify_acked_irq(vcpu->kvm, 0, 89 + intid - VGIC_NR_PRIVATE_IRQS); 90 + 55 91 irq = vgic_get_irq(vcpu->kvm, vcpu, intid); 56 92 if (!irq) /* An LPI could have been unmapped. 
*/ 57 93 continue; ··· 93 117 spin_unlock(&irq->irq_lock); 94 118 vgic_put_irq(vcpu->kvm, irq); 95 119 } 120 + 121 + vgic_cpu->used_lrs = 0; 96 122 } 97 123 98 124 /* Requires the irq to be locked already */ ··· 151 173 152 174 void vgic_v3_set_vmcr(struct kvm_vcpu *vcpu, struct vgic_vmcr *vmcrp) 153 175 { 176 + struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; 154 177 u32 vmcr; 155 178 156 179 /* ··· 167 188 vmcr |= (vmcrp->grpen0 << ICH_VMCR_ENG0_SHIFT) & ICH_VMCR_ENG0_MASK; 168 189 vmcr |= (vmcrp->grpen1 << ICH_VMCR_ENG1_SHIFT) & ICH_VMCR_ENG1_MASK; 169 190 170 - vcpu->arch.vgic_cpu.vgic_v3.vgic_vmcr = vmcr; 191 + cpu_if->vgic_vmcr = vmcr; 171 192 } 172 193 173 194 void vgic_v3_get_vmcr(struct kvm_vcpu *vcpu, struct vgic_vmcr *vmcrp) 174 195 { 175 - u32 vmcr = vcpu->arch.vgic_cpu.vgic_v3.vgic_vmcr; 196 + struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; 197 + u32 vmcr; 198 + 199 + vmcr = cpu_if->vgic_vmcr; 176 200 177 201 /* 178 202 * Ignore the FIQen bit, because GIC emulation always implies ··· 367 385 kvm_vgic_global_state.max_gic_vcpus = VGIC_V3_MAX_CPUS; 368 386 369 387 return 0; 388 + } 389 + 390 + void vgic_v3_load(struct kvm_vcpu *vcpu) 391 + { 392 + struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; 393 + 394 + /* 395 + * If dealing with a GICv2 emulation on GICv3, VMCR_EL2.VFIQen 396 + * is dependent on ICC_SRE_EL1.SRE, and we have to perform the 397 + * VMCR_EL2 save/restore in the world switch. 398 + */ 399 + if (likely(cpu_if->vgic_sre)) 400 + kvm_call_hyp(__vgic_v3_write_vmcr, cpu_if->vgic_vmcr); 401 + } 402 + 403 + void vgic_v3_put(struct kvm_vcpu *vcpu) 404 + { 405 + struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; 406 + 407 + if (likely(cpu_if->vgic_sre)) 408 + cpu_if->vgic_vmcr = kvm_call_hyp(__vgic_v3_read_vmcr); 370 409 }
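The new lr_signals_eoi_mi() helper replaces the old maintenance-interrupt walk: whether an EOI needs notifying is now derived from the LR value itself at fold time. A testable model of the predicate; the bit positions follow the ICH_LR_EL2 layout but should be treated as illustrative here:

```c
#include <stdbool.h>
#include <stdint.h>

#define LR_STATE  (3ULL << 62)   /* pending/active state bits */
#define LR_HW     (1ULL << 61)   /* hardware-mapped interrupt */
#define LR_EOI    (1ULL << 41)   /* maintenance interrupt on EOI */

/* An LR signals an EOI maintenance interrupt when the guest deactivated
 * the interrupt (state bits now clear), EOI notification was requested,
 * and the LR is not mapped to a hardware interrupt. */
static bool lr_signals_eoi_mi(uint64_t lr_val)
{
	return !(lr_val & LR_STATE) && (lr_val & LR_EOI) &&
	       !(lr_val & LR_HW);
}
```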
+44 -16
virt/kvm/arm/vgic/vgic.c
··· 527 527 spin_unlock(&vgic_cpu->ap_list_lock); 528 528 } 529 529 530 - static inline void vgic_process_maintenance_interrupt(struct kvm_vcpu *vcpu) 531 - { 532 - if (kvm_vgic_global_state.type == VGIC_V2) 533 - vgic_v2_process_maintenance(vcpu); 534 - else 535 - vgic_v3_process_maintenance(vcpu); 536 - } 537 - 538 530 static inline void vgic_fold_lr_state(struct kvm_vcpu *vcpu) 539 531 { 540 532 if (kvm_vgic_global_state.type == VGIC_V2) ··· 593 601 594 602 DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock)); 595 603 596 - if (compute_ap_list_depth(vcpu) > kvm_vgic_global_state.nr_lr) { 597 - vgic_set_underflow(vcpu); 604 + if (compute_ap_list_depth(vcpu) > kvm_vgic_global_state.nr_lr) 598 605 vgic_sort_ap_list(vcpu); 599 - } 600 606 601 607 list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) { 602 608 spin_lock(&irq->irq_lock); ··· 613 623 next: 614 624 spin_unlock(&irq->irq_lock); 615 625 616 - if (count == kvm_vgic_global_state.nr_lr) 626 + if (count == kvm_vgic_global_state.nr_lr) { 627 + if (!list_is_last(&irq->ap_list, 628 + &vgic_cpu->ap_list_head)) 629 + vgic_set_underflow(vcpu); 617 630 break; 631 + } 618 632 } 619 633 620 634 vcpu->arch.vgic_cpu.used_lrs = count; ··· 631 637 /* Sync back the hardware VGIC state into our emulation after a guest's run. */ 632 638 void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu) 633 639 { 634 - if (unlikely(!vgic_initialized(vcpu->kvm))) 640 + struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu; 641 + 642 + /* An empty ap_list_head implies used_lrs == 0 */ 643 + if (list_empty(&vcpu->arch.vgic_cpu.ap_list_head)) 635 644 return; 636 645 637 - vgic_process_maintenance_interrupt(vcpu); 638 - vgic_fold_lr_state(vcpu); 646 + if (vgic_cpu->used_lrs) 647 + vgic_fold_lr_state(vcpu); 639 648 vgic_prune_ap_list(vcpu); 640 649 } 641 650 642 651 /* Flush our emulation state into the GIC hardware before entering the guest. 
*/ 643 652 void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu) 644 653 { 645 - if (unlikely(!vgic_initialized(vcpu->kvm))) 654 + /* 655 + * If there are no virtual interrupts active or pending for this 656 + * VCPU, then there is no work to do and we can bail out without 657 + * taking any lock. There is a potential race with someone injecting 658 + * interrupts to the VCPU, but it is a benign race as the VCPU will 659 + * either observe the new interrupt before or after doing this check, 660 + * and introducing an additional synchronization mechanism doesn't change 661 + * this. 662 + */ 663 + if (list_empty(&vcpu->arch.vgic_cpu.ap_list_head)) 646 664 return; 647 665 648 666 spin_lock(&vcpu->arch.vgic_cpu.ap_list_lock); 649 667 vgic_flush_lr_state(vcpu); 650 668 spin_unlock(&vcpu->arch.vgic_cpu.ap_list_lock); 669 + } 670 + 671 + void kvm_vgic_load(struct kvm_vcpu *vcpu) 672 + { 673 + if (unlikely(!vgic_initialized(vcpu->kvm))) 674 + return; 675 + 676 + if (kvm_vgic_global_state.type == VGIC_V2) 677 + vgic_v2_load(vcpu); 678 + else 679 + vgic_v3_load(vcpu); 680 + } 681 + 682 + void kvm_vgic_put(struct kvm_vcpu *vcpu) 683 + { 684 + if (unlikely(!vgic_initialized(vcpu->kvm))) 685 + return; 686 + 687 + if (kvm_vgic_global_state.type == VGIC_V2) 688 + vgic_v2_put(vcpu); 689 + else 690 + vgic_v3_put(vcpu); 651 691 } 652 692 653 693 int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
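vgic_flush_lr_state() now requests the underflow maintenance interrupt only when queued interrupts are actually left over after every list register has been filled, instead of whenever the ap_list looked deeper than the LR count before sorting. Simplified to queue depth versus LR count, the decision reduces to:

```c
#include <stdbool.h>

/*
 * Model of the flush path's underflow rule: fill as many list registers
 * as are available, and request the underflow maintenance interrupt only
 * when interrupts remain queued, so the guest traps back once LRs free up.
 */
static int flush_lrs(int ap_list_depth, int nr_lr, bool *underflow)
{
	int used = ap_list_depth < nr_lr ? ap_list_depth : nr_lr;

	*underflow = ap_list_depth > nr_lr;
	return used;   /* becomes vgic_cpu->used_lrs */
}
```

The real loop also skips interrupts that turn out not to be pending while it walks the sorted ap_list; this sketch assumes every queued entry consumes an LR.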
+6 -2
virt/kvm/arm/vgic/vgic.h
··· 112 112 int vgic_check_ioaddr(struct kvm *kvm, phys_addr_t *ioaddr, 113 113 phys_addr_t addr, phys_addr_t alignment); 114 114 115 - void vgic_v2_process_maintenance(struct kvm_vcpu *vcpu); 116 115 void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu); 117 116 void vgic_v2_populate_lr(struct kvm_vcpu *vcpu, struct vgic_irq *irq, int lr); 118 117 void vgic_v2_clear_lr(struct kvm_vcpu *vcpu, int lr); ··· 129 130 int vgic_register_dist_iodev(struct kvm *kvm, gpa_t dist_base_address, 130 131 enum vgic_type); 131 132 133 + void vgic_v2_load(struct kvm_vcpu *vcpu); 134 + void vgic_v2_put(struct kvm_vcpu *vcpu); 135 + 132 136 static inline void vgic_get_irq_kref(struct vgic_irq *irq) 133 137 { 134 138 if (irq->intid < VGIC_MIN_LPI) ··· 140 138 kref_get(&irq->refcount); 141 139 } 142 140 143 - void vgic_v3_process_maintenance(struct kvm_vcpu *vcpu); 144 141 void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu); 145 142 void vgic_v3_populate_lr(struct kvm_vcpu *vcpu, struct vgic_irq *irq, int lr); 146 143 void vgic_v3_clear_lr(struct kvm_vcpu *vcpu, int lr); ··· 150 149 int vgic_v3_probe(const struct gic_kvm_info *info); 151 150 int vgic_v3_map_resources(struct kvm *kvm); 152 151 int vgic_register_redist_iodevs(struct kvm *kvm, gpa_t dist_base_address); 152 + 153 + void vgic_v3_load(struct kvm_vcpu *vcpu); 154 + void vgic_v3_put(struct kvm_vcpu *vcpu); 153 155 154 156 int vgic_register_its_iodevs(struct kvm *kvm); 155 157 bool vgic_has_its(struct kvm *kvm);