
Merge branch 'kvm-arm64/pkvm-6.10' into kvmarm-master/next

* kvm-arm64/pkvm-6.10: (25 commits)
: .
: At last, a bunch of pKVM patches, courtesy of Fuad Tabba.
: From the cover letter:
:
: "This series is a bit of a bombay-mix of patches we've been
: carrying. There's no one overarching theme, but they do improve
: the code by fixing existing bugs in pKVM, refactoring code to
: make it more readable and easier to re-use for pKVM, or adding
: functionality to the existing pKVM code upstream."
: .
KVM: arm64: Force injection of a data abort on NISV MMIO exit
KVM: arm64: Restrict supported capabilities for protected VMs
KVM: arm64: Refactor setting the return value in kvm_vm_ioctl_enable_cap()
KVM: arm64: Document the KVM/arm64-specific calls in hypercalls.rst
KVM: arm64: Rename firmware pseudo-register documentation file
KVM: arm64: Reformat/beautify PTP hypercall documentation
KVM: arm64: Clarify rationale for ZCR_EL1 value restored on guest exit
KVM: arm64: Introduce and use predicates that check for protected VMs
KVM: arm64: Add is_pkvm_initialized() helper
KVM: arm64: Simplify vgic-v3 hypercalls
KVM: arm64: Move setting the page as dirty out of the critical section
KVM: arm64: Change kvm_handle_mmio_return() return polarity
KVM: arm64: Fix comment for __pkvm_vcpu_init_traps()
KVM: arm64: Prevent kmemleak from accessing .hyp.data
KVM: arm64: Do not map the host fpsimd state to hyp in pKVM
KVM: arm64: Rename __tlb_switch_to_{guest,host}() in VHE
KVM: arm64: Support TLB invalidation in guest context
KVM: arm64: Avoid BBM when changing only s/w bits in Stage-2 PTE
KVM: arm64: Check for PTE validity when checking for executable/cacheable
KVM: arm64: Avoid BUG-ing from the host abort path
...

Signed-off-by: Marc Zyngier <maz@kernel.org>

+513 -344
+7
Documentation/virt/kvm/api.rst
···
 KVM_EXIT_MMIO, but userspace has to emulate any change to the processing state
 if it decides to decode and emulate the instruction.
 
+This feature isn't available to protected VMs, as userspace does not
+have access to the state that is required to perform the emulation.
+Instead, a data abort exception is directly injected in the guest.
+Note that although KVM_CAP_ARM_NISV_TO_USER will be reported if
+queried outside of a protected VM context, the feature will not be
+exposed if queried on a protected VM file descriptor.
+
 ::
 
   /* KVM_EXIT_X86_RDMSR / KVM_EXIT_X86_WRMSR */
+138
Documentation/virt/kvm/arm/fw-pseudo-registers.rst
···
+.. SPDX-License-Identifier: GPL-2.0
+
+=======================================
+ARM firmware pseudo-registers interface
+=======================================
+
+KVM handles the hypercall services as requested by the guests. New hypercall
+services are regularly made available by the ARM specification or by KVM (as
+vendor services) if they make sense from a virtualization point of view.
+
+This means that a guest booted on two different versions of KVM can observe
+two different "firmware" revisions. This could cause issues if a given guest
+is tied to a particular version of a hypercall service, or if a migration
+causes a different version to be exposed out of the blue to an unsuspecting
+guest.
+
+In order to remedy this situation, KVM exposes a set of "firmware
+pseudo-registers" that can be manipulated using the GET/SET_ONE_REG
+interface. These registers can be saved/restored by userspace, and set
+to a convenient value as required.
+
+The following registers are defined:
+
+* KVM_REG_ARM_PSCI_VERSION:
+
+  KVM implements the PSCI (Power State Coordination Interface)
+  specification in order to provide services such as CPU on/off, reset
+  and power-off to the guest.
+
+  - Only valid if the vcpu has the KVM_ARM_VCPU_PSCI_0_2 feature set
+    (and thus has already been initialized)
+  - Returns the current PSCI version on GET_ONE_REG (defaulting to the
+    highest PSCI version implemented by KVM and compatible with v0.2)
+  - Allows any PSCI version implemented by KVM and compatible with
+    v0.2 to be set with SET_ONE_REG
+  - Affects the whole VM (even if the register view is per-vcpu)
+
+* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
+    Holds the state of the firmware support to mitigate CVE-2017-5715, as
+    offered by KVM to the guest via a HVC call. The workaround is described
+    under SMCCC_ARCH_WORKAROUND_1 in [1].
+
+  Accepted values are:
+
+    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL:
+      KVM does not offer
+      firmware support for the workaround. The mitigation status for the
+      guest is unknown.
+    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL:
+      The workaround HVC call is
+      available to the guest and required for the mitigation.
+    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_REQUIRED:
+      The workaround HVC call
+      is available to the guest, but it is not needed on this VCPU.
+
+* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
+    Holds the state of the firmware support to mitigate CVE-2018-3639, as
+    offered by KVM to the guest via a HVC call. The workaround is described
+    under SMCCC_ARCH_WORKAROUND_2 in [1]_.
+
+  Accepted values are:
+
+    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL:
+      A workaround is not
+      available. KVM does not offer firmware support for the workaround.
+    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN:
+      The workaround state is
+      unknown. KVM does not offer firmware support for the workaround.
+    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL:
+      The workaround is available,
+      and can be disabled by a vCPU. If
+      KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED is set, it is active for
+      this vCPU.
+    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED:
+      The workaround is always active on this vCPU or it is not needed.
+
+
+Bitmap Feature Firmware Registers
+---------------------------------
+
+Contrary to the above registers, the following registers exposes the
+hypercall services in the form of a feature-bitmap to the userspace. This
+bitmap is translated to the services that are available to the guest.
+There is a register defined per service call owner and can be accessed via
+GET/SET_ONE_REG interface.
+
+By default, these registers are set with the upper limit of the features
+that are supported. This way userspace can discover all the usable
+hypercall services via GET_ONE_REG. The user-space can write-back the
+desired bitmap back via SET_ONE_REG. The features for the registers that
+are untouched, probably because userspace isn't aware of them, will be
+exposed as is to the guest.
+
+Note that KVM will not allow the userspace to configure the registers
+anymore once any of the vCPUs has run at least once. Instead, it will
+return a -EBUSY.
+
+The pseudo-firmware bitmap register are as follows:
+
+* KVM_REG_ARM_STD_BMAP:
+    Controls the bitmap of the ARM Standard Secure Service Calls.
+
+  The following bits are accepted:
+
+    Bit-0: KVM_REG_ARM_STD_BIT_TRNG_V1_0:
+      The bit represents the services offered under v1.0 of ARM True Random
+      Number Generator (TRNG) specification, ARM DEN0098.
+
+* KVM_REG_ARM_STD_HYP_BMAP:
+    Controls the bitmap of the ARM Standard Hypervisor Service Calls.
+
+  The following bits are accepted:
+
+    Bit-0: KVM_REG_ARM_STD_HYP_BIT_PV_TIME:
+      The bit represents the Paravirtualized Time service as represented by
+      ARM DEN0057A.
+
+* KVM_REG_ARM_VENDOR_HYP_BMAP:
+    Controls the bitmap of the Vendor specific Hypervisor Service Calls.
+
+  The following bits are accepted:
+
+    Bit-0: KVM_REG_ARM_VENDOR_HYP_BIT_FUNC_FEAT
+      The bit represents the ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID
+      and ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID function-ids.
+
+    Bit-1: KVM_REG_ARM_VENDOR_HYP_BIT_PTP:
+      The bit represents the Precision Time Protocol KVM service.
+
+Errors:
+
+    =======  =============================================================
+    -ENOENT  Unknown register accessed.
+    -EBUSY   Attempt a 'write' to the register after the VM has started.
+    -EINVAL  Invalid bitmap written to the register.
+    =======  =============================================================
+
+.. [1] https://developer.arm.com/-/media/developer/pdf/ARM_DEN_0070A_Firmware_interfaces_for_mitigating_CVE-2017-5715.pdf
+36 -128
Documentation/virt/kvm/arm/hypercalls.rst
···
 .. SPDX-License-Identifier: GPL-2.0
 
-=======================
-ARM Hypercall Interface
-=======================
+===============================================
+KVM/arm64-specific hypercalls exposed to guests
+===============================================
 
-KVM handles the hypercall services as requested by the guests. New hypercall
-services are regularly made available by the ARM specification or by KVM (as
-vendor services) if they make sense from a virtualization point of view.
+This file documents the KVM/arm64-specific hypercalls which may be
+exposed by KVM/arm64 to guest operating systems. These hypercalls are
+issued using the HVC instruction according to version 1.1 of the Arm SMC
+Calling Convention (DEN0028/C):
 
-This means that a guest booted on two different versions of KVM can observe
-two different "firmware" revisions. This could cause issues if a given guest
-is tied to a particular version of a hypercall service, or if a migration
-causes a different version to be exposed out of the blue to an unsuspecting
-guest.
+https://developer.arm.com/docs/den0028/c
 
-In order to remedy this situation, KVM exposes a set of "firmware
-pseudo-registers" that can be manipulated using the GET/SET_ONE_REG
-interface. These registers can be saved/restored by userspace, and set
-to a convenient value as required.
+All KVM/arm64-specific hypercalls are allocated within the "Vendor
+Specific Hypervisor Service Call" range with a UID of
+``28b46fb6-2ec5-11e9-a9ca-4b564d003a74``. This UID should be queried by the
+guest using the standard "Call UID" function for the service range in
+order to determine that the KVM/arm64-specific hypercalls are available.
 
-The following registers are defined:
+``ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID``
+---------------------------------------------
 
-* KVM_REG_ARM_PSCI_VERSION:
+Provides a discovery mechanism for other KVM/arm64 hypercalls.
 
-  KVM implements the PSCI (Power State Coordination Interface)
-  specification in order to provide services such as CPU on/off, reset
-  and power-off to the guest.
++---------------------+-------------------------------------------------------------+
+| Presence:           | Mandatory for the KVM/arm64 UID                             |
++---------------------+-------------------------------------------------------------+
+| Calling convention: | HVC32                                                       |
++---------------------+----------+--------------------------------------------------+
+| Function ID:        | (uint32) | 0x86000000                                       |
++---------------------+----------+--------------------------------------------------+
+| Arguments:          | None                                                        |
++---------------------+----------+----+---------------------------------------------+
+| Return Values:      | (uint32) | R0 | Bitmap of available function numbers 0-31   |
+|                     +----------+----+---------------------------------------------+
+|                     | (uint32) | R1 | Bitmap of available function numbers 32-63  |
+|                     +----------+----+---------------------------------------------+
+|                     | (uint32) | R2 | Bitmap of available function numbers 64-95  |
+|                     +----------+----+---------------------------------------------+
+|                     | (uint32) | R3 | Bitmap of available function numbers 96-127 |
++---------------------+----------+----+---------------------------------------------+
 
-  - Only valid if the vcpu has the KVM_ARM_VCPU_PSCI_0_2 feature set
-    (and thus has already been initialized)
-  - Returns the current PSCI version on GET_ONE_REG (defaulting to the
-    highest PSCI version implemented by KVM and compatible with v0.2)
-  - Allows any PSCI version implemented by KVM and compatible with
-    v0.2 to be set with SET_ONE_REG
-  - Affects the whole VM (even if the register view is per-vcpu)
+``ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID``
+----------------------------------------
 
-* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
-    Holds the state of the firmware support to mitigate CVE-2017-5715, as
-    offered by KVM to the guest via a HVC call. The workaround is described
-    under SMCCC_ARCH_WORKAROUND_1 in [1].
-
-  Accepted values are:
-
-    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL:
-      KVM does not offer
-      firmware support for the workaround. The mitigation status for the
-      guest is unknown.
-    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL:
-      The workaround HVC call is
-      available to the guest and required for the mitigation.
-    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_REQUIRED:
-      The workaround HVC call
-      is available to the guest, but it is not needed on this VCPU.
-
-* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
-    Holds the state of the firmware support to mitigate CVE-2018-3639, as
-    offered by KVM to the guest via a HVC call. The workaround is described
-    under SMCCC_ARCH_WORKAROUND_2 in [1]_.
-
-  Accepted values are:
-
-    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL:
-      A workaround is not
-      available. KVM does not offer firmware support for the workaround.
-    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN:
-      The workaround state is
-      unknown. KVM does not offer firmware support for the workaround.
-    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL:
-      The workaround is available,
-      and can be disabled by a vCPU. If
-      KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED is set, it is active for
-      this vCPU.
-    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED:
-      The workaround is always active on this vCPU or it is not needed.
-
-
-Bitmap Feature Firmware Registers
----------------------------------
-
-Contrary to the above registers, the following registers exposes the
-hypercall services in the form of a feature-bitmap to the userspace. This
-bitmap is translated to the services that are available to the guest.
-There is a register defined per service call owner and can be accessed via
-GET/SET_ONE_REG interface.
-
-By default, these registers are set with the upper limit of the features
-that are supported. This way userspace can discover all the usable
-hypercall services via GET_ONE_REG. The user-space can write-back the
-desired bitmap back via SET_ONE_REG. The features for the registers that
-are untouched, probably because userspace isn't aware of them, will be
-exposed as is to the guest.
-
-Note that KVM will not allow the userspace to configure the registers
-anymore once any of the vCPUs has run at least once. Instead, it will
-return a -EBUSY.
-
-The pseudo-firmware bitmap register are as follows:
-
-* KVM_REG_ARM_STD_BMAP:
-    Controls the bitmap of the ARM Standard Secure Service Calls.
-
-  The following bits are accepted:
-
-    Bit-0: KVM_REG_ARM_STD_BIT_TRNG_V1_0:
-      The bit represents the services offered under v1.0 of ARM True Random
-      Number Generator (TRNG) specification, ARM DEN0098.
-
-* KVM_REG_ARM_STD_HYP_BMAP:
-    Controls the bitmap of the ARM Standard Hypervisor Service Calls.
-
-  The following bits are accepted:
-
-    Bit-0: KVM_REG_ARM_STD_HYP_BIT_PV_TIME:
-      The bit represents the Paravirtualized Time service as represented by
-      ARM DEN0057A.
-
-* KVM_REG_ARM_VENDOR_HYP_BMAP:
-    Controls the bitmap of the Vendor specific Hypervisor Service Calls.
-
-  The following bits are accepted:
-
-    Bit-0: KVM_REG_ARM_VENDOR_HYP_BIT_FUNC_FEAT
-      The bit represents the ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID
-      and ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID function-ids.
-
-    Bit-1: KVM_REG_ARM_VENDOR_HYP_BIT_PTP:
-      The bit represents the Precision Time Protocol KVM service.
-
-Errors:
-
-    =======  =============================================================
-    -ENOENT  Unknown register accessed.
-    -EBUSY   Attempt a 'write' to the register after the VM has started.
-    -EINVAL  Invalid bitmap written to the register.
-    =======  =============================================================
-
-.. [1] https://developer.arm.com/-/media/developer/pdf/ARM_DEN_0070A_Firmware_interfaces_for_mitigating_CVE-2017-5715.pdf
+See ptp_kvm.rst
+1
Documentation/virt/kvm/arm/index.rst
···
 .. toctree::
    :maxdepth: 2
 
+   fw-pseudo-registers
    hyp-abi
    hypercalls
    pvtime
+24 -14
Documentation/virt/kvm/arm/ptp_kvm.rst
···
 It relies on transferring the wall clock and counter value from the
 host to the guest using a KVM-specific hypercall.
 
-* ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID: 0x86000001
+``ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID``
+----------------------------------------
 
-This hypercall uses the SMC32/HVC32 calling convention:
+Retrieve current time information for the specific counter. There are no
+endianness restrictions.
 
-ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID
-    ============== ======== =====================================
-    Function ID:   (uint32) 0x86000001
-    Arguments:     (uint32) KVM_PTP_VIRT_COUNTER(0)
-                            KVM_PTP_PHYS_COUNTER(1)
-    Return Values: (int32)  NOT_SUPPORTED(-1) on error, or
-                   (uint32) Upper 32 bits of wall clock time (r0)
-                   (uint32) Lower 32 bits of wall clock time (r1)
-                   (uint32) Upper 32 bits of counter (r2)
-                   (uint32) Lower 32 bits of counter (r3)
-    Endianness:    No Restrictions.
-    ============== ======== =====================================
++---------------------+-------------------------------------------------------+
+| Presence:           | Optional                                              |
++---------------------+-------------------------------------------------------+
+| Calling convention: | HVC32                                                 |
++---------------------+----------+--------------------------------------------+
+| Function ID:        | (uint32) | 0x86000001                                 |
++---------------------+----------+----+---------------------------------------+
+| Arguments:          | (uint32) | R1 | ``KVM_PTP_VIRT_COUNTER (0)``          |
+|                     |          |    +---------------------------------------+
+|                     |          |    | ``KVM_PTP_PHYS_COUNTER (1)``          |
++---------------------+----------+----+---------------------------------------+
+| Return Values:      | (int32)  | R0 | ``NOT_SUPPORTED (-1)`` on error, else |
+|                     |          |    | upper 32 bits of wall clock time      |
+|                     +----------+----+---------------------------------------+
+|                     | (uint32) | R1 | Lower 32 bits of wall clock time      |
+|                     +----------+----+---------------------------------------+
+|                     | (uint32) | R2 | Upper 32 bits of counter              |
+|                     +----------+----+---------------------------------------+
+|                     | (uint32) | R3 | Lower 32 bits of counter              |
++---------------------+----------+----+---------------------------------------+
+2 -6
arch/arm64/include/asm/kvm_asm.h
···
 	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_range,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context,
 	__KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff,
-	__KVM_HOST_SMCCC_FUNC___vgic_v3_read_vmcr,
-	__KVM_HOST_SMCCC_FUNC___vgic_v3_write_vmcr,
-	__KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs,
-	__KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs,
+	__KVM_HOST_SMCCC_FUNC___vgic_v3_save_vmcr_aprs,
+	__KVM_HOST_SMCCC_FUNC___vgic_v3_restore_vmcr_aprs,
 	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_init_traps,
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vm,
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vcpu,
···
 extern void __kvm_adjust_pc(struct kvm_vcpu *vcpu);
 
 extern u64 __vgic_v3_get_gic_config(void);
-extern u64 __vgic_v3_read_vmcr(void);
-extern void __vgic_v3_write_vmcr(u32 vmcr);
 extern void __vgic_v3_init_lrs(void);
 
 extern u64 __kvm_get_mdcr_el2(void);
+2 -4
arch/arm64/include/asm/kvm_emulate.h
···
 	} else if (has_hvhe()) {
 		val = (CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN);
 
-		if (!vcpu_has_sve(vcpu) ||
-		    (*host_data_ptr(fp_owner) != FP_STATE_GUEST_OWNED))
+		if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs())
 			val |= CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN;
 		if (cpus_have_final_cap(ARM64_SME))
 			val |= CPACR_EL1_SMEN_EL1EN | CPACR_EL1_SMEN_EL0EN;
 	} else {
 		val = CPTR_NVHE_EL2_RES1;
 
-		if (vcpu_has_sve(vcpu) &&
-		    (*host_data_ptr(fp_owner) == FP_STATE_GUEST_OWNED))
+		if (vcpu_has_sve(vcpu) && guest_owns_fp_regs())
 			val |= CPTR_EL2_TZ;
 		if (cpus_have_final_cap(ARM64_SME))
 			val &= ~CPTR_EL2_TSM;
+16 -7
arch/arm64/include/asm/kvm_host.h
···
 struct kvm_protected_vm {
 	pkvm_handle_t handle;
 	struct kvm_hyp_memcache teardown_mc;
+	bool enabled;
 };
 
 struct kvm_mpidr_data {
···
 	struct kvm_guest_debug_arch *debug_ptr;
 	struct kvm_guest_debug_arch vcpu_debug_state;
 	struct kvm_guest_debug_arch external_debug_state;
-
-	struct task_struct *parent_task;
 
 	/* VGIC state */
 	struct vgic_cpu vgic_cpu;
···
 	&this_cpu_ptr_hyp_sym(kvm_host_data)->f)
 #endif
 
+/* Check whether the FP regs are owned by the guest */
+static inline bool guest_owns_fp_regs(void)
+{
+	return *host_data_ptr(fp_owner) == FP_STATE_GUEST_OWNED;
+}
+
+/* Check whether the FP regs are owned by the host */
+static inline bool host_owns_fp_regs(void)
+{
+	return *host_data_ptr(fp_owner) == FP_STATE_HOST_OWNED;
+}
+
 static inline void kvm_init_host_cpu_context(struct kvm_cpu_context *cpu_ctxt)
 {
 	/* The host's MPIDR is immutable, so let's set it up at boot time */
···
 void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
-void kvm_vcpu_unshare_task_fp(struct kvm_vcpu *vcpu);
 
 static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
 {
···
 
 #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
 
-static inline bool kvm_vm_is_protected(struct kvm *kvm)
-{
-	return false;
-}
+#define kvm_vm_is_protected(kvm)	(is_protected_kvm_enabled() && (kvm)->arch.pkvm.enabled)
+
+#define vcpu_is_protected(vcpu)		kvm_vm_is_protected((vcpu)->kvm)
 
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
+2 -2
arch/arm64/include/asm/kvm_hyp.h
···
 void __vgic_v3_restore_state(struct vgic_v3_cpu_if *cpu_if);
 void __vgic_v3_activate_traps(struct vgic_v3_cpu_if *cpu_if);
 void __vgic_v3_deactivate_traps(struct vgic_v3_cpu_if *cpu_if);
-void __vgic_v3_save_aprs(struct vgic_v3_cpu_if *cpu_if);
-void __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if);
+void __vgic_v3_save_vmcr_aprs(struct vgic_v3_cpu_if *cpu_if);
+void __vgic_v3_restore_vmcr_aprs(struct vgic_v3_cpu_if *cpu_if);
 int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu);
 
 #ifdef __KVM_NVHE_HYPERVISOR__
+8 -4
arch/arm64/include/asm/virt.h
···
 
 DECLARE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);
 
+static inline bool is_pkvm_initialized(void)
+{
+	return IS_ENABLED(CONFIG_KVM) &&
+	       static_branch_likely(&kvm_protected_mode_initialized);
+}
+
 /* Reports the availability of HYP mode */
 static inline bool is_hyp_mode_available(void)
 {
···
 	 * If KVM protected mode is initialized, all CPUs must have been booted
 	 * in EL2. Avoid checking __boot_cpu_mode as CPUs now come up in EL1.
 	 */
-	if (IS_ENABLED(CONFIG_KVM) &&
-	    static_branch_likely(&kvm_protected_mode_initialized))
+	if (is_pkvm_initialized())
 		return true;
 
 	return (__boot_cpu_mode[0] == BOOT_CPU_MODE_EL2 &&
···
 	 * If KVM protected mode is initialized, all CPUs must have been booted
 	 * in EL2. Avoid checking __boot_cpu_mode as CPUs now come up in EL1.
 	 */
-	if (IS_ENABLED(CONFIG_KVM) &&
-	    static_branch_likely(&kvm_protected_mode_initialized))
+	if (is_pkvm_initialized())
 		return false;
 
 	return __boot_cpu_mode[0] != __boot_cpu_mode[1];
+44 -19
arch/arm64/kvm/arm.c
···
 	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
 }
 
+/*
+ * This functions as an allow-list of protected VM capabilities.
+ * Features not explicitly allowed by this function are denied.
+ */
+static bool pkvm_ext_allowed(struct kvm *kvm, long ext)
+{
+	switch (ext) {
+	case KVM_CAP_IRQCHIP:
+	case KVM_CAP_ARM_PSCI:
+	case KVM_CAP_ARM_PSCI_0_2:
+	case KVM_CAP_NR_VCPUS:
+	case KVM_CAP_MAX_VCPUS:
+	case KVM_CAP_MAX_VCPU_ID:
+	case KVM_CAP_MSI_DEVID:
+	case KVM_CAP_ARM_VM_IPA_SIZE:
+	case KVM_CAP_ARM_PMU_V3:
+	case KVM_CAP_ARM_SVE:
+	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
+	case KVM_CAP_ARM_PTRAUTH_GENERIC:
+		return true;
+	default:
+		return false;
+	}
+}
+
 int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 			    struct kvm_enable_cap *cap)
 {
-	int r;
-	u64 new_cap;
+	int r = -EINVAL;
 
 	if (cap->flags)
+		return -EINVAL;
+
+	if (kvm_vm_is_protected(kvm) && !pkvm_ext_allowed(kvm, cap->cap))
 		return -EINVAL;
 
 	switch (cap->cap) {
···
 		break;
 	case KVM_CAP_ARM_MTE:
 		mutex_lock(&kvm->lock);
-		if (!system_supports_mte() || kvm->created_vcpus) {
-			r = -EINVAL;
-		} else {
+		if (system_supports_mte() && !kvm->created_vcpus) {
 			r = 0;
 			set_bit(KVM_ARCH_FLAG_MTE_ENABLED, &kvm->arch.flags);
 		}
···
 		set_bit(KVM_ARCH_FLAG_SYSTEM_SUSPEND_ENABLED, &kvm->arch.flags);
 		break;
 	case KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE:
-		new_cap = cap->args[0];
-
 		mutex_lock(&kvm->slots_lock);
 		/*
 		 * To keep things simple, allow changing the chunk
 		 * size only when no memory slots have been created.
 		 */
-		if (!kvm_are_all_memslots_empty(kvm)) {
-			r = -EINVAL;
-		} else if (new_cap && !kvm_is_block_size_supported(new_cap)) {
-			r = -EINVAL;
-		} else {
-			r = 0;
-			kvm->arch.mmu.split_page_chunk_size = new_cap;
+		if (kvm_are_all_memslots_empty(kvm)) {
+			u64 new_cap = cap->args[0];
+
+			if (!new_cap || kvm_is_block_size_supported(new_cap)) {
+				r = 0;
+				kvm->arch.mmu.split_page_chunk_size = new_cap;
+			}
 		}
 		mutex_unlock(&kvm->slots_lock);
 		break;
 	default:
-		r = -EINVAL;
 		break;
 	}
 
···
 int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 {
 	int r;
+
+	if (kvm && kvm_vm_is_protected(kvm) && !pkvm_ext_allowed(kvm, ext))
+		return 0;
+
 	switch (ext) {
 	case KVM_CAP_IRQCHIP:
 		r = vgic_present;
···
 	 * doorbells to be signalled, should an interrupt become pending.
 	 */
 	preempt_disable();
-	kvm_vgic_vmcr_sync(vcpu);
 	vcpu_set_flag(vcpu, IN_WFI);
-	vgic_v4_put(vcpu);
+	kvm_vgic_put(vcpu);
 	preempt_enable();
 
 	kvm_vcpu_halt(vcpu);
···
 	preempt_disable();
 	vcpu_clear_flag(vcpu, IN_WFI);
-	vgic_v4_load(vcpu);
+	kvm_vgic_load(vcpu);
 	preempt_enable();
 }
···
 
 	if (run->exit_reason == KVM_EXIT_MMIO) {
 		ret = kvm_handle_mmio_return(vcpu);
-		if (ret)
+		if (ret <= 0)
 			return ret;
 	}
+29 -31
arch/arm64/kvm/fpsimd.c
···
 #include <asm/kvm_mmu.h>
 #include <asm/sysreg.h>
 
-void kvm_vcpu_unshare_task_fp(struct kvm_vcpu *vcpu)
-{
-	struct task_struct *p = vcpu->arch.parent_task;
-	struct user_fpsimd_state *fpsimd;
-
-	if (!is_protected_kvm_enabled() || !p)
-		return;
-
-	fpsimd = &p->thread.uw.fpsimd_state;
-	kvm_unshare_hyp(fpsimd, fpsimd + 1);
-	put_task_struct(p);
-}
-
 /*
  * Called on entry to KVM_RUN unless this vcpu previously ran at least
  * once and the most recent prior KVM_RUN for this vcpu was called from
···
  */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
 {
+	struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state;
 	int ret;
 
-	struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state;
-
-	kvm_vcpu_unshare_task_fp(vcpu);
+	/* pKVM has its own tracking of the host fpsimd state. */
+	if (is_protected_kvm_enabled())
+		return 0;
 
 	/* Make sure the host task fpsimd state is visible to hyp: */
 	ret = kvm_share_hyp(fpsimd, fpsimd + 1);
 	if (ret)
 		return ret;
-
-	/*
-	 * We need to keep current's task_struct pinned until its data has been
-	 * unshared with the hypervisor to make sure it is not re-used by the
-	 * kernel and donated to someone else while already shared -- see
-	 * kvm_vcpu_unshare_task_fp() for the matching put_task_struct().
-	 */
-	if (is_protected_kvm_enabled()) {
-		get_task_struct(current);
-		vcpu->arch.parent_task = current;
-	}
 
 	return 0;
 }
···
 
 	WARN_ON_ONCE(!irqs_disabled());
 
-	if (*host_data_ptr(fp_owner) == FP_STATE_GUEST_OWNED) {
-
+	if (guest_owns_fp_regs()) {
 		/*
 		 * Currently we do not support SME guests so SVCR is
 		 * always 0 and we just need a variable to point to.
···
 		isb();
 	}
 
-	if (*host_data_ptr(fp_owner) == FP_STATE_GUEST_OWNED) {
+	if (guest_owns_fp_regs()) {
 		if (vcpu_has_sve(vcpu)) {
 			__vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
 
-			/* Restore the VL that was saved when bound to the CPU */
+			/*
+			 * Restore the VL that was saved when bound to the CPU,
+			 * which is the maximum VL for the guest. Because the
+			 * layout of the data when saving the sve state depends
+			 * on the VL, we need to use a consistent (i.e., the
+			 * maximum) VL.
+			 * Note that this means that at guest exit ZCR_EL1 is
+			 * not necessarily the same as on guest entry.
+			 *
+			 * Restoring the VL isn't needed in VHE mode since
+			 * ZCR_EL2 (accessed via ZCR_EL1) would fulfill the same
+			 * role when doing the save from EL2.
+			 */
 			if (!has_vhe())
 				sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1,
 						       SYS_ZCR_EL1);
 		}
 
+		/*
+		 * Flush (save and invalidate) the fpsimd/sve state so that if
+		 * the host tries to use fpsimd/sve, it's not using stale data
+		 * from the guest.
+		 *
+		 * Flushing the state sets the TIF_FOREIGN_FPSTATE bit for the
+		 * context unconditionally, in both nVHE and VHE. This allows
+		 * the kernel to restore the fpsimd/sve state, including ZCR_EL1
+		 * when needed.
+		 */
 		fpsimd_save_and_flush_cpu_state();
 	} else if (has_vhe() && system_supports_sve()) {
 		/*
+1 -7
arch/arm64/kvm/hyp/include/hyp/switch.h
···
 extern struct kvm_exception_table_entry __start___kvm_ex_table;
 extern struct kvm_exception_table_entry __stop___kvm_ex_table;
 
-/* Check whether the FP regs are owned by the guest */
-static inline bool guest_owns_fp_regs(struct kvm_vcpu *vcpu)
-{
-	return *host_data_ptr(fp_owner) == FP_STATE_GUEST_OWNED;
-}
-
 /* Save the 32-bit only FPSIMD system register state */
 static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu)
 {
···
 	isb();
 
 	/* Write out the host state if it's in the registers */
-	if (*host_data_ptr(fp_owner) == FP_STATE_HOST_OWNED)
+	if (host_owns_fp_regs())
 		__fpsimd_save_state(*host_data_ptr(fpsimd_state));
 
 	/* Restore the guest state */
+6
arch/arm64/kvm/hyp/include/nvhe/pkvm.h
··· 53 53 return container_of(hyp_vcpu->vcpu.kvm, struct pkvm_hyp_vm, kvm); 54 54 } 55 55 56 + static inline bool pkvm_hyp_vcpu_is_protected(struct pkvm_hyp_vcpu *hyp_vcpu) 57 + { 58 + return vcpu_is_protected(&hyp_vcpu->vcpu); 59 + } 60 + 56 61 void pkvm_hyp_vm_table_init(void *tbl); 62 + void pkvm_host_fpsimd_state_init(void); 57 63 58 64 int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva, 59 65 unsigned long pgd_hva);
+6 -18
arch/arm64/kvm/hyp/nvhe/hyp-main.c
··· 175 175 cpu_reg(host_ctxt, 1) = __vgic_v3_get_gic_config(); 176 176 } 177 177 178 - static void handle___vgic_v3_read_vmcr(struct kvm_cpu_context *host_ctxt) 179 - { 180 - cpu_reg(host_ctxt, 1) = __vgic_v3_read_vmcr(); 181 - } 182 - 183 - static void handle___vgic_v3_write_vmcr(struct kvm_cpu_context *host_ctxt) 184 - { 185 - __vgic_v3_write_vmcr(cpu_reg(host_ctxt, 1)); 186 - } 187 - 188 178 static void handle___vgic_v3_init_lrs(struct kvm_cpu_context *host_ctxt) 189 179 { 190 180 __vgic_v3_init_lrs(); ··· 185 195 cpu_reg(host_ctxt, 1) = __kvm_get_mdcr_el2(); 186 196 } 187 197 188 - static void handle___vgic_v3_save_aprs(struct kvm_cpu_context *host_ctxt) 198 + static void handle___vgic_v3_save_vmcr_aprs(struct kvm_cpu_context *host_ctxt) 189 199 { 190 200 DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1); 191 201 192 - __vgic_v3_save_aprs(kern_hyp_va(cpu_if)); 202 + __vgic_v3_save_vmcr_aprs(kern_hyp_va(cpu_if)); 193 203 } 194 204 195 - static void handle___vgic_v3_restore_aprs(struct kvm_cpu_context *host_ctxt) 205 + static void handle___vgic_v3_restore_vmcr_aprs(struct kvm_cpu_context *host_ctxt) 196 206 { 197 207 DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1); 198 208 199 - __vgic_v3_restore_aprs(kern_hyp_va(cpu_if)); 209 + __vgic_v3_restore_vmcr_aprs(kern_hyp_va(cpu_if)); 200 210 } 201 211 202 212 static void handle___pkvm_init(struct kvm_cpu_context *host_ctxt) ··· 327 337 HANDLE_FUNC(__kvm_tlb_flush_vmid_range), 328 338 HANDLE_FUNC(__kvm_flush_cpu_context), 329 339 HANDLE_FUNC(__kvm_timer_set_cntvoff), 330 - HANDLE_FUNC(__vgic_v3_read_vmcr), 331 - HANDLE_FUNC(__vgic_v3_write_vmcr), 332 - HANDLE_FUNC(__vgic_v3_save_aprs), 333 - HANDLE_FUNC(__vgic_v3_restore_aprs), 340 + HANDLE_FUNC(__vgic_v3_save_vmcr_aprs), 341 + HANDLE_FUNC(__vgic_v3_restore_vmcr_aprs), 334 342 HANDLE_FUNC(__pkvm_vcpu_init_traps), 335 343 HANDLE_FUNC(__pkvm_init_vm), 336 344 HANDLE_FUNC(__pkvm_init_vcpu),
+7 -1
arch/arm64/kvm/hyp/nvhe/mem_protect.c
··· 533 533 int ret = 0; 534 534 535 535 esr = read_sysreg_el2(SYS_ESR); 536 - BUG_ON(!__get_fault_info(esr, &fault)); 536 + if (!__get_fault_info(esr, &fault)) { 537 + /* 538 + * We've presumably raced with a page-table change which caused 539 + * AT to fail, try again. 540 + */ 541 + return; 542 + } 537 543 538 544 addr = (fault.hpfar_el2 & HPFAR_MASK) << 8; 539 545 ret = host_stage2_idmap(addr);
+13 -1
arch/arm64/kvm/hyp/nvhe/pkvm.c
··· 200 200 } 201 201 202 202 /* 203 - * Initialize trap register values for protected VMs. 203 + * Initialize trap register values in protected mode. 204 204 */ 205 205 void __pkvm_vcpu_init_traps(struct kvm_vcpu *vcpu) 206 206 { ··· 245 245 { 246 246 WARN_ON(vm_table); 247 247 vm_table = tbl; 248 + } 249 + 250 + void pkvm_host_fpsimd_state_init(void) 251 + { 252 + unsigned long i; 253 + 254 + for (i = 0; i < hyp_nr_cpus; i++) { 255 + struct kvm_host_data *host_data = per_cpu_ptr(&kvm_host_data, i); 256 + 257 + host_data->fpsimd_state = &host_data->host_ctxt.fp_regs; 258 + } 248 259 } 249 260 250 261 /* ··· 441 430 442 431 static void __unmap_donated_memory(void *va, size_t size) 443 432 { 433 + kvm_flush_dcache_to_poc(va, size); 444 434 WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(va), 445 435 PAGE_ALIGN(size) >> PAGE_SHIFT)); 446 436 }
+1
arch/arm64/kvm/hyp/nvhe/setup.c
··· 300 300 goto out; 301 301 302 302 pkvm_hyp_vm_table_init(vm_table_base); 303 + pkvm_host_fpsimd_state_init(); 303 304 out: 304 305 /* 305 306 * We tail-called to here from handle___pkvm_init() and will not return,
+4 -6
arch/arm64/kvm/hyp/nvhe/switch.c
··· 53 53 val |= CPTR_EL2_TSM; 54 54 } 55 55 56 - if (!guest_owns_fp_regs(vcpu)) { 56 + if (!guest_owns_fp_regs()) { 57 57 if (has_hvhe()) 58 58 val &= ~(CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN | 59 59 CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN); ··· 207 207 208 208 static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu) 209 209 { 210 - if (unlikely(kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)))) 210 + if (unlikely(vcpu_is_protected(vcpu))) 211 211 return pvm_exit_handlers; 212 212 213 213 return hyp_exit_handlers; ··· 226 226 */ 227 227 static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code) 228 228 { 229 - struct kvm *kvm = kern_hyp_va(vcpu->kvm); 230 - 231 - if (kvm_vm_is_protected(kvm) && vcpu_mode_is_32bit(vcpu)) { 229 + if (unlikely(vcpu_is_protected(vcpu) && vcpu_mode_is_32bit(vcpu))) { 232 230 /* 233 231 * As we have caught the guest red-handed, decide that it isn't 234 232 * fit for purpose anymore by making the vcpu invalid. The VMM ··· 333 335 334 336 __sysreg_restore_state_nvhe(host_ctxt); 335 337 336 - if (*host_data_ptr(fp_owner) == FP_STATE_GUEST_OWNED) 338 + if (guest_owns_fp_regs()) 337 339 __fpsimd_save_fpexc32(vcpu); 338 340 339 341 __debug_switch_to_host(vcpu);
+91 -24
arch/arm64/kvm/hyp/nvhe/tlb.c
··· 11 11 #include <nvhe/mem_protect.h> 12 12 13 13 struct tlb_inv_context { 14 - u64 tcr; 14 + struct kvm_s2_mmu *mmu; 15 + u64 tcr; 16 + u64 sctlr; 15 17 }; 16 18 17 - static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu, 18 - struct tlb_inv_context *cxt, 19 - bool nsh) 19 + static void enter_vmid_context(struct kvm_s2_mmu *mmu, 20 + struct tlb_inv_context *cxt, 21 + bool nsh) 20 22 { 23 + struct kvm_s2_mmu *host_s2_mmu = &host_mmu.arch.mmu; 24 + struct kvm_cpu_context *host_ctxt; 25 + struct kvm_vcpu *vcpu; 26 + 27 + host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; 28 + vcpu = host_ctxt->__hyp_running_vcpu; 29 + cxt->mmu = NULL; 30 + 21 31 /* 22 32 * We have two requirements: 23 33 * ··· 50 40 else 51 41 dsb(ish); 52 42 43 + /* 44 + * If we're already in the desired context, then there's nothing to do. 45 + */ 46 + if (vcpu) { 47 + /* 48 + * We're in guest context. However, for this to work, this needs 49 + * to be called from within __kvm_vcpu_run(), which ensures that 50 + * __hyp_running_vcpu is set to the current guest vcpu. 51 + */ 52 + if (mmu == vcpu->arch.hw_mmu || WARN_ON(mmu != host_s2_mmu)) 53 + return; 54 + 55 + cxt->mmu = vcpu->arch.hw_mmu; 56 + } else { 57 + /* We're in host context. */ 58 + if (mmu == host_s2_mmu) 59 + return; 60 + 61 + cxt->mmu = host_s2_mmu; 62 + } 63 + 53 64 if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { 54 65 u64 val; 55 66 56 67 /* 57 68 * For CPUs that are affected by ARM 1319367, we need to 58 - * avoid a host Stage-1 walk while we have the guest's 59 - * VMID set in the VTTBR in order to invalidate TLBs. 60 - * We're guaranteed that the S1 MMU is enabled, so we can 61 - * simply set the EPD bits to avoid any further TLB fill. 69 + * avoid a Stage-1 walk with the old VMID while we have 70 + * the new VMID set in the VTTBR in order to invalidate TLBs. 71 + * We're guaranteed that the host S1 MMU is enabled, so 72 + * we can simply set the EPD bits to avoid any further 73 + * TLB fill. 
For guests, we ensure that the S1 MMU is 74 + * temporarily enabled in the next context. 62 75 */ 63 76 val = cxt->tcr = read_sysreg_el1(SYS_TCR); 64 77 val |= TCR_EPD1_MASK | TCR_EPD0_MASK; 65 78 write_sysreg_el1(val, SYS_TCR); 66 79 isb(); 80 + 81 + if (vcpu) { 82 + val = cxt->sctlr = read_sysreg_el1(SYS_SCTLR); 83 + if (!(val & SCTLR_ELx_M)) { 84 + val |= SCTLR_ELx_M; 85 + write_sysreg_el1(val, SYS_SCTLR); 86 + isb(); 87 + } 88 + } else { 89 + /* The host S1 MMU is always enabled. */ 90 + cxt->sctlr = SCTLR_ELx_M; 91 + } 67 92 } 68 93 69 94 /* ··· 107 62 * ensuring that we always have an ISB, but not two ISBs back 108 63 * to back. 109 64 */ 110 - __load_stage2(mmu, kern_hyp_va(mmu->arch)); 65 + if (vcpu) 66 + __load_host_stage2(); 67 + else 68 + __load_stage2(mmu, kern_hyp_va(mmu->arch)); 69 + 111 70 asm(ALTERNATIVE("isb", "nop", ARM64_WORKAROUND_SPECULATIVE_AT)); 112 71 } 113 72 114 - static void __tlb_switch_to_host(struct tlb_inv_context *cxt) 73 + static void exit_vmid_context(struct tlb_inv_context *cxt) 115 74 { 116 - __load_host_stage2(); 75 + struct kvm_s2_mmu *mmu = cxt->mmu; 76 + struct kvm_cpu_context *host_ctxt; 77 + struct kvm_vcpu *vcpu; 78 + 79 + host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; 80 + vcpu = host_ctxt->__hyp_running_vcpu; 81 + 82 + if (!mmu) 83 + return; 84 + 85 + if (vcpu) 86 + __load_stage2(mmu, kern_hyp_va(mmu->arch)); 87 + else 88 + __load_host_stage2(); 117 89 118 90 if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { 119 - /* Ensure write of the host VMID */ 91 + /* Ensure write of the old VMID */ 120 92 isb(); 121 - /* Restore the host's TCR_EL1 */ 93 + 94 + if (!(cxt->sctlr & SCTLR_ELx_M)) { 95 + write_sysreg_el1(cxt->sctlr, SYS_SCTLR); 96 + isb(); 97 + } 98 + 122 99 write_sysreg_el1(cxt->tcr, SYS_TCR); 123 100 } 124 101 } ··· 151 84 struct tlb_inv_context cxt; 152 85 153 86 /* Switch to requested VMID */ 154 - __tlb_switch_to_guest(mmu, &cxt, false); 87 + enter_vmid_context(mmu, &cxt, false); 155 88 156 89 /* 
157 90 * We could do so much better if we had the VA as well. ··· 172 105 dsb(ish); 173 106 isb(); 174 107 175 - __tlb_switch_to_host(&cxt); 108 + exit_vmid_context(&cxt); 176 109 } 177 110 178 111 void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu, ··· 181 114 struct tlb_inv_context cxt; 182 115 183 116 /* Switch to requested VMID */ 184 - __tlb_switch_to_guest(mmu, &cxt, true); 117 + enter_vmid_context(mmu, &cxt, true); 185 118 186 119 /* 187 120 * We could do so much better if we had the VA as well. ··· 202 135 dsb(nsh); 203 136 isb(); 204 137 205 - __tlb_switch_to_host(&cxt); 138 + exit_vmid_context(&cxt); 206 139 } 207 140 208 141 void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu, ··· 219 152 start = round_down(start, stride); 220 153 221 154 /* Switch to requested VMID */ 222 - __tlb_switch_to_guest(mmu, &cxt, false); 155 + enter_vmid_context(mmu, &cxt, false); 223 156 224 157 __flush_s2_tlb_range_op(ipas2e1is, start, pages, stride, 225 158 TLBI_TTL_UNKNOWN); ··· 229 162 dsb(ish); 230 163 isb(); 231 164 232 - __tlb_switch_to_host(&cxt); 165 + exit_vmid_context(&cxt); 233 166 } 234 167 235 168 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu) ··· 237 170 struct tlb_inv_context cxt; 238 171 239 172 /* Switch to requested VMID */ 240 - __tlb_switch_to_guest(mmu, &cxt, false); 173 + enter_vmid_context(mmu, &cxt, false); 241 174 242 175 __tlbi(vmalls12e1is); 243 176 dsb(ish); 244 177 isb(); 245 178 246 - __tlb_switch_to_host(&cxt); 179 + exit_vmid_context(&cxt); 247 180 } 248 181 249 182 void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu) ··· 251 184 struct tlb_inv_context cxt; 252 185 253 186 /* Switch to requested VMID */ 254 - __tlb_switch_to_guest(mmu, &cxt, false); 187 + enter_vmid_context(mmu, &cxt, false); 255 188 256 189 __tlbi(vmalle1); 257 190 asm volatile("ic iallu"); 258 191 dsb(nsh); 259 192 isb(); 260 193 261 - __tlb_switch_to_host(&cxt); 194 + exit_vmid_context(&cxt); 262 195 } 263 196 264 197 void __kvm_flush_vm_context(void) 265 198 { 
266 - /* Same remark as in __tlb_switch_to_guest() */ 199 + /* Same remark as in enter_vmid_context() */ 267 200 dsb(ish); 268 201 __tlbi(alle1is); 269 202 dsb(ish);
+18 -3
arch/arm64/kvm/hyp/pgtable.c
··· 914 914 static bool stage2_pte_cacheable(struct kvm_pgtable *pgt, kvm_pte_t pte) 915 915 { 916 916 u64 memattr = pte & KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR; 917 - return memattr == KVM_S2_MEMATTR(pgt, NORMAL); 917 + return kvm_pte_valid(pte) && memattr == KVM_S2_MEMATTR(pgt, NORMAL); 918 918 } 919 919 920 920 static bool stage2_pte_executable(kvm_pte_t pte) 921 921 { 922 - return !(pte & KVM_PTE_LEAF_ATTR_HI_S2_XN); 922 + return kvm_pte_valid(pte) && !(pte & KVM_PTE_LEAF_ATTR_HI_S2_XN); 923 923 } 924 924 925 925 static u64 stage2_map_walker_phys_addr(const struct kvm_pgtable_visit_ctx *ctx, ··· 978 978 */ 979 979 if (!stage2_pte_needs_update(ctx->old, new)) 980 980 return -EAGAIN; 981 + 982 + /* If we're only changing software bits, then store them and go! */ 983 + if (!kvm_pgtable_walk_shared(ctx) && 984 + !((ctx->old ^ new) & ~KVM_PTE_LEAF_ATTR_HI_SW)) { 985 + bool old_is_counted = stage2_pte_is_counted(ctx->old); 986 + 987 + if (old_is_counted != stage2_pte_is_counted(new)) { 988 + if (old_is_counted) 989 + mm_ops->put_page(ctx->ptep); 990 + else 991 + mm_ops->get_page(ctx->ptep); 992 + } 993 + WARN_ON_ONCE(!stage2_try_set_pte(ctx, new)); 994 + return 0; 995 + } 981 996 982 997 if (!stage2_try_break_pte(ctx, data->mmu)) 983 998 return -EAGAIN; ··· 1385 1370 struct kvm_pgtable *pgt = ctx->arg; 1386 1371 struct kvm_pgtable_mm_ops *mm_ops = pgt->mm_ops; 1387 1372 1388 - if (!kvm_pte_valid(ctx->old) || !stage2_pte_cacheable(pgt, ctx->old)) 1373 + if (!stage2_pte_cacheable(pgt, ctx->old)) 1389 1374 return 0; 1390 1375 1391 1376 if (mm_ops->dcache_clean_inval_poc)
+23 -4
arch/arm64/kvm/hyp/vgic-v3-sr.c
··· 330 330 write_gicreg(0, ICH_HCR_EL2); 331 331 } 332 332 333 - void __vgic_v3_save_aprs(struct vgic_v3_cpu_if *cpu_if) 333 + static void __vgic_v3_save_aprs(struct vgic_v3_cpu_if *cpu_if) 334 334 { 335 335 u64 val; 336 336 u32 nr_pre_bits; ··· 363 363 } 364 364 } 365 365 366 - void __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if) 366 + static void __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if) 367 367 { 368 368 u64 val; 369 369 u32 nr_pre_bits; ··· 455 455 return val; 456 456 } 457 457 458 - u64 __vgic_v3_read_vmcr(void) 458 + static u64 __vgic_v3_read_vmcr(void) 459 459 { 460 460 return read_gicreg(ICH_VMCR_EL2); 461 461 } 462 462 463 - void __vgic_v3_write_vmcr(u32 vmcr) 463 + static void __vgic_v3_write_vmcr(u32 vmcr) 464 464 { 465 465 write_gicreg(vmcr, ICH_VMCR_EL2); 466 + } 467 + 468 + void __vgic_v3_save_vmcr_aprs(struct vgic_v3_cpu_if *cpu_if) 469 + { 470 + __vgic_v3_save_aprs(cpu_if); 471 + if (cpu_if->vgic_sre) 472 + cpu_if->vgic_vmcr = __vgic_v3_read_vmcr(); 473 + } 474 + 475 + void __vgic_v3_restore_vmcr_aprs(struct vgic_v3_cpu_if *cpu_if) 476 + { 477 + /* 478 + * If dealing with a GICv2 emulation on GICv3, VMCR_EL2.VFIQen 479 + * is dependent on ICC_SRE_EL1.SRE, and we have to perform the 480 + * VMCR_EL2 save/restore in the world switch. 481 + */ 482 + if (cpu_if->vgic_sre) 483 + __vgic_v3_write_vmcr(cpu_if->vgic_vmcr); 484 + __vgic_v3_restore_aprs(cpu_if); 466 485 } 467 486 468 487 static int __vgic_v3_bpr_min(void)
+2 -2
arch/arm64/kvm/hyp/vhe/switch.c
··· 107 107 108 108 val |= CPTR_EL2_TAM; 109 109 110 - if (guest_owns_fp_regs(vcpu)) { 110 + if (guest_owns_fp_regs()) { 111 111 if (vcpu_has_sve(vcpu)) 112 112 val |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN; 113 113 } else { ··· 341 341 342 342 sysreg_restore_host_state_vhe(host_ctxt); 343 343 344 - if (*host_data_ptr(fp_owner) == FP_STATE_GUEST_OWNED) 344 + if (guest_owns_fp_regs()) 345 345 __fpsimd_save_fpexc32(vcpu); 346 346 347 347 __debug_switch_to_host(vcpu);
+13 -13
arch/arm64/kvm/hyp/vhe/tlb.c
··· 17 17 u64 sctlr; 18 18 }; 19 19 20 - static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu, 21 - struct tlb_inv_context *cxt) 20 + static void enter_vmid_context(struct kvm_s2_mmu *mmu, 21 + struct tlb_inv_context *cxt) 22 22 { 23 23 struct kvm_vcpu *vcpu = kvm_get_running_vcpu(); 24 24 u64 val; ··· 67 67 isb(); 68 68 } 69 69 70 - static void __tlb_switch_to_host(struct tlb_inv_context *cxt) 70 + static void exit_vmid_context(struct tlb_inv_context *cxt) 71 71 { 72 72 /* 73 73 * We're done with the TLB operation, let's restore the host's ··· 97 97 dsb(ishst); 98 98 99 99 /* Switch to requested VMID */ 100 - __tlb_switch_to_guest(mmu, &cxt); 100 + enter_vmid_context(mmu, &cxt); 101 101 102 102 /* 103 103 * We could do so much better if we had the VA as well. ··· 118 118 dsb(ish); 119 119 isb(); 120 120 121 - __tlb_switch_to_host(&cxt); 121 + exit_vmid_context(&cxt); 122 122 } 123 123 124 124 void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu, ··· 129 129 dsb(nshst); 130 130 131 131 /* Switch to requested VMID */ 132 - __tlb_switch_to_guest(mmu, &cxt); 132 + enter_vmid_context(mmu, &cxt); 133 133 134 134 /* 135 135 * We could do so much better if we had the VA as well. 
··· 150 150 dsb(nsh); 151 151 isb(); 152 152 153 - __tlb_switch_to_host(&cxt); 153 + exit_vmid_context(&cxt); 154 154 } 155 155 156 156 void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu, ··· 169 169 dsb(ishst); 170 170 171 171 /* Switch to requested VMID */ 172 - __tlb_switch_to_guest(mmu, &cxt); 172 + enter_vmid_context(mmu, &cxt); 173 173 174 174 __flush_s2_tlb_range_op(ipas2e1is, start, pages, stride, 175 175 TLBI_TTL_UNKNOWN); ··· 179 179 dsb(ish); 180 180 isb(); 181 181 182 - __tlb_switch_to_host(&cxt); 182 + exit_vmid_context(&cxt); 183 183 } 184 184 185 185 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu) ··· 189 189 dsb(ishst); 190 190 191 191 /* Switch to requested VMID */ 192 - __tlb_switch_to_guest(mmu, &cxt); 192 + enter_vmid_context(mmu, &cxt); 193 193 194 194 __tlbi(vmalls12e1is); 195 195 dsb(ish); 196 196 isb(); 197 197 198 - __tlb_switch_to_host(&cxt); 198 + exit_vmid_context(&cxt); 199 199 } 200 200 201 201 void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu) ··· 203 203 struct tlb_inv_context cxt; 204 204 205 205 /* Switch to requested VMID */ 206 - __tlb_switch_to_guest(mmu, &cxt); 206 + enter_vmid_context(mmu, &cxt); 207 207 208 208 __tlbi(vmalle1); 209 209 asm volatile("ic iallu"); 210 210 dsb(nsh); 211 211 isb(); 212 212 213 - __tlb_switch_to_host(&cxt); 213 + exit_vmid_context(&cxt); 214 214 } 215 215 216 216 void __kvm_flush_vm_context(void)
+10 -2
arch/arm64/kvm/mmio.c
··· 86 86 87 87 /* Detect an already handled MMIO return */ 88 88 if (unlikely(!vcpu->mmio_needed)) 89 - return 0; 89 + return 1; 90 90 91 91 vcpu->mmio_needed = 0; 92 92 ··· 117 117 */ 118 118 kvm_incr_pc(vcpu); 119 119 120 - return 0; 120 + return 1; 121 121 } 122 122 123 123 int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa) ··· 133 133 /* 134 134 * No valid syndrome? Ask userspace for help if it has 135 135 * volunteered to do so, and bail out otherwise. 136 + * 137 + * In the protected VM case, there isn't much userspace can do 138 + * though, so directly deliver an exception to the guest. 136 139 */ 137 140 if (!kvm_vcpu_dabt_isvalid(vcpu)) { 138 141 trace_kvm_mmio_nisv(*vcpu_pc(vcpu), kvm_vcpu_get_esr(vcpu), 139 142 kvm_vcpu_get_hfar(vcpu), fault_ipa); 143 + 144 + if (vcpu_is_protected(vcpu)) { 145 + kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu)); 146 + return 1; 147 + } 140 148 141 149 if (test_bit(KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER, 142 150 &vcpu->kvm->arch.flags)) {
+5 -3
arch/arm64/kvm/mmu.c
··· 1522 1522 1523 1523 read_lock(&kvm->mmu_lock); 1524 1524 pgt = vcpu->arch.hw_mmu->pgt; 1525 - if (mmu_invalidate_retry(kvm, mmu_seq)) 1525 + if (mmu_invalidate_retry(kvm, mmu_seq)) { 1526 + ret = -EAGAIN; 1526 1527 goto out_unlock; 1528 + } 1527 1529 1528 1530 /* 1529 1531 * If we are not forced to use page mapping, check if we are ··· 1583 1581 memcache, 1584 1582 KVM_PGTABLE_WALK_HANDLE_FAULT | 1585 1583 KVM_PGTABLE_WALK_SHARED); 1584 + out_unlock: 1585 + read_unlock(&kvm->mmu_lock); 1586 1586 1587 1587 /* Mark the page dirty only if the fault is handled successfully */ 1588 1588 if (writable && !ret) { ··· 1592 1588 mark_page_dirty_in_slot(kvm, memslot, gfn); 1593 1589 } 1594 1590 1595 - out_unlock: 1596 - read_unlock(&kvm->mmu_lock); 1597 1591 kvm_release_pfn_clean(pfn); 1598 1592 return ret != -EAGAIN ? ret : 0; 1599 1593 }
+1 -1
arch/arm64/kvm/pkvm.c
··· 222 222 223 223 int pkvm_init_host_vm(struct kvm *host_kvm) 224 224 { 225 - mutex_init(&host_kvm->lock); 226 225 return 0; 227 226 } 228 227 ··· 258 259 * at, which would end badly once inaccessible. 259 260 */ 260 261 kmemleak_free_part(__hyp_bss_start, __hyp_bss_end - __hyp_bss_start); 262 + kmemleak_free_part(__hyp_rodata_start, __hyp_rodata_end - __hyp_rodata_start); 261 263 kmemleak_free_part_phys(hyp_mem_base, hyp_mem_size); 262 264 263 265 ret = pkvm_drop_host_privileges();
-1
arch/arm64/kvm/reset.c
··· 151 151 { 152 152 void *sve_state = vcpu->arch.sve_state; 153 153 154 - kvm_vcpu_unshare_task_fp(vcpu); 155 154 kvm_unshare_hyp(vcpu, vcpu + 1); 156 155 if (sve_state) 157 156 kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu));
+1 -8
arch/arm64/kvm/vgic/vgic-v2.c
··· 464 464 kvm_vgic_global_state.vctrl_base + GICH_APR); 465 465 } 466 466 467 - void vgic_v2_vmcr_sync(struct kvm_vcpu *vcpu) 468 - { 469 - struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2; 470 - 471 - cpu_if->vgic_vmcr = readl_relaxed(kvm_vgic_global_state.vctrl_base + GICH_VMCR); 472 - } 473 - 474 467 void vgic_v2_put(struct kvm_vcpu *vcpu) 475 468 { 476 469 struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2; 477 470 478 - vgic_v2_vmcr_sync(vcpu); 471 + cpu_if->vgic_vmcr = readl_relaxed(kvm_vgic_global_state.vctrl_base + GICH_VMCR); 479 472 cpu_if->vgic_apr = readl_relaxed(kvm_vgic_global_state.vctrl_base + GICH_APR); 480 473 }
+2 -21
arch/arm64/kvm/vgic/vgic-v3.c
··· 722 722 { 723 723 struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; 724 724 725 - /* 726 - * If dealing with a GICv2 emulation on GICv3, VMCR_EL2.VFIQen 727 - * is dependent on ICC_SRE_EL1.SRE, and we have to perform the 728 - * VMCR_EL2 save/restore in the world switch. 729 - */ 730 - if (likely(cpu_if->vgic_sre)) 731 - kvm_call_hyp(__vgic_v3_write_vmcr, cpu_if->vgic_vmcr); 732 - 733 - kvm_call_hyp(__vgic_v3_restore_aprs, cpu_if); 725 + kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if); 734 726 735 727 if (has_vhe()) 736 728 __vgic_v3_activate_traps(cpu_if); ··· 730 738 WARN_ON(vgic_v4_load(vcpu)); 731 739 } 732 740 733 - void vgic_v3_vmcr_sync(struct kvm_vcpu *vcpu) 734 - { 735 - struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; 736 - 737 - if (likely(cpu_if->vgic_sre)) 738 - cpu_if->vgic_vmcr = kvm_call_hyp_ret(__vgic_v3_read_vmcr); 739 - } 740 - 741 741 void vgic_v3_put(struct kvm_vcpu *vcpu) 742 742 { 743 743 struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; 744 744 745 + kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if); 745 746 WARN_ON(vgic_v4_put(vcpu)); 746 - 747 - vgic_v3_vmcr_sync(vcpu); 748 - 749 - kvm_call_hyp(__vgic_v3_save_aprs, cpu_if); 750 747 751 748 if (has_vhe()) 752 749 __vgic_v3_deactivate_traps(cpu_if);
-11
arch/arm64/kvm/vgic/vgic.c
··· 937 937 vgic_v3_put(vcpu); 938 938 } 939 939 940 - void kvm_vgic_vmcr_sync(struct kvm_vcpu *vcpu) 941 - { 942 - if (unlikely(!irqchip_in_kernel(vcpu->kvm))) 943 - return; 944 - 945 - if (kvm_vgic_global_state.type == VGIC_V2) 946 - vgic_v2_vmcr_sync(vcpu); 947 - else 948 - vgic_v3_vmcr_sync(vcpu); 949 - } 950 - 951 940 int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu) 952 941 { 953 942 struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
-2
arch/arm64/kvm/vgic/vgic.h
··· 215 215 void vgic_v2_init_lrs(void); 216 216 void vgic_v2_load(struct kvm_vcpu *vcpu); 217 217 void vgic_v2_put(struct kvm_vcpu *vcpu); 218 - void vgic_v2_vmcr_sync(struct kvm_vcpu *vcpu); 219 218 220 219 void vgic_v2_save_state(struct kvm_vcpu *vcpu); 221 220 void vgic_v2_restore_state(struct kvm_vcpu *vcpu); ··· 253 254 254 255 void vgic_v3_load(struct kvm_vcpu *vcpu); 255 256 void vgic_v3_put(struct kvm_vcpu *vcpu); 256 - void vgic_v3_vmcr_sync(struct kvm_vcpu *vcpu); 257 257 258 258 bool vgic_has_its(struct kvm *kvm); 259 259 int kvm_vgic_register_its_device(void);
-1
include/kvm/arm_vgic.h
··· 389 389 390 390 void kvm_vgic_load(struct kvm_vcpu *vcpu); 391 391 void kvm_vgic_put(struct kvm_vcpu *vcpu); 392 - void kvm_vgic_vmcr_sync(struct kvm_vcpu *vcpu); 393 392 394 393 #define irqchip_in_kernel(k) (!!((k)->arch.vgic.in_kernel)) 395 394 #define vgic_initialized(k) ((k)->arch.vgic.initialized)