Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

KVM: VMX: Move VM-Enter + VM-Exit handling to non-inline sub-routines

Transitioning to/from a VMX guest requires KVM to manually save/load
the bulk of CPU state that the guest is allowed to directly access,
e.g. XSAVE state, CR2, GPRs, etc... For obvious reasons, loading the
guest's GPR snapshot prior to VM-Enter and saving the snapshot after
VM-Exit is done via handcoded assembly. The assembly blob is written
as inline asm so that it can easily access KVM-defined structs that
are used to hold guest state, e.g. moving the blob to a standalone
assembly file would require generating defines for struct offsets.

The other relevant aspect of VMX transitions in KVM is the handling of
VM-Exits. KVM doesn't employ a separate VM-Exit handler per se, but
rather treats the VMX transition as a mega instruction (with many side
effects), i.e. sets the VMCS.HOST_RIP to a label immediately following
VMLAUNCH/VMRESUME. The label is then exposed to C code via a global
variable definition in the inline assembly.

Because of the global variable, KVM takes steps to (attempt to) ensure
only a single instance of the owning C function, e.g. vmx_vcpu_run, is
generated by the compiler. The earliest approach placed the inline
assembly in a separate noinline function[1]. Later, the assembly was
folded back into vmx_vcpu_run() and tagged with __noclone[2][3], which
is still used today.

After moving to __noclone, an edge case was encountered where GCC's
-ftracer optimization resulted in the inline assembly blob being
duplicated. This was "fixed" by explicitly disabling -ftracer in the
__noclone definition[4].

Recently, it was found that disabling -ftracer causes build warnings
for unsuspecting users of __noclone[5], and more importantly for KVM,
prevents the compiler from properly optimizing vmx_vcpu_run()[6]. And
perhaps most importantly of all, it was pointed out that there is no
way to prevent duplication of a function with 100% reliability[7],
i.e. more edge cases may be encountered in the future.

So to summarize, the only way to prevent the compiler from duplicating
the global variable definition is to move the variable out of inline
assembly, which has been suggested several times over[1][7][8].

Resolve the aforementioned issues by moving the VMLAUNCH+VMRESUME and
VM-Exit "handler" to standalone assembly sub-routines. Moving only
the core VMX transition code allows the struct indexing to remain as
inline assembly and also allows the sub-routines to be used by
nested_vmx_check_vmentry_hw(). Reusing the sub-routines has a happy
side-effect of eliminating two VMWRITEs in the nested_early_check path
as there is no longer a need to dynamically change VMCS.HOST_RIP.

Note that callers to vmx_vmenter() must account for the CALL modifying
RSP, e.g. must subtract op-size from RSP when synchronizing RSP with
VMCS.HOST_RSP and "restore" RSP prior to the CALL. There are no great
alternatives to fudging RSP. Saving RSP in vmx_vmenter() is difficult
because doing so requires a second register (VMWRITE does not provide
an immediate encoding for the VMCS field and KVM supports Hyper-V's
memory-based eVMCS ABI). The other more drastic alternative would be
to eschew VMCS.HOST_RSP and manually save/load RSP using a per-cpu
variable (which can be encoded as e.g. gs:[imm]). But because a valid
stack is needed at the time of VM-Exit (NMIs aren't blocked and a user
could theoretically insert INT3/INT1ICEBRK at the VM-Exit handler), a
dedicated per-cpu VM-Exit stack would be required. A dedicated stack
isn't difficult to implement, but it would require at least one page
per CPU and knowledge of the stack in the dumpstack routines. And in
most cases there is essentially zero overhead in dynamically updating
VMCS.HOST_RSP, e.g. the VMWRITE can be avoided for all but the first
VMLAUNCH unless nested_early_check=1, which is not a fast path. In
other words, avoiding the VMCS.HOST_RSP by using a dedicated stack
would only make the code marginally less ugly while requiring at least
one page per CPU and forcing the kernel to be aware (and approve) of
the VM-Exit stack shenanigans.

[1] cea15c24ca39 ("KVM: Move KVM context switch into own function")
[2] a3b5ba49a8c5 ("KVM: VMX: add the __noclone attribute to vmx_vcpu_run")
[3] 104f226bfd0a ("KVM: VMX: Fold __vmx_vcpu_run() into vmx_vcpu_run()")
[4] 95272c29378e ("compiler-gcc: disable -ftracer for __noclone functions")
[5] https://lkml.kernel.org/r/20181218140105.ajuiglkpvstt3qxs@treble
[6] https://patchwork.kernel.org/patch/8707981/#21817015
[7] https://lkml.kernel.org/r/ri6y38lo23g.fsf@suse.cz
[8] https://lkml.kernel.org/r/20181218212042.GE25620@tassilo.jf.intel.com

Suggested-by: Andi Kleen <ak@linux.intel.com>
Suggested-by: Martin Jambor <mjambor@suse.cz>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Martin Jambor <mjambor@suse.cz>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Authored by Sean Christopherson and committed by Paolo Bonzini
453eafbe 051a2d3e
5 files changed, 78 insertions(+), 34 deletions(-)

arch/x86/kvm/Makefile (+1 -1)
@@ -16,7 +16,7 @@
 			   i8254.o ioapic.o irq_comm.o cpuid.o pmu.o mtrr.o \
 			   hyperv.o page_track.o debugfs.o
 
-kvm-intel-y		+= vmx/vmx.o vmx/pmu_intel.o vmx/vmcs12.o vmx/evmcs.o vmx/nested.o
+kvm-intel-y		+= vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o vmx/evmcs.o vmx/nested.o
 kvm-amd-y		+= svm.o pmu_amd.o
 
 obj-$(CONFIG_KVM)	+= kvm.o
arch/x86/kvm/vmx/nested.c (+10 -20)
@@ -19,8 +19,6 @@
 static bool __read_mostly nested_early_check = 0;
 module_param(nested_early_check, bool, S_IRUGO);
 
-extern const ulong vmx_early_consistency_check_return;
-
 /*
  * Hyper-V requires all of these, so mark them as supported even though
  * they are just treated the same as all-context.
@@ -2713,7 +2715,7 @@
 	return 0;
 }
 
-static int __noclone nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
+static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long cr3, cr4;
@@ -2738,8 +2740,6 @@
 	 */
 	vmcs_writel(GUEST_RFLAGS, 0);
 
-	vmcs_writel(HOST_RIP, vmx_early_consistency_check_return);
-
 	cr3 = __get_current_cr3_fast();
 	if (unlikely(cr3 != vmx->loaded_vmcs->host_state.cr3)) {
 		vmcs_writel(HOST_CR3, cr3);
@@ -2754,31 +2758,25 @@
 
 	asm(
 		/* Set HOST_RSP */
+		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
 		__ex("vmwrite %%" _ASM_SP ", %%" _ASM_DX) "\n\t"
-		"mov %%" _ASM_SP ", %c[host_rsp](%% " _ASM_CX")\n\t"
+		"mov %%" _ASM_SP ", %c[host_rsp](%1)\n\t"
+		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */
 
 		/* Check if vmlaunch or vmresume is needed */
 		"cmpl $0, %c[launched](%% " _ASM_CX")\n\t"
-		"jne 1f\n\t"
-		__ex("vmlaunch") "\n\t"
-		"jmp 2f\n\t"
-		"1: " __ex("vmresume") "\n\t"
-		"2: "
+
+		"call vmx_vmenter\n\t"
+
 		/* Set vmx->fail accordingly */
 		"setbe %c[fail](%% " _ASM_CX")\n\t"
-
-		".pushsection .rodata\n\t"
-		".global vmx_early_consistency_check_return\n\t"
-		"vmx_early_consistency_check_return: " _ASM_PTR " 2b\n\t"
-		".popsection"
-	      :
+	      : ASM_CALL_CONSTRAINT
 	      : "c"(vmx), "d"((unsigned long)HOST_RSP),
 		[launched]"i"(offsetof(struct vcpu_vmx, __launched)),
 		[fail]"i"(offsetof(struct vcpu_vmx, fail)),
-		[host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp))
+		[host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)),
+		[wordsize]"i"(sizeof(ulong))
 	      : "rax", "cc", "memory"
 	);
-
-	vmcs_writel(HOST_RIP, vmx_return);
 
 	preempt_enable();
arch/x86/kvm/vmx/vmenter.S (new file, +57)
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <linux/linkage.h>
+#include <asm/asm.h>
+
+	.text
+
+/**
+ * vmx_vmenter - VM-Enter the current loaded VMCS
+ *
+ * %RFLAGS.ZF:	!VMCS.LAUNCHED, i.e. controls VMLAUNCH vs. VMRESUME
+ *
+ * Returns:
+ *	%RFLAGS.CF is set on VM-Fail Invalid
+ *	%RFLAGS.ZF is set on VM-Fail Valid
+ *	%RFLAGS.{CF,ZF} are cleared on VM-Success, i.e. VM-Exit
+ *
+ * Note that VMRESUME/VMLAUNCH fall-through and return directly if
+ * they VM-Fail, whereas a successful VM-Enter + VM-Exit will jump
+ * to vmx_vmexit.
+ */
+ENTRY(vmx_vmenter)
+	/* EFLAGS.ZF is set if VMCS.LAUNCHED == 0 */
+	je 2f
+
+1:	vmresume
+	ret
+
+2:	vmlaunch
+	ret
+
+3:	cmpb $0, kvm_rebooting
+	jne 4f
+	call kvm_spurious_fault
+4:	ret
+
+	.pushsection .fixup, "ax"
+5:	jmp 3b
+	.popsection
+
+	_ASM_EXTABLE(1b, 5b)
+	_ASM_EXTABLE(2b, 5b)
+
+ENDPROC(vmx_vmenter)
+
+/**
+ * vmx_vmexit - Handle a VMX VM-Exit
+ *
+ * Returns:
+ *	%RFLAGS.{CF,ZF} are cleared on VM-Success, i.e. VM-Exit
+ *
+ * This is vmx_vmenter's partner in crime.  On a VM-Exit, control will jump
+ * here after hardware loads the host's state, i.e. this is the destination
+ * referred to by VMCS.HOST_RIP.
+ */
+ENTRY(vmx_vmexit)
+	ret
+ENDPROC(vmx_vmexit)
arch/x86/kvm/vmx/vmx.c (+10 -12)
@@ -336,6 +336,8 @@
 static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
 							  u32 msr, int type);
 
+void vmx_vmexit(void);
+
 static DEFINE_PER_CPU(struct vmcs *, vmxarea);
 DEFINE_PER_CPU(struct vmcs *, current_vmcs);
 /*
@@ -3775,7 +3773,7 @@
 	vmcs_writel(HOST_IDTR_BASE, dt.address);   /* 22.2.4 */
 	vmx->host_idt_base = dt.address;
 
-	vmcs_writel(HOST_RIP, vmx_return); /* 22.2.5 */
+	vmcs_writel(HOST_RIP, (unsigned long)vmx_vmexit); /* 22.2.5 */
 
 	rdmsr(MSR_IA32_SYSENTER_CS, low32, high32);
 	vmcs_write32(HOST_IA32_SYSENTER_CS, low32);
@@ -6362,7 +6360,7 @@
 	vmx->loaded_vmcs->hv_timer_armed = false;
 }
 
-static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
+static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long cr3, cr4, evmcs_rsp;
@@ -6442,6 +6440,7 @@
 		"push %%" _ASM_DX "; push %%" _ASM_BP ";"
 		"push %%" _ASM_CX " \n\t" /* placeholder for guest rcx */
 		"push %%" _ASM_CX " \n\t"
+		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
 		"cmp %%" _ASM_SP ", %c[host_rsp](%%" _ASM_CX ") \n\t"
 		"je 1f \n\t"
 		"mov %%" _ASM_SP ", %c[host_rsp](%%" _ASM_CX ") \n\t"
@@ -6454,6 +6451,8 @@
 		"2: \n\t"
 		__ex("vmwrite %%" _ASM_SP ", %%" _ASM_DX) "\n\t"
 		"1: \n\t"
+		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */
+
 		/* Reload cr2 if changed */
 		"mov %c[cr2](%%" _ASM_CX "), %%" _ASM_AX " \n\t"
 		"mov %%cr2, %%" _ASM_DX " \n\t"
@@ -6486,11 +6481,7 @@
 		"mov %c[rcx](%%" _ASM_CX "), %%" _ASM_CX " \n\t"
 
 		/* Enter guest mode */
-		"jne 1f \n\t"
-		__ex("vmlaunch") "\n\t"
-		"jmp 2f \n\t"
-		"1: " __ex("vmresume") "\n\t"
-		"2: "
+		"call vmx_vmenter\n\t"
 
 		/* Save guest's RCX to the stack placeholder (see above) */
 		"mov %%" _ASM_CX ", %c[wordsize](%%" _ASM_SP ") \n\t"
@@ -6535,11 +6534,8 @@
 		"xor %%esi, %%esi \n\t"
 		"xor %%edi, %%edi \n\t"
 		"pop  %%" _ASM_BP "; pop  %%" _ASM_DX " \n\t"
-		".pushsection .rodata \n\t"
-		".global vmx_return \n\t"
-		"vmx_return: " _ASM_PTR " 2b \n\t"
-		".popsection"
-	      : : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp),
+	      : ASM_CALL_CONSTRAINT
+	      : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp),
 		[launched]"i"(offsetof(struct vcpu_vmx, __launched)),
 		[fail]"i"(offsetof(struct vcpu_vmx, fail)),
 		[host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)),
arch/x86/kvm/vmx/vmx.h (-1)
@@ -12,7 +12,6 @@
 #include "vmcs.h"
 
 extern const u32 vmx_msr_index[];
-extern const ulong vmx_return;
 extern u64 host_efer;
 
 #define MSR_TYPE_R	1