Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

KVM: x86: Check for nested events if there is an injectable interrupt

With commit b6b8a1451fc40412c57d1, which introduced
vmx_check_nested_events, checks for injectable interrupts happen
at different points in time for L1 and L2, which could potentially
cause a race. The regression occurs because KVM_REQ_EVENT is always
set when nested_run_pending is set, even if there's no pending interrupt.
Consequently, there is a small window in which check_nested_events
returns without exiting to L1, but an interrupt arrives soon
afterwards and is incorrectly injected into L2 by inject_pending_event.
Fix this by also checking for nested events when the check
for an injectable interrupt returns true.

Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Authored by Bandan Das, committed by Paolo Bonzini (9242b5b6 cd3de83f)

arch/x86/kvm/x86.c (+12 lines)
 			kvm_x86_ops->set_nmi(vcpu);
 		}
 	} else if (kvm_cpu_has_injectable_intr(vcpu)) {
+		/*
+		 * Because interrupts can be injected asynchronously, we are
+		 * calling check_nested_events again here to avoid a race condition.
+		 * See https://lkml.org/lkml/2014/7/2/60 for discussion about this
+		 * proposal and current concerns. Perhaps we should be setting
+		 * KVM_REQ_EVENT only on certain events and not unconditionally?
+		 */
+		if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) {
+			r = kvm_x86_ops->check_nested_events(vcpu, req_int_win);
+			if (r != 0)
+				return r;
+		}
 		if (kvm_x86_ops->interrupt_allowed(vcpu)) {
 			kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu),
 					    false);