kvm/x86: Avoid clearing the C-bit in rsvd_bits()

The following commit:

d0ec49d4de90 ("kvm/x86/svm: Support Secure Memory Encryption within KVM")

uses __sme_clr() to remove the C-bit in rsvd_bits(). rsvd_bits() is
just a simple helper that returns a run of 1 bits; applying a mask
based on properties of the host MMU there is incorrect. Additionally,
the masks computed by __reset_rsvds_bits_mask() also apply to guest
page tables, where the C-bit *is* reserved, since we don't emulate SME.
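
For illustration only, a minimal user-space sketch of the problem. The
C-bit position (bit 47 here) is an assumption for the example; on real
hardware it is reported by CPUID Fn8000_001F. It shows how an
__sme_clr()-style AND silently punches a hole in a reserved-bits mask
whose range covers the C-bit:

	#include <stdint.h>
	#include <stdio.h>

	/* rsvd_bits() is just a run of 1 bits from bit s through bit e. */
	static inline uint64_t rsvd_bits(int s, int e)
	{
		return ((1ULL << (e - s + 1)) - 1) << s;
	}

	int main(void)
	{
		/* Hypothetical C-bit position for the example. */
		uint64_t sme_me_mask = 1ULL << 47;

		printf("rsvd_bits(47, 51)       = %#llx\n",
		       (unsigned long long)rsvd_bits(47, 51));
		/* Clearing the C-bit drops bit 47 from the reserved range: */
		printf("with the C-bit cleared  = %#llx\n",
		       (unsigned long long)(rsvd_bits(47, 51) & ~sme_me_mask));
		return 0;
	}

For a guest page-table walk, that cleared bit means a genuinely
reserved bit is no longer flagged as such.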

The fix is to clear the C-bit from the rsvd_bits_mask array after it
has been populated by __reset_rsvds_bits_mask().
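
As a standalone sketch of the fix's effect (shadow_me_mask = 1ULL << 47
and 4-level paging are assumptions for the example; in KVM the mask is
derived from sme_me_mask and the level from context->shadow_root_level):

	#include <stdint.h>
	#include <stdio.h>

	#define ROOT_LEVEL 4	/* assumed 4-level paging for the example */

	int main(void)
	{
		/* Hypothetical C-bit; KVM derives this from sme_me_mask. */
		uint64_t shadow_me_mask = 1ULL << 47;
		/* Toy masks as __reset_rsvds_bits_mask() might leave them. */
		uint64_t rsvd_bits_mask[2][ROOT_LEVEL];
		int i;

		for (i = 0; i < ROOT_LEVEL; i++)
			rsvd_bits_mask[0][i] = rsvd_bits_mask[1][i] =
				0x000f800000000000ULL;

		/* The fix: strip the C-bit after the masks are populated. */
		for (i = ROOT_LEVEL; --i >= 0;) {
			rsvd_bits_mask[0][i] &= ~shadow_me_mask;
			rsvd_bits_mask[1][i] &= ~shadow_me_mask;
		}

		printf("level 0 mask after fix: %#llx\n",
		       (unsigned long long)rsvd_bits_mask[0][0]);
		return 0;
	}

The shadow masks must not flag the C-bit as reserved, because KVM sets
it in shadow PTEs when SME is active.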

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: kvm@vger.kernel.org
Cc: paolo.bonzini@gmail.com
Fixes: d0ec49d4de90 ("kvm/x86/svm: Support Secure Memory Encryption within KVM")
Link: http://lkml.kernel.org/r/20170825205540.123531-1-brijesh.singh@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

arch/x86/kvm/mmu.c
···
 reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
 {
 	bool uses_nx = context->nx || context->base_role.smep_andnot_wp;
+	struct rsvd_bits_validate *shadow_zero_check;
+	int i;
 
 	/*
 	 * Passing "true" to the last argument is okay; it adds a check
 	 * on bit 8 of the SPTEs which KVM doesn't use anyway.
 	 */
-	__reset_rsvds_bits_mask(vcpu, &context->shadow_zero_check,
+	shadow_zero_check = &context->shadow_zero_check;
+	__reset_rsvds_bits_mask(vcpu, shadow_zero_check,
 				boot_cpu_data.x86_phys_bits,
 				context->shadow_root_level, uses_nx,
 				guest_cpuid_has_gbpages(vcpu), is_pse(vcpu),
 				true);
+
+	if (!shadow_me_mask)
+		return;
+
+	for (i = context->shadow_root_level; --i >= 0;) {
+		shadow_zero_check->rsvd_bits_mask[0][i] &= ~shadow_me_mask;
+		shadow_zero_check->rsvd_bits_mask[1][i] &= ~shadow_me_mask;
+	}
+
 }
 EXPORT_SYMBOL_GPL(reset_shadow_zero_bits_mask);
···
 reset_tdp_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
 				struct kvm_mmu *context)
 {
+	struct rsvd_bits_validate *shadow_zero_check;
+	int i;
+
+	shadow_zero_check = &context->shadow_zero_check;
+
 	if (boot_cpu_is_amd())
-		__reset_rsvds_bits_mask(vcpu, &context->shadow_zero_check,
+		__reset_rsvds_bits_mask(vcpu, shadow_zero_check,
 					boot_cpu_data.x86_phys_bits,
 					context->shadow_root_level, false,
 					boot_cpu_has(X86_FEATURE_GBPAGES),
 					true, true);
 	else
-		__reset_rsvds_bits_mask_ept(&context->shadow_zero_check,
+		__reset_rsvds_bits_mask_ept(shadow_zero_check,
 					    boot_cpu_data.x86_phys_bits,
 					    false);
 
+	if (!shadow_me_mask)
+		return;
+
+	for (i = context->shadow_root_level; --i >= 0;) {
+		shadow_zero_check->rsvd_bits_mask[0][i] &= ~shadow_me_mask;
+		shadow_zero_check->rsvd_bits_mask[1][i] &= ~shadow_me_mask;
+	}
 }
···
arch/x86/kvm/mmu.h
···
 static inline u64 rsvd_bits(int s, int e)
 {
-	return __sme_clr(((1ULL << (e - s + 1)) - 1) << s);
+	return ((1ULL << (e - s + 1)) - 1) << s;
 }
 
 void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value);
···