Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

KVM: x86: remove check on nr_mmu_pages in kvm_arch_commit_memory_region()

* nr_mmu_pages is assigned a non-zero value only if
kvm->arch.n_requested_mmu_pages is zero.

* Whenever it is assigned, nr_mmu_pages is non-zero, since
kvm_mmu_calculate_mmu_pages() never returns zero.

For these two reasons, we can merge the two *if* clauses and use the
return value of kvm_mmu_calculate_mmu_pages() directly. This simplifies
the code and also removes any impression that nr_mmu_pages could be zero
when it is passed to kvm_mmu_change_mmu_pages().

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Authored by Wei Yang, committed by Paolo Bonzini
4d66623c 711eff3a
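
Before the diff itself, the reasoning above can be checked in isolation. The
following is a minimal user-space model of the two shapes of
kvm_arch_commit_memory_region(), not the kernel code itself: calculate_pages()
and change_pages_*() are hypothetical stand-ins for the KVM helpers, and the
only property carried over from the kernel is that the calculation helper
never returns zero.

/*
 * Minimal model of the pre- and post-patch control flow; all names
 * here are stand-ins, not the actual KVM symbols.
 */
#include <assert.h>
#include <stdio.h>

static unsigned int calls_old, calls_new;

/* Stand-in for kvm_mmu_calculate_default_mmu_pages(): never returns 0. */
static unsigned int calculate_pages(void)
{
	return 64;
}

static void change_pages_old(unsigned int n)
{
	(void)n;
	calls_old++;
}

static void change_pages_new(unsigned int n)
{
	(void)n;
	calls_new++;
}

/* Pre-patch shape: a temporary and two if clauses. */
static void commit_old(unsigned int n_requested_mmu_pages)
{
	int nr_mmu_pages = 0;

	if (!n_requested_mmu_pages)
		nr_mmu_pages = calculate_pages();

	if (nr_mmu_pages)
		change_pages_old(nr_mmu_pages);
}

/* Post-patch shape: one if clause, no temporary. */
static void commit_new(unsigned int n_requested_mmu_pages)
{
	if (!n_requested_mmu_pages)
		change_pages_new(calculate_pages());
}

int main(void)
{
	unsigned int inputs[] = { 0, 1, 512 };
	int i;

	for (i = 0; i < 3; i++) {
		calls_old = calls_new = 0;
		commit_old(inputs[i]);
		commit_new(inputs[i]);
		/* Both shapes invoke change_pages the same number of times. */
		assert(calls_old == calls_new);
	}
	puts("old and new shapes behave identically");
	return 0;
}

The equivalence hinges on the non-zero guarantee: if calculate_pages() could
return 0, the old shape would skip the call while the new shape would make it
with an argument of 0.
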

3 files changed: +4 -8

arch/x86/include/asm/kvm_host.h (+1 -1)
@@ -1254,7 +1254,7 @@
 				gfn_t gfn_offset, unsigned long mask);
 void kvm_mmu_zap_all(struct kvm *kvm);
 void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
-unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
+unsigned int kvm_mmu_calculate_default_mmu_pages(struct kvm *kvm);
 void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages);
 
 int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3);
arch/x86/kvm/mmu.c (+1 -1)
@@ -6028,7 +6028,7 @@
 /*
  * Calculate mmu pages needed for kvm.
  */
-unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm)
+unsigned int kvm_mmu_calculate_default_mmu_pages(struct kvm *kvm)
 {
 	unsigned int nr_mmu_pages;
 	unsigned int nr_pages = 0;
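
The commit message's claim that this helper never returns zero rests on the
tail of the function, which lies outside the hunk above. In the kernel around
this release it ends with a lower clamp, roughly as follows (paraphrased from
memory of that version, not taken from the hunk):

	nr_mmu_pages = nr_pages * KVM_PERMILLE_MMU_PAGES / 1000;
	nr_mmu_pages = max_t(unsigned int, nr_mmu_pages,
			     KVM_MIN_ALLOC_MMU_PAGES);

	return nr_mmu_pages;

So even a VM whose memslots contain no pages is sized to at least
KVM_MIN_ALLOC_MMU_PAGES shadow pages, which is what makes the removed
if (nr_mmu_pages) check redundant.
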
arch/x86/kvm/x86.c (+2 -6)
@@ -9429,13 +9429,9 @@
 				const struct kvm_memory_slot *new,
 				enum kvm_mr_change change)
 {
-	int nr_mmu_pages = 0;
-
 	if (!kvm->arch.n_requested_mmu_pages)
-		nr_mmu_pages = kvm_mmu_calculate_mmu_pages(kvm);
-
-	if (nr_mmu_pages)
-		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
+		kvm_mmu_change_mmu_pages(kvm,
+				kvm_mmu_calculate_default_mmu_pages(kvm));
 
 	/*
 	 * Dirty logging tracks sptes in 4k granularity, meaning that large