
KVM: PPC: Book3S HV Nested: Reflect guest PMU in-use to L0 when guest SPRs are live

After the L1 saves its PMU SPRs but before loading the L2's PMU SPRs,
switch the pmcregs_in_use field in the L1 lppaca to the value advertised
by the L2 in its VPA. On the way out of the L2, set it back after saving
the L2 PMU registers (if they were in-use).

This transfers the PMU liveness indication between the L1 and L2 at the
points where the registers are not live.

This fixes the nested HV bug for which a workaround was added to the L0
HV by commit 63279eeb7f93a ("KVM: PPC: Book3S HV: Always save guest pmu
for guest capable of nesting"), which explains the problem in detail.
That workaround is no longer required for guests that include this bug
fix.

Fixes: 360cae313702 ("KVM: PPC: Book3S HV: Nested guest entry via hypercall")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Link: https://lore.kernel.org/r/20210811160134.904987-10-npiggin@gmail.com

---
 arch/powerpc/include/asm/pmc.h |  7 +++++++
 arch/powerpc/kvm/book3s_hv.c   | 20 ++++++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/arch/powerpc/include/asm/pmc.h b/arch/powerpc/include/asm/pmc.h
--- a/arch/powerpc/include/asm/pmc.h
+++ b/arch/powerpc/include/asm/pmc.h
@@ -34,6 +34,13 @@
 #endif
 }
 
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+static inline int ppc_get_pmu_inuse(void)
+{
+	return get_paca()->pmcregs_in_use;
+}
+#endif
+
 extern void power4_enable_pmcs(void);
 
 #else /* CONFIG_PPC64 */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -59,6 +59,7 @@
 #include <asm/kvm_book3s.h>
 #include <asm/mmu_context.h>
 #include <asm/lppaca.h>
+#include <asm/pmc.h>
 #include <asm/processor.h>
 #include <asm/cputhreads.h>
 #include <asm/page.h>
@@ -3891,6 +3892,18 @@
 	    cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
 		kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
 
+#ifdef CONFIG_PPC_PSERIES
+	if (kvmhv_on_pseries()) {
+		barrier();
+		if (vcpu->arch.vpa.pinned_addr) {
+			struct lppaca *lp = vcpu->arch.vpa.pinned_addr;
+			get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use;
+		} else {
+			get_lppaca()->pmcregs_in_use = 1;
+		}
+		barrier();
+	}
+#endif
 	kvmhv_load_guest_pmu(vcpu);
 
 	msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);
@@ -4025,6 +4038,13 @@
 	save_pmu |= nesting_enabled(vcpu->kvm);
 
 	kvmhv_save_guest_pmu(vcpu, save_pmu);
+#ifdef CONFIG_PPC_PSERIES
+	if (kvmhv_on_pseries()) {
+		barrier();
+		get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse();
+		barrier();
+	}
+#endif
 
 	vc->entry_exit_map = 0x101;
 	vc->in_guest = 0;