Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git


KVM: arm64: Constrain the host to the maximum shared SVE VL with pKVM

When pKVM saves and restores the host floating point state on an SVE system,
it programs the vector length in ZCR_EL2.LEN to be whatever the maximum VL
for the PE is. But it uses a buffer allocated with kvm_host_sve_max_vl, the
maximum VL shared by all PEs in the system. This means that if we run on a
system where the maximum VLs are not consistent, we will overflow the buffer
on PEs which support larger VLs.

Since the host will not currently attempt to make use of non-shared VLs, fix
this by explicitly setting the EL2 VL to be the maximum shared VL when we
save and restore. This will enforce the limit on host VL usage. Should we
wish to support asymmetric VLs, this code will need to be updated along with
the required changes for the host:

https://lore.kernel.org/r/20240730-kvm-arm64-fix-pkvm-sve-vl-v6-0-cae8a2e0bd66@kernel.org

Fixes: b5b9955617bc ("KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM")
Signed-off-by: Mark Brown <broonie@kernel.org>
Tested-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Link: https://lore.kernel.org/r/20240912-kvm-arm64-limit-guest-vl-v2-1-dd2c29cb2ac9@kernel.org
[maz: added punctuation to the commit message]
Signed-off-by: Marc Zyngier <maz@kernel.org>

Authored by Mark Brown, committed by Marc Zyngier
a9f41588 78fee419

+8 -6

arch/arm64/kvm/hyp/include/hyp/switch.h (+1 -1)
···
 	struct cpu_sve_state *sve_state = *host_data_ptr(sve_state);
 
 	sve_state->zcr_el1 = read_sysreg_el1(SYS_ZCR);
-	write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+	write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2);
 	__sve_save_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl),
 			 &sve_state->fpsr,
 			 true);
arch/arm64/kvm/hyp/nvhe/hyp-main.c (+7 -5)
···
 	 */
 	sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
 	__sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr, true);
-	write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+	write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2);
 }
 
 static void __hyp_sve_restore_host(void)
···
 	 * the host. The layout of the data when saving the sve state depends
 	 * on the VL, so use a consistent (i.e., the maximum) host VL.
 	 *
-	 * Setting ZCR_EL2 to ZCR_ELx_LEN_MASK sets the effective length
-	 * supported by the system (or limited at EL3).
+	 * Note that this constrains the PE to the maximum shared VL
+	 * that was discovered, if we wish to use larger VLs this will
+	 * need to be revisited.
 	 */
-	write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+	write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2);
 	__sve_restore_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl),
 			    &sve_state->fpsr,
 			    true);
···
 	case ESR_ELx_EC_SVE:
 		cpacr_clear_set(0, CPACR_ELx_ZEN);
 		isb();
-		sve_cond_update_zcr_vq(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+		sve_cond_update_zcr_vq(sve_vq_from_vl(kvm_host_sve_max_vl) - 1,
+				       SYS_ZCR_EL2);
 		break;
 	case ESR_ELx_EC_IABT_LOW:
 	case ESR_ELx_EC_DABT_LOW: