Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

arm/arm64: KVM: Handle out-of-RAM cache maintenance as a NOP

So far, our handling of cache maintenance by VA has been pretty
simple: either the access is in guest RAM and generates a S2
fault, which results in the page being mapped RW, or we go down
the io_mem_abort() path and nuke the guest.

The first case is fine, but the second is extremely weird.
Treating the CM operation as an I/O access is wrong, and nothing
in the ARM ARM indicates that we should generate a fault for
something that cannot end up in the cache anyway (even if the
guest maps it, it will keep faulting at stage-2 for emulation).

So let's just skip this instruction, and let the guest get away
with it.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
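
In effect, the patch changes the abort routing so that a cache maintenance operation whose target falls outside every memslot is skipped rather than handed to io_mem_abort(). A minimal sketch of the resulting decision, with illustrative names (route_dabt, abort_action are not the kernel's identifiers):

```c
#include <stdbool.h>

/* Simplified model of the data-abort routing after this patch. */
enum abort_action { MAP_PAGE, SKIP_INSN, EMULATE_MMIO };

static enum abort_action route_dabt(bool in_memslot, bool is_cm)
{
	if (in_memslot)
		return MAP_PAGE;	/* stage-2 fault: map the page RW */
	if (is_cm)
		return SKIP_INSN;	/* out-of-RAM CM op: treat as a NOP */
	return EMULATE_MMIO;		/* genuine device access: io_mem_abort() */
}
```

Before the patch, the out-of-memslot CM case would fall through to the EMULATE_MMIO branch and kill the guest.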

26 insertions(+), 3 files changed
arch/arm/include/asm/kvm_emulate.h (+5)
@@ -138,6 +138,11 @@
 	return kvm_vcpu_get_hsr(vcpu) & HSR_DABT_S1PTW;
 }
 
+static inline bool kvm_vcpu_dabt_is_cm(struct kvm_vcpu *vcpu)
+{
+	return !!(kvm_vcpu_get_hsr(vcpu) & HSR_DABT_CM);
+}
+
 /* Get Access Size from a data abort */
 static inline int kvm_vcpu_dabt_get_as(struct kvm_vcpu *vcpu)
 {
arch/arm/kvm/mmu.c (+16)
@@ -1431,6 +1431,22 @@
 	}
 
 	/*
+	 * Check for a cache maintenance operation. Since we
+	 * ended-up here, we know it is outside of any memory
+	 * slot. But we can't find out if that is for a device,
+	 * or if the guest is just being stupid. The only thing
+	 * we know for sure is that this range cannot be cached.
+	 *
+	 * So let's assume that the guest is just being
+	 * cautious, and skip the instruction.
+	 */
+	if (kvm_vcpu_dabt_is_cm(vcpu)) {
+		kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+		ret = 1;
+		goto out_unlock;
+	}
+
+	/*
 	 * The IPA is reported as [MAX:12], so we need to
 	 * complement it with the bottom 12 bits from the
 	 * faulting VA. This is always 12 bits, irrespective
arch/arm64/include/asm/kvm_emulate.h (+5)
@@ -189,6 +189,11 @@
 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_S1PTW);
 }
 
+static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
+{
+	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_CM);
+}
+
 static inline int kvm_vcpu_dabt_get_as(const struct kvm_vcpu *vcpu)
 {
 	return 1 << ((kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SAS) >> ESR_ELx_SAS_SHIFT);
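
Both new helpers test the same syndrome bit: ESR_ELx.CM / HSR.CM (bit 8) is set when a data abort was caused by a cache maintenance instruction operating by VA. A stand-alone sketch of that decode, assuming the architectural bit position (dabt_is_cm is an illustrative stand-in for the kvm_vcpu_dabt_is_cm() helpers above):

```c
#include <stdbool.h>
#include <stdint.h>

/* ESR_ELx.CM / HSR.CM: bit 8 of the data-abort syndrome, set for
 * faults caused by cache maintenance instructions by VA. */
#define SYNDROME_CM	(UINT32_C(1) << 8)

static bool dabt_is_cm(uint32_t syndrome)
{
	return !!(syndrome & SYNDROME_CM);
}
```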