Linux kernel mirror (for testing) — git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

KVM: arm64: Check for kvm_vma_mte_allowed in the critical section

On page fault, we find out about the VMA that backs the page fault
early on, and quickly release the mmap_read_lock. However, using
the VMA pointer after the critical section is pretty dangerous,
as a teardown may happen in the meantime and the VMA be long gone.

Move the sampling of the MTE permission early, and NULL-ify the
VMA pointer after that, just to be on the safe side.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230316174546.3777507-3-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

authored by Marc Zyngier and committed by Oliver Upton
8c2e8ac8 e86fc1a3

+6 -2
arch/arm64/kvm/mmu.c
@@ -1218,7 +1218,7 @@
 {
 	int ret = 0;
 	bool write_fault, writable, force_pte = false;
-	bool exec_fault;
+	bool exec_fault, mte_allowed;
 	bool device = false;
 	unsigned long mmu_seq;
 	struct kvm *kvm = vcpu->kvm;
@@ -1309,6 +1309,10 @@
 	fault_ipa &= ~(vma_pagesize - 1);

 	gfn = fault_ipa >> PAGE_SHIFT;
+	mte_allowed = kvm_vma_mte_allowed(vma);
+
+	/* Don't use the VMA after the unlock -- it may have vanished */
+	vma = NULL;

 	/*
 	 * Read mmu_invalidate_seq so that KVM can detect if the results of
@@ -1383,7 +1379,7 @@

 	if (fault_status != ESR_ELx_FSC_PERM && !device && kvm_has_mte(kvm)) {
 		/* Check the VMM hasn't introduced a new disallowed VMA */
-		if (kvm_vma_mte_allowed(vma)) {
+		if (mte_allowed) {
 			sanitise_mte_tags(kvm, pfn, vma_pagesize);
 		} else {
 			ret = -EFAULT;