
nios2: force update_mmu_cache on spurious tlb-permission-related pagefaults

NIOS2 uses a software-managed TLB for virtual address translation. To
flush a cache line, the original mapping is replaced by one to physical
address 0x0 with no permissions set (rwx all mapped to 0). This can lead
to TLB permission-related traps when such a nominally flushed entry is
encountered as a mapping for an otherwise valid virtual address within a
process (e.g. due to an MMU-PID-namespace rollover that previously
flushed the complete TLB including entries of existing, running
processes).

The default ptep_set_access_flags implementation from mm/pgtable-generic.c
only forces a TLB update when the page-table entry has actually changed:

/*
 * [...] We return whether the PTE actually changed, which in turn
 * instructs the caller to do things like update_mmu_cache. [...]
 */
int ptep_set_access_flags(struct vm_area_struct *vma,
			  unsigned long address, pte_t *ptep,
			  pte_t entry, int dirty)
{
	int changed = !pte_same(*ptep, entry);
	if (changed) {
		set_pte_at(vma->vm_mm, address, ptep, entry);
		flush_tlb_fix_spurious_fault(vma, address);
	}
	return changed;
}

However, no cross-referencing with the TLB state occurs, so the
flushing-induced pseudo entries that are responsible for the pagefault
in the first place are never evicted from the TLB on this code path.

This commit fixes this behaviour by always requesting a TLB update in
this part of the pagefault handling, fixing the spurious pagefaults on
the way. The fix is a straightforward port of the logic from the MIPS
architecture: an arch-specific ptep_set_access_flags function based on
arch/mips/include/asm/pgtable.h.

Signed-off-by: Simon Schuster <schuster.simon@siemens-energy.com>
Signed-off-by: Andreas Oetken <andreas.oetken@siemens-energy.com>
Signed-off-by: Dinh Nguyen <dinguyen@kernel.org>

arch/nios2/include/asm/pgtable.h
@@ -291,4 +291,20 @@
 #define update_mmu_cache(vma, addr, ptep) \
 	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
 
+static inline int pte_same(pte_t pte_a, pte_t pte_b);
+
+#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
+static inline int ptep_set_access_flags(struct vm_area_struct *vma,
+					unsigned long address, pte_t *ptep,
+					pte_t entry, int dirty)
+{
+	if (!pte_same(*ptep, entry))
+		set_ptes(vma->vm_mm, address, ptep, entry, 1);
+	/*
+	 * update_mmu_cache will unconditionally execute, handling both
+	 * the case that the PTE changed and the spurious fault case.
+	 */
+	return true;
+}
+
 #endif /* _ASM_NIOS2_PGTABLE_H */