Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

arm64: Fix barriers used for page table modifications

The architecture specification states that both DSB and ISB are required
between page table modifications and subsequent memory accesses using the
corresponding virtual address. When TLB invalidation takes place, the
tlb_flush_* functions already have the necessary barriers. However, there are
other functions like create_mapping() for which this is not the case.

The patch adds the DSB+ISB instructions in the set_pte() function for
valid kernel mappings. The invalid pte case is handled by the
tlb_flush_* functions, and user mappings generally have a corresponding
update_mmu_cache() call containing a DSB. Even when update_mmu_cache()
isn't called, the
kernel can still cope with an unlikely spurious page fault by
re-executing the instruction.

In addition, the set_pmd() and set_pud() functions gain an ISB for
architecture compliance when block mappings are created.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Steve Capper <steve.capper@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>

3 files changed, 17 insertions(+), 12 deletions(-)
arch/arm64/include/asm/cacheflush.h (+1 -10)

--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -138,19 +138,10 @@
 #define flush_icache_page(vma,page)	do { } while (0)
 
 /*
- * flush_cache_vmap() is used when creating mappings (eg, via vmap,
- * vmalloc, ioremap etc) in kernel space for pages.  On non-VIPT
- * caches, since the direct-mappings of these pages may contain cached
- * data, we need to do a full cache flush to ensure that writebacks
- * don't corrupt data placed into these pages via the new mappings.
+ * Not required on AArch64 (PIPT or VIPT non-aliasing D-cache).
  */
 static inline void flush_cache_vmap(unsigned long start, unsigned long end)
 {
-	/*
-	 * set_pte_at() called from vmap_pte_range() does not
-	 * have a DSB after cleaning the cache line.
-	 */
-	dsb(ish);
 }
 
 static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
arch/arm64/include/asm/pgtable.h (+13)

--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -146,6 +146,8 @@
 
 #define pte_valid_user(pte) \
 	((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER))
+#define pte_valid_not_user(pte) \
+	((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
 
 static inline pte_t pte_wrprotect(pte_t pte)
 {
@@ -192,6 +194,15 @@
 static inline void set_pte(pte_t *ptep, pte_t pte)
 {
 	*ptep = pte;
+
+	/*
+	 * Only if the new pte is valid and kernel, otherwise TLB maintenance
+	 * or update_mmu_cache() have the necessary barriers.
+	 */
+	if (pte_valid_not_user(pte)) {
+		dsb(ishst);
+		isb();
+	}
 }
 
 extern void __sync_icache_dcache(pte_t pteval, unsigned long addr);
@@ -311,6 +322,7 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 {
 	*pmdp = pmd;
 	dsb(ishst);
+	isb();
 }
 
 static inline void pmd_clear(pmd_t *pmdp)
@@ -343,6 +355,7 @@ static inline void set_pud(pud_t *pudp, pud_t pud)
 {
 	*pudp = pud;
 	dsb(ishst);
+	isb();
 }
 
 static inline void pud_clear(pud_t *pudp)
arch/arm64/include/asm/tlbflush.h (+3 -2)

--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -122,6 +122,7 @@
 	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
 		asm("tlbi vaae1is, %0" : : "r"(addr));
 	dsb(ish);
+	isb();
 }
 
 /*
@@ -131,8 +132,8 @@
 					     unsigned long addr, pte_t *ptep)
 {
 	/*
-	 * set_pte() does not have a DSB, so make sure that the page table
-	 * write is visible.
+	 * set_pte() does not have a DSB for user mappings, so make sure that
+	 * the page table write is visible.
 	 */
 	dsb(ishst);
 }