Linux kernel mirror: git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: hugetlb: add arch hook for clearing page flags before entering pool

The core page allocator ensures that page flags are zeroed when freeing
pages via free_pages_check. A number of architectures (ARM, PPC, MIPS)
rely on this property to treat new pages as dirty with respect to the data
cache and perform the appropriate flushing before mapping the pages into
userspace.

This can lead to cache synchronisation problems when using hugepages,
since the allocator keeps its own pool of pages above the usual page
allocator and does not reset the page flags when freeing a page into the
pool.

This patch adds a new architecture hook, arch_clear_hugepage_flags, so
that architectures which rely on the page flags being in a particular
state for fresh allocations can adjust the flags accordingly when a page
is freed into the pool.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: Michal Hocko <mhocko@suse.cz>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Will Deacon, committed by Linus Torvalds
5d3a551c 01dc52eb

32 insertions(+)
arch/ia64/include/asm/hugetlb.h (+4)

 {
 }
 
+static inline void arch_clear_hugepage_flags(struct page *page)
+{
+}
+
 #endif /* _ASM_IA64_HUGETLB_H */
arch/mips/include/asm/hugetlb.h (+4)

 {
 }
 
+static inline void arch_clear_hugepage_flags(struct page *page)
+{
+}
+
 #endif /* __ASM_HUGETLB_H */
arch/powerpc/include/asm/hugetlb.h (+4)

 {
 }
 
+static inline void arch_clear_hugepage_flags(struct page *page)
+{
+}
+
 #else /* ! CONFIG_HUGETLB_PAGE */
 static inline void flush_hugetlb_page(struct vm_area_struct *vma,
 				      unsigned long vmaddr)
arch/s390/include/asm/hugetlb.h (+1)

 }
 
 #define hugetlb_prefault_arch_hook(mm) do { } while (0)
+#define arch_clear_hugepage_flags(page) do { } while (0)
 
 int arch_prepare_hugepage(struct page *page);
 void arch_release_hugepage(struct page *page);
arch/sh/include/asm/hugetlb.h (+6)

 #ifndef _ASM_SH_HUGETLB_H
 #define _ASM_SH_HUGETLB_H
 
+#include <asm/cacheflush.h>
 #include <asm/page.h>
 
···
 
 static inline void arch_release_hugepage(struct page *page)
 {
+}
+
+static inline void arch_clear_hugepage_flags(struct page *page)
+{
+	clear_bit(PG_dcache_clean, &page->flags);
 }
 
 #endif /* _ASM_SH_HUGETLB_H */
arch/sparc/include/asm/hugetlb.h (+4)

 {
 }
 
+static inline void arch_clear_hugepage_flags(struct page *page)
+{
+}
+
 #endif /* _ASM_SPARC64_HUGETLB_H */
arch/tile/include/asm/hugetlb.h (+4)

 {
 }
 
+static inline void arch_clear_hugepage_flags(struct page *page)
+{
+}
+
 #ifdef CONFIG_HUGETLB_SUPER_PAGES
 static inline pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
 				       struct page *page, int writable)
arch/x86/include/asm/hugetlb.h (+4)

 {
 }
 
+static inline void arch_clear_hugepage_flags(struct page *page)
+{
+}
+
 #endif /* _ASM_X86_HUGETLB_H */
mm/hugetlb.c (+1)

 		h->surplus_huge_pages--;
 		h->surplus_huge_pages_node[nid]--;
 	} else {
+		arch_clear_hugepage_flags(page);
 		enqueue_huge_page(h, page);
 	}
 	spin_unlock(&hugetlb_lock);