Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

thp: vma_adjust_trans_huge(): adjust file-backed VMA too

This series of patches adds support for using PMD page table entries to
map DAX files. We expect NV-DIMMs to start showing up that are many
gigabytes in size and the memory consumption of 4kB PTEs will be
astronomical.

The patch series leverages much of the Transparent Huge Pages
infrastructure, going so far as to borrow one of Kirill's patches from
his THP page cache series.

This patch (of 10):

Since we're going to have huge pages in the page cache, vma_adjust_trans_huge()
needs to handle file-backed VMAs as well, since they can potentially contain
huge pages.

For now we call it for all VMAs.

We will probably need to introduce a flag later to indicate that the VMA
contains huge pages.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Acked-by: Hillf Danton <dhillf@gmail.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Kirill A. Shutemov, committed by Linus Torvalds
e1b9996b ce75799b

+2 -11
+1 -10
include/linux/huge_mm.h
···
122  122   #endif
123  123   extern int hugepage_madvise(struct vm_area_struct *vma,
124  124   			    unsigned long *vm_flags, int advice);
125        - extern void __vma_adjust_trans_huge(struct vm_area_struct *vma,
     125   + extern void vma_adjust_trans_huge(struct vm_area_struct *vma,
126  126   			    unsigned long start,
127  127   			    unsigned long end,
128  128   			    long adjust_next);
···
137  137   		return __pmd_trans_huge_lock(pmd, vma, ptl);
138  138   	else
139  139   		return 0;
140        - }
141        - static inline void vma_adjust_trans_huge(struct vm_area_struct *vma,
142        - 			    unsigned long start,
143        - 			    unsigned long end,
144        - 			    long adjust_next)
145        - {
146        - 	if (!vma->anon_vma || vma->vm_ops)
147        - 		return;
148        - 	__vma_adjust_trans_huge(vma, start, end, adjust_next);
149  140   }
150  141   static inline int hpage_nr_pages(struct page *page)
151  142   {
+1 -1
mm/huge_memory.c
···
2991  2991   	split_huge_page_pmd_mm(mm, address, pmd);
2992  2992   }
2993  2993   
2994         - void __vma_adjust_trans_huge(struct vm_area_struct *vma,
      2994   + void vma_adjust_trans_huge(struct vm_area_struct *vma,
2995  2995   			    unsigned long start,
2996  2996   			    unsigned long end,
2997  2997   			    long adjust_next)