
mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse

This idea was introduced by David Rientjes[1].

Introduce a new madvise mode, MADV_COLLAPSE, that allows users to request
a synchronous collapse of memory at their own expense.

The benefits of this approach are:

* CPU time is charged to the process that wants to spend the cycles for
the THP
* The unpredictable timing of khugepaged collapse is avoided

Semantics

This call is independent of the system-wide THP sysfs settings, but will
fail for memory marked VM_NOHUGEPAGE. If the ranges provided span
multiple VMAs, the semantics of the collapse over each VMA is independent
from the others. This implies a hugepage cannot cross a VMA boundary. If
collapse of a given hugepage-aligned/sized region fails, the operation may
continue to attempt collapsing the remainder of memory specified.

The memory ranges provided must be page-aligned, but are not required to
be hugepage-aligned. If the memory ranges are not hugepage-aligned, the
start/end of the range will be clamped to the first/last hugepage-aligned
address covered by said range. The memory ranges must span at least one
hugepage-sized region.

All non-resident pages covered by the range will first be
swapped/faulted-in, before being internally copied onto a freshly
allocated hugepage. Unmapped pages will have their data directly
initialized to 0 in the new hugepage. However, for every eligible
hugepage-aligned/sized region to-be collapsed, at least one page must
currently be backed by memory (a PMD covering the address range must
already exist).

Allocation for the new hugepage may enter direct reclaim and/or
compaction, regardless of VMA flags. When the system has multiple NUMA
nodes, the hugepage will be allocated from the node providing the most
native pages. This operation acts on the current state of the
specified process and makes no persistent changes or guarantees on how
pages will be mapped, constructed, or faulted in the future.

Return Value

If all hugepage-sized/aligned regions covered by the provided range were
either successfully collapsed, or were already PMD-mapped THPs, this
operation will be deemed successful. On success, process_madvise(2)
returns the number of bytes advised, and madvise(2) returns 0. Else, -1
is returned and errno is set to indicate the error for the most-recently
attempted hugepage collapse. Note that many failures might have occurred,
since the operation may continue to collapse in the event a single
hugepage-sized/aligned region fails.

ENOMEM	Memory allocation failed or VMA not found
EBUSY	Memcg charging failed
EAGAIN	Required resource temporarily unavailable. Trying again
	might succeed.
EINVAL	Other error: No PMD found, subpage doesn't have Present
	bit set, "Special" page not backed by struct page, VMA
	incorrectly sized, address not page-aligned, ...

Most notable here are ENOMEM and EBUSY (new to madvise), which are
intended to provide the caller with actionable feedback so they may
take an appropriate fallback measure.

Use Cases

Immediate users of this new functionality are malloc() implementations
that manage memory in hugepage-sized chunks, but sometimes subrelease
memory back to the system in native-sized chunks via MADV_DONTNEED;
zapping the pmd. Later, when the memory is hot, the implementation could
madvise(MADV_COLLAPSE) to re-back the memory by THPs to regain hugepage
coverage and dTLB performance. TCMalloc is such an implementation that
could benefit from this[2].

Only privately-mapped anon memory is supported for now, but additional
support for file, shmem, and HugeTLB high-granularity mappings[2] is
expected. File and tmpfs/shmem support would permit:

* Backing executable text by THPs. Current support provided by
CONFIG_READ_ONLY_THP_FOR_FS may take a long time on a large system,
which might impair services from serving at their full rated load after
(re)starting. Tricks like mremap(2)'ing text onto anonymous memory to
immediately realize iTLB performance prevent page sharing and demand
paging, both of which increase steady-state memory footprint. With
MADV_COLLAPSE, we get the best of both worlds: Peak upfront performance
and lower RAM footprints.
* Backing guest memory by hugepages after the memory contents have been
migrated in native-page-sized chunks to a new host, in a
userfaultfd-based live-migration stack.

[1] https://lore.kernel.org/linux-mm/d098c392-273a-36a4-1a29-59731cdf5d3d@google.com/
[2] https://github.com/google/tcmalloc/tree/master/tcmalloc

[jrdr.linux@gmail.com: avoid possible memory leak in failure path]
Link: https://lkml.kernel.org/r/20220713024109.62810-1-jrdr.linux@gmail.com
[zokeefe@google.com: add missing kfree() to madvise_collapse()]
Link: https://lore.kernel.org/linux-mm/20220713024109.62810-1-jrdr.linux@gmail.com/
Link: https://lkml.kernel.org/r/20220713161851.1879439-1-zokeefe@google.com
[zokeefe@google.com: delay computation of hpage boundaries until use]
Link: https://lkml.kernel.org/r/20220720140603.1958773-4-zokeefe@google.com
Link: https://lkml.kernel.org/r/20220706235936.2197195-10-zokeefe@google.com
Signed-off-by: Zach O'Keefe <zokeefe@google.com>
Signed-off-by: "Souptick Joarder (HPE)" <jrdr.linux@gmail.com>
Suggested-by: David Rientjes <rientjes@google.com>
Cc: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Chris Kennelly <ckennelly@google.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: Pavel Begunkov <asml.silence@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Rongwei Wang <rongwei.wang@linux.alibaba.com>
Cc: SeongJae Park <sj@kernel.org>
Cc: Song Liu <songliubraving@fb.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Zach O'Keefe, committed by Andrew Morton
7d8faaf1 50722804
147 additions, 3 deletions
arch/alpha/include/uapi/asm/mman.h (+2)

@@ -76,6 +76,8 @@
 
 #define MADV_DONTNEED_LOCKED	24 /* like DONTNEED, but drop locked pages too */
 
+#define MADV_COLLAPSE	25	/* Synchronous hugepage collapse */
+
 /* compatibility flags */
 #define MAP_FILE	0
 
arch/mips/include/uapi/asm/mman.h (+2)

@@ -103,6 +103,8 @@
 
 #define MADV_DONTNEED_LOCKED	24 /* like DONTNEED, but drop locked pages too */
 
+#define MADV_COLLAPSE	25	/* Synchronous hugepage collapse */
+
 /* compatibility flags */
 #define MAP_FILE	0
 
arch/parisc/include/uapi/asm/mman.h (+2)

@@ -70,6 +70,8 @@
 #define MADV_WIPEONFORK 71	/* Zero memory on fork, child only */
 #define MADV_KEEPONFORK 72	/* Undo MADV_WIPEONFORK */
 
+#define MADV_COLLAPSE	73	/* Synchronous hugepage collapse */
+
 #define MADV_HWPOISON	100	/* poison a page for testing */
 #define MADV_SOFT_OFFLINE 101	/* soft offline page for testing */
 
arch/xtensa/include/uapi/asm/mman.h (+2)

@@ -111,6 +111,8 @@
 
 #define MADV_DONTNEED_LOCKED	24 /* like DONTNEED, but drop locked pages too */
 
+#define MADV_COLLAPSE	25	/* Synchronous hugepage collapse */
+
 /* compatibility flags */
 #define MAP_FILE	0
 
include/linux/huge_mm.h (+12 -2)

@@ -218,6 +218,9 @@
 
 int hugepage_madvise(struct vm_area_struct *vma, unsigned long *vm_flags,
		     int advice);
+int madvise_collapse(struct vm_area_struct *vma,
+		     struct vm_area_struct **prev,
+		     unsigned long start, unsigned long end);
 void vma_adjust_trans_huge(struct vm_area_struct *vma, unsigned long start,
			   unsigned long end, long adjust_next);
 spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma);
@@ -364,9 +361,16 @@
 static inline int hugepage_madvise(struct vm_area_struct *vma,
				   unsigned long *vm_flags, int advice)
 {
-	BUG();
-	return 0;
+	return -EINVAL;
 }
+
+static inline int madvise_collapse(struct vm_area_struct *vma,
+				   struct vm_area_struct **prev,
+				   unsigned long start, unsigned long end)
+{
+	return -EINVAL;
+}
+
 static inline void vma_adjust_trans_huge(struct vm_area_struct *vma,
					 unsigned long start,
					 unsigned long end,
include/uapi/asm-generic/mman-common.h (+2)

@@ -77,6 +77,8 @@
 
 #define MADV_DONTNEED_LOCKED	24 /* like DONTNEED, but drop locked pages too */
 
+#define MADV_COLLAPSE	25	/* Synchronous hugepage collapse */
+
 /* compatibility flags */
 #define MAP_FILE	0
 
mm/khugepaged.c (+118 -1)

@@ -982,7 +982,8 @@
			       struct collapse_control *cc)
 {
	/* Only allocate from the target node */
-	gfp_t gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_THISNODE;
+	gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
+		     GFP_TRANSHUGE) | __GFP_THISNODE;
	int node = khugepaged_find_target_node(cc);
 
	if (!khugepaged_alloc_page(hpage, gfp, node))
@@ -2362,4 +2361,120 @@
	if (hugepage_flags_enabled() && khugepaged_thread)
		set_recommended_min_free_kbytes();
	mutex_unlock(&khugepaged_mutex);
+}
+
+static int madvise_collapse_errno(enum scan_result r)
+{
+	/*
+	 * MADV_COLLAPSE breaks from existing madvise(2) conventions to provide
+	 * actionable feedback to caller, so they may take an appropriate
+	 * fallback measure depending on the nature of the failure.
+	 */
+	switch (r) {
+	case SCAN_ALLOC_HUGE_PAGE_FAIL:
+		return -ENOMEM;
+	case SCAN_CGROUP_CHARGE_FAIL:
+		return -EBUSY;
+	/* Resource temporary unavailable - trying again might succeed */
+	case SCAN_PAGE_LOCK:
+	case SCAN_PAGE_LRU:
+		return -EAGAIN;
+	/*
+	 * Other: Trying again likely not to succeed / error intrinsic to
+	 * specified memory range. khugepaged likely won't be able to collapse
+	 * either.
+	 */
+	default:
+		return -EINVAL;
+	}
+}
+
+int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
+		     unsigned long start, unsigned long end)
+{
+	struct collapse_control *cc;
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long hstart, hend, addr;
+	int thps = 0, last_fail = SCAN_FAIL;
+	bool mmap_locked = true;
+
+	BUG_ON(vma->vm_start > start);
+	BUG_ON(vma->vm_end < end);
+
+	*prev = vma;
+
+	/* TODO: Support file/shmem */
+	if (!vma->anon_vma || !vma_is_anonymous(vma))
+		return -EINVAL;
+
+	if (!hugepage_vma_check(vma, vma->vm_flags, false, false, false))
+		return -EINVAL;
+
+	cc = kmalloc(sizeof(*cc), GFP_KERNEL);
+	if (!cc)
+		return -ENOMEM;
+	cc->is_khugepaged = false;
+	cc->last_target_node = NUMA_NO_NODE;
+
+	mmgrab(mm);
+	lru_add_drain_all();
+
+	hstart = (start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
+	hend = end & HPAGE_PMD_MASK;
+
+	for (addr = hstart; addr < hend; addr += HPAGE_PMD_SIZE) {
+		int result = SCAN_FAIL;
+
+		if (!mmap_locked) {
+			cond_resched();
+			mmap_read_lock(mm);
+			mmap_locked = true;
+			result = hugepage_vma_revalidate(mm, addr, &vma, cc);
+			if (result != SCAN_SUCCEED) {
+				last_fail = result;
+				goto out_nolock;
+			}
+		}
+		mmap_assert_locked(mm);
+		memset(cc->node_load, 0, sizeof(cc->node_load));
+		result = khugepaged_scan_pmd(mm, vma, addr, &mmap_locked, cc);
+		if (!mmap_locked)
+			*prev = NULL;  /* Tell caller we dropped mmap_lock */
+
+		switch (result) {
+		case SCAN_SUCCEED:
+		case SCAN_PMD_MAPPED:
+			++thps;
+			break;
+		/* Whitelisted set of results where continuing OK */
+		case SCAN_PMD_NULL:
+		case SCAN_PTE_NON_PRESENT:
+		case SCAN_PTE_UFFD_WP:
+		case SCAN_PAGE_RO:
+		case SCAN_LACK_REFERENCED_PAGE:
+		case SCAN_PAGE_NULL:
+		case SCAN_PAGE_COUNT:
+		case SCAN_PAGE_LOCK:
+		case SCAN_PAGE_COMPOUND:
+		case SCAN_PAGE_LRU:
+			last_fail = result;
+			break;
+		default:
+			last_fail = result;
+			/* Other error, exit */
+			goto out_maybelock;
+		}
+	}
+
+out_maybelock:
+	/* Caller expects us to hold mmap_lock on return */
+	if (!mmap_locked)
+		mmap_read_lock(mm);
+out_nolock:
+	mmap_assert_locked(mm);
+	mmdrop(mm);
+	kfree(cc);
+
+	return thps == ((hend - hstart) >> HPAGE_PMD_SHIFT) ? 0
+			: madvise_collapse_errno(last_fail);
 }
mm/madvise.c (+5)

@@ -59,6 +59,7 @@
	case MADV_FREE:
	case MADV_POPULATE_READ:
	case MADV_POPULATE_WRITE:
+	case MADV_COLLAPSE:
		return 0;
	default:
		/* be safe, default to 1. list exceptions explicitly */
@@ -1058,6 +1057,8 @@
		if (error)
			goto out;
		break;
+	case MADV_COLLAPSE:
+		return madvise_collapse(vma, prev, start, end);
	}
 
	anon_name = anon_vma_name(vma);
@@ -1153,6 +1150,7 @@
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
	case MADV_HUGEPAGE:
	case MADV_NOHUGEPAGE:
+	case MADV_COLLAPSE:
 #endif
	case MADV_DONTDUMP:
	case MADV_DODUMP:
@@ -1343,6 +1339,7 @@
 *  MADV_NOHUGEPAGE - mark the given range as not worth being backed by
 *		transparent huge pages so the existing pages will not be
 *		coalesced into THP and new pages will not be allocated as THP.
+*  MADV_COLLAPSE - synchronously coalesce pages into new THP.
 *  MADV_DONTDUMP - the application wants to prevent pages in the given range
 *		from being included in its core dump.
 *  MADV_DODUMP - cancel MADV_DONTDUMP: no longer exclude from core dump.
tools/include/uapi/asm-generic/mman-common.h (+2)

@@ -77,6 +77,8 @@
 
 #define MADV_DONTNEED_LOCKED	24 /* like DONTNEED, but drop locked pages too */
 
+#define MADV_COLLAPSE	25	/* Synchronous hugepage collapse */
+
 /* compatibility flags */
 #define MAP_FILE	0
 