Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: hugetlb_vmemmap: add hugetlb_optimize_vmemmap sysctl

We must add hugetlb_free_vmemmap=on (or "off") to the boot cmdline and
reboot the server to enable or disable the feature of optimizing vmemmap
pages associated with HugeTLB pages. However, rebooting usually takes a
long time. So add a sysctl to enable or disable the feature at runtime
without rebooting. Why do we need this? There are three use cases.
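
Concretely, the before/after difference for an administrator looks like
this (a sketch; the sysctl name is the one added by this patch):

```
# Before: boot-time only -- kernel cmdline fragment, requires a reboot:
hugetlb_free_vmemmap=on

# After: runtime toggle -- /etc/sysctl.conf fragment (or `sysctl -w`):
vm.hugetlb_optimize_vmemmap = 1
```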

1) The feature of minimizing the overhead of struct page associated with
each HugeTLB page is disabled by default unless
"hugetlb_free_vmemmap=on" is passed on the boot cmdline. When we
(ByteDance) deliver servers to users who want to enable this feature,
they have to configure grub (change the boot cmdline) and reboot the
servers, and rebooting usually takes a long time (we have thousands of
servers). It is a very bad experience for the users. So we need an
approach to enable this feature without rebooting. This is a use case
from our production environment.

2) In some use cases, HugeTLB pages are allocated 'on the fly' instead
of being pulled from the HugeTLB pool. Those workloads are affected
when this feature is enabled. They can be identified by the
characteristic that they never explicitly allocate huge pages with
'nr_hugepages' but only set 'nr_overcommit_hugepages' and then let the
pages be allocated from the buddy allocator at fault time. Commit
099730d67417 confirms this is a real use case. For those workloads, the
page fault time could be ~2x slower than before. We suspect those users
would want to disable this feature if the system has it enabled and
they don't think the memory savings are enough to make up for the
performance drop.

3) Suppose a workload that wants vmemmap pages optimized and a workload
that sets 'nr_overcommit_hugepages' and does not want the extra
overhead at fault time when the overcommitted pages are allocated from
the buddy allocator are deployed on the same server. The user could
enable this feature, set 'nr_hugepages' and 'nr_overcommit_hugepages',
and then disable the feature. In this case, the overcommitted HugeTLB
pages will not incur the extra overhead at fault time.
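
The sequence in case 3 can be written down as a short admin sketch
(knob paths as in mainline; the pool sizes are illustrative and the
commands need root):

```
# 1) enable the optimization
sysctl vm.hugetlb_optimize_vmemmap=1
# 2) pre-allocate the pool pages that should be optimized
echo 2048 > /proc/sys/vm/nr_hugepages
# 3) allow overcommit pages to be taken from buddy at fault time
echo 1024 > /proc/sys/vm/nr_overcommit_hugepages
# 4) disable the optimization: pages allocated from now on (including
#    overcommitted ones faulted in later) skip the vmemmap remapping overhead
sysctl vm.hugetlb_optimize_vmemmap=0
```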

Link: https://lkml.kernel.org/r/20220512041142.39501-5-songmuchun@bytedance.com
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Iurii Zaikin <yzaikin@google.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: Masahiro Yamada <masahiroy@kernel.org>
Cc: Xiongchun Duan <duanxiongchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Muchun Song, committed by Andrew Morton (commit 78f39084,
parent 9c54c522). Diffstat: +133 -15 across 4 files.
Documentation/admin-guide/sysctl/vm.rst (+39)
···
 See Documentation/admin-guide/mm/hugetlbpage.rst
 
 
+hugetlb_optimize_vmemmap
+========================
+
+This knob is not available when memory_hotplug.memmap_on_memory (kernel parameter)
+is configured or the size of 'struct page' (a structure defined in
+include/linux/mm_types.h) is not power of two (an unusual system config could
+result in this).
+
+Enable (set to 1) or disable (set to 0) the feature of optimizing vmemmap pages
+associated with each HugeTLB page.
+
+Once enabled, the vmemmap pages of subsequent allocations of HugeTLB pages from
+the buddy allocator will be optimized (7 pages per 2MB HugeTLB page and 4095
+pages per 1GB HugeTLB page), whereas already allocated HugeTLB pages will not
+be optimized.  When those optimized HugeTLB pages are freed from the HugeTLB
+pool to the buddy allocator, the vmemmap pages representing that range need to
+be remapped again and the vmemmap pages discarded earlier need to be allocated
+again.  If your use case is that HugeTLB pages are allocated 'on the fly' (e.g.
+never explicitly allocating HugeTLB pages with 'nr_hugepages' but only setting
+'nr_overcommit_hugepages', so that those overcommitted HugeTLB pages are
+allocated 'on the fly') instead of being pulled from the HugeTLB pool, you
+should weigh the benefit of memory savings against the extra overhead (~2x
+slower than before) of allocating or freeing HugeTLB pages between the HugeTLB
+pool and the buddy allocator.  Another behavior to note is that if the system
+is under heavy memory pressure, it could prevent the user from freeing HugeTLB
+pages from the HugeTLB pool to the buddy allocator since the allocation of
+vmemmap pages could fail; you have to retry later if your system encounters
+this situation.
+
+Once disabled, the vmemmap pages of subsequent allocations of HugeTLB pages
+from the buddy allocator will not be optimized, meaning the extra overhead at
+allocation time from the buddy allocator disappears, whereas already optimized
+HugeTLB pages will not be affected.  If you want to make sure there are no
+optimized HugeTLB pages, you can set "nr_hugepages" to 0 first and then disable
+this.  Note that writing 0 to nr_hugepages will make any "in use" HugeTLB pages
+become surplus pages.  So, those surplus pages are still optimized until they
+are no longer in use.  You would need to wait for those surplus pages to be
+released before there are no optimized pages in the system.
+
+
 nr_hugepages_mempolicy
 ======================
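
The "7 pages per 2MB" and "4095 pages per 1GB" figures follow from the vmemmap
geometry; a quick sketch of the arithmetic, assuming a 4 KB base page and a
64-byte struct page (typical x86-64 values, not read from a live system):

```shell
# Arithmetic behind the numbers quoted in the documentation above.
BASE_PAGE=4096    # assumed base page size in bytes
STRUCT_PAGE=64    # assumed sizeof(struct page)

# A 2 MB HugeTLB page contains 512 base pages, so its vmemmap needs
# 512 * 64 = 32768 bytes = 8 base pages; all but the 1 reserved page
# (RESERVE_VMEMMAP_NR) can be freed, hence 7.
subpages_2m=$((2 * 1024 * 1024 / BASE_PAGE))
vmemmap_pages_2m=$((subpages_2m * STRUCT_PAGE / BASE_PAGE))
echo "2MB: ${vmemmap_pages_2m} vmemmap pages, $((vmemmap_pages_2m - 1)) freeable"

# A 1 GB HugeTLB page contains 262144 base pages -> 4096 vmemmap pages,
# of which 4095 can be freed.
subpages_1g=$((1024 * 1024 * 1024 / BASE_PAGE))
vmemmap_pages_1g=$((subpages_1g * STRUCT_PAGE / BASE_PAGE))
echo "1GB: ${vmemmap_pages_1g} vmemmap pages, $((vmemmap_pages_1g - 1)) freeable"
```

Per 2MB page that is 28 KB of vmemmap reclaimed, i.e. roughly 1.4% of the
huge page's own size.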
include/linux/memory_hotplug.h (+9)
···
 extern bool mhp_supports_memmap_on_memory(unsigned long size);
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
+#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
+bool mhp_memmap_on_memory(void);
+#else
+static inline bool mhp_memmap_on_memory(void)
+{
+	return false;
+}
+#endif
+
 #endif /* __LINUX_MEMORY_HOTPLUG_H */
mm/hugetlb_vmemmap.c (+84 -9)
···
  */
 #define pr_fmt(fmt) "HugeTLB: " fmt
 
+#include <linux/memory_hotplug.h>
 #include "hugetlb_vmemmap.h"
 
 /*
···
 #define RESERVE_VMEMMAP_NR	1U
 #define RESERVE_VMEMMAP_SIZE	(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
+enum vmemmap_optimize_mode {
+	VMEMMAP_OPTIMIZE_OFF,
+	VMEMMAP_OPTIMIZE_ON,
+};
+
 DEFINE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON,
 			hugetlb_optimize_vmemmap_key);
 EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
+static enum vmemmap_optimize_mode vmemmap_optimize_mode =
+	IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON);
+
+static void vmemmap_optimize_mode_switch(enum vmemmap_optimize_mode to)
+{
+	if (vmemmap_optimize_mode == to)
+		return;
+
+	if (to == VMEMMAP_OPTIMIZE_OFF)
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	else
+		static_branch_inc(&hugetlb_optimize_vmemmap_key);
+	WRITE_ONCE(vmemmap_optimize_mode, to);
+}
+
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
 	bool enable;
+	enum vmemmap_optimize_mode mode;
 
 	if (kstrtobool(buf, &enable))
 		return -EINVAL;
 
-	if (enable)
-		static_branch_enable(&hugetlb_optimize_vmemmap_key);
-	else
-		static_branch_disable(&hugetlb_optimize_vmemmap_key);
+	mode = enable ? VMEMMAP_OPTIMIZE_ON : VMEMMAP_OPTIMIZE_OFF;
+	vmemmap_optimize_mode_switch(mode);
 
 	return 0;
 }
···
 	 */
 	ret = vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
 				  GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
-	if (!ret)
+	if (!ret) {
 		ClearHPageVmemmapOptimized(head);
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	}
 
 	return ret;
 }
···
 	if (!vmemmap_pages)
 		return;
 
+	if (READ_ONCE(vmemmap_optimize_mode) == VMEMMAP_OPTIMIZE_OFF)
+		return;
+
+	static_branch_inc(&hugetlb_optimize_vmemmap_key);
+
 	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
 	vmemmap_end = vmemmap_addr + (vmemmap_pages << PAGE_SHIFT);
 	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
···
 	 * to the page which @vmemmap_reuse is mapped to, then free the pages
 	 * which the range [@vmemmap_addr, @vmemmap_end] is mapped to.
 	 */
-	if (!vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse))
+	if (vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse))
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	else
 		SetHPageVmemmapOptimized(head);
 }
···
 	 */
 	BUILD_BUG_ON(__NR_USED_SUBPAGE >=
 		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
-
-	if (!hugetlb_optimize_vmemmap_enabled())
-		return;
 
 	if (!is_power_of_2(sizeof(struct page))) {
 		pr_warn_once("cannot optimize vmemmap pages because \"struct page\" crosses page boundaries\n");
···
 	pr_info("can optimize %d vmemmap pages for %s\n",
 		h->optimize_vmemmap_pages, h->name);
 }
+
+#ifdef CONFIG_PROC_SYSCTL
+static int hugetlb_optimize_vmemmap_handler(struct ctl_table *table, int write,
+					    void *buffer, size_t *length,
+					    loff_t *ppos)
+{
+	int ret;
+	enum vmemmap_optimize_mode mode;
+	static DEFINE_MUTEX(sysctl_mutex);
+
+	if (write && !capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	mutex_lock(&sysctl_mutex);
+	mode = vmemmap_optimize_mode;
+	table->data = &mode;
+	ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
+	if (write && !ret)
+		vmemmap_optimize_mode_switch(mode);
+	mutex_unlock(&sysctl_mutex);
+
+	return ret;
+}
+
+static struct ctl_table hugetlb_vmemmap_sysctls[] = {
+	{
+		.procname	= "hugetlb_optimize_vmemmap",
+		.maxlen		= sizeof(enum vmemmap_optimize_mode),
+		.mode		= 0644,
+		.proc_handler	= hugetlb_optimize_vmemmap_handler,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
+	},
+	{ }
+};
+
+static __init int hugetlb_vmemmap_sysctls_init(void)
+{
+	/*
+	 * If "memory_hotplug.memmap_on_memory" is enabled or "struct page"
+	 * crosses page boundaries, the vmemmap pages cannot be optimized.
+	 */
+	if (!mhp_memmap_on_memory() && is_power_of_2(sizeof(struct page)))
+		register_sysctl_init("vm", hugetlb_vmemmap_sysctls);
+
+	return 0;
+}
+late_initcall(hugetlb_vmemmap_sysctls_init);
+#endif /* CONFIG_PROC_SYSCTL */
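
The patch turns hugetlb_optimize_vmemmap_key into a reference count: one
reference while the mode is ON, plus one per currently-optimized page, so the
key (and thus the restore path) stays active as long as any optimized page
exists even after the sysctl is switched off. A toy model of that invariant
(plain shell counter, not kernel code; function names mirror the patch but the
logic is simplified, e.g. the failure path of vmemmap_remap_free() is omitted):

```shell
key=0       # models hugetlb_optimize_vmemmap_key's count
mode=off    # models vmemmap_optimize_mode

mode_switch() {                  # mirrors vmemmap_optimize_mode_switch()
    if [ "$mode" = "$1" ]; then return 0; fi
    if [ "$1" = on ]; then key=$((key + 1)); else key=$((key - 1)); fi
    mode=$1
}
optimize_page() {                # mirrors hugetlb_vmemmap_free(): inc per page
    if [ "$mode" = on ]; then key=$((key + 1)); fi
}
restore_page() {                 # mirrors hugetlb_vmemmap_alloc(): dec on restore
    key=$((key - 1))
}

mode_switch on                   # sysctl vm.hugetlb_optimize_vmemmap=1 -> key=1
optimize_page                    # allocate one HugeTLB page             -> key=2
mode_switch off                  # sysctl vm.hugetlb_optimize_vmemmap=0  -> key=1
echo "key=$key"                  # still nonzero: the optimized page holds a ref
restore_page                     # free the page back to buddy           -> key=0
echo "key=$key"
```

This is why disabling the sysctl leaves already-optimized pages untouched:
their references keep the static key enabled until each page is restored.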
mm/memory_hotplug.c (+1 -6)
···
 module_param_cb(memmap_on_memory, &memmap_on_memory_ops, &memmap_on_memory, 0444);
 MODULE_PARM_DESC(memmap_on_memory, "Enable memmap on memory for memory hotplug");
 
-static inline bool mhp_memmap_on_memory(void)
+bool mhp_memmap_on_memory(void)
 {
 	return memmap_on_memory;
-}
-#else
-static inline bool mhp_memmap_on_memory(void)
-{
-	return false;
 }
 #endif