Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: hugetlb_vmemmap: move PageVmemmapSelfHosted() check to split_vmemmap_huge_pmd()

Checking whether a page is self-hosted requires walking the page table
(e.g. via pmd_off_k()); however, the subsequent call to
vmemmap_remap_range() already performs that walk. Moving the
PageVmemmapSelfHosted() check into vmemmap_pmd_entry() simplifies the
code a bit.

Link: https://lkml.kernel.org/r/20231127084645.27017-4-songmuchun@bytedance.com
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Muchun Song, committed by Andrew Morton
be035a2a fb93ed63

+24 -46
mm/hugetlb_vmemmap.c
···
 static int vmemmap_pmd_entry(pmd_t *pmd, unsigned long addr,
 			     unsigned long next, struct mm_walk *walk)
 {
+	int ret = 0;
 	struct page *head;
 	struct vmemmap_remap_walk *vmemmap_walk = walk->private;
···
 	spin_lock(&init_mm.page_table_lock);
 	head = pmd_leaf(*pmd) ? pmd_page(*pmd) : NULL;
+	/*
+	 * Due to HugeTLB alignment requirements and the vmemmap
+	 * pages being at the start of the hotplugged memory
+	 * region in memory_hotplug.memmap_on_memory case. Checking
+	 * the vmemmap page associated with the first vmemmap page
+	 * if it is self-hosted is sufficient.
+	 *
+	 *	[ hotplugged memory ]
+	 *	[  section  ][...][  section  ]
+	 *	[ vmemmap ][    usable memory    ]
+	 *	  ^  |          ^               |
+	 *	  +--+          |               |
+	 *	                +---------------+
+	 */
+	if (unlikely(!vmemmap_walk->nr_walked)) {
+		struct page *page = head ? head + pte_index(addr) :
+				    pte_page(ptep_get(pte_offset_kernel(pmd, addr)));
+
+		if (PageVmemmapSelfHosted(page))
+			ret = -ENOTSUPP;
+	}
 	spin_unlock(&init_mm.page_table_lock);
-	if (!head)
-		return 0;
+	if (!head || ret)
+		return ret;
 
 	return vmemmap_split_pmd(pmd, head, addr & PMD_MASK, vmemmap_walk);
 }
···
 	if (!hugetlb_vmemmap_optimizable(h))
 		return false;
-
-	if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG)) {
-		pmd_t *pmdp, pmd;
-		struct page *vmemmap_page;
-		unsigned long vaddr = (unsigned long)head;
-
-		/*
-		 * Only the vmemmap page's vmemmap page can be self-hosted.
-		 * Walking the page tables to find the backing page of the
-		 * vmemmap page.
-		 */
-		pmdp = pmd_off_k(vaddr);
-		/*
-		 * The READ_ONCE() is used to stabilize *pmdp in a register or
-		 * on the stack so that it will stop changing under the code.
-		 * The only concurrent operation where it can be changed is
-		 * split_vmemmap_huge_pmd() (*pmdp will be stable after this
-		 * operation).
-		 */
-		pmd = READ_ONCE(*pmdp);
-		if (pmd_leaf(pmd))
-			vmemmap_page = pmd_page(pmd) + pte_index(vaddr);
-		else
-			vmemmap_page = pte_page(*pte_offset_kernel(pmdp, vaddr));
-		/*
-		 * Due to HugeTLB alignment requirements and the vmemmap pages
-		 * being at the start of the hotplugged memory region in
-		 * memory_hotplug.memmap_on_memory case. Checking any vmemmap
-		 * page's vmemmap page if it is marked as VmemmapSelfHosted is
-		 * sufficient.
-		 *
-		 *	[ hotplugged memory ]
-		 *	[  section  ][...][  section  ]
-		 *	[ vmemmap ][    usable memory    ]
-		 *	  ^   |    |                    |
-		 *	  +---+    |                    |
-		 *	  ^        |                    |
-		 *	  +--------+                    |
-		 *	  ^                             |
-		 *	  +-----------------------------+
-		 */
-		if (PageVmemmapSelfHosted(vmemmap_page))
-			return false;
-	}
 
 	return true;
 }