Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

fs/proc/task_mmu: hold PTL in pagemap_hugetlb_range and gather_hugetlb_stats

Hold the PTL in pagemap_hugetlb_range() and gather_hugetlb_stats() to avoid
operating on a stale page, as pagemap_pmd_range() and gather_pte_stats()
already do.

Link: https://lkml.kernel.org/r/20250724090958.455887-3-tujinjiang@huawei.com
Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Andrei Vagin <avagin@gmail.com>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Brahmajit Das <brahmajit.xyz@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: David Rientjes <rientjes@google.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Joern Engel <joern@logfs.org>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Thiago Jung Bauermann <thiago.bauermann@linaro.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Jinjiang Tu and committed by Andrew Morton
aa5a10b0 45d19b4b

+11 -3
fs/proc/task_mmu.c
···
 	struct pagemapread *pm = walk->private;
 	struct vm_area_struct *vma = walk->vma;
 	u64 flags = 0, frame = 0;
+	spinlock_t *ptl;
 	int err = 0;
 	pte_t pte;
 
 	if (vma->vm_flags & VM_SOFTDIRTY)
 		flags |= PM_SOFT_DIRTY;
 
+	ptl = huge_pte_lock(hstate_vma(vma), walk->mm, ptep);
 	pte = huge_ptep_get(walk->mm, addr, ptep);
 	if (pte_present(pte)) {
 		struct folio *folio = page_folio(pte_page(pte));
···
 		err = add_to_pagemap(&pme, pm);
 		if (err)
-			return err;
+			break;
 		if (pm->show_pfn && (flags & PM_PRESENT))
 			frame++;
 	}
 
+	spin_unlock(ptl);
 	cond_resched();
 
 	return err;
···
 static int gather_hugetlb_stats(pte_t *pte, unsigned long hmask,
 		unsigned long addr, unsigned long end, struct mm_walk *walk)
 {
-	pte_t huge_pte = huge_ptep_get(walk->mm, addr, pte);
+	pte_t huge_pte;
 	struct numa_maps *md;
 	struct page *page;
+	spinlock_t *ptl;
 
+	ptl = huge_pte_lock(hstate_vma(walk->vma), walk->mm, pte);
+	huge_pte = huge_ptep_get(walk->mm, addr, pte);
 	if (!pte_present(huge_pte))
-		return 0;
+		goto out;
 
 	page = pte_page(huge_pte);
 
 	md = walk->private;
 	gather_stats(page, md, pte_dirty(huge_pte), 1);
+out:
+	spin_unlock(ptl);
 	return 0;
 }