Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

arm64: hugetlb: Cleanup huge_pte size discovery mechanisms

Not all huge_pte helper APIs explicitly provide the size of the
huge_pte, so the helpers must rely on various methods to determine it.
Some of these methods are dubious.

Let's clean up the code to use preferred methods and retire the dubious
ones. The options in order of preference:

- If size is provided as parameter, use it together with
num_contig_ptes(). This is explicit and works for both present and
non-present ptes.

- If vma is provided as a parameter, retrieve size via
huge_page_size(hstate_vma(vma)) and use it together with
num_contig_ptes(). This is explicit and works for both present and
non-present ptes.

- If the pte is present and contiguous, use find_num_contig() to walk
the pgtable to find the level and infer the number of ptes from
level. Only works for *present* ptes.

- If the pte is present and not contiguous, it follows that only 1 pte
  needs to be operated on. This is ok if you don't care about the
  absolute size, and just want to know the number of ptes.

- NEVER rely on resolving the PFN of a present pte to a folio and
getting the folio's size. This is fragile at best, because there is
nothing to stop the core-mm from allocating a folio twice as big as
the huge_pte then mapping it across 2 consecutive huge_ptes. Or just
partially mapping it.

Where we require that the pte is present, warn if it is not.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Tested-by: Luiz Capitulino <luizcap@redhat.com>
Link: https://lore.kernel.org/r/20250422081822.1836315-2-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>

Authored by Ryan Roberts, committed by Will Deacon
29cb8051 fcf8dda8

+15 -5
arch/arm64/mm/hugetlbpage.c
···
 	if (!pte_present(orig_pte) || !pte_cont(orig_pte))
 		return orig_pte;
 
-	ncontig = num_contig_ptes(page_size(pte_page(orig_pte)), &pgsize);
+	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
 	for (i = 0; i < ncontig; i++, ptep++) {
 		pte_t pte = __ptep_get(ptep);
···
 	pgprot_t hugeprot;
 	pte_t orig_pte;
 
+	VM_WARN_ON(!pte_present(pte));
+
 	if (!pte_cont(pte))
 		return __ptep_set_access_flags(vma, addr, ptep, pte, dirty);
 
-	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
+	ncontig = num_contig_ptes(huge_page_size(hstate_vma(vma)), &pgsize);
 	dpfn = pgsize >> PAGE_SHIFT;
 
 	if (!__cont_access_flags_changed(ptep, pte, ncontig))
 		return 0;
 
 	orig_pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
+	VM_WARN_ON(!pte_present(orig_pte));
 
 	/* Make sure we don't lose the dirty or young state */
 	if (pte_dirty(orig_pte))
···
 	size_t pgsize;
 	pte_t pte;
 
-	if (!pte_cont(__ptep_get(ptep))) {
+	pte = __ptep_get(ptep);
+	VM_WARN_ON(!pte_present(pte));
+
+	if (!pte_cont(pte)) {
 		__ptep_set_wrprotect(mm, addr, ptep);
 		return;
 	}
···
 	struct mm_struct *mm = vma->vm_mm;
 	size_t pgsize;
 	int ncontig;
+	pte_t pte;
 
-	if (!pte_cont(__ptep_get(ptep)))
+	pte = __ptep_get(ptep);
+	VM_WARN_ON(!pte_present(pte));
+
+	if (!pte_cont(pte))
 		return ptep_clear_flush(vma, addr, ptep);
 
-	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
+	ncontig = num_contig_ptes(huge_page_size(hstate_vma(vma)), &pgsize);
 	return get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
 }