Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

kfence: drop nth_page() usage

We want to get rid of nth_page(), and kfence init code is the last user.

Unfortunately, we might actually walk a PFN range where the pages are not
contiguous, because we might be allocating an area from memblock that
could span memory sections in problematic kernel configs (SPARSEMEM
without SPARSEMEM_VMEMMAP).

We could check whether the page range is contiguous using
page_range_contiguous() and fail kfence init if it is not, or make
kfence incompatible with these problematic kernel configs.

Let's keep it simple and just use pfn_to_page(), iterating over PFNs
instead of struct page pointers.

Link: https://lkml.kernel.org/r/20250901150359.867252-36-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Marco Elver <elver@google.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>


+7 -5
mm/kfence/core.c
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -594,15 +594,14 @@
  */
 static unsigned long kfence_init_pool(void)
 {
-	unsigned long addr;
-	struct page *pages;
+	unsigned long addr, start_pfn;
 	int i;
 
 	if (!arch_kfence_init_pool())
 		return (unsigned long)__kfence_pool;
 
 	addr = (unsigned long)__kfence_pool;
-	pages = virt_to_page(__kfence_pool);
+	start_pfn = PHYS_PFN(virt_to_phys(__kfence_pool));
 
 	/*
 	 * Set up object pages: they must have PGTY_slab set to avoid freeing
@@ -612,11 +613,12 @@
 	 * enters __slab_free() slow-path.
 	 */
 	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
-		struct slab *slab = page_slab(nth_page(pages, i));
+		struct slab *slab;
 
 		if (!i || (i % 2))
 			continue;
 
+		slab = page_slab(pfn_to_page(start_pfn + i));
 		__folio_set_slab(slab_folio(slab));
 #ifdef CONFIG_MEMCG
 		slab->obj_exts = (unsigned long)&kfence_metadata_init[i / 2 - 1].obj_exts |
@@ -665,10 +665,12 @@
 
 reset_slab:
 	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
-		struct slab *slab = page_slab(nth_page(pages, i));
+		struct slab *slab;
 
 		if (!i || (i % 2))
 			continue;
+
+		slab = page_slab(pfn_to_page(start_pfn + i));
 #ifdef CONFIG_MEMCG
 		slab->obj_exts = 0;
 #endif