Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

kho: fix restoring of contiguous ranges of order-0 pages

When contiguous ranges of order-0 pages are restored, kho_restore_page()
calls prep_compound_page() with the first page in the range and order as
parameters and then kho_restore_pages() calls split_page() to make sure
all pages in the range are order-0.

However, split_page() is not intended to split compound pages, and with
VM_DEBUG enabled it triggers a VM_BUG_ON_PAGE().

Update kho_restore_page() so that it uses prep_compound_page() only when
it restores a folio, and make sure it properly sets the page count for
both large folios and ranges of order-0 pages.

Link: https://lkml.kernel.org/r/20251125110917.843744-3-rppt@kernel.org
Fixes: a667300bd53f ("kho: add support for preserving vmalloc allocations")
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Reported-by: Pratyush Yadav <pratyush@kernel.org>
Cc: Alexander Graf <graf@amazon.com>
Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Mike Rapoport (Microsoft), committed by Andrew Morton
7b71205a 4bc84cd5

+12 -8
kernel/liveupdate/kexec_handover.c

@@ -219,11 +219,11 @@
 		return 0;
 }
 
-static struct page *kho_restore_page(phys_addr_t phys)
+static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
 {
 	struct page *page = pfn_to_online_page(PHYS_PFN(phys));
+	unsigned int nr_pages, ref_cnt;
 	union kho_page_info info;
-	unsigned int nr_pages;
 
 	if (!page)
 		return NULL;
@@ -243,11 +243,16 @@
 	/* Head page gets refcount of 1. */
 	set_page_count(page, 1);
 
-	/* For higher order folios, tail pages get a page count of zero. */
+	/*
+	 * For higher order folios, tail pages get a page count of zero.
+	 * For physically contiguous order-0 pages every page gets a page
+	 * count of 1.
+	 */
+	ref_cnt = is_folio ? 0 : 1;
 	for (unsigned int i = 1; i < nr_pages; i++)
-		set_page_count(page + i, 0);
+		set_page_count(page + i, ref_cnt);
 
-	if (info.order > 0)
+	if (is_folio && info.order)
 		prep_compound_page(page, info.order);
 
 	adjust_managed_page_count(page, nr_pages);
@@ -267,7 +262,7 @@
  */
struct folio *kho_restore_folio(phys_addr_t phys)
 {
-	struct page *page = kho_restore_page(phys);
+	struct page *page = kho_restore_page(phys, true);
 
 	return page ? page_folio(page) : NULL;
 }
@@ -292,11 +287,10 @@
 	while (pfn < end_pfn) {
 		const unsigned int order =
 			min(count_trailing_zeros(pfn), ilog2(end_pfn - pfn));
-		struct page *page = kho_restore_page(PFN_PHYS(pfn));
+		struct page *page = kho_restore_page(PFN_PHYS(pfn), false);
 
 		if (!page)
 			return NULL;
-		split_page(page, order);
 		pfn += 1 << order;
 	}
··· 219 219 return 0; 220 220 } 221 221 222 - static struct page *kho_restore_page(phys_addr_t phys) 222 + static struct page *kho_restore_page(phys_addr_t phys, bool is_folio) 223 223 { 224 224 struct page *page = pfn_to_online_page(PHYS_PFN(phys)); 225 + unsigned int nr_pages, ref_cnt; 225 226 union kho_page_info info; 226 - unsigned int nr_pages; 227 227 228 228 if (!page) 229 229 return NULL; ··· 243 243 /* Head page gets refcount of 1. */ 244 244 set_page_count(page, 1); 245 245 246 - /* For higher order folios, tail pages get a page count of zero. */ 246 + /* 247 + * For higher order folios, tail pages get a page count of zero. 248 + * For physically contiguous order-0 pages every pages gets a page 249 + * count of 1 250 + */ 251 + ref_cnt = is_folio ? 0 : 1; 247 252 for (unsigned int i = 1; i < nr_pages; i++) 248 - set_page_count(page + i, 0); 253 + set_page_count(page + i, ref_cnt); 249 254 250 - if (info.order > 0) 255 + if (is_folio && info.order) 251 256 prep_compound_page(page, info.order); 252 257 253 258 adjust_managed_page_count(page, nr_pages); ··· 267 262 */ 268 263 struct folio *kho_restore_folio(phys_addr_t phys) 269 264 { 270 - struct page *page = kho_restore_page(phys); 265 + struct page *page = kho_restore_page(phys, true); 271 266 272 267 return page ? page_folio(page) : NULL; 273 268 } ··· 292 287 while (pfn < end_pfn) { 293 288 const unsigned int order = 294 289 min(count_trailing_zeros(pfn), ilog2(end_pfn - pfn)); 295 - struct page *page = kho_restore_page(PFN_PHYS(pfn)); 290 + struct page *page = kho_restore_page(PFN_PHYS(pfn), false); 296 291 297 292 if (!page) 298 293 return NULL; 299 - split_page(page, order); 300 294 pfn += 1 << order; 301 295 } 302 296