Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm/gup: drop nth_page() usage in unpin_user_page_range_dirty_lock()

There is the concern that unpin_user_page_range_dirty_lock() might do some
weird merging of PFN ranges -- either now or in the future -- such that the
PFN range is contiguous but the page range might not be.

Let's sanity-check for that and drop the nth_page() usage.

Link: https://lkml.kernel.org/r/20250901150359.867252-35-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by David Hildenbrand, committed by Andrew Morton
b5ba761a ce00897b

+7 -1
mm/gup.c
@@ -237,7 +237,7 @@ static inline struct folio *gup_folio_range_next(struct page *start,
 		unsigned long npages, unsigned long i, unsigned int *ntails)
 {
-	struct page *next = nth_page(start, i);
+	struct page *next = start + i;
 	struct folio *folio = page_folio(next);
 	unsigned int nr = 1;
 
@@ -342,6 +342,10 @@
  * "gup-pinned page range" refers to a range of pages that has had one of the
  * pin_user_pages() variants called on that page.
  *
+ * The page range must be truly physically contiguous: the page range
+ * corresponds to a contiguous PFN range and all pages can be iterated
+ * naturally.
+ *
  * For the page ranges defined by [page .. page+npages], make that range (or
  * its head pages, if a compound page) dirty, if @make_dirty is true, and if the
  * page range was previously listed as clean.
@@ -362,6 +358,8 @@
 	unsigned long i;
 	struct folio *folio;
 	unsigned int nr;
+
+	VM_WARN_ON_ONCE(!page_range_contiguous(page, npages));
 
 	for (i = 0; i < npages; i += nr) {
 		folio = gup_folio_range_next(page, npages, i, &nr);