Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'folio-5.18b' of git://git.infradead.org/users/willy/pagecache

Pull filesystem folio updates from Matthew Wilcox:
"Primarily this series converts some of the address_space operations to
take a folio instead of a page.

Notably:

- a_ops->is_partially_uptodate() takes a folio instead of a page and
changes the type of the 'from' and 'count' arguments to make it
obvious they're bytes.

- a_ops->invalidatepage() becomes ->invalidate_folio() and has a
similar type change.

- a_ops->launder_page() becomes ->launder_folio()

- a_ops->set_page_dirty() becomes ->dirty_folio() and adds the
address_space as an argument.

There are a couple of other misc changes up front that weren't worth
separating into their own pull request"

* tag 'folio-5.18b' of git://git.infradead.org/users/willy/pagecache: (53 commits)
fs: Remove aops ->set_page_dirty
fb_defio: Use noop_dirty_folio()
fs: Convert __set_page_dirty_no_writeback to noop_dirty_folio
fs: Convert __set_page_dirty_buffers to block_dirty_folio
nilfs: Convert nilfs_set_page_dirty() to nilfs_dirty_folio()
mm: Convert swap_set_page_dirty() to swap_dirty_folio()
ubifs: Convert ubifs_set_page_dirty to ubifs_dirty_folio
f2fs: Convert f2fs_set_node_page_dirty to f2fs_dirty_node_folio
f2fs: Convert f2fs_set_data_page_dirty to f2fs_dirty_data_folio
f2fs: Convert f2fs_set_meta_page_dirty to f2fs_dirty_meta_folio
afs: Convert afs_dir_set_page_dirty() to afs_dir_dirty_folio()
btrfs: Convert extent_range_redirty_for_io() to use folios
fs: Convert trivial uses of __set_page_dirty_nobuffers to filemap_dirty_folio
btrfs: Convert from set_page_dirty to dirty_folio
fscache: Convert fscache_set_page_dirty() to fscache_dirty_folio()
fs: Add aops->dirty_folio
fs: Remove aops->launder_page
orangefs: Convert launder_page to launder_folio
nfs: Convert from launder_page to launder_folio
fuse: Convert from launder_page to launder_folio
...

+848 -874
+4 -3
Documentation/filesystems/caching/netfs-api.rst
···
 
 To support this, the following functions are provided::
 
-	int fscache_set_page_dirty(struct page *page,
-				   struct fscache_cookie *cookie);
+	bool fscache_dirty_folio(struct address_space *mapping,
+				 struct folio *folio,
+				 struct fscache_cookie *cookie);
 	void fscache_unpin_writeback(struct writeback_control *wbc,
 				     struct fscache_cookie *cookie);
 	void fscache_clear_inode_writeback(struct fscache_cookie *cookie,
···
 					   const void *aux);
 
 The *set* function is intended to be called from the filesystem's
-``set_page_dirty`` address space operation.  If ``I_PINNING_FSCACHE_WB`` is not
+``dirty_folio`` address space operation.  If ``I_PINNING_FSCACHE_WB`` is not
 set, it sets that flag and increments the use count on the cookie (the caller
 must already have called ``fscache_use_cookie()``).
+21 -21
Documentation/filesystems/locking.rst
···
 	int (*writepage)(struct page *page, struct writeback_control *wbc);
 	int (*readpage)(struct file *, struct page *);
 	int (*writepages)(struct address_space *, struct writeback_control *);
-	int (*set_page_dirty)(struct page *page);
+	bool (*dirty_folio)(struct address_space *, struct folio *folio);
 	void (*readahead)(struct readahead_control *);
 	int (*readpages)(struct file *filp, struct address_space *mapping,
 			struct list_head *pages, unsigned nr_pages);
···
 			loff_t pos, unsigned len, unsigned copied,
 			struct page *page, void *fsdata);
 	sector_t (*bmap)(struct address_space *, sector_t);
-	void (*invalidatepage) (struct page *, unsigned int, unsigned int);
+	void (*invalidate_folio) (struct folio *, size_t start, size_t len);
 	int (*releasepage) (struct page *, int);
 	void (*freepage)(struct page *);
 	int (*direct_IO)(struct kiocb *, struct iov_iter *iter);
 	bool (*isolate_page) (struct page *, isolate_mode_t);
 	int (*migratepage)(struct address_space *, struct page *, struct page *);
 	void (*putback_page) (struct page *);
-	int (*launder_page)(struct page *);
-	int (*is_partially_uptodate)(struct page *, unsigned long, unsigned long);
+	int (*launder_folio)(struct folio *);
+	bool (*is_partially_uptodate)(struct folio *, size_t from, size_t count);
 	int (*error_remove_page)(struct address_space *, struct page *);
 	int (*swap_activate)(struct file *);
 	int (*swap_deactivate)(struct file *);
 
 locking rules:
-	All except set_page_dirty and freepage may block
+	All except dirty_folio and freepage may block
 
 ======================	========================	=========	===============
 ops			PageLocked(page)		i_rwsem		invalidate_lock
···
 writepage:		yes, unlocks (see below)
 readpage:		yes, unlocks				shared
 writepages:
-set_page_dirty		no
+dirty_folio		maybe
 readahead:		yes, unlocks				shared
 readpages:		no					shared
 write_begin:		locks the page			exclusive
 write_end:		yes, unlocks			exclusive
 bmap:
-invalidatepage:		yes				exclusive
+invalidate_folio:	yes				exclusive
 releasepage:		yes
 freepage:		yes
 direct_IO:
 isolate_page:		yes
 migratepage:		yes (both)
 putback_page:		yes
-launder_page:		yes
+launder_folio:		yes
 is_partially_uptodate:	yes
 error_remove_page:	yes
 swap_activate:		no
···
 	writepages should _only_ write pages which are present on
 	mapping->io_pages.
 
-	->set_page_dirty() is called from various places in the kernel
-when the target page is marked as needing writeback.  It may be called
-under spinlock (it cannot block) and is sometimes called with the page
-not locked.
+	->dirty_folio() is called from various places in the kernel when
+the target folio is marked as needing writeback.  The folio cannot be
+truncated because either the caller holds the folio lock, or the caller
+has found the folio while holding the page table lock which will block
+truncation.
 
 	->bmap() is currently used by legacy ioctl() (FIBMAP) provided by some
 filesystems and by the swapper. The latter will eventually go away.  Please,
 keep it that way and don't breed new callers.
 
-	->invalidatepage() is called when the filesystem must attempt to drop
+	->invalidate_folio() is called when the filesystem must attempt to drop
 some or all of the buffers from the page when it is being truncated. It
-returns zero on success.  If ->invalidatepage is zero, the kernel uses
-block_invalidatepage() instead.  The filesystem must exclusively acquire
-invalidate_lock before invalidating page cache in truncate / hole punch path
-(and thus calling into ->invalidatepage) to block races between page cache
-invalidation and page cache filling functions (fault, read, ...).
+returns zero on success.  The filesystem must exclusively acquire
+invalidate_lock before invalidating page cache in truncate / hole punch
+path (and thus calling into ->invalidate_folio) to block races between page
+cache invalidation and page cache filling functions (fault, read, ...).
 
 	->releasepage() is called when the kernel is about to try to drop the
 buffers from the page in preparation for freeing it.  It returns zero to
···
 	->freepage() is called when the kernel is done dropping the page
 from the page cache.
 
-	->launder_page() may be called prior to releasing a page if
-it is still found to be dirty.  It returns zero if the page was successfully
-cleaned, or an error value if not.  Note that in order to prevent the page
+	->launder_folio() may be called prior to releasing a folio if
+it is still found to be dirty.  It returns zero if the folio was successfully
+cleaned, or an error value if not.  Note that in order to prevent the folio
 getting mapped back in and redirtied, it needs to be kept locked
 across the entire operation.
+23 -23
Documentation/filesystems/vfs.rst
···
 
 The read process essentially only requires 'readpage'.  The write
 process is more complicated and uses write_begin/write_end or
-set_page_dirty to write data into the address_space, and writepage and
+dirty_folio to write data into the address_space, and writepage and
 writepages to writeback data to storage.
 
 Adding and removing pages to/from an address_space is protected by the
···
 	int (*writepage)(struct page *page, struct writeback_control *wbc);
 	int (*readpage)(struct file *, struct page *);
 	int (*writepages)(struct address_space *, struct writeback_control *);
-	int (*set_page_dirty)(struct page *page);
+	bool (*dirty_folio)(struct address_space *, struct folio *);
 	void (*readahead)(struct readahead_control *);
 	int (*readpages)(struct file *filp, struct address_space *mapping,
 			 struct list_head *pages, unsigned nr_pages);
···
 			  loff_t pos, unsigned len, unsigned copied,
 			  struct page *page, void *fsdata);
 	sector_t (*bmap)(struct address_space *, sector_t);
-	void (*invalidatepage) (struct page *, unsigned int, unsigned int);
+	void (*invalidate_folio) (struct folio *, size_t start, size_t len);
 	int (*releasepage) (struct page *, int);
 	void (*freepage)(struct page *);
 	ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter);
···
 	int (*migratepage) (struct page *, struct page *);
 	/* put migration-failed page back to right list */
 	void (*putback_page) (struct page *);
-	int (*launder_page) (struct page *);
+	int (*launder_folio) (struct folio *);
 
-	int (*is_partially_uptodate) (struct page *, unsigned long,
-				      unsigned long);
+	bool (*is_partially_uptodate) (struct folio *, size_t from,
+				       size_t count);
 	void (*is_dirty_writeback) (struct page *, bool *, bool *);
 	int (*error_remove_page) (struct mapping *mapping, struct page *page);
 	int (*swap_activate)(struct file *);
···
 	This will choose pages from the address space that are tagged as
 	DIRTY and will pass them to ->writepage.
 
-``set_page_dirty``
-	called by the VM to set a page dirty.  This is particularly
-	needed if an address space attaches private data to a page, and
-	that data needs to be updated when a page is dirtied.  This is
+``dirty_folio``
+	called by the VM to mark a folio as dirty.  This is particularly
+	needed if an address space attaches private data to a folio, and
+	that data needs to be updated when a folio is dirtied.  This is
 	called, for example, when a memory mapped page gets modified.
-	If defined, it should set the PageDirty flag, and the
-	PAGECACHE_TAG_DIRTY tag in the radix tree.
+	If defined, it should set the folio dirty flag, and the
+	PAGECACHE_TAG_DIRTY search mark in i_pages.
 
 ``readahead``
 	Called by the VM to read pages associated with the address_space
···
 	to find out where the blocks in the file are and uses those
 	addresses directly.
 
-``invalidatepage``
-	If a page has PagePrivate set, then invalidatepage will be
-	called when part or all of the page is to be removed from the
+``invalidate_folio``
+	If a folio has private data, then invalidate_folio will be
+	called when part or all of the folio is to be removed from the
 	address space.  This generally corresponds to either a
 	truncation, punch hole or a complete invalidation of the address
 	space (in the latter case 'offset' will always be 0 and 'length'
-	will be PAGE_SIZE).  Any private data associated with the page
+	will be folio_size()).  Any private data associated with the page
 	should be updated to reflect this truncation.  If offset is 0
-	and length is PAGE_SIZE, then the private data should be
+	and length is folio_size(), then the private data should be
 	released, because the page must be able to be completely
 	discarded.  This may be done by calling the ->releasepage
 	function, but in this case the release MUST succeed.
···
 ``putback_page``
 	Called by the VM when isolated page's migration fails.
 
-``launder_page``
-	Called before freeing a page - it writes back the dirty page.
-	To prevent redirtying the page, it is kept locked during the
+``launder_folio``
+	Called before freeing a folio - it writes back the dirty folio.
+	To prevent redirtying the folio, it is kept locked during the
 	whole operation.
 
 ``is_partially_uptodate``
 	Called by the VM when reading a file through the pagecache when
-	the underlying blocksize != pagesize.  If the required block is
-	up to date then the read can complete without needing the IO to
-	bring the whole page up to date.
+	the underlying blocksize is smaller than the size of the folio.
+	If the required block is up to date then the read can complete
+	without needing I/O to bring the whole page up to date.
 
 ``is_dirty_writeback``
 	Called by the VM when attempting to reclaim a page.  The VM uses
+2 -1
block/fops.c
···
 }
 
 const struct address_space_operations def_blk_aops = {
-	.set_page_dirty	= __set_page_dirty_buffers,
+	.dirty_folio	= block_dirty_folio,
+	.invalidate_folio = block_invalidate_folio,
 	.readpage	= blkdev_readpage,
 	.readahead	= blkdev_readahead,
 	.writepage	= blkdev_writepage,
+1 -2
drivers/dax/device.c
···
 }
 
 static const struct address_space_operations dev_dax_aops = {
-	.set_page_dirty		= __set_page_dirty_no_writeback,
-	.invalidatepage		= noop_invalidatepage,
+	.dirty_folio	= noop_dirty_folio,
 };
 
 static int dax_open(struct inode *inode, struct file *filp)
+1 -8
drivers/video/fbdev/core/fb_defio.c
···
 	.page_mkwrite	= fb_deferred_io_mkwrite,
 };
 
-static int fb_deferred_io_set_page_dirty(struct page *page)
-{
-	if (!PageDirty(page))
-		SetPageDirty(page);
-	return 0;
-}
-
 static const struct address_space_operations fb_deferred_io_aops = {
-	.set_page_dirty = fb_deferred_io_set_page_dirty,
+	.dirty_folio	= noop_dirty_folio,
 };
 
 int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma)
+10 -27
fs/9p/vfs_addr.c
···
 	return 1;
 }
 
-/**
- * v9fs_invalidate_page - Invalidate a page completely or partially
- * @page: The page to be invalidated
- * @offset: offset of the invalidated region
- * @length: length of the invalidated region
- */
-
-static void v9fs_invalidate_page(struct page *page, unsigned int offset,
-				 unsigned int length)
+static void v9fs_invalidate_folio(struct folio *folio, size_t offset,
+				  size_t length)
 {
-	struct folio *folio = page_folio(page);
-
 	folio_wait_fscache(folio);
 }
 
···
 	return retval;
 }
 
-/**
- * v9fs_launder_page - Writeback a dirty page
- * @page: The page to be cleaned up
- *
- * Returns 0 on success.
- */
-
-static int v9fs_launder_page(struct page *page)
+static int v9fs_launder_folio(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
 	int retval;
 
 	if (folio_clear_dirty_for_io(folio)) {
···
 * Mark a page as having been made dirty and thus needing writeback.  We also
 * need to pin the cache object to write back to.
 */
-static int v9fs_set_page_dirty(struct page *page)
+static bool v9fs_dirty_folio(struct address_space *mapping, struct folio *folio)
 {
-	struct v9fs_inode *v9inode = V9FS_I(page->mapping->host);
+	struct v9fs_inode *v9inode = V9FS_I(mapping->host);
 
-	return fscache_set_page_dirty(page, v9fs_inode_cookie(v9inode));
+	return fscache_dirty_folio(mapping, folio, v9fs_inode_cookie(v9inode));
 }
 #else
-#define v9fs_set_page_dirty __set_page_dirty_nobuffers
+#define v9fs_dirty_folio filemap_dirty_folio
 #endif
 
 const struct address_space_operations v9fs_addr_operations = {
 	.readpage = v9fs_vfs_readpage,
 	.readahead = v9fs_vfs_readahead,
-	.set_page_dirty = v9fs_set_page_dirty,
+	.dirty_folio = v9fs_dirty_folio,
 	.writepage = v9fs_vfs_writepage,
 	.write_begin = v9fs_write_begin,
 	.write_end = v9fs_write_end,
 	.releasepage = v9fs_release_page,
-	.invalidatepage = v9fs_invalidate_page,
-	.launder_page = v9fs_launder_page,
+	.invalidate_folio = v9fs_invalidate_folio,
+	.launder_folio = v9fs_launder_folio,
 	.direct_IO = v9fs_direct_IO,
 };
+2 -1
fs/adfs/inode.c
···
 }
 
 static const struct address_space_operations adfs_aops = {
-	.set_page_dirty	= __set_page_dirty_buffers,
+	.dirty_folio	= block_dirty_folio,
+	.invalidate_folio = block_invalidate_folio,
 	.readpage	= adfs_readpage,
 	.writepage	= adfs_writepage,
 	.write_begin	= adfs_write_begin,
+4 -2
fs/affs/file.c
···
 }
 
 const struct address_space_operations affs_aops = {
-	.set_page_dirty	= __set_page_dirty_buffers,
+	.dirty_folio	= block_dirty_folio,
+	.invalidate_folio = block_invalidate_folio,
 	.readpage	= affs_readpage,
 	.writepage	= affs_writepage,
 	.write_begin	= affs_write_begin,
···
 }
 
 const struct address_space_operations affs_aops_ofs = {
-	.set_page_dirty	= __set_page_dirty_buffers,
+	.dirty_folio	= block_dirty_folio,
+	.invalidate_folio = block_invalidate_folio,
 	.readpage	= affs_readpage_ofs,
 	//.writepage	= affs_writepage_ofs,
 	.write_begin	= affs_write_begin_ofs,
+9 -9
fs/afs/dir.c
···
 		      struct dentry *old_dentry, struct inode *new_dir,
 		      struct dentry *new_dentry, unsigned int flags);
 static int afs_dir_releasepage(struct page *page, gfp_t gfp_flags);
-static void afs_dir_invalidatepage(struct page *page, unsigned int offset,
-				   unsigned int length);
+static void afs_dir_invalidate_folio(struct folio *folio, size_t offset,
+				   size_t length);
 
-static int afs_dir_set_page_dirty(struct page *page)
+static bool afs_dir_dirty_folio(struct address_space *mapping,
+		struct folio *folio)
 {
 	BUG(); /* This should never happen. */
 }
···
 };
 
 const struct address_space_operations afs_dir_aops = {
-	.set_page_dirty	= afs_dir_set_page_dirty,
+	.dirty_folio	= afs_dir_dirty_folio,
 	.releasepage	= afs_dir_releasepage,
-	.invalidatepage	= afs_dir_invalidatepage,
+	.invalidate_folio = afs_dir_invalidate_folio,
 };
 
 const struct dentry_operations afs_fs_dentry_operations = {
···
 /*
  * Invalidate part or all of a folio.
  */
-static void afs_dir_invalidatepage(struct page *subpage, unsigned int offset,
-				   unsigned int length)
+static void afs_dir_invalidate_folio(struct folio *folio, size_t offset,
+				   size_t length)
 {
-	struct folio *folio = page_folio(subpage);
 	struct afs_vnode *dvnode = AFS_FS_I(folio_inode(folio));
 
-	_enter("{%lu},%u,%u", folio_index(folio), offset, length);
+	_enter("{%lu},%zu,%zu", folio->index, offset, length);
 
 	BUG_ON(!folio_test_locked(folio));
+13 -15
fs/afs/file.c
···
 static int afs_file_mmap(struct file *file, struct vm_area_struct *vma);
 static int afs_readpage(struct file *file, struct page *page);
 static int afs_symlink_readpage(struct file *file, struct page *page);
-static void afs_invalidatepage(struct page *page, unsigned int offset,
-			       unsigned int length);
+static void afs_invalidate_folio(struct folio *folio, size_t offset,
+			       size_t length);
 static int afs_releasepage(struct page *page, gfp_t gfp_flags);
 
 static void afs_readahead(struct readahead_control *ractl);
···
 const struct address_space_operations afs_file_aops = {
 	.readpage	= afs_readpage,
 	.readahead	= afs_readahead,
-	.set_page_dirty	= afs_set_page_dirty,
-	.launder_page	= afs_launder_page,
+	.dirty_folio	= afs_dirty_folio,
+	.launder_folio	= afs_launder_folio,
 	.releasepage	= afs_releasepage,
-	.invalidatepage	= afs_invalidatepage,
+	.invalidate_folio = afs_invalidate_folio,
 	.write_begin	= afs_write_begin,
 	.write_end	= afs_write_end,
 	.writepage	= afs_writepage,
···
 const struct address_space_operations afs_symlink_aops = {
 	.readpage	= afs_symlink_readpage,
 	.releasepage	= afs_releasepage,
-	.invalidatepage	= afs_invalidatepage,
+	.invalidate_folio = afs_invalidate_folio,
 };
 
 static const struct vm_operations_struct afs_vm_ops = {
···
 * Adjust the dirty region of the page on truncation or full invalidation,
 * getting rid of the markers altogether if the region is entirely invalidated.
 */
-static void afs_invalidate_dirty(struct folio *folio, unsigned int offset,
-				 unsigned int length)
+static void afs_invalidate_dirty(struct folio *folio, size_t offset,
+				 size_t length)
 {
 	struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio));
 	unsigned long priv;
···
 * - release a page and clean up its private data if offset is 0 (indicating
 *   the entire page)
 */
-static void afs_invalidatepage(struct page *page, unsigned int offset,
-			       unsigned int length)
+static void afs_invalidate_folio(struct folio *folio, size_t offset,
+			       size_t length)
 {
-	struct folio *folio = page_folio(page);
+	_enter("{%lu},%zu,%zu", folio->index, offset, length);
 
-	_enter("{%lu},%u,%u", folio_index(folio), offset, length);
+	BUG_ON(!folio_test_locked(folio));
 
-	BUG_ON(!PageLocked(page));
-
-	if (PagePrivate(page))
+	if (folio_get_private(folio))
 		afs_invalidate_dirty(folio, offset, length);
 
 	folio_wait_fscache(folio);
+3 -3
fs/afs/internal.h
···
 * write.c
 */
 #ifdef CONFIG_AFS_FSCACHE
-extern int afs_set_page_dirty(struct page *);
+bool afs_dirty_folio(struct address_space *, struct folio *);
 #else
-#define afs_set_page_dirty __set_page_dirty_nobuffers
+#define afs_dirty_folio filemap_dirty_folio
 #endif
 extern int afs_write_begin(struct file *file, struct address_space *mapping,
 			loff_t pos, unsigned len, unsigned flags,
···
 extern int afs_fsync(struct file *, loff_t, loff_t, int);
 extern vm_fault_t afs_page_mkwrite(struct vm_fault *vmf);
 extern void afs_prune_wb_keys(struct afs_vnode *);
-extern int afs_launder_page(struct page *);
+int afs_launder_folio(struct folio *);
 
 /*
 * xattr.c
+5 -5
fs/afs/write.c
···
 * Mark a page as having been made dirty and thus needing writeback.  We also
 * need to pin the cache object to write back to.
 */
-int afs_set_page_dirty(struct page *page)
+bool afs_dirty_folio(struct address_space *mapping, struct folio *folio)
 {
-	return fscache_set_page_dirty(page, afs_vnode_cache(AFS_FS_I(page->mapping->host)));
+	return fscache_dirty_folio(mapping, folio,
+			afs_vnode_cache(AFS_FS_I(mapping->host)));
 }
 static void afs_folio_start_fscache(bool caching, struct folio *folio)
 {
···
 /*
 * Clean up a page during invalidation.
 */
-int afs_launder_page(struct page *subpage)
+int afs_launder_folio(struct folio *folio)
 {
-	struct folio *folio = page_folio(subpage);
 	struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio));
 	struct iov_iter iter;
 	struct bio_vec bv[1];
···
 	unsigned int f, t;
 	int ret = 0;
 
-	_enter("{%lx}", folio_index(folio));
+	_enter("{%lx}", folio->index);
 
 	priv = (unsigned long)folio_get_private(folio);
 	if (folio_clear_dirty_for_io(folio)) {
+1 -1
fs/aio.c
···
 #endif
 
 static const struct address_space_operations aio_ctx_aops = {
-	.set_page_dirty	= __set_page_dirty_no_writeback,
+	.dirty_folio	= noop_dirty_folio,
 #if IS_ENABLED(CONFIG_MIGRATION)
 	.migratepage	= aio_migratepage,
 #endif
+2 -1
fs/bfs/file.c
···
 }
 
 const struct address_space_operations bfs_aops = {
-	.set_page_dirty	= __set_page_dirty_buffers,
+	.dirty_folio	= block_dirty_folio,
+	.invalidate_folio = block_invalidate_folio,
 	.readpage	= bfs_readpage,
 	.writepage	= bfs_writepage,
 	.write_begin	= bfs_write_begin,
+3
fs/btrfs/ctree.h
···
 #define PageOrdered(page)		PagePrivate2(page)
 #define SetPageOrdered(page)		SetPagePrivate2(page)
 #define ClearPageOrdered(page)		ClearPagePrivate2(page)
+#define folio_test_ordered(folio)	folio_test_private_2(folio)
+#define folio_set_ordered(folio)	folio_set_private_2(folio)
+#define folio_clear_ordered(folio)	folio_clear_private_2(folio)
 
 #endif
+24 -23
fs/btrfs/disk-io.c
···
 	return try_release_extent_buffer(page);
 }
 
-static void btree_invalidatepage(struct page *page, unsigned int offset,
-				 unsigned int length)
+static void btree_invalidate_folio(struct folio *folio, size_t offset,
+				   size_t length)
 {
 	struct extent_io_tree *tree;
-	tree = &BTRFS_I(page->mapping->host)->io_tree;
-	extent_invalidatepage(tree, page, offset);
-	btree_releasepage(page, GFP_NOFS);
-	if (PagePrivate(page)) {
-		btrfs_warn(BTRFS_I(page->mapping->host)->root->fs_info,
-			   "page private not zero on page %llu",
-			   (unsigned long long)page_offset(page));
-		detach_page_private(page);
+	tree = &BTRFS_I(folio->mapping->host)->io_tree;
+	extent_invalidate_folio(tree, folio, offset);
+	btree_releasepage(&folio->page, GFP_NOFS);
+	if (folio_get_private(folio)) {
+		btrfs_warn(BTRFS_I(folio->mapping->host)->root->fs_info,
+			   "folio private not zero on folio %llu",
+			   (unsigned long long)folio_pos(folio));
+		folio_detach_private(folio);
 	}
 }
 
-static int btree_set_page_dirty(struct page *page)
-{
 #ifdef DEBUG
-	struct btrfs_fs_info *fs_info = btrfs_sb(page->mapping->host->i_sb);
+static bool btree_dirty_folio(struct address_space *mapping,
+		struct folio *folio)
+{
+	struct btrfs_fs_info *fs_info = btrfs_sb(mapping->host->i_sb);
 	struct btrfs_subpage *subpage;
 	struct extent_buffer *eb;
 	int cur_bit = 0;
-	u64 page_start = page_offset(page);
+	u64 page_start = folio_pos(folio);
 
 	if (fs_info->sectorsize == PAGE_SIZE) {
-		BUG_ON(!PagePrivate(page));
-		eb = (struct extent_buffer *)page->private;
+		eb = folio_get_private(folio);
 		BUG_ON(!eb);
 		BUG_ON(!test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
 		BUG_ON(!atomic_read(&eb->refs));
 		btrfs_assert_tree_write_locked(eb);
-		return __set_page_dirty_nobuffers(page);
+		return filemap_dirty_folio(mapping, folio);
 	}
-	ASSERT(PagePrivate(page) && page->private);
-	subpage = (struct btrfs_subpage *)page->private;
+	subpage = folio_get_private(folio);
 
 	ASSERT(subpage->dirty_bitmap);
 	while (cur_bit < BTRFS_SUBPAGE_BITMAP_SIZE) {
···
 
 		cur_bit += (fs_info->nodesize >> fs_info->sectorsize_bits);
 	}
-#endif
-	return __set_page_dirty_nobuffers(page);
+	return filemap_dirty_folio(mapping, folio);
 }
+#else
+#define btree_dirty_folio filemap_dirty_folio
+#endif
 
 static const struct address_space_operations btree_aops = {
 	.writepages	= btree_writepages,
 	.releasepage	= btree_releasepage,
-	.invalidatepage = btree_invalidatepage,
+	.invalidate_folio = btree_invalidate_folio,
 #ifdef CONFIG_MIGRATION
 	.migratepage	= btree_migratepage,
 #endif
-	.set_page_dirty = btree_set_page_dirty,
+	.dirty_folio	= btree_dirty_folio,
 };
 
 struct extent_buffer *btrfs_find_create_tree_block(
+2 -2
fs/btrfs/extent-io-tree.h
···
 			       u64 *start_ret, u64 *end_ret, u32 bits);
 int find_contiguous_extent_bit(struct extent_io_tree *tree, u64 start,
 			       u64 *start_ret, u64 *end_ret, u32 bits);
-int extent_invalidatepage(struct extent_io_tree *tree,
-			  struct page *page, unsigned long offset);
+int extent_invalidate_folio(struct extent_io_tree *tree,
+			  struct folio *folio, size_t offset);
 bool btrfs_find_delalloc_range(struct extent_io_tree *tree, u64 *start,
 			       u64 *end, u64 max_bytes,
 			       struct extent_state **cached_state);
+18 -17
fs/btrfs/extent_io.c
···
 
 void extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end)
 {
+	struct address_space *mapping = inode->i_mapping;
 	unsigned long index = start >> PAGE_SHIFT;
 	unsigned long end_index = end >> PAGE_SHIFT;
-	struct page *page;
+	struct folio *folio;
 
 	while (index <= end_index) {
-		page = find_get_page(inode->i_mapping, index);
-		BUG_ON(!page); /* Pages should be in the extent_io_tree */
-		__set_page_dirty_nobuffers(page);
-		account_page_redirty(page);
-		put_page(page);
-		index++;
+		folio = filemap_get_folio(mapping, index);
+		filemap_dirty_folio(mapping, folio);
+		folio_account_redirty(folio);
+		index += folio_nr_pages(folio);
+		folio_put(folio);
 	}
 }
 
···
 static int __extent_writepage(struct page *page, struct writeback_control *wbc,
 			      struct extent_page_data *epd)
 {
+	struct folio *folio = page_folio(page);
 	struct inode *inode = page->mapping->host;
 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
 	const u64 page_start = page_offset(page);
···
 	pg_offset = offset_in_page(i_size);
 	if (page->index > end_index ||
 	   (page->index == end_index && !pg_offset)) {
-		page->mapping->a_ops->invalidatepage(page, 0, PAGE_SIZE);
-		unlock_page(page);
+		folio_invalidate(folio, 0, folio_size(folio));
+		folio_unlock(folio);
 		return 0;
 	}
 
···
 }
 
 /*
- * basic invalidatepage code, this waits on any locked or writeback
- * ranges corresponding to the page, and then deletes any extent state
+ * basic invalidate_folio code, this waits on any locked or writeback
+ * ranges corresponding to the folio, and then deletes any extent state
 * records from the tree
 */
-int extent_invalidatepage(struct extent_io_tree *tree,
-			  struct page *page, unsigned long offset)
+int extent_invalidate_folio(struct extent_io_tree *tree,
+			  struct folio *folio, size_t offset)
 {
 	struct extent_state *cached_state = NULL;
-	u64 start = page_offset(page);
-	u64 end = start + PAGE_SIZE - 1;
-	size_t blocksize = page->mapping->host->i_sb->s_blocksize;
+	u64 start = folio_pos(folio);
+	u64 end = start + folio_size(folio) - 1;
+	size_t blocksize = folio->mapping->host->i_sb->s_blocksize;
 
 	/* This function is only called for the btree inode */
 	ASSERT(tree->owner == IO_TREE_BTREE_INODE_IO);
···
 		return 0;
 
 	lock_extent_bits(tree, start, end, &cached_state);
-	wait_on_page_writeback(page);
+	folio_wait_writeback(folio);
 
 	/*
 	 * Currently for btree io tree, only EXTENT_LOCKED is utilized,
+40 -44
fs/btrfs/inode.c
···
 }
 
 /*
- * While truncating the inode pages during eviction, we get the VFS calling
- * btrfs_invalidatepage() against each page of the inode. This is slow because
- * the calls to btrfs_invalidatepage() result in a huge amount of calls to
- * lock_extent_bits() and clear_extent_bit(), which keep merging and splitting
- * extent_state structures over and over, wasting lots of time.
+ * While truncating the inode pages during eviction, we get the VFS
+ * calling btrfs_invalidate_folio() against each folio of the inode. This
+ * is slow because the calls to btrfs_invalidate_folio() result in a
+ * huge amount of calls to lock_extent_bits() and clear_extent_bit(),
+ * which keep merging and splitting extent_state structures over and over,
+ * wasting lots of time.
  *
- * Therefore if the inode is being evicted, let btrfs_invalidatepage() skip all
- * those expensive operations on a per page basis and do only the ordered io
- * finishing, while we release here the extent_map and extent_state structures,
- * without the excessive merging and splitting.
+ * Therefore if the inode is being evicted, let btrfs_invalidate_folio()
+ * skip all those expensive operations on a per folio basis and do only
+ * the ordered io finishing, while we release here the extent_map and
+ * extent_state structures, without the excessive merging and splitting.
  */
 static void evict_inode_truncate_pages(struct inode *inode)
 {
···
 	 * If still has DELALLOC flag, the extent didn't reach disk,
 	 * and its reserved space won't be freed by delayed_ref.
 	 * So we need to free its reserved space here.
-	 * (Refer to comment in btrfs_invalidatepage, case 2)
+	 * (Refer to comment in btrfs_invalidate_folio, case 2)
 	 *
 	 * Note, end is the bytenr of last byte, so we need + 1 here.
 	 */
···
 }
 
 /*
- * For releasepage() and invalidatepage() we have a race window where
- * end_page_writeback() is called but the subpage spinlock is not yet released.
+ * For releasepage() and invalidate_folio() we have a race window where
+ * folio_end_writeback() is called but the subpage spinlock is not yet released.
  * If we continue to release/invalidate the page, we could cause use-after-free
  * for subpage spinlock. So this function is to spin and wait for subpage
  * spinlock.
···
 }
 #endif
 
-static void btrfs_invalidatepage(struct page *page, unsigned int offset,
-				 unsigned int length)
+static void btrfs_invalidate_folio(struct folio *folio, size_t offset,
+				   size_t length)
 {
-	struct btrfs_inode *inode = BTRFS_I(page->mapping->host);
+	struct btrfs_inode *inode = BTRFS_I(folio->mapping->host);
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	struct extent_io_tree *tree = &inode->io_tree;
 	struct extent_state *cached_state = NULL;
-	u64 page_start = page_offset(page);
-	u64 page_end = page_start + PAGE_SIZE - 1;
+	u64 page_start = folio_pos(folio);
+	u64 page_end = page_start + folio_size(folio) - 1;
 	u64 cur;
 	int inode_evicting = inode->vfs_inode.i_state & I_FREEING;
 
 	/*
-	 * We have page locked so no new ordered extent can be created on this
-	 * page, nor bio can be submitted for this page.
+	 * We have folio locked so no new ordered extent can be created on this
+	 * page, nor bio can be submitted for this folio.
 	 *
-	 * But already submitted bio can still be finished on this page.
-	 * Furthermore, endio function won't skip page which has Ordered
+	 * But already submitted bio can still be finished on this folio.
+	 * Furthermore, endio function won't skip folio which has Ordered
 	 * (Private2) already cleared, so it's possible for endio and
-	 * invalidatepage to do the same ordered extent accounting twice
-	 * on one page.
+	 * invalidate_folio to do the same ordered extent accounting twice
+	 * on one folio.
 	 *
 	 * So here we wait for any submitted bios to finish, so that we won't
-	 * do double ordered extent accounting on the same page.
+	 * do double ordered extent accounting on the same folio.
 	 */
-	wait_on_page_writeback(page);
-	wait_subpage_spinlock(page);
+	folio_wait_writeback(folio);
+	wait_subpage_spinlock(&folio->page);
 
 	/*
 	 * For subpage case, we have call sites like
 	 * btrfs_punch_hole_lock_range() which passes range not aligned to
 	 * sectorsize.
-	 * If the range doesn't cover the full page, we don't need to and
-	 * shouldn't clear page extent mapped, as page->private can still
+	 * If the range doesn't cover the full folio, we don't need to and
+	 * shouldn't clear page extent mapped, as folio->private can still
 	 * record subpage dirty bits for other part of the range.
 	 *
-	 * For cases that can invalidate the full even the range doesn't
-	 * cover the full page, like invalidating the last page, we're
+	 * For cases that invalidate the full folio even the range doesn't
+	 * cover the full folio, like invalidating the last folio, we're
 	 * still safe to wait for ordered extent to finish.
 	 */
 	if (!(offset == 0 && length == PAGE_SIZE)) {
-		btrfs_releasepage(page, GFP_NOFS);
+		btrfs_releasepage(&folio->page, GFP_NOFS);
 		return;
 	}
···
 						page_end);
 		ASSERT(range_end + 1 - cur < U32_MAX);
 		range_len = range_end + 1 - cur;
-		if (!btrfs_page_test_ordered(fs_info, page, cur, range_len)) {
+		if (!btrfs_page_test_ordered(fs_info, &folio->page, cur, range_len)) {
 			/*
 			 * If Ordered (Private2) is cleared, it means endio has
 			 * already been executed for the range.
···
 			delete_states = false;
 			goto next;
 		}
-		btrfs_page_clear_ordered(fs_info, page, cur, range_len);
+		btrfs_page_clear_ordered(fs_info, &folio->page, cur, range_len);
 
 		/*
 		 * IO on this page will never be started, so we need to account
···
 	 * should not have Ordered (Private2) anymore, or the above iteration
 	 * did something wrong.
 	 */
-	ASSERT(!PageOrdered(page));
-	btrfs_page_clear_checked(fs_info, page, page_offset(page), PAGE_SIZE);
+	ASSERT(!folio_test_ordered(folio));
+	btrfs_page_clear_checked(fs_info, &folio->page, folio_pos(folio), folio_size(folio));
 	if (!inode_evicting)
-		__btrfs_releasepage(page, GFP_NOFS);
-	clear_page_extent_mapped(page);
+		__btrfs_releasepage(&folio->page, GFP_NOFS);
+	clear_page_extent_mapped(&folio->page);
 }
 
 /*
···
 				      min_size, actual_len, alloc_hint, trans);
 }
 
-static int btrfs_set_page_dirty(struct page *page)
-{
-	return __set_page_dirty_nobuffers(page);
-}
-
 static int btrfs_permission(struct user_namespace *mnt_userns,
 			    struct inode *inode, int mask)
 {
···
 	.writepages = btrfs_writepages,
 	.readahead = btrfs_readahead,
 	.direct_IO = noop_direct_IO,
-	.invalidatepage = btrfs_invalidatepage,
+	.invalidate_folio = btrfs_invalidate_folio,
 	.releasepage = btrfs_releasepage,
 #ifdef CONFIG_MIGRATION
 	.migratepage = btrfs_migratepage,
 #endif
-	.set_page_dirty = btrfs_set_page_dirty,
+	.dirty_folio = filemap_dirty_folio,
 	.error_remove_page = generic_error_remove_page,
 	.swap_activate = btrfs_swap_activate,
 	.swap_deactivate = btrfs_swap_deactivate,
+45 -51
fs/buffer.c
···
 * FIXME: may need to call ->reservepage here as well.  That's rather up to the
 * address_space though.
 */
-int __set_page_dirty_buffers(struct page *page)
+bool block_dirty_folio(struct address_space *mapping, struct folio *folio)
 {
-	int newly_dirty;
-	struct address_space *mapping = page_mapping(page);
-
-	if (unlikely(!mapping))
-		return !TestSetPageDirty(page);
+	struct buffer_head *head;
+	bool newly_dirty;
 
 	spin_lock(&mapping->private_lock);
-	if (page_has_buffers(page)) {
-		struct buffer_head *head = page_buffers(page);
+	head = folio_buffers(folio);
+	if (head) {
 		struct buffer_head *bh = head;
 
 		do {
···
 	 * Lock out page's memcg migration to keep PageDirty
 	 * synchronized with per-memcg dirty page counters.
 	 */
-	lock_page_memcg(page);
-	newly_dirty = !TestSetPageDirty(page);
+	folio_memcg_lock(folio);
+	newly_dirty = !folio_test_set_dirty(folio);
 	spin_unlock(&mapping->private_lock);
 
 	if (newly_dirty)
-		__set_page_dirty(page, mapping, 1);
+		__folio_mark_dirty(folio, mapping, 1);
 
-	unlock_page_memcg(page);
+	folio_memcg_unlock(folio);
 
 	if (newly_dirty)
 		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
 
 	return newly_dirty;
 }
-EXPORT_SYMBOL(__set_page_dirty_buffers);
+EXPORT_SYMBOL(block_dirty_folio);
 
 /*
 * Write out and wait upon a list of buffers.
···
 }
 
 /**
- * block_invalidatepage - invalidate part or all of a buffer-backed page
- *
- * @page: the page which is affected
+ * block_invalidate_folio - Invalidate part or all of a buffer-backed folio.
+ * @folio: The folio which is affected.
  * @offset: start of the range to invalidate
  * @length: length of the range to invalidate
  *
- * block_invalidatepage() is called when all or part of the page has become
+ * block_invalidate_folio() is called when all or part of the folio has been
  * invalidated by a truncate operation.
  *
- * block_invalidatepage() does not have to release all buffers, but it must
+ * block_invalidate_folio() does not have to release all buffers, but it must
  * ensure that no dirty buffer is left outside @offset and that no I/O
  * is underway against any of the blocks which are outside the truncation
  * point.  Because the caller is about to free (and possibly reuse) those
  * blocks on-disk.
  */
-void block_invalidatepage(struct page *page, unsigned int offset,
-			  unsigned int length)
+void block_invalidate_folio(struct folio *folio, size_t offset, size_t length)
 {
 	struct buffer_head *head, *bh, *next;
-	unsigned int curr_off = 0;
-	unsigned int stop = length + offset;
+	size_t curr_off = 0;
+	size_t stop = length + offset;
 
-	BUG_ON(!PageLocked(page));
-	if (!page_has_buffers(page))
-		goto out;
+	BUG_ON(!folio_test_locked(folio));
 
 	/*
 	 * Check for overflow
 	 */
-	BUG_ON(stop > PAGE_SIZE || stop < length);
+	BUG_ON(stop > folio_size(folio) || stop < length);
 
-	head = page_buffers(page);
+	head = folio_buffers(folio);
+	if (!head)
+		return;
+
 	bh = head;
 	do {
-		unsigned int next_off = curr_off + bh->b_size;
+		size_t next_off = curr_off + bh->b_size;
 		next = bh->b_this_page;
 
 		/*
···
 	} while (bh != head);
 
 	/*
-	 * We release buffers only if the entire page is being invalidated.
+	 * We release buffers only if the entire folio is being invalidated.
 	 * The get_block cached value has been unconditionally invalidated,
 	 * so real IO is not possible anymore.
 	 */
-	if (length == PAGE_SIZE)
-		try_to_release_page(page, 0);
+	if (length == folio_size(folio))
+		filemap_release_folio(folio, 0);
 out:
 	return;
 }
-EXPORT_SYMBOL(block_invalidatepage);
+EXPORT_SYMBOL(block_invalidate_folio);
 
 
 /*
 * We attach and possibly dirty the buffers atomically wrt
- * __set_page_dirty_buffers() via private_lock.  try_to_free_buffers
+ * block_dirty_folio() via private_lock.  try_to_free_buffers
 * is already excluded via the page lock.
 */
void create_empty_buffers(struct page *page,
···
 			(1 << BH_Dirty)|(1 << BH_Uptodate));
 
 	/*
-	 * Be very careful.  We have no exclusion from __set_page_dirty_buffers
+	 * Be very careful.  We have no exclusion from block_dirty_folio
 	 * here, and the (potentially unmapped) buffers may become dirty at
 	 * any time.  If a buffer becomes dirty here after we've inspected it
 	 * then we just miss that fact, and the page stays dirty.
 	 *
-	 * Buffers outside i_size may be dirtied by __set_page_dirty_buffers;
+	 * Buffers outside i_size may be dirtied by block_dirty_folio;
 	 * handle that here by just cleaning them.
 	 */
···
 EXPORT_SYMBOL(generic_write_end);
 
 /*
- * block_is_partially_uptodate checks whether buffers within a page are
+ * block_is_partially_uptodate checks whether buffers within a folio are
 * uptodate or not.
 *
- * Returns true if all buffers which correspond to a file portion
- * we want to read are uptodate.
+ * Returns true if all buffers which correspond to the specified part
+ * of the folio are uptodate.
 */
-int block_is_partially_uptodate(struct page *page, unsigned long from,
-				unsigned long count)
+bool block_is_partially_uptodate(struct folio *folio, size_t from, size_t count)
 {
 	unsigned block_start, block_end, blocksize;
 	unsigned to;
 	struct buffer_head *bh, *head;
-	int ret = 1;
+	bool ret = true;
 
-	if (!page_has_buffers(page))
-		return 0;
-
-	head = page_buffers(page);
+	head = folio_buffers(folio);
+	if (!head)
+		return false;
 	blocksize = head->b_size;
-	to = min_t(unsigned, PAGE_SIZE - from, count);
+	to = min_t(unsigned, folio_size(folio) - from, count);
 	to = from + to;
-	if (from < blocksize && to > PAGE_SIZE - blocksize)
-		return 0;
+	if (from < blocksize && to > folio_size(folio) - blocksize)
+		return false;
 
 	bh = head;
 	block_start = 0;
···
 		block_end = block_start + blocksize;
 		if (block_end > from && block_start < to) {
 			if (!buffer_uptodate(bh)) {
-				ret = 0;
+				ret = false;
 				break;
 			}
 			if (block_end >= to)
···
 *
 * The same applies to regular filesystem pages: if all the buffers are
 * clean then we set the page clean and proceed.  To do that, we require
- * total exclusion from __set_page_dirty_buffers().  That is obtained with
+ * total exclusion from block_dirty_folio().  That is obtained with
 * private_lock.
 *
 * try_to_free_buffers() is non-blocking.
···
 	 * the page also.
 	 *
 	 * private_lock must be held over this entire operation in order
-	 * to synchronise against __set_page_dirty_buffers and prevent the
+	 * to synchronise against block_dirty_folio and prevent the
 	 * dirty bit from being lost.
 	 */
 	if (ret)
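The range check kept by block_invalidate_folio() is worth noting: computing `stop = length + offset` in a `size_t` can wrap, and the second clause of `BUG_ON(stop > folio_size(folio) || stop < length)` is what catches that wraparound. A plain userspace sketch of the check (not kernel code; `invalidate_range_ok` is a hypothetical name):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Model of the bounds check in block_invalidate_folio(): reject a
 * (offset, length) pair that either runs past the end of the folio or
 * overflows size_t arithmetic.  "stop < length" is the overflow test:
 * if "length + offset" wrapped around, stop ends up smaller than
 * length even though both inputs were non-negative.
 */
static bool invalidate_range_ok(size_t folio_size, size_t offset,
				size_t length)
{
	size_t stop = length + offset;	/* may wrap around */

	if (stop > folio_size || stop < length)
		return false;	/* out of bounds, or arithmetic overflowed */
	return true;
}
```

Without the second clause, a caller passing `offset = SIZE_MAX` and a small `length` would wrap `stop` to a tiny value and slip past the upper-bound test.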
+43 -40
fs/ceph/addr.c
···
 /*
 * Dirty a page.  Optimistically adjust accounting, on the assumption
 * that we won't race with invalidate.  If we do, readjust.
 */
-static int ceph_set_page_dirty(struct page *page)
+static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
 {
-	struct address_space *mapping = page->mapping;
 	struct inode *inode;
 	struct ceph_inode_info *ci;
 	struct ceph_snap_context *snapc;
 
-	if (PageDirty(page)) {
-		dout("%p set_page_dirty %p idx %lu -- already dirty\n",
-		     mapping->host, page, page->index);
-		BUG_ON(!PagePrivate(page));
-		return 0;
+	if (folio_test_dirty(folio)) {
+		dout("%p dirty_folio %p idx %lu -- already dirty\n",
+		     mapping->host, folio, folio->index);
+		BUG_ON(!folio_get_private(folio));
+		return false;
 	}
 
 	inode = mapping->host;
···
 	if (ci->i_wrbuffer_ref == 0)
 		ihold(inode);
 	++ci->i_wrbuffer_ref;
-	dout("%p set_page_dirty %p idx %lu head %d/%d -> %d/%d "
+	dout("%p dirty_folio %p idx %lu head %d/%d -> %d/%d "
 	     "snapc %p seq %lld (%d snaps)\n",
-	     mapping->host, page, page->index,
+	     mapping->host, folio, folio->index,
 	     ci->i_wrbuffer_ref-1, ci->i_wrbuffer_ref_head-1,
 	     ci->i_wrbuffer_ref, ci->i_wrbuffer_ref_head,
 	     snapc, snapc->seq, snapc->num_snaps);
 	spin_unlock(&ci->i_ceph_lock);
 
 	/*
-	 * Reference snap context in page->private.  Also set
-	 * PagePrivate so that we get invalidatepage callback.
+	 * Reference snap context in folio->private.  Also set
+	 * PagePrivate so that we get invalidate_folio callback.
 	 */
-	BUG_ON(PagePrivate(page));
-	attach_page_private(page, snapc);
+	BUG_ON(folio_get_private(folio));
+	folio_attach_private(folio, snapc);
 
-	return ceph_fscache_set_page_dirty(page);
+	return ceph_fscache_dirty_folio(mapping, folio);
 }
 
 /*
- * If we are truncating the full page (i.e. offset == 0), adjust the
- * dirty page counters appropriately.  Only called if there is private
- * data on the page.
+ * If we are truncating the full folio (i.e. offset == 0), adjust the
+ * dirty folio counters appropriately.  Only called if there is private
+ * data on the folio.
 */
-static void ceph_invalidatepage(struct page *page, unsigned int offset,
-				unsigned int length)
+static void ceph_invalidate_folio(struct folio *folio, size_t offset,
+				  size_t length)
 {
 	struct inode *inode;
 	struct ceph_inode_info *ci;
 	struct ceph_snap_context *snapc;
 
-	inode = page->mapping->host;
+	inode = folio->mapping->host;
 	ci = ceph_inode(inode);
 
-	if (offset != 0 || length != thp_size(page)) {
-		dout("%p invalidatepage %p idx %lu partial dirty page %u~%u\n",
-		     inode, page, page->index, offset, length);
+	if (offset != 0 || length != folio_size(folio)) {
+		dout("%p invalidate_folio idx %lu partial dirty page %zu~%zu\n",
+		     inode, folio->index, offset, length);
 		return;
 	}
 
-	WARN_ON(!PageLocked(page));
-	if (PagePrivate(page)) {
-		dout("%p invalidatepage %p idx %lu full dirty page\n",
-		     inode, page, page->index);
+	WARN_ON(!folio_test_locked(folio));
+	if (folio_get_private(folio)) {
+		dout("%p invalidate_folio idx %lu full dirty page\n",
+		     inode, folio->index);
 
-		snapc = detach_page_private(page);
+		snapc = folio_detach_private(folio);
 		ceph_put_wrbuffer_cap_refs(ci, 1, snapc);
 		ceph_put_snap_context(snapc);
 	}
 
-	wait_on_page_fscache(page);
+	folio_wait_fscache(folio);
 }
 
 static int ceph_releasepage(struct page *page, gfp_t gfp)
···
 */
 static int writepage_nounlock(struct page *page, struct writeback_control *wbc)
 {
+	struct folio *folio = page_folio(page);
 	struct inode *inode = page->mapping->host;
 	struct ceph_inode_info *ci = ceph_inode(inode);
 	struct ceph_fs_client *fsc = ceph_inode_to_client(inode);
···
 
 	/* is this a partial page at end of file? */
 	if (page_off >= ceph_wbc.i_size) {
-		dout("%p page eof %llu\n", page, ceph_wbc.i_size);
-		page->mapping->a_ops->invalidatepage(page, 0, thp_size(page));
+		dout("folio at %lu beyond eof %llu\n", folio->index,
+		     ceph_wbc.i_size);
+		folio_invalidate(folio, 0, folio_size(folio));
 		return 0;
 	}
···
 			continue;
 		}
 		if (page_offset(page) >= ceph_wbc.i_size) {
-			dout("%p page eof %llu\n",
-			     page, ceph_wbc.i_size);
+			struct folio *folio = page_folio(page);
+
+			dout("folio at %lu beyond eof %llu\n",
+			     folio->index, ceph_wbc.i_size);
 			if ((ceph_wbc.size_stable ||
-			    page_offset(page) >= i_size_read(inode)) &&
-			    clear_page_dirty_for_io(page))
-				mapping->a_ops->invalidatepage(page,
-							0, thp_size(page));
-			unlock_page(page);
+			    folio_pos(folio) >= i_size_read(inode)) &&
+			    folio_clear_dirty_for_io(folio))
+				folio_invalidate(folio, 0,
+						folio_size(folio));
+			folio_unlock(folio);
 			continue;
 		}
 		if (strip_unit_end && (page->index > strip_unit_end)) {
···
 	.writepages = ceph_writepages_start,
 	.write_begin = ceph_write_begin,
 	.write_end = ceph_write_end,
-	.set_page_dirty = ceph_set_page_dirty,
-	.invalidatepage = ceph_invalidatepage,
+	.dirty_folio = ceph_dirty_folio,
+	.invalidate_folio = ceph_invalidate_folio,
 	.releasepage = ceph_releasepage,
 	.direct_IO = noop_direct_IO,
 };
+7 -6
fs/ceph/cache.h
···
 	fscache_unpin_writeback(wbc, ceph_fscache_cookie(ceph_inode(inode)));
 }
 
-static inline int ceph_fscache_set_page_dirty(struct page *page)
+static inline int ceph_fscache_dirty_folio(struct address_space *mapping,
+		struct folio *folio)
 {
-	struct inode *inode = page->mapping->host;
-	struct ceph_inode_info *ci = ceph_inode(inode);
+	struct ceph_inode_info *ci = ceph_inode(mapping->host);
 
-	return fscache_set_page_dirty(page, ceph_fscache_cookie(ci));
+	return fscache_dirty_folio(mapping, folio, ceph_fscache_cookie(ci));
 }
 
 static inline int ceph_begin_cache_operation(struct netfs_read_request *rreq)
···
 {
 }
 
-static inline int ceph_fscache_set_page_dirty(struct page *page)
+static inline int ceph_fscache_dirty_folio(struct address_space *mapping,
+		struct folio *folio)
 {
-	return __set_page_dirty_nobuffers(page);
+	return filemap_dirty_folio(mapping, folio);
 }
 
 static inline bool ceph_is_cache_enabled(struct inode *inode)
+20 -19
fs/cifs/file.c
···
 	return true;
 }
 
-static void cifs_invalidate_page(struct page *page, unsigned int offset,
-				 unsigned int length)
+static void cifs_invalidate_folio(struct folio *folio, size_t offset,
+				  size_t length)
 {
-	wait_on_page_fscache(page);
+	folio_wait_fscache(folio);
 }
 
-static int cifs_launder_page(struct page *page)
+static int cifs_launder_folio(struct folio *folio)
 {
 	int rc = 0;
-	loff_t range_start = page_offset(page);
-	loff_t range_end = range_start + (loff_t)(PAGE_SIZE - 1);
+	loff_t range_start = folio_pos(folio);
+	loff_t range_end = range_start + folio_size(folio);
 	struct writeback_control wbc = {
 		.sync_mode = WB_SYNC_ALL,
 		.nr_to_write = 0,
···
 		.range_end = range_end,
 	};
 
-	cifs_dbg(FYI, "Launder page: %p\n", page);
+	cifs_dbg(FYI, "Launder page: %lu\n", folio->index);
 
-	if (clear_page_dirty_for_io(page))
-		rc = cifs_writepage_locked(page, &wbc);
+	if (folio_clear_dirty_for_io(folio))
+		rc = cifs_writepage_locked(&folio->page, &wbc);
 
-	wait_on_page_fscache(page);
+	folio_wait_fscache(folio);
 	return rc;
 }
···
 * need to pin the cache object to write back to.
 */
 #ifdef CONFIG_CIFS_FSCACHE
-static int cifs_set_page_dirty(struct page *page)
+static bool cifs_dirty_folio(struct address_space *mapping, struct folio *folio)
 {
-	return fscache_set_page_dirty(page, cifs_inode_cookie(page->mapping->host));
+	return fscache_dirty_folio(mapping, folio,
+					cifs_inode_cookie(mapping->host));
 }
 #else
-#define cifs_set_page_dirty __set_page_dirty_nobuffers
+#define cifs_dirty_folio filemap_dirty_folio
 #endif
 
 const struct address_space_operations cifs_addr_ops = {
···
 	.writepages = cifs_writepages,
 	.write_begin = cifs_write_begin,
 	.write_end = cifs_write_end,
-	.set_page_dirty = cifs_set_page_dirty,
+	.dirty_folio = cifs_dirty_folio,
 	.releasepage = cifs_release_page,
 	.direct_IO = cifs_direct_io,
-	.invalidatepage = cifs_invalidate_page,
-	.launder_page = cifs_launder_page,
+	.invalidate_folio = cifs_invalidate_folio,
+	.launder_folio = cifs_launder_folio,
 	/*
 	 * TODO: investigate and if useful we could add an cifs_migratePage
 	 * helper (under an CONFIG_MIGRATION) in the future, and also
···
 	.writepages = cifs_writepages,
 	.write_begin = cifs_write_begin,
 	.write_end = cifs_write_end,
-	.set_page_dirty = cifs_set_page_dirty,
+	.dirty_folio = cifs_dirty_folio,
 	.releasepage = cifs_release_page,
-	.invalidatepage = cifs_invalidate_page,
-	.launder_page = cifs_launder_page,
+	.invalidate_folio = cifs_invalidate_folio,
+	.launder_folio = cifs_launder_folio,
 };
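cifs_launder_folio() above builds its writeback range from the folio's position and size rather than from a fixed PAGE_SIZE. A plain userspace sketch of that byte-range arithmetic (not kernel code; `TOY_PAGE_SHIFT` of 12, i.e. 4 KiB pages, and the `toy_*` names are assumptions for the example):

```c
#include <assert.h>

/* Assumed page size for the example: 4 KiB pages, like most x86 configs. */
#define TOY_PAGE_SHIFT 12

/*
 * folio_pos() stand-in: a folio at page-cache index i starts at byte
 * offset i << PAGE_SHIFT in the file.
 */
static long long toy_folio_pos(unsigned long index)
{
	return (long long)index << TOY_PAGE_SHIFT;
}

/*
 * End of the writeback range for a folio of the given order (a folio of
 * order n covers 2^n pages), mirroring range_start + folio_size().
 */
static long long toy_range_end(unsigned long index, unsigned int order)
{
	return toy_folio_pos(index) +
	       ((long long)1 << (TOY_PAGE_SHIFT + order));
}
```

So a single-page folio at index 3 covers bytes 12288..16384, while an order-2 (4-page) folio at index 4 covers 16384..32768; the same arithmetic works for any folio size without hard-coding PAGE_SIZE.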
+3 -2
fs/ecryptfs/mmap.c
···
 	 * XXX: This is pretty broken for multiple reasons: ecryptfs does not
 	 * actually use buffer_heads, and ecryptfs will crash without
 	 * CONFIG_BLOCK.  But it matches the behavior before the default for
-	 * address_space_operations without the ->set_page_dirty method was
+	 * address_space_operations without the ->dirty_folio method was
 	 * cleaned up, so this is the best we can do without maintainer
 	 * feedback.
 	 */
 #ifdef CONFIG_BLOCK
-	.set_page_dirty = __set_page_dirty_buffers,
+	.dirty_folio = block_dirty_folio,
+	.invalidate_folio = block_invalidate_folio,
 #endif
 	.writepage = ecryptfs_writepage,
 	.readpage = ecryptfs_readpage,
+8 -9
fs/erofs/super.c
···
 * decompression requests in progress, wait with rescheduling for a bit here.
 * We could introduce an extra locking instead but it seems unnecessary.
 */
-static void erofs_managed_cache_invalidatepage(struct page *page,
-					       unsigned int offset,
-					       unsigned int length)
+static void erofs_managed_cache_invalidate_folio(struct folio *folio,
+					       size_t offset, size_t length)
 {
-	const unsigned int stop = length + offset;
+	const size_t stop = length + offset;
 
-	DBG_BUGON(!PageLocked(page));
+	DBG_BUGON(!folio_test_locked(folio));
 
 	/* Check for potential overflow in debug mode */
-	DBG_BUGON(stop > PAGE_SIZE || stop < length);
+	DBG_BUGON(stop > folio_size(folio) || stop < length);
 
-	if (offset == 0 && stop == PAGE_SIZE)
-		while (!erofs_managed_cache_releasepage(page, GFP_NOFS))
+	if (offset == 0 && stop == folio_size(folio))
+		while (!erofs_managed_cache_releasepage(&folio->page, GFP_NOFS))
 			cond_resched();
 }
 
 static const struct address_space_operations managed_cache_aops = {
 	.releasepage = erofs_managed_cache_releasepage,
-	.invalidatepage = erofs_managed_cache_invalidatepage,
+	.invalidate_folio = erofs_managed_cache_invalidate_folio,
 };
 
 static int erofs_init_managed_cache(struct super_block *sb)
+2 -1
fs/exfat/inode.c
···
 }
 
 static const struct address_space_operations exfat_aops = {
-	.set_page_dirty = __set_page_dirty_buffers,
+	.dirty_folio = block_dirty_folio,
+	.invalidate_folio = block_invalidate_folio,
 	.readpage = exfat_readpage,
 	.readahead = exfat_readahead,
 	.writepage = exfat_writepage,
+5 -4
fs/ext2/inode.c
···
 }
 
 const struct address_space_operations ext2_aops = {
-	.set_page_dirty = __set_page_dirty_buffers,
+	.dirty_folio = block_dirty_folio,
+	.invalidate_folio = block_invalidate_folio,
 	.readpage = ext2_readpage,
 	.readahead = ext2_readahead,
 	.writepage = ext2_writepage,
···
 };
 
 const struct address_space_operations ext2_nobh_aops = {
-	.set_page_dirty = __set_page_dirty_buffers,
+	.dirty_folio = block_dirty_folio,
+	.invalidate_folio = block_invalidate_folio,
 	.readpage = ext2_readpage,
 	.readahead = ext2_readahead,
 	.writepage = ext2_nobh_writepage,
···
 static const struct address_space_operations ext2_dax_aops = {
 	.writepages = ext2_dax_writepages,
 	.direct_IO = noop_direct_IO,
-	.set_page_dirty = __set_page_dirty_no_writeback,
-	.invalidatepage = noop_invalidatepage,
+	.dirty_folio = noop_dirty_folio,
 };
 
 /*
+64 -65
fs/ext4/inode.c
···
 					new_size);
 }
 
-static void ext4_invalidatepage(struct page *page, unsigned int offset,
-		unsigned int length);
 static int __ext4_journalled_writepage(struct page *page, unsigned int len);
 static int ext4_meta_trans_blocks(struct inode *inode, int lblocks,
 				  int pextents);
···
 	 * journal. So although mm thinks everything is clean and
 	 * ready for reaping the inode might still have some pages to
 	 * write in the running transaction or waiting to be
-	 * checkpointed. Thus calling jbd2_journal_invalidatepage()
+	 * checkpointed. Thus calling jbd2_journal_invalidate_folio()
 	 * (via truncate_inode_pages()) to discard these buffers can
 	 * cause data loss. Also even if we did not discard these
 	 * buffers, we would have no way to find them after the inode
···
 			break;
 		for (i = 0; i < nr_pages; i++) {
 			struct page *page = pvec.pages[i];
+			struct folio *folio = page_folio(page);
 
-			BUG_ON(!PageLocked(page));
-			BUG_ON(PageWriteback(page));
+			BUG_ON(!folio_test_locked(folio));
+			BUG_ON(folio_test_writeback(folio));
 			if (invalidate) {
-				if (page_mapped(page))
-					clear_page_dirty_for_io(page);
-				block_invalidatepage(page, 0, PAGE_SIZE);
-				ClearPageUptodate(page);
+				if (folio_mapped(folio))
+					folio_clear_dirty_for_io(folio);
+				block_invalidate_folio(folio, 0,
+						folio_size(folio));
+				folio_clear_uptodate(folio);
 			}
-			unlock_page(page);
+			folio_unlock(folio);
 		}
 		pagevec_release(&pvec);
 	}
···
 static int ext4_writepage(struct page *page,
 			  struct writeback_control *wbc)
 {
+	struct folio *folio = page_folio(page);
 	int ret = 0;
 	loff_t size;
 	unsigned int len;
···
 	bool keep_towrite = false;
 
 	if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb)))) {
-		inode->i_mapping->a_ops->invalidatepage(page, 0, PAGE_SIZE);
-		unlock_page(page);
+		folio_invalidate(folio, 0, folio_size(folio));
+		folio_unlock(folio);
 		return -EIO;
 	}
···
 	ext4_mpage_readpages(inode, rac, NULL);
 }
 
-static void ext4_invalidatepage(struct page *page, unsigned int offset,
-				unsigned int length)
+static void ext4_invalidate_folio(struct folio *folio, size_t offset,
+				  size_t length)
 {
-	trace_ext4_invalidatepage(page, offset, length);
+	trace_ext4_invalidate_folio(folio, offset, length);
 
 	/* No journalling happens on data buffers when this function is used */
-	WARN_ON(page_has_buffers(page) && buffer_jbd(page_buffers(page)));
+	WARN_ON(folio_buffers(folio) && buffer_jbd(folio_buffers(folio)));
 
-	block_invalidatepage(page, offset, length);
+	block_invalidate_folio(folio, offset, length);
 }
 
-static int __ext4_journalled_invalidatepage(struct page *page,
-					    unsigned int offset,
-					    unsigned int length)
+static int __ext4_journalled_invalidate_folio(struct folio *folio,
+					size_t offset, size_t length)
 {
-	journal_t *journal = EXT4_JOURNAL(page->mapping->host);
+	journal_t *journal = EXT4_JOURNAL(folio->mapping->host);
 
-	trace_ext4_journalled_invalidatepage(page, offset, length);
+	trace_ext4_journalled_invalidate_folio(folio, offset, length);
 
 	/*
 	 * If it's a full truncate we just forget about the pending dirtying
 	 */
-	if (offset == 0 && length == PAGE_SIZE)
-		ClearPageChecked(page);
+	if (offset == 0 && length == folio_size(folio))
+		folio_clear_checked(folio);
 
-	return jbd2_journal_invalidatepage(journal, page, offset, length);
+	return jbd2_journal_invalidate_folio(journal, folio, offset, length);
 }
 
 /* Wrapper for aops... */
-static void ext4_journalled_invalidatepage(struct page *page,
-					   unsigned int offset,
-					   unsigned int length)
+static void ext4_journalled_invalidate_folio(struct folio *folio,
+					     size_t offset,
+					     size_t length)
 {
-	WARN_ON(__ext4_journalled_invalidatepage(page, offset, length) < 0);
+	WARN_ON(__ext4_journalled_invalidate_folio(folio, offset, length) < 0);
 }
 
 static int ext4_releasepage(struct page *page, gfp_t wait)
···
 };
 
 /*
- * Whenever the page is being dirtied, corresponding buffers should already be
- * attached to the transaction (we take care of this in ext4_page_mkwrite() and
- * ext4_write_begin()). However we cannot move buffers to dirty transaction
- * lists here because ->set_page_dirty is called under VFS locks and the page
+ * Whenever the folio is being dirtied, corresponding buffers should already
+ * be attached to the transaction (we take care of this in ext4_page_mkwrite()
+ * and ext4_write_begin()). However we cannot move buffers to dirty transaction
+ * lists here because ->dirty_folio is called under VFS locks and the folio
 * is not necessarily locked.
 *
- * We cannot just dirty the page and leave attached buffers clean, because the
+ * We cannot just dirty the folio and leave attached buffers clean, because the
 * buffers' dirty state is "definitive". We cannot just set the buffers dirty
 * or jbddirty because all the journalling code will explode.
 *
- * So what we do is to mark the page "pending dirty" and next time writepage
+ * So what we do is to mark the folio "pending dirty" and next time writepage
 * is called, propagate that into the buffers appropriately.
 */
-static int ext4_journalled_set_page_dirty(struct page *page)
+static bool ext4_journalled_dirty_folio(struct address_space *mapping,
+					struct folio *folio)
 {
-	WARN_ON_ONCE(!page_has_buffers(page));
-	SetPageChecked(page);
-	return __set_page_dirty_nobuffers(page);
+	WARN_ON_ONCE(!page_has_buffers(&folio->page));
+	folio_set_checked(folio);
+	return filemap_dirty_folio(mapping, folio);
 }
 
-static int ext4_set_page_dirty(struct page *page)
+static bool ext4_dirty_folio(struct address_space *mapping, struct folio *folio)
 {
-	WARN_ON_ONCE(!PageLocked(page) && !PageDirty(page));
-	WARN_ON_ONCE(!page_has_buffers(page));
-	return __set_page_dirty_buffers(page);
+	WARN_ON_ONCE(!folio_test_locked(folio) && !folio_test_dirty(folio));
+	WARN_ON_ONCE(!folio_buffers(folio));
+	return block_dirty_folio(mapping, folio);
 }
 
 static int ext4_iomap_swap_activate(struct swap_info_struct *sis,
···
 	.writepages = ext4_writepages,
 	.write_begin = ext4_write_begin,
 	.write_end = ext4_write_end,
-	.set_page_dirty = ext4_set_page_dirty,
+	.dirty_folio = ext4_dirty_folio,
 	.bmap = ext4_bmap,
-	.invalidatepage = ext4_invalidatepage,
+	.invalidate_folio = ext4_invalidate_folio,
 	.releasepage = ext4_releasepage,
 	.direct_IO = noop_direct_IO,
 	.migratepage = buffer_migrate_page,
···
 	.writepages = ext4_writepages,
 	.write_begin = ext4_write_begin,
 	.write_end = ext4_journalled_write_end,
-	.set_page_dirty = ext4_journalled_set_page_dirty,
+	.dirty_folio = ext4_journalled_dirty_folio,
 	.bmap = ext4_bmap,
-	.invalidatepage = ext4_journalled_invalidatepage,
+	.invalidate_folio = ext4_journalled_invalidate_folio,
 	.releasepage = ext4_releasepage,
 	.direct_IO = noop_direct_IO,
3641 3640 .is_partially_uptodate = block_is_partially_uptodate, ··· 3650 3649 .writepages = ext4_writepages, 3651 3650 .write_begin = ext4_da_write_begin, 3652 3651 .write_end = ext4_da_write_end, 3653 - .set_page_dirty = ext4_set_page_dirty, 3652 + .dirty_folio = ext4_dirty_folio, 3654 3653 .bmap = ext4_bmap, 3655 - .invalidatepage = ext4_invalidatepage, 3654 + .invalidate_folio = ext4_invalidate_folio, 3656 3655 .releasepage = ext4_releasepage, 3657 3656 .direct_IO = noop_direct_IO, 3658 3657 .migratepage = buffer_migrate_page, ··· 3664 3663 static const struct address_space_operations ext4_dax_aops = { 3665 3664 .writepages = ext4_dax_writepages, 3666 3665 .direct_IO = noop_direct_IO, 3667 - .set_page_dirty = __set_page_dirty_no_writeback, 3666 + .dirty_folio = noop_dirty_folio, 3668 3667 .bmap = ext4_bmap, 3669 - .invalidatepage = noop_invalidatepage, 3670 3668 .swap_activate = ext4_iomap_swap_activate, 3671 3669 }; 3672 3670 ··· 5238 5238 } 5239 5239 5240 5240 /* 5241 - * In data=journal mode ext4_journalled_invalidatepage() may fail to invalidate 5242 - * buffers that are attached to a page stradding i_size and are undergoing 5241 + * In data=journal mode ext4_journalled_invalidate_folio() may fail to invalidate 5242 + * buffers that are attached to a folio straddling i_size and are undergoing 5243 5243 * commit. In that case we have to wait for commit to finish and try again. 5244 5244 */ 5245 5245 static void ext4_wait_for_tail_page_commit(struct inode *inode) 5246 5246 { 5247 - struct page *page; 5248 5247 unsigned offset; 5249 5248 journal_t *journal = EXT4_SB(inode->i_sb)->s_journal; 5250 5249 tid_t commit_tid = 0; ··· 5251 5252 5252 5253 offset = inode->i_size & (PAGE_SIZE - 1); 5253 5254 /* 5254 - * If the page is fully truncated, we don't need to wait for any commit 5255 - * (and we even should not as __ext4_journalled_invalidatepage() may 5256 - * strip all buffers from the page but keep the page dirty which can then 5257 - * confuse e.g. 
concurrent ext4_writepage() seeing dirty page without 5255 + * If the folio is fully truncated, we don't need to wait for any commit 5256 + * (and we even should not as __ext4_journalled_invalidate_folio() may 5257 + * strip all buffers from the folio but keep the folio dirty which can then 5258 + * confuse e.g. concurrent ext4_writepage() seeing dirty folio without 5258 5259 * buffers). Also we don't need to wait for any commit if all buffers in 5259 - * the page remain valid. This is most beneficial for the common case of 5260 + * the folio remain valid. This is most beneficial for the common case of 5260 5261 * blocksize == PAGESIZE. 5261 5262 */ 5262 5263 if (!offset || offset > (PAGE_SIZE - i_blocksize(inode))) 5263 5264 return; 5264 5265 while (1) { 5265 - page = find_lock_page(inode->i_mapping, 5266 + struct folio *folio = filemap_lock_folio(inode->i_mapping, 5266 5267 inode->i_size >> PAGE_SHIFT); 5267 - if (!page) 5268 + if (!folio) 5268 5269 return; 5269 - ret = __ext4_journalled_invalidatepage(page, offset, 5270 - PAGE_SIZE - offset); 5271 - unlock_page(page); 5272 - put_page(page); 5270 + ret = __ext4_journalled_invalidate_folio(folio, offset, 5271 + folio_size(folio) - offset); 5272 + folio_unlock(folio); 5273 + folio_put(folio); 5273 5274 if (ret != -EBUSY) 5274 5275 return; 5275 5276 commit_tid = 0;
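The ext4 hunks above convert ->set_page_dirty (returning int) to ->dirty_folio (returning bool), and the journalled variant keeps its "pending dirty" trick: set the checked flag, then dirty the folio, and let a later writepage propagate that into the buffers. A minimal userspace sketch of that pattern follows; all types and names here are mock stand-ins for the kernel's folio API, not real kernel code.

```c
#include <assert.h>
#include <stdbool.h>

/* Mock folio flags; stand-ins for the kernel's PG_dirty and PG_checked. */
enum { MOCK_DIRTY = 1 << 0, MOCK_CHECKED = 1 << 1 };

struct mock_folio { unsigned int flags; };

/* Stand-in for filemap_dirty_folio(): returns true only on a
 * clean->dirty transition, matching the bool return convention of
 * the new ->dirty_folio() method. */
static bool mock_filemap_dirty_folio(struct mock_folio *folio)
{
	if (folio->flags & MOCK_DIRTY)
		return false;
	folio->flags |= MOCK_DIRTY;
	return true;
}

/* Mirrors the shape of ext4_journalled_dirty_folio(): record "pending
 * dirty" via the checked flag, then dirty the folio itself. */
static bool mock_journalled_dirty_folio(struct mock_folio *folio)
{
	folio->flags |= MOCK_CHECKED;
	return mock_filemap_dirty_folio(folio);
}
```

The checked flag survives even when the folio was already dirty, which is the point: writepage later consults it to move buffers onto the right transaction list.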
+15 -14
fs/f2fs/checkpoint.c
··· 447 447 return nwritten; 448 448 } 449 449 450 - static int f2fs_set_meta_page_dirty(struct page *page) 450 + static bool f2fs_dirty_meta_folio(struct address_space *mapping, 451 + struct folio *folio) 451 452 { 452 - trace_f2fs_set_page_dirty(page, META); 453 + trace_f2fs_set_page_dirty(&folio->page, META); 453 454 454 - if (!PageUptodate(page)) 455 - SetPageUptodate(page); 456 - if (!PageDirty(page)) { 457 - __set_page_dirty_nobuffers(page); 458 - inc_page_count(F2FS_P_SB(page), F2FS_DIRTY_META); 459 - set_page_private_reference(page); 460 - return 1; 455 + if (!folio_test_uptodate(folio)) 456 + folio_mark_uptodate(folio); 457 + if (!folio_test_dirty(folio)) { 458 + filemap_dirty_folio(mapping, folio); 459 + inc_page_count(F2FS_P_SB(&folio->page), F2FS_DIRTY_META); 460 + set_page_private_reference(&folio->page); 461 + return true; 461 462 } 462 - return 0; 463 + return false; 463 464 } 464 465 465 466 const struct address_space_operations f2fs_meta_aops = { 466 467 .writepage = f2fs_write_meta_page, 467 468 .writepages = f2fs_write_meta_pages, 468 - .set_page_dirty = f2fs_set_meta_page_dirty, 469 - .invalidatepage = f2fs_invalidate_page, 469 + .dirty_folio = f2fs_dirty_meta_folio, 470 + .invalidate_folio = f2fs_invalidate_folio, 470 471 .releasepage = f2fs_release_page, 471 472 #ifdef CONFIG_MIGRATION 472 473 .migratepage = f2fs_migrate_page, ··· 1028 1027 stat_dec_dirty_inode(F2FS_I_SB(inode), type); 1029 1028 } 1030 1029 1031 - void f2fs_update_dirty_page(struct inode *inode, struct page *page) 1030 + void f2fs_update_dirty_folio(struct inode *inode, struct folio *folio) 1032 1031 { 1033 1032 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 1034 1033 enum inode_type type = S_ISDIR(inode->i_mode) ? 
DIR_INODE : FILE_INODE; ··· 1043 1042 inode_inc_dirty_pages(inode); 1044 1043 spin_unlock(&sbi->inode_lock[type]); 1045 1044 1046 - set_page_private_reference(page); 1045 + set_page_private_reference(&folio->page); 1047 1046 } 1048 1047 1049 1048 void f2fs_remove_dirty_inode(struct inode *inode)
+1 -1
fs/f2fs/compress.c
··· 1747 1747 1748 1748 const struct address_space_operations f2fs_compress_aops = { 1749 1749 .releasepage = f2fs_release_page, 1750 - .invalidatepage = f2fs_invalidate_page, 1750 + .invalidate_folio = f2fs_invalidate_folio, 1751 1751 }; 1752 1752 1753 1753 struct address_space *COMPRESS_MAPPING(struct f2fs_sb_info *sbi)
+27 -29
fs/f2fs/data.c
··· 3489 3489 return copied; 3490 3490 } 3491 3491 3492 - void f2fs_invalidate_page(struct page *page, unsigned int offset, 3493 - unsigned int length) 3492 + void f2fs_invalidate_folio(struct folio *folio, size_t offset, size_t length) 3494 3493 { 3495 - struct inode *inode = page->mapping->host; 3494 + struct inode *inode = folio->mapping->host; 3496 3495 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 3497 3496 3498 3497 if (inode->i_ino >= F2FS_ROOT_INO(sbi) && 3499 - (offset % PAGE_SIZE || length != PAGE_SIZE)) 3498 + (offset || length != folio_size(folio))) 3500 3499 return; 3501 3500 3502 - if (PageDirty(page)) { 3501 + if (folio_test_dirty(folio)) { 3503 3502 if (inode->i_ino == F2FS_META_INO(sbi)) { 3504 3503 dec_page_count(sbi, F2FS_DIRTY_META); 3505 3504 } else if (inode->i_ino == F2FS_NODE_INO(sbi)) { ··· 3509 3510 } 3510 3511 } 3511 3512 3512 - clear_page_private_gcing(page); 3513 + clear_page_private_gcing(&folio->page); 3513 3514 3514 3515 if (test_opt(sbi, COMPRESS_CACHE) && 3515 3516 inode->i_ino == F2FS_COMPRESS_INO(sbi)) 3516 - clear_page_private_data(page); 3517 + clear_page_private_data(&folio->page); 3517 3518 3518 - if (page_private_atomic(page)) 3519 - return f2fs_drop_inmem_page(inode, page); 3519 + if (page_private_atomic(&folio->page)) 3520 + return f2fs_drop_inmem_page(inode, &folio->page); 3520 3521 3521 - detach_page_private(page); 3522 - set_page_private(page, 0); 3522 + folio_detach_private(folio); 3523 3523 } 3524 3524 3525 3525 int f2fs_release_page(struct page *page, gfp_t wait) ··· 3545 3547 return 1; 3546 3548 } 3547 3549 3548 - static int f2fs_set_data_page_dirty(struct page *page) 3550 + static bool f2fs_dirty_data_folio(struct address_space *mapping, 3551 + struct folio *folio) 3549 3552 { 3550 - struct inode *inode = page_file_mapping(page)->host; 3553 + struct inode *inode = mapping->host; 3551 3554 3552 - trace_f2fs_set_page_dirty(page, DATA); 3555 + trace_f2fs_set_page_dirty(&folio->page, DATA); 3553 3556 3554 - if 
(!PageUptodate(page)) 3555 - SetPageUptodate(page); 3556 - if (PageSwapCache(page)) 3557 - return __set_page_dirty_nobuffers(page); 3557 + if (!folio_test_uptodate(folio)) 3558 + folio_mark_uptodate(folio); 3559 + BUG_ON(folio_test_swapcache(folio)); 3558 3560 3559 3561 if (f2fs_is_atomic_file(inode) && !f2fs_is_commit_atomic_write(inode)) { 3560 - if (!page_private_atomic(page)) { 3561 - f2fs_register_inmem_page(inode, page); 3562 - return 1; 3562 + if (!page_private_atomic(&folio->page)) { 3563 + f2fs_register_inmem_page(inode, &folio->page); 3564 + return true; 3563 3565 } 3564 3566 /* 3565 3567 * Previously, this page has been registered, we just 3566 3568 * return here. 3567 3569 */ 3568 - return 0; 3570 + return false; 3569 3571 } 3570 3572 3571 - if (!PageDirty(page)) { 3572 - __set_page_dirty_nobuffers(page); 3573 - f2fs_update_dirty_page(inode, page); 3574 - return 1; 3573 + if (!folio_test_dirty(folio)) { 3574 + filemap_dirty_folio(mapping, folio); 3575 + f2fs_update_dirty_folio(inode, folio); 3576 + return true; 3575 3577 } 3576 - return 0; 3578 + return false; 3577 3579 } 3578 3580 3579 3581 ··· 3935 3937 .writepages = f2fs_write_data_pages, 3936 3938 .write_begin = f2fs_write_begin, 3937 3939 .write_end = f2fs_write_end, 3938 - .set_page_dirty = f2fs_set_data_page_dirty, 3939 - .invalidatepage = f2fs_invalidate_page, 3940 + .dirty_folio = f2fs_dirty_data_folio, 3941 + .invalidate_folio = f2fs_invalidate_folio, 3940 3942 .releasepage = f2fs_release_page, 3941 3943 .direct_IO = noop_direct_IO, 3942 3944 .bmap = f2fs_bmap,
+2 -3
fs/f2fs/f2fs.h
··· 3705 3705 void f2fs_remove_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino); 3706 3706 int f2fs_recover_orphan_inodes(struct f2fs_sb_info *sbi); 3707 3707 int f2fs_get_valid_checkpoint(struct f2fs_sb_info *sbi); 3708 - void f2fs_update_dirty_page(struct inode *inode, struct page *page); 3708 + void f2fs_update_dirty_folio(struct inode *inode, struct folio *folio); 3709 3709 void f2fs_remove_dirty_inode(struct inode *inode); 3710 3710 int f2fs_sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type); 3711 3711 void f2fs_wait_on_all_pages(struct f2fs_sb_info *sbi, int type); ··· 3769 3769 enum iostat_type io_type, 3770 3770 int compr_blocks, bool allow_balance); 3771 3771 void f2fs_write_failed(struct inode *inode, loff_t to); 3772 - void f2fs_invalidate_page(struct page *page, unsigned int offset, 3773 - unsigned int length); 3772 + void f2fs_invalidate_folio(struct folio *folio, size_t offset, size_t length); 3774 3773 int f2fs_release_page(struct page *page, gfp_t wait); 3775 3774 #ifdef CONFIG_MIGRATION 3776 3775 int f2fs_migrate_page(struct address_space *mapping, struct page *newpage,
+15 -14
fs/f2fs/node.c
··· 2137 2137 return 0; 2138 2138 } 2139 2139 2140 - static int f2fs_set_node_page_dirty(struct page *page) 2140 + static bool f2fs_dirty_node_folio(struct address_space *mapping, 2141 + struct folio *folio) 2141 2142 { 2142 - trace_f2fs_set_page_dirty(page, NODE); 2143 + trace_f2fs_set_page_dirty(&folio->page, NODE); 2143 2144 2144 - if (!PageUptodate(page)) 2145 - SetPageUptodate(page); 2145 + if (!folio_test_uptodate(folio)) 2146 + folio_mark_uptodate(folio); 2146 2147 #ifdef CONFIG_F2FS_CHECK_FS 2147 - if (IS_INODE(page)) 2148 - f2fs_inode_chksum_set(F2FS_P_SB(page), page); 2148 + if (IS_INODE(&folio->page)) 2149 + f2fs_inode_chksum_set(F2FS_P_SB(&folio->page), &folio->page); 2149 2150 #endif 2150 - if (!PageDirty(page)) { 2151 - __set_page_dirty_nobuffers(page); 2152 - inc_page_count(F2FS_P_SB(page), F2FS_DIRTY_NODES); 2153 - set_page_private_reference(page); 2154 - return 1; 2151 + if (!folio_test_dirty(folio)) { 2152 + filemap_dirty_folio(mapping, folio); 2153 + inc_page_count(F2FS_P_SB(&folio->page), F2FS_DIRTY_NODES); 2154 + set_page_private_reference(&folio->page); 2155 + return true; 2155 2156 } 2156 - return 0; 2157 + return false; 2157 2158 } 2158 2159 2159 2160 /* ··· 2163 2162 const struct address_space_operations f2fs_node_aops = { 2164 2163 .writepage = f2fs_write_node_page, 2165 2164 .writepages = f2fs_write_node_pages, 2166 - .set_page_dirty = f2fs_set_node_page_dirty, 2167 - .invalidatepage = f2fs_invalidate_page, 2165 + .dirty_folio = f2fs_dirty_node_folio, 2166 + .invalidate_folio = f2fs_invalidate_folio, 2168 2167 .releasepage = f2fs_release_page, 2169 2168 #ifdef CONFIG_MIGRATION 2170 2169 .migratepage = f2fs_migrate_page,
+2 -1
fs/fat/inode.c
··· 342 342 } 343 343 344 344 static const struct address_space_operations fat_aops = { 345 - .set_page_dirty = __set_page_dirty_buffers, 345 + .dirty_folio = block_dirty_folio, 346 + .invalidate_folio = block_invalidate_folio, 346 347 .readpage = fat_readpage, 347 348 .readahead = fat_readahead, 348 349 .writepage = fat_writepage,
+15 -13
fs/fscache/io.c
··· 159 159 EXPORT_SYMBOL(__fscache_begin_write_operation); 160 160 161 161 /** 162 - * fscache_set_page_dirty - Mark page dirty and pin a cache object for writeback 163 - * @page: The page being dirtied 162 + * fscache_dirty_folio - Mark folio dirty and pin a cache object for writeback 163 + * @mapping: The mapping the folio belongs to. 164 + * @folio: The folio being dirtied. 164 165 * @cookie: The cookie referring to the cache object 165 166 * 166 - * Set the dirty flag on a page and pin an in-use cache object in memory when 167 - * dirtying a page so that writeback can later write to it. This is intended 168 - * to be called from the filesystem's ->set_page_dirty() method. 167 + * Set the dirty flag on a folio and pin an in-use cache object in memory 168 + * so that writeback can later write to it. This is intended 169 + * to be called from the filesystem's ->dirty_folio() method. 169 170 * 170 - * Returns 1 if PG_dirty was set on the page, 0 otherwise. 171 + * Return: true if the dirty flag was set on the folio, false otherwise. 
171 172 */ 172 - int fscache_set_page_dirty(struct page *page, struct fscache_cookie *cookie) 173 + bool fscache_dirty_folio(struct address_space *mapping, struct folio *folio, 174 + struct fscache_cookie *cookie) 173 175 { 174 - struct inode *inode = page->mapping->host; 176 + struct inode *inode = mapping->host; 175 177 bool need_use = false; 176 178 177 179 _enter(""); 178 180 179 - if (!__set_page_dirty_nobuffers(page)) 180 - return 0; 181 + if (!filemap_dirty_folio(mapping, folio)) 182 + return false; 181 183 if (!fscache_cookie_valid(cookie)) 182 - return 1; 184 + return true; 183 185 184 186 if (!(inode->i_state & I_PINNING_FSCACHE_WB)) { 185 187 spin_lock(&inode->i_lock); ··· 194 192 if (need_use) 195 193 fscache_use_cookie(cookie, true); 196 194 } 197 - return 1; 195 + return true; 198 196 } 199 - EXPORT_SYMBOL(fscache_set_page_dirty); 197 + EXPORT_SYMBOL(fscache_dirty_folio); 200 198 201 199 struct fscache_write_request { 202 200 struct netfs_cache_resources cache_resources;
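Per its kernel-doc above, fscache_dirty_folio() pins the cache object only on the transition into the "pinning writeback" state, not on every dirtying. A hedged userspace sketch of that pin-once pattern, with hypothetical mock types in place of the inode, cookie, and folio state:

```c
#include <assert.h>
#include <stdbool.h>

/* Mock stand-ins; none of these are real fscache structures. */
struct mock_cookie { int use_count; };
struct mock_inode { bool pinning_wb; struct mock_cookie *cookie; };

/* Shape of fscache_dirty_folio(): if this call newly dirtied the folio
 * and the inode is not yet pinning the cache for writeback, raise the
 * cookie's use count exactly once. Repeat dirtying pins nothing more. */
static bool mock_fscache_dirty_folio(struct mock_inode *inode,
				     bool newly_dirtied)
{
	if (!newly_dirtied)
		return false;		/* filemap_dirty_folio() said no-op */
	if (!inode->pinning_wb) {
		inode->pinning_wb = true;
		inode->cookie->use_count++;	/* fscache_use_cookie() */
	}
	return true;
}
```

Writeback completion would later drop the pin and clear the state, which is outside this sketch.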
+1 -2
fs/fuse/dax.c
··· 1326 1326 static const struct address_space_operations fuse_dax_file_aops = { 1327 1327 .writepages = fuse_dax_writepages, 1328 1328 .direct_IO = noop_direct_IO, 1329 - .set_page_dirty = __set_page_dirty_no_writeback, 1330 - .invalidatepage = noop_invalidatepage, 1329 + .dirty_folio = noop_dirty_folio, 1331 1330 }; 1332 1331 1333 1332 static bool fuse_should_enable_dax(struct inode *inode, unsigned int flags)
+1 -1
fs/fuse/dir.c
··· 1773 1773 1774 1774 /* 1775 1775 * Only call invalidate_inode_pages2() after removing 1776 - * FUSE_NOWRITE, otherwise fuse_launder_page() would deadlock. 1776 + * FUSE_NOWRITE, otherwise fuse_launder_folio() would deadlock. 1777 1777 */ 1778 1778 if ((is_truncate || !is_wb) && 1779 1779 S_ISREG(inode->i_mode) && oldsize != outarg.attr.size) {
+8 -8
fs/fuse/file.c
··· 2348 2348 return copied; 2349 2349 } 2350 2350 2351 - static int fuse_launder_page(struct page *page) 2351 + static int fuse_launder_folio(struct folio *folio) 2352 2352 { 2353 2353 int err = 0; 2354 - if (clear_page_dirty_for_io(page)) { 2355 - struct inode *inode = page->mapping->host; 2354 + if (folio_clear_dirty_for_io(folio)) { 2355 + struct inode *inode = folio->mapping->host; 2356 2356 2357 2357 /* Serialize with pending writeback for the same page */ 2358 - fuse_wait_on_page_writeback(inode, page->index); 2359 - err = fuse_writepage_locked(page); 2358 + fuse_wait_on_page_writeback(inode, folio->index); 2359 + err = fuse_writepage_locked(&folio->page); 2360 2360 if (!err) 2361 - fuse_wait_on_page_writeback(inode, page->index); 2361 + fuse_wait_on_page_writeback(inode, folio->index); 2362 2362 } 2363 2363 return err; 2364 2364 } ··· 3179 3179 .readahead = fuse_readahead, 3180 3180 .writepage = fuse_writepage, 3181 3181 .writepages = fuse_writepages, 3182 - .launder_page = fuse_launder_page, 3183 - .set_page_dirty = __set_page_dirty_nobuffers, 3182 + .launder_folio = fuse_launder_folio, 3183 + .dirty_folio = filemap_dirty_folio, 3184 3184 .bmap = fuse_bmap, 3185 3185 .direct_IO = fuse_direct_IO, 3186 3186 .write_begin = fuse_write_begin,
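fuse_launder_folio() above keeps the classic launder shape: only if clearing the dirty bit succeeds is the folio written back and waited on. A small userspace sketch of that control flow, using mock names rather than the fuse API:

```c
#include <assert.h>
#include <stdbool.h>

struct mock_lfolio { bool dirty; int writes; };

/* Stand-in for folio_clear_dirty_for_io(): clears the bit and reports
 * whether the folio had been dirty. */
static bool mock_clear_dirty_for_io(struct mock_lfolio *folio)
{
	bool was_dirty = folio->dirty;

	folio->dirty = false;
	return was_dirty;
}

/* Shape of ->launder_folio(): a clean folio is left alone and 0 is
 * returned; a dirty one is written back (and, in fuse, waited on before
 * and after), propagating any write error to the caller. */
static int mock_launder_folio(struct mock_lfolio *folio)
{
	int err = 0;

	if (mock_clear_dirty_for_io(folio))
		folio->writes++;	/* fuse_writepage_locked() + wait */
	return err;
}
```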
+19 -24
fs/gfs2/aops.c
··· 606 606 gfs2_trans_end(sdp); 607 607 } 608 608 609 - /** 610 - * jdata_set_page_dirty - Page dirtying function 611 - * @page: The page to dirty 612 - * 613 - * Returns: 1 if it dirtyed the page, or 0 otherwise 614 - */ 615 - 616 - static int jdata_set_page_dirty(struct page *page) 609 + static bool jdata_dirty_folio(struct address_space *mapping, 610 + struct folio *folio) 617 611 { 618 612 if (current->journal_info) 619 - SetPageChecked(page); 620 - return __set_page_dirty_buffers(page); 613 + folio_set_checked(folio); 614 + return block_dirty_folio(mapping, folio); 621 615 } 622 616 623 617 /** ··· 666 672 unlock_buffer(bh); 667 673 } 668 674 669 - static void gfs2_invalidatepage(struct page *page, unsigned int offset, 670 - unsigned int length) 675 + static void gfs2_invalidate_folio(struct folio *folio, size_t offset, 676 + size_t length) 671 677 { 672 - struct gfs2_sbd *sdp = GFS2_SB(page->mapping->host); 673 - unsigned int stop = offset + length; 674 - int partial_page = (offset || length < PAGE_SIZE); 678 + struct gfs2_sbd *sdp = GFS2_SB(folio->mapping->host); 679 + size_t stop = offset + length; 680 + int partial_page = (offset || length < folio_size(folio)); 675 681 struct buffer_head *bh, *head; 676 682 unsigned long pos = 0; 677 683 678 - BUG_ON(!PageLocked(page)); 684 + BUG_ON(!folio_test_locked(folio)); 679 685 if (!partial_page) 680 - ClearPageChecked(page); 681 - if (!page_has_buffers(page)) 686 + folio_clear_checked(folio); 687 + head = folio_buffers(folio); 688 + if (!head) 682 689 goto out; 683 690 684 - bh = head = page_buffers(page); 691 + bh = head; 685 692 do { 686 693 if (pos + bh->b_size > stop) 687 694 return; ··· 694 699 } while (bh != head); 695 700 out: 696 701 if (!partial_page) 697 - try_to_release_page(page, 0); 702 + filemap_release_folio(folio, 0); 698 703 } 699 704 700 705 /** ··· 774 779 .writepages = gfs2_writepages, 775 780 .readpage = gfs2_readpage, 776 781 .readahead = gfs2_readahead, 777 - .set_page_dirty = 
__set_page_dirty_nobuffers, 782 + .dirty_folio = filemap_dirty_folio, 778 783 .releasepage = iomap_releasepage, 779 - .invalidatepage = iomap_invalidatepage, 784 + .invalidate_folio = iomap_invalidate_folio, 780 785 .bmap = gfs2_bmap, 781 786 .direct_IO = noop_direct_IO, 782 787 .migratepage = iomap_migrate_page, ··· 789 794 .writepages = gfs2_jdata_writepages, 790 795 .readpage = gfs2_readpage, 791 796 .readahead = gfs2_readahead, 792 - .set_page_dirty = jdata_set_page_dirty, 797 + .dirty_folio = jdata_dirty_folio, 793 798 .bmap = gfs2_bmap, 794 - .invalidatepage = gfs2_invalidatepage, 799 + .invalidate_folio = gfs2_invalidate_folio, 795 800 .releasepage = gfs2_releasepage, 796 801 .is_partially_uptodate = block_is_partially_uptodate, 797 802 .error_remove_page = generic_error_remove_page,
+4 -2
fs/gfs2/meta_io.c
··· 89 89 } 90 90 91 91 const struct address_space_operations gfs2_meta_aops = { 92 - .set_page_dirty = __set_page_dirty_buffers, 92 + .dirty_folio = block_dirty_folio, 93 + .invalidate_folio = block_invalidate_folio, 93 94 .writepage = gfs2_aspace_writepage, 94 95 .releasepage = gfs2_releasepage, 95 96 }; 96 97 97 98 const struct address_space_operations gfs2_rgrp_aops = { 98 - .set_page_dirty = __set_page_dirty_buffers, 99 + .dirty_folio = block_dirty_folio, 100 + .invalidate_folio = block_invalidate_folio, 99 101 .writepage = gfs2_aspace_writepage, 100 102 .releasepage = gfs2_releasepage, 101 103 };
+4 -2
fs/hfs/inode.c
··· 159 159 } 160 160 161 161 const struct address_space_operations hfs_btree_aops = { 162 - .set_page_dirty = __set_page_dirty_buffers, 162 + .dirty_folio = block_dirty_folio, 163 + .invalidate_folio = block_invalidate_folio, 163 164 .readpage = hfs_readpage, 164 165 .writepage = hfs_writepage, 165 166 .write_begin = hfs_write_begin, ··· 170 169 }; 171 170 172 171 const struct address_space_operations hfs_aops = { 173 - .set_page_dirty = __set_page_dirty_buffers, 172 + .dirty_folio = block_dirty_folio, 173 + .invalidate_folio = block_invalidate_folio, 174 174 .readpage = hfs_readpage, 175 175 .writepage = hfs_writepage, 176 176 .write_begin = hfs_write_begin,
+4 -2
fs/hfsplus/inode.c
··· 156 156 } 157 157 158 158 const struct address_space_operations hfsplus_btree_aops = { 159 - .set_page_dirty = __set_page_dirty_buffers, 159 + .dirty_folio = block_dirty_folio, 160 + .invalidate_folio = block_invalidate_folio, 160 161 .readpage = hfsplus_readpage, 161 162 .writepage = hfsplus_writepage, 162 163 .write_begin = hfsplus_write_begin, ··· 167 166 }; 168 167 169 168 const struct address_space_operations hfsplus_aops = { 170 - .set_page_dirty = __set_page_dirty_buffers, 169 + .dirty_folio = block_dirty_folio, 170 + .invalidate_folio = block_invalidate_folio, 171 171 .readpage = hfsplus_readpage, 172 172 .writepage = hfsplus_writepage, 173 173 .write_begin = hfsplus_write_begin,
+2 -1
fs/hostfs/hostfs_kern.c
··· 14 14 #include <linux/statfs.h> 15 15 #include <linux/slab.h> 16 16 #include <linux/seq_file.h> 17 + #include <linux/writeback.h> 17 18 #include <linux/mount.h> 18 19 #include <linux/namei.h> 19 20 #include "hostfs.h" ··· 505 504 static const struct address_space_operations hostfs_aops = { 506 505 .writepage = hostfs_writepage, 507 506 .readpage = hostfs_readpage, 508 - .set_page_dirty = __set_page_dirty_nobuffers, 507 + .dirty_folio = filemap_dirty_folio, 509 508 .write_begin = hostfs_write_begin, 510 509 .write_end = hostfs_write_end, 511 510 };
+2 -1
fs/hpfs/file.c
··· 245 245 } 246 246 247 247 const struct address_space_operations hpfs_aops = { 248 - .set_page_dirty = __set_page_dirty_buffers, 248 + .dirty_folio = block_dirty_folio, 249 + .invalidate_folio = block_invalidate_folio, 249 250 .readpage = hpfs_readpage, 250 251 .writepage = hpfs_writepage, 251 252 .readahead = hpfs_readahead,
+1 -1
fs/hugetlbfs/inode.c
··· 1144 1144 static const struct address_space_operations hugetlbfs_aops = { 1145 1145 .write_begin = hugetlbfs_write_begin, 1146 1146 .write_end = hugetlbfs_write_end, 1147 - .set_page_dirty = __set_page_dirty_no_writeback, 1147 + .dirty_folio = noop_dirty_folio, 1148 1148 .migratepage = hugetlbfs_migrate_page, 1149 1149 .error_remove_page = hugetlbfs_error_remove_page, 1150 1150 };
+18 -28
fs/iomap/buffered-io.c
··· 425 425 EXPORT_SYMBOL_GPL(iomap_readahead); 426 426 427 427 /* 428 - * iomap_is_partially_uptodate checks whether blocks within a page are 428 + * iomap_is_partially_uptodate checks whether blocks within a folio are 429 429 * uptodate or not. 430 430 * 431 - * Returns true if all blocks which correspond to a file portion 432 - * we want to read within the page are uptodate. 431 + * Returns true if all blocks which correspond to the specified part 432 + * of the folio are uptodate. 433 433 */ 434 - int 435 - iomap_is_partially_uptodate(struct page *page, unsigned long from, 436 - unsigned long count) 434 + bool iomap_is_partially_uptodate(struct folio *folio, size_t from, size_t count) 437 435 { 438 - struct folio *folio = page_folio(page); 439 436 struct iomap_page *iop = to_iomap_page(folio); 440 - struct inode *inode = page->mapping->host; 441 - unsigned len, first, last; 442 - unsigned i; 437 + struct inode *inode = folio->mapping->host; 438 + size_t len; 439 + unsigned first, last, i; 443 440 444 - /* Limit range to one page */ 445 - len = min_t(unsigned, PAGE_SIZE - from, count); 441 + if (!iop) 442 + return false; 443 + 444 + /* Limit range to this folio */ 445 + len = min(folio_size(folio) - from, count); 446 446 447 447 /* First and last blocks in range within page */ 448 448 first = from >> inode->i_blkbits; 449 449 last = (from + len - 1) >> inode->i_blkbits; 450 450 451 - if (iop) { 452 - for (i = first; i <= last; i++) 453 - if (!test_bit(i, iop->uptodate)) 454 - return 0; 455 - return 1; 456 - } 457 - 458 - return 0; 451 + for (i = first; i <= last; i++) 452 + if (!test_bit(i, iop->uptodate)) 453 + return false; 454 + return true; 459 455 } 460 456 EXPORT_SYMBOL_GPL(iomap_is_partially_uptodate); 461 457 ··· 477 481 478 482 void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len) 479 483 { 480 - trace_iomap_invalidatepage(folio->mapping->host, offset, len); 484 + trace_iomap_invalidate_folio(folio->mapping->host, 485 + 
folio_pos(folio) + offset, len); 481 486 482 487 /* 483 488 * If we're invalidating the entire folio, clear the dirty state ··· 496 499 } 497 500 } 498 501 EXPORT_SYMBOL_GPL(iomap_invalidate_folio); 499 - 500 - void iomap_invalidatepage(struct page *page, unsigned int offset, 501 - unsigned int len) 502 - { 503 - iomap_invalidate_folio(page_folio(page), offset, len); 504 - } 505 - EXPORT_SYMBOL_GPL(iomap_invalidatepage); 506 502 507 503 #ifdef CONFIG_MIGRATION 508 504 int
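The converted iomap_is_partially_uptodate() above clamps the byte range to the folio, maps it to first/last block indices, and requires every block's uptodate bit. The same arithmetic in a self-contained userspace sketch (a fixed 4 KiB "folio" with hypothetical 1 KiB blocks and a plain byte array instead of the iomap_page bitmap):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MOCK_FOLIO_SIZE 4096u
#define MOCK_BLKBITS    10u	/* 1024-byte blocks, for illustration */

/* uptodate[] has one entry per block; nonzero means uptodate. */
static bool mock_is_partially_uptodate(const unsigned char *uptodate,
				       size_t from, size_t count)
{
	/* Limit range to this folio. */
	size_t len = MOCK_FOLIO_SIZE - from < count ?
		     MOCK_FOLIO_SIZE - from : count;
	/* First and last blocks in the byte range. */
	unsigned int first = from >> MOCK_BLKBITS;
	unsigned int last = (from + len - 1) >> MOCK_BLKBITS;
	unsigned int i;

	for (i = first; i <= last; i++)
		if (!uptodate[i])
			return false;
	return true;
}
```

Note the real function now also returns false up front when no iomap_page is attached, since then no sub-block state is tracked at all.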
+1 -1
fs/iomap/trace.h
··· 81 81 TP_ARGS(inode, off, len)) 82 82 DEFINE_RANGE_EVENT(iomap_writepage); 83 83 DEFINE_RANGE_EVENT(iomap_releasepage); 84 - DEFINE_RANGE_EVENT(iomap_invalidatepage); 84 + DEFINE_RANGE_EVENT(iomap_invalidate_folio); 85 85 DEFINE_RANGE_EVENT(iomap_dio_invalidate_fail); 86 86 87 87 #define IOMAP_TYPE_STRINGS \
+1 -1
fs/jbd2/journal.c
··· 86 86 EXPORT_SYMBOL(jbd2_journal_force_commit_nested); 87 87 EXPORT_SYMBOL(jbd2_journal_wipe); 88 88 EXPORT_SYMBOL(jbd2_journal_blocks_per_page); 89 - EXPORT_SYMBOL(jbd2_journal_invalidatepage); 89 + EXPORT_SYMBOL(jbd2_journal_invalidate_folio); 90 90 EXPORT_SYMBOL(jbd2_journal_try_to_free_buffers); 91 91 EXPORT_SYMBOL(jbd2_journal_force_commit); 92 92 EXPORT_SYMBOL(jbd2_journal_inode_ranged_write);
+15 -16
fs/jbd2/transaction.c
··· 2217 2217 } 2218 2218 2219 2219 /* 2220 - * jbd2_journal_invalidatepage 2220 + * jbd2_journal_invalidate_folio 2221 2221 * 2222 2222 * This code is tricky. It has a number of cases to deal with. 2223 2223 * 2224 2224 * There are two invariants which this code relies on: 2225 2225 * 2226 - * i_size must be updated on disk before we start calling invalidatepage on the 2227 - * data. 2226 + * i_size must be updated on disk before we start calling invalidate_folio 2227 + * on the data. 2228 2228 * 2229 2229 * This is done in ext3 by defining an ext3_setattr method which 2230 2230 * updates i_size before truncate gets going. By maintaining this ··· 2426 2426 } 2427 2427 2428 2428 /** 2429 - * jbd2_journal_invalidatepage() 2429 + * jbd2_journal_invalidate_folio() 2430 2430 * @journal: journal to use for flush... 2431 - * @page: page to flush 2431 + * @folio: folio to flush 2432 2432 * @offset: start of the range to invalidate 2433 2433 * @length: length of the range to invalidate 2434 2434 * ··· 2437 2437 * the page is straddling i_size. Caller then has to wait for current commit 2438 2438 * and try again. 
2439 2439 */ 2440 - int jbd2_journal_invalidatepage(journal_t *journal, 2441 - struct page *page, 2442 - unsigned int offset, 2443 - unsigned int length) 2440 + int jbd2_journal_invalidate_folio(journal_t *journal, struct folio *folio, 2441 + size_t offset, size_t length) 2444 2442 { 2445 2443 struct buffer_head *head, *bh, *next; 2446 2444 unsigned int stop = offset + length; 2447 2445 unsigned int curr_off = 0; 2448 - int partial_page = (offset || length < PAGE_SIZE); 2446 + int partial_page = (offset || length < folio_size(folio)); 2449 2447 int may_free = 1; 2450 2448 int ret = 0; 2451 2449 2452 - if (!PageLocked(page)) 2450 + if (!folio_test_locked(folio)) 2453 2451 BUG(); 2454 - if (!page_has_buffers(page)) 2452 + head = folio_buffers(folio); 2453 + if (!head) 2455 2454 return 0; 2456 2455 2457 - BUG_ON(stop > PAGE_SIZE || stop < length); 2456 + BUG_ON(stop > folio_size(folio) || stop < length); 2458 2457 2459 2458 /* We will potentially be playing with lists other than just the 2460 2459 * data lists (especially for journaled data mode), so be 2461 2460 * cautious in our locking. */ 2462 2461 2463 - head = bh = page_buffers(page); 2462 + bh = head; 2464 2463 do { 2465 2464 unsigned int next_off = curr_off + bh->b_size; 2466 2465 next = bh->b_this_page; ··· 2482 2483 } while (bh != head); 2483 2484 2484 2485 if (!partial_page) { 2485 - if (may_free && try_to_free_buffers(page)) 2486 - J_ASSERT(!page_has_buffers(page)); 2486 + if (may_free && try_to_free_buffers(&folio->page)) 2487 + J_ASSERT(!folio_buffers(folio)); 2487 2488 } 2488 2489 return 0; 2489 2490 }
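The buffer walk in jbd2_journal_invalidate_folio() above (and the similar one in gfs2_invalidate_folio()) advances through fixed-size buffers, bails out as soon as a buffer would cross the end of the range, and only touches buffers lying wholly past the start offset. A hedged userspace sketch of just that range arithmetic, counting how many buffers would be invalidated; all names are illustrative:

```c
#include <assert.h>
#include <stddef.h>

/* Walk nr_bh buffers of bh_size bytes and count those wholly inside
 * [offset, offset + length), stopping at the first buffer that
 * straddles the end of the range, as the jbd2/gfs2 walks do. */
static int mock_count_invalidated(size_t bh_size, size_t nr_bh,
				  size_t offset, size_t length)
{
	size_t stop = offset + length;
	size_t curr_off = 0;
	size_t i;
	int count = 0;

	for (i = 0; i < nr_bh; i++) {
		size_t next_off = curr_off + bh_size;

		if (next_off > stop)		/* straddles end: stop walking */
			break;
		if (curr_off >= offset)		/* wholly inside the range */
			count++;
		curr_off = next_off;
	}
	return count;
}
```

A partial range (offset != 0 or length < folio size) is what makes the real function keep the folio's buffers attached and merely discard the affected ones.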
+2 -1
fs/jfs/inode.c
··· 357 357 } 358 358 359 359 const struct address_space_operations jfs_aops = { 360 - .set_page_dirty = __set_page_dirty_buffers, 360 + .dirty_folio = block_dirty_folio, 361 + .invalidate_folio = block_invalidate_folio, 361 362 .readpage = jfs_readpage, 362 363 .readahead = jfs_readahead, 363 364 .writepage = jfs_writepage,
+7 -7
fs/jfs/jfs_metapage.c
··· 552 552 return ret; 553 553 } 554 554 555 - static void metapage_invalidatepage(struct page *page, unsigned int offset, 556 - unsigned int length) 555 + static void metapage_invalidate_folio(struct folio *folio, size_t offset, 556 + size_t length) 557 557 { 558 - BUG_ON(offset || length < PAGE_SIZE); 558 + BUG_ON(offset || length < folio_size(folio)); 559 559 560 - BUG_ON(PageWriteback(page)); 560 + BUG_ON(folio_test_writeback(folio)); 561 561 562 - metapage_releasepage(page, 0); 562 + metapage_releasepage(&folio->page, 0); 563 563 } 564 564 565 565 const struct address_space_operations jfs_metapage_aops = { 566 566 .readpage = metapage_readpage, 567 567 .writepage = metapage_writepage, 568 568 .releasepage = metapage_releasepage, 569 - .invalidatepage = metapage_invalidatepage, 570 - .set_page_dirty = __set_page_dirty_nobuffers, 569 + .invalidate_folio = metapage_invalidate_folio, 570 + .dirty_folio = filemap_dirty_folio, 571 571 }; 572 572 573 573 struct metapage *__get_metapage(struct inode *inode, unsigned long lblock,
+2 -13
fs/libfs.c
··· 631 631 .readpage = simple_readpage, 632 632 .write_begin = simple_write_begin, 633 633 .write_end = simple_write_end, 634 - .set_page_dirty = __set_page_dirty_no_writeback, 634 + .dirty_folio = noop_dirty_folio, 635 635 }; 636 636 EXPORT_SYMBOL(ram_aops); 637 637 ··· 1198 1198 } 1199 1199 EXPORT_SYMBOL(noop_fsync); 1200 1200 1201 - void noop_invalidatepage(struct page *page, unsigned int offset, 1202 - unsigned int length) 1203 - { 1204 - /* 1205 - * There is no page cache to invalidate in the dax case, however 1206 - * we need this callback defined to prevent falling back to 1207 - * block_invalidatepage() in do_invalidatepage(). 1208 - */ 1209 - } 1210 - EXPORT_SYMBOL_GPL(noop_invalidatepage); 1211 - 1212 1201 ssize_t noop_direct_IO(struct kiocb *iocb, struct iov_iter *iter) 1213 1202 { 1214 1203 /* ··· 1220 1231 struct inode *alloc_anon_inode(struct super_block *s) 1221 1232 { 1222 1233 static const struct address_space_operations anon_aops = { 1223 - .set_page_dirty = __set_page_dirty_no_writeback, 1234 + .dirty_folio = noop_dirty_folio, 1224 1235 }; 1225 1236 struct inode *inode = new_inode_pseudo(s); 1226 1237
+2 -1
fs/minix/inode.c
··· 442 442 } 443 443 444 444 static const struct address_space_operations minix_aops = { 445 - .set_page_dirty = __set_page_dirty_buffers, 445 + .dirty_folio = block_dirty_folio, 446 + .invalidate_folio = block_invalidate_folio, 446 447 .readpage = minix_readpage, 447 448 .writepage = minix_writepage, 448 449 .write_begin = minix_write_begin,
+1 -1
fs/mpage.c
··· 479 479 if (!buffer_mapped(bh)) { 480 480 /* 481 481 * unmapped dirty buffers are created by 482 - * __set_page_dirty_buffers -> mmapped data 482 + * block_dirty_folio -> mmapped data 483 483 */ 484 484 if (buffer_dirty(bh)) 485 485 goto confused;
+16 -16
fs/nfs/file.c
··· 406 406 * - Called if either PG_private or PG_fscache is set on the page 407 407 * - Caller holds page lock 408 408 */ 409 - static void nfs_invalidate_page(struct page *page, unsigned int offset, 410 - unsigned int length) 409 + static void nfs_invalidate_folio(struct folio *folio, size_t offset, 410 + size_t length) 411 411 { 412 - dfprintk(PAGECACHE, "NFS: invalidate_page(%p, %u, %u)\n", 413 - page, offset, length); 412 + dfprintk(PAGECACHE, "NFS: invalidate_folio(%lu, %zu, %zu)\n", 413 + folio->index, offset, length); 414 414 415 - if (offset != 0 || length < PAGE_SIZE) 415 + if (offset != 0 || length < folio_size(folio)) 416 416 return; 417 417 /* Cancel any unstarted writes on this page */ 418 - nfs_wb_page_cancel(page_file_mapping(page)->host, page); 419 - wait_on_page_fscache(page); 418 + nfs_wb_folio_cancel(folio->mapping->host, folio); 419 + folio_wait_fscache(folio); 420 420 } 421 421 422 422 /* ··· 472 472 * - Caller holds page lock 473 473 * - Return 0 if successful, -error otherwise 474 474 */ 475 - static int nfs_launder_page(struct page *page) 475 + static int nfs_launder_folio(struct folio *folio) 476 476 { 477 - struct inode *inode = page_file_mapping(page)->host; 477 + struct inode *inode = folio->mapping->host; 478 478 479 - dfprintk(PAGECACHE, "NFS: launder_page(%ld, %llu)\n", 480 - inode->i_ino, (long long)page_offset(page)); 479 + dfprintk(PAGECACHE, "NFS: launder_folio(%ld, %llu)\n", 480 + inode->i_ino, folio_pos(folio)); 481 481 482 - wait_on_page_fscache(page); 483 - return nfs_wb_page(inode, page); 482 + folio_wait_fscache(folio); 483 + return nfs_wb_page(inode, &folio->page); 484 484 } 485 485 486 486 static int nfs_swap_activate(struct swap_info_struct *sis, struct file *file, ··· 515 515 const struct address_space_operations nfs_file_aops = { 516 516 .readpage = nfs_readpage, 517 517 .readpages = nfs_readpages, 518 - .set_page_dirty = __set_page_dirty_nobuffers, 518 + .dirty_folio = filemap_dirty_folio, 519 519 .writepage = 
nfs_writepage, 520 520 .writepages = nfs_writepages, 521 521 .write_begin = nfs_write_begin, 522 522 .write_end = nfs_write_end, 523 - .invalidatepage = nfs_invalidate_page, 523 + .invalidate_folio = nfs_invalidate_folio, 524 524 .releasepage = nfs_release_page, 525 525 .direct_IO = nfs_direct_IO, 526 526 #ifdef CONFIG_MIGRATION 527 527 .migratepage = nfs_migrate_page, 528 528 #endif 529 - .launder_page = nfs_launder_page, 529 + .launder_folio = nfs_launder_folio, 530 530 .is_dirty_writeback = nfs_check_dirty_writeback, 531 531 .error_remove_page = generic_error_remove_page, 532 532 .swap_activate = nfs_swap_activate,
+4 -4
fs/nfs/write.c
··· 2057 2057 } 2058 2058 EXPORT_SYMBOL_GPL(nfs_wb_all); 2059 2059 2060 - int nfs_wb_page_cancel(struct inode *inode, struct page *page) 2060 + int nfs_wb_folio_cancel(struct inode *inode, struct folio *folio) 2061 2061 { 2062 2062 struct nfs_page *req; 2063 2063 int ret = 0; 2064 2064 2065 - wait_on_page_writeback(page); 2065 + folio_wait_writeback(folio); 2066 2066 2067 2067 /* blocking call to cancel all requests and join to a single (head) 2068 2068 * request */ 2069 - req = nfs_lock_and_join_requests(page); 2069 + req = nfs_lock_and_join_requests(&folio->page); 2070 2070 2071 2071 if (IS_ERR(req)) { 2072 2072 ret = PTR_ERR(req); 2073 2073 } else if (req) { 2074 - /* all requests from this page have been cancelled by 2074 + /* all requests from this folio have been cancelled by 2075 2075 * nfs_lock_and_join_requests, so just remove the head 2076 2076 * request from the inode / page_private pointer and 2077 2077 * release it */
+20 -22
fs/nilfs2/inode.c
··· 199 199 return 0; 200 200 } 201 201 202 - static int nilfs_set_page_dirty(struct page *page) 202 + static bool nilfs_dirty_folio(struct address_space *mapping, 203 + struct folio *folio) 203 204 { 204 - struct inode *inode = page->mapping->host; 205 - int ret = __set_page_dirty_nobuffers(page); 205 + struct inode *inode = mapping->host; 206 + struct buffer_head *head; 207 + unsigned int nr_dirty = 0; 208 + bool ret = filemap_dirty_folio(mapping, folio); 206 209 207 - if (page_has_buffers(page)) { 208 - unsigned int nr_dirty = 0; 209 - struct buffer_head *bh, *head; 210 + /* 211 + * The page may not be locked, eg if called from try_to_unmap_one() 212 + */ 213 + spin_lock(&mapping->private_lock); 214 + head = folio_buffers(folio); 215 + if (head) { 216 + struct buffer_head *bh = head; 210 217 211 - /* 212 - * This page is locked by callers, and no other thread 213 - * concurrently marks its buffers dirty since they are 214 - * only dirtied through routines in fs/buffer.c in 215 - * which call sites of mark_buffer_dirty are protected 216 - * by page lock. 
217 - */ 218 - bh = head = page_buffers(page); 219 218 do { 220 219 /* Do not mark hole blocks dirty */ 221 220 if (buffer_dirty(bh) || !buffer_mapped(bh)) ··· 223 224 set_buffer_dirty(bh); 224 225 nr_dirty++; 225 226 } while (bh = bh->b_this_page, bh != head); 226 - 227 - if (nr_dirty) 228 - nilfs_set_file_dirty(inode, nr_dirty); 229 227 } else if (ret) { 230 - unsigned int nr_dirty = 1 << (PAGE_SHIFT - inode->i_blkbits); 231 - 232 - nilfs_set_file_dirty(inode, nr_dirty); 228 + nr_dirty = 1 << (folio_shift(folio) - inode->i_blkbits); 233 229 } 230 + spin_unlock(&mapping->private_lock); 231 + 232 + if (nr_dirty) 233 + nilfs_set_file_dirty(inode, nr_dirty); 234 234 return ret; 235 235 } 236 236 ··· 297 299 .writepage = nilfs_writepage, 298 300 .readpage = nilfs_readpage, 299 301 .writepages = nilfs_writepages, 300 - .set_page_dirty = nilfs_set_page_dirty, 302 + .dirty_folio = nilfs_dirty_folio, 301 303 .readahead = nilfs_readahead, 302 304 .write_begin = nilfs_write_begin, 303 305 .write_end = nilfs_write_end, 304 306 /* .releasepage = nilfs_releasepage, */ 305 - .invalidatepage = block_invalidatepage, 307 + .invalidate_folio = block_invalidate_folio, 306 308 .direct_IO = nilfs_direct_IO, 307 309 .is_partially_uptodate = block_is_partially_uptodate, 308 310 };
+2 -1
fs/nilfs2/mdt.c
··· 434 434 435 435 436 436 static const struct address_space_operations def_mdt_aops = { 437 - .set_page_dirty = __set_page_dirty_buffers, 437 + .dirty_folio = block_dirty_folio, 438 + .invalidate_folio = block_invalidate_folio, 438 439 .writepage = nilfs_mdt_write_page, 439 440 }; 440 441
+10 -11
fs/ntfs/aops.c
··· 593 593 iblock = initialized_size >> blocksize_bits; 594 594 595 595 /* 596 - * Be very careful. We have no exclusion from __set_page_dirty_buffers 596 + * Be very careful. We have no exclusion from block_dirty_folio 597 597 * here, and the (potentially unmapped) buffers may become dirty at 598 598 * any time. If a buffer becomes dirty here after we've inspected it 599 599 * then we just miss that fact, and the page stays dirty. 600 600 * 601 - * Buffers outside i_size may be dirtied by __set_page_dirty_buffers; 601 + * Buffers outside i_size may be dirtied by block_dirty_folio; 602 602 * handle that here by just cleaning them. 603 603 */ 604 604 ··· 653 653 // Update initialized size in the attribute and 654 654 // in the inode. 655 655 // Again, for each page do: 656 - // __set_page_dirty_buffers(); 656 + // block_dirty_folio(); 657 657 // put_page() 658 658 // We don't need to wait on the writes. 659 659 // Update iblock. ··· 1350 1350 /* Is the page fully outside i_size? (truncate in progress) */ 1351 1351 if (unlikely(page->index >= (i_size + PAGE_SIZE - 1) >> 1352 1352 PAGE_SHIFT)) { 1353 + struct folio *folio = page_folio(page); 1353 1354 /* 1354 1355 * The page may have dirty, unmapped buffers. Make them 1355 1356 * freeable here, so the page does not leak. 
1356 1357 */ 1357 - block_invalidatepage(page, 0, PAGE_SIZE); 1358 - unlock_page(page); 1358 + block_invalidate_folio(folio, 0, folio_size(folio)); 1359 + folio_unlock(folio); 1359 1360 ntfs_debug("Write outside i_size - truncated?"); 1360 1361 return 0; 1361 1362 } ··· 1654 1653 .readpage = ntfs_readpage, 1655 1654 #ifdef NTFS_RW 1656 1655 .writepage = ntfs_writepage, 1657 - .set_page_dirty = __set_page_dirty_buffers, 1656 + .dirty_folio = block_dirty_folio, 1658 1657 #endif /* NTFS_RW */ 1659 1658 .bmap = ntfs_bmap, 1660 1659 .migratepage = buffer_migrate_page, ··· 1669 1668 .readpage = ntfs_readpage, 1670 1669 #ifdef NTFS_RW 1671 1670 .writepage = ntfs_writepage, 1672 - .set_page_dirty = __set_page_dirty_buffers, 1671 + .dirty_folio = block_dirty_folio, 1673 1672 #endif /* NTFS_RW */ 1674 1673 .migratepage = buffer_migrate_page, 1675 1674 .is_partially_uptodate = block_is_partially_uptodate, ··· 1684 1683 .readpage = ntfs_readpage, /* Fill page with data. */ 1685 1684 #ifdef NTFS_RW 1686 1685 .writepage = ntfs_writepage, /* Write dirty page to disk. */ 1687 - .set_page_dirty = __set_page_dirty_nobuffers, /* Set the page dirty 1688 - without touching the buffers 1689 - belonging to the page. */ 1686 + .dirty_folio = filemap_dirty_folio, 1690 1687 #endif /* NTFS_RW */ 1691 1688 .migratepage = buffer_migrate_page, 1692 1689 .is_partially_uptodate = block_is_partially_uptodate, ··· 1746 1747 set_buffer_dirty(bh); 1747 1748 } while ((bh = bh->b_this_page) != head); 1748 1749 spin_unlock(&mapping->private_lock); 1749 - __set_page_dirty_nobuffers(page); 1750 + block_dirty_folio(mapping, page_folio(page)); 1750 1751 if (unlikely(buffers_to_free)) { 1751 1752 do { 1752 1753 bh = buffers_to_free->b_this_page;
+1 -1
fs/ntfs3/inode.c
··· 1950 1950 .write_end = ntfs_write_end, 1951 1951 .direct_IO = ntfs_direct_IO, 1952 1952 .bmap = ntfs_bmap, 1953 - .set_page_dirty = __set_page_dirty_buffers, 1953 + .dirty_folio = block_dirty_folio, 1954 1954 }; 1955 1955 1956 1956 const struct address_space_operations ntfs_aops_cmpr = {
+2 -2
fs/ocfs2/aops.c
··· 2453 2453 } 2454 2454 2455 2455 const struct address_space_operations ocfs2_aops = { 2456 - .set_page_dirty = __set_page_dirty_buffers, 2456 + .dirty_folio = block_dirty_folio, 2457 2457 .readpage = ocfs2_readpage, 2458 2458 .readahead = ocfs2_readahead, 2459 2459 .writepage = ocfs2_writepage, ··· 2461 2461 .write_end = ocfs2_write_end, 2462 2462 .bmap = ocfs2_bmap, 2463 2463 .direct_IO = ocfs2_direct_IO, 2464 - .invalidatepage = block_invalidatepage, 2464 + .invalidate_folio = block_invalidate_folio, 2465 2465 .releasepage = ocfs2_releasepage, 2466 2466 .migratepage = buffer_migrate_page, 2467 2467 .is_partially_uptodate = block_is_partially_uptodate,
+2 -1
fs/omfs/file.c
··· 372 372 }; 373 373 374 374 const struct address_space_operations omfs_aops = { 375 - .set_page_dirty = __set_page_dirty_buffers, 375 + .dirty_folio = block_dirty_folio, 376 + .invalidate_folio = block_invalidate_folio, 376 377 .readpage = omfs_readpage, 377 378 .readahead = omfs_readahead, 378 379 .writepage = omfs_writepage,
+61 -60
fs/orangefs/inode.c
··· 46 46 else 47 47 wlen = PAGE_SIZE; 48 48 } 49 - /* Should've been handled in orangefs_invalidatepage. */ 49 + /* Should've been handled in orangefs_invalidate_folio. */ 50 50 WARN_ON(off == len || off + wlen > len); 51 51 52 52 bv.bv_page = page; ··· 243 243 return ret; 244 244 } 245 245 246 - static int orangefs_launder_page(struct page *); 246 + static int orangefs_launder_folio(struct folio *); 247 247 248 248 static void orangefs_readahead(struct readahead_control *rac) 249 249 { ··· 290 290 291 291 static int orangefs_readpage(struct file *file, struct page *page) 292 292 { 293 + struct folio *folio = page_folio(page); 293 294 struct inode *inode = page->mapping->host; 294 295 struct iov_iter iter; 295 296 struct bio_vec bv; 296 297 ssize_t ret; 297 298 loff_t off; /* offset into this page */ 298 299 299 - if (PageDirty(page)) 300 - orangefs_launder_page(page); 300 + if (folio_test_dirty(folio)) 301 + orangefs_launder_folio(folio); 301 302 302 303 off = page_offset(page); 303 304 bv.bv_page = page; ··· 331 330 void **fsdata) 332 331 { 333 332 struct orangefs_write_range *wr; 333 + struct folio *folio; 334 334 struct page *page; 335 335 pgoff_t index; 336 336 int ret; ··· 343 341 return -ENOMEM; 344 342 345 343 *pagep = page; 344 + folio = page_folio(page); 346 345 347 - if (PageDirty(page) && !PagePrivate(page)) { 346 + if (folio_test_dirty(folio) && !folio_test_private(folio)) { 348 347 /* 349 348 * Should be impossible. If it happens, launder the page 350 349 * since we don't know what's dirty. This will WARN in 351 350 * orangefs_writepage_locked. 
352 351 */ 353 - ret = orangefs_launder_page(page); 352 + ret = orangefs_launder_folio(folio); 354 353 if (ret) 355 354 return ret; 356 355 } 357 - if (PagePrivate(page)) { 356 + if (folio_test_private(folio)) { 358 357 struct orangefs_write_range *wr; 359 - wr = (struct orangefs_write_range *)page_private(page); 358 + wr = folio_get_private(folio); 360 359 if (wr->pos + wr->len == pos && 361 360 uid_eq(wr->uid, current_fsuid()) && 362 361 gid_eq(wr->gid, current_fsgid())) { 363 362 wr->len += len; 364 363 goto okay; 365 364 } else { 366 - ret = orangefs_launder_page(page); 365 + ret = orangefs_launder_folio(folio); 367 366 if (ret) 368 367 return ret; 369 368 } ··· 378 375 wr->len = len; 379 376 wr->uid = current_fsuid(); 380 377 wr->gid = current_fsgid(); 381 - attach_page_private(page, wr); 378 + folio_attach_private(folio, wr); 382 379 okay: 383 380 return 0; 384 381 } ··· 418 415 return copied; 419 416 } 420 417 421 - static void orangefs_invalidatepage(struct page *page, 422 - unsigned int offset, 423 - unsigned int length) 418 + static void orangefs_invalidate_folio(struct folio *folio, 419 + size_t offset, size_t length) 424 420 { 425 - struct orangefs_write_range *wr; 426 - wr = (struct orangefs_write_range *)page_private(page); 421 + struct orangefs_write_range *wr = folio_get_private(folio); 427 422 428 423 if (offset == 0 && length == PAGE_SIZE) { 429 - kfree(detach_page_private(page)); 424 + kfree(folio_detach_private(folio)); 430 425 return; 431 426 /* write range entirely within invalidate range (or equal) */ 432 - } else if (page_offset(page) + offset <= wr->pos && 433 - wr->pos + wr->len <= page_offset(page) + offset + length) { 434 - kfree(detach_page_private(page)); 427 + } else if (folio_pos(folio) + offset <= wr->pos && 428 + wr->pos + wr->len <= folio_pos(folio) + offset + length) { 429 + kfree(folio_detach_private(folio)); 435 430 /* XXX is this right? 
only caller in fs */ 436 - cancel_dirty_page(page); 431 + folio_cancel_dirty(folio); 437 432 return; 438 433 /* invalidate range chops off end of write range */ 439 - } else if (wr->pos < page_offset(page) + offset && 440 - wr->pos + wr->len <= page_offset(page) + offset + length && 441 - page_offset(page) + offset < wr->pos + wr->len) { 434 + } else if (wr->pos < folio_pos(folio) + offset && 435 + wr->pos + wr->len <= folio_pos(folio) + offset + length && 436 + folio_pos(folio) + offset < wr->pos + wr->len) { 442 437 size_t x; 443 - x = wr->pos + wr->len - (page_offset(page) + offset); 438 + x = wr->pos + wr->len - (folio_pos(folio) + offset); 444 439 WARN_ON(x > wr->len); 445 440 wr->len -= x; 446 441 wr->uid = current_fsuid(); 447 442 wr->gid = current_fsgid(); 448 443 /* invalidate range chops off beginning of write range */ 449 - } else if (page_offset(page) + offset <= wr->pos && 450 - page_offset(page) + offset + length < wr->pos + wr->len && 451 - wr->pos < page_offset(page) + offset + length) { 444 + } else if (folio_pos(folio) + offset <= wr->pos && 445 + folio_pos(folio) + offset + length < wr->pos + wr->len && 446 + wr->pos < folio_pos(folio) + offset + length) { 452 447 size_t x; 453 - x = page_offset(page) + offset + length - wr->pos; 448 + x = folio_pos(folio) + offset + length - wr->pos; 454 449 WARN_ON(x > wr->len); 455 450 wr->pos += x; 456 451 wr->len -= x; 457 452 wr->uid = current_fsuid(); 458 453 wr->gid = current_fsgid(); 459 454 /* invalidate range entirely within write range (punch hole) */ 460 - } else if (wr->pos < page_offset(page) + offset && 461 - page_offset(page) + offset + length < wr->pos + wr->len) { 455 + } else if (wr->pos < folio_pos(folio) + offset && 456 + folio_pos(folio) + offset + length < wr->pos + wr->len) { 462 457 /* XXX what do we do here... 
should not WARN_ON */ 463 458 WARN_ON(1); 464 459 /* punch hole */ ··· 468 467 /* non-overlapping ranges */ 469 468 } else { 470 469 /* WARN if they do overlap */ 471 - if (!((page_offset(page) + offset + length <= wr->pos) ^ 472 - (wr->pos + wr->len <= page_offset(page) + offset))) { 470 + if (!((folio_pos(folio) + offset + length <= wr->pos) ^ 471 + (wr->pos + wr->len <= folio_pos(folio) + offset))) { 473 472 WARN_ON(1); 474 - printk("invalidate range offset %llu length %u\n", 475 - page_offset(page) + offset, length); 473 + printk("invalidate range offset %llu length %zu\n", 474 + folio_pos(folio) + offset, length); 476 475 printk("write range offset %llu length %zu\n", 477 476 wr->pos, wr->len); 478 477 } ··· 484 483 * Thus the following runs if wr was modified above. 485 484 */ 486 485 487 - orangefs_launder_page(page); 486 + orangefs_launder_folio(folio); 488 487 } 489 488 490 489 static int orangefs_releasepage(struct page *page, gfp_t foo) ··· 497 496 kfree(detach_page_private(page)); 498 497 } 499 498 500 - static int orangefs_launder_page(struct page *page) 499 + static int orangefs_launder_folio(struct folio *folio) 501 500 { 502 501 int r = 0; 503 502 struct writeback_control wbc = { 504 503 .sync_mode = WB_SYNC_ALL, 505 504 .nr_to_write = 0, 506 505 }; 507 - wait_on_page_writeback(page); 508 - if (clear_page_dirty_for_io(page)) { 509 - r = orangefs_writepage_locked(page, &wbc); 510 - end_page_writeback(page); 506 + folio_wait_writeback(folio); 507 + if (folio_clear_dirty_for_io(folio)) { 508 + r = orangefs_writepage_locked(&folio->page, &wbc); 509 + folio_end_writeback(folio); 511 510 } 512 511 return r; 513 512 } ··· 634 633 .readahead = orangefs_readahead, 635 634 .readpage = orangefs_readpage, 636 635 .writepages = orangefs_writepages, 637 - .set_page_dirty = __set_page_dirty_nobuffers, 636 + .dirty_folio = filemap_dirty_folio, 638 637 .write_begin = orangefs_write_begin, 639 638 .write_end = orangefs_write_end, 640 - .invalidatepage = 
orangefs_invalidatepage, 639 + .invalidate_folio = orangefs_invalidate_folio, 641 640 .releasepage = orangefs_releasepage, 642 641 .freepage = orangefs_freepage, 643 - .launder_page = orangefs_launder_page, 642 + .launder_folio = orangefs_launder_folio, 644 643 .direct_IO = orangefs_direct_IO, 645 644 }; 646 645 647 646 vm_fault_t orangefs_page_mkwrite(struct vm_fault *vmf) 648 647 { 649 - struct page *page = vmf->page; 648 + struct folio *folio = page_folio(vmf->page); 650 649 struct inode *inode = file_inode(vmf->vma->vm_file); 651 650 struct orangefs_inode_s *orangefs_inode = ORANGEFS_I(inode); 652 651 unsigned long *bitlock = &orangefs_inode->bitlock; ··· 660 659 goto out; 661 660 } 662 661 663 - lock_page(page); 664 - if (PageDirty(page) && !PagePrivate(page)) { 662 + folio_lock(folio); 663 + if (folio_test_dirty(folio) && !folio_test_private(folio)) { 665 664 /* 666 - * Should be impossible. If it happens, launder the page 665 + * Should be impossible. If it happens, launder the folio 667 666 * since we don't know what's dirty. This will WARN in 668 667 * orangefs_writepage_locked. 
669 668 */ 670 - if (orangefs_launder_page(page)) { 669 + if (orangefs_launder_folio(folio)) { 671 670 ret = VM_FAULT_LOCKED|VM_FAULT_RETRY; 672 671 goto out; 673 672 } 674 673 } 675 - if (PagePrivate(page)) { 676 - wr = (struct orangefs_write_range *)page_private(page); 674 + if (folio_test_private(folio)) { 675 + wr = folio_get_private(folio); 677 676 if (uid_eq(wr->uid, current_fsuid()) && 678 677 gid_eq(wr->gid, current_fsgid())) { 679 - wr->pos = page_offset(page); 678 + wr->pos = page_offset(vmf->page); 680 679 wr->len = PAGE_SIZE; 681 680 goto okay; 682 681 } else { 683 - if (orangefs_launder_page(page)) { 682 + if (orangefs_launder_folio(folio)) { 684 683 ret = VM_FAULT_LOCKED|VM_FAULT_RETRY; 685 684 goto out; 686 685 } ··· 691 690 ret = VM_FAULT_LOCKED|VM_FAULT_RETRY; 692 691 goto out; 693 692 } 694 - wr->pos = page_offset(page); 693 + wr->pos = page_offset(vmf->page); 695 694 wr->len = PAGE_SIZE; 696 695 wr->uid = current_fsuid(); 697 696 wr->gid = current_fsgid(); 698 - attach_page_private(page, wr); 697 + folio_attach_private(folio, wr); 699 698 okay: 700 699 701 700 file_update_time(vmf->vma->vm_file); 702 - if (page->mapping != inode->i_mapping) { 703 - unlock_page(page); 701 + if (folio->mapping != inode->i_mapping) { 702 + folio_unlock(folio); 704 703 ret = VM_FAULT_LOCKED|VM_FAULT_NOPAGE; 705 704 goto out; 706 705 } 707 706 708 707 /* 709 - * We mark the page dirty already here so that when freeze is in 708 + * We mark the folio dirty already here so that when freeze is in 710 709 * progress, we are guaranteed that writeback during freezing will 711 - * see the dirty page and writeprotect it again. 710 + * see the dirty folio and writeprotect it again. 712 711 */ 713 - set_page_dirty(page); 714 - wait_for_stable_page(page); 712 + folio_mark_dirty(folio); 713 + folio_wait_stable(folio); 715 714 ret = VM_FAULT_LOCKED; 716 715 out: 717 716 sb_end_pagefault(inode->i_sb);
+20 -20
fs/reiserfs/inode.c
··· 3094 3094 * decide if this buffer needs to stay around for data logging or ordered 3095 3095 * write purposes 3096 3096 */ 3097 - static int invalidatepage_can_drop(struct inode *inode, struct buffer_head *bh) 3097 + static int invalidate_folio_can_drop(struct inode *inode, struct buffer_head *bh) 3098 3098 { 3099 3099 int ret = 1; 3100 3100 struct reiserfs_journal *j = SB_JOURNAL(inode->i_sb); ··· 3147 3147 return ret; 3148 3148 } 3149 3149 3150 - /* clm -- taken from fs/buffer.c:block_invalidate_page */ 3151 - static void reiserfs_invalidatepage(struct page *page, unsigned int offset, 3152 - unsigned int length) 3150 + /* clm -- taken from fs/buffer.c:block_invalidate_folio */ 3151 + static void reiserfs_invalidate_folio(struct folio *folio, size_t offset, 3152 + size_t length) 3153 3153 { 3154 3154 struct buffer_head *head, *bh, *next; 3155 - struct inode *inode = page->mapping->host; 3155 + struct inode *inode = folio->mapping->host; 3156 3156 unsigned int curr_off = 0; 3157 3157 unsigned int stop = offset + length; 3158 - int partial_page = (offset || length < PAGE_SIZE); 3158 + int partial_page = (offset || length < folio_size(folio)); 3159 3159 int ret = 1; 3160 3160 3161 - BUG_ON(!PageLocked(page)); 3161 + BUG_ON(!folio_test_locked(folio)); 3162 3162 3163 3163 if (!partial_page) 3164 - ClearPageChecked(page); 3164 + folio_clear_checked(folio); 3165 3165 3166 - if (!page_has_buffers(page)) 3166 + head = folio_buffers(folio); 3167 + if (!head) 3167 3168 goto out; 3168 3169 3169 - head = page_buffers(page); 3170 3170 bh = head; 3171 3171 do { 3172 3172 unsigned int next_off = curr_off + bh->b_size; ··· 3179 3179 * is this block fully invalidated? 3180 3180 */ 3181 3181 if (offset <= curr_off) { 3182 - if (invalidatepage_can_drop(inode, bh)) 3182 + if (invalidate_folio_can_drop(inode, bh)) 3183 3183 reiserfs_unmap_buffer(bh); 3184 3184 else 3185 3185 ret = 0; ··· 3194 3194 * so real IO is not possible anymore. 
3195 3195 */ 3196 3196 if (!partial_page && ret) { 3197 - ret = try_to_release_page(page, 0); 3197 + ret = filemap_release_folio(folio, 0); 3198 3198 /* maybe should BUG_ON(!ret); - neilb */ 3199 3199 } 3200 3200 out: 3201 3201 return; 3202 3202 } 3203 3203 3204 - static int reiserfs_set_page_dirty(struct page *page) 3204 + static bool reiserfs_dirty_folio(struct address_space *mapping, 3205 + struct folio *folio) 3205 3206 { 3206 - struct inode *inode = page->mapping->host; 3207 - if (reiserfs_file_data_log(inode)) { 3208 - SetPageChecked(page); 3209 - return __set_page_dirty_nobuffers(page); 3207 + if (reiserfs_file_data_log(mapping->host)) { 3208 + folio_set_checked(folio); 3209 + return filemap_dirty_folio(mapping, folio); 3210 3210 } 3211 - return __set_page_dirty_buffers(page); 3211 + return block_dirty_folio(mapping, folio); 3212 3212 } 3213 3213 3214 3214 /* ··· 3430 3430 .readpage = reiserfs_readpage, 3431 3431 .readahead = reiserfs_readahead, 3432 3432 .releasepage = reiserfs_releasepage, 3433 - .invalidatepage = reiserfs_invalidatepage, 3433 + .invalidate_folio = reiserfs_invalidate_folio, 3434 3434 .write_begin = reiserfs_write_begin, 3435 3435 .write_end = reiserfs_write_end, 3436 3436 .bmap = reiserfs_aop_bmap, 3437 3437 .direct_IO = reiserfs_direct_IO, 3438 - .set_page_dirty = reiserfs_set_page_dirty, 3438 + .dirty_folio = reiserfs_dirty_folio, 3439 3439 };
+2 -2
fs/reiserfs/journal.c
··· 858 858 ret = -EIO; 859 859 } 860 860 /* 861 - * ugly interaction with invalidatepage here. 862 - * reiserfs_invalidate_page will pin any buffer that has a 861 + * ugly interaction with invalidate_folio here. 862 + * reiserfs_invalidate_folio will pin any buffer that has a 863 863 * valid journal head from an older transaction. If someone 864 864 * else sets our buffer dirty after we write it in the first 865 865 * loop, and then someone truncates the page away, nobody
+8 -8
fs/remap_range.c
··· 146 146 } 147 147 148 148 /* Read a page's worth of file data into the page cache. */ 149 - static struct folio *vfs_dedupe_get_folio(struct inode *inode, loff_t pos) 149 + static struct folio *vfs_dedupe_get_folio(struct file *file, loff_t pos) 150 150 { 151 151 struct folio *folio; 152 152 153 - folio = read_mapping_folio(inode->i_mapping, pos >> PAGE_SHIFT, NULL); 153 + folio = read_mapping_folio(file->f_mapping, pos >> PAGE_SHIFT, file); 154 154 if (IS_ERR(folio)) 155 155 return folio; 156 156 if (!folio_test_uptodate(folio)) { ··· 187 187 * Compare extents of two files to see if they are the same. 188 188 * Caller must have locked both inodes to prevent write races. 189 189 */ 190 - static int vfs_dedupe_file_range_compare(struct inode *src, loff_t srcoff, 191 - struct inode *dest, loff_t dstoff, 190 + static int vfs_dedupe_file_range_compare(struct file *src, loff_t srcoff, 191 + struct file *dest, loff_t dstoff, 192 192 loff_t len, bool *is_same) 193 193 { 194 194 bool same = true; ··· 224 224 * someone is invalidating pages on us and we lose. 225 225 */ 226 226 if (!folio_test_uptodate(src_folio) || !folio_test_uptodate(dst_folio) || 227 - src_folio->mapping != src->i_mapping || 228 - dst_folio->mapping != dest->i_mapping) { 227 + src_folio->mapping != src->f_mapping || 228 + dst_folio->mapping != dest->f_mapping) { 229 229 same = false; 230 230 goto unlock; 231 231 } ··· 333 333 if (remap_flags & REMAP_FILE_DEDUP) { 334 334 bool is_same = false; 335 335 336 - ret = vfs_dedupe_file_range_compare(inode_in, pos_in, 337 - inode_out, pos_out, *len, &is_same); 336 + ret = vfs_dedupe_file_range_compare(file_in, pos_in, 337 + file_out, pos_out, *len, &is_same); 338 338 if (ret) 339 339 return ret; 340 340 if (!is_same)
+2 -1
fs/sysv/itree.c
··· 495 495 } 496 496 497 497 const struct address_space_operations sysv_aops = { 498 - .set_page_dirty = __set_page_dirty_buffers, 498 + .dirty_folio = block_dirty_folio, 499 + .invalidate_folio = block_invalidate_folio, 499 500 .readpage = sysv_readpage, 500 501 .writepage = sysv_writepage, 501 502 .write_begin = sysv_write_begin,
+17 -17
fs/ubifs/file.c
··· 1287 1287 return err; 1288 1288 } 1289 1289 1290 - static void ubifs_invalidatepage(struct page *page, unsigned int offset, 1291 - unsigned int length) 1290 + static void ubifs_invalidate_folio(struct folio *folio, size_t offset, 1291 + size_t length) 1292 1292 { 1293 - struct inode *inode = page->mapping->host; 1293 + struct inode *inode = folio->mapping->host; 1294 1294 struct ubifs_info *c = inode->i_sb->s_fs_info; 1295 1295 1296 - ubifs_assert(c, PagePrivate(page)); 1297 - if (offset || length < PAGE_SIZE) 1298 - /* Partial page remains dirty */ 1296 + ubifs_assert(c, folio_test_private(folio)); 1297 + if (offset || length < folio_size(folio)) 1298 + /* Partial folio remains dirty */ 1299 1299 return; 1300 1300 1301 - if (PageChecked(page)) 1301 + if (folio_test_checked(folio)) 1302 1302 release_new_page_budget(c); 1303 1303 else 1304 1304 release_existing_page_budget(c); 1305 1305 1306 1306 atomic_long_dec(&c->dirty_pg_cnt); 1307 - ClearPagePrivate(page); 1308 - ClearPageChecked(page); 1307 + folio_clear_private(folio); 1308 + folio_clear_checked(folio); 1309 1309 } 1310 1310 1311 1311 int ubifs_fsync(struct file *file, loff_t start, loff_t end, int datasync) ··· 1445 1445 return generic_file_write_iter(iocb, from); 1446 1446 } 1447 1447 1448 - static int ubifs_set_page_dirty(struct page *page) 1448 + static bool ubifs_dirty_folio(struct address_space *mapping, 1449 + struct folio *folio) 1449 1450 { 1450 - int ret; 1451 - struct inode *inode = page->mapping->host; 1452 - struct ubifs_info *c = inode->i_sb->s_fs_info; 1451 + bool ret; 1452 + struct ubifs_info *c = mapping->host->i_sb->s_fs_info; 1453 1453 1454 - ret = __set_page_dirty_nobuffers(page); 1454 + ret = filemap_dirty_folio(mapping, folio); 1455 1455 /* 1456 1456 * An attempt to dirty a page without budgeting for it - should not 1457 1457 * happen. 
1458 1458 */ 1459 - ubifs_assert(c, ret == 0); 1459 + ubifs_assert(c, ret == false); 1460 1460 return ret; 1461 1461 } 1462 1462 ··· 1646 1646 .writepage = ubifs_writepage, 1647 1647 .write_begin = ubifs_write_begin, 1648 1648 .write_end = ubifs_write_end, 1649 - .invalidatepage = ubifs_invalidatepage, 1650 - .set_page_dirty = ubifs_set_page_dirty, 1649 + .invalidate_folio = ubifs_invalidate_folio, 1650 + .dirty_folio = ubifs_dirty_folio, 1651 1651 #ifdef CONFIG_MIGRATION 1652 1652 .migratepage = ubifs_migrate_page, 1653 1653 #endif
+2 -1
fs/udf/file.c
··· 125 125 } 126 126 127 127 const struct address_space_operations udf_adinicb_aops = { 128 - .set_page_dirty = __set_page_dirty_buffers, 128 + .dirty_folio = block_dirty_folio, 129 + .invalidate_folio = block_invalidate_folio, 129 130 .readpage = udf_adinicb_readpage, 130 131 .writepage = udf_adinicb_writepage, 131 132 .write_begin = udf_adinicb_write_begin,
+2 -1
fs/udf/inode.c
··· 235 235 } 236 236 237 237 const struct address_space_operations udf_aops = { 238 - .set_page_dirty = __set_page_dirty_buffers, 238 + .dirty_folio = block_dirty_folio, 239 + .invalidate_folio = block_invalidate_folio, 239 240 .readpage = udf_readpage, 240 241 .readahead = udf_readahead, 241 242 .writepage = udf_writepage,
+2 -1
fs/ufs/inode.c
··· 526 526 } 527 527 528 528 const struct address_space_operations ufs_aops = { 529 - .set_page_dirty = __set_page_dirty_buffers, 529 + .dirty_folio = block_dirty_folio, 530 + .invalidate_folio = block_invalidate_folio, 530 531 .readpage = ufs_readpage, 531 532 .writepage = ufs_writepage, 532 533 .write_begin = ufs_write_begin,
+1 -1
fs/vboxsf/file.c
··· 354 354 const struct address_space_operations vboxsf_reg_aops = { 355 355 .readpage = vboxsf_readpage, 356 356 .writepage = vboxsf_writepage, 357 - .set_page_dirty = __set_page_dirty_nobuffers, 357 + .dirty_folio = filemap_dirty_folio, 358 358 .write_begin = simple_write_begin, 359 359 .write_end = vboxsf_write_end, 360 360 };
+3 -4
fs/xfs/xfs_aops.c
··· 567 567 .readpage = xfs_vm_readpage, 568 568 .readahead = xfs_vm_readahead, 569 569 .writepages = xfs_vm_writepages, 570 - .set_page_dirty = __set_page_dirty_nobuffers, 570 + .dirty_folio = filemap_dirty_folio, 571 571 .releasepage = iomap_releasepage, 572 - .invalidatepage = iomap_invalidatepage, 572 + .invalidate_folio = iomap_invalidate_folio, 573 573 .bmap = xfs_vm_bmap, 574 574 .direct_IO = noop_direct_IO, 575 575 .migratepage = iomap_migrate_page, ··· 581 581 const struct address_space_operations xfs_dax_aops = { 582 582 .writepages = xfs_dax_writepages, 583 583 .direct_IO = noop_direct_IO, 584 - .set_page_dirty = __set_page_dirty_no_writeback, 585 - .invalidatepage = noop_invalidatepage, 584 + .dirty_folio = noop_dirty_folio, 586 585 .swap_activate = xfs_iomap_swapfile_activate, 587 586 };
+2 -2
fs/zonefs/super.c
··· 185 185 .readahead = zonefs_readahead, 186 186 .writepage = zonefs_writepage, 187 187 .writepages = zonefs_writepages, 188 - .set_page_dirty = __set_page_dirty_nobuffers, 188 + .dirty_folio = filemap_dirty_folio, 189 189 .releasepage = iomap_releasepage, 190 - .invalidatepage = iomap_invalidatepage, 190 + .invalidate_folio = iomap_invalidate_folio, 191 191 .migratepage = iomap_migrate_page, 192 192 .is_partially_uptodate = iomap_is_partially_uptodate, 193 193 .error_remove_page = generic_error_remove_page,
+4 -5
include/linux/buffer_head.h
··· 144 144 ((struct buffer_head *)page_private(page)); \ 145 145 }) 146 146 #define page_has_buffers(page) PagePrivate(page) 147 + #define folio_buffers(folio) folio_get_private(folio) 147 148 148 149 void buffer_check_dirty_writeback(struct page *page, 149 150 bool *dirty, bool *writeback); ··· 217 216 * Generic address_space_operations implementations for buffer_head-backed 218 217 * address_spaces. 219 218 */ 220 - void block_invalidatepage(struct page *page, unsigned int offset, 221 - unsigned int length); 219 + void block_invalidate_folio(struct folio *folio, size_t offset, size_t length); 222 220 int block_write_full_page(struct page *page, get_block_t *get_block, 223 221 struct writeback_control *wbc); 224 222 int __block_write_full_page(struct inode *inode, struct page *page, 225 223 get_block_t *get_block, struct writeback_control *wbc, 226 224 bh_end_io_t *handler); 227 225 int block_read_full_page(struct page*, get_block_t*); 228 - int block_is_partially_uptodate(struct page *page, unsigned long from, 229 - unsigned long count); 226 + bool block_is_partially_uptodate(struct folio *, size_t from, size_t count); 230 227 int block_write_begin(struct address_space *mapping, loff_t pos, unsigned len, 231 228 unsigned flags, struct page **pagep, get_block_t *get_block); 232 229 int __block_write_begin(struct page *page, loff_t pos, unsigned len, ··· 397 398 return __bread_gfp(bdev, block, size, __GFP_MOVABLE); 398 399 } 399 400 400 - extern int __set_page_dirty_buffers(struct page *page); 401 + bool block_dirty_folio(struct address_space *mapping, struct folio *folio); 401 402 402 403 #else /* CONFIG_BLOCK */ 403 404
+6 -8
include/linux/fs.h
··· 368 368 /* Write back some dirty pages from this mapping. */ 369 369 int (*writepages)(struct address_space *, struct writeback_control *); 370 370 371 - /* Set a page dirty. Return true if this dirtied it */ 372 - int (*set_page_dirty)(struct page *page); 371 + /* Mark a folio dirty. Return true if this dirtied it */ 372 + bool (*dirty_folio)(struct address_space *, struct folio *); 373 373 374 374 /* 375 375 * Reads in the requested pages. Unlike ->readpage(), this is ··· 388 388 389 389 /* Unfortunately this kludge is needed for FIBMAP. Don't use it */ 390 390 sector_t (*bmap)(struct address_space *, sector_t); 391 - void (*invalidatepage) (struct page *, unsigned int, unsigned int); 391 + void (*invalidate_folio) (struct folio *, size_t offset, size_t len); 392 392 int (*releasepage) (struct page *, gfp_t); 393 393 void (*freepage)(struct page *); 394 394 ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter); ··· 400 400 struct page *, struct page *, enum migrate_mode); 401 401 bool (*isolate_page)(struct page *, isolate_mode_t); 402 402 void (*putback_page)(struct page *); 403 - int (*launder_page) (struct page *); 404 - int (*is_partially_uptodate) (struct page *, unsigned long, 405 - unsigned long); 403 + int (*launder_folio)(struct folio *); 404 + bool (*is_partially_uptodate) (struct folio *, size_t from, 405 + size_t count); 406 406 void (*is_dirty_writeback) (struct page *, bool *, bool *); 407 407 int (*error_remove_page)(struct address_space *, struct page *); 408 408 ··· 3232 3232 extern void simple_recursive_removal(struct dentry *, 3233 3233 void (*callback)(struct dentry *)); 3234 3234 extern int noop_fsync(struct file *, loff_t, loff_t, int); 3235 - extern void noop_invalidatepage(struct page *page, unsigned int offset, 3236 - unsigned int length); 3237 3235 extern ssize_t noop_direct_IO(struct kiocb *iocb, struct iov_iter *iter); 3238 3236 extern int simple_empty(struct dentry *); 3239 3237 extern int simple_write_begin(struct file 
*file, struct address_space *mapping,
+5 -3
include/linux/fscache.h
··· 616 616 } 617 617 618 618 #if __fscache_available 619 - extern int fscache_set_page_dirty(struct page *page, struct fscache_cookie *cookie); 619 + bool fscache_dirty_folio(struct address_space *mapping, struct folio *folio, 620 + struct fscache_cookie *cookie); 620 621 #else 621 - #define fscache_set_page_dirty(PAGE, COOKIE) (__set_page_dirty_nobuffers((PAGE))) 622 + #define fscache_dirty_folio(MAPPING, FOLIO, COOKIE) \ 623 + filemap_dirty_folio(MAPPING, FOLIO) 622 624 #endif 623 625 624 626 /** ··· 628 626 * @wbc: The writeback control 629 627 * @cookie: The cookie referring to the cache object 630 628 * 631 - * Unpin the writeback resources pinned by fscache_set_page_dirty(). This is 629 + * Unpin the writeback resources pinned by fscache_dirty_folio(). This is 632 630 * intended to be called by the netfs's ->write_inode() method. 633 631 */ 634 632 static inline void fscache_unpin_writeback(struct writeback_control *wbc,
+1 -4
include/linux/iomap.h
··· 227 227 const struct iomap_ops *ops); 228 228 int iomap_readpage(struct page *page, const struct iomap_ops *ops); 229 229 void iomap_readahead(struct readahead_control *, const struct iomap_ops *ops); 230 - int iomap_is_partially_uptodate(struct page *page, unsigned long from, 231 - unsigned long count); 230 + bool iomap_is_partially_uptodate(struct folio *, size_t from, size_t count); 232 231 int iomap_releasepage(struct page *page, gfp_t gfp_mask); 233 232 void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len); 234 - void iomap_invalidatepage(struct page *page, unsigned int offset, 235 - unsigned int len); 236 233 #ifdef CONFIG_MIGRATION 237 234 int iomap_migrate_page(struct address_space *mapping, struct page *newpage, 238 235 struct page *page, enum migrate_mode mode);
+2 -2
include/linux/jbd2.h
··· 1527 1527 struct jbd2_buffer_trigger_type *type); 1528 1528 extern int jbd2_journal_dirty_metadata (handle_t *, struct buffer_head *); 1529 1529 extern int jbd2_journal_forget (handle_t *, struct buffer_head *); 1530 - extern int jbd2_journal_invalidatepage(journal_t *, 1531 - struct page *, unsigned int, unsigned int); 1530 + int jbd2_journal_invalidate_folio(journal_t *, struct folio *, 1531 + size_t offset, size_t length); 1532 1532 extern int jbd2_journal_try_to_free_buffers(journal_t *journal, struct page *page); 1533 1533 extern int jbd2_journal_stop(handle_t *); 1534 1534 extern int jbd2_journal_flush(journal_t *journal, unsigned int flags);
-3
include/linux/mm.h
··· 1947 1947 struct page **pages); 1948 1948 struct page *get_dump_page(unsigned long addr); 1949 1949 1950 - extern void do_invalidatepage(struct page *page, unsigned int offset, 1951 - unsigned int length); 1952 - 1953 1950 bool folio_mark_dirty(struct folio *folio); 1954 1951 bool set_page_dirty(struct page *page); 1955 1952 int set_page_dirty_lock(struct page *page);
+1 -1
include/linux/nfs_fs.h
··· 583 583 extern int nfs_sync_inode(struct inode *inode); 584 584 extern int nfs_wb_all(struct inode *inode); 585 585 extern int nfs_wb_page(struct inode *inode, struct page *page); 586 - extern int nfs_wb_page_cancel(struct inode *inode, struct page* page); 586 + int nfs_wb_folio_cancel(struct inode *inode, struct folio *folio); 587 587 extern int nfs_commit_inode(struct inode *, int); 588 588 extern struct nfs_commit_data *nfs_commitdata_alloc(bool never_fail); 589 589 extern void nfs_commit_free(struct nfs_commit_data *data);
+24 -5
include/linux/pagemap.h
··· 532 532 } 533 533 534 534 /** 535 + * filemap_lock_folio - Find and lock a folio. 536 + * @mapping: The address_space to search. 537 + * @index: The page index. 538 + * 539 + * Looks up the page cache entry at @mapping & @index. If a folio is 540 + * present, it is returned locked with an increased refcount. 541 + * 542 + * Context: May sleep. 543 + * Return: A folio or %NULL if there is no folio in the cache for this 544 + * index. Will not return a shadow, swap or DAX entry. 545 + */ 546 + static inline struct folio *filemap_lock_folio(struct address_space *mapping, 547 + pgoff_t index) 548 + { 549 + return __filemap_get_folio(mapping, index, FGP_LOCK, 0); 550 + } 551 + 552 + /** 535 553 * find_get_page - find and get a page reference 536 554 * @mapping: the address_space to search 537 555 * @offset: the page index ··· 756 738 struct list_head *pages, filler_t *filler, void *data); 757 739 758 740 static inline struct page *read_mapping_page(struct address_space *mapping, 759 - pgoff_t index, void *data) 741 + pgoff_t index, struct file *file) 760 742 { 761 - return read_cache_page(mapping, index, NULL, data); 743 + return read_cache_page(mapping, index, NULL, file); 762 744 } 763 745 764 746 static inline struct folio *read_mapping_folio(struct address_space *mapping, 765 - pgoff_t index, void *data) 747 + pgoff_t index, struct file *file) 766 748 { 767 - return read_cache_folio(mapping, index, NULL, data); 749 + return read_cache_folio(mapping, index, NULL, file); 768 750 } 769 751 770 752 /* ··· 1024 1006 } 1025 1007 bool folio_clear_dirty_for_io(struct folio *folio); 1026 1008 bool clear_page_dirty_for_io(struct page *page); 1009 + void folio_invalidate(struct folio *folio, size_t offset, size_t length); 1027 1010 int __must_check folio_write_one(struct folio *folio); 1028 1011 static inline int __must_check write_one_page(struct page *page) 1029 1012 { ··· 1032 1013 } 1033 1014 1034 1015 int __set_page_dirty_nobuffers(struct page *page); 1035 - int 
__set_page_dirty_no_writeback(struct page *page); 1016 + bool noop_dirty_folio(struct address_space *mapping, struct folio *folio); 1036 1017 1037 1018 void page_endio(struct page *page, bool is_write, int err); 1038 1019
+1 -1
include/linux/swap.h
··· 428 428 extern void end_swap_bio_write(struct bio *bio); 429 429 extern int __swap_writepage(struct page *page, struct writeback_control *wbc, 430 430 bio_end_io_t end_write_func); 431 - extern int swap_set_page_dirty(struct page *page); 431 + bool swap_dirty_folio(struct address_space *mapping, struct folio *folio); 432 432 433 433 int add_swap_extent(struct swap_info_struct *sis, unsigned long start_page, 434 434 unsigned long nr_pages, sector_t start_block);
+15 -15
include/trace/events/ext4.h
··· 608 608 TP_ARGS(page) 609 609 ); 610 610 611 - DECLARE_EVENT_CLASS(ext4_invalidatepage_op, 612 - TP_PROTO(struct page *page, unsigned int offset, unsigned int length), 611 + DECLARE_EVENT_CLASS(ext4_invalidate_folio_op, 612 + TP_PROTO(struct folio *folio, size_t offset, size_t length), 613 613 614 - TP_ARGS(page, offset, length), 614 + TP_ARGS(folio, offset, length), 615 615 616 616 TP_STRUCT__entry( 617 617 __field( dev_t, dev ) 618 618 __field( ino_t, ino ) 619 619 __field( pgoff_t, index ) 620 - __field( unsigned int, offset ) 621 - __field( unsigned int, length ) 620 + __field( size_t, offset ) 621 + __field( size_t, length ) 622 622 ), 623 623 624 624 TP_fast_assign( 625 - __entry->dev = page->mapping->host->i_sb->s_dev; 626 - __entry->ino = page->mapping->host->i_ino; 627 - __entry->index = page->index; 625 + __entry->dev = folio->mapping->host->i_sb->s_dev; 626 + __entry->ino = folio->mapping->host->i_ino; 627 + __entry->index = folio->index; 628 628 __entry->offset = offset; 629 629 __entry->length = length; 630 630 ), 631 631 632 - TP_printk("dev %d,%d ino %lu page_index %lu offset %u length %u", 632 + TP_printk("dev %d,%d ino %lu folio_index %lu offset %zu length %zu", 633 633 MAJOR(__entry->dev), MINOR(__entry->dev), 634 634 (unsigned long) __entry->ino, 635 635 (unsigned long) __entry->index, 636 636 __entry->offset, __entry->length) 637 637 ); 638 638 639 - DEFINE_EVENT(ext4_invalidatepage_op, ext4_invalidatepage, 640 - TP_PROTO(struct page *page, unsigned int offset, unsigned int length), 639 + DEFINE_EVENT(ext4_invalidate_folio_op, ext4_invalidate_folio, 640 + TP_PROTO(struct folio *folio, size_t offset, size_t length), 641 641 642 - TP_ARGS(page, offset, length) 642 + TP_ARGS(folio, offset, length) 643 643 ); 644 644 645 - DEFINE_EVENT(ext4_invalidatepage_op, ext4_journalled_invalidatepage, 646 - TP_PROTO(struct page *page, unsigned int offset, unsigned int length), 645 + DEFINE_EVENT(ext4_invalidate_folio_op, ext4_journalled_invalidate_folio, 
646 + TP_PROTO(struct folio *folio, size_t offset, size_t length), 647 647 648 - TP_ARGS(page, offset, length) 648 + TP_ARGS(folio, offset, length) 649 649 ); 650 650 651 651 TRACE_EVENT(ext4_discard_blocks,
+4 -4
mm/filemap.c
··· 72 72 * Lock ordering: 73 73 * 74 74 * ->i_mmap_rwsem (truncate_pagecache) 75 - * ->private_lock (__free_pte->__set_page_dirty_buffers) 75 + * ->private_lock (__free_pte->block_dirty_folio) 76 76 * ->swap_lock (exclusive_swap_page, others) 77 77 * ->i_pages lock 78 78 * ··· 115 115 * ->memcg->move_lock (page_remove_rmap->lock_page_memcg) 116 116 * bdi.wb->list_lock (zap_pte_range->set_page_dirty) 117 117 * ->inode->i_lock (zap_pte_range->set_page_dirty) 118 - * ->private_lock (zap_pte_range->__set_page_dirty_buffers) 118 + * ->private_lock (zap_pte_range->block_dirty_folio) 119 119 * 120 120 * ->i_mmap_rwsem 121 121 * ->tasklist_lock (memory_failure, collect_procs_ao) ··· 2464 2464 pos -= folio_pos(folio); 2465 2465 } 2466 2466 2467 - return mapping->a_ops->is_partially_uptodate(&folio->page, pos, count); 2467 + return mapping->a_ops->is_partially_uptodate(folio, pos, count); 2468 2468 } 2469 2469 2470 2470 static int filemap_update_page(struct kiocb *iocb, ··· 2856 2856 offset = offset_in_folio(folio, start) & ~(bsz - 1); 2857 2857 2858 2858 do { 2859 - if (ops->is_partially_uptodate(&folio->page, offset, bsz) == 2859 + if (ops->is_partially_uptodate(folio, offset, bsz) == 2860 2860 seek_data) 2861 2861 break; 2862 2862 start = (start + bsz) & ~(bsz - 1);
+17 -19
mm/page-writeback.c
··· 2418 2418 /* 2419 2419 * For address_spaces which do not use buffers nor write back. 2420 2420 */ 2421 - int __set_page_dirty_no_writeback(struct page *page) 2421 + bool noop_dirty_folio(struct address_space *mapping, struct folio *folio) 2422 2422 { 2423 - if (!PageDirty(page)) 2424 - return !TestSetPageDirty(page); 2425 - return 0; 2423 + if (!folio_test_dirty(folio)) 2424 + return !folio_test_set_dirty(folio); 2425 + return false; 2426 2426 } 2427 - EXPORT_SYMBOL(__set_page_dirty_no_writeback); 2427 + EXPORT_SYMBOL(noop_dirty_folio); 2428 2428 2429 2429 /* 2430 2430 * Helper function for set_page_dirty family. ··· 2518 2518 * This is also sometimes used by filesystems which use buffer_heads when 2519 2519 * a single buffer is being dirtied: we want to set the folio dirty in 2520 2520 * that case, but not all the buffers. This is a "bottom-up" dirtying, 2521 - * whereas __set_page_dirty_buffers() is a "top-down" dirtying. 2521 + * whereas block_dirty_folio() is a "top-down" dirtying. 2522 2522 * 2523 2523 * The caller must ensure this doesn't race with truncation. Most will 2524 2524 * simply hold the folio lock, but e.g. zap_pte_range() calls with the ··· 2604 2604 * folio_mark_dirty - Mark a folio as being modified. 2605 2605 * @folio: The folio. 2606 2606 * 2607 - * For folios with a mapping this should be done under the page lock 2607 + * For folios with a mapping this should be done with the folio lock held 2608 2608 * for the benefit of asynchronous memory errors who prefer a consistent 2609 2609 * dirty state. This rule can be broken in some special cases, 2610 2610 * but should be better not to. 
··· 2618 2618 if (likely(mapping)) { 2619 2619 /* 2620 2620 * readahead/lru_deactivate_page could remain 2621 - * PG_readahead/PG_reclaim due to race with end_page_writeback 2622 - * About readahead, if the page is written, the flags would be 2621 + * PG_readahead/PG_reclaim due to race with folio_end_writeback 2622 + * About readahead, if the folio is written, the flags would be 2623 2623 * reset. So no problem. 2624 - * About lru_deactivate_page, if the page is redirty, the flag 2625 - * will be reset. So no problem. but if the page is used by readahead 2626 - * it will confuse readahead and make it restart the size rampup 2627 - * process. But it's a trivial problem. 2624 + * About lru_deactivate_page, if the folio is redirtied, 2625 + * the flag will be reset. So no problem. but if the 2626 + * folio is used by readahead it will confuse readahead 2627 + * and make it restart the size rampup process. But it's 2628 + * a trivial problem. 2628 2629 */ 2629 2630 if (folio_test_reclaim(folio)) 2630 2631 folio_clear_reclaim(folio); 2631 - return mapping->a_ops->set_page_dirty(&folio->page); 2632 + return mapping->a_ops->dirty_folio(mapping, folio); 2632 2633 } 2633 - if (!folio_test_dirty(folio)) { 2634 - if (!folio_test_set_dirty(folio)) 2635 - return true; 2636 - } 2637 - return false; 2634 + 2635 + return noop_dirty_folio(mapping, folio); 2638 2636 } 2639 2637 EXPORT_SYMBOL(folio_mark_dirty); 2640 2638
+9 -6
mm/page_io.c
··· 439 439 return ret; 440 440 } 441 441 442 - int swap_set_page_dirty(struct page *page) 442 + bool swap_dirty_folio(struct address_space *mapping, struct folio *folio) 443 443 { 444 - struct swap_info_struct *sis = page_swap_info(page); 444 + struct swap_info_struct *sis = swp_swap_info(folio_swap_entry(folio)); 445 445 446 446 if (data_race(sis->flags & SWP_FS_OPS)) { 447 - struct address_space *mapping = sis->swap_file->f_mapping; 447 + const struct address_space_operations *aops; 448 448 449 - VM_BUG_ON_PAGE(!PageSwapCache(page), page); 450 - return mapping->a_ops->set_page_dirty(page); 449 + mapping = sis->swap_file->f_mapping; 450 + aops = mapping->a_ops; 451 + 452 + VM_BUG_ON_FOLIO(!folio_test_swapcache(folio), folio); 453 + return aops->dirty_folio(mapping, folio); 451 454 } else { 452 - return __set_page_dirty_no_writeback(page); 455 + return noop_dirty_folio(mapping, folio); 453 456 } 454 457 }
+1 -1
mm/readahead.c
··· 156 156 if (!trylock_page(page)) 157 157 BUG(); 158 158 page->mapping = mapping; 159 - do_invalidatepage(page, 0, PAGE_SIZE); 159 + folio_invalidate(page_folio(page), 0, PAGE_SIZE); 160 160 page->mapping = NULL; 161 161 unlock_page(page); 162 162 }
+2 -2
mm/rmap.c
··· 31 31 * mm->page_table_lock or pte_lock 32 32 * swap_lock (in swap_duplicate, swap_info_get) 33 33 * mmlist_lock (in mmput, drain_mmlist and others) 34 - * mapping->private_lock (in __set_page_dirty_buffers) 35 - * lock_page_memcg move_lock (in __set_page_dirty_buffers) 34 + * mapping->private_lock (in block_dirty_folio) 35 + * folio_lock_memcg move_lock (in block_dirty_folio) 36 36 * i_pages lock (widely used) 37 37 * lruvec->lru_lock (in folio_lruvec_lock_irq) 38 38 * inode->i_lock (in set_page_dirty's __mark_inode_dirty)
+1 -1
mm/secretmem.c
··· 152 152 } 153 153 154 154 const struct address_space_operations secretmem_aops = { 155 - .set_page_dirty = __set_page_dirty_no_writeback, 155 + .dirty_folio = noop_dirty_folio, 156 156 .freepage = secretmem_freepage, 157 157 .migratepage = secretmem_migratepage, 158 158 .isolate_page = secretmem_isolate_page,
+1 -1
mm/shmem.c
··· 3756 3756 3757 3757 const struct address_space_operations shmem_aops = { 3758 3758 .writepage = shmem_writepage, 3759 - .set_page_dirty = __set_page_dirty_no_writeback, 3759 + .dirty_folio = noop_dirty_folio, 3760 3760 #ifdef CONFIG_TMPFS 3761 3761 .write_begin = shmem_write_begin, 3762 3762 .write_end = shmem_write_end,
+1 -1
mm/swap_state.c
··· 30 30 */ 31 31 static const struct address_space_operations swap_aops = { 32 32 .writepage = swap_writepage, 33 - .set_page_dirty = swap_set_page_dirty, 33 + .dirty_folio = swap_dirty_folio, 34 34 #ifdef CONFIG_MIGRATION 35 35 .migratepage = migrate_page, 36 36 #endif
+17 -23
mm/truncate.c
··· 19 19 #include <linux/highmem.h> 20 20 #include <linux/pagevec.h> 21 21 #include <linux/task_io_accounting_ops.h> 22 - #include <linux/buffer_head.h> /* grr. try_to_release_page, 23 - do_invalidatepage */ 22 + #include <linux/buffer_head.h> /* grr. try_to_release_page */ 24 23 #include <linux/shmem_fs.h> 25 24 #include <linux/rmap.h> 26 25 #include "internal.h" ··· 137 138 } 138 139 139 140 /** 140 - * do_invalidatepage - invalidate part or all of a page 141 - * @page: the page which is affected 141 + * folio_invalidate - Invalidate part or all of a folio. 142 + * @folio: The folio which is affected. 142 143 * @offset: start of the range to invalidate 143 144 * @length: length of the range to invalidate 144 145 * 145 - * do_invalidatepage() is called when all or part of the page has become 146 + * folio_invalidate() is called when all or part of the folio has become 146 147 * invalidated by a truncate operation. 147 148 * 148 - * do_invalidatepage() does not have to release all buffers, but it must 149 + * folio_invalidate() does not have to release all buffers, but it must 149 150 * ensure that no dirty buffer is left outside @offset and that no I/O 150 151 * is underway against any of the blocks which are outside the truncation 151 152 * point. Because the caller is about to free (and possibly reuse) those 152 153 * blocks on-disk. 
153 154 */ 154 - void do_invalidatepage(struct page *page, unsigned int offset, 155 - unsigned int length) 155 + void folio_invalidate(struct folio *folio, size_t offset, size_t length) 156 156 { 157 - void (*invalidatepage)(struct page *, unsigned int, unsigned int); 157 + const struct address_space_operations *aops = folio->mapping->a_ops; 158 158 159 - invalidatepage = page->mapping->a_ops->invalidatepage; 160 - #ifdef CONFIG_BLOCK 161 - if (!invalidatepage) 162 - invalidatepage = block_invalidatepage; 163 - #endif 164 - if (invalidatepage) 165 - (*invalidatepage)(page, offset, length); 159 + if (aops->invalidate_folio) 160 + aops->invalidate_folio(folio, offset, length); 166 161 } 162 + EXPORT_SYMBOL_GPL(folio_invalidate); 167 163 168 164 /* 169 165 * If truncate cannot remove the fs-private metadata from the page, the page ··· 176 182 unmap_mapping_folio(folio); 177 183 178 184 if (folio_has_private(folio)) 179 - do_invalidatepage(&folio->page, 0, folio_size(folio)); 185 + folio_invalidate(folio, 0, folio_size(folio)); 180 186 181 187 /* 182 188 * Some filesystems seem to re-dirty the page even after ··· 237 243 folio_zero_range(folio, offset, length); 238 244 239 245 if (folio_has_private(folio)) 240 - do_invalidatepage(&folio->page, offset, length); 246 + folio_invalidate(folio, offset, length); 241 247 if (!folio_test_large(folio)) 242 248 return true; 243 249 if (split_huge_page(&folio->page) == 0) ··· 323 329 * mapping is large, it is probably the case that the final pages are the most 324 330 * recently touched, and freeing happens in ascending file offset order. 325 331 * 326 - * Note that since ->invalidatepage() accepts range to invalidate 332 + * Note that since ->invalidate_folio() accepts range to invalidate 327 333 * truncate_inode_pages_range is able to handle cases where lend + 1 is not 328 334 * page aligned properly. 
329 335 */ ··· 605 611 return 0; 606 612 } 607 613 608 - static int do_launder_folio(struct address_space *mapping, struct folio *folio) 614 + static int folio_launder(struct address_space *mapping, struct folio *folio) 609 615 { 610 616 if (!folio_test_dirty(folio)) 611 617 return 0; 612 - if (folio->mapping != mapping || mapping->a_ops->launder_page == NULL) 618 + if (folio->mapping != mapping || mapping->a_ops->launder_folio == NULL) 613 619 return 0; 614 - return mapping->a_ops->launder_page(&folio->page); 620 + return mapping->a_ops->launder_folio(folio); 615 621 } 616 622 617 623 /** ··· 677 683 unmap_mapping_folio(folio); 678 684 BUG_ON(folio_mapped(folio)); 679 685 680 - ret2 = do_launder_folio(mapping, folio); 686 + ret2 = folio_launder(mapping, folio); 681 687 if (ret2 == 0) { 682 688 if (!invalidate_complete_folio2(mapping, folio)) 683 689 ret2 = -EBUSY;