Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'folio-5.19' of git://git.infradead.org/users/willy/pagecache

Pull page cache updates from Matthew Wilcox:

- Appoint myself page cache maintainer

- Fix how scsicam uses the page cache

- Use the memalloc_nofs_save() API to replace AOP_FLAG_NOFS

- Remove the AOP flags entirely

- Remove pagecache_write_begin() and pagecache_write_end()

- Documentation updates

- Convert several address_space operations to use folios:
- is_dirty_writeback
- readpage becomes read_folio
- releasepage becomes release_folio
- freepage becomes free_folio

- Change filler_t to require a struct file pointer be the first
argument like ->read_folio

* tag 'folio-5.19' of git://git.infradead.org/users/willy/pagecache: (107 commits)
nilfs2: Fix some kernel-doc comments
Appoint myself page cache maintainer
fs: Remove aops->freepage
secretmem: Convert to free_folio
nfs: Convert to free_folio
orangefs: Convert to free_folio
fs: Add free_folio address space operation
fs: Convert drop_buffers() to use a folio
fs: Change try_to_free_buffers() to take a folio
jbd2: Convert release_buffer_page() to use a folio
jbd2: Convert jbd2_journal_try_to_free_buffers to take a folio
reiserfs: Convert release_buffer_page() to use a folio
fs: Remove last vestiges of releasepage
ubifs: Convert to release_folio
reiserfs: Convert to release_folio
orangefs: Convert to release_folio
ocfs2: Convert to release_folio
nilfs2: Remove comment about releasepage
nfs: Convert to release_folio
jfs: Convert to release_folio
...

+1233 -1221
+2 -2
Documentation/filesystems/caching/netfs-api.rst
··· 433 433 after which it *has* to look in the cache. 434 434 435 435 To inform fscache that a page might now be in the cache, the following function 436 - should be called from the ``releasepage`` address space op:: 436 + should be called from the ``release_folio`` address space op:: 437 437 438 438 void fscache_note_page_release(struct fscache_cookie *cookie); 439 439 440 - if the page has been released (ie. releasepage returned true). 440 + if the page has been released (ie. release_folio returned true). 441 441 442 442 Page release and page invalidation should also wait for any mark left on the 443 443 page to say that a DIO write is underway from that page::
+1 -1
Documentation/filesystems/fscrypt.rst
··· 1256 1256 When inline encryption isn't used, filesystems must encrypt/decrypt 1257 1257 the file contents themselves, as described below: 1258 1258 1259 - For the read path (->readpage()) of regular files, filesystems can 1259 + For the read path (->read_folio()) of regular files, filesystems can 1260 1260 read the ciphertext into the page cache and decrypt it in-place. The 1261 1261 page lock must be held until decryption has finished, to prevent the 1262 1262 page from becoming visible to userspace prematurely.
+1 -1
Documentation/filesystems/fsverity.rst
··· 559 559 Pagecache 560 560 ~~~~~~~~~ 561 561 562 - For filesystems using Linux's pagecache, the ``->readpage()`` and 562 + For filesystems using Linux's pagecache, the ``->read_folio()`` and 563 563 ``->readahead()`` methods must be modified to verify pages before they 564 564 are marked Uptodate. Merely hooking ``->read_iter()`` would be 565 565 insufficient, since ``->read_iter()`` is not used for memory maps.
+18 -18
Documentation/filesystems/locking.rst
··· 237 237 prototypes:: 238 238 239 239 int (*writepage)(struct page *page, struct writeback_control *wbc); 240 - int (*readpage)(struct file *, struct page *); 240 + int (*read_folio)(struct file *, struct folio *); 241 241 int (*writepages)(struct address_space *, struct writeback_control *); 242 242 bool (*dirty_folio)(struct address_space *, struct folio *folio); 243 243 void (*readahead)(struct readahead_control *); 244 244 int (*write_begin)(struct file *, struct address_space *mapping, 245 - loff_t pos, unsigned len, unsigned flags, 245 + loff_t pos, unsigned len, 246 246 struct page **pagep, void **fsdata); 247 247 int (*write_end)(struct file *, struct address_space *mapping, 248 248 loff_t pos, unsigned len, unsigned copied, 249 249 struct page *page, void *fsdata); 250 250 sector_t (*bmap)(struct address_space *, sector_t); 251 251 void (*invalidate_folio) (struct folio *, size_t start, size_t len); 252 - int (*releasepage) (struct page *, int); 253 - void (*freepage)(struct page *); 252 + bool (*release_folio)(struct folio *, gfp_t); 253 + void (*free_folio)(struct folio *); 254 254 int (*direct_IO)(struct kiocb *, struct iov_iter *iter); 255 255 bool (*isolate_page) (struct page *, isolate_mode_t); 256 256 int (*migratepage)(struct address_space *, struct page *, struct page *); ··· 262 262 int (*swap_deactivate)(struct file *); 263 263 264 264 locking rules: 265 - All except dirty_folio and freepage may block 265 + All except dirty_folio and free_folio may block 266 266 267 267 ====================== ======================== ========= =============== 268 - ops PageLocked(page) i_rwsem invalidate_lock 268 + ops folio locked i_rwsem invalidate_lock 269 269 ====================== ======================== ========= =============== 270 270 writepage: yes, unlocks (see below) 271 - readpage: yes, unlocks shared 271 + read_folio: yes, unlocks shared 272 272 writepages: 273 - dirty_folio maybe 273 + dirty_folio: maybe 274 274 readahead: yes, unlocks shared 
275 275 write_begin: locks the page exclusive 276 276 write_end: yes, unlocks exclusive 277 277 bmap: 278 278 invalidate_folio: yes exclusive 279 - releasepage: yes 280 - freepage: yes 279 + release_folio: yes 280 + free_folio: yes 281 281 direct_IO: 282 282 isolate_page: yes 283 283 migratepage: yes (both) ··· 289 289 swap_deactivate: no 290 290 ====================== ======================== ========= =============== 291 291 292 - ->write_begin(), ->write_end() and ->readpage() may be called from 292 + ->write_begin(), ->write_end() and ->read_folio() may be called from 293 293 the request handler (/dev/loop). 294 294 295 - ->readpage() unlocks the page, either synchronously or via I/O 295 + ->read_folio() unlocks the folio, either synchronously or via I/O 296 296 completion. 297 297 298 - ->readahead() unlocks the pages that I/O is attempted on like ->readpage(). 298 + ->readahead() unlocks the folios that I/O is attempted on like ->read_folio(). 299 299 300 300 ->writepage() is used for two purposes: for "memory cleansing" and for 301 301 "sync". These are quite different operations and the behaviour may differ ··· 372 372 path (and thus calling into ->invalidate_folio) to block races between page 373 373 cache invalidation and page cache filling functions (fault, read, ...). 374 374 375 - ->releasepage() is called when the kernel is about to try to drop the 376 - buffers from the page in preparation for freeing it. It returns zero to 377 - indicate that the buffers are (or may be) freeable. If ->releasepage is zero, 378 - the kernel assumes that the fs has no private interest in the buffers. 375 + ->release_folio() is called when the kernel is about to try to drop the 376 + buffers from the folio in preparation for freeing it. It returns false to 377 + indicate that the buffers are (or may be) freeable. If ->release_folio is 378 + NULL, the kernel assumes that the fs has no private interest in the buffers. 
379 379 380 - ->freepage() is called when the kernel is done dropping the page 380 + ->free_folio() is called when the kernel has dropped the folio 381 381 from the page cache. 382 382 383 383 ->launder_folio() may be called prior to releasing a folio if
+4 -5
Documentation/filesystems/netfs_library.rst
··· 96 96 Buffered Read Helpers 97 97 ===================== 98 98 99 - The library provides a set of read helpers that handle the ->readpage(), 99 + The library provides a set of read helpers that handle the ->read_folio(), 100 100 ->readahead() and much of the ->write_begin() VM operations and translate them 101 101 into a common call framework. 102 102 ··· 136 136 Three read helpers are provided:: 137 137 138 138 void netfs_readahead(struct readahead_control *ractl); 139 - int netfs_readpage(struct file *file, 140 - struct page *page); 139 + int netfs_read_folio(struct file *file, 140 + struct folio *folio); 141 141 int netfs_write_begin(struct file *file, 142 142 struct address_space *mapping, 143 143 loff_t pos, 144 144 unsigned int len, 145 - unsigned int flags, 146 145 struct folio **_folio, 147 146 void **_fsdata); 148 147 149 148 Each corresponds to a VM address space operation. These operations use the 150 149 state in the per-inode context. 151 150 152 - For ->readahead() and ->readpage(), the network filesystem just point directly 151 + For ->readahead() and ->read_folio(), the network filesystem just point directly 153 152 at the corresponding read helper; whereas for ->write_begin(), it may be a 154 153 little more complicated as the network filesystem might want to flush 155 154 conflicting writes or track dirty data and needs to put the acquired folio if
+1 -1
Documentation/filesystems/porting.rst
··· 624 624 have inode_nohighmem(inode) called before anything might start playing with 625 625 its pagecache. No highmem pages should end up in the pagecache of such 626 626 symlinks. That includes any preseeding that might be done during symlink 627 - creation. __page_symlink() will honour the mapping gfp flags, so once 627 + creation. page_symlink() will honour the mapping gfp flags, so once 628 628 you've done inode_nohighmem() it's safe to use, but if you allocate and 629 629 insert the page manually, make sure to use the right gfp flags. 630 630
+41 -45
Documentation/filesystems/vfs.rst
··· 620 620 The first can be used independently to the others. The VM can try to 621 621 either write dirty pages in order to clean them, or release clean pages 622 622 in order to reuse them. To do this it can call the ->writepage method 623 - on dirty pages, and ->releasepage on clean pages with PagePrivate set. 624 - Clean pages without PagePrivate and with no external references will be 625 - released without notice being given to the address_space. 623 + on dirty pages, and ->release_folio on clean folios with the private 624 + flag set. Clean pages without PagePrivate and with no external references 625 + will be released without notice being given to the address_space. 626 626 627 627 To achieve this functionality, pages need to be placed on an LRU with 628 628 lru_cache_add and mark_page_active needs to be called whenever the page ··· 656 656 the application, and then written-back to storage typically in whole 657 657 pages, however the address_space has finer control of write sizes. 658 658 659 - The read process essentially only requires 'readpage'. The write 659 + The read process essentially only requires 'read_folio'. The write 660 660 process is more complicated and uses write_begin/write_end or 661 661 dirty_folio to write data into the address_space, and writepage and 662 662 writepages to writeback data to storage. 
··· 722 722 723 723 struct address_space_operations { 724 724 int (*writepage)(struct page *page, struct writeback_control *wbc); 725 - int (*readpage)(struct file *, struct page *); 725 + int (*read_folio)(struct file *, struct folio *); 726 726 int (*writepages)(struct address_space *, struct writeback_control *); 727 727 bool (*dirty_folio)(struct address_space *, struct folio *); 728 728 void (*readahead)(struct readahead_control *); 729 729 int (*write_begin)(struct file *, struct address_space *mapping, 730 - loff_t pos, unsigned len, unsigned flags, 730 + loff_t pos, unsigned len, 731 731 struct page **pagep, void **fsdata); 732 732 int (*write_end)(struct file *, struct address_space *mapping, 733 733 loff_t pos, unsigned len, unsigned copied, 734 734 struct page *page, void *fsdata); 735 735 sector_t (*bmap)(struct address_space *, sector_t); 736 736 void (*invalidate_folio) (struct folio *, size_t start, size_t len); 737 - int (*releasepage) (struct page *, int); 738 - void (*freepage)(struct page *); 737 + bool (*release_folio)(struct folio *, gfp_t); 738 + void (*free_folio)(struct folio *); 739 739 ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter); 740 740 /* isolate a page for migration */ 741 741 bool (*isolate_page) (struct page *, isolate_mode_t); ··· 747 747 748 748 bool (*is_partially_uptodate) (struct folio *, size_t from, 749 749 size_t count); 750 - void (*is_dirty_writeback) (struct page *, bool *, bool *); 750 + void (*is_dirty_writeback)(struct folio *, bool *, bool *); 751 751 int (*error_remove_page) (struct mapping *mapping, struct page *page); 752 752 int (*swap_activate)(struct file *); 753 753 int (*swap_deactivate)(struct file *); ··· 772 772 773 773 See the file "Locking" for more details. 774 774 775 - ``readpage`` 776 - called by the VM to read a page from backing store. The page 777 - will be Locked when readpage is called, and should be unlocked 778 - and marked uptodate once the read completes. 
If ->readpage 779 - discovers that it needs to unlock the page for some reason, it 780 - can do so, and then return AOP_TRUNCATED_PAGE. In this case, 781 - the page will be relocated, relocked and if that all succeeds, 782 - ->readpage will be called again. 775 + ``read_folio`` 776 + called by the VM to read a folio from backing store. The folio 777 + will be locked when read_folio is called, and should be unlocked 778 + and marked uptodate once the read completes. If ->read_folio 779 + discovers that it cannot perform the I/O at this time, it can 780 + unlock the folio and return AOP_TRUNCATED_PAGE. In this case, 781 + the folio will be looked up again, relocked and if that all succeeds, 782 + ->read_folio will be called again. 783 783 784 784 ``writepages`` 785 785 called by the VM to write out pages associated with the ··· 832 832 passed to write_begin is greater than the number of bytes copied 833 833 into the page). 834 834 835 - flags is a field for AOP_FLAG_xxx flags, described in 836 - include/linux/fs.h. 837 - 838 835 A void * may be returned in fsdata, which then gets passed into 839 836 write_end. 840 837 ··· 864 867 address space. This generally corresponds to either a 865 868 truncation, punch hole or a complete invalidation of the address 866 869 space (in the latter case 'offset' will always be 0 and 'length' 867 - will be folio_size()). Any private data associated with the page 870 + will be folio_size()). Any private data associated with the folio 868 871 should be updated to reflect this truncation. If offset is 0 869 872 and length is folio_size(), then the private data should be 870 - released, because the page must be able to be completely 871 - discarded. This may be done by calling the ->releasepage 873 + released, because the folio must be able to be completely 874 + discarded. This may be done by calling the ->release_folio 872 875 function, but in this case the release MUST succeed. 
873 876 874 - ``releasepage`` 875 - releasepage is called on PagePrivate pages to indicate that the 876 - page should be freed if possible. ->releasepage should remove 877 - any private data from the page and clear the PagePrivate flag. 878 - If releasepage() fails for some reason, it must indicate failure 879 - with a 0 return value. releasepage() is used in two distinct 880 - though related cases. The first is when the VM finds a clean 881 - page with no active users and wants to make it a free page. If 882 - ->releasepage succeeds, the page will be removed from the 883 - address_space and become free. 877 + ``release_folio`` 878 + release_folio is called on folios with private data to tell the 879 + filesystem that the folio is about to be freed. ->release_folio 880 + should remove any private data from the folio and clear the 881 + private flag. If release_folio() fails, it should return false. 882 + release_folio() is used in two distinct though related cases. 883 + The first is when the VM wants to free a clean folio with no 884 + active users. If ->release_folio succeeds, the folio will be 885 + removed from the address_space and be freed. 884 886 885 887 The second case is when a request has been made to invalidate 886 - some or all pages in an address_space. This can happen through 887 - the fadvise(POSIX_FADV_DONTNEED) system call or by the 888 - filesystem explicitly requesting it as nfs and 9fs do (when they 888 + some or all folios in an address_space. This can happen 889 + through the fadvise(POSIX_FADV_DONTNEED) system call or by the 890 + filesystem explicitly requesting it as nfs and 9p do (when they 889 891 believe the cache may be out of date with storage) by calling 890 892 invalidate_inode_pages2(). If the filesystem makes such a call, 891 - and needs to be certain that all pages are invalidated, then its 892 - releasepage will need to ensure this. Possibly it can clear the 893 - PageUptodate bit if it cannot free private data yet. 
893 + and needs to be certain that all folios are invalidated, then 894 + its release_folio will need to ensure this. Possibly it can 895 + clear the uptodate flag if it cannot free private data yet. 894 896 895 - ``freepage`` 896 - freepage is called once the page is no longer visible in the 897 + ``free_folio`` 898 + free_folio is called once the folio is no longer visible in the 897 899 page cache in order to allow the cleanup of any private data. 898 900 Since it may be called by the memory reclaimer, it should not 899 901 assume that the original address_space mapping still exists, and ··· 931 935 without needing I/O to bring the whole page up to date. 932 936 933 937 ``is_dirty_writeback`` 934 - Called by the VM when attempting to reclaim a page. The VM uses 938 + Called by the VM when attempting to reclaim a folio. The VM uses 935 939 dirty and writeback information to determine if it needs to 936 940 stall to allow flushers a chance to complete some IO. 937 - Ordinarily it can use PageDirty and PageWriteback but some 938 - filesystems have more complex state (unstable pages in NFS 941 + Ordinarily it can use folio_test_dirty and folio_test_writeback but 942 + some filesystems have more complex state (unstable folios in NFS 939 943 prevent reclaim) or do not set those flags due to locking 940 944 problems. This callback allows a filesystem to indicate to the 941 - VM if a page should be treated as dirty or writeback for the 945 + VM if a folio should be treated as dirty or writeback for the 942 946 purposes of stalling. 943 947 944 948 ``error_remove_page``
+13
MAINTAINERS
··· 14878 14878 F: include/linux/padata.h 14879 14879 F: kernel/padata.c 14880 14880 14881 + PAGE CACHE 14882 + M: Matthew Wilcox (Oracle) <willy@infradead.org> 14883 + L: linux-fsdevel@vger.kernel.org 14884 + S: Supported 14885 + T: git git://git.infradead.org/users/willy/pagecache.git 14886 + F: Documentation/filesystems/locking.rst 14887 + F: Documentation/filesystems/vfs.rst 14888 + F: include/linux/pagemap.h 14889 + F: mm/filemap.c 14890 + F: mm/page-writeback.c 14891 + F: mm/readahead.c 14892 + F: mm/truncate.c 14893 + 14881 14894 PAGE POOL 14882 14895 M: Jesper Dangaard Brouer <hawk@kernel.org> 14883 14896 M: Ilias Apalodimas <ilias.apalodimas@linaro.org>
+5 -7
block/fops.c
··· 372 372 return block_write_full_page(page, blkdev_get_block, wbc); 373 373 } 374 374 375 - static int blkdev_readpage(struct file * file, struct page * page) 375 + static int blkdev_read_folio(struct file *file, struct folio *folio) 376 376 { 377 - return block_read_full_page(page, blkdev_get_block); 377 + return block_read_full_folio(folio, blkdev_get_block); 378 378 } 379 379 380 380 static void blkdev_readahead(struct readahead_control *rac) ··· 383 383 } 384 384 385 385 static int blkdev_write_begin(struct file *file, struct address_space *mapping, 386 - loff_t pos, unsigned len, unsigned flags, struct page **pagep, 387 - void **fsdata) 386 + loff_t pos, unsigned len, struct page **pagep, void **fsdata) 388 387 { 389 - return block_write_begin(mapping, pos, len, flags, pagep, 390 - blkdev_get_block); 388 + return block_write_begin(mapping, pos, len, pagep, blkdev_get_block); 391 389 } 392 390 393 391 static int blkdev_write_end(struct file *file, struct address_space *mapping, ··· 410 412 const struct address_space_operations def_blk_aops = { 411 413 .dirty_folio = block_dirty_folio, 412 414 .invalidate_folio = block_invalidate_folio, 413 - .readpage = blkdev_readpage, 415 + .read_folio = blkdev_read_folio, 414 416 .readahead = blkdev_readahead, 415 417 .writepage = blkdev_writepage, 416 418 .write_begin = blkdev_write_begin,
+11 -12
drivers/gpu/drm/i915/gem/i915_gem_shmem.c
··· 408 408 const struct drm_i915_gem_pwrite *arg) 409 409 { 410 410 struct address_space *mapping = obj->base.filp->f_mapping; 411 + const struct address_space_operations *aops = mapping->a_ops; 411 412 char __user *user_data = u64_to_user_ptr(arg->data_ptr); 412 413 u64 remain, offset; 413 414 unsigned int pg; ··· 466 465 if (err) 467 466 return err; 468 467 469 - err = pagecache_write_begin(obj->base.filp, mapping, 470 - offset, len, 0, 471 - &page, &data); 468 + err = aops->write_begin(obj->base.filp, mapping, offset, len, 469 + &page, &data); 472 470 if (err < 0) 473 471 return err; 474 472 ··· 477 477 len); 478 478 kunmap_atomic(vaddr); 479 479 480 - err = pagecache_write_end(obj->base.filp, mapping, 481 - offset, len, len - unwritten, 482 - page, data); 480 + err = aops->write_end(obj->base.filp, mapping, offset, len, 481 + len - unwritten, page, data); 483 482 if (err < 0) 484 483 return err; 485 484 ··· 621 622 { 622 623 struct drm_i915_gem_object *obj; 623 624 struct file *file; 625 + const struct address_space_operations *aops; 624 626 resource_size_t offset; 625 627 int err; 626 628 ··· 633 633 GEM_BUG_ON(obj->write_domain != I915_GEM_DOMAIN_CPU); 634 634 635 635 file = obj->base.filp; 636 + aops = file->f_mapping->a_ops; 636 637 offset = 0; 637 638 do { 638 639 unsigned int len = min_t(typeof(size), size, PAGE_SIZE); 639 640 struct page *page; 640 641 void *pgdata, *vaddr; 641 642 642 - err = pagecache_write_begin(file, file->f_mapping, 643 - offset, len, 0, 644 - &page, &pgdata); 643 + err = aops->write_begin(file, file->f_mapping, offset, len, 644 + &page, &pgdata); 645 645 if (err < 0) 646 646 goto fail; 647 647 ··· 649 649 memcpy(vaddr, data, len); 650 650 kunmap(page); 651 651 652 - err = pagecache_write_end(file, file->f_mapping, 653 - offset, len, len, 654 - page, pgdata); 652 + err = aops->write_end(file, file->f_mapping, offset, len, len, 653 + page, pgdata); 655 654 if (err < 0) 656 655 goto fail; 657 656
+5 -6
drivers/scsi/scsicam.c
··· 34 34 { 35 35 struct address_space *mapping = bdev_whole(dev)->bd_inode->i_mapping; 36 36 unsigned char *res = NULL; 37 - struct page *page; 37 + struct folio *folio; 38 38 39 - page = read_mapping_page(mapping, 0, NULL); 40 - if (IS_ERR(page)) 39 + folio = read_mapping_folio(mapping, 0, NULL); 40 + if (IS_ERR(folio)) 41 41 return NULL; 42 42 43 - if (!PageError(page)) 44 - res = kmemdup(page_address(page) + 0x1be, 66, GFP_KERNEL); 45 - put_page(page); 43 + res = kmemdup(folio_address(folio) + 0x1be, 66, GFP_KERNEL); 44 + folio_put(folio); 46 45 return res; 47 46 } 48 47 EXPORT_SYMBOL(scsi_bios_ptable);
+11 -12
fs/9p/vfs_addr.c
··· 100 100 }; 101 101 102 102 /** 103 - * v9fs_release_page - release the private state associated with a page 104 - * @page: The page to be released 103 + * v9fs_release_folio - release the private state associated with a folio 104 + * @folio: The folio to be released 105 105 * @gfp: The caller's allocation restrictions 106 106 * 107 - * Returns 1 if the page can be released, false otherwise. 107 + * Returns true if the page can be released, false otherwise. 108 108 */ 109 109 110 - static int v9fs_release_page(struct page *page, gfp_t gfp) 110 + static bool v9fs_release_folio(struct folio *folio, gfp_t gfp) 111 111 { 112 - struct folio *folio = page_folio(page); 113 112 struct inode *inode = folio_inode(folio); 114 113 115 114 if (folio_test_private(folio)) 116 - return 0; 115 + return false; 117 116 #ifdef CONFIG_9P_FSCACHE 118 117 if (folio_test_fscache(folio)) { 119 118 if (current_is_kswapd() || !(gfp & __GFP_FS)) 120 - return 0; 119 + return false; 121 120 folio_wait_fscache(folio); 122 121 } 123 122 #endif 124 123 fscache_note_page_release(v9fs_inode_cookie(V9FS_I(inode))); 125 - return 1; 124 + return true; 126 125 } 127 126 128 127 static void v9fs_invalidate_folio(struct folio *folio, size_t offset, ··· 259 260 } 260 261 261 262 static int v9fs_write_begin(struct file *filp, struct address_space *mapping, 262 - loff_t pos, unsigned int len, unsigned int flags, 263 + loff_t pos, unsigned int len, 263 264 struct page **subpagep, void **fsdata) 264 265 { 265 266 int retval; ··· 274 275 * file. We need to do this before we get a lock on the page in case 275 276 * there's more than one writer competing for the same cache block. 
276 277 */ 277 - retval = netfs_write_begin(filp, mapping, pos, len, flags, &folio, fsdata); 278 + retval = netfs_write_begin(filp, mapping, pos, len, &folio, fsdata); 278 279 if (retval < 0) 279 280 return retval; 280 281 ··· 335 336 #endif 336 337 337 338 const struct address_space_operations v9fs_addr_operations = { 338 - .readpage = netfs_readpage, 339 + .read_folio = netfs_read_folio, 339 340 .readahead = netfs_readahead, 340 341 .dirty_folio = v9fs_dirty_folio, 341 342 .writepage = v9fs_vfs_writepage, 342 343 .write_begin = v9fs_write_begin, 343 344 .write_end = v9fs_write_end, 344 - .releasepage = v9fs_release_page, 345 + .release_folio = v9fs_release_folio, 345 346 .invalidate_folio = v9fs_invalidate_folio, 346 347 .launder_folio = v9fs_launder_folio, 347 348 .direct_IO = v9fs_direct_IO,
+5 -5
fs/adfs/inode.c
··· 38 38 return block_write_full_page(page, adfs_get_block, wbc); 39 39 } 40 40 41 - static int adfs_readpage(struct file *file, struct page *page) 41 + static int adfs_read_folio(struct file *file, struct folio *folio) 42 42 { 43 - return block_read_full_page(page, adfs_get_block); 43 + return block_read_full_folio(folio, adfs_get_block); 44 44 } 45 45 46 46 static void adfs_write_failed(struct address_space *mapping, loff_t to) ··· 52 52 } 53 53 54 54 static int adfs_write_begin(struct file *file, struct address_space *mapping, 55 - loff_t pos, unsigned len, unsigned flags, 55 + loff_t pos, unsigned len, 56 56 struct page **pagep, void **fsdata) 57 57 { 58 58 int ret; 59 59 60 60 *pagep = NULL; 61 - ret = cont_write_begin(file, mapping, pos, len, flags, pagep, fsdata, 61 + ret = cont_write_begin(file, mapping, pos, len, pagep, fsdata, 62 62 adfs_get_block, 63 63 &ADFS_I(mapping->host)->mmu_private); 64 64 if (unlikely(ret)) ··· 75 75 static const struct address_space_operations adfs_aops = { 76 76 .dirty_folio = block_dirty_folio, 77 77 .invalidate_folio = block_invalidate_folio, 78 - .readpage = adfs_readpage, 78 + .read_folio = adfs_read_folio, 79 79 .writepage = adfs_writepage, 80 80 .write_begin = adfs_write_begin, 81 81 .write_end = generic_write_end,
+11 -10
fs/affs/file.c
··· 375 375 return block_write_full_page(page, affs_get_block, wbc); 376 376 } 377 377 378 - static int affs_readpage(struct file *file, struct page *page) 378 + static int affs_read_folio(struct file *file, struct folio *folio) 379 379 { 380 - return block_read_full_page(page, affs_get_block); 380 + return block_read_full_folio(folio, affs_get_block); 381 381 } 382 382 383 383 static void affs_write_failed(struct address_space *mapping, loff_t to) ··· 414 414 } 415 415 416 416 static int affs_write_begin(struct file *file, struct address_space *mapping, 417 - loff_t pos, unsigned len, unsigned flags, 417 + loff_t pos, unsigned len, 418 418 struct page **pagep, void **fsdata) 419 419 { 420 420 int ret; 421 421 422 422 *pagep = NULL; 423 - ret = cont_write_begin(file, mapping, pos, len, flags, pagep, fsdata, 423 + ret = cont_write_begin(file, mapping, pos, len, pagep, fsdata, 424 424 affs_get_block, 425 425 &AFFS_I(mapping->host)->mmu_private); 426 426 if (unlikely(ret)) ··· 455 455 const struct address_space_operations affs_aops = { 456 456 .dirty_folio = block_dirty_folio, 457 457 .invalidate_folio = block_invalidate_folio, 458 - .readpage = affs_readpage, 458 + .read_folio = affs_read_folio, 459 459 .writepage = affs_writepage, 460 460 .write_begin = affs_write_begin, 461 461 .write_end = affs_write_end, ··· 629 629 } 630 630 631 631 static int 632 - affs_readpage_ofs(struct file *file, struct page *page) 632 + affs_read_folio_ofs(struct file *file, struct folio *folio) 633 633 { 634 + struct page *page = &folio->page; 634 635 struct inode *inode = page->mapping->host; 635 636 u32 to; 636 637 int err; ··· 651 650 } 652 651 653 652 static int affs_write_begin_ofs(struct file *file, struct address_space *mapping, 654 - loff_t pos, unsigned len, unsigned flags, 653 + loff_t pos, unsigned len, 655 654 struct page **pagep, void **fsdata) 656 655 { 657 656 struct inode *inode = mapping->host; ··· 671 670 } 672 671 673 672 index = pos >> PAGE_SHIFT; 674 - page = 
grab_cache_page_write_begin(mapping, index, flags); 673 + page = grab_cache_page_write_begin(mapping, index); 675 674 if (!page) 676 675 return -ENOMEM; 677 676 *pagep = page; ··· 838 837 const struct address_space_operations affs_aops_ofs = { 839 838 .dirty_folio = block_dirty_folio, 840 839 .invalidate_folio = block_invalidate_folio, 841 - .readpage = affs_readpage_ofs, 840 + .read_folio = affs_read_folio_ofs, 842 841 //.writepage = affs_writepage_ofs, 843 842 .write_begin = affs_write_begin_ofs, 844 843 .write_end = affs_write_end_ofs ··· 888 887 loff_t isize = inode->i_size; 889 888 int res; 890 889 891 - res = mapping->a_ops->write_begin(NULL, mapping, isize, 0, 0, &page, &fsdata); 890 + res = mapping->a_ops->write_begin(NULL, mapping, isize, 0, &page, &fsdata); 892 891 if (!res) 893 892 res = mapping->a_ops->write_end(NULL, mapping, isize, 0, 0, page, fsdata); 894 893 else
+3 -2
fs/affs/symlink.c
··· 11 11 12 12 #include "affs.h" 13 13 14 - static int affs_symlink_readpage(struct file *file, struct page *page) 14 + static int affs_symlink_read_folio(struct file *file, struct folio *folio) 15 15 { 16 + struct page *page = &folio->page; 16 17 struct buffer_head *bh; 17 18 struct inode *inode = page->mapping->host; 18 19 char *link = page_address(page); ··· 68 67 } 69 68 70 69 const struct address_space_operations affs_symlink_aops = { 71 - .readpage = affs_symlink_readpage, 70 + .read_folio = affs_symlink_read_folio, 72 71 }; 73 72 74 73 const struct inode_operations affs_symlink_inode_operations = {
+3 -4
fs/afs/dir.c
··· 41 41 static int afs_rename(struct user_namespace *mnt_userns, struct inode *old_dir, 42 42 struct dentry *old_dentry, struct inode *new_dir, 43 43 struct dentry *new_dentry, unsigned int flags); 44 - static int afs_dir_releasepage(struct page *page, gfp_t gfp_flags); 44 + static bool afs_dir_release_folio(struct folio *folio, gfp_t gfp_flags); 45 45 static void afs_dir_invalidate_folio(struct folio *folio, size_t offset, 46 46 size_t length); 47 47 ··· 75 75 76 76 const struct address_space_operations afs_dir_aops = { 77 77 .dirty_folio = afs_dir_dirty_folio, 78 - .releasepage = afs_dir_releasepage, 78 + .release_folio = afs_dir_release_folio, 79 79 .invalidate_folio = afs_dir_invalidate_folio, 80 80 }; 81 81 ··· 2002 2002 * Release a directory folio and clean up its private state if it's not busy 2003 2003 * - return true if the folio can now be released, false if not 2004 2004 */ 2005 - static int afs_dir_releasepage(struct page *subpage, gfp_t gfp_flags) 2005 + static bool afs_dir_release_folio(struct folio *folio, gfp_t gfp_flags) 2006 2006 { 2007 - struct folio *folio = page_folio(subpage); 2008 2007 struct afs_vnode *dvnode = AFS_FS_I(folio_inode(folio)); 2009 2008 2010 2009 _enter("{{%llx:%llu}[%lu]}", dvnode->fid.vid, dvnode->fid.vnode, folio_index(folio));
+13 -15
fs/afs/file.c
··· 19 19 #include "internal.h" 20 20 21 21 static int afs_file_mmap(struct file *file, struct vm_area_struct *vma); 22 - static int afs_symlink_readpage(struct file *file, struct page *page); 22 + static int afs_symlink_read_folio(struct file *file, struct folio *folio); 23 23 static void afs_invalidate_folio(struct folio *folio, size_t offset, 24 24 size_t length); 25 - static int afs_releasepage(struct page *page, gfp_t gfp_flags); 25 + static bool afs_release_folio(struct folio *folio, gfp_t gfp_flags); 26 26 27 27 static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter); 28 28 static void afs_vm_open(struct vm_area_struct *area); ··· 50 50 }; 51 51 52 52 const struct address_space_operations afs_file_aops = { 53 - .readpage = netfs_readpage, 53 + .read_folio = netfs_read_folio, 54 54 .readahead = netfs_readahead, 55 55 .dirty_folio = afs_dirty_folio, 56 56 .launder_folio = afs_launder_folio, 57 - .releasepage = afs_releasepage, 57 + .release_folio = afs_release_folio, 58 58 .invalidate_folio = afs_invalidate_folio, 59 59 .write_begin = afs_write_begin, 60 60 .write_end = afs_write_end, ··· 63 63 }; 64 64 65 65 const struct address_space_operations afs_symlink_aops = { 66 - .readpage = afs_symlink_readpage, 67 - .releasepage = afs_releasepage, 66 + .read_folio = afs_symlink_read_folio, 67 + .release_folio = afs_release_folio, 68 68 .invalidate_folio = afs_invalidate_folio, 69 69 }; 70 70 ··· 332 332 afs_put_read(fsreq); 333 333 } 334 334 335 - static int afs_symlink_readpage(struct file *file, struct page *page) 335 + static int afs_symlink_read_folio(struct file *file, struct folio *folio) 336 336 { 337 - struct afs_vnode *vnode = AFS_FS_I(page->mapping->host); 337 + struct afs_vnode *vnode = AFS_FS_I(folio->mapping->host); 338 338 struct afs_read *fsreq; 339 - struct folio *folio = page_folio(page); 340 339 int ret; 341 340 342 341 fsreq = afs_alloc_read(GFP_NOFS); ··· 346 347 fsreq->len = folio_size(folio); 347 348 fsreq->vnode = vnode; 
348 349 fsreq->iter = &fsreq->def_iter; 349 - iov_iter_xarray(&fsreq->def_iter, READ, &page->mapping->i_pages, 350 + iov_iter_xarray(&fsreq->def_iter, READ, &folio->mapping->i_pages, 350 351 fsreq->pos, fsreq->len); 351 352 352 353 ret = afs_fetch_data(fsreq->vnode, fsreq); 353 354 if (ret == 0) 354 - SetPageUptodate(page); 355 - unlock_page(page); 355 + folio_mark_uptodate(folio); 356 + folio_unlock(folio); 356 357 return ret; 357 358 } 358 359 ··· 481 482 * release a page and clean up its private state if it's not busy 482 483 * - return true if the page can now be released, false if not 483 484 */ 484 - static int afs_releasepage(struct page *page, gfp_t gfp) 485 + static bool afs_release_folio(struct folio *folio, gfp_t gfp) 485 486 { 486 - struct folio *folio = page_folio(page); 487 487 struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio)); 488 488 489 489 _enter("{{%llx:%llu}[%lu],%lx},%x", 490 490 vnode->fid.vid, vnode->fid.vnode, folio_index(folio), folio->flags, 491 491 gfp); 492 492 493 - /* deny if page is being written to the cache and the caller hasn't 493 + /* deny if folio is being written to the cache and the caller hasn't 494 494 * elected to wait */ 495 495 #ifdef CONFIG_AFS_FSCACHE 496 496 if (folio_test_fscache(folio)) {
+2 -2
fs/afs/internal.h
··· 311 311 atomic_t n_lookup; /* Number of lookups done */ 312 312 atomic_t n_reval; /* Number of dentries needing revalidation */ 313 313 atomic_t n_inval; /* Number of invalidations by the server */ 314 - atomic_t n_relpg; /* Number of invalidations by releasepage */ 314 + atomic_t n_relpg; /* Number of invalidations by release_folio */ 315 315 atomic_t n_read_dir; /* Number of directory pages read */ 316 316 atomic_t n_dir_cr; /* Number of directory entry creation edits */ 317 317 atomic_t n_dir_rm; /* Number of directory entry removal edits */ ··· 1535 1535 #define afs_dirty_folio filemap_dirty_folio 1536 1536 #endif 1537 1537 extern int afs_write_begin(struct file *file, struct address_space *mapping, 1538 - loff_t pos, unsigned len, unsigned flags, 1538 + loff_t pos, unsigned len, 1539 1539 struct page **pagep, void **fsdata); 1540 1540 extern int afs_write_end(struct file *file, struct address_space *mapping, 1541 1541 loff_t pos, unsigned len, unsigned copied,
+2 -2
fs/afs/write.c
··· 42 42 * prepare to perform part of a write to a page 43 43 */ 44 44 int afs_write_begin(struct file *file, struct address_space *mapping, 45 - loff_t pos, unsigned len, unsigned flags, 45 + loff_t pos, unsigned len, 46 46 struct page **_page, void **fsdata) 47 47 { 48 48 struct afs_vnode *vnode = AFS_FS_I(file_inode(file)); ··· 60 60 * file. We need to do this before we get a lock on the page in case 61 61 * there's more than one writer competing for the same cache block. 62 62 */ 63 - ret = netfs_write_begin(file, mapping, pos, len, flags, &folio, fsdata); 63 + ret = netfs_write_begin(file, mapping, pos, len, &folio, fsdata); 64 64 if (ret < 0) 65 65 return ret; 66 66
+9 -8
fs/befs/linuxvfs.c
··· 40 40 41 41 static int befs_readdir(struct file *, struct dir_context *); 42 42 static int befs_get_block(struct inode *, sector_t, struct buffer_head *, int); 43 - static int befs_readpage(struct file *file, struct page *page); 43 + static int befs_read_folio(struct file *file, struct folio *folio); 44 44 static sector_t befs_bmap(struct address_space *mapping, sector_t block); 45 45 static struct dentry *befs_lookup(struct inode *, struct dentry *, 46 46 unsigned int); ··· 48 48 static struct inode *befs_alloc_inode(struct super_block *sb); 49 49 static void befs_free_inode(struct inode *inode); 50 50 static void befs_destroy_inodecache(void); 51 - static int befs_symlink_readpage(struct file *, struct page *); 51 + static int befs_symlink_read_folio(struct file *, struct folio *); 52 52 static int befs_utf2nls(struct super_block *sb, const char *in, int in_len, 53 53 char **out, int *out_len); 54 54 static int befs_nls2utf(struct super_block *sb, const char *in, int in_len, ··· 87 87 }; 88 88 89 89 static const struct address_space_operations befs_aops = { 90 - .readpage = befs_readpage, 90 + .read_folio = befs_read_folio, 91 91 .bmap = befs_bmap, 92 92 }; 93 93 94 94 static const struct address_space_operations befs_symlink_aops = { 95 - .readpage = befs_symlink_readpage, 95 + .read_folio = befs_symlink_read_folio, 96 96 }; 97 97 98 98 static const struct export_operations befs_export_operations = { ··· 102 102 }; 103 103 104 104 /* 105 - * Called by generic_file_read() to read a page of data 105 + * Called by generic_file_read() to read a folio of data 106 106 * 107 107 * In turn, simply calls a generic block read function and 108 108 * passes it the address of befs_get_block, for mapping file 109 109 * positions to disk blocks. 
110 110 */ 111 111 static int 112 - befs_readpage(struct file *file, struct page *page) 112 + befs_read_folio(struct file *file, struct folio *folio) 113 113 { 114 - return block_read_full_page(page, befs_get_block); 114 + return block_read_full_folio(folio, befs_get_block); 115 115 } 116 116 117 117 static sector_t ··· 468 468 * The data stream become link name. Unless the LONG_SYMLINK 469 469 * flag is set. 470 470 */ 471 - static int befs_symlink_readpage(struct file *unused, struct page *page) 471 + static int befs_symlink_read_folio(struct file *unused, struct folio *folio) 472 472 { 473 + struct page *page = &folio->page; 473 474 struct inode *inode = page->mapping->host; 474 475 struct super_block *sb = inode->i_sb; 475 476 struct befs_inode_info *befs_ino = BEFS_I(inode);
+5 -6
fs/bfs/file.c
··· 155 155 return block_write_full_page(page, bfs_get_block, wbc); 156 156 } 157 157 158 - static int bfs_readpage(struct file *file, struct page *page) 158 + static int bfs_read_folio(struct file *file, struct folio *folio) 159 159 { 160 - return block_read_full_page(page, bfs_get_block); 160 + return block_read_full_folio(folio, bfs_get_block); 161 161 } 162 162 163 163 static void bfs_write_failed(struct address_space *mapping, loff_t to) ··· 169 169 } 170 170 171 171 static int bfs_write_begin(struct file *file, struct address_space *mapping, 172 - loff_t pos, unsigned len, unsigned flags, 172 + loff_t pos, unsigned len, 173 173 struct page **pagep, void **fsdata) 174 174 { 175 175 int ret; 176 176 177 - ret = block_write_begin(mapping, pos, len, flags, pagep, 178 - bfs_get_block); 177 + ret = block_write_begin(mapping, pos, len, pagep, bfs_get_block); 179 178 if (unlikely(ret)) 180 179 bfs_write_failed(mapping, pos + len); 181 180 ··· 189 190 const struct address_space_operations bfs_aops = { 190 191 .dirty_folio = block_dirty_folio, 191 192 .invalidate_folio = block_invalidate_folio, 192 - .readpage = bfs_readpage, 193 + .read_folio = bfs_read_folio, 193 194 .writepage = bfs_writepage, 194 195 .write_begin = bfs_write_begin, 195 196 .write_end = generic_write_end,
+6 -6
fs/btrfs/disk-io.c
··· 996 996 return btree_write_cache_pages(mapping, wbc); 997 997 } 998 998 999 - static int btree_releasepage(struct page *page, gfp_t gfp_flags) 999 + static bool btree_release_folio(struct folio *folio, gfp_t gfp_flags) 1000 1000 { 1001 - if (PageWriteback(page) || PageDirty(page)) 1002 - return 0; 1001 + if (folio_test_writeback(folio) || folio_test_dirty(folio)) 1002 + return false; 1003 1003 1004 - return try_release_extent_buffer(page); 1004 + return try_release_extent_buffer(&folio->page); 1005 1005 } 1006 1006 1007 1007 static void btree_invalidate_folio(struct folio *folio, size_t offset, ··· 1010 1010 struct extent_io_tree *tree; 1011 1011 tree = &BTRFS_I(folio->mapping->host)->io_tree; 1012 1012 extent_invalidate_folio(tree, folio, offset); 1013 - btree_releasepage(&folio->page, GFP_NOFS); 1013 + btree_release_folio(folio, GFP_NOFS); 1014 1014 if (folio_get_private(folio)) { 1015 1015 btrfs_warn(BTRFS_I(folio->mapping->host)->root->fs_info, 1016 1016 "folio private not zero on folio %llu", ··· 1071 1071 1072 1072 static const struct address_space_operations btree_aops = { 1073 1073 .writepages = btree_writepages, 1074 - .releasepage = btree_releasepage, 1074 + .release_folio = btree_release_folio, 1075 1075 .invalidate_folio = btree_invalidate_folio, 1076 1076 #ifdef CONFIG_MIGRATION 1077 1077 .migratepage = btree_migratepage,
+9 -8
fs/btrfs/extent_io.c
··· 3799 3799 return ret; 3800 3800 } 3801 3801 3802 - int btrfs_readpage(struct file *file, struct page *page) 3802 + int btrfs_read_folio(struct file *file, struct folio *folio) 3803 3803 { 3804 + struct page *page = &folio->page; 3804 3805 struct btrfs_inode *inode = BTRFS_I(page->mapping->host); 3805 3806 u64 start = page_offset(page); 3806 3807 u64 end = start + PAGE_SIZE - 1; ··· 5307 5306 } 5308 5307 5309 5308 /* 5310 - * a helper for releasepage, this tests for areas of the page that 5309 + * a helper for release_folio, this tests for areas of the page that 5311 5310 * are locked or under IO and drops the related state bits if it is safe 5312 5311 * to drop the page. 5313 5312 */ ··· 5343 5342 } 5344 5343 5345 5344 /* 5346 - * a helper for releasepage. As long as there are no locked extents 5345 + * a helper for release_folio. As long as there are no locked extents 5347 5346 * in the range corresponding to the page, both state records and extent 5348 5347 * map records are removed 5349 5348 */ ··· 6043 6042 * 6044 6043 * It is only cleared in two cases: freeing the last non-tree 6045 6044 * reference to the extent_buffer when its STALE bit is set or 6046 - * calling releasepage when the tree reference is the only reference. 6045 + * calling release_folio when the tree reference is the only reference. 6047 6046 * 6048 6047 * In both cases, care is taken to ensure that the extent_buffer's 6049 - * pages are not under io. However, releasepage can be concurrently 6048 + * pages are not under io. However, release_folio can be concurrently 6050 6049 * called with creating new references, which is prone to race 6051 6050 * conditions between the calls to check_buffer_tree_ref in those 6052 6051 * codepaths and clearing TREE_REF in try_release_extent_buffer. 
··· 6311 6310 /* 6312 6311 * We can't unlock the pages just yet since the extent buffer 6313 6312 * hasn't been properly inserted in the radix tree, this 6314 - * opens a race with btree_releasepage which can free a page 6313 + * opens a race with btree_release_folio which can free a page 6315 6314 * while we are still filling in all pages for the buffer and 6316 6315 * we could crash. 6317 6316 */ ··· 6340 6339 6341 6340 /* 6342 6341 * Now it's safe to unlock the pages because any calls to 6343 - * btree_releasepage will correctly detect that a page belongs to a 6342 + * btree_release_folio will correctly detect that a page belongs to a 6344 6343 * live buffer and won't free them prematurely. 6345 6344 */ 6346 6345 for (i = 0; i < num_pages; i++) ··· 6722 6721 eb->read_mirror = 0; 6723 6722 atomic_set(&eb->io_pages, num_reads); 6724 6723 /* 6725 - * It is possible for releasepage to clear the TREE_REF bit before we 6724 + * It is possible for release_folio to clear the TREE_REF bit before we 6726 6725 * set io_pages. See check_buffer_tree_ref for a more detailed comment. 6727 6726 */ 6728 6727 check_buffer_tree_ref(eb);
+1 -1
fs/btrfs/extent_io.h
··· 149 149 int try_release_extent_mapping(struct page *page, gfp_t mask); 150 150 int try_release_extent_buffer(struct page *page); 151 151 152 - int btrfs_readpage(struct file *file, struct page *page); 152 + int btrfs_read_folio(struct file *file, struct folio *folio); 153 153 int extent_write_full_page(struct page *page, struct writeback_control *wbc); 154 154 int extent_write_locked_range(struct inode *inode, u64 start, u64 end); 155 155 int extent_writepages(struct address_space *mapping,
+5 -4
fs/btrfs/file.c
··· 1307 1307 struct page *page, u64 pos, 1308 1308 bool force_uptodate) 1309 1309 { 1310 + struct folio *folio = page_folio(page); 1310 1311 int ret = 0; 1311 1312 1312 1313 if (((pos & (PAGE_SIZE - 1)) || force_uptodate) && 1313 1314 !PageUptodate(page)) { 1314 - ret = btrfs_readpage(NULL, page); 1315 + ret = btrfs_read_folio(NULL, folio); 1315 1316 if (ret) 1316 1317 return ret; 1317 1318 lock_page(page); ··· 1322 1321 } 1323 1322 1324 1323 /* 1325 - * Since btrfs_readpage() will unlock the page before it 1326 - * returns, there is a window where btrfs_releasepage() can be 1324 + * Since btrfs_read_folio() will unlock the folio before it 1325 + * returns, there is a window where btrfs_release_folio() can be 1327 1326 * called to release the page. Here we check both inode 1328 1327 * mapping and PagePrivate() to make sure the page was not 1329 1328 * released. ··· 2365 2364 { 2366 2365 struct address_space *mapping = filp->f_mapping; 2367 2366 2368 - if (!mapping->a_ops->readpage) 2367 + if (!mapping->a_ops->read_folio) 2369 2368 return -ENOEXEC; 2370 2369 2371 2370 file_accessed(filp);
+1 -1
fs/btrfs/free-space-cache.c
··· 465 465 466 466 io_ctl->pages[i] = page; 467 467 if (uptodate && !PageUptodate(page)) { 468 - btrfs_readpage(NULL, page); 468 + btrfs_read_folio(NULL, page_folio(page)); 469 469 lock_page(page); 470 470 if (page->mapping != inode->i_mapping) { 471 471 btrfs_err(BTRFS_I(inode)->root->fs_info,
+14 -14
fs/btrfs/inode.c
··· 4809 4809 goto out_unlock; 4810 4810 4811 4811 if (!PageUptodate(page)) { 4812 - ret = btrfs_readpage(NULL, page); 4812 + ret = btrfs_read_folio(NULL, page_folio(page)); 4813 4813 lock_page(page); 4814 4814 if (page->mapping != mapping) { 4815 4815 unlock_page(page); ··· 8204 8204 } 8205 8205 8206 8206 /* 8207 - * For releasepage() and invalidate_folio() we have a race window where 8207 + * For release_folio() and invalidate_folio() we have a race window where 8208 8208 * folio_end_writeback() is called but the subpage spinlock is not yet released. 8209 8209 * If we continue to release/invalidate the page, we could cause use-after-free 8210 8210 * for subpage spinlock. So this function is to spin and wait for subpage ··· 8236 8236 spin_unlock_irq(&subpage->lock); 8237 8237 } 8238 8238 8239 - static int __btrfs_releasepage(struct page *page, gfp_t gfp_flags) 8239 + static bool __btrfs_release_folio(struct folio *folio, gfp_t gfp_flags) 8240 8240 { 8241 - int ret = try_release_extent_mapping(page, gfp_flags); 8241 + int ret = try_release_extent_mapping(&folio->page, gfp_flags); 8242 8242 8243 8243 if (ret == 1) { 8244 - wait_subpage_spinlock(page); 8245 - clear_page_extent_mapped(page); 8244 + wait_subpage_spinlock(&folio->page); 8245 + clear_page_extent_mapped(&folio->page); 8246 8246 } 8247 8247 return ret; 8248 8248 } 8249 8249 8250 - static int btrfs_releasepage(struct page *page, gfp_t gfp_flags) 8250 + static bool btrfs_release_folio(struct folio *folio, gfp_t gfp_flags) 8251 8251 { 8252 - if (PageWriteback(page) || PageDirty(page)) 8253 - return 0; 8254 - return __btrfs_releasepage(page, gfp_flags); 8252 + if (folio_test_writeback(folio) || folio_test_dirty(folio)) 8253 + return false; 8254 + return __btrfs_release_folio(folio, gfp_flags); 8255 8255 } 8256 8256 8257 8257 #ifdef CONFIG_MIGRATION ··· 8322 8322 * still safe to wait for ordered extent to finish. 
8323 8323 */ 8324 8324 if (!(offset == 0 && length == folio_size(folio))) { 8325 - btrfs_releasepage(&folio->page, GFP_NOFS); 8325 + btrfs_release_folio(folio, GFP_NOFS); 8326 8326 return; 8327 8327 } 8328 8328 ··· 8446 8446 ASSERT(!folio_test_ordered(folio)); 8447 8447 btrfs_page_clear_checked(fs_info, &folio->page, folio_pos(folio), folio_size(folio)); 8448 8448 if (!inode_evicting) 8449 - __btrfs_releasepage(&folio->page, GFP_NOFS); 8449 + __btrfs_release_folio(folio, GFP_NOFS); 8450 8450 clear_page_extent_mapped(&folio->page); 8451 8451 } 8452 8452 ··· 11415 11415 * For now we're avoiding this by dropping bmap. 11416 11416 */ 11417 11417 static const struct address_space_operations btrfs_aops = { 11418 - .readpage = btrfs_readpage, 11418 + .read_folio = btrfs_read_folio, 11419 11419 .writepage = btrfs_writepage, 11420 11420 .writepages = btrfs_writepages, 11421 11421 .readahead = btrfs_readahead, 11422 11422 .direct_IO = noop_direct_IO, 11423 11423 .invalidate_folio = btrfs_invalidate_folio, 11424 - .releasepage = btrfs_releasepage, 11424 + .release_folio = btrfs_release_folio, 11425 11425 #ifdef CONFIG_MIGRATION 11426 11426 .migratepage = btrfs_migratepage, 11427 11427 #endif
+1 -1
fs/btrfs/ioctl.c
··· 1358 1358 * make it uptodate. 1359 1359 */ 1360 1360 if (!PageUptodate(page)) { 1361 - btrfs_readpage(NULL, page); 1361 + btrfs_read_folio(NULL, page_folio(page)); 1362 1362 lock_page(page); 1363 1363 if (page->mapping != mapping || !PagePrivate(page)) { 1364 1364 unlock_page(page);
+7 -6
fs/btrfs/relocation.c
··· 1101 1101 continue; 1102 1102 1103 1103 /* 1104 - * if we are modifying block in fs tree, wait for readpage 1104 + * if we are modifying block in fs tree, wait for read_folio 1105 1105 * to complete and drop the extent cache 1106 1106 */ 1107 1107 if (root->root_key.objectid != BTRFS_TREE_RELOC_OBJECTID) { ··· 1563 1563 end = (u64)-1; 1564 1564 } 1565 1565 1566 - /* the lock_extent waits for readpage to complete */ 1566 + /* the lock_extent waits for read_folio to complete */ 1567 1567 lock_extent(&BTRFS_I(inode)->io_tree, start, end); 1568 1568 btrfs_drop_extent_cache(BTRFS_I(inode), start, end, 1); 1569 1569 unlock_extent(&BTRFS_I(inode)->io_tree, start, end); ··· 2818 2818 * Subpage can't handle page with DIRTY but without UPTODATE 2819 2819 * bit as it can lead to the following deadlock: 2820 2820 * 2821 - * btrfs_readpage() 2821 + * btrfs_read_folio() 2822 2822 * | Page already *locked* 2823 2823 * |- btrfs_lock_and_flush_ordered_range() 2824 2824 * |- btrfs_start_ordered_extent() ··· 2967 2967 goto release_page; 2968 2968 2969 2969 if (PageReadahead(page)) 2970 - page_cache_async_readahead(inode->i_mapping, ra, NULL, page, 2971 - page_index, last_index + 1 - page_index); 2970 + page_cache_async_readahead(inode->i_mapping, ra, NULL, 2971 + page_folio(page), page_index, 2972 + last_index + 1 - page_index); 2972 2973 2973 2974 if (!PageUptodate(page)) { 2974 - btrfs_readpage(NULL, page); 2975 + btrfs_read_folio(NULL, page_folio(page)); 2975 2976 lock_page(page); 2976 2977 if (!PageUptodate(page)) { 2977 2978 ret = -EIO;
+3 -3
fs/btrfs/send.c
··· 4907 4907 4908 4908 if (PageReadahead(page)) 4909 4909 page_cache_async_readahead(sctx->cur_inode->i_mapping, 4910 - &sctx->ra, NULL, page, index, 4911 - last_index + 1 - index); 4910 + &sctx->ra, NULL, page_folio(page), 4911 + index, last_index + 1 - index); 4912 4912 4913 4913 if (!PageUptodate(page)) { 4914 - btrfs_readpage(NULL, page); 4914 + btrfs_read_folio(NULL, page_folio(page)); 4915 4915 lock_page(page); 4916 4916 if (!PageUptodate(page)) { 4917 4917 unlock_page(page);
+104 -110
fs/buffer.c
··· 79 79 EXPORT_SYMBOL(unlock_buffer); 80 80 81 81 /* 82 - * Returns if the page has dirty or writeback buffers. If all the buffers 83 - * are unlocked and clean then the PageDirty information is stale. If 84 - * any of the pages are locked, it is assumed they are locked for IO. 82 + * Returns if the folio has dirty or writeback buffers. If all the buffers 83 + * are unlocked and clean then the folio_test_dirty information is stale. If 84 + * any of the buffers are locked, it is assumed they are locked for IO. 85 85 */ 86 - void buffer_check_dirty_writeback(struct page *page, 86 + void buffer_check_dirty_writeback(struct folio *folio, 87 87 bool *dirty, bool *writeback) 88 88 { 89 89 struct buffer_head *head, *bh; 90 90 *dirty = false; 91 91 *writeback = false; 92 92 93 - BUG_ON(!PageLocked(page)); 93 + BUG_ON(!folio_test_locked(folio)); 94 94 95 - if (!page_has_buffers(page)) 95 + head = folio_buffers(folio); 96 + if (!head) 96 97 return; 97 98 98 - if (PageWriteback(page)) 99 + if (folio_test_writeback(folio)) 99 100 *writeback = true; 100 101 101 - head = page_buffers(page); 102 102 bh = head; 103 103 do { 104 104 if (buffer_locked(bh)) ··· 314 314 } 315 315 316 316 /* 317 - * I/O completion handler for block_read_full_page() - pages 317 + * I/O completion handler for block_read_full_folio() - pages 318 318 * which come unlocked at the end of I/O. 319 319 */ 320 320 static void end_buffer_async_read_io(struct buffer_head *bh, int uptodate) ··· 955 955 size); 956 956 goto done; 957 957 } 958 - if (!try_to_free_buffers(page)) 958 + if (!try_to_free_buffers(page_folio(page))) 959 959 goto failed; 960 960 } 961 961 ··· 1060 1060 * Also. When blockdev buffers are explicitly read with bread(), they 1061 1061 * individually become uptodate. But their backing page remains not 1062 1062 * uptodate - even if all of its buffers are uptodate. 
A subsequent 1063 - * block_read_full_page() against that page will discover all the uptodate 1064 - * buffers, will set the page uptodate and will perform no I/O. 1063 + * block_read_full_folio() against that folio will discover all the uptodate 1064 + * buffers, will set the folio uptodate and will perform no I/O. 1065 1065 */ 1066 1066 1067 1067 /** ··· 2088 2088 2089 2089 /* 2090 2090 * If this is a partial write which happened to make all buffers 2091 - * uptodate then we can optimize away a bogus readpage() for 2091 + * uptodate then we can optimize away a bogus read_folio() for 2092 2092 * the next read(). Here we 'discover' whether the page went 2093 2093 * uptodate as a result of this (potentially partial) write. 2094 2094 */ ··· 2104 2104 * The filesystem needs to handle block truncation upon failure. 2105 2105 */ 2106 2106 int block_write_begin(struct address_space *mapping, loff_t pos, unsigned len, 2107 - unsigned flags, struct page **pagep, get_block_t *get_block) 2107 + struct page **pagep, get_block_t *get_block) 2108 2108 { 2109 2109 pgoff_t index = pos >> PAGE_SHIFT; 2110 2110 struct page *page; 2111 2111 int status; 2112 2112 2113 - page = grab_cache_page_write_begin(mapping, index, flags); 2113 + page = grab_cache_page_write_begin(mapping, index); 2114 2114 if (!page) 2115 2115 return -ENOMEM; 2116 2116 ··· 2137 2137 2138 2138 if (unlikely(copied < len)) { 2139 2139 /* 2140 - * The buffers that were written will now be uptodate, so we 2141 - * don't have to worry about a readpage reading them and 2142 - * overwriting a partial write. However if we have encountered 2143 - * a short write and only partially written into a buffer, it 2144 - * will not be marked uptodate, so a readpage might come in and 2145 - * destroy our partial write. 2140 + * The buffers that were written will now be uptodate, so 2141 + * we don't have to worry about a read_folio reading them 2142 + * and overwriting a partial write. 
However if we have 2143 + * encountered a short write and only partially written 2144 + * into a buffer, it will not be marked uptodate, so a 2145 + * read_folio might come in and destroy our partial write. 2146 2146 * 2147 2147 * Do the simplest thing, and just treat any short write to a 2148 2148 * non uptodate page as a zero-length write, and force the ··· 2245 2245 EXPORT_SYMBOL(block_is_partially_uptodate); 2246 2246 2247 2247 /* 2248 - * Generic "read page" function for block devices that have the normal 2248 + * Generic "read_folio" function for block devices that have the normal 2249 2249 * get_block functionality. This is most of the block device filesystems. 2250 - * Reads the page asynchronously --- the unlock_buffer() and 2250 + * Reads the folio asynchronously --- the unlock_buffer() and 2251 2251 * set/clear_buffer_uptodate() functions propagate buffer state into the 2252 - * page struct once IO has completed. 2252 + * folio once IO has completed. 2253 2253 */ 2254 - int block_read_full_page(struct page *page, get_block_t *get_block) 2254 + int block_read_full_folio(struct folio *folio, get_block_t *get_block) 2255 2255 { 2256 - struct inode *inode = page->mapping->host; 2256 + struct inode *inode = folio->mapping->host; 2257 2257 sector_t iblock, lblock; 2258 2258 struct buffer_head *bh, *head, *arr[MAX_BUF_PER_PAGE]; 2259 2259 unsigned int blocksize, bbits; 2260 2260 int nr, i; 2261 2261 int fully_mapped = 1; 2262 2262 2263 - head = create_page_buffers(page, inode, 0); 2263 + VM_BUG_ON_FOLIO(folio_test_large(folio), folio); 2264 + 2265 + head = create_page_buffers(&folio->page, inode, 0); 2264 2266 blocksize = head->b_size; 2265 2267 bbits = block_size_bits(blocksize); 2266 2268 2267 - iblock = (sector_t)page->index << (PAGE_SHIFT - bbits); 2269 + iblock = (sector_t)folio->index << (PAGE_SHIFT - bbits); 2268 2270 lblock = (i_size_read(inode)+blocksize-1) >> bbits; 2269 2271 bh = head; 2270 2272 nr = 0; ··· 2284 2282 WARN_ON(bh->b_size != blocksize); 
2285 2283 err = get_block(inode, iblock, bh, 0); 2286 2284 if (err) 2287 - SetPageError(page); 2285 + folio_set_error(folio); 2288 2286 } 2289 2287 if (!buffer_mapped(bh)) { 2290 - zero_user(page, i * blocksize, blocksize); 2288 + folio_zero_range(folio, i * blocksize, 2289 + blocksize); 2291 2290 if (!err) 2292 2291 set_buffer_uptodate(bh); 2293 2292 continue; ··· 2304 2301 } while (i++, iblock++, (bh = bh->b_this_page) != head); 2305 2302 2306 2303 if (fully_mapped) 2307 - SetPageMappedToDisk(page); 2304 + folio_set_mappedtodisk(folio); 2308 2305 2309 2306 if (!nr) { 2310 2307 /* 2311 - * All buffers are uptodate - we can set the page uptodate 2308 + * All buffers are uptodate - we can set the folio uptodate 2312 2309 * as well. But not if get_block() returned an error. 2313 2310 */ 2314 - if (!PageError(page)) 2315 - SetPageUptodate(page); 2316 - unlock_page(page); 2311 + if (!folio_test_error(folio)) 2312 + folio_mark_uptodate(folio); 2313 + folio_unlock(folio); 2317 2314 return 0; 2318 2315 } 2319 2316 ··· 2338 2335 } 2339 2336 return 0; 2340 2337 } 2341 - EXPORT_SYMBOL(block_read_full_page); 2338 + EXPORT_SYMBOL(block_read_full_folio); 2342 2339 2343 2340 /* utility function for filesystems that need to do work on expanding 2344 2341 * truncates. 
Uses filesystem pagecache writes to allow the filesystem to ··· 2347 2344 int generic_cont_expand_simple(struct inode *inode, loff_t size) 2348 2345 { 2349 2346 struct address_space *mapping = inode->i_mapping; 2347 + const struct address_space_operations *aops = mapping->a_ops; 2350 2348 struct page *page; 2351 2349 void *fsdata; 2352 2350 int err; ··· 2356 2352 if (err) 2357 2353 goto out; 2358 2354 2359 - err = pagecache_write_begin(NULL, mapping, size, 0, 0, &page, &fsdata); 2355 + err = aops->write_begin(NULL, mapping, size, 0, &page, &fsdata); 2360 2356 if (err) 2361 2357 goto out; 2362 2358 2363 - err = pagecache_write_end(NULL, mapping, size, 0, 0, page, fsdata); 2359 + err = aops->write_end(NULL, mapping, size, 0, 0, page, fsdata); 2364 2360 BUG_ON(err > 0); 2365 2361 2366 2362 out: ··· 2372 2368 loff_t pos, loff_t *bytes) 2373 2369 { 2374 2370 struct inode *inode = mapping->host; 2371 + const struct address_space_operations *aops = mapping->a_ops; 2375 2372 unsigned int blocksize = i_blocksize(inode); 2376 2373 struct page *page; 2377 2374 void *fsdata; ··· 2392 2387 } 2393 2388 len = PAGE_SIZE - zerofrom; 2394 2389 2395 - err = pagecache_write_begin(file, mapping, curpos, len, 0, 2390 + err = aops->write_begin(file, mapping, curpos, len, 2396 2391 &page, &fsdata); 2397 2392 if (err) 2398 2393 goto out; 2399 2394 zero_user(page, zerofrom, len); 2400 - err = pagecache_write_end(file, mapping, curpos, len, len, 2395 + err = aops->write_end(file, mapping, curpos, len, len, 2401 2396 page, fsdata); 2402 2397 if (err < 0) 2403 2398 goto out; ··· 2425 2420 } 2426 2421 len = offset - zerofrom; 2427 2422 2428 - err = pagecache_write_begin(file, mapping, curpos, len, 0, 2423 + err = aops->write_begin(file, mapping, curpos, len, 2429 2424 &page, &fsdata); 2430 2425 if (err) 2431 2426 goto out; 2432 2427 zero_user(page, zerofrom, len); 2433 - err = pagecache_write_end(file, mapping, curpos, len, len, 2428 + err = aops->write_end(file, mapping, curpos, len, len, 2434 
2429 page, fsdata); 2435 2430 if (err < 0) 2436 2431 goto out; ··· 2446 2441 * We may have to extend the file. 2447 2442 */ 2448 2443 int cont_write_begin(struct file *file, struct address_space *mapping, 2449 - loff_t pos, unsigned len, unsigned flags, 2444 + loff_t pos, unsigned len, 2450 2445 struct page **pagep, void **fsdata, 2451 2446 get_block_t *get_block, loff_t *bytes) 2452 2447 { ··· 2465 2460 (*bytes)++; 2466 2461 } 2467 2462 2468 - return block_write_begin(mapping, pos, len, flags, pagep, get_block); 2463 + return block_write_begin(mapping, pos, len, pagep, get_block); 2469 2464 } 2470 2465 EXPORT_SYMBOL(cont_write_begin); 2471 2466 ··· 2573 2568 * On exit the page is fully uptodate in the areas outside (from,to) 2574 2569 * The filesystem needs to handle block truncation upon failure. 2575 2570 */ 2576 - int nobh_write_begin(struct address_space *mapping, 2577 - loff_t pos, unsigned len, unsigned flags, 2571 + int nobh_write_begin(struct address_space *mapping, loff_t pos, unsigned len, 2578 2572 struct page **pagep, void **fsdata, 2579 2573 get_block_t *get_block) 2580 2574 { ··· 2595 2591 from = pos & (PAGE_SIZE - 1); 2596 2592 to = from + len; 2597 2593 2598 - page = grab_cache_page_write_begin(mapping, index, flags); 2594 + page = grab_cache_page_write_begin(mapping, index); 2599 2595 if (!page) 2600 2596 return -ENOMEM; 2601 2597 *pagep = page; ··· 2794 2790 loff_t from, get_block_t *get_block) 2795 2791 { 2796 2792 pgoff_t index = from >> PAGE_SHIFT; 2797 - unsigned offset = from & (PAGE_SIZE-1); 2798 - unsigned blocksize; 2799 - sector_t iblock; 2800 - unsigned length, pos; 2801 2793 struct inode *inode = mapping->host; 2802 - struct page *page; 2794 + unsigned blocksize = i_blocksize(inode); 2795 + struct folio *folio; 2803 2796 struct buffer_head map_bh; 2797 + size_t offset; 2798 + sector_t iblock; 2804 2799 int err; 2805 2800 2806 - blocksize = i_blocksize(inode); 2807 - length = offset & (blocksize - 1); 2808 - 2809 2801 /* Block boundary? 
Nothing to do */ 2810 - if (!length) 2802 + if (!(from & (blocksize - 1))) 2811 2803 return 0; 2812 2804 2813 - length = blocksize - length; 2814 - iblock = (sector_t)index << (PAGE_SHIFT - inode->i_blkbits); 2815 - 2816 - page = grab_cache_page(mapping, index); 2805 + folio = __filemap_get_folio(mapping, index, FGP_LOCK | FGP_CREAT, 2806 + mapping_gfp_mask(mapping)); 2817 2807 err = -ENOMEM; 2818 - if (!page) 2808 + if (!folio) 2819 2809 goto out; 2820 2810 2821 - if (page_has_buffers(page)) { 2822 - has_buffers: 2823 - unlock_page(page); 2824 - put_page(page); 2825 - return block_truncate_page(mapping, from, get_block); 2826 - } 2811 + if (folio_buffers(folio)) 2812 + goto has_buffers; 2827 2813 2828 - /* Find the buffer that contains "offset" */ 2829 - pos = blocksize; 2830 - while (offset >= pos) { 2831 - iblock++; 2832 - pos += blocksize; 2833 - } 2834 - 2814 + iblock = from >> inode->i_blkbits; 2835 2815 map_bh.b_size = blocksize; 2836 2816 map_bh.b_state = 0; 2837 2817 err = get_block(inode, iblock, &map_bh, 0); ··· 2826 2838 goto unlock; 2827 2839 2828 2840 /* Ok, it's mapped. 
Make sure it's up-to-date */ 2829 - if (!PageUptodate(page)) { 2830 - err = mapping->a_ops->readpage(NULL, page); 2841 + if (!folio_test_uptodate(folio)) { 2842 + err = mapping->a_ops->read_folio(NULL, folio); 2831 2843 if (err) { 2832 - put_page(page); 2844 + folio_put(folio); 2833 2845 goto out; 2834 2846 } 2835 - lock_page(page); 2836 - if (!PageUptodate(page)) { 2847 + folio_lock(folio); 2848 + if (!folio_test_uptodate(folio)) { 2837 2849 err = -EIO; 2838 2850 goto unlock; 2839 2851 } 2840 - if (page_has_buffers(page)) 2852 + if (folio_buffers(folio)) 2841 2853 goto has_buffers; 2842 2854 } 2843 - zero_user(page, offset, length); 2844 - set_page_dirty(page); 2855 + offset = offset_in_folio(folio, from); 2856 + folio_zero_segment(folio, offset, round_up(offset, blocksize)); 2857 + folio_mark_dirty(folio); 2845 2858 err = 0; 2846 2859 2847 2860 unlock: 2848 - unlock_page(page); 2849 - put_page(page); 2861 + folio_unlock(folio); 2862 + folio_put(folio); 2850 2863 out: 2851 2864 return err; 2865 + 2866 + has_buffers: 2867 + folio_unlock(folio); 2868 + folio_put(folio); 2869 + return block_truncate_page(mapping, from, get_block); 2852 2870 } 2853 2871 EXPORT_SYMBOL(nobh_truncate_page); 2854 2872 ··· 3155 3161 EXPORT_SYMBOL(sync_dirty_buffer); 3156 3162 3157 3163 /* 3158 - * try_to_free_buffers() checks if all the buffers on this particular page 3164 + * try_to_free_buffers() checks if all the buffers on this particular folio 3159 3165 * are unused, and releases them if so. 3160 3166 * 3161 3167 * Exclusion against try_to_free_buffers may be obtained by either 3162 - * locking the page or by holding its mapping's private_lock. 3168 + * locking the folio or by holding its mapping's private_lock. 3163 3169 * 3164 - * If the page is dirty but all the buffers are clean then we need to 3165 - * be sure to mark the page clean as well. 
This is because the page 3170 + * If the folio is dirty but all the buffers are clean then we need to 3171 + * be sure to mark the folio clean as well. This is because the folio 3166 3172 * may be against a block device, and a later reattachment of buffers 3167 - * to a dirty page will set *all* buffers dirty. Which would corrupt 3173 + * to a dirty folio will set *all* buffers dirty. Which would corrupt 3168 3174 * filesystem data on the same device. 3169 3175 * 3170 - * The same applies to regular filesystem pages: if all the buffers are 3171 - * clean then we set the page clean and proceed. To do that, we require 3176 + * The same applies to regular filesystem folios: if all the buffers are 3177 + * clean then we set the folio clean and proceed. To do that, we require 3172 3178 * total exclusion from block_dirty_folio(). That is obtained with 3173 3179 * private_lock. 3174 3180 * ··· 3180 3186 (bh->b_state & ((1 << BH_Dirty) | (1 << BH_Lock))); 3181 3187 } 3182 3188 3183 - static int 3184 - drop_buffers(struct page *page, struct buffer_head **buffers_to_free) 3189 + static bool 3190 + drop_buffers(struct folio *folio, struct buffer_head **buffers_to_free) 3185 3191 { 3186 - struct buffer_head *head = page_buffers(page); 3192 + struct buffer_head *head = folio_buffers(folio); 3187 3193 struct buffer_head *bh; 3188 3194 3189 3195 bh = head; ··· 3201 3207 bh = next; 3202 3208 } while (bh != head); 3203 3209 *buffers_to_free = head; 3204 - detach_page_private(page); 3205 - return 1; 3210 + folio_detach_private(folio); 3211 + return true; 3206 3212 failed: 3207 - return 0; 3213 + return false; 3208 3214 } 3209 3215 3210 - int try_to_free_buffers(struct page *page) 3216 + bool try_to_free_buffers(struct folio *folio) 3211 3217 { 3212 - struct address_space * const mapping = page->mapping; 3218 + struct address_space * const mapping = folio->mapping; 3213 3219 struct buffer_head *buffers_to_free = NULL; 3214 - int ret = 0; 3220 + bool ret = 0; 3215 3221 3216 - 
BUG_ON(!PageLocked(page)); 3217 - if (PageWriteback(page)) 3218 - return 0; 3222 + BUG_ON(!folio_test_locked(folio)); 3223 + if (folio_test_writeback(folio)) 3224 + return false; 3219 3225 3220 3226 if (mapping == NULL) { /* can this still happen? */ 3221 - ret = drop_buffers(page, &buffers_to_free); 3227 + ret = drop_buffers(folio, &buffers_to_free); 3222 3228 goto out; 3223 3229 } 3224 3230 3225 3231 spin_lock(&mapping->private_lock); 3226 - ret = drop_buffers(page, &buffers_to_free); 3232 + ret = drop_buffers(folio, &buffers_to_free); 3227 3233 3228 3234 /* 3229 3235 * If the filesystem writes its buffers by hand (eg ext3) 3230 - * then we can have clean buffers against a dirty page. We 3231 - * clean the page here; otherwise the VM will never notice 3236 + * then we can have clean buffers against a dirty folio. We 3237 + * clean the folio here; otherwise the VM will never notice 3232 3238 * that the filesystem did any IO at all. 3233 3239 * 3234 3240 * Also, during truncate, discard_buffer will have marked all 3235 - * the page's buffers clean. We discover that here and clean 3236 - * the page also. 3241 + * the folio's buffers clean. We discover that here and clean 3242 + * the folio also. 3237 3243 * 3238 3244 * private_lock must be held over this entire operation in order 3239 3245 * to synchronise against block_dirty_folio and prevent the 3240 3246 * dirty bit from being lost. 3241 3247 */ 3242 3248 if (ret) 3243 - cancel_dirty_page(page); 3249 + folio_cancel_dirty(folio); 3244 3250 spin_unlock(&mapping->private_lock); 3245 3251 out: 3246 3252 if (buffers_to_free) {
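The nobh_truncate_page() restructure above moves the `has_buffers:` fallback below the normal exit paths, so the unlock/put pair is written once instead of being duplicated inside the early-return branch. A minimal userspace sketch of that goto-cleanup shape (all `toy_*` names are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the nobh_truncate_page() restructure: the "has_buffers"
 * fallback now lives after the normal exit path, so the unlock/put pair
 * is shared rather than duplicated.  Nothing here is real kernel API. */

struct toy_folio {
	bool locked;
	int refcount;
	bool has_buffers;
};

static int fallback_calls;

static int toy_fallback(void)
{
	/* stands in for the block_truncate_page() fallback */
	fallback_calls++;
	return 0;
}

static int toy_truncate(struct toy_folio *folio)
{
	int err;

	folio->locked = true;		/* like __filemap_get_folio(FGP_LOCK) */
	folio->refcount++;

	if (folio->has_buffers)
		goto has_buffers;

	err = 0;			/* the normal zeroing path would run here */
	folio->locked = false;
	folio->refcount--;
	return err;

has_buffers:
	/* single unlock/put shared with the fallback path */
	folio->locked = false;
	folio->refcount--;
	return toy_fallback();
}
```

Both paths leave the folio unlocked with its reference dropped, which is the invariant the restructure preserves.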
+16 -16
fs/ceph/addr.c
··· 162 162 folio_wait_fscache(folio); 163 163 } 164 164 165 - static int ceph_releasepage(struct page *page, gfp_t gfp) 165 + static bool ceph_release_folio(struct folio *folio, gfp_t gfp) 166 166 { 167 - struct inode *inode = page->mapping->host; 167 + struct inode *inode = folio->mapping->host; 168 168 169 - dout("%llx:%llx releasepage %p idx %lu (%sdirty)\n", 170 - ceph_vinop(inode), page, 171 - page->index, PageDirty(page) ? "" : "not "); 169 + dout("%llx:%llx release_folio idx %lu (%sdirty)\n", 170 + ceph_vinop(inode), 171 + folio->index, folio_test_dirty(folio) ? "" : "not "); 172 172 173 - if (PagePrivate(page)) 174 - return 0; 173 + if (folio_test_private(folio)) 174 + return false; 175 175 176 - if (PageFsCache(page)) { 176 + if (folio_test_fscache(folio)) { 177 177 if (current_is_kswapd() || !(gfp & __GFP_FS)) 178 - return 0; 179 - wait_on_page_fscache(page); 178 + return false; 179 + folio_wait_fscache(folio); 180 180 } 181 181 ceph_fscache_note_page_release(inode); 182 - return 1; 182 + return true; 183 183 } 184 184 185 185 static void ceph_netfs_expand_readahead(struct netfs_io_request *rreq) ··· 1314 1314 * clean, or already dirty within the same snap context. 
1315 1315 */ 1316 1316 static int ceph_write_begin(struct file *file, struct address_space *mapping, 1317 - loff_t pos, unsigned len, unsigned aop_flags, 1317 + loff_t pos, unsigned len, 1318 1318 struct page **pagep, void **fsdata) 1319 1319 { 1320 1320 struct inode *inode = file_inode(file); 1321 1321 struct folio *folio = NULL; 1322 1322 int r; 1323 1323 1324 - r = netfs_write_begin(file, inode->i_mapping, pos, len, 0, &folio, NULL); 1324 + r = netfs_write_begin(file, inode->i_mapping, pos, len, &folio, NULL); 1325 1325 if (r == 0) 1326 1326 folio_wait_fscache(folio); 1327 1327 if (r < 0) { ··· 1375 1375 } 1376 1376 1377 1377 const struct address_space_operations ceph_aops = { 1378 - .readpage = netfs_readpage, 1378 + .read_folio = netfs_read_folio, 1379 1379 .readahead = netfs_readahead, 1380 1380 .writepage = ceph_writepage, 1381 1381 .writepages = ceph_writepages_start, ··· 1383 1383 .write_end = ceph_write_end, 1384 1384 .dirty_folio = ceph_dirty_folio, 1385 1385 .invalidate_folio = ceph_invalidate_folio, 1386 - .releasepage = ceph_releasepage, 1386 + .release_folio = ceph_release_folio, 1387 1387 .direct_IO = noop_direct_IO, 1388 1388 }; 1389 1389 ··· 1775 1775 { 1776 1776 struct address_space *mapping = file->f_mapping; 1777 1777 1778 - if (!mapping->a_ops->readpage) 1778 + if (!mapping->a_ops->read_folio) 1779 1779 return -ENOEXEC; 1780 1780 file_accessed(file); 1781 1781 vma->vm_ops = &ceph_vmops;
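ceph_release_folio() above encodes the decision logic that recurs across the `release_folio` conversions in this series: private data blocks release outright, and an fscache-busy folio may only be waited on when the caller is allowed to enter the filesystem. A hedged userspace model of that decision, with stand-in flag names rather than real gfp bits:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model of the ->release_folio() decision that ceph (and
 * cifs) implement.  TOY_GFP_FS stands in for __GFP_FS; none of these
 * names are kernel API. */

#define TOY_GFP_FS	0x1u

static bool toy_release_folio(bool has_private, bool in_fscache,
			      bool is_kswapd, unsigned int gfp)
{
	if (has_private)
		return false;		/* buffers/private state still attached */
	if (in_fscache) {
		if (is_kswapd || !(gfp & TOY_GFP_FS))
			return false;	/* must not sleep waiting on fscache */
		/* the real code calls folio_wait_fscache() here */
	}
	return true;			/* folio may be released */
}
```

Note that the return type is now `bool`: the old int-returning `releasepage` used 0/1 for the same meaning.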
+16 -15
fs/cifs/file.c
··· 4612 4612 return rc; 4613 4613 } 4614 4614 4615 - static int cifs_readpage(struct file *file, struct page *page) 4615 + static int cifs_read_folio(struct file *file, struct folio *folio) 4616 4616 { 4617 + struct page *page = &folio->page; 4617 4618 loff_t offset = page_file_offset(page); 4618 4619 int rc = -EACCES; 4619 4620 unsigned int xid; ··· 4627 4626 return rc; 4628 4627 } 4629 4628 4630 - cifs_dbg(FYI, "readpage %p at offset %d 0x%x\n", 4629 + cifs_dbg(FYI, "read_folio %p at offset %d 0x%x\n", 4631 4630 page, (int)offset, (int)offset); 4632 4631 4633 4632 rc = cifs_readpage_worker(file, page, &offset); ··· 4682 4681 } 4683 4682 4684 4683 static int cifs_write_begin(struct file *file, struct address_space *mapping, 4685 - loff_t pos, unsigned len, unsigned flags, 4684 + loff_t pos, unsigned len, 4686 4685 struct page **pagep, void **fsdata) 4687 4686 { 4688 4687 int oncethru = 0; ··· 4696 4695 cifs_dbg(FYI, "write_begin from %lld len %d\n", (long long)pos, len); 4697 4696 4698 4697 start: 4699 - page = grab_cache_page_write_begin(mapping, index, flags); 4698 + page = grab_cache_page_write_begin(mapping, index); 4700 4699 if (!page) { 4701 4700 rc = -ENOMEM; 4702 4701 goto out; ··· 4758 4757 return rc; 4759 4758 } 4760 4759 4761 - static int cifs_release_page(struct page *page, gfp_t gfp) 4760 + static bool cifs_release_folio(struct folio *folio, gfp_t gfp) 4762 4761 { 4763 - if (PagePrivate(page)) 4762 + if (folio_test_private(folio)) 4764 4763 return 0; 4765 - if (PageFsCache(page)) { 4764 + if (folio_test_fscache(folio)) { 4766 4765 if (current_is_kswapd() || !(gfp & __GFP_FS)) 4767 4766 return false; 4768 - wait_on_page_fscache(page); 4767 + folio_wait_fscache(folio); 4769 4768 } 4770 - fscache_note_page_release(cifs_inode_cookie(page->mapping->host)); 4769 + fscache_note_page_release(cifs_inode_cookie(folio->mapping->host)); 4771 4770 return true; 4772 4771 } 4773 4772 ··· 4966 4965 #endif 4967 4966 4968 4967 const struct address_space_operations 
cifs_addr_ops = { 4969 - .readpage = cifs_readpage, 4968 + .read_folio = cifs_read_folio, 4970 4969 .readahead = cifs_readahead, 4971 4970 .writepage = cifs_writepage, 4972 4971 .writepages = cifs_writepages, 4973 4972 .write_begin = cifs_write_begin, 4974 4973 .write_end = cifs_write_end, 4975 4974 .dirty_folio = cifs_dirty_folio, 4976 - .releasepage = cifs_release_page, 4975 + .release_folio = cifs_release_folio, 4977 4976 .direct_IO = cifs_direct_io, 4978 4977 .invalidate_folio = cifs_invalidate_folio, 4979 4978 .launder_folio = cifs_launder_folio, ··· 4987 4986 }; 4988 4987 4989 4988 /* 4990 - * cifs_readpages requires the server to support a buffer large enough to 4989 + * cifs_readahead requires the server to support a buffer large enough to 4991 4990 * contain the header plus one complete page of data. Otherwise, we need 4992 - * to leave cifs_readpages out of the address space operations. 4991 + * to leave cifs_readahead out of the address space operations. 4993 4992 */ 4994 4993 const struct address_space_operations cifs_addr_ops_smallbuf = { 4995 - .readpage = cifs_readpage, 4994 + .read_folio = cifs_read_folio, 4996 4995 .writepage = cifs_writepage, 4997 4996 .writepages = cifs_writepages, 4998 4997 .write_begin = cifs_write_begin, 4999 4998 .write_end = cifs_write_end, 5000 4999 .dirty_folio = cifs_dirty_folio, 5001 - .releasepage = cifs_release_page, 5000 + .release_folio = cifs_release_folio, 5002 5001 .invalidate_folio = cifs_invalidate_folio, 5003 5002 .launder_folio = cifs_launder_folio, 5004 5003 };
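Several conversions in this series (cifs_read_folio() above, and the coda, cramfs, ecryptfs, and z_erofs conversions below) keep calling page-based helpers through a transitional `struct page *page = &folio->page;` bridge. That works because a single-page folio and its page occupy the same address. A toy model of why the bridge is safe (the `demo_*` types are stand-ins for the kernel's page/folio union):

```c
#include <assert.h>

/* Toy model of the transitional "&folio->page" bridge: the folio embeds
 * the page as its first member, so the two pointers alias and legacy
 * page-based helpers keep working during the conversion. */

struct demo_page {
	unsigned long index;
};

struct demo_folio {
	struct demo_page page;	/* first member, like the kernel's layout */
};

static unsigned long read_via_page(struct demo_page *page)
{
	return page->index;	/* legacy helper that still takes a page */
}

static unsigned long demo_read_folio(struct demo_folio *folio)
{
	struct demo_page *page = &folio->page;	/* the bridge */

	return read_via_page(page);
}
```

Once a filesystem's helpers are themselves converted, the bridge line disappears, as the erofs and nfs patches in this series show.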
+4 -3
fs/coda/symlink.c
··· 20 20 #include "coda_psdev.h" 21 21 #include "coda_linux.h" 22 22 23 - static int coda_symlink_filler(struct file *file, struct page *page) 23 + static int coda_symlink_filler(struct file *file, struct folio *folio) 24 24 { 25 - struct inode *inode = page->mapping->host; 25 + struct page *page = &folio->page; 26 + struct inode *inode = folio->mapping->host; 26 27 int error; 27 28 struct coda_inode_info *cii; 28 29 unsigned int len = PAGE_SIZE; ··· 45 44 } 46 45 47 46 const struct address_space_operations coda_symlink_aops = { 48 - .readpage = coda_symlink_filler, 47 + .read_folio = coda_symlink_filler, 49 48 };
+4 -4
fs/cramfs/README
··· 115 115 116 116 (Block size in cramfs refers to the size of input data that is 117 117 compressed at a time. It's intended to be somewhere around 118 - PAGE_SIZE for cramfs_readpage's convenience.) 118 + PAGE_SIZE for cramfs_read_folio's convenience.) 119 119 120 120 The superblock ought to indicate the block size that the fs was 121 121 written for, since comments in <linux/pagemap.h> indicate that ··· 161 161 PAGE_SIZE. 162 162 163 163 It's easy enough to change the kernel to use a smaller value than 164 - PAGE_SIZE: just make cramfs_readpage read multiple blocks. 164 + PAGE_SIZE: just make cramfs_read_folio read multiple blocks. 165 165 166 166 The cost of option 1 is that kernels with a larger PAGE_SIZE 167 167 value don't get as good compression as they can. ··· 173 173 smaller PAGE_SIZE values. 174 174 175 175 Option 3 is easy to implement if we don't mind being CPU-inefficient: 176 - e.g. get readpage to decompress to a buffer of size MAX_BLKSIZE (which 176 + e.g. get read_folio to decompress to a buffer of size MAX_BLKSIZE (which 177 177 must be no larger than 32KB) and discard what it doesn't need. 178 - Getting readpage to read into all the covered pages is harder. 178 + Getting read_folio to read into all the covered pages is harder. 179 179 180 180 The main advantage of option 3 over 1, 2, is better compression. The 181 181 cost is greater complexity. Probably not worth it, but I hope someone
+4 -3
fs/cramfs/inode.c
··· 414 414 /* 415 415 * Let's create a mixed map if we can't map it all. 416 416 * The normal paging machinery will take care of the 417 - * unpopulated ptes via cramfs_readpage(). 417 + * unpopulated ptes via cramfs_read_folio(). 418 418 */ 419 419 int i; 420 420 vma->vm_flags |= VM_MIXEDMAP; ··· 814 814 return d_splice_alias(inode, dentry); 815 815 } 816 816 817 - static int cramfs_readpage(struct file *file, struct page *page) 817 + static int cramfs_read_folio(struct file *file, struct folio *folio) 818 818 { 819 + struct page *page = &folio->page; 819 820 struct inode *inode = page->mapping->host; 820 821 u32 maxblock; 821 822 int bytes_filled; ··· 926 925 } 927 926 928 927 static const struct address_space_operations cramfs_aops = { 929 - .readpage = cramfs_readpage 928 + .read_folio = cramfs_read_folio 930 929 }; 931 930 932 931 /*
+8 -7
fs/ecryptfs/mmap.c
··· 170 170 } 171 171 172 172 /** 173 - * ecryptfs_readpage 173 + * ecryptfs_read_folio 174 174 * @file: An eCryptfs file 175 - * @page: Page from eCryptfs inode mapping into which to stick the read data 175 + * @folio: Folio from eCryptfs inode mapping into which to stick the read data 176 176 * 177 - * Read in a page, decrypting if necessary. 177 + * Read in a folio, decrypting if necessary. 178 178 * 179 179 * Returns zero on success; non-zero on error. 180 180 */ 181 - static int ecryptfs_readpage(struct file *file, struct page *page) 181 + static int ecryptfs_read_folio(struct file *file, struct folio *folio) 182 182 { 183 + struct page *page = &folio->page; 183 184 struct ecryptfs_crypt_stat *crypt_stat = 184 185 &ecryptfs_inode_to_private(page->mapping->host)->crypt_stat; 185 186 int rc = 0; ··· 265 264 */ 266 265 static int ecryptfs_write_begin(struct file *file, 267 266 struct address_space *mapping, 268 - loff_t pos, unsigned len, unsigned flags, 267 + loff_t pos, unsigned len, 269 268 struct page **pagep, void **fsdata) 270 269 { 271 270 pgoff_t index = pos >> PAGE_SHIFT; ··· 273 272 loff_t prev_page_end_size; 274 273 int rc = 0; 275 274 276 - page = grab_cache_page_write_begin(mapping, index, flags); 275 + page = grab_cache_page_write_begin(mapping, index); 277 276 if (!page) 278 277 return -ENOMEM; 279 278 *pagep = page; ··· 550 549 .invalidate_folio = block_invalidate_folio, 551 550 #endif 552 551 .writepage = ecryptfs_writepage, 553 - .readpage = ecryptfs_readpage, 552 + .read_folio = ecryptfs_read_folio, 554 553 .write_begin = ecryptfs_write_begin, 555 554 .write_end = ecryptfs_write_end, 556 555 .bmap = ecryptfs_bmap,
+5 -3
fs/efs/inode.c
··· 14 14 #include "efs.h" 15 15 #include <linux/efs_fs_sb.h> 16 16 17 - static int efs_readpage(struct file *file, struct page *page) 17 + static int efs_read_folio(struct file *file, struct folio *folio) 18 18 { 19 - return block_read_full_page(page,efs_get_block); 19 + return block_read_full_folio(folio, efs_get_block); 20 20 } 21 + 21 22 static sector_t _efs_bmap(struct address_space *mapping, sector_t block) 22 23 { 23 24 return generic_block_bmap(mapping,block,efs_get_block); 24 25 } 26 + 25 27 static const struct address_space_operations efs_aops = { 26 - .readpage = efs_readpage, 28 + .read_folio = efs_read_folio, 27 29 .bmap = _efs_bmap 28 30 }; 29 31
+3 -2
fs/efs/symlink.c
··· 12 12 #include <linux/buffer_head.h> 13 13 #include "efs.h" 14 14 15 - static int efs_symlink_readpage(struct file *file, struct page *page) 15 + static int efs_symlink_read_folio(struct file *file, struct folio *folio) 16 16 { 17 + struct page *page = &folio->page; 17 18 char *link = page_address(page); 18 19 struct buffer_head * bh; 19 20 struct inode * inode = page->mapping->host; ··· 50 49 } 51 50 52 51 const struct address_space_operations efs_symlink_aops = { 53 - .readpage = efs_symlink_readpage 52 + .read_folio = efs_symlink_read_folio 54 53 };
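Per the merge description, `filler_t` now takes a struct file pointer as its first argument, matching `->read_folio`, which is why symlink fillers like efs_symlink_read_folio() can serve directly as the aop. A hedged sketch of that shape with stand-in types (nothing here is the real kernel signature beyond the argument order):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the filler_t change: the callback takes a file pointer
 * first, so a read_cache_folio()-style helper can accept either a
 * dedicated filler or an address_space's ->read_folio directly.
 * The sym_* types are illustrative stand-ins. */

struct sym_file { int id; };
struct sym_folio { int data; };

typedef int (*sym_filler_t)(struct sym_file *, struct sym_folio *);

static int sym_symlink_filler(struct sym_file *file, struct sym_folio *folio)
{
	(void)file;		/* symlink fillers ignore the file pointer */
	folio->data = 7;	/* stands in for copying the link target */
	return 0;
}

static int sym_read_cache_folio(struct sym_folio *folio, sym_filler_t filler,
				struct sym_file *file)
{
	return filler(file, folio);
}
```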
+3 -3
fs/erofs/data.c
··· 351 351 * since we don't have write or truncate flows, so no inode 352 352 * locking needs to be held at the moment. 353 353 */ 354 - static int erofs_readpage(struct file *file, struct page *page) 354 + static int erofs_read_folio(struct file *file, struct folio *folio) 355 355 { 356 - return iomap_readpage(page, &erofs_iomap_ops); 356 + return iomap_read_folio(folio, &erofs_iomap_ops); 357 357 } 358 358 359 359 static void erofs_readahead(struct readahead_control *rac) ··· 408 408 409 409 /* for uncompressed (aligned) files and raw access for other files */ 410 410 const struct address_space_operations erofs_raw_access_aops = { 411 - .readpage = erofs_readpage, 411 + .read_folio = erofs_read_folio, 412 412 .readahead = erofs_readahead, 413 413 .bmap = erofs_bmap, 414 414 .direct_IO = noop_direct_IO,
+7 -9
fs/erofs/fscache.c
··· 205 205 return ret; 206 206 } 207 207 208 - static int erofs_fscache_meta_readpage(struct file *data, struct page *page) 208 + static int erofs_fscache_meta_read_folio(struct file *data, struct folio *folio) 209 209 { 210 210 int ret; 211 - struct folio *folio = page_folio(page); 212 211 struct super_block *sb = folio_mapping(folio)->host->i_sb; 213 212 struct netfs_io_request *rreq; 214 213 struct erofs_map_dev mdev = { ··· 231 232 return ret; 232 233 } 233 234 234 - static int erofs_fscache_readpage_inline(struct folio *folio, 235 + static int erofs_fscache_read_folio_inline(struct folio *folio, 235 236 struct erofs_map_blocks *map) 236 237 { 237 238 struct super_block *sb = folio_mapping(folio)->host->i_sb; ··· 258 259 return 0; 259 260 } 260 261 261 - static int erofs_fscache_readpage(struct file *file, struct page *page) 262 + static int erofs_fscache_read_folio(struct file *file, struct folio *folio) 262 263 { 263 - struct folio *folio = page_folio(page); 264 264 struct inode *inode = folio_mapping(folio)->host; 265 265 struct super_block *sb = inode->i_sb; 266 266 struct erofs_map_blocks map; ··· 284 286 } 285 287 286 288 if (map.m_flags & EROFS_MAP_META) { 287 - ret = erofs_fscache_readpage_inline(folio, &map); 289 + ret = erofs_fscache_read_folio_inline(folio, &map); 288 290 goto out_uptodate; 289 291 } 290 292 ··· 374 376 if (map.m_flags & EROFS_MAP_META) { 375 377 struct folio *folio = readahead_folio(rac); 376 378 377 - ret = erofs_fscache_readpage_inline(folio, &map); 379 + ret = erofs_fscache_read_folio_inline(folio, &map); 378 380 if (!ret) { 379 381 folio_mark_uptodate(folio); 380 382 ret = folio_size(folio); ··· 408 410 } 409 411 410 412 static const struct address_space_operations erofs_fscache_meta_aops = { 411 - .readpage = erofs_fscache_meta_readpage, 413 + .read_folio = erofs_fscache_meta_read_folio, 412 414 }; 413 415 414 416 const struct address_space_operations erofs_fscache_access_aops = { 415 - .readpage = erofs_fscache_readpage, 417 
+ .read_folio = erofs_fscache_read_folio, 416 418 .readahead = erofs_fscache_readahead, 417 419 }; 418 420
+8 -8
fs/erofs/super.c
··· 578 578 #ifdef CONFIG_EROFS_FS_ZIP 579 579 static const struct address_space_operations managed_cache_aops; 580 580 581 - static int erofs_managed_cache_releasepage(struct page *page, gfp_t gfp_mask) 581 + static bool erofs_managed_cache_release_folio(struct folio *folio, gfp_t gfp) 582 582 { 583 - int ret = 1; /* 0 - busy */ 584 - struct address_space *const mapping = page->mapping; 583 + bool ret = true; 584 + struct address_space *const mapping = folio->mapping; 585 585 586 - DBG_BUGON(!PageLocked(page)); 586 + DBG_BUGON(!folio_test_locked(folio)); 587 587 DBG_BUGON(mapping->a_ops != &managed_cache_aops); 588 588 589 - if (PagePrivate(page)) 590 - ret = erofs_try_to_free_cached_page(page); 589 + if (folio_test_private(folio)) 590 + ret = erofs_try_to_free_cached_page(&folio->page); 591 591 592 592 return ret; 593 593 } ··· 608 608 DBG_BUGON(stop > folio_size(folio) || stop < length); 609 609 610 610 if (offset == 0 && stop == folio_size(folio)) 611 - while (!erofs_managed_cache_releasepage(&folio->page, GFP_NOFS)) 611 + while (!erofs_managed_cache_release_folio(folio, GFP_NOFS)) 612 612 cond_resched(); 613 613 } 614 614 615 615 static const struct address_space_operations managed_cache_aops = { 616 - .releasepage = erofs_managed_cache_releasepage, 616 + .release_folio = erofs_managed_cache_release_folio, 617 617 .invalidate_folio = erofs_managed_cache_invalidate_folio, 618 618 }; 619 619
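erofs_managed_cache_invalidate_folio() above handles a whole-folio invalidation by simply retrying release_folio until it succeeds, calling cond_resched() between attempts. A toy userspace model of that retry loop (names are stand-ins, and a counter simulates a temporarily pinned cache page):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model of the erofs managed-cache invalidation loop:
 * release may transiently fail while the page is in use, so the caller
 * just spins, yielding the CPU each round in the real code. */

static int busy_rounds;		/* simulated transient pin count */

static bool toy_release(void)
{
	if (busy_rounds > 0) {
		busy_rounds--;
		return false;	/* like release_folio returning false */
	}
	return true;
}

static int toy_invalidate_whole(void)
{
	int spins = 0;

	while (!toy_release())
		spins++;	/* the real loop calls cond_resched() here */
	return spins;
}
```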
+4 -3
fs/erofs/zdata.c
··· 791 791 static bool z_erofs_get_sync_decompress_policy(struct erofs_sb_info *sbi, 792 792 unsigned int readahead_pages) 793 793 { 794 - /* auto: enable for readpage, disable for readahead */ 794 + /* auto: enable for read_folio, disable for readahead */ 795 795 if ((sbi->opt.sync_decompress == EROFS_SYNC_DECOMPRESS_AUTO) && 796 796 !readahead_pages) 797 797 return true; ··· 1488 1488 } 1489 1489 } 1490 1490 1491 - static int z_erofs_readpage(struct file *file, struct page *page) 1491 + static int z_erofs_read_folio(struct file *file, struct folio *folio) 1492 1492 { 1493 + struct page *page = &folio->page; 1493 1494 struct inode *const inode = page->mapping->host; 1494 1495 struct erofs_sb_info *const sbi = EROFS_I_SB(inode); 1495 1496 struct z_erofs_decompress_frontend f = DECOMPRESS_FRONTEND_INIT(inode); ··· 1564 1563 } 1565 1564 1566 1565 const struct address_space_operations z_erofs_aops = { 1567 - .readpage = z_erofs_readpage, 1566 + .read_folio = z_erofs_read_folio, 1568 1567 .readahead = z_erofs_readahead, 1569 1568 };
+5 -5
fs/exfat/inode.c
··· 357 357 return err; 358 358 } 359 359 360 - static int exfat_readpage(struct file *file, struct page *page) 360 + static int exfat_read_folio(struct file *file, struct folio *folio) 361 361 { 362 - return mpage_readpage(page, exfat_get_block); 362 + return mpage_read_folio(folio, exfat_get_block); 363 363 } 364 364 365 365 static void exfat_readahead(struct readahead_control *rac) ··· 389 389 } 390 390 391 391 static int exfat_write_begin(struct file *file, struct address_space *mapping, 392 - loff_t pos, unsigned int len, unsigned int flags, 392 + loff_t pos, unsigned int len, 393 393 struct page **pagep, void **fsdata) 394 394 { 395 395 int ret; 396 396 397 397 *pagep = NULL; 398 - ret = cont_write_begin(file, mapping, pos, len, flags, pagep, fsdata, 398 + ret = cont_write_begin(file, mapping, pos, len, pagep, fsdata, 399 399 exfat_get_block, 400 400 &EXFAT_I(mapping->host)->i_size_ondisk); 401 401 ··· 492 492 static const struct address_space_operations exfat_aops = { 493 493 .dirty_folio = block_dirty_folio, 494 494 .invalidate_folio = block_invalidate_folio, 495 - .readpage = exfat_readpage, 495 + .read_folio = exfat_read_folio, 496 496 .readahead = exfat_readahead, 497 497 .writepage = exfat_writepage, 498 498 .writepages = exfat_writepages,
+8 -11
fs/ext2/inode.c
··· 875 875 return block_write_full_page(page, ext2_get_block, wbc); 876 876 } 877 877 878 - static int ext2_readpage(struct file *file, struct page *page) 878 + static int ext2_read_folio(struct file *file, struct folio *folio) 879 879 { 880 - return mpage_readpage(page, ext2_get_block); 880 + return mpage_read_folio(folio, ext2_get_block); 881 881 } 882 882 883 883 static void ext2_readahead(struct readahead_control *rac) ··· 887 887 888 888 static int 889 889 ext2_write_begin(struct file *file, struct address_space *mapping, 890 - loff_t pos, unsigned len, unsigned flags, 891 - struct page **pagep, void **fsdata) 890 + loff_t pos, unsigned len, struct page **pagep, void **fsdata) 892 891 { 893 892 int ret; 894 893 895 - ret = block_write_begin(mapping, pos, len, flags, pagep, 896 - ext2_get_block); 894 + ret = block_write_begin(mapping, pos, len, pagep, ext2_get_block); 897 895 if (ret < 0) 898 896 ext2_write_failed(mapping, pos + len); 899 897 return ret; ··· 911 913 912 914 static int 913 915 ext2_nobh_write_begin(struct file *file, struct address_space *mapping, 914 - loff_t pos, unsigned len, unsigned flags, 915 - struct page **pagep, void **fsdata) 916 + loff_t pos, unsigned len, struct page **pagep, void **fsdata) 916 917 { 917 918 int ret; 918 919 919 - ret = nobh_write_begin(mapping, pos, len, flags, pagep, fsdata, 920 + ret = nobh_write_begin(mapping, pos, len, pagep, fsdata, 920 921 ext2_get_block); 921 922 if (ret < 0) 922 923 ext2_write_failed(mapping, pos + len); ··· 966 969 const struct address_space_operations ext2_aops = { 967 970 .dirty_folio = block_dirty_folio, 968 971 .invalidate_folio = block_invalidate_folio, 969 - .readpage = ext2_readpage, 972 + .read_folio = ext2_read_folio, 970 973 .readahead = ext2_readahead, 971 974 .writepage = ext2_writepage, 972 975 .write_begin = ext2_write_begin, ··· 982 985 const struct address_space_operations ext2_nobh_aops = { 983 986 .dirty_folio = block_dirty_folio, 984 987 .invalidate_folio = 
block_invalidate_folio, 985 - .readpage = ext2_readpage, 988 + .read_folio = ext2_read_folio, 986 989 .readahead = ext2_readahead, 987 990 .writepage = ext2_nobh_writepage, 988 991 .write_begin = ext2_nobh_write_begin,
-2
fs/ext4/ext4.h
··· 3539 3539 extern int ext4_try_to_write_inline_data(struct address_space *mapping, 3540 3540 struct inode *inode, 3541 3541 loff_t pos, unsigned len, 3542 - unsigned flags, 3543 3542 struct page **pagep); 3544 3543 extern int ext4_write_inline_data_end(struct inode *inode, 3545 3544 loff_t pos, unsigned len, ··· 3551 3552 extern int ext4_da_write_inline_data_begin(struct address_space *mapping, 3552 3553 struct inode *inode, 3553 3554 loff_t pos, unsigned len, 3554 - unsigned flags, 3555 3555 struct page **pagep, 3556 3556 void **fsdata); 3557 3557 extern int ext4_try_add_inline_entry(handle_t *handle,
+19 -22
fs/ext4/inline.c
··· 527 527 } 528 528 529 529 static int ext4_convert_inline_data_to_extent(struct address_space *mapping, 530 - struct inode *inode, 531 - unsigned flags) 530 + struct inode *inode) 532 531 { 533 532 int ret, needed_blocks, no_expand; 534 533 handle_t *handle = NULL; 535 534 int retries = 0, sem_held = 0; 536 535 struct page *page = NULL; 536 + unsigned int flags; 537 537 unsigned from, to; 538 538 struct ext4_iloc iloc; 539 539 ··· 562 562 563 563 /* We cannot recurse into the filesystem as the transaction is already 564 564 * started */ 565 - flags |= AOP_FLAG_NOFS; 566 - 567 - page = grab_cache_page_write_begin(mapping, 0, flags); 565 + flags = memalloc_nofs_save(); 566 + page = grab_cache_page_write_begin(mapping, 0); 567 + memalloc_nofs_restore(flags); 568 568 if (!page) { 569 569 ret = -ENOMEM; 570 570 goto out; ··· 649 649 int ext4_try_to_write_inline_data(struct address_space *mapping, 650 650 struct inode *inode, 651 651 loff_t pos, unsigned len, 652 - unsigned flags, 653 652 struct page **pagep) 654 653 { 655 654 int ret; 656 655 handle_t *handle; 656 + unsigned int flags; 657 657 struct page *page; 658 658 struct ext4_iloc iloc; 659 659 ··· 691 691 if (ret) 692 692 goto out; 693 693 694 - flags |= AOP_FLAG_NOFS; 695 - 696 - page = grab_cache_page_write_begin(mapping, 0, flags); 694 + flags = memalloc_nofs_save(); 695 + page = grab_cache_page_write_begin(mapping, 0); 696 + memalloc_nofs_restore(flags); 697 697 if (!page) { 698 698 ret = -ENOMEM; 699 699 goto out; ··· 727 727 brelse(iloc.bh); 728 728 return ret; 729 729 convert: 730 - return ext4_convert_inline_data_to_extent(mapping, 731 - inode, flags); 730 + return ext4_convert_inline_data_to_extent(mapping, inode); 732 731 } 733 732 734 733 int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len, ··· 847 848 */ 848 849 static int ext4_da_convert_inline_data_to_extent(struct address_space *mapping, 849 850 struct inode *inode, 850 - unsigned flags, 851 851 void **fsdata) 852 852 { 
853 853 int ret = 0, inline_size; 854 854 struct page *page; 855 855 856 - page = grab_cache_page_write_begin(mapping, 0, flags); 856 + page = grab_cache_page_write_begin(mapping, 0); 857 857 if (!page) 858 858 return -ENOMEM; 859 859 ··· 905 907 int ext4_da_write_inline_data_begin(struct address_space *mapping, 906 908 struct inode *inode, 907 909 loff_t pos, unsigned len, 908 - unsigned flags, 909 910 struct page **pagep, 910 911 void **fsdata) 911 912 { ··· 913 916 struct page *page; 914 917 struct ext4_iloc iloc; 915 918 int retries = 0; 919 + unsigned int flags; 916 920 917 921 ret = ext4_get_inode_loc(inode, &iloc); 918 922 if (ret) ··· 930 932 if (ret && ret != -ENOSPC) 931 933 goto out_journal; 932 934 933 - /* 934 - * We cannot recurse into the filesystem as the transaction 935 - * is already started. 936 - */ 937 - flags |= AOP_FLAG_NOFS; 938 - 939 935 if (ret == -ENOSPC) { 940 936 ext4_journal_stop(handle); 941 937 ret = ext4_da_convert_inline_data_to_extent(mapping, 942 938 inode, 943 - flags, 944 939 fsdata); 945 940 if (ret == -ENOSPC && 946 941 ext4_should_retry_alloc(inode->i_sb, &retries)) ··· 941 950 goto out; 942 951 } 943 952 944 - page = grab_cache_page_write_begin(mapping, 0, flags); 953 + /* 954 + * We cannot recurse into the filesystem as the transaction 955 + * is already started. 956 + */ 957 + flags = memalloc_nofs_save(); 958 + page = grab_cache_page_write_begin(mapping, 0); 959 + memalloc_nofs_restore(flags); 945 960 if (!page) { 946 961 ret = -ENOMEM; 947 962 goto out_journal;
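The inline-data paths above replace `AOP_FLAG_NOFS` with a memalloc_nofs_save()/memalloc_nofs_restore() pair bracketing the page-cache allocation. A minimal sketch of that save/restore scoping, using a plain global in place of the kernel's per-task flags, to show why the pattern nests safely:

```c
#include <assert.h>

/* Toy model of the memalloc_nofs_save()/restore() pattern used in place
 * of AOP_FLAG_NOFS: save returns the old flag word, the NOFS bit is set
 * for the critical section, and restoring the old word makes nesting
 * safe.  TOY_PF_NOFS and toy_task_flags are illustrative stand-ins. */

#define TOY_PF_NOFS	0x1u

static unsigned int toy_task_flags;

static unsigned int toy_nofs_save(void)
{
	unsigned int old = toy_task_flags;

	toy_task_flags |= TOY_PF_NOFS;
	return old;
}

static void toy_nofs_restore(unsigned int old)
{
	toy_task_flags = old;	/* put back whatever the caller saw */
}
```

Because restore rewrites the saved word rather than clearing the bit, an inner save/restore pair inside an already-NOFS region leaves the outer region's state intact, which a bare clear would break.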
+24 -24
fs/ext4/inode.c
··· 1142 1142 #endif 1143 1143 1144 1144 static int ext4_write_begin(struct file *file, struct address_space *mapping, 1145 - loff_t pos, unsigned len, unsigned flags, 1145 + loff_t pos, unsigned len, 1146 1146 struct page **pagep, void **fsdata) 1147 1147 { 1148 1148 struct inode *inode = mapping->host; ··· 1156 1156 if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb)))) 1157 1157 return -EIO; 1158 1158 1159 - trace_ext4_write_begin(inode, pos, len, flags); 1159 + trace_ext4_write_begin(inode, pos, len); 1160 1160 /* 1161 1161 * Reserve one block more for addition to orphan list in case 1162 1162 * we allocate blocks but write fails for some reason ··· 1168 1168 1169 1169 if (ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA)) { 1170 1170 ret = ext4_try_to_write_inline_data(mapping, inode, pos, len, 1171 - flags, pagep); 1171 + pagep); 1172 1172 if (ret < 0) 1173 1173 return ret; 1174 1174 if (ret == 1) ··· 1183 1183 * the page (if needed) without using GFP_NOFS. 1184 1184 */ 1185 1185 retry_grab: 1186 - page = grab_cache_page_write_begin(mapping, index, flags); 1186 + page = grab_cache_page_write_begin(mapping, index); 1187 1187 if (!page) 1188 1188 return -ENOMEM; 1189 1189 unlock_page(page); ··· 2943 2943 } 2944 2944 2945 2945 static int ext4_da_write_begin(struct file *file, struct address_space *mapping, 2946 - loff_t pos, unsigned len, unsigned flags, 2946 + loff_t pos, unsigned len, 2947 2947 struct page **pagep, void **fsdata) 2948 2948 { 2949 2949 int ret, retries = 0; ··· 2959 2959 if (ext4_nonda_switch(inode->i_sb) || ext4_verity_in_progress(inode)) { 2960 2960 *fsdata = (void *)FALL_BACK_TO_NONDELALLOC; 2961 2961 return ext4_write_begin(file, mapping, pos, 2962 - len, flags, pagep, fsdata); 2962 + len, pagep, fsdata); 2963 2963 } 2964 2964 *fsdata = (void *)0; 2965 - trace_ext4_da_write_begin(inode, pos, len, flags); 2965 + trace_ext4_da_write_begin(inode, pos, len); 2966 2966 2967 2967 if (ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA)) { 2968 - ret = ext4_da_write_inline_data_begin(mapping, inode, 2969 - pos, len, flags, 2968 + ret = ext4_da_write_inline_data_begin(mapping, inode, pos, len, 2970 2969 pagep, fsdata); 2971 2970 if (ret < 0) 2972 2971 return ret; ··· 2974 2975 } 2975 2976 2976 2977 retry: 2977 - page = grab_cache_page_write_begin(mapping, index, flags); 2978 + page = grab_cache_page_write_begin(mapping, index); 2978 2979 if (!page) 2979 2980 return -ENOMEM; 2980 2981 ··· 3191 3192 return iomap_bmap(mapping, block, &ext4_iomap_ops); 3192 3193 } 3193 3194 3194 - static int ext4_readpage(struct file *file, struct page *page) 3195 + static int ext4_read_folio(struct file *file, struct folio *folio) 3195 3196 { 3197 + struct page *page = &folio->page; 3196 3198 int ret = -EAGAIN; 3197 3199 struct inode *inode = page->mapping->host; 3198 3200 ··· 3254 3254 WARN_ON(__ext4_journalled_invalidate_folio(folio, offset, length) < 0); 3255 3255 } 3256 3256 3257 - static int ext4_releasepage(struct page *page, gfp_t wait) 3257 + static bool ext4_release_folio(struct folio *folio, gfp_t wait) 3258 3258 { 3259 - journal_t *journal = EXT4_JOURNAL(page->mapping->host); 3259 + journal_t *journal = EXT4_JOURNAL(folio->mapping->host); 3260 3260 3261 - trace_ext4_releasepage(page); 3261 + trace_ext4_releasepage(&folio->page); 3262 3262 3263 3263 /* Page has dirty journalled data -> cannot release */ 3264 - if (PageChecked(page)) 3265 - return 0; 3264 + if (folio_test_checked(folio)) 3265 + return false; 3266 3266 if (journal) 3267 - return jbd2_journal_try_to_free_buffers(journal, page); 3267 + return jbd2_journal_try_to_free_buffers(journal, folio); 3268 3268 else 3269 - return try_to_free_buffers(page); 3269 + return try_to_free_buffers(folio); 3270 3270 } 3271 3271 3272 3272 static bool ext4_inode_datasync_dirty(struct inode *inode) ··· 3620 3620 } 3621 3621 3622 3622 static const struct address_space_operations ext4_aops = { 3623 - .readpage = ext4_readpage, 3623 + .read_folio = ext4_read_folio, 3624 3624 .readahead = ext4_readahead, 3625 3625 .writepage = ext4_writepage, 3626 3626 .writepages = ext4_writepages, ··· 3629 3629 .dirty_folio = ext4_dirty_folio, 3630 3630 .bmap = ext4_bmap, 3631 3631 .invalidate_folio = ext4_invalidate_folio, 3632 - .releasepage = ext4_releasepage, 3632 + .release_folio = ext4_release_folio, 3633 3633 .direct_IO = noop_direct_IO, 3634 3634 .migratepage = buffer_migrate_page, 3635 3635 .is_partially_uptodate = block_is_partially_uptodate, ··· 3638 3638 }; 3639 3639 3640 3640 static const struct address_space_operations ext4_journalled_aops = { 3641 - .readpage = ext4_readpage, 3641 + .read_folio = ext4_read_folio, 3642 3642 .readahead = ext4_readahead, 3643 3643 .writepage = ext4_writepage, 3644 3644 .writepages = ext4_writepages, ··· 3647 3647 .dirty_folio = ext4_journalled_dirty_folio, 3648 3648 .bmap = ext4_bmap, 3649 3649 .invalidate_folio = ext4_journalled_invalidate_folio, 3650 - .releasepage = ext4_releasepage, 3650 + .release_folio = ext4_release_folio, 3651 3651 .direct_IO = noop_direct_IO, 3652 3652 .is_partially_uptodate = block_is_partially_uptodate, 3653 3653 .error_remove_page = generic_error_remove_page, ··· 3655 3655 }; 3656 3656 3657 3657 static const struct address_space_operations ext4_da_aops = { 3658 - .readpage = ext4_readpage, 3658 + .read_folio = ext4_read_folio, 3659 3659 .readahead = ext4_readahead, 3660 3660 .writepage = ext4_writepage, 3661 3661 .writepages = ext4_writepages, ··· 3664 3664 .dirty_folio = ext4_dirty_folio, 3665 3665 .bmap = ext4_bmap, 3666 3666 .invalidate_folio = ext4_invalidate_folio, 3667 - .releasepage = ext4_releasepage, 3667 + .release_folio = ext4_release_folio, 3668 3668 .direct_IO = noop_direct_IO, 3669 3669 .migratepage = buffer_migrate_page, 3670 3670 .is_partially_uptodate = block_is_partially_uptodate,
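The ext4 hunk above shows the recurring shape of these conversions: the aops entry changes from `->readpage(file, page)` to `->read_folio(file, folio)`, and the old page-based body is kept by taking the folio's head page (`struct page *page = &folio->page;`). A minimal userspace sketch of that bridging pattern, assuming hypothetical stand-in types (`my_page`, `my_folio`, `legacy_read_page`) rather than real kernel structures:

```c
#include <stdbool.h>

/* Hypothetical stand-ins: in the kernel, a struct folio embeds a
 * struct page as its first member, so &folio->page is the head page. */
struct my_page {
	bool uptodate;
};

struct my_folio {
	struct my_page page;	/* must come first, as in the kernel */
};

/* Legacy helper that still operates on a page. */
static int legacy_read_page(struct my_page *page)
{
	page->uptodate = true;	/* pretend the read completed */
	return 0;
}

/* New-style entry point: same bridging shape as ext4_read_folio() above. */
int demo_read_folio(struct my_folio *folio)
{
	struct my_page *page = &folio->page;

	return legacy_read_page(page);
}
```

Because the page is embedded first, the conversion can change the interface file by file without rewriting every page-based body at once.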
+11 -6
fs/ext4/move_extent.c
··· 8 8 #include <linux/fs.h> 9 9 #include <linux/quotaops.h> 10 10 #include <linux/slab.h> 11 + #include <linux/sched/mm.h> 11 12 #include "ext4_jbd2.h" 12 13 #include "ext4.h" 13 14 #include "ext4_extents.h" ··· 128 127 pgoff_t index1, pgoff_t index2, struct page *page[2]) 129 128 { 130 129 struct address_space *mapping[2]; 131 - unsigned fl = AOP_FLAG_NOFS; 130 + unsigned int flags; 132 131 133 132 BUG_ON(!inode1 || !inode2); 134 133 if (inode1 < inode2) { ··· 140 139 mapping[1] = inode1->i_mapping; 141 140 } 142 141 143 - page[0] = grab_cache_page_write_begin(mapping[0], index1, fl); 144 - if (!page[0]) 142 + flags = memalloc_nofs_save(); 143 + page[0] = grab_cache_page_write_begin(mapping[0], index1); 144 + if (!page[0]) { 145 + memalloc_nofs_restore(flags); 145 146 return -ENOMEM; 147 + } 146 148 147 - page[1] = grab_cache_page_write_begin(mapping[1], index2, fl); 149 + page[1] = grab_cache_page_write_begin(mapping[1], index2); 150 + memalloc_nofs_restore(flags); 148 151 if (!page[1]) { 149 152 unlock_page(page[0]); 150 153 put_page(page[0]); ··· 669 664 * Up semaphore to avoid following problems: 670 665 * a. transaction deadlock among ext4_journal_start, 671 666 * ->write_begin via pagefault, and jbd2_journal_commit 672 - * b. racing with ->readpage, ->write_begin, and ext4_get_block 673 - * in move_extent_per_page 667 + * b. racing with ->read_folio, ->write_begin, and 668 + * ext4_get_block in move_extent_per_page 674 669 */ 675 670 ext4_double_up_write_data_sem(orig_inode, donor_inode); 676 671 /* Swap original branches with new branches */
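The move_extent.c hunk replaces the AOP_FLAG_NOFS argument with a memalloc_nofs_save()/memalloc_nofs_restore() pair scoped around the page-cache calls: the save marks the task so allocations implicitly behave as GFP_NOFS, and it returns the previous state so nested scopes restore correctly. A userspace sketch of that save/restore idiom — `task_flags`, `demo_memalloc_nofs_save()` and friends are hypothetical mimics, not the kernel implementation:

```c
/* Hypothetical mimic of the per-task flag behind memalloc_nofs_save():
 * not the kernel implementation, just the same save/restore contract. */
static unsigned int task_flags;

#define DEMO_PF_MEMALLOC_NOFS 0x1u

/* Mark the task NOFS; return the previous NOFS bit so that nested
 * save/restore pairs leave the outer scope's state intact. */
unsigned int demo_memalloc_nofs_save(void)
{
	unsigned int old = task_flags & DEMO_PF_MEMALLOC_NOFS;

	task_flags |= DEMO_PF_MEMALLOC_NOFS;
	return old;
}

void demo_memalloc_nofs_restore(unsigned int old)
{
	task_flags = (task_flags & ~DEMO_PF_MEMALLOC_NOFS) | old;
}

/* An allocation made inside the scope would implicitly drop __GFP_FS. */
int demo_alloc_is_nofs(void)
{
	return (task_flags & DEMO_PF_MEMALLOC_NOFS) != 0;
}
```

Scoping the context this way is why the flag argument could be dropped from grab_cache_page_write_begin() and the whole write_begin path.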
+2 -2
fs/ext4/readpage.c
··· 163 163 * 164 164 * The mpage code never puts partial pages into a BIO (except for end-of-file). 165 165 * If a page does not map to a contiguous run of blocks then it simply falls 166 - * back to block_read_full_page(). 166 + * back to block_read_full_folio(). 167 167 * 168 168 * Why is this? If a page's completion depends on a number of different BIOs 169 169 * which can complete in any order (or at the same time) then determining the ··· 394 394 bio = NULL; 395 395 } 396 396 if (!PageUptodate(page)) 397 - block_read_full_page(page, ext4_get_block); 397 + block_read_full_folio(page_folio(page), ext4_get_block); 398 398 else 399 399 unlock_page(page); 400 400 next_page:
+5 -4
fs/ext4/verity.c
··· 69 69 static int pagecache_write(struct inode *inode, const void *buf, size_t count, 70 70 loff_t pos) 71 71 { 72 + struct address_space *mapping = inode->i_mapping; 73 + const struct address_space_operations *aops = mapping->a_ops; 74 + 72 75 if (pos + count > inode->i_sb->s_maxbytes) 73 76 return -EFBIG; 74 77 ··· 82 79 void *fsdata; 83 80 int res; 84 81 85 - res = pagecache_write_begin(NULL, inode->i_mapping, pos, n, 0, 86 - &page, &fsdata); 82 + res = aops->write_begin(NULL, mapping, pos, n, &page, &fsdata); 87 83 if (res) 88 84 return res; 89 85 90 86 memcpy_to_page(page, offset_in_page(pos), buf, n); 91 87 92 - res = pagecache_write_end(NULL, inode->i_mapping, pos, n, n, 93 - page, fsdata); 88 + res = aops->write_end(NULL, mapping, pos, n, n, page, fsdata); 94 89 if (res < 0) 95 90 return res; 96 91 if (res != n)
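With pagecache_write_begin()/pagecache_write_end() removed, pagecache_write() above caches mapping->a_ops once and calls ->write_begin/->write_end through the ops table directly. A self-contained sketch of that function-pointer dispatch, using hypothetical `demo_*` types in place of struct address_space:

```c
/* Hypothetical miniature of struct address_space and its ops table. */
struct demo_mapping;

struct demo_aops {
	int (*write_begin)(struct demo_mapping *m, long pos, unsigned int len);
	int (*write_end)(struct demo_mapping *m, long pos, unsigned int len,
			 unsigned int copied);
};

struct demo_mapping {
	const struct demo_aops *a_ops;
	long bytes_written;
};

static int demo_write_begin(struct demo_mapping *m, long pos, unsigned int len)
{
	(void)m; (void)pos; (void)len;
	return 0;		/* nothing to prepare in this sketch */
}

static int demo_write_end(struct demo_mapping *m, long pos, unsigned int len,
			  unsigned int copied)
{
	(void)pos; (void)len;
	m->bytes_written += copied;
	return (int)copied;	/* write_end reports bytes accepted */
}

const struct demo_aops demo_ops = {
	.write_begin	= demo_write_begin,
	.write_end	= demo_write_end,
};

/* Caller side: fetch the ops pointer once, dispatch through it, as
 * pagecache_write() now does with mapping->a_ops. */
int demo_pagecache_write(struct demo_mapping *m, unsigned int n)
{
	const struct demo_aops *aops = m->a_ops;
	int res = aops->write_begin(m, 0, n);

	if (res)
		return res;
	/* ... copy the data into the page here ... */
	return aops->write_end(m, 0, n, n);
}
```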
+1 -1
fs/f2fs/checkpoint.c
··· 468 468 .writepages = f2fs_write_meta_pages, 469 469 .dirty_folio = f2fs_dirty_meta_folio, 470 470 .invalidate_folio = f2fs_invalidate_folio, 471 - .releasepage = f2fs_release_page, 471 + .release_folio = f2fs_release_folio, 472 472 #ifdef CONFIG_MIGRATION 473 473 .migratepage = f2fs_migrate_page, 474 474 #endif
+1 -1
fs/f2fs/compress.c
··· 1746 1746 } 1747 1747 1748 1748 const struct address_space_operations f2fs_compress_aops = { 1749 - .releasepage = f2fs_release_page, 1749 + .release_folio = f2fs_release_folio, 1750 1750 .invalidate_folio = f2fs_invalidate_folio, 1751 1751 }; 1752 1752
+22 -20
fs/f2fs/data.c
··· 2372 2372 return ret; 2373 2373 } 2374 2374 2375 - static int f2fs_read_data_page(struct file *file, struct page *page) 2375 + static int f2fs_read_data_folio(struct file *file, struct folio *folio) 2376 2376 { 2377 + struct page *page = &folio->page; 2377 2378 struct inode *inode = page_file_mapping(page)->host; 2378 2379 int ret = -EAGAIN; 2379 2380 ··· 3315 3314 } 3316 3315 3317 3316 static int f2fs_write_begin(struct file *file, struct address_space *mapping, 3318 - loff_t pos, unsigned len, unsigned flags, 3319 - struct page **pagep, void **fsdata) 3317 + loff_t pos, unsigned len, struct page **pagep, void **fsdata) 3320 3318 { 3321 3319 struct inode *inode = mapping->host; 3322 3320 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); ··· 3325 3325 block_t blkaddr = NULL_ADDR; 3326 3326 int err = 0; 3327 3327 3328 - trace_f2fs_write_begin(inode, pos, len, flags); 3328 + trace_f2fs_write_begin(inode, pos, len); 3329 3329 3330 3330 if (!f2fs_is_checkpoint_ready(sbi)) { 3331 3331 err = -ENOSPC; ··· 3528 3528 folio_detach_private(folio); 3529 3529 } 3530 3530 3531 - int f2fs_release_page(struct page *page, gfp_t wait) 3531 + bool f2fs_release_folio(struct folio *folio, gfp_t wait) 3532 3532 { 3533 - /* If this is dirty page, keep PagePrivate */ 3534 - if (PageDirty(page)) 3535 - return 0; 3533 + struct f2fs_sb_info *sbi; 3534 + 3535 + /* If this is dirty folio, keep private data */ 3536 + if (folio_test_dirty(folio)) 3537 + return false; 3536 3538 3537 3539 /* This is atomic written page, keep Private */ 3538 - if (page_private_atomic(page)) 3539 - return 0; 3540 + if (page_private_atomic(&folio->page)) 3541 + return false; 3540 3542 3541 - if (test_opt(F2FS_P_SB(page), COMPRESS_CACHE)) { 3542 - struct inode *inode = page->mapping->host; 3543 + sbi = F2FS_M_SB(folio->mapping); 3544 + if (test_opt(sbi, COMPRESS_CACHE)) { 3545 + struct inode *inode = folio->mapping->host; 3543 3546 3544 - if (inode->i_ino == F2FS_COMPRESS_INO(F2FS_I_SB(inode))) 3545 - clear_page_private_data(page); 3547 + if (inode->i_ino == F2FS_COMPRESS_INO(sbi)) 3548 + clear_page_private_data(&folio->page); 3546 3549 } 3547 3550 3548 - clear_page_private_gcing(page); 3551 + clear_page_private_gcing(&folio->page); 3549 3552 3550 - detach_page_private(page); 3551 - set_page_private(page, 0); 3552 - return 1; 3553 + folio_detach_private(folio); 3554 + return true; 3553 3555 } 3554 3556 3555 3557 static bool f2fs_dirty_data_folio(struct address_space *mapping, ··· 3938 3936 #endif 3939 3937 3940 3938 const struct address_space_operations f2fs_dblock_aops = { 3941 - .readpage = f2fs_read_data_page, 3939 + .read_folio = f2fs_read_data_folio, 3942 3940 .readahead = f2fs_readahead, 3943 3941 .writepage = f2fs_write_data_page, 3944 3942 .writepages = f2fs_write_data_pages, ··· 3946 3944 .write_end = f2fs_write_end, 3947 3945 .dirty_folio = f2fs_dirty_data_folio, 3948 3946 .invalidate_folio = f2fs_invalidate_folio, 3949 - .releasepage = f2fs_release_page, 3947 + .release_folio = f2fs_release_folio, 3950 3948 .direct_IO = noop_direct_IO, 3951 3949 .bmap = f2fs_bmap, 3952 3950 .swap_activate = f2fs_swap_activate,
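f2fs_release_folio() above illustrates the new ->release_folio contract: return bool instead of int, where false means the folio's private state is still needed and it must not be freed, and true means it was released. A sketch of that decision with a hypothetical `demo_folio` holding just the two fields the real code tests:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical folio carrying only the state this decision inspects. */
struct demo_folio {
	bool dirty;	/* folio_test_dirty() */
	void *private;	/* cleared by folio_detach_private() */
};

/* Same contract as f2fs_release_folio() above: refuse to release a
 * dirty folio (false), otherwise strip private data and succeed (true). */
bool demo_release_folio(struct demo_folio *folio)
{
	if (folio->dirty)
		return false;

	folio->private = NULL;	/* folio_detach_private() */
	return true;
}
```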
+9 -2
fs/f2fs/f2fs.h
··· 18 18 #include <linux/kobject.h> 19 19 #include <linux/sched.h> 20 20 #include <linux/cred.h> 21 + #include <linux/sched/mm.h> 21 22 #include <linux/vmalloc.h> 22 23 #include <linux/bio.h> 23 24 #include <linux/blkdev.h> ··· 2655 2654 pgoff_t index, bool for_write) 2656 2655 { 2657 2656 struct page *page; 2657 + unsigned int flags; 2658 2658 2659 2659 if (IS_ENABLED(CONFIG_F2FS_FAULT_INJECTION)) { 2660 2660 if (!for_write) ··· 2675 2673 2676 2674 if (!for_write) 2677 2675 return grab_cache_page(mapping, index); 2678 - return grab_cache_page_write_begin(mapping, index, AOP_FLAG_NOFS); 2676 + 2677 + flags = memalloc_nofs_save(); 2678 + page = grab_cache_page_write_begin(mapping, index); 2679 + memalloc_nofs_restore(flags); 2680 + 2681 + return page; 2679 2682 } 2680 2683 2681 2684 static inline struct page *f2fs_pagecache_get_page( ··· 3768 3761 int compr_blocks, bool allow_balance); 3769 3762 void f2fs_write_failed(struct inode *inode, loff_t to); 3770 3763 void f2fs_invalidate_folio(struct folio *folio, size_t offset, size_t length); 3771 - int f2fs_release_page(struct page *page, gfp_t wait); 3764 + bool f2fs_release_folio(struct folio *folio, gfp_t wait); 3772 3765 #ifdef CONFIG_MIGRATION 3773 3766 int f2fs_migrate_page(struct address_space *mapping, struct page *newpage, 3774 3767 struct page *page, enum migrate_mode mode);
+1 -1
fs/f2fs/node.c
··· 2165 2165 .writepages = f2fs_write_node_pages, 2166 2166 .dirty_folio = f2fs_dirty_node_folio, 2167 2167 .invalidate_folio = f2fs_invalidate_folio, 2168 - .releasepage = f2fs_release_page, 2168 + .release_folio = f2fs_release_folio, 2169 2169 #ifdef CONFIG_MIGRATION 2170 2170 .migratepage = f2fs_migrate_page, 2171 2171 #endif
+1 -1
fs/f2fs/super.c
··· 2483 2483 tocopy = min_t(unsigned long, sb->s_blocksize - offset, 2484 2484 towrite); 2485 2485 retry: 2486 - err = a_ops->write_begin(NULL, mapping, off, tocopy, 0, 2486 + err = a_ops->write_begin(NULL, mapping, off, tocopy, 2487 2487 &page, &fsdata); 2488 2488 if (unlikely(err)) { 2489 2489 if (err == -ENOMEM) {
+5 -4
fs/f2fs/verity.c
··· 74 74 static int pagecache_write(struct inode *inode, const void *buf, size_t count, 75 75 loff_t pos) 76 76 { 77 + struct address_space *mapping = inode->i_mapping; 78 + const struct address_space_operations *aops = mapping->a_ops; 79 + 77 80 if (pos + count > inode->i_sb->s_maxbytes) 78 81 return -EFBIG; 79 82 ··· 88 85 void *addr; 89 86 int res; 90 87 91 - res = pagecache_write_begin(NULL, inode->i_mapping, pos, n, 0, 92 - &page, &fsdata); 88 + res = aops->write_begin(NULL, mapping, pos, n, &page, &fsdata); 93 89 if (res) 94 90 return res; 95 91 ··· 96 94 memcpy(addr + offset_in_page(pos), buf, n); 97 95 kunmap_atomic(addr); 98 96 99 - res = pagecache_write_end(NULL, inode->i_mapping, pos, n, n, 100 - page, fsdata); 97 + res = aops->write_end(NULL, mapping, pos, n, n, page, fsdata); 101 98 if (res < 0) 102 99 return res; 103 100 if (res != n)
+5 -5
fs/fat/inode.c
··· 205 205 return mpage_writepages(mapping, wbc, fat_get_block); 206 206 } 207 207 208 - static int fat_readpage(struct file *file, struct page *page) 208 + static int fat_read_folio(struct file *file, struct folio *folio) 209 209 { 210 - return mpage_readpage(page, fat_get_block); 210 + return mpage_read_folio(folio, fat_get_block); 211 211 } 212 212 213 213 static void fat_readahead(struct readahead_control *rac) ··· 226 226 } 227 227 228 228 static int fat_write_begin(struct file *file, struct address_space *mapping, 229 - loff_t pos, unsigned len, unsigned flags, 229 + loff_t pos, unsigned len, 230 230 struct page **pagep, void **fsdata) 231 231 { 232 232 int err; 233 233 234 234 *pagep = NULL; 235 - err = cont_write_begin(file, mapping, pos, len, flags, 235 + err = cont_write_begin(file, mapping, pos, len, 236 236 pagep, fsdata, fat_get_block, 237 237 &MSDOS_I(mapping->host)->mmu_private); 238 238 if (err < 0) ··· 344 344 static const struct address_space_operations fat_aops = { 345 345 .dirty_folio = block_dirty_folio, 346 346 .invalidate_folio = block_invalidate_folio, 347 - .readpage = fat_readpage, 347 + .read_folio = fat_read_folio, 348 348 .readahead = fat_readahead, 349 349 .writepage = fat_writepage, 350 350 .writepages = fat_writepages,
+8 -7
fs/freevxfs/vxfs_immed.c
··· 38 38 #include "vxfs_inode.h" 39 39 40 40 41 - static int vxfs_immed_readpage(struct file *, struct page *); 41 + static int vxfs_immed_read_folio(struct file *, struct folio *); 42 42 43 43 /* 44 44 * Address space operations for immed files and directories. 45 45 */ 46 46 const struct address_space_operations vxfs_immed_aops = { 47 - .readpage = vxfs_immed_readpage, 47 + .read_folio = vxfs_immed_read_folio, 48 48 }; 49 49 50 50 /** 51 - * vxfs_immed_readpage - read part of an immed inode into pagecache 51 + * vxfs_immed_read_folio - read part of an immed inode into pagecache 52 52 * @file: file context (unused) 53 - * @page: page frame to fill in. 53 + * @folio: folio to fill in. 54 54 * 55 55 * Description: 56 - * vxfs_immed_readpage reads a part of the immed area of the 56 + * vxfs_immed_read_folio reads a part of the immed area of the 57 57 * file that hosts @pp into the pagecache. 58 58 * 59 59 * Returns: 60 60 * Zero on success, else a negative error code. 61 61 * 62 62 * Locking status: 63 - * @page is locked and will be unlocked. 63 + * @folio is locked and will be unlocked. 64 64 */ 65 65 static int 66 - vxfs_immed_readpage(struct file *fp, struct page *pp) 66 + vxfs_immed_read_folio(struct file *fp, struct folio *folio) 67 67 { 68 + struct page *pp = &folio->page; 68 69 struct vxfs_inode_info *vip = VXFS_INO(pp->mapping->host); 69 70 u_int64_t offset = (u_int64_t)pp->index << PAGE_SHIFT; 70 71 caddr_t kaddr;
+8 -9
fs/freevxfs/vxfs_subr.c
··· 38 38 #include "vxfs_extern.h" 39 39 40 40 41 - static int vxfs_readpage(struct file *, struct page *); 41 + static int vxfs_read_folio(struct file *, struct folio *); 42 42 static sector_t vxfs_bmap(struct address_space *, sector_t); 43 43 44 44 const struct address_space_operations vxfs_aops = { 45 - .readpage = vxfs_readpage, 45 + .read_folio = vxfs_read_folio, 46 46 .bmap = vxfs_bmap, 47 47 }; 48 48 ··· 141 141 } 142 142 143 143 /** 144 - * vxfs_readpage - read one page synchronously into the pagecache 144 + * vxfs_read_folio - read one page synchronously into the pagecache 145 145 * @file: file context (unused) 146 - * @page: page frame to fill in. 146 + * @folio: folio to fill in. 147 147 * 148 148 * Description: 149 - * The vxfs_readpage routine reads @page synchronously into the 149 + * The vxfs_read_folio routine reads @folio synchronously into the 150 150 * pagecache. 151 151 * 152 152 * Returns: 153 153 * Zero on success, else a negative error code. 154 154 * 155 155 * Locking status: 156 - * @page is locked and will be unlocked. 156 + * @folio is locked and will be unlocked. 157 157 */ 158 - static int 159 - vxfs_readpage(struct file *file, struct page *page) 158 + static int vxfs_read_folio(struct file *file, struct folio *folio) 160 159 { 161 - return block_read_full_page(page, vxfs_getblk); 160 + return block_read_full_folio(folio, vxfs_getblk); 162 161 } 163 162 164 163 /**
+5 -5
fs/fuse/dir.c
··· 1957 1957 fi->rdc.version = 0; 1958 1958 } 1959 1959 1960 - static int fuse_symlink_readpage(struct file *null, struct page *page) 1960 + static int fuse_symlink_read_folio(struct file *null, struct folio *folio) 1961 1961 { 1962 - int err = fuse_readlink_page(page->mapping->host, page); 1962 + int err = fuse_readlink_page(folio->mapping->host, &folio->page); 1963 1963 1964 1964 if (!err) 1965 - SetPageUptodate(page); 1965 + folio_mark_uptodate(folio); 1966 1966 1967 - unlock_page(page); 1967 + folio_unlock(folio); 1968 1968 1969 1969 return err; 1970 1970 } 1971 1971 1972 1972 static const struct address_space_operations fuse_symlink_aops = { 1973 - .readpage = fuse_symlink_readpage, 1973 + .read_folio = fuse_symlink_read_folio, 1974 1974 }; 1975 1975 1976 1976 void fuse_init_symlink(struct inode *inode)
+6 -6
fs/fuse/file.c
··· 857 857 return 0; 858 858 } 859 859 860 - static int fuse_readpage(struct file *file, struct page *page) 860 + static int fuse_read_folio(struct file *file, struct folio *folio) 861 861 { 862 + struct page *page = &folio->page; 862 863 struct inode *inode = page->mapping->host; 863 864 int err; 864 865 ··· 1175 1174 break; 1176 1175 1177 1176 err = -ENOMEM; 1178 - page = grab_cache_page_write_begin(mapping, index, 0); 1177 + page = grab_cache_page_write_begin(mapping, index); 1179 1178 if (!page) 1180 1179 break; 1181 1180 ··· 2274 2273 * but how to implement it without killing performance need more thinking. 2275 2274 */ 2276 2275 static int fuse_write_begin(struct file *file, struct address_space *mapping, 2277 - loff_t pos, unsigned len, unsigned flags, 2278 - struct page **pagep, void **fsdata) 2276 + loff_t pos, unsigned len, struct page **pagep, void **fsdata) 2279 2277 { 2280 2278 pgoff_t index = pos >> PAGE_SHIFT; 2281 2279 struct fuse_conn *fc = get_fuse_conn(file_inode(file)); ··· 2284 2284 2285 2285 WARN_ON(!fc->writeback_cache); 2286 2286 2287 - page = grab_cache_page_write_begin(mapping, index, flags); 2287 + page = grab_cache_page_write_begin(mapping, index); 2288 2288 if (!page) 2289 2289 goto error; 2290 2290 ··· 3175 3175 }; 3176 3176 3177 3177 static const struct address_space_operations fuse_file_aops = { 3178 - .readpage = fuse_readpage, 3178 + .read_folio = fuse_read_folio, 3179 3179 .readahead = fuse_readahead, 3180 3180 .writepage = fuse_writepage, 3181 3181 .writepages = fuse_writepages,
+38 -43
fs/gfs2/aops.c
··· 464 464 return 0; 465 465 } 466 466 467 - 468 - static int __gfs2_readpage(void *file, struct page *page) 467 + /** 468 + * gfs2_read_folio - read a folio from a file 469 + * @file: The file to read 470 + * @folio: The folio in the file 471 + */ 472 + static int gfs2_read_folio(struct file *file, struct folio *folio) 469 473 { 470 - struct inode *inode = page->mapping->host; 474 + struct inode *inode = folio->mapping->host; 471 475 struct gfs2_inode *ip = GFS2_I(inode); 472 476 struct gfs2_sbd *sdp = GFS2_SB(inode); 473 477 int error; 474 478 475 479 if (!gfs2_is_jdata(ip) || 476 - (i_blocksize(inode) == PAGE_SIZE && !page_has_buffers(page))) { 477 - error = iomap_readpage(page, &gfs2_iomap_ops); 480 + (i_blocksize(inode) == PAGE_SIZE && !folio_buffers(folio))) { 481 + error = iomap_read_folio(folio, &gfs2_iomap_ops); 478 482 } else if (gfs2_is_stuffed(ip)) { 479 - error = stuffed_readpage(ip, page); 480 - unlock_page(page); 483 + error = stuffed_readpage(ip, &folio->page); 484 + folio_unlock(folio); 481 485 } else { 482 - error = mpage_readpage(page, gfs2_block_map); 486 + error = mpage_read_folio(folio, gfs2_block_map); 483 487 } 484 488 485 489 if (unlikely(gfs2_withdrawn(sdp))) 486 490 return -EIO; 487 491 488 492 return error; 489 - } 490 - 491 - /** 492 - * gfs2_readpage - read a page of a file 493 - * @file: The file to read 494 - * @page: The page of the file 495 - */ 496 - 497 - static int gfs2_readpage(struct file *file, struct page *page) 498 - { 499 - return __gfs2_readpage(file, page); 500 493 } 501 494 502 495 /** ··· 516 523 amt = size - copied; 517 524 if (offset + size > PAGE_SIZE) 518 525 amt = PAGE_SIZE - offset; 519 - page = read_cache_page(mapping, index, __gfs2_readpage, NULL); 526 + page = read_cache_page(mapping, index, gfs2_read_folio, NULL); 520 527 if (IS_ERR(page)) 521 528 return PTR_ERR(page); 522 529 p = kmap_atomic(page); ··· 691 698 } 692 699 693 700 /** 694 - * gfs2_releasepage - free the metadata associated with a page 695 - * @page: the page that's being released 701 + * gfs2_release_folio - free the metadata associated with a folio 702 + * @folio: the folio that's being released 696 703 * @gfp_mask: passed from Linux VFS, ignored by us 697 704 * 698 - * Calls try_to_free_buffers() to free the buffers and put the page if the 705 + * Calls try_to_free_buffers() to free the buffers and put the folio if the 699 706 * buffers can be released. 700 707 * 701 - * Returns: 1 if the page was put or else 0 708 + * Returns: true if the folio was put or else false 702 709 */ 703 710 704 - int gfs2_releasepage(struct page *page, gfp_t gfp_mask) 711 + bool gfs2_release_folio(struct folio *folio, gfp_t gfp_mask) 705 712 { 706 - struct address_space *mapping = page->mapping; 713 + struct address_space *mapping = folio->mapping; 707 714 struct gfs2_sbd *sdp = gfs2_mapping2sbd(mapping); 708 715 struct buffer_head *bh, *head; 709 716 struct gfs2_bufdata *bd; 710 717 711 - if (!page_has_buffers(page)) 712 - return 0; 718 + head = folio_buffers(folio); 719 + if (!head) 720 + return false; 713 721 714 722 /* 715 - * From xfs_vm_releasepage: mm accommodates an old ext3 case where 716 - * clean pages might not have had the dirty bit cleared. Thus, it can 717 - * send actual dirty pages to ->releasepage() via shrink_active_list(). 723 + * mm accommodates an old ext3 case where clean folios might 724 + * not have had the dirty bit cleared. Thus, it can send actual 725 + * dirty folios to ->release_folio() via shrink_active_list(). 718 726 * 719 - * As a workaround, we skip pages that contain dirty buffers below. 720 - * Once ->releasepage isn't called on dirty pages anymore, we can warn 721 - * on dirty buffers like we used to here again. 727 + * As a workaround, we skip folios that contain dirty buffers 728 + * below. Once ->release_folio isn't called on dirty folios 729 + * anymore, we can warn on dirty buffers like we used to here 730 + * again. 722 731 */ 723 732 724 733 gfs2_log_lock(sdp); 725 - head = bh = page_buffers(page); 734 + bh = head; 726 735 do { 727 736 if (atomic_read(&bh->b_count)) 728 737 goto cannot_release; ··· 734 739 if (buffer_dirty(bh) || WARN_ON(buffer_pinned(bh))) 735 740 goto cannot_release; 736 741 bh = bh->b_this_page; 737 - } while(bh != head); 742 + } while (bh != head); 738 743 739 - head = bh = page_buffers(page); 744 + bh = head; 740 745 do { 741 746 bd = bh->b_private; 742 747 if (bd) { ··· 757 762 } while (bh != head); 758 763 gfs2_log_unlock(sdp); 759 764 760 - return try_to_free_buffers(page); 765 + return try_to_free_buffers(folio); 761 766 762 767 cannot_release: 763 768 gfs2_log_unlock(sdp); 764 - return 0; 769 + return false; 765 770 } 766 771 767 772 static const struct address_space_operations gfs2_aops = { 768 773 .writepage = gfs2_writepage, 769 774 .writepages = gfs2_writepages, 770 - .readpage = gfs2_readpage, 775 + .read_folio = gfs2_read_folio, 771 776 .readahead = gfs2_readahead, 772 777 .dirty_folio = filemap_dirty_folio, 773 - .releasepage = iomap_releasepage, 778 + .release_folio = iomap_release_folio, 774 779 .invalidate_folio = iomap_invalidate_folio, 775 780 .bmap = gfs2_bmap, 776 781 .direct_IO = noop_direct_IO, ··· 782 787 static const struct address_space_operations gfs2_jdata_aops = { 783 788 .writepage = gfs2_jdata_writepage, 784 789 .writepages = gfs2_jdata_writepages, 785 - .readpage = gfs2_readpage, 790 + .read_folio = gfs2_read_folio, 786 791 .readahead = gfs2_readahead, 787 792 .dirty_folio = jdata_dirty_folio, 788 793 .bmap = gfs2_bmap, 789 794 .invalidate_folio = gfs2_invalidate_folio, 790 - .releasepage = gfs2_releasepage, 795 + .release_folio = gfs2_release_folio, 791 796 .is_partially_uptodate = block_is_partially_uptodate, 792 797 .error_remove_page = generic_error_remove_page, 793 798 };
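gfs2_release_folio() walks the folio's buffer_heads, which are linked through b_this_page into a circular ring whose last element points back at the head, hence the do/while loops that terminate when the walk returns to the head. A userspace sketch of that ring walk (`demo_bh` is a hypothetical stand-in for struct buffer_head):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical buffer_head: buffers on a page form a circular singly
 * linked ring through b_this_page (here: next). */
struct demo_bh {
	int count;		/* like the atomic b_count reference count */
	struct demo_bh *next;	/* b_this_page */
};

/* Link an array of buffers into a ring, last one pointing at the first. */
void demo_link_ring(struct demo_bh *bh, size_t n)
{
	for (size_t i = 0; i < n; i++)
		bh[i].next = &bh[(i + 1) % n];
}

/* The do/while walk from gfs2_release_folio(): visit every buffer once,
 * stopping when the walk wraps back to the head. */
bool demo_ring_all_free(struct demo_bh *head)
{
	struct demo_bh *bh = head;

	do {
		if (bh->count)
			return false;	/* still referenced: cannot release */
		bh = bh->next;
	} while (bh != head);

	return true;
}
```

The do/while form (rather than a plain while) matters because the ring is never empty: the head itself must be visited exactly once.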
+1 -1
fs/gfs2/inode.h
··· 12 12 #include <linux/mm.h> 13 13 #include "util.h" 14 14 15 - extern int gfs2_releasepage(struct page *page, gfp_t gfp_mask); 15 + bool gfs2_release_folio(struct folio *folio, gfp_t gfp_mask); 16 16 extern int gfs2_internal_read(struct gfs2_inode *ip, 17 17 char *buf, loff_t *pos, unsigned size); 18 18 extern void gfs2_set_aops(struct inode *inode);
+2 -2
fs/gfs2/meta_io.c
··· 92 92 .dirty_folio = block_dirty_folio, 93 93 .invalidate_folio = block_invalidate_folio, 94 94 .writepage = gfs2_aspace_writepage, 95 - .releasepage = gfs2_releasepage, 95 + .release_folio = gfs2_release_folio, 96 96 }; 97 97 98 98 const struct address_space_operations gfs2_rgrp_aops = { 99 99 .dirty_folio = block_dirty_folio, 100 100 .invalidate_folio = block_invalidate_folio, 101 101 .writepage = gfs2_aspace_writepage, 102 - .releasepage = gfs2_releasepage, 102 + .release_folio = gfs2_release_folio, 103 103 }; 104 104 105 105 /**
+3 -3
fs/hfs/extent.c
··· 491 491 492 492 /* XXX: Can use generic_cont_expand? */ 493 493 size = inode->i_size - 1; 494 - res = pagecache_write_begin(NULL, mapping, size+1, 0, 0, 495 - &page, &fsdata); 494 + res = hfs_write_begin(NULL, mapping, size + 1, 0, &page, 495 + &fsdata); 496 496 if (!res) { 497 - res = pagecache_write_end(NULL, mapping, size+1, 0, 0, 497 + res = generic_write_end(NULL, mapping, size + 1, 0, 0, 498 498 page, fsdata); 499 499 } 500 500 if (res)
+2
fs/hfs/hfs_fs.h
··· 201 201 extern const struct address_space_operations hfs_aops; 202 202 extern const struct address_space_operations hfs_btree_aops; 203 203 204 + int hfs_write_begin(struct file *file, struct address_space *mapping, 205 + loff_t pos, unsigned len, struct page **pagep, void **fsdata); 204 206 extern struct inode *hfs_new_inode(struct inode *, const struct qstr *, umode_t); 205 207 extern void hfs_inode_write_fork(struct inode *, struct hfs_extent *, __be32 *, __be32 *); 206 208 extern int hfs_write_inode(struct inode *, struct writeback_control *);
+19 -19
fs/hfs/inode.c
··· 34 34 return block_write_full_page(page, hfs_get_block, wbc); 35 35 } 36 36 37 - static int hfs_readpage(struct file *file, struct page *page) 37 + static int hfs_read_folio(struct file *file, struct folio *folio) 38 38 { 39 - return block_read_full_page(page, hfs_get_block); 39 + return block_read_full_folio(folio, hfs_get_block); 40 40 } 41 41 42 42 static void hfs_write_failed(struct address_space *mapping, loff_t to) ··· 49 49 } 50 50 } 51 51 52 - static int hfs_write_begin(struct file *file, struct address_space *mapping, 53 - loff_t pos, unsigned len, unsigned flags, 54 - struct page **pagep, void **fsdata) 52 + int hfs_write_begin(struct file *file, struct address_space *mapping, 53 + loff_t pos, unsigned len, struct page **pagep, void **fsdata) 55 54 { 56 55 int ret; 57 56 58 57 *pagep = NULL; 59 - ret = cont_write_begin(file, mapping, pos, len, flags, pagep, fsdata, 58 + ret = cont_write_begin(file, mapping, pos, len, pagep, fsdata, 60 59 hfs_get_block, 61 60 &HFS_I(mapping->host)->phys_size); 62 61 if (unlikely(ret)) ··· 69 70 return generic_block_bmap(mapping, block, hfs_get_block); 70 71 } 71 72 72 - static int hfs_releasepage(struct page *page, gfp_t mask) 73 + static bool hfs_release_folio(struct folio *folio, gfp_t mask) 73 74 { 74 - struct inode *inode = page->mapping->host; 75 + struct inode *inode = folio->mapping->host; 75 76 struct super_block *sb = inode->i_sb; 76 77 struct hfs_btree *tree; 77 78 struct hfs_bnode *node; 78 79 u32 nidx; 79 - int i, res = 1; 80 + int i; 81 + bool res = true; 80 82 81 83 switch (inode->i_ino) { 82 84 case HFS_EXT_CNID: ··· 88 88 break; 89 89 default: 90 90 BUG(); 91 - return 0; 91 + return false; 92 92 } 93 93 94 94 if (!tree) 95 - return 0; 95 + return false; 96 96 97 97 if (tree->node_size >= PAGE_SIZE) { 98 - nidx = page->index >> (tree->node_size_shift - PAGE_SHIFT); 98 + nidx = folio->index >> (tree->node_size_shift - PAGE_SHIFT); 99 99 spin_lock(&tree->hash_lock); 100 100 node = hfs_bnode_findhash(tree, nidx); 101 101 if (!node) 102 102 ; 103 103 else if (atomic_read(&node->refcnt)) 104 - res = 0; 104 + res = false; 105 105 if (res && node) { 106 106 hfs_bnode_unhash(node); 107 107 hfs_bnode_free(node); 108 108 } 109 109 spin_unlock(&tree->hash_lock); 110 110 } else { 111 - nidx = page->index << (PAGE_SHIFT - tree->node_size_shift); 111 + nidx = folio->index << (PAGE_SHIFT - tree->node_size_shift); 112 112 i = 1 << (PAGE_SHIFT - tree->node_size_shift); 113 113 spin_lock(&tree->hash_lock); 114 114 do { ··· 116 116 if (!node) 117 117 continue; 118 118 if (atomic_read(&node->refcnt)) { 119 - res = 0; 119 + res = false; 120 120 break; 121 121 } 122 122 hfs_bnode_unhash(node); ··· 124 124 } while (--i && nidx < tree->node_count); 125 125 spin_unlock(&tree->hash_lock); 126 126 } 127 - return res ? try_to_free_buffers(page) : 0; 127 + return res ? try_to_free_buffers(folio) : false; 128 128 } 129 129 130 130 static ssize_t hfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter) ··· 161 161 const struct address_space_operations hfs_btree_aops = { 162 162 .dirty_folio = block_dirty_folio, 163 163 .invalidate_folio = block_invalidate_folio, 164 - .readpage = hfs_readpage, 164 + .read_folio = hfs_read_folio, 165 165 .writepage = hfs_writepage, 166 166 .write_begin = hfs_write_begin, 167 167 .write_end = generic_write_end, 168 168 .bmap = hfs_bmap, 169 - .releasepage = hfs_releasepage, 169 + .release_folio = hfs_release_folio, 170 170 }; 171 171 172 172 const struct address_space_operations hfs_aops = { 173 173 .dirty_folio = block_dirty_folio, 174 174 .invalidate_folio = block_invalidate_folio, 175 - .readpage = hfs_readpage, 175 + .read_folio = hfs_read_folio, 176 176 .writepage = hfs_writepage, 177 177 .write_begin = hfs_write_begin, 178 178 .write_end = generic_write_end,
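hfs_release_folio() converts between folio indices and btree node indices with shifts in both directions: when node_size >= PAGE_SIZE the node index is folio->index >> (node_size_shift - PAGE_SHIFT), and when nodes are smaller than a page the first node in the folio is folio->index << (PAGE_SHIFT - node_size_shift), with 1 << (PAGE_SHIFT - node_size_shift) nodes per page. A small sketch of that arithmetic, assuming 4 KiB pages (PAGE_SHIFT == 12) — the `demo_*` helpers are illustrative, not hfs code:

```c
#define DEMO_PAGE_SHIFT 12	/* assume 4 KiB pages */

/* Nodes at least a page in size: which node contains this folio? */
unsigned long demo_node_of_folio(unsigned long folio_index,
				 unsigned int node_size_shift)
{
	return folio_index >> (node_size_shift - DEMO_PAGE_SHIFT);
}

/* Nodes smaller than a page: first node index inside this folio. */
unsigned long demo_first_node_in_folio(unsigned long folio_index,
				       unsigned int node_size_shift)
{
	return folio_index << (DEMO_PAGE_SHIFT - node_size_shift);
}

/* Nodes smaller than a page: how many nodes one page holds. */
unsigned int demo_nodes_per_page(unsigned int node_size_shift)
{
	return 1u << (DEMO_PAGE_SHIFT - node_size_shift);
}
```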
+4 -4
fs/hfsplus/extents.c
··· 557 557 void *fsdata; 558 558 loff_t size = inode->i_size; 559 559 560 - res = pagecache_write_begin(NULL, mapping, size, 0, 0, 561 - &page, &fsdata); 560 + res = hfsplus_write_begin(NULL, mapping, size, 0, 561 + &page, &fsdata); 562 562 if (res) 563 563 return; 564 - res = pagecache_write_end(NULL, mapping, size, 565 - 0, 0, page, fsdata); 564 + res = generic_write_end(NULL, mapping, size, 0, 0, 565 + page, fsdata); 566 566 if (res < 0) 567 567 return; 568 568 mark_inode_dirty(inode);
+2
fs/hfsplus/hfsplus_fs.h
··· 468 468 extern const struct address_space_operations hfsplus_btree_aops; 469 469 extern const struct dentry_operations hfsplus_dentry_operations; 470 470 471 + int hfsplus_write_begin(struct file *file, struct address_space *mapping, 472 + loff_t pos, unsigned len, struct page **pagep, void **fsdata); 471 473 struct inode *hfsplus_new_inode(struct super_block *sb, struct inode *dir, 472 474 umode_t mode); 473 475 void hfsplus_delete_inode(struct inode *inode);
+19 -19
fs/hfsplus/inode.c
··· 23 23 #include "hfsplus_raw.h" 24 24 #include "xattr.h" 25 25 26 - static int hfsplus_readpage(struct file *file, struct page *page) 26 + static int hfsplus_read_folio(struct file *file, struct folio *folio) 27 27 { 28 - return block_read_full_page(page, hfsplus_get_block); 28 + return block_read_full_folio(folio, hfsplus_get_block); 29 29 } 30 30 31 31 static int hfsplus_writepage(struct page *page, struct writeback_control *wbc) ··· 43 43 } 44 44 } 45 45 46 - static int hfsplus_write_begin(struct file *file, struct address_space *mapping, 47 - loff_t pos, unsigned len, unsigned flags, 48 - struct page **pagep, void **fsdata) 46 + int hfsplus_write_begin(struct file *file, struct address_space *mapping, 47 + loff_t pos, unsigned len, struct page **pagep, void **fsdata) 49 48 { 50 49 int ret; 51 50 52 51 *pagep = NULL; 53 - ret = cont_write_begin(file, mapping, pos, len, flags, pagep, fsdata, 52 + ret = cont_write_begin(file, mapping, pos, len, pagep, fsdata, 54 53 hfsplus_get_block, 55 54 &HFSPLUS_I(mapping->host)->phys_size); 56 55 if (unlikely(ret)) ··· 63 64 return generic_block_bmap(mapping, block, hfsplus_get_block); 64 65 } 65 66 66 - static int hfsplus_releasepage(struct page *page, gfp_t mask) 67 + static bool hfsplus_release_folio(struct folio *folio, gfp_t mask) 67 68 { 68 - struct inode *inode = page->mapping->host; 69 + struct inode *inode = folio->mapping->host; 69 70 struct super_block *sb = inode->i_sb; 70 71 struct hfs_btree *tree; 71 72 struct hfs_bnode *node; 72 73 u32 nidx; 73 - int i, res = 1; 74 + int i; 75 + bool res = true; 74 76 75 77 switch (inode->i_ino) { 76 78 case HFSPLUS_EXT_CNID: ··· 85 85 break; 86 86 default: 87 87 BUG(); 88 - return 0; 88 + return false; 89 89 } 90 90 if (!tree) 91 - return 0; 91 + return false; 92 92 if (tree->node_size >= PAGE_SIZE) { 93 - nidx = page->index >> 93 + nidx = folio->index >> 94 94 (tree->node_size_shift - PAGE_SHIFT); 95 95 spin_lock(&tree->hash_lock); 96 96 node = hfs_bnode_findhash(tree, 
nidx); 97 97 if (!node) 98 98 ; 99 99 else if (atomic_read(&node->refcnt)) 100 - res = 0; 100 + res = false; 101 101 if (res && node) { 102 102 hfs_bnode_unhash(node); 103 103 hfs_bnode_free(node); 104 104 } 105 105 spin_unlock(&tree->hash_lock); 106 106 } else { 107 - nidx = page->index << 107 + nidx = folio->index << 108 108 (PAGE_SHIFT - tree->node_size_shift); 109 109 i = 1 << (PAGE_SHIFT - tree->node_size_shift); 110 110 spin_lock(&tree->hash_lock); ··· 113 113 if (!node) 114 114 continue; 115 115 if (atomic_read(&node->refcnt)) { 116 - res = 0; 116 + res = false; 117 117 break; 118 118 } 119 119 hfs_bnode_unhash(node); ··· 121 121 } while (--i && nidx < tree->node_count); 122 122 spin_unlock(&tree->hash_lock); 123 123 } 124 - return res ? try_to_free_buffers(page) : 0; 124 + return res ? try_to_free_buffers(folio) : false; 125 125 } 126 126 127 127 static ssize_t hfsplus_direct_IO(struct kiocb *iocb, struct iov_iter *iter) ··· 158 158 const struct address_space_operations hfsplus_btree_aops = { 159 159 .dirty_folio = block_dirty_folio, 160 160 .invalidate_folio = block_invalidate_folio, 161 - .readpage = hfsplus_readpage, 161 + .read_folio = hfsplus_read_folio, 162 162 .writepage = hfsplus_writepage, 163 163 .write_begin = hfsplus_write_begin, 164 164 .write_end = generic_write_end, 165 165 .bmap = hfsplus_bmap, 166 - .releasepage = hfsplus_releasepage, 166 + .release_folio = hfsplus_release_folio, 167 167 }; 168 168 169 169 const struct address_space_operations hfsplus_aops = { 170 170 .dirty_folio = block_dirty_folio, 171 171 .invalidate_folio = block_invalidate_folio, 172 - .readpage = hfsplus_readpage, 172 + .read_folio = hfsplus_read_folio, 173 173 .writepage = hfsplus_writepage, 174 174 .write_begin = hfsplus_write_begin, 175 175 .write_end = generic_write_end,
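The hfsplus hunk above is representative of the whole series: `->releasepage` returning int 0/1 becomes `->release_folio` returning bool, and the final `try_to_free_buffers()` call now takes the folio. A minimal userspace sketch of that shape (all `toy_*` names are invented stand-ins, not kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for a folio backing a btree node page. */
struct toy_folio {
	int node_refcnt;	/* models the hfs_bnode refcount check */
	bool has_buffers;	/* models attached buffer_heads */
};

/* Stand-in for try_to_free_buffers(): now takes the folio and
 * returns bool. */
static bool toy_try_to_free_buffers(struct toy_folio *folio)
{
	if (!folio->has_buffers)
		return false;
	folio->has_buffers = false;
	return true;
}

/* Same decision shape as hfsplus_release_folio(): refuse while a
 * node is still referenced, otherwise delegate to freeing buffers. */
static bool toy_release_folio(struct toy_folio *folio)
{
	bool res = true;

	if (folio->node_refcnt > 0)
		res = false;
	return res ? toy_try_to_free_buffers(folio) : false;
}
```

The `res ? try_to_free_buffers(folio) : false` tail is the idiom the conversion standardizes on: the old `: 0` becomes an explicit `: false`.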
+5 -4
fs/hostfs/hostfs_kern.c
··· 434 434 return err; 435 435 } 436 436 437 - static int hostfs_readpage(struct file *file, struct page *page) 437 + static int hostfs_read_folio(struct file *file, struct folio *folio) 438 438 { 439 + struct page *page = &folio->page; 439 440 char *buffer; 440 441 loff_t start = page_offset(page); 441 442 int bytes_read, ret = 0; ··· 464 463 } 465 464 466 465 static int hostfs_write_begin(struct file *file, struct address_space *mapping, 467 - loff_t pos, unsigned len, unsigned flags, 466 + loff_t pos, unsigned len, 468 467 struct page **pagep, void **fsdata) 469 468 { 470 469 pgoff_t index = pos >> PAGE_SHIFT; 471 470 472 - *pagep = grab_cache_page_write_begin(mapping, index, flags); 471 + *pagep = grab_cache_page_write_begin(mapping, index); 473 472 if (!*pagep) 474 473 return -ENOMEM; 475 474 return 0; ··· 505 504 506 505 static const struct address_space_operations hostfs_aops = { 507 506 .writepage = hostfs_writepage, 508 - .readpage = hostfs_readpage, 507 + .read_folio = hostfs_read_folio, 509 508 .dirty_folio = filemap_dirty_folio, 510 509 .write_begin = hostfs_write_begin, 511 510 .write_end = hostfs_write_end,
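hostfs, like several conversions in this series, takes the interim route: the signature moves to `struct folio *`, but the body still operates on the head page via `&folio->page`. A hypothetical userspace model of that bridging pattern (`toy_*` types are invented; in the kernel, struct folio's first member overlays struct page, which is what makes `&folio->page` legal):

```c
#include <assert.h>
#include <stdbool.h>

struct toy_page { bool uptodate; };
struct toy_folio { struct toy_page page; };

/* Legacy page-based body, kept unchanged for now. */
static int toy_fill_page(struct toy_page *page)
{
	page->uptodate = true;
	return 0;
}

/* New ->read_folio entry point: folio in the signature, page-based
 * logic underneath until the body is converted too. */
static int toy_read_folio(struct toy_folio *folio)
{
	struct toy_page *page = &folio->page;

	return toy_fill_page(page);
}
```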
+5 -5
fs/hpfs/file.c
··· 158 158 .iomap_begin = hpfs_iomap_begin, 159 159 }; 160 160 161 - static int hpfs_readpage(struct file *file, struct page *page) 161 + static int hpfs_read_folio(struct file *file, struct folio *folio) 162 162 { 163 - return mpage_readpage(page, hpfs_get_block); 163 + return mpage_read_folio(folio, hpfs_get_block); 164 164 } 165 165 166 166 static int hpfs_writepage(struct page *page, struct writeback_control *wbc) ··· 194 194 } 195 195 196 196 static int hpfs_write_begin(struct file *file, struct address_space *mapping, 197 - loff_t pos, unsigned len, unsigned flags, 197 + loff_t pos, unsigned len, 198 198 struct page **pagep, void **fsdata) 199 199 { 200 200 int ret; 201 201 202 202 *pagep = NULL; 203 - ret = cont_write_begin(file, mapping, pos, len, flags, pagep, fsdata, 203 + ret = cont_write_begin(file, mapping, pos, len, pagep, fsdata, 204 204 hpfs_get_block, 205 205 &hpfs_i(mapping->host)->mmu_private); 206 206 if (unlikely(ret)) ··· 247 247 const struct address_space_operations hpfs_aops = { 248 248 .dirty_folio = block_dirty_folio, 249 249 .invalidate_folio = block_invalidate_folio, 250 - .readpage = hpfs_readpage, 250 + .read_folio = hpfs_read_folio, 251 251 .writepage = hpfs_writepage, 252 252 .readahead = hpfs_readahead, 253 253 .writepages = hpfs_writepages,
+3 -2
fs/hpfs/namei.c
··· 479 479 return err; 480 480 } 481 481 482 - static int hpfs_symlink_readpage(struct file *file, struct page *page) 482 + static int hpfs_symlink_read_folio(struct file *file, struct folio *folio) 483 483 { 484 + struct page *page = &folio->page; 484 485 char *link = page_address(page); 485 486 struct inode *i = page->mapping->host; 486 487 struct fnode *fnode; ··· 509 508 } 510 509 511 510 const struct address_space_operations hpfs_symlink_aops = { 512 - .readpage = hpfs_symlink_readpage 511 + .read_folio = hpfs_symlink_read_folio 513 512 }; 514 513 515 514 static int hpfs_rename(struct user_namespace *mnt_userns, struct inode *old_dir,
+1 -1
fs/hugetlbfs/inode.c
··· 383 383 384 384 static int hugetlbfs_write_begin(struct file *file, 385 385 struct address_space *mapping, 386 - loff_t pos, unsigned len, unsigned flags, 386 + loff_t pos, unsigned len, 387 387 struct page **pagep, void **fsdata) 388 388 { 389 389 return -EINVAL;
+17 -21
fs/iomap/buffered-io.c
··· 297 297 /* 298 298 * If the bio_alloc fails, try it again for a single page to 299 299 * avoid having to deal with partial page reads. This emulates 300 - * what do_mpage_readpage does. 300 + * what do_mpage_read_folio does. 301 301 */ 302 302 if (!ctx->bio) { 303 303 ctx->bio = bio_alloc(iomap->bdev, 1, REQ_OP_READ, ··· 320 320 return pos - orig_pos + plen; 321 321 } 322 322 323 - int 324 - iomap_readpage(struct page *page, const struct iomap_ops *ops) 323 + int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops) 325 324 { 326 - struct folio *folio = page_folio(page); 327 325 struct iomap_iter iter = { 328 326 .inode = folio->mapping->host, 329 327 .pos = folio_pos(folio), ··· 349 351 } 350 352 351 353 /* 352 - * Just like mpage_readahead and block_read_full_page, we always 353 - * return 0 and just mark the page as PageError on errors. This 354 + * Just like mpage_readahead and block_read_full_folio, we always 355 + * return 0 and just set the folio error flag on errors. This 354 356 * should be cleaned up throughout the stack eventually. 355 357 */ 356 358 return 0; 357 359 } 358 - EXPORT_SYMBOL_GPL(iomap_readpage); 360 + EXPORT_SYMBOL_GPL(iomap_read_folio); 359 361 360 362 static loff_t iomap_readahead_iter(const struct iomap_iter *iter, 361 363 struct iomap_readpage_ctx *ctx) ··· 452 454 } 453 455 EXPORT_SYMBOL_GPL(iomap_is_partially_uptodate); 454 456 455 - int 456 - iomap_releasepage(struct page *page, gfp_t gfp_mask) 457 + bool iomap_release_folio(struct folio *folio, gfp_t gfp_flags) 457 458 { 458 - struct folio *folio = page_folio(page); 459 - 460 - trace_iomap_releasepage(folio->mapping->host, folio_pos(folio), 459 + trace_iomap_release_folio(folio->mapping->host, folio_pos(folio), 461 460 folio_size(folio)); 462 461 463 462 /* 464 - * mm accommodates an old ext3 case where clean pages might not have had 465 - * the dirty bit cleared. 
Thus, it can send actual dirty pages to 466 - * ->releasepage() via shrink_active_list(); skip those here. 463 + * mm accommodates an old ext3 case where clean folios might 464 + * not have had the dirty bit cleared. Thus, it can send actual 465 + * dirty folios to ->release_folio() via shrink_active_list(); 466 + * skip those here. 467 467 */ 468 468 if (folio_test_dirty(folio) || folio_test_writeback(folio)) 469 - return 0; 469 + return false; 470 470 iomap_page_release(folio); 471 - return 1; 471 + return true; 472 472 } 473 - EXPORT_SYMBOL_GPL(iomap_releasepage); 473 + EXPORT_SYMBOL_GPL(iomap_release_folio); 474 474 475 475 void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len) 476 476 { ··· 660 664 661 665 /* 662 666 * The blocks that were entirely written will now be uptodate, so we 663 - * don't have to worry about a readpage reading them and overwriting a 667 + * don't have to worry about a read_folio reading them and overwriting a 664 668 * partial write. However, if we've encountered a short write and only 665 669 * partially written into a block, it will not be marked uptodate, so a 666 - * readpage might come in and destroy our partial write. 670 + * read_folio might come in and destroy our partial write. 667 671 * 668 672 * Do the simplest thing and just treat any short write to a 669 673 * non-uptodate page as a zero-length write, and force the caller to ··· 1481 1485 * Skip the page if it's fully outside i_size, e.g. due to a 1482 1486 * truncate operation that's in progress. We must redirty the 1483 1487 * page so that reclaim stops reclaiming it. Otherwise 1484 - * iomap_vm_releasepage() is called on it and gets confused. 1488 + * iomap_release_folio() is called on it and gets confused. 1485 1489 * 1486 1490 * Note that the end_index is unsigned long. If the given 1487 1491 * offset is greater than 16TB on a 32-bit system then if we
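The comment the iomap hunk rewrites is worth modelling: because of an old ext3 quirk, reclaim can hand actually-dirty folios to `->release_folio()`, so the helper must check rather than assume cleanliness. A userspace sketch of that decision (invented `toy_*` names):

```c
#include <assert.h>
#include <stdbool.h>

struct toy_folio {
	bool dirty;
	bool writeback;
	bool has_private;	/* models the attached iomap_page */
};

/* Same structure as iomap_release_folio(): refuse dirty or
 * in-flight folios, otherwise drop private state and say yes. */
static bool toy_release_folio(struct toy_folio *folio)
{
	if (folio->dirty || folio->writeback)
		return false;
	folio->has_private = false;	/* iomap_page_release() */
	return true;
}
```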
+1 -1
fs/iomap/trace.h
··· 80 80 TP_PROTO(struct inode *inode, loff_t off, u64 len),\ 81 81 TP_ARGS(inode, off, len)) 82 82 DEFINE_RANGE_EVENT(iomap_writepage); 83 - DEFINE_RANGE_EVENT(iomap_releasepage); 83 + DEFINE_RANGE_EVENT(iomap_release_folio); 84 84 DEFINE_RANGE_EVENT(iomap_invalidate_folio); 85 85 DEFINE_RANGE_EVENT(iomap_dio_invalidate_fail); 86 86
+3 -2
fs/isofs/compress.c
··· 296 296 * per reference. We inject the additional pages into the page 297 297 * cache as a form of readahead. 298 298 */ 299 - static int zisofs_readpage(struct file *file, struct page *page) 299 + static int zisofs_read_folio(struct file *file, struct folio *folio) 300 300 { 301 + struct page *page = &folio->page; 301 302 struct inode *inode = file_inode(file); 302 303 struct address_space *mapping = inode->i_mapping; 303 304 int err; ··· 370 369 } 371 370 372 371 const struct address_space_operations zisofs_aops = { 373 - .readpage = zisofs_readpage, 372 + .read_folio = zisofs_read_folio, 374 373 /* No bmap operation supported */ 375 374 }; 376 375
+3 -3
fs/isofs/inode.c
··· 1174 1174 return sb_bread(inode->i_sb, blknr); 1175 1175 } 1176 1176 1177 - static int isofs_readpage(struct file *file, struct page *page) 1177 + static int isofs_read_folio(struct file *file, struct folio *folio) 1178 1178 { 1179 - return mpage_readpage(page, isofs_get_block); 1179 + return mpage_read_folio(folio, isofs_get_block); 1180 1180 } 1181 1181 1182 1182 static void isofs_readahead(struct readahead_control *rac) ··· 1190 1190 } 1191 1191 1192 1192 static const struct address_space_operations isofs_aops = { 1193 - .readpage = isofs_readpage, 1193 + .read_folio = isofs_read_folio, 1194 1194 .readahead = isofs_readahead, 1195 1195 .bmap = _isofs_bmap 1196 1196 };
+4 -3
fs/isofs/rock.c
··· 687 687 } 688 688 689 689 /* 690 - * readpage() for symlinks: reads symlink contents into the page and either 690 + * read_folio() for symlinks: reads symlink contents into the folio and either 691 691 * makes it uptodate and returns 0 or returns error (-EIO) 692 692 */ 693 - static int rock_ridge_symlink_readpage(struct file *file, struct page *page) 693 + static int rock_ridge_symlink_read_folio(struct file *file, struct folio *folio) 694 694 { 695 + struct page *page = &folio->page; 695 696 struct inode *inode = page->mapping->host; 696 697 struct iso_inode_info *ei = ISOFS_I(inode); 697 698 struct isofs_sb_info *sbi = ISOFS_SB(inode->i_sb); ··· 805 804 } 806 805 807 806 const struct address_space_operations isofs_symlink_aops = { 808 - .readpage = rock_ridge_symlink_readpage 807 + .read_folio = rock_ridge_symlink_read_folio 809 808 };
+8 -6
fs/jbd2/commit.c
··· 62 62 */ 63 63 static void release_buffer_page(struct buffer_head *bh) 64 64 { 65 + struct folio *folio; 65 66 struct page *page; 66 67 67 68 if (buffer_dirty(bh)) ··· 72 71 page = bh->b_page; 73 72 if (!page) 74 73 goto nope; 75 - if (page->mapping) 74 + folio = page_folio(page); 75 + if (folio->mapping) 76 76 goto nope; 77 77 78 78 /* OK, it's a truncated page */ 79 - if (!trylock_page(page)) 79 + if (!folio_trylock(folio)) 80 80 goto nope; 81 81 82 - get_page(page); 82 + folio_get(folio); 83 83 __brelse(bh); 84 - try_to_free_buffers(page); 85 - unlock_page(page); 86 - put_page(page); 84 + try_to_free_buffers(folio); 85 + folio_unlock(folio); 86 + folio_put(folio); 87 87 return; 88 88 89 89 nope:
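release_buffer_page() above preserves an exact discipline while switching types: trylock, take a reference, free the buffers, unlock, drop the reference. A toy model of that ordering (invented names; the real code uses folio_trylock/folio_get/folio_unlock/folio_put):

```c
#include <assert.h>
#include <stdbool.h>

struct toy_folio { bool locked; int refcount; bool has_buffers; };

static bool toy_trylock(struct toy_folio *f)
{
	if (f->locked)
		return false;
	f->locked = true;
	return true;
}

/* Mirrors release_buffer_page(): bail out quietly if the folio is
 * contended, otherwise pin it for the duration of the free. */
static void toy_release_buffer_folio(struct toy_folio *folio)
{
	if (!toy_trylock(folio))
		return;			/* the "nope" path */
	folio->refcount++;		/* folio_get() */
	folio->has_buffers = false;	/* try_to_free_buffers() */
	folio->locked = false;		/* folio_unlock() */
	folio->refcount--;		/* folio_put() */
}
```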
+7 -7
fs/jbd2/transaction.c
··· 2143 2143 * cannot happen because we never reallocate freed data as metadata 2144 2144 * while the data is part of a transaction. Yes? 2145 2145 * 2146 - * Return 0 on failure, 1 on success 2146 + * Return false on failure, true on success 2147 2147 */ 2148 - int jbd2_journal_try_to_free_buffers(journal_t *journal, struct page *page) 2148 + bool jbd2_journal_try_to_free_buffers(journal_t *journal, struct folio *folio) 2149 2149 { 2150 2150 struct buffer_head *head; 2151 2151 struct buffer_head *bh; 2152 - int ret = 0; 2152 + bool ret = false; 2153 2153 2154 - J_ASSERT(PageLocked(page)); 2154 + J_ASSERT(folio_test_locked(folio)); 2155 2155 2156 - head = page_buffers(page); 2156 + head = folio_buffers(folio); 2157 2157 bh = head; 2158 2158 do { 2159 2159 struct journal_head *jh; ··· 2175 2175 goto busy; 2176 2176 } while ((bh = bh->b_this_page) != head); 2177 2177 2178 - ret = try_to_free_buffers(page); 2178 + ret = try_to_free_buffers(folio); 2179 2179 busy: 2180 2180 return ret; 2181 2181 } ··· 2482 2482 } while (bh != head); 2483 2483 2484 2484 if (!partial_page) { 2485 - if (may_free && try_to_free_buffers(&folio->page)) 2485 + if (may_free && try_to_free_buffers(folio)) 2486 2486 J_ASSERT(!folio_buffers(folio)); 2487 2487 } 2488 2488 return 0;
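jbd2_journal_try_to_free_buffers() keeps the classic circular walk over a folio's buffers via `b_this_page`; only the types and the return convention change. The ring walk itself, sketched with invented types:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct toy_bh {
	int refcnt;
	struct toy_bh *b_this_page;	/* circular: last points at head */
};

/* One busy buffer vetoes freeing the whole folio, as in
 * jbd2_journal_try_to_free_buffers(). */
static bool toy_try_to_free(struct toy_bh *head)
{
	struct toy_bh *bh = head;

	do {
		if (bh->refcnt)
			return false;
	} while ((bh = bh->b_this_page) != head);
	return true;
}
```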
+11 -12
fs/jffs2/file.c
··· 25 25 loff_t pos, unsigned len, unsigned copied, 26 26 struct page *pg, void *fsdata); 27 27 static int jffs2_write_begin(struct file *filp, struct address_space *mapping, 28 - loff_t pos, unsigned len, unsigned flags, 28 + loff_t pos, unsigned len, 29 29 struct page **pagep, void **fsdata); 30 - static int jffs2_readpage (struct file *filp, struct page *pg); 30 + static int jffs2_read_folio(struct file *filp, struct folio *folio); 31 31 32 32 int jffs2_fsync(struct file *filp, loff_t start, loff_t end, int datasync) 33 33 { ··· 72 72 73 73 const struct address_space_operations jffs2_file_address_operations = 74 74 { 75 - .readpage = jffs2_readpage, 75 + .read_folio = jffs2_read_folio, 76 76 .write_begin = jffs2_write_begin, 77 77 .write_end = jffs2_write_end, 78 78 }; ··· 110 110 return ret; 111 111 } 112 112 113 - int jffs2_do_readpage_unlock(void *data, struct page *pg) 113 + int __jffs2_read_folio(struct file *file, struct folio *folio) 114 114 { 115 - int ret = jffs2_do_readpage_nolock(data, pg); 116 - unlock_page(pg); 115 + int ret = jffs2_do_readpage_nolock(folio->mapping->host, &folio->page); 116 + folio_unlock(folio); 117 117 return ret; 118 118 } 119 119 120 - 121 - static int jffs2_readpage (struct file *filp, struct page *pg) 120 + static int jffs2_read_folio(struct file *file, struct folio *folio) 122 121 { 123 - struct jffs2_inode_info *f = JFFS2_INODE_INFO(pg->mapping->host); 122 + struct jffs2_inode_info *f = JFFS2_INODE_INFO(folio->mapping->host); 124 123 int ret; 125 124 126 125 mutex_lock(&f->sem); 127 - ret = jffs2_do_readpage_unlock(pg->mapping->host, pg); 126 + ret = __jffs2_read_folio(file, folio); 128 127 mutex_unlock(&f->sem); 129 128 return ret; 130 129 } 131 130 132 131 static int jffs2_write_begin(struct file *filp, struct address_space *mapping, 133 - loff_t pos, unsigned len, unsigned flags, 132 + loff_t pos, unsigned len, 134 133 struct page **pagep, void **fsdata) 135 134 { 136 135 struct page *pg; ··· 212 213 * page in 
read_cache_page(), which causes a deadlock. 213 214 */ 214 215 mutex_lock(&c->alloc_sem); 215 - pg = grab_cache_page_write_begin(mapping, index, flags); 216 + pg = grab_cache_page_write_begin(mapping, index); 216 217 if (!pg) { 217 218 ret = -ENOMEM; 218 219 goto release_sem;
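The jffs2 hunk also shows the merge's filler_t change in action: the filler now takes a `struct file *` first argument, matching `->read_folio`, so `__jffs2_read_folio` can be passed straight to read_cache_page() instead of a separate unlock wrapper taking an opaque `void *data`. A userspace model of why the signatures now compose (all `toy_*` types are invented stand-ins):

```c
#include <assert.h>
#include <stddef.h>

struct toy_file { int ino; };
struct toy_folio { int data; };

/* New-style filler: same signature as ->read_folio. */
typedef int (*toy_filler_t)(struct toy_file *, struct toy_folio *);

static int toy_read_folio(struct toy_file *file, struct toy_folio *folio)
{
	(void)file;	/* may legitimately be NULL from read_cache_page() */
	folio->data = 7;
	return 0;
}

/* Models read_cache_page(mapping, index, filler, file): the filler
 * receives the file pointer (possibly NULL) rather than a cookie. */
static int toy_read_cache_page(struct toy_folio *folio, toy_filler_t filler,
			       struct toy_file *file)
{
	return filler(file, folio);
}
```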
+1 -1
fs/jffs2/fs.c
··· 178 178 jffs2_complete_reservation(c); 179 179 180 180 /* We have to do the truncate_setsize() without f->sem held, since 181 - some pages may be locked and waiting for it in readpage(). 181 + some pages may be locked and waiting for it in read_folio(). 182 182 We are protected from a simultaneous write() extending i_size 183 183 back past iattr->ia_size, because do_truncate() holds the 184 184 generic inode semaphore. */
+1 -1
fs/jffs2/gc.c
··· 1327 1327 * trying to write out, read_cache_page() will not deadlock. */ 1328 1328 mutex_unlock(&f->sem); 1329 1329 page = read_cache_page(inode->i_mapping, start >> PAGE_SHIFT, 1330 - jffs2_do_readpage_unlock, inode); 1330 + __jffs2_read_folio, NULL); 1331 1331 if (IS_ERR(page)) { 1332 1332 pr_warn("read_cache_page() returned error: %ld\n", 1333 1333 PTR_ERR(page));
+1 -1
fs/jffs2/os-linux.h
··· 155 155 extern const struct inode_operations jffs2_file_inode_operations; 156 156 extern const struct address_space_operations jffs2_file_address_operations; 157 157 int jffs2_fsync(struct file *, loff_t, loff_t, int); 158 - int jffs2_do_readpage_unlock(void *data, struct page *pg); 158 + int __jffs2_read_folio(struct file *file, struct folio *folio); 159 159 160 160 /* ioctl.c */ 161 161 long jffs2_ioctl(struct file *, unsigned int, unsigned long);
+5 -6
fs/jfs/inode.c
··· 293 293 return mpage_writepages(mapping, wbc, jfs_get_block); 294 294 } 295 295 296 - static int jfs_readpage(struct file *file, struct page *page) 296 + static int jfs_read_folio(struct file *file, struct folio *folio) 297 297 { 298 - return mpage_readpage(page, jfs_get_block); 298 + return mpage_read_folio(folio, jfs_get_block); 299 299 } 300 300 301 301 static void jfs_readahead(struct readahead_control *rac) ··· 314 314 } 315 315 316 316 static int jfs_write_begin(struct file *file, struct address_space *mapping, 317 - loff_t pos, unsigned len, unsigned flags, 317 + loff_t pos, unsigned len, 318 318 struct page **pagep, void **fsdata) 319 319 { 320 320 int ret; 321 321 322 - ret = nobh_write_begin(mapping, pos, len, flags, pagep, fsdata, 323 - jfs_get_block); 322 + ret = nobh_write_begin(mapping, pos, len, pagep, fsdata, jfs_get_block); 324 323 if (unlikely(ret)) 325 324 jfs_write_failed(mapping, pos + len); 326 325 ··· 359 360 const struct address_space_operations jfs_aops = { 360 361 .dirty_folio = block_dirty_folio, 361 362 .invalidate_folio = block_invalidate_folio, 362 - .readpage = jfs_readpage, 363 + .read_folio = jfs_read_folio, 363 364 .readahead = jfs_readahead, 364 365 .writepage = jfs_writepage, 365 366 .writepages = jfs_writepages,
+11 -10
fs/jfs/jfs_metapage.c
··· 467 467 return -EIO; 468 468 } 469 469 470 - static int metapage_readpage(struct file *fp, struct page *page) 470 + static int metapage_read_folio(struct file *fp, struct folio *folio) 471 471 { 472 + struct page *page = &folio->page; 472 473 struct inode *inode = page->mapping->host; 473 474 struct bio *bio = NULL; 474 475 int block_offset; ··· 524 523 return -EIO; 525 524 } 526 525 527 - static int metapage_releasepage(struct page *page, gfp_t gfp_mask) 526 + static bool metapage_release_folio(struct folio *folio, gfp_t gfp_mask) 528 527 { 529 528 struct metapage *mp; 530 - int ret = 1; 529 + bool ret = true; 531 530 int offset; 532 531 533 532 for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) { 534 - mp = page_to_mp(page, offset); 533 + mp = page_to_mp(&folio->page, offset); 535 534 536 535 if (!mp) 537 536 continue; 538 537 539 - jfs_info("metapage_releasepage: mp = 0x%p", mp); 538 + jfs_info("metapage_release_folio: mp = 0x%p", mp); 540 539 if (mp->count || mp->nohomeok || 541 540 test_bit(META_dirty, &mp->flag)) { 542 541 jfs_info("count = %ld, nohomeok = %d", mp->count, 543 542 mp->nohomeok); 544 - ret = 0; 543 + ret = false; 545 544 continue; 546 545 } 547 546 if (mp->lsn) 548 547 remove_from_logsync(mp); 549 - remove_metapage(page, mp); 548 + remove_metapage(&folio->page, mp); 550 549 INCREMENT(mpStat.pagefree); 551 550 free_metapage(mp); 552 551 } ··· 560 559 561 560 BUG_ON(folio_test_writeback(folio)); 562 561 563 - metapage_releasepage(&folio->page, 0); 562 + metapage_release_folio(folio, 0); 564 563 } 565 564 566 565 const struct address_space_operations jfs_metapage_aops = { 567 - .readpage = metapage_readpage, 566 + .read_folio = metapage_read_folio, 568 567 .writepage = metapage_writepage, 569 - .releasepage = metapage_releasepage, 568 + .release_folio = metapage_release_folio, 570 569 .invalidate_folio = metapage_invalidate_folio, 571 570 .dirty_folio = filemap_dirty_folio, 572 571 };
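metapage_release_folio() illustrates a partial-release pattern: several metapages share one page, a pinned or dirty one makes the overall answer false, but the scan continues and still frees the idle neighbours. A toy model of that loop (invented names and sizes):

```c
#include <assert.h>
#include <stdbool.h>

struct toy_mp { int count; bool dirty; bool freed; };

/* A busy sub-block pins the folio (ret = false) but does not stop
 * the loop, so idle neighbours are still freed, as in
 * metapage_release_folio(). */
static bool toy_release_metapages(struct toy_mp *mps, int nr)
{
	bool ret = true;

	for (int i = 0; i < nr; i++) {
		if (mps[i].count || mps[i].dirty) {
			ret = false;
			continue;
		}
		mps[i].freed = true;	/* free_metapage() */
	}
	return ret;
}
```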
+9 -9
fs/libfs.c
··· 539 539 } 540 540 EXPORT_SYMBOL(simple_setattr); 541 541 542 - static int simple_readpage(struct file *file, struct page *page) 542 + static int simple_read_folio(struct file *file, struct folio *folio) 543 543 { 544 - clear_highpage(page); 545 - flush_dcache_page(page); 546 - SetPageUptodate(page); 547 - unlock_page(page); 544 + folio_zero_range(folio, 0, folio_size(folio)); 545 + flush_dcache_folio(folio); 546 + folio_mark_uptodate(folio); 547 + folio_unlock(folio); 548 548 return 0; 549 549 } 550 550 551 551 int simple_write_begin(struct file *file, struct address_space *mapping, 552 - loff_t pos, unsigned len, unsigned flags, 552 + loff_t pos, unsigned len, 553 553 struct page **pagep, void **fsdata) 554 554 { 555 555 struct page *page; ··· 557 557 558 558 index = pos >> PAGE_SHIFT; 559 559 560 - page = grab_cache_page_write_begin(mapping, index, flags); 560 + page = grab_cache_page_write_begin(mapping, index); 561 561 if (!page) 562 562 return -ENOMEM; 563 563 ··· 592 592 * should extend on what's done here with a call to mark_inode_dirty() in the 593 593 * case that i_size has changed. 594 594 * 595 - * Use *ONLY* with simple_readpage() 595 + * Use *ONLY* with simple_read_folio() 596 596 */ 597 597 static int simple_write_end(struct file *file, struct address_space *mapping, 598 598 loff_t pos, unsigned len, unsigned copied, ··· 628 628 * Provides ramfs-style behavior: data in the pagecache, but no writeback. 629 629 */ 630 630 const struct address_space_operations ram_aops = { 631 - .readpage = simple_readpage, 631 + .read_folio = simple_read_folio, 632 632 .write_begin = simple_write_begin, 633 633 .write_end = simple_write_end, 634 634 .dirty_folio = noop_dirty_folio,
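simple_read_folio() is about the smallest complete `->read_folio`: zero the folio, flush, mark uptodate, unlock. The `folio_zero_range(folio, 0, folio_size(folio))` form also covers multi-page folios, which the old per-page `clear_highpage()` did not. A userspace sketch of the sequence (invented types, fixed toy size):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define TOY_FOLIO_SIZE 8192	/* e.g. a two-page folio */

struct toy_folio {
	unsigned char data[TOY_FOLIO_SIZE];
	bool uptodate;
	bool locked;
};

/* Models simple_read_folio(): a "read" from nowhere is all zeroes,
 * so the folio can be marked uptodate immediately. */
static int toy_simple_read_folio(struct toy_folio *folio)
{
	memset(folio->data, 0, sizeof(folio->data));	/* folio_zero_range() */
	folio->uptodate = true;				/* folio_mark_uptodate() */
	folio->locked = false;				/* folio_unlock() */
	return 0;
}
```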
+5 -6
fs/minix/inode.c
··· 402 402 return block_write_full_page(page, minix_get_block, wbc); 403 403 } 404 404 405 - static int minix_readpage(struct file *file, struct page *page) 405 + static int minix_read_folio(struct file *file, struct folio *folio) 406 406 { 407 - return block_read_full_page(page,minix_get_block); 407 + return block_read_full_folio(folio, minix_get_block); 408 408 } 409 409 410 410 int minix_prepare_chunk(struct page *page, loff_t pos, unsigned len) ··· 423 423 } 424 424 425 425 static int minix_write_begin(struct file *file, struct address_space *mapping, 426 - loff_t pos, unsigned len, unsigned flags, 426 + loff_t pos, unsigned len, 427 427 struct page **pagep, void **fsdata) 428 428 { 429 429 int ret; 430 430 431 - ret = block_write_begin(mapping, pos, len, flags, pagep, 432 - minix_get_block); 431 + ret = block_write_begin(mapping, pos, len, pagep, minix_get_block); 433 432 if (unlikely(ret)) 434 433 minix_write_failed(mapping, pos + len); 435 434 ··· 443 444 static const struct address_space_operations minix_aops = { 444 445 .dirty_folio = block_dirty_folio, 445 446 .invalidate_folio = block_invalidate_folio, 446 - .readpage = minix_readpage, 447 + .read_folio = minix_read_folio, 447 448 .writepage = minix_writepage, 448 449 .write_begin = minix_write_begin, 449 450 .write_end = generic_write_end,
+11 -9
fs/mpage.c
··· 36 36 * 37 37 * The mpage code never puts partial pages into a BIO (except for end-of-file). 38 38 * If a page does not map to a contiguous run of blocks then it simply falls 39 - * back to block_read_full_page(). 39 + * back to block_read_full_folio(). 40 40 * 41 41 * Why is this? If a page's completion depends on a number of different BIOs 42 42 * which can complete in any order (or at the same time) then determining the ··· 68 68 /* 69 69 * support function for mpage_readahead. The fs supplied get_block might 70 70 * return an up to date buffer. This is used to map that buffer into 71 - * the page, which allows readpage to avoid triggering a duplicate call 71 + * the page, which allows read_folio to avoid triggering a duplicate call 72 72 * to get_block. 73 73 * 74 74 * The idea is to avoid adding buffers to pages that don't already have ··· 296 296 if (args->bio) 297 297 args->bio = mpage_bio_submit(args->bio); 298 298 if (!PageUptodate(page)) 299 - block_read_full_page(page, args->get_block); 299 + block_read_full_folio(page_folio(page), args->get_block); 300 300 else 301 301 unlock_page(page); 302 302 goto out; ··· 364 364 /* 365 365 * This isn't called much at all 366 366 */ 367 - int mpage_readpage(struct page *page, get_block_t get_block) 367 + int mpage_read_folio(struct folio *folio, get_block_t get_block) 368 368 { 369 369 struct mpage_readpage_args args = { 370 - .page = page, 370 + .page = &folio->page, 371 371 .nr_pages = 1, 372 372 .get_block = get_block, 373 373 }; 374 + 375 + VM_BUG_ON_FOLIO(folio_test_large(folio), folio); 374 376 375 377 args.bio = do_mpage_readpage(&args); 376 378 if (args.bio) 377 379 mpage_bio_submit(args.bio); 378 380 return 0; 379 381 } 380 - EXPORT_SYMBOL(mpage_readpage); 382 + EXPORT_SYMBOL(mpage_read_folio); 381 383 382 384 /* 383 385 * Writing is not so simple. 
··· 427 425 428 426 /* 429 427 * we cannot drop the bh if the page is not uptodate or a concurrent 430 - * readpage would fail to serialize with the bh and it would read from 428 + * read_folio would fail to serialize with the bh and it would read from 431 429 * disk before we reach the platter. 432 430 */ 433 431 if (buffer_heads_over_limit && PageUptodate(page)) 434 - try_to_free_buffers(page); 432 + try_to_free_buffers(page_folio(page)); 435 433 } 436 434 437 435 /* ··· 512 510 /* 513 511 * Page has buffers, but they are all unmapped. The page was 514 512 * created by pagein or read over a hole which was handled by 515 - * block_read_full_page(). If this address_space is also 513 + * block_read_full_folio(). If this address_space is also 516 514 * using mpage_readahead then this can rarely happen. 517 515 */ 518 516 goto confused;
+11 -17
fs/namei.c
··· 22 22 #include <linux/fs.h> 23 23 #include <linux/namei.h> 24 24 #include <linux/pagemap.h> 25 + #include <linux/sched/mm.h> 25 26 #include <linux/fsnotify.h> 26 27 #include <linux/personality.h> 27 28 #include <linux/security.h> ··· 5002 5001 } 5003 5002 EXPORT_SYMBOL(page_readlink); 5004 5003 5005 - /* 5006 - * The nofs argument instructs pagecache_write_begin to pass AOP_FLAG_NOFS 5007 - */ 5008 - int __page_symlink(struct inode *inode, const char *symname, int len, int nofs) 5004 + int page_symlink(struct inode *inode, const char *symname, int len) 5009 5005 { 5010 5006 struct address_space *mapping = inode->i_mapping; 5007 + const struct address_space_operations *aops = mapping->a_ops; 5008 + bool nofs = !mapping_gfp_constraint(mapping, __GFP_FS); 5011 5009 struct page *page; 5012 5010 void *fsdata; 5013 5011 int err; 5014 - unsigned int flags = 0; 5015 - if (nofs) 5016 - flags |= AOP_FLAG_NOFS; 5012 + unsigned int flags; 5017 5013 5018 5014 retry: 5019 - err = pagecache_write_begin(NULL, mapping, 0, len-1, 5020 - flags, &page, &fsdata); 5015 + if (nofs) 5016 + flags = memalloc_nofs_save(); 5017 + err = aops->write_begin(NULL, mapping, 0, len-1, &page, &fsdata); 5018 + if (nofs) 5019 + memalloc_nofs_restore(flags); 5021 5020 if (err) 5022 5021 goto fail; 5023 5022 5024 5023 memcpy(page_address(page), symname, len-1); 5025 5024 5026 - err = pagecache_write_end(NULL, mapping, 0, len-1, len-1, 5025 + err = aops->write_end(NULL, mapping, 0, len-1, len-1, 5027 5026 page, fsdata); 5028 5027 if (err < 0) 5029 5028 goto fail; ··· 5034 5033 return 0; 5035 5034 fail: 5036 5035 return err; 5037 - } 5038 - EXPORT_SYMBOL(__page_symlink); 5039 - 5040 - int page_symlink(struct inode *inode, const char *symname, int len) 5041 - { 5042 - return __page_symlink(inode, symname, len, 5043 - !mapping_gfp_constraint(inode->i_mapping, __GFP_FS)); 5044 5036 } 5045 5037 EXPORT_SYMBOL(page_symlink); 5046 5038
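The namei.c hunk is the series' headline API swap: instead of threading AOP_FLAG_NOFS through write_begin, page_symlink() brackets the calls with memalloc_nofs_save()/restore(), a per-task scope that allocators deep in the call chain observe. A userspace model of the scoped-flag semantics (all `toy_*` names invented; the real flag lives in current->flags):

```c
#include <assert.h>
#include <stdbool.h>

static unsigned int toy_pflags;	/* stand-in for current->flags */
#define TOY_PF_MEMALLOC_NOFS 0x1u

/* Set the NOFS scope, returning the previous state so that nested
 * save/restore pairs compose, as with memalloc_nofs_save(). */
static unsigned int toy_memalloc_nofs_save(void)
{
	unsigned int old = toy_pflags & TOY_PF_MEMALLOC_NOFS;

	toy_pflags |= TOY_PF_MEMALLOC_NOFS;
	return old;
}

static void toy_memalloc_nofs_restore(unsigned int old)
{
	toy_pflags = (toy_pflags & ~TOY_PF_MEMALLOC_NOFS) | old;
}

/* An allocator deep in the call chain consults the scope instead of
 * a per-call AOP_FLAG_NOFS argument. */
static bool toy_alloc_may_enter_fs(void)
{
	return !(toy_pflags & TOY_PF_MEMALLOC_NOFS);
}
```

Returning and restoring the previous state (rather than unconditionally clearing the bit) is what lets an outer NOFS scope survive an inner save/restore pair.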
+10 -15
fs/netfs/buffered_read.c
··· 198 198 EXPORT_SYMBOL(netfs_readahead); 199 199 200 200 /** 201 - * netfs_readpage - Helper to manage a readpage request 201 + * netfs_read_folio - Helper to manage a read_folio request 202 202 * @file: The file to read from 203 - * @subpage: A subpage of the folio to read 203 + * @folio: The folio to read 204 204 * 205 - * Fulfil a readpage request by drawing data from the cache if possible, or the 206 - * netfs if not. Space beyond the EOF is zero-filled. Multiple I/O requests 207 - * from different sources will get munged together. 205 + * Fulfil a read_folio request by drawing data from the cache if 206 + * possible, or the netfs if not. Space beyond the EOF is zero-filled. 207 + * Multiple I/O requests from different sources will get munged together. 208 208 * 209 209 * The calling netfs must initialise a netfs context contiguous to the vfs 210 210 * inode before calling this. 211 211 * 212 212 * This is usable whether or not caching is enabled. 213 213 */ 214 - int netfs_readpage(struct file *file, struct page *subpage) 214 + int netfs_read_folio(struct file *file, struct folio *folio) 215 215 { 216 - struct folio *folio = page_folio(subpage); 217 216 struct address_space *mapping = folio_file_mapping(folio); 218 217 struct netfs_io_request *rreq; 219 218 struct netfs_i_context *ctx = netfs_i_context(mapping->host); ··· 244 245 folio_unlock(folio); 245 246 return ret; 246 247 } 247 - EXPORT_SYMBOL(netfs_readpage); 248 + EXPORT_SYMBOL(netfs_read_folio); 248 249 249 250 /* 250 251 * Prepare a folio for writing without reading first ··· 301 302 * @mapping: The mapping to read from 302 303 * @pos: File position at which the write will begin 303 304 * @len: The length of the write (may extend beyond the end of the folio chosen) 304 - * @aop_flags: AOP_* flags 305 305 * @_folio: Where to put the resultant folio 306 306 * @_fsdata: Place for the netfs to store a cookie 307 307 * ··· 327 329 * This is usable whether or not caching is enabled. 
328 330 */ 329 331 int netfs_write_begin(struct file *file, struct address_space *mapping, 330 - loff_t pos, unsigned int len, unsigned int aop_flags, 331 - struct folio **_folio, void **_fsdata) 332 + loff_t pos, unsigned int len, struct folio **_folio, 333 + void **_fsdata) 332 334 { 333 335 struct netfs_io_request *rreq; 334 336 struct netfs_i_context *ctx = netfs_i_context(file_inode(file )); 335 337 struct folio *folio; 336 - unsigned int fgp_flags; 338 + unsigned int fgp_flags = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE; 337 339 pgoff_t index = pos >> PAGE_SHIFT; 338 340 int ret; 339 341 340 342 DEFINE_READAHEAD(ractl, file, NULL, mapping, index); 341 343 342 344 retry: 343 - fgp_flags = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE; 344 - if (aop_flags & AOP_FLAG_NOFS) 345 - fgp_flags |= FGP_NOFS; 346 345 folio = __filemap_get_folio(mapping, index, fgp_flags, 347 346 mapping_gfp_mask(mapping)); 348 347 if (!folio)
+7 -2
fs/nfs/dir.c
··· 55 55 static int nfs_readdir(struct file *, struct dir_context *); 56 56 static int nfs_fsync_dir(struct file *, loff_t, loff_t, int); 57 57 static loff_t nfs_llseek_dir(struct file *, loff_t, int); 58 - static void nfs_readdir_clear_array(struct page*); 58 + static void nfs_readdir_free_folio(struct folio *); 59 59 60 60 const struct file_operations nfs_dir_operations = { 61 61 .llseek = nfs_llseek_dir, ··· 67 67 }; 68 68 69 69 const struct address_space_operations nfs_dir_aops = { 70 - .freepage = nfs_readdir_clear_array, 70 + .free_folio = nfs_readdir_free_folio, 71 71 }; 72 72 73 73 #define NFS_INIT_DTSIZE PAGE_SIZE ··· 226 226 kfree(array->array[i].name); 227 227 array->size = 0; 228 228 kunmap_atomic(array); 229 + } 230 + 231 + static void nfs_readdir_free_folio(struct folio *folio) 232 + { 233 + nfs_readdir_clear_array(&folio->page); 229 234 } 230 235 231 236 static void nfs_readdir_page_reinit_array(struct page *page, u64 last_cookie,
+24 -27
fs/nfs/file.c
··· 313 313 * increment the page use counts until he is done with the page. 314 314 */ 315 315 static int nfs_write_begin(struct file *file, struct address_space *mapping, 316 - loff_t pos, unsigned len, unsigned flags, 316 + loff_t pos, unsigned len, 317 317 struct page **pagep, void **fsdata) 318 318 { 319 319 int ret; ··· 325 325 file, mapping->host->i_ino, len, (long long) pos); 326 326 327 327 start: 328 - page = grab_cache_page_write_begin(mapping, index, flags); 328 + page = grab_cache_page_write_begin(mapping, index); 329 329 if (!page) 330 330 return -ENOMEM; 331 331 *pagep = page; ··· 337 337 } else if (!once_thru && 338 338 nfs_want_read_modify_write(file, page, pos, len)) { 339 339 once_thru = 1; 340 - ret = nfs_readpage(file, page); 340 + ret = nfs_read_folio(file, page_folio(page)); 341 341 put_page(page); 342 342 if (!ret) 343 343 goto start; ··· 415 415 } 416 416 417 417 /* 418 - * Attempt to release the private state associated with a page 419 - * - Called if either PG_private or PG_fscache is set on the page 420 - * - Caller holds page lock 421 - * - Return true (may release page) or false (may not) 418 + * Attempt to release the private state associated with a folio 419 + * - Called if either private or fscache flags are set on the folio 420 + * - Caller holds folio lock 421 + * - Return true (may release folio) or false (may not) 422 422 */ 423 - static int nfs_release_page(struct page *page, gfp_t gfp) 423 + static bool nfs_release_folio(struct folio *folio, gfp_t gfp) 424 424 { 425 - dfprintk(PAGECACHE, "NFS: release_page(%p)\n", page); 425 + dfprintk(PAGECACHE, "NFS: release_folio(%p)\n", folio); 426 426 427 - /* If PagePrivate() is set, then the page is not freeable */ 428 - if (PagePrivate(page)) 429 - return 0; 430 - return nfs_fscache_release_page(page, gfp); 427 + /* If the private flag is set, then the folio is not freeable */ 428 + if (folio_test_private(folio)) 429 + return false; 430 + return nfs_fscache_release_folio(folio, gfp); 
431 431 } 432 432 433 - static void nfs_check_dirty_writeback(struct page *page, 433 + static void nfs_check_dirty_writeback(struct folio *folio, 434 434 bool *dirty, bool *writeback) 435 435 { 436 436 struct nfs_inode *nfsi; 437 - struct address_space *mapping = page_file_mapping(page); 438 - 439 - if (!mapping || PageSwapCache(page)) 440 - return; 437 + struct address_space *mapping = folio->mapping; 441 438 442 439 /* 443 - * Check if an unstable page is currently being committed and 444 - * if so, have the VM treat it as if the page is under writeback 445 - * so it will not block due to pages that will shortly be freeable. 440 + * Check if an unstable folio is currently being committed and 441 + * if so, have the VM treat it as if the folio is under writeback 442 + * so it will not block due to folios that will shortly be freeable. 446 443 */ 447 444 nfsi = NFS_I(mapping->host); 448 445 if (atomic_read(&nfsi->commit_info.rpcs_out)) { ··· 448 451 } 449 452 450 453 /* 451 - * If PagePrivate() is set, then the page is not freeable and as the 452 - * inode is not being committed, it's not going to be cleaned in the 453 - * near future so treat it as dirty 454 + * If the private flag is set, then the folio is not freeable 455 + * and as the inode is not being committed, it's not going to 456 + * be cleaned in the near future so treat it as dirty 454 457 */ 455 - if (PagePrivate(page)) 458 + if (folio_test_private(folio)) 456 459 *dirty = true; 457 460 } 458 461 ··· 514 517 } 515 518 516 519 const struct address_space_operations nfs_file_aops = { 517 - .readpage = nfs_readpage, 520 + .read_folio = nfs_read_folio, 518 521 .readahead = nfs_readahead, 519 522 .dirty_folio = filemap_dirty_folio, 520 523 .writepage = nfs_writepage, ··· 522 525 .write_begin = nfs_write_begin, 523 526 .write_end = nfs_write_end, 524 527 .invalidate_folio = nfs_invalidate_folio, 525 - .releasepage = nfs_release_page, 528 + .release_folio = nfs_release_folio, 526 529 .direct_IO = 
nfs_direct_IO, 527 530 #ifdef CONFIG_MIGRATION 528 531 .migratepage = nfs_migrate_page,
+7 -7
fs/nfs/fscache.h
··· 48 48 extern int __nfs_fscache_read_page(struct inode *, struct page *); 49 49 extern void __nfs_fscache_write_page(struct inode *, struct page *); 50 50 51 - static inline int nfs_fscache_release_page(struct page *page, gfp_t gfp) 51 + static inline bool nfs_fscache_release_folio(struct folio *folio, gfp_t gfp) 52 52 { 53 - if (PageFsCache(page)) { 53 + if (folio_test_fscache(folio)) { 54 54 if (current_is_kswapd() || !(gfp & __GFP_FS)) 55 55 return false; 56 - wait_on_page_fscache(page); 57 - fscache_note_page_release(nfs_i_fscache(page->mapping->host)); 58 - nfs_inc_fscache_stats(page->mapping->host, 56 + folio_wait_fscache(folio); 57 + fscache_note_page_release(nfs_i_fscache(folio->mapping->host)); 58 + nfs_inc_fscache_stats(folio->mapping->host, 59 59 NFSIOS_FSCACHE_PAGES_UNCACHED); 60 60 } 61 61 return true; ··· 129 129 struct file *filp) {} 130 130 static inline void nfs_fscache_release_file(struct inode *inode, struct file *file) {} 131 131 132 - static inline int nfs_fscache_release_page(struct page *page, gfp_t gfp) 132 + static inline bool nfs_fscache_release_folio(struct folio *folio, gfp_t gfp) 133 133 { 134 - return 1; /* True: may release page */ 134 + return true; /* may release folio */ 135 135 } 136 136 static inline int nfs_fscache_read_page(struct inode *inode, struct page *page) 137 137 {
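Both nfs_release_folio() and nfs_fscache_release_folio() above change their return type from int (0/1) to bool, making the "may this folio be released?" contract explicit in the type system. A small stand-alone sketch of that contract, with a hypothetical stand-in folio type:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the kernel folio; illustration only. */
struct folio { bool has_private; };

bool folio_test_private_demo(const struct folio *folio)
{
	return folio->has_private;
}

/* Model of the new ->release_folio contract: take a folio, return a
 * bool saying whether the caller may free it. */
bool demo_release_folio(struct folio *folio)
{
	/* A folio with private data attached is not freeable,
	 * as in nfs_release_folio() above. */
	if (folio_test_private_demo(folio))
		return false;
	return true;
}
```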
+2 -1
fs/nfs/read.c
··· 333 333 * - The error flag is set for this page. This happens only when a 334 334 * previous async read operation failed. 335 335 */ 336 - int nfs_readpage(struct file *file, struct page *page) 336 + int nfs_read_folio(struct file *file, struct folio *folio) 337 337 { 338 + struct page *page = &folio->page; 338 339 struct nfs_readdesc desc; 339 340 struct inode *inode = page_file_mapping(page)->host; 340 341 int ret;
+8 -8
fs/nfs/symlink.c
··· 26 26 * and straight-forward than readdir caching. 27 27 */ 28 28 29 - static int nfs_symlink_filler(void *data, struct page *page) 29 + static int nfs_symlink_filler(struct file *file, struct folio *folio) 30 30 { 31 - struct inode *inode = data; 31 + struct inode *inode = folio->mapping->host; 32 32 int error; 33 33 34 - error = NFS_PROTO(inode)->readlink(inode, page, 0, PAGE_SIZE); 34 + error = NFS_PROTO(inode)->readlink(inode, &folio->page, 0, PAGE_SIZE); 35 35 if (error < 0) 36 36 goto error; 37 - SetPageUptodate(page); 38 - unlock_page(page); 37 + folio_mark_uptodate(folio); 38 + folio_unlock(folio); 39 39 return 0; 40 40 41 41 error: 42 - SetPageError(page); 43 - unlock_page(page); 42 + folio_set_error(folio); 43 + folio_unlock(folio); 44 44 return -EIO; 45 45 } 46 46 ··· 67 67 if (err) 68 68 return err; 69 69 page = read_cache_page(&inode->i_data, 0, nfs_symlink_filler, 70 - inode); 70 + NULL); 71 71 if (IS_ERR(page)) 72 72 return ERR_CAST(page); 73 73 }
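The symlink hunk above shows the filler_t change called out in the merge description: the filler passed to read_cache_page() now has the same shape as ->read_folio, taking a struct file pointer first, so the void * data cookie goes away and the inode is recovered from folio->mapping->host. A stand-alone model of the new signature with hypothetical stand-in types:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for kernel types; illustration only. */
struct inode         { int ino; };
struct address_space { struct inode *host; };
struct folio         { struct address_space *mapping; };
struct file          { int unused; };

/* New-style filler: same shape as ->read_folio, so the private-data
 * cookie disappears and the inode comes from the folio's mapping,
 * as nfs_symlink_filler() now does. */
typedef int (*demo_filler_t)(struct file *, struct folio *);

int demo_symlink_filler(struct file *file, struct folio *folio)
{
	struct inode *inode = folio->mapping->host;

	(void)file;		/* unused here, as in the NFS filler */
	return inode->ino;	/* stand-in for "read the link body" */
}
```

This is why the read_cache_page() call site above now passes NULL where it used to pass `inode`: the callback no longer needs an opaque cookie.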
+13 -14
fs/nilfs2/inode.c
··· 63 63 64 64 /** 65 65 * nilfs_get_block() - get a file block on the filesystem (callback function) 66 - * @inode - inode struct of the target file 67 - * @blkoff - file block number 68 - * @bh_result - buffer head to be mapped on 69 - * @create - indicate whether allocating the block or not when it has not 66 + * @inode: inode struct of the target file 67 + * @blkoff: file block number 68 + * @bh_result: buffer head to be mapped on 69 + * @create: indicate whether allocating the block or not when it has not 70 70 * been allocated yet. 71 71 * 72 72 * This function does not issue actual read request of the specified data ··· 140 140 } 141 141 142 142 /** 143 - * nilfs_readpage() - implement readpage() method of nilfs_aops {} 143 + * nilfs_read_folio() - implement read_folio() method of nilfs_aops {} 144 144 * address_space_operations. 145 - * @file - file struct of the file to be read 146 - * @page - the page to be read 145 + * @file: file struct of the file to be read 146 + * @folio: the folio to be read 147 147 */ 148 - static int nilfs_readpage(struct file *file, struct page *page) 148 + static int nilfs_read_folio(struct file *file, struct folio *folio) 149 149 { 150 - return mpage_readpage(page, nilfs_get_block); 150 + return mpage_read_folio(folio, nilfs_get_block); 151 151 } 152 152 153 153 static void nilfs_readahead(struct readahead_control *rac) ··· 248 248 } 249 249 250 250 static int nilfs_write_begin(struct file *file, struct address_space *mapping, 251 - loff_t pos, unsigned len, unsigned flags, 251 + loff_t pos, unsigned len, 252 252 struct page **pagep, void **fsdata) 253 253 254 254 { ··· 258 258 if (unlikely(err)) 259 259 return err; 260 260 261 - err = block_write_begin(mapping, pos, len, flags, pagep, 262 - nilfs_get_block); 261 + err = block_write_begin(mapping, pos, len, pagep, nilfs_get_block); 263 262 if (unlikely(err)) { 264 263 nilfs_write_failed(mapping, pos + len); 265 264 nilfs_transaction_abort(inode->i_sb); ··· 298 299 299 300 
const struct address_space_operations nilfs_aops = { 300 301 .writepage = nilfs_writepage, 301 - .readpage = nilfs_readpage, 302 + .read_folio = nilfs_read_folio, 302 303 .writepages = nilfs_writepages, 303 304 .dirty_folio = nilfs_dirty_folio, 304 305 .readahead = nilfs_readahead, 305 306 .write_begin = nilfs_write_begin, 306 307 .write_end = nilfs_write_end, 307 - /* .releasepage = nilfs_releasepage, */ 308 308 .invalidate_folio = block_invalidate_folio, 309 309 .direct_IO = nilfs_direct_IO, 310 310 .is_partially_uptodate = block_is_partially_uptodate, ··· 1086 1088 /** 1087 1089 * nilfs_dirty_inode - reflect changes on given inode to an inode block. 1088 1090 * @inode: inode of the file to be registered. 1091 + * @flags: flags to determine the dirty state of the inode 1089 1092 * 1090 1093 * nilfs_dirty_inode() loads a inode block containing the specified 1091 1094 * @inode and copies data from a nilfs_inode to a corresponding inode
+1 -1
fs/nilfs2/recovery.c
··· 511 511 512 512 pos = rb->blkoff << inode->i_blkbits; 513 513 err = block_write_begin(inode->i_mapping, pos, blocksize, 514 - 0, &page, nilfs_get_block); 514 + &page, nilfs_get_block); 515 515 if (unlikely(err)) { 516 516 loff_t isize = inode->i_size; 517 517
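The dropped argument to block_write_begin() here (previously a literal 0, or AOP_FLAG_NOFS at other call sites) is possible because GFP_NOFS behaviour is now scoped on the calling task with memalloc_nofs_save()/memalloc_nofs_restore(), rather than threaded through every address space operation. A hypothetical model of that save/restore discipline (the real implementation flips PF_MEMALLOC_NOFS in current->flags):

```c
#include <assert.h>

/* Hypothetical model of scoped NOFS allocation state; illustration
 * only. The real kernel stores this bit in current->flags. */
#define DEMO_PF_MEMALLOC_NOFS 0x1u
static unsigned int demo_task_flags;

unsigned int memalloc_nofs_save_demo(void)
{
	unsigned int old = demo_task_flags & DEMO_PF_MEMALLOC_NOFS;

	demo_task_flags |= DEMO_PF_MEMALLOC_NOFS;
	return old;
}

void memalloc_nofs_restore_demo(unsigned int old)
{
	demo_task_flags = (demo_task_flags & ~DEMO_PF_MEMALLOC_NOFS) | old;
}

/* Any allocation made between save and restore implicitly behaves as
 * GFP_NOFS, which is why write_begin no longer needs an aop_flags
 * argument at all. */
int allocation_is_nofs_demo(void)
{
	return (demo_task_flags & DEMO_PF_MEMALLOC_NOFS) != 0;
}
```

Returning the old state from save and passing it back to restore makes the pair safely nestable, which a per-call flag never was.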
+21 -19
fs/ntfs/aops.c
··· 159 159 * 160 160 * Return 0 on success and -errno on error. 161 161 * 162 - * Contains an adapted version of fs/buffer.c::block_read_full_page(). 162 + * Contains an adapted version of fs/buffer.c::block_read_full_folio(). 163 163 */ 164 164 static int ntfs_read_block(struct page *page) 165 165 { ··· 358 358 } 359 359 360 360 /** 361 - * ntfs_readpage - fill a @page of a @file with data from the device 362 - * @file: open file to which the page @page belongs or NULL 363 - * @page: page cache page to fill with data 361 + * ntfs_read_folio - fill a @folio of a @file with data from the device 362 + * @file: open file to which the folio @folio belongs or NULL 363 + * @folio: page cache folio to fill with data 364 364 * 365 - * For non-resident attributes, ntfs_readpage() fills the @page of the open 366 - * file @file by calling the ntfs version of the generic block_read_full_page() 365 + * For non-resident attributes, ntfs_read_folio() fills the @folio of the open 366 + * file @file by calling the ntfs version of the generic block_read_full_folio() 367 367 * function, ntfs_read_block(), which in turn creates and reads in the buffers 368 - * associated with the page asynchronously. 368 + * associated with the folio asynchronously. 369 369 * 370 - * For resident attributes, OTOH, ntfs_readpage() fills @page by copying the 370 + * For resident attributes, OTOH, ntfs_read_folio() fills @folio by copying the 371 371 * data from the mft record (which at this stage is most likely in memory) and 372 372 * fills the remainder with zeroes. Thus, in this case, I/O is synchronous, as 373 373 * even if the mft record is not cached at this point in time, we need to wait ··· 375 375 * 376 376 * Return 0 on success and -errno on error. 
377 377 */ 378 - static int ntfs_readpage(struct file *file, struct page *page) 378 + static int ntfs_read_folio(struct file *file, struct folio *folio) 379 379 { 380 + struct page *page = &folio->page; 380 381 loff_t i_size; 381 382 struct inode *vi; 382 383 ntfs_inode *ni, *base_ni; ··· 459 458 } 460 459 /* 461 460 * If a parallel write made the attribute non-resident, drop the mft 462 - * record and retry the readpage. 461 + * record and retry the read_folio. 463 462 */ 464 463 if (unlikely(NInoNonResident(ni))) { 465 464 unmap_mft_record(base_ni); ··· 638 637 if (unlikely((block >= iblock) && 639 638 (initialized_size < i_size))) { 640 639 /* 641 - * If this page is fully outside initialized size, zero 642 - * out all pages between the current initialized size 643 - * and the current page. Just use ntfs_readpage() to do 644 - * the zeroing transparently. 640 + * If this page is fully outside initialized 641 + * size, zero out all pages between the current 642 + * initialized size and the current page. Just 643 + * use ntfs_read_folio() to do the zeroing 644 + * transparently. 645 645 */ 646 646 if (block > iblock) { 647 647 // TODO: ··· 800 798 /* For the error case, need to reset bh to the beginning. */ 801 799 bh = head; 802 800 803 - /* Just an optimization, so ->readpage() is not called later. */ 801 + /* Just an optimization, so ->read_folio() is not called later. */ 804 802 if (unlikely(!PageUptodate(page))) { 805 803 int uptodate = 1; 806 804 do { ··· 1331 1329 * vfs inode dirty code path for the inode the mft record belongs to or via the 1332 1330 * vm page dirty code path for the page the mft record is in. 1333 1331 * 1334 - * Based on ntfs_readpage() and fs/buffer.c::block_write_full_page(). 1332 + * Based on ntfs_read_folio() and fs/buffer.c::block_write_full_page(). 1335 1333 * 1336 1334 * Return 0 on success and -errno on error. 1337 1335 */ ··· 1653 1651 * attributes. 
1654 1652 */ 1655 1653 const struct address_space_operations ntfs_normal_aops = { 1656 - .readpage = ntfs_readpage, 1654 + .read_folio = ntfs_read_folio, 1657 1655 #ifdef NTFS_RW 1658 1656 .writepage = ntfs_writepage, 1659 1657 .dirty_folio = block_dirty_folio, ··· 1668 1666 * ntfs_compressed_aops - address space operations for compressed inodes 1669 1667 */ 1670 1668 const struct address_space_operations ntfs_compressed_aops = { 1671 - .readpage = ntfs_readpage, 1669 + .read_folio = ntfs_read_folio, 1672 1670 #ifdef NTFS_RW 1673 1671 .writepage = ntfs_writepage, 1674 1672 .dirty_folio = block_dirty_folio, ··· 1683 1681 * and attributes 1684 1682 */ 1685 1683 const struct address_space_operations ntfs_mst_aops = { 1686 - .readpage = ntfs_readpage, /* Fill page with data. */ 1684 + .read_folio = ntfs_read_folio, /* Fill page with data. */ 1687 1685 #ifdef NTFS_RW 1688 1686 .writepage = ntfs_writepage, /* Write dirty page to disk. */ 1689 1687 .dirty_folio = filemap_dirty_folio,
+3 -3
fs/ntfs/aops.h
··· 37 37 * Read a page from the page cache of the address space @mapping at position 38 38 * @index, where @index is in units of PAGE_SIZE, and not in bytes. 39 39 * 40 - * If the page is not in memory it is loaded from disk first using the readpage 41 - * method defined in the address space operations of @mapping and the page is 42 - * added to the page cache of @mapping in the process. 40 + * If the page is not in memory it is loaded from disk first using the 41 + * read_folio method defined in the address space operations of @mapping 42 + * and the page is added to the page cache of @mapping in the process. 43 43 * 44 44 * If the page belongs to an mst protected attribute and it is marked as such 45 45 * in its ntfs inode (NInoMstProtected()) the mst fixups are applied but no
+1 -1
fs/ntfs/attrib.c
··· 1719 1719 vi->i_blocks = ni->allocated_size >> 9; 1720 1720 write_unlock_irqrestore(&ni->size_lock, flags); 1721 1721 /* 1722 - * This needs to be last since the address space operations ->readpage 1722 + * This needs to be last since the address space operations ->read_folio 1723 1723 * and ->writepage can run concurrently with us as they are not 1724 1724 * serialized on i_mutex. Note, we are not allowed to fail once we flip 1725 1725 * this switch, which is another reason to do this last.
+2 -2
fs/ntfs/compress.c
··· 780 780 /* Uncompressed cb, copy it to the destination pages. */ 781 781 /* 782 782 * TODO: As a big optimization, we could detect this case 783 - * before we read all the pages and use block_read_full_page() 783 + * before we read all the pages and use block_read_full_folio() 784 784 * on all full pages instead (we still have to treat partial 785 785 * pages especially but at least we are getting rid of the 786 786 * synchronous io for the majority of pages. 787 787 * Or if we choose not to do the read-ahead/-behind stuff, we 788 - * could just return block_read_full_page(pages[xpage]) as long 788 + * could just return block_read_full_folio(pages[xpage]) as long 789 789 * as PAGE_SIZE <= cb_size. 790 790 */ 791 791 if (cb_max_ofs)
+2 -2
fs/ntfs/file.c
··· 251 251 * 252 252 * TODO: For sparse pages could optimize this workload by using 253 253 * the FsMisc / MiscFs page bit as a "PageIsSparse" bit. This 254 - * would be set in readpage for sparse pages and here we would 254 + * would be set in read_folio for sparse pages and here we would 255 255 * not need to mark dirty any pages which have this bit set. 256 256 * The only caveat is that we have to clear the bit everywhere 257 257 * where we allocate any clusters that lie in the page or that 258 258 * contain the page. 259 259 * 260 260 * TODO: An even greater optimization would be for us to only 261 - * call readpage() on pages which are not in sparse regions as 261 + * call read_folio() on pages which are not in sparse regions as 262 262 * determined from the runlist. This would greatly reduce the 263 263 * number of pages we read and make dirty in the case of sparse 264 264 * files.
+2 -2
fs/ntfs/inode.c
··· 1832 1832 /* Need this to sanity check attribute list references to $MFT. */ 1833 1833 vi->i_generation = ni->seq_no = le16_to_cpu(m->sequence_number); 1834 1834 1835 - /* Provides readpage() for map_mft_record(). */ 1835 + /* Provides read_folio() for map_mft_record(). */ 1836 1836 vi->i_mapping->a_ops = &ntfs_mst_aops; 1837 1837 1838 1838 ctx = ntfs_attr_get_search_ctx(ni, m); ··· 2503 2503 * between the old data_size, i.e. old_size, and the new_size 2504 2504 * has not been zeroed. Fortunately, we do not need to zero it 2505 2505 * either since on one hand it will either already be zero due 2506 - * to both readpage and writepage clearing partial page data 2506 + * to both read_folio and writepage clearing partial page data 2507 2507 * beyond i_size in which case there is nothing to do or in the 2508 2508 * case of the file being mmap()ped at the same time, POSIX 2509 2509 * specifies that the behaviour is unspecified thus we do not
+1 -1
fs/ntfs/mft.h
··· 79 79 * paths and via the page cache write back code paths or between writing 80 80 * neighbouring mft records residing in the same page. 81 81 * 82 - * Locking the page also serializes us against ->readpage() if the page is not 82 + * Locking the page also serializes us against ->read_folio() if the page is not 83 83 * uptodate. 84 84 * 85 85 * On success, clean the mft record and return 0. On error, leave the mft
+2 -5
fs/ntfs3/file.c
··· 115 115 for (;;) { 116 116 u32 zerofrom, len; 117 117 struct page *page; 118 - void *fsdata; 119 118 u8 bits; 120 119 CLST vcn, lcn, clen; 121 120 ··· 156 157 if (pos + len > new_valid) 157 158 len = new_valid - pos; 158 159 159 - err = pagecache_write_begin(file, mapping, pos, len, 0, &page, 160 - &fsdata); 160 + err = ntfs_write_begin(file, mapping, pos, len, &page, NULL); 161 161 if (err) 162 162 goto out; 163 163 164 164 zero_user_segment(page, zerofrom, PAGE_SIZE); 165 165 166 166 /* This function in any case puts page. */ 167 - err = pagecache_write_end(file, mapping, pos, len, len, page, 168 - fsdata); 167 + err = ntfs_write_end(file, mapping, pos, len, len, page, NULL); 169 168 if (err < 0) 170 169 goto out; 171 170 pos += len;
+13 -14
fs/ntfs3/inode.c
··· 676 676 return generic_block_bmap(mapping, block, ntfs_get_block_bmap); 677 677 } 678 678 679 - static int ntfs_readpage(struct file *file, struct page *page) 679 + static int ntfs_read_folio(struct file *file, struct folio *folio) 680 680 { 681 + struct page *page = &folio->page; 681 682 int err; 682 683 struct address_space *mapping = page->mapping; 683 684 struct inode *inode = mapping->host; ··· 702 701 } 703 702 704 703 /* Normal + sparse files. */ 705 - return mpage_readpage(page, ntfs_get_block); 704 + return mpage_read_folio(folio, ntfs_get_block); 706 705 } 707 706 708 707 static void ntfs_readahead(struct readahead_control *rac) ··· 862 861 bh_result, create, GET_BLOCK_WRITE_BEGIN); 863 862 } 864 863 865 - static int ntfs_write_begin(struct file *file, struct address_space *mapping, 866 - loff_t pos, u32 len, u32 flags, struct page **pagep, 867 - void **fsdata) 864 + int ntfs_write_begin(struct file *file, struct address_space *mapping, 865 + loff_t pos, u32 len, struct page **pagep, void **fsdata) 868 866 { 869 867 int err; 870 868 struct inode *inode = mapping->host; ··· 872 872 *pagep = NULL; 873 873 if (is_resident(ni)) { 874 874 struct page *page = grab_cache_page_write_begin( 875 - mapping, pos >> PAGE_SHIFT, flags); 875 + mapping, pos >> PAGE_SHIFT); 876 876 877 877 if (!page) { 878 878 err = -ENOMEM; ··· 894 894 goto out; 895 895 } 896 896 897 - err = block_write_begin(mapping, pos, len, flags, pagep, 897 + err = block_write_begin(mapping, pos, len, pagep, 898 898 ntfs_get_block_write_begin); 899 899 900 900 out: ··· 904 904 /* 905 905 * ntfs_write_end - Address_space_operations::write_end. 
906 906 */ 907 - static int ntfs_write_end(struct file *file, struct address_space *mapping, 908 - loff_t pos, u32 len, u32 copied, struct page *page, 909 - void *fsdata) 910 - 907 + int ntfs_write_end(struct file *file, struct address_space *mapping, 908 + loff_t pos, u32 len, u32 copied, struct page *page, 909 + void *fsdata) 911 910 { 912 911 struct inode *inode = mapping->host; 913 912 struct ntfs_inode *ni = ntfs_i(inode); ··· 974 975 975 976 len = pos + PAGE_SIZE > log_size ? (log_size - pos) : PAGE_SIZE; 976 977 977 - err = block_write_begin(mapping, pos, len, 0, &page, 978 + err = block_write_begin(mapping, pos, len, &page, 978 979 ntfs_get_block_write_begin); 979 980 if (err) 980 981 goto out; ··· 1941 1942 }; 1942 1943 1943 1944 const struct address_space_operations ntfs_aops = { 1944 - .readpage = ntfs_readpage, 1945 + .read_folio = ntfs_read_folio, 1945 1946 .readahead = ntfs_readahead, 1946 1947 .writepage = ntfs_writepage, 1947 1948 .writepages = ntfs_writepages, ··· 1953 1954 }; 1954 1955 1955 1956 const struct address_space_operations ntfs_aops_cmpr = { 1956 - .readpage = ntfs_readpage, 1957 + .read_folio = ntfs_read_folio, 1957 1958 .readahead = ntfs_readahead, 1958 1959 }; 1959 1960 // clang-format on
+5
fs/ntfs3/ntfs_fs.h
··· 689 689 int reset_log_file(struct inode *inode); 690 690 int ntfs_get_block(struct inode *inode, sector_t vbn, 691 691 struct buffer_head *bh_result, int create); 692 + int ntfs_write_begin(struct file *file, struct address_space *mapping, 693 + loff_t pos, u32 len, struct page **pagep, void **fsdata); 694 + int ntfs_write_end(struct file *file, struct address_space *mapping, 695 + loff_t pos, u32 len, u32 copied, struct page *page, 696 + void *fsdata); 692 697 int ntfs3_write_inode(struct inode *inode, struct writeback_control *wbc); 693 698 int ntfs_sync_inode(struct inode *inode); 694 699 int ntfs_flush_inodes(struct super_block *sb, struct inode *i1,
+1 -1
fs/ocfs2/alloc.c
··· 7427 7427 /* 7428 7428 * No need to worry about the data page here - it's been 7429 7429 * truncated already and inline data doesn't need it for 7430 - * pushing zero's to disk, so we'll let readpage pick it up 7430 + * pushing zero's to disk, so we'll let read_folio pick it up 7431 7431 * later. 7432 7432 */ 7433 7433 if (trunc) {
+12 -11
fs/ocfs2/aops.c
··· 275 275 return ret; 276 276 } 277 277 278 - static int ocfs2_readpage(struct file *file, struct page *page) 278 + static int ocfs2_read_folio(struct file *file, struct folio *folio) 279 279 { 280 + struct page *page = &folio->page; 280 281 struct inode *inode = page->mapping->host; 281 282 struct ocfs2_inode_info *oi = OCFS2_I(inode); 282 283 loff_t start = (loff_t)page->index << PAGE_SHIFT; ··· 310 309 /* 311 310 * i_size might have just been updated as we grabed the meta lock. We 312 311 * might now be discovering a truncate that hit on another node. 313 - * block_read_full_page->get_block freaks out if it is asked to read 312 + * block_read_full_folio->get_block freaks out if it is asked to read 314 313 * beyond the end of a file, so we check here. Callers 315 314 * (generic_file_read, vm_ops->fault) are clever enough to check i_size 316 315 * and notice that the page they just read isn't needed. ··· 327 326 if (oi->ip_dyn_features & OCFS2_INLINE_DATA_FL) 328 327 ret = ocfs2_readpage_inline(inode, page); 329 328 else 330 - ret = block_read_full_page(page, ocfs2_get_block); 329 + ret = block_read_full_folio(page_folio(page), ocfs2_get_block); 331 330 unlock = 0; 332 331 333 332 out_alloc: ··· 498 497 return status; 499 498 } 500 499 501 - static int ocfs2_releasepage(struct page *page, gfp_t wait) 500 + static bool ocfs2_release_folio(struct folio *folio, gfp_t wait) 502 501 { 503 - if (!page_has_buffers(page)) 504 - return 0; 505 - return try_to_free_buffers(page); 502 + if (!folio_buffers(folio)) 503 + return false; 504 + return try_to_free_buffers(folio); 506 505 } 507 506 508 507 static void ocfs2_figure_cluster_boundaries(struct ocfs2_super *osb, ··· 1882 1881 } 1883 1882 1884 1883 static int ocfs2_write_begin(struct file *file, struct address_space *mapping, 1885 - loff_t pos, unsigned len, unsigned flags, 1884 + loff_t pos, unsigned len, 1886 1885 struct page **pagep, void **fsdata) 1887 1886 { 1888 1887 int ret; ··· 1898 1897 /* 1899 1898 * Take alloc 
sem here to prevent concurrent lookups. That way 1900 1899 * the mapping, zeroing and tree manipulation within 1901 - * ocfs2_write() will be safe against ->readpage(). This 1900 + * ocfs2_write() will be safe against ->read_folio(). This 1902 1901 * should also serve to lock out allocation from a shared 1903 1902 * writeable region. 1904 1903 */ ··· 2455 2454 2456 2455 const struct address_space_operations ocfs2_aops = { 2457 2456 .dirty_folio = block_dirty_folio, 2458 - .readpage = ocfs2_readpage, 2457 + .read_folio = ocfs2_read_folio, 2459 2458 .readahead = ocfs2_readahead, 2460 2459 .writepage = ocfs2_writepage, 2461 2460 .write_begin = ocfs2_write_begin, ··· 2463 2462 .bmap = ocfs2_bmap, 2464 2463 .direct_IO = ocfs2_direct_IO, 2465 2464 .invalidate_folio = block_invalidate_folio, 2466 - .releasepage = ocfs2_releasepage, 2465 + .release_folio = ocfs2_release_folio, 2467 2466 .migratepage = buffer_migrate_page, 2468 2467 .is_partially_uptodate = block_is_partially_uptodate, 2469 2468 .error_remove_page = generic_error_remove_page,
+1 -1
fs/ocfs2/file.c
··· 2526 2526 return -EOPNOTSUPP; 2527 2527 2528 2528 /* 2529 - * buffered reads protect themselves in ->readpage(). O_DIRECT reads 2529 + * buffered reads protect themselves in ->read_folio(). O_DIRECT reads 2530 2530 * need locks to protect pending reads from racing with truncate. 2531 2531 */ 2532 2532 if (direct_io) {
+4 -2
fs/ocfs2/refcounttree.c
··· 2961 2961 } 2962 2962 2963 2963 if (!PageUptodate(page)) { 2964 - ret = block_read_full_page(page, ocfs2_get_block); 2964 + struct folio *folio = page_folio(page); 2965 + 2966 + ret = block_read_full_folio(folio, ocfs2_get_block); 2965 2967 if (ret) { 2966 2968 mlog_errno(ret); 2967 2969 goto unlock; 2968 2970 } 2969 - lock_page(page); 2971 + folio_lock(folio); 2970 2972 } 2971 2973 2972 2974 if (page_has_buffers(page)) {
+3 -2
fs/ocfs2/symlink.c
··· 52 52 #include "buffer_head_io.h" 53 53 54 54 55 - static int ocfs2_fast_symlink_readpage(struct file *unused, struct page *page) 55 + static int ocfs2_fast_symlink_read_folio(struct file *f, struct folio *folio) 56 56 { 57 + struct page *page = &folio->page; 57 58 struct inode *inode = page->mapping->host; 58 59 struct buffer_head *bh = NULL; 59 60 int status = ocfs2_read_inode_block(inode, &bh); ··· 82 81 } 83 82 84 83 const struct address_space_operations ocfs2_fast_symlink_aops = { 85 - .readpage = ocfs2_fast_symlink_readpage, 84 + .read_folio = ocfs2_fast_symlink_read_folio, 86 85 }; 87 86 88 87 const struct inode_operations ocfs2_symlink_inode_operations = {
+5 -6
fs/omfs/file.c
··· 284 284 return ret; 285 285 } 286 286 287 - static int omfs_readpage(struct file *file, struct page *page) 287 + static int omfs_read_folio(struct file *file, struct folio *folio) 288 288 { 289 - return block_read_full_page(page, omfs_get_block); 289 + return block_read_full_folio(folio, omfs_get_block); 290 290 } 291 291 292 292 static void omfs_readahead(struct readahead_control *rac) ··· 316 316 } 317 317 318 318 static int omfs_write_begin(struct file *file, struct address_space *mapping, 319 - loff_t pos, unsigned len, unsigned flags, 319 + loff_t pos, unsigned len, 320 320 struct page **pagep, void **fsdata) 321 321 { 322 322 int ret; 323 323 324 - ret = block_write_begin(mapping, pos, len, flags, pagep, 325 - omfs_get_block); 324 + ret = block_write_begin(mapping, pos, len, pagep, omfs_get_block); 326 325 if (unlikely(ret)) 327 326 omfs_write_failed(mapping, pos + len); 328 327 ··· 373 374 const struct address_space_operations omfs_aops = { 374 375 .dirty_folio = block_dirty_folio, 375 376 .invalidate_folio = block_invalidate_folio, 376 - .readpage = omfs_readpage, 377 + .read_folio = omfs_read_folio, 377 378 .readahead = omfs_readahead, 378 379 .writepage = omfs_writepage, 379 380 .writepages = omfs_writepages,
+25 -27
fs/orangefs/inode.c
··· 288 288 } 289 289 } 290 290 291 - static int orangefs_readpage(struct file *file, struct page *page) 291 + static int orangefs_read_folio(struct file *file, struct folio *folio) 292 292 { 293 - struct folio *folio = page_folio(page); 294 - struct inode *inode = page->mapping->host; 293 + struct inode *inode = folio->mapping->host; 295 294 struct iov_iter iter; 296 295 struct bio_vec bv; 297 296 ssize_t ret; 298 - loff_t off; /* offset into this page */ 297 + loff_t off; /* offset of this folio in the file */ 299 298 300 299 if (folio_test_dirty(folio)) 301 300 orangefs_launder_folio(folio); 302 301 303 - off = page_offset(page); 304 - bv.bv_page = page; 305 - bv.bv_len = PAGE_SIZE; 302 + off = folio_pos(folio); 303 + bv.bv_page = &folio->page; 304 + bv.bv_len = folio_size(folio); 306 305 bv.bv_offset = 0; 307 - iov_iter_bvec(&iter, READ, &bv, 1, PAGE_SIZE); 306 + iov_iter_bvec(&iter, READ, &bv, 1, folio_size(folio)); 308 307 309 308 ret = wait_for_direct_io(ORANGEFS_IO_READ, inode, &off, &iter, 310 - PAGE_SIZE, inode->i_size, NULL, NULL, file); 309 + folio_size(folio), inode->i_size, NULL, NULL, file); 311 310 /* this will only zero remaining unread portions of the page data */ 312 311 iov_iter_zero(~0U, &iter); 313 312 /* takes care of potential aliasing */ 314 - flush_dcache_page(page); 313 + flush_dcache_folio(folio); 315 314 if (ret < 0) { 316 - SetPageError(page); 315 + folio_set_error(folio); 317 316 } else { 318 - SetPageUptodate(page); 319 - if (PageError(page)) 320 - ClearPageError(page); 317 + folio_mark_uptodate(folio); 318 + if (folio_test_error(folio)) 319 + folio_clear_error(folio); 321 320 ret = 0; 322 321 } 323 - /* unlock the page after the ->readpage() routine completes */ 324 - unlock_page(page); 322 + /* unlock the folio after the ->read_folio() routine completes */ 323 + folio_unlock(folio); 325 324 return ret; 326 325 } 327 326 328 327 static int orangefs_write_begin(struct file *file, 329 - struct address_space *mapping, 330 - loff_t pos, 
unsigned len, unsigned flags, struct page **pagep, 331 - void **fsdata) 328 + struct address_space *mapping, loff_t pos, unsigned len, 329 + struct page **pagep, void **fsdata) 332 330 { 333 331 struct orangefs_write_range *wr; 334 332 struct folio *folio; ··· 336 338 337 339 index = pos >> PAGE_SHIFT; 338 340 339 - page = grab_cache_page_write_begin(mapping, index, flags); 341 + page = grab_cache_page_write_begin(mapping, index); 340 342 if (!page) 341 343 return -ENOMEM; 342 344 ··· 485 487 orangefs_launder_folio(folio); 486 488 } 487 489 488 - static int orangefs_releasepage(struct page *page, gfp_t foo) 490 + static bool orangefs_release_folio(struct folio *folio, gfp_t foo) 489 491 { 490 - return !PagePrivate(page); 492 + return !folio_test_private(folio); 491 493 } 492 494 493 - static void orangefs_freepage(struct page *page) 495 + static void orangefs_free_folio(struct folio *folio) 494 496 { 495 - kfree(detach_page_private(page)); 497 + kfree(folio_detach_private(folio)); 496 498 } 497 499 498 500 static int orangefs_launder_folio(struct folio *folio) ··· 630 632 static const struct address_space_operations orangefs_address_operations = { 631 633 .writepage = orangefs_writepage, 632 634 .readahead = orangefs_readahead, 633 - .readpage = orangefs_readpage, 635 + .read_folio = orangefs_read_folio, 634 636 .writepages = orangefs_writepages, 635 637 .dirty_folio = filemap_dirty_folio, 636 638 .write_begin = orangefs_write_begin, 637 639 .write_end = orangefs_write_end, 638 640 .invalidate_folio = orangefs_invalidate_folio, 639 - .releasepage = orangefs_releasepage, 640 - .freepage = orangefs_freepage, 641 + .release_folio = orangefs_release_folio, 642 + .free_folio = orangefs_free_folio, 641 643 .launder_folio = orangefs_launder_folio, 642 644 .direct_IO = orangefs_direct_IO, 643 645 };
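orangefs_read_folio() above is one of the more thorough conversions in this series: it derives both the file offset and the I/O length from the folio via folio_pos() and folio_size() instead of hard-coding PAGE_SIZE, so the same code would remain correct for multi-page (large) folios. A stand-alone model of those two accessors with hypothetical stand-in types:

```c
#include <assert.h>

/* Hypothetical stand-in for the kernel folio; illustration only. */
#define DEMO_PAGE_SIZE 4096ul
struct folio {
	unsigned long index;	/* index of the first page in the file */
	unsigned int  order;	/* log2 of the number of pages */
};

/* folio_size() analogue: a folio covers 2^order pages. */
unsigned long folio_size_demo(const struct folio *folio)
{
	return DEMO_PAGE_SIZE << folio->order;
}

/* folio_pos() analogue: byte offset of the folio in its file. */
unsigned long folio_pos_demo(const struct folio *folio)
{
	return folio->index * DEMO_PAGE_SIZE;
}
```

Conversions that keep a literal PAGE_SIZE (as several other files in this merge still do) are correct only while every folio is a single page.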
+4 -3
fs/qnx4/inode.c
··· 245 245 } 246 246 } 247 247 248 - static int qnx4_readpage(struct file *file, struct page *page) 248 + static int qnx4_read_folio(struct file *file, struct folio *folio) 249 249 { 250 - return block_read_full_page(page,qnx4_get_block); 250 + return block_read_full_folio(folio, qnx4_get_block); 251 251 } 252 252 253 253 static sector_t qnx4_bmap(struct address_space *mapping, sector_t block) 254 254 { 255 255 return generic_block_bmap(mapping,block,qnx4_get_block); 256 256 } 257 + 257 258 static const struct address_space_operations qnx4_aops = { 258 - .readpage = qnx4_readpage, 259 + .read_folio = qnx4_read_folio, 259 260 .bmap = qnx4_bmap 260 261 }; 261 262
+3 -3
fs/qnx6/inode.c
··· 94 94 return 1; 95 95 } 96 96 97 - static int qnx6_readpage(struct file *file, struct page *page) 97 + static int qnx6_read_folio(struct file *file, struct folio *folio) 98 98 { 99 - return mpage_readpage(page, qnx6_get_block); 99 + return mpage_read_folio(folio, qnx6_get_block); 100 100 } 101 101 102 102 static void qnx6_readahead(struct readahead_control *rac) ··· 496 496 return generic_block_bmap(mapping, block, qnx6_get_block); 497 497 } 498 498 static const struct address_space_operations qnx6_aops = { 499 - .readpage = qnx6_readpage, 499 + .read_folio = qnx6_read_folio, 500 500 .readahead = qnx6_readahead, 501 501 .bmap = qnx6_bmap 502 502 };
+1 -1
fs/reiserfs/file.c
··· 227 227 } 228 228 /* 229 229 * If this is a partial write which happened to make all buffers 230 - * uptodate then we can optimize away a bogus readpage() for 230 + * uptodate then we can optimize away a bogus read_folio() for 231 231 * the next read(). Here we 'discover' whether the page went 232 232 * uptodate as a result of this (potentially partial) write. 233 233 */
+18 -18
fs/reiserfs/inode.c
··· 167 167 * cutting the code is fine, since it really isn't in use yet and is easy 168 168 * to add back in. But, Vladimir has a really good idea here. Think 169 169 * about what happens for reading a file. For each page, 170 - * The VFS layer calls reiserfs_readpage, who searches the tree to find 170 + * The VFS layer calls reiserfs_read_folio, who searches the tree to find 171 171 * an indirect item. This indirect item has X number of pointers, where 172 172 * X is a big number if we've done the block allocation right. But, 173 - * we only use one or two of these pointers during each call to readpage, 173 + * we only use one or two of these pointers during each call to read_folio, 174 174 * needlessly researching again later on. 175 175 * 176 176 * The size of the cache could be dynamic based on the size of the file. ··· 966 966 * it is important the set_buffer_uptodate is done 967 967 * after the direct2indirect. The buffer might 968 968 * contain valid data newer than the data on disk 969 - * (read by readpage, changed, and then sent here by 969 + * (read by read_folio, changed, and then sent here by 970 970 * writepage). 
direct2indirect needs to know if unbh 971 971 * was already up to date, so it can decide if the 972 972 * data in unbh needs to be replaced with data from ··· 2733 2733 goto done; 2734 2734 } 2735 2735 2736 - static int reiserfs_readpage(struct file *f, struct page *page) 2736 + static int reiserfs_read_folio(struct file *f, struct folio *folio) 2737 2737 { 2738 - return block_read_full_page(page, reiserfs_get_block); 2738 + return block_read_full_folio(folio, reiserfs_get_block); 2739 2739 } 2740 2740 2741 2741 static int reiserfs_writepage(struct page *page, struct writeback_control *wbc) ··· 2753 2753 2754 2754 static int reiserfs_write_begin(struct file *file, 2755 2755 struct address_space *mapping, 2756 - loff_t pos, unsigned len, unsigned flags, 2756 + loff_t pos, unsigned len, 2757 2757 struct page **pagep, void **fsdata) 2758 2758 { 2759 2759 struct inode *inode; ··· 2764 2764 2765 2765 inode = mapping->host; 2766 2766 index = pos >> PAGE_SHIFT; 2767 - page = grab_cache_page_write_begin(mapping, index, flags); 2767 + page = grab_cache_page_write_begin(mapping, index); 2768 2768 if (!page) 2769 2769 return -ENOMEM; 2770 2770 *pagep = page; ··· 3202 3202 } 3203 3203 3204 3204 /* 3205 - * Returns 1 if the page's buffers were dropped. The page is locked. 3205 + * Returns true if the folio's buffers were dropped. The folio is locked. 3206 3206 * 3207 3207 * Takes j_dirty_buffers_lock to protect the b_assoc_buffers list_heads 3208 - * in the buffers at page_buffers(page). 3208 + * in the buffers at folio_buffers(folio). 3209 3209 * 3210 3210 * even in -o notail mode, we can't be sure an old mount without -o notail 3211 3211 * didn't create files with tails. 
3212 3212 */ 3213 - static int reiserfs_releasepage(struct page *page, gfp_t unused_gfp_flags) 3213 + static bool reiserfs_release_folio(struct folio *folio, gfp_t unused_gfp_flags) 3214 3214 { 3215 - struct inode *inode = page->mapping->host; 3215 + struct inode *inode = folio->mapping->host; 3216 3216 struct reiserfs_journal *j = SB_JOURNAL(inode->i_sb); 3217 3217 struct buffer_head *head; 3218 3218 struct buffer_head *bh; 3219 - int ret = 1; 3219 + bool ret = true; 3220 3220 3221 - WARN_ON(PageChecked(page)); 3221 + WARN_ON(folio_test_checked(folio)); 3222 3222 spin_lock(&j->j_dirty_buffers_lock); 3223 - head = page_buffers(page); 3223 + head = folio_buffers(folio); 3224 3224 bh = head; 3225 3225 do { 3226 3226 if (bh->b_private) { 3227 3227 if (!buffer_dirty(bh) && !buffer_locked(bh)) { 3228 3228 reiserfs_free_jh(bh); 3229 3229 } else { 3230 - ret = 0; 3230 + ret = false; 3231 3231 break; 3232 3232 } 3233 3233 } 3234 3234 bh = bh->b_this_page; 3235 3235 } while (bh != head); 3236 3236 if (ret) 3237 - ret = try_to_free_buffers(page); 3237 + ret = try_to_free_buffers(folio); 3238 3238 spin_unlock(&j->j_dirty_buffers_lock); 3239 3239 return ret; 3240 3240 } ··· 3421 3421 3422 3422 const struct address_space_operations reiserfs_address_space_operations = { 3423 3423 .writepage = reiserfs_writepage, 3424 - .readpage = reiserfs_readpage, 3424 + .read_folio = reiserfs_read_folio, 3425 3425 .readahead = reiserfs_readahead, 3426 - .releasepage = reiserfs_releasepage, 3426 + .release_folio = reiserfs_release_folio, 3427 3427 .invalidate_folio = reiserfs_invalidate_folio, 3428 3428 .write_begin = reiserfs_write_begin, 3429 3429 .write_end = reiserfs_write_end,
+7 -7
fs/reiserfs/journal.c
··· 601 601 */ 602 602 static void release_buffer_page(struct buffer_head *bh) 603 603 { 604 - struct page *page = bh->b_page; 605 - if (!page->mapping && trylock_page(page)) { 606 - get_page(page); 604 + struct folio *folio = page_folio(bh->b_page); 605 + if (!folio->mapping && folio_trylock(folio)) { 606 + folio_get(folio); 607 607 put_bh(bh); 608 - if (!page->mapping) 609 - try_to_free_buffers(page); 610 - unlock_page(page); 611 - put_page(page); 608 + if (!folio->mapping) 609 + try_to_free_buffers(folio); 610 + folio_unlock(folio); 611 + folio_put(folio); 612 612 } else { 613 613 put_bh(bh); 614 614 }
+5 -4
fs/romfs/super.c
··· 18 18 * Changed for 2.1.19 modules 19 19 * Jan 1997 Initial release 20 20 * Jun 1997 2.1.43+ changes 21 - * Proper page locking in readpage 21 + * Proper page locking in read_folio 22 22 * Changed to work with 2.1.45+ fs 23 23 * Jul 1997 Fixed follow_link 24 24 * 2.1.47 ··· 41 41 * dentries in lookup 42 42 * clean up page flags setting 43 43 * (error, uptodate, locking) in 44 - * in readpage 44 + * in read_folio 45 45 * use init_special_inode for 46 46 * fifos/sockets (and streamline) in 47 47 * read_inode, fix _ops table order ··· 99 99 /* 100 100 * read a page worth of data from the image 101 101 */ 102 - static int romfs_readpage(struct file *file, struct page *page) 102 + static int romfs_read_folio(struct file *file, struct folio *folio) 103 103 { 104 + struct page *page = &folio->page; 104 105 struct inode *inode = page->mapping->host; 105 106 loff_t offset, size; 106 107 unsigned long fillsize, pos; ··· 143 142 } 144 143 145 144 static const struct address_space_operations romfs_aops = { 146 - .readpage = romfs_readpage 145 + .read_folio = romfs_read_folio 147 146 }; 148 147 149 148 /*
+3 -2
fs/squashfs/file.c
··· 444 444 return 0; 445 445 } 446 446 447 - static int squashfs_readpage(struct file *file, struct page *page) 447 + static int squashfs_read_folio(struct file *file, struct folio *folio) 448 448 { 449 + struct page *page = &folio->page; 449 450 struct inode *inode = page->mapping->host; 450 451 struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info; 451 452 int index = page->index >> (msblk->block_log - PAGE_SHIFT); ··· 497 496 498 497 499 498 const struct address_space_operations squashfs_aops = { 500 - .readpage = squashfs_readpage 499 + .read_folio = squashfs_read_folio 501 500 };
+1 -1
fs/squashfs/super.c
··· 148 148 149 149 /* 150 150 * squashfs provides 'backing_dev_info' in order to disable read-ahead. For 151 - * squashfs, I/O is not deferred, it is done immediately in readpage, 151 + * squashfs, I/O is not deferred, it is done immediately in read_folio, 152 152 * which means the user would always have to wait their own I/O. So the effect 153 153 * of readahead is very weak for squashfs. squashfs_bdi_init will set 154 154 * sb->s_bdi->ra_pages and sb->s_bdi->io_pages to 0 and close readahead for
+3 -2
fs/squashfs/symlink.c
··· 30 30 #include "squashfs.h" 31 31 #include "xattr.h" 32 32 33 - static int squashfs_symlink_readpage(struct file *file, struct page *page) 33 + static int squashfs_symlink_read_folio(struct file *file, struct folio *folio) 34 34 { 35 + struct page *page = &folio->page; 35 36 struct inode *inode = page->mapping->host; 36 37 struct super_block *sb = inode->i_sb; 37 38 struct squashfs_sb_info *msblk = sb->s_fs_info; ··· 102 101 103 102 104 103 const struct address_space_operations squashfs_symlink_aops = { 105 - .readpage = squashfs_symlink_readpage 104 + .read_folio = squashfs_symlink_read_folio 106 105 }; 107 106 108 107 const struct inode_operations squashfs_symlink_inode_ops = {
+5 -5
fs/sysv/itree.c
··· 456 456 return block_write_full_page(page,get_block,wbc); 457 457 } 458 458 459 - static int sysv_readpage(struct file *file, struct page *page) 459 + static int sysv_read_folio(struct file *file, struct folio *folio) 460 460 { 461 - return block_read_full_page(page,get_block); 461 + return block_read_full_folio(folio, get_block); 462 462 } 463 463 464 464 int sysv_prepare_chunk(struct page *page, loff_t pos, unsigned len) ··· 477 477 } 478 478 479 479 static int sysv_write_begin(struct file *file, struct address_space *mapping, 480 - loff_t pos, unsigned len, unsigned flags, 480 + loff_t pos, unsigned len, 481 481 struct page **pagep, void **fsdata) 482 482 { 483 483 int ret; 484 484 485 - ret = block_write_begin(mapping, pos, len, flags, pagep, get_block); 485 + ret = block_write_begin(mapping, pos, len, pagep, get_block); 486 486 if (unlikely(ret)) 487 487 sysv_write_failed(mapping, pos + len); 488 488 ··· 497 497 const struct address_space_operations sysv_aops = { 498 498 .dirty_folio = block_dirty_folio, 499 499 .invalidate_folio = block_invalidate_folio, 500 - .readpage = sysv_readpage, 500 + .read_folio = sysv_read_folio, 501 501 .writepage = sysv_writepage, 502 502 .write_begin = sysv_write_begin, 503 503 .write_end = generic_write_end,
+21 -20
fs/ubifs/file.c
··· 31 31 * in the "sys_write -> alloc_pages -> direct reclaim path". So, in 32 32 * 'ubifs_writepage()' we are only guaranteed that the page is locked. 33 33 * 34 - * Similarly, @i_mutex is not always locked in 'ubifs_readpage()', e.g., the 34 + * Similarly, @i_mutex is not always locked in 'ubifs_read_folio()', e.g., the 35 35 * read-ahead path does not lock it ("sys_read -> generic_file_aio_read -> 36 - * ondemand_readahead -> readpage"). In case of readahead, @I_SYNC flag is not 36 + * ondemand_readahead -> read_folio"). In case of readahead, @I_SYNC flag is not 37 37 * set as well. However, UBIFS disables readahead. 38 38 */ 39 39 ··· 215 215 } 216 216 217 217 static int write_begin_slow(struct address_space *mapping, 218 - loff_t pos, unsigned len, struct page **pagep, 219 - unsigned flags) 218 + loff_t pos, unsigned len, struct page **pagep) 220 219 { 221 220 struct inode *inode = mapping->host; 222 221 struct ubifs_info *c = inode->i_sb->s_fs_info; ··· 243 244 if (unlikely(err)) 244 245 return err; 245 246 246 - page = grab_cache_page_write_begin(mapping, index, flags); 247 + page = grab_cache_page_write_begin(mapping, index); 247 248 if (unlikely(!page)) { 248 249 ubifs_release_budget(c, &req); 249 250 return -ENOMEM; ··· 418 419 * without forcing write-back. The slow path does not make this assumption. 
419 420 */ 420 421 static int ubifs_write_begin(struct file *file, struct address_space *mapping, 421 - loff_t pos, unsigned len, unsigned flags, 422 + loff_t pos, unsigned len, 422 423 struct page **pagep, void **fsdata) 423 424 { 424 425 struct inode *inode = mapping->host; ··· 436 437 return -EROFS; 437 438 438 439 /* Try out the fast-path part first */ 439 - page = grab_cache_page_write_begin(mapping, index, flags); 440 + page = grab_cache_page_write_begin(mapping, index); 440 441 if (unlikely(!page)) 441 442 return -ENOMEM; 442 443 ··· 492 493 unlock_page(page); 493 494 put_page(page); 494 495 495 - return write_begin_slow(mapping, pos, len, pagep, flags); 496 + return write_begin_slow(mapping, pos, len, pagep); 496 497 } 497 498 498 499 /* ··· 889 890 return err; 890 891 } 891 892 892 - static int ubifs_readpage(struct file *file, struct page *page) 893 + static int ubifs_read_folio(struct file *file, struct folio *folio) 893 894 { 895 + struct page *page = &folio->page; 896 + 894 897 if (ubifs_bulk_read(page)) 895 898 return 0; 896 899 do_readpage(page); 897 - unlock_page(page); 900 + folio_unlock(folio); 898 901 return 0; 899 902 } 900 903 ··· 1484 1483 } 1485 1484 #endif 1486 1485 1487 - static int ubifs_releasepage(struct page *page, gfp_t unused_gfp_flags) 1486 + static bool ubifs_release_folio(struct folio *folio, gfp_t unused_gfp_flags) 1488 1487 { 1489 - struct inode *inode = page->mapping->host; 1488 + struct inode *inode = folio->mapping->host; 1490 1489 struct ubifs_info *c = inode->i_sb->s_fs_info; 1491 1490 1492 1491 /* 1493 1492 * An attempt to release a dirty page without budgeting for it - should 1494 1493 * not happen. 
1495 1494 */ 1496 - if (PageWriteback(page)) 1497 - return 0; 1498 - ubifs_assert(c, PagePrivate(page)); 1495 + if (folio_test_writeback(folio)) 1496 + return false; 1497 + ubifs_assert(c, folio_test_private(folio)); 1499 1498 ubifs_assert(c, 0); 1500 - detach_page_private(page); 1501 - ClearPageChecked(page); 1502 - return 1; 1499 + folio_detach_private(folio); 1500 + folio_clear_checked(folio); 1501 + return true; 1503 1502 } 1504 1503 1505 1504 /* ··· 1643 1642 } 1644 1643 1645 1644 const struct address_space_operations ubifs_file_address_operations = { 1646 - .readpage = ubifs_readpage, 1645 + .read_folio = ubifs_read_folio, 1647 1646 .writepage = ubifs_writepage, 1648 1647 .write_begin = ubifs_write_begin, 1649 1648 .write_end = ubifs_write_end, ··· 1652 1651 #ifdef CONFIG_MIGRATION 1653 1652 .migratepage = ubifs_migrate_page, 1654 1653 #endif 1655 - .releasepage = ubifs_releasepage, 1654 + .release_folio = ubifs_release_folio, 1656 1655 }; 1657 1656 1658 1657 const struct inode_operations ubifs_file_inode_operations = {
+1 -1
fs/ubifs/super.c
··· 2191 2191 2192 2192 /* 2193 2193 * UBIFS provides 'backing_dev_info' in order to disable read-ahead. For 2194 - * UBIFS, I/O is not deferred, it is done immediately in readpage, 2194 + * UBIFS, I/O is not deferred, it is done immediately in read_folio, 2195 2195 * which means the user would have to wait not just for their own I/O 2196 2196 * but the read-ahead I/O as well i.e. completely pointless. 2197 2197 *
+7 -7
fs/udf/file.c
··· 57 57 kunmap_atomic(kaddr); 58 58 } 59 59 60 - static int udf_adinicb_readpage(struct file *file, struct page *page) 60 + static int udf_adinicb_read_folio(struct file *file, struct folio *folio) 61 61 { 62 - BUG_ON(!PageLocked(page)); 63 - __udf_adinicb_readpage(page); 64 - unlock_page(page); 62 + BUG_ON(!folio_test_locked(folio)); 63 + __udf_adinicb_readpage(&folio->page); 64 + folio_unlock(folio); 65 65 66 66 return 0; 67 67 } ··· 87 87 88 88 static int udf_adinicb_write_begin(struct file *file, 89 89 struct address_space *mapping, loff_t pos, 90 - unsigned len, unsigned flags, struct page **pagep, 90 + unsigned len, struct page **pagep, 91 91 void **fsdata) 92 92 { 93 93 struct page *page; 94 94 95 95 if (WARN_ON_ONCE(pos >= PAGE_SIZE)) 96 96 return -EIO; 97 - page = grab_cache_page_write_begin(mapping, 0, flags); 97 + page = grab_cache_page_write_begin(mapping, 0); 98 98 if (!page) 99 99 return -ENOMEM; 100 100 *pagep = page; ··· 127 127 const struct address_space_operations udf_adinicb_aops = { 128 128 .dirty_folio = block_dirty_folio, 129 129 .invalidate_folio = block_invalidate_folio, 130 - .readpage = udf_adinicb_readpage, 130 + .read_folio = udf_adinicb_read_folio, 131 131 .writepage = udf_adinicb_writepage, 132 132 .write_begin = udf_adinicb_write_begin, 133 133 .write_end = udf_adinicb_write_end,
+5 -5
fs/udf/inode.c
··· 193 193 return mpage_writepages(mapping, wbc, udf_get_block); 194 194 } 195 195 196 - static int udf_readpage(struct file *file, struct page *page) 196 + static int udf_read_folio(struct file *file, struct folio *folio) 197 197 { 198 - return mpage_readpage(page, udf_get_block); 198 + return mpage_read_folio(folio, udf_get_block); 199 199 } 200 200 201 201 static void udf_readahead(struct readahead_control *rac) ··· 204 204 } 205 205 206 206 static int udf_write_begin(struct file *file, struct address_space *mapping, 207 - loff_t pos, unsigned len, unsigned flags, 207 + loff_t pos, unsigned len, 208 208 struct page **pagep, void **fsdata) 209 209 { 210 210 int ret; 211 211 212 - ret = block_write_begin(mapping, pos, len, flags, pagep, udf_get_block); 212 + ret = block_write_begin(mapping, pos, len, pagep, udf_get_block); 213 213 if (unlikely(ret)) 214 214 udf_write_failed(mapping, pos + len); 215 215 return ret; ··· 237 237 const struct address_space_operations udf_aops = { 238 238 .dirty_folio = block_dirty_folio, 239 239 .invalidate_folio = block_invalidate_folio, 240 - .readpage = udf_readpage, 240 + .read_folio = udf_read_folio, 241 241 .readahead = udf_readahead, 242 242 .writepage = udf_writepage, 243 243 .writepages = udf_writepages,
+3 -2
fs/udf/symlink.c
··· 101 101 return 0; 102 102 } 103 103 104 - static int udf_symlink_filler(struct file *file, struct page *page) 104 + static int udf_symlink_filler(struct file *file, struct folio *folio) 105 105 { 106 + struct page *page = &folio->page; 106 107 struct inode *inode = page->mapping->host; 107 108 struct buffer_head *bh = NULL; 108 109 unsigned char *symlink; ··· 184 183 * symlinks can't do much... 185 184 */ 186 185 const struct address_space_operations udf_symlink_aops = { 187 - .readpage = udf_symlink_filler, 186 + .read_folio = udf_symlink_filler, 188 187 }; 189 188 190 189 const struct inode_operations udf_symlink_inode_operations = {
+6 -7
fs/ufs/inode.c
··· 390 390 391 391 /** 392 392 * ufs_getfrag_block() - `get_block_t' function, interface between UFS and 393 - * readpage, writepage and so on 393 + * read_folio, writepage and so on 394 394 */ 395 395 396 396 static int ufs_getfrag_block(struct inode *inode, sector_t fragment, struct buffer_head *bh_result, int create) ··· 472 472 return block_write_full_page(page,ufs_getfrag_block,wbc); 473 473 } 474 474 475 - static int ufs_readpage(struct file *file, struct page *page) 475 + static int ufs_read_folio(struct file *file, struct folio *folio) 476 476 { 477 - return block_read_full_page(page,ufs_getfrag_block); 477 + return block_read_full_folio(folio, ufs_getfrag_block); 478 478 } 479 479 480 480 int ufs_prepare_chunk(struct page *page, loff_t pos, unsigned len) ··· 495 495 } 496 496 497 497 static int ufs_write_begin(struct file *file, struct address_space *mapping, 498 - loff_t pos, unsigned len, unsigned flags, 498 + loff_t pos, unsigned len, 499 499 struct page **pagep, void **fsdata) 500 500 { 501 501 int ret; 502 502 503 - ret = block_write_begin(mapping, pos, len, flags, pagep, 504 - ufs_getfrag_block); 503 + ret = block_write_begin(mapping, pos, len, pagep, ufs_getfrag_block); 505 504 if (unlikely(ret)) 506 505 ufs_write_failed(mapping, pos + len); 507 506 ··· 527 528 const struct address_space_operations ufs_aops = { 528 529 .dirty_folio = block_dirty_folio, 529 530 .invalidate_folio = block_invalidate_folio, 530 - .readpage = ufs_readpage, 531 + .read_folio = ufs_read_folio, 531 532 .writepage = ufs_writepage, 532 533 .write_begin = ufs_write_begin, 533 534 .write_end = ufs_write_end,
+3 -2
fs/vboxsf/file.c
··· 225 225 .setattr = vboxsf_setattr 226 226 }; 227 227 228 - static int vboxsf_readpage(struct file *file, struct page *page) 228 + static int vboxsf_read_folio(struct file *file, struct folio *folio) 229 229 { 230 + struct page *page = &folio->page; 230 231 struct vboxsf_handle *sf_handle = file->private_data; 231 232 loff_t off = page_offset(page); 232 233 u32 nread = PAGE_SIZE; ··· 353 352 * page and it does not call SetPageUptodate for partial writes. 354 353 */ 355 354 const struct address_space_operations vboxsf_reg_aops = { 356 - .readpage = vboxsf_readpage, 355 + .read_folio = vboxsf_read_folio, 357 356 .writepage = vboxsf_writepage, 358 357 .dirty_folio = filemap_dirty_folio, 359 358 .write_begin = simple_write_begin,
+14 -15
fs/verity/enable.c
··· 18 18 * Read a file data page for Merkle tree construction. Do aggressive readahead, 19 19 * since we're sequentially reading the entire file. 20 20 */ 21 - static struct page *read_file_data_page(struct file *filp, pgoff_t index, 21 + static struct page *read_file_data_page(struct file *file, pgoff_t index, 22 22 struct file_ra_state *ra, 23 23 unsigned long remaining_pages) 24 24 { 25 - struct page *page; 25 + DEFINE_READAHEAD(ractl, file, ra, file->f_mapping, index); 26 + struct folio *folio; 26 27 27 - page = find_get_page_flags(filp->f_mapping, index, FGP_ACCESSED); 28 - if (!page || !PageUptodate(page)) { 29 - if (page) 30 - put_page(page); 28 + folio = __filemap_get_folio(ractl.mapping, index, FGP_ACCESSED, 0); 29 + if (!folio || !folio_test_uptodate(folio)) { 30 + if (folio) 31 + folio_put(folio); 31 32 else 32 - page_cache_sync_readahead(filp->f_mapping, ra, filp, 33 - index, remaining_pages); 34 - page = read_mapping_page(filp->f_mapping, index, NULL); 35 - if (IS_ERR(page)) 36 - return page; 33 + page_cache_sync_ra(&ractl, remaining_pages); 34 + folio = read_cache_folio(ractl.mapping, index, NULL, file); 35 + if (IS_ERR(folio)) 36 + return &folio->page; 37 37 } 38 - if (PageReadahead(page)) 39 - page_cache_async_readahead(filp->f_mapping, ra, filp, page, 40 - index, remaining_pages); 41 - return page; 38 + if (folio_test_readahead(folio)) 39 + page_cache_async_ra(&ractl, folio, remaining_pages); 40 + return folio_file_page(folio, index); 42 41 } 43 42 44 43 static int build_merkle_tree_level(struct file *filp, unsigned int level,
+5 -5
fs/xfs/xfs_aops.c
··· 536 536 } 537 537 538 538 STATIC int 539 - xfs_vm_readpage( 539 + xfs_vm_read_folio( 540 540 struct file *unused, 541 - struct page *page) 541 + struct folio *folio) 542 542 { 543 - return iomap_readpage(page, &xfs_read_iomap_ops); 543 + return iomap_read_folio(folio, &xfs_read_iomap_ops); 544 544 } 545 545 546 546 STATIC void ··· 562 562 } 563 563 564 564 const struct address_space_operations xfs_address_space_operations = { 565 - .readpage = xfs_vm_readpage, 565 + .read_folio = xfs_vm_read_folio, 566 566 .readahead = xfs_vm_readahead, 567 567 .writepages = xfs_vm_writepages, 568 568 .dirty_folio = filemap_dirty_folio, 569 - .releasepage = iomap_releasepage, 569 + .release_folio = iomap_release_folio, 570 570 .invalidate_folio = iomap_invalidate_folio, 571 571 .bmap = xfs_vm_bmap, 572 572 .direct_IO = noop_direct_IO,
+4 -4
fs/zonefs/super.c
··· 162 162 .iomap_begin = zonefs_iomap_begin, 163 163 }; 164 164 165 - static int zonefs_readpage(struct file *unused, struct page *page) 165 + static int zonefs_read_folio(struct file *unused, struct folio *folio) 166 166 { 167 - return iomap_readpage(page, &zonefs_iomap_ops); 167 + return iomap_read_folio(folio, &zonefs_iomap_ops); 168 168 } 169 169 170 170 static void zonefs_readahead(struct readahead_control *rac) ··· 230 230 } 231 231 232 232 static const struct address_space_operations zonefs_file_aops = { 233 - .readpage = zonefs_readpage, 233 + .read_folio = zonefs_read_folio, 234 234 .readahead = zonefs_readahead, 235 235 .writepage = zonefs_writepage, 236 236 .writepages = zonefs_writepages, 237 237 .dirty_folio = filemap_dirty_folio, 238 - .releasepage = iomap_releasepage, 238 + .release_folio = iomap_release_folio, 239 239 .invalidate_folio = iomap_invalidate_folio, 240 240 .migratepage = iomap_migrate_page, 241 241 .is_partially_uptodate = iomap_is_partially_uptodate,
+7 -7
include/linux/buffer_head.h
··· 146 146 #define page_has_buffers(page) PagePrivate(page) 147 147 #define folio_buffers(folio) folio_get_private(folio) 148 148 149 - void buffer_check_dirty_writeback(struct page *page, 149 + void buffer_check_dirty_writeback(struct folio *folio, 150 150 bool *dirty, bool *writeback); 151 151 152 152 /* ··· 158 158 void touch_buffer(struct buffer_head *bh); 159 159 void set_bh_page(struct buffer_head *bh, 160 160 struct page *page, unsigned long offset); 161 - int try_to_free_buffers(struct page *); 161 + bool try_to_free_buffers(struct folio *); 162 162 struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size, 163 163 bool retry); 164 164 void create_empty_buffers(struct page *, unsigned long, ··· 223 223 int __block_write_full_page(struct inode *inode, struct page *page, 224 224 get_block_t *get_block, struct writeback_control *wbc, 225 225 bh_end_io_t *handler); 226 - int block_read_full_page(struct page*, get_block_t*); 226 + int block_read_full_folio(struct folio *, get_block_t *); 227 227 bool block_is_partially_uptodate(struct folio *, size_t from, size_t count); 228 228 int block_write_begin(struct address_space *mapping, loff_t pos, unsigned len, 229 - unsigned flags, struct page **pagep, get_block_t *get_block); 229 + struct page **pagep, get_block_t *get_block); 230 230 int __block_write_begin(struct page *page, loff_t pos, unsigned len, 231 231 get_block_t *get_block); 232 232 int block_write_end(struct file *, struct address_space *, ··· 238 238 void page_zero_new_buffers(struct page *page, unsigned from, unsigned to); 239 239 void clean_page_buffers(struct page *page); 240 240 int cont_write_begin(struct file *, struct address_space *, loff_t, 241 - unsigned, unsigned, struct page **, void **, 241 + unsigned, struct page **, void **, 242 242 get_block_t *, loff_t *); 243 243 int generic_cont_expand_simple(struct inode *inode, loff_t size); 244 244 int block_commit_write(struct page *page, unsigned from, unsigned to); ··· 258 
258 } 259 259 sector_t generic_block_bmap(struct address_space *, sector_t, get_block_t *); 260 260 int block_truncate_page(struct address_space *, loff_t, get_block_t *); 261 - int nobh_write_begin(struct address_space *, loff_t, unsigned, unsigned, 261 + int nobh_write_begin(struct address_space *, loff_t, unsigned len, 262 262 struct page **, void **, get_block_t*); 263 263 int nobh_write_end(struct file *, struct address_space *, 264 264 loff_t, unsigned, unsigned, ··· 402 402 #else /* CONFIG_BLOCK */ 403 403 404 404 static inline void buffer_init(void) {} 405 - static inline int try_to_free_buffers(struct page *page) { return 1; } 405 + static inline bool try_to_free_buffers(struct folio *folio) { return true; } 406 406 static inline int inode_has_buffers(struct inode *inode) { return 0; } 407 407 static inline void invalidate_inode_buffers(struct inode *inode) {} 408 408 static inline int remove_inode_buffers(struct inode *inode) { return 1; }
+7 -25
include/linux/fs.h
··· 262 262 * trying again. The aop will be taking reasonable 263 263 * precautions not to livelock. If the caller held a page 264 264 * reference, it should drop it before retrying. Returned 265 - * by readpage(). 265 + * by read_folio(). 266 266 * 267 267 * address_space_operation functions return these large constants to indicate 268 268 * special semantics to the caller. These are much larger than the bytes in a ··· 274 274 AOP_WRITEPAGE_ACTIVATE = 0x80000, 275 275 AOP_TRUNCATED_PAGE = 0x80001, 276 276 }; 277 - 278 - #define AOP_FLAG_NOFS 0x0002 /* used by filesystem to direct 279 - * helper code (eg buffer layer) 280 - * to clear GFP_FS from alloc */ 281 277 282 278 /* 283 279 * oh the beauties of C type declarations. ··· 335 339 336 340 struct address_space_operations { 337 341 int (*writepage)(struct page *page, struct writeback_control *wbc); 338 - int (*readpage)(struct file *, struct page *); 342 + int (*read_folio)(struct file *, struct folio *); 339 343 340 344 /* Write back some dirty pages from this mapping. */ 341 345 int (*writepages)(struct address_space *, struct writeback_control *); ··· 346 350 void (*readahead)(struct readahead_control *); 347 351 348 352 int (*write_begin)(struct file *, struct address_space *mapping, 349 - loff_t pos, unsigned len, unsigned flags, 353 + loff_t pos, unsigned len, 350 354 struct page **pagep, void **fsdata); 351 355 int (*write_end)(struct file *, struct address_space *mapping, 352 356 loff_t pos, unsigned len, unsigned copied, ··· 355 359 /* Unfortunately this kludge is needed for FIBMAP. 
Don't use it */ 356 360 sector_t (*bmap)(struct address_space *, sector_t); 357 361 void (*invalidate_folio) (struct folio *, size_t offset, size_t len); 358 - int (*releasepage) (struct page *, gfp_t); 359 - void (*freepage)(struct page *); 362 + bool (*release_folio)(struct folio *, gfp_t); 363 + void (*free_folio)(struct folio *folio); 360 364 ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter); 361 365 /* 362 366 * migrate the contents of a page to the specified target. If ··· 369 373 int (*launder_folio)(struct folio *); 370 374 bool (*is_partially_uptodate) (struct folio *, size_t from, 371 375 size_t count); 372 - void (*is_dirty_writeback) (struct page *, bool *, bool *); 376 + void (*is_dirty_writeback) (struct folio *, bool *dirty, bool *wb); 373 377 int (*error_remove_page)(struct address_space *, struct page *); 374 378 375 379 /* swapfile support */ ··· 379 383 }; 380 384 381 385 extern const struct address_space_operations empty_aops; 382 - 383 - /* 384 - * pagecache_write_begin/pagecache_write_end must be used by general code 385 - * to write into the pagecache. 386 - */ 387 - int pagecache_write_begin(struct file *, struct address_space *mapping, 388 - loff_t pos, unsigned len, unsigned flags, 389 - struct page **pagep, void **fsdata); 390 - 391 - int pagecache_write_end(struct file *, struct address_space *mapping, 392 - loff_t pos, unsigned len, unsigned copied, 393 - struct page *page, void *fsdata); 394 386 395 387 /** 396 388 * struct address_space - Contents of a cacheable, mappable object. 
··· 3100 3116 extern const char *page_get_link(struct dentry *, struct inode *, 3101 3117 struct delayed_call *); 3102 3118 extern void page_put_link(void *); 3103 - extern int __page_symlink(struct inode *inode, const char *symname, int len, 3104 - int nofs); 3105 3119 extern int page_symlink(struct inode *inode, const char *symname, int len); 3106 3120 extern const struct inode_operations page_symlink_inode_operations; 3107 3121 extern void kfree_link(void *); ··· 3174 3192 extern ssize_t noop_direct_IO(struct kiocb *iocb, struct iov_iter *iter); 3175 3193 extern int simple_empty(struct dentry *); 3176 3194 extern int simple_write_begin(struct file *file, struct address_space *mapping, 3177 - loff_t pos, unsigned len, unsigned flags, 3195 + loff_t pos, unsigned len, 3178 3196 struct page **pagep, void **fsdata); 3179 3197 extern const struct address_space_operations ram_aops; 3180 3198 extern int always_delete_dentry(const struct dentry *);
+2 -2
include/linux/iomap.h
··· 226 226 227 227 ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from, 228 228 const struct iomap_ops *ops); 229 - int iomap_readpage(struct page *page, const struct iomap_ops *ops); 229 + int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops); 230 230 void iomap_readahead(struct readahead_control *, const struct iomap_ops *ops); 231 231 bool iomap_is_partially_uptodate(struct folio *, size_t from, size_t count); 232 - int iomap_releasepage(struct page *page, gfp_t gfp_mask); 232 + bool iomap_release_folio(struct folio *folio, gfp_t gfp_flags); 233 233 void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len); 234 234 #ifdef CONFIG_MIGRATION 235 235 int iomap_migrate_page(struct address_space *mapping, struct page *newpage,
+1 -1
include/linux/jbd2.h
··· 1529 1529 extern int jbd2_journal_forget (handle_t *, struct buffer_head *); 1530 1530 int jbd2_journal_invalidate_folio(journal_t *, struct folio *, 1531 1531 size_t offset, size_t length); 1532 - extern int jbd2_journal_try_to_free_buffers(journal_t *journal, struct page *page); 1532 + bool jbd2_journal_try_to_free_buffers(journal_t *journal, struct folio *folio); 1533 1533 extern int jbd2_journal_stop(handle_t *); 1534 1534 extern int jbd2_journal_flush(journal_t *journal, unsigned int flags); 1535 1535 extern void jbd2_journal_lock_updates (journal_t *);
+1 -1
include/linux/mpage.h
··· 16 16 struct readahead_control; 17 17 18 18 void mpage_readahead(struct readahead_control *, get_block_t get_block); 19 - int mpage_readpage(struct page *page, get_block_t get_block); 19 + int mpage_read_folio(struct folio *folio, get_block_t get_block); 20 20 int mpage_writepages(struct address_space *mapping, 21 21 struct writeback_control *wbc, get_block_t get_block); 22 22 int mpage_writepage(struct page *page, get_block_t *get_block,
+2 -2
include/linux/netfs.h
··· 275 275 276 276 struct readahead_control; 277 277 extern void netfs_readahead(struct readahead_control *); 278 - extern int netfs_readpage(struct file *, struct page *); 278 + int netfs_read_folio(struct file *, struct folio *); 279 279 extern int netfs_write_begin(struct file *, struct address_space *, 280 - loff_t, unsigned int, unsigned int, struct folio **, 280 + loff_t, unsigned int, struct folio **, 281 281 void **); 282 282 283 283 extern void netfs_subreq_terminated(struct netfs_io_subrequest *, ssize_t, bool);
+1 -1
include/linux/nfs_fs.h
··· 594 594 /* 595 595 * linux/fs/nfs/read.c 596 596 */ 597 - extern int nfs_readpage(struct file *, struct page *); 597 + int nfs_read_folio(struct file *, struct folio *); 598 598 void nfs_readahead(struct readahead_control *); 599 599 600 600 /*
+1 -1
include/linux/page-flags.h
··· 516 516 /* 517 517 * Private page markings that may be used by the filesystem that owns the page 518 518 * for its own purposes. 519 - * - PG_private and PG_private_2 cause releasepage() and co to be invoked 519 + * - PG_private and PG_private_2 cause release_folio() and co to be invoked 520 520 */ 521 521 PAGEFLAG(Private, private, PF_ANY) 522 522 PAGEFLAG(Private2, private_2, PF_ANY) TESTSCFLAG(Private2, private_2, PF_ANY)
+63 -15
include/linux/pagemap.h
··· 492 492 return mapping_gfp_mask(x) | __GFP_NORETRY | __GFP_NOWARN; 493 493 } 494 494 495 - typedef int filler_t(void *, struct page *); 495 + typedef int filler_t(struct file *, struct folio *); 496 496 497 497 pgoff_t page_cache_next_miss(struct address_space *mapping, 498 498 pgoff_t index, unsigned long max_scan); ··· 735 735 } 736 736 737 737 struct page *grab_cache_page_write_begin(struct address_space *mapping, 738 - pgoff_t index, unsigned flags); 738 + pgoff_t index); 739 739 740 740 /* 741 741 * Returns locked page at given index in given cache, creating it if needed. ··· 747 747 } 748 748 749 749 struct folio *read_cache_folio(struct address_space *, pgoff_t index, 750 - filler_t *filler, void *data); 750 + filler_t *filler, struct file *file); 751 751 struct page *read_cache_page(struct address_space *, pgoff_t index, 752 - filler_t *filler, void *data); 752 + filler_t *filler, struct file *file); 753 753 extern struct page * read_cache_page_gfp(struct address_space *mapping, 754 754 pgoff_t index, gfp_t gfp_mask); 755 755 ··· 888 888 void unlock_page(struct page *page); 889 889 void folio_unlock(struct folio *folio); 890 890 891 + /** 892 + * folio_trylock() - Attempt to lock a folio. 893 + * @folio: The folio to attempt to lock. 894 + * 895 + * Sometimes it is undesirable to wait for a folio to be unlocked (eg 896 + * when the locks are being taken in the wrong order, or if making 897 + * progress through a batch of folios is more important than processing 898 + * them in order). Usually folio_lock() is the correct function to call. 899 + * 900 + * Context: Any context. 901 + * Return: Whether the lock was successfully acquired. 902 + */ 891 903 static inline bool folio_trylock(struct folio *folio) 892 904 { 893 905 return likely(!test_and_set_bit_lock(PG_locked, folio_flags(folio, 0))); ··· 913 901 return folio_trylock(page_folio(page)); 914 902 } 915 903 904 + /** 905 + * folio_lock() - Lock this folio. 906 + * @folio: The folio to lock. 
907 + * 908 + * The folio lock protects against many things, probably more than it 909 + * should. It is primarily held while a folio is being brought uptodate, 910 + * either from its backing file or from swap. It is also held while a 911 + * folio is being truncated from its address_space, so holding the lock 912 + * is sufficient to keep folio->mapping stable. 913 + * 914 + * The folio lock is also held while write() is modifying the page to 915 + * provide POSIX atomicity guarantees (as long as the write does not 916 + * cross a page boundary). Other modifications to the data in the folio 917 + * do not hold the folio lock and can race with writes, eg DMA and stores 918 + * to mapped pages. 919 + * 920 + * Context: May sleep. If you need to acquire the locks of two or 921 + * more folios, they must be in order of ascending index, if they are 922 + * in the same address_space. If they are in different address_spaces, 923 + * acquire the lock of the folio which belongs to the address_space which 924 + * has the lowest address in memory first. 925 + */ 916 926 static inline void folio_lock(struct folio *folio) 917 927 { 918 928 might_sleep(); ··· 942 908 __folio_lock(folio); 943 909 } 944 910 945 - /* 946 - * lock_page may only be called if we have the page's inode pinned. 911 + /** 912 + * lock_page() - Lock the folio containing this page. 913 + * @page: The page to lock. 914 + * 915 + * See folio_lock() for a description of what the lock protects. 916 + * This is a legacy function and new code should probably use folio_lock() 917 + * instead. 918 + * 919 + * Context: May sleep. Pages in the same folio share a lock, so do not 920 + * attempt to lock two pages which share a folio. 947 921 */ 948 922 static inline void lock_page(struct page *page) 949 923 { ··· 963 921 __folio_lock(folio); 964 922 } 965 923 924 + /** 925 + * folio_lock_killable() - Lock this folio, interruptible by a fatal signal. 926 + * @folio: The folio to lock. 
927 + * 928 + * Attempts to lock the folio, like folio_lock(), except that the sleep 929 + * to acquire the lock is interruptible by a fatal signal. 930 + * 931 + * Context: May sleep; see folio_lock(). 932 + * Return: 0 if the lock was acquired; -EINTR if a fatal signal was received. 933 + */ 966 934 static inline int folio_lock_killable(struct folio *folio) 967 935 { 968 936 might_sleep(); ··· 1019 967 * Wait for a folio to be unlocked. 1020 968 * 1021 969 * This must be called with the caller "holding" the folio, 1022 - * ie with increased "page->count" so that the folio won't 1023 - * go away during the wait.. 970 + * ie with increased folio reference count so that the folio won't 971 + * go away during the wait. 1024 972 */ 1025 973 static inline void folio_wait_locked(struct folio *folio) 1026 974 { ··· 1066 1014 /* Avoid atomic ops, locking, etc. when not actually needed. */ 1067 1015 if (folio_test_dirty(folio)) 1068 1016 __folio_cancel_dirty(folio); 1069 - } 1070 - static inline void cancel_dirty_page(struct page *page) 1071 - { 1072 - folio_cancel_dirty(page_folio(page)); 1073 1017 } 1074 1018 bool folio_clear_dirty_for_io(struct folio *folio); 1075 1019 bool clear_page_dirty_for_io(struct page *page); ··· 1239 1191 * @mapping: address_space which holds the pagecache and I/O vectors 1240 1192 * @ra: file_ra_state which holds the readahead state 1241 1193 * @file: Used by the filesystem for authentication. 1242 - * @page: The page at @index which triggered the readahead call. 1194 + * @folio: The folio at @index which triggered the readahead call. 1243 1195 * @index: Index of first page to be read. 1244 1196 * @req_count: Total number of pages being read by the caller. 
1245 1197 * ··· 1251 1203 static inline 1252 1204 void page_cache_async_readahead(struct address_space *mapping, 1253 1205 struct file_ra_state *ra, struct file *file, 1254 - struct page *page, pgoff_t index, unsigned long req_count) 1206 + struct folio *folio, pgoff_t index, unsigned long req_count) 1255 1207 { 1256 1208 DEFINE_READAHEAD(ractl, file, ra, mapping, index); 1257 - page_cache_async_ra(&ractl, page_folio(page), req_count); 1209 + page_cache_async_ra(&ractl, folio, req_count); 1258 1210 } 1259 1211 1260 1212 static inline struct folio *__readahead_folio(struct readahead_control *ractl)
+8 -13
include/trace/events/ext4.h
··· 335 335 336 336 DECLARE_EVENT_CLASS(ext4__write_begin, 337 337 338 - TP_PROTO(struct inode *inode, loff_t pos, unsigned int len, 339 - unsigned int flags), 338 + TP_PROTO(struct inode *inode, loff_t pos, unsigned int len), 340 339 341 - TP_ARGS(inode, pos, len, flags), 340 + TP_ARGS(inode, pos, len), 342 341 343 342 TP_STRUCT__entry( 344 343 __field( dev_t, dev ) 345 344 __field( ino_t, ino ) 346 345 __field( loff_t, pos ) 347 346 __field( unsigned int, len ) 348 - __field( unsigned int, flags ) 349 347 ), 350 348 351 349 TP_fast_assign( ··· 351 353 __entry->ino = inode->i_ino; 352 354 __entry->pos = pos; 353 355 __entry->len = len; 354 - __entry->flags = flags; 355 356 ), 356 357 357 - TP_printk("dev %d,%d ino %lu pos %lld len %u flags %u", 358 + TP_printk("dev %d,%d ino %lu pos %lld len %u", 358 359 MAJOR(__entry->dev), MINOR(__entry->dev), 359 360 (unsigned long) __entry->ino, 360 - __entry->pos, __entry->len, __entry->flags) 361 + __entry->pos, __entry->len) 361 362 ); 362 363 363 364 DEFINE_EVENT(ext4__write_begin, ext4_write_begin, 364 365 365 - TP_PROTO(struct inode *inode, loff_t pos, unsigned int len, 366 - unsigned int flags), 366 + TP_PROTO(struct inode *inode, loff_t pos, unsigned int len), 367 367 368 - TP_ARGS(inode, pos, len, flags) 368 + TP_ARGS(inode, pos, len) 369 369 ); 370 370 371 371 DEFINE_EVENT(ext4__write_begin, ext4_da_write_begin, 372 372 373 - TP_PROTO(struct inode *inode, loff_t pos, unsigned int len, 374 - unsigned int flags), 373 + TP_PROTO(struct inode *inode, loff_t pos, unsigned int len), 375 374 376 - TP_ARGS(inode, pos, len, flags) 375 + TP_ARGS(inode, pos, len) 377 376 ); 378 377 379 378 DECLARE_EVENT_CLASS(ext4__write_end,
+4 -8
include/trace/events/f2fs.h
··· 1159 1159 1160 1160 TRACE_EVENT(f2fs_write_begin, 1161 1161 1162 - TP_PROTO(struct inode *inode, loff_t pos, unsigned int len, 1163 - unsigned int flags), 1162 + TP_PROTO(struct inode *inode, loff_t pos, unsigned int len), 1164 1163 1165 - TP_ARGS(inode, pos, len, flags), 1164 + TP_ARGS(inode, pos, len), 1166 1165 1167 1166 TP_STRUCT__entry( 1168 1167 __field(dev_t, dev) 1169 1168 __field(ino_t, ino) 1170 1169 __field(loff_t, pos) 1171 1170 __field(unsigned int, len) 1172 - __field(unsigned int, flags) 1173 1171 ), 1174 1172 1175 1173 TP_fast_assign( ··· 1175 1177 __entry->ino = inode->i_ino; 1176 1178 __entry->pos = pos; 1177 1179 __entry->len = len; 1178 - __entry->flags = flags; 1179 1180 ), 1180 1181 1181 - TP_printk("dev = (%d,%d), ino = %lu, pos = %llu, len = %u, flags = %u", 1182 + TP_printk("dev = (%d,%d), ino = %lu, pos = %llu, len = %u", 1182 1183 show_dev_ino(__entry), 1183 1184 (unsigned long long)__entry->pos, 1184 - __entry->len, 1185 - __entry->flags) 1185 + __entry->len) 1186 1186 ); 1187 1187 1188 1188 TRACE_EVENT(f2fs_write_end,
+4 -3
kernel/events/uprobes.c
··· 787 787 struct page *page; 788 788 /* 789 789 * Ensure that the page that has the original instruction is populated 790 - * and in page-cache. If ->readpage == NULL it must be shmem_mapping(), 790 + * and in page-cache. If ->read_folio == NULL it must be shmem_mapping(), 791 791 * see uprobe_register(). 792 792 */ 793 - if (mapping->a_ops->readpage) 793 + if (mapping->a_ops->read_folio) 794 794 page = read_mapping_page(mapping, offset >> PAGE_SHIFT, filp); 795 795 else 796 796 page = shmem_read_mapping_page(mapping, offset >> PAGE_SHIFT); ··· 1143 1143 return -EINVAL; 1144 1144 1145 1145 /* copy_insn() uses read_mapping_page() or shmem_read_mapping_page() */ 1146 - if (!inode->i_mapping->a_ops->readpage && !shmem_mapping(inode->i_mapping)) 1146 + if (!inode->i_mapping->a_ops->read_folio && 1147 + !shmem_mapping(inode->i_mapping)) 1147 1148 return -EIO; 1148 1149 /* Racy, just to catch the obvious mistakes */ 1149 1150 if (offset > i_size_read(inode))
+38 -61
mm/filemap.c
··· 225 225 226 226 void filemap_free_folio(struct address_space *mapping, struct folio *folio) 227 227 { 228 - void (*freepage)(struct page *); 228 + void (*free_folio)(struct folio *); 229 229 int refs = 1; 230 230 231 - freepage = mapping->a_ops->freepage; 232 - if (freepage) 233 - freepage(&folio->page); 231 + free_folio = mapping->a_ops->free_folio; 232 + if (free_folio) 233 + free_folio(folio); 234 234 235 235 if (folio_test_large(folio) && !folio_test_hugetlb(folio)) 236 236 refs = folio_nr_pages(folio); ··· 807 807 struct folio *fold = page_folio(old); 808 808 struct folio *fnew = page_folio(new); 809 809 struct address_space *mapping = old->mapping; 810 - void (*freepage)(struct page *) = mapping->a_ops->freepage; 810 + void (*free_folio)(struct folio *) = mapping->a_ops->free_folio; 811 811 pgoff_t offset = old->index; 812 812 XA_STATE(xas, &mapping->i_pages, offset); 813 813 ··· 835 835 if (PageSwapBacked(new)) 836 836 __inc_lruvec_page_state(new, NR_SHMEM); 837 837 xas_unlock_irq(&xas); 838 - if (freepage) 839 - freepage(old); 840 - put_page(old); 838 + if (free_folio) 839 + free_folio(fold); 840 + folio_put(fold); 841 841 } 842 842 EXPORT_SYMBOL_GPL(replace_page_cache_page); 843 843 ··· 2414 2414 2415 2415 /* 2416 2416 * A previous I/O error may have been due to temporary failures, 2417 - * eg. multipath errors. PG_error will be set again if readpage 2417 + * eg. multipath errors. PG_error will be set again if read_folio 2418 2418 * fails. 2419 2419 */ 2420 2420 folio_clear_error(folio); 2421 2421 /* Start the actual read. The read will unlock the page. */ 2422 - error = mapping->a_ops->readpage(file, &folio->page); 2422 + error = mapping->a_ops->read_folio(file, folio); 2423 2423 if (error) 2424 2424 return error; 2425 2425 ··· 2636 2636 * @already_read: Number of bytes already read by the caller. 2637 2637 * 2638 2638 * Copies data from the page cache. 
If the data is not currently present, 2639 - * uses the readahead and readpage address_space operations to fetch it. 2639 + * uses the readahead and read_folio address_space operations to fetch it. 2640 2640 * 2641 2641 * Return: Total number of bytes copied, including those already read by 2642 2642 * the caller. If an error happens before any bytes are copied, returns ··· 3447 3447 { 3448 3448 struct address_space *mapping = file->f_mapping; 3449 3449 3450 - if (!mapping->a_ops->readpage) 3450 + if (!mapping->a_ops->read_folio) 3451 3451 return -ENOEXEC; 3452 3452 file_accessed(file); 3453 3453 vma->vm_ops = &generic_file_vm_ops; ··· 3483 3483 EXPORT_SYMBOL(generic_file_readonly_mmap); 3484 3484 3485 3485 static struct folio *do_read_cache_folio(struct address_space *mapping, 3486 - pgoff_t index, filler_t filler, void *data, gfp_t gfp) 3486 + pgoff_t index, filler_t filler, struct file *file, gfp_t gfp) 3487 3487 { 3488 3488 struct folio *folio; 3489 3489 int err; 3490 + 3491 + if (!filler) 3492 + filler = mapping->a_ops->read_folio; 3490 3493 repeat: 3491 3494 folio = filemap_get_folio(mapping, index); 3492 3495 if (!folio) { ··· 3506 3503 } 3507 3504 3508 3505 filler: 3509 - if (filler) 3510 - err = filler(data, &folio->page); 3511 - else 3512 - err = mapping->a_ops->readpage(data, &folio->page); 3513 - 3506 + err = filler(file, folio); 3514 3507 if (err < 0) { 3515 3508 folio_put(folio); 3516 3509 return ERR_PTR(err); ··· 3556 3557 } 3557 3558 3558 3559 /** 3559 - * read_cache_folio - read into page cache, fill it if needed 3560 - * @mapping: the page's address_space 3561 - * @index: the page index 3562 - * @filler: function to perform the read 3563 - * @data: first arg to filler(data, page) function, often left as NULL 3560 + * read_cache_folio - Read into page cache, fill it if needed. 3561 + * @mapping: The address_space to read from. 3562 + * @index: The index to read. 3563 + * @filler: Function to perform the read, or NULL to use aops->read_folio(). 
3564 + * @file: Passed to filler function, may be NULL if not required. 3564 3565 * 3565 - * Read into the page cache. If a page already exists, and PageUptodate() is 3566 - * not set, try to fill the page and wait for it to become unlocked. 3566 + * Read one page into the page cache. If it succeeds, the folio returned 3567 + * will contain @index, but it may not be the first page of the folio. 3567 3568 * 3568 - * If the page does not get brought uptodate, return -EIO. 3569 + * If the filler function returns an error, it will be returned to the 3570 + * caller. 3569 3571 * 3570 - * The function expects mapping->invalidate_lock to be already held. 3571 - * 3572 - * Return: up to date page on success, ERR_PTR() on failure. 3572 + * Context: May sleep. Expects mapping->invalidate_lock to be held. 3573 + * Return: An uptodate folio on success, ERR_PTR() on failure. 3573 3574 */ 3574 3575 struct folio *read_cache_folio(struct address_space *mapping, pgoff_t index, 3575 - filler_t filler, void *data) 3576 + filler_t filler, struct file *file) 3576 3577 { 3577 - return do_read_cache_folio(mapping, index, filler, data, 3578 + return do_read_cache_folio(mapping, index, filler, file, 3578 3579 mapping_gfp_mask(mapping)); 3579 3580 } 3580 3581 EXPORT_SYMBOL(read_cache_folio); 3581 3582 3582 3583 static struct page *do_read_cache_page(struct address_space *mapping, 3583 - pgoff_t index, filler_t *filler, void *data, gfp_t gfp) 3584 + pgoff_t index, filler_t *filler, struct file *file, gfp_t gfp) 3584 3585 { 3585 3586 struct folio *folio; 3586 3587 3587 - folio = do_read_cache_folio(mapping, index, filler, data, gfp); 3588 + folio = do_read_cache_folio(mapping, index, filler, file, gfp); 3588 3589 if (IS_ERR(folio)) 3589 3590 return &folio->page; 3590 3591 return folio_file_page(folio, index); 3591 3592 } 3592 3593 3593 3594 struct page *read_cache_page(struct address_space *mapping, 3594 - pgoff_t index, filler_t *filler, void *data) 3595 + pgoff_t index, filler_t *filler, 
struct file *file) 3595 3596 { 3596 - return do_read_cache_page(mapping, index, filler, data, 3597 + return do_read_cache_page(mapping, index, filler, file, 3597 3598 mapping_gfp_mask(mapping)); 3598 3599 } 3599 3600 EXPORT_SYMBOL(read_cache_page); ··· 3620 3621 return do_read_cache_page(mapping, index, NULL, NULL, gfp); 3621 3622 } 3622 3623 EXPORT_SYMBOL(read_cache_page_gfp); 3623 - 3624 - int pagecache_write_begin(struct file *file, struct address_space *mapping, 3625 - loff_t pos, unsigned len, unsigned flags, 3626 - struct page **pagep, void **fsdata) 3627 - { 3628 - const struct address_space_operations *aops = mapping->a_ops; 3629 - 3630 - return aops->write_begin(file, mapping, pos, len, flags, 3631 - pagep, fsdata); 3632 - } 3633 - EXPORT_SYMBOL(pagecache_write_begin); 3634 - 3635 - int pagecache_write_end(struct file *file, struct address_space *mapping, 3636 - loff_t pos, unsigned len, unsigned copied, 3637 - struct page *page, void *fsdata) 3638 - { 3639 - const struct address_space_operations *aops = mapping->a_ops; 3640 - 3641 - return aops->write_end(file, mapping, pos, len, copied, page, fsdata); 3642 - } 3643 - EXPORT_SYMBOL(pagecache_write_end); 3644 3624 3645 3625 /* 3646 3626 * Warn about a page cache invalidation failure during a direct I/O write. 
··· 3732 3754 const struct address_space_operations *a_ops = mapping->a_ops; 3733 3755 long status = 0; 3734 3756 ssize_t written = 0; 3735 - unsigned int flags = 0; 3736 3757 3737 3758 do { 3738 3759 struct page *page; ··· 3761 3784 break; 3762 3785 } 3763 3786 3764 - status = a_ops->write_begin(file, mapping, pos, bytes, flags, 3787 + status = a_ops->write_begin(file, mapping, pos, bytes, 3765 3788 &page, &fsdata); 3766 3789 if (unlikely(status < 0)) 3767 3790 break; ··· 3955 3978 if (folio_test_writeback(folio)) 3956 3979 return false; 3957 3980 3958 - if (mapping && mapping->a_ops->releasepage) 3959 - return mapping->a_ops->releasepage(&folio->page, gfp); 3960 - return try_to_free_buffers(&folio->page); 3981 + if (mapping && mapping->a_ops->release_folio) 3982 + return mapping->a_ops->release_folio(folio, gfp); 3983 + return try_to_free_buffers(folio); 3961 3984 } 3962 3985 EXPORT_SYMBOL(filemap_release_folio);
+1 -3
mm/folio-compat.c
··· 131 131 EXPORT_SYMBOL(pagecache_get_page); 132 132 133 133 struct page *grab_cache_page_write_begin(struct address_space *mapping, 134 - pgoff_t index, unsigned flags) 134 + pgoff_t index) 135 135 { 136 136 unsigned fgp_flags = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE; 137 137 138 - if (flags & AOP_FLAG_NOFS) 139 - fgp_flags |= FGP_NOFS; 140 138 return pagecache_get_page(mapping, index, fgp_flags, 141 139 mapping_gfp_mask(mapping)); 142 140 }
+2 -2
mm/memory.c
··· 555 555 dump_page(page, "bad pte"); 556 556 pr_alert("addr:%px vm_flags:%08lx anon_vma:%px mapping:%px index:%lx\n", 557 557 (void *)addr, vma->vm_flags, vma->anon_vma, mapping, index); 558 - pr_alert("file:%pD fault:%ps mmap:%ps readpage:%ps\n", 558 + pr_alert("file:%pD fault:%ps mmap:%ps read_folio:%ps\n", 559 559 vma->vm_file, 560 560 vma->vm_ops ? vma->vm_ops->fault : NULL, 561 561 vma->vm_file ? vma->vm_file->f_op->mmap : NULL, 562 - mapping ? mapping->a_ops->readpage : NULL); 562 + mapping ? mapping->a_ops->read_folio : NULL); 563 563 dump_stack(); 564 564 add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); 565 565 }
+1 -1
mm/migrate.c
··· 1013 1013 if (!page->mapping) { 1014 1014 VM_BUG_ON_PAGE(PageAnon(page), page); 1015 1015 if (page_has_private(page)) { 1016 - try_to_free_buffers(page); 1016 + try_to_free_buffers(folio); 1017 1017 goto out_unlock_both; 1018 1018 } 1019 1019 } else if (page_mapped(page)) {
+6 -4
mm/page-writeback.c
··· 2602 2602 * folio_mark_dirty - Mark a folio as being modified. 2603 2603 * @folio: The folio. 2604 2604 * 2605 - * For folios with a mapping this should be done with the folio lock held 2606 - * for the benefit of asynchronous memory errors who prefer a consistent 2607 - * dirty state. This rule can be broken in some special cases, 2608 - * but should be better not to. 2605 + * The folio may not be truncated while this function is running. 2606 + * Holding the folio lock is sufficient to prevent truncation, but some 2607 + * callers cannot acquire a sleeping lock. These callers instead hold 2608 + * the page table lock for a page table which contains at least one page 2609 + * in this folio. Truncation will block on the page table lock as it 2610 + * unmaps pages before removing the folio from its mapping. 2609 2611 * 2610 2612 * Return: True if the folio was newly dirtied, false if it was already dirty. 2611 2613 */
+1 -1
mm/page_io.c
··· 336 336 struct file *swap_file = sis->swap_file; 337 337 struct address_space *mapping = swap_file->f_mapping; 338 338 339 - ret = mapping->a_ops->readpage(swap_file, page); 339 + ret = mapping->a_ops->read_folio(swap_file, page_folio(page)); 340 340 if (!ret) 341 341 count_vm_event(PSWPIN); 342 342 goto out;
+18 -19
mm/readahead.c
··· 15 15 * explicitly requested by the application. Readahead only ever 16 16 * attempts to read folios that are not yet in the page cache. If a 17 17 * folio is present but not up-to-date, readahead will not try to read 18 - * it. In that case a simple ->readpage() will be requested. 18 + * it. In that case a simple ->read_folio() will be requested. 19 19 * 20 20 * Readahead is triggered when an application read request (whether a 21 21 * system call or a page fault) finds that the requested folio is not in ··· 78 78 * address space operation, for which mpage_readahead() is a canonical 79 79 * implementation. ->readahead() should normally initiate reads on all 80 80 * folios, but may fail to read any or all folios without causing an I/O 81 - * error. The page cache reading code will issue a ->readpage() request 81 + * error. The page cache reading code will issue a ->read_folio() request 82 82 * for any folio which ->readahead() did not read, and only an error 83 83 * from this will be final. 84 84 * ··· 110 110 * were not fetched with readahead_folio(). This will allow a 111 111 * subsequent synchronous readahead request to try them again. If they 112 112 * are left in the page cache, then they will be read individually using 113 - * ->readpage() which may be less efficient. 113 + * ->read_folio() which may be less efficient. 114 114 */ 115 115 116 116 #include <linux/blkdev.h> ··· 146 146 static void read_pages(struct readahead_control *rac) 147 147 { 148 148 const struct address_space_operations *aops = rac->mapping->a_ops; 149 - struct page *page; 149 + struct folio *folio; 150 150 struct blk_plug plug; 151 151 152 152 if (!readahead_count(rac)) ··· 157 157 if (aops->readahead) { 158 158 aops->readahead(rac); 159 159 /* 160 - * Clean up the remaining pages. The sizes in ->ra 160 + * Clean up the remaining folios. The sizes in ->ra 161 161 * may be used to size the next readahead, so make sure 162 162 * they accurately reflect what happened. 
163 163 */ 164 - while ((page = readahead_page(rac))) { 165 - rac->ra->size -= 1; 166 - if (rac->ra->async_size > 0) { 167 - rac->ra->async_size -= 1; 168 - delete_from_page_cache(page); 164 + while ((folio = readahead_folio(rac)) != NULL) { 165 + unsigned long nr = folio_nr_pages(folio); 166 + 167 + rac->ra->size -= nr; 168 + if (rac->ra->async_size >= nr) { 169 + rac->ra->async_size -= nr; 170 + filemap_remove_folio(folio); 169 171 } 170 - unlock_page(page); 171 - put_page(page); 172 + folio_unlock(folio); 172 173 } 173 174 } else { 174 - while ((page = readahead_page(rac))) { 175 - aops->readpage(rac->file, page); 176 - put_page(page); 177 - } 175 + while ((folio = readahead_folio(rac)) != NULL) 176 + aops->read_folio(rac->file, folio); 178 177 } 179 178 180 179 blk_finish_plug(&plug); ··· 254 255 } 255 256 256 257 /* 257 - * Now start the IO. We ignore I/O errors - if the page is not 258 - * uptodate then the caller will launch readpage again, and 258 + * Now start the IO. We ignore I/O errors - if the folio is not 259 + * uptodate then the caller will launch read_folio again, and 259 260 * will then handle the error. 260 261 */ 261 262 read_pages(ractl); ··· 303 304 struct backing_dev_info *bdi = inode_to_bdi(mapping->host); 304 305 unsigned long max_pages, index; 305 306 306 - if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readahead)) 307 + if (unlikely(!mapping->a_ops->read_folio && !mapping->a_ops->readahead)) 307 308 return; 308 309 309 310 /*
+4 -4
mm/secretmem.c
··· 145 145 return -EBUSY; 146 146 } 147 147 148 - static void secretmem_freepage(struct page *page) 148 + static void secretmem_free_folio(struct folio *folio) 149 149 { 150 - set_direct_map_default_noflush(page); 151 - clear_highpage(page); 150 + set_direct_map_default_noflush(&folio->page); 151 + folio_zero_segment(folio, 0, folio_size(folio)); 152 152 } 153 153 154 154 const struct address_space_operations secretmem_aops = { 155 155 .dirty_folio = noop_dirty_folio, 156 - .freepage = secretmem_freepage, 156 + .free_folio = secretmem_free_folio, 157 157 .migratepage = secretmem_migratepage, 158 158 .isolate_page = secretmem_isolate_page, 159 159 };
+2 -2
mm/shmem.c
··· 2426 2426 2427 2427 static int 2428 2428 shmem_write_begin(struct file *file, struct address_space *mapping, 2429 - loff_t pos, unsigned len, unsigned flags, 2429 + loff_t pos, unsigned len, 2430 2430 struct page **pagep, void **fsdata) 2431 2431 { 2432 2432 struct inode *inode = mapping->host; ··· 4162 4162 * 4163 4163 * This behaves as a tmpfs "read_cache_page_gfp(mapping, index, gfp)", 4164 4164 * with any new page allocations done using the specified allocation flags. 4165 - * But read_cache_page_gfp() uses the ->readpage() method: which does not 4165 + * But read_cache_page_gfp() uses the ->read_folio() method: which does not 4166 4166 * suit tmpfs, since it may have pages in swapcache, and needs to find those 4167 4167 * for itself; although drivers/gpu/drm i915 and ttm rely upon this support. 4168 4168 *
+1 -1
mm/swapfile.c
··· 3028 3028 /* 3029 3029 * Read the swap header. 3030 3030 */ 3031 - if (!mapping->a_ops->readpage) { 3031 + if (!mapping->a_ops->read_folio) { 3032 3032 error = -EINVAL; 3033 3033 goto bad_swap_unlock_inode; 3034 3034 }
+6 -6
mm/vmscan.c
··· 1181 1181 * folio->mapping == NULL while being dirty with clean buffers. 1182 1182 */ 1183 1183 if (folio_test_private(folio)) { 1184 - if (try_to_free_buffers(&folio->page)) { 1184 + if (try_to_free_buffers(folio)) { 1185 1185 folio_clear_dirty(folio); 1186 1186 pr_info("%s: orphaned folio\n", __func__); 1187 1187 return PAGE_CLEAN; ··· 1282 1282 xa_unlock_irq(&mapping->i_pages); 1283 1283 put_swap_page(&folio->page, swap); 1284 1284 } else { 1285 - void (*freepage)(struct page *); 1285 + void (*free_folio)(struct folio *); 1286 1286 1287 - freepage = mapping->a_ops->freepage; 1287 + free_folio = mapping->a_ops->free_folio; 1288 1288 /* 1289 1289 * Remember a shadow entry for reclaimed file cache in 1290 1290 * order to detect refaults, thus thrashing, later on. ··· 1310 1310 inode_add_lru(mapping->host); 1311 1311 spin_unlock(&mapping->host->i_lock); 1312 1312 1313 - if (freepage != NULL) 1314 - freepage(&folio->page); 1313 + if (free_folio) 1314 + free_folio(folio); 1315 1315 } 1316 1316 1317 1317 return 1; ··· 1451 1451 1452 1452 mapping = folio_mapping(folio); 1453 1453 if (mapping && mapping->a_ops->is_dirty_writeback) 1454 - mapping->a_ops->is_dirty_writeback(&folio->page, dirty, writeback); 1454 + mapping->a_ops->is_dirty_writeback(folio, dirty, writeback); 1455 1455 } 1456 1456 1457 1457 static struct page *alloc_demote_page(struct page *page, unsigned long node)