Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: change invalidatepage prototype to accept length

Currently there is no way to truncate a partial page where the end
truncation point is not at the end of the page. This is because it was
not needed before; the existing functionality was sufficient for
filesystem truncate operations to work properly. However, more
filesystems now support the punch hole feature, which can benefit from
mm support for truncating a page only up to a certain point.

Specifically, with this functionality truncate_inode_pages_range() can
be changed so that it supports truncating a partial page at the end of
the range (currently it will BUG_ON() if 'end' is not at the end of the
page).

This commit changes the invalidatepage() address space operation
prototype to accept the range to be invalidated, and updates all of its
instances.

We also change block_invalidatepage() in the same way and actually make
use of the new length argument, implementing range invalidation.

Actual filesystem implementations will follow, except for filesystems
where the changes are really simple and should not change the behaviour
in any way. An implementation of truncate_page_range(), which will be
able to accept page-unaligned ranges, will follow as well.

Signed-off-by: Lukas Czerner <lczerner@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Hugh Dickins <hughd@google.com>

Authored by Lukas Czerner; committed by Theodore Ts'o.
d47992f8 c7788792

+116 -69
+3 -3
Documentation/filesystems/Locking
··· 189 189 loff_t pos, unsigned len, unsigned copied, 190 190 struct page *page, void *fsdata); 191 191 sector_t (*bmap)(struct address_space *, sector_t); 192 - int (*invalidatepage) (struct page *, unsigned long); 192 + void (*invalidatepage) (struct page *, unsigned int, unsigned int); 193 193 int (*releasepage) (struct page *, int); 194 194 void (*freepage)(struct page *); 195 195 int (*direct_IO)(int, struct kiocb *, const struct iovec *iov, ··· 310 310 keep it that way and don't breed new callers. 311 311 312 312 ->invalidatepage() is called when the filesystem must attempt to drop 313 - some or all of the buffers from the page when it is being truncated. It 314 - returns zero on success. If ->invalidatepage is zero, the kernel uses 313 + some or all of the buffers from the page when it is being truncated. It 314 + returns zero on success. If ->invalidatepage is zero, the kernel uses 315 315 block_invalidatepage() instead. 316 316 317 317 ->releasepage() is called when the kernel is about to try to drop the
+10 -10
Documentation/filesystems/vfs.txt
··· 549 549 ------------------------------- 550 550 551 551 This describes how the VFS can manipulate mapping of a file to page cache in 552 - your filesystem. As of kernel 2.6.22, the following members are defined: 552 + your filesystem. The following members are defined: 553 553 554 554 struct address_space_operations { 555 555 int (*writepage)(struct page *page, struct writeback_control *wbc); ··· 566 566 loff_t pos, unsigned len, unsigned copied, 567 567 struct page *page, void *fsdata); 568 568 sector_t (*bmap)(struct address_space *, sector_t); 569 - int (*invalidatepage) (struct page *, unsigned long); 569 + void (*invalidatepage) (struct page *, unsigned int, unsigned int); 570 570 int (*releasepage) (struct page *, int); 571 571 void (*freepage)(struct page *); 572 572 ssize_t (*direct_IO)(int, struct kiocb *, const struct iovec *iov, ··· 685 685 invalidatepage: If a page has PagePrivate set, then invalidatepage 686 686 will be called when part or all of the page is to be removed 687 687 from the address space. This generally corresponds to either a 688 - truncation or a complete invalidation of the address space 689 - (in the latter case 'offset' will always be 0). 690 - Any private data associated with the page should be updated 691 - to reflect this truncation. If offset is 0, then 692 - the private data should be released, because the page 693 - must be able to be completely discarded. This may be done by 694 - calling the ->releasepage function, but in this case the 695 - release MUST succeed. 688 + truncation, punch hole or a complete invalidation of the address 689 + space (in the latter case 'offset' will always be 0 and 'length' 690 + will be PAGE_CACHE_SIZE). Any private data associated with the page 691 + should be updated to reflect this truncation. If offset is 0 and 692 + length is PAGE_CACHE_SIZE, then the private data should be released, 693 + because the page must be able to be completely discarded. This may 694 + be done by calling the ->releasepage function, but in this case the 695 + release MUST succeed. 696 696 697 697 releasepage: releasepage is called on PagePrivate pages to indicate 698 698 that the page should be freed if possible. ->releasepage
+3 -2
fs/9p/vfs_addr.c
··· 148 148 * @offset: offset in the page 149 149 */ 150 150 151 - static void v9fs_invalidate_page(struct page *page, unsigned long offset) 151 + static void v9fs_invalidate_page(struct page *page, unsigned int offset, 152 + unsigned int length) 152 153 { 153 154 /* 154 155 * If called with zero offset, we should release 155 156 * the private state assocated with the page 156 157 */ 157 - if (offset == 0) 158 + if (offset == 0 && length == PAGE_CACHE_SIZE) 158 159 v9fs_fscache_invalidate_page(page); 159 160 } 160 161
+6 -4
fs/afs/file.c
··· 19 19 #include "internal.h" 20 20 21 21 static int afs_readpage(struct file *file, struct page *page); 22 - static void afs_invalidatepage(struct page *page, unsigned long offset); 22 + static void afs_invalidatepage(struct page *page, unsigned int offset, 23 + unsigned int length); 23 24 static int afs_releasepage(struct page *page, gfp_t gfp_flags); 24 25 static int afs_launder_page(struct page *page); 25 26 ··· 311 310 * - release a page and clean up its private data if offset is 0 (indicating 312 311 * the entire page) 313 312 */ 314 - static void afs_invalidatepage(struct page *page, unsigned long offset) 313 + static void afs_invalidatepage(struct page *page, unsigned int offset, 314 + unsigned int length) 315 315 { 316 316 struct afs_writeback *wb = (struct afs_writeback *) page_private(page); 317 317 318 - _enter("{%lu},%lu", page->index, offset); 318 + _enter("{%lu},%u,%u", page->index, offset, length); 319 319 320 320 BUG_ON(!PageLocked(page)); 321 321 322 322 /* we clean up only if the entire page is being invalidated */ 323 - if (offset == 0) { 323 + if (offset == 0 && length == PAGE_CACHE_SIZE) { 324 324 #ifdef CONFIG_AFS_FSCACHE 325 325 if (PageFsCache(page)) { 326 326 struct afs_vnode *vnode = AFS_FS_I(page->mapping->host);
+2 -1
fs/btrfs/disk-io.c
··· 1013 1013 return try_release_extent_buffer(page); 1014 1014 } 1015 1015 1016 - static void btree_invalidatepage(struct page *page, unsigned long offset) 1016 + static void btree_invalidatepage(struct page *page, unsigned int offset, 1017 + unsigned int length) 1017 1018 { 1018 1019 struct extent_io_tree *tree; 1019 1020 tree = &BTRFS_I(page->mapping->host)->io_tree;
+1 -1
fs/btrfs/extent_io.c
··· 2957 2957 pg_offset = i_size & (PAGE_CACHE_SIZE - 1); 2958 2958 if (page->index > end_index || 2959 2959 (page->index == end_index && !pg_offset)) { 2960 - page->mapping->a_ops->invalidatepage(page, 0); 2960 + page->mapping->a_ops->invalidatepage(page, 0, PAGE_CACHE_SIZE); 2961 2961 unlock_page(page); 2962 2962 return 0; 2963 2963 }
+2 -1
fs/btrfs/inode.c
··· 7510 7510 return __btrfs_releasepage(page, gfp_flags & GFP_NOFS); 7511 7511 } 7512 7512 7513 - static void btrfs_invalidatepage(struct page *page, unsigned long offset) 7513 + static void btrfs_invalidatepage(struct page *page, unsigned int offset, 7514 + unsigned int length) 7514 7515 { 7515 7516 struct inode *inode = page->mapping->host; 7516 7517 struct extent_io_tree *tree;
+18 -3
fs/buffer.c
··· 1454 1454 * block_invalidatepage - invalidate part or all of a buffer-backed page 1455 1455 * 1456 1456 * @page: the page which is affected 1457 - * @offset: the index of the truncation point 1457 + * @offset: start of the range to invalidate 1458 + * @length: length of the range to invalidate 1458 1459 * 1459 1460 * block_invalidatepage() is called when all or part of the page has become 1460 1461 * invalidated by a truncate operation. ··· 1466 1465 * point. Because the caller is about to free (and possibly reuse) those 1467 1466 * blocks on-disk. 1468 1467 */ 1469 - void block_invalidatepage(struct page *page, unsigned long offset) 1468 + void block_invalidatepage(struct page *page, unsigned int offset, 1469 + unsigned int length) 1470 1470 { 1471 1471 struct buffer_head *head, *bh, *next; 1472 1472 unsigned int curr_off = 0; 1473 + unsigned int stop = length + offset; 1473 1474 1474 1475 BUG_ON(!PageLocked(page)); 1475 1476 if (!page_has_buffers(page)) 1476 1477 goto out; 1478 + 1479 + /* 1480 + * Check for overflow 1481 + */ 1482 + BUG_ON(stop > PAGE_CACHE_SIZE || stop < length); 1477 1483 1478 1484 head = page_buffers(page); 1479 1485 bh = head; 1480 1486 do { 1481 1487 unsigned int next_off = curr_off + bh->b_size; 1482 1488 next = bh->b_this_page; 1489 + 1490 + /* 1491 + * Are we still fully in range ? 1492 + */ 1493 + if (next_off > stop) 1494 + goto out; 1483 1495 1484 1496 /* 1485 1497 * is this block fully invalidated? ··· 1514 1500 return; 1515 1501 } 1516 1502 EXPORT_SYMBOL(block_invalidatepage); 1503 + 1517 1504 1518 1505 /* 1519 1506 * We attach and possibly dirty the buffers atomically wrt ··· 2856 2841 * they may have been added in ext3_writepage(). Make them 2857 2842 * freeable here, so the page does not leak. 2858 2843 */ 2859 - do_invalidatepage(page, 0); 2844 + do_invalidatepage(page, 0, PAGE_CACHE_SIZE); 2860 2845 unlock_page(page); 2861 2846 return 0; /* don't care */ 2862 2847 }
+3 -2
fs/ceph/addr.c
··· 143 143 * dirty page counters appropriately. Only called if there is private 144 144 * data on the page. 145 145 */ 146 - static void ceph_invalidatepage(struct page *page, unsigned long offset) 146 + static void ceph_invalidatepage(struct page *page, unsigned int offset, 147 + unsigned int length) 147 148 { 148 149 struct inode *inode; 149 150 struct ceph_inode_info *ci; ··· 169 168 170 169 ci = ceph_inode(inode); 171 170 if (offset == 0) { 172 - dout("%p invalidatepage %p idx %lu full dirty page %lu\n", 171 + dout("%p invalidatepage %p idx %lu full dirty page %u\n", 173 172 inode, page, page->index, offset); 174 173 ceph_put_wrbuffer_cap_refs(ci, 1, snapc); 175 174 ceph_put_snap_context(snapc);
+3 -2
fs/cifs/file.c
··· 3546 3546 return cifs_fscache_release_page(page, gfp); 3547 3547 } 3548 3548 3549 - static void cifs_invalidate_page(struct page *page, unsigned long offset) 3549 + static void cifs_invalidate_page(struct page *page, unsigned int offset, 3550 + unsigned int length) 3550 3551 { 3551 3552 struct cifsInodeInfo *cifsi = CIFS_I(page->mapping->host); 3552 3553 3553 - if (offset == 0) 3554 + if (offset == 0 && length == PAGE_CACHE_SIZE) 3554 3555 cifs_fscache_invalidate_page(page, &cifsi->vfs_inode); 3555 3556 } 3556 3557
+4 -2
fs/exofs/inode.c
··· 953 953 return 0; 954 954 } 955 955 956 - static void exofs_invalidatepage(struct page *page, unsigned long offset) 956 + static void exofs_invalidatepage(struct page *page, unsigned int offset, 957 + unsigned int length) 957 958 { 958 - EXOFS_DBGMSG("page 0x%lx offset 0x%lx\n", page->index, offset); 959 + EXOFS_DBGMSG("page 0x%lx offset 0x%x length 0x%x\n", 960 + page->index, offset, length); 959 961 WARN_ON(1); 960 962 } 961 963
+2 -1
fs/ext3/inode.c
··· 1825 1825 return mpage_readpages(mapping, pages, nr_pages, ext3_get_block); 1826 1826 } 1827 1827 1828 - static void ext3_invalidatepage(struct page *page, unsigned long offset) 1828 + static void ext3_invalidatepage(struct page *page, unsigned int offset, 1829 + unsigned int length) 1829 1830 { 1830 1831 journal_t *journal = EXT3_JOURNAL(page->mapping->host); 1831 1832
+11 -7
fs/ext4/inode.c
··· 132 132 new_size); 133 133 } 134 134 135 - static void ext4_invalidatepage(struct page *page, unsigned long offset); 135 + static void ext4_invalidatepage(struct page *page, unsigned int offset, 136 + unsigned int length); 136 137 static int __ext4_journalled_writepage(struct page *page, unsigned int len); 137 138 static int ext4_bh_delay_or_unwritten(handle_t *handle, struct buffer_head *bh); 138 139 static int ext4_discard_partial_page_buffers_no_lock(handle_t *handle, ··· 1607 1606 break; 1608 1607 BUG_ON(!PageLocked(page)); 1609 1608 BUG_ON(PageWriteback(page)); 1610 - block_invalidatepage(page, 0); 1609 + block_invalidatepage(page, 0, PAGE_CACHE_SIZE); 1611 1610 ClearPageUptodate(page); 1612 1611 unlock_page(page); 1613 1612 } ··· 2830 2829 return ret ? ret : copied; 2831 2830 } 2832 2831 2833 - static void ext4_da_invalidatepage(struct page *page, unsigned long offset) 2832 + static void ext4_da_invalidatepage(struct page *page, unsigned int offset, 2833 + unsigned int length) 2834 2834 { 2835 2835 /* 2836 2836 * Drop reserved blocks ··· 2843 2841 ext4_da_page_release_reservation(page, offset); 2844 2842 2845 2843 out: 2846 - ext4_invalidatepage(page, offset); 2844 + ext4_invalidatepage(page, offset, length); 2847 2845 2848 2846 return; 2849 2847 } ··· 2991 2989 return mpage_readpages(mapping, pages, nr_pages, ext4_get_block); 2992 2990 } 2993 2991 2994 - static void ext4_invalidatepage(struct page *page, unsigned long offset) 2992 + static void ext4_invalidatepage(struct page *page, unsigned int offset, 2993 + unsigned int length) 2995 2994 { 2996 2995 trace_ext4_invalidatepage(page, offset); 2997 2996 2998 2997 /* No journalling happens on data buffers when this function is used */ 2999 2998 WARN_ON(page_has_buffers(page) && buffer_jbd(page_buffers(page))); 3000 2999 3001 3000 block_invalidatepage(page, offset); 3000 + block_invalidatepage(page, offset, PAGE_CACHE_SIZE - offset); 3002 3001 } 3003 3002 3004 3003 static int __ext4_journalled_invalidatepage(struct page *page, ··· 3020 3017 3021 3018 /* Wrapper for aops... */ 3022 3019 static void ext4_journalled_invalidatepage(struct page *page, 3023 - unsigned long offset) 3020 + unsigned int offset, 3021 + unsigned int length) 3024 3022 { 3025 3023 WARN_ON(__ext4_journalled_invalidatepage(page, offset) < 0); 3026 3024 }
+2 -1
fs/f2fs/data.c
··· 698 698 get_data_block_ro); 699 699 } 700 700 701 - static void f2fs_invalidate_data_page(struct page *page, unsigned long offset) 701 + static void f2fs_invalidate_data_page(struct page *page, unsigned int offset, 702 + unsigned int length) 702 703 { 703 704 struct inode *inode = page->mapping->host; 704 705 struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
+2 -1
fs/f2fs/node.c
··· 1205 1205 return 0; 1206 1206 } 1207 1207 1208 - static void f2fs_invalidate_node_page(struct page *page, unsigned long offset) 1208 + static void f2fs_invalidate_node_page(struct page *page, unsigned int offset, 1209 + unsigned int length) 1209 1210 { 1210 1211 struct inode *inode = page->mapping->host; 1211 1212 struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
+5 -3
fs/gfs2/aops.c
··· 110 110 /* Is the page fully outside i_size? (truncate in progress) */ 111 111 offset = i_size & (PAGE_CACHE_SIZE-1); 112 112 if (page->index > end_index || (page->index == end_index && !offset)) { 113 - page->mapping->a_ops->invalidatepage(page, 0); 113 + page->mapping->a_ops->invalidatepage(page, 0, PAGE_CACHE_SIZE); 114 114 goto out; 115 115 } 116 116 return 1; ··· 299 299 300 300 /* Is the page fully outside i_size? (truncate in progress) */ 301 301 if (page->index > end_index || (page->index == end_index && !offset)) { 302 - page->mapping->a_ops->invalidatepage(page, 0); 302 + page->mapping->a_ops->invalidatepage(page, 0, 303 + PAGE_CACHE_SIZE); 303 304 unlock_page(page); 304 305 continue; 305 306 } ··· 944 943 unlock_buffer(bh); 945 944 } 946 945 947 - static void gfs2_invalidatepage(struct page *page, unsigned long offset) 946 + static void gfs2_invalidatepage(struct page *page, unsigned int offset, 947 + unsigned int length) 948 948 { 949 949 struct gfs2_sbd *sdp = GFS2_SB(page->mapping->host); 950 950 struct buffer_head *bh, *head;
+3 -2
fs/jfs/jfs_metapage.c
··· 571 571 return ret; 572 572 } 573 573 574 - static void metapage_invalidatepage(struct page *page, unsigned long offset) 574 + static void metapage_invalidatepage(struct page *page, unsigned int offset, 575 + unsigned int length) 575 576 { 576 - BUG_ON(offset); 577 + BUG_ON(offset || length < PAGE_CACHE_SIZE); 577 578 578 579 BUG_ON(PageWriteback(page)); 579 580
+2 -1
fs/logfs/file.c
··· 159 159 return __logfs_writepage(page); 160 160 } 161 161 162 - static void logfs_invalidatepage(struct page *page, unsigned long offset) 162 + static void logfs_invalidatepage(struct page *page, unsigned int offset, 163 + unsigned int length) 163 164 { 164 165 struct logfs_block *block = logfs_block(page); 165 166
+2 -1
fs/logfs/segment.c
··· 884 884 return area; 885 885 } 886 886 887 - static void map_invalidatepage(struct page *page, unsigned long l) 887 + static void map_invalidatepage(struct page *page, unsigned int o, 888 + unsigned int l) 888 889 { 889 890 return; 890 891 }
+5 -3
fs/nfs/file.c
··· 451 451 * - Called if either PG_private or PG_fscache is set on the page 452 452 * - Caller holds page lock 453 453 */ 454 - static void nfs_invalidate_page(struct page *page, unsigned long offset) 454 + static void nfs_invalidate_page(struct page *page, unsigned int offset, 455 + unsigned int length) 455 456 { 456 - dfprintk(PAGECACHE, "NFS: invalidate_page(%p, %lu)\n", page, offset); 457 + dfprintk(PAGECACHE, "NFS: invalidate_page(%p, %u, %u)\n", 458 + page, offset, length); 457 459 458 - if (offset != 0) 460 + if (offset != 0 || length < PAGE_CACHE_SIZE) 459 461 return; 460 462 /* Cancel any unstarted writes on this page */ 461 463 nfs_wb_page_cancel(page_file_mapping(page)->host, page);
+1 -1
fs/ntfs/aops.c
··· 1372 1372 * The page may have dirty, unmapped buffers. Make them 1373 1373 * freeable here, so the page does not leak. 1374 1374 */ 1375 - block_invalidatepage(page, 0); 1375 + block_invalidatepage(page, 0, PAGE_CACHE_SIZE); 1376 1376 unlock_page(page); 1377 1377 ntfs_debug("Write outside i_size - truncated?"); 1378 1378 return 0;
+2 -1
fs/ocfs2/aops.c
··· 603 603 * from ext3. PageChecked() bits have been removed as OCFS2 does not 604 604 * do journalled data. 605 605 */ 606 - static void ocfs2_invalidatepage(struct page *page, unsigned long offset) 606 + static void ocfs2_invalidatepage(struct page *page, unsigned int offset, 607 + unsigned int length) 607 608 { 608 609 journal_t *journal = OCFS2_SB(page->mapping->host->i_sb)->journal->j_journal; 609 610
+2 -1
fs/reiserfs/inode.c
··· 2970 2970 } 2971 2971 2972 2972 /* clm -- taken from fs/buffer.c:block_invalidate_page */ 2973 - static void reiserfs_invalidatepage(struct page *page, unsigned long offset) 2973 + static void reiserfs_invalidatepage(struct page *page, unsigned int offset, 2974 + unsigned int length) 2974 2975 { 2975 2976 struct buffer_head *head, *bh, *next; 2976 2977 struct inode *inode = page->mapping->host;
+3 -2
fs/ubifs/file.c
··· 1277 1277 return err; 1278 1278 } 1279 1279 1280 - static void ubifs_invalidatepage(struct page *page, unsigned long offset) 1280 + static void ubifs_invalidatepage(struct page *page, unsigned int offset, 1281 + unsigned int length) 1281 1282 { 1282 1283 struct inode *inode = page->mapping->host; 1283 1284 struct ubifs_info *c = inode->i_sb->s_fs_info; 1284 1285 1285 1286 ubifs_assert(PagePrivate(page)); 1286 - if (offset) 1287 + if (offset || length < PAGE_CACHE_SIZE) 1287 1288 /* Partial page remains dirty */ 1288 1289 return; 1289 1290
+4 -3
fs/xfs/xfs_aops.c
··· 824 824 STATIC void 825 825 xfs_vm_invalidatepage( 826 826 struct page *page, 827 - unsigned long offset) 827 + unsigned int offset, 828 + unsigned int length) 828 829 { 829 830 trace_xfs_invalidatepage(page->mapping->host, page, offset); 830 - block_invalidatepage(page, offset); 831 + block_invalidatepage(page, offset, PAGE_CACHE_SIZE - offset); 831 832 } 832 833 833 834 /* ··· 892 891 893 892 xfs_iunlock(ip, XFS_ILOCK_EXCL); 894 893 out_invalidate: 895 - xfs_vm_invalidatepage(page, 0); 894 + xfs_vm_invalidatepage(page, 0, PAGE_CACHE_SIZE); 896 895 return; 897 896 } 898 897
+2 -1
include/linux/buffer_head.h
··· 198 198 * Generic address_space_operations implementations for buffer_head-backed 199 199 * address_spaces. 200 200 */ 201 - void block_invalidatepage(struct page *page, unsigned long offset); 201 + void block_invalidatepage(struct page *page, unsigned int offset, 202 + unsigned int length); 202 203 int block_write_full_page(struct page *page, get_block_t *get_block, 203 204 struct writeback_control *wbc); 204 205 int block_write_full_page_endio(struct page *page, get_block_t *get_block,
+1 -1
include/linux/fs.h
··· 364 364 365 365 /* Unfortunately this kludge is needed for FIBMAP. Don't use it */ 366 366 sector_t (*bmap)(struct address_space *, sector_t); 367 - void (*invalidatepage) (struct page *, unsigned long); 367 + void (*invalidatepage) (struct page *, unsigned int, unsigned int); 368 368 int (*releasepage) (struct page *, gfp_t); 369 369 void (*freepage)(struct page *); 370 370 ssize_t (*direct_IO)(int, struct kiocb *, const struct iovec *iov,
+2 -1
include/linux/mm.h
··· 1041 1041 struct page *get_dump_page(unsigned long addr); 1042 1042 1043 1043 extern int try_to_release_page(struct page * page, gfp_t gfp_mask); 1044 - extern void do_invalidatepage(struct page *page, unsigned long offset); 1044 + extern void do_invalidatepage(struct page *page, unsigned int offset, 1045 + unsigned int length); 1045 1046 1046 1047 int __set_page_dirty_nobuffers(struct page *page); 1047 1048 int __set_page_dirty_no_writeback(struct page *page);
+1 -1
mm/readahead.c
··· 48 48 if (!trylock_page(page)) 49 49 BUG(); 50 50 page->mapping = mapping; 51 - do_invalidatepage(page, 0); 51 + do_invalidatepage(page, 0, PAGE_CACHE_SIZE); 52 52 page->mapping = NULL; 53 53 unlock_page(page); 54 54 }
+9 -6
mm/truncate.c
··· 26 26 /** 27 27 * do_invalidatepage - invalidate part or all of a page 28 28 * @page: the page which is affected 29 - * @offset: the index of the truncation point 29 + * @offset: start of the range to invalidate 30 + * @length: length of the range to invalidate 30 31 * 31 32 * do_invalidatepage() is called when all or part of the page has become 32 33 * invalidated by a truncate operation. ··· 38 37 * point. Because the caller is about to free (and possibly reuse) those 39 38 * blocks on-disk. 40 39 */ 41 - void do_invalidatepage(struct page *page, unsigned long offset) 40 + void do_invalidatepage(struct page *page, unsigned int offset, 41 + unsigned int length) 42 42 { 43 - void (*invalidatepage)(struct page *, unsigned long); 43 + void (*invalidatepage)(struct page *, unsigned int, unsigned int); 44 + 44 45 invalidatepage = page->mapping->a_ops->invalidatepage; 45 46 #ifdef CONFIG_BLOCK 46 47 if (!invalidatepage) 47 48 invalidatepage = block_invalidatepage; 48 49 #endif 49 50 if (invalidatepage) 50 - (*invalidatepage)(page, offset); 51 + (*invalidatepage)(page, offset, length); 51 52 } 52 53 53 54 static inline void truncate_partial_page(struct page *page, unsigned partial) ··· 57 54 zero_user_segment(page, partial, PAGE_CACHE_SIZE); 58 55 cleancache_invalidate_page(page->mapping, page); 59 56 if (page_has_private(page)) 60 - do_invalidatepage(page, partial); 57 + do_invalidatepage(page, partial, PAGE_CACHE_SIZE - partial); 61 58 } 62 59 63 60 /* ··· 106 103 return -EIO; 107 104 108 105 if (page_has_private(page)) 109 - do_invalidatepage(page, 0); 106 + do_invalidatepage(page, 0, PAGE_CACHE_SIZE); 110 107 111 108 cancel_dirty_page(page, PAGE_CACHE_SIZE); 112 109