
Merge tag 'f2fs-for-6.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs

Pull f2fs updates from Jaegeuk Kim:
"Three main updates: folio conversion by Matthew, switch to a new mount
API by Hongbo and Eric, and several sysfs entries to tune GCs for ZUFS
with finer granularity by Daeho.

There are also patches to address bugs and issues in the existing
features such as GCs, file pinning, write-while-dio-read, contiguous
block allocation, and memory access violations.

Enhancements:
- switch to new mount API and folio conversion
- add sysfs nodes to control F2FS GCs for ZUFS
- improve performance on the nat entry cache
- drop inode from the donation list when the last file is closed
- avoid splitting bio when reading multiple pages

Bug fixes:
- fix to trigger foreground gc during f2fs_map_blocks() in lfs mode
- make sure zoned device GC uses FG_GC in shortage of free sections
- fix to calculate dirty data during has_not_enough_free_secs()
- fix to update upper_p in __get_secs_required() correctly
- wait for inflight dio completion, excluding pinned files read using dio
- don't break allocation when crossing contiguous sections
- vm_unmap_ram() may be called from an invalid context
- fix to avoid out-of-boundary access in dnode page
- fix to avoid panic in f2fs_evict_inode
- fix to avoid UAF in f2fs_sync_inode_meta()
- fix to use f2fs_is_valid_blkaddr_raw() in do_write_page()
- fix UAF of f2fs_inode_info in f2fs_free_dic
- fix to avoid invalid wait context issue
- fix bio memleak when committing super block
- handle nat.blkaddr corruption in f2fs_get_node_info()

In addition, there are also clean-ups and minor bug fixes"

* tag 'f2fs-for-6.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (109 commits)
f2fs: drop inode from the donation list when the last file is closed
f2fs: add gc_boost_gc_greedy sysfs node
f2fs: add gc_boost_gc_multiple sysfs node
f2fs: fix to trigger foreground gc during f2fs_map_blocks() in lfs mode
f2fs: fix to calculate dirty data during has_not_enough_free_secs()
f2fs: fix to update upper_p in __get_secs_required() correctly
f2fs: directly add newly allocated pre-dirty nat entry to dirty set list
f2fs: avoid redundant clean nat entry move in lru list
f2fs: zone: wait for inflight dio completion, excluding pinned files read using dio
f2fs: ignore valid ratio when free section count is low
f2fs: don't break allocation when crossing contiguous sections
f2fs: remove unnecessary tracepoint enabled check
f2fs: merge the two conditions to avoid code duplication
f2fs: vm_unmap_ram() may be called from an invalid context
f2fs: fix to avoid out-of-boundary access in dnode page
f2fs: switch to the new mount api
f2fs: introduce fs_context_operation structure
f2fs: separate the options parsing and options checking
f2fs: Add f2fs_fs_context to record the mount options
f2fs: Allow sbi to be NULL in f2fs_printk
...

+2056 -1571 total

Documentation/ABI/testing/sysfs-fs-f2fs  +22
···
 SB_ENC_STRICT_MODE_FL        0x00000001
 SB_ENC_NO_COMPAT_FALLBACK_FL 0x00000002
 ============================ ==========
+
+What:		/sys/fs/f2fs/<disk>/reserved_pin_section
+Date:		June 2025
+Contact:	"Chao Yu" <chao@kernel.org>
+Description:	This threshold is used to control triggering garbage collection while
+		fallocating on pinned file, so, it can guarantee there is enough free
+		reserved section before preallocating on pinned file.
+		By default, the value is ovp_sections, especially, for zoned ufs, the
+		value is 1.
+
+What:		/sys/fs/f2fs/<disk>/gc_boost_gc_multiple
+Date:		June 2025
+Contact:	"Daeho Jeong" <daehojeong@google.com>
+Description:	Set a multiplier for the background GC migration window when F2FS GC is
+		boosted. The range should be from 1 to the segment count in a section.
+		Default: 5
+
+What:		/sys/fs/f2fs/<disk>/gc_boost_gc_greedy
+Date:		June 2025
+Contact:	"Daeho Jeong" <daehojeong@google.com>
+Description:	Control GC algorithm for boost GC. 0: cost benefit, 1: greedy
+		Default: 1
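A quick way to inspect and tune the new knobs from userspace. This is a hedged sketch: `sda1` is a placeholder for whatever instance appears under `/sys/fs/f2fs/`, and the values shown are the documented defaults, not verified output.

```shell
# List mounted f2fs instances; substitute yours for sda1 below.
ls /sys/fs/f2fs/

# Defaults per the ABI text above: multiple=5, greedy=1.
cat /sys/fs/f2fs/sda1/gc_boost_gc_multiple
cat /sys/fs/f2fs/sda1/gc_boost_gc_greedy

# Switch boosted GC to cost-benefit victim selection (0) and widen the
# migration window; the multiplier must stay within 1..segments-per-section.
echo 0 > /sys/fs/f2fs/sda1/gc_boost_gc_greedy
echo 8 > /sys/fs/f2fs/sda1/gc_boost_gc_multiple
```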
Documentation/filesystems/f2fs.rst  +3 -3
···
 grpjquota=<file>	 information can be properly updated during recovery flow,
 prjjquota=<file>	 <quota file>: must be in root directory;
 jqfmt=<quota type>	 <quota type>: [vfsold,vfsv0,vfsv1].
-offusrjquota		 Turn off user journalled quota.
-offgrpjquota		 Turn off group journalled quota.
-offprjjquota		 Turn off project journalled quota.
+usrjquota=		 Turn off user journalled quota.
+grpjquota=		 Turn off group journalled quota.
+prjjquota=		 Turn off project journalled quota.
 quota			 Enable plain user disk quota accounting.
 noquota			 Disable all plain disk quota option.
 alloc_mode=%s		 Adjust block allocation policy, which supports "reuse"
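The rename reflects how journalled quota is now disabled: by passing the option with an empty value rather than a dedicated `off*` keyword. A usage sketch, with the device and mount point as placeholders:

```shell
# Enable user journalled quota at mount time (placeholders: /dev/sdb1, /mnt).
mount -t f2fs -o usrjquota=aquota.user,jqfmt=vfsv1 /dev/sdb1 /mnt

# Turn it off again on remount: empty value replaces the old offusrjquota.
mount -o remount,usrjquota= /mnt
```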
fs/f2fs/checkpoint.c  +4 -4
···
 	if (folio_test_uptodate(folio))
 		goto out;

-	fio.page = &folio->page;
+	fio.folio = folio;

 	err = f2fs_submit_page_bio(&fio);
 	if (err) {
···
 			continue;
 		}

-		fio.page = &folio->page;
+		fio.folio = folio;
 		err = f2fs_submit_page_bio(&fio);
 		f2fs_folio_put(folio, err ? true : false);
···
 	folio_mark_uptodate(folio);
 	if (filemap_dirty_folio(mapping, folio)) {
 		inc_page_count(F2FS_M_SB(mapping), F2FS_DIRTY_META);
-		set_page_private_reference(&folio->page);
+		folio_set_f2fs_reference(folio);
 		return true;
 	}
 	return false;
···
 	inode_inc_dirty_pages(inode);
 	spin_unlock(&sbi->inode_lock[type]);

-	set_page_private_reference(&folio->page);
+	folio_set_f2fs_reference(folio);
 }

 void f2fs_remove_dirty_inode(struct inode *inode)
fs/f2fs/compress.c  +60 -60
···
 static struct kmem_cache *cic_entry_slab;
 static struct kmem_cache *dic_entry_slab;

-static void *page_array_alloc(struct inode *inode, int nr)
+static void *page_array_alloc(struct f2fs_sb_info *sbi, int nr)
 {
-	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	unsigned int size = sizeof(struct page *) * nr;

 	if (likely(size <= sbi->page_array_slab_size))
 		return f2fs_kmem_cache_alloc(sbi->page_array_slab,
-				GFP_F2FS_ZERO, false, F2FS_I_SB(inode));
+				GFP_F2FS_ZERO, false, sbi);
 	return f2fs_kzalloc(sbi, size, GFP_NOFS);
 }

-static void page_array_free(struct inode *inode, void *pages, int nr)
+static void page_array_free(struct f2fs_sb_info *sbi, void *pages, int nr)
 {
-	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	unsigned int size = sizeof(struct page *) * nr;

 	if (!pages)
···
 	return cc->cluster_idx << cc->log_cluster_size;
 }

-bool f2fs_is_compressed_page(struct page *page)
+bool f2fs_is_compressed_page(struct folio *folio)
 {
-	if (!PagePrivate(page))
+	if (!folio->private)
 		return false;
-	if (!page_private(page))
-		return false;
-	if (page_private_nonpointer(page))
+	if (folio_test_f2fs_nonpointer(folio))
 		return false;

-	f2fs_bug_on(F2FS_P_SB(page),
-		*((u32 *)page_private(page)) != F2FS_COMPRESSED_PAGE_MAGIC);
+	f2fs_bug_on(F2FS_F_SB(folio),
+		*((u32 *)folio->private) != F2FS_COMPRESSED_PAGE_MAGIC);
 	return true;
 }
···
 	if (cc->rpages)
 		return 0;

-	cc->rpages = page_array_alloc(cc->inode, cc->cluster_size);
+	cc->rpages = page_array_alloc(F2FS_I_SB(cc->inode), cc->cluster_size);
 	return cc->rpages ? 0 : -ENOMEM;
 }

 void f2fs_destroy_compress_ctx(struct compress_ctx *cc, bool reuse)
 {
-	page_array_free(cc->inode, cc->rpages, cc->cluster_size);
+	page_array_free(F2FS_I_SB(cc->inode), cc->rpages, cc->cluster_size);
 	cc->rpages = NULL;
 	cc->nr_rpages = 0;
 	cc->nr_cpages = 0;
···
 	ret = lzo1x_decompress_safe(dic->cbuf->cdata, dic->clen,
 					dic->rbuf, &dic->rlen);
 	if (ret != LZO_E_OK) {
-		f2fs_err_ratelimited(F2FS_I_SB(dic->inode),
+		f2fs_err_ratelimited(dic->sbi,
 				"lzo decompress failed, ret:%d", ret);
 		return -EIO;
 	}

 	if (dic->rlen != PAGE_SIZE << dic->log_cluster_size) {
-		f2fs_err_ratelimited(F2FS_I_SB(dic->inode),
+		f2fs_err_ratelimited(dic->sbi,
 				"lzo invalid rlen:%zu, expected:%lu",
 				dic->rlen, PAGE_SIZE << dic->log_cluster_size);
 		return -EIO;
···
 	ret = LZ4_decompress_safe(dic->cbuf->cdata, dic->rbuf,
 					dic->clen, dic->rlen);
 	if (ret < 0) {
-		f2fs_err_ratelimited(F2FS_I_SB(dic->inode),
+		f2fs_err_ratelimited(dic->sbi,
 				"lz4 decompress failed, ret:%d", ret);
 		return -EIO;
 	}

 	if (ret != PAGE_SIZE << dic->log_cluster_size) {
-		f2fs_err_ratelimited(F2FS_I_SB(dic->inode),
+		f2fs_err_ratelimited(dic->sbi,
 				"lz4 invalid ret:%d, expected:%lu",
 				ret, PAGE_SIZE << dic->log_cluster_size);
 		return -EIO;
···
 	workspace_size = zstd_dstream_workspace_bound(max_window_size);

-	workspace = f2fs_vmalloc(F2FS_I_SB(dic->inode), workspace_size);
+	workspace = f2fs_vmalloc(dic->sbi, workspace_size);
 	if (!workspace)
 		return -ENOMEM;

 	stream = zstd_init_dstream(max_window_size, workspace, workspace_size);
 	if (!stream) {
-		f2fs_err_ratelimited(F2FS_I_SB(dic->inode),
+		f2fs_err_ratelimited(dic->sbi,
 				"%s zstd_init_dstream failed", __func__);
 		vfree(workspace);
 		return -EIO;
···
 	ret = zstd_decompress_stream(stream, &outbuf, &inbuf);
 	if (zstd_is_error(ret)) {
-		f2fs_err_ratelimited(F2FS_I_SB(dic->inode),
+		f2fs_err_ratelimited(dic->sbi,
 				"%s zstd_decompress_stream failed, ret: %d",
 				__func__, zstd_get_error_code(ret));
 		return -EIO;
 	}

 	if (dic->rlen != outbuf.pos) {
-		f2fs_err_ratelimited(F2FS_I_SB(dic->inode),
+		f2fs_err_ratelimited(dic->sbi,
 				"%s ZSTD invalid rlen:%zu, expected:%lu",
 				__func__, dic->rlen,
 				PAGE_SIZE << dic->log_cluster_size);
···
 static int f2fs_compress_pages(struct compress_ctx *cc)
 {
+	struct f2fs_sb_info *sbi = F2FS_I_SB(cc->inode);
 	struct f2fs_inode_info *fi = F2FS_I(cc->inode);
 	const struct f2fs_compress_ops *cops =
 		f2fs_cops[fi->i_compress_algorithm];
···
 	cc->nr_cpages = DIV_ROUND_UP(max_len, PAGE_SIZE);
 	cc->valid_nr_cpages = cc->nr_cpages;

-	cc->cpages = page_array_alloc(cc->inode, cc->nr_cpages);
+	cc->cpages = page_array_alloc(sbi, cc->nr_cpages);
 	if (!cc->cpages) {
 		ret = -ENOMEM;
 		goto destroy_compress_ctx;
···
 		if (cc->cpages[i])
 			f2fs_compress_free_page(cc->cpages[i]);
 	}
-	page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
+	page_array_free(sbi, cc->cpages, cc->nr_cpages);
 	cc->cpages = NULL;
 destroy_compress_ctx:
 	if (cops->destroy_compress_ctx)
···
 void f2fs_decompress_cluster(struct decompress_io_ctx *dic, bool in_task)
 {
-	struct f2fs_sb_info *sbi = F2FS_I_SB(dic->inode);
+	struct f2fs_sb_info *sbi = dic->sbi;
 	struct f2fs_inode_info *fi = F2FS_I(dic->inode);
 	const struct f2fs_compress_ops *cops =
 		f2fs_cops[fi->i_compress_algorithm];
···
 	f2fs_decompress_end_io(dic, ret, in_task);
 }

+static void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi,
+		struct folio *folio, nid_t ino, block_t blkaddr);
+
 /*
  * This is called when a page of a compressed cluster has been read from disk
  * (or failed to be read from disk). It checks whether this page was the last
  * page being waited on in the cluster, and if so, it decompresses the cluster
  * (or in the case of a failure, cleans up without actually decompressing).
  */
-void f2fs_end_read_compressed_page(struct page *page, bool failed,
+void f2fs_end_read_compressed_page(struct folio *folio, bool failed,
 		block_t blkaddr, bool in_task)
 {
-	struct decompress_io_ctx *dic =
-			(struct decompress_io_ctx *)page_private(page);
-	struct f2fs_sb_info *sbi = F2FS_I_SB(dic->inode);
+	struct decompress_io_ctx *dic = folio->private;
+	struct f2fs_sb_info *sbi = dic->sbi;

 	dec_page_count(sbi, F2FS_RD_DATA);

 	if (failed)
 		WRITE_ONCE(dic->failed, true);
 	else if (blkaddr && in_task)
-		f2fs_cache_compressed_page(sbi, page,
+		f2fs_cache_compressed_page(sbi, folio,
 				dic->inode->i_ino, blkaddr);

 	if (atomic_dec_and_test(&dic->remaining_pages))
···
 	cic->magic = F2FS_COMPRESSED_PAGE_MAGIC;
 	cic->inode = inode;
 	atomic_set(&cic->pending_pages, cc->valid_nr_cpages);
-	cic->rpages = page_array_alloc(cc->inode, cc->cluster_size);
+	cic->rpages = page_array_alloc(sbi, cc->cluster_size);
 	if (!cic->rpages)
 		goto out_put_cic;
···
 		(*submitted)++;
 unlock_continue:
 		inode_dec_dirty_pages(cc->inode);
-		unlock_page(fio.page);
+		folio_unlock(fio.folio);
 	}

 	if (fio.compr_blocks)
···
 	spin_unlock(&fi->i_size_lock);

 	f2fs_put_rpages(cc);
-	page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
+	page_array_free(sbi, cc->cpages, cc->nr_cpages);
 	cc->cpages = NULL;
 	f2fs_destroy_compress_ctx(cc, false);
 	return 0;

 out_destroy_crypt:
-	page_array_free(cc->inode, cic->rpages, cc->cluster_size);
+	page_array_free(sbi, cic->rpages, cc->cluster_size);

 	for (--i; i >= 0; i--) {
 		if (!cc->cpages[i])
···
 		f2fs_compress_free_page(cc->cpages[i]);
 		cc->cpages[i] = NULL;
 	}
-	page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
+	page_array_free(sbi, cc->cpages, cc->nr_cpages);
 	cc->cpages = NULL;
 	return -EAGAIN;
 }

-void f2fs_compress_write_end_io(struct bio *bio, struct page *page)
+void f2fs_compress_write_end_io(struct bio *bio, struct folio *folio)
 {
+	struct page *page = &folio->page;
 	struct f2fs_sb_info *sbi = bio->bi_private;
-	struct compress_io_ctx *cic =
-			(struct compress_io_ctx *)page_private(page);
-	enum count_type type = WB_DATA_TYPE(page,
-				f2fs_is_compressed_page(page));
+	struct compress_io_ctx *cic = folio->private;
+	enum count_type type = WB_DATA_TYPE(folio,
+				f2fs_is_compressed_page(folio));
 	int i;

 	if (unlikely(bio->bi_status != BLK_STS_OK))
···
 		end_page_writeback(cic->rpages[i]);
 	}

-	page_array_free(cic->inode, cic->rpages, cic->nr_rpages);
+	page_array_free(sbi, cic->rpages, cic->nr_rpages);
 	kmem_cache_free(cic_entry_slab, cic);
 }
···
 static int f2fs_prepare_decomp_mem(struct decompress_io_ctx *dic,
 		bool pre_alloc)
 {
-	const struct f2fs_compress_ops *cops =
-		f2fs_cops[F2FS_I(dic->inode)->i_compress_algorithm];
+	const struct f2fs_compress_ops *cops = f2fs_cops[dic->compress_algorithm];
 	int i;

-	if (!allow_memalloc_for_decomp(F2FS_I_SB(dic->inode), pre_alloc))
+	if (!allow_memalloc_for_decomp(dic->sbi, pre_alloc))
 		return 0;

-	dic->tpages = page_array_alloc(dic->inode, dic->cluster_size);
+	dic->tpages = page_array_alloc(dic->sbi, dic->cluster_size);
 	if (!dic->tpages)
 		return -ENOMEM;
···
 static void f2fs_release_decomp_mem(struct decompress_io_ctx *dic,
 		bool bypass_destroy_callback, bool pre_alloc)
 {
-	const struct f2fs_compress_ops *cops =
-		f2fs_cops[F2FS_I(dic->inode)->i_compress_algorithm];
+	const struct f2fs_compress_ops *cops = f2fs_cops[dic->compress_algorithm];

-	if (!allow_memalloc_for_decomp(F2FS_I_SB(dic->inode), pre_alloc))
+	if (!allow_memalloc_for_decomp(dic->sbi, pre_alloc))
 		return;

 	if (!bypass_destroy_callback && cops->destroy_decompress_ctx)
···
 	if (!dic)
 		return ERR_PTR(-ENOMEM);

-	dic->rpages = page_array_alloc(cc->inode, cc->cluster_size);
+	dic->rpages = page_array_alloc(sbi, cc->cluster_size);
 	if (!dic->rpages) {
 		kmem_cache_free(dic_entry_slab, dic);
 		return ERR_PTR(-ENOMEM);
···
 	dic->magic = F2FS_COMPRESSED_PAGE_MAGIC;
 	dic->inode = cc->inode;
+	dic->sbi = sbi;
+	dic->compress_algorithm = F2FS_I(cc->inode)->i_compress_algorithm;
 	atomic_set(&dic->remaining_pages, cc->nr_cpages);
 	dic->cluster_idx = cc->cluster_idx;
 	dic->cluster_size = cc->cluster_size;
···
 		dic->rpages[i] = cc->rpages[i];
 	dic->nr_rpages = cc->cluster_size;

-	dic->cpages = page_array_alloc(dic->inode, dic->nr_cpages);
+	dic->cpages = page_array_alloc(sbi, dic->nr_cpages);
 	if (!dic->cpages) {
 		ret = -ENOMEM;
 		goto out_free;
···
 		bool bypass_destroy_callback)
 {
 	int i;
+	/* use sbi in dic to avoid UAF of dic->inode */
+	struct f2fs_sb_info *sbi = dic->sbi;

 	f2fs_release_decomp_mem(dic, bypass_destroy_callback, true);
···
 				continue;
 			f2fs_compress_free_page(dic->tpages[i]);
 		}
-		page_array_free(dic->inode, dic->tpages, dic->cluster_size);
+		page_array_free(sbi, dic->tpages, dic->cluster_size);
 	}

 	if (dic->cpages) {
···
 				continue;
 			f2fs_compress_free_page(dic->cpages[i]);
 		}
-		page_array_free(dic->inode, dic->cpages, dic->nr_cpages);
+		page_array_free(sbi, dic->cpages, dic->nr_cpages);
 	}

-	page_array_free(dic->inode, dic->rpages, dic->nr_rpages);
+	page_array_free(sbi, dic->rpages, dic->nr_rpages);
 	kmem_cache_free(dic_entry_slab, dic);
 }
···
 		f2fs_free_dic(dic, false);
 	} else {
 		INIT_WORK(&dic->free_work, f2fs_late_free_dic);
-		queue_work(F2FS_I_SB(dic->inode)->post_read_wq,
-				&dic->free_work);
+		queue_work(dic->sbi->post_read_wq, &dic->free_work);
 	}
 }
···
 	invalidate_mapping_pages(COMPRESS_MAPPING(sbi), blkaddr, blkaddr + len - 1);
 }

-void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
-						nid_t ino, block_t blkaddr)
+static void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi,
+		struct folio *folio, nid_t ino, block_t blkaddr)
 {
 	struct folio *cfolio;
 	int ret;
···
 		return;
 	}

-	set_page_private_data(&cfolio->page, ino);
+	folio_set_f2fs_data(cfolio, ino);

-	memcpy(folio_address(cfolio), page_address(page), PAGE_SIZE);
+	memcpy(folio_address(cfolio), folio_address(folio), PAGE_SIZE);
 	folio_mark_uptodate(cfolio);
 	f2fs_folio_put(cfolio, true);
 }
···
 			continue;
 		}

-		if (ino != get_page_private_data(&folio->page)) {
+		if (ino != folio_get_f2fs_data(folio)) {
 			folio_unlock(folio);
 			continue;
 		}
fs/f2fs/data.c  +99 -84
···
 	bioset_exit(&f2fs_bioset);
 }

-bool f2fs_is_cp_guaranteed(struct page *page)
+bool f2fs_is_cp_guaranteed(const struct folio *folio)
 {
-	struct address_space *mapping = page_folio(page)->mapping;
+	struct address_space *mapping = folio->mapping;
 	struct inode *inode;
 	struct f2fs_sb_info *sbi;

-	if (fscrypt_is_bounce_page(page))
-		return page_private_gcing(fscrypt_pagecache_page(page));
+	if (fscrypt_is_bounce_folio(folio))
+		return folio_test_f2fs_gcing(fscrypt_pagecache_folio(folio));

 	inode = mapping->host;
 	sbi = F2FS_I_SB(inode);
···
 		return true;

 	if ((S_ISREG(inode->i_mode) && IS_NOQUOTA(inode)) ||
-			page_private_gcing(page))
+			folio_test_f2fs_gcing(folio))
 		return true;
 	return false;
 }
···
 	bio_for_each_folio_all(fi, bio) {
 		struct folio *folio = fi.folio;

-		if (f2fs_is_compressed_page(&folio->page)) {
+		if (f2fs_is_compressed_page(folio)) {
 			if (ctx && !ctx->decompression_attempted)
-				f2fs_end_read_compressed_page(&folio->page, true, 0,
+				f2fs_end_read_compressed_page(folio, true, 0,
 							in_task);
 			f2fs_put_folio_dic(folio, in_task);
 			continue;
···
 	 * as those were handled separately by f2fs_end_read_compressed_page().
 	 */
 	if (may_have_compressed_pages) {
-		struct bio_vec *bv;
-		struct bvec_iter_all iter_all;
+		struct folio_iter fi;

-		bio_for_each_segment_all(bv, bio, iter_all) {
-			struct page *page = bv->bv_page;
+		bio_for_each_folio_all(fi, bio) {
+			struct folio *folio = fi.folio;

-			if (!f2fs_is_compressed_page(page) &&
-			    !fsverity_verify_page(page)) {
+			if (!f2fs_is_compressed_page(folio) &&
+			    !fsverity_verify_page(&folio->page)) {
 				bio->bi_status = BLK_STS_IOERR;
 				break;
 			}
···
 static void f2fs_handle_step_decompress(struct bio_post_read_ctx *ctx,
 		bool in_task)
 {
-	struct bio_vec *bv;
-	struct bvec_iter_all iter_all;
+	struct folio_iter fi;
 	bool all_compressed = true;
 	block_t blkaddr = ctx->fs_blkaddr;

-	bio_for_each_segment_all(bv, ctx->bio, iter_all) {
-		struct page *page = bv->bv_page;
+	bio_for_each_folio_all(fi, ctx->bio) {
+		struct folio *folio = fi.folio;

-		if (f2fs_is_compressed_page(page))
-			f2fs_end_read_compressed_page(page, false, blkaddr,
+		if (f2fs_is_compressed_page(folio))
+			f2fs_end_read_compressed_page(folio, false, blkaddr,
 					in_task);
 		else
 			all_compressed = false;
···
 static void f2fs_read_end_io(struct bio *bio)
 {
-	struct f2fs_sb_info *sbi = F2FS_P_SB(bio_first_page_all(bio));
+	struct f2fs_sb_info *sbi = F2FS_F_SB(bio_first_folio_all(bio));
 	struct bio_post_read_ctx *ctx;
-	bool intask = in_task();
+	bool intask = in_task() && !irqs_disabled();

 	iostat_update_and_unbind_ctx(bio);
 	ctx = bio->bi_private;
···
 		}

 #ifdef CONFIG_F2FS_FS_COMPRESSION
-		if (f2fs_is_compressed_page(&folio->page)) {
-			f2fs_compress_write_end_io(bio, &folio->page);
+		if (f2fs_is_compressed_page(folio)) {
+			f2fs_compress_write_end_io(bio, folio);
 			continue;
 		}
 #endif

-		type = WB_DATA_TYPE(&folio->page, false);
+		type = WB_DATA_TYPE(folio, false);

 		if (unlikely(bio->bi_status != BLK_STS_OK)) {
 			mapping_set_error(folio->mapping, -EIO);
···
 		}

 		f2fs_bug_on(sbi, is_node_folio(folio) &&
-				folio->index != nid_of_node(&folio->page));
+				folio->index != nid_of_node(folio));

 		dec_page_count(sbi, type);
 		if (f2fs_in_warm_node_list(sbi, folio))
 			f2fs_del_fsync_node_entry(sbi, folio);
-		clear_page_private_gcing(&folio->page);
+		folio_clear_f2fs_gcing(folio);
 		folio_end_writeback(folio);
 	}
 	if (!get_pages(sbi, F2FS_WB_CP_DATA) &&
···
 static blk_opf_t f2fs_io_flags(struct f2fs_io_info *fio)
 {
 	unsigned int temp_mask = GENMASK(NR_TEMP_TYPE - 1, 0);
-	struct folio *fio_folio = page_folio(fio->page);
 	unsigned int fua_flag, meta_flag, io_flag;
 	blk_opf_t op_flags = 0;
···
 		op_flags |= REQ_FUA;

 	if (fio->type == DATA &&
-	    F2FS_I(fio_folio->mapping->host)->ioprio_hint == F2FS_IOPRIO_WRITE)
+	    F2FS_I(fio->folio->mapping->host)->ioprio_hint == F2FS_IOPRIO_WRITE)
 		op_flags |= REQ_PRIO;

 	return op_flags;
···
 }

 static bool __has_merged_page(struct bio *bio, struct inode *inode,
-						struct page *page, nid_t ino)
+						struct folio *folio, nid_t ino)
 {
 	struct folio_iter fi;

 	if (!bio)
 		return false;

-	if (!inode && !page && !ino)
+	if (!inode && !folio && !ino)
 		return true;

 	bio_for_each_folio_all(fi, bio) {
···
 			if (IS_ERR(target))
 				continue;
 		}
-		if (f2fs_is_compressed_page(&target->page)) {
+		if (f2fs_is_compressed_page(target)) {
 			target = f2fs_compress_control_folio(target);
 			if (IS_ERR(target))
 				continue;
 		}

 		if (inode && inode == target->mapping->host)
 			return true;
-		if (page && page == &target->page)
+		if (folio && folio == target)
 			return true;
-		if (ino && ino == ino_of_node(&target->page))
+		if (ino && ino == ino_of_node(target))
 			return true;
 	}
···
 }

 static void __submit_merged_write_cond(struct f2fs_sb_info *sbi,
-				struct inode *inode, struct page *page,
+				struct inode *inode, struct folio *folio,
 				nid_t ino, enum page_type type, bool force)
 {
 	enum temp_type temp;
···
 		struct f2fs_bio_info *io = sbi->write_io[btype] + temp;

 		f2fs_down_read(&io->io_rwsem);
-		ret = __has_merged_page(io->bio, inode, page, ino);
+		ret = __has_merged_page(io->bio, inode, folio, ino);
 		f2fs_up_read(&io->io_rwsem);
 	}
 	if (ret)
···
 }

 void f2fs_submit_merged_write_cond(struct f2fs_sb_info *sbi,
-				struct inode *inode, struct page *page,
+				struct inode *inode, struct folio *folio,
 				nid_t ino, enum page_type type)
 {
-	__submit_merged_write_cond(sbi, inode, page, ino, type, false);
+	__submit_merged_write_cond(sbi, inode, folio, ino, type, false);
 }

 void f2fs_flush_merged_writes(struct f2fs_sb_info *sbi)
···
 int f2fs_submit_page_bio(struct f2fs_io_info *fio)
 {
 	struct bio *bio;
-	struct folio *fio_folio = page_folio(fio->page);
+	struct folio *fio_folio = fio->folio;
 	struct folio *data_folio = fio->encrypted_page ?
 			page_folio(fio->encrypted_page) : fio_folio;
···
 		wbc_account_cgroup_owner(fio->io_wbc, fio_folio, PAGE_SIZE);

 	inc_page_count(fio->sbi, is_read_io(fio->op) ?
-			__read_io_type(data_folio) : WB_DATA_TYPE(fio->page, false));
+			__read_io_type(data_folio) : WB_DATA_TYPE(fio->folio, false));

 	if (is_read_io(bio_op(bio)))
 		f2fs_submit_read_bio(fio->sbi, bio, fio->type);
···
 static int add_ipu_page(struct f2fs_io_info *fio, struct bio **bio,
 			struct page *page)
 {
-	struct folio *fio_folio = page_folio(fio->page);
+	struct folio *fio_folio = fio->folio;
 	struct f2fs_sb_info *sbi = fio->sbi;
 	enum temp_type temp;
 	bool found = false;
···
 			found = (target == be->bio);
 		else
 			found = __has_merged_page(be->bio, NULL,
-							&folio->page, 0);
+							folio, 0);
 		if (found)
 			break;
 	}
···
 			found = (target == be->bio);
 		else
 			found = __has_merged_page(be->bio, NULL,
-							&folio->page, 0);
+							folio, 0);
 		if (found) {
 			target = be->bio;
 			del_bio_entry(be);
···
 int f2fs_merge_page_bio(struct f2fs_io_info *fio)
 {
 	struct bio *bio = *fio->bio;
-	struct page *page = fio->encrypted_page ?
-			fio->encrypted_page : fio->page;
-	struct folio *folio = page_folio(fio->page);
+	struct folio *data_folio = fio->encrypted_page ?
+			page_folio(fio->encrypted_page) : fio->folio;
+	struct folio *folio = fio->folio;

 	if (!f2fs_is_valid_blkaddr(fio->sbi, fio->new_blkaddr,
 			__is_meta_io(fio) ? META_GENERIC : DATA_GENERIC))
 		return -EFSCORRUPTED;

-	trace_f2fs_submit_folio_bio(page_folio(page), fio);
+	trace_f2fs_submit_folio_bio(data_folio, fio);

 	if (bio && !page_is_mergeable(fio->sbi, bio, *fio->last_block,
 			fio->new_blkaddr))
···
 		f2fs_set_bio_crypt_ctx(bio, folio->mapping->host,
 				folio->index, fio, GFP_NOIO);

-		add_bio_entry(fio->sbi, bio, page, fio->temp);
+		add_bio_entry(fio->sbi, bio, &data_folio->page, fio->temp);
 	} else {
-		if (add_ipu_page(fio, &bio, page))
+		if (add_ipu_page(fio, &bio, &data_folio->page))
 			goto alloc_new;
 	}

 	if (fio->io_wbc)
 		wbc_account_cgroup_owner(fio->io_wbc, folio, folio_size(folio));

-	inc_page_count(fio->sbi, WB_DATA_TYPE(page, false));
+	inc_page_count(fio->sbi, WB_DATA_TYPE(data_folio, false));

 	*fio->last_block = fio->new_blkaddr;
 	*fio->bio = bio;
···
 	struct f2fs_sb_info *sbi = fio->sbi;
 	enum page_type btype = PAGE_TYPE_OF_BIO(fio->type);
 	struct f2fs_bio_info *io = sbi->write_io[btype] + fio->temp;
-	struct page *bio_page;
+	struct folio *bio_folio;
 	enum count_type type;

 	f2fs_bug_on(sbi, is_read_io(fio->op));
···
 	verify_fio_blkaddr(fio);

 	if (fio->encrypted_page)
-		bio_page = fio->encrypted_page;
+		bio_folio = page_folio(fio->encrypted_page);
 	else if (fio->compressed_page)
-		bio_page = fio->compressed_page;
+		bio_folio = page_folio(fio->compressed_page);
 	else
-		bio_page = fio->page;
+		bio_folio = fio->folio;

 	/* set submitted = true as a return value */
 	fio->submitted = 1;

-	type = WB_DATA_TYPE(bio_page, fio->compressed_page);
+	type = WB_DATA_TYPE(bio_folio, fio->compressed_page);
 	inc_page_count(sbi, type);

 	if (io->bio &&
 	    (!io_is_mergeable(sbi, io->bio, io, fio, io->last_block_in_bio,
 			      fio->new_blkaddr) ||
 	     !f2fs_crypt_mergeable_bio(io->bio, fio_inode(fio),
-				       page_folio(bio_page)->index, fio)))
+				       bio_folio->index, fio)))
 		__submit_merged_bio(io);
 alloc_new:
 	if (io->bio == NULL) {
 		io->bio = __bio_alloc(fio, BIO_MAX_VECS);
 		f2fs_set_bio_crypt_ctx(io->bio, fio_inode(fio),
-				page_folio(bio_page)->index, fio, GFP_NOIO);
+				bio_folio->index, fio, GFP_NOIO);
 		io->fio = *fio;
 	}

-	if (bio_add_page(io->bio, bio_page, PAGE_SIZE, 0) < PAGE_SIZE) {
+	if (!bio_add_folio(io->bio, bio_folio, folio_size(bio_folio), 0)) {
 		__submit_merged_bio(io);
 		goto alloc_new;
 	}

 	if (fio->io_wbc)
-		wbc_account_cgroup_owner(fio->io_wbc, page_folio(fio->page),
-				PAGE_SIZE);
+		wbc_account_cgroup_owner(fio->io_wbc, fio->folio,
+				folio_size(fio->folio));

 	io->last_block_in_bio = fio->new_blkaddr;

-	trace_f2fs_submit_folio_write(page_folio(fio->page), fio);
+	trace_f2fs_submit_folio_write(fio->folio, fio);
 #ifdef CONFIG_BLK_DEV_ZONED
 	if (f2fs_sb_has_blkzoned(sbi) && btype < META &&
 	    is_end_zone_blkaddr(sbi, fio->new_blkaddr)) {
···
 	unsigned int start_pgofs;
 	int bidx = 0;
 	bool is_hole;
+	bool lfs_dio_write;

 	if (!maxblocks)
 		return 0;
+
+	lfs_dio_write = (flag == F2FS_GET_BLOCK_DIO && f2fs_lfs_mode(sbi) &&
+			map->m_may_create);

 	if (!map->m_may_create && f2fs_map_blocks_cached(inode, map, flag))
 		goto out;
···
 	end = pgofs + maxblocks;

 next_dnode:
-	if (map->m_may_create)
+	if (map->m_may_create) {
+		if (f2fs_lfs_mode(sbi))
+			f2fs_balance_fs(sbi, true);
 		f2fs_map_lock(sbi, flag);
+	}

 	/* When reading holes, we need its node page */
 	set_new_dnode(&dn, inode, NULL, NULL, 0);
···
 	start_pgofs = pgofs;
 	prealloc = 0;
 	last_ofs_in_node = ofs_in_node = dn.ofs_in_node;
-	end_offset = ADDRS_PER_PAGE(&dn.node_folio->page, inode);
+	end_offset = ADDRS_PER_PAGE(dn.node_folio, inode);

 next_block:
 	blkaddr = f2fs_data_blkaddr(&dn);
···
 	/* use out-place-update for direct IO under LFS mode */
 	if (map->m_may_create && (is_hole ||
 		(flag == F2FS_GET_BLOCK_DIO && f2fs_lfs_mode(sbi) &&
-		!f2fs_is_pinned_file(inode)))) {
+		!f2fs_is_pinned_file(inode) && map->m_last_pblk != blkaddr))) {
 		if (unlikely(f2fs_cp_error(sbi))) {
 			err = -EIO;
 			goto sync_out;
···
 		if (map->m_multidev_dio)
 			map->m_bdev = FDEV(bidx).bdev;
+
+		if (lfs_dio_write)
+			map->m_last_pblk = NULL_ADDR;
 	} else if (map_is_mergeable(sbi, map, blkaddr, flag, bidx, ofs)) {
 		ofs++;
 		map->m_len++;
 	} else {
+		if (lfs_dio_write && !f2fs_is_pinned_file(inode))
+			map->m_last_pblk = blkaddr;
 		goto sync_out;
 	}
···
 			goto sync_out;
 		}
 		dn.ofs_in_node = end_offset;
 	}
-
-	if (flag == F2FS_GET_BLOCK_DIO && f2fs_lfs_mode(sbi) &&
-	    map->m_may_create) {
-		/* the next block to be allocated may not be contiguous. */
-		if (GET_SEGOFF_FROM_SEG0(sbi, blkaddr) % BLKS_PER_SEC(sbi) ==
-		    CAP_BLKS_PER_SEC(sbi) - 1)
-			goto sync_out;
-	}

 	if (pgofs >= end)
···
 	}

 	if (!bio) {
-		bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
+		bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages - i,
 				f2fs_ra_op_flags(rac),
 				folio->index, for_write);
 		if (IS_ERR(bio)) {
···
 	unsigned nr_pages = rac ? readahead_count(rac) : 1;
 	unsigned max_nr_pages = nr_pages;
 	int ret = 0;
+
+#ifdef CONFIG_F2FS_FS_COMPRESSION
+	if (f2fs_compressed_file(inode)) {
+		index = rac ? readahead_index(rac) : folio->index;
+		max_nr_pages = round_up(index + nr_pages, cc.cluster_size) -
+				round_down(index, cc.cluster_size);
+	}
+#endif

 	map.m_pblk = 0;
 	map.m_lblk = 0;
···
 int f2fs_do_write_data_page(struct f2fs_io_info *fio)
 {
-	struct folio *folio = page_folio(fio->page);
+	struct folio *folio = fio->folio;
 	struct inode *inode = folio->mapping->host;
 	struct dnode_of_data dn;
 	struct node_info ni;
···
 	/* Use COW inode to make dnode_of_data for atomic write */
 	atomic_commit = f2fs_is_atomic_file(inode) &&
-				page_private_atomic(folio_page(folio, 0));
+				folio_test_f2fs_atomic(folio);
 	if (atomic_commit)
 		set_new_dnode(&dn, F2FS_I(inode)->cow_inode, NULL, NULL, 0);
 	else
···
 	/* This page is already truncated */
 	if (fio->old_blkaddr == NULL_ADDR) {
 		folio_clear_uptodate(folio);
-		clear_page_private_gcing(folio_page(folio, 0));
+		folio_clear_f2fs_gcing(folio);
 		goto out_writepage;
 	}
 got_it:
···
 	trace_f2fs_do_write_data_page(folio, OPU);
 	set_inode_flag(inode, FI_APPEND_WRITE);
 	if (atomic_commit)
-		clear_page_private_atomic(folio_page(folio, 0));
+		folio_clear_f2fs_atomic(folio);
 out_writepage:
 	f2fs_put_dnode(&dn);
 out:
···
 		bool allow_balance)
 {
 	struct inode *inode = folio->mapping->host;
-	struct page *page = folio_page(folio, 0);
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	loff_t i_size = i_size_read(inode);
 	const pgoff_t end_index = ((unsigned long long)i_size)
···
2788 .op = REQ_OP_WRITE, 2797 2789 .op_flags = wbc_to_write_flags(wbc), 2798 2790 .old_blkaddr = NULL_ADDR, 2799 - .page = page, 2791 + .folio = folio, 2800 2792 .encrypted_page = NULL, 2801 2793 .submitted = 0, 2802 2794 .compr_blocks = compr_blocks, ··· 2898 2890 inode_dec_dirty_pages(inode); 2899 2891 if (err) { 2900 2892 folio_clear_uptodate(folio); 2901 - clear_page_private_gcing(page); 2893 + folio_clear_f2fs_gcing(folio); 2902 2894 } 2903 2895 folio_unlock(folio); 2904 2896 if (!S_ISDIR(inode->i_mode) && !IS_NOQUOTA(inode) && ··· 3384 3376 f2fs_do_read_inline_data(folio, ifolio); 3385 3377 set_inode_flag(inode, FI_DATA_EXIST); 3386 3378 if (inode->i_nlink) 3387 - set_page_private_inline(&ifolio->page); 3379 + folio_set_f2fs_inline(ifolio); 3388 3380 goto out; 3389 3381 } 3390 3382 err = f2fs_convert_inline_folio(&dn, folio); ··· 3706 3698 folio_mark_dirty(folio); 3707 3699 3708 3700 if (f2fs_is_atomic_file(inode)) 3709 - set_page_private_atomic(folio_page(folio, 0)); 3701 + folio_set_f2fs_atomic(folio); 3710 3702 3711 3703 if (pos + copied > i_size_read(inode) && 3712 3704 !f2fs_verity_in_progress(inode)) { ··· 3741 3733 f2fs_remove_dirty_inode(inode); 3742 3734 } 3743 3735 } 3744 - clear_page_private_all(&folio->page); 3736 + folio_detach_private(folio); 3745 3737 } 3746 3738 3747 3739 bool f2fs_release_folio(struct folio *folio, gfp_t wait) ··· 3750 3742 if (folio_test_dirty(folio)) 3751 3743 return false; 3752 3744 3753 - clear_page_private_all(&folio->page); 3745 + folio_detach_private(folio); 3754 3746 return true; 3755 3747 } 3756 3748 ··· 4168 4160 unsigned int flags, struct iomap *iomap, 4169 4161 struct iomap *srcmap) 4170 4162 { 4171 - struct f2fs_map_blocks map = {}; 4163 + struct f2fs_map_blocks map = { NULL, }; 4172 4164 pgoff_t next_pgofs = 0; 4173 4165 int err; 4174 4166 ··· 4177 4169 map.m_next_pgofs = &next_pgofs; 4178 4170 map.m_seg_type = f2fs_rw_hint_to_seg_type(F2FS_I_SB(inode), 4179 4171 inode->i_write_hint); 4172 + if (flags & 
IOMAP_WRITE && iomap->private) { 4173 + map.m_last_pblk = (unsigned long)iomap->private; 4174 + iomap->private = NULL; 4175 + } 4180 4176 4181 4177 /* 4182 4178 * If the blocks being overwritten are already allocated, ··· 4219 4207 iomap->flags |= IOMAP_F_MERGED; 4220 4208 iomap->bdev = map.m_bdev; 4221 4209 iomap->addr = F2FS_BLK_TO_BYTES(map.m_pblk); 4210 + 4211 + if (flags & IOMAP_WRITE && map.m_last_pblk) 4212 + iomap->private = (void *)map.m_last_pblk; 4222 4213 } else { 4223 4214 if (flags & IOMAP_WRITE) 4224 4215 return -ENOTBLK;
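The data.c hunks above thread a new `m_last_pblk` field through `f2fs_map_blocks()` and carry it between iomap calls in `iomap->private`, so that an LFS-mode direct write can tell whether the next allocation simply continues the previous extent instead of bailing out at every section boundary. A minimal user-space sketch of that decision follows; `struct dio_map_state` and `needs_opu` are hypothetical stand-ins for the real f2fs structures, and the DIO/LFS-mode context is assumed rather than modeled:

```c
#include <stdint.h>
#include <stdbool.h>

typedef uint64_t block_t;

/* Hypothetical model of the m_last_pblk handshake: each f2fs_map_blocks()
 * call for an LFS-mode DIO write records the last physical block it mapped
 * (saved across calls via iomap->private); the next call skips out-of-place
 * allocation only when the target block equals that saved address, i.e. the
 * write is still appending to the same log position. */
struct dio_map_state {
	block_t m_last_pblk;	/* carried across calls via iomap->private */
};

/* Returns true when the next block must be allocated out-of-place (OPU). */
static bool needs_opu(const struct dio_map_state *st, block_t blkaddr,
		      bool is_hole, bool pinned)
{
	if (is_hole)
		return true;	/* nothing allocated yet: must allocate */
	/* Overwrite of an existing block: OPU unless it extends the last
	 * allocation, or the file is pinned (pinned files are written
	 * in place). */
	return !pinned && st->m_last_pblk != blkaddr;
}
```

This mirrors the condition in the hunk (`is_hole || (DIO && lfs && !pinned && map->m_last_pblk != blkaddr)`); the removed block that forced `sync_out` at every `CAP_BLKS_PER_SEC` boundary becomes unnecessary once contiguity is checked directly.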
+9 -12
fs/f2fs/debug.c
··· 21 21 #include "gc.h" 22 22 23 23 static LIST_HEAD(f2fs_stat_list); 24 - static DEFINE_RAW_SPINLOCK(f2fs_stat_lock); 24 + static DEFINE_SPINLOCK(f2fs_stat_lock); 25 25 #ifdef CONFIG_DEBUG_FS 26 26 static struct dentry *f2fs_debugfs_root; 27 27 #endif ··· 91 91 seg_blks = get_seg_entry(sbi, j)->valid_blocks; 92 92 93 93 /* update segment stats */ 94 - if (IS_CURSEG(sbi, j)) 94 + if (is_curseg(sbi, j)) 95 95 dev_stats[i].devstats[0][DEVSTAT_INUSE]++; 96 96 else if (seg_blks == BLKS_PER_SEG(sbi)) 97 97 dev_stats[i].devstats[0][DEVSTAT_FULL]++; ··· 109 109 sec_blks = get_sec_entry(sbi, j)->valid_blocks; 110 110 111 111 /* update section stats */ 112 - if (IS_CURSEC(sbi, GET_SEC_FROM_SEG(sbi, j))) 112 + if (is_cursec(sbi, GET_SEC_FROM_SEG(sbi, j))) 113 113 dev_stats[i].devstats[1][DEVSTAT_INUSE]++; 114 114 else if (sec_blks == BLKS_PER_SEC(sbi)) 115 115 dev_stats[i].devstats[1][DEVSTAT_FULL]++; ··· 439 439 { 440 440 struct f2fs_stat_info *si; 441 441 int i = 0, j = 0; 442 - unsigned long flags; 443 442 444 - raw_spin_lock_irqsave(&f2fs_stat_lock, flags); 443 + spin_lock(&f2fs_stat_lock); 445 444 list_for_each_entry(si, &f2fs_stat_list, stat_list) { 446 445 struct f2fs_sb_info *sbi = si->sbi; 447 446 ··· 752 753 seq_printf(s, " - paged : %llu KB\n", 753 754 si->page_mem >> 10); 754 755 } 755 - raw_spin_unlock_irqrestore(&f2fs_stat_lock, flags); 756 + spin_unlock(&f2fs_stat_lock); 756 757 return 0; 757 758 } 758 759 ··· 764 765 struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi); 765 766 struct f2fs_stat_info *si; 766 767 struct f2fs_dev_stats *dev_stats; 767 - unsigned long flags; 768 768 int i; 769 769 770 770 si = f2fs_kzalloc(sbi, sizeof(struct f2fs_stat_info), GFP_KERNEL); ··· 815 817 816 818 atomic_set(&sbi->max_aw_cnt, 0); 817 819 818 - raw_spin_lock_irqsave(&f2fs_stat_lock, flags); 820 + spin_lock(&f2fs_stat_lock); 819 821 list_add_tail(&si->stat_list, &f2fs_stat_list); 820 - raw_spin_unlock_irqrestore(&f2fs_stat_lock, flags); 822 + 
spin_unlock(&f2fs_stat_lock); 821 823 822 824 return 0; 823 825 } ··· 825 827 void f2fs_destroy_stats(struct f2fs_sb_info *sbi) 826 828 { 827 829 struct f2fs_stat_info *si = F2FS_STAT(sbi); 828 - unsigned long flags; 829 830 830 - raw_spin_lock_irqsave(&f2fs_stat_lock, flags); 831 + spin_lock(&f2fs_stat_lock); 831 832 list_del(&si->stat_list); 832 - raw_spin_unlock_irqrestore(&f2fs_stat_lock, flags); 833 + spin_unlock(&f2fs_stat_lock); 833 834 834 835 kfree(si->dev_stats); 835 836 kfree(si);
+2 -2
fs/f2fs/dir.c
··· 454 454 f2fs_folio_wait_writeback(ifolio, NODE, true, true); 455 455 456 456 /* copy name info. to this inode folio */ 457 - ri = F2FS_INODE(&ifolio->page); 457 + ri = F2FS_INODE(ifolio); 458 458 ri->i_namelen = cpu_to_le32(fname->disk_name.len); 459 459 memcpy(ri->i_name, fname->disk_name.name, fname->disk_name.len); 460 460 if (IS_ENCRYPTED(dir)) { ··· 897 897 f2fs_clear_page_cache_dirty_tag(folio); 898 898 folio_clear_dirty_for_io(folio); 899 899 folio_clear_uptodate(folio); 900 - clear_page_private_all(&folio->page); 900 + folio_detach_private(folio); 901 901 902 902 inode_dec_dirty_pages(dir); 903 903 f2fs_remove_dirty_inode(dir);
+5 -5
fs/f2fs/extent_cache.c
··· 19 19 #include "node.h" 20 20 #include <trace/events/f2fs.h> 21 21 22 - bool sanity_check_extent_cache(struct inode *inode, struct page *ipage) 22 + bool sanity_check_extent_cache(struct inode *inode, struct folio *ifolio) 23 23 { 24 24 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 25 - struct f2fs_extent *i_ext = &F2FS_INODE(ipage)->i_ext; 25 + struct f2fs_extent *i_ext = &F2FS_INODE(ifolio)->i_ext; 26 26 struct extent_info ei; 27 27 int devi; 28 28 ··· 411 411 { 412 412 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 413 413 struct extent_tree_info *eti = &sbi->extent_tree[EX_READ]; 414 - struct f2fs_extent *i_ext = &F2FS_INODE(&ifolio->page)->i_ext; 414 + struct f2fs_extent *i_ext = &F2FS_INODE(ifolio)->i_ext; 415 415 struct extent_tree *et; 416 416 struct extent_node *en; 417 - struct extent_info ei; 417 + struct extent_info ei = {0}; 418 418 419 419 if (!__may_extent_tree(inode, EX_READ)) { 420 420 /* drop largest read extent */ ··· 934 934 if (!__may_extent_tree(dn->inode, type)) 935 935 return; 936 936 937 - ei.fofs = f2fs_start_bidx_of_node(ofs_of_node(&dn->node_folio->page), dn->inode) + 937 + ei.fofs = f2fs_start_bidx_of_node(ofs_of_node(dn->node_folio), dn->inode) + 938 938 dn->ofs_in_node; 939 939 ei.len = 1; 940 940
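Besides the `struct page` to `struct folio` signature changes, the extent_cache.c hunk changes `struct extent_info ei;` to `struct extent_info ei = {0};`. A local struct without an initializer has indeterminate contents in C, so any path that fills only some members can leak stack garbage through the rest; `= {0}` value-initializes every member. A tiny illustration, using a stand-in struct rather than the real f2fs definition:

```c
/* Illustrative analogue of the extent_cache.c fix: a plain
 * `struct extent_info ei;` leaves the members indeterminate, while
 * `= {0}` value-initializes all of them.  extent_info_demo is a
 * stand-in, not the real f2fs layout. */
struct extent_info_demo {
	unsigned int fofs;
	unsigned int len;
	unsigned long long blk;
};

static int demo_is_zeroed(void)
{
	struct extent_info_demo ei = {0};	/* all members become zero */

	return ei.fofs == 0 && ei.len == 0 && ei.blk == 0;
}
```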
+83 -68
fs/f2fs/f2fs.h
··· 386 386 struct rb_node rb_node; /* rb node located in rb-tree */ 387 387 struct discard_info di; /* discard info */ 388 388 struct list_head list; /* command list */ 389 - struct completion wait; /* compleation */ 389 + struct completion wait; /* completion */ 390 390 struct block_device *bdev; /* bdev */ 391 391 unsigned short ref; /* reference count */ 392 392 unsigned char state; /* state */ ··· 732 732 block_t m_lblk; 733 733 unsigned int m_len; 734 734 unsigned int m_flags; 735 + unsigned long m_last_pblk; /* last allocated block, only used for DIO in LFS mode */ 735 736 pgoff_t *m_next_pgofs; /* point next possible non-hole pgofs */ 736 737 pgoff_t *m_next_extent; /* point to next possible extent */ 737 738 int m_seg_type; ··· 876 875 /* linked in global inode list for cache donation */ 877 876 struct list_head gdonate_list; 878 877 pgoff_t donate_start, donate_end; /* inclusive */ 878 + atomic_t open_count; /* # of open files */ 879 879 880 880 struct task_struct *atomic_write_task; /* store atomic write task */ 881 881 struct extent_tree *extent_tree[NR_EXTENT_CACHES]; ··· 1125 1123 * f2fs monitors the number of several block types such as on-writeback, 1126 1124 * dirty dentry blocks, dirty node blocks, and dirty meta blocks. 1127 1125 */ 1128 - #define WB_DATA_TYPE(p, f) \ 1129 - (f || f2fs_is_cp_guaranteed(p) ? F2FS_WB_CP_DATA : F2FS_WB_DATA) 1126 + #define WB_DATA_TYPE(folio, f) \ 1127 + (f || f2fs_is_cp_guaranteed(folio) ? 
F2FS_WB_CP_DATA : F2FS_WB_DATA) 1130 1128 enum count_type { 1131 1129 F2FS_DIRTY_DENTS, 1132 1130 F2FS_DIRTY_DATA, ··· 1242 1240 blk_opf_t op_flags; /* req_flag_bits */ 1243 1241 block_t new_blkaddr; /* new block address to be written */ 1244 1242 block_t old_blkaddr; /* old block address before Cow */ 1245 - struct page *page; /* page to be written */ 1243 + union { 1244 + struct page *page; /* page to be written */ 1245 + struct folio *folio; 1246 + }; 1246 1247 struct page *encrypted_page; /* encrypted page */ 1247 1248 struct page *compressed_page; /* compressed page */ 1248 1249 struct list_head list; /* serialize IOs */ ··· 1291 1286 struct f2fs_dev_info { 1292 1287 struct file *bdev_file; 1293 1288 struct block_device *bdev; 1294 - char path[MAX_PATH_LEN]; 1289 + char path[MAX_PATH_LEN + 1]; 1295 1290 unsigned int total_segments; 1296 1291 block_t start_blk; 1297 1292 block_t end_blk; ··· 1432 1427 1433 1428 enum { 1434 1429 MEMORY_MODE_NORMAL, /* memory mode for normal devices */ 1435 - MEMORY_MODE_LOW, /* memory mode for low memry devices */ 1430 + MEMORY_MODE_LOW, /* memory mode for low memory devices */ 1436 1431 }; 1437 1432 1438 1433 enum errors_option { ··· 1496 1491 #define COMPRESS_DATA_RESERVED_SIZE 4 1497 1492 struct compress_data { 1498 1493 __le32 clen; /* compressed data size */ 1499 - __le32 chksum; /* compressed data chksum */ 1494 + __le32 chksum; /* compressed data checksum */ 1500 1495 __le32 reserved[COMPRESS_DATA_RESERVED_SIZE]; /* reserved */ 1501 1496 u8 cdata[]; /* compressed data */ 1502 1497 }; ··· 1541 1536 struct decompress_io_ctx { 1542 1537 u32 magic; /* magic number to indicate page is compressed */ 1543 1538 struct inode *inode; /* inode the context belong to */ 1539 + struct f2fs_sb_info *sbi; /* f2fs_sb_info pointer */ 1544 1540 pgoff_t cluster_idx; /* cluster index number */ 1545 1541 unsigned int cluster_size; /* page count in cluster */ 1546 1542 unsigned int log_cluster_size; /* log of cluster size */ ··· 1582 1576 1583 
1577 bool failed; /* IO error occurred before decompression? */ 1584 1578 bool need_verity; /* need fs-verity verification after decompression? */ 1579 + unsigned char compress_algorithm; /* backup algorithm type */ 1585 1580 void *private; /* payload buffer for specified decompression algorithm */ 1586 1581 void *private2; /* extra payload buffer */ 1587 1582 struct work_struct verity_work; /* work to verify the decompressed pages */ ··· 1730 1723 1731 1724 /* for skip statistic */ 1732 1725 unsigned long long skipped_gc_rwsem; /* FG_GC only */ 1726 + 1727 + /* free sections reserved for pinned file */ 1728 + unsigned int reserved_pin_section; 1733 1729 1734 1730 /* threshold for gc trials on pinned files */ 1735 1731 unsigned short gc_pin_file_threshold; ··· 2023 2013 return F2FS_I_SB(mapping->host); 2024 2014 } 2025 2015 2026 - static inline struct f2fs_sb_info *F2FS_F_SB(struct folio *folio) 2016 + static inline struct f2fs_sb_info *F2FS_F_SB(const struct folio *folio) 2027 2017 { 2028 2018 return F2FS_M_SB(folio->mapping); 2029 - } 2030 - 2031 - static inline struct f2fs_sb_info *F2FS_P_SB(struct page *page) 2032 - { 2033 - return F2FS_F_SB(page_folio(page)); 2034 2019 } 2035 2020 2036 2021 static inline struct f2fs_super_block *F2FS_RAW_SUPER(struct f2fs_sb_info *sbi) ··· 2048 2043 return (struct f2fs_checkpoint *)(sbi->ckpt); 2049 2044 } 2050 2045 2051 - static inline struct f2fs_node *F2FS_NODE(const struct page *page) 2046 + static inline struct f2fs_node *F2FS_NODE(const struct folio *folio) 2052 2047 { 2053 - return (struct f2fs_node *)page_address(page); 2048 + return (struct f2fs_node *)folio_address(folio); 2054 2049 } 2055 2050 2056 - static inline struct f2fs_inode *F2FS_INODE(struct page *page) 2051 + static inline struct f2fs_inode *F2FS_INODE(const struct folio *folio) 2057 2052 { 2058 - return &((struct f2fs_node *)page_address(page))->i; 2053 + return &((struct f2fs_node *)folio_address(folio))->i; 2059 2054 } 2060 2055 2061 2056 static inline 
struct f2fs_nm_info *NM_I(struct f2fs_sb_info *sbi) ··· 2458 2453 } 2459 2454 2460 2455 #define PAGE_PRIVATE_GET_FUNC(name, flagname) \ 2456 + static inline bool folio_test_f2fs_##name(const struct folio *folio) \ 2457 + { \ 2458 + unsigned long priv = (unsigned long)folio->private; \ 2459 + unsigned long v = (1UL << PAGE_PRIVATE_NOT_POINTER) | \ 2460 + (1UL << PAGE_PRIVATE_##flagname); \ 2461 + return (priv & v) == v; \ 2462 + } \ 2461 2463 static inline bool page_private_##name(struct page *page) \ 2462 2464 { \ 2463 2465 return PagePrivate(page) && \ ··· 2473 2461 } 2474 2462 2475 2463 #define PAGE_PRIVATE_SET_FUNC(name, flagname) \ 2464 + static inline void folio_set_f2fs_##name(struct folio *folio) \ 2465 + { \ 2466 + unsigned long v = (1UL << PAGE_PRIVATE_NOT_POINTER) | \ 2467 + (1UL << PAGE_PRIVATE_##flagname); \ 2468 + if (!folio->private) \ 2469 + folio_attach_private(folio, (void *)v); \ 2470 + else { \ 2471 + v |= (unsigned long)folio->private; \ 2472 + folio->private = (void *)v; \ 2473 + } \ 2474 + } \ 2476 2475 static inline void set_page_private_##name(struct page *page) \ 2477 2476 { \ 2478 2477 if (!PagePrivate(page)) \ ··· 2493 2470 } 2494 2471 2495 2472 #define PAGE_PRIVATE_CLEAR_FUNC(name, flagname) \ 2473 + static inline void folio_clear_f2fs_##name(struct folio *folio) \ 2474 + { \ 2475 + unsigned long v = (unsigned long)folio->private; \ 2476 + \ 2477 + v &= ~(1UL << PAGE_PRIVATE_##flagname); \ 2478 + if (v == (1UL << PAGE_PRIVATE_NOT_POINTER)) \ 2479 + folio_detach_private(folio); \ 2480 + else \ 2481 + folio->private = (void *)v; \ 2482 + } \ 2496 2483 static inline void clear_page_private_##name(struct page *page) \ 2497 2484 { \ 2498 2485 clear_bit(PAGE_PRIVATE_##flagname, &page_private(page)); \ ··· 2525 2492 PAGE_PRIVATE_CLEAR_FUNC(gcing, ONGOING_MIGRATION); 2526 2493 PAGE_PRIVATE_CLEAR_FUNC(atomic, ATOMIC_WRITE); 2527 2494 2528 - static inline unsigned long get_page_private_data(struct page *page) 2495 + static inline unsigned long 
folio_get_f2fs_data(struct folio *folio) 2529 2496 { 2530 - unsigned long data = page_private(page); 2497 + unsigned long data = (unsigned long)folio->private; 2531 2498 2532 2499 if (!test_bit(PAGE_PRIVATE_NOT_POINTER, &data)) 2533 2500 return 0; 2534 2501 return data >> PAGE_PRIVATE_MAX; 2535 2502 } 2536 2503 2537 - static inline void set_page_private_data(struct page *page, unsigned long data) 2504 + static inline void folio_set_f2fs_data(struct folio *folio, unsigned long data) 2538 2505 { 2539 - if (!PagePrivate(page)) 2540 - attach_page_private(page, (void *)0); 2541 - set_bit(PAGE_PRIVATE_NOT_POINTER, &page_private(page)); 2542 - page_private(page) |= data << PAGE_PRIVATE_MAX; 2543 - } 2506 + data = (1UL << PAGE_PRIVATE_NOT_POINTER) | (data << PAGE_PRIVATE_MAX); 2544 2507 2545 - static inline void clear_page_private_data(struct page *page) 2546 - { 2547 - page_private(page) &= GENMASK(PAGE_PRIVATE_MAX - 1, 0); 2548 - if (page_private(page) == BIT(PAGE_PRIVATE_NOT_POINTER)) 2549 - detach_page_private(page); 2550 - } 2551 - 2552 - static inline void clear_page_private_all(struct page *page) 2553 - { 2554 - clear_page_private_data(page); 2555 - clear_page_private_reference(page); 2556 - clear_page_private_gcing(page); 2557 - clear_page_private_inline(page); 2558 - clear_page_private_atomic(page); 2559 - 2560 - f2fs_bug_on(F2FS_P_SB(page), page_private(page)); 2508 + if (!folio_test_private(folio)) 2509 + folio_attach_private(folio, (void *)data); 2510 + else 2511 + folio->private = (void *)((unsigned long)folio->private | data); 2561 2512 } 2562 2513 2563 2514 static inline void dec_valid_block_count(struct f2fs_sb_info *sbi, ··· 3028 3011 3029 3012 #define RAW_IS_INODE(p) ((p)->footer.nid == (p)->footer.ino) 3030 3013 3031 - static inline bool IS_INODE(struct page *page) 3014 + static inline bool IS_INODE(const struct folio *folio) 3032 3015 { 3033 - struct f2fs_node *p = F2FS_NODE(page); 3016 + struct f2fs_node *p = F2FS_NODE(folio); 3034 3017 3035 3018 
return RAW_IS_INODE(p); 3036 3019 } ··· 3048 3031 3049 3032 static inline int f2fs_has_extra_attr(struct inode *inode); 3050 3033 static inline unsigned int get_dnode_base(struct inode *inode, 3051 - struct page *node_page) 3034 + struct folio *node_folio) 3052 3035 { 3053 - if (!IS_INODE(node_page)) 3036 + if (!IS_INODE(node_folio)) 3054 3037 return 0; 3055 3038 3056 3039 return inode ? get_extra_isize(inode) : 3057 - offset_in_addr(&F2FS_NODE(node_page)->i); 3040 + offset_in_addr(&F2FS_NODE(node_folio)->i); 3058 3041 } 3059 3042 3060 3043 static inline __le32 *get_dnode_addr(struct inode *inode, 3061 3044 struct folio *node_folio) 3062 3045 { 3063 - return blkaddr_in_node(F2FS_NODE(&node_folio->page)) + 3064 - get_dnode_base(inode, &node_folio->page); 3046 + return blkaddr_in_node(F2FS_NODE(node_folio)) + 3047 + get_dnode_base(inode, node_folio); 3065 3048 } 3066 3049 3067 3050 static inline block_t data_blkaddr(struct inode *inode, ··· 3383 3366 return addrs; 3384 3367 } 3385 3368 3386 - static inline void *inline_xattr_addr(struct inode *inode, struct folio *folio) 3369 + static inline 3370 + void *inline_xattr_addr(struct inode *inode, const struct folio *folio) 3387 3371 { 3388 - struct f2fs_inode *ri = F2FS_INODE(&folio->page); 3372 + struct f2fs_inode *ri = F2FS_INODE(folio); 3389 3373 3390 3374 return (void *)&(ri->i_addr[DEF_ADDRS_PER_INODE - 3391 3375 get_inline_xattr_addrs(inode)]); ··· 3646 3628 */ 3647 3629 void f2fs_set_inode_flags(struct inode *inode); 3648 3630 bool f2fs_inode_chksum_verify(struct f2fs_sb_info *sbi, struct folio *folio); 3649 - void f2fs_inode_chksum_set(struct f2fs_sb_info *sbi, struct page *page); 3631 + void f2fs_inode_chksum_set(struct f2fs_sb_info *sbi, struct folio *folio); 3650 3632 struct inode *f2fs_iget(struct super_block *sb, unsigned long ino); 3651 3633 struct inode *f2fs_iget_retry(struct super_block *sb, unsigned long ino); 3652 3634 int f2fs_try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink); 3653 3635 void 
f2fs_update_inode(struct inode *inode, struct folio *node_folio); 3654 3636 void f2fs_update_inode_page(struct inode *inode); 3655 3637 int f2fs_write_inode(struct inode *inode, struct writeback_control *wbc); 3638 + void f2fs_remove_donate_inode(struct inode *inode); 3656 3639 void f2fs_evict_inode(struct inode *inode); 3657 3640 void f2fs_handle_failed_inode(struct inode *inode); 3658 3641 ··· 3803 3784 void f2fs_alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid); 3804 3785 int f2fs_try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink); 3805 3786 int f2fs_recover_inline_xattr(struct inode *inode, struct folio *folio); 3806 - int f2fs_recover_xattr_data(struct inode *inode, struct page *page); 3807 - int f2fs_recover_inode_page(struct f2fs_sb_info *sbi, struct page *page); 3787 + int f2fs_recover_xattr_data(struct inode *inode, struct folio *folio); 3788 + int f2fs_recover_inode_page(struct f2fs_sb_info *sbi, struct folio *folio); 3808 3789 int f2fs_restore_node_summary(struct f2fs_sb_info *sbi, 3809 3790 unsigned int segno, struct f2fs_summary_block *sum); 3810 3791 int f2fs_flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc); ··· 3871 3852 bool recover_newaddr); 3872 3853 enum temp_type f2fs_get_segment_temp(struct f2fs_sb_info *sbi, 3873 3854 enum log_type seg_type); 3874 - int f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page, 3855 + int f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct folio *folio, 3875 3856 block_t old_blkaddr, block_t *new_blkaddr, 3876 3857 struct f2fs_summary *sum, int type, 3877 3858 struct f2fs_io_info *fio); ··· 3905 3886 3906 3887 static inline struct inode *fio_inode(struct f2fs_io_info *fio) 3907 3888 { 3908 - return page_folio(fio->page)->mapping->host; 3889 + return fio->folio->mapping->host; 3909 3890 } 3910 3891 3911 3892 #define DEF_FRAGMENT_SIZE 4 ··· 3972 3953 */ 3973 3954 int __init f2fs_init_bioset(void); 3974 3955 void f2fs_destroy_bioset(void); 3975 - bool 
f2fs_is_cp_guaranteed(struct page *page); 3956 + bool f2fs_is_cp_guaranteed(const struct folio *folio); 3976 3957 int f2fs_init_bio_entry_cache(void); 3977 3958 void f2fs_destroy_bio_entry_cache(void); 3978 3959 void f2fs_submit_read_bio(struct f2fs_sb_info *sbi, struct bio *bio, ··· 3980 3961 int f2fs_init_write_merge_io(struct f2fs_sb_info *sbi); 3981 3962 void f2fs_submit_merged_write(struct f2fs_sb_info *sbi, enum page_type type); 3982 3963 void f2fs_submit_merged_write_cond(struct f2fs_sb_info *sbi, 3983 - struct inode *inode, struct page *page, 3964 + struct inode *inode, struct folio *folio, 3984 3965 nid_t ino, enum page_type type); 3985 3966 void f2fs_submit_merged_ipu_write(struct f2fs_sb_info *sbi, 3986 3967 struct bio **bio, struct folio *folio); ··· 4322 4303 * inline.c 4323 4304 */ 4324 4305 bool f2fs_may_inline_data(struct inode *inode); 4325 - bool f2fs_sanity_check_inline_data(struct inode *inode, struct page *ipage); 4306 + bool f2fs_sanity_check_inline_data(struct inode *inode, struct folio *ifolio); 4326 4307 bool f2fs_may_inline_dentry(struct inode *inode); 4327 4308 void f2fs_do_read_inline_data(struct folio *folio, struct folio *ifolio); 4328 4309 void f2fs_truncate_inline_inode(struct inode *inode, struct folio *ifolio, ··· 4364 4345 /* 4365 4346 * extent_cache.c 4366 4347 */ 4367 - bool sanity_check_extent_cache(struct inode *inode, struct page *ipage); 4348 + bool sanity_check_extent_cache(struct inode *inode, struct folio *ifolio); 4368 4349 void f2fs_init_extent_tree(struct inode *inode); 4369 4350 void f2fs_drop_extent_tree(struct inode *inode); 4370 4351 void f2fs_destroy_extent_node(struct inode *inode); ··· 4454 4435 CLUSTER_COMPR_BLKS, /* return # of compressed blocks in a cluster */ 4455 4436 CLUSTER_RAW_BLKS /* return # of raw blocks in a cluster */ 4456 4437 }; 4457 - bool f2fs_is_compressed_page(struct page *page); 4438 + bool f2fs_is_compressed_page(struct folio *folio); 4458 4439 struct folio 
*f2fs_compress_control_folio(struct folio *folio); 4459 4440 int f2fs_prepare_compress_overwrite(struct inode *inode, 4460 4441 struct page **pagep, pgoff_t index, void **fsdata); 4461 4442 bool f2fs_compress_write_end(struct inode *inode, void *fsdata, 4462 4443 pgoff_t index, unsigned copied); 4463 4444 int f2fs_truncate_partial_cluster(struct inode *inode, u64 from, bool lock); 4464 - void f2fs_compress_write_end_io(struct bio *bio, struct page *page); 4445 + void f2fs_compress_write_end_io(struct bio *bio, struct folio *folio); 4465 4446 bool f2fs_is_compress_backend_ready(struct inode *inode); 4466 4447 bool f2fs_is_compress_level_valid(int alg, int lvl); 4467 4448 int __init f2fs_init_compress_mempool(void); 4468 4449 void f2fs_destroy_compress_mempool(void); 4469 4450 void f2fs_decompress_cluster(struct decompress_io_ctx *dic, bool in_task); 4470 - void f2fs_end_read_compressed_page(struct page *page, bool failed, 4451 + void f2fs_end_read_compressed_page(struct folio *folio, bool failed, 4471 4452 block_t blkaddr, bool in_task); 4472 4453 bool f2fs_cluster_is_empty(struct compress_ctx *cc); 4473 4454 bool f2fs_cluster_can_merge_page(struct compress_ctx *cc, pgoff_t index); ··· 4505 4486 struct address_space *COMPRESS_MAPPING(struct f2fs_sb_info *sbi); 4506 4487 void f2fs_invalidate_compress_pages_range(struct f2fs_sb_info *sbi, 4507 4488 block_t blkaddr, unsigned int len); 4508 - void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page, 4509 - nid_t ino, block_t blkaddr); 4510 4489 bool f2fs_load_compressed_folio(struct f2fs_sb_info *sbi, struct folio *folio, 4511 4490 block_t blkaddr); 4512 4491 void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi, nid_t ino); ··· 4521 4504 sbi->compr_saved_block += diff; \ 4522 4505 } while (0) 4523 4506 #else 4524 - static inline bool f2fs_is_compressed_page(struct page *page) { return false; } 4507 + static inline bool f2fs_is_compressed_page(struct folio *folio) { return false; } 4525 4508 
static inline bool f2fs_is_compress_backend_ready(struct inode *inode) 4526 4509 { 4527 4510 if (!f2fs_compressed_file(inode)) ··· 4539 4522 static inline void f2fs_destroy_compress_mempool(void) { } 4540 4523 static inline void f2fs_decompress_cluster(struct decompress_io_ctx *dic, 4541 4524 bool in_task) { } 4542 - static inline void f2fs_end_read_compressed_page(struct page *page, 4525 + static inline void f2fs_end_read_compressed_page(struct folio *folio, 4543 4526 bool failed, block_t blkaddr, bool in_task) 4544 4527 { 4545 4528 WARN_ON_ONCE(1); ··· 4559 4542 static inline void f2fs_destroy_compress_cache(void) { } 4560 4543 static inline void f2fs_invalidate_compress_pages_range(struct f2fs_sb_info *sbi, 4561 4544 block_t blkaddr, unsigned int len) { } 4562 - static inline void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, 4563 - struct page *page, nid_t ino, block_t blkaddr) { } 4564 4545 static inline bool f2fs_load_compressed_folio(struct f2fs_sb_info *sbi, 4565 4546 struct folio *folio, block_t blkaddr) { return false; } 4566 4547 static inline void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi,
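The f2fs.h hunks above add folio-native variants (`folio_test_f2fs_*`, `folio_set_f2fs_*`, `folio_clear_f2fs_*`, `folio_{get,set}_f2fs_data`) of the old page-private helpers. The scheme packs everything into the single `folio->private` word: bit `PAGE_PRIVATE_NOT_POINTER` marks the word as a flag set rather than a pointer, per-state flags occupy the next bits, and payload data is shifted above `PAGE_PRIVATE_MAX`. The following self-contained sketch reproduces that encoding; the enum values and `struct folio_demo` are illustrative, the real definitions live in f2fs.h:

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative flag layout; the kernel enum in f2fs.h is authoritative. */
enum {
	PAGE_PRIVATE_NOT_POINTER,	/* private is not a pointer */
	PAGE_PRIVATE_ONGOING_MIGRATION,
	PAGE_PRIVATE_ATOMIC_WRITE,
	PAGE_PRIVATE_INLINE_INODE,
	PAGE_PRIVATE_MAX,
};

struct folio_demo { void *private; };

static bool demo_test_flag(struct folio_demo *f, int flag)
{
	unsigned long v = (1UL << PAGE_PRIVATE_NOT_POINTER) | (1UL << flag);

	return ((unsigned long)f->private & v) == v;
}

static void demo_set_flag(struct folio_demo *f, int flag)
{
	unsigned long v = (1UL << PAGE_PRIVATE_NOT_POINTER) | (1UL << flag);

	f->private = (void *)((unsigned long)f->private | v);
}

static void demo_clear_flag(struct folio_demo *f, int flag)
{
	unsigned long v = (unsigned long)f->private;

	v &= ~(1UL << flag);
	/* detach entirely once only the marker bit remains */
	f->private = (v == (1UL << PAGE_PRIVATE_NOT_POINTER)) ?
						NULL : (void *)v;
}

static unsigned long demo_get_data(struct folio_demo *f)
{
	unsigned long v = (unsigned long)f->private;

	if (!(v & (1UL << PAGE_PRIVATE_NOT_POINTER)))
		return 0;	/* private is a real pointer, no payload */
	return v >> PAGE_PRIVATE_MAX;
}

static void demo_set_data(struct folio_demo *f, unsigned long data)
{
	unsigned long v = (1UL << PAGE_PRIVATE_NOT_POINTER) |
			  (data << PAGE_PRIVATE_MAX);

	f->private = (void *)((unsigned long)f->private | v);
}
```

This also explains why `clear_page_private_all()` could be replaced by plain `folio_detach_private()` in the data.c and dir.c hunks: dropping the whole word clears every flag and the payload at once.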
+62 -45
fs/f2fs/file.c
··· 489 489 } 490 490 } 491 491 492 - end_offset = ADDRS_PER_PAGE(&dn.node_folio->page, inode); 492 + end_offset = ADDRS_PER_PAGE(dn.node_folio, inode); 493 493 494 494 /* find data/hole in dnode block */ 495 495 for (; dn.ofs_in_node < end_offset; ··· 629 629 if (err) 630 630 return err; 631 631 632 - return finish_preallocate_blocks(inode); 632 + err = finish_preallocate_blocks(inode); 633 + if (!err) 634 + atomic_inc(&F2FS_I(inode)->open_count); 635 + return err; 633 636 } 634 637 635 638 void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count) ··· 711 708 * once we invalidate valid blkaddr in range [ofs, ofs + count], 712 709 * we will invalidate all blkaddr in the whole range. 713 710 */ 714 - fofs = f2fs_start_bidx_of_node(ofs_of_node(&dn->node_folio->page), 711 + fofs = f2fs_start_bidx_of_node(ofs_of_node(dn->node_folio), 715 712 dn->inode) + ofs; 716 713 f2fs_update_read_extent_cache_range(dn, fofs, 0, len); 717 714 f2fs_update_age_extent_cache_range(dn, fofs, len); ··· 818 815 goto out; 819 816 } 820 817 821 - count = ADDRS_PER_PAGE(&dn.node_folio->page, inode); 818 + count = ADDRS_PER_PAGE(dn.node_folio, inode); 822 819 823 820 count -= dn.ofs_in_node; 824 821 f2fs_bug_on(sbi, count < 0); 825 822 826 - if (dn.ofs_in_node || IS_INODE(&dn.node_folio->page)) { 823 + if (dn.ofs_in_node || IS_INODE(dn.node_folio)) { 827 824 f2fs_truncate_data_blocks_range(&dn, count); 828 825 free_from += count; 829 826 } ··· 1046 1043 { 1047 1044 struct inode *inode = d_inode(dentry); 1048 1045 struct f2fs_inode_info *fi = F2FS_I(inode); 1046 + struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 1049 1047 int err; 1050 1048 1051 - if (unlikely(f2fs_cp_error(F2FS_I_SB(inode)))) 1049 + if (unlikely(f2fs_cp_error(sbi))) 1052 1050 return -EIO; 1051 + 1052 + err = setattr_prepare(idmap, dentry, attr); 1053 + if (err) 1054 + return err; 1055 + 1056 + err = fscrypt_prepare_setattr(dentry, attr); 1057 + if (err) 1058 + return err; 1059 + 1060 + err = 
fsverity_prepare_setattr(dentry, attr); 1061 + if (err) 1062 + return err; 1053 1063 1054 1064 if (unlikely(IS_IMMUTABLE(inode))) 1055 1065 return -EPERM; ··· 1080 1064 !IS_ALIGNED(attr->ia_size, 1081 1065 F2FS_BLK_TO_BYTES(fi->i_cluster_size))) 1082 1066 return -EINVAL; 1067 + /* 1068 + * To prevent scattered pin block generation, we don't allow 1069 + * smaller/equal size unaligned truncation for pinned file. 1070 + * We only support overwrite IO to pinned file, so don't 1071 + * care about larger size truncation. 1072 + */ 1073 + if (f2fs_is_pinned_file(inode) && 1074 + attr->ia_size <= i_size_read(inode) && 1075 + !IS_ALIGNED(attr->ia_size, 1076 + F2FS_BLK_TO_BYTES(CAP_BLKS_PER_SEC(sbi)))) 1077 + return -EINVAL; 1083 1078 } 1084 - 1085 - err = setattr_prepare(idmap, dentry, attr); 1086 - if (err) 1087 - return err; 1088 - 1089 - err = fscrypt_prepare_setattr(dentry, attr); 1090 - if (err) 1091 - return err; 1092 - 1093 - err = fsverity_prepare_setattr(dentry, attr); 1094 - if (err) 1095 - return err; 1096 1079 1097 1080 if (is_quota_modification(idmap, inode, attr)) { 1098 1081 err = f2fs_dquot_initialize(inode); ··· 1100 1085 } 1101 1086 if (i_uid_needs_update(idmap, attr, inode) || 1102 1087 i_gid_needs_update(idmap, attr, inode)) { 1103 - f2fs_lock_op(F2FS_I_SB(inode)); 1088 + f2fs_lock_op(sbi); 1104 1089 err = dquot_transfer(idmap, inode, attr); 1105 1090 if (err) { 1106 - set_sbi_flag(F2FS_I_SB(inode), 1107 - SBI_QUOTA_NEED_REPAIR); 1108 - f2fs_unlock_op(F2FS_I_SB(inode)); 1091 + set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR); 1092 + f2fs_unlock_op(sbi); 1109 1093 return err; 1110 1094 } 1111 1095 /* ··· 1114 1100 i_uid_update(idmap, attr, inode); 1115 1101 i_gid_update(idmap, attr, inode); 1116 1102 f2fs_mark_inode_dirty_sync(inode, true); 1117 - f2fs_unlock_op(F2FS_I_SB(inode)); 1103 + f2fs_unlock_op(sbi); 1118 1104 } 1119 1105 1120 1106 if (attr->ia_valid & ATTR_SIZE) { ··· 1177 1163 f2fs_mark_inode_dirty_sync(inode, true); 1178 1164 1179 1165 /* inode change 
will produce dirty node pages flushed by checkpoint */ 1180 - f2fs_balance_fs(F2FS_I_SB(inode), true); 1166 + f2fs_balance_fs(sbi, true); 1181 1167 1182 1168 return err; 1183 1169 } ··· 1237 1223 return err; 1238 1224 } 1239 1225 1240 - end_offset = ADDRS_PER_PAGE(&dn.node_folio->page, inode); 1226 + end_offset = ADDRS_PER_PAGE(dn.node_folio, inode); 1241 1227 count = min(end_offset - dn.ofs_in_node, pg_end - pg_start); 1242 1228 1243 1229 f2fs_bug_on(F2FS_I_SB(inode), count == 0 || count > end_offset); ··· 1336 1322 goto next; 1337 1323 } 1338 1324 1339 - done = min((pgoff_t)ADDRS_PER_PAGE(&dn.node_folio->page, inode) - 1325 + done = min((pgoff_t)ADDRS_PER_PAGE(dn.node_folio, inode) - 1340 1326 dn.ofs_in_node, len); 1341 1327 for (i = 0; i < done; i++, blkaddr++, do_replace++, dn.ofs_in_node++) { 1342 1328 *blkaddr = f2fs_data_blkaddr(&dn); ··· 1425 1411 } 1426 1412 1427 1413 ilen = min((pgoff_t) 1428 - ADDRS_PER_PAGE(&dn.node_folio->page, dst_inode) - 1414 + ADDRS_PER_PAGE(dn.node_folio, dst_inode) - 1429 1415 dn.ofs_in_node, len - i); 1430 1416 do { 1431 1417 dn.data_blkaddr = f2fs_data_blkaddr(&dn); ··· 1467 1453 1468 1454 memcpy_folio(fdst, 0, fsrc, 0, PAGE_SIZE); 1469 1455 folio_mark_dirty(fdst); 1470 - set_page_private_gcing(&fdst->page); 1456 + folio_set_f2fs_gcing(fdst); 1471 1457 f2fs_folio_put(fdst, true); 1472 1458 f2fs_folio_put(fsrc, true); 1473 1459 ··· 1721 1707 goto out; 1722 1708 } 1723 1709 1724 - end_offset = ADDRS_PER_PAGE(&dn.node_folio->page, inode); 1710 + end_offset = ADDRS_PER_PAGE(dn.node_folio, inode); 1725 1711 end = min(pg_end, end_offset - dn.ofs_in_node + index); 1726 1712 1727 1713 ret = f2fs_do_zero_range(&dn, index, end); ··· 1902 1888 } 1903 1889 } 1904 1890 1905 - if (has_not_enough_free_secs(sbi, 0, f2fs_sb_has_blkzoned(sbi) ? 
1906 - ZONED_PIN_SEC_REQUIRED_COUNT : 1907 - GET_SEC_FROM_SEG(sbi, overprovision_segments(sbi)))) { 1891 + if (has_not_enough_free_secs(sbi, 0, 1892 + sbi->reserved_pin_section)) { 1908 1893 f2fs_down_write(&sbi->gc_lock); 1909 1894 stat_inc_gc_call_count(sbi, FOREGROUND); 1910 1895 err = f2fs_gc(sbi, &gc_control); ··· 2041 2028 2042 2029 static int f2fs_release_file(struct inode *inode, struct file *filp) 2043 2030 { 2031 + if (atomic_dec_and_test(&F2FS_I(inode)->open_count)) 2032 + f2fs_remove_donate_inode(inode); 2033 + 2044 2034 /* 2045 2035 * f2fs_release_file is called at every close calls. So we should 2046 2036 * not drop any inmemory pages by close called by other process. ··· 2994 2978 f2fs_folio_wait_writeback(folio, DATA, true, true); 2995 2979 2996 2980 folio_mark_dirty(folio); 2997 - set_page_private_gcing(&folio->page); 2981 + folio_set_f2fs_gcing(folio); 2998 2982 f2fs_folio_put(folio, true); 2999 2983 3000 2984 idx++; ··· 3892 3876 break; 3893 3877 } 3894 3878 3895 - end_offset = ADDRS_PER_PAGE(&dn.node_folio->page, inode); 3879 + end_offset = ADDRS_PER_PAGE(dn.node_folio, inode); 3896 3880 count = min(end_offset - dn.ofs_in_node, last_idx - page_idx); 3897 3881 count = round_up(count, fi->i_cluster_size); 3898 3882 ··· 4070 4054 break; 4071 4055 } 4072 4056 4073 - end_offset = ADDRS_PER_PAGE(&dn.node_folio->page, inode); 4057 + end_offset = ADDRS_PER_PAGE(dn.node_folio, inode); 4074 4058 count = min(end_offset - dn.ofs_in_node, last_idx - page_idx); 4075 4059 count = round_up(count, fi->i_cluster_size); 4076 4060 ··· 4234 4218 goto out; 4235 4219 } 4236 4220 4237 - end_offset = ADDRS_PER_PAGE(&dn.node_folio->page, inode); 4221 + end_offset = ADDRS_PER_PAGE(dn.node_folio, inode); 4238 4222 count = min(end_offset - dn.ofs_in_node, pg_end - index); 4239 4223 for (i = 0; i < count; i++, index++, dn.ofs_in_node++) { 4240 4224 struct block_device *cur_bdev; ··· 4431 4415 f2fs_folio_wait_writeback(folio, DATA, true, true); 4432 4416 4433 4417 
folio_mark_dirty(folio); 4434 - set_page_private_gcing(&folio->page); 4418 + folio_set_f2fs_gcing(folio); 4435 4419 redirty_idx = folio_next_index(folio); 4436 4420 folio_unlock(folio); 4437 4421 folio_put_refs(folio, 2); ··· 4841 4825 struct inode *inode = file_inode(iocb->ki_filp); 4842 4826 const loff_t pos = iocb->ki_pos; 4843 4827 ssize_t ret; 4828 + bool dio; 4844 4829 4845 4830 if (!f2fs_is_compress_backend_ready(inode)) 4846 4831 return -EOPNOTSUPP; ··· 4850 4833 f2fs_trace_rw_file_path(iocb->ki_filp, iocb->ki_pos, 4851 4834 iov_iter_count(to), READ); 4852 4835 4836 + dio = f2fs_should_use_dio(inode, iocb, to); 4837 + 4853 4838 /* In LFS mode, if there is inflight dio, wait for its completion */ 4854 4839 if (f2fs_lfs_mode(F2FS_I_SB(inode)) && 4855 - get_pages(F2FS_I_SB(inode), F2FS_DIO_WRITE)) 4840 + get_pages(F2FS_I_SB(inode), F2FS_DIO_WRITE) && 4841 + (!f2fs_is_pinned_file(inode) || !dio)) 4856 4842 inode_dio_wait(inode); 4857 4843 4858 - if (f2fs_should_use_dio(inode, iocb, to)) { 4844 + if (dio) { 4859 4845 ret = f2fs_dio_read_iter(iocb, to); 4860 4846 } else { 4861 4847 ret = filemap_read(iocb, to, 0); ··· 4866 4846 f2fs_update_iostat(F2FS_I_SB(inode), inode, 4867 4847 APP_BUFFERED_READ_IO, ret); 4868 4848 } 4869 - if (trace_f2fs_dataread_end_enabled()) 4870 - trace_f2fs_dataread_end(inode, pos, ret); 4849 + trace_f2fs_dataread_end(inode, pos, ret); 4871 4850 return ret; 4872 4851 } 4873 4852 ··· 4889 4870 f2fs_update_iostat(F2FS_I_SB(inode), inode, 4890 4871 APP_BUFFERED_READ_IO, ret); 4891 4872 4892 - if (trace_f2fs_dataread_end_enabled()) 4893 - trace_f2fs_dataread_end(inode, pos, ret); 4873 + trace_f2fs_dataread_end(inode, pos, ret); 4894 4874 return ret; 4895 4875 } 4896 4876 ··· 5234 5216 f2fs_dio_write_iter(iocb, from, &may_need_sync) : 5235 5217 f2fs_buffered_write_iter(iocb, from); 5236 5218 5237 - if (trace_f2fs_datawrite_end_enabled()) 5238 - trace_f2fs_datawrite_end(inode, orig_pos, ret); 5219 + trace_f2fs_datawrite_end(inode, orig_pos, 
ret); 5239 5220 } 5240 5221 5241 5222 /* Don't leave any preallocated blocks around past i_size. */
+29 -25
fs/f2fs/gc.c
··· 141 141 FOREGROUND : BACKGROUND); 142 142 143 143 sync_mode = (F2FS_OPTION(sbi).bggc_mode == BGGC_MODE_SYNC) || 144 - gc_control.one_time; 144 + (gc_control.one_time && gc_th->boost_gc_greedy); 145 145 146 146 /* foreground GC was been triggered via f2fs_balance_fs() */ 147 - if (foreground) 147 + if (foreground && !f2fs_sb_has_blkzoned(sbi)) 148 148 sync_mode = false; 149 149 150 150 gc_control.init_gc_type = sync_mode ? FG_GC : BG_GC; ··· 197 197 198 198 gc_th->urgent_sleep_time = DEF_GC_THREAD_URGENT_SLEEP_TIME; 199 199 gc_th->valid_thresh_ratio = DEF_GC_THREAD_VALID_THRESH_RATIO; 200 + gc_th->boost_gc_multiple = BOOST_GC_MULTIPLE; 201 + gc_th->boost_gc_greedy = GC_GREEDY; 200 202 201 203 if (f2fs_sb_has_blkzoned(sbi)) { 202 204 gc_th->min_sleep_time = DEF_GC_THREAD_MIN_SLEEP_TIME_ZONED; ··· 280 278 { 281 279 struct dirty_seglist_info *dirty_i = DIRTY_I(sbi); 282 280 283 - if (p->alloc_mode == SSR) { 284 - p->gc_mode = GC_GREEDY; 285 - p->dirty_bitmap = dirty_i->dirty_segmap[type]; 286 - p->max_search = dirty_i->nr_dirty[type]; 287 - p->ofs_unit = 1; 288 - } else if (p->alloc_mode == AT_SSR) { 281 + if (p->alloc_mode == SSR || p->alloc_mode == AT_SSR) { 289 282 p->gc_mode = GC_GREEDY; 290 283 p->dirty_bitmap = dirty_i->dirty_segmap[type]; 291 284 p->max_search = dirty_i->nr_dirty[type]; ··· 386 389 } 387 390 388 391 static inline unsigned int get_gc_cost(struct f2fs_sb_info *sbi, 389 - unsigned int segno, struct victim_sel_policy *p) 392 + unsigned int segno, struct victim_sel_policy *p, 393 + unsigned int valid_thresh_ratio) 390 394 { 391 395 if (p->alloc_mode == SSR) 392 396 return get_seg_entry(sbi, segno)->ckpt_valid_blocks; 393 397 394 - if (p->one_time_gc && (get_valid_blocks(sbi, segno, true) >= 395 - CAP_BLKS_PER_SEC(sbi) * sbi->gc_thread->valid_thresh_ratio / 396 - 100)) 398 + if (p->one_time_gc && (valid_thresh_ratio < 100) && 399 + (get_valid_blocks(sbi, segno, true) >= 400 + CAP_BLKS_PER_SEC(sbi) * valid_thresh_ratio / 100)) 397 401 return 
UINT_MAX; 398 402 399 403 /* alloc_mode == LFS */ ··· 775 777 unsigned int secno, last_victim; 776 778 unsigned int last_segment; 777 779 unsigned int nsearched; 780 + unsigned int valid_thresh_ratio = 100; 778 781 bool is_atgc; 779 782 int ret = 0; 780 783 ··· 785 786 p.alloc_mode = alloc_mode; 786 787 p.age = age; 787 788 p.age_threshold = sbi->am.age_threshold; 788 - p.one_time_gc = one_time; 789 + if (one_time) { 790 + p.one_time_gc = one_time; 791 + if (has_enough_free_secs(sbi, 0, NR_PERSISTENT_LOG)) 792 + valid_thresh_ratio = sbi->gc_thread->valid_thresh_ratio; 793 + } 789 794 790 795 retry: 791 796 select_policy(sbi, gc_type, type, &p); ··· 915 912 goto next; 916 913 } 917 914 918 - cost = get_gc_cost(sbi, segno, &p); 915 + cost = get_gc_cost(sbi, segno, &p, valid_thresh_ratio); 919 916 920 917 if (p.min_cost > cost) { 921 918 p.min_segno = segno; ··· 1165 1162 return false; 1166 1163 } 1167 1164 1168 - if (IS_INODE(&node_folio->page)) { 1169 - base = offset_in_addr(F2FS_INODE(&node_folio->page)); 1165 + if (IS_INODE(node_folio)) { 1166 + base = offset_in_addr(F2FS_INODE(node_folio)); 1170 1167 max_addrs = DEF_ADDRS_PER_INODE; 1171 1168 } else { 1172 1169 base = 0; ··· 1180 1177 return false; 1181 1178 } 1182 1179 1183 - *nofs = ofs_of_node(&node_folio->page); 1180 + *nofs = ofs_of_node(node_folio); 1184 1181 source_blkaddr = data_blkaddr(NULL, node_folio, ofs_in_node); 1185 1182 f2fs_folio_put(node_folio, true); 1186 1183 ··· 1252 1249 } 1253 1250 got_it: 1254 1251 /* read folio */ 1255 - fio.page = &folio->page; 1252 + fio.folio = folio; 1256 1253 fio.new_blkaddr = fio.old_blkaddr = dn.data_blkaddr; 1257 1254 1258 1255 /* ··· 1356 1353 goto put_out; 1357 1354 1358 1355 /* read page */ 1359 - fio.page = &folio->page; 1356 + fio.folio = folio; 1360 1357 fio.new_blkaddr = fio.old_blkaddr = dn.data_blkaddr; 1361 1358 1362 1359 if (lfs_mode) ··· 1476 1473 goto out; 1477 1474 } 1478 1475 folio_mark_dirty(folio); 1479 - set_page_private_gcing(&folio->page); 1476 
+ folio_set_f2fs_gcing(folio); 1480 1477 } else { 1481 1478 struct f2fs_io_info fio = { 1482 1479 .sbi = F2FS_I_SB(inode), ··· 1486 1483 .op = REQ_OP_WRITE, 1487 1484 .op_flags = REQ_SYNC, 1488 1485 .old_blkaddr = NULL_ADDR, 1489 - .page = &folio->page, 1486 + .folio = folio, 1490 1487 .encrypted_page = NULL, 1491 1488 .need_lock = LOCK_REQ, 1492 1489 .io_type = FS_GC_DATA_IO, ··· 1502 1499 f2fs_remove_dirty_inode(inode); 1503 1500 } 1504 1501 1505 - set_page_private_gcing(&folio->page); 1502 + folio_set_f2fs_gcing(folio); 1506 1503 1507 1504 err = f2fs_do_write_data_page(&fio); 1508 1505 if (err) { 1509 - clear_page_private_gcing(&folio->page); 1506 + folio_clear_f2fs_gcing(folio); 1510 1507 if (err == -ENOMEM) { 1511 1508 memalloc_retry_wait(GFP_NOFS); 1512 1509 goto retry; ··· 1752 1749 !has_enough_free_blocks(sbi, 1753 1750 sbi->gc_thread->boost_zoned_gc_percent)) 1754 1751 window_granularity *= 1755 - BOOST_GC_MULTIPLE; 1752 + sbi->gc_thread->boost_gc_multiple; 1756 1753 1757 1754 end_segno = start_segno + window_granularity; 1758 1755 } ··· 1894 1891 /* Let's run FG_GC, if we don't have enough space. */ 1895 1892 if (has_not_enough_free_secs(sbi, 0, 0)) { 1896 1893 gc_type = FG_GC; 1894 + gc_control->one_time = false; 1897 1895 1898 1896 /* 1899 1897 * For example, if there are many prefree_segments below given ··· 2068 2064 .iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS), 2069 2065 }; 2070 2066 2071 - if (IS_CURSEC(sbi, GET_SEC_FROM_SEG(sbi, segno))) 2067 + if (is_cursec(sbi, GET_SEC_FROM_SEG(sbi, segno))) 2072 2068 continue; 2073 2069 2074 2070 do_garbage_collect(sbi, segno, &gc_list, FG_GC, true, false);
+4 -1
fs/f2fs/gc.h
··· 68 68 unsigned int no_zoned_gc_percent; 69 69 unsigned int boost_zoned_gc_percent; 70 70 unsigned int valid_thresh_ratio; 71 + unsigned int boost_gc_multiple; 72 + unsigned int boost_gc_greedy; 71 73 }; 72 74 73 75 struct gc_inode_list { ··· 196 194 static inline bool need_to_boost_gc(struct f2fs_sb_info *sbi) 197 195 { 198 196 if (f2fs_sb_has_blkzoned(sbi)) 199 - return !has_enough_free_blocks(sbi, LIMIT_BOOST_ZONED_GC); 197 + return !has_enough_free_blocks(sbi, 198 + sbi->gc_thread->boost_zoned_gc_percent); 200 199 return has_enough_invalid_blocks(sbi); 201 200 }
+10 -10
fs/f2fs/inline.c
··· 33 33 return !f2fs_post_read_required(inode); 34 34 } 35 35 36 - static bool inode_has_blocks(struct inode *inode, struct page *ipage) 36 + static bool inode_has_blocks(struct inode *inode, struct folio *ifolio) 37 37 { 38 - struct f2fs_inode *ri = F2FS_INODE(ipage); 38 + struct f2fs_inode *ri = F2FS_INODE(ifolio); 39 39 int i; 40 40 41 41 if (F2FS_HAS_BLOCKS(inode)) ··· 48 48 return false; 49 49 } 50 50 51 - bool f2fs_sanity_check_inline_data(struct inode *inode, struct page *ipage) 51 + bool f2fs_sanity_check_inline_data(struct inode *inode, struct folio *ifolio) 52 52 { 53 53 if (!f2fs_has_inline_data(inode)) 54 54 return false; 55 55 56 - if (inode_has_blocks(inode, ipage)) 56 + if (inode_has_blocks(inode, ifolio)) 57 57 return false; 58 58 59 59 if (!support_inline_data(inode)) ··· 150 150 .type = DATA, 151 151 .op = REQ_OP_WRITE, 152 152 .op_flags = REQ_SYNC | REQ_PRIO, 153 - .page = &folio->page, 153 + .folio = folio, 154 154 .encrypted_page = NULL, 155 155 .io_type = FS_DATA_IO, 156 156 }; ··· 206 206 207 207 /* clear inline data and flag after data writeback */ 208 208 f2fs_truncate_inline_inode(dn->inode, dn->inode_folio, 0); 209 - clear_page_private_inline(&dn->inode_folio->page); 209 + folio_clear_f2fs_inline(dn->inode_folio); 210 210 clear_out: 211 211 stat_dec_inline_inode(dn->inode); 212 212 clear_inode_flag(dn->inode, FI_INLINE_DATA); ··· 286 286 set_inode_flag(inode, FI_APPEND_WRITE); 287 287 set_inode_flag(inode, FI_DATA_EXIST); 288 288 289 - clear_page_private_inline(&ifolio->page); 289 + folio_clear_f2fs_inline(ifolio); 290 290 f2fs_folio_put(ifolio, 1); 291 291 return 0; 292 292 } ··· 305 305 * x o -> remove data blocks, and then recover inline_data 306 306 * x x -> recover data blocks 307 307 */ 308 - if (IS_INODE(&nfolio->page)) 309 - ri = F2FS_INODE(&nfolio->page); 308 + if (IS_INODE(nfolio)) 309 + ri = F2FS_INODE(nfolio); 310 310 311 311 if (f2fs_has_inline_data(inode) && 312 312 ri && (ri->i_inline & F2FS_INLINE_DATA)) { ··· 825 825 
826 826 byteaddr = (__u64)ni.blk_addr << inode->i_sb->s_blocksize_bits; 827 827 byteaddr += (char *)inline_data_addr(inode, ifolio) - 828 - (char *)F2FS_INODE(&ifolio->page); 828 + (char *)F2FS_INODE(ifolio); 829 829 err = fiemap_fill_next_extent(fieinfo, start, byteaddr, ilen, flags); 830 830 trace_f2fs_fiemap(inode, start, byteaddr, ilen, flags, err); 831 831 out:
+51 -33
fs/f2fs/inode.c
··· 108 108 f2fs_folio_wait_writeback(ifolio, NODE, true, true); 109 109 110 110 set_inode_flag(inode, FI_DATA_EXIST); 111 - set_raw_inline(inode, F2FS_INODE(&ifolio->page)); 111 + set_raw_inline(inode, F2FS_INODE(ifolio)); 112 112 folio_mark_dirty(ifolio); 113 113 return; 114 114 } ··· 116 116 return; 117 117 } 118 118 119 - static bool f2fs_enable_inode_chksum(struct f2fs_sb_info *sbi, struct page *page) 119 + static 120 + bool f2fs_enable_inode_chksum(struct f2fs_sb_info *sbi, struct folio *folio) 120 121 { 121 - struct f2fs_inode *ri = &F2FS_NODE(page)->i; 122 + struct f2fs_inode *ri = &F2FS_NODE(folio)->i; 122 123 123 124 if (!f2fs_sb_has_inode_chksum(sbi)) 124 125 return false; 125 126 126 - if (!IS_INODE(page) || !(ri->i_inline & F2FS_EXTRA_ATTR)) 127 + if (!IS_INODE(folio) || !(ri->i_inline & F2FS_EXTRA_ATTR)) 127 128 return false; 128 129 129 130 if (!F2FS_FITS_IN_INODE(ri, le16_to_cpu(ri->i_extra_isize), ··· 134 133 return true; 135 134 } 136 135 137 - static __u32 f2fs_inode_chksum(struct f2fs_sb_info *sbi, struct page *page) 136 + static __u32 f2fs_inode_chksum(struct f2fs_sb_info *sbi, struct folio *folio) 138 137 { 139 - struct f2fs_node *node = F2FS_NODE(page); 138 + struct f2fs_node *node = F2FS_NODE(folio); 140 139 struct f2fs_inode *ri = &node->i; 141 140 __le32 ino = node->footer.ino; 142 141 __le32 gen = ri->i_generation; ··· 165 164 return true; 166 165 167 166 #ifdef CONFIG_F2FS_CHECK_FS 168 - if (!f2fs_enable_inode_chksum(sbi, &folio->page)) 167 + if (!f2fs_enable_inode_chksum(sbi, folio)) 169 168 #else 170 - if (!f2fs_enable_inode_chksum(sbi, &folio->page) || 169 + if (!f2fs_enable_inode_chksum(sbi, folio) || 171 170 folio_test_dirty(folio) || 172 171 folio_test_writeback(folio)) 173 172 #endif 174 173 return true; 175 174 176 - ri = &F2FS_NODE(&folio->page)->i; 175 + ri = &F2FS_NODE(folio)->i; 177 176 provided = le32_to_cpu(ri->i_inode_checksum); 178 - calculated = f2fs_inode_chksum(sbi, &folio->page); 177 + calculated = 
f2fs_inode_chksum(sbi, folio); 179 178 180 179 if (provided != calculated) 181 180 f2fs_warn(sbi, "checksum invalid, nid = %lu, ino_of_node = %x, %x vs. %x", 182 - folio->index, ino_of_node(&folio->page), 181 + folio->index, ino_of_node(folio), 183 182 provided, calculated); 184 183 185 184 return provided == calculated; 186 185 } 187 186 188 - void f2fs_inode_chksum_set(struct f2fs_sb_info *sbi, struct page *page) 187 + void f2fs_inode_chksum_set(struct f2fs_sb_info *sbi, struct folio *folio) 189 188 { 190 - struct f2fs_inode *ri = &F2FS_NODE(page)->i; 189 + struct f2fs_inode *ri = &F2FS_NODE(folio)->i; 191 190 192 - if (!f2fs_enable_inode_chksum(sbi, page)) 191 + if (!f2fs_enable_inode_chksum(sbi, folio)) 193 192 return; 194 193 195 - ri->i_inode_checksum = cpu_to_le32(f2fs_inode_chksum(sbi, page)); 194 + ri->i_inode_checksum = cpu_to_le32(f2fs_inode_chksum(sbi, folio)); 196 195 } 197 196 198 197 static bool sanity_check_compress_inode(struct inode *inode, ··· 267 266 return false; 268 267 } 269 268 270 - static bool sanity_check_inode(struct inode *inode, struct page *node_page) 269 + static bool sanity_check_inode(struct inode *inode, struct folio *node_folio) 271 270 { 272 271 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 273 272 struct f2fs_inode_info *fi = F2FS_I(inode); 274 - struct f2fs_inode *ri = F2FS_INODE(node_page); 273 + struct f2fs_inode *ri = F2FS_INODE(node_folio); 275 274 unsigned long long iblocks; 276 275 277 - iblocks = le64_to_cpu(F2FS_INODE(node_page)->i_blocks); 276 + iblocks = le64_to_cpu(F2FS_INODE(node_folio)->i_blocks); 278 277 if (!iblocks) { 279 278 f2fs_warn(sbi, "%s: corrupted inode i_blocks i_ino=%lx iblocks=%llu, run fsck to fix.", 280 279 __func__, inode->i_ino, iblocks); 281 280 return false; 282 281 } 283 282 284 - if (ino_of_node(node_page) != nid_of_node(node_page)) { 283 + if (ino_of_node(node_folio) != nid_of_node(node_folio)) { 285 284 f2fs_warn(sbi, "%s: corrupted inode footer i_ino=%lx, ino,nid: [%u, %u] run fsck to 
fix.", 286 285 __func__, inode->i_ino, 287 - ino_of_node(node_page), nid_of_node(node_page)); 286 + ino_of_node(node_folio), nid_of_node(node_folio)); 288 287 return false; 289 288 } 290 289 291 - if (ino_of_node(node_page) == fi->i_xattr_nid) { 290 + if (ino_of_node(node_folio) == fi->i_xattr_nid) { 292 291 f2fs_warn(sbi, "%s: corrupted inode i_ino=%lx, xnid=%x, run fsck to fix.", 293 292 __func__, inode->i_ino, fi->i_xattr_nid); 294 293 return false; ··· 355 354 } 356 355 } 357 356 358 - if (f2fs_sanity_check_inline_data(inode, node_page)) { 357 + if (f2fs_sanity_check_inline_data(inode, node_folio)) { 359 358 f2fs_warn(sbi, "%s: inode (ino=%lx, mode=%u) should not have inline_data, run fsck to fix", 360 359 __func__, inode->i_ino, inode->i_mode); 361 360 return false; ··· 420 419 if (IS_ERR(node_folio)) 421 420 return PTR_ERR(node_folio); 422 421 423 - ri = F2FS_INODE(&node_folio->page); 422 + ri = F2FS_INODE(node_folio); 424 423 425 424 inode->i_mode = le16_to_cpu(ri->i_mode); 426 425 i_uid_write(inode, le32_to_cpu(ri->i_uid)); ··· 470 469 fi->i_inline_xattr_size = 0; 471 470 } 472 471 473 - if (!sanity_check_inode(inode, &node_folio->page)) { 472 + if (!sanity_check_inode(inode, node_folio)) { 474 473 f2fs_folio_put(node_folio, true); 475 474 set_sbi_flag(sbi, SBI_NEED_FSCK); 476 475 f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE); ··· 482 481 __recover_inline_status(inode, node_folio); 483 482 484 483 /* try to recover cold bit for non-dir inode */ 485 - if (!S_ISDIR(inode->i_mode) && !is_cold_node(&node_folio->page)) { 484 + if (!S_ISDIR(inode->i_mode) && !is_cold_node(node_folio)) { 486 485 f2fs_folio_wait_writeback(node_folio, NODE, true, true); 487 - set_cold_node(&node_folio->page, false); 486 + set_cold_node(node_folio, false); 488 487 folio_mark_dirty(node_folio); 489 488 } 490 489 ··· 532 531 533 532 init_idisk_time(inode); 534 533 535 - if (!sanity_check_extent_cache(inode, &node_folio->page)) { 534 + if (!sanity_check_extent_cache(inode, node_folio)) 
{ 536 535 f2fs_folio_put(node_folio, true); 537 536 f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE); 538 537 return -EFSCORRUPTED; ··· 670 669 671 670 f2fs_inode_synced(inode); 672 671 673 - ri = F2FS_INODE(&node_folio->page); 672 + ri = F2FS_INODE(node_folio); 674 673 675 674 ri->i_mode = cpu_to_le16(inode->i_mode); 676 675 ri->i_advise = fi->i_advise; ··· 749 748 750 749 /* deleted inode */ 751 750 if (inode->i_nlink == 0) 752 - clear_page_private_inline(&node_folio->page); 751 + folio_clear_f2fs_inline(node_folio); 753 752 754 753 init_idisk_time(inode); 755 754 #ifdef CONFIG_F2FS_CHECK_FS 756 - f2fs_inode_chksum_set(F2FS_I_SB(inode), &node_folio->page); 755 + f2fs_inode_chksum_set(F2FS_I_SB(inode), node_folio); 757 756 #endif 758 757 } 759 758 ··· 821 820 return 0; 822 821 } 823 822 824 - static void f2fs_remove_donate_inode(struct inode *inode) 823 + void f2fs_remove_donate_inode(struct inode *inode) 825 824 { 826 825 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 827 826 ··· 934 933 f2fs_update_inode_page(inode); 935 934 if (dquot_initialize_needed(inode)) 936 935 set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR); 936 + 937 + /* 938 + * If both f2fs_truncate() and f2fs_update_inode_page() failed 939 + * due to fuzzed corrupted inode, call f2fs_inode_synced() to 940 + * avoid triggering later f2fs_bug_on(). 
941 + */ 942 + if (is_inode_flag_set(inode, FI_DIRTY_INODE)) { 943 + f2fs_warn(sbi, 944 + "f2fs_evict_inode: inode is dirty, ino:%lu", 945 + inode->i_ino); 946 + f2fs_inode_synced(inode); 947 + set_sbi_flag(sbi, SBI_NEED_FSCK); 948 + } 937 949 } 938 950 if (freeze_protected) 939 951 sb_end_intwrite(inode->i_sb); ··· 963 949 if (likely(!f2fs_cp_error(sbi) && 964 950 !is_sbi_flag_set(sbi, SBI_CP_DISABLED))) 965 951 f2fs_bug_on(sbi, is_inode_flag_set(inode, FI_DIRTY_INODE)); 966 - else 967 - f2fs_inode_synced(inode); 952 + 953 + /* 954 + * anyway, it needs to remove the inode from sbi->inode_list[DIRTY_META] 955 + * list to avoid UAF in f2fs_sync_inode_meta() during checkpoint. 956 + */ 957 + f2fs_inode_synced(inode); 968 958 969 959 /* for the case f2fs_new_inode() was failed, .i_ino is zero, skip it */ 970 960 if (inode->i_ino)
+6 -6
fs/f2fs/namei.c
··· 1298 1298 struct inode *inode, 1299 1299 struct delayed_call *done) 1300 1300 { 1301 - struct page *page; 1301 + struct folio *folio; 1302 1302 const char *target; 1303 1303 1304 1304 if (!dentry) 1305 1305 return ERR_PTR(-ECHILD); 1306 1306 1307 - page = read_mapping_page(inode->i_mapping, 0, NULL); 1308 - if (IS_ERR(page)) 1309 - return ERR_CAST(page); 1307 + folio = read_mapping_folio(inode->i_mapping, 0, NULL); 1308 + if (IS_ERR(folio)) 1309 + return ERR_CAST(folio); 1310 1310 1311 - target = fscrypt_get_symlink(inode, page_address(page), 1311 + target = fscrypt_get_symlink(inode, folio_address(folio), 1312 1312 inode->i_sb->s_blocksize, done); 1313 - put_page(page); 1313 + folio_put(folio); 1314 1314 return target; 1315 1315 } 1316 1316
+148 -113
fs/f2fs/node.c
··· 135 135 return f2fs_get_meta_folio_retry(sbi, current_nat_addr(sbi, nid)); 136 136 } 137 137 138 - static struct page *get_next_nat_page(struct f2fs_sb_info *sbi, nid_t nid) 138 + static struct folio *get_next_nat_folio(struct f2fs_sb_info *sbi, nid_t nid) 139 139 { 140 140 struct folio *src_folio; 141 141 struct folio *dst_folio; ··· 149 149 /* get current nat block page with lock */ 150 150 src_folio = get_current_nat_folio(sbi, nid); 151 151 if (IS_ERR(src_folio)) 152 - return &src_folio->page; 152 + return src_folio; 153 153 dst_folio = f2fs_grab_meta_folio(sbi, dst_off); 154 154 f2fs_bug_on(sbi, folio_test_dirty(src_folio)); 155 155 ··· 161 161 162 162 set_to_next_nat(nm_i, nid); 163 163 164 - return &dst_folio->page; 164 + return dst_folio; 165 165 } 166 166 167 167 static struct nat_entry *__alloc_nat_entry(struct f2fs_sb_info *sbi, ··· 185 185 186 186 /* must be locked by nat_tree_lock */ 187 187 static struct nat_entry *__init_nat_entry(struct f2fs_nm_info *nm_i, 188 - struct nat_entry *ne, struct f2fs_nat_entry *raw_ne, bool no_fail) 188 + struct nat_entry *ne, struct f2fs_nat_entry *raw_ne, bool no_fail, bool init_dirty) 189 189 { 190 190 if (no_fail) 191 191 f2fs_radix_tree_insert(&nm_i->nat_root, nat_get_nid(ne), ne); ··· 194 194 195 195 if (raw_ne) 196 196 node_info_from_raw_nat(&ne->ni, raw_ne); 197 + 198 + if (init_dirty) { 199 + INIT_LIST_HEAD(&ne->list); 200 + nm_i->nat_cnt[TOTAL_NAT]++; 201 + return ne; 202 + } 197 203 198 204 spin_lock(&nm_i->nat_list_lock); 199 205 list_add_tail(&ne->list, &nm_i->nat_entries); ··· 210 204 return ne; 211 205 } 212 206 213 - static struct nat_entry *__lookup_nat_cache(struct f2fs_nm_info *nm_i, nid_t n) 207 + static struct nat_entry *__lookup_nat_cache(struct f2fs_nm_info *nm_i, nid_t n, bool for_dirty) 214 208 { 215 209 struct nat_entry *ne; 216 210 217 211 ne = radix_tree_lookup(&nm_i->nat_root, n); 218 212 219 - /* for recent accessed nat entry, move it to tail of lru list */ 220 - if (ne && 
!get_nat_flag(ne, IS_DIRTY)) { 213 + /* 214 + * for recent accessed nat entry which will not be dirtied soon 215 + * later, move it to tail of lru list. 216 + */ 217 + if (ne && !get_nat_flag(ne, IS_DIRTY) && !for_dirty) { 221 218 spin_lock(&nm_i->nat_list_lock); 222 219 if (!list_empty(&ne->list)) 223 220 list_move_tail(&ne->list, &nm_i->nat_entries); ··· 265 256 } 266 257 267 258 static void __set_nat_cache_dirty(struct f2fs_nm_info *nm_i, 268 - struct nat_entry *ne) 259 + struct nat_entry *ne, bool init_dirty) 269 260 { 270 261 struct nat_entry_set *head; 271 262 bool new_ne = nat_get_blkaddr(ne) == NEW_ADDR; ··· 288 279 goto refresh_list; 289 280 290 281 nm_i->nat_cnt[DIRTY_NAT]++; 291 - nm_i->nat_cnt[RECLAIMABLE_NAT]--; 282 + if (!init_dirty) 283 + nm_i->nat_cnt[RECLAIMABLE_NAT]--; 292 284 set_nat_flag(ne, IS_DIRTY, true); 293 285 refresh_list: 294 286 spin_lock(&nm_i->nat_list_lock); ··· 322 312 323 313 bool f2fs_in_warm_node_list(struct f2fs_sb_info *sbi, struct folio *folio) 324 314 { 325 - return is_node_folio(folio) && IS_DNODE(&folio->page) && 326 - is_cold_node(&folio->page); 315 + return is_node_folio(folio) && IS_DNODE(folio) && is_cold_node(folio); 327 316 } 328 317 329 318 void f2fs_init_fsync_node_info(struct f2fs_sb_info *sbi) ··· 393 384 bool need = false; 394 385 395 386 f2fs_down_read(&nm_i->nat_tree_lock); 396 - e = __lookup_nat_cache(nm_i, nid); 387 + e = __lookup_nat_cache(nm_i, nid, false); 397 388 if (e) { 398 389 if (!get_nat_flag(e, IS_CHECKPOINTED) && 399 390 !get_nat_flag(e, HAS_FSYNCED_INODE)) ··· 410 401 bool is_cp = true; 411 402 412 403 f2fs_down_read(&nm_i->nat_tree_lock); 413 - e = __lookup_nat_cache(nm_i, nid); 404 + e = __lookup_nat_cache(nm_i, nid, false); 414 405 if (e && !get_nat_flag(e, IS_CHECKPOINTED)) 415 406 is_cp = false; 416 407 f2fs_up_read(&nm_i->nat_tree_lock); ··· 424 415 bool need_update = true; 425 416 426 417 f2fs_down_read(&nm_i->nat_tree_lock); 427 - e = __lookup_nat_cache(nm_i, ino); 418 + e = 
__lookup_nat_cache(nm_i, ino, false); 428 419 if (e && get_nat_flag(e, HAS_LAST_FSYNC) && 429 420 (get_nat_flag(e, IS_CHECKPOINTED) || 430 421 get_nat_flag(e, HAS_FSYNCED_INODE))) ··· 449 440 return; 450 441 451 442 f2fs_down_write(&nm_i->nat_tree_lock); 452 - e = __lookup_nat_cache(nm_i, nid); 443 + e = __lookup_nat_cache(nm_i, nid, false); 453 444 if (!e) 454 - e = __init_nat_entry(nm_i, new, ne, false); 445 + e = __init_nat_entry(nm_i, new, ne, false, false); 455 446 else 456 447 f2fs_bug_on(sbi, nat_get_ino(e) != le32_to_cpu(ne->ino) || 457 448 nat_get_blkaddr(e) != ··· 468 459 struct f2fs_nm_info *nm_i = NM_I(sbi); 469 460 struct nat_entry *e; 470 461 struct nat_entry *new = __alloc_nat_entry(sbi, ni->nid, true); 462 + bool init_dirty = false; 471 463 472 464 f2fs_down_write(&nm_i->nat_tree_lock); 473 - e = __lookup_nat_cache(nm_i, ni->nid); 465 + e = __lookup_nat_cache(nm_i, ni->nid, true); 474 466 if (!e) { 475 - e = __init_nat_entry(nm_i, new, NULL, true); 467 + init_dirty = true; 468 + e = __init_nat_entry(nm_i, new, NULL, true, true); 476 469 copy_node_info(&e->ni, ni); 477 470 f2fs_bug_on(sbi, ni->blk_addr == NEW_ADDR); 478 471 } else if (new_blkaddr == NEW_ADDR) { ··· 510 499 nat_set_blkaddr(e, new_blkaddr); 511 500 if (!__is_valid_data_blkaddr(new_blkaddr)) 512 501 set_nat_flag(e, IS_CHECKPOINTED, false); 513 - __set_nat_cache_dirty(nm_i, e); 502 + __set_nat_cache_dirty(nm_i, e, init_dirty); 514 503 515 504 /* update fsync_mark if its inode nat entry is still alive */ 516 505 if (ni->nid != ni->ino) 517 - e = __lookup_nat_cache(nm_i, ni->ino); 506 + e = __lookup_nat_cache(nm_i, ni->ino, false); 518 507 if (e) { 519 508 if (fsync_done && ni->nid == ni->ino) 520 509 set_nat_flag(e, HAS_FSYNCED_INODE, true); ··· 566 555 struct f2fs_nat_entry ne; 567 556 struct nat_entry *e; 568 557 pgoff_t index; 569 - block_t blkaddr; 570 558 int i; 559 + bool need_cache = true; 571 560 572 561 ni->flag = 0; 573 562 ni->nid = nid; 574 563 retry: 575 564 /* Check nat 
 cache */
 	f2fs_down_read(&nm_i->nat_tree_lock);
-	e = __lookup_nat_cache(nm_i, nid);
+	e = __lookup_nat_cache(nm_i, nid, false);
 	if (e) {
 		ni->ino = nat_get_ino(e);
 		ni->blk_addr = nat_get_blkaddr(e);
 		ni->version = nat_get_version(e);
 		f2fs_up_read(&nm_i->nat_tree_lock);
+		if (IS_ENABLED(CONFIG_F2FS_CHECK_FS)) {
+			need_cache = false;
+			goto sanity_check;
+		}
 		return 0;
 	}
···
 	up_read(&curseg->journal_rwsem);
 	if (i >= 0) {
 		f2fs_up_read(&nm_i->nat_tree_lock);
-		goto cache;
+		goto sanity_check;
 	}

 	/* Fill node_info from nat page */
···
 	ne = nat_blk->entries[nid - start_nid];
 	node_info_from_raw_nat(ni, &ne);
 	f2fs_folio_put(folio, true);
-cache:
-	blkaddr = le32_to_cpu(ne.block_addr);
-	if (__is_valid_data_blkaddr(blkaddr) &&
-		!f2fs_is_valid_blkaddr(sbi, blkaddr, DATA_GENERIC_ENHANCE))
-		return -EFAULT;
+sanity_check:
+	if (__is_valid_data_blkaddr(ni->blk_addr) &&
+		!f2fs_is_valid_blkaddr(sbi, ni->blk_addr,
+					DATA_GENERIC_ENHANCE)) {
+		set_sbi_flag(sbi, SBI_NEED_FSCK);
+		f2fs_err_ratelimited(sbi,
+			"f2fs_get_node_info of %pS: inconsistent nat entry, "
+			"ino:%u, nid:%u, blkaddr:%u, ver:%u, flag:%u",
+			__builtin_return_address(0),
+			ni->ino, ni->nid, ni->blk_addr, ni->version, ni->flag);
+		f2fs_handle_error(sbi, ERROR_INCONSISTENT_NAT);
+		return -EFSCORRUPTED;
+	}

 	/* cache nat entry */
-	cache_nat_entry(sbi, nid, &ne);
+	if (need_cache)
+		cache_nat_entry(sbi, nid, &ne);
 	return 0;
 }
···
 	end = start + n;
 	end = min(end, (int)NIDS_PER_BLOCK);
 	for (i = start; i < end; i++) {
-		nid = get_nid(&parent->page, i, false);
+		nid = get_nid(parent, i, false);
 		f2fs_ra_node_page(sbi, nid);
 	}
···

 	parent = nfolio[0];
 	if (level != 0)
-		nids[1] = get_nid(&parent->page, offset[0], true);
+		nids[1] = get_nid(parent, offset[0], true);
 	dn->inode_folio = nfolio[0];
 	dn->inode_folio_locked = true;

 	/* get indirect or direct nodes */
 	for (i = 1; i <= level; i++) {
 		bool done = false;
+
+		if (nids[i] && nids[i] == dn->inode->i_ino) {
+			err = -EFSCORRUPTED;
+			f2fs_err_ratelimited(sbi,
+				"inode mapping table is corrupted, run fsck to fix it, "
+				"ino:%lu, nid:%u, level:%d, offset:%d",
+				dn->inode->i_ino, nids[i], level, offset[level]);
+			set_sbi_flag(sbi, SBI_NEED_FSCK);
+			goto release_pages;
+		}

 		if (!nids[i] && mode == ALLOC_NODE) {
 			/* alloc new node */
···
 		}
 		if (i < level) {
 			parent = nfolio[i];
-			nids[i + 1] = get_nid(&parent->page, offset[i], false);
+			nids[i + 1] = get_nid(parent, offset[i], false);
 		}
 	}
 	dn->nid = nids[level];
···
 	else if (IS_ERR(folio))
 		return PTR_ERR(folio);

-	if (IS_INODE(&folio->page) || ino_of_node(&folio->page) != dn->inode->i_ino) {
+	if (IS_INODE(folio) || ino_of_node(folio) != dn->inode->i_ino) {
 		f2fs_err(sbi, "incorrect node reference, ino: %lu, nid: %u, ino_of_node: %u",
-				dn->inode->i_ino, dn->nid, ino_of_node(&folio->page));
+				dn->inode->i_ino, dn->nid, ino_of_node(folio));
 		set_sbi_flag(sbi, SBI_NEED_FSCK);
 		f2fs_handle_error(sbi, ERROR_INVALID_NODE_REFERENCE);
 		f2fs_folio_put(folio, true);
···

 	f2fs_ra_node_pages(folio, ofs, NIDS_PER_BLOCK);

-	rn = F2FS_NODE(&folio->page);
+	rn = F2FS_NODE(folio);
 	if (depth < 3) {
 		for (i = ofs; i < NIDS_PER_BLOCK; i++, freed++) {
 			child_nid = le32_to_cpu(rn->in.nid[i]);
···
 	int i;
 	int idx = depth - 2;

-	nid[0] = get_nid(&dn->inode_folio->page, offset[0], true);
+	nid[0] = get_nid(dn->inode_folio, offset[0], true);
 	if (!nid[0])
 		return 0;
···
 			idx = i - 1;
 			goto fail;
 		}
-		nid[i + 1] = get_nid(&folios[i]->page, offset[i + 1], false);
+		nid[i + 1] = get_nid(folios[i], offset[i + 1], false);
 	}

 	f2fs_ra_node_pages(folios[idx], offset[idx + 1], NIDS_PER_BLOCK);

 	/* free direct nodes linked to a partial indirect node */
 	for (i = offset[idx + 1]; i < NIDS_PER_BLOCK; i++) {
-		child_nid = get_nid(&folios[idx]->page, i, false);
+		child_nid = get_nid(folios[idx], i, false);
 		if (!child_nid)
 			continue;
 		dn->nid = child_nid;
···
 	set_new_dnode(&dn, inode, folio, NULL, 0);
 	folio_unlock(folio);

-	ri = F2FS_INODE(&folio->page);
+	ri = F2FS_INODE(folio);
 	switch (level) {
 	case 0:
 	case 1:
···

 skip_partial:
 	while (cont) {
-		dn.nid = get_nid(&folio->page, offset[0], true);
+		dn.nid = get_nid(folio, offset[0], true);
 		switch (offset[0]) {
 		case NODE_DIR1_BLOCK:
 		case NODE_DIR2_BLOCK:
···
 		}
 		if (err < 0)
 			goto fail;
-		if (offset[1] == 0 && get_nid(&folio->page, offset[0], true)) {
+		if (offset[1] == 0 && get_nid(folio, offset[0], true)) {
 			folio_lock(folio);
 			BUG_ON(!is_node_folio(folio));
 			set_nid(folio, offset[0], 0, true);
···
 	set_node_addr(sbi, &new_ni, NEW_ADDR, false);

 	f2fs_folio_wait_writeback(folio, NODE, true, true);
-	fill_node_footer(&folio->page, dn->nid, dn->inode->i_ino, ofs, true);
-	set_cold_node(&folio->page, S_ISDIR(dn->inode->i_mode));
+	fill_node_footer(folio, dn->nid, dn->inode->i_ino, ofs, true);
+	set_cold_node(folio, S_ISDIR(dn->inode->i_mode));
 	if (!folio_test_uptodate(folio))
 		folio_mark_uptodate(folio);
 	if (folio_mark_dirty(folio))
···
 		.type = NODE,
 		.op = REQ_OP_READ,
 		.op_flags = op_flags,
-		.page = &folio->page,
+		.folio = folio,
 		.encrypted_page = NULL,
 	};
 	int err;
···
 		struct folio *folio, pgoff_t nid,
 		enum node_type ntype)
 {
-	struct page *page = &folio->page;
-
-	if (unlikely(nid != nid_of_node(page) ||
-		(ntype == NODE_TYPE_INODE && !IS_INODE(page)) ||
+	if (unlikely(nid != nid_of_node(folio) ||
+		(ntype == NODE_TYPE_INODE && !IS_INODE(folio)) ||
 		(ntype == NODE_TYPE_XATTR &&
-		!f2fs_has_xattr_block(ofs_of_node(page))) ||
+		!f2fs_has_xattr_block(ofs_of_node(folio))) ||
 		time_to_inject(sbi, FAULT_INCONSISTENT_FOOTER))) {
 		f2fs_warn(sbi, "inconsistent node block, node_type:%d, nid:%lu, "
 			"node_footer[nid:%u,ino:%u,ofs:%u,cpver:%llu,blkaddr:%u]",
-			ntype, nid, nid_of_node(page), ino_of_node(page),
-			ofs_of_node(page), cpver_of_node(page),
+			ntype, nid, nid_of_node(folio), ino_of_node(folio),
+			ofs_of_node(folio), cpver_of_node(folio),
 			next_blkaddr_of_node(folio));
 		set_sbi_flag(sbi, SBI_NEED_FSCK);
 		f2fs_handle_error(sbi, ERROR_INCONSISTENT_FOOTER);
···
 static struct folio *f2fs_get_node_folio_ra(struct folio *parent, int start)
 {
 	struct f2fs_sb_info *sbi = F2FS_F_SB(parent);
-	nid_t nid = get_nid(&parent->page, start, false);
+	nid_t nid = get_nid(parent, start, false);

 	return __get_node_folio(sbi, nid, parent, start, NODE_TYPE_REGULAR);
 }
···
 			return ERR_PTR(-EIO);
 		}

-		if (!IS_DNODE(&folio->page) || !is_cold_node(&folio->page))
+		if (!IS_DNODE(folio) || !is_cold_node(folio))
 			continue;
-		if (ino_of_node(&folio->page) != ino)
+		if (ino_of_node(folio) != ino)
 			continue;

 		folio_lock(folio);
···
 			folio_unlock(folio);
 			continue;
 		}
-		if (ino_of_node(&folio->page) != ino)
+		if (ino_of_node(folio) != ino)
 			goto continue_unlock;

 		if (!folio_test_dirty(folio)) {
···
 	struct node_info ni;
 	struct f2fs_io_info fio = {
 		.sbi = sbi,
-		.ino = ino_of_node(&folio->page),
+		.ino = ino_of_node(folio),
 		.type = NODE,
 		.op = REQ_OP_WRITE,
 		.op_flags = wbc_to_write_flags(wbc),
-		.page = &folio->page,
+		.folio = folio,
 		.encrypted_page = NULL,
 		.submitted = 0,
 		.io_type = io_type,
···

 	if (!is_sbi_flag_set(sbi, SBI_CP_DISABLED) &&
 		wbc->sync_mode == WB_SYNC_NONE &&
-		IS_DNODE(&folio->page) && is_cold_node(&folio->page))
+		IS_DNODE(folio) && is_cold_node(folio))
 		goto redirty_out;

 	/* get old block addr of this node page */
-	nid = nid_of_node(&folio->page);
+	nid = nid_of_node(folio);
 	f2fs_bug_on(sbi, folio->index != nid);

 	if (f2fs_get_node_info(sbi, nid, &ni, !do_balance))
···

 	fio.old_blkaddr = ni.blk_addr;
 	f2fs_do_write_node_page(nid, &fio);
-	set_node_addr(sbi, &ni, fio.new_blkaddr, is_fsync_dnode(&folio->page));
+	set_node_addr(sbi, &ni, fio.new_blkaddr, is_fsync_dnode(folio));
 	dec_page_count(sbi, F2FS_DIRTY_NODES);
 	f2fs_up_read(&sbi->node_write);
···
 			goto out;
 		}

-		if (!IS_DNODE(&folio->page) || !is_cold_node(&folio->page))
+		if (!IS_DNODE(folio) || !is_cold_node(folio))
 			continue;
-		if (ino_of_node(&folio->page) != ino)
+		if (ino_of_node(folio) != ino)
 			continue;

 		folio_lock(folio);
···
 			folio_unlock(folio);
 			continue;
 		}
-		if (ino_of_node(&folio->page) != ino)
+		if (ino_of_node(folio) != ino)
 			goto continue_unlock;

 		if (!folio_test_dirty(folio) && folio != last_folio) {
···

 		f2fs_folio_wait_writeback(folio, NODE, true, true);

-		set_fsync_mark(&folio->page, 0);
-		set_dentry_mark(&folio->page, 0);
+		set_fsync_mark(folio, 0);
+		set_dentry_mark(folio, 0);

 		if (!atomic || folio == last_folio) {
-			set_fsync_mark(&folio->page, 1);
+			set_fsync_mark(folio, 1);
 			percpu_counter_inc(&sbi->rf_node_block_count);
-			if (IS_INODE(&folio->page)) {
+			if (IS_INODE(folio)) {
 				if (is_inode_flag_set(inode,
 							FI_DIRTY_INODE))
 					f2fs_update_inode(inode, folio);
-				set_dentry_mark(&folio->page,
+				set_dentry_mark(folio,
 					f2fs_need_dentry_mark(sbi, ino));
 			}
 			/* may be written by other thread */
···
 {
 	struct f2fs_sb_info *sbi = F2FS_F_SB(folio);
 	struct inode *inode;
-	nid_t ino = ino_of_node(&folio->page);
+	nid_t ino = ino_of_node(folio);

 	inode = find_inode_nowait(sbi->sb, ino, f2fs_match_ino, NULL);
 	if (!inode)
···
 	for (i = 0; i < nr_folios; i++) {
 		struct folio *folio = fbatch.folios[i];

-		if (!IS_INODE(&folio->page))
+		if (!IS_INODE(folio))
 			continue;

 		folio_lock(folio);
···
 			goto unlock;

 		/* flush inline_data, if it's async context. */
-		if (page_private_inline(&folio->page)) {
-			clear_page_private_inline(&folio->page);
+		if (folio_test_f2fs_inline(folio)) {
+			folio_clear_f2fs_inline(folio);
 			folio_unlock(folio);
-			flush_inline_data(sbi, ino_of_node(&folio->page));
+			flush_inline_data(sbi, ino_of_node(folio));
 			continue;
 		}
 unlock:
···
 		 * 1. dentry dnodes
 		 * 2. file dnodes
 		 */
-		if (step == 0 && IS_DNODE(&folio->page))
+		if (step == 0 && IS_DNODE(folio))
 			continue;
-		if (step == 1 && (!IS_DNODE(&folio->page) ||
-					is_cold_node(&folio->page)))
+		if (step == 1 && (!IS_DNODE(folio) ||
+					is_cold_node(folio)))
 			continue;
-		if (step == 2 && (!IS_DNODE(&folio->page) ||
-					!is_cold_node(&folio->page)))
+		if (step == 2 && (!IS_DNODE(folio) ||
+					!is_cold_node(folio)))
 			continue;
 lock_node:
 		if (wbc->sync_mode == WB_SYNC_ALL)
···
 			goto write_node;

 		/* flush inline_data */
-		if (page_private_inline(&folio->page)) {
-			clear_page_private_inline(&folio->page);
+		if (folio_test_f2fs_inline(folio)) {
+			folio_clear_f2fs_inline(folio);
 			folio_unlock(folio);
-			flush_inline_data(sbi, ino_of_node(&folio->page));
+			flush_inline_data(sbi, ino_of_node(folio));
 			goto lock_node;
 		}

 		/* flush dirty inode */
-		if (IS_INODE(&folio->page) && flush_dirty_inode(folio))
+		if (IS_INODE(folio) && flush_dirty_inode(folio))
 			goto lock_node;
 write_node:
 		f2fs_folio_wait_writeback(folio, NODE, true, true);
···
 		if (!folio_clear_dirty_for_io(folio))
 			goto continue_unlock;

-		set_fsync_mark(&folio->page, 0);
-		set_dentry_mark(&folio->page, 0);
+		set_fsync_mark(folio, 0);
+		set_dentry_mark(folio, 0);

 		if (!__write_node_folio(folio, false, &submitted,
 					wbc, do_balance, io_type, NULL)) {
···
 	if (!folio_test_uptodate(folio))
 		folio_mark_uptodate(folio);
 #ifdef CONFIG_F2FS_CHECK_FS
-	if (IS_INODE(&folio->page))
-		f2fs_inode_chksum_set(F2FS_M_SB(mapping), &folio->page);
+	if (IS_INODE(folio))
+		f2fs_inode_chksum_set(F2FS_M_SB(mapping), folio);
 #endif
 	if (filemap_dirty_folio(mapping, folio)) {
 		inc_page_count(F2FS_M_SB(mapping), F2FS_DIRTY_NODES);
-		set_page_private_reference(&folio->page);
+		folio_set_f2fs_reference(folio);
 		return true;
 	}
 	return false;
···
 	 * - __remove_nid_from_list(PREALLOC_NID)
 	 * - __insert_nid_to_list(FREE_NID)
 	 */
-	ne = __lookup_nat_cache(nm_i, nid);
+	ne = __lookup_nat_cache(nm_i, nid, false);
 	if (ne && (!get_nat_flag(ne, IS_CHECKPOINTED) ||
 			nat_get_blkaddr(ne) != NULL_ADDR))
 		goto err_out;
···
 	if (IS_ERR(ifolio))
 		return PTR_ERR(ifolio);

-	ri = F2FS_INODE(&folio->page);
+	ri = F2FS_INODE(folio);
 	if (ri->i_inline & F2FS_INLINE_XATTR) {
 		if (!f2fs_has_inline_xattr(inode)) {
 			set_inode_flag(inode, FI_INLINE_XATTR);
···
 	return 0;
 }

-int f2fs_recover_xattr_data(struct inode *inode, struct page *page)
+int f2fs_recover_xattr_data(struct inode *inode, struct folio *folio)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	nid_t prev_xnid = F2FS_I(inode)->i_xattr_nid;
···
 	f2fs_update_inode_page(inode);

 	/* 3: update and set xattr node page dirty */
-	if (page) {
-		memcpy(F2FS_NODE(&xfolio->page), F2FS_NODE(page),
+	if (folio) {
+		memcpy(F2FS_NODE(xfolio), F2FS_NODE(folio),
 				VALID_XATTR_BLOCK_SIZE);
 		folio_mark_dirty(xfolio);
 	}
···
 	return 0;
 }

-int f2fs_recover_inode_page(struct f2fs_sb_info *sbi, struct page *page)
+int f2fs_recover_inode_page(struct f2fs_sb_info *sbi, struct folio *folio)
 {
 	struct f2fs_inode *src, *dst;
-	nid_t ino = ino_of_node(page);
+	nid_t ino = ino_of_node(folio);
 	struct node_info old_ni, new_ni;
 	struct folio *ifolio;
 	int err;
···

 	if (!folio_test_uptodate(ifolio))
 		folio_mark_uptodate(ifolio);
-	fill_node_footer(&ifolio->page, ino, ino, 0, true);
-	set_cold_node(&ifolio->page, false);
+	fill_node_footer(ifolio, ino, ino, 0, true);
+	set_cold_node(ifolio, false);

-	src = F2FS_INODE(page);
-	dst = F2FS_INODE(&ifolio->page);
+	src = F2FS_INODE(folio);
+	dst = F2FS_INODE(ifolio);

 	memcpy(dst, src, offsetof(struct f2fs_inode, i_ext));
 	dst->i_size = 0;
···
 	if (IS_ERR(folio))
 		return PTR_ERR(folio);

-	rn = F2FS_NODE(&folio->page);
+	rn = F2FS_NODE(folio);
 	sum_entry->nid = rn->footer.nid;
 	sum_entry->version = 0;
 	sum_entry->ofs_in_node = 0;
···
 	struct curseg_info *curseg = CURSEG_I(sbi, CURSEG_HOT_DATA);
 	struct f2fs_journal *journal = curseg->journal;
 	int i;
+	bool init_dirty;

 	down_write(&curseg->journal_rwsem);
 	for (i = 0; i < nats_in_cursum(journal); i++) {
···
 		if (f2fs_check_nid_range(sbi, nid))
 			continue;

+		init_dirty = false;
+
 		raw_ne = nat_in_journal(journal, i);

-		ne = __lookup_nat_cache(nm_i, nid);
+		ne = __lookup_nat_cache(nm_i, nid, true);
 		if (!ne) {
+			init_dirty = true;
 			ne = __alloc_nat_entry(sbi, nid, true);
-			__init_nat_entry(nm_i, ne, &raw_ne, true);
+			__init_nat_entry(nm_i, ne, &raw_ne, true, true);
 		}

 		/*
···
 			spin_unlock(&nm_i->nid_list_lock);
 		}

-		__set_nat_cache_dirty(nm_i, ne);
+		__set_nat_cache_dirty(nm_i, ne, init_dirty);
 	}
 	update_nats_in_cursum(journal, -i);
 	up_write(&curseg->journal_rwsem);
···
 }

 static void __update_nat_bits(struct f2fs_sb_info *sbi, nid_t start_nid,
-		struct page *page)
+		const struct f2fs_nat_block *nat_blk)
 {
 	struct f2fs_nm_info *nm_i = NM_I(sbi);
 	unsigned int nat_index = start_nid / NAT_ENTRY_PER_BLOCK;
-	struct f2fs_nat_block *nat_blk = page_address(page);
 	int valid = 0;
 	int i = 0;
···
 	bool to_journal = true;
 	struct f2fs_nat_block *nat_blk;
 	struct nat_entry *ne, *cur;
-	struct page *page = NULL;
+	struct folio *folio = NULL;

 	/*
 	 * there are two steps to flush nat entries:
···
 	if (to_journal) {
 		down_write(&curseg->journal_rwsem);
 	} else {
-		page = get_next_nat_page(sbi, start_nid);
-		if (IS_ERR(page))
-			return PTR_ERR(page);
+		folio = get_next_nat_folio(sbi, start_nid);
+		if (IS_ERR(folio))
+			return PTR_ERR(folio);

-		nat_blk = page_address(page);
+		nat_blk = folio_address(folio);
 		f2fs_bug_on(sbi, !nat_blk);
 	}
···
 	if (to_journal) {
 		up_write(&curseg->journal_rwsem);
 	} else {
-		__update_nat_bits(sbi, start_nid, page);
-		f2fs_put_page(page, 1);
+		__update_nat_bits(sbi, start_nid, nat_blk);
+		f2fs_folio_put(folio, true);
 	}

 	/* Allow dirty nats by node block allocation in write_begin */
···
 	}
 	kvfree(nm_i->free_nid_count);

-	kvfree(nm_i->nat_bitmap);
+	kfree(nm_i->nat_bitmap);
 	kvfree(nm_i->nat_bits);
 #ifdef CONFIG_F2FS_CHECK_FS
-	kvfree(nm_i->nat_bitmap_mir);
+	kfree(nm_i->nat_bitmap_mir);
 #endif
 	sbi->nm_info = NULL;
 	kfree(nm_i);
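A note on the first node.c hunk above: f2fs_get_node_info() now routes even NAT-cache hits through the block-address sanity check when CONFIG_F2FS_CHECK_FS is enabled, and only inserts an entry back into the cache when it was freshly read (the new need_cache flag). A minimal user-space sketch of that control flow; blkaddr_valid(), the bool parameters, and the -1 return value (standing in for -EFSCORRUPTED) are illustrative stand-ins, not kernel APIs:

```c
#include <stdbool.h>

#define NULL_ADDR 0U

/* stand-in validity predicate (assumption for the demo) */
static bool blkaddr_valid(unsigned int blkaddr)
{
	return blkaddr < 1000;
}

/* cache_hit models __lookup_nat_cache() succeeding; check_fs models
 * CONFIG_F2FS_CHECK_FS; *did_cache models cache_nat_entry() running. */
static int get_node_info(unsigned int blk_addr, bool cache_hit,
			 bool check_fs, bool *did_cache)
{
	bool need_cache = true;

	*did_cache = false;
	if (cache_hit) {
		if (!check_fs)
			return 0;	/* old behaviour: trust the cache */
		need_cache = false;	/* entry is already cached */
	}
	/* sanity_check: now shared by the cache-hit and on-disk paths */
	if (blk_addr != NULL_ADDR && !blkaddr_valid(blk_addr))
		return -1;		/* -EFSCORRUPTED in the kernel */
	if (need_cache)
		*did_cache = true;
	return 0;
}
```

The point of the restructuring is that a corrupted NAT entry is now reported and flagged for fsck even when it is served from the in-memory cache, instead of being trusted unconditionally.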
+39 -38
fs/f2fs/node.h
···
 /* control total # of nats */
 #define DEF_NAT_CACHE_THRESHOLD	100000

-/* control total # of node writes used for roll-fowrad recovery */
+/* control total # of node writes used for roll-forward recovery */
 #define DEF_RF_NODE_BLOCKS	0

 /* vector size for gang look-up from nat cache that consists of radix tree */
···
 #endif
 }

-static inline nid_t ino_of_node(struct page *node_page)
+static inline nid_t ino_of_node(const struct folio *node_folio)
 {
-	struct f2fs_node *rn = F2FS_NODE(node_page);
+	struct f2fs_node *rn = F2FS_NODE(node_folio);
 	return le32_to_cpu(rn->footer.ino);
 }

-static inline nid_t nid_of_node(struct page *node_page)
+static inline nid_t nid_of_node(const struct folio *node_folio)
 {
-	struct f2fs_node *rn = F2FS_NODE(node_page);
+	struct f2fs_node *rn = F2FS_NODE(node_folio);
 	return le32_to_cpu(rn->footer.nid);
 }

-static inline unsigned int ofs_of_node(const struct page *node_page)
+static inline unsigned int ofs_of_node(const struct folio *node_folio)
 {
-	struct f2fs_node *rn = F2FS_NODE(node_page);
+	struct f2fs_node *rn = F2FS_NODE(node_folio);
 	unsigned flag = le32_to_cpu(rn->footer.flag);
 	return flag >> OFFSET_BIT_SHIFT;
 }

-static inline __u64 cpver_of_node(struct page *node_page)
+static inline __u64 cpver_of_node(const struct folio *node_folio)
 {
-	struct f2fs_node *rn = F2FS_NODE(node_page);
+	struct f2fs_node *rn = F2FS_NODE(node_folio);
 	return le64_to_cpu(rn->footer.cp_ver);
 }

-static inline block_t next_blkaddr_of_node(struct folio *node_folio)
+static inline block_t next_blkaddr_of_node(const struct folio *node_folio)
 {
-	struct f2fs_node *rn = F2FS_NODE(&node_folio->page);
+	struct f2fs_node *rn = F2FS_NODE(node_folio);
 	return le32_to_cpu(rn->footer.next_blkaddr);
 }

-static inline void fill_node_footer(struct page *page, nid_t nid,
+static inline void fill_node_footer(const struct folio *folio, nid_t nid,
 				nid_t ino, unsigned int ofs, bool reset)
 {
-	struct f2fs_node *rn = F2FS_NODE(page);
+	struct f2fs_node *rn = F2FS_NODE(folio);
 	unsigned int old_flag = 0;

 	if (reset)
···
 			(old_flag & OFFSET_BIT_MASK));
 }

-static inline void copy_node_footer(struct page *dst, struct page *src)
+static inline void copy_node_footer(const struct folio *dst,
+				const struct folio *src)
 {
 	struct f2fs_node *src_rn = F2FS_NODE(src);
 	struct f2fs_node *dst_rn = F2FS_NODE(dst);
 	memcpy(&dst_rn->footer, &src_rn->footer, sizeof(struct node_footer));
 }

-static inline void fill_node_footer_blkaddr(struct page *page, block_t blkaddr)
+static inline void fill_node_footer_blkaddr(struct folio *folio, block_t blkaddr)
 {
-	struct f2fs_checkpoint *ckpt = F2FS_CKPT(F2FS_P_SB(page));
-	struct f2fs_node *rn = F2FS_NODE(page);
+	struct f2fs_checkpoint *ckpt = F2FS_CKPT(F2FS_F_SB(folio));
+	struct f2fs_node *rn = F2FS_NODE(folio);
 	__u64 cp_ver = cur_cp_version(ckpt);

 	if (__is_set_ckpt_flags(ckpt, CP_CRC_RECOVERY_FLAG))
···
 	rn->footer.next_blkaddr = cpu_to_le32(blkaddr);
 }

-static inline bool is_recoverable_dnode(struct page *page)
+static inline bool is_recoverable_dnode(const struct folio *folio)
 {
-	struct f2fs_checkpoint *ckpt = F2FS_CKPT(F2FS_P_SB(page));
+	struct f2fs_checkpoint *ckpt = F2FS_CKPT(F2FS_F_SB(folio));
 	__u64 cp_ver = cur_cp_version(ckpt);

 	/* Don't care crc part, if fsck.f2fs sets it. */
 	if (__is_set_ckpt_flags(ckpt, CP_NOCRC_RECOVERY_FLAG))
-		return (cp_ver << 32) == (cpver_of_node(page) << 32);
+		return (cp_ver << 32) == (cpver_of_node(folio) << 32);

 	if (__is_set_ckpt_flags(ckpt, CP_CRC_RECOVERY_FLAG))
 		cp_ver |= (cur_cp_crc(ckpt) << 32);

-	return cp_ver == cpver_of_node(page);
+	return cp_ver == cpver_of_node(folio);
 }

 /*
···
  *            `- indirect node ((6 + 2N) + (N - 1)(N + 1))
  *                 `- direct node
  */
-static inline bool IS_DNODE(const struct page *node_page)
+static inline bool IS_DNODE(const struct folio *node_folio)
 {
-	unsigned int ofs = ofs_of_node(node_page);
+	unsigned int ofs = ofs_of_node(node_folio);

 	if (f2fs_has_xattr_block(ofs))
 		return true;
···

 static inline int set_nid(struct folio *folio, int off, nid_t nid, bool i)
 {
-	struct f2fs_node *rn = F2FS_NODE(&folio->page);
+	struct f2fs_node *rn = F2FS_NODE(folio);

 	f2fs_folio_wait_writeback(folio, NODE, true, true);

···
 	return folio_mark_dirty(folio);
 }

-static inline nid_t get_nid(struct page *p, int off, bool i)
+static inline nid_t get_nid(const struct folio *folio, int off, bool i)
 {
-	struct f2fs_node *rn = F2FS_NODE(p);
+	struct f2fs_node *rn = F2FS_NODE(folio);

 	if (i)
 		return le32_to_cpu(rn->i.i_nid[off - NODE_DIR1_BLOCK]);
···
  * - Mark cold data pages in page cache
  */

-static inline int is_node(const struct page *page, int type)
+static inline int is_node(const struct folio *folio, int type)
 {
-	struct f2fs_node *rn = F2FS_NODE(page);
+	struct f2fs_node *rn = F2FS_NODE(folio);
 	return le32_to_cpu(rn->footer.flag) & BIT(type);
 }

-#define is_cold_node(page)	is_node(page, COLD_BIT_SHIFT)
-#define is_fsync_dnode(page)	is_node(page, FSYNC_BIT_SHIFT)
-#define is_dent_dnode(page)	is_node(page, DENT_BIT_SHIFT)
+#define is_cold_node(folio)	is_node(folio, COLD_BIT_SHIFT)
+#define is_fsync_dnode(folio)	is_node(folio, FSYNC_BIT_SHIFT)
+#define is_dent_dnode(folio)	is_node(folio, DENT_BIT_SHIFT)

-static inline void set_cold_node(struct page *page, bool is_dir)
+static inline void set_cold_node(const struct folio *folio, bool is_dir)
 {
-	struct f2fs_node *rn = F2FS_NODE(page);
+	struct f2fs_node *rn = F2FS_NODE(folio);
 	unsigned int flag = le32_to_cpu(rn->footer.flag);

 	if (is_dir)
···
 	rn->footer.flag = cpu_to_le32(flag);
 }

-static inline void set_mark(struct page *page, int mark, int type)
+static inline void set_mark(struct folio *folio, int mark, int type)
 {
-	struct f2fs_node *rn = F2FS_NODE(page);
+	struct f2fs_node *rn = F2FS_NODE(folio);
 	unsigned int flag = le32_to_cpu(rn->footer.flag);
 	if (mark)
 		flag |= BIT(type);
···
 	rn->footer.flag = cpu_to_le32(flag);

 #ifdef CONFIG_F2FS_CHECK_FS
-	f2fs_inode_chksum_set(F2FS_P_SB(page), page);
+	f2fs_inode_chksum_set(F2FS_F_SB(folio), folio);
 #endif
 }
-#define set_dentry_mark(page, mark)	set_mark(page, mark, DENT_BIT_SHIFT)
-#define set_fsync_mark(page, mark)	set_mark(page, mark, FSYNC_BIT_SHIFT)
+#define set_dentry_mark(folio, mark)	set_mark(folio, mark, DENT_BIT_SHIFT)
+#define set_fsync_mark(folio, mark)	set_mark(folio, mark, FSYNC_BIT_SHIFT)
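Several of the node.h helpers in the hunks above (ofs_of_node(), is_node(), set_mark()) operate on one packed flag word in the node footer: the node offset lives in the bits above OFFSET_BIT_SHIFT, and the cold/fsync/dentry marks are single bits below it. A self-contained sketch of that packing; the shift values here are local assumptions for the demo, not the kernel's definitions:

```c
/* Toy model of the node-footer flag word (bit positions are assumed). */
#define BIT(n)			(1u << (n))
#define COLD_BIT_SHIFT		0
#define FSYNC_BIT_SHIFT		1
#define DENT_BIT_SHIFT		2
#define OFFSET_BIT_SHIFT	3	/* node offset stored above the marks */

static unsigned int ofs_of_flag(unsigned int flag)
{
	return flag >> OFFSET_BIT_SHIFT;	/* as in ofs_of_node() */
}

static int is_node_flag(unsigned int flag, int type)
{
	return flag & BIT(type);		/* as in is_node() */
}

static unsigned int set_mark_flag(unsigned int flag, int mark, int type)
{
	if (mark)				/* as in set_mark() */
		flag |= BIT(type);
	else
		flag &= ~BIT(type);
	return flag;
}
```

Because the marks and the offset share one word, setting or clearing a mark never disturbs the offset, which is why ofs_of_node() can simply shift the marks away.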
+70 -46
fs/f2fs/recovery.c
···
 	return 0;
 }

-static int recover_dentry(struct inode *inode, struct page *ipage,
+static int recover_dentry(struct inode *inode, struct folio *ifolio,
 			struct list_head *dir_list)
 {
-	struct f2fs_inode *raw_inode = F2FS_INODE(ipage);
+	struct f2fs_inode *raw_inode = F2FS_INODE(ifolio);
 	nid_t pino = le32_to_cpu(raw_inode->i_pino);
 	struct f2fs_dir_entry *de;
 	struct f2fs_filename fname;
···
 	else
 		name = raw_inode->i_name;
 	f2fs_notice(F2FS_I_SB(inode), "%s: ino = %x, name = %s, dir = %lx, err = %d",
-			__func__, ino_of_node(ipage), name,
+			__func__, ino_of_node(ifolio), name,
 			IS_ERR(dir) ? 0 : dir->i_ino, err);
 	return err;
 }

-static int recover_quota_data(struct inode *inode, struct page *page)
+static int recover_quota_data(struct inode *inode, struct folio *folio)
 {
-	struct f2fs_inode *raw = F2FS_INODE(page);
+	struct f2fs_inode *raw = F2FS_INODE(folio);
 	struct iattr attr;
 	uid_t i_uid = le32_to_cpu(raw->i_uid);
 	gid_t i_gid = le32_to_cpu(raw->i_gid);
···
 	clear_inode_flag(inode, FI_DATA_EXIST);
 }

-static int recover_inode(struct inode *inode, struct page *page)
+static int recover_inode(struct inode *inode, struct folio *folio)
 {
-	struct f2fs_inode *raw = F2FS_INODE(page);
+	struct f2fs_inode *raw = F2FS_INODE(folio);
 	struct f2fs_inode_info *fi = F2FS_I(inode);
 	char *name;
 	int err;

 	inode->i_mode = le16_to_cpu(raw->i_mode);

-	err = recover_quota_data(inode, page);
+	err = recover_quota_data(inode, folio);
 	if (err)
 		return err;

···
 	if (file_enc_name(inode))
 		name = "<encrypted>";
 	else
-		name = F2FS_INODE(page)->i_name;
+		name = F2FS_INODE(folio)->i_name;

 	f2fs_notice(F2FS_I_SB(inode), "recover_inode: ino = %x, name = %s, inline = %x",
-			ino_of_node(page), name, raw->i_inline);
+			ino_of_node(folio), name, raw->i_inline);
 	return 0;
 }
···
 	if (IS_ERR(folio))
 		return PTR_ERR(folio);

-	if (!is_recoverable_dnode(&folio->page)) {
+	if (!is_recoverable_dnode(folio)) {
 		f2fs_folio_put(folio, true);
 		*is_detecting = false;
 		return 0;
···
 			break;
 		}

-		if (!is_recoverable_dnode(&folio->page)) {
+		if (!is_recoverable_dnode(folio)) {
 			f2fs_folio_put(folio, true);
 			break;
 		}

-		if (!is_fsync_dnode(&folio->page))
+		if (!is_fsync_dnode(folio))
 			goto next;

-		entry = get_fsync_inode(head, ino_of_node(&folio->page));
+		entry = get_fsync_inode(head, ino_of_node(folio));
 		if (!entry) {
 			bool quota_inode = false;

 			if (!check_only &&
-					IS_INODE(&folio->page) &&
-					is_dent_dnode(&folio->page)) {
-				err = f2fs_recover_inode_page(sbi, &folio->page);
+					IS_INODE(folio) &&
+					is_dent_dnode(folio)) {
+				err = f2fs_recover_inode_page(sbi, folio);
 				if (err) {
 					f2fs_folio_put(folio, true);
 					break;
···
 			 * CP | dnode(F) | inode(DF)
 			 * For this case, we should not give up now.
 			 */
-			entry = add_fsync_inode(sbi, head, ino_of_node(&folio->page),
+			entry = add_fsync_inode(sbi, head, ino_of_node(folio),
 							quota_inode);
 			if (IS_ERR(entry)) {
 				err = PTR_ERR(entry);
···
 			}
 		}
 		entry->blkaddr = blkaddr;

-		if (IS_INODE(&folio->page) && is_dent_dnode(&folio->page))
+		if (IS_INODE(folio) && is_dent_dnode(folio))
 			entry->last_dentry = blkaddr;
 next:
 		/* check next segment */
···
 	nid = le32_to_cpu(sum.nid);
 	ofs_in_node = le16_to_cpu(sum.ofs_in_node);

-	max_addrs = ADDRS_PER_PAGE(&dn->node_folio->page, dn->inode);
+	max_addrs = ADDRS_PER_PAGE(dn->node_folio, dn->inode);
 	if (ofs_in_node >= max_addrs) {
 		f2fs_err(sbi, "Inconsistent ofs_in_node:%u in summary, ino:%lu, nid:%u, max:%u",
 			ofs_in_node, dn->inode->i_ino, nid, max_addrs);
···
 	if (IS_ERR(node_folio))
 		return PTR_ERR(node_folio);

-	offset = ofs_of_node(&node_folio->page);
-	ino = ino_of_node(&node_folio->page);
+	offset = ofs_of_node(node_folio);
+	ino = ino_of_node(node_folio);
 	f2fs_folio_put(node_folio, true);

 	if (ino != dn->inode->i_ino) {
···
 {
 	struct dnode_of_data dn;
 	struct node_info ni;
-	unsigned int start, end;
+	unsigned int start = 0, end = 0, index;
 	int err = 0, recovered = 0;

 	/* step 1: recover xattr */
-	if (IS_INODE(&folio->page)) {
+	if (IS_INODE(folio)) {
 		err = f2fs_recover_inline_xattr(inode, folio);
 		if (err)
 			goto out;
-	} else if (f2fs_has_xattr_block(ofs_of_node(&folio->page))) {
-		err = f2fs_recover_xattr_data(inode, &folio->page);
+	} else if (f2fs_has_xattr_block(ofs_of_node(folio))) {
+		err = f2fs_recover_xattr_data(inode, folio);
 		if (!err)
 			recovered++;
 		goto out;
···
 	}

 	/* step 3: recover data indices */
-	start = f2fs_start_bidx_of_node(ofs_of_node(&folio->page), inode);
-	end = start + ADDRS_PER_PAGE(&folio->page, inode);
+	start = f2fs_start_bidx_of_node(ofs_of_node(folio), inode);
+	end = start + ADDRS_PER_PAGE(folio, inode);

 	set_new_dnode(&dn, inode, NULL, NULL, 0);
 retry_dn:
···
 	if (err)
 		goto err;

-	f2fs_bug_on(sbi, ni.ino != ino_of_node(&folio->page));
+	f2fs_bug_on(sbi, ni.ino != ino_of_node(folio));

-	if (ofs_of_node(&dn.node_folio->page) != ofs_of_node(&folio->page)) {
+	if (ofs_of_node(dn.node_folio) != ofs_of_node(folio)) {
 		f2fs_warn(sbi, "Inconsistent ofs_of_node, ino:%lu, ofs:%u, %u",
-			inode->i_ino, ofs_of_node(&dn.node_folio->page),
-			ofs_of_node(&folio->page));
+			inode->i_ino, ofs_of_node(dn.node_folio),
+			ofs_of_node(folio));
 		err = -EFSCORRUPTED;
 		f2fs_handle_error(sbi, ERROR_INCONSISTENT_FOOTER);
 		goto err;
 	}

-	for (; start < end; start++, dn.ofs_in_node++) {
+	for (index = start; index < end; index++, dn.ofs_in_node++) {
 		block_t src, dest;

 		src = f2fs_data_blkaddr(&dn);
···
 		}

 		if (!file_keep_isize(inode) &&
-			(i_size_read(inode) <= ((loff_t)start << PAGE_SHIFT)))
+			(i_size_read(inode) <= ((loff_t)index << PAGE_SHIFT)))
 			f2fs_i_size_write(inode,
-				(loff_t)(start + 1) << PAGE_SHIFT);
+				(loff_t)(index + 1) << PAGE_SHIFT);

 		/*
 		 * dest is reserved block, invalidate src block
···
 		}
 	}

-	copy_node_footer(&dn.node_folio->page, &folio->page);
-	fill_node_footer(&dn.node_folio->page, dn.nid, ni.ino,
-					ofs_of_node(&folio->page), false);
+	copy_node_footer(dn.node_folio, folio);
+	fill_node_footer(dn.node_folio, dn.nid, ni.ino,
+					ofs_of_node(folio), false);
 	folio_mark_dirty(dn.node_folio);
 err:
 	f2fs_put_dnode(&dn);
 out:
-	f2fs_notice(sbi, "recover_data: ino = %lx (i_size: %s) recovered = %d, err = %d",
-		inode->i_ino, file_keep_isize(inode) ? "keep" : "recover",
-		recovered, err);
+	f2fs_notice(sbi, "recover_data: ino = %lx, nid = %x (i_size: %s), "
+		"range (%u, %u), recovered = %d, err = %d",
+		inode->i_ino, nid_of_node(folio),
+		file_keep_isize(inode) ? "keep" : "recover",
+		start, end, recovered, err);
 	return err;
 }
···
 	int err = 0;
 	block_t blkaddr;
 	unsigned int ra_blocks = RECOVERY_MAX_RA_BLOCKS;
+	unsigned int recoverable_dnode = 0;
+	unsigned int fsynced_dnode = 0;
+	unsigned int total_dnode = 0;
+	unsigned int recovered_inode = 0;
+	unsigned int recovered_dentry = 0;
+	unsigned int recovered_dnode = 0;
+
+	f2fs_notice(sbi, "do_recover_data: start to recover dnode");

 	/* get node pages in the current segment */
 	curseg = CURSEG_I(sbi, CURSEG_WARM_NODE);
···
 			break;
 		}

-		if (!is_recoverable_dnode(&folio->page)) {
+		if (!is_recoverable_dnode(folio)) {
 			f2fs_folio_put(folio, true);
 			break;
 		}
+		recoverable_dnode++;

-		entry = get_fsync_inode(inode_list, ino_of_node(&folio->page));
+		entry = get_fsync_inode(inode_list, ino_of_node(folio));
 		if (!entry)
 			goto next;
+		fsynced_dnode++;
 		/*
 		 * inode(x) | CP | inode(x) | dnode(F)
 		 * In this case, we can lose the latest inode(x).
 		 * So, call recover_inode for the inode update.
 		 */
-		if (IS_INODE(&folio->page)) {
-			err = recover_inode(entry->inode, &folio->page);
+		if (IS_INODE(folio)) {
+			err = recover_inode(entry->inode, folio);
 			if (err) {
 				f2fs_folio_put(folio, true);
 				break;
 			}
+			recovered_inode++;
 		}
 		if (entry->last_dentry == blkaddr) {
-			err = recover_dentry(entry->inode, &folio->page, dir_list);
+			err = recover_dentry(entry->inode, folio, dir_list);
 			if (err) {
 				f2fs_folio_put(folio, true);
 				break;
 			}
+			recovered_dentry++;
 		}
 		err = do_recover_data(sbi, entry->inode, folio);
 		if (err) {
 			f2fs_folio_put(folio, true);
 			break;
 		}
+		recovered_dnode++;

 		if (entry->blkaddr == blkaddr)
 			list_move_tail(&entry->list, tmp_inode_list);
···
 		f2fs_folio_put(folio, true);

 		f2fs_ra_meta_pages_cond(sbi, blkaddr, ra_blocks);
+		total_dnode++;
 	}
 	if (!err)
 		err = f2fs_allocate_new_segments(sbi);
+
+	f2fs_notice(sbi, "do_recover_data: dnode: (recoverable: %u, fsynced: %u, "
+		"total: %u), recovered: (inode: %u, dentry: %u, dnode: %u), err: %d",
+		recoverable_dnode, fsynced_dnode, total_dnode, recovered_inode,
+		recovered_dentry, recovered_dnode, err);
 	return err;
 }
···
 	int ret = 0;
 	unsigned long s_flags = sbi->sb->s_flags;
 	bool need_writecp = false;
+
+	f2fs_notice(sbi, "f2fs_recover_fsync_data: recovery fsync data, "
+		"check_only: %d", check_only);

 	if (is_sbi_flag_set(sbi, SBI_IS_WRITABLE))
 		f2fs_info(sbi, "recover fsync data on readonly fs");
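The recovery loops above gate every candidate node block on is_recoverable_dnode(), whose version check (from the node.h hunk) packs the checkpoint CRC into the upper 32 bits of the version word and compares only the low 32 bits when fsck may have rewritten the CRC. A user-space model of that comparison; the two bool flags stand in for the checkpoint flag tests and all values are illustrative:

```c
#include <stdint.h>
#include <stdbool.h>

/* Model of is_recoverable_dnode(): crc_flag ~ CP_CRC_RECOVERY_FLAG,
 * nocrc_flag ~ CP_NOCRC_RECOVERY_FLAG. */
static bool recoverable(uint64_t ckpt_ver, uint64_t crc,
			uint64_t node_cpver, bool crc_flag, bool nocrc_flag)
{
	uint64_t cp_ver = ckpt_ver;

	if (nocrc_flag)	/* fsck may have rewritten the CRC: ignore it */
		return (cp_ver << 32) == (node_cpver << 32);
	if (crc_flag)	/* CRC rides in the upper 32 bits of the version */
		cp_ver |= (crc << 32);
	return cp_ver == node_cpver;
}
```

The left-shift trick in the NOCRC path discards the upper halves of both words, so only the raw checkpoint versions are compared.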
+34 -28
fs/f2fs/segment.c
··· 334 334 goto next; 335 335 } 336 336 337 - blen = min((pgoff_t)ADDRS_PER_PAGE(&dn.node_folio->page, cow_inode), 337 + blen = min((pgoff_t)ADDRS_PER_PAGE(dn.node_folio, cow_inode), 338 338 len); 339 339 index = off; 340 340 for (i = 0; i < blen; i++, dn.ofs_in_node++, index++) { ··· 455 455 } else { 456 456 struct f2fs_gc_control gc_control = { 457 457 .victim_segno = NULL_SEGNO, 458 - .init_gc_type = BG_GC, 458 + .init_gc_type = f2fs_sb_has_blkzoned(sbi) ? 459 + FG_GC : BG_GC, 459 460 .no_bg_gc = true, 460 461 .should_migrate_blocks = false, 461 462 .err_gc_skipped = false, ··· 773 772 struct dirty_seglist_info *dirty_i = DIRTY_I(sbi); 774 773 775 774 /* need not be added */ 776 - if (IS_CURSEG(sbi, segno)) 775 + if (is_curseg(sbi, segno)) 777 776 return; 778 777 779 778 if (!test_and_set_bit(segno, dirty_i->dirty_segmap[dirty_type])) ··· 800 799 !valid_blocks) || 801 800 valid_blocks == CAP_BLKS_PER_SEC(sbi)); 802 801 803 - if (!IS_CURSEC(sbi, secno)) 802 + if (!is_cursec(sbi, secno)) 804 803 set_bit(secno, dirty_i->dirty_secmap); 805 804 } 806 805 } ··· 839 838 return; 840 839 } 841 840 842 - if (!IS_CURSEC(sbi, secno)) 841 + if (!is_cursec(sbi, secno)) 843 842 set_bit(secno, dirty_i->dirty_secmap); 844 843 } 845 844 } ··· 856 855 unsigned short valid_blocks, ckpt_valid_blocks; 857 856 unsigned int usable_blocks; 858 857 859 - if (segno == NULL_SEGNO || IS_CURSEG(sbi, segno)) 858 + if (segno == NULL_SEGNO || is_curseg(sbi, segno)) 860 859 return; 861 860 862 861 usable_blocks = f2fs_usable_blks_in_seg(sbi, segno); ··· 889 888 for_each_set_bit(segno, dirty_i->dirty_segmap[DIRTY], MAIN_SEGS(sbi)) { 890 889 if (get_valid_blocks(sbi, segno, false)) 891 890 continue; 892 - if (IS_CURSEG(sbi, segno)) 891 + if (is_curseg(sbi, segno)) 893 892 continue; 894 893 __locate_dirty_segment(sbi, segno, PRE); 895 894 __remove_dirty_segment(sbi, segno, DIRTY); ··· 2108 2107 if (!force) { 2109 2108 if (!f2fs_realtime_discard_enable(sbi) || 2110 2109 (!se->valid_blocks && 2111 - 
!IS_CURSEG(sbi, cpc->trim_start)) || 2110 + !is_curseg(sbi, cpc->trim_start)) || 2112 2111 SM_I(sbi)->dcc_info->nr_discards >= 2113 2112 SM_I(sbi)->dcc_info->max_discards) 2114 2113 return false; ··· 2236 2235 next: 2237 2236 secno = GET_SEC_FROM_SEG(sbi, start); 2238 2237 start_segno = GET_SEG_FROM_SEC(sbi, secno); 2239 - if (!IS_CURSEC(sbi, secno) && 2238 + if (!is_cursec(sbi, secno) && 2240 2239 !get_valid_blocks(sbi, start, true)) 2241 2240 f2fs_issue_discard(sbi, START_BLOCK(sbi, start_segno), 2242 2241 BLKS_PER_SEC(sbi)); ··· 3620 3619 else 3621 3620 return CURSEG_COLD_DATA; 3622 3621 } else { 3623 - if (IS_DNODE(fio->page) && is_cold_node(fio->page)) 3622 + if (IS_DNODE(fio->folio) && is_cold_node(fio->folio)) 3624 3623 return CURSEG_WARM_NODE; 3625 3624 else 3626 3625 return CURSEG_COLD_NODE; ··· 3666 3665 if (file_is_cold(inode) || f2fs_need_compress_data(inode)) 3667 3666 return CURSEG_COLD_DATA; 3668 3667 3669 - type = __get_age_segment_type(inode, 3670 - page_folio(fio->page)->index); 3668 + type = __get_age_segment_type(inode, fio->folio->index); 3671 3669 if (type != NO_CHECK_TYPE) 3672 3670 return type; 3673 3671 ··· 3677 3677 return f2fs_rw_hint_to_seg_type(F2FS_I_SB(inode), 3678 3678 inode->i_write_hint); 3679 3679 } else { 3680 - if (IS_DNODE(fio->page)) 3681 - return is_cold_node(fio->page) ? CURSEG_WARM_NODE : 3680 + if (IS_DNODE(fio->folio)) 3681 + return is_cold_node(fio->folio) ? 
CURSEG_WARM_NODE : 3682 3682 CURSEG_HOT_NODE; 3683 3683 return CURSEG_COLD_NODE; 3684 3684 } ··· 3746 3746 get_random_u32_inclusive(1, sbi->max_fragment_hole); 3747 3747 } 3748 3748 3749 - int f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page, 3749 + int f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct folio *folio, 3750 3750 block_t old_blkaddr, block_t *new_blkaddr, 3751 3751 struct f2fs_summary *sum, int type, 3752 3752 struct f2fs_io_info *fio) ··· 3850 3850 3851 3851 up_write(&sit_i->sentry_lock); 3852 3852 3853 - if (page && IS_NODESEG(curseg->seg_type)) { 3854 - fill_node_footer_blkaddr(page, NEXT_FREE_BLKADDR(sbi, curseg)); 3853 + if (folio && IS_NODESEG(curseg->seg_type)) { 3854 + fill_node_footer_blkaddr(folio, NEXT_FREE_BLKADDR(sbi, curseg)); 3855 3855 3856 - f2fs_inode_chksum_set(sbi, page); 3856 + f2fs_inode_chksum_set(sbi, folio); 3857 3857 } 3858 3858 3859 3859 if (fio) { ··· 3931 3931 3932 3932 static void do_write_page(struct f2fs_summary *sum, struct f2fs_io_info *fio) 3933 3933 { 3934 - struct folio *folio = page_folio(fio->page); 3934 + struct folio *folio = fio->folio; 3935 3935 enum log_type type = __get_segment_type(fio); 3936 3936 int seg_type = log_type_to_seg_type(type); 3937 3937 bool keep_order = (f2fs_lfs_mode(fio->sbi) && ··· 3940 3940 if (keep_order) 3941 3941 f2fs_down_read(&fio->sbi->io_order_lock); 3942 3942 3943 - if (f2fs_allocate_data_block(fio->sbi, fio->page, fio->old_blkaddr, 3943 + if (f2fs_allocate_data_block(fio->sbi, folio, fio->old_blkaddr, 3944 3944 &fio->new_blkaddr, sum, type, fio)) { 3945 3945 if (fscrypt_inode_uses_fs_layer_crypto(folio->mapping->host)) 3946 3946 fscrypt_finalize_bounce_page(&fio->encrypted_page); 3947 3947 folio_end_writeback(folio); 3948 3948 if (f2fs_in_warm_node_list(fio->sbi, folio)) 3949 3949 f2fs_del_fsync_node_entry(fio->sbi, folio); 3950 + f2fs_bug_on(fio->sbi, !is_set_ckpt_flags(fio->sbi, 3951 + CP_ERROR_FLAG)); 3950 3952 goto out; 3951 3953 } 3954 + 3955 + 
f2fs_bug_on(fio->sbi, !f2fs_is_valid_blkaddr_raw(fio->sbi, 3956 + fio->new_blkaddr, DATA_GENERIC_ENHANCE)); 3957 + 3952 3958 if (GET_SEGNO(fio->sbi, fio->old_blkaddr) != NULL_SEGNO) 3953 3959 f2fs_invalidate_internal_cache(fio->sbi, fio->old_blkaddr, 1); 3954 3960 ··· 3978 3972 .op_flags = REQ_SYNC | REQ_META | REQ_PRIO, 3979 3973 .old_blkaddr = folio->index, 3980 3974 .new_blkaddr = folio->index, 3981 - .page = folio_page(folio, 0), 3975 + .folio = folio, 3982 3976 .encrypted_page = NULL, 3983 3977 .in_list = 0, 3984 3978 }; ··· 4106 4100 4107 4101 if (!recover_curseg) { 4108 4102 /* for recovery flow */ 4109 - if (se->valid_blocks == 0 && !IS_CURSEG(sbi, segno)) { 4103 + if (se->valid_blocks == 0 && !is_curseg(sbi, segno)) { 4110 4104 if (old_blkaddr == NULL_ADDR) 4111 4105 type = CURSEG_COLD_DATA; 4112 4106 else 4113 4107 type = CURSEG_WARM_DATA; 4114 4108 } 4115 4109 } else { 4116 - if (IS_CURSEG(sbi, segno)) { 4110 + if (is_curseg(sbi, segno)) { 4117 4111 /* se->type is volatile as SSR allocation */ 4118 4112 type = __f2fs_get_curseg(sbi, segno); 4119 4113 f2fs_bug_on(sbi, type == NO_CHECK_TYPE); ··· 4197 4191 struct f2fs_sb_info *sbi = F2FS_F_SB(folio); 4198 4192 4199 4193 /* submit cached LFS IO */ 4200 - f2fs_submit_merged_write_cond(sbi, NULL, &folio->page, 0, type); 4194 + f2fs_submit_merged_write_cond(sbi, NULL, folio, 0, type); 4201 4195 /* submit cached IPU IO */ 4202 4196 f2fs_submit_merged_ipu_write(sbi, NULL, folio); 4203 4197 if (ordered) { ··· 5149 5143 5150 5144 if (!valid_blocks || valid_blocks == CAP_BLKS_PER_SEC(sbi)) 5151 5145 continue; 5152 - if (IS_CURSEC(sbi, secno)) 5146 + if (is_cursec(sbi, secno)) 5153 5147 continue; 5154 5148 set_bit(secno, dirty_i->dirty_secmap); 5155 5149 } ··· 5285 5279 * Get # of valid block of the zone. 
5286 5280 */ 5287 5281 valid_block_cnt = get_valid_blocks(sbi, zone_segno, true); 5288 - if (IS_CURSEC(sbi, GET_SEC_FROM_SEG(sbi, zone_segno))) { 5282 + if (is_cursec(sbi, GET_SEC_FROM_SEG(sbi, zone_segno))) { 5289 5283 f2fs_notice(sbi, "Open zones: valid block[0x%x,0x%x] cond[%s]", 5290 5284 zone_segno, valid_block_cnt, 5291 5285 blk_zone_cond_str(zone->cond)); ··· 5812 5806 kvfree(sit_i->dirty_sentries_bitmap); 5813 5807 5814 5808 SM_I(sbi)->sit_info = NULL; 5815 - kvfree(sit_i->sit_bitmap); 5809 + kfree(sit_i->sit_bitmap); 5816 5810 #ifdef CONFIG_F2FS_CHECK_FS 5817 - kvfree(sit_i->sit_bitmap_mir); 5811 + kfree(sit_i->sit_bitmap_mir); 5818 5812 kvfree(sit_i->invalid_segmap); 5819 5813 #endif 5820 5814 kfree(sit_i);
+26 -33
fs/f2fs/segment.h
··· 34 34 f2fs_bug_on(sbi, seg_type >= NR_PERSISTENT_LOG); 35 35 } 36 36 37 - #define IS_CURSEG(sbi, seg) \ 38 - (((seg) == CURSEG_I(sbi, CURSEG_HOT_DATA)->segno) || \ 39 - ((seg) == CURSEG_I(sbi, CURSEG_WARM_DATA)->segno) || \ 40 - ((seg) == CURSEG_I(sbi, CURSEG_COLD_DATA)->segno) || \ 41 - ((seg) == CURSEG_I(sbi, CURSEG_HOT_NODE)->segno) || \ 42 - ((seg) == CURSEG_I(sbi, CURSEG_WARM_NODE)->segno) || \ 43 - ((seg) == CURSEG_I(sbi, CURSEG_COLD_NODE)->segno) || \ 44 - ((seg) == CURSEG_I(sbi, CURSEG_COLD_DATA_PINNED)->segno) || \ 45 - ((seg) == CURSEG_I(sbi, CURSEG_ALL_DATA_ATGC)->segno)) 46 - 47 - #define IS_CURSEC(sbi, secno) \ 48 - (((secno) == CURSEG_I(sbi, CURSEG_HOT_DATA)->segno / \ 49 - SEGS_PER_SEC(sbi)) || \ 50 - ((secno) == CURSEG_I(sbi, CURSEG_WARM_DATA)->segno / \ 51 - SEGS_PER_SEC(sbi)) || \ 52 - ((secno) == CURSEG_I(sbi, CURSEG_COLD_DATA)->segno / \ 53 - SEGS_PER_SEC(sbi)) || \ 54 - ((secno) == CURSEG_I(sbi, CURSEG_HOT_NODE)->segno / \ 55 - SEGS_PER_SEC(sbi)) || \ 56 - ((secno) == CURSEG_I(sbi, CURSEG_WARM_NODE)->segno / \ 57 - SEGS_PER_SEC(sbi)) || \ 58 - ((secno) == CURSEG_I(sbi, CURSEG_COLD_NODE)->segno / \ 59 - SEGS_PER_SEC(sbi)) || \ 60 - ((secno) == CURSEG_I(sbi, CURSEG_COLD_DATA_PINNED)->segno / \ 61 - SEGS_PER_SEC(sbi)) || \ 62 - ((secno) == CURSEG_I(sbi, CURSEG_ALL_DATA_ATGC)->segno / \ 63 - SEGS_PER_SEC(sbi))) 64 - 65 37 #define MAIN_BLKADDR(sbi) \ 66 38 (SM_I(sbi) ? 
SM_I(sbi)->main_blkaddr : \ 67 39 le32_to_cpu(F2FS_RAW_SUPER(sbi)->main_blkaddr)) ··· 290 318 return (struct curseg_info *)(SM_I(sbi)->curseg_array + type); 291 319 } 292 320 321 + static inline bool is_curseg(struct f2fs_sb_info *sbi, unsigned int segno) 322 + { 323 + int i; 324 + 325 + for (i = CURSEG_HOT_DATA; i < NO_CHECK_TYPE; i++) { 326 + if (segno == CURSEG_I(sbi, i)->segno) 327 + return true; 328 + } 329 + return false; 330 + } 331 + 332 + static inline bool is_cursec(struct f2fs_sb_info *sbi, unsigned int secno) 333 + { 334 + int i; 335 + 336 + for (i = CURSEG_HOT_DATA; i < NO_CHECK_TYPE; i++) { 337 + if (secno == GET_SEC_FROM_SEG(sbi, CURSEG_I(sbi, i)->segno)) 338 + return true; 339 + } 340 + return false; 341 + } 342 + 293 343 static inline struct seg_entry *get_seg_entry(struct f2fs_sb_info *sbi, 294 344 unsigned int segno) 295 345 { ··· 503 509 504 510 free_i->free_segments++; 505 511 506 - if (!inmem && IS_CURSEC(sbi, secno)) 512 + if (!inmem && is_cursec(sbi, secno)) 507 513 goto unlock_out; 508 514 509 515 /* check large section */ ··· 668 674 unsigned int dent_blocks = total_dent_blocks % CAP_BLKS_PER_SEC(sbi); 669 675 unsigned int data_blocks = 0; 670 676 671 - if (f2fs_lfs_mode(sbi) && 672 - unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) { 677 + if (f2fs_lfs_mode(sbi)) { 673 678 total_data_blocks = get_pages(sbi, F2FS_DIRTY_DATA); 674 679 data_secs = total_data_blocks / CAP_BLKS_PER_SEC(sbi); 675 680 data_blocks = total_data_blocks % CAP_BLKS_PER_SEC(sbi); ··· 677 684 if (lower_p) 678 685 *lower_p = node_secs + dent_secs + data_secs; 679 686 if (upper_p) 680 - *upper_p = node_secs + dent_secs + 687 + *upper_p = node_secs + dent_secs + data_secs + 681 688 (node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0) + 682 689 (data_blocks ? 
1 : 0); 683 690 if (curseg_p) ··· 979 986 980 987 static inline bool sec_usage_check(struct f2fs_sb_info *sbi, unsigned int secno) 981 988 { 982 - if (IS_CURSEC(sbi, secno) || (sbi->cur_victim_sec == secno)) 989 + if (is_cursec(sbi, secno) || (sbi->cur_victim_sec == secno)) 983 990 return true; 984 991 return false; 985 992 }
+1235 -950
fs/f2fs/super.c
··· 27 27 #include <linux/part_stat.h> 28 28 #include <linux/zstd.h> 29 29 #include <linux/lz4.h> 30 + #include <linux/ctype.h> 31 + #include <linux/fs_parser.h> 30 32 31 33 #include "f2fs.h" 32 34 #include "node.h" ··· 127 125 Opt_disable_roll_forward, 128 126 Opt_norecovery, 129 127 Opt_discard, 130 - Opt_nodiscard, 131 128 Opt_noheap, 132 129 Opt_heap, 133 130 Opt_user_xattr, 134 - Opt_nouser_xattr, 135 131 Opt_acl, 136 - Opt_noacl, 137 132 Opt_active_logs, 138 133 Opt_disable_ext_identify, 139 134 Opt_inline_xattr, 140 - Opt_noinline_xattr, 141 135 Opt_inline_xattr_size, 142 136 Opt_inline_data, 143 137 Opt_inline_dentry, 144 - Opt_noinline_dentry, 145 138 Opt_flush_merge, 146 - Opt_noflush_merge, 147 139 Opt_barrier, 148 - Opt_nobarrier, 149 140 Opt_fastboot, 150 141 Opt_extent_cache, 151 - Opt_noextent_cache, 152 - Opt_noinline_data, 153 142 Opt_data_flush, 154 143 Opt_reserve_root, 155 144 Opt_resgid, ··· 149 156 Opt_fault_injection, 150 157 Opt_fault_type, 151 158 Opt_lazytime, 152 - Opt_nolazytime, 153 159 Opt_quota, 154 - Opt_noquota, 155 160 Opt_usrquota, 156 161 Opt_grpquota, 157 162 Opt_prjquota, 158 163 Opt_usrjquota, 159 164 Opt_grpjquota, 160 165 Opt_prjjquota, 161 - Opt_offusrjquota, 162 - Opt_offgrpjquota, 163 - Opt_offprjjquota, 164 - Opt_jqfmt_vfsold, 165 - Opt_jqfmt_vfsv0, 166 - Opt_jqfmt_vfsv1, 167 166 Opt_alloc, 168 167 Opt_fsync, 169 168 Opt_test_dummy_encryption, ··· 165 180 Opt_checkpoint_disable_cap_perc, 166 181 Opt_checkpoint_enable, 167 182 Opt_checkpoint_merge, 168 - Opt_nocheckpoint_merge, 169 183 Opt_compress_algorithm, 170 184 Opt_compress_log_size, 171 - Opt_compress_extension, 172 185 Opt_nocompress_extension, 186 + Opt_compress_extension, 173 187 Opt_compress_chksum, 174 188 Opt_compress_mode, 175 189 Opt_compress_cache, 176 190 Opt_atgc, 177 191 Opt_gc_merge, 178 - Opt_nogc_merge, 179 192 Opt_discard_unit, 180 193 Opt_memory_mode, 181 194 Opt_age_extent_cache, 182 195 Opt_errors, 183 196 Opt_nat_bits, 197 + Opt_jqfmt, 198 + 
Opt_checkpoint, 184 199 Opt_err, 185 200 }; 186 201 187 - static match_table_t f2fs_tokens = { 188 - {Opt_gc_background, "background_gc=%s"}, 189 - {Opt_disable_roll_forward, "disable_roll_forward"}, 190 - {Opt_norecovery, "norecovery"}, 191 - {Opt_discard, "discard"}, 192 - {Opt_nodiscard, "nodiscard"}, 193 - {Opt_noheap, "no_heap"}, 194 - {Opt_heap, "heap"}, 195 - {Opt_user_xattr, "user_xattr"}, 196 - {Opt_nouser_xattr, "nouser_xattr"}, 197 - {Opt_acl, "acl"}, 198 - {Opt_noacl, "noacl"}, 199 - {Opt_active_logs, "active_logs=%u"}, 200 - {Opt_disable_ext_identify, "disable_ext_identify"}, 201 - {Opt_inline_xattr, "inline_xattr"}, 202 - {Opt_noinline_xattr, "noinline_xattr"}, 203 - {Opt_inline_xattr_size, "inline_xattr_size=%u"}, 204 - {Opt_inline_data, "inline_data"}, 205 - {Opt_inline_dentry, "inline_dentry"}, 206 - {Opt_noinline_dentry, "noinline_dentry"}, 207 - {Opt_flush_merge, "flush_merge"}, 208 - {Opt_noflush_merge, "noflush_merge"}, 209 - {Opt_barrier, "barrier"}, 210 - {Opt_nobarrier, "nobarrier"}, 211 - {Opt_fastboot, "fastboot"}, 212 - {Opt_extent_cache, "extent_cache"}, 213 - {Opt_noextent_cache, "noextent_cache"}, 214 - {Opt_noinline_data, "noinline_data"}, 215 - {Opt_data_flush, "data_flush"}, 216 - {Opt_reserve_root, "reserve_root=%u"}, 217 - {Opt_resgid, "resgid=%u"}, 218 - {Opt_resuid, "resuid=%u"}, 219 - {Opt_mode, "mode=%s"}, 220 - {Opt_fault_injection, "fault_injection=%u"}, 221 - {Opt_fault_type, "fault_type=%u"}, 222 - {Opt_lazytime, "lazytime"}, 223 - {Opt_nolazytime, "nolazytime"}, 224 - {Opt_quota, "quota"}, 225 - {Opt_noquota, "noquota"}, 226 - {Opt_usrquota, "usrquota"}, 227 - {Opt_grpquota, "grpquota"}, 228 - {Opt_prjquota, "prjquota"}, 229 - {Opt_usrjquota, "usrjquota=%s"}, 230 - {Opt_grpjquota, "grpjquota=%s"}, 231 - {Opt_prjjquota, "prjjquota=%s"}, 232 - {Opt_offusrjquota, "usrjquota="}, 233 - {Opt_offgrpjquota, "grpjquota="}, 234 - {Opt_offprjjquota, "prjjquota="}, 235 - {Opt_jqfmt_vfsold, "jqfmt=vfsold"}, 236 - {Opt_jqfmt_vfsv0, 
"jqfmt=vfsv0"}, 237 - {Opt_jqfmt_vfsv1, "jqfmt=vfsv1"}, 238 - {Opt_alloc, "alloc_mode=%s"}, 239 - {Opt_fsync, "fsync_mode=%s"}, 240 - {Opt_test_dummy_encryption, "test_dummy_encryption=%s"}, 241 - {Opt_test_dummy_encryption, "test_dummy_encryption"}, 242 - {Opt_inlinecrypt, "inlinecrypt"}, 243 - {Opt_checkpoint_disable, "checkpoint=disable"}, 244 - {Opt_checkpoint_disable_cap, "checkpoint=disable:%u"}, 245 - {Opt_checkpoint_disable_cap_perc, "checkpoint=disable:%u%%"}, 246 - {Opt_checkpoint_enable, "checkpoint=enable"}, 247 - {Opt_checkpoint_merge, "checkpoint_merge"}, 248 - {Opt_nocheckpoint_merge, "nocheckpoint_merge"}, 249 - {Opt_compress_algorithm, "compress_algorithm=%s"}, 250 - {Opt_compress_log_size, "compress_log_size=%u"}, 251 - {Opt_compress_extension, "compress_extension=%s"}, 252 - {Opt_nocompress_extension, "nocompress_extension=%s"}, 253 - {Opt_compress_chksum, "compress_chksum"}, 254 - {Opt_compress_mode, "compress_mode=%s"}, 255 - {Opt_compress_cache, "compress_cache"}, 256 - {Opt_atgc, "atgc"}, 257 - {Opt_gc_merge, "gc_merge"}, 258 - {Opt_nogc_merge, "nogc_merge"}, 259 - {Opt_discard_unit, "discard_unit=%s"}, 260 - {Opt_memory_mode, "memory=%s"}, 261 - {Opt_age_extent_cache, "age_extent_cache"}, 262 - {Opt_errors, "errors=%s"}, 263 - {Opt_nat_bits, "nat_bits"}, 202 + static const struct constant_table f2fs_param_background_gc[] = { 203 + {"on", BGGC_MODE_ON}, 204 + {"off", BGGC_MODE_OFF}, 205 + {"sync", BGGC_MODE_SYNC}, 206 + {} 207 + }; 208 + 209 + static const struct constant_table f2fs_param_mode[] = { 210 + {"adaptive", FS_MODE_ADAPTIVE}, 211 + {"lfs", FS_MODE_LFS}, 212 + {"fragment:segment", FS_MODE_FRAGMENT_SEG}, 213 + {"fragment:block", FS_MODE_FRAGMENT_BLK}, 214 + {} 215 + }; 216 + 217 + static const struct constant_table f2fs_param_jqfmt[] = { 218 + {"vfsold", QFMT_VFS_OLD}, 219 + {"vfsv0", QFMT_VFS_V0}, 220 + {"vfsv1", QFMT_VFS_V1}, 221 + {} 222 + }; 223 + 224 + static const struct constant_table f2fs_param_alloc_mode[] = { 225 + 
{"default", ALLOC_MODE_DEFAULT}, 226 + {"reuse", ALLOC_MODE_REUSE}, 227 + {} 228 + }; 229 + static const struct constant_table f2fs_param_fsync_mode[] = { 230 + {"posix", FSYNC_MODE_POSIX}, 231 + {"strict", FSYNC_MODE_STRICT}, 232 + {"nobarrier", FSYNC_MODE_NOBARRIER}, 233 + {} 234 + }; 235 + 236 + static const struct constant_table f2fs_param_compress_mode[] = { 237 + {"fs", COMPR_MODE_FS}, 238 + {"user", COMPR_MODE_USER}, 239 + {} 240 + }; 241 + 242 + static const struct constant_table f2fs_param_discard_unit[] = { 243 + {"block", DISCARD_UNIT_BLOCK}, 244 + {"segment", DISCARD_UNIT_SEGMENT}, 245 + {"section", DISCARD_UNIT_SECTION}, 246 + {} 247 + }; 248 + 249 + static const struct constant_table f2fs_param_memory_mode[] = { 250 + {"normal", MEMORY_MODE_NORMAL}, 251 + {"low", MEMORY_MODE_LOW}, 252 + {} 253 + }; 254 + 255 + static const struct constant_table f2fs_param_errors[] = { 256 + {"remount-ro", MOUNT_ERRORS_READONLY}, 257 + {"continue", MOUNT_ERRORS_CONTINUE}, 258 + {"panic", MOUNT_ERRORS_PANIC}, 259 + {} 260 + }; 261 + 262 + static const struct fs_parameter_spec f2fs_param_specs[] = { 263 + fsparam_enum("background_gc", Opt_gc_background, f2fs_param_background_gc), 264 + fsparam_flag("disable_roll_forward", Opt_disable_roll_forward), 265 + fsparam_flag("norecovery", Opt_norecovery), 266 + fsparam_flag_no("discard", Opt_discard), 267 + fsparam_flag("no_heap", Opt_noheap), 268 + fsparam_flag("heap", Opt_heap), 269 + fsparam_flag_no("user_xattr", Opt_user_xattr), 270 + fsparam_flag_no("acl", Opt_acl), 271 + fsparam_s32("active_logs", Opt_active_logs), 272 + fsparam_flag("disable_ext_identify", Opt_disable_ext_identify), 273 + fsparam_flag_no("inline_xattr", Opt_inline_xattr), 274 + fsparam_s32("inline_xattr_size", Opt_inline_xattr_size), 275 + fsparam_flag_no("inline_data", Opt_inline_data), 276 + fsparam_flag_no("inline_dentry", Opt_inline_dentry), 277 + fsparam_flag_no("flush_merge", Opt_flush_merge), 278 + fsparam_flag_no("barrier", Opt_barrier), 279 + 
fsparam_flag("fastboot", Opt_fastboot), 280 + fsparam_flag_no("extent_cache", Opt_extent_cache), 281 + fsparam_flag("data_flush", Opt_data_flush), 282 + fsparam_u32("reserve_root", Opt_reserve_root), 283 + fsparam_gid("resgid", Opt_resgid), 284 + fsparam_uid("resuid", Opt_resuid), 285 + fsparam_enum("mode", Opt_mode, f2fs_param_mode), 286 + fsparam_s32("fault_injection", Opt_fault_injection), 287 + fsparam_u32("fault_type", Opt_fault_type), 288 + fsparam_flag_no("lazytime", Opt_lazytime), 289 + fsparam_flag_no("quota", Opt_quota), 290 + fsparam_flag("usrquota", Opt_usrquota), 291 + fsparam_flag("grpquota", Opt_grpquota), 292 + fsparam_flag("prjquota", Opt_prjquota), 293 + fsparam_string_empty("usrjquota", Opt_usrjquota), 294 + fsparam_string_empty("grpjquota", Opt_grpjquota), 295 + fsparam_string_empty("prjjquota", Opt_prjjquota), 296 + fsparam_flag("nat_bits", Opt_nat_bits), 297 + fsparam_enum("jqfmt", Opt_jqfmt, f2fs_param_jqfmt), 298 + fsparam_enum("alloc_mode", Opt_alloc, f2fs_param_alloc_mode), 299 + fsparam_enum("fsync_mode", Opt_fsync, f2fs_param_fsync_mode), 300 + fsparam_string("test_dummy_encryption", Opt_test_dummy_encryption), 301 + fsparam_flag("test_dummy_encryption", Opt_test_dummy_encryption), 302 + fsparam_flag("inlinecrypt", Opt_inlinecrypt), 303 + fsparam_string("checkpoint", Opt_checkpoint), 304 + fsparam_flag_no("checkpoint_merge", Opt_checkpoint_merge), 305 + fsparam_string("compress_algorithm", Opt_compress_algorithm), 306 + fsparam_u32("compress_log_size", Opt_compress_log_size), 307 + fsparam_string("compress_extension", Opt_compress_extension), 308 + fsparam_string("nocompress_extension", Opt_nocompress_extension), 309 + fsparam_flag("compress_chksum", Opt_compress_chksum), 310 + fsparam_enum("compress_mode", Opt_compress_mode, f2fs_param_compress_mode), 311 + fsparam_flag("compress_cache", Opt_compress_cache), 312 + fsparam_flag("atgc", Opt_atgc), 313 + fsparam_flag_no("gc_merge", Opt_gc_merge), 314 + fsparam_enum("discard_unit", 
Opt_discard_unit, f2fs_param_discard_unit), 315 + fsparam_enum("memory", Opt_memory_mode, f2fs_param_memory_mode), 316 + fsparam_flag("age_extent_cache", Opt_age_extent_cache), 317 + fsparam_enum("errors", Opt_errors, f2fs_param_errors), 318 + {} 319 + }; 320 + 321 + /* Resort to a match_table for this interestingly formatted option */ 322 + static match_table_t f2fs_checkpoint_tokens = { 323 + {Opt_checkpoint_disable, "disable"}, 324 + {Opt_checkpoint_disable_cap, "disable:%u"}, 325 + {Opt_checkpoint_disable_cap_perc, "disable:%u%%"}, 326 + {Opt_checkpoint_enable, "enable"}, 264 327 {Opt_err, NULL}, 265 328 }; 266 329 330 + #define F2FS_SPEC_background_gc (1 << 0) 331 + #define F2FS_SPEC_inline_xattr_size (1 << 1) 332 + #define F2FS_SPEC_active_logs (1 << 2) 333 + #define F2FS_SPEC_reserve_root (1 << 3) 334 + #define F2FS_SPEC_resgid (1 << 4) 335 + #define F2FS_SPEC_resuid (1 << 5) 336 + #define F2FS_SPEC_mode (1 << 6) 337 + #define F2FS_SPEC_fault_injection (1 << 7) 338 + #define F2FS_SPEC_fault_type (1 << 8) 339 + #define F2FS_SPEC_jqfmt (1 << 9) 340 + #define F2FS_SPEC_alloc_mode (1 << 10) 341 + #define F2FS_SPEC_fsync_mode (1 << 11) 342 + #define F2FS_SPEC_checkpoint_disable_cap (1 << 12) 343 + #define F2FS_SPEC_checkpoint_disable_cap_perc (1 << 13) 344 + #define F2FS_SPEC_compress_level (1 << 14) 345 + #define F2FS_SPEC_compress_algorithm (1 << 15) 346 + #define F2FS_SPEC_compress_log_size (1 << 16) 347 + #define F2FS_SPEC_compress_extension (1 << 17) 348 + #define F2FS_SPEC_nocompress_extension (1 << 18) 349 + #define F2FS_SPEC_compress_chksum (1 << 19) 350 + #define F2FS_SPEC_compress_mode (1 << 20) 351 + #define F2FS_SPEC_discard_unit (1 << 21) 352 + #define F2FS_SPEC_memory_mode (1 << 22) 353 + #define F2FS_SPEC_errors (1 << 23) 354 + 355 + struct f2fs_fs_context { 356 + struct f2fs_mount_info info; 357 + unsigned int opt_mask; /* Bits changed */ 358 + unsigned int spec_mask; 359 + unsigned short qname_mask; 360 + }; 361 + 362 + #define F2FS_CTX_INFO(ctx) 
((ctx)->info) 363 + 364 + static inline void ctx_set_opt(struct f2fs_fs_context *ctx, 365 + unsigned int flag) 366 + { 367 + ctx->info.opt |= flag; 368 + ctx->opt_mask |= flag; 369 + } 370 + 371 + static inline void ctx_clear_opt(struct f2fs_fs_context *ctx, 372 + unsigned int flag) 373 + { 374 + ctx->info.opt &= ~flag; 375 + ctx->opt_mask |= flag; 376 + } 377 + 378 + static inline bool ctx_test_opt(struct f2fs_fs_context *ctx, 379 + unsigned int flag) 380 + { 381 + return ctx->info.opt & flag; 382 + } 383 + 267 384 void f2fs_printk(struct f2fs_sb_info *sbi, bool limit_rate, 268 - const char *fmt, ...) 385 + const char *fmt, ...) 269 386 { 270 387 struct va_format vaf; 271 388 va_list args; ··· 379 292 vaf.fmt = printk_skip_level(fmt); 380 293 vaf.va = &args; 381 294 if (limit_rate) 382 - printk_ratelimited("%c%cF2FS-fs (%s): %pV\n", 383 - KERN_SOH_ASCII, level, sbi->sb->s_id, &vaf); 295 + if (sbi) 296 + printk_ratelimited("%c%cF2FS-fs (%s): %pV\n", 297 + KERN_SOH_ASCII, level, sbi->sb->s_id, &vaf); 298 + else 299 + printk_ratelimited("%c%cF2FS-fs: %pV\n", 300 + KERN_SOH_ASCII, level, &vaf); 384 301 else 385 - printk("%c%cF2FS-fs (%s): %pV\n", 386 - KERN_SOH_ASCII, level, sbi->sb->s_id, &vaf); 302 + if (sbi) 303 + printk("%c%cF2FS-fs (%s): %pV\n", 304 + KERN_SOH_ASCII, level, sbi->sb->s_id, &vaf); 305 + else 306 + printk("%c%cF2FS-fs: %pV\n", 307 + KERN_SOH_ASCII, level, &vaf); 387 308 388 309 va_end(args); 389 310 } ··· 485 390 #ifdef CONFIG_QUOTA 486 391 static const char * const quotatypes[] = INITQFNAMES; 487 392 #define QTYPE2NAME(t) (quotatypes[t]) 488 - static int f2fs_set_qf_name(struct f2fs_sb_info *sbi, int qtype, 489 - substring_t *args) 393 + /* 394 + * Note the name of the specified quota file. 
395 + */ 396 + static int f2fs_note_qf_name(struct fs_context *fc, int qtype, 397 + struct fs_parameter *param) 490 398 { 491 - struct super_block *sb = sbi->sb; 399 + struct f2fs_fs_context *ctx = fc->fs_private; 492 400 char *qname; 493 - int ret = -EINVAL; 494 401 495 - if (sb_any_quota_loaded(sb) && !F2FS_OPTION(sbi).s_qf_names[qtype]) { 496 - f2fs_err(sbi, "Cannot change journaled quota options when quota turned on"); 402 + if (param->size < 1) { 403 + f2fs_err(NULL, "Missing quota name"); 497 404 return -EINVAL; 498 405 } 499 - if (f2fs_sb_has_quota_ino(sbi)) { 500 - f2fs_info(sbi, "QUOTA feature is enabled, so ignore qf_name"); 406 + if (strchr(param->string, '/')) { 407 + f2fs_err(NULL, "quotafile must be on filesystem root"); 408 + return -EINVAL; 409 + } 410 + if (ctx->info.s_qf_names[qtype]) { 411 + if (strcmp(ctx->info.s_qf_names[qtype], param->string) != 0) { 412 + f2fs_err(NULL, "Quota file already specified"); 413 + return -EINVAL; 414 + } 501 415 return 0; 502 416 } 503 417 504 - qname = match_strdup(args); 418 + qname = kmemdup_nul(param->string, param->size, GFP_KERNEL); 505 419 if (!qname) { 506 - f2fs_err(sbi, "Not enough memory for storing quotafile name"); 420 + f2fs_err(NULL, "Not enough memory for storing quotafile name"); 507 421 return -ENOMEM; 508 422 } 509 - if (F2FS_OPTION(sbi).s_qf_names[qtype]) { 510 - if (strcmp(F2FS_OPTION(sbi).s_qf_names[qtype], qname) == 0) 511 - ret = 0; 512 - else 513 - f2fs_err(sbi, "%s quota file already specified", 514 - QTYPE2NAME(qtype)); 515 - goto errout; 516 - } 517 - if (strchr(qname, '/')) { 518 - f2fs_err(sbi, "quotafile must be on filesystem root"); 519 - goto errout; 520 - } 521 - F2FS_OPTION(sbi).s_qf_names[qtype] = qname; 522 - set_opt(sbi, QUOTA); 523 - return 0; 524 - errout: 525 - kfree(qname); 526 - return ret; 527 - } 528 - 529 - static int f2fs_clear_qf_name(struct f2fs_sb_info *sbi, int qtype) 530 - { 531 - struct super_block *sb = sbi->sb; 532 - 533 - if (sb_any_quota_loaded(sb) && 
F2FS_OPTION(sbi).s_qf_names[qtype]) { 534 - f2fs_err(sbi, "Cannot change journaled quota options when quota turned on"); 535 - return -EINVAL; 536 - } 537 - kfree(F2FS_OPTION(sbi).s_qf_names[qtype]); 538 - F2FS_OPTION(sbi).s_qf_names[qtype] = NULL; 423 + F2FS_CTX_INFO(ctx).s_qf_names[qtype] = qname; 424 + ctx->qname_mask |= 1 << qtype; 539 425 return 0; 540 426 } 541 427 542 - static int f2fs_check_quota_options(struct f2fs_sb_info *sbi) 428 + /* 429 + * Clear the name of the specified quota file. 430 + */ 431 + static int f2fs_unnote_qf_name(struct fs_context *fc, int qtype) 543 432 { 544 - /* 545 - * We do the test below only for project quotas. 'usrquota' and 546 - * 'grpquota' mount options are allowed even without quota feature 547 - * to support legacy quotas in quota files. 548 - */ 549 - if (test_opt(sbi, PRJQUOTA) && !f2fs_sb_has_project_quota(sbi)) { 550 - f2fs_err(sbi, "Project quota feature not enabled. Cannot enable project quota enforcement."); 551 - return -1; 552 - } 553 - if (F2FS_OPTION(sbi).s_qf_names[USRQUOTA] || 554 - F2FS_OPTION(sbi).s_qf_names[GRPQUOTA] || 555 - F2FS_OPTION(sbi).s_qf_names[PRJQUOTA]) { 556 - if (test_opt(sbi, USRQUOTA) && 557 - F2FS_OPTION(sbi).s_qf_names[USRQUOTA]) 558 - clear_opt(sbi, USRQUOTA); 433 + struct f2fs_fs_context *ctx = fc->fs_private; 559 434 560 - if (test_opt(sbi, GRPQUOTA) && 561 - F2FS_OPTION(sbi).s_qf_names[GRPQUOTA]) 562 - clear_opt(sbi, GRPQUOTA); 563 - 564 - if (test_opt(sbi, PRJQUOTA) && 565 - F2FS_OPTION(sbi).s_qf_names[PRJQUOTA]) 566 - clear_opt(sbi, PRJQUOTA); 567 - 568 - if (test_opt(sbi, GRPQUOTA) || test_opt(sbi, USRQUOTA) || 569 - test_opt(sbi, PRJQUOTA)) { 570 - f2fs_err(sbi, "old and new quota format mixing"); 571 - return -1; 572 - } 573 - 574 - if (!F2FS_OPTION(sbi).s_jquota_fmt) { 575 - f2fs_err(sbi, "journaled quota format not specified"); 576 - return -1; 577 - } 578 - } 579 - 580 - if (f2fs_sb_has_quota_ino(sbi) && F2FS_OPTION(sbi).s_jquota_fmt) { 581 - f2fs_info(sbi, "QUOTA feature is 
enabled, so ignore jquota_fmt"); 582 - F2FS_OPTION(sbi).s_jquota_fmt = 0; 583 - } 435 + kfree(ctx->info.s_qf_names[qtype]); 436 + ctx->info.s_qf_names[qtype] = NULL; 437 + ctx->qname_mask |= 1 << qtype; 584 438 return 0; 439 + } 440 + 441 + static void f2fs_unnote_qf_name_all(struct fs_context *fc) 442 + { 443 + int i; 444 + 445 + for (i = 0; i < MAXQUOTAS; i++) 446 + f2fs_unnote_qf_name(fc, i); 585 447 } 586 448 #endif 587 449 588 - static int f2fs_set_test_dummy_encryption(struct f2fs_sb_info *sbi, 589 - const char *opt, 590 - const substring_t *arg, 591 - bool is_remount) 450 + static int f2fs_parse_test_dummy_encryption(const struct fs_parameter *param, 451 + struct f2fs_fs_context *ctx) 592 452 { 593 - struct fs_parameter param = { 594 - .type = fs_value_is_string, 595 - .string = arg->from ? arg->from : "", 596 - }; 597 - struct fscrypt_dummy_policy *policy = 598 - &F2FS_OPTION(sbi).dummy_enc_policy; 599 453 int err; 600 454 601 455 if (!IS_ENABLED(CONFIG_FS_ENCRYPTION)) { 602 - f2fs_warn(sbi, "test_dummy_encryption option not supported"); 456 + f2fs_warn(NULL, "test_dummy_encryption option not supported"); 603 457 return -EINVAL; 604 458 } 605 - 606 - if (!f2fs_sb_has_encrypt(sbi)) { 607 - f2fs_err(sbi, "Encrypt feature is off"); 608 - return -EINVAL; 609 - } 610 - 611 - /* 612 - * This mount option is just for testing, and it's not worthwhile to 613 - * implement the extra complexity (e.g. RCU protection) that would be 614 - * needed to allow it to be set or changed during remount. We do allow 615 - * it to be specified during remount, but only if there is no change. 
616 - */ 617 - if (is_remount && !fscrypt_is_dummy_policy_set(policy)) { 618 - f2fs_warn(sbi, "Can't set test_dummy_encryption on remount"); 619 - return -EINVAL; 620 - } 621 - 622 - err = fscrypt_parse_test_dummy_encryption(&param, policy); 459 + err = fscrypt_parse_test_dummy_encryption(param, 460 + &ctx->info.dummy_enc_policy); 623 461 if (err) { 624 - if (err == -EEXIST) 625 - f2fs_warn(sbi, 626 - "Can't change test_dummy_encryption on remount"); 627 - else if (err == -EINVAL) 628 - f2fs_warn(sbi, "Value of option \"%s\" is unrecognized", 629 - opt); 462 + if (err == -EINVAL) 463 + f2fs_warn(NULL, "Value of option \"%s\" is unrecognized", 464 + param->key); 465 + else if (err == -EEXIST) 466 + f2fs_warn(NULL, "Conflicting test_dummy_encryption options"); 630 467 else 631 - f2fs_warn(sbi, "Error processing option \"%s\" [%d]", 632 - opt, err); 468 + f2fs_warn(NULL, "Error processing option \"%s\" [%d]", 469 + param->key, err); 633 470 return -EINVAL; 634 471 } 635 - f2fs_warn(sbi, "Test dummy encryption mode enabled"); 636 472 return 0; 637 473 } 638 474 639 475 #ifdef CONFIG_F2FS_FS_COMPRESSION 640 - static bool is_compress_extension_exist(struct f2fs_sb_info *sbi, 476 + static bool is_compress_extension_exist(struct f2fs_mount_info *info, 641 477 const char *new_ext, bool is_ext) 642 478 { 643 479 unsigned char (*ext)[F2FS_EXTENSION_LEN]; ··· 576 550 int i; 577 551 578 552 if (is_ext) { 579 - ext = F2FS_OPTION(sbi).extensions; 580 - ext_cnt = F2FS_OPTION(sbi).compress_ext_cnt; 553 + ext = info->extensions; 554 + ext_cnt = info->compress_ext_cnt; 581 555 } else { 582 - ext = F2FS_OPTION(sbi).noextensions; 583 - ext_cnt = F2FS_OPTION(sbi).nocompress_ext_cnt; 556 + ext = info->noextensions; 557 + ext_cnt = info->nocompress_ext_cnt; 584 558 } 585 559 586 560 for (i = 0; i < ext_cnt; i++) { ··· 598 572 * extension will be treated as special cases and will not be compressed. 599 573 * 3. Don't allow the non-compress extension specifies all files. 
 */
-static int f2fs_test_compress_extension(struct f2fs_sb_info *sbi)
+static int f2fs_test_compress_extension(unsigned char (*noext)[F2FS_EXTENSION_LEN],
+					int noext_cnt,
+					unsigned char (*ext)[F2FS_EXTENSION_LEN],
+					int ext_cnt)
 {
-	unsigned char (*ext)[F2FS_EXTENSION_LEN];
-	unsigned char (*noext)[F2FS_EXTENSION_LEN];
-	int ext_cnt, noext_cnt, index = 0, no_index = 0;
-
-	ext = F2FS_OPTION(sbi).extensions;
-	ext_cnt = F2FS_OPTION(sbi).compress_ext_cnt;
-	noext = F2FS_OPTION(sbi).noextensions;
-	noext_cnt = F2FS_OPTION(sbi).nocompress_ext_cnt;
+	int index = 0, no_index = 0;
 
 	if (!noext_cnt)
 		return 0;
 
 	for (no_index = 0; no_index < noext_cnt; no_index++) {
+		if (strlen(noext[no_index]) == 0)
+			continue;
 		if (!strcasecmp("*", noext[no_index])) {
-			f2fs_info(sbi, "Don't allow the nocompress extension specifies all files");
+			f2fs_info(NULL, "Don't allow the nocompress extension specifies all files");
 			return -EINVAL;
 		}
 		for (index = 0; index < ext_cnt; index++) {
+			if (strlen(ext[index]) == 0)
+				continue;
 			if (!strcasecmp(ext[index], noext[no_index])) {
-				f2fs_info(sbi, "Don't allow the same extension %s appear in both compress and nocompress extension",
+				f2fs_info(NULL, "Don't allow the same extension %s appear in both compress and nocompress extension",
 					  ext[index]);
 				return -EINVAL;
 			}
···
 }
 
 #ifdef CONFIG_F2FS_FS_LZ4
-static int f2fs_set_lz4hc_level(struct f2fs_sb_info *sbi, const char *str)
+static int f2fs_set_lz4hc_level(struct f2fs_fs_context *ctx, const char *str)
 {
 #ifdef CONFIG_F2FS_FS_LZ4HC
 	unsigned int level;
 
 	if (strlen(str) == 3) {
-		F2FS_OPTION(sbi).compress_level = 0;
+		F2FS_CTX_INFO(ctx).compress_level = 0;
+		ctx->spec_mask |= F2FS_SPEC_compress_level;
 		return 0;
 	}
 
 	str += 3;
 
 	if (str[0] != ':') {
-		f2fs_info(sbi, "wrong format, e.g. <alg_name>:<compr_level>");
+		f2fs_info(NULL, "wrong format, e.g. <alg_name>:<compr_level>");
 		return -EINVAL;
 	}
 	if (kstrtouint(str + 1, 10, &level))
 		return -EINVAL;
 
 	if (!f2fs_is_compress_level_valid(COMPRESS_LZ4, level)) {
-		f2fs_info(sbi, "invalid lz4hc compress level: %d", level);
+		f2fs_info(NULL, "invalid lz4hc compress level: %d", level);
 		return -EINVAL;
 	}
 
-	F2FS_OPTION(sbi).compress_level = level;
+	F2FS_CTX_INFO(ctx).compress_level = level;
+	ctx->spec_mask |= F2FS_SPEC_compress_level;
 	return 0;
 #else
 	if (strlen(str) == 3) {
-		F2FS_OPTION(sbi).compress_level = 0;
+		F2FS_CTX_INFO(ctx).compress_level = 0;
+		ctx->spec_mask |= F2FS_SPEC_compress_level;
 		return 0;
 	}
-	f2fs_info(sbi, "kernel doesn't support lz4hc compression");
+	f2fs_info(NULL, "kernel doesn't support lz4hc compression");
 	return -EINVAL;
 #endif
 }
 #endif
 
 #ifdef CONFIG_F2FS_FS_ZSTD
-static int f2fs_set_zstd_level(struct f2fs_sb_info *sbi, const char *str)
+static int f2fs_set_zstd_level(struct f2fs_fs_context *ctx, const char *str)
 {
 	int level;
 	int len = 4;
 
 	if (strlen(str) == len) {
-		F2FS_OPTION(sbi).compress_level = F2FS_ZSTD_DEFAULT_CLEVEL;
+		F2FS_CTX_INFO(ctx).compress_level = F2FS_ZSTD_DEFAULT_CLEVEL;
+		ctx->spec_mask |= F2FS_SPEC_compress_level;
 		return 0;
 	}
 
 	str += len;
 
 	if (str[0] != ':') {
-		f2fs_info(sbi, "wrong format, e.g. <alg_name>:<compr_level>");
+		f2fs_info(NULL, "wrong format, e.g. <alg_name>:<compr_level>");
 		return -EINVAL;
 	}
 	if (kstrtoint(str + 1, 10, &level))
···
 
 	/* f2fs does not support negative compress level now */
 	if (level < 0) {
-		f2fs_info(sbi, "do not support negative compress level: %d", level);
+		f2fs_info(NULL, "do not support negative compress level: %d", level);
 		return -ERANGE;
 	}
 
 	if (!f2fs_is_compress_level_valid(COMPRESS_ZSTD, level)) {
-		f2fs_info(sbi, "invalid zstd compress level: %d", level);
+		f2fs_info(NULL, "invalid zstd compress level: %d", level);
 		return -EINVAL;
 	}
 
-	F2FS_OPTION(sbi).compress_level = level;
+	F2FS_CTX_INFO(ctx).compress_level = level;
+	ctx->spec_mask |= F2FS_SPEC_compress_level;
 	return 0;
 }
 #endif
 #endif
 
-static int parse_options(struct f2fs_sb_info *sbi, char *options, bool is_remount)
+static int f2fs_parse_param(struct fs_context *fc, struct fs_parameter *param)
 {
-	substring_t args[MAX_OPT_ARGS];
+	struct f2fs_fs_context *ctx = fc->fs_private;
 #ifdef CONFIG_F2FS_FS_COMPRESSION
 	unsigned char (*ext)[F2FS_EXTENSION_LEN];
 	unsigned char (*noext)[F2FS_EXTENSION_LEN];
 	int ext_cnt, noext_cnt;
+	char *name;
 #endif
-	char *p, *name;
-	int arg = 0;
-	kuid_t uid;
-	kgid_t gid;
-	int ret;
+	substring_t args[MAX_OPT_ARGS];
+	struct fs_parse_result result;
+	int token, ret, arg;
 
-	if (!options)
-		return 0;
+	token = fs_parse(fc, f2fs_param_specs, param, &result);
+	if (token < 0)
+		return token;
 
-	while ((p = strsep(&options, ",")) != NULL) {
-		int token;
+	switch (token) {
+	case Opt_gc_background:
+		F2FS_CTX_INFO(ctx).bggc_mode = result.uint_32;
+		ctx->spec_mask |= F2FS_SPEC_background_gc;
+		break;
+	case Opt_disable_roll_forward:
+		ctx_set_opt(ctx, F2FS_MOUNT_DISABLE_ROLL_FORWARD);
+		break;
+	case Opt_norecovery:
+		/* requires ro mount, checked in f2fs_validate_options */
+		ctx_set_opt(ctx, F2FS_MOUNT_NORECOVERY);
+		break;
+	case Opt_discard:
+		if (result.negated)
+			ctx_clear_opt(ctx, F2FS_MOUNT_DISCARD);
+		else
+			ctx_set_opt(ctx, F2FS_MOUNT_DISCARD);
+		break;
+	case Opt_noheap:
+	case Opt_heap:
+		f2fs_warn(NULL, "heap/no_heap options were deprecated");
+		break;
+#ifdef CONFIG_F2FS_FS_XATTR
+	case Opt_user_xattr:
+		if (result.negated)
+			ctx_clear_opt(ctx, F2FS_MOUNT_XATTR_USER);
+		else
+			ctx_set_opt(ctx, F2FS_MOUNT_XATTR_USER);
+		break;
+	case Opt_inline_xattr:
+		if (result.negated)
+			ctx_clear_opt(ctx, F2FS_MOUNT_INLINE_XATTR);
+		else
+			ctx_set_opt(ctx, F2FS_MOUNT_INLINE_XATTR);
+		break;
+	case Opt_inline_xattr_size:
+		if (result.int_32 < MIN_INLINE_XATTR_SIZE ||
+		    result.int_32 > MAX_INLINE_XATTR_SIZE) {
+			f2fs_err(NULL, "inline xattr size is out of range: %u ~ %u",
+				 (u32)MIN_INLINE_XATTR_SIZE, (u32)MAX_INLINE_XATTR_SIZE);
+			return -EINVAL;
+		}
+		ctx_set_opt(ctx, F2FS_MOUNT_INLINE_XATTR_SIZE);
+		F2FS_CTX_INFO(ctx).inline_xattr_size = result.int_32;
+		ctx->spec_mask |= F2FS_SPEC_inline_xattr_size;
+		break;
+#else
+	case Opt_user_xattr:
+	case Opt_inline_xattr:
+	case Opt_inline_xattr_size:
+		f2fs_info(NULL, "%s options not supported", param->key);
+		break;
+#endif
+#ifdef CONFIG_F2FS_FS_POSIX_ACL
+	case Opt_acl:
+		if (result.negated)
+			ctx_clear_opt(ctx, F2FS_MOUNT_POSIX_ACL);
+		else
+			ctx_set_opt(ctx, F2FS_MOUNT_POSIX_ACL);
+		break;
+#else
+	case Opt_acl:
+		f2fs_info(NULL, "%s options not supported", param->key);
+		break;
+#endif
+	case Opt_active_logs:
+		if (result.int_32 != 2 && result.int_32 != 4 &&
+		    result.int_32 != NR_CURSEG_PERSIST_TYPE)
+			return -EINVAL;
+		ctx->spec_mask |= F2FS_SPEC_active_logs;
+		F2FS_CTX_INFO(ctx).active_logs = result.int_32;
+		break;
+	case Opt_disable_ext_identify:
+		ctx_set_opt(ctx, F2FS_MOUNT_DISABLE_EXT_IDENTIFY);
+		break;
+	case Opt_inline_data:
+		if (result.negated)
+			ctx_clear_opt(ctx, F2FS_MOUNT_INLINE_DATA);
+		else
+			ctx_set_opt(ctx, F2FS_MOUNT_INLINE_DATA);
+		break;
+	case Opt_inline_dentry:
+		if (result.negated)
+			ctx_clear_opt(ctx, F2FS_MOUNT_INLINE_DENTRY);
+		else
+			ctx_set_opt(ctx, F2FS_MOUNT_INLINE_DENTRY);
+		break;
+	case Opt_flush_merge:
+		if (result.negated)
+			ctx_clear_opt(ctx, F2FS_MOUNT_FLUSH_MERGE);
+		else
+			ctx_set_opt(ctx, F2FS_MOUNT_FLUSH_MERGE);
+		break;
+	case Opt_barrier:
+		if (result.negated)
+			ctx_set_opt(ctx, F2FS_MOUNT_NOBARRIER);
+		else
+			ctx_clear_opt(ctx, F2FS_MOUNT_NOBARRIER);
+		break;
+	case Opt_fastboot:
+		ctx_set_opt(ctx, F2FS_MOUNT_FASTBOOT);
+		break;
+	case Opt_extent_cache:
+		if (result.negated)
+			ctx_clear_opt(ctx, F2FS_MOUNT_READ_EXTENT_CACHE);
+		else
+			ctx_set_opt(ctx, F2FS_MOUNT_READ_EXTENT_CACHE);
+		break;
+	case Opt_data_flush:
+		ctx_set_opt(ctx, F2FS_MOUNT_DATA_FLUSH);
+		break;
+	case Opt_reserve_root:
+		ctx_set_opt(ctx, F2FS_MOUNT_RESERVE_ROOT);
+		F2FS_CTX_INFO(ctx).root_reserved_blocks = result.uint_32;
+		ctx->spec_mask |= F2FS_SPEC_reserve_root;
+		break;
+	case Opt_resuid:
+		F2FS_CTX_INFO(ctx).s_resuid = result.uid;
+		ctx->spec_mask |= F2FS_SPEC_resuid;
+		break;
+	case Opt_resgid:
+		F2FS_CTX_INFO(ctx).s_resgid = result.gid;
+		ctx->spec_mask |= F2FS_SPEC_resgid;
+		break;
+	case Opt_mode:
+		F2FS_CTX_INFO(ctx).fs_mode = result.uint_32;
+		ctx->spec_mask |= F2FS_SPEC_mode;
+		break;
+#ifdef CONFIG_F2FS_FAULT_INJECTION
+	case Opt_fault_injection:
+		F2FS_CTX_INFO(ctx).fault_info.inject_rate = result.int_32;
+		ctx->spec_mask |= F2FS_SPEC_fault_injection;
+		ctx_set_opt(ctx, F2FS_MOUNT_FAULT_INJECTION);
+		break;
 
-		if (!*p)
-			continue;
+	case Opt_fault_type:
+		if (result.uint_32 > BIT(FAULT_MAX))
+			return -EINVAL;
+		F2FS_CTX_INFO(ctx).fault_info.inject_type = result.uint_32;
+		ctx->spec_mask |= F2FS_SPEC_fault_type;
+		ctx_set_opt(ctx, F2FS_MOUNT_FAULT_INJECTION);
+		break;
+#else
+	case Opt_fault_injection:
+	case Opt_fault_type:
+		f2fs_info(NULL, "%s options not supported", param->key);
+		break;
+#endif
+	case Opt_lazytime:
+		if (result.negated)
+			ctx_clear_opt(ctx, F2FS_MOUNT_LAZYTIME);
+		else
+			ctx_set_opt(ctx, F2FS_MOUNT_LAZYTIME);
+		break;
+#ifdef CONFIG_QUOTA
+	case Opt_quota:
+		if (result.negated) {
+			ctx_clear_opt(ctx, F2FS_MOUNT_QUOTA);
+			ctx_clear_opt(ctx, F2FS_MOUNT_USRQUOTA);
+			ctx_clear_opt(ctx, F2FS_MOUNT_GRPQUOTA);
+			ctx_clear_opt(ctx, F2FS_MOUNT_PRJQUOTA);
+		} else
+			ctx_set_opt(ctx, F2FS_MOUNT_USRQUOTA);
+		break;
+	case Opt_usrquota:
+		ctx_set_opt(ctx, F2FS_MOUNT_USRQUOTA);
+		break;
+	case Opt_grpquota:
+		ctx_set_opt(ctx, F2FS_MOUNT_GRPQUOTA);
+		break;
+	case Opt_prjquota:
+		ctx_set_opt(ctx, F2FS_MOUNT_PRJQUOTA);
+		break;
+	case Opt_usrjquota:
+		if (!*param->string)
+			ret = f2fs_unnote_qf_name(fc, USRQUOTA);
+		else
+			ret = f2fs_note_qf_name(fc, USRQUOTA, param);
+		if (ret)
+			return ret;
+		break;
+	case Opt_grpjquota:
+		if (!*param->string)
+			ret = f2fs_unnote_qf_name(fc, GRPQUOTA);
+		else
+			ret = f2fs_note_qf_name(fc, GRPQUOTA, param);
+		if (ret)
+			return ret;
+		break;
+	case Opt_prjjquota:
+		if (!*param->string)
+			ret = f2fs_unnote_qf_name(fc, PRJQUOTA);
+		else
+			ret = f2fs_note_qf_name(fc, PRJQUOTA, param);
+		if (ret)
+			return ret;
+		break;
+	case Opt_jqfmt:
+		F2FS_CTX_INFO(ctx).s_jquota_fmt = result.int_32;
+		ctx->spec_mask |= F2FS_SPEC_jqfmt;
+		break;
+#else
+	case Opt_quota:
+	case Opt_usrquota:
+	case Opt_grpquota:
+	case Opt_prjquota:
+	case Opt_usrjquota:
+	case Opt_grpjquota:
+	case Opt_prjjquota:
+		f2fs_info(NULL, "quota operations not supported");
+		break;
+#endif
+	case Opt_alloc:
+		F2FS_CTX_INFO(ctx).alloc_mode = result.uint_32;
+		ctx->spec_mask |= F2FS_SPEC_alloc_mode;
+		break;
+	case Opt_fsync:
+		F2FS_CTX_INFO(ctx).fsync_mode = result.uint_32;
+		ctx->spec_mask |= F2FS_SPEC_fsync_mode;
+		break;
+	case Opt_test_dummy_encryption:
+		ret = f2fs_parse_test_dummy_encryption(param, ctx);
+		if (ret)
+			return ret;
+		break;
+	case Opt_inlinecrypt:
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+		ctx_set_opt(ctx, F2FS_MOUNT_INLINECRYPT);
+#else
+		f2fs_info(NULL, "inline encryption not supported");
+#endif
+		break;
+	case Opt_checkpoint:
 		/*
 		 * Initialize args struct so we know whether arg was
 		 * found; some options take optional arguments.
735 934 */ 736 - args[0].to = args[0].from = NULL; 737 - token = match_token(p, f2fs_tokens, args); 935 + args[0].from = args[0].to = NULL; 936 + arg = 0; 738 937 938 + /* revert to match_table for checkpoint= options */ 939 + token = match_token(param->string, f2fs_checkpoint_tokens, args); 739 940 switch (token) { 740 - case Opt_gc_background: 741 - name = match_strdup(&args[0]); 742 - 743 - if (!name) 744 - return -ENOMEM; 745 - if (!strcmp(name, "on")) { 746 - F2FS_OPTION(sbi).bggc_mode = BGGC_MODE_ON; 747 - } else if (!strcmp(name, "off")) { 748 - if (f2fs_sb_has_blkzoned(sbi)) { 749 - f2fs_warn(sbi, "zoned devices need bggc"); 750 - kfree(name); 751 - return -EINVAL; 752 - } 753 - F2FS_OPTION(sbi).bggc_mode = BGGC_MODE_OFF; 754 - } else if (!strcmp(name, "sync")) { 755 - F2FS_OPTION(sbi).bggc_mode = BGGC_MODE_SYNC; 756 - } else { 757 - kfree(name); 758 - return -EINVAL; 759 - } 760 - kfree(name); 761 - break; 762 - case Opt_disable_roll_forward: 763 - set_opt(sbi, DISABLE_ROLL_FORWARD); 764 - break; 765 - case Opt_norecovery: 766 - /* requires ro mount, checked in f2fs_default_check */ 767 - set_opt(sbi, NORECOVERY); 768 - break; 769 - case Opt_discard: 770 - if (!f2fs_hw_support_discard(sbi)) { 771 - f2fs_warn(sbi, "device does not support discard"); 772 - break; 773 - } 774 - set_opt(sbi, DISCARD); 775 - break; 776 - case Opt_nodiscard: 777 - if (f2fs_hw_should_discard(sbi)) { 778 - f2fs_warn(sbi, "discard is required for zoned block devices"); 779 - return -EINVAL; 780 - } 781 - clear_opt(sbi, DISCARD); 782 - break; 783 - case Opt_noheap: 784 - case Opt_heap: 785 - f2fs_warn(sbi, "heap/no_heap options were deprecated"); 786 - break; 787 - #ifdef CONFIG_F2FS_FS_XATTR 788 - case Opt_user_xattr: 789 - set_opt(sbi, XATTR_USER); 790 - break; 791 - case Opt_nouser_xattr: 792 - clear_opt(sbi, XATTR_USER); 793 - break; 794 - case Opt_inline_xattr: 795 - set_opt(sbi, INLINE_XATTR); 796 - break; 797 - case Opt_noinline_xattr: 798 - clear_opt(sbi, INLINE_XATTR); 799 
- break; 800 - case Opt_inline_xattr_size: 801 - if (args->from && match_int(args, &arg)) 802 - return -EINVAL; 803 - set_opt(sbi, INLINE_XATTR_SIZE); 804 - F2FS_OPTION(sbi).inline_xattr_size = arg; 805 - break; 806 - #else 807 - case Opt_user_xattr: 808 - case Opt_nouser_xattr: 809 - case Opt_inline_xattr: 810 - case Opt_noinline_xattr: 811 - case Opt_inline_xattr_size: 812 - f2fs_info(sbi, "xattr options not supported"); 813 - break; 814 - #endif 815 - #ifdef CONFIG_F2FS_FS_POSIX_ACL 816 - case Opt_acl: 817 - set_opt(sbi, POSIX_ACL); 818 - break; 819 - case Opt_noacl: 820 - clear_opt(sbi, POSIX_ACL); 821 - break; 822 - #else 823 - case Opt_acl: 824 - case Opt_noacl: 825 - f2fs_info(sbi, "acl options not supported"); 826 - break; 827 - #endif 828 - case Opt_active_logs: 829 - if (args->from && match_int(args, &arg)) 830 - return -EINVAL; 831 - if (arg != 2 && arg != 4 && 832 - arg != NR_CURSEG_PERSIST_TYPE) 833 - return -EINVAL; 834 - F2FS_OPTION(sbi).active_logs = arg; 835 - break; 836 - case Opt_disable_ext_identify: 837 - set_opt(sbi, DISABLE_EXT_IDENTIFY); 838 - break; 839 - case Opt_inline_data: 840 - set_opt(sbi, INLINE_DATA); 841 - break; 842 - case Opt_inline_dentry: 843 - set_opt(sbi, INLINE_DENTRY); 844 - break; 845 - case Opt_noinline_dentry: 846 - clear_opt(sbi, INLINE_DENTRY); 847 - break; 848 - case Opt_flush_merge: 849 - set_opt(sbi, FLUSH_MERGE); 850 - break; 851 - case Opt_noflush_merge: 852 - clear_opt(sbi, FLUSH_MERGE); 853 - break; 854 - case Opt_nobarrier: 855 - set_opt(sbi, NOBARRIER); 856 - break; 857 - case Opt_barrier: 858 - clear_opt(sbi, NOBARRIER); 859 - break; 860 - case Opt_fastboot: 861 - set_opt(sbi, FASTBOOT); 862 - break; 863 - case Opt_extent_cache: 864 - set_opt(sbi, READ_EXTENT_CACHE); 865 - break; 866 - case Opt_noextent_cache: 867 - if (f2fs_sb_has_device_alias(sbi)) { 868 - f2fs_err(sbi, "device aliasing requires extent cache"); 869 - return -EINVAL; 870 - } 871 - clear_opt(sbi, READ_EXTENT_CACHE); 872 - break; 873 - case 
Opt_noinline_data: 874 - clear_opt(sbi, INLINE_DATA); 875 - break; 876 - case Opt_data_flush: 877 - set_opt(sbi, DATA_FLUSH); 878 - break; 879 - case Opt_reserve_root: 880 - if (args->from && match_int(args, &arg)) 881 - return -EINVAL; 882 - if (test_opt(sbi, RESERVE_ROOT)) { 883 - f2fs_info(sbi, "Preserve previous reserve_root=%u", 884 - F2FS_OPTION(sbi).root_reserved_blocks); 885 - } else { 886 - F2FS_OPTION(sbi).root_reserved_blocks = arg; 887 - set_opt(sbi, RESERVE_ROOT); 888 - } 889 - break; 890 - case Opt_resuid: 891 - if (args->from && match_int(args, &arg)) 892 - return -EINVAL; 893 - uid = make_kuid(current_user_ns(), arg); 894 - if (!uid_valid(uid)) { 895 - f2fs_err(sbi, "Invalid uid value %d", arg); 896 - return -EINVAL; 897 - } 898 - F2FS_OPTION(sbi).s_resuid = uid; 899 - break; 900 - case Opt_resgid: 901 - if (args->from && match_int(args, &arg)) 902 - return -EINVAL; 903 - gid = make_kgid(current_user_ns(), arg); 904 - if (!gid_valid(gid)) { 905 - f2fs_err(sbi, "Invalid gid value %d", arg); 906 - return -EINVAL; 907 - } 908 - F2FS_OPTION(sbi).s_resgid = gid; 909 - break; 910 - case Opt_mode: 911 - name = match_strdup(&args[0]); 912 - 913 - if (!name) 914 - return -ENOMEM; 915 - if (!strcmp(name, "adaptive")) { 916 - F2FS_OPTION(sbi).fs_mode = FS_MODE_ADAPTIVE; 917 - } else if (!strcmp(name, "lfs")) { 918 - F2FS_OPTION(sbi).fs_mode = FS_MODE_LFS; 919 - } else if (!strcmp(name, "fragment:segment")) { 920 - F2FS_OPTION(sbi).fs_mode = FS_MODE_FRAGMENT_SEG; 921 - } else if (!strcmp(name, "fragment:block")) { 922 - F2FS_OPTION(sbi).fs_mode = FS_MODE_FRAGMENT_BLK; 923 - } else { 924 - kfree(name); 925 - return -EINVAL; 926 - } 927 - kfree(name); 928 - break; 929 - #ifdef CONFIG_F2FS_FAULT_INJECTION 930 - case Opt_fault_injection: 931 - if (args->from && match_int(args, &arg)) 932 - return -EINVAL; 933 - if (f2fs_build_fault_attr(sbi, arg, 0, FAULT_RATE)) 934 - return -EINVAL; 935 - set_opt(sbi, FAULT_INJECTION); 936 - break; 937 - 938 - case Opt_fault_type: 
939 - if (args->from && match_int(args, &arg)) 940 - return -EINVAL; 941 - if (f2fs_build_fault_attr(sbi, 0, arg, FAULT_TYPE)) 942 - return -EINVAL; 943 - set_opt(sbi, FAULT_INJECTION); 944 - break; 945 - #else 946 - case Opt_fault_injection: 947 - case Opt_fault_type: 948 - f2fs_info(sbi, "fault injection options not supported"); 949 - break; 950 - #endif 951 - case Opt_lazytime: 952 - set_opt(sbi, LAZYTIME); 953 - break; 954 - case Opt_nolazytime: 955 - clear_opt(sbi, LAZYTIME); 956 - break; 957 - #ifdef CONFIG_QUOTA 958 - case Opt_quota: 959 - case Opt_usrquota: 960 - set_opt(sbi, USRQUOTA); 961 - break; 962 - case Opt_grpquota: 963 - set_opt(sbi, GRPQUOTA); 964 - break; 965 - case Opt_prjquota: 966 - set_opt(sbi, PRJQUOTA); 967 - break; 968 - case Opt_usrjquota: 969 - ret = f2fs_set_qf_name(sbi, USRQUOTA, &args[0]); 970 - if (ret) 971 - return ret; 972 - break; 973 - case Opt_grpjquota: 974 - ret = f2fs_set_qf_name(sbi, GRPQUOTA, &args[0]); 975 - if (ret) 976 - return ret; 977 - break; 978 - case Opt_prjjquota: 979 - ret = f2fs_set_qf_name(sbi, PRJQUOTA, &args[0]); 980 - if (ret) 981 - return ret; 982 - break; 983 - case Opt_offusrjquota: 984 - ret = f2fs_clear_qf_name(sbi, USRQUOTA); 985 - if (ret) 986 - return ret; 987 - break; 988 - case Opt_offgrpjquota: 989 - ret = f2fs_clear_qf_name(sbi, GRPQUOTA); 990 - if (ret) 991 - return ret; 992 - break; 993 - case Opt_offprjjquota: 994 - ret = f2fs_clear_qf_name(sbi, PRJQUOTA); 995 - if (ret) 996 - return ret; 997 - break; 998 - case Opt_jqfmt_vfsold: 999 - F2FS_OPTION(sbi).s_jquota_fmt = QFMT_VFS_OLD; 1000 - break; 1001 - case Opt_jqfmt_vfsv0: 1002 - F2FS_OPTION(sbi).s_jquota_fmt = QFMT_VFS_V0; 1003 - break; 1004 - case Opt_jqfmt_vfsv1: 1005 - F2FS_OPTION(sbi).s_jquota_fmt = QFMT_VFS_V1; 1006 - break; 1007 - case Opt_noquota: 1008 - clear_opt(sbi, QUOTA); 1009 - clear_opt(sbi, USRQUOTA); 1010 - clear_opt(sbi, GRPQUOTA); 1011 - clear_opt(sbi, PRJQUOTA); 1012 - break; 1013 - #else 1014 - case Opt_quota: 1015 - case 
Opt_usrquota: 1016 - case Opt_grpquota: 1017 - case Opt_prjquota: 1018 - case Opt_usrjquota: 1019 - case Opt_grpjquota: 1020 - case Opt_prjjquota: 1021 - case Opt_offusrjquota: 1022 - case Opt_offgrpjquota: 1023 - case Opt_offprjjquota: 1024 - case Opt_jqfmt_vfsold: 1025 - case Opt_jqfmt_vfsv0: 1026 - case Opt_jqfmt_vfsv1: 1027 - case Opt_noquota: 1028 - f2fs_info(sbi, "quota operations not supported"); 1029 - break; 1030 - #endif 1031 - case Opt_alloc: 1032 - name = match_strdup(&args[0]); 1033 - if (!name) 1034 - return -ENOMEM; 1035 - 1036 - if (!strcmp(name, "default")) { 1037 - F2FS_OPTION(sbi).alloc_mode = ALLOC_MODE_DEFAULT; 1038 - } else if (!strcmp(name, "reuse")) { 1039 - F2FS_OPTION(sbi).alloc_mode = ALLOC_MODE_REUSE; 1040 - } else { 1041 - kfree(name); 1042 - return -EINVAL; 1043 - } 1044 - kfree(name); 1045 - break; 1046 - case Opt_fsync: 1047 - name = match_strdup(&args[0]); 1048 - if (!name) 1049 - return -ENOMEM; 1050 - if (!strcmp(name, "posix")) { 1051 - F2FS_OPTION(sbi).fsync_mode = FSYNC_MODE_POSIX; 1052 - } else if (!strcmp(name, "strict")) { 1053 - F2FS_OPTION(sbi).fsync_mode = FSYNC_MODE_STRICT; 1054 - } else if (!strcmp(name, "nobarrier")) { 1055 - F2FS_OPTION(sbi).fsync_mode = 1056 - FSYNC_MODE_NOBARRIER; 1057 - } else { 1058 - kfree(name); 1059 - return -EINVAL; 1060 - } 1061 - kfree(name); 1062 - break; 1063 - case Opt_test_dummy_encryption: 1064 - ret = f2fs_set_test_dummy_encryption(sbi, p, &args[0], 1065 - is_remount); 1066 - if (ret) 1067 - return ret; 1068 - break; 1069 - case Opt_inlinecrypt: 1070 - #ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT 1071 - set_opt(sbi, INLINECRYPT); 1072 - #else 1073 - f2fs_info(sbi, "inline encryption not supported"); 1074 - #endif 1075 - break; 1076 941 case Opt_checkpoint_disable_cap_perc: 1077 942 if (args->from && match_int(args, &arg)) 1078 943 return -EINVAL; 1079 944 if (arg < 0 || arg > 100) 1080 945 return -EINVAL; 1081 - F2FS_OPTION(sbi).unusable_cap_perc = arg; 1082 - set_opt(sbi, 
DISABLE_CHECKPOINT); 946 + F2FS_CTX_INFO(ctx).unusable_cap_perc = arg; 947 + ctx->spec_mask |= F2FS_SPEC_checkpoint_disable_cap_perc; 948 + ctx_set_opt(ctx, F2FS_MOUNT_DISABLE_CHECKPOINT); 1083 949 break; 1084 950 case Opt_checkpoint_disable_cap: 1085 951 if (args->from && match_int(args, &arg)) 1086 952 return -EINVAL; 1087 - F2FS_OPTION(sbi).unusable_cap = arg; 1088 - set_opt(sbi, DISABLE_CHECKPOINT); 953 + F2FS_CTX_INFO(ctx).unusable_cap = arg; 954 + ctx->spec_mask |= F2FS_SPEC_checkpoint_disable_cap; 955 + ctx_set_opt(ctx, F2FS_MOUNT_DISABLE_CHECKPOINT); 1089 956 break; 1090 957 case Opt_checkpoint_disable: 1091 - set_opt(sbi, DISABLE_CHECKPOINT); 958 + ctx_set_opt(ctx, F2FS_MOUNT_DISABLE_CHECKPOINT); 1092 959 break; 1093 960 case Opt_checkpoint_enable: 1094 - clear_opt(sbi, DISABLE_CHECKPOINT); 1095 - break; 1096 - case Opt_checkpoint_merge: 1097 - set_opt(sbi, MERGE_CHECKPOINT); 1098 - break; 1099 - case Opt_nocheckpoint_merge: 1100 - clear_opt(sbi, MERGE_CHECKPOINT); 1101 - break; 1102 - #ifdef CONFIG_F2FS_FS_COMPRESSION 1103 - case Opt_compress_algorithm: 1104 - if (!f2fs_sb_has_compression(sbi)) { 1105 - f2fs_info(sbi, "Image doesn't support compression"); 1106 - break; 1107 - } 1108 - name = match_strdup(&args[0]); 1109 - if (!name) 1110 - return -ENOMEM; 1111 - if (!strcmp(name, "lzo")) { 1112 - #ifdef CONFIG_F2FS_FS_LZO 1113 - F2FS_OPTION(sbi).compress_level = 0; 1114 - F2FS_OPTION(sbi).compress_algorithm = 1115 - COMPRESS_LZO; 1116 - #else 1117 - f2fs_info(sbi, "kernel doesn't support lzo compression"); 1118 - #endif 1119 - } else if (!strncmp(name, "lz4", 3)) { 1120 - #ifdef CONFIG_F2FS_FS_LZ4 1121 - ret = f2fs_set_lz4hc_level(sbi, name); 1122 - if (ret) { 1123 - kfree(name); 1124 - return -EINVAL; 1125 - } 1126 - F2FS_OPTION(sbi).compress_algorithm = 1127 - COMPRESS_LZ4; 1128 - #else 1129 - f2fs_info(sbi, "kernel doesn't support lz4 compression"); 1130 - #endif 1131 - } else if (!strncmp(name, "zstd", 4)) { 1132 - #ifdef CONFIG_F2FS_FS_ZSTD 1133 - 
ret = f2fs_set_zstd_level(sbi, name); 1134 - if (ret) { 1135 - kfree(name); 1136 - return -EINVAL; 1137 - } 1138 - F2FS_OPTION(sbi).compress_algorithm = 1139 - COMPRESS_ZSTD; 1140 - #else 1141 - f2fs_info(sbi, "kernel doesn't support zstd compression"); 1142 - #endif 1143 - } else if (!strcmp(name, "lzo-rle")) { 1144 - #ifdef CONFIG_F2FS_FS_LZORLE 1145 - F2FS_OPTION(sbi).compress_level = 0; 1146 - F2FS_OPTION(sbi).compress_algorithm = 1147 - COMPRESS_LZORLE; 1148 - #else 1149 - f2fs_info(sbi, "kernel doesn't support lzorle compression"); 1150 - #endif 1151 - } else { 1152 - kfree(name); 1153 - return -EINVAL; 1154 - } 1155 - kfree(name); 1156 - break; 1157 - case Opt_compress_log_size: 1158 - if (!f2fs_sb_has_compression(sbi)) { 1159 - f2fs_info(sbi, "Image doesn't support compression"); 1160 - break; 1161 - } 1162 - if (args->from && match_int(args, &arg)) 1163 - return -EINVAL; 1164 - if (arg < MIN_COMPRESS_LOG_SIZE || 1165 - arg > MAX_COMPRESS_LOG_SIZE) { 1166 - f2fs_err(sbi, 1167 - "Compress cluster log size is out of range"); 1168 - return -EINVAL; 1169 - } 1170 - F2FS_OPTION(sbi).compress_log_size = arg; 1171 - break; 1172 - case Opt_compress_extension: 1173 - if (!f2fs_sb_has_compression(sbi)) { 1174 - f2fs_info(sbi, "Image doesn't support compression"); 1175 - break; 1176 - } 1177 - name = match_strdup(&args[0]); 1178 - if (!name) 1179 - return -ENOMEM; 1180 - 1181 - ext = F2FS_OPTION(sbi).extensions; 1182 - ext_cnt = F2FS_OPTION(sbi).compress_ext_cnt; 1183 - 1184 - if (strlen(name) >= F2FS_EXTENSION_LEN || 1185 - ext_cnt >= COMPRESS_EXT_NUM) { 1186 - f2fs_err(sbi, 1187 - "invalid extension length/number"); 1188 - kfree(name); 1189 - return -EINVAL; 1190 - } 1191 - 1192 - if (is_compress_extension_exist(sbi, name, true)) { 1193 - kfree(name); 1194 - break; 1195 - } 1196 - 1197 - ret = strscpy(ext[ext_cnt], name); 1198 - if (ret < 0) { 1199 - kfree(name); 1200 - return ret; 1201 - } 1202 - F2FS_OPTION(sbi).compress_ext_cnt++; 1203 - kfree(name); 1204 - 
break; 1205 - case Opt_nocompress_extension: 1206 - if (!f2fs_sb_has_compression(sbi)) { 1207 - f2fs_info(sbi, "Image doesn't support compression"); 1208 - break; 1209 - } 1210 - name = match_strdup(&args[0]); 1211 - if (!name) 1212 - return -ENOMEM; 1213 - 1214 - noext = F2FS_OPTION(sbi).noextensions; 1215 - noext_cnt = F2FS_OPTION(sbi).nocompress_ext_cnt; 1216 - 1217 - if (strlen(name) >= F2FS_EXTENSION_LEN || 1218 - noext_cnt >= COMPRESS_EXT_NUM) { 1219 - f2fs_err(sbi, 1220 - "invalid extension length/number"); 1221 - kfree(name); 1222 - return -EINVAL; 1223 - } 1224 - 1225 - if (is_compress_extension_exist(sbi, name, false)) { 1226 - kfree(name); 1227 - break; 1228 - } 1229 - 1230 - ret = strscpy(noext[noext_cnt], name); 1231 - if (ret < 0) { 1232 - kfree(name); 1233 - return ret; 1234 - } 1235 - F2FS_OPTION(sbi).nocompress_ext_cnt++; 1236 - kfree(name); 1237 - break; 1238 - case Opt_compress_chksum: 1239 - if (!f2fs_sb_has_compression(sbi)) { 1240 - f2fs_info(sbi, "Image doesn't support compression"); 1241 - break; 1242 - } 1243 - F2FS_OPTION(sbi).compress_chksum = true; 1244 - break; 1245 - case Opt_compress_mode: 1246 - if (!f2fs_sb_has_compression(sbi)) { 1247 - f2fs_info(sbi, "Image doesn't support compression"); 1248 - break; 1249 - } 1250 - name = match_strdup(&args[0]); 1251 - if (!name) 1252 - return -ENOMEM; 1253 - if (!strcmp(name, "fs")) { 1254 - F2FS_OPTION(sbi).compress_mode = COMPR_MODE_FS; 1255 - } else if (!strcmp(name, "user")) { 1256 - F2FS_OPTION(sbi).compress_mode = COMPR_MODE_USER; 1257 - } else { 1258 - kfree(name); 1259 - return -EINVAL; 1260 - } 1261 - kfree(name); 1262 - break; 1263 - case Opt_compress_cache: 1264 - if (!f2fs_sb_has_compression(sbi)) { 1265 - f2fs_info(sbi, "Image doesn't support compression"); 1266 - break; 1267 - } 1268 - set_opt(sbi, COMPRESS_CACHE); 1269 - break; 1270 - #else 1271 - case Opt_compress_algorithm: 1272 - case Opt_compress_log_size: 1273 - case Opt_compress_extension: 1274 - case 
Opt_nocompress_extension: 1275 - case Opt_compress_chksum: 1276 - case Opt_compress_mode: 1277 - case Opt_compress_cache: 1278 - f2fs_info(sbi, "compression options not supported"); 1279 - break; 1280 - #endif 1281 - case Opt_atgc: 1282 - set_opt(sbi, ATGC); 1283 - break; 1284 - case Opt_gc_merge: 1285 - set_opt(sbi, GC_MERGE); 1286 - break; 1287 - case Opt_nogc_merge: 1288 - clear_opt(sbi, GC_MERGE); 1289 - break; 1290 - case Opt_discard_unit: 1291 - name = match_strdup(&args[0]); 1292 - if (!name) 1293 - return -ENOMEM; 1294 - if (!strcmp(name, "block")) { 1295 - F2FS_OPTION(sbi).discard_unit = 1296 - DISCARD_UNIT_BLOCK; 1297 - } else if (!strcmp(name, "segment")) { 1298 - F2FS_OPTION(sbi).discard_unit = 1299 - DISCARD_UNIT_SEGMENT; 1300 - } else if (!strcmp(name, "section")) { 1301 - F2FS_OPTION(sbi).discard_unit = 1302 - DISCARD_UNIT_SECTION; 1303 - } else { 1304 - kfree(name); 1305 - return -EINVAL; 1306 - } 1307 - kfree(name); 1308 - break; 1309 - case Opt_memory_mode: 1310 - name = match_strdup(&args[0]); 1311 - if (!name) 1312 - return -ENOMEM; 1313 - if (!strcmp(name, "normal")) { 1314 - F2FS_OPTION(sbi).memory_mode = 1315 - MEMORY_MODE_NORMAL; 1316 - } else if (!strcmp(name, "low")) { 1317 - F2FS_OPTION(sbi).memory_mode = 1318 - MEMORY_MODE_LOW; 1319 - } else { 1320 - kfree(name); 1321 - return -EINVAL; 1322 - } 1323 - kfree(name); 1324 - break; 1325 - case Opt_age_extent_cache: 1326 - set_opt(sbi, AGE_EXTENT_CACHE); 1327 - break; 1328 - case Opt_errors: 1329 - name = match_strdup(&args[0]); 1330 - if (!name) 1331 - return -ENOMEM; 1332 - if (!strcmp(name, "remount-ro")) { 1333 - F2FS_OPTION(sbi).errors = 1334 - MOUNT_ERRORS_READONLY; 1335 - } else if (!strcmp(name, "continue")) { 1336 - F2FS_OPTION(sbi).errors = 1337 - MOUNT_ERRORS_CONTINUE; 1338 - } else if (!strcmp(name, "panic")) { 1339 - F2FS_OPTION(sbi).errors = 1340 - MOUNT_ERRORS_PANIC; 1341 - } else { 1342 - kfree(name); 1343 - return -EINVAL; 1344 - } 1345 - kfree(name); 1346 - break; 1347 - 
case Opt_nat_bits: 1348 - set_opt(sbi, NAT_BITS); 961 + ctx_clear_opt(ctx, F2FS_MOUNT_DISABLE_CHECKPOINT); 1349 962 break; 1350 963 default: 1351 - f2fs_err(sbi, "Unrecognized mount option \"%s\" or missing value", 1352 - p); 1353 964 return -EINVAL; 1354 965 } 966 + break; 967 + case Opt_checkpoint_merge: 968 + if (result.negated) 969 + ctx_clear_opt(ctx, F2FS_MOUNT_MERGE_CHECKPOINT); 970 + else 971 + ctx_set_opt(ctx, F2FS_MOUNT_MERGE_CHECKPOINT); 972 + break; 973 + #ifdef CONFIG_F2FS_FS_COMPRESSION 974 + case Opt_compress_algorithm: 975 + name = param->string; 976 + if (!strcmp(name, "lzo")) { 977 + #ifdef CONFIG_F2FS_FS_LZO 978 + F2FS_CTX_INFO(ctx).compress_level = 0; 979 + F2FS_CTX_INFO(ctx).compress_algorithm = COMPRESS_LZO; 980 + ctx->spec_mask |= F2FS_SPEC_compress_level; 981 + ctx->spec_mask |= F2FS_SPEC_compress_algorithm; 982 + #else 983 + f2fs_info(NULL, "kernel doesn't support lzo compression"); 984 + #endif 985 + } else if (!strncmp(name, "lz4", 3)) { 986 + #ifdef CONFIG_F2FS_FS_LZ4 987 + ret = f2fs_set_lz4hc_level(ctx, name); 988 + if (ret) 989 + return -EINVAL; 990 + F2FS_CTX_INFO(ctx).compress_algorithm = COMPRESS_LZ4; 991 + ctx->spec_mask |= F2FS_SPEC_compress_algorithm; 992 + #else 993 + f2fs_info(NULL, "kernel doesn't support lz4 compression"); 994 + #endif 995 + } else if (!strncmp(name, "zstd", 4)) { 996 + #ifdef CONFIG_F2FS_FS_ZSTD 997 + ret = f2fs_set_zstd_level(ctx, name); 998 + if (ret) 999 + return -EINVAL; 1000 + F2FS_CTX_INFO(ctx).compress_algorithm = COMPRESS_ZSTD; 1001 + ctx->spec_mask |= F2FS_SPEC_compress_algorithm; 1002 + #else 1003 + f2fs_info(NULL, "kernel doesn't support zstd compression"); 1004 + #endif 1005 + } else if (!strcmp(name, "lzo-rle")) { 1006 + #ifdef CONFIG_F2FS_FS_LZORLE 1007 + F2FS_CTX_INFO(ctx).compress_level = 0; 1008 + F2FS_CTX_INFO(ctx).compress_algorithm = COMPRESS_LZORLE; 1009 + ctx->spec_mask |= F2FS_SPEC_compress_level; 1010 + ctx->spec_mask |= F2FS_SPEC_compress_algorithm; 1011 + #else 1012 + 
f2fs_info(NULL, "kernel doesn't support lzorle compression"); 1013 + #endif 1014 + } else 1015 + return -EINVAL; 1016 + break; 1017 + case Opt_compress_log_size: 1018 + if (result.uint_32 < MIN_COMPRESS_LOG_SIZE || 1019 + result.uint_32 > MAX_COMPRESS_LOG_SIZE) { 1020 + f2fs_err(NULL, 1021 + "Compress cluster log size is out of range"); 1022 + return -EINVAL; 1023 + } 1024 + F2FS_CTX_INFO(ctx).compress_log_size = result.uint_32; 1025 + ctx->spec_mask |= F2FS_SPEC_compress_log_size; 1026 + break; 1027 + case Opt_compress_extension: 1028 + name = param->string; 1029 + ext = F2FS_CTX_INFO(ctx).extensions; 1030 + ext_cnt = F2FS_CTX_INFO(ctx).compress_ext_cnt; 1031 + 1032 + if (strlen(name) >= F2FS_EXTENSION_LEN || 1033 + ext_cnt >= COMPRESS_EXT_NUM) { 1034 + f2fs_err(NULL, "invalid extension length/number"); 1035 + return -EINVAL; 1036 + } 1037 + 1038 + if (is_compress_extension_exist(&ctx->info, name, true)) 1039 + break; 1040 + 1041 + ret = strscpy(ext[ext_cnt], name, F2FS_EXTENSION_LEN); 1042 + if (ret < 0) 1043 + return ret; 1044 + F2FS_CTX_INFO(ctx).compress_ext_cnt++; 1045 + ctx->spec_mask |= F2FS_SPEC_compress_extension; 1046 + break; 1047 + case Opt_nocompress_extension: 1048 + name = param->string; 1049 + noext = F2FS_CTX_INFO(ctx).noextensions; 1050 + noext_cnt = F2FS_CTX_INFO(ctx).nocompress_ext_cnt; 1051 + 1052 + if (strlen(name) >= F2FS_EXTENSION_LEN || 1053 + noext_cnt >= COMPRESS_EXT_NUM) { 1054 + f2fs_err(NULL, "invalid extension length/number"); 1055 + return -EINVAL; 1056 + } 1057 + 1058 + if (is_compress_extension_exist(&ctx->info, name, false)) 1059 + break; 1060 + 1061 + ret = strscpy(noext[noext_cnt], name, F2FS_EXTENSION_LEN); 1062 + if (ret < 0) 1063 + return ret; 1064 + F2FS_CTX_INFO(ctx).nocompress_ext_cnt++; 1065 + ctx->spec_mask |= F2FS_SPEC_nocompress_extension; 1066 + break; 1067 + case Opt_compress_chksum: 1068 + F2FS_CTX_INFO(ctx).compress_chksum = true; 1069 + ctx->spec_mask |= F2FS_SPEC_compress_chksum; 1070 + break; 1071 + case 
Opt_compress_mode: 1072 + F2FS_CTX_INFO(ctx).compress_mode = result.uint_32; 1073 + ctx->spec_mask |= F2FS_SPEC_compress_mode; 1074 + break; 1075 + case Opt_compress_cache: 1076 + ctx_set_opt(ctx, F2FS_MOUNT_COMPRESS_CACHE); 1077 + break; 1078 + #else 1079 + case Opt_compress_algorithm: 1080 + case Opt_compress_log_size: 1081 + case Opt_compress_extension: 1082 + case Opt_nocompress_extension: 1083 + case Opt_compress_chksum: 1084 + case Opt_compress_mode: 1085 + case Opt_compress_cache: 1086 + f2fs_info(NULL, "compression options not supported"); 1087 + break; 1088 + #endif 1089 + case Opt_atgc: 1090 + ctx_set_opt(ctx, F2FS_MOUNT_ATGC); 1091 + break; 1092 + case Opt_gc_merge: 1093 + if (result.negated) 1094 + ctx_clear_opt(ctx, F2FS_MOUNT_GC_MERGE); 1095 + else 1096 + ctx_set_opt(ctx, F2FS_MOUNT_GC_MERGE); 1097 + break; 1098 + case Opt_discard_unit: 1099 + F2FS_CTX_INFO(ctx).discard_unit = result.uint_32; 1100 + ctx->spec_mask |= F2FS_SPEC_discard_unit; 1101 + break; 1102 + case Opt_memory_mode: 1103 + F2FS_CTX_INFO(ctx).memory_mode = result.uint_32; 1104 + ctx->spec_mask |= F2FS_SPEC_memory_mode; 1105 + break; 1106 + case Opt_age_extent_cache: 1107 + ctx_set_opt(ctx, F2FS_MOUNT_AGE_EXTENT_CACHE); 1108 + break; 1109 + case Opt_errors: 1110 + F2FS_CTX_INFO(ctx).errors = result.uint_32; 1111 + ctx->spec_mask |= F2FS_SPEC_errors; 1112 + break; 1113 + case Opt_nat_bits: 1114 + ctx_set_opt(ctx, F2FS_MOUNT_NAT_BITS); 1115 + break; 1355 1116 } 1356 1117 return 0; 1357 1118 } 1358 1119 1359 - static int f2fs_default_check(struct f2fs_sb_info *sbi) 1120 + /* 1121 + * Check quota settings consistency. 
1122 + */ 1123 + static int f2fs_check_quota_consistency(struct fs_context *fc, 1124 + struct super_block *sb) 1360 1125 { 1361 - #ifdef CONFIG_QUOTA 1362 - if (f2fs_check_quota_options(sbi)) 1126 + struct f2fs_sb_info *sbi = F2FS_SB(sb); 1127 + #ifdef CONFIG_QUOTA 1128 + struct f2fs_fs_context *ctx = fc->fs_private; 1129 + bool quota_feature = f2fs_sb_has_quota_ino(sbi); 1130 + bool quota_turnon = sb_any_quota_loaded(sb); 1131 + char *old_qname, *new_qname; 1132 + bool usr_qf_name, grp_qf_name, prj_qf_name, usrquota, grpquota, prjquota; 1133 + int i; 1134 + 1135 + /* 1136 + * We do the test below only for project quotas. 'usrquota' and 1137 + * 'grpquota' mount options are allowed even without quota feature 1138 + * to support legacy quotas in quota files. 1139 + */ 1140 + if (ctx_test_opt(ctx, F2FS_MOUNT_PRJQUOTA) && 1141 + !f2fs_sb_has_project_quota(sbi)) { 1142 + f2fs_err(sbi, "Project quota feature not enabled. Cannot enable project quota enforcement."); 1363 1143 return -EINVAL; 1144 + } 1145 + 1146 + if (ctx->qname_mask) { 1147 + for (i = 0; i < MAXQUOTAS; i++) { 1148 + if (!(ctx->qname_mask & (1 << i))) 1149 + continue; 1150 + 1151 + old_qname = F2FS_OPTION(sbi).s_qf_names[i]; 1152 + new_qname = F2FS_CTX_INFO(ctx).s_qf_names[i]; 1153 + if (quota_turnon && 1154 + !!old_qname != !!new_qname) 1155 + goto err_jquota_change; 1156 + 1157 + if (old_qname) { 1158 + if (strcmp(old_qname, new_qname) == 0) { 1159 + ctx->qname_mask &= ~(1 << i); 1160 + continue; 1161 + } 1162 + goto err_jquota_specified; 1163 + } 1164 + 1165 + if (quota_feature) { 1166 + f2fs_info(sbi, "QUOTA feature is enabled, so ignore qf_name"); 1167 + ctx->qname_mask &= ~(1 << i); 1168 + kfree(F2FS_CTX_INFO(ctx).s_qf_names[i]); 1169 + F2FS_CTX_INFO(ctx).s_qf_names[i] = NULL; 1170 + } 1171 + } 1172 + } 1173 + 1174 + /* Make sure we don't mix old and new quota format */ 1175 + usr_qf_name = F2FS_OPTION(sbi).s_qf_names[USRQUOTA] || 1176 + F2FS_CTX_INFO(ctx).s_qf_names[USRQUOTA]; 1177 + grp_qf_name = 
F2FS_OPTION(sbi).s_qf_names[GRPQUOTA] || 1178 + F2FS_CTX_INFO(ctx).s_qf_names[GRPQUOTA]; 1179 + prj_qf_name = F2FS_OPTION(sbi).s_qf_names[PRJQUOTA] || 1180 + F2FS_CTX_INFO(ctx).s_qf_names[PRJQUOTA]; 1181 + usrquota = test_opt(sbi, USRQUOTA) || 1182 + ctx_test_opt(ctx, F2FS_MOUNT_USRQUOTA); 1183 + grpquota = test_opt(sbi, GRPQUOTA) || 1184 + ctx_test_opt(ctx, F2FS_MOUNT_GRPQUOTA); 1185 + prjquota = test_opt(sbi, PRJQUOTA) || 1186 + ctx_test_opt(ctx, F2FS_MOUNT_PRJQUOTA); 1187 + 1188 + if (usr_qf_name) { 1189 + ctx_clear_opt(ctx, F2FS_MOUNT_USRQUOTA); 1190 + usrquota = false; 1191 + } 1192 + if (grp_qf_name) { 1193 + ctx_clear_opt(ctx, F2FS_MOUNT_GRPQUOTA); 1194 + grpquota = false; 1195 + } 1196 + if (prj_qf_name) { 1197 + ctx_clear_opt(ctx, F2FS_MOUNT_PRJQUOTA); 1198 + prjquota = false; 1199 + } 1200 + if (usr_qf_name || grp_qf_name || prj_qf_name) { 1201 + if (grpquota || usrquota || prjquota) { 1202 + f2fs_err(sbi, "old and new quota format mixing"); 1203 + return -EINVAL; 1204 + } 1205 + if (!(ctx->spec_mask & F2FS_SPEC_jqfmt || 1206 + F2FS_OPTION(sbi).s_jquota_fmt)) { 1207 + f2fs_err(sbi, "journaled quota format not specified"); 1208 + return -EINVAL; 1209 + } 1210 + } 1211 + return 0; 1212 + 1213 + err_jquota_change: 1214 + f2fs_err(sbi, "Cannot change journaled quota options when quota turned on"); 1215 + return -EINVAL; 1216 + err_jquota_specified: 1217 + f2fs_err(sbi, "%s quota file already specified", 1218 + QTYPE2NAME(i)); 1219 + return -EINVAL; 1220 + 1364 1221 #else 1365 - if (f2fs_sb_has_quota_ino(sbi) && !f2fs_readonly(sbi->sb)) { 1222 + if (f2fs_readonly(sbi->sb)) 1223 + return 0; 1224 + if (f2fs_sb_has_quota_ino(sbi)) { 1366 1225 f2fs_info(sbi, "Filesystem with quota feature cannot be mounted RDWR without CONFIG_QUOTA"); 1367 1226 return -EINVAL; 1368 1227 } 1369 - if (f2fs_sb_has_project_quota(sbi) && !f2fs_readonly(sbi->sb)) { 1228 + if (f2fs_sb_has_project_quota(sbi)) { 1370 1229 f2fs_err(sbi, "Filesystem with project quota feature cannot be 
mounted RDWR without CONFIG_QUOTA"); 1371 1230 return -EINVAL; 1372 1231 } 1232 + 1233 + return 0; 1373 1234 #endif 1235 + } 1236 + 1237 + static int f2fs_check_test_dummy_encryption(struct fs_context *fc, 1238 + struct super_block *sb) 1239 + { 1240 + struct f2fs_fs_context *ctx = fc->fs_private; 1241 + struct f2fs_sb_info *sbi = F2FS_SB(sb); 1242 + 1243 + if (!fscrypt_is_dummy_policy_set(&F2FS_CTX_INFO(ctx).dummy_enc_policy)) 1244 + return 0; 1245 + 1246 + if (!f2fs_sb_has_encrypt(sbi)) { 1247 + f2fs_err(sbi, "Encrypt feature is off"); 1248 + return -EINVAL; 1249 + } 1250 + 1251 + /* 1252 + * This mount option is just for testing, and it's not worthwhile to 1253 + * implement the extra complexity (e.g. RCU protection) that would be 1254 + * needed to allow it to be set or changed during remount. We do allow 1255 + * it to be specified during remount, but only if there is no change. 1256 + */ 1257 + if (fc->purpose == FS_CONTEXT_FOR_RECONFIGURE) { 1258 + if (fscrypt_dummy_policies_equal(&F2FS_OPTION(sbi).dummy_enc_policy, 1259 + &F2FS_CTX_INFO(ctx).dummy_enc_policy)) 1260 + return 0; 1261 + f2fs_warn(sbi, "Can't set or change test_dummy_encryption on remount"); 1262 + return -EINVAL; 1263 + } 1264 + return 0; 1265 + } 1266 + 1267 + static inline bool test_compression_spec(unsigned int mask) 1268 + { 1269 + return mask & (F2FS_SPEC_compress_algorithm 1270 + | F2FS_SPEC_compress_log_size 1271 + | F2FS_SPEC_compress_extension 1272 + | F2FS_SPEC_nocompress_extension 1273 + | F2FS_SPEC_compress_chksum 1274 + | F2FS_SPEC_compress_mode); 1275 + } 1276 + 1277 + static inline void clear_compression_spec(struct f2fs_fs_context *ctx) 1278 + { 1279 + ctx->spec_mask &= ~(F2FS_SPEC_compress_algorithm 1280 + | F2FS_SPEC_compress_log_size 1281 + | F2FS_SPEC_compress_extension 1282 + | F2FS_SPEC_nocompress_extension 1283 + | F2FS_SPEC_compress_chksum 1284 + | F2FS_SPEC_compress_mode); 1285 + } 1286 + 1287 + static int f2fs_check_compression(struct fs_context *fc, 1288 + struct 
super_block *sb) 1289 + { 1290 + #ifdef CONFIG_F2FS_FS_COMPRESSION 1291 + struct f2fs_fs_context *ctx = fc->fs_private; 1292 + struct f2fs_sb_info *sbi = F2FS_SB(sb); 1293 + int i, cnt; 1294 + 1295 + if (!f2fs_sb_has_compression(sbi)) { 1296 + if (test_compression_spec(ctx->spec_mask) || 1297 + ctx_test_opt(ctx, F2FS_MOUNT_COMPRESS_CACHE)) 1298 + f2fs_info(sbi, "Image doesn't support compression"); 1299 + clear_compression_spec(ctx); 1300 + ctx->opt_mask &= ~F2FS_MOUNT_COMPRESS_CACHE; 1301 + return 0; 1302 + } 1303 + if (ctx->spec_mask & F2FS_SPEC_compress_extension) { 1304 + cnt = F2FS_CTX_INFO(ctx).compress_ext_cnt; 1305 + for (i = 0; i < F2FS_CTX_INFO(ctx).compress_ext_cnt; i++) { 1306 + if (is_compress_extension_exist(&F2FS_OPTION(sbi), 1307 + F2FS_CTX_INFO(ctx).extensions[i], true)) { 1308 + F2FS_CTX_INFO(ctx).extensions[i][0] = '\0'; 1309 + cnt--; 1310 + } 1311 + } 1312 + if (F2FS_OPTION(sbi).compress_ext_cnt + cnt > COMPRESS_EXT_NUM) { 1313 + f2fs_err(sbi, "invalid extension length/number"); 1314 + return -EINVAL; 1315 + } 1316 + } 1317 + if (ctx->spec_mask & F2FS_SPEC_nocompress_extension) { 1318 + cnt = F2FS_CTX_INFO(ctx).nocompress_ext_cnt; 1319 + for (i = 0; i < F2FS_CTX_INFO(ctx).nocompress_ext_cnt; i++) { 1320 + if (is_compress_extension_exist(&F2FS_OPTION(sbi), 1321 + F2FS_CTX_INFO(ctx).noextensions[i], false)) { 1322 + F2FS_CTX_INFO(ctx).noextensions[i][0] = '\0'; 1323 + cnt--; 1324 + } 1325 + } 1326 + if (F2FS_OPTION(sbi).nocompress_ext_cnt + cnt > COMPRESS_EXT_NUM) { 1327 + f2fs_err(sbi, "invalid noextension length/number"); 1328 + return -EINVAL; 1329 + } 1330 + } 1331 + 1332 + if (f2fs_test_compress_extension(F2FS_CTX_INFO(ctx).noextensions, 1333 + F2FS_CTX_INFO(ctx).nocompress_ext_cnt, 1334 + F2FS_CTX_INFO(ctx).extensions, 1335 + F2FS_CTX_INFO(ctx).compress_ext_cnt)) { 1336 + f2fs_err(sbi, "new noextensions conflicts with new extensions"); 1337 + return -EINVAL; 1338 + } 1339 + if (f2fs_test_compress_extension(F2FS_CTX_INFO(ctx).noextensions, 
1340 + F2FS_CTX_INFO(ctx).nocompress_ext_cnt, 1341 + F2FS_OPTION(sbi).extensions, 1342 + F2FS_OPTION(sbi).compress_ext_cnt)) { 1343 + f2fs_err(sbi, "new noextensions conflicts with old extensions"); 1344 + return -EINVAL; 1345 + } 1346 + if (f2fs_test_compress_extension(F2FS_OPTION(sbi).noextensions, 1347 + F2FS_OPTION(sbi).nocompress_ext_cnt, 1348 + F2FS_CTX_INFO(ctx).extensions, 1349 + F2FS_CTX_INFO(ctx).compress_ext_cnt)) { 1350 + f2fs_err(sbi, "new extensions conflicts with old noextensions"); 1351 + return -EINVAL; 1352 + } 1353 + #endif 1354 + return 0; 1355 + } 1356 + 1357 + static int f2fs_check_opt_consistency(struct fs_context *fc, 1358 + struct super_block *sb) 1359 + { 1360 + struct f2fs_fs_context *ctx = fc->fs_private; 1361 + struct f2fs_sb_info *sbi = F2FS_SB(sb); 1362 + int err; 1363 + 1364 + if (ctx_test_opt(ctx, F2FS_MOUNT_NORECOVERY) && !f2fs_readonly(sb)) 1365 + return -EINVAL; 1366 + 1367 + if (f2fs_hw_should_discard(sbi) && 1368 + (ctx->opt_mask & F2FS_MOUNT_DISCARD) && 1369 + !ctx_test_opt(ctx, F2FS_MOUNT_DISCARD)) { 1370 + f2fs_warn(sbi, "discard is required for zoned block devices"); 1371 + return -EINVAL; 1372 + } 1373 + 1374 + if (!f2fs_hw_support_discard(sbi) && 1375 + (ctx->opt_mask & F2FS_MOUNT_DISCARD) && 1376 + ctx_test_opt(ctx, F2FS_MOUNT_DISCARD)) { 1377 + f2fs_warn(sbi, "device does not support discard"); 1378 + ctx_clear_opt(ctx, F2FS_MOUNT_DISCARD); 1379 + ctx->opt_mask &= ~F2FS_MOUNT_DISCARD; 1380 + } 1381 + 1382 + if (f2fs_sb_has_device_alias(sbi) && 1383 + (ctx->opt_mask & F2FS_MOUNT_READ_EXTENT_CACHE) && 1384 + !ctx_test_opt(ctx, F2FS_MOUNT_READ_EXTENT_CACHE)) { 1385 + f2fs_err(sbi, "device aliasing requires extent cache"); 1386 + return -EINVAL; 1387 + } 1388 + 1389 + if (test_opt(sbi, RESERVE_ROOT) && 1390 + (ctx->opt_mask & F2FS_MOUNT_RESERVE_ROOT) && 1391 + ctx_test_opt(ctx, F2FS_MOUNT_RESERVE_ROOT)) { 1392 + f2fs_info(sbi, "Preserve previous reserve_root=%u", 1393 + F2FS_OPTION(sbi).root_reserved_blocks); 1394 + 
ctx_clear_opt(ctx, F2FS_MOUNT_RESERVE_ROOT); 1395 + ctx->opt_mask &= ~F2FS_MOUNT_RESERVE_ROOT; 1396 + } 1397 + 1398 + err = f2fs_check_test_dummy_encryption(fc, sb); 1399 + if (err) 1400 + return err; 1401 + 1402 + err = f2fs_check_compression(fc, sb); 1403 + if (err) 1404 + return err; 1405 + 1406 + err = f2fs_check_quota_consistency(fc, sb); 1407 + if (err) 1408 + return err; 1374 1409 1375 1410 if (!IS_ENABLED(CONFIG_UNICODE) && f2fs_sb_has_casefold(sbi)) { 1376 1411 f2fs_err(sbi, ··· 1449 1354 * devices, but mandatory for host-managed zoned block devices. 1450 1355 */ 1451 1356 if (f2fs_sb_has_blkzoned(sbi)) { 1357 + if (F2FS_CTX_INFO(ctx).bggc_mode == BGGC_MODE_OFF) { 1358 + f2fs_warn(sbi, "zoned devices need bggc"); 1359 + return -EINVAL; 1360 + } 1452 1361 #ifdef CONFIG_BLK_DEV_ZONED 1453 - if (F2FS_OPTION(sbi).discard_unit != 1454 - DISCARD_UNIT_SECTION) { 1362 + if ((ctx->spec_mask & F2FS_SPEC_discard_unit) && 1363 + F2FS_CTX_INFO(ctx).discard_unit != DISCARD_UNIT_SECTION) { 1455 1364 f2fs_info(sbi, "Zoned block device doesn't need small discard, set discard_unit=section by default"); 1456 - F2FS_OPTION(sbi).discard_unit = 1457 - DISCARD_UNIT_SECTION; 1365 + F2FS_CTX_INFO(ctx).discard_unit = DISCARD_UNIT_SECTION; 1458 1366 } 1459 1367 1460 - if (F2FS_OPTION(sbi).fs_mode != FS_MODE_LFS) { 1368 + if ((ctx->spec_mask & F2FS_SPEC_mode) && 1369 + F2FS_CTX_INFO(ctx).fs_mode != FS_MODE_LFS) { 1461 1370 f2fs_info(sbi, "Only lfs mode is allowed with zoned block device feature"); 1462 1371 return -EINVAL; 1463 1372 } ··· 1471 1372 #endif 1472 1373 } 1473 1374 1474 - #ifdef CONFIG_F2FS_FS_COMPRESSION 1475 - if (f2fs_test_compress_extension(sbi)) { 1476 - f2fs_err(sbi, "invalid compress or nocompress extension"); 1477 - return -EINVAL; 1478 - } 1479 - #endif 1480 - 1481 - if (test_opt(sbi, INLINE_XATTR_SIZE)) { 1482 - int min_size, max_size; 1483 - 1375 + if (ctx_test_opt(ctx, F2FS_MOUNT_INLINE_XATTR_SIZE)) { 1484 1376 if (!f2fs_sb_has_extra_attr(sbi) || 1485 1377 
!f2fs_sb_has_flexible_inline_xattr(sbi)) { 1486 1378 f2fs_err(sbi, "extra_attr or flexible_inline_xattr feature is off"); 1487 1379 return -EINVAL; 1488 1380 } 1489 - if (!test_opt(sbi, INLINE_XATTR)) { 1381 + if (!ctx_test_opt(ctx, F2FS_MOUNT_INLINE_XATTR) && !test_opt(sbi, INLINE_XATTR)) { 1490 1382 f2fs_err(sbi, "inline_xattr_size option should be set with inline_xattr option"); 1491 - return -EINVAL; 1492 - } 1493 - 1494 - min_size = MIN_INLINE_XATTR_SIZE; 1495 - max_size = MAX_INLINE_XATTR_SIZE; 1496 - 1497 - if (F2FS_OPTION(sbi).inline_xattr_size < min_size || 1498 - F2FS_OPTION(sbi).inline_xattr_size > max_size) { 1499 - f2fs_err(sbi, "inline xattr size is out of range: %d ~ %d", 1500 - min_size, max_size); 1501 1383 return -EINVAL; 1502 1384 } 1503 1385 } 1504 1386 1505 - if (test_opt(sbi, ATGC) && f2fs_lfs_mode(sbi)) { 1387 + if (ctx_test_opt(ctx, F2FS_MOUNT_ATGC) && 1388 + F2FS_CTX_INFO(ctx).fs_mode == FS_MODE_LFS) { 1506 1389 f2fs_err(sbi, "LFS is not compatible with ATGC"); 1507 1390 return -EINVAL; 1508 1391 } 1509 1392 1510 - if (f2fs_is_readonly(sbi) && test_opt(sbi, FLUSH_MERGE)) { 1393 + if (f2fs_is_readonly(sbi) && ctx_test_opt(ctx, F2FS_MOUNT_FLUSH_MERGE)) { 1511 1394 f2fs_err(sbi, "FLUSH_MERGE not compatible with readonly mode"); 1512 1395 return -EINVAL; 1513 1396 } ··· 1498 1417 f2fs_err(sbi, "Allow to mount readonly mode only"); 1499 1418 return -EROFS; 1500 1419 } 1420 + return 0; 1421 + } 1501 1422 1502 - if (test_opt(sbi, NORECOVERY) && !f2fs_readonly(sbi->sb)) { 1503 - f2fs_err(sbi, "norecovery requires readonly mount"); 1423 + static void f2fs_apply_quota_options(struct fs_context *fc, 1424 + struct super_block *sb) 1425 + { 1426 + #ifdef CONFIG_QUOTA 1427 + struct f2fs_fs_context *ctx = fc->fs_private; 1428 + struct f2fs_sb_info *sbi = F2FS_SB(sb); 1429 + bool quota_feature = f2fs_sb_has_quota_ino(sbi); 1430 + char *qname; 1431 + int i; 1432 + 1433 + if (quota_feature) 1434 + return; 1435 + 1436 + for (i = 0; i < MAXQUOTAS; i++) { 1437 
+ if (!(ctx->qname_mask & (1 << i))) 1438 + continue; 1439 + 1440 + qname = F2FS_CTX_INFO(ctx).s_qf_names[i]; 1441 + if (qname) { 1442 + qname = kstrdup(F2FS_CTX_INFO(ctx).s_qf_names[i], 1443 + GFP_KERNEL | __GFP_NOFAIL); 1444 + set_opt(sbi, QUOTA); 1445 + } 1446 + F2FS_OPTION(sbi).s_qf_names[i] = qname; 1447 + } 1448 + 1449 + if (ctx->spec_mask & F2FS_SPEC_jqfmt) 1450 + F2FS_OPTION(sbi).s_jquota_fmt = F2FS_CTX_INFO(ctx).s_jquota_fmt; 1451 + 1452 + if (quota_feature && F2FS_OPTION(sbi).s_jquota_fmt) { 1453 + f2fs_info(sbi, "QUOTA feature is enabled, so ignore jquota_fmt"); 1454 + F2FS_OPTION(sbi).s_jquota_fmt = 0; 1455 + } 1456 + #endif 1457 + } 1458 + 1459 + static void f2fs_apply_test_dummy_encryption(struct fs_context *fc, 1460 + struct super_block *sb) 1461 + { 1462 + struct f2fs_fs_context *ctx = fc->fs_private; 1463 + struct f2fs_sb_info *sbi = F2FS_SB(sb); 1464 + 1465 + if (!fscrypt_is_dummy_policy_set(&F2FS_CTX_INFO(ctx).dummy_enc_policy) || 1466 + /* if already set, it was already verified to be the same */ 1467 + fscrypt_is_dummy_policy_set(&F2FS_OPTION(sbi).dummy_enc_policy)) 1468 + return; 1469 + swap(F2FS_OPTION(sbi).dummy_enc_policy, F2FS_CTX_INFO(ctx).dummy_enc_policy); 1470 + f2fs_warn(sbi, "Test dummy encryption mode enabled"); 1471 + } 1472 + 1473 + static void f2fs_apply_compression(struct fs_context *fc, 1474 + struct super_block *sb) 1475 + { 1476 + #ifdef CONFIG_F2FS_FS_COMPRESSION 1477 + struct f2fs_fs_context *ctx = fc->fs_private; 1478 + struct f2fs_sb_info *sbi = F2FS_SB(sb); 1479 + unsigned char (*ctx_ext)[F2FS_EXTENSION_LEN]; 1480 + unsigned char (*sbi_ext)[F2FS_EXTENSION_LEN]; 1481 + int ctx_cnt, sbi_cnt, i; 1482 + 1483 + if (ctx->spec_mask & F2FS_SPEC_compress_level) 1484 + F2FS_OPTION(sbi).compress_level = 1485 + F2FS_CTX_INFO(ctx).compress_level; 1486 + if (ctx->spec_mask & F2FS_SPEC_compress_algorithm) 1487 + F2FS_OPTION(sbi).compress_algorithm = 1488 + F2FS_CTX_INFO(ctx).compress_algorithm; 1489 + if (ctx->spec_mask & 
F2FS_SPEC_compress_log_size) 1490 + F2FS_OPTION(sbi).compress_log_size = 1491 + F2FS_CTX_INFO(ctx).compress_log_size; 1492 + if (ctx->spec_mask & F2FS_SPEC_compress_chksum) 1493 + F2FS_OPTION(sbi).compress_chksum = 1494 + F2FS_CTX_INFO(ctx).compress_chksum; 1495 + if (ctx->spec_mask & F2FS_SPEC_compress_mode) 1496 + F2FS_OPTION(sbi).compress_mode = 1497 + F2FS_CTX_INFO(ctx).compress_mode; 1498 + if (ctx->spec_mask & F2FS_SPEC_compress_extension) { 1499 + ctx_ext = F2FS_CTX_INFO(ctx).extensions; 1500 + ctx_cnt = F2FS_CTX_INFO(ctx).compress_ext_cnt; 1501 + sbi_ext = F2FS_OPTION(sbi).extensions; 1502 + sbi_cnt = F2FS_OPTION(sbi).compress_ext_cnt; 1503 + for (i = 0; i < ctx_cnt; i++) { 1504 + if (strlen(ctx_ext[i]) == 0) 1505 + continue; 1506 + strscpy(sbi_ext[sbi_cnt], ctx_ext[i]); 1507 + sbi_cnt++; 1508 + } 1509 + F2FS_OPTION(sbi).compress_ext_cnt = sbi_cnt; 1510 + } 1511 + if (ctx->spec_mask & F2FS_SPEC_nocompress_extension) { 1512 + ctx_ext = F2FS_CTX_INFO(ctx).noextensions; 1513 + ctx_cnt = F2FS_CTX_INFO(ctx).nocompress_ext_cnt; 1514 + sbi_ext = F2FS_OPTION(sbi).noextensions; 1515 + sbi_cnt = F2FS_OPTION(sbi).nocompress_ext_cnt; 1516 + for (i = 0; i < ctx_cnt; i++) { 1517 + if (strlen(ctx_ext[i]) == 0) 1518 + continue; 1519 + strscpy(sbi_ext[sbi_cnt], ctx_ext[i]); 1520 + sbi_cnt++; 1521 + } 1522 + F2FS_OPTION(sbi).nocompress_ext_cnt = sbi_cnt; 1523 + } 1524 + #endif 1525 + } 1526 + 1527 + static void f2fs_apply_options(struct fs_context *fc, struct super_block *sb) 1528 + { 1529 + struct f2fs_fs_context *ctx = fc->fs_private; 1530 + struct f2fs_sb_info *sbi = F2FS_SB(sb); 1531 + 1532 + F2FS_OPTION(sbi).opt &= ~ctx->opt_mask; 1533 + F2FS_OPTION(sbi).opt |= F2FS_CTX_INFO(ctx).opt; 1534 + 1535 + if (ctx->spec_mask & F2FS_SPEC_background_gc) 1536 + F2FS_OPTION(sbi).bggc_mode = F2FS_CTX_INFO(ctx).bggc_mode; 1537 + if (ctx->spec_mask & F2FS_SPEC_inline_xattr_size) 1538 + F2FS_OPTION(sbi).inline_xattr_size = 1539 + F2FS_CTX_INFO(ctx).inline_xattr_size; 1540 + if 
(ctx->spec_mask & F2FS_SPEC_active_logs) 1541 + F2FS_OPTION(sbi).active_logs = F2FS_CTX_INFO(ctx).active_logs; 1542 + if (ctx->spec_mask & F2FS_SPEC_reserve_root) 1543 + F2FS_OPTION(sbi).root_reserved_blocks = 1544 + F2FS_CTX_INFO(ctx).root_reserved_blocks; 1545 + if (ctx->spec_mask & F2FS_SPEC_resgid) 1546 + F2FS_OPTION(sbi).s_resgid = F2FS_CTX_INFO(ctx).s_resgid; 1547 + if (ctx->spec_mask & F2FS_SPEC_resuid) 1548 + F2FS_OPTION(sbi).s_resuid = F2FS_CTX_INFO(ctx).s_resuid; 1549 + if (ctx->spec_mask & F2FS_SPEC_mode) 1550 + F2FS_OPTION(sbi).fs_mode = F2FS_CTX_INFO(ctx).fs_mode; 1551 + #ifdef CONFIG_F2FS_FAULT_INJECTION 1552 + if (ctx->spec_mask & F2FS_SPEC_fault_injection) 1553 + (void)f2fs_build_fault_attr(sbi, 1554 + F2FS_CTX_INFO(ctx).fault_info.inject_rate, 0, FAULT_RATE); 1555 + if (ctx->spec_mask & F2FS_SPEC_fault_type) 1556 + (void)f2fs_build_fault_attr(sbi, 0, 1557 + F2FS_CTX_INFO(ctx).fault_info.inject_type, FAULT_TYPE); 1558 + #endif 1559 + if (ctx->spec_mask & F2FS_SPEC_alloc_mode) 1560 + F2FS_OPTION(sbi).alloc_mode = F2FS_CTX_INFO(ctx).alloc_mode; 1561 + if (ctx->spec_mask & F2FS_SPEC_fsync_mode) 1562 + F2FS_OPTION(sbi).fsync_mode = F2FS_CTX_INFO(ctx).fsync_mode; 1563 + if (ctx->spec_mask & F2FS_SPEC_checkpoint_disable_cap) 1564 + F2FS_OPTION(sbi).unusable_cap = F2FS_CTX_INFO(ctx).unusable_cap; 1565 + if (ctx->spec_mask & F2FS_SPEC_checkpoint_disable_cap_perc) 1566 + F2FS_OPTION(sbi).unusable_cap_perc = 1567 + F2FS_CTX_INFO(ctx).unusable_cap_perc; 1568 + if (ctx->spec_mask & F2FS_SPEC_discard_unit) 1569 + F2FS_OPTION(sbi).discard_unit = F2FS_CTX_INFO(ctx).discard_unit; 1570 + if (ctx->spec_mask & F2FS_SPEC_memory_mode) 1571 + F2FS_OPTION(sbi).memory_mode = F2FS_CTX_INFO(ctx).memory_mode; 1572 + if (ctx->spec_mask & F2FS_SPEC_errors) 1573 + F2FS_OPTION(sbi).errors = F2FS_CTX_INFO(ctx).errors; 1574 + 1575 + f2fs_apply_compression(fc, sb); 1576 + f2fs_apply_test_dummy_encryption(fc, sb); 1577 + f2fs_apply_quota_options(fc, sb); 1578 + } 1579 + 1580 + static 
int f2fs_sanity_check_options(struct f2fs_sb_info *sbi, bool remount)
1581 + {
1582 + 	if (f2fs_sb_has_device_alias(sbi) &&
1583 + 	    !test_opt(sbi, READ_EXTENT_CACHE)) {
1584 + 		f2fs_err(sbi, "device aliasing requires extent cache");
1504 1585 		return -EINVAL;
1505 1586 	}
1506 1587 
1588 + 	if (!remount)
1589 + 		return 0;
1590 + 
1591 + #ifdef CONFIG_BLK_DEV_ZONED
1592 + 	if (f2fs_sb_has_blkzoned(sbi) &&
1593 + 	    sbi->max_open_zones < F2FS_OPTION(sbi).active_logs) {
1594 + 		f2fs_err(sbi,
1595 + 			"zoned: max open zones %u is too small, need at least %u open zones",
1596 + 			sbi->max_open_zones, F2FS_OPTION(sbi).active_logs);
1597 + 		return -EINVAL;
1598 + 	}
1599 + #endif
1600 + 	if (f2fs_lfs_mode(sbi) && !IS_F2FS_IPU_DISABLE(sbi)) {
1601 + 		f2fs_warn(sbi, "LFS is not compatible with IPU");
1602 + 		return -EINVAL;
1603 + 	}
1507 1604 	return 0;
1508 1605 }
1509 1606 
···
1701 1442 	/* Initialize f2fs-specific inode info */
1702 1443 	atomic_set(&fi->dirty_pages, 0);
1703 1444 	atomic_set(&fi->i_compr_blocks, 0);
1445 + 	atomic_set(&fi->open_count, 0);
1704 1446 	init_f2fs_rwsem(&fi->i_sem);
1705 1447 	spin_lock_init(&fi->i_size_lock);
1706 1448 	INIT_LIST_HEAD(&fi->dirty_list);
···
1978 1718 	destroy_percpu_info(sbi);
1979 1719 	f2fs_destroy_iostat(sbi);
1980 1720 	for (i = 0; i < NR_PAGE_TYPE; i++)
1981 - 		kvfree(sbi->write_io[i]);
1721 + 		kfree(sbi->write_io[i]);
1982 1722 #if IS_ENABLED(CONFIG_UNICODE)
1983 1723 	utf8_unload(sb->s_encoding);
1984 1724 #endif
···
2589 2329 		f2fs_flush_ckpt_thread(sbi);
2590 2330 }
2591 2331 
2592 - static int f2fs_remount(struct super_block *sb, int *flags, char *data)
2332 + static int __f2fs_remount(struct fs_context *fc, struct super_block *sb)
2593 2333 {
2594 2334 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
2595 2335 	struct f2fs_mount_info org_mount_opt;
2596 2336 	unsigned long old_sb_flags;
2337 + 	unsigned int flags = fc->sb_flags;
2597 2338 	int err;
2598 2339 	bool need_restart_gc = false, need_stop_gc = false;
2599 2340 	bool need_restart_flush = false, 
need_stop_flush = false; ··· 2640 2379 #endif 2641 2380 2642 2381 /* recover superblocks we couldn't write due to previous RO mount */ 2643 - if (!(*flags & SB_RDONLY) && is_sbi_flag_set(sbi, SBI_NEED_SB_WRITE)) { 2382 + if (!(flags & SB_RDONLY) && is_sbi_flag_set(sbi, SBI_NEED_SB_WRITE)) { 2644 2383 err = f2fs_commit_super(sbi, false); 2645 2384 f2fs_info(sbi, "Try to recover all the superblocks, ret: %d", 2646 2385 err); ··· 2650 2389 2651 2390 default_options(sbi, true); 2652 2391 2653 - /* parse mount options */ 2654 - err = parse_options(sbi, data, true); 2392 + err = f2fs_check_opt_consistency(fc, sb); 2655 2393 if (err) 2656 2394 goto restore_opts; 2657 2395 2658 - #ifdef CONFIG_BLK_DEV_ZONED 2659 - if (f2fs_sb_has_blkzoned(sbi) && 2660 - sbi->max_open_zones < F2FS_OPTION(sbi).active_logs) { 2661 - f2fs_err(sbi, 2662 - "zoned: max open zones %u is too small, need at least %u open zones", 2663 - sbi->max_open_zones, F2FS_OPTION(sbi).active_logs); 2664 - err = -EINVAL; 2665 - goto restore_opts; 2666 - } 2667 - #endif 2396 + f2fs_apply_options(fc, sb); 2668 2397 2669 - err = f2fs_default_check(sbi); 2398 + err = f2fs_sanity_check_options(sbi, true); 2670 2399 if (err) 2671 2400 goto restore_opts; 2672 2401 ··· 2667 2416 * Previous and new state of filesystem is RO, 2668 2417 * so skip checking GC and FLUSH_MERGE conditions. 
2669 2418 */ 2670 - if (f2fs_readonly(sb) && (*flags & SB_RDONLY)) 2419 + if (f2fs_readonly(sb) && (flags & SB_RDONLY)) 2671 2420 goto skip; 2672 2421 2673 - if (f2fs_dev_is_readonly(sbi) && !(*flags & SB_RDONLY)) { 2422 + if (f2fs_dev_is_readonly(sbi) && !(flags & SB_RDONLY)) { 2674 2423 err = -EROFS; 2675 2424 goto restore_opts; 2676 2425 } 2677 2426 2678 2427 #ifdef CONFIG_QUOTA 2679 - if (!f2fs_readonly(sb) && (*flags & SB_RDONLY)) { 2428 + if (!f2fs_readonly(sb) && (flags & SB_RDONLY)) { 2680 2429 err = dquot_suspend(sb, -1); 2681 2430 if (err < 0) 2682 2431 goto restore_opts; 2683 - } else if (f2fs_readonly(sb) && !(*flags & SB_RDONLY)) { 2432 + } else if (f2fs_readonly(sb) && !(flags & SB_RDONLY)) { 2684 2433 /* dquot_resume needs RW */ 2685 2434 sb->s_flags &= ~SB_RDONLY; 2686 2435 if (sb_any_quota_suspended(sb)) { ··· 2692 2441 } 2693 2442 } 2694 2443 #endif 2695 - if (f2fs_lfs_mode(sbi) && !IS_F2FS_IPU_DISABLE(sbi)) { 2696 - err = -EINVAL; 2697 - f2fs_warn(sbi, "LFS is not compatible with IPU"); 2698 - goto restore_opts; 2699 - } 2700 - 2701 2444 /* disallow enable atgc dynamically */ 2702 2445 if (no_atgc == !!test_opt(sbi, ATGC)) { 2703 2446 err = -EINVAL; ··· 2730 2485 goto restore_opts; 2731 2486 } 2732 2487 2733 - if ((*flags & SB_RDONLY) && test_opt(sbi, DISABLE_CHECKPOINT)) { 2488 + if ((flags & SB_RDONLY) && test_opt(sbi, DISABLE_CHECKPOINT)) { 2734 2489 err = -EINVAL; 2735 2490 f2fs_warn(sbi, "disabling checkpoint not compatible with read-only"); 2736 2491 goto restore_opts; ··· 2741 2496 * or if background_gc = off is passed in mount 2742 2497 * option. Also sync the filesystem. 
2743 2498 */ 2744 - if ((*flags & SB_RDONLY) || 2499 + if ((flags & SB_RDONLY) || 2745 2500 (F2FS_OPTION(sbi).bggc_mode == BGGC_MODE_OFF && 2746 2501 !test_opt(sbi, GC_MERGE))) { 2747 2502 if (sbi->gc_thread) { ··· 2755 2510 need_stop_gc = true; 2756 2511 } 2757 2512 2758 - if (*flags & SB_RDONLY) { 2513 + if (flags & SB_RDONLY) { 2759 2514 sync_inodes_sb(sb); 2760 2515 2761 2516 set_sbi_flag(sbi, SBI_IS_DIRTY); ··· 2768 2523 * We stop issue flush thread if FS is mounted as RO 2769 2524 * or if flush_merge is not passed in mount option. 2770 2525 */ 2771 - if ((*flags & SB_RDONLY) || !test_opt(sbi, FLUSH_MERGE)) { 2526 + if ((flags & SB_RDONLY) || !test_opt(sbi, FLUSH_MERGE)) { 2772 2527 clear_opt(sbi, FLUSH_MERGE); 2773 2528 f2fs_destroy_flush_cmd_control(sbi, false); 2774 2529 need_restart_flush = true; ··· 2810 2565 * triggered while remount and we need to take care of it before 2811 2566 * returning from remount. 2812 2567 */ 2813 - if ((*flags & SB_RDONLY) || test_opt(sbi, DISABLE_CHECKPOINT) || 2568 + if ((flags & SB_RDONLY) || test_opt(sbi, DISABLE_CHECKPOINT) || 2814 2569 !test_opt(sbi, MERGE_CHECKPOINT)) { 2815 2570 f2fs_stop_ckpt_thread(sbi); 2816 2571 } else { 2817 - /* Flush if the prevous checkpoint, if exists. */ 2572 + /* Flush if the previous checkpoint, if exists. */ 2818 2573 f2fs_flush_ckpt_thread(sbi); 2819 2574 2820 2575 err = f2fs_start_ckpt_thread(sbi); ··· 2837 2592 (test_opt(sbi, POSIX_ACL) ? 
SB_POSIXACL : 0); 2838 2593 2839 2594 limit_reserve_root(sbi); 2840 - *flags = (*flags & ~SB_LAZYTIME) | (sb->s_flags & SB_LAZYTIME); 2595 + fc->sb_flags = (flags & ~SB_LAZYTIME) | (sb->s_flags & SB_LAZYTIME); 2841 2596 2842 2597 sbi->umount_lock_holder = NULL; 2843 2598 return 0; ··· 3508 3263 .freeze_fs = f2fs_freeze, 3509 3264 .unfreeze_fs = f2fs_unfreeze, 3510 3265 .statfs = f2fs_statfs, 3511 - .remount_fs = f2fs_remount, 3512 3266 .shutdown = f2fs_shutdown, 3513 3267 }; 3514 3268 ··· 3695 3451 f2fs_bug_on(sbi, 1); 3696 3452 3697 3453 ret = submit_bio_wait(bio); 3454 + bio_put(bio); 3698 3455 folio_end_writeback(folio); 3699 3456 3700 3457 return ret; ··· 4767 4522 sbi->readdir_ra = true; 4768 4523 } 4769 4524 4770 - static int f2fs_fill_super(struct super_block *sb, void *data, int silent) 4525 + static int f2fs_fill_super(struct super_block *sb, struct fs_context *fc) 4771 4526 { 4527 + struct f2fs_fs_context *ctx = fc->fs_private; 4772 4528 struct f2fs_sb_info *sbi; 4773 4529 struct f2fs_super_block *raw_super; 4774 4530 struct inode *root; 4775 4531 int err; 4776 4532 bool skip_recovery = false, need_fsck = false; 4777 - char *options = NULL; 4778 4533 int recovery, i, valid_super_block; 4779 4534 struct curseg_info *seg_i; 4780 4535 int retry_cnt = 1; ··· 4837 4592 sizeof(raw_super->uuid)); 4838 4593 4839 4594 default_options(sbi, false); 4840 - /* parse mount options */ 4841 - options = kstrdup((const char *)data, GFP_KERNEL); 4842 - if (data && !options) { 4843 - err = -ENOMEM; 4844 - goto free_sb_buf; 4845 - } 4846 4595 4847 - err = parse_options(sbi, options, false); 4596 + err = f2fs_check_opt_consistency(fc, sb); 4848 4597 if (err) 4849 - goto free_options; 4598 + goto free_sb_buf; 4850 4599 4851 - err = f2fs_default_check(sbi); 4600 + f2fs_apply_options(fc, sb); 4601 + 4602 + err = f2fs_sanity_check_options(sbi, false); 4852 4603 if (err) 4853 4604 goto free_options; 4854 4605 ··· 5011 4770 /* get segno of first zoned block device */ 5012 4771 
 	sbi->first_seq_zone_segno = get_first_seq_zone_segno(sbi);
+	sbi->reserved_pin_section = f2fs_sb_has_blkzoned(sbi) ?
+			ZONED_PIN_SEC_REQUIRED_COUNT :
+			GET_SEC_FROM_SEG(sbi, overprovision_segments(sbi));
+
 	/* Read accumulated write IO statistics if exists */
 	seg_i = CURSEG_I(sbi, CURSEG_HOT_NODE);
 	if (__exist_node_summaries(sbi))
···
 		if (err)
 			goto sync_free_meta;
 	}
-	kvfree(options);

 	/* recover broken superblock */
 	if (recovery) {
···
 	f2fs_destroy_iostat(sbi);
 free_bio_info:
 	for (i = 0; i < NR_PAGE_TYPE; i++)
-		kvfree(sbi->write_io[i]);
+		kfree(sbi->write_io[i]);

 #if IS_ENABLED(CONFIG_UNICODE)
 	utf8_unload(sb->s_encoding);
···
 	for (i = 0; i < MAXQUOTAS; i++)
 		kfree(F2FS_OPTION(sbi).s_qf_names[i]);
 #endif
-	fscrypt_free_dummy_policy(&F2FS_OPTION(sbi).dummy_enc_policy);
-	kvfree(options);
+	/* no need to free dummy_enc_policy, we just keep it in ctx when failed */
+	swap(F2FS_CTX_INFO(ctx).dummy_enc_policy, F2FS_OPTION(sbi).dummy_enc_policy);
 free_sb_buf:
 	kfree(raw_super);
 free_sbi:
···
 	return err;
 }

-static struct dentry *f2fs_mount(struct file_system_type *fs_type, int flags,
-			const char *dev_name, void *data)
+static int f2fs_get_tree(struct fs_context *fc)
 {
-	return mount_bdev(fs_type, flags, dev_name, data, f2fs_fill_super);
+	return get_tree_bdev(fc, f2fs_fill_super);
 }
+
+static int f2fs_reconfigure(struct fs_context *fc)
+{
+	struct super_block *sb = fc->root->d_sb;
+
+	return __f2fs_remount(fc, sb);
+}
+
+static void f2fs_fc_free(struct fs_context *fc)
+{
+	struct f2fs_fs_context *ctx = fc->fs_private;
+
+	if (!ctx)
+		return;
+
+#ifdef CONFIG_QUOTA
+	f2fs_unnote_qf_name_all(fc);
+#endif
+	fscrypt_free_dummy_policy(&F2FS_CTX_INFO(ctx).dummy_enc_policy);
+	kfree(ctx);
+}
+
+static const struct fs_context_operations f2fs_context_ops = {
+	.parse_param	= f2fs_parse_param,
+	.get_tree	= f2fs_get_tree,
+	.reconfigure	= f2fs_reconfigure,
+	.free		= f2fs_fc_free,
+};

 static void kill_f2fs_super(struct super_block *sb)
 {
···
 	}
 }

+static int f2fs_init_fs_context(struct fs_context *fc)
+{
+	struct f2fs_fs_context *ctx;
+
+	ctx = kzalloc(sizeof(struct f2fs_fs_context), GFP_KERNEL);
+	if (!ctx)
+		return -ENOMEM;
+
+	fc->fs_private = ctx;
+	fc->ops = &f2fs_context_ops;
+
+	return 0;
+}
+
 static struct file_system_type f2fs_fs_type = {
 	.owner			= THIS_MODULE,
 	.name			= "f2fs",
-	.mount			= f2fs_mount,
+	.init_fs_context	= f2fs_init_fs_context,
 	.kill_sb		= kill_f2fs_super,
 	.fs_flags		= FS_REQUIRES_DEV | FS_ALLOW_IDMAP,
 };
fs/f2fs/sysfs.c  (+48)
···
 		return count;
 	}

+	if (!strcmp(a->attr.name, "gc_no_zoned_gc_percent")) {
+		if (t > 100)
+			return -EINVAL;
+		*ui = (unsigned int)t;
+		return count;
+	}
+
+	if (!strcmp(a->attr.name, "gc_boost_zoned_gc_percent")) {
+		if (t > 100)
+			return -EINVAL;
+		*ui = (unsigned int)t;
+		return count;
+	}
+
+	if (!strcmp(a->attr.name, "gc_valid_thresh_ratio")) {
+		if (t > 100)
+			return -EINVAL;
+		*ui = (unsigned int)t;
+		return count;
+	}
+
 #ifdef CONFIG_F2FS_IOSTAT
 	if (!strcmp(a->attr.name, "iostat_enable")) {
 		sbi->iostat_enable = !!t;
···
 		if (t > MAX_DIR_HASH_DEPTH)
 			return -EINVAL;
 		sbi->dir_level = t;
+		return count;
+	}
+
+	if (!strcmp(a->attr.name, "reserved_pin_section")) {
+		if (t > GET_SEC_FROM_SEG(sbi, overprovision_segments(sbi)))
+			return -EINVAL;
+		*ui = (unsigned int)t;
+		return count;
+	}
+
+	if (!strcmp(a->attr.name, "gc_boost_gc_multiple")) {
+		if (t < 1 || t > SEGS_PER_SEC(sbi))
+			return -EINVAL;
+		sbi->gc_thread->boost_gc_multiple = (unsigned int)t;
+		return count;
+	}
+
+	if (!strcmp(a->attr.name, "gc_boost_gc_greedy")) {
+		if (t > GC_GREEDY)
+			return -EINVAL;
+		sbi->gc_thread->boost_gc_greedy = (unsigned int)t;
 		return count;
 	}
···
 GC_THREAD_RW_ATTR(gc_no_zoned_gc_percent, no_zoned_gc_percent);
 GC_THREAD_RW_ATTR(gc_boost_zoned_gc_percent, boost_zoned_gc_percent);
 GC_THREAD_RW_ATTR(gc_valid_thresh_ratio, valid_thresh_ratio);
+GC_THREAD_RW_ATTR(gc_boost_gc_multiple, boost_gc_multiple);
+GC_THREAD_RW_ATTR(gc_boost_gc_greedy, boost_gc_greedy);

 /* SM_INFO ATTR */
 SM_INFO_RW_ATTR(reclaim_segments, rec_prefree_segments);
···
 F2FS_SBI_GENERAL_RW_ATTR(blkzone_alloc_policy);
 #endif
 F2FS_SBI_GENERAL_RW_ATTR(carve_out);
+F2FS_SBI_GENERAL_RW_ATTR(reserved_pin_section);

 /* STAT_INFO ATTR */
 #ifdef CONFIG_F2FS_STAT_FS
···
 	ATTR_LIST(gc_no_zoned_gc_percent),
 	ATTR_LIST(gc_boost_zoned_gc_percent),
 	ATTR_LIST(gc_valid_thresh_ratio),
+	ATTR_LIST(gc_boost_gc_multiple),
+	ATTR_LIST(gc_boost_gc_greedy),
 	ATTR_LIST(gc_idle),
 	ATTR_LIST(gc_urgent),
 	ATTR_LIST(reclaim_segments),
···
 	ATTR_LIST(last_age_weight),
 	ATTR_LIST(max_read_extent_count),
 	ATTR_LIST(carve_out),
+	ATTR_LIST(reserved_pin_section),
 	NULL,
 };
 ATTRIBUTE_GROUPS(f2fs);
include/linux/f2fs_fs.h  (+1 -1)
···
 /* Node IDs in an Indirect Block */
 #define NIDS_PER_BLOCK	((F2FS_BLKSIZE - sizeof(struct node_footer)) / sizeof(__le32))

-#define ADDRS_PER_PAGE(page, inode)	(addrs_per_page(inode, IS_INODE(page)))
+#define ADDRS_PER_PAGE(folio, inode)	(addrs_per_page(inode, IS_INODE(folio)))

 #define NODE_DIR1_BLOCK		(DEF_ADDRS_PER_INODE + 1)
 #define NODE_DIR2_BLOCK		(DEF_ADDRS_PER_INODE + 2)
include/linux/fscrypt.h  (+6 -4)
···
 	return (struct page *)page_private(bounce_page);
 }

-static inline bool fscrypt_is_bounce_folio(struct folio *folio)
+static inline bool fscrypt_is_bounce_folio(const struct folio *folio)
 {
 	return folio->mapping == NULL;
 }

-static inline struct folio *fscrypt_pagecache_folio(struct folio *bounce_folio)
+static inline
+struct folio *fscrypt_pagecache_folio(const struct folio *bounce_folio)
 {
 	return bounce_folio->private;
 }
···
 	return ERR_PTR(-EINVAL);
 }

-static inline bool fscrypt_is_bounce_folio(struct folio *folio)
+static inline bool fscrypt_is_bounce_folio(const struct folio *folio)
 {
 	return false;
 }

-static inline struct folio *fscrypt_pagecache_folio(struct folio *bounce_folio)
+static inline
+struct folio *fscrypt_pagecache_folio(const struct folio *bounce_folio)
 {
 	WARN_ON_ONCE(1);
 	return ERR_PTR(-EINVAL);