Merge tag 'f2fs-for-6.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs

Pull f2fs updates from Jaegeuk Kim:
"This series focuses on minor clean-ups and performance optimizations
across sysfs, documentation, debugfs, tracepoints, slab allocation,
and GC. Furthermore, it resolves several corner-case bugs caught by
xfstests, as well as issues related to 16KB page support and
f2fs_enable_checkpoint.

Enhancements:
- wrap ASCII tables in literal blocks to fix LaTeX build
- optimize trace_f2fs_write_checkpoint with enums
- support to show curseg.next_blkoff in debugfs
- add a sysfs entry to show max open zones
- add fadvise tracepoint
- use global inline_xattr_slab instead of per-sb slab cache
- set default valid_thresh_ratio to 80 for zoned devices
- keep one-time GC mode enabled during the whole zoned GC cycle

Bug fixes:
- ensure node page reads complete before f2fs_put_super() finishes
- do not account invalid blocks in get_left_section_blocks()
- revert summary entry count from 2048 to 512 in 16kb block support
- detect recoverable inode during dryrun of find_fsync_dnodes()
- fix age extent cache insertion skip on counter overflow
- add sanity checks before unlinking and loading inodes
- ensure minimum trim granularity accounts for all devices
- block cache/dio write during f2fs_enable_checkpoint()
- propagate error from f2fs_enable_checkpoint()
- invalidate dentry cache on failed whiteout creation
- avoid updating compression context during writeback
- avoid updating zero-sized extent in extent cache
- avoid potential deadlock"

* tag 'f2fs-for-6.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (39 commits)
f2fs: ignore discard return value
f2fs: optimize trace_f2fs_write_checkpoint with enums
f2fs: fix to not account invalid blocks in get_left_section_blocks()
f2fs: support to show curseg.next_blkoff in debugfs
docs: f2fs: wrap ASCII tables in literal blocks to fix LaTeX build
f2fs: expand scalability of f2fs mount option
f2fs: change default schedule timeout value
f2fs: introduce f2fs_schedule_timeout()
f2fs: use memalloc_retry_wait() as much as possible
f2fs: add a sysfs entry to show max open zones
f2fs: wrap all unusable_blocks_per_sec code in CONFIG_BLK_DEV_ZONED
f2fs: simplify list initialization in f2fs_recover_fsync_data()
f2fs: revert summary entry count from 2048 to 512 in 16kb block support
f2fs: fix to detect recoverable inode during dryrun of find_fsync_dnodes()
f2fs: fix return value of f2fs_recover_fsync_data()
f2fs: add fadvise tracepoint
f2fs: fix age extent cache insertion skip on counter overflow
f2fs: Add sanity checks before unlinking and loading inodes
f2fs: Rename f2fs_unlink exit label
f2fs: ensure minimum trim granularity accounts for all devices
...

+6
Documentation/ABI/testing/sysfs-fs-f2fs
@@ -643 +643 @@
 Description:	Shows the number of unusable blocks in a section which was defined by
 		the zone capacity reported by underlying zoned device.
 
+What:		/sys/fs/f2fs/<disk>/max_open_zones
+Date:		November 2025
+Contact:	"Yongpeng Yang" <yangyongpeng@xiaomi.com>
+Description:	Shows the max number of zones that F2FS can write concurrently when a zoned
+		device is mounted.
+
 What:		/sys/fs/f2fs/<disk>/current_atomic_write
 Date:		July 2022
 Contact:	"Daeho Jeong" <daehojeong@google.com>
+68 -61
Documentation/filesystems/f2fs.rst
@@ -188 +188 @@
 			 enabled with fault_injection option, fault type value
 			 is shown below, it supports single or combined type.
 
-			 ===========================      ==========
-			 Type_Name                        Type_Value
-			 ===========================      ==========
-			 FAULT_KMALLOC                    0x00000001
-			 FAULT_KVMALLOC                   0x00000002
-			 FAULT_PAGE_ALLOC                 0x00000004
-			 FAULT_PAGE_GET                   0x00000008
-			 FAULT_ALLOC_BIO                  0x00000010 (obsolete)
-			 FAULT_ALLOC_NID                  0x00000020
-			 FAULT_ORPHAN                     0x00000040
-			 FAULT_BLOCK                      0x00000080
-			 FAULT_DIR_DEPTH                  0x00000100
-			 FAULT_EVICT_INODE                0x00000200
-			 FAULT_TRUNCATE                   0x00000400
-			 FAULT_READ_IO                    0x00000800
-			 FAULT_CHECKPOINT                 0x00001000
-			 FAULT_DISCARD                    0x00002000
-			 FAULT_WRITE_IO                   0x00004000
-			 FAULT_SLAB_ALLOC                 0x00008000
-			 FAULT_DQUOT_INIT                 0x00010000
-			 FAULT_LOCK_OP                    0x00020000
-			 FAULT_BLKADDR_VALIDITY           0x00040000
-			 FAULT_BLKADDR_CONSISTENCE        0x00080000
-			 FAULT_NO_SEGMENT                 0x00100000
-			 FAULT_INCONSISTENT_FOOTER        0x00200000
-			 FAULT_TIMEOUT                    0x00400000 (1000ms)
-			 FAULT_VMALLOC                    0x00800000
-			 ===========================      ==========
+			 .. code-block:: none
+
+			    ===========================      ==========
+			    Type_Name                        Type_Value
+			    ===========================      ==========
+			    FAULT_KMALLOC                    0x00000001
+			    FAULT_KVMALLOC                   0x00000002
+			    FAULT_PAGE_ALLOC                 0x00000004
+			    FAULT_PAGE_GET                   0x00000008
+			    FAULT_ALLOC_BIO                  0x00000010 (obsolete)
+			    FAULT_ALLOC_NID                  0x00000020
+			    FAULT_ORPHAN                     0x00000040
+			    FAULT_BLOCK                      0x00000080
+			    FAULT_DIR_DEPTH                  0x00000100
+			    FAULT_EVICT_INODE                0x00000200
+			    FAULT_TRUNCATE                   0x00000400
+			    FAULT_READ_IO                    0x00000800
+			    FAULT_CHECKPOINT                 0x00001000
+			    FAULT_DISCARD                    0x00002000
+			    FAULT_WRITE_IO                   0x00004000
+			    FAULT_SLAB_ALLOC                 0x00008000
+			    FAULT_DQUOT_INIT                 0x00010000
+			    FAULT_LOCK_OP                    0x00020000
+			    FAULT_BLKADDR_VALIDITY           0x00040000
+			    FAULT_BLKADDR_CONSISTENCE        0x00080000
+			    FAULT_NO_SEGMENT                 0x00100000
+			    FAULT_INCONSISTENT_FOOTER        0x00200000
+			    FAULT_TIMEOUT                    0x00400000 (1000ms)
+			    FAULT_VMALLOC                    0x00800000
+			    ===========================      ==========
 mode=%s			 Control block allocation mode which supports "adaptive"
 			 and "lfs". In "lfs" mode, there should be no random
 			 writes towards main area.
@@ -298 +296 @@
 compress_algorithm=%s	 Control compress algorithm, currently f2fs supports "lzo",
 			 "lz4", "zstd" and "lzo-rle" algorithm.
 compress_algorithm=%s:%d Control compress algorithm and its compress level, now, only
-			 "lz4" and "zstd" support compress level config.
+			 "lz4" and "zstd" support compress level config::
 
-			 =========	===========
-			 algorithm	level range
-			 =========	===========
-			 lz4		3 - 16
-			 zstd		1 - 22
-			 =========	===========
+			    =========	===========
+			    algorithm	level range
+			    =========	===========
+			    lz4		3 - 16
+			    zstd	1 - 22
+			    =========	===========
+
 compress_log_size=%u	 Support configuring compress cluster size. The size will
 			 be 4KB * (1 << %u). The default and minimum sizes are 16KB.
 compress_extension=%s	 Support adding specified extension, so that f2fs can enable
@@ -371 +368 @@
 			 the partition in read-only mode. By default it uses "continue"
 			 mode.
 
-			 ====================== =============== =============== ========
-			 mode                   continue        remount-ro      panic
-			 ====================== =============== =============== ========
-			 access ops             normal          normal          N/A
-			 syscall errors         -EIO            -EROFS          N/A
-			 mount option           rw              ro              N/A
-			 pending dir write      keep            keep            N/A
-			 pending non-dir write  drop            keep            N/A
-			 pending node write     drop            keep            N/A
-			 pending meta write     keep            keep            N/A
-			 ====================== =============== =============== ========
+			 .. code-block:: none
+
+			    ====================== =============== =============== ========
+			    mode                   continue        remount-ro      panic
+			    ====================== =============== =============== ========
+			    access ops             normal          normal          N/A
+			    syscall errors         -EIO            -EROFS          N/A
+			    mount option           rw              ro              N/A
+			    pending dir write      keep            keep            N/A
+			    pending non-dir write  drop            keep            N/A
+			    pending node write     drop            keep            N/A
+			    pending meta write     keep            keep            N/A
+			    ====================== =============== =============== ========
 nat_bits		 Enable nat_bits feature to enhance full/empty nat blocks access,
 			 by default it's disabled.
 lookup_mode=%s		 Control the directory lookup behavior for casefolded
 			 directories. This option has no effect on directories
 			 that do not have the casefold feature enabled.
 
-			 ==================	========================================
-			 Value			Description
-			 ==================	========================================
-			 perf			(Default) Enforces a hash-only lookup.
-						The linear search fallback is always
-						disabled, ignoring the on-disk flag.
-			 compat			Enables the linear search fallback for
-						compatibility with directory entries
-						created by older kernel that used a
-						different case-folding algorithm.
-						This mode ignores the on-disk flag.
-			 auto			F2FS determines the mode based on the
-						on-disk `SB_ENC_NO_COMPAT_FALLBACK_FL`
-						flag.
-			 ==================	========================================
+			 .. code-block:: none
+
+			    ==================	========================================
+			    Value		Description
+			    ==================	========================================
+			    perf		(Default) Enforces a hash-only lookup.
+					The linear search fallback is always
+					disabled, ignoring the on-disk flag.
+			    compat		Enables the linear search fallback for
+					compatibility with directory entries
+					created by older kernel that used a
+					different case-folding algorithm.
+					This mode ignores the on-disk flag.
+			    auto		F2FS determines the mode based on the
+					on-disk `SB_ENC_NO_COMPAT_FALLBACK_FL`
+					flag.
+			    ==================	========================================
 ======================== ============================================================
 
 Debugfs Entries
+5 -5
fs/f2fs/checkpoint.c
@@ -1318 +1318 @@
 		f2fs_submit_merged_write(sbi, DATA);
 
 		prepare_to_wait(&sbi->cp_wait, &wait, TASK_UNINTERRUPTIBLE);
-		io_schedule_timeout(DEFAULT_IO_TIMEOUT);
+		io_schedule_timeout(DEFAULT_SCHEDULE_TIMEOUT);
 	}
 	finish_wait(&sbi->cp_wait, &wait);
 }
@@ -1673 +1673 @@
 		goto out;
 	}
 
-	trace_f2fs_write_checkpoint(sbi->sb, cpc->reason, "start block_ops");
+	trace_f2fs_write_checkpoint(sbi->sb, cpc->reason, CP_PHASE_START_BLOCK_OPS);
 
 	err = block_operations(sbi);
 	if (err)
@@ -1681 +1681 @@
 
 	stat_cp_time(cpc, CP_TIME_OP_LOCK);
 
-	trace_f2fs_write_checkpoint(sbi->sb, cpc->reason, "finish block_ops");
+	trace_f2fs_write_checkpoint(sbi->sb, cpc->reason, CP_PHASE_FINISH_BLOCK_OPS);
 
 	f2fs_flush_merged_writes(sbi);
@@ -1747 +1747 @@
 
 	/* update CP_TIME to trigger checkpoint periodically */
 	f2fs_update_time(sbi, CP_TIME);
-	trace_f2fs_write_checkpoint(sbi->sb, cpc->reason, "finish checkpoint");
+	trace_f2fs_write_checkpoint(sbi->sb, cpc->reason, CP_PHASE_FINISH_CHECKPOINT);
 out:
 	if (cpc->reason != CP_RESIZE)
 		f2fs_up_write(&sbi->cp_global_sem);
@@ -1974 +1974 @@
 
 	/* Let's wait for the previous dispatched checkpoint. */
 	while (atomic_read(&cprc->queued_ckpt))
-		io_schedule_timeout(DEFAULT_IO_TIMEOUT);
+		io_schedule_timeout(DEFAULT_SCHEDULE_TIMEOUT);
 }
 
 void f2fs_init_ckpt_req_control(struct f2fs_sb_info *sbi)
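A note on the tracepoint change above: recording an enum instead of a string keeps the ring-buffer entry a fixed small integer and defers the string lookup to print time. A minimal sketch of the usual decoding pattern in the event's TP_printk() (the show_cp_phase() helper name is illustrative, not necessarily what the f2fs patch defines):

	/* sketch only: map the recorded enum to a label at print time */
	#define show_cp_phase(phase)						\
		__print_symbolic(phase,						\
			{ CP_PHASE_START_BLOCK_OPS,	"start block_ops" },	\
			{ CP_PHASE_FINISH_BLOCK_OPS,	"finish block_ops" },	\
			{ CP_PHASE_FINISH_CHECKPOINT,	"finish checkpoint" })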
+7 -10
fs/f2fs/compress.c
@@ -120 +120 @@
 }
 
 static void f2fs_put_rpages_wbc(struct compress_ctx *cc,
-		struct writeback_control *wbc, bool redirty, int unlock)
+		struct writeback_control *wbc, bool redirty, bool unlock)
 {
 	unsigned int i;
@@ -759 +759 @@
 		ret = -EFSCORRUPTED;
 
 		/* Avoid f2fs_commit_super in irq context */
-		if (!in_task)
-			f2fs_handle_error_async(sbi, ERROR_FAIL_DECOMPRESSION);
-		else
-			f2fs_handle_error(sbi, ERROR_FAIL_DECOMPRESSION);
+		f2fs_handle_error(sbi, ERROR_FAIL_DECOMPRESSION);
 		goto out_release;
 	}
@@ -1057 +1060 @@
 		f2fs_submit_merged_write(F2FS_I_SB(cc->inode), DATA);
 		while (atomic_read(&cic->pending_pages) !=
 				(cc->valid_nr_cpages - submitted + 1))
-			f2fs_io_schedule_timeout(DEFAULT_IO_TIMEOUT);
+			f2fs_io_schedule_timeout(DEFAULT_SCHEDULE_TIMEOUT);
 	}
 
 	/* Cancel writeback and stay locked. */
@@ -1202 +1205 @@
 	if (copied)
 		set_cluster_dirty(&cc);
 
-	f2fs_put_rpages_wbc(&cc, NULL, false, 1);
+	f2fs_put_rpages_wbc(&cc, NULL, false, true);
 	f2fs_destroy_compress_ctx(&cc, false);
 
 	return first_index;
@@ -1574 +1577 @@
 			 */
 			if (IS_NOQUOTA(cc->inode))
 				goto out;
-			f2fs_io_schedule_timeout(DEFAULT_IO_TIMEOUT);
+			f2fs_schedule_timeout(DEFAULT_SCHEDULE_TIMEOUT);
 			goto retry_write;
 		}
 		goto out;
@@ -1605 +1608 @@
 		add_compr_block_stat(cc->inode, cc->cluster_size);
 		goto write;
 	} else if (err) {
-		f2fs_put_rpages_wbc(cc, wbc, true, 1);
+		f2fs_put_rpages_wbc(cc, wbc, true, true);
 		goto destroy_out;
 	}
@@ -1619 +1622 @@
 	f2fs_bug_on(F2FS_I_SB(cc->inode), *submitted);
 
 	err = f2fs_write_raw_pages(cc, submitted, wbc, io_type);
-	f2fs_put_rpages_wbc(cc, wbc, false, 0);
+	f2fs_put_rpages_wbc(cc, wbc, false, false);
 destroy_out:
 	f2fs_destroy_compress_ctx(cc, false);
 	return err;
+33 -17
fs/f2fs/data.c
@@ -752 +752 @@
 }
 
 static void add_bio_entry(struct f2fs_sb_info *sbi, struct bio *bio,
-				struct page *page, enum temp_type temp)
+				struct folio *folio, enum temp_type temp)
 {
 	struct f2fs_bio_info *io = sbi->write_io[DATA] + temp;
 	struct bio_entry *be;
@@ -761 +761 @@
 	be->bio = bio;
 	bio_get(bio);
 
-	if (bio_add_page(bio, page, PAGE_SIZE, 0) != PAGE_SIZE)
-		f2fs_bug_on(sbi, 1);
+	bio_add_folio_nofail(bio, folio, folio_size(folio), 0);
 
 	f2fs_down_write(&io->bio_list_lock);
 	list_add_tail(&be->list, &io->bio_list);
@@ -775 +776 @@
 }
 
 static int add_ipu_page(struct f2fs_io_info *fio, struct bio **bio,
-							struct page *page)
+							struct folio *folio)
 {
 	struct folio *fio_folio = fio->folio;
 	struct f2fs_sb_info *sbi = fio->sbi;
@@ -801 +802 @@
 			if (f2fs_crypt_mergeable_bio(*bio,
 					fio_folio->mapping->host,
 					fio_folio->index, fio) &&
-			    bio_add_page(*bio, page, PAGE_SIZE, 0) ==
-							PAGE_SIZE) {
+			    bio_add_folio(*bio, folio, folio_size(folio), 0)) {
 				ret = 0;
 				break;
 			}
@@ -902 +904 @@
 		f2fs_set_bio_crypt_ctx(bio, folio->mapping->host,
 				folio->index, fio, GFP_NOIO);
 
-		add_bio_entry(fio->sbi, bio, &data_folio->page, fio->temp);
+		add_bio_entry(fio->sbi, bio, data_folio, fio->temp);
 	} else {
-		if (add_ipu_page(fio, &bio, &data_folio->page))
+		if (add_ipu_page(fio, &bio, data_folio))
 			goto alloc_new;
 	}
@@ -1273 +1275 @@
 	struct address_space *mapping = inode->i_mapping;
 	struct folio *folio;
 
-	folio = __filemap_get_folio(mapping, index, FGP_ACCESSED, 0);
+	folio = f2fs_filemap_get_folio(mapping, index, FGP_ACCESSED, 0);
 	if (IS_ERR(folio))
 		goto read;
 	if (folio_test_uptodate(folio))
@@ -1418 +1420 @@
 
 static void f2fs_map_lock(struct f2fs_sb_info *sbi, int flag)
 {
+	f2fs_down_read(&sbi->cp_enable_rwsem);
 	if (flag == F2FS_GET_BLOCK_PRE_AIO)
 		f2fs_down_read(&sbi->node_change);
 	else
@@ -1431 +1432 @@
 		f2fs_up_read(&sbi->node_change);
 	else
 		f2fs_unlock_op(sbi);
+	f2fs_up_read(&sbi->cp_enable_rwsem);
 }
 
 int f2fs_get_block_locked(struct dnode_of_data *dn, pgoff_t index)
@@ -3138 +3138 @@
 			} else if (ret == -EAGAIN) {
 				ret = 0;
 				if (wbc->sync_mode == WB_SYNC_ALL) {
-					f2fs_io_schedule_timeout(
-							DEFAULT_IO_TIMEOUT);
+					f2fs_schedule_timeout(
+							DEFAULT_SCHEDULE_TIMEOUT);
 					goto retry_write;
 				}
 				goto next;
@@ -3221 +3221 @@
 	return false;
 }
 
+static inline void account_writeback(struct inode *inode, bool inc)
+{
+	if (!f2fs_sb_has_compression(F2FS_I_SB(inode)))
+		return;
+
+	f2fs_down_read(&F2FS_I(inode)->i_sem);
+	if (inc)
+		atomic_inc(&F2FS_I(inode)->writeback);
+	else
+		atomic_dec(&F2FS_I(inode)->writeback);
+	f2fs_up_read(&F2FS_I(inode)->i_sem);
+}
+
 static int __f2fs_write_data_pages(struct address_space *mapping,
 						struct writeback_control *wbc,
 						enum iostat_type io_type)
@@ -3279 +3266 @@
 		locked = true;
 	}
 
+	account_writeback(inode, true);
+
 	blk_start_plug(&plug);
 	ret = f2fs_write_cache_pages(mapping, wbc, io_type);
 	blk_finish_plug(&plug);
+
+	account_writeback(inode, false);
 
 	if (locked)
 		mutex_unlock(&sbi->writepages);
@@ -3583 +3566 @@
 	 * Do not use FGP_STABLE to avoid deadlock.
 	 * Will wait that below with our IO control.
 	 */
-	folio = __filemap_get_folio(mapping, index,
-				FGP_LOCK | FGP_WRITE | FGP_CREAT, GFP_NOFS);
+	folio = f2fs_filemap_get_folio(mapping, index,
+			FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_NOFS,
+			mapping_gfp_mask(mapping));
 	if (IS_ERR(folio)) {
 		err = PTR_ERR(folio);
 		goto fail;
@@ -3655 +3637 @@
 	return 0;
 
 put_folio:
-	folio_unlock(folio);
-	folio_put(folio);
+	f2fs_folio_put(folio, true);
 fail:
 	f2fs_write_failed(inode, pos + len);
 	return err;
@@ -3711 +3694 @@
 							pos + copied);
 	}
 unlock_out:
-	folio_unlock(folio);
-	folio_put(folio);
+	f2fs_folio_put(folio, true);
 	f2fs_update_time(F2FS_I_SB(inode), REQ_TIME);
 	return copied;
 }
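A note on the new cp_enable_rwsem above: f2fs_map_lock()/f2fs_map_unlock() now bracket every block-mapping operation with a read lock, so whoever takes the write side naturally waits for in-flight cache/dio writes to drain. A minimal sketch of the writer side, assuming (from the pull summary's "block cache/dio write during f2fs_enable_checkpoint()") that f2fs_enable_checkpoint() is the write-side user:

	/* hedged sketch: the exact placement inside f2fs_enable_checkpoint()
	 * is an assumption drawn from the pull summary, not from this diff */
	f2fs_down_write(&sbi->cp_enable_rwsem);	/* drains all in-flight mappers */
	/* ... flush dirty data and re-enable checkpointing ... */
	f2fs_up_write(&sbi->cp_enable_rwsem);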
+19 -10
fs/f2fs/debug.c
@@ -251 +251 @@
 	for (i = CURSEG_HOT_DATA; i < NO_CHECK_TYPE; i++) {
 		struct curseg_info *curseg = CURSEG_I(sbi, i);
 
+		si->blkoff[i] = curseg->next_blkoff;
 		si->curseg[i] = curseg->segno;
 		si->cursec[i] = GET_SEC_FROM_SEG(sbi, curseg->segno);
 		si->curzone[i] = GET_ZONE_FROM_SEC(sbi, si->cursec[i]);
@@ -509 +508 @@
 	seq_printf(s, "\nMain area: %d segs, %d secs %d zones\n",
 		   si->main_area_segs, si->main_area_sections,
 		   si->main_area_zones);
-	seq_printf(s, "  TYPE            %8s %8s %8s %10s %10s %10s\n",
-		   "segno", "secno", "zoneno", "dirty_seg", "full_seg", "valid_blk");
-	seq_printf(s, "  - COLD   data: %8d %8d %8d %10u %10u %10u\n",
+	seq_printf(s, "  TYPE            %8s %8s %8s %8s %10s %10s %10s\n",
+		   "blkoff", "segno", "secno", "zoneno", "dirty_seg", "full_seg", "valid_blk");
+	seq_printf(s, "  - COLD   data: %8d %8d %8d %8d %10u %10u %10u\n",
+		   si->blkoff[CURSEG_COLD_DATA],
 		   si->curseg[CURSEG_COLD_DATA],
 		   si->cursec[CURSEG_COLD_DATA],
 		   si->curzone[CURSEG_COLD_DATA],
 		   si->dirty_seg[CURSEG_COLD_DATA],
 		   si->full_seg[CURSEG_COLD_DATA],
 		   si->valid_blks[CURSEG_COLD_DATA]);
-	seq_printf(s, "  - WARM   data: %8d %8d %8d %10u %10u %10u\n",
+	seq_printf(s, "  - WARM   data: %8d %8d %8d %8d %10u %10u %10u\n",
+		   si->blkoff[CURSEG_WARM_DATA],
 		   si->curseg[CURSEG_WARM_DATA],
 		   si->cursec[CURSEG_WARM_DATA],
 		   si->curzone[CURSEG_WARM_DATA],
 		   si->dirty_seg[CURSEG_WARM_DATA],
 		   si->full_seg[CURSEG_WARM_DATA],
 		   si->valid_blks[CURSEG_WARM_DATA]);
-	seq_printf(s, "  - HOT    data: %8d %8d %8d %10u %10u %10u\n",
+	seq_printf(s, "  - HOT    data: %8d %8d %8d %8d %10u %10u %10u\n",
+		   si->blkoff[CURSEG_HOT_DATA],
 		   si->curseg[CURSEG_HOT_DATA],
 		   si->cursec[CURSEG_HOT_DATA],
 		   si->curzone[CURSEG_HOT_DATA],
 		   si->dirty_seg[CURSEG_HOT_DATA],
 		   si->full_seg[CURSEG_HOT_DATA],
 		   si->valid_blks[CURSEG_HOT_DATA]);
-	seq_printf(s, "  - Dir   dnode: %8d %8d %8d %10u %10u %10u\n",
+	seq_printf(s, "  - Dir   dnode: %8d %8d %8d %8d %10u %10u %10u\n",
+		   si->blkoff[CURSEG_HOT_NODE],
 		   si->curseg[CURSEG_HOT_NODE],
 		   si->cursec[CURSEG_HOT_NODE],
 		   si->curzone[CURSEG_HOT_NODE],
 		   si->dirty_seg[CURSEG_HOT_NODE],
 		   si->full_seg[CURSEG_HOT_NODE],
 		   si->valid_blks[CURSEG_HOT_NODE]);
-	seq_printf(s, "  - File  dnode: %8d %8d %8d %10u %10u %10u\n",
+	seq_printf(s, "  - File  dnode: %8d %8d %8d %8d %10u %10u %10u\n",
+		   si->blkoff[CURSEG_WARM_NODE],
 		   si->curseg[CURSEG_WARM_NODE],
 		   si->cursec[CURSEG_WARM_NODE],
 		   si->curzone[CURSEG_WARM_NODE],
 		   si->dirty_seg[CURSEG_WARM_NODE],
 		   si->full_seg[CURSEG_WARM_NODE],
 		   si->valid_blks[CURSEG_WARM_NODE]);
-	seq_printf(s, "  - Indir nodes: %8d %8d %8d %10u %10u %10u\n",
+	seq_printf(s, "  - Indir nodes: %8d %8d %8d %8d %10u %10u %10u\n",
+		   si->blkoff[CURSEG_COLD_NODE],
 		   si->curseg[CURSEG_COLD_NODE],
 		   si->cursec[CURSEG_COLD_NODE],
 		   si->curzone[CURSEG_COLD_NODE],
 		   si->dirty_seg[CURSEG_COLD_NODE],
 		   si->full_seg[CURSEG_COLD_NODE],
 		   si->valid_blks[CURSEG_COLD_NODE]);
-	seq_printf(s, "  - Pinned file: %8d %8d %8d\n",
+	seq_printf(s, "  - Pinned file: %8d %8d %8d %8d\n",
+		   si->blkoff[CURSEG_COLD_DATA_PINNED],
 		   si->curseg[CURSEG_COLD_DATA_PINNED],
 		   si->cursec[CURSEG_COLD_DATA_PINNED],
 		   si->curzone[CURSEG_COLD_DATA_PINNED]);
-	seq_printf(s, "  - ATGC   data: %8d %8d %8d\n",
+	seq_printf(s, "  - ATGC   data: %8d %8d %8d %8d\n",
+		   si->blkoff[CURSEG_ALL_DATA_ATGC],
 		   si->curseg[CURSEG_ALL_DATA_ATGC],
 		   si->cursec[CURSEG_ALL_DATA_ATGC],
 		   si->curzone[CURSEG_ALL_DATA_ATGC]);
+3 -2
fs/f2fs/extent_cache.c
@@ -808 +808 @@
 	}
 	goto out_read_extent_cache;
 update_age_extent_cache:
-	if (!tei->last_blocks)
+	if (tei->last_blocks == F2FS_EXTENT_AGE_INVALID)
 		goto out_read_extent_cache;
 
 	__set_extent_info(&ei, fofs, len, 0, false,
@@ -912 +912 @@
 		cur_age = cur_blocks - tei.last_blocks;
 	else
 		/* allocated_data_blocks overflow */
-		cur_age = ULLONG_MAX - tei.last_blocks + cur_blocks;
+		cur_age = (ULLONG_MAX - 1) - tei.last_blocks + cur_blocks;
 
 	if (tei.age)
 		ei->age = __calculate_block_age(sbi, cur_age, tei.age);
@@ -1114 +1114 @@
 	struct extent_info ei = {
 		.fofs = fofs,
 		.len = len,
+		.last_blocks = F2FS_EXTENT_AGE_INVALID,
 	};
 
 	if (!__may_extent_tree(dn->inode, EX_BLOCK_AGE))
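The arithmetic above pairs with reserving ULLONG_MAX as F2FS_EXTENT_AGE_INVALID: once that marker value exists, the allocation counter must stay within [0, ULLONG_MAX - 1], and the wrap-around distance is computed over that range (the matching counter reset is in the segment.c hunk further down). A worked sketch of the same computation:

	/* sketch: age = distance from last_blocks to cur_blocks on a
	 * counter that wraps from ULLONG_MAX - 1 back to 0, leaving
	 * ULLONG_MAX free to mean "invalid" */
	static u64 block_age(u64 cur_blocks, u64 last_blocks)
	{
		if (cur_blocks >= last_blocks)
			return cur_blocks - last_blocks;
		return (ULLONG_MAX - 1) - last_blocks + cur_blocks;
	}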
+96 -66
fs/f2fs/f2fs.h
@@ -96 +96 @@
 /*
  * For mount options
  */
-#define F2FS_MOUNT_DISABLE_ROLL_FORWARD	0x00000001
-#define F2FS_MOUNT_DISCARD		0x00000002
-#define F2FS_MOUNT_NOHEAP		0x00000004
-#define F2FS_MOUNT_XATTR_USER		0x00000008
-#define F2FS_MOUNT_POSIX_ACL		0x00000010
-#define F2FS_MOUNT_DISABLE_EXT_IDENTIFY	0x00000020
-#define F2FS_MOUNT_INLINE_XATTR		0x00000040
-#define F2FS_MOUNT_INLINE_DATA		0x00000080
-#define F2FS_MOUNT_INLINE_DENTRY	0x00000100
-#define F2FS_MOUNT_FLUSH_MERGE		0x00000200
-#define F2FS_MOUNT_NOBARRIER		0x00000400
-#define F2FS_MOUNT_FASTBOOT		0x00000800
-#define F2FS_MOUNT_READ_EXTENT_CACHE	0x00001000
-#define F2FS_MOUNT_DATA_FLUSH		0x00002000
-#define F2FS_MOUNT_FAULT_INJECTION	0x00004000
-#define F2FS_MOUNT_USRQUOTA		0x00008000
-#define F2FS_MOUNT_GRPQUOTA		0x00010000
-#define F2FS_MOUNT_PRJQUOTA		0x00020000
-#define F2FS_MOUNT_QUOTA		0x00040000
-#define F2FS_MOUNT_INLINE_XATTR_SIZE	0x00080000
-#define F2FS_MOUNT_RESERVE_ROOT		0x00100000
-#define F2FS_MOUNT_DISABLE_CHECKPOINT	0x00200000
-#define F2FS_MOUNT_NORECOVERY		0x00400000
-#define F2FS_MOUNT_ATGC			0x00800000
-#define F2FS_MOUNT_MERGE_CHECKPOINT	0x01000000
-#define F2FS_MOUNT_GC_MERGE		0x02000000
-#define F2FS_MOUNT_COMPRESS_CACHE	0x04000000
-#define F2FS_MOUNT_AGE_EXTENT_CACHE	0x08000000
-#define F2FS_MOUNT_NAT_BITS		0x10000000
-#define F2FS_MOUNT_INLINECRYPT		0x20000000
-/*
- * Some f2fs environments expect to be able to pass the "lazytime" option
- * string rather than using the MS_LAZYTIME flag, so this must remain.
- */
-#define F2FS_MOUNT_LAZYTIME		0x40000000
-#define F2FS_MOUNT_RESERVE_NODE		0x80000000
+enum f2fs_mount_opt {
+	F2FS_MOUNT_DISABLE_ROLL_FORWARD,
+	F2FS_MOUNT_DISCARD,
+	F2FS_MOUNT_NOHEAP,
+	F2FS_MOUNT_XATTR_USER,
+	F2FS_MOUNT_POSIX_ACL,
+	F2FS_MOUNT_DISABLE_EXT_IDENTIFY,
+	F2FS_MOUNT_INLINE_XATTR,
+	F2FS_MOUNT_INLINE_DATA,
+	F2FS_MOUNT_INLINE_DENTRY,
+	F2FS_MOUNT_FLUSH_MERGE,
+	F2FS_MOUNT_NOBARRIER,
+	F2FS_MOUNT_FASTBOOT,
+	F2FS_MOUNT_READ_EXTENT_CACHE,
+	F2FS_MOUNT_DATA_FLUSH,
+	F2FS_MOUNT_FAULT_INJECTION,
+	F2FS_MOUNT_USRQUOTA,
+	F2FS_MOUNT_GRPQUOTA,
+	F2FS_MOUNT_PRJQUOTA,
+	F2FS_MOUNT_QUOTA,
+	F2FS_MOUNT_INLINE_XATTR_SIZE,
+	F2FS_MOUNT_RESERVE_ROOT,
+	F2FS_MOUNT_DISABLE_CHECKPOINT,
+	F2FS_MOUNT_NORECOVERY,
+	F2FS_MOUNT_ATGC,
+	F2FS_MOUNT_MERGE_CHECKPOINT,
+	F2FS_MOUNT_GC_MERGE,
+	F2FS_MOUNT_COMPRESS_CACHE,
+	F2FS_MOUNT_AGE_EXTENT_CACHE,
+	F2FS_MOUNT_NAT_BITS,
+	F2FS_MOUNT_INLINECRYPT,
+	/*
+	 * Some f2fs environments expect to be able to pass the "lazytime" option
+	 * string rather than using the MS_LAZYTIME flag, so this must remain.
+	 */
+	F2FS_MOUNT_LAZYTIME,
+	F2FS_MOUNT_RESERVE_NODE,
+};
 
 #define F2FS_OPTION(sbi)	((sbi)->mount_opt)
-#define clear_opt(sbi, option)	(F2FS_OPTION(sbi).opt &= ~F2FS_MOUNT_##option)
-#define set_opt(sbi, option)	(F2FS_OPTION(sbi).opt |= F2FS_MOUNT_##option)
-#define test_opt(sbi, option)	(F2FS_OPTION(sbi).opt & F2FS_MOUNT_##option)
+#define clear_opt(sbi, option)					\
+	(F2FS_OPTION(sbi).opt &= ~BIT(F2FS_MOUNT_##option))
+#define set_opt(sbi, option)					\
+	(F2FS_OPTION(sbi).opt |= BIT(F2FS_MOUNT_##option))
+#define test_opt(sbi, option)					\
+	(F2FS_OPTION(sbi).opt & BIT(F2FS_MOUNT_##option))
 
 #define ver_after(a, b)	(typecheck(unsigned long long, a) &&	\
 		typecheck(unsigned long long, b) &&		\
@@ -188 +183 @@
 };
 
 struct f2fs_mount_info {
-	unsigned int opt;
+	unsigned long long opt;
 	block_t root_reserved_blocks;	/* root reserved blocks */
 	block_t root_reserved_nodes;	/* root reserved nodes */
 	kuid_t s_resuid;		/* reserved blocks for uid */
@@ -250 +245 @@
 #define F2FS_FEATURE_COMPRESSION	0x00002000
 #define F2FS_FEATURE_RO			0x00004000
 #define F2FS_FEATURE_DEVICE_ALIAS	0x00008000
+#define F2FS_FEATURE_PACKED_SSA		0x00010000
 
 #define __F2FS_HAS_FEATURE(raw_super, mask)				\
 	((raw_super->feature & cpu_to_le32(mask)) != 0)
@@ -287 +281 @@
 #define DEF_CP_INTERVAL			60	/* 60 secs */
 #define DEF_IDLE_INTERVAL		5	/* 5 secs */
 #define DEF_DISABLE_INTERVAL		5	/* 5 secs */
-#define DEF_ENABLE_INTERVAL		16	/* 16 secs */
+#define DEF_ENABLE_INTERVAL		5	/* 5 secs */
 #define DEF_DISABLE_QUICK_INTERVAL	1	/* 1 secs */
 #define DEF_UMOUNT_DISCARD_TIMEOUT	5	/* 5 secs */
@@ -317 +311 @@
 	__u64 trim_end;
 	__u64 trim_minlen;
 	struct cp_stats stats;
+};
+
+enum f2fs_cp_phase {
+	CP_PHASE_START_BLOCK_OPS,
+	CP_PHASE_FINISH_BLOCK_OPS,
+	CP_PHASE_FINISH_CHECKPOINT,
 };
 
 /*
@@ -418 +406 @@
 #define DEFAULT_DISCARD_GRANULARITY		16
 /* default maximum discard granularity of ordered discard, unit: block count */
 #define DEFAULT_MAX_ORDERED_DISCARD_GRANULARITY	16
+/* default interval of periodical discard submission */
+#define DEFAULT_DISCARD_INTERVAL	(msecs_to_jiffies(20))
 
 /* max discard pend list number */
 #define MAX_PLIST_NUM		512
@@ -669 +655 @@
 
 #define DEFAULT_RETRY_IO_COUNT	8	/* maximum retry read IO or flush count */
 
-/* congestion wait timeout value, default: 20ms */
-#define DEFAULT_IO_TIMEOUT	(msecs_to_jiffies(20))
+/* IO/non-IO congestion wait timeout value, default: 1ms */
+#define DEFAULT_SCHEDULE_TIMEOUT	(msecs_to_jiffies(1))
 
 /* timeout value injected, default: 1000ms */
 #define DEFAULT_FAULT_TIMEOUT	(msecs_to_jiffies(1000))
@@ -720 +706 @@
 	EX_BLOCK_AGE,
 	NR_EXTENT_CACHES,
 };
+
+/*
+ * Reserved value to mark invalid age extents, hence valid block range
+ * from 0 to ULLONG_MAX-1
+ */
+#define F2FS_EXTENT_AGE_INVALID		ULLONG_MAX
 
 struct extent_info {
 	unsigned int fofs;		/* start offset in a file */
@@ -967 +947 @@
 	unsigned char i_compress_level;	/* compress level (lz4hc,zstd) */
 	unsigned char i_compress_flag;	/* compress flag */
 	unsigned int i_cluster_size;	/* cluster size */
+	atomic_t writeback;		/* count # of writeback thread */
 
 	unsigned int atomic_write_cnt;
 	loff_t original_i_size;		/* original i_size before atomic write */
@@ -1682 +1661 @@
 
 #ifdef CONFIG_BLK_DEV_ZONED
 	unsigned int blocks_per_blkz;	/* F2FS blocks per zone */
+	unsigned int unusable_blocks_per_sec;	/* unusable blocks per section */
 	unsigned int max_open_zones;	/* max open zone resources of the zoned device */
 	/* For adjust the priority writing position of data in zone UFS */
 	unsigned int blkzone_alloc_policy;
@@ -1716 +1694 @@
 	long interval_time[MAX_TIME];	/* to store thresholds */
 	struct ckpt_req_control cprc_info;	/* for checkpoint request control */
 	struct cp_stats cp_stats;	/* for time stat of checkpoint */
+	struct f2fs_rwsem cp_enable_rwsem;	/* block cache/dio write */
 
 	struct inode_management im[MAX_INO_ENTRY];	/* manage inode cache */
@@ -1755 +1732 @@
 	unsigned int meta_ino_num;		/* meta inode number*/
 	unsigned int log_blocks_per_seg;	/* log2 blocks per segment */
 	unsigned int blocks_per_seg;		/* blocks per segment */
-	unsigned int unusable_blocks_per_sec;	/* unusable blocks per section */
 	unsigned int segs_per_sec;		/* segments per section */
 	unsigned int secs_per_zone;		/* sections per zone */
 	unsigned int total_sections;		/* total section count */
@@ -1905 +1883 @@
 	unsigned char stop_reason[MAX_STOP_REASON];	/* stop reason */
 	spinlock_t error_lock;			/* protect errors/stop_reason array */
 	bool error_dirty;			/* errors of sb is dirty */
-
-	struct kmem_cache *inline_xattr_slab;	/* inline xattr entry */
-	unsigned int inline_xattr_slab_size;	/* default inline xattr slab size */
 
 	/* For reclaimed segs statistics per each GC mode */
 	unsigned int gc_segment_mode;		/* GC state for reclaimed segments */
@@ -2115 +2096 @@
 static inline struct f2fs_super_block *F2FS_SUPER_BLOCK(struct folio *folio,
 							pgoff_t index)
 {
-	pgoff_t idx_in_folio = index % (1 << folio_order(folio));
+	pgoff_t idx_in_folio = index % folio_nr_pages(folio);
 
 	return (struct f2fs_super_block *)
 		(page_address(folio_page(folio, idx_in_folio)) +
@@ -2980 +2961 @@
 	return __filemap_get_folio(mapping, index, fgp_flags, gfp_mask);
 }
 
-static inline struct page *f2fs_pagecache_get_page(
-				struct address_space *mapping, pgoff_t index,
-				fgf_t fgp_flags, gfp_t gfp_mask)
-{
-	if (time_to_inject(F2FS_M_SB(mapping), FAULT_PAGE_GET))
-		return NULL;
-
-	return pagecache_get_page(mapping, index, fgp_flags, gfp_mask);
-}
-
 static inline void f2fs_folio_put(struct folio *folio, bool unlock)
 {
 	if (IS_ERR_OR_NULL(folio))
@@ -2992 +2983 @@
 	folio_put(folio);
 }
 
-static inline void f2fs_put_page(struct page *page, int unlock)
+static inline void f2fs_put_page(struct page *page, bool unlock)
 {
 	if (!page)
 		return;
@@ -3819 +3810 @@
 void f2fs_save_errors(struct f2fs_sb_info *sbi, unsigned char flag);
 void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason);
 void f2fs_handle_error(struct f2fs_sb_info *sbi, unsigned char error);
-void f2fs_handle_error_async(struct f2fs_sb_info *sbi, unsigned char error);
 int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover);
 int f2fs_sync_fs(struct super_block *sb, int sync);
 int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi);
@@ -4194 +4186 @@
 	int gc_secs[2][2];
 	int tot_blks, data_blks, node_blks;
 	int bg_data_blks, bg_node_blks;
+	int blkoff[NR_CURSEG_TYPE];
 	int curseg[NR_CURSEG_TYPE];
 	int cursec[NR_CURSEG_TYPE];
 	int curzone[NR_CURSEG_TYPE];
@@ -4683 +4674 @@
 		f2fs_up_write(&fi->i_sem);
 		return true;
 	}
-	if (f2fs_is_mmap_file(inode) ||
+	if (f2fs_is_mmap_file(inode) || atomic_read(&fi->writeback) ||
 			(S_ISREG(inode->i_mode) && F2FS_HAS_BLOCKS(inode))) {
 		f2fs_up_write(&fi->i_sem);
 		return false;
@@ -4719 +4710 @@
 F2FS_FEATURE_FUNCS(compression, COMPRESSION);
 F2FS_FEATURE_FUNCS(readonly, RO);
 F2FS_FEATURE_FUNCS(device_alias, DEVICE_ALIAS);
+F2FS_FEATURE_FUNCS(packed_ssa, PACKED_SSA);
 
 #ifdef CONFIG_BLK_DEV_ZONED
 static inline bool f2fs_zone_is_seq(struct f2fs_sb_info *sbi, int devi,
@@ -4772 +4762 @@
 		if (f2fs_bdev_support_discard(FDEV(i).bdev))
 			return true;
 	return false;
+}
+
+static inline unsigned int f2fs_hw_discard_granularity(struct f2fs_sb_info *sbi)
+{
+	int i = 1;
+	unsigned int discard_granularity = bdev_discard_granularity(sbi->sb->s_bdev);
+
+	if (f2fs_is_multi_device(sbi))
+		for (; i < sbi->s_ndevs && !bdev_is_zoned(FDEV(i).bdev); i++)
+			discard_granularity = max_t(unsigned int, discard_granularity,
+					bdev_discard_granularity(FDEV(i).bdev));
+	return discard_granularity;
 }
 
 static inline bool f2fs_realtime_discard_enable(struct f2fs_sb_info *sbi)
@@ -4922 +4900 @@
 	return F2FS_OPTION(sbi).discard_unit == DISCARD_UNIT_BLOCK;
 }
 
-static inline void f2fs_io_schedule_timeout(long timeout)
+static inline void __f2fs_schedule_timeout(long timeout, bool io)
 {
 	set_current_state(TASK_UNINTERRUPTIBLE);
-	io_schedule_timeout(timeout);
+	if (io)
+		io_schedule_timeout(timeout);
+	else
+		schedule_timeout(timeout);
 }
+
+#define f2fs_io_schedule_timeout(timeout)	\
+		__f2fs_schedule_timeout(timeout, true)
+#define f2fs_schedule_timeout(timeout)		\
+		__f2fs_schedule_timeout(timeout, false)
 
 static inline void f2fs_io_schedule_timeout_killable(long timeout)
 {
@@ -4942 +4912 @@
 		if (fatal_signal_pending(current))
 			return;
 		set_current_state(TASK_UNINTERRUPTIBLE);
-		io_schedule_timeout(DEFAULT_IO_TIMEOUT);
-		if (timeout <= DEFAULT_IO_TIMEOUT)
+		io_schedule_timeout(DEFAULT_SCHEDULE_TIMEOUT);
+		if (timeout <= DEFAULT_SCHEDULE_TIMEOUT)
 			return;
-		timeout -= DEFAULT_IO_TIMEOUT;
+		timeout -= DEFAULT_SCHEDULE_TIMEOUT;
 	}
 }
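A note on the mount-option rework above: the old scheme hand-assigned one bit per option in a 32-bit word, and F2FS_MOUNT_RESERVE_NODE had already taken bit 31, so there was no room left (hence the "expand scalability of f2fs mount option" commit). With an enum plus a 64-bit opt word, the macros derive each mask with BIT(); a minimal sketch of what they expand to:

	/* sketch: what set_opt()/test_opt()/clear_opt() now do */
	unsigned long long opt = 0;

	opt |= BIT(F2FS_MOUNT_DISCARD);			/* set_opt(sbi, DISCARD)   */
	if (opt & BIT(F2FS_MOUNT_DISCARD))		/* test_opt(sbi, DISCARD)  */
		opt &= ~BIT(F2FS_MOUNT_DISCARD);	/* clear_opt(sbi, DISCARD) */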
+16 -10
fs/f2fs/file.c
@@ -1654 +1654 @@
 			f2fs_set_data_blkaddr(dn, NEW_ADDR);
 	}
 
-	f2fs_update_read_extent_cache_range(dn, start, 0, index - start);
-	f2fs_update_age_extent_cache_range(dn, start, index - start);
+	if (index > start) {
+		f2fs_update_read_extent_cache_range(dn, start, 0,
+							index - start);
+		f2fs_update_age_extent_cache_range(dn, start, index - start);
+	}
 
 	return ret;
 }
@@ -2128 +2125 @@
 
 		f2fs_down_write(&fi->i_sem);
 		if (!f2fs_may_compress(inode) ||
-			(S_ISREG(inode->i_mode) &&
-			F2FS_HAS_BLOCKS(inode))) {
+				atomic_read(&fi->writeback) ||
+				(S_ISREG(inode->i_mode) &&
+				F2FS_HAS_BLOCKS(inode))) {
 			f2fs_up_write(&fi->i_sem);
 			return -EINVAL;
 		}
@@ -2588 +2584 @@
 static int f2fs_ioc_fitrim(struct file *filp, unsigned long arg)
 {
 	struct inode *inode = file_inode(filp);
-	struct super_block *sb = inode->i_sb;
+	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	struct fstrim_range range;
 	int ret;
 
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
 
-	if (!f2fs_hw_support_discard(F2FS_SB(sb)))
+	if (!f2fs_hw_support_discard(sbi))
 		return -EOPNOTSUPP;
 
 	if (copy_from_user(&range, (struct fstrim_range __user *)arg,
@@ -2606 +2602 @@
 	if (ret)
 		return ret;
 
-	range.minlen = max((unsigned int)range.minlen,
-			   bdev_discard_granularity(sb->s_bdev));
-	ret = f2fs_trim_fs(F2FS_SB(sb), &range);
+	range.minlen = max_t(unsigned int, range.minlen,
+			     f2fs_hw_discard_granularity(sbi));
+	ret = f2fs_trim_fs(sbi, &range);
 	mnt_drop_write_file(filp);
 	if (ret < 0)
 		return ret;
@@ -2616 +2612 @@
 	if (copy_to_user((struct fstrim_range __user *)arg, &range,
 				sizeof(range)))
 		return -EFAULT;
-	f2fs_update_time(F2FS_I_SB(inode), REQ_TIME);
+	f2fs_update_time(sbi, REQ_TIME);
 	return 0;
 }
@@ -5287 +5283 @@
 	struct backing_dev_info *bdi;
 	struct inode *inode = file_inode(filp);
 	int err;
+
+	trace_f2fs_fadvise(inode, offset, len, advice);
 
 	if (advice == POSIX_FADV_SEQUENTIAL) {
 		if (S_ISFIFO(inode->i_mode))
+95 -68
fs/f2fs/gc.c
@@ -38 +38 @@
 	struct f2fs_gc_control gc_control = {
 		.victim_segno = NULL_SEGNO,
 		.should_migrate_blocks = false,
-		.err_gc_skipped = false };
+		.err_gc_skipped = false,
+		.one_time = false };
 
 	wait_ms = gc_th->min_sleep_time;
 
 	set_freezable();
 	do {
-		bool sync_mode, foreground = false;
+		bool sync_mode, foreground = false, gc_boost = false;
 
 		wait_event_freezable_timeout(*wq,
 			kthread_should_stop() ||
@@ -53 +52 @@
 			gc_th->gc_wake,
 			msecs_to_jiffies(wait_ms));
 
-		if (test_opt(sbi, GC_MERGE) && waitqueue_active(fggc_wq))
+		if (test_opt(sbi, GC_MERGE) && waitqueue_active(fggc_wq)) {
 			foreground = true;
+			gc_control.one_time = false;
+		} else if (f2fs_sb_has_blkzoned(sbi)) {
+			gc_control.one_time = true;
+		}
 
 		/* give it a try one time */
 		if (gc_th->gc_wake)
@@ -85 +80 @@
 			stat_other_skip_bggc_count(sbi);
 			continue;
 		}
-
-		gc_control.one_time = false;
 
 		/*
 		 * [GC triggering condition]
@@ -135 +132 @@
 		if (need_to_boost_gc(sbi)) {
 			decrease_sleep_time(gc_th, &wait_ms);
 			if (f2fs_sb_has_blkzoned(sbi))
-				gc_control.one_time = true;
+				gc_boost = true;
 		} else {
 			increase_sleep_time(gc_th, &wait_ms);
 		}
@@ -144 +141 @@
 					FOREGROUND : BACKGROUND);
 
 		sync_mode = (F2FS_OPTION(sbi).bggc_mode == BGGC_MODE_SYNC) ||
-			(gc_control.one_time && gc_th->boost_gc_greedy);
+			(gc_boost && gc_th->boost_gc_greedy);
 
 		/* foreground GC was been triggered via f2fs_balance_fs() */
 		if (foreground && !f2fs_sb_has_blkzoned(sbi))
@@ -774 +771 @@
 {
 	struct dirty_seglist_info *dirty_i = DIRTY_I(sbi);
 	struct sit_info *sm = SIT_I(sbi);
-	struct victim_sel_policy p;
+	struct victim_sel_policy p = {0};
 	unsigned int secno, last_victim;
 	unsigned int last_segment;
 	unsigned int nsearched;
@@ -1211 +1208 @@
 	struct address_space *mapping = f2fs_is_cow_file(inode) ?
 				F2FS_I(inode)->atomic_inode->i_mapping : inode->i_mapping;
 	struct dnode_of_data dn;
-	struct folio *folio;
+	struct folio *folio, *efolio;
 	struct f2fs_io_info fio = {
 		.sbi = sbi,
 		.ino = inode->i_ino,
@@ -1266 +1263 @@
 
 	f2fs_wait_on_block_writeback(inode, dn.data_blkaddr);
 
-	fio.encrypted_page = f2fs_pagecache_get_page(META_MAPPING(sbi),
-					dn.data_blkaddr,
+	efolio = f2fs_filemap_get_folio(META_MAPPING(sbi), dn.data_blkaddr,
 					FGP_LOCK | FGP_CREAT, GFP_NOFS);
-	if (!fio.encrypted_page) {
-		err = -ENOMEM;
+	if (IS_ERR(efolio)) {
+		err = PTR_ERR(efolio);
 		goto put_folio;
 	}
+
+	fio.encrypted_page = &efolio->page;
 
 	err = f2fs_submit_page_bio(&fio);
 	if (err)
 		goto put_encrypted_page;
-	f2fs_put_page(fio.encrypted_page, 0);
+	f2fs_put_page(fio.encrypted_page, false);
 	f2fs_folio_put(folio, true);
 
 	f2fs_update_iostat(sbi, inode, FS_DATA_READ_IO, F2FS_BLKSIZE);
@@ -1286 +1282 @@
 
 	return 0;
 put_encrypted_page:
-	f2fs_put_page(fio.encrypted_page, 1);
+	f2fs_put_page(fio.encrypted_page, true);
put_folio:
 	f2fs_folio_put(folio, true);
 	return err;
@@ -1314 +1310 @@
 	struct dnode_of_data dn;
 	struct f2fs_summary sum;
 	struct node_info ni;
-	struct folio *folio, *mfolio;
+	struct folio *folio, *mfolio, *efolio;
 	block_t newaddr;
 	int err = 0;
 	bool lfs_mode = f2fs_lfs_mode(fio.sbi);
@@ -1408 +1404 @@
 		goto up_out;
 	}
 
-	fio.encrypted_page = f2fs_pagecache_get_page(META_MAPPING(fio.sbi),
-				newaddr, FGP_LOCK | FGP_CREAT, GFP_NOFS);
-	if (!fio.encrypted_page) {
-		err = -ENOMEM;
+	efolio = f2fs_filemap_get_folio(META_MAPPING(fio.sbi), newaddr,
+				FGP_LOCK | FGP_CREAT, GFP_NOFS);
+	if (IS_ERR(efolio)) {
+		err = PTR_ERR(efolio);
 		f2fs_folio_put(mfolio, true);
 		goto recover_block;
 	}
+
+	fio.encrypted_page = &efolio->page;
 
 	/* write target block */
 	f2fs_wait_on_page_writeback(fio.encrypted_page, DATA, true, true);
@@ -1442 +1436 @@
 	f2fs_update_data_blkaddr(&dn, newaddr);
 	set_inode_flag(inode, FI_APPEND_WRITE);
 
-	f2fs_put_page(fio.encrypted_page, 1);
+	f2fs_put_page(fio.encrypted_page, true);
recover_block:
 	if (err)
 		f2fs_do_replace_block(fio.sbi, &sum, newaddr, fio.old_blkaddr,
@@ -1735 +1729 @@
 	unsigned char type = IS_DATASEG(get_seg_entry(sbi, segno)->type) ?
 						SUM_TYPE_DATA : SUM_TYPE_NODE;
 	unsigned char data_type = (type == SUM_TYPE_DATA) ? DATA : NODE;
-	int submitted = 0;
+	int submitted = 0, sum_blk_cnt;
 
 	if (__is_large_section(sbi)) {
 		sec_end_segno = rounddown(end_segno, SEGS_PER_SEC(sbi));
@@ -1769 +1763 @@
 
 	sanity_check_seg_type(sbi, get_seg_entry(sbi, segno)->type);
 
+	segno = rounddown(segno, SUMS_PER_BLOCK);
+	sum_blk_cnt = DIV_ROUND_UP(end_segno - segno, SUMS_PER_BLOCK);
 	/* readahead multi ssa blocks those have contiguous address */
 	if (__is_large_section(sbi))
 		f2fs_ra_meta_pages(sbi, GET_SUM_BLOCK(sbi, segno),
-					end_segno - segno, META_SSA, true);
+					sum_blk_cnt, META_SSA, true);
 
 	/* reference all summary page */
 	while (segno < end_segno) {
-		struct folio *sum_folio = f2fs_get_sum_folio(sbi, segno++);
+		struct folio *sum_folio = f2fs_get_sum_folio(sbi, segno);
+
+		segno += SUMS_PER_BLOCK;
 		if (IS_ERR(sum_folio)) {
 			int err = PTR_ERR(sum_folio);
 
-			end_segno = segno - 1;
-			for (segno = start_segno; segno < end_segno; segno++) {
+			end_segno = segno - SUMS_PER_BLOCK;
+			segno = rounddown(start_segno, SUMS_PER_BLOCK);
+			while (segno < end_segno) {
 				sum_folio = filemap_get_folio(META_MAPPING(sbi),
 						GET_SUM_BLOCK(sbi, segno));
 				folio_put_refs(sum_folio, 2);
+				segno += SUMS_PER_BLOCK;
 			}
 			return err;
 		}
@@ -1799 +1787 @@
 
 	blk_start_plug(&plug);
 
-	for (segno = start_segno; segno < end_segno; segno++) {
-		struct f2fs_summary_block *sum;
+	segno = start_segno;
+	while (segno < end_segno) {
+		unsigned int cur_segno;
 
 		/* find segment summary of victim */
 		struct folio *sum_folio = filemap_get_folio(META_MAPPING(sbi),
 					GET_SUM_BLOCK(sbi, segno));
+		unsigned int block_end_segno = rounddown(segno, SUMS_PER_BLOCK)
+					+ SUMS_PER_BLOCK;
+
+		if (block_end_segno > end_segno)
+			block_end_segno = end_segno;
 
 		if (is_cursec(sbi, GET_SEC_FROM_SEG(sbi, segno))) {
 			f2fs_err(sbi, "%s: segment %u is used by log",
 				 __func__, segno);
 			f2fs_bug_on(sbi, 1);
-			goto skip;
+			goto next_block;
 		}
 
-		if (get_valid_blocks(sbi, segno, false) == 0)
-			goto freed;
-		if (gc_type == BG_GC && __is_large_section(sbi) &&
-				migrated >= sbi->migration_granularity)
-			goto skip;
 		if (!folio_test_uptodate(sum_folio) ||
 				unlikely(f2fs_cp_error(sbi)))
-			goto skip;
+			goto next_block;
 
-		sum = folio_address(sum_folio);
-		if (type != GET_SUM_TYPE((&sum->footer))) {
-			f2fs_err(sbi, "Inconsistent segment (%u) type [%d, %d] in SIT and SSA",
-				 segno, type, GET_SUM_TYPE((&sum->footer)));
-			f2fs_stop_checkpoint(sbi, false,
-				STOP_CP_REASON_CORRUPTED_SUMMARY);
-			goto skip;
-		}
+		for (cur_segno = segno; cur_segno < block_end_segno;
+							cur_segno++) {
+			struct f2fs_summary_block *sum;
 
-		/*
-		 * this is to avoid deadlock:
-		 * - lock_page(sum_page)         - f2fs_replace_block
-		 *  - check_valid_map()            - down_write(sentry_lock)
-		 *   - down_read(sentry_lock)     - change_curseg()
-		 *                                  - lock_page(sum_page)
-		 */
-		if (type == SUM_TYPE_NODE)
-			submitted += gc_node_segment(sbi, sum->entries, segno,
-								gc_type);
-		else
-			submitted += gc_data_segment(sbi, sum->entries, gc_list,
-							segno, gc_type,
-							force_migrate);
+			if (get_valid_blocks(sbi, cur_segno, false) == 0)
+				goto freed;
+			if (gc_type == BG_GC && __is_large_section(sbi) &&
+					migrated >= sbi->migration_granularity)
+				continue;
 
-		stat_inc_gc_seg_count(sbi, data_type, gc_type);
-		sbi->gc_reclaimed_segs[sbi->gc_mode]++;
-		migrated++;
+			sum = SUM_BLK_PAGE_ADDR(sum_folio, cur_segno);
+			if (type != GET_SUM_TYPE((&sum->footer))) {
+				f2fs_err(sbi, "Inconsistent segment (%u) type "
+					 "[%d, %d] in SSA and SIT",
+					 cur_segno, type,
+					 GET_SUM_TYPE((&sum->footer)));
+				f2fs_stop_checkpoint(sbi, false,
+					STOP_CP_REASON_CORRUPTED_SUMMARY);
+				continue;
+			}
+
+			/*
+			 * this is to avoid deadlock:
+			 * - lock_page(sum_page)         - f2fs_replace_block
+			 *  - check_valid_map()            - down_write(sentry_lock)
+			 *   - down_read(sentry_lock)     - change_curseg()
+			 *                                  - lock_page(sum_page)
+			 */
+			if (type == SUM_TYPE_NODE)
+				submitted += gc_node_segment(sbi, sum->entries,
+							cur_segno, gc_type);
+			else
+				submitted += gc_data_segment(sbi, sum->entries,
+							gc_list, cur_segno,
+							gc_type, force_migrate);
+
+			stat_inc_gc_seg_count(sbi, data_type, gc_type);
+			sbi->gc_reclaimed_segs[sbi->gc_mode]++;
+			migrated++;
 
 freed:
-		if (gc_type == FG_GC &&
-				get_valid_blocks(sbi, segno, false) == 0)
-			seg_freed++;
+			if (gc_type == FG_GC &&
+					get_valid_blocks(sbi, cur_segno, false) == 0)
+				seg_freed++;
 
-		if (__is_large_section(sbi))
-			sbi->next_victim_seg[gc_type] =
-				(segno + 1 < sec_end_segno) ?
-					segno + 1 : NULL_SEGNO;
-skip:
+			if (__is_large_section(sbi))
+				sbi->next_victim_seg[gc_type] =
+					(cur_segno + 1 < sec_end_segno) ?
+						cur_segno + 1 : NULL_SEGNO;
+		}
+next_block:
 		folio_put_refs(sum_folio, 2);
+		segno = block_end_segno;
 	}
 
 	if (submitted)
+1 -1
fs/f2fs/gc.h
@@ -25 +25 @@
 #define DEF_GC_THREAD_CANDIDATE_RATIO	20	/* select 20% oldest sections as candidates */
 #define DEF_GC_THREAD_MAX_CANDIDATE_COUNT 10	/* select at most 10 sections as candidates */
 #define DEF_GC_THREAD_AGE_WEIGHT	60	/* age weight */
-#define DEF_GC_THREAD_VALID_THRESH_RATIO 95	/* do not GC over 95% valid block ratio for one time GC */
+#define DEF_GC_THREAD_VALID_THRESH_RATIO 80	/* do not GC over 80% valid block ratio for one time GC */
 #define DEFAULT_ACCURACY_CLASS		10000	/* accuracy class */
 
 #define LIMIT_INVALID_BLOCK	40 /* percentage over total user space */
+2 -2
fs/f2fs/inline.c
@@ -287 +287 @@
 	set_inode_flag(inode, FI_DATA_EXIST);
 
 	folio_clear_f2fs_inline(ifolio);
-	f2fs_folio_put(ifolio, 1);
+	f2fs_folio_put(ifolio, true);
 	return 0;
 }
@@ -577 +577 @@
 	f2fs_i_depth_write(dir, 0);
 	f2fs_i_size_write(dir, MAX_INLINE_DATA(dir));
 	folio_mark_dirty(ifolio);
-	f2fs_folio_put(ifolio, 1);
+	f2fs_folio_put(ifolio, true);
 
 	kfree(backup_dentry);
 	return err;
+6
fs/f2fs/inode.c
@@ -294 +294 @@
 		return false;
 	}
 
+	if (S_ISDIR(inode->i_mode) && unlikely(inode->i_nlink == 1)) {
+		f2fs_warn(sbi, "%s: directory inode (ino=%lx) has a single i_nlink",
+			  __func__, inode->i_ino);
+		return false;
+	}
+
 	if (f2fs_has_extra_attr(inode)) {
 		if (!f2fs_sb_has_extra_attr(sbi)) {
 			f2fs_warn(sbi, "%s: inode (ino=%lx) is with extra_attr, but extra_attr feature is off",
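For context on the new check: a healthy directory's link count is never 1; it is at least 2 (its own "." entry plus the entry in its parent), and each subdirectory's ".." adds one more, so i_nlink == 1 on a directory can only come from on-disk corruption. The invariant as a one-line sketch (illustration only, not f2fs code):

	static bool dir_nlink_sane(const struct inode *inode)
	{
		/* nlink(dir) = 2 + number of subdirectories */
		return !S_ISDIR(inode->i_mode) || inode->i_nlink != 1;
	}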
+24 -15
fs/f2fs/namei.c
@@ -552 +552 @@
 
 	if (unlikely(f2fs_cp_error(sbi))) {
 		err = -EIO;
-		goto fail;
+		goto out;
 	}
 
 	err = f2fs_dquot_initialize(dir);
 	if (err)
-		goto fail;
+		goto out;
 	err = f2fs_dquot_initialize(inode);
 	if (err)
-		goto fail;
+		goto out;
 
 	de = f2fs_find_entry(dir, &dentry->d_name, &folio);
 	if (!de) {
 		if (IS_ERR(folio))
 			err = PTR_ERR(folio);
-		goto fail;
+		goto out;
 	}
 
 	if (unlikely(inode->i_nlink == 0)) {
-		f2fs_warn(F2FS_I_SB(inode), "%s: inode (ino=%lx) has zero i_nlink",
+		f2fs_warn(sbi, "%s: inode (ino=%lx) has zero i_nlink",
 			  __func__, inode->i_ino);
-		err = -EFSCORRUPTED;
-		set_sbi_flag(F2FS_I_SB(inode), SBI_NEED_FSCK);
-		f2fs_folio_put(folio, false);
-		goto fail;
+		goto corrupted;
+	} else if (S_ISDIR(inode->i_mode) && unlikely(inode->i_nlink == 1)) {
+		f2fs_warn(sbi, "%s: directory inode (ino=%lx) has a single i_nlink",
+			  __func__, inode->i_ino);
+		goto corrupted;
 	}
 
 	f2fs_balance_fs(sbi, true);
@@ -586 +585 @@
 	if (err) {
 		f2fs_unlock_op(sbi);
 		f2fs_folio_put(folio, false);
-		goto fail;
+		goto out;
 	}
 	f2fs_delete_entry(de, folio, dir, inode);
 	f2fs_unlock_op(sbi);
@@ -602 +601 @@
 
 	if (IS_DIRSYNC(dir))
 		f2fs_sync_fs(sbi->sb, 1);
-fail:
+
+	goto out;
+corrupted:
+	err = -EFSCORRUPTED;
+	set_sbi_flag(sbi, SBI_NEED_FSCK);
+	f2fs_folio_put(folio, false);
+out:
 	trace_f2fs_unlink_exit(inode, err);
 	return err;
 }
@@ -1060 +1053 @@
 	if (whiteout) {
 		set_inode_flag(whiteout, FI_INC_LINK);
 		err = f2fs_add_link(old_dentry, whiteout);
-		if (err)
+		if (err) {
+			d_invalidate(old_dentry);
+			d_invalidate(new_dentry);
 			goto put_out_dir;
-
+		}
 		spin_lock(&whiteout->i_lock);
 		inode_state_clear(whiteout, I_LINKABLE);
 		spin_unlock(&whiteout->i_lock);
@@ -1256 +1247 @@
 	return 0;
out_new_dir:
 	if (new_dir_entry) {
-		f2fs_folio_put(new_dir_folio, 0);
+		f2fs_folio_put(new_dir_folio, false);
 	}
out_old_dir:
 	if (old_dir_entry) {
-		f2fs_folio_put(old_dir_folio, 0);
+		f2fs_folio_put(old_dir_folio, false);
 	}
out_new:
 	f2fs_folio_put(new_folio, false);
+16 -15
fs/f2fs/recovery.c
@@ -399 +399 @@
 }
 
 static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
-				bool check_only)
+				bool check_only, bool *new_inode)
 {
 	struct curseg_info *curseg;
 	block_t blkaddr, blkaddr_fast;
@@ -447 +447 @@
 				quota_inode = true;
 			}
 
-			/*
-			 * CP | dnode(F) | inode(DF)
-			 * For this case, we should not give up now.
-			 */
 			entry = add_fsync_inode(sbi, head, ino_of_node(folio),
 							quota_inode);
 			if (IS_ERR(entry)) {
 				err = PTR_ERR(entry);
-				if (err == -ENOENT)
+				/*
+				 * CP | dnode(F) | inode(DF)
+				 * For this case, we should not give up now.
+				 */
+				if (err == -ENOENT) {
+					if (check_only)
+						*new_inode = true;
 					goto next;
+				}
 				f2fs_folio_put(folio, true);
 				break;
 			}
@@ -522 +519 @@
 	sum_folio = f2fs_get_sum_folio(sbi, segno);
 	if (IS_ERR(sum_folio))
 		return PTR_ERR(sum_folio);
-	sum_node = folio_address(sum_folio);
+	sum_node = SUM_BLK_PAGE_ADDR(sum_folio, segno);
 	sum = sum_node->entries[blkoff];
 	f2fs_folio_put(sum_folio, true);
got_it:
@@ -872 +869 @@
 
 int f2fs_recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
 {
-	struct list_head inode_list, tmp_inode_list;
-	struct list_head dir_list;
+	LIST_HEAD(inode_list);
+	LIST_HEAD(tmp_inode_list);
+	LIST_HEAD(dir_list);
 	int err;
 	int ret = 0;
 	unsigned long s_flags = sbi->sb->s_flags;
 	bool need_writecp = false;
+	bool new_inode = false;
 
 	f2fs_notice(sbi, "f2fs_recover_fsync_data: recovery fsync data, "
 		    "check_only: %d", check_only);
@@ -887 +882 @@
 	if (is_sbi_flag_set(sbi, SBI_IS_WRITABLE))
 		f2fs_info(sbi, "recover fsync data on readonly fs");
 
-	INIT_LIST_HEAD(&inode_list);
-	INIT_LIST_HEAD(&tmp_inode_list);
-	INIT_LIST_HEAD(&dir_list);
-
 	/* prevent checkpoint */
 	f2fs_down_write(&sbi->cp_global_sem);
 
 	/* step #1: find fsynced inode numbers */
-	err = find_fsync_dnodes(sbi, &inode_list, check_only);
-	if (err || list_empty(&inode_list))
+	err = find_fsync_dnodes(sbi, &inode_list, check_only, &new_inode);
+	if (err < 0 || (list_empty(&inode_list) && (!check_only || !new_inode)))
 		goto skip;
 
 	if (check_only) {
+41 -22
fs/f2fs/segment.c
··· 234 234 err = f2fs_get_dnode_of_data(&dn, index, ALLOC_NODE); 235 235 if (err) { 236 236 if (err == -ENOMEM) { 237 - f2fs_io_schedule_timeout(DEFAULT_IO_TIMEOUT); 237 + memalloc_retry_wait(GFP_NOFS); 238 238 goto retry; 239 239 } 240 240 return err; ··· 750 750 do { 751 751 ret = __submit_flush_wait(sbi, FDEV(i).bdev); 752 752 if (ret) 753 - f2fs_io_schedule_timeout(DEFAULT_IO_TIMEOUT); 753 + f2fs_schedule_timeout(DEFAULT_SCHEDULE_TIMEOUT); 754 754 } while (ret && --count); 755 755 756 756 if (ret) { ··· 1343 1343 1344 1344 dc->di.len += len; 1345 1345 1346 + err = 0; 1346 1347 if (time_to_inject(sbi, FAULT_DISCARD)) { 1347 1348 err = -EIO; 1348 - } else { 1349 - err = __blkdev_issue_discard(bdev, 1350 - SECTOR_FROM_BLOCK(start), 1351 - SECTOR_FROM_BLOCK(len), 1352 - GFP_NOFS, &bio); 1353 - } 1354 - if (err) { 1355 1349 spin_lock_irqsave(&dc->lock, flags); 1356 1350 if (dc->state == D_PARTIAL) 1357 1351 dc->state = D_SUBMIT; ··· 1354 1360 break; 1355 1361 } 1356 1362 1363 + __blkdev_issue_discard(bdev, SECTOR_FROM_BLOCK(start), 1364 + SECTOR_FROM_BLOCK(len), GFP_NOFS, &bio); 1357 1365 f2fs_bug_on(sbi, !bio); 1358 1366 1359 1367 /* ··· 2708 2712 void f2fs_update_meta_page(struct f2fs_sb_info *sbi, 2709 2713 void *src, block_t blk_addr) 2710 2714 { 2711 - struct folio *folio = f2fs_grab_meta_folio(sbi, blk_addr); 2715 + struct folio *folio; 2716 + 2717 + if (SUMS_PER_BLOCK == 1) 2718 + folio = f2fs_grab_meta_folio(sbi, blk_addr); 2719 + else 2720 + folio = f2fs_get_meta_folio_retry(sbi, blk_addr); 2721 + 2722 + if (IS_ERR(folio)) 2723 + return; 2712 2724 2713 2725 memcpy(folio_address(folio), src, PAGE_SIZE); 2714 2726 folio_mark_dirty(folio); ··· 2724 2720 } 2725 2721 2726 2722 static void write_sum_page(struct f2fs_sb_info *sbi, 2727 - struct f2fs_summary_block *sum_blk, block_t blk_addr) 2723 + struct f2fs_summary_block *sum_blk, unsigned int segno) 2728 2724 { 2729 - f2fs_update_meta_page(sbi, (void *)sum_blk, blk_addr); 2725 + struct folio *folio; 2726 + 2727 + if (SUMS_PER_BLOCK == 1) 2728 + return f2fs_update_meta_page(sbi, (void *)sum_blk, 2729 + GET_SUM_BLOCK(sbi, segno)); 2730 + 2731 + folio = f2fs_get_sum_folio(sbi, segno); 2732 + if (IS_ERR(folio)) 2733 + return; 2734 + 2735 + memcpy(SUM_BLK_PAGE_ADDR(folio, segno), sum_blk, sizeof(*sum_blk)); 2736 + folio_mark_dirty(folio); 2737 + f2fs_folio_put(folio, true); 2730 2738 } 2731 2739 2732 2740 static void write_current_sum_page(struct f2fs_sb_info *sbi, ··· 3003 2987 int ret; 3004 2988 3005 2989 if (curseg->inited) 3006 - write_sum_page(sbi, curseg->sum_blk, GET_SUM_BLOCK(sbi, segno)); 2990 + write_sum_page(sbi, curseg->sum_blk, segno); 3007 2991 3008 2992 segno = __get_next_segno(sbi, type); 3009 2993 ret = get_new_segment(sbi, &segno, new_sec, pinning); ··· 3062 3046 struct folio *sum_folio; 3063 3047 3064 3048 if (curseg->inited) 3065 - write_sum_page(sbi, curseg->sum_blk, GET_SUM_BLOCK(sbi, curseg->segno)); 3049 + write_sum_page(sbi, curseg->sum_blk, curseg->segno); 3066 3050 3067 3051 __set_test_and_inuse(sbi, new_segno); 3068 3052 ··· 3081 3065 memset(curseg->sum_blk, 0, SUM_ENTRY_SIZE); 3082 3066 return PTR_ERR(sum_folio); 3083 3067 } 3084 - sum_node = folio_address(sum_folio); 3068 + sum_node = SUM_BLK_PAGE_ADDR(sum_folio, new_segno); 3085 3069 memcpy(curseg->sum_blk, sum_node, SUM_ENTRY_SIZE); 3086 3070 f2fs_folio_put(sum_folio, true); 3087 3071 return 0; ··· 3170 3154 goto out; 3171 3155 3172 3156 if (get_valid_blocks(sbi, curseg->segno, false)) { 3173 - write_sum_page(sbi, curseg->sum_blk, 3174 - GET_SUM_BLOCK(sbi, 
curseg->segno)); 3157 + write_sum_page(sbi, curseg->sum_blk, curseg->segno); 3175 3158 } else { 3176 3159 mutex_lock(&DIRTY_I(sbi)->seglist_lock); 3177 3160 __set_test_and_free(sbi, curseg->segno, true); ··· 3467 3452 blk_finish_plug(&plug); 3468 3453 mutex_unlock(&dcc->cmd_lock); 3469 3454 trimmed += __wait_all_discard_cmd(sbi, NULL); 3470 - f2fs_io_schedule_timeout(DEFAULT_IO_TIMEOUT); 3455 + f2fs_schedule_timeout(DEFAULT_DISCARD_INTERVAL); 3471 3456 goto next; 3472 3457 } 3473 3458 skip: ··· 3848 3833 if (segment_full) { 3849 3834 if (type == CURSEG_COLD_DATA_PINNED && 3850 3835 !((curseg->segno + 1) % sbi->segs_per_sec)) { 3851 - write_sum_page(sbi, curseg->sum_blk, 3852 - GET_SUM_BLOCK(sbi, curseg->segno)); 3836 + write_sum_page(sbi, curseg->sum_blk, curseg->segno); 3853 3837 reset_curseg_fields(curseg); 3854 3838 goto skip_new_segment; 3855 3839 } ··· 3877 3863 locate_dirty_segment(sbi, GET_SEGNO(sbi, old_blkaddr)); 3878 3864 locate_dirty_segment(sbi, GET_SEGNO(sbi, *new_blkaddr)); 3879 3865 3880 - if (IS_DATASEG(curseg->seg_type)) 3881 - atomic64_inc(&sbi->allocated_data_blocks); 3866 + if (IS_DATASEG(curseg->seg_type)) { 3867 + unsigned long long new_val; 3868 + 3869 + new_val = atomic64_inc_return(&sbi->allocated_data_blocks); 3870 + if (unlikely(new_val == ULLONG_MAX)) 3871 + atomic64_set(&sbi->allocated_data_blocks, 0); 3872 + } 3882 3873 3883 3874 up_write(&sit_i->sentry_lock); 3884 3875
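The allocated_data_blocks hunk above stops the 64-bit allocation counter from sticking at ULLONG_MAX: the increment now reads back the new value and resets the counter to zero at the wrap point, which is what keeps the age-extent bookkeeping from skipping insertions. A minimal userspace C11 model of the same guard; the starting value and names are illustrative, not the kernel's:

#include <limits.h>
#include <stdatomic.h>
#include <stdio.h>

/* stand-in for sbi->allocated_data_blocks, primed just below the wrap */
static _Atomic unsigned long long allocated_data_blocks = ULLONG_MAX - 1;

static void account_data_block(void)
{
	unsigned long long new_val =
		atomic_fetch_add(&allocated_data_blocks, 1) + 1;

	if (new_val == ULLONG_MAX)	/* about to wrap: restart from 0 */
		atomic_store(&allocated_data_blocks, 0);
}

int main(void)
{
	account_data_block();	/* increments ULLONG_MAX-1 -> ULLONG_MAX */
	printf("counter after guard: %llu\n",
	       atomic_load(&allocated_data_blocks));	/* prints 0 */
	return 0;
}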
+16 -5
fs/f2fs/segment.h
··· 69 69 ((!__is_valid_data_blkaddr(blk_addr)) ? \ 70 70 NULL_SEGNO : GET_L2R_SEGNO(FREE_I(sbi), \ 71 71 GET_SEGNO_FROM_SEG0(sbi, blk_addr))) 72 + #ifdef CONFIG_BLK_DEV_ZONED 72 73 #define CAP_BLKS_PER_SEC(sbi) \ 73 74 (BLKS_PER_SEC(sbi) - (sbi)->unusable_blocks_per_sec) 74 75 #define CAP_SEGS_PER_SEC(sbi) \ 75 76 (SEGS_PER_SEC(sbi) - \ 76 77 BLKS_TO_SEGS(sbi, (sbi)->unusable_blocks_per_sec)) 78 + #else 79 + #define CAP_BLKS_PER_SEC(sbi) BLKS_PER_SEC(sbi) 80 + #define CAP_SEGS_PER_SEC(sbi) SEGS_PER_SEC(sbi) 81 + #endif 77 82 #define GET_START_SEG_FROM_SEC(sbi, segno) \ 78 83 (rounddown(segno, SEGS_PER_SEC(sbi))) 79 84 #define GET_SEC_FROM_SEG(sbi, segno) \ ··· 90 85 #define GET_ZONE_FROM_SEG(sbi, segno) \ 91 86 GET_ZONE_FROM_SEC(sbi, GET_SEC_FROM_SEG(sbi, segno)) 92 87 93 - #define GET_SUM_BLOCK(sbi, segno) \ 94 - ((sbi)->sm_info->ssa_blkaddr + (segno)) 88 + #define SUMS_PER_BLOCK (F2FS_BLKSIZE / F2FS_SUM_BLKSIZE) 89 + #define GET_SUM_BLOCK(sbi, segno) \ 90 + (SM_I(sbi)->ssa_blkaddr + (segno / SUMS_PER_BLOCK)) 91 + #define GET_SUM_BLKOFF(segno) (segno % SUMS_PER_BLOCK) 92 + #define SUM_BLK_PAGE_ADDR(folio, segno) \ 93 + (folio_address(folio) + GET_SUM_BLKOFF(segno) * F2FS_SUM_BLKSIZE) 95 94 96 95 #define GET_SUM_TYPE(footer) ((footer)->entry_type) 97 96 #define SET_SUM_TYPE(footer, type) ((footer)->entry_type = (type)) ··· 612 603 static inline unsigned int get_left_section_blocks(struct f2fs_sb_info *sbi, 613 604 enum log_type type, unsigned int segno) 614 605 { 615 - if (f2fs_lfs_mode(sbi) && __is_large_section(sbi)) 616 - return CAP_BLKS_PER_SEC(sbi) - SEGS_TO_BLKS(sbi, 617 - (segno - GET_START_SEG_FROM_SEC(sbi, segno))) - 606 + if (f2fs_lfs_mode(sbi)) { 607 + unsigned int used_blocks = __is_large_section(sbi) ? SEGS_TO_BLKS(sbi, 608 + (segno - GET_START_SEG_FROM_SEC(sbi, segno))) : 0; 609 + return CAP_BLKS_PER_SEC(sbi) - used_blocks - 618 610 CURSEG_I(sbi, type)->next_blkoff; 611 + } 619 612 return CAP_BLKS_PER_SEC(sbi) - get_ckpt_valid_blocks(sbi, segno, true); 620 613 } 621 614
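The packed-SSA macros above split each F2FS block into SUMS_PER_BLOCK fixed 4KB summary areas, so a segment's summary now lives at block ssa_blkaddr + segno / SUMS_PER_BLOCK, slot segno % SUMS_PER_BLOCK within that block. A standalone C sketch of the arithmetic; the 16KB block size and the SSA start address are assumptions for the demo:

#include <stdio.h>

#define F2FS_BLKSIZE		16384	/* assumed: 16KB page/block build */
#define F2FS_SUM_BLKSIZE	4096
#define SUMS_PER_BLOCK		(F2FS_BLKSIZE / F2FS_SUM_BLKSIZE)

int main(void)
{
	unsigned int ssa_blkaddr = 1024;	/* hypothetical SSA start */

	/* four consecutive segments share one 16KB SSA block */
	for (unsigned int segno = 0; segno < 8; segno++)
		printf("segno %u -> SSA block %u, slot %u (byte off %u)\n",
		       segno,
		       ssa_blkaddr + segno / SUMS_PER_BLOCK,
		       segno % SUMS_PER_BLOCK,
		       (segno % SUMS_PER_BLOCK) * F2FS_SUM_BLKSIZE);
	return 0;
}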
+107 -101
fs/f2fs/super.c
··· 352 352 353 353 struct f2fs_fs_context { 354 354 struct f2fs_mount_info info; 355 - unsigned int opt_mask; /* Bits changed */ 355 + unsigned long long opt_mask; /* Bits changed */ 356 356 unsigned int spec_mask; 357 357 unsigned short qname_mask; 358 358 }; ··· 360 360 #define F2FS_CTX_INFO(ctx) ((ctx)->info) 361 361 362 362 static inline void ctx_set_opt(struct f2fs_fs_context *ctx, 363 - unsigned int flag) 363 + enum f2fs_mount_opt flag) 364 364 { 365 - ctx->info.opt |= flag; 366 - ctx->opt_mask |= flag; 365 + ctx->info.opt |= BIT(flag); 366 + ctx->opt_mask |= BIT(flag); 367 367 } 368 368 369 369 static inline void ctx_clear_opt(struct f2fs_fs_context *ctx, 370 - unsigned int flag) 370 + enum f2fs_mount_opt flag) 371 371 { 372 - ctx->info.opt &= ~flag; 373 - ctx->opt_mask |= flag; 372 + ctx->info.opt &= ~BIT(flag); 373 + ctx->opt_mask |= BIT(flag); 374 374 } 375 375 376 376 static inline bool ctx_test_opt(struct f2fs_fs_context *ctx, 377 - unsigned int flag) 377 + enum f2fs_mount_opt flag) 378 378 { 379 - return ctx->info.opt & flag; 379 + return ctx->info.opt & BIT(flag); 380 380 } 381 381 382 382 void f2fs_printk(struct f2fs_sb_info *sbi, bool limit_rate, ··· 1371 1371 ctx_test_opt(ctx, F2FS_MOUNT_COMPRESS_CACHE)) 1372 1372 f2fs_info(sbi, "Image doesn't support compression"); 1373 1373 clear_compression_spec(ctx); 1374 - ctx->opt_mask &= ~F2FS_MOUNT_COMPRESS_CACHE; 1374 + ctx->opt_mask &= ~BIT(F2FS_MOUNT_COMPRESS_CACHE); 1375 1375 return 0; 1376 1376 } 1377 1377 if (ctx->spec_mask & F2FS_SPEC_compress_extension) { ··· 1439 1439 return -EINVAL; 1440 1440 1441 1441 if (f2fs_hw_should_discard(sbi) && 1442 - (ctx->opt_mask & F2FS_MOUNT_DISCARD) && 1442 + (ctx->opt_mask & BIT(F2FS_MOUNT_DISCARD)) && 1443 1443 !ctx_test_opt(ctx, F2FS_MOUNT_DISCARD)) { 1444 1444 f2fs_warn(sbi, "discard is required for zoned block devices"); 1445 1445 return -EINVAL; 1446 1446 } 1447 1447 1448 1448 if (!f2fs_hw_support_discard(sbi) && 1449 - (ctx->opt_mask & F2FS_MOUNT_DISCARD) && 1449 + (ctx->opt_mask & BIT(F2FS_MOUNT_DISCARD)) && 1450 1450 ctx_test_opt(ctx, F2FS_MOUNT_DISCARD)) { 1451 1451 f2fs_warn(sbi, "device does not support discard"); 1452 1452 ctx_clear_opt(ctx, F2FS_MOUNT_DISCARD); 1453 - ctx->opt_mask &= ~F2FS_MOUNT_DISCARD; 1453 + ctx->opt_mask &= ~BIT(F2FS_MOUNT_DISCARD); 1454 1454 } 1455 1455 1456 1456 if (f2fs_sb_has_device_alias(sbi) && 1457 - (ctx->opt_mask & F2FS_MOUNT_READ_EXTENT_CACHE) && 1457 + (ctx->opt_mask & BIT(F2FS_MOUNT_READ_EXTENT_CACHE)) && 1458 1458 !ctx_test_opt(ctx, F2FS_MOUNT_READ_EXTENT_CACHE)) { 1459 1459 f2fs_err(sbi, "device aliasing requires extent cache"); 1460 1460 return -EINVAL; 1461 1461 } 1462 1462 1463 1463 if (test_opt(sbi, RESERVE_ROOT) && 1464 - (ctx->opt_mask & F2FS_MOUNT_RESERVE_ROOT) && 1464 + (ctx->opt_mask & BIT(F2FS_MOUNT_RESERVE_ROOT)) && 1465 1465 ctx_test_opt(ctx, F2FS_MOUNT_RESERVE_ROOT)) { 1466 1466 f2fs_info(sbi, "Preserve previous reserve_root=%u", 1467 1467 F2FS_OPTION(sbi).root_reserved_blocks); 1468 1468 ctx_clear_opt(ctx, F2FS_MOUNT_RESERVE_ROOT); 1469 - ctx->opt_mask &= ~F2FS_MOUNT_RESERVE_ROOT; 1469 + ctx->opt_mask &= ~BIT(F2FS_MOUNT_RESERVE_ROOT); 1470 1470 } 1471 1471 if (test_opt(sbi, RESERVE_NODE) && 1472 - (ctx->opt_mask & F2FS_MOUNT_RESERVE_NODE) && 1472 + (ctx->opt_mask & BIT(F2FS_MOUNT_RESERVE_NODE)) && 1473 1473 ctx_test_opt(ctx, F2FS_MOUNT_RESERVE_NODE)) { 1474 1474 f2fs_info(sbi, "Preserve previous reserve_node=%u", 1475 1475 F2FS_OPTION(sbi).root_reserved_nodes); 1476 1476 ctx_clear_opt(ctx, F2FS_MOUNT_RESERVE_NODE); 1477 - 
ctx->opt_mask &= ~F2FS_MOUNT_RESERVE_NODE; 1477 + ctx->opt_mask &= ~BIT(F2FS_MOUNT_RESERVE_NODE); 1478 1478 } 1479 1479 1480 1480 err = f2fs_check_test_dummy_encryption(fc, sb); ··· 1759 1759 atomic_set(&fi->dirty_pages, 0); 1760 1760 atomic_set(&fi->i_compr_blocks, 0); 1761 1761 atomic_set(&fi->open_count, 0); 1762 + atomic_set(&fi->writeback, 0); 1762 1763 init_f2fs_rwsem(&fi->i_sem); 1763 1764 spin_lock_init(&fi->i_size_lock); 1764 1765 INIT_LIST_HEAD(&fi->dirty_list); ··· 1989 1988 truncate_inode_pages_final(META_MAPPING(sbi)); 1990 1989 } 1991 1990 1992 - for (i = 0; i < NR_COUNT_TYPE; i++) { 1993 - if (!get_pages(sbi, i)) 1994 - continue; 1995 - f2fs_err(sbi, "detect filesystem reference count leak during " 1996 - "umount, type: %d, count: %lld", i, get_pages(sbi, i)); 1997 - f2fs_bug_on(sbi, 1); 1998 - } 1999 - 2000 1991 f2fs_bug_on(sbi, sbi->fsync_node_num); 2001 1992 2002 1993 f2fs_destroy_compress_inode(sbi); ··· 1998 2005 1999 2006 iput(sbi->meta_inode); 2000 2007 sbi->meta_inode = NULL; 2008 + 2009 + /* Should check the page counts after dropping all node/meta pages */ 2010 + for (i = 0; i < NR_COUNT_TYPE; i++) { 2011 + if (!get_pages(sbi, i)) 2012 + continue; 2013 + f2fs_err(sbi, "detect filesystem reference count leak during " 2014 + "umount, type: %d, count: %lld", i, get_pages(sbi, i)); 2015 + f2fs_bug_on(sbi, 1); 2016 + } 2001 2017 2002 2018 /* 2003 2019 * iput() can update stat information, if f2fs_write_checkpoint() ··· 2028 2026 kfree(sbi->raw_super); 2029 2027 2030 2028 f2fs_destroy_page_array_cache(sbi); 2031 - f2fs_destroy_xattr_caches(sbi); 2032 2029 #ifdef CONFIG_QUOTA 2033 2030 for (i = 0; i < MAXQUOTAS; i++) 2034 2031 kfree(F2FS_OPTION(sbi).s_qf_names[i]); ··· 2633 2632 return err; 2634 2633 } 2635 2634 2636 - static void f2fs_enable_checkpoint(struct f2fs_sb_info *sbi) 2635 + static int f2fs_enable_checkpoint(struct f2fs_sb_info *sbi) 2637 2636 { 2638 2637 unsigned int nr_pages = get_pages(sbi, F2FS_DIRTY_DATA) / 16; 2639 - long long start, writeback, end; 2638 + long long start, writeback, lock, sync_inode, end; 2639 + int ret; 2640 2640 2641 - f2fs_info(sbi, "f2fs_enable_checkpoint() starts, meta: %lld, node: %lld, data: %lld", 2641 + f2fs_info(sbi, "%s start, meta: %lld, node: %lld, data: %lld", 2642 + __func__, 2642 2643 get_pages(sbi, F2FS_DIRTY_META), 2643 2644 get_pages(sbi, F2FS_DIRTY_NODES), 2644 2645 get_pages(sbi, F2FS_DIRTY_DATA)); ··· 2652 2649 /* we should flush all the data to keep data consistency */ 2653 2650 while (get_pages(sbi, F2FS_DIRTY_DATA)) { 2654 2651 writeback_inodes_sb_nr(sbi->sb, nr_pages, WB_REASON_SYNC); 2655 - f2fs_io_schedule_timeout(DEFAULT_IO_TIMEOUT); 2652 + f2fs_io_schedule_timeout(DEFAULT_SCHEDULE_TIMEOUT); 2656 2653 2657 2654 if (f2fs_time_over(sbi, ENABLE_TIME)) 2658 2655 break; 2659 2656 } 2660 2657 writeback = ktime_get(); 2661 2658 2662 - sync_inodes_sb(sbi->sb); 2659 + f2fs_down_write(&sbi->cp_enable_rwsem); 2660 + 2661 + lock = ktime_get(); 2662 + 2663 + if (get_pages(sbi, F2FS_DIRTY_DATA)) 2664 + sync_inodes_sb(sbi->sb); 2663 2665 2664 2666 if (unlikely(get_pages(sbi, F2FS_DIRTY_DATA))) 2665 - f2fs_warn(sbi, "checkpoint=enable has some unwritten data: %lld", 2666 - get_pages(sbi, F2FS_DIRTY_DATA)); 2667 + f2fs_warn(sbi, "%s: has some unwritten data: %lld", 2668 + __func__, get_pages(sbi, F2FS_DIRTY_DATA)); 2669 + 2670 + sync_inode = ktime_get(); 2667 2671 2668 2672 f2fs_down_write(&sbi->gc_lock); 2669 2673 f2fs_dirty_to_prefree(sbi); ··· 2679 2669 set_sbi_flag(sbi, SBI_IS_DIRTY); 2680 2670 
f2fs_up_write(&sbi->gc_lock); 2681 2671 2682 - f2fs_sync_fs(sbi->sb, 1); 2672 + f2fs_info(sbi, "%s sync_fs, meta: %lld, imeta: %lld, node: %lld, dents: %lld, qdata: %lld", 2673 + __func__, 2674 + get_pages(sbi, F2FS_DIRTY_META), 2675 + get_pages(sbi, F2FS_DIRTY_IMETA), 2676 + get_pages(sbi, F2FS_DIRTY_NODES), 2677 + get_pages(sbi, F2FS_DIRTY_DENTS), 2678 + get_pages(sbi, F2FS_DIRTY_QDATA)); 2679 + ret = f2fs_sync_fs(sbi->sb, 1); 2680 + if (ret) 2681 + f2fs_err(sbi, "%s sync_fs failed, ret: %d", __func__, ret); 2683 2682 2684 2683 /* Let's ensure there's no pending checkpoint anymore */ 2685 2684 f2fs_flush_ckpt_thread(sbi); 2686 2685 2686 + f2fs_up_write(&sbi->cp_enable_rwsem); 2687 + 2687 2688 end = ktime_get(); 2688 2689 2689 - f2fs_info(sbi, "f2fs_enable_checkpoint() finishes, writeback:%llu, sync:%llu", 2690 - ktime_ms_delta(writeback, start), 2691 - ktime_ms_delta(end, writeback)); 2690 + f2fs_info(sbi, "%s end, writeback:%llu, " 2691 + "lock:%llu, sync_inode:%llu, sync_fs:%llu", 2692 + __func__, 2693 + ktime_ms_delta(writeback, start), 2694 + ktime_ms_delta(lock, writeback), 2695 + ktime_ms_delta(sync_inode, lock), 2696 + ktime_ms_delta(end, sync_inode)); 2697 + return ret; 2692 2698 } 2693 2699 2694 2700 static int __f2fs_remount(struct fs_context *fc, struct super_block *sb) ··· 2918 2892 goto restore_discard; 2919 2893 need_enable_checkpoint = true; 2920 2894 } else { 2921 - f2fs_enable_checkpoint(sbi); 2895 + err = f2fs_enable_checkpoint(sbi); 2896 + if (err) 2897 + goto restore_discard; 2922 2898 need_disable_checkpoint = true; 2923 2899 } 2924 2900 } ··· 2963 2935 return 0; 2964 2936 restore_checkpoint: 2965 2937 if (need_enable_checkpoint) { 2966 - f2fs_enable_checkpoint(sbi); 2938 + if (f2fs_enable_checkpoint(sbi)) 2939 + f2fs_warn(sbi, "checkpoint has not been enabled"); 2967 2940 } else if (need_disable_checkpoint) { 2968 2941 if (f2fs_disable_checkpoint(sbi)) 2969 2942 f2fs_warn(sbi, "checkpoint has not been disabled"); ··· 3139 3110 &folio, &fsdata); 3140 3111 if (unlikely(err)) { 3141 3112 if (err == -ENOMEM) { 3142 - f2fs_io_schedule_timeout(DEFAULT_IO_TIMEOUT); 3113 + memalloc_retry_wait(GFP_NOFS); 3143 3114 goto retry; 3144 3115 } 3145 3116 set_sbi_flag(F2FS_SB(sb), SBI_QUOTA_NEED_REPAIR); ··· 4080 4051 if (sanity_check_area_boundary(sbi, folio, index)) 4081 4052 return -EFSCORRUPTED; 4082 4053 4054 + /* 4055 + * Check for legacy summary layout on 16KB+ block devices. 4056 + * Modern f2fs-tools packs multiple 4KB summary areas into one block, 4057 + * whereas legacy versions used one block per summary, leading 4058 + * to a much larger SSA. 4059 + */ 4060 + if (SUMS_PER_BLOCK > 1 && 4061 + !(__F2FS_HAS_FEATURE(raw_super, F2FS_FEATURE_PACKED_SSA))) { 4062 + f2fs_info(sbi, "Error: Device formatted with a legacy version. 
" 4063 + "Please reformat with a tool supporting the packed ssa " 4064 + "feature for block sizes larger than 4kb."); 4065 + return -EOPNOTSUPP; 4066 + } 4067 + 4083 4068 return 0; 4084 4069 } 4085 4070 ··· 4587 4544 spin_unlock_irqrestore(&sbi->error_lock, flags); 4588 4545 } 4589 4546 4590 - static bool f2fs_update_errors(struct f2fs_sb_info *sbi) 4591 - { 4592 - unsigned long flags; 4593 - bool need_update = false; 4594 - 4595 - spin_lock_irqsave(&sbi->error_lock, flags); 4596 - if (sbi->error_dirty) { 4597 - memcpy(F2FS_RAW_SUPER(sbi)->s_errors, sbi->errors, 4598 - MAX_F2FS_ERRORS); 4599 - sbi->error_dirty = false; 4600 - need_update = true; 4601 - } 4602 - spin_unlock_irqrestore(&sbi->error_lock, flags); 4603 - 4604 - return need_update; 4605 - } 4606 - 4607 - static void f2fs_record_errors(struct f2fs_sb_info *sbi, unsigned char error) 4608 - { 4609 - int err; 4610 - 4611 - f2fs_down_write(&sbi->sb_lock); 4612 - 4613 - if (!f2fs_update_errors(sbi)) 4614 - goto out_unlock; 4615 - 4616 - err = f2fs_commit_super(sbi, false); 4617 - if (err) 4618 - f2fs_err_ratelimited(sbi, 4619 - "f2fs_commit_super fails to record errors:%u, err:%d", 4620 - error, err); 4621 - out_unlock: 4622 - f2fs_up_write(&sbi->sb_lock); 4623 - } 4624 - 4625 4547 void f2fs_handle_error(struct f2fs_sb_info *sbi, unsigned char error) 4626 - { 4627 - f2fs_save_errors(sbi, error); 4628 - f2fs_record_errors(sbi, error); 4629 - } 4630 - 4631 - void f2fs_handle_error_async(struct f2fs_sb_info *sbi, unsigned char error) 4632 4548 { 4633 4549 f2fs_save_errors(sbi, error); 4634 4550 ··· 4906 4904 init_f2fs_rwsem(&sbi->node_change); 4907 4905 spin_lock_init(&sbi->stat_lock); 4908 4906 init_f2fs_rwsem(&sbi->cp_rwsem); 4907 + init_f2fs_rwsem(&sbi->cp_enable_rwsem); 4909 4908 init_f2fs_rwsem(&sbi->quota_sem); 4910 4909 init_waitqueue_head(&sbi->cp_wait); 4911 4910 spin_lock_init(&sbi->error_lock); ··· 5018 5015 if (err) 5019 5016 goto free_iostat; 5020 5017 5021 - /* init per sbi slab cache */ 5022 - err = f2fs_init_xattr_caches(sbi); 5023 - if (err) 5024 - goto free_percpu; 5025 5018 err = f2fs_init_page_array_cache(sbi); 5026 5019 if (err) 5027 - goto free_xattr_cache; 5020 + goto free_percpu; 5028 5021 5029 5022 /* get an inode for meta space */ 5030 5023 sbi->meta_inode = f2fs_iget(sb, F2FS_META_INO(sbi)); ··· 5225 5226 } 5226 5227 } else { 5227 5228 err = f2fs_recover_fsync_data(sbi, true); 5228 - 5229 - if (!f2fs_readonly(sb) && err > 0) { 5230 - err = -EINVAL; 5231 - f2fs_err(sbi, "Need to recover fsync data"); 5232 - goto free_meta; 5229 + if (err > 0) { 5230 + if (!f2fs_readonly(sb)) { 5231 + f2fs_err(sbi, "Need to recover fsync data"); 5232 + err = -EINVAL; 5233 + goto free_meta; 5234 + } else { 5235 + f2fs_info(sbi, "drop all fsynced data"); 5236 + err = 0; 5237 + } 5233 5238 } 5234 5239 } 5235 5240 ··· 5260 5257 if (err) 5261 5258 goto sync_free_meta; 5262 5259 5263 - if (test_opt(sbi, DISABLE_CHECKPOINT)) { 5260 + if (test_opt(sbi, DISABLE_CHECKPOINT)) 5264 5261 err = f2fs_disable_checkpoint(sbi); 5265 - if (err) 5266 - goto sync_free_meta; 5267 - } else if (is_set_ckpt_flags(sbi, CP_DISABLED_FLAG)) { 5268 - f2fs_enable_checkpoint(sbi); 5269 - } 5262 + else if (is_set_ckpt_flags(sbi, CP_DISABLED_FLAG)) 5263 + err = f2fs_enable_checkpoint(sbi); 5264 + if (err) 5265 + goto sync_free_meta; 5270 5266 5271 5267 /* 5272 5268 * If filesystem is not mounted as read-only then ··· 5352 5350 sbi->meta_inode = NULL; 5353 5351 free_page_array_cache: 5354 5352 f2fs_destroy_page_array_cache(sbi); 5355 - free_xattr_cache: 5356 - 
f2fs_destroy_xattr_caches(sbi); 5357 5353 free_percpu: 5358 5354 destroy_percpu_info(sbi); 5359 5355 free_iostat: ··· 5554 5554 err = f2fs_create_casefold_cache(); 5555 5555 if (err) 5556 5556 goto free_compress_cache; 5557 - err = register_filesystem(&f2fs_fs_type); 5557 + err = f2fs_init_xattr_cache(); 5558 5558 if (err) 5559 5559 goto free_casefold_cache; 5560 + err = register_filesystem(&f2fs_fs_type); 5561 + if (err) 5562 + goto free_xattr_cache; 5560 5563 return 0; 5564 + free_xattr_cache: 5565 + f2fs_destroy_xattr_cache(); 5561 5566 free_casefold_cache: 5562 5567 f2fs_destroy_casefold_cache(); 5563 5568 free_compress_cache: ··· 5603 5598 static void __exit exit_f2fs_fs(void) 5604 5599 { 5605 5600 unregister_filesystem(&f2fs_fs_type); 5601 + f2fs_destroy_xattr_cache(); 5606 5602 f2fs_destroy_casefold_cache(); 5607 5603 f2fs_destroy_compress_cache(); 5608 5604 f2fs_destroy_compress_mempool();
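Note how the super.c hunks widen opt_mask to unsigned long long and make the ctx helpers take the option as an enum index wrapped in BIT(): that is what lets the mount-option set grow past 32 entries. A userspace model of the widened mask; the enum values and struct here are illustrative:

#include <stdint.h>
#include <stdio.h>

#define BIT_ULL(n)	(1ULL << (n))

/* hypothetical option indices; a bit position above 31 now fits */
enum mount_opt { OPT_DISCARD, OPT_RESERVE_ROOT = 40 };

struct fs_context_model {
	uint64_t opt;		/* current option values */
	uint64_t opt_mask;	/* which bits the user touched */
};

static void ctx_set_opt(struct fs_context_model *ctx, enum mount_opt flag)
{
	ctx->opt |= BIT_ULL(flag);
	ctx->opt_mask |= BIT_ULL(flag);
}

int main(void)
{
	struct fs_context_model ctx = { 0 };

	ctx_set_opt(&ctx, OPT_RESERVE_ROOT);
	printf("opt=%#llx mask=%#llx\n",
	       (unsigned long long)ctx.opt,
	       (unsigned long long)ctx.opt_mask);
	return 0;
}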
+9
fs/f2fs/sysfs.c
··· 235 235 if (f2fs_sb_has_compression(sbi)) 236 236 len += sysfs_emit_at(buf, len, "%s%s", 237 237 len ? ", " : "", "compression"); 238 + if (f2fs_sb_has_packed_ssa(sbi)) 239 + len += sysfs_emit_at(buf, len, "%s%s", 240 + len ? ", " : "", "packed_ssa"); 238 241 len += sysfs_emit_at(buf, len, "%s%s", 239 242 len ? ", " : "", "pin_file"); 240 243 len += sysfs_emit_at(buf, len, "\n"); ··· 1213 1210 F2FS_SBI_GENERAL_RW_ATTR(max_read_extent_count); 1214 1211 #ifdef CONFIG_BLK_DEV_ZONED 1215 1212 F2FS_SBI_GENERAL_RO_ATTR(unusable_blocks_per_sec); 1213 + F2FS_SBI_GENERAL_RO_ATTR(max_open_zones); 1216 1214 F2FS_SBI_GENERAL_RW_ATTR(blkzone_alloc_policy); 1217 1215 #endif 1218 1216 F2FS_SBI_GENERAL_RW_ATTR(carve_out); ··· 1300 1296 #ifdef CONFIG_UNICODE 1301 1297 F2FS_FEATURE_RO_ATTR(linear_lookup); 1302 1298 #endif 1299 + F2FS_FEATURE_RO_ATTR(packed_ssa); 1303 1300 1304 1301 #define ATTR_LIST(name) (&f2fs_attr_##name.attr) 1305 1302 static struct attribute *f2fs_attrs[] = { ··· 1389 1384 #endif 1390 1385 #ifdef CONFIG_BLK_DEV_ZONED 1391 1386 ATTR_LIST(unusable_blocks_per_sec), 1387 + ATTR_LIST(max_open_zones), 1392 1388 ATTR_LIST(blkzone_alloc_policy), 1393 1389 #endif 1394 1390 #ifdef CONFIG_F2FS_FS_COMPRESSION ··· 1461 1455 #ifdef CONFIG_UNICODE 1462 1456 BASE_ATTR_LIST(linear_lookup), 1463 1457 #endif 1458 + BASE_ATTR_LIST(packed_ssa), 1464 1459 NULL, 1465 1460 }; 1466 1461 ATTRIBUTE_GROUPS(f2fs_feat); ··· 1497 1490 F2FS_SB_FEATURE_RO_ATTR(compression, COMPRESSION); 1498 1491 F2FS_SB_FEATURE_RO_ATTR(readonly, RO); 1499 1492 F2FS_SB_FEATURE_RO_ATTR(device_alias, DEVICE_ALIAS); 1493 + F2FS_SB_FEATURE_RO_ATTR(packed_ssa, PACKED_SSA); 1500 1494 1501 1495 static struct attribute *f2fs_sb_feat_attrs[] = { 1502 1496 ATTR_LIST(sb_encryption), ··· 1515 1507 ATTR_LIST(sb_compression), 1516 1508 ATTR_LIST(sb_readonly), 1517 1509 ATTR_LIST(sb_device_alias), 1510 + ATTR_LIST(sb_packed_ssa), 1518 1511 NULL, 1519 1512 }; 1520 1513 ATTRIBUTE_GROUPS(f2fs_sb_feat);
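The new max_open_zones attribute is read-only and, per the ABI entry, reports how many zones F2FS can write concurrently on a zoned device. A small C reader; the device name sda stands in for the <disk> placeholder and is hypothetical:

#include <stdio.h>

int main(void)
{
	/* replace "sda" with the mounted f2fs device name */
	FILE *f = fopen("/sys/fs/f2fs/sda/max_open_zones", "r");
	char buf[32];

	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("max open zones: %s", buf);
	fclose(f);
	return 0;
}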
+1 -1
fs/f2fs/verity.c
··· 263 263 264 264 index += f2fs_verity_metadata_pos(inode) >> PAGE_SHIFT; 265 265 266 - folio = __filemap_get_folio(inode->i_mapping, index, FGP_ACCESSED, 0); 266 + folio = f2fs_filemap_get_folio(inode->i_mapping, index, FGP_ACCESSED, 0); 267 267 if (IS_ERR(folio) || !folio_test_uptodate(folio)) { 268 268 DEFINE_READAHEAD(ractl, NULL, NULL, inode->i_mapping, index); 269 269
+10 -20
fs/f2fs/xattr.c
··· 23 23 #include "xattr.h" 24 24 #include "segment.h" 25 25 26 + static struct kmem_cache *inline_xattr_slab; 26 27 static void *xattr_alloc(struct f2fs_sb_info *sbi, int size, bool *is_inline) 27 28 { 28 - if (likely(size == sbi->inline_xattr_slab_size)) { 29 + if (likely(size == DEFAULT_XATTR_SLAB_SIZE)) { 29 30 *is_inline = true; 30 - return f2fs_kmem_cache_alloc(sbi->inline_xattr_slab, 31 + return f2fs_kmem_cache_alloc(inline_xattr_slab, 31 32 GFP_F2FS_ZERO, false, sbi); 32 33 } 33 34 *is_inline = false; ··· 39 38 bool is_inline) 40 39 { 41 40 if (is_inline) 42 - kmem_cache_free(sbi->inline_xattr_slab, xattr_addr); 41 + kmem_cache_free(inline_xattr_slab, xattr_addr); 43 42 else 44 43 kfree(xattr_addr); 45 44 } ··· 831 830 return err; 832 831 } 833 832 834 - int f2fs_init_xattr_caches(struct f2fs_sb_info *sbi) 833 + int __init f2fs_init_xattr_cache(void) 835 834 { 836 - dev_t dev = sbi->sb->s_bdev->bd_dev; 837 - char slab_name[32]; 838 - 839 - sprintf(slab_name, "f2fs_xattr_entry-%u:%u", MAJOR(dev), MINOR(dev)); 840 - 841 - sbi->inline_xattr_slab_size = F2FS_OPTION(sbi).inline_xattr_size * 842 - sizeof(__le32) + XATTR_PADDING_SIZE; 843 - 844 - sbi->inline_xattr_slab = f2fs_kmem_cache_create(slab_name, 845 - sbi->inline_xattr_slab_size); 846 - if (!sbi->inline_xattr_slab) 847 - return -ENOMEM; 848 - 849 - return 0; 835 + inline_xattr_slab = f2fs_kmem_cache_create("f2fs_xattr_entry", 836 + DEFAULT_XATTR_SLAB_SIZE); 837 + return inline_xattr_slab ? 0 : -ENOMEM; 850 838 } 851 839 852 - void f2fs_destroy_xattr_caches(struct f2fs_sb_info *sbi) 840 + void f2fs_destroy_xattr_cache(void) 853 841 { 854 - kmem_cache_destroy(sbi->inline_xattr_slab); 842 + kmem_cache_destroy(inline_xattr_slab); 855 843 }
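With the per-superblock slab gone, xattr.c keys allocation on one compile-time size: buffers of DEFAULT_XATTR_SLAB_SIZE come from the shared inline_xattr_slab, anything else goes through kmalloc/kfree. A userspace sketch of that split, with calloc/free standing in for the slab and the size value assumed:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define DEFAULT_XATTR_SLAB_SIZE 512	/* assumed value for the demo */

static void *xattr_alloc(size_t size, bool *is_inline)
{
	*is_inline = (size == DEFAULT_XATTR_SLAB_SIZE);
	/* stand-in: a real slab cache recycles fixed-size objects */
	return calloc(1, *is_inline ? DEFAULT_XATTR_SLAB_SIZE : size);
}

static void xattr_free(void *addr, bool is_inline)
{
	/* in the kernel: kmem_cache_free() vs kfree(); one pool here */
	(void)is_inline;
	free(addr);
}

int main(void)
{
	bool inline_hit;
	void *buf = xattr_alloc(DEFAULT_XATTR_SLAB_SIZE, &inline_hit);

	printf("from shared pool: %s\n", inline_hit ? "yes" : "no");
	xattr_free(buf, inline_hit);
	return 0;
}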
+6 -4
fs/f2fs/xattr.h
··· 89 89 F2FS_TOTAL_EXTRA_ATTR_SIZE / sizeof(__le32) - \ 90 90 DEF_INLINE_RESERVED_SIZE - \ 91 91 MIN_INLINE_DENTRY_SIZE / sizeof(__le32)) 92 + #define DEFAULT_XATTR_SLAB_SIZE (DEFAULT_INLINE_XATTR_ADDRS * \ 93 + sizeof(__le32) + XATTR_PADDING_SIZE) 92 94 93 95 /* 94 96 * On-disk structure of f2fs_xattr ··· 134 132 int f2fs_getxattr(struct inode *, int, const char *, void *, 135 133 size_t, struct folio *); 136 134 ssize_t f2fs_listxattr(struct dentry *, char *, size_t); 137 - int f2fs_init_xattr_caches(struct f2fs_sb_info *); 138 - void f2fs_destroy_xattr_caches(struct f2fs_sb_info *); 135 + int __init f2fs_init_xattr_cache(void); 136 + void f2fs_destroy_xattr_cache(void); 139 137 #else 140 138 141 139 #define f2fs_xattr_handlers NULL ··· 152 150 { 153 151 return -EOPNOTSUPP; 154 152 } 155 - static inline int f2fs_init_xattr_caches(struct f2fs_sb_info *sbi) { return 0; } 156 - static inline void f2fs_destroy_xattr_caches(struct f2fs_sb_info *sbi) { } 153 + static inline int __init f2fs_init_xattr_cache(void) { return 0; } 154 + static inline void f2fs_destroy_xattr_cache(void) { } 157 155 #endif 158 156 159 157 #ifdef CONFIG_F2FS_FS_SECURITY
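DEFAULT_XATTR_SLAB_SIZE above fixes the slab object size at compile time from the default inline-xattr slot count, which is what allows a single global cache. A worked example of the macro; the slot count of 50 and the 4-byte padding are assumptions for the demo, not values taken from the header:

#include <stdint.h>
#include <stdio.h>

#define DEFAULT_INLINE_XATTR_ADDRS 50			/* hypothetical */
#define XATTR_PADDING_SIZE         sizeof(uint32_t)	/* assumed: 4 */
#define DEFAULT_XATTR_SLAB_SIZE    (DEFAULT_INLINE_XATTR_ADDRS * \
				    sizeof(uint32_t) + XATTR_PADDING_SIZE)

int main(void)
{
	/* every inline xattr buffer now has this one fixed size */
	printf("slab object size = %zu bytes\n",
	       (size_t)DEFAULT_XATTR_SLAB_SIZE);
	return 0;
}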
+3 -2
include/linux/f2fs_fs.h
··· 17 17 #define F2FS_LOG_SECTORS_PER_BLOCK (PAGE_SHIFT - 9) /* log number for sector/blk */ 18 18 #define F2FS_BLKSIZE PAGE_SIZE /* support only block == page */ 19 19 #define F2FS_BLKSIZE_BITS PAGE_SHIFT /* bits for F2FS_BLKSIZE */ 20 + #define F2FS_SUM_BLKSIZE 4096 /* only support 4096 byte sum block */ 20 21 #define F2FS_MAX_EXTENSION 64 /* # of extension entries */ 21 22 #define F2FS_EXTENSION_LEN 8 /* max size of extension */ 22 23 ··· 442 441 * from node's page's beginning to get a data block address. 443 442 * ex) data_blkaddr = (block_t)(nodepage_start_address + ofs_in_node) 444 443 */ 445 - #define ENTRIES_IN_SUM (F2FS_BLKSIZE / 8) 444 + #define ENTRIES_IN_SUM (F2FS_SUM_BLKSIZE / 8) 446 445 #define SUMMARY_SIZE (7) /* sizeof(struct f2fs_summary) */ 447 446 #define SUM_FOOTER_SIZE (5) /* sizeof(struct summary_footer) */ 448 447 #define SUM_ENTRY_SIZE (SUMMARY_SIZE * ENTRIES_IN_SUM) ··· 468 467 __le32 check_sum; /* summary checksum */ 469 468 } __packed; 470 469 471 - #define SUM_JOURNAL_SIZE (F2FS_BLKSIZE - SUM_FOOTER_SIZE -\ 470 + #define SUM_JOURNAL_SIZE (F2FS_SUM_BLKSIZE - SUM_FOOTER_SIZE -\ 472 471 SUM_ENTRY_SIZE) 473 472 #define NAT_JOURNAL_ENTRIES ((SUM_JOURNAL_SIZE - 2) /\ 474 473 sizeof(struct nat_journal_entry))
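Pinning the summary geometry to F2FS_SUM_BLKSIZE is what reverts ENTRIES_IN_SUM from 2048 back to 512 on 16KB-block builds. A quick check, using only the constants visible in the hunk, that entries, journal, and footer still tile one 4KB summary area exactly:

#include <assert.h>
#include <stdio.h>

#define F2FS_SUM_BLKSIZE 4096
#define ENTRIES_IN_SUM   (F2FS_SUM_BLKSIZE / 8)	/* 512 */
#define SUMMARY_SIZE     7	/* sizeof(struct f2fs_summary) */
#define SUM_FOOTER_SIZE  5	/* sizeof(struct summary_footer) */
#define SUM_ENTRY_SIZE   (SUMMARY_SIZE * ENTRIES_IN_SUM)
#define SUM_JOURNAL_SIZE (F2FS_SUM_BLKSIZE - SUM_FOOTER_SIZE - SUM_ENTRY_SIZE)

int main(void)
{
	printf("entries=%d entry_bytes=%d journal=%d footer=%d\n",
	       ENTRIES_IN_SUM, SUM_ENTRY_SIZE, SUM_JOURNAL_SIZE,
	       SUM_FOOTER_SIZE);
	assert(SUM_ENTRY_SIZE + SUM_JOURNAL_SIZE + SUM_FOOTER_SIZE ==
	       F2FS_SUM_BLKSIZE);	/* 3584 + 507 + 5 == 4096 */
	return 0;
}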
+50 -9
include/trace/events/f2fs.h
··· 50 50 TRACE_DEFINE_ENUM(CP_RESIZE); 51 51 TRACE_DEFINE_ENUM(EX_READ); 52 52 TRACE_DEFINE_ENUM(EX_BLOCK_AGE); 53 + TRACE_DEFINE_ENUM(CP_PHASE_START_BLOCK_OPS); 54 + TRACE_DEFINE_ENUM(CP_PHASE_FINISH_BLOCK_OPS); 55 + TRACE_DEFINE_ENUM(CP_PHASE_FINISH_CHECKPOINT); 53 56 54 57 #define show_block_type(type) \ 55 58 __print_symbolic(type, \ ··· 178 175 #define S_ALL_PERM (S_ISUID | S_ISGID | S_ISVTX | \ 179 176 S_IRWXU | S_IRWXG | S_IRWXO) 180 177 178 + #define show_cp_phase(phase) \ 179 + __print_symbolic(phase, \ 180 + { CP_PHASE_START_BLOCK_OPS, "start block_ops" }, \ 181 + { CP_PHASE_FINISH_BLOCK_OPS, "finish block_ops" }, \ 182 + { CP_PHASE_FINISH_CHECKPOINT, "finish checkpoint" }) 183 + 181 184 struct f2fs_sb_info; 182 185 struct f2fs_io_info; 183 186 struct extent_info; ··· 213 204 __entry->pino = F2FS_I(inode)->i_pino; 214 205 __entry->mode = inode->i_mode; 215 206 __entry->nlink = inode->i_nlink; 216 - __entry->size = inode->i_size; 207 + __entry->size = i_size_read(inode); 217 208 __entry->blocks = inode->i_blocks; 218 209 __entry->advise = F2FS_I(inode)->i_advise; 219 210 ), ··· 362 353 TP_fast_assign( 363 354 __entry->dev = dir->i_sb->s_dev; 364 355 __entry->ino = dir->i_ino; 365 - __entry->size = dir->i_size; 356 + __entry->size = i_size_read(dir); 366 357 __entry->blocks = dir->i_blocks; 367 358 __assign_str(name); 368 359 ), ··· 442 433 TP_fast_assign( 443 434 __entry->dev = inode->i_sb->s_dev; 444 435 __entry->ino = inode->i_ino; 445 - __entry->size = inode->i_size; 436 + __entry->size = i_size_read(inode); 446 437 __entry->blocks = inode->i_blocks; 447 438 __entry->from = from; 448 439 ), ··· 593 584 __entry->offset, 594 585 __entry->length, 595 586 __entry->ret) 587 + ); 588 + 589 + TRACE_EVENT(f2fs_fadvise, 590 + 591 + TP_PROTO(struct inode *inode, loff_t offset, loff_t len, int advice), 592 + 593 + TP_ARGS(inode, offset, len, advice), 594 + 595 + TP_STRUCT__entry( 596 + __field(dev_t, dev) 597 + __field(ino_t, ino) 598 + __field(loff_t, size) 599 + __field(loff_t, offset) 600 + __field(loff_t, len) 601 + __field(int, advice) 602 + ), 603 + 604 + TP_fast_assign( 605 + __entry->dev = inode->i_sb->s_dev; 606 + __entry->ino = inode->i_ino; 607 + __entry->size = i_size_read(inode); 608 + __entry->offset = offset; 609 + __entry->len = len; 610 + __entry->advice = advice; 611 + ), 612 + 613 + TP_printk("dev = (%d,%d), ino = %lu, i_size = %lld offset:%llu, len:%llu, advise:%d", 614 + show_dev_ino(__entry), 615 + (unsigned long long)__entry->size, 616 + __entry->offset, 617 + __entry->len, 618 + __entry->advice) 596 619 ); 597 620 598 621 TRACE_EVENT(f2fs_map_blocks, ··· 1047 1006 __entry->mode = mode; 1048 1007 __entry->offset = offset; 1049 1008 __entry->len = len; 1050 - __entry->size = inode->i_size; 1009 + __entry->size = i_size_read(inode); 1051 1010 __entry->blocks = inode->i_blocks; 1052 1011 __entry->ret = ret; 1053 1012 ), ··· 1582 1541 1583 1542 TRACE_EVENT(f2fs_write_checkpoint, 1584 1543 1585 - TP_PROTO(struct super_block *sb, int reason, const char *msg), 1544 + TP_PROTO(struct super_block *sb, int reason, u16 phase), 1586 1545 1587 - TP_ARGS(sb, reason, msg), 1546 + TP_ARGS(sb, reason, phase), 1588 1547 1589 1548 TP_STRUCT__entry( 1590 1549 __field(dev_t, dev) 1591 1550 __field(int, reason) 1592 - __string(dest_msg, msg) 1551 + __field(u16, phase) 1593 1552 ), 1594 1553 1595 1554 TP_fast_assign( 1596 1555 __entry->dev = sb->s_dev; 1597 1556 __entry->reason = reason; 1598 - __assign_str(dest_msg); 1557 + __entry->phase = phase; 1599 1558 ), 1600 1559 1601 1560 
TP_printk("dev = (%d,%d), checkpoint for %s, state = %s", 1602 1561 show_dev(__entry->dev), 1603 1562 show_cpreason(__entry->reason), 1604 - __get_str(dest_msg)) 1563 + show_cp_phase(__entry->phase)) 1605 1564 ); 1606 1565 1607 1566 DECLARE_EVENT_CLASS(f2fs_discard,