Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

f2fs: support data compression

This patch tries to support compression in f2fs.

- A new term, cluster, is defined as the basic unit of compression; a file
can be logically divided into multiple clusters. One cluster includes 4 << n
(n >= 0) logical pages, the compression size is also the cluster size, and
each cluster can be compressed or left uncompressed.

- In the cluster metadata layout, one special flag is used to indicate whether
a cluster is a compressed one or a normal one; for a compressed cluster, the
following metadata maps the cluster to [1, 4 << n - 1] physical blocks, in
which f2fs stores data including the compress header and compressed data.

- In order to eliminate write amplification during overwrite, F2FS only
supports compression on write-once files; data can be compressed only when
all logical blocks in the file are valid and the cluster compress ratio is
lower than the specified threshold.

- To enable compression on a regular inode, there are three ways:
* chattr +c file
* chattr +c dir; touch dir/file
* mount w/ -o compress_extension=ext; touch file.ext

Compress metadata layout:
[Dnode Structure]
+-----------------------------------------------+
| cluster 1 | cluster 2 | ......... | cluster N |
+-----------------------------------------------+
     .                    .                         .                   .
       .                    .                        .                    .
         .  Compressed Cluster  .                      .  Normal Cluster  .
+----------+---------+---------+---------+ +---------+---------+---------+---------+
|compr flag| block 1 | block 2 | block 3 | | block 1 | block 2 | block 3 | block 4 |
+----------+---------+---------+---------+ +---------+---------+---------+---------+
           .                              .
            .                                 .
             .                                    .
+-------------+-------------+----------+----------------------------+
| data length | data chksum | reserved | compressed data |
+-------------+-------------+----------+----------------------------+

Changelog:

20190326:
- fix error handling of read_end_io().
- remove unneeded comments in f2fs_encrypt_one_page().

20190327:
- fix wrong use of f2fs_cluster_is_full() in f2fs_mpage_readpages().
- don't jump into loop directly to avoid uninitialized variables.
- add TODO tag in error path of f2fs_write_cache_pages().

20190328:
- fix wrong merge condition in f2fs_read_multi_pages().
- check compressed file in f2fs_post_read_required().

20190401:
- allow overwrite on non-compressed cluster.
- check cluster meta before writing compressed data.

20190402:
- don't preallocate blocks for compressed file.

- add lz4 compress algorithm
- process multiple post read works in one workqueue
  Previously f2fs processed post read works in multiple workqueues,
  which showed low performance due to the scheduling overhead of
  multiple workqueues executing in order.

20190921:
- compress: support buffered overwrite
  C: compress cluster flag
  V: valid block address
  N: NEW_ADDR

  One cluster contains 4 blocks:

  before overwrite        after overwrite

  - VVVV         ->       CVNN
  - CVNN         ->       VVVV

  - CVNN         ->       CVNN
  - CVNN         ->       CVVV

  - CVVV         ->       CVNN
  - CVVV         ->       CVVV

20191029:
- add kconfig F2FS_FS_COMPRESSION to isolate compression related
  code, add kconfig F2FS_FS_{LZO,LZ4} to cover backend algorithms.
  note: the lzo backend will be removed if Jaegeuk agrees.
- update code according to Eric's comments.

20191101:
- apply fixes from Jaegeuk

20191113:
- apply fixes from Jaegeuk
- split workqueue for fsverity

20191216:
- apply fixes from Jaegeuk

20200117:
- fix to avoid NULL pointer dereference

[Jaegeuk Kim]
- add tracepoint for f2fs_{,de}compress_pages()
- fix many bugs and add some compression stats
- fix overwrite/mmap bugs
- address 32bit build error, reported by Geert.
- bug fixes when handling errors and i_compressed_blocks

Reported-by: <noreply@ellerman.id.au>
Signed-off-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>

Authored by Chao Yu, committed by Jaegeuk Kim
4c8ff709 820d3667

+2569 -120

+52  Documentation/filesystems/f2fs.txt

···
  hide up to all remaining free space. The actual space that
  would be unusable can be viewed at /sys/fs/f2fs/<disk>/unusable
  This space is reclaimed once checkpoint=enable.
+ compress_algorithm=%s Control compress algorithm, currently f2fs supports "lzo"
+                       and "lz4" algorithm.
+ compress_log_size=%u  Support configuring compress cluster size, the size will
+                       be 4KB * (1 << %u), 16KB is minimum size, also it's
+                       default size.
+ compress_extension=%s Support adding specified extension, so that f2fs can enable
+                       compression on those corresponding files, e.g. if all files
+                       with '.ext' has high compression rate, we can set the '.ext'
+                       on compression extension list and enable compression on
+                       these file by default rather than to enable it via ioctl.
+                       For other files, we can still enable compression via ioctl.

  ================================================================================
  DEBUGFS ENTRIES
···
  4. address = fibmap(fd, offset)
  5. open(blkdev)
  6. write(blkdev, address)
+
+ Compression implementation
+ --------------------------
+
+ - New term named cluster is defined as basic unit of compression, file can
+   be divided into multiple clusters logically. One cluster includes 4 << n
+   (n >= 0) logical pages, compression size is also cluster size, each of
+   cluster can be compressed or not.
+
+ - In cluster metadata layout, one special block address is used to indicate
+   cluster is compressed one or normal one, for compressed cluster, following
+   metadata maps cluster to [1, 4 << n - 1] physical blocks, in where f2fs
+   stores data including compress header and compressed data.
+
+ - In order to eliminate write amplification during overwrite, F2FS only
+   support compression on write-once file, data can be compressed only when
+   all logical blocks in file are valid and cluster compress ratio is lower
+   than specified threshold.
+
+ - To enable compression on regular inode, there are three ways:
+   * chattr +c file
+   * chattr +c dir; touch dir/file
+   * mount w/ -o compress_extension=ext; touch file.ext
+
+ Compress metadata layout:
+ [Dnode Structure]
+ +-----------------------------------------------+
+ | cluster 1 | cluster 2 | ......... | cluster N |
+ +-----------------------------------------------+
+      .  Compressed Cluster  .                      .  Normal Cluster  .
+ +----------+---------+---------+---------+  +---------+---------+---------+---------+
+ |compr flag| block 1 | block 2 | block 3 |  | block 1 | block 2 | block 3 | block 4 |
+ +----------+---------+---------+---------+  +---------+---------+---------+---------+
+            .                             .
+ +-------------+-------------+----------+----------------------------+
+ | data length | data chksum | reserved |      compressed data       |
+ +-------------+-------------+----------+----------------------------+
+25  fs/f2fs/Kconfig

···
  	  Test F2FS to inject faults such as ENOMEM, ENOSPC, and so on.

  	  If unsure, say N.
+
+ config F2FS_FS_COMPRESSION
+ 	bool "F2FS compression feature"
+ 	depends on F2FS_FS
+ 	help
+ 	  Enable filesystem-level compression on f2fs regular files,
+ 	  multiple back-end compression algorithms are supported.
+
+ config F2FS_FS_LZO
+ 	bool "LZO compression support"
+ 	depends on F2FS_FS_COMPRESSION
+ 	select LZO_COMPRESS
+ 	select LZO_DECOMPRESS
+ 	default y
+ 	help
+ 	  Support LZO compress algorithm, if unsure, say Y.
+
+ config F2FS_FS_LZ4
+ 	bool "LZ4 compression support"
+ 	depends on F2FS_FS_COMPRESSION
+ 	select LZ4_COMPRESS
+ 	select LZ4_DECOMPRESS
+ 	default y
+ 	help
+ 	  Support LZ4 compress algorithm, if unsure, say Y.
+1  fs/f2fs/Makefile

···
  f2fs-$(CONFIG_F2FS_FS_POSIX_ACL) += acl.o
  f2fs-$(CONFIG_F2FS_IO_TRACE) += trace.o
  f2fs-$(CONFIG_FS_VERITY) += verity.o
+ f2fs-$(CONFIG_F2FS_FS_COMPRESSION) += compress.o
+1176  fs/f2fs/compress.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * f2fs compress support 4 + * 5 + * Copyright (c) 2019 Chao Yu <chao@kernel.org> 6 + */ 7 + 8 + #include <linux/fs.h> 9 + #include <linux/f2fs_fs.h> 10 + #include <linux/writeback.h> 11 + #include <linux/backing-dev.h> 12 + #include <linux/lzo.h> 13 + #include <linux/lz4.h> 14 + 15 + #include "f2fs.h" 16 + #include "node.h" 17 + #include <trace/events/f2fs.h> 18 + 19 + struct f2fs_compress_ops { 20 + int (*init_compress_ctx)(struct compress_ctx *cc); 21 + void (*destroy_compress_ctx)(struct compress_ctx *cc); 22 + int (*compress_pages)(struct compress_ctx *cc); 23 + int (*decompress_pages)(struct decompress_io_ctx *dic); 24 + }; 25 + 26 + static unsigned int offset_in_cluster(struct compress_ctx *cc, pgoff_t index) 27 + { 28 + return index & (cc->cluster_size - 1); 29 + } 30 + 31 + static pgoff_t cluster_idx(struct compress_ctx *cc, pgoff_t index) 32 + { 33 + return index >> cc->log_cluster_size; 34 + } 35 + 36 + static pgoff_t start_idx_of_cluster(struct compress_ctx *cc) 37 + { 38 + return cc->cluster_idx << cc->log_cluster_size; 39 + } 40 + 41 + bool f2fs_is_compressed_page(struct page *page) 42 + { 43 + if (!PagePrivate(page)) 44 + return false; 45 + if (!page_private(page)) 46 + return false; 47 + if (IS_ATOMIC_WRITTEN_PAGE(page) || IS_DUMMY_WRITTEN_PAGE(page)) 48 + return false; 49 + f2fs_bug_on(F2FS_M_SB(page->mapping), 50 + *((u32 *)page_private(page)) != F2FS_COMPRESSED_PAGE_MAGIC); 51 + return true; 52 + } 53 + 54 + static void f2fs_set_compressed_page(struct page *page, 55 + struct inode *inode, pgoff_t index, void *data, refcount_t *r) 56 + { 57 + SetPagePrivate(page); 58 + set_page_private(page, (unsigned long)data); 59 + 60 + /* i_crypto_info and iv index */ 61 + page->index = index; 62 + page->mapping = inode->i_mapping; 63 + if (r) 64 + refcount_inc(r); 65 + } 66 + 67 + static void f2fs_put_compressed_page(struct page *page) 68 + { 69 + set_page_private(page, (unsigned long)NULL); 70 + 
ClearPagePrivate(page); 71 + page->mapping = NULL; 72 + unlock_page(page); 73 + put_page(page); 74 + } 75 + 76 + static void f2fs_drop_rpages(struct compress_ctx *cc, int len, bool unlock) 77 + { 78 + int i; 79 + 80 + for (i = 0; i < len; i++) { 81 + if (!cc->rpages[i]) 82 + continue; 83 + if (unlock) 84 + unlock_page(cc->rpages[i]); 85 + else 86 + put_page(cc->rpages[i]); 87 + } 88 + } 89 + 90 + static void f2fs_put_rpages(struct compress_ctx *cc) 91 + { 92 + f2fs_drop_rpages(cc, cc->cluster_size, false); 93 + } 94 + 95 + static void f2fs_unlock_rpages(struct compress_ctx *cc, int len) 96 + { 97 + f2fs_drop_rpages(cc, len, true); 98 + } 99 + 100 + static void f2fs_put_rpages_mapping(struct compress_ctx *cc, 101 + struct address_space *mapping, 102 + pgoff_t start, int len) 103 + { 104 + int i; 105 + 106 + for (i = 0; i < len; i++) { 107 + struct page *page = find_get_page(mapping, start + i); 108 + 109 + put_page(page); 110 + put_page(page); 111 + } 112 + } 113 + 114 + static void f2fs_put_rpages_wbc(struct compress_ctx *cc, 115 + struct writeback_control *wbc, bool redirty, int unlock) 116 + { 117 + unsigned int i; 118 + 119 + for (i = 0; i < cc->cluster_size; i++) { 120 + if (!cc->rpages[i]) 121 + continue; 122 + if (redirty) 123 + redirty_page_for_writepage(wbc, cc->rpages[i]); 124 + f2fs_put_page(cc->rpages[i], unlock); 125 + } 126 + } 127 + 128 + struct page *f2fs_compress_control_page(struct page *page) 129 + { 130 + return ((struct compress_io_ctx *)page_private(page))->rpages[0]; 131 + } 132 + 133 + int f2fs_init_compress_ctx(struct compress_ctx *cc) 134 + { 135 + struct f2fs_sb_info *sbi = F2FS_I_SB(cc->inode); 136 + 137 + if (cc->nr_rpages) 138 + return 0; 139 + 140 + cc->rpages = f2fs_kzalloc(sbi, sizeof(struct page *) << 141 + cc->log_cluster_size, GFP_NOFS); 142 + return cc->rpages ? 
0 : -ENOMEM; 143 + } 144 + 145 + void f2fs_destroy_compress_ctx(struct compress_ctx *cc) 146 + { 147 + kfree(cc->rpages); 148 + cc->rpages = NULL; 149 + cc->nr_rpages = 0; 150 + cc->nr_cpages = 0; 151 + cc->cluster_idx = NULL_CLUSTER; 152 + } 153 + 154 + void f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct page *page) 155 + { 156 + unsigned int cluster_ofs; 157 + 158 + if (!f2fs_cluster_can_merge_page(cc, page->index)) 159 + f2fs_bug_on(F2FS_I_SB(cc->inode), 1); 160 + 161 + cluster_ofs = offset_in_cluster(cc, page->index); 162 + cc->rpages[cluster_ofs] = page; 163 + cc->nr_rpages++; 164 + cc->cluster_idx = cluster_idx(cc, page->index); 165 + } 166 + 167 + #ifdef CONFIG_F2FS_FS_LZO 168 + static int lzo_init_compress_ctx(struct compress_ctx *cc) 169 + { 170 + cc->private = f2fs_kvmalloc(F2FS_I_SB(cc->inode), 171 + LZO1X_MEM_COMPRESS, GFP_NOFS); 172 + if (!cc->private) 173 + return -ENOMEM; 174 + 175 + cc->clen = lzo1x_worst_compress(PAGE_SIZE << cc->log_cluster_size); 176 + return 0; 177 + } 178 + 179 + static void lzo_destroy_compress_ctx(struct compress_ctx *cc) 180 + { 181 + kvfree(cc->private); 182 + cc->private = NULL; 183 + } 184 + 185 + static int lzo_compress_pages(struct compress_ctx *cc) 186 + { 187 + int ret; 188 + 189 + ret = lzo1x_1_compress(cc->rbuf, cc->rlen, cc->cbuf->cdata, 190 + &cc->clen, cc->private); 191 + if (ret != LZO_E_OK) { 192 + printk_ratelimited("%sF2FS-fs (%s): lzo compress failed, ret:%d\n", 193 + KERN_ERR, F2FS_I_SB(cc->inode)->sb->s_id, ret); 194 + return -EIO; 195 + } 196 + return 0; 197 + } 198 + 199 + static int lzo_decompress_pages(struct decompress_io_ctx *dic) 200 + { 201 + int ret; 202 + 203 + ret = lzo1x_decompress_safe(dic->cbuf->cdata, dic->clen, 204 + dic->rbuf, &dic->rlen); 205 + if (ret != LZO_E_OK) { 206 + printk_ratelimited("%sF2FS-fs (%s): lzo decompress failed, ret:%d\n", 207 + KERN_ERR, F2FS_I_SB(dic->inode)->sb->s_id, ret); 208 + return -EIO; 209 + } 210 + 211 + if (dic->rlen != PAGE_SIZE << 
dic->log_cluster_size) { 212 + printk_ratelimited("%sF2FS-fs (%s): lzo invalid rlen:%zu, " 213 + "expected:%lu\n", KERN_ERR, 214 + F2FS_I_SB(dic->inode)->sb->s_id, 215 + dic->rlen, 216 + PAGE_SIZE << dic->log_cluster_size); 217 + return -EIO; 218 + } 219 + return 0; 220 + } 221 + 222 + static const struct f2fs_compress_ops f2fs_lzo_ops = { 223 + .init_compress_ctx = lzo_init_compress_ctx, 224 + .destroy_compress_ctx = lzo_destroy_compress_ctx, 225 + .compress_pages = lzo_compress_pages, 226 + .decompress_pages = lzo_decompress_pages, 227 + }; 228 + #endif 229 + 230 + #ifdef CONFIG_F2FS_FS_LZ4 231 + static int lz4_init_compress_ctx(struct compress_ctx *cc) 232 + { 233 + cc->private = f2fs_kvmalloc(F2FS_I_SB(cc->inode), 234 + LZ4_MEM_COMPRESS, GFP_NOFS); 235 + if (!cc->private) 236 + return -ENOMEM; 237 + 238 + cc->clen = LZ4_compressBound(PAGE_SIZE << cc->log_cluster_size); 239 + return 0; 240 + } 241 + 242 + static void lz4_destroy_compress_ctx(struct compress_ctx *cc) 243 + { 244 + kvfree(cc->private); 245 + cc->private = NULL; 246 + } 247 + 248 + static int lz4_compress_pages(struct compress_ctx *cc) 249 + { 250 + int len; 251 + 252 + len = LZ4_compress_default(cc->rbuf, cc->cbuf->cdata, cc->rlen, 253 + cc->clen, cc->private); 254 + if (!len) { 255 + printk_ratelimited("%sF2FS-fs (%s): lz4 compress failed\n", 256 + KERN_ERR, F2FS_I_SB(cc->inode)->sb->s_id); 257 + return -EIO; 258 + } 259 + cc->clen = len; 260 + return 0; 261 + } 262 + 263 + static int lz4_decompress_pages(struct decompress_io_ctx *dic) 264 + { 265 + int ret; 266 + 267 + ret = LZ4_decompress_safe(dic->cbuf->cdata, dic->rbuf, 268 + dic->clen, dic->rlen); 269 + if (ret < 0) { 270 + printk_ratelimited("%sF2FS-fs (%s): lz4 decompress failed, ret:%d\n", 271 + KERN_ERR, F2FS_I_SB(dic->inode)->sb->s_id, ret); 272 + return -EIO; 273 + } 274 + 275 + if (ret != PAGE_SIZE << dic->log_cluster_size) { 276 + printk_ratelimited("%sF2FS-fs (%s): lz4 invalid rlen:%zu, " 277 + "expected:%lu\n", KERN_ERR, 278 + 
F2FS_I_SB(dic->inode)->sb->s_id, 279 + dic->rlen, 280 + PAGE_SIZE << dic->log_cluster_size); 281 + return -EIO; 282 + } 283 + return 0; 284 + } 285 + 286 + static const struct f2fs_compress_ops f2fs_lz4_ops = { 287 + .init_compress_ctx = lz4_init_compress_ctx, 288 + .destroy_compress_ctx = lz4_destroy_compress_ctx, 289 + .compress_pages = lz4_compress_pages, 290 + .decompress_pages = lz4_decompress_pages, 291 + }; 292 + #endif 293 + 294 + static const struct f2fs_compress_ops *f2fs_cops[COMPRESS_MAX] = { 295 + #ifdef CONFIG_F2FS_FS_LZO 296 + &f2fs_lzo_ops, 297 + #else 298 + NULL, 299 + #endif 300 + #ifdef CONFIG_F2FS_FS_LZ4 301 + &f2fs_lz4_ops, 302 + #else 303 + NULL, 304 + #endif 305 + }; 306 + 307 + bool f2fs_is_compress_backend_ready(struct inode *inode) 308 + { 309 + if (!f2fs_compressed_file(inode)) 310 + return true; 311 + return f2fs_cops[F2FS_I(inode)->i_compress_algorithm]; 312 + } 313 + 314 + static struct page *f2fs_grab_page(void) 315 + { 316 + struct page *page; 317 + 318 + page = alloc_page(GFP_NOFS); 319 + if (!page) 320 + return NULL; 321 + lock_page(page); 322 + return page; 323 + } 324 + 325 + static int f2fs_compress_pages(struct compress_ctx *cc) 326 + { 327 + struct f2fs_sb_info *sbi = F2FS_I_SB(cc->inode); 328 + struct f2fs_inode_info *fi = F2FS_I(cc->inode); 329 + const struct f2fs_compress_ops *cops = 330 + f2fs_cops[fi->i_compress_algorithm]; 331 + unsigned int max_len, nr_cpages; 332 + int i, ret; 333 + 334 + trace_f2fs_compress_pages_start(cc->inode, cc->cluster_idx, 335 + cc->cluster_size, fi->i_compress_algorithm); 336 + 337 + ret = cops->init_compress_ctx(cc); 338 + if (ret) 339 + goto out; 340 + 341 + max_len = COMPRESS_HEADER_SIZE + cc->clen; 342 + cc->nr_cpages = DIV_ROUND_UP(max_len, PAGE_SIZE); 343 + 344 + cc->cpages = f2fs_kzalloc(sbi, sizeof(struct page *) * 345 + cc->nr_cpages, GFP_NOFS); 346 + if (!cc->cpages) { 347 + ret = -ENOMEM; 348 + goto destroy_compress_ctx; 349 + } 350 + 351 + for (i = 0; i < cc->nr_cpages; i++) { 352 
+ cc->cpages[i] = f2fs_grab_page(); 353 + if (!cc->cpages[i]) { 354 + ret = -ENOMEM; 355 + goto out_free_cpages; 356 + } 357 + } 358 + 359 + cc->rbuf = vmap(cc->rpages, cc->cluster_size, VM_MAP, PAGE_KERNEL_RO); 360 + if (!cc->rbuf) { 361 + ret = -ENOMEM; 362 + goto out_free_cpages; 363 + } 364 + 365 + cc->cbuf = vmap(cc->cpages, cc->nr_cpages, VM_MAP, PAGE_KERNEL); 366 + if (!cc->cbuf) { 367 + ret = -ENOMEM; 368 + goto out_vunmap_rbuf; 369 + } 370 + 371 + ret = cops->compress_pages(cc); 372 + if (ret) 373 + goto out_vunmap_cbuf; 374 + 375 + max_len = PAGE_SIZE * (cc->cluster_size - 1) - COMPRESS_HEADER_SIZE; 376 + 377 + if (cc->clen > max_len) { 378 + ret = -EAGAIN; 379 + goto out_vunmap_cbuf; 380 + } 381 + 382 + cc->cbuf->clen = cpu_to_le32(cc->clen); 383 + cc->cbuf->chksum = cpu_to_le32(0); 384 + 385 + for (i = 0; i < COMPRESS_DATA_RESERVED_SIZE; i++) 386 + cc->cbuf->reserved[i] = cpu_to_le32(0); 387 + 388 + vunmap(cc->cbuf); 389 + vunmap(cc->rbuf); 390 + 391 + nr_cpages = DIV_ROUND_UP(cc->clen + COMPRESS_HEADER_SIZE, PAGE_SIZE); 392 + 393 + for (i = nr_cpages; i < cc->nr_cpages; i++) { 394 + f2fs_put_compressed_page(cc->cpages[i]); 395 + cc->cpages[i] = NULL; 396 + } 397 + 398 + cc->nr_cpages = nr_cpages; 399 + 400 + trace_f2fs_compress_pages_end(cc->inode, cc->cluster_idx, 401 + cc->clen, ret); 402 + return 0; 403 + 404 + out_vunmap_cbuf: 405 + vunmap(cc->cbuf); 406 + out_vunmap_rbuf: 407 + vunmap(cc->rbuf); 408 + out_free_cpages: 409 + for (i = 0; i < cc->nr_cpages; i++) { 410 + if (cc->cpages[i]) 411 + f2fs_put_compressed_page(cc->cpages[i]); 412 + } 413 + kfree(cc->cpages); 414 + cc->cpages = NULL; 415 + destroy_compress_ctx: 416 + cops->destroy_compress_ctx(cc); 417 + out: 418 + trace_f2fs_compress_pages_end(cc->inode, cc->cluster_idx, 419 + cc->clen, ret); 420 + return ret; 421 + } 422 + 423 + void f2fs_decompress_pages(struct bio *bio, struct page *page, bool verity) 424 + { 425 + struct decompress_io_ctx *dic = 426 + (struct decompress_io_ctx 
*)page_private(page); 427 + struct f2fs_sb_info *sbi = F2FS_I_SB(dic->inode); 428 + struct f2fs_inode_info *fi= F2FS_I(dic->inode); 429 + const struct f2fs_compress_ops *cops = 430 + f2fs_cops[fi->i_compress_algorithm]; 431 + int ret; 432 + 433 + dec_page_count(sbi, F2FS_RD_DATA); 434 + 435 + if (bio->bi_status || PageError(page)) 436 + dic->failed = true; 437 + 438 + if (refcount_dec_not_one(&dic->ref)) 439 + return; 440 + 441 + trace_f2fs_decompress_pages_start(dic->inode, dic->cluster_idx, 442 + dic->cluster_size, fi->i_compress_algorithm); 443 + 444 + /* submit partial compressed pages */ 445 + if (dic->failed) { 446 + ret = -EIO; 447 + goto out_free_dic; 448 + } 449 + 450 + dic->rbuf = vmap(dic->tpages, dic->cluster_size, VM_MAP, PAGE_KERNEL); 451 + if (!dic->rbuf) { 452 + ret = -ENOMEM; 453 + goto out_free_dic; 454 + } 455 + 456 + dic->cbuf = vmap(dic->cpages, dic->nr_cpages, VM_MAP, PAGE_KERNEL_RO); 457 + if (!dic->cbuf) { 458 + ret = -ENOMEM; 459 + goto out_vunmap_rbuf; 460 + } 461 + 462 + dic->clen = le32_to_cpu(dic->cbuf->clen); 463 + dic->rlen = PAGE_SIZE << dic->log_cluster_size; 464 + 465 + if (dic->clen > PAGE_SIZE * dic->nr_cpages - COMPRESS_HEADER_SIZE) { 466 + ret = -EFSCORRUPTED; 467 + goto out_vunmap_cbuf; 468 + } 469 + 470 + ret = cops->decompress_pages(dic); 471 + 472 + out_vunmap_cbuf: 473 + vunmap(dic->cbuf); 474 + out_vunmap_rbuf: 475 + vunmap(dic->rbuf); 476 + out_free_dic: 477 + if (!verity) 478 + f2fs_decompress_end_io(dic->rpages, dic->cluster_size, 479 + ret, false); 480 + 481 + trace_f2fs_decompress_pages_end(dic->inode, dic->cluster_idx, 482 + dic->clen, ret); 483 + if (!verity) 484 + f2fs_free_dic(dic); 485 + } 486 + 487 + static bool is_page_in_cluster(struct compress_ctx *cc, pgoff_t index) 488 + { 489 + if (cc->cluster_idx == NULL_CLUSTER) 490 + return true; 491 + return cc->cluster_idx == cluster_idx(cc, index); 492 + } 493 + 494 + bool f2fs_cluster_is_empty(struct compress_ctx *cc) 495 + { 496 + return cc->nr_rpages == 0; 497 + 
} 498 + 499 + static bool f2fs_cluster_is_full(struct compress_ctx *cc) 500 + { 501 + return cc->cluster_size == cc->nr_rpages; 502 + } 503 + 504 + bool f2fs_cluster_can_merge_page(struct compress_ctx *cc, pgoff_t index) 505 + { 506 + if (f2fs_cluster_is_empty(cc)) 507 + return true; 508 + return is_page_in_cluster(cc, index); 509 + } 510 + 511 + static bool __cluster_may_compress(struct compress_ctx *cc) 512 + { 513 + struct f2fs_sb_info *sbi = F2FS_I_SB(cc->inode); 514 + loff_t i_size = i_size_read(cc->inode); 515 + unsigned nr_pages = DIV_ROUND_UP(i_size, PAGE_SIZE); 516 + int i; 517 + 518 + for (i = 0; i < cc->cluster_size; i++) { 519 + struct page *page = cc->rpages[i]; 520 + 521 + f2fs_bug_on(sbi, !page); 522 + 523 + if (unlikely(f2fs_cp_error(sbi))) 524 + return false; 525 + if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING))) 526 + return false; 527 + 528 + /* beyond EOF */ 529 + if (page->index >= nr_pages) 530 + return false; 531 + } 532 + return true; 533 + } 534 + 535 + /* return # of compressed block addresses */ 536 + static int f2fs_compressed_blocks(struct compress_ctx *cc) 537 + { 538 + struct dnode_of_data dn; 539 + int ret; 540 + 541 + set_new_dnode(&dn, cc->inode, NULL, NULL, 0); 542 + ret = f2fs_get_dnode_of_data(&dn, start_idx_of_cluster(cc), 543 + LOOKUP_NODE); 544 + if (ret) { 545 + if (ret == -ENOENT) 546 + ret = 0; 547 + goto fail; 548 + } 549 + 550 + if (dn.data_blkaddr == COMPRESS_ADDR) { 551 + int i; 552 + 553 + ret = 1; 554 + for (i = 1; i < cc->cluster_size; i++) { 555 + block_t blkaddr; 556 + 557 + blkaddr = datablock_addr(dn.inode, 558 + dn.node_page, dn.ofs_in_node + i); 559 + if (blkaddr != NULL_ADDR) 560 + ret++; 561 + } 562 + } 563 + fail: 564 + f2fs_put_dnode(&dn); 565 + return ret; 566 + } 567 + 568 + int f2fs_is_compressed_cluster(struct inode *inode, pgoff_t index) 569 + { 570 + struct compress_ctx cc = { 571 + .inode = inode, 572 + .log_cluster_size = F2FS_I(inode)->i_log_cluster_size, 573 + .cluster_size = 
F2FS_I(inode)->i_cluster_size, 574 + .cluster_idx = index >> F2FS_I(inode)->i_log_cluster_size, 575 + }; 576 + 577 + return f2fs_compressed_blocks(&cc); 578 + } 579 + 580 + static bool cluster_may_compress(struct compress_ctx *cc) 581 + { 582 + if (!f2fs_compressed_file(cc->inode)) 583 + return false; 584 + if (f2fs_is_atomic_file(cc->inode)) 585 + return false; 586 + if (f2fs_is_mmap_file(cc->inode)) 587 + return false; 588 + if (!f2fs_cluster_is_full(cc)) 589 + return false; 590 + return __cluster_may_compress(cc); 591 + } 592 + 593 + static void set_cluster_writeback(struct compress_ctx *cc) 594 + { 595 + int i; 596 + 597 + for (i = 0; i < cc->cluster_size; i++) { 598 + if (cc->rpages[i]) 599 + set_page_writeback(cc->rpages[i]); 600 + } 601 + } 602 + 603 + static void set_cluster_dirty(struct compress_ctx *cc) 604 + { 605 + int i; 606 + 607 + for (i = 0; i < cc->cluster_size; i++) 608 + if (cc->rpages[i]) 609 + set_page_dirty(cc->rpages[i]); 610 + } 611 + 612 + static int prepare_compress_overwrite(struct compress_ctx *cc, 613 + struct page **pagep, pgoff_t index, void **fsdata) 614 + { 615 + struct f2fs_sb_info *sbi = F2FS_I_SB(cc->inode); 616 + struct address_space *mapping = cc->inode->i_mapping; 617 + struct page *page; 618 + struct dnode_of_data dn; 619 + sector_t last_block_in_bio; 620 + unsigned fgp_flag = FGP_LOCK | FGP_WRITE | FGP_CREAT; 621 + pgoff_t start_idx = start_idx_of_cluster(cc); 622 + int i, ret; 623 + bool prealloc; 624 + 625 + retry: 626 + ret = f2fs_compressed_blocks(cc); 627 + if (ret <= 0) 628 + return ret; 629 + 630 + /* compressed case */ 631 + prealloc = (ret < cc->cluster_size); 632 + 633 + ret = f2fs_init_compress_ctx(cc); 634 + if (ret) 635 + return ret; 636 + 637 + /* keep page reference to avoid page reclaim */ 638 + for (i = 0; i < cc->cluster_size; i++) { 639 + page = f2fs_pagecache_get_page(mapping, start_idx + i, 640 + fgp_flag, GFP_NOFS); 641 + if (!page) { 642 + ret = -ENOMEM; 643 + goto unlock_pages; 644 + } 645 + 646 + if 
(PageUptodate(page)) 647 + unlock_page(page); 648 + else 649 + f2fs_compress_ctx_add_page(cc, page); 650 + } 651 + 652 + if (!f2fs_cluster_is_empty(cc)) { 653 + struct bio *bio = NULL; 654 + 655 + ret = f2fs_read_multi_pages(cc, &bio, cc->cluster_size, 656 + &last_block_in_bio, false); 657 + f2fs_destroy_compress_ctx(cc); 658 + if (ret) 659 + goto release_pages; 660 + if (bio) 661 + f2fs_submit_bio(sbi, bio, DATA); 662 + 663 + ret = f2fs_init_compress_ctx(cc); 664 + if (ret) 665 + goto release_pages; 666 + } 667 + 668 + for (i = 0; i < cc->cluster_size; i++) { 669 + f2fs_bug_on(sbi, cc->rpages[i]); 670 + 671 + page = find_lock_page(mapping, start_idx + i); 672 + f2fs_bug_on(sbi, !page); 673 + 674 + f2fs_wait_on_page_writeback(page, DATA, true, true); 675 + 676 + f2fs_compress_ctx_add_page(cc, page); 677 + f2fs_put_page(page, 0); 678 + 679 + if (!PageUptodate(page)) { 680 + f2fs_unlock_rpages(cc, i + 1); 681 + f2fs_put_rpages_mapping(cc, mapping, start_idx, 682 + cc->cluster_size); 683 + f2fs_destroy_compress_ctx(cc); 684 + goto retry; 685 + } 686 + } 687 + 688 + if (prealloc) { 689 + __do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, true); 690 + 691 + set_new_dnode(&dn, cc->inode, NULL, NULL, 0); 692 + 693 + for (i = cc->cluster_size - 1; i > 0; i--) { 694 + ret = f2fs_get_block(&dn, start_idx + i); 695 + if (ret) { 696 + i = cc->cluster_size; 697 + break; 698 + } 699 + 700 + if (dn.data_blkaddr != NEW_ADDR) 701 + break; 702 + } 703 + 704 + __do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, false); 705 + } 706 + 707 + if (likely(!ret)) { 708 + *fsdata = cc->rpages; 709 + *pagep = cc->rpages[offset_in_cluster(cc, index)]; 710 + return cc->cluster_size; 711 + } 712 + 713 + unlock_pages: 714 + f2fs_unlock_rpages(cc, i); 715 + release_pages: 716 + f2fs_put_rpages_mapping(cc, mapping, start_idx, i); 717 + f2fs_destroy_compress_ctx(cc); 718 + return ret; 719 + } 720 + 721 + int f2fs_prepare_compress_overwrite(struct inode *inode, 722 + struct page **pagep, pgoff_t index, void **fsdata) 
723 + { 724 + struct compress_ctx cc = { 725 + .inode = inode, 726 + .log_cluster_size = F2FS_I(inode)->i_log_cluster_size, 727 + .cluster_size = F2FS_I(inode)->i_cluster_size, 728 + .cluster_idx = index >> F2FS_I(inode)->i_log_cluster_size, 729 + .rpages = NULL, 730 + .nr_rpages = 0, 731 + }; 732 + 733 + return prepare_compress_overwrite(&cc, pagep, index, fsdata); 734 + } 735 + 736 + bool f2fs_compress_write_end(struct inode *inode, void *fsdata, 737 + pgoff_t index, unsigned copied) 738 + 739 + { 740 + struct compress_ctx cc = { 741 + .log_cluster_size = F2FS_I(inode)->i_log_cluster_size, 742 + .cluster_size = F2FS_I(inode)->i_cluster_size, 743 + .rpages = fsdata, 744 + }; 745 + bool first_index = (index == cc.rpages[0]->index); 746 + 747 + if (copied) 748 + set_cluster_dirty(&cc); 749 + 750 + f2fs_put_rpages_wbc(&cc, NULL, false, 1); 751 + f2fs_destroy_compress_ctx(&cc); 752 + 753 + return first_index; 754 + } 755 + 756 + static int f2fs_write_compressed_pages(struct compress_ctx *cc, 757 + int *submitted, 758 + struct writeback_control *wbc, 759 + enum iostat_type io_type) 760 + { 761 + struct inode *inode = cc->inode; 762 + struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 763 + struct f2fs_inode_info *fi = F2FS_I(inode); 764 + struct f2fs_io_info fio = { 765 + .sbi = sbi, 766 + .ino = cc->inode->i_ino, 767 + .type = DATA, 768 + .op = REQ_OP_WRITE, 769 + .op_flags = wbc_to_write_flags(wbc), 770 + .old_blkaddr = NEW_ADDR, 771 + .page = NULL, 772 + .encrypted_page = NULL, 773 + .compressed_page = NULL, 774 + .submitted = false, 775 + .need_lock = LOCK_RETRY, 776 + .io_type = io_type, 777 + .io_wbc = wbc, 778 + .encrypted = f2fs_encrypted_file(cc->inode), 779 + }; 780 + struct dnode_of_data dn; 781 + struct node_info ni; 782 + struct compress_io_ctx *cic; 783 + pgoff_t start_idx = start_idx_of_cluster(cc); 784 + unsigned int last_index = cc->cluster_size - 1; 785 + loff_t psize; 786 + int i, err; 787 + 788 + set_new_dnode(&dn, cc->inode, NULL, NULL, 0); 789 + 790 + 
f2fs_lock_op(sbi); 791 + 792 + err = f2fs_get_dnode_of_data(&dn, start_idx, LOOKUP_NODE); 793 + if (err) 794 + goto out_unlock_op; 795 + 796 + for (i = 0; i < cc->cluster_size; i++) { 797 + if (datablock_addr(dn.inode, dn.node_page, 798 + dn.ofs_in_node + i) == NULL_ADDR) 799 + goto out_put_dnode; 800 + } 801 + 802 + psize = (loff_t)(cc->rpages[last_index]->index + 1) << PAGE_SHIFT; 803 + 804 + err = f2fs_get_node_info(fio.sbi, dn.nid, &ni); 805 + if (err) 806 + goto out_put_dnode; 807 + 808 + fio.version = ni.version; 809 + 810 + cic = f2fs_kzalloc(sbi, sizeof(struct compress_io_ctx), GFP_NOFS); 811 + if (!cic) 812 + goto out_put_dnode; 813 + 814 + cic->magic = F2FS_COMPRESSED_PAGE_MAGIC; 815 + cic->inode = inode; 816 + refcount_set(&cic->ref, 1); 817 + cic->rpages = f2fs_kzalloc(sbi, sizeof(struct page *) << 818 + cc->log_cluster_size, GFP_NOFS); 819 + if (!cic->rpages) 820 + goto out_put_cic; 821 + 822 + cic->nr_rpages = cc->cluster_size; 823 + 824 + for (i = 0; i < cc->nr_cpages; i++) { 825 + f2fs_set_compressed_page(cc->cpages[i], inode, 826 + cc->rpages[i + 1]->index, 827 + cic, i ? 
&cic->ref : NULL); 828 + fio.compressed_page = cc->cpages[i]; 829 + if (fio.encrypted) { 830 + fio.page = cc->rpages[i + 1]; 831 + err = f2fs_encrypt_one_page(&fio); 832 + if (err) 833 + goto out_destroy_crypt; 834 + cc->cpages[i] = fio.encrypted_page; 835 + } 836 + } 837 + 838 + set_cluster_writeback(cc); 839 + 840 + for (i = 0; i < cc->cluster_size; i++) 841 + cic->rpages[i] = cc->rpages[i]; 842 + 843 + for (i = 0; i < cc->cluster_size; i++, dn.ofs_in_node++) { 844 + block_t blkaddr; 845 + 846 + blkaddr = datablock_addr(dn.inode, dn.node_page, 847 + dn.ofs_in_node); 848 + fio.page = cic->rpages[i]; 849 + fio.old_blkaddr = blkaddr; 850 + 851 + /* cluster header */ 852 + if (i == 0) { 853 + if (blkaddr == COMPRESS_ADDR) 854 + fio.compr_blocks++; 855 + if (__is_valid_data_blkaddr(blkaddr)) 856 + f2fs_invalidate_blocks(sbi, blkaddr); 857 + f2fs_update_data_blkaddr(&dn, COMPRESS_ADDR); 858 + goto unlock_continue; 859 + } 860 + 861 + if (fio.compr_blocks && __is_valid_data_blkaddr(blkaddr)) 862 + fio.compr_blocks++; 863 + 864 + if (i > cc->nr_cpages) { 865 + if (__is_valid_data_blkaddr(blkaddr)) { 866 + f2fs_invalidate_blocks(sbi, blkaddr); 867 + f2fs_update_data_blkaddr(&dn, NEW_ADDR); 868 + } 869 + goto unlock_continue; 870 + } 871 + 872 + f2fs_bug_on(fio.sbi, blkaddr == NULL_ADDR); 873 + 874 + if (fio.encrypted) 875 + fio.encrypted_page = cc->cpages[i - 1]; 876 + else 877 + fio.compressed_page = cc->cpages[i - 1]; 878 + 879 + cc->cpages[i - 1] = NULL; 880 + f2fs_outplace_write_data(&dn, &fio); 881 + (*submitted)++; 882 + unlock_continue: 883 + inode_dec_dirty_pages(cc->inode); 884 + unlock_page(fio.page); 885 + } 886 + 887 + if (fio.compr_blocks) 888 + f2fs_i_compr_blocks_update(inode, fio.compr_blocks - 1, false); 889 + f2fs_i_compr_blocks_update(inode, cc->nr_cpages, true); 890 + 891 + set_inode_flag(cc->inode, FI_APPEND_WRITE); 892 + if (cc->cluster_idx == 0) 893 + set_inode_flag(inode, FI_FIRST_BLOCK_WRITTEN); 894 + 895 + f2fs_put_dnode(&dn); 896 + 
f2fs_unlock_op(sbi); 897 + 898 + down_write(&fi->i_sem); 899 + if (fi->last_disk_size < psize) 900 + fi->last_disk_size = psize; 901 + up_write(&fi->i_sem); 902 + 903 + f2fs_put_rpages(cc); 904 + f2fs_destroy_compress_ctx(cc); 905 + return 0; 906 + 907 + out_destroy_crypt: 908 + kfree(cic->rpages); 909 + 910 + for (--i; i >= 0; i--) 911 + fscrypt_finalize_bounce_page(&cc->cpages[i]); 912 + for (i = 0; i < cc->nr_cpages; i++) { 913 + if (!cc->cpages[i]) 914 + continue; 915 + f2fs_put_page(cc->cpages[i], 1); 916 + } 917 + out_put_cic: 918 + kfree(cic); 919 + out_put_dnode: 920 + f2fs_put_dnode(&dn); 921 + out_unlock_op: 922 + f2fs_unlock_op(sbi); 923 + return -EAGAIN; 924 + } 925 + 926 + void f2fs_compress_write_end_io(struct bio *bio, struct page *page) 927 + { 928 + struct f2fs_sb_info *sbi = bio->bi_private; 929 + struct compress_io_ctx *cic = 930 + (struct compress_io_ctx *)page_private(page); 931 + int i; 932 + 933 + if (unlikely(bio->bi_status)) 934 + mapping_set_error(cic->inode->i_mapping, -EIO); 935 + 936 + f2fs_put_compressed_page(page); 937 + 938 + dec_page_count(sbi, F2FS_WB_DATA); 939 + 940 + if (refcount_dec_not_one(&cic->ref)) 941 + return; 942 + 943 + for (i = 0; i < cic->nr_rpages; i++) { 944 + WARN_ON(!cic->rpages[i]); 945 + clear_cold_data(cic->rpages[i]); 946 + end_page_writeback(cic->rpages[i]); 947 + } 948 + 949 + kfree(cic->rpages); 950 + kfree(cic); 951 + } 952 + 953 + static int f2fs_write_raw_pages(struct compress_ctx *cc, 954 + int *submitted, 955 + struct writeback_control *wbc, 956 + enum iostat_type io_type) 957 + { 958 + struct address_space *mapping = cc->inode->i_mapping; 959 + int _submitted, compr_blocks, ret; 960 + int i = -1, err = 0; 961 + 962 + compr_blocks = f2fs_compressed_blocks(cc); 963 + if (compr_blocks < 0) { 964 + err = compr_blocks; 965 + goto out_err; 966 + } 967 + 968 + for (i = 0; i < cc->cluster_size; i++) { 969 + if (!cc->rpages[i]) 970 + continue; 971 + retry_write: 972 + if (cc->rpages[i]->mapping != mapping) { 
973 + unlock_page(cc->rpages[i]); 974 + continue; 975 + } 976 + 977 + BUG_ON(!PageLocked(cc->rpages[i])); 978 + 979 + ret = f2fs_write_single_data_page(cc->rpages[i], &_submitted, 980 + NULL, NULL, wbc, io_type, 981 + compr_blocks); 982 + if (ret) { 983 + if (ret == AOP_WRITEPAGE_ACTIVATE) { 984 + unlock_page(cc->rpages[i]); 985 + ret = 0; 986 + } else if (ret == -EAGAIN) { 987 + ret = 0; 988 + cond_resched(); 989 + congestion_wait(BLK_RW_ASYNC, HZ/50); 990 + lock_page(cc->rpages[i]); 991 + clear_page_dirty_for_io(cc->rpages[i]); 992 + goto retry_write; 993 + } 994 + err = ret; 995 + goto out_fail; 996 + } 997 + 998 + *submitted += _submitted; 999 + } 1000 + return 0; 1001 + 1002 + out_fail: 1003 + /* TODO: revoke partially updated block addresses */ 1004 + BUG_ON(compr_blocks); 1005 + out_err: 1006 + for (++i; i < cc->cluster_size; i++) { 1007 + if (!cc->rpages[i]) 1008 + continue; 1009 + redirty_page_for_writepage(wbc, cc->rpages[i]); 1010 + unlock_page(cc->rpages[i]); 1011 + } 1012 + return err; 1013 + } 1014 + 1015 + int f2fs_write_multi_pages(struct compress_ctx *cc, 1016 + int *submitted, 1017 + struct writeback_control *wbc, 1018 + enum iostat_type io_type) 1019 + { 1020 + struct f2fs_inode_info *fi = F2FS_I(cc->inode); 1021 + const struct f2fs_compress_ops *cops = 1022 + f2fs_cops[fi->i_compress_algorithm]; 1023 + int err; 1024 + 1025 + *submitted = 0; 1026 + if (cluster_may_compress(cc)) { 1027 + err = f2fs_compress_pages(cc); 1028 + if (err == -EAGAIN) { 1029 + goto write; 1030 + } else if (err) { 1031 + f2fs_put_rpages_wbc(cc, wbc, true, 1); 1032 + goto destroy_out; 1033 + } 1034 + 1035 + err = f2fs_write_compressed_pages(cc, submitted, 1036 + wbc, io_type); 1037 + cops->destroy_compress_ctx(cc); 1038 + if (!err) 1039 + return 0; 1040 + f2fs_bug_on(F2FS_I_SB(cc->inode), err != -EAGAIN); 1041 + } 1042 + write: 1043 + f2fs_bug_on(F2FS_I_SB(cc->inode), *submitted); 1044 + 1045 + err = f2fs_write_raw_pages(cc, submitted, wbc, io_type); 1046 + 
f2fs_put_rpages_wbc(cc, wbc, false, 0); 1047 + destroy_out: 1048 + f2fs_destroy_compress_ctx(cc); 1049 + return err; 1050 + } 1051 + 1052 + struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc) 1053 + { 1054 + struct f2fs_sb_info *sbi = F2FS_I_SB(cc->inode); 1055 + struct decompress_io_ctx *dic; 1056 + pgoff_t start_idx = start_idx_of_cluster(cc); 1057 + int i; 1058 + 1059 + dic = f2fs_kzalloc(sbi, sizeof(struct decompress_io_ctx), GFP_NOFS); 1060 + if (!dic) 1061 + return ERR_PTR(-ENOMEM); 1062 + 1063 + dic->rpages = f2fs_kzalloc(sbi, sizeof(struct page *) << 1064 + cc->log_cluster_size, GFP_NOFS); 1065 + if (!dic->rpages) { 1066 + kfree(dic); 1067 + return ERR_PTR(-ENOMEM); 1068 + } 1069 + 1070 + dic->magic = F2FS_COMPRESSED_PAGE_MAGIC; 1071 + dic->inode = cc->inode; 1072 + refcount_set(&dic->ref, 1); 1073 + dic->cluster_idx = cc->cluster_idx; 1074 + dic->cluster_size = cc->cluster_size; 1075 + dic->log_cluster_size = cc->log_cluster_size; 1076 + dic->nr_cpages = cc->nr_cpages; 1077 + dic->failed = false; 1078 + 1079 + for (i = 0; i < dic->cluster_size; i++) 1080 + dic->rpages[i] = cc->rpages[i]; 1081 + dic->nr_rpages = cc->cluster_size; 1082 + 1083 + dic->cpages = f2fs_kzalloc(sbi, sizeof(struct page *) * 1084 + dic->nr_cpages, GFP_NOFS); 1085 + if (!dic->cpages) 1086 + goto out_free; 1087 + 1088 + for (i = 0; i < dic->nr_cpages; i++) { 1089 + struct page *page; 1090 + 1091 + page = f2fs_grab_page(); 1092 + if (!page) 1093 + goto out_free; 1094 + 1095 + f2fs_set_compressed_page(page, cc->inode, 1096 + start_idx + i + 1, 1097 + dic, i ? 
&dic->ref : NULL); 1098 + dic->cpages[i] = page; 1099 + } 1100 + 1101 + dic->tpages = f2fs_kzalloc(sbi, sizeof(struct page *) * 1102 + dic->cluster_size, GFP_NOFS); 1103 + if (!dic->tpages) 1104 + goto out_free; 1105 + 1106 + for (i = 0; i < dic->cluster_size; i++) { 1107 + if (cc->rpages[i]) 1108 + continue; 1109 + 1110 + dic->tpages[i] = f2fs_grab_page(); 1111 + if (!dic->tpages[i]) 1112 + goto out_free; 1113 + } 1114 + 1115 + for (i = 0; i < dic->cluster_size; i++) { 1116 + if (dic->tpages[i]) 1117 + continue; 1118 + dic->tpages[i] = cc->rpages[i]; 1119 + } 1120 + 1121 + return dic; 1122 + 1123 + out_free: 1124 + f2fs_free_dic(dic); 1125 + return ERR_PTR(-ENOMEM); 1126 + } 1127 + 1128 + void f2fs_free_dic(struct decompress_io_ctx *dic) 1129 + { 1130 + int i; 1131 + 1132 + if (dic->tpages) { 1133 + for (i = 0; i < dic->cluster_size; i++) { 1134 + if (dic->rpages[i]) 1135 + continue; 1136 + f2fs_put_page(dic->tpages[i], 1); 1137 + } 1138 + kfree(dic->tpages); 1139 + } 1140 + 1141 + if (dic->cpages) { 1142 + for (i = 0; i < dic->nr_cpages; i++) { 1143 + if (!dic->cpages[i]) 1144 + continue; 1145 + f2fs_put_compressed_page(dic->cpages[i]); 1146 + } 1147 + kfree(dic->cpages); 1148 + } 1149 + 1150 + kfree(dic->rpages); 1151 + kfree(dic); 1152 + } 1153 + 1154 + void f2fs_decompress_end_io(struct page **rpages, 1155 + unsigned int cluster_size, bool err, bool verity) 1156 + { 1157 + int i; 1158 + 1159 + for (i = 0; i < cluster_size; i++) { 1160 + struct page *rpage = rpages[i]; 1161 + 1162 + if (!rpage) 1163 + continue; 1164 + 1165 + if (err || PageError(rpage)) { 1166 + ClearPageUptodate(rpage); 1167 + ClearPageError(rpage); 1168 + } else { 1169 + if (!verity || fsverity_verify_page(rpage)) 1170 + SetPageUptodate(rpage); 1171 + else 1172 + SetPageError(rpage); 1173 + } 1174 + unlock_page(rpage); 1175 + } 1176 + }
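Aside: the compress_io_ctx/decompress_io_ctx lifetime in the hunks above follows a shared-refcount completion pattern: the context is created with one reference, every compressed page after the first takes an extra one (the `i ? &cic->ref : NULL` argument to f2fs_set_compressed_page()), and the per-page bio completion that observes the final reference performs the cluster-wide cleanup. A minimal user-space sketch of that pattern, using a plain int instead of the kernel's refcount_t and illustrative names (struct io_ctx, ctx_page_done() are not f2fs API):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for compress_io_ctx: ref starts at 1, extra pages take refs. */
struct io_ctx {
	int ref;
	bool completed;	/* models end_page_writeback() on all rpages */
};

static void ctx_init(struct io_ctx *ctx, int nr_pages)
{
	ctx->ref = 1;
	ctx->completed = false;
	/* every page after the first takes an extra reference, as in
	 * f2fs_set_compressed_page(..., i ? &cic->ref : NULL) */
	for (int i = 1; i < nr_pages; i++)
		ctx->ref++;
}

/* Per-page completion: only the caller seeing the last ref finishes. */
static void ctx_page_done(struct io_ctx *ctx)
{
	if (ctx->ref > 1) {	/* refcount_dec_not_one() analogue */
		ctx->ref--;
		return;
	}
	ctx->completed = true;	/* cluster-wide completion runs exactly once */
}
```

With four compressed pages, the first three completions only drop a reference; the fourth performs the final work, regardless of the order in which the bios complete.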
+553 -77
fs/f2fs/data.c
··· 82 82 if (!mapping) 83 83 return false; 84 84 85 + if (f2fs_is_compressed_page(page)) 86 + return false; 87 + 85 88 inode = mapping->host; 86 89 sbi = F2FS_I_SB(inode); 87 90 ··· 117 114 118 115 /* postprocessing steps for read bios */ 119 116 enum bio_post_read_step { 120 - STEP_INITIAL = 0, 121 117 STEP_DECRYPT, 118 + STEP_DECOMPRESS, 122 119 STEP_VERITY, 123 120 }; 124 121 125 122 struct bio_post_read_ctx { 126 123 struct bio *bio; 124 + struct f2fs_sb_info *sbi; 127 125 struct work_struct work; 128 - unsigned int cur_step; 129 126 unsigned int enabled_steps; 130 127 }; 131 128 132 - static void __read_end_io(struct bio *bio) 129 + static void __read_end_io(struct bio *bio, bool compr, bool verity) 133 130 { 134 131 struct page *page; 135 132 struct bio_vec *bv; ··· 137 134 138 135 bio_for_each_segment_all(bv, bio, iter_all) { 139 136 page = bv->bv_page; 137 + 138 + #ifdef CONFIG_F2FS_FS_COMPRESSION 139 + if (compr && f2fs_is_compressed_page(page)) { 140 + f2fs_decompress_pages(bio, page, verity); 141 + continue; 142 + } 143 + #endif 140 144 141 145 /* PG_error was set if any post_read step failed */ 142 146 if (bio->bi_status || PageError(page)) { ··· 156 146 dec_page_count(F2FS_P_SB(page), __read_io_type(page)); 157 147 unlock_page(page); 158 148 } 159 - if (bio->bi_private) 160 - mempool_free(bio->bi_private, bio_post_read_ctx_pool); 161 - bio_put(bio); 149 + } 150 + 151 + static void f2fs_release_read_bio(struct bio *bio); 152 + static void __f2fs_read_end_io(struct bio *bio, bool compr, bool verity) 153 + { 154 + if (!compr) 155 + __read_end_io(bio, false, verity); 156 + f2fs_release_read_bio(bio); 157 + } 158 + 159 + static void f2fs_decompress_bio(struct bio *bio, bool verity) 160 + { 161 + __read_end_io(bio, true, verity); 162 162 } 163 163 164 164 static void bio_post_read_processing(struct bio_post_read_ctx *ctx); 165 165 166 - static void decrypt_work(struct work_struct *work) 166 + static void f2fs_decrypt_work(struct bio_post_read_ctx *ctx) 167 
167 { 168 - struct bio_post_read_ctx *ctx = 169 - container_of(work, struct bio_post_read_ctx, work); 170 - 171 168 fscrypt_decrypt_bio(ctx->bio); 172 - 173 - bio_post_read_processing(ctx); 174 169 } 175 170 176 - static void verity_work(struct work_struct *work) 171 + static void f2fs_decompress_work(struct bio_post_read_ctx *ctx) 172 + { 173 + f2fs_decompress_bio(ctx->bio, ctx->enabled_steps & (1 << STEP_VERITY)); 174 + } 175 + 176 + #ifdef CONFIG_F2FS_FS_COMPRESSION 177 + static void f2fs_verify_pages(struct page **rpages, unsigned int cluster_size) 178 + { 179 + f2fs_decompress_end_io(rpages, cluster_size, false, true); 180 + } 181 + 182 + static void f2fs_verify_bio(struct bio *bio) 183 + { 184 + struct page *page = bio_first_page_all(bio); 185 + struct decompress_io_ctx *dic = 186 + (struct decompress_io_ctx *)page_private(page); 187 + 188 + f2fs_verify_pages(dic->rpages, dic->cluster_size); 189 + f2fs_free_dic(dic); 190 + } 191 + #endif 192 + 193 + static void f2fs_verity_work(struct work_struct *work) 177 194 { 178 195 struct bio_post_read_ctx *ctx = 179 196 container_of(work, struct bio_post_read_ctx, work); 180 197 181 - fsverity_verify_bio(ctx->bio); 198 + #ifdef CONFIG_F2FS_FS_COMPRESSION 199 + /* previous step is decompression */ 200 + if (ctx->enabled_steps & (1 << STEP_DECOMPRESS)) { 182 201 183 - bio_post_read_processing(ctx); 202 + f2fs_verify_bio(ctx->bio); 203 + f2fs_release_read_bio(ctx->bio); 204 + return; 205 + } 206 + #endif 207 + 208 + fsverity_verify_bio(ctx->bio); 209 + __f2fs_read_end_io(ctx->bio, false, false); 210 + } 211 + 212 + static void f2fs_post_read_work(struct work_struct *work) 213 + { 214 + struct bio_post_read_ctx *ctx = 215 + container_of(work, struct bio_post_read_ctx, work); 216 + 217 + if (ctx->enabled_steps & (1 << STEP_DECRYPT)) 218 + f2fs_decrypt_work(ctx); 219 + 220 + if (ctx->enabled_steps & (1 << STEP_DECOMPRESS)) 221 + f2fs_decompress_work(ctx); 222 + 223 + if (ctx->enabled_steps & (1 << STEP_VERITY)) { 224 + 
INIT_WORK(&ctx->work, f2fs_verity_work); 225 + fsverity_enqueue_verify_work(&ctx->work); 226 + return; 227 + } 228 + 229 + __f2fs_read_end_io(ctx->bio, 230 + ctx->enabled_steps & (1 << STEP_DECOMPRESS), false); 231 + } 232 + 233 + static void f2fs_enqueue_post_read_work(struct f2fs_sb_info *sbi, 234 + struct work_struct *work) 235 + { 236 + queue_work(sbi->post_read_wq, work); 184 237 } 185 238 186 239 static void bio_post_read_processing(struct bio_post_read_ctx *ctx) ··· 253 180 * verity may require reading metadata pages that need decryption, and 254 181 * we shouldn't recurse to the same workqueue. 255 182 */ 256 - switch (++ctx->cur_step) { 257 - case STEP_DECRYPT: 258 - if (ctx->enabled_steps & (1 << STEP_DECRYPT)) { 259 - INIT_WORK(&ctx->work, decrypt_work); 260 - fscrypt_enqueue_decrypt_work(&ctx->work); 261 - return; 262 - } 263 - ctx->cur_step++; 264 - /* fall-through */ 265 - case STEP_VERITY: 266 - if (ctx->enabled_steps & (1 << STEP_VERITY)) { 267 - INIT_WORK(&ctx->work, verity_work); 268 - fsverity_enqueue_verify_work(&ctx->work); 269 - return; 270 - } 271 - ctx->cur_step++; 272 - /* fall-through */ 273 - default: 274 - __read_end_io(ctx->bio); 183 + 184 + if (ctx->enabled_steps & (1 << STEP_DECRYPT) || 185 + ctx->enabled_steps & (1 << STEP_DECOMPRESS)) { 186 + INIT_WORK(&ctx->work, f2fs_post_read_work); 187 + f2fs_enqueue_post_read_work(ctx->sbi, &ctx->work); 188 + return; 275 189 } 190 + 191 + if (ctx->enabled_steps & (1 << STEP_VERITY)) { 192 + INIT_WORK(&ctx->work, f2fs_verity_work); 193 + fsverity_enqueue_verify_work(&ctx->work); 194 + return; 195 + } 196 + 197 + __f2fs_read_end_io(ctx->bio, false, false); 276 198 } 277 199 278 200 static bool f2fs_bio_post_read_required(struct bio *bio) 279 201 { 280 - return bio->bi_private && !bio->bi_status; 202 + return bio->bi_private; 281 203 } 282 204 283 205 static void f2fs_read_end_io(struct bio *bio) ··· 287 219 if (f2fs_bio_post_read_required(bio)) { 288 220 struct bio_post_read_ctx *ctx = 
bio->bi_private; 289 221 290 - ctx->cur_step = STEP_INITIAL; 291 222 bio_post_read_processing(ctx); 292 223 return; 293 224 } 294 225 295 - __read_end_io(bio); 226 + __f2fs_read_end_io(bio, false, false); 296 227 } 297 228 298 229 static void f2fs_write_end_io(struct bio *bio) ··· 321 254 } 322 255 323 256 fscrypt_finalize_bounce_page(&page); 257 + 258 + #ifdef CONFIG_F2FS_FS_COMPRESSION 259 + if (f2fs_is_compressed_page(page)) { 260 + f2fs_compress_write_end_io(bio, page); 261 + continue; 262 + } 263 + #endif 324 264 325 265 if (unlikely(bio->bi_status)) { 326 266 mapping_set_error(page->mapping, -EIO); ··· 473 399 submit_bio(bio); 474 400 } 475 401 402 + void f2fs_submit_bio(struct f2fs_sb_info *sbi, 403 + struct bio *bio, enum page_type type) 404 + { 405 + __submit_bio(sbi, bio, type); 406 + } 407 + 476 408 static void __submit_merged_bio(struct f2fs_bio_info *io) 477 409 { 478 410 struct f2fs_io_info *fio = &io->fio; ··· 501 421 struct page *page, nid_t ino) 502 422 { 503 423 struct bio_vec *bvec; 504 - struct page *target; 505 424 struct bvec_iter_all iter_all; 506 425 507 426 if (!bio) ··· 510 431 return true; 511 432 512 433 bio_for_each_segment_all(bvec, bio, iter_all) { 434 + struct page *target = bvec->bv_page; 513 435 514 - target = bvec->bv_page; 515 - if (fscrypt_is_bounce_page(target)) 436 + if (fscrypt_is_bounce_page(target)) { 516 437 target = fscrypt_pagecache_page(target); 438 + if (IS_ERR(target)) 439 + continue; 440 + } 441 + if (f2fs_is_compressed_page(target)) { 442 + target = f2fs_compress_control_page(target); 443 + if (IS_ERR(target)) 444 + continue; 445 + } 517 446 518 447 if (inode && inode == target->mapping->host) 519 448 return true; ··· 716 629 717 630 found = true; 718 631 719 - if (bio_add_page(*bio, page, PAGE_SIZE, 0) == PAGE_SIZE) { 632 + if (bio_add_page(*bio, page, PAGE_SIZE, 0) == 633 + PAGE_SIZE) { 720 634 ret = 0; 721 635 break; 722 636 } ··· 857 769 858 770 verify_fio_blkaddr(fio); 859 771 860 - bio_page = 
fio->encrypted_page ? fio->encrypted_page : fio->page; 772 + if (fio->encrypted_page) 773 + bio_page = fio->encrypted_page; 774 + else if (fio->compressed_page) 775 + bio_page = fio->compressed_page; 776 + else 777 + bio_page = fio->page; 861 778 862 779 /* set submitted = true as a return value */ 863 780 fio->submitted = true; ··· 931 838 932 839 if (f2fs_encrypted_file(inode)) 933 840 post_read_steps |= 1 << STEP_DECRYPT; 934 - 841 + if (f2fs_compressed_file(inode)) 842 + post_read_steps |= 1 << STEP_DECOMPRESS; 935 843 if (f2fs_need_verity(inode, first_idx)) 936 844 post_read_steps |= 1 << STEP_VERITY; 937 845 ··· 943 849 return ERR_PTR(-ENOMEM); 944 850 } 945 851 ctx->bio = bio; 852 + ctx->sbi = sbi; 946 853 ctx->enabled_steps = post_read_steps; 947 854 bio->bi_private = ctx; 948 855 } 949 856 950 857 return bio; 858 + } 859 + 860 + static void f2fs_release_read_bio(struct bio *bio) 861 + { 862 + if (bio->bi_private) 863 + mempool_free(bio->bi_private, bio_post_read_ctx_pool); 864 + bio_put(bio); 951 865 } 952 866 953 867 /* This can handle encryption stuffs */ ··· 2002 1900 return ret; 2003 1901 } 2004 1902 1903 + #ifdef CONFIG_F2FS_FS_COMPRESSION 1904 + int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret, 1905 + unsigned nr_pages, sector_t *last_block_in_bio, 1906 + bool is_readahead) 1907 + { 1908 + struct dnode_of_data dn; 1909 + struct inode *inode = cc->inode; 1910 + struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 1911 + struct bio *bio = *bio_ret; 1912 + unsigned int start_idx = cc->cluster_idx << cc->log_cluster_size; 1913 + sector_t last_block_in_file; 1914 + const unsigned blkbits = inode->i_blkbits; 1915 + const unsigned blocksize = 1 << blkbits; 1916 + struct decompress_io_ctx *dic = NULL; 1917 + int i; 1918 + int ret = 0; 1919 + 1920 + f2fs_bug_on(sbi, f2fs_cluster_is_empty(cc)); 1921 + 1922 + last_block_in_file = (i_size_read(inode) + blocksize - 1) >> blkbits; 1923 + 1924 + /* get rid of pages beyond EOF */ 1925 + for (i = 0; i 
< cc->cluster_size; i++) { 1926 + struct page *page = cc->rpages[i]; 1927 + 1928 + if (!page) 1929 + continue; 1930 + if ((sector_t)page->index >= last_block_in_file) { 1931 + zero_user_segment(page, 0, PAGE_SIZE); 1932 + if (!PageUptodate(page)) 1933 + SetPageUptodate(page); 1934 + } else if (!PageUptodate(page)) { 1935 + continue; 1936 + } 1937 + unlock_page(page); 1938 + cc->rpages[i] = NULL; 1939 + cc->nr_rpages--; 1940 + } 1941 + 1942 + /* we are done since all pages are beyond EOF */ 1943 + if (f2fs_cluster_is_empty(cc)) 1944 + goto out; 1945 + 1946 + set_new_dnode(&dn, inode, NULL, NULL, 0); 1947 + ret = f2fs_get_dnode_of_data(&dn, start_idx, LOOKUP_NODE); 1948 + if (ret) 1949 + goto out; 1950 + 1951 + /* cluster was overwritten as normal cluster */ 1952 + if (dn.data_blkaddr != COMPRESS_ADDR) 1953 + goto out; 1954 + 1955 + for (i = 1; i < cc->cluster_size; i++) { 1956 + block_t blkaddr; 1957 + 1958 + blkaddr = datablock_addr(dn.inode, dn.node_page, 1959 + dn.ofs_in_node + i); 1960 + 1961 + if (!__is_valid_data_blkaddr(blkaddr)) 1962 + break; 1963 + 1964 + if (!f2fs_is_valid_blkaddr(sbi, blkaddr, DATA_GENERIC)) { 1965 + ret = -EFAULT; 1966 + goto out_put_dnode; 1967 + } 1968 + cc->nr_cpages++; 1969 + } 1970 + 1971 + /* nothing to decompress */ 1972 + if (cc->nr_cpages == 0) { 1973 + ret = 0; 1974 + goto out_put_dnode; 1975 + } 1976 + 1977 + dic = f2fs_alloc_dic(cc); 1978 + if (IS_ERR(dic)) { 1979 + ret = PTR_ERR(dic); 1980 + goto out_put_dnode; 1981 + } 1982 + 1983 + for (i = 0; i < dic->nr_cpages; i++) { 1984 + struct page *page = dic->cpages[i]; 1985 + block_t blkaddr; 1986 + 1987 + blkaddr = datablock_addr(dn.inode, dn.node_page, 1988 + dn.ofs_in_node + i + 1); 1989 + 1990 + if (bio && !page_is_mergeable(sbi, bio, 1991 + *last_block_in_bio, blkaddr)) { 1992 + submit_and_realloc: 1993 + __submit_bio(sbi, bio, DATA); 1994 + bio = NULL; 1995 + } 1996 + 1997 + if (!bio) { 1998 + bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages, 1999 + is_readahead ? 
REQ_RAHEAD : 0, 2000 + page->index); 2001 + if (IS_ERR(bio)) { 2002 + ret = PTR_ERR(bio); 2003 + bio = NULL; 2004 + dic->failed = true; 2005 + if (refcount_sub_and_test(dic->nr_cpages - i, 2006 + &dic->ref)) 2007 + f2fs_decompress_end_io(dic->rpages, 2008 + cc->cluster_size, true, 2009 + false); 2010 + f2fs_free_dic(dic); 2011 + f2fs_put_dnode(&dn); 2012 + *bio_ret = bio; 2013 + return ret; 2014 + } 2015 + } 2016 + 2017 + f2fs_wait_on_block_writeback(inode, blkaddr); 2018 + 2019 + if (bio_add_page(bio, page, blocksize, 0) < blocksize) 2020 + goto submit_and_realloc; 2021 + 2022 + inc_page_count(sbi, F2FS_RD_DATA); 2023 + ClearPageError(page); 2024 + *last_block_in_bio = blkaddr; 2025 + } 2026 + 2027 + f2fs_put_dnode(&dn); 2028 + 2029 + *bio_ret = bio; 2030 + return 0; 2031 + 2032 + out_put_dnode: 2033 + f2fs_put_dnode(&dn); 2034 + out: 2035 + f2fs_decompress_end_io(cc->rpages, cc->cluster_size, true, false); 2036 + *bio_ret = bio; 2037 + return ret; 2038 + } 2039 + #endif 2040 + 2005 2041 /* 2006 2042 * This function was originally taken from fs/mpage.c, and customized for f2fs. 2007 2043 * Major change was from block_size == page_size in f2fs by default. ··· 2149 1909 * use ->readpage() or do the necessary surgery to decouple ->readpages() 2150 1910 * from read-ahead. 
2151 1911 */ 2152 - static int f2fs_mpage_readpages(struct address_space *mapping, 1912 + int f2fs_mpage_readpages(struct address_space *mapping, 2153 1913 struct list_head *pages, struct page *page, 2154 1914 unsigned nr_pages, bool is_readahead) 2155 1915 { ··· 2157 1917 sector_t last_block_in_bio = 0; 2158 1918 struct inode *inode = mapping->host; 2159 1919 struct f2fs_map_blocks map; 1920 + #ifdef CONFIG_F2FS_FS_COMPRESSION 1921 + struct compress_ctx cc = { 1922 + .inode = inode, 1923 + .log_cluster_size = F2FS_I(inode)->i_log_cluster_size, 1924 + .cluster_size = F2FS_I(inode)->i_cluster_size, 1925 + .cluster_idx = NULL_CLUSTER, 1926 + .rpages = NULL, 1927 + .cpages = NULL, 1928 + .nr_rpages = 0, 1929 + .nr_cpages = 0, 1930 + }; 1931 + #endif 1932 + unsigned max_nr_pages = nr_pages; 2160 1933 int ret = 0; 2161 1934 2162 1935 map.m_pblk = 0; ··· 2193 1940 goto next_page; 2194 1941 } 2195 1942 2196 - ret = f2fs_read_single_page(inode, page, nr_pages, &map, &bio, 2197 - &last_block_in_bio, is_readahead); 1943 + #ifdef CONFIG_F2FS_FS_COMPRESSION 1944 + if (f2fs_compressed_file(inode)) { 1945 + /* there are remaining compressed pages, submit them */ 1946 + if (!f2fs_cluster_can_merge_page(&cc, page->index)) { 1947 + ret = f2fs_read_multi_pages(&cc, &bio, 1948 + max_nr_pages, 1949 + &last_block_in_bio, 1950 + is_readahead); 1951 + f2fs_destroy_compress_ctx(&cc); 1952 + if (ret) 1953 + goto set_error_page; 1954 + } 1955 + ret = f2fs_is_compressed_cluster(inode, page->index); 1956 + if (ret < 0) 1957 + goto set_error_page; 1958 + else if (!ret) 1959 + goto read_single_page; 1960 + 1961 + ret = f2fs_init_compress_ctx(&cc); 1962 + if (ret) 1963 + goto set_error_page; 1964 + 1965 + f2fs_compress_ctx_add_page(&cc, page); 1966 + 1967 + goto next_page; 1968 + } 1969 + read_single_page: 1970 + #endif 1971 + 1972 + ret = f2fs_read_single_page(inode, page, max_nr_pages, &map, 1973 + &bio, &last_block_in_bio, is_readahead); 2198 1974 if (ret) { 1975 + #ifdef 
CONFIG_F2FS_FS_COMPRESSION 1976 + set_error_page: 1977 + #endif 2199 1978 SetPageError(page); 2200 1979 zero_user_segment(page, 0, PAGE_SIZE); 2201 1980 unlock_page(page); ··· 2235 1950 next_page: 2236 1951 if (pages) 2237 1952 put_page(page); 1953 + 1954 + #ifdef CONFIG_F2FS_FS_COMPRESSION 1955 + if (f2fs_compressed_file(inode)) { 1956 + /* last page */ 1957 + if (nr_pages == 1 && !f2fs_cluster_is_empty(&cc)) { 1958 + ret = f2fs_read_multi_pages(&cc, &bio, 1959 + max_nr_pages, 1960 + &last_block_in_bio, 1961 + is_readahead); 1962 + f2fs_destroy_compress_ctx(&cc); 1963 + } 1964 + } 1965 + #endif 2238 1966 } 2239 1967 BUG_ON(pages && !list_empty(pages)); 2240 1968 if (bio) ··· 2261 1963 int ret = -EAGAIN; 2262 1964 2263 1965 trace_f2fs_readpage(page, DATA); 1966 + 1967 + if (!f2fs_is_compress_backend_ready(inode)) { 1968 + unlock_page(page); 1969 + return -EOPNOTSUPP; 1970 + } 2264 1971 2265 1972 /* If the file has inline data, try to read it directly */ 2266 1973 if (f2fs_has_inline_data(inode)) ··· 2285 1982 2286 1983 trace_f2fs_readpages(inode, page, nr_pages); 2287 1984 1985 + if (!f2fs_is_compress_backend_ready(inode)) 1986 + return 0; 1987 + 2288 1988 /* If the file has inline data, skip readpages */ 2289 1989 if (f2fs_has_inline_data(inode)) 2290 1990 return 0; ··· 2295 1989 return f2fs_mpage_readpages(mapping, pages, NULL, nr_pages, true); 2296 1990 } 2297 1991 2298 - static int encrypt_one_page(struct f2fs_io_info *fio) 1992 + int f2fs_encrypt_one_page(struct f2fs_io_info *fio) 2299 1993 { 2300 1994 struct inode *inode = fio->page->mapping->host; 2301 - struct page *mpage; 1995 + struct page *mpage, *page; 2302 1996 gfp_t gfp_flags = GFP_NOFS; 2303 1997 2304 1998 if (!f2fs_encrypted_file(inode)) 2305 1999 return 0; 2306 2000 2001 + page = fio->compressed_page ? 
fio->compressed_page : fio->page; 2002 + 2307 2003 /* wait for GCed page writeback via META_MAPPING */ 2308 2004 f2fs_wait_on_block_writeback(inode, fio->old_blkaddr); 2309 2005 2310 2006 retry_encrypt: 2311 - fio->encrypted_page = fscrypt_encrypt_pagecache_blocks(fio->page, 2312 - PAGE_SIZE, 0, 2313 - gfp_flags); 2007 + fio->encrypted_page = fscrypt_encrypt_pagecache_blocks(page, 2008 + PAGE_SIZE, 0, gfp_flags); 2314 2009 if (IS_ERR(fio->encrypted_page)) { 2315 2010 /* flush pending IOs and wait for a while in the ENOMEM case */ 2316 2011 if (PTR_ERR(fio->encrypted_page) == -ENOMEM) { ··· 2471 2164 if (ipu_force || 2472 2165 (__is_valid_data_blkaddr(fio->old_blkaddr) && 2473 2166 need_inplace_update(fio))) { 2474 - err = encrypt_one_page(fio); 2167 + err = f2fs_encrypt_one_page(fio); 2475 2168 if (err) 2476 2169 goto out_writepage; 2477 2170 ··· 2507 2200 2508 2201 fio->version = ni.version; 2509 2202 2510 - err = encrypt_one_page(fio); 2203 + err = f2fs_encrypt_one_page(fio); 2511 2204 if (err) 2512 2205 goto out_writepage; 2513 2206 2514 2207 set_page_writeback(page); 2515 2208 ClearPageError(page); 2209 + 2210 + if (fio->compr_blocks && fio->old_blkaddr == COMPRESS_ADDR) 2211 + f2fs_i_compr_blocks_update(inode, fio->compr_blocks - 1, false); 2516 2212 2517 2213 /* LFS mode write path */ 2518 2214 f2fs_outplace_write_data(&dn, fio); ··· 2531 2221 return err; 2532 2222 } 2533 2223 2534 - static int __write_data_page(struct page *page, bool *submitted, 2224 + int f2fs_write_single_data_page(struct page *page, int *submitted, 2535 2225 struct bio **bio, 2536 2226 sector_t *last_block, 2537 2227 struct writeback_control *wbc, 2538 - enum iostat_type io_type) 2228 + enum iostat_type io_type, 2229 + int compr_blocks) 2539 2230 { 2540 2231 struct inode *inode = page->mapping->host; 2541 2232 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 2542 2233 loff_t i_size = i_size_read(inode); 2543 - const pgoff_t end_index = ((unsigned long long) i_size) 2234 + const pgoff_t 
end_index = ((unsigned long long)i_size) 2544 2235 >> PAGE_SHIFT; 2545 2236 loff_t psize = (loff_t)(page->index + 1) << PAGE_SHIFT; 2546 2237 unsigned offset = 0; ··· 2557 2246 .page = page, 2558 2247 .encrypted_page = NULL, 2559 2248 .submitted = false, 2249 + .compr_blocks = compr_blocks, 2560 2250 .need_lock = LOCK_RETRY, 2561 2251 .io_type = io_type, 2562 2252 .io_wbc = wbc, ··· 2582 2270 if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING))) 2583 2271 goto redirty_out; 2584 2272 2585 - if (page->index < end_index || f2fs_verity_in_progress(inode)) 2273 + if (page->index < end_index || 2274 + f2fs_verity_in_progress(inode) || 2275 + compr_blocks) 2586 2276 goto write; 2587 2277 2588 2278 /* ··· 2660 2346 f2fs_remove_dirty_inode(inode); 2661 2347 submitted = NULL; 2662 2348 } 2663 - 2664 2349 unlock_page(page); 2665 2350 if (!S_ISDIR(inode->i_mode) && !IS_NOQUOTA(inode) && 2666 2351 !F2FS_I(inode)->cp_task) ··· 2672 2359 } 2673 2360 2674 2361 if (submitted) 2675 - *submitted = fio.submitted; 2362 + *submitted = fio.submitted ? 
1 : 0; 2676 2363 2677 2364 return 0; 2678 2365 ··· 2693 2380 static int f2fs_write_data_page(struct page *page, 2694 2381 struct writeback_control *wbc) 2695 2382 { 2696 - return __write_data_page(page, NULL, NULL, NULL, wbc, FS_DATA_IO); 2383 + #ifdef CONFIG_F2FS_FS_COMPRESSION 2384 + struct inode *inode = page->mapping->host; 2385 + 2386 + if (unlikely(f2fs_cp_error(F2FS_I_SB(inode)))) 2387 + goto out; 2388 + 2389 + if (f2fs_compressed_file(inode)) { 2390 + if (f2fs_is_compressed_cluster(inode, page->index)) { 2391 + redirty_page_for_writepage(wbc, page); 2392 + return AOP_WRITEPAGE_ACTIVATE; 2393 + } 2394 + } 2395 + out: 2396 + #endif 2397 + 2398 + return f2fs_write_single_data_page(page, NULL, NULL, NULL, 2399 + wbc, FS_DATA_IO, 0); 2697 2400 } 2698 2401 2699 2402 /* ··· 2722 2393 enum iostat_type io_type) 2723 2394 { 2724 2395 int ret = 0; 2725 - int done = 0; 2396 + int done = 0, retry = 0; 2726 2397 struct pagevec pvec; 2727 2398 struct f2fs_sb_info *sbi = F2FS_M_SB(mapping); 2728 2399 struct bio *bio = NULL; 2729 2400 sector_t last_block; 2401 + #ifdef CONFIG_F2FS_FS_COMPRESSION 2402 + struct inode *inode = mapping->host; 2403 + struct compress_ctx cc = { 2404 + .inode = inode, 2405 + .log_cluster_size = F2FS_I(inode)->i_log_cluster_size, 2406 + .cluster_size = F2FS_I(inode)->i_cluster_size, 2407 + .cluster_idx = NULL_CLUSTER, 2408 + .rpages = NULL, 2409 + .nr_rpages = 0, 2410 + .cpages = NULL, 2411 + .rbuf = NULL, 2412 + .cbuf = NULL, 2413 + .rlen = PAGE_SIZE * F2FS_I(inode)->i_cluster_size, 2414 + .private = NULL, 2415 + }; 2416 + #endif 2730 2417 int nr_pages; 2731 2418 pgoff_t uninitialized_var(writeback_index); 2732 2419 pgoff_t index; ··· 2752 2407 int range_whole = 0; 2753 2408 xa_mark_t tag; 2754 2409 int nwritten = 0; 2410 + int submitted = 0; 2411 + int i; 2755 2412 2756 2413 pagevec_init(&pvec); 2757 2414 ··· 2783 2436 else 2784 2437 tag = PAGECACHE_TAG_DIRTY; 2785 2438 retry: 2439 + retry = 0; 2786 2440 if (wbc->sync_mode == WB_SYNC_ALL || 
wbc->tagged_writepages) 2787 2441 tag_pages_for_writeback(mapping, index, end); 2788 2442 done_index = index; 2789 - while (!done && (index <= end)) { 2790 - int i; 2791 - 2443 + while (!done && !retry && (index <= end)) { 2792 2444 nr_pages = pagevec_lookup_range_tag(&pvec, mapping, &index, end, 2793 2445 tag); 2794 2446 if (nr_pages == 0) ··· 2795 2449 2796 2450 for (i = 0; i < nr_pages; i++) { 2797 2451 struct page *page = pvec.pages[i]; 2798 - bool submitted = false; 2452 + bool need_readd; 2453 + readd: 2454 + need_readd = false; 2455 + #ifdef CONFIG_F2FS_FS_COMPRESSION 2456 + if (f2fs_compressed_file(inode)) { 2457 + ret = f2fs_init_compress_ctx(&cc); 2458 + if (ret) { 2459 + done = 1; 2460 + break; 2461 + } 2799 2462 2463 + if (!f2fs_cluster_can_merge_page(&cc, 2464 + page->index)) { 2465 + ret = f2fs_write_multi_pages(&cc, 2466 + &submitted, wbc, io_type); 2467 + if (!ret) 2468 + need_readd = true; 2469 + goto result; 2470 + } 2471 + 2472 + if (unlikely(f2fs_cp_error(sbi))) 2473 + goto lock_page; 2474 + 2475 + if (f2fs_cluster_is_empty(&cc)) { 2476 + void *fsdata = NULL; 2477 + struct page *pagep; 2478 + int ret2; 2479 + 2480 + ret2 = f2fs_prepare_compress_overwrite( 2481 + inode, &pagep, 2482 + page->index, &fsdata); 2483 + if (ret2 < 0) { 2484 + ret = ret2; 2485 + done = 1; 2486 + break; 2487 + } else if (ret2 && 2488 + !f2fs_compress_write_end(inode, 2489 + fsdata, page->index, 2490 + 1)) { 2491 + retry = 1; 2492 + break; 2493 + } 2494 + } else { 2495 + goto lock_page; 2496 + } 2497 + } 2498 + #endif 2800 2499 /* give a priority to WB_SYNC threads */ 2801 2500 if (atomic_read(&sbi->wb_sync_req[DATA]) && 2802 2501 wbc->sync_mode == WB_SYNC_NONE) { 2803 2502 done = 1; 2804 2503 break; 2805 2504 } 2806 - 2505 + #ifdef CONFIG_F2FS_FS_COMPRESSION 2506 + lock_page: 2507 + #endif 2807 2508 done_index = page->index; 2808 2509 retry_write: 2809 2510 lock_page(page); ··· 2877 2484 if (!clear_page_dirty_for_io(page)) 2878 2485 goto continue_unlock; 2879 2486 2880 - 
ret = __write_data_page(page, &submitted, &bio, 2881 - &last_block, wbc, io_type); 2487 + #ifdef CONFIG_F2FS_FS_COMPRESSION 2488 + if (f2fs_compressed_file(inode)) { 2489 + get_page(page); 2490 + f2fs_compress_ctx_add_page(&cc, page); 2491 + continue; 2492 + } 2493 + #endif 2494 + ret = f2fs_write_single_data_page(page, &submitted, 2495 + &bio, &last_block, wbc, io_type, 0); 2496 + if (ret == AOP_WRITEPAGE_ACTIVATE) 2497 + unlock_page(page); 2498 + #ifdef CONFIG_F2FS_FS_COMPRESSION 2499 + result: 2500 + #endif 2501 + nwritten += submitted; 2502 + wbc->nr_to_write -= submitted; 2503 + 2882 2504 if (unlikely(ret)) { 2883 2505 /* 2884 2506 * keep nr_to_write, since vfs uses this to 2885 2507 * get # of written pages. 2886 2508 */ 2887 2509 if (ret == AOP_WRITEPAGE_ACTIVATE) { 2888 - unlock_page(page); 2889 2510 ret = 0; 2890 - continue; 2511 + goto next; 2891 2512 } else if (ret == -EAGAIN) { 2892 2513 ret = 0; 2893 2514 if (wbc->sync_mode == WB_SYNC_ALL) { 2894 2515 cond_resched(); 2895 2516 congestion_wait(BLK_RW_ASYNC, 2896 - HZ/50); 2517 + HZ/50); 2897 2518 goto retry_write; 2898 2519 } 2899 - continue; 2520 + goto next; 2900 2521 } 2901 2522 done_index = page->index + 1; 2902 2523 done = 1; 2903 2524 break; 2904 - } else if (submitted) { 2905 - nwritten++; 2906 2525 } 2907 2526 2908 - if (--wbc->nr_to_write <= 0 && 2527 + if (wbc->nr_to_write <= 0 && 2909 2528 wbc->sync_mode == WB_SYNC_NONE) { 2910 2529 done = 1; 2911 2530 break; 2912 2531 } 2532 + next: 2533 + if (need_readd) 2534 + goto readd; 2913 2535 } 2914 2536 pagevec_release(&pvec); 2915 2537 cond_resched(); 2916 2538 } 2917 - 2918 - if (!cycled && !done) { 2539 + #ifdef CONFIG_F2FS_FS_COMPRESSION 2540 + /* flush remaining pages in compress cluster */ 2541 + if (f2fs_compressed_file(inode) && !f2fs_cluster_is_empty(&cc)) { 2542 + ret = f2fs_write_multi_pages(&cc, &submitted, wbc, io_type); 2543 + nwritten += submitted; 2544 + wbc->nr_to_write -= submitted; 2545 + if (ret) { 2546 + done = 1; 2547 + retry = 
0; 2548 + } 2549 + } 2550 + #endif 2551 + if ((!cycled && !done) || retry) { 2919 2552 cycled = 1; 2920 2553 index = 0; 2921 2554 end = writeback_index - 1; ··· 2965 2546 { 2966 2547 if (!S_ISREG(inode->i_mode)) 2967 2548 return false; 2549 + if (f2fs_compressed_file(inode)) 2550 + return true; 2968 2551 if (IS_NOQUOTA(inode)) 2969 2552 return false; 2970 2553 /* to avoid deadlock in path of data flush */ ··· 3111 2690 __do_map_lock(sbi, flag, true); 3112 2691 locked = true; 3113 2692 } 2693 + 3114 2694 restart: 3115 2695 /* check inline_data */ 3116 2696 ipage = f2fs_get_node_page(sbi, inode->i_ino); ··· 3202 2780 if (err) 3203 2781 goto fail; 3204 2782 } 2783 + 2784 + #ifdef CONFIG_F2FS_FS_COMPRESSION 2785 + if (f2fs_compressed_file(inode)) { 2786 + int ret; 2787 + 2788 + *fsdata = NULL; 2789 + 2790 + ret = f2fs_prepare_compress_overwrite(inode, pagep, 2791 + index, fsdata); 2792 + if (ret < 0) { 2793 + err = ret; 2794 + goto fail; 2795 + } else if (ret) { 2796 + return 0; 2797 + } 2798 + } 2799 + #endif 2800 + 3205 2801 repeat: 3206 2802 /* 3207 2803 * Do not use grab_cache_page_write_begin() to avoid deadlock due to ··· 3231 2791 err = -ENOMEM; 3232 2792 goto fail; 3233 2793 } 2794 + 2795 + /* TODO: cluster can be compressed due to race with .writepage */ 3234 2796 3235 2797 *pagep = page; 3236 2798 ··· 3317 2875 else 3318 2876 SetPageUptodate(page); 3319 2877 } 2878 + 2879 + #ifdef CONFIG_F2FS_FS_COMPRESSION 2880 + /* overwrite compressed file */ 2881 + if (f2fs_compressed_file(inode) && fsdata) { 2882 + f2fs_compress_write_end(inode, fsdata, page->index, copied); 2883 + f2fs_update_time(F2FS_I_SB(inode), REQ_TIME); 2884 + return copied; 2885 + } 2886 + #endif 2887 + 3320 2888 if (!copied) 3321 2889 goto unlock_out; 3322 2890 ··· 3717 3265 if (ret) 3718 3266 return ret; 3719 3267 3268 + if (f2fs_disable_compressed_file(inode)) 3269 + return -EINVAL; 3270 + 3720 3271 ret = check_swap_activate(file, sis->max); 3721 3272 if (ret) 3722 3273 return ret; ··· 3802 
3347 { 3803 3348 mempool_destroy(bio_post_read_ctx_pool); 3804 3349 kmem_cache_destroy(bio_post_read_ctx_cache); 3350 + } 3351 + 3352 + int f2fs_init_post_read_wq(struct f2fs_sb_info *sbi) 3353 + { 3354 + if (!f2fs_sb_has_encrypt(sbi) && 3355 + !f2fs_sb_has_verity(sbi) && 3356 + !f2fs_sb_has_compression(sbi)) 3357 + return 0; 3358 + 3359 + sbi->post_read_wq = alloc_workqueue("f2fs_post_read_wq", 3360 + WQ_UNBOUND | WQ_HIGHPRI, 3361 + num_online_cpus()); 3362 + if (!sbi->post_read_wq) 3363 + return -ENOMEM; 3364 + return 0; 3365 + } 3366 + 3367 + void f2fs_destroy_post_read_wq(struct f2fs_sb_info *sbi) 3368 + { 3369 + if (sbi->post_read_wq) 3370 + destroy_workqueue(sbi->post_read_wq); 3805 3371 } 3806 3372 3807 3373 int __init f2fs_init_bio_entry_cache(void)
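The writeback changes above route pages of a compressed file into a compress_ctx instead of writing them one at a time, flushing once a cluster is complete and flushing the partially filled tail cluster after the page loop ends. A minimal userspace model of that batching policy, with hypothetical names standing in for the kernel structures and for f2fs_write_multi_pages():

```c
#include <stddef.h>

/* Hypothetical model of the cluster batching in the patched writeback path:
 * pages of a compressed file accumulate until a whole cluster is collected,
 * then the cluster is written in one shot. */
struct cluster_batch {
    unsigned int cluster_size;  /* pages per cluster, 1 << log_cluster_size */
    unsigned int nr_pages;      /* pages accumulated so far */
    unsigned int flushes;       /* clusters written out */
};

static void batch_init(struct cluster_batch *cb, unsigned int log_cluster_size)
{
    cb->cluster_size = 1u << log_cluster_size;
    cb->nr_pages = 0;
    cb->flushes = 0;
}

/* Add one dirty page; flush when the cluster is full. */
static void batch_add_page(struct cluster_batch *cb)
{
    cb->nr_pages++;
    if (cb->nr_pages == cb->cluster_size) {
        cb->flushes++;          /* stands in for f2fs_write_multi_pages() */
        cb->nr_pages = 0;
    }
}

/* End of writeback: flush the partial cluster, mirroring the
 * "flush remained pages in compress cluster" tail in the patch. */
static void batch_finish(struct cluster_batch *cb)
{
    if (cb->nr_pages) {
        cb->flushes++;
        cb->nr_pages = 0;
    }
}
```

With a cluster of four pages, ten dirty pages produce two full-cluster flushes plus one tail flush.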
+6
fs/f2fs/debug.c
··· 94 94 si->inline_xattr = atomic_read(&sbi->inline_xattr); 95 95 si->inline_inode = atomic_read(&sbi->inline_inode); 96 96 si->inline_dir = atomic_read(&sbi->inline_dir); 97 + si->compr_inode = atomic_read(&sbi->compr_inode); 98 + si->compr_blocks = atomic_read(&sbi->compr_blocks); 97 99 si->append = sbi->im[APPEND_INO].ino_num; 98 100 si->update = sbi->im[UPDATE_INO].ino_num; 99 101 si->orphans = sbi->im[ORPHAN_INO].ino_num; ··· 317 315 si->inline_inode); 318 316 seq_printf(s, " - Inline_dentry Inode: %u\n", 319 317 si->inline_dir); 318 + seq_printf(s, " - Compressed Inode: %u, Blocks: %u\n", 319 + si->compr_inode, si->compr_blocks); 320 320 seq_printf(s, " - Orphan/Append/Update Inode: %u, %u, %u\n", 321 321 si->orphans, si->append, si->update); 322 322 seq_printf(s, "\nMain area: %d segs, %d secs %d zones\n", ··· 495 491 atomic_set(&sbi->inline_xattr, 0); 496 492 atomic_set(&sbi->inline_inode, 0); 497 493 atomic_set(&sbi->inline_dir, 0); 494 + atomic_set(&sbi->compr_inode, 0); 495 + atomic_set(&sbi->compr_blocks, 0); 498 496 atomic_set(&sbi->inplace_count, 0); 499 497 for (i = META_CP; i < META_MAX; i++) 500 498 atomic_set(&sbi->meta_count[i], 0);
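The compr_blocks counter surfaced in debug.c above is maintained by f2fs_i_compr_blocks_update() (added later in the f2fs.h hunk), which moves the counter by cluster_size minus the cluster's valid block count. On one reading of the patch, summing that difference over compressed clusters totals the blocks those clusters no longer occupy. A hedged sketch of that accounting (hypothetical helper, not in the patch):

```c
#include <stddef.h>

/* Hedged model of the patch's f2fs_i_compr_blocks_update() arithmetic:
 * each compressed cluster contributes (cluster_size - valid_blocks) to the
 * per-inode/per-fs compr_blocks counter shown in the debug stats. */
static unsigned long long compr_blocks_total(unsigned int cluster_size,
                                             const unsigned int *valid_blocks,
                                             size_t nr_clusters)
{
    unsigned long long total = 0;
    size_t i;

    for (i = 0; i < nr_clusters; i++)
        total += cluster_size - valid_blocks[i];
    return total;
}
```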
+272 -8
fs/f2fs/f2fs.h
··· 116 116 */ 117 117 typedef u32 nid_t; 118 118 119 + #define COMPRESS_EXT_NUM 16 120 + 119 121 struct f2fs_mount_info { 120 122 unsigned int opt; 121 123 int write_io_size_bits; /* Write IO size bits */ ··· 142 140 block_t unusable_cap; /* Amount of space allowed to be 143 141 * unusable when disabling checkpoint 144 142 */ 143 + 144 + /* For compression */ 145 + unsigned char compress_algorithm; /* algorithm type */ 146 + unsigned compress_log_size; /* cluster log size */ 147 + unsigned char compress_ext_cnt; /* extension count */ 148 + unsigned char extensions[COMPRESS_EXT_NUM][F2FS_EXTENSION_LEN]; /* extensions */ 145 149 }; 146 150 147 151 #define F2FS_FEATURE_ENCRYPT 0x0001 ··· 163 155 #define F2FS_FEATURE_VERITY 0x0400 164 156 #define F2FS_FEATURE_SB_CHKSUM 0x0800 165 157 #define F2FS_FEATURE_CASEFOLD 0x1000 158 + #define F2FS_FEATURE_COMPRESSION 0x2000 166 159 167 160 #define __F2FS_HAS_FEATURE(raw_super, mask) \ 168 161 ((raw_super->feature & cpu_to_le32(mask)) != 0) ··· 721 712 int i_inline_xattr_size; /* inline xattr size */ 722 713 struct timespec64 i_crtime; /* inode creation time */ 723 714 struct timespec64 i_disk_time[4];/* inode disk times */ 715 + 716 + /* for file compress */ 717 + u64 i_compr_blocks; /* # of compressed blocks */ 718 + unsigned char i_compress_algorithm; /* algorithm type */ 719 + unsigned char i_log_cluster_size; /* log of cluster size */ 720 + unsigned int i_cluster_size; /* cluster size */ 724 721 }; 725 722 726 723 static inline void get_extent_info(struct extent_info *ext, ··· 1033 1018 enum cp_reason_type { 1034 1019 CP_NO_NEEDED, 1035 1020 CP_NON_REGULAR, 1021 + CP_COMPRESSED, 1036 1022 CP_HARDLINK, 1037 1023 CP_SB_NEED_CP, 1038 1024 CP_WRONG_PINO, ··· 1072 1056 block_t old_blkaddr; /* old block address before Cow */ 1073 1057 struct page *page; /* page to be written */ 1074 1058 struct page *encrypted_page; /* encrypted page */ 1059 + struct page *compressed_page; /* compressed page */ 1075 1060 struct list_head list; 
/* serialize IOs */ 1076 1061 bool submitted; /* indicate IO submission */ 1077 1062 int need_lock; /* indicate we need to lock cp_rwsem */ 1078 1063 bool in_list; /* indicate fio is in io_list */ 1079 1064 bool is_por; /* indicate IO is from recovery or not */ 1080 1065 bool retry; /* need to reallocate block address */ 1066 + int compr_blocks; /* # of compressed block addresses */ 1067 + bool encrypted; /* indicate file is encrypted */ 1081 1068 enum iostat_type io_type; /* io type */ 1082 1069 struct writeback_control *io_wbc; /* writeback control */ 1083 1070 struct bio **bio; /* bio for ipu */ ··· 1188 1169 FSYNC_MODE_NOBARRIER, /* fsync behaves nobarrier based on posix */ 1189 1170 }; 1190 1171 1172 + /* 1173 + * this value is set in page as a private data which indicate that 1174 + * the page is atomically written, and it is in inmem_pages list. 1175 + */ 1176 + #define ATOMIC_WRITTEN_PAGE ((unsigned long)-1) 1177 + #define DUMMY_WRITTEN_PAGE ((unsigned long)-2) 1178 + 1179 + #define IS_ATOMIC_WRITTEN_PAGE(page) \ 1180 + (page_private(page) == (unsigned long)ATOMIC_WRITTEN_PAGE) 1181 + #define IS_DUMMY_WRITTEN_PAGE(page) \ 1182 + (page_private(page) == (unsigned long)DUMMY_WRITTEN_PAGE) 1183 + 1191 1184 #ifdef CONFIG_FS_ENCRYPTION 1192 1185 #define DUMMY_ENCRYPTION_ENABLED(sbi) \ 1193 1186 (unlikely(F2FS_OPTION(sbi).test_dummy_encryption)) 1194 1187 #else 1195 1188 #define DUMMY_ENCRYPTION_ENABLED(sbi) (0) 1196 1189 #endif 1190 + 1191 + /* For compression */ 1192 + enum compress_algorithm_type { 1193 + COMPRESS_LZO, 1194 + COMPRESS_LZ4, 1195 + COMPRESS_MAX, 1196 + }; 1197 + 1198 + #define COMPRESS_DATA_RESERVED_SIZE 4 1199 + struct compress_data { 1200 + __le32 clen; /* compressed data size */ 1201 + __le32 chksum; /* checksum of compressed data */ 1202 + __le32 reserved[COMPRESS_DATA_RESERVED_SIZE]; /* reserved */ 1203 + u8 cdata[]; /* compressed data */ 1204 + }; 1205 + 1206 + #define COMPRESS_HEADER_SIZE (sizeof(struct compress_data)) 1207 + 1208 + 
#define F2FS_COMPRESSED_PAGE_MAGIC 0xF5F2C000 1209 + 1210 + /* compress context */ 1211 + struct compress_ctx { 1212 + struct inode *inode; /* inode the context belong to */ 1213 + pgoff_t cluster_idx; /* cluster index number */ 1214 + unsigned int cluster_size; /* page count in cluster */ 1215 + unsigned int log_cluster_size; /* log of cluster size */ 1216 + struct page **rpages; /* pages store raw data in cluster */ 1217 + unsigned int nr_rpages; /* total page number in rpages */ 1218 + struct page **cpages; /* pages store compressed data in cluster */ 1219 + unsigned int nr_cpages; /* total page number in cpages */ 1220 + void *rbuf; /* virtual mapped address on rpages */ 1221 + struct compress_data *cbuf; /* virtual mapped address on cpages */ 1222 + size_t rlen; /* valid data length in rbuf */ 1223 + size_t clen; /* valid data length in cbuf */ 1224 + void *private; /* payload buffer for specified compression algorithm */ 1225 + }; 1226 + 1227 + /* compress context for write IO path */ 1228 + struct compress_io_ctx { 1229 + u32 magic; /* magic number to indicate page is compressed */ 1230 + struct inode *inode; /* inode the context belong to */ 1231 + struct page **rpages; /* pages store raw data in cluster */ 1232 + unsigned int nr_rpages; /* total page number in rpages */ 1233 + refcount_t ref; /* referrence count of raw page */ 1234 + }; 1235 + 1236 + /* decompress io context for read IO path */ 1237 + struct decompress_io_ctx { 1238 + u32 magic; /* magic number to indicate page is compressed */ 1239 + struct inode *inode; /* inode the context belong to */ 1240 + pgoff_t cluster_idx; /* cluster index number */ 1241 + unsigned int cluster_size; /* page count in cluster */ 1242 + unsigned int log_cluster_size; /* log of cluster size */ 1243 + struct page **rpages; /* pages store raw data in cluster */ 1244 + unsigned int nr_rpages; /* total page number in rpages */ 1245 + struct page **cpages; /* pages store compressed data in cluster */ 1246 + unsigned int 
nr_cpages; /* total page number in cpages */ 1247 + struct page **tpages; /* temp pages to pad holes in cluster */ 1248 + void *rbuf; /* virtual mapped address on rpages */ 1249 + struct compress_data *cbuf; /* virtual mapped address on cpages */ 1250 + size_t rlen; /* valid data length in rbuf */ 1251 + size_t clen; /* valid data length in cbuf */ 1252 + refcount_t ref; /* referrence count of compressed page */ 1253 + bool failed; /* indicate IO error during decompression */ 1254 + }; 1255 + 1256 + #define NULL_CLUSTER ((unsigned int)(~0)) 1257 + #define MIN_COMPRESS_LOG_SIZE 2 1258 + #define MAX_COMPRESS_LOG_SIZE 8 1197 1259 1198 1260 struct f2fs_sb_info { 1199 1261 struct super_block *sb; /* pointer to VFS super block */ ··· 1427 1327 atomic_t inline_xattr; /* # of inline_xattr inodes */ 1428 1328 atomic_t inline_inode; /* # of inline_data inodes */ 1429 1329 atomic_t inline_dir; /* # of inline_dentry inodes */ 1330 + atomic_t compr_inode; /* # of compressed inodes */ 1331 + atomic_t compr_blocks; /* # of compressed blocks */ 1430 1332 atomic_t vw_cnt; /* # of volatile writes */ 1431 1333 atomic_t max_aw_cnt; /* max # of atomic writes */ 1432 1334 atomic_t max_vw_cnt; /* max # of volatile writes */ ··· 1466 1364 1467 1365 /* Precomputed FS UUID checksum for seeding other checksums */ 1468 1366 __u32 s_chksum_seed; 1367 + 1368 + struct workqueue_struct *post_read_wq; /* post read workqueue */ 1469 1369 }; 1470 1370 1471 1371 struct f2fs_private_dio { ··· 2461 2357 /* 2462 2358 * On-disk inode flags (f2fs_inode::i_flags) 2463 2359 */ 2360 + #define F2FS_COMPR_FL 0x00000004 /* Compress file */ 2464 2361 #define F2FS_SYNC_FL 0x00000008 /* Synchronous updates */ 2465 2362 #define F2FS_IMMUTABLE_FL 0x00000010 /* Immutable file */ 2466 2363 #define F2FS_APPEND_FL 0x00000020 /* writes to file may only append */ 2467 2364 #define F2FS_NODUMP_FL 0x00000040 /* do not dump file */ 2468 2365 #define F2FS_NOATIME_FL 0x00000080 /* do not update atime */ 2366 + #define 
F2FS_NOCOMP_FL 0x00000400 /* Don't compress */ 2469 2367 #define F2FS_INDEX_FL 0x00001000 /* hash-indexed directory */ 2470 2368 #define F2FS_DIRSYNC_FL 0x00010000 /* dirsync behaviour (directories only) */ 2471 2369 #define F2FS_PROJINHERIT_FL 0x20000000 /* Create with parents projid */ ··· 2476 2370 /* Flags that should be inherited by new inodes from their parent. */ 2477 2371 #define F2FS_FL_INHERITED (F2FS_SYNC_FL | F2FS_NODUMP_FL | F2FS_NOATIME_FL | \ 2478 2372 F2FS_DIRSYNC_FL | F2FS_PROJINHERIT_FL | \ 2479 - F2FS_CASEFOLD_FL) 2373 + F2FS_CASEFOLD_FL | F2FS_COMPR_FL | F2FS_NOCOMP_FL) 2480 2374 2481 2375 /* Flags that are appropriate for regular files (all but dir-specific ones). */ 2482 2376 #define F2FS_REG_FLMASK (~(F2FS_DIRSYNC_FL | F2FS_PROJINHERIT_FL | \ ··· 2528 2422 FI_PIN_FILE, /* indicate file should not be gced */ 2529 2423 FI_ATOMIC_REVOKE_REQUEST, /* request to drop atomic data */ 2530 2424 FI_VERITY_IN_PROGRESS, /* building fs-verity Merkle tree */ 2425 + FI_COMPRESSED_FILE, /* indicate file's data can be compressed */ 2426 + FI_MMAP_FILE, /* indicate file was mmapped */ 2531 2427 }; 2532 2428 2533 2429 static inline void __mark_inode_dirty_flag(struct inode *inode, ··· 2546 2438 case FI_DATA_EXIST: 2547 2439 case FI_INLINE_DOTS: 2548 2440 case FI_PIN_FILE: 2441 + case FI_COMPRESSED_FILE: 2549 2442 f2fs_mark_inode_dirty_sync(inode, true); 2550 2443 } 2551 2444 } ··· 2702 2593 return is_inode_flag_set(inode, FI_INLINE_XATTR); 2703 2594 } 2704 2595 2596 + static inline int f2fs_compressed_file(struct inode *inode) 2597 + { 2598 + return S_ISREG(inode->i_mode) && 2599 + is_inode_flag_set(inode, FI_COMPRESSED_FILE); 2600 + } 2601 + 2705 2602 static inline unsigned int addrs_per_inode(struct inode *inode) 2706 2603 { 2707 2604 unsigned int addrs = CUR_ADDRS_PER_INODE(inode) - 2708 2605 get_inline_xattr_addrs(inode); 2709 - return ALIGN_DOWN(addrs, 1); 2606 + 2607 + if (!f2fs_compressed_file(inode)) 2608 + return addrs; 2609 + return ALIGN_DOWN(addrs, 
F2FS_I(inode)->i_cluster_size); 2710 2610 } 2711 2611 2712 2612 static inline unsigned int addrs_per_block(struct inode *inode) 2713 2613 { 2714 - return ALIGN_DOWN(DEF_ADDRS_PER_BLOCK, 1); 2614 + if (!f2fs_compressed_file(inode)) 2615 + return DEF_ADDRS_PER_BLOCK; 2616 + return ALIGN_DOWN(DEF_ADDRS_PER_BLOCK, F2FS_I(inode)->i_cluster_size); 2715 2617 } 2716 2618 2717 2619 static inline void *inline_xattr_addr(struct inode *inode, struct page *page) ··· 2753 2633 static inline int f2fs_has_inline_dots(struct inode *inode) 2754 2634 { 2755 2635 return is_inode_flag_set(inode, FI_INLINE_DOTS); 2636 + } 2637 + 2638 + static inline int f2fs_is_mmap_file(struct inode *inode) 2639 + { 2640 + return is_inode_flag_set(inode, FI_MMAP_FILE); 2756 2641 } 2757 2642 2758 2643 static inline bool f2fs_is_pinned_file(struct inode *inode) ··· 2887 2762 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 2888 2763 2889 2764 if (!test_opt(sbi, EXTENT_CACHE) || 2890 - is_inode_flag_set(inode, FI_NO_EXTENT)) 2765 + is_inode_flag_set(inode, FI_NO_EXTENT) || 2766 + is_inode_flag_set(inode, FI_COMPRESSED_FILE)) 2891 2767 return false; 2892 2768 2893 2769 /* ··· 3008 2882 3009 2883 static inline bool __is_valid_data_blkaddr(block_t blkaddr) 3010 2884 { 3011 - if (blkaddr == NEW_ADDR || blkaddr == NULL_ADDR) 2885 + if (blkaddr == NEW_ADDR || blkaddr == NULL_ADDR || 2886 + blkaddr == COMPRESS_ADDR) 3012 2887 return false; 3013 2888 return true; 3014 2889 } ··· 3316 3189 int __init f2fs_init_bioset(void); 3317 3190 void f2fs_destroy_bioset(void); 3318 3191 struct bio *f2fs_bio_alloc(struct f2fs_sb_info *sbi, int npages, bool no_fail); 3319 - int f2fs_init_post_read_processing(void); 3320 - void f2fs_destroy_post_read_processing(void); 3321 3192 int f2fs_init_bio_entry_cache(void); 3322 3193 void f2fs_destroy_bio_entry_cache(void); 3194 + void f2fs_submit_bio(struct f2fs_sb_info *sbi, 3195 + struct bio *bio, enum page_type type); 3323 3196 void f2fs_submit_merged_write(struct f2fs_sb_info *sbi, enum 
page_type type); 3324 3197 void f2fs_submit_merged_write_cond(struct f2fs_sb_info *sbi, 3325 3198 struct inode *inode, struct page *page, ··· 3340 3213 int f2fs_get_block(struct dnode_of_data *dn, pgoff_t index); 3341 3214 int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from); 3342 3215 int f2fs_reserve_block(struct dnode_of_data *dn, pgoff_t index); 3216 + int f2fs_mpage_readpages(struct address_space *mapping, 3217 + struct list_head *pages, struct page *page, 3218 + unsigned nr_pages, bool is_readahead); 3343 3219 struct page *f2fs_get_read_data_page(struct inode *inode, pgoff_t index, 3344 3220 int op_flags, bool for_write); 3345 3221 struct page *f2fs_find_data_page(struct inode *inode, pgoff_t index); ··· 3356 3226 int create, int flag); 3357 3227 int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, 3358 3228 u64 start, u64 len); 3229 + int f2fs_encrypt_one_page(struct f2fs_io_info *fio); 3359 3230 bool f2fs_should_update_inplace(struct inode *inode, struct f2fs_io_info *fio); 3360 3231 bool f2fs_should_update_outplace(struct inode *inode, struct f2fs_io_info *fio); 3232 + int f2fs_write_single_data_page(struct page *page, int *submitted, 3233 + struct bio **bio, sector_t *last_block, 3234 + struct writeback_control *wbc, 3235 + enum iostat_type io_type, 3236 + int compr_blocks); 3361 3237 void f2fs_invalidate_page(struct page *page, unsigned int offset, 3362 3238 unsigned int length); 3363 3239 int f2fs_release_page(struct page *page, gfp_t wait); ··· 3373 3237 #endif 3374 3238 bool f2fs_overwrite_io(struct inode *inode, loff_t pos, size_t len); 3375 3239 void f2fs_clear_page_cache_dirty_tag(struct page *page); 3240 + int f2fs_init_post_read_processing(void); 3241 + void f2fs_destroy_post_read_processing(void); 3242 + int f2fs_init_post_read_wq(struct f2fs_sb_info *sbi); 3243 + void f2fs_destroy_post_read_wq(struct f2fs_sb_info *sbi); 3376 3244 3377 3245 /* 3378 3246 * gc.c ··· 3423 3283 int nr_discard_cmd; 3424 3284 
unsigned int undiscard_blks; 3425 3285 int inline_xattr, inline_inode, inline_dir, append, update, orphans; 3286 + int compr_inode, compr_blocks; 3426 3287 int aw_cnt, max_aw_cnt, vw_cnt, max_vw_cnt; 3427 3288 unsigned int valid_count, valid_node_count, valid_inode_count, discard_blks; 3428 3289 unsigned int bimodal, avg_vblocks; ··· 3494 3353 if (f2fs_has_inline_dentry(inode)) \ 3495 3354 (atomic_dec(&F2FS_I_SB(inode)->inline_dir)); \ 3496 3355 } while (0) 3356 + #define stat_inc_compr_inode(inode) \ 3357 + do { \ 3358 + if (f2fs_compressed_file(inode)) \ 3359 + (atomic_inc(&F2FS_I_SB(inode)->compr_inode)); \ 3360 + } while (0) 3361 + #define stat_dec_compr_inode(inode) \ 3362 + do { \ 3363 + if (f2fs_compressed_file(inode)) \ 3364 + (atomic_dec(&F2FS_I_SB(inode)->compr_inode)); \ 3365 + } while (0) 3366 + #define stat_add_compr_blocks(inode, blocks) \ 3367 + (atomic_add(blocks, &F2FS_I_SB(inode)->compr_blocks)) 3368 + #define stat_sub_compr_blocks(inode, blocks) \ 3369 + (atomic_sub(blocks, &F2FS_I_SB(inode)->compr_blocks)) 3497 3370 #define stat_inc_meta_count(sbi, blkaddr) \ 3498 3371 do { \ 3499 3372 if (blkaddr < SIT_I(sbi)->sit_base_addr) \ ··· 3598 3443 #define stat_dec_inline_inode(inode) do { } while (0) 3599 3444 #define stat_inc_inline_dir(inode) do { } while (0) 3600 3445 #define stat_dec_inline_dir(inode) do { } while (0) 3446 + #define stat_inc_compr_inode(inode) do { } while (0) 3447 + #define stat_dec_compr_inode(inode) do { } while (0) 3448 + #define stat_add_compr_blocks(inode, blocks) do { } while (0) 3449 + #define stat_sub_compr_blocks(inode, blocks) do { } while (0) 3601 3450 #define stat_inc_atomic_write(inode) do { } while (0) 3602 3451 #define stat_dec_atomic_write(inode) do { } while (0) 3603 3452 #define stat_update_max_atomic_write(inode) do { } while (0) ··· 3741 3582 */ 3742 3583 static inline bool f2fs_post_read_required(struct inode *inode) 3743 3584 { 3744 - return f2fs_encrypted_file(inode) || fsverity_active(inode); 3585 + return 
f2fs_encrypted_file(inode) || fsverity_active(inode) || 3586 + f2fs_compressed_file(inode); 3587 + } 3588 + 3589 + /* 3590 + * compress.c 3591 + */ 3592 + #ifdef CONFIG_F2FS_FS_COMPRESSION 3593 + bool f2fs_is_compressed_page(struct page *page); 3594 + struct page *f2fs_compress_control_page(struct page *page); 3595 + int f2fs_prepare_compress_overwrite(struct inode *inode, 3596 + struct page **pagep, pgoff_t index, void **fsdata); 3597 + bool f2fs_compress_write_end(struct inode *inode, void *fsdata, 3598 + pgoff_t index, unsigned copied); 3599 + void f2fs_compress_write_end_io(struct bio *bio, struct page *page); 3600 + bool f2fs_is_compress_backend_ready(struct inode *inode); 3601 + void f2fs_decompress_pages(struct bio *bio, struct page *page, bool verity); 3602 + bool f2fs_cluster_is_empty(struct compress_ctx *cc); 3603 + bool f2fs_cluster_can_merge_page(struct compress_ctx *cc, pgoff_t index); 3604 + void f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct page *page); 3605 + int f2fs_write_multi_pages(struct compress_ctx *cc, 3606 + int *submitted, 3607 + struct writeback_control *wbc, 3608 + enum iostat_type io_type); 3609 + int f2fs_is_compressed_cluster(struct inode *inode, pgoff_t index); 3610 + int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret, 3611 + unsigned nr_pages, sector_t *last_block_in_bio, 3612 + bool is_readahead); 3613 + struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc); 3614 + void f2fs_free_dic(struct decompress_io_ctx *dic); 3615 + void f2fs_decompress_end_io(struct page **rpages, 3616 + unsigned int cluster_size, bool err, bool verity); 3617 + int f2fs_init_compress_ctx(struct compress_ctx *cc); 3618 + void f2fs_destroy_compress_ctx(struct compress_ctx *cc); 3619 + void f2fs_init_compress_info(struct f2fs_sb_info *sbi); 3620 + #else 3621 + static inline bool f2fs_is_compressed_page(struct page *page) { return false; } 3622 + static inline bool f2fs_is_compress_backend_ready(struct inode *inode) 
3623 + { 3624 + if (!f2fs_compressed_file(inode)) 3625 + return true; 3626 + /* not support compression */ 3627 + return false; 3628 + } 3629 + static inline struct page *f2fs_compress_control_page(struct page *page) 3630 + { 3631 + WARN_ON_ONCE(1); 3632 + return ERR_PTR(-EINVAL); 3633 + } 3634 + #endif 3635 + 3636 + static inline void set_compress_context(struct inode *inode) 3637 + { 3638 + struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 3639 + 3640 + F2FS_I(inode)->i_compress_algorithm = 3641 + F2FS_OPTION(sbi).compress_algorithm; 3642 + F2FS_I(inode)->i_log_cluster_size = 3643 + F2FS_OPTION(sbi).compress_log_size; 3644 + F2FS_I(inode)->i_cluster_size = 3645 + 1 << F2FS_I(inode)->i_log_cluster_size; 3646 + F2FS_I(inode)->i_flags |= F2FS_COMPR_FL; 3647 + set_inode_flag(inode, FI_COMPRESSED_FILE); 3648 + stat_inc_compr_inode(inode); 3649 + } 3650 + 3651 + static inline u64 f2fs_disable_compressed_file(struct inode *inode) 3652 + { 3653 + struct f2fs_inode_info *fi = F2FS_I(inode); 3654 + 3655 + if (!f2fs_compressed_file(inode)) 3656 + return 0; 3657 + if (fi->i_compr_blocks) 3658 + return fi->i_compr_blocks; 3659 + 3660 + fi->i_flags &= ~F2FS_COMPR_FL; 3661 + clear_inode_flag(inode, FI_COMPRESSED_FILE); 3662 + stat_dec_compr_inode(inode); 3663 + return 0; 3745 3664 } 3746 3665 3747 3666 #define F2FS_FEATURE_FUNCS(name, flagname) \ ··· 3840 3603 F2FS_FEATURE_FUNCS(verity, VERITY); 3841 3604 F2FS_FEATURE_FUNCS(sb_chksum, SB_CHKSUM); 3842 3605 F2FS_FEATURE_FUNCS(casefold, CASEFOLD); 3606 + F2FS_FEATURE_FUNCS(compression, COMPRESSION); 3843 3607 3844 3608 #ifdef CONFIG_BLK_DEV_ZONED 3845 3609 static inline bool f2fs_blkz_is_seq(struct f2fs_sb_info *sbi, int devi, ··· 3922 3684 #endif 3923 3685 } 3924 3686 3687 + static inline bool f2fs_may_compress(struct inode *inode) 3688 + { 3689 + if (IS_SWAPFILE(inode) || f2fs_is_pinned_file(inode) || 3690 + f2fs_is_atomic_file(inode) || 3691 + f2fs_is_volatile_file(inode)) 3692 + return false; 3693 + return S_ISREG(inode->i_mode) 
|| S_ISDIR(inode->i_mode); 3694 + } 3695 + 3696 + static inline void f2fs_i_compr_blocks_update(struct inode *inode, 3697 + u64 blocks, bool add) 3698 + { 3699 + int diff = F2FS_I(inode)->i_cluster_size - blocks; 3700 + 3701 + if (add) { 3702 + F2FS_I(inode)->i_compr_blocks += diff; 3703 + stat_add_compr_blocks(inode, diff); 3704 + } else { 3705 + F2FS_I(inode)->i_compr_blocks -= diff; 3706 + stat_sub_compr_blocks(inode, diff); 3707 + } 3708 + f2fs_mark_inode_dirty_sync(inode, true); 3709 + } 3710 + 3925 3711 static inline int block_unaligned_IO(struct inode *inode, 3926 3712 struct kiocb *iocb, struct iov_iter *iter) 3927 3713 { ··· 3976 3714 if (f2fs_post_read_required(inode)) 3977 3715 return true; 3978 3716 if (f2fs_is_multi_device(sbi)) 3717 + return true; 3718 + if (f2fs_compressed_file(inode)) 3979 3719 return true; 3980 3720 /* 3981 3721 * for blkzoned device, fallback direct IO to buffered IO, so
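The f2fs.h additions above fix the compressed-cluster geometry: a cluster holds 1 << i_log_cluster_size pages (the log clamped to [MIN_COMPRESS_LOG_SIZE, MAX_COMPRESS_LOG_SIZE]), and a compressed cluster stores a small header (struct compress_data) followed by the payload in at most cluster_size - 1 physical blocks, per the "[1, 4 << n - 1]" note in the commit message. A userspace mirror of that layout, assuming the on-disk __le32 fields map to uint32_t on a little-endian host:

```c
#include <stdint.h>
#include <stddef.h>

/* Userspace mirror of the patch's compressed-cluster header. */
struct compress_data {
    uint32_t clen;          /* compressed data size */
    uint32_t chksum;        /* checksum of compressed data */
    uint32_t reserved[4];   /* COMPRESS_DATA_RESERVED_SIZE words */
    uint8_t  cdata[];       /* compressed payload */
};

#define COMPRESS_HEADER_SIZE (sizeof(struct compress_data))

#define MIN_COMPRESS_LOG_SIZE 2
#define MAX_COMPRESS_LOG_SIZE 8

/* Pages per cluster: "4 << n (n >= 0)" in the commit message, i.e.
 * 1 << log_cluster_size with the log clamped to [2, 8]. */
static unsigned int cluster_pages(unsigned int log_cluster_size)
{
    return 1u << log_cluster_size;
}

/* Upper bound on compressed payload: the cluster must shrink to at most
 * cluster_size - 1 physical blocks to be stored compressed, and the header
 * shares that space with the data. */
static size_t max_compressed_payload(unsigned int log_cluster_size,
                                     size_t page_size)
{
    return (size_t)(cluster_pages(log_cluster_size) - 1) * page_size
            - COMPRESS_HEADER_SIZE;
}
```

With 4 KiB pages and the default four-page cluster, the compressed image (header plus data) must fit in three blocks, leaving 12288 - 24 bytes for the payload.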
+166 -20
fs/f2fs/file.c
··· 51 51 struct inode *inode = file_inode(vmf->vma->vm_file); 52 52 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 53 53 struct dnode_of_data dn; 54 - int err; 54 + bool need_alloc = true; 55 + int err = 0; 55 56 56 57 if (unlikely(f2fs_cp_error(sbi))) { 57 58 err = -EIO; ··· 64 63 goto err; 65 64 } 66 65 66 + #ifdef CONFIG_F2FS_FS_COMPRESSION 67 + if (f2fs_compressed_file(inode)) { 68 + int ret = f2fs_is_compressed_cluster(inode, page->index); 69 + 70 + if (ret < 0) { 71 + err = ret; 72 + goto err; 73 + } else if (ret) { 74 + if (ret < F2FS_I(inode)->i_cluster_size) { 75 + err = -EAGAIN; 76 + goto err; 77 + } 78 + need_alloc = false; 79 + } 80 + } 81 + #endif 67 82 /* should do out of any locked page */ 68 - f2fs_balance_fs(sbi, true); 83 + if (need_alloc) 84 + f2fs_balance_fs(sbi, true); 69 85 70 86 sb_start_pagefault(inode->i_sb); 71 87 ··· 99 81 goto out_sem; 100 82 } 101 83 102 - /* block allocation */ 103 - __do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, true); 104 - set_new_dnode(&dn, inode, NULL, NULL, 0); 105 - err = f2fs_get_block(&dn, page->index); 106 - f2fs_put_dnode(&dn); 107 - __do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, false); 108 - if (err) { 109 - unlock_page(page); 110 - goto out_sem; 84 + if (need_alloc) { 85 + /* block allocation */ 86 + __do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, true); 87 + set_new_dnode(&dn, inode, NULL, NULL, 0); 88 + err = f2fs_get_block(&dn, page->index); 89 + f2fs_put_dnode(&dn); 90 + __do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, false); 91 + if (err) { 92 + unlock_page(page); 93 + goto out_sem; 94 + } 111 95 } 112 96 113 97 /* fill the page */ ··· 176 156 177 157 if (!S_ISREG(inode->i_mode)) 178 158 cp_reason = CP_NON_REGULAR; 159 + else if (f2fs_compressed_file(inode)) 160 + cp_reason = CP_COMPRESSED; 179 161 else if (inode->i_nlink != 1) 180 162 cp_reason = CP_HARDLINK; 181 163 else if (is_sbi_flag_set(sbi, SBI_NEED_CP)) ··· 508 486 if (unlikely(f2fs_cp_error(F2FS_I_SB(inode)))) 509 487 return -EIO; 510 488 489 + if 
(!f2fs_is_compress_backend_ready(inode)) 490 + return -EOPNOTSUPP; 491 + 511 492 /* we don't need to use inline_data strictly */ 512 493 err = f2fs_convert_inline_inode(inode); 513 494 if (err) ··· 518 493 519 494 file_accessed(file); 520 495 vma->vm_ops = &f2fs_file_vm_ops; 496 + set_inode_flag(inode, FI_MMAP_FILE); 521 497 return 0; 522 498 } 523 499 ··· 528 502 529 503 if (err) 530 504 return err; 505 + 506 + if (!f2fs_is_compress_backend_ready(inode)) 507 + return -EOPNOTSUPP; 531 508 532 509 err = fsverity_file_open(inode, filp); 533 510 if (err) ··· 548 519 int nr_free = 0, ofs = dn->ofs_in_node, len = count; 549 520 __le32 *addr; 550 521 int base = 0; 522 + bool compressed_cluster = false; 523 + int cluster_index = 0, valid_blocks = 0; 524 + int cluster_size = F2FS_I(dn->inode)->i_cluster_size; 551 525 552 526 if (IS_INODE(dn->node_page) && f2fs_has_extra_attr(dn->inode)) 553 527 base = get_extra_isize(dn->inode); ··· 558 526 raw_node = F2FS_NODE(dn->node_page); 559 527 addr = blkaddr_in_node(raw_node) + base + ofs; 560 528 561 - for (; count > 0; count--, addr++, dn->ofs_in_node++) { 529 + /* Assumption: truncateion starts with cluster */ 530 + for (; count > 0; count--, addr++, dn->ofs_in_node++, cluster_index++) { 562 531 block_t blkaddr = le32_to_cpu(*addr); 532 + 533 + if (f2fs_compressed_file(dn->inode) && 534 + !(cluster_index & (cluster_size - 1))) { 535 + if (compressed_cluster) 536 + f2fs_i_compr_blocks_update(dn->inode, 537 + valid_blocks, false); 538 + compressed_cluster = (blkaddr == COMPRESS_ADDR); 539 + valid_blocks = 0; 540 + } 563 541 564 542 if (blkaddr == NULL_ADDR) 565 543 continue; ··· 577 535 dn->data_blkaddr = NULL_ADDR; 578 536 f2fs_set_data_blkaddr(dn); 579 537 580 - if (__is_valid_data_blkaddr(blkaddr) && 581 - !f2fs_is_valid_blkaddr(sbi, blkaddr, 538 + if (__is_valid_data_blkaddr(blkaddr)) { 539 + if (!f2fs_is_valid_blkaddr(sbi, blkaddr, 582 540 DATA_GENERIC_ENHANCE)) 583 - continue; 541 + continue; 542 + if (compressed_cluster) 
543 + valid_blocks++; 544 + } 584 545 585 - f2fs_invalidate_blocks(sbi, blkaddr); 586 546 if (dn->ofs_in_node == 0 && IS_INODE(dn->node_page)) 587 547 clear_inode_flag(dn->inode, FI_FIRST_BLOCK_WRITTEN); 548 + 549 + f2fs_invalidate_blocks(sbi, blkaddr); 588 550 nr_free++; 589 551 } 552 + 553 + if (compressed_cluster) 554 + f2fs_i_compr_blocks_update(dn->inode, valid_blocks, false); 590 555 591 556 if (nr_free) { 592 557 pgoff_t fofs; ··· 637 588 return 0; 638 589 } 639 590 591 + if (f2fs_compressed_file(inode)) 592 + return 0; 593 + 640 594 page = f2fs_get_lock_data_page(inode, index, true); 641 595 if (IS_ERR(page)) 642 596 return PTR_ERR(page) == -ENOENT ? 0 : PTR_ERR(page); ··· 655 603 return 0; 656 604 } 657 605 658 - int f2fs_truncate_blocks(struct inode *inode, u64 from, bool lock) 606 + static int do_truncate_blocks(struct inode *inode, u64 from, bool lock) 659 607 { 660 608 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 661 609 struct dnode_of_data dn; ··· 718 666 719 667 trace_f2fs_truncate_blocks_exit(inode, err); 720 668 return err; 669 + } 670 + 671 + int f2fs_truncate_blocks(struct inode *inode, u64 from, bool lock) 672 + { 673 + u64 free_from = from; 674 + 675 + /* 676 + * for compressed file, only support cluster size 677 + * aligned truncation. 
+ */
+	if (f2fs_compressed_file(inode)) {
+		size_t cluster_shift = PAGE_SHIFT +
+				F2FS_I(inode)->i_log_cluster_size;
+		size_t cluster_mask = (1 << cluster_shift) - 1;
+
+		free_from = from >> cluster_shift;
+		if (from & cluster_mask)
+			free_from++;
+		free_from <<= cluster_shift;
+	}
+
+	return do_truncate_blocks(inode, free_from, lock);
 }
 
 int f2fs_truncate(struct inode *inode)
···
 
 	if (unlikely(f2fs_cp_error(F2FS_I_SB(inode))))
 		return -EIO;
+
+	if ((attr->ia_valid & ATTR_SIZE) &&
+		!f2fs_is_compress_backend_ready(inode))
+		return -EOPNOTSUPP;
 
 	err = setattr_prepare(dentry, attr);
 	if (err)
···
 	} else if (ret == -ENOENT) {
 		if (dn.max_level == 0)
 			return -ENOENT;
-		done = min((pgoff_t)ADDRS_PER_BLOCK(inode) - dn.ofs_in_node,
-			len);
+		done = min((pgoff_t)ADDRS_PER_BLOCK(inode) -
+					dn.ofs_in_node, len);
 		blkaddr += done;
 		do_replace += done;
 		goto next;
···
 		return -EIO;
 	if (!f2fs_is_checkpoint_ready(F2FS_I_SB(inode)))
 		return -ENOSPC;
+	if (!f2fs_is_compress_backend_ready(inode))
+		return -EOPNOTSUPP;
 
 	/* f2fs only support ->fallocate for regular file */
 	if (!S_ISREG(inode->i_mode))
···
 
 	if (IS_ENCRYPTED(inode) &&
 		(mode & (FALLOC_FL_COLLAPSE_RANGE | FALLOC_FL_INSERT_RANGE)))
+		return -EOPNOTSUPP;
+
+	if (f2fs_compressed_file(inode) &&
+		(mode & (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_COLLAPSE_RANGE |
+			FALLOC_FL_ZERO_RANGE | FALLOC_FL_INSERT_RANGE)))
 		return -EOPNOTSUPP;
 
 	if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE |
···
 			return -ENOTEMPTY;
 	}
 
+	if (iflags & (F2FS_COMPR_FL | F2FS_NOCOMP_FL)) {
+		if (!f2fs_sb_has_compression(F2FS_I_SB(inode)))
+			return -EOPNOTSUPP;
+		if ((iflags & F2FS_COMPR_FL) && (iflags & F2FS_NOCOMP_FL))
+			return -EINVAL;
+	}
+
+	if ((iflags ^ fi->i_flags) & F2FS_COMPR_FL) {
+		if (S_ISREG(inode->i_mode) &&
+			(fi->i_flags & F2FS_COMPR_FL || i_size_read(inode) ||
+				F2FS_HAS_BLOCKS(inode)))
+			return -EINVAL;
+		if (iflags & F2FS_NOCOMP_FL)
+			return -EINVAL;
+		if (iflags & F2FS_COMPR_FL) {
+			int err = f2fs_convert_inline_inode(inode);
+
+			if (err)
+				return err;
+
+			if (!f2fs_may_compress(inode))
+				return -EINVAL;
+
+			set_compress_context(inode);
+		}
+	}
+	if ((iflags ^ fi->i_flags) & F2FS_NOCOMP_FL) {
+		if (fi->i_flags & F2FS_COMPR_FL)
+			return -EINVAL;
+	}
+
 	fi->i_flags = iflags | (fi->i_flags & ~mask);
+	f2fs_bug_on(F2FS_I_SB(inode), (fi->i_flags & F2FS_COMPR_FL) &&
+					(fi->i_flags & F2FS_NOCOMP_FL));
 
 	if (fi->i_flags & F2FS_PROJINHERIT_FL)
 		set_inode_flag(inode, FI_PROJ_INHERIT);
···
 	u32 iflag;
 	u32 fsflag;
 } f2fs_fsflags_map[] = {
+	{ F2FS_COMPR_FL,	FS_COMPR_FL },
 	{ F2FS_SYNC_FL,		FS_SYNC_FL },
 	{ F2FS_IMMUTABLE_FL,	FS_IMMUTABLE_FL },
 	{ F2FS_APPEND_FL,	FS_APPEND_FL },
 	{ F2FS_NODUMP_FL,	FS_NODUMP_FL },
 	{ F2FS_NOATIME_FL,	FS_NOATIME_FL },
+	{ F2FS_NOCOMP_FL,	FS_NOCOMP_FL },
 	{ F2FS_INDEX_FL,	FS_INDEX_FL },
 	{ F2FS_DIRSYNC_FL,	FS_DIRSYNC_FL },
 	{ F2FS_PROJINHERIT_FL,	FS_PROJINHERIT_FL },
···
 };
 
 #define F2FS_GETTABLE_FS_FL (		\
+	FS_COMPR_FL |		\
 	FS_SYNC_FL |		\
 	FS_IMMUTABLE_FL |	\
 	FS_APPEND_FL |		\
 	FS_NODUMP_FL |		\
 	FS_NOATIME_FL |		\
+	FS_NOCOMP_FL |		\
 	FS_INDEX_FL |		\
 	FS_DIRSYNC_FL |		\
 	FS_PROJINHERIT_FL |	\
···
 	FS_CASEFOLD_FL)
 
 #define F2FS_SETTABLE_FS_FL (		\
+	FS_COMPR_FL |		\
 	FS_SYNC_FL |		\
 	FS_IMMUTABLE_FL |	\
 	FS_APPEND_FL |		\
 	FS_NODUMP_FL |		\
 	FS_NOATIME_FL |		\
+	FS_NOCOMP_FL |		\
 	FS_DIRSYNC_FL |		\
 	FS_PROJINHERIT_FL |	\
 	FS_CASEFOLD_FL)
···
 		return ret;
 
 	inode_lock(inode);
+
+	f2fs_disable_compressed_file(inode);
 
 	if (f2fs_is_atomic_file(inode)) {
 		if (is_inode_flag_set(inode, FI_ATOMIC_REVOKE_REQUEST))
···
 		ret = -EAGAIN;
 		goto out;
 	}
+
 	ret = f2fs_convert_inline_inode(inode);
 	if (ret)
 		goto out;
+
+	if (f2fs_disable_compressed_file(inode)) {
+		ret = -EOPNOTSUPP;
+		goto out;
+	}
 
 	set_inode_flag(inode, FI_PIN_FILE);
 	ret = F2FS_I(inode)->i_gc_failures[GC_FAILURE_PIN];
···
 	}
 }
 
+static ssize_t f2fs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+{
+	struct file *file = iocb->ki_filp;
+	struct inode *inode = file_inode(file);
+
+	if (!f2fs_is_compress_backend_ready(inode))
+		return -EOPNOTSUPP;
+
+	return generic_file_read_iter(iocb, iter);
+}
+
 static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
 {
 	struct file *file = iocb->ki_filp;
···
 		ret = -EIO;
 		goto out;
 	}
+
+	if (!f2fs_is_compress_backend_ready(inode))
+		return -EOPNOTSUPP;
 
 	if (iocb->ki_flags & IOCB_NOWAIT) {
 		if (!inode_trylock(inode)) {
···
 
 const struct file_operations f2fs_file_operations = {
 	.llseek		= f2fs_llseek,
-	.read_iter	= generic_file_read_iter,
+	.read_iter	= f2fs_file_read_iter,
 	.write_iter	= f2fs_file_write_iter,
 	.open		= f2fs_file_open,
 	.release	= f2fs_release_file,
fs/f2fs/inode.c (+41)
···
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	struct f2fs_inode_info *fi = F2FS_I(inode);
+	struct f2fs_inode *ri = F2FS_INODE(node_page);
 	unsigned long long iblocks;
 
 	iblocks = le64_to_cpu(F2FS_INODE(node_page)->i_blocks);
···
 		f2fs_warn(sbi, "%s: inode (ino=%lx, mode=%u) should not have inline_dentry, run fsck to fix",
 			  __func__, inode->i_ino, inode->i_mode);
 		return false;
 	}
+
+	if (f2fs_has_extra_attr(inode) && f2fs_sb_has_compression(sbi) &&
+			fi->i_flags & F2FS_COMPR_FL &&
+			F2FS_FITS_IN_INODE(ri, fi->i_extra_isize,
+						i_log_cluster_size)) {
+		if (ri->i_compress_algorithm >= COMPRESS_MAX)
+			return false;
+		if (le64_to_cpu(ri->i_compr_blocks) > inode->i_blocks)
+			return false;
+		if (ri->i_log_cluster_size < MIN_COMPRESS_LOG_SIZE ||
+			ri->i_log_cluster_size > MAX_COMPRESS_LOG_SIZE)
+			return false;
+	}
 
 	return true;
···
 		fi->i_crtime.tv_nsec = le32_to_cpu(ri->i_crtime_nsec);
 	}
 
+	if (f2fs_has_extra_attr(inode) && f2fs_sb_has_compression(sbi) &&
+					(fi->i_flags & F2FS_COMPR_FL)) {
+		if (F2FS_FITS_IN_INODE(ri, fi->i_extra_isize,
+					i_log_cluster_size)) {
+			fi->i_compr_blocks = le64_to_cpu(ri->i_compr_blocks);
+			fi->i_compress_algorithm = ri->i_compress_algorithm;
+			fi->i_log_cluster_size = ri->i_log_cluster_size;
+			fi->i_cluster_size = 1 << fi->i_log_cluster_size;
+			set_inode_flag(inode, FI_COMPRESSED_FILE);
+		}
+	}
+
 	F2FS_I(inode)->i_disk_time[0] = inode->i_atime;
 	F2FS_I(inode)->i_disk_time[1] = inode->i_ctime;
 	F2FS_I(inode)->i_disk_time[2] = inode->i_mtime;
···
 	stat_inc_inline_xattr(inode);
 	stat_inc_inline_inode(inode);
 	stat_inc_inline_dir(inode);
+	stat_inc_compr_inode(inode);
+	stat_add_compr_blocks(inode, F2FS_I(inode)->i_compr_blocks);
 
 	return 0;
 }
···
 		ri->i_crtime_nsec =
 			cpu_to_le32(F2FS_I(inode)->i_crtime.tv_nsec);
 	}
+
+	if (f2fs_sb_has_compression(F2FS_I_SB(inode)) &&
+		F2FS_FITS_IN_INODE(ri, F2FS_I(inode)->i_extra_isize,
+						i_log_cluster_size)) {
+		ri->i_compr_blocks =
+			cpu_to_le64(F2FS_I(inode)->i_compr_blocks);
+		ri->i_compress_algorithm =
+			F2FS_I(inode)->i_compress_algorithm;
+		ri->i_log_cluster_size =
+			F2FS_I(inode)->i_log_cluster_size;
+	}
 }
 
 __set_inode_rdev(inode, ri);
···
 	stat_dec_inline_xattr(inode);
 	stat_dec_inline_dir(inode);
 	stat_dec_inline_inode(inode);
+	stat_dec_compr_inode(inode);
+	stat_sub_compr_blocks(inode, F2FS_I(inode)->i_compr_blocks);
 
 	if (likely(!f2fs_cp_error(sbi) &&
 			!is_sbi_flag_set(sbi, SBI_CP_DISABLED)))
fs/f2fs/namei.c (+51)
···
 	if (F2FS_I(inode)->i_flags & F2FS_PROJINHERIT_FL)
 		set_inode_flag(inode, FI_PROJ_INHERIT);
 
+	if (f2fs_sb_has_compression(sbi)) {
+		/* Inherit the compression flag in directory */
+		if ((F2FS_I(dir)->i_flags & F2FS_COMPR_FL) &&
+					f2fs_may_compress(inode))
+			set_compress_context(inode);
+	}
+
 	f2fs_set_inode_flags(inode);
 
 	trace_f2fs_new_inode(inode, 0);
···
 	size_t slen = strlen(s);
 	size_t sublen = strlen(sub);
 	int i;
+
+	if (sublen == 1 && *sub == '*')
+		return 1;
 
 	/*
 	 * filename format of multimedia file should be defined as:
···
 	return 0;
 }
 
+static void set_compress_inode(struct f2fs_sb_info *sbi, struct inode *inode,
+						const unsigned char *name)
+{
+	__u8 (*extlist)[F2FS_EXTENSION_LEN] = sbi->raw_super->extension_list;
+	unsigned char (*ext)[F2FS_EXTENSION_LEN];
+	unsigned int ext_cnt = F2FS_OPTION(sbi).compress_ext_cnt;
+	int i, cold_count, hot_count;
+
+	if (!f2fs_sb_has_compression(sbi) ||
+		is_inode_flag_set(inode, FI_COMPRESSED_FILE) ||
+		F2FS_I(inode)->i_flags & F2FS_NOCOMP_FL ||
+		!f2fs_may_compress(inode))
+		return;
+
+	down_read(&sbi->sb_lock);
+
+	cold_count = le32_to_cpu(sbi->raw_super->extension_count);
+	hot_count = sbi->raw_super->hot_ext_count;
+
+	for (i = cold_count; i < cold_count + hot_count; i++) {
+		if (is_extension_exist(name, extlist[i])) {
+			up_read(&sbi->sb_lock);
+			return;
+		}
+	}
+
+	up_read(&sbi->sb_lock);
+
+	ext = F2FS_OPTION(sbi).extensions;
+
+	for (i = 0; i < ext_cnt; i++) {
+		if (!is_extension_exist(name, ext[i]))
+			continue;
+
+		set_compress_context(inode);
+		return;
+	}
+}
+
 static int f2fs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
 								bool excl)
 {
···
 
 	if (!test_opt(sbi, DISABLE_EXT_IDENTIFY))
 		set_file_temperature(sbi, inode, dentry->d_name.name);
+
+	set_compress_inode(sbi, inode, dentry->d_name.name);
 
 	inode->i_op = &f2fs_file_inode_operations;
 	inode->i_fop = &f2fs_file_operations;
fs/f2fs/segment.c (+3 -2)
···
 	struct sit_info *sit_i = SIT_I(sbi);
 
 	f2fs_bug_on(sbi, addr == NULL_ADDR);
-	if (addr == NEW_ADDR)
+	if (addr == NEW_ADDR || addr == COMPRESS_ADDR)
 		return;
 
 	invalidate_mapping_pages(META_MAPPING(sbi), addr, addr);
···
 	if (fio->type == DATA) {
 		struct inode *inode = fio->page->mapping->host;
 
-		if (is_cold_data(fio->page) || file_is_cold(inode))
+		if (is_cold_data(fio->page) || file_is_cold(inode) ||
+				f2fs_compressed_file(inode))
 			return CURSEG_COLD_DATA;
 		if (file_is_hot(inode) ||
 			is_inode_flag_set(inode, FI_HOT_DATA) ||
fs/f2fs/segment.h (-12)
···
 	void (*allocate_segment)(struct f2fs_sb_info *, int, bool);
 };
 
-/*
- * this value is set in page as a private data which indicate that
- * the page is atomically written, and it is in inmem_pages list.
- */
-#define ATOMIC_WRITTEN_PAGE		((unsigned long)-1)
-#define DUMMY_WRITTEN_PAGE		((unsigned long)-2)
-
-#define IS_ATOMIC_WRITTEN_PAGE(page)			\
-		(page_private(page) == (unsigned long)ATOMIC_WRITTEN_PAGE)
-#define IS_DUMMY_WRITTEN_PAGE(page)			\
-		(page_private(page) == (unsigned long)DUMMY_WRITTEN_PAGE)
-
 #define MAX_SKIP_GC_COUNT			16
 
 struct inmem_pages {
fs/f2fs/super.c (+111 -1)
···
 	Opt_checkpoint_disable_cap,
 	Opt_checkpoint_disable_cap_perc,
 	Opt_checkpoint_enable,
+	Opt_compress_algorithm,
+	Opt_compress_log_size,
+	Opt_compress_extension,
 	Opt_err,
 };
···
 	{Opt_checkpoint_disable_cap, "checkpoint=disable:%u"},
 	{Opt_checkpoint_disable_cap_perc, "checkpoint=disable:%u%%"},
 	{Opt_checkpoint_enable, "checkpoint=enable"},
+	{Opt_compress_algorithm, "compress_algorithm=%s"},
+	{Opt_compress_log_size, "compress_log_size=%u"},
+	{Opt_compress_extension, "compress_extension=%s"},
 	{Opt_err, NULL},
 };
···
 {
 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
 	substring_t args[MAX_OPT_ARGS];
+	unsigned char (*ext)[F2FS_EXTENSION_LEN];
 	char *p, *name;
-	int arg = 0;
+	int arg = 0, ext_cnt;
 	kuid_t uid;
 	kgid_t gid;
 #ifdef CONFIG_QUOTA
···
 	case Opt_checkpoint_enable:
 		clear_opt(sbi, DISABLE_CHECKPOINT);
 		break;
+	case Opt_compress_algorithm:
+		if (!f2fs_sb_has_compression(sbi)) {
+			f2fs_err(sbi, "Compression feature is off");
+			return -EINVAL;
+		}
+		name = match_strdup(&args[0]);
+		if (!name)
+			return -ENOMEM;
+		if (strlen(name) == 3 && !strcmp(name, "lzo")) {
+			F2FS_OPTION(sbi).compress_algorithm =
+							COMPRESS_LZO;
+		} else if (strlen(name) == 3 &&
+				!strcmp(name, "lz4")) {
+			F2FS_OPTION(sbi).compress_algorithm =
+							COMPRESS_LZ4;
+		} else {
+			kfree(name);
+			return -EINVAL;
+		}
+		kfree(name);
+		break;
+	case Opt_compress_log_size:
+		if (!f2fs_sb_has_compression(sbi)) {
+			f2fs_err(sbi, "Compression feature is off");
+			return -EINVAL;
+		}
+		if (args->from && match_int(args, &arg))
+			return -EINVAL;
+		if (arg < MIN_COMPRESS_LOG_SIZE ||
+			arg > MAX_COMPRESS_LOG_SIZE) {
+			f2fs_err(sbi,
+				"Compress cluster log size is out of range");
+			return -EINVAL;
+		}
+		F2FS_OPTION(sbi).compress_log_size = arg;
+		break;
+	case Opt_compress_extension:
+		if (!f2fs_sb_has_compression(sbi)) {
+			f2fs_err(sbi, "Compression feature is off");
+			return -EINVAL;
+		}
+		name = match_strdup(&args[0]);
+		if (!name)
+			return -ENOMEM;
+
+		ext = F2FS_OPTION(sbi).extensions;
+		ext_cnt = F2FS_OPTION(sbi).compress_ext_cnt;
+
+		if (strlen(name) >= F2FS_EXTENSION_LEN ||
+			ext_cnt >= COMPRESS_EXT_NUM) {
+			f2fs_err(sbi,
+				"invalid extension length/number");
+			kfree(name);
+			return -EINVAL;
+		}
+
+		strcpy(ext[ext_cnt], name);
+		F2FS_OPTION(sbi).compress_ext_cnt++;
+		kfree(name);
+		break;
 	default:
 		f2fs_err(sbi, "Unrecognized mount option \"%s\" or missing value",
 			 p);
···
 	f2fs_destroy_node_manager(sbi);
 	f2fs_destroy_segment_manager(sbi);
 
+	f2fs_destroy_post_read_wq(sbi);
+
 	kvfree(sbi->ckpt);
 
 	f2fs_unregister_sysfs(sbi);
···
 #endif
 }
 
+static inline void f2fs_show_compress_options(struct seq_file *seq,
+						struct super_block *sb)
+{
+	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+	char *algtype = "";
+	int i;
+
+	if (!f2fs_sb_has_compression(sbi))
+		return;
+
+	switch (F2FS_OPTION(sbi).compress_algorithm) {
+	case COMPRESS_LZO:
+		algtype = "lzo";
+		break;
+	case COMPRESS_LZ4:
+		algtype = "lz4";
+		break;
+	}
+	seq_printf(seq, ",compress_algorithm=%s", algtype);
+
+	seq_printf(seq, ",compress_log_size=%u",
+			F2FS_OPTION(sbi).compress_log_size);
+
+	for (i = 0; i < F2FS_OPTION(sbi).compress_ext_cnt; i++) {
+		seq_printf(seq, ",compress_extension=%s",
+			F2FS_OPTION(sbi).extensions[i]);
+	}
+}
+
 static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
 {
 	struct f2fs_sb_info *sbi = F2FS_SB(root->d_sb);
···
 		seq_printf(seq, ",fsync_mode=%s", "strict");
 	else if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_NOBARRIER)
 		seq_printf(seq, ",fsync_mode=%s", "nobarrier");
+
+	f2fs_show_compress_options(seq, sbi->sb);
 	return 0;
 }
···
 	F2FS_OPTION(sbi).test_dummy_encryption = false;
 	F2FS_OPTION(sbi).s_resuid = make_kuid(&init_user_ns, F2FS_DEF_RESUID);
 	F2FS_OPTION(sbi).s_resgid = make_kgid(&init_user_ns, F2FS_DEF_RESGID);
+	F2FS_OPTION(sbi).compress_algorithm = COMPRESS_LZO;
+	F2FS_OPTION(sbi).compress_log_size = MIN_COMPRESS_LOG_SIZE;
+	F2FS_OPTION(sbi).compress_ext_cnt = 0;
 
 	set_opt(sbi, BG_GC);
 	set_opt(sbi, INLINE_XATTR);
···
 		goto free_devices;
 	}
 
+	err = f2fs_init_post_read_wq(sbi);
+	if (err) {
+		f2fs_err(sbi, "Failed to initialize post read workqueue");
+		goto free_devices;
+	}
+
 	sbi->total_valid_node_count =
 			le32_to_cpu(sbi->ckpt->valid_node_count);
 	percpu_counter_set(&sbi->total_valid_inode_count,
···
 	f2fs_destroy_node_manager(sbi);
 free_sm:
 	f2fs_destroy_segment_manager(sbi);
+	f2fs_destroy_post_read_wq(sbi);
 free_devices:
 	destroy_device_list(sbi);
 	kvfree(sbi->ckpt);
fs/f2fs/sysfs.c (+7)
···
 	if (f2fs_sb_has_casefold(sbi))
 		len += snprintf(buf + len, PAGE_SIZE - len, "%s%s",
 				len ? ", " : "", "casefold");
+	if (f2fs_sb_has_compression(sbi))
+		len += snprintf(buf + len, PAGE_SIZE - len, "%s%s",
+				len ? ", " : "", "compression");
 	len += snprintf(buf + len, PAGE_SIZE - len, "%s%s",
 			len ? ", " : "", "pin_file");
 	len += snprintf(buf + len, PAGE_SIZE - len, "\n");
···
 	FEAT_VERITY,
 	FEAT_SB_CHECKSUM,
 	FEAT_CASEFOLD,
+	FEAT_COMPRESSION,
 };
 
 static ssize_t f2fs_feature_show(struct f2fs_attr *a,
···
 	case FEAT_VERITY:
 	case FEAT_SB_CHECKSUM:
 	case FEAT_CASEFOLD:
+	case FEAT_COMPRESSION:
 		return snprintf(buf, PAGE_SIZE, "supported\n");
 	}
 	return 0;
···
 #endif
 F2FS_FEATURE_RO_ATTR(sb_checksum, FEAT_SB_CHECKSUM);
 F2FS_FEATURE_RO_ATTR(casefold, FEAT_CASEFOLD);
+F2FS_FEATURE_RO_ATTR(compression, FEAT_COMPRESSION);
 
 #define ATTR_LIST(name) (&f2fs_attr_##name.attr)
 static struct attribute *f2fs_attrs[] = {
···
 #endif
 	ATTR_LIST(sb_checksum),
 	ATTR_LIST(casefold),
+	ATTR_LIST(compression),
 	NULL,
 };
 ATTRIBUTE_GROUPS(f2fs_feat);
include/linux/f2fs_fs.h (+5)
···
 
 #define NULL_ADDR		((block_t)0)	/* used as block_t addresses */
 #define NEW_ADDR		((block_t)-1)	/* used as block_t addresses */
+#define COMPRESS_ADDR		((block_t)-2)	/* used as compressed data flag */
 
 #define F2FS_BYTES_TO_BLK(bytes)	((bytes) >> F2FS_BLKSIZE_BITS)
 #define F2FS_BLK_TO_BYTES(blk)		((blk) << F2FS_BLKSIZE_BITS)
···
 			__le32 i_inode_checksum;/* inode meta checksum */
 			__le64 i_crtime;	/* creation time */
 			__le32 i_crtime_nsec;	/* creation time in nano scale */
+			__le64 i_compr_blocks;	/* # of compressed blocks */
+			__u8 i_compress_algorithm;	/* compress algorithm */
+			__u8 i_log_cluster_size;	/* log of cluster size */
+			__le16 i_padding;		/* padding */
 			__le32 i_extra_end[0];	/* for attribute size calculation */
 		} __packed;
 		__le32 i_addr[DEF_ADDRS_PER_INODE];	/* Pointers to data blocks */
include/trace/events/f2fs.h (+100)
···
 	__print_symbolic(type,						\
 		{ CP_NO_NEEDED,		"no needed" },			\
 		{ CP_NON_REGULAR,	"non regular" },		\
+		{ CP_COMPRESSED,	"compressed" },			\
 		{ CP_HARDLINK,		"hardlink" },			\
 		{ CP_SB_NEED_CP,	"sb needs cp" },		\
 		{ CP_WRONG_PINO,	"wrong pino" },			\
···
 		{ F2FS_GOING_DOWN_NOSYNC,	"no sync" },		\
 		{ F2FS_GOING_DOWN_METAFLUSH,	"meta flush" },		\
 		{ F2FS_GOING_DOWN_NEED_FSCK,	"need fsck" })
+
+#define show_compress_algorithm(type)					\
+	__print_symbolic(type,						\
+		{ COMPRESS_LZO,		"LZO" },			\
+		{ COMPRESS_LZ4,		"LZ4" })
 
 struct f2fs_sb_info;
 struct f2fs_io_info;
···
 		show_dev(__entry->dev),
 		show_shutdown_mode(__entry->mode),
 		__entry->ret)
+);
+
+DECLARE_EVENT_CLASS(f2fs_zip_start,
+
+	TP_PROTO(struct inode *inode, pgoff_t cluster_idx,
+			unsigned int cluster_size, unsigned char algtype),
+
+	TP_ARGS(inode, cluster_idx, cluster_size, algtype),
+
+	TP_STRUCT__entry(
+		__field(dev_t,	dev)
+		__field(ino_t,	ino)
+		__field(pgoff_t, idx)
+		__field(unsigned int, size)
+		__field(unsigned int, algtype)
+	),
+
+	TP_fast_assign(
+		__entry->dev = inode->i_sb->s_dev;
+		__entry->ino = inode->i_ino;
+		__entry->idx = cluster_idx;
+		__entry->size = cluster_size;
+		__entry->algtype = algtype;
+	),
+
+	TP_printk("dev = (%d,%d), ino = %lu, cluster_idx:%lu, "
+		"cluster_size = %u, algorithm = %s",
+		show_dev_ino(__entry),
+		__entry->idx,
+		__entry->size,
+		show_compress_algorithm(__entry->algtype))
+);
+
+DECLARE_EVENT_CLASS(f2fs_zip_end,
+
+	TP_PROTO(struct inode *inode, pgoff_t cluster_idx,
+			unsigned int compressed_size, int ret),
+
+	TP_ARGS(inode, cluster_idx, compressed_size, ret),
+
+	TP_STRUCT__entry(
+		__field(dev_t,	dev)
+		__field(ino_t,	ino)
+		__field(pgoff_t, idx)
+		__field(unsigned int, size)
+		__field(unsigned int, ret)
+	),
+
+	TP_fast_assign(
+		__entry->dev = inode->i_sb->s_dev;
+		__entry->ino = inode->i_ino;
+		__entry->idx = cluster_idx;
+		__entry->size = compressed_size;
+		__entry->ret = ret;
+	),
+
+	TP_printk("dev = (%d,%d), ino = %lu, cluster_idx:%lu, "
+		"compressed_size = %u, ret = %d",
+		show_dev_ino(__entry),
+		__entry->idx,
+		__entry->size,
+		__entry->ret)
+);
+
+DEFINE_EVENT(f2fs_zip_start, f2fs_compress_pages_start,
+
+	TP_PROTO(struct inode *inode, pgoff_t cluster_idx,
+			unsigned int cluster_size, unsigned char algtype),
+
+	TP_ARGS(inode, cluster_idx, cluster_size, algtype)
+);
+
+DEFINE_EVENT(f2fs_zip_start, f2fs_decompress_pages_start,
+
+	TP_PROTO(struct inode *inode, pgoff_t cluster_idx,
+			unsigned int cluster_size, unsigned char algtype),
+
+	TP_ARGS(inode, cluster_idx, cluster_size, algtype)
+);
+
+DEFINE_EVENT(f2fs_zip_end, f2fs_compress_pages_end,
+
+	TP_PROTO(struct inode *inode, pgoff_t cluster_idx,
+			unsigned int compressed_size, int ret),
+
+	TP_ARGS(inode, cluster_idx, compressed_size, ret)
+);
+
+DEFINE_EVENT(f2fs_zip_end, f2fs_decompress_pages_end,
+
+	TP_PROTO(struct inode *inode, pgoff_t cluster_idx,
+			unsigned int compressed_size, int ret),
+
+	TP_ARGS(inode, cluster_idx, compressed_size, ret)
 );
 
 #endif /* _TRACE_F2FS_H */