
netfs: Add a netfs inode context

Add a netfs_i_context struct that should be included in the network
filesystem's own inode struct wrapper, directly after the VFS's inode
struct, e.g.:

	struct my_inode {
		struct {
			/* These must be contiguous */
			struct inode		vfs_inode;
			struct netfs_i_context	netfs_ctx;
		};
	};

The netfs_i_context struct so far contains a single field for the network
filesystem to use - the cache cookie:

	struct netfs_i_context {
		...
		struct fscache_cookie	*cache;
	};

Three functions are provided to help with this:

 (1) void netfs_i_context_init(struct inode *inode,
			       const struct netfs_request_ops *ops);

     Initialise the netfs context and set the operations.

 (2) struct netfs_i_context *netfs_i_context(struct inode *inode);

     Find the netfs context from the VFS inode.

 (3) struct inode *netfs_inode(struct netfs_i_context *ctx);

     Find the VFS inode from the netfs context.

Changes
=======
ver #4)
 - Fix netfs_is_cache_enabled() to check cookie->cache_priv to see if a
   cache is present[3].
 - Fix netfs_skip_folio_read() to zero out all of the page, not just some
   of it[3].

ver #3)
 - Split out the bit to move ceph cap-getting on readahead into
   ceph_init_request()[1].
 - Stick in a comment to the netfs inode structs indicating the contiguity
   requirements[2].

ver #2)
 - Adjust documentation to match.
 - Use "#if IS_ENABLED()" in netfs_i_cookie(), not "#ifdef".
 - Move the cap check from ceph_readahead() to ceph_init_request() to be
   called from netfslib.
 - Remove ceph_readahead() and use netfs_readahead() directly instead.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Jeff Layton <jlayton@kernel.org>
cc: linux-cachefs@redhat.com

Link: https://lore.kernel.org/r/8af0d47f17d89c06bbf602496dd845f2b0bf25b3.camel@kernel.org/ [1]
Link: https://lore.kernel.org/r/beaf4f6a6c2575ed489adb14b257253c868f9a5c.camel@kernel.org/ [2]
Link: https://lore.kernel.org/r/3536452.1647421585@warthog.procyon.org.uk/ [3]
Link: https://lore.kernel.org/r/164622984545.3564931.15691742939278418580.stgit@warthog.procyon.org.uk/ # v1
Link: https://lore.kernel.org/r/164678213320.1200972.16807551936267647470.stgit@warthog.procyon.org.uk/ # v2
Link: https://lore.kernel.org/r/164692909854.2099075.9535537286264248057.stgit@warthog.procyon.org.uk/ # v3
Link: https://lore.kernel.org/r/306388.1647595110@warthog.procyon.org.uk/ # v4

+318 -278
+72 -29
Documentation/filesystems/netfs_library.rst
···
  .. Contents:

  - Overview.
+ - Per-inode context.
+ - Inode context helper functions.
  - Buffered read helpers.
  - Read helper functions.
  - Read helper structures.
···

  Note that the library module doesn't link against local caching directly, so
  access must be provided by the netfs.
+
+
+ Per-Inode Context
+ =================
+
+ The network filesystem helper library needs a place to store a bit of state
+ for its use on each netfs inode it is helping to manage.  To this end, a
+ context structure is defined::
+
+        struct netfs_i_context {
+                const struct netfs_request_ops *ops;
+                struct fscache_cookie *cache;
+        };
+
+ A network filesystem that wants to use netfs lib must place one of these
+ directly after the VFS ``struct inode`` it allocates, usually as part of its
+ own struct.  This can be done in a way similar to the following::
+
+        struct my_inode {
+                struct {
+                        /* These must be contiguous */
+                        struct inode            vfs_inode;
+                        struct netfs_i_context  netfs_ctx;
+                };
+                ...
+        };
+
+ This allows netfslib to find its state by simple offset from the inode
+ pointer, thereby allowing the netfslib helper functions to be pointed to
+ directly by the VFS/VM operation tables.
+
+ The structure contains the following fields:
+
+ * ``ops``
+
+   The set of operations provided by the network filesystem to netfslib.
+
+ * ``cache``
+
+   Local caching cookie, or NULL if no caching is enabled.  This field does
+   not exist if fscache is disabled.
+
+
+ Inode Context Helper Functions
+ ------------------------------
+
+ To help deal with the per-inode context, a number of helper functions are
+ provided.  Firstly, a function to perform basic initialisation on a context
+ and set the operations table pointer::
+
+        void netfs_i_context_init(struct inode *inode,
+                                  const struct netfs_request_ops *ops);
+
+ then two functions to cast between the VFS inode structure and the netfs
+ context::
+
+        struct netfs_i_context *netfs_i_context(struct inode *inode);
+        struct inode *netfs_inode(struct netfs_i_context *ctx);
+
+ and finally, a function to get the cache cookie pointer from the context
+ attached to an inode (or NULL if fscache is disabled)::
+
+        struct fscache_cookie *netfs_i_cookie(struct inode *inode);


  Buffered Read Helpers
···
  Three read helpers are provided::

-        void netfs_readahead(struct readahead_control *ractl,
-                             const struct netfs_request_ops *ops,
-                             void *netfs_priv);
+        void netfs_readahead(struct readahead_control *ractl);
         int netfs_readpage(struct file *file,
-                           struct folio *folio,
-                           const struct netfs_request_ops *ops,
-                           void *netfs_priv);
+                           struct page *page);
         int netfs_write_begin(struct file *file,
                               struct address_space *mapping,
                               loff_t pos,
                               unsigned int len,
                               unsigned int flags,
                               struct folio **_folio,
-                              void **_fsdata,
-                              const struct netfs_request_ops *ops,
-                              void *netfs_priv);
+                              void **_fsdata);

- Each corresponds to a VM operation, with the addition of a couple of
- parameters for the use of the read helpers:
+ Each corresponds to a VM address space operation.  These operations use the
+ state in the per-inode context.

- * ``ops``
-
-   A table of operations through which the helpers can talk to the filesystem.
-
- * ``netfs_priv``
-
-   Filesystem private data (can be NULL).
-
- Both of these values will be stored into the read request structure.
-
- For ->readahead() and ->readpage(), the network filesystem should just jump
- into the corresponding read helper; whereas for ->write_begin(), it may be a
+ For ->readahead() and ->readpage(), the network filesystem should just point
+ directly at the corresponding read helper; whereas for ->write_begin(), it
+ may be a
  little more complicated as the network filesystem might want to flush
  conflicting writes or track dirty data and needs to put the acquired folio if
  an error occurs after calling the helper.
···
        struct netfs_request_ops {
                void (*init_request)(struct netfs_io_request *rreq, struct file *file);
-               bool (*is_cache_enabled)(struct inode *inode);
                int (*begin_cache_operation)(struct netfs_io_request *rreq);
                void (*expand_readahead)(struct netfs_io_request *rreq);
                bool (*clamp_length)(struct netfs_io_subrequest *subreq);
···

    [Optional] This is called to initialise the request structure.  It is
    given the file for reference and can modify the ->netfs_priv value.
-
- * ``is_cache_enabled()``
-
-   [Required] This is called by netfs_write_begin() to ask if the file is
-   being cached.  It should return true if it is being cached and false
-   otherwise.

  * ``begin_cache_operation()``
+4 -6
fs/9p/cache.c
···

  void v9fs_cache_inode_get_cookie(struct inode *inode)
  {
-        struct v9fs_inode *v9inode;
+        struct v9fs_inode *v9inode = V9FS_I(inode);
         struct v9fs_session_info *v9ses;
         __le32 version;
         __le64 path;

         if (!S_ISREG(inode->i_mode))
                 return;
-
-        v9inode = V9FS_I(inode);
-        if (WARN_ON(v9inode->fscache))
+        if (WARN_ON(v9fs_inode_cookie(v9inode)))
                 return;

         version = cpu_to_le32(v9inode->qid.version);
         path = cpu_to_le64(v9inode->qid.path);
         v9ses = v9fs_inode2v9ses(inode);
-        v9inode->fscache =
+        v9inode->netfs_ctx.cache =
                 fscache_acquire_cookie(v9fs_session_cache(v9ses),
                                        0,
                                        &path, sizeof(path),
···
                                        i_size_read(&v9inode->vfs_inode));

         p9_debug(P9_DEBUG_FSC, "inode %p get cookie %p\n",
-                 inode, v9inode->fscache);
+                 inode, v9fs_inode_cookie(v9inode));
  }
+1 -3
fs/9p/v9fs.c
···
  static void v9fs_inode_init_once(void *foo)
  {
         struct v9fs_inode *v9inode = (struct v9fs_inode *)foo;
- #ifdef CONFIG_9P_FSCACHE
-        v9inode->fscache = NULL;
- #endif
+
         memset(&v9inode->qid, 0, sizeof(v9inode->qid));
         inode_init_once(&v9inode->vfs_inode);
  }
+8 -5
fs/9p/v9fs.h
···
  #define FS_9P_V9FS_H

  #include <linux/backing-dev.h>
+ #include <linux/netfs.h>

  /**
   * enum p9_session_flags - option flags for each 9P session
···
  #define V9FS_INO_INVALID_ATTR 0x01

  struct v9fs_inode {
- #ifdef CONFIG_9P_FSCACHE
-        struct fscache_cookie *fscache;
- #endif
+        struct {
+                /* These must be contiguous */
+                struct inode vfs_inode;         /* the VFS's inode record */
+                struct netfs_i_context netfs_ctx; /* Netfslib context */
+        };
         struct p9_qid qid;
         unsigned int cache_validity;
         struct p9_fid *writeback_fid;
         struct mutex v_mutex;
-        struct inode vfs_inode;
  };

  static inline struct v9fs_inode *V9FS_I(const struct inode *inode)
···
  static inline struct fscache_cookie *v9fs_inode_cookie(struct v9fs_inode *v9inode)
  {
  #ifdef CONFIG_9P_FSCACHE
-        return v9inode->fscache;
+        return netfs_i_cookie(&v9inode->vfs_inode);
  #else
         return NULL;
  #endif
···
  extern const struct inode_operations v9fs_dir_inode_operations_dotl;
  extern const struct inode_operations v9fs_file_inode_operations_dotl;
  extern const struct inode_operations v9fs_symlink_inode_operations_dotl;
+ extern const struct netfs_request_ops v9fs_req_ops;
  extern struct inode *v9fs_inode_from_fid_dotl(struct v9fs_session_info *v9ses,
                                                struct p9_fid *fid,
                                                struct super_block *sb, int new);
+4 -39
fs/9p/vfs_addr.c
···
  }

  /**
- * v9fs_is_cache_enabled - Determine if caching is enabled for an inode
- * @inode: The inode to check
- */
- static bool v9fs_is_cache_enabled(struct inode *inode)
- {
-        struct fscache_cookie *cookie = v9fs_inode_cookie(V9FS_I(inode));
-
-        return fscache_cookie_enabled(cookie) && cookie->cache_priv;
- }
-
- /**
   * v9fs_begin_cache_operation - Begin a cache operation for a read
   * @rreq: The read request
   */
···
  #endif
  }

- static const struct netfs_request_ops v9fs_req_ops = {
+ const struct netfs_request_ops v9fs_req_ops = {
         .init_request           = v9fs_init_request,
-        .is_cache_enabled       = v9fs_is_cache_enabled,
         .begin_cache_operation  = v9fs_begin_cache_operation,
         .issue_read             = v9fs_issue_read,
         .cleanup                = v9fs_req_cleanup,
  };
-
- /**
- * v9fs_vfs_readpage - read an entire page in from 9P
- * @file: file being read
- * @page: structure to page
- *
- */
- static int v9fs_vfs_readpage(struct file *file, struct page *page)
- {
-        struct folio *folio = page_folio(page);
-
-        return netfs_readpage(file, folio, &v9fs_req_ops, NULL);
- }
-
- /**
- * v9fs_vfs_readahead - read a set of pages from 9P
- * @ractl: The readahead parameters
- */
- static void v9fs_vfs_readahead(struct readahead_control *ractl)
- {
-        netfs_readahead(ractl, &v9fs_req_ops, NULL);
- }

  /**
   * v9fs_release_page - release the private state associated with a page
···
          * file.  We need to do this before we get a lock on the page in case
          * there's more than one writer competing for the same cache block.
          */
-        retval = netfs_write_begin(filp, mapping, pos, len, flags, &folio, fsdata,
-                                   &v9fs_req_ops, NULL);
+        retval = netfs_write_begin(filp, mapping, pos, len, flags, &folio, fsdata);
         if (retval < 0)
                 return retval;

···
  #endif

  const struct address_space_operations v9fs_addr_operations = {
-        .readpage = v9fs_vfs_readpage,
-        .readahead = v9fs_vfs_readahead,
+        .readpage = netfs_readpage,
+        .readahead = netfs_readahead,
         .set_page_dirty = v9fs_set_page_dirty,
         .writepage = v9fs_vfs_writepage,
         .write_begin = v9fs_write_begin,
+10 -3
fs/9p/vfs_inode.c
···
         v9inode = kmem_cache_alloc(v9fs_inode_cache, GFP_KERNEL);
         if (!v9inode)
                 return NULL;
- #ifdef CONFIG_9P_FSCACHE
-        v9inode->fscache = NULL;
- #endif
         v9inode->writeback_fid = NULL;
         v9inode->cache_validity = 0;
         mutex_init(&v9inode->v_mutex);
···
  void v9fs_free_inode(struct inode *inode)
  {
         kmem_cache_free(v9fs_inode_cache, V9FS_I(inode));
+ }
+
+ /*
+ * Set parameters for the netfs library
+ */
+ static void v9fs_set_netfs_context(struct inode *inode)
+ {
+        netfs_i_context_init(inode, &v9fs_req_ops);
  }

  int v9fs_init_inode(struct v9fs_session_info *v9ses,
···
                 err = -EINVAL;
                 goto error;
         }
+
+        v9fs_set_netfs_context(inode);
  error:
         return err;

+1
fs/afs/dynroot.c
···
         /* there shouldn't be an existing inode */
         BUG_ON(!(inode->i_state & I_NEW));

+        netfs_i_context_init(inode, NULL);
         inode->i_size           = 0;
         inode->i_mode           = S_IFDIR | S_IRUGO | S_IXUGO;
         if (root) {
+2 -24
fs/afs/file.c
···
  #include "internal.h"

  static int afs_file_mmap(struct file *file, struct vm_area_struct *vma);
- static int afs_readpage(struct file *file, struct page *page);
  static int afs_symlink_readpage(struct file *file, struct page *page);
  static void afs_invalidatepage(struct page *page, unsigned int offset,
                                unsigned int length);
  static int afs_releasepage(struct page *page, gfp_t gfp_flags);

- static void afs_readahead(struct readahead_control *ractl);
  static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter);
  static void afs_vm_open(struct vm_area_struct *area);
  static void afs_vm_close(struct vm_area_struct *area);
···
  };

  const struct address_space_operations afs_file_aops = {
-        .readpage       = afs_readpage,
-        .readahead      = afs_readahead,
+        .readpage       = netfs_readpage,
+        .readahead      = netfs_readahead,
         .set_page_dirty = afs_set_page_dirty,
         .launder_page   = afs_launder_page,
         .releasepage    = afs_releasepage,
···
         return 0;
  }

- static bool afs_is_cache_enabled(struct inode *inode)
- {
-        struct fscache_cookie *cookie = afs_vnode_cache(AFS_FS_I(inode));
-
-        return fscache_cookie_enabled(cookie) && cookie->cache_priv;
- }
-
  static int afs_begin_cache_operation(struct netfs_io_request *rreq)
  {
  #ifdef CONFIG_AFS_FSCACHE
···

  const struct netfs_request_ops afs_req_ops = {
         .init_request           = afs_init_request,
-        .is_cache_enabled       = afs_is_cache_enabled,
         .begin_cache_operation  = afs_begin_cache_operation,
         .check_write_begin      = afs_check_write_begin,
         .issue_read             = afs_issue_read,
         .cleanup                = afs_priv_cleanup,
  };
-
- static int afs_readpage(struct file *file, struct page *page)
- {
-        struct folio *folio = page_folio(page);
-
-        return netfs_readpage(file, folio, &afs_req_ops, NULL);
- }
-
- static void afs_readahead(struct readahead_control *ractl)
- {
-        netfs_readahead(ractl, &afs_req_ops, NULL);
- }

  int afs_write_inode(struct inode *inode, struct writeback_control *wbc)
  {
+20 -11
fs/afs/inode.c
···
  }

  /*
+ * Set parameters for the netfs library
+ */
+ static void afs_set_netfs_context(struct afs_vnode *vnode)
+ {
+        netfs_i_context_init(&vnode->vfs_inode, &afs_req_ops);
+ }
+
+ /*
   * Initialise an inode from the vnode status.
   */
  static int afs_inode_init_from_status(struct afs_operation *op,
···
         }

         afs_set_i_size(vnode, status->size);
+        afs_set_netfs_context(vnode);

         vnode->invalid_before   = status->data_version;
         inode_set_iversion_raw(&vnode->vfs_inode, status->data_version);
···
         struct afs_vnode_cache_aux aux;

         if (vnode->status.type != AFS_FTYPE_FILE) {
-                vnode->cache = NULL;
+                vnode->netfs_ctx.cache = NULL;
                 return;
         }

···
         key.vnode_id_ext[1]     = htonl(vnode->fid.vnode_hi);
         afs_set_cache_aux(vnode, &aux);

-        vnode->cache = fscache_acquire_cookie(
-                vnode->volume->cache,
-                vnode->status.type == AFS_FTYPE_FILE ? 0 : FSCACHE_ADV_SINGLE_CHUNK,
-                &key, sizeof(key),
-                &aux, sizeof(aux),
-                vnode->status.size);
+        afs_vnode_set_cache(vnode,
+                            fscache_acquire_cookie(
+                                    vnode->volume->cache,
+                                    vnode->status.type == AFS_FTYPE_FILE ?
+                                    0 : FSCACHE_ADV_SINGLE_CHUNK,
+                                    &key, sizeof(key),
+                                    &aux, sizeof(aux),
+                                    vnode->status.size));
  #endif
  }

···

         vnode = AFS_FS_I(inode);
         vnode->cb_v_break = as->volume->cb_v_break,
+        afs_set_netfs_context(vnode);

         op = afs_alloc_operation(key, as->volume);
         if (IS_ERR(op)) {
···
                 afs_put_wb_key(wbk);
         }

- #ifdef CONFIG_AFS_FSCACHE
-        fscache_relinquish_cookie(vnode->cache,
+        fscache_relinquish_cookie(afs_vnode_cache(vnode),
                                   test_bit(AFS_VNODE_DELETED, &vnode->flags));
-        vnode->cache = NULL;
- #endif

         afs_prune_wb_keys(vnode);
         afs_put_permits(rcu_access_pointer(vnode->permit_cache));
+14 -5
fs/afs/internal.h
···
   * leak from one inode to another.
   */
  struct afs_vnode {
-        struct inode            vfs_inode;      /* the VFS's inode record */
+        struct {
+                /* These must be contiguous */
+                struct inode    vfs_inode;      /* the VFS's inode record */
+                struct netfs_i_context netfs_ctx; /* Netfslib context */
+        };

         struct afs_volume       *volume;        /* volume on which vnode resides */
         struct afs_fid          fid;            /* the file identifier for this inode */
         struct afs_file_status  status;         /* AFS status info for this file */
         afs_dataversion_t       invalid_before; /* Child dentries are invalid before this */
- #ifdef CONFIG_AFS_FSCACHE
-        struct fscache_cookie   *cache;         /* caching cookie */
- #endif
         struct afs_permits __rcu *permit_cache; /* cache of permits so far obtained */
         struct mutex            io_lock;        /* Lock for serialising I/O on this mutex */
         struct rw_semaphore     validate_lock;  /* lock for validating this vnode */
···
  static inline struct fscache_cookie *afs_vnode_cache(struct afs_vnode *vnode)
  {
  #ifdef CONFIG_AFS_FSCACHE
-        return vnode->cache;
+        return netfs_i_cookie(&vnode->vfs_inode);
  #else
         return NULL;
+ #endif
+ }
+
+ static inline void afs_vnode_set_cache(struct afs_vnode *vnode,
+                                        struct fscache_cookie *cookie)
+ {
+ #ifdef CONFIG_AFS_FSCACHE
+        vnode->netfs_ctx.cache = cookie;
  #endif
  }

+1 -3
fs/afs/super.c
···
         /* Reset anything that shouldn't leak from one inode to the next. */
         memset(&vnode->fid, 0, sizeof(vnode->fid));
         memset(&vnode->status, 0, sizeof(vnode->status));
+        afs_vnode_set_cache(vnode, NULL);

         vnode->volume           = NULL;
         vnode->lock_key         = NULL;
         vnode->permit_cache     = NULL;
- #ifdef CONFIG_AFS_FSCACHE
-        vnode->cache            = NULL;
- #endif

         vnode->flags            = 1 << AFS_VNODE_UNSET;
         vnode->lock_state       = AFS_VNODE_LOCK_NONE;
+1 -2
fs/afs/write.c
···
          * file.  We need to do this before we get a lock on the page in case
          * there's more than one writer competing for the same cache block.
          */
-        ret = netfs_write_begin(file, mapping, pos, len, flags, &folio, fsdata,
-                                &afs_req_ops, NULL);
+        ret = netfs_write_begin(file, mapping, pos, len, flags, &folio, fsdata);
         if (ret < 0)
                 return ret;

+4 -27
fs/ceph/addr.c
···
         ceph_put_cap_refs(ci, got);
  }

- static const struct netfs_request_ops ceph_netfs_read_ops = {
+ const struct netfs_request_ops ceph_netfs_ops = {
         .init_request           = ceph_init_request,
         .begin_cache_operation  = ceph_begin_cache_operation,
         .issue_read             = ceph_netfs_issue_read,
···
         .check_write_begin      = ceph_netfs_check_write_begin,
         .cleanup                = ceph_readahead_cleanup,
  };
-
- /* read a single page, without unlocking it. */
- static int ceph_readpage(struct file *file, struct page *subpage)
- {
-        struct folio *folio = page_folio(subpage);
-        struct inode *inode = file_inode(file);
-        struct ceph_inode_info *ci = ceph_inode(inode);
-        struct ceph_vino vino = ceph_vino(inode);
-        size_t len = folio_size(folio);
-        u64 off = folio_file_pos(folio);
-
-        dout("readpage ino %llx.%llx file %p off %llu len %zu folio %p index %lu\n inline %d",
-             vino.ino, vino.snap, file, off, len, folio, folio_index(folio),
-             ci->i_inline_version != CEPH_INLINE_NONE);
-
-        return netfs_readpage(file, folio, &ceph_netfs_read_ops, NULL);
- }
-
- static void ceph_readahead(struct readahead_control *ractl)
- {
-        netfs_readahead(ractl, &ceph_netfs_read_ops, NULL);
- }

  #ifdef CONFIG_CEPH_FSCACHE
  static void ceph_set_page_fscache(struct page *page)
···
         struct folio *folio = NULL;
         int r;

-        r = netfs_write_begin(file, inode->i_mapping, pos, len, 0, &folio, NULL,
-                              &ceph_netfs_read_ops, NULL);
+        r = netfs_write_begin(file, inode->i_mapping, pos, len, 0, &folio, NULL);
         if (r == 0)
                 folio_wait_fscache(folio);
         if (r < 0) {
···
  }

  const struct address_space_operations ceph_aops = {
-        .readpage = ceph_readpage,
-        .readahead = ceph_readahead,
+        .readpage = netfs_readpage,
+        .readahead = netfs_readahead,
         .writepage = ceph_writepage,
         .writepages = ceph_writepages_start,
         .write_begin = ceph_write_begin,
+14 -14
fs/ceph/cache.c
···
         if (!(inode->i_state & I_NEW))
                 return;

-        WARN_ON_ONCE(ci->fscache);
+        WARN_ON_ONCE(ci->netfs_ctx.cache);

-        ci->fscache = fscache_acquire_cookie(fsc->fscache, 0,
-                                             &ci->i_vino, sizeof(ci->i_vino),
-                                             &ci->i_version, sizeof(ci->i_version),
-                                             i_size_read(inode));
+        ci->netfs_ctx.cache =
+                fscache_acquire_cookie(fsc->fscache, 0,
+                                       &ci->i_vino, sizeof(ci->i_vino),
+                                       &ci->i_version, sizeof(ci->i_version),
+                                       i_size_read(inode));
  }

- void ceph_fscache_unregister_inode_cookie(struct ceph_inode_info* ci)
+ void ceph_fscache_unregister_inode_cookie(struct ceph_inode_info *ci)
  {
-        struct fscache_cookie *cookie = ci->fscache;
-
-        fscache_relinquish_cookie(cookie, false);
+        fscache_relinquish_cookie(ceph_fscache_cookie(ci), false);
  }

  void ceph_fscache_use_cookie(struct inode *inode, bool will_modify)
  {
         struct ceph_inode_info *ci = ceph_inode(inode);

-        fscache_use_cookie(ci->fscache, will_modify);
+        fscache_use_cookie(ceph_fscache_cookie(ci), will_modify);
  }

  void ceph_fscache_unuse_cookie(struct inode *inode, bool update)
···
         if (update) {
                 loff_t i_size = i_size_read(inode);

-                fscache_unuse_cookie(ci->fscache, &ci->i_version, &i_size);
+                fscache_unuse_cookie(ceph_fscache_cookie(ci),
+                                     &ci->i_version, &i_size);
         } else {
-                fscache_unuse_cookie(ci->fscache, NULL, NULL);
+                fscache_unuse_cookie(ceph_fscache_cookie(ci), NULL, NULL);
         }
  }

···
         struct ceph_inode_info *ci = ceph_inode(inode);
         loff_t i_size = i_size_read(inode);

-        fscache_update_cookie(ci->fscache, &ci->i_version, &i_size);
+        fscache_update_cookie(ceph_fscache_cookie(ci), &ci->i_version, &i_size);
  }

  void ceph_fscache_invalidate(struct inode *inode, bool dio_write)
  {
         struct ceph_inode_info *ci = ceph_inode(inode);

-        fscache_invalidate(ceph_inode(inode)->fscache,
+        fscache_invalidate(ceph_fscache_cookie(ci),
                            &ci->i_version, i_size_read(inode),
                            dio_write ? FSCACHE_INVAL_DIO_WRITE : 0);
  }
+1 -10
fs/ceph/cache.h
···
  void ceph_fscache_update(struct inode *inode);
  void ceph_fscache_invalidate(struct inode *inode, bool dio_write);

- static inline void ceph_fscache_inode_init(struct ceph_inode_info *ci)
- {
-        ci->fscache = NULL;
- }
-
  static inline struct fscache_cookie *ceph_fscache_cookie(struct ceph_inode_info *ci)
  {
-        return ci->fscache;
+        return netfs_i_cookie(&ci->vfs_inode);
  }

  static inline void ceph_fscache_resize(struct inode *inode, loff_t to)
···
  }

  static inline void ceph_fscache_unregister_fs(struct ceph_fs_client* fsc)
- {
- }
-
- static inline void ceph_fscache_inode_init(struct ceph_inode_info *ci)
  {
  }

+3 -3
fs/ceph/inode.c
···

         dout("alloc_inode %p\n", &ci->vfs_inode);

+        /* Set parameters for the netfs library */
+        netfs_i_context_init(&ci->vfs_inode, &ceph_netfs_ops);
+
         spin_lock_init(&ci->i_ceph_lock);

         ci->i_version = 0;
···
         INIT_WORK(&ci->i_work, ceph_inode_work);
         ci->i_work_mask = 0;
         memset(&ci->i_btime, '\0', sizeof(ci->i_btime));
-
-        ceph_fscache_inode_init(ci);
-
         return &ci->vfs_inode;
  }

+8 -9
fs/ceph/super.h
···
  #include <linux/posix_acl.h>
  #include <linux/refcount.h>
  #include <linux/security.h>
+ #include <linux/netfs.h>
+ #include <linux/fscache.h>

  #include <linux/ceph/libceph.h>
-
- #ifdef CONFIG_CEPH_FSCACHE
- #include <linux/fscache.h>
- #endif

  /* large granularity for statfs utilization stats to facilitate
   * large volume sizes on 32-bit machines. */
···
   * Ceph inode.
   */
  struct ceph_inode_info {
+        struct {
+                /* These must be contiguous */
+                struct inode vfs_inode;
+                struct netfs_i_context netfs_ctx; /* Netfslib context */
+        };
         struct ceph_vino i_vino;   /* ceph ino + snap */

         spinlock_t i_ceph_lock;
···

         struct work_struct i_work;
         unsigned long  i_work_mask;
-
- #ifdef CONFIG_CEPH_FSCACHE
-        struct fscache_cookie *fscache;
- #endif
-        struct inode vfs_inode; /* at end */
  };

  static inline struct ceph_inode_info *
···

  /* addr.c */
  extern const struct address_space_operations ceph_aops;
+ extern const struct netfs_request_ops ceph_netfs_ops;
  extern int ceph_mmap(struct file *file, struct vm_area_struct *vma);
  extern int ceph_uninline_data(struct file *file);
  extern int ceph_pool_perm_check(struct inode *inode, int need);
+6 -4
fs/cifs/cifsglob.h
···
  #include <linux/mempool.h>
  #include <linux/workqueue.h>
  #include <linux/utsname.h>
+ #include <linux/netfs.h>
  #include "cifs_fs_sb.h"
  #include "cifsacl.h"
  #include <crypto/internal/hash.h>
···
   */

  struct cifsInodeInfo {
+        struct {
+                /* These must be contiguous */
+                struct inode vfs_inode;         /* the VFS's inode record */
+                struct netfs_i_context netfs_ctx; /* Netfslib context */
+        };
         bool can_cache_brlcks;
         struct list_head llist; /* locks helb by this inode */
         /*
···
         u64  uniqueid;                  /* server inode number */
         u64  createtime;                /* creation time on server */
         __u8 lease_key[SMB2_LEASE_KEY_SIZE];    /* lease key for this inode */
- #ifdef CONFIG_CIFS_FSCACHE
-        struct fscache_cookie *fscache;
- #endif
-        struct inode vfs_inode;
         struct list_head deferred_closes; /* list of deferred closes */
         spinlock_t deferred_lock; /* protection on deferred list */
         bool lease_granted; /* Flag to indicate whether lease or oplock is granted. */
+6 -5
fs/cifs/fscache.c
···

         cifs_fscache_fill_coherency(&cifsi->vfs_inode, &cd);

-        cifsi->fscache =
+        cifsi->netfs_ctx.cache =
                 fscache_acquire_cookie(tcon->fscache, 0,
                                        &cifsi->uniqueid, sizeof(cifsi->uniqueid),
                                        &cd, sizeof(cd),
···
  void cifs_fscache_release_inode_cookie(struct inode *inode)
  {
         struct cifsInodeInfo *cifsi = CIFS_I(inode);
+        struct fscache_cookie *cookie = cifs_inode_cookie(inode);

-        if (cifsi->fscache) {
-                cifs_dbg(FYI, "%s: (0x%p)\n", __func__, cifsi->fscache);
-                fscache_relinquish_cookie(cifsi->fscache, false);
-                cifsi->fscache = NULL;
+        if (cookie) {
+                cifs_dbg(FYI, "%s: (0x%p)\n", __func__, cookie);
+                fscache_relinquish_cookie(cookie, false);
+                cifsi->netfs_ctx.cache = NULL;
         }
  }

+1 -1
fs/cifs/fscache.h
···

  static inline struct fscache_cookie *cifs_inode_cookie(struct inode *inode)
  {
-        return CIFS_I(inode)->fscache;
+        return netfs_i_cookie(inode);
  }

  static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags)
+16 -2
fs/netfs/internal.h
···
   */

  #include <linux/netfs.h>
+ #include <linux/fscache.h>
  #include <trace/events/netfs.h>

  #ifdef pr_fmt
···
   */
  struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
                                              struct file *file,
-                                             const struct netfs_request_ops *ops,
-                                             void *netfs_priv,
                                              loff_t start, size_t len,
                                              enum netfs_io_origin origin);
  void netfs_get_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace what);
···
  #define netfs_stat(x) do {} while(0)
  #define netfs_stat_d(x) do {} while(0)
  #endif
+
+ /*
+ * Miscellaneous functions.
+ */
+ static inline bool netfs_is_cache_enabled(struct netfs_i_context *ctx)
+ {
+ #if IS_ENABLED(CONFIG_FSCACHE)
+        struct fscache_cookie *cookie = ctx->cache;
+
+        return fscache_cookie_valid(cookie) && cookie->cache_priv &&
+                fscache_cookie_enabled(cookie);
+ #else
+        return false;
+ #endif
+ }

  /*****************************************************************************/
  /*
+6 -6
fs/netfs/objects.c
···
   */
  struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
                                              struct file *file,
-                                             const struct netfs_request_ops *ops,
-                                             void *netfs_priv,
                                              loff_t start, size_t len,
                                              enum netfs_io_origin origin)
  {
         static atomic_t debug_ids;
+        struct inode *inode = file ? file_inode(file) : mapping->host;
+        struct netfs_i_context *ctx = netfs_i_context(inode);
         struct netfs_io_request *rreq;
         int ret;

···
         rreq->start     = start;
         rreq->len       = len;
         rreq->origin    = origin;
-        rreq->netfs_ops = ops;
-        rreq->netfs_priv = netfs_priv;
+        rreq->netfs_ops = ctx->ops;
         rreq->mapping   = mapping;
-        rreq->inode     = file_inode(file);
-        rreq->i_size    = i_size_read(rreq->inode);
+        rreq->inode     = inode;
+        rreq->i_size    = i_size_read(inode);
         rreq->debug_id  = atomic_inc_return(&debug_ids);
         INIT_LIST_HEAD(&rreq->subrequests);
         INIT_WORK(&rreq->work, netfs_rreq_work);
···
  {
         struct netfs_io_request *rreq =
                 container_of(work, struct netfs_io_request, work);
+
         netfs_clear_subrequests(rreq, false);
         if (rreq->netfs_priv)
                 rreq->netfs_ops->cleanup(rreq->mapping, rreq->netfs_priv);
+45 -55
fs/netfs/read_helper.c
··· 14 14 #include <linux/uio.h> 15 15 #include <linux/sched/mm.h> 16 16 #include <linux/task_io_accounting_ops.h> 17 - #include <linux/netfs.h> 18 17 #include "internal.h" 19 18 #define CREATE_TRACE_POINTS 20 19 #include <trace/events/netfs.h> ··· 734 735 /** 735 736 * netfs_readahead - Helper to manage a read request 736 737 * @ractl: The description of the readahead request 737 - * @ops: The network filesystem's operations for the helper to use 738 - * @netfs_priv: Private netfs data to be retained in the request 739 738 * 740 739 * Fulfil a readahead request by drawing data from the cache if possible, or 741 740 * the netfs if not. Space beyond the EOF is zero-filled. Multiple I/O ··· 741 744 * readahead window can be expanded in either direction to a more convenient 742 745 * alignment for RPC efficiency or to make storage in the cache feasible. 743 746 * 744 - * The calling netfs must provide a table of operations, only one of which, 745 - * issue_op, is mandatory. It may also be passed a private token, which will 746 - * be retained in rreq->netfs_priv and will be cleaned up by ops->cleanup(). 747 + * The calling netfs must initialise a netfs context contiguous to the vfs 748 + * inode before calling this. 747 749 * 748 750 * This is usable whether or not caching is enabled. 
749 751 */ 750 - void netfs_readahead(struct readahead_control *ractl, 751 - const struct netfs_request_ops *ops, 752 - void *netfs_priv) 752 + void netfs_readahead(struct readahead_control *ractl) 753 753 { 754 754 struct netfs_io_request *rreq; 755 + struct netfs_i_context *ctx = netfs_i_context(ractl->mapping->host); 755 756 unsigned int debug_index = 0; 756 757 int ret; 757 758 758 759 _enter("%lx,%x", readahead_index(ractl), readahead_count(ractl)); 759 760 760 761 if (readahead_count(ractl) == 0) 761 - goto cleanup; 762 + return; 762 763 763 764 rreq = netfs_alloc_request(ractl->mapping, ractl->file, 764 - ops, netfs_priv, 765 765 readahead_pos(ractl), 766 766 readahead_length(ractl), 767 767 NETFS_READAHEAD); 768 768 if (IS_ERR(rreq)) 769 - goto cleanup; 769 + return; 770 770 771 - if (ops->begin_cache_operation) { 772 - ret = ops->begin_cache_operation(rreq); 771 + if (ctx->ops->begin_cache_operation) { 772 + ret = ctx->ops->begin_cache_operation(rreq); 773 773 if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS) 774 774 goto cleanup_free; 775 775 } ··· 798 804 cleanup_free: 799 805 netfs_put_request(rreq, false, netfs_rreq_trace_put_failed); 800 806 return; 801 - cleanup: 802 - if (netfs_priv) 803 - ops->cleanup(ractl->mapping, netfs_priv); 804 - return; 805 807 } 806 808 EXPORT_SYMBOL(netfs_readahead); 807 809 808 810 /** 809 811 * netfs_readpage - Helper to manage a readpage request 810 812 * @file: The file to read from 811 - * @folio: The folio to read 812 - * @ops: The network filesystem's operations for the helper to use 813 - * @netfs_priv: Private netfs data to be retained in the request 813 + * @subpage: A subpage of the folio to read 814 814 * 815 815 * Fulfil a readpage request by drawing data from the cache if possible, or the 816 816 * netfs if not. Space beyond the EOF is zero-filled. Multiple I/O requests 817 817 * from different sources will get munged together. 
818 818 * 819 - * The calling netfs must provide a table of operations, only one of which, 820 - * issue_op, is mandatory. It may also be passed a private token, which will 821 - * be retained in rreq->netfs_priv and will be cleaned up by ops->cleanup(). 819 + * The calling netfs must initialise a netfs context contiguous to the vfs 820 + * inode before calling this. 822 821 * 823 822 * This is usable whether or not caching is enabled. 824 823 */ 825 - int netfs_readpage(struct file *file, 826 - struct folio *folio, 827 - const struct netfs_request_ops *ops, 828 - void *netfs_priv) 824 + int netfs_readpage(struct file *file, struct page *subpage) 829 825 { 826 + struct folio *folio = page_folio(subpage); 827 + struct address_space *mapping = folio->mapping; 830 828 struct netfs_io_request *rreq; 829 + struct netfs_i_context *ctx = netfs_i_context(mapping->host); 831 830 unsigned int debug_index = 0; 832 831 int ret; 833 832 834 833 _enter("%lx", folio_index(folio)); 835 834 836 - rreq = netfs_alloc_request(folio->mapping, file, ops, netfs_priv, 835 + rreq = netfs_alloc_request(mapping, file, 837 836 folio_file_pos(folio), folio_size(folio), 838 837 NETFS_READPAGE); 839 838 if (IS_ERR(rreq)) { ··· 834 847 goto alloc_error; 835 848 } 836 849 837 - if (ops->begin_cache_operation) { 838 - ret = ops->begin_cache_operation(rreq); 850 + if (ctx->ops->begin_cache_operation) { 851 + ret = ctx->ops->begin_cache_operation(rreq); 839 852 if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS) { 840 853 folio_unlock(folio); 841 854 goto out; ··· 873 886 netfs_put_request(rreq, false, netfs_rreq_trace_put_hold); 874 887 return ret; 875 888 alloc_error: 876 - if (netfs_priv) 877 - ops->cleanup(folio_file_mapping(folio), netfs_priv); 878 889 folio_unlock(folio); 879 890 return ret; 880 891 } ··· 883 898 * @folio: The folio being prepared 884 899 * @pos: starting position for the write 885 900 * @len: length of write 901 + * @always_fill: T if the folio should always be 
completely filled/cleared 886 902 * 887 903 * In some cases, write_begin doesn't need to read at all: 888 904 * - full folio write ··· 893 907 * If any of these criteria are met, then zero out the unwritten parts 894 908 * of the folio and return true. Otherwise, return false. 895 909 */ 896 - static bool netfs_skip_folio_read(struct folio *folio, loff_t pos, size_t len) 910 + static bool netfs_skip_folio_read(struct folio *folio, loff_t pos, size_t len, 911 + bool always_fill) 897 912 { 898 913 struct inode *inode = folio_inode(folio); 899 914 loff_t i_size = i_size_read(inode); 900 915 size_t offset = offset_in_folio(folio, pos); 916 + size_t plen = folio_size(folio); 917 + 918 + if (unlikely(always_fill)) { 919 + if (pos - offset + len <= i_size) 920 + return false; /* Page entirely before EOF */ 921 + zero_user_segment(&folio->page, 0, plen); 922 + folio_mark_uptodate(folio); 923 + return true; 924 + } 901 925 902 926 /* Full folio write */ 903 - if (offset == 0 && len >= folio_size(folio)) 927 + if (offset == 0 && len >= plen) 904 928 return true; 905 929 906 - /* pos beyond last folio in the file */ 930 + /* Page entirely beyond the end of the file */ 907 931 if (pos - offset >= i_size) 908 932 goto zero_out; 909 933 ··· 923 927 924 928 return false; 925 929 zero_out: 926 - zero_user_segments(&folio->page, 0, offset, offset + len, folio_size(folio)); 930 + zero_user_segments(&folio->page, 0, offset, offset + len, plen); 927 931 return true; 928 932 } 929 933 ··· 936 940 * @aop_flags: AOP_* flags 937 941 * @_folio: Where to put the resultant folio 938 942 * @_fsdata: Place for the netfs to store a cookie 939 - * @ops: The network filesystem's operations for the helper to use 940 - * @netfs_priv: Private netfs data to be retained in the request 941 943 * 942 944 * Pre-read data for a write-begin request by drawing data from the cache if 943 945 * possible, or the netfs if not. Space beyond the EOF is zero-filled. 
··· 954 960 * should go ahead; unlock the folio and return -EAGAIN to cause the folio to 955 961 * be regot; or return an error. 956 962 * 963 + * The calling netfs must initialise a netfs context contiguous to the vfs 964 + * inode before calling this. 965 + * 957 966 * This is usable whether or not caching is enabled. 958 967 */ 959 968 int netfs_write_begin(struct file *file, struct address_space *mapping, 960 969 loff_t pos, unsigned int len, unsigned int aop_flags, 961 - struct folio **_folio, void **_fsdata, 962 - const struct netfs_request_ops *ops, 963 - void *netfs_priv) 970 + struct folio **_folio, void **_fsdata) 964 971 { 965 972 struct netfs_io_request *rreq; 973 + struct netfs_i_context *ctx = netfs_i_context(file_inode(file)); 966 974 struct folio *folio; 967 - struct inode *inode = file_inode(file); 968 975 unsigned int debug_index = 0, fgp_flags; 969 976 pgoff_t index = pos >> PAGE_SHIFT; 970 977 int ret; ··· 981 986 if (!folio) 982 987 return -ENOMEM; 983 988 984 - if (ops->check_write_begin) { 989 + if (ctx->ops->check_write_begin) { 985 990 /* Allow the netfs (eg. ceph) to flush conflicts. */ 986 - ret = ops->check_write_begin(file, pos, len, folio, _fsdata); 991 + ret = ctx->ops->check_write_begin(file, pos, len, folio, _fsdata); 987 992 if (ret < 0) { 988 993 trace_netfs_failure(NULL, NULL, ret, netfs_fail_check_write_begin); 989 994 if (ret == -EAGAIN) ··· 999 1004 * within the cache granule containing the EOF, in which case we need 1000 1005 * to preload the granule. 
1001 1006 */ 1002 - if (!ops->is_cache_enabled(inode) && 1003 - netfs_skip_folio_read(folio, pos, len)) { 1007 + if (!netfs_is_cache_enabled(ctx) && 1008 + netfs_skip_folio_read(folio, pos, len, false)) { 1004 1009 netfs_stat(&netfs_n_rh_write_zskip); 1005 1010 goto have_folio_no_wait; 1006 1011 } 1007 1012 1008 - rreq = netfs_alloc_request(mapping, file, ops, netfs_priv, 1013 + rreq = netfs_alloc_request(mapping, file, 1009 1014 folio_file_pos(folio), folio_size(folio), 1010 1015 NETFS_READ_FOR_WRITE); 1011 1016 if (IS_ERR(rreq)) { ··· 1014 1019 } 1015 1020 rreq->no_unlock_folio = folio_index(folio); 1016 1021 __set_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags); 1017 - netfs_priv = NULL; 1018 1022 1019 - if (ops->begin_cache_operation) { 1020 - ret = ops->begin_cache_operation(rreq); 1023 + if (ctx->ops->begin_cache_operation) { 1024 + ret = ctx->ops->begin_cache_operation(rreq); 1021 1025 if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS) 1022 1026 goto error_put; 1023 1027 } ··· 1070 1076 if (ret < 0) 1071 1077 goto error; 1072 1078 have_folio_no_wait: 1073 - if (netfs_priv) 1074 - ops->cleanup(mapping, netfs_priv); 1075 1079 *_folio = folio; 1076 1080 _leave(" = 0"); 1077 1081 return 0; ··· 1079 1087 error: 1080 1088 folio_unlock(folio); 1081 1089 folio_put(folio); 1082 - if (netfs_priv) 1083 - ops->cleanup(mapping, netfs_priv); 1084 1090 _leave(" = %d", ret); 1085 1091 return ret; 1086 1092 }
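The new always_fill argument and the widened zeroing above are the write_begin side of the ver #3/#4 fixes. The decision paths visible in this hunk can be sketched as plain arithmetic (helper name and userspace types invented for illustration; the real netfs_skip_folio_read() also zeroes the folio and marks it uptodate before returning true, and has further checks elided from this hunk):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of netfs_skip_folio_read()'s visible branches.
 * pos/len describe the write, offset is pos's offset within the folio,
 * plen is the folio size.  Returning true means write_begin may
 * zero-fill instead of issuing a read. */
static bool can_skip_read(long long pos, size_t offset, size_t len,
			  size_t plen, long long i_size, bool always_fill)
{
	if (always_fill) {
		if (pos - (long long)offset + (long long)len <= i_size)
			return false;	/* Folio entirely before EOF: read */
		return true;		/* Zero the whole folio instead */
	}

	/* Full folio write: nothing needs reading */
	if (offset == 0 && len >= plen)
		return true;

	/* Folio entirely beyond the end of the file: zero it out */
	if (pos - (long long)offset >= i_size)
		return true;

	return false;	/* Partial write inside the file: must read */
}
```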
-1
fs/netfs/stats.c
··· 7 7 8 8 #include <linux/export.h> 9 9 #include <linux/seq_file.h> 10 - #include <linux/netfs.h> 11 10 #include "internal.h" 12 11 13 12 atomic_t netfs_n_rh_readahead;
+70 -11
include/linux/netfs.h
··· 119 119 bool was_async); 120 120 121 121 /* 122 + * Per-inode description. This must be directly after the inode struct. 123 + */ 124 + struct netfs_i_context { 125 + const struct netfs_request_ops *ops; 126 + #if IS_ENABLED(CONFIG_FSCACHE) 127 + struct fscache_cookie *cache; 128 + #endif 129 + }; 130 + 131 + /* 122 132 * Resources required to do operations on a cache. 123 133 */ 124 134 struct netfs_cache_resources { ··· 202 192 * Operations the network filesystem can/must provide to the helpers. 203 193 */ 204 194 struct netfs_request_ops { 205 - bool (*is_cache_enabled)(struct inode *inode); 206 195 int (*init_request)(struct netfs_io_request *rreq, struct file *file); 207 196 int (*begin_cache_operation)(struct netfs_io_request *rreq); 208 197 void (*expand_readahead)(struct netfs_io_request *rreq); ··· 272 263 }; 273 264 274 265 struct readahead_control; 275 - extern void netfs_readahead(struct readahead_control *, 276 - const struct netfs_request_ops *, 277 - void *); 278 - extern int netfs_readpage(struct file *, 279 - struct folio *, 280 - const struct netfs_request_ops *, 281 - void *); 266 + extern void netfs_readahead(struct readahead_control *); 267 + extern int netfs_readpage(struct file *, struct page *); 282 268 extern int netfs_write_begin(struct file *, struct address_space *, 283 269 loff_t, unsigned int, unsigned int, struct folio **, 284 - void **, 285 - const struct netfs_request_ops *, 286 - void *); 270 + void **); 287 271 288 272 extern void netfs_subreq_terminated(struct netfs_io_subrequest *, ssize_t, bool); 289 273 extern void netfs_get_subrequest(struct netfs_io_subrequest *subreq, ··· 284 282 extern void netfs_put_subrequest(struct netfs_io_subrequest *subreq, 285 283 bool was_async, enum netfs_sreq_ref_trace what); 286 284 extern void netfs_stats_show(struct seq_file *); 285 + 286 + /** 287 + * netfs_i_context - Get the netfs inode context from the inode 288 + * @inode: The inode to query 289 + * 290 + * Get the netfs lib inode 
context from the network filesystem's inode. The 291 + * context struct is expected to directly follow on from the VFS inode struct. 292 + */ 293 + static inline struct netfs_i_context *netfs_i_context(struct inode *inode) 294 + { 295 + return (struct netfs_i_context *)(inode + 1); 296 + } 297 + 298 + /** 299 + * netfs_inode - Get the netfs inode from the inode context 300 + * @ctx: The context to query 301 + * 302 + * Get the netfs inode from the netfs library's inode context. The VFS inode 303 + * is expected to directly precede the context struct. 304 + */ 305 + static inline struct inode *netfs_inode(struct netfs_i_context *ctx) 306 + { 307 + return ((struct inode *)ctx) - 1; 308 + } 309 + 310 + /** 311 + * netfs_i_context_init - Initialise a netfs lib context 312 + * @inode: The inode with which the context is associated 313 + * @ops: The netfs's operations list 314 + * 315 + * Initialise the netfs library context struct. This is expected to follow on 316 + * directly from the VFS inode struct. 317 + */ 318 + static inline void netfs_i_context_init(struct inode *inode, 319 + const struct netfs_request_ops *ops) 320 + { 321 + struct netfs_i_context *ctx = netfs_i_context(inode); 322 + 323 + memset(ctx, 0, sizeof(*ctx)); 324 + ctx->ops = ops; 325 + } 326 + 327 + /** 328 + * netfs_i_cookie - Get the cache cookie from the inode 329 + * @inode: The inode to query 330 + * 331 + * Get the caching cookie (if enabled) from the network filesystem's inode. 332 + */ 333 + static inline struct fscache_cookie *netfs_i_cookie(struct inode *inode) 334 + { 335 + #if IS_ENABLED(CONFIG_FSCACHE) 336 + struct netfs_i_context *ctx = netfs_i_context(inode); 337 + return ctx->cache; 338 + #else 339 + return NULL; 340 + #endif 341 + } 287 342 288 343 #endif /* _LINUX_NETFS_H */
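Tying the patch together, the layout contract from the commit message can be exercised in isolation. The sketch below mocks struct inode (the real one is far larger) and reproduces the three helpers added above; the pointer arithmetic only works because netfs_ctx sits directly after vfs_inode with no padding between them:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Userspace mock: a stand-in for the VFS inode, not the real struct. */
struct inode {
	unsigned long	i_ino;
	long long	i_size;
};

struct netfs_request_ops;	/* opaque for this sketch */

struct netfs_i_context {
	const struct netfs_request_ops *ops;
	void *cache;	/* stands in for struct fscache_cookie * */
};

struct my_inode {
	struct {
		/* These must be contiguous */
		struct inode		vfs_inode;
		struct netfs_i_context	netfs_ctx;
	};
};

/* The context lives immediately after the VFS inode... */
static struct netfs_i_context *netfs_i_context(struct inode *inode)
{
	return (struct netfs_i_context *)(inode + 1);
}

/* ...so the VFS inode immediately precedes the context. */
static struct inode *netfs_inode(struct netfs_i_context *ctx)
{
	return ((struct inode *)ctx) - 1;
}

static void netfs_i_context_init(struct inode *inode,
				 const struct netfs_request_ops *ops)
{
	struct netfs_i_context *ctx = netfs_i_context(inode);

	memset(ctx, 0, sizeof(*ctx));
	ctx->ops = ops;
}
```

This is why the commit message wraps both members in an anonymous struct with a "These must be contiguous" comment: any member inserted between them would break the (inode + 1) arithmetic silently.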