Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
kernel os linux

Merge tag 'vfs-6.12.netfs' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs

Pull netfs updates from Christian Brauner:
"This contains the work to improve read/write performance for the new
netfs library.

The main performance enhancing changes are:

- Define a structure, struct folio_queue, and a new iterator type,
ITER_FOLIOQ, to hold a buffer as a replacement for ITER_XARRAY. See
that patch for questions about naming and form.

ITER_FOLIOQ is provided as a replacement for ITER_XARRAY. The
problem with an xarray is that accessing it requires the use of a
lock (typically the RCU read lock), which means that we can't
supply iterate_and_advance() with a step function that might sleep
(crypto, for example) without having to drop the lock between pages.
ITER_FOLIOQ is the iterator for a chain of folio_queue structs,
where each folio_queue holds a small list of folios. A folio_queue
struct is a simpler structure than an xarray and is not subject to
concurrent manipulation by the VM. folio_queue is used rather than
a bvec[] as it can form lists of indefinite size, adding to one end
and removing from the other on the fly (a rough sketch follows this
list).

- Provide a copy_folio_from_iter() wrapper (a usage sketch follows this
list).

- Make cifs RDMA support ITER_FOLIOQ.

- Use folio queues in the write-side helpers instead of xarrays.

- Add a function to reset the iterator in a subrequest.

- Simplify the write-side helpers to use sheaves to skip gaps rather
than trying to work out where gaps are.

- In afs, make the read subrequests asynchronous, putting them into
work items to allow the next patch to do progressive
unlocking/reading.

- Overhaul the read-side helpers to improve performance.

- Fix the caching of a partial block at the end of a file.

- Allow a store to be cancelled.
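
As a rough illustration of the shape of this, a chain of folio_queue
segments can be grown on the fly and then handed to an iterator. The
helper below is a hypothetical sketch, not code from the series; the
folioq_*() calls, the next/prev chaining and iov_iter_folio_queue()
follow the calls visible in the diffs further down:

    #include <linux/folio_queue.h>
    #include <linux/slab.h>
    #include <linux/uio.h>

    /* Hypothetical: collect 'nr' folios into a folio_queue chain and
     * describe the whole buffer with an ITER_FOLIOQ iterator. */
    static int demo_build_folioq(struct folio **folios, unsigned int nr,
                                 size_t bytes, struct iov_iter *iter)
    {
            struct folio_queue *head, *tail;
            unsigned int i;

            head = tail = kmalloc(sizeof(*head), GFP_KERNEL);
            if (!head)
                    return -ENOMEM;
            folioq_init(head);

            for (i = 0; i < nr; i++) {
                    /* Each segment holds a small, fixed number of slots;
                     * chain a new segment onto the tail when it fills. */
                    if (folioq_count(tail) >= folioq_nr_slots(tail)) {
                            struct folio_queue *fq;

                            fq = kmalloc(sizeof(*fq), GFP_KERNEL);
                            if (!fq)
                                    return -ENOMEM; /* sketch only: chain leaked */
                            folioq_init(fq);
                            fq->prev = tail;
                            tail->next = fq;
                            tail = fq;
                    }
                    folioq_append(tail, folios[i]);
            }

            /* Unlike an xarray, no lock is needed to walk this buffer, so a
             * step function that sleeps (e.g. crypto) may run over it. */
            iov_iter_folio_queue(iter, ITER_DEST, head, 0, 0, bytes);
            return 0;
    }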
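
Similarly, a minimal usage sketch for the new wrapper; the helper name
and error handling here are assumptions, only copy_folio_from_iter()
itself comes from the series:

    /* Hypothetical: copy 'len' bytes from a source iterator into a folio
     * at 'offset'; the buffered-write path uses the _atomic variant of
     * the same call. */
    static ssize_t demo_fill_folio(struct folio *folio, size_t offset,
                                   size_t len, struct iov_iter *from)
    {
            size_t copied = copy_folio_from_iter(folio, offset, len, from);

            if (copied == 0)
                    return -EFAULT;
            flush_dcache_folio(folio);
            return copied;
    }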

Then some changes for cifs to make it use folio queues instead of
xarrays for crypto bufferage:

- Use raw iteration functions rather than manually coding iteration
when hashing data.

- Switch to using folio_queue for crypto buffers.

- Remove the xarray bits.

Make some adjustments to the /proc/fs/netfs/stats file such that:

- All the netfs stats lines begin 'Netfs:'; change this to something
a bit more useful.

- Add a couple of stats counters to track the numbers of skips and
waits on the per-inode writeback serialisation lock to make it
easier to check for this as a source of performance loss.

Miscellaneous work:

- Ensure that the sb_writers lock is taken around
vfs_{set,remove}xattr() in the cachefiles code (the pattern is
sketched after this message).

- Reduce the number of conditional branches in netfs_perform_write().

- Move the CIFS_INO_MODIFIED_ATTR flag to the netfs_inode struct and
remove cifs_post_modify().

- Move the max_len/max_nr_segs members from netfs_io_subrequest to
netfs_io_stream as they're only needed for one subreq at a time.

- Add an 'unknown' source value for tracing purposes.

- Remove NETFS_COPY_TO_CACHE as it's no longer used.

- Set the request work function up front at allocation time.

- Use bh-disabling spinlocks for rreq->lock as cachefiles completion
may be run from block-filesystem DIO completion in softirq context
(see the locking sketch after this message).

- Remove fs/netfs/io.c"
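
For the cachefiles xattr change above, the pattern is to hold a write
reference on the cache mount, and hence sb_writers, around the
modification. The wrapper below is hypothetical; the
mnt_want_write()/mnt_drop_write() pairing and the vfs_setxattr() call
mirror the fs/cachefiles/xattr.c diff further down:

    /* Hypothetical: set an xattr with the mount write-protection held. */
    static int demo_setxattr_locked(struct vfsmount *mnt, struct dentry *dentry,
                                    const char *name, const void *value,
                                    size_t len)
    {
            int ret = mnt_want_write(mnt);

            if (ret == 0) {
                    ret = vfs_setxattr(&nop_mnt_idmap, dentry, name,
                                       value, len, 0);
                    mnt_drop_write(mnt);
            }
            return ret;
    }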
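
And for the rreq->lock change, the point is that a lock which can also
be taken from softirq-context completion must disable bottom halves on
the process-context side. A hypothetical helper mirroring the list
manipulation seen in the netfs diffs below:

    /* Hypothetical: add a subrequest under the BH-disabling lock so a
     * softirq-context completion can't deadlock against this context. */
    static void demo_add_subreq(struct netfs_io_request *rreq,
                                struct netfs_io_subrequest *subreq)
    {
            spin_lock_bh(&rreq->lock);
            list_add_tail(&subreq->rreq_link, &rreq->subrequests);
            spin_unlock_bh(&rreq->lock);
    }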

* tag 'vfs-6.12.netfs' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs: (25 commits)
docs: filesystems: corrected grammar of netfs page
cifs: Don't support ITER_XARRAY
cifs: Switch crypto buffer to use a folio_queue rather than an xarray
cifs: Use iterate_and_advance*() routines directly for hashing
netfs: Cancel dirty folios that have no storage destination
cachefiles, netfs: Fix write to partial block at EOF
netfs: Remove fs/netfs/io.c
netfs: Speed up buffered reading
afs: Make read subreqs async
netfs: Simplify the writeback code
netfs: Provide an iterator-reset function
netfs: Use new folio_queue data type and iterator instead of xarray iter
cifs: Provide the capability to extract from ITER_FOLIOQ to RDMA SGEs
iov_iter: Provide copy_folio_from_iter()
mm: Define struct folio_queue and ITER_FOLIOQ to handle a sequence of folios
netfs: Use bh-disabling spinlocks for rreq->lock
netfs: Set the request work function upon allocation
netfs: Remove NETFS_COPY_TO_CACHE
netfs: Reserve netfs_sreq_source 0 as unset/unknown
netfs: Move max_len/max_nr_segs from netfs_io_subrequest to netfs_io_stream
...

+3537 -2000
+1 -1
Documentation/filesystems/netfs_library.rst
···
  * Handle local caching, allowing cached data and server-read data to be
    interleaved for a single request.

- * Handle clearing of bufferage that aren't on the server.
+ * Handle clearing of bufferage that isn't on the server.

  * Handle retrying of reads that failed, switching reads from the cache to the
    server as necessary.
+8 -3
fs/9p/vfs_addr.c
···
 {
         struct netfs_io_request *rreq = subreq->rreq;
         struct p9_fid *fid = rreq->netfs_priv;
+        unsigned long long pos = subreq->start + subreq->transferred;
         int total, err;

-        total = p9_client_read(fid, subreq->start + subreq->transferred,
-                               &subreq->io_iter, &err);
+        total = p9_client_read(fid, pos, &subreq->io_iter, &err);

         /* if we just extended the file size, any portion not in
          * cache won't be on server and is zeroes */
         if (subreq->rreq->origin != NETFS_DIO_READ)
                 __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
+        if (pos + total >= i_size_read(rreq->inode))
+                __set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags);

-        netfs_subreq_terminated(subreq, err ?: total, false);
+        if (!err)
+                subreq->transferred += total;
+
+        netfs_read_subreq_terminated(subreq, err, false);
 }

 /**
+23 -7
fs/afs/file.c
··· 16 16 #include <linux/mm.h> 17 17 #include <linux/swap.h> 18 18 #include <linux/netfs.h> 19 + #include <trace/events/netfs.h> 19 20 #include "internal.h" 20 21 21 22 static int afs_file_mmap(struct file *file, struct vm_area_struct *vma); ··· 243 242 244 243 req->error = error; 245 244 if (subreq) { 246 - if (subreq->rreq->origin != NETFS_DIO_READ) 247 - __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); 248 - netfs_subreq_terminated(subreq, error ?: req->actual_len, false); 245 + subreq->rreq->i_size = req->file_size; 246 + if (req->pos + req->actual_len >= req->file_size) 247 + __set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags); 248 + netfs_read_subreq_terminated(subreq, error, false); 249 249 req->subreq = NULL; 250 250 } else if (req->done) { 251 251 req->done(req); ··· 264 262 afs_fetch_data_notify(op); 265 263 } 266 264 265 + static void afs_fetch_data_aborted(struct afs_operation *op) 266 + { 267 + afs_check_for_remote_deletion(op); 268 + afs_fetch_data_notify(op); 269 + } 270 + 267 271 static void afs_fetch_data_put(struct afs_operation *op) 268 272 { 269 273 op->fetch.req->error = afs_op_error(op); ··· 280 272 .issue_afs_rpc = afs_fs_fetch_data, 281 273 .issue_yfs_rpc = yfs_fs_fetch_data, 282 274 .success = afs_fetch_data_success, 283 - .aborted = afs_check_for_remote_deletion, 275 + .aborted = afs_fetch_data_aborted, 284 276 .failed = afs_fetch_data_notify, 285 277 .put = afs_fetch_data_put, 286 278 }; ··· 302 294 op = afs_alloc_operation(req->key, vnode->volume); 303 295 if (IS_ERR(op)) { 304 296 if (req->subreq) 305 - netfs_subreq_terminated(req->subreq, PTR_ERR(op), false); 297 + netfs_read_subreq_terminated(req->subreq, PTR_ERR(op), false); 306 298 return PTR_ERR(op); 307 299 } 308 300 ··· 313 305 return afs_do_sync_operation(op); 314 306 } 315 307 316 - static void afs_issue_read(struct netfs_io_subrequest *subreq) 308 + static void afs_read_worker(struct work_struct *work) 317 309 { 310 + struct netfs_io_subrequest *subreq = container_of(work, struct netfs_io_subrequest, work); 318 311 struct afs_vnode *vnode = AFS_FS_I(subreq->rreq->inode); 319 312 struct afs_read *fsreq; 320 313 321 314 fsreq = afs_alloc_read(GFP_NOFS); 322 315 if (!fsreq) 323 - return netfs_subreq_terminated(subreq, -ENOMEM, false); 316 + return netfs_read_subreq_terminated(subreq, -ENOMEM, false); 324 317 325 318 fsreq->subreq = subreq; 326 319 fsreq->pos = subreq->start + subreq->transferred; ··· 330 321 fsreq->vnode = vnode; 331 322 fsreq->iter = &subreq->io_iter; 332 323 324 + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); 333 325 afs_fetch_data(fsreq->vnode, fsreq); 334 326 afs_put_read(fsreq); 327 + } 328 + 329 + static void afs_issue_read(struct netfs_io_subrequest *subreq) 330 + { 331 + INIT_WORK(&subreq->work, afs_read_worker); 332 + queue_work(system_long_wq, &subreq->work); 335 333 } 336 334 337 335 static int afs_symlink_read_folio(struct file *file, struct folio *folio)
+7 -2
fs/afs/fsclient.c
···
         struct afs_vnode_param *vp = &op->file[0];
         struct afs_read *req = op->fetch.req;
         const __be32 *bp;
+        size_t count_before;
         int ret;

         _enter("{%u,%zu,%zu/%llu}",
···

         /* extract the returned data */
         case 2:
-                _debug("extract data %zu/%llu",
-                       iov_iter_count(call->iter), req->actual_len);
+                count_before = call->iov_len;
+                _debug("extract data %zu/%llu", count_before, req->actual_len);

                 ret = afs_extract_data(call, true);
+                if (req->subreq) {
+                        req->subreq->transferred += count_before - call->iov_len;
+                        netfs_read_subreq_progress(req->subreq, false);
+                }
                 if (ret < 0)
                         return ret;
+3 -1
fs/afs/write.c
···
  */
 void afs_prepare_write(struct netfs_io_subrequest *subreq)
 {
+        struct netfs_io_stream *stream = &subreq->rreq->io_streams[subreq->stream_nr];
+
         //if (test_bit(NETFS_SREQ_RETRYING, &subreq->flags))
         //        subreq->max_len = 512 * 1024;
         //else
-        subreq->max_len = 256 * 1024 * 1024;
+        stream->sreq_max_len = 256 * 1024 * 1024;
 }

 /*
+7 -2
fs/afs/yfsclient.c
···
         struct afs_vnode_param *vp = &op->file[0];
         struct afs_read *req = op->fetch.req;
         const __be32 *bp;
+        size_t count_before;
         int ret;

         _enter("{%u,%zu, %zu/%llu}",
···

         /* extract the returned data */
         case 2:
-                _debug("extract data %zu/%llu",
-                       iov_iter_count(call->iter), req->actual_len);
+                count_before = call->iov_len;
+                _debug("extract data %zu/%llu", count_before, req->actual_len);

                 ret = afs_extract_data(call, true);
+                if (req->subreq) {
+                        req->subreq->transferred += count_before - call->iov_len;
+                        netfs_read_subreq_progress(req->subreq, false);
+                }
                 if (ret < 0)
                         return ret;
+17 -2
fs/cachefiles/io.c
··· 627 627 { 628 628 struct netfs_io_request *wreq = subreq->rreq; 629 629 struct netfs_cache_resources *cres = &wreq->cache_resources; 630 + struct netfs_io_stream *stream = &wreq->io_streams[subreq->stream_nr]; 630 631 631 632 _enter("W=%x[%x] %llx", wreq->debug_id, subreq->debug_index, subreq->start); 632 633 633 - subreq->max_len = MAX_RW_COUNT; 634 - subreq->max_nr_segs = BIO_MAX_VECS; 634 + stream->sreq_max_len = MAX_RW_COUNT; 635 + stream->sreq_max_segs = BIO_MAX_VECS; 635 636 636 637 if (!cachefiles_cres_file(cres)) { 637 638 if (!fscache_wait_for_operation(cres, FSCACHE_WANT_WRITE)) ··· 648 647 struct netfs_cache_resources *cres = &wreq->cache_resources; 649 648 struct cachefiles_object *object = cachefiles_cres_object(cres); 650 649 struct cachefiles_cache *cache = object->volume->cache; 650 + struct netfs_io_stream *stream = &wreq->io_streams[subreq->stream_nr]; 651 651 const struct cred *saved_cred; 652 652 size_t off, pre, post, len = subreq->len; 653 653 loff_t start = subreq->start; ··· 662 660 if (off) { 663 661 pre = CACHEFILES_DIO_BLOCK_SIZE - off; 664 662 if (pre >= len) { 663 + fscache_count_dio_misfit(); 665 664 netfs_write_subrequest_terminated(subreq, len, false); 666 665 return; 667 666 } ··· 673 670 } 674 671 675 672 /* We also need to end on the cache granularity boundary */ 673 + if (start + len == wreq->i_size) { 674 + size_t part = len % CACHEFILES_DIO_BLOCK_SIZE; 675 + size_t need = CACHEFILES_DIO_BLOCK_SIZE - part; 676 + 677 + if (part && stream->submit_extendable_to >= need) { 678 + len += need; 679 + subreq->len += need; 680 + subreq->io_iter.count += need; 681 + } 682 + } 683 + 676 684 post = len & (CACHEFILES_DIO_BLOCK_SIZE - 1); 677 685 if (post) { 678 686 len -= post; 679 687 if (len == 0) { 688 + fscache_count_dio_misfit(); 680 689 netfs_write_subrequest_terminated(subreq, post, false); 681 690 return; 682 691 }
+26 -8
fs/cachefiles/xattr.c
··· 64 64 memcpy(buf->data, fscache_get_aux(object->cookie), len); 65 65 66 66 ret = cachefiles_inject_write_error(); 67 - if (ret == 0) 68 - ret = vfs_setxattr(&nop_mnt_idmap, dentry, cachefiles_xattr_cache, 69 - buf, sizeof(struct cachefiles_xattr) + len, 0); 67 + if (ret == 0) { 68 + ret = mnt_want_write_file(file); 69 + if (ret == 0) { 70 + ret = vfs_setxattr(&nop_mnt_idmap, dentry, 71 + cachefiles_xattr_cache, buf, 72 + sizeof(struct cachefiles_xattr) + len, 0); 73 + mnt_drop_write_file(file); 74 + } 75 + } 70 76 if (ret < 0) { 71 77 trace_cachefiles_vfs_error(object, file_inode(file), ret, 72 78 cachefiles_trace_setxattr_error); ··· 157 151 int ret; 158 152 159 153 ret = cachefiles_inject_remove_error(); 160 - if (ret == 0) 161 - ret = vfs_removexattr(&nop_mnt_idmap, dentry, cachefiles_xattr_cache); 154 + if (ret == 0) { 155 + ret = mnt_want_write(cache->mnt); 156 + if (ret == 0) { 157 + ret = vfs_removexattr(&nop_mnt_idmap, dentry, 158 + cachefiles_xattr_cache); 159 + mnt_drop_write(cache->mnt); 160 + } 161 + } 162 162 if (ret < 0) { 163 163 trace_cachefiles_vfs_error(object, d_inode(dentry), ret, 164 164 cachefiles_trace_remxattr_error); ··· 220 208 memcpy(buf->data, p, volume->vcookie->coherency_len); 221 209 222 210 ret = cachefiles_inject_write_error(); 223 - if (ret == 0) 224 - ret = vfs_setxattr(&nop_mnt_idmap, dentry, cachefiles_xattr_cache, 225 - buf, len, 0); 211 + if (ret == 0) { 212 + ret = mnt_want_write(volume->cache->mnt); 213 + if (ret == 0) { 214 + ret = vfs_setxattr(&nop_mnt_idmap, dentry, 215 + cachefiles_xattr_cache, 216 + buf, len, 0); 217 + mnt_drop_write(volume->cache->mnt); 218 + } 219 + } 226 220 if (ret < 0) { 227 221 trace_cachefiles_vfs_error(NULL, d_inode(dentry), ret, 228 222 cachefiles_trace_setxattr_error);
+46 -30
fs/ceph/addr.c
··· 13 13 #include <linux/iversion.h> 14 14 #include <linux/ktime.h> 15 15 #include <linux/netfs.h> 16 + #include <trace/events/netfs.h> 16 17 17 18 #include "super.h" 18 19 #include "mds_client.h" ··· 206 205 } 207 206 } 208 207 209 - static bool ceph_netfs_clamp_length(struct netfs_io_subrequest *subreq) 210 - { 211 - struct inode *inode = subreq->rreq->inode; 212 - struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode); 213 - struct ceph_inode_info *ci = ceph_inode(inode); 214 - u64 objno, objoff; 215 - u32 xlen; 216 - 217 - /* Truncate the extent at the end of the current block */ 218 - ceph_calc_file_object_mapping(&ci->i_layout, subreq->start, subreq->len, 219 - &objno, &objoff, &xlen); 220 - subreq->len = min(xlen, fsc->mount_options->rsize); 221 - return true; 222 - } 223 - 224 208 static void finish_netfs_read(struct ceph_osd_request *req) 225 209 { 226 210 struct inode *inode = req->r_inode; ··· 250 264 calc_pages_for(osd_data->alignment, 251 265 osd_data->length), false); 252 266 } 253 - netfs_subreq_terminated(subreq, err, false); 267 + if (err > 0) { 268 + subreq->transferred = err; 269 + err = 0; 270 + } 271 + trace_netfs_sreq(subreq, netfs_sreq_trace_io_progress); 272 + netfs_read_subreq_terminated(subreq, err, false); 254 273 iput(req->r_inode); 255 274 ceph_dec_osd_stopping_blocker(fsc->mdsc); 256 275 } ··· 269 278 struct ceph_mds_request *req; 270 279 struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(inode->i_sb); 271 280 struct ceph_inode_info *ci = ceph_inode(inode); 272 - struct iov_iter iter; 273 281 ssize_t err = 0; 274 282 size_t len; 275 283 int mode; ··· 291 301 req->r_args.getattr.mask = cpu_to_le32(CEPH_STAT_CAP_INLINE_DATA); 292 302 req->r_num_caps = 2; 293 303 304 + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); 294 305 err = ceph_mdsc_do_request(mdsc, NULL, req); 295 306 if (err < 0) 296 307 goto out; ··· 305 314 } 306 315 307 316 len = min_t(size_t, iinfo->inline_len - subreq->start, subreq->len); 308 - iov_iter_xarray(&iter, ITER_DEST, &rreq->mapping->i_pages, subreq->start, len); 309 - err = copy_to_iter(iinfo->inline_data + subreq->start, len, &iter); 310 - if (err == 0) 317 + err = copy_to_iter(iinfo->inline_data + subreq->start, len, &subreq->io_iter); 318 + if (err == 0) { 311 319 err = -EFAULT; 320 + } else { 321 + subreq->transferred += err; 322 + err = 0; 323 + } 312 324 313 325 ceph_mdsc_put_request(req); 314 326 out: 315 - netfs_subreq_terminated(subreq, err, false); 327 + netfs_read_subreq_terminated(subreq, err, false); 316 328 return true; 329 + } 330 + 331 + static int ceph_netfs_prepare_read(struct netfs_io_subrequest *subreq) 332 + { 333 + struct netfs_io_request *rreq = subreq->rreq; 334 + struct inode *inode = rreq->inode; 335 + struct ceph_inode_info *ci = ceph_inode(inode); 336 + struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode); 337 + u64 objno, objoff; 338 + u32 xlen; 339 + 340 + /* Truncate the extent at the end of the current block */ 341 + ceph_calc_file_object_mapping(&ci->i_layout, subreq->start, subreq->len, 342 + &objno, &objoff, &xlen); 343 + rreq->io_streams[0].sreq_max_len = umin(xlen, fsc->mount_options->rsize); 344 + return 0; 317 345 } 318 346 319 347 static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) ··· 344 334 struct ceph_client *cl = fsc->client; 345 335 struct ceph_osd_request *req = NULL; 346 336 struct ceph_vino vino = ceph_vino(inode); 347 - struct iov_iter iter; 348 - int err = 0; 349 - u64 len = subreq->len; 337 + int err; 338 + u64 len; 350 339 bool sparse = IS_ENCRYPTED(inode) 
|| ceph_test_mount_opt(fsc, SPARSEREAD); 351 340 u64 off = subreq->start; 352 341 int extent_cnt; ··· 358 349 if (ceph_has_inline_data(ci) && ceph_netfs_issue_op_inline(subreq)) 359 350 return; 360 351 352 + // TODO: This rounding here is slightly dodgy. It *should* work, for 353 + // now, as the cache only deals in blocks that are a multiple of 354 + // PAGE_SIZE and fscrypt blocks are at most PAGE_SIZE. What needs to 355 + // happen is for the fscrypt driving to be moved into netfslib and the 356 + // data in the cache also to be stored encrypted. 357 + len = subreq->len; 361 358 ceph_fscrypt_adjust_off_and_len(inode, &off, &len); 362 359 363 360 req = ceph_osdc_new_request(&fsc->client->osdc, &ci->i_layout, vino, ··· 386 371 doutc(cl, "%llx.%llx pos=%llu orig_len=%zu len=%llu\n", 387 372 ceph_vinop(inode), subreq->start, subreq->len, len); 388 373 389 - iov_iter_xarray(&iter, ITER_DEST, &rreq->mapping->i_pages, subreq->start, len); 390 - 391 374 /* 392 375 * FIXME: For now, use CEPH_OSD_DATA_TYPE_PAGES instead of _ITER for 393 376 * encrypted inodes. We'd need infrastructure that handles an iov_iter ··· 397 384 struct page **pages; 398 385 size_t page_off; 399 386 400 - err = iov_iter_get_pages_alloc2(&iter, &pages, len, &page_off); 387 + err = iov_iter_get_pages_alloc2(&subreq->io_iter, &pages, len, &page_off); 401 388 if (err < 0) { 402 389 doutc(cl, "%llx.%llx failed to allocate pages, %d\n", 403 390 ceph_vinop(inode), err); ··· 412 399 osd_req_op_extent_osd_data_pages(req, 0, pages, len, 0, false, 413 400 false); 414 401 } else { 415 - osd_req_op_extent_osd_iter(req, 0, &iter); 402 + osd_req_op_extent_osd_iter(req, 0, &subreq->io_iter); 416 403 } 417 404 if (!ceph_inc_osd_stopping_blocker(fsc->mdsc)) { 418 405 err = -EIO; ··· 423 410 req->r_inode = inode; 424 411 ihold(inode); 425 412 413 + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); 426 414 ceph_osdc_start_request(req->r_osdc, req); 427 415 out: 428 416 ceph_osdc_put_request(req); 429 417 if (err) 430 - netfs_subreq_terminated(subreq, err, false); 418 + netfs_read_subreq_terminated(subreq, err, false); 431 419 doutc(cl, "%llx.%llx result %d\n", ceph_vinop(inode), err); 432 420 } 433 421 434 422 static int ceph_init_request(struct netfs_io_request *rreq, struct file *file) 435 423 { 436 424 struct inode *inode = rreq->inode; 425 + struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode); 437 426 struct ceph_client *cl = ceph_inode_to_client(inode); 438 427 int got = 0, want = CEPH_CAP_FILE_CACHE; 439 428 struct ceph_netfs_request_data *priv; ··· 487 472 488 473 priv->caps = got; 489 474 rreq->netfs_priv = priv; 475 + rreq->io_streams[0].sreq_max_len = fsc->mount_options->rsize; 490 476 491 477 out: 492 478 if (ret < 0) ··· 512 496 const struct netfs_request_ops ceph_netfs_ops = { 513 497 .init_request = ceph_init_request, 514 498 .free_request = ceph_netfs_free_request, 499 + .prepare_read = ceph_netfs_prepare_read, 515 500 .issue_read = ceph_netfs_issue_read, 516 501 .expand_readahead = ceph_netfs_expand_readahead, 517 - .clamp_length = ceph_netfs_clamp_length, 518 502 .check_write_begin = ceph_netfs_check_write_begin, 519 503 }; 520 504
+3 -1
fs/netfs/Makefile
···
         buffered_write.o \
         direct_read.o \
         direct_write.o \
-        io.o \
         iterator.o \
         locking.o \
         main.o \
         misc.o \
         objects.o \
+        read_collect.o \
+        read_pgpriv2.o \
+        read_retry.o \
         write_collect.o \
         write_issue.o
+476 -300
fs/netfs/buffered_read.c
··· 9 9 #include <linux/task_io_accounting_ops.h> 10 10 #include "internal.h" 11 11 12 - /* 13 - * [DEPRECATED] Unlock the folios in a read operation for when the filesystem 14 - * is using PG_private_2 and direct writing to the cache from here rather than 15 - * marking the page for writeback. 16 - * 17 - * Note that we don't touch folio->private in this code. 18 - */ 19 - static void netfs_rreq_unlock_folios_pgpriv2(struct netfs_io_request *rreq, 20 - size_t *account) 21 - { 22 - struct netfs_io_subrequest *subreq; 23 - struct folio *folio; 24 - pgoff_t start_page = rreq->start / PAGE_SIZE; 25 - pgoff_t last_page = ((rreq->start + rreq->len) / PAGE_SIZE) - 1; 26 - bool subreq_failed = false; 27 - 28 - XA_STATE(xas, &rreq->mapping->i_pages, start_page); 29 - 30 - /* Walk through the pagecache and the I/O request lists simultaneously. 31 - * We may have a mixture of cached and uncached sections and we only 32 - * really want to write out the uncached sections. This is slightly 33 - * complicated by the possibility that we might have huge pages with a 34 - * mixture inside. 35 - */ 36 - subreq = list_first_entry(&rreq->subrequests, 37 - struct netfs_io_subrequest, rreq_link); 38 - subreq_failed = (subreq->error < 0); 39 - 40 - trace_netfs_rreq(rreq, netfs_rreq_trace_unlock_pgpriv2); 41 - 42 - rcu_read_lock(); 43 - xas_for_each(&xas, folio, last_page) { 44 - loff_t pg_end; 45 - bool pg_failed = false; 46 - bool folio_started = false; 47 - 48 - if (xas_retry(&xas, folio)) 49 - continue; 50 - 51 - pg_end = folio_pos(folio) + folio_size(folio) - 1; 52 - 53 - for (;;) { 54 - loff_t sreq_end; 55 - 56 - if (!subreq) { 57 - pg_failed = true; 58 - break; 59 - } 60 - 61 - if (!folio_started && 62 - test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags) && 63 - fscache_operation_valid(&rreq->cache_resources)) { 64 - trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache); 65 - folio_start_private_2(folio); 66 - folio_started = true; 67 - } 68 - 69 - pg_failed |= subreq_failed; 70 - sreq_end = subreq->start + subreq->len - 1; 71 - if (pg_end < sreq_end) 72 - break; 73 - 74 - *account += subreq->transferred; 75 - if (!list_is_last(&subreq->rreq_link, &rreq->subrequests)) { 76 - subreq = list_next_entry(subreq, rreq_link); 77 - subreq_failed = (subreq->error < 0); 78 - } else { 79 - subreq = NULL; 80 - subreq_failed = false; 81 - } 82 - 83 - if (pg_end == sreq_end) 84 - break; 85 - } 86 - 87 - if (!pg_failed) { 88 - flush_dcache_folio(folio); 89 - folio_mark_uptodate(folio); 90 - } 91 - 92 - if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) { 93 - if (folio->index == rreq->no_unlock_folio && 94 - test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags)) 95 - _debug("no unlock"); 96 - else 97 - folio_unlock(folio); 98 - } 99 - } 100 - rcu_read_unlock(); 101 - } 102 - 103 - /* 104 - * Unlock the folios in a read operation. We need to set PG_writeback on any 105 - * folios we're going to write back before we unlock them. 106 - * 107 - * Note that if the deprecated NETFS_RREQ_USE_PGPRIV2 is set then we use 108 - * PG_private_2 and do a direct write to the cache from here instead. 
109 - */ 110 - void netfs_rreq_unlock_folios(struct netfs_io_request *rreq) 111 - { 112 - struct netfs_io_subrequest *subreq; 113 - struct netfs_folio *finfo; 114 - struct folio *folio; 115 - pgoff_t start_page = rreq->start / PAGE_SIZE; 116 - pgoff_t last_page = ((rreq->start + rreq->len) / PAGE_SIZE) - 1; 117 - size_t account = 0; 118 - bool subreq_failed = false; 119 - 120 - XA_STATE(xas, &rreq->mapping->i_pages, start_page); 121 - 122 - if (test_bit(NETFS_RREQ_FAILED, &rreq->flags)) { 123 - __clear_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags); 124 - list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { 125 - __clear_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags); 126 - } 127 - } 128 - 129 - /* Handle deprecated PG_private_2 case. */ 130 - if (test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags)) { 131 - netfs_rreq_unlock_folios_pgpriv2(rreq, &account); 132 - goto out; 133 - } 134 - 135 - /* Walk through the pagecache and the I/O request lists simultaneously. 136 - * We may have a mixture of cached and uncached sections and we only 137 - * really want to write out the uncached sections. This is slightly 138 - * complicated by the possibility that we might have huge pages with a 139 - * mixture inside. 140 - */ 141 - subreq = list_first_entry(&rreq->subrequests, 142 - struct netfs_io_subrequest, rreq_link); 143 - subreq_failed = (subreq->error < 0); 144 - 145 - trace_netfs_rreq(rreq, netfs_rreq_trace_unlock); 146 - 147 - rcu_read_lock(); 148 - xas_for_each(&xas, folio, last_page) { 149 - loff_t pg_end; 150 - bool pg_failed = false; 151 - bool wback_to_cache = false; 152 - 153 - if (xas_retry(&xas, folio)) 154 - continue; 155 - 156 - pg_end = folio_pos(folio) + folio_size(folio) - 1; 157 - 158 - for (;;) { 159 - loff_t sreq_end; 160 - 161 - if (!subreq) { 162 - pg_failed = true; 163 - break; 164 - } 165 - 166 - wback_to_cache |= test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags); 167 - pg_failed |= subreq_failed; 168 - sreq_end = subreq->start + subreq->len - 1; 169 - if (pg_end < sreq_end) 170 - break; 171 - 172 - account += subreq->transferred; 173 - if (!list_is_last(&subreq->rreq_link, &rreq->subrequests)) { 174 - subreq = list_next_entry(subreq, rreq_link); 175 - subreq_failed = (subreq->error < 0); 176 - } else { 177 - subreq = NULL; 178 - subreq_failed = false; 179 - } 180 - 181 - if (pg_end == sreq_end) 182 - break; 183 - } 184 - 185 - if (!pg_failed) { 186 - flush_dcache_folio(folio); 187 - finfo = netfs_folio_info(folio); 188 - if (finfo) { 189 - trace_netfs_folio(folio, netfs_folio_trace_filled_gaps); 190 - if (finfo->netfs_group) 191 - folio_change_private(folio, finfo->netfs_group); 192 - else 193 - folio_detach_private(folio); 194 - kfree(finfo); 195 - } 196 - folio_mark_uptodate(folio); 197 - if (wback_to_cache && !WARN_ON_ONCE(folio_get_private(folio) != NULL)) { 198 - trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache); 199 - folio_attach_private(folio, NETFS_FOLIO_COPY_TO_CACHE); 200 - filemap_dirty_folio(folio->mapping, folio); 201 - } 202 - } 203 - 204 - if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) { 205 - if (folio->index == rreq->no_unlock_folio && 206 - test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags)) 207 - _debug("no unlock"); 208 - else 209 - folio_unlock(folio); 210 - } 211 - } 212 - rcu_read_unlock(); 213 - 214 - out: 215 - task_io_account_read(account); 216 - if (rreq->netfs_ops->done) 217 - rreq->netfs_ops->done(rreq); 218 - } 219 - 220 12 static void netfs_cache_expand_readahead(struct netfs_io_request *rreq, 221 13 unsigned long long 
*_start, 222 14 unsigned long long *_len, ··· 63 271 return fscache_begin_read_operation(&rreq->cache_resources, netfs_i_cookie(ctx)); 64 272 } 65 273 274 + /* 275 + * Decant the list of folios to read into a rolling buffer. 276 + */ 277 + static size_t netfs_load_buffer_from_ra(struct netfs_io_request *rreq, 278 + struct folio_queue *folioq) 279 + { 280 + unsigned int order, nr; 281 + size_t size = 0; 282 + 283 + nr = __readahead_batch(rreq->ractl, (struct page **)folioq->vec.folios, 284 + ARRAY_SIZE(folioq->vec.folios)); 285 + folioq->vec.nr = nr; 286 + for (int i = 0; i < nr; i++) { 287 + struct folio *folio = folioq_folio(folioq, i); 288 + 289 + trace_netfs_folio(folio, netfs_folio_trace_read); 290 + order = folio_order(folio); 291 + folioq->orders[i] = order; 292 + size += PAGE_SIZE << order; 293 + } 294 + 295 + for (int i = nr; i < folioq_nr_slots(folioq); i++) 296 + folioq_clear(folioq, i); 297 + 298 + return size; 299 + } 300 + 301 + /* 302 + * netfs_prepare_read_iterator - Prepare the subreq iterator for I/O 303 + * @subreq: The subrequest to be set up 304 + * 305 + * Prepare the I/O iterator representing the read buffer on a subrequest for 306 + * the filesystem to use for I/O (it can be passed directly to a socket). This 307 + * is intended to be called from the ->issue_read() method once the filesystem 308 + * has trimmed the request to the size it wants. 309 + * 310 + * Returns the limited size if successful and -ENOMEM if insufficient memory 311 + * available. 312 + * 313 + * [!] NOTE: This must be run in the same thread as ->issue_read() was called 314 + * in as we access the readahead_control struct. 315 + */ 316 + static ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq) 317 + { 318 + struct netfs_io_request *rreq = subreq->rreq; 319 + size_t rsize = subreq->len; 320 + 321 + if (subreq->source == NETFS_DOWNLOAD_FROM_SERVER) 322 + rsize = umin(rsize, rreq->io_streams[0].sreq_max_len); 323 + 324 + if (rreq->ractl) { 325 + /* If we don't have sufficient folios in the rolling buffer, 326 + * extract a folioq's worth from the readahead region at a time 327 + * into the buffer. Note that this acquires a ref on each page 328 + * that we will need to release later - but we don't want to do 329 + * that until after we've started the I/O. 
330 + */ 331 + while (rreq->submitted < subreq->start + rsize) { 332 + struct folio_queue *tail = rreq->buffer_tail, *new; 333 + size_t added; 334 + 335 + new = kmalloc(sizeof(*new), GFP_NOFS); 336 + if (!new) 337 + return -ENOMEM; 338 + netfs_stat(&netfs_n_folioq); 339 + folioq_init(new); 340 + new->prev = tail; 341 + tail->next = new; 342 + rreq->buffer_tail = new; 343 + added = netfs_load_buffer_from_ra(rreq, new); 344 + rreq->iter.count += added; 345 + rreq->submitted += added; 346 + } 347 + } 348 + 349 + subreq->len = rsize; 350 + if (unlikely(rreq->io_streams[0].sreq_max_segs)) { 351 + size_t limit = netfs_limit_iter(&rreq->iter, 0, rsize, 352 + rreq->io_streams[0].sreq_max_segs); 353 + 354 + if (limit < rsize) { 355 + subreq->len = limit; 356 + trace_netfs_sreq(subreq, netfs_sreq_trace_limited); 357 + } 358 + } 359 + 360 + subreq->io_iter = rreq->iter; 361 + 362 + if (iov_iter_is_folioq(&subreq->io_iter)) { 363 + if (subreq->io_iter.folioq_slot >= folioq_nr_slots(subreq->io_iter.folioq)) { 364 + subreq->io_iter.folioq = subreq->io_iter.folioq->next; 365 + subreq->io_iter.folioq_slot = 0; 366 + } 367 + subreq->curr_folioq = (struct folio_queue *)subreq->io_iter.folioq; 368 + subreq->curr_folioq_slot = subreq->io_iter.folioq_slot; 369 + subreq->curr_folio_order = subreq->curr_folioq->orders[subreq->curr_folioq_slot]; 370 + } 371 + 372 + iov_iter_truncate(&subreq->io_iter, subreq->len); 373 + iov_iter_advance(&rreq->iter, subreq->len); 374 + return subreq->len; 375 + } 376 + 377 + static enum netfs_io_source netfs_cache_prepare_read(struct netfs_io_request *rreq, 378 + struct netfs_io_subrequest *subreq, 379 + loff_t i_size) 380 + { 381 + struct netfs_cache_resources *cres = &rreq->cache_resources; 382 + 383 + if (!cres->ops) 384 + return NETFS_DOWNLOAD_FROM_SERVER; 385 + return cres->ops->prepare_read(subreq, i_size); 386 + } 387 + 388 + static void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error, 389 + bool was_async) 390 + { 391 + struct netfs_io_subrequest *subreq = priv; 392 + 393 + if (transferred_or_error < 0) { 394 + netfs_read_subreq_terminated(subreq, transferred_or_error, was_async); 395 + return; 396 + } 397 + 398 + if (transferred_or_error > 0) 399 + subreq->transferred += transferred_or_error; 400 + netfs_read_subreq_terminated(subreq, 0, was_async); 401 + } 402 + 403 + /* 404 + * Issue a read against the cache. 405 + * - Eats the caller's ref on subreq. 406 + */ 407 + static void netfs_read_cache_to_pagecache(struct netfs_io_request *rreq, 408 + struct netfs_io_subrequest *subreq) 409 + { 410 + struct netfs_cache_resources *cres = &rreq->cache_resources; 411 + 412 + netfs_stat(&netfs_n_rh_read); 413 + cres->ops->read(cres, subreq->start, &subreq->io_iter, NETFS_READ_HOLE_IGNORE, 414 + netfs_cache_read_terminated, subreq); 415 + } 416 + 417 + /* 418 + * Perform a read to the pagecache from a series of sources of different types, 419 + * slicing up the region to be read according to available cache blocks and 420 + * network rsize. 
421 + */ 422 + static void netfs_read_to_pagecache(struct netfs_io_request *rreq) 423 + { 424 + struct netfs_inode *ictx = netfs_inode(rreq->inode); 425 + unsigned long long start = rreq->start; 426 + ssize_t size = rreq->len; 427 + int ret = 0; 428 + 429 + atomic_inc(&rreq->nr_outstanding); 430 + 431 + do { 432 + struct netfs_io_subrequest *subreq; 433 + enum netfs_io_source source = NETFS_DOWNLOAD_FROM_SERVER; 434 + ssize_t slice; 435 + 436 + subreq = netfs_alloc_subrequest(rreq); 437 + if (!subreq) { 438 + ret = -ENOMEM; 439 + break; 440 + } 441 + 442 + subreq->start = start; 443 + subreq->len = size; 444 + 445 + atomic_inc(&rreq->nr_outstanding); 446 + spin_lock_bh(&rreq->lock); 447 + list_add_tail(&subreq->rreq_link, &rreq->subrequests); 448 + subreq->prev_donated = rreq->prev_donated; 449 + rreq->prev_donated = 0; 450 + trace_netfs_sreq(subreq, netfs_sreq_trace_added); 451 + spin_unlock_bh(&rreq->lock); 452 + 453 + source = netfs_cache_prepare_read(rreq, subreq, rreq->i_size); 454 + subreq->source = source; 455 + if (source == NETFS_DOWNLOAD_FROM_SERVER) { 456 + unsigned long long zp = umin(ictx->zero_point, rreq->i_size); 457 + size_t len = subreq->len; 458 + 459 + if (subreq->start >= zp) { 460 + subreq->source = source = NETFS_FILL_WITH_ZEROES; 461 + goto fill_with_zeroes; 462 + } 463 + 464 + if (len > zp - subreq->start) 465 + len = zp - subreq->start; 466 + if (len == 0) { 467 + pr_err("ZERO-LEN READ: R=%08x[%x] l=%zx/%zx s=%llx z=%llx i=%llx", 468 + rreq->debug_id, subreq->debug_index, 469 + subreq->len, size, 470 + subreq->start, ictx->zero_point, rreq->i_size); 471 + break; 472 + } 473 + subreq->len = len; 474 + 475 + netfs_stat(&netfs_n_rh_download); 476 + if (rreq->netfs_ops->prepare_read) { 477 + ret = rreq->netfs_ops->prepare_read(subreq); 478 + if (ret < 0) { 479 + atomic_dec(&rreq->nr_outstanding); 480 + netfs_put_subrequest(subreq, false, 481 + netfs_sreq_trace_put_cancel); 482 + break; 483 + } 484 + trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); 485 + } 486 + 487 + slice = netfs_prepare_read_iterator(subreq); 488 + if (slice < 0) { 489 + atomic_dec(&rreq->nr_outstanding); 490 + netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel); 491 + ret = slice; 492 + break; 493 + } 494 + 495 + rreq->netfs_ops->issue_read(subreq); 496 + goto done; 497 + } 498 + 499 + fill_with_zeroes: 500 + if (source == NETFS_FILL_WITH_ZEROES) { 501 + subreq->source = NETFS_FILL_WITH_ZEROES; 502 + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); 503 + netfs_stat(&netfs_n_rh_zero); 504 + slice = netfs_prepare_read_iterator(subreq); 505 + __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); 506 + netfs_read_subreq_terminated(subreq, 0, false); 507 + goto done; 508 + } 509 + 510 + if (source == NETFS_READ_FROM_CACHE) { 511 + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); 512 + slice = netfs_prepare_read_iterator(subreq); 513 + netfs_read_cache_to_pagecache(rreq, subreq); 514 + goto done; 515 + } 516 + 517 + pr_err("Unexpected read source %u\n", source); 518 + WARN_ON_ONCE(1); 519 + break; 520 + 521 + done: 522 + size -= slice; 523 + start += slice; 524 + cond_resched(); 525 + } while (size > 0); 526 + 527 + if (atomic_dec_and_test(&rreq->nr_outstanding)) 528 + netfs_rreq_terminated(rreq, false); 529 + 530 + /* Defer error return as we may need to wait for outstanding I/O. */ 531 + cmpxchg(&rreq->error, 0, ret); 532 + } 533 + 534 + /* 535 + * Wait for the read operation to complete, successfully or otherwise. 
536 + */ 537 + static int netfs_wait_for_read(struct netfs_io_request *rreq) 538 + { 539 + int ret; 540 + 541 + trace_netfs_rreq(rreq, netfs_rreq_trace_wait_ip); 542 + wait_on_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS, TASK_UNINTERRUPTIBLE); 543 + ret = rreq->error; 544 + if (ret == 0 && rreq->submitted < rreq->len) { 545 + trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read); 546 + ret = -EIO; 547 + } 548 + 549 + return ret; 550 + } 551 + 552 + /* 553 + * Set up the initial folioq of buffer folios in the rolling buffer and set the 554 + * iterator to refer to it. 555 + */ 556 + static int netfs_prime_buffer(struct netfs_io_request *rreq) 557 + { 558 + struct folio_queue *folioq; 559 + size_t added; 560 + 561 + folioq = kmalloc(sizeof(*folioq), GFP_KERNEL); 562 + if (!folioq) 563 + return -ENOMEM; 564 + netfs_stat(&netfs_n_folioq); 565 + folioq_init(folioq); 566 + rreq->buffer = folioq; 567 + rreq->buffer_tail = folioq; 568 + rreq->submitted = rreq->start; 569 + iov_iter_folio_queue(&rreq->iter, ITER_DEST, folioq, 0, 0, 0); 570 + 571 + added = netfs_load_buffer_from_ra(rreq, folioq); 572 + rreq->iter.count += added; 573 + rreq->submitted += added; 574 + return 0; 575 + } 576 + 577 + /* 578 + * Drop the ref on each folio that we inherited from the VM readahead code. We 579 + * still have the folio locks to pin the page until we complete the I/O. 580 + * 581 + * Note that we can't just release the batch in each queue struct as we use the 582 + * occupancy count in other places. 583 + */ 584 + static void netfs_put_ra_refs(struct folio_queue *folioq) 585 + { 586 + struct folio_batch fbatch; 587 + 588 + folio_batch_init(&fbatch); 589 + while (folioq) { 590 + for (unsigned int slot = 0; slot < folioq_count(folioq); slot++) { 591 + struct folio *folio = folioq_folio(folioq, slot); 592 + if (!folio) 593 + continue; 594 + trace_netfs_folio(folio, netfs_folio_trace_read_put); 595 + if (!folio_batch_add(&fbatch, folio)) 596 + folio_batch_release(&fbatch); 597 + } 598 + folioq = folioq->next; 599 + } 600 + 601 + folio_batch_release(&fbatch); 602 + } 603 + 66 604 /** 67 605 * netfs_readahead - Helper to manage a read request 68 606 * @ractl: The description of the readahead request ··· 411 289 void netfs_readahead(struct readahead_control *ractl) 412 290 { 413 291 struct netfs_io_request *rreq; 414 - struct netfs_inode *ctx = netfs_inode(ractl->mapping->host); 292 + struct netfs_inode *ictx = netfs_inode(ractl->mapping->host); 293 + unsigned long long start = readahead_pos(ractl); 294 + size_t size = readahead_length(ractl); 415 295 int ret; 416 296 417 - _enter("%lx,%x", readahead_index(ractl), readahead_count(ractl)); 418 - 419 - if (readahead_count(ractl) == 0) 420 - return; 421 - 422 - rreq = netfs_alloc_request(ractl->mapping, ractl->file, 423 - readahead_pos(ractl), 424 - readahead_length(ractl), 297 + rreq = netfs_alloc_request(ractl->mapping, ractl->file, start, size, 425 298 NETFS_READAHEAD); 426 299 if (IS_ERR(rreq)) 427 300 return; 428 301 429 - ret = netfs_begin_cache_read(rreq, ctx); 302 + ret = netfs_begin_cache_read(rreq, ictx); 430 303 if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS) 431 304 goto cleanup_free; 432 305 ··· 431 314 432 315 netfs_rreq_expand(rreq, ractl); 433 316 434 - /* Set up the output buffer */ 435 - iov_iter_xarray(&rreq->iter, ITER_DEST, &ractl->mapping->i_pages, 436 - rreq->start, rreq->len); 317 + rreq->ractl = ractl; 318 + if (netfs_prime_buffer(rreq) < 0) 319 + goto cleanup_free; 320 + netfs_read_to_pagecache(rreq); 437 321 438 - /* Drop the 
refs on the folios here rather than in the cache or 439 - * filesystem. The locks will be dropped in netfs_rreq_unlock(). 440 - */ 441 - while (readahead_folio(ractl)) 442 - ; 322 + /* Release the folio refs whilst we're waiting for the I/O. */ 323 + netfs_put_ra_refs(rreq->buffer); 443 324 444 - netfs_begin_read(rreq, false); 445 - netfs_put_request(rreq, false, netfs_rreq_trace_put_return); 325 + netfs_put_request(rreq, true, netfs_rreq_trace_put_return); 446 326 return; 447 327 448 328 cleanup_free: ··· 447 333 return; 448 334 } 449 335 EXPORT_SYMBOL(netfs_readahead); 336 + 337 + /* 338 + * Create a rolling buffer with a single occupying folio. 339 + */ 340 + static int netfs_create_singular_buffer(struct netfs_io_request *rreq, struct folio *folio) 341 + { 342 + struct folio_queue *folioq; 343 + 344 + folioq = kmalloc(sizeof(*folioq), GFP_KERNEL); 345 + if (!folioq) 346 + return -ENOMEM; 347 + 348 + netfs_stat(&netfs_n_folioq); 349 + folioq_init(folioq); 350 + folioq_append(folioq, folio); 351 + BUG_ON(folioq_folio(folioq, 0) != folio); 352 + BUG_ON(folioq_folio_order(folioq, 0) != folio_order(folio)); 353 + rreq->buffer = folioq; 354 + rreq->buffer_tail = folioq; 355 + rreq->submitted = rreq->start + rreq->len; 356 + iov_iter_folio_queue(&rreq->iter, ITER_DEST, folioq, 0, 0, rreq->len); 357 + rreq->ractl = (struct readahead_control *)1UL; 358 + return 0; 359 + } 360 + 361 + /* 362 + * Read into gaps in a folio partially filled by a streaming write. 363 + */ 364 + static int netfs_read_gaps(struct file *file, struct folio *folio) 365 + { 366 + struct netfs_io_request *rreq; 367 + struct address_space *mapping = folio->mapping; 368 + struct netfs_folio *finfo = netfs_folio_info(folio); 369 + struct netfs_inode *ctx = netfs_inode(mapping->host); 370 + struct folio *sink = NULL; 371 + struct bio_vec *bvec; 372 + unsigned int from = finfo->dirty_offset; 373 + unsigned int to = from + finfo->dirty_len; 374 + unsigned int off = 0, i = 0; 375 + size_t flen = folio_size(folio); 376 + size_t nr_bvec = flen / PAGE_SIZE + 2; 377 + size_t part; 378 + int ret; 379 + 380 + _enter("%lx", folio->index); 381 + 382 + rreq = netfs_alloc_request(mapping, file, folio_pos(folio), flen, NETFS_READ_GAPS); 383 + if (IS_ERR(rreq)) { 384 + ret = PTR_ERR(rreq); 385 + goto alloc_error; 386 + } 387 + 388 + ret = netfs_begin_cache_read(rreq, ctx); 389 + if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS) 390 + goto discard; 391 + 392 + netfs_stat(&netfs_n_rh_read_folio); 393 + trace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_read_gaps); 394 + 395 + /* Fiddle the buffer so that a gap at the beginning and/or a gap at the 396 + * end get copied to, but the middle is discarded. 
397 + */ 398 + ret = -ENOMEM; 399 + bvec = kmalloc_array(nr_bvec, sizeof(*bvec), GFP_KERNEL); 400 + if (!bvec) 401 + goto discard; 402 + 403 + sink = folio_alloc(GFP_KERNEL, 0); 404 + if (!sink) { 405 + kfree(bvec); 406 + goto discard; 407 + } 408 + 409 + trace_netfs_folio(folio, netfs_folio_trace_read_gaps); 410 + 411 + rreq->direct_bv = bvec; 412 + rreq->direct_bv_count = nr_bvec; 413 + if (from > 0) { 414 + bvec_set_folio(&bvec[i++], folio, from, 0); 415 + off = from; 416 + } 417 + while (off < to) { 418 + part = min_t(size_t, to - off, PAGE_SIZE); 419 + bvec_set_folio(&bvec[i++], sink, part, 0); 420 + off += part; 421 + } 422 + if (to < flen) 423 + bvec_set_folio(&bvec[i++], folio, flen - to, to); 424 + iov_iter_bvec(&rreq->iter, ITER_DEST, bvec, i, rreq->len); 425 + rreq->submitted = rreq->start + flen; 426 + 427 + netfs_read_to_pagecache(rreq); 428 + 429 + if (sink) 430 + folio_put(sink); 431 + 432 + ret = netfs_wait_for_read(rreq); 433 + if (ret == 0) { 434 + flush_dcache_folio(folio); 435 + folio_mark_uptodate(folio); 436 + } 437 + folio_unlock(folio); 438 + netfs_put_request(rreq, false, netfs_rreq_trace_put_return); 439 + return ret < 0 ? ret : 0; 440 + 441 + discard: 442 + netfs_put_request(rreq, false, netfs_rreq_trace_put_discard); 443 + alloc_error: 444 + folio_unlock(folio); 445 + return ret; 446 + } 450 447 451 448 /** 452 449 * netfs_read_folio - Helper to manage a read_folio request ··· 578 353 struct address_space *mapping = folio->mapping; 579 354 struct netfs_io_request *rreq; 580 355 struct netfs_inode *ctx = netfs_inode(mapping->host); 581 - struct folio *sink = NULL; 582 356 int ret; 357 + 358 + if (folio_test_dirty(folio)) { 359 + trace_netfs_folio(folio, netfs_folio_trace_read_gaps); 360 + return netfs_read_gaps(file, folio); 361 + } 583 362 584 363 _enter("%lx", folio->index); 585 364 ··· 603 374 trace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_readpage); 604 375 605 376 /* Set up the output buffer */ 606 - if (folio_test_dirty(folio)) { 607 - /* Handle someone trying to read from an unflushed streaming 608 - * write. We fiddle the buffer so that a gap at the beginning 609 - * and/or a gap at the end get copied to, but the middle is 610 - * discarded. 
611 - */ 612 - struct netfs_folio *finfo = netfs_folio_info(folio); 613 - struct bio_vec *bvec; 614 - unsigned int from = finfo->dirty_offset; 615 - unsigned int to = from + finfo->dirty_len; 616 - unsigned int off = 0, i = 0; 617 - size_t flen = folio_size(folio); 618 - size_t nr_bvec = flen / PAGE_SIZE + 2; 619 - size_t part; 377 + ret = netfs_create_singular_buffer(rreq, folio); 378 + if (ret < 0) 379 + goto discard; 620 380 621 - ret = -ENOMEM; 622 - bvec = kmalloc_array(nr_bvec, sizeof(*bvec), GFP_KERNEL); 623 - if (!bvec) 624 - goto discard; 625 - 626 - sink = folio_alloc(GFP_KERNEL, 0); 627 - if (!sink) 628 - goto discard; 629 - 630 - trace_netfs_folio(folio, netfs_folio_trace_read_gaps); 631 - 632 - rreq->direct_bv = bvec; 633 - rreq->direct_bv_count = nr_bvec; 634 - if (from > 0) { 635 - bvec_set_folio(&bvec[i++], folio, from, 0); 636 - off = from; 637 - } 638 - while (off < to) { 639 - part = min_t(size_t, to - off, PAGE_SIZE); 640 - bvec_set_folio(&bvec[i++], sink, part, 0); 641 - off += part; 642 - } 643 - if (to < flen) 644 - bvec_set_folio(&bvec[i++], folio, flen - to, to); 645 - iov_iter_bvec(&rreq->iter, ITER_DEST, bvec, i, rreq->len); 646 - } else { 647 - iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages, 648 - rreq->start, rreq->len); 649 - } 650 - 651 - ret = netfs_begin_read(rreq, true); 652 - if (sink) 653 - folio_put(sink); 381 + netfs_read_to_pagecache(rreq); 382 + ret = netfs_wait_for_read(rreq); 654 383 netfs_put_request(rreq, false, netfs_rreq_trace_put_return); 655 384 return ret < 0 ? ret : 0; 656 385 ··· 681 494 * 682 495 * Pre-read data for a write-begin request by drawing data from the cache if 683 496 * possible, or the netfs if not. Space beyond the EOF is zero-filled. 684 - * Multiple I/O requests from different sources will get munged together. If 685 - * necessary, the readahead window can be expanded in either direction to a 686 - * more convenient alighment for RPC efficiency or to make storage in the cache 687 - * feasible. 497 + * Multiple I/O requests from different sources will get munged together. 688 498 * 689 499 * The calling netfs must provide a table of operations, only one of which, 690 - * issue_op, is mandatory. 500 + * issue_read, is mandatory. 691 501 * 692 502 * The check_write_begin() operation can be provided to check for and flush 693 503 * conflicting writes once the folio is grabbed and locked. It is passed a ··· 711 527 struct folio *folio; 712 528 pgoff_t index = pos >> PAGE_SHIFT; 713 529 int ret; 714 - 715 - DEFINE_READAHEAD(ractl, file, NULL, mapping, index); 716 530 717 531 retry: 718 532 folio = __filemap_get_folio(mapping, index, FGP_WRITEBEGIN, ··· 759 577 netfs_stat(&netfs_n_rh_write_begin); 760 578 trace_netfs_read(rreq, pos, len, netfs_read_trace_write_begin); 761 579 762 - /* Expand the request to meet caching requirements and download 763 - * preferences. 
764 - */ 765 - ractl._nr_pages = folio_nr_pages(folio); 766 - netfs_rreq_expand(rreq, &ractl); 767 - 768 580 /* Set up the output buffer */ 769 - iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages, 770 - rreq->start, rreq->len); 581 + ret = netfs_create_singular_buffer(rreq, folio); 582 + if (ret < 0) 583 + goto error_put; 771 584 772 - /* We hold the folio locks, so we can drop the references */ 773 - folio_get(folio); 774 - while (readahead_folio(&ractl)) 775 - ; 776 - 777 - ret = netfs_begin_read(rreq, true); 585 + netfs_read_to_pagecache(rreq); 586 + ret = netfs_wait_for_read(rreq); 778 587 if (ret < 0) 779 588 goto error; 780 589 netfs_put_request(rreq, false, netfs_rreq_trace_put_return); ··· 825 652 trace_netfs_read(rreq, start, flen, netfs_read_trace_prefetch_for_write); 826 653 827 654 /* Set up the output buffer */ 828 - iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages, 829 - rreq->start, rreq->len); 655 + ret = netfs_create_singular_buffer(rreq, folio); 656 + if (ret < 0) 657 + goto error_put; 830 658 831 - ret = netfs_begin_read(rreq, true); 659 + folioq_mark2(rreq->buffer, 0); 660 + netfs_read_to_pagecache(rreq); 661 + ret = netfs_wait_for_read(rreq); 832 662 netfs_put_request(rreq, false, netfs_rreq_trace_put_return); 833 663 return ret; 834 664
+146 -171
fs/netfs/buffered_write.c
··· 13 13 #include <linux/pagevec.h> 14 14 #include "internal.h" 15 15 16 - /* 17 - * Determined write method. Adjust netfs_folio_traces if this is changed. 18 - */ 19 - enum netfs_how_to_modify { 20 - NETFS_FOLIO_IS_UPTODATE, /* Folio is uptodate already */ 21 - NETFS_JUST_PREFETCH, /* We have to read the folio anyway */ 22 - NETFS_WHOLE_FOLIO_MODIFY, /* We're going to overwrite the whole folio */ 23 - NETFS_MODIFY_AND_CLEAR, /* We can assume there is no data to be downloaded. */ 24 - NETFS_STREAMING_WRITE, /* Store incomplete data in non-uptodate page. */ 25 - NETFS_STREAMING_WRITE_CONT, /* Continue streaming write. */ 26 - NETFS_FLUSH_CONTENT, /* Flush incompatible content. */ 27 - }; 16 + static void __netfs_set_group(struct folio *folio, struct netfs_group *netfs_group) 17 + { 18 + if (netfs_group) 19 + folio_attach_private(folio, netfs_get_group(netfs_group)); 20 + } 28 21 29 22 static void netfs_set_group(struct folio *folio, struct netfs_group *netfs_group) 30 23 { 31 24 void *priv = folio_get_private(folio); 32 25 33 - if (netfs_group && (!priv || priv == NETFS_FOLIO_COPY_TO_CACHE)) 34 - folio_attach_private(folio, netfs_get_group(netfs_group)); 35 - else if (!netfs_group && priv == NETFS_FOLIO_COPY_TO_CACHE) 36 - folio_detach_private(folio); 37 - } 38 - 39 - /* 40 - * Decide how we should modify a folio. We might be attempting to do 41 - * write-streaming, in which case we don't want to a local RMW cycle if we can 42 - * avoid it. If we're doing local caching or content crypto, we award that 43 - * priority over avoiding RMW. If the file is open readably, then we also 44 - * assume that we may want to read what we wrote. 45 - */ 46 - static enum netfs_how_to_modify netfs_how_to_modify(struct netfs_inode *ctx, 47 - struct file *file, 48 - struct folio *folio, 49 - void *netfs_group, 50 - size_t flen, 51 - size_t offset, 52 - size_t len, 53 - bool maybe_trouble) 54 - { 55 - struct netfs_folio *finfo = netfs_folio_info(folio); 56 - struct netfs_group *group = netfs_folio_group(folio); 57 - loff_t pos = folio_pos(folio); 58 - 59 - _enter(""); 60 - 61 - if (group != netfs_group && group != NETFS_FOLIO_COPY_TO_CACHE) 62 - return NETFS_FLUSH_CONTENT; 63 - 64 - if (folio_test_uptodate(folio)) 65 - return NETFS_FOLIO_IS_UPTODATE; 66 - 67 - if (pos >= ctx->zero_point) 68 - return NETFS_MODIFY_AND_CLEAR; 69 - 70 - if (!maybe_trouble && offset == 0 && len >= flen) 71 - return NETFS_WHOLE_FOLIO_MODIFY; 72 - 73 - if (file->f_mode & FMODE_READ) 74 - goto no_write_streaming; 75 - 76 - if (netfs_is_cache_enabled(ctx)) { 77 - /* We don't want to get a streaming write on a file that loses 78 - * caching service temporarily because the backing store got 79 - * culled. 80 - */ 81 - goto no_write_streaming; 26 + if (unlikely(priv != netfs_group)) { 27 + if (netfs_group && (!priv || priv == NETFS_FOLIO_COPY_TO_CACHE)) 28 + folio_attach_private(folio, netfs_get_group(netfs_group)); 29 + else if (!netfs_group && priv == NETFS_FOLIO_COPY_TO_CACHE) 30 + folio_detach_private(folio); 82 31 } 83 - 84 - if (!finfo) 85 - return NETFS_STREAMING_WRITE; 86 - 87 - /* We can continue a streaming write only if it continues on from the 88 - * previous. If it overlaps, we must flush lest we suffer a partial 89 - * copy and disjoint dirty regions. 
90 - */ 91 - if (offset == finfo->dirty_offset + finfo->dirty_len) 92 - return NETFS_STREAMING_WRITE_CONT; 93 - return NETFS_FLUSH_CONTENT; 94 - 95 - no_write_streaming: 96 - if (finfo) { 97 - netfs_stat(&netfs_n_wh_wstream_conflict); 98 - return NETFS_FLUSH_CONTENT; 99 - } 100 - return NETFS_JUST_PREFETCH; 101 32 } 102 33 103 34 /* ··· 108 177 .range_end = iocb->ki_pos + iter->count, 109 178 }; 110 179 struct netfs_io_request *wreq = NULL; 111 - struct netfs_folio *finfo; 112 - struct folio *folio, *writethrough = NULL; 113 - enum netfs_how_to_modify howto; 114 - enum netfs_folio_trace trace; 180 + struct folio *folio = NULL, *writethrough = NULL; 115 181 unsigned int bdp_flags = (iocb->ki_flags & IOCB_NOWAIT) ? BDP_ASYNC : 0; 116 182 ssize_t written = 0, ret, ret2; 117 - loff_t i_size, pos = iocb->ki_pos, from, to; 183 + loff_t i_size, pos = iocb->ki_pos; 118 184 size_t max_chunk = mapping_max_folio_size(mapping); 119 185 bool maybe_trouble = false; 120 186 ··· 141 213 } 142 214 143 215 do { 216 + struct netfs_folio *finfo; 217 + struct netfs_group *group; 218 + unsigned long long fpos; 144 219 size_t flen; 145 220 size_t offset; /* Offset into pagecache folio */ 146 221 size_t part; /* Bytes to write to folio */ 147 222 size_t copied; /* Bytes copied from user */ 148 - 149 - ret = balance_dirty_pages_ratelimited_flags(mapping, bdp_flags); 150 - if (unlikely(ret < 0)) 151 - break; 152 223 153 224 offset = pos & (max_chunk - 1); 154 225 part = min(max_chunk - offset, iov_iter_count(iter)); ··· 174 247 } 175 248 176 249 flen = folio_size(folio); 177 - offset = pos & (flen - 1); 250 + fpos = folio_pos(folio); 251 + offset = pos - fpos; 178 252 part = min_t(size_t, flen - offset, part); 179 253 180 254 /* Wait for writeback to complete. The writeback engine owns ··· 193 265 goto error_folio_unlock; 194 266 } 195 267 196 - /* See if we need to prefetch the area we're going to modify. 197 - * We need to do this before we get a lock on the folio in case 198 - * there's more than one writer competing for the same cache 199 - * block. 268 + /* Decide how we should modify a folio. We might be attempting 269 + * to do write-streaming, in which case we don't want to a 270 + * local RMW cycle if we can avoid it. If we're doing local 271 + * caching or content crypto, we award that priority over 272 + * avoiding RMW. If the file is open readably, then we also 273 + * assume that we may want to read what we wrote. 
200 274 */ 201 - howto = netfs_how_to_modify(ctx, file, folio, netfs_group, 202 - flen, offset, part, maybe_trouble); 203 - _debug("howto %u", howto); 204 - switch (howto) { 205 - case NETFS_JUST_PREFETCH: 206 - ret = netfs_prefetch_for_write(file, folio, offset, part); 207 - if (ret < 0) { 208 - _debug("prefetch = %zd", ret); 209 - goto error_folio_unlock; 210 - } 211 - break; 212 - case NETFS_FOLIO_IS_UPTODATE: 213 - case NETFS_WHOLE_FOLIO_MODIFY: 214 - case NETFS_STREAMING_WRITE_CONT: 215 - break; 216 - case NETFS_MODIFY_AND_CLEAR: 275 + finfo = netfs_folio_info(folio); 276 + group = netfs_folio_group(folio); 277 + 278 + if (unlikely(group != netfs_group) && 279 + group != NETFS_FOLIO_COPY_TO_CACHE) 280 + goto flush_content; 281 + 282 + if (folio_test_uptodate(folio)) { 283 + if (mapping_writably_mapped(mapping)) 284 + flush_dcache_folio(folio); 285 + copied = copy_folio_from_iter_atomic(folio, offset, part, iter); 286 + if (unlikely(copied == 0)) 287 + goto copy_failed; 288 + netfs_set_group(folio, netfs_group); 289 + trace_netfs_folio(folio, netfs_folio_is_uptodate); 290 + goto copied; 291 + } 292 + 293 + /* If the page is above the zero-point then we assume that the 294 + * server would just return a block of zeros or a short read if 295 + * we try to read it. 296 + */ 297 + if (fpos >= ctx->zero_point) { 217 298 zero_user_segment(&folio->page, 0, offset); 218 - break; 219 - case NETFS_STREAMING_WRITE: 220 - ret = -EIO; 221 - if (WARN_ON(folio_get_private(folio))) 222 - goto error_folio_unlock; 223 - break; 224 - case NETFS_FLUSH_CONTENT: 225 - trace_netfs_folio(folio, netfs_flush_content); 226 - from = folio_pos(folio); 227 - to = from + folio_size(folio) - 1; 228 - folio_unlock(folio); 229 - folio_put(folio); 230 - ret = filemap_write_and_wait_range(mapping, from, to); 231 - if (ret < 0) 232 - goto error_folio_unlock; 233 - continue; 234 - } 235 - 236 - if (mapping_writably_mapped(mapping)) 237 - flush_dcache_folio(folio); 238 - 239 - copied = copy_folio_from_iter_atomic(folio, offset, part, iter); 240 - 241 - flush_dcache_folio(folio); 242 - 243 - /* Deal with a (partially) failed copy */ 244 - if (copied == 0) { 245 - ret = -EFAULT; 246 - goto error_folio_unlock; 247 - } 248 - 249 - trace = (enum netfs_folio_trace)howto; 250 - switch (howto) { 251 - case NETFS_FOLIO_IS_UPTODATE: 252 - case NETFS_JUST_PREFETCH: 253 - netfs_set_group(folio, netfs_group); 254 - break; 255 - case NETFS_MODIFY_AND_CLEAR: 299 + copied = copy_folio_from_iter_atomic(folio, offset, part, iter); 300 + if (unlikely(copied == 0)) 301 + goto copy_failed; 256 302 zero_user_segment(&folio->page, offset + copied, flen); 257 - netfs_set_group(folio, netfs_group); 303 + __netfs_set_group(folio, netfs_group); 258 304 folio_mark_uptodate(folio); 259 - break; 260 - case NETFS_WHOLE_FOLIO_MODIFY: 305 + trace_netfs_folio(folio, netfs_modify_and_clear); 306 + goto copied; 307 + } 308 + 309 + /* See if we can write a whole folio in one go. 
*/ 310 + if (!maybe_trouble && offset == 0 && part >= flen) { 311 + copied = copy_folio_from_iter_atomic(folio, offset, part, iter); 312 + if (unlikely(copied == 0)) 313 + goto copy_failed; 261 314 if (unlikely(copied < part)) { 262 315 maybe_trouble = true; 263 316 iov_iter_revert(iter, copied); ··· 246 337 folio_unlock(folio); 247 338 goto retry; 248 339 } 249 - netfs_set_group(folio, netfs_group); 340 + __netfs_set_group(folio, netfs_group); 250 341 folio_mark_uptodate(folio); 251 - break; 252 - case NETFS_STREAMING_WRITE: 253 - if (offset == 0 && copied == flen) { 254 - netfs_set_group(folio, netfs_group); 255 - folio_mark_uptodate(folio); 256 - trace = netfs_streaming_filled_page; 257 - break; 342 + trace_netfs_folio(folio, netfs_whole_folio_modify); 343 + goto copied; 344 + } 345 + 346 + /* We don't want to do a streaming write on a file that loses 347 + * caching service temporarily because the backing store got 348 + * culled and we don't really want to get a streaming write on 349 + * a file that's open for reading as ->read_folio() then has to 350 + * be able to flush it. 351 + */ 352 + if ((file->f_mode & FMODE_READ) || 353 + netfs_is_cache_enabled(ctx)) { 354 + if (finfo) { 355 + netfs_stat(&netfs_n_wh_wstream_conflict); 356 + goto flush_content; 258 357 } 358 + ret = netfs_prefetch_for_write(file, folio, offset, part); 359 + if (ret < 0) { 360 + _debug("prefetch = %zd", ret); 361 + goto error_folio_unlock; 362 + } 363 + /* Note that copy-to-cache may have been set. */ 364 + 365 + copied = copy_folio_from_iter_atomic(folio, offset, part, iter); 366 + if (unlikely(copied == 0)) 367 + goto copy_failed; 368 + netfs_set_group(folio, netfs_group); 369 + trace_netfs_folio(folio, netfs_just_prefetch); 370 + goto copied; 371 + } 372 + 373 + if (!finfo) { 374 + ret = -EIO; 375 + if (WARN_ON(folio_get_private(folio))) 376 + goto error_folio_unlock; 377 + copied = copy_folio_from_iter_atomic(folio, offset, part, iter); 378 + if (unlikely(copied == 0)) 379 + goto copy_failed; 380 + if (offset == 0 && copied == flen) { 381 + __netfs_set_group(folio, netfs_group); 382 + folio_mark_uptodate(folio); 383 + trace_netfs_folio(folio, netfs_streaming_filled_page); 384 + goto copied; 385 + } 386 + 259 387 finfo = kzalloc(sizeof(*finfo), GFP_KERNEL); 260 388 if (!finfo) { 261 389 iov_iter_revert(iter, copied); ··· 304 358 finfo->dirty_len = copied; 305 359 folio_attach_private(folio, (void *)((unsigned long)finfo | 306 360 NETFS_FOLIO_INFO)); 307 - break; 308 - case NETFS_STREAMING_WRITE_CONT: 309 - finfo = netfs_folio_info(folio); 361 + trace_netfs_folio(folio, netfs_streaming_write); 362 + goto copied; 363 + } 364 + 365 + /* We can continue a streaming write only if it continues on 366 + * from the previous. If it overlaps, we must flush lest we 367 + * suffer a partial copy and disjoint dirty regions. 
368 + */ 369 + if (offset == finfo->dirty_offset + finfo->dirty_len) { 370 + copied = copy_folio_from_iter_atomic(folio, offset, part, iter); 371 + if (unlikely(copied == 0)) 372 + goto copy_failed; 310 373 finfo->dirty_len += copied; 311 374 if (finfo->dirty_offset == 0 && finfo->dirty_len == flen) { 312 375 if (finfo->netfs_group) ··· 324 369 folio_detach_private(folio); 325 370 folio_mark_uptodate(folio); 326 371 kfree(finfo); 327 - trace = netfs_streaming_cont_filled_page; 372 + trace_netfs_folio(folio, netfs_streaming_cont_filled_page); 373 + } else { 374 + trace_netfs_folio(folio, netfs_streaming_write_cont); 328 375 } 329 - break; 330 - default: 331 - WARN(true, "Unexpected modify type %u ix=%lx\n", 332 - howto, folio->index); 333 - ret = -EIO; 334 - goto error_folio_unlock; 376 + goto copied; 335 377 } 336 378 337 - trace_netfs_folio(folio, trace); 379 + /* Incompatible write; flush the folio and try again. */ 380 + flush_content: 381 + trace_netfs_folio(folio, netfs_flush_content); 382 + folio_unlock(folio); 383 + folio_put(folio); 384 + ret = filemap_write_and_wait_range(mapping, fpos, fpos + flen - 1); 385 + if (ret < 0) 386 + goto error_folio_unlock; 387 + continue; 388 + 389 + copied: 390 + flush_dcache_folio(folio); 338 391 339 392 /* Update the inode size if we moved the EOF marker */ 340 393 pos += copied; ··· 364 401 folio_put(folio); 365 402 folio = NULL; 366 403 404 + ret = balance_dirty_pages_ratelimited_flags(mapping, bdp_flags); 405 + if (unlikely(ret < 0)) 406 + break; 407 + 367 408 cond_resched(); 368 409 } while (iov_iter_count(iter)); 369 410 370 411 out: 371 - if (likely(written) && ctx->ops->post_modify) 372 - ctx->ops->post_modify(inode); 412 + if (likely(written)) { 413 + /* Set indication that ctime and mtime got updated in case 414 + * close is deferred. 415 + */ 416 + set_bit(NETFS_ICTX_MODIFIED_ATTR, &ctx->flags); 417 + if (unlikely(ctx->ops->post_modify)) 418 + ctx->ops->post_modify(inode); 419 + } 373 420 374 421 if (unlikely(wreq)) { 375 422 ret2 = netfs_end_writethrough(wreq, &wbc, writethrough); ··· 394 421 _leave(" = %zd [%zd]", written, ret); 395 422 return written ? written : ret; 396 423 424 + copy_failed: 425 + ret = -EFAULT; 397 426 error_folio_unlock: 398 427 folio_unlock(folio); 399 428 folio_put(folio);
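Illustrative note: the rewritten netfs_perform_write() above folds the old netfs_how_to_modify() step into the copy loop, making the decisions inline in a fixed order. The user-space sketch below (hypothetical names, no kernel APIs) only captures that ordering under stated assumptions; it is not the kernel code.

/* Sketch only (user-space, hypothetical names): the order in which the
 * rewritten write path decides how to modify a locked folio.  The real
 * checks are inline in the diff above; this just records their ordering.
 */
#include <stdbool.h>

enum modify_choice {
	FLUSH_CONTENT,		/* incompatible group or disjoint dirty region */
	COPY_TO_UPTODATE,	/* folio already uptodate: copy straight in */
	MODIFY_AND_CLEAR,	/* at/after zero_point: zero the unwritten edges */
	WHOLE_FOLIO_MODIFY,	/* aligned overwrite of the entire folio */
	PREFETCH_THEN_COPY,	/* readable file or cache enabled: local RMW */
	STREAMING_WRITE,	/* record a partial-dirty region in a netfs_folio */
	STREAMING_WRITE_CONT,	/* extend an existing partial-dirty region */
};

static enum modify_choice choose(bool group_mismatch, bool uptodate,
				 bool past_zero_point, bool whole_folio,
				 bool readable_or_cached, bool has_finfo,
				 bool contiguous)
{
	if (group_mismatch)
		return FLUSH_CONTENT;
	if (uptodate)
		return COPY_TO_UPTODATE;
	if (past_zero_point)
		return MODIFY_AND_CLEAR;
	if (whole_folio)
		return WHOLE_FOLIO_MODIFY;
	if (readable_or_cached)
		return has_finfo ? FLUSH_CONTENT : PREFETCH_THEN_COPY;
	if (!has_finfo)
		return STREAMING_WRITE;
	return contiguous ? STREAMING_WRITE_CONT : FLUSH_CONTENT;
}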
+141 -6
fs/netfs/direct_read.c
··· 16 16 #include <linux/netfs.h> 17 17 #include "internal.h" 18 18 19 + static void netfs_prepare_dio_read_iterator(struct netfs_io_subrequest *subreq) 20 + { 21 + struct netfs_io_request *rreq = subreq->rreq; 22 + size_t rsize; 23 + 24 + rsize = umin(subreq->len, rreq->io_streams[0].sreq_max_len); 25 + subreq->len = rsize; 26 + 27 + if (unlikely(rreq->io_streams[0].sreq_max_segs)) { 28 + size_t limit = netfs_limit_iter(&rreq->iter, 0, rsize, 29 + rreq->io_streams[0].sreq_max_segs); 30 + 31 + if (limit < rsize) { 32 + subreq->len = limit; 33 + trace_netfs_sreq(subreq, netfs_sreq_trace_limited); 34 + } 35 + } 36 + 37 + trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); 38 + 39 + subreq->io_iter = rreq->iter; 40 + iov_iter_truncate(&subreq->io_iter, subreq->len); 41 + iov_iter_advance(&rreq->iter, subreq->len); 42 + } 43 + 44 + /* 45 + * Perform a read to a buffer from the server, slicing up the region to be read 46 + * according to the network rsize. 47 + */ 48 + static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq) 49 + { 50 + unsigned long long start = rreq->start; 51 + ssize_t size = rreq->len; 52 + int ret = 0; 53 + 54 + atomic_set(&rreq->nr_outstanding, 1); 55 + 56 + do { 57 + struct netfs_io_subrequest *subreq; 58 + ssize_t slice; 59 + 60 + subreq = netfs_alloc_subrequest(rreq); 61 + if (!subreq) { 62 + ret = -ENOMEM; 63 + break; 64 + } 65 + 66 + subreq->source = NETFS_DOWNLOAD_FROM_SERVER; 67 + subreq->start = start; 68 + subreq->len = size; 69 + 70 + atomic_inc(&rreq->nr_outstanding); 71 + spin_lock_bh(&rreq->lock); 72 + list_add_tail(&subreq->rreq_link, &rreq->subrequests); 73 + subreq->prev_donated = rreq->prev_donated; 74 + rreq->prev_donated = 0; 75 + trace_netfs_sreq(subreq, netfs_sreq_trace_added); 76 + spin_unlock_bh(&rreq->lock); 77 + 78 + netfs_stat(&netfs_n_rh_download); 79 + if (rreq->netfs_ops->prepare_read) { 80 + ret = rreq->netfs_ops->prepare_read(subreq); 81 + if (ret < 0) { 82 + atomic_dec(&rreq->nr_outstanding); 83 + netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel); 84 + break; 85 + } 86 + } 87 + 88 + netfs_prepare_dio_read_iterator(subreq); 89 + slice = subreq->len; 90 + rreq->netfs_ops->issue_read(subreq); 91 + 92 + size -= slice; 93 + start += slice; 94 + rreq->submitted += slice; 95 + 96 + if (test_bit(NETFS_RREQ_BLOCKED, &rreq->flags) && 97 + test_bit(NETFS_RREQ_NONBLOCK, &rreq->flags)) 98 + break; 99 + cond_resched(); 100 + } while (size > 0); 101 + 102 + if (atomic_dec_and_test(&rreq->nr_outstanding)) 103 + netfs_rreq_terminated(rreq, false); 104 + return ret; 105 + } 106 + 107 + /* 108 + * Perform a read to an application buffer, bypassing the pagecache and the 109 + * local disk cache. 
110 + */ 111 + static int netfs_unbuffered_read(struct netfs_io_request *rreq, bool sync) 112 + { 113 + int ret; 114 + 115 + _enter("R=%x %llx-%llx", 116 + rreq->debug_id, rreq->start, rreq->start + rreq->len - 1); 117 + 118 + if (rreq->len == 0) { 119 + pr_err("Zero-sized read [R=%x]\n", rreq->debug_id); 120 + return -EIO; 121 + } 122 + 123 + // TODO: Use bounce buffer if requested 124 + 125 + inode_dio_begin(rreq->inode); 126 + 127 + ret = netfs_dispatch_unbuffered_reads(rreq); 128 + 129 + if (!rreq->submitted) { 130 + netfs_put_request(rreq, false, netfs_rreq_trace_put_no_submit); 131 + inode_dio_end(rreq->inode); 132 + ret = 0; 133 + goto out; 134 + } 135 + 136 + if (sync) { 137 + trace_netfs_rreq(rreq, netfs_rreq_trace_wait_ip); 138 + wait_on_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS, 139 + TASK_UNINTERRUPTIBLE); 140 + 141 + ret = rreq->error; 142 + if (ret == 0 && rreq->submitted < rreq->len && 143 + rreq->origin != NETFS_DIO_READ) { 144 + trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read); 145 + ret = -EIO; 146 + } 147 + } else { 148 + ret = -EIOCBQUEUED; 149 + } 150 + 151 + out: 152 + _leave(" = %d", ret); 153 + return ret; 154 + } 155 + 19 156 /** 20 157 * netfs_unbuffered_read_iter_locked - Perform an unbuffered or direct I/O read 21 158 * @iocb: The I/O control descriptor describing the read ··· 168 31 struct netfs_io_request *rreq; 169 32 ssize_t ret; 170 33 size_t orig_count = iov_iter_count(iter); 171 - bool async = !is_sync_kiocb(iocb); 34 + bool sync = is_sync_kiocb(iocb); 172 35 173 36 _enter(""); 174 37 ··· 215 78 216 79 // TODO: Set up bounce buffer if needed 217 80 218 - if (async) 81 + if (!sync) 219 82 rreq->iocb = iocb; 220 83 221 - ret = netfs_begin_read(rreq, is_sync_kiocb(iocb)); 84 + ret = netfs_unbuffered_read(rreq, sync); 222 85 if (ret < 0) 223 86 goto out; /* May be -EIOCBQUEUED */ 224 - if (!async) { 87 + if (sync) { 225 88 // TODO: Copy from bounce buffer 226 89 iocb->ki_pos += rreq->transferred; 227 90 ret = rreq->transferred; ··· 231 94 netfs_put_request(rreq, false, netfs_rreq_trace_put_return); 232 95 if (ret > 0) 233 96 orig_count -= ret; 234 - if (ret != -EIOCBQUEUED) 235 - iov_iter_revert(iter, orig_count - iov_iter_count(iter)); 236 97 return ret; 237 98 } 238 99 EXPORT_SYMBOL(netfs_unbuffered_read_iter_locked);
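Illustrative note: netfs_dispatch_unbuffered_reads() above slices the DIO region into subrequests, each capped by io_streams[0].sreq_max_len and optionally trimmed further by sreq_max_segs and the filesystem's prepare_read(). The user-space sketch below shows only the slicing arithmetic with hypothetical sizes; it is not the kernel code.

/* Sketch only: how a direct read region is carved into subrequests, assuming
 * a hypothetical 1 MiB region and a 256 KiB per-subrequest cap (the role
 * played by io_streams[0].sreq_max_len in the patch above).
 */
#include <stdio.h>

int main(void)
{
	unsigned long long start   = 0x100000;	/* hypothetical file offset */
	unsigned long long size    = 0x100000;	/* 1 MiB left to read */
	unsigned long long max_len = 0x40000;	/* per-subrequest cap */

	while (size > 0) {
		unsigned long long slice = size < max_len ? size : max_len;

		/* In the kernel this is where a subrequest is allocated,
		 * prepare_read() may trim it and issue_read() sends it.
		 */
		printf("subreq: start=%#llx len=%#llx\n", start, slice);
		start += slice;
		size  -= slice;
	}
	return 0;
}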
+36 -7
fs/netfs/internal.h
··· 7 7 8 8 #include <linux/slab.h> 9 9 #include <linux/seq_file.h> 10 + #include <linux/folio_queue.h> 10 11 #include <linux/netfs.h> 11 12 #include <linux/fscache.h> 12 13 #include <linux/fscache-cache.h> ··· 23 22 /* 24 23 * buffered_read.c 25 24 */ 26 - void netfs_rreq_unlock_folios(struct netfs_io_request *rreq); 27 25 int netfs_prefetch_for_write(struct file *file, struct folio *folio, 28 26 size_t offset, size_t len); 29 - 30 - /* 31 - * io.c 32 - */ 33 - int netfs_begin_read(struct netfs_io_request *rreq, bool sync); 34 27 35 28 /* 36 29 * main.c ··· 58 63 /* 59 64 * misc.c 60 65 */ 66 + int netfs_buffer_append_folio(struct netfs_io_request *rreq, struct folio *folio, 67 + bool needs_put); 68 + struct folio_queue *netfs_delete_buffer_head(struct netfs_io_request *wreq); 69 + void netfs_clear_buffer(struct netfs_io_request *rreq); 70 + void netfs_reset_iter(struct netfs_io_subrequest *subreq); 61 71 62 72 /* 63 73 * objects.c ··· 82 82 { 83 83 trace_netfs_rreq_ref(rreq->debug_id, refcount_read(&rreq->ref), what); 84 84 } 85 + 86 + /* 87 + * read_collect.c 88 + */ 89 + void netfs_read_termination_worker(struct work_struct *work); 90 + void netfs_rreq_terminated(struct netfs_io_request *rreq, bool was_async); 91 + 92 + /* 93 + * read_pgpriv2.c 94 + */ 95 + void netfs_pgpriv2_mark_copy_to_cache(struct netfs_io_subrequest *subreq, 96 + struct netfs_io_request *rreq, 97 + struct folio_queue *folioq, 98 + int slot); 99 + void netfs_pgpriv2_write_to_the_cache(struct netfs_io_request *rreq); 100 + bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io_request *wreq); 101 + 102 + /* 103 + * read_retry.c 104 + */ 105 + void netfs_retry_reads(struct netfs_io_request *rreq); 106 + void netfs_unlock_abandoned_read_pages(struct netfs_io_request *rreq); 85 107 86 108 /* 87 109 * stats.c ··· 132 110 extern atomic_t netfs_n_wh_writethrough; 133 111 extern atomic_t netfs_n_wh_dio_write; 134 112 extern atomic_t netfs_n_wh_writepages; 113 + extern atomic_t netfs_n_wh_copy_to_cache; 135 114 extern atomic_t netfs_n_wh_wstream_conflict; 136 115 extern atomic_t netfs_n_wh_upload; 137 116 extern atomic_t netfs_n_wh_upload_done; ··· 140 117 extern atomic_t netfs_n_wh_write; 141 118 extern atomic_t netfs_n_wh_write_done; 142 119 extern atomic_t netfs_n_wh_write_failed; 120 + extern atomic_t netfs_n_wb_lock_skip; 121 + extern atomic_t netfs_n_wb_lock_wait; 122 + extern atomic_t netfs_n_folioq; 143 123 144 124 int netfs_stats_show(struct seq_file *m, void *v); 145 125 ··· 176 150 loff_t start, 177 151 enum netfs_io_origin origin); 178 152 void netfs_reissue_write(struct netfs_io_stream *stream, 179 - struct netfs_io_subrequest *subreq); 153 + struct netfs_io_subrequest *subreq, 154 + struct iov_iter *source); 155 + void netfs_issue_write(struct netfs_io_request *wreq, 156 + struct netfs_io_stream *stream); 180 157 int netfs_advance_write(struct netfs_io_request *wreq, 181 158 struct netfs_io_stream *stream, 182 159 loff_t start, size_t len, bool to_eof);
-804
fs/netfs/io.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* Network filesystem high-level read support. 3 - * 4 - * Copyright (C) 2021 Red Hat, Inc. All Rights Reserved. 5 - * Written by David Howells (dhowells@redhat.com) 6 - */ 7 - 8 - #include <linux/module.h> 9 - #include <linux/export.h> 10 - #include <linux/fs.h> 11 - #include <linux/mm.h> 12 - #include <linux/pagemap.h> 13 - #include <linux/slab.h> 14 - #include <linux/uio.h> 15 - #include <linux/sched/mm.h> 16 - #include <linux/task_io_accounting_ops.h> 17 - #include "internal.h" 18 - 19 - /* 20 - * Clear the unread part of an I/O request. 21 - */ 22 - static void netfs_clear_unread(struct netfs_io_subrequest *subreq) 23 - { 24 - iov_iter_zero(iov_iter_count(&subreq->io_iter), &subreq->io_iter); 25 - } 26 - 27 - static void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error, 28 - bool was_async) 29 - { 30 - struct netfs_io_subrequest *subreq = priv; 31 - 32 - netfs_subreq_terminated(subreq, transferred_or_error, was_async); 33 - } 34 - 35 - /* 36 - * Issue a read against the cache. 37 - * - Eats the caller's ref on subreq. 38 - */ 39 - static void netfs_read_from_cache(struct netfs_io_request *rreq, 40 - struct netfs_io_subrequest *subreq, 41 - enum netfs_read_from_hole read_hole) 42 - { 43 - struct netfs_cache_resources *cres = &rreq->cache_resources; 44 - 45 - netfs_stat(&netfs_n_rh_read); 46 - cres->ops->read(cres, subreq->start, &subreq->io_iter, read_hole, 47 - netfs_cache_read_terminated, subreq); 48 - } 49 - 50 - /* 51 - * Fill a subrequest region with zeroes. 52 - */ 53 - static void netfs_fill_with_zeroes(struct netfs_io_request *rreq, 54 - struct netfs_io_subrequest *subreq) 55 - { 56 - netfs_stat(&netfs_n_rh_zero); 57 - __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); 58 - netfs_subreq_terminated(subreq, 0, false); 59 - } 60 - 61 - /* 62 - * Ask the netfs to issue a read request to the server for us. 63 - * 64 - * The netfs is expected to read from subreq->pos + subreq->transferred to 65 - * subreq->pos + subreq->len - 1. It may not backtrack and write data into the 66 - * buffer prior to the transferred point as it might clobber dirty data 67 - * obtained from the cache. 68 - * 69 - * Alternatively, the netfs is allowed to indicate one of two things: 70 - * 71 - * - NETFS_SREQ_SHORT_READ: A short read - it will get called again to try and 72 - * make progress. 73 - * 74 - * - NETFS_SREQ_CLEAR_TAIL: A short read - the rest of the buffer will be 75 - * cleared. 76 - */ 77 - static void netfs_read_from_server(struct netfs_io_request *rreq, 78 - struct netfs_io_subrequest *subreq) 79 - { 80 - netfs_stat(&netfs_n_rh_download); 81 - 82 - if (rreq->origin != NETFS_DIO_READ && 83 - iov_iter_count(&subreq->io_iter) != subreq->len - subreq->transferred) 84 - pr_warn("R=%08x[%u] ITER PRE-MISMATCH %zx != %zx-%zx %lx\n", 85 - rreq->debug_id, subreq->debug_index, 86 - iov_iter_count(&subreq->io_iter), subreq->len, 87 - subreq->transferred, subreq->flags); 88 - rreq->netfs_ops->issue_read(subreq); 89 - } 90 - 91 - /* 92 - * Release those waiting. 93 - */ 94 - static void netfs_rreq_completed(struct netfs_io_request *rreq, bool was_async) 95 - { 96 - trace_netfs_rreq(rreq, netfs_rreq_trace_done); 97 - netfs_clear_subrequests(rreq, was_async); 98 - netfs_put_request(rreq, was_async, netfs_rreq_trace_put_complete); 99 - } 100 - 101 - /* 102 - * [DEPRECATED] Deal with the completion of writing the data to the cache. We 103 - * have to clear the PG_fscache bits on the folios involved and release the 104 - * caller's ref. 
105 - * 106 - * May be called in softirq mode and we inherit a ref from the caller. 107 - */ 108 - static void netfs_rreq_unmark_after_write(struct netfs_io_request *rreq, 109 - bool was_async) 110 - { 111 - struct netfs_io_subrequest *subreq; 112 - struct folio *folio; 113 - pgoff_t unlocked = 0; 114 - bool have_unlocked = false; 115 - 116 - rcu_read_lock(); 117 - 118 - list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { 119 - XA_STATE(xas, &rreq->mapping->i_pages, subreq->start / PAGE_SIZE); 120 - 121 - xas_for_each(&xas, folio, (subreq->start + subreq->len - 1) / PAGE_SIZE) { 122 - if (xas_retry(&xas, folio)) 123 - continue; 124 - 125 - /* We might have multiple writes from the same huge 126 - * folio, but we mustn't unlock a folio more than once. 127 - */ 128 - if (have_unlocked && folio->index <= unlocked) 129 - continue; 130 - unlocked = folio_next_index(folio) - 1; 131 - trace_netfs_folio(folio, netfs_folio_trace_end_copy); 132 - folio_end_private_2(folio); 133 - have_unlocked = true; 134 - } 135 - } 136 - 137 - rcu_read_unlock(); 138 - netfs_rreq_completed(rreq, was_async); 139 - } 140 - 141 - static void netfs_rreq_copy_terminated(void *priv, ssize_t transferred_or_error, 142 - bool was_async) /* [DEPRECATED] */ 143 - { 144 - struct netfs_io_subrequest *subreq = priv; 145 - struct netfs_io_request *rreq = subreq->rreq; 146 - 147 - if (IS_ERR_VALUE(transferred_or_error)) { 148 - netfs_stat(&netfs_n_rh_write_failed); 149 - trace_netfs_failure(rreq, subreq, transferred_or_error, 150 - netfs_fail_copy_to_cache); 151 - } else { 152 - netfs_stat(&netfs_n_rh_write_done); 153 - } 154 - 155 - trace_netfs_sreq(subreq, netfs_sreq_trace_write_term); 156 - 157 - /* If we decrement nr_copy_ops to 0, the ref belongs to us. */ 158 - if (atomic_dec_and_test(&rreq->nr_copy_ops)) 159 - netfs_rreq_unmark_after_write(rreq, was_async); 160 - 161 - netfs_put_subrequest(subreq, was_async, netfs_sreq_trace_put_terminated); 162 - } 163 - 164 - /* 165 - * [DEPRECATED] Perform any outstanding writes to the cache. We inherit a ref 166 - * from the caller. 167 - */ 168 - static void netfs_rreq_do_write_to_cache(struct netfs_io_request *rreq) 169 - { 170 - struct netfs_cache_resources *cres = &rreq->cache_resources; 171 - struct netfs_io_subrequest *subreq, *next, *p; 172 - struct iov_iter iter; 173 - int ret; 174 - 175 - trace_netfs_rreq(rreq, netfs_rreq_trace_copy); 176 - 177 - /* We don't want terminating writes trying to wake us up whilst we're 178 - * still going through the list. 
179 - */ 180 - atomic_inc(&rreq->nr_copy_ops); 181 - 182 - list_for_each_entry_safe(subreq, p, &rreq->subrequests, rreq_link) { 183 - if (!test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) { 184 - list_del_init(&subreq->rreq_link); 185 - netfs_put_subrequest(subreq, false, 186 - netfs_sreq_trace_put_no_copy); 187 - } 188 - } 189 - 190 - list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { 191 - /* Amalgamate adjacent writes */ 192 - while (!list_is_last(&subreq->rreq_link, &rreq->subrequests)) { 193 - next = list_next_entry(subreq, rreq_link); 194 - if (next->start != subreq->start + subreq->len) 195 - break; 196 - subreq->len += next->len; 197 - list_del_init(&next->rreq_link); 198 - netfs_put_subrequest(next, false, 199 - netfs_sreq_trace_put_merged); 200 - } 201 - 202 - ret = cres->ops->prepare_write(cres, &subreq->start, &subreq->len, 203 - subreq->len, rreq->i_size, true); 204 - if (ret < 0) { 205 - trace_netfs_failure(rreq, subreq, ret, netfs_fail_prepare_write); 206 - trace_netfs_sreq(subreq, netfs_sreq_trace_write_skip); 207 - continue; 208 - } 209 - 210 - iov_iter_xarray(&iter, ITER_SOURCE, &rreq->mapping->i_pages, 211 - subreq->start, subreq->len); 212 - 213 - atomic_inc(&rreq->nr_copy_ops); 214 - netfs_stat(&netfs_n_rh_write); 215 - netfs_get_subrequest(subreq, netfs_sreq_trace_get_copy_to_cache); 216 - trace_netfs_sreq(subreq, netfs_sreq_trace_write); 217 - cres->ops->write(cres, subreq->start, &iter, 218 - netfs_rreq_copy_terminated, subreq); 219 - } 220 - 221 - /* If we decrement nr_copy_ops to 0, the usage ref belongs to us. */ 222 - if (atomic_dec_and_test(&rreq->nr_copy_ops)) 223 - netfs_rreq_unmark_after_write(rreq, false); 224 - } 225 - 226 - static void netfs_rreq_write_to_cache_work(struct work_struct *work) /* [DEPRECATED] */ 227 - { 228 - struct netfs_io_request *rreq = 229 - container_of(work, struct netfs_io_request, work); 230 - 231 - netfs_rreq_do_write_to_cache(rreq); 232 - } 233 - 234 - static void netfs_rreq_write_to_cache(struct netfs_io_request *rreq) /* [DEPRECATED] */ 235 - { 236 - rreq->work.func = netfs_rreq_write_to_cache_work; 237 - if (!queue_work(system_unbound_wq, &rreq->work)) 238 - BUG(); 239 - } 240 - 241 - /* 242 - * Handle a short read. 243 - */ 244 - static void netfs_rreq_short_read(struct netfs_io_request *rreq, 245 - struct netfs_io_subrequest *subreq) 246 - { 247 - __clear_bit(NETFS_SREQ_SHORT_IO, &subreq->flags); 248 - __set_bit(NETFS_SREQ_SEEK_DATA_READ, &subreq->flags); 249 - 250 - netfs_stat(&netfs_n_rh_short_read); 251 - trace_netfs_sreq(subreq, netfs_sreq_trace_resubmit_short); 252 - 253 - netfs_get_subrequest(subreq, netfs_sreq_trace_get_short_read); 254 - atomic_inc(&rreq->nr_outstanding); 255 - if (subreq->source == NETFS_READ_FROM_CACHE) 256 - netfs_read_from_cache(rreq, subreq, NETFS_READ_HOLE_CLEAR); 257 - else 258 - netfs_read_from_server(rreq, subreq); 259 - } 260 - 261 - /* 262 - * Reset the subrequest iterator prior to resubmission. 
263 - */ 264 - static void netfs_reset_subreq_iter(struct netfs_io_request *rreq, 265 - struct netfs_io_subrequest *subreq) 266 - { 267 - size_t remaining = subreq->len - subreq->transferred; 268 - size_t count = iov_iter_count(&subreq->io_iter); 269 - 270 - if (count == remaining) 271 - return; 272 - 273 - _debug("R=%08x[%u] ITER RESUB-MISMATCH %zx != %zx-%zx-%llx %x", 274 - rreq->debug_id, subreq->debug_index, 275 - iov_iter_count(&subreq->io_iter), subreq->transferred, 276 - subreq->len, rreq->i_size, 277 - subreq->io_iter.iter_type); 278 - 279 - if (count < remaining) 280 - iov_iter_revert(&subreq->io_iter, remaining - count); 281 - else 282 - iov_iter_advance(&subreq->io_iter, count - remaining); 283 - } 284 - 285 - /* 286 - * Resubmit any short or failed operations. Returns true if we got the rreq 287 - * ref back. 288 - */ 289 - static bool netfs_rreq_perform_resubmissions(struct netfs_io_request *rreq) 290 - { 291 - struct netfs_io_subrequest *subreq; 292 - 293 - WARN_ON(in_interrupt()); 294 - 295 - trace_netfs_rreq(rreq, netfs_rreq_trace_resubmit); 296 - 297 - /* We don't want terminating submissions trying to wake us up whilst 298 - * we're still going through the list. 299 - */ 300 - atomic_inc(&rreq->nr_outstanding); 301 - 302 - __clear_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags); 303 - list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { 304 - if (subreq->error) { 305 - if (subreq->source != NETFS_READ_FROM_CACHE) 306 - break; 307 - subreq->source = NETFS_DOWNLOAD_FROM_SERVER; 308 - subreq->error = 0; 309 - __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); 310 - netfs_stat(&netfs_n_rh_download_instead); 311 - trace_netfs_sreq(subreq, netfs_sreq_trace_download_instead); 312 - netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); 313 - atomic_inc(&rreq->nr_outstanding); 314 - netfs_reset_subreq_iter(rreq, subreq); 315 - netfs_read_from_server(rreq, subreq); 316 - } else if (test_bit(NETFS_SREQ_SHORT_IO, &subreq->flags)) { 317 - __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); 318 - netfs_reset_subreq_iter(rreq, subreq); 319 - netfs_rreq_short_read(rreq, subreq); 320 - } 321 - } 322 - 323 - /* If we decrement nr_outstanding to 0, the usage ref belongs to us. */ 324 - if (atomic_dec_and_test(&rreq->nr_outstanding)) 325 - return true; 326 - 327 - wake_up_var(&rreq->nr_outstanding); 328 - return false; 329 - } 330 - 331 - /* 332 - * Check to see if the data read is still valid. 333 - */ 334 - static void netfs_rreq_is_still_valid(struct netfs_io_request *rreq) 335 - { 336 - struct netfs_io_subrequest *subreq; 337 - 338 - if (!rreq->netfs_ops->is_still_valid || 339 - rreq->netfs_ops->is_still_valid(rreq)) 340 - return; 341 - 342 - list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { 343 - if (subreq->source == NETFS_READ_FROM_CACHE) { 344 - subreq->error = -ESTALE; 345 - __set_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags); 346 - } 347 - } 348 - } 349 - 350 - /* 351 - * Determine how much we can admit to having read from a DIO read. 352 - */ 353 - static void netfs_rreq_assess_dio(struct netfs_io_request *rreq) 354 - { 355 - struct netfs_io_subrequest *subreq; 356 - unsigned int i; 357 - size_t transferred = 0; 358 - 359 - for (i = 0; i < rreq->direct_bv_count; i++) { 360 - flush_dcache_page(rreq->direct_bv[i].bv_page); 361 - // TODO: cifs marks pages in the destination buffer 362 - // dirty under some circumstances after a read. Do we 363 - // need to do that too? 
364 - set_page_dirty(rreq->direct_bv[i].bv_page); 365 - } 366 - 367 - list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { 368 - if (subreq->error || subreq->transferred == 0) 369 - break; 370 - transferred += subreq->transferred; 371 - if (subreq->transferred < subreq->len || 372 - test_bit(NETFS_SREQ_HIT_EOF, &subreq->flags)) 373 - break; 374 - } 375 - 376 - for (i = 0; i < rreq->direct_bv_count; i++) 377 - flush_dcache_page(rreq->direct_bv[i].bv_page); 378 - 379 - rreq->transferred = transferred; 380 - task_io_account_read(transferred); 381 - 382 - if (rreq->iocb) { 383 - rreq->iocb->ki_pos += transferred; 384 - if (rreq->iocb->ki_complete) 385 - rreq->iocb->ki_complete( 386 - rreq->iocb, rreq->error ? rreq->error : transferred); 387 - } 388 - if (rreq->netfs_ops->done) 389 - rreq->netfs_ops->done(rreq); 390 - inode_dio_end(rreq->inode); 391 - } 392 - 393 - /* 394 - * Assess the state of a read request and decide what to do next. 395 - * 396 - * Note that we could be in an ordinary kernel thread, on a workqueue or in 397 - * softirq context at this point. We inherit a ref from the caller. 398 - */ 399 - static void netfs_rreq_assess(struct netfs_io_request *rreq, bool was_async) 400 - { 401 - trace_netfs_rreq(rreq, netfs_rreq_trace_assess); 402 - 403 - again: 404 - netfs_rreq_is_still_valid(rreq); 405 - 406 - if (!test_bit(NETFS_RREQ_FAILED, &rreq->flags) && 407 - test_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags)) { 408 - if (netfs_rreq_perform_resubmissions(rreq)) 409 - goto again; 410 - return; 411 - } 412 - 413 - if (rreq->origin != NETFS_DIO_READ) 414 - netfs_rreq_unlock_folios(rreq); 415 - else 416 - netfs_rreq_assess_dio(rreq); 417 - 418 - trace_netfs_rreq(rreq, netfs_rreq_trace_wake_ip); 419 - clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags); 420 - wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS); 421 - 422 - if (test_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags) && 423 - test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags)) 424 - return netfs_rreq_write_to_cache(rreq); 425 - 426 - netfs_rreq_completed(rreq, was_async); 427 - } 428 - 429 - static void netfs_rreq_work(struct work_struct *work) 430 - { 431 - struct netfs_io_request *rreq = 432 - container_of(work, struct netfs_io_request, work); 433 - netfs_rreq_assess(rreq, false); 434 - } 435 - 436 - /* 437 - * Handle the completion of all outstanding I/O operations on a read request. 438 - * We inherit a ref from the caller. 439 - */ 440 - static void netfs_rreq_terminated(struct netfs_io_request *rreq, 441 - bool was_async) 442 - { 443 - if (test_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags) && 444 - was_async) { 445 - if (!queue_work(system_unbound_wq, &rreq->work)) 446 - BUG(); 447 - } else { 448 - netfs_rreq_assess(rreq, was_async); 449 - } 450 - } 451 - 452 - /** 453 - * netfs_subreq_terminated - Note the termination of an I/O operation. 454 - * @subreq: The I/O request that has terminated. 455 - * @transferred_or_error: The amount of data transferred or an error code. 456 - * @was_async: The termination was asynchronous 457 - * 458 - * This tells the read helper that a contributory I/O operation has terminated, 459 - * one way or another, and that it should integrate the results. 460 - * 461 - * The caller indicates in @transferred_or_error the outcome of the operation, 462 - * supplying a positive value to indicate the number of bytes transferred, 0 to 463 - * indicate a failure to transfer anything that should be retried or a negative 464 - * error code. 
The helper will look after reissuing I/O operations as 465 - * appropriate and writing downloaded data to the cache. 466 - * 467 - * If @was_async is true, the caller might be running in softirq or interrupt 468 - * context and we can't sleep. 469 - */ 470 - void netfs_subreq_terminated(struct netfs_io_subrequest *subreq, 471 - ssize_t transferred_or_error, 472 - bool was_async) 473 - { 474 - struct netfs_io_request *rreq = subreq->rreq; 475 - int u; 476 - 477 - _enter("R=%x[%x]{%llx,%lx},%zd", 478 - rreq->debug_id, subreq->debug_index, 479 - subreq->start, subreq->flags, transferred_or_error); 480 - 481 - switch (subreq->source) { 482 - case NETFS_READ_FROM_CACHE: 483 - netfs_stat(&netfs_n_rh_read_done); 484 - break; 485 - case NETFS_DOWNLOAD_FROM_SERVER: 486 - netfs_stat(&netfs_n_rh_download_done); 487 - break; 488 - default: 489 - break; 490 - } 491 - 492 - if (IS_ERR_VALUE(transferred_or_error)) { 493 - subreq->error = transferred_or_error; 494 - trace_netfs_failure(rreq, subreq, transferred_or_error, 495 - netfs_fail_read); 496 - goto failed; 497 - } 498 - 499 - if (WARN(transferred_or_error > subreq->len - subreq->transferred, 500 - "Subreq overread: R%x[%x] %zd > %zu - %zu", 501 - rreq->debug_id, subreq->debug_index, 502 - transferred_or_error, subreq->len, subreq->transferred)) 503 - transferred_or_error = subreq->len - subreq->transferred; 504 - 505 - subreq->error = 0; 506 - subreq->transferred += transferred_or_error; 507 - if (subreq->transferred < subreq->len && 508 - !test_bit(NETFS_SREQ_HIT_EOF, &subreq->flags)) 509 - goto incomplete; 510 - 511 - complete: 512 - __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); 513 - if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) 514 - set_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags); 515 - 516 - out: 517 - trace_netfs_sreq(subreq, netfs_sreq_trace_terminated); 518 - 519 - /* If we decrement nr_outstanding to 0, the ref belongs to us. 
*/ 520 - u = atomic_dec_return(&rreq->nr_outstanding); 521 - if (u == 0) 522 - netfs_rreq_terminated(rreq, was_async); 523 - else if (u == 1) 524 - wake_up_var(&rreq->nr_outstanding); 525 - 526 - netfs_put_subrequest(subreq, was_async, netfs_sreq_trace_put_terminated); 527 - return; 528 - 529 - incomplete: 530 - if (test_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags)) { 531 - netfs_clear_unread(subreq); 532 - subreq->transferred = subreq->len; 533 - goto complete; 534 - } 535 - 536 - if (transferred_or_error == 0) { 537 - if (__test_and_set_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags)) { 538 - if (rreq->origin != NETFS_DIO_READ) 539 - subreq->error = -ENODATA; 540 - goto failed; 541 - } 542 - } else { 543 - __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); 544 - } 545 - 546 - __set_bit(NETFS_SREQ_SHORT_IO, &subreq->flags); 547 - set_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags); 548 - goto out; 549 - 550 - failed: 551 - if (subreq->source == NETFS_READ_FROM_CACHE) { 552 - netfs_stat(&netfs_n_rh_read_failed); 553 - set_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags); 554 - } else { 555 - netfs_stat(&netfs_n_rh_download_failed); 556 - set_bit(NETFS_RREQ_FAILED, &rreq->flags); 557 - rreq->error = subreq->error; 558 - } 559 - goto out; 560 - } 561 - EXPORT_SYMBOL(netfs_subreq_terminated); 562 - 563 - static enum netfs_io_source netfs_cache_prepare_read(struct netfs_io_subrequest *subreq, 564 - loff_t i_size) 565 - { 566 - struct netfs_io_request *rreq = subreq->rreq; 567 - struct netfs_cache_resources *cres = &rreq->cache_resources; 568 - 569 - if (cres->ops) 570 - return cres->ops->prepare_read(subreq, i_size); 571 - if (subreq->start >= rreq->i_size) 572 - return NETFS_FILL_WITH_ZEROES; 573 - return NETFS_DOWNLOAD_FROM_SERVER; 574 - } 575 - 576 - /* 577 - * Work out what sort of subrequest the next one will be. 578 - */ 579 - static enum netfs_io_source 580 - netfs_rreq_prepare_read(struct netfs_io_request *rreq, 581 - struct netfs_io_subrequest *subreq, 582 - struct iov_iter *io_iter) 583 - { 584 - enum netfs_io_source source = NETFS_DOWNLOAD_FROM_SERVER; 585 - struct netfs_inode *ictx = netfs_inode(rreq->inode); 586 - size_t lsize; 587 - 588 - _enter("%llx-%llx,%llx", subreq->start, subreq->start + subreq->len, rreq->i_size); 589 - 590 - if (rreq->origin != NETFS_DIO_READ) { 591 - source = netfs_cache_prepare_read(subreq, rreq->i_size); 592 - if (source == NETFS_INVALID_READ) 593 - goto out; 594 - } 595 - 596 - if (source == NETFS_DOWNLOAD_FROM_SERVER) { 597 - /* Call out to the netfs to let it shrink the request to fit 598 - * its own I/O sizes and boundaries. If it shinks it here, it 599 - * will be called again to make simultaneous calls; if it wants 600 - * to make serial calls, it can indicate a short read and then 601 - * we will call it again. 602 - */ 603 - if (rreq->origin != NETFS_DIO_READ) { 604 - if (subreq->start >= ictx->zero_point) { 605 - source = NETFS_FILL_WITH_ZEROES; 606 - goto set; 607 - } 608 - if (subreq->len > ictx->zero_point - subreq->start) 609 - subreq->len = ictx->zero_point - subreq->start; 610 - 611 - /* We limit buffered reads to the EOF, but let the 612 - * server deal with larger-than-EOF DIO/unbuffered 613 - * reads. 
614 - */ 615 - if (subreq->len > rreq->i_size - subreq->start) 616 - subreq->len = rreq->i_size - subreq->start; 617 - } 618 - if (rreq->rsize && subreq->len > rreq->rsize) 619 - subreq->len = rreq->rsize; 620 - 621 - if (rreq->netfs_ops->clamp_length && 622 - !rreq->netfs_ops->clamp_length(subreq)) { 623 - source = NETFS_INVALID_READ; 624 - goto out; 625 - } 626 - 627 - if (subreq->max_nr_segs) { 628 - lsize = netfs_limit_iter(io_iter, 0, subreq->len, 629 - subreq->max_nr_segs); 630 - if (subreq->len > lsize) { 631 - subreq->len = lsize; 632 - trace_netfs_sreq(subreq, netfs_sreq_trace_limited); 633 - } 634 - } 635 - } 636 - 637 - set: 638 - if (subreq->len > rreq->len) 639 - pr_warn("R=%08x[%u] SREQ>RREQ %zx > %llx\n", 640 - rreq->debug_id, subreq->debug_index, 641 - subreq->len, rreq->len); 642 - 643 - if (WARN_ON(subreq->len == 0)) { 644 - source = NETFS_INVALID_READ; 645 - goto out; 646 - } 647 - 648 - subreq->source = source; 649 - trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); 650 - 651 - subreq->io_iter = *io_iter; 652 - iov_iter_truncate(&subreq->io_iter, subreq->len); 653 - iov_iter_advance(io_iter, subreq->len); 654 - out: 655 - subreq->source = source; 656 - trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); 657 - return source; 658 - } 659 - 660 - /* 661 - * Slice off a piece of a read request and submit an I/O request for it. 662 - */ 663 - static bool netfs_rreq_submit_slice(struct netfs_io_request *rreq, 664 - struct iov_iter *io_iter) 665 - { 666 - struct netfs_io_subrequest *subreq; 667 - enum netfs_io_source source; 668 - 669 - subreq = netfs_alloc_subrequest(rreq); 670 - if (!subreq) 671 - return false; 672 - 673 - subreq->start = rreq->start + rreq->submitted; 674 - subreq->len = io_iter->count; 675 - 676 - _debug("slice %llx,%zx,%llx", subreq->start, subreq->len, rreq->submitted); 677 - list_add_tail(&subreq->rreq_link, &rreq->subrequests); 678 - 679 - /* Call out to the cache to find out what it can do with the remaining 680 - * subset. It tells us in subreq->flags what it decided should be done 681 - * and adjusts subreq->len down if the subset crosses a cache boundary. 682 - * 683 - * Then when we hand the subset, it can choose to take a subset of that 684 - * (the starts must coincide), in which case, we go around the loop 685 - * again and ask it to download the next piece. 686 - */ 687 - source = netfs_rreq_prepare_read(rreq, subreq, io_iter); 688 - if (source == NETFS_INVALID_READ) 689 - goto subreq_failed; 690 - 691 - atomic_inc(&rreq->nr_outstanding); 692 - 693 - rreq->submitted += subreq->len; 694 - 695 - trace_netfs_sreq(subreq, netfs_sreq_trace_submit); 696 - switch (source) { 697 - case NETFS_FILL_WITH_ZEROES: 698 - netfs_fill_with_zeroes(rreq, subreq); 699 - break; 700 - case NETFS_DOWNLOAD_FROM_SERVER: 701 - netfs_read_from_server(rreq, subreq); 702 - break; 703 - case NETFS_READ_FROM_CACHE: 704 - netfs_read_from_cache(rreq, subreq, NETFS_READ_HOLE_IGNORE); 705 - break; 706 - default: 707 - BUG(); 708 - } 709 - 710 - return true; 711 - 712 - subreq_failed: 713 - rreq->error = subreq->error; 714 - netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_failed); 715 - return false; 716 - } 717 - 718 - /* 719 - * Begin the process of reading in a chunk of data, where that data may be 720 - * stitched together from multiple sources, including multiple servers and the 721 - * local cache. 
722 - */ 723 - int netfs_begin_read(struct netfs_io_request *rreq, bool sync) 724 - { 725 - struct iov_iter io_iter; 726 - int ret; 727 - 728 - _enter("R=%x %llx-%llx", 729 - rreq->debug_id, rreq->start, rreq->start + rreq->len - 1); 730 - 731 - if (rreq->len == 0) { 732 - pr_err("Zero-sized read [R=%x]\n", rreq->debug_id); 733 - return -EIO; 734 - } 735 - 736 - if (rreq->origin == NETFS_DIO_READ) 737 - inode_dio_begin(rreq->inode); 738 - 739 - // TODO: Use bounce buffer if requested 740 - rreq->io_iter = rreq->iter; 741 - 742 - INIT_WORK(&rreq->work, netfs_rreq_work); 743 - 744 - /* Chop the read into slices according to what the cache and the netfs 745 - * want and submit each one. 746 - */ 747 - netfs_get_request(rreq, netfs_rreq_trace_get_for_outstanding); 748 - atomic_set(&rreq->nr_outstanding, 1); 749 - io_iter = rreq->io_iter; 750 - do { 751 - _debug("submit %llx + %llx >= %llx", 752 - rreq->start, rreq->submitted, rreq->i_size); 753 - if (!netfs_rreq_submit_slice(rreq, &io_iter)) 754 - break; 755 - if (test_bit(NETFS_SREQ_NO_PROGRESS, &rreq->flags)) 756 - break; 757 - if (test_bit(NETFS_RREQ_BLOCKED, &rreq->flags) && 758 - test_bit(NETFS_RREQ_NONBLOCK, &rreq->flags)) 759 - break; 760 - 761 - } while (rreq->submitted < rreq->len); 762 - 763 - if (!rreq->submitted) { 764 - netfs_put_request(rreq, false, netfs_rreq_trace_put_no_submit); 765 - if (rreq->origin == NETFS_DIO_READ) 766 - inode_dio_end(rreq->inode); 767 - ret = 0; 768 - goto out; 769 - } 770 - 771 - if (sync) { 772 - /* Keep nr_outstanding incremented so that the ref always 773 - * belongs to us, and the service code isn't punted off to a 774 - * random thread pool to process. Note that this might start 775 - * further work, such as writing to the cache. 776 - */ 777 - wait_var_event(&rreq->nr_outstanding, 778 - atomic_read(&rreq->nr_outstanding) == 1); 779 - if (atomic_dec_and_test(&rreq->nr_outstanding)) 780 - netfs_rreq_assess(rreq, false); 781 - 782 - trace_netfs_rreq(rreq, netfs_rreq_trace_wait_ip); 783 - wait_on_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS, 784 - TASK_UNINTERRUPTIBLE); 785 - 786 - ret = rreq->error; 787 - if (ret == 0) { 788 - if (rreq->origin == NETFS_DIO_READ) { 789 - ret = rreq->transferred; 790 - } else if (rreq->submitted < rreq->len) { 791 - trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read); 792 - ret = -EIO; 793 - } 794 - } 795 - } else { 796 - /* If we decrement nr_outstanding to 0, the ref belongs to us. */ 797 - if (atomic_dec_and_test(&rreq->nr_outstanding)) 798 - netfs_rreq_assess(rreq, false); 799 - ret = -EIOCBQUEUED; 800 - } 801 - 802 - out: 803 - return ret; 804 - }
+50
fs/netfs/iterator.c
··· 188 188 return min(span, max_size); 189 189 } 190 190 191 + /* 192 + * Select the span of a folio queue iterator we're going to use. Limit it by 193 + * both maximum size and maximum number of segments. Returns the size of the 194 + * span in bytes. 195 + */ 196 + static size_t netfs_limit_folioq(const struct iov_iter *iter, size_t start_offset, 197 + size_t max_size, size_t max_segs) 198 + { 199 + const struct folio_queue *folioq = iter->folioq; 200 + unsigned int nsegs = 0; 201 + unsigned int slot = iter->folioq_slot; 202 + size_t span = 0, n = iter->count; 203 + 204 + if (WARN_ON(!iov_iter_is_folioq(iter)) || 205 + WARN_ON(start_offset > n) || 206 + n == 0) 207 + return 0; 208 + max_size = umin(max_size, n - start_offset); 209 + 210 + if (slot >= folioq_nr_slots(folioq)) { 211 + folioq = folioq->next; 212 + slot = 0; 213 + } 214 + 215 + start_offset += iter->iov_offset; 216 + do { 217 + size_t flen = folioq_folio_size(folioq, slot); 218 + 219 + if (start_offset < flen) { 220 + span += flen - start_offset; 221 + nsegs++; 222 + start_offset = 0; 223 + } else { 224 + start_offset -= flen; 225 + } 226 + if (span >= max_size || nsegs >= max_segs) 227 + break; 228 + 229 + slot++; 230 + if (slot >= folioq_nr_slots(folioq)) { 231 + folioq = folioq->next; 232 + slot = 0; 233 + } 234 + } while (folioq); 235 + 236 + return umin(span, max_size); 237 + } 238 + 191 239 size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset, 192 240 size_t max_size, size_t max_segs) 193 241 { 242 + if (iov_iter_is_folioq(iter)) 243 + return netfs_limit_folioq(iter, start_offset, max_size, max_segs); 194 244 if (iov_iter_is_bvec(iter)) 195 245 return netfs_limit_bvec(iter, start_offset, max_size, max_segs); 196 246 if (iov_iter_is_xarray(iter))
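Illustrative note: netfs_limit_folioq() above accumulates a span across the folio_queue chain until it has covered max_size bytes or max_segs folios, whichever comes first. The user-space sketch below performs the same walk over a plain array of hypothetical folio sizes.

/* Sketch only: the span-limiting walk reduced to an array of folio sizes.
 * Hypothetical sizes; the kernel walks a chain of folio_queue structs.
 */
#include <stddef.h>

static size_t limit_span(const size_t *fsize, size_t nr, size_t start_offset,
			 size_t max_size, size_t max_segs)
{
	size_t span = 0, nsegs = 0;

	for (size_t i = 0; i < nr; i++) {
		if (start_offset < fsize[i]) {
			span += fsize[i] - start_offset;
			nsegs++;
			start_offset = 0;
		} else {
			start_offset -= fsize[i];
		}
		if (span >= max_size || nsegs >= max_segs)
			break;
	}
	return span < max_size ? span : max_size;
}

/* e.g. limit_span((size_t[]){4096, 16384, 4096}, 3, 512, 32768, 2) covers
 * the tail of the first folio plus the whole second one: 3584 + 16384 bytes,
 * then stops because max_segs has been reached.
 */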
+4 -3
fs/netfs/main.c
··· 36 36 static const char *netfs_origins[nr__netfs_io_origin] = { 37 37 [NETFS_READAHEAD] = "RA", 38 38 [NETFS_READPAGE] = "RP", 39 + [NETFS_READ_GAPS] = "RG", 39 40 [NETFS_READ_FOR_WRITE] = "RW", 40 - [NETFS_COPY_TO_CACHE] = "CC", 41 + [NETFS_DIO_READ] = "DR", 41 42 [NETFS_WRITEBACK] = "WB", 42 43 [NETFS_WRITETHROUGH] = "WT", 43 44 [NETFS_UNBUFFERED_WRITE] = "UW", 44 - [NETFS_DIO_READ] = "DR", 45 45 [NETFS_DIO_WRITE] = "DW", 46 + [NETFS_PGPRIV2_COPY_TO_CACHE] = "2C", 46 47 }; 47 48 48 49 /* ··· 63 62 64 63 rreq = list_entry(v, struct netfs_io_request, proc_link); 65 64 seq_printf(m, 66 - "%08x %s %3d %2lx %4d %3d @%04llx %llx/%llx", 65 + "%08x %s %3d %2lx %4ld %3d @%04llx %llx/%llx", 67 66 rreq->debug_id, 68 67 netfs_origins[rreq->origin], 69 68 refcount_read(&rreq->ref),
+94
fs/netfs/misc.c
··· 8 8 #include <linux/swap.h> 9 9 #include "internal.h" 10 10 11 + /* 12 + * Append a folio to the rolling queue. 13 + */ 14 + int netfs_buffer_append_folio(struct netfs_io_request *rreq, struct folio *folio, 15 + bool needs_put) 16 + { 17 + struct folio_queue *tail = rreq->buffer_tail; 18 + unsigned int slot, order = folio_order(folio); 19 + 20 + if (WARN_ON_ONCE(!rreq->buffer && tail) || 21 + WARN_ON_ONCE(rreq->buffer && !tail)) 22 + return -EIO; 23 + 24 + if (!tail || folioq_full(tail)) { 25 + tail = kmalloc(sizeof(*tail), GFP_NOFS); 26 + if (!tail) 27 + return -ENOMEM; 28 + netfs_stat(&netfs_n_folioq); 29 + folioq_init(tail); 30 + tail->prev = rreq->buffer_tail; 31 + if (tail->prev) 32 + tail->prev->next = tail; 33 + rreq->buffer_tail = tail; 34 + if (!rreq->buffer) { 35 + rreq->buffer = tail; 36 + iov_iter_folio_queue(&rreq->io_iter, ITER_SOURCE, tail, 0, 0, 0); 37 + } 38 + rreq->buffer_tail_slot = 0; 39 + } 40 + 41 + rreq->io_iter.count += PAGE_SIZE << order; 42 + 43 + slot = folioq_append(tail, folio); 44 + /* Store the counter after setting the slot. */ 45 + smp_store_release(&rreq->buffer_tail_slot, slot); 46 + return 0; 47 + } 48 + 49 + /* 50 + * Delete the head of a rolling queue. 51 + */ 52 + struct folio_queue *netfs_delete_buffer_head(struct netfs_io_request *wreq) 53 + { 54 + struct folio_queue *head = wreq->buffer, *next = head->next; 55 + 56 + if (next) 57 + next->prev = NULL; 58 + netfs_stat_d(&netfs_n_folioq); 59 + kfree(head); 60 + wreq->buffer = next; 61 + return next; 62 + } 63 + 64 + /* 65 + * Clear out a rolling queue. 66 + */ 67 + void netfs_clear_buffer(struct netfs_io_request *rreq) 68 + { 69 + struct folio_queue *p; 70 + 71 + while ((p = rreq->buffer)) { 72 + rreq->buffer = p->next; 73 + for (int slot = 0; slot < folioq_nr_slots(p); slot++) { 74 + struct folio *folio = folioq_folio(p, slot); 75 + if (!folio) 76 + continue; 77 + if (folioq_is_marked(p, slot)) { 78 + trace_netfs_folio(folio, netfs_folio_trace_put); 79 + folio_put(folio); 80 + } 81 + } 82 + netfs_stat_d(&netfs_n_folioq); 83 + kfree(p); 84 + } 85 + } 86 + 87 + /* 88 + * Reset the subrequest iterator to refer just to the region remaining to be 89 + * read. The iterator may or may not have been advanced by socket ops or 90 + * extraction ops to an extent that may or may not match the amount actually 91 + * read. 92 + */ 93 + void netfs_reset_iter(struct netfs_io_subrequest *subreq) 94 + { 95 + struct iov_iter *io_iter = &subreq->io_iter; 96 + size_t remain = subreq->len - subreq->transferred; 97 + 98 + if (io_iter->count > remain) 99 + iov_iter_advance(io_iter, io_iter->count - remain); 100 + else if (io_iter->count < remain) 101 + iov_iter_revert(io_iter, remain - io_iter->count); 102 + iov_iter_truncate(&subreq->io_iter, remain); 103 + } 104 + 11 105 /** 12 106 * netfs_dirty_folio - Mark folio dirty and pin a cache object for writeback 13 107 * @mapping: The mapping the folio belongs to.
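Illustrative note: the rolling buffer added here is a chain of folio_queue blocks that grows at the tail (netfs_buffer_append_folio) and is torn down from the head (netfs_delete_buffer_head / netfs_clear_buffer). The user-space sketch below reduces that pattern to a list of fixed-size slot blocks; "item" stands in for a folio and every name is hypothetical.

/* Sketch only: the rolling-buffer pattern, with no refcounting, stats or
 * iterator bookkeeping.  Append always goes to the tail block, consumption
 * frees whole blocks from the head.
 */
#include <stdlib.h>

#define SLOTS 4

struct block {
	struct block *next;
	void *slot[SLOTS];
	int nr;
};

struct rolling {
	struct block *head, *tail;
};

static int roll_append(struct rolling *r, void *item)
{
	struct block *b = r->tail;

	if (!b || b->nr == SLOTS) {		/* tail full: add a new block */
		b = calloc(1, sizeof(*b));
		if (!b)
			return -1;
		if (r->tail)
			r->tail->next = b;
		else
			r->head = b;
		r->tail = b;
	}
	b->slot[b->nr++] = item;
	return 0;
}

static void roll_pop_head(struct rolling *r)	/* consumer side */
{
	struct block *b = r->head;

	r->head = b->next;
	if (!r->head)
		r->tail = NULL;
	free(b);
}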
+13 -3
fs/netfs/objects.c
··· 36 36 memset(rreq, 0, kmem_cache_size(cache)); 37 37 rreq->start = start; 38 38 rreq->len = len; 39 - rreq->upper_len = len; 40 39 rreq->origin = origin; 41 40 rreq->netfs_ops = ctx->ops; 42 41 rreq->mapping = mapping; ··· 43 44 rreq->i_size = i_size_read(inode); 44 45 rreq->debug_id = atomic_inc_return(&debug_ids); 45 46 rreq->wsize = INT_MAX; 47 + rreq->io_streams[0].sreq_max_len = ULONG_MAX; 48 + rreq->io_streams[0].sreq_max_segs = 0; 46 49 spin_lock_init(&rreq->lock); 47 50 INIT_LIST_HEAD(&rreq->io_streams[0].subrequests); 48 51 INIT_LIST_HEAD(&rreq->io_streams[1].subrequests); 49 52 INIT_LIST_HEAD(&rreq->subrequests); 50 - INIT_WORK(&rreq->work, NULL); 51 53 refcount_set(&rreq->ref, 1); 54 + 55 + if (origin == NETFS_READAHEAD || 56 + origin == NETFS_READPAGE || 57 + origin == NETFS_READ_GAPS || 58 + origin == NETFS_READ_FOR_WRITE || 59 + origin == NETFS_DIO_READ) 60 + INIT_WORK(&rreq->work, netfs_read_termination_worker); 61 + else 62 + INIT_WORK(&rreq->work, netfs_write_collection_worker); 52 63 53 64 __set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags); 54 65 if (file && file->f_flags & O_NONBLOCK) ··· 143 134 } 144 135 kvfree(rreq->direct_bv); 145 136 } 137 + netfs_clear_buffer(rreq); 146 138 147 139 if (atomic_dec_and_test(&ictx->io_count)) 148 140 wake_up_var(&ictx->io_count); ··· 165 155 if (was_async) { 166 156 rreq->work.func = netfs_free_request; 167 157 if (!queue_work(system_unbound_wq, &rreq->work)) 168 - BUG(); 158 + WARN_ON(1); 169 159 } else { 170 160 netfs_free_request(&rreq->work); 171 161 }
+544
fs/netfs/read_collect.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* Network filesystem read subrequest result collection, assessment and 3 + * retrying. 4 + * 5 + * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved. 6 + * Written by David Howells (dhowells@redhat.com) 7 + */ 8 + 9 + #include <linux/export.h> 10 + #include <linux/fs.h> 11 + #include <linux/mm.h> 12 + #include <linux/pagemap.h> 13 + #include <linux/slab.h> 14 + #include <linux/task_io_accounting_ops.h> 15 + #include "internal.h" 16 + 17 + /* 18 + * Clear the unread part of an I/O request. 19 + */ 20 + static void netfs_clear_unread(struct netfs_io_subrequest *subreq) 21 + { 22 + netfs_reset_iter(subreq); 23 + WARN_ON_ONCE(subreq->len - subreq->transferred != iov_iter_count(&subreq->io_iter)); 24 + iov_iter_zero(iov_iter_count(&subreq->io_iter), &subreq->io_iter); 25 + if (subreq->start + subreq->transferred >= subreq->rreq->i_size) 26 + __set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags); 27 + } 28 + 29 + /* 30 + * Flush, mark and unlock a folio that's now completely read. If we want to 31 + * cache the folio, we set the group to NETFS_FOLIO_COPY_TO_CACHE, mark it 32 + * dirty and let writeback handle it. 33 + */ 34 + static void netfs_unlock_read_folio(struct netfs_io_subrequest *subreq, 35 + struct netfs_io_request *rreq, 36 + struct folio_queue *folioq, 37 + int slot) 38 + { 39 + struct netfs_folio *finfo; 40 + struct folio *folio = folioq_folio(folioq, slot); 41 + 42 + flush_dcache_folio(folio); 43 + folio_mark_uptodate(folio); 44 + 45 + if (!test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags)) { 46 + finfo = netfs_folio_info(folio); 47 + if (finfo) { 48 + trace_netfs_folio(folio, netfs_folio_trace_filled_gaps); 49 + if (finfo->netfs_group) 50 + folio_change_private(folio, finfo->netfs_group); 51 + else 52 + folio_detach_private(folio); 53 + kfree(finfo); 54 + } 55 + 56 + if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) { 57 + if (!WARN_ON_ONCE(folio_get_private(folio) != NULL)) { 58 + trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache); 59 + folio_attach_private(folio, NETFS_FOLIO_COPY_TO_CACHE); 60 + folio_mark_dirty(folio); 61 + } 62 + } else { 63 + trace_netfs_folio(folio, netfs_folio_trace_read_done); 64 + } 65 + } else { 66 + // TODO: Use of PG_private_2 is deprecated. 67 + if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) 68 + netfs_pgpriv2_mark_copy_to_cache(subreq, rreq, folioq, slot); 69 + } 70 + 71 + if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) { 72 + if (folio->index == rreq->no_unlock_folio && 73 + test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags)) { 74 + _debug("no unlock"); 75 + } else { 76 + trace_netfs_folio(folio, netfs_folio_trace_read_unlock); 77 + folio_unlock(folio); 78 + } 79 + } 80 + } 81 + 82 + /* 83 + * Unlock any folios that are now completely read. Returns true if the 84 + * subrequest is removed from the list. 
85 + */ 86 + static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq, bool was_async) 87 + { 88 + struct netfs_io_subrequest *prev, *next; 89 + struct netfs_io_request *rreq = subreq->rreq; 90 + struct folio_queue *folioq = subreq->curr_folioq; 91 + size_t avail, prev_donated, next_donated, fsize, part, excess; 92 + loff_t fpos, start; 93 + loff_t fend; 94 + int slot = subreq->curr_folioq_slot; 95 + 96 + if (WARN(subreq->transferred > subreq->len, 97 + "Subreq overread: R%x[%x] %zu > %zu", 98 + rreq->debug_id, subreq->debug_index, 99 + subreq->transferred, subreq->len)) 100 + subreq->transferred = subreq->len; 101 + 102 + next_folio: 103 + fsize = PAGE_SIZE << subreq->curr_folio_order; 104 + fpos = round_down(subreq->start + subreq->consumed, fsize); 105 + fend = fpos + fsize; 106 + 107 + if (WARN_ON_ONCE(!folioq) || 108 + WARN_ON_ONCE(!folioq_folio(folioq, slot)) || 109 + WARN_ON_ONCE(folioq_folio(folioq, slot)->index != fpos / PAGE_SIZE)) { 110 + pr_err("R=%08x[%x] s=%llx-%llx ctl=%zx/%zx/%zx sl=%u\n", 111 + rreq->debug_id, subreq->debug_index, 112 + subreq->start, subreq->start + subreq->transferred - 1, 113 + subreq->consumed, subreq->transferred, subreq->len, 114 + slot); 115 + if (folioq) { 116 + struct folio *folio = folioq_folio(folioq, slot); 117 + 118 + pr_err("folioq: orders=%02x%02x%02x%02x\n", 119 + folioq->orders[0], folioq->orders[1], 120 + folioq->orders[2], folioq->orders[3]); 121 + if (folio) 122 + pr_err("folio: %llx-%llx ix=%llx o=%u qo=%u\n", 123 + fpos, fend - 1, folio_pos(folio), folio_order(folio), 124 + folioq_folio_order(folioq, slot)); 125 + } 126 + } 127 + 128 + donation_changed: 129 + /* Try to consume the current folio if we've hit or passed the end of 130 + * it. There's a possibility that this subreq doesn't start at the 131 + * beginning of the folio, in which case we need to donate to/from the 132 + * preceding subreq. 133 + * 134 + * We also need to include any potential donation back from the 135 + * following subreq. 136 + */ 137 + prev_donated = READ_ONCE(subreq->prev_donated); 138 + next_donated = READ_ONCE(subreq->next_donated); 139 + if (prev_donated || next_donated) { 140 + spin_lock_bh(&rreq->lock); 141 + prev_donated = subreq->prev_donated; 142 + next_donated = subreq->next_donated; 143 + subreq->start -= prev_donated; 144 + subreq->len += prev_donated; 145 + subreq->transferred += prev_donated; 146 + prev_donated = subreq->prev_donated = 0; 147 + if (subreq->transferred == subreq->len) { 148 + subreq->len += next_donated; 149 + subreq->transferred += next_donated; 150 + next_donated = subreq->next_donated = 0; 151 + } 152 + trace_netfs_sreq(subreq, netfs_sreq_trace_add_donations); 153 + spin_unlock_bh(&rreq->lock); 154 + } 155 + 156 + avail = subreq->transferred; 157 + if (avail == subreq->len) 158 + avail += next_donated; 159 + start = subreq->start; 160 + if (subreq->consumed == 0) { 161 + start -= prev_donated; 162 + avail += prev_donated; 163 + } else { 164 + start += subreq->consumed; 165 + avail -= subreq->consumed; 166 + } 167 + part = umin(avail, fsize); 168 + 169 + trace_netfs_progress(subreq, start, avail, part); 170 + 171 + if (start + avail >= fend) { 172 + if (fpos == start) { 173 + /* Flush, unlock and mark for caching any folio we've just read. 
*/ 174 + subreq->consumed = fend - subreq->start; 175 + netfs_unlock_read_folio(subreq, rreq, folioq, slot); 176 + folioq_mark2(folioq, slot); 177 + if (subreq->consumed >= subreq->len) 178 + goto remove_subreq; 179 + } else if (fpos < start) { 180 + excess = fend - subreq->start; 181 + 182 + spin_lock_bh(&rreq->lock); 183 + /* If we complete first on a folio split with the 184 + * preceding subreq, donate to that subreq - otherwise 185 + * we get the responsibility. 186 + */ 187 + if (subreq->prev_donated != prev_donated) { 188 + spin_unlock_bh(&rreq->lock); 189 + goto donation_changed; 190 + } 191 + 192 + if (list_is_first(&subreq->rreq_link, &rreq->subrequests)) { 193 + spin_unlock_bh(&rreq->lock); 194 + pr_err("Can't donate prior to front\n"); 195 + goto bad; 196 + } 197 + 198 + prev = list_prev_entry(subreq, rreq_link); 199 + WRITE_ONCE(prev->next_donated, prev->next_donated + excess); 200 + subreq->start += excess; 201 + subreq->len -= excess; 202 + subreq->transferred -= excess; 203 + trace_netfs_donate(rreq, subreq, prev, excess, 204 + netfs_trace_donate_tail_to_prev); 205 + trace_netfs_sreq(subreq, netfs_sreq_trace_donate_to_prev); 206 + 207 + if (subreq->consumed >= subreq->len) 208 + goto remove_subreq_locked; 209 + spin_unlock_bh(&rreq->lock); 210 + } else { 211 + pr_err("fpos > start\n"); 212 + goto bad; 213 + } 214 + 215 + /* Advance the rolling buffer to the next folio. */ 216 + slot++; 217 + if (slot >= folioq_nr_slots(folioq)) { 218 + slot = 0; 219 + folioq = folioq->next; 220 + subreq->curr_folioq = folioq; 221 + } 222 + subreq->curr_folioq_slot = slot; 223 + if (folioq && folioq_folio(folioq, slot)) 224 + subreq->curr_folio_order = folioq->orders[slot]; 225 + if (!was_async) 226 + cond_resched(); 227 + goto next_folio; 228 + } 229 + 230 + /* Deal with partial progress. */ 231 + if (subreq->transferred < subreq->len) 232 + return false; 233 + 234 + /* Donate the remaining downloaded data to one of the neighbouring 235 + * subrequests. Note that we may race with them doing the same thing. 236 + */ 237 + spin_lock_bh(&rreq->lock); 238 + 239 + if (subreq->prev_donated != prev_donated || 240 + subreq->next_donated != next_donated) { 241 + spin_unlock_bh(&rreq->lock); 242 + cond_resched(); 243 + goto donation_changed; 244 + } 245 + 246 + /* Deal with the trickiest case: that this subreq is in the middle of a 247 + * folio, not touching either edge, but finishes first. In such a 248 + * case, we donate to the previous subreq, if there is one, so that the 249 + * donation is only handled when that completes - and remove this 250 + * subreq from the list. 251 + * 252 + * If the previous subreq finished first, we will have acquired their 253 + * donation and should be able to unlock folios and/or donate nextwards. 254 + */ 255 + if (!subreq->consumed && 256 + !prev_donated && 257 + !list_is_first(&subreq->rreq_link, &rreq->subrequests)) { 258 + prev = list_prev_entry(subreq, rreq_link); 259 + WRITE_ONCE(prev->next_donated, prev->next_donated + subreq->len); 260 + subreq->start += subreq->len; 261 + subreq->len = 0; 262 + subreq->transferred = 0; 263 + trace_netfs_donate(rreq, subreq, prev, subreq->len, 264 + netfs_trace_donate_to_prev); 265 + trace_netfs_sreq(subreq, netfs_sreq_trace_donate_to_prev); 266 + goto remove_subreq_locked; 267 + } 268 + 269 + /* If we can't donate down the chain, donate up the chain instead. 
*/ 270 + excess = subreq->len - subreq->consumed + next_donated; 271 + 272 + if (!subreq->consumed) 273 + excess += prev_donated; 274 + 275 + if (list_is_last(&subreq->rreq_link, &rreq->subrequests)) { 276 + rreq->prev_donated = excess; 277 + trace_netfs_donate(rreq, subreq, NULL, excess, 278 + netfs_trace_donate_to_deferred_next); 279 + } else { 280 + next = list_next_entry(subreq, rreq_link); 281 + WRITE_ONCE(next->prev_donated, excess); 282 + trace_netfs_donate(rreq, subreq, next, excess, 283 + netfs_trace_donate_to_next); 284 + } 285 + trace_netfs_sreq(subreq, netfs_sreq_trace_donate_to_next); 286 + subreq->len = subreq->consumed; 287 + subreq->transferred = subreq->consumed; 288 + goto remove_subreq_locked; 289 + 290 + remove_subreq: 291 + spin_lock_bh(&rreq->lock); 292 + remove_subreq_locked: 293 + subreq->consumed = subreq->len; 294 + list_del(&subreq->rreq_link); 295 + spin_unlock_bh(&rreq->lock); 296 + netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_consumed); 297 + return true; 298 + 299 + bad: 300 + /* Errr... prev and next both donated to us, but insufficient to finish 301 + * the folio. 302 + */ 303 + printk("R=%08x[%x] s=%llx-%llx %zx/%zx/%zx\n", 304 + rreq->debug_id, subreq->debug_index, 305 + subreq->start, subreq->start + subreq->transferred - 1, 306 + subreq->consumed, subreq->transferred, subreq->len); 307 + printk("folio: %llx-%llx\n", fpos, fend - 1); 308 + printk("donated: prev=%zx next=%zx\n", prev_donated, next_donated); 309 + printk("s=%llx av=%zx part=%zx\n", start, avail, part); 310 + BUG(); 311 + } 312 + 313 + /* 314 + * Do page flushing and suchlike after DIO. 315 + */ 316 + static void netfs_rreq_assess_dio(struct netfs_io_request *rreq) 317 + { 318 + struct netfs_io_subrequest *subreq; 319 + unsigned int i; 320 + 321 + /* Collect unbuffered reads and direct reads, adding up the transfer 322 + * sizes until we find the first short or failed subrequest. 323 + */ 324 + list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { 325 + rreq->transferred += subreq->transferred; 326 + 327 + if (subreq->transferred < subreq->len || 328 + test_bit(NETFS_SREQ_FAILED, &subreq->flags)) { 329 + rreq->error = subreq->error; 330 + break; 331 + } 332 + } 333 + 334 + if (rreq->origin == NETFS_DIO_READ) { 335 + for (i = 0; i < rreq->direct_bv_count; i++) { 336 + flush_dcache_page(rreq->direct_bv[i].bv_page); 337 + // TODO: cifs marks pages in the destination buffer 338 + // dirty under some circumstances after a read. Do we 339 + // need to do that too? 340 + set_page_dirty(rreq->direct_bv[i].bv_page); 341 + } 342 + } 343 + 344 + if (rreq->iocb) { 345 + rreq->iocb->ki_pos += rreq->transferred; 346 + if (rreq->iocb->ki_complete) 347 + rreq->iocb->ki_complete( 348 + rreq->iocb, rreq->error ? rreq->error : rreq->transferred); 349 + } 350 + if (rreq->netfs_ops->done) 351 + rreq->netfs_ops->done(rreq); 352 + if (rreq->origin == NETFS_DIO_READ) 353 + inode_dio_end(rreq->inode); 354 + } 355 + 356 + /* 357 + * Assess the state of a read request and decide what to do next. 358 + * 359 + * Note that we're in normal kernel thread context at this point, possibly 360 + * running on a workqueue. 
361 + */ 362 + static void netfs_rreq_assess(struct netfs_io_request *rreq) 363 + { 364 + trace_netfs_rreq(rreq, netfs_rreq_trace_assess); 365 + 366 + //netfs_rreq_is_still_valid(rreq); 367 + 368 + if (test_and_clear_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags)) { 369 + netfs_retry_reads(rreq); 370 + return; 371 + } 372 + 373 + if (rreq->origin == NETFS_DIO_READ || 374 + rreq->origin == NETFS_READ_GAPS) 375 + netfs_rreq_assess_dio(rreq); 376 + task_io_account_read(rreq->transferred); 377 + 378 + trace_netfs_rreq(rreq, netfs_rreq_trace_wake_ip); 379 + clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags); 380 + wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS); 381 + 382 + trace_netfs_rreq(rreq, netfs_rreq_trace_done); 383 + netfs_clear_subrequests(rreq, false); 384 + netfs_unlock_abandoned_read_pages(rreq); 385 + if (unlikely(test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags))) 386 + netfs_pgpriv2_write_to_the_cache(rreq); 387 + } 388 + 389 + void netfs_read_termination_worker(struct work_struct *work) 390 + { 391 + struct netfs_io_request *rreq = 392 + container_of(work, struct netfs_io_request, work); 393 + netfs_see_request(rreq, netfs_rreq_trace_see_work); 394 + netfs_rreq_assess(rreq); 395 + netfs_put_request(rreq, false, netfs_rreq_trace_put_work_complete); 396 + } 397 + 398 + /* 399 + * Handle the completion of all outstanding I/O operations on a read request. 400 + * We inherit a ref from the caller. 401 + */ 402 + void netfs_rreq_terminated(struct netfs_io_request *rreq, bool was_async) 403 + { 404 + if (!was_async) 405 + return netfs_rreq_assess(rreq); 406 + if (!work_pending(&rreq->work)) { 407 + netfs_get_request(rreq, netfs_rreq_trace_get_work); 408 + if (!queue_work(system_unbound_wq, &rreq->work)) 409 + netfs_put_request(rreq, was_async, netfs_rreq_trace_put_work_nq); 410 + } 411 + } 412 + 413 + /** 414 + * netfs_read_subreq_progress - Note progress of a read operation. 415 + * @subreq: The read request that has terminated. 416 + * @was_async: True if we're in an asynchronous context. 417 + * 418 + * This tells the read side of netfs lib that a contributory I/O operation has 419 + * made some progress and that it may be possible to unlock some folios. 420 + * 421 + * Before calling, the filesystem should update subreq->transferred to track 422 + * the amount of data copied into the output buffer. 423 + * 424 + * If @was_async is true, the caller might be running in softirq or interrupt 425 + * context and we can't sleep. 426 + */ 427 + void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq, 428 + bool was_async) 429 + { 430 + struct netfs_io_request *rreq = subreq->rreq; 431 + 432 + trace_netfs_sreq(subreq, netfs_sreq_trace_progress); 433 + 434 + if (subreq->transferred > subreq->consumed && 435 + (rreq->origin == NETFS_READAHEAD || 436 + rreq->origin == NETFS_READPAGE || 437 + rreq->origin == NETFS_READ_FOR_WRITE)) { 438 + netfs_consume_read_data(subreq, was_async); 439 + __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); 440 + } 441 + } 442 + EXPORT_SYMBOL(netfs_read_subreq_progress); 443 + 444 + /** 445 + * netfs_read_subreq_terminated - Note the termination of an I/O operation. 446 + * @subreq: The I/O request that has terminated. 447 + * @error: Error code indicating type of completion. 448 + * @was_async: The termination was asynchronous 449 + * 450 + * This tells the read helper that a contributory I/O operation has terminated, 451 + * one way or another, and that it should integrate the results. 
452 + * 453 + * The caller indicates the outcome of the operation through @error, supplying 454 + * 0 to indicate a successful or retryable transfer (if NETFS_SREQ_NEED_RETRY 455 + * is set) or a negative error code. The helper will look after reissuing I/O 456 + * operations as appropriate and writing downloaded data to the cache. 457 + * 458 + * Before calling, the filesystem should update subreq->transferred to track 459 + * the amount of data copied into the output buffer. 460 + * 461 + * If @was_async is true, the caller might be running in softirq or interrupt 462 + * context and we can't sleep. 463 + */ 464 + void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq, 465 + int error, bool was_async) 466 + { 467 + struct netfs_io_request *rreq = subreq->rreq; 468 + 469 + switch (subreq->source) { 470 + case NETFS_READ_FROM_CACHE: 471 + netfs_stat(&netfs_n_rh_read_done); 472 + break; 473 + case NETFS_DOWNLOAD_FROM_SERVER: 474 + netfs_stat(&netfs_n_rh_download_done); 475 + break; 476 + default: 477 + break; 478 + } 479 + 480 + if (rreq->origin != NETFS_DIO_READ) { 481 + /* Collect buffered reads. 482 + * 483 + * If the read completed validly short, then we can clear the 484 + * tail before going on to unlock the folios. 485 + */ 486 + if (error == 0 && subreq->transferred < subreq->len && 487 + (test_bit(NETFS_SREQ_HIT_EOF, &subreq->flags) || 488 + test_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags))) { 489 + netfs_clear_unread(subreq); 490 + subreq->transferred = subreq->len; 491 + trace_netfs_sreq(subreq, netfs_sreq_trace_clear); 492 + } 493 + if (subreq->transferred > subreq->consumed && 494 + (rreq->origin == NETFS_READAHEAD || 495 + rreq->origin == NETFS_READPAGE || 496 + rreq->origin == NETFS_READ_FOR_WRITE)) { 497 + netfs_consume_read_data(subreq, was_async); 498 + __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); 499 + } 500 + rreq->transferred += subreq->transferred; 501 + } 502 + 503 + /* Deal with retry requests, short reads and errors. If we retry 504 + * but don't make progress, we abandon the attempt. 505 + */ 506 + if (!error && subreq->transferred < subreq->len) { 507 + if (test_bit(NETFS_SREQ_HIT_EOF, &subreq->flags)) { 508 + trace_netfs_sreq(subreq, netfs_sreq_trace_hit_eof); 509 + } else { 510 + trace_netfs_sreq(subreq, netfs_sreq_trace_short); 511 + if (subreq->transferred > subreq->consumed) { 512 + __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 513 + __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); 514 + set_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags); 515 + } else if (!__test_and_set_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags)) { 516 + __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 517 + set_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags); 518 + } else { 519 + __set_bit(NETFS_SREQ_FAILED, &subreq->flags); 520 + error = -ENODATA; 521 + } 522 + } 523 + } 524 + 525 + subreq->error = error; 526 + trace_netfs_sreq(subreq, netfs_sreq_trace_terminated); 527 + 528 + if (unlikely(error < 0)) { 529 + trace_netfs_failure(rreq, subreq, error, netfs_fail_read); 530 + if (subreq->source == NETFS_READ_FROM_CACHE) { 531 + netfs_stat(&netfs_n_rh_read_failed); 532 + } else { 533 + netfs_stat(&netfs_n_rh_download_failed); 534 + set_bit(NETFS_RREQ_FAILED, &rreq->flags); 535 + rreq->error = subreq->error; 536 + } 537 + } 538 + 539 + if (atomic_dec_and_test(&rreq->nr_outstanding)) 540 + netfs_rreq_terminated(rreq, was_async); 541 + 542 + netfs_put_subrequest(subreq, was_async, netfs_sreq_trace_put_terminated); 543 + } 544 + EXPORT_SYMBOL(netfs_read_subreq_terminated);
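The donation logic in netfs_consume_read_data() above is the subtle part of the new read collector: a subrequest that finishes on a folio it shares with its predecessor hands the bytes it holds of that folio backwards, so exactly one subrequest ends up responsible for unlocking it, and any leftover length is similarly donated forwards to the next subrequest. A minimal userspace sketch of that bookkeeping follows; struct model_subreq, donate_to_prev() and the sizes are invented for illustration, and only the arithmetic mirrors the code above.

#include <stdio.h>
#include <stddef.h>

struct model_subreq {
	unsigned long long start;	/* file position the subreq covers from */
	size_t len;			/* bytes covered */
	size_t transferred;		/* bytes actually read so far */
	size_t next_donated;		/* bytes handed back by the next subreq */
};

/* The current subreq completes first on a folio split with its predecessor:
 * give the predecessor the bytes of the shared folio and shrink ourselves,
 * as the code above does with prev->next_donated and subreq->start/len.
 */
static void donate_to_prev(struct model_subreq *prev, struct model_subreq *cur,
			   size_t excess)
{
	prev->next_donated += excess;
	cur->start += excess;
	cur->len -= excess;
	cur->transferred -= excess;
}

int main(void)
{
	/* 4KiB folios; the folio at [4096, 8192) is split between A and B. */
	struct model_subreq a = { .start = 0,    .len = 6144, .transferred = 6144 };
	struct model_subreq b = { .start = 6144, .len = 6144, .transferred = 6144 };

	/* B completes first; its 2048 bytes of the split folio go to A. */
	donate_to_prev(&a, &b, 8192 - b.start);

	/* When A's completion runs it folds the donation in and can then
	 * unlock both folios it now covers, [0, 4096) and [4096, 8192). */
	a.len += a.next_donated;
	a.transferred += a.next_donated;
	a.next_donated = 0;

	printf("A covers 0-%llu, B now starts at %llu\n",
	       (unsigned long long)(a.start + a.len), b.start);
	return 0;
}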
+264
fs/netfs/read_pgpriv2.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* Read with PG_private_2 [DEPRECATED]. 3 + * 4 + * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved. 5 + * Written by David Howells (dhowells@redhat.com) 6 + */ 7 + 8 + #include <linux/export.h> 9 + #include <linux/fs.h> 10 + #include <linux/mm.h> 11 + #include <linux/pagemap.h> 12 + #include <linux/slab.h> 13 + #include <linux/task_io_accounting_ops.h> 14 + #include "internal.h" 15 + 16 + /* 17 + * [DEPRECATED] Mark page as requiring copy-to-cache using PG_private_2. The 18 + * third mark in the folio queue is used to indicate that this folio needs 19 + * writing. 20 + */ 21 + void netfs_pgpriv2_mark_copy_to_cache(struct netfs_io_subrequest *subreq, 22 + struct netfs_io_request *rreq, 23 + struct folio_queue *folioq, 24 + int slot) 25 + { 26 + struct folio *folio = folioq_folio(folioq, slot); 27 + 28 + trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache); 29 + folio_start_private_2(folio); 30 + folioq_mark3(folioq, slot); 31 + } 32 + 33 + /* 34 + * [DEPRECATED] Cancel PG_private_2 on all marked folios in the event of an 35 + * unrecoverable error. 36 + */ 37 + static void netfs_pgpriv2_cancel(struct folio_queue *folioq) 38 + { 39 + struct folio *folio; 40 + int slot; 41 + 42 + while (folioq) { 43 + if (!folioq->marks3) { 44 + folioq = folioq->next; 45 + continue; 46 + } 47 + 48 + slot = __ffs(folioq->marks3); 49 + folio = folioq_folio(folioq, slot); 50 + 51 + trace_netfs_folio(folio, netfs_folio_trace_cancel_copy); 52 + folio_end_private_2(folio); 53 + folioq_unmark3(folioq, slot); 54 + } 55 + } 56 + 57 + /* 58 + * [DEPRECATED] Copy a folio to the cache with PG_private_2 set. 59 + */ 60 + static int netfs_pgpriv2_copy_folio(struct netfs_io_request *wreq, struct folio *folio) 61 + { 62 + struct netfs_io_stream *cache = &wreq->io_streams[1]; 63 + size_t fsize = folio_size(folio), flen = fsize; 64 + loff_t fpos = folio_pos(folio), i_size; 65 + bool to_eof = false; 66 + 67 + _enter(""); 68 + 69 + /* netfs_perform_write() may shift i_size around the page or from out 70 + * of the page to beyond it, but cannot move i_size into or through the 71 + * page since we have it locked. 72 + */ 73 + i_size = i_size_read(wreq->inode); 74 + 75 + if (fpos >= i_size) { 76 + /* mmap beyond eof. */ 77 + _debug("beyond eof"); 78 + folio_end_private_2(folio); 79 + return 0; 80 + } 81 + 82 + if (fpos + fsize > wreq->i_size) 83 + wreq->i_size = i_size; 84 + 85 + if (flen > i_size - fpos) { 86 + flen = i_size - fpos; 87 + to_eof = true; 88 + } else if (flen == i_size - fpos) { 89 + to_eof = true; 90 + } 91 + 92 + _debug("folio %zx %zx", flen, fsize); 93 + 94 + trace_netfs_folio(folio, netfs_folio_trace_store_copy); 95 + 96 + /* Attach the folio to the rolling buffer. */ 97 + if (netfs_buffer_append_folio(wreq, folio, false) < 0) 98 + return -ENOMEM; 99 + 100 + cache->submit_extendable_to = fsize; 101 + cache->submit_off = 0; 102 + cache->submit_len = flen; 103 + 104 + /* Attach the folio to one or more subrequests. For a big folio, we 105 + * could end up with thousands of subrequests if the wsize is small - 106 + * but we might need to wait during the creation of subrequests for 107 + * network resources (eg. SMB credits). 
108 + */ 109 + do { 110 + ssize_t part; 111 + 112 + wreq->io_iter.iov_offset = cache->submit_off; 113 + 114 + atomic64_set(&wreq->issued_to, fpos + cache->submit_off); 115 + cache->submit_extendable_to = fsize - cache->submit_off; 116 + part = netfs_advance_write(wreq, cache, fpos + cache->submit_off, 117 + cache->submit_len, to_eof); 118 + cache->submit_off += part; 119 + if (part > cache->submit_len) 120 + cache->submit_len = 0; 121 + else 122 + cache->submit_len -= part; 123 + } while (cache->submit_len > 0); 124 + 125 + wreq->io_iter.iov_offset = 0; 126 + iov_iter_advance(&wreq->io_iter, fsize); 127 + atomic64_set(&wreq->issued_to, fpos + fsize); 128 + 129 + if (flen < fsize) 130 + netfs_issue_write(wreq, cache); 131 + 132 + _leave(" = 0"); 133 + return 0; 134 + } 135 + 136 + /* 137 + * [DEPRECATED] Go through the buffer and write any folios that are marked with 138 + * the third mark to the cache. 139 + */ 140 + void netfs_pgpriv2_write_to_the_cache(struct netfs_io_request *rreq) 141 + { 142 + struct netfs_io_request *wreq; 143 + struct folio_queue *folioq; 144 + struct folio *folio; 145 + int error = 0; 146 + int slot = 0; 147 + 148 + _enter(""); 149 + 150 + if (!fscache_resources_valid(&rreq->cache_resources)) 151 + goto couldnt_start; 152 + 153 + /* Need the first folio to be able to set up the op. */ 154 + for (folioq = rreq->buffer; folioq; folioq = folioq->next) { 155 + if (folioq->marks3) { 156 + slot = __ffs(folioq->marks3); 157 + break; 158 + } 159 + } 160 + if (!folioq) 161 + return; 162 + folio = folioq_folio(folioq, slot); 163 + 164 + wreq = netfs_create_write_req(rreq->mapping, NULL, folio_pos(folio), 165 + NETFS_PGPRIV2_COPY_TO_CACHE); 166 + if (IS_ERR(wreq)) { 167 + kleave(" [create %ld]", PTR_ERR(wreq)); 168 + goto couldnt_start; 169 + } 170 + 171 + trace_netfs_write(wreq, netfs_write_trace_copy_to_cache); 172 + netfs_stat(&netfs_n_wh_copy_to_cache); 173 + 174 + for (;;) { 175 + error = netfs_pgpriv2_copy_folio(wreq, folio); 176 + if (error < 0) 177 + break; 178 + 179 + folioq_unmark3(folioq, slot); 180 + if (!folioq->marks3) { 181 + folioq = folioq->next; 182 + if (!folioq) 183 + break; 184 + } 185 + 186 + slot = __ffs(folioq->marks3); 187 + folio = folioq_folio(folioq, slot); 188 + } 189 + 190 + netfs_issue_write(wreq, &wreq->io_streams[1]); 191 + smp_wmb(); /* Write lists before ALL_QUEUED. */ 192 + set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags); 193 + 194 + netfs_put_request(wreq, false, netfs_rreq_trace_put_return); 195 + _leave(" = %d", error); 196 + couldnt_start: 197 + netfs_pgpriv2_cancel(rreq->buffer); 198 + } 199 + 200 + /* 201 + * [DEPRECATED] Remove the PG_private_2 mark from any folios we've finished 202 + * copying. 
203 + */ 204 + bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io_request *wreq) 205 + { 206 + struct folio_queue *folioq = wreq->buffer; 207 + unsigned long long collected_to = wreq->collected_to; 208 + unsigned int slot = wreq->buffer_head_slot; 209 + bool made_progress = false; 210 + 211 + if (slot >= folioq_nr_slots(folioq)) { 212 + folioq = netfs_delete_buffer_head(wreq); 213 + slot = 0; 214 + } 215 + 216 + for (;;) { 217 + struct folio *folio; 218 + unsigned long long fpos, fend; 219 + size_t fsize, flen; 220 + 221 + folio = folioq_folio(folioq, slot); 222 + if (WARN_ONCE(!folio_test_private_2(folio), 223 + "R=%08x: folio %lx is not marked private_2\n", 224 + wreq->debug_id, folio->index)) 225 + trace_netfs_folio(folio, netfs_folio_trace_not_under_wback); 226 + 227 + fpos = folio_pos(folio); 228 + fsize = folio_size(folio); 229 + flen = fsize; 230 + 231 + fend = min_t(unsigned long long, fpos + flen, wreq->i_size); 232 + 233 + trace_netfs_collect_folio(wreq, folio, fend, collected_to); 234 + 235 + /* Unlock any folio we've transferred all of. */ 236 + if (collected_to < fend) 237 + break; 238 + 239 + trace_netfs_folio(folio, netfs_folio_trace_end_copy); 240 + folio_end_private_2(folio); 241 + wreq->cleaned_to = fpos + fsize; 242 + made_progress = true; 243 + 244 + /* Clean up the head folioq. If we clear an entire folioq, then 245 + * we can get rid of it provided it's not also the tail folioq 246 + * being filled by the issuer. 247 + */ 248 + folioq_clear(folioq, slot); 249 + slot++; 250 + if (slot >= folioq_nr_slots(folioq)) { 251 + if (READ_ONCE(wreq->buffer_tail) == folioq) 252 + break; 253 + folioq = netfs_delete_buffer_head(wreq); 254 + slot = 0; 255 + } 256 + 257 + if (fpos + fsize >= collected_to) 258 + break; 259 + } 260 + 261 + wreq->buffer = folioq; 262 + wreq->buffer_head_slot = slot; 263 + return made_progress; 264 + }
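The deprecated copy-to-cache path above keys everything off the third mark bitmap in each folio_queue segment: both netfs_pgpriv2_write_to_the_cache() and netfs_pgpriv2_cancel() find work by taking __ffs() of marks3 and clearing bits as each folio is dealt with. Below is a small userspace model of that walk; the seg structure, the slot names and ffs() stand in for folio_queue, folioq_folio() and __ffs(), and none of it is the kernel API.

#include <stdio.h>
#include <strings.h>	/* ffs() */

#define SLOTS_PER_SEG 8

struct seg {
	const char *slot_name[SLOTS_PER_SEG];
	unsigned int marks;		/* bit n set => slot n needs copying */
	struct seg *next;
};

static void copy_marked_slots(struct seg *s)
{
	while (s) {
		if (!s->marks) {		/* nothing marked: next segment */
			s = s->next;
			continue;
		}
		int slot = ffs(s->marks) - 1;	/* lowest marked slot */

		printf("copy %s to the cache\n", s->slot_name[slot]);
		s->marks &= ~(1U << slot);	/* clear, like folioq_unmark3() */
	}
}

int main(void)
{
	struct seg b = { .slot_name = { "folio-8", "folio-9" }, .marks = 0x1 };
	struct seg a = { .slot_name = { "folio-0", "folio-1", "folio-2" },
			 .marks = 0x5, .next = &b };

	copy_marked_slots(&a);
	return 0;
}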
+256
fs/netfs/read_retry.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* Network filesystem read subrequest retrying. 3 + * 4 + * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved. 5 + * Written by David Howells (dhowells@redhat.com) 6 + */ 7 + 8 + #include <linux/fs.h> 9 + #include <linux/slab.h> 10 + #include "internal.h" 11 + 12 + static void netfs_reissue_read(struct netfs_io_request *rreq, 13 + struct netfs_io_subrequest *subreq) 14 + { 15 + struct iov_iter *io_iter = &subreq->io_iter; 16 + 17 + if (iov_iter_is_folioq(io_iter)) { 18 + subreq->curr_folioq = (struct folio_queue *)io_iter->folioq; 19 + subreq->curr_folioq_slot = io_iter->folioq_slot; 20 + subreq->curr_folio_order = subreq->curr_folioq->orders[subreq->curr_folioq_slot]; 21 + } 22 + 23 + atomic_inc(&rreq->nr_outstanding); 24 + __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); 25 + netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); 26 + subreq->rreq->netfs_ops->issue_read(subreq); 27 + } 28 + 29 + /* 30 + * Go through the list of failed/short reads, retrying all retryable ones. We 31 + * need to switch failed cache reads to network downloads. 32 + */ 33 + static void netfs_retry_read_subrequests(struct netfs_io_request *rreq) 34 + { 35 + struct netfs_io_subrequest *subreq; 36 + struct netfs_io_stream *stream0 = &rreq->io_streams[0]; 37 + LIST_HEAD(sublist); 38 + LIST_HEAD(queue); 39 + 40 + _enter("R=%x", rreq->debug_id); 41 + 42 + if (list_empty(&rreq->subrequests)) 43 + return; 44 + 45 + if (rreq->netfs_ops->retry_request) 46 + rreq->netfs_ops->retry_request(rreq, NULL); 47 + 48 + /* If there's no renegotiation to do, just resend each retryable subreq 49 + * up to the first permanently failed one. 50 + */ 51 + if (!rreq->netfs_ops->prepare_read && 52 + !test_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags)) { 53 + struct netfs_io_subrequest *subreq; 54 + 55 + list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { 56 + if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) 57 + break; 58 + if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) { 59 + netfs_reset_iter(subreq); 60 + netfs_reissue_read(rreq, subreq); 61 + } 62 + } 63 + return; 64 + } 65 + 66 + /* Okay, we need to renegotiate all the download requests and flip any 67 + * failed cache reads over to being download requests and negotiate 68 + * those also. All fully successful subreqs have been removed from the 69 + * list and any spare data from those has been donated. 70 + * 71 + * What we do is decant the list and rebuild it one subreq at a time so 72 + * that we don't end up with donations jumping over a gap we're busy 73 + * populating with smaller subrequests. In the event that the subreq 74 + * we just launched finishes before we insert the next subreq, it'll 75 + * fill in rreq->prev_donated instead. 76 + 77 + * Note: Alternatively, we could split the tail subrequest right before 78 + * we reissue it and fix up the donations under lock. 79 + */ 80 + list_splice_init(&rreq->subrequests, &queue); 81 + 82 + do { 83 + struct netfs_io_subrequest *from; 84 + struct iov_iter source; 85 + unsigned long long start, len; 86 + size_t part, deferred_next_donated = 0; 87 + bool boundary = false; 88 + 89 + /* Go through the subreqs and find the next span of contiguous 90 + * buffer that we then rejig (cifs, for example, needs the 91 + * rsize renegotiating) and reissue. 
92 + */ 93 + from = list_first_entry(&queue, struct netfs_io_subrequest, rreq_link); 94 + list_move_tail(&from->rreq_link, &sublist); 95 + start = from->start + from->transferred; 96 + len = from->len - from->transferred; 97 + 98 + _debug("from R=%08x[%x] s=%llx ctl=%zx/%zx/%zx", 99 + rreq->debug_id, from->debug_index, 100 + from->start, from->consumed, from->transferred, from->len); 101 + 102 + if (test_bit(NETFS_SREQ_FAILED, &from->flags) || 103 + !test_bit(NETFS_SREQ_NEED_RETRY, &from->flags)) 104 + goto abandon; 105 + 106 + deferred_next_donated = from->next_donated; 107 + while ((subreq = list_first_entry_or_null( 108 + &queue, struct netfs_io_subrequest, rreq_link))) { 109 + if (subreq->start != start + len || 110 + subreq->transferred > 0 || 111 + !test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) 112 + break; 113 + list_move_tail(&subreq->rreq_link, &sublist); 114 + len += subreq->len; 115 + deferred_next_donated = subreq->next_donated; 116 + if (test_bit(NETFS_SREQ_BOUNDARY, &subreq->flags)) 117 + break; 118 + } 119 + 120 + _debug(" - range: %llx-%llx %llx", start, start + len - 1, len); 121 + 122 + /* Determine the set of buffers we're going to use. Each 123 + * subreq gets a subset of a single overall contiguous buffer. 124 + */ 125 + netfs_reset_iter(from); 126 + source = from->io_iter; 127 + source.count = len; 128 + 129 + /* Work through the sublist. */ 130 + while ((subreq = list_first_entry_or_null( 131 + &sublist, struct netfs_io_subrequest, rreq_link))) { 132 + list_del(&subreq->rreq_link); 133 + 134 + subreq->source = NETFS_DOWNLOAD_FROM_SERVER; 135 + subreq->start = start - subreq->transferred; 136 + subreq->len = len + subreq->transferred; 137 + stream0->sreq_max_len = subreq->len; 138 + 139 + __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 140 + __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); 141 + 142 + spin_lock_bh(&rreq->lock); 143 + list_add_tail(&subreq->rreq_link, &rreq->subrequests); 144 + subreq->prev_donated += rreq->prev_donated; 145 + rreq->prev_donated = 0; 146 + trace_netfs_sreq(subreq, netfs_sreq_trace_retry); 147 + spin_unlock_bh(&rreq->lock); 148 + 149 + BUG_ON(!len); 150 + 151 + /* Renegotiate max_len (rsize) */ 152 + if (rreq->netfs_ops->prepare_read(subreq) < 0) { 153 + trace_netfs_sreq(subreq, netfs_sreq_trace_reprep_failed); 154 + __set_bit(NETFS_SREQ_FAILED, &subreq->flags); 155 + } 156 + 157 + part = umin(len, stream0->sreq_max_len); 158 + if (unlikely(rreq->io_streams[0].sreq_max_segs)) 159 + part = netfs_limit_iter(&source, 0, part, stream0->sreq_max_segs); 160 + subreq->len = subreq->transferred + part; 161 + subreq->io_iter = source; 162 + iov_iter_truncate(&subreq->io_iter, part); 163 + iov_iter_advance(&source, part); 164 + len -= part; 165 + start += part; 166 + if (!len) { 167 + if (boundary) 168 + __set_bit(NETFS_SREQ_BOUNDARY, &subreq->flags); 169 + subreq->next_donated = deferred_next_donated; 170 + } else { 171 + __clear_bit(NETFS_SREQ_BOUNDARY, &subreq->flags); 172 + subreq->next_donated = 0; 173 + } 174 + 175 + netfs_reissue_read(rreq, subreq); 176 + if (!len) 177 + break; 178 + 179 + /* If we ran out of subrequests, allocate another. */ 180 + if (list_empty(&sublist)) { 181 + subreq = netfs_alloc_subrequest(rreq); 182 + if (!subreq) 183 + goto abandon; 184 + subreq->source = NETFS_DOWNLOAD_FROM_SERVER; 185 + subreq->start = start; 186 + 187 + /* We get two refs, but need just one. 
*/ 188 + netfs_put_subrequest(subreq, false, netfs_sreq_trace_new); 189 + trace_netfs_sreq(subreq, netfs_sreq_trace_split); 190 + list_add_tail(&subreq->rreq_link, &sublist); 191 + } 192 + } 193 + 194 + /* If we managed to use fewer subreqs, we can discard the 195 + * excess. 196 + */ 197 + while ((subreq = list_first_entry_or_null( 198 + &sublist, struct netfs_io_subrequest, rreq_link))) { 199 + trace_netfs_sreq(subreq, netfs_sreq_trace_discard); 200 + list_del(&subreq->rreq_link); 201 + netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_done); 202 + } 203 + 204 + } while (!list_empty(&queue)); 205 + 206 + return; 207 + 208 + /* If we hit ENOMEM, fail all remaining subrequests */ 209 + abandon: 210 + list_splice_init(&sublist, &queue); 211 + list_for_each_entry(subreq, &queue, rreq_link) { 212 + if (!subreq->error) 213 + subreq->error = -ENOMEM; 214 + __clear_bit(NETFS_SREQ_FAILED, &subreq->flags); 215 + __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 216 + __clear_bit(NETFS_SREQ_RETRYING, &subreq->flags); 217 + } 218 + spin_lock_bh(&rreq->lock); 219 + list_splice_tail_init(&queue, &rreq->subrequests); 220 + spin_unlock_bh(&rreq->lock); 221 + } 222 + 223 + /* 224 + * Retry reads. 225 + */ 226 + void netfs_retry_reads(struct netfs_io_request *rreq) 227 + { 228 + trace_netfs_rreq(rreq, netfs_rreq_trace_resubmit); 229 + 230 + atomic_inc(&rreq->nr_outstanding); 231 + 232 + netfs_retry_read_subrequests(rreq); 233 + 234 + if (atomic_dec_and_test(&rreq->nr_outstanding)) 235 + netfs_rreq_terminated(rreq, false); 236 + } 237 + 238 + /* 239 + * Unlock any the pages that haven't been unlocked yet due to abandoned 240 + * subrequests. 241 + */ 242 + void netfs_unlock_abandoned_read_pages(struct netfs_io_request *rreq) 243 + { 244 + struct folio_queue *p; 245 + 246 + for (p = rreq->buffer; p; p = p->next) { 247 + for (int slot = 0; slot < folioq_count(p); slot++) { 248 + struct folio *folio = folioq_folio(p, slot); 249 + 250 + if (folio && !folioq_is_marked2(p, slot)) { 251 + trace_netfs_folio(folio, netfs_folio_trace_abandon); 252 + folio_unlock(folio); 253 + } 254 + } 255 + } 256 + }
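The retry loop above coalesces adjacent retryable subrequests into one contiguous span and then re-slices that span using the limit renegotiated through ->prepare_read() (stream0->sreq_max_len). Stripped of the list handling, the re-slicing step is just the loop below; this is a userspace sketch and the 300KiB span and 128KiB limit are made-up numbers.

#include <stdio.h>
#include <stddef.h>

static size_t size_min(size_t a, size_t b) { return a < b ? a : b; }

int main(void)
{
	unsigned long long start = 1ULL << 20;	/* span rebuilt from retryable subreqs */
	size_t len = 300 * 1024;
	size_t rsize = 128 * 1024;		/* renegotiated per-subreq limit */

	while (len) {
		size_t part = size_min(len, rsize);

		printf("reissue read at %#llx for %zu bytes\n", start, part);
		start += part;
		len -= part;
	}
	return 0;
}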
+18 -9
fs/netfs/stats.c
··· 32 32 atomic_t netfs_n_wh_writethrough; 33 33 atomic_t netfs_n_wh_dio_write; 34 34 atomic_t netfs_n_wh_writepages; 35 + atomic_t netfs_n_wh_copy_to_cache; 35 36 atomic_t netfs_n_wh_wstream_conflict; 36 37 atomic_t netfs_n_wh_upload; 37 38 atomic_t netfs_n_wh_upload_done; ··· 40 39 atomic_t netfs_n_wh_write; 41 40 atomic_t netfs_n_wh_write_done; 42 41 atomic_t netfs_n_wh_write_failed; 42 + atomic_t netfs_n_wb_lock_skip; 43 + atomic_t netfs_n_wb_lock_wait; 44 + atomic_t netfs_n_folioq; 43 45 44 46 int netfs_stats_show(struct seq_file *m, void *v) 45 47 { 46 - seq_printf(m, "Netfs : DR=%u RA=%u RF=%u WB=%u WBZ=%u\n", 48 + seq_printf(m, "Reads : DR=%u RA=%u RF=%u WB=%u WBZ=%u\n", 47 49 atomic_read(&netfs_n_rh_dio_read), 48 50 atomic_read(&netfs_n_rh_readahead), 49 51 atomic_read(&netfs_n_rh_read_folio), 50 52 atomic_read(&netfs_n_rh_write_begin), 51 53 atomic_read(&netfs_n_rh_write_zskip)); 52 - seq_printf(m, "Netfs : BW=%u WT=%u DW=%u WP=%u\n", 54 + seq_printf(m, "Writes : BW=%u WT=%u DW=%u WP=%u 2C=%u\n", 53 55 atomic_read(&netfs_n_wh_buffered_write), 54 56 atomic_read(&netfs_n_wh_writethrough), 55 57 atomic_read(&netfs_n_wh_dio_write), 56 - atomic_read(&netfs_n_wh_writepages)); 57 - seq_printf(m, "Netfs : ZR=%u sh=%u sk=%u\n", 58 + atomic_read(&netfs_n_wh_writepages), 59 + atomic_read(&netfs_n_wh_copy_to_cache)); 60 + seq_printf(m, "ZeroOps: ZR=%u sh=%u sk=%u\n", 58 61 atomic_read(&netfs_n_rh_zero), 59 62 atomic_read(&netfs_n_rh_short_read), 60 63 atomic_read(&netfs_n_rh_write_zskip)); 61 - seq_printf(m, "Netfs : DL=%u ds=%u df=%u di=%u\n", 64 + seq_printf(m, "DownOps: DL=%u ds=%u df=%u di=%u\n", 62 65 atomic_read(&netfs_n_rh_download), 63 66 atomic_read(&netfs_n_rh_download_done), 64 67 atomic_read(&netfs_n_rh_download_failed), 65 68 atomic_read(&netfs_n_rh_download_instead)); 66 - seq_printf(m, "Netfs : RD=%u rs=%u rf=%u\n", 69 + seq_printf(m, "CaRdOps: RD=%u rs=%u rf=%u\n", 67 70 atomic_read(&netfs_n_rh_read), 68 71 atomic_read(&netfs_n_rh_read_done), 69 72 atomic_read(&netfs_n_rh_read_failed)); 70 - seq_printf(m, "Netfs : UL=%u us=%u uf=%u\n", 73 + seq_printf(m, "UpldOps: UL=%u us=%u uf=%u\n", 71 74 atomic_read(&netfs_n_wh_upload), 72 75 atomic_read(&netfs_n_wh_upload_done), 73 76 atomic_read(&netfs_n_wh_upload_failed)); 74 - seq_printf(m, "Netfs : WR=%u ws=%u wf=%u\n", 77 + seq_printf(m, "CaWrOps: WR=%u ws=%u wf=%u\n", 75 78 atomic_read(&netfs_n_wh_write), 76 79 atomic_read(&netfs_n_wh_write_done), 77 80 atomic_read(&netfs_n_wh_write_failed)); 78 - seq_printf(m, "Netfs : rr=%u sr=%u wsc=%u\n", 81 + seq_printf(m, "Objs : rr=%u sr=%u foq=%u wsc=%u\n", 79 82 atomic_read(&netfs_n_rh_rreq), 80 83 atomic_read(&netfs_n_rh_sreq), 84 + atomic_read(&netfs_n_folioq), 81 85 atomic_read(&netfs_n_wh_wstream_conflict)); 86 + seq_printf(m, "WbLock : skip=%u wait=%u\n", 87 + atomic_read(&netfs_n_wb_lock_skip), 88 + atomic_read(&netfs_n_wb_lock_wait)); 82 89 return fscache_stats_show(m); 83 90 } 84 91 EXPORT_SYMBOL(netfs_stats_show);
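With the stats lines renamed away from the uniform "Netfs :" prefix, each line can now be matched by its own tag. A hedged userspace example of picking the new writeback-lock counters out of /proc/fs/netfs/stats is shown below; the field layout is taken from the seq_printf() format strings above and error handling is minimal.

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/fs/netfs/stats", "r");
	char line[256];
	unsigned int skip, wait;

	if (!f) {
		perror("open");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "WbLock : skip=%u wait=%u", &skip, &wait) == 2)
			printf("writeback lock: %u skips, %u waits\n", skip, wait);
	}
	fclose(f);
	return 0;
}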
+81 -169
fs/netfs/write_collect.c
··· 15 15 16 16 /* Notes made in the collector */ 17 17 #define HIT_PENDING 0x01 /* A front op was still pending */ 18 - #define SOME_EMPTY 0x02 /* One of more streams are empty */ 19 - #define ALL_EMPTY 0x04 /* All streams are empty */ 20 - #define MAYBE_DISCONTIG 0x08 /* A front op may be discontiguous (rounded to PAGE_SIZE) */ 21 - #define NEED_REASSESS 0x10 /* Need to loop round and reassess */ 22 - #define REASSESS_DISCONTIG 0x20 /* Reassess discontiguity if contiguity advances */ 23 - #define MADE_PROGRESS 0x40 /* Made progress cleaning up a stream or the folio set */ 24 - #define BUFFERED 0x80 /* The pagecache needs cleaning up */ 25 - #define NEED_RETRY 0x100 /* A front op requests retrying */ 26 - #define SAW_FAILURE 0x200 /* One stream or hit a permanent failure */ 18 + #define NEED_REASSESS 0x02 /* Need to loop round and reassess */ 19 + #define MADE_PROGRESS 0x04 /* Made progress cleaning up a stream or the folio set */ 20 + #define BUFFERED 0x08 /* The pagecache needs cleaning up */ 21 + #define NEED_RETRY 0x10 /* A front op requests retrying */ 22 + #define SAW_FAILURE 0x20 /* One stream or hit a permanent failure */ 27 23 28 24 /* 29 25 * Successful completion of write of a folio to the server and/or cache. Note ··· 78 82 } 79 83 80 84 /* 81 - * Get hold of a folio we have under writeback. We don't want to get the 82 - * refcount on it. 83 - */ 84 - static struct folio *netfs_writeback_lookup_folio(struct netfs_io_request *wreq, loff_t pos) 85 - { 86 - XA_STATE(xas, &wreq->mapping->i_pages, pos / PAGE_SIZE); 87 - struct folio *folio; 88 - 89 - rcu_read_lock(); 90 - 91 - for (;;) { 92 - xas_reset(&xas); 93 - folio = xas_load(&xas); 94 - if (xas_retry(&xas, folio)) 95 - continue; 96 - 97 - if (!folio || xa_is_value(folio)) 98 - kdebug("R=%08x: folio %lx (%llx) not present", 99 - wreq->debug_id, xas.xa_index, pos / PAGE_SIZE); 100 - BUG_ON(!folio || xa_is_value(folio)); 101 - 102 - if (folio == xas_reload(&xas)) 103 - break; 104 - } 105 - 106 - rcu_read_unlock(); 107 - 108 - if (WARN_ONCE(!folio_test_writeback(folio), 109 - "R=%08x: folio %lx is not under writeback\n", 110 - wreq->debug_id, folio->index)) { 111 - trace_netfs_folio(folio, netfs_folio_trace_not_under_wback); 112 - } 113 - return folio; 114 - } 115 - 116 - /* 117 85 * Unlock any folios we've finished with. 
118 86 */ 119 87 static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq, 120 - unsigned long long collected_to, 121 88 unsigned int *notes) 122 89 { 90 + struct folio_queue *folioq = wreq->buffer; 91 + unsigned long long collected_to = wreq->collected_to; 92 + unsigned int slot = wreq->buffer_head_slot; 93 + 94 + if (wreq->origin == NETFS_PGPRIV2_COPY_TO_CACHE) { 95 + if (netfs_pgpriv2_unlock_copied_folios(wreq)) 96 + *notes |= MADE_PROGRESS; 97 + return; 98 + } 99 + 100 + if (slot >= folioq_nr_slots(folioq)) { 101 + folioq = netfs_delete_buffer_head(wreq); 102 + slot = 0; 103 + } 104 + 123 105 for (;;) { 124 106 struct folio *folio; 125 107 struct netfs_folio *finfo; 126 108 unsigned long long fpos, fend; 127 109 size_t fsize, flen; 128 110 129 - folio = netfs_writeback_lookup_folio(wreq, wreq->cleaned_to); 111 + folio = folioq_folio(folioq, slot); 112 + if (WARN_ONCE(!folio_test_writeback(folio), 113 + "R=%08x: folio %lx is not under writeback\n", 114 + wreq->debug_id, folio->index)) 115 + trace_netfs_folio(folio, netfs_folio_trace_not_under_wback); 130 116 131 117 fpos = folio_pos(folio); 132 118 fsize = folio_size(folio); ··· 119 141 120 142 trace_netfs_collect_folio(wreq, folio, fend, collected_to); 121 143 122 - if (fpos + fsize > wreq->contiguity) { 123 - trace_netfs_collect_contig(wreq, fpos + fsize, 124 - netfs_contig_trace_unlock); 125 - wreq->contiguity = fpos + fsize; 126 - } 127 - 128 144 /* Unlock any folio we've transferred all of. */ 129 145 if (collected_to < fend) 130 146 break; ··· 127 155 wreq->cleaned_to = fpos + fsize; 128 156 *notes |= MADE_PROGRESS; 129 157 158 + /* Clean up the head folioq. If we clear an entire folioq, then 159 + * we can get rid of it provided it's not also the tail folioq 160 + * being filled by the issuer. 161 + */ 162 + folioq_clear(folioq, slot); 163 + slot++; 164 + if (slot >= folioq_nr_slots(folioq)) { 165 + if (READ_ONCE(wreq->buffer_tail) == folioq) 166 + break; 167 + folioq = netfs_delete_buffer_head(wreq); 168 + slot = 0; 169 + } 170 + 130 171 if (fpos + fsize >= collected_to) 131 172 break; 132 173 } 174 + 175 + wreq->buffer = folioq; 176 + wreq->buffer_head_slot = slot; 133 177 } 134 178 135 179 /* ··· 176 188 if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) 177 189 break; 178 190 if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) { 191 + struct iov_iter source = subreq->io_iter; 192 + 193 + iov_iter_revert(&source, subreq->len - source.count); 179 194 __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); 180 195 netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); 181 - netfs_reissue_write(stream, subreq); 196 + netfs_reissue_write(stream, subreq, &source); 182 197 } 183 198 } 184 199 return; ··· 191 200 192 201 do { 193 202 struct netfs_io_subrequest *subreq = NULL, *from, *to, *tmp; 203 + struct iov_iter source; 194 204 unsigned long long start, len; 195 205 size_t part; 196 206 bool boundary = false; ··· 219 227 len += to->len; 220 228 } 221 229 230 + /* Determine the set of buffers we're going to use. Each 231 + * subreq gets a subset of a single overall contiguous buffer. 232 + */ 233 + netfs_reset_iter(from); 234 + source = from->io_iter; 235 + source.count = len; 236 + 222 237 /* Work through the sublist. 
*/ 223 238 subreq = from; 224 239 list_for_each_entry_from(subreq, &stream->subrequests, rreq_link) { ··· 237 238 __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); 238 239 stream->prepare_write(subreq); 239 240 240 - part = min(len, subreq->max_len); 241 + part = min(len, stream->sreq_max_len); 241 242 subreq->len = part; 242 243 subreq->start = start; 243 244 subreq->transferred = 0; ··· 248 249 boundary = true; 249 250 250 251 netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); 251 - netfs_reissue_write(stream, subreq); 252 + netfs_reissue_write(stream, subreq, &source); 252 253 if (subreq == to) 253 254 break; 254 255 } ··· 277 278 subreq = netfs_alloc_subrequest(wreq); 278 279 subreq->source = to->source; 279 280 subreq->start = start; 280 - subreq->max_len = len; 281 - subreq->max_nr_segs = INT_MAX; 282 281 subreq->debug_index = atomic_inc_return(&wreq->subreq_counter); 283 282 subreq->stream_nr = to->stream_nr; 284 283 __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); ··· 290 293 to = list_next_entry(to, rreq_link); 291 294 trace_netfs_sreq(subreq, netfs_sreq_trace_retry); 292 295 296 + stream->sreq_max_len = len; 297 + stream->sreq_max_segs = INT_MAX; 293 298 switch (stream->source) { 294 299 case NETFS_UPLOAD_TO_SERVER: 295 300 netfs_stat(&netfs_n_wh_upload); 296 - subreq->max_len = min(len, wreq->wsize); 301 + stream->sreq_max_len = umin(len, wreq->wsize); 297 302 break; 298 303 case NETFS_WRITE_TO_CACHE: 299 304 netfs_stat(&netfs_n_wh_write); ··· 306 307 307 308 stream->prepare_write(subreq); 308 309 309 - part = min(len, subreq->max_len); 310 + part = umin(len, stream->sreq_max_len); 310 311 subreq->len = subreq->transferred + part; 311 312 len -= part; 312 313 start += part; ··· 315 316 boundary = false; 316 317 } 317 318 318 - netfs_reissue_write(stream, subreq); 319 + netfs_reissue_write(stream, subreq, &source); 319 320 if (!len) 320 321 break; 321 322 ··· 376 377 { 377 378 struct netfs_io_subrequest *front, *remove; 378 379 struct netfs_io_stream *stream; 379 - unsigned long long collected_to; 380 + unsigned long long collected_to, issued_to; 380 381 unsigned int notes; 381 382 int s; 382 383 ··· 385 386 trace_netfs_rreq(wreq, netfs_rreq_trace_collect); 386 387 387 388 reassess_streams: 389 + issued_to = atomic64_read(&wreq->issued_to); 388 390 smp_rmb(); 389 391 collected_to = ULLONG_MAX; 390 - if (wreq->origin == NETFS_WRITEBACK) 391 - notes = ALL_EMPTY | BUFFERED | MAYBE_DISCONTIG; 392 - else if (wreq->origin == NETFS_WRITETHROUGH) 393 - notes = ALL_EMPTY | BUFFERED; 392 + if (wreq->origin == NETFS_WRITEBACK || 393 + wreq->origin == NETFS_WRITETHROUGH || 394 + wreq->origin == NETFS_PGPRIV2_COPY_TO_CACHE) 395 + notes = BUFFERED; 394 396 else 395 - notes = ALL_EMPTY; 397 + notes = 0; 396 398 397 399 /* Remove completed subrequests from the front of the streams and 398 400 * advance the completion point on each stream. We stop when we hit 399 401 * something that's in progress. The issuer thread may be adding stuff 400 402 * to the tail whilst we're doing this. 401 - * 402 - * We must not, however, merge in discontiguities that span whole 403 - * folios that aren't under writeback. This is made more complicated 404 - * by the folios in the gap being of unpredictable sizes - if they even 405 - * exist - but we don't want to look them up. 
406 403 */ 407 404 for (s = 0; s < NR_IO_STREAMS; s++) { 408 - loff_t rstart, rend; 409 - 410 405 stream = &wreq->io_streams[s]; 411 406 /* Read active flag before list pointers */ 412 407 if (!smp_load_acquire(&stream->active)) ··· 412 419 //_debug("sreq [%x] %llx %zx/%zx", 413 420 // front->debug_index, front->start, front->transferred, front->len); 414 421 415 - /* Stall if there may be a discontinuity. */ 416 - rstart = round_down(front->start, PAGE_SIZE); 417 - if (rstart > wreq->contiguity) { 418 - if (wreq->contiguity > stream->collected_to) { 419 - trace_netfs_collect_gap(wreq, stream, 420 - wreq->contiguity, 'D'); 421 - stream->collected_to = wreq->contiguity; 422 - } 423 - notes |= REASSESS_DISCONTIG; 424 - break; 422 + if (stream->collected_to < front->start) { 423 + trace_netfs_collect_gap(wreq, stream, issued_to, 'F'); 424 + stream->collected_to = front->start; 425 425 } 426 - rend = round_up(front->start + front->len, PAGE_SIZE); 427 - if (rend > wreq->contiguity) { 428 - trace_netfs_collect_contig(wreq, rend, 429 - netfs_contig_trace_collect); 430 - wreq->contiguity = rend; 431 - if (notes & REASSESS_DISCONTIG) 432 - notes |= NEED_REASSESS; 433 - } 434 - notes &= ~MAYBE_DISCONTIG; 435 426 436 427 /* Stall if the front is still undergoing I/O. */ 437 428 if (test_bit(NETFS_SREQ_IN_PROGRESS, &front->flags)) { ··· 450 473 451 474 cancel: 452 475 /* Remove if completely consumed. */ 453 - spin_lock(&wreq->lock); 476 + spin_lock_bh(&wreq->lock); 454 477 455 478 remove = front; 456 479 list_del_init(&front->rreq_link); 457 480 front = list_first_entry_or_null(&stream->subrequests, 458 481 struct netfs_io_subrequest, rreq_link); 459 482 stream->front = front; 460 - if (!front) { 461 - unsigned long long jump_to = atomic64_read(&wreq->issued_to); 462 - 463 - if (stream->collected_to < jump_to) { 464 - trace_netfs_collect_gap(wreq, stream, jump_to, 'A'); 465 - stream->collected_to = jump_to; 466 - } 467 - } 468 - 469 - spin_unlock(&wreq->lock); 483 + spin_unlock_bh(&wreq->lock); 470 484 netfs_put_subrequest(remove, false, 471 485 notes & SAW_FAILURE ? 472 486 netfs_sreq_trace_put_cancel : 473 487 netfs_sreq_trace_put_done); 474 488 } 475 489 476 - if (front) 477 - notes &= ~ALL_EMPTY; 478 - else 479 - notes |= SOME_EMPTY; 490 + /* If we have an empty stream, we need to jump it forward 491 + * otherwise the collection point will never advance. 492 + */ 493 + if (!front && issued_to > stream->collected_to) { 494 + trace_netfs_collect_gap(wreq, stream, issued_to, 'E'); 495 + stream->collected_to = issued_to; 496 + } 480 497 481 498 if (stream->collected_to < collected_to) 482 499 collected_to = stream->collected_to; ··· 478 507 479 508 if (collected_to != ULLONG_MAX && collected_to > wreq->collected_to) 480 509 wreq->collected_to = collected_to; 481 - 482 - /* If we have an empty stream, we need to jump it forward over any gap 483 - * otherwise the collection point will never advance. 484 - * 485 - * Note that the issuer always adds to the stream with the lowest 486 - * so-far submitted start, so if we see two consecutive subreqs in one 487 - * stream with nothing between then in another stream, then the second 488 - * stream has a gap that can be jumped. 
489 - */ 490 - if (notes & SOME_EMPTY) { 491 - unsigned long long jump_to = wreq->start + READ_ONCE(wreq->submitted); 492 - 493 - for (s = 0; s < NR_IO_STREAMS; s++) { 494 - stream = &wreq->io_streams[s]; 495 - if (stream->active && 496 - stream->front && 497 - stream->front->start < jump_to) 498 - jump_to = stream->front->start; 499 - } 500 - 501 - for (s = 0; s < NR_IO_STREAMS; s++) { 502 - stream = &wreq->io_streams[s]; 503 - if (stream->active && 504 - !stream->front && 505 - stream->collected_to < jump_to) { 506 - trace_netfs_collect_gap(wreq, stream, jump_to, 'B'); 507 - stream->collected_to = jump_to; 508 - } 509 - } 510 - } 511 510 512 511 for (s = 0; s < NR_IO_STREAMS; s++) { 513 512 stream = &wreq->io_streams[s]; ··· 489 548 490 549 /* Unlock any folios that we have now finished with. */ 491 550 if (notes & BUFFERED) { 492 - unsigned long long clean_to = min(wreq->collected_to, wreq->contiguity); 493 - 494 - if (wreq->cleaned_to < clean_to) 495 - netfs_writeback_unlock_folios(wreq, clean_to, &notes); 551 + if (wreq->cleaned_to < wreq->collected_to) 552 + netfs_writeback_unlock_folios(wreq, &notes); 496 553 } else { 497 554 wreq->cleaned_to = wreq->collected_to; 498 555 } 499 556 500 557 // TODO: Discard encryption buffers 501 - 502 - /* If all streams are discontiguous with the last folio we cleared, we 503 - * may need to skip a set of folios. 504 - */ 505 - if ((notes & (MAYBE_DISCONTIG | ALL_EMPTY)) == MAYBE_DISCONTIG) { 506 - unsigned long long jump_to = ULLONG_MAX; 507 - 508 - for (s = 0; s < NR_IO_STREAMS; s++) { 509 - stream = &wreq->io_streams[s]; 510 - if (stream->active && stream->front && 511 - stream->front->start < jump_to) 512 - jump_to = stream->front->start; 513 - } 514 - 515 - trace_netfs_collect_contig(wreq, jump_to, netfs_contig_trace_jump); 516 - wreq->contiguity = jump_to; 517 - wreq->cleaned_to = jump_to; 518 - wreq->collected_to = jump_to; 519 - for (s = 0; s < NR_IO_STREAMS; s++) { 520 - stream = &wreq->io_streams[s]; 521 - if (stream->collected_to < jump_to) 522 - stream->collected_to = jump_to; 523 - } 524 - //cond_resched(); 525 - notes |= MADE_PROGRESS; 526 - goto reassess_streams; 527 - } 528 558 529 559 if (notes & NEED_RETRY) 530 560 goto need_retry;
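The rewritten write collector above drops the wreq->contiguity tracking entirely: each stream's collected_to now advances on its own, an empty stream is jumped forward to the issue point (the 'E' gap trace), and the overall collection point is simply the minimum across active streams. A userspace sketch of that reduction follows; struct model_stream and the byte values are invented.

#include <stdio.h>

#define NR_STREAMS 2

struct model_stream {
	int active;
	int has_front;			/* still has an in-flight subrequest */
	unsigned long long collected_to;
};

int main(void)
{
	unsigned long long issued_to = 1 * 1024 * 1024;
	unsigned long long collected_to = ~0ULL;
	struct model_stream s[NR_STREAMS] = {
		{ .active = 1, .has_front = 1, .collected_to = 256 * 1024 },	/* upload */
		{ .active = 1, .has_front = 0, .collected_to = 64 * 1024 },	/* cache  */
	};

	for (int i = 0; i < NR_STREAMS; i++) {
		if (!s[i].active)
			continue;
		/* An empty stream is advanced to the issue point so it cannot
		 * hold the overall minimum back. */
		if (!s[i].has_front && issued_to > s[i].collected_to)
			s[i].collected_to = issued_to;
		if (s[i].collected_to < collected_to)
			collected_to = s[i].collected_to;
	}
	printf("folios can be unlocked up to %llu\n", collected_to);
	return 0;
}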
+48 -45
fs/netfs/write_issue.c
··· 95 95 struct netfs_io_request *wreq; 96 96 struct netfs_inode *ictx; 97 97 bool is_buffered = (origin == NETFS_WRITEBACK || 98 - origin == NETFS_WRITETHROUGH); 98 + origin == NETFS_WRITETHROUGH || 99 + origin == NETFS_PGPRIV2_COPY_TO_CACHE); 99 100 100 101 wreq = netfs_alloc_request(mapping, file, start, 0, origin); 101 102 if (IS_ERR(wreq)) ··· 108 107 if (is_buffered && netfs_is_cache_enabled(ictx)) 109 108 fscache_begin_write_operation(&wreq->cache_resources, netfs_i_cookie(ictx)); 110 109 111 - wreq->contiguity = wreq->start; 112 110 wreq->cleaned_to = wreq->start; 113 - INIT_WORK(&wreq->work, netfs_write_collection_worker); 114 111 115 112 wreq->io_streams[0].stream_nr = 0; 116 113 wreq->io_streams[0].source = NETFS_UPLOAD_TO_SERVER; ··· 157 158 subreq = netfs_alloc_subrequest(wreq); 158 159 subreq->source = stream->source; 159 160 subreq->start = start; 160 - subreq->max_len = ULONG_MAX; 161 - subreq->max_nr_segs = INT_MAX; 162 161 subreq->stream_nr = stream->stream_nr; 162 + subreq->io_iter = wreq->io_iter; 163 163 164 164 _enter("R=%x[%x]", wreq->debug_id, subreq->debug_index); 165 165 166 - trace_netfs_sreq_ref(wreq->debug_id, subreq->debug_index, 167 - refcount_read(&subreq->ref), 168 - netfs_sreq_trace_new); 169 - 170 166 trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); 171 167 168 + stream->sreq_max_len = UINT_MAX; 169 + stream->sreq_max_segs = INT_MAX; 172 170 switch (stream->source) { 173 171 case NETFS_UPLOAD_TO_SERVER: 174 172 netfs_stat(&netfs_n_wh_upload); 175 - subreq->max_len = wreq->wsize; 173 + stream->sreq_max_len = wreq->wsize; 176 174 break; 177 175 case NETFS_WRITE_TO_CACHE: 178 176 netfs_stat(&netfs_n_wh_write); ··· 188 192 * the list. The collector only goes nextwards and uses the lock to 189 193 * remove entries off of the front. 
190 194 */ 191 - spin_lock(&wreq->lock); 195 + spin_lock_bh(&wreq->lock); 192 196 list_add_tail(&subreq->rreq_link, &stream->subrequests); 193 197 if (list_is_first(&subreq->rreq_link, &stream->subrequests)) { 194 198 stream->front = subreq; ··· 199 203 } 200 204 } 201 205 202 - spin_unlock(&wreq->lock); 206 + spin_unlock_bh(&wreq->lock); 203 207 204 208 stream->construct = subreq; 205 209 } ··· 219 223 if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) 220 224 return netfs_write_subrequest_terminated(subreq, subreq->error, false); 221 225 222 - // TODO: Use encrypted buffer 223 - if (test_bit(NETFS_RREQ_USE_IO_ITER, &wreq->flags)) { 224 - subreq->io_iter = wreq->io_iter; 225 - iov_iter_advance(&subreq->io_iter, 226 - subreq->start + subreq->transferred - wreq->start); 227 - iov_iter_truncate(&subreq->io_iter, 228 - subreq->len - subreq->transferred); 229 - } else { 230 - iov_iter_xarray(&subreq->io_iter, ITER_SOURCE, &wreq->mapping->i_pages, 231 - subreq->start + subreq->transferred, 232 - subreq->len - subreq->transferred); 233 - } 234 - 235 226 trace_netfs_sreq(subreq, netfs_sreq_trace_submit); 236 227 stream->issue_write(subreq); 237 228 } 238 229 239 230 void netfs_reissue_write(struct netfs_io_stream *stream, 240 - struct netfs_io_subrequest *subreq) 231 + struct netfs_io_subrequest *subreq, 232 + struct iov_iter *source) 241 233 { 234 + size_t size = subreq->len - subreq->transferred; 235 + 236 + // TODO: Use encrypted buffer 237 + subreq->io_iter = *source; 238 + iov_iter_advance(source, size); 239 + iov_iter_truncate(&subreq->io_iter, size); 240 + 242 241 __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); 243 242 netfs_do_issue_write(stream, subreq); 244 243 } 245 244 246 - static void netfs_issue_write(struct netfs_io_request *wreq, 247 - struct netfs_io_stream *stream) 245 + void netfs_issue_write(struct netfs_io_request *wreq, 246 + struct netfs_io_stream *stream) 248 247 { 249 248 struct netfs_io_subrequest *subreq = stream->construct; 250 249 251 250 if (!subreq) 252 251 return; 253 252 stream->construct = NULL; 254 - 255 - if (subreq->start + subreq->len > wreq->start + wreq->submitted) 256 - WRITE_ONCE(wreq->submitted, subreq->start + subreq->len - wreq->start); 253 + subreq->io_iter.count = subreq->len; 257 254 netfs_do_issue_write(stream, subreq); 258 255 } 259 256 ··· 279 290 netfs_prepare_write(wreq, stream, start); 280 291 subreq = stream->construct; 281 292 282 - part = min(subreq->max_len - subreq->len, len); 283 - _debug("part %zx/%zx %zx/%zx", subreq->len, subreq->max_len, part, len); 293 + part = umin(stream->sreq_max_len - subreq->len, len); 294 + _debug("part %zx/%zx %zx/%zx", subreq->len, stream->sreq_max_len, part, len); 284 295 subreq->len += part; 285 296 subreq->nr_segs++; 297 + stream->submit_extendable_to -= part; 286 298 287 - if (subreq->len >= subreq->max_len || 288 - subreq->nr_segs >= subreq->max_nr_segs || 299 + if (subreq->len >= stream->sreq_max_len || 300 + subreq->nr_segs >= stream->sreq_max_segs || 289 301 to_eof) { 290 302 netfs_issue_write(wreq, stream); 291 303 subreq = NULL; ··· 400 410 folio_unlock(folio); 401 411 402 412 if (fgroup == NETFS_FOLIO_COPY_TO_CACHE) { 403 - if (!fscache_resources_valid(&wreq->cache_resources)) { 413 + if (!cache->avail) { 404 414 trace_netfs_folio(folio, netfs_folio_trace_cancel_copy); 405 415 netfs_issue_write(wreq, upload); 406 416 netfs_folio_written_back(folio); 407 417 return 0; 408 418 } 409 419 trace_netfs_folio(folio, netfs_folio_trace_store_copy); 420 + } else if (!upload->avail && !cache->avail) { 421 
+ trace_netfs_folio(folio, netfs_folio_trace_cancel_store); 422 + netfs_folio_written_back(folio); 423 + return 0; 410 424 } else if (!upload->construct) { 411 425 trace_netfs_folio(folio, netfs_folio_trace_store); 412 426 } else { 413 427 trace_netfs_folio(folio, netfs_folio_trace_store_plus); 414 428 } 429 + 430 + /* Attach the folio to the rolling buffer. */ 431 + netfs_buffer_append_folio(wreq, folio, false); 415 432 416 433 /* Move the submission point forward to allow for write-streaming data 417 434 * not starting at the front of the page. We don't do write-streaming ··· 429 432 */ 430 433 for (int s = 0; s < NR_IO_STREAMS; s++) { 431 434 stream = &wreq->io_streams[s]; 432 - stream->submit_max_len = fsize; 433 435 stream->submit_off = foff; 434 436 stream->submit_len = flen; 435 437 if ((stream->source == NETFS_WRITE_TO_CACHE && streamw) || ··· 436 440 fgroup == NETFS_FOLIO_COPY_TO_CACHE)) { 437 441 stream->submit_off = UINT_MAX; 438 442 stream->submit_len = 0; 439 - stream->submit_max_len = 0; 440 443 } 441 444 } 442 445 ··· 462 467 if (choose_s < 0) 463 468 break; 464 469 stream = &wreq->io_streams[choose_s]; 470 + wreq->io_iter.iov_offset = stream->submit_off; 465 471 472 + atomic64_set(&wreq->issued_to, fpos + stream->submit_off); 473 + stream->submit_extendable_to = fsize - stream->submit_off; 466 474 part = netfs_advance_write(wreq, stream, fpos + stream->submit_off, 467 475 stream->submit_len, to_eof); 468 - atomic64_set(&wreq->issued_to, fpos + stream->submit_off); 469 476 stream->submit_off += part; 470 - stream->submit_max_len -= part; 471 477 if (part > stream->submit_len) 472 478 stream->submit_len = 0; 473 479 else ··· 477 481 debug = true; 478 482 } 479 483 484 + wreq->io_iter.iov_offset = 0; 485 + iov_iter_advance(&wreq->io_iter, fsize); 480 486 atomic64_set(&wreq->issued_to, fpos + fsize); 481 487 482 488 if (!debug) ··· 503 505 struct folio *folio; 504 506 int error = 0; 505 507 506 - if (wbc->sync_mode == WB_SYNC_ALL) 508 + if (!mutex_trylock(&ictx->wb_lock)) { 509 + if (wbc->sync_mode == WB_SYNC_NONE) { 510 + netfs_stat(&netfs_n_wb_lock_skip); 511 + return 0; 512 + } 513 + netfs_stat(&netfs_n_wb_lock_wait); 507 514 mutex_lock(&ictx->wb_lock); 508 - else if (!mutex_trylock(&ictx->wb_lock)) 509 - return 0; 515 + } 510 516 511 517 /* Need the first folio to be able to set up the op. */ 512 518 folio = writeback_iter(mapping, wbc, NULL, &error); ··· 527 525 netfs_stat(&netfs_n_wh_writepages); 528 526 529 527 do { 530 - _debug("wbiter %lx %llx", folio->index, wreq->start + wreq->submitted); 528 + _debug("wbiter %lx %llx", folio->index, atomic64_read(&wreq->issued_to)); 531 529 532 530 /* It appears we don't have to handle cyclic writeback wrapping. */ 533 - WARN_ON_ONCE(wreq && folio_pos(folio) < wreq->start + wreq->submitted); 531 + WARN_ON_ONCE(wreq && folio_pos(folio) < atomic64_read(&wreq->issued_to)); 534 532 535 533 if (netfs_folio_group(folio) != NETFS_FOLIO_COPY_TO_CACHE && 536 534 unlikely(!test_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))) { ··· 674 672 part = netfs_advance_write(wreq, upload, start, len, false); 675 673 start += part; 676 674 len -= part; 675 + iov_iter_advance(&wreq->io_iter, part); 677 676 if (test_bit(NETFS_RREQ_PAUSE, &wreq->flags)) { 678 677 trace_netfs_rreq(wreq, netfs_rreq_trace_wait_pause); 679 678 wait_on_bit(&wreq->flags, NETFS_RREQ_PAUSE, TASK_UNINTERRUPTIBLE);
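One behavioural change worth noting above is the writeback-lock handling in the ->writepages path: an opportunistic (WB_SYNC_NONE) pass now skips the inode entirely if ictx->wb_lock is contended, while a data-integrity pass waits for it, and the two outcomes are counted in the new netfs_n_wb_lock_skip/wait stats. Below is a userspace sketch of the same policy with pthreads standing in for the kernel mutex; begin_writeback() and the counters are invented for illustration.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t wb_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int lock_skip, lock_wait;

/* Returns true when the caller holds wb_lock and should proceed. */
static bool begin_writeback(bool for_sync)
{
	if (pthread_mutex_trylock(&wb_lock) != 0) {
		if (!for_sync) {
			lock_skip++;		/* background pass just backs off */
			return false;
		}
		lock_wait++;			/* sync writeback waits its turn */
		pthread_mutex_lock(&wb_lock);
	}
	return true;
}

int main(void)
{
	/* Uncontended: the trylock succeeds and writeback proceeds. */
	if (begin_writeback(false))
		pthread_mutex_unlock(&wb_lock);

	/* Contended: another writer holds the lock, so a background pass
	 * is skipped rather than blocking behind it. */
	pthread_mutex_lock(&wb_lock);
	if (!begin_writeback(false))
		printf("background writeback skipped\n");
	pthread_mutex_unlock(&wb_lock);

	printf("skips=%u waits=%u\n", lock_skip, lock_wait);
	return 0;
}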
+6 -13
fs/nfs/fscache.c
··· 267 267 rreq->debug_id = atomic_inc_return(&nfs_netfs_debug_id); 268 268 /* [DEPRECATED] Use PG_private_2 to mark folio being written to the cache. */ 269 269 __set_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags); 270 + rreq->io_streams[0].sreq_max_len = NFS_SB(rreq->inode->i_sb)->rsize; 270 271 271 272 return 0; 272 273 } ··· 289 288 return netfs; 290 289 } 291 290 292 - static bool nfs_netfs_clamp_length(struct netfs_io_subrequest *sreq) 293 - { 294 - size_t rsize = NFS_SB(sreq->rreq->inode->i_sb)->rsize; 295 - 296 - sreq->len = min(sreq->len, rsize); 297 - return true; 298 - } 299 - 300 291 static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq) 301 292 { 302 293 struct nfs_netfs_io_data *netfs; ··· 297 304 struct nfs_open_context *ctx = sreq->rreq->netfs_priv; 298 305 struct page *page; 299 306 unsigned long idx; 307 + pgoff_t start, last; 300 308 int err; 301 - pgoff_t start = (sreq->start + sreq->transferred) >> PAGE_SHIFT; 302 - pgoff_t last = ((sreq->start + sreq->len - 303 - sreq->transferred - 1) >> PAGE_SHIFT); 309 + 310 + start = (sreq->start + sreq->transferred) >> PAGE_SHIFT; 311 + last = ((sreq->start + sreq->len - sreq->transferred - 1) >> PAGE_SHIFT); 304 312 305 313 nfs_pageio_init_read(&pgio, inode, false, 306 314 &nfs_async_read_completion_ops); 307 315 308 316 netfs = nfs_netfs_alloc(sreq); 309 317 if (!netfs) 310 - return netfs_subreq_terminated(sreq, -ENOMEM, false); 318 + return netfs_read_subreq_terminated(sreq, -ENOMEM, false); 311 319 312 320 pgio.pg_netfs = netfs; /* used in completion */ 313 321 ··· 374 380 .init_request = nfs_netfs_init_request, 375 381 .free_request = nfs_netfs_free_request, 376 382 .issue_read = nfs_netfs_issue_read, 377 - .clamp_length = nfs_netfs_clamp_length 378 383 };
+3 -4
fs/nfs/fscache.h
··· 60 60 61 61 static inline void nfs_netfs_put(struct nfs_netfs_io_data *netfs) 62 62 { 63 - ssize_t final_len; 64 - 65 63 /* Only the last RPC completion should call netfs_subreq_terminated() */ 66 64 if (!refcount_dec_and_test(&netfs->refcount)) 67 65 return; ··· 72 74 * Correct the final length here to be no larger than the netfs subrequest 73 75 * length, and thus avoid netfs's "Subreq overread" warning message. 74 76 */ 75 - final_len = min_t(s64, netfs->sreq->len, atomic64_read(&netfs->transferred)); 76 - netfs_subreq_terminated(netfs->sreq, netfs->error ?: final_len, false); 77 + netfs->sreq->transferred = min_t(s64, netfs->sreq->len, 78 + atomic64_read(&netfs->transferred)); 79 + netfs_read_subreq_terminated(netfs->sreq, netfs->error, false); 77 80 kfree(netfs); 78 81 } 79 82 static inline void nfs_netfs_inode_init(struct nfs_inode *nfsi)
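The nfs change above switches from reporting a single "error or length" value to the new two-step interface: the transport fills in subreq->transferred, clamped so it can never exceed subreq->len, and then signals termination with the error code alone. The clamp itself reduces to the trivial userspace restatement below, with made-up numbers.

#include <stdio.h>

static long long min_ll(long long a, long long b) { return a < b ? a : b; }

int main(void)
{
	long long sreq_len  = 16384;	/* what netfs asked this subreq to read */
	long long rpc_bytes = 20480;	/* what the RPC completions accumulated */
	int error = 0;			/* now reported separately from the length */

	long long transferred = min_ll(sreq_len, rpc_bytes);

	printf("transferred=%lld error=%d\n", transferred, error);
	return 0;
}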
+15 -129
fs/smb/client/cifsencrypt.c
··· 21 21 #include <linux/random.h> 22 22 #include <linux/highmem.h> 23 23 #include <linux/fips.h> 24 + #include <linux/iov_iter.h> 24 25 #include "../common/arc4.h" 25 26 #include <crypto/aead.h> 26 27 27 - /* 28 - * Hash data from a BVEC-type iterator. 29 - */ 30 - static int cifs_shash_bvec(const struct iov_iter *iter, ssize_t maxsize, 31 - struct shash_desc *shash) 28 + static size_t cifs_shash_step(void *iter_base, size_t progress, size_t len, 29 + void *priv, void *priv2) 32 30 { 33 - const struct bio_vec *bv = iter->bvec; 34 - unsigned long start = iter->iov_offset; 35 - unsigned int i; 36 - void *p; 37 - int ret; 31 + struct shash_desc *shash = priv; 32 + int ret, *pret = priv2; 38 33 39 - for (i = 0; i < iter->nr_segs; i++) { 40 - size_t off, len; 41 - 42 - len = bv[i].bv_len; 43 - if (start >= len) { 44 - start -= len; 45 - continue; 46 - } 47 - 48 - len = min_t(size_t, maxsize, len - start); 49 - off = bv[i].bv_offset + start; 50 - 51 - p = kmap_local_page(bv[i].bv_page); 52 - ret = crypto_shash_update(shash, p + off, len); 53 - kunmap_local(p); 54 - if (ret < 0) 55 - return ret; 56 - 57 - maxsize -= len; 58 - if (maxsize <= 0) 59 - break; 60 - start = 0; 34 + ret = crypto_shash_update(shash, iter_base, len); 35 + if (ret < 0) { 36 + *pret = ret; 37 + return len; 61 38 } 62 - 63 - return 0; 64 - } 65 - 66 - /* 67 - * Hash data from a KVEC-type iterator. 68 - */ 69 - static int cifs_shash_kvec(const struct iov_iter *iter, ssize_t maxsize, 70 - struct shash_desc *shash) 71 - { 72 - const struct kvec *kv = iter->kvec; 73 - unsigned long start = iter->iov_offset; 74 - unsigned int i; 75 - int ret; 76 - 77 - for (i = 0; i < iter->nr_segs; i++) { 78 - size_t len; 79 - 80 - len = kv[i].iov_len; 81 - if (start >= len) { 82 - start -= len; 83 - continue; 84 - } 85 - 86 - len = min_t(size_t, maxsize, len - start); 87 - ret = crypto_shash_update(shash, kv[i].iov_base + start, len); 88 - if (ret < 0) 89 - return ret; 90 - maxsize -= len; 91 - 92 - if (maxsize <= 0) 93 - break; 94 - start = 0; 95 - } 96 - 97 - return 0; 98 - } 99 - 100 - /* 101 - * Hash data from an XARRAY-type iterator. 
102 - */ 103 - static ssize_t cifs_shash_xarray(const struct iov_iter *iter, ssize_t maxsize, 104 - struct shash_desc *shash) 105 - { 106 - struct folio *folios[16], *folio; 107 - unsigned int nr, i, j, npages; 108 - loff_t start = iter->xarray_start + iter->iov_offset; 109 - pgoff_t last, index = start / PAGE_SIZE; 110 - ssize_t ret = 0; 111 - size_t len, offset, foffset; 112 - void *p; 113 - 114 - if (maxsize == 0) 115 - return 0; 116 - 117 - last = (start + maxsize - 1) / PAGE_SIZE; 118 - do { 119 - nr = xa_extract(iter->xarray, (void **)folios, index, last, 120 - ARRAY_SIZE(folios), XA_PRESENT); 121 - if (nr == 0) 122 - return -EIO; 123 - 124 - for (i = 0; i < nr; i++) { 125 - folio = folios[i]; 126 - npages = folio_nr_pages(folio); 127 - foffset = start - folio_pos(folio); 128 - offset = foffset % PAGE_SIZE; 129 - for (j = foffset / PAGE_SIZE; j < npages; j++) { 130 - len = min_t(size_t, maxsize, PAGE_SIZE - offset); 131 - p = kmap_local_page(folio_page(folio, j)); 132 - ret = crypto_shash_update(shash, p + offset, len); 133 - kunmap_local(p); 134 - if (ret < 0) 135 - return ret; 136 - maxsize -= len; 137 - if (maxsize <= 0) 138 - return 0; 139 - start += len; 140 - offset = 0; 141 - index++; 142 - } 143 - } 144 - } while (nr == ARRAY_SIZE(folios)); 145 39 return 0; 146 40 } 147 41 ··· 45 151 static int cifs_shash_iter(const struct iov_iter *iter, size_t maxsize, 46 152 struct shash_desc *shash) 47 153 { 48 - if (maxsize == 0) 49 - return 0; 154 + struct iov_iter tmp_iter = *iter; 155 + int err = -EIO; 50 156 51 - switch (iov_iter_type(iter)) { 52 - case ITER_BVEC: 53 - return cifs_shash_bvec(iter, maxsize, shash); 54 - case ITER_KVEC: 55 - return cifs_shash_kvec(iter, maxsize, shash); 56 - case ITER_XARRAY: 57 - return cifs_shash_xarray(iter, maxsize, shash); 58 - default: 59 - pr_err("cifs_shash_iter(%u) unsupported\n", iov_iter_type(iter)); 60 - WARN_ON_ONCE(1); 61 - return -EIO; 62 - } 157 + if (iterate_and_advance_kernel(&tmp_iter, maxsize, shash, &err, 158 + cifs_shash_step) != maxsize) 159 + return err; 160 + return 0; 63 161 } 64 162 65 163 int __cifs_calc_signature(struct smb_rqst *rqst,
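Editor's note: the rewritten cifsencrypt.c leans on the step-function contract of iterate_and_advance_kernel() (documented in the include/linux/iov_iter.h hunk further down): each step is handed a mapped kernel address and returns how much of the segment it did not consume. A hedged sketch of the same hashing pattern, with hypothetical example_* names:

#include <linux/iov_iter.h>
#include <crypto/hash.h>

/* Returning 0 means "segment fully consumed"; returning len aborts the walk. */
static size_t example_hash_step(void *base, size_t progress, size_t len,
				void *priv, void *priv2)
{
	struct shash_desc *desc = priv;
	int *err = priv2;

	*err = crypto_shash_update(desc, base, len);
	return *err < 0 ? len : 0;
}

/* Hash @count bytes from @iter without disturbing the caller's iterator. */
static int example_hash_iter(const struct iov_iter *iter, size_t count,
			     struct shash_desc *desc)
{
	struct iov_iter tmp = *iter;	/* advance a private copy only */
	int err = 0;

	if (iterate_and_advance_kernel(&tmp, count, desc, &err,
				       example_hash_step) != count)
		return err ?: -EIO;
	return err;
}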
+1 -3
fs/smb/client/cifsglob.h
··· 255 255 struct kvec *rq_iov; /* array of kvecs */ 256 256 unsigned int rq_nvec; /* number of kvecs in array */ 257 257 struct iov_iter rq_iter; /* Data iterator */ 258 - struct xarray rq_buffer; /* Page buffer for encryption */ 258 + struct folio_queue *rq_buffer; /* Buffer for encryption */ 259 259 }; 260 260 261 261 struct mid_q_entry; ··· 1485 1485 struct cifs_io_request *req; 1486 1486 }; 1487 1487 ssize_t got_bytes; 1488 - size_t actual_len; 1489 1488 unsigned int xid; 1490 1489 int result; 1491 1490 bool have_xid; ··· 1549 1550 #define CIFS_INO_DELETE_PENDING (3) /* delete pending on server */ 1550 1551 #define CIFS_INO_INVALID_MAPPING (4) /* pagecache is invalid */ 1551 1552 #define CIFS_INO_LOCK (5) /* lock bit for synchronization */ 1552 - #define CIFS_INO_MODIFIED_ATTR (6) /* Indicate change in mtime/ctime */ 1553 1553 #define CIFS_INO_CLOSE_ON_LOCK (7) /* Not to defer the close when lock is set */ 1554 1554 unsigned long flags; 1555 1555 spinlock_t writers_lock;
+5 -6
fs/smb/client/cifssmb.c
··· 1266 1266 struct cifs_io_subrequest *rdata = 1267 1267 container_of(work, struct cifs_io_subrequest, subreq.work); 1268 1268 1269 - netfs_subreq_terminated(&rdata->subreq, 1270 - (rdata->result == 0 || rdata->result == -EAGAIN) ? 1271 - rdata->got_bytes : rdata->result, true); 1269 + netfs_read_subreq_terminated(&rdata->subreq, rdata->result, true); 1272 1270 } 1273 1271 1274 1272 static void ··· 1325 1327 __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); 1326 1328 rdata->result = 0; 1327 1329 } else { 1328 - if (rdata->got_bytes < rdata->actual_len && 1329 - rdata->subreq.start + rdata->subreq.transferred + rdata->got_bytes == 1330 - ictx->remote_i_size) { 1330 + size_t trans = rdata->subreq.transferred + rdata->got_bytes; 1331 + if (trans < rdata->subreq.len && 1332 + rdata->subreq.start + trans == ictx->remote_i_size) { 1331 1333 __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); 1332 1334 rdata->result = 0; 1333 1335 } 1334 1336 } 1335 1337 1336 1338 rdata->credits.value = 0; 1339 + rdata->subreq.transferred += rdata->got_bytes; 1337 1340 INIT_WORK(&rdata->subreq.work, cifs_readv_worker); 1338 1341 queue_work(cifsiod_wq, &rdata->subreq.work); 1339 1342 release_mid(mid);
+32 -64
fs/smb/client/file.c
··· 49 49 struct cifs_io_subrequest *wdata = 50 50 container_of(subreq, struct cifs_io_subrequest, subreq); 51 51 struct cifs_io_request *req = wdata->req; 52 + struct netfs_io_stream *stream = &req->rreq.io_streams[subreq->stream_nr]; 52 53 struct TCP_Server_Info *server; 53 54 struct cifsFileInfo *open_file = req->cfile; 54 55 size_t wsize = req->rreq.wsize; ··· 74 73 } 75 74 } 76 75 77 - rc = server->ops->wait_mtu_credits(server, wsize, &wdata->subreq.max_len, 76 + rc = server->ops->wait_mtu_credits(server, wsize, &stream->sreq_max_len, 78 77 &wdata->credits); 79 78 if (rc < 0) { 80 79 subreq->error = rc; ··· 93 92 94 93 #ifdef CONFIG_CIFS_SMB_DIRECT 95 94 if (server->smbd_conn) 96 - subreq->max_nr_segs = server->smbd_conn->max_frmr_depth; 95 + stream->sreq_max_segs = server->smbd_conn->max_frmr_depth; 97 96 #endif 98 97 } 99 98 ··· 112 111 goto fail; 113 112 } 114 113 115 - wdata->actual_len = wdata->subreq.len; 116 114 rc = adjust_credits(wdata->server, wdata, cifs_trace_rw_credits_issue_write_adjust); 117 115 if (rc) 118 116 goto fail; ··· 140 140 } 141 141 142 142 /* 143 - * Split the read up according to how many credits we can get for each piece. 144 - * It's okay to sleep here if we need to wait for more credit to become 145 - * available. 146 - * 147 - * We also choose the server and allocate an operation ID to be cleaned up 148 - * later. 143 + * Negotiate the size of a read operation on behalf of the netfs library. 149 144 */ 150 - static bool cifs_clamp_length(struct netfs_io_subrequest *subreq) 145 + static int cifs_prepare_read(struct netfs_io_subrequest *subreq) 151 146 { 152 147 struct netfs_io_request *rreq = subreq->rreq; 153 148 struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq); 154 149 struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq); 155 150 struct TCP_Server_Info *server = req->server; 156 151 struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb); 157 - size_t rsize; 158 - int rc; 152 + size_t size; 153 + int rc = 0; 159 154 160 - rdata->xid = get_xid(); 161 - rdata->have_xid = true; 155 + if (!rdata->have_xid) { 156 + rdata->xid = get_xid(); 157 + rdata->have_xid = true; 158 + } 162 159 rdata->server = server; 163 160 164 161 if (cifs_sb->ctx->rsize == 0) ··· 163 166 server->ops->negotiate_rsize(tlink_tcon(req->cfile->tlink), 164 167 cifs_sb->ctx); 165 168 166 - 167 169 rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize, 168 - &rsize, &rdata->credits); 169 - if (rc) { 170 - subreq->error = rc; 171 - return false; 172 - } 170 + &size, &rdata->credits); 171 + if (rc) 172 + return rc; 173 + 174 + rreq->io_streams[0].sreq_max_len = size; 173 175 174 176 rdata->credits.in_flight_check = 1; 175 177 rdata->credits.rreq_debug_id = rreq->debug_id; ··· 180 184 server->credits, server->in_flight, 0, 181 185 cifs_trace_rw_credits_read_submit); 182 186 183 - subreq->len = umin(subreq->len, rsize); 184 - rdata->actual_len = subreq->len; 185 - 186 187 #ifdef CONFIG_CIFS_SMB_DIRECT 187 188 if (server->smbd_conn) 188 - subreq->max_nr_segs = server->smbd_conn->max_frmr_depth; 189 + rreq->io_streams[0].sreq_max_segs = server->smbd_conn->max_frmr_depth; 189 190 #endif 190 - return true; 191 + return 0; 191 192 } 192 193 193 194 /* ··· 193 200 * to only read a portion of that, but as long as we read something, the netfs 194 201 * helper will call us again so that we can issue another read. 
195 202 */ 196 - static void cifs_req_issue_read(struct netfs_io_subrequest *subreq) 203 + static void cifs_issue_read(struct netfs_io_subrequest *subreq) 197 204 { 198 205 struct netfs_io_request *rreq = subreq->rreq; 199 206 struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq); 200 207 struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq); 201 208 struct TCP_Server_Info *server = req->server; 202 - struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb); 203 209 int rc = 0; 204 210 205 211 cifs_dbg(FYI, "%s: op=%08x[%x] mapping=%p len=%zu/%zu\n", 206 212 __func__, rreq->debug_id, subreq->debug_index, rreq->mapping, 207 213 subreq->transferred, subreq->len); 208 214 209 - if (test_bit(NETFS_SREQ_RETRYING, &subreq->flags)) { 210 - /* 211 - * As we're issuing a retry, we need to negotiate some new 212 - * credits otherwise the server may reject the op with 213 - * INVALID_PARAMETER. Note, however, we may get back less 214 - * credit than we need to complete the op, in which case, we 215 - * shorten the op and rely on additional rounds of retry. 216 - */ 217 - size_t rsize = umin(subreq->len - subreq->transferred, 218 - cifs_sb->ctx->rsize); 219 - 220 - rc = server->ops->wait_mtu_credits(server, rsize, &rdata->actual_len, 221 - &rdata->credits); 222 - if (rc) 223 - goto out; 224 - 225 - rdata->credits.in_flight_check = 1; 226 - 227 - trace_smb3_rw_credits(rdata->rreq->debug_id, 228 - rdata->subreq.debug_index, 229 - rdata->credits.value, 230 - server->credits, server->in_flight, 0, 231 - cifs_trace_rw_credits_read_resubmit); 232 - } 215 + rc = adjust_credits(server, rdata, cifs_trace_rw_credits_issue_read_adjust); 216 + if (rc) 217 + goto failed; 233 218 234 219 if (req->cfile->invalidHandle) { 235 220 do { 236 221 rc = cifs_reopen_file(req->cfile, true); 237 222 } while (rc == -EAGAIN); 238 223 if (rc) 239 - goto out; 224 + goto failed; 240 225 } 241 226 242 227 if (subreq->rreq->origin != NETFS_DIO_READ) 243 228 __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); 244 229 230 + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); 245 231 rc = rdata->server->ops->async_readv(rdata); 246 - out: 247 232 if (rc) 248 - netfs_subreq_terminated(subreq, rc, false); 233 + goto failed; 234 + return; 235 + 236 + failed: 237 + netfs_read_subreq_terminated(subreq, rc, false); 249 238 } 250 239 251 240 /* ··· 291 316 inode_set_atime_to_ts(inode, inode_get_mtime(inode)); 292 317 } 293 318 294 - static void cifs_post_modify(struct inode *inode) 295 - { 296 - /* Indication to update ctime and mtime as close is deferred */ 297 - set_bit(CIFS_INO_MODIFIED_ATTR, &CIFS_I(inode)->flags); 298 - } 299 - 300 319 static void cifs_free_request(struct netfs_io_request *rreq) 301 320 { 302 321 struct cifs_io_request *req = container_of(rreq, struct cifs_io_request, rreq); ··· 338 369 .init_request = cifs_init_request, 339 370 .free_request = cifs_free_request, 340 371 .free_subrequest = cifs_free_subrequest, 341 - .clamp_length = cifs_clamp_length, 342 - .issue_read = cifs_req_issue_read, 372 + .prepare_read = cifs_prepare_read, 373 + .issue_read = cifs_issue_read, 343 374 .done = cifs_rreq_done, 344 - .post_modify = cifs_post_modify, 345 375 .begin_writeback = cifs_begin_writeback, 346 376 .prepare_write = cifs_prepare_write, 347 377 .issue_write = cifs_issue_write, ··· 1364 1396 dclose = kmalloc(sizeof(struct cifs_deferred_close), GFP_KERNEL); 1365 1397 if ((cfile->status_file_deleted == false) && 1366 1398 (smb2_can_defer_close(inode, dclose))) { 
1367 - if (test_and_clear_bit(CIFS_INO_MODIFIED_ATTR, &cinode->flags)) { 1399 + if (test_and_clear_bit(NETFS_ICTX_MODIFIED_ATTR, &cinode->netfs.flags)) { 1368 1400 inode_set_mtime_to_ts(inode, 1369 1401 inode_set_ctime_current(inode)); 1370 1402 }
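Editor's note: in file.c the ->clamp_length() hook becomes ->prepare_read(), which returns an error code directly and records the negotiated size on the request's stream, while ->issue_read() now reports failures through netfs_read_subreq_terminated(). A minimal, hypothetical sketch of that shape; example_negotiate_size() and example_send_read() are placeholders for the credit negotiation and the async RPC and are not real kernel functions.

/* Hypothetical transport helpers, declared for illustration only. */
int example_negotiate_size(struct netfs_io_request *rreq, size_t *size);
int example_send_read(struct netfs_io_subrequest *subreq);

static int example_prepare_read(struct netfs_io_subrequest *subreq)
{
	struct netfs_io_request *rreq = subreq->rreq;
	size_t size;
	int rc;

	rc = example_negotiate_size(rreq, &size);
	if (rc)
		return rc;	/* errors are returned directly now */

	rreq->io_streams[0].sreq_max_len = size;
	return 0;
}

static void example_issue_read(struct netfs_io_subrequest *subreq)
{
	int rc = example_send_read(subreq);

	if (rc)
		netfs_read_subreq_terminated(subreq, rc, false);
}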
+122 -99
fs/smb/client/smb2ops.c
··· 13 13 #include <linux/sort.h> 14 14 #include <crypto/aead.h> 15 15 #include <linux/fiemap.h> 16 + #include <linux/folio_queue.h> 16 17 #include <uapi/linux/magic.h> 17 18 #include "cifsfs.h" 18 19 #include "cifsglob.h" ··· 302 301 unsigned int /*enum smb3_rw_credits_trace*/ trace) 303 302 { 304 303 struct cifs_credits *credits = &subreq->credits; 305 - int new_val = DIV_ROUND_UP(subreq->actual_len, SMB2_MAX_BUFFER_SIZE); 304 + int new_val = DIV_ROUND_UP(subreq->subreq.len - subreq->subreq.transferred, 305 + SMB2_MAX_BUFFER_SIZE); 306 306 int scredits, in_flight; 307 307 308 308 if (!credits->value || credits->value == new_val) ··· 4394 4392 } 4395 4393 4396 4394 /* 4397 - * Clear a read buffer, discarding the folios which have XA_MARK_0 set. 4395 + * Clear a read buffer, discarding the folios which have the 1st mark set. 4398 4396 */ 4399 - static void cifs_clear_xarray_buffer(struct xarray *buffer) 4397 + static void cifs_clear_folioq_buffer(struct folio_queue *buffer) 4400 4398 { 4401 - struct folio *folio; 4399 + struct folio_queue *folioq; 4402 4400 4403 - XA_STATE(xas, buffer, 0); 4404 - 4405 - rcu_read_lock(); 4406 - xas_for_each_marked(&xas, folio, ULONG_MAX, XA_MARK_0) { 4407 - folio_put(folio); 4401 + while ((folioq = buffer)) { 4402 + for (int s = 0; s < folioq_count(folioq); s++) 4403 + if (folioq_is_marked(folioq, s)) 4404 + folio_put(folioq_folio(folioq, s)); 4405 + buffer = folioq->next; 4406 + kfree(folioq); 4408 4407 } 4409 - rcu_read_unlock(); 4410 - xa_destroy(buffer); 4408 + } 4409 + 4410 + /* 4411 + * Allocate buffer space into a folio queue. 4412 + */ 4413 + static struct folio_queue *cifs_alloc_folioq_buffer(ssize_t size) 4414 + { 4415 + struct folio_queue *buffer = NULL, *tail = NULL, *p; 4416 + struct folio *folio; 4417 + unsigned int slot; 4418 + 4419 + do { 4420 + if (!tail || folioq_full(tail)) { 4421 + p = kmalloc(sizeof(*p), GFP_NOFS); 4422 + if (!p) 4423 + goto nomem; 4424 + folioq_init(p); 4425 + if (tail) { 4426 + tail->next = p; 4427 + p->prev = tail; 4428 + } else { 4429 + buffer = p; 4430 + } 4431 + tail = p; 4432 + } 4433 + 4434 + folio = folio_alloc(GFP_KERNEL|__GFP_HIGHMEM, 0); 4435 + if (!folio) 4436 + goto nomem; 4437 + 4438 + slot = folioq_append_mark(tail, folio); 4439 + size -= folioq_folio_size(tail, slot); 4440 + } while (size > 0); 4441 + 4442 + return buffer; 4443 + 4444 + nomem: 4445 + cifs_clear_folioq_buffer(buffer); 4446 + return NULL; 4447 + } 4448 + 4449 + /* 4450 + * Copy data from an iterator to the folios in a folio queue buffer. 
4451 + */ 4452 + static bool cifs_copy_iter_to_folioq(struct iov_iter *iter, size_t size, 4453 + struct folio_queue *buffer) 4454 + { 4455 + for (; buffer; buffer = buffer->next) { 4456 + for (int s = 0; s < folioq_count(buffer); s++) { 4457 + struct folio *folio = folioq_folio(buffer, s); 4458 + size_t part = folioq_folio_size(buffer, s); 4459 + 4460 + part = umin(part, size); 4461 + 4462 + if (copy_folio_from_iter(folio, 0, part, iter) != part) 4463 + return false; 4464 + size -= part; 4465 + } 4466 + } 4467 + return true; 4411 4468 } 4412 4469 4413 4470 void 4414 4471 smb3_free_compound_rqst(int num_rqst, struct smb_rqst *rqst) 4415 4472 { 4416 - int i; 4417 - 4418 - for (i = 0; i < num_rqst; i++) 4419 - if (!xa_empty(&rqst[i].rq_buffer)) 4420 - cifs_clear_xarray_buffer(&rqst[i].rq_buffer); 4473 + for (int i = 0; i < num_rqst; i++) 4474 + cifs_clear_folioq_buffer(rqst[i].rq_buffer); 4421 4475 } 4422 4476 4423 4477 /* ··· 4494 4436 struct smb_rqst *new_rq, struct smb_rqst *old_rq) 4495 4437 { 4496 4438 struct smb2_transform_hdr *tr_hdr = new_rq[0].rq_iov[0].iov_base; 4497 - struct page *page; 4498 4439 unsigned int orig_len = 0; 4499 - int i, j; 4500 4440 int rc = -ENOMEM; 4501 4441 4502 - for (i = 1; i < num_rqst; i++) { 4442 + for (int i = 1; i < num_rqst; i++) { 4503 4443 struct smb_rqst *old = &old_rq[i - 1]; 4504 4444 struct smb_rqst *new = &new_rq[i]; 4505 - struct xarray *buffer = &new->rq_buffer; 4506 - size_t size = iov_iter_count(&old->rq_iter), seg, copied = 0; 4445 + struct folio_queue *buffer; 4446 + size_t size = iov_iter_count(&old->rq_iter); 4507 4447 4508 4448 orig_len += smb_rqst_len(server, old); 4509 4449 new->rq_iov = old->rq_iov; 4510 4450 new->rq_nvec = old->rq_nvec; 4511 4451 4512 - xa_init(buffer); 4513 - 4514 4452 if (size > 0) { 4515 - unsigned int npages = DIV_ROUND_UP(size, PAGE_SIZE); 4453 + buffer = cifs_alloc_folioq_buffer(size); 4454 + if (!buffer) 4455 + goto err_free; 4516 4456 4517 - for (j = 0; j < npages; j++) { 4518 - void *o; 4457 + new->rq_buffer = buffer; 4458 + iov_iter_folio_queue(&new->rq_iter, ITER_SOURCE, 4459 + buffer, 0, 0, size); 4519 4460 4520 - rc = -ENOMEM; 4521 - page = alloc_page(GFP_KERNEL|__GFP_HIGHMEM); 4522 - if (!page) 4523 - goto err_free; 4524 - page->index = j; 4525 - o = xa_store(buffer, j, page, GFP_KERNEL); 4526 - if (xa_is_err(o)) { 4527 - rc = xa_err(o); 4528 - put_page(page); 4529 - goto err_free; 4530 - } 4531 - 4532 - xa_set_mark(buffer, j, XA_MARK_0); 4533 - 4534 - seg = min_t(size_t, size - copied, PAGE_SIZE); 4535 - if (copy_page_from_iter(page, 0, seg, &old->rq_iter) != seg) { 4536 - rc = -EFAULT; 4537 - goto err_free; 4538 - } 4539 - copied += seg; 4461 + if (!cifs_copy_iter_to_folioq(&old->rq_iter, size, buffer)) { 4462 + rc = -EIO; 4463 + goto err_free; 4540 4464 } 4541 - iov_iter_xarray(&new->rq_iter, ITER_SOURCE, 4542 - buffer, 0, size); 4543 4465 } 4544 4466 } 4545 4467 ··· 4583 4545 } 4584 4546 4585 4547 static int 4586 - cifs_copy_pages_to_iter(struct xarray *pages, unsigned int data_size, 4587 - unsigned int skip, struct iov_iter *iter) 4548 + cifs_copy_folioq_to_iter(struct folio_queue *folioq, size_t data_size, 4549 + size_t skip, struct iov_iter *iter) 4588 4550 { 4589 - struct page *page; 4590 - unsigned long index; 4551 + for (; folioq; folioq = folioq->next) { 4552 + for (int s = 0; s < folioq_count(folioq); s++) { 4553 + struct folio *folio = folioq_folio(folioq, s); 4554 + size_t fsize = folio_size(folio); 4555 + size_t n, len = umin(fsize - skip, data_size); 4591 4556 4592 - xa_for_each(pages, 
index, page) { 4593 - size_t n, len = min_t(unsigned int, PAGE_SIZE - skip, data_size); 4594 - 4595 - n = copy_page_to_iter(page, skip, len, iter); 4596 - if (n != len) { 4597 - cifs_dbg(VFS, "%s: something went wrong\n", __func__); 4598 - return -EIO; 4557 + n = copy_folio_to_iter(folio, skip, len, iter); 4558 + if (n != len) { 4559 + cifs_dbg(VFS, "%s: something went wrong\n", __func__); 4560 + return -EIO; 4561 + } 4562 + data_size -= n; 4563 + skip = 0; 4599 4564 } 4600 - data_size -= n; 4601 - skip = 0; 4602 4565 } 4603 4566 4604 4567 return 0; ··· 4607 4568 4608 4569 static int 4609 4570 handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid, 4610 - char *buf, unsigned int buf_len, struct xarray *pages, 4611 - unsigned int pages_len, bool is_offloaded) 4571 + char *buf, unsigned int buf_len, struct folio_queue *buffer, 4572 + unsigned int buffer_len, bool is_offloaded) 4612 4573 { 4613 4574 unsigned int data_offset; 4614 4575 unsigned int data_len; ··· 4705 4666 return 0; 4706 4667 } 4707 4668 4708 - if (data_len > pages_len - pad_len) { 4669 + if (data_len > buffer_len - pad_len) { 4709 4670 /* data_len is corrupt -- discard frame */ 4710 4671 rdata->result = -EIO; 4711 4672 if (is_offloaded) ··· 4716 4677 } 4717 4678 4718 4679 /* Copy the data to the output I/O iterator. */ 4719 - rdata->result = cifs_copy_pages_to_iter(pages, pages_len, 4720 - cur_off, &rdata->subreq.io_iter); 4680 + rdata->result = cifs_copy_folioq_to_iter(buffer, buffer_len, 4681 + cur_off, &rdata->subreq.io_iter); 4721 4682 if (rdata->result != 0) { 4722 4683 if (is_offloaded) 4723 4684 mid->mid_state = MID_RESPONSE_MALFORMED; ··· 4725 4686 dequeue_mid(mid, rdata->result); 4726 4687 return 0; 4727 4688 } 4728 - rdata->got_bytes = pages_len; 4689 + rdata->got_bytes = buffer_len; 4729 4690 4730 4691 } else if (buf_len >= data_offset + data_len) { 4731 4692 /* read response payload is in buf */ 4732 - WARN_ONCE(pages && !xa_empty(pages), 4733 - "read data can be either in buf or in pages"); 4693 + WARN_ONCE(buffer, "read data can be either in buf or in buffer"); 4734 4694 length = copy_to_iter(buf + data_offset, data_len, &rdata->subreq.io_iter); 4735 4695 if (length < 0) 4736 4696 return length; ··· 4755 4717 struct smb2_decrypt_work { 4756 4718 struct work_struct decrypt; 4757 4719 struct TCP_Server_Info *server; 4758 - struct xarray buffer; 4720 + struct folio_queue *buffer; 4759 4721 char *buf; 4760 4722 unsigned int len; 4761 4723 }; ··· 4769 4731 struct mid_q_entry *mid; 4770 4732 struct iov_iter iter; 4771 4733 4772 - iov_iter_xarray(&iter, ITER_DEST, &dw->buffer, 0, dw->len); 4734 + iov_iter_folio_queue(&iter, ITER_DEST, dw->buffer, 0, 0, dw->len); 4773 4735 rc = decrypt_raw_data(dw->server, dw->buf, dw->server->vals->read_rsp_size, 4774 4736 &iter, true); 4775 4737 if (rc) { ··· 4785 4747 mid->decrypted = true; 4786 4748 rc = handle_read_data(dw->server, mid, dw->buf, 4787 4749 dw->server->vals->read_rsp_size, 4788 - &dw->buffer, dw->len, 4750 + dw->buffer, dw->len, 4789 4751 true); 4790 4752 if (rc >= 0) { 4791 4753 #ifdef CONFIG_CIFS_STATS2 ··· 4818 4780 } 4819 4781 4820 4782 free_pages: 4821 - cifs_clear_xarray_buffer(&dw->buffer); 4783 + cifs_clear_folioq_buffer(dw->buffer); 4822 4784 cifs_small_buf_release(dw->buf); 4823 4785 kfree(dw); 4824 4786 } ··· 4828 4790 receive_encrypted_read(struct TCP_Server_Info *server, struct mid_q_entry **mid, 4829 4791 int *num_mids) 4830 4792 { 4831 - struct page *page; 4832 4793 char *buf = server->smallbuf; 4833 4794 struct smb2_transform_hdr 
*tr_hdr = (struct smb2_transform_hdr *)buf; 4834 4795 struct iov_iter iter; 4835 - unsigned int len, npages; 4796 + unsigned int len; 4836 4797 unsigned int buflen = server->pdu_size; 4837 4798 int rc; 4838 - int i = 0; 4839 4799 struct smb2_decrypt_work *dw; 4840 4800 4841 4801 dw = kzalloc(sizeof(struct smb2_decrypt_work), GFP_KERNEL); 4842 4802 if (!dw) 4843 4803 return -ENOMEM; 4844 - xa_init(&dw->buffer); 4845 4804 INIT_WORK(&dw->decrypt, smb2_decrypt_offload); 4846 4805 dw->server = server; 4847 4806 ··· 4854 4819 len = le32_to_cpu(tr_hdr->OriginalMessageSize) - 4855 4820 server->vals->read_rsp_size; 4856 4821 dw->len = len; 4857 - npages = DIV_ROUND_UP(len, PAGE_SIZE); 4822 + len = round_up(dw->len, PAGE_SIZE); 4858 4823 4859 4824 rc = -ENOMEM; 4860 - for (; i < npages; i++) { 4861 - void *old; 4825 + dw->buffer = cifs_alloc_folioq_buffer(len); 4826 + if (!dw->buffer) 4827 + goto discard_data; 4862 4828 4863 - page = alloc_page(GFP_KERNEL|__GFP_HIGHMEM); 4864 - if (!page) 4865 - goto discard_data; 4866 - page->index = i; 4867 - old = xa_store(&dw->buffer, i, page, GFP_KERNEL); 4868 - if (xa_is_err(old)) { 4869 - rc = xa_err(old); 4870 - put_page(page); 4871 - goto discard_data; 4872 - } 4873 - xa_set_mark(&dw->buffer, i, XA_MARK_0); 4874 - } 4875 - 4876 - iov_iter_xarray(&iter, ITER_DEST, &dw->buffer, 0, npages * PAGE_SIZE); 4829 + iov_iter_folio_queue(&iter, ITER_DEST, dw->buffer, 0, 0, len); 4877 4830 4878 4831 /* Read the data into the buffer and clear excess bufferage. */ 4879 4832 rc = cifs_read_iter_from_socket(server, &iter, dw->len); ··· 4869 4846 goto discard_data; 4870 4847 4871 4848 server->total_read += rc; 4872 - if (rc < npages * PAGE_SIZE) 4873 - iov_iter_zero(npages * PAGE_SIZE - rc, &iter); 4874 - iov_iter_revert(&iter, npages * PAGE_SIZE); 4849 + if (rc < len) 4850 + iov_iter_zero(len - rc, &iter); 4851 + iov_iter_revert(&iter, len); 4875 4852 iov_iter_truncate(&iter, dw->len); 4876 4853 4877 4854 rc = cifs_discard_remaining_data(server); ··· 4906 4883 (*mid)->decrypted = true; 4907 4884 rc = handle_read_data(server, *mid, buf, 4908 4885 server->vals->read_rsp_size, 4909 - &dw->buffer, dw->len, false); 4886 + dw->buffer, dw->len, false); 4910 4887 if (rc >= 0) { 4911 4888 if (server->ops->is_network_name_deleted) { 4912 4889 server->ops->is_network_name_deleted(buf, ··· 4916 4893 } 4917 4894 4918 4895 free_pages: 4919 - cifs_clear_xarray_buffer(&dw->buffer); 4896 + cifs_clear_folioq_buffer(dw->buffer); 4920 4897 free_dw: 4921 4898 kfree(dw); 4922 4899 return rc;
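Editor's note: the smb2ops.c changes give the encryption path a complete folio_queue buffer lifecycle - cifs_alloc_folioq_buffer() builds the queue, iov_iter_folio_queue() points an ITER_FOLIOQ iterator at it, and cifs_clear_folioq_buffer() puts the folios that were marked at allocation time. A short usage sketch under those assumptions; the example_* wrappers are hypothetical.

static int example_setup_crypto_buffer(struct smb_rqst *rqst, size_t size)
{
	struct folio_queue *buf;

	buf = cifs_alloc_folioq_buffer(size);
	if (!buf)
		return -ENOMEM;

	rqst->rq_buffer = buf;
	iov_iter_folio_queue(&rqst->rq_iter, ITER_SOURCE, buf, 0, 0, size);
	return 0;
}

static void example_teardown_crypto_buffer(struct smb_rqst *rqst)
{
	/* Puts the marked folios and frees the queue segments. */
	cifs_clear_folioq_buffer(rqst->rq_buffer);
	rqst->rq_buffer = NULL;
}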
+16 -11
fs/smb/client/smb2pdu.c
··· 4498 4498 struct cifs_io_subrequest *rdata = 4499 4499 container_of(work, struct cifs_io_subrequest, subreq.work); 4500 4500 4501 - netfs_subreq_terminated(&rdata->subreq, 4502 - (rdata->result == 0 || rdata->result == -EAGAIN) ? 4503 - rdata->got_bytes : rdata->result, true); 4501 + netfs_read_subreq_terminated(&rdata->subreq, rdata->result, false); 4504 4502 } 4505 4503 4506 4504 static void ··· 4530 4532 4531 4533 cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%zu/%zu\n", 4532 4534 __func__, mid->mid, mid->mid_state, rdata->result, 4533 - rdata->actual_len, rdata->subreq.len - rdata->subreq.transferred); 4535 + rdata->got_bytes, rdata->subreq.len - rdata->subreq.transferred); 4534 4536 4535 4537 switch (mid->mid_state) { 4536 4538 case MID_RESPONSE_RECEIVED: ··· 4552 4554 break; 4553 4555 case MID_REQUEST_SUBMITTED: 4554 4556 case MID_RETRY_NEEDED: 4557 + __set_bit(NETFS_SREQ_NEED_RETRY, &rdata->subreq.flags); 4555 4558 rdata->result = -EAGAIN; 4556 4559 if (server->sign && rdata->got_bytes) 4557 4560 /* reset bytes number since we can not check a sign */ ··· 4587 4588 rdata->req->cfile->fid.persistent_fid, 4588 4589 tcon->tid, tcon->ses->Suid, 4589 4590 rdata->subreq.start + rdata->subreq.transferred, 4590 - rdata->actual_len, 4591 + rdata->subreq.len - rdata->subreq.transferred, 4591 4592 rdata->result); 4592 4593 } else 4593 4594 trace_smb3_read_done(rdata->rreq->debug_id, ··· 4602 4603 __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); 4603 4604 rdata->result = 0; 4604 4605 } else { 4605 - if (rdata->got_bytes < rdata->actual_len && 4606 - rdata->subreq.start + rdata->subreq.transferred + rdata->got_bytes == 4607 - ictx->remote_i_size) { 4606 + size_t trans = rdata->subreq.transferred + rdata->got_bytes; 4607 + if (trans < rdata->subreq.len && 4608 + rdata->subreq.start + trans == ictx->remote_i_size) { 4608 4609 __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); 4609 4610 rdata->result = 0; 4610 4611 } ··· 4613 4614 server->credits, server->in_flight, 4614 4615 0, cifs_trace_rw_credits_read_response_clear); 4615 4616 rdata->credits.value = 0; 4617 + rdata->subreq.transferred += rdata->got_bytes; 4618 + if (rdata->subreq.start + rdata->subreq.transferred >= rdata->subreq.rreq->i_size) 4619 + __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); 4620 + trace_netfs_sreq(&rdata->subreq, netfs_sreq_trace_io_progress); 4616 4621 INIT_WORK(&rdata->subreq.work, smb2_readv_worker); 4617 4622 queue_work(cifsiod_wq, &rdata->subreq.work); 4618 4623 release_mid(mid); ··· 4651 4648 io_parms.tcon = tlink_tcon(rdata->req->cfile->tlink); 4652 4649 io_parms.server = server = rdata->server; 4653 4650 io_parms.offset = subreq->start + subreq->transferred; 4654 - io_parms.length = rdata->actual_len; 4651 + io_parms.length = subreq->len - subreq->transferred; 4655 4652 io_parms.persistent_fid = rdata->req->cfile->fid.persistent_fid; 4656 4653 io_parms.volatile_fid = rdata->req->cfile->fid.volatile_fid; 4657 4654 io_parms.pid = rdata->req->pid; ··· 4672 4669 shdr = (struct smb2_hdr *)buf; 4673 4670 4674 4671 if (rdata->credits.value > 0) { 4675 - shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->actual_len, 4672 + shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(io_parms.length, 4676 4673 SMB2_MAX_BUFFER_SIZE)); 4677 4674 credit_request = le16_to_cpu(shdr->CreditCharge) + 8; 4678 4675 if (server->credits >= server->max_credits) ··· 4700 4697 rdata->xid, io_parms.persistent_fid, 4701 4698 io_parms.tcon->tid, 4702 4699 io_parms.tcon->ses->Suid, 4703 - io_parms.offset, rdata->actual_len, rc); 
4700 + io_parms.offset, 4701 + subreq->len - subreq->transferred, rc); 4704 4702 } 4705 4703 4706 4704 async_readv_out: ··· 4884 4880 server->credits, server->in_flight, 4885 4881 0, cifs_trace_rw_credits_write_response_clear); 4886 4882 wdata->credits.value = 0; 4883 + trace_netfs_sreq(&wdata->subreq, netfs_sreq_trace_io_progress); 4887 4884 cifs_write_subrequest_terminated(wdata, result ?: written, true); 4888 4885 release_mid(mid); 4889 4886 trace_smb3_rw_credits(rreq_debug_id, subreq_debug_index, 0,
+51 -35
fs/smb/client/smbdirect.c
··· 6 6 */ 7 7 #include <linux/module.h> 8 8 #include <linux/highmem.h> 9 + #include <linux/folio_queue.h> 9 10 #include "smbdirect.h" 10 11 #include "cifs_debug.h" 11 12 #include "cifsproto.h" ··· 2464 2463 start = 0; 2465 2464 } 2466 2465 2466 + if (ret > 0) 2467 + iov_iter_advance(iter, ret); 2467 2468 return ret; 2468 2469 } 2469 2470 ··· 2522 2519 start = 0; 2523 2520 } 2524 2521 2522 + if (ret > 0) 2523 + iov_iter_advance(iter, ret); 2525 2524 return ret; 2526 2525 } 2527 2526 2528 2527 /* 2529 - * Extract folio fragments from an XARRAY-class iterator and add them to an 2530 - * RDMA list. The folios are not pinned. 2528 + * Extract folio fragments from a FOLIOQ-class iterator and add them to an RDMA 2529 + * list. The folios are not pinned. 2531 2530 */ 2532 - static ssize_t smb_extract_xarray_to_rdma(struct iov_iter *iter, 2531 + static ssize_t smb_extract_folioq_to_rdma(struct iov_iter *iter, 2533 2532 struct smb_extract_to_rdma *rdma, 2534 2533 ssize_t maxsize) 2535 2534 { 2536 - struct xarray *xa = iter->xarray; 2537 - struct folio *folio; 2538 - loff_t start = iter->xarray_start + iter->iov_offset; 2539 - pgoff_t index = start / PAGE_SIZE; 2535 + const struct folio_queue *folioq = iter->folioq; 2536 + unsigned int slot = iter->folioq_slot; 2540 2537 ssize_t ret = 0; 2541 - size_t off, len; 2542 - XA_STATE(xas, xa, index); 2538 + size_t offset = iter->iov_offset; 2543 2539 2544 - rcu_read_lock(); 2540 + BUG_ON(!folioq); 2545 2541 2546 - xas_for_each(&xas, folio, ULONG_MAX) { 2547 - if (xas_retry(&xas, folio)) 2548 - continue; 2549 - if (WARN_ON(xa_is_value(folio))) 2550 - break; 2551 - if (WARN_ON(folio_test_hugetlb(folio))) 2552 - break; 2553 - 2554 - off = offset_in_folio(folio, start); 2555 - len = min_t(size_t, maxsize, folio_size(folio) - off); 2556 - 2557 - if (!smb_set_sge(rdma, folio_page(folio, 0), off, len)) { 2558 - rcu_read_unlock(); 2542 + if (slot >= folioq_nr_slots(folioq)) { 2543 + folioq = folioq->next; 2544 + if (WARN_ON_ONCE(!folioq)) 2559 2545 return -EIO; 2560 - } 2561 - 2562 - maxsize -= len; 2563 - ret += len; 2564 - if (rdma->nr_sge >= rdma->max_sge || maxsize <= 0) 2565 - break; 2546 + slot = 0; 2566 2547 } 2567 2548 2568 - rcu_read_unlock(); 2549 + do { 2550 + struct folio *folio = folioq_folio(folioq, slot); 2551 + size_t fsize = folioq_folio_size(folioq, slot); 2552 + 2553 + if (offset < fsize) { 2554 + size_t part = umin(maxsize - ret, fsize - offset); 2555 + 2556 + if (!smb_set_sge(rdma, folio_page(folio, 0), offset, part)) 2557 + return -EIO; 2558 + 2559 + offset += part; 2560 + ret += part; 2561 + } 2562 + 2563 + if (offset >= fsize) { 2564 + offset = 0; 2565 + slot++; 2566 + if (slot >= folioq_nr_slots(folioq)) { 2567 + if (!folioq->next) { 2568 + WARN_ON_ONCE(ret < iter->count); 2569 + break; 2570 + } 2571 + folioq = folioq->next; 2572 + slot = 0; 2573 + } 2574 + } 2575 + } while (rdma->nr_sge < rdma->max_sge || maxsize > 0); 2576 + 2577 + iter->folioq = folioq; 2578 + iter->folioq_slot = slot; 2579 + iter->iov_offset = offset; 2580 + iter->count -= ret; 2569 2581 return ret; 2570 2582 } 2571 2583 ··· 2608 2590 case ITER_KVEC: 2609 2591 ret = smb_extract_kvec_to_rdma(iter, rdma, len); 2610 2592 break; 2611 - case ITER_XARRAY: 2612 - ret = smb_extract_xarray_to_rdma(iter, rdma, len); 2593 + case ITER_FOLIOQ: 2594 + ret = smb_extract_folioq_to_rdma(iter, rdma, len); 2613 2595 break; 2614 2596 default: 2615 2597 WARN_ON_ONCE(1); 2616 2598 return -EIO; 2617 2599 } 2618 2600 2619 - if (ret > 0) { 2620 - iov_iter_advance(iter, ret); 2621 - } else if 
(ret < 0) { 2601 + if (ret < 0) { 2622 2602 while (rdma->nr_sge > before) { 2623 2603 struct ib_sge *sge = &rdma->sge[rdma->nr_sge--]; 2624 2604
+156
include/linux/folio_queue.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 + /* Queue of folios definitions 3 + * 4 + * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved. 5 + * Written by David Howells (dhowells@redhat.com) 6 + */ 7 + 8 + #ifndef _LINUX_FOLIO_QUEUE_H 9 + #define _LINUX_FOLIO_QUEUE_H 10 + 11 + #include <linux/pagevec.h> 12 + 13 + /* 14 + * Segment in a queue of running buffers. Each segment can hold a number of 15 + * folios and a portion of the queue can be referenced with the ITER_FOLIOQ 16 + * iterator. The possibility exists of inserting non-folio elements into the 17 + * queue (such as gaps). 18 + * 19 + * Explicit prev and next pointers are used instead of a list_head to make it 20 + * easier to add segments to tail and remove them from the head without the 21 + * need for a lock. 22 + */ 23 + struct folio_queue { 24 + struct folio_batch vec; /* Folios in the queue segment */ 25 + u8 orders[PAGEVEC_SIZE]; /* Order of each folio */ 26 + struct folio_queue *next; /* Next queue segment or NULL */ 27 + struct folio_queue *prev; /* Previous queue segment of NULL */ 28 + unsigned long marks; /* 1-bit mark per folio */ 29 + unsigned long marks2; /* Second 1-bit mark per folio */ 30 + unsigned long marks3; /* Third 1-bit mark per folio */ 31 + #if PAGEVEC_SIZE > BITS_PER_LONG 32 + #error marks is not big enough 33 + #endif 34 + }; 35 + 36 + static inline void folioq_init(struct folio_queue *folioq) 37 + { 38 + folio_batch_init(&folioq->vec); 39 + folioq->next = NULL; 40 + folioq->prev = NULL; 41 + folioq->marks = 0; 42 + folioq->marks2 = 0; 43 + folioq->marks3 = 0; 44 + } 45 + 46 + static inline unsigned int folioq_nr_slots(const struct folio_queue *folioq) 47 + { 48 + return PAGEVEC_SIZE; 49 + } 50 + 51 + static inline unsigned int folioq_count(struct folio_queue *folioq) 52 + { 53 + return folio_batch_count(&folioq->vec); 54 + } 55 + 56 + static inline bool folioq_full(struct folio_queue *folioq) 57 + { 58 + //return !folio_batch_space(&folioq->vec); 59 + return folioq_count(folioq) >= folioq_nr_slots(folioq); 60 + } 61 + 62 + static inline bool folioq_is_marked(const struct folio_queue *folioq, unsigned int slot) 63 + { 64 + return test_bit(slot, &folioq->marks); 65 + } 66 + 67 + static inline void folioq_mark(struct folio_queue *folioq, unsigned int slot) 68 + { 69 + set_bit(slot, &folioq->marks); 70 + } 71 + 72 + static inline void folioq_unmark(struct folio_queue *folioq, unsigned int slot) 73 + { 74 + clear_bit(slot, &folioq->marks); 75 + } 76 + 77 + static inline bool folioq_is_marked2(const struct folio_queue *folioq, unsigned int slot) 78 + { 79 + return test_bit(slot, &folioq->marks2); 80 + } 81 + 82 + static inline void folioq_mark2(struct folio_queue *folioq, unsigned int slot) 83 + { 84 + set_bit(slot, &folioq->marks2); 85 + } 86 + 87 + static inline void folioq_unmark2(struct folio_queue *folioq, unsigned int slot) 88 + { 89 + clear_bit(slot, &folioq->marks2); 90 + } 91 + 92 + static inline bool folioq_is_marked3(const struct folio_queue *folioq, unsigned int slot) 93 + { 94 + return test_bit(slot, &folioq->marks3); 95 + } 96 + 97 + static inline void folioq_mark3(struct folio_queue *folioq, unsigned int slot) 98 + { 99 + set_bit(slot, &folioq->marks3); 100 + } 101 + 102 + static inline void folioq_unmark3(struct folio_queue *folioq, unsigned int slot) 103 + { 104 + clear_bit(slot, &folioq->marks3); 105 + } 106 + 107 + static inline unsigned int __folio_order(struct folio *folio) 108 + { 109 + if (!folio_test_large(folio)) 110 + return 0; 111 + return folio->_flags_1 & 0xff; 
112 + } 113 + 114 + static inline unsigned int folioq_append(struct folio_queue *folioq, struct folio *folio) 115 + { 116 + unsigned int slot = folioq->vec.nr++; 117 + 118 + folioq->vec.folios[slot] = folio; 119 + folioq->orders[slot] = __folio_order(folio); 120 + return slot; 121 + } 122 + 123 + static inline unsigned int folioq_append_mark(struct folio_queue *folioq, struct folio *folio) 124 + { 125 + unsigned int slot = folioq->vec.nr++; 126 + 127 + folioq->vec.folios[slot] = folio; 128 + folioq->orders[slot] = __folio_order(folio); 129 + folioq_mark(folioq, slot); 130 + return slot; 131 + } 132 + 133 + static inline struct folio *folioq_folio(const struct folio_queue *folioq, unsigned int slot) 134 + { 135 + return folioq->vec.folios[slot]; 136 + } 137 + 138 + static inline unsigned int folioq_folio_order(const struct folio_queue *folioq, unsigned int slot) 139 + { 140 + return folioq->orders[slot]; 141 + } 142 + 143 + static inline size_t folioq_folio_size(const struct folio_queue *folioq, unsigned int slot) 144 + { 145 + return PAGE_SIZE << folioq_folio_order(folioq, slot); 146 + } 147 + 148 + static inline void folioq_clear(struct folio_queue *folioq, unsigned int slot) 149 + { 150 + folioq->vec.folios[slot] = NULL; 151 + folioq_unmark(folioq, slot); 152 + folioq_unmark2(folioq, slot); 153 + folioq_unmark3(folioq, slot); 154 + } 155 + 156 + #endif /* _LINUX_FOLIO_QUEUE_H */
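Editor's note: the header above is the entire folio_queue API. As orientation, here is a hedged sketch of building a single queue segment and tearing it down again, using only the helpers defined above plus folio_alloc()/folio_put(); the first per-slot mark is used here, as in the cifs code earlier in this diff, to mean "this folio needs putting". The example_* names are hypothetical.

#include <linux/folio_queue.h>
#include <linux/gfp.h>
#include <linux/slab.h>

static struct folio_queue *example_build_segment(unsigned int nr)
{
	struct folio_queue *fq = kmalloc(sizeof(*fq), GFP_KERNEL);

	if (!fq)
		return NULL;
	folioq_init(fq);

	while (nr-- && !folioq_full(fq)) {
		struct folio *folio = folio_alloc(GFP_KERNEL, 0);

		if (!folio)
			break;
		folioq_append_mark(fq, folio);	/* mark = "put on cleanup" */
	}
	return fq;
}

static void example_free_segment(struct folio_queue *fq)
{
	for (unsigned int slot = 0; slot < folioq_count(fq); slot++)
		if (folioq_is_marked(fq, slot))
			folio_put(folioq_folio(fq, slot));
	kfree(fq);
}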
+104
include/linux/iov_iter.h
··· 10 10 11 11 #include <linux/uio.h> 12 12 #include <linux/bvec.h> 13 + #include <linux/folio_queue.h> 13 14 14 15 typedef size_t (*iov_step_f)(void *iter_base, size_t progress, size_t len, 15 16 void *priv, void *priv2); ··· 142 141 } 143 142 144 143 /* 144 + * Handle ITER_FOLIOQ. 145 + */ 146 + static __always_inline 147 + size_t iterate_folioq(struct iov_iter *iter, size_t len, void *priv, void *priv2, 148 + iov_step_f step) 149 + { 150 + const struct folio_queue *folioq = iter->folioq; 151 + unsigned int slot = iter->folioq_slot; 152 + size_t progress = 0, skip = iter->iov_offset; 153 + 154 + if (slot == folioq_nr_slots(folioq)) { 155 + /* The iterator may have been extended. */ 156 + folioq = folioq->next; 157 + slot = 0; 158 + } 159 + 160 + do { 161 + struct folio *folio = folioq_folio(folioq, slot); 162 + size_t part, remain, consumed; 163 + size_t fsize; 164 + void *base; 165 + 166 + if (!folio) 167 + break; 168 + 169 + fsize = folioq_folio_size(folioq, slot); 170 + base = kmap_local_folio(folio, skip); 171 + part = umin(len, PAGE_SIZE - skip % PAGE_SIZE); 172 + remain = step(base, progress, part, priv, priv2); 173 + kunmap_local(base); 174 + consumed = part - remain; 175 + len -= consumed; 176 + progress += consumed; 177 + skip += consumed; 178 + if (skip >= fsize) { 179 + skip = 0; 180 + slot++; 181 + if (slot == folioq_nr_slots(folioq) && folioq->next) { 182 + folioq = folioq->next; 183 + slot = 0; 184 + } 185 + } 186 + if (remain) 187 + break; 188 + } while (len); 189 + 190 + iter->folioq_slot = slot; 191 + iter->folioq = folioq; 192 + iter->iov_offset = skip; 193 + iter->count -= progress; 194 + return progress; 195 + } 196 + 197 + /* 145 198 * Handle ITER_XARRAY. 146 199 */ 147 200 static __always_inline ··· 304 249 return iterate_bvec(iter, len, priv, priv2, step); 305 250 if (iov_iter_is_kvec(iter)) 306 251 return iterate_kvec(iter, len, priv, priv2, step); 252 + if (iov_iter_is_folioq(iter)) 253 + return iterate_folioq(iter, len, priv, priv2, step); 307 254 if (iov_iter_is_xarray(iter)) 308 255 return iterate_xarray(iter, len, priv, priv2, step); 309 256 return iterate_discard(iter, len, priv, priv2, step); ··· 326 269 iov_ustep_f ustep, iov_step_f step) 327 270 { 328 271 return iterate_and_advance2(iter, len, priv, NULL, ustep, step); 272 + } 273 + 274 + /** 275 + * iterate_and_advance_kernel - Iterate over a kernel-internal iterator 276 + * @iter: The iterator to iterate over. 277 + * @len: The amount to iterate over. 278 + * @priv: Data for the step functions. 279 + * @priv2: More data for the step functions. 280 + * @step: Function for other iterators; given kernel addresses. 281 + * 282 + * Iterate over the next part of an iterator, up to the specified length. The 283 + * buffer is presented in segments, which for kernel iteration are broken up by 284 + * physical pages and mapped, with the mapped address being presented. 285 + * 286 + * [!] Note This will only handle BVEC, KVEC, FOLIOQ, XARRAY and DISCARD-type 287 + * iterators; it will not handle UBUF or IOVEC-type iterators. 288 + * 289 + * A step functions, @step, must be provided, one for handling mapped kernel 290 + * addresses and the other is given user addresses which have the potential to 291 + * fault since no pinning is performed. 292 + * 293 + * The step functions are passed the address and length of the segment, @priv, 294 + * @priv2 and the amount of data so far iterated over (which can, for example, 295 + * be added to @priv to point to the right part of a second buffer). 
The step 296 + * functions should return the amount of the segment they didn't process (ie. 0 297 + * indicates complete processing). 298 + * 299 + * This function returns the amount of data processed (ie. 0 means nothing was 300 + * processed and the value of @len means it was processed to completion). 301 + */ 302 + static __always_inline 303 + size_t iterate_and_advance_kernel(struct iov_iter *iter, size_t len, void *priv, 304 + void *priv2, iov_step_f step) 305 + { 306 + if (unlikely(iter->count < len)) 307 + len = iter->count; 308 + if (unlikely(!len)) 309 + return 0; 310 + if (iov_iter_is_bvec(iter)) 311 + return iterate_bvec(iter, len, priv, priv2, step); 312 + if (iov_iter_is_kvec(iter)) 313 + return iterate_kvec(iter, len, priv, priv2, step); 314 + if (iov_iter_is_folioq(iter)) 315 + return iterate_folioq(iter, len, priv, priv2, step); 316 + if (iov_iter_is_xarray(iter)) 317 + return iterate_xarray(iter, len, priv, priv2, step); 318 + return iterate_discard(iter, len, priv, priv2, step); 329 319 } 330 320 331 321 #endif /* _LINUX_IOV_ITER_H */
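Editor's note: to illustrate the "progress" argument the kerneldoc above describes, here is a hedged sketch of a kernel step function that copies the iterated data into a flat destination buffer, using progress as the offset into the buffer passed via @priv. The example_* names are hypothetical and the destination is assumed to be at least @len bytes long.

#include <linux/iov_iter.h>
#include <linux/string.h>

static size_t example_copy_step(void *base, size_t progress, size_t len,
				void *priv, void *priv2)
{
	memcpy((u8 *)priv + progress, base, len);
	return 0;	/* whole segment consumed */
}

static size_t example_copy_from_iter(void *dst, size_t len, struct iov_iter *iter)
{
	return iterate_and_advance_kernel(iter, len, dst, NULL, example_copy_step);
}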
+28 -18
include/linux/netfs.h
··· 38 38 folio_set_private_2(folio); 39 39 } 40 40 41 - /* Marks used on xarray-based buffers */ 42 - #define NETFS_BUF_PUT_MARK XA_MARK_0 /* - Page needs putting */ 43 - #define NETFS_BUF_PAGECACHE_MARK XA_MARK_1 /* - Page needs wb/dirty flag wrangling */ 44 - 45 41 enum netfs_io_source { 42 + NETFS_SOURCE_UNKNOWN, 46 43 NETFS_FILL_WITH_ZEROES, 47 44 NETFS_DOWNLOAD_FROM_SERVER, 48 45 NETFS_READ_FROM_CACHE, ··· 70 73 #define NETFS_ICTX_ODIRECT 0 /* The file has DIO in progress */ 71 74 #define NETFS_ICTX_UNBUFFERED 1 /* I/O should not use the pagecache */ 72 75 #define NETFS_ICTX_WRITETHROUGH 2 /* Write-through caching */ 76 + #define NETFS_ICTX_MODIFIED_ATTR 3 /* Indicate change in mtime/ctime */ 73 77 }; 74 78 75 79 /* ··· 131 133 struct netfs_io_stream { 132 134 /* Submission tracking */ 133 135 struct netfs_io_subrequest *construct; /* Op being constructed */ 136 + size_t sreq_max_len; /* Maximum size of a subrequest */ 137 + unsigned int sreq_max_segs; /* 0 or max number of segments in an iterator */ 134 138 unsigned int submit_off; /* Folio offset we're submitting from */ 135 139 unsigned int submit_len; /* Amount of data left to submit */ 136 - unsigned int submit_max_len; /* Amount I/O can be rounded up to */ 140 + unsigned int submit_extendable_to; /* Amount I/O can be rounded up to */ 137 141 void (*prepare_write)(struct netfs_io_subrequest *subreq); 138 142 void (*issue_write)(struct netfs_io_subrequest *subreq); 139 143 /* Collection tracking */ ··· 176 176 struct list_head rreq_link; /* Link in rreq->subrequests */ 177 177 struct iov_iter io_iter; /* Iterator for this subrequest */ 178 178 unsigned long long start; /* Where to start the I/O */ 179 - size_t max_len; /* Maximum size of the I/O */ 180 179 size_t len; /* Size of the I/O */ 181 180 size_t transferred; /* Amount of data transferred */ 181 + size_t consumed; /* Amount of read data consumed */ 182 + size_t prev_donated; /* Amount of data donated from previous subreq */ 183 + size_t next_donated; /* Amount of data donated from next subreq */ 182 184 refcount_t ref; 183 185 short error; /* 0 or error that occurred */ 184 186 unsigned short debug_index; /* Index in list (for debugging output) */ 185 187 unsigned int nr_segs; /* Number of segs in io_iter */ 186 - unsigned int max_nr_segs; /* 0 or max number of segments in an iterator */ 187 188 enum netfs_io_source source; /* Where to read from/write to */ 188 189 unsigned char stream_nr; /* I/O stream this belongs to */ 190 + unsigned char curr_folioq_slot; /* Folio currently being read */ 191 + unsigned char curr_folio_order; /* Order of folio */ 192 + struct folio_queue *curr_folioq; /* Queue segment in which current folio resides */ 189 193 unsigned long flags; 190 194 #define NETFS_SREQ_COPY_TO_CACHE 0 /* Set if should copy the data to the cache */ 191 195 #define NETFS_SREQ_CLEAR_TAIL 1 /* Set if the rest of the read should be cleared */ 192 - #define NETFS_SREQ_SHORT_IO 2 /* Set if the I/O was short */ 193 196 #define NETFS_SREQ_SEEK_DATA_READ 3 /* Set if ->read() should SEEK_DATA first */ 194 197 #define NETFS_SREQ_NO_PROGRESS 4 /* Set if we didn't manage to read any data */ 195 198 #define NETFS_SREQ_ONDEMAND 5 /* Set if it's from on-demand read mode */ 196 199 #define NETFS_SREQ_BOUNDARY 6 /* Set if ends on hard boundary (eg. 
ceph object) */ 200 + #define NETFS_SREQ_HIT_EOF 7 /* Set if short due to EOF */ 197 201 #define NETFS_SREQ_IN_PROGRESS 8 /* Unlocked when the subrequest completes */ 198 202 #define NETFS_SREQ_NEED_RETRY 9 /* Set if the filesystem requests a retry */ 199 203 #define NETFS_SREQ_RETRYING 10 /* Set if we're retrying */ 200 204 #define NETFS_SREQ_FAILED 11 /* Set if the subreq failed unretryably */ 201 - #define NETFS_SREQ_HIT_EOF 12 /* Set if we hit the EOF */ 202 205 }; 203 206 204 207 enum netfs_io_origin { 205 208 NETFS_READAHEAD, /* This read was triggered by readahead */ 206 209 NETFS_READPAGE, /* This read is a synchronous read */ 210 + NETFS_READ_GAPS, /* This read is a synchronous read to fill gaps */ 207 211 NETFS_READ_FOR_WRITE, /* This read is to prepare a write */ 208 - NETFS_COPY_TO_CACHE, /* This write is to copy a read to the cache */ 212 + NETFS_DIO_READ, /* This is a direct I/O read */ 209 213 NETFS_WRITEBACK, /* This write was triggered by writepages */ 210 214 NETFS_WRITETHROUGH, /* This write was made by netfs_perform_write() */ 211 215 NETFS_UNBUFFERED_WRITE, /* This is an unbuffered write */ 212 - NETFS_DIO_READ, /* This is a direct I/O read */ 213 216 NETFS_DIO_WRITE, /* This is a direct I/O write */ 217 + NETFS_PGPRIV2_COPY_TO_CACHE, /* [DEPRECATED] This is writing read data to the cache */ 214 218 nr__netfs_io_origin 215 219 } __mode(byte); 216 220 ··· 231 227 struct address_space *mapping; /* The mapping being accessed */ 232 228 struct kiocb *iocb; /* AIO completion vector */ 233 229 struct netfs_cache_resources cache_resources; 230 + struct readahead_control *ractl; /* Readahead descriptor */ 234 231 struct list_head proc_link; /* Link in netfs_iorequests */ 235 232 struct list_head subrequests; /* Contributory I/O operations */ 236 233 struct netfs_io_stream io_streams[2]; /* Streams of parallel I/O operations */ 237 234 #define NR_IO_STREAMS 2 //wreq->nr_io_streams 238 235 struct netfs_group *group; /* Writeback group being written back */ 236 + struct folio_queue *buffer; /* Head of I/O buffer */ 237 + struct folio_queue *buffer_tail; /* Tail of I/O buffer */ 239 238 struct iov_iter iter; /* Unencrypted-side iterator */ 240 239 struct iov_iter io_iter; /* I/O (Encrypted-side) iterator */ 241 240 void *netfs_priv; /* Private data for the netfs */ ··· 252 245 unsigned int nr_group_rel; /* Number of refs to release on ->group */ 253 246 spinlock_t lock; /* Lock for queuing subreqs */ 254 247 atomic_t nr_outstanding; /* Number of ops in progress */ 255 - atomic_t nr_copy_ops; /* Number of copy-to-cache ops in progress */ 256 - size_t upper_len; /* Length can be extended to here */ 257 248 unsigned long long submitted; /* Amount submitted for I/O so far */ 258 249 unsigned long long len; /* Length of the request */ 259 250 size_t transferred; /* Amount to be indicated as transferred */ 260 - short error; /* 0 or error that occurred */ 251 + long error; /* 0 or error that occurred */ 261 252 enum netfs_io_origin origin; /* Origin of the request */ 262 253 bool direct_bv_unpin; /* T if direct_bv[] must be unpinned */ 254 + u8 buffer_head_slot; /* First slot in ->buffer */ 255 + u8 buffer_tail_slot; /* Next slot in ->buffer_tail */ 263 256 unsigned long long i_size; /* Size of the file */ 264 257 unsigned long long start; /* Start position */ 265 258 atomic64_t issued_to; /* Write issuer folio cursor */ 266 - unsigned long long contiguity; /* Tracking for gaps in the writeback sequence */ 267 259 unsigned long long collected_to; /* Point we've collected to */ 268 260 
unsigned long long cleaned_to; /* Position we've cleaned folios to */ 269 261 pgoff_t no_unlock_folio; /* Don't unlock this folio after read */ 262 + size_t prev_donated; /* Fallback for subreq->prev_donated */ 270 263 refcount_t ref; 271 264 unsigned long flags; 272 - #define NETFS_RREQ_INCOMPLETE_IO 0 /* Some ioreqs terminated short or with error */ 273 265 #define NETFS_RREQ_COPY_TO_CACHE 1 /* Need to write to the cache */ 274 266 #define NETFS_RREQ_NO_UNLOCK_FOLIO 2 /* Don't unlock no_unlock_folio on completion */ 275 267 #define NETFS_RREQ_DONT_UNLOCK_FOLIOS 3 /* Don't unlock the folios on completion */ ··· 280 274 #define NETFS_RREQ_PAUSE 11 /* Pause subrequest generation */ 281 275 #define NETFS_RREQ_USE_IO_ITER 12 /* Use ->io_iter rather than ->i_pages */ 282 276 #define NETFS_RREQ_ALL_QUEUED 13 /* All subreqs are now queued */ 277 + #define NETFS_RREQ_NEED_RETRY 14 /* Need to try retrying */ 283 278 #define NETFS_RREQ_USE_PGPRIV2 31 /* [DEPRECATED] Use PG_private_2 to mark 284 279 * write to cache on read */ 285 280 const struct netfs_request_ops *netfs_ops; ··· 299 292 300 293 /* Read request handling */ 301 294 void (*expand_readahead)(struct netfs_io_request *rreq); 302 - bool (*clamp_length)(struct netfs_io_subrequest *subreq); 295 + int (*prepare_read)(struct netfs_io_subrequest *subreq); 303 296 void (*issue_read)(struct netfs_io_subrequest *subreq); 304 297 bool (*is_still_valid)(struct netfs_io_request *rreq); 305 298 int (*check_write_begin)(struct file *file, loff_t pos, unsigned len, ··· 429 422 vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_group); 430 423 431 424 /* (Sub)request management API. */ 432 - void netfs_subreq_terminated(struct netfs_io_subrequest *, ssize_t, bool); 425 + void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq, 426 + bool was_async); 427 + void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq, 428 + int error, bool was_async); 433 429 void netfs_get_subrequest(struct netfs_io_subrequest *subreq, 434 430 enum netfs_sreq_ref_trace what); 435 431 void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
+18
include/linux/uio.h
··· 11 11 #include <uapi/linux/uio.h> 12 12 13 13 struct page; 14 + struct folio_queue; 14 15 15 16 typedef unsigned int __bitwise iov_iter_extraction_t; 16 17 ··· 26 25 ITER_IOVEC, 27 26 ITER_BVEC, 28 27 ITER_KVEC, 28 + ITER_FOLIOQ, 29 29 ITER_XARRAY, 30 30 ITER_DISCARD, 31 31 }; ··· 68 66 const struct iovec *__iov; 69 67 const struct kvec *kvec; 70 68 const struct bio_vec *bvec; 69 + const struct folio_queue *folioq; 71 70 struct xarray *xarray; 72 71 void __user *ubuf; 73 72 }; ··· 77 74 }; 78 75 union { 79 76 unsigned long nr_segs; 77 + u8 folioq_slot; 80 78 loff_t xarray_start; 81 79 }; 82 80 }; ··· 128 124 static inline bool iov_iter_is_discard(const struct iov_iter *i) 129 125 { 130 126 return iov_iter_type(i) == ITER_DISCARD; 127 + } 128 + 129 + static inline bool iov_iter_is_folioq(const struct iov_iter *i) 130 + { 131 + return iov_iter_type(i) == ITER_FOLIOQ; 131 132 } 132 133 133 134 static inline bool iov_iter_is_xarray(const struct iov_iter *i) ··· 187 178 size_t bytes, struct iov_iter *i) 188 179 { 189 180 return copy_page_to_iter(&folio->page, offset, bytes, i); 181 + } 182 + 183 + static inline size_t copy_folio_from_iter(struct folio *folio, size_t offset, 184 + size_t bytes, struct iov_iter *i) 185 + { 186 + return copy_page_from_iter(&folio->page, offset, bytes, i); 190 187 } 191 188 192 189 static inline size_t copy_folio_from_iter_atomic(struct folio *folio, ··· 288 273 void iov_iter_bvec(struct iov_iter *i, unsigned int direction, const struct bio_vec *bvec, 289 274 unsigned long nr_segs, size_t count); 290 275 void iov_iter_discard(struct iov_iter *i, unsigned int direction, size_t count); 276 + void iov_iter_folio_queue(struct iov_iter *i, unsigned int direction, 277 + const struct folio_queue *folioq, 278 + unsigned int first_slot, unsigned int offset, size_t count); 291 279 void iov_iter_xarray(struct iov_iter *i, unsigned int direction, struct xarray *xarray, 292 280 loff_t start, size_t count); 293 281 ssize_t iov_iter_get_pages2(struct iov_iter *i, struct page **pages,
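Editor's note: the uio.h hunk adds both the ITER_FOLIOQ plumbing and the copy_folio_from_iter() wrapper mentioned in the merge summary. A two-line sketch of the wrapper in use, assuming the caller already holds a reference on the folio; example_fill_folio() is hypothetical.

static size_t example_fill_folio(struct folio *folio, struct iov_iter *from)
{
	size_t want = min(folio_size(folio), iov_iter_count(from));

	return copy_folio_from_iter(folio, 0, want, from);
}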
+104 -40
include/trace/events/netfs.h
··· 20 20 EM(netfs_read_trace_expanded, "EXPANDED ") \ 21 21 EM(netfs_read_trace_readahead, "READAHEAD") \ 22 22 EM(netfs_read_trace_readpage, "READPAGE ") \ 23 + EM(netfs_read_trace_read_gaps, "READ-GAPS") \ 23 24 EM(netfs_read_trace_prefetch_for_write, "PREFETCHW") \ 24 25 E_(netfs_read_trace_write_begin, "WRITEBEGN") 25 26 ··· 34 33 #define netfs_rreq_origins \ 35 34 EM(NETFS_READAHEAD, "RA") \ 36 35 EM(NETFS_READPAGE, "RP") \ 36 + EM(NETFS_READ_GAPS, "RG") \ 37 37 EM(NETFS_READ_FOR_WRITE, "RW") \ 38 - EM(NETFS_COPY_TO_CACHE, "CC") \ 38 + EM(NETFS_DIO_READ, "DR") \ 39 39 EM(NETFS_WRITEBACK, "WB") \ 40 40 EM(NETFS_WRITETHROUGH, "WT") \ 41 41 EM(NETFS_UNBUFFERED_WRITE, "UW") \ 42 - EM(NETFS_DIO_READ, "DR") \ 43 - E_(NETFS_DIO_WRITE, "DW") 42 + EM(NETFS_DIO_WRITE, "DW") \ 43 + E_(NETFS_PGPRIV2_COPY_TO_CACHE, "2C") 44 44 45 45 #define netfs_rreq_traces \ 46 46 EM(netfs_rreq_trace_assess, "ASSESS ") \ ··· 62 60 E_(netfs_rreq_trace_write_done, "WR-DONE") 63 61 64 62 #define netfs_sreq_sources \ 63 + EM(NETFS_SOURCE_UNKNOWN, "----") \ 65 64 EM(NETFS_FILL_WITH_ZEROES, "ZERO") \ 66 65 EM(NETFS_DOWNLOAD_FROM_SERVER, "DOWN") \ 67 66 EM(NETFS_READ_FROM_CACHE, "READ") \ ··· 72 69 E_(NETFS_INVALID_WRITE, "INVL") 73 70 74 71 #define netfs_sreq_traces \ 72 + EM(netfs_sreq_trace_add_donations, "+DON ") \ 73 + EM(netfs_sreq_trace_added, "ADD ") \ 74 + EM(netfs_sreq_trace_clear, "CLEAR") \ 75 75 EM(netfs_sreq_trace_discard, "DSCRD") \ 76 + EM(netfs_sreq_trace_donate_to_prev, "DON-P") \ 77 + EM(netfs_sreq_trace_donate_to_next, "DON-N") \ 76 78 EM(netfs_sreq_trace_download_instead, "RDOWN") \ 77 79 EM(netfs_sreq_trace_fail, "FAIL ") \ 78 80 EM(netfs_sreq_trace_free, "FREE ") \ 81 + EM(netfs_sreq_trace_hit_eof, "EOF ") \ 82 + EM(netfs_sreq_trace_io_progress, "IO ") \ 79 83 EM(netfs_sreq_trace_limited, "LIMIT") \ 80 84 EM(netfs_sreq_trace_prepare, "PREP ") \ 81 85 EM(netfs_sreq_trace_prep_failed, "PRPFL") \ 82 - EM(netfs_sreq_trace_resubmit_short, "SHORT") \ 86 + EM(netfs_sreq_trace_progress, "PRGRS") \ 87 + EM(netfs_sreq_trace_reprep_failed, "REPFL") \ 83 88 EM(netfs_sreq_trace_retry, "RETRY") \ 89 + EM(netfs_sreq_trace_short, "SHORT") \ 90 + EM(netfs_sreq_trace_split, "SPLIT") \ 84 91 EM(netfs_sreq_trace_submit, "SUBMT") \ 85 92 EM(netfs_sreq_trace_terminated, "TERM ") \ 86 93 EM(netfs_sreq_trace_write, "WRITE") \ ··· 131 118 EM(netfs_sreq_trace_new, "NEW ") \ 132 119 EM(netfs_sreq_trace_put_cancel, "PUT CANCEL ") \ 133 120 EM(netfs_sreq_trace_put_clear, "PUT CLEAR ") \ 134 - EM(netfs_sreq_trace_put_discard, "PUT DISCARD") \ 121 + EM(netfs_sreq_trace_put_consumed, "PUT CONSUME") \ 135 122 EM(netfs_sreq_trace_put_done, "PUT DONE ") \ 136 123 EM(netfs_sreq_trace_put_failed, "PUT FAILED ") \ 137 124 EM(netfs_sreq_trace_put_merged, "PUT MERGED ") \ ··· 142 129 E_(netfs_sreq_trace_put_terminated, "PUT TERM ") 143 130 144 131 #define netfs_folio_traces \ 145 - /* The first few correspond to enum netfs_how_to_modify */ \ 146 132 EM(netfs_folio_is_uptodate, "mod-uptodate") \ 147 133 EM(netfs_just_prefetch, "mod-prefetch") \ 148 134 EM(netfs_whole_folio_modify, "mod-whole-f") \ ··· 151 139 EM(netfs_flush_content, "flush") \ 152 140 EM(netfs_streaming_filled_page, "mod-streamw-f") \ 153 141 EM(netfs_streaming_cont_filled_page, "mod-streamw-f+") \ 154 - /* The rest are for writeback */ \ 142 + EM(netfs_folio_trace_abandon, "abandon") \ 155 143 EM(netfs_folio_trace_cancel_copy, "cancel-copy") \ 144 + EM(netfs_folio_trace_cancel_store, "cancel-store") \ 156 145 EM(netfs_folio_trace_clear, "clear") \ 157 146 
EM(netfs_folio_trace_clear_cc, "clear-cc") \ 158 147 EM(netfs_folio_trace_clear_g, "clear-g") \ ··· 168 155 EM(netfs_folio_trace_mkwrite, "mkwrite") \ 169 156 EM(netfs_folio_trace_mkwrite_plus, "mkwrite+") \ 170 157 EM(netfs_folio_trace_not_under_wback, "!wback") \ 158 + EM(netfs_folio_trace_put, "put") \ 159 + EM(netfs_folio_trace_read, "read") \ 160 + EM(netfs_folio_trace_read_done, "read-done") \ 171 161 EM(netfs_folio_trace_read_gaps, "read-gaps") \ 162 + EM(netfs_folio_trace_read_put, "read-put") \ 163 + EM(netfs_folio_trace_read_unlock, "read-unlock") \ 172 164 EM(netfs_folio_trace_redirtied, "redirtied") \ 173 165 EM(netfs_folio_trace_store, "store") \ 174 166 EM(netfs_folio_trace_store_copy, "store-copy") \ ··· 185 167 EM(netfs_contig_trace_collect, "Collect") \ 186 168 EM(netfs_contig_trace_jump, "-->JUMP-->") \ 187 169 E_(netfs_contig_trace_unlock, "Unlock") 170 + 171 + #define netfs_donate_traces \ 172 + EM(netfs_trace_donate_tail_to_prev, "tail-to-prev") \ 173 + EM(netfs_trace_donate_to_prev, "to-prev") \ 174 + EM(netfs_trace_donate_to_next, "to-next") \ 175 + E_(netfs_trace_donate_to_deferred_next, "defer-next") 188 176 189 177 #ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY 190 178 #define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY ··· 209 185 enum netfs_sreq_ref_trace { netfs_sreq_ref_traces } __mode(byte); 210 186 enum netfs_folio_trace { netfs_folio_traces } __mode(byte); 211 187 enum netfs_collect_contig_trace { netfs_collect_contig_traces } __mode(byte); 188 + enum netfs_donate_trace { netfs_donate_traces } __mode(byte); 212 189 213 190 #endif 214 191 ··· 232 207 netfs_sreq_ref_traces; 233 208 netfs_folio_traces; 234 209 netfs_collect_contig_traces; 210 + netfs_donate_traces; 235 211 236 212 /* 237 213 * Now redefine the EM() and E_() macros to map the enums to the strings that ··· 253 227 TP_STRUCT__entry( 254 228 __field(unsigned int, rreq ) 255 229 __field(unsigned int, cookie ) 230 + __field(loff_t, i_size ) 256 231 __field(loff_t, start ) 257 232 __field(size_t, len ) 258 233 __field(enum netfs_read_trace, what ) ··· 263 236 TP_fast_assign( 264 237 __entry->rreq = rreq->debug_id; 265 238 __entry->cookie = rreq->cache_resources.debug_id; 239 + __entry->i_size = rreq->i_size; 266 240 __entry->start = start; 267 241 __entry->len = len; 268 242 __entry->what = what; 269 243 __entry->netfs_inode = rreq->inode->i_ino; 270 244 ), 271 245 272 - TP_printk("R=%08x %s c=%08x ni=%x s=%llx %zx", 246 + TP_printk("R=%08x %s c=%08x ni=%x s=%llx l=%zx sz=%llx", 273 247 __entry->rreq, 274 248 __print_symbolic(__entry->what, netfs_read_traces), 275 249 __entry->cookie, 276 250 __entry->netfs_inode, 277 - __entry->start, __entry->len) 251 + __entry->start, __entry->len, __entry->i_size) 278 252 ); 279 253 280 254 TRACE_EVENT(netfs_rreq, ··· 541 513 __entry->start + __entry->len) 542 514 ); 543 515 544 - TRACE_EVENT(netfs_collect_contig, 545 - TP_PROTO(const struct netfs_io_request *wreq, unsigned long long to, 546 - enum netfs_collect_contig_trace type), 547 - 548 - TP_ARGS(wreq, to, type), 549 - 550 - TP_STRUCT__entry( 551 - __field(unsigned int, wreq) 552 - __field(enum netfs_collect_contig_trace, type) 553 - __field(unsigned long long, contiguity) 554 - __field(unsigned long long, to) 555 - ), 556 - 557 - TP_fast_assign( 558 - __entry->wreq = wreq->debug_id; 559 - __entry->type = type; 560 - __entry->contiguity = wreq->contiguity; 561 - __entry->to = to; 562 - ), 563 - 564 - TP_printk("R=%08x %llx -> %llx %s", 565 - __entry->wreq, 566 - __entry->contiguity, 567 - __entry->to, 568 - 
__print_symbolic(__entry->type, netfs_collect_contig_traces)) 569 - ); 570 - 571 516 TRACE_EVENT(netfs_collect_sreq, 572 517 TP_PROTO(const struct netfs_io_request *wreq, 573 518 const struct netfs_io_subrequest *subreq), ··· 612 611 __field(unsigned int, notes ) 613 612 __field(unsigned long long, collected_to ) 614 613 __field(unsigned long long, cleaned_to ) 615 - __field(unsigned long long, contiguity ) 616 614 ), 617 615 618 616 TP_fast_assign( ··· 619 619 __entry->notes = notes; 620 620 __entry->collected_to = collected_to; 621 621 __entry->cleaned_to = wreq->cleaned_to; 622 - __entry->contiguity = wreq->contiguity; 623 622 ), 624 623 625 - TP_printk("R=%08x cto=%llx fto=%llx ctg=%llx n=%x", 624 + TP_printk("R=%08x col=%llx cln=%llx n=%x", 626 625 __entry->wreq, __entry->collected_to, 627 - __entry->cleaned_to, __entry->contiguity, 626 + __entry->cleaned_to, 628 627 __entry->notes) 629 628 ); 630 629 ··· 678 679 TP_printk("R=%08x[%x:] cto=%llx frn=%llx", 679 680 __entry->wreq, __entry->stream, 680 681 __entry->collected_to, __entry->front) 682 + ); 683 + 684 + TRACE_EVENT(netfs_progress, 685 + TP_PROTO(const struct netfs_io_subrequest *subreq, 686 + unsigned long long start, size_t avail, size_t part), 687 + 688 + TP_ARGS(subreq, start, avail, part), 689 + 690 + TP_STRUCT__entry( 691 + __field(unsigned int, rreq) 692 + __field(unsigned int, subreq) 693 + __field(unsigned int, consumed) 694 + __field(unsigned int, transferred) 695 + __field(unsigned long long, f_start) 696 + __field(unsigned int, f_avail) 697 + __field(unsigned int, f_part) 698 + __field(unsigned char, slot) 699 + ), 700 + 701 + TP_fast_assign( 702 + __entry->rreq = subreq->rreq->debug_id; 703 + __entry->subreq = subreq->debug_index; 704 + __entry->consumed = subreq->consumed; 705 + __entry->transferred = subreq->transferred; 706 + __entry->f_start = start; 707 + __entry->f_avail = avail; 708 + __entry->f_part = part; 709 + __entry->slot = subreq->curr_folioq_slot; 710 + ), 711 + 712 + TP_printk("R=%08x[%02x] s=%llx ct=%x/%x pa=%x/%x sl=%x", 713 + __entry->rreq, __entry->subreq, __entry->f_start, 714 + __entry->consumed, __entry->transferred, 715 + __entry->f_part, __entry->f_avail, __entry->slot) 716 + ); 717 + 718 + TRACE_EVENT(netfs_donate, 719 + TP_PROTO(const struct netfs_io_request *rreq, 720 + const struct netfs_io_subrequest *from, 721 + const struct netfs_io_subrequest *to, 722 + size_t amount, 723 + enum netfs_donate_trace trace), 724 + 725 + TP_ARGS(rreq, from, to, amount, trace), 726 + 727 + TP_STRUCT__entry( 728 + __field(unsigned int, rreq) 729 + __field(unsigned int, from) 730 + __field(unsigned int, to) 731 + __field(unsigned int, amount) 732 + __field(enum netfs_donate_trace, trace) 733 + ), 734 + 735 + TP_fast_assign( 736 + __entry->rreq = rreq->debug_id; 737 + __entry->from = from->debug_index; 738 + __entry->to = to ? to->debug_index : -1; 739 + __entry->amount = amount; 740 + __entry->trace = trace; 741 + ), 742 + 743 + TP_printk("R=%08x[%02x] -> [%02x] %s am=%x", 744 + __entry->rreq, __entry->from, __entry->to, 745 + __print_symbolic(__entry->trace, netfs_donate_traces), 746 + __entry->amount) 681 747 ); 682 748 683 749 #undef EM
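For reference, the EM()/E_() tables above follow the standard trace-header idiom: the same table is expanded several times with different definitions of EM() and E_(), so that one list yields the enum, the TRACE_DEFINE_ENUM() exports, and the { value, string } pairs that __print_symbolic() consumes in TP_printk(). A minimal sketch of the pattern with made-up names (illustrative only, not the exact netfs definitions):

#define example_traces \
	EM(example_trace_submit,     "SUBMT") \
	E_(example_trace_terminated, "TERM ")

/* Expansion 1: declare the enum constants. */
#undef EM
#undef E_
#define EM(a, b) a,
#define E_(a, b) a
enum example_trace { example_traces } __mode(byte);

/* Expansion 2: export the values to the tracing core. */
#undef EM
#undef E_
#define EM(a, b) TRACE_DEFINE_ENUM(a);
#define E_(a, b) TRACE_DEFINE_ENUM(a);
example_traces;

/* Expansion 3: build { value, "label" } pairs for __print_symbolic(). */
#undef EM
#undef E_
#define EM(a, b) { a, b },
#define E_(a, b) { a, b }
/* ... then TP_printk() does __print_symbolic(__entry->what, example_traces). */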
+238 -2
lib/iov_iter.c
··· 527 527 i->__iov = iov; 528 528 } 529 529 530 + static void iov_iter_folioq_advance(struct iov_iter *i, size_t size) 531 + { 532 + const struct folio_queue *folioq = i->folioq; 533 + unsigned int slot = i->folioq_slot; 534 + 535 + if (!i->count) 536 + return; 537 + i->count -= size; 538 + 539 + if (slot >= folioq_nr_slots(folioq)) { 540 + folioq = folioq->next; 541 + slot = 0; 542 + } 543 + 544 + size += i->iov_offset; /* From beginning of current segment. */ 545 + do { 546 + size_t fsize = folioq_folio_size(folioq, slot); 547 + 548 + if (likely(size < fsize)) 549 + break; 550 + size -= fsize; 551 + slot++; 552 + if (slot >= folioq_nr_slots(folioq) && folioq->next) { 553 + folioq = folioq->next; 554 + slot = 0; 555 + } 556 + } while (size); 557 + 558 + i->iov_offset = size; 559 + i->folioq_slot = slot; 560 + i->folioq = folioq; 561 + } 562 + 530 563 void iov_iter_advance(struct iov_iter *i, size_t size) 531 564 { 532 565 if (unlikely(i->count < size)) ··· 572 539 iov_iter_iovec_advance(i, size); 573 540 } else if (iov_iter_is_bvec(i)) { 574 541 iov_iter_bvec_advance(i, size); 542 + } else if (iov_iter_is_folioq(i)) { 543 + iov_iter_folioq_advance(i, size); 575 544 } else if (iov_iter_is_discard(i)) { 576 545 i->count -= size; 577 546 } 578 547 } 579 548 EXPORT_SYMBOL(iov_iter_advance); 549 + 550 + static void iov_iter_folioq_revert(struct iov_iter *i, size_t unroll) 551 + { 552 + const struct folio_queue *folioq = i->folioq; 553 + unsigned int slot = i->folioq_slot; 554 + 555 + for (;;) { 556 + size_t fsize; 557 + 558 + if (slot == 0) { 559 + folioq = folioq->prev; 560 + slot = folioq_nr_slots(folioq); 561 + } 562 + slot--; 563 + 564 + fsize = folioq_folio_size(folioq, slot); 565 + if (unroll <= fsize) { 566 + i->iov_offset = fsize - unroll; 567 + break; 568 + } 569 + unroll -= fsize; 570 + } 571 + 572 + i->folioq_slot = slot; 573 + i->folioq = folioq; 574 + } 580 575 581 576 void iov_iter_revert(struct iov_iter *i, size_t unroll) 582 577 { ··· 637 576 } 638 577 unroll -= n; 639 578 } 579 + } else if (iov_iter_is_folioq(i)) { 580 + i->iov_offset = 0; 581 + iov_iter_folioq_revert(i, unroll); 640 582 } else { /* same logics for iovec and kvec */ 641 583 const struct iovec *iov = iter_iov(i); 642 584 while (1) { ··· 667 603 if (iov_iter_is_bvec(i)) 668 604 return min(i->count, i->bvec->bv_len - i->iov_offset); 669 605 } 606 + if (unlikely(iov_iter_is_folioq(i))) 607 + return !i->count ? 0 : 608 + umin(folioq_folio_size(i->folioq, i->folioq_slot), i->count); 670 609 return i->count; 671 610 } 672 611 EXPORT_SYMBOL(iov_iter_single_seg_count); ··· 705 638 }; 706 639 } 707 640 EXPORT_SYMBOL(iov_iter_bvec); 641 + 642 + /** 643 + * iov_iter_folio_queue - Initialise an I/O iterator to use the folios in a folio queue 644 + * @i: The iterator to initialise. 645 + * @direction: The direction of the transfer. 646 + * @folioq: The starting point in the folio queue. 647 + * @first_slot: The first slot in the folio queue to use 648 + * @offset: The offset into the folio in the first slot to start at 649 + * @count: The size of the I/O buffer in bytes. 650 + * 651 + * Set up an I/O iterator to either draw data out of the pages attached to an 652 + * inode or to inject data into those pages. The pages *must* be prevented 653 + * from evaporation, either by taking a ref on them or locking them by the 654 + * caller. 
655 + */ 656 + void iov_iter_folio_queue(struct iov_iter *i, unsigned int direction, 657 + const struct folio_queue *folioq, unsigned int first_slot, 658 + unsigned int offset, size_t count) 659 + { 660 + BUG_ON(direction & ~1); 661 + *i = (struct iov_iter) { 662 + .iter_type = ITER_FOLIOQ, 663 + .data_source = direction, 664 + .folioq = folioq, 665 + .folioq_slot = first_slot, 666 + .count = count, 667 + .iov_offset = offset, 668 + }; 669 + } 670 + EXPORT_SYMBOL(iov_iter_folio_queue); 708 671 709 672 /** 710 673 * iov_iter_xarray - Initialise an I/O iterator to use the pages in an xarray ··· 862 765 if (iov_iter_is_bvec(i)) 863 766 return iov_iter_aligned_bvec(i, addr_mask, len_mask); 864 767 768 + /* With both xarray and folioq types, we're dealing with whole folios. */ 865 769 if (iov_iter_is_xarray(i)) { 866 770 if (i->count & len_mask) 867 771 return false; 868 772 if ((i->xarray_start + i->iov_offset) & addr_mask) 773 + return false; 774 + } 775 + if (iov_iter_is_folioq(i)) { 776 + if (i->count & len_mask) 777 + return false; 778 + if (i->iov_offset & addr_mask) 869 779 return false; 870 780 } 871 781 ··· 939 835 if (iov_iter_is_bvec(i)) 940 836 return iov_iter_alignment_bvec(i); 941 837 838 + /* With both xarray and folioq types, we're dealing with whole folios. */ 839 + if (iov_iter_is_folioq(i)) 840 + return i->iov_offset | i->count; 942 841 if (iov_iter_is_xarray(i)) 943 842 return (i->xarray_start + i->iov_offset) | i->count; 944 843 ··· 992 885 return 0; 993 886 } 994 887 return count; 888 + } 889 + 890 + static ssize_t iter_folioq_get_pages(struct iov_iter *iter, 891 + struct page ***ppages, size_t maxsize, 892 + unsigned maxpages, size_t *_start_offset) 893 + { 894 + const struct folio_queue *folioq = iter->folioq; 895 + struct page **pages; 896 + unsigned int slot = iter->folioq_slot; 897 + size_t extracted = 0, count = iter->count, iov_offset = iter->iov_offset; 898 + 899 + if (slot >= folioq_nr_slots(folioq)) { 900 + folioq = folioq->next; 901 + slot = 0; 902 + if (WARN_ON(iov_offset != 0)) 903 + return -EIO; 904 + } 905 + 906 + maxpages = want_pages_array(ppages, maxsize, iov_offset & ~PAGE_MASK, maxpages); 907 + if (!maxpages) 908 + return -ENOMEM; 909 + *_start_offset = iov_offset & ~PAGE_MASK; 910 + pages = *ppages; 911 + 912 + for (;;) { 913 + struct folio *folio = folioq_folio(folioq, slot); 914 + size_t offset = iov_offset, fsize = folioq_folio_size(folioq, slot); 915 + size_t part = PAGE_SIZE - offset % PAGE_SIZE; 916 + 917 + part = umin(part, umin(maxsize - extracted, fsize - offset)); 918 + count -= part; 919 + iov_offset += part; 920 + extracted += part; 921 + 922 + *pages = folio_page(folio, offset / PAGE_SIZE); 923 + get_page(*pages); 924 + pages++; 925 + maxpages--; 926 + if (maxpages == 0 || extracted >= maxsize) 927 + break; 928 + 929 + if (offset >= fsize) { 930 + iov_offset = 0; 931 + slot++; 932 + if (slot == folioq_nr_slots(folioq) && folioq->next) { 933 + folioq = folioq->next; 934 + slot = 0; 935 + } 936 + } 937 + } 938 + 939 + iter->count = count; 940 + iter->iov_offset = iov_offset; 941 + iter->folioq = folioq; 942 + iter->folioq_slot = slot; 943 + return extracted; 995 944 } 996 945 997 946 static ssize_t iter_xarray_populate_pages(struct page **pages, struct xarray *xa, ··· 1197 1034 } 1198 1035 return maxsize; 1199 1036 } 1037 + if (iov_iter_is_folioq(i)) 1038 + return iter_folioq_get_pages(i, pages, maxsize, maxpages, start); 1200 1039 if (iov_iter_is_xarray(i)) 1201 1040 return iter_xarray_get_pages(i, pages, maxsize, maxpages, start); 1202 
1041 return -EFAULT; ··· 1283 1118 return iov_npages(i, maxpages); 1284 1119 if (iov_iter_is_bvec(i)) 1285 1120 return bvec_npages(i, maxpages); 1121 + if (iov_iter_is_folioq(i)) { 1122 + unsigned offset = i->iov_offset % PAGE_SIZE; 1123 + int npages = DIV_ROUND_UP(offset + i->count, PAGE_SIZE); 1124 + return min(npages, maxpages); 1125 + } 1286 1126 if (iov_iter_is_xarray(i)) { 1287 1127 unsigned offset = (i->xarray_start + i->iov_offset) % PAGE_SIZE; 1288 1128 int npages = DIV_ROUND_UP(offset + i->count, PAGE_SIZE); ··· 1569 1399 } 1570 1400 1571 1401 /* 1402 + * Extract a list of contiguous pages from an ITER_FOLIOQ iterator. This does 1403 + * not get references on the pages, nor does it get a pin on them. 1404 + */ 1405 + static ssize_t iov_iter_extract_folioq_pages(struct iov_iter *i, 1406 + struct page ***pages, size_t maxsize, 1407 + unsigned int maxpages, 1408 + iov_iter_extraction_t extraction_flags, 1409 + size_t *offset0) 1410 + { 1411 + const struct folio_queue *folioq = i->folioq; 1412 + struct page **p; 1413 + unsigned int nr = 0; 1414 + size_t extracted = 0, offset, slot = i->folioq_slot; 1415 + 1416 + if (slot >= folioq_nr_slots(folioq)) { 1417 + folioq = folioq->next; 1418 + slot = 0; 1419 + if (WARN_ON(i->iov_offset != 0)) 1420 + return -EIO; 1421 + } 1422 + 1423 + offset = i->iov_offset & ~PAGE_MASK; 1424 + *offset0 = offset; 1425 + 1426 + maxpages = want_pages_array(pages, maxsize, offset, maxpages); 1427 + if (!maxpages) 1428 + return -ENOMEM; 1429 + p = *pages; 1430 + 1431 + for (;;) { 1432 + struct folio *folio = folioq_folio(folioq, slot); 1433 + size_t offset = i->iov_offset, fsize = folioq_folio_size(folioq, slot); 1434 + size_t part = PAGE_SIZE - offset % PAGE_SIZE; 1435 + 1436 + if (offset < fsize) { 1437 + part = umin(part, umin(maxsize - extracted, fsize - offset)); 1438 + i->count -= part; 1439 + i->iov_offset += part; 1440 + extracted += part; 1441 + 1442 + p[nr++] = folio_page(folio, offset / PAGE_SIZE); 1443 + } 1444 + 1445 + if (nr >= maxpages || extracted >= maxsize) 1446 + break; 1447 + 1448 + if (i->iov_offset >= fsize) { 1449 + i->iov_offset = 0; 1450 + slot++; 1451 + if (slot == folioq_nr_slots(folioq) && folioq->next) { 1452 + folioq = folioq->next; 1453 + slot = 0; 1454 + } 1455 + } 1456 + } 1457 + 1458 + i->folioq = folioq; 1459 + i->folioq_slot = slot; 1460 + return extracted; 1461 + } 1462 + 1463 + /* 1572 1464 * Extract a list of contiguous pages from an ITER_XARRAY iterator. This does not 1573 1465 * get references on the pages, nor does it get a pin on them. 1574 1466 */ ··· 1850 1618 * added to the pages, but refs will not be taken. 1851 1619 * iov_iter_extract_will_pin() will return true. 1852 1620 * 1853 - * (*) If the iterator is ITER_KVEC, ITER_BVEC or ITER_XARRAY, the pages are 1854 - * merely listed; no extra refs or pins are obtained. 1621 + * (*) If the iterator is ITER_KVEC, ITER_BVEC, ITER_FOLIOQ or ITER_XARRAY, the 1622 + * pages are merely listed; no extra refs or pins are obtained. 1855 1623 * iov_iter_extract_will_pin() will return 0. 1856 1624 * 1857 1625 * Note also: ··· 1886 1654 return iov_iter_extract_bvec_pages(i, pages, maxsize, 1887 1655 maxpages, extraction_flags, 1888 1656 offset0); 1657 + if (iov_iter_is_folioq(i)) 1658 + return iov_iter_extract_folioq_pages(i, pages, maxsize, 1659 + maxpages, extraction_flags, 1660 + offset0); 1889 1661 if (iov_iter_is_xarray(i)) 1890 1662 return iov_iter_extract_xarray_pages(i, pages, maxsize, 1891 1663 maxpages, extraction_flags,
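The new ITER_FOLIOQ type is driven through the ordinary iov_iter entry points, so a caller only builds the folio_queue chain and initialises the iterator. A minimal sketch of a hypothetical caller (illustrative names; assumes the folios are already referenced or locked as the kerneldoc above requires and that len fits within them; error handling elided):

#include <linux/folio_queue.h>
#include <linux/uio.h>

static size_t example_copy_into_folioq(struct folio *a, struct folio *b,
					const void *src, size_t len)
{
	struct folio_queue fq;
	struct iov_iter iter;

	folioq_init(&fq);
	folioq_append(&fq, a);		/* slot 0 */
	folioq_append(&fq, b);		/* slot 1 */

	/* Destination iterator covering len bytes from slot 0, offset 0. */
	iov_iter_folio_queue(&iter, READ, &fq, 0, 0, len);

	/* The usual operations (advance, revert, copy, extract) now apply. */
	return copy_to_iter(src, len, &iter);
}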
+259
lib/kunit_iov_iter.c
··· 12 12 #include <linux/mm.h> 13 13 #include <linux/uio.h> 14 14 #include <linux/bvec.h> 15 + #include <linux/folio_queue.h> 15 16 #include <kunit/test.h> 16 17 17 18 MODULE_DESCRIPTION("iov_iter testing"); ··· 62 61 release_pages(pages, got); 63 62 KUNIT_ASSERT_EQ(test, got, npages); 64 63 } 64 + 65 + for (int i = 0; i < npages; i++) 66 + pages[i]->index = i; 65 67 66 68 buffer = vmap(pages, npages, VM_MAP | VM_MAP_PUT_PAGES, PAGE_KERNEL); 67 69 KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer); ··· 350 346 351 347 for (j = pr->from; j < pr->to; j++) { 352 348 buffer[i++] = pattern(patt + j); 349 + if (i >= bufsize) 350 + goto stop; 351 + } 352 + } 353 + stop: 354 + 355 + /* Compare the images */ 356 + for (i = 0; i < bufsize; i++) { 357 + KUNIT_EXPECT_EQ_MSG(test, scratch[i], buffer[i], "at i=%x", i); 358 + if (scratch[i] != buffer[i]) 359 + return; 360 + } 361 + 362 + KUNIT_SUCCEED(test); 363 + } 364 + 365 + static void iov_kunit_destroy_folioq(void *data) 366 + { 367 + struct folio_queue *folioq, *next; 368 + 369 + for (folioq = data; folioq; folioq = next) { 370 + next = folioq->next; 371 + for (int i = 0; i < folioq_nr_slots(folioq); i++) 372 + if (folioq_folio(folioq, i)) 373 + folio_put(folioq_folio(folioq, i)); 374 + kfree(folioq); 375 + } 376 + } 377 + 378 + static void __init iov_kunit_load_folioq(struct kunit *test, 379 + struct iov_iter *iter, int dir, 380 + struct folio_queue *folioq, 381 + struct page **pages, size_t npages) 382 + { 383 + struct folio_queue *p = folioq; 384 + size_t size = 0; 385 + int i; 386 + 387 + for (i = 0; i < npages; i++) { 388 + if (folioq_full(p)) { 389 + p->next = kzalloc(sizeof(struct folio_queue), GFP_KERNEL); 390 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p->next); 391 + folioq_init(p->next); 392 + p->next->prev = p; 393 + p = p->next; 394 + } 395 + folioq_append(p, page_folio(pages[i])); 396 + size += PAGE_SIZE; 397 + } 398 + iov_iter_folio_queue(iter, dir, folioq, 0, 0, size); 399 + } 400 + 401 + static struct folio_queue *iov_kunit_create_folioq(struct kunit *test) 402 + { 403 + struct folio_queue *folioq; 404 + 405 + folioq = kzalloc(sizeof(struct folio_queue), GFP_KERNEL); 406 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, folioq); 407 + kunit_add_action_or_reset(test, iov_kunit_destroy_folioq, folioq); 408 + folioq_init(folioq); 409 + return folioq; 410 + } 411 + 412 + /* 413 + * Test copying to a ITER_FOLIOQ-type iterator. 
414 + */ 415 + static void __init iov_kunit_copy_to_folioq(struct kunit *test) 416 + { 417 + const struct kvec_test_range *pr; 418 + struct iov_iter iter; 419 + struct folio_queue *folioq; 420 + struct page **spages, **bpages; 421 + u8 *scratch, *buffer; 422 + size_t bufsize, npages, size, copied; 423 + int i, patt; 424 + 425 + bufsize = 0x100000; 426 + npages = bufsize / PAGE_SIZE; 427 + 428 + folioq = iov_kunit_create_folioq(test); 429 + 430 + scratch = iov_kunit_create_buffer(test, &spages, npages); 431 + for (i = 0; i < bufsize; i++) 432 + scratch[i] = pattern(i); 433 + 434 + buffer = iov_kunit_create_buffer(test, &bpages, npages); 435 + memset(buffer, 0, bufsize); 436 + 437 + iov_kunit_load_folioq(test, &iter, READ, folioq, bpages, npages); 438 + 439 + i = 0; 440 + for (pr = kvec_test_ranges; pr->from >= 0; pr++) { 441 + size = pr->to - pr->from; 442 + KUNIT_ASSERT_LE(test, pr->to, bufsize); 443 + 444 + iov_iter_folio_queue(&iter, READ, folioq, 0, 0, pr->to); 445 + iov_iter_advance(&iter, pr->from); 446 + copied = copy_to_iter(scratch + i, size, &iter); 447 + 448 + KUNIT_EXPECT_EQ(test, copied, size); 449 + KUNIT_EXPECT_EQ(test, iter.count, 0); 450 + KUNIT_EXPECT_EQ(test, iter.iov_offset, pr->to % PAGE_SIZE); 451 + i += size; 452 + if (test->status == KUNIT_FAILURE) 453 + goto stop; 454 + } 455 + 456 + /* Build the expected image in the scratch buffer. */ 457 + patt = 0; 458 + memset(scratch, 0, bufsize); 459 + for (pr = kvec_test_ranges; pr->from >= 0; pr++) 460 + for (i = pr->from; i < pr->to; i++) 461 + scratch[i] = pattern(patt++); 462 + 463 + /* Compare the images */ 464 + for (i = 0; i < bufsize; i++) { 465 + KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=%x", i); 466 + if (buffer[i] != scratch[i]) 467 + return; 468 + } 469 + 470 + stop: 471 + KUNIT_SUCCEED(test); 472 + } 473 + 474 + /* 475 + * Test copying from a ITER_FOLIOQ-type iterator. 476 + */ 477 + static void __init iov_kunit_copy_from_folioq(struct kunit *test) 478 + { 479 + const struct kvec_test_range *pr; 480 + struct iov_iter iter; 481 + struct folio_queue *folioq; 482 + struct page **spages, **bpages; 483 + u8 *scratch, *buffer; 484 + size_t bufsize, npages, size, copied; 485 + int i, j; 486 + 487 + bufsize = 0x100000; 488 + npages = bufsize / PAGE_SIZE; 489 + 490 + folioq = iov_kunit_create_folioq(test); 491 + 492 + buffer = iov_kunit_create_buffer(test, &bpages, npages); 493 + for (i = 0; i < bufsize; i++) 494 + buffer[i] = pattern(i); 495 + 496 + scratch = iov_kunit_create_buffer(test, &spages, npages); 497 + memset(scratch, 0, bufsize); 498 + 499 + iov_kunit_load_folioq(test, &iter, READ, folioq, bpages, npages); 500 + 501 + i = 0; 502 + for (pr = kvec_test_ranges; pr->from >= 0; pr++) { 503 + size = pr->to - pr->from; 504 + KUNIT_ASSERT_LE(test, pr->to, bufsize); 505 + 506 + iov_iter_folio_queue(&iter, WRITE, folioq, 0, 0, pr->to); 507 + iov_iter_advance(&iter, pr->from); 508 + copied = copy_from_iter(scratch + i, size, &iter); 509 + 510 + KUNIT_EXPECT_EQ(test, copied, size); 511 + KUNIT_EXPECT_EQ(test, iter.count, 0); 512 + KUNIT_EXPECT_EQ(test, iter.iov_offset, pr->to % PAGE_SIZE); 513 + i += size; 514 + } 515 + 516 + /* Build the expected image in the main buffer. */ 517 + i = 0; 518 + memset(buffer, 0, bufsize); 519 + for (pr = kvec_test_ranges; pr->from >= 0; pr++) { 520 + for (j = pr->from; j < pr->to; j++) { 521 + buffer[i++] = pattern(j); 353 522 if (i >= bufsize) 354 523 goto stop; 355 524 } ··· 855 678 } 856 679 857 680 /* 681 + * Test the extraction of ITER_FOLIOQ-type iterators. 
682 + */ 683 + static void __init iov_kunit_extract_pages_folioq(struct kunit *test) 684 + { 685 + const struct kvec_test_range *pr; 686 + struct folio_queue *folioq; 687 + struct iov_iter iter; 688 + struct page **bpages, *pagelist[8], **pages = pagelist; 689 + ssize_t len; 690 + size_t bufsize, size = 0, npages; 691 + int i, from; 692 + 693 + bufsize = 0x100000; 694 + npages = bufsize / PAGE_SIZE; 695 + 696 + folioq = iov_kunit_create_folioq(test); 697 + 698 + iov_kunit_create_buffer(test, &bpages, npages); 699 + iov_kunit_load_folioq(test, &iter, READ, folioq, bpages, npages); 700 + 701 + for (pr = kvec_test_ranges; pr->from >= 0; pr++) { 702 + from = pr->from; 703 + size = pr->to - from; 704 + KUNIT_ASSERT_LE(test, pr->to, bufsize); 705 + 706 + iov_iter_folio_queue(&iter, WRITE, folioq, 0, 0, pr->to); 707 + iov_iter_advance(&iter, from); 708 + 709 + do { 710 + size_t offset0 = LONG_MAX; 711 + 712 + for (i = 0; i < ARRAY_SIZE(pagelist); i++) 713 + pagelist[i] = (void *)(unsigned long)0xaa55aa55aa55aa55ULL; 714 + 715 + len = iov_iter_extract_pages(&iter, &pages, 100 * 1024, 716 + ARRAY_SIZE(pagelist), 0, &offset0); 717 + KUNIT_EXPECT_GE(test, len, 0); 718 + if (len < 0) 719 + break; 720 + KUNIT_EXPECT_LE(test, len, size); 721 + KUNIT_EXPECT_EQ(test, iter.count, size - len); 722 + if (len == 0) 723 + break; 724 + size -= len; 725 + KUNIT_EXPECT_GE(test, (ssize_t)offset0, 0); 726 + KUNIT_EXPECT_LT(test, offset0, PAGE_SIZE); 727 + 728 + for (i = 0; i < ARRAY_SIZE(pagelist); i++) { 729 + struct page *p; 730 + ssize_t part = min_t(ssize_t, len, PAGE_SIZE - offset0); 731 + int ix; 732 + 733 + KUNIT_ASSERT_GE(test, part, 0); 734 + ix = from / PAGE_SIZE; 735 + KUNIT_ASSERT_LT(test, ix, npages); 736 + p = bpages[ix]; 737 + KUNIT_EXPECT_PTR_EQ(test, pagelist[i], p); 738 + KUNIT_EXPECT_EQ(test, offset0, from % PAGE_SIZE); 739 + from += part; 740 + len -= part; 741 + KUNIT_ASSERT_GE(test, len, 0); 742 + if (len == 0) 743 + break; 744 + offset0 = 0; 745 + } 746 + 747 + if (test->status == KUNIT_FAILURE) 748 + goto stop; 749 + } while (iov_iter_count(&iter) > 0); 750 + 751 + KUNIT_EXPECT_EQ(test, size, 0); 752 + KUNIT_EXPECT_EQ(test, iter.count, 0); 753 + } 754 + 755 + stop: 756 + KUNIT_SUCCEED(test); 757 + } 758 + 759 + /* 858 760 * Test the extraction of ITER_XARRAY-type iterators. 859 761 */ 860 762 static void __init iov_kunit_extract_pages_xarray(struct kunit *test) ··· 1017 761 KUNIT_CASE(iov_kunit_copy_from_kvec), 1018 762 KUNIT_CASE(iov_kunit_copy_to_bvec), 1019 763 KUNIT_CASE(iov_kunit_copy_from_bvec), 764 + KUNIT_CASE(iov_kunit_copy_to_folioq), 765 + KUNIT_CASE(iov_kunit_copy_from_folioq), 1020 766 KUNIT_CASE(iov_kunit_copy_to_xarray), 1021 767 KUNIT_CASE(iov_kunit_copy_from_xarray), 1022 768 KUNIT_CASE(iov_kunit_extract_pages_kvec), 1023 769 KUNIT_CASE(iov_kunit_extract_pages_bvec), 770 + KUNIT_CASE(iov_kunit_extract_pages_folioq), 1024 771 KUNIT_CASE(iov_kunit_extract_pages_xarray), 1025 772 {} 1026 773 };
+67 -2
lib/scatterlist.c
··· 11 11 #include <linux/kmemleak.h> 12 12 #include <linux/bvec.h> 13 13 #include <linux/uio.h> 14 + #include <linux/folio_queue.h> 14 15 15 16 /** 16 17 * sg_next - return the next scatterlist entry in a list ··· 1263 1262 } 1264 1263 1265 1264 /* 1265 + * Extract up to sg_max folios from an FOLIOQ-type iterator and add them to 1266 + * the scatterlist. The pages are not pinned. 1267 + */ 1268 + static ssize_t extract_folioq_to_sg(struct iov_iter *iter, 1269 + ssize_t maxsize, 1270 + struct sg_table *sgtable, 1271 + unsigned int sg_max, 1272 + iov_iter_extraction_t extraction_flags) 1273 + { 1274 + const struct folio_queue *folioq = iter->folioq; 1275 + struct scatterlist *sg = sgtable->sgl + sgtable->nents; 1276 + unsigned int slot = iter->folioq_slot; 1277 + ssize_t ret = 0; 1278 + size_t offset = iter->iov_offset; 1279 + 1280 + BUG_ON(!folioq); 1281 + 1282 + if (slot >= folioq_nr_slots(folioq)) { 1283 + folioq = folioq->next; 1284 + if (WARN_ON_ONCE(!folioq)) 1285 + return 0; 1286 + slot = 0; 1287 + } 1288 + 1289 + do { 1290 + struct folio *folio = folioq_folio(folioq, slot); 1291 + size_t fsize = folioq_folio_size(folioq, slot); 1292 + 1293 + if (offset < fsize) { 1294 + size_t part = umin(maxsize - ret, fsize - offset); 1295 + 1296 + sg_set_page(sg, folio_page(folio, 0), part, offset); 1297 + sgtable->nents++; 1298 + sg++; 1299 + sg_max--; 1300 + offset += part; 1301 + ret += part; 1302 + } 1303 + 1304 + if (offset >= fsize) { 1305 + offset = 0; 1306 + slot++; 1307 + if (slot >= folioq_nr_slots(folioq)) { 1308 + if (!folioq->next) { 1309 + WARN_ON_ONCE(ret < iter->count); 1310 + break; 1311 + } 1312 + folioq = folioq->next; 1313 + slot = 0; 1314 + } 1315 + } 1316 + } while (sg_max > 0 && ret < maxsize); 1317 + 1318 + iter->folioq = folioq; 1319 + iter->folioq_slot = slot; 1320 + iter->iov_offset = offset; 1321 + iter->count -= ret; 1322 + return ret; 1323 + } 1324 + 1325 + /* 1266 1326 * Extract up to sg_max folios from an XARRAY-type iterator and add them to 1267 1327 * the scatterlist. The pages are not pinned. 1268 1328 */ ··· 1385 1323 * addition of @sg_max elements. 1386 1324 * 1387 1325 * The pages referred to by UBUF- and IOVEC-type iterators are extracted and 1388 - * pinned; BVEC-, KVEC- and XARRAY-type are extracted but aren't pinned; PIPE- 1389 - * and DISCARD-type are not supported. 1326 + * pinned; BVEC-, KVEC-, FOLIOQ- and XARRAY-type are extracted but aren't 1327 + * pinned; DISCARD-type is not supported. 1390 1328 * 1391 1329 * No end mark is placed on the scatterlist; that's left to the caller. 1392 1330 * ··· 1418 1356 case ITER_KVEC: 1419 1357 return extract_kvec_to_sg(iter, maxsize, sgtable, sg_max, 1420 1358 extraction_flags); 1359 + case ITER_FOLIOQ: 1360 + return extract_folioq_to_sg(iter, maxsize, sgtable, sg_max, 1361 + extraction_flags); 1421 1362 case ITER_XARRAY: 1422 1363 return extract_xarray_to_sg(iter, maxsize, sgtable, sg_max, 1423 1364 extraction_flags);
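With the ITER_FOLIOQ case wired into extract_iter_to_sg(), a folio_queue buffer can be turned into a scatterlist in the usual way. A rough sketch of a hypothetical caller (fixed 16-entry on-stack table, no error handling; per the kerneldoc above, terminating the list with sg_mark_end() is left to the caller):

#include <linux/scatterlist.h>
#include <linux/folio_queue.h>
#include <linux/uio.h>

static ssize_t example_folioq_to_sg(struct folio_queue *folioq, size_t len)
{
	struct scatterlist sgl[16];
	struct sg_table sgt = { .sgl = sgl };	/* .nents starts at 0 */
	struct iov_iter iter;
	ssize_t n;

	sg_init_table(sgl, ARRAY_SIZE(sgl));

	/* Treat the folio_queue contents as a data source of len bytes. */
	iov_iter_folio_queue(&iter, WRITE, folioq, 0, 0, len);

	/* Entries are appended starting at sgt.sgl + sgt.nents. */
	n = extract_iter_to_sg(&iter, len, &sgt, ARRAY_SIZE(sgl), 0);
	if (n > 0)
		sg_mark_end(&sgt.sgl[sgt.nents - 1]);

	/* ... hand sgt.sgl / sgt.nents to the consumer ... */
	return n;
}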