Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

netfs: Fix race between cache write completion and ALL_QUEUED being set

When netfslib is issuing subrequests, the subrequests start processing
immediately and may complete before we reach the end of the issuing
function. At the end of the issuing function we set NETFS_RREQ_ALL_QUEUED
to indicate to the collector that we aren't going to issue any more subreqs
and that it can do the final notifications and cleanup.

Now, this isn't a problem if the request is synchronous
(NETFS_RREQ_OFFLOAD_COLLECTION is unset) as the result collection will be
done in-thread and we're guaranteed an opportunity to run the collector.

However, if the request is asynchronous, collection is primarily triggered
by the termination of subrequests queuing it on a workqueue. A race can
therefore occur if the app thread doesn't set ALL_QUEUED until after the
last subrequest has terminated: nothing will then queue the collector.

This can happen most easily with the copy2cache code (as used by Ceph)
where, in the collection routine of a read request, an asynchronous write
request is spawned to copy data to the cache. Folios are added to the
write request as they're unlocked, but there may be a delay before
ALL_QUEUED is set as the write subrequests may complete before we get
there.

If all the write subreqs have finished by the ALL_QUEUED point, no further
events happen and the collection never happens, leaving the request
hanging.

Fix this by waking the collector after setting ALL_QUEUED if, at that
point, there are no extant subrequests left on the write stream (if there
are, the termination of the last one will queue the collector as normal).

Also add a tracepoint to cross-reference both requests in a copy-to-request
operation and add a trace to the netfs_rreq tracepoint to indicate the
setting of ALL_QUEUED.

Fixes: e2d46f2ec332 ("netfs: Change the read result collector to only use one work item")
Reported-by: Max Kellermann <max.kellermann@ionos.com>
Link: https://lore.kernel.org/r/CAKPOu+8z_ijTLHdiCYGU_Uk7yYD=shxyGLwfe-L7AV3DhebS3w@mail.gmail.com/
Signed-off-by: David Howells <dhowells@redhat.com>
Link: https://lore.kernel.org/20250711151005.2956810-3-dhowells@redhat.com
Reviewed-by: Paulo Alcantara (Red Hat) <pc@manguebit.org>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Viacheslav Dubeyko <slava@dubeyko.com>
cc: Alex Markuze <amarkuze@redhat.com>
cc: Ilya Dryomov <idryomov@gmail.com>
cc: netfs@lists.linux.dev
cc: ceph-devel@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: stable@vger.kernel.org
Signed-off-by: Christian Brauner <brauner@kernel.org>

Authored by David Howells; committed by Christian Brauner
89635eae 4c238e30

fs/netfs/read_pgpriv2.c (+4)

--- a/fs/netfs/read_pgpriv2.c
+++ b/fs/netfs/read_pgpriv2.c
@@ -111,6 +111,7 @@
 		goto cancel_put;
 
 	__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &creq->flags);
+	trace_netfs_copy2cache(rreq, creq);
 	trace_netfs_write(creq, netfs_write_trace_copy_to_cache);
 	netfs_stat(&netfs_n_wh_copy_to_cache);
 	rreq->copy_to_cache = creq;
@@ -155,6 +156,9 @@
 	netfs_issue_write(creq, &creq->io_streams[1]);
 	smp_wmb(); /* Write lists before ALL_QUEUED. */
 	set_bit(NETFS_RREQ_ALL_QUEUED, &creq->flags);
+	trace_netfs_rreq(rreq, netfs_rreq_trace_end_copy_to_cache);
+	if (list_empty_careful(&creq->io_streams[1].subrequests))
+		netfs_wake_collector(creq);
 
 	netfs_put_request(creq, netfs_rreq_trace_put_return);
 	creq->copy_to_cache = NULL;
include/trace/events/netfs.h (+30)

--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -55,6 +55,7 @@
 	EM(netfs_rreq_trace_copy,		"COPY   ")	\
 	EM(netfs_rreq_trace_dirty,		"DIRTY  ")	\
 	EM(netfs_rreq_trace_done,		"DONE   ")	\
+	EM(netfs_rreq_trace_end_copy_to_cache,	"END-C2C")	\
 	EM(netfs_rreq_trace_free,		"FREE   ")	\
 	EM(netfs_rreq_trace_ki_complete,	"KI-CMPL")	\
 	EM(netfs_rreq_trace_recollect,		"RECLLCT")	\
@@ -557,6 +558,34 @@
 		      __entry->cookie,
 		      __entry->ino,
 		      __entry->start, __entry->start + __entry->len - 1)
+	    );
+
+TRACE_EVENT(netfs_copy2cache,
+	    TP_PROTO(const struct netfs_io_request *rreq,
+		     const struct netfs_io_request *creq),
+
+	    TP_ARGS(rreq, creq),
+
+	    TP_STRUCT__entry(
+		    __field(unsigned int,	rreq)
+		    __field(unsigned int,	creq)
+		    __field(unsigned int,	cookie)
+		    __field(unsigned int,	ino)
+		    ),
+
+	    TP_fast_assign(
+		    struct netfs_inode *__ctx = netfs_inode(rreq->inode);
+		    struct fscache_cookie *__cookie = netfs_i_cookie(__ctx);
+		    __entry->rreq = rreq->debug_id;
+		    __entry->creq = creq->debug_id;
+		    __entry->cookie = __cookie ? __cookie->debug_id : 0;
+		    __entry->ino = rreq->inode->i_ino;
+		    ),
+
+	    TP_printk("R=%08x CR=%08x c=%08x i=%x ",
+		      __entry->rreq,
+		      __entry->creq,
+		      __entry->cookie,
+		      __entry->ino)
 	    );
 
 TRACE_EVENT(netfs_collect,