Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

9p/trans_fd: mark concurrent read and writes to p9_conn->err

Writes to the error value of a connection are spinlock-protected inside
p9_conn_cancel, but lockless reads are present elsewhere to avoid
performing unnecessary work after an error has been encountered.

Mark the write and lockless reads to make KCSAN happy. Mark the write as
exclusive following the recommendation in "Lock-Protected Writes with
Lockless Reads" in tools/memory-model/Documentation/access-marking.txt
while we are at it.

For consistency, also mark the m->err reads in p9_fd_request and
p9_conn_cancel even though they are lock-protected and do not race with
concurrent writes.

Reported-by: syzbot+d69a7cc8c683c2cb7506@syzkaller.appspotmail.com
Reported-by: syzbot+483d6c9b9231ea7e1851@syzkaller.appspotmail.com
Signed-off-by: Ignacio Encinas <ignacio@iencinas.com>
Message-ID: <20250318-p9_conn_err_benign_data_race-v3-1-290bb18335cc@iencinas.com>
Signed-off-by: Dominique Martinet <asmadeus@codewreck.org>

Authored by Ignacio Encinas and committed by Dominique Martinet
fbc0283f ad6d4558

+10 -7
net/9p/trans_fd.c
--- a/net/9p/trans_fd.c
+++ b/net/9p/trans_fd.c
@@ -192,12 +192,13 @@
 
 	spin_lock(&m->req_lock);
 
-	if (m->err) {
+	if (READ_ONCE(m->err)) {
 		spin_unlock(&m->req_lock);
 		return;
 	}
 
-	m->err = err;
+	WRITE_ONCE(m->err, err);
+	ASSERT_EXCLUSIVE_WRITER(m->err);
 
 	list_for_each_entry_safe(req, rtmp, &m->req_list, req_list) {
 		list_move(&req->req_list, &cancel_list);
@@ -285,7 +284,7 @@
 
 	m = container_of(work, struct p9_conn, rq);
 
-	if (m->err < 0)
+	if (READ_ONCE(m->err) < 0)
 		return;
 
 	p9_debug(P9_DEBUG_TRANS, "start mux %p pos %zd\n", m, m->rc.offset);
@@ -452,7 +451,7 @@
 
 	m = container_of(work, struct p9_conn, wq);
 
-	if (m->err < 0) {
+	if (READ_ONCE(m->err) < 0) {
 		clear_bit(Wworksched, &m->wsched);
 		return;
 	}
@@ -624,7 +623,7 @@
 	__poll_t n;
 	int err = -ECONNRESET;
 
-	if (m->err < 0)
+	if (READ_ONCE(m->err) < 0)
 		return;
 
 	n = p9_fd_poll(m->client, NULL, &err);
@@ -667,5 +666,6 @@
 static int p9_fd_request(struct p9_client *client, struct p9_req_t *req)
 {
 	__poll_t n;
+	int err;
 	struct p9_trans_fd *ts = client->trans;
 	struct p9_conn *m = &ts->conn;
@@ -676,9 +674,10 @@
 
 	spin_lock(&m->req_lock);
 
-	if (m->err < 0) {
+	err = READ_ONCE(m->err);
+	if (err < 0) {
 		spin_unlock(&m->req_lock);
-		return m->err;
+		return err;
 	}
 
 	WRITE_ONCE(req->status, REQ_STATUS_UNSENT);