Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

rxrpc: peer->mtu_lock is redundant

The peer->mtu_lock is only used to lock around writes to peer->max_data -
and nothing else; further, all such writes take place in the I/O thread and
the lock is only ever write-locked and never read-locked.

In a couple of places, the write_seqcount_begin() is wrapped in
preempt_disable()/preempt_enable(), but not in all places.
write_seqcount_begin() on a bare seqcount_t requires preemption to be
disabled, so the unwrapped sites can cause lockdep to complain:

WARNING: CPU: 0 PID: 1549 at include/linux/seqlock.h:221 rxrpc_input_ack_trailer+0x305/0x430
...
RIP: 0010:rxrpc_input_ack_trailer+0x305/0x430

Fix this by just getting rid of the lock.

Fixes: eeaedc5449d9 ("rxrpc: Implement path-MTU probing using padded PING ACKs (RFC8899)")
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
Link: https://patch.msgid.link/20250218192250.296870-3-dhowells@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

Authored by David Howells, committed by Jakub Kicinski
833fefa0 c34d999c

 net/rxrpc/ar-internal.h | 1 -
 net/rxrpc/input.c       | 2 --
 net/rxrpc/peer_event.c  | 9 +--------
 net/rxrpc/peer_object.c | 1 -
 4 files changed, 1 insertion(+), 12 deletions(-)

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -360,7 +360,6 @@
 	u8			pmtud_jumbo;	/* Max jumbo packets for the MTU */
 	bool			ackr_adv_pmtud;	/* T if the peer advertises path-MTU */
 	unsigned int		ackr_max_data;	/* Maximum data advertised by peer */
-	seqcount_t		mtu_lock;	/* Lockless MTU access management */
 	unsigned int		if_mtu;		/* Local interface MTU (- hdrsize) for this peer */
 	unsigned int		max_data;	/* Maximum packet data capacity for this peer */
 	unsigned short		hdrsize;	/* header size (IP + UDP + RxRPC) */

diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
--- a/net/rxrpc/input.c
+++ b/net/rxrpc/input.c
@@ -810,9 +810,7 @@
 	if (max_mtu < peer->max_data) {
 		trace_rxrpc_pmtud_reduce(peer, sp->hdr.serial, max_mtu,
 					 rxrpc_pmtud_reduce_ack);
-		write_seqcount_begin(&peer->mtu_lock);
 		peer->max_data = max_mtu;
-		write_seqcount_end(&peer->mtu_lock);
 	}
 
 	max_data = umin(max_mtu, peer->max_data);

diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
--- a/net/rxrpc/peer_event.c
+++ b/net/rxrpc/peer_event.c
@@ -130,9 +130,7 @@
 			peer->pmtud_bad = max_data + 1;
 
 		trace_rxrpc_pmtud_reduce(peer, 0, max_data, rxrpc_pmtud_reduce_icmp);
-		write_seqcount_begin(&peer->mtu_lock);
 		peer->max_data = max_data;
-		write_seqcount_end(&peer->mtu_lock);
 	}
 }
 
@@ -406,13 +408,8 @@
 	}
 
 	max_data = umin(max_data, peer->ackr_max_data);
-	if (max_data != peer->max_data) {
-		preempt_disable();
-		write_seqcount_begin(&peer->mtu_lock);
+	if (max_data != peer->max_data)
 		peer->max_data = max_data;
-		write_seqcount_end(&peer->mtu_lock);
-		preempt_enable();
-	}
 
 	jumbo = max_data + sizeof(struct rxrpc_jumbo_header);
 	jumbo /= RXRPC_JUMBO_SUBPKTLEN;

diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
--- a/net/rxrpc/peer_object.c
+++ b/net/rxrpc/peer_object.c
@@ -235,7 +235,6 @@
 	peer->service_conns = RB_ROOT;
 	seqlock_init(&peer->service_conn_lock);
 	spin_lock_init(&peer->lock);
-	seqcount_init(&peer->mtu_lock);
 	peer->debug_id = atomic_inc_return(&rxrpc_debug_id);
 	peer->recent_srtt_us = UINT_MAX;
 	peer->cong_ssthresh = RXRPC_TX_MAX_WINDOW;