
net: Introduce sk_use_task_frag in struct sock.

Sockets that can be used while recursing into memory reclaim, like
those used by network block devices and file systems, mustn't use
current->task_frag: if the current process is already using it, then
the inner memory reclaim call would corrupt the task_frag structure.

To avoid this, sk_page_frag() uses ->sk_allocation to detect sockets
that mustn't use current->task_frag, assuming that those used during
memory reclaim had their allocation constraints reflected in
->sk_allocation.

This unfortunately doesn't cover all cases: in an attempt to remove all
usage of GFP_NOFS and GFP_NOIO, sunrpc stopped setting these flags in
->sk_allocation, and used memalloc_nofs critical sections instead.
This breaks the sk_page_frag() heuristic since the allocation
constraints are now stored in current->flags, which sk_page_frag()
can't read without risking triggering a cache miss and slowing down
TCP's fast path.

This patch creates a new field in struct sock, named sk_use_task_frag,
which sockets with memory reclaim constraints can set to false if they
can't safely use current->task_frag. In such cases, sk_page_frag() now
always returns the socket's page_frag (->sk_frag). The first user is
sunrpc, which needs to avoid using current->task_frag but can keep
->sk_allocation set to GFP_KERNEL otherwise.

Eventually, it might be possible to simplify sk_page_frag() by only
testing ->sk_use_task_frag and avoid relying on the ->sk_allocation
heuristic entirely (assuming other sockets will set ->sk_use_task_frag
according to their constraints in the future).

The new ->sk_use_task_frag field is placed in a hole in struct sock and
belongs to a cache line shared with ->sk_shutdown. Therefore it should
be hot and shouldn't have negative performance impacts on TCP's fast
path (sk_shutdown is tested just before the while() loop in
tcp_sendmsg_locked()).

Link: https://lore.kernel.org/netdev/b4d8cb09c913d3e34f853736f3f5628abfd7f4b6.1656699567.git.gnault@redhat.com/
Signed-off-by: Guillaume Nault <gnault@redhat.com>
Reviewed-by: Benjamin Coddington <bcodding@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

Authored by Guillaume Nault, committed by Jakub Kicinski
fb87bd47 b389a902

 include/net/sock.h | +9 -2
 net/core/sock.c    | +1
 2 files changed (+10 -2)

--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -318,6 +318,9 @@
  *	@sk_stamp: time stamp of last packet received
  *	@sk_stamp_seq: lock for accessing sk_stamp on 32 bit architectures only
  *	@sk_tsflags: SO_TIMESTAMPING flags
+ *	@sk_use_task_frag: allow sk_page_frag() to use current->task_frag.
+ *			   Sockets that can be used under memory reclaim should
+ *			   set this to false.
  *	@sk_bind_phc: SO_TIMESTAMPING bind PHC index of PTP virtual clock
  *		      for timestamping
  *	@sk_tskey: counter to disambiguate concurrent tstamp requests
@@ -512,6 +515,7 @@
 	u8			sk_txtime_deadline_mode : 1,
 				sk_txtime_report_errors : 1,
 				sk_txtime_unused : 6;
+	bool			sk_use_task_frag;
 
 	struct socket		*sk_socket;
 	void			*sk_user_data;
@@ -2561,14 +2565,17 @@
  * socket operations and end up recursing into sk_page_frag()
  * while it's already in use: explicitly avoid task page_frag
  * usage if the caller is potentially doing any of them.
- * This assumes that page fault handlers use the GFP_NOFS flags.
+ * This assumes that page fault handlers use the GFP_NOFS flags or
+ * explicitly disable sk_use_task_frag.
  *
  * Return: a per task page_frag if context allows that,
  * otherwise a per socket one.
  */
 static inline struct page_frag *sk_page_frag(struct sock *sk)
 {
-	if ((sk->sk_allocation & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC | __GFP_FS)) ==
+	if (sk->sk_use_task_frag &&
+	    (sk->sk_allocation & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC |
+				  __GFP_FS)) ==
 	    (__GFP_DIRECT_RECLAIM | __GFP_FS))
 		return &current->task_frag;
 
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -3390,6 +3390,7 @@
 	sk->sk_rcvbuf	=	READ_ONCE(sysctl_rmem_default);
 	sk->sk_sndbuf	=	READ_ONCE(sysctl_wmem_default);
 	sk->sk_state	=	TCP_CLOSE;
+	sk->sk_use_task_frag = true;
 	sk_set_socket(sk, sock);
 
 	sock_set_flag(sk, SOCK_ZAPPED);