
udp: add drop_counters to udp socket

When a packet flood hits one or more UDP sockets, many cpus
have to update sk->sk_drops.

This slows down other cpus, because sk_drops is
currently in the sock_write_rx group.

Add a socket_drop_counters structure to udp sockets.

Using dedicated cache lines to hold drop counters
makes sure that consumers no longer suffer from
false sharing if/when producers only change sk->sk_drops.

This adds 128 bytes per UDP socket.

Tested with the following stress test, sending about 11 Mpps
to a dual-socket AMD EPYC 7B13 64-core host.

super_netperf 20 -t UDP_STREAM -H DUT -l10 -- -n -P,1000 -m 120
Note: due to socket lookup, only one UDP socket is receiving
packets on DUT.

Then measure receiver (DUT) behavior: both the consumer and
the BH handlers can process more packets per second.

Before:

nstat -n ; sleep 1 ; nstat | grep Udp
Udp6InDatagrams 615091 0.0
Udp6InErrors 3904277 0.0
Udp6RcvbufErrors 3904277 0.0

After:

nstat -n ; sleep 1 ; nstat | grep Udp
Udp6InDatagrams 816281 0.0
Udp6InErrors 7497093 0.0
Udp6RcvbufErrors 7497093 0.0

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
Link: https://patch.msgid.link/20250826125031.1578842-5-edumazet@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

--- a/include/linux/udp.h
+++ b/include/linux/udp.h
@@ -108,6 +108,7 @@
 	 * the last UDP socket cacheline.
 	 */
 	struct hlist_node tunnel_list;
+	struct socket_drop_counters drop_counters;
 };
 
 #define udp_test_bit(nr, sk) \
--- a/include/net/udp.h
+++ b/include/net/udp.h
@@ -288,6 +288,7 @@
 {
 	struct udp_sock *up = udp_sk(sk);
 
+	sk->sk_drop_counters = &up->drop_counters;
 	skb_queue_head_init(&up->reader_queue);
 	INIT_HLIST_NODE(&up->tunnel_list);
 	up->forward_threshold = sk->sk_rcvbuf >> 2;
--- a/tools/testing/selftests/bpf/progs/bpf_iter_udp4.c
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_udp4.c
@@ -64,7 +64,8 @@
 		  0, 0L, 0, ctx->uid, 0,
 		  sock_i_ino(&inet->sk),
 		  inet->sk.sk_refcnt.refs.counter, udp_sk,
-		  inet->sk.sk_drops.counter);
+		  udp_sk->drop_counters.drops0.counter +
+		  udp_sk->drop_counters.drops1.counter);
 
 	return 0;
 }
--- a/tools/testing/selftests/bpf/progs/bpf_iter_udp6.c
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_udp6.c
@@ -72,8 +72,7 @@
 		  0, 0L, 0, ctx->uid, 0,
 		  sock_i_ino(&inet->sk),
 		  inet->sk.sk_refcnt.refs.counter, udp_sk,
-		  inet->sk.sk_drops.counter);
-
+		  udp_sk->drop_counters.drops0.counter +
+		  udp_sk->drop_counters.drops1.counter);
 	return 0;
 }