Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

packet: Move reference count in packet_sock to atomic_long_t

In some cases the reference count on struct packet_sock can be
saturated and overflow, which leaves the kernel confused. To prevent
this, move to a 64-bit atomic reference count on 64-bit architectures,
where overflowing the type is no longer a practical possibility.

Because we cannot handle saturation here, refcount_t is not an option
in this place; should that change in the future, it could be used.
Also, rather than a plain atomic64_t, use atomic_long_t instead.
32-bit machines tend to be memory-limited (i.e. anything that takes a
reference consumes so much memory that you cannot actually reach 2**32
references), and 32-bit architectures also tend to have serious
problems with 64-bit atomics. Hence, atomic_long_t is the more natural
solution.

Reported-by: "The UK's National Cyber Security Centre (NCSC)" <security@ncsc.gov.uk>
Co-developed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: stable@kernel.org
Reviewed-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20231201131021.19999-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

Authored by Daniel Borkmann, committed by Jakub Kicinski
db3fadac 79321a79

+9 -9 overall

net/packet/af_packet.c  +8 -8

···
 	struct sock *sk = sock->sk;
 
 	if (sk)
-		atomic_inc(&pkt_sk(sk)->mapped);
+		atomic_long_inc(&pkt_sk(sk)->mapped);
 }
 
 static void packet_mm_close(struct vm_area_struct *vma)
···
 	struct sock *sk = sock->sk;
 
 	if (sk)
-		atomic_dec(&pkt_sk(sk)->mapped);
+		atomic_long_dec(&pkt_sk(sk)->mapped);
 }
 
 static const struct vm_operations_struct packet_mmap_ops = {
···
 
 	err = -EBUSY;
 	if (!closing) {
-		if (atomic_read(&po->mapped))
+		if (atomic_long_read(&po->mapped))
 			goto out;
 		if (packet_read_pending(rb))
 			goto out;
···
 
 	err = -EBUSY;
 	mutex_lock(&po->pg_vec_lock);
-	if (closing || atomic_read(&po->mapped) == 0) {
+	if (closing || atomic_long_read(&po->mapped) == 0) {
 		err = 0;
 		spin_lock_bh(&rb_queue->lock);
 		swap(rb->pg_vec, pg_vec);
···
 		po->prot_hook.func = (po->rx_ring.pg_vec) ?
 					tpacket_rcv : packet_rcv;
 		skb_queue_purge(rb_queue);
-		if (atomic_read(&po->mapped))
-			pr_err("packet_mmap: vma is busy: %d\n",
-			       atomic_read(&po->mapped));
+		if (atomic_long_read(&po->mapped))
+			pr_err("packet_mmap: vma is busy: %ld\n",
+			       atomic_long_read(&po->mapped));
 	}
 	mutex_unlock(&po->pg_vec_lock);
···
 		}
 	}
 
-	atomic_inc(&po->mapped);
+	atomic_long_inc(&po->mapped);
 	vma->vm_ops = &packet_mmap_ops;
 	err = 0;
net/packet/internal.h  +1 -1

···
 	__be16			num;
 	struct packet_rollover	*rollover;
 	struct packet_mclist	*mclist;
-	atomic_t		mapped;
+	atomic_long_t		mapped;
 	enum tpacket_versions	tp_version;
 	unsigned int		tp_hdrlen;
 	unsigned int		tp_reserve;