Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

xsk: Fix possible segfault at xskmap entry insertion

Fix possible segfault when entry is inserted into xskmap. This can
happen if the socket is in a state where the umem has been set up, the
Rx ring created, but the socket has not yet been bound to a device. In
this case the pool has not yet been created, so we cannot dereference
it to check for the existence of the fill ring. Fix this by removing
the xsk_is_setup_for_bpf_map function entirely. Once upon a time, it
was used to make sure that the Rx and fill rings were set up before
the driver could call xsk_rcv, since the data path has no tests for
the existence of these rings. But these days, we have a state variable
that we test instead. When it is XSK_BOUND, everything has been set up
correctly and the socket has been bound, so there is no reason to keep
the xsk_is_setup_for_bpf_map function anymore.

Fixes: 7361f9c3d719 ("xsk: Move fill and completion rings to buffer pool")
Reported-by: syzbot+febe51d44243fbc564ee@syzkaller.appspotmail.com
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/1599037569-26690-1-git-send-email-magnus.karlsson@intel.com

Authored by Magnus Karlsson, committed by Daniel Borkmann
968be23c 53ea2076

 net/xdp/xsk.c    | 6 ------
 net/xdp/xsk.h    | 1 -
 net/xdp/xskmap.c | 5 -----
 3 files changed, 12 deletions(-)

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -33,12 +33,6 @@
 
 static DEFINE_PER_CPU(struct list_head, xskmap_flush_list);
 
-bool xsk_is_setup_for_bpf_map(struct xdp_sock *xs)
-{
-	return READ_ONCE(xs->rx) && READ_ONCE(xs->umem) &&
-	       (xs->pool->fq || READ_ONCE(xs->fq_tmp));
-}
-
 void xsk_set_rx_need_wakeup(struct xsk_buff_pool *pool)
 {
 	if (pool->cached_need_wakeup & XDP_WAKEUP_RX)

diff --git a/net/xdp/xsk.h b/net/xdp/xsk.h
--- a/net/xdp/xsk.h
+++ b/net/xdp/xsk.h
@@ -39,7 +39,6 @@
 	return (struct xdp_sock *)sk;
 }
 
-bool xsk_is_setup_for_bpf_map(struct xdp_sock *xs);
 void xsk_map_try_sock_delete(struct xsk_map *map, struct xdp_sock *xs,
 			     struct xdp_sock **map_entry);
 int xsk_map_inc(struct xsk_map *map);

diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
--- a/net/xdp/xskmap.c
+++ b/net/xdp/xskmap.c
@@ -185,11 +185,6 @@
 
 	xs = (struct xdp_sock *)sock->sk;
 
-	if (!xsk_is_setup_for_bpf_map(xs)) {
-		sockfd_put(sock);
-		return -EOPNOTSUPP;
-	}
-
 	map_entry = &m->xsk_map[i];
 	node = xsk_map_node_alloc(m, map_entry);
 	if (IS_ERR(node)) {