Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

vmxnet3: correctly report gso type for UDP tunnels

Commit 3d010c8031e3 ("udp: do not accept non-tunnel GSO skbs landing
in a tunnel") added checks in the Linux stack to reject non-tunnel
GSO packets landing in a tunnel. This exposed an issue in vmxnet3,
which was not reporting the correct GSO type for tunneled packets.

This patch fixes the issue by setting the correct GSO type on
tunneled packets.

Currently, vmxnet3 does not support reporting inner fields for LRO
tunnel packets. The issue is not seen with egress drivers that do not
use the skb inner fields. The workaround is to enable tnl-segmentation
offload on the egress interfaces if the driver supports it (e.g.
ethtool -K <iface> tx-udp_tnl-segmentation on). This problem predates
this fix and can be addressed by a separate future patch.

Fixes: dacce2be3312 ("vmxnet3: add geneve and vxlan tunnel offload support")
Signed-off-by: Ronak Doshi <ronak.doshi@broadcom.com>
Acked-by: Guolin Yang <guolin.yang@broadcom.com>
Link: https://patch.msgid.link/20250530152701.70354-1-ronak.doshi@broadcom.com
[pabeni@redhat.com: dropped the changelog]
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

Authored by Ronak Doshi, committed by Paolo Abeni (982d30c3, f6695269)

+26
drivers/net/vmxnet3/vmxnet3_drv.c
···
 	return (hlen + (hdr.tcp->doff << 2));
 }
 
+static void
+vmxnet3_lro_tunnel(struct sk_buff *skb, __be16 ip_proto)
+{
+	struct udphdr *uh = NULL;
+
+	if (ip_proto == htons(ETH_P_IP)) {
+		struct iphdr *iph = (struct iphdr *)skb->data;
+
+		if (iph->protocol == IPPROTO_UDP)
+			uh = (struct udphdr *)(iph + 1);
+	} else {
+		struct ipv6hdr *iph = (struct ipv6hdr *)skb->data;
+
+		if (iph->nexthdr == IPPROTO_UDP)
+			uh = (struct udphdr *)(iph + 1);
+	}
+	if (uh) {
+		if (uh->check)
+			skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_TUNNEL_CSUM;
+		else
+			skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_TUNNEL;
+	}
+}
+
 static int
 vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
 		       struct vmxnet3_adapter *adapter, int quota)
···
 		if (segCnt != 0 && mss != 0) {
 			skb_shinfo(skb)->gso_type = rcd->v4 ?
 				SKB_GSO_TCPV4 : SKB_GSO_TCPV6;
+			if (encap_lro)
+				vmxnet3_lro_tunnel(skb, skb->protocol);
 			skb_shinfo(skb)->gso_size = mss;
 			skb_shinfo(skb)->gso_segs = segCnt;
 		} else if ((segCnt != 0 || skb->len > mtu) && !encap_lro) {