Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

wireguard: send: account for mtu=0 devices

It turns out there's an easy way to get packets queued up while still
having an MTU of zero, and that's via persistent keepalive. This commit
makes sure that, no matter the condition, we don't wind up dividing by
zero. Note that an MTU of zero for a wireguard interface is something
quasi-valid, so I don't think the correct fix is to limit it via
min_mtu. This can be reproduced easily with:

ip link add wg0 type wireguard
ip link add wg1 type wireguard
ip link set wg0 up mtu 0
ip link set wg1 up
wg set wg0 private-key <(wg genkey)
wg set wg1 listen-port 1 private-key <(wg genkey) peer $(wg show wg0 public-key)
wg set wg0 peer $(wg show wg1 public-key) persistent-keepalive 1 endpoint 127.0.0.1:1

However, while min_mtu=0 seems fine, it makes sense to restrict the
max_mtu. This commit also restricts the maximum MTU to the greatest
number for which rounding up to the padding multiple won't overflow a
signed integer. Packets this large were always rejected eventually
anyway, due to checks deeper in the stack, but it seems more sound not
to even let the administrator configure something that won't work anyway.

We use this opportunity to clean up this function a bit so that it's
clear which paths we're expecting.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Authored by Jason A. Donenfeld and committed by David S. Miller
175f1ca9 2a8a4df3

+15 -8
drivers/net/wireguard/device.c (+4 -3)

···
 	enum { WG_NETDEV_FEATURES = NETIF_F_HW_CSUM | NETIF_F_RXCSUM |
 			NETIF_F_SG | NETIF_F_GSO |
 			NETIF_F_GSO_SOFTWARE | NETIF_F_HIGHDMA };
+	const int overhead = MESSAGE_MINIMUM_LENGTH + sizeof(struct udphdr) +
+			     max(sizeof(struct ipv6hdr), sizeof(struct iphdr));

 	dev->netdev_ops = &netdev_ops;
 	dev->hard_header_len = 0;
···
 	dev->features |= WG_NETDEV_FEATURES;
 	dev->hw_features |= WG_NETDEV_FEATURES;
 	dev->hw_enc_features |= WG_NETDEV_FEATURES;
-	dev->mtu = ETH_DATA_LEN - MESSAGE_MINIMUM_LENGTH -
-		   sizeof(struct udphdr) -
-		   max(sizeof(struct ipv6hdr), sizeof(struct iphdr));
+	dev->mtu = ETH_DATA_LEN - overhead;
+	dev->max_mtu = round_down(INT_MAX, MESSAGE_PADDING_MULTIPLE) - overhead;

 	SET_NETDEV_DEVTYPE(dev, &device_type);
drivers/net/wireguard/send.c (+11 -5)

···
 static unsigned int calculate_skb_padding(struct sk_buff *skb)
 {
+	unsigned int padded_size, last_unit = skb->len;
+
+	if (unlikely(!PACKET_CB(skb)->mtu))
+		return ALIGN(last_unit, MESSAGE_PADDING_MULTIPLE) - last_unit;
+
 	/* We do this modulo business with the MTU, just in case the networking
 	 * layer gives us a packet that's bigger than the MTU. In that case, we
 	 * wouldn't want the final subtraction to overflow in the case of the
-	 * padded_size being clamped.
+	 * padded_size being clamped. Fortunately, that's very rarely the case,
+	 * so we optimize for that not happening.
 	 */
-	unsigned int last_unit = skb->len % PACKET_CB(skb)->mtu;
-	unsigned int padded_size = ALIGN(last_unit, MESSAGE_PADDING_MULTIPLE);
+	if (unlikely(last_unit > PACKET_CB(skb)->mtu))
+		last_unit %= PACKET_CB(skb)->mtu;

-	if (padded_size > PACKET_CB(skb)->mtu)
-		padded_size = PACKET_CB(skb)->mtu;
+	padded_size = min(PACKET_CB(skb)->mtu,
+			  ALIGN(last_unit, MESSAGE_PADDING_MULTIPLE));
 	return padded_size - last_unit;
 }