Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Cross-tree/merge window issues:

- rtl8150: don't incorrectly assign random MAC addresses; a fix late in
the 5.9 cycle started depending on a return code from a function
which changed with the 5.10 PR from the usb subsystem

Current release regressions:

- Revert "virtio-net: ethtool configurable RXCSUM", it was causing
crashes at probe when control vq was not negotiated/available

Previous release regressions:

- ixgbe: fix probing of multi-port 10 Gigabit Intel NICs with an MDIO
bus, only first device would be probed correctly

- nexthop: Fix performance regression in nexthop deletion by
effectively switching from recently added synchronize_rcu() to
synchronize_rcu_expedited()

- netsec: ignore 'phy-mode' device property on ACPI systems; the
property is not populated correctly by the firmware, but the firmware
configures the PHY, so just keep the boot settings

Previous releases - always broken:

- tcp: fix to update snd_wl1 in bulk receiver fast path, addressing
bulk transfers getting "stuck"

- icmp: randomize the global rate limiter to prevent attackers from
getting useful signal

- r8169: fix operation under forced interrupt threading, make the
driver always use hard irqs, even on RT, given the handler is light
and only wants to schedule napi (and do so through a _irqoff()
variant, preferably)

- bpf: Enforce pointer id generation for all may-be-null register
types to avoid pointers erroneously getting marked as null-checked

- tipc: re-configure queue limit for broadcast link

- net/sched: act_tunnel_key: fix OOB write in case of IPv6 ERSPAN
tunnels

- fix various issues in chelsio inline tls driver

Misc:

- bpf: improve the just-added bpf_redirect_neigh() helper API to
support supplying the nexthop from the caller - in case the BPF
program has already done a lookup, we can avoid doing another one

- remove unnecessary break statements

- make MPTCP not select IPV6, but rather depend on it"

* tag 'net-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (62 commits)
tcp: fix to update snd_wl1 in bulk receiver fast path
net: Properly typecast int values to set sk_max_pacing_rate
netfilter: nf_fwd_netdev: clear timestamp in forwarding path
ibmvnic: save changed mac address to adapter->mac_addr
selftests: mptcp: depends on built-in IPv6
Revert "virtio-net: ethtool configurable RXCSUM"
rtnetlink: fix data overflow in rtnl_calcit()
net: ethernet: mtk-star-emac: select REGMAP_MMIO
net: hdlc_raw_eth: Clear the IFF_TX_SKB_SHARING flag after calling ether_setup
net: hdlc: In hdlc_rcv, check to make sure dev is an HDLC device
bpf, libbpf: Guard bpf inline asm from bpf_tail_call_static
bpf, selftests: Extend test_tc_redirect to use modified bpf_redirect_neigh()
bpf: Fix bpf_redirect_neigh helper api to support supplying nexthop
mptcp: depends on IPV6 but not as a module
sfc: move initialisation of efx->filter_sem to efx_init_struct()
mpls: load mpls_gso after mpls_iptunnel
net/sched: act_tunnel_key: fix OOB write in case of IPv6 ERSPAN tunnels
net/sched: act_gate: Unlock ->tcfa_lock in tc_setup_flow_action()
net: dsa: bcm_sf2: make const array static, makes object smaller
mptcp: MPTCP_IPV6 should depend on IPV6 instead of selecting it
...

+677 -257
+3 -1
Documentation/devicetree/bindings/net/socionext-netsec.txt
··· 30 30 - max-frame-size: See ethernet.txt in the same directory. 31 31 32 32 The MAC address will be determined using the optional properties 33 - defined in ethernet.txt. 33 + defined in ethernet.txt. The 'phy-mode' property is required, but may 34 + be set to the empty string if the PHY configuration is programmed by 35 + the firmware or set by hardware straps, and needs to be preserved. 34 36 35 37 Example: 36 38 eth0: ethernet@522d0000 {
+3 -1
Documentation/networking/ip-sysctl.rst
··· 1142 1142 icmp_msgs_per_sec - INTEGER 1143 1143 Limit maximal number of ICMP packets sent per second from this host. 1144 1144 Only messages whose type matches icmp_ratemask (see below) are 1145 - controlled by this limit. 1145 + controlled by this limit. For security reasons, the precise count 1146 + of messages per second is randomized. 1146 1147 1147 1148 Default: 1000 1148 1149 1149 1150 icmp_msgs_burst - INTEGER 1150 1151 icmp_msgs_per_sec controls number of ICMP packets sent per second, 1151 1152 while icmp_msgs_burst controls the burst size of these packets. 1153 + For security reasons, the precise burst size is randomized. 1152 1154 1153 1155 Default: 50 1154 1156
+1 -1
Documentation/networking/nf_flowtable.rst
··· 109 109 This documentation is based on the LWN.net articles [1]_\ [2]_. Rafal Milecki 110 110 also made a very complete and comprehensive summary called "A state of network 111 111 acceleration" that describes how things were before this infrastructure was 112 - mailined [3]_ and it also makes a rough summary of this work [4]_. 112 + mainlined [3]_ and it also makes a rough summary of this work [4]_. 113 113 114 114 .. [1] https://lwn.net/Articles/738214/ 115 115 .. [2] https://lwn.net/Articles/742164/
+2 -1
MAINTAINERS
··· 3244 3244 L: netdev@vger.kernel.org 3245 3245 L: bpf@vger.kernel.org 3246 3246 S: Supported 3247 - Q: https://patchwork.ozlabs.org/project/netdev/list/?delegate=77147 3247 + W: https://bpf.io/ 3248 + Q: https://patchwork.kernel.org/project/netdevbpf/list/?delegate=121173 3248 3249 T: git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git 3249 3250 T: git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git 3250 3251 F: Documentation/bpf/
+1 -1
drivers/net/dsa/bcm_sf2.c
··· 54 54 unsigned long new_rate; 55 55 unsigned int ports_active; 56 56 /* Frequenty in Mhz */ 57 - const unsigned long rate_table[] = { 57 + static const unsigned long rate_table[] = { 58 58 59220000, 59 59 60820000, 60 60 62500000,
+1 -1
drivers/net/dsa/ocelot/seville_vsc9953.c
··· 1181 1181 .stats_layout = vsc9953_stats_layout, 1182 1182 .num_stats = ARRAY_SIZE(vsc9953_stats_layout), 1183 1183 .vcap = vsc9953_vcap_props, 1184 - .shared_queue_sz = 2048 * 1024, 1184 + .shared_queue_sz = 256 * 1024, 1185 1185 .num_mact_rows = 2048, 1186 1186 .num_ports = 10, 1187 1187 .mdio_bus_alloc = vsc9953_mdio_bus_alloc,
-1
drivers/net/ethernet/aquantia/atlantic/aq_nic.c
··· 1163 1163 default: 1164 1164 err = -1; 1165 1165 goto err_exit; 1166 - break; 1167 1166 } 1168 1167 if (!(self->aq_nic_cfg.aq_hw_caps->link_speed_msk & rate)) { 1169 1168 err = -1;
+1
drivers/net/ethernet/chelsio/inline_crypto/Kconfig
··· 16 16 config CRYPTO_DEV_CHELSIO_TLS 17 17 tristate "Chelsio Crypto Inline TLS Driver" 18 18 depends on CHELSIO_T4 19 + depends on TLS 19 20 depends on TLS_TOE 20 21 help 21 22 Support Chelsio Inline TLS with Chelsio crypto accelerator.
+13 -6
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
··· 92 92 static struct net_device *chtls_find_netdev(struct chtls_dev *cdev, 93 93 struct sock *sk) 94 94 { 95 + struct adapter *adap = pci_get_drvdata(cdev->pdev); 95 96 struct net_device *ndev = cdev->ports[0]; 96 97 #if IS_ENABLED(CONFIG_IPV6) 97 98 struct net_device *temp; 98 99 int addr_type; 99 100 #endif 101 + int i; 100 102 101 103 switch (sk->sk_family) { 102 104 case PF_INET: ··· 129 127 return NULL; 130 128 131 129 if (is_vlan_dev(ndev)) 132 - return vlan_dev_real_dev(ndev); 133 - return ndev; 130 + ndev = vlan_dev_real_dev(ndev); 131 + 132 + for_each_port(adap, i) 133 + if (cdev->ports[i] == ndev) 134 + return ndev; 135 + return NULL; 134 136 } 135 137 136 138 static void assign_rxopt(struct sock *sk, unsigned int opt) ··· 483 477 chtls_purge_write_queue(sk); 484 478 free_tls_keyid(sk); 485 479 kref_put(&csk->kref, chtls_sock_release); 486 - csk->cdev = NULL; 487 480 if (sk->sk_family == AF_INET) 488 481 sk->sk_prot = &tcp_prot; 489 482 #if IS_ENABLED(CONFIG_IPV6) ··· 741 736 742 737 #if IS_ENABLED(CONFIG_IPV6) 743 738 if (sk->sk_family == PF_INET6) { 744 - struct chtls_sock *csk; 739 + struct net_device *ndev = chtls_find_netdev(cdev, sk); 745 740 int addr_type = 0; 746 741 747 - csk = rcu_dereference_sk_user_data(sk); 748 742 addr_type = ipv6_addr_type((const struct in6_addr *) 749 743 &sk->sk_v6_rcv_saddr); 750 744 if (addr_type != IPV6_ADDR_ANY) 751 - cxgb4_clip_release(csk->egress_dev, (const u32 *) 745 + cxgb4_clip_release(ndev, (const u32 *) 752 746 &sk->sk_v6_rcv_saddr, 1); 753 747 } 754 748 #endif ··· 1161 1157 ndev = n->dev; 1162 1158 if (!ndev) 1163 1159 goto free_dst; 1160 + if (is_vlan_dev(ndev)) 1161 + ndev = vlan_dev_real_dev(ndev); 1162 + 1164 1163 port_id = cxgb4_port_idx(ndev); 1165 1164 1166 1165 csk = chtls_sock_create(cdev);
+3 -2
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
··· 902 902 return 0; 903 903 } 904 904 905 - static int csk_mem_free(struct chtls_dev *cdev, struct sock *sk) 905 + static bool csk_mem_free(struct chtls_dev *cdev, struct sock *sk) 906 906 { 907 - return (cdev->max_host_sndbuf - sk->sk_wmem_queued); 907 + return (cdev->max_host_sndbuf - sk->sk_wmem_queued > 0); 908 908 } 909 909 910 910 static int csk_wait_memory(struct chtls_dev *cdev, ··· 1240 1240 copied = 0; 1241 1241 csk = rcu_dereference_sk_user_data(sk); 1242 1242 cdev = csk->cdev; 1243 + lock_sock(sk); 1243 1244 timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT); 1244 1245 1245 1246 err = sk_stream_wait_connect(sk, &timeo);
-1
drivers/net/ethernet/cisco/enic/enic_ethtool.c
··· 434 434 break; 435 435 default: 436 436 return -EINVAL; 437 - break; 438 437 } 439 438 440 439 fsp->h_u.tcp_ip4_spec.ip4src = flow_get_u32_src(&n->keys);
+5
drivers/net/ethernet/faraday/ftgmac100.c
··· 1817 1817 priv->rxdes0_edorr_mask = BIT(30); 1818 1818 priv->txdes0_edotr_mask = BIT(30); 1819 1819 priv->is_aspeed = true; 1820 + /* Disable ast2600 problematic HW arbitration */ 1821 + if (of_device_is_compatible(np, "aspeed,ast2600-mac")) { 1822 + iowrite32(FTGMAC100_TM_DEFAULT, 1823 + priv->base + FTGMAC100_OFFSET_TM); 1824 + } 1820 1825 } else { 1821 1826 priv->rxdes0_edorr_mask = BIT(15); 1822 1827 priv->txdes0_edotr_mask = BIT(15);
+8
drivers/net/ethernet/faraday/ftgmac100.h
··· 170 170 #define FTGMAC100_MACCR_SW_RST (1 << 31) 171 171 172 172 /* 173 + * test mode control register 174 + */ 175 + #define FTGMAC100_TM_RQ_TX_VALID_DIS (1 << 28) 176 + #define FTGMAC100_TM_RQ_RR_IDLE_PREV (1 << 27) 177 + #define FTGMAC100_TM_DEFAULT \ 178 + (FTGMAC100_TM_RQ_TX_VALID_DIS | FTGMAC100_TM_RQ_RR_IDLE_PREV) 179 + 180 + /* 173 181 * PHY control register 174 182 */ 175 183 #define FTGMAC100_PHYCR_MDC_CYCTHR_MASK 0x3f
+5
drivers/net/ethernet/ibm/ibmvnic.c
··· 4235 4235 dev_err(dev, "Error %ld in CHANGE_MAC_ADDR_RSP\n", rc); 4236 4236 goto out; 4237 4237 } 4238 + /* crq->change_mac_addr.mac_addr is the requested one 4239 + * crq->change_mac_addr_rsp.mac_addr is the returned valid one. 4240 + */ 4238 4241 ether_addr_copy(netdev->dev_addr, 4242 + &crq->change_mac_addr_rsp.mac_addr[0]); 4243 + ether_addr_copy(adapter->mac_addr, 4239 4244 &crq->change_mac_addr_rsp.mac_addr[0]); 4240 4245 out: 4241 4246 complete(&adapter->fw_done);
+14 -9
drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
··· 901 901 **/ 902 902 s32 ixgbe_mii_bus_init(struct ixgbe_hw *hw) 903 903 { 904 + s32 (*write)(struct mii_bus *bus, int addr, int regnum, u16 val); 905 + s32 (*read)(struct mii_bus *bus, int addr, int regnum); 904 906 struct ixgbe_adapter *adapter = hw->back; 905 907 struct pci_dev *pdev = adapter->pdev; 906 908 struct device *dev = &adapter->netdev->dev; 907 909 struct mii_bus *bus; 908 - 909 - bus = devm_mdiobus_alloc(dev); 910 - if (!bus) 911 - return -ENOMEM; 912 910 913 911 switch (hw->device_id) { 914 912 /* C3000 SoCs */ ··· 920 922 case IXGBE_DEV_ID_X550EM_A_1G_T: 921 923 case IXGBE_DEV_ID_X550EM_A_1G_T_L: 922 924 if (!ixgbe_x550em_a_has_mii(hw)) 923 - return -ENODEV; 924 - bus->read = &ixgbe_x550em_a_mii_bus_read; 925 - bus->write = &ixgbe_x550em_a_mii_bus_write; 925 + return 0; 926 + read = &ixgbe_x550em_a_mii_bus_read; 927 + write = &ixgbe_x550em_a_mii_bus_write; 926 928 break; 927 929 default: 928 - bus->read = &ixgbe_mii_bus_read; 929 - bus->write = &ixgbe_mii_bus_write; 930 + read = &ixgbe_mii_bus_read; 931 + write = &ixgbe_mii_bus_write; 930 932 break; 931 933 } 934 + 935 + bus = devm_mdiobus_alloc(dev); 936 + if (!bus) 937 + return -ENOMEM; 938 + 939 + bus->read = read; 940 + bus->write = write; 932 941 933 942 /* Use the position of the device in the PCI hierarchy as the id */ 934 943 snprintf(bus->id, MII_BUS_ID_SIZE, "%s-mdio-%s", ixgbe_driver_name,
-1
drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
··· 350 350 if (ixgbe_read_eerd_generic(hw, pointer, &length)) { 351 351 hw_dbg(hw, "EEPROM read failed\n"); 352 352 return IXGBE_ERR_EEPROM; 353 - break; 354 353 } 355 354 356 355 /* Skip pointer section if length is invalid. */
+2 -2
drivers/net/ethernet/korina.c
··· 1113 1113 return rc; 1114 1114 1115 1115 probe_err_register: 1116 - kfree(KSEG0ADDR(lp->td_ring)); 1116 + kfree((struct dma_desc *)KSEG0ADDR(lp->td_ring)); 1117 1117 probe_err_td_ring: 1118 1118 iounmap(lp->tx_dma_regs); 1119 1119 probe_err_dma_tx: ··· 1133 1133 iounmap(lp->eth_regs); 1134 1134 iounmap(lp->rx_dma_regs); 1135 1135 iounmap(lp->tx_dma_regs); 1136 - kfree(KSEG0ADDR(lp->td_ring)); 1136 + kfree((struct dma_desc *)KSEG0ADDR(lp->td_ring)); 1137 1137 1138 1138 unregister_netdev(bif->dev); 1139 1139 free_netdev(bif->dev);
+1
drivers/net/ethernet/mediatek/Kconfig
··· 17 17 config NET_MEDIATEK_STAR_EMAC 18 18 tristate "MediaTek STAR Ethernet MAC support" 19 19 select PHYLIB 20 + select REGMAP_MMIO 20 21 help 21 22 This driver supports the ethernet MAC IP first used on 22 23 MediaTek MT85** SoCs.
+4 -4
drivers/net/ethernet/realtek/r8169_main.c
··· 4694 4694 4695 4695 phy_disconnect(tp->phydev); 4696 4696 4697 - pci_free_irq(pdev, 0, tp); 4697 + free_irq(pci_irq_vector(pdev, 0), tp); 4698 4698 4699 4699 dma_free_coherent(&pdev->dev, R8169_RX_RING_BYTES, tp->RxDescArray, 4700 4700 tp->RxPhyAddr); ··· 4745 4745 4746 4746 rtl_request_firmware(tp); 4747 4747 4748 - retval = pci_request_irq(pdev, 0, rtl8169_interrupt, NULL, tp, 4749 - dev->name); 4748 + retval = request_irq(pci_irq_vector(pdev, 0), rtl8169_interrupt, 4749 + IRQF_NO_THREAD | IRQF_SHARED, dev->name, tp); 4750 4750 if (retval < 0) 4751 4751 goto err_release_fw_2; 4752 4752 ··· 4763 4763 return retval; 4764 4764 4765 4765 err_free_irq: 4766 - pci_free_irq(pdev, 0, tp); 4766 + free_irq(pci_irq_vector(pdev, 0), tp); 4767 4767 err_release_fw_2: 4768 4768 rtl_release_firmware(tp); 4769 4769 rtl8169_rx_clear(tp);
+1
drivers/net/ethernet/sfc/efx_common.c
··· 1014 1014 efx->num_mac_stats = MC_CMD_MAC_NSTATS; 1015 1015 BUILD_BUG_ON(MC_CMD_MAC_NSTATS - 1 != MC_CMD_MAC_GENERATION_END); 1016 1016 mutex_init(&efx->mac_lock); 1017 + init_rwsem(&efx->filter_sem); 1017 1018 #ifdef CONFIG_RFS_ACCEL 1018 1019 mutex_init(&efx->rps_mutex); 1019 1020 spin_lock_init(&efx->rps_hash_lock);
-1
drivers/net/ethernet/sfc/rx_common.c
··· 797 797 { 798 798 int rc; 799 799 800 - init_rwsem(&efx->filter_sem); 801 800 mutex_lock(&efx->mac_lock); 802 801 down_write(&efx->filter_sem); 803 802 rc = efx->type->filter_table_probe(efx);
+17 -7
drivers/net/ethernet/socionext/netsec.c
··· 6 6 #include <linux/pm_runtime.h> 7 7 #include <linux/acpi.h> 8 8 #include <linux/of_mdio.h> 9 + #include <linux/of_net.h> 9 10 #include <linux/etherdevice.h> 10 11 #include <linux/interrupt.h> 11 12 #include <linux/io.h> ··· 1834 1833 static int netsec_of_probe(struct platform_device *pdev, 1835 1834 struct netsec_priv *priv, u32 *phy_addr) 1836 1835 { 1836 + int err; 1837 + 1838 + err = of_get_phy_mode(pdev->dev.of_node, &priv->phy_interface); 1839 + if (err) { 1840 + dev_err(&pdev->dev, "missing required property 'phy-mode'\n"); 1841 + return err; 1842 + } 1843 + 1837 1844 priv->phy_np = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0); 1838 1845 if (!priv->phy_np) { 1839 1846 dev_err(&pdev->dev, "missing required property 'phy-handle'\n"); ··· 1867 1858 1868 1859 if (!IS_ENABLED(CONFIG_ACPI)) 1869 1860 return -ENODEV; 1861 + 1862 + /* ACPI systems are assumed to configure the PHY in firmware, so 1863 + * there is really no need to discover the PHY mode from the DSDT. 1864 + * Since firmware is known to exist in the field that configures the 1865 + * PHY correctly but passes the wrong mode string in the phy-mode 1866 + * device property, we have no choice but to ignore it. 1867 + */ 1868 + priv->phy_interface = PHY_INTERFACE_MODE_NA; 1870 1869 1871 1870 ret = device_property_read_u32(&pdev->dev, "phy-channel", phy_addr); 1872 1871 if (ret) { ··· 2011 1994 2012 1995 priv->msg_enable = NETIF_MSG_TX_ERR | NETIF_MSG_HW | NETIF_MSG_DRV | 2013 1996 NETIF_MSG_LINK | NETIF_MSG_PROBE; 2014 - 2015 - priv->phy_interface = device_get_phy_mode(&pdev->dev); 2016 - if ((int)priv->phy_interface < 0) { 2017 - dev_err(&pdev->dev, "missing required property 'phy-mode'\n"); 2018 - ret = -ENODEV; 2019 - goto free_ndev; 2020 - } 2021 1997 2022 1998 priv->ioaddr = devm_ioremap(&pdev->dev, mmio_res->start, 2023 1999 resource_size(mmio_res));
+1 -2
drivers/net/pcs/Kconfig
··· 7 7 8 8 config PCS_XPCS 9 9 tristate "Synopsys DesignWare XPCS controller" 10 - select MDIO_BUS 11 - depends on MDIO_DEVICE 10 + depends on MDIO_DEVICE && MDIO_BUS 12 11 help 13 12 This module provides helper functions for Synopsys DesignWare XPCS 14 13 controllers.
+1 -1
drivers/net/usb/rtl8150.c
··· 261 261 262 262 ret = get_registers(dev, IDR, sizeof(node_id), node_id); 263 263 264 - if (ret == sizeof(node_id)) { 264 + if (!ret) { 265 265 ether_addr_copy(dev->netdev->dev_addr, node_id); 266 266 } else { 267 267 eth_hw_addr_random(dev->netdev);
+13 -37
drivers/net/virtio_net.c
··· 68 68 (1ULL << VIRTIO_NET_F_GUEST_ECN) | \ 69 69 (1ULL << VIRTIO_NET_F_GUEST_UFO)) 70 70 71 - #define GUEST_OFFLOAD_CSUM_MASK (1ULL << VIRTIO_NET_F_GUEST_CSUM) 72 - 73 71 struct virtnet_stat_desc { 74 72 char desc[ETH_GSTRING_LEN]; 75 73 size_t offset; ··· 2522 2524 return 0; 2523 2525 } 2524 2526 2525 - static netdev_features_t virtnet_fix_features(struct net_device *netdev, 2526 - netdev_features_t features) 2527 - { 2528 - /* If Rx checksum is disabled, LRO should also be disabled. */ 2529 - if (!(features & NETIF_F_RXCSUM)) 2530 - features &= ~NETIF_F_LRO; 2531 - 2532 - return features; 2533 - } 2534 - 2535 2527 static int virtnet_set_features(struct net_device *dev, 2536 2528 netdev_features_t features) 2537 2529 { 2538 2530 struct virtnet_info *vi = netdev_priv(dev); 2539 - u64 offloads = vi->guest_offloads; 2531 + u64 offloads; 2540 2532 int err; 2541 2533 2542 - /* Don't allow configuration while XDP is active. */ 2543 - if (vi->xdp_queue_pairs) 2544 - return -EBUSY; 2545 - 2546 2534 if ((dev->features ^ features) & NETIF_F_LRO) { 2535 + if (vi->xdp_queue_pairs) 2536 + return -EBUSY; 2537 + 2547 2538 if (features & NETIF_F_LRO) 2548 - offloads |= GUEST_OFFLOAD_LRO_MASK & 2549 - vi->guest_offloads_capable; 2539 + offloads = vi->guest_offloads_capable; 2550 2540 else 2551 - offloads &= ~GUEST_OFFLOAD_LRO_MASK; 2541 + offloads = vi->guest_offloads_capable & 2542 + ~GUEST_OFFLOAD_LRO_MASK; 2543 + 2544 + err = virtnet_set_guest_offloads(vi, offloads); 2545 + if (err) 2546 + return err; 2547 + vi->guest_offloads = offloads; 2552 2548 } 2553 2549 2554 - if ((dev->features ^ features) & NETIF_F_RXCSUM) { 2555 - if (features & NETIF_F_RXCSUM) 2556 - offloads |= GUEST_OFFLOAD_CSUM_MASK & 2557 - vi->guest_offloads_capable; 2558 - else 2559 - offloads &= ~GUEST_OFFLOAD_CSUM_MASK; 2560 - } 2561 - 2562 - err = virtnet_set_guest_offloads(vi, offloads); 2563 - if (err) 2564 - return err; 2565 - 2566 - vi->guest_offloads = offloads; 2567 2550 return 0; 2568 2551 } 2569 
2552 ··· 2563 2584 .ndo_features_check = passthru_features_check, 2564 2585 .ndo_get_phys_port_name = virtnet_get_phys_port_name, 2565 2586 .ndo_set_features = virtnet_set_features, 2566 - .ndo_fix_features = virtnet_fix_features, 2567 2587 }; 2568 2588 2569 2589 static void virtnet_config_changed_work(struct work_struct *work) ··· 3013 3035 if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) || 3014 3036 virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6)) 3015 3037 dev->features |= NETIF_F_LRO; 3016 - if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS)) { 3017 - dev->hw_features |= NETIF_F_RXCSUM; 3038 + if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS)) 3018 3039 dev->hw_features |= NETIF_F_LRO; 3019 - } 3020 3040 3021 3041 dev->vlan_features = dev->features; 3022 3042
+9 -1
drivers/net/wan/hdlc.c
··· 46 46 static int hdlc_rcv(struct sk_buff *skb, struct net_device *dev, 47 47 struct packet_type *p, struct net_device *orig_dev) 48 48 { 49 - struct hdlc_device *hdlc = dev_to_hdlc(dev); 49 + struct hdlc_device *hdlc; 50 + 51 + /* First make sure "dev" is an HDLC device */ 52 + if (!(dev->priv_flags & IFF_WAN_HDLC)) { 53 + kfree_skb(skb); 54 + return NET_RX_SUCCESS; 55 + } 56 + 57 + hdlc = dev_to_hdlc(dev); 50 58 51 59 if (!net_eq(dev_net(dev), &init_net)) { 52 60 kfree_skb(skb);
+1
drivers/net/wan/hdlc_raw_eth.c
··· 99 99 old_qlen = dev->tx_queue_len; 100 100 ether_setup(dev); 101 101 dev->tx_queue_len = old_qlen; 102 + dev->priv_flags &= ~IFF_TX_SKB_SHARING; 102 103 eth_hw_addr_random(dev); 103 104 call_netdevice_notifiers(NETDEV_POST_TYPE_CHANGE, dev); 104 105 netif_dormant_off(dev);
-4
drivers/net/wan/lmc/lmc_proto.c
··· 89 89 switch(sc->if_type){ 90 90 case LMC_PPP: 91 91 return hdlc_type_trans(skb, sc->lmc_device); 92 - break; 93 92 case LMC_NET: 94 93 return htons(ETH_P_802_2); 95 - break; 96 94 case LMC_RAW: /* Packet type for skbuff kind of useless */ 97 95 return htons(ETH_P_802_2); 98 - break; 99 96 default: 100 97 printk(KERN_WARNING "%s: No protocol set for this interface, assuming 802.2 (which is wrong!!)\n", sc->name); 101 98 return htons(ETH_P_802_2); 102 - break; 103 99 } 104 100 } 105 101
-1
drivers/nfc/st21nfca/core.c
··· 794 794 skb->len, 795 795 st21nfca_hci_data_exchange_cb, 796 796 info); 797 - break; 798 797 default: 799 798 return 1; 800 799 }
-1
drivers/nfc/trf7970a.c
··· 1382 1382 case ISO15693_CMD_WRITE_DSFID: 1383 1383 case ISO15693_CMD_LOCK_DSFID: 1384 1384 return 1; 1385 - break; 1386 1385 default: 1387 1386 return 0; 1388 1387 }
+9
include/linux/filter.h
··· 607 607 void *data_end; 608 608 }; 609 609 610 + struct bpf_nh_params { 611 + u32 nh_family; 612 + union { 613 + u32 ipv4_nh; 614 + struct in6_addr ipv6_nh; 615 + }; 616 + }; 617 + 610 618 struct bpf_redirect_info { 611 619 u32 flags; 612 620 u32 tgt_index; 613 621 void *tgt_value; 614 622 struct bpf_map *map; 615 623 u32 kern_flags; 624 + struct bpf_nh_params nh; 616 625 }; 617 626 618 627 DECLARE_PER_CPU(struct bpf_redirect_info, bpf_redirect_info);
+1 -1
include/linux/netlink.h
··· 240 240 int (*done)(struct netlink_callback *); 241 241 void *data; 242 242 struct module *module; 243 - u16 min_dump_alloc; 243 + u32 min_dump_alloc; 244 244 }; 245 245 246 246 int __netlink_dump_start(struct sock *ssk, struct sk_buff *skb,
+6
include/net/netfilter/nf_tables.h
··· 891 891 return (struct nft_expr *)&rule->data[rule->dlen]; 892 892 } 893 893 894 + static inline bool nft_expr_more(const struct nft_rule *rule, 895 + const struct nft_expr *expr) 896 + { 897 + return expr != nft_expr_last(rule) && expr->ops; 898 + } 899 + 894 900 static inline struct nft_userdata *nft_userdata(const struct nft_rule *rule) 895 901 { 896 902 return (void *)&rule->data[rule->dlen];
+18 -4
include/uapi/linux/bpf.h
··· 3677 3677 * Return 3678 3678 * The id is returned or 0 in case the id could not be retrieved. 3679 3679 * 3680 - * long bpf_redirect_neigh(u32 ifindex, u64 flags) 3680 + * long bpf_redirect_neigh(u32 ifindex, struct bpf_redir_neigh *params, int plen, u64 flags) 3681 3681 * Description 3682 3682 * Redirect the packet to another net device of index *ifindex* 3683 3683 * and fill in L2 addresses from neighboring subsystem. This helper 3684 3684 * is somewhat similar to **bpf_redirect**\ (), except that it 3685 3685 * populates L2 addresses as well, meaning, internally, the helper 3686 - * performs a FIB lookup based on the skb's networking header to 3687 - * get the address of the next hop and then relies on the neighbor 3688 - * lookup for the L2 address of the nexthop. 3686 + * relies on the neighbor lookup for the L2 address of the nexthop. 3687 + * 3688 + * The helper will perform a FIB lookup based on the skb's 3689 + * networking header to get the address of the next hop, unless 3690 + * this is supplied by the caller in the *params* argument. The 3691 + * *plen* argument indicates the len of *params* and should be set 3692 + * to 0 if *params* is NULL. 3689 3693 * 3690 3694 * The *flags* argument is reserved and must be 0. The helper is 3691 3695 * currently only supported for tc BPF program types, and enabled ··· 4908 4904 __be16 h_vlan_TCI; 4909 4905 __u8 smac[6]; /* ETH_ALEN */ 4910 4906 __u8 dmac[6]; /* ETH_ALEN */ 4907 + }; 4908 + 4909 + struct bpf_redir_neigh { 4910 + /* network family for lookup (AF_INET, AF_INET6) */ 4911 + __u32 nh_family; 4912 + /* network address of nexthop; skips fib lookup to find gateway */ 4913 + union { 4914 + __be32 ipv4_nh; 4915 + __u32 ipv6_nh[4]; /* in6_addr; network order */ 4916 + }; 4911 4917 }; 4912 4918 4913 4919 enum bpf_task_fd_type {
-1
kernel/bpf/syscall.c
··· 2913 2913 case BPF_CGROUP_INET_INGRESS: 2914 2914 case BPF_CGROUP_INET_EGRESS: 2915 2915 return BPF_PROG_TYPE_CGROUP_SKB; 2916 - break; 2917 2916 case BPF_CGROUP_INET_SOCK_CREATE: 2918 2917 case BPF_CGROUP_INET_SOCK_RELEASE: 2919 2918 case BPF_CGROUP_INET4_POST_BIND:
+5 -6
kernel/bpf/verifier.c
··· 5133 5133 regs[BPF_REG_0].id = ++env->id_gen; 5134 5134 } else { 5135 5135 regs[BPF_REG_0].type = PTR_TO_MAP_VALUE_OR_NULL; 5136 - regs[BPF_REG_0].id = ++env->id_gen; 5137 5136 } 5138 5137 } else if (fn->ret_type == RET_PTR_TO_SOCKET_OR_NULL) { 5139 5138 mark_reg_known_zero(env, regs, BPF_REG_0); 5140 5139 regs[BPF_REG_0].type = PTR_TO_SOCKET_OR_NULL; 5141 - regs[BPF_REG_0].id = ++env->id_gen; 5142 5140 } else if (fn->ret_type == RET_PTR_TO_SOCK_COMMON_OR_NULL) { 5143 5141 mark_reg_known_zero(env, regs, BPF_REG_0); 5144 5142 regs[BPF_REG_0].type = PTR_TO_SOCK_COMMON_OR_NULL; 5145 - regs[BPF_REG_0].id = ++env->id_gen; 5146 5143 } else if (fn->ret_type == RET_PTR_TO_TCP_SOCK_OR_NULL) { 5147 5144 mark_reg_known_zero(env, regs, BPF_REG_0); 5148 5145 regs[BPF_REG_0].type = PTR_TO_TCP_SOCK_OR_NULL; 5149 - regs[BPF_REG_0].id = ++env->id_gen; 5150 5146 } else if (fn->ret_type == RET_PTR_TO_ALLOC_MEM_OR_NULL) { 5151 5147 mark_reg_known_zero(env, regs, BPF_REG_0); 5152 5148 regs[BPF_REG_0].type = PTR_TO_MEM_OR_NULL; 5153 - regs[BPF_REG_0].id = ++env->id_gen; 5154 5149 regs[BPF_REG_0].mem_size = meta.mem_size; 5155 5150 } else if (fn->ret_type == RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL || 5156 5151 fn->ret_type == RET_PTR_TO_MEM_OR_BTF_ID) { ··· 5193 5198 fn->ret_type, func_id_name(func_id), func_id); 5194 5199 return -EINVAL; 5195 5200 } 5201 + 5202 + if (reg_type_may_be_null(regs[BPF_REG_0].type)) 5203 + regs[BPF_REG_0].id = ++env->id_gen; 5196 5204 5197 5205 if (is_ptr_cast_function(func_id)) { 5198 5206 /* For release_reference() */ ··· 7210 7212 struct bpf_reg_state *reg, u32 id, 7211 7213 bool is_null) 7212 7214 { 7213 - if (reg_type_may_be_null(reg->type) && reg->id == id) { 7215 + if (reg_type_may_be_null(reg->type) && reg->id == id && 7216 + !WARN_ON_ONCE(!reg->id)) { 7214 7217 /* Old offset (both fixed and variable parts) should 7215 7218 * have been known-zero, because we don't allow pointer 7216 7219 * arithmetic on pointers that might be NULL.
+1 -1
net/bridge/netfilter/ebt_dnat.c
··· 21 21 { 22 22 const struct ebt_nat_info *info = par->targinfo; 23 23 24 - if (skb_ensure_writable(skb, ETH_ALEN)) 24 + if (skb_ensure_writable(skb, 0)) 25 25 return EBT_DROP; 26 26 27 27 ether_addr_copy(eth_hdr(skb)->h_dest, info->mac);
+1 -1
net/bridge/netfilter/ebt_redirect.c
··· 21 21 { 22 22 const struct ebt_redirect_info *info = par->targinfo; 23 23 24 - if (skb_ensure_writable(skb, ETH_ALEN)) 24 + if (skb_ensure_writable(skb, 0)) 25 25 return EBT_DROP; 26 26 27 27 if (xt_hooknum(par) != NF_BR_BROUTING)
+1 -1
net/bridge/netfilter/ebt_snat.c
··· 22 22 { 23 23 const struct ebt_nat_info *info = par->targinfo; 24 24 25 - if (skb_ensure_writable(skb, ETH_ALEN * 2)) 25 + if (skb_ensure_writable(skb, 0)) 26 26 return EBT_DROP; 27 27 28 28 ether_addr_copy(eth_hdr(skb)->h_source, info->mac);
+1 -1
net/core/dev.c
··· 10213 10213 struct net_device *dev = list_first_entry(&unlink_list, 10214 10214 struct net_device, 10215 10215 unlink_list); 10216 - list_del(&dev->unlink_list); 10216 + list_del_init(&dev->unlink_list); 10217 10217 dev->nested_level = dev->lower_level - 1; 10218 10218 } 10219 10219 #endif
+101 -60
net/core/filter.c
··· 2165 2165 } 2166 2166 2167 2167 #if IS_ENABLED(CONFIG_IPV6) 2168 - static int bpf_out_neigh_v6(struct net *net, struct sk_buff *skb) 2168 + static int bpf_out_neigh_v6(struct net *net, struct sk_buff *skb, 2169 + struct net_device *dev, struct bpf_nh_params *nh) 2169 2170 { 2170 - struct dst_entry *dst = skb_dst(skb); 2171 - struct net_device *dev = dst->dev; 2172 2171 u32 hh_len = LL_RESERVED_SPACE(dev); 2173 2172 const struct in6_addr *nexthop; 2173 + struct dst_entry *dst = NULL; 2174 2174 struct neighbour *neigh; 2175 2175 2176 2176 if (dev_xmit_recursion()) { ··· 2196 2196 } 2197 2197 2198 2198 rcu_read_lock_bh(); 2199 - nexthop = rt6_nexthop(container_of(dst, struct rt6_info, dst), 2200 - &ipv6_hdr(skb)->daddr); 2199 + if (!nh) { 2200 + dst = skb_dst(skb); 2201 + nexthop = rt6_nexthop(container_of(dst, struct rt6_info, dst), 2202 + &ipv6_hdr(skb)->daddr); 2203 + } else { 2204 + nexthop = &nh->ipv6_nh; 2205 + } 2201 2206 neigh = ip_neigh_gw6(dev, nexthop); 2202 2207 if (likely(!IS_ERR(neigh))) { 2203 2208 int ret; ··· 2215 2210 return ret; 2216 2211 } 2217 2212 rcu_read_unlock_bh(); 2218 - IP6_INC_STATS(dev_net(dst->dev), 2219 - ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES); 2213 + if (dst) 2214 + IP6_INC_STATS(dev_net(dst->dev), 2215 + ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES); 2220 2216 out_drop: 2221 2217 kfree_skb(skb); 2222 2218 return -ENETDOWN; 2223 2219 } 2224 2220 2225 - static int __bpf_redirect_neigh_v6(struct sk_buff *skb, struct net_device *dev) 2221 + static int __bpf_redirect_neigh_v6(struct sk_buff *skb, struct net_device *dev, 2222 + struct bpf_nh_params *nh) 2226 2223 { 2227 2224 const struct ipv6hdr *ip6h = ipv6_hdr(skb); 2228 2225 struct net *net = dev_net(dev); 2229 2226 int err, ret = NET_XMIT_DROP; 2230 - struct dst_entry *dst; 2231 - struct flowi6 fl6 = { 2232 - .flowi6_flags = FLOWI_FLAG_ANYSRC, 2233 - .flowi6_mark = skb->mark, 2234 - .flowlabel = ip6_flowinfo(ip6h), 2235 - .flowi6_oif = dev->ifindex, 2236 - .flowi6_proto = 
ip6h->nexthdr, 2237 - .daddr = ip6h->daddr, 2238 - .saddr = ip6h->saddr, 2239 - }; 2240 2227 2241 - dst = ipv6_stub->ipv6_dst_lookup_flow(net, NULL, &fl6, NULL); 2242 - if (IS_ERR(dst)) 2228 + if (!nh) { 2229 + struct dst_entry *dst; 2230 + struct flowi6 fl6 = { 2231 + .flowi6_flags = FLOWI_FLAG_ANYSRC, 2232 + .flowi6_mark = skb->mark, 2233 + .flowlabel = ip6_flowinfo(ip6h), 2234 + .flowi6_oif = dev->ifindex, 2235 + .flowi6_proto = ip6h->nexthdr, 2236 + .daddr = ip6h->daddr, 2237 + .saddr = ip6h->saddr, 2238 + }; 2239 + 2240 + dst = ipv6_stub->ipv6_dst_lookup_flow(net, NULL, &fl6, NULL); 2241 + if (IS_ERR(dst)) 2242 + goto out_drop; 2243 + 2244 + skb_dst_set(skb, dst); 2245 + } else if (nh->nh_family != AF_INET6) { 2243 2246 goto out_drop; 2247 + } 2244 2248 2245 - skb_dst_set(skb, dst); 2246 - 2247 - err = bpf_out_neigh_v6(net, skb); 2249 + err = bpf_out_neigh_v6(net, skb, dev, nh); 2248 2250 if (unlikely(net_xmit_eval(err))) 2249 2251 dev->stats.tx_errors++; 2250 2252 else ··· 2264 2252 return ret; 2265 2253 } 2266 2254 #else 2267 - static int __bpf_redirect_neigh_v6(struct sk_buff *skb, struct net_device *dev) 2255 + static int __bpf_redirect_neigh_v6(struct sk_buff *skb, struct net_device *dev, 2256 + struct bpf_nh_params *nh) 2268 2257 { 2269 2258 kfree_skb(skb); 2270 2259 return NET_XMIT_DROP; ··· 2273 2260 #endif /* CONFIG_IPV6 */ 2274 2261 2275 2262 #if IS_ENABLED(CONFIG_INET) 2276 - static int bpf_out_neigh_v4(struct net *net, struct sk_buff *skb) 2263 + static int bpf_out_neigh_v4(struct net *net, struct sk_buff *skb, 2264 + struct net_device *dev, struct bpf_nh_params *nh) 2277 2265 { 2278 - struct dst_entry *dst = skb_dst(skb); 2279 - struct rtable *rt = container_of(dst, struct rtable, dst); 2280 - struct net_device *dev = dst->dev; 2281 2266 u32 hh_len = LL_RESERVED_SPACE(dev); 2282 2267 struct neighbour *neigh; 2283 2268 bool is_v6gw = false; ··· 2303 2292 } 2304 2293 2305 2294 rcu_read_lock_bh(); 2306 - neigh = ip_neigh_for_gw(rt, skb, &is_v6gw); 
2295 + if (!nh) { 2296 + struct dst_entry *dst = skb_dst(skb); 2297 + struct rtable *rt = container_of(dst, struct rtable, dst); 2298 + 2299 + neigh = ip_neigh_for_gw(rt, skb, &is_v6gw); 2300 + } else if (nh->nh_family == AF_INET6) { 2301 + neigh = ip_neigh_gw6(dev, &nh->ipv6_nh); 2302 + is_v6gw = true; 2303 + } else if (nh->nh_family == AF_INET) { 2304 + neigh = ip_neigh_gw4(dev, nh->ipv4_nh); 2305 + } else { 2306 + rcu_read_unlock_bh(); 2307 + goto out_drop; 2308 + } 2309 + 2307 2310 if (likely(!IS_ERR(neigh))) { 2308 2311 int ret; 2309 2312 ··· 2334 2309 return -ENETDOWN; 2335 2310 } 2336 2311 2337 - static int __bpf_redirect_neigh_v4(struct sk_buff *skb, struct net_device *dev) 2312 + static int __bpf_redirect_neigh_v4(struct sk_buff *skb, struct net_device *dev, 2313 + struct bpf_nh_params *nh) 2338 2314 { 2339 2315 const struct iphdr *ip4h = ip_hdr(skb); 2340 2316 struct net *net = dev_net(dev); 2341 2317 int err, ret = NET_XMIT_DROP; 2342 - struct rtable *rt; 2343 - struct flowi4 fl4 = { 2344 - .flowi4_flags = FLOWI_FLAG_ANYSRC, 2345 - .flowi4_mark = skb->mark, 2346 - .flowi4_tos = RT_TOS(ip4h->tos), 2347 - .flowi4_oif = dev->ifindex, 2348 - .flowi4_proto = ip4h->protocol, 2349 - .daddr = ip4h->daddr, 2350 - .saddr = ip4h->saddr, 2351 - }; 2352 2318 2353 - rt = ip_route_output_flow(net, &fl4, NULL); 2354 - if (IS_ERR(rt)) 2355 - goto out_drop; 2356 - if (rt->rt_type != RTN_UNICAST && rt->rt_type != RTN_LOCAL) { 2357 - ip_rt_put(rt); 2358 - goto out_drop; 2319 + if (!nh) { 2320 + struct flowi4 fl4 = { 2321 + .flowi4_flags = FLOWI_FLAG_ANYSRC, 2322 + .flowi4_mark = skb->mark, 2323 + .flowi4_tos = RT_TOS(ip4h->tos), 2324 + .flowi4_oif = dev->ifindex, 2325 + .flowi4_proto = ip4h->protocol, 2326 + .daddr = ip4h->daddr, 2327 + .saddr = ip4h->saddr, 2328 + }; 2329 + struct rtable *rt; 2330 + 2331 + rt = ip_route_output_flow(net, &fl4, NULL); 2332 + if (IS_ERR(rt)) 2333 + goto out_drop; 2334 + if (rt->rt_type != RTN_UNICAST && rt->rt_type != RTN_LOCAL) { 2335 + 
ip_rt_put(rt); 2336 + goto out_drop; 2337 + } 2338 + 2339 + skb_dst_set(skb, &rt->dst); 2359 2340 } 2360 2341 2361 - skb_dst_set(skb, &rt->dst); 2362 - 2363 - err = bpf_out_neigh_v4(net, skb); 2342 + err = bpf_out_neigh_v4(net, skb, dev, nh); 2364 2343 if (unlikely(net_xmit_eval(err))) 2365 2344 dev->stats.tx_errors++; 2366 2345 else ··· 2377 2348 return ret; 2378 2349 } 2379 2350 #else 2380 - static int __bpf_redirect_neigh_v4(struct sk_buff *skb, struct net_device *dev) 2351 + static int __bpf_redirect_neigh_v4(struct sk_buff *skb, struct net_device *dev, 2352 + struct bpf_nh_params *nh) 2381 2353 { 2382 2354 kfree_skb(skb); 2383 2355 return NET_XMIT_DROP; 2384 2356 } 2385 2357 #endif /* CONFIG_INET */ 2386 2358 2387 - static int __bpf_redirect_neigh(struct sk_buff *skb, struct net_device *dev) 2359 + static int __bpf_redirect_neigh(struct sk_buff *skb, struct net_device *dev, 2360 + struct bpf_nh_params *nh) 2388 2361 { 2389 2362 struct ethhdr *ethh = eth_hdr(skb); 2390 2363 ··· 2401 2370 skb_reset_network_header(skb); 2402 2371 2403 2372 if (skb->protocol == htons(ETH_P_IP)) 2404 - return __bpf_redirect_neigh_v4(skb, dev); 2373 + return __bpf_redirect_neigh_v4(skb, dev, nh); 2405 2374 else if (skb->protocol == htons(ETH_P_IPV6)) 2406 - return __bpf_redirect_neigh_v6(skb, dev); 2375 + return __bpf_redirect_neigh_v6(skb, dev, nh); 2407 2376 out: 2408 2377 kfree_skb(skb); 2409 2378 return -ENOTSUPP; ··· 2413 2382 enum { 2414 2383 BPF_F_NEIGH = (1ULL << 1), 2415 2384 BPF_F_PEER = (1ULL << 2), 2416 - #define BPF_F_REDIRECT_INTERNAL (BPF_F_NEIGH | BPF_F_PEER) 2385 + BPF_F_NEXTHOP = (1ULL << 3), 2386 + #define BPF_F_REDIRECT_INTERNAL (BPF_F_NEIGH | BPF_F_PEER | BPF_F_NEXTHOP) 2417 2387 }; 2418 2388 2419 2389 BPF_CALL_3(bpf_clone_redirect, struct sk_buff *, skb, u32, ifindex, u64, flags) ··· 2487 2455 return -EAGAIN; 2488 2456 } 2489 2457 return flags & BPF_F_NEIGH ? 2490 - __bpf_redirect_neigh(skb, dev) : 2458 + __bpf_redirect_neigh(skb, dev, flags & BPF_F_NEXTHOP ? 
2459 + &ri->nh : NULL) : 2491 2460 __bpf_redirect(skb, dev, flags); 2492 2461 out_drop: 2493 2462 kfree_skb(skb); ··· 2537 2504 .arg2_type = ARG_ANYTHING, 2538 2505 }; 2539 2506 2540 - BPF_CALL_2(bpf_redirect_neigh, u32, ifindex, u64, flags) 2507 + BPF_CALL_4(bpf_redirect_neigh, u32, ifindex, struct bpf_redir_neigh *, params, 2508 + int, plen, u64, flags) 2541 2509 { 2542 2510 struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); 2543 2511 2544 - if (unlikely(flags)) 2512 + if (unlikely((plen && plen < sizeof(*params)) || flags)) 2545 2513 return TC_ACT_SHOT; 2546 2514 2547 - ri->flags = BPF_F_NEIGH; 2515 + ri->flags = BPF_F_NEIGH | (plen ? BPF_F_NEXTHOP : 0); 2548 2516 ri->tgt_index = ifindex; 2517 + 2518 + BUILD_BUG_ON(sizeof(struct bpf_redir_neigh) != sizeof(struct bpf_nh_params)); 2519 + if (plen) 2520 + memcpy(&ri->nh, params, sizeof(ri->nh)); 2549 2521 2550 2522 return TC_ACT_REDIRECT; 2551 2523 } ··· 2560 2522 .gpl_only = false, 2561 2523 .ret_type = RET_INTEGER, 2562 2524 .arg1_type = ARG_ANYTHING, 2563 - .arg2_type = ARG_ANYTHING, 2525 + .arg2_type = ARG_PTR_TO_MEM_OR_NULL, 2526 + .arg3_type = ARG_CONST_SIZE_OR_ZERO, 2527 + .arg4_type = ARG_ANYTHING, 2564 2528 }; 2565 2529 2566 2530 BPF_CALL_2(bpf_msg_apply_bytes, struct sk_msg *, msg, u32, bytes) ··· 4733 4693 cmpxchg(&sk->sk_pacing_status, 4734 4694 SK_PACING_NONE, 4735 4695 SK_PACING_NEEDED); 4736 - sk->sk_max_pacing_rate = (val == ~0U) ? ~0UL : val; 4696 + sk->sk_max_pacing_rate = (val == ~0U) ? 4697 + ~0UL : (unsigned int)val; 4737 4698 sk->sk_pacing_rate = min(sk->sk_pacing_rate, 4738 4699 sk->sk_max_pacing_rate); 4739 4700 break;
+6 -7
net/core/rtnetlink.c
··· 3709 3709 return rtnl_linkprop(RTM_DELLINKPROP, skb, nlh, extack); 3710 3710 } 3711 3711 3712 - static u16 rtnl_calcit(struct sk_buff *skb, struct nlmsghdr *nlh) 3712 + static u32 rtnl_calcit(struct sk_buff *skb, struct nlmsghdr *nlh) 3713 3713 { 3714 3714 struct net *net = sock_net(skb->sk); 3715 - struct net_device *dev; 3715 + size_t min_ifinfo_dump_size = 0; 3716 3716 struct nlattr *tb[IFLA_MAX+1]; 3717 3717 u32 ext_filter_mask = 0; 3718 - u16 min_ifinfo_dump_size = 0; 3718 + struct net_device *dev; 3719 3719 int hdrlen; 3720 3720 3721 3721 /* Same kernel<->userspace interface hack as in rtnl_dump_ifinfo. */ ··· 3735 3735 */ 3736 3736 rcu_read_lock(); 3737 3737 for_each_netdev_rcu(net, dev) { 3738 - min_ifinfo_dump_size = max_t(u16, min_ifinfo_dump_size, 3739 - if_nlmsg_size(dev, 3740 - ext_filter_mask)); 3738 + min_ifinfo_dump_size = max(min_ifinfo_dump_size, 3739 + if_nlmsg_size(dev, ext_filter_mask)); 3741 3740 } 3742 3741 rcu_read_unlock(); 3743 3742 ··· 5493 5494 if (kind == 2 && nlh->nlmsg_flags&NLM_F_DUMP) { 5494 5495 struct sock *rtnl; 5495 5496 rtnl_dumpit_func dumpit; 5496 - u16 min_dump_alloc = 0; 5497 + u32 min_dump_alloc = 0; 5497 5498 5498 5499 link = rtnl_get_link(family, type); 5499 5500 if (!link || !link->dumpit) {
+1 -1
net/core/sock.c
··· 1163 1163 1164 1164 case SO_MAX_PACING_RATE: 1165 1165 { 1166 - unsigned long ulval = (val == ~0U) ? ~0UL : val; 1166 + unsigned long ulval = (val == ~0U) ? ~0UL : (unsigned int)val; 1167 1167 1168 1168 if (sizeof(ulval) != sizeof(val) && 1169 1169 optlen >= sizeof(ulval) &&
+2
net/dsa/tag_ksz.c
··· 123 123 .xmit = ksz8795_xmit, 124 124 .rcv = ksz8795_rcv, 125 125 .overhead = KSZ_INGRESS_TAG_LEN, 126 + .tail_tag = true, 126 127 }; 127 128 128 129 DSA_TAG_DRIVER(ksz8795_netdev_ops); ··· 200 199 .xmit = ksz9477_xmit, 201 200 .rcv = ksz9477_rcv, 202 201 .overhead = KSZ9477_INGRESS_TAG_LEN, 202 + .tail_tag = true, 203 203 }; 204 204 205 205 DSA_TAG_DRIVER(ksz9477_netdev_ops);
+5 -2
net/ipv4/icmp.c
··· 239 239 /** 240 240 * icmp_global_allow - Are we allowed to send one more ICMP message ? 241 241 * 242 - * Uses a token bucket to limit our ICMP messages to sysctl_icmp_msgs_per_sec. 242 + * Uses a token bucket to limit our ICMP messages to ~sysctl_icmp_msgs_per_sec. 243 243 * Returns false if we reached the limit and can not send another packet. 244 244 * Note: called with BH disabled 245 245 */ ··· 267 267 } 268 268 credit = min_t(u32, icmp_global.credit + incr, sysctl_icmp_msgs_burst); 269 269 if (credit) { 270 - credit--; 270 + /* We want to use a credit of one in average, but need to randomize 271 + * it for security reasons. 272 + */ 273 + credit = max_t(int, credit - prandom_u32_max(3), 0); 271 274 rc = true; 272 275 } 273 276 WRITE_ONCE(icmp_global.credit, credit);
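The icmp.c hunk above replaces the deterministic `credit--` with a randomized spend of 0 to 2 tokens (average 1), so an attacker probing the global rate limiter gets no clean signal about the bucket level. A minimal userspace sketch of that consume step, with `prandom_u32_max(3)` stood in for by a caller-supplied draw (all names here are illustrative, not the kernel's):

```c
#include <stdint.h>

/* Sketch of the randomized token-bucket consume step. "draw" stands in
 * for prandom_u32_max(3); spending 0..2 tokens instead of exactly 1 keeps
 * the average cost at one token while hiding the exact bucket level from
 * an observer timing the replies. */
static uint32_t consume_credit(uint32_t credit, uint32_t draw)
{
	uint32_t spend = draw % 3;	/* 0, 1 or 2 tokens */

	/* clamp at zero, mirroring max_t(int, credit - ..., 0) */
	return credit > spend ? credit - spend : 0;
}
```

The clamp matters: with `credit` already low, a draw of 2 must not wrap the unsigned counter.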
+1 -1
net/ipv4/nexthop.c
··· 845 845 remove_nh_grp_entry(net, nhge, nlinfo); 846 846 847 847 /* make sure all see the newly published array before releasing rtnl */ 848 - synchronize_rcu(); 848 + synchronize_net(); 849 849 } 850 850 851 851 static void remove_nexthop_group(struct nexthop *nh, struct nl_info *nlinfo)
+2
net/ipv4/tcp_input.c
··· 5827 5827 tcp_data_snd_check(sk); 5828 5828 if (!inet_csk_ack_scheduled(sk)) 5829 5829 goto no_ack; 5830 + } else { 5831 + tcp_update_wl(tp, TCP_SKB_CB(skb)->seq); 5830 5832 } 5831 5833 5832 5834 __tcp_ack_snd_check(sk, 0);
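The tcp_input.c hunk updates `snd_wl1` in the bulk-receiver fast path; without it, `snd_wl1` can fall more than 2^31 behind during a long unidirectional transfer, after which the wraparound-aware sequence comparison used by the window-update check misfires and legitimate window updates are discarded, leaving the transfer "stuck". A simplified sketch of that comparison (cf. the kernel's `after()` and `tcp_may_update_window()`; this is a reduced form, not the full check):

```c
#include <stdint.h>

/* Wraparound-aware "a is after b" for 32-bit sequence numbers: the signed
 * difference is positive only while a is less than 2^31 ahead of b. */
static int seq_after(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) > 0;
}

/* Reduced sketch of the acceptance test: take the window update if the ACK
 * advances snd_una, or if the segment is newer than the one that last
 * updated the window (snd_wl1). A stale snd_wl1 makes the second test fail
 * once the gap exceeds 2^31. */
static int may_update_window(uint32_t ack, uint32_t snd_una,
			     uint32_t ack_seq, uint32_t snd_wl1)
{
	return seq_after(ack, snd_una) || seq_after(ack_seq, snd_wl1);
}
```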
+1
net/ipv6/netfilter/nf_conntrack_reasm.c
··· 355 355 ipv6_hdr(skb)->payload_len = htons(payload_len); 356 356 ipv6_change_dsfield(ipv6_hdr(skb), 0xff, ecn); 357 357 IP6CB(skb)->frag_max_size = sizeof(struct ipv6hdr) + fq->q.max_size; 358 + IP6CB(skb)->flags |= IP6SKB_FRAGMENTED; 358 359 359 360 /* Yes, and fold redundant checksum back. 8) */ 360 361 if (skb->ip_summed == CHECKSUM_COMPLETE)
+1
net/mpls/mpls_iptunnel.c
··· 300 300 module_exit(mpls_iptunnel_exit); 301 301 302 302 MODULE_ALIAS_RTNL_LWT(MPLS); 303 + MODULE_SOFTDEP("post: mpls_gso"); 303 304 MODULE_DESCRIPTION("MultiProtocol Label Switching IP Tunnels"); 304 305 MODULE_LICENSE("GPL v2");
+2 -4
net/mptcp/Kconfig
··· 19 19 20 20 config MPTCP_IPV6 21 21 bool "MPTCP: IPv6 support for Multipath TCP" 22 - select IPV6 22 + depends on IPV6=y 23 23 default y 24 - 25 - endif 26 24 27 25 config MPTCP_KUNIT_TESTS 28 26 tristate "This builds the MPTCP KUnit tests" if !KUNIT_ALL_TESTS 29 - select MPTCP 30 27 depends on KUNIT 31 28 default KUNIT_ALL_TESTS 32 29 help ··· 36 39 37 40 If unsure, say N. 38 41 42 + endif
+2 -1
net/mptcp/options.c
··· 241 241 } 242 242 243 243 mp_opt->add_addr = 1; 244 - mp_opt->port = 0; 245 244 mp_opt->addr_id = *ptr++; 246 245 pr_debug("ADD_ADDR: id=%d, echo=%d", mp_opt->addr_id, mp_opt->echo); 247 246 if (mp_opt->family == MPTCP_ADDR_IPVERSION_4) { ··· 296 297 mp_opt->mp_capable = 0; 297 298 mp_opt->mp_join = 0; 298 299 mp_opt->add_addr = 0; 300 + mp_opt->ahmac = 0; 301 + mp_opt->port = 0; 299 302 mp_opt->rm_addr = 0; 300 303 mp_opt->dss = 0; 301 304
+6 -4
net/netfilter/ipvs/ip_vs_proto_tcp.c
··· 539 539 if (new_state != cp->state) { 540 540 struct ip_vs_dest *dest = cp->dest; 541 541 542 - IP_VS_DBG_BUF(8, "%s %s [%c%c%c%c] %s:%d->" 543 - "%s:%d state: %s->%s conn->refcnt:%d\n", 542 + IP_VS_DBG_BUF(8, "%s %s [%c%c%c%c] c:%s:%d v:%s:%d " 543 + "d:%s:%d state: %s->%s conn->refcnt:%d\n", 544 544 pd->pp->name, 545 545 ((state_off == TCP_DIR_OUTPUT) ? 546 546 "output " : "input "), ··· 548 548 th->fin ? 'F' : '.', 549 549 th->ack ? 'A' : '.', 550 550 th->rst ? 'R' : '.', 551 - IP_VS_DBG_ADDR(cp->daf, &cp->daddr), 552 - ntohs(cp->dport), 553 551 IP_VS_DBG_ADDR(cp->af, &cp->caddr), 554 552 ntohs(cp->cport), 553 + IP_VS_DBG_ADDR(cp->af, &cp->vaddr), 554 + ntohs(cp->vport), 555 + IP_VS_DBG_ADDR(cp->daf, &cp->daddr), 556 + ntohs(cp->dport), 555 557 tcp_state_name(cp->state), 556 558 tcp_state_name(new_state), 557 559 refcount_read(&cp->refcnt));
+13 -6
net/netfilter/nf_conntrack_proto_tcp.c
··· 541 541 swin = win << sender->td_scale; 542 542 sender->td_maxwin = (swin == 0 ? 1 : swin); 543 543 sender->td_maxend = end + sender->td_maxwin; 544 - /* 545 - * We haven't seen traffic in the other direction yet 546 - * but we have to tweak window tracking to pass III 547 - * and IV until that happens. 548 - */ 549 - if (receiver->td_maxwin == 0) 544 + if (receiver->td_maxwin == 0) { 545 + /* We haven't seen traffic in the other 546 + * direction yet but we have to tweak window 547 + * tracking to pass III and IV until that 548 + * happens. 549 + */ 550 550 receiver->td_end = receiver->td_maxend = sack; 551 + } else if (sack == receiver->td_end + 1) { 552 + /* Likely a reply to a keepalive. 553 + * Needed for III. 554 + */ 555 + receiver->td_end++; 556 + } 557 + 551 558 } 552 559 } else if (((state->state == TCP_CONNTRACK_SYN_SENT 553 560 && dir == IP_CT_DIR_ORIGINAL)
+1
net/netfilter/nf_dup_netdev.c
··· 19 19 skb_push(skb, skb->mac_len); 20 20 21 21 skb->dev = dev; 22 + skb->tstamp = 0; 22 23 dev_queue_xmit(skb); 23 24 } 24 25
+3 -3
net/netfilter/nf_tables_api.c
··· 302 302 struct nft_expr *expr; 303 303 304 304 expr = nft_expr_first(rule); 305 - while (expr != nft_expr_last(rule) && expr->ops) { 305 + while (nft_expr_more(rule, expr)) { 306 306 if (expr->ops->activate) 307 307 expr->ops->activate(ctx, expr); 308 308 ··· 317 317 struct nft_expr *expr; 318 318 319 319 expr = nft_expr_first(rule); 320 - while (expr != nft_expr_last(rule) && expr->ops) { 320 + while (nft_expr_more(rule, expr)) { 321 321 if (expr->ops->deactivate) 322 322 expr->ops->deactivate(ctx, expr, phase); 323 323 ··· 3080 3080 * is called on error from nf_tables_newrule(). 3081 3081 */ 3082 3082 expr = nft_expr_first(rule); 3083 - while (expr != nft_expr_last(rule) && expr->ops) { 3083 + while (nft_expr_more(rule, expr)) { 3084 3084 next = nft_expr_next(expr); 3085 3085 nf_tables_expr_destroy(ctx, expr); 3086 3086 expr = next;
+2 -2
net/netfilter/nf_tables_offload.c
··· 37 37 struct nft_expr *expr; 38 38 39 39 expr = nft_expr_first(rule); 40 - while (expr->ops && expr != nft_expr_last(rule)) { 40 + while (nft_expr_more(rule, expr)) { 41 41 if (expr->ops->offload_flags & NFT_OFFLOAD_F_ACTION) 42 42 num_actions++; 43 43 ··· 61 61 ctx->net = net; 62 62 ctx->dep.type = NFT_OFFLOAD_DEP_UNSPEC; 63 63 64 - while (expr->ops && expr != nft_expr_last(rule)) { 64 + while (nft_expr_more(rule, expr)) { 65 65 if (!expr->ops->offload) { 66 66 err = -EOPNOTSUPP; 67 67 goto err_out;
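Both nf_tables hunks above replace open-coded loop conditions with `nft_expr_more()`; the offload variant additionally had the two tests in the wrong order, reading `expr->ops` before checking the end-of-rule bound. A self-contained analog of the corrected predicate, using stand-in structs rather than the nftables ones:

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for struct nft_expr: only the field the predicate touches. */
struct expr {
	const void *ops;
};

/* Analog of nft_expr_more(): test the end-of-array bound *first*, so the
 * short-circuit prevents dereferencing one element past the last valid
 * expression. */
static bool expr_more(const struct expr *last, const struct expr *e)
{
	return e != last && e->ops;
}
```

Iteration then reads `for (e = first; expr_more(last, e); e = next(e))`, never touching `e->ops` once the bound is hit.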
+1
net/netfilter/nft_fwd_netdev.c
··· 138 138 return; 139 139 140 140 skb->dev = dev; 141 + skb->tstamp = 0; 141 142 neigh_xmit(neigh_table, dev, addr, skb); 142 143 out: 143 144 regs->verdict.code = verdict;
+1 -1
net/nfc/netlink.c
··· 1217 1217 u32 idx; 1218 1218 char firmware_name[NFC_FIRMWARE_NAME_MAXSIZE + 1]; 1219 1219 1220 - if (!info->attrs[NFC_ATTR_DEVICE_INDEX]) 1220 + if (!info->attrs[NFC_ATTR_DEVICE_INDEX] || !info->attrs[NFC_ATTR_FIRMWARE_NAME]) 1221 1221 return -EINVAL; 1222 1222 1223 1223 idx = nla_get_u32(info->attrs[NFC_ATTR_DEVICE_INDEX]);
+35 -23
net/openvswitch/flow_table.c
··· 175 175 176 176 static void __mask_array_destroy(struct mask_array *ma) 177 177 { 178 - free_percpu(ma->masks_usage_cntr); 178 + free_percpu(ma->masks_usage_stats); 179 179 kfree(ma); 180 180 } 181 181 ··· 199 199 ma->masks_usage_zero_cntr[i] = 0; 200 200 201 201 for_each_possible_cpu(cpu) { 202 - u64 *usage_counters = per_cpu_ptr(ma->masks_usage_cntr, 203 - cpu); 202 + struct mask_array_stats *stats; 204 203 unsigned int start; 205 204 u64 counter; 206 205 206 + stats = per_cpu_ptr(ma->masks_usage_stats, cpu); 207 207 do { 208 - start = u64_stats_fetch_begin_irq(&ma->syncp); 209 - counter = usage_counters[i]; 210 - } while (u64_stats_fetch_retry_irq(&ma->syncp, start)); 208 + start = u64_stats_fetch_begin_irq(&stats->syncp); 209 + counter = stats->usage_cntrs[i]; 210 + } while (u64_stats_fetch_retry_irq(&stats->syncp, start)); 211 211 212 212 ma->masks_usage_zero_cntr[i] += counter; 213 213 } ··· 230 230 sizeof(struct sw_flow_mask *) * 231 231 size); 232 232 233 - new->masks_usage_cntr = __alloc_percpu(sizeof(u64) * size, 234 - __alignof__(u64)); 235 - if (!new->masks_usage_cntr) { 233 + new->masks_usage_stats = __alloc_percpu(sizeof(struct mask_array_stats) + 234 + sizeof(u64) * size, 235 + __alignof__(u64)); 236 + if (!new->masks_usage_stats) { 236 237 kfree(new); 237 238 return NULL; 238 239 } ··· 723 722 724 723 /* Flow lookup does full lookup on flow table. It starts with 725 724 * mask from index passed in *index. 725 + * This function MUST be called with BH disabled due to the use 726 + * of CPU specific variables. 
726 727 */ 727 728 static struct sw_flow *flow_lookup(struct flow_table *tbl, 728 729 struct table_instance *ti, ··· 734 731 u32 *n_cache_hit, 735 732 u32 *index) 736 733 { 737 - u64 *usage_counters = this_cpu_ptr(ma->masks_usage_cntr); 734 + struct mask_array_stats *stats = this_cpu_ptr(ma->masks_usage_stats); 738 735 struct sw_flow *flow; 739 736 struct sw_flow_mask *mask; 740 737 int i; ··· 744 741 if (mask) { 745 742 flow = masked_flow_lookup(ti, key, mask, n_mask_hit); 746 743 if (flow) { 747 - u64_stats_update_begin(&ma->syncp); 748 - usage_counters[*index]++; 749 - u64_stats_update_end(&ma->syncp); 744 + u64_stats_update_begin(&stats->syncp); 745 + stats->usage_cntrs[*index]++; 746 + u64_stats_update_end(&stats->syncp); 750 747 (*n_cache_hit)++; 751 748 return flow; 752 749 } ··· 765 762 flow = masked_flow_lookup(ti, key, mask, n_mask_hit); 766 763 if (flow) { /* Found */ 767 764 *index = i; 768 - u64_stats_update_begin(&ma->syncp); 769 - usage_counters[*index]++; 770 - u64_stats_update_end(&ma->syncp); 765 + u64_stats_update_begin(&stats->syncp); 766 + stats->usage_cntrs[*index]++; 767 + u64_stats_update_end(&stats->syncp); 771 768 return flow; 772 769 } 773 770 } ··· 853 850 struct mask_array *ma = rcu_dereference_ovsl(tbl->mask_array); 854 851 u32 __always_unused n_mask_hit; 855 852 u32 __always_unused n_cache_hit; 853 + struct sw_flow *flow; 856 854 u32 index = 0; 857 855 858 - return flow_lookup(tbl, ti, ma, key, &n_mask_hit, &n_cache_hit, &index); 856 + /* This function gets called trough the netlink interface and therefore 857 + * is preemptible. However, flow_lookup() function needs to be called 858 + * with BH disabled due to CPU specific variables. 
859 + */ 860 + local_bh_disable(); 861 + flow = flow_lookup(tbl, ti, ma, key, &n_mask_hit, &n_cache_hit, &index); 862 + local_bh_enable(); 863 + return flow; 859 864 } 860 865 861 866 struct sw_flow *ovs_flow_tbl_lookup_exact(struct flow_table *tbl, ··· 1120 1109 1121 1110 for (i = 0; i < ma->max; i++) { 1122 1111 struct sw_flow_mask *mask; 1123 - unsigned int start; 1124 1112 int cpu; 1125 1113 1126 1114 mask = rcu_dereference_ovsl(ma->masks[i]); ··· 1130 1120 masks_and_count[i].counter = 0; 1131 1121 1132 1122 for_each_possible_cpu(cpu) { 1133 - u64 *usage_counters = per_cpu_ptr(ma->masks_usage_cntr, 1134 - cpu); 1123 + struct mask_array_stats *stats; 1124 + unsigned int start; 1135 1125 u64 counter; 1136 1126 1127 + stats = per_cpu_ptr(ma->masks_usage_stats, cpu); 1137 1128 do { 1138 - start = u64_stats_fetch_begin_irq(&ma->syncp); 1139 - counter = usage_counters[i]; 1140 - } while (u64_stats_fetch_retry_irq(&ma->syncp, start)); 1129 + start = u64_stats_fetch_begin_irq(&stats->syncp); 1130 + counter = stats->usage_cntrs[i]; 1131 + } while (u64_stats_fetch_retry_irq(&stats->syncp, 1132 + start)); 1141 1133 1142 1134 masks_and_count[i].counter += counter; 1143 1135 }
+6 -2
net/openvswitch/flow_table.h
··· 38 38 u64 counter; 39 39 }; 40 40 41 + struct mask_array_stats { 42 + struct u64_stats_sync syncp; 43 + u64 usage_cntrs[]; 44 + }; 45 + 41 46 struct mask_array { 42 47 struct rcu_head rcu; 43 48 int count, max; 44 - u64 __percpu *masks_usage_cntr; 49 + struct mask_array_stats __percpu *masks_usage_stats; 45 50 u64 *masks_usage_zero_cntr; 46 - struct u64_stats_sync syncp; 47 51 struct sw_flow_mask __rcu *masks[]; 48 52 }; 49 53
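The flow_table changes above move the `u64_stats_sync` next to the per-CPU counters it actually protects, and the read side keeps its `fetch_begin`/`fetch_retry` loop. That pattern is a seqcount: the writer bumps a sequence number around each update, and a reader retries whenever the sequence was odd or changed underneath it. A minimal single-threaded sketch with invented names (the real kernel primitives also handle memory ordering, which is omitted here):

```c
#include <stdint.h>

struct mask_stats {
	uint32_t seq;		/* even: idle, odd: writer active */
	uint64_t usage_cntr;
};

static void update_begin(struct mask_stats *s) { s->seq++; }
static void update_end(struct mask_stats *s)   { s->seq++; }

/* Reader: snapshot the sequence, copy the counter, and retry if a writer
 * was active (odd seq) or completed an update in between. */
static uint64_t fetch_counter(const struct mask_stats *s)
{
	uint32_t start;
	uint64_t val;

	do {
		while ((start = s->seq) & 1)
			;			/* writer in progress */
		val = s->usage_cntr;
	} while (s->seq != start);		/* raced a writer, retry */

	return val;
}
```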
+2 -2
net/sched/act_ct.c
··· 156 156 __be16 target_dst = target.dst.u.udp.port; 157 157 158 158 if (target_src != tuple->src.u.udp.port) 159 - tcf_ct_add_mangle_action(action, FLOW_ACT_MANGLE_HDR_TYPE_TCP, 159 + tcf_ct_add_mangle_action(action, FLOW_ACT_MANGLE_HDR_TYPE_UDP, 160 160 offsetof(struct udphdr, source), 161 161 0xFFFF, be16_to_cpu(target_src)); 162 162 if (target_dst != tuple->dst.u.udp.port) 163 - tcf_ct_add_mangle_action(action, FLOW_ACT_MANGLE_HDR_TYPE_TCP, 163 + tcf_ct_add_mangle_action(action, FLOW_ACT_MANGLE_HDR_TYPE_UDP, 164 164 offsetof(struct udphdr, dest), 165 165 0xFFFF, be16_to_cpu(target_dst)); 166 166 }
+1 -1
net/sched/act_tunnel_key.c
··· 459 459 460 460 metadata = __ipv6_tun_set_dst(&saddr, &daddr, tos, ttl, dst_port, 461 461 0, flags, 462 - key_id, 0); 462 + key_id, opts_len); 463 463 } else { 464 464 NL_SET_ERR_MSG(extack, "Missing either ipv4 or ipv6 src and dst"); 465 465 ret = -EINVAL;
+1 -1
net/sched/cls_api.c
··· 3712 3712 entry->gate.num_entries = tcf_gate_num_entries(act); 3713 3713 err = tcf_gate_get_entries(entry, act); 3714 3714 if (err) 3715 - goto err_out; 3715 + goto err_out_locked; 3716 3716 } else { 3717 3717 err = -EOPNOTSUPP; 3718 3718 goto err_out_locked;
+8 -2
net/tipc/bcast.c
··· 108 108 { 109 109 struct tipc_bc_base *bb = tipc_bc_base(net); 110 110 int all_dests = tipc_link_bc_peers(bb->link); 111 + int max_win = tipc_link_max_win(bb->link); 112 + int min_win = tipc_link_min_win(bb->link); 111 113 int i, mtu, prim; 112 114 113 115 bb->primary_bearer = INVALID_BEARER_ID; ··· 123 121 continue; 124 122 125 123 mtu = tipc_bearer_mtu(net, i); 126 - if (mtu < tipc_link_mtu(bb->link)) 124 + if (mtu < tipc_link_mtu(bb->link)) { 127 125 tipc_link_set_mtu(bb->link, mtu); 126 + tipc_link_set_queue_limits(bb->link, 127 + min_win, 128 + max_win); 129 + } 128 130 bb->bcast_support &= tipc_bearer_bcast_support(net, i); 129 131 if (bb->dests[i] < all_dests) 130 132 continue; ··· 591 585 if (max_win > TIPC_MAX_LINK_WIN) 592 586 return -EINVAL; 593 587 tipc_bcast_lock(net); 594 - tipc_link_set_queue_limits(l, BCLINK_WIN_MIN, max_win); 588 + tipc_link_set_queue_limits(l, tipc_link_min_win(l), max_win); 595 589 tipc_bcast_unlock(net); 596 590 return 0; 597 591 }
+4 -4
samples/bpf/sockex3_kern.c
··· 44 44 switch (proto) { 45 45 case ETH_P_8021Q: 46 46 case ETH_P_8021AD: 47 - bpf_tail_call_static(skb, &jmp_table, PARSE_VLAN); 47 + bpf_tail_call(skb, &jmp_table, PARSE_VLAN); 48 48 break; 49 49 case ETH_P_MPLS_UC: 50 50 case ETH_P_MPLS_MC: 51 - bpf_tail_call_static(skb, &jmp_table, PARSE_MPLS); 51 + bpf_tail_call(skb, &jmp_table, PARSE_MPLS); 52 52 break; 53 53 case ETH_P_IP: 54 - bpf_tail_call_static(skb, &jmp_table, PARSE_IP); 54 + bpf_tail_call(skb, &jmp_table, PARSE_IP); 55 55 break; 56 56 case ETH_P_IPV6: 57 - bpf_tail_call_static(skb, &jmp_table, PARSE_IPV6); 57 + bpf_tail_call(skb, &jmp_table, PARSE_IPV6); 58 58 break; 59 59 } 60 60 }
+1
scripts/bpf_helpers_doc.py
··· 453 453 'struct bpf_perf_event_data', 454 454 'struct bpf_perf_event_value', 455 455 'struct bpf_pidns_info', 456 + 'struct bpf_redir_neigh', 456 457 'struct bpf_sk_lookup', 457 458 'struct bpf_sock', 458 459 'struct bpf_sock_addr',
+18 -4
tools/include/uapi/linux/bpf.h
··· 3677 3677 * Return 3678 3678 * The id is returned or 0 in case the id could not be retrieved. 3679 3679 * 3680 - * long bpf_redirect_neigh(u32 ifindex, u64 flags) 3680 + * long bpf_redirect_neigh(u32 ifindex, struct bpf_redir_neigh *params, int plen, u64 flags) 3681 3681 * Description 3682 3682 * Redirect the packet to another net device of index *ifindex* 3683 3683 * and fill in L2 addresses from neighboring subsystem. This helper 3684 3684 * is somewhat similar to **bpf_redirect**\ (), except that it 3685 3685 * populates L2 addresses as well, meaning, internally, the helper 3686 - * performs a FIB lookup based on the skb's networking header to 3687 - * get the address of the next hop and then relies on the neighbor 3688 - * lookup for the L2 address of the nexthop. 3686 + * relies on the neighbor lookup for the L2 address of the nexthop. 3687 + * 3688 + * The helper will perform a FIB lookup based on the skb's 3689 + * networking header to get the address of the next hop, unless 3690 + * this is supplied by the caller in the *params* argument. The 3691 + * *plen* argument indicates the len of *params* and should be set 3692 + * to 0 if *params* is NULL. 3689 3693 * 3690 3694 * The *flags* argument is reserved and must be 0. The helper is 3691 3695 * currently only supported for tc BPF program types, and enabled ··· 4908 4904 __be16 h_vlan_TCI; 4909 4905 __u8 smac[6]; /* ETH_ALEN */ 4910 4906 __u8 dmac[6]; /* ETH_ALEN */ 4907 + }; 4908 + 4909 + struct bpf_redir_neigh { 4910 + /* network family for lookup (AF_INET, AF_INET6) */ 4911 + __u32 nh_family; 4912 + /* network address of nexthop; skips fib lookup to find gateway */ 4913 + union { 4914 + __be32 ipv4_nh; 4915 + __u32 ipv6_nh[4]; /* in6_addr; network order */ 4916 + }; 4911 4917 }; 4912 4918 4913 4919 enum bpf_task_fd_type {
+2
tools/lib/bpf/bpf_helpers.h
··· 72 72 /* 73 73 * Helper function to perform a tail call with a constant/immediate map slot. 74 74 */ 75 + #if __clang_major__ >= 8 && defined(__bpf__) 75 76 static __always_inline void 76 77 bpf_tail_call_static(void *ctx, const void *map, const __u32 slot) 77 78 { ··· 99 98 :: [ctx]"r"(ctx), [map]"r"(map), [slot]"i"(slot) 100 99 : "r0", "r1", "r2", "r3", "r4", "r5"); 101 100 } 101 + #endif 102 102 103 103 /* 104 104 * Helper structure used by eBPF C program
+39 -18
tools/testing/selftests/bpf/prog_tests/ksyms_btf.c
··· 5 5 #include <bpf/libbpf.h> 6 6 #include <bpf/btf.h> 7 7 #include "test_ksyms_btf.skel.h" 8 + #include "test_ksyms_btf_null_check.skel.h" 8 9 9 10 static int duration; 10 11 11 - void test_ksyms_btf(void) 12 + static void test_basic(void) 12 13 { 13 14 __u64 runqueues_addr, bpf_prog_active_addr; 14 15 __u32 this_rq_cpu; 15 16 int this_bpf_prog_active; 16 17 struct test_ksyms_btf *skel = NULL; 17 18 struct test_ksyms_btf__data *data; 18 - struct btf *btf; 19 - int percpu_datasec; 20 19 int err; 21 20 22 21 err = kallsyms_find("runqueues", &runqueues_addr); ··· 29 30 return; 30 31 if (CHECK(err == -ENOENT, "ksym_find", "symbol 'bpf_prog_active' not found\n")) 31 32 return; 32 - 33 - btf = libbpf_find_kernel_btf(); 34 - if (CHECK(IS_ERR(btf), "btf_exists", "failed to load kernel BTF: %ld\n", 35 - PTR_ERR(btf))) 36 - return; 37 - 38 - percpu_datasec = btf__find_by_name_kind(btf, ".data..percpu", 39 - BTF_KIND_DATASEC); 40 - if (percpu_datasec < 0) { 41 - printf("%s:SKIP:no PERCPU DATASEC in kernel btf\n", 42 - __func__); 43 - test__skip(); 44 - goto cleanup; 45 - } 46 33 47 34 skel = test_ksyms_btf__open_and_load(); 48 35 if (CHECK(!skel, "skel_open", "failed to open and load skeleton\n")) ··· 68 83 data->out__bpf_prog_active); 69 84 70 85 cleanup: 71 - btf__free(btf); 72 86 test_ksyms_btf__destroy(skel); 87 + } 88 + 89 + static void test_null_check(void) 90 + { 91 + struct test_ksyms_btf_null_check *skel; 92 + 93 + skel = test_ksyms_btf_null_check__open_and_load(); 94 + CHECK(skel, "skel_open", "unexpected load of a prog missing null check\n"); 95 + 96 + test_ksyms_btf_null_check__destroy(skel); 97 + } 98 + 99 + void test_ksyms_btf(void) 100 + { 101 + int percpu_datasec; 102 + struct btf *btf; 103 + 104 + btf = libbpf_find_kernel_btf(); 105 + if (CHECK(IS_ERR(btf), "btf_exists", "failed to load kernel BTF: %ld\n", 106 + PTR_ERR(btf))) 107 + return; 108 + 109 + percpu_datasec = btf__find_by_name_kind(btf, ".data..percpu", 110 + BTF_KIND_DATASEC); 111 + 
btf__free(btf); 112 + if (percpu_datasec < 0) { 113 + printf("%s:SKIP:no PERCPU DATASEC in kernel btf\n", 114 + __func__); 115 + test__skip(); 116 + return; 117 + } 118 + 119 + if (test__start_subtest("basic")) 120 + test_basic(); 121 + 122 + if (test__start_subtest("null_check")) 123 + test_null_check(); 73 124 }
+31
tools/testing/selftests/bpf/progs/test_ksyms_btf_null_check.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2020 Facebook */ 3 + 4 + #include "vmlinux.h" 5 + 6 + #include <bpf/bpf_helpers.h> 7 + 8 + extern const struct rq runqueues __ksym; /* struct type global var. */ 9 + extern const int bpf_prog_active __ksym; /* int type global var. */ 10 + 11 + SEC("raw_tp/sys_enter") 12 + int handler(const void *ctx) 13 + { 14 + struct rq *rq; 15 + int *active; 16 + __u32 cpu; 17 + 18 + cpu = bpf_get_smp_processor_id(); 19 + rq = (struct rq *)bpf_per_cpu_ptr(&runqueues, cpu); 20 + active = (int *)bpf_per_cpu_ptr(&bpf_prog_active, cpu); 21 + if (active) { 22 + /* READ_ONCE */ 23 + *(volatile int *)active; 24 + /* !rq has not been tested, so verifier should reject. */ 25 + *(volatile int *)(&rq->cpu); 26 + } 27 + 28 + return 0; 29 + } 30 + 31 + char _license[] SEC("license") = "GPL";
+3 -2
tools/testing/selftests/bpf/progs/test_tc_neigh.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 + #include <stddef.h> 2 3 #include <stdint.h> 3 4 #include <stdbool.h> 4 5 ··· 119 118 if (bpf_skb_store_bytes(skb, 0, &zero, sizeof(zero), 0) < 0) 120 119 return TC_ACT_SHOT; 121 120 122 - return bpf_redirect_neigh(get_dev_ifindex(dev_src), 0); 121 + return bpf_redirect_neigh(get_dev_ifindex(dev_src), NULL, 0, 0); 123 122 } 124 123 125 124 SEC("src_ingress") int tc_src(struct __sk_buff *skb) ··· 143 142 if (bpf_skb_store_bytes(skb, 0, &zero, sizeof(zero), 0) < 0) 144 143 return TC_ACT_SHOT; 145 144 146 - return bpf_redirect_neigh(get_dev_ifindex(dev_dst), 0); 145 + return bpf_redirect_neigh(get_dev_ifindex(dev_dst), NULL, 0, 0); 147 146 } 148 147 149 148 char __license[] SEC("license") = "GPL";
+155
tools/testing/selftests/bpf/progs/test_tc_neigh_fib.c
// SPDX-License-Identifier: GPL-2.0
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#include <linux/bpf.h>
#include <linux/stddef.h>
#include <linux/pkt_cls.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/ipv6.h>

#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#ifndef ctx_ptr
# define ctx_ptr(field)	(void *)(long)(field)
#endif

#define AF_INET		2
#define AF_INET6	10

static __always_inline int fill_fib_params_v4(struct __sk_buff *skb,
					      struct bpf_fib_lookup *fib_params)
{
	void *data_end = ctx_ptr(skb->data_end);
	void *data = ctx_ptr(skb->data);
	struct iphdr *ip4h;

	if (data + sizeof(struct ethhdr) > data_end)
		return -1;

	ip4h = (struct iphdr *)(data + sizeof(struct ethhdr));
	if ((void *)(ip4h + 1) > data_end)
		return -1;

	fib_params->family = AF_INET;
	fib_params->tos = ip4h->tos;
	fib_params->l4_protocol = ip4h->protocol;
	fib_params->sport = 0;
	fib_params->dport = 0;
	fib_params->tot_len = bpf_ntohs(ip4h->tot_len);
	fib_params->ipv4_src = ip4h->saddr;
	fib_params->ipv4_dst = ip4h->daddr;

	return 0;
}

static __always_inline int fill_fib_params_v6(struct __sk_buff *skb,
					      struct bpf_fib_lookup *fib_params)
{
	struct in6_addr *src = (struct in6_addr *)fib_params->ipv6_src;
	struct in6_addr *dst = (struct in6_addr *)fib_params->ipv6_dst;
	void *data_end = ctx_ptr(skb->data_end);
	void *data = ctx_ptr(skb->data);
	struct ipv6hdr *ip6h;

	if (data + sizeof(struct ethhdr) > data_end)
		return -1;

	ip6h = (struct ipv6hdr *)(data + sizeof(struct ethhdr));
	if ((void *)(ip6h + 1) > data_end)
		return -1;

	fib_params->family = AF_INET6;
	fib_params->flowinfo = 0;
	fib_params->l4_protocol = ip6h->nexthdr;
	fib_params->sport = 0;
	fib_params->dport = 0;
	fib_params->tot_len = bpf_ntohs(ip6h->payload_len);
	*src = ip6h->saddr;
	*dst = ip6h->daddr;

	return 0;
}

SEC("chk_egress") int tc_chk(struct __sk_buff *skb)
{
	void *data_end = ctx_ptr(skb->data_end);
	void *data = ctx_ptr(skb->data);
	__u32 *raw = data;

	if (data + sizeof(struct ethhdr) > data_end)
		return TC_ACT_SHOT;

	return !raw[0] && !raw[1] && !raw[2] ? TC_ACT_SHOT : TC_ACT_OK;
}

static __always_inline int tc_redir(struct __sk_buff *skb)
{
	struct bpf_fib_lookup fib_params = { .ifindex = skb->ingress_ifindex };
	__u8 zero[ETH_ALEN * 2];
	int ret = -1;

	switch (skb->protocol) {
	case __bpf_constant_htons(ETH_P_IP):
		ret = fill_fib_params_v4(skb, &fib_params);
		break;
	case __bpf_constant_htons(ETH_P_IPV6):
		ret = fill_fib_params_v6(skb, &fib_params);
		break;
	}

	if (ret)
		return TC_ACT_OK;

	ret = bpf_fib_lookup(skb, &fib_params, sizeof(fib_params), 0);
	if (ret == BPF_FIB_LKUP_RET_NOT_FWDED || ret < 0)
		return TC_ACT_OK;

	__builtin_memset(&zero, 0, sizeof(zero));
	if (bpf_skb_store_bytes(skb, 0, &zero, sizeof(zero), 0) < 0)
		return TC_ACT_SHOT;

	if (ret == BPF_FIB_LKUP_RET_NO_NEIGH) {
		struct bpf_redir_neigh nh_params = {};

		nh_params.nh_family = fib_params.family;
		__builtin_memcpy(&nh_params.ipv6_nh, &fib_params.ipv6_dst,
				 sizeof(nh_params.ipv6_nh));

		return bpf_redirect_neigh(fib_params.ifindex, &nh_params,
					  sizeof(nh_params), 0);

	} else if (ret == BPF_FIB_LKUP_RET_SUCCESS) {
		void *data_end = ctx_ptr(skb->data_end);
		struct ethhdr *eth = ctx_ptr(skb->data);

		if (eth + 1 > data_end)
			return TC_ACT_SHOT;

		__builtin_memcpy(eth->h_dest, fib_params.dmac, ETH_ALEN);
		__builtin_memcpy(eth->h_source, fib_params.smac, ETH_ALEN);

		return bpf_redirect(fib_params.ifindex, 0);
	}

	return TC_ACT_SHOT;
}

/* these are identical, but keep them separate for compatibility with the
 * section names expected by test_tc_redirect.sh
 */
SEC("dst_ingress") int tc_dst(struct __sk_buff *skb)
{
	return tc_redir(skb);
}

SEC("src_ingress") int tc_src(struct __sk_buff *skb)
{
	return tc_redir(skb);
}

char __license[] SEC("license") = "GPL";
+15 -3
tools/testing/selftests/bpf/test_tc_redirect.sh
···
 	{ echo >&2 "timeout is not available"; exit 1; }
 command -v ping >/dev/null 2>&1 || \
 	{ echo >&2 "ping is not available"; exit 1; }
-command -v ping6 >/dev/null 2>&1 || \
-	{ echo >&2 "ping6 is not available"; exit 1; }
+if command -v ping6 >/dev/null 2>&1; then PING6=ping6; else PING6=ping; fi
 command -v perl >/dev/null 2>&1 || \
 	{ echo >&2 "perl is not available"; exit 1; }
 command -v jq >/dev/null 2>&1 || \
···
 echo -e "${TEST}: ${GREEN}PASS${NC}"

 TEST="ICMPv6 connectivity test"
-ip netns exec ${NS_SRC} ping6 $PING_ARG ${IP6_DST}
+ip netns exec ${NS_SRC} $PING6 $PING_ARG ${IP6_DST}
 if [ $? -ne 0 ]; then
 	echo -e "${TEST}: ${RED}FAIL${NC}"
 	exit 1
···
 netns_setup_bpf()
 {
 	local obj=$1
+	local use_forwarding=${2:-0}

 	ip netns exec ${NS_FWD} tc qdisc add dev veth_src_fwd clsact
 	ip netns exec ${NS_FWD} tc filter add dev veth_src_fwd ingress bpf da obj $obj sec src_ingress
···
 	ip netns exec ${NS_FWD} tc qdisc add dev veth_dst_fwd clsact
 	ip netns exec ${NS_FWD} tc filter add dev veth_dst_fwd ingress bpf da obj $obj sec dst_ingress
 	ip netns exec ${NS_FWD} tc filter add dev veth_dst_fwd egress bpf da obj $obj sec chk_egress
+
+	if [ "$use_forwarding" -eq "1" ]; then
+		# bpf_fib_lookup() checks if forwarding is enabled
+		ip netns exec ${NS_FWD} sysctl -w net.ipv4.ip_forward=1
+		ip netns exec ${NS_FWD} sysctl -w net.ipv6.conf.veth_dst_fwd.forwarding=1
+		ip netns exec ${NS_FWD} sysctl -w net.ipv6.conf.veth_src_fwd.forwarding=1
+		return 0
+	fi

 	veth_src=$(ip netns exec ${NS_FWD} cat /sys/class/net/veth_src_fwd/ifindex)
 	veth_dst=$(ip netns exec ${NS_FWD} cat /sys/class/net/veth_dst_fwd/ifindex)
···
 netns_setup
 netns_setup_bpf test_tc_neigh.o
+netns_test_connectivity
+netns_cleanup
+netns_setup
+netns_setup_bpf test_tc_neigh_fib.o 1
 netns_test_connectivity
 netns_cleanup
 netns_setup
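The ping6 hunk above swaps a hard failure for a fallback: prefer the dedicated binary when installed, otherwise substitute another tool instead of aborting the test. A small sketch of that pattern (`pick_tool` and `no-such-cmd-xyz` are illustrative names, not part of the script):

```shell
#!/bin/sh
# Prefer the first tool if it resolves via command -v, else fall back
# to the second. Same shape as the PING6=ping6/PING6=ping selection.
pick_tool() {
	if command -v "$1" >/dev/null 2>&1; then
		echo "$1"
	else
		echo "$2"
	fi
}

pick_tool sh no-such-cmd-xyz	# sh exists, prints "sh"
pick_tool no-such-cmd-xyz sh	# first is missing, falls back, prints "sh"
```

Using `command -v` keeps the check POSIX-portable, unlike `which`, whose exit status varies across distributions.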
+25
tools/testing/selftests/bpf/verifier/sock.c
···
 	.prog_type = BPF_PROG_TYPE_SK_REUSEPORT,
 	.result = ACCEPT,
 },
+{
+	"mark null check on return value of bpf_skc_to helpers",
+	.insns = {
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+	BPF_EMIT_CALL(BPF_FUNC_skc_to_tcp_sock),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
+	BPF_EMIT_CALL(BPF_FUNC_skc_to_tcp_request_sock),
+	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_8, 0, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_7, 0),
+	BPF_EXIT_INSN(),
+	},
+	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+	.result = REJECT,
+	.errstr = "invalid mem access",
+	.result_unpriv = REJECT,
+	.errstr_unpriv = "unknown func",
+},
+1
tools/testing/selftests/net/config
···
 CONFIG_TRACEPOINTS=y
 CONFIG_NET_DROP_MONITOR=m
 CONFIG_NETDEVSIM=m
+CONFIG_NET_FOU=m
+10
tools/testing/selftests/net/forwarding/vxlan_asymmetric.sh
···
 	bridge fdb add 00:00:5e:00:01:01 dev br1 self local vlan 10
 	bridge fdb add 00:00:5e:00:01:01 dev br1 self local vlan 20
+
+	sysctl_set net.ipv4.conf.all.rp_filter 0
+	sysctl_set net.ipv4.conf.vlan10-v.rp_filter 0
+	sysctl_set net.ipv4.conf.vlan20-v.rp_filter 0
 }

 switch_destroy()
 {
+	sysctl_restore net.ipv4.conf.all.rp_filter
+
 	bridge fdb del 00:00:5e:00:01:01 dev br1 self local vlan 20
 	bridge fdb del 00:00:5e:00:01:01 dev br1 self local vlan 10
···
 	bridge fdb add 00:00:5e:00:01:01 dev br1 self local vlan 10
 	bridge fdb add 00:00:5e:00:01:01 dev br1 self local vlan 20
+
+	sysctl_set net.ipv4.conf.all.rp_filter 0
+	sysctl_set net.ipv4.conf.vlan10-v.rp_filter 0
+	sysctl_set net.ipv4.conf.vlan20-v.rp_filter 0
 }
 export -f ns_switch_create
+10
tools/testing/selftests/net/forwarding/vxlan_symmetric.sh
···
 	bridge fdb add 00:00:5e:00:01:01 dev br1 self local vlan 10
 	bridge fdb add 00:00:5e:00:01:01 dev br1 self local vlan 20
+
+	sysctl_set net.ipv4.conf.all.rp_filter 0
+	sysctl_set net.ipv4.conf.vlan10-v.rp_filter 0
+	sysctl_set net.ipv4.conf.vlan20-v.rp_filter 0
 }

 switch_destroy()
 {
+	sysctl_restore net.ipv4.conf.all.rp_filter
+
 	bridge fdb del 00:00:5e:00:01:01 dev br1 self local vlan 20
 	bridge fdb del 00:00:5e:00:01:01 dev br1 self local vlan 10
···
 	bridge fdb add 00:00:5e:00:01:01 dev br1 self local vlan 10
 	bridge fdb add 00:00:5e:00:01:01 dev br1 self local vlan 20
+
+	sysctl_set net.ipv4.conf.all.rp_filter 0
+	sysctl_set net.ipv4.conf.vlan10-v.rp_filter 0
+	sysctl_set net.ipv4.conf.vlan20-v.rp_filter 0
 }
 export -f ns_switch_create
+1
tools/testing/selftests/net/mptcp/config
 CONFIG_MPTCP=y
+CONFIG_IPV6=y
 CONFIG_MPTCP_IPV6=y
 CONFIG_INET_DIAG=m
 CONFIG_INET_MPTCP_DIAG=m
+5
tools/testing/selftests/net/rtnetlink.sh
···
 		return $ksft_skip
 	fi

+	if ! /sbin/modprobe -q -n fou; then
+		echo "SKIP: module fou is not found"
+		return $ksft_skip
+	fi
+	/sbin/modprobe -q fou
 	ip -netns "$testns" fou add port 7777 ipproto 47 2>/dev/null
 	if [ $? -ne 0 ];then
 		echo "FAIL: can't add fou port 7777, skipping test"