Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

1) Fix RCU locking in xfrm_local_error(), from Taehee Yoo.

2) Fix return value assignments and thus error checking in
iwl_mvm_start_ap_ibss(), from Johannes Berg.

3) Don't count header length twice in vti4, from Stefano Brivio.

4) Fix deadlock in rt6_age_examine_exception, from Eric Dumazet.

5) Fix out-of-bounds access in nf_sk_lookup_slow{v4,v6}(), from Subash
Abhinov.

6) Check nladdr size in netlink_connect(), from Alexander Potapenko.

7) VF representor SQ numbers are 32 not 16 bits, in mlx5 driver, from
Or Gerlitz.

8) Fix out-of-bounds read in skb_network_protocol(), from Eric Dumazet.

9) The r8169 driver sets its driver data pointer after register_netdev(),
which is too late. Fix from Heiner Kallweit.

10) Fix memory leak in mlx4 driver, from Moshe Shemesh.

11) The multi-VLAN decap fix added a regression when dealing with devices
that lack a MAC header, such as tun. Fix from Toshiaki Makita.

12) Fix integer overflow in dynamic interrupt coalescing code. From Tal
Gilboa.

13) Fix use-after-free in vrf code, from David Ahern.

14) Fix IPv6 route leak between VRFs, also from David Ahern.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (81 commits)
net: mvneta: fix enable of all initialized RXQs
net/ipv6: Fix route leaking between VRFs
vrf: Fix use after free and double free in vrf_finish_output
ipv6: sr: fix seg6 encap performances with TSO enabled
net/dim: Fix int overflow
vlan: Fix vlan insertion for packets without ethernet header
net: Fix untag for vlan packets without ethernet header
atm: iphase: fix spelling mistake: "Receiverd" -> "Received"
vhost: validate log when IOTLB is enabled
qede: Do not drop rx-checksum invalidated packets.
hv_netvsc: enable multicast if necessary
ip_tunnel: Resolve ipsec merge conflict properly.
lan78xx: Crash in lan78xx_writ_reg (Workqueue: events lan78xx_deferred_multicast_write)
qede: Fix barrier usage after tx doorbell write.
vhost: correctly remove wait queue during poll failure
net/mlx4_core: Fix memory leak while delete slave's resources
net/mlx4_en: Fix mixed PFC and Global pause user control requests
net/smc: use announced length in sock_recvmsg()
llc: properly handle dev_queue_xmit() return value
strparser: Fix sign of err codes
...

+1 -1
Documentation/isdn/INTERFACE.CAPI
···
 directions between the application and the hardware driver.
 
 Format and semantics of CAPI messages are specified in the CAPI 2.0 standard.
-This standard is freely available from http://www.capi.org.
+This standard is freely available from https://www.capi.org.
 
 
 2. Driver and Device Registration
+2 -2
Documentation/isdn/README
···
 de.alt.comm.isdn4linux
 
 There is also a well maintained FAQ in English available at
-http://www.mhessler.de/i4lfaq/
+https://www.mhessler.de/i4lfaq/
 It can be viewed online, or downloaded in sgml/text/html format.
 The FAQ can also be viewed online at
-http://www.isdn4linux.de/faq/
+https://www.isdn4linux.de/faq/i4lfaq.html
 or downloaded from
 ftp://ftp.isdn4linux.de/pub/isdn4linux/FAQ/
 
+2 -2
Documentation/isdn/README.FAQ
···
 
 In case you just want to see the FAQ online, or download the newest version,
 you can have a look at my website:
-http://www.mhessler.de/i4lfaq/ (view + download)
+https://www.mhessler.de/i4lfaq/ (view + download)
 or:
-http://www.isdn4linux.de/faq/ (view)
+https://www.isdn4linux.de/faq/4lfaq.html (view)
 
 As the extension tells, the FAQ is in SGML format, and you can convert it
 into text/html/... format by using the sgml2txt/sgml2html/... tools.
+9 -7
Documentation/isdn/README.gigaset
···
 T-Com Sinus 721 data
 Chicago 390 USB (KPN)
 
-See also http://www.erbze.info/sinus_gigaset.htm and
-http://gigaset307x.sourceforge.net/
+See also http://www.erbze.info/sinus_gigaset.htm
+  (archived at https://web.archive.org/web/20100717020421/http://www.erbze.info:80/sinus_gigaset.htm ) and
+http://gigaset307x.sourceforge.net/
 
 We had also reports from users of Gigaset M105 who could use the drivers
 with SX 100 and CX 100 ISDN bases (only in unimodem mode, see section 2.5.)
···
 to use CAPI 2.0 or ISDN4Linux for ISDN connections (voice or data).
 
 There are some user space tools available at
-http://sourceforge.net/projects/gigaset307x/
+https://sourceforge.net/projects/gigaset307x/
 which provide access to additional device specific functions like SMS,
 phonebook or call journal.
 
···
 You can use some configuration tool of your distribution to configure this
 "modem" or configure pppd/wvdial manually. There are some example ppp
 configuration files and chat scripts in the gigaset-VERSION/ppp directory
-in the driver packages from http://sourceforge.net/projects/gigaset307x/.
+in the driver packages from https://sourceforge.net/projects/gigaset307x/.
 Please note that the USB drivers are not able to change the state of the
 control lines. This means you must use "Stupid Mode" if you are using
 wvdial or you should use the nocrtscts option of pppd.
···
 ---------------------------
 If you can't solve problems with the driver on your own, feel free to
 use one of the forums, bug trackers, or mailing lists on
-http://sourceforge.net/projects/gigaset307x
+https://sourceforge.net/projects/gigaset307x
 or write an electronic mail to the maintainers.
 
 Try to provide as much information as possible, such as
···
 4. Links, other software
 ---------------------
 - Sourceforge project developing this driver and associated tools
-  http://sourceforge.net/projects/gigaset307x
+  https://sourceforge.net/projects/gigaset307x
 - Yahoo! Group on the Siemens Gigaset family of devices
-  http://de.groups.yahoo.com/group/Siemens-Gigaset
+  https://de.groups.yahoo.com/group/Siemens-Gigaset
 - Siemens Gigaset/T-Sinus compatibility table
   http://www.erbze.info/sinus_gigaset.htm
+  (archived at https://web.archive.org/web/20100717020421/http://www.erbze.info:80/sinus_gigaset.htm )
 
 
 5. Credits
+1 -1
drivers/atm/iphase.c
···
 	" Size of Tx Buffer : %u\n"
 	" Number of Rx Buffer: %u\n"
 	" Size of Rx Buffer : %u\n"
-	" Packets Receiverd : %u\n"
+	" Packets Received : %u\n"
 	" Packets Transmitted: %u\n"
 	" Cells Received : %u\n"
 	" Cells Transmitted : %u\n"
+37 -36
drivers/net/bonding/bond_main.c
···
 		goto err_close;
 	}
 
-	/* If the mode uses primary, then the following is handled by
-	 * bond_change_active_slave().
-	 */
-	if (!bond_uses_primary(bond)) {
-		/* set promiscuity level to new slave */
-		if (bond_dev->flags & IFF_PROMISC) {
-			res = dev_set_promiscuity(slave_dev, 1);
-			if (res)
-				goto err_close;
-		}
-
-		/* set allmulti level to new slave */
-		if (bond_dev->flags & IFF_ALLMULTI) {
-			res = dev_set_allmulti(slave_dev, 1);
-			if (res)
-				goto err_close;
-		}
-
-		netif_addr_lock_bh(bond_dev);
-
-		dev_mc_sync_multiple(slave_dev, bond_dev);
-		dev_uc_sync_multiple(slave_dev, bond_dev);
-
-		netif_addr_unlock_bh(bond_dev);
-	}
-
-	if (BOND_MODE(bond) == BOND_MODE_8023AD) {
-		/* add lacpdu mc addr to mc list */
-		u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
-
-		dev_mc_add(slave_dev, lacpdu_multicast);
-	}
-
 	res = vlan_vids_add_by_dev(slave_dev, bond_dev);
 	if (res) {
 		netdev_err(bond_dev, "Couldn't add bond vlan ids to %s\n",
···
 		goto err_upper_unlink;
 	}
 
+	/* If the mode uses primary, then the following is handled by
+	 * bond_change_active_slave().
+	 */
+	if (!bond_uses_primary(bond)) {
+		/* set promiscuity level to new slave */
+		if (bond_dev->flags & IFF_PROMISC) {
+			res = dev_set_promiscuity(slave_dev, 1);
+			if (res)
+				goto err_sysfs_del;
+		}
+
+		/* set allmulti level to new slave */
+		if (bond_dev->flags & IFF_ALLMULTI) {
+			res = dev_set_allmulti(slave_dev, 1);
+			if (res) {
+				if (bond_dev->flags & IFF_PROMISC)
+					dev_set_promiscuity(slave_dev, -1);
+				goto err_sysfs_del;
+			}
+		}
+
+		netif_addr_lock_bh(bond_dev);
+		dev_mc_sync_multiple(slave_dev, bond_dev);
+		dev_uc_sync_multiple(slave_dev, bond_dev);
+		netif_addr_unlock_bh(bond_dev);
+
+		if (BOND_MODE(bond) == BOND_MODE_8023AD) {
+			/* add lacpdu mc addr to mc list */
+			u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
+
+			dev_mc_add(slave_dev, lacpdu_multicast);
+		}
+	}
+
 	bond->slave_cnt++;
 	bond_compute_features(bond);
 	bond_set_carrier(bond);
···
 	return 0;
 
 /* Undo stages on error */
+err_sysfs_del:
+	bond_sysfs_slave_del(new_slave);
+
 err_upper_unlink:
 	bond_upper_dev_unlink(bond, new_slave);
···
 	netdev_rx_handler_unregister(slave_dev);
 
 err_detach:
-	if (!bond_uses_primary(bond))
-		bond_hw_addr_flush(bond_dev, slave_dev);
-
 	vlan_vids_del_by_dev(slave_dev, bond_dev);
 	if (rcu_access_pointer(bond->primary_slave) == new_slave)
 		RCU_INIT_POINTER(bond->primary_slave, NULL);
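Worth calling out in the bonding change above: the rx-mode setup now runs only after the failure-prone registration steps, and the new err_sysfs_del label keeps the error path an exact mirror of the setup path. A minimal sketch of the goto-unwind idiom this restores, using hypothetical setup_*/undo_* stubs rather than the real bonding calls:

/* Illustrative goto-unwind idiom only; setup_*()/undo_*() are
 * hypothetical stand-ins, not bonding functions.
 */
static int setup_a(void *d) { return 0; }
static int setup_b(void *d) { return 0; }
static int setup_c(void *d) { return 0; }
static void undo_a(void *d) { }
static void undo_b(void *d) { }

static int example_attach(void *dev)
{
	int err;

	err = setup_a(dev);
	if (err)
		return err;		/* nothing to unwind yet */

	err = setup_b(dev);
	if (err)
		goto err_undo_a;

	err = setup_c(dev);
	if (err)
		goto err_undo_b;

	return 0;

	/* Undo stages in exact reverse order of what succeeded */
err_undo_b:
	undo_b(dev);
err_undo_a:
	undo_a(dev);
	return err;
}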
+1 -1
drivers/net/dsa/mt7530.c
···
 	{ .compatible = "mediatek,mt7530" },
 	{ /* sentinel */ },
 };
+MODULE_DEVICE_TABLE(of, mt7530_of_match);
 
 static struct mdio_driver mt7530_mdio_driver = {
 	.probe = mt7530_probe,
···
 MODULE_AUTHOR("Sean Wang <sean.wang@mediatek.com>");
 MODULE_DESCRIPTION("Driver for Mediatek MT7530 Switch");
 MODULE_LICENSE("GPL");
-MODULE_ALIAS("platform:mediatek-mt7530");
+1
drivers/net/ethernet/marvell/mvneta.c
···
 	}
 	mvreg_write(pp, MVNETA_TXQ_CMD, q_map);
 
+	q_map = 0;
 	/* Enable all initialized RXQs. */
 	for (queue = 0; queue < rxq_number; queue++) {
 		struct mvneta_rx_queue *rxq = &pp->rxqs[queue];
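The one-line mvneta fix is easy to misread: q_map is accumulated with |= by the TXQ loop just above, so without the reset the RXQ enable mask silently inherits the TXQ bits. A contrived userspace illustration of the same accumulator-reuse bug:

#include <stdio.h>

int main(void)
{
	unsigned int q_map = 0;
	int q;

	for (q = 0; q < 4; q++)		/* TX queues 0-3 */
		q_map |= 1u << q;
	printf("tx mask: 0x%x\n", q_map);	/* 0xf */

	/* BUG if q_map is not reset here: the rx mask keeps the tx bits */
	q_map = 0;
	for (q = 0; q < 2; q++)		/* RX queues 0-1 */
		q_map |= 1u << q;
	printf("rx mask: 0x%x\n", q_map);	/* 0x3, not 0xf */
	return 0;
}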
+41 -31
drivers/net/ethernet/mellanox/mlx4/en_dcb_nl.c
···
 static u8 mlx4_en_dcbnl_set_all(struct net_device *netdev)
 {
 	struct mlx4_en_priv *priv = netdev_priv(netdev);
+	struct mlx4_en_port_profile *prof = priv->prof;
 	struct mlx4_en_dev *mdev = priv->mdev;
+	u8 tx_pause, tx_ppp, rx_pause, rx_ppp;
 
 	if (!(priv->dcbx_cap & DCB_CAP_DCBX_VER_CEE))
 		return 1;
 
 	if (priv->cee_config.pfc_state) {
 		int tc;
+		rx_ppp = prof->rx_ppp;
+		tx_ppp = prof->tx_ppp;
 
-		priv->prof->rx_pause = 0;
-		priv->prof->tx_pause = 0;
 		for (tc = 0; tc < CEE_DCBX_MAX_PRIO; tc++) {
 			u8 tc_mask = 1 << tc;
 
 			switch (priv->cee_config.dcb_pfc[tc]) {
 			case pfc_disabled:
-				priv->prof->tx_ppp &= ~tc_mask;
-				priv->prof->rx_ppp &= ~tc_mask;
+				tx_ppp &= ~tc_mask;
+				rx_ppp &= ~tc_mask;
 				break;
 			case pfc_enabled_full:
-				priv->prof->tx_ppp |= tc_mask;
-				priv->prof->rx_ppp |= tc_mask;
+				tx_ppp |= tc_mask;
+				rx_ppp |= tc_mask;
 				break;
 			case pfc_enabled_tx:
-				priv->prof->tx_ppp |= tc_mask;
-				priv->prof->rx_ppp &= ~tc_mask;
+				tx_ppp |= tc_mask;
+				rx_ppp &= ~tc_mask;
 				break;
 			case pfc_enabled_rx:
-				priv->prof->tx_ppp &= ~tc_mask;
-				priv->prof->rx_ppp |= tc_mask;
+				tx_ppp &= ~tc_mask;
+				rx_ppp |= tc_mask;
 				break;
 			default:
 				break;
 			}
 		}
-		en_dbg(DRV, priv, "Set pfc on\n");
+		rx_pause = !!(rx_ppp || tx_ppp) ? 0 : prof->rx_pause;
+		tx_pause = !!(rx_ppp || tx_ppp) ? 0 : prof->tx_pause;
 	} else {
-		priv->prof->rx_pause = 1;
-		priv->prof->tx_pause = 1;
-		en_dbg(DRV, priv, "Set pfc off\n");
+		rx_ppp = 0;
+		tx_ppp = 0;
+		rx_pause = prof->rx_pause;
+		tx_pause = prof->tx_pause;
 	}
 
 	if (mlx4_SET_PORT_general(mdev->dev, priv->port,
 				  priv->rx_skb_size + ETH_FCS_LEN,
-				  priv->prof->tx_pause,
-				  priv->prof->tx_ppp,
-				  priv->prof->rx_pause,
-				  priv->prof->rx_ppp)) {
+				  tx_pause, tx_ppp, rx_pause, rx_ppp)) {
 		en_err(priv, "Failed setting pause params\n");
 		return 1;
 	}
+
+	prof->tx_ppp = tx_ppp;
+	prof->rx_ppp = rx_ppp;
+	prof->tx_pause = tx_pause;
+	prof->rx_pause = rx_pause;
 
 	return 0;
 }
···
 	struct mlx4_en_priv *priv = netdev_priv(dev);
 	struct mlx4_en_port_profile *prof = priv->prof;
 	struct mlx4_en_dev *mdev = priv->mdev;
+	u32 tx_pause, tx_ppp, rx_pause, rx_ppp;
 	int err;
 
 	en_dbg(DRV, priv, "cap: 0x%x en: 0x%x mbc: 0x%x delay: %d\n",
···
 	       pfc->mbc,
 	       pfc->delay);
 
-	prof->rx_pause = !pfc->pfc_en;
-	prof->tx_pause = !pfc->pfc_en;
-	prof->rx_ppp = pfc->pfc_en;
-	prof->tx_ppp = pfc->pfc_en;
+	rx_pause = prof->rx_pause && !pfc->pfc_en;
+	tx_pause = prof->tx_pause && !pfc->pfc_en;
+	rx_ppp = pfc->pfc_en;
+	tx_ppp = pfc->pfc_en;
 
 	err = mlx4_SET_PORT_general(mdev->dev, priv->port,
 				    priv->rx_skb_size + ETH_FCS_LEN,
-				    prof->tx_pause,
-				    prof->tx_ppp,
-				    prof->rx_pause,
-				    prof->rx_ppp);
-	if (err)
+				    tx_pause, tx_ppp, rx_pause, rx_ppp);
+	if (err) {
 		en_err(priv, "Failed setting pause params\n");
-	else
-		mlx4_en_update_pfc_stats_bitmap(mdev->dev, &priv->stats_bitmap,
-						prof->rx_ppp, prof->rx_pause,
-						prof->tx_ppp, prof->tx_pause);
+		return err;
+	}
+
+	mlx4_en_update_pfc_stats_bitmap(mdev->dev, &priv->stats_bitmap,
+					rx_ppp, rx_pause, tx_ppp, tx_pause);
+
+	prof->tx_ppp = tx_ppp;
+	prof->rx_ppp = rx_ppp;
+	prof->rx_pause = rx_pause;
+	prof->tx_pause = tx_pause;
 
 	return err;
 }
+19 -14
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
···
 {
 	struct mlx4_en_priv *priv = netdev_priv(dev);
 	struct mlx4_en_dev *mdev = priv->mdev;
+	u8 tx_pause, tx_ppp, rx_pause, rx_ppp;
 	int err;
 
 	if (pause->autoneg)
 		return -EINVAL;
 
-	priv->prof->tx_pause = pause->tx_pause != 0;
-	priv->prof->rx_pause = pause->rx_pause != 0;
+	tx_pause = !!(pause->tx_pause);
+	rx_pause = !!(pause->rx_pause);
+	rx_ppp = priv->prof->rx_ppp && !(tx_pause || rx_pause);
+	tx_ppp = priv->prof->tx_ppp && !(tx_pause || rx_pause);
+
 	err = mlx4_SET_PORT_general(mdev->dev, priv->port,
 				    priv->rx_skb_size + ETH_FCS_LEN,
-				    priv->prof->tx_pause,
-				    priv->prof->tx_ppp,
-				    priv->prof->rx_pause,
-				    priv->prof->rx_ppp);
-	if (err)
-		en_err(priv, "Failed setting pause params\n");
-	else
-		mlx4_en_update_pfc_stats_bitmap(mdev->dev, &priv->stats_bitmap,
-						priv->prof->rx_ppp,
-						priv->prof->rx_pause,
-						priv->prof->tx_ppp,
-						priv->prof->tx_pause);
+				    tx_pause, tx_ppp, rx_pause, rx_ppp);
+	if (err) {
+		en_err(priv, "Failed setting pause params, err = %d\n", err);
+		return err;
+	}
+
+	mlx4_en_update_pfc_stats_bitmap(mdev->dev, &priv->stats_bitmap,
+					rx_ppp, rx_pause, tx_ppp, tx_pause);
+
+	priv->prof->tx_pause = tx_pause;
+	priv->prof->rx_pause = rx_pause;
+	priv->prof->tx_ppp = tx_ppp;
+	priv->prof->rx_ppp = rx_ppp;
 
 	return err;
 }
+2 -2
drivers/net/ethernet/mellanox/mlx4/en_main.c
···
 		params->udp_rss = 0;
 	}
 	for (i = 1; i <= MLX4_MAX_PORTS; i++) {
-		params->prof[i].rx_pause = 1;
+		params->prof[i].rx_pause = !(pfcrx || pfctx);
 		params->prof[i].rx_ppp = pfcrx;
-		params->prof[i].tx_pause = 1;
+		params->prof[i].tx_pause = !(pfcrx || pfctx);
 		params->prof[i].tx_ppp = pfctx;
 		params->prof[i].tx_ring_size = MLX4_EN_DEF_TX_RING_SIZE;
 		params->prof[i].rx_ring_size = MLX4_EN_DEF_RX_RING_SIZE;
+1
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
···
 					 &tracker->res_tree[RES_FS_RULE]);
 			list_del(&fs_rule->com.list);
 			spin_unlock_irq(mlx4_tlock(dev));
+			kfree(fs_rule->mirr_mbox);
 			kfree(fs_rule);
 			state = 0;
 			break;
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/Kconfig
···
 
 config MLX5_ESWITCH
 	bool "Mellanox Technologies MLX5 SRIOV E-Switch support"
-	depends on MLX5_CORE_EN
+	depends on MLX5_CORE_EN && NET_SWITCHDEV
 	default y
 	---help---
 	  Mellanox Technologies Ethernet SRIOV E-Switch support in ConnectX NIC.
+17
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
···
 	return mlx5e_ethtool_get_coalesce(priv, coal);
 }
 
+#define MLX5E_MAX_COAL_TIME	MLX5_MAX_CQ_PERIOD
+#define MLX5E_MAX_COAL_FRAMES	MLX5_MAX_CQ_COUNT
+
 static void
 mlx5e_set_priv_channels_coalesce(struct mlx5e_priv *priv, struct ethtool_coalesce *coal)
 {
···
 
 	if (!MLX5_CAP_GEN(mdev, cq_moderation))
 		return -EOPNOTSUPP;
+
+	if (coal->tx_coalesce_usecs > MLX5E_MAX_COAL_TIME ||
+	    coal->rx_coalesce_usecs > MLX5E_MAX_COAL_TIME) {
+		netdev_info(priv->netdev, "%s: maximum coalesce time supported is %lu usecs\n",
+			    __func__, MLX5E_MAX_COAL_TIME);
+		return -ERANGE;
+	}
+
+	if (coal->tx_max_coalesced_frames > MLX5E_MAX_COAL_FRAMES ||
+	    coal->rx_max_coalesced_frames > MLX5E_MAX_COAL_FRAMES) {
+		netdev_info(priv->netdev, "%s: maximum coalesced frames supported is %lu\n",
+			    __func__, MLX5E_MAX_COAL_FRAMES);
+		return -ERANGE;
+	}
 
 	mutex_lock(&priv->state_lock);
 	new_channels.params = priv->channels.params;
+5 -8
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
···
 	mlx5_set_port_admin_status(priv->mdev, MLX5_PORT_UP);
 	mutex_unlock(&priv->state_lock);
 
+	if (mlx5e_vxlan_allowed(priv->mdev))
+		udp_tunnel_get_rx_info(netdev);
+
 	return err;
 }
···
 	}
 }
 
-#if IS_ENABLED(CONFIG_NET_SWITCHDEV) && IS_ENABLED(CONFIG_MLX5_ESWITCH)
+#if IS_ENABLED(CONFIG_MLX5_ESWITCH)
 static const struct switchdev_ops mlx5e_switchdev_ops = {
 	.switchdev_port_attr_get = mlx5e_attr_get,
 };
···
 
 	mlx5e_set_netdev_dev_addr(netdev);
 
-#if IS_ENABLED(CONFIG_NET_SWITCHDEV) && IS_ENABLED(CONFIG_MLX5_ESWITCH)
+#if IS_ENABLED(CONFIG_MLX5_ESWITCH)
 	if (MLX5_VPORT_MANAGER(mdev))
 		netdev->switchdev_ops = &mlx5e_switchdev_ops;
 #endif
···
 #ifdef CONFIG_MLX5_CORE_EN_DCB
 	mlx5e_dcbnl_init_app(priv);
 #endif
-	/* Device already registered: sync netdev system state */
-	if (mlx5e_vxlan_allowed(mdev)) {
-		rtnl_lock();
-		udp_tunnel_get_rx_info(netdev);
-		rtnl_unlock();
-	}
 
 	queue_work(priv->wq, &priv->set_rx_mode_work);
 
+19 -15
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
···
 #include "en_tc.h"
 #include "fs_core.h"
 
+#define MLX5E_REP_PARAMS_LOG_SQ_SIZE \
+	max(0x6, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE)
+#define MLX5E_REP_PARAMS_LOG_RQ_SIZE \
+	max(0x6, MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE)
+
 static const char mlx5e_rep_driver_name[] = "mlx5e_rep";
 
 static void mlx5e_rep_get_drvinfo(struct net_device *dev,
···
 
 static int mlx5e_sqs2vport_start(struct mlx5_eswitch *esw,
 				 struct mlx5_eswitch_rep *rep,
-				 u16 *sqns_array, int sqns_num)
+				 u32 *sqns_array, int sqns_num)
 {
 	struct mlx5_flow_handle *flow_rule;
 	struct mlx5e_rep_priv *rpriv;
···
 	struct mlx5e_channel *c;
 	int n, tc, num_sqs = 0;
 	int err = -ENOMEM;
-	u16 *sqs;
+	u32 *sqs;
 
-	sqs = kcalloc(priv->channels.num * priv->channels.params.num_tc, sizeof(u16), GFP_KERNEL);
+	sqs = kcalloc(priv->channels.num * priv->channels.params.num_tc, sizeof(*sqs), GFP_KERNEL);
 	if (!sqs)
 		goto out;
···
 static void mlx5e_rep_neigh_update_init_interval(struct mlx5e_rep_priv *rpriv)
 {
 #if IS_ENABLED(CONFIG_IPV6)
-	unsigned long ipv6_interval = NEIGH_VAR(&ipv6_stub->nd_tbl->parms,
+	unsigned long ipv6_interval = NEIGH_VAR(&nd_tbl.parms,
 						DELAY_PROBE_TIME);
 #else
 	unsigned long ipv6_interval = ~0UL;
···
 	case NETEVENT_NEIGH_UPDATE:
 		n = ptr;
 #if IS_ENABLED(CONFIG_IPV6)
-		if (n->tbl != ipv6_stub->nd_tbl && n->tbl != &arp_tbl)
+		if (n->tbl != &nd_tbl && n->tbl != &arp_tbl)
 #else
 		if (n->tbl != &arp_tbl)
 #endif
···
 	 * done per device delay prob time parameter.
 	 */
 #if IS_ENABLED(CONFIG_IPV6)
-	if (!p->dev || (p->tbl != ipv6_stub->nd_tbl && p->tbl != &arp_tbl))
+	if (!p->dev || (p->tbl != &nd_tbl && p->tbl != &arp_tbl))
 #else
 	if (!p->dev || p->tbl != &arp_tbl)
 #endif
···
 	struct mlx5e_priv *priv = netdev_priv(dev);
 	struct mlx5e_rep_priv *rpriv = priv->ppriv;
 	struct mlx5_eswitch_rep *rep = rpriv->rep;
-	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
 	int err;
 
 	mutex_lock(&priv->state_lock);
···
 	if (err)
 		goto unlock;
 
-	if (!mlx5_eswitch_set_vport_state(esw, rep->vport,
-					  MLX5_ESW_VPORT_ADMIN_STATE_UP))
+	if (!mlx5_modify_vport_admin_state(priv->mdev,
+			MLX5_QUERY_VPORT_STATE_IN_OP_MOD_ESW_VPORT,
+			rep->vport, MLX5_ESW_VPORT_ADMIN_STATE_UP))
 		netif_carrier_on(dev);
 
 unlock:
···
 	struct mlx5e_priv *priv = netdev_priv(dev);
 	struct mlx5e_rep_priv *rpriv = priv->ppriv;
 	struct mlx5_eswitch_rep *rep = rpriv->rep;
-	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
 	int ret;
 
 	mutex_lock(&priv->state_lock);
-	(void)mlx5_eswitch_set_vport_state(esw, rep->vport, MLX5_ESW_VPORT_ADMIN_STATE_DOWN);
+	mlx5_modify_vport_admin_state(priv->mdev,
+			MLX5_QUERY_VPORT_STATE_IN_OP_MOD_ESW_VPORT,
+			rep->vport, MLX5_ESW_VPORT_ADMIN_STATE_DOWN);
 	ret = mlx5e_close_locked(dev);
 	mutex_unlock(&priv->state_lock);
 	return ret;
···
 			MLX5_CQ_PERIOD_MODE_START_FROM_CQE :
 			MLX5_CQ_PERIOD_MODE_START_FROM_EQE;
 
-	params->log_sq_size = MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE;
+	params->log_sq_size = MLX5E_REP_PARAMS_LOG_SQ_SIZE;
 	params->rq_wq_type  = MLX5_WQ_TYPE_LINKED_LIST;
-	params->log_rq_size = MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE;
+	params->log_rq_size = MLX5E_REP_PARAMS_LOG_RQ_SIZE;
 
 	params->rx_dim_enabled = MLX5_CAP_GEN(mdev, cq_moderation);
 	mlx5e_set_rx_cq_mode_params(params, cq_period_mode);
···
 
 	netdev->ethtool_ops = &mlx5e_rep_ethtool_ops;
 
-#ifdef CONFIG_NET_SWITCHDEV
 	netdev->switchdev_ops = &mlx5e_rep_switchdev_ops;
-#endif
 
 	netdev->features |= NETIF_F_VLAN_CHALLENGED | NETIF_F_HW_TC | NETIF_F_NETNS_LOCAL;
 	netdev->hw_features |= NETIF_F_HW_TC;
+10 -10
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
···
 		tbl = &arp_tbl;
 #if IS_ENABLED(CONFIG_IPV6)
 	else if (m_neigh->family == AF_INET6)
-		tbl = ipv6_stub->nd_tbl;
+		tbl = &nd_tbl;
 #endif
 	else
 		return;
···
 	if (err != -EAGAIN)
 		flow->flags |= MLX5E_TC_FLOW_OFFLOADED;
 
-	err = rhashtable_insert_fast(&tc->ht, &flow->node,
-				     tc->ht_params);
-	if (err)
-		goto err_del_rule;
-
-	if (flow->flags & MLX5E_TC_FLOW_ESWITCH &&
+	if (!(flow->flags & MLX5E_TC_FLOW_ESWITCH) ||
 	    !(flow->esw_attr->action & MLX5_FLOW_CONTEXT_ACTION_ENCAP))
 		kvfree(parse_attr);
-	return err;
 
-err_del_rule:
-	mlx5e_tc_del_flow(priv, flow);
+	err = rhashtable_insert_fast(&tc->ht, &flow->node,
+				     tc->ht_params);
+	if (err) {
+		mlx5e_tc_del_flow(priv, flow);
+		kfree(flow);
+	}
+
+	return err;
 
 err_free:
 	kvfree(parse_attr);
+51 -27
drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
···
 						  decap_fib_entry);
 }
 
+static int
+mlxsw_sp_rif_ipip_lb_op(struct mlxsw_sp_rif_ipip_lb *lb_rif,
+			struct mlxsw_sp_vr *ul_vr, bool enable)
+{
+	struct mlxsw_sp_rif_ipip_lb_config lb_cf = lb_rif->lb_config;
+	struct mlxsw_sp_rif *rif = &lb_rif->common;
+	struct mlxsw_sp *mlxsw_sp = rif->mlxsw_sp;
+	char ritr_pl[MLXSW_REG_RITR_LEN];
+	u32 saddr4;
+
+	switch (lb_cf.ul_protocol) {
+	case MLXSW_SP_L3_PROTO_IPV4:
+		saddr4 = be32_to_cpu(lb_cf.saddr.addr4);
+		mlxsw_reg_ritr_pack(ritr_pl, enable, MLXSW_REG_RITR_LOOPBACK_IF,
+				    rif->rif_index, rif->vr_id, rif->dev->mtu);
+		mlxsw_reg_ritr_loopback_ipip4_pack(ritr_pl, lb_cf.lb_ipipt,
+			    MLXSW_REG_RITR_LOOPBACK_IPIP_OPTIONS_GRE_KEY_PRESET,
+			    ul_vr->id, saddr4, lb_cf.okey);
+		break;
+
+	case MLXSW_SP_L3_PROTO_IPV6:
+		return -EAFNOSUPPORT;
+	}
+
+	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ritr), ritr_pl);
+}
+
+static int mlxsw_sp_netdevice_ipip_ol_update_mtu(struct mlxsw_sp *mlxsw_sp,
+						 struct net_device *ol_dev)
+{
+	struct mlxsw_sp_ipip_entry *ipip_entry;
+	struct mlxsw_sp_rif_ipip_lb *lb_rif;
+	struct mlxsw_sp_vr *ul_vr;
+	int err = 0;
+
+	ipip_entry = mlxsw_sp_ipip_entry_find_by_ol_dev(mlxsw_sp, ol_dev);
+	if (ipip_entry) {
+		lb_rif = ipip_entry->ol_lb;
+		ul_vr = &mlxsw_sp->router->vrs[lb_rif->ul_vr_id];
+		err = mlxsw_sp_rif_ipip_lb_op(lb_rif, ul_vr, true);
+		if (err)
+			goto out;
+		lb_rif->common.mtu = ol_dev->mtu;
+	}
+
+out:
+	return err;
+}
+
 static void mlxsw_sp_netdevice_ipip_ol_up_event(struct mlxsw_sp *mlxsw_sp,
 						struct net_device *ol_dev)
 {
···
 		extack = info->extack;
 		return mlxsw_sp_netdevice_ipip_ol_change_event(mlxsw_sp,
 							       ol_dev, extack);
+	case NETDEV_CHANGEMTU:
+		return mlxsw_sp_netdevice_ipip_ol_update_mtu(mlxsw_sp, ol_dev);
 	}
 	return 0;
 }
···
 				 common);
 	rif_lb = mlxsw_sp_rif_ipip_lb_rif(rif);
 	rif_lb->lb_config = params_lb->lb_config;
-}
-
-static int
-mlxsw_sp_rif_ipip_lb_op(struct mlxsw_sp_rif_ipip_lb *lb_rif,
-			struct mlxsw_sp_vr *ul_vr, bool enable)
-{
-	struct mlxsw_sp_rif_ipip_lb_config lb_cf = lb_rif->lb_config;
-	struct mlxsw_sp_rif *rif = &lb_rif->common;
-	struct mlxsw_sp *mlxsw_sp = rif->mlxsw_sp;
-	char ritr_pl[MLXSW_REG_RITR_LEN];
-	u32 saddr4;
-
-	switch (lb_cf.ul_protocol) {
-	case MLXSW_SP_L3_PROTO_IPV4:
-		saddr4 = be32_to_cpu(lb_cf.saddr.addr4);
-		mlxsw_reg_ritr_pack(ritr_pl, enable, MLXSW_REG_RITR_LOOPBACK_IF,
-				    rif->rif_index, rif->vr_id, rif->dev->mtu);
-		mlxsw_reg_ritr_loopback_ipip4_pack(ritr_pl, lb_cf.lb_ipipt,
-			    MLXSW_REG_RITR_LOOPBACK_IPIP_OPTIONS_GRE_KEY_PRESET,
-			    ul_vr->id, saddr4, lb_cf.okey);
-		break;
-
-	case MLXSW_SP_L3_PROTO_IPV6:
-		return -EAFNOSUPPORT;
-	}
-
-	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ritr), ritr_pl);
 }
 
 static int
+5 -1
drivers/net/ethernet/netronome/nfp/bpf/jit.c
···
 
 static void nfp_prog_push(struct nfp_prog *nfp_prog, u64 insn)
 {
-	if (nfp_prog->__prog_alloc_len == nfp_prog->prog_len) {
+	if (nfp_prog->__prog_alloc_len / sizeof(u64) == nfp_prog->prog_len) {
+		pr_warn("instruction limit reached (%u NFP instructions)\n",
+			nfp_prog->prog_len);
 		nfp_prog->error = -ENOSPC;
 		return;
 	}
···
 		err = cb(nfp_prog, meta);
 		if (err)
 			return err;
+		if (nfp_prog->error)
+			return nfp_prog->error;
 
 		nfp_prog->n_translated++;
 	}
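The first nfp hunk is a units bug: __prog_alloc_len is a size in bytes while prog_len counts 64-bit instructions, so the old equality check could not fire at the right point. A minimal illustration of the normalized comparison, on a hypothetical buffer rather than the nfp structures:

#include <stdint.h>
#include <stddef.h>

/* Illustrative only: alloc_len is a capacity in bytes, *len counts
 * 64-bit elements, so the two must be normalized before comparing.
 */
static int push_u64(uint64_t *buf, size_t alloc_len, size_t *len,
		    uint64_t val)
{
	if (alloc_len / sizeof(*buf) == *len)
		return -1;	/* full; "alloc_len == *len" would not trip */

	buf[(*len)++] = val;
	return 0;
}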
+6 -14
drivers/net/ethernet/qlogic/qede/qede_fp.c
···
 	barrier();
 	writel(txq->tx_db.raw, txq->doorbell_addr);
 
-	/* mmiowb is needed to synchronize doorbell writes from more than one
-	 * processor. It guarantees that the write arrives to the device before
-	 * the queue lock is released and another start_xmit is called (possibly
-	 * on another CPU). Without this barrier, the next doorbell can bypass
-	 * this doorbell. This is applicable to IA64/Altix systems.
+	/* Fence required to flush the write combined buffer, since another
+	 * CPU may write to the same doorbell address and data may be lost
+	 * due to relaxed order nature of write combined bar.
 	 */
-	mmiowb();
+	wmb();
 }
···
 
 	csum_flag = qede_check_csum(parse_flag);
 	if (unlikely(csum_flag == QEDE_CSUM_ERROR)) {
-		if (qede_pkt_is_ip_fragmented(fp_cqe, parse_flag)) {
+		if (qede_pkt_is_ip_fragmented(fp_cqe, parse_flag))
 			rxq->rx_ip_frags++;
-		} else {
-			DP_NOTICE(edev,
-				  "CQE has error, flags = %x, dropping incoming packet\n",
-				  parse_flag);
+		else
 			rxq->rx_hw_errors++;
-			qede_recycle_rx_bd_ring(rxq, fp_cqe->bd_num);
-			return 0;
-		}
 	}
 
 	/* Basic validation passed; Need to prepare an SKB. This would also
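The qede barrier swap reflects that the doorbell lives in a write-combined BAR, where a full write barrier is needed to flush the WC buffer before another CPU can write the same doorbell. A sketch of the general descriptor-then-doorbell pattern, with a hypothetical example_txq in place of qede's structures:

#include <linux/io.h>
#include <linux/types.h>

struct example_txq {			/* hypothetical, for illustration */
	u32 *prod_shadow;		/* producer index in coherent RAM */
	void __iomem *doorbell_addr;	/* doorbell in a write-combined BAR */
};

static void example_ring_doorbell(struct example_txq *txq, u32 prod)
{
	*txq->prod_shadow = prod;

	/* Order the memory update before the MMIO write; on a
	 * write-combined mapping this also flushes the WC buffer,
	 * so a doorbell written by another CPU cannot overtake ours.
	 */
	wmb();

	writel(prod, txq->doorbell_addr);
}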
+2 -2
drivers/net/ethernet/realtek/r8169.c
···
 	if (!tp->counters)
 		return -ENOMEM;
 
+	pci_set_drvdata(pdev, dev);
+
 	rc = register_netdev(dev);
 	if (rc < 0)
 		return rc;
-
-	pci_set_drvdata(pdev, dev);
 
 	netif_info(tp, probe, dev, "%s at 0x%p, %pM, XID %08x IRQ %d\n",
 		   rtl_chip_infos[chipset].name, ioaddr, dev->dev_addr,
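The r8169 fix is pure probe ordering: register_netdev() makes the device live, so anything a callback might dereference (here the PCI drvdata) must be set before the call. A hedged sketch of the safe ordering in a generic PCI network driver probe, not the r8169 code itself:

#include <linux/pci.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>

/* Illustrative probe ordering only. */
static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct net_device *dev;
	int rc;

	dev = alloc_etherdev(0);
	if (!dev)
		return -ENOMEM;

	/* Publish state before registering: once register_netdev()
	 * returns, ndo_open() or sysfs handlers may already be running
	 * and may call pci_get_drvdata(pdev).
	 */
	pci_set_drvdata(pdev, dev);

	rc = register_netdev(dev);
	if (rc) {
		free_netdev(dev);
		return rc;
	}

	return 0;
}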
+1 -1
drivers/net/hyperv/rndis_filter.c
···
 	if (flags & IFF_PROMISC) {
 		filter = NDIS_PACKET_TYPE_PROMISCUOUS;
 	} else {
-		if (flags & IFF_ALLMULTI)
+		if (!netdev_mc_empty(rdev->ndev) || (flags & IFF_ALLMULTI))
 			filter |= NDIS_PACKET_TYPE_ALL_MULTICAST;
 		if (flags & IFF_BROADCAST)
 			filter |= NDIS_PACKET_TYPE_BROADCAST;
+5 -7
drivers/net/team/team.c
···
 		goto err_dev_open;
 	}
 
-	netif_addr_lock_bh(dev);
-	dev_uc_sync_multiple(port_dev, dev);
-	dev_mc_sync_multiple(port_dev, dev);
-	netif_addr_unlock_bh(dev);
-
 	err = vlan_vids_add_by_dev(port_dev, dev);
 	if (err) {
 		netdev_err(dev, "Failed to add vlan ids to device %s\n",
···
 		goto err_option_port_add;
 	}
 
+	netif_addr_lock_bh(dev);
+	dev_uc_sync_multiple(port_dev, dev);
+	dev_mc_sync_multiple(port_dev, dev);
+	netif_addr_unlock_bh(dev);
+
 	port->index = -1;
 	list_add_tail_rcu(&port->list, &team->port_list);
 	team_port_enable(team, port);
···
 	vlan_vids_del_by_dev(port_dev, dev);
 
 err_vids_add:
-	dev_uc_unsync(port_dev, dev);
-	dev_mc_unsync(port_dev, dev);
 	dev_close(port_dev);
 
 err_dev_open:
+31 -2
drivers/net/usb/lan78xx.c
···
 	u32 buf;
 	int ret = 0;
 	unsigned long timeout;
+	u8 sig;
 
 	ret = lan78xx_read_reg(dev, HW_CFG, &buf);
 	buf |= HW_CFG_LRST_;
···
 	/* LAN7801 only has RGMII mode */
 	if (dev->chipid == ID_REV_CHIP_ID_7801_)
 		buf &= ~MAC_CR_GMII_EN_;
+
+	if (dev->chipid == ID_REV_CHIP_ID_7800_) {
+		ret = lan78xx_read_raw_eeprom(dev, 0, 1, &sig);
+		if (!ret && sig != EEPROM_INDICATOR) {
+			/* Implies there is no external eeprom. Set mac speed */
+			netdev_info(dev->net, "No External EEPROM. Setting MAC Speed\n");
+			buf |= MAC_CR_AUTO_DUPLEX_ | MAC_CR_AUTO_SPEED_;
+		}
+	}
 	ret = lan78xx_write_reg(dev, MAC_CR, buf);
 
 	ret = lan78xx_read_reg(dev, MAC_TX, &buf);
···
 	if (ret < 0) {
 		netdev_warn(dev->net,
 			    "lan78xx_setup_irq_domain() failed : %d", ret);
-		kfree(pdata);
-		return ret;
+		goto out1;
 	}
 
 	dev->net->hard_header_len += TX_OVERHEAD;
···
 
 	/* Init all registers */
 	ret = lan78xx_reset(dev);
+	if (ret) {
+		netdev_warn(dev->net, "Registers INIT FAILED....");
+		goto out2;
+	}
 
 	ret = lan78xx_mdio_init(dev);
+	if (ret) {
+		netdev_warn(dev->net, "MDIO INIT FAILED.....");
+		goto out2;
+	}
 
 	dev->net->flags |= IFF_MULTICAST;
 
 	pdata->wol = WAKE_MAGIC;
 
+	return ret;
+
+out2:
+	lan78xx_remove_irq_domain(dev);
+
+out1:
+	netdev_warn(dev->net, "Bind routine FAILED");
+	cancel_work_sync(&pdata->set_multicast);
+	cancel_work_sync(&pdata->set_vlan);
+	kfree(pdata);
 	return ret;
 }
···
 	lan78xx_remove_mdio(dev);
 
 	if (pdata) {
+		cancel_work_sync(&pdata->set_multicast);
+		cancel_work_sync(&pdata->set_vlan);
 		netif_dbg(dev, ifdown, dev->net, "free pdata");
 		kfree(pdata);
 		pdata = NULL;
+5
drivers/net/usb/qmi_wwan.c
···
 	{QMI_FIXED_INTF(0x0846, 0x68a2, 8)},
 	{QMI_FIXED_INTF(0x12d1, 0x140c, 1)},	/* Huawei E173 */
 	{QMI_FIXED_INTF(0x12d1, 0x14ac, 1)},	/* Huawei E1820 */
+	{QMI_FIXED_INTF(0x1435, 0xd181, 3)},	/* Wistron NeWeb D18Q1 */
+	{QMI_FIXED_INTF(0x1435, 0xd181, 4)},	/* Wistron NeWeb D18Q1 */
+	{QMI_FIXED_INTF(0x1435, 0xd181, 5)},	/* Wistron NeWeb D18Q1 */
 	{QMI_FIXED_INTF(0x16d8, 0x6003, 0)},	/* CMOTech 6003 */
 	{QMI_FIXED_INTF(0x16d8, 0x6007, 0)},	/* CMOTech CHE-628S */
 	{QMI_FIXED_INTF(0x16d8, 0x6008, 0)},	/* CMOTech CMU-301 */
···
 	{QMI_FIXED_INTF(0x19d2, 0x2002, 4)},	/* ZTE (Vodafone) K3765-Z */
 	{QMI_FIXED_INTF(0x2001, 0x7e19, 4)},	/* D-Link DWM-221 B1 */
 	{QMI_FIXED_INTF(0x2001, 0x7e35, 4)},	/* D-Link DWM-222 */
+	{QMI_FIXED_INTF(0x2020, 0x2033, 4)},	/* BroadMobi BM806U */
 	{QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)},	/* Sierra Wireless MC7700 */
 	{QMI_FIXED_INTF(0x114f, 0x68a2, 8)},	/* Sierra Wireless MC7750 */
 	{QMI_FIXED_INTF(0x1199, 0x68a2, 8)},	/* Sierra Wireless MC7710 in QMI mode */
···
 	{QMI_FIXED_INTF(0x413c, 0x81b6, 8)},	/* Dell Wireless 5811e */
 	{QMI_FIXED_INTF(0x413c, 0x81b6, 10)},	/* Dell Wireless 5811e */
 	{QMI_FIXED_INTF(0x03f0, 0x4e1d, 8)},	/* HP lt4111 LTE/EV-DO/HSPA+ Gobi 4G Module */
+	{QMI_FIXED_INTF(0x03f0, 0x9d1d, 1)},	/* HP lt4120 Snapdragon X5 LTE */
 	{QMI_FIXED_INTF(0x22de, 0x9061, 3)},	/* WeTelecom WPD-600N */
 	{QMI_FIXED_INTF(0x1e0e, 0x9001, 5)},	/* SIMCom 7230E */
 	{QMI_QUIRK_SET_DTR(0x2c7c, 0x0125, 4)},	/* Quectel EC25, EC20 R2.0 Mini PCIe */
+3 -2
drivers/net/vrf.c
···
 	if (!IS_ERR(neigh)) {
 		sock_confirm_neigh(skb, neigh);
 		ret = neigh_output(neigh, skb);
+		rcu_read_unlock_bh();
+		return ret;
 	}
 
 	rcu_read_unlock_bh();
 err:
-	if (unlikely(ret < 0))
-		vrf_tx_error(skb->dev, skb);
+	vrf_tx_error(skb->dev, skb);
 	return ret;
 }
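The vrf change encodes an skb ownership rule: neigh_output() consumes the skb, so falling through to an error label that frees it again is a use-after-free, or a double free. Once a helper consumes the packet, the caller must return immediately; an illustrative pattern (not the vrf code itself):

#include <linux/skbuff.h>
#include <net/neighbour.h>

/* Illustrative skb-ownership pattern; the neighbour is assumed to come
 * from a lookup that returns an ERR_PTR() on failure.
 */
static int example_finish_output(struct sk_buff *skb,
				 struct neighbour *neigh)
{
	if (!IS_ERR(neigh))
		/* neigh_output() consumes the skb: return immediately,
		 * nothing below may free or even read it again.
		 */
		return neigh_output(neigh, skb);

	/* Error path: we still own the skb, so we must free it ourselves. */
	kfree_skb(skb);
	return -EINVAL;
}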
+1 -1
drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
···
 		return;
 
 	/* ignore non-ISO3166 country codes */
-	for (i = 0; i < sizeof(req->alpha2); i++)
+	for (i = 0; i < 2; i++)
 		if (req->alpha2[i] < 'A' || req->alpha2[i] > 'Z') {
 			brcmf_err("not an ISO3166 code (0x%02x 0x%02x)\n",
 				  req->alpha2[0], req->alpha2[1]);
+5
drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
···
 module_param_named(roamoff, brcmf_roamoff, int, S_IRUSR);
 MODULE_PARM_DESC(roamoff, "Do not use internal roaming engine");
 
+static int brcmf_iapp_enable;
+module_param_named(iapp, brcmf_iapp_enable, int, 0);
+MODULE_PARM_DESC(iapp, "Enable partial support for the obsoleted Inter-Access Point Protocol");
+
 #ifdef DEBUG
 /* always succeed brcmf_bus_started() */
 static int brcmf_ignore_probe_fail;
···
 	settings->feature_disable = brcmf_feature_disable;
 	settings->fcmode = brcmf_fcmode;
 	settings->roamoff = !!brcmf_roamoff;
+	settings->iapp = !!brcmf_iapp_enable;
 #ifdef DEBUG
 	settings->ignore_probe_fail = !!brcmf_ignore_probe_fail;
 #endif
+1
drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.h
···
 	unsigned int feature_disable;
 	int fcmode;
 	bool roamoff;
+	bool iapp;
 	bool ignore_probe_fail;
 	struct brcmfmac_pd_cc *country_codes;
 	union {
+57
drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
···
 	schedule_work(&ifp->multicast_work);
 }
 
+/**
+ * brcmf_skb_is_iapp - checks if skb is an IAPP packet
+ *
+ * @skb: skb to check
+ */
+static bool brcmf_skb_is_iapp(struct sk_buff *skb)
+{
+	static const u8 iapp_l2_update_packet[6] __aligned(2) = {
+		0x00, 0x01, 0xaf, 0x81, 0x01, 0x00,
+	};
+	unsigned char *eth_data;
+#if !defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
+	const u16 *a, *b;
+#endif
+
+	if (skb->len - skb->mac_len != 6 ||
+	    !is_multicast_ether_addr(eth_hdr(skb)->h_dest))
+		return false;
+
+	eth_data = skb_mac_header(skb) + ETH_HLEN;
+#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
+	return !(((*(const u32 *)eth_data) ^ (*(const u32 *)iapp_l2_update_packet)) |
+		 ((*(const u16 *)(eth_data + 4)) ^ (*(const u16 *)(iapp_l2_update_packet + 4))));
+#else
+	a = (const u16 *)eth_data;
+	b = (const u16 *)iapp_l2_update_packet;
+
+	return !((a[0] ^ b[0]) | (a[1] ^ b[1]) | (a[2] ^ b[2]));
+#endif
+}
+
 static netdev_tx_t brcmf_netdev_start_xmit(struct sk_buff *skb,
 					   struct net_device *ndev)
 {
···
 		netif_stop_queue(ndev);
 		dev_kfree_skb(skb);
 		ret = -ENODEV;
+		goto done;
+	}
+
+	/* Some recent Broadcom's firmwares disassociate STA when they receive
+	 * an 802.11f ADD frame. This behavior can lead to a local DoS security
+	 * issue. Attacker may trigger disassociation of any STA by sending a
+	 * proper Ethernet frame to the wireless interface.
+	 *
+	 * Moreover this feature may break AP interfaces in some specific
+	 * setups. This applies e.g. to the bridge with hairpin mode enabled and
+	 * IFLA_BRPORT_MCAST_TO_UCAST set. IAPP packet generated by a firmware
+	 * will get passed back to the wireless interface and cause immediate
+	 * disassociation of a just-connected STA.
+	 */
+	if (!drvr->settings->iapp && brcmf_skb_is_iapp(skb)) {
+		dev_kfree_skb(skb);
+		ret = -EINVAL;
 		goto done;
 	}
 
···
 
 void brcmf_netif_rx(struct brcmf_if *ifp, struct sk_buff *skb)
 {
+	/* Most of Broadcom's firmwares send 802.11f ADD frame every time a new
+	 * STA connects to the AP interface. This is an obsoleted standard most
+	 * users don't use, so don't pass these frames up unless requested.
+	 */
+	if (!ifp->drvr->settings->iapp && brcmf_skb_is_iapp(skb)) {
+		brcmu_pkt_buf_free_skb(skb);
+		return;
+	}
+
 	if (skb->pkt_type == PACKET_MULTICAST)
 		ifp->ndev->stats.multicast++;
 
+62
drivers/net/wireless/intel/iwlwifi/cfg/9000.c
···
 #include <linux/stringify.h>
 #include "iwl-config.h"
 #include "iwl-agn-hw.h"
+#include "fw/file.h"
 
 /* Highest firmware API version supported */
 #define IWL9000_UCODE_API_MAX	36
···
 	.integrated = true,
 	.soc_latency = 5000,
 };
+
+const struct iwl_cfg iwl9460_2ac_cfg_shared_clk = {
+	.name = "Intel(R) Dual Band Wireless AC 9460",
+	.fw_name_pre = IWL9000A_FW_PRE,
+	.fw_name_pre_b_or_c_step = IWL9000B_FW_PRE,
+	.fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE,
+	IWL_DEVICE_9000,
+	.ht_params = &iwl9000_ht_params,
+	.nvm_ver = IWL9000_NVM_VERSION,
+	.nvm_calib_ver = IWL9000_TX_POWER_VERSION,
+	.max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K,
+	.integrated = true,
+	.soc_latency = 5000,
+	.extra_phy_cfg_flags = FW_PHY_CFG_SHARED_CLK
+};
+
+const struct iwl_cfg iwl9461_2ac_cfg_shared_clk = {
+	.name = "Intel(R) Dual Band Wireless AC 9461",
+	.fw_name_pre = IWL9000A_FW_PRE,
+	.fw_name_pre_b_or_c_step = IWL9000B_FW_PRE,
+	.fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE,
+	IWL_DEVICE_9000,
+	.ht_params = &iwl9000_ht_params,
+	.nvm_ver = IWL9000_NVM_VERSION,
+	.nvm_calib_ver = IWL9000_TX_POWER_VERSION,
+	.max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K,
+	.integrated = true,
+	.soc_latency = 5000,
+	.extra_phy_cfg_flags = FW_PHY_CFG_SHARED_CLK
+};
+
+const struct iwl_cfg iwl9462_2ac_cfg_shared_clk = {
+	.name = "Intel(R) Dual Band Wireless AC 9462",
+	.fw_name_pre = IWL9000A_FW_PRE,
+	.fw_name_pre_b_or_c_step = IWL9000B_FW_PRE,
+	.fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE,
+	IWL_DEVICE_9000,
+	.ht_params = &iwl9000_ht_params,
+	.nvm_ver = IWL9000_NVM_VERSION,
+	.nvm_calib_ver = IWL9000_TX_POWER_VERSION,
+	.max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K,
+	.integrated = true,
+	.soc_latency = 5000,
+	.extra_phy_cfg_flags = FW_PHY_CFG_SHARED_CLK
+};
+
+const struct iwl_cfg iwl9560_2ac_cfg_shared_clk = {
+	.name = "Intel(R) Dual Band Wireless AC 9560",
+	.fw_name_pre = IWL9000A_FW_PRE,
+	.fw_name_pre_b_or_c_step = IWL9000B_FW_PRE,
+	.fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE,
+	IWL_DEVICE_9000,
+	.ht_params = &iwl9000_ht_params,
+	.nvm_ver = IWL9000_NVM_VERSION,
+	.nvm_calib_ver = IWL9000_TX_POWER_VERSION,
+	.max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K,
+	.integrated = true,
+	.soc_latency = 5000,
+	.extra_phy_cfg_flags = FW_PHY_CFG_SHARED_CLK
+};
+
 MODULE_FIRMWARE(IWL9000A_MODULE_FIRMWARE(IWL9000_UCODE_API_MAX));
 MODULE_FIRMWARE(IWL9000B_MODULE_FIRMWARE(IWL9000_UCODE_API_MAX));
 MODULE_FIRMWARE(IWL9000RFB_MODULE_FIRMWARE(IWL9000_UCODE_API_MAX));
+1
drivers/net/wireless/intel/iwlwifi/fw/file.h
···
 	FW_PHY_CFG_TX_CHAIN = 0xf << FW_PHY_CFG_TX_CHAIN_POS,
 	FW_PHY_CFG_RX_CHAIN_POS = 20,
 	FW_PHY_CFG_RX_CHAIN = 0xf << FW_PHY_CFG_RX_CHAIN_POS,
+	FW_PHY_CFG_SHARED_CLK = BIT(31),
 };
 
 #define IWL_UCODE_MAX_CS	1
+5
drivers/net/wireless/intel/iwlwifi/iwl-config.h
···
 	u8 ucode_api_max;
 	u8 ucode_api_min;
 	u32 min_umac_error_event_table;
+	u32 extra_phy_cfg_flags;
 };
 
 /*
···
 extern const struct iwl_cfg iwl9461_2ac_cfg_soc;
 extern const struct iwl_cfg iwl9462_2ac_cfg_soc;
 extern const struct iwl_cfg iwl9560_2ac_cfg_soc;
+extern const struct iwl_cfg iwl9460_2ac_cfg_shared_clk;
+extern const struct iwl_cfg iwl9461_2ac_cfg_shared_clk;
+extern const struct iwl_cfg iwl9462_2ac_cfg_shared_clk;
+extern const struct iwl_cfg iwl9560_2ac_cfg_shared_clk;
 extern const struct iwl_cfg iwl22000_2ac_cfg_hr;
 extern const struct iwl_cfg iwl22000_2ac_cfg_hr_cdb;
 extern const struct iwl_cfg iwl22000_2ac_cfg_jf;
+4
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
···
 
 	/* Set parameters */
 	phy_cfg_cmd.phy_cfg = cpu_to_le32(iwl_mvm_get_phy_config(mvm));
+
+	/* set flags extra PHY configuration flags from the device's cfg */
+	phy_cfg_cmd.phy_cfg |= cpu_to_le32(mvm->cfg->extra_phy_cfg_flags);
+
 	phy_cfg_cmd.calib_control.event_trigger =
 		mvm->fw->default_calib[ucode_type].event_trigger;
 	phy_cfg_cmd.calib_control.flow_trigger =
+6 -3
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
···
 	 * Send the bcast station. At this stage the TBTT and DTIM time
 	 * events are added and applied to the scheduler
 	 */
-	iwl_mvm_send_add_bcast_sta(mvm, vif);
+	ret = iwl_mvm_send_add_bcast_sta(mvm, vif);
 	if (ret)
 		goto out_unbind;
-	iwl_mvm_add_mcast_sta(mvm, vif);
+	ret = iwl_mvm_add_mcast_sta(mvm, vif);
 	if (ret) {
 		iwl_mvm_send_rm_bcast_sta(mvm, vif);
 		goto out_unbind;
···
 		ret = 0;
 		goto out;
 	case NL80211_IFTYPE_STATION:
+		mvmvif->csa_bcn_pending = false;
 		break;
 	case NL80211_IFTYPE_MONITOR:
 		/* always disable PS when a monitor interface is active */
···
 	}
 
 	if (switching_chanctx && vif->type == NL80211_IFTYPE_STATION) {
-		u32 duration = 2 * vif->bss_conf.beacon_int;
+		u32 duration = 3 * vif->bss_conf.beacon_int;
 
 		/* iwl_mvm_protect_session() reads directly from the
 		 * device (the system time), so make sure it is
···
 		/* Protect the session to make sure we hear the first
 		 * beacon on the new channel.
 		 */
+		mvmvif->csa_bcn_pending = true;
 		iwl_mvm_protect_session(mvm, vif, duration, duration,
 					vif->bss_conf.beacon_int / 2,
 					true);
···
 	if (vif->type == NL80211_IFTYPE_STATION) {
 		struct iwl_mvm_sta *mvmsta;
 
+		mvmvif->csa_bcn_pending = false;
 		mvmsta = iwl_mvm_sta_from_staid_protected(mvm,
 							  mvmvif->ap_sta_id);
 
+3
drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
···
 	bool csa_failed;
 	u16 csa_target_freq;
 
+	/* Indicates that we are waiting for a beacon on a new channel */
+	bool csa_bcn_pending;
+
 	/* TCP Checksum Offload */
 	netdev_features_t features;
 };
+16 -5
drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
···
  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
  * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
  * Copyright(c) 2017 Intel Deutschland GmbH
+ * Copyright(c) 2018 Intel Corporation
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of version 2 of the GNU General Public License as
···
  * WITHOUT ANY WARRANTY; without even the implied warranty of
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110,
- * USA
  *
  * The full GNU General Public License is included in this distribution
  * in the file called COPYING.
···
  *
  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
  * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2018 Intel Corporation
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
···
 		return;
 
 	ctxt->ref--;
+
+	/*
+	 * Move unused phy's to a default channel. When the phy is moved the,
+	 * fw will cleanup immediate quiet bit if it was previously set,
+	 * otherwise we might not be able to reuse this phy.
+	 */
+	if (ctxt->ref == 0) {
+		struct ieee80211_channel *chan;
+		struct cfg80211_chan_def chandef;
+
+		chan = &mvm->hw->wiphy->bands[NL80211_BAND_2GHZ]->channels[0];
+		cfg80211_chandef_create(&chandef, chan, NL80211_CHAN_NO_HT);
+		iwl_mvm_phy_ctxt_changed(mvm, ctxt, &chandef, 1, 1);
+	}
 }
 
 static void iwl_mvm_binding_iterator(void *_data, u8 *mac,
+33 -41
drivers/net/wireless/intel/iwlwifi/mvm/sta.c
···
 			     u32 qmask, enum nl80211_iftype iftype,
 			     enum iwl_sta_type type)
 {
-	if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) {
+	if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) ||
+	    sta->sta_id == IWL_MVM_INVALID_STA) {
 		sta->sta_id = iwl_mvm_find_free_sta_id(mvm, iftype);
 		if (WARN_ON_ONCE(sta->sta_id == IWL_MVM_INVALID_STA))
 			return -ENOSPC;
···
 
 	/*
 	 * Note the possible cases:
-	 *  1. In DQA mode with an enabled TXQ - TXQ needs to become agg'ed
-	 *  2. Non-DQA mode: the TXQ hasn't yet been enabled, so find a free
-	 *	one and mark it as reserved
-	 *  3. In DQA mode, but no traffic yet on this TID: same treatment as in
-	 *	non-DQA mode, since the TXQ hasn't yet been allocated
-	 * Don't support case 3 for new TX path as it is not expected to happen
-	 * and aggregation will be offloaded soon anyway
+	 *  1. An enabled TXQ - TXQ needs to become agg'ed
+	 *  2. The TXQ hasn't yet been enabled, so find a free one and mark
+	 *	it as reserved
 	 */
 	txq_id = mvmsta->tid_data[tid].txq_id;
-	if (iwl_mvm_has_new_tx_api(mvm)) {
-		if (txq_id == IWL_MVM_INVALID_QUEUE) {
-			ret = -ENXIO;
-			goto release_locks;
-		}
-	} else if (unlikely(mvm->queue_info[txq_id].status ==
-			    IWL_MVM_QUEUE_SHARED)) {
-		ret = -ENXIO;
-		IWL_DEBUG_TX_QUEUES(mvm,
-				    "Can't start tid %d agg on shared queue!\n",
-				    tid);
-		goto release_locks;
-	} else if (mvm->queue_info[txq_id].status != IWL_MVM_QUEUE_READY) {
+	if (txq_id == IWL_MVM_INVALID_QUEUE) {
 		txq_id = iwl_mvm_find_free_queue(mvm, mvmsta->sta_id,
 						 IWL_MVM_DQA_MIN_DATA_QUEUE,
 						 IWL_MVM_DQA_MAX_DATA_QUEUE);
···
 			IWL_ERR(mvm, "Failed to allocate agg queue\n");
 			goto release_locks;
 		}
-		/*
-		 * TXQ shouldn't be in inactive mode for non-DQA, so getting
-		 * an inactive queue from iwl_mvm_find_free_queue() is
-		 * certainly a bug
-		 */
-		WARN_ON(mvm->queue_info[txq_id].status ==
-			IWL_MVM_QUEUE_INACTIVE);
 
 		/* TXQ hasn't yet been enabled, so mark it only as reserved */
 		mvm->queue_info[txq_id].status = IWL_MVM_QUEUE_RESERVED;
+	} else if (unlikely(mvm->queue_info[txq_id].status ==
+			    IWL_MVM_QUEUE_SHARED)) {
+		ret = -ENXIO;
+		IWL_DEBUG_TX_QUEUES(mvm,
+				    "Can't start tid %d agg on shared queue!\n",
+				    tid);
+		goto release_locks;
 	}
 
 	spin_unlock(&mvm->queue_info_lock);
···
 
 static void iwl_mvm_unreserve_agg_queue(struct iwl_mvm *mvm,
 					struct iwl_mvm_sta *mvmsta,
-					u16 txq_id)
+					struct iwl_mvm_tid_data *tid_data)
 {
+	u16 txq_id = tid_data->txq_id;
+
 	if (iwl_mvm_has_new_tx_api(mvm))
 		return;
 
···
 	 * allocated through iwl_mvm_enable_txq, so we can just mark it back as
 	 * free.
 	 */
-	if (mvm->queue_info[txq_id].status == IWL_MVM_QUEUE_RESERVED)
+	if (mvm->queue_info[txq_id].status == IWL_MVM_QUEUE_RESERVED) {
 		mvm->queue_info[txq_id].status = IWL_MVM_QUEUE_FREE;
+		tid_data->txq_id = IWL_MVM_INVALID_QUEUE;
+	}
 
 	spin_unlock_bh(&mvm->queue_info_lock);
 }
···
 
 	mvmsta->agg_tids &= ~BIT(tid);
 
-	iwl_mvm_unreserve_agg_queue(mvm, mvmsta, txq_id);
+	iwl_mvm_unreserve_agg_queue(mvm, mvmsta, tid_data);
 
 	switch (tid_data->state) {
 	case IWL_AGG_ON:
···
 	mvmsta->agg_tids &= ~BIT(tid);
 	spin_unlock_bh(&mvmsta->lock);
 
-	iwl_mvm_unreserve_agg_queue(mvm, mvmsta, txq_id);
+	iwl_mvm_unreserve_agg_queue(mvm, mvmsta, tid_data);
 
 	if (old_state >= IWL_AGG_ON) {
 		iwl_mvm_drain_sta(mvm, mvmsta, true);
···
 	}
 	sta_id = mvm_sta->sta_id;
 
-	if (keyconf->cipher == WLAN_CIPHER_SUITE_AES_CMAC ||
-	    keyconf->cipher == WLAN_CIPHER_SUITE_BIP_GMAC_128 ||
-	    keyconf->cipher == WLAN_CIPHER_SUITE_BIP_GMAC_256) {
-		ret = iwl_mvm_send_sta_igtk(mvm, keyconf, sta_id,
-					    false);
-		goto end;
-	}
-
 	/*
 	 * It is possible that the 'sta' parameter is NULL, and thus
-	 * there is a need to retrieve  the sta from the local station
+	 * there is a need to retrieve the sta from the local station
 	 * table.
 	 */
 	if (!sta) {
···
 
 		if (WARN_ON_ONCE(iwl_mvm_sta_from_mac80211(sta)->vif != vif))
 			return -EINVAL;
+	} else {
+		struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+
+		sta_id = mvmvif->mcast_sta.sta_id;
+	}
+
+	if (keyconf->cipher == WLAN_CIPHER_SUITE_AES_CMAC ||
+	    keyconf->cipher == WLAN_CIPHER_SUITE_BIP_GMAC_128 ||
+	    keyconf->cipher == WLAN_CIPHER_SUITE_BIP_GMAC_256) {
+		ret = iwl_mvm_send_sta_igtk(mvm, keyconf, sta_id, false);
+		goto end;
 	}
 
 	/* If the key_offset is not pre-assigned, we need to find a
+8 -7
drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
···
  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
  * Copyright(c) 2017 Intel Deutschland GmbH
+ * Copyright(c) 2018 Intel Corporation
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of version 2 of the GNU General Public License as
···
  * WITHOUT ANY WARRANTY; without even the implied warranty of
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110,
- * USA
  *
  * The full GNU General Public License is included in this distribution
  * in the file called COPYING.
···
  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
  * Copyright(c) 2017 Intel Deutschland GmbH
+ * Copyright(c) 2018 Intel Corporation
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
···
 					struct ieee80211_vif *vif,
 					const char *errmsg)
 {
+	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+
 	if (vif->type != NL80211_IFTYPE_STATION)
 		return false;
-	if (vif->bss_conf.assoc && vif->bss_conf.dtim_period)
+
+	if (!mvmvif->csa_bcn_pending && vif->bss_conf.assoc &&
+	    vif->bss_conf.dtim_period)
 		return false;
 	if (errmsg)
 		IWL_ERR(mvm, "%s\n", errmsg);
···
 		 * and know the dtim period.
 		 */
 		iwl_mvm_te_check_disconnect(mvm, te_data->vif,
-			"No association and the time event is over already...");
+			"No beacon heard and the time event is over already...");
 		break;
 	default:
 		break;
+4 -6
drivers/net/wireless/intel/iwlwifi/mvm/tx.c
···
 	struct iwl_mvm_int_sta *int_sta = sta;
 	struct iwl_mvm_sta *mvm_sta = sta;
 
-	if (iwl_mvm_has_new_tx_api(mvm)) {
-		if (internal)
-			return iwl_mvm_flush_sta_tids(mvm, int_sta->sta_id,
-						      BIT(IWL_MGMT_TID), flags);
+	BUILD_BUG_ON(offsetof(struct iwl_mvm_int_sta, sta_id) !=
+		     offsetof(struct iwl_mvm_sta, sta_id));
 
+	if (iwl_mvm_has_new_tx_api(mvm))
 		return iwl_mvm_flush_sta_tids(mvm, mvm_sta->sta_id,
-					      0xFF, flags);
-	}
+					      0xff | BIT(IWL_MGMT_TID), flags);
 
 	if (internal)
 		return iwl_mvm_flush_tx_path(mvm, int_sta->tfd_queue_msk,
+19 -19
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
···
 	{IWL_PCI_DEVICE(0x30DC, 0x0264, iwl9461_2ac_cfg_soc)},
 	{IWL_PCI_DEVICE(0x30DC, 0x02A0, iwl9462_2ac_cfg_soc)},
 	{IWL_PCI_DEVICE(0x30DC, 0x02A4, iwl9462_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x0030, iwl9560_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x0034, iwl9560_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x0038, iwl9560_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x003C, iwl9560_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x0060, iwl9460_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x0064, iwl9461_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x00A0, iwl9462_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x00A4, iwl9462_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x0230, iwl9560_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x0234, iwl9560_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x0238, iwl9560_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x023C, iwl9560_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x0260, iwl9461_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x0264, iwl9461_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x02A0, iwl9462_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x02A4, iwl9462_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x4030, iwl9560_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x4034, iwl9560_2ac_cfg_soc)},
-	{IWL_PCI_DEVICE(0x31DC, 0x40A4, iwl9462_2ac_cfg_soc)},
+	{IWL_PCI_DEVICE(0x31DC, 0x0030, iwl9560_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x0034, iwl9560_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x0038, iwl9560_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x003C, iwl9560_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x0060, iwl9460_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x0064, iwl9461_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x00A0, iwl9462_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x00A4, iwl9462_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x0230, iwl9560_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x0234, iwl9560_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x0238, iwl9560_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x023C, iwl9560_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x0260, iwl9461_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x0264, iwl9461_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x02A0, iwl9462_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x02A4, iwl9462_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x4030, iwl9560_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x4034, iwl9560_2ac_cfg_shared_clk)},
+	{IWL_PCI_DEVICE(0x31DC, 0x40A4, iwl9462_2ac_cfg_shared_clk)},
 	{IWL_PCI_DEVICE(0x34F0, 0x0030, iwl9560_2ac_cfg_soc)},
 	{IWL_PCI_DEVICE(0x34F0, 0x0034, iwl9560_2ac_cfg_soc)},
 	{IWL_PCI_DEVICE(0x34F0, 0x02A4, iwl9462_2ac_cfg_soc)},
+2 -2
drivers/vhost/net.c
··· 630 630 631 631 if (!len && vq->busyloop_timeout) { 632 632 /* Both tx vq and rx socket were polled here */ 633 - mutex_lock(&vq->mutex); 633 + mutex_lock_nested(&vq->mutex, 1); 634 634 vhost_disable_notify(&net->dev, vq); 635 635 636 636 preempt_disable(); ··· 763 763 struct iov_iter fixup; 764 764 __virtio16 num_buffers; 765 765 766 - mutex_lock(&vq->mutex); 766 + mutex_lock_nested(&vq->mutex, 0); 767 767 sock = vq->private_data; 768 768 if (!sock) 769 769 goto out;
+7 -10
drivers/vhost/vhost.c
··· 212 212 if (mask) 213 213 vhost_poll_wakeup(&poll->wait, 0, 0, poll_to_key(mask)); 214 214 if (mask & EPOLLERR) { 215 - if (poll->wqh) 216 - remove_wait_queue(poll->wqh, &poll->wait); 215 + vhost_poll_stop(poll); 217 216 ret = -EINVAL; 218 217 } 219 218 ··· 1244 1245 /* Caller should have vq mutex and device mutex */ 1245 1246 int vhost_vq_access_ok(struct vhost_virtqueue *vq) 1246 1247 { 1247 - if (vq->iotlb) { 1248 - /* When device IOTLB was used, the access validation 1249 - * will be validated during prefetching. 1250 - */ 1251 - return 1; 1252 - } 1253 - return vq_access_ok(vq, vq->num, vq->desc, vq->avail, vq->used) && 1254 - vq_log_access_ok(vq, vq->log_base); 1248 + int ret = vq_log_access_ok(vq, vq->log_base); 1249 + 1250 + if (ret || vq->iotlb) 1251 + return ret; 1252 + 1253 + return vq_access_ok(vq, vq->num, vq->desc, vq->avail, vq->used); 1255 1254 } 1256 1255 EXPORT_SYMBOL_GPL(vhost_vq_access_ok); 1257 1256
+13 -2
include/linux/if_vlan.h
··· 323 323 skb_push(skb, VLAN_HLEN); 324 324 325 325 /* Move the mac header sans proto to the beginning of the new header. */ 326 - memmove(skb->data, skb->data + VLAN_HLEN, mac_len - ETH_TLEN); 326 + if (likely(mac_len > ETH_TLEN)) 327 + memmove(skb->data, skb->data + VLAN_HLEN, mac_len - ETH_TLEN); 327 328 skb->mac_header -= VLAN_HLEN; 328 329 329 330 veth = (struct vlan_ethhdr *)(skb->data + mac_len - ETH_HLEN); 330 331 331 332 /* first, the ethernet type */ 332 - veth->h_vlan_proto = vlan_proto; 333 + if (likely(mac_len >= ETH_TLEN)) { 334 + /* h_vlan_encapsulated_proto should already be populated, and 335 + * skb->data has space for h_vlan_proto 336 + */ 337 + veth->h_vlan_proto = vlan_proto; 338 + } else { 339 + /* h_vlan_encapsulated_proto should not be populated, and 340 + * skb->data has no space for h_vlan_proto 341 + */ 342 + veth->h_vlan_encapsulated_proto = skb->protocol; 343 + } 333 344 334 345 /* now, the TCI */ 335 346 veth->h_vlan_TCI = htons(vlan_tci);
+1 -1
include/linux/net_dim.h
··· 231 231 } 232 232 233 233 #define IS_SIGNIFICANT_DIFF(val, ref) \ 234 - (((100 * abs((val) - (ref))) / (ref)) > 10) /* more than 10% difference */ 234 + (((100UL * abs((val) - (ref))) / (ref)) > 10) /* more than 10% difference */ 235 235 236 236 static inline int net_dim_stats_compare(struct net_dim_stats *curr, 237 237 struct net_dim_stats *prev)
+1 -1
include/net/llc_conn.h
··· 103 103 104 104 /* Access to a connection */ 105 105 int llc_conn_state_process(struct sock *sk, struct sk_buff *skb); 106 - void llc_conn_send_pdu(struct sock *sk, struct sk_buff *skb); 106 + int llc_conn_send_pdu(struct sock *sk, struct sk_buff *skb); 107 107 void llc_conn_rtn_pdu(struct sock *sk, struct sk_buff *skb); 108 108 void llc_conn_resend_i_pdu_as_cmd(struct sock *sk, u8 nr, u8 first_p_bit); 109 109 void llc_conn_resend_i_pdu_as_rsp(struct sock *sk, u8 nr, u8 first_f_bit);
+4
include/net/netfilter/nf_tables.h
··· 1068 1068 int nft_register_obj(struct nft_object_type *obj_type); 1069 1069 void nft_unregister_obj(struct nft_object_type *obj_type); 1070 1070 1071 + #define NFT_FLOWTABLE_DEVICE_MAX 8 1072 + 1071 1073 /** 1072 1074 * struct nft_flowtable - nf_tables flow table 1073 1075 * ··· 1082 1080 * @genmask: generation mask 1083 1081 * @use: number of references to this flow table 1084 1082 * @handle: unique object handle 1083 + * @dev_name: array of device names 1085 1084 * @data: rhashtable and garbage collector 1086 1085 * @ops: array of hooks 1087 1086 */ ··· 1096 1093 u32 genmask:2, 1097 1094 use:30; 1098 1095 u64 handle; 1096 + char *dev_name[NFT_FLOWTABLE_DEVICE_MAX]; 1099 1097 /* runtime data below here */ 1100 1098 struct nf_hook_ops *ops ____cacheline_aligned; 1101 1099 struct nf_flowtable data;
+1
include/net/sch_generic.h
··· 30 30 enum qdisc_state_t { 31 31 __QDISC_STATE_SCHED, 32 32 __QDISC_STATE_DEACTIVATED, 33 + __QDISC_STATE_RUNNING, 33 34 }; 34 35 35 36 struct qdisc_size_table {
+4 -1
net/batman-adv/gateway_client.c
··· 746 746 { 747 747 struct batadv_neigh_node *neigh_curr = NULL; 748 748 struct batadv_neigh_node *neigh_old = NULL; 749 - struct batadv_orig_node *orig_dst_node; 749 + struct batadv_orig_node *orig_dst_node = NULL; 750 750 struct batadv_gw_node *gw_node = NULL; 751 751 struct batadv_gw_node *curr_gw = NULL; 752 752 struct batadv_neigh_ifinfo *curr_ifinfo, *old_ifinfo; ··· 756 756 unsigned short vid; 757 757 758 758 vid = batadv_get_vid(skb, 0); 759 + 760 + if (is_multicast_ether_addr(ethhdr->h_dest)) 761 + goto out; 759 762 760 763 orig_dst_node = batadv_transtable_search(bat_priv, ethhdr->h_source, 761 764 ethhdr->h_dest, vid);
+2 -2
net/batman-adv/multicast.c
··· 814 814 batadv_mcast_forw_tt_node_get(struct batadv_priv *bat_priv, 815 815 struct ethhdr *ethhdr) 816 816 { 817 - return batadv_transtable_search(bat_priv, ethhdr->h_source, 818 - ethhdr->h_dest, BATADV_NO_FLAGS); 817 + return batadv_transtable_search(bat_priv, NULL, ethhdr->h_dest, 818 + BATADV_NO_FLAGS); 819 819 } 820 820 821 821 /**
+1 -1
net/core/dev.c
··· 2735 2735 if (unlikely(!pskb_may_pull(skb, sizeof(struct ethhdr)))) 2736 2736 return 0; 2737 2737 2738 - eth = (struct ethhdr *)skb_mac_header(skb); 2738 + eth = (struct ethhdr *)skb->data; 2739 2739 type = eth->h_proto; 2740 2740 } 2741 2741
+4 -2
net/core/skbuff.c
··· 5028 5028 } 5029 5029 5030 5030 mac_len = skb->data - skb_mac_header(skb); 5031 - memmove(skb_mac_header(skb) + VLAN_HLEN, skb_mac_header(skb), 5032 - mac_len - VLAN_HLEN - ETH_TLEN); 5031 + if (likely(mac_len > VLAN_HLEN + ETH_TLEN)) { 5032 + memmove(skb_mac_header(skb) + VLAN_HLEN, skb_mac_header(skb), 5033 + mac_len - VLAN_HLEN - ETH_TLEN); 5034 + } 5033 5035 skb->mac_header += VLAN_HLEN; 5034 5036 return skb; 5035 5037 }
+26 -5
net/ipv4/ip_tunnel.c
··· 362 362 struct ip_tunnel *nt; 363 363 struct net_device *dev; 364 364 int t_hlen; 365 + int mtu; 366 + int err; 365 367 366 368 BUG_ON(!itn->fb_tunnel_dev); 367 369 dev = __ip_tunnel_create(net, itn->fb_tunnel_dev->rtnl_link_ops, parms); 368 370 if (IS_ERR(dev)) 369 371 return ERR_CAST(dev); 370 372 371 - dev->mtu = ip_tunnel_bind_dev(dev); 373 + mtu = ip_tunnel_bind_dev(dev); 374 + err = dev_set_mtu(dev, mtu); 375 + if (err) 376 + goto err_dev_set_mtu; 372 377 373 378 nt = netdev_priv(dev); 374 379 t_hlen = nt->hlen + sizeof(struct iphdr); ··· 381 376 dev->max_mtu = 0xFFF8 - dev->hard_header_len - t_hlen; 382 377 ip_tunnel_add(itn, nt); 383 378 return nt; 379 + 380 + err_dev_set_mtu: 381 + unregister_netdevice(dev); 382 + return ERR_PTR(err); 384 383 } 385 384 386 385 int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb, ··· 1111 1102 nt->fwmark = fwmark; 1112 1103 err = register_netdevice(dev); 1113 1104 if (err) 1114 - goto out; 1105 + goto err_register_netdevice; 1115 1106 1116 1107 if (dev->type == ARPHRD_ETHER && !tb[IFLA_ADDRESS]) 1117 1108 eth_hw_addr_random(dev); 1118 1109 1119 1110 mtu = ip_tunnel_bind_dev(dev); 1120 - if (!tb[IFLA_MTU]) 1121 - dev->mtu = mtu; 1111 + if (tb[IFLA_MTU]) { 1112 + unsigned int max = 0xfff8 - dev->hard_header_len - nt->hlen; 1113 + 1114 + mtu = clamp(dev->mtu, (unsigned int)ETH_MIN_MTU, 1115 + (unsigned int)(max - sizeof(struct iphdr))); 1116 + } 1117 + 1118 + err = dev_set_mtu(dev, mtu); 1119 + if (err) 1120 + goto err_dev_set_mtu; 1122 1121 1123 1122 ip_tunnel_add(itn, nt); 1124 - out: 1123 + return 0; 1124 + 1125 + err_dev_set_mtu: 1126 + unregister_netdevice(dev); 1127 + err_register_netdevice: 1125 1128 return err; 1126 1129 } 1127 1130 EXPORT_SYMBOL_GPL(ip_tunnel_newlink);
-2
net/ipv4/ip_vti.c
··· 387 387 memcpy(dev->dev_addr, &iph->saddr, 4); 388 388 memcpy(dev->broadcast, &iph->daddr, 4); 389 389 390 - dev->hard_header_len = LL_MAX_HEADER + sizeof(struct iphdr); 391 - dev->mtu = ETH_DATA_LEN; 392 390 dev->flags = IFF_NOARP; 393 391 dev->addr_len = 4; 394 392 dev->features |= NETIF_F_LLTX;
+13 -1
net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
··· 154 154 struct sk_buff *skb, 155 155 const struct nf_hook_state *state) 156 156 { 157 - if (ip_is_fragment(ip_hdr(skb))) /* IP_NODEFRAG setsockopt set */ 157 + if (ip_is_fragment(ip_hdr(skb))) { /* IP_NODEFRAG setsockopt set */ 158 + enum ip_conntrack_info ctinfo; 159 + struct nf_conn *tmpl; 160 + 161 + tmpl = nf_ct_get(skb, &ctinfo); 162 + if (tmpl && nf_ct_is_template(tmpl)) { 163 + /* when skipping ct, clear templates to avoid fooling 164 + * later targets/matches 165 + */ 166 + skb->_nfct = 0; 167 + nf_ct_put(tmpl); 168 + } 158 169 return NF_ACCEPT; 170 + } 159 171 160 172 return nf_conntrack_in(state->net, PF_INET, state->hook, skb); 161 173 }
+4 -2
net/ipv4/netfilter/nf_socket_ipv4.c
··· 108 108 int doff = 0; 109 109 110 110 if (iph->protocol == IPPROTO_UDP || iph->protocol == IPPROTO_TCP) { 111 - struct udphdr _hdr, *hp; 111 + struct tcphdr _hdr; 112 + struct udphdr *hp; 112 113 113 114 hp = skb_header_pointer(skb, ip_hdrlen(skb), 114 - sizeof(_hdr), &_hdr); 115 + iph->protocol == IPPROTO_UDP ? 116 + sizeof(*hp) : sizeof(_hdr), &_hdr); 115 117 if (hp == NULL) 116 118 return NULL; 117 119
+2
net/ipv4/syncookies.c
··· 349 349 req->ts_recent = tcp_opt.saw_tstamp ? tcp_opt.rcv_tsval : 0; 350 350 treq->snt_synack = 0; 351 351 treq->tfo_listener = false; 352 + if (IS_ENABLED(CONFIG_SMC)) 353 + ireq->smc_ok = 0; 352 354 353 355 ireq->ir_iif = inet_request_bound_dev_if(sk, skb); 354 356
+3
net/ipv4/tcp_input.c
··· 6256 6256 if (want_cookie && !tmp_opt.saw_tstamp) 6257 6257 tcp_clear_options(&tmp_opt); 6258 6258 6259 + if (IS_ENABLED(CONFIG_SMC) && want_cookie) 6260 + tmp_opt.smc_ok = 0; 6261 + 6259 6262 tmp_opt.tstamp_ok = tmp_opt.saw_tstamp; 6260 6263 tcp_openreq_init(req, &tmp_opt, skb, sk); 6261 6264 inet_rsk(req)->no_srccheck = inet_sk(sk)->transparent;
+9 -4
net/ipv6/ip6_output.c
··· 1246 1246 const struct sockcm_cookie *sockc) 1247 1247 { 1248 1248 struct sk_buff *skb, *skb_prev = NULL; 1249 - unsigned int maxfraglen, fragheaderlen, mtu, orig_mtu; 1249 + unsigned int maxfraglen, fragheaderlen, mtu, orig_mtu, pmtu; 1250 1250 int exthdrlen = 0; 1251 1251 int dst_exthdrlen = 0; 1252 1252 int hh_len; ··· 1282 1282 sizeof(struct frag_hdr) : 0) + 1283 1283 rt->rt6i_nfheader_len; 1284 1284 1285 + /* as per RFC 7112 section 5, the entire IPv6 Header Chain must fit 1286 + * the first fragment 1287 + */ 1288 + if (headersize + transhdrlen > mtu) 1289 + goto emsgsize; 1290 + 1285 1291 if (cork->length + length > mtu - headersize && ipc6->dontfrag && 1286 1292 (sk->sk_protocol == IPPROTO_UDP || 1287 1293 sk->sk_protocol == IPPROTO_RAW)) { ··· 1303 1297 1304 1298 if (cork->length + length > maxnonfragsize - headersize) { 1305 1299 emsgsize: 1306 - ipv6_local_error(sk, EMSGSIZE, fl6, 1307 - mtu - headersize + 1308 - sizeof(struct ipv6hdr)); 1300 + pmtu = max_t(int, mtu - headersize + sizeof(struct ipv6hdr), 0); 1301 + ipv6_local_error(sk, EMSGSIZE, fl6, pmtu); 1309 1302 return -EMSGSIZE; 1310 1303 } 1311 1304
+23 -13
net/ipv6/ip6_vti.c
··· 622 622 return 0; 623 623 } 624 624 625 - static void vti6_link_config(struct ip6_tnl *t) 625 + static void vti6_link_config(struct ip6_tnl *t, bool keep_mtu) 626 626 { 627 627 struct net_device *dev = t->dev; 628 628 struct __ip6_tnl_parm *p = &t->parms; 629 629 struct net_device *tdev = NULL; 630 + int mtu; 630 631 631 632 memcpy(dev->dev_addr, &p->laddr, sizeof(struct in6_addr)); 632 633 memcpy(dev->broadcast, &p->raddr, sizeof(struct in6_addr)); ··· 640 639 dev->flags |= IFF_POINTOPOINT; 641 640 else 642 641 dev->flags &= ~IFF_POINTOPOINT; 642 + 643 + if (keep_mtu && dev->mtu) { 644 + dev->mtu = clamp(dev->mtu, dev->min_mtu, dev->max_mtu); 645 + return; 646 + } 643 647 644 648 if (p->flags & IP6_TNL_F_CAP_XMIT) { 645 649 int strict = (ipv6_addr_type(&p->raddr) & ··· 662 656 tdev = __dev_get_by_index(t->net, p->link); 663 657 664 658 if (tdev) 665 - dev->mtu = max_t(int, tdev->mtu - dev->hard_header_len, 666 - IPV6_MIN_MTU); 659 + mtu = tdev->mtu - sizeof(struct ipv6hdr); 660 + else 661 + mtu = ETH_DATA_LEN - LL_MAX_HEADER - sizeof(struct ipv6hdr); 662 + 663 + dev->mtu = max_t(int, mtu, IPV6_MIN_MTU); 667 664 } 668 665 669 666 /** 670 667 * vti6_tnl_change - update the tunnel parameters 671 668 * @t: tunnel to be changed 672 669 * @p: tunnel configuration parameters 670 + * @keep_mtu: MTU was set from userspace, don't re-compute it 673 671 * 674 672 * Description: 675 673 * vti6_tnl_change() updates the tunnel parameters 676 674 **/ 677 675 static int 678 - vti6_tnl_change(struct ip6_tnl *t, const struct __ip6_tnl_parm *p) 676 + vti6_tnl_change(struct ip6_tnl *t, const struct __ip6_tnl_parm *p, 677 + bool keep_mtu) 679 678 { 680 679 t->parms.laddr = p->laddr; 681 680 t->parms.raddr = p->raddr; ··· 690 679 t->parms.proto = p->proto; 691 680 t->parms.fwmark = p->fwmark; 692 681 dst_cache_reset(&t->dst_cache); 693 - vti6_link_config(t); 682 + vti6_link_config(t, keep_mtu); 694 683 return 0; 695 684 } 696 685 697 - static int vti6_update(struct ip6_tnl *t, struct __ip6_tnl_parm *p) 686 + static int vti6_update(struct ip6_tnl *t, struct __ip6_tnl_parm *p, 687 + bool keep_mtu) 698 688 { 699 689 struct net *net = dev_net(t->dev); 700 690 struct vti6_net *ip6n = net_generic(net, vti6_net_id); ··· 703 691 704 692 vti6_tnl_unlink(ip6n, t); 705 693 synchronize_net(); 706 - err = vti6_tnl_change(t, p); 694 + err = vti6_tnl_change(t, p, keep_mtu); 707 695 vti6_tnl_link(ip6n, t); 708 696 netdev_state_change(t->dev); 709 697 return err; ··· 816 804 } else 817 805 t = netdev_priv(dev); 818 806 819 - err = vti6_update(t, &p1); 807 + err = vti6_update(t, &p1, false); 820 808 } 821 809 if (t) { 822 810 err = 0; ··· 878 866 dev->priv_destructor = vti6_dev_free; 879 867 880 868 dev->type = ARPHRD_TUNNEL6; 881 - dev->hard_header_len = LL_MAX_HEADER + sizeof(struct ipv6hdr); 882 - dev->mtu = ETH_DATA_LEN; 883 869 dev->min_mtu = IPV6_MIN_MTU; 884 - dev->max_mtu = IP_MAX_MTU; 870 + dev->max_mtu = IP_MAX_MTU - sizeof(struct ipv6hdr); 885 871 dev->flags |= IFF_NOARP; 886 872 dev->addr_len = sizeof(struct in6_addr); 887 873 netif_keep_dst(dev); ··· 915 905 916 906 if (err) 917 907 return err; 918 - vti6_link_config(t); 908 + vti6_link_config(t, true); 919 909 return 0; 920 910 } 921 911 ··· 1020 1010 } else 1021 1011 t = netdev_priv(dev); 1022 1012 1023 - return vti6_update(t, &p); 1013 + return vti6_update(t, &p, tb && tb[IFLA_MTU]); 1024 1014 } 1025 1015 1026 1016 static size_t vti6_get_size(const struct net_device *dev)
+4 -2
net/ipv6/netfilter/nf_socket_ipv6.c
··· 116 116 } 117 117 118 118 if (tproto == IPPROTO_UDP || tproto == IPPROTO_TCP) { 119 - struct udphdr _hdr, *hp; 119 + struct tcphdr _hdr; 120 + struct udphdr *hp; 120 121 121 - hp = skb_header_pointer(skb, thoff, sizeof(_hdr), &_hdr); 122 + hp = skb_header_pointer(skb, thoff, tproto == IPPROTO_UDP ? 123 + sizeof(*hp) : sizeof(_hdr), &_hdr); 122 124 if (hp == NULL) 123 125 return NULL; 124 126
+10 -6
net/ipv6/route.c
··· 919 919 struct rt6_info *rt, *rt_cache; 920 920 struct fib6_node *fn; 921 921 922 + if (fl6->flowi6_flags & FLOWI_FLAG_SKIP_NH_OIF) 923 + flags &= ~RT6_LOOKUP_F_IFACE; 924 + 922 925 rcu_read_lock(); 923 926 fn = fib6_lookup(&table->tb6_root, &fl6->daddr, &fl6->saddr); 924 927 restart: ··· 1629 1626 struct neighbour *neigh; 1630 1627 __u8 neigh_flags = 0; 1631 1628 1632 - neigh = dst_neigh_lookup(&rt->dst, &rt->rt6i_gateway); 1633 - if (neigh) { 1629 + neigh = __ipv6_neigh_lookup_noref(rt->dst.dev, &rt->rt6i_gateway); 1630 + if (neigh) 1634 1631 neigh_flags = neigh->flags; 1635 - neigh_release(neigh); 1636 - } 1632 + 1637 1633 if (!(neigh_flags & NTF_ROUTER)) { 1638 1634 RT6_TRACE("purging route %p via non-router but gateway\n", 1639 1635 rt); ··· 1656 1654 if (!rcu_access_pointer(rt->rt6i_exception_bucket)) 1657 1655 return; 1658 1656 1659 - spin_lock_bh(&rt6_exception_lock); 1657 + rcu_read_lock_bh(); 1658 + spin_lock(&rt6_exception_lock); 1660 1659 bucket = rcu_dereference_protected(rt->rt6i_exception_bucket, 1661 1660 lockdep_is_held(&rt6_exception_lock)); 1662 1661 ··· 1671 1668 bucket++; 1672 1669 } 1673 1670 } 1674 - spin_unlock_bh(&rt6_exception_lock); 1671 + spin_unlock(&rt6_exception_lock); 1672 + rcu_read_unlock_bh(); 1675 1673 } 1676 1674 1677 1675 struct rt6_info *ip6_pol_route(struct net *net, struct fib6_table *table,
+7 -9
net/ipv6/seg6_iptunnel.c
··· 16 16 #include <linux/net.h> 17 17 #include <linux/module.h> 18 18 #include <net/ip.h> 19 + #include <net/ip_tunnels.h> 19 20 #include <net/lwtunnel.h> 20 21 #include <net/netevent.h> 21 22 #include <net/netns/generic.h> ··· 212 211 213 212 tinfo = seg6_encap_lwtunnel(dst->lwtstate); 214 213 215 - if (likely(!skb->encapsulation)) { 216 - skb_reset_inner_headers(skb); 217 - skb->encapsulation = 1; 218 - } 219 - 220 214 switch (tinfo->mode) { 221 215 case SEG6_IPTUN_MODE_INLINE: 222 216 if (skb->protocol != htons(ETH_P_IPV6)) ··· 220 224 err = seg6_do_srh_inline(skb, tinfo->srh); 221 225 if (err) 222 226 return err; 223 - 224 - skb_reset_inner_headers(skb); 225 227 break; 226 228 case SEG6_IPTUN_MODE_ENCAP: 229 + err = iptunnel_handle_offloads(skb, SKB_GSO_IPXIP6); 230 + if (err) 231 + return err; 232 + 227 233 if (skb->protocol == htons(ETH_P_IPV6)) 228 234 proto = IPPROTO_IPV6; 229 235 else if (skb->protocol == htons(ETH_P_IP)) ··· 237 239 if (err) 238 240 return err; 239 241 242 + skb_set_inner_transport_header(skb, skb_transport_offset(skb)); 243 + skb_set_inner_protocol(skb, skb->protocol); 240 244 skb->protocol = htons(ETH_P_IPV6); 241 245 break; 242 246 case SEG6_IPTUN_MODE_L2ENCAP: ··· 261 261 262 262 ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr)); 263 263 skb_set_transport_header(skb, sizeof(struct ipv6hdr)); 264 - 265 - skb_set_inner_protocol(skb, skb->protocol); 266 264 267 265 return 0; 268 266 }
+2
net/ipv6/syncookies.c
··· 217 217 treq->snt_isn = cookie; 218 218 treq->ts_off = 0; 219 219 treq->txhash = net_tx_rndhash(); 220 + if (IS_ENABLED(CONFIG_SMC)) 221 + ireq->smc_ok = 0; 220 222 221 223 /* 222 224 * We need to lookup the dst_entry to get the correct window size.
+9 -6
net/llc/llc_c_ac.c
··· 389 389 llc_pdu_init_as_i_cmd(skb, 0, llc->vS, llc->vR); 390 390 rc = llc_mac_hdr_init(skb, llc->dev->dev_addr, llc->daddr.mac); 391 391 if (likely(!rc)) { 392 - llc_conn_send_pdu(sk, skb); 392 + rc = llc_conn_send_pdu(sk, skb); 393 393 llc_conn_ac_inc_vs_by_1(sk, skb); 394 394 } 395 395 return rc; ··· 916 916 llc_pdu_init_as_i_cmd(skb, llc->ack_pf, llc->vS, llc->vR); 917 917 rc = llc_mac_hdr_init(skb, llc->dev->dev_addr, llc->daddr.mac); 918 918 if (likely(!rc)) { 919 - llc_conn_send_pdu(sk, skb); 919 + rc = llc_conn_send_pdu(sk, skb); 920 920 llc_conn_ac_inc_vs_by_1(sk, skb); 921 921 } 922 922 return rc; ··· 935 935 int llc_conn_ac_send_i_as_ack(struct sock *sk, struct sk_buff *skb) 936 936 { 937 937 struct llc_sock *llc = llc_sk(sk); 938 + int ret; 938 939 939 940 if (llc->ack_must_be_send) { 940 - llc_conn_ac_send_i_rsp_f_set_ackpf(sk, skb); 941 + ret = llc_conn_ac_send_i_rsp_f_set_ackpf(sk, skb); 941 942 llc->ack_must_be_send = 0 ; 942 943 llc->ack_pf = 0; 943 - } else 944 - llc_conn_ac_send_i_cmd_p_set_0(sk, skb); 945 - return 0; 944 + } else { 945 + ret = llc_conn_ac_send_i_cmd_p_set_0(sk, skb); 946 + } 947 + 948 + return ret; 946 949 } 947 950 948 951 /**
+23 -9
net/llc/llc_conn.c
··· 30 30 #endif 31 31 32 32 static int llc_find_offset(int state, int ev_type); 33 - static void llc_conn_send_pdus(struct sock *sk); 33 + static int llc_conn_send_pdus(struct sock *sk, struct sk_buff *skb); 34 34 static int llc_conn_service(struct sock *sk, struct sk_buff *skb); 35 35 static int llc_exec_conn_trans_actions(struct sock *sk, 36 36 struct llc_conn_state_trans *trans, ··· 193 193 return rc; 194 194 } 195 195 196 - void llc_conn_send_pdu(struct sock *sk, struct sk_buff *skb) 196 + int llc_conn_send_pdu(struct sock *sk, struct sk_buff *skb) 197 197 { 198 198 /* queue PDU to send to MAC layer */ 199 199 skb_queue_tail(&sk->sk_write_queue, skb); 200 - llc_conn_send_pdus(sk); 200 + return llc_conn_send_pdus(sk, skb); 201 201 } 202 202 203 203 /** ··· 255 255 if (howmany_resend > 0) 256 256 llc->vS = (llc->vS + 1) % LLC_2_SEQ_NBR_MODULO; 257 257 /* any PDUs to re-send are queued up; start sending to MAC */ 258 - llc_conn_send_pdus(sk); 258 + llc_conn_send_pdus(sk, NULL); 259 259 out:; 260 260 } 261 261 ··· 296 296 if (howmany_resend > 0) 297 297 llc->vS = (llc->vS + 1) % LLC_2_SEQ_NBR_MODULO; 298 298 /* any PDUs to re-send are queued up; start sending to MAC */ 299 - llc_conn_send_pdus(sk); 299 + llc_conn_send_pdus(sk, NULL); 300 300 out:; 301 301 } 302 302 ··· 340 340 /** 341 341 * llc_conn_send_pdus - Sends queued PDUs 342 342 * @sk: active connection 343 + * @hold_skb: the skb held by caller, or NULL if does not care 343 344 * 344 - * Sends queued pdus to MAC layer for transmission. 345 + * Sends queued pdus to MAC layer for transmission. When @hold_skb is 346 + * NULL, always return 0. Otherwise, return 0 if @hold_skb is sent 347 + * successfully, or 1 for failure. 345 348 */ 346 - static void llc_conn_send_pdus(struct sock *sk) 349 + static int llc_conn_send_pdus(struct sock *sk, struct sk_buff *hold_skb) 347 350 { 348 351 struct sk_buff *skb; 352 + int ret = 0; 349 353 350 354 while ((skb = skb_dequeue(&sk->sk_write_queue)) != NULL) { 351 355 struct llc_pdu_sn *pdu = llc_pdu_sn_hdr(skb); ··· 361 357 skb_queue_tail(&llc_sk(sk)->pdu_unack_q, skb); 362 358 if (!skb2) 363 359 break; 364 - skb = skb2; 360 + dev_queue_xmit(skb2); 361 + } else { 362 + bool is_target = skb == hold_skb; 363 + int rc; 364 + 365 + if (is_target) 366 + skb_get(skb); 367 + rc = dev_queue_xmit(skb); 368 + if (is_target) 369 + ret = rc; 365 370 } 366 - dev_queue_xmit(skb); 367 371 } 372 + 373 + return ret; 368 374 } 369 375 370 376 /**
+83 -23
net/netfilter/nf_tables_api.c
··· 74 74 kfree(trans); 75 75 } 76 76 77 + /* removal requests are queued in the commit_list, but not acted upon 78 + * until after all new rules are in place. 79 + * 80 + * Therefore, nf_register_net_hook(net, &nat_hook) runs before pending 81 + * nf_unregister_net_hook(). 82 + * 83 + * nf_register_net_hook thus fails if a nat hook is already in place 84 + * even if the conflicting hook is about to be removed. 85 + * 86 + * If collision is detected, search commit_log for DELCHAIN matching 87 + * the new nat hooknum; if we find one collision is temporary: 88 + * 89 + * Either transaction is aborted (new/colliding hook is removed), or 90 + * transaction is committed (old hook is removed). 91 + */ 92 + static bool nf_tables_allow_nat_conflict(const struct net *net, 93 + const struct nf_hook_ops *ops) 94 + { 95 + const struct nft_trans *trans; 96 + bool ret = false; 97 + 98 + if (!ops->nat_hook) 99 + return false; 100 + 101 + list_for_each_entry(trans, &net->nft.commit_list, list) { 102 + const struct nf_hook_ops *pending_ops; 103 + const struct nft_chain *pending; 104 + 105 + if (trans->msg_type != NFT_MSG_NEWCHAIN && 106 + trans->msg_type != NFT_MSG_DELCHAIN) 107 + continue; 108 + 109 + pending = trans->ctx.chain; 110 + if (!nft_is_base_chain(pending)) 111 + continue; 112 + 113 + pending_ops = &nft_base_chain(pending)->ops; 114 + if (pending_ops->nat_hook && 115 + pending_ops->pf == ops->pf && 116 + pending_ops->hooknum == ops->hooknum) { 117 + /* other hook registration already pending? */ 118 + if (trans->msg_type == NFT_MSG_NEWCHAIN) 119 + return false; 120 + 121 + ret = true; 122 + } 123 + } 124 + 125 + return ret; 126 + } 127 + 77 128 static int nf_tables_register_hook(struct net *net, 78 129 const struct nft_table *table, 79 130 struct nft_chain *chain) 80 131 { 132 + struct nf_hook_ops *ops; 133 + int ret; 134 + 81 135 if (table->flags & NFT_TABLE_F_DORMANT || 82 136 !nft_is_base_chain(chain)) 83 137 return 0; 84 138 85 - return nf_register_net_hook(net, &nft_base_chain(chain)->ops); 139 + ops = &nft_base_chain(chain)->ops; 140 + ret = nf_register_net_hook(net, ops); 141 + if (ret == -EBUSY && nf_tables_allow_nat_conflict(net, ops)) { 142 + ops->nat_hook = false; 143 + ret = nf_register_net_hook(net, ops); 144 + ops->nat_hook = true; 145 + } 146 + 147 + return ret; 86 148 } 87 149 88 150 static void nf_tables_unregister_hook(struct net *net, ··· 1288 1226 free_percpu(basechain->stats); 1289 1227 if (basechain->stats) 1290 1228 static_branch_dec(&nft_counters_enabled); 1291 - if (basechain->ops.dev != NULL) 1292 - dev_put(basechain->ops.dev); 1293 1229 kfree(chain->name); 1294 1230 kfree(basechain); 1295 1231 } else { ··· 1354 1294 } 1355 1295 1356 1296 nla_strlcpy(ifname, ha[NFTA_HOOK_DEV], IFNAMSIZ); 1357 - dev = dev_get_by_name(net, ifname); 1297 + dev = __dev_get_by_name(net, ifname); 1358 1298 if (!dev) { 1359 1299 module_put(type->owner); 1360 1300 return -ENOENT; ··· 1371 1311 static void nft_chain_release_hook(struct nft_chain_hook *hook) 1372 1312 { 1373 1313 module_put(hook->type->owner); 1374 - if (hook->dev != NULL) 1375 - dev_put(hook->dev); 1376 1314 } 1377 1315 1378 1316 static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask, ··· 1969 1911 [NFTA_RULE_POSITION] = { .type = NLA_U64 }, 1970 1912 [NFTA_RULE_USERDATA] = { .type = NLA_BINARY, 1971 1913 .len = NFT_USERDATA_MAXLEN }, 1914 + [NFTA_RULE_ID] = { .type = NLA_U32 }, 1972 1915 }; 1973 1916 1974 1917 static int nf_tables_fill_rule_info(struct sk_buff *skb, struct net *net, ··· 2505 2446 2506 2447
static bool nft_set_ops_candidate(const struct nft_set_ops *ops, u32 flags) 2507 2448 { 2449 + if ((flags & NFT_SET_EVAL) && !ops->update) 2450 + return false; 2451 + 2508 2452 return (flags & ops->features) == (flags & NFT_SET_FEATURES); 2509 2453 } 2510 2454 ··· 2572 2510 if (est.space == best.space && 2573 2511 est.lookup < best.lookup) 2574 2512 break; 2575 - } else if (est.size < best.size) { 2513 + } else if (est.size < best.size || !bops) { 2576 2514 break; 2577 2515 } 2578 2516 continue; ··· 3377 3315 [NFTA_SET_ELEM_TIMEOUT] = { .type = NLA_U64 }, 3378 3316 [NFTA_SET_ELEM_USERDATA] = { .type = NLA_BINARY, 3379 3317 .len = NFT_USERDATA_MAXLEN }, 3318 + [NFTA_SET_ELEM_EXPR] = { .type = NLA_NESTED }, 3319 + [NFTA_SET_ELEM_OBJREF] = { .type = NLA_STRING }, 3380 3320 }; 3381 3321 3382 3322 static const struct nla_policy nft_set_elem_list_policy[NFTA_SET_ELEM_LIST_MAX + 1] = { ··· 4928 4864 return ERR_PTR(-ENOENT); 4929 4865 } 4930 4866 4931 - #define NFT_FLOWTABLE_DEVICE_MAX 8 4932 - 4933 4867 static int nf_tables_parse_devices(const struct nft_ctx *ctx, 4934 4868 const struct nlattr *attr, 4935 4869 struct net_device *dev_array[], int *len) ··· 4944 4882 } 4945 4883 4946 4884 nla_strlcpy(ifname, tmp, IFNAMSIZ); 4947 - dev = dev_get_by_name(ctx->net, ifname); 4885 + dev = __dev_get_by_name(ctx->net, ifname); 4948 4886 if (!dev) { 4949 4887 err = -ENOENT; 4950 4888 goto err1; ··· 5000 4938 err = nf_tables_parse_devices(ctx, tb[NFTA_FLOWTABLE_HOOK_DEVS], 5001 4939 dev_array, &n); 5002 4940 if (err < 0) 5003 - goto err1; 4941 + return err; 5004 4942 5005 4943 ops = kzalloc(sizeof(struct nf_hook_ops) * n, GFP_KERNEL); 5006 - if (!ops) { 5007 - err = -ENOMEM; 5008 - goto err1; 5009 - } 4944 + if (!ops) 4945 + return -ENOMEM; 5010 4946 5011 4947 flowtable->hooknum = hooknum; 5012 4948 flowtable->priority = priority; ··· 5018 4958 flowtable->ops[i].priv = &flowtable->data.rhashtable; 5019 4959 flowtable->ops[i].hook = flowtable->data.type->hook; 5020 4960 flowtable->ops[i].dev = dev_array[i]; 4961 + flowtable->dev_name[i] = kstrdup(dev_array[i]->name, 4962 + GFP_KERNEL); 5021 4963 } 5022 - 5023 - err = 0; 5024 - err1: 5025 - for (i = 0; i < n; i++) 5026 - dev_put(dev_array[i]); 5027 4964 5028 4965 return err; 5029 4966 } ··· 5192 5135 err5: 5193 5136 i = flowtable->ops_len; 5194 5137 err4: 5195 - for (k = i - 1; k >= 0; k--) 5138 + for (k = i - 1; k >= 0; k--) { 5139 + kfree(flowtable->dev_name[k]); 5196 5140 nf_unregister_net_hook(net, &flowtable->ops[k]); 5141 + } 5197 5142 5198 5143 kfree(flowtable->ops); 5199 5144 err3: ··· 5285 5226 goto nla_put_failure; 5286 5227 5287 5228 for (i = 0; i < flowtable->ops_len; i++) { 5288 - if (flowtable->ops[i].dev && 5229 + if (flowtable->dev_name[i][0] && 5289 5230 nla_put_string(skb, NFTA_DEVICE_NAME, 5290 - flowtable->ops[i].dev->name)) 5231 + flowtable->dev_name[i])) 5291 5232 goto nla_put_failure; 5292 5233 } 5293 5234 nla_nest_end(skb, nest_devs); ··· 5529 5470 continue; 5530 5471 5531 5472 nf_unregister_net_hook(dev_net(dev), &flowtable->ops[i]); 5473 + flowtable->dev_name[i][0] = '\0'; 5532 5474 flowtable->ops[i].dev = NULL; 5533 5475 break; 5534 5476 }
+1 -1
net/netfilter/nft_set_hash.c
··· 674 674 nft_hash_select_ops(const struct nft_ctx *ctx, const struct nft_set_desc *desc, 675 675 u32 flags) 676 676 { 677 - if (desc->size && !(flags & NFT_SET_TIMEOUT)) { 677 + if (desc->size && !(flags & (NFT_SET_EVAL | NFT_SET_TIMEOUT))) { 678 678 switch (desc->klen) { 679 679 case 4: 680 680 return &nft_hash_fast_ops;
+3
net/netlink/af_netlink.c
··· 1085 1085 if (addr->sa_family != AF_NETLINK) 1086 1086 return -EINVAL; 1087 1087 1088 + if (alen < sizeof(struct sockaddr_nl)) 1089 + return -EINVAL; 1090 + 1088 1091 if ((nladdr->nl_groups || nladdr->nl_pid) && 1089 1092 !netlink_allowed(sock, NL_CFG_F_NONROOT_SEND)) 1090 1093 return -EPERM;
+3 -1
net/sched/act_api.c
··· 135 135 continue; 136 136 137 137 nest = nla_nest_start(skb, n_i); 138 - if (!nest) 138 + if (!nest) { 139 + index--; 139 140 goto nla_put_failure; 141 + } 140 142 err = tcf_action_dump_1(skb, p, 0, 0); 141 143 if (err < 0) { 142 144 index--;
+14 -5
net/sched/sch_generic.c
··· 373 373 */ 374 374 static inline bool qdisc_restart(struct Qdisc *q, int *packets) 375 375 { 376 + bool more, validate, nolock = q->flags & TCQ_F_NOLOCK; 376 377 spinlock_t *root_lock = NULL; 377 378 struct netdev_queue *txq; 378 379 struct net_device *dev; 379 380 struct sk_buff *skb; 380 - bool validate; 381 381 382 382 /* Dequeue packet */ 383 - skb = dequeue_skb(q, &validate, packets); 384 - if (unlikely(!skb)) 383 + if (nolock && test_and_set_bit(__QDISC_STATE_RUNNING, &q->state)) 385 384 return false; 386 385 387 - if (!(q->flags & TCQ_F_NOLOCK)) 386 + skb = dequeue_skb(q, &validate, packets); 387 + if (unlikely(!skb)) { 388 + if (nolock) 389 + clear_bit(__QDISC_STATE_RUNNING, &q->state); 390 + return false; 391 + } 392 + 393 + if (!nolock) 388 394 root_lock = qdisc_lock(q); 389 395 390 396 dev = qdisc_dev(q); 391 397 txq = skb_get_tx_queue(dev, skb); 392 398 393 - return sch_direct_xmit(skb, q, dev, txq, root_lock, validate); 399 + more = sch_direct_xmit(skb, q, dev, txq, root_lock, validate); 400 + if (nolock) 401 + clear_bit(__QDISC_STATE_RUNNING, &q->state); 402 + return more; 394 403 } 395 404 396 405 void __qdisc_run(struct Qdisc *q)
+1 -1
net/smc/smc_clc.c
··· 133 133 134 134 /* receive the complete CLC message */ 135 135 memset(&msg, 0, sizeof(struct msghdr)); 136 - iov_iter_kvec(&msg.msg_iter, READ | ITER_KVEC, &vec, 1, buflen); 136 + iov_iter_kvec(&msg.msg_iter, READ | ITER_KVEC, &vec, 1, datlen); 137 137 krflags = MSG_WAITALL; 138 138 smc->clcsock->sk->sk_rcvtimeo = CLC_WAIT_TIME; 139 139 len = sock_recvmsg(smc->clcsock, &msg, krflags);
+2 -2
net/strparser/strparser.c
··· 60 60 struct sock *sk = strp->sk; 61 61 62 62 /* Report an error on the lower socket */ 63 - sk->sk_err = err; 63 + sk->sk_err = -err; 64 64 sk->sk_error_report(sk); 65 65 } 66 66 } ··· 458 458 /* Message assembly timed out */ 459 459 STRP_STATS_INCR(strp->stats.msg_timeouts); 460 460 strp->cb.lock(strp); 461 - strp->cb.abort_parser(strp, ETIMEDOUT); 461 + strp->cb.abort_parser(strp, -ETIMEDOUT); 462 462 strp->cb.unlock(strp); 463 463 } 464 464
+6
net/xfrm/xfrm_input.c
··· 26 26 }; 27 27 28 28 struct xfrm_trans_cb { 29 + union { 30 + struct inet_skb_parm h4; 31 + #if IS_ENABLED(CONFIG_IPV6) 32 + struct inet6_skb_parm h6; 33 + #endif 34 + } header; 29 35 int (*finish)(struct net *net, struct sock *sk, struct sk_buff *skb); 30 36 }; 31 37
+3 -2
net/xfrm/xfrm_output.c
··· 285 285 return; 286 286 287 287 afinfo = xfrm_state_get_afinfo(proto); 288 - if (afinfo) 288 + if (afinfo) { 289 289 afinfo->local_error(skb, mtu); 290 - rcu_read_unlock(); 290 + rcu_read_unlock(); 291 + } 291 292 } 292 293 EXPORT_SYMBOL_GPL(xfrm_local_error);
+1 -1
tools/bpf/bpftool/map.c
··· 428 428 jsonw_string_field(json_wtr, "name", info->name); 429 429 430 430 jsonw_name(json_wtr, "flags"); 431 - jsonw_printf(json_wtr, "%#x", info->map_flags); 431 + jsonw_printf(json_wtr, "%d", info->map_flags); 432 432 433 433 print_dev_json(info->ifindex, info->netns_dev, info->netns_ino); 434 434