Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from David Miller:

1) Off by one in mt76 airtime calculation, from Dan Carpenter.

2) Fix TLV fragment allocation loop condition in iwlwifi, from Luca
Coelho.

3) Don't confirm neigh entries when doing ipsec pmtu updates, from Xu
Wang.

4) More checks to make sure we only send TSO packets that lan78xx chips can
actually handle. From James Hughes.

5) Fix ip_tunnel namespace move, from William Dauchy.

6) Fix unintended packet reordering caused by interaction between GRO's
listified receive path and the non-GRO path. From Maxim
Mikityanskiy.

7) Add Jakub Kicinski formally as networking co-maintainer.

8) Info leak in airo ioctls, from Michael Ellerman.

9) IFLA_MTU attribute needs validation during rtnl_create_link(), from
Eric Dumazet.

10) Use after free during reload in mlxsw, from Ido Schimmel.

11) Dangling pointers are possible in tp->highest_sack, fix from Eric
Dumazet.

12) Missing *pos++ in various networking seq_next handlers, from Vasily
Averin (a minimal sketch of the pattern follows this list).

13) CHELSIO_GET_MEM operation needs a CAP_NET_ADMIN check, from Michael
Ellerman.
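
A minimal sketch of the seq_next fix pattern from item 12, distilled from the
cxgb4 hunks further below (seq_tab_get_idx() is the cxgb4 lookup helper): the
position index must advance even when the lookup returns NULL, otherwise a
read that resumes at the end of the table can repeat or loop over entries.

    static void *seq_tab_next(struct seq_file *seq, void *v, loff_t *pos)
    {
            v = seq_tab_get_idx(seq->private, *pos + 1);
            ++(*pos);       /* advance unconditionally, even when v is NULL */
            return v;
    }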

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (109 commits)
firestream: fix memory leaks
net: cxgb3_main: Add CAP_NET_ADMIN check to CHELSIO_GET_MEM
net: bcmgenet: Use netif_tx_napi_add() for TX NAPI
tipc: change maintainer email address
net: stmmac: platform: fix probe for ACPI devices
net/mlx5e: kTLS, Do not send decrypted-marked SKBs via non-accel path
net/mlx5e: kTLS, Remove redundant posts in TX resync flow
net/mlx5e: kTLS, Fix corner-case checks in TX resync flow
net/mlx5e: Clear VF config when switching modes
net/mlx5: DR, use non preemptible call to get the current cpu number
net/mlx5: E-Switch, Prevent ingress rate configuration of uplink rep
net/mlx5: DR, Enable counter on non-fwd-dest objects
net/mlx5: Update the list of the PCI supported devices
net/mlx5: Fix lowest FDB pool size
net: Fix skb->csum update in inet_proto_csum_replace16().
netfilter: nf_tables: autoload modules from the abort path
netfilter: nf_tables: add __nft_chain_type_get()
netfilter: nf_tables_offload: fix check the chain offload flag
netfilter: conntrack: sctp: use distinct states for new SCTP connections
ipv6_route_seq_next should increase position index
...

Changed files
+1465 -599
+13
Documentation/devicetree/bindings/net/fsl-fman.txt
··· 403 403 The settings and programming routines for internal/external 404 404 MDIO are different. Must be included for internal MDIO. 405 405 406 + - fsl,erratum-a011043 407 + Usage: optional 408 + Value type: <boolean> 409 + Definition: Indicates the presence of the A011043 erratum 410 + describing that the MDIO_CFG[MDIO_RD_ER] bit may be falsely 411 + set when reading internal PCS registers. MDIO reads to 412 + internal PCS registers may result in having the 413 + MDIO_CFG[MDIO_RD_ER] bit set, even when there is no error and 414 + read data (MDIO_DATA[MDIO_DATA]) is correct. 415 + Software may get false read error when reading internal 416 + PCS registers through MDIO. As a workaround, all internal 417 + MDIO accesses should ignore the MDIO_CFG[MDIO_RD_ER] bit. 418 + 406 419 For internal PHY device on internal mdio bus, a PHY node should be created. 407 420 See the definition of the PHY node in booting-without-of.txt for an 408 421 example of how to define a PHY (Internal PHY has no interrupt line).
+5 -3
MAINTAINERS
··· 6197 6197 M: Andrew Lunn <andrew@lunn.ch> 6198 6198 M: Florian Fainelli <f.fainelli@gmail.com> 6199 6199 M: Heiner Kallweit <hkallweit1@gmail.com> 6200 + R: Russell King <linux@armlinux.org.uk> 6200 6201 L: netdev@vger.kernel.org 6201 6202 S: Maintained 6202 6203 F: Documentation/ABI/testing/sysfs-class-net-phydev ··· 8570 8569 F: drivers/platform/x86/intel-vbtn.c 8571 8570 8572 8571 INTEL WIRELESS 3945ABG/BG, 4965AGN (iwlegacy) 8573 - M: Stanislaw Gruszka <sgruszka@redhat.com> 8572 + M: Stanislaw Gruszka <stf_xl@wp.pl> 8574 8573 L: linux-wireless@vger.kernel.org 8575 8574 S: Supported 8576 8575 F: drivers/net/wireless/intel/iwlegacy/ ··· 11500 11499 11501 11500 NETWORKING [GENERAL] 11502 11501 M: "David S. Miller" <davem@davemloft.net> 11502 + M: Jakub Kicinski <kuba@kernel.org> 11503 11503 L: netdev@vger.kernel.org 11504 11504 W: http://www.linuxfoundation.org/en/Net 11505 11505 Q: http://patchwork.ozlabs.org/project/netdev/list/ ··· 13822 13820 F: arch/mips/ralink 13823 13821 13824 13822 RALINK RT2X00 WIRELESS LAN DRIVER 13825 - M: Stanislaw Gruszka <sgruszka@redhat.com> 13823 + M: Stanislaw Gruszka <stf_xl@wp.pl> 13826 13824 M: Helmut Schaa <helmut.schaa@googlemail.com> 13827 13825 L: linux-wireless@vger.kernel.org 13828 13826 S: Maintained ··· 16603 16601 F: tools/testing/selftests/timers/ 16604 16602 16605 16603 TIPC NETWORK LAYER 16606 - M: Jon Maloy <jon.maloy@ericsson.com> 16604 + M: Jon Maloy <jmaloy@redhat.com> 16607 16605 M: Ying Xue <ying.xue@windriver.com> 16608 16606 L: netdev@vger.kernel.org (core kernel code) 16609 16607 L: tipc-discussion@lists.sourceforge.net (user apps, general discussion)
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-0-best-effort.dtsi
··· 63 63 #size-cells = <0>; 64 64 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 65 65 reg = <0xe1000 0x1000>; 66 + fsl,erratum-a011043; /* must ignore read errors */ 66 67 67 68 pcsphy0: ethernet-phy@0 { 68 69 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-0.dtsi
··· 60 60 #size-cells = <0>; 61 61 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 62 62 reg = <0xf1000 0x1000>; 63 + fsl,erratum-a011043; /* must ignore read errors */ 63 64 64 65 pcsphy6: ethernet-phy@0 { 65 66 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-1-best-effort.dtsi
··· 63 63 #size-cells = <0>; 64 64 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 65 65 reg = <0xe3000 0x1000>; 66 + fsl,erratum-a011043; /* must ignore read errors */ 66 67 67 68 pcsphy1: ethernet-phy@0 { 68 69 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-1.dtsi
··· 60 60 #size-cells = <0>; 61 61 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 62 62 reg = <0xf3000 0x1000>; 63 + fsl,erratum-a011043; /* must ignore read errors */ 63 64 64 65 pcsphy7: ethernet-phy@0 { 65 66 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-1g-0.dtsi
··· 59 59 #size-cells = <0>; 60 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 61 reg = <0xe1000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 62 63 63 64 pcsphy0: ethernet-phy@0 { 64 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-1g-1.dtsi
··· 59 59 #size-cells = <0>; 60 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 61 reg = <0xe3000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 62 63 63 64 pcsphy1: ethernet-phy@0 { 64 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-1g-2.dtsi
··· 59 59 #size-cells = <0>; 60 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 61 reg = <0xe5000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 62 63 63 64 pcsphy2: ethernet-phy@0 { 64 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-1g-3.dtsi
··· 59 59 #size-cells = <0>; 60 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 61 reg = <0xe7000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 62 63 63 64 pcsphy3: ethernet-phy@0 { 64 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-1g-4.dtsi
··· 59 59 #size-cells = <0>; 60 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 61 reg = <0xe9000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 62 63 63 64 pcsphy4: ethernet-phy@0 { 64 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-1g-5.dtsi
··· 59 59 #size-cells = <0>; 60 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 61 reg = <0xeb000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 62 63 63 64 pcsphy5: ethernet-phy@0 { 64 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-1-10g-0.dtsi
··· 60 60 #size-cells = <0>; 61 61 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 62 62 reg = <0xf1000 0x1000>; 63 + fsl,erratum-a011043; /* must ignore read errors */ 63 64 64 65 pcsphy14: ethernet-phy@0 { 65 66 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-1-10g-1.dtsi
··· 60 60 #size-cells = <0>; 61 61 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 62 62 reg = <0xf3000 0x1000>; 63 + fsl,erratum-a011043; /* must ignore read errors */ 63 64 64 65 pcsphy15: ethernet-phy@0 { 65 66 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-1-1g-0.dtsi
··· 59 59 #size-cells = <0>; 60 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 61 reg = <0xe1000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 62 63 63 64 pcsphy8: ethernet-phy@0 { 64 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-1-1g-1.dtsi
··· 59 59 #size-cells = <0>; 60 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 61 reg = <0xe3000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 62 63 63 64 pcsphy9: ethernet-phy@0 { 64 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-1-1g-2.dtsi
··· 59 59 #size-cells = <0>; 60 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 61 reg = <0xe5000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 62 63 63 64 pcsphy10: ethernet-phy@0 { 64 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-1-1g-3.dtsi
··· 59 59 #size-cells = <0>; 60 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 61 reg = <0xe7000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 62 63 63 64 pcsphy11: ethernet-phy@0 { 64 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-1-1g-4.dtsi
··· 59 59 #size-cells = <0>; 60 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 61 reg = <0xe9000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 62 63 63 64 pcsphy12: ethernet-phy@0 { 64 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-1-1g-5.dtsi
··· 59 59 #size-cells = <0>; 60 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 61 reg = <0xeb000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 62 63 63 64 pcsphy13: ethernet-phy@0 { 64 65 reg = <0x0>;
+3
drivers/atm/firestream.c
··· 912 912 } 913 913 if (!to) { 914 914 printk ("No more free channels for FS50..\n"); 915 + kfree(vcc); 915 916 return -EBUSY; 916 917 } 917 918 vcc->channo = dev->channo; ··· 923 922 if (((DO_DIRECTION(rxtp) && dev->atm_vccs[vcc->channo])) || 924 923 ( DO_DIRECTION(txtp) && test_bit (vcc->channo, dev->tx_inuse))) { 925 924 printk ("Channel is in use for FS155.\n"); 925 + kfree(vcc); 926 926 return -EBUSY; 927 927 } 928 928 } ··· 937 935 tc, sizeof (struct fs_transmit_config)); 938 936 if (!tc) { 939 937 fs_dprintk (FS_DEBUG_OPEN, "fs: can't alloc transmit_config.\n"); 938 + kfree(vcc); 940 939 return -ENOMEM; 941 940 } 942 941
+10 -2
drivers/net/can/slcan.c
··· 344 344 */ 345 345 static void slcan_write_wakeup(struct tty_struct *tty) 346 346 { 347 - struct slcan *sl = tty->disc_data; 347 + struct slcan *sl; 348 + 349 + rcu_read_lock(); 350 + sl = rcu_dereference(tty->disc_data); 351 + if (!sl) 352 + goto out; 348 353 349 354 schedule_work(&sl->tx_work); 355 + out: 356 + rcu_read_unlock(); 350 357 } 351 358 352 359 /* Send a can_frame to a TTY queue. */ ··· 651 644 return; 652 645 653 646 spin_lock_bh(&sl->lock); 654 - tty->disc_data = NULL; 647 + rcu_assign_pointer(tty->disc_data, NULL); 655 648 sl->tty = NULL; 656 649 spin_unlock_bh(&sl->lock); 657 650 651 + synchronize_rcu(); 658 652 flush_work(&sl->tx_work); 659 653 660 654 /* Flush network side */
+2 -2
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 2164 2164 DMA_END_ADDR); 2165 2165 2166 2166 /* Initialize Tx NAPI */ 2167 - netif_napi_add(priv->dev, &ring->napi, bcmgenet_tx_poll, 2168 - NAPI_POLL_WEIGHT); 2167 + netif_tx_napi_add(priv->dev, &ring->napi, bcmgenet_tx_poll, 2168 + NAPI_POLL_WEIGHT); 2169 2169 } 2170 2170 2171 2171 /* Initialize a RDMA ring */
+2
drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
··· 2448 2448 2449 2449 if (!is_offload(adapter)) 2450 2450 return -EOPNOTSUPP; 2451 + if (!capable(CAP_NET_ADMIN)) 2452 + return -EPERM; 2451 2453 if (!(adapter->flags & FULL_INIT_DONE)) 2452 2454 return -EIO; /* need the memory controllers */ 2453 2455 if (copy_from_user(&t, useraddr, sizeof(t)))
+1 -2
drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
··· 70 70 static void *seq_tab_next(struct seq_file *seq, void *v, loff_t *pos) 71 71 { 72 72 v = seq_tab_get_idx(seq->private, *pos + 1); 73 - if (v) 74 - ++*pos; 73 + ++(*pos); 75 74 return v; 76 75 } 77 76
+1 -2
drivers/net/ethernet/chelsio/cxgb4/l2t.c
··· 678 678 static void *l2t_seq_next(struct seq_file *seq, void *v, loff_t *pos) 679 679 { 680 680 v = l2t_get_idx(seq, *pos); 681 - if (v) 682 - ++*pos; 681 + ++(*pos); 683 682 return v; 684 683 } 685 684
+2 -2
drivers/net/ethernet/freescale/fman/fman_memac.c
··· 110 110 /* Interface Mode Register (IF_MODE) */ 111 111 112 112 #define IF_MODE_MASK 0x00000003 /* 30-31 Mask on i/f mode bits */ 113 - #define IF_MODE_XGMII 0x00000000 /* 30-31 XGMII (10G) interface */ 113 + #define IF_MODE_10G 0x00000000 /* 30-31 10G interface */ 114 114 #define IF_MODE_GMII 0x00000002 /* 30-31 GMII (1G) interface */ 115 115 #define IF_MODE_RGMII 0x00000004 116 116 #define IF_MODE_RGMII_AUTO 0x00008000 ··· 440 440 tmp = 0; 441 441 switch (phy_if) { 442 442 case PHY_INTERFACE_MODE_XGMII: 443 - tmp |= IF_MODE_XGMII; 443 + tmp |= IF_MODE_10G; 444 444 break; 445 445 default: 446 446 tmp |= IF_MODE_GMII;
+6 -1
drivers/net/ethernet/freescale/xgmac_mdio.c
··· 49 49 struct mdio_fsl_priv { 50 50 struct tgec_mdio_controller __iomem *mdio_base; 51 51 bool is_little_endian; 52 + bool has_a011043; 52 53 }; 53 54 54 55 static u32 xgmac_read32(void __iomem *regs, ··· 227 226 return ret; 228 227 229 228 /* Return all Fs if nothing was there */ 230 - if (xgmac_read32(&regs->mdio_stat, endian) & MDIO_STAT_RD_ER) { 229 + if ((xgmac_read32(&regs->mdio_stat, endian) & MDIO_STAT_RD_ER) && 230 + !priv->has_a011043) { 231 231 dev_err(&bus->dev, 232 232 "Error while reading PHY%d reg at %d.%hhu\n", 233 233 phy_id, dev_addr, regnum); ··· 275 273 276 274 priv->is_little_endian = of_property_read_bool(pdev->dev.of_node, 277 275 "little-endian"); 276 + 277 + priv->has_a011043 = of_property_read_bool(pdev->dev.of_node, 278 + "fsl,erratum-a011043"); 278 279 279 280 ret = of_mdiobus_register(bus, np); 280 281 if (ret) {
+1 -1
drivers/net/ethernet/intel/i40e/i40e_common.c
··· 1113 1113 */ 1114 1114 pba_size--; 1115 1115 if (pba_num_size < (((u32)pba_size * 2) + 1)) { 1116 - hw_dbg(hw, "Buffer to small for PBA data.\n"); 1116 + hw_dbg(hw, "Buffer too small for PBA data.\n"); 1117 1117 return I40E_ERR_PARAM; 1118 1118 } 1119 1119
+29 -20
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
··· 180 180 181 181 struct tx_sync_info { 182 182 u64 rcd_sn; 183 - s32 sync_len; 183 + u32 sync_len; 184 184 int nr_frags; 185 185 skb_frag_t frags[MAX_SKB_FRAGS]; 186 186 }; ··· 193 193 194 194 static enum mlx5e_ktls_sync_retval 195 195 tx_sync_info_get(struct mlx5e_ktls_offload_context_tx *priv_tx, 196 - u32 tcp_seq, struct tx_sync_info *info) 196 + u32 tcp_seq, int datalen, struct tx_sync_info *info) 197 197 { 198 198 struct tls_offload_context_tx *tx_ctx = priv_tx->tx_ctx; 199 199 enum mlx5e_ktls_sync_retval ret = MLX5E_KTLS_SYNC_DONE; 200 200 struct tls_record_info *record; 201 201 int remaining, i = 0; 202 202 unsigned long flags; 203 + bool ends_before; 203 204 204 205 spin_lock_irqsave(&tx_ctx->lock, flags); 205 206 record = tls_get_record(tx_ctx, tcp_seq, &info->rcd_sn); ··· 210 209 goto out; 211 210 } 212 211 213 - if (unlikely(tcp_seq < tls_record_start_seq(record))) { 214 - ret = tls_record_is_start_marker(record) ? 215 - MLX5E_KTLS_SYNC_SKIP_NO_DATA : MLX5E_KTLS_SYNC_FAIL; 212 + /* There are the following cases: 213 + * 1. packet ends before start marker: bypass offload. 214 + * 2. packet starts before start marker and ends after it: drop, 215 + * not supported, breaks contract with kernel. 216 + * 3. packet ends before tls record info starts: drop, 217 + * this packet was already acknowledged and its record info 218 + * was released. 219 + */ 220 + ends_before = before(tcp_seq + datalen, tls_record_start_seq(record)); 221 + 222 + if (unlikely(tls_record_is_start_marker(record))) { 223 + ret = ends_before ? MLX5E_KTLS_SYNC_SKIP_NO_DATA : MLX5E_KTLS_SYNC_FAIL; 224 + goto out; 225 + } else if (ends_before) { 226 + ret = MLX5E_KTLS_SYNC_FAIL; 216 227 goto out; 217 228 } 218 229 ··· 350 337 u8 num_wqebbs; 351 338 int i = 0; 352 339 353 - ret = tx_sync_info_get(priv_tx, seq, &info); 340 + ret = tx_sync_info_get(priv_tx, seq, datalen, &info); 354 341 if (unlikely(ret != MLX5E_KTLS_SYNC_DONE)) { 355 342 if (ret == MLX5E_KTLS_SYNC_SKIP_NO_DATA) { 356 343 stats->tls_skip_no_sync_data++; ··· 361 348 * It should be safe to drop the packet in this case 362 349 */ 363 350 stats->tls_drop_no_sync_data++; 364 - goto err_out; 365 - } 366 - 367 - if (unlikely(info.sync_len < 0)) { 368 - if (likely(datalen <= -info.sync_len)) 369 - return MLX5E_KTLS_SYNC_DONE; 370 - 371 - stats->tls_drop_bypass_req++; 372 351 goto err_out; 373 352 } 374 353 ··· 382 377 383 378 if (unlikely(contig_wqebbs_room < num_wqebbs)) 384 379 mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room); 385 - 386 - tx_post_resync_params(sq, priv_tx, info.rcd_sn); 387 380 388 381 for (; i < info.nr_frags; i++) { 389 382 unsigned int orig_fsz, frag_offset = 0, n = 0; ··· 458 455 enum mlx5e_ktls_sync_retval ret = 459 456 mlx5e_ktls_tx_handle_ooo(priv_tx, sq, datalen, seq); 460 457 461 - if (likely(ret == MLX5E_KTLS_SYNC_DONE)) 458 + switch (ret) { 459 + case MLX5E_KTLS_SYNC_DONE: 462 460 *wqe = mlx5e_sq_fetch_wqe(sq, sizeof(**wqe), pi); 463 - else if (ret == MLX5E_KTLS_SYNC_FAIL) 461 + break; 462 + case MLX5E_KTLS_SYNC_SKIP_NO_DATA: 463 + if (likely(!skb->decrypted)) 464 + goto out; 465 + WARN_ON_ONCE(1); 466 + /* fall-through */ 467 + default: /* MLX5E_KTLS_SYNC_FAIL */ 464 468 goto err_out; 465 - else /* ret == MLX5E_KTLS_SYNC_SKIP_NO_DATA */ 466 - goto out; 469 + } 467 470 } 468 471 469 472 priv_tx->expected_seq = seq + datalen;
+7 -2
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 4036 4036 u32 rate_mbps; 4037 4037 int err; 4038 4038 4039 + vport_num = rpriv->rep->vport; 4040 + if (vport_num >= MLX5_VPORT_ECPF) { 4041 + NL_SET_ERR_MSG_MOD(extack, 4042 + "Ingress rate limit is supported only for Eswitch ports connected to VFs"); 4043 + return -EOPNOTSUPP; 4044 + } 4045 + 4039 4046 esw = priv->mdev->priv.eswitch; 4040 4047 /* rate is given in bytes/sec. 4041 4048 * First convert to bits/sec and then round to the nearest mbit/secs. ··· 4051 4044 * 1 mbit/sec. 4052 4045 */ 4053 4046 rate_mbps = rate ? max_t(u32, (rate * 8 + 500000) / 1000000, 1) : 0; 4054 - vport_num = rpriv->rep->vport; 4055 - 4056 4047 err = mlx5_esw_modify_vport_rate(esw, vport_num, rate_mbps); 4057 4048 if (err) 4058 4049 NL_SET_ERR_MSG_MOD(extack, "failed applying action to hardware");
+3 -1
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
··· 1928 1928 struct mlx5_vport *vport; 1929 1929 int i; 1930 1930 1931 - mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs) 1931 + mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs) { 1932 1932 memset(&vport->info, 0, sizeof(vport->info)); 1933 + vport->info.link_state = MLX5_VPORT_ADMIN_STATE_AUTO; 1934 + } 1933 1935 } 1934 1936 1935 1937 /* Public E-Switch API */
+9 -4
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 866 866 */ 867 867 #define ESW_SIZE (16 * 1024 * 1024) 868 868 const unsigned int ESW_POOLS[4] = { 4 * 1024 * 1024, 1 * 1024 * 1024, 869 - 64 * 1024, 4 * 1024 }; 869 + 64 * 1024, 128 }; 870 870 871 871 static int 872 872 get_sz_from_pool(struct mlx5_eswitch *esw) ··· 1377 1377 return -EINVAL; 1378 1378 } 1379 1379 1380 - mlx5_eswitch_disable(esw, false); 1380 + mlx5_eswitch_disable(esw, true); 1381 1381 mlx5_eswitch_update_num_of_vfs(esw, esw->dev->priv.sriov.num_vfs); 1382 1382 err = mlx5_eswitch_enable(esw, MLX5_ESWITCH_OFFLOADS); 1383 1383 if (err) { ··· 2220 2220 2221 2221 int esw_offloads_enable(struct mlx5_eswitch *esw) 2222 2222 { 2223 - int err; 2223 + struct mlx5_vport *vport; 2224 + int err, i; 2224 2225 2225 2226 if (MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, reformat) && 2226 2227 MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, decap)) ··· 2237 2236 err = esw_set_passing_vport_metadata(esw, true); 2238 2237 if (err) 2239 2238 goto err_vport_metadata; 2239 + 2240 + /* Representor will control the vport link state */ 2241 + mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs) 2242 + vport->info.link_state = MLX5_VPORT_ADMIN_STATE_DOWN; 2240 2243 2241 2244 err = mlx5_eswitch_enable_pf_vf_vports(esw, MLX5_VPORT_UC_ADDR_CHANGE); 2242 2245 if (err) ··· 2271 2266 { 2272 2267 int err, err1; 2273 2268 2274 - mlx5_eswitch_disable(esw, false); 2269 + mlx5_eswitch_disable(esw, true); 2275 2270 err = mlx5_eswitch_enable(esw, MLX5_ESWITCH_LEGACY); 2276 2271 if (err) { 2277 2272 NL_SET_ERR_MSG_MOD(extack, "Failed setting eswitch to legacy");
+1
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 1563 1563 { PCI_VDEVICE(MELLANOX, 0x101d) }, /* ConnectX-6 Dx */ 1564 1564 { PCI_VDEVICE(MELLANOX, 0x101e), MLX5_PCI_DEV_IS_VF}, /* ConnectX Family mlx5Gen Virtual Function */ 1565 1565 { PCI_VDEVICE(MELLANOX, 0x101f) }, /* ConnectX-6 LX */ 1566 + { PCI_VDEVICE(MELLANOX, 0x1021) }, /* ConnectX-7 */ 1566 1567 { PCI_VDEVICE(MELLANOX, 0xa2d2) }, /* BlueField integrated ConnectX-5 network controller */ 1567 1568 { PCI_VDEVICE(MELLANOX, 0xa2d3), MLX5_PCI_DEV_IS_VF}, /* BlueField integrated ConnectX-5 network controller VF */ 1568 1569 { PCI_VDEVICE(MELLANOX, 0xa2d6) }, /* BlueField-2 integrated ConnectX-6 Dx network controller */
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB 2 2 /* Copyright (c) 2019 Mellanox Technologies. */ 3 3 4 + #include <linux/smp.h> 4 5 #include "dr_types.h" 5 6 6 7 #define QUEUE_SIZE 128 ··· 730 729 if (!in) 731 730 goto err_cqwq; 732 731 733 - vector = smp_processor_id() % mlx5_comp_vectors_count(mdev); 732 + vector = raw_smp_processor_id() % mlx5_comp_vectors_count(mdev); 734 733 err = mlx5_vector2eqn(mdev, vector, &eqn, &irqn); 735 734 if (err) { 736 735 kvfree(in);
+29 -13
drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
··· 352 352 if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) { 353 353 list_for_each_entry(dst, &fte->node.children, node.list) { 354 354 enum mlx5_flow_destination_type type = dst->dest_attr.type; 355 - u32 id; 356 355 357 356 if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { 358 357 err = -ENOSPC; 359 358 goto free_actions; 360 359 } 361 360 362 - switch (type) { 363 - case MLX5_FLOW_DESTINATION_TYPE_COUNTER: 364 - id = dst->dest_attr.counter_id; 361 + if (type == MLX5_FLOW_DESTINATION_TYPE_COUNTER) 362 + continue; 365 363 366 - tmp_action = 367 - mlx5dr_action_create_flow_counter(id); 368 - if (!tmp_action) { 369 - err = -ENOMEM; 370 - goto free_actions; 371 - } 372 - fs_dr_actions[fs_dr_num_actions++] = tmp_action; 373 - actions[num_actions++] = tmp_action; 374 - break; 364 + switch (type) { 375 365 case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE: 376 366 tmp_action = create_ft_action(dev, dst); 377 367 if (!tmp_action) { ··· 384 394 err = -EOPNOTSUPP; 385 395 goto free_actions; 386 396 } 397 + } 398 + } 399 + 400 + if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_COUNT) { 401 + list_for_each_entry(dst, &fte->node.children, node.list) { 402 + u32 id; 403 + 404 + if (dst->dest_attr.type != 405 + MLX5_FLOW_DESTINATION_TYPE_COUNTER) 406 + continue; 407 + 408 + if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { 409 + err = -ENOSPC; 410 + goto free_actions; 411 + } 412 + 413 + id = dst->dest_attr.counter_id; 414 + tmp_action = 415 + mlx5dr_action_create_flow_counter(id); 416 + if (!tmp_action) { 417 + err = -ENOMEM; 418 + goto free_actions; 419 + } 420 + 421 + fs_dr_actions[fs_dr_num_actions++] = tmp_action; 422 + actions[num_actions++] = tmp_action; 387 423 } 388 424 } 389 425
+12 -4
drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
··· 8 8 #include <linux/string.h> 9 9 #include <linux/rhashtable.h> 10 10 #include <linux/netdevice.h> 11 + #include <linux/mutex.h> 11 12 #include <net/net_namespace.h> 12 13 #include <net/tc_act/tc_vlan.h> 13 14 ··· 26 25 struct mlxsw_sp_fid *dummy_fid; 27 26 struct rhashtable ruleset_ht; 28 27 struct list_head rules; 28 + struct mutex rules_lock; /* Protects rules list */ 29 29 struct { 30 30 struct delayed_work dw; 31 31 unsigned long interval; /* ms */ ··· 703 701 goto err_ruleset_block_bind; 704 702 } 705 703 704 + mutex_lock(&mlxsw_sp->acl->rules_lock); 706 705 list_add_tail(&rule->list, &mlxsw_sp->acl->rules); 706 + mutex_unlock(&mlxsw_sp->acl->rules_lock); 707 707 block->rule_count++; 708 708 block->egress_blocker_rule_count += rule->rulei->egress_bind_blocker; 709 709 return 0; ··· 727 723 728 724 block->egress_blocker_rule_count -= rule->rulei->egress_bind_blocker; 729 725 ruleset->ht_key.block->rule_count--; 726 + mutex_lock(&mlxsw_sp->acl->rules_lock); 730 727 list_del(&rule->list); 728 + mutex_unlock(&mlxsw_sp->acl->rules_lock); 731 729 if (!ruleset->ht_key.chain_index && 732 730 mlxsw_sp_acl_ruleset_is_singular(ruleset)) 733 731 mlxsw_sp_acl_ruleset_block_unbind(mlxsw_sp, ruleset, ··· 789 783 struct mlxsw_sp_acl_rule *rule; 790 784 int err; 791 785 792 - /* Protect internal structures from changes */ 793 - rtnl_lock(); 786 + mutex_lock(&acl->rules_lock); 794 787 list_for_each_entry(rule, &acl->rules, list) { 795 788 err = mlxsw_sp_acl_rule_activity_update(acl->mlxsw_sp, 796 789 rule); 797 790 if (err) 798 791 goto err_rule_update; 799 792 } 800 - rtnl_unlock(); 793 + mutex_unlock(&acl->rules_lock); 801 794 return 0; 802 795 803 796 err_rule_update: 804 - rtnl_unlock(); 797 + mutex_unlock(&acl->rules_lock); 805 798 return err; 806 799 } 807 800 ··· 885 880 acl->dummy_fid = fid; 886 881 887 882 INIT_LIST_HEAD(&acl->rules); 883 + mutex_init(&acl->rules_lock); 888 884 err = mlxsw_sp_acl_tcam_init(mlxsw_sp, &acl->tcam); 889 885 if (err) 890 886 goto err_acl_ops_init; ··· 898 892 return 0; 899 893 900 894 err_acl_ops_init: 895 + mutex_destroy(&acl->rules_lock); 901 896 mlxsw_sp_fid_put(fid); 902 897 err_fid_get: 903 898 rhashtable_destroy(&acl->ruleset_ht); ··· 915 908 916 909 cancel_delayed_work_sync(&mlxsw_sp->acl->rule_activity_update.dw); 917 910 mlxsw_sp_acl_tcam_fini(mlxsw_sp, &acl->tcam); 911 + mutex_destroy(&acl->rules_lock); 918 912 WARN_ON(!list_empty(&acl->rules)); 919 913 mlxsw_sp_fid_put(acl->dummy_fid); 920 914 rhashtable_destroy(&acl->ruleset_ht);
+227 -147
drivers/net/ethernet/natsemi/sonic.c
··· 64 64 65 65 netif_dbg(lp, ifup, dev, "%s: initializing sonic driver\n", __func__); 66 66 67 + spin_lock_init(&lp->lock); 68 + 67 69 for (i = 0; i < SONIC_NUM_RRS; i++) { 68 70 struct sk_buff *skb = netdev_alloc_skb(dev, SONIC_RBSIZE + 2); 69 71 if (skb == NULL) { ··· 116 114 return 0; 117 115 } 118 116 117 + /* Wait for the SONIC to become idle. */ 118 + static void sonic_quiesce(struct net_device *dev, u16 mask) 119 + { 120 + struct sonic_local * __maybe_unused lp = netdev_priv(dev); 121 + int i; 122 + u16 bits; 123 + 124 + for (i = 0; i < 1000; ++i) { 125 + bits = SONIC_READ(SONIC_CMD) & mask; 126 + if (!bits) 127 + return; 128 + if (irqs_disabled() || in_interrupt()) 129 + udelay(20); 130 + else 131 + usleep_range(100, 200); 132 + } 133 + WARN_ONCE(1, "command deadline expired! 0x%04x\n", bits); 134 + } 119 135 120 136 /* 121 137 * Close the SONIC device ··· 150 130 /* 151 131 * stop the SONIC, disable interrupts 152 132 */ 133 + SONIC_WRITE(SONIC_CMD, SONIC_CR_RXDIS); 134 + sonic_quiesce(dev, SONIC_CR_ALL); 135 + 153 136 SONIC_WRITE(SONIC_IMR, 0); 154 137 SONIC_WRITE(SONIC_ISR, 0x7fff); 155 138 SONIC_WRITE(SONIC_CMD, SONIC_CR_RST); ··· 192 169 * put the Sonic into software-reset mode and 193 170 * disable all interrupts before releasing DMA buffers 194 171 */ 172 + SONIC_WRITE(SONIC_CMD, SONIC_CR_RXDIS); 173 + sonic_quiesce(dev, SONIC_CR_ALL); 174 + 195 175 SONIC_WRITE(SONIC_IMR, 0); 196 176 SONIC_WRITE(SONIC_ISR, 0x7fff); 197 177 SONIC_WRITE(SONIC_CMD, SONIC_CR_RST); ··· 232 206 * wake the tx queue 233 207 * Concurrently with all of this, the SONIC is potentially writing to 234 208 * the status flags of the TDs. 235 - * Until some mutual exclusion is added, this code will not work with SMP. However, 236 - * MIPS Jazz machines and m68k Macs were all uni-processor machines. 237 209 */ 238 210 239 211 static int sonic_send_packet(struct sk_buff *skb, struct net_device *dev) ··· 239 215 struct sonic_local *lp = netdev_priv(dev); 240 216 dma_addr_t laddr; 241 217 int length; 242 - int entry = lp->next_tx; 218 + int entry; 219 + unsigned long flags; 243 220 244 221 netif_dbg(lp, tx_queued, dev, "%s: skb=%p\n", __func__, skb); 245 222 ··· 262 237 return NETDEV_TX_OK; 263 238 } 264 239 240 + spin_lock_irqsave(&lp->lock, flags); 241 + 242 + entry = lp->next_tx; 243 + 265 244 sonic_tda_put(dev, entry, SONIC_TD_STATUS, 0); /* clear status */ 266 245 sonic_tda_put(dev, entry, SONIC_TD_FRAG_COUNT, 1); /* single fragment */ 267 246 sonic_tda_put(dev, entry, SONIC_TD_PKTSIZE, length); /* length of packet */ ··· 275 246 sonic_tda_put(dev, entry, SONIC_TD_LINK, 276 247 sonic_tda_get(dev, entry, SONIC_TD_LINK) | SONIC_EOL); 277 248 278 - /* 279 - * Must set tx_skb[entry] only after clearing status, and 280 - * before clearing EOL and before stopping queue 281 - */ 282 249 wmb(); 283 250 lp->tx_len[entry] = length; 284 251 lp->tx_laddr[entry] = laddr; ··· 297 272 298 273 SONIC_WRITE(SONIC_CMD, SONIC_CR_TXP); 299 274 275 + spin_unlock_irqrestore(&lp->lock, flags); 276 + 300 277 return NETDEV_TX_OK; 301 278 } 302 279 ··· 311 284 struct net_device *dev = dev_id; 312 285 struct sonic_local *lp = netdev_priv(dev); 313 286 int status; 287 + unsigned long flags; 314 288 315 - if (!(status = SONIC_READ(SONIC_ISR) & SONIC_IMR_DEFAULT)) 289 + /* The lock has two purposes. Firstly, it synchronizes sonic_interrupt() 290 + * with sonic_send_packet() so that the two functions can share state. 
291 + * Secondly, it makes sonic_interrupt() re-entrant, as that is required 292 + * by macsonic which must use two IRQs with different priority levels. 293 + */ 294 + spin_lock_irqsave(&lp->lock, flags); 295 + 296 + status = SONIC_READ(SONIC_ISR) & SONIC_IMR_DEFAULT; 297 + if (!status) { 298 + spin_unlock_irqrestore(&lp->lock, flags); 299 + 316 300 return IRQ_NONE; 301 + } 317 302 318 303 do { 304 + SONIC_WRITE(SONIC_ISR, status); /* clear the interrupt(s) */ 305 + 319 306 if (status & SONIC_INT_PKTRX) { 320 307 netif_dbg(lp, intr, dev, "%s: packet rx\n", __func__); 321 308 sonic_rx(dev); /* got packet(s) */ 322 - SONIC_WRITE(SONIC_ISR, SONIC_INT_PKTRX); /* clear the interrupt */ 323 309 } 324 310 325 311 if (status & SONIC_INT_TXDN) { ··· 340 300 int td_status; 341 301 int freed_some = 0; 342 302 343 - /* At this point, cur_tx is the index of a TD that is one of: 344 - * unallocated/freed (status set & tx_skb[entry] clear) 345 - * allocated and sent (status set & tx_skb[entry] set ) 346 - * allocated and not yet sent (status clear & tx_skb[entry] set ) 347 - * still being allocated by sonic_send_packet (status clear & tx_skb[entry] clear) 303 + /* The state of a Transmit Descriptor may be inferred 304 + * from { tx_skb[entry], td_status } as follows. 305 + * { clear, clear } => the TD has never been used 306 + * { set, clear } => the TD was handed to SONIC 307 + * { set, set } => the TD was handed back 308 + * { clear, set } => the TD is available for re-use 348 309 */ 349 310 350 311 netif_dbg(lp, intr, dev, "%s: tx done\n", __func__); ··· 354 313 if ((td_status = sonic_tda_get(dev, entry, SONIC_TD_STATUS)) == 0) 355 314 break; 356 315 357 - if (td_status & 0x0001) { 316 + if (td_status & SONIC_TCR_PTX) { 358 317 lp->stats.tx_packets++; 359 318 lp->stats.tx_bytes += sonic_tda_get(dev, entry, SONIC_TD_PKTSIZE); 360 319 } else { 361 - lp->stats.tx_errors++; 362 - if (td_status & 0x0642) 320 + if (td_status & (SONIC_TCR_EXD | 321 + SONIC_TCR_EXC | SONIC_TCR_BCM)) 363 322 lp->stats.tx_aborted_errors++; 364 - if (td_status & 0x0180) 323 + if (td_status & 324 + (SONIC_TCR_NCRS | SONIC_TCR_CRLS)) 365 325 lp->stats.tx_carrier_errors++; 366 - if (td_status & 0x0020) 326 + if (td_status & SONIC_TCR_OWC) 367 327 lp->stats.tx_window_errors++; 368 - if (td_status & 0x0004) 328 + if (td_status & SONIC_TCR_FU) 369 329 lp->stats.tx_fifo_errors++; 370 330 } 371 331 ··· 388 346 if (freed_some || lp->tx_skb[entry] == NULL) 389 347 netif_wake_queue(dev); /* The ring is no longer full */ 390 348 lp->cur_tx = entry; 391 - SONIC_WRITE(SONIC_ISR, SONIC_INT_TXDN); /* clear the interrupt */ 392 349 } 393 350 394 351 /* ··· 396 355 if (status & SONIC_INT_RFO) { 397 356 netif_dbg(lp, rx_err, dev, "%s: rx fifo overrun\n", 398 357 __func__); 399 - lp->stats.rx_fifo_errors++; 400 - SONIC_WRITE(SONIC_ISR, SONIC_INT_RFO); /* clear the interrupt */ 401 358 } 402 359 if (status & SONIC_INT_RDE) { 403 360 netif_dbg(lp, rx_err, dev, "%s: rx descriptors exhausted\n", 404 361 __func__); 405 - lp->stats.rx_dropped++; 406 - SONIC_WRITE(SONIC_ISR, SONIC_INT_RDE); /* clear the interrupt */ 407 362 } 408 363 if (status & SONIC_INT_RBAE) { 409 364 netif_dbg(lp, rx_err, dev, "%s: rx buffer area exceeded\n", 410 365 __func__); 411 - lp->stats.rx_dropped++; 412 - SONIC_WRITE(SONIC_ISR, SONIC_INT_RBAE); /* clear the interrupt */ 413 366 } 414 367 415 368 /* counter overruns; all counters are 16bit wide */ 416 - if (status & SONIC_INT_FAE) { 369 + if (status & SONIC_INT_FAE) 417 370 lp->stats.rx_frame_errors += 65536; 418 - 
SONIC_WRITE(SONIC_ISR, SONIC_INT_FAE); /* clear the interrupt */ 419 - } 420 - if (status & SONIC_INT_CRC) { 371 + if (status & SONIC_INT_CRC) 421 372 lp->stats.rx_crc_errors += 65536; 422 - SONIC_WRITE(SONIC_ISR, SONIC_INT_CRC); /* clear the interrupt */ 423 - } 424 - if (status & SONIC_INT_MP) { 373 + if (status & SONIC_INT_MP) 425 374 lp->stats.rx_missed_errors += 65536; 426 - SONIC_WRITE(SONIC_ISR, SONIC_INT_MP); /* clear the interrupt */ 427 - } 428 375 429 376 /* transmit error */ 430 377 if (status & SONIC_INT_TXER) { 431 - if (SONIC_READ(SONIC_TCR) & SONIC_TCR_FU) 432 - netif_dbg(lp, tx_err, dev, "%s: tx fifo underrun\n", 433 - __func__); 434 - SONIC_WRITE(SONIC_ISR, SONIC_INT_TXER); /* clear the interrupt */ 378 + u16 tcr = SONIC_READ(SONIC_TCR); 379 + 380 + netif_dbg(lp, tx_err, dev, "%s: TXER intr, TCR %04x\n", 381 + __func__, tcr); 382 + 383 + if (tcr & (SONIC_TCR_EXD | SONIC_TCR_EXC | 384 + SONIC_TCR_FU | SONIC_TCR_BCM)) { 385 + /* Aborted transmission. Try again. */ 386 + netif_stop_queue(dev); 387 + SONIC_WRITE(SONIC_CMD, SONIC_CR_TXP); 388 + } 435 389 } 436 390 437 391 /* bus retry */ ··· 436 400 /* ... to help debug DMA problems causing endless interrupts. */ 437 401 /* Bounce the eth interface to turn on the interrupt again. */ 438 402 SONIC_WRITE(SONIC_IMR, 0); 439 - SONIC_WRITE(SONIC_ISR, SONIC_INT_BR); /* clear the interrupt */ 440 403 } 441 404 442 - /* load CAM done */ 443 - if (status & SONIC_INT_LCD) 444 - SONIC_WRITE(SONIC_ISR, SONIC_INT_LCD); /* clear the interrupt */ 445 - } while((status = SONIC_READ(SONIC_ISR) & SONIC_IMR_DEFAULT)); 405 + status = SONIC_READ(SONIC_ISR) & SONIC_IMR_DEFAULT; 406 + } while (status); 407 + 408 + spin_unlock_irqrestore(&lp->lock, flags); 409 + 446 410 return IRQ_HANDLED; 411 + } 412 + 413 + /* Return the array index corresponding to a given Receive Buffer pointer. */ 414 + static int index_from_addr(struct sonic_local *lp, dma_addr_t addr, 415 + unsigned int last) 416 + { 417 + unsigned int i = last; 418 + 419 + do { 420 + i = (i + 1) & SONIC_RRS_MASK; 421 + if (addr == lp->rx_laddr[i]) 422 + return i; 423 + } while (i != last); 424 + 425 + return -ENOENT; 426 + } 427 + 428 + /* Allocate and map a new skb to be used as a receive buffer. */ 429 + static bool sonic_alloc_rb(struct net_device *dev, struct sonic_local *lp, 430 + struct sk_buff **new_skb, dma_addr_t *new_addr) 431 + { 432 + *new_skb = netdev_alloc_skb(dev, SONIC_RBSIZE + 2); 433 + if (!*new_skb) 434 + return false; 435 + 436 + if (SONIC_BUS_SCALE(lp->dma_bitmode) == 2) 437 + skb_reserve(*new_skb, 2); 438 + 439 + *new_addr = dma_map_single(lp->device, skb_put(*new_skb, SONIC_RBSIZE), 440 + SONIC_RBSIZE, DMA_FROM_DEVICE); 441 + if (!*new_addr) { 442 + dev_kfree_skb(*new_skb); 443 + *new_skb = NULL; 444 + return false; 445 + } 446 + 447 + return true; 448 + } 449 + 450 + /* Place a new receive resource in the Receive Resource Area and update RWP. */ 451 + static void sonic_update_rra(struct net_device *dev, struct sonic_local *lp, 452 + dma_addr_t old_addr, dma_addr_t new_addr) 453 + { 454 + unsigned int entry = sonic_rr_entry(dev, SONIC_READ(SONIC_RWP)); 455 + unsigned int end = sonic_rr_entry(dev, SONIC_READ(SONIC_RRP)); 456 + u32 buf; 457 + 458 + /* The resources in the range [RRP, RWP) belong to the SONIC. This loop 459 + * scans the other resources in the RRA, those in the range [RWP, RRP). 
460 + */ 461 + do { 462 + buf = (sonic_rra_get(dev, entry, SONIC_RR_BUFADR_H) << 16) | 463 + sonic_rra_get(dev, entry, SONIC_RR_BUFADR_L); 464 + 465 + if (buf == old_addr) 466 + break; 467 + 468 + entry = (entry + 1) & SONIC_RRS_MASK; 469 + } while (entry != end); 470 + 471 + WARN_ONCE(buf != old_addr, "failed to find resource!\n"); 472 + 473 + sonic_rra_put(dev, entry, SONIC_RR_BUFADR_H, new_addr >> 16); 474 + sonic_rra_put(dev, entry, SONIC_RR_BUFADR_L, new_addr & 0xffff); 475 + 476 + entry = (entry + 1) & SONIC_RRS_MASK; 477 + 478 + SONIC_WRITE(SONIC_RWP, sonic_rr_addr(dev, entry)); 447 479 } 448 480 449 481 /* ··· 520 416 static void sonic_rx(struct net_device *dev) 521 417 { 522 418 struct sonic_local *lp = netdev_priv(dev); 523 - int status; 524 419 int entry = lp->cur_rx; 420 + int prev_entry = lp->eol_rx; 421 + bool rbe = false; 525 422 526 423 while (sonic_rda_get(dev, entry, SONIC_RD_IN_USE) == 0) { 527 - struct sk_buff *used_skb; 528 - struct sk_buff *new_skb; 529 - dma_addr_t new_laddr; 530 - u16 bufadr_l; 531 - u16 bufadr_h; 532 - int pkt_len; 424 + u16 status = sonic_rda_get(dev, entry, SONIC_RD_STATUS); 533 425 534 - status = sonic_rda_get(dev, entry, SONIC_RD_STATUS); 535 - if (status & SONIC_RCR_PRX) { 536 - /* Malloc up new buffer. */ 537 - new_skb = netdev_alloc_skb(dev, SONIC_RBSIZE + 2); 538 - if (new_skb == NULL) { 539 - lp->stats.rx_dropped++; 540 - break; 541 - } 542 - /* provide 16 byte IP header alignment unless DMA requires otherwise */ 543 - if(SONIC_BUS_SCALE(lp->dma_bitmode) == 2) 544 - skb_reserve(new_skb, 2); 426 + /* If the RD has LPKT set, the chip has finished with the RB */ 427 + if ((status & SONIC_RCR_PRX) && (status & SONIC_RCR_LPKT)) { 428 + struct sk_buff *new_skb; 429 + dma_addr_t new_laddr; 430 + u32 addr = (sonic_rda_get(dev, entry, 431 + SONIC_RD_PKTPTR_H) << 16) | 432 + sonic_rda_get(dev, entry, SONIC_RD_PKTPTR_L); 433 + int i = index_from_addr(lp, addr, entry); 545 434 546 - new_laddr = dma_map_single(lp->device, skb_put(new_skb, SONIC_RBSIZE), 547 - SONIC_RBSIZE, DMA_FROM_DEVICE); 548 - if (!new_laddr) { 549 - dev_kfree_skb(new_skb); 550 - printk(KERN_ERR "%s: Failed to map rx buffer, dropping packet.\n", dev->name); 551 - lp->stats.rx_dropped++; 435 + if (i < 0) { 436 + WARN_ONCE(1, "failed to find buffer!\n"); 552 437 break; 553 438 } 554 439 555 - /* now we have a new skb to replace it, pass the used one up the stack */ 556 - dma_unmap_single(lp->device, lp->rx_laddr[entry], SONIC_RBSIZE, DMA_FROM_DEVICE); 557 - used_skb = lp->rx_skb[entry]; 558 - pkt_len = sonic_rda_get(dev, entry, SONIC_RD_PKTLEN); 559 - skb_trim(used_skb, pkt_len); 560 - used_skb->protocol = eth_type_trans(used_skb, dev); 561 - netif_rx(used_skb); 562 - lp->stats.rx_packets++; 563 - lp->stats.rx_bytes += pkt_len; 440 + if (sonic_alloc_rb(dev, lp, &new_skb, &new_laddr)) { 441 + struct sk_buff *used_skb = lp->rx_skb[i]; 442 + int pkt_len; 564 443 565 - /* and insert the new skb */ 566 - lp->rx_laddr[entry] = new_laddr; 567 - lp->rx_skb[entry] = new_skb; 444 + /* Pass the used buffer up the stack */ 445 + dma_unmap_single(lp->device, addr, SONIC_RBSIZE, 446 + DMA_FROM_DEVICE); 568 447 569 - bufadr_l = (unsigned long)new_laddr & 0xffff; 570 - bufadr_h = (unsigned long)new_laddr >> 16; 571 - sonic_rra_put(dev, entry, SONIC_RR_BUFADR_L, bufadr_l); 572 - sonic_rra_put(dev, entry, SONIC_RR_BUFADR_H, bufadr_h); 573 - } else { 574 - /* This should only happen, if we enable accepting broken packets. 
*/ 575 - lp->stats.rx_errors++; 576 - if (status & SONIC_RCR_FAER) 577 - lp->stats.rx_frame_errors++; 578 - if (status & SONIC_RCR_CRCR) 579 - lp->stats.rx_crc_errors++; 580 - } 581 - if (status & SONIC_RCR_LPKT) { 582 - /* 583 - * this was the last packet out of the current receive buffer 584 - * give the buffer back to the SONIC 448 + pkt_len = sonic_rda_get(dev, entry, 449 + SONIC_RD_PKTLEN); 450 + skb_trim(used_skb, pkt_len); 451 + used_skb->protocol = eth_type_trans(used_skb, 452 + dev); 453 + netif_rx(used_skb); 454 + lp->stats.rx_packets++; 455 + lp->stats.rx_bytes += pkt_len; 456 + 457 + lp->rx_skb[i] = new_skb; 458 + lp->rx_laddr[i] = new_laddr; 459 + } else { 460 + /* Failed to obtain a new buffer so re-use it */ 461 + new_laddr = addr; 462 + lp->stats.rx_dropped++; 463 + } 464 + /* If RBE is already asserted when RWP advances then 465 + * it's safe to clear RBE after processing this packet. 585 466 */ 586 - lp->cur_rwp += SIZEOF_SONIC_RR * SONIC_BUS_SCALE(lp->dma_bitmode); 587 - if (lp->cur_rwp >= lp->rra_end) lp->cur_rwp = lp->rra_laddr & 0xffff; 588 - SONIC_WRITE(SONIC_RWP, lp->cur_rwp); 589 - if (SONIC_READ(SONIC_ISR) & SONIC_INT_RBE) { 590 - netif_dbg(lp, rx_err, dev, "%s: rx buffer exhausted\n", 591 - __func__); 592 - SONIC_WRITE(SONIC_ISR, SONIC_INT_RBE); /* clear the flag */ 593 - } 594 - } else 595 - printk(KERN_ERR "%s: rx desc without RCR_LPKT. Shouldn't happen !?\n", 596 - dev->name); 467 + rbe = rbe || SONIC_READ(SONIC_ISR) & SONIC_INT_RBE; 468 + sonic_update_rra(dev, lp, addr, new_laddr); 469 + } 597 470 /* 598 471 * give back the descriptor 599 472 */ 600 - sonic_rda_put(dev, entry, SONIC_RD_LINK, 601 - sonic_rda_get(dev, entry, SONIC_RD_LINK) | SONIC_EOL); 473 + sonic_rda_put(dev, entry, SONIC_RD_STATUS, 0); 602 474 sonic_rda_put(dev, entry, SONIC_RD_IN_USE, 1); 603 - sonic_rda_put(dev, lp->eol_rx, SONIC_RD_LINK, 604 - sonic_rda_get(dev, lp->eol_rx, SONIC_RD_LINK) & ~SONIC_EOL); 605 - lp->eol_rx = entry; 606 - lp->cur_rx = entry = (entry + 1) & SONIC_RDS_MASK; 475 + 476 + prev_entry = entry; 477 + entry = (entry + 1) & SONIC_RDS_MASK; 607 478 } 479 + 480 + lp->cur_rx = entry; 481 + 482 + if (prev_entry != lp->eol_rx) { 483 + /* Advance the EOL flag to put descriptors back into service */ 484 + sonic_rda_put(dev, prev_entry, SONIC_RD_LINK, SONIC_EOL | 485 + sonic_rda_get(dev, prev_entry, SONIC_RD_LINK)); 486 + sonic_rda_put(dev, lp->eol_rx, SONIC_RD_LINK, ~SONIC_EOL & 487 + sonic_rda_get(dev, lp->eol_rx, SONIC_RD_LINK)); 488 + lp->eol_rx = prev_entry; 489 + } 490 + 491 + if (rbe) 492 + SONIC_WRITE(SONIC_ISR, SONIC_INT_RBE); 608 493 /* 609 494 * If any worth-while packets have been received, netif_rx() 610 495 * has done a mark_bh(NET_BH) for us and will work on them ··· 643 550 (netdev_mc_count(dev) > 15)) { 644 551 rcr |= SONIC_RCR_AMC; 645 552 } else { 553 + unsigned long flags; 554 + 646 555 netif_dbg(lp, ifup, dev, "%s: mc_count %d\n", __func__, 647 556 netdev_mc_count(dev)); 648 557 sonic_set_cam_enable(dev, 1); /* always enable our own address */ ··· 658 563 i++; 659 564 } 660 565 SONIC_WRITE(SONIC_CDC, 16); 661 - /* issue Load CAM command */ 662 566 SONIC_WRITE(SONIC_CDP, lp->cda_laddr & 0xffff); 567 + 568 + /* LCAM and TXP commands can't be used simultaneously */ 569 + spin_lock_irqsave(&lp->lock, flags); 570 + sonic_quiesce(dev, SONIC_CR_TXP); 663 571 SONIC_WRITE(SONIC_CMD, SONIC_CR_LCAM); 572 + sonic_quiesce(dev, SONIC_CR_LCAM); 573 + spin_unlock_irqrestore(&lp->lock, flags); 664 574 } 665 575 } 666 576 ··· 680 580 */ 681 581 static int sonic_init(struct 
net_device *dev) 682 582 { 683 - unsigned int cmd; 684 583 struct sonic_local *lp = netdev_priv(dev); 685 584 int i; 686 585 ··· 691 592 SONIC_WRITE(SONIC_ISR, 0x7fff); 692 593 SONIC_WRITE(SONIC_CMD, SONIC_CR_RST); 693 594 595 + /* While in reset mode, clear CAM Enable register */ 596 + SONIC_WRITE(SONIC_CE, 0); 597 + 694 598 /* 695 599 * clear software reset flag, disable receiver, clear and 696 600 * enable interrupts, then completely initialize the SONIC 697 601 */ 698 602 SONIC_WRITE(SONIC_CMD, 0); 699 - SONIC_WRITE(SONIC_CMD, SONIC_CR_RXDIS); 603 + SONIC_WRITE(SONIC_CMD, SONIC_CR_RXDIS | SONIC_CR_STP); 604 + sonic_quiesce(dev, SONIC_CR_ALL); 700 605 701 606 /* 702 607 * initialize the receive resource area ··· 718 615 } 719 616 720 617 /* initialize all RRA registers */ 721 - lp->rra_end = (lp->rra_laddr + SONIC_NUM_RRS * SIZEOF_SONIC_RR * 722 - SONIC_BUS_SCALE(lp->dma_bitmode)) & 0xffff; 723 - lp->cur_rwp = (lp->rra_laddr + (SONIC_NUM_RRS - 1) * SIZEOF_SONIC_RR * 724 - SONIC_BUS_SCALE(lp->dma_bitmode)) & 0xffff; 725 - 726 - SONIC_WRITE(SONIC_RSA, lp->rra_laddr & 0xffff); 727 - SONIC_WRITE(SONIC_REA, lp->rra_end); 728 - SONIC_WRITE(SONIC_RRP, lp->rra_laddr & 0xffff); 729 - SONIC_WRITE(SONIC_RWP, lp->cur_rwp); 618 + SONIC_WRITE(SONIC_RSA, sonic_rr_addr(dev, 0)); 619 + SONIC_WRITE(SONIC_REA, sonic_rr_addr(dev, SONIC_NUM_RRS)); 620 + SONIC_WRITE(SONIC_RRP, sonic_rr_addr(dev, 0)); 621 + SONIC_WRITE(SONIC_RWP, sonic_rr_addr(dev, SONIC_NUM_RRS - 1)); 730 622 SONIC_WRITE(SONIC_URRA, lp->rra_laddr >> 16); 731 623 SONIC_WRITE(SONIC_EOBC, (SONIC_RBSIZE >> 1) - (lp->dma_bitmode ? 2 : 1)); 732 624 ··· 729 631 netif_dbg(lp, ifup, dev, "%s: issuing RRRA command\n", __func__); 730 632 731 633 SONIC_WRITE(SONIC_CMD, SONIC_CR_RRRA); 732 - i = 0; 733 - while (i++ < 100) { 734 - if (SONIC_READ(SONIC_CMD) & SONIC_CR_RRRA) 735 - break; 736 - } 737 - 738 - netif_dbg(lp, ifup, dev, "%s: status=%x, i=%d\n", __func__, 739 - SONIC_READ(SONIC_CMD), i); 634 + sonic_quiesce(dev, SONIC_CR_RRRA); 740 635 741 636 /* 742 637 * Initialize the receive descriptors so that they ··· 804 713 * load the CAM 805 714 */ 806 715 SONIC_WRITE(SONIC_CMD, SONIC_CR_LCAM); 807 - 808 - i = 0; 809 - while (i++ < 100) { 810 - if (SONIC_READ(SONIC_ISR) & SONIC_INT_LCD) 811 - break; 812 - } 813 - netif_dbg(lp, ifup, dev, "%s: CMD=%x, ISR=%x, i=%d\n", __func__, 814 - SONIC_READ(SONIC_CMD), SONIC_READ(SONIC_ISR), i); 716 + sonic_quiesce(dev, SONIC_CR_LCAM); 815 717 816 718 /* 817 719 * enable receiver, disable loopback 818 720 * and enable all interrupts 819 721 */ 820 - SONIC_WRITE(SONIC_CMD, SONIC_CR_RXEN | SONIC_CR_STP); 821 722 SONIC_WRITE(SONIC_RCR, SONIC_RCR_DEFAULT); 822 723 SONIC_WRITE(SONIC_TCR, SONIC_TCR_DEFAULT); 823 724 SONIC_WRITE(SONIC_ISR, 0x7fff); 824 725 SONIC_WRITE(SONIC_IMR, SONIC_IMR_DEFAULT); 825 - 826 - cmd = SONIC_READ(SONIC_CMD); 827 - if ((cmd & SONIC_CR_RXEN) == 0 || (cmd & SONIC_CR_STP) == 0) 828 - printk(KERN_ERR "sonic_init: failed, status=%x\n", cmd); 726 + SONIC_WRITE(SONIC_CMD, SONIC_CR_RXEN); 829 727 830 728 netif_dbg(lp, ifup, dev, "%s: new status=%x\n", __func__, 831 729 SONIC_READ(SONIC_CMD));
+32 -12
drivers/net/ethernet/natsemi/sonic.h
··· 110 110 #define SONIC_CR_TXP 0x0002 111 111 #define SONIC_CR_HTX 0x0001 112 112 113 + #define SONIC_CR_ALL (SONIC_CR_LCAM | SONIC_CR_RRRA | \ 114 + SONIC_CR_RXEN | SONIC_CR_TXP) 115 + 113 116 /* 114 117 * SONIC data configuration bits 115 118 */ ··· 178 175 #define SONIC_TCR_NCRS 0x0100 179 176 #define SONIC_TCR_CRLS 0x0080 180 177 #define SONIC_TCR_EXC 0x0040 178 + #define SONIC_TCR_OWC 0x0020 181 179 #define SONIC_TCR_PMB 0x0008 182 180 #define SONIC_TCR_FU 0x0004 183 181 #define SONIC_TCR_BCM 0x0002 ··· 278 274 #define SONIC_NUM_RDS SONIC_NUM_RRS /* number of receive descriptors */ 279 275 #define SONIC_NUM_TDS 16 /* number of transmit descriptors */ 280 276 281 - #define SONIC_RDS_MASK (SONIC_NUM_RDS-1) 282 - #define SONIC_TDS_MASK (SONIC_NUM_TDS-1) 277 + #define SONIC_RRS_MASK (SONIC_NUM_RRS - 1) 278 + #define SONIC_RDS_MASK (SONIC_NUM_RDS - 1) 279 + #define SONIC_TDS_MASK (SONIC_NUM_TDS - 1) 283 280 284 281 #define SONIC_RBSIZE 1520 /* size of one resource buffer */ 285 282 ··· 317 312 u32 rda_laddr; /* logical DMA address of RDA */ 318 313 dma_addr_t rx_laddr[SONIC_NUM_RRS]; /* logical DMA addresses of rx skbuffs */ 319 314 dma_addr_t tx_laddr[SONIC_NUM_TDS]; /* logical DMA addresses of tx skbuffs */ 320 - unsigned int rra_end; 321 - unsigned int cur_rwp; 322 315 unsigned int cur_rx; 323 316 unsigned int cur_tx; /* first unacked transmit packet */ 324 317 unsigned int eol_rx; ··· 325 322 int msg_enable; 326 323 struct device *device; /* generic device */ 327 324 struct net_device_stats stats; 325 + spinlock_t lock; 328 326 }; 329 327 330 328 #define TX_TIMEOUT (3 * HZ) ··· 348 344 as far as we can tell. */ 349 345 /* OpenBSD calls this "SWO". I'd like to think that sonic_buf_put() 350 346 is a much better name. */ 351 - static inline void sonic_buf_put(void* base, int bitmode, 347 + static inline void sonic_buf_put(u16 *base, int bitmode, 352 348 int offset, __u16 val) 353 349 { 354 350 if (bitmode) 355 351 #ifdef __BIG_ENDIAN 356 - ((__u16 *) base + (offset*2))[1] = val; 352 + __raw_writew(val, base + (offset * 2) + 1); 357 353 #else 358 - ((__u16 *) base + (offset*2))[0] = val; 354 + __raw_writew(val, base + (offset * 2) + 0); 359 355 #endif 360 356 else 361 - ((__u16 *) base)[offset] = val; 357 + __raw_writew(val, base + (offset * 1) + 0); 362 358 } 363 359 364 - static inline __u16 sonic_buf_get(void* base, int bitmode, 360 + static inline __u16 sonic_buf_get(u16 *base, int bitmode, 365 361 int offset) 366 362 { 367 363 if (bitmode) 368 364 #ifdef __BIG_ENDIAN 369 - return ((volatile __u16 *) base + (offset*2))[1]; 365 + return __raw_readw(base + (offset * 2) + 1); 370 366 #else 371 - return ((volatile __u16 *) base + (offset*2))[0]; 367 + return __raw_readw(base + (offset * 2) + 0); 372 368 #endif 373 369 else 374 - return ((volatile __u16 *) base)[offset]; 370 + return __raw_readw(base + (offset * 1) + 0); 375 371 } 376 372 377 373 /* Inlines that you should actually use for reading/writing DMA buffers */ ··· 449 445 struct sonic_local *lp = netdev_priv(dev); 450 446 return sonic_buf_get(lp->rra, lp->dma_bitmode, 451 447 (entry * SIZEOF_SONIC_RR) + offset); 448 + } 449 + 450 + static inline u16 sonic_rr_addr(struct net_device *dev, int entry) 451 + { 452 + struct sonic_local *lp = netdev_priv(dev); 453 + 454 + return lp->rra_laddr + 455 + entry * SIZEOF_SONIC_RR * SONIC_BUS_SCALE(lp->dma_bitmode); 456 + } 457 + 458 + static inline u16 sonic_rr_entry(struct net_device *dev, u16 addr) 459 + { 460 + struct sonic_local *lp = netdev_priv(dev); 461 + 462 + return (addr - 
(u16)lp->rra_laddr) / (SIZEOF_SONIC_RR * 463 + SONIC_BUS_SCALE(lp->dma_bitmode)); 452 464 } 453 465 454 466 static const char version[] =
+1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c
··· 2043 2043 break; 2044 2044 } 2045 2045 entry += p_hdr->size; 2046 + cond_resched(); 2046 2047 } 2047 2048 p_dev->ahw->reset.seq_index = index; 2048 2049 }
+2
drivers/net/ethernet/qlogic/qlcnic/qlcnic_minidump.c
··· 703 703 addr += 16; 704 704 reg_read -= 16; 705 705 ret += 16; 706 + cond_resched(); 706 707 } 707 708 out: 708 709 mutex_unlock(&adapter->ahw->mem_lock); ··· 1384 1383 buf_offset += entry->hdr.cap_size; 1385 1384 entry_offset += entry->hdr.offset; 1386 1385 buffer = fw_dump->data + buf_offset; 1386 + cond_resched(); 1387 1387 } 1388 1388 1389 1389 fw_dump->clr = 1;
+3 -3
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 412 412 *mac = NULL; 413 413 } 414 414 415 - rc = of_get_phy_mode(np, &plat->phy_interface); 416 - if (rc) 417 - return ERR_PTR(rc); 415 + plat->phy_interface = device_get_phy_mode(&pdev->dev); 416 + if (plat->phy_interface < 0) 417 + return ERR_PTR(plat->phy_interface); 418 418 419 419 plat->interface = stmmac_of_get_mac_mode(np); 420 420 if (plat->interface < 0)
+6 -4
drivers/net/gtp.c
··· 804 804 return NULL; 805 805 } 806 806 807 - if (sock->sk->sk_protocol != IPPROTO_UDP) { 807 + sk = sock->sk; 808 + if (sk->sk_protocol != IPPROTO_UDP || 809 + sk->sk_type != SOCK_DGRAM || 810 + (sk->sk_family != AF_INET && sk->sk_family != AF_INET6)) { 808 811 pr_debug("socket fd=%d not UDP\n", fd); 809 812 sk = ERR_PTR(-EINVAL); 810 813 goto out_sock; 811 814 } 812 815 813 - lock_sock(sock->sk); 814 - if (sock->sk->sk_user_data) { 816 + lock_sock(sk); 817 + if (sk->sk_user_data) { 815 818 sk = ERR_PTR(-EBUSY); 816 819 goto out_rel_sock; 817 820 } 818 821 819 - sk = sock->sk; 820 822 sock_hold(sk); 821 823 822 824 tuncfg.sk_user_data = gtp;
+10 -2
drivers/net/slip/slip.c
··· 452 452 */ 453 453 static void slip_write_wakeup(struct tty_struct *tty) 454 454 { 455 - struct slip *sl = tty->disc_data; 455 + struct slip *sl; 456 + 457 + rcu_read_lock(); 458 + sl = rcu_dereference(tty->disc_data); 459 + if (!sl) 460 + goto out; 456 461 457 462 schedule_work(&sl->tx_work); 463 + out: 464 + rcu_read_unlock(); 458 465 } 459 466 460 467 static void sl_tx_timeout(struct net_device *dev) ··· 889 882 return; 890 883 891 884 spin_lock_bh(&sl->lock); 892 - tty->disc_data = NULL; 885 + rcu_assign_pointer(tty->disc_data, NULL); 893 886 sl->tty = NULL; 894 887 spin_unlock_bh(&sl->lock); 895 888 889 + synchronize_rcu(); 896 890 flush_work(&sl->tx_work); 897 891 898 892 /* VSV = very important to remove timers */
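
The slcan and slip hunks above close the same race: the TTY write-wakeup
callback could dereference tty->disc_data while the line-discipline close path
was tearing it down. Both drivers now publish and retire the pointer with RCU.
A minimal sketch of the pattern, with hypothetical names (struct priv, tx_work)
standing in for the driver-specific ones:

    /* reader side: wakeup callback, may race with close */
    static void wakeup(struct tty_struct *tty)
    {
            struct priv *p;

            rcu_read_lock();
            p = rcu_dereference(tty->disc_data);
            if (p)
                    schedule_work(&p->tx_work);
            rcu_read_unlock();
    }

    /* writer side: close path, before freeing the private data */
    rcu_assign_pointer(tty->disc_data, NULL);
    synchronize_rcu();              /* wait out readers still in the callback */
    flush_work(&p->tx_work);        /* no new work can be queued after this */
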
+4
drivers/net/tun.c
··· 1936 1936 if (ret != XDP_PASS) { 1937 1937 rcu_read_unlock(); 1938 1938 local_bh_enable(); 1939 + if (frags) { 1940 + tfile->napi.skb = NULL; 1941 + mutex_unlock(&tfile->napi_mutex); 1942 + } 1939 1943 return total_len; 1940 1944 } 1941 1945 }
+15
drivers/net/usb/lan78xx.c
··· 20 20 #include <linux/mdio.h> 21 21 #include <linux/phy.h> 22 22 #include <net/ip6_checksum.h> 23 + #include <net/vxlan.h> 23 24 #include <linux/interrupt.h> 24 25 #include <linux/irqdomain.h> 25 26 #include <linux/irq.h> ··· 3669 3668 tasklet_schedule(&dev->bh); 3670 3669 } 3671 3670 3671 + static netdev_features_t lan78xx_features_check(struct sk_buff *skb, 3672 + struct net_device *netdev, 3673 + netdev_features_t features) 3674 + { 3675 + if (skb->len + TX_OVERHEAD > MAX_SINGLE_PACKET_SIZE) 3676 + features &= ~NETIF_F_GSO_MASK; 3677 + 3678 + features = vlan_features_check(skb, features); 3679 + features = vxlan_features_check(skb, features); 3680 + 3681 + return features; 3682 + } 3683 + 3672 3684 static const struct net_device_ops lan78xx_netdev_ops = { 3673 3685 .ndo_open = lan78xx_open, 3674 3686 .ndo_stop = lan78xx_stop, ··· 3695 3681 .ndo_set_features = lan78xx_set_features, 3696 3682 .ndo_vlan_rx_add_vid = lan78xx_vlan_rx_add_vid, 3697 3683 .ndo_vlan_rx_kill_vid = lan78xx_vlan_rx_kill_vid, 3684 + .ndo_features_check = lan78xx_features_check, 3698 3685 }; 3699 3686 3700 3687 static void lan78xx_stat_monitor(struct timer_list *t)
+114 -11
drivers/net/usb/r8152.c
··· 31 31 #define NETNEXT_VERSION "11" 32 32 33 33 /* Information for net */ 34 - #define NET_VERSION "10" 34 + #define NET_VERSION "11" 35 35 36 36 #define DRIVER_VERSION "v1." NETNEXT_VERSION "." NET_VERSION 37 37 #define DRIVER_AUTHOR "Realtek linux nic maintainers <nic_swsd@realtek.com>" ··· 68 68 #define PLA_LED_FEATURE 0xdd92 69 69 #define PLA_PHYAR 0xde00 70 70 #define PLA_BOOT_CTRL 0xe004 71 + #define PLA_LWAKE_CTRL_REG 0xe007 71 72 #define PLA_GPHY_INTR_IMR 0xe022 72 73 #define PLA_EEE_CR 0xe040 73 74 #define PLA_EEEP_CR 0xe080 ··· 96 95 #define PLA_TALLYCNT 0xe890 97 96 #define PLA_SFF_STS_7 0xe8de 98 97 #define PLA_PHYSTATUS 0xe908 98 + #define PLA_CONFIG6 0xe90a /* CONFIG6 */ 99 99 #define PLA_BP_BA 0xfc26 100 100 #define PLA_BP_0 0xfc28 101 101 #define PLA_BP_1 0xfc2a ··· 109 107 #define PLA_BP_EN 0xfc38 110 108 111 109 #define USB_USB2PHY 0xb41e 110 + #define USB_SSPHYLINK1 0xb426 112 111 #define USB_SSPHYLINK2 0xb428 113 112 #define USB_U2P3_CTRL 0xb460 114 113 #define USB_CSR_DUMMY1 0xb464 ··· 303 300 #define LINK_ON_WAKE_EN 0x0010 304 301 #define LINK_OFF_WAKE_EN 0x0008 305 302 303 + /* PLA_CONFIG6 */ 304 + #define LANWAKE_CLR_EN BIT(0) 305 + 306 306 /* PLA_CONFIG5 */ 307 307 #define BWF_EN 0x0040 308 308 #define MWF_EN 0x0020 ··· 318 312 /* PLA_PHY_PWR */ 319 313 #define TX_10M_IDLE_EN 0x0080 320 314 #define PFM_PWM_SWITCH 0x0040 315 + #define TEST_IO_OFF BIT(4) 321 316 322 317 /* PLA_MAC_PWR_CTRL */ 323 318 #define D3_CLK_GATED_EN 0x00004000 ··· 331 324 #define MAC_CLK_SPDWN_EN BIT(15) 332 325 333 326 /* PLA_MAC_PWR_CTRL3 */ 327 + #define PLA_MCU_SPDWN_EN BIT(14) 334 328 #define PKT_AVAIL_SPDWN_EN 0x0100 335 329 #define SUSPEND_SPDWN_EN 0x0004 336 330 #define U1U2_SPDWN_EN 0x0002 ··· 362 354 /* PLA_BOOT_CTRL */ 363 355 #define AUTOLOAD_DONE 0x0002 364 356 357 + /* PLA_LWAKE_CTRL_REG */ 358 + #define LANWAKE_PIN BIT(7) 359 + 365 360 /* PLA_SUSPEND_FLAG */ 366 361 #define LINK_CHG_EVENT BIT(0) 367 362 ··· 376 365 #define DEBUG_LTSSM 0x0082 377 366 378 367 /* PLA_EXTRA_STATUS */ 368 + #define CUR_LINK_OK BIT(15) 379 369 #define U3P3_CHECK_EN BIT(7) /* RTL_VER_05 only */ 380 370 #define LINK_CHANGE_FLAG BIT(8) 371 + #define POLL_LINK_CHG BIT(0) 381 372 382 373 /* USB_USB2PHY */ 383 374 #define USB2PHY_SUSPEND 0x0001 384 375 #define USB2PHY_L1 0x0002 376 + 377 + /* USB_SSPHYLINK1 */ 378 + #define DELAY_PHY_PWR_CHG BIT(1) 385 379 386 380 /* USB_SSPHYLINK2 */ 387 381 #define pwd_dn_scale_mask 0x3ffe ··· 2879 2863 r8153_set_rx_early_timeout(tp); 2880 2864 r8153_set_rx_early_size(tp); 2881 2865 2866 + if (tp->version == RTL_VER_09) { 2867 + u32 ocp_data; 2868 + 2869 + ocp_data = ocp_read_word(tp, MCU_TYPE_USB, USB_FW_TASK); 2870 + ocp_data &= ~FC_PATCH_TASK; 2871 + ocp_write_word(tp, MCU_TYPE_USB, USB_FW_TASK, ocp_data); 2872 + usleep_range(1000, 2000); 2873 + ocp_data |= FC_PATCH_TASK; 2874 + ocp_write_word(tp, MCU_TYPE_USB, USB_FW_TASK, ocp_data); 2875 + } 2876 + 2882 2877 return rtl_enable(tp); 2883 2878 } 2884 2879 ··· 3403 3376 r8153b_ups_en(tp, false); 3404 3377 r8153_queue_wake(tp, false); 3405 3378 rtl_runtime_suspend_enable(tp, false); 3406 - r8153_u2p3en(tp, true); 3407 - r8153b_u1u2en(tp, true); 3379 + if (tp->udev->speed != USB_SPEED_HIGH) 3380 + r8153b_u1u2en(tp, true); 3408 3381 } 3409 3382 } 3410 3383 ··· 4702 4675 4703 4676 r8153_aldps_en(tp, true); 4704 4677 r8152b_enable_fc(tp); 4705 - r8153_u2p3en(tp, true); 4706 4678 4707 4679 set_bit(PHY_RESET, &tp->flags); 4708 4680 } ··· 4980 4954 4981 4955 static void rtl8153_up(struct r8152 *tp) 4982 4956 { 4957 + u32 
ocp_data; 4958 + 4983 4959 if (test_bit(RTL8152_UNPLUG, &tp->flags)) 4984 4960 return; 4985 4961 ··· 4989 4961 r8153_u2p3en(tp, false); 4990 4962 r8153_aldps_en(tp, false); 4991 4963 r8153_first_init(tp); 4964 + 4965 + ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6); 4966 + ocp_data |= LANWAKE_CLR_EN; 4967 + ocp_write_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6, ocp_data); 4968 + 4969 + ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_LWAKE_CTRL_REG); 4970 + ocp_data &= ~LANWAKE_PIN; 4971 + ocp_write_byte(tp, MCU_TYPE_PLA, PLA_LWAKE_CTRL_REG, ocp_data); 4972 + 4973 + ocp_data = ocp_read_word(tp, MCU_TYPE_USB, USB_SSPHYLINK1); 4974 + ocp_data &= ~DELAY_PHY_PWR_CHG; 4975 + ocp_write_word(tp, MCU_TYPE_USB, USB_SSPHYLINK1, ocp_data); 4976 + 4992 4977 r8153_aldps_en(tp, true); 4993 4978 4994 4979 switch (tp->version) { ··· 5020 4979 5021 4980 static void rtl8153_down(struct r8152 *tp) 5022 4981 { 4982 + u32 ocp_data; 4983 + 5023 4984 if (test_bit(RTL8152_UNPLUG, &tp->flags)) { 5024 4985 rtl_drop_queued_tx(tp); 5025 4986 return; 5026 4987 } 4988 + 4989 + ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6); 4990 + ocp_data &= ~LANWAKE_CLR_EN; 4991 + ocp_write_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6, ocp_data); 5027 4992 5028 4993 r8153_u1u2en(tp, false); 5029 4994 r8153_u2p3en(tp, false); ··· 5041 4994 5042 4995 static void rtl8153b_up(struct r8152 *tp) 5043 4996 { 4997 + u32 ocp_data; 4998 + 5044 4999 if (test_bit(RTL8152_UNPLUG, &tp->flags)) 5045 5000 return; 5046 5001 ··· 5053 5004 r8153_first_init(tp); 5054 5005 ocp_write_dword(tp, MCU_TYPE_USB, USB_RX_BUF_TH, RX_THR_B); 5055 5006 5007 + ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3); 5008 + ocp_data &= ~PLA_MCU_SPDWN_EN; 5009 + ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3, ocp_data); 5010 + 5056 5011 r8153_aldps_en(tp, true); 5057 - r8153_u2p3en(tp, true); 5058 - r8153b_u1u2en(tp, true); 5012 + 5013 + if (tp->udev->speed != USB_SPEED_HIGH) 5014 + r8153b_u1u2en(tp, true); 5059 5015 } 5060 5016 5061 5017 static void rtl8153b_down(struct r8152 *tp) 5062 5018 { 5019 + u32 ocp_data; 5020 + 5063 5021 if (test_bit(RTL8152_UNPLUG, &tp->flags)) { 5064 5022 rtl_drop_queued_tx(tp); 5065 5023 return; 5066 5024 } 5025 + 5026 + ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3); 5027 + ocp_data |= PLA_MCU_SPDWN_EN; 5028 + ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3, ocp_data); 5067 5029 5068 5030 r8153b_u1u2en(tp, false); 5069 5031 r8153_u2p3en(tp, false); ··· 5447 5387 else 5448 5388 ocp_data |= DYNAMIC_BURST; 5449 5389 ocp_write_byte(tp, MCU_TYPE_USB, USB_CSR_DUMMY1, ocp_data); 5390 + 5391 + r8153_queue_wake(tp, false); 5392 + 5393 + ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_EXTRA_STATUS); 5394 + if (rtl8152_get_speed(tp) & LINK_STATUS) 5395 + ocp_data |= CUR_LINK_OK; 5396 + else 5397 + ocp_data &= ~CUR_LINK_OK; 5398 + ocp_data |= POLL_LINK_CHG; 5399 + ocp_write_word(tp, MCU_TYPE_PLA, PLA_EXTRA_STATUS, ocp_data); 5450 5400 } 5451 5401 5452 5402 ocp_data = ocp_read_byte(tp, MCU_TYPE_USB, USB_CSR_DUMMY2); ··· 5486 5416 ocp_write_word(tp, MCU_TYPE_USB, USB_CONNECT_TIMER, 0x0001); 5487 5417 5488 5418 r8153_power_cut_en(tp, false); 5419 + rtl_runtime_suspend_enable(tp, false); 5489 5420 r8153_u1u2en(tp, true); 5490 5421 r8153_mac_clk_spd(tp, false); 5491 5422 usb_enable_lpm(tp->udev); 5423 + 5424 + ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6); 5425 + ocp_data |= LANWAKE_CLR_EN; 5426 + ocp_write_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6, ocp_data); 5427 + 5428 + ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, 
PLA_LWAKE_CTRL_REG); 5429 + ocp_data &= ~LANWAKE_PIN; 5430 + ocp_write_byte(tp, MCU_TYPE_PLA, PLA_LWAKE_CTRL_REG, ocp_data); 5492 5431 5493 5432 /* rx aggregation */ 5494 5433 ocp_data = ocp_read_word(tp, MCU_TYPE_USB, USB_USB_CTRL); ··· 5563 5484 r8153b_ups_en(tp, false); 5564 5485 r8153_queue_wake(tp, false); 5565 5486 rtl_runtime_suspend_enable(tp, false); 5566 - r8153b_u1u2en(tp, true); 5487 + 5488 + ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_EXTRA_STATUS); 5489 + if (rtl8152_get_speed(tp) & LINK_STATUS) 5490 + ocp_data |= CUR_LINK_OK; 5491 + else 5492 + ocp_data &= ~CUR_LINK_OK; 5493 + ocp_data |= POLL_LINK_CHG; 5494 + ocp_write_word(tp, MCU_TYPE_PLA, PLA_EXTRA_STATUS, ocp_data); 5495 + 5496 + if (tp->udev->speed != USB_SPEED_HIGH) 5497 + r8153b_u1u2en(tp, true); 5567 5498 usb_enable_lpm(tp->udev); 5568 5499 5569 5500 /* MAC clock speed down */ 5570 5501 ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL2); 5571 5502 ocp_data |= MAC_CLK_SPDWN_EN; 5572 5503 ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL2, ocp_data); 5504 + 5505 + ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3); 5506 + ocp_data &= ~PLA_MCU_SPDWN_EN; 5507 + ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3, ocp_data); 5508 + 5509 + if (tp->version == RTL_VER_09) { 5510 + /* Disable Test IO for 32QFN */ 5511 + if (ocp_read_byte(tp, MCU_TYPE_PLA, 0xdc00) & BIT(5)) { 5512 + ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_PHY_PWR); 5513 + ocp_data |= TEST_IO_OFF; 5514 + ocp_write_word(tp, MCU_TYPE_PLA, PLA_PHY_PWR, ocp_data); 5515 + } 5516 + } 5573 5517 5574 5518 set_bit(GREEN_ETHERNET, &tp->flags); 5575 5519 ··· 6809 6707 6810 6708 intf->needs_remote_wakeup = 1; 6811 6709 6710 + if (!rtl_can_wakeup(tp)) 6711 + __rtl_set_wol(tp, 0); 6712 + else 6713 + tp->saved_wolopts = __rtl_get_wol(tp); 6714 + 6812 6715 tp->rtl_ops.init(tp); 6813 6716 #if IS_BUILTIN(CONFIG_USB_RTL8152) 6814 6717 /* Retry in case request_firmware() is not ready yet. */ ··· 6831 6724 goto out1; 6832 6725 } 6833 6726 6834 - if (!rtl_can_wakeup(tp)) 6835 - __rtl_set_wol(tp, 0); 6836 - 6837 - tp->saved_wolopts = __rtl_get_wol(tp); 6838 6727 if (tp->saved_wolopts) 6839 6728 device_set_wakeup_enable(&udev->dev, true); 6840 6729 else
+9 -11
drivers/net/wireless/cisco/airo.c
··· 7790 7790 case AIROGVLIST: ridcode = RID_APLIST; break; 7791 7791 case AIROGDRVNAM: ridcode = RID_DRVNAME; break; 7792 7792 case AIROGEHTENC: ridcode = RID_ETHERENCAP; break; 7793 - case AIROGWEPKTMP: ridcode = RID_WEP_TEMP; 7794 - /* Only super-user can read WEP keys */ 7795 - if (!capable(CAP_NET_ADMIN)) 7796 - return -EPERM; 7797 - break; 7798 - case AIROGWEPKNV: ridcode = RID_WEP_PERM; 7799 - /* Only super-user can read WEP keys */ 7800 - if (!capable(CAP_NET_ADMIN)) 7801 - return -EPERM; 7802 - break; 7793 + case AIROGWEPKTMP: ridcode = RID_WEP_TEMP; break; 7794 + case AIROGWEPKNV: ridcode = RID_WEP_PERM; break; 7803 7795 case AIROGSTAT: ridcode = RID_STATUS; break; 7804 7796 case AIROGSTATSD32: ridcode = RID_STATSDELTA; break; 7805 7797 case AIROGSTATSC32: ridcode = RID_STATS; break; ··· 7805 7813 return -EINVAL; 7806 7814 } 7807 7815 7808 - if ((iobuf = kmalloc(RIDSIZE, GFP_KERNEL)) == NULL) 7816 + if (ridcode == RID_WEP_TEMP || ridcode == RID_WEP_PERM) { 7817 + /* Only super-user can read WEP keys */ 7818 + if (!capable(CAP_NET_ADMIN)) 7819 + return -EPERM; 7820 + } 7821 + 7822 + if ((iobuf = kzalloc(RIDSIZE, GFP_KERNEL)) == NULL) 7809 7823 return -ENOMEM; 7810 7824 7811 7825 PC4500_readrid(ai,ridcode,iobuf,RIDSIZE, 1);
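Note on the hunk above: the kzalloc() switch matters because PC4500_readrid() may fill fewer than RIDSIZE bytes, yet the whole buffer is later copied back to user space; with kmalloc() the unwritten tail would be stale kernel heap. The CAP_NET_ADMIN test is simply consolidated after the ridcode switch so both WEP key reads share one check. A rough sketch of the leak-prone flow, where copy_out() is a hypothetical stand-in for the driver's actual copy-to-user step:

    u8 *iobuf = kzalloc(RIDSIZE, GFP_KERNEL);       /* unwritten tail stays zeroed */
    if (!iobuf)
            return -ENOMEM;
    PC4500_readrid(ai, ridcode, iobuf, RIDSIZE, 1); /* may fill only a prefix */
    copy_out(user_buf, iobuf, RIDSIZE);             /* hypothetical: full buffer leaves the kernel */
    kfree(iobuf);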
+1 -2
drivers/net/wireless/intel/iwlwifi/dvm/tx.c
··· 267 267 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); 268 268 struct iwl_station_priv *sta_priv = NULL; 269 269 struct iwl_rxon_context *ctx = &priv->contexts[IWL_RXON_CTX_BSS]; 270 - struct iwl_device_cmd *dev_cmd; 270 + struct iwl_device_tx_cmd *dev_cmd; 271 271 struct iwl_tx_cmd *tx_cmd; 272 272 __le16 fc; 273 273 u8 hdr_len; ··· 348 348 if (unlikely(!dev_cmd)) 349 349 goto drop_unlock_priv; 350 350 351 - memset(dev_cmd, 0, sizeof(*dev_cmd)); 352 351 dev_cmd->hdr.cmd = REPLY_TX; 353 352 tx_cmd = (struct iwl_tx_cmd *) dev_cmd->payload; 354 353
+5 -5
drivers/net/wireless/intel/iwlwifi/fw/acpi.c
··· 357 357 { 358 358 union acpi_object *wifi_pkg, *data; 359 359 bool enabled; 360 - int i, n_profiles, tbl_rev; 361 - int ret = 0; 360 + int i, n_profiles, tbl_rev, pos; 361 + int ret = 0; 362 362 363 363 data = iwl_acpi_get_object(fwrt->dev, ACPI_EWRD_METHOD); 364 364 if (IS_ERR(data)) ··· 390 390 goto out_free; 391 391 } 392 392 393 - for (i = 0; i < n_profiles; i++) { 394 - /* the tables start at element 3 */ 395 - int pos = 3; 393 + /* the tables start at element 3 */ 394 + pos = 3; 396 395 396 + for (i = 0; i < n_profiles; i++) { 397 397 /* The EWRD profiles officially go from 2 to 4, but we 398 398 * save them in sar_profiles[1-3] (because we don't 399 399 * have profile 0). So in the array we start from 1.
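The functional change in this hunk is hoisting pos out of the loop: with the index reset to 3 on every iteration, each profile re-read the same package elements, so only the first profile's data could ever be picked up. A sketch of the intended walk, assuming the per-profile read advances pos through the ACPI package (n_per_profile and values[][] are illustrative stand-ins, not the driver's real fields):

    pos = 3;                                /* the tables start at element 3 */
    for (i = 0; i < n_profiles; i++)
            for (j = 0; j < n_per_profile; j++)
                    values[i][j] =
                            wifi_pkg->package.elements[pos++].integer.value;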
+1 -6
drivers/net/wireless/intel/iwlwifi/fw/dbg.c
··· 2669 2669 { 2670 2670 int ret = 0; 2671 2671 2672 - /* if the FW crashed or not debug monitor cfg was given, there is 2673 - * no point in changing the recording state 2674 - */ 2675 - if (test_bit(STATUS_FW_ERROR, &fwrt->trans->status) || 2676 - (!fwrt->trans->dbg.dest_tlv && 2677 - fwrt->trans->dbg.ini_dest == IWL_FW_INI_LOCATION_INVALID)) 2672 + if (test_bit(STATUS_FW_ERROR, &fwrt->trans->status)) 2678 2673 return 0; 2679 2674 2680 2675 if (fw_has_capa(&fwrt->fw->ucode_capa,
+1 -1
drivers/net/wireless/intel/iwlwifi/iwl-csr.h
··· 379 379 380 380 381 381 /* CSR GIO */ 382 - #define CSR_GIO_REG_VAL_L0S_ENABLED (0x00000002) 382 + #define CSR_GIO_REG_VAL_L0S_DISABLED (0x00000002) 383 383 384 384 /* 385 385 * UCODE-DRIVER GP (general purpose) mailbox register 1
+8 -1
drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
··· 480 480 if (!frag || frag->size || !pages) 481 481 return -EIO; 482 482 483 - while (pages) { 483 + /* 484 + * We try to allocate as many pages as we can, starting with 485 + * the requested amount and going down until we can allocate 486 + * something. Because of DIV_ROUND_UP(), pages will never go 487 + * down to 0 and stop the loop, so stop when pages reaches 1, 488 + * which is too small anyway. 489 + */ 490 + while (pages > 1) { 484 491 block = dma_alloc_coherent(fwrt->dev, pages * PAGE_SIZE, 485 492 &physical, 486 493 GFP_KERNEL | __GFP_NOWARN);
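The new bound follows from the arithmetic spelled out in the comment: assuming the retry path shrinks the request with DIV_ROUND_UP(pages, 2) (the shrink itself sits outside this hunk), the count can never reach zero on its own:

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
    /* 8 -> 4 -> 2 -> 1 -> 1 -> 1 ...; DIV_ROUND_UP(1, 2) == 1, never 0,
     * so "while (pages)" could spin forever once one-page allocations
     * start failing, whereas "while (pages > 1)" terminates. */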
-3
drivers/net/wireless/intel/iwlwifi/iwl-drv.c
··· 1817 1817 module_param_named(nvm_file, iwlwifi_mod_params.nvm_file, charp, 0444); 1818 1818 MODULE_PARM_DESC(nvm_file, "NVM file name"); 1819 1819 1820 - module_param_named(lar_disable, iwlwifi_mod_params.lar_disable, bool, 0444); 1821 - MODULE_PARM_DESC(lar_disable, "disable LAR functionality (default: N)"); 1822 - 1823 1820 module_param_named(uapsd_disable, iwlwifi_mod_params.uapsd_disable, uint, 0644); 1824 1821 MODULE_PARM_DESC(uapsd_disable, 1825 1822 "disable U-APSD functionality bitmap 1: BSS 2: P2P Client (default: 3)");
-2
drivers/net/wireless/intel/iwlwifi/iwl-modparams.h
··· 115 115 * @nvm_file: specifies a external NVM file 116 116 * @uapsd_disable: disable U-APSD, see &enum iwl_uapsd_disable, default = 117 117 * IWL_DISABLE_UAPSD_BSS | IWL_DISABLE_UAPSD_P2P_CLIENT 118 - * @lar_disable: disable LAR (regulatory), default = 0 119 118 * @fw_monitor: allow to use firmware monitor 120 119 * @disable_11ac: disable VHT capabilities, default = false. 121 120 * @remove_when_gone: remove an inaccessible device from the PCIe bus. ··· 135 136 int antenna_coupling; 136 137 char *nvm_file; 137 138 u32 uapsd_disable; 138 - bool lar_disable; 139 139 bool fw_monitor; 140 140 bool disable_11ac; 141 141 /**
+53 -8
drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
··· 224 224 NVM_CHANNEL_DC_HIGH = BIT(12), 225 225 }; 226 226 227 + /** 228 + * enum iwl_reg_capa_flags - global flags applied for the whole regulatory 229 + * domain. 230 + * @REG_CAPA_BF_CCD_LOW_BAND: Beam-forming or Cyclic Delay Diversity in the 231 + * 2.4Ghz band is allowed. 232 + * @REG_CAPA_BF_CCD_HIGH_BAND: Beam-forming or Cyclic Delay Diversity in the 233 + * 5Ghz band is allowed. 234 + * @REG_CAPA_160MHZ_ALLOWED: 11ac channel with a width of 160Mhz is allowed 235 + * for this regulatory domain (valid only in 5Ghz). 236 + * @REG_CAPA_80MHZ_ALLOWED: 11ac channel with a width of 80Mhz is allowed 237 + * for this regulatory domain (valid only in 5Ghz). 238 + * @REG_CAPA_MCS_8_ALLOWED: 11ac with MCS 8 is allowed. 239 + * @REG_CAPA_MCS_9_ALLOWED: 11ac with MCS 9 is allowed. 240 + * @REG_CAPA_40MHZ_FORBIDDEN: 11n channel with a width of 40Mhz is forbidden 241 + * for this regulatory domain (valid only in 5Ghz). 242 + * @REG_CAPA_DC_HIGH_ENABLED: DC HIGH allowed. 243 + */ 244 + enum iwl_reg_capa_flags { 245 + REG_CAPA_BF_CCD_LOW_BAND = BIT(0), 246 + REG_CAPA_BF_CCD_HIGH_BAND = BIT(1), 247 + REG_CAPA_160MHZ_ALLOWED = BIT(2), 248 + REG_CAPA_80MHZ_ALLOWED = BIT(3), 249 + REG_CAPA_MCS_8_ALLOWED = BIT(4), 250 + REG_CAPA_MCS_9_ALLOWED = BIT(5), 251 + REG_CAPA_40MHZ_FORBIDDEN = BIT(7), 252 + REG_CAPA_DC_HIGH_ENABLED = BIT(9), 253 + }; 254 + 227 255 static inline void iwl_nvm_print_channel_flags(struct device *dev, u32 level, 228 256 int chan, u32 flags) 229 257 { ··· 967 939 968 940 struct iwl_nvm_data * 969 941 iwl_parse_nvm_data(struct iwl_trans *trans, const struct iwl_cfg *cfg, 942 + const struct iwl_fw *fw, 970 943 const __be16 *nvm_hw, const __le16 *nvm_sw, 971 944 const __le16 *nvm_calib, const __le16 *regulatory, 972 945 const __le16 *mac_override, const __le16 *phy_sku, 973 - u8 tx_chains, u8 rx_chains, bool lar_fw_supported) 946 + u8 tx_chains, u8 rx_chains) 974 947 { 975 948 struct iwl_nvm_data *data; 976 949 bool lar_enabled; ··· 1051 1022 return NULL; 1052 1023 } 1053 1024 1054 - if (lar_fw_supported && lar_enabled) 1025 + if (lar_enabled && 1026 + fw_has_capa(&fw->ucode_capa, IWL_UCODE_TLV_CAPA_LAR_SUPPORT)) 1055 1027 sbands_flags |= IWL_NVM_SBANDS_FLAGS_LAR; 1056 1028 1057 1029 if (iwl_nvm_no_wide_in_5ghz(trans, cfg, nvm_hw)) ··· 1068 1038 1069 1039 static u32 iwl_nvm_get_regdom_bw_flags(const u16 *nvm_chan, 1070 1040 int ch_idx, u16 nvm_flags, 1041 + u16 cap_flags, 1071 1042 const struct iwl_cfg *cfg) 1072 1043 { 1073 1044 u32 flags = NL80211_RRF_NO_HT40; ··· 1107 1076 (flags & NL80211_RRF_NO_IR)) 1108 1077 flags |= NL80211_RRF_GO_CONCURRENT; 1109 1078 1079 + /* 1080 + * cap_flags is per regulatory domain so apply it for every channel 1081 + */ 1082 + if (ch_idx >= NUM_2GHZ_CHANNELS) { 1083 + if (cap_flags & REG_CAPA_40MHZ_FORBIDDEN) 1084 + flags |= NL80211_RRF_NO_HT40; 1085 + 1086 + if (!(cap_flags & REG_CAPA_80MHZ_ALLOWED)) 1087 + flags |= NL80211_RRF_NO_80MHZ; 1088 + 1089 + if (!(cap_flags & REG_CAPA_160MHZ_ALLOWED)) 1090 + flags |= NL80211_RRF_NO_160MHZ; 1091 + } 1092 + 1110 1093 return flags; 1111 1094 } 1112 1095 1113 1096 struct ieee80211_regdomain * 1114 1097 iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg, 1115 1098 int num_of_ch, __le32 *channels, u16 fw_mcc, 1116 - u16 geo_info) 1099 + u16 geo_info, u16 cap) 1117 1100 { 1118 1101 int ch_idx; 1119 1102 u16 ch_flags; ··· 1185 1140 } 1186 1141 1187 1142 reg_rule_flags = iwl_nvm_get_regdom_bw_flags(nvm_chan, ch_idx, 1188 - ch_flags, cfg); 1143 + ch_flags, cap, 1144 + cfg); 1189 1145 1190 1146 /* we 
can't continue the same rule */ 1191 1147 if (ch_idx == 0 || prev_reg_rule_flags != reg_rule_flags || ··· 1451 1405 .id = WIDE_ID(REGULATORY_AND_NVM_GROUP, NVM_GET_INFO) 1452 1406 }; 1453 1407 int ret; 1454 - bool lar_fw_supported = !iwlwifi_mod_params.lar_disable && 1455 - fw_has_capa(&fw->ucode_capa, 1456 - IWL_UCODE_TLV_CAPA_LAR_SUPPORT); 1457 1408 bool empty_otp; 1458 1409 u32 mac_flags; 1459 1410 u32 sbands_flags = 0; ··· 1528 1485 nvm->valid_tx_ant = (u8)le32_to_cpu(rsp->phy_sku.tx_chains); 1529 1486 nvm->valid_rx_ant = (u8)le32_to_cpu(rsp->phy_sku.rx_chains); 1530 1487 1531 - if (le32_to_cpu(rsp->regulatory.lar_enabled) && lar_fw_supported) { 1488 + if (le32_to_cpu(rsp->regulatory.lar_enabled) && 1489 + fw_has_capa(&fw->ucode_capa, 1490 + IWL_UCODE_TLV_CAPA_LAR_SUPPORT)) { 1532 1491 nvm->lar_enabled = true; 1533 1492 sbands_flags |= IWL_NVM_SBANDS_FLAGS_LAR; 1534 1493 }
+5 -4
drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.h
··· 7 7 * 8 8 * Copyright(c) 2008 - 2015 Intel Corporation. All rights reserved. 9 9 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH 10 - * Copyright(c) 2018 Intel Corporation 10 + * Copyright(c) 2018 - 2019 Intel Corporation 11 11 * 12 12 * This program is free software; you can redistribute it and/or modify 13 13 * it under the terms of version 2 of the GNU General Public License as ··· 29 29 * 30 30 * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved. 31 31 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH 32 - * Copyright(c) 2018 Intel Corporation 32 + * Copyright(c) 2018 - 2019 Intel Corporation 33 33 * All rights reserved. 34 34 * 35 35 * Redistribution and use in source and binary forms, with or without ··· 85 85 */ 86 86 struct iwl_nvm_data * 87 87 iwl_parse_nvm_data(struct iwl_trans *trans, const struct iwl_cfg *cfg, 88 + const struct iwl_fw *fw, 88 89 const __be16 *nvm_hw, const __le16 *nvm_sw, 89 90 const __le16 *nvm_calib, const __le16 *regulatory, 90 91 const __le16 *mac_override, const __le16 *phy_sku, 91 - u8 tx_chains, u8 rx_chains, bool lar_fw_supported); 92 + u8 tx_chains, u8 rx_chains); 92 93 93 94 /** 94 95 * iwl_parse_mcc_info - parse MCC (mobile country code) info coming from FW ··· 104 103 struct ieee80211_regdomain * 105 104 iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg, 106 105 int num_of_ch, __le32 *channels, u16 fw_mcc, 107 - u16 geo_info); 106 + u16 geo_info, u16 cap); 108 107 109 108 /** 110 109 * struct iwl_nvm_section - describes an NVM section in memory.
+5 -5
drivers/net/wireless/intel/iwlwifi/iwl-trans.c
··· 66 66 67 67 struct iwl_trans *iwl_trans_alloc(unsigned int priv_size, 68 68 struct device *dev, 69 - const struct iwl_trans_ops *ops) 69 + const struct iwl_trans_ops *ops, 70 + unsigned int cmd_pool_size, 71 + unsigned int cmd_pool_align) 70 72 { 71 73 struct iwl_trans *trans; 72 74 #ifdef CONFIG_LOCKDEP ··· 92 90 "iwl_cmd_pool:%s", dev_name(trans->dev)); 93 91 trans->dev_cmd_pool = 94 92 kmem_cache_create(trans->dev_cmd_pool_name, 95 - sizeof(struct iwl_device_cmd), 96 - sizeof(void *), 97 - SLAB_HWCACHE_ALIGN, 98 - NULL); 93 + cmd_pool_size, cmd_pool_align, 94 + SLAB_HWCACHE_ALIGN, NULL); 99 95 if (!trans->dev_cmd_pool) 100 96 return NULL; 101 97
+20 -6
drivers/net/wireless/intel/iwlwifi/iwl-trans.h
··· 193 193 }; 194 194 } __packed; 195 195 196 + /** 197 + * struct iwl_device_tx_cmd - buffer for TX command 198 + * @hdr: the header 199 + * @payload: the payload placeholder 200 + * 201 + * The actual structure is sized dynamically according to need. 202 + */ 203 + struct iwl_device_tx_cmd { 204 + struct iwl_cmd_header hdr; 205 + u8 payload[]; 206 + } __packed; 207 + 196 208 #define TFD_MAX_PAYLOAD_SIZE (sizeof(struct iwl_device_cmd)) 197 209 198 210 /* ··· 556 544 int (*send_cmd)(struct iwl_trans *trans, struct iwl_host_cmd *cmd); 557 545 558 546 int (*tx)(struct iwl_trans *trans, struct sk_buff *skb, 559 - struct iwl_device_cmd *dev_cmd, int queue); 547 + struct iwl_device_tx_cmd *dev_cmd, int queue); 560 548 void (*reclaim)(struct iwl_trans *trans, int queue, int ssn, 561 549 struct sk_buff_head *skbs); 562 550 ··· 960 948 return trans->ops->dump_data(trans, dump_mask); 961 949 } 962 950 963 - static inline struct iwl_device_cmd * 951 + static inline struct iwl_device_tx_cmd * 964 952 iwl_trans_alloc_tx_cmd(struct iwl_trans *trans) 965 953 { 966 - return kmem_cache_alloc(trans->dev_cmd_pool, GFP_ATOMIC); 954 + return kmem_cache_zalloc(trans->dev_cmd_pool, GFP_ATOMIC); 967 955 } 968 956 969 957 int iwl_trans_send_cmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd); 970 958 971 959 static inline void iwl_trans_free_tx_cmd(struct iwl_trans *trans, 972 - struct iwl_device_cmd *dev_cmd) 960 + struct iwl_device_tx_cmd *dev_cmd) 973 961 { 974 962 kmem_cache_free(trans->dev_cmd_pool, dev_cmd); 975 963 } 976 964 977 965 static inline int iwl_trans_tx(struct iwl_trans *trans, struct sk_buff *skb, 978 - struct iwl_device_cmd *dev_cmd, int queue) 966 + struct iwl_device_tx_cmd *dev_cmd, int queue) 979 967 { 980 968 if (unlikely(test_bit(STATUS_FW_ERROR, &trans->status))) 981 969 return -EIO; ··· 1283 1271 *****************************************************/ 1284 1272 struct iwl_trans *iwl_trans_alloc(unsigned int priv_size, 1285 1273 struct device *dev, 1286 - const struct iwl_trans_ops *ops); 1274 + const struct iwl_trans_ops *ops, 1275 + unsigned int cmd_pool_size, 1276 + unsigned int cmd_pool_align); 1287 1277 void iwl_trans_free(struct iwl_trans *trans); 1288 1278 1289 1279 /*****************************************************
+1
drivers/net/wireless/intel/iwlwifi/mvm/constants.h
··· 154 154 #define IWL_MVM_D3_DEBUG false 155 155 #define IWL_MVM_USE_TWT false 156 156 #define IWL_MVM_AMPDU_CONSEC_DROPS_DELBA 10 157 + #define IWL_MVM_USE_NSSN_SYNC 0 157 158 158 159 #endif /* __MVM_CONSTANTS_H */
+6 -2
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
··· 841 841 return 0; 842 842 } 843 843 844 + if (!mvm->fwrt.ppag_table.enabled) { 845 + IWL_DEBUG_RADIO(mvm, 846 + "PPAG not enabled, command not sent.\n"); 847 + return 0; 848 + } 849 + 844 850 IWL_DEBUG_RADIO(mvm, "Sending PER_PLATFORM_ANT_GAIN_CMD\n"); 845 - IWL_DEBUG_RADIO(mvm, "PPAG is %s\n", 846 - mvm->fwrt.ppag_table.enabled ? "enabled" : "disabled"); 847 851 848 852 for (i = 0; i < ACPI_PPAG_NUM_CHAINS; i++) { 849 853 for (j = 0; j < ACPI_PPAG_NUM_SUB_BANDS; j++) {
+144 -13
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 256 256 __le32_to_cpu(resp->n_channels), 257 257 resp->channels, 258 258 __le16_to_cpu(resp->mcc), 259 - __le16_to_cpu(resp->geo_info)); 259 + __le16_to_cpu(resp->geo_info), 260 + __le16_to_cpu(resp->cap)); 260 261 /* Store the return source id */ 261 262 src_id = resp->source_id; 262 263 kfree(resp); ··· 755 754 return ret; 756 755 } 757 756 757 + static void iwl_mvm_tx_skb(struct iwl_mvm *mvm, struct sk_buff *skb, 758 + struct ieee80211_sta *sta) 759 + { 760 + if (likely(sta)) { 761 + if (likely(iwl_mvm_tx_skb_sta(mvm, skb, sta) == 0)) 762 + return; 763 + } else { 764 + if (likely(iwl_mvm_tx_skb_non_sta(mvm, skb) == 0)) 765 + return; 766 + } 767 + 768 + ieee80211_free_txskb(mvm->hw, skb); 769 + } 770 + 758 771 static void iwl_mvm_mac_tx(struct ieee80211_hw *hw, 759 772 struct ieee80211_tx_control *control, 760 773 struct sk_buff *skb) ··· 812 797 } 813 798 } 814 799 815 - if (sta) { 816 - if (iwl_mvm_tx_skb(mvm, skb, sta)) 817 - goto drop; 818 - return; 819 - } 820 - 821 - if (iwl_mvm_tx_skb_non_sta(mvm, skb)) 822 - goto drop; 800 + iwl_mvm_tx_skb(mvm, skb, sta); 823 801 return; 824 802 drop: 825 803 ieee80211_free_txskb(hw, skb); ··· 862 854 break; 863 855 } 864 856 865 - if (!txq->sta) 866 - iwl_mvm_tx_skb_non_sta(mvm, skb); 867 - else 868 - iwl_mvm_tx_skb(mvm, skb, txq->sta); 857 + iwl_mvm_tx_skb(mvm, skb, txq->sta); 869 858 } 870 859 } while (atomic_dec_return(&mvmtxq->tx_request)); 871 860 rcu_read_unlock(); ··· 4776 4771 return ret; 4777 4772 } 4778 4773 4774 + static void iwl_mvm_set_sta_rate(u32 rate_n_flags, struct rate_info *rinfo) 4775 + { 4776 + switch (rate_n_flags & RATE_MCS_CHAN_WIDTH_MSK) { 4777 + case RATE_MCS_CHAN_WIDTH_20: 4778 + rinfo->bw = RATE_INFO_BW_20; 4779 + break; 4780 + case RATE_MCS_CHAN_WIDTH_40: 4781 + rinfo->bw = RATE_INFO_BW_40; 4782 + break; 4783 + case RATE_MCS_CHAN_WIDTH_80: 4784 + rinfo->bw = RATE_INFO_BW_80; 4785 + break; 4786 + case RATE_MCS_CHAN_WIDTH_160: 4787 + rinfo->bw = RATE_INFO_BW_160; 4788 + break; 4789 + } 4790 + 4791 + if (rate_n_flags & RATE_MCS_HT_MSK) { 4792 + rinfo->flags |= RATE_INFO_FLAGS_MCS; 4793 + rinfo->mcs = u32_get_bits(rate_n_flags, RATE_HT_MCS_INDEX_MSK); 4794 + rinfo->nss = u32_get_bits(rate_n_flags, 4795 + RATE_HT_MCS_NSS_MSK) + 1; 4796 + if (rate_n_flags & RATE_MCS_SGI_MSK) 4797 + rinfo->flags |= RATE_INFO_FLAGS_SHORT_GI; 4798 + } else if (rate_n_flags & RATE_MCS_VHT_MSK) { 4799 + rinfo->flags |= RATE_INFO_FLAGS_VHT_MCS; 4800 + rinfo->mcs = u32_get_bits(rate_n_flags, 4801 + RATE_VHT_MCS_RATE_CODE_MSK); 4802 + rinfo->nss = u32_get_bits(rate_n_flags, 4803 + RATE_VHT_MCS_NSS_MSK) + 1; 4804 + if (rate_n_flags & RATE_MCS_SGI_MSK) 4805 + rinfo->flags |= RATE_INFO_FLAGS_SHORT_GI; 4806 + } else if (rate_n_flags & RATE_MCS_HE_MSK) { 4807 + u32 gi_ltf = u32_get_bits(rate_n_flags, 4808 + RATE_MCS_HE_GI_LTF_MSK); 4809 + 4810 + rinfo->flags |= RATE_INFO_FLAGS_HE_MCS; 4811 + rinfo->mcs = u32_get_bits(rate_n_flags, 4812 + RATE_VHT_MCS_RATE_CODE_MSK); 4813 + rinfo->nss = u32_get_bits(rate_n_flags, 4814 + RATE_VHT_MCS_NSS_MSK) + 1; 4815 + 4816 + if (rate_n_flags & RATE_MCS_HE_106T_MSK) { 4817 + rinfo->bw = RATE_INFO_BW_HE_RU; 4818 + rinfo->he_ru_alloc = NL80211_RATE_INFO_HE_RU_ALLOC_106; 4819 + } 4820 + 4821 + switch (rate_n_flags & RATE_MCS_HE_TYPE_MSK) { 4822 + case RATE_MCS_HE_TYPE_SU: 4823 + case RATE_MCS_HE_TYPE_EXT_SU: 4824 + if (gi_ltf == 0 || gi_ltf == 1) 4825 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_0_8; 4826 + else if (gi_ltf == 2) 4827 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_1_6; 4828 + else if (rate_n_flags & 
RATE_MCS_SGI_MSK) 4829 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_0_8; 4830 + else 4831 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_3_2; 4832 + break; 4833 + case RATE_MCS_HE_TYPE_MU: 4834 + if (gi_ltf == 0 || gi_ltf == 1) 4835 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_0_8; 4836 + else if (gi_ltf == 2) 4837 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_1_6; 4838 + else 4839 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_3_2; 4840 + break; 4841 + case RATE_MCS_HE_TYPE_TRIG: 4842 + if (gi_ltf == 0 || gi_ltf == 1) 4843 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_1_6; 4844 + else 4845 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_3_2; 4846 + break; 4847 + } 4848 + 4849 + if (rate_n_flags & RATE_HE_DUAL_CARRIER_MODE_MSK) 4850 + rinfo->he_dcm = 1; 4851 + } else { 4852 + switch (u32_get_bits(rate_n_flags, RATE_LEGACY_RATE_MSK)) { 4853 + case IWL_RATE_1M_PLCP: 4854 + rinfo->legacy = 10; 4855 + break; 4856 + case IWL_RATE_2M_PLCP: 4857 + rinfo->legacy = 20; 4858 + break; 4859 + case IWL_RATE_5M_PLCP: 4860 + rinfo->legacy = 55; 4861 + break; 4862 + case IWL_RATE_11M_PLCP: 4863 + rinfo->legacy = 110; 4864 + break; 4865 + case IWL_RATE_6M_PLCP: 4866 + rinfo->legacy = 60; 4867 + break; 4868 + case IWL_RATE_9M_PLCP: 4869 + rinfo->legacy = 90; 4870 + break; 4871 + case IWL_RATE_12M_PLCP: 4872 + rinfo->legacy = 120; 4873 + break; 4874 + case IWL_RATE_18M_PLCP: 4875 + rinfo->legacy = 180; 4876 + break; 4877 + case IWL_RATE_24M_PLCP: 4878 + rinfo->legacy = 240; 4879 + break; 4880 + case IWL_RATE_36M_PLCP: 4881 + rinfo->legacy = 360; 4882 + break; 4883 + case IWL_RATE_48M_PLCP: 4884 + rinfo->legacy = 480; 4885 + break; 4886 + case IWL_RATE_54M_PLCP: 4887 + rinfo->legacy = 540; 4888 + break; 4889 + } 4890 + } 4891 + } 4892 + 4779 4893 static void iwl_mvm_mac_sta_statistics(struct ieee80211_hw *hw, 4780 4894 struct ieee80211_vif *vif, 4781 4895 struct ieee80211_sta *sta, ··· 4907 4783 if (mvmsta->avg_energy) { 4908 4784 sinfo->signal_avg = mvmsta->avg_energy; 4909 4785 sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL_AVG); 4786 + } 4787 + 4788 + if (iwl_mvm_has_tlc_offload(mvm)) { 4789 + struct iwl_lq_sta_rs_fw *lq_sta = &mvmsta->lq_sta.rs_fw; 4790 + 4791 + iwl_mvm_set_sta_rate(lq_sta->last_rate_n_flags, &sinfo->txrate); 4792 + sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BITRATE); 4910 4793 } 4911 4794 4912 4795 /* if beacon filtering isn't on mac80211 does it anyway */
+2 -5
drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
··· 1298 1298 bool tlv_lar = fw_has_capa(&mvm->fw->ucode_capa, 1299 1299 IWL_UCODE_TLV_CAPA_LAR_SUPPORT); 1300 1300 1301 - if (iwlwifi_mod_params.lar_disable) 1302 - return false; 1303 - 1304 1301 /* 1305 1302 * Enable LAR only if it is supported by the FW (TLV) && 1306 1303 * enabled in the NVM ··· 1505 1508 int __must_check iwl_mvm_send_cmd_pdu_status(struct iwl_mvm *mvm, u32 id, 1506 1509 u16 len, const void *data, 1507 1510 u32 *status); 1508 - int iwl_mvm_tx_skb(struct iwl_mvm *mvm, struct sk_buff *skb, 1509 - struct ieee80211_sta *sta); 1511 + int iwl_mvm_tx_skb_sta(struct iwl_mvm *mvm, struct sk_buff *skb, 1512 + struct ieee80211_sta *sta); 1510 1513 int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb); 1511 1514 void iwl_mvm_set_tx_cmd(struct iwl_mvm *mvm, struct sk_buff *skb, 1512 1515 struct iwl_tx_cmd *tx_cmd,
+3 -9
drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
··· 277 277 struct iwl_nvm_section *sections = mvm->nvm_sections; 278 278 const __be16 *hw; 279 279 const __le16 *sw, *calib, *regulatory, *mac_override, *phy_sku; 280 - bool lar_enabled; 281 280 int regulatory_type; 282 281 283 282 /* Checking for required sections */ 284 - if (mvm->trans->cfg->nvm_type != IWL_NVM_EXT) { 283 + if (mvm->trans->cfg->nvm_type == IWL_NVM) { 285 284 if (!mvm->nvm_sections[NVM_SECTION_TYPE_SW].data || 286 285 !mvm->nvm_sections[mvm->cfg->nvm_hw_section_num].data) { 287 286 IWL_ERR(mvm, "Can't parse empty OTP/NVM sections\n"); ··· 326 327 (const __le16 *)sections[NVM_SECTION_TYPE_REGULATORY_SDP].data : 327 328 (const __le16 *)sections[NVM_SECTION_TYPE_REGULATORY].data; 328 329 329 - lar_enabled = !iwlwifi_mod_params.lar_disable && 330 - fw_has_capa(&mvm->fw->ucode_capa, 331 - IWL_UCODE_TLV_CAPA_LAR_SUPPORT); 332 - 333 - return iwl_parse_nvm_data(mvm->trans, mvm->cfg, hw, sw, calib, 330 + return iwl_parse_nvm_data(mvm->trans, mvm->cfg, mvm->fw, hw, sw, calib, 334 331 regulatory, mac_override, phy_sku, 335 - mvm->fw->valid_tx_ant, mvm->fw->valid_rx_ant, 336 - lar_enabled); 332 + mvm->fw->valid_tx_ant, mvm->fw->valid_rx_ant); 337 333 } 338 334 339 335 /* Loads the NVM data stored in mvm->nvm_sections into the NIC */
+10 -7
drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
··· 514 514 515 515 static void iwl_mvm_sync_nssn(struct iwl_mvm *mvm, u8 baid, u16 nssn) 516 516 { 517 - struct iwl_mvm_rss_sync_notif notif = { 518 - .metadata.type = IWL_MVM_RXQ_NSSN_SYNC, 519 - .metadata.sync = 0, 520 - .nssn_sync.baid = baid, 521 - .nssn_sync.nssn = nssn, 522 - }; 517 + if (IWL_MVM_USE_NSSN_SYNC) { 518 + struct iwl_mvm_rss_sync_notif notif = { 519 + .metadata.type = IWL_MVM_RXQ_NSSN_SYNC, 520 + .metadata.sync = 0, 521 + .nssn_sync.baid = baid, 522 + .nssn_sync.nssn = nssn, 523 + }; 523 524 524 - iwl_mvm_sync_rx_queues_internal(mvm, (void *)&notif, sizeof(notif)); 525 + iwl_mvm_sync_rx_queues_internal(mvm, (void *)&notif, 526 + sizeof(notif)); 527 + } 525 528 } 526 529 527 530 #define RX_REORDER_BUF_TIMEOUT_MQ (HZ / 10)
+1 -1
drivers/net/wireless/intel/iwlwifi/mvm/scan.c
··· 1213 1213 cmd_size = sizeof(struct iwl_scan_config_v2); 1214 1214 else 1215 1215 cmd_size = sizeof(struct iwl_scan_config_v1); 1216 - cmd_size += num_channels; 1216 + cmd_size += mvm->fw->ucode_capa.n_scan_channels; 1217 1217 1218 1218 cfg = kzalloc(cmd_size, GFP_KERNEL); 1219 1219 if (!cfg)
+8 -13
drivers/net/wireless/intel/iwlwifi/mvm/tx.c
··· 490 490 /* 491 491 * Allocates and sets the Tx cmd the driver data pointers in the skb 492 492 */ 493 - static struct iwl_device_cmd * 493 + static struct iwl_device_tx_cmd * 494 494 iwl_mvm_set_tx_params(struct iwl_mvm *mvm, struct sk_buff *skb, 495 495 struct ieee80211_tx_info *info, int hdrlen, 496 496 struct ieee80211_sta *sta, u8 sta_id) 497 497 { 498 498 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; 499 - struct iwl_device_cmd *dev_cmd; 499 + struct iwl_device_tx_cmd *dev_cmd; 500 500 struct iwl_tx_cmd *tx_cmd; 501 501 502 502 dev_cmd = iwl_trans_alloc_tx_cmd(mvm->trans); ··· 504 504 if (unlikely(!dev_cmd)) 505 505 return NULL; 506 506 507 - /* Make sure we zero enough of dev_cmd */ 508 - BUILD_BUG_ON(sizeof(struct iwl_tx_cmd_gen2) > sizeof(*tx_cmd)); 509 - BUILD_BUG_ON(sizeof(struct iwl_tx_cmd_gen3) > sizeof(*tx_cmd)); 510 - 511 - memset(dev_cmd, 0, sizeof(dev_cmd->hdr) + sizeof(*tx_cmd)); 512 507 dev_cmd->hdr.cmd = TX_CMD; 513 508 514 509 if (iwl_mvm_has_new_tx_api(mvm)) { ··· 592 597 } 593 598 594 599 static void iwl_mvm_skb_prepare_status(struct sk_buff *skb, 595 - struct iwl_device_cmd *cmd) 600 + struct iwl_device_tx_cmd *cmd) 596 601 { 597 602 struct ieee80211_tx_info *skb_info = IEEE80211_SKB_CB(skb); 598 603 ··· 711 716 { 712 717 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; 713 718 struct ieee80211_tx_info info; 714 - struct iwl_device_cmd *dev_cmd; 719 + struct iwl_device_tx_cmd *dev_cmd; 715 720 u8 sta_id; 716 721 int hdrlen = ieee80211_hdrlen(hdr->frame_control); 717 722 __le16 fc = hdr->frame_control; ··· 1073 1078 { 1074 1079 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; 1075 1080 struct iwl_mvm_sta *mvmsta; 1076 - struct iwl_device_cmd *dev_cmd; 1081 + struct iwl_device_tx_cmd *dev_cmd; 1077 1082 __le16 fc; 1078 1083 u16 seq_number = 0; 1079 1084 u8 tid = IWL_MAX_TID_COUNT; ··· 1149 1154 if (WARN_ONCE(txq_id == IWL_MVM_INVALID_QUEUE, "Invalid TXQ id")) { 1150 1155 iwl_trans_free_tx_cmd(mvm->trans, dev_cmd); 1151 1156 spin_unlock(&mvmsta->lock); 1152 - return 0; 1157 + return -1; 1153 1158 } 1154 1159 1155 1160 if (!iwl_mvm_has_new_tx_api(mvm)) { ··· 1201 1206 return -1; 1202 1207 } 1203 1208 1204 - int iwl_mvm_tx_skb(struct iwl_mvm *mvm, struct sk_buff *skb, 1205 - struct ieee80211_sta *sta) 1209 + int iwl_mvm_tx_skb_sta(struct iwl_mvm *mvm, struct sk_buff *skb, 1210 + struct ieee80211_sta *sta) 1206 1211 { 1207 1212 struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); 1208 1213 struct ieee80211_tx_info info;
+42 -3
drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
··· 57 57 #include "internal.h" 58 58 #include "iwl-prph.h" 59 59 60 + static void *_iwl_pcie_ctxt_info_dma_alloc_coherent(struct iwl_trans *trans, 61 + size_t size, 62 + dma_addr_t *phys, 63 + int depth) 64 + { 65 + void *result; 66 + 67 + if (WARN(depth > 2, 68 + "failed to allocate DMA memory not crossing 2^32 boundary")) 69 + return NULL; 70 + 71 + result = dma_alloc_coherent(trans->dev, size, phys, GFP_KERNEL); 72 + 73 + if (!result) 74 + return NULL; 75 + 76 + if (unlikely(iwl_pcie_crosses_4g_boundary(*phys, size))) { 77 + void *old = result; 78 + dma_addr_t oldphys = *phys; 79 + 80 + result = _iwl_pcie_ctxt_info_dma_alloc_coherent(trans, size, 81 + phys, 82 + depth + 1); 83 + dma_free_coherent(trans->dev, size, old, oldphys); 84 + } 85 + 86 + return result; 87 + } 88 + 89 + static void *iwl_pcie_ctxt_info_dma_alloc_coherent(struct iwl_trans *trans, 90 + size_t size, 91 + dma_addr_t *phys) 92 + { 93 + return _iwl_pcie_ctxt_info_dma_alloc_coherent(trans, size, phys, 0); 94 + } 95 + 60 96 void iwl_pcie_ctxt_info_free_paging(struct iwl_trans *trans) 61 97 { 62 98 struct iwl_self_init_dram *dram = &trans->init_dram; ··· 197 161 struct iwl_context_info *ctxt_info; 198 162 struct iwl_context_info_rbd_cfg *rx_cfg; 199 163 u32 control_flags = 0, rb_size; 164 + dma_addr_t phys; 200 165 int ret; 201 166 202 - ctxt_info = dma_alloc_coherent(trans->dev, sizeof(*ctxt_info), 203 - &trans_pcie->ctxt_info_dma_addr, 204 - GFP_KERNEL); 167 + ctxt_info = iwl_pcie_ctxt_info_dma_alloc_coherent(trans, 168 + sizeof(*ctxt_info), 169 + &phys); 205 170 if (!ctxt_info) 206 171 return -ENOMEM; 172 + 173 + trans_pcie->ctxt_info_dma_addr = phys; 207 174 208 175 ctxt_info->version.version = 0; 209 176 ctxt_info->version.mac_id =
+15 -4
drivers/net/wireless/intel/iwlwifi/pcie/internal.h
··· 305 305 #define IWL_FIRST_TB_SIZE_ALIGN ALIGN(IWL_FIRST_TB_SIZE, 64) 306 306 307 307 struct iwl_pcie_txq_entry { 308 - struct iwl_device_cmd *cmd; 308 + void *cmd; 309 309 struct sk_buff *skb; 310 310 /* buffer to free after command completes */ 311 311 const void *free_buf; ··· 672 672 /***************************************************** 673 673 * TX / HCMD 674 674 ******************************************************/ 675 + /* 676 + * We need this inline in case dma_addr_t is only 32-bits - since the 677 + * hardware is always 64-bit, the issue can still occur in that case, 678 + * so use u64 for 'phys' here to force the addition in 64-bit. 679 + */ 680 + static inline bool iwl_pcie_crosses_4g_boundary(u64 phys, u16 len) 681 + { 682 + return upper_32_bits(phys) != upper_32_bits(phys + len); 683 + } 684 + 675 685 int iwl_pcie_tx_init(struct iwl_trans *trans); 676 686 int iwl_pcie_gen2_tx_init(struct iwl_trans *trans, int txq_id, 677 687 int queue_size); ··· 698 688 void iwl_trans_pcie_log_scd_error(struct iwl_trans *trans, 699 689 struct iwl_txq *txq); 700 690 int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, 701 - struct iwl_device_cmd *dev_cmd, int txq_id); 691 + struct iwl_device_tx_cmd *dev_cmd, int txq_id); 702 692 void iwl_pcie_txq_check_wrptrs(struct iwl_trans *trans); 703 693 int iwl_trans_pcie_send_hcmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd); 704 694 void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx); ··· 1092 1082 void iwl_pcie_free_tso_page(struct iwl_trans_pcie *trans_pcie, 1093 1083 struct sk_buff *skb); 1094 1084 #ifdef CONFIG_INET 1095 - struct iwl_tso_hdr_page *get_page_hdr(struct iwl_trans *trans, size_t len); 1085 + struct iwl_tso_hdr_page *get_page_hdr(struct iwl_trans *trans, size_t len, 1086 + struct sk_buff *skb); 1096 1087 #endif 1097 1088 1098 1089 /* common functions that are used by gen3 transport */ ··· 1117 1106 unsigned int timeout); 1118 1107 void iwl_trans_pcie_dyn_txq_free(struct iwl_trans *trans, int queue); 1119 1108 int iwl_trans_pcie_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb, 1120 - struct iwl_device_cmd *dev_cmd, int txq_id); 1109 + struct iwl_device_tx_cmd *dev_cmd, int txq_id); 1121 1110 int iwl_trans_pcie_gen2_send_hcmd(struct iwl_trans *trans, 1122 1111 struct iwl_host_cmd *cmd); 1123 1112 void iwl_trans_pcie_gen2_stop_device(struct iwl_trans *trans);
+2 -2
drivers/net/wireless/intel/iwlwifi/pcie/rx.c
··· 1529 1529 1530 1530 napi = &rxq->napi; 1531 1531 if (napi->poll) { 1532 + napi_gro_flush(napi, false); 1533 + 1532 1534 if (napi->rx_count) { 1533 1535 netif_receive_skb_list(&napi->rx_list); 1534 1536 INIT_LIST_HEAD(&napi->rx_list); 1535 1537 napi->rx_count = 0; 1536 1538 } 1537 - 1538 - napi_gro_flush(napi, false); 1539 1539 } 1540 1540 1541 1541 iwl_pcie_rxq_restock(trans, rxq);
+29 -18
drivers/net/wireless/intel/iwlwifi/pcie/trans.c
··· 79 79 #include "iwl-agn-hw.h" 80 80 #include "fw/error-dump.h" 81 81 #include "fw/dbg.h" 82 + #include "fw/api/tx.h" 82 83 #include "internal.h" 83 84 #include "iwl-fh.h" 84 85 ··· 302 301 u16 cap; 303 302 304 303 /* 305 - * HW bug W/A for instability in PCIe bus L0S->L1 transition. 306 - * Check if BIOS (or OS) enabled L1-ASPM on this device. 307 - * If so (likely), disable L0S, so device moves directly L0->L1; 308 - * costs negligible amount of power savings. 309 - * If not (unlikely), enable L0S, so there is at least some 310 - * power savings, even without L1. 304 + * L0S states have been found to be unstable with our devices 305 + * and in newer hardware they are not officially supported at 306 + * all, so we must always set the L0S_DISABLED bit. 311 307 */ 308 + iwl_set_bit(trans, CSR_GIO_REG, CSR_GIO_REG_VAL_L0S_DISABLED); 309 + 312 310 pcie_capability_read_word(trans_pcie->pci_dev, PCI_EXP_LNKCTL, &lctl); 313 - if (lctl & PCI_EXP_LNKCTL_ASPM_L1) 314 - iwl_set_bit(trans, CSR_GIO_REG, CSR_GIO_REG_VAL_L0S_ENABLED); 315 - else 316 - iwl_clear_bit(trans, CSR_GIO_REG, CSR_GIO_REG_VAL_L0S_ENABLED); 317 311 trans->pm_support = !(lctl & PCI_EXP_LNKCTL_ASPM_L0S); 318 312 319 313 pcie_capability_read_word(trans_pcie->pci_dev, PCI_EXP_DEVCTL2, &cap); ··· 3456 3460 { 3457 3461 struct iwl_trans_pcie *trans_pcie; 3458 3462 struct iwl_trans *trans; 3459 - int ret, addr_size; 3463 + int ret, addr_size, txcmd_size, txcmd_align; 3464 + const struct iwl_trans_ops *ops = &trans_ops_pcie_gen2; 3465 + 3466 + if (!cfg_trans->gen2) { 3467 + ops = &trans_ops_pcie; 3468 + txcmd_size = sizeof(struct iwl_tx_cmd); 3469 + txcmd_align = sizeof(void *); 3470 + } else if (cfg_trans->device_family < IWL_DEVICE_FAMILY_AX210) { 3471 + txcmd_size = sizeof(struct iwl_tx_cmd_gen2); 3472 + txcmd_align = 64; 3473 + } else { 3474 + txcmd_size = sizeof(struct iwl_tx_cmd_gen3); 3475 + txcmd_align = 128; 3476 + } 3477 + 3478 + txcmd_size += sizeof(struct iwl_cmd_header); 3479 + txcmd_size += 36; /* biggest possible 802.11 header */ 3480 + 3481 + /* Ensure device TX cmd cannot reach/cross a page boundary in gen2 */ 3482 + if (WARN_ON(cfg_trans->gen2 && txcmd_size >= txcmd_align)) 3483 + return ERR_PTR(-EINVAL); 3460 3484 3461 3485 ret = pcim_enable_device(pdev); 3462 3486 if (ret) 3463 3487 return ERR_PTR(ret); 3464 3488 3465 - if (cfg_trans->gen2) 3466 - trans = iwl_trans_alloc(sizeof(struct iwl_trans_pcie), 3467 - &pdev->dev, &trans_ops_pcie_gen2); 3468 - else 3469 - trans = iwl_trans_alloc(sizeof(struct iwl_trans_pcie), 3470 - &pdev->dev, &trans_ops_pcie); 3471 - 3489 + trans = iwl_trans_alloc(sizeof(struct iwl_trans_pcie), &pdev->dev, ops, 3490 + txcmd_size, txcmd_align); 3472 3491 if (!trans) 3473 3492 return ERR_PTR(-ENOMEM); 3474 3493
+170 -38
drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
··· 221 221 int idx = iwl_pcie_gen2_get_num_tbs(trans, tfd); 222 222 struct iwl_tfh_tb *tb; 223 223 224 + /* 225 + * Only WARN here so we know about the issue, but we mess up our 226 + * unmap path because not every place currently checks for errors 227 + * returned from this function - it can only return an error if 228 + * there's no more space, and so when we know there is enough we 229 + * don't always check ... 230 + */ 231 + WARN(iwl_pcie_crosses_4g_boundary(addr, len), 232 + "possible DMA problem with iova:0x%llx, len:%d\n", 233 + (unsigned long long)addr, len); 234 + 224 235 if (WARN_ON(idx >= IWL_TFH_NUM_TBS)) 225 236 return -EINVAL; 226 237 tb = &tfd->tbs[idx]; ··· 251 240 return idx; 252 241 } 253 242 243 + static struct page *get_workaround_page(struct iwl_trans *trans, 244 + struct sk_buff *skb) 245 + { 246 + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 247 + struct page **page_ptr; 248 + struct page *ret; 249 + 250 + page_ptr = (void *)((u8 *)skb->cb + trans_pcie->page_offs); 251 + 252 + ret = alloc_page(GFP_ATOMIC); 253 + if (!ret) 254 + return NULL; 255 + 256 + /* set the chaining pointer to the previous page if there */ 257 + *(void **)(page_address(ret) + PAGE_SIZE - sizeof(void *)) = *page_ptr; 258 + *page_ptr = ret; 259 + 260 + return ret; 261 + } 262 + 263 + /* 264 + * Add a TB and if needed apply the FH HW bug workaround; 265 + * meta != NULL indicates that it's a page mapping and we 266 + * need to dma_unmap_page() and set the meta->tbs bit in 267 + * this case. 268 + */ 269 + static int iwl_pcie_gen2_set_tb_with_wa(struct iwl_trans *trans, 270 + struct sk_buff *skb, 271 + struct iwl_tfh_tfd *tfd, 272 + dma_addr_t phys, void *virt, 273 + u16 len, struct iwl_cmd_meta *meta) 274 + { 275 + dma_addr_t oldphys = phys; 276 + struct page *page; 277 + int ret; 278 + 279 + if (unlikely(dma_mapping_error(trans->dev, phys))) 280 + return -ENOMEM; 281 + 282 + if (likely(!iwl_pcie_crosses_4g_boundary(phys, len))) { 283 + ret = iwl_pcie_gen2_set_tb(trans, tfd, phys, len); 284 + 285 + if (ret < 0) 286 + goto unmap; 287 + 288 + if (meta) 289 + meta->tbs |= BIT(ret); 290 + 291 + ret = 0; 292 + goto trace; 293 + } 294 + 295 + /* 296 + * Work around a hardware bug. If (as expressed in the 297 + * condition above) the TB ends on a 32-bit boundary, 298 + * then the next TB may be accessed with the wrong 299 + * address. 300 + * To work around it, copy the data elsewhere and make 301 + * a new mapping for it so the device will not fail. 
302 + */ 303 + 304 + if (WARN_ON(len > PAGE_SIZE - sizeof(void *))) { 305 + ret = -ENOBUFS; 306 + goto unmap; 307 + } 308 + 309 + page = get_workaround_page(trans, skb); 310 + if (!page) { 311 + ret = -ENOMEM; 312 + goto unmap; 313 + } 314 + 315 + memcpy(page_address(page), virt, len); 316 + 317 + phys = dma_map_single(trans->dev, page_address(page), len, 318 + DMA_TO_DEVICE); 319 + if (unlikely(dma_mapping_error(trans->dev, phys))) 320 + return -ENOMEM; 321 + ret = iwl_pcie_gen2_set_tb(trans, tfd, phys, len); 322 + if (ret < 0) { 323 + /* unmap the new allocation as single */ 324 + oldphys = phys; 325 + meta = NULL; 326 + goto unmap; 327 + } 328 + IWL_WARN(trans, 329 + "TB bug workaround: copied %d bytes from 0x%llx to 0x%llx\n", 330 + len, (unsigned long long)oldphys, (unsigned long long)phys); 331 + 332 + ret = 0; 333 + unmap: 334 + if (meta) 335 + dma_unmap_page(trans->dev, oldphys, len, DMA_TO_DEVICE); 336 + else 337 + dma_unmap_single(trans->dev, oldphys, len, DMA_TO_DEVICE); 338 + trace: 339 + trace_iwlwifi_dev_tx_tb(trans->dev, skb, virt, phys, len); 340 + 341 + return ret; 342 + } 343 + 254 344 static int iwl_pcie_gen2_build_amsdu(struct iwl_trans *trans, 255 345 struct sk_buff *skb, 256 346 struct iwl_tfh_tfd *tfd, int start_len, 257 - u8 hdr_len, struct iwl_device_cmd *dev_cmd) 347 + u8 hdr_len, 348 + struct iwl_device_tx_cmd *dev_cmd) 258 349 { 259 350 #ifdef CONFIG_INET 260 - struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 261 351 struct iwl_tx_cmd_gen2 *tx_cmd = (void *)dev_cmd->payload; 262 352 struct ieee80211_hdr *hdr = (void *)skb->data; 263 353 unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room; ··· 366 254 u16 length, amsdu_pad; 367 255 u8 *start_hdr; 368 256 struct iwl_tso_hdr_page *hdr_page; 369 - struct page **page_ptr; 370 257 struct tso_t tso; 371 258 372 259 trace_iwlwifi_dev_tx(trans->dev, skb, tfd, sizeof(*tfd), ··· 381 270 (3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr)); 382 271 383 272 /* Our device supports 9 segments at most, it will fit in 1 page */ 384 - hdr_page = get_page_hdr(trans, hdr_room); 273 + hdr_page = get_page_hdr(trans, hdr_room, skb); 385 274 if (!hdr_page) 386 275 return -ENOMEM; 387 276 388 - get_page(hdr_page->page); 389 277 start_hdr = hdr_page->pos; 390 - page_ptr = (void *)((u8 *)skb->cb + trans_pcie->page_offs); 391 - *page_ptr = hdr_page->page; 392 278 393 279 /* 394 280 * Pull the ieee80211 header to be able to use TSO core, ··· 440 332 dev_kfree_skb(csum_skb); 441 333 goto out_err; 442 334 } 335 + /* 336 + * No need for _with_wa, this is from the TSO page and 337 + * we leave some space at the end of it so can't hit 338 + * the buggy scenario. 
339 + */ 443 340 iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, tb_len); 444 341 trace_iwlwifi_dev_tx_tb(trans->dev, skb, start_hdr, 445 342 tb_phys, tb_len); ··· 456 343 457 344 /* put the payload */ 458 345 while (data_left) { 346 + int ret; 347 + 459 348 tb_len = min_t(unsigned int, tso.size, data_left); 460 349 tb_phys = dma_map_single(trans->dev, tso.data, 461 350 tb_len, DMA_TO_DEVICE); 462 - if (unlikely(dma_mapping_error(trans->dev, tb_phys))) { 351 + ret = iwl_pcie_gen2_set_tb_with_wa(trans, skb, tfd, 352 + tb_phys, tso.data, 353 + tb_len, NULL); 354 + if (ret) { 463 355 dev_kfree_skb(csum_skb); 464 356 goto out_err; 465 357 } 466 - iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, tb_len); 467 - trace_iwlwifi_dev_tx_tb(trans->dev, skb, tso.data, 468 - tb_phys, tb_len); 469 358 470 359 data_left -= tb_len; 471 360 tso_build_data(skb, &tso, tb_len); ··· 487 372 static struct 488 373 iwl_tfh_tfd *iwl_pcie_gen2_build_tx_amsdu(struct iwl_trans *trans, 489 374 struct iwl_txq *txq, 490 - struct iwl_device_cmd *dev_cmd, 375 + struct iwl_device_tx_cmd *dev_cmd, 491 376 struct sk_buff *skb, 492 377 struct iwl_cmd_meta *out_meta, 493 378 int hdr_len, ··· 501 386 502 387 tb_phys = iwl_pcie_get_first_tb_dma(txq, idx); 503 388 389 + /* 390 + * No need for _with_wa, the first TB allocation is aligned up 391 + * to a 64-byte boundary and thus can't be at the end or cross 392 + * a page boundary (much less a 2^32 boundary). 393 + */ 504 394 iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, IWL_FIRST_TB_SIZE); 505 395 506 396 /* ··· 524 404 tb_phys = dma_map_single(trans->dev, tb1_addr, len, DMA_TO_DEVICE); 525 405 if (unlikely(dma_mapping_error(trans->dev, tb_phys))) 526 406 goto out_err; 407 + /* 408 + * No need for _with_wa(), we ensure (via alignment) that the data 409 + * here can never cross or end at a page boundary. 
410 + */ 527 411 iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, len); 528 412 529 413 if (iwl_pcie_gen2_build_amsdu(trans, skb, tfd, ··· 554 430 for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { 555 431 const skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; 556 432 dma_addr_t tb_phys; 557 - int tb_idx; 433 + unsigned int fragsz = skb_frag_size(frag); 434 + int ret; 558 435 559 - if (!skb_frag_size(frag)) 436 + if (!fragsz) 560 437 continue; 561 438 562 439 tb_phys = skb_frag_dma_map(trans->dev, frag, 0, 563 - skb_frag_size(frag), DMA_TO_DEVICE); 564 - 565 - if (unlikely(dma_mapping_error(trans->dev, tb_phys))) 566 - return -ENOMEM; 567 - tb_idx = iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, 568 - skb_frag_size(frag)); 569 - trace_iwlwifi_dev_tx_tb(trans->dev, skb, skb_frag_address(frag), 570 - tb_phys, skb_frag_size(frag)); 571 - if (tb_idx < 0) 572 - return tb_idx; 573 - 574 - out_meta->tbs |= BIT(tb_idx); 440 + fragsz, DMA_TO_DEVICE); 441 + ret = iwl_pcie_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys, 442 + skb_frag_address(frag), 443 + fragsz, out_meta); 444 + if (ret) 445 + return ret; 575 446 } 576 447 577 448 return 0; ··· 575 456 static struct 576 457 iwl_tfh_tfd *iwl_pcie_gen2_build_tx(struct iwl_trans *trans, 577 458 struct iwl_txq *txq, 578 - struct iwl_device_cmd *dev_cmd, 459 + struct iwl_device_tx_cmd *dev_cmd, 579 460 struct sk_buff *skb, 580 461 struct iwl_cmd_meta *out_meta, 581 462 int hdr_len, ··· 594 475 /* The first TB points to bi-directional DMA data */ 595 476 memcpy(&txq->first_tb_bufs[idx], dev_cmd, IWL_FIRST_TB_SIZE); 596 477 478 + /* 479 + * No need for _with_wa, the first TB allocation is aligned up 480 + * to a 64-byte boundary and thus can't be at the end or cross 481 + * a page boundary (much less a 2^32 boundary). 482 + */ 597 483 iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, IWL_FIRST_TB_SIZE); 598 484 599 485 /* ··· 620 496 tb_phys = dma_map_single(trans->dev, tb1_addr, tb1_len, DMA_TO_DEVICE); 621 497 if (unlikely(dma_mapping_error(trans->dev, tb_phys))) 622 498 goto out_err; 499 + /* 500 + * No need for _with_wa(), we ensure (via alignment) that the data 501 + * here can never cross or end at a page boundary. 
502 + */ 623 503 iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, tb1_len); 624 504 trace_iwlwifi_dev_tx(trans->dev, skb, tfd, sizeof(*tfd), &dev_cmd->hdr, 625 505 IWL_FIRST_TB_SIZE + tb1_len, hdr_len); ··· 632 504 tb2_len = skb_headlen(skb) - hdr_len; 633 505 634 506 if (tb2_len > 0) { 507 + int ret; 508 + 635 509 tb_phys = dma_map_single(trans->dev, skb->data + hdr_len, 636 510 tb2_len, DMA_TO_DEVICE); 637 - if (unlikely(dma_mapping_error(trans->dev, tb_phys))) 511 + ret = iwl_pcie_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys, 512 + skb->data + hdr_len, tb2_len, 513 + NULL); 514 + if (ret) 638 515 goto out_err; 639 - iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, tb2_len); 640 - trace_iwlwifi_dev_tx_tb(trans->dev, skb, skb->data + hdr_len, 641 - tb_phys, tb2_len); 642 516 } 643 517 644 518 if (iwl_pcie_gen2_tx_add_frags(trans, skb, tfd, out_meta)) 645 519 goto out_err; 646 520 647 521 skb_walk_frags(skb, frag) { 522 + int ret; 523 + 648 524 tb_phys = dma_map_single(trans->dev, frag->data, 649 525 skb_headlen(frag), DMA_TO_DEVICE); 650 - if (unlikely(dma_mapping_error(trans->dev, tb_phys))) 526 + ret = iwl_pcie_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys, 527 + frag->data, 528 + skb_headlen(frag), NULL); 529 + if (ret) 651 530 goto out_err; 652 - iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, skb_headlen(frag)); 653 - trace_iwlwifi_dev_tx_tb(trans->dev, skb, frag->data, 654 - tb_phys, skb_headlen(frag)); 655 531 if (iwl_pcie_gen2_tx_add_frags(trans, frag, tfd, out_meta)) 656 532 goto out_err; 657 533 } ··· 670 538 static 671 539 struct iwl_tfh_tfd *iwl_pcie_gen2_build_tfd(struct iwl_trans *trans, 672 540 struct iwl_txq *txq, 673 - struct iwl_device_cmd *dev_cmd, 541 + struct iwl_device_tx_cmd *dev_cmd, 674 542 struct sk_buff *skb, 675 543 struct iwl_cmd_meta *out_meta) 676 544 { ··· 710 578 } 711 579 712 580 int iwl_trans_pcie_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb, 713 - struct iwl_device_cmd *dev_cmd, int txq_id) 581 + struct iwl_device_tx_cmd *dev_cmd, int txq_id) 714 582 { 715 583 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 716 584 struct iwl_cmd_meta *out_meta; ··· 735 603 736 604 /* don't put the packet on the ring, if there is no room */ 737 605 if (unlikely(iwl_queue_space(trans, txq) < 3)) { 738 - struct iwl_device_cmd **dev_cmd_ptr; 606 + struct iwl_device_tx_cmd **dev_cmd_ptr; 739 607 740 608 dev_cmd_ptr = (void *)((u8 *)skb->cb + 741 609 trans_pcie->dev_cmd_offs);
+47 -21
drivers/net/wireless/intel/iwlwifi/pcie/tx.c
··· 213 213 u8 sec_ctl = 0; 214 214 u16 len = byte_cnt + IWL_TX_CRC_SIZE + IWL_TX_DELIMITER_SIZE; 215 215 __le16 bc_ent; 216 - struct iwl_tx_cmd *tx_cmd = 217 - (void *)txq->entries[txq->write_ptr].cmd->payload; 216 + struct iwl_device_tx_cmd *dev_cmd = txq->entries[txq->write_ptr].cmd; 217 + struct iwl_tx_cmd *tx_cmd = (void *)dev_cmd->payload; 218 218 u8 sta_id = tx_cmd->sta_id; 219 219 220 220 scd_bc_tbl = trans_pcie->scd_bc_tbls.addr; ··· 257 257 int read_ptr = txq->read_ptr; 258 258 u8 sta_id = 0; 259 259 __le16 bc_ent; 260 - struct iwl_tx_cmd *tx_cmd = 261 - (void *)txq->entries[read_ptr].cmd->payload; 260 + struct iwl_device_tx_cmd *dev_cmd = txq->entries[read_ptr].cmd; 261 + struct iwl_tx_cmd *tx_cmd = (void *)dev_cmd->payload; 262 262 263 263 WARN_ON(read_ptr >= TFD_QUEUE_SIZE_MAX); 264 264 ··· 624 624 struct sk_buff *skb) 625 625 { 626 626 struct page **page_ptr; 627 + struct page *next; 627 628 628 629 page_ptr = (void *)((u8 *)skb->cb + trans_pcie->page_offs); 630 + next = *page_ptr; 631 + *page_ptr = NULL; 629 632 630 - if (*page_ptr) { 631 - __free_page(*page_ptr); 632 - *page_ptr = NULL; 633 + while (next) { 634 + struct page *tmp = next; 635 + 636 + next = *(void **)(page_address(next) + PAGE_SIZE - 637 + sizeof(void *)); 638 + __free_page(tmp); 633 639 } 634 640 } 635 641 ··· 1202 1196 1203 1197 while (!skb_queue_empty(&overflow_skbs)) { 1204 1198 struct sk_buff *skb = __skb_dequeue(&overflow_skbs); 1205 - struct iwl_device_cmd *dev_cmd_ptr; 1199 + struct iwl_device_tx_cmd *dev_cmd_ptr; 1206 1200 1207 1201 dev_cmd_ptr = *(void **)((u8 *)skb->cb + 1208 1202 trans_pcie->dev_cmd_offs); ··· 2058 2052 } 2059 2053 2060 2054 #ifdef CONFIG_INET 2061 - struct iwl_tso_hdr_page *get_page_hdr(struct iwl_trans *trans, size_t len) 2055 + struct iwl_tso_hdr_page *get_page_hdr(struct iwl_trans *trans, size_t len, 2056 + struct sk_buff *skb) 2062 2057 { 2063 2058 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 2064 2059 struct iwl_tso_hdr_page *p = this_cpu_ptr(trans_pcie->tso_hdr_page); 2060 + struct page **page_ptr; 2061 + 2062 + page_ptr = (void *)((u8 *)skb->cb + trans_pcie->page_offs); 2063 + 2064 + if (WARN_ON(*page_ptr)) 2065 + return NULL; 2065 2066 2066 2067 if (!p->page) 2067 2068 goto alloc; 2068 2069 2069 - /* enough room on this page */ 2070 - if (p->pos + len < (u8 *)page_address(p->page) + PAGE_SIZE) 2071 - return p; 2070 + /* 2071 + * Check if there's enough room on this page 2072 + * 2073 + * Note that we put a page chaining pointer *last* in the 2074 + * page - we need it somewhere, and if it's there then we 2075 + * avoid DMA mapping the last bits of the page which may 2076 + * trigger the 32-bit boundary hardware bug. 2077 + * 2078 + * (see also get_workaround_page() in tx-gen2.c) 2079 + */ 2080 + if (p->pos + len < (u8 *)page_address(p->page) + PAGE_SIZE - 2081 + sizeof(void *)) 2082 + goto out; 2072 2083 2073 2084 /* We don't have enough room on this page, get a new one. 
*/ 2074 2085 __free_page(p->page); ··· 2095 2072 if (!p->page) 2096 2073 return NULL; 2097 2074 p->pos = page_address(p->page); 2075 + /* set the chaining pointer to NULL */ 2076 + *(void **)(page_address(p->page) + PAGE_SIZE - sizeof(void *)) = NULL; 2077 + out: 2078 + *page_ptr = p->page; 2079 + get_page(p->page); 2098 2080 return p; 2099 2081 } 2100 2082 ··· 2125 2097 static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb, 2126 2098 struct iwl_txq *txq, u8 hdr_len, 2127 2099 struct iwl_cmd_meta *out_meta, 2128 - struct iwl_device_cmd *dev_cmd, u16 tb1_len) 2100 + struct iwl_device_tx_cmd *dev_cmd, 2101 + u16 tb1_len) 2129 2102 { 2130 2103 struct iwl_tx_cmd *tx_cmd = (void *)dev_cmd->payload; 2131 2104 struct iwl_trans_pcie *trans_pcie = txq->trans_pcie; ··· 2136 2107 u16 length, iv_len, amsdu_pad; 2137 2108 u8 *start_hdr; 2138 2109 struct iwl_tso_hdr_page *hdr_page; 2139 - struct page **page_ptr; 2140 2110 struct tso_t tso; 2141 2111 2142 2112 /* if the packet is protected, then it must be CCMP or GCMP */ ··· 2158 2130 (3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr)) + iv_len; 2159 2131 2160 2132 /* Our device supports 9 segments at most, it will fit in 1 page */ 2161 - hdr_page = get_page_hdr(trans, hdr_room); 2133 + hdr_page = get_page_hdr(trans, hdr_room, skb); 2162 2134 if (!hdr_page) 2163 2135 return -ENOMEM; 2164 2136 2165 - get_page(hdr_page->page); 2166 2137 start_hdr = hdr_page->pos; 2167 - page_ptr = (void *)((u8 *)skb->cb + trans_pcie->page_offs); 2168 - *page_ptr = hdr_page->page; 2169 2138 memcpy(hdr_page->pos, skb->data + hdr_len, iv_len); 2170 2139 hdr_page->pos += iv_len; 2171 2140 ··· 2304 2279 static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb, 2305 2280 struct iwl_txq *txq, u8 hdr_len, 2306 2281 struct iwl_cmd_meta *out_meta, 2307 - struct iwl_device_cmd *dev_cmd, u16 tb1_len) 2282 + struct iwl_device_tx_cmd *dev_cmd, 2283 + u16 tb1_len) 2308 2284 { 2309 2285 /* No A-MSDU without CONFIG_INET */ 2310 2286 WARN_ON(1); ··· 2315 2289 #endif /* CONFIG_INET */ 2316 2290 2317 2291 int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, 2318 - struct iwl_device_cmd *dev_cmd, int txq_id) 2292 + struct iwl_device_tx_cmd *dev_cmd, int txq_id) 2319 2293 { 2320 2294 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 2321 2295 struct ieee80211_hdr *hdr; ··· 2372 2346 2373 2347 /* don't put the packet on the ring, if there is no room */ 2374 2348 if (unlikely(iwl_queue_space(trans, txq) < 3)) { 2375 - struct iwl_device_cmd **dev_cmd_ptr; 2349 + struct iwl_device_tx_cmd **dev_cmd_ptr; 2376 2350 2377 2351 dev_cmd_ptr = (void *)((u8 *)skb->cb + 2378 2352 trans_pcie->dev_cmd_offs);
+13 -3
drivers/net/wireless/marvell/libertas/cfg.c
···  273 273  int hw, ap, ap_max = ie[1];
274 274  u8 hw_rate;
275 275
276  + if (ap_max > MAX_RATES) {
277  + lbs_deb_assoc("invalid rates\n");
278  + return tlv;
279  + }
276 280  /* Advance past IE header */
277 281  ie += 2;
278 282
···  1721 1717  struct cmd_ds_802_11_ad_hoc_join cmd;
1722 1718  u8 preamble = RADIO_PREAMBLE_SHORT;
1723 1719  int ret = 0;
1720  + int hw, i;
1721  + u8 rates_max;
1722  + u8 *rates;
1724 1723  /* TODO: set preamble based on scan result */
1725 1724  ret = lbs_set_radio(priv, preamble, 1);
···  1782 1775  if (!rates_eid) {
1783 1776  lbs_add_rates(cmd.bss.rates);
1784 1777  } else {
1785  - int hw, i;
1786  - u8 rates_max = rates_eid[1];
1787  - u8 *rates = cmd.bss.rates;
1778  + rates_max = rates_eid[1];
1779  + if (rates_max > MAX_RATES) {
1780  + lbs_deb_join("invalid rates");
1781  + goto out;
1782  + }
1783  + rates = cmd.bss.rates;
1788 1784  for (hw = 0; hw < ARRAY_SIZE(lbs_rates); hw++) {
1789 1785  u8 hw_rate = lbs_rates[hw].bitrate / 5;
1790 1786  for (i = 0; i < rates_max; i++) {
+1 -1
drivers/net/wireless/mediatek/mt76/airtime.c
···  242 242  return 0;
243 243
244 244  sband = dev->hw->wiphy->bands[status->band];
245  - if (!sband || status->rate_idx > sband->n_bitrates)
245  + if (!sband || status->rate_idx >= sband->n_bitrates)
246 246  return 0;
247 247
248 248  rate = &sband->bitrates[status->rate_idx];
+2 -1
drivers/net/wireless/mediatek/mt76/mac80211.c
···  378 378  {
379 379  struct ieee80211_hw *hw = dev->hw;
380 380
381  - mt76_led_cleanup(dev);
381  + if (IS_ENABLED(CONFIG_MT76_LEDS))
382  + mt76_led_cleanup(dev);
382 383  mt76_tx_status_check(dev, NULL, true);
383 384  ieee80211_unregister_hw(hw);
384 385  }
+2
include/linux/netdevice.h
···  3698 3698  int dev_get_alias(const struct net_device *, char *, size_t);
3699 3699  int dev_change_net_namespace(struct net_device *, struct net *, const char *);
3700 3700  int __dev_set_mtu(struct net_device *, int);
3701  + int dev_validate_mtu(struct net_device *dev, int mtu,
3702  + struct netlink_ext_ack *extack);
3701 3703  int dev_set_mtu_ext(struct net_device *dev, int mtu,
3702 3704  struct netlink_ext_ack *extack);
3703 3705  int dev_set_mtu(struct net_device *, int);
-7
include/linux/netfilter/ipset/ip_set.h
···  426 426  sizeof(*addr));
427 427  }
428 428
429  - /* Calculate the bytes required to store the inclusive range of a-b */
430  - static inline int
431  - bitmap_bytes(u32 a, u32 b)
432  - {
433  - return 4 * ((((b - a + 8) / 8) + 3) / 4);
434  - }
435  -
436 429  /* How often should the gc be run by default */
437 430  #define IPSET_GC_TIME (3 * 60)
438 431
+1 -1
include/linux/netfilter/nfnetlink.h
···  31 31  const struct nfnl_callback *cb; /* callback for individual types */
32 32  struct module *owner;
33 33  int (*commit)(struct net *net, struct sk_buff *skb);
34  - int (*abort)(struct net *net, struct sk_buff *skb);
34  + int (*abort)(struct net *net, struct sk_buff *skb, bool autoload);
35 35  void (*cleanup)(struct net *net);
36 36  bool (*valid_genid)(struct net *net, u32 genid);
37 37  };
+1
include/net/netns/nftables.h
···  7 7  struct netns_nftables {
8 8  struct list_head tables;
9 9  struct list_head commit_list;
10  + struct list_head module_list;
10 11  struct mutex commit_mutex;
11 12  unsigned int base_seq;
12 13  u8 gencursor;
+1 -2
net/atm/proc.c
···  134 134  static void *vcc_seq_next(struct seq_file *seq, void *v, loff_t *pos)
135 135  {
136 136  v = vcc_walk(seq, 1);
137  - if (v)
138  - (*pos)++;
137  + (*pos)++;
139 138  return v;
140 139  }
141 140
+1 -1
net/caif/caif_usb.c
···  62 62  hpad = (info->hdr_len + CFUSB_PAD_DESCR_SZ) & (CFUSB_ALIGNMENT - 1);
63 63
64 64  if (skb_headroom(skb) < ETH_HLEN + CFUSB_PAD_DESCR_SZ + hpad) {
65  - pr_warn("Headroom to small\n");
65  + pr_warn("Headroom too small\n");
66 66  kfree_skb(skb);
67 67  return -EIO;
68 68  }
+55 -42
net/core/dev.c
··· 5491 5491 put_online_cpus(); 5492 5492 } 5493 5493 5494 + /* Pass the currently batched GRO_NORMAL SKBs up to the stack. */ 5495 + static void gro_normal_list(struct napi_struct *napi) 5496 + { 5497 + if (!napi->rx_count) 5498 + return; 5499 + netif_receive_skb_list_internal(&napi->rx_list); 5500 + INIT_LIST_HEAD(&napi->rx_list); 5501 + napi->rx_count = 0; 5502 + } 5503 + 5504 + /* Queue one GRO_NORMAL SKB up for list processing. If batch size exceeded, 5505 + * pass the whole batch up to the stack. 5506 + */ 5507 + static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb) 5508 + { 5509 + list_add_tail(&skb->list, &napi->rx_list); 5510 + if (++napi->rx_count >= gro_normal_batch) 5511 + gro_normal_list(napi); 5512 + } 5513 + 5494 5514 INDIRECT_CALLABLE_DECLARE(int inet_gro_complete(struct sk_buff *, int)); 5495 5515 INDIRECT_CALLABLE_DECLARE(int ipv6_gro_complete(struct sk_buff *, int)); 5496 - static int napi_gro_complete(struct sk_buff *skb) 5516 + static int napi_gro_complete(struct napi_struct *napi, struct sk_buff *skb) 5497 5517 { 5498 5518 struct packet_offload *ptype; 5499 5519 __be16 type = skb->protocol; ··· 5546 5526 } 5547 5527 5548 5528 out: 5549 - return netif_receive_skb_internal(skb); 5529 + gro_normal_one(napi, skb); 5530 + return NET_RX_SUCCESS; 5550 5531 } 5551 5532 5552 5533 static void __napi_gro_flush_chain(struct napi_struct *napi, u32 index, ··· 5560 5539 if (flush_old && NAPI_GRO_CB(skb)->age == jiffies) 5561 5540 return; 5562 5541 skb_list_del_init(skb); 5563 - napi_gro_complete(skb); 5542 + napi_gro_complete(napi, skb); 5564 5543 napi->gro_hash[index].count--; 5565 5544 } 5566 5545 ··· 5662 5641 } 5663 5642 } 5664 5643 5665 - static void gro_flush_oldest(struct list_head *head) 5644 + static void gro_flush_oldest(struct napi_struct *napi, struct list_head *head) 5666 5645 { 5667 5646 struct sk_buff *oldest; 5668 5647 ··· 5678 5657 * SKB to the chain. 5679 5658 */ 5680 5659 skb_list_del_init(oldest); 5681 - napi_gro_complete(oldest); 5660 + napi_gro_complete(napi, oldest); 5682 5661 } 5683 5662 5684 5663 INDIRECT_CALLABLE_DECLARE(struct sk_buff *inet_gro_receive(struct list_head *, ··· 5754 5733 5755 5734 if (pp) { 5756 5735 skb_list_del_init(pp); 5757 - napi_gro_complete(pp); 5736 + napi_gro_complete(napi, pp); 5758 5737 napi->gro_hash[hash].count--; 5759 5738 } 5760 5739 ··· 5765 5744 goto normal; 5766 5745 5767 5746 if (unlikely(napi->gro_hash[hash].count >= MAX_GRO_SKBS)) { 5768 - gro_flush_oldest(gro_head); 5747 + gro_flush_oldest(napi, gro_head); 5769 5748 } else { 5770 5749 napi->gro_hash[hash].count++; 5771 5750 } ··· 5822 5801 return NULL; 5823 5802 } 5824 5803 EXPORT_SYMBOL(gro_find_complete_by_type); 5825 - 5826 - /* Pass the currently batched GRO_NORMAL SKBs up to the stack. */ 5827 - static void gro_normal_list(struct napi_struct *napi) 5828 - { 5829 - if (!napi->rx_count) 5830 - return; 5831 - netif_receive_skb_list_internal(&napi->rx_list); 5832 - INIT_LIST_HEAD(&napi->rx_list); 5833 - napi->rx_count = 0; 5834 - } 5835 - 5836 - /* Queue one GRO_NORMAL SKB up for list processing. If batch size exceeded, 5837 - * pass the whole batch up to the stack. 
5838 - */ 5839 - static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb) 5840 - { 5841 - list_add_tail(&skb->list, &napi->rx_list); 5842 - if (++napi->rx_count >= gro_normal_batch) 5843 - gro_normal_list(napi); 5844 - } 5845 5804 5846 5805 static void napi_skb_free_stolen_head(struct sk_buff *skb) 5847 5806 { ··· 6201 6200 NAPIF_STATE_IN_BUSY_POLL))) 6202 6201 return false; 6203 6202 6204 - gro_normal_list(n); 6205 - 6206 6203 if (n->gro_bitmask) { 6207 6204 unsigned long timeout = 0; 6208 6205 ··· 6216 6217 hrtimer_start(&n->timer, ns_to_ktime(timeout), 6217 6218 HRTIMER_MODE_REL_PINNED); 6218 6219 } 6220 + 6221 + gro_normal_list(n); 6222 + 6219 6223 if (unlikely(!list_empty(&n->poll_list))) { 6220 6224 /* If n->poll_list is not empty, we need to mask irqs */ 6221 6225 local_irq_save(flags); ··· 6550 6548 goto out_unlock; 6551 6549 } 6552 6550 6553 - gro_normal_list(n); 6554 - 6555 6551 if (n->gro_bitmask) { 6556 6552 /* flush too old packets 6557 6553 * If HZ < 1000, flush all packets. 6558 6554 */ 6559 6555 napi_gro_flush(n, HZ >= 1000); 6560 6556 } 6557 + 6558 + gro_normal_list(n); 6561 6559 6562 6560 /* Some drivers may have called napi_schedule 6563 6561 * prior to exhausting their budget. ··· 8196 8194 } 8197 8195 EXPORT_SYMBOL(__dev_set_mtu); 8198 8196 8197 + int dev_validate_mtu(struct net_device *dev, int new_mtu, 8198 + struct netlink_ext_ack *extack) 8199 + { 8200 + /* MTU must be positive, and in range */ 8201 + if (new_mtu < 0 || new_mtu < dev->min_mtu) { 8202 + NL_SET_ERR_MSG(extack, "mtu less than device minimum"); 8203 + return -EINVAL; 8204 + } 8205 + 8206 + if (dev->max_mtu > 0 && new_mtu > dev->max_mtu) { 8207 + NL_SET_ERR_MSG(extack, "mtu greater than device maximum"); 8208 + return -EINVAL; 8209 + } 8210 + return 0; 8211 + } 8212 + 8199 8213 /** 8200 8214 * dev_set_mtu_ext - Change maximum transfer unit 8201 8215 * @dev: device ··· 8228 8210 if (new_mtu == dev->mtu) 8229 8211 return 0; 8230 8212 8231 - /* MTU must be positive, and in range */ 8232 - if (new_mtu < 0 || new_mtu < dev->min_mtu) { 8233 - NL_SET_ERR_MSG(extack, "mtu less than device minimum"); 8234 - return -EINVAL; 8235 - } 8236 - 8237 - if (dev->max_mtu > 0 && new_mtu > dev->max_mtu) { 8238 - NL_SET_ERR_MSG(extack, "mtu greater than device maximum"); 8239 - return -EINVAL; 8240 - } 8213 + err = dev_validate_mtu(dev, new_mtu, extack); 8214 + if (err) 8215 + return err; 8241 8216 8242 8217 if (!netif_device_present(dev)) 8243 8218 return -ENODEV; ··· 9313 9302 goto err_uninit; 9314 9303 9315 9304 ret = netdev_register_kobject(dev); 9316 - if (ret) 9305 + if (ret) { 9306 + dev->reg_state = NETREG_UNREGISTERED; 9317 9307 goto err_uninit; 9308 + } 9318 9309 dev->reg_state = NETREG_REGISTERED; 9319 9310 9320 9311 __netdev_update_features(dev);
+1
net/core/neighbour.c
···  3290 3290  *pos = cpu+1;
3291 3291  return per_cpu_ptr(tbl->stats, cpu);
3292 3292  }
3293  + (*pos)++;
3293 3294  return NULL;
3294 3295  }
3295 3296
+11 -2
net/core/rtnetlink.c
···  3048 3048  dev->rtnl_link_ops = ops;
3049 3049  dev->rtnl_link_state = RTNL_LINK_INITIALIZING;
3050 3050
3051  - if (tb[IFLA_MTU])
3052  - dev->mtu = nla_get_u32(tb[IFLA_MTU]);
3051  + if (tb[IFLA_MTU]) {
3052  + u32 mtu = nla_get_u32(tb[IFLA_MTU]);
3053  + int err;
3054  +
3055  + err = dev_validate_mtu(dev, mtu, extack);
3056  + if (err) {
3057  + free_netdev(dev);
3058  + return ERR_PTR(err);
3059  + }
3060  + dev->mtu = mtu;
3061  + }
3053 3062  if (tb[IFLA_ADDRESS]) {
3054 3063  memcpy(dev->dev_addr, nla_data(tb[IFLA_ADDRESS]),
3055 3064  nla_len(tb[IFLA_ADDRESS]));
-2
net/core/skmsg.c
···  594 594
595 595  void sk_psock_drop(struct sock *sk, struct sk_psock *psock)
596 596  {
597  - sock_owned_by_me(sk);
598  -
599 597  sk_psock_cork_free(psock);
600 598  sk_psock_zap_ingress(psock);
601 599
+17 -3
net/core/utils.c
···  438 438  }
439 439  EXPORT_SYMBOL(inet_proto_csum_replace4);
440 440
441  + /**
442  + * inet_proto_csum_replace16 - update layer 4 header checksum field
443  + * @sum: Layer 4 header checksum field
444  + * @skb: sk_buff for the packet
445  + * @from: old IPv6 address
446  + * @to: new IPv6 address
447  + * @pseudohdr: True if layer 4 header checksum includes pseudoheader
448  + *
449  + * Update layer 4 header as per the update in IPv6 src/dst address.
450  + *
451  + * There is no need to update skb->csum in this function, because update in two
452  + * fields a.) IPv6 src/dst address and b.) L4 header checksum cancels each other
453  + * for skb->csum calculation. Whereas inet_proto_csum_replace4 function needs to
454  + * update skb->csum, because update in 3 fields a.) IPv4 src/dst address,
455  + * b.) IPv4 Header checksum and c.) L4 header checksum results in same diff as
456  + * L4 Header checksum for skb->csum calculation.
457  + */
441 458  void inet_proto_csum_replace16(__sum16 *sum, struct sk_buff *skb,
442 459  const __be32 *from, const __be32 *to,
443 460  bool pseudohdr)
···  466 449  if (skb->ip_summed != CHECKSUM_PARTIAL) {
467 450  *sum = csum_fold(csum_partial(diff, sizeof(diff),
468 451  ~csum_unfold(*sum)));
469  - if (skb->ip_summed == CHECKSUM_COMPLETE && pseudohdr)
470  - skb->csum = ~csum_partial(diff, sizeof(diff),
471  - ~skb->csum);
472 452  } else if (pseudohdr)
473 453  *sum = ~csum_fold(csum_partial(diff, sizeof(diff),
474 454  csum_unfold(*sum)));
+1 -1
net/hsr/hsr_main.h
···  191 191  void hsr_debugfs_create_root(void);
192 192  void hsr_debugfs_remove_root(void);
193 193  #else
194  - static inline void void hsr_debugfs_rename(struct net_device *dev)
194  + static inline void hsr_debugfs_rename(struct net_device *dev)
195 195  {
196 196  }
197 197  static inline void hsr_debugfs_init(struct hsr_priv *priv,
+2
net/ipv4/esp4_offload.c
···  57 57  if (!x)
58 58  goto out_reset;
59 59
60  + skb->mark = xfrm_smark_get(skb->mark, x);
61  +
60 62  sp->xvec[sp->len++] = x;
61 63  sp->olen++;
62 64
+2 -2
net/ipv4/fou.c
···  662 662  [FOU_ATTR_REMCSUM_NOPARTIAL] = { .type = NLA_FLAG, },
663 663  [FOU_ATTR_LOCAL_V4] = { .type = NLA_U32, },
664 664  [FOU_ATTR_PEER_V4] = { .type = NLA_U32, },
665  - [FOU_ATTR_LOCAL_V6] = { .type = sizeof(struct in6_addr), },
666  - [FOU_ATTR_PEER_V6] = { .type = sizeof(struct in6_addr), },
665  + [FOU_ATTR_LOCAL_V6] = { .len = sizeof(struct in6_addr), },
666  + [FOU_ATTR_PEER_V6] = { .len = sizeof(struct in6_addr), },
667 667  [FOU_ATTR_PEER_PORT] = { .type = NLA_U16, },
668 668  [FOU_ATTR_IFINDEX] = { .type = NLA_S32, },
669 669  };
+1 -3
net/ipv4/ip_tunnel.c
···  1236 1236  iph->version = 4;
1237 1237  iph->ihl = 5;
1238 1238
1239  - if (tunnel->collect_md) {
1240  - dev->features |= NETIF_F_NETNS_LOCAL;
1239  + if (tunnel->collect_md)
1241 1240  netif_keep_dst(dev);
1242  - }
1243 1241  return 0;
1244 1242  }
1245 1243  EXPORT_SYMBOL_GPL(ip_tunnel_init);
+11 -2
net/ipv4/ip_vti.c
···  187 187  int mtu;
188 188
189 189  if (!dst) {
190  - dev->stats.tx_carrier_errors++;
191  - goto tx_error_icmp;
190  + struct rtable *rt;
191  +
192  + fl->u.ip4.flowi4_oif = dev->ifindex;
193  + fl->u.ip4.flowi4_flags |= FLOWI_FLAG_ANYSRC;
194  + rt = __ip_route_output_key(dev_net(dev), &fl->u.ip4);
195  + if (IS_ERR(rt)) {
196  + dev->stats.tx_carrier_errors++;
197  + goto tx_error_icmp;
198  + }
199  + dst = &rt->dst;
200  + skb_dst_set(skb, dst);
192 201  }
193 202
194 203  dst_hold(dst);
+1
net/ipv4/route.c
···  271 271  *pos = cpu+1;
272 272  return &per_cpu(rt_cache_stat, cpu);
273 273  }
274  + (*pos)++;
274 275  return NULL;
275 276
276 277  }
+1 -1
net/ipv4/tcp.c
···  2524 2524  {
2525 2525  struct rb_node *p = rb_first(&sk->tcp_rtx_queue);
2526 2526
2527  + tcp_sk(sk)->highest_sack = NULL;
2527 2528  while (p) {
2528 2529  struct sk_buff *skb = rb_to_skb(p);
2529 2530
···  2615 2614  WRITE_ONCE(tp->write_seq, seq);
2616 2615
2617 2616  icsk->icsk_backoff = 0;
2618  - tp->snd_cwnd = 2;
2619 2617  icsk->icsk_probes_out = 0;
2620 2618  icsk->icsk_rto = TCP_TIMEOUT_INIT;
2621 2619  tp->snd_ssthresh = TCP_INFINITE_SSTHRESH;
+1 -2
net/ipv4/tcp_bbr.c
···  779 779  * bandwidth sample. Delivered is in packets and interval_us in uS and
780 780  * ratio will be <<1 for most connections. So delivered is first scaled.
781 781  */
782  - bw = (u64)rs->delivered * BW_UNIT;
783  - do_div(bw, rs->interval_us);
782  + bw = div64_long((u64)rs->delivered * BW_UNIT, rs->interval_us);
784 783
785 784  /* If this sample is application-limited, it is likely to have a very
786 785  * low delivered count that represents application behavior rather than
+1
net/ipv4/tcp_input.c
···  3164 3164  tp->retransmit_skb_hint = NULL;
3165 3165  if (unlikely(skb == tp->lost_skb_hint))
3166 3166  tp->lost_skb_hint = NULL;
3167  + tcp_highest_sack_replace(sk, skb, next);
3167 3168  tcp_rtx_queue_unlink_and_free(skb, sk);
3168 3169  }
3169 3170
+1
net/ipv4/tcp_output.c
···  3232 3232  if (!nskb)
3233 3233  return -ENOMEM;
3234 3234  INIT_LIST_HEAD(&nskb->tcp_tsorted_anchor);
3235  + tcp_highest_sack_replace(sk, skb, nskb);
3235 3236  tcp_rtx_queue_unlink_and_free(skb, sk);
3236 3237  __skb_header_release(nskb);
3237 3238  tcp_rbtree_insert(&sk->tcp_rtx_queue, nskb);
+2 -1
net/ipv4/udp.c
···  1368 1368  if (likely(partial)) {
1369 1369  up->forward_deficit += size;
1370 1370  size = up->forward_deficit;
1371  - if (size < (sk->sk_rcvbuf >> 2))
1371  + if (size < (sk->sk_rcvbuf >> 2) &&
1372  + !skb_queue_empty(&up->reader_queue))
1372 1373  return;
1373 1374  } else {
1374 1375  size += up->forward_deficit;
+2
net/ipv6/esp6_offload.c
···  79 79  if (!x)
80 80  goto out_reset;
81 81
82  + skb->mark = xfrm_smark_get(skb->mark, x);
83  +
82 84  sp->xvec[sp->len++] = x;
83 85  sp->olen++;
84 86
+2 -5
net/ipv6/ip6_fib.c
··· 2495 2495 struct net *net = seq_file_net(seq); 2496 2496 struct ipv6_route_iter *iter = seq->private; 2497 2497 2498 + ++(*pos); 2498 2499 if (!v) 2499 2500 goto iter_table; 2500 2501 2501 2502 n = rcu_dereference_bh(((struct fib6_info *)v)->fib6_next); 2502 - if (n) { 2503 - ++*pos; 2503 + if (n) 2504 2504 return n; 2505 - } 2506 2505 2507 2506 iter_table: 2508 2507 ipv6_route_check_sernum(iter); ··· 2509 2510 r = fib6_walk_continue(&iter->w); 2510 2511 spin_unlock_bh(&iter->tbl->tb6_lock); 2511 2512 if (r > 0) { 2512 - if (v) 2513 - ++*pos; 2514 2513 return iter->w.leaf; 2515 2514 } else if (r < 0) { 2516 2515 fib6_walker_unlink(net, &iter->w);
-3
net/ipv6/ip6_gre.c
···  1466 1466  dev->mtu -= 8;
1467 1467
1468 1468  if (tunnel->parms.collect_md) {
1469  - dev->features |= NETIF_F_NETNS_LOCAL;
1470 1469  netif_keep_dst(dev);
1471 1470  }
1472 1471  ip6gre_tnl_init_features(dev);
···  1893 1894  dev->needs_free_netdev = true;
1894 1895  dev->priv_destructor = ip6gre_dev_free;
1895 1896
1896  - dev->features |= NETIF_F_NETNS_LOCAL;
1897 1897  dev->priv_flags &= ~IFF_TX_SKB_SHARING;
1898 1898  dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
1899 1899  netif_keep_dst(dev);
···  2195 2197  dev->needs_free_netdev = true;
2196 2198  dev->priv_destructor = ip6gre_dev_free;
2197 2199
2198  - dev->features |= NETIF_F_NETNS_LOCAL;
2199 2200  dev->priv_flags &= ~IFF_TX_SKB_SHARING;
2200 2201  dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
2201 2202  netif_keep_dst(dev);
+1 -3
net/ipv6/ip6_tunnel.c
···  1877 1877  if (err)
1878 1878  return err;
1879 1879  ip6_tnl_link_config(t);
1880  - if (t->parms.collect_md) {
1881  - dev->features |= NETIF_F_NETNS_LOCAL;
1880  + if (t->parms.collect_md)
1882 1881  netif_keep_dst(dev);
1883  - }
1884 1882  return 0;
1885 1883  }
1886 1884
+11 -2
net/ipv6/ip6_vti.c
···  449 449  int err = -1;
450 450  int mtu;
451 451
452  - if (!dst)
453  - goto tx_err_link_failure;
452  + if (!dst) {
453  + fl->u.ip6.flowi6_oif = dev->ifindex;
454  + fl->u.ip6.flowi6_flags |= FLOWI_FLAG_ANYSRC;
455  + dst = ip6_route_output(dev_net(dev), NULL, &fl->u.ip6);
456  + if (dst->error) {
457  + dst_release(dst);
458  + dst = NULL;
459  + goto tx_err_link_failure;
460  + }
461  + skb_dst_set(skb, dst);
462  + }
454 463
455 464  dst_hold(dst);
456 465  dst = xfrm_lookup(t->net, dst, fl, NULL, 0);
+3 -1
net/ipv6/seg6_local.c
···  23 23  #include <net/addrconf.h>
24 24  #include <net/ip6_route.h>
25 25  #include <net/dst_cache.h>
26  + #include <net/ip_tunnels.h>
26 27  #ifdef CONFIG_IPV6_SEG6_HMAC
27 28  #include <net/seg6_hmac.h>
28 29  #endif
···  136 135
137 136  skb_reset_network_header(skb);
138 137  skb_reset_transport_header(skb);
139  - skb->encapsulation = 0;
138  + if (iptunnel_pull_offloads(skb))
139  + return false;
140 140
141 141  return true;
142 142  }
+1 -1
net/netfilter/ipset/ip_set_bitmap_gen.h
···  75 75
76 76  if (set->extensions & IPSET_EXT_DESTROY)
77 77  mtype_ext_cleanup(set);
78  - memset(map->members, 0, map->memsize);
78  + bitmap_zero(map->members, map->elements);
79 79  set->elements = 0;
80 80  set->ext_size = 0;
81 81  }
+3 -3
net/netfilter/ipset/ip_set_bitmap_ip.c
···  37 37
38 38  /* Type structure */
39 39  struct bitmap_ip {
40  - void *members; /* the set members */
40  + unsigned long *members; /* the set members */
41 41  u32 first_ip; /* host byte order, included in range */
42 42  u32 last_ip; /* host byte order, included in range */
43 43  u32 elements; /* number of max elements in the set */
···  220 220  u32 first_ip, u32 last_ip,
221 221  u32 elements, u32 hosts, u8 netmask)
222 222  {
223  - map->members = ip_set_alloc(map->memsize);
223  + map->members = bitmap_zalloc(elements, GFP_KERNEL | __GFP_NOWARN);
224 224  if (!map->members)
225 225  return false;
226 226  map->first_ip = first_ip;
···  322 322  if (!map)
323 323  return -ENOMEM;
324 324
325  - map->memsize = bitmap_bytes(0, elements - 1);
325  + map->memsize = BITS_TO_LONGS(elements) * sizeof(unsigned long);
326 326  set->variant = &bitmap_ip;
327 327  if (!init_map_ip(set, map, first_ip, last_ip,
328 328  elements, hosts, netmask)) {
+3 -3
net/netfilter/ipset/ip_set_bitmap_ipmac.c
···  42 42
43 43  /* Type structure */
44 44  struct bitmap_ipmac {
45  - void *members; /* the set members */
45  + unsigned long *members; /* the set members */
46 46  u32 first_ip; /* host byte order, included in range */
47 47  u32 last_ip; /* host byte order, included in range */
48 48  u32 elements; /* number of max elements in the set */
···  299 299  init_map_ipmac(struct ip_set *set, struct bitmap_ipmac *map,
300 300  u32 first_ip, u32 last_ip, u32 elements)
301 301  {
302  - map->members = ip_set_alloc(map->memsize);
302  + map->members = bitmap_zalloc(elements, GFP_KERNEL | __GFP_NOWARN);
303 303  if (!map->members)
304 304  return false;
305 305  map->first_ip = first_ip;
···  360 360  if (!map)
361 361  return -ENOMEM;
362 362
363  - map->memsize = bitmap_bytes(0, elements - 1);
363  + map->memsize = BITS_TO_LONGS(elements) * sizeof(unsigned long);
364 364  set->variant = &bitmap_ipmac;
365 365  if (!init_map_ipmac(set, map, first_ip, last_ip, elements)) {
366 366  kfree(map);
+3 -3
net/netfilter/ipset/ip_set_bitmap_port.c
···  30 30
31 31  /* Type structure */
32 32  struct bitmap_port {
33  - void *members; /* the set members */
33  + unsigned long *members; /* the set members */
34 34  u16 first_port; /* host byte order, included in range */
35 35  u16 last_port; /* host byte order, included in range */
36 36  u32 elements; /* number of max elements in the set */
···  231 231  init_map_port(struct ip_set *set, struct bitmap_port *map,
232 232  u16 first_port, u16 last_port)
233 233  {
234  - map->members = ip_set_alloc(map->memsize);
234  + map->members = bitmap_zalloc(map->elements, GFP_KERNEL | __GFP_NOWARN);
235 235  if (!map->members)
236 236  return false;
237 237  map->first_port = first_port;
···  271 271  return -ENOMEM;
272 272
273 273  map->elements = elements;
274  - map->memsize = bitmap_bytes(0, map->elements);
274  + map->memsize = BITS_TO_LONGS(elements) * sizeof(unsigned long);
275 275  set->variant = &bitmap_port;
276 276  if (!init_map_port(set, map, first_port, last_port)) {
277 277  kfree(map);
+1 -1
net/netfilter/ipvs/ip_vs_sync.c
···  1239 1239
1240 1240  p = msg_end;
1241 1241  if (p + sizeof(s->v4) > buffer+buflen) {
1242  - IP_VS_ERR_RL("BACKUP, Dropping buffer, to small\n");
1242  + IP_VS_ERR_RL("BACKUP, Dropping buffer, too small\n");
1243 1243  return;
1244 1244  }
1245 1245  s = (union ip_vs_sync_conn *)p;
+3 -3
net/netfilter/nf_conntrack_proto_sctp.c
···  114 114  {
115 115  /* ORIGINAL */
116 116  /* sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA */
117  - /* init */ {sCW, sCW, sCW, sCE, sES, sSS, sSR, sSA, sCW, sHA},
117  + /* init */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCW, sHA},
118 118  /* init_ack */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL, sHA},
119 119  /* abort */ {sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL},
120 120  /* shutdown */ {sCL, sCL, sCW, sCE, sSS, sSS, sSR, sSA, sCL, sSS},
···  130 130  /* REPLY */
131 131  /* sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA */
132 132  /* init */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA},/* INIT in sCL Big TODO */
133  - /* init_ack */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA},
133  + /* init_ack */ {sIV, sCW, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA},
134 134  /* abort */ {sIV, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sIV, sCL},
135 135  /* shutdown */ {sIV, sCL, sCW, sCE, sSR, sSS, sSR, sSA, sIV, sSR},
136 136  /* shutdown_ack */ {sIV, sCL, sCW, sCE, sES, sSA, sSA, sSA, sIV, sHA},
···  316 316  ct->proto.sctp.vtag[IP_CT_DIR_REPLY] = sh->vtag;
317 317  }
318 318
319  - ct->proto.sctp.state = new_state;
319  + ct->proto.sctp.state = SCTP_CONNTRACK_NONE;
320 320  }
321 321
322 322  return true;
+107 -48
net/netfilter/nf_tables_api.c
··· 553 553 static const struct nft_chain_type *chain_type[NFPROTO_NUMPROTO][NFT_CHAIN_T_MAX]; 554 554 555 555 static const struct nft_chain_type * 556 + __nft_chain_type_get(u8 family, enum nft_chain_types type) 557 + { 558 + if (family >= NFPROTO_NUMPROTO || 559 + type >= NFT_CHAIN_T_MAX) 560 + return NULL; 561 + 562 + return chain_type[family][type]; 563 + } 564 + 565 + static const struct nft_chain_type * 556 566 __nf_tables_chain_type_lookup(const struct nlattr *nla, u8 family) 557 567 { 568 + const struct nft_chain_type *type; 558 569 int i; 559 570 560 571 for (i = 0; i < NFT_CHAIN_T_MAX; i++) { 561 - if (chain_type[family][i] != NULL && 562 - !nla_strcmp(nla, chain_type[family][i]->name)) 563 - return chain_type[family][i]; 572 + type = __nft_chain_type_get(family, i); 573 + if (!type) 574 + continue; 575 + if (!nla_strcmp(nla, type->name)) 576 + return type; 564 577 } 565 578 return NULL; 566 579 } 567 580 568 - /* 569 - * Loading a module requires dropping mutex that guards the transaction. 570 - * A different client might race to start a new transaction meanwhile. Zap the 571 - * list of pending transaction and then restore it once the mutex is grabbed 572 - * again. Users of this function return EAGAIN which implicitly triggers the 573 - * transaction abort path to clean up the list of pending transactions. 574 - */ 581 + struct nft_module_request { 582 + struct list_head list; 583 + char module[MODULE_NAME_LEN]; 584 + bool done; 585 + }; 586 + 575 587 #ifdef CONFIG_MODULES 576 - static void nft_request_module(struct net *net, const char *fmt, ...) 588 + static int nft_request_module(struct net *net, const char *fmt, ...) 577 589 { 578 590 char module_name[MODULE_NAME_LEN]; 579 - LIST_HEAD(commit_list); 591 + struct nft_module_request *req; 580 592 va_list args; 581 593 int ret; 582 - 583 - list_splice_init(&net->nft.commit_list, &commit_list); 584 594 585 595 va_start(args, fmt); 586 596 ret = vsnprintf(module_name, MODULE_NAME_LEN, fmt, args); 587 597 va_end(args); 588 598 if (ret >= MODULE_NAME_LEN) 589 - return; 599 + return 0; 590 600 591 - mutex_unlock(&net->nft.commit_mutex); 592 - request_module("%s", module_name); 593 - mutex_lock(&net->nft.commit_mutex); 601 + list_for_each_entry(req, &net->nft.module_list, list) { 602 + if (!strcmp(req->module, module_name)) { 603 + if (req->done) 604 + return 0; 594 605 595 - WARN_ON_ONCE(!list_empty(&net->nft.commit_list)); 596 - list_splice(&commit_list, &net->nft.commit_list); 606 + /* A request to load this module already exists. 
*/ 607 + return -EAGAIN; 608 + } 609 + } 610 + 611 + req = kmalloc(sizeof(*req), GFP_KERNEL); 612 + if (!req) 613 + return -ENOMEM; 614 + 615 + req->done = false; 616 + strlcpy(req->module, module_name, MODULE_NAME_LEN); 617 + list_add_tail(&req->list, &net->nft.module_list); 618 + 619 + return -EAGAIN; 597 620 } 598 621 #endif 599 622 ··· 640 617 lockdep_nfnl_nft_mutex_not_held(); 641 618 #ifdef CONFIG_MODULES 642 619 if (autoload) { 643 - nft_request_module(net, "nft-chain-%u-%.*s", family, 644 - nla_len(nla), (const char *)nla_data(nla)); 645 - type = __nf_tables_chain_type_lookup(nla, family); 646 - if (type != NULL) 620 + if (nft_request_module(net, "nft-chain-%u-%.*s", family, 621 + nla_len(nla), 622 + (const char *)nla_data(nla)) == -EAGAIN) 647 623 return ERR_PTR(-EAGAIN); 648 624 } 649 625 #endif ··· 1184 1162 1185 1163 void nft_register_chain_type(const struct nft_chain_type *ctype) 1186 1164 { 1187 - if (WARN_ON(ctype->family >= NFPROTO_NUMPROTO)) 1188 - return; 1189 - 1190 1165 nfnl_lock(NFNL_SUBSYS_NFTABLES); 1191 - if (WARN_ON(chain_type[ctype->family][ctype->type] != NULL)) { 1166 + if (WARN_ON(__nft_chain_type_get(ctype->family, ctype->type))) { 1192 1167 nfnl_unlock(NFNL_SUBSYS_NFTABLES); 1193 1168 return; 1194 1169 } ··· 1787 1768 hook->num = ntohl(nla_get_be32(ha[NFTA_HOOK_HOOKNUM])); 1788 1769 hook->priority = ntohl(nla_get_be32(ha[NFTA_HOOK_PRIORITY])); 1789 1770 1790 - type = chain_type[family][NFT_CHAIN_T_DEFAULT]; 1771 + type = __nft_chain_type_get(family, NFT_CHAIN_T_DEFAULT); 1772 + if (!type) 1773 + return -EOPNOTSUPP; 1774 + 1791 1775 if (nla[NFTA_CHAIN_TYPE]) { 1792 1776 type = nf_tables_chain_type_lookup(net, nla[NFTA_CHAIN_TYPE], 1793 1777 family, autoload); ··· 2350 2328 static int nft_expr_type_request_module(struct net *net, u8 family, 2351 2329 struct nlattr *nla) 2352 2330 { 2353 - nft_request_module(net, "nft-expr-%u-%.*s", family, 2354 - nla_len(nla), (char *)nla_data(nla)); 2355 - if (__nft_expr_type_get(family, nla)) 2331 + if (nft_request_module(net, "nft-expr-%u-%.*s", family, 2332 + nla_len(nla), (char *)nla_data(nla)) == -EAGAIN) 2356 2333 return -EAGAIN; 2357 2334 2358 2335 return 0; ··· 2377 2356 if (nft_expr_type_request_module(net, family, nla) == -EAGAIN) 2378 2357 return ERR_PTR(-EAGAIN); 2379 2358 2380 - nft_request_module(net, "nft-expr-%.*s", 2381 - nla_len(nla), (char *)nla_data(nla)); 2382 - if (__nft_expr_type_get(family, nla)) 2359 + if (nft_request_module(net, "nft-expr-%.*s", 2360 + nla_len(nla), 2361 + (char *)nla_data(nla)) == -EAGAIN) 2383 2362 return ERR_PTR(-EAGAIN); 2384 2363 } 2385 2364 #endif ··· 2470 2449 err = PTR_ERR(ops); 2471 2450 #ifdef CONFIG_MODULES 2472 2451 if (err == -EAGAIN) 2473 - nft_expr_type_request_module(ctx->net, 2474 - ctx->family, 2475 - tb[NFTA_EXPR_NAME]); 2452 + if (nft_expr_type_request_module(ctx->net, 2453 + ctx->family, 2454 + tb[NFTA_EXPR_NAME]) != -EAGAIN) 2455 + err = -ENOENT; 2476 2456 #endif 2477 2457 goto err1; 2478 2458 } ··· 3310 3288 lockdep_nfnl_nft_mutex_not_held(); 3311 3289 #ifdef CONFIG_MODULES 3312 3290 if (list_empty(&nf_tables_set_types)) { 3313 - nft_request_module(ctx->net, "nft-set"); 3314 - if (!list_empty(&nf_tables_set_types)) 3291 + if (nft_request_module(ctx->net, "nft-set") == -EAGAIN) 3315 3292 return ERR_PTR(-EAGAIN); 3316 3293 } 3317 3294 #endif ··· 5436 5415 lockdep_nfnl_nft_mutex_not_held(); 5437 5416 #ifdef CONFIG_MODULES 5438 5417 if (type == NULL) { 5439 - nft_request_module(net, "nft-obj-%u", objtype); 5440 - if (__nft_obj_type_get(objtype)) 5418 + if 
(nft_request_module(net, "nft-obj-%u", objtype) == -EAGAIN) 5441 5419 return ERR_PTR(-EAGAIN); 5442 5420 } 5443 5421 #endif ··· 6009 5989 lockdep_nfnl_nft_mutex_not_held(); 6010 5990 #ifdef CONFIG_MODULES 6011 5991 if (type == NULL) { 6012 - nft_request_module(net, "nf-flowtable-%u", family); 6013 - if (__nft_flowtable_type_get(family)) 5992 + if (nft_request_module(net, "nf-flowtable-%u", family) == -EAGAIN) 6014 5993 return ERR_PTR(-EAGAIN); 6015 5994 } 6016 5995 #endif ··· 7011 6992 list_del_rcu(&chain->list); 7012 6993 } 7013 6994 6995 + static void nf_tables_module_autoload_cleanup(struct net *net) 6996 + { 6997 + struct nft_module_request *req, *next; 6998 + 6999 + WARN_ON_ONCE(!list_empty(&net->nft.commit_list)); 7000 + list_for_each_entry_safe(req, next, &net->nft.module_list, list) { 7001 + WARN_ON_ONCE(!req->done); 7002 + list_del(&req->list); 7003 + kfree(req); 7004 + } 7005 + } 7006 + 7014 7007 static void nf_tables_commit_release(struct net *net) 7015 7008 { 7016 7009 struct nft_trans *trans; ··· 7035 7004 * to prevent expensive synchronize_rcu() in commit phase. 7036 7005 */ 7037 7006 if (list_empty(&net->nft.commit_list)) { 7007 + nf_tables_module_autoload_cleanup(net); 7038 7008 mutex_unlock(&net->nft.commit_mutex); 7039 7009 return; 7040 7010 } ··· 7050 7018 list_splice_tail_init(&net->nft.commit_list, &nf_tables_destroy_list); 7051 7019 spin_unlock(&nf_tables_destroy_list_lock); 7052 7020 7021 + nf_tables_module_autoload_cleanup(net); 7053 7022 mutex_unlock(&net->nft.commit_mutex); 7054 7023 7055 7024 schedule_work(&trans_destroy_work); ··· 7242 7209 return 0; 7243 7210 } 7244 7211 7212 + static void nf_tables_module_autoload(struct net *net) 7213 + { 7214 + struct nft_module_request *req, *next; 7215 + LIST_HEAD(module_list); 7216 + 7217 + list_splice_init(&net->nft.module_list, &module_list); 7218 + mutex_unlock(&net->nft.commit_mutex); 7219 + list_for_each_entry_safe(req, next, &module_list, list) { 7220 + if (req->done) { 7221 + list_del(&req->list); 7222 + kfree(req); 7223 + } else { 7224 + request_module("%s", req->module); 7225 + req->done = true; 7226 + } 7227 + } 7228 + mutex_lock(&net->nft.commit_mutex); 7229 + list_splice(&module_list, &net->nft.module_list); 7230 + } 7231 + 7245 7232 static void nf_tables_abort_release(struct nft_trans *trans) 7246 7233 { 7247 7234 switch (trans->msg_type) { ··· 7291 7238 kfree(trans); 7292 7239 } 7293 7240 7294 - static int __nf_tables_abort(struct net *net) 7241 + static int __nf_tables_abort(struct net *net, bool autoload) 7295 7242 { 7296 7243 struct nft_trans *trans, *next; 7297 7244 struct nft_trans_elem *te; ··· 7413 7360 nf_tables_abort_release(trans); 7414 7361 } 7415 7362 7363 + if (autoload) 7364 + nf_tables_module_autoload(net); 7365 + else 7366 + nf_tables_module_autoload_cleanup(net); 7367 + 7416 7368 return 0; 7417 7369 } 7418 7370 ··· 7426 7368 nft_validate_state_update(net, NFT_VALIDATE_SKIP); 7427 7369 } 7428 7370 7429 - static int nf_tables_abort(struct net *net, struct sk_buff *skb) 7371 + static int nf_tables_abort(struct net *net, struct sk_buff *skb, bool autoload) 7430 7372 { 7431 - int ret = __nf_tables_abort(net); 7373 + int ret = __nf_tables_abort(net, autoload); 7432 7374 7433 7375 mutex_unlock(&net->nft.commit_mutex); 7434 7376 ··· 8023 7965 { 8024 7966 INIT_LIST_HEAD(&net->nft.tables); 8025 7967 INIT_LIST_HEAD(&net->nft.commit_list); 7968 + INIT_LIST_HEAD(&net->nft.module_list); 8026 7969 mutex_init(&net->nft.commit_mutex); 8027 7970 net->nft.base_seq = 1; 8028 7971 net->nft.validate_state = 
NFT_VALIDATE_SKIP; ··· 8035 7976 { 8036 7977 mutex_lock(&net->nft.commit_mutex); 8037 7978 if (!list_empty(&net->nft.commit_list)) 8038 - __nf_tables_abort(net); 7979 + __nf_tables_abort(net, false); 8039 7980 __nft_release_tables(net); 8040 7981 mutex_unlock(&net->nft.commit_mutex); 8041 7982 WARN_ON_ONCE(!list_empty(&net->nft.tables));
+1 -1
net/netfilter/nf_tables_offload.c
···  564 564
565 565  mutex_lock(&net->nft.commit_mutex);
566 566  chain = __nft_offload_get_chain(dev);
567  - if (chain) {
567  + if (chain && chain->flags & NFT_CHAIN_HW_OFFLOAD) {
568 568  struct nft_base_chain *basechain;
569 569
570 570  basechain = nft_base_chain(chain);
+3 -3
net/netfilter/nfnetlink.c
···  476 476  }
477 477  done:
478 478  if (status & NFNL_BATCH_REPLAY) {
479  - ss->abort(net, oskb);
479  + ss->abort(net, oskb, true);
480 480  nfnl_err_reset(&err_list);
481 481  kfree_skb(skb);
482 482  module_put(ss->owner);
···  487 487  status |= NFNL_BATCH_REPLAY;
488 488  goto done;
489 489  } else if (err) {
490  - ss->abort(net, oskb);
490  + ss->abort(net, oskb, false);
491 491  netlink_ack(oskb, nlmsg_hdr(oskb), err, NULL);
492 492  }
493 493  } else {
494  - ss->abort(net, oskb);
494  + ss->abort(net, oskb, false);
495 495  }
496 496  if (ss->cleanup)
497 497  ss->cleanup(net);
+3
net/netfilter/nft_osf.c
···  61 61  int err;
62 62  u8 ttl;
63 63
64  + if (!tb[NFTA_OSF_DREG])
65  + return -EINVAL;
66  +
64 67  if (tb[NFTA_OSF_TTL]) {
65 68  ttl = nla_get_u8(tb[NFTA_OSF_TTL]);
66 69  if (ttl > 2)
+1 -1
net/rose/af_rose.c
···  1475 1475  int rc;
1476 1476
1477 1477  if (rose_ndevs > 0x7FFFFFFF/sizeof(struct net_device *)) {
1478  - printk(KERN_ERR "ROSE: rose_proto_init - rose_ndevs parameter to large\n");
1478  + printk(KERN_ERR "ROSE: rose_proto_init - rose_ndevs parameter too large\n");
1479 1479  rc = -EINVAL;
1480 1480  goto out;
1481 1481  }
+2 -3
net/sched/cls_api.c
···  2055 2055  &chain_info));
2056 2056
2057 2057  mutex_unlock(&chain->filter_chain_lock);
2058  - tp_new = tcf_proto_create(nla_data(tca[TCA_KIND]),
2059  - protocol, prio, chain, rtnl_held,
2060  - extack);
2058  + tp_new = tcf_proto_create(name, protocol, prio, chain,
2059  + rtnl_held, extack);
2061 2060  if (IS_ERR(tp_new)) {
2062 2061  err = PTR_ERR(tp_new);
2063 2062  goto errout_tp;
+1 -1
net/sched/ematch.c
···  263 263  }
264 264  em->data = (unsigned long) v;
265 265  }
266  + em->datalen = data_len;
266 267  }
267 268  }
268 269
269 270  em->matchid = em_hdr->matchid;
270 271  em->flags = em_hdr->flags;
271  - em->datalen = data_len;
272 272  em->net = net;
273 273
274 274  err = 0;
+26 -8
net/xfrm/xfrm_interface.c
···  268 268  int err = -1;
269 269  int mtu;
270 270
271  - if (!dst)
272  - goto tx_err_link_failure;
273  -
274 271  dst_hold(dst);
275 272  dst = xfrm_lookup_with_ifid(xi->net, dst, fl, NULL, 0, xi->p.if_id);
276 273  if (IS_ERR(dst)) {
···  294 297
295 298  mtu = dst_mtu(dst);
296 299  if (!skb->ignore_df && skb->len > mtu) {
297  - skb_dst_update_pmtu(skb, mtu);
300  + skb_dst_update_pmtu_no_confirm(skb, mtu);
298 301
299 302  if (skb->protocol == htons(ETH_P_IPV6)) {
300 303  if (mtu < IPV6_MIN_MTU)
···  340 343  {
341 344  struct xfrm_if *xi = netdev_priv(dev);
342 345  struct net_device_stats *stats = &xi->dev->stats;
346  + struct dst_entry *dst = skb_dst(skb);
343 347  struct flowi fl;
344 348  int ret;
345 349
···  350 352  case htons(ETH_P_IPV6):
351 353  xfrm_decode_session(skb, &fl, AF_INET6);
352 354  memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
355  + if (!dst) {
356  + fl.u.ip6.flowi6_oif = dev->ifindex;
357  + fl.u.ip6.flowi6_flags |= FLOWI_FLAG_ANYSRC;
358  + dst = ip6_route_output(dev_net(dev), NULL, &fl.u.ip6);
359  + if (dst->error) {
360  + dst_release(dst);
361  + stats->tx_carrier_errors++;
362  + goto tx_err;
363  + }
364  + skb_dst_set(skb, dst);
365  + }
353 366  break;
354 367  case htons(ETH_P_IP):
355 368  xfrm_decode_session(skb, &fl, AF_INET);
356 369  memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
370  + if (!dst) {
371  + struct rtable *rt;
372  +
373  + fl.u.ip4.flowi4_oif = dev->ifindex;
374  + fl.u.ip4.flowi4_flags |= FLOWI_FLAG_ANYSRC;
375  + rt = __ip_route_output_key(dev_net(dev), &fl.u.ip4);
376  + if (IS_ERR(rt)) {
377  + stats->tx_carrier_errors++;
378  + goto tx_err;
379  + }
380  + skb_dst_set(skb, &rt->dst);
381  + }
357 382  break;
358 383  default:
359 384  goto tx_err;
···  584 563  {
585 564  dev->netdev_ops = &xfrmi_netdev_ops;
586 565  dev->type = ARPHRD_NONE;
587  - dev->hard_header_len = ETH_HLEN;
588  - dev->min_header_len = ETH_HLEN;
589 566  dev->mtu = ETH_DATA_LEN;
590 567  dev->min_mtu = ETH_MIN_MTU;
591  - dev->max_mtu = ETH_DATA_LEN;
592  - dev->addr_len = ETH_ALEN;
568  + dev->max_mtu = IP_MAX_MTU;
593 569  dev->flags = IFF_NOARP;
594 570  dev->needs_free_netdev = true;
595 571  dev->priv_destructor = xfrmi_dev_free;