Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from David Miller:

1) Off by one in mt76 airtime calculation, from Dan Carpenter.

2) Fix TLV fragment allocation loop condition in iwlwifi, from Luca
Coelho.

3) Don't confirm neigh entries when doing ipsec pmtu updates, from Xu
Wang.

4) More checks to make sure we only send TSO packets to lan78xx chips
that they can actually handle. From James Hughes.

5) Fix ip_tunnel namespace move, from William Dauchy.

6) Fix unintended packet reordering due to cooperation between
listification done by GRO and non-GRO paths. From Maxim
Mikityanskiy.

7) Add Jakub Kicinski formally as networking co-maintainer.

8) Info leak in airo ioctls, from Michael Ellerman.

9) IFLA_MTU attribute needs validation during rtnl_create_link(), from
Eric Dumazet.

10) Use after free during reload in mlxsw, from Ido Schimmel.

11) Dangling pointers are possible in tp->highest_sack, fix from Eric
Dumazet.

12) Missing *pos++ in various networking seq_next handlers, from Vasily
    Averin. (A sketch of the fix pattern follows this list.)

13) CHELSIO_GET_MEM operation needs CAP_NET_ADMIN check, from Michael
Ellerman.
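
A note on (12): the affected seq_file ->next() handlers share one fix, visible in
the cxgb4 hunks further down. The position index has to be advanced even when the
lookup returns NULL; otherwise, after a partial read or lseek, the seq_file core
calls ->next() again at a stale offset and the last entry can be emitted twice.
A minimal sketch of the corrected shape (example_get_idx() stands in for a
driver's own lookup helper and is hypothetical, not a kernel API):

    static void *example_seq_next(struct seq_file *seq, void *v, loff_t *pos)
    {
            /* look up the element at the next offset; may be NULL at EOF */
            v = example_get_idx(seq->private, *pos + 1);
            /* advance the position unconditionally, even when v is NULL */
            ++(*pos);
            return v;
    }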

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (109 commits)
firestream: fix memory leaks
net: cxgb3_main: Add CAP_NET_ADMIN check to CHELSIO_GET_MEM
net: bcmgenet: Use netif_tx_napi_add() for TX NAPI
tipc: change maintainer email address
net: stmmac: platform: fix probe for ACPI devices
net/mlx5e: kTLS, Do not send decrypted-marked SKBs via non-accel path
net/mlx5e: kTLS, Remove redundant posts in TX resync flow
net/mlx5e: kTLS, Fix corner-case checks in TX resync flow
net/mlx5e: Clear VF config when switching modes
net/mlx5: DR, use non preemptible call to get the current cpu number
net/mlx5: E-Switch, Prevent ingress rate configuration of uplink rep
net/mlx5: DR, Enable counter on non-fwd-dest objects
net/mlx5: Update the list of the PCI supported devices
net/mlx5: Fix lowest FDB pool size
net: Fix skb->csum update in inet_proto_csum_replace16().
netfilter: nf_tables: autoload modules from the abort path
netfilter: nf_tables: add __nft_chain_type_get()
netfilter: nf_tables_offload: fix check the chain offload flag
netfilter: conntrack: sctp: use distinct states for new SCTP connections
ipv6_route_seq_next should increase position index
...

Changed files: +1465 -599
+13
Documentation/devicetree/bindings/net/fsl-fman.txt
··· 403 The settings and programming routines for internal/external 404 MDIO are different. Must be included for internal MDIO. 405 406 For internal PHY device on internal mdio bus, a PHY node should be created. 407 See the definition of the PHY node in booting-without-of.txt for an 408 example of how to define a PHY (Internal PHY has no interrupt line).
··· 403 The settings and programming routines for internal/external 404 MDIO are different. Must be included for internal MDIO. 405 406 + - fsl,erratum-a011043 407 + Usage: optional 408 + Value type: <boolean> 409 + Definition: Indicates the presence of the A011043 erratum 410 + describing that the MDIO_CFG[MDIO_RD_ER] bit may be falsely 411 + set when reading internal PCS registers. MDIO reads to 412 + internal PCS registers may result in having the 413 + MDIO_CFG[MDIO_RD_ER] bit set, even when there is no error and 414 + read data (MDIO_DATA[MDIO_DATA]) is correct. 415 + Software may get false read error when reading internal 416 + PCS registers through MDIO. As a workaround, all internal 417 + MDIO accesses should ignore the MDIO_CFG[MDIO_RD_ER] bit. 418 + 419 For internal PHY device on internal mdio bus, a PHY node should be created. 420 See the definition of the PHY node in booting-without-of.txt for an 421 example of how to define a PHY (Internal PHY has no interrupt line).
+5 -3
MAINTAINERS
··· 6197 M: Andrew Lunn <andrew@lunn.ch> 6198 M: Florian Fainelli <f.fainelli@gmail.com> 6199 M: Heiner Kallweit <hkallweit1@gmail.com> 6200 L: netdev@vger.kernel.org 6201 S: Maintained 6202 F: Documentation/ABI/testing/sysfs-class-net-phydev ··· 8570 F: drivers/platform/x86/intel-vbtn.c 8571 8572 INTEL WIRELESS 3945ABG/BG, 4965AGN (iwlegacy) 8573 - M: Stanislaw Gruszka <sgruszka@redhat.com> 8574 L: linux-wireless@vger.kernel.org 8575 S: Supported 8576 F: drivers/net/wireless/intel/iwlegacy/ ··· 11500 11501 NETWORKING [GENERAL] 11502 M: "David S. Miller" <davem@davemloft.net> 11503 L: netdev@vger.kernel.org 11504 W: http://www.linuxfoundation.org/en/Net 11505 Q: http://patchwork.ozlabs.org/project/netdev/list/ ··· 13822 F: arch/mips/ralink 13823 13824 RALINK RT2X00 WIRELESS LAN DRIVER 13825 - M: Stanislaw Gruszka <sgruszka@redhat.com> 13826 M: Helmut Schaa <helmut.schaa@googlemail.com> 13827 L: linux-wireless@vger.kernel.org 13828 S: Maintained ··· 16603 F: tools/testing/selftests/timers/ 16604 16605 TIPC NETWORK LAYER 16606 - M: Jon Maloy <jon.maloy@ericsson.com> 16607 M: Ying Xue <ying.xue@windriver.com> 16608 L: netdev@vger.kernel.org (core kernel code) 16609 L: tipc-discussion@lists.sourceforge.net (user apps, general discussion)
··· 6197 M: Andrew Lunn <andrew@lunn.ch> 6198 M: Florian Fainelli <f.fainelli@gmail.com> 6199 M: Heiner Kallweit <hkallweit1@gmail.com> 6200 + R: Russell King <linux@armlinux.org.uk> 6201 L: netdev@vger.kernel.org 6202 S: Maintained 6203 F: Documentation/ABI/testing/sysfs-class-net-phydev ··· 8569 F: drivers/platform/x86/intel-vbtn.c 8570 8571 INTEL WIRELESS 3945ABG/BG, 4965AGN (iwlegacy) 8572 + M: Stanislaw Gruszka <stf_xl@wp.pl> 8573 L: linux-wireless@vger.kernel.org 8574 S: Supported 8575 F: drivers/net/wireless/intel/iwlegacy/ ··· 11499 11500 NETWORKING [GENERAL] 11501 M: "David S. Miller" <davem@davemloft.net> 11502 + M: Jakub Kicinski <kuba@kernel.org> 11503 L: netdev@vger.kernel.org 11504 W: http://www.linuxfoundation.org/en/Net 11505 Q: http://patchwork.ozlabs.org/project/netdev/list/ ··· 13820 F: arch/mips/ralink 13821 13822 RALINK RT2X00 WIRELESS LAN DRIVER 13823 + M: Stanislaw Gruszka <stf_xl@wp.pl> 13824 M: Helmut Schaa <helmut.schaa@googlemail.com> 13825 L: linux-wireless@vger.kernel.org 13826 S: Maintained ··· 16601 F: tools/testing/selftests/timers/ 16602 16603 TIPC NETWORK LAYER 16604 + M: Jon Maloy <jmaloy@redhat.com> 16605 M: Ying Xue <ying.xue@windriver.com> 16606 L: netdev@vger.kernel.org (core kernel code) 16607 L: tipc-discussion@lists.sourceforge.net (user apps, general discussion)
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-0-best-effort.dtsi
··· 63 #size-cells = <0>; 64 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 65 reg = <0xe1000 0x1000>; 66 67 pcsphy0: ethernet-phy@0 { 68 reg = <0x0>;
··· 63 #size-cells = <0>; 64 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 65 reg = <0xe1000 0x1000>; 66 + fsl,erratum-a011043; /* must ignore read errors */ 67 68 pcsphy0: ethernet-phy@0 { 69 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-0.dtsi
··· 60 #size-cells = <0>; 61 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 62 reg = <0xf1000 0x1000>; 63 64 pcsphy6: ethernet-phy@0 { 65 reg = <0x0>;
··· 60 #size-cells = <0>; 61 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 62 reg = <0xf1000 0x1000>; 63 + fsl,erratum-a011043; /* must ignore read errors */ 64 65 pcsphy6: ethernet-phy@0 { 66 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-1-best-effort.dtsi
··· 63 #size-cells = <0>; 64 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 65 reg = <0xe3000 0x1000>; 66 67 pcsphy1: ethernet-phy@0 { 68 reg = <0x0>;
··· 63 #size-cells = <0>; 64 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 65 reg = <0xe3000 0x1000>; 66 + fsl,erratum-a011043; /* must ignore read errors */ 67 68 pcsphy1: ethernet-phy@0 { 69 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-1.dtsi
··· 60 #size-cells = <0>; 61 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 62 reg = <0xf3000 0x1000>; 63 64 pcsphy7: ethernet-phy@0 { 65 reg = <0x0>;
··· 60 #size-cells = <0>; 61 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 62 reg = <0xf3000 0x1000>; 63 + fsl,erratum-a011043; /* must ignore read errors */ 64 65 pcsphy7: ethernet-phy@0 { 66 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-1g-0.dtsi
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe1000 0x1000>; 62 63 pcsphy0: ethernet-phy@0 { 64 reg = <0x0>;
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe1000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 63 64 pcsphy0: ethernet-phy@0 { 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-1g-1.dtsi
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe3000 0x1000>; 62 63 pcsphy1: ethernet-phy@0 { 64 reg = <0x0>;
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe3000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 63 64 pcsphy1: ethernet-phy@0 { 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-1g-2.dtsi
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe5000 0x1000>; 62 63 pcsphy2: ethernet-phy@0 { 64 reg = <0x0>;
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe5000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 63 64 pcsphy2: ethernet-phy@0 { 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-1g-3.dtsi
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe7000 0x1000>; 62 63 pcsphy3: ethernet-phy@0 { 64 reg = <0x0>;
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe7000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 63 64 pcsphy3: ethernet-phy@0 { 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-1g-4.dtsi
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe9000 0x1000>; 62 63 pcsphy4: ethernet-phy@0 { 64 reg = <0x0>;
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe9000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 63 64 pcsphy4: ethernet-phy@0 { 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-0-1g-5.dtsi
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xeb000 0x1000>; 62 63 pcsphy5: ethernet-phy@0 { 64 reg = <0x0>;
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xeb000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 63 64 pcsphy5: ethernet-phy@0 { 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-1-10g-0.dtsi
··· 60 #size-cells = <0>; 61 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 62 reg = <0xf1000 0x1000>; 63 64 pcsphy14: ethernet-phy@0 { 65 reg = <0x0>;
··· 60 #size-cells = <0>; 61 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 62 reg = <0xf1000 0x1000>; 63 + fsl,erratum-a011043; /* must ignore read errors */ 64 65 pcsphy14: ethernet-phy@0 { 66 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-1-10g-1.dtsi
··· 60 #size-cells = <0>; 61 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 62 reg = <0xf3000 0x1000>; 63 64 pcsphy15: ethernet-phy@0 { 65 reg = <0x0>;
··· 60 #size-cells = <0>; 61 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 62 reg = <0xf3000 0x1000>; 63 + fsl,erratum-a011043; /* must ignore read errors */ 64 65 pcsphy15: ethernet-phy@0 { 66 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-1-1g-0.dtsi
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe1000 0x1000>; 62 63 pcsphy8: ethernet-phy@0 { 64 reg = <0x0>;
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe1000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 63 64 pcsphy8: ethernet-phy@0 { 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-1-1g-1.dtsi
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe3000 0x1000>; 62 63 pcsphy9: ethernet-phy@0 { 64 reg = <0x0>;
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe3000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 63 64 pcsphy9: ethernet-phy@0 { 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-1-1g-2.dtsi
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe5000 0x1000>; 62 63 pcsphy10: ethernet-phy@0 { 64 reg = <0x0>;
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe5000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 63 64 pcsphy10: ethernet-phy@0 { 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-1-1g-3.dtsi
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe7000 0x1000>; 62 63 pcsphy11: ethernet-phy@0 { 64 reg = <0x0>;
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe7000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 63 64 pcsphy11: ethernet-phy@0 { 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-1-1g-4.dtsi
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe9000 0x1000>; 62 63 pcsphy12: ethernet-phy@0 { 64 reg = <0x0>;
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xe9000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 63 64 pcsphy12: ethernet-phy@0 { 65 reg = <0x0>;
+1
arch/powerpc/boot/dts/fsl/qoriq-fman3-1-1g-5.dtsi
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xeb000 0x1000>; 62 63 pcsphy13: ethernet-phy@0 { 64 reg = <0x0>;
··· 59 #size-cells = <0>; 60 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 61 reg = <0xeb000 0x1000>; 62 + fsl,erratum-a011043; /* must ignore read errors */ 63 64 pcsphy13: ethernet-phy@0 { 65 reg = <0x0>;
+3
drivers/atm/firestream.c
··· 912 } 913 if (!to) { 914 printk ("No more free channels for FS50..\n"); 915 return -EBUSY; 916 } 917 vcc->channo = dev->channo; ··· 923 if (((DO_DIRECTION(rxtp) && dev->atm_vccs[vcc->channo])) || 924 ( DO_DIRECTION(txtp) && test_bit (vcc->channo, dev->tx_inuse))) { 925 printk ("Channel is in use for FS155.\n"); 926 return -EBUSY; 927 } 928 } ··· 937 tc, sizeof (struct fs_transmit_config)); 938 if (!tc) { 939 fs_dprintk (FS_DEBUG_OPEN, "fs: can't alloc transmit_config.\n"); 940 return -ENOMEM; 941 } 942
··· 912 } 913 if (!to) { 914 printk ("No more free channels for FS50..\n"); 915 + kfree(vcc); 916 return -EBUSY; 917 } 918 vcc->channo = dev->channo; ··· 922 if (((DO_DIRECTION(rxtp) && dev->atm_vccs[vcc->channo])) || 923 ( DO_DIRECTION(txtp) && test_bit (vcc->channo, dev->tx_inuse))) { 924 printk ("Channel is in use for FS155.\n"); 925 + kfree(vcc); 926 return -EBUSY; 927 } 928 } ··· 935 tc, sizeof (struct fs_transmit_config)); 936 if (!tc) { 937 fs_dprintk (FS_DEBUG_OPEN, "fs: can't alloc transmit_config.\n"); 938 + kfree(vcc); 939 return -ENOMEM; 940 } 941
+10 -2
drivers/net/can/slcan.c
··· 344 */ 345 static void slcan_write_wakeup(struct tty_struct *tty) 346 { 347 - struct slcan *sl = tty->disc_data; 348 349 schedule_work(&sl->tx_work); 350 } 351 352 /* Send a can_frame to a TTY queue. */ ··· 651 return; 652 653 spin_lock_bh(&sl->lock); 654 - tty->disc_data = NULL; 655 sl->tty = NULL; 656 spin_unlock_bh(&sl->lock); 657 658 flush_work(&sl->tx_work); 659 660 /* Flush network side */
··· 344 */ 345 static void slcan_write_wakeup(struct tty_struct *tty) 346 { 347 + struct slcan *sl; 348 + 349 + rcu_read_lock(); 350 + sl = rcu_dereference(tty->disc_data); 351 + if (!sl) 352 + goto out; 353 354 schedule_work(&sl->tx_work); 355 + out: 356 + rcu_read_unlock(); 357 } 358 359 /* Send a can_frame to a TTY queue. */ ··· 644 return; 645 646 spin_lock_bh(&sl->lock); 647 + rcu_assign_pointer(tty->disc_data, NULL); 648 sl->tty = NULL; 649 spin_unlock_bh(&sl->lock); 650 651 + synchronize_rcu(); 652 flush_work(&sl->tx_work); 653 654 /* Flush network side */
+2 -2
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 2164 DMA_END_ADDR); 2165 2166 /* Initialize Tx NAPI */ 2167 - netif_napi_add(priv->dev, &ring->napi, bcmgenet_tx_poll, 2168 - NAPI_POLL_WEIGHT); 2169 } 2170 2171 /* Initialize a RDMA ring */
··· 2164 DMA_END_ADDR); 2165 2166 /* Initialize Tx NAPI */ 2167 + netif_tx_napi_add(priv->dev, &ring->napi, bcmgenet_tx_poll, 2168 + NAPI_POLL_WEIGHT); 2169 } 2170 2171 /* Initialize a RDMA ring */
+2
drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
··· 2448 2449 if (!is_offload(adapter)) 2450 return -EOPNOTSUPP; 2451 if (!(adapter->flags & FULL_INIT_DONE)) 2452 return -EIO; /* need the memory controllers */ 2453 if (copy_from_user(&t, useraddr, sizeof(t)))
··· 2448 2449 if (!is_offload(adapter)) 2450 return -EOPNOTSUPP; 2451 + if (!capable(CAP_NET_ADMIN)) 2452 + return -EPERM; 2453 if (!(adapter->flags & FULL_INIT_DONE)) 2454 return -EIO; /* need the memory controllers */ 2455 if (copy_from_user(&t, useraddr, sizeof(t)))
+1 -2
drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
··· 70 static void *seq_tab_next(struct seq_file *seq, void *v, loff_t *pos) 71 { 72 v = seq_tab_get_idx(seq->private, *pos + 1); 73 - if (v) 74 - ++*pos; 75 return v; 76 } 77
··· 70 static void *seq_tab_next(struct seq_file *seq, void *v, loff_t *pos) 71 { 72 v = seq_tab_get_idx(seq->private, *pos + 1); 73 + ++(*pos); 74 return v; 75 } 76
+1 -2
drivers/net/ethernet/chelsio/cxgb4/l2t.c
··· 678 static void *l2t_seq_next(struct seq_file *seq, void *v, loff_t *pos) 679 { 680 v = l2t_get_idx(seq, *pos); 681 - if (v) 682 - ++*pos; 683 return v; 684 } 685
··· 678 static void *l2t_seq_next(struct seq_file *seq, void *v, loff_t *pos) 679 { 680 v = l2t_get_idx(seq, *pos); 681 + ++(*pos); 682 return v; 683 } 684
+2 -2
drivers/net/ethernet/freescale/fman/fman_memac.c
··· 110 /* Interface Mode Register (IF_MODE) */ 111 112 #define IF_MODE_MASK 0x00000003 /* 30-31 Mask on i/f mode bits */ 113 - #define IF_MODE_XGMII 0x00000000 /* 30-31 XGMII (10G) interface */ 114 #define IF_MODE_GMII 0x00000002 /* 30-31 GMII (1G) interface */ 115 #define IF_MODE_RGMII 0x00000004 116 #define IF_MODE_RGMII_AUTO 0x00008000 ··· 440 tmp = 0; 441 switch (phy_if) { 442 case PHY_INTERFACE_MODE_XGMII: 443 - tmp |= IF_MODE_XGMII; 444 break; 445 default: 446 tmp |= IF_MODE_GMII;
··· 110 /* Interface Mode Register (IF_MODE) */ 111 112 #define IF_MODE_MASK 0x00000003 /* 30-31 Mask on i/f mode bits */ 113 + #define IF_MODE_10G 0x00000000 /* 30-31 10G interface */ 114 #define IF_MODE_GMII 0x00000002 /* 30-31 GMII (1G) interface */ 115 #define IF_MODE_RGMII 0x00000004 116 #define IF_MODE_RGMII_AUTO 0x00008000 ··· 440 tmp = 0; 441 switch (phy_if) { 442 case PHY_INTERFACE_MODE_XGMII: 443 + tmp |= IF_MODE_10G; 444 break; 445 default: 446 tmp |= IF_MODE_GMII;
+6 -1
drivers/net/ethernet/freescale/xgmac_mdio.c
··· 49 struct mdio_fsl_priv { 50 struct tgec_mdio_controller __iomem *mdio_base; 51 bool is_little_endian; 52 }; 53 54 static u32 xgmac_read32(void __iomem *regs, ··· 227 return ret; 228 229 /* Return all Fs if nothing was there */ 230 - if (xgmac_read32(&regs->mdio_stat, endian) & MDIO_STAT_RD_ER) { 231 dev_err(&bus->dev, 232 "Error while reading PHY%d reg at %d.%hhu\n", 233 phy_id, dev_addr, regnum); ··· 275 276 priv->is_little_endian = of_property_read_bool(pdev->dev.of_node, 277 "little-endian"); 278 279 ret = of_mdiobus_register(bus, np); 280 if (ret) {
··· 49 struct mdio_fsl_priv { 50 struct tgec_mdio_controller __iomem *mdio_base; 51 bool is_little_endian; 52 + bool has_a011043; 53 }; 54 55 static u32 xgmac_read32(void __iomem *regs, ··· 226 return ret; 227 228 /* Return all Fs if nothing was there */ 229 + if ((xgmac_read32(&regs->mdio_stat, endian) & MDIO_STAT_RD_ER) && 230 + !priv->has_a011043) { 231 dev_err(&bus->dev, 232 "Error while reading PHY%d reg at %d.%hhu\n", 233 phy_id, dev_addr, regnum); ··· 273 274 priv->is_little_endian = of_property_read_bool(pdev->dev.of_node, 275 "little-endian"); 276 + 277 + priv->has_a011043 = of_property_read_bool(pdev->dev.of_node, 278 + "fsl,erratum-a011043"); 279 280 ret = of_mdiobus_register(bus, np); 281 if (ret) {
+1 -1
drivers/net/ethernet/intel/i40e/i40e_common.c
··· 1113 */ 1114 pba_size--; 1115 if (pba_num_size < (((u32)pba_size * 2) + 1)) { 1116 - hw_dbg(hw, "Buffer to small for PBA data.\n"); 1117 return I40E_ERR_PARAM; 1118 } 1119
··· 1113 */ 1114 pba_size--; 1115 if (pba_num_size < (((u32)pba_size * 2) + 1)) { 1116 + hw_dbg(hw, "Buffer too small for PBA data.\n"); 1117 return I40E_ERR_PARAM; 1118 } 1119
+29 -20
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
··· 180 181 struct tx_sync_info { 182 u64 rcd_sn; 183 - s32 sync_len; 184 int nr_frags; 185 skb_frag_t frags[MAX_SKB_FRAGS]; 186 }; ··· 193 194 static enum mlx5e_ktls_sync_retval 195 tx_sync_info_get(struct mlx5e_ktls_offload_context_tx *priv_tx, 196 - u32 tcp_seq, struct tx_sync_info *info) 197 { 198 struct tls_offload_context_tx *tx_ctx = priv_tx->tx_ctx; 199 enum mlx5e_ktls_sync_retval ret = MLX5E_KTLS_SYNC_DONE; 200 struct tls_record_info *record; 201 int remaining, i = 0; 202 unsigned long flags; 203 204 spin_lock_irqsave(&tx_ctx->lock, flags); 205 record = tls_get_record(tx_ctx, tcp_seq, &info->rcd_sn); ··· 210 goto out; 211 } 212 213 - if (unlikely(tcp_seq < tls_record_start_seq(record))) { 214 - ret = tls_record_is_start_marker(record) ? 215 - MLX5E_KTLS_SYNC_SKIP_NO_DATA : MLX5E_KTLS_SYNC_FAIL; 216 goto out; 217 } 218 ··· 350 u8 num_wqebbs; 351 int i = 0; 352 353 - ret = tx_sync_info_get(priv_tx, seq, &info); 354 if (unlikely(ret != MLX5E_KTLS_SYNC_DONE)) { 355 if (ret == MLX5E_KTLS_SYNC_SKIP_NO_DATA) { 356 stats->tls_skip_no_sync_data++; ··· 361 * It should be safe to drop the packet in this case 362 */ 363 stats->tls_drop_no_sync_data++; 364 - goto err_out; 365 - } 366 - 367 - if (unlikely(info.sync_len < 0)) { 368 - if (likely(datalen <= -info.sync_len)) 369 - return MLX5E_KTLS_SYNC_DONE; 370 - 371 - stats->tls_drop_bypass_req++; 372 goto err_out; 373 } 374 ··· 382 383 if (unlikely(contig_wqebbs_room < num_wqebbs)) 384 mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room); 385 - 386 - tx_post_resync_params(sq, priv_tx, info.rcd_sn); 387 388 for (; i < info.nr_frags; i++) { 389 unsigned int orig_fsz, frag_offset = 0, n = 0; ··· 458 enum mlx5e_ktls_sync_retval ret = 459 mlx5e_ktls_tx_handle_ooo(priv_tx, sq, datalen, seq); 460 461 - if (likely(ret == MLX5E_KTLS_SYNC_DONE)) 462 *wqe = mlx5e_sq_fetch_wqe(sq, sizeof(**wqe), pi); 463 - else if (ret == MLX5E_KTLS_SYNC_FAIL) 464 goto err_out; 465 - else /* ret == MLX5E_KTLS_SYNC_SKIP_NO_DATA */ 466 - goto out; 467 } 468 469 priv_tx->expected_seq = seq + datalen;
··· 180 181 struct tx_sync_info { 182 u64 rcd_sn; 183 + u32 sync_len; 184 int nr_frags; 185 skb_frag_t frags[MAX_SKB_FRAGS]; 186 }; ··· 193 194 static enum mlx5e_ktls_sync_retval 195 tx_sync_info_get(struct mlx5e_ktls_offload_context_tx *priv_tx, 196 + u32 tcp_seq, int datalen, struct tx_sync_info *info) 197 { 198 struct tls_offload_context_tx *tx_ctx = priv_tx->tx_ctx; 199 enum mlx5e_ktls_sync_retval ret = MLX5E_KTLS_SYNC_DONE; 200 struct tls_record_info *record; 201 int remaining, i = 0; 202 unsigned long flags; 203 + bool ends_before; 204 205 spin_lock_irqsave(&tx_ctx->lock, flags); 206 record = tls_get_record(tx_ctx, tcp_seq, &info->rcd_sn); ··· 209 goto out; 210 } 211 212 + /* There are the following cases: 213 + * 1. packet ends before start marker: bypass offload. 214 + * 2. packet starts before start marker and ends after it: drop, 215 + * not supported, breaks contract with kernel. 216 + * 3. packet ends before tls record info starts: drop, 217 + * this packet was already acknowledged and its record info 218 + * was released. 219 + */ 220 + ends_before = before(tcp_seq + datalen, tls_record_start_seq(record)); 221 + 222 + if (unlikely(tls_record_is_start_marker(record))) { 223 + ret = ends_before ? MLX5E_KTLS_SYNC_SKIP_NO_DATA : MLX5E_KTLS_SYNC_FAIL; 224 + goto out; 225 + } else if (ends_before) { 226 + ret = MLX5E_KTLS_SYNC_FAIL; 227 goto out; 228 } 229 ··· 337 u8 num_wqebbs; 338 int i = 0; 339 340 + ret = tx_sync_info_get(priv_tx, seq, datalen, &info); 341 if (unlikely(ret != MLX5E_KTLS_SYNC_DONE)) { 342 if (ret == MLX5E_KTLS_SYNC_SKIP_NO_DATA) { 343 stats->tls_skip_no_sync_data++; ··· 348 * It should be safe to drop the packet in this case 349 */ 350 stats->tls_drop_no_sync_data++; 351 goto err_out; 352 } 353 ··· 377 378 if (unlikely(contig_wqebbs_room < num_wqebbs)) 379 mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room); 380 381 for (; i < info.nr_frags; i++) { 382 unsigned int orig_fsz, frag_offset = 0, n = 0; ··· 455 enum mlx5e_ktls_sync_retval ret = 456 mlx5e_ktls_tx_handle_ooo(priv_tx, sq, datalen, seq); 457 458 + switch (ret) { 459 + case MLX5E_KTLS_SYNC_DONE: 460 *wqe = mlx5e_sq_fetch_wqe(sq, sizeof(**wqe), pi); 461 + break; 462 + case MLX5E_KTLS_SYNC_SKIP_NO_DATA: 463 + if (likely(!skb->decrypted)) 464 + goto out; 465 + WARN_ON_ONCE(1); 466 + /* fall-through */ 467 + default: /* MLX5E_KTLS_SYNC_FAIL */ 468 goto err_out; 469 + } 470 } 471 472 priv_tx->expected_seq = seq + datalen;
+7 -2
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 4036 u32 rate_mbps; 4037 int err; 4038 4039 esw = priv->mdev->priv.eswitch; 4040 /* rate is given in bytes/sec. 4041 * First convert to bits/sec and then round to the nearest mbit/secs. ··· 4051 * 1 mbit/sec. 4052 */ 4053 rate_mbps = rate ? max_t(u32, (rate * 8 + 500000) / 1000000, 1) : 0; 4054 - vport_num = rpriv->rep->vport; 4055 - 4056 err = mlx5_esw_modify_vport_rate(esw, vport_num, rate_mbps); 4057 if (err) 4058 NL_SET_ERR_MSG_MOD(extack, "failed applying action to hardware");
··· 4036 u32 rate_mbps; 4037 int err; 4038 4039 + vport_num = rpriv->rep->vport; 4040 + if (vport_num >= MLX5_VPORT_ECPF) { 4041 + NL_SET_ERR_MSG_MOD(extack, 4042 + "Ingress rate limit is supported only for Eswitch ports connected to VFs"); 4043 + return -EOPNOTSUPP; 4044 + } 4045 + 4046 esw = priv->mdev->priv.eswitch; 4047 /* rate is given in bytes/sec. 4048 * First convert to bits/sec and then round to the nearest mbit/secs. ··· 4044 * 1 mbit/sec. 4045 */ 4046 rate_mbps = rate ? max_t(u32, (rate * 8 + 500000) / 1000000, 1) : 0; 4047 err = mlx5_esw_modify_vport_rate(esw, vport_num, rate_mbps); 4048 if (err) 4049 NL_SET_ERR_MSG_MOD(extack, "failed applying action to hardware");
+3 -1
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
··· 1928 struct mlx5_vport *vport; 1929 int i; 1930 1931 - mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs) 1932 memset(&vport->info, 0, sizeof(vport->info)); 1933 } 1934 1935 /* Public E-Switch API */
··· 1928 struct mlx5_vport *vport; 1929 int i; 1930 1931 + mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs) { 1932 memset(&vport->info, 0, sizeof(vport->info)); 1933 + vport->info.link_state = MLX5_VPORT_ADMIN_STATE_AUTO; 1934 + } 1935 } 1936 1937 /* Public E-Switch API */
+9 -4
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 866 */ 867 #define ESW_SIZE (16 * 1024 * 1024) 868 const unsigned int ESW_POOLS[4] = { 4 * 1024 * 1024, 1 * 1024 * 1024, 869 - 64 * 1024, 4 * 1024 }; 870 871 static int 872 get_sz_from_pool(struct mlx5_eswitch *esw) ··· 1377 return -EINVAL; 1378 } 1379 1380 - mlx5_eswitch_disable(esw, false); 1381 mlx5_eswitch_update_num_of_vfs(esw, esw->dev->priv.sriov.num_vfs); 1382 err = mlx5_eswitch_enable(esw, MLX5_ESWITCH_OFFLOADS); 1383 if (err) { ··· 2220 2221 int esw_offloads_enable(struct mlx5_eswitch *esw) 2222 { 2223 - int err; 2224 2225 if (MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, reformat) && 2226 MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, decap)) ··· 2237 err = esw_set_passing_vport_metadata(esw, true); 2238 if (err) 2239 goto err_vport_metadata; 2240 2241 err = mlx5_eswitch_enable_pf_vf_vports(esw, MLX5_VPORT_UC_ADDR_CHANGE); 2242 if (err) ··· 2271 { 2272 int err, err1; 2273 2274 - mlx5_eswitch_disable(esw, false); 2275 err = mlx5_eswitch_enable(esw, MLX5_ESWITCH_LEGACY); 2276 if (err) { 2277 NL_SET_ERR_MSG_MOD(extack, "Failed setting eswitch to legacy");
··· 866 */ 867 #define ESW_SIZE (16 * 1024 * 1024) 868 const unsigned int ESW_POOLS[4] = { 4 * 1024 * 1024, 1 * 1024 * 1024, 869 + 64 * 1024, 128 }; 870 871 static int 872 get_sz_from_pool(struct mlx5_eswitch *esw) ··· 1377 return -EINVAL; 1378 } 1379 1380 + mlx5_eswitch_disable(esw, true); 1381 mlx5_eswitch_update_num_of_vfs(esw, esw->dev->priv.sriov.num_vfs); 1382 err = mlx5_eswitch_enable(esw, MLX5_ESWITCH_OFFLOADS); 1383 if (err) { ··· 2220 2221 int esw_offloads_enable(struct mlx5_eswitch *esw) 2222 { 2223 + struct mlx5_vport *vport; 2224 + int err, i; 2225 2226 if (MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, reformat) && 2227 MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, decap)) ··· 2236 err = esw_set_passing_vport_metadata(esw, true); 2237 if (err) 2238 goto err_vport_metadata; 2239 + 2240 + /* Representor will control the vport link state */ 2241 + mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs) 2242 + vport->info.link_state = MLX5_VPORT_ADMIN_STATE_DOWN; 2243 2244 err = mlx5_eswitch_enable_pf_vf_vports(esw, MLX5_VPORT_UC_ADDR_CHANGE); 2245 if (err) ··· 2266 { 2267 int err, err1; 2268 2269 + mlx5_eswitch_disable(esw, true); 2270 err = mlx5_eswitch_enable(esw, MLX5_ESWITCH_LEGACY); 2271 if (err) { 2272 NL_SET_ERR_MSG_MOD(extack, "Failed setting eswitch to legacy");
+1
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 1563 { PCI_VDEVICE(MELLANOX, 0x101d) }, /* ConnectX-6 Dx */ 1564 { PCI_VDEVICE(MELLANOX, 0x101e), MLX5_PCI_DEV_IS_VF}, /* ConnectX Family mlx5Gen Virtual Function */ 1565 { PCI_VDEVICE(MELLANOX, 0x101f) }, /* ConnectX-6 LX */ 1566 { PCI_VDEVICE(MELLANOX, 0xa2d2) }, /* BlueField integrated ConnectX-5 network controller */ 1567 { PCI_VDEVICE(MELLANOX, 0xa2d3), MLX5_PCI_DEV_IS_VF}, /* BlueField integrated ConnectX-5 network controller VF */ 1568 { PCI_VDEVICE(MELLANOX, 0xa2d6) }, /* BlueField-2 integrated ConnectX-6 Dx network controller */
··· 1563 { PCI_VDEVICE(MELLANOX, 0x101d) }, /* ConnectX-6 Dx */ 1564 { PCI_VDEVICE(MELLANOX, 0x101e), MLX5_PCI_DEV_IS_VF}, /* ConnectX Family mlx5Gen Virtual Function */ 1565 { PCI_VDEVICE(MELLANOX, 0x101f) }, /* ConnectX-6 LX */ 1566 + { PCI_VDEVICE(MELLANOX, 0x1021) }, /* ConnectX-7 */ 1567 { PCI_VDEVICE(MELLANOX, 0xa2d2) }, /* BlueField integrated ConnectX-5 network controller */ 1568 { PCI_VDEVICE(MELLANOX, 0xa2d3), MLX5_PCI_DEV_IS_VF}, /* BlueField integrated ConnectX-5 network controller VF */ 1569 { PCI_VDEVICE(MELLANOX, 0xa2d6) }, /* BlueField-2 integrated ConnectX-6 Dx network controller */
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
··· 1 // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB 2 /* Copyright (c) 2019 Mellanox Technologies. */ 3 4 #include "dr_types.h" 5 6 #define QUEUE_SIZE 128 ··· 730 if (!in) 731 goto err_cqwq; 732 733 - vector = smp_processor_id() % mlx5_comp_vectors_count(mdev); 734 err = mlx5_vector2eqn(mdev, vector, &eqn, &irqn); 735 if (err) { 736 kvfree(in);
··· 1 // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB 2 /* Copyright (c) 2019 Mellanox Technologies. */ 3 4 + #include <linux/smp.h> 5 #include "dr_types.h" 6 7 #define QUEUE_SIZE 128 ··· 729 if (!in) 730 goto err_cqwq; 731 732 + vector = raw_smp_processor_id() % mlx5_comp_vectors_count(mdev); 733 err = mlx5_vector2eqn(mdev, vector, &eqn, &irqn); 734 if (err) { 735 kvfree(in);
+29 -13
drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
··· 352 if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) { 353 list_for_each_entry(dst, &fte->node.children, node.list) { 354 enum mlx5_flow_destination_type type = dst->dest_attr.type; 355 - u32 id; 356 357 if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { 358 err = -ENOSPC; 359 goto free_actions; 360 } 361 362 - switch (type) { 363 - case MLX5_FLOW_DESTINATION_TYPE_COUNTER: 364 - id = dst->dest_attr.counter_id; 365 366 - tmp_action = 367 - mlx5dr_action_create_flow_counter(id); 368 - if (!tmp_action) { 369 - err = -ENOMEM; 370 - goto free_actions; 371 - } 372 - fs_dr_actions[fs_dr_num_actions++] = tmp_action; 373 - actions[num_actions++] = tmp_action; 374 - break; 375 case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE: 376 tmp_action = create_ft_action(dev, dst); 377 if (!tmp_action) { ··· 384 err = -EOPNOTSUPP; 385 goto free_actions; 386 } 387 } 388 } 389
··· 352 if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) { 353 list_for_each_entry(dst, &fte->node.children, node.list) { 354 enum mlx5_flow_destination_type type = dst->dest_attr.type; 355 356 if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { 357 err = -ENOSPC; 358 goto free_actions; 359 } 360 361 + if (type == MLX5_FLOW_DESTINATION_TYPE_COUNTER) 362 + continue; 363 364 + switch (type) { 365 case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE: 366 tmp_action = create_ft_action(dev, dst); 367 if (!tmp_action) { ··· 394 err = -EOPNOTSUPP; 395 goto free_actions; 396 } 397 + } 398 + } 399 + 400 + if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_COUNT) { 401 + list_for_each_entry(dst, &fte->node.children, node.list) { 402 + u32 id; 403 + 404 + if (dst->dest_attr.type != 405 + MLX5_FLOW_DESTINATION_TYPE_COUNTER) 406 + continue; 407 + 408 + if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) { 409 + err = -ENOSPC; 410 + goto free_actions; 411 + } 412 + 413 + id = dst->dest_attr.counter_id; 414 + tmp_action = 415 + mlx5dr_action_create_flow_counter(id); 416 + if (!tmp_action) { 417 + err = -ENOMEM; 418 + goto free_actions; 419 + } 420 + 421 + fs_dr_actions[fs_dr_num_actions++] = tmp_action; 422 + actions[num_actions++] = tmp_action; 423 } 424 } 425
+12 -4
drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
··· 8 #include <linux/string.h> 9 #include <linux/rhashtable.h> 10 #include <linux/netdevice.h> 11 #include <net/net_namespace.h> 12 #include <net/tc_act/tc_vlan.h> 13 ··· 26 struct mlxsw_sp_fid *dummy_fid; 27 struct rhashtable ruleset_ht; 28 struct list_head rules; 29 struct { 30 struct delayed_work dw; 31 unsigned long interval; /* ms */ ··· 703 goto err_ruleset_block_bind; 704 } 705 706 list_add_tail(&rule->list, &mlxsw_sp->acl->rules); 707 block->rule_count++; 708 block->egress_blocker_rule_count += rule->rulei->egress_bind_blocker; 709 return 0; ··· 727 728 block->egress_blocker_rule_count -= rule->rulei->egress_bind_blocker; 729 ruleset->ht_key.block->rule_count--; 730 list_del(&rule->list); 731 if (!ruleset->ht_key.chain_index && 732 mlxsw_sp_acl_ruleset_is_singular(ruleset)) 733 mlxsw_sp_acl_ruleset_block_unbind(mlxsw_sp, ruleset, ··· 789 struct mlxsw_sp_acl_rule *rule; 790 int err; 791 792 - /* Protect internal structures from changes */ 793 - rtnl_lock(); 794 list_for_each_entry(rule, &acl->rules, list) { 795 err = mlxsw_sp_acl_rule_activity_update(acl->mlxsw_sp, 796 rule); 797 if (err) 798 goto err_rule_update; 799 } 800 - rtnl_unlock(); 801 return 0; 802 803 err_rule_update: 804 - rtnl_unlock(); 805 return err; 806 } 807 ··· 885 acl->dummy_fid = fid; 886 887 INIT_LIST_HEAD(&acl->rules); 888 err = mlxsw_sp_acl_tcam_init(mlxsw_sp, &acl->tcam); 889 if (err) 890 goto err_acl_ops_init; ··· 898 return 0; 899 900 err_acl_ops_init: 901 mlxsw_sp_fid_put(fid); 902 err_fid_get: 903 rhashtable_destroy(&acl->ruleset_ht); ··· 915 916 cancel_delayed_work_sync(&mlxsw_sp->acl->rule_activity_update.dw); 917 mlxsw_sp_acl_tcam_fini(mlxsw_sp, &acl->tcam); 918 WARN_ON(!list_empty(&acl->rules)); 919 mlxsw_sp_fid_put(acl->dummy_fid); 920 rhashtable_destroy(&acl->ruleset_ht);
··· 8 #include <linux/string.h> 9 #include <linux/rhashtable.h> 10 #include <linux/netdevice.h> 11 + #include <linux/mutex.h> 12 #include <net/net_namespace.h> 13 #include <net/tc_act/tc_vlan.h> 14 ··· 25 struct mlxsw_sp_fid *dummy_fid; 26 struct rhashtable ruleset_ht; 27 struct list_head rules; 28 + struct mutex rules_lock; /* Protects rules list */ 29 struct { 30 struct delayed_work dw; 31 unsigned long interval; /* ms */ ··· 701 goto err_ruleset_block_bind; 702 } 703 704 + mutex_lock(&mlxsw_sp->acl->rules_lock); 705 list_add_tail(&rule->list, &mlxsw_sp->acl->rules); 706 + mutex_unlock(&mlxsw_sp->acl->rules_lock); 707 block->rule_count++; 708 block->egress_blocker_rule_count += rule->rulei->egress_bind_blocker; 709 return 0; ··· 723 724 block->egress_blocker_rule_count -= rule->rulei->egress_bind_blocker; 725 ruleset->ht_key.block->rule_count--; 726 + mutex_lock(&mlxsw_sp->acl->rules_lock); 727 list_del(&rule->list); 728 + mutex_unlock(&mlxsw_sp->acl->rules_lock); 729 if (!ruleset->ht_key.chain_index && 730 mlxsw_sp_acl_ruleset_is_singular(ruleset)) 731 mlxsw_sp_acl_ruleset_block_unbind(mlxsw_sp, ruleset, ··· 783 struct mlxsw_sp_acl_rule *rule; 784 int err; 785 786 + mutex_lock(&acl->rules_lock); 787 list_for_each_entry(rule, &acl->rules, list) { 788 err = mlxsw_sp_acl_rule_activity_update(acl->mlxsw_sp, 789 rule); 790 if (err) 791 goto err_rule_update; 792 } 793 + mutex_unlock(&acl->rules_lock); 794 return 0; 795 796 err_rule_update: 797 + mutex_unlock(&acl->rules_lock); 798 return err; 799 } 800 ··· 880 acl->dummy_fid = fid; 881 882 INIT_LIST_HEAD(&acl->rules); 883 + mutex_init(&acl->rules_lock); 884 err = mlxsw_sp_acl_tcam_init(mlxsw_sp, &acl->tcam); 885 if (err) 886 goto err_acl_ops_init; ··· 892 return 0; 893 894 err_acl_ops_init: 895 + mutex_destroy(&acl->rules_lock); 896 mlxsw_sp_fid_put(fid); 897 err_fid_get: 898 rhashtable_destroy(&acl->ruleset_ht); ··· 908 909 cancel_delayed_work_sync(&mlxsw_sp->acl->rule_activity_update.dw); 910 mlxsw_sp_acl_tcam_fini(mlxsw_sp, &acl->tcam); 911 + mutex_destroy(&acl->rules_lock); 912 WARN_ON(!list_empty(&acl->rules)); 913 mlxsw_sp_fid_put(acl->dummy_fid); 914 rhashtable_destroy(&acl->ruleset_ht);
+227 -147
drivers/net/ethernet/natsemi/sonic.c
··· 64 65 netif_dbg(lp, ifup, dev, "%s: initializing sonic driver\n", __func__); 66 67 for (i = 0; i < SONIC_NUM_RRS; i++) { 68 struct sk_buff *skb = netdev_alloc_skb(dev, SONIC_RBSIZE + 2); 69 if (skb == NULL) { ··· 116 return 0; 117 } 118 119 120 /* 121 * Close the SONIC device ··· 150 /* 151 * stop the SONIC, disable interrupts 152 */ 153 SONIC_WRITE(SONIC_IMR, 0); 154 SONIC_WRITE(SONIC_ISR, 0x7fff); 155 SONIC_WRITE(SONIC_CMD, SONIC_CR_RST); ··· 192 * put the Sonic into software-reset mode and 193 * disable all interrupts before releasing DMA buffers 194 */ 195 SONIC_WRITE(SONIC_IMR, 0); 196 SONIC_WRITE(SONIC_ISR, 0x7fff); 197 SONIC_WRITE(SONIC_CMD, SONIC_CR_RST); ··· 232 * wake the tx queue 233 * Concurrently with all of this, the SONIC is potentially writing to 234 * the status flags of the TDs. 235 - * Until some mutual exclusion is added, this code will not work with SMP. However, 236 - * MIPS Jazz machines and m68k Macs were all uni-processor machines. 237 */ 238 239 static int sonic_send_packet(struct sk_buff *skb, struct net_device *dev) ··· 239 struct sonic_local *lp = netdev_priv(dev); 240 dma_addr_t laddr; 241 int length; 242 - int entry = lp->next_tx; 243 244 netif_dbg(lp, tx_queued, dev, "%s: skb=%p\n", __func__, skb); 245 ··· 262 return NETDEV_TX_OK; 263 } 264 265 sonic_tda_put(dev, entry, SONIC_TD_STATUS, 0); /* clear status */ 266 sonic_tda_put(dev, entry, SONIC_TD_FRAG_COUNT, 1); /* single fragment */ 267 sonic_tda_put(dev, entry, SONIC_TD_PKTSIZE, length); /* length of packet */ ··· 275 sonic_tda_put(dev, entry, SONIC_TD_LINK, 276 sonic_tda_get(dev, entry, SONIC_TD_LINK) | SONIC_EOL); 277 278 - /* 279 - * Must set tx_skb[entry] only after clearing status, and 280 - * before clearing EOL and before stopping queue 281 - */ 282 wmb(); 283 lp->tx_len[entry] = length; 284 lp->tx_laddr[entry] = laddr; ··· 297 298 SONIC_WRITE(SONIC_CMD, SONIC_CR_TXP); 299 300 return NETDEV_TX_OK; 301 } 302 ··· 311 struct net_device *dev = dev_id; 312 struct sonic_local *lp = netdev_priv(dev); 313 int status; 314 315 - if (!(status = SONIC_READ(SONIC_ISR) & SONIC_IMR_DEFAULT)) 316 return IRQ_NONE; 317 318 do { 319 if (status & SONIC_INT_PKTRX) { 320 netif_dbg(lp, intr, dev, "%s: packet rx\n", __func__); 321 sonic_rx(dev); /* got packet(s) */ 322 - SONIC_WRITE(SONIC_ISR, SONIC_INT_PKTRX); /* clear the interrupt */ 323 } 324 325 if (status & SONIC_INT_TXDN) { ··· 340 int td_status; 341 int freed_some = 0; 342 343 - /* At this point, cur_tx is the index of a TD that is one of: 344 - * unallocated/freed (status set & tx_skb[entry] clear) 345 - * allocated and sent (status set & tx_skb[entry] set ) 346 - * allocated and not yet sent (status clear & tx_skb[entry] set ) 347 - * still being allocated by sonic_send_packet (status clear & tx_skb[entry] clear) 348 */ 349 350 netif_dbg(lp, intr, dev, "%s: tx done\n", __func__); ··· 354 if ((td_status = sonic_tda_get(dev, entry, SONIC_TD_STATUS)) == 0) 355 break; 356 357 - if (td_status & 0x0001) { 358 lp->stats.tx_packets++; 359 lp->stats.tx_bytes += sonic_tda_get(dev, entry, SONIC_TD_PKTSIZE); 360 } else { 361 - lp->stats.tx_errors++; 362 - if (td_status & 0x0642) 363 lp->stats.tx_aborted_errors++; 364 - if (td_status & 0x0180) 365 lp->stats.tx_carrier_errors++; 366 - if (td_status & 0x0020) 367 lp->stats.tx_window_errors++; 368 - if (td_status & 0x0004) 369 lp->stats.tx_fifo_errors++; 370 } 371 ··· 388 if (freed_some || lp->tx_skb[entry] == NULL) 389 netif_wake_queue(dev); /* The ring is no longer full */ 390 lp->cur_tx = entry; 391 - 
SONIC_WRITE(SONIC_ISR, SONIC_INT_TXDN); /* clear the interrupt */ 392 } 393 394 /* ··· 396 if (status & SONIC_INT_RFO) { 397 netif_dbg(lp, rx_err, dev, "%s: rx fifo overrun\n", 398 __func__); 399 - lp->stats.rx_fifo_errors++; 400 - SONIC_WRITE(SONIC_ISR, SONIC_INT_RFO); /* clear the interrupt */ 401 } 402 if (status & SONIC_INT_RDE) { 403 netif_dbg(lp, rx_err, dev, "%s: rx descriptors exhausted\n", 404 __func__); 405 - lp->stats.rx_dropped++; 406 - SONIC_WRITE(SONIC_ISR, SONIC_INT_RDE); /* clear the interrupt */ 407 } 408 if (status & SONIC_INT_RBAE) { 409 netif_dbg(lp, rx_err, dev, "%s: rx buffer area exceeded\n", 410 __func__); 411 - lp->stats.rx_dropped++; 412 - SONIC_WRITE(SONIC_ISR, SONIC_INT_RBAE); /* clear the interrupt */ 413 } 414 415 /* counter overruns; all counters are 16bit wide */ 416 - if (status & SONIC_INT_FAE) { 417 lp->stats.rx_frame_errors += 65536; 418 - SONIC_WRITE(SONIC_ISR, SONIC_INT_FAE); /* clear the interrupt */ 419 - } 420 - if (status & SONIC_INT_CRC) { 421 lp->stats.rx_crc_errors += 65536; 422 - SONIC_WRITE(SONIC_ISR, SONIC_INT_CRC); /* clear the interrupt */ 423 - } 424 - if (status & SONIC_INT_MP) { 425 lp->stats.rx_missed_errors += 65536; 426 - SONIC_WRITE(SONIC_ISR, SONIC_INT_MP); /* clear the interrupt */ 427 - } 428 429 /* transmit error */ 430 if (status & SONIC_INT_TXER) { 431 - if (SONIC_READ(SONIC_TCR) & SONIC_TCR_FU) 432 - netif_dbg(lp, tx_err, dev, "%s: tx fifo underrun\n", 433 - __func__); 434 - SONIC_WRITE(SONIC_ISR, SONIC_INT_TXER); /* clear the interrupt */ 435 } 436 437 /* bus retry */ ··· 436 /* ... to help debug DMA problems causing endless interrupts. */ 437 /* Bounce the eth interface to turn on the interrupt again. */ 438 SONIC_WRITE(SONIC_IMR, 0); 439 - SONIC_WRITE(SONIC_ISR, SONIC_INT_BR); /* clear the interrupt */ 440 } 441 442 - /* load CAM done */ 443 - if (status & SONIC_INT_LCD) 444 - SONIC_WRITE(SONIC_ISR, SONIC_INT_LCD); /* clear the interrupt */ 445 - } while((status = SONIC_READ(SONIC_ISR) & SONIC_IMR_DEFAULT)); 446 return IRQ_HANDLED; 447 } 448 449 /* ··· 520 static void sonic_rx(struct net_device *dev) 521 { 522 struct sonic_local *lp = netdev_priv(dev); 523 - int status; 524 int entry = lp->cur_rx; 525 526 while (sonic_rda_get(dev, entry, SONIC_RD_IN_USE) == 0) { 527 - struct sk_buff *used_skb; 528 - struct sk_buff *new_skb; 529 - dma_addr_t new_laddr; 530 - u16 bufadr_l; 531 - u16 bufadr_h; 532 - int pkt_len; 533 534 - status = sonic_rda_get(dev, entry, SONIC_RD_STATUS); 535 - if (status & SONIC_RCR_PRX) { 536 - /* Malloc up new buffer. 
*/ 537 - new_skb = netdev_alloc_skb(dev, SONIC_RBSIZE + 2); 538 - if (new_skb == NULL) { 539 - lp->stats.rx_dropped++; 540 - break; 541 - } 542 - /* provide 16 byte IP header alignment unless DMA requires otherwise */ 543 - if(SONIC_BUS_SCALE(lp->dma_bitmode) == 2) 544 - skb_reserve(new_skb, 2); 545 546 - new_laddr = dma_map_single(lp->device, skb_put(new_skb, SONIC_RBSIZE), 547 - SONIC_RBSIZE, DMA_FROM_DEVICE); 548 - if (!new_laddr) { 549 - dev_kfree_skb(new_skb); 550 - printk(KERN_ERR "%s: Failed to map rx buffer, dropping packet.\n", dev->name); 551 - lp->stats.rx_dropped++; 552 break; 553 } 554 555 - /* now we have a new skb to replace it, pass the used one up the stack */ 556 - dma_unmap_single(lp->device, lp->rx_laddr[entry], SONIC_RBSIZE, DMA_FROM_DEVICE); 557 - used_skb = lp->rx_skb[entry]; 558 - pkt_len = sonic_rda_get(dev, entry, SONIC_RD_PKTLEN); 559 - skb_trim(used_skb, pkt_len); 560 - used_skb->protocol = eth_type_trans(used_skb, dev); 561 - netif_rx(used_skb); 562 - lp->stats.rx_packets++; 563 - lp->stats.rx_bytes += pkt_len; 564 565 - /* and insert the new skb */ 566 - lp->rx_laddr[entry] = new_laddr; 567 - lp->rx_skb[entry] = new_skb; 568 569 - bufadr_l = (unsigned long)new_laddr & 0xffff; 570 - bufadr_h = (unsigned long)new_laddr >> 16; 571 - sonic_rra_put(dev, entry, SONIC_RR_BUFADR_L, bufadr_l); 572 - sonic_rra_put(dev, entry, SONIC_RR_BUFADR_H, bufadr_h); 573 - } else { 574 - /* This should only happen, if we enable accepting broken packets. */ 575 - lp->stats.rx_errors++; 576 - if (status & SONIC_RCR_FAER) 577 - lp->stats.rx_frame_errors++; 578 - if (status & SONIC_RCR_CRCR) 579 - lp->stats.rx_crc_errors++; 580 - } 581 - if (status & SONIC_RCR_LPKT) { 582 - /* 583 - * this was the last packet out of the current receive buffer 584 - * give the buffer back to the SONIC 585 */ 586 - lp->cur_rwp += SIZEOF_SONIC_RR * SONIC_BUS_SCALE(lp->dma_bitmode); 587 - if (lp->cur_rwp >= lp->rra_end) lp->cur_rwp = lp->rra_laddr & 0xffff; 588 - SONIC_WRITE(SONIC_RWP, lp->cur_rwp); 589 - if (SONIC_READ(SONIC_ISR) & SONIC_INT_RBE) { 590 - netif_dbg(lp, rx_err, dev, "%s: rx buffer exhausted\n", 591 - __func__); 592 - SONIC_WRITE(SONIC_ISR, SONIC_INT_RBE); /* clear the flag */ 593 - } 594 - } else 595 - printk(KERN_ERR "%s: rx desc without RCR_LPKT. 
Shouldn't happen !?\n", 596 - dev->name); 597 /* 598 * give back the descriptor 599 */ 600 - sonic_rda_put(dev, entry, SONIC_RD_LINK, 601 - sonic_rda_get(dev, entry, SONIC_RD_LINK) | SONIC_EOL); 602 sonic_rda_put(dev, entry, SONIC_RD_IN_USE, 1); 603 - sonic_rda_put(dev, lp->eol_rx, SONIC_RD_LINK, 604 - sonic_rda_get(dev, lp->eol_rx, SONIC_RD_LINK) & ~SONIC_EOL); 605 - lp->eol_rx = entry; 606 - lp->cur_rx = entry = (entry + 1) & SONIC_RDS_MASK; 607 } 608 /* 609 * If any worth-while packets have been received, netif_rx() 610 * has done a mark_bh(NET_BH) for us and will work on them ··· 643 (netdev_mc_count(dev) > 15)) { 644 rcr |= SONIC_RCR_AMC; 645 } else { 646 netif_dbg(lp, ifup, dev, "%s: mc_count %d\n", __func__, 647 netdev_mc_count(dev)); 648 sonic_set_cam_enable(dev, 1); /* always enable our own address */ ··· 658 i++; 659 } 660 SONIC_WRITE(SONIC_CDC, 16); 661 - /* issue Load CAM command */ 662 SONIC_WRITE(SONIC_CDP, lp->cda_laddr & 0xffff); 663 SONIC_WRITE(SONIC_CMD, SONIC_CR_LCAM); 664 } 665 } 666 ··· 680 */ 681 static int sonic_init(struct net_device *dev) 682 { 683 - unsigned int cmd; 684 struct sonic_local *lp = netdev_priv(dev); 685 int i; 686 ··· 691 SONIC_WRITE(SONIC_ISR, 0x7fff); 692 SONIC_WRITE(SONIC_CMD, SONIC_CR_RST); 693 694 /* 695 * clear software reset flag, disable receiver, clear and 696 * enable interrupts, then completely initialize the SONIC 697 */ 698 SONIC_WRITE(SONIC_CMD, 0); 699 - SONIC_WRITE(SONIC_CMD, SONIC_CR_RXDIS); 700 701 /* 702 * initialize the receive resource area ··· 718 } 719 720 /* initialize all RRA registers */ 721 - lp->rra_end = (lp->rra_laddr + SONIC_NUM_RRS * SIZEOF_SONIC_RR * 722 - SONIC_BUS_SCALE(lp->dma_bitmode)) & 0xffff; 723 - lp->cur_rwp = (lp->rra_laddr + (SONIC_NUM_RRS - 1) * SIZEOF_SONIC_RR * 724 - SONIC_BUS_SCALE(lp->dma_bitmode)) & 0xffff; 725 - 726 - SONIC_WRITE(SONIC_RSA, lp->rra_laddr & 0xffff); 727 - SONIC_WRITE(SONIC_REA, lp->rra_end); 728 - SONIC_WRITE(SONIC_RRP, lp->rra_laddr & 0xffff); 729 - SONIC_WRITE(SONIC_RWP, lp->cur_rwp); 730 SONIC_WRITE(SONIC_URRA, lp->rra_laddr >> 16); 731 SONIC_WRITE(SONIC_EOBC, (SONIC_RBSIZE >> 1) - (lp->dma_bitmode ? 2 : 1)); 732 ··· 729 netif_dbg(lp, ifup, dev, "%s: issuing RRRA command\n", __func__); 730 731 SONIC_WRITE(SONIC_CMD, SONIC_CR_RRRA); 732 - i = 0; 733 - while (i++ < 100) { 734 - if (SONIC_READ(SONIC_CMD) & SONIC_CR_RRRA) 735 - break; 736 - } 737 - 738 - netif_dbg(lp, ifup, dev, "%s: status=%x, i=%d\n", __func__, 739 - SONIC_READ(SONIC_CMD), i); 740 741 /* 742 * Initialize the receive descriptors so that they ··· 804 * load the CAM 805 */ 806 SONIC_WRITE(SONIC_CMD, SONIC_CR_LCAM); 807 - 808 - i = 0; 809 - while (i++ < 100) { 810 - if (SONIC_READ(SONIC_ISR) & SONIC_INT_LCD) 811 - break; 812 - } 813 - netif_dbg(lp, ifup, dev, "%s: CMD=%x, ISR=%x, i=%d\n", __func__, 814 - SONIC_READ(SONIC_CMD), SONIC_READ(SONIC_ISR), i); 815 816 /* 817 * enable receiver, disable loopback 818 * and enable all interrupts 819 */ 820 - SONIC_WRITE(SONIC_CMD, SONIC_CR_RXEN | SONIC_CR_STP); 821 SONIC_WRITE(SONIC_RCR, SONIC_RCR_DEFAULT); 822 SONIC_WRITE(SONIC_TCR, SONIC_TCR_DEFAULT); 823 SONIC_WRITE(SONIC_ISR, 0x7fff); 824 SONIC_WRITE(SONIC_IMR, SONIC_IMR_DEFAULT); 825 - 826 - cmd = SONIC_READ(SONIC_CMD); 827 - if ((cmd & SONIC_CR_RXEN) == 0 || (cmd & SONIC_CR_STP) == 0) 828 - printk(KERN_ERR "sonic_init: failed, status=%x\n", cmd); 829 830 netif_dbg(lp, ifup, dev, "%s: new status=%x\n", __func__, 831 SONIC_READ(SONIC_CMD));
··· 64 65 netif_dbg(lp, ifup, dev, "%s: initializing sonic driver\n", __func__); 66 67 + spin_lock_init(&lp->lock); 68 + 69 for (i = 0; i < SONIC_NUM_RRS; i++) { 70 struct sk_buff *skb = netdev_alloc_skb(dev, SONIC_RBSIZE + 2); 71 if (skb == NULL) { ··· 114 return 0; 115 } 116 117 + /* Wait for the SONIC to become idle. */ 118 + static void sonic_quiesce(struct net_device *dev, u16 mask) 119 + { 120 + struct sonic_local * __maybe_unused lp = netdev_priv(dev); 121 + int i; 122 + u16 bits; 123 + 124 + for (i = 0; i < 1000; ++i) { 125 + bits = SONIC_READ(SONIC_CMD) & mask; 126 + if (!bits) 127 + return; 128 + if (irqs_disabled() || in_interrupt()) 129 + udelay(20); 130 + else 131 + usleep_range(100, 200); 132 + } 133 + WARN_ONCE(1, "command deadline expired! 0x%04x\n", bits); 134 + } 135 136 /* 137 * Close the SONIC device ··· 130 /* 131 * stop the SONIC, disable interrupts 132 */ 133 + SONIC_WRITE(SONIC_CMD, SONIC_CR_RXDIS); 134 + sonic_quiesce(dev, SONIC_CR_ALL); 135 + 136 SONIC_WRITE(SONIC_IMR, 0); 137 SONIC_WRITE(SONIC_ISR, 0x7fff); 138 SONIC_WRITE(SONIC_CMD, SONIC_CR_RST); ··· 169 * put the Sonic into software-reset mode and 170 * disable all interrupts before releasing DMA buffers 171 */ 172 + SONIC_WRITE(SONIC_CMD, SONIC_CR_RXDIS); 173 + sonic_quiesce(dev, SONIC_CR_ALL); 174 + 175 SONIC_WRITE(SONIC_IMR, 0); 176 SONIC_WRITE(SONIC_ISR, 0x7fff); 177 SONIC_WRITE(SONIC_CMD, SONIC_CR_RST); ··· 206 * wake the tx queue 207 * Concurrently with all of this, the SONIC is potentially writing to 208 * the status flags of the TDs. 209 */ 210 211 static int sonic_send_packet(struct sk_buff *skb, struct net_device *dev) ··· 215 struct sonic_local *lp = netdev_priv(dev); 216 dma_addr_t laddr; 217 int length; 218 + int entry; 219 + unsigned long flags; 220 221 netif_dbg(lp, tx_queued, dev, "%s: skb=%p\n", __func__, skb); 222 ··· 237 return NETDEV_TX_OK; 238 } 239 240 + spin_lock_irqsave(&lp->lock, flags); 241 + 242 + entry = lp->next_tx; 243 + 244 sonic_tda_put(dev, entry, SONIC_TD_STATUS, 0); /* clear status */ 245 sonic_tda_put(dev, entry, SONIC_TD_FRAG_COUNT, 1); /* single fragment */ 246 sonic_tda_put(dev, entry, SONIC_TD_PKTSIZE, length); /* length of packet */ ··· 246 sonic_tda_put(dev, entry, SONIC_TD_LINK, 247 sonic_tda_get(dev, entry, SONIC_TD_LINK) | SONIC_EOL); 248 249 wmb(); 250 lp->tx_len[entry] = length; 251 lp->tx_laddr[entry] = laddr; ··· 272 273 SONIC_WRITE(SONIC_CMD, SONIC_CR_TXP); 274 275 + spin_unlock_irqrestore(&lp->lock, flags); 276 + 277 return NETDEV_TX_OK; 278 } 279 ··· 284 struct net_device *dev = dev_id; 285 struct sonic_local *lp = netdev_priv(dev); 286 int status; 287 + unsigned long flags; 288 289 + /* The lock has two purposes. Firstly, it synchronizes sonic_interrupt() 290 + * with sonic_send_packet() so that the two functions can share state. 291 + * Secondly, it makes sonic_interrupt() re-entrant, as that is required 292 + * by macsonic which must use two IRQs with different priority levels. 
293 + */ 294 + spin_lock_irqsave(&lp->lock, flags); 295 + 296 + status = SONIC_READ(SONIC_ISR) & SONIC_IMR_DEFAULT; 297 + if (!status) { 298 + spin_unlock_irqrestore(&lp->lock, flags); 299 + 300 return IRQ_NONE; 301 + } 302 303 do { 304 + SONIC_WRITE(SONIC_ISR, status); /* clear the interrupt(s) */ 305 + 306 if (status & SONIC_INT_PKTRX) { 307 netif_dbg(lp, intr, dev, "%s: packet rx\n", __func__); 308 sonic_rx(dev); /* got packet(s) */ 309 } 310 311 if (status & SONIC_INT_TXDN) { ··· 300 int td_status; 301 int freed_some = 0; 302 303 + /* The state of a Transmit Descriptor may be inferred 304 + * from { tx_skb[entry], td_status } as follows. 305 + * { clear, clear } => the TD has never been used 306 + * { set, clear } => the TD was handed to SONIC 307 + * { set, set } => the TD was handed back 308 + * { clear, set } => the TD is available for re-use 309 */ 310 311 netif_dbg(lp, intr, dev, "%s: tx done\n", __func__); ··· 313 if ((td_status = sonic_tda_get(dev, entry, SONIC_TD_STATUS)) == 0) 314 break; 315 316 + if (td_status & SONIC_TCR_PTX) { 317 lp->stats.tx_packets++; 318 lp->stats.tx_bytes += sonic_tda_get(dev, entry, SONIC_TD_PKTSIZE); 319 } else { 320 + if (td_status & (SONIC_TCR_EXD | 321 + SONIC_TCR_EXC | SONIC_TCR_BCM)) 322 lp->stats.tx_aborted_errors++; 323 + if (td_status & 324 + (SONIC_TCR_NCRS | SONIC_TCR_CRLS)) 325 lp->stats.tx_carrier_errors++; 326 + if (td_status & SONIC_TCR_OWC) 327 lp->stats.tx_window_errors++; 328 + if (td_status & SONIC_TCR_FU) 329 lp->stats.tx_fifo_errors++; 330 } 331 ··· 346 if (freed_some || lp->tx_skb[entry] == NULL) 347 netif_wake_queue(dev); /* The ring is no longer full */ 348 lp->cur_tx = entry; 349 } 350 351 /* ··· 355 if (status & SONIC_INT_RFO) { 356 netif_dbg(lp, rx_err, dev, "%s: rx fifo overrun\n", 357 __func__); 358 } 359 if (status & SONIC_INT_RDE) { 360 netif_dbg(lp, rx_err, dev, "%s: rx descriptors exhausted\n", 361 __func__); 362 } 363 if (status & SONIC_INT_RBAE) { 364 netif_dbg(lp, rx_err, dev, "%s: rx buffer area exceeded\n", 365 __func__); 366 } 367 368 /* counter overruns; all counters are 16bit wide */ 369 + if (status & SONIC_INT_FAE) 370 lp->stats.rx_frame_errors += 65536; 371 + if (status & SONIC_INT_CRC) 372 lp->stats.rx_crc_errors += 65536; 373 + if (status & SONIC_INT_MP) 374 lp->stats.rx_missed_errors += 65536; 375 376 /* transmit error */ 377 if (status & SONIC_INT_TXER) { 378 + u16 tcr = SONIC_READ(SONIC_TCR); 379 + 380 + netif_dbg(lp, tx_err, dev, "%s: TXER intr, TCR %04x\n", 381 + __func__, tcr); 382 + 383 + if (tcr & (SONIC_TCR_EXD | SONIC_TCR_EXC | 384 + SONIC_TCR_FU | SONIC_TCR_BCM)) { 385 + /* Aborted transmission. Try again. */ 386 + netif_stop_queue(dev); 387 + SONIC_WRITE(SONIC_CMD, SONIC_CR_TXP); 388 + } 389 } 390 391 /* bus retry */ ··· 400 /* ... to help debug DMA problems causing endless interrupts. */ 401 /* Bounce the eth interface to turn on the interrupt again. */ 402 SONIC_WRITE(SONIC_IMR, 0); 403 } 404 405 + status = SONIC_READ(SONIC_ISR) & SONIC_IMR_DEFAULT; 406 + } while (status); 407 + 408 + spin_unlock_irqrestore(&lp->lock, flags); 409 + 410 return IRQ_HANDLED; 411 + } 412 + 413 + /* Return the array index corresponding to a given Receive Buffer pointer. 
*/ 414 + static int index_from_addr(struct sonic_local *lp, dma_addr_t addr, 415 + unsigned int last) 416 + { 417 + unsigned int i = last; 418 + 419 + do { 420 + i = (i + 1) & SONIC_RRS_MASK; 421 + if (addr == lp->rx_laddr[i]) 422 + return i; 423 + } while (i != last); 424 + 425 + return -ENOENT; 426 + } 427 + 428 + /* Allocate and map a new skb to be used as a receive buffer. */ 429 + static bool sonic_alloc_rb(struct net_device *dev, struct sonic_local *lp, 430 + struct sk_buff **new_skb, dma_addr_t *new_addr) 431 + { 432 + *new_skb = netdev_alloc_skb(dev, SONIC_RBSIZE + 2); 433 + if (!*new_skb) 434 + return false; 435 + 436 + if (SONIC_BUS_SCALE(lp->dma_bitmode) == 2) 437 + skb_reserve(*new_skb, 2); 438 + 439 + *new_addr = dma_map_single(lp->device, skb_put(*new_skb, SONIC_RBSIZE), 440 + SONIC_RBSIZE, DMA_FROM_DEVICE); 441 + if (!*new_addr) { 442 + dev_kfree_skb(*new_skb); 443 + *new_skb = NULL; 444 + return false; 445 + } 446 + 447 + return true; 448 + } 449 + 450 + /* Place a new receive resource in the Receive Resource Area and update RWP. */ 451 + static void sonic_update_rra(struct net_device *dev, struct sonic_local *lp, 452 + dma_addr_t old_addr, dma_addr_t new_addr) 453 + { 454 + unsigned int entry = sonic_rr_entry(dev, SONIC_READ(SONIC_RWP)); 455 + unsigned int end = sonic_rr_entry(dev, SONIC_READ(SONIC_RRP)); 456 + u32 buf; 457 + 458 + /* The resources in the range [RRP, RWP) belong to the SONIC. This loop 459 + * scans the other resources in the RRA, those in the range [RWP, RRP). 460 + */ 461 + do { 462 + buf = (sonic_rra_get(dev, entry, SONIC_RR_BUFADR_H) << 16) | 463 + sonic_rra_get(dev, entry, SONIC_RR_BUFADR_L); 464 + 465 + if (buf == old_addr) 466 + break; 467 + 468 + entry = (entry + 1) & SONIC_RRS_MASK; 469 + } while (entry != end); 470 + 471 + WARN_ONCE(buf != old_addr, "failed to find resource!\n"); 472 + 473 + sonic_rra_put(dev, entry, SONIC_RR_BUFADR_H, new_addr >> 16); 474 + sonic_rra_put(dev, entry, SONIC_RR_BUFADR_L, new_addr & 0xffff); 475 + 476 + entry = (entry + 1) & SONIC_RRS_MASK; 477 + 478 + SONIC_WRITE(SONIC_RWP, sonic_rr_addr(dev, entry)); 479 } 480 481 /* ··· 416 static void sonic_rx(struct net_device *dev) 417 { 418 struct sonic_local *lp = netdev_priv(dev); 419 int entry = lp->cur_rx; 420 + int prev_entry = lp->eol_rx; 421 + bool rbe = false; 422 423 while (sonic_rda_get(dev, entry, SONIC_RD_IN_USE) == 0) { 424 + u16 status = sonic_rda_get(dev, entry, SONIC_RD_STATUS); 425 426 + /* If the RD has LPKT set, the chip has finished with the RB */ 427 + if ((status & SONIC_RCR_PRX) && (status & SONIC_RCR_LPKT)) { 428 + struct sk_buff *new_skb; 429 + dma_addr_t new_laddr; 430 + u32 addr = (sonic_rda_get(dev, entry, 431 + SONIC_RD_PKTPTR_H) << 16) | 432 + sonic_rda_get(dev, entry, SONIC_RD_PKTPTR_L); 433 + int i = index_from_addr(lp, addr, entry); 434 435 + if (i < 0) { 436 + WARN_ONCE(1, "failed to find buffer!\n"); 437 break; 438 } 439 440 + if (sonic_alloc_rb(dev, lp, &new_skb, &new_laddr)) { 441 + struct sk_buff *used_skb = lp->rx_skb[i]; 442 + int pkt_len; 443 444 + /* Pass the used buffer up the stack */ 445 + dma_unmap_single(lp->device, addr, SONIC_RBSIZE, 446 + DMA_FROM_DEVICE); 447 448 + pkt_len = sonic_rda_get(dev, entry, 449 + SONIC_RD_PKTLEN); 450 + skb_trim(used_skb, pkt_len); 451 + used_skb->protocol = eth_type_trans(used_skb, 452 + dev); 453 + netif_rx(used_skb); 454 + lp->stats.rx_packets++; 455 + lp->stats.rx_bytes += pkt_len; 456 + 457 + lp->rx_skb[i] = new_skb; 458 + lp->rx_laddr[i] = new_laddr; 459 + } else { 460 + /* Failed to obtain a 
new buffer so re-use it */ 461 + new_laddr = addr; 462 + lp->stats.rx_dropped++; 463 + } 464 + /* If RBE is already asserted when RWP advances then 465 + * it's safe to clear RBE after processing this packet. 466 */ 467 + rbe = rbe || SONIC_READ(SONIC_ISR) & SONIC_INT_RBE; 468 + sonic_update_rra(dev, lp, addr, new_laddr); 469 + } 470 /* 471 * give back the descriptor 472 */ 473 + sonic_rda_put(dev, entry, SONIC_RD_STATUS, 0); 474 sonic_rda_put(dev, entry, SONIC_RD_IN_USE, 1); 475 + 476 + prev_entry = entry; 477 + entry = (entry + 1) & SONIC_RDS_MASK; 478 } 479 + 480 + lp->cur_rx = entry; 481 + 482 + if (prev_entry != lp->eol_rx) { 483 + /* Advance the EOL flag to put descriptors back into service */ 484 + sonic_rda_put(dev, prev_entry, SONIC_RD_LINK, SONIC_EOL | 485 + sonic_rda_get(dev, prev_entry, SONIC_RD_LINK)); 486 + sonic_rda_put(dev, lp->eol_rx, SONIC_RD_LINK, ~SONIC_EOL & 487 + sonic_rda_get(dev, lp->eol_rx, SONIC_RD_LINK)); 488 + lp->eol_rx = prev_entry; 489 + } 490 + 491 + if (rbe) 492 + SONIC_WRITE(SONIC_ISR, SONIC_INT_RBE); 493 /* 494 * If any worth-while packets have been received, netif_rx() 495 * has done a mark_bh(NET_BH) for us and will work on them ··· 550 (netdev_mc_count(dev) > 15)) { 551 rcr |= SONIC_RCR_AMC; 552 } else { 553 + unsigned long flags; 554 + 555 netif_dbg(lp, ifup, dev, "%s: mc_count %d\n", __func__, 556 netdev_mc_count(dev)); 557 sonic_set_cam_enable(dev, 1); /* always enable our own address */ ··· 563 i++; 564 } 565 SONIC_WRITE(SONIC_CDC, 16); 566 SONIC_WRITE(SONIC_CDP, lp->cda_laddr & 0xffff); 567 + 568 + /* LCAM and TXP commands can't be used simultaneously */ 569 + spin_lock_irqsave(&lp->lock, flags); 570 + sonic_quiesce(dev, SONIC_CR_TXP); 571 SONIC_WRITE(SONIC_CMD, SONIC_CR_LCAM); 572 + sonic_quiesce(dev, SONIC_CR_LCAM); 573 + spin_unlock_irqrestore(&lp->lock, flags); 574 } 575 } 576 ··· 580 */ 581 static int sonic_init(struct net_device *dev) 582 { 583 struct sonic_local *lp = netdev_priv(dev); 584 int i; 585 ··· 592 SONIC_WRITE(SONIC_ISR, 0x7fff); 593 SONIC_WRITE(SONIC_CMD, SONIC_CR_RST); 594 595 + /* While in reset mode, clear CAM Enable register */ 596 + SONIC_WRITE(SONIC_CE, 0); 597 + 598 /* 599 * clear software reset flag, disable receiver, clear and 600 * enable interrupts, then completely initialize the SONIC 601 */ 602 SONIC_WRITE(SONIC_CMD, 0); 603 + SONIC_WRITE(SONIC_CMD, SONIC_CR_RXDIS | SONIC_CR_STP); 604 + sonic_quiesce(dev, SONIC_CR_ALL); 605 606 /* 607 * initialize the receive resource area ··· 615 } 616 617 /* initialize all RRA registers */ 618 + SONIC_WRITE(SONIC_RSA, sonic_rr_addr(dev, 0)); 619 + SONIC_WRITE(SONIC_REA, sonic_rr_addr(dev, SONIC_NUM_RRS)); 620 + SONIC_WRITE(SONIC_RRP, sonic_rr_addr(dev, 0)); 621 + SONIC_WRITE(SONIC_RWP, sonic_rr_addr(dev, SONIC_NUM_RRS - 1)); 622 SONIC_WRITE(SONIC_URRA, lp->rra_laddr >> 16); 623 SONIC_WRITE(SONIC_EOBC, (SONIC_RBSIZE >> 1) - (lp->dma_bitmode ? 
2 : 1)); 624 ··· 631 netif_dbg(lp, ifup, dev, "%s: issuing RRRA command\n", __func__); 632 633 SONIC_WRITE(SONIC_CMD, SONIC_CR_RRRA); 634 + sonic_quiesce(dev, SONIC_CR_RRRA); 635 636 /* 637 * Initialize the receive descriptors so that they ··· 713 * load the CAM 714 */ 715 SONIC_WRITE(SONIC_CMD, SONIC_CR_LCAM); 716 + sonic_quiesce(dev, SONIC_CR_LCAM); 717 718 /* 719 * enable receiver, disable loopback 720 * and enable all interrupts 721 */ 722 SONIC_WRITE(SONIC_RCR, SONIC_RCR_DEFAULT); 723 SONIC_WRITE(SONIC_TCR, SONIC_TCR_DEFAULT); 724 SONIC_WRITE(SONIC_ISR, 0x7fff); 725 SONIC_WRITE(SONIC_IMR, SONIC_IMR_DEFAULT); 726 + SONIC_WRITE(SONIC_CMD, SONIC_CR_RXEN); 727 728 netif_dbg(lp, ifup, dev, "%s: new status=%x\n", __func__, 729 SONIC_READ(SONIC_CMD));
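In the sonic receive path above, sonic_alloc_rb() allocates and DMA-maps a replacement skb before the filled buffer is handed up the stack; if that allocation fails, the old buffer is simply recycled into the ring and the packet is counted as dropped. Below is a minimal, hedged sketch of that allocate-and-map step, using the conventional dma_mapping_error() check; the buffer size and function names are illustrative, not sonic's.

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/dma-mapping.h>

#define EXAMPLE_RX_BUF_SIZE	1536	/* illustrative buffer size */

/* Allocate an RX skb and map it for device-to-CPU DMA; on failure the caller
 * can keep using its old buffer. Hedged sketch only, not sonic's code.
 */
static bool alloc_mapped_rx_buf(struct net_device *dev, struct device *dma_dev,
				struct sk_buff **skb, dma_addr_t *addr)
{
	*skb = netdev_alloc_skb_ip_align(dev, EXAMPLE_RX_BUF_SIZE);
	if (!*skb)
		return false;

	*addr = dma_map_single(dma_dev, (*skb)->data, EXAMPLE_RX_BUF_SIZE,
			       DMA_FROM_DEVICE);
	if (dma_mapping_error(dma_dev, *addr)) {
		dev_kfree_skb(*skb);
		*skb = NULL;
		return false;
	}

	return true;
}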
+32 -12
drivers/net/ethernet/natsemi/sonic.h
··· 110 #define SONIC_CR_TXP 0x0002 111 #define SONIC_CR_HTX 0x0001 112 113 /* 114 * SONIC data configuration bits 115 */ ··· 178 #define SONIC_TCR_NCRS 0x0100 179 #define SONIC_TCR_CRLS 0x0080 180 #define SONIC_TCR_EXC 0x0040 181 #define SONIC_TCR_PMB 0x0008 182 #define SONIC_TCR_FU 0x0004 183 #define SONIC_TCR_BCM 0x0002 ··· 278 #define SONIC_NUM_RDS SONIC_NUM_RRS /* number of receive descriptors */ 279 #define SONIC_NUM_TDS 16 /* number of transmit descriptors */ 280 281 - #define SONIC_RDS_MASK (SONIC_NUM_RDS-1) 282 - #define SONIC_TDS_MASK (SONIC_NUM_TDS-1) 283 284 #define SONIC_RBSIZE 1520 /* size of one resource buffer */ 285 ··· 317 u32 rda_laddr; /* logical DMA address of RDA */ 318 dma_addr_t rx_laddr[SONIC_NUM_RRS]; /* logical DMA addresses of rx skbuffs */ 319 dma_addr_t tx_laddr[SONIC_NUM_TDS]; /* logical DMA addresses of tx skbuffs */ 320 - unsigned int rra_end; 321 - unsigned int cur_rwp; 322 unsigned int cur_rx; 323 unsigned int cur_tx; /* first unacked transmit packet */ 324 unsigned int eol_rx; ··· 325 int msg_enable; 326 struct device *device; /* generic device */ 327 struct net_device_stats stats; 328 }; 329 330 #define TX_TIMEOUT (3 * HZ) ··· 348 as far as we can tell. */ 349 /* OpenBSD calls this "SWO". I'd like to think that sonic_buf_put() 350 is a much better name. */ 351 - static inline void sonic_buf_put(void* base, int bitmode, 352 int offset, __u16 val) 353 { 354 if (bitmode) 355 #ifdef __BIG_ENDIAN 356 - ((__u16 *) base + (offset*2))[1] = val; 357 #else 358 - ((__u16 *) base + (offset*2))[0] = val; 359 #endif 360 else 361 - ((__u16 *) base)[offset] = val; 362 } 363 364 - static inline __u16 sonic_buf_get(void* base, int bitmode, 365 int offset) 366 { 367 if (bitmode) 368 #ifdef __BIG_ENDIAN 369 - return ((volatile __u16 *) base + (offset*2))[1]; 370 #else 371 - return ((volatile __u16 *) base + (offset*2))[0]; 372 #endif 373 else 374 - return ((volatile __u16 *) base)[offset]; 375 } 376 377 /* Inlines that you should actually use for reading/writing DMA buffers */ ··· 449 struct sonic_local *lp = netdev_priv(dev); 450 return sonic_buf_get(lp->rra, lp->dma_bitmode, 451 (entry * SIZEOF_SONIC_RR) + offset); 452 } 453 454 static const char version[] =
··· 110 #define SONIC_CR_TXP 0x0002 111 #define SONIC_CR_HTX 0x0001 112 113 + #define SONIC_CR_ALL (SONIC_CR_LCAM | SONIC_CR_RRRA | \ 114 + SONIC_CR_RXEN | SONIC_CR_TXP) 115 + 116 /* 117 * SONIC data configuration bits 118 */ ··· 175 #define SONIC_TCR_NCRS 0x0100 176 #define SONIC_TCR_CRLS 0x0080 177 #define SONIC_TCR_EXC 0x0040 178 + #define SONIC_TCR_OWC 0x0020 179 #define SONIC_TCR_PMB 0x0008 180 #define SONIC_TCR_FU 0x0004 181 #define SONIC_TCR_BCM 0x0002 ··· 274 #define SONIC_NUM_RDS SONIC_NUM_RRS /* number of receive descriptors */ 275 #define SONIC_NUM_TDS 16 /* number of transmit descriptors */ 276 277 + #define SONIC_RRS_MASK (SONIC_NUM_RRS - 1) 278 + #define SONIC_RDS_MASK (SONIC_NUM_RDS - 1) 279 + #define SONIC_TDS_MASK (SONIC_NUM_TDS - 1) 280 281 #define SONIC_RBSIZE 1520 /* size of one resource buffer */ 282 ··· 312 u32 rda_laddr; /* logical DMA address of RDA */ 313 dma_addr_t rx_laddr[SONIC_NUM_RRS]; /* logical DMA addresses of rx skbuffs */ 314 dma_addr_t tx_laddr[SONIC_NUM_TDS]; /* logical DMA addresses of tx skbuffs */ 315 unsigned int cur_rx; 316 unsigned int cur_tx; /* first unacked transmit packet */ 317 unsigned int eol_rx; ··· 322 int msg_enable; 323 struct device *device; /* generic device */ 324 struct net_device_stats stats; 325 + spinlock_t lock; 326 }; 327 328 #define TX_TIMEOUT (3 * HZ) ··· 344 as far as we can tell. */ 345 /* OpenBSD calls this "SWO". I'd like to think that sonic_buf_put() 346 is a much better name. */ 347 + static inline void sonic_buf_put(u16 *base, int bitmode, 348 int offset, __u16 val) 349 { 350 if (bitmode) 351 #ifdef __BIG_ENDIAN 352 + __raw_writew(val, base + (offset * 2) + 1); 353 #else 354 + __raw_writew(val, base + (offset * 2) + 0); 355 #endif 356 else 357 + __raw_writew(val, base + (offset * 1) + 0); 358 } 359 360 + static inline __u16 sonic_buf_get(u16 *base, int bitmode, 361 int offset) 362 { 363 if (bitmode) 364 #ifdef __BIG_ENDIAN 365 + return __raw_readw(base + (offset * 2) + 1); 366 #else 367 + return __raw_readw(base + (offset * 2) + 0); 368 #endif 369 else 370 + return __raw_readw(base + (offset * 1) + 0); 371 } 372 373 /* Inlines that you should actually use for reading/writing DMA buffers */ ··· 445 struct sonic_local *lp = netdev_priv(dev); 446 return sonic_buf_get(lp->rra, lp->dma_bitmode, 447 (entry * SIZEOF_SONIC_RR) + offset); 448 + } 449 + 450 + static inline u16 sonic_rr_addr(struct net_device *dev, int entry) 451 + { 452 + struct sonic_local *lp = netdev_priv(dev); 453 + 454 + return lp->rra_laddr + 455 + entry * SIZEOF_SONIC_RR * SONIC_BUS_SCALE(lp->dma_bitmode); 456 + } 457 + 458 + static inline u16 sonic_rr_entry(struct net_device *dev, u16 addr) 459 + { 460 + struct sonic_local *lp = netdev_priv(dev); 461 + 462 + return (addr - (u16)lp->rra_laddr) / (SIZEOF_SONIC_RR * 463 + SONIC_BUS_SCALE(lp->dma_bitmode)); 464 } 465 466 static const char version[] =
+1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c
··· 2043 break; 2044 } 2045 entry += p_hdr->size; 2046 } 2047 p_dev->ahw->reset.seq_index = index; 2048 }
··· 2043 break; 2044 } 2045 entry += p_hdr->size; 2046 + cond_resched(); 2047 } 2048 p_dev->ahw->reset.seq_index = index; 2049 }
+2
drivers/net/ethernet/qlogic/qlcnic/qlcnic_minidump.c
··· 703 addr += 16; 704 reg_read -= 16; 705 ret += 16; 706 } 707 out: 708 mutex_unlock(&adapter->ahw->mem_lock); ··· 1384 buf_offset += entry->hdr.cap_size; 1385 entry_offset += entry->hdr.offset; 1386 buffer = fw_dump->data + buf_offset; 1387 } 1388 1389 fw_dump->clr = 1;
··· 703 addr += 16; 704 reg_read -= 16; 705 ret += 16; 706 + cond_resched(); 707 } 708 out: 709 mutex_unlock(&adapter->ahw->mem_lock); ··· 1383 buf_offset += entry->hdr.cap_size; 1384 entry_offset += entry->hdr.offset; 1385 buffer = fw_dump->data + buf_offset; 1386 + cond_resched(); 1387 } 1388 1389 fw_dump->clr = 1;
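The two qlcnic hunks above add cond_resched() inside loops that can walk very large reset templates and minidump capture lists, letting the scheduler run between iterations. A minimal sketch of the same pattern, assuming process context; process_entry() is a hypothetical stand-in for the per-entry work:

#include <linux/types.h>
#include <linux/sched.h>

static void process_entry(u32 entry)
{
	/* hypothetical per-entry work */
}

/* Yield the CPU between iterations of a potentially long loop so other
 * tasks are not starved while a big template or dump is processed.
 */
static void walk_entries(const u32 *entries, unsigned int count)
{
	unsigned int i;

	for (i = 0; i < count; i++) {
		process_entry(entries[i]);
		cond_resched();	/* may sleep; only reschedules when needed */
	}
}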
+3 -3
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 412 *mac = NULL; 413 } 414 415 - rc = of_get_phy_mode(np, &plat->phy_interface); 416 - if (rc) 417 - return ERR_PTR(rc); 418 419 plat->interface = stmmac_of_get_mac_mode(np); 420 if (plat->interface < 0)
··· 412 *mac = NULL; 413 } 414 415 + plat->phy_interface = device_get_phy_mode(&pdev->dev); 416 + if (plat->phy_interface < 0) 417 + return ERR_PTR(plat->phy_interface); 418 419 plat->interface = stmmac_of_get_mac_mode(np); 420 if (plat->interface < 0)
+6 -4
drivers/net/gtp.c
··· 804 return NULL; 805 } 806 807 - if (sock->sk->sk_protocol != IPPROTO_UDP) { 808 pr_debug("socket fd=%d not UDP\n", fd); 809 sk = ERR_PTR(-EINVAL); 810 goto out_sock; 811 } 812 813 - lock_sock(sock->sk); 814 - if (sock->sk->sk_user_data) { 815 sk = ERR_PTR(-EBUSY); 816 goto out_rel_sock; 817 } 818 819 - sk = sock->sk; 820 sock_hold(sk); 821 822 tuncfg.sk_user_data = gtp;
··· 804 return NULL; 805 } 806 807 + sk = sock->sk; 808 + if (sk->sk_protocol != IPPROTO_UDP || 809 + sk->sk_type != SOCK_DGRAM || 810 + (sk->sk_family != AF_INET && sk->sk_family != AF_INET6)) { 811 pr_debug("socket fd=%d not UDP\n", fd); 812 sk = ERR_PTR(-EINVAL); 813 goto out_sock; 814 } 815 816 + lock_sock(sk); 817 + if (sk->sk_user_data) { 818 sk = ERR_PTR(-EBUSY); 819 goto out_rel_sock; 820 } 821 822 sock_hold(sk); 823 824 tuncfg.sk_user_data = gtp;
+10 -2
drivers/net/slip/slip.c
··· 452 */ 453 static void slip_write_wakeup(struct tty_struct *tty) 454 { 455 - struct slip *sl = tty->disc_data; 456 457 schedule_work(&sl->tx_work); 458 } 459 460 static void sl_tx_timeout(struct net_device *dev) ··· 889 return; 890 891 spin_lock_bh(&sl->lock); 892 - tty->disc_data = NULL; 893 sl->tty = NULL; 894 spin_unlock_bh(&sl->lock); 895 896 flush_work(&sl->tx_work); 897 898 /* VSV = very important to remove timers */
··· 452 */ 453 static void slip_write_wakeup(struct tty_struct *tty) 454 { 455 + struct slip *sl; 456 + 457 + rcu_read_lock(); 458 + sl = rcu_dereference(tty->disc_data); 459 + if (!sl) 460 + goto out; 461 462 schedule_work(&sl->tx_work); 463 + out: 464 + rcu_read_unlock(); 465 } 466 467 static void sl_tx_timeout(struct net_device *dev) ··· 882 return; 883 884 spin_lock_bh(&sl->lock); 885 + rcu_assign_pointer(tty->disc_data, NULL); 886 sl->tty = NULL; 887 spin_unlock_bh(&sl->lock); 888 889 + synchronize_rcu(); 890 flush_work(&sl->tx_work); 891 892 /* VSV = very important to remove timers */
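The slip changes above protect tty->disc_data with RCU so that slip_write_wakeup() can never dereference a pointer the teardown path has already retired; the synchronize_rcu() before flush_work() guarantees any reader that saw the old pointer has finished. A minimal, hedged sketch of the same publish/retire pattern, with illustrative struct and field names:

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct ctx {
	int dummy;
};

struct holder {
	struct ctx __rcu *ctx;	/* published pointer, read without locks */
};

/* Reader side: take rcu_read_lock(), re-check the pointer, bail if gone. */
static void reader(struct holder *h)
{
	struct ctx *c;

	rcu_read_lock();
	c = rcu_dereference(h->ctx);
	if (c)
		c->dummy++;	/* safe: object cannot be freed while we hold the lock */
	rcu_read_unlock();
}

/* Writer side: unpublish, wait for readers, then it is safe to free. */
static void retire(struct holder *h, struct ctx *c)
{
	rcu_assign_pointer(h->ctx, NULL);
	synchronize_rcu();	/* every reader that saw the old pointer is done */
	kfree(c);
}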
+4
drivers/net/tun.c
··· 1936 if (ret != XDP_PASS) { 1937 rcu_read_unlock(); 1938 local_bh_enable(); 1939 return total_len; 1940 } 1941 }
··· 1936 if (ret != XDP_PASS) { 1937 rcu_read_unlock(); 1938 local_bh_enable(); 1939 + if (frags) { 1940 + tfile->napi.skb = NULL; 1941 + mutex_unlock(&tfile->napi_mutex); 1942 + } 1943 return total_len; 1944 } 1945 }
+15
drivers/net/usb/lan78xx.c
··· 20 #include <linux/mdio.h> 21 #include <linux/phy.h> 22 #include <net/ip6_checksum.h> 23 #include <linux/interrupt.h> 24 #include <linux/irqdomain.h> 25 #include <linux/irq.h> ··· 3669 tasklet_schedule(&dev->bh); 3670 } 3671 3672 static const struct net_device_ops lan78xx_netdev_ops = { 3673 .ndo_open = lan78xx_open, 3674 .ndo_stop = lan78xx_stop, ··· 3695 .ndo_set_features = lan78xx_set_features, 3696 .ndo_vlan_rx_add_vid = lan78xx_vlan_rx_add_vid, 3697 .ndo_vlan_rx_kill_vid = lan78xx_vlan_rx_kill_vid, 3698 }; 3699 3700 static void lan78xx_stat_monitor(struct timer_list *t)
··· 20 #include <linux/mdio.h> 21 #include <linux/phy.h> 22 #include <net/ip6_checksum.h> 23 + #include <net/vxlan.h> 24 #include <linux/interrupt.h> 25 #include <linux/irqdomain.h> 26 #include <linux/irq.h> ··· 3668 tasklet_schedule(&dev->bh); 3669 } 3670 3671 + static netdev_features_t lan78xx_features_check(struct sk_buff *skb, 3672 + struct net_device *netdev, 3673 + netdev_features_t features) 3674 + { 3675 + if (skb->len + TX_OVERHEAD > MAX_SINGLE_PACKET_SIZE) 3676 + features &= ~NETIF_F_GSO_MASK; 3677 + 3678 + features = vlan_features_check(skb, features); 3679 + features = vxlan_features_check(skb, features); 3680 + 3681 + return features; 3682 + } 3683 + 3684 static const struct net_device_ops lan78xx_netdev_ops = { 3685 .ndo_open = lan78xx_open, 3686 .ndo_stop = lan78xx_stop, ··· 3681 .ndo_set_features = lan78xx_set_features, 3682 .ndo_vlan_rx_add_vid = lan78xx_vlan_rx_add_vid, 3683 .ndo_vlan_rx_kill_vid = lan78xx_vlan_rx_kill_vid, 3684 + .ndo_features_check = lan78xx_features_check, 3685 }; 3686 3687 static void lan78xx_stat_monitor(struct timer_list *t)
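lan78xx_features_check() above drops GSO for frames whose on-wire size (payload plus TX overhead) would exceed the device's single-packet limit, then applies the stack's generic VLAN/VXLAN checks. A hedged sketch of the same .ndo_features_check shape; the size limit macro here is a stand-in, not the driver's real constant:

#include <linux/netdevice.h>
#include <linux/if_vlan.h>

#define EXAMPLE_MAX_TX_SIZE	9000	/* illustrative limit, not lan78xx's */

static netdev_features_t example_features_check(struct sk_buff *skb,
						struct net_device *dev,
						netdev_features_t features)
{
	/* Too big for one device transfer: fall back to software GSO. */
	if (skb->len > EXAMPLE_MAX_TX_SIZE)
		features &= ~NETIF_F_GSO_MASK;

	/* Keep the stack's standard VLAN offload sanity checks. */
	return vlan_features_check(skb, features);
}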
+114 -11
drivers/net/usb/r8152.c
··· 31 #define NETNEXT_VERSION "11" 32 33 /* Information for net */ 34 - #define NET_VERSION "10" 35 36 #define DRIVER_VERSION "v1." NETNEXT_VERSION "." NET_VERSION 37 #define DRIVER_AUTHOR "Realtek linux nic maintainers <nic_swsd@realtek.com>" ··· 68 #define PLA_LED_FEATURE 0xdd92 69 #define PLA_PHYAR 0xde00 70 #define PLA_BOOT_CTRL 0xe004 71 #define PLA_GPHY_INTR_IMR 0xe022 72 #define PLA_EEE_CR 0xe040 73 #define PLA_EEEP_CR 0xe080 ··· 96 #define PLA_TALLYCNT 0xe890 97 #define PLA_SFF_STS_7 0xe8de 98 #define PLA_PHYSTATUS 0xe908 99 #define PLA_BP_BA 0xfc26 100 #define PLA_BP_0 0xfc28 101 #define PLA_BP_1 0xfc2a ··· 109 #define PLA_BP_EN 0xfc38 110 111 #define USB_USB2PHY 0xb41e 112 #define USB_SSPHYLINK2 0xb428 113 #define USB_U2P3_CTRL 0xb460 114 #define USB_CSR_DUMMY1 0xb464 ··· 303 #define LINK_ON_WAKE_EN 0x0010 304 #define LINK_OFF_WAKE_EN 0x0008 305 306 /* PLA_CONFIG5 */ 307 #define BWF_EN 0x0040 308 #define MWF_EN 0x0020 ··· 318 /* PLA_PHY_PWR */ 319 #define TX_10M_IDLE_EN 0x0080 320 #define PFM_PWM_SWITCH 0x0040 321 322 /* PLA_MAC_PWR_CTRL */ 323 #define D3_CLK_GATED_EN 0x00004000 ··· 331 #define MAC_CLK_SPDWN_EN BIT(15) 332 333 /* PLA_MAC_PWR_CTRL3 */ 334 #define PKT_AVAIL_SPDWN_EN 0x0100 335 #define SUSPEND_SPDWN_EN 0x0004 336 #define U1U2_SPDWN_EN 0x0002 ··· 362 /* PLA_BOOT_CTRL */ 363 #define AUTOLOAD_DONE 0x0002 364 365 /* PLA_SUSPEND_FLAG */ 366 #define LINK_CHG_EVENT BIT(0) 367 ··· 376 #define DEBUG_LTSSM 0x0082 377 378 /* PLA_EXTRA_STATUS */ 379 #define U3P3_CHECK_EN BIT(7) /* RTL_VER_05 only */ 380 #define LINK_CHANGE_FLAG BIT(8) 381 382 /* USB_USB2PHY */ 383 #define USB2PHY_SUSPEND 0x0001 384 #define USB2PHY_L1 0x0002 385 386 /* USB_SSPHYLINK2 */ 387 #define pwd_dn_scale_mask 0x3ffe ··· 2879 r8153_set_rx_early_timeout(tp); 2880 r8153_set_rx_early_size(tp); 2881 2882 return rtl_enable(tp); 2883 } 2884 ··· 3403 r8153b_ups_en(tp, false); 3404 r8153_queue_wake(tp, false); 3405 rtl_runtime_suspend_enable(tp, false); 3406 - r8153_u2p3en(tp, true); 3407 - r8153b_u1u2en(tp, true); 3408 } 3409 } 3410 ··· 4702 4703 r8153_aldps_en(tp, true); 4704 r8152b_enable_fc(tp); 4705 - r8153_u2p3en(tp, true); 4706 4707 set_bit(PHY_RESET, &tp->flags); 4708 } ··· 4980 4981 static void rtl8153_up(struct r8152 *tp) 4982 { 4983 if (test_bit(RTL8152_UNPLUG, &tp->flags)) 4984 return; 4985 ··· 4989 r8153_u2p3en(tp, false); 4990 r8153_aldps_en(tp, false); 4991 r8153_first_init(tp); 4992 r8153_aldps_en(tp, true); 4993 4994 switch (tp->version) { ··· 5020 5021 static void rtl8153_down(struct r8152 *tp) 5022 { 5023 if (test_bit(RTL8152_UNPLUG, &tp->flags)) { 5024 rtl_drop_queued_tx(tp); 5025 return; 5026 } 5027 5028 r8153_u1u2en(tp, false); 5029 r8153_u2p3en(tp, false); ··· 5041 5042 static void rtl8153b_up(struct r8152 *tp) 5043 { 5044 if (test_bit(RTL8152_UNPLUG, &tp->flags)) 5045 return; 5046 ··· 5053 r8153_first_init(tp); 5054 ocp_write_dword(tp, MCU_TYPE_USB, USB_RX_BUF_TH, RX_THR_B); 5055 5056 r8153_aldps_en(tp, true); 5057 - r8153_u2p3en(tp, true); 5058 - r8153b_u1u2en(tp, true); 5059 } 5060 5061 static void rtl8153b_down(struct r8152 *tp) 5062 { 5063 if (test_bit(RTL8152_UNPLUG, &tp->flags)) { 5064 rtl_drop_queued_tx(tp); 5065 return; 5066 } 5067 5068 r8153b_u1u2en(tp, false); 5069 r8153_u2p3en(tp, false); ··· 5447 else 5448 ocp_data |= DYNAMIC_BURST; 5449 ocp_write_byte(tp, MCU_TYPE_USB, USB_CSR_DUMMY1, ocp_data); 5450 } 5451 5452 ocp_data = ocp_read_byte(tp, MCU_TYPE_USB, USB_CSR_DUMMY2); ··· 5486 ocp_write_word(tp, MCU_TYPE_USB, USB_CONNECT_TIMER, 0x0001); 5487 5488 r8153_power_cut_en(tp, 
false); 5489 r8153_u1u2en(tp, true); 5490 r8153_mac_clk_spd(tp, false); 5491 usb_enable_lpm(tp->udev); 5492 5493 /* rx aggregation */ 5494 ocp_data = ocp_read_word(tp, MCU_TYPE_USB, USB_USB_CTRL); ··· 5563 r8153b_ups_en(tp, false); 5564 r8153_queue_wake(tp, false); 5565 rtl_runtime_suspend_enable(tp, false); 5566 - r8153b_u1u2en(tp, true); 5567 usb_enable_lpm(tp->udev); 5568 5569 /* MAC clock speed down */ 5570 ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL2); 5571 ocp_data |= MAC_CLK_SPDWN_EN; 5572 ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL2, ocp_data); 5573 5574 set_bit(GREEN_ETHERNET, &tp->flags); 5575 ··· 6809 6810 intf->needs_remote_wakeup = 1; 6811 6812 tp->rtl_ops.init(tp); 6813 #if IS_BUILTIN(CONFIG_USB_RTL8152) 6814 /* Retry in case request_firmware() is not ready yet. */ ··· 6831 goto out1; 6832 } 6833 6834 - if (!rtl_can_wakeup(tp)) 6835 - __rtl_set_wol(tp, 0); 6836 - 6837 - tp->saved_wolopts = __rtl_get_wol(tp); 6838 if (tp->saved_wolopts) 6839 device_set_wakeup_enable(&udev->dev, true); 6840 else
··· 31 #define NETNEXT_VERSION "11" 32 33 /* Information for net */ 34 + #define NET_VERSION "11" 35 36 #define DRIVER_VERSION "v1." NETNEXT_VERSION "." NET_VERSION 37 #define DRIVER_AUTHOR "Realtek linux nic maintainers <nic_swsd@realtek.com>" ··· 68 #define PLA_LED_FEATURE 0xdd92 69 #define PLA_PHYAR 0xde00 70 #define PLA_BOOT_CTRL 0xe004 71 + #define PLA_LWAKE_CTRL_REG 0xe007 72 #define PLA_GPHY_INTR_IMR 0xe022 73 #define PLA_EEE_CR 0xe040 74 #define PLA_EEEP_CR 0xe080 ··· 95 #define PLA_TALLYCNT 0xe890 96 #define PLA_SFF_STS_7 0xe8de 97 #define PLA_PHYSTATUS 0xe908 98 + #define PLA_CONFIG6 0xe90a /* CONFIG6 */ 99 #define PLA_BP_BA 0xfc26 100 #define PLA_BP_0 0xfc28 101 #define PLA_BP_1 0xfc2a ··· 107 #define PLA_BP_EN 0xfc38 108 109 #define USB_USB2PHY 0xb41e 110 + #define USB_SSPHYLINK1 0xb426 111 #define USB_SSPHYLINK2 0xb428 112 #define USB_U2P3_CTRL 0xb460 113 #define USB_CSR_DUMMY1 0xb464 ··· 300 #define LINK_ON_WAKE_EN 0x0010 301 #define LINK_OFF_WAKE_EN 0x0008 302 303 + /* PLA_CONFIG6 */ 304 + #define LANWAKE_CLR_EN BIT(0) 305 + 306 /* PLA_CONFIG5 */ 307 #define BWF_EN 0x0040 308 #define MWF_EN 0x0020 ··· 312 /* PLA_PHY_PWR */ 313 #define TX_10M_IDLE_EN 0x0080 314 #define PFM_PWM_SWITCH 0x0040 315 + #define TEST_IO_OFF BIT(4) 316 317 /* PLA_MAC_PWR_CTRL */ 318 #define D3_CLK_GATED_EN 0x00004000 ··· 324 #define MAC_CLK_SPDWN_EN BIT(15) 325 326 /* PLA_MAC_PWR_CTRL3 */ 327 + #define PLA_MCU_SPDWN_EN BIT(14) 328 #define PKT_AVAIL_SPDWN_EN 0x0100 329 #define SUSPEND_SPDWN_EN 0x0004 330 #define U1U2_SPDWN_EN 0x0002 ··· 354 /* PLA_BOOT_CTRL */ 355 #define AUTOLOAD_DONE 0x0002 356 357 + /* PLA_LWAKE_CTRL_REG */ 358 + #define LANWAKE_PIN BIT(7) 359 + 360 /* PLA_SUSPEND_FLAG */ 361 #define LINK_CHG_EVENT BIT(0) 362 ··· 365 #define DEBUG_LTSSM 0x0082 366 367 /* PLA_EXTRA_STATUS */ 368 + #define CUR_LINK_OK BIT(15) 369 #define U3P3_CHECK_EN BIT(7) /* RTL_VER_05 only */ 370 #define LINK_CHANGE_FLAG BIT(8) 371 + #define POLL_LINK_CHG BIT(0) 372 373 /* USB_USB2PHY */ 374 #define USB2PHY_SUSPEND 0x0001 375 #define USB2PHY_L1 0x0002 376 + 377 + /* USB_SSPHYLINK1 */ 378 + #define DELAY_PHY_PWR_CHG BIT(1) 379 380 /* USB_SSPHYLINK2 */ 381 #define pwd_dn_scale_mask 0x3ffe ··· 2863 r8153_set_rx_early_timeout(tp); 2864 r8153_set_rx_early_size(tp); 2865 2866 + if (tp->version == RTL_VER_09) { 2867 + u32 ocp_data; 2868 + 2869 + ocp_data = ocp_read_word(tp, MCU_TYPE_USB, USB_FW_TASK); 2870 + ocp_data &= ~FC_PATCH_TASK; 2871 + ocp_write_word(tp, MCU_TYPE_USB, USB_FW_TASK, ocp_data); 2872 + usleep_range(1000, 2000); 2873 + ocp_data |= FC_PATCH_TASK; 2874 + ocp_write_word(tp, MCU_TYPE_USB, USB_FW_TASK, ocp_data); 2875 + } 2876 + 2877 return rtl_enable(tp); 2878 } 2879 ··· 3376 r8153b_ups_en(tp, false); 3377 r8153_queue_wake(tp, false); 3378 rtl_runtime_suspend_enable(tp, false); 3379 + if (tp->udev->speed != USB_SPEED_HIGH) 3380 + r8153b_u1u2en(tp, true); 3381 } 3382 } 3383 ··· 4675 4676 r8153_aldps_en(tp, true); 4677 r8152b_enable_fc(tp); 4678 4679 set_bit(PHY_RESET, &tp->flags); 4680 } ··· 4954 4955 static void rtl8153_up(struct r8152 *tp) 4956 { 4957 + u32 ocp_data; 4958 + 4959 if (test_bit(RTL8152_UNPLUG, &tp->flags)) 4960 return; 4961 ··· 4961 r8153_u2p3en(tp, false); 4962 r8153_aldps_en(tp, false); 4963 r8153_first_init(tp); 4964 + 4965 + ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6); 4966 + ocp_data |= LANWAKE_CLR_EN; 4967 + ocp_write_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6, ocp_data); 4968 + 4969 + ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_LWAKE_CTRL_REG); 4970 + ocp_data &= ~LANWAKE_PIN; 
4971 + ocp_write_byte(tp, MCU_TYPE_PLA, PLA_LWAKE_CTRL_REG, ocp_data); 4972 + 4973 + ocp_data = ocp_read_word(tp, MCU_TYPE_USB, USB_SSPHYLINK1); 4974 + ocp_data &= ~DELAY_PHY_PWR_CHG; 4975 + ocp_write_word(tp, MCU_TYPE_USB, USB_SSPHYLINK1, ocp_data); 4976 + 4977 r8153_aldps_en(tp, true); 4978 4979 switch (tp->version) { ··· 4979 4980 static void rtl8153_down(struct r8152 *tp) 4981 { 4982 + u32 ocp_data; 4983 + 4984 if (test_bit(RTL8152_UNPLUG, &tp->flags)) { 4985 rtl_drop_queued_tx(tp); 4986 return; 4987 } 4988 + 4989 + ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6); 4990 + ocp_data &= ~LANWAKE_CLR_EN; 4991 + ocp_write_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6, ocp_data); 4992 4993 r8153_u1u2en(tp, false); 4994 r8153_u2p3en(tp, false); ··· 4994 4995 static void rtl8153b_up(struct r8152 *tp) 4996 { 4997 + u32 ocp_data; 4998 + 4999 if (test_bit(RTL8152_UNPLUG, &tp->flags)) 5000 return; 5001 ··· 5004 r8153_first_init(tp); 5005 ocp_write_dword(tp, MCU_TYPE_USB, USB_RX_BUF_TH, RX_THR_B); 5006 5007 + ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3); 5008 + ocp_data &= ~PLA_MCU_SPDWN_EN; 5009 + ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3, ocp_data); 5010 + 5011 r8153_aldps_en(tp, true); 5012 + 5013 + if (tp->udev->speed != USB_SPEED_HIGH) 5014 + r8153b_u1u2en(tp, true); 5015 } 5016 5017 static void rtl8153b_down(struct r8152 *tp) 5018 { 5019 + u32 ocp_data; 5020 + 5021 if (test_bit(RTL8152_UNPLUG, &tp->flags)) { 5022 rtl_drop_queued_tx(tp); 5023 return; 5024 } 5025 + 5026 + ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3); 5027 + ocp_data |= PLA_MCU_SPDWN_EN; 5028 + ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3, ocp_data); 5029 5030 r8153b_u1u2en(tp, false); 5031 r8153_u2p3en(tp, false); ··· 5387 else 5388 ocp_data |= DYNAMIC_BURST; 5389 ocp_write_byte(tp, MCU_TYPE_USB, USB_CSR_DUMMY1, ocp_data); 5390 + 5391 + r8153_queue_wake(tp, false); 5392 + 5393 + ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_EXTRA_STATUS); 5394 + if (rtl8152_get_speed(tp) & LINK_STATUS) 5395 + ocp_data |= CUR_LINK_OK; 5396 + else 5397 + ocp_data &= ~CUR_LINK_OK; 5398 + ocp_data |= POLL_LINK_CHG; 5399 + ocp_write_word(tp, MCU_TYPE_PLA, PLA_EXTRA_STATUS, ocp_data); 5400 } 5401 5402 ocp_data = ocp_read_byte(tp, MCU_TYPE_USB, USB_CSR_DUMMY2); ··· 5416 ocp_write_word(tp, MCU_TYPE_USB, USB_CONNECT_TIMER, 0x0001); 5417 5418 r8153_power_cut_en(tp, false); 5419 + rtl_runtime_suspend_enable(tp, false); 5420 r8153_u1u2en(tp, true); 5421 r8153_mac_clk_spd(tp, false); 5422 usb_enable_lpm(tp->udev); 5423 + 5424 + ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6); 5425 + ocp_data |= LANWAKE_CLR_EN; 5426 + ocp_write_byte(tp, MCU_TYPE_PLA, PLA_CONFIG6, ocp_data); 5427 + 5428 + ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_LWAKE_CTRL_REG); 5429 + ocp_data &= ~LANWAKE_PIN; 5430 + ocp_write_byte(tp, MCU_TYPE_PLA, PLA_LWAKE_CTRL_REG, ocp_data); 5431 5432 /* rx aggregation */ 5433 ocp_data = ocp_read_word(tp, MCU_TYPE_USB, USB_USB_CTRL); ··· 5484 r8153b_ups_en(tp, false); 5485 r8153_queue_wake(tp, false); 5486 rtl_runtime_suspend_enable(tp, false); 5487 + 5488 + ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_EXTRA_STATUS); 5489 + if (rtl8152_get_speed(tp) & LINK_STATUS) 5490 + ocp_data |= CUR_LINK_OK; 5491 + else 5492 + ocp_data &= ~CUR_LINK_OK; 5493 + ocp_data |= POLL_LINK_CHG; 5494 + ocp_write_word(tp, MCU_TYPE_PLA, PLA_EXTRA_STATUS, ocp_data); 5495 + 5496 + if (tp->udev->speed != USB_SPEED_HIGH) 5497 + r8153b_u1u2en(tp, true); 5498 usb_enable_lpm(tp->udev); 5499 5500 /* MAC clock speed down */ 
5501 ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL2); 5502 ocp_data |= MAC_CLK_SPDWN_EN; 5503 ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL2, ocp_data); 5504 + 5505 + ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3); 5506 + ocp_data &= ~PLA_MCU_SPDWN_EN; 5507 + ocp_write_word(tp, MCU_TYPE_PLA, PLA_MAC_PWR_CTRL3, ocp_data); 5508 + 5509 + if (tp->version == RTL_VER_09) { 5510 + /* Disable Test IO for 32QFN */ 5511 + if (ocp_read_byte(tp, MCU_TYPE_PLA, 0xdc00) & BIT(5)) { 5512 + ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_PHY_PWR); 5513 + ocp_data |= TEST_IO_OFF; 5514 + ocp_write_word(tp, MCU_TYPE_PLA, PLA_PHY_PWR, ocp_data); 5515 + } 5516 + } 5517 5518 set_bit(GREEN_ETHERNET, &tp->flags); 5519 ··· 6707 6708 intf->needs_remote_wakeup = 1; 6709 6710 + if (!rtl_can_wakeup(tp)) 6711 + __rtl_set_wol(tp, 0); 6712 + else 6713 + tp->saved_wolopts = __rtl_get_wol(tp); 6714 + 6715 tp->rtl_ops.init(tp); 6716 #if IS_BUILTIN(CONFIG_USB_RTL8152) 6717 /* Retry in case request_firmware() is not ready yet. */ ··· 6724 goto out1; 6725 } 6726 6727 if (tp->saved_wolopts) 6728 device_set_wakeup_enable(&udev->dev, true); 6729 else
+9 -11
drivers/net/wireless/cisco/airo.c
··· 7790 case AIROGVLIST: ridcode = RID_APLIST; break; 7791 case AIROGDRVNAM: ridcode = RID_DRVNAME; break; 7792 case AIROGEHTENC: ridcode = RID_ETHERENCAP; break; 7793 - case AIROGWEPKTMP: ridcode = RID_WEP_TEMP; 7794 - /* Only super-user can read WEP keys */ 7795 - if (!capable(CAP_NET_ADMIN)) 7796 - return -EPERM; 7797 - break; 7798 - case AIROGWEPKNV: ridcode = RID_WEP_PERM; 7799 - /* Only super-user can read WEP keys */ 7800 - if (!capable(CAP_NET_ADMIN)) 7801 - return -EPERM; 7802 - break; 7803 case AIROGSTAT: ridcode = RID_STATUS; break; 7804 case AIROGSTATSD32: ridcode = RID_STATSDELTA; break; 7805 case AIROGSTATSC32: ridcode = RID_STATS; break; ··· 7805 return -EINVAL; 7806 } 7807 7808 - if ((iobuf = kmalloc(RIDSIZE, GFP_KERNEL)) == NULL) 7809 return -ENOMEM; 7810 7811 PC4500_readrid(ai,ridcode,iobuf,RIDSIZE, 1);
··· 7790 case AIROGVLIST: ridcode = RID_APLIST; break; 7791 case AIROGDRVNAM: ridcode = RID_DRVNAME; break; 7792 case AIROGEHTENC: ridcode = RID_ETHERENCAP; break; 7793 + case AIROGWEPKTMP: ridcode = RID_WEP_TEMP; break; 7794 + case AIROGWEPKNV: ridcode = RID_WEP_PERM; break; 7795 case AIROGSTAT: ridcode = RID_STATUS; break; 7796 case AIROGSTATSD32: ridcode = RID_STATSDELTA; break; 7797 case AIROGSTATSC32: ridcode = RID_STATS; break; ··· 7813 return -EINVAL; 7814 } 7815 7816 + if (ridcode == RID_WEP_TEMP || ridcode == RID_WEP_PERM) { 7817 + /* Only super-user can read WEP keys */ 7818 + if (!capable(CAP_NET_ADMIN)) 7819 + return -EPERM; 7820 + } 7821 + 7822 + if ((iobuf = kzalloc(RIDSIZE, GFP_KERNEL)) == NULL) 7823 return -ENOMEM; 7824 7825 PC4500_readrid(ai,ridcode,iobuf,RIDSIZE, 1);
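The airo change above moves the CAP_NET_ADMIN check out of the switch so it covers every path that selects a WEP RID, and switches the read buffer to kzalloc() so bytes the card does not fill cannot leak stale heap contents to userspace. A hedged sketch of the check-then-zeroed-copy pattern; the buffer size and fill() callback are illustrative:

#include <linux/capability.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

#define EXAMPLE_BUF_SIZE	2048	/* illustrative, not the driver's RIDSIZE */

static int read_rid_to_user(void __user *dst, bool sensitive,
			    int (*fill)(void *buf, size_t len))
{
	void *buf;
	int ret;

	if (sensitive && !capable(CAP_NET_ADMIN))
		return -EPERM;

	/* Zeroed allocation: anything fill() leaves untouched stays zero. */
	buf = kzalloc(EXAMPLE_BUF_SIZE, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	ret = fill(buf, EXAMPLE_BUF_SIZE);
	if (!ret && copy_to_user(dst, buf, EXAMPLE_BUF_SIZE))
		ret = -EFAULT;

	kfree(buf);
	return ret;
}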
+1 -2
drivers/net/wireless/intel/iwlwifi/dvm/tx.c
··· 267 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); 268 struct iwl_station_priv *sta_priv = NULL; 269 struct iwl_rxon_context *ctx = &priv->contexts[IWL_RXON_CTX_BSS]; 270 - struct iwl_device_cmd *dev_cmd; 271 struct iwl_tx_cmd *tx_cmd; 272 __le16 fc; 273 u8 hdr_len; ··· 348 if (unlikely(!dev_cmd)) 349 goto drop_unlock_priv; 350 351 - memset(dev_cmd, 0, sizeof(*dev_cmd)); 352 dev_cmd->hdr.cmd = REPLY_TX; 353 tx_cmd = (struct iwl_tx_cmd *) dev_cmd->payload; 354
··· 267 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); 268 struct iwl_station_priv *sta_priv = NULL; 269 struct iwl_rxon_context *ctx = &priv->contexts[IWL_RXON_CTX_BSS]; 270 + struct iwl_device_tx_cmd *dev_cmd; 271 struct iwl_tx_cmd *tx_cmd; 272 __le16 fc; 273 u8 hdr_len; ··· 348 if (unlikely(!dev_cmd)) 349 goto drop_unlock_priv; 350 351 dev_cmd->hdr.cmd = REPLY_TX; 352 tx_cmd = (struct iwl_tx_cmd *) dev_cmd->payload; 353
+5 -5
drivers/net/wireless/intel/iwlwifi/fw/acpi.c
··· 357 { 358 union acpi_object *wifi_pkg, *data; 359 bool enabled; 360 - int i, n_profiles, tbl_rev; 361 - int ret = 0; 362 363 data = iwl_acpi_get_object(fwrt->dev, ACPI_EWRD_METHOD); 364 if (IS_ERR(data)) ··· 390 goto out_free; 391 } 392 393 - for (i = 0; i < n_profiles; i++) { 394 - /* the tables start at element 3 */ 395 - int pos = 3; 396 397 /* The EWRD profiles officially go from 2 to 4, but we 398 * save them in sar_profiles[1-3] (because we don't 399 * have profile 0). So in the array we start from 1.
··· 357 { 358 union acpi_object *wifi_pkg, *data; 359 bool enabled; 360 + int i, n_profiles, tbl_rev, pos; 361 + int ret = 0; 362 363 data = iwl_acpi_get_object(fwrt->dev, ACPI_EWRD_METHOD); 364 if (IS_ERR(data)) ··· 390 goto out_free; 391 } 392 393 + /* the tables start at element 3 */ 394 + pos = 3; 395 396 + for (i = 0; i < n_profiles; i++) { 397 /* The EWRD profiles officially go from 2 to 4, but we 398 * save them in sar_profiles[1-3] (because we don't 399 * have profile 0). So in the array we start from 1.
+1 -6
drivers/net/wireless/intel/iwlwifi/fw/dbg.c
··· 2669 { 2670 int ret = 0; 2671 2672 - /* if the FW crashed or not debug monitor cfg was given, there is 2673 - * no point in changing the recording state 2674 - */ 2675 - if (test_bit(STATUS_FW_ERROR, &fwrt->trans->status) || 2676 - (!fwrt->trans->dbg.dest_tlv && 2677 - fwrt->trans->dbg.ini_dest == IWL_FW_INI_LOCATION_INVALID)) 2678 return 0; 2679 2680 if (fw_has_capa(&fwrt->fw->ucode_capa,
··· 2669 { 2670 int ret = 0; 2671 2672 + if (test_bit(STATUS_FW_ERROR, &fwrt->trans->status)) 2673 return 0; 2674 2675 if (fw_has_capa(&fwrt->fw->ucode_capa,
+1 -1
drivers/net/wireless/intel/iwlwifi/iwl-csr.h
··· 379 380 381 /* CSR GIO */ 382 - #define CSR_GIO_REG_VAL_L0S_ENABLED (0x00000002) 383 384 /* 385 * UCODE-DRIVER GP (general purpose) mailbox register 1
··· 379 380 381 /* CSR GIO */ 382 + #define CSR_GIO_REG_VAL_L0S_DISABLED (0x00000002) 383 384 /* 385 * UCODE-DRIVER GP (general purpose) mailbox register 1
+8 -1
drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
··· 480 if (!frag || frag->size || !pages) 481 return -EIO; 482 483 - while (pages) { 484 block = dma_alloc_coherent(fwrt->dev, pages * PAGE_SIZE, 485 &physical, 486 GFP_KERNEL | __GFP_NOWARN);
··· 480 if (!frag || frag->size || !pages) 481 return -EIO; 482 483 + /* 484 + * We try to allocate as many pages as we can, starting with 485 + * the requested amount and going down until we can allocate 486 + * something. Because of DIV_ROUND_UP(), pages will never go 487 + * down to 0 and stop the loop, so stop when pages reaches 1, 488 + * which is too small anyway. 489 + */ 490 + while (pages > 1) { 491 block = dma_alloc_coherent(fwrt->dev, pages * PAGE_SIZE, 492 &physical, 493 GFP_KERNEL | __GFP_NOWARN);
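The iwl-dbg-tlv fix above changes the loop condition so the fallback allocation cannot spin forever: the page count is halved with DIV_ROUND_UP(), which never reaches zero, so the loop now stops at one page instead. A hedged sketch of the shrink-until-it-fits DMA allocation; names are illustrative and the fragment bookkeeping of the driver is omitted:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/dma-mapping.h>

/* Try the requested size first, then keep halving until something fits.
 * DIV_ROUND_UP(pages, 2) bottoms out at 1, so stop once we get there.
 */
static void *alloc_shrinking(struct device *dev, unsigned long pages,
			     dma_addr_t *phys, unsigned long *got_pages)
{
	void *block;

	while (pages > 1) {
		block = dma_alloc_coherent(dev, pages * PAGE_SIZE, phys,
					   GFP_KERNEL | __GFP_NOWARN);
		if (block) {
			*got_pages = pages;
			return block;
		}
		pages = DIV_ROUND_UP(pages, 2);
	}

	return NULL;
}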
-3
drivers/net/wireless/intel/iwlwifi/iwl-drv.c
··· 1817 module_param_named(nvm_file, iwlwifi_mod_params.nvm_file, charp, 0444); 1818 MODULE_PARM_DESC(nvm_file, "NVM file name"); 1819 1820 - module_param_named(lar_disable, iwlwifi_mod_params.lar_disable, bool, 0444); 1821 - MODULE_PARM_DESC(lar_disable, "disable LAR functionality (default: N)"); 1822 - 1823 module_param_named(uapsd_disable, iwlwifi_mod_params.uapsd_disable, uint, 0644); 1824 MODULE_PARM_DESC(uapsd_disable, 1825 "disable U-APSD functionality bitmap 1: BSS 2: P2P Client (default: 3)");
··· 1817 module_param_named(nvm_file, iwlwifi_mod_params.nvm_file, charp, 0444); 1818 MODULE_PARM_DESC(nvm_file, "NVM file name"); 1819 1820 module_param_named(uapsd_disable, iwlwifi_mod_params.uapsd_disable, uint, 0644); 1821 MODULE_PARM_DESC(uapsd_disable, 1822 "disable U-APSD functionality bitmap 1: BSS 2: P2P Client (default: 3)");
-2
drivers/net/wireless/intel/iwlwifi/iwl-modparams.h
··· 115 * @nvm_file: specifies a external NVM file 116 * @uapsd_disable: disable U-APSD, see &enum iwl_uapsd_disable, default = 117 * IWL_DISABLE_UAPSD_BSS | IWL_DISABLE_UAPSD_P2P_CLIENT 118 - * @lar_disable: disable LAR (regulatory), default = 0 119 * @fw_monitor: allow to use firmware monitor 120 * @disable_11ac: disable VHT capabilities, default = false. 121 * @remove_when_gone: remove an inaccessible device from the PCIe bus. ··· 135 int antenna_coupling; 136 char *nvm_file; 137 u32 uapsd_disable; 138 - bool lar_disable; 139 bool fw_monitor; 140 bool disable_11ac; 141 /**
··· 115 * @nvm_file: specifies a external NVM file 116 * @uapsd_disable: disable U-APSD, see &enum iwl_uapsd_disable, default = 117 * IWL_DISABLE_UAPSD_BSS | IWL_DISABLE_UAPSD_P2P_CLIENT 118 * @fw_monitor: allow to use firmware monitor 119 * @disable_11ac: disable VHT capabilities, default = false. 120 * @remove_when_gone: remove an inaccessible device from the PCIe bus. ··· 136 int antenna_coupling; 137 char *nvm_file; 138 u32 uapsd_disable; 139 bool fw_monitor; 140 bool disable_11ac; 141 /**
+53 -8
drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
··· 224 NVM_CHANNEL_DC_HIGH = BIT(12), 225 }; 226 227 static inline void iwl_nvm_print_channel_flags(struct device *dev, u32 level, 228 int chan, u32 flags) 229 { ··· 967 968 struct iwl_nvm_data * 969 iwl_parse_nvm_data(struct iwl_trans *trans, const struct iwl_cfg *cfg, 970 const __be16 *nvm_hw, const __le16 *nvm_sw, 971 const __le16 *nvm_calib, const __le16 *regulatory, 972 const __le16 *mac_override, const __le16 *phy_sku, 973 - u8 tx_chains, u8 rx_chains, bool lar_fw_supported) 974 { 975 struct iwl_nvm_data *data; 976 bool lar_enabled; ··· 1051 return NULL; 1052 } 1053 1054 - if (lar_fw_supported && lar_enabled) 1055 sbands_flags |= IWL_NVM_SBANDS_FLAGS_LAR; 1056 1057 if (iwl_nvm_no_wide_in_5ghz(trans, cfg, nvm_hw)) ··· 1068 1069 static u32 iwl_nvm_get_regdom_bw_flags(const u16 *nvm_chan, 1070 int ch_idx, u16 nvm_flags, 1071 const struct iwl_cfg *cfg) 1072 { 1073 u32 flags = NL80211_RRF_NO_HT40; ··· 1107 (flags & NL80211_RRF_NO_IR)) 1108 flags |= NL80211_RRF_GO_CONCURRENT; 1109 1110 return flags; 1111 } 1112 1113 struct ieee80211_regdomain * 1114 iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg, 1115 int num_of_ch, __le32 *channels, u16 fw_mcc, 1116 - u16 geo_info) 1117 { 1118 int ch_idx; 1119 u16 ch_flags; ··· 1185 } 1186 1187 reg_rule_flags = iwl_nvm_get_regdom_bw_flags(nvm_chan, ch_idx, 1188 - ch_flags, cfg); 1189 1190 /* we can't continue the same rule */ 1191 if (ch_idx == 0 || prev_reg_rule_flags != reg_rule_flags || ··· 1451 .id = WIDE_ID(REGULATORY_AND_NVM_GROUP, NVM_GET_INFO) 1452 }; 1453 int ret; 1454 - bool lar_fw_supported = !iwlwifi_mod_params.lar_disable && 1455 - fw_has_capa(&fw->ucode_capa, 1456 - IWL_UCODE_TLV_CAPA_LAR_SUPPORT); 1457 bool empty_otp; 1458 u32 mac_flags; 1459 u32 sbands_flags = 0; ··· 1528 nvm->valid_tx_ant = (u8)le32_to_cpu(rsp->phy_sku.tx_chains); 1529 nvm->valid_rx_ant = (u8)le32_to_cpu(rsp->phy_sku.rx_chains); 1530 1531 - if (le32_to_cpu(rsp->regulatory.lar_enabled) && lar_fw_supported) { 1532 nvm->lar_enabled = true; 1533 sbands_flags |= IWL_NVM_SBANDS_FLAGS_LAR; 1534 }
··· 224 NVM_CHANNEL_DC_HIGH = BIT(12), 225 }; 226 227 + /** 228 + * enum iwl_reg_capa_flags - global flags applied for the whole regulatory 229 + * domain. 230 + * @REG_CAPA_BF_CCD_LOW_BAND: Beam-forming or Cyclic Delay Diversity in the 231 + * 2.4Ghz band is allowed. 232 + * @REG_CAPA_BF_CCD_HIGH_BAND: Beam-forming or Cyclic Delay Diversity in the 233 + * 5Ghz band is allowed. 234 + * @REG_CAPA_160MHZ_ALLOWED: 11ac channel with a width of 160Mhz is allowed 235 + * for this regulatory domain (valid only in 5Ghz). 236 + * @REG_CAPA_80MHZ_ALLOWED: 11ac channel with a width of 80Mhz is allowed 237 + * for this regulatory domain (valid only in 5Ghz). 238 + * @REG_CAPA_MCS_8_ALLOWED: 11ac with MCS 8 is allowed. 239 + * @REG_CAPA_MCS_9_ALLOWED: 11ac with MCS 9 is allowed. 240 + * @REG_CAPA_40MHZ_FORBIDDEN: 11n channel with a width of 40Mhz is forbidden 241 + * for this regulatory domain (valid only in 5Ghz). 242 + * @REG_CAPA_DC_HIGH_ENABLED: DC HIGH allowed. 243 + */ 244 + enum iwl_reg_capa_flags { 245 + REG_CAPA_BF_CCD_LOW_BAND = BIT(0), 246 + REG_CAPA_BF_CCD_HIGH_BAND = BIT(1), 247 + REG_CAPA_160MHZ_ALLOWED = BIT(2), 248 + REG_CAPA_80MHZ_ALLOWED = BIT(3), 249 + REG_CAPA_MCS_8_ALLOWED = BIT(4), 250 + REG_CAPA_MCS_9_ALLOWED = BIT(5), 251 + REG_CAPA_40MHZ_FORBIDDEN = BIT(7), 252 + REG_CAPA_DC_HIGH_ENABLED = BIT(9), 253 + }; 254 + 255 static inline void iwl_nvm_print_channel_flags(struct device *dev, u32 level, 256 int chan, u32 flags) 257 { ··· 939 940 struct iwl_nvm_data * 941 iwl_parse_nvm_data(struct iwl_trans *trans, const struct iwl_cfg *cfg, 942 + const struct iwl_fw *fw, 943 const __be16 *nvm_hw, const __le16 *nvm_sw, 944 const __le16 *nvm_calib, const __le16 *regulatory, 945 const __le16 *mac_override, const __le16 *phy_sku, 946 + u8 tx_chains, u8 rx_chains) 947 { 948 struct iwl_nvm_data *data; 949 bool lar_enabled; ··· 1022 return NULL; 1023 } 1024 1025 + if (lar_enabled && 1026 + fw_has_capa(&fw->ucode_capa, IWL_UCODE_TLV_CAPA_LAR_SUPPORT)) 1027 sbands_flags |= IWL_NVM_SBANDS_FLAGS_LAR; 1028 1029 if (iwl_nvm_no_wide_in_5ghz(trans, cfg, nvm_hw)) ··· 1038 1039 static u32 iwl_nvm_get_regdom_bw_flags(const u16 *nvm_chan, 1040 int ch_idx, u16 nvm_flags, 1041 + u16 cap_flags, 1042 const struct iwl_cfg *cfg) 1043 { 1044 u32 flags = NL80211_RRF_NO_HT40; ··· 1076 (flags & NL80211_RRF_NO_IR)) 1077 flags |= NL80211_RRF_GO_CONCURRENT; 1078 1079 + /* 1080 + * cap_flags is per regulatory domain so apply it for every channel 1081 + */ 1082 + if (ch_idx >= NUM_2GHZ_CHANNELS) { 1083 + if (cap_flags & REG_CAPA_40MHZ_FORBIDDEN) 1084 + flags |= NL80211_RRF_NO_HT40; 1085 + 1086 + if (!(cap_flags & REG_CAPA_80MHZ_ALLOWED)) 1087 + flags |= NL80211_RRF_NO_80MHZ; 1088 + 1089 + if (!(cap_flags & REG_CAPA_160MHZ_ALLOWED)) 1090 + flags |= NL80211_RRF_NO_160MHZ; 1091 + } 1092 + 1093 return flags; 1094 } 1095 1096 struct ieee80211_regdomain * 1097 iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg, 1098 int num_of_ch, __le32 *channels, u16 fw_mcc, 1099 + u16 geo_info, u16 cap) 1100 { 1101 int ch_idx; 1102 u16 ch_flags; ··· 1140 } 1141 1142 reg_rule_flags = iwl_nvm_get_regdom_bw_flags(nvm_chan, ch_idx, 1143 + ch_flags, cap, 1144 + cfg); 1145 1146 /* we can't continue the same rule */ 1147 if (ch_idx == 0 || prev_reg_rule_flags != reg_rule_flags || ··· 1405 .id = WIDE_ID(REGULATORY_AND_NVM_GROUP, NVM_GET_INFO) 1406 }; 1407 int ret; 1408 bool empty_otp; 1409 u32 mac_flags; 1410 u32 sbands_flags = 0; ··· 1485 nvm->valid_tx_ant = (u8)le32_to_cpu(rsp->phy_sku.tx_chains); 1486 nvm->valid_rx_ant = 
(u8)le32_to_cpu(rsp->phy_sku.rx_chains); 1487 1488 + if (le32_to_cpu(rsp->regulatory.lar_enabled) && 1489 + fw_has_capa(&fw->ucode_capa, 1490 + IWL_UCODE_TLV_CAPA_LAR_SUPPORT)) { 1491 nvm->lar_enabled = true; 1492 sbands_flags |= IWL_NVM_SBANDS_FLAGS_LAR; 1493 }
+5 -4
drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.h
··· 7 * 8 * Copyright(c) 2008 - 2015 Intel Corporation. All rights reserved. 9 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH 10 - * Copyright(c) 2018 Intel Corporation 11 * 12 * This program is free software; you can redistribute it and/or modify 13 * it under the terms of version 2 of the GNU General Public License as ··· 29 * 30 * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved. 31 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH 32 - * Copyright(c) 2018 Intel Corporation 33 * All rights reserved. 34 * 35 * Redistribution and use in source and binary forms, with or without ··· 85 */ 86 struct iwl_nvm_data * 87 iwl_parse_nvm_data(struct iwl_trans *trans, const struct iwl_cfg *cfg, 88 const __be16 *nvm_hw, const __le16 *nvm_sw, 89 const __le16 *nvm_calib, const __le16 *regulatory, 90 const __le16 *mac_override, const __le16 *phy_sku, 91 - u8 tx_chains, u8 rx_chains, bool lar_fw_supported); 92 93 /** 94 * iwl_parse_mcc_info - parse MCC (mobile country code) info coming from FW ··· 104 struct ieee80211_regdomain * 105 iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg, 106 int num_of_ch, __le32 *channels, u16 fw_mcc, 107 - u16 geo_info); 108 109 /** 110 * struct iwl_nvm_section - describes an NVM section in memory.
··· 7 * 8 * Copyright(c) 2008 - 2015 Intel Corporation. All rights reserved. 9 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH 10 + * Copyright(c) 2018 - 2019 Intel Corporation 11 * 12 * This program is free software; you can redistribute it and/or modify 13 * it under the terms of version 2 of the GNU General Public License as ··· 29 * 30 * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved. 31 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH 32 + * Copyright(c) 2018 - 2019 Intel Corporation 33 * All rights reserved. 34 * 35 * Redistribution and use in source and binary forms, with or without ··· 85 */ 86 struct iwl_nvm_data * 87 iwl_parse_nvm_data(struct iwl_trans *trans, const struct iwl_cfg *cfg, 88 + const struct iwl_fw *fw, 89 const __be16 *nvm_hw, const __le16 *nvm_sw, 90 const __le16 *nvm_calib, const __le16 *regulatory, 91 const __le16 *mac_override, const __le16 *phy_sku, 92 + u8 tx_chains, u8 rx_chains); 93 94 /** 95 * iwl_parse_mcc_info - parse MCC (mobile country code) info coming from FW ··· 103 struct ieee80211_regdomain * 104 iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg, 105 int num_of_ch, __le32 *channels, u16 fw_mcc, 106 + u16 geo_info, u16 cap); 107 108 /** 109 * struct iwl_nvm_section - describes an NVM section in memory.
+5 -5
drivers/net/wireless/intel/iwlwifi/iwl-trans.c
··· 66 67 struct iwl_trans *iwl_trans_alloc(unsigned int priv_size, 68 struct device *dev, 69 - const struct iwl_trans_ops *ops) 70 { 71 struct iwl_trans *trans; 72 #ifdef CONFIG_LOCKDEP ··· 92 "iwl_cmd_pool:%s", dev_name(trans->dev)); 93 trans->dev_cmd_pool = 94 kmem_cache_create(trans->dev_cmd_pool_name, 95 - sizeof(struct iwl_device_cmd), 96 - sizeof(void *), 97 - SLAB_HWCACHE_ALIGN, 98 - NULL); 99 if (!trans->dev_cmd_pool) 100 return NULL; 101
··· 66 67 struct iwl_trans *iwl_trans_alloc(unsigned int priv_size, 68 struct device *dev, 69 + const struct iwl_trans_ops *ops, 70 + unsigned int cmd_pool_size, 71 + unsigned int cmd_pool_align) 72 { 73 struct iwl_trans *trans; 74 #ifdef CONFIG_LOCKDEP ··· 90 "iwl_cmd_pool:%s", dev_name(trans->dev)); 91 trans->dev_cmd_pool = 92 kmem_cache_create(trans->dev_cmd_pool_name, 93 + cmd_pool_size, cmd_pool_align, 94 + SLAB_HWCACHE_ALIGN, NULL); 95 if (!trans->dev_cmd_pool) 96 return NULL; 97
+20 -6
drivers/net/wireless/intel/iwlwifi/iwl-trans.h
··· 193 }; 194 } __packed; 195 196 #define TFD_MAX_PAYLOAD_SIZE (sizeof(struct iwl_device_cmd)) 197 198 /* ··· 556 int (*send_cmd)(struct iwl_trans *trans, struct iwl_host_cmd *cmd); 557 558 int (*tx)(struct iwl_trans *trans, struct sk_buff *skb, 559 - struct iwl_device_cmd *dev_cmd, int queue); 560 void (*reclaim)(struct iwl_trans *trans, int queue, int ssn, 561 struct sk_buff_head *skbs); 562 ··· 960 return trans->ops->dump_data(trans, dump_mask); 961 } 962 963 - static inline struct iwl_device_cmd * 964 iwl_trans_alloc_tx_cmd(struct iwl_trans *trans) 965 { 966 - return kmem_cache_alloc(trans->dev_cmd_pool, GFP_ATOMIC); 967 } 968 969 int iwl_trans_send_cmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd); 970 971 static inline void iwl_trans_free_tx_cmd(struct iwl_trans *trans, 972 - struct iwl_device_cmd *dev_cmd) 973 { 974 kmem_cache_free(trans->dev_cmd_pool, dev_cmd); 975 } 976 977 static inline int iwl_trans_tx(struct iwl_trans *trans, struct sk_buff *skb, 978 - struct iwl_device_cmd *dev_cmd, int queue) 979 { 980 if (unlikely(test_bit(STATUS_FW_ERROR, &trans->status))) 981 return -EIO; ··· 1283 *****************************************************/ 1284 struct iwl_trans *iwl_trans_alloc(unsigned int priv_size, 1285 struct device *dev, 1286 - const struct iwl_trans_ops *ops); 1287 void iwl_trans_free(struct iwl_trans *trans); 1288 1289 /*****************************************************
··· 193 }; 194 } __packed; 195 196 + /** 197 + * struct iwl_device_tx_cmd - buffer for TX command 198 + * @hdr: the header 199 + * @payload: the payload placeholder 200 + * 201 + * The actual structure is sized dynamically according to need. 202 + */ 203 + struct iwl_device_tx_cmd { 204 + struct iwl_cmd_header hdr; 205 + u8 payload[]; 206 + } __packed; 207 + 208 #define TFD_MAX_PAYLOAD_SIZE (sizeof(struct iwl_device_cmd)) 209 210 /* ··· 544 int (*send_cmd)(struct iwl_trans *trans, struct iwl_host_cmd *cmd); 545 546 int (*tx)(struct iwl_trans *trans, struct sk_buff *skb, 547 + struct iwl_device_tx_cmd *dev_cmd, int queue); 548 void (*reclaim)(struct iwl_trans *trans, int queue, int ssn, 549 struct sk_buff_head *skbs); 550 ··· 948 return trans->ops->dump_data(trans, dump_mask); 949 } 950 951 + static inline struct iwl_device_tx_cmd * 952 iwl_trans_alloc_tx_cmd(struct iwl_trans *trans) 953 { 954 + return kmem_cache_zalloc(trans->dev_cmd_pool, GFP_ATOMIC); 955 } 956 957 int iwl_trans_send_cmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd); 958 959 static inline void iwl_trans_free_tx_cmd(struct iwl_trans *trans, 960 + struct iwl_device_tx_cmd *dev_cmd) 961 { 962 kmem_cache_free(trans->dev_cmd_pool, dev_cmd); 963 } 964 965 static inline int iwl_trans_tx(struct iwl_trans *trans, struct sk_buff *skb, 966 + struct iwl_device_tx_cmd *dev_cmd, int queue) 967 { 968 if (unlikely(test_bit(STATUS_FW_ERROR, &trans->status))) 969 return -EIO; ··· 1271 *****************************************************/ 1272 struct iwl_trans *iwl_trans_alloc(unsigned int priv_size, 1273 struct device *dev, 1274 + const struct iwl_trans_ops *ops, 1275 + unsigned int cmd_pool_size, 1276 + unsigned int cmd_pool_align); 1277 void iwl_trans_free(struct iwl_trans *trans); 1278 1279 /*****************************************************
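iwl_trans_alloc() above now takes the command pool object size and alignment from the caller, and iwl_trans_alloc_tx_cmd() switches to kmem_cache_zalloc() so every TX command comes back zeroed and callers no longer need their own memset. A hedged sketch of a dedicated, zero-initializing slab cache; the cache name and object type are illustrative, not iwlwifi's layout:

#include <linux/types.h>
#include <linux/slab.h>

struct example_cmd {
	u16 id;
	u8 payload[128];
};

static struct kmem_cache *cmd_cache;

/* Create a dedicated cache so frequent fixed-size allocations are cheap. */
static int example_pool_init(void)
{
	cmd_cache = kmem_cache_create("example_cmd_pool",
				      sizeof(struct example_cmd),
				      sizeof(void *), SLAB_HWCACHE_ALIGN,
				      NULL);
	return cmd_cache ? 0 : -ENOMEM;
}

static struct example_cmd *example_cmd_alloc(void)
{
	return kmem_cache_zalloc(cmd_cache, GFP_ATOMIC);	/* zeroed object */
}

static void example_cmd_free(struct example_cmd *cmd)
{
	kmem_cache_free(cmd_cache, cmd);
}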
+1
drivers/net/wireless/intel/iwlwifi/mvm/constants.h
··· 154 #define IWL_MVM_D3_DEBUG false 155 #define IWL_MVM_USE_TWT false 156 #define IWL_MVM_AMPDU_CONSEC_DROPS_DELBA 10 157 158 #endif /* __MVM_CONSTANTS_H */
··· 154 #define IWL_MVM_D3_DEBUG false 155 #define IWL_MVM_USE_TWT false 156 #define IWL_MVM_AMPDU_CONSEC_DROPS_DELBA 10 157 + #define IWL_MVM_USE_NSSN_SYNC 0 158 159 #endif /* __MVM_CONSTANTS_H */
+6 -2
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
··· 841 return 0; 842 } 843 844 IWL_DEBUG_RADIO(mvm, "Sending PER_PLATFORM_ANT_GAIN_CMD\n"); 845 - IWL_DEBUG_RADIO(mvm, "PPAG is %s\n", 846 - mvm->fwrt.ppag_table.enabled ? "enabled" : "disabled"); 847 848 for (i = 0; i < ACPI_PPAG_NUM_CHAINS; i++) { 849 for (j = 0; j < ACPI_PPAG_NUM_SUB_BANDS; j++) {
··· 841 return 0; 842 } 843 844 + if (!mvm->fwrt.ppag_table.enabled) { 845 + IWL_DEBUG_RADIO(mvm, 846 + "PPAG not enabled, command not sent.\n"); 847 + return 0; 848 + } 849 + 850 IWL_DEBUG_RADIO(mvm, "Sending PER_PLATFORM_ANT_GAIN_CMD\n"); 851 852 for (i = 0; i < ACPI_PPAG_NUM_CHAINS; i++) { 853 for (j = 0; j < ACPI_PPAG_NUM_SUB_BANDS; j++) {
+144 -13
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 256 __le32_to_cpu(resp->n_channels), 257 resp->channels, 258 __le16_to_cpu(resp->mcc), 259 - __le16_to_cpu(resp->geo_info)); 260 /* Store the return source id */ 261 src_id = resp->source_id; 262 kfree(resp); ··· 755 return ret; 756 } 757 758 static void iwl_mvm_mac_tx(struct ieee80211_hw *hw, 759 struct ieee80211_tx_control *control, 760 struct sk_buff *skb) ··· 812 } 813 } 814 815 - if (sta) { 816 - if (iwl_mvm_tx_skb(mvm, skb, sta)) 817 - goto drop; 818 - return; 819 - } 820 - 821 - if (iwl_mvm_tx_skb_non_sta(mvm, skb)) 822 - goto drop; 823 return; 824 drop: 825 ieee80211_free_txskb(hw, skb); ··· 862 break; 863 } 864 865 - if (!txq->sta) 866 - iwl_mvm_tx_skb_non_sta(mvm, skb); 867 - else 868 - iwl_mvm_tx_skb(mvm, skb, txq->sta); 869 } 870 } while (atomic_dec_return(&mvmtxq->tx_request)); 871 rcu_read_unlock(); ··· 4776 return ret; 4777 } 4778 4779 static void iwl_mvm_mac_sta_statistics(struct ieee80211_hw *hw, 4780 struct ieee80211_vif *vif, 4781 struct ieee80211_sta *sta, ··· 4907 if (mvmsta->avg_energy) { 4908 sinfo->signal_avg = mvmsta->avg_energy; 4909 sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL_AVG); 4910 } 4911 4912 /* if beacon filtering isn't on mac80211 does it anyway */
··· 256 __le32_to_cpu(resp->n_channels), 257 resp->channels, 258 __le16_to_cpu(resp->mcc), 259 + __le16_to_cpu(resp->geo_info), 260 + __le16_to_cpu(resp->cap)); 261 /* Store the return source id */ 262 src_id = resp->source_id; 263 kfree(resp); ··· 754 return ret; 755 } 756 757 + static void iwl_mvm_tx_skb(struct iwl_mvm *mvm, struct sk_buff *skb, 758 + struct ieee80211_sta *sta) 759 + { 760 + if (likely(sta)) { 761 + if (likely(iwl_mvm_tx_skb_sta(mvm, skb, sta) == 0)) 762 + return; 763 + } else { 764 + if (likely(iwl_mvm_tx_skb_non_sta(mvm, skb) == 0)) 765 + return; 766 + } 767 + 768 + ieee80211_free_txskb(mvm->hw, skb); 769 + } 770 + 771 static void iwl_mvm_mac_tx(struct ieee80211_hw *hw, 772 struct ieee80211_tx_control *control, 773 struct sk_buff *skb) ··· 797 } 798 } 799 800 + iwl_mvm_tx_skb(mvm, skb, sta); 801 return; 802 drop: 803 ieee80211_free_txskb(hw, skb); ··· 854 break; 855 } 856 857 + iwl_mvm_tx_skb(mvm, skb, txq->sta); 858 } 859 } while (atomic_dec_return(&mvmtxq->tx_request)); 860 rcu_read_unlock(); ··· 4771 return ret; 4772 } 4773 4774 + static void iwl_mvm_set_sta_rate(u32 rate_n_flags, struct rate_info *rinfo) 4775 + { 4776 + switch (rate_n_flags & RATE_MCS_CHAN_WIDTH_MSK) { 4777 + case RATE_MCS_CHAN_WIDTH_20: 4778 + rinfo->bw = RATE_INFO_BW_20; 4779 + break; 4780 + case RATE_MCS_CHAN_WIDTH_40: 4781 + rinfo->bw = RATE_INFO_BW_40; 4782 + break; 4783 + case RATE_MCS_CHAN_WIDTH_80: 4784 + rinfo->bw = RATE_INFO_BW_80; 4785 + break; 4786 + case RATE_MCS_CHAN_WIDTH_160: 4787 + rinfo->bw = RATE_INFO_BW_160; 4788 + break; 4789 + } 4790 + 4791 + if (rate_n_flags & RATE_MCS_HT_MSK) { 4792 + rinfo->flags |= RATE_INFO_FLAGS_MCS; 4793 + rinfo->mcs = u32_get_bits(rate_n_flags, RATE_HT_MCS_INDEX_MSK); 4794 + rinfo->nss = u32_get_bits(rate_n_flags, 4795 + RATE_HT_MCS_NSS_MSK) + 1; 4796 + if (rate_n_flags & RATE_MCS_SGI_MSK) 4797 + rinfo->flags |= RATE_INFO_FLAGS_SHORT_GI; 4798 + } else if (rate_n_flags & RATE_MCS_VHT_MSK) { 4799 + rinfo->flags |= RATE_INFO_FLAGS_VHT_MCS; 4800 + rinfo->mcs = u32_get_bits(rate_n_flags, 4801 + RATE_VHT_MCS_RATE_CODE_MSK); 4802 + rinfo->nss = u32_get_bits(rate_n_flags, 4803 + RATE_VHT_MCS_NSS_MSK) + 1; 4804 + if (rate_n_flags & RATE_MCS_SGI_MSK) 4805 + rinfo->flags |= RATE_INFO_FLAGS_SHORT_GI; 4806 + } else if (rate_n_flags & RATE_MCS_HE_MSK) { 4807 + u32 gi_ltf = u32_get_bits(rate_n_flags, 4808 + RATE_MCS_HE_GI_LTF_MSK); 4809 + 4810 + rinfo->flags |= RATE_INFO_FLAGS_HE_MCS; 4811 + rinfo->mcs = u32_get_bits(rate_n_flags, 4812 + RATE_VHT_MCS_RATE_CODE_MSK); 4813 + rinfo->nss = u32_get_bits(rate_n_flags, 4814 + RATE_VHT_MCS_NSS_MSK) + 1; 4815 + 4816 + if (rate_n_flags & RATE_MCS_HE_106T_MSK) { 4817 + rinfo->bw = RATE_INFO_BW_HE_RU; 4818 + rinfo->he_ru_alloc = NL80211_RATE_INFO_HE_RU_ALLOC_106; 4819 + } 4820 + 4821 + switch (rate_n_flags & RATE_MCS_HE_TYPE_MSK) { 4822 + case RATE_MCS_HE_TYPE_SU: 4823 + case RATE_MCS_HE_TYPE_EXT_SU: 4824 + if (gi_ltf == 0 || gi_ltf == 1) 4825 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_0_8; 4826 + else if (gi_ltf == 2) 4827 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_1_6; 4828 + else if (rate_n_flags & RATE_MCS_SGI_MSK) 4829 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_0_8; 4830 + else 4831 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_3_2; 4832 + break; 4833 + case RATE_MCS_HE_TYPE_MU: 4834 + if (gi_ltf == 0 || gi_ltf == 1) 4835 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_0_8; 4836 + else if (gi_ltf == 2) 4837 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_1_6; 4838 + else 4839 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_3_2; 4840 + break; 4841 + case 
RATE_MCS_HE_TYPE_TRIG: 4842 + if (gi_ltf == 0 || gi_ltf == 1) 4843 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_1_6; 4844 + else 4845 + rinfo->he_gi = NL80211_RATE_INFO_HE_GI_3_2; 4846 + break; 4847 + } 4848 + 4849 + if (rate_n_flags & RATE_HE_DUAL_CARRIER_MODE_MSK) 4850 + rinfo->he_dcm = 1; 4851 + } else { 4852 + switch (u32_get_bits(rate_n_flags, RATE_LEGACY_RATE_MSK)) { 4853 + case IWL_RATE_1M_PLCP: 4854 + rinfo->legacy = 10; 4855 + break; 4856 + case IWL_RATE_2M_PLCP: 4857 + rinfo->legacy = 20; 4858 + break; 4859 + case IWL_RATE_5M_PLCP: 4860 + rinfo->legacy = 55; 4861 + break; 4862 + case IWL_RATE_11M_PLCP: 4863 + rinfo->legacy = 110; 4864 + break; 4865 + case IWL_RATE_6M_PLCP: 4866 + rinfo->legacy = 60; 4867 + break; 4868 + case IWL_RATE_9M_PLCP: 4869 + rinfo->legacy = 90; 4870 + break; 4871 + case IWL_RATE_12M_PLCP: 4872 + rinfo->legacy = 120; 4873 + break; 4874 + case IWL_RATE_18M_PLCP: 4875 + rinfo->legacy = 180; 4876 + break; 4877 + case IWL_RATE_24M_PLCP: 4878 + rinfo->legacy = 240; 4879 + break; 4880 + case IWL_RATE_36M_PLCP: 4881 + rinfo->legacy = 360; 4882 + break; 4883 + case IWL_RATE_48M_PLCP: 4884 + rinfo->legacy = 480; 4885 + break; 4886 + case IWL_RATE_54M_PLCP: 4887 + rinfo->legacy = 540; 4888 + break; 4889 + } 4890 + } 4891 + } 4892 + 4893 static void iwl_mvm_mac_sta_statistics(struct ieee80211_hw *hw, 4894 struct ieee80211_vif *vif, 4895 struct ieee80211_sta *sta, ··· 4783 if (mvmsta->avg_energy) { 4784 sinfo->signal_avg = mvmsta->avg_energy; 4785 sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL_AVG); 4786 + } 4787 + 4788 + if (iwl_mvm_has_tlc_offload(mvm)) { 4789 + struct iwl_lq_sta_rs_fw *lq_sta = &mvmsta->lq_sta.rs_fw; 4790 + 4791 + iwl_mvm_set_sta_rate(lq_sta->last_rate_n_flags, &sinfo->txrate); 4792 + sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BITRATE); 4793 } 4794 4795 /* if beacon filtering isn't on mac80211 does it anyway */
+2 -5
drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
··· 1298 bool tlv_lar = fw_has_capa(&mvm->fw->ucode_capa, 1299 IWL_UCODE_TLV_CAPA_LAR_SUPPORT); 1300 1301 - if (iwlwifi_mod_params.lar_disable) 1302 - return false; 1303 - 1304 /* 1305 * Enable LAR only if it is supported by the FW (TLV) && 1306 * enabled in the NVM ··· 1505 int __must_check iwl_mvm_send_cmd_pdu_status(struct iwl_mvm *mvm, u32 id, 1506 u16 len, const void *data, 1507 u32 *status); 1508 - int iwl_mvm_tx_skb(struct iwl_mvm *mvm, struct sk_buff *skb, 1509 - struct ieee80211_sta *sta); 1510 int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb); 1511 void iwl_mvm_set_tx_cmd(struct iwl_mvm *mvm, struct sk_buff *skb, 1512 struct iwl_tx_cmd *tx_cmd,
··· 1298 bool tlv_lar = fw_has_capa(&mvm->fw->ucode_capa, 1299 IWL_UCODE_TLV_CAPA_LAR_SUPPORT); 1300 1301 /* 1302 * Enable LAR only if it is supported by the FW (TLV) && 1303 * enabled in the NVM ··· 1508 int __must_check iwl_mvm_send_cmd_pdu_status(struct iwl_mvm *mvm, u32 id, 1509 u16 len, const void *data, 1510 u32 *status); 1511 + int iwl_mvm_tx_skb_sta(struct iwl_mvm *mvm, struct sk_buff *skb, 1512 + struct ieee80211_sta *sta); 1513 int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb); 1514 void iwl_mvm_set_tx_cmd(struct iwl_mvm *mvm, struct sk_buff *skb, 1515 struct iwl_tx_cmd *tx_cmd,
+3 -9
drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
··· 277 struct iwl_nvm_section *sections = mvm->nvm_sections; 278 const __be16 *hw; 279 const __le16 *sw, *calib, *regulatory, *mac_override, *phy_sku; 280 - bool lar_enabled; 281 int regulatory_type; 282 283 /* Checking for required sections */ 284 - if (mvm->trans->cfg->nvm_type != IWL_NVM_EXT) { 285 if (!mvm->nvm_sections[NVM_SECTION_TYPE_SW].data || 286 !mvm->nvm_sections[mvm->cfg->nvm_hw_section_num].data) { 287 IWL_ERR(mvm, "Can't parse empty OTP/NVM sections\n"); ··· 326 (const __le16 *)sections[NVM_SECTION_TYPE_REGULATORY_SDP].data : 327 (const __le16 *)sections[NVM_SECTION_TYPE_REGULATORY].data; 328 329 - lar_enabled = !iwlwifi_mod_params.lar_disable && 330 - fw_has_capa(&mvm->fw->ucode_capa, 331 - IWL_UCODE_TLV_CAPA_LAR_SUPPORT); 332 - 333 - return iwl_parse_nvm_data(mvm->trans, mvm->cfg, hw, sw, calib, 334 regulatory, mac_override, phy_sku, 335 - mvm->fw->valid_tx_ant, mvm->fw->valid_rx_ant, 336 - lar_enabled); 337 } 338 339 /* Loads the NVM data stored in mvm->nvm_sections into the NIC */
··· 277 struct iwl_nvm_section *sections = mvm->nvm_sections; 278 const __be16 *hw; 279 const __le16 *sw, *calib, *regulatory, *mac_override, *phy_sku; 280 int regulatory_type; 281 282 /* Checking for required sections */ 283 + if (mvm->trans->cfg->nvm_type == IWL_NVM) { 284 if (!mvm->nvm_sections[NVM_SECTION_TYPE_SW].data || 285 !mvm->nvm_sections[mvm->cfg->nvm_hw_section_num].data) { 286 IWL_ERR(mvm, "Can't parse empty OTP/NVM sections\n"); ··· 327 (const __le16 *)sections[NVM_SECTION_TYPE_REGULATORY_SDP].data : 328 (const __le16 *)sections[NVM_SECTION_TYPE_REGULATORY].data; 329 330 + return iwl_parse_nvm_data(mvm->trans, mvm->cfg, mvm->fw, hw, sw, calib, 331 regulatory, mac_override, phy_sku, 332 + mvm->fw->valid_tx_ant, mvm->fw->valid_rx_ant); 333 } 334 335 /* Loads the NVM data stored in mvm->nvm_sections into the NIC */
+10 -7
drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
··· 514 515 static void iwl_mvm_sync_nssn(struct iwl_mvm *mvm, u8 baid, u16 nssn) 516 { 517 - struct iwl_mvm_rss_sync_notif notif = { 518 - .metadata.type = IWL_MVM_RXQ_NSSN_SYNC, 519 - .metadata.sync = 0, 520 - .nssn_sync.baid = baid, 521 - .nssn_sync.nssn = nssn, 522 - }; 523 524 - iwl_mvm_sync_rx_queues_internal(mvm, (void *)&notif, sizeof(notif)); 525 } 526 527 #define RX_REORDER_BUF_TIMEOUT_MQ (HZ / 10)
··· 514 515 static void iwl_mvm_sync_nssn(struct iwl_mvm *mvm, u8 baid, u16 nssn) 516 { 517 + if (IWL_MVM_USE_NSSN_SYNC) { 518 + struct iwl_mvm_rss_sync_notif notif = { 519 + .metadata.type = IWL_MVM_RXQ_NSSN_SYNC, 520 + .metadata.sync = 0, 521 + .nssn_sync.baid = baid, 522 + .nssn_sync.nssn = nssn, 523 + }; 524 525 + iwl_mvm_sync_rx_queues_internal(mvm, (void *)&notif, 526 + sizeof(notif)); 527 + } 528 } 529 530 #define RX_REORDER_BUF_TIMEOUT_MQ (HZ / 10)
+1 -1
drivers/net/wireless/intel/iwlwifi/mvm/scan.c
··· 1213 cmd_size = sizeof(struct iwl_scan_config_v2); 1214 else 1215 cmd_size = sizeof(struct iwl_scan_config_v1); 1216 - cmd_size += num_channels; 1217 1218 cfg = kzalloc(cmd_size, GFP_KERNEL); 1219 if (!cfg)
··· 1213 cmd_size = sizeof(struct iwl_scan_config_v2); 1214 else 1215 cmd_size = sizeof(struct iwl_scan_config_v1); 1216 + cmd_size += mvm->fw->ucode_capa.n_scan_channels; 1217 1218 cfg = kzalloc(cmd_size, GFP_KERNEL); 1219 if (!cfg)
+8 -13
drivers/net/wireless/intel/iwlwifi/mvm/tx.c
··· 490 /* 491 * Allocates and sets the Tx cmd the driver data pointers in the skb 492 */ 493 - static struct iwl_device_cmd * 494 iwl_mvm_set_tx_params(struct iwl_mvm *mvm, struct sk_buff *skb, 495 struct ieee80211_tx_info *info, int hdrlen, 496 struct ieee80211_sta *sta, u8 sta_id) 497 { 498 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; 499 - struct iwl_device_cmd *dev_cmd; 500 struct iwl_tx_cmd *tx_cmd; 501 502 dev_cmd = iwl_trans_alloc_tx_cmd(mvm->trans); ··· 504 if (unlikely(!dev_cmd)) 505 return NULL; 506 507 - /* Make sure we zero enough of dev_cmd */ 508 - BUILD_BUG_ON(sizeof(struct iwl_tx_cmd_gen2) > sizeof(*tx_cmd)); 509 - BUILD_BUG_ON(sizeof(struct iwl_tx_cmd_gen3) > sizeof(*tx_cmd)); 510 - 511 - memset(dev_cmd, 0, sizeof(dev_cmd->hdr) + sizeof(*tx_cmd)); 512 dev_cmd->hdr.cmd = TX_CMD; 513 514 if (iwl_mvm_has_new_tx_api(mvm)) { ··· 592 } 593 594 static void iwl_mvm_skb_prepare_status(struct sk_buff *skb, 595 - struct iwl_device_cmd *cmd) 596 { 597 struct ieee80211_tx_info *skb_info = IEEE80211_SKB_CB(skb); 598 ··· 711 { 712 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; 713 struct ieee80211_tx_info info; 714 - struct iwl_device_cmd *dev_cmd; 715 u8 sta_id; 716 int hdrlen = ieee80211_hdrlen(hdr->frame_control); 717 __le16 fc = hdr->frame_control; ··· 1073 { 1074 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; 1075 struct iwl_mvm_sta *mvmsta; 1076 - struct iwl_device_cmd *dev_cmd; 1077 __le16 fc; 1078 u16 seq_number = 0; 1079 u8 tid = IWL_MAX_TID_COUNT; ··· 1149 if (WARN_ONCE(txq_id == IWL_MVM_INVALID_QUEUE, "Invalid TXQ id")) { 1150 iwl_trans_free_tx_cmd(mvm->trans, dev_cmd); 1151 spin_unlock(&mvmsta->lock); 1152 - return 0; 1153 } 1154 1155 if (!iwl_mvm_has_new_tx_api(mvm)) { ··· 1201 return -1; 1202 } 1203 1204 - int iwl_mvm_tx_skb(struct iwl_mvm *mvm, struct sk_buff *skb, 1205 - struct ieee80211_sta *sta) 1206 { 1207 struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); 1208 struct ieee80211_tx_info info;
··· 490 /* 491 * Allocates and sets the Tx cmd the driver data pointers in the skb 492 */ 493 + static struct iwl_device_tx_cmd * 494 iwl_mvm_set_tx_params(struct iwl_mvm *mvm, struct sk_buff *skb, 495 struct ieee80211_tx_info *info, int hdrlen, 496 struct ieee80211_sta *sta, u8 sta_id) 497 { 498 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; 499 + struct iwl_device_tx_cmd *dev_cmd; 500 struct iwl_tx_cmd *tx_cmd; 501 502 dev_cmd = iwl_trans_alloc_tx_cmd(mvm->trans); ··· 504 if (unlikely(!dev_cmd)) 505 return NULL; 506 507 dev_cmd->hdr.cmd = TX_CMD; 508 509 if (iwl_mvm_has_new_tx_api(mvm)) { ··· 597 } 598 599 static void iwl_mvm_skb_prepare_status(struct sk_buff *skb, 600 + struct iwl_device_tx_cmd *cmd) 601 { 602 struct ieee80211_tx_info *skb_info = IEEE80211_SKB_CB(skb); 603 ··· 716 { 717 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; 718 struct ieee80211_tx_info info; 719 + struct iwl_device_tx_cmd *dev_cmd; 720 u8 sta_id; 721 int hdrlen = ieee80211_hdrlen(hdr->frame_control); 722 __le16 fc = hdr->frame_control; ··· 1078 { 1079 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; 1080 struct iwl_mvm_sta *mvmsta; 1081 + struct iwl_device_tx_cmd *dev_cmd; 1082 __le16 fc; 1083 u16 seq_number = 0; 1084 u8 tid = IWL_MAX_TID_COUNT; ··· 1154 if (WARN_ONCE(txq_id == IWL_MVM_INVALID_QUEUE, "Invalid TXQ id")) { 1155 iwl_trans_free_tx_cmd(mvm->trans, dev_cmd); 1156 spin_unlock(&mvmsta->lock); 1157 + return -1; 1158 } 1159 1160 if (!iwl_mvm_has_new_tx_api(mvm)) { ··· 1206 return -1; 1207 } 1208 1209 + int iwl_mvm_tx_skb_sta(struct iwl_mvm *mvm, struct sk_buff *skb, 1210 + struct ieee80211_sta *sta) 1211 { 1212 struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); 1213 struct ieee80211_tx_info info;
+42 -3
drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
··· 57 #include "internal.h" 58 #include "iwl-prph.h" 59 60 void iwl_pcie_ctxt_info_free_paging(struct iwl_trans *trans) 61 { 62 struct iwl_self_init_dram *dram = &trans->init_dram; ··· 197 struct iwl_context_info *ctxt_info; 198 struct iwl_context_info_rbd_cfg *rx_cfg; 199 u32 control_flags = 0, rb_size; 200 int ret; 201 202 - ctxt_info = dma_alloc_coherent(trans->dev, sizeof(*ctxt_info), 203 - &trans_pcie->ctxt_info_dma_addr, 204 - GFP_KERNEL); 205 if (!ctxt_info) 206 return -ENOMEM; 207 208 ctxt_info->version.version = 0; 209 ctxt_info->version.mac_id =
··· 57 #include "internal.h" 58 #include "iwl-prph.h" 59 60 + static void *_iwl_pcie_ctxt_info_dma_alloc_coherent(struct iwl_trans *trans, 61 + size_t size, 62 + dma_addr_t *phys, 63 + int depth) 64 + { 65 + void *result; 66 + 67 + if (WARN(depth > 2, 68 + "failed to allocate DMA memory not crossing 2^32 boundary")) 69 + return NULL; 70 + 71 + result = dma_alloc_coherent(trans->dev, size, phys, GFP_KERNEL); 72 + 73 + if (!result) 74 + return NULL; 75 + 76 + if (unlikely(iwl_pcie_crosses_4g_boundary(*phys, size))) { 77 + void *old = result; 78 + dma_addr_t oldphys = *phys; 79 + 80 + result = _iwl_pcie_ctxt_info_dma_alloc_coherent(trans, size, 81 + phys, 82 + depth + 1); 83 + dma_free_coherent(trans->dev, size, old, oldphys); 84 + } 85 + 86 + return result; 87 + } 88 + 89 + static void *iwl_pcie_ctxt_info_dma_alloc_coherent(struct iwl_trans *trans, 90 + size_t size, 91 + dma_addr_t *phys) 92 + { 93 + return _iwl_pcie_ctxt_info_dma_alloc_coherent(trans, size, phys, 0); 94 + } 95 + 96 void iwl_pcie_ctxt_info_free_paging(struct iwl_trans *trans) 97 { 98 struct iwl_self_init_dram *dram = &trans->init_dram; ··· 161 struct iwl_context_info *ctxt_info; 162 struct iwl_context_info_rbd_cfg *rx_cfg; 163 u32 control_flags = 0, rb_size; 164 + dma_addr_t phys; 165 int ret; 166 167 + ctxt_info = iwl_pcie_ctxt_info_dma_alloc_coherent(trans, 168 + sizeof(*ctxt_info), 169 + &phys); 170 if (!ctxt_info) 171 return -ENOMEM; 172 + 173 + trans_pcie->ctxt_info_dma_addr = phys; 174 175 ctxt_info->version.version = 0; 176 ctxt_info->version.mac_id =
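The allocator added above retries whenever the context-info buffer it got back straddles a 4 GiB boundary: the rejected buffer is kept alive so the next attempt lands somewhere else, it is freed once the retry returns, and the whole thing gives up (with a WARN) after a couple of levels. A minimal userspace sketch of that retry shape, with malloc() standing in for dma_alloc_coherent() and the CPU pointer standing in for the DMA address (both are stand-ins for illustration only):

#include <stdint.h>
#include <stdlib.h>

static int crosses_4g(uint64_t addr, size_t len)
{
        return (addr >> 32) != ((addr + len) >> 32);
}

static void *alloc_below_boundary(size_t size, int depth)
{
        void *buf;

        if (depth > 2)                  /* bounded retries, as in the diff */
                return NULL;

        buf = malloc(size);
        if (!buf)
                return NULL;

        if (crosses_4g((uintptr_t)buf, size)) {
                void *old = buf;

                /* keep the rejected buffer alive so the next attempt
                 * lands elsewhere, then release it */
                buf = alloc_below_boundary(size, depth + 1);
                free(old);
        }
        return buf;
}

int main(void)
{
        free(alloc_below_boundary(4096, 0));
        return 0;
}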
+15 -4
drivers/net/wireless/intel/iwlwifi/pcie/internal.h
··· 305 #define IWL_FIRST_TB_SIZE_ALIGN ALIGN(IWL_FIRST_TB_SIZE, 64) 306 307 struct iwl_pcie_txq_entry { 308 - struct iwl_device_cmd *cmd; 309 struct sk_buff *skb; 310 /* buffer to free after command completes */ 311 const void *free_buf; ··· 672 /***************************************************** 673 * TX / HCMD 674 ******************************************************/ 675 int iwl_pcie_tx_init(struct iwl_trans *trans); 676 int iwl_pcie_gen2_tx_init(struct iwl_trans *trans, int txq_id, 677 int queue_size); ··· 698 void iwl_trans_pcie_log_scd_error(struct iwl_trans *trans, 699 struct iwl_txq *txq); 700 int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, 701 - struct iwl_device_cmd *dev_cmd, int txq_id); 702 void iwl_pcie_txq_check_wrptrs(struct iwl_trans *trans); 703 int iwl_trans_pcie_send_hcmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd); 704 void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx); ··· 1092 void iwl_pcie_free_tso_page(struct iwl_trans_pcie *trans_pcie, 1093 struct sk_buff *skb); 1094 #ifdef CONFIG_INET 1095 - struct iwl_tso_hdr_page *get_page_hdr(struct iwl_trans *trans, size_t len); 1096 #endif 1097 1098 /* common functions that are used by gen3 transport */ ··· 1117 unsigned int timeout); 1118 void iwl_trans_pcie_dyn_txq_free(struct iwl_trans *trans, int queue); 1119 int iwl_trans_pcie_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb, 1120 - struct iwl_device_cmd *dev_cmd, int txq_id); 1121 int iwl_trans_pcie_gen2_send_hcmd(struct iwl_trans *trans, 1122 struct iwl_host_cmd *cmd); 1123 void iwl_trans_pcie_gen2_stop_device(struct iwl_trans *trans);
··· 305 #define IWL_FIRST_TB_SIZE_ALIGN ALIGN(IWL_FIRST_TB_SIZE, 64) 306 307 struct iwl_pcie_txq_entry { 308 + void *cmd; 309 struct sk_buff *skb; 310 /* buffer to free after command completes */ 311 const void *free_buf; ··· 672 /***************************************************** 673 * TX / HCMD 674 ******************************************************/ 675 + /* 676 + * We need this inline in case dma_addr_t is only 32-bits - since the 677 + * hardware is always 64-bit, the issue can still occur in that case, 678 + * so use u64 for 'phys' here to force the addition in 64-bit. 679 + */ 680 + static inline bool iwl_pcie_crosses_4g_boundary(u64 phys, u16 len) 681 + { 682 + return upper_32_bits(phys) != upper_32_bits(phys + len); 683 + } 684 + 685 int iwl_pcie_tx_init(struct iwl_trans *trans); 686 int iwl_pcie_gen2_tx_init(struct iwl_trans *trans, int txq_id, 687 int queue_size); ··· 688 void iwl_trans_pcie_log_scd_error(struct iwl_trans *trans, 689 struct iwl_txq *txq); 690 int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, 691 + struct iwl_device_tx_cmd *dev_cmd, int txq_id); 692 void iwl_pcie_txq_check_wrptrs(struct iwl_trans *trans); 693 int iwl_trans_pcie_send_hcmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd); 694 void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx); ··· 1082 void iwl_pcie_free_tso_page(struct iwl_trans_pcie *trans_pcie, 1083 struct sk_buff *skb); 1084 #ifdef CONFIG_INET 1085 + struct iwl_tso_hdr_page *get_page_hdr(struct iwl_trans *trans, size_t len, 1086 + struct sk_buff *skb); 1087 #endif 1088 1089 /* common functions that are used by gen3 transport */ ··· 1106 unsigned int timeout); 1107 void iwl_trans_pcie_dyn_txq_free(struct iwl_trans *trans, int queue); 1108 int iwl_trans_pcie_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb, 1109 + struct iwl_device_tx_cmd *dev_cmd, int txq_id); 1110 int iwl_trans_pcie_gen2_send_hcmd(struct iwl_trans *trans, 1111 struct iwl_host_cmd *cmd); 1112 void iwl_trans_pcie_gen2_stop_device(struct iwl_trans *trans);
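The new iwl_pcie_crosses_4g_boundary() helper deliberately takes a u64 even though dma_addr_t may be only 32 bits wide, so the addition is performed in 64 bits and the carry that signals the crossing is not lost. A standalone demonstration of the difference (the addresses are illustrative):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t phys32 = 0xfffff000u;
        uint64_t phys64 = 0xfffff000u;
        uint16_t len = 0x2000;

        /* 32-bit arithmetic wraps past 4 GiB and hides the crossing */
        printf("32-bit sum: 0x%08x\n", phys32 + len);
        /* the 64-bit form keeps the carry, so the comparison works */
        printf("64-bit sum: 0x%llx\n", (unsigned long long)(phys64 + len));
        printf("crosses:    %d\n",
               (uint32_t)(phys64 >> 32) != (uint32_t)((phys64 + len) >> 32));
        return 0;
}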
+2 -2
drivers/net/wireless/intel/iwlwifi/pcie/rx.c
··· 1529 1530 napi = &rxq->napi; 1531 if (napi->poll) { 1532 if (napi->rx_count) { 1533 netif_receive_skb_list(&napi->rx_list); 1534 INIT_LIST_HEAD(&napi->rx_list); 1535 napi->rx_count = 0; 1536 } 1537 - 1538 - napi_gro_flush(napi, false); 1539 } 1540 1541 iwl_pcie_rxq_restock(trans, rxq);
··· 1529 1530 napi = &rxq->napi; 1531 if (napi->poll) { 1532 + napi_gro_flush(napi, false); 1533 + 1534 if (napi->rx_count) { 1535 netif_receive_skb_list(&napi->rx_list); 1536 INIT_LIST_HEAD(&napi->rx_list); 1537 napi->rx_count = 0; 1538 } 1539 } 1540 1541 iwl_pcie_rxq_restock(trans, rxq);
+29 -18
drivers/net/wireless/intel/iwlwifi/pcie/trans.c
··· 79 #include "iwl-agn-hw.h" 80 #include "fw/error-dump.h" 81 #include "fw/dbg.h" 82 #include "internal.h" 83 #include "iwl-fh.h" 84 ··· 302 u16 cap; 303 304 /* 305 - * HW bug W/A for instability in PCIe bus L0S->L1 transition. 306 - * Check if BIOS (or OS) enabled L1-ASPM on this device. 307 - * If so (likely), disable L0S, so device moves directly L0->L1; 308 - * costs negligible amount of power savings. 309 - * If not (unlikely), enable L0S, so there is at least some 310 - * power savings, even without L1. 311 */ 312 pcie_capability_read_word(trans_pcie->pci_dev, PCI_EXP_LNKCTL, &lctl); 313 - if (lctl & PCI_EXP_LNKCTL_ASPM_L1) 314 - iwl_set_bit(trans, CSR_GIO_REG, CSR_GIO_REG_VAL_L0S_ENABLED); 315 - else 316 - iwl_clear_bit(trans, CSR_GIO_REG, CSR_GIO_REG_VAL_L0S_ENABLED); 317 trans->pm_support = !(lctl & PCI_EXP_LNKCTL_ASPM_L0S); 318 319 pcie_capability_read_word(trans_pcie->pci_dev, PCI_EXP_DEVCTL2, &cap); ··· 3456 { 3457 struct iwl_trans_pcie *trans_pcie; 3458 struct iwl_trans *trans; 3459 - int ret, addr_size; 3460 3461 ret = pcim_enable_device(pdev); 3462 if (ret) 3463 return ERR_PTR(ret); 3464 3465 - if (cfg_trans->gen2) 3466 - trans = iwl_trans_alloc(sizeof(struct iwl_trans_pcie), 3467 - &pdev->dev, &trans_ops_pcie_gen2); 3468 - else 3469 - trans = iwl_trans_alloc(sizeof(struct iwl_trans_pcie), 3470 - &pdev->dev, &trans_ops_pcie); 3471 - 3472 if (!trans) 3473 return ERR_PTR(-ENOMEM); 3474
··· 79 #include "iwl-agn-hw.h" 80 #include "fw/error-dump.h" 81 #include "fw/dbg.h" 82 + #include "fw/api/tx.h" 83 #include "internal.h" 84 #include "iwl-fh.h" 85 ··· 301 u16 cap; 302 303 /* 304 + * L0S states have been found to be unstable with our devices 305 + * and in newer hardware they are not officially supported at 306 + * all, so we must always set the L0S_DISABLED bit. 307 */ 308 + iwl_set_bit(trans, CSR_GIO_REG, CSR_GIO_REG_VAL_L0S_DISABLED); 309 + 310 pcie_capability_read_word(trans_pcie->pci_dev, PCI_EXP_LNKCTL, &lctl); 311 trans->pm_support = !(lctl & PCI_EXP_LNKCTL_ASPM_L0S); 312 313 pcie_capability_read_word(trans_pcie->pci_dev, PCI_EXP_DEVCTL2, &cap); ··· 3460 { 3461 struct iwl_trans_pcie *trans_pcie; 3462 struct iwl_trans *trans; 3463 + int ret, addr_size, txcmd_size, txcmd_align; 3464 + const struct iwl_trans_ops *ops = &trans_ops_pcie_gen2; 3465 + 3466 + if (!cfg_trans->gen2) { 3467 + ops = &trans_ops_pcie; 3468 + txcmd_size = sizeof(struct iwl_tx_cmd); 3469 + txcmd_align = sizeof(void *); 3470 + } else if (cfg_trans->device_family < IWL_DEVICE_FAMILY_AX210) { 3471 + txcmd_size = sizeof(struct iwl_tx_cmd_gen2); 3472 + txcmd_align = 64; 3473 + } else { 3474 + txcmd_size = sizeof(struct iwl_tx_cmd_gen3); 3475 + txcmd_align = 128; 3476 + } 3477 + 3478 + txcmd_size += sizeof(struct iwl_cmd_header); 3479 + txcmd_size += 36; /* biggest possible 802.11 header */ 3480 + 3481 + /* Ensure device TX cmd cannot reach/cross a page boundary in gen2 */ 3482 + if (WARN_ON(cfg_trans->gen2 && txcmd_size >= txcmd_align)) 3483 + return ERR_PTR(-EINVAL); 3484 3485 ret = pcim_enable_device(pdev); 3486 if (ret) 3487 return ERR_PTR(ret); 3488 3489 + trans = iwl_trans_alloc(sizeof(struct iwl_trans_pcie), &pdev->dev, ops, 3490 + txcmd_size, txcmd_align); 3491 if (!trans) 3492 return ERR_PTR(-ENOMEM); 3493
+170 -38
drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
··· 221 int idx = iwl_pcie_gen2_get_num_tbs(trans, tfd); 222 struct iwl_tfh_tb *tb; 223 224 if (WARN_ON(idx >= IWL_TFH_NUM_TBS)) 225 return -EINVAL; 226 tb = &tfd->tbs[idx]; ··· 251 return idx; 252 } 253 254 static int iwl_pcie_gen2_build_amsdu(struct iwl_trans *trans, 255 struct sk_buff *skb, 256 struct iwl_tfh_tfd *tfd, int start_len, 257 - u8 hdr_len, struct iwl_device_cmd *dev_cmd) 258 { 259 #ifdef CONFIG_INET 260 - struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 261 struct iwl_tx_cmd_gen2 *tx_cmd = (void *)dev_cmd->payload; 262 struct ieee80211_hdr *hdr = (void *)skb->data; 263 unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room; ··· 366 u16 length, amsdu_pad; 367 u8 *start_hdr; 368 struct iwl_tso_hdr_page *hdr_page; 369 - struct page **page_ptr; 370 struct tso_t tso; 371 372 trace_iwlwifi_dev_tx(trans->dev, skb, tfd, sizeof(*tfd), ··· 381 (3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr)); 382 383 /* Our device supports 9 segments at most, it will fit in 1 page */ 384 - hdr_page = get_page_hdr(trans, hdr_room); 385 if (!hdr_page) 386 return -ENOMEM; 387 388 - get_page(hdr_page->page); 389 start_hdr = hdr_page->pos; 390 - page_ptr = (void *)((u8 *)skb->cb + trans_pcie->page_offs); 391 - *page_ptr = hdr_page->page; 392 393 /* 394 * Pull the ieee80211 header to be able to use TSO core, ··· 440 dev_kfree_skb(csum_skb); 441 goto out_err; 442 } 443 iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, tb_len); 444 trace_iwlwifi_dev_tx_tb(trans->dev, skb, start_hdr, 445 tb_phys, tb_len); ··· 456 457 /* put the payload */ 458 while (data_left) { 459 tb_len = min_t(unsigned int, tso.size, data_left); 460 tb_phys = dma_map_single(trans->dev, tso.data, 461 tb_len, DMA_TO_DEVICE); 462 - if (unlikely(dma_mapping_error(trans->dev, tb_phys))) { 463 dev_kfree_skb(csum_skb); 464 goto out_err; 465 } 466 - iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, tb_len); 467 - trace_iwlwifi_dev_tx_tb(trans->dev, skb, tso.data, 468 - tb_phys, tb_len); 469 470 data_left -= tb_len; 471 tso_build_data(skb, &tso, tb_len); ··· 487 static struct 488 iwl_tfh_tfd *iwl_pcie_gen2_build_tx_amsdu(struct iwl_trans *trans, 489 struct iwl_txq *txq, 490 - struct iwl_device_cmd *dev_cmd, 491 struct sk_buff *skb, 492 struct iwl_cmd_meta *out_meta, 493 int hdr_len, ··· 501 502 tb_phys = iwl_pcie_get_first_tb_dma(txq, idx); 503 504 iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, IWL_FIRST_TB_SIZE); 505 506 /* ··· 524 tb_phys = dma_map_single(trans->dev, tb1_addr, len, DMA_TO_DEVICE); 525 if (unlikely(dma_mapping_error(trans->dev, tb_phys))) 526 goto out_err; 527 iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, len); 528 529 if (iwl_pcie_gen2_build_amsdu(trans, skb, tfd, ··· 554 for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { 555 const skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; 556 dma_addr_t tb_phys; 557 - int tb_idx; 558 559 - if (!skb_frag_size(frag)) 560 continue; 561 562 tb_phys = skb_frag_dma_map(trans->dev, frag, 0, 563 - skb_frag_size(frag), DMA_TO_DEVICE); 564 - 565 - if (unlikely(dma_mapping_error(trans->dev, tb_phys))) 566 - return -ENOMEM; 567 - tb_idx = iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, 568 - skb_frag_size(frag)); 569 - trace_iwlwifi_dev_tx_tb(trans->dev, skb, skb_frag_address(frag), 570 - tb_phys, skb_frag_size(frag)); 571 - if (tb_idx < 0) 572 - return tb_idx; 573 - 574 - out_meta->tbs |= BIT(tb_idx); 575 } 576 577 return 0; ··· 575 static struct 576 iwl_tfh_tfd *iwl_pcie_gen2_build_tx(struct iwl_trans *trans, 577 struct iwl_txq *txq, 578 - struct iwl_device_cmd *dev_cmd, 579 struct sk_buff *skb, 580 
struct iwl_cmd_meta *out_meta, 581 int hdr_len, ··· 594 /* The first TB points to bi-directional DMA data */ 595 memcpy(&txq->first_tb_bufs[idx], dev_cmd, IWL_FIRST_TB_SIZE); 596 597 iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, IWL_FIRST_TB_SIZE); 598 599 /* ··· 620 tb_phys = dma_map_single(trans->dev, tb1_addr, tb1_len, DMA_TO_DEVICE); 621 if (unlikely(dma_mapping_error(trans->dev, tb_phys))) 622 goto out_err; 623 iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, tb1_len); 624 trace_iwlwifi_dev_tx(trans->dev, skb, tfd, sizeof(*tfd), &dev_cmd->hdr, 625 IWL_FIRST_TB_SIZE + tb1_len, hdr_len); ··· 632 tb2_len = skb_headlen(skb) - hdr_len; 633 634 if (tb2_len > 0) { 635 tb_phys = dma_map_single(trans->dev, skb->data + hdr_len, 636 tb2_len, DMA_TO_DEVICE); 637 - if (unlikely(dma_mapping_error(trans->dev, tb_phys))) 638 goto out_err; 639 - iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, tb2_len); 640 - trace_iwlwifi_dev_tx_tb(trans->dev, skb, skb->data + hdr_len, 641 - tb_phys, tb2_len); 642 } 643 644 if (iwl_pcie_gen2_tx_add_frags(trans, skb, tfd, out_meta)) 645 goto out_err; 646 647 skb_walk_frags(skb, frag) { 648 tb_phys = dma_map_single(trans->dev, frag->data, 649 skb_headlen(frag), DMA_TO_DEVICE); 650 - if (unlikely(dma_mapping_error(trans->dev, tb_phys))) 651 goto out_err; 652 - iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, skb_headlen(frag)); 653 - trace_iwlwifi_dev_tx_tb(trans->dev, skb, frag->data, 654 - tb_phys, skb_headlen(frag)); 655 if (iwl_pcie_gen2_tx_add_frags(trans, frag, tfd, out_meta)) 656 goto out_err; 657 } ··· 670 static 671 struct iwl_tfh_tfd *iwl_pcie_gen2_build_tfd(struct iwl_trans *trans, 672 struct iwl_txq *txq, 673 - struct iwl_device_cmd *dev_cmd, 674 struct sk_buff *skb, 675 struct iwl_cmd_meta *out_meta) 676 { ··· 710 } 711 712 int iwl_trans_pcie_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb, 713 - struct iwl_device_cmd *dev_cmd, int txq_id) 714 { 715 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 716 struct iwl_cmd_meta *out_meta; ··· 735 736 /* don't put the packet on the ring, if there is no room */ 737 if (unlikely(iwl_queue_space(trans, txq) < 3)) { 738 - struct iwl_device_cmd **dev_cmd_ptr; 739 740 dev_cmd_ptr = (void *)((u8 *)skb->cb + 741 trans_pcie->dev_cmd_offs);
··· 221 int idx = iwl_pcie_gen2_get_num_tbs(trans, tfd); 222 struct iwl_tfh_tb *tb; 223 224 + /* 225 + * Only WARN here so we know about the issue, but we mess up our 226 + * unmap path because not every place currently checks for errors 227 + * returned from this function - it can only return an error if 228 + * there's no more space, and so when we know there is enough we 229 + * don't always check ... 230 + */ 231 + WARN(iwl_pcie_crosses_4g_boundary(addr, len), 232 + "possible DMA problem with iova:0x%llx, len:%d\n", 233 + (unsigned long long)addr, len); 234 + 235 if (WARN_ON(idx >= IWL_TFH_NUM_TBS)) 236 return -EINVAL; 237 tb = &tfd->tbs[idx]; ··· 240 return idx; 241 } 242 243 + static struct page *get_workaround_page(struct iwl_trans *trans, 244 + struct sk_buff *skb) 245 + { 246 + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 247 + struct page **page_ptr; 248 + struct page *ret; 249 + 250 + page_ptr = (void *)((u8 *)skb->cb + trans_pcie->page_offs); 251 + 252 + ret = alloc_page(GFP_ATOMIC); 253 + if (!ret) 254 + return NULL; 255 + 256 + /* set the chaining pointer to the previous page if there */ 257 + *(void **)(page_address(ret) + PAGE_SIZE - sizeof(void *)) = *page_ptr; 258 + *page_ptr = ret; 259 + 260 + return ret; 261 + } 262 + 263 + /* 264 + * Add a TB and if needed apply the FH HW bug workaround; 265 + * meta != NULL indicates that it's a page mapping and we 266 + * need to dma_unmap_page() and set the meta->tbs bit in 267 + * this case. 268 + */ 269 + static int iwl_pcie_gen2_set_tb_with_wa(struct iwl_trans *trans, 270 + struct sk_buff *skb, 271 + struct iwl_tfh_tfd *tfd, 272 + dma_addr_t phys, void *virt, 273 + u16 len, struct iwl_cmd_meta *meta) 274 + { 275 + dma_addr_t oldphys = phys; 276 + struct page *page; 277 + int ret; 278 + 279 + if (unlikely(dma_mapping_error(trans->dev, phys))) 280 + return -ENOMEM; 281 + 282 + if (likely(!iwl_pcie_crosses_4g_boundary(phys, len))) { 283 + ret = iwl_pcie_gen2_set_tb(trans, tfd, phys, len); 284 + 285 + if (ret < 0) 286 + goto unmap; 287 + 288 + if (meta) 289 + meta->tbs |= BIT(ret); 290 + 291 + ret = 0; 292 + goto trace; 293 + } 294 + 295 + /* 296 + * Work around a hardware bug. If (as expressed in the 297 + * condition above) the TB ends on a 32-bit boundary, 298 + * then the next TB may be accessed with the wrong 299 + * address. 300 + * To work around it, copy the data elsewhere and make 301 + * a new mapping for it so the device will not fail. 
302 + */ 303 + 304 + if (WARN_ON(len > PAGE_SIZE - sizeof(void *))) { 305 + ret = -ENOBUFS; 306 + goto unmap; 307 + } 308 + 309 + page = get_workaround_page(trans, skb); 310 + if (!page) { 311 + ret = -ENOMEM; 312 + goto unmap; 313 + } 314 + 315 + memcpy(page_address(page), virt, len); 316 + 317 + phys = dma_map_single(trans->dev, page_address(page), len, 318 + DMA_TO_DEVICE); 319 + if (unlikely(dma_mapping_error(trans->dev, phys))) 320 + return -ENOMEM; 321 + ret = iwl_pcie_gen2_set_tb(trans, tfd, phys, len); 322 + if (ret < 0) { 323 + /* unmap the new allocation as single */ 324 + oldphys = phys; 325 + meta = NULL; 326 + goto unmap; 327 + } 328 + IWL_WARN(trans, 329 + "TB bug workaround: copied %d bytes from 0x%llx to 0x%llx\n", 330 + len, (unsigned long long)oldphys, (unsigned long long)phys); 331 + 332 + ret = 0; 333 + unmap: 334 + if (meta) 335 + dma_unmap_page(trans->dev, oldphys, len, DMA_TO_DEVICE); 336 + else 337 + dma_unmap_single(trans->dev, oldphys, len, DMA_TO_DEVICE); 338 + trace: 339 + trace_iwlwifi_dev_tx_tb(trans->dev, skb, virt, phys, len); 340 + 341 + return ret; 342 + } 343 + 344 static int iwl_pcie_gen2_build_amsdu(struct iwl_trans *trans, 345 struct sk_buff *skb, 346 struct iwl_tfh_tfd *tfd, int start_len, 347 + u8 hdr_len, 348 + struct iwl_device_tx_cmd *dev_cmd) 349 { 350 #ifdef CONFIG_INET 351 struct iwl_tx_cmd_gen2 *tx_cmd = (void *)dev_cmd->payload; 352 struct ieee80211_hdr *hdr = (void *)skb->data; 353 unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room; ··· 254 u16 length, amsdu_pad; 255 u8 *start_hdr; 256 struct iwl_tso_hdr_page *hdr_page; 257 struct tso_t tso; 258 259 trace_iwlwifi_dev_tx(trans->dev, skb, tfd, sizeof(*tfd), ··· 270 (3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr)); 271 272 /* Our device supports 9 segments at most, it will fit in 1 page */ 273 + hdr_page = get_page_hdr(trans, hdr_room, skb); 274 if (!hdr_page) 275 return -ENOMEM; 276 277 start_hdr = hdr_page->pos; 278 279 /* 280 * Pull the ieee80211 header to be able to use TSO core, ··· 332 dev_kfree_skb(csum_skb); 333 goto out_err; 334 } 335 + /* 336 + * No need for _with_wa, this is from the TSO page and 337 + * we leave some space at the end of it so can't hit 338 + * the buggy scenario. 339 + */ 340 iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, tb_len); 341 trace_iwlwifi_dev_tx_tb(trans->dev, skb, start_hdr, 342 tb_phys, tb_len); ··· 343 344 /* put the payload */ 345 while (data_left) { 346 + int ret; 347 + 348 tb_len = min_t(unsigned int, tso.size, data_left); 349 tb_phys = dma_map_single(trans->dev, tso.data, 350 tb_len, DMA_TO_DEVICE); 351 + ret = iwl_pcie_gen2_set_tb_with_wa(trans, skb, tfd, 352 + tb_phys, tso.data, 353 + tb_len, NULL); 354 + if (ret) { 355 dev_kfree_skb(csum_skb); 356 goto out_err; 357 } 358 359 data_left -= tb_len; 360 tso_build_data(skb, &tso, tb_len); ··· 372 static struct 373 iwl_tfh_tfd *iwl_pcie_gen2_build_tx_amsdu(struct iwl_trans *trans, 374 struct iwl_txq *txq, 375 + struct iwl_device_tx_cmd *dev_cmd, 376 struct sk_buff *skb, 377 struct iwl_cmd_meta *out_meta, 378 int hdr_len, ··· 386 387 tb_phys = iwl_pcie_get_first_tb_dma(txq, idx); 388 389 + /* 390 + * No need for _with_wa, the first TB allocation is aligned up 391 + * to a 64-byte boundary and thus can't be at the end or cross 392 + * a page boundary (much less a 2^32 boundary). 
393 + */ 394 iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, IWL_FIRST_TB_SIZE); 395 396 /* ··· 404 tb_phys = dma_map_single(trans->dev, tb1_addr, len, DMA_TO_DEVICE); 405 if (unlikely(dma_mapping_error(trans->dev, tb_phys))) 406 goto out_err; 407 + /* 408 + * No need for _with_wa(), we ensure (via alignment) that the data 409 + * here can never cross or end at a page boundary. 410 + */ 411 iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, len); 412 413 if (iwl_pcie_gen2_build_amsdu(trans, skb, tfd, ··· 430 for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { 431 const skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; 432 dma_addr_t tb_phys; 433 + unsigned int fragsz = skb_frag_size(frag); 434 + int ret; 435 436 + if (!fragsz) 437 continue; 438 439 tb_phys = skb_frag_dma_map(trans->dev, frag, 0, 440 + fragsz, DMA_TO_DEVICE); 441 + ret = iwl_pcie_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys, 442 + skb_frag_address(frag), 443 + fragsz, out_meta); 444 + if (ret) 445 + return ret; 446 } 447 448 return 0; ··· 456 static struct 457 iwl_tfh_tfd *iwl_pcie_gen2_build_tx(struct iwl_trans *trans, 458 struct iwl_txq *txq, 459 + struct iwl_device_tx_cmd *dev_cmd, 460 struct sk_buff *skb, 461 struct iwl_cmd_meta *out_meta, 462 int hdr_len, ··· 475 /* The first TB points to bi-directional DMA data */ 476 memcpy(&txq->first_tb_bufs[idx], dev_cmd, IWL_FIRST_TB_SIZE); 477 478 + /* 479 + * No need for _with_wa, the first TB allocation is aligned up 480 + * to a 64-byte boundary and thus can't be at the end or cross 481 + * a page boundary (much less a 2^32 boundary). 482 + */ 483 iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, IWL_FIRST_TB_SIZE); 484 485 /* ··· 496 tb_phys = dma_map_single(trans->dev, tb1_addr, tb1_len, DMA_TO_DEVICE); 497 if (unlikely(dma_mapping_error(trans->dev, tb_phys))) 498 goto out_err; 499 + /* 500 + * No need for _with_wa(), we ensure (via alignment) that the data 501 + * here can never cross or end at a page boundary. 
502 + */ 503 iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, tb1_len); 504 trace_iwlwifi_dev_tx(trans->dev, skb, tfd, sizeof(*tfd), &dev_cmd->hdr, 505 IWL_FIRST_TB_SIZE + tb1_len, hdr_len); ··· 504 tb2_len = skb_headlen(skb) - hdr_len; 505 506 if (tb2_len > 0) { 507 + int ret; 508 + 509 tb_phys = dma_map_single(trans->dev, skb->data + hdr_len, 510 tb2_len, DMA_TO_DEVICE); 511 + ret = iwl_pcie_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys, 512 + skb->data + hdr_len, tb2_len, 513 + NULL); 514 + if (ret) 515 goto out_err; 516 } 517 518 if (iwl_pcie_gen2_tx_add_frags(trans, skb, tfd, out_meta)) 519 goto out_err; 520 521 skb_walk_frags(skb, frag) { 522 + int ret; 523 + 524 tb_phys = dma_map_single(trans->dev, frag->data, 525 skb_headlen(frag), DMA_TO_DEVICE); 526 + ret = iwl_pcie_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys, 527 + frag->data, 528 + skb_headlen(frag), NULL); 529 + if (ret) 530 goto out_err; 531 if (iwl_pcie_gen2_tx_add_frags(trans, frag, tfd, out_meta)) 532 goto out_err; 533 } ··· 538 static 539 struct iwl_tfh_tfd *iwl_pcie_gen2_build_tfd(struct iwl_trans *trans, 540 struct iwl_txq *txq, 541 + struct iwl_device_tx_cmd *dev_cmd, 542 struct sk_buff *skb, 543 struct iwl_cmd_meta *out_meta) 544 { ··· 578 } 579 580 int iwl_trans_pcie_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb, 581 + struct iwl_device_tx_cmd *dev_cmd, int txq_id) 582 { 583 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 584 struct iwl_cmd_meta *out_meta; ··· 603 604 /* don't put the packet on the ring, if there is no room */ 605 if (unlikely(iwl_queue_space(trans, txq) < 3)) { 606 + struct iwl_device_tx_cmd **dev_cmd_ptr; 607 608 dev_cmd_ptr = (void *)((u8 *)skb->cb + 609 trans_pcie->dev_cmd_offs);
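When the boundary check fires, the gen2 TX path above copies the offending fragment into a freshly allocated workaround page and hands the copy to the hardware; each such page is linked to the previous one through a pointer stored in its last sizeof(void *) bytes, with the chain head kept in the skb's control buffer. A compile-only userspace sketch of the allocation and copy side (malloc() stands in for alloc_page(), a plain pointer for the skb->cb slot); the matching free walk follows the next diff:

#include <stdlib.h>
#include <string.h>

#define PAGE_SZ 4096

static void *get_bounce_page(void **chain)
{
        void *page = malloc(PAGE_SZ);

        if (!page)
                return NULL;
        /* the last pointer-sized slot links back to the previous page */
        *(void **)((char *)page + PAGE_SZ - sizeof(void *)) = *chain;
        *chain = page;
        return page;
}

/* copy an awkward fragment into a bounce page and use the copy */
static void *bounce_fragment(void **chain, const void *data, size_t len)
{
        void *page;

        if (len > PAGE_SZ - sizeof(void *))
                return NULL;            /* too large for the workaround */

        page = get_bounce_page(chain);
        if (!page)
                return NULL;
        memcpy(page, data, len);
        return page;
}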
+47 -21
drivers/net/wireless/intel/iwlwifi/pcie/tx.c
··· 213 u8 sec_ctl = 0; 214 u16 len = byte_cnt + IWL_TX_CRC_SIZE + IWL_TX_DELIMITER_SIZE; 215 __le16 bc_ent; 216 - struct iwl_tx_cmd *tx_cmd = 217 - (void *)txq->entries[txq->write_ptr].cmd->payload; 218 u8 sta_id = tx_cmd->sta_id; 219 220 scd_bc_tbl = trans_pcie->scd_bc_tbls.addr; ··· 257 int read_ptr = txq->read_ptr; 258 u8 sta_id = 0; 259 __le16 bc_ent; 260 - struct iwl_tx_cmd *tx_cmd = 261 - (void *)txq->entries[read_ptr].cmd->payload; 262 263 WARN_ON(read_ptr >= TFD_QUEUE_SIZE_MAX); 264 ··· 624 struct sk_buff *skb) 625 { 626 struct page **page_ptr; 627 628 page_ptr = (void *)((u8 *)skb->cb + trans_pcie->page_offs); 629 630 - if (*page_ptr) { 631 - __free_page(*page_ptr); 632 - *page_ptr = NULL; 633 } 634 } 635 ··· 1202 1203 while (!skb_queue_empty(&overflow_skbs)) { 1204 struct sk_buff *skb = __skb_dequeue(&overflow_skbs); 1205 - struct iwl_device_cmd *dev_cmd_ptr; 1206 1207 dev_cmd_ptr = *(void **)((u8 *)skb->cb + 1208 trans_pcie->dev_cmd_offs); ··· 2058 } 2059 2060 #ifdef CONFIG_INET 2061 - struct iwl_tso_hdr_page *get_page_hdr(struct iwl_trans *trans, size_t len) 2062 { 2063 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 2064 struct iwl_tso_hdr_page *p = this_cpu_ptr(trans_pcie->tso_hdr_page); 2065 2066 if (!p->page) 2067 goto alloc; 2068 2069 - /* enough room on this page */ 2070 - if (p->pos + len < (u8 *)page_address(p->page) + PAGE_SIZE) 2071 - return p; 2072 2073 /* We don't have enough room on this page, get a new one. */ 2074 __free_page(p->page); ··· 2095 if (!p->page) 2096 return NULL; 2097 p->pos = page_address(p->page); 2098 return p; 2099 } 2100 ··· 2125 static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb, 2126 struct iwl_txq *txq, u8 hdr_len, 2127 struct iwl_cmd_meta *out_meta, 2128 - struct iwl_device_cmd *dev_cmd, u16 tb1_len) 2129 { 2130 struct iwl_tx_cmd *tx_cmd = (void *)dev_cmd->payload; 2131 struct iwl_trans_pcie *trans_pcie = txq->trans_pcie; ··· 2136 u16 length, iv_len, amsdu_pad; 2137 u8 *start_hdr; 2138 struct iwl_tso_hdr_page *hdr_page; 2139 - struct page **page_ptr; 2140 struct tso_t tso; 2141 2142 /* if the packet is protected, then it must be CCMP or GCMP */ ··· 2158 (3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr)) + iv_len; 2159 2160 /* Our device supports 9 segments at most, it will fit in 1 page */ 2161 - hdr_page = get_page_hdr(trans, hdr_room); 2162 if (!hdr_page) 2163 return -ENOMEM; 2164 2165 - get_page(hdr_page->page); 2166 start_hdr = hdr_page->pos; 2167 - page_ptr = (void *)((u8 *)skb->cb + trans_pcie->page_offs); 2168 - *page_ptr = hdr_page->page; 2169 memcpy(hdr_page->pos, skb->data + hdr_len, iv_len); 2170 hdr_page->pos += iv_len; 2171 ··· 2304 static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb, 2305 struct iwl_txq *txq, u8 hdr_len, 2306 struct iwl_cmd_meta *out_meta, 2307 - struct iwl_device_cmd *dev_cmd, u16 tb1_len) 2308 { 2309 /* No A-MSDU without CONFIG_INET */ 2310 WARN_ON(1); ··· 2315 #endif /* CONFIG_INET */ 2316 2317 int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, 2318 - struct iwl_device_cmd *dev_cmd, int txq_id) 2319 { 2320 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 2321 struct ieee80211_hdr *hdr; ··· 2372 2373 /* don't put the packet on the ring, if there is no room */ 2374 if (unlikely(iwl_queue_space(trans, txq) < 3)) { 2375 - struct iwl_device_cmd **dev_cmd_ptr; 2376 2377 dev_cmd_ptr = (void *)((u8 *)skb->cb + 2378 trans_pcie->dev_cmd_offs);
··· 213 u8 sec_ctl = 0; 214 u16 len = byte_cnt + IWL_TX_CRC_SIZE + IWL_TX_DELIMITER_SIZE; 215 __le16 bc_ent; 216 + struct iwl_device_tx_cmd *dev_cmd = txq->entries[txq->write_ptr].cmd; 217 + struct iwl_tx_cmd *tx_cmd = (void *)dev_cmd->payload; 218 u8 sta_id = tx_cmd->sta_id; 219 220 scd_bc_tbl = trans_pcie->scd_bc_tbls.addr; ··· 257 int read_ptr = txq->read_ptr; 258 u8 sta_id = 0; 259 __le16 bc_ent; 260 + struct iwl_device_tx_cmd *dev_cmd = txq->entries[read_ptr].cmd; 261 + struct iwl_tx_cmd *tx_cmd = (void *)dev_cmd->payload; 262 263 WARN_ON(read_ptr >= TFD_QUEUE_SIZE_MAX); 264 ··· 624 struct sk_buff *skb) 625 { 626 struct page **page_ptr; 627 + struct page *next; 628 629 page_ptr = (void *)((u8 *)skb->cb + trans_pcie->page_offs); 630 + next = *page_ptr; 631 + *page_ptr = NULL; 632 633 + while (next) { 634 + struct page *tmp = next; 635 + 636 + next = *(void **)(page_address(next) + PAGE_SIZE - 637 + sizeof(void *)); 638 + __free_page(tmp); 639 } 640 } 641 ··· 1196 1197 while (!skb_queue_empty(&overflow_skbs)) { 1198 struct sk_buff *skb = __skb_dequeue(&overflow_skbs); 1199 + struct iwl_device_tx_cmd *dev_cmd_ptr; 1200 1201 dev_cmd_ptr = *(void **)((u8 *)skb->cb + 1202 trans_pcie->dev_cmd_offs); ··· 2052 } 2053 2054 #ifdef CONFIG_INET 2055 + struct iwl_tso_hdr_page *get_page_hdr(struct iwl_trans *trans, size_t len, 2056 + struct sk_buff *skb) 2057 { 2058 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 2059 struct iwl_tso_hdr_page *p = this_cpu_ptr(trans_pcie->tso_hdr_page); 2060 + struct page **page_ptr; 2061 + 2062 + page_ptr = (void *)((u8 *)skb->cb + trans_pcie->page_offs); 2063 + 2064 + if (WARN_ON(*page_ptr)) 2065 + return NULL; 2066 2067 if (!p->page) 2068 goto alloc; 2069 2070 + /* 2071 + * Check if there's enough room on this page 2072 + * 2073 + * Note that we put a page chaining pointer *last* in the 2074 + * page - we need it somewhere, and if it's there then we 2075 + * avoid DMA mapping the last bits of the page which may 2076 + * trigger the 32-bit boundary hardware bug. 2077 + * 2078 + * (see also get_workaround_page() in tx-gen2.c) 2079 + */ 2080 + if (p->pos + len < (u8 *)page_address(p->page) + PAGE_SIZE - 2081 + sizeof(void *)) 2082 + goto out; 2083 2084 /* We don't have enough room on this page, get a new one. 
*/ 2085 __free_page(p->page); ··· 2072 if (!p->page) 2073 return NULL; 2074 p->pos = page_address(p->page); 2075 + /* set the chaining pointer to NULL */ 2076 + *(void **)(page_address(p->page) + PAGE_SIZE - sizeof(void *)) = NULL; 2077 + out: 2078 + *page_ptr = p->page; 2079 + get_page(p->page); 2080 return p; 2081 } 2082 ··· 2097 static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb, 2098 struct iwl_txq *txq, u8 hdr_len, 2099 struct iwl_cmd_meta *out_meta, 2100 + struct iwl_device_tx_cmd *dev_cmd, 2101 + u16 tb1_len) 2102 { 2103 struct iwl_tx_cmd *tx_cmd = (void *)dev_cmd->payload; 2104 struct iwl_trans_pcie *trans_pcie = txq->trans_pcie; ··· 2107 u16 length, iv_len, amsdu_pad; 2108 u8 *start_hdr; 2109 struct iwl_tso_hdr_page *hdr_page; 2110 struct tso_t tso; 2111 2112 /* if the packet is protected, then it must be CCMP or GCMP */ ··· 2130 (3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr)) + iv_len; 2131 2132 /* Our device supports 9 segments at most, it will fit in 1 page */ 2133 + hdr_page = get_page_hdr(trans, hdr_room, skb); 2134 if (!hdr_page) 2135 return -ENOMEM; 2136 2137 start_hdr = hdr_page->pos; 2138 memcpy(hdr_page->pos, skb->data + hdr_len, iv_len); 2139 hdr_page->pos += iv_len; 2140 ··· 2279 static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb, 2280 struct iwl_txq *txq, u8 hdr_len, 2281 struct iwl_cmd_meta *out_meta, 2282 + struct iwl_device_tx_cmd *dev_cmd, 2283 + u16 tb1_len) 2284 { 2285 /* No A-MSDU without CONFIG_INET */ 2286 WARN_ON(1); ··· 2289 #endif /* CONFIG_INET */ 2290 2291 int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb, 2292 + struct iwl_device_tx_cmd *dev_cmd, int txq_id) 2293 { 2294 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 2295 struct ieee80211_hdr *hdr; ··· 2346 2347 /* don't put the packet on the ring, if there is no room */ 2348 if (unlikely(iwl_queue_space(trans, txq) < 3)) { 2349 + struct iwl_device_tx_cmd **dev_cmd_ptr; 2350 2351 dev_cmd_ptr = (void *)((u8 *)skb->cb + 2352 trans_pcie->dev_cmd_offs);
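The tx.c side keeps the same convention: get_page_hdr() now reserves the last sizeof(void *) bytes of the TSO header page for the chaining pointer and records the page in skb->cb, and iwl_pcie_free_tso_page() walks that chain when the skb is released. The free walk, reduced to standalone C (malloc()/free() standing in for the page allocator):

#include <stdlib.h>

#define PAGE_SZ 4096
#define TAIL(p) (*(void **)((char *)(p) + PAGE_SZ - sizeof(void *)))

/* follow the pointer stored at the tail of each page, freeing as we go */
static void free_chain(void *head)
{
        while (head) {
                void *next = TAIL(head);

                free(head);
                head = next;
        }
}

int main(void)
{
        void *older = malloc(PAGE_SZ);
        void *newer = malloc(PAGE_SZ);

        if (!older || !newer) {
                free(older);
                free(newer);
                return 1;
        }
        TAIL(older) = NULL;     /* the oldest page terminates the chain */
        TAIL(newer) = older;    /* newer pages point at older ones */
        free_chain(newer);
        return 0;
}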
+13 -3
drivers/net/wireless/marvell/libertas/cfg.c
··· 273 int hw, ap, ap_max = ie[1]; 274 u8 hw_rate; 275 276 /* Advance past IE header */ 277 ie += 2; 278 ··· 1721 struct cmd_ds_802_11_ad_hoc_join cmd; 1722 u8 preamble = RADIO_PREAMBLE_SHORT; 1723 int ret = 0; 1724 1725 /* TODO: set preamble based on scan result */ 1726 ret = lbs_set_radio(priv, preamble, 1); ··· 1782 if (!rates_eid) { 1783 lbs_add_rates(cmd.bss.rates); 1784 } else { 1785 - int hw, i; 1786 - u8 rates_max = rates_eid[1]; 1787 - u8 *rates = cmd.bss.rates; 1788 for (hw = 0; hw < ARRAY_SIZE(lbs_rates); hw++) { 1789 u8 hw_rate = lbs_rates[hw].bitrate / 5; 1790 for (i = 0; i < rates_max; i++) {
··· 273 int hw, ap, ap_max = ie[1]; 274 u8 hw_rate; 275 276 + if (ap_max > MAX_RATES) { 277 + lbs_deb_assoc("invalid rates\n"); 278 + return tlv; 279 + } 280 /* Advance past IE header */ 281 ie += 2; 282 ··· 1717 struct cmd_ds_802_11_ad_hoc_join cmd; 1718 u8 preamble = RADIO_PREAMBLE_SHORT; 1719 int ret = 0; 1720 + int hw, i; 1721 + u8 rates_max; 1722 + u8 *rates; 1723 1724 /* TODO: set preamble based on scan result */ 1725 ret = lbs_set_radio(priv, preamble, 1); ··· 1775 if (!rates_eid) { 1776 lbs_add_rates(cmd.bss.rates); 1777 } else { 1778 + rates_max = rates_eid[1]; 1779 + if (rates_max > MAX_RATES) { 1780 + lbs_deb_join("invalid rates"); 1781 + goto out; 1782 + } 1783 + rates = cmd.bss.rates; 1784 for (hw = 0; hw < ARRAY_SIZE(lbs_rates); hw++) { 1785 u8 hw_rate = lbs_rates[hw].bitrate / 5; 1786 for (i = 0; i < rates_max; i++) {
+1 -1
drivers/net/wireless/mediatek/mt76/airtime.c
··· 242 return 0; 243 244 sband = dev->hw->wiphy->bands[status->band]; 245 - if (!sband || status->rate_idx > sband->n_bitrates) 246 return 0; 247 248 rate = &sband->bitrates[status->rate_idx];
··· 242 return 0; 243 244 sband = dev->hw->wiphy->bands[status->band]; 245 + if (!sband || status->rate_idx >= sband->n_bitrates) 246 return 0; 247 248 rate = &sband->bitrates[status->rate_idx];
+2 -1
drivers/net/wireless/mediatek/mt76/mac80211.c
··· 378 { 379 struct ieee80211_hw *hw = dev->hw; 380 381 - mt76_led_cleanup(dev); 382 mt76_tx_status_check(dev, NULL, true); 383 ieee80211_unregister_hw(hw); 384 }
··· 378 { 379 struct ieee80211_hw *hw = dev->hw; 380 381 + if (IS_ENABLED(CONFIG_MT76_LEDS)) 382 + mt76_led_cleanup(dev); 383 mt76_tx_status_check(dev, NULL, true); 384 ieee80211_unregister_hw(hw); 385 }
+2
include/linux/netdevice.h
··· 3698 int dev_get_alias(const struct net_device *, char *, size_t); 3699 int dev_change_net_namespace(struct net_device *, struct net *, const char *); 3700 int __dev_set_mtu(struct net_device *, int); 3701 int dev_set_mtu_ext(struct net_device *dev, int mtu, 3702 struct netlink_ext_ack *extack); 3703 int dev_set_mtu(struct net_device *, int);
··· 3698 int dev_get_alias(const struct net_device *, char *, size_t); 3699 int dev_change_net_namespace(struct net_device *, struct net *, const char *); 3700 int __dev_set_mtu(struct net_device *, int); 3701 + int dev_validate_mtu(struct net_device *dev, int mtu, 3702 + struct netlink_ext_ack *extack); 3703 int dev_set_mtu_ext(struct net_device *dev, int mtu, 3704 struct netlink_ext_ack *extack); 3705 int dev_set_mtu(struct net_device *, int);
-7
include/linux/netfilter/ipset/ip_set.h
··· 426 sizeof(*addr)); 427 } 428 429 - /* Calculate the bytes required to store the inclusive range of a-b */ 430 - static inline int 431 - bitmap_bytes(u32 a, u32 b) 432 - { 433 - return 4 * ((((b - a + 8) / 8) + 3) / 4); 434 - } 435 - 436 /* How often should the gc be run by default */ 437 #define IPSET_GC_TIME (3 * 60) 438
··· 426 sizeof(*addr)); 427 } 428 429 /* How often should the gc be run by default */ 430 #define IPSET_GC_TIME (3 * 60) 431
+1 -1
include/linux/netfilter/nfnetlink.h
··· 31 const struct nfnl_callback *cb; /* callback for individual types */ 32 struct module *owner; 33 int (*commit)(struct net *net, struct sk_buff *skb); 34 - int (*abort)(struct net *net, struct sk_buff *skb); 35 void (*cleanup)(struct net *net); 36 bool (*valid_genid)(struct net *net, u32 genid); 37 };
··· 31 const struct nfnl_callback *cb; /* callback for individual types */ 32 struct module *owner; 33 int (*commit)(struct net *net, struct sk_buff *skb); 34 + int (*abort)(struct net *net, struct sk_buff *skb, bool autoload); 35 void (*cleanup)(struct net *net); 36 bool (*valid_genid)(struct net *net, u32 genid); 37 };
+1
include/net/netns/nftables.h
··· 7 struct netns_nftables { 8 struct list_head tables; 9 struct list_head commit_list; 10 struct mutex commit_mutex; 11 unsigned int base_seq; 12 u8 gencursor;
··· 7 struct netns_nftables { 8 struct list_head tables; 9 struct list_head commit_list; 10 + struct list_head module_list; 11 struct mutex commit_mutex; 12 unsigned int base_seq; 13 u8 gencursor;
+1 -2
net/atm/proc.c
··· 134 static void *vcc_seq_next(struct seq_file *seq, void *v, loff_t *pos) 135 { 136 v = vcc_walk(seq, 1); 137 - if (v) 138 - (*pos)++; 139 return v; 140 } 141
··· 134 static void *vcc_seq_next(struct seq_file *seq, void *v, loff_t *pos) 135 { 136 v = vcc_walk(seq, 1); 137 + (*pos)++; 138 return v; 139 } 140
+1 -1
net/caif/caif_usb.c
··· 62 hpad = (info->hdr_len + CFUSB_PAD_DESCR_SZ) & (CFUSB_ALIGNMENT - 1); 63 64 if (skb_headroom(skb) < ETH_HLEN + CFUSB_PAD_DESCR_SZ + hpad) { 65 - pr_warn("Headroom to small\n"); 66 kfree_skb(skb); 67 return -EIO; 68 }
··· 62 hpad = (info->hdr_len + CFUSB_PAD_DESCR_SZ) & (CFUSB_ALIGNMENT - 1); 63 64 if (skb_headroom(skb) < ETH_HLEN + CFUSB_PAD_DESCR_SZ + hpad) { 65 + pr_warn("Headroom too small\n"); 66 kfree_skb(skb); 67 return -EIO; 68 }
+55 -42
net/core/dev.c
··· 5491 put_online_cpus(); 5492 } 5493 5494 INDIRECT_CALLABLE_DECLARE(int inet_gro_complete(struct sk_buff *, int)); 5495 INDIRECT_CALLABLE_DECLARE(int ipv6_gro_complete(struct sk_buff *, int)); 5496 - static int napi_gro_complete(struct sk_buff *skb) 5497 { 5498 struct packet_offload *ptype; 5499 __be16 type = skb->protocol; ··· 5546 } 5547 5548 out: 5549 - return netif_receive_skb_internal(skb); 5550 } 5551 5552 static void __napi_gro_flush_chain(struct napi_struct *napi, u32 index, ··· 5560 if (flush_old && NAPI_GRO_CB(skb)->age == jiffies) 5561 return; 5562 skb_list_del_init(skb); 5563 - napi_gro_complete(skb); 5564 napi->gro_hash[index].count--; 5565 } 5566 ··· 5662 } 5663 } 5664 5665 - static void gro_flush_oldest(struct list_head *head) 5666 { 5667 struct sk_buff *oldest; 5668 ··· 5678 * SKB to the chain. 5679 */ 5680 skb_list_del_init(oldest); 5681 - napi_gro_complete(oldest); 5682 } 5683 5684 INDIRECT_CALLABLE_DECLARE(struct sk_buff *inet_gro_receive(struct list_head *, ··· 5754 5755 if (pp) { 5756 skb_list_del_init(pp); 5757 - napi_gro_complete(pp); 5758 napi->gro_hash[hash].count--; 5759 } 5760 ··· 5765 goto normal; 5766 5767 if (unlikely(napi->gro_hash[hash].count >= MAX_GRO_SKBS)) { 5768 - gro_flush_oldest(gro_head); 5769 } else { 5770 napi->gro_hash[hash].count++; 5771 } ··· 5822 return NULL; 5823 } 5824 EXPORT_SYMBOL(gro_find_complete_by_type); 5825 - 5826 - /* Pass the currently batched GRO_NORMAL SKBs up to the stack. */ 5827 - static void gro_normal_list(struct napi_struct *napi) 5828 - { 5829 - if (!napi->rx_count) 5830 - return; 5831 - netif_receive_skb_list_internal(&napi->rx_list); 5832 - INIT_LIST_HEAD(&napi->rx_list); 5833 - napi->rx_count = 0; 5834 - } 5835 - 5836 - /* Queue one GRO_NORMAL SKB up for list processing. If batch size exceeded, 5837 - * pass the whole batch up to the stack. 5838 - */ 5839 - static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb) 5840 - { 5841 - list_add_tail(&skb->list, &napi->rx_list); 5842 - if (++napi->rx_count >= gro_normal_batch) 5843 - gro_normal_list(napi); 5844 - } 5845 5846 static void napi_skb_free_stolen_head(struct sk_buff *skb) 5847 { ··· 6201 NAPIF_STATE_IN_BUSY_POLL))) 6202 return false; 6203 6204 - gro_normal_list(n); 6205 - 6206 if (n->gro_bitmask) { 6207 unsigned long timeout = 0; 6208 ··· 6216 hrtimer_start(&n->timer, ns_to_ktime(timeout), 6217 HRTIMER_MODE_REL_PINNED); 6218 } 6219 if (unlikely(!list_empty(&n->poll_list))) { 6220 /* If n->poll_list is not empty, we need to mask irqs */ 6221 local_irq_save(flags); ··· 6550 goto out_unlock; 6551 } 6552 6553 - gro_normal_list(n); 6554 - 6555 if (n->gro_bitmask) { 6556 /* flush too old packets 6557 * If HZ < 1000, flush all packets. 6558 */ 6559 napi_gro_flush(n, HZ >= 1000); 6560 } 6561 6562 /* Some drivers may have called napi_schedule 6563 * prior to exhausting their budget. 
··· 8196 } 8197 EXPORT_SYMBOL(__dev_set_mtu); 8198 8199 /** 8200 * dev_set_mtu_ext - Change maximum transfer unit 8201 * @dev: device ··· 8228 if (new_mtu == dev->mtu) 8229 return 0; 8230 8231 - /* MTU must be positive, and in range */ 8232 - if (new_mtu < 0 || new_mtu < dev->min_mtu) { 8233 - NL_SET_ERR_MSG(extack, "mtu less than device minimum"); 8234 - return -EINVAL; 8235 - } 8236 - 8237 - if (dev->max_mtu > 0 && new_mtu > dev->max_mtu) { 8238 - NL_SET_ERR_MSG(extack, "mtu greater than device maximum"); 8239 - return -EINVAL; 8240 - } 8241 8242 if (!netif_device_present(dev)) 8243 return -ENODEV; ··· 9313 goto err_uninit; 9314 9315 ret = netdev_register_kobject(dev); 9316 - if (ret) 9317 goto err_uninit; 9318 dev->reg_state = NETREG_REGISTERED; 9319 9320 __netdev_update_features(dev);
··· 5491 put_online_cpus(); 5492 } 5493 5494 + /* Pass the currently batched GRO_NORMAL SKBs up to the stack. */ 5495 + static void gro_normal_list(struct napi_struct *napi) 5496 + { 5497 + if (!napi->rx_count) 5498 + return; 5499 + netif_receive_skb_list_internal(&napi->rx_list); 5500 + INIT_LIST_HEAD(&napi->rx_list); 5501 + napi->rx_count = 0; 5502 + } 5503 + 5504 + /* Queue one GRO_NORMAL SKB up for list processing. If batch size exceeded, 5505 + * pass the whole batch up to the stack. 5506 + */ 5507 + static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb) 5508 + { 5509 + list_add_tail(&skb->list, &napi->rx_list); 5510 + if (++napi->rx_count >= gro_normal_batch) 5511 + gro_normal_list(napi); 5512 + } 5513 + 5514 INDIRECT_CALLABLE_DECLARE(int inet_gro_complete(struct sk_buff *, int)); 5515 INDIRECT_CALLABLE_DECLARE(int ipv6_gro_complete(struct sk_buff *, int)); 5516 + static int napi_gro_complete(struct napi_struct *napi, struct sk_buff *skb) 5517 { 5518 struct packet_offload *ptype; 5519 __be16 type = skb->protocol; ··· 5526 } 5527 5528 out: 5529 + gro_normal_one(napi, skb); 5530 + return NET_RX_SUCCESS; 5531 } 5532 5533 static void __napi_gro_flush_chain(struct napi_struct *napi, u32 index, ··· 5539 if (flush_old && NAPI_GRO_CB(skb)->age == jiffies) 5540 return; 5541 skb_list_del_init(skb); 5542 + napi_gro_complete(napi, skb); 5543 napi->gro_hash[index].count--; 5544 } 5545 ··· 5641 } 5642 } 5643 5644 + static void gro_flush_oldest(struct napi_struct *napi, struct list_head *head) 5645 { 5646 struct sk_buff *oldest; 5647 ··· 5657 * SKB to the chain. 5658 */ 5659 skb_list_del_init(oldest); 5660 + napi_gro_complete(napi, oldest); 5661 } 5662 5663 INDIRECT_CALLABLE_DECLARE(struct sk_buff *inet_gro_receive(struct list_head *, ··· 5733 5734 if (pp) { 5735 skb_list_del_init(pp); 5736 + napi_gro_complete(napi, pp); 5737 napi->gro_hash[hash].count--; 5738 } 5739 ··· 5744 goto normal; 5745 5746 if (unlikely(napi->gro_hash[hash].count >= MAX_GRO_SKBS)) { 5747 + gro_flush_oldest(napi, gro_head); 5748 } else { 5749 napi->gro_hash[hash].count++; 5750 } ··· 5801 return NULL; 5802 } 5803 EXPORT_SYMBOL(gro_find_complete_by_type); 5804 5805 static void napi_skb_free_stolen_head(struct sk_buff *skb) 5806 { ··· 6200 NAPIF_STATE_IN_BUSY_POLL))) 6201 return false; 6202 6203 if (n->gro_bitmask) { 6204 unsigned long timeout = 0; 6205 ··· 6217 hrtimer_start(&n->timer, ns_to_ktime(timeout), 6218 HRTIMER_MODE_REL_PINNED); 6219 } 6220 + 6221 + gro_normal_list(n); 6222 + 6223 if (unlikely(!list_empty(&n->poll_list))) { 6224 /* If n->poll_list is not empty, we need to mask irqs */ 6225 local_irq_save(flags); ··· 6548 goto out_unlock; 6549 } 6550 6551 if (n->gro_bitmask) { 6552 /* flush too old packets 6553 * If HZ < 1000, flush all packets. 6554 */ 6555 napi_gro_flush(n, HZ >= 1000); 6556 } 6557 + 6558 + gro_normal_list(n); 6559 6560 /* Some drivers may have called napi_schedule 6561 * prior to exhausting their budget. 
··· 8194 } 8195 EXPORT_SYMBOL(__dev_set_mtu); 8196 8197 + int dev_validate_mtu(struct net_device *dev, int new_mtu, 8198 + struct netlink_ext_ack *extack) 8199 + { 8200 + /* MTU must be positive, and in range */ 8201 + if (new_mtu < 0 || new_mtu < dev->min_mtu) { 8202 + NL_SET_ERR_MSG(extack, "mtu less than device minimum"); 8203 + return -EINVAL; 8204 + } 8205 + 8206 + if (dev->max_mtu > 0 && new_mtu > dev->max_mtu) { 8207 + NL_SET_ERR_MSG(extack, "mtu greater than device maximum"); 8208 + return -EINVAL; 8209 + } 8210 + return 0; 8211 + } 8212 + 8213 /** 8214 * dev_set_mtu_ext - Change maximum transfer unit 8215 * @dev: device ··· 8210 if (new_mtu == dev->mtu) 8211 return 0; 8212 8213 + err = dev_validate_mtu(dev, new_mtu, extack); 8214 + if (err) 8215 + return err; 8216 8217 if (!netif_device_present(dev)) 8218 return -ENODEV; ··· 9302 goto err_uninit; 9303 9304 ret = netdev_register_kobject(dev); 9305 + if (ret) { 9306 + dev->reg_state = NETREG_UNREGISTERED; 9307 goto err_uninit; 9308 + } 9309 dev->reg_state = NETREG_REGISTERED; 9310 9311 __netdev_update_features(dev);
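The heart of the GRO change above is that packets leaving GRO through napi_gro_complete() are now queued with gro_normal_one() on the same per-NAPI rx_list as packets that bypass GRO, and that list is handed to the stack only after napi_gro_flush() has run, so the two paths can no longer reorder each other. A toy version of that single-queue batching (ints stand in for sk_buffs, BATCH for gro_normal_batch):

#include <stdio.h>

#define BATCH 8

struct toy_napi {
        int rx_list[64];
        int rx_count;
};

static void deliver_list(struct toy_napi *n)
{
        for (int i = 0; i < n->rx_count; i++)
                printf("stack sees packet %d\n", n->rx_list[i]);
        n->rx_count = 0;
}

/* GRO-completed and non-GRO packets both land here, so the stack sees
 * them in arrival order */
static void queue_one(struct toy_napi *n, int pkt)
{
        n->rx_list[n->rx_count++] = pkt;
        if (n->rx_count >= BATCH)
                deliver_list(n);
}

int main(void)
{
        struct toy_napi n = { .rx_count = 0 };

        for (int pkt = 1; pkt <= 10; pkt++)
                queue_one(&n, pkt);
        deliver_list(&n);       /* end-of-poll flush */
        return 0;
}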
+1
net/core/neighbour.c
··· 3290 *pos = cpu+1; 3291 return per_cpu_ptr(tbl->stats, cpu); 3292 } 3293 return NULL; 3294 } 3295
··· 3290 *pos = cpu+1; 3291 return per_cpu_ptr(tbl->stats, cpu); 3292 } 3293 + (*pos)++; 3294 return NULL; 3295 } 3296
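This is the same one-line fix as in net/atm/proc.c above and net/ipv4/route.c below: a seq_file ->next() handler must advance *pos even when it has nothing more to return, otherwise a read that stops at the end and resumes at the recorded offset can emit the last record again. A toy iterator showing the contract (not the seq_file API itself):

#include <stddef.h>

static const int table[] = { 10, 20, 30 };
#define TABLE_LEN ((long)(sizeof(table) / sizeof(table[0])))

/* advance *pos unconditionally, even when the walk is finished */
static const int *toy_seq_next(const int *v, long *pos)
{
        (void)v;
        (*pos)++;
        if (*pos >= TABLE_LEN)
                return NULL;
        return &table[*pos];
}

int main(void)
{
        long pos = 0;
        const int *v = &table[0];

        while (v)
                v = toy_seq_next(v, &pos);
        return 0;               /* pos now points past the end, as it should */
}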
+11 -2
net/core/rtnetlink.c
··· 3048 dev->rtnl_link_ops = ops; 3049 dev->rtnl_link_state = RTNL_LINK_INITIALIZING; 3050 3051 - if (tb[IFLA_MTU]) 3052 - dev->mtu = nla_get_u32(tb[IFLA_MTU]); 3053 if (tb[IFLA_ADDRESS]) { 3054 memcpy(dev->dev_addr, nla_data(tb[IFLA_ADDRESS]), 3055 nla_len(tb[IFLA_ADDRESS]));
··· 3048 dev->rtnl_link_ops = ops; 3049 dev->rtnl_link_state = RTNL_LINK_INITIALIZING; 3050 3051 + if (tb[IFLA_MTU]) { 3052 + u32 mtu = nla_get_u32(tb[IFLA_MTU]); 3053 + int err; 3054 + 3055 + err = dev_validate_mtu(dev, mtu, extack); 3056 + if (err) { 3057 + free_netdev(dev); 3058 + return ERR_PTR(err); 3059 + } 3060 + dev->mtu = mtu; 3061 + } 3062 if (tb[IFLA_ADDRESS]) { 3063 memcpy(dev->dev_addr, nla_data(tb[IFLA_ADDRESS]), 3064 nla_len(tb[IFLA_ADDRESS]));
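rtnl_create_link() now runs the same bounds check as dev_set_mtu_ext() - the dev_validate_mtu() helper added in net/core/dev.c above - before the device is ever registered, and frees the half-built netdev when the requested IFLA_MTU is out of range. The general validate-then-commit shape, sketched with illustrative names rather than the kernel API:

#include <stdlib.h>

struct toy_dev {
        int min_mtu, max_mtu, mtu;
};

static int toy_validate_mtu(const struct toy_dev *dev, int mtu)
{
        if (mtu < 0 || mtu < dev->min_mtu)
                return -1;              /* below device minimum */
        if (dev->max_mtu > 0 && mtu > dev->max_mtu)
                return -1;              /* above device maximum */
        return 0;
}

static struct toy_dev *toy_create(int requested_mtu)
{
        struct toy_dev *dev = calloc(1, sizeof(*dev));

        if (!dev)
                return NULL;
        dev->min_mtu = 68;
        dev->max_mtu = 9000;

        if (toy_validate_mtu(dev, requested_mtu)) {
                free(dev);              /* unwind the half-built object */
                return NULL;
        }
        dev->mtu = requested_mtu;
        return dev;
}

int main(void)
{
        free(toy_create(1500));
        free(toy_create(99999));        /* rejected: returns NULL */
        return 0;
}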
-2
net/core/skmsg.c
··· 594 595 void sk_psock_drop(struct sock *sk, struct sk_psock *psock) 596 { 597 - sock_owned_by_me(sk); 598 - 599 sk_psock_cork_free(psock); 600 sk_psock_zap_ingress(psock); 601
··· 594 595 void sk_psock_drop(struct sock *sk, struct sk_psock *psock) 596 { 597 sk_psock_cork_free(psock); 598 sk_psock_zap_ingress(psock); 599
+17 -3
net/core/utils.c
··· 438 } 439 EXPORT_SYMBOL(inet_proto_csum_replace4); 440 441 void inet_proto_csum_replace16(__sum16 *sum, struct sk_buff *skb, 442 const __be32 *from, const __be32 *to, 443 bool pseudohdr) ··· 466 if (skb->ip_summed != CHECKSUM_PARTIAL) { 467 *sum = csum_fold(csum_partial(diff, sizeof(diff), 468 ~csum_unfold(*sum))); 469 - if (skb->ip_summed == CHECKSUM_COMPLETE && pseudohdr) 470 - skb->csum = ~csum_partial(diff, sizeof(diff), 471 - ~skb->csum); 472 } else if (pseudohdr) 473 *sum = ~csum_fold(csum_partial(diff, sizeof(diff), 474 csum_unfold(*sum)));
··· 438 } 439 EXPORT_SYMBOL(inet_proto_csum_replace4); 440 441 + /** 442 + * inet_proto_csum_replace16 - update layer 4 header checksum field 443 + * @sum: Layer 4 header checksum field 444 + * @skb: sk_buff for the packet 445 + * @from: old IPv6 address 446 + * @to: new IPv6 address 447 + * @pseudohdr: True if layer 4 header checksum includes pseudoheader 448 + * 449 + * Update layer 4 header as per the update in IPv6 src/dst address. 450 + * 451 + * There is no need to update skb->csum in this function, because update in two 452 + * fields a.) IPv6 src/dst address and b.) L4 header checksum cancels each other 453 + * for skb->csum calculation. Whereas inet_proto_csum_replace4 function needs to 454 + * update skb->csum, because update in 3 fields a.) IPv4 src/dst address, 455 + * b.) IPv4 Header checksum and c.) L4 header checksum results in same diff as 456 + * L4 Header checksum for skb->csum calculation. 457 + */ 458 void inet_proto_csum_replace16(__sum16 *sum, struct sk_buff *skb, 459 const __be32 *from, const __be32 *to, 460 bool pseudohdr) ··· 449 if (skb->ip_summed != CHECKSUM_PARTIAL) { 450 *sum = csum_fold(csum_partial(diff, sizeof(diff), 451 ~csum_unfold(*sum))); 452 } else if (pseudohdr) 453 *sum = ~csum_fold(csum_partial(diff, sizeof(diff), 454 csum_unfold(*sum)));
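The comment block added above explains why skb->csum can be left untouched: the change to the IPv6 address and the compensating change to the L4 checksum field cancel each other in the one's-complement sum. The checksum field itself is still patched incrementally; for a single 16-bit word that is the RFC 1624 form HC' = ~(~HC + ~m + m'), which a few lines of standalone C can show (the input values are arbitrary):

#include <stdint.h>
#include <stdio.h>

static uint16_t csum16_update(uint16_t check, uint16_t old_val, uint16_t new_val)
{
        uint32_t sum = (uint16_t)~check;

        sum += (uint16_t)~old_val;
        sum += new_val;
        sum = (sum & 0xffff) + (sum >> 16);     /* fold the carries */
        sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;
}

int main(void)
{
        /* replace one 16-bit word of a header and patch its checksum */
        printf("0x%04x\n", csum16_update(0x8e5a, 0x1234, 0xabcd));
        return 0;
}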
+1 -1
net/hsr/hsr_main.h
··· 191 void hsr_debugfs_create_root(void); 192 void hsr_debugfs_remove_root(void); 193 #else 194 - static inline void void hsr_debugfs_rename(struct net_device *dev) 195 { 196 } 197 static inline void hsr_debugfs_init(struct hsr_priv *priv,
··· 191 void hsr_debugfs_create_root(void); 192 void hsr_debugfs_remove_root(void); 193 #else 194 + static inline void hsr_debugfs_rename(struct net_device *dev) 195 { 196 } 197 static inline void hsr_debugfs_init(struct hsr_priv *priv,
+2
net/ipv4/esp4_offload.c
··· 57 if (!x) 58 goto out_reset; 59 60 sp->xvec[sp->len++] = x; 61 sp->olen++; 62
··· 57 if (!x) 58 goto out_reset; 59 60 + skb->mark = xfrm_smark_get(skb->mark, x); 61 + 62 sp->xvec[sp->len++] = x; 63 sp->olen++; 64
+2 -2
net/ipv4/fou.c
··· 662 [FOU_ATTR_REMCSUM_NOPARTIAL] = { .type = NLA_FLAG, }, 663 [FOU_ATTR_LOCAL_V4] = { .type = NLA_U32, }, 664 [FOU_ATTR_PEER_V4] = { .type = NLA_U32, }, 665 - [FOU_ATTR_LOCAL_V6] = { .type = sizeof(struct in6_addr), }, 666 - [FOU_ATTR_PEER_V6] = { .type = sizeof(struct in6_addr), }, 667 [FOU_ATTR_PEER_PORT] = { .type = NLA_U16, }, 668 [FOU_ATTR_IFINDEX] = { .type = NLA_S32, }, 669 };
··· 662 [FOU_ATTR_REMCSUM_NOPARTIAL] = { .type = NLA_FLAG, }, 663 [FOU_ATTR_LOCAL_V4] = { .type = NLA_U32, }, 664 [FOU_ATTR_PEER_V4] = { .type = NLA_U32, }, 665 + [FOU_ATTR_LOCAL_V6] = { .len = sizeof(struct in6_addr), }, 666 + [FOU_ATTR_PEER_V6] = { .len = sizeof(struct in6_addr), }, 667 [FOU_ATTR_PEER_PORT] = { .type = NLA_U16, }, 668 [FOU_ATTR_IFINDEX] = { .type = NLA_S32, }, 669 };
+1 -3
net/ipv4/ip_tunnel.c
··· 1236 iph->version = 4; 1237 iph->ihl = 5; 1238 1239 - if (tunnel->collect_md) { 1240 - dev->features |= NETIF_F_NETNS_LOCAL; 1241 netif_keep_dst(dev); 1242 - } 1243 return 0; 1244 } 1245 EXPORT_SYMBOL_GPL(ip_tunnel_init);
··· 1236 iph->version = 4; 1237 iph->ihl = 5; 1238 1239 + if (tunnel->collect_md) 1240 netif_keep_dst(dev); 1241 return 0; 1242 } 1243 EXPORT_SYMBOL_GPL(ip_tunnel_init);
+11 -2
net/ipv4/ip_vti.c
··· 187 int mtu; 188 189 if (!dst) { 190 - dev->stats.tx_carrier_errors++; 191 - goto tx_error_icmp; 192 } 193 194 dst_hold(dst);
··· 187 int mtu; 188 189 if (!dst) { 190 + struct rtable *rt; 191 + 192 + fl->u.ip4.flowi4_oif = dev->ifindex; 193 + fl->u.ip4.flowi4_flags |= FLOWI_FLAG_ANYSRC; 194 + rt = __ip_route_output_key(dev_net(dev), &fl->u.ip4); 195 + if (IS_ERR(rt)) { 196 + dev->stats.tx_carrier_errors++; 197 + goto tx_error_icmp; 198 + } 199 + dst = &rt->dst; 200 + skb_dst_set(skb, dst); 201 } 202 203 dst_hold(dst);
+1
net/ipv4/route.c
··· 271 *pos = cpu+1; 272 return &per_cpu(rt_cache_stat, cpu); 273 } 274 return NULL; 275 276 }
··· 271 *pos = cpu+1; 272 return &per_cpu(rt_cache_stat, cpu); 273 } 274 + (*pos)++; 275 return NULL; 276 277 }
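The added (*pos)++ keeps the seq_file contract that a ->next handler advances the position even when it returns NULL at end of sequence; the ipv6_route_seq_next change further down follows the same rule by bumping the position on every call. A toy userspace iterator observing that contract is sketched below; the array and function names are illustrative only:

#include <stdio.h>

static const int items[] = { 10, 20, 30 };
#define N_ITEMS (sizeof(items) / sizeof(items[0]))

/* mimics a seq_file ->next handler: return the next element or NULL at the
 * end, but advance *pos in both cases so the caller never re-reads the
 * same position */
static const int *demo_next(const int *v, long *pos)
{
	(void)v;
	(*pos)++;
	if (*pos < (long)N_ITEMS)
		return &items[*pos];
	return NULL;
}

int main(void)
{
	long pos = 0;
	const int *v = &items[0];

	do {
		printf("pos=%ld val=%d\n", pos, *v);
	} while ((v = demo_next(v, &pos)));

	printf("final pos=%ld (past the last element)\n", pos);
	return 0;
}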
+1 -1
net/ipv4/tcp.c
··· 2524 { 2525 struct rb_node *p = rb_first(&sk->tcp_rtx_queue); 2526 2527 while (p) { 2528 struct sk_buff *skb = rb_to_skb(p); 2529 ··· 2615 WRITE_ONCE(tp->write_seq, seq); 2616 2617 icsk->icsk_backoff = 0; 2618 - tp->snd_cwnd = 2; 2619 icsk->icsk_probes_out = 0; 2620 icsk->icsk_rto = TCP_TIMEOUT_INIT; 2621 tp->snd_ssthresh = TCP_INFINITE_SSTHRESH;
··· 2524 { 2525 struct rb_node *p = rb_first(&sk->tcp_rtx_queue); 2526 2527 + tcp_sk(sk)->highest_sack = NULL; 2528 while (p) { 2529 struct sk_buff *skb = rb_to_skb(p); 2530 ··· 2614 WRITE_ONCE(tp->write_seq, seq); 2615 2616 icsk->icsk_backoff = 0; 2617 icsk->icsk_probes_out = 0; 2618 icsk->icsk_rto = TCP_TIMEOUT_INIT; 2619 tp->snd_ssthresh = TCP_INFINITE_SSTHRESH;
+1 -2
net/ipv4/tcp_bbr.c
··· 779 * bandwidth sample. Delivered is in packets and interval_us in uS and 780 * ratio will be <<1 for most connections. So delivered is first scaled. 781 */ 782 - bw = (u64)rs->delivered * BW_UNIT; 783 - do_div(bw, rs->interval_us); 784 785 /* If this sample is application-limited, it is likely to have a very 786 * low delivered count that represents application behavior rather than
··· 779 * bandwidth sample. Delivered is in packets and interval_us in uS and 780 * ratio will be <<1 for most connections. So delivered is first scaled. 781 */ 782 + bw = div64_long((u64)rs->delivered * BW_UNIT, rs->interval_us); 783 784 /* If this sample is application-limited, it is likely to have a very 785 * low delivered count that represents application behavior rather than
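The division helper changes here, but the surrounding comment's reasoning is the interesting part: delivered/interval_us is far below 1 for most samples, so the delivered count is scaled up first and the result is a fixed-point rate. A small userspace illustration of that scaling is below; the BW_SCALE value of 24 is assumed for the example rather than taken from this diff:

#include <stdint.h>
#include <stdio.h>

#define BW_SCALE 24			/* assumed fixed-point shift */
#define BW_UNIT  (1ULL << BW_SCALE)

int main(void)
{
	uint64_t delivered = 10;	/* packets delivered in the sample */
	uint64_t interval_us = 12000;	/* sample interval in microseconds */
	uint64_t bw;

	/* plain integer division of a ratio well below 1 collapses to 0 */
	printf("unscaled: %llu\n",
	       (unsigned long long)(delivered / interval_us));

	/* scaling the numerator first keeps the precision */
	bw = delivered * BW_UNIT / interval_us;
	printf("scaled:   %llu (~%.6f pkt/us)\n",
	       (unsigned long long)bw, (double)bw / BW_UNIT);
	return 0;
}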
+1
net/ipv4/tcp_input.c
··· 3164 tp->retransmit_skb_hint = NULL; 3165 if (unlikely(skb == tp->lost_skb_hint)) 3166 tp->lost_skb_hint = NULL; 3167 tcp_rtx_queue_unlink_and_free(skb, sk); 3168 } 3169
··· 3164 tp->retransmit_skb_hint = NULL; 3165 if (unlikely(skb == tp->lost_skb_hint)) 3166 tp->lost_skb_hint = NULL; 3167 + tcp_highest_sack_replace(sk, skb, next); 3168 tcp_rtx_queue_unlink_and_free(skb, sk); 3169 } 3170
+1
net/ipv4/tcp_output.c
··· 3232 if (!nskb) 3233 return -ENOMEM; 3234 INIT_LIST_HEAD(&nskb->tcp_tsorted_anchor); 3235 tcp_rtx_queue_unlink_and_free(skb, sk); 3236 __skb_header_release(nskb); 3237 tcp_rbtree_insert(&sk->tcp_rtx_queue, nskb);
··· 3232 if (!nskb) 3233 return -ENOMEM; 3234 INIT_LIST_HEAD(&nskb->tcp_tsorted_anchor); 3235 + tcp_highest_sack_replace(sk, skb, nskb); 3236 tcp_rtx_queue_unlink_and_free(skb, sk); 3237 __skb_header_release(nskb); 3238 tcp_rbtree_insert(&sk->tcp_rtx_queue, nskb);
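The tcp_highest_sack_replace() calls added here and in the tcp_input.c hunk above, and the clearing of highest_sack in the tcp.c hunk, all follow the same shape: re-point or clear the cached highest-SACK hint before the skb it refers to is unlinked and freed, so the hint never dangles. A generic userspace sketch of that pattern follows; the list and helper names are illustrative, not the TCP code:

#include <stdio.h>
#include <stdlib.h>

struct node {
	struct node *next;
	int seq;
};

struct queue {
	struct node *head;
	struct node *hint;	/* cached pointer into the list */
};

/* re-point the cached hint before the node it references goes away */
static void hint_replace(struct queue *q, struct node *old, struct node *new_hint)
{
	if (q->hint == old)
		q->hint = new_hint;
}

static void unlink_and_free(struct queue *q, struct node *n)
{
	struct node **pp = &q->head;

	while (*pp && *pp != n)
		pp = &(*pp)->next;
	if (*pp)
		*pp = n->next;
	free(n);
}

int main(void)
{
	struct node *a = calloc(1, sizeof(*a));
	struct node *b = calloc(1, sizeof(*b));
	struct queue q;

	a->seq = 1;
	a->next = b;
	b->seq = 2;
	q.head = a;
	q.hint = a;

	hint_replace(&q, a, b);	/* without this the hint would dangle ... */
	unlink_and_free(&q, a);	/* ... once the node is freed here */

	printf("hint now points at seq %d\n", q.hint->seq);
	free(b);
	return 0;
}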
+2 -1
net/ipv4/udp.c
··· 1368 if (likely(partial)) { 1369 up->forward_deficit += size; 1370 size = up->forward_deficit; 1371 - if (size < (sk->sk_rcvbuf >> 2)) 1372 return; 1373 } else { 1374 size += up->forward_deficit;
··· 1368 if (likely(partial)) { 1369 up->forward_deficit += size; 1370 size = up->forward_deficit; 1371 + if (size < (sk->sk_rcvbuf >> 2) && 1372 + !skb_queue_empty(&up->reader_queue)) 1373 return; 1374 } else { 1375 size += up->forward_deficit;
+2
net/ipv6/esp6_offload.c
··· 79 if (!x) 80 goto out_reset; 81 82 sp->xvec[sp->len++] = x; 83 sp->olen++; 84
··· 79 if (!x) 80 goto out_reset; 81 82 + skb->mark = xfrm_smark_get(skb->mark, x); 83 + 84 sp->xvec[sp->len++] = x; 85 sp->olen++; 86
+2 -5
net/ipv6/ip6_fib.c
··· 2495 struct net *net = seq_file_net(seq); 2496 struct ipv6_route_iter *iter = seq->private; 2497 2498 if (!v) 2499 goto iter_table; 2500 2501 n = rcu_dereference_bh(((struct fib6_info *)v)->fib6_next); 2502 - if (n) { 2503 - ++*pos; 2504 return n; 2505 - } 2506 2507 iter_table: 2508 ipv6_route_check_sernum(iter); ··· 2509 r = fib6_walk_continue(&iter->w); 2510 spin_unlock_bh(&iter->tbl->tb6_lock); 2511 if (r > 0) { 2512 - if (v) 2513 - ++*pos; 2514 return iter->w.leaf; 2515 } else if (r < 0) { 2516 fib6_walker_unlink(net, &iter->w);
··· 2495 struct net *net = seq_file_net(seq); 2496 struct ipv6_route_iter *iter = seq->private; 2497 2498 + ++(*pos); 2499 if (!v) 2500 goto iter_table; 2501 2502 n = rcu_dereference_bh(((struct fib6_info *)v)->fib6_next); 2503 + if (n) 2504 return n; 2505 2506 iter_table: 2507 ipv6_route_check_sernum(iter); ··· 2510 r = fib6_walk_continue(&iter->w); 2511 spin_unlock_bh(&iter->tbl->tb6_lock); 2512 if (r > 0) { 2513 return iter->w.leaf; 2514 } else if (r < 0) { 2515 fib6_walker_unlink(net, &iter->w);
-3
net/ipv6/ip6_gre.c
··· 1466 dev->mtu -= 8; 1467 1468 if (tunnel->parms.collect_md) { 1469 - dev->features |= NETIF_F_NETNS_LOCAL; 1470 netif_keep_dst(dev); 1471 } 1472 ip6gre_tnl_init_features(dev); ··· 1893 dev->needs_free_netdev = true; 1894 dev->priv_destructor = ip6gre_dev_free; 1895 1896 - dev->features |= NETIF_F_NETNS_LOCAL; 1897 dev->priv_flags &= ~IFF_TX_SKB_SHARING; 1898 dev->priv_flags |= IFF_LIVE_ADDR_CHANGE; 1899 netif_keep_dst(dev); ··· 2195 dev->needs_free_netdev = true; 2196 dev->priv_destructor = ip6gre_dev_free; 2197 2198 - dev->features |= NETIF_F_NETNS_LOCAL; 2199 dev->priv_flags &= ~IFF_TX_SKB_SHARING; 2200 dev->priv_flags |= IFF_LIVE_ADDR_CHANGE; 2201 netif_keep_dst(dev);
··· 1466 dev->mtu -= 8; 1467 1468 if (tunnel->parms.collect_md) { 1469 netif_keep_dst(dev); 1470 } 1471 ip6gre_tnl_init_features(dev); ··· 1894 dev->needs_free_netdev = true; 1895 dev->priv_destructor = ip6gre_dev_free; 1896 1897 dev->priv_flags &= ~IFF_TX_SKB_SHARING; 1898 dev->priv_flags |= IFF_LIVE_ADDR_CHANGE; 1899 netif_keep_dst(dev); ··· 2197 dev->needs_free_netdev = true; 2198 dev->priv_destructor = ip6gre_dev_free; 2199 2200 dev->priv_flags &= ~IFF_TX_SKB_SHARING; 2201 dev->priv_flags |= IFF_LIVE_ADDR_CHANGE; 2202 netif_keep_dst(dev);
+1 -3
net/ipv6/ip6_tunnel.c
··· 1877 if (err) 1878 return err; 1879 ip6_tnl_link_config(t); 1880 - if (t->parms.collect_md) { 1881 - dev->features |= NETIF_F_NETNS_LOCAL; 1882 netif_keep_dst(dev); 1883 - } 1884 return 0; 1885 } 1886
··· 1877 if (err) 1878 return err; 1879 ip6_tnl_link_config(t); 1880 + if (t->parms.collect_md) 1881 netif_keep_dst(dev); 1882 return 0; 1883 } 1884
+11 -2
net/ipv6/ip6_vti.c
··· 449 int err = -1; 450 int mtu; 451 452 - if (!dst) 453 - goto tx_err_link_failure; 454 455 dst_hold(dst); 456 dst = xfrm_lookup(t->net, dst, fl, NULL, 0);
··· 449 int err = -1; 450 int mtu; 451 452 + if (!dst) { 453 + fl->u.ip6.flowi6_oif = dev->ifindex; 454 + fl->u.ip6.flowi6_flags |= FLOWI_FLAG_ANYSRC; 455 + dst = ip6_route_output(dev_net(dev), NULL, &fl->u.ip6); 456 + if (dst->error) { 457 + dst_release(dst); 458 + dst = NULL; 459 + goto tx_err_link_failure; 460 + } 461 + skb_dst_set(skb, dst); 462 + } 463 464 dst_hold(dst); 465 dst = xfrm_lookup(t->net, dst, fl, NULL, 0);
+3 -1
net/ipv6/seg6_local.c
··· 23 #include <net/addrconf.h> 24 #include <net/ip6_route.h> 25 #include <net/dst_cache.h> 26 #ifdef CONFIG_IPV6_SEG6_HMAC 27 #include <net/seg6_hmac.h> 28 #endif ··· 136 137 skb_reset_network_header(skb); 138 skb_reset_transport_header(skb); 139 - skb->encapsulation = 0; 140 141 return true; 142 }
··· 23 #include <net/addrconf.h> 24 #include <net/ip6_route.h> 25 #include <net/dst_cache.h> 26 + #include <net/ip_tunnels.h> 27 #ifdef CONFIG_IPV6_SEG6_HMAC 28 #include <net/seg6_hmac.h> 29 #endif ··· 135 136 skb_reset_network_header(skb); 137 skb_reset_transport_header(skb); 138 + if (iptunnel_pull_offloads(skb)) 139 + return false; 140 141 return true; 142 }
+1 -1
net/netfilter/ipset/ip_set_bitmap_gen.h
··· 75 76 if (set->extensions & IPSET_EXT_DESTROY) 77 mtype_ext_cleanup(set); 78 - memset(map->members, 0, map->memsize); 79 set->elements = 0; 80 set->ext_size = 0; 81 }
··· 75 76 if (set->extensions & IPSET_EXT_DESTROY) 77 mtype_ext_cleanup(set); 78 + bitmap_zero(map->members, map->elements); 79 set->elements = 0; 80 set->ext_size = 0; 81 }
+3 -3
net/netfilter/ipset/ip_set_bitmap_ip.c
··· 37 38 /* Type structure */ 39 struct bitmap_ip { 40 - void *members; /* the set members */ 41 u32 first_ip; /* host byte order, included in range */ 42 u32 last_ip; /* host byte order, included in range */ 43 u32 elements; /* number of max elements in the set */ ··· 220 u32 first_ip, u32 last_ip, 221 u32 elements, u32 hosts, u8 netmask) 222 { 223 - map->members = ip_set_alloc(map->memsize); 224 if (!map->members) 225 return false; 226 map->first_ip = first_ip; ··· 322 if (!map) 323 return -ENOMEM; 324 325 - map->memsize = bitmap_bytes(0, elements - 1); 326 set->variant = &bitmap_ip; 327 if (!init_map_ip(set, map, first_ip, last_ip, 328 elements, hosts, netmask)) {
··· 37 38 /* Type structure */ 39 struct bitmap_ip { 40 + unsigned long *members; /* the set members */ 41 u32 first_ip; /* host byte order, included in range */ 42 u32 last_ip; /* host byte order, included in range */ 43 u32 elements; /* number of max elements in the set */ ··· 220 u32 first_ip, u32 last_ip, 221 u32 elements, u32 hosts, u8 netmask) 222 { 223 + map->members = bitmap_zalloc(elements, GFP_KERNEL | __GFP_NOWARN); 224 if (!map->members) 225 return false; 226 map->first_ip = first_ip; ··· 322 if (!map) 323 return -ENOMEM; 324 325 + map->memsize = BITS_TO_LONGS(elements) * sizeof(unsigned long); 326 set->variant = &bitmap_ip; 327 if (!init_map_ip(set, map, first_ip, last_ip, 328 elements, hosts, netmask)) {
+3 -3
net/netfilter/ipset/ip_set_bitmap_ipmac.c
··· 42 43 /* Type structure */ 44 struct bitmap_ipmac { 45 - void *members; /* the set members */ 46 u32 first_ip; /* host byte order, included in range */ 47 u32 last_ip; /* host byte order, included in range */ 48 u32 elements; /* number of max elements in the set */ ··· 299 init_map_ipmac(struct ip_set *set, struct bitmap_ipmac *map, 300 u32 first_ip, u32 last_ip, u32 elements) 301 { 302 - map->members = ip_set_alloc(map->memsize); 303 if (!map->members) 304 return false; 305 map->first_ip = first_ip; ··· 360 if (!map) 361 return -ENOMEM; 362 363 - map->memsize = bitmap_bytes(0, elements - 1); 364 set->variant = &bitmap_ipmac; 365 if (!init_map_ipmac(set, map, first_ip, last_ip, elements)) { 366 kfree(map);
··· 42 43 /* Type structure */ 44 struct bitmap_ipmac { 45 + unsigned long *members; /* the set members */ 46 u32 first_ip; /* host byte order, included in range */ 47 u32 last_ip; /* host byte order, included in range */ 48 u32 elements; /* number of max elements in the set */ ··· 299 init_map_ipmac(struct ip_set *set, struct bitmap_ipmac *map, 300 u32 first_ip, u32 last_ip, u32 elements) 301 { 302 + map->members = bitmap_zalloc(elements, GFP_KERNEL | __GFP_NOWARN); 303 if (!map->members) 304 return false; 305 map->first_ip = first_ip; ··· 360 if (!map) 361 return -ENOMEM; 362 363 + map->memsize = BITS_TO_LONGS(elements) * sizeof(unsigned long); 364 set->variant = &bitmap_ipmac; 365 if (!init_map_ipmac(set, map, first_ip, last_ip, elements)) { 366 kfree(map);
+3 -3
net/netfilter/ipset/ip_set_bitmap_port.c
··· 30 31 /* Type structure */ 32 struct bitmap_port { 33 - void *members; /* the set members */ 34 u16 first_port; /* host byte order, included in range */ 35 u16 last_port; /* host byte order, included in range */ 36 u32 elements; /* number of max elements in the set */ ··· 231 init_map_port(struct ip_set *set, struct bitmap_port *map, 232 u16 first_port, u16 last_port) 233 { 234 - map->members = ip_set_alloc(map->memsize); 235 if (!map->members) 236 return false; 237 map->first_port = first_port; ··· 271 return -ENOMEM; 272 273 map->elements = elements; 274 - map->memsize = bitmap_bytes(0, map->elements); 275 set->variant = &bitmap_port; 276 if (!init_map_port(set, map, first_port, last_port)) { 277 kfree(map);
··· 30 31 /* Type structure */ 32 struct bitmap_port { 33 + unsigned long *members; /* the set members */ 34 u16 first_port; /* host byte order, included in range */ 35 u16 last_port; /* host byte order, included in range */ 36 u32 elements; /* number of max elements in the set */ ··· 231 init_map_port(struct ip_set *set, struct bitmap_port *map, 232 u16 first_port, u16 last_port) 233 { 234 + map->members = bitmap_zalloc(map->elements, GFP_KERNEL | __GFP_NOWARN); 235 if (!map->members) 236 return false; 237 map->first_port = first_port; ··· 271 return -ENOMEM; 272 273 map->elements = elements; 274 + map->memsize = BITS_TO_LONGS(elements) * sizeof(unsigned long); 275 set->variant = &bitmap_port; 276 if (!init_map_port(set, map, first_port, last_port)) { 277 kfree(map);
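The three bitmap set types above all switch from an opaque void * allocation to a proper unsigned long bitmap sized with bitmap_zalloc()/BITS_TO_LONGS(), so the backing store is always a whole number of machine words. Purely as an illustration, here is a userspace sketch of the equivalent sizing and allocation, with calloc standing in for bitmap_zalloc:

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

#define BITS_PER_LONG	 (CHAR_BIT * sizeof(unsigned long))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

int main(void)
{
	unsigned long elements = 65536;	/* e.g. one bit per port */
	size_t nlongs = BITS_TO_LONGS(elements);
	size_t memsize = nlongs * sizeof(unsigned long);

	/* word-sized, zeroed backing store, as bitmap_zalloc() provides */
	unsigned long *members = calloc(nlongs, sizeof(unsigned long));

	if (!members)
		return 1;

	/* set and test bit 80 the way the kernel bitmap helpers would */
	members[80 / BITS_PER_LONG] |= 1UL << (80 % BITS_PER_LONG);
	printf("memsize=%zu bytes, bit 80 is %s\n", memsize,
	       members[80 / BITS_PER_LONG] & (1UL << (80 % BITS_PER_LONG)) ?
	       "set" : "clear");

	free(members);
	return 0;
}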
+1 -1
net/netfilter/ipvs/ip_vs_sync.c
··· 1239 1240 p = msg_end; 1241 if (p + sizeof(s->v4) > buffer+buflen) { 1242 - IP_VS_ERR_RL("BACKUP, Dropping buffer, to small\n"); 1243 return; 1244 } 1245 s = (union ip_vs_sync_conn *)p;
··· 1239 1240 p = msg_end; 1241 if (p + sizeof(s->v4) > buffer+buflen) { 1242 + IP_VS_ERR_RL("BACKUP, Dropping buffer, too small\n"); 1243 return; 1244 } 1245 s = (union ip_vs_sync_conn *)p;
+3 -3
net/netfilter/nf_conntrack_proto_sctp.c
··· 114 { 115 /* ORIGINAL */ 116 /* sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA */ 117 - /* init */ {sCW, sCW, sCW, sCE, sES, sSS, sSR, sSA, sCW, sHA}, 118 /* init_ack */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL, sHA}, 119 /* abort */ {sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL}, 120 /* shutdown */ {sCL, sCL, sCW, sCE, sSS, sSS, sSR, sSA, sCL, sSS}, ··· 130 /* REPLY */ 131 /* sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA */ 132 /* init */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA},/* INIT in sCL Big TODO */ 133 - /* init_ack */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA}, 134 /* abort */ {sIV, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sIV, sCL}, 135 /* shutdown */ {sIV, sCL, sCW, sCE, sSR, sSS, sSR, sSA, sIV, sSR}, 136 /* shutdown_ack */ {sIV, sCL, sCW, sCE, sES, sSA, sSA, sSA, sIV, sHA}, ··· 316 ct->proto.sctp.vtag[IP_CT_DIR_REPLY] = sh->vtag; 317 } 318 319 - ct->proto.sctp.state = new_state; 320 } 321 322 return true;
··· 114 { 115 /* ORIGINAL */ 116 /* sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA */ 117 + /* init */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCW, sHA}, 118 /* init_ack */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL, sHA}, 119 /* abort */ {sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL}, 120 /* shutdown */ {sCL, sCL, sCW, sCE, sSS, sSS, sSR, sSA, sCL, sSS}, ··· 130 /* REPLY */ 131 /* sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA */ 132 /* init */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA},/* INIT in sCL Big TODO */ 133 + /* init_ack */ {sIV, sCW, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA}, 134 /* abort */ {sIV, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sIV, sCL}, 135 /* shutdown */ {sIV, sCL, sCW, sCE, sSR, sSS, sSR, sSA, sIV, sSR}, 136 /* shutdown_ack */ {sIV, sCL, sCW, sCE, sES, sSA, sSA, sSA, sIV, sHA}, ··· 316 ct->proto.sctp.vtag[IP_CT_DIR_REPLY] = sh->vtag; 317 } 318 319 + ct->proto.sctp.state = SCTP_CONNTRACK_NONE; 320 } 321 322 return true;
+107 -48
net/netfilter/nf_tables_api.c
··· 553 static const struct nft_chain_type *chain_type[NFPROTO_NUMPROTO][NFT_CHAIN_T_MAX]; 554 555 static const struct nft_chain_type * 556 __nf_tables_chain_type_lookup(const struct nlattr *nla, u8 family) 557 { 558 int i; 559 560 for (i = 0; i < NFT_CHAIN_T_MAX; i++) { 561 - if (chain_type[family][i] != NULL && 562 - !nla_strcmp(nla, chain_type[family][i]->name)) 563 - return chain_type[family][i]; 564 } 565 return NULL; 566 } 567 568 - /* 569 - * Loading a module requires dropping mutex that guards the transaction. 570 - * A different client might race to start a new transaction meanwhile. Zap the 571 - * list of pending transaction and then restore it once the mutex is grabbed 572 - * again. Users of this function return EAGAIN which implicitly triggers the 573 - * transaction abort path to clean up the list of pending transactions. 574 - */ 575 #ifdef CONFIG_MODULES 576 - static void nft_request_module(struct net *net, const char *fmt, ...) 577 { 578 char module_name[MODULE_NAME_LEN]; 579 - LIST_HEAD(commit_list); 580 va_list args; 581 int ret; 582 - 583 - list_splice_init(&net->nft.commit_list, &commit_list); 584 585 va_start(args, fmt); 586 ret = vsnprintf(module_name, MODULE_NAME_LEN, fmt, args); 587 va_end(args); 588 if (ret >= MODULE_NAME_LEN) 589 - return; 590 591 - mutex_unlock(&net->nft.commit_mutex); 592 - request_module("%s", module_name); 593 - mutex_lock(&net->nft.commit_mutex); 594 595 - WARN_ON_ONCE(!list_empty(&net->nft.commit_list)); 596 - list_splice(&commit_list, &net->nft.commit_list); 597 } 598 #endif 599 ··· 640 lockdep_nfnl_nft_mutex_not_held(); 641 #ifdef CONFIG_MODULES 642 if (autoload) { 643 - nft_request_module(net, "nft-chain-%u-%.*s", family, 644 - nla_len(nla), (const char *)nla_data(nla)); 645 - type = __nf_tables_chain_type_lookup(nla, family); 646 - if (type != NULL) 647 return ERR_PTR(-EAGAIN); 648 } 649 #endif ··· 1184 1185 void nft_register_chain_type(const struct nft_chain_type *ctype) 1186 { 1187 - if (WARN_ON(ctype->family >= NFPROTO_NUMPROTO)) 1188 - return; 1189 - 1190 nfnl_lock(NFNL_SUBSYS_NFTABLES); 1191 - if (WARN_ON(chain_type[ctype->family][ctype->type] != NULL)) { 1192 nfnl_unlock(NFNL_SUBSYS_NFTABLES); 1193 return; 1194 } ··· 1787 hook->num = ntohl(nla_get_be32(ha[NFTA_HOOK_HOOKNUM])); 1788 hook->priority = ntohl(nla_get_be32(ha[NFTA_HOOK_PRIORITY])); 1789 1790 - type = chain_type[family][NFT_CHAIN_T_DEFAULT]; 1791 if (nla[NFTA_CHAIN_TYPE]) { 1792 type = nf_tables_chain_type_lookup(net, nla[NFTA_CHAIN_TYPE], 1793 family, autoload); ··· 2350 static int nft_expr_type_request_module(struct net *net, u8 family, 2351 struct nlattr *nla) 2352 { 2353 - nft_request_module(net, "nft-expr-%u-%.*s", family, 2354 - nla_len(nla), (char *)nla_data(nla)); 2355 - if (__nft_expr_type_get(family, nla)) 2356 return -EAGAIN; 2357 2358 return 0; ··· 2377 if (nft_expr_type_request_module(net, family, nla) == -EAGAIN) 2378 return ERR_PTR(-EAGAIN); 2379 2380 - nft_request_module(net, "nft-expr-%.*s", 2381 - nla_len(nla), (char *)nla_data(nla)); 2382 - if (__nft_expr_type_get(family, nla)) 2383 return ERR_PTR(-EAGAIN); 2384 } 2385 #endif ··· 2470 err = PTR_ERR(ops); 2471 #ifdef CONFIG_MODULES 2472 if (err == -EAGAIN) 2473 - nft_expr_type_request_module(ctx->net, 2474 - ctx->family, 2475 - tb[NFTA_EXPR_NAME]); 2476 #endif 2477 goto err1; 2478 } ··· 3310 lockdep_nfnl_nft_mutex_not_held(); 3311 #ifdef CONFIG_MODULES 3312 if (list_empty(&nf_tables_set_types)) { 3313 - nft_request_module(ctx->net, "nft-set"); 3314 - if (!list_empty(&nf_tables_set_types)) 3315 return 
ERR_PTR(-EAGAIN); 3316 } 3317 #endif ··· 5436 lockdep_nfnl_nft_mutex_not_held(); 5437 #ifdef CONFIG_MODULES 5438 if (type == NULL) { 5439 - nft_request_module(net, "nft-obj-%u", objtype); 5440 - if (__nft_obj_type_get(objtype)) 5441 return ERR_PTR(-EAGAIN); 5442 } 5443 #endif ··· 6009 lockdep_nfnl_nft_mutex_not_held(); 6010 #ifdef CONFIG_MODULES 6011 if (type == NULL) { 6012 - nft_request_module(net, "nf-flowtable-%u", family); 6013 - if (__nft_flowtable_type_get(family)) 6014 return ERR_PTR(-EAGAIN); 6015 } 6016 #endif ··· 7011 list_del_rcu(&chain->list); 7012 } 7013 7014 static void nf_tables_commit_release(struct net *net) 7015 { 7016 struct nft_trans *trans; ··· 7035 * to prevent expensive synchronize_rcu() in commit phase. 7036 */ 7037 if (list_empty(&net->nft.commit_list)) { 7038 mutex_unlock(&net->nft.commit_mutex); 7039 return; 7040 } ··· 7050 list_splice_tail_init(&net->nft.commit_list, &nf_tables_destroy_list); 7051 spin_unlock(&nf_tables_destroy_list_lock); 7052 7053 mutex_unlock(&net->nft.commit_mutex); 7054 7055 schedule_work(&trans_destroy_work); ··· 7242 return 0; 7243 } 7244 7245 static void nf_tables_abort_release(struct nft_trans *trans) 7246 { 7247 switch (trans->msg_type) { ··· 7291 kfree(trans); 7292 } 7293 7294 - static int __nf_tables_abort(struct net *net) 7295 { 7296 struct nft_trans *trans, *next; 7297 struct nft_trans_elem *te; ··· 7413 nf_tables_abort_release(trans); 7414 } 7415 7416 return 0; 7417 } 7418 ··· 7426 nft_validate_state_update(net, NFT_VALIDATE_SKIP); 7427 } 7428 7429 - static int nf_tables_abort(struct net *net, struct sk_buff *skb) 7430 { 7431 - int ret = __nf_tables_abort(net); 7432 7433 mutex_unlock(&net->nft.commit_mutex); 7434 ··· 8023 { 8024 INIT_LIST_HEAD(&net->nft.tables); 8025 INIT_LIST_HEAD(&net->nft.commit_list); 8026 mutex_init(&net->nft.commit_mutex); 8027 net->nft.base_seq = 1; 8028 net->nft.validate_state = NFT_VALIDATE_SKIP; ··· 8035 { 8036 mutex_lock(&net->nft.commit_mutex); 8037 if (!list_empty(&net->nft.commit_list)) 8038 - __nf_tables_abort(net); 8039 __nft_release_tables(net); 8040 mutex_unlock(&net->nft.commit_mutex); 8041 WARN_ON_ONCE(!list_empty(&net->nft.tables));
··· 553 static const struct nft_chain_type *chain_type[NFPROTO_NUMPROTO][NFT_CHAIN_T_MAX]; 554 555 static const struct nft_chain_type * 556 + __nft_chain_type_get(u8 family, enum nft_chain_types type) 557 + { 558 + if (family >= NFPROTO_NUMPROTO || 559 + type >= NFT_CHAIN_T_MAX) 560 + return NULL; 561 + 562 + return chain_type[family][type]; 563 + } 564 + 565 + static const struct nft_chain_type * 566 __nf_tables_chain_type_lookup(const struct nlattr *nla, u8 family) 567 { 568 + const struct nft_chain_type *type; 569 int i; 570 571 for (i = 0; i < NFT_CHAIN_T_MAX; i++) { 572 + type = __nft_chain_type_get(family, i); 573 + if (!type) 574 + continue; 575 + if (!nla_strcmp(nla, type->name)) 576 + return type; 577 } 578 return NULL; 579 } 580 581 + struct nft_module_request { 582 + struct list_head list; 583 + char module[MODULE_NAME_LEN]; 584 + bool done; 585 + }; 586 + 587 #ifdef CONFIG_MODULES 588 + static int nft_request_module(struct net *net, const char *fmt, ...) 589 { 590 char module_name[MODULE_NAME_LEN]; 591 + struct nft_module_request *req; 592 va_list args; 593 int ret; 594 595 va_start(args, fmt); 596 ret = vsnprintf(module_name, MODULE_NAME_LEN, fmt, args); 597 va_end(args); 598 if (ret >= MODULE_NAME_LEN) 599 + return 0; 600 601 + list_for_each_entry(req, &net->nft.module_list, list) { 602 + if (!strcmp(req->module, module_name)) { 603 + if (req->done) 604 + return 0; 605 606 + /* A request to load this module already exists. */ 607 + return -EAGAIN; 608 + } 609 + } 610 + 611 + req = kmalloc(sizeof(*req), GFP_KERNEL); 612 + if (!req) 613 + return -ENOMEM; 614 + 615 + req->done = false; 616 + strlcpy(req->module, module_name, MODULE_NAME_LEN); 617 + list_add_tail(&req->list, &net->nft.module_list); 618 + 619 + return -EAGAIN; 620 } 621 #endif 622 ··· 617 lockdep_nfnl_nft_mutex_not_held(); 618 #ifdef CONFIG_MODULES 619 if (autoload) { 620 + if (nft_request_module(net, "nft-chain-%u-%.*s", family, 621 + nla_len(nla), 622 + (const char *)nla_data(nla)) == -EAGAIN) 623 return ERR_PTR(-EAGAIN); 624 } 625 #endif ··· 1162 1163 void nft_register_chain_type(const struct nft_chain_type *ctype) 1164 { 1165 nfnl_lock(NFNL_SUBSYS_NFTABLES); 1166 + if (WARN_ON(__nft_chain_type_get(ctype->family, ctype->type))) { 1167 nfnl_unlock(NFNL_SUBSYS_NFTABLES); 1168 return; 1169 } ··· 1768 hook->num = ntohl(nla_get_be32(ha[NFTA_HOOK_HOOKNUM])); 1769 hook->priority = ntohl(nla_get_be32(ha[NFTA_HOOK_PRIORITY])); 1770 1771 + type = __nft_chain_type_get(family, NFT_CHAIN_T_DEFAULT); 1772 + if (!type) 1773 + return -EOPNOTSUPP; 1774 + 1775 if (nla[NFTA_CHAIN_TYPE]) { 1776 type = nf_tables_chain_type_lookup(net, nla[NFTA_CHAIN_TYPE], 1777 family, autoload); ··· 2328 static int nft_expr_type_request_module(struct net *net, u8 family, 2329 struct nlattr *nla) 2330 { 2331 + if (nft_request_module(net, "nft-expr-%u-%.*s", family, 2332 + nla_len(nla), (char *)nla_data(nla)) == -EAGAIN) 2333 return -EAGAIN; 2334 2335 return 0; ··· 2356 if (nft_expr_type_request_module(net, family, nla) == -EAGAIN) 2357 return ERR_PTR(-EAGAIN); 2358 2359 + if (nft_request_module(net, "nft-expr-%.*s", 2360 + nla_len(nla), 2361 + (char *)nla_data(nla)) == -EAGAIN) 2362 return ERR_PTR(-EAGAIN); 2363 } 2364 #endif ··· 2449 err = PTR_ERR(ops); 2450 #ifdef CONFIG_MODULES 2451 if (err == -EAGAIN) 2452 + if (nft_expr_type_request_module(ctx->net, 2453 + ctx->family, 2454 + tb[NFTA_EXPR_NAME]) != -EAGAIN) 2455 + err = -ENOENT; 2456 #endif 2457 goto err1; 2458 } ··· 3288 lockdep_nfnl_nft_mutex_not_held(); 3289 #ifdef CONFIG_MODULES 3290 if 
(list_empty(&nf_tables_set_types)) { 3291 + if (nft_request_module(ctx->net, "nft-set") == -EAGAIN) 3292 return ERR_PTR(-EAGAIN); 3293 } 3294 #endif ··· 5415 lockdep_nfnl_nft_mutex_not_held(); 5416 #ifdef CONFIG_MODULES 5417 if (type == NULL) { 5418 + if (nft_request_module(net, "nft-obj-%u", objtype) == -EAGAIN) 5419 return ERR_PTR(-EAGAIN); 5420 } 5421 #endif ··· 5989 lockdep_nfnl_nft_mutex_not_held(); 5990 #ifdef CONFIG_MODULES 5991 if (type == NULL) { 5992 + if (nft_request_module(net, "nf-flowtable-%u", family) == -EAGAIN) 5993 return ERR_PTR(-EAGAIN); 5994 } 5995 #endif ··· 6992 list_del_rcu(&chain->list); 6993 } 6994 6995 + static void nf_tables_module_autoload_cleanup(struct net *net) 6996 + { 6997 + struct nft_module_request *req, *next; 6998 + 6999 + WARN_ON_ONCE(!list_empty(&net->nft.commit_list)); 7000 + list_for_each_entry_safe(req, next, &net->nft.module_list, list) { 7001 + WARN_ON_ONCE(!req->done); 7002 + list_del(&req->list); 7003 + kfree(req); 7004 + } 7005 + } 7006 + 7007 static void nf_tables_commit_release(struct net *net) 7008 { 7009 struct nft_trans *trans; ··· 7004 * to prevent expensive synchronize_rcu() in commit phase. 7005 */ 7006 if (list_empty(&net->nft.commit_list)) { 7007 + nf_tables_module_autoload_cleanup(net); 7008 mutex_unlock(&net->nft.commit_mutex); 7009 return; 7010 } ··· 7018 list_splice_tail_init(&net->nft.commit_list, &nf_tables_destroy_list); 7019 spin_unlock(&nf_tables_destroy_list_lock); 7020 7021 + nf_tables_module_autoload_cleanup(net); 7022 mutex_unlock(&net->nft.commit_mutex); 7023 7024 schedule_work(&trans_destroy_work); ··· 7209 return 0; 7210 } 7211 7212 + static void nf_tables_module_autoload(struct net *net) 7213 + { 7214 + struct nft_module_request *req, *next; 7215 + LIST_HEAD(module_list); 7216 + 7217 + list_splice_init(&net->nft.module_list, &module_list); 7218 + mutex_unlock(&net->nft.commit_mutex); 7219 + list_for_each_entry_safe(req, next, &module_list, list) { 7220 + if (req->done) { 7221 + list_del(&req->list); 7222 + kfree(req); 7223 + } else { 7224 + request_module("%s", req->module); 7225 + req->done = true; 7226 + } 7227 + } 7228 + mutex_lock(&net->nft.commit_mutex); 7229 + list_splice(&module_list, &net->nft.module_list); 7230 + } 7231 + 7232 static void nf_tables_abort_release(struct nft_trans *trans) 7233 { 7234 switch (trans->msg_type) { ··· 7238 kfree(trans); 7239 } 7240 7241 + static int __nf_tables_abort(struct net *net, bool autoload) 7242 { 7243 struct nft_trans *trans, *next; 7244 struct nft_trans_elem *te; ··· 7360 nf_tables_abort_release(trans); 7361 } 7362 7363 + if (autoload) 7364 + nf_tables_module_autoload(net); 7365 + else 7366 + nf_tables_module_autoload_cleanup(net); 7367 + 7368 return 0; 7369 } 7370 ··· 7368 nft_validate_state_update(net, NFT_VALIDATE_SKIP); 7369 } 7370 7371 + static int nf_tables_abort(struct net *net, struct sk_buff *skb, bool autoload) 7372 { 7373 + int ret = __nf_tables_abort(net, autoload); 7374 7375 mutex_unlock(&net->nft.commit_mutex); 7376 ··· 7965 { 7966 INIT_LIST_HEAD(&net->nft.tables); 7967 INIT_LIST_HEAD(&net->nft.commit_list); 7968 + INIT_LIST_HEAD(&net->nft.module_list); 7969 mutex_init(&net->nft.commit_mutex); 7970 net->nft.base_seq = 1; 7971 net->nft.validate_state = NFT_VALIDATE_SKIP; ··· 7976 { 7977 mutex_lock(&net->nft.commit_mutex); 7978 if (!list_empty(&net->nft.commit_list)) 7979 + __nf_tables_abort(net, false); 7980 __nft_release_tables(net); 7981 mutex_unlock(&net->nft.commit_mutex); 7982 WARN_ON_ONCE(!list_empty(&net->nft.tables));
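The nf_tables_api.c change above stops calling request_module() with the transaction mutex dropped in the middle of a batch: nft_request_module() now queues a named request on net->nft.module_list and returns -EAGAIN, the batch aborts, the abort path performs the queued loads, and the batch is replayed. A compressed userspace sketch of that queue-and-replay idea follows; the list handling, error codes and module name are illustrative only:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct module_request {
	struct module_request *next;
	char name[64];
	bool done;
};

static struct module_request *pending;

/* queue a request; -EAGAIN (-11) tells the caller to abort and replay */
static int request_module_deferred(const char *name)
{
	struct module_request *req;

	for (req = pending; req; req = req->next)
		if (!strcmp(req->name, name))
			return req->done ? 0 : -11;

	req = calloc(1, sizeof(*req));
	if (!req)
		return -12;			/* -ENOMEM */
	snprintf(req->name, sizeof(req->name), "%s", name);
	req->next = pending;
	pending = req;
	return -11;				/* -EAGAIN */
}

/* abort path: perform the queued loads with no transaction lock held */
static void autoload_pending(void)
{
	struct module_request *req;

	for (req = pending; req; req = req->next)
		if (!req->done) {
			printf("request_module(\"%s\")\n", req->name);
			req->done = true;
		}
}

int main(void)
{
	printf("first pass:  %d\n", request_module_deferred("nft-chain-2-nat"));
	autoload_pending();
	printf("replay pass: %d\n", request_module_deferred("nft-chain-2-nat"));
	return 0;
}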
+1 -1
net/netfilter/nf_tables_offload.c
··· 564 565 mutex_lock(&net->nft.commit_mutex); 566 chain = __nft_offload_get_chain(dev); 567 - if (chain) { 568 struct nft_base_chain *basechain; 569 570 basechain = nft_base_chain(chain);
··· 564 565 mutex_lock(&net->nft.commit_mutex); 566 chain = __nft_offload_get_chain(dev); 567 + if (chain && chain->flags & NFT_CHAIN_HW_OFFLOAD) { 568 struct nft_base_chain *basechain; 569 570 basechain = nft_base_chain(chain);
+3 -3
net/netfilter/nfnetlink.c
··· 476 } 477 done: 478 if (status & NFNL_BATCH_REPLAY) { 479 - ss->abort(net, oskb); 480 nfnl_err_reset(&err_list); 481 kfree_skb(skb); 482 module_put(ss->owner); ··· 487 status |= NFNL_BATCH_REPLAY; 488 goto done; 489 } else if (err) { 490 - ss->abort(net, oskb); 491 netlink_ack(oskb, nlmsg_hdr(oskb), err, NULL); 492 } 493 } else { 494 - ss->abort(net, oskb); 495 } 496 if (ss->cleanup) 497 ss->cleanup(net);
··· 476 } 477 done: 478 if (status & NFNL_BATCH_REPLAY) { 479 + ss->abort(net, oskb, true); 480 nfnl_err_reset(&err_list); 481 kfree_skb(skb); 482 module_put(ss->owner); ··· 487 status |= NFNL_BATCH_REPLAY; 488 goto done; 489 } else if (err) { 490 + ss->abort(net, oskb, false); 491 netlink_ack(oskb, nlmsg_hdr(oskb), err, NULL); 492 } 493 } else { 494 + ss->abort(net, oskb, false); 495 } 496 if (ss->cleanup) 497 ss->cleanup(net);
+3
net/netfilter/nft_osf.c
··· 61 int err; 62 u8 ttl; 63 64 if (tb[NFTA_OSF_TTL]) { 65 ttl = nla_get_u8(tb[NFTA_OSF_TTL]); 66 if (ttl > 2)
··· 61 int err; 62 u8 ttl; 63 64 + if (!tb[NFTA_OSF_DREG]) 65 + return -EINVAL; 66 + 67 if (tb[NFTA_OSF_TTL]) { 68 ttl = nla_get_u8(tb[NFTA_OSF_TTL]); 69 if (ttl > 2)
+1 -1
net/rose/af_rose.c
··· 1475 int rc; 1476 1477 if (rose_ndevs > 0x7FFFFFFF/sizeof(struct net_device *)) { 1478 - printk(KERN_ERR "ROSE: rose_proto_init - rose_ndevs parameter to large\n"); 1479 rc = -EINVAL; 1480 goto out; 1481 }
··· 1475 int rc; 1476 1477 if (rose_ndevs > 0x7FFFFFFF/sizeof(struct net_device *)) { 1478 + printk(KERN_ERR "ROSE: rose_proto_init - rose_ndevs parameter too large\n"); 1479 rc = -EINVAL; 1480 goto out; 1481 }
+2 -3
net/sched/cls_api.c
··· 2055 &chain_info)); 2056 2057 mutex_unlock(&chain->filter_chain_lock); 2058 - tp_new = tcf_proto_create(nla_data(tca[TCA_KIND]), 2059 - protocol, prio, chain, rtnl_held, 2060 - extack); 2061 if (IS_ERR(tp_new)) { 2062 err = PTR_ERR(tp_new); 2063 goto errout_tp;
··· 2055 &chain_info)); 2056 2057 mutex_unlock(&chain->filter_chain_lock); 2058 + tp_new = tcf_proto_create(name, protocol, prio, chain, 2059 + rtnl_held, extack); 2060 if (IS_ERR(tp_new)) { 2061 err = PTR_ERR(tp_new); 2062 goto errout_tp;
+1 -1
net/sched/ematch.c
··· 263 } 264 em->data = (unsigned long) v; 265 } 266 } 267 } 268 269 em->matchid = em_hdr->matchid; 270 em->flags = em_hdr->flags; 271 - em->datalen = data_len; 272 em->net = net; 273 274 err = 0;
··· 263 } 264 em->data = (unsigned long) v; 265 } 266 + em->datalen = data_len; 267 } 268 } 269 270 em->matchid = em_hdr->matchid; 271 em->flags = em_hdr->flags; 272 em->net = net; 273 274 err = 0;
+26 -8
net/xfrm/xfrm_interface.c
··· 268 int err = -1; 269 int mtu; 270 271 - if (!dst) 272 - goto tx_err_link_failure; 273 - 274 dst_hold(dst); 275 dst = xfrm_lookup_with_ifid(xi->net, dst, fl, NULL, 0, xi->p.if_id); 276 if (IS_ERR(dst)) { ··· 294 295 mtu = dst_mtu(dst); 296 if (!skb->ignore_df && skb->len > mtu) { 297 - skb_dst_update_pmtu(skb, mtu); 298 299 if (skb->protocol == htons(ETH_P_IPV6)) { 300 if (mtu < IPV6_MIN_MTU) ··· 340 { 341 struct xfrm_if *xi = netdev_priv(dev); 342 struct net_device_stats *stats = &xi->dev->stats; 343 struct flowi fl; 344 int ret; 345 ··· 350 case htons(ETH_P_IPV6): 351 xfrm_decode_session(skb, &fl, AF_INET6); 352 memset(IP6CB(skb), 0, sizeof(*IP6CB(skb))); 353 break; 354 case htons(ETH_P_IP): 355 xfrm_decode_session(skb, &fl, AF_INET); 356 memset(IPCB(skb), 0, sizeof(*IPCB(skb))); 357 break; 358 default: 359 goto tx_err; ··· 584 { 585 dev->netdev_ops = &xfrmi_netdev_ops; 586 dev->type = ARPHRD_NONE; 587 - dev->hard_header_len = ETH_HLEN; 588 - dev->min_header_len = ETH_HLEN; 589 dev->mtu = ETH_DATA_LEN; 590 dev->min_mtu = ETH_MIN_MTU; 591 - dev->max_mtu = ETH_DATA_LEN; 592 - dev->addr_len = ETH_ALEN; 593 dev->flags = IFF_NOARP; 594 dev->needs_free_netdev = true; 595 dev->priv_destructor = xfrmi_dev_free;
··· 268 int err = -1; 269 int mtu; 270 271 dst_hold(dst); 272 dst = xfrm_lookup_with_ifid(xi->net, dst, fl, NULL, 0, xi->p.if_id); 273 if (IS_ERR(dst)) { ··· 297 298 mtu = dst_mtu(dst); 299 if (!skb->ignore_df && skb->len > mtu) { 300 + skb_dst_update_pmtu_no_confirm(skb, mtu); 301 302 if (skb->protocol == htons(ETH_P_IPV6)) { 303 if (mtu < IPV6_MIN_MTU) ··· 343 { 344 struct xfrm_if *xi = netdev_priv(dev); 345 struct net_device_stats *stats = &xi->dev->stats; 346 + struct dst_entry *dst = skb_dst(skb); 347 struct flowi fl; 348 int ret; 349 ··· 352 case htons(ETH_P_IPV6): 353 xfrm_decode_session(skb, &fl, AF_INET6); 354 memset(IP6CB(skb), 0, sizeof(*IP6CB(skb))); 355 + if (!dst) { 356 + fl.u.ip6.flowi6_oif = dev->ifindex; 357 + fl.u.ip6.flowi6_flags |= FLOWI_FLAG_ANYSRC; 358 + dst = ip6_route_output(dev_net(dev), NULL, &fl.u.ip6); 359 + if (dst->error) { 360 + dst_release(dst); 361 + stats->tx_carrier_errors++; 362 + goto tx_err; 363 + } 364 + skb_dst_set(skb, dst); 365 + } 366 break; 367 case htons(ETH_P_IP): 368 xfrm_decode_session(skb, &fl, AF_INET); 369 memset(IPCB(skb), 0, sizeof(*IPCB(skb))); 370 + if (!dst) { 371 + struct rtable *rt; 372 + 373 + fl.u.ip4.flowi4_oif = dev->ifindex; 374 + fl.u.ip4.flowi4_flags |= FLOWI_FLAG_ANYSRC; 375 + rt = __ip_route_output_key(dev_net(dev), &fl.u.ip4); 376 + if (IS_ERR(rt)) { 377 + stats->tx_carrier_errors++; 378 + goto tx_err; 379 + } 380 + skb_dst_set(skb, &rt->dst); 381 + } 382 break; 383 default: 384 goto tx_err; ··· 563 { 564 dev->netdev_ops = &xfrmi_netdev_ops; 565 dev->type = ARPHRD_NONE; 566 dev->mtu = ETH_DATA_LEN; 567 dev->min_mtu = ETH_MIN_MTU; 568 + dev->max_mtu = IP_MAX_MTU; 569 dev->flags = IFF_NOARP; 570 dev->needs_free_netdev = true; 571 dev->priv_destructor = xfrmi_dev_free;