Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-5.11-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Networking fixes, including fixes from netfilter, wireless and bpf
trees.

Current release - regressions:

- mt76: fix NULL pointer dereference in mt76u_status_worker and
mt76s_process_tx_queue

- net: ipa: fix interconnect enable bug

Current release - always broken:

- netfilter: fix possible oops in mtype_resize in ipset

- ath11k: fix a number of coding issues found by static analysis tools
and spurious error messages

Previous releases - regressions:

- e1000e: re-enable s0ix power saving flows for systems with Intel
i219-LM Ethernet controllers to fix a power use regression

- virtio_net: fix recursive call to cpus_read_lock() to avoid a
deadlock

- ipv4: ignore ECN bits for fib lookups in fib_compute_spec_dst()

- sysfs: take the rtnl lock around XPS configuration

- xsk: fix memory leak for failed bind and rollback reservation at
NETDEV_TX_BUSY
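
Rolling back a partially completed setup follows the kernel's
goto-unwind idiom; the sketch below is illustrative only (names and the
failure injection are invented), showing the shape of such a fix: each
label frees exactly what was acquired before the failure point, so an
aborted bind leaks nothing.

```c
#include <stdlib.h>

/* Hypothetical sketch of goto-based error unwinding, not the xsk code.
 * fail_second simulates the second acquisition failing mid-setup. */
int setup_two_buffers(int fail_second)
{
	char *a, *b;

	a = malloc(16);
	if (!a)
		goto err;

	b = fail_second ? NULL : malloc(16);
	if (!b)
		goto err_free_a;	/* roll back only what we acquired */

	free(b);
	free(a);
	return 0;

err_free_a:
	free(a);
err:
	return -1;
}
```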

- r8169: work around power-saving bug on some chip versions

Previous releases - always broken:

- dcb: validate netlink message in DCB handler

- tun: fix return value when the number of iovs exceeds MAX_SKB_FRAGS
to prevent unnecessary retries

- vhost_net: fix ubuf refcount when sendmsg fails

- bpf: save correct stopping point in file seq iteration

- ncsi: use real net-device for response handler

- neighbor: fix div by zero caused by a data race (TOCTOU)
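
The underlying pattern is a check and a use of a value that a
concurrent writer can change in between; a minimal sketch of the fix
shape (illustrative names, READ_ONCE approximated with volatile, not
the neighbour code) is:

```c
/* Illustrative sketch of closing a TOCTOU window: snapshot the shared
 * tunable once, then validate and use that same snapshot, so a
 * concurrent write of 0 can never reach the '%' operator. */
static volatile unsigned int shared_interval = 1;

unsigned int safe_mod(unsigned int value)
{
	unsigned int snap = shared_interval;	/* single read */

	if (snap == 0)		/* validate the copy we will divide by */
		snap = 1;	/* fall back to a sane divisor */
	return value % snap;
}
```

Re-reading `shared_interval` at the `%` instead of using `snap` would
reintroduce the race.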

- bareudp: fix use of incorrect min_headroom size and a false
positive lockdep splat from the TX lock

- mvpp2:
- clear force link UP during port init procedure in case
bootloader had set it
- add TCAM entry to drop flow control pause frames
- fix PPPoE with ipv6 packet parsing
- fix GoP Networking Complex Control config of port 3
- fix pkt coalescing IRQ-threshold configuration

- xsk: fix race in SKB mode transmit with shared cq
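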

- ionic: account for vlan tag len in rx buffer len

- stmmac: ignore the second clock input; the current clock framework
does not handle exclusive clock use well, and other drivers may
reconfigure the second clock

Misc:

- ppp: change PPPIOCUNBRIDGECHAN ioctl request number to follow
existing scheme"
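
For context, ioctl request numbers in the existing scheme are composed
from a direction, an argument size, a per-driver magic character, and a
sequence number (the real macros live in
include/uapi/asm-generic/ioctl.h; PPP ioctls use the 't' magic). The
macro below re-derives that bit layout as a standalone sketch; it is
not the kernel header itself, and the specific sequence number is
illustrative.

```c
/* Standalone sketch of the asm-generic ioctl number layout:
 * bits 0-7 sequence number, bits 8-15 magic type character,
 * bits 16-29 argument size, bits 30-31 transfer direction. */
#define SKETCH_IOC(dir, type, nr, size) \
	(((unsigned int)(dir) << 30) | ((unsigned int)(size) << 16) | \
	 ((unsigned int)(type) << 8) | (unsigned int)(nr))

/* _IO-style request: no data transfer, so direction and size are 0. */
#define SKETCH_IO(type, nr)	SKETCH_IOC(0U, (type), (nr), 0U)
```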

* tag 'net-5.11-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (99 commits)
net: dsa: lantiq_gswip: Fix GSWIP_MII_CFG(p) register access
net: dsa: lantiq_gswip: Enable GSWIP_MII_CFG_EN also for internal PHYs
net: lapb: Decrease the refcount of "struct lapb_cb" in lapb_device_event
r8169: work around power-saving bug on some chip versions
net: usb: qmi_wwan: add Quectel EM160R-GL
selftests: mlxsw: Set headroom size of correct port
net: macb: Correct usage of MACB_CAPS_CLK_HW_CHG flag
ibmvnic: fix: NULL pointer dereference.
docs: networking: packet_mmap: fix old config reference
docs: networking: packet_mmap: fix formatting for C macros
vhost_net: fix ubuf refcount incorrectly when sendmsg fails
bareudp: Fix use of incorrect min_headroom size
bareudp: set NETIF_F_LLTX flag
net: hdlc_ppp: Fix issues when mod_timer is called while timer is running
atlantic: remove architecture depends
erspan: fix version 1 check in gre_parse_header()
net: hns: fix return value check in __lb_other_process()
net: sched: prevent invalid Scell_log shift count
net: neighbor: fix a crash caused by mod zero
ipv4: Ignore ECN bits for fib lookups in fib_compute_spec_dst()
...

+709 -410
+59 -67
Documentation/networking/netdev-FAQ.rst
··· 6 6 netdev FAQ 7 7 ========== 8 8 9 - Q: What is netdev? 10 - ------------------ 11 - A: It is a mailing list for all network-related Linux stuff. This 9 + What is netdev? 10 + --------------- 11 + It is a mailing list for all network-related Linux stuff. This 12 12 includes anything found under net/ (i.e. core code like IPv6) and 13 13 drivers/net (i.e. hardware specific drivers) in the Linux source tree. 14 14 ··· 25 25 Linux development (i.e. RFC, review, comments, etc.) takes place on 26 26 netdev. 27 27 28 - Q: How do the changes posted to netdev make their way into Linux? 29 - ----------------------------------------------------------------- 30 - A: There are always two trees (git repositories) in play. Both are 28 + How do the changes posted to netdev make their way into Linux? 29 + -------------------------------------------------------------- 30 + There are always two trees (git repositories) in play. Both are 31 31 driven by David Miller, the main network maintainer. There is the 32 32 ``net`` tree, and the ``net-next`` tree. As you can probably guess from 33 33 the names, the ``net`` tree is for fixes to existing code already in the ··· 37 37 - https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git 38 38 - https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git 39 39 40 - Q: How often do changes from these trees make it to the mainline Linus tree? 41 - ---------------------------------------------------------------------------- 42 - A: To understand this, you need to know a bit of background information on 40 + How often do changes from these trees make it to the mainline Linus tree? 41 + ------------------------------------------------------------------------- 42 + To understand this, you need to know a bit of background information on 43 43 the cadence of Linux development. 
Each new release starts off with a 44 44 two week "merge window" where the main maintainers feed their new stuff 45 45 to Linus for merging into the mainline tree. After the two weeks, the ··· 81 81 82 82 Finally, the vX.Y gets released, and the whole cycle starts over. 83 83 84 - Q: So where are we now in this cycle? 84 + So where are we now in this cycle? 85 + ---------------------------------- 85 86 86 87 Load the mainline (Linus) page here: 87 88 ··· 92 91 the dev cycle. If it was tagged rc7 a week ago, then a release is 93 92 probably imminent. 94 93 95 - Q: How do I indicate which tree (net vs. net-next) my patch should be in? 96 - ------------------------------------------------------------------------- 97 - A: Firstly, think whether you have a bug fix or new "next-like" content. 94 + How do I indicate which tree (net vs. net-next) my patch should be in? 95 + ---------------------------------------------------------------------- 96 + Firstly, think whether you have a bug fix or new "next-like" content. 98 97 Then once decided, assuming that you use git, use the prefix flag, i.e. 99 98 :: 100 99 ··· 106 105 can manually change it yourself with whatever MUA you are comfortable 107 106 with. 108 107 109 - Q: I sent a patch and I'm wondering what happened to it? 110 - -------------------------------------------------------- 111 - Q: How can I tell whether it got merged? 112 - A: Start by looking at the main patchworks queue for netdev: 108 + I sent a patch and I'm wondering what happened to it - how can I tell whether it got merged? 109 + -------------------------------------------------------------------------------------------- 110 + Start by looking at the main patchworks queue for netdev: 113 111 114 112 https://patchwork.kernel.org/project/netdevbpf/list/ 115 113 116 114 The "State" field will tell you exactly where things are at with your 117 115 patch. 118 116 119 - Q: The above only says "Under Review". How can I find out more? 
120 - ---------------------------------------------------------------- 121 - A: Generally speaking, the patches get triaged quickly (in less than 117 + The above only says "Under Review". How can I find out more? 118 + ------------------------------------------------------------- 119 + Generally speaking, the patches get triaged quickly (in less than 122 120 48h). So be patient. Asking the maintainer for status updates on your 123 121 patch is a good way to ensure your patch is ignored or pushed to the 124 122 bottom of the priority list. 125 123 126 - Q: I submitted multiple versions of the patch series 127 - ---------------------------------------------------- 128 - Q: should I directly update patchwork for the previous versions of these 129 - patch series? 130 - A: No, please don't interfere with the patch status on patchwork, leave 124 + I submitted multiple versions of the patch series. Should I directly update patchwork for the previous versions of these patch series? 125 + -------------------------------------------------------------------------------------------------------------------------------------- 126 + No, please don't interfere with the patch status on patchwork, leave 131 127 it to the maintainer to figure out what is the most recent and current 132 128 version that should be applied. If there is any doubt, the maintainer 133 129 will reply and ask what should be done. 134 130 135 - Q: I made changes to only a few patches in a patch series should I resend only those changed? 136 - --------------------------------------------------------------------------------------------- 137 - A: No, please resend the entire patch series and make sure you do number your 131 + I made changes to only a few patches in a patch series should I resend only those changed? 
132 + ------------------------------------------------------------------------------------------ 133 + No, please resend the entire patch series and make sure you do number your 138 134 patches such that it is clear this is the latest and greatest set of patches 139 135 that can be applied. 140 136 141 - Q: I submitted multiple versions of a patch series and it looks like a version other than the last one has been accepted, what should I do? 142 - ------------------------------------------------------------------------------------------------------------------------------------------- 143 - A: There is no revert possible, once it is pushed out, it stays like that. 137 + I submitted multiple versions of a patch series and it looks like a version other than the last one has been accepted, what should I do? 138 + ---------------------------------------------------------------------------------------------------------------------------------------- 139 + There is no revert possible, once it is pushed out, it stays like that. 144 140 Please send incremental versions on top of what has been merged in order to fix 145 141 the patches the way they would look like if your latest patch series was to be 146 142 merged. 147 143 148 - Q: How can I tell what patches are queued up for backporting to the various stable releases? 149 - -------------------------------------------------------------------------------------------- 150 - A: Normally Greg Kroah-Hartman collects stable commits himself, but for 144 + How can I tell what patches are queued up for backporting to the various stable releases? 145 + ----------------------------------------------------------------------------------------- 146 + Normally Greg Kroah-Hartman collects stable commits himself, but for 151 147 networking, Dave collects up patches he deems critical for the 152 148 networking subsystem, and then hands them off to Greg. 
153 149 ··· 167 169 releases/3.9.8/ipv6-fix-possible-crashes-in-ip6_cork_release.patch 168 170 stable/stable-queue$ 169 171 170 - Q: I see a network patch and I think it should be backported to stable. 171 - ----------------------------------------------------------------------- 172 - Q: Should I request it via stable@vger.kernel.org like the references in 173 - the kernel's Documentation/process/stable-kernel-rules.rst file say? 174 - A: No, not for networking. Check the stable queues as per above first 172 + I see a network patch and I think it should be backported to stable. Should I request it via stable@vger.kernel.org like the references in the kernel's Documentation/process/stable-kernel-rules.rst file say? 173 + --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 174 + No, not for networking. Check the stable queues as per above first 175 175 to see if it is already queued. If not, then send a mail to netdev, 176 176 listing the upstream commit ID and why you think it should be a stable 177 177 candidate. ··· 186 190 scrambling to request a commit be added the day after it appears should 187 191 be avoided. 188 192 189 - Q: I have created a network patch and I think it should be backported to stable. 190 - -------------------------------------------------------------------------------- 191 - Q: Should I add a Cc: stable@vger.kernel.org like the references in the 192 - kernel's Documentation/ directory say? 193 - A: No. See above answer. In short, if you think it really belongs in 193 + I have created a network patch and I think it should be backported to stable. Should I add a Cc: stable@vger.kernel.org like the references in the kernel's Documentation/ directory say? 
194 + ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 195 + No. See above answer. In short, if you think it really belongs in 194 196 stable, then ensure you write a decent commit log that describes who 195 197 gets impacted by the bug fix and how it manifests itself, and when the 196 198 bug was introduced. If you do that properly, then the commit will get ··· 201 207 :ref:`Documentation/process/submitting-patches.rst <the_canonical_patch_format>` 202 208 to temporarily embed that information into the patch that you send. 203 209 204 - Q: Are all networking bug fixes backported to all stable releases? 205 - ------------------------------------------------------------------ 206 - A: Due to capacity, Dave could only take care of the backports for the 210 + Are all networking bug fixes backported to all stable releases? 211 + --------------------------------------------------------------- 212 + Due to capacity, Dave could only take care of the backports for the 207 213 last two stable releases. For earlier stable releases, each stable 208 214 branch maintainer is supposed to take care of them. If you find any 209 215 patch is missing from an earlier stable branch, please notify 210 216 stable@vger.kernel.org with either a commit ID or a formal patch 211 217 backported, and CC Dave and other relevant networking developers. 212 218 213 - Q: Is the comment style convention different for the networking content? 214 - ------------------------------------------------------------------------ 215 - A: Yes, in a largely trivial way. Instead of this:: 219 + Is the comment style convention different for the networking content? 220 + --------------------------------------------------------------------- 221 + Yes, in a largely trivial way. 
Instead of this:: 216 222 217 223 /* 218 224 * foobar blah blah blah ··· 225 231 * another line of text 226 232 */ 227 233 228 - Q: I am working in existing code that has the former comment style and not the latter. 229 - -------------------------------------------------------------------------------------- 230 - Q: Should I submit new code in the former style or the latter? 231 - A: Make it the latter style, so that eventually all code in the domain 234 + I am working in existing code that has the former comment style and not the latter. Should I submit new code in the former style or the latter? 235 + ----------------------------------------------------------------------------------------------------------------------------------------------- 236 + Make it the latter style, so that eventually all code in the domain 232 237 of netdev is of this format. 233 238 234 - Q: I found a bug that might have possible security implications or similar. 235 - --------------------------------------------------------------------------- 236 - Q: Should I mail the main netdev maintainer off-list?** 237 - A: No. The current netdev maintainer has consistently requested that 239 + I found a bug that might have possible security implications or similar. Should I mail the main netdev maintainer off-list? 240 + --------------------------------------------------------------------------------------------------------------------------- 241 + No. The current netdev maintainer has consistently requested that 238 242 people use the mailing lists and not reach out directly. If you aren't 239 243 OK with that, then perhaps consider mailing security@kernel.org or 240 244 reading about http://oss-security.openwall.org/wiki/mailing-lists/distros 241 245 as possible alternative mechanisms. 242 246 243 - Q: What level of testing is expected before I submit my change? 
244 - --------------------------------------------------------------- 245 - A: If your changes are against ``net-next``, the expectation is that you 247 + What level of testing is expected before I submit my change? 248 + ------------------------------------------------------------ 249 + If your changes are against ``net-next``, the expectation is that you 246 250 have tested by layering your changes on top of ``net-next``. Ideally 247 251 you will have done run-time testing specific to your change, but at a 248 252 minimum, your changes should survive an ``allyesconfig`` and an 249 253 ``allmodconfig`` build without new warnings or failures. 250 254 251 - Q: How do I post corresponding changes to user space components? 252 - ---------------------------------------------------------------- 253 - A: User space code exercising kernel features should be posted 255 + How do I post corresponding changes to user space components? 256 + ------------------------------------------------------------- 257 + User space code exercising kernel features should be posted 254 258 alongside kernel patches. This gives reviewers a chance to see 255 259 how any new interface is used and how well it works. 256 260 ··· 272 280 Posting as one thread is discouraged because it confuses patchwork 273 281 (as of patchwork 2.2.2). 274 282 275 - Q: Any other tips to help ensure my net/net-next patch gets OK'd? 276 - ----------------------------------------------------------------- 277 - A: Attention to detail. Re-read your own work as if you were the 283 + Any other tips to help ensure my net/net-next patch gets OK'd? 284 + -------------------------------------------------------------- 285 + Attention to detail. Re-read your own work as if you were the 278 286 reviewer. You can start with using ``checkpatch.pl``, perhaps even with 279 287 the ``--strict`` flag. But do not be mindlessly robotic in doing so. 280 288 If your change is a bug fix, make sure your commit log indicates the
+5 -6
Documentation/networking/packet_mmap.rst
··· 8 8 ======== 9 9 10 10 This file documents the mmap() facility available with the PACKET 11 - socket interface on 2.4/2.6/3.x kernels. This type of sockets is used for 11 + socket interface. This type of sockets is used for 12 12 13 13 i) capture network traffic with utilities like tcpdump, 14 14 ii) transmit network traffic, or any other that needs raw ··· 25 25 Why use PACKET_MMAP 26 26 =================== 27 27 28 - In Linux 2.4/2.6/3.x if PACKET_MMAP is not enabled, the capture process is very 28 + Non PACKET_MMAP capture process (plain AF_PACKET) is very 29 29 inefficient. It uses very limited buffers and requires one system call to 30 30 capture each packet, it requires two if you want to get packet's timestamp 31 31 (like libpcap always does). 32 32 33 - In the other hand PACKET_MMAP is very efficient. PACKET_MMAP provides a size 33 + On the other hand PACKET_MMAP is very efficient. PACKET_MMAP provides a size 34 34 configurable circular buffer mapped in user space that can be used to either 35 35 send or receive packets. This way reading packets just needs to wait for them, 36 36 most of the time there is no need to issue a single system call. Concerning ··· 252 252 253 253 In kernel versions prior to 2.4.26 (for the 2.4 branch) and 2.6.5 (2.6 branch), 254 254 the PACKET_MMAP buffer could hold only 32768 frames in a 32 bit architecture or 255 - 16384 in a 64 bit architecture. For information on these kernel versions 256 - see http://pusa.uv.es/~ulisses/packet_mmap/packet_mmap.pre-2.4.26_2.6.5.txt 255 + 16384 in a 64 bit architecture. 257 256 258 257 Block size limit 259 258 ---------------- ··· 436 437 Capture process 437 438 ^^^^^^^^^^^^^^^ 438 439 439 - from include/linux/if_packet.h 440 + From include/linux/if_packet.h:: 440 441 441 442 #define TP_STATUS_COPY (1 << 1) 442 443 #define TP_STATUS_LOSING (1 << 2)
+13 -13
MAINTAINERS
··· 203 203 F: net/wireless/ 204 204 205 205 8169 10/100/1000 GIGABIT ETHERNET DRIVER 206 - M: Realtek linux nic maintainers <nic_swsd@realtek.com> 207 206 M: Heiner Kallweit <hkallweit1@gmail.com> 207 + M: nic_swsd@realtek.com 208 208 L: netdev@vger.kernel.org 209 209 S: Maintained 210 210 F: drivers/net/ethernet/realtek/r8169* ··· 2119 2119 ARM/Microchip Sparx5 SoC support 2120 2120 M: Lars Povlsen <lars.povlsen@microchip.com> 2121 2121 M: Steen Hegelund <Steen.Hegelund@microchip.com> 2122 - M: Microchip Linux Driver Support <UNGLinuxDriver@microchip.com> 2122 + M: UNGLinuxDriver@microchip.com 2123 2123 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 2124 2124 S: Supported 2125 2125 T: git git://github.com/microchip-ung/linux-upstream.git ··· 3556 3556 F: drivers/net/ethernet/broadcom/bnxt/ 3557 3557 3558 3558 BROADCOM BRCM80211 IEEE802.11n WIRELESS DRIVER 3559 - M: Arend van Spriel <arend.vanspriel@broadcom.com> 3559 + M: Arend van Spriel <aspriel@gmail.com> 3560 3560 M: Franky Lin <franky.lin@broadcom.com> 3561 3561 M: Hante Meuleman <hante.meuleman@broadcom.com> 3562 3562 M: Chi-hsien Lin <chi-hsien.lin@infineon.com> ··· 3961 3961 CAN-J1939 NETWORK LAYER 3962 3962 M: Robin van der Gracht <robin@protonic.nl> 3963 3963 M: Oleksij Rempel <o.rempel@pengutronix.de> 3964 - R: Pengutronix Kernel Team <kernel@pengutronix.de> 3964 + R: kernel@pengutronix.de 3965 3965 L: linux-can@vger.kernel.org 3966 3966 S: Maintained 3967 3967 F: Documentation/networking/j1939.rst ··· 11667 11667 11668 11668 MICROCHIP KSZ SERIES ETHERNET SWITCH DRIVER 11669 11669 M: Woojung Huh <woojung.huh@microchip.com> 11670 - M: Microchip Linux Driver Support <UNGLinuxDriver@microchip.com> 11670 + M: UNGLinuxDriver@microchip.com 11671 11671 L: netdev@vger.kernel.org 11672 11672 S: Maintained 11673 11673 F: Documentation/devicetree/bindings/net/dsa/microchip,ksz.yaml ··· 11677 11677 11678 11678 MICROCHIP LAN743X ETHERNET DRIVER 11679 11679 M: Bryan Whitehead 
<bryan.whitehead@microchip.com> 11680 - M: Microchip Linux Driver Support <UNGLinuxDriver@microchip.com> 11680 + M: UNGLinuxDriver@microchip.com 11681 11681 L: netdev@vger.kernel.org 11682 11682 S: Maintained 11683 11683 F: drivers/net/ethernet/microchip/lan743x_* ··· 11771 11771 11772 11772 MICROSEMI MIPS SOCS 11773 11773 M: Alexandre Belloni <alexandre.belloni@bootlin.com> 11774 - M: Microchip Linux Driver Support <UNGLinuxDriver@microchip.com> 11774 + M: UNGLinuxDriver@microchip.com 11775 11775 L: linux-mips@vger.kernel.org 11776 11776 S: Supported 11777 11777 F: Documentation/devicetree/bindings/mips/mscc.txt ··· 12825 12825 F: include/linux/objtool.h 12826 12826 12827 12827 OCELOT ETHERNET SWITCH DRIVER 12828 - M: Microchip Linux Driver Support <UNGLinuxDriver@microchip.com> 12829 12828 M: Vladimir Oltean <vladimir.oltean@nxp.com> 12830 12829 M: Claudiu Manoil <claudiu.manoil@nxp.com> 12831 12830 M: Alexandre Belloni <alexandre.belloni@bootlin.com> 12831 + M: UNGLinuxDriver@microchip.com 12832 12832 L: netdev@vger.kernel.org 12833 12833 S: Supported 12834 12834 F: drivers/net/dsa/ocelot/* ··· 13890 13890 13891 13891 PENSANDO ETHERNET DRIVERS 13892 13892 M: Shannon Nelson <snelson@pensando.io> 13893 - M: Pensando Drivers <drivers@pensando.io> 13893 + M: drivers@pensando.io 13894 13894 L: netdev@vger.kernel.org 13895 13895 S: Supported 13896 13896 F: Documentation/networking/device_drivers/ethernet/pensando/ionic.rst ··· 14669 14669 F: drivers/net/wireless/ath/ath11k/ 14670 14670 14671 14671 QUALCOMM ATHEROS ATH9K WIRELESS DRIVER 14672 - M: QCA ath9k Development <ath9k-devel@qca.qualcomm.com> 14672 + M: ath9k-devel@qca.qualcomm.com 14673 14673 L: linux-wireless@vger.kernel.org 14674 14674 S: Supported 14675 14675 W: https://wireless.wiki.kernel.org/en/users/Drivers/ath9k ··· 18370 18370 18371 18371 USB LAN78XX ETHERNET DRIVER 18372 18372 M: Woojung Huh <woojung.huh@microchip.com> 18373 - M: Microchip Linux Driver Support <UNGLinuxDriver@microchip.com> 18373 + M: 
UNGLinuxDriver@microchip.com 18374 18374 L: netdev@vger.kernel.org 18375 18375 S: Maintained 18376 18376 F: Documentation/devicetree/bindings/net/microchip,lan78xx.txt ··· 18484 18484 18485 18485 USB SMSC95XX ETHERNET DRIVER 18486 18486 M: Steve Glendinning <steve.glendinning@shawell.net> 18487 - M: Microchip Linux Driver Support <UNGLinuxDriver@microchip.com> 18487 + M: UNGLinuxDriver@microchip.com 18488 18488 L: netdev@vger.kernel.org 18489 18489 S: Maintained 18490 18490 F: drivers/net/usb/smsc95xx.* ··· 19031 19031 19032 19032 VMWARE VMXNET3 ETHERNET DRIVER 19033 19033 M: Ronak Doshi <doshir@vmware.com> 19034 - M: "VMware, Inc." <pv-drivers@vmware.com> 19034 + M: pv-drivers@vmware.com 19035 19035 L: netdev@vger.kernel.org 19036 19036 S: Maintained 19037 19037 F: drivers/net/vmxnet3/
+1 -1
drivers/atm/idt77252.c
··· 3607 3607 3608 3608 if ((err = dma_set_mask_and_coherent(&pcidev->dev, DMA_BIT_MASK(32)))) { 3609 3609 printk("idt77252: can't enable DMA for PCI device at %s\n", pci_name(pcidev)); 3610 - return err; 3610 + goto err_out_disable_pdev; 3611 3611 } 3612 3612 3613 3613 card = kzalloc(sizeof(struct idt77252_dev), GFP_KERNEL);
+2 -1
drivers/net/bareudp.c
··· 380 380 goto free_dst; 381 381 382 382 min_headroom = LL_RESERVED_SPACE(dst->dev) + dst->header_len + 383 - BAREUDP_BASE_HLEN + info->options_len + sizeof(struct iphdr); 383 + BAREUDP_BASE_HLEN + info->options_len + sizeof(struct ipv6hdr); 384 384 385 385 err = skb_cow_head(skb, min_headroom); 386 386 if (unlikely(err)) ··· 534 534 SET_NETDEV_DEVTYPE(dev, &bareudp_type); 535 535 dev->features |= NETIF_F_SG | NETIF_F_HW_CSUM; 536 536 dev->features |= NETIF_F_RXCSUM; 537 + dev->features |= NETIF_F_LLTX; 537 538 dev->features |= NETIF_F_GSO_SOFTWARE; 538 539 dev->hw_features |= NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_RXCSUM; 539 540 dev->hw_features |= NETIF_F_GSO_SOFTWARE;
+7 -20
drivers/net/dsa/lantiq_gswip.c
··· 92 92 GSWIP_MDIO_PHY_FDUP_MASK) 93 93 94 94 /* GSWIP MII Registers */ 95 - #define GSWIP_MII_CFG0 0x00 96 - #define GSWIP_MII_CFG1 0x02 97 - #define GSWIP_MII_CFG5 0x04 95 + #define GSWIP_MII_CFGp(p) (0x2 * (p)) 98 96 #define GSWIP_MII_CFG_EN BIT(14) 99 97 #define GSWIP_MII_CFG_LDCLKDIS BIT(12) 100 98 #define GSWIP_MII_CFG_MODE_MIIP 0x0 ··· 390 392 static void gswip_mii_mask_cfg(struct gswip_priv *priv, u32 clear, u32 set, 391 393 int port) 392 394 { 393 - switch (port) { 394 - case 0: 395 - gswip_mii_mask(priv, clear, set, GSWIP_MII_CFG0); 396 - break; 397 - case 1: 398 - gswip_mii_mask(priv, clear, set, GSWIP_MII_CFG1); 399 - break; 400 - case 5: 401 - gswip_mii_mask(priv, clear, set, GSWIP_MII_CFG5); 402 - break; 403 - } 395 + /* There's no MII_CFG register for the CPU port */ 396 + if (!dsa_is_cpu_port(priv->ds, port)) 397 + gswip_mii_mask(priv, clear, set, GSWIP_MII_CFGp(port)); 404 398 } 405 399 406 400 static void gswip_mii_mask_pcdu(struct gswip_priv *priv, u32 clear, u32 set, ··· 812 822 gswip_mdio_mask(priv, 0xff, 0x09, GSWIP_MDIO_MDC_CFG1); 813 823 814 824 /* Disable the xMII link */ 815 - gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, 0); 816 - gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, 1); 817 - gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, 5); 825 + for (i = 0; i < priv->hw_info->max_ports; i++) 826 + gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, i); 818 827 819 828 /* enable special tag insertion on cpu port */ 820 829 gswip_switch_mask(priv, 0, GSWIP_FDMA_PCTRL_STEN, ··· 1530 1541 { 1531 1542 struct gswip_priv *priv = ds->priv; 1532 1543 1533 - /* Enable the xMII interface only for the external PHY */ 1534 - if (interface != PHY_INTERFACE_MODE_INTERNAL) 1535 - gswip_mii_mask_cfg(priv, 0, GSWIP_MII_CFG_EN, port); 1544 + gswip_mii_mask_cfg(priv, 0, GSWIP_MII_CFG_EN, port); 1536 1545 } 1537 1546 1538 1547 static void gswip_get_strings(struct dsa_switch *ds, int port, u32 stringset,
-1
drivers/net/ethernet/aquantia/Kconfig
··· 19 19 config AQTION 20 20 tristate "aQuantia AQtion(tm) Support" 21 21 depends on PCI 22 - depends on X86_64 || ARM64 || COMPILE_TEST 23 22 depends on MACSEC || MACSEC=n 24 23 help 25 24 This enables the support for the aQuantia AQtion(tm) Ethernet card.
+1
drivers/net/ethernet/broadcom/bcmsysport.c
··· 2577 2577 NETIF_F_HW_VLAN_CTAG_TX; 2578 2578 dev->hw_features |= dev->features; 2579 2579 dev->vlan_features |= dev->features; 2580 + dev->max_mtu = UMAC_MAX_MTU_SIZE; 2580 2581 2581 2582 /* Request the WOL interrupt and advertise suspend if available */ 2582 2583 priv->wol_irq_disabled = 1;
+19 -19
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 6790 6790 ctx->tqm_fp_rings_count = resp->tqm_fp_rings_count; 6791 6791 if (!ctx->tqm_fp_rings_count) 6792 6792 ctx->tqm_fp_rings_count = bp->max_q; 6793 + else if (ctx->tqm_fp_rings_count > BNXT_MAX_TQM_FP_RINGS) 6794 + ctx->tqm_fp_rings_count = BNXT_MAX_TQM_FP_RINGS; 6793 6795 6794 - tqm_rings = ctx->tqm_fp_rings_count + 1; 6796 + tqm_rings = ctx->tqm_fp_rings_count + BNXT_MAX_TQM_SP_RINGS; 6795 6797 ctx_pg = kcalloc(tqm_rings, sizeof(*ctx_pg), GFP_KERNEL); 6796 6798 if (!ctx_pg) { 6797 6799 kfree(ctx); ··· 6927 6925 pg_attr = &req.tqm_sp_pg_size_tqm_sp_lvl, 6928 6926 pg_dir = &req.tqm_sp_page_dir, 6929 6927 ena = FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_SP; 6930 - i < 9; i++, num_entries++, pg_attr++, pg_dir++, ena <<= 1) { 6928 + i < BNXT_MAX_TQM_RINGS; 6929 + i++, num_entries++, pg_attr++, pg_dir++, ena <<= 1) { 6931 6930 if (!(enables & ena)) 6932 6931 continue; 6933 6932 ··· 12890 12887 */ 12891 12888 static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev) 12892 12889 { 12890 + pci_ers_result_t result = PCI_ERS_RESULT_DISCONNECT; 12893 12891 struct net_device *netdev = pci_get_drvdata(pdev); 12894 12892 struct bnxt *bp = netdev_priv(netdev); 12895 12893 int err = 0, off; 12896 - pci_ers_result_t result = PCI_ERS_RESULT_DISCONNECT; 12897 12894 12898 12895 netdev_info(bp->dev, "PCI Slot Reset\n"); 12899 12896 ··· 12922 12919 pci_save_state(pdev); 12923 12920 12924 12921 err = bnxt_hwrm_func_reset(bp); 12925 - if (!err) { 12926 - err = bnxt_hwrm_func_qcaps(bp); 12927 - if (!err && netif_running(netdev)) 12928 - err = bnxt_open(netdev); 12929 - } 12930 - bnxt_ulp_start(bp, err); 12931 - if (!err) { 12932 - bnxt_reenable_sriov(bp); 12922 + if (!err) 12933 12923 result = PCI_ERS_RESULT_RECOVERED; 12934 - } 12935 - } 12936 - 12937 - if (result != PCI_ERS_RESULT_RECOVERED) { 12938 - if (netif_running(netdev)) 12939 - dev_close(netdev); 12940 - pci_disable_device(pdev); 12941 12924 } 12942 12925 12943 12926 rtnl_unlock(); ··· 12941 12952 static void 
bnxt_io_resume(struct pci_dev *pdev) 12942 12953 { 12943 12954 struct net_device *netdev = pci_get_drvdata(pdev); 12955 + struct bnxt *bp = netdev_priv(netdev); 12956 + int err; 12944 12957 12958 + netdev_info(bp->dev, "PCI Slot Resume\n"); 12945 12959 rtnl_lock(); 12946 12960 12947 - netif_device_attach(netdev); 12961 + err = bnxt_hwrm_func_qcaps(bp); 12962 + if (!err && netif_running(netdev)) 12963 + err = bnxt_open(netdev); 12964 + 12965 + bnxt_ulp_start(bp, err); 12966 + if (!err) { 12967 + bnxt_reenable_sriov(bp); 12968 + netif_device_attach(netdev); 12969 + } 12948 12970 12949 12971 rtnl_unlock(); 12950 12972 }
+6 -1
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 1436 1436 struct bnxt_ctx_pg_info **ctx_pg_tbl; 1437 1437 }; 1438 1438 1439 + #define BNXT_MAX_TQM_SP_RINGS 1 1440 + #define BNXT_MAX_TQM_FP_RINGS 8 1441 + #define BNXT_MAX_TQM_RINGS \ 1442 + (BNXT_MAX_TQM_SP_RINGS + BNXT_MAX_TQM_FP_RINGS) 1443 + 1439 1444 struct bnxt_ctx_mem_info { 1440 1445 u32 qp_max_entries; 1441 1446 u16 qp_min_qp1_entries; ··· 1479 1474 struct bnxt_ctx_pg_info stat_mem; 1480 1475 struct bnxt_ctx_pg_info mrav_mem; 1481 1476 struct bnxt_ctx_pg_info tim_mem; 1482 - struct bnxt_ctx_pg_info *tqm_mem[9]; 1477 + struct bnxt_ctx_pg_info *tqm_mem[BNXT_MAX_TQM_RINGS]; 1483 1478 }; 1484 1479 1485 1480 struct bnxt_fw_health {
+1 -1
drivers/net/ethernet/cadence/macb_main.c
···
 467  467 	{
 468  468 		long ferr, rate, rate_rounded;
 469  469 
 470       -	if (!bp->tx_clk || !(bp->caps & MACB_CAPS_CLK_HW_CHG))
      470  +	if (!bp->tx_clk || (bp->caps & MACB_CAPS_CLK_HW_CHG))
 471  471 		return;
 472  472 
 473  473 	switch (speed) {
+2 -1
drivers/net/ethernet/ethoc.c
···
1211 1211 	ret = mdiobus_register(priv->mdio);
1212 1212 	if (ret) {
1213 1213 		dev_err(&netdev->dev, "failed to register MDIO bus\n");
1214      -		goto free2;
     1214 +		goto free3;
1215 1215 	}
1216 1216 
1217 1217 	ret = ethoc_mdio_probe(netdev);
···
1243 1243 	netif_napi_del(&priv->napi);
1244 1244 error:
1245 1245 	mdiobus_unregister(priv->mdio);
     1246 +free3:
1246 1247 	mdiobus_free(priv->mdio);
1247 1248 free2:
1248 1249 	clk_disable_unprepare(priv->clk);
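The ethoc fix restores the usual kernel unwind discipline: each error label undoes exactly the steps that succeeded before the failure, in reverse order, and a registered MDIO bus must be unregistered before it is freed. A toy model of that ordering, assuming counters in place of real resources and invented names throughout:

```c
#include <assert.h>

/* Counters stand in for resources; after any failure, `acquired` must
 * drop back to zero -- the property the relabeled gotos guarantee. */
static int acquired;

static int grab(int ok)   { if (!ok) return -1; acquired++; return 0; }
static void release(void) { acquired--; }

static int probe(int fail_at)
{
	acquired = 0;
	if (grab(fail_at != 1))		/* e.g. clk_prepare_enable */
		goto err;
	if (grab(fail_at != 2))		/* e.g. mdiobus_register  */
		goto free2;
	if (grab(fail_at != 3))		/* e.g. ethoc_mdio_probe  */
		goto free3;
	return 0;
free3:
	release();	/* undo step 2, then fall through to undo step 1 */
free2:
	release();
err:
	return -1;
}
```

The original bug was the equivalent of jumping to `free2` after step 2 had succeeded, skipping one undo.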
+2 -1
drivers/net/ethernet/freescale/ucc_geth.c
···
3889 3889 	INIT_WORK(&ugeth->timeout_work, ucc_geth_timeout_work);
3890 3890 	netif_napi_add(dev, &ugeth->napi, ucc_geth_poll, 64);
3891 3891 	dev->mtu = 1500;
     3892 +	dev->max_mtu = 1518;
3892 3893 
3893 3894 	ugeth->msg_enable = netif_msg_init(debug.msg_enable, UGETH_MSG_DEFAULT);
3894 3895 	ugeth->phy_interface = phy_interface;
···
3935 3934 	struct device_node *np = ofdev->dev.of_node;
3936 3935 
3937 3936 	unregister_netdev(dev);
3938      -	free_netdev(dev);
3939 3937 	ucc_geth_memclean(ugeth);
3940 3938 	if (of_phy_is_fixed_link(np))
3941 3939 		of_phy_deregister_fixed_link(np);
3942 3940 	of_node_put(ugeth->ug_info->tbi_node);
3943 3941 	of_node_put(ugeth->ug_info->phy_node);
     3942 +	free_netdev(dev);
3944 3943 
3945 3944 	return 0;
3946 3945 }
+8 -1
drivers/net/ethernet/freescale/ucc_geth.h
···
575 575 	u32 vtagtable[0x8];	/* 8 4-byte VLAN tags */
576 576 	u32 tqptr;		/* a base pointer to the Tx Queues Memory
577 577 				   Region */
578     -	u8 res2[0x80 - 0x74];
    578 +	u8 res2[0x78 - 0x74];
    579 +	u64 snums_en;
    580 +	u32 l2l3baseptr;	/* top byte consists of a few other bit fields */
    581 +
    582 +	u16 mtu[8];
    583 +	u8 res3[0xa8 - 0x94];
    584 +	u32 wrrtablebase;	/* top byte is reserved */
    585 +	u8 res4[0xc0 - 0xac];
579 586 } __packed;
580 587 
581 588 /* structure representing Extended Filtering Global Parameters in PRAM */
+4
drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
···
415 415 	/* for mutl buffer*/
416 416 	new_skb = skb_copy(skb, GFP_ATOMIC);
417 417 	dev_kfree_skb_any(skb);
    418 +	if (!new_skb) {
    419 +		netdev_err(ndev, "skb alloc failed\n");
    420 +		return;
    421 +	}
418 422 	skb = new_skb;
419 423 
420 424 	check_ok = 0;
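The hns hunk guards a pattern that is easy to get wrong: the original skb is freed before the copy is checked, so a failed `skb_copy()` must abort rather than be dereferenced. A small sketch of the same invariant with plain `malloc` standing in for `skb_copy()` (names are illustrative):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* When a buffer is replaced by a copy and the original is freed first,
 * a failed copy must be detected before use -- on the failure path the
 * original is already gone, so the only safe answer is NULL. */
static char *replace_with_copy(char *orig, size_t len)
{
	char *copy = malloc(len);

	if (copy)
		memcpy(copy, orig, len);
	free(orig);		/* original is gone on both paths */
	return copy;		/* may be NULL: caller must bail out */
}
```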
+5 -5
drivers/net/ethernet/ibm/ibmvnic.c
··· 955 955 release_rx_pools(adapter); 956 956 957 957 release_napi(adapter); 958 + release_login_buffer(adapter); 958 959 release_login_rsp_buffer(adapter); 959 960 } 960 961 ··· 2342 2341 set_current_state(TASK_UNINTERRUPTIBLE); 2343 2342 schedule_timeout(60 * HZ); 2344 2343 } 2345 - } else if (!(rwi->reset_reason == VNIC_RESET_FATAL && 2346 - adapter->from_passive_init)) { 2344 + } else { 2347 2345 rc = do_reset(adapter, rwi, reset_state); 2348 2346 } 2349 2347 kfree(rwi); ··· 2981 2981 int rc; 2982 2982 2983 2983 if (!scrq) { 2984 - netdev_dbg(adapter->netdev, 2985 - "Invalid scrq reset. irq (%d) or msgs (%p).\n", 2986 - scrq->irq, scrq->msgs); 2984 + netdev_dbg(adapter->netdev, "Invalid scrq reset.\n"); 2987 2985 return -EINVAL; 2988 2986 } 2989 2987 ··· 3871 3873 return -1; 3872 3874 } 3873 3875 3876 + release_login_buffer(adapter); 3874 3877 release_login_rsp_buffer(adapter); 3878 + 3875 3879 client_data_len = vnic_client_data_len(adapter); 3876 3880 3877 3881 buffer_size =
+1
drivers/net/ethernet/intel/e1000e/e1000.h
···
436 436 #define FLAG2_DFLT_CRC_STRIPPING	BIT(12)
437 437 #define FLAG2_CHECK_RX_HWTSTAMP		BIT(13)
438 438 #define FLAG2_CHECK_SYSTIM_OVERFLOW	BIT(14)
    439 +#define FLAG2_ENABLE_S0IX_FLOWS		BIT(15)
439 440 
440 441 #define E1000_RX_DESC_PS(R, i) \
441 442 	(&(((union e1000_rx_desc_packet_split *)((R).desc))[i]))
+46
drivers/net/ethernet/intel/e1000e/ethtool.c
··· 23 23 int stat_offset; 24 24 }; 25 25 26 + static const char e1000e_priv_flags_strings[][ETH_GSTRING_LEN] = { 27 + #define E1000E_PRIV_FLAGS_S0IX_ENABLED BIT(0) 28 + "s0ix-enabled", 29 + }; 30 + 31 + #define E1000E_PRIV_FLAGS_STR_LEN ARRAY_SIZE(e1000e_priv_flags_strings) 32 + 26 33 #define E1000_STAT(str, m) { \ 27 34 .stat_string = str, \ 28 35 .type = E1000_STATS, \ ··· 1783 1776 return E1000_TEST_LEN; 1784 1777 case ETH_SS_STATS: 1785 1778 return E1000_STATS_LEN; 1779 + case ETH_SS_PRIV_FLAGS: 1780 + return E1000E_PRIV_FLAGS_STR_LEN; 1786 1781 default: 1787 1782 return -EOPNOTSUPP; 1788 1783 } ··· 2106 2097 p += ETH_GSTRING_LEN; 2107 2098 } 2108 2099 break; 2100 + case ETH_SS_PRIV_FLAGS: 2101 + memcpy(data, e1000e_priv_flags_strings, 2102 + E1000E_PRIV_FLAGS_STR_LEN * ETH_GSTRING_LEN); 2103 + break; 2109 2104 } 2110 2105 } 2111 2106 ··· 2318 2305 return 0; 2319 2306 } 2320 2307 2308 + static u32 e1000e_get_priv_flags(struct net_device *netdev) 2309 + { 2310 + struct e1000_adapter *adapter = netdev_priv(netdev); 2311 + u32 priv_flags = 0; 2312 + 2313 + if (adapter->flags2 & FLAG2_ENABLE_S0IX_FLOWS) 2314 + priv_flags |= E1000E_PRIV_FLAGS_S0IX_ENABLED; 2315 + 2316 + return priv_flags; 2317 + } 2318 + 2319 + static int e1000e_set_priv_flags(struct net_device *netdev, u32 priv_flags) 2320 + { 2321 + struct e1000_adapter *adapter = netdev_priv(netdev); 2322 + unsigned int flags2 = adapter->flags2; 2323 + 2324 + flags2 &= ~FLAG2_ENABLE_S0IX_FLOWS; 2325 + if (priv_flags & E1000E_PRIV_FLAGS_S0IX_ENABLED) { 2326 + struct e1000_hw *hw = &adapter->hw; 2327 + 2328 + if (hw->mac.type < e1000_pch_cnp) 2329 + return -EINVAL; 2330 + flags2 |= FLAG2_ENABLE_S0IX_FLOWS; 2331 + } 2332 + 2333 + if (flags2 != adapter->flags2) 2334 + adapter->flags2 = flags2; 2335 + 2336 + return 0; 2337 + } 2338 + 2321 2339 static const struct ethtool_ops e1000_ethtool_ops = { 2322 2340 .supported_coalesce_params = ETHTOOL_COALESCE_RX_USECS, 2323 2341 .get_drvinfo = e1000_get_drvinfo, ··· 2380 
2336 .set_eee = e1000e_set_eee, 2381 2337 .get_link_ksettings = e1000_get_link_ksettings, 2382 2338 .set_link_ksettings = e1000_set_link_ksettings, 2339 + .get_priv_flags = e1000e_get_priv_flags, 2340 + .set_priv_flags = e1000e_set_priv_flags, 2383 2341 }; 2384 2342 2385 2343 void e1000e_set_ethtool_ops(struct net_device *netdev)
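The new `s0ix-enabled` private flag follows the standard ethtool priv-flags shape: a getter that translates the driver's `flags2` word into the exported bitmap, and a setter that validates hardware support before flipping the bit. A simplified stand-in for that pair, with constants mirroring the diff but everything else illustrative:

```c
#include <assert.h>
#include <errno.h>

/* Sketch of the s0ix private-flag plumbing: translate between the
 * driver flags word and the ethtool priv-flags bitmap, refusing the
 * flag on MACs older than Cannon Point. Not the driver's code. */
#define DRV_S0IX	(1u << 15)	/* FLAG2_ENABLE_S0IX_FLOWS */
#define PRIV_S0IX	(1u << 0)	/* E1000E_PRIV_FLAGS_S0IX_ENABLED */

static unsigned int get_priv_flags(unsigned int flags2)
{
	return (flags2 & DRV_S0IX) ? PRIV_S0IX : 0;
}

static int set_priv_flags(unsigned int *flags2, unsigned int priv,
			  int mac_is_cnp_or_newer)
{
	unsigned int next = *flags2 & ~DRV_S0IX;

	if (priv & PRIV_S0IX) {
		if (!mac_is_cnp_or_newer)
			return -EINVAL;	/* hardware can't do s0ix flows */
		next |= DRV_S0IX;
	}
	*flags2 = next;
	return 0;
}
```

Note that on the `-EINVAL` path the driver flags are left untouched, exactly as in the hunk above.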
+14 -3
drivers/net/ethernet/intel/e1000e/ich8lan.c
··· 1240 1240 return 0; 1241 1241 1242 1242 if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID) { 1243 + struct e1000_adapter *adapter = hw->adapter; 1244 + bool firmware_bug = false; 1245 + 1243 1246 if (force) { 1244 1247 /* Request ME un-configure ULP mode in the PHY */ 1245 1248 mac_reg = er32(H2ME); ··· 1251 1248 ew32(H2ME, mac_reg); 1252 1249 } 1253 1250 1254 - /* Poll up to 300msec for ME to clear ULP_CFG_DONE. */ 1251 + /* Poll up to 2.5 seconds for ME to clear ULP_CFG_DONE. 1252 + * If this takes more than 1 second, show a warning indicating a 1253 + * firmware bug 1254 + */ 1255 1255 while (er32(FWSM) & E1000_FWSM_ULP_CFG_DONE) { 1256 - if (i++ == 30) { 1256 + if (i++ == 250) { 1257 1257 ret_val = -E1000_ERR_PHY; 1258 1258 goto out; 1259 1259 } 1260 + if (i > 100 && !firmware_bug) 1261 + firmware_bug = true; 1260 1262 1261 1263 usleep_range(10000, 11000); 1262 1264 } 1263 - e_dbg("ULP_CONFIG_DONE cleared after %dmsec\n", i * 10); 1265 + if (firmware_bug) 1266 + e_warn("ULP_CONFIG_DONE took %dmsec. This is a firmware bug\n", i * 10); 1267 + else 1268 + e_dbg("ULP_CONFIG_DONE cleared after %dmsec\n", i * 10); 1264 1269 1265 1270 if (force) { 1266 1271 mac_reg = er32(H2ME);
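The ich8lan change stretches the `ULP_CFG_DONE` poll from 300 ms to 2.5 s and remembers whether the wait crossed the 1 s mark so a firmware-bug warning can be printed. A tick-based model of that loop, with `done_after` standing in for the tick at which hardware would clear the bit (the function and its arguments are illustrative, not the driver API):

```c
#include <assert.h>

/* Up to 250 ticks of 10 ms (2.5 s) instead of 30; the firmware_bug
 * flag latches once the wait passes 100 ticks (1 s). */
static int poll_ulp_cfg_done(int done_after, int *firmware_bug)
{
	int i = 0;

	*firmware_bug = 0;
	while (i < done_after) {	/* models "bit still set" */
		if (i++ == 250)
			return -1;	/* timed out: -E1000_ERR_PHY */
		if (i > 100)
			*firmware_bug = 1;
	}
	return i * 10;			/* elapsed msec, as in e_dbg() */
}
```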
+10 -49
drivers/net/ethernet/intel/e1000e/netdev.c
··· 103 103 {0, NULL} 104 104 }; 105 105 106 - struct e1000e_me_supported { 107 - u16 device_id; /* supported device ID */ 108 - }; 109 - 110 - static const struct e1000e_me_supported me_supported[] = { 111 - {E1000_DEV_ID_PCH_LPT_I217_LM}, 112 - {E1000_DEV_ID_PCH_LPTLP_I218_LM}, 113 - {E1000_DEV_ID_PCH_I218_LM2}, 114 - {E1000_DEV_ID_PCH_I218_LM3}, 115 - {E1000_DEV_ID_PCH_SPT_I219_LM}, 116 - {E1000_DEV_ID_PCH_SPT_I219_LM2}, 117 - {E1000_DEV_ID_PCH_LBG_I219_LM3}, 118 - {E1000_DEV_ID_PCH_SPT_I219_LM4}, 119 - {E1000_DEV_ID_PCH_SPT_I219_LM5}, 120 - {E1000_DEV_ID_PCH_CNP_I219_LM6}, 121 - {E1000_DEV_ID_PCH_CNP_I219_LM7}, 122 - {E1000_DEV_ID_PCH_ICP_I219_LM8}, 123 - {E1000_DEV_ID_PCH_ICP_I219_LM9}, 124 - {E1000_DEV_ID_PCH_CMP_I219_LM10}, 125 - {E1000_DEV_ID_PCH_CMP_I219_LM11}, 126 - {E1000_DEV_ID_PCH_CMP_I219_LM12}, 127 - {E1000_DEV_ID_PCH_TGP_I219_LM13}, 128 - {E1000_DEV_ID_PCH_TGP_I219_LM14}, 129 - {E1000_DEV_ID_PCH_TGP_I219_LM15}, 130 - {0} 131 - }; 132 - 133 - static bool e1000e_check_me(u16 device_id) 134 - { 135 - struct e1000e_me_supported *id; 136 - 137 - for (id = (struct e1000e_me_supported *)me_supported; 138 - id->device_id; id++) 139 - if (device_id == id->device_id) 140 - return true; 141 - 142 - return false; 143 - } 144 - 145 106 /** 146 107 * __ew32_prepare - prepare to write to MAC CSR register on certain parts 147 108 * @hw: pointer to the HW structure ··· 6923 6962 struct net_device *netdev = pci_get_drvdata(to_pci_dev(dev)); 6924 6963 struct e1000_adapter *adapter = netdev_priv(netdev); 6925 6964 struct pci_dev *pdev = to_pci_dev(dev); 6926 - struct e1000_hw *hw = &adapter->hw; 6927 6965 int rc; 6928 6966 6929 6967 e1000e_flush_lpic(pdev); ··· 6930 6970 e1000e_pm_freeze(dev); 6931 6971 6932 6972 rc = __e1000_shutdown(pdev, false); 6933 - if (rc) 6973 + if (rc) { 6934 6974 e1000e_pm_thaw(dev); 6935 - 6936 - /* Introduce S0ix implementation */ 6937 - if (hw->mac.type >= e1000_pch_cnp && 6938 - !e1000e_check_me(hw->adapter->pdev->device)) 6939 - 
e1000e_s0ix_entry_flow(adapter); 6975 + } else { 6976 + /* Introduce S0ix implementation */ 6977 + if (adapter->flags2 & FLAG2_ENABLE_S0IX_FLOWS) 6978 + e1000e_s0ix_entry_flow(adapter); 6979 + } 6940 6980 6941 6981 return rc; 6942 6982 } ··· 6946 6986 struct net_device *netdev = pci_get_drvdata(to_pci_dev(dev)); 6947 6987 struct e1000_adapter *adapter = netdev_priv(netdev); 6948 6988 struct pci_dev *pdev = to_pci_dev(dev); 6949 - struct e1000_hw *hw = &adapter->hw; 6950 6989 int rc; 6951 6990 6952 6991 /* Introduce S0ix implementation */ 6953 - if (hw->mac.type >= e1000_pch_cnp && 6954 - !e1000e_check_me(hw->adapter->pdev->device)) 6992 + if (adapter->flags2 & FLAG2_ENABLE_S0IX_FLOWS) 6955 6993 e1000e_s0ix_exit_flow(adapter); 6956 6994 6957 6995 rc = __e1000_resume(pdev); ··· 7612 7654 */ 7613 7655 if (!(adapter->flags & FLAG_HAS_AMT)) 7614 7656 e1000e_get_hw_control(adapter); 7657 + 7658 + if (hw->mac.type >= e1000_pch_cnp) 7659 + adapter->flags2 |= FLAG2_ENABLE_S0IX_FLOWS; 7615 7660 7616 7661 strlcpy(netdev->name, "eth%d", sizeof(netdev->name)); 7617 7662 err = register_netdev(netdev);
+3
drivers/net/ethernet/intel/i40e/i40e.h
···
120 120 	__I40E_RESET_INTR_RECEIVED,
121 121 	__I40E_REINIT_REQUESTED,
122 122 	__I40E_PF_RESET_REQUESTED,
    123 +	__I40E_PF_RESET_AND_REBUILD_REQUESTED,
123 124 	__I40E_CORE_RESET_REQUESTED,
124 125 	__I40E_GLOBAL_RESET_REQUESTED,
125 126 	__I40E_EMP_RESET_INTR_RECEIVED,
···
147 146 };
148 147 
149 148 #define I40E_PF_RESET_FLAG	BIT_ULL(__I40E_PF_RESET_REQUESTED)
    149 +#define I40E_PF_RESET_AND_REBUILD_FLAG \
    150 +	BIT_ULL(__I40E_PF_RESET_AND_REBUILD_REQUESTED)
150 151 
151 152 /* VSI state flags */
152 153 enum i40e_vsi_state_t {
+10
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 36 36 static void i40e_determine_queue_usage(struct i40e_pf *pf); 37 37 static int i40e_setup_pf_filter_control(struct i40e_pf *pf); 38 38 static void i40e_prep_for_reset(struct i40e_pf *pf, bool lock_acquired); 39 + static void i40e_reset_and_rebuild(struct i40e_pf *pf, bool reinit, 40 + bool lock_acquired); 39 41 static int i40e_reset(struct i40e_pf *pf); 40 42 static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired); 41 43 static int i40e_setup_misc_vector_for_recovery_mode(struct i40e_pf *pf); ··· 8537 8535 pf->flags & I40E_FLAG_DISABLE_FW_LLDP ? 8538 8536 "FW LLDP is disabled\n" : 8539 8537 "FW LLDP is enabled\n"); 8538 + 8539 + } else if (reset_flags & I40E_PF_RESET_AND_REBUILD_FLAG) { 8540 + /* Request a PF Reset 8541 + * 8542 + * Resets PF and reinitializes PFs VSI. 8543 + */ 8544 + i40e_prep_for_reset(pf, lock_acquired); 8545 + i40e_reset_and_rebuild(pf, true, lock_acquired); 8540 8546 8541 8547 } else if (reset_flags & BIT_ULL(__I40E_REINIT_REQUESTED)) { 8542 8548 int v;
+2 -2
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
···
1772 1772 	if (num_vfs) {
1773 1773 		if (!(pf->flags & I40E_FLAG_VEB_MODE_ENABLED)) {
1774 1774 			pf->flags |= I40E_FLAG_VEB_MODE_ENABLED;
1775      -			i40e_do_reset_safe(pf, I40E_PF_RESET_FLAG);
     1775 +			i40e_do_reset_safe(pf, I40E_PF_RESET_AND_REBUILD_FLAG);
1776 1776 		}
1777 1777 		ret = i40e_pci_sriov_enable(pdev, num_vfs);
1778 1778 		goto sriov_configure_out;
···
1781 1781 	if (!pci_vfs_assigned(pf->pdev)) {
1782 1782 		i40e_free_vfs(pf);
1783 1783 		pf->flags &= ~I40E_FLAG_VEB_MODE_ENABLED;
1784      -		i40e_do_reset_safe(pf, I40E_PF_RESET_FLAG);
     1784 +		i40e_do_reset_safe(pf, I40E_PF_RESET_AND_REBUILD_FLAG);
1785 1785 	} else {
1786 1786 		dev_warn(&pdev->dev, "Unable to free VFs because some are assigned to VMs.\n");
1787 1787 		ret = -EINVAL;
+1 -3
drivers/net/ethernet/intel/iavf/iavf_main.c
···
1834 1834 	netif_tx_stop_all_queues(netdev);
1835 1835 	if (CLIENT_ALLOWED(adapter)) {
1836 1836 		err = iavf_lan_add_device(adapter);
1837      -		if (err) {
1838      -			rtnl_unlock();
     1837 +		if (err)
1839 1838 			dev_info(&pdev->dev, "Failed to add VF to client API service list: %d\n",
1840 1839 				 err);
1841      -		}
1842 1840 	}
1843 1841 	dev_info(&pdev->dev, "MAC address: %pM\n", adapter->hw.mac.addr);
1844 1842 	if (netdev->features & NETIF_F_GRO)
+1 -1
drivers/net/ethernet/marvell/mvneta.c
···
5255 5255 	err = mvneta_port_power_up(pp, pp->phy_interface);
5256 5256 	if (err < 0) {
5257 5257 		dev_err(&pdev->dev, "can't power up port\n");
5258      -		return err;
     5258 +		goto err_netdev;
5259 5259 	}
5260 5260 
5261 5261 	/* Armada3700 network controller does not support per-cpu
+20 -7
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
··· 1231 1231 1232 1232 regmap_read(priv->sysctrl_base, GENCONF_CTRL0, &val); 1233 1233 if (port->gop_id == 2) 1234 - val |= GENCONF_CTRL0_PORT0_RGMII | GENCONF_CTRL0_PORT1_RGMII; 1234 + val |= GENCONF_CTRL0_PORT0_RGMII; 1235 1235 else if (port->gop_id == 3) 1236 1236 val |= GENCONF_CTRL0_PORT1_RGMII_MII; 1237 1237 regmap_write(priv->sysctrl_base, GENCONF_CTRL0, val); ··· 2370 2370 static void mvpp2_tx_pkts_coal_set(struct mvpp2_port *port, 2371 2371 struct mvpp2_tx_queue *txq) 2372 2372 { 2373 - unsigned int thread = mvpp2_cpu_to_thread(port->priv, get_cpu()); 2373 + unsigned int thread; 2374 2374 u32 val; 2375 2375 2376 2376 if (txq->done_pkts_coal > MVPP2_TXQ_THRESH_MASK) 2377 2377 txq->done_pkts_coal = MVPP2_TXQ_THRESH_MASK; 2378 2378 2379 2379 val = (txq->done_pkts_coal << MVPP2_TXQ_THRESH_OFFSET); 2380 - mvpp2_thread_write(port->priv, thread, MVPP2_TXQ_NUM_REG, txq->id); 2381 - mvpp2_thread_write(port->priv, thread, MVPP2_TXQ_THRESH_REG, val); 2382 - 2383 - put_cpu(); 2380 + /* PKT-coalescing registers are per-queue + per-thread */ 2381 + for (thread = 0; thread < MVPP2_MAX_THREADS; thread++) { 2382 + mvpp2_thread_write(port->priv, thread, MVPP2_TXQ_NUM_REG, txq->id); 2383 + mvpp2_thread_write(port->priv, thread, MVPP2_TXQ_THRESH_REG, val); 2384 + } 2384 2385 } 2385 2386 2386 2387 static u32 mvpp2_usec_to_cycles(u32 usec, unsigned long clk_hz) ··· 5488 5487 struct mvpp2 *priv = port->priv; 5489 5488 struct mvpp2_txq_pcpu *txq_pcpu; 5490 5489 unsigned int thread; 5491 - int queue, err; 5490 + int queue, err, val; 5492 5491 5493 5492 /* Checks for hardware constraints */ 5494 5493 if (port->first_rxq + port->nrxqs > ··· 5501 5500 /* Disable port */ 5502 5501 mvpp2_egress_disable(port); 5503 5502 mvpp2_port_disable(port); 5503 + 5504 + if (mvpp2_is_xlg(port->phy_interface)) { 5505 + val = readl(port->base + MVPP22_XLG_CTRL0_REG); 5506 + val &= ~MVPP22_XLG_CTRL0_FORCE_LINK_PASS; 5507 + val |= MVPP22_XLG_CTRL0_FORCE_LINK_DOWN; 5508 + writel(val, port->base + 
MVPP22_XLG_CTRL0_REG); 5509 + } else { 5510 + val = readl(port->base + MVPP2_GMAC_AUTONEG_CONFIG); 5511 + val &= ~MVPP2_GMAC_FORCE_LINK_PASS; 5512 + val |= MVPP2_GMAC_FORCE_LINK_DOWN; 5513 + writel(val, port->base + MVPP2_GMAC_AUTONEG_CONFIG); 5514 + } 5504 5515 5505 5516 port->tx_time_coal = MVPP2_TXDONE_COAL_USEC; 5506 5517
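The pkt-coalescing part of the mvpp2 hunk fixes a per-thread register: the TXQ IRQ-threshold exists per queue *and* per software thread, so the clamped value has to be written for every thread rather than only the one belonging to the configuring CPU. A sketch with an array standing in for the per-thread registers (all names here are made up):

```c
#include <assert.h>

#define N_THREADS	4
#define THRESH_MASK	0x3fff

static unsigned int txq_thresh_reg[N_THREADS];

/* Clamp like MVPP2_TXQ_THRESH_MASK, then write the value for *every*
 * thread -- the one-thread version left stale thresholds behind. */
static void set_tx_pkts_coal(unsigned int done_pkts_coal)
{
	unsigned int thread;

	if (done_pkts_coal > THRESH_MASK)
		done_pkts_coal = THRESH_MASK;

	for (thread = 0; thread < N_THREADS; thread++)
		txq_thresh_reg[thread] = done_pkts_coal;
}
```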
+36 -2
drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
··· 405 405 return -EINVAL; 406 406 } 407 407 408 + /* Drop flow control pause frames */ 409 + static void mvpp2_prs_drop_fc(struct mvpp2 *priv) 410 + { 411 + unsigned char da[ETH_ALEN] = { 0x01, 0x80, 0xC2, 0x00, 0x00, 0x01 }; 412 + struct mvpp2_prs_entry pe; 413 + unsigned int len; 414 + 415 + memset(&pe, 0, sizeof(pe)); 416 + 417 + /* For all ports - drop flow control frames */ 418 + pe.index = MVPP2_PE_FC_DROP; 419 + mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_MAC); 420 + 421 + /* Set match on DA */ 422 + len = ETH_ALEN; 423 + while (len--) 424 + mvpp2_prs_tcam_data_byte_set(&pe, len, da[len], 0xff); 425 + 426 + mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_DROP_MASK, 427 + MVPP2_PRS_RI_DROP_MASK); 428 + 429 + mvpp2_prs_sram_bits_set(&pe, MVPP2_PRS_SRAM_LU_GEN_BIT, 1); 430 + mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_FLOWS); 431 + 432 + /* Mask all ports */ 433 + mvpp2_prs_tcam_port_map_set(&pe, MVPP2_PRS_PORT_MASK); 434 + 435 + /* Update shadow table and hw entry */ 436 + mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_MAC); 437 + mvpp2_prs_hw_write(priv, &pe); 438 + } 439 + 408 440 /* Enable/disable dropping all mac da's */ 409 441 static void mvpp2_prs_mac_drop_all_set(struct mvpp2 *priv, int port, bool add) 410 442 { ··· 1194 1162 mvpp2_prs_hw_write(priv, &pe); 1195 1163 1196 1164 /* Create dummy entries for drop all and promiscuous modes */ 1165 + mvpp2_prs_drop_fc(priv); 1197 1166 mvpp2_prs_mac_drop_all_set(priv, 0, false); 1198 1167 mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_UNI_CAST, false); 1199 1168 mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_MULTI_CAST, false); ··· 1680 1647 mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_IP6); 1681 1648 mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_L3_IP6, 1682 1649 MVPP2_PRS_RI_L3_PROTO_MASK); 1683 - /* Skip eth_type + 4 bytes of IPv6 header */ 1684 - mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN + 4, 1650 + /* Jump to DIP of IPV6 header */ 1651 + mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN + 8 + 1652 + 
MVPP2_MAX_L3_ADDR_SIZE, 1685 1653 MVPP2_PRS_SRAM_OP_SEL_SHIFT_ADD); 1686 1654 /* Set L3 offset */ 1687 1655 mvpp2_prs_sram_offset_set(&pe, MVPP2_PRS_SRAM_UDF_TYPE_L3,
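The corrected parser shift can be checked with plain IPv6 header arithmetic: to land on the destination address the parser must skip the 2-byte ethertype, the 8 fixed bytes before the source address (version/traffic class/flow label, payload length, next header, hop limit), and the 16-byte source address itself. Constant names below are illustrative stand-ins for the `MVPP2_*` macros:

```c
#include <assert.h>

#define ETH_TYPE_LEN		2
#define IPV6_PRE_SADDR_LEN	8	/* bytes 0..7 of the IPv6 header */
#define MAX_L3_ADDR_SIZE	16	/* one IPv6 address */

/* Offset from the ethertype to the IPv6 destination address (DIP);
 * the old code shifted by ETH_TYPE_LEN + 4, far short of the DIP. */
static int shift_to_ipv6_dip(void)
{
	return ETH_TYPE_LEN + IPV6_PRE_SADDR_LEN + MAX_L3_ADDR_SIZE;
}
```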
+1 -1
drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h
···
129 129 #define MVPP2_PE_VID_EDSA_FLTR_DEFAULT	(MVPP2_PRS_TCAM_SRAM_SIZE - 7)
130 130 #define MVPP2_PE_VLAN_DBL		(MVPP2_PRS_TCAM_SRAM_SIZE - 6)
131 131 #define MVPP2_PE_VLAN_NONE		(MVPP2_PRS_TCAM_SRAM_SIZE - 5)
132     -/* reserved */
    132 +#define MVPP2_PE_FC_DROP		(MVPP2_PRS_TCAM_SRAM_SIZE - 4)
133 133 #define MVPP2_PE_MAC_MC_PROMISCUOUS	(MVPP2_PRS_TCAM_SRAM_SIZE - 3)
134 134 #define MVPP2_PE_MAC_UC_PROMISCUOUS	(MVPP2_PRS_TCAM_SRAM_SIZE - 2)
135 135 #define MVPP2_PE_MAC_NON_PROMISCUOUS	(MVPP2_PRS_TCAM_SRAM_SIZE - 1)
+1 -1
drivers/net/ethernet/pensando/ionic/ionic_txrx.c
···
337 337 	unsigned int i, j;
338 338 	unsigned int len;
339 339 
340     -	len = netdev->mtu + ETH_HLEN;
    340 +	len = netdev->mtu + ETH_HLEN + VLAN_HLEN;
341 341 	nfrags = round_up(len, PAGE_SIZE) / PAGE_SIZE;
342 342 
343 343 	for (i = ionic_q_space_avail(q); i; i--) {
+5
drivers/net/ethernet/qlogic/qede/qede_fp.c
···
1799 1799 				     ntohs(udp_hdr(skb)->dest) != gnv_port))
1800 1800 			return features & ~(NETIF_F_CSUM_MASK |
1801 1801 					    NETIF_F_GSO_MASK);
     1802 +	} else if (l4_proto == IPPROTO_IPIP) {
     1803 +		/* IPIP tunnels are unknown to the device or at least unsupported natively,
     1804 +		 * offloads for them can't be done trivially, so disable them for such skb.
     1805 +		 */
     1806 +		return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
1802 1807 	}
1803 1808 
+4 -2
drivers/net/ethernet/realtek/r8169_main.c
···
2207 2207 	}
2208 2208 
2209 2209 	switch (tp->mac_version) {
2210      -	case RTL_GIGA_MAC_VER_25 ... RTL_GIGA_MAC_VER_33:
     2210 +	case RTL_GIGA_MAC_VER_25 ... RTL_GIGA_MAC_VER_26:
     2211 +	case RTL_GIGA_MAC_VER_32 ... RTL_GIGA_MAC_VER_33:
2211 2212 	case RTL_GIGA_MAC_VER_37:
2212 2213 	case RTL_GIGA_MAC_VER_39:
2213 2214 	case RTL_GIGA_MAC_VER_43:
···
2234 2233 static void rtl_pll_power_up(struct rtl8169_private *tp)
2235 2234 {
2236 2235 	switch (tp->mac_version) {
2237      -	case RTL_GIGA_MAC_VER_25 ... RTL_GIGA_MAC_VER_33:
     2236 +	case RTL_GIGA_MAC_VER_25 ... RTL_GIGA_MAC_VER_26:
     2237 +	case RTL_GIGA_MAC_VER_32 ... RTL_GIGA_MAC_VER_33:
2238 2238 	case RTL_GIGA_MAC_VER_37:
2239 2239 	case RTL_GIGA_MAC_VER_39:
2240 2240 	case RTL_GIGA_MAC_VER_43:
+4
drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
···
721 721 #define PCI_DEVICE_ID_INTEL_EHL_PSE1_RGMII1G_ID		0x4bb0
722 722 #define PCI_DEVICE_ID_INTEL_EHL_PSE1_SGMII1G_ID		0x4bb1
723 723 #define PCI_DEVICE_ID_INTEL_EHL_PSE1_SGMII2G5_ID	0x4bb2
    724 +#define PCI_DEVICE_ID_INTEL_TGLH_SGMII1G_0_ID		0x43ac
    725 +#define PCI_DEVICE_ID_INTEL_TGLH_SGMII1G_1_ID		0x43a2
724 726 #define PCI_DEVICE_ID_INTEL_TGL_SGMII1G_ID		0xa0ac
725 727 
726 728 static const struct pci_device_id intel_eth_pci_id_table[] = {
···
737 735 	{ PCI_DEVICE_DATA(INTEL, EHL_PSE1_SGMII1G_ID, &ehl_pse1_sgmii1g_info) },
738 736 	{ PCI_DEVICE_DATA(INTEL, EHL_PSE1_SGMII2G5_ID, &ehl_pse1_sgmii1g_info) },
739 737 	{ PCI_DEVICE_DATA(INTEL, TGL_SGMII1G_ID, &tgl_sgmii1g_info) },
    738 +	{ PCI_DEVICE_DATA(INTEL, TGLH_SGMII1G_0_ID, &tgl_sgmii1g_info) },
    739 +	{ PCI_DEVICE_DATA(INTEL, TGLH_SGMII1G_1_ID, &tgl_sgmii1g_info) },
740 740 	{}
741 741 };
742 742 MODULE_DEVICE_TABLE(pci, intel_eth_pci_id_table);
+1 -1
drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
···
135 135 	struct device *dev = dwmac->dev;
136 136 	static const struct clk_parent_data mux_parents[] = {
137 137 		{ .fw_name = "clkin0", },
138     -		{ .fw_name = "clkin1", },
    138 +		{ .index = -1, },
139 139 	};
140 140 	static const struct clk_div_table div_table[] = {
141 141 		{ .div = 2, .val = 2, },
+2
drivers/net/ethernet/ti/cpts.c
···
599 599 
600 600 	ptp_clock_unregister(cpts->clock);
601 601 	cpts->clock = NULL;
    602 +	cpts->phc_index = -1;
602 603 
603 604 	cpts_write32(cpts, 0, int_enable);
604 605 	cpts_write32(cpts, 0, control);
···
785 784 	cpts->cc.read = cpts_systim_read;
786 785 	cpts->cc.mask = CLOCKSOURCE_MASK(32);
787 786 	cpts->info = cpts_info;
    787 +	cpts->phc_index = -1;
788 788 
789 789 	if (n_ext_ts)
790 790 		cpts->info.n_ext_ts = n_ext_ts;
+69 -58
drivers/net/ipa/gsi.c
··· 326 326 } 327 327 328 328 /* Issue an event ring command and wait for it to complete */ 329 - static int evt_ring_command(struct gsi *gsi, u32 evt_ring_id, 330 - enum gsi_evt_cmd_opcode opcode) 329 + static void evt_ring_command(struct gsi *gsi, u32 evt_ring_id, 330 + enum gsi_evt_cmd_opcode opcode) 331 331 { 332 332 struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; 333 333 struct completion *completion = &evt_ring->completion; ··· 340 340 * is issued here. Only permit *this* event ring to trigger 341 341 * an interrupt, and only enable the event control IRQ type 342 342 * when we expect it to occur. 343 + * 344 + * There's a small chance that a previous command completed 345 + * after the interrupt was disabled, so make sure we have no 346 + * pending interrupts before we enable them. 343 347 */ 348 + iowrite32(~0, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET); 349 + 344 350 val = BIT(evt_ring_id); 345 351 iowrite32(val, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET); 346 352 gsi_irq_type_enable(gsi, GSI_EV_CTRL); ··· 361 355 iowrite32(0, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET); 362 356 363 357 if (success) 364 - return 0; 358 + return; 365 359 366 360 dev_err(dev, "GSI command %u for event ring %u timed out, state %u\n", 367 361 opcode, evt_ring_id, evt_ring->state); 368 - 369 - return -ETIMEDOUT; 370 362 } 371 363 372 364 /* Allocate an event ring in NOT_ALLOCATED state */ 373 365 static int gsi_evt_ring_alloc_command(struct gsi *gsi, u32 evt_ring_id) 374 366 { 375 367 struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; 376 - int ret; 377 368 378 369 /* Get initial event ring state */ 379 370 evt_ring->state = gsi_evt_ring_state(gsi, evt_ring_id); ··· 380 377 return -EINVAL; 381 378 } 382 379 383 - ret = evt_ring_command(gsi, evt_ring_id, GSI_EVT_ALLOCATE); 384 - if (!ret && evt_ring->state != GSI_EVT_RING_STATE_ALLOCATED) { 385 - dev_err(gsi->dev, "event ring %u bad state %u after alloc\n", 386 - evt_ring_id, evt_ring->state); 
387 - ret = -EIO; 388 - } 380 + evt_ring_command(gsi, evt_ring_id, GSI_EVT_ALLOCATE); 389 381 390 - return ret; 382 + /* If successful the event ring state will have changed */ 383 + if (evt_ring->state == GSI_EVT_RING_STATE_ALLOCATED) 384 + return 0; 385 + 386 + dev_err(gsi->dev, "event ring %u bad state %u after alloc\n", 387 + evt_ring_id, evt_ring->state); 388 + 389 + return -EIO; 391 390 } 392 391 393 392 /* Reset a GSI event ring in ALLOCATED or ERROR state. */ ··· 397 392 { 398 393 struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; 399 394 enum gsi_evt_ring_state state = evt_ring->state; 400 - int ret; 401 395 402 396 if (state != GSI_EVT_RING_STATE_ALLOCATED && 403 397 state != GSI_EVT_RING_STATE_ERROR) { ··· 405 401 return; 406 402 } 407 403 408 - ret = evt_ring_command(gsi, evt_ring_id, GSI_EVT_RESET); 409 - if (!ret && evt_ring->state != GSI_EVT_RING_STATE_ALLOCATED) 410 - dev_err(gsi->dev, "event ring %u bad state %u after reset\n", 411 - evt_ring_id, evt_ring->state); 404 + evt_ring_command(gsi, evt_ring_id, GSI_EVT_RESET); 405 + 406 + /* If successful the event ring state will have changed */ 407 + if (evt_ring->state == GSI_EVT_RING_STATE_ALLOCATED) 408 + return; 409 + 410 + dev_err(gsi->dev, "event ring %u bad state %u after reset\n", 411 + evt_ring_id, evt_ring->state); 412 412 } 413 413 414 414 /* Issue a hardware de-allocation request for an allocated event ring */ 415 415 static void gsi_evt_ring_de_alloc_command(struct gsi *gsi, u32 evt_ring_id) 416 416 { 417 417 struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; 418 - int ret; 419 418 420 419 if (evt_ring->state != GSI_EVT_RING_STATE_ALLOCATED) { 421 420 dev_err(gsi->dev, "event ring %u state %u before dealloc\n", ··· 426 419 return; 427 420 } 428 421 429 - ret = evt_ring_command(gsi, evt_ring_id, GSI_EVT_DE_ALLOC); 430 - if (!ret && evt_ring->state != GSI_EVT_RING_STATE_NOT_ALLOCATED) 431 - dev_err(gsi->dev, "event ring %u bad state %u after dealloc\n", 432 - evt_ring_id, 
evt_ring->state); 422 + evt_ring_command(gsi, evt_ring_id, GSI_EVT_DE_ALLOC); 423 + 424 + /* If successful the event ring state will have changed */ 425 + if (evt_ring->state == GSI_EVT_RING_STATE_NOT_ALLOCATED) 426 + return; 427 + 428 + dev_err(gsi->dev, "event ring %u bad state %u after dealloc\n", 429 + evt_ring_id, evt_ring->state); 433 430 } 434 431 435 432 /* Fetch the current state of a channel from hardware */ ··· 449 438 } 450 439 451 440 /* Issue a channel command and wait for it to complete */ 452 - static int 441 + static void 453 442 gsi_channel_command(struct gsi_channel *channel, enum gsi_ch_cmd_opcode opcode) 454 443 { 455 444 struct completion *completion = &channel->completion; ··· 464 453 * issued here. So we only permit *this* channel to trigger 465 454 * an interrupt and only enable the channel control IRQ type 466 455 * when we expect it to occur. 456 + * 457 + * There's a small chance that a previous command completed 458 + * after the interrupt was disabled, so make sure we have no 459 + * pending interrupts before we enable them. 
467 460 */ 461 + iowrite32(~0, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_CLR_OFFSET); 462 + 468 463 val = BIT(channel_id); 469 464 iowrite32(val, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET); 470 465 gsi_irq_type_enable(gsi, GSI_CH_CTRL); ··· 484 467 iowrite32(0, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET); 485 468 486 469 if (success) 487 - return 0; 470 + return; 488 471 489 472 dev_err(dev, "GSI command %u for channel %u timed out, state %u\n", 490 473 opcode, channel_id, gsi_channel_state(channel)); 491 - 492 - return -ETIMEDOUT; 493 474 } 494 475 495 476 /* Allocate GSI channel in NOT_ALLOCATED state */ ··· 496 481 struct gsi_channel *channel = &gsi->channel[channel_id]; 497 482 struct device *dev = gsi->dev; 498 483 enum gsi_channel_state state; 499 - int ret; 500 484 501 485 /* Get initial channel state */ 502 486 state = gsi_channel_state(channel); ··· 505 491 return -EINVAL; 506 492 } 507 493 508 - ret = gsi_channel_command(channel, GSI_CH_ALLOCATE); 494 + gsi_channel_command(channel, GSI_CH_ALLOCATE); 509 495 510 - /* Channel state will normally have been updated */ 496 + /* If successful the channel state will have changed */ 511 497 state = gsi_channel_state(channel); 512 - if (!ret && state != GSI_CHANNEL_STATE_ALLOCATED) { 513 - dev_err(dev, "channel %u bad state %u after alloc\n", 514 - channel_id, state); 515 - ret = -EIO; 516 - } 498 + if (state == GSI_CHANNEL_STATE_ALLOCATED) 499 + return 0; 517 500 518 - return ret; 501 + dev_err(dev, "channel %u bad state %u after alloc\n", 502 + channel_id, state); 503 + 504 + return -EIO; 519 505 } 520 506 521 507 /* Start an ALLOCATED channel */ ··· 523 509 { 524 510 struct device *dev = channel->gsi->dev; 525 511 enum gsi_channel_state state; 526 - int ret; 527 512 528 513 state = gsi_channel_state(channel); 529 514 if (state != GSI_CHANNEL_STATE_ALLOCATED && ··· 532 519 return -EINVAL; 533 520 } 534 521 535 - ret = gsi_channel_command(channel, GSI_CH_START); 522 + gsi_channel_command(channel, GSI_CH_START); 536 523 
537 - /* Channel state will normally have been updated */ 524 + /* If successful the channel state will have changed */ 538 525 state = gsi_channel_state(channel); 539 - if (!ret && state != GSI_CHANNEL_STATE_STARTED) { 540 - dev_err(dev, "channel %u bad state %u after start\n", 541 - gsi_channel_id(channel), state); 542 - ret = -EIO; 543 - } 526 + if (state == GSI_CHANNEL_STATE_STARTED) 527 + return 0; 544 528 545 - return ret; 529 + dev_err(dev, "channel %u bad state %u after start\n", 530 + gsi_channel_id(channel), state); 531 + 532 + return -EIO; 546 533 } 547 534 548 535 /* Stop a GSI channel in STARTED state */ ··· 550 537 { 551 538 struct device *dev = channel->gsi->dev; 552 539 enum gsi_channel_state state; 553 - int ret; 554 540 555 541 state = gsi_channel_state(channel); 556 542 ··· 566 554 return -EINVAL; 567 555 } 568 556 569 - ret = gsi_channel_command(channel, GSI_CH_STOP); 557 + gsi_channel_command(channel, GSI_CH_STOP); 570 558 571 - /* Channel state will normally have been updated */ 559 + /* If successful the channel state will have changed */ 572 560 state = gsi_channel_state(channel); 573 - if (ret || state == GSI_CHANNEL_STATE_STOPPED) 574 - return ret; 561 + if (state == GSI_CHANNEL_STATE_STOPPED) 562 + return 0; 575 563 576 564 /* We may have to try again if stop is in progress */ 577 565 if (state == GSI_CHANNEL_STATE_STOP_IN_PROC) ··· 588 576 { 589 577 struct device *dev = channel->gsi->dev; 590 578 enum gsi_channel_state state; 591 - int ret; 592 579 593 580 msleep(1); /* A short delay is required before a RESET command */ 594 581 ··· 601 590 return; 602 591 } 603 592 604 - ret = gsi_channel_command(channel, GSI_CH_RESET); 593 + gsi_channel_command(channel, GSI_CH_RESET); 605 594 606 - /* Channel state will normally have been updated */ 595 + /* If successful the channel state will have changed */ 607 596 state = gsi_channel_state(channel); 608 - if (!ret && state != GSI_CHANNEL_STATE_ALLOCATED) 597 + if (state != 
GSI_CHANNEL_STATE_ALLOCATED) 609 598 dev_err(dev, "channel %u bad state %u after reset\n", 610 599 gsi_channel_id(channel), state); 611 600 } ··· 616 605 struct gsi_channel *channel = &gsi->channel[channel_id]; 617 606 struct device *dev = gsi->dev; 618 607 enum gsi_channel_state state; 619 - int ret; 620 608 621 609 state = gsi_channel_state(channel); 622 610 if (state != GSI_CHANNEL_STATE_ALLOCATED) { ··· 624 614 return; 625 615 } 626 616 627 - ret = gsi_channel_command(channel, GSI_CH_DE_ALLOC); 617 + gsi_channel_command(channel, GSI_CH_DE_ALLOC); 628 618 629 - /* Channel state will normally have been updated */ 619 + /* If successful the channel state will have changed */ 630 620 state = gsi_channel_state(channel); 631 - if (!ret && state != GSI_CHANNEL_STATE_NOT_ALLOCATED) 621 + 622 + if (state != GSI_CHANNEL_STATE_NOT_ALLOCATED) 632 623 dev_err(dev, "channel %u bad state %u after dealloc\n", 633 624 channel_id, state); 634 625 }
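The gsi.c rework above changes the control flow pattern: the command helpers now return `void`, and every caller decides success by reading the ring or channel state back, since a timed-out command simply leaves the state unchanged. A compact model of that pattern (the `hw_completes` flag and all names are illustrative):

```c
#include <assert.h>
#include <errno.h>

enum evt_state { EVT_NOT_ALLOCATED, EVT_ALLOCATED, EVT_ERROR };

static enum evt_state evt_state;

/* Models evt_ring_command(): on completion the interrupt handler has
 * already updated the state; on timeout we only log, state stays put. */
static void evt_ring_command_alloc(int hw_completes)
{
	if (hw_completes)
		evt_state = EVT_ALLOCATED;
}

static int evt_ring_alloc(int hw_completes)
{
	evt_state = EVT_NOT_ALLOCATED;
	evt_ring_command_alloc(hw_completes);

	/* If successful the event ring state will have changed */
	if (evt_state == EVT_ALLOCATED)
		return 0;
	return -EIO;
}
```

This removes the old double signal (a `-ETIMEDOUT` return *and* a possibly-updated state) in favor of a single source of truth.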
+2 -2
drivers/net/ipa/ipa_clock.c
···
 		return ret;

 	data = &clock->interconnect_data[IPA_INTERCONNECT_IMEM];
-	ret = icc_set_bw(clock->memory_path, data->average_rate,
+	ret = icc_set_bw(clock->imem_path, data->average_rate,
 			 data->peak_rate);
 	if (ret)
 		goto err_memory_path_disable;

 	data = &clock->interconnect_data[IPA_INTERCONNECT_CONFIG];
-	ret = icc_set_bw(clock->memory_path, data->average_rate,
+	ret = icc_set_bw(clock->config_path, data->average_rate,
 			 data->peak_rate);
 	if (ret)
 		goto err_imem_path_disable;
+1 -1
drivers/net/tun.c
···
 	int i;

 	if (it->nr_segs > MAX_SKB_FRAGS + 1)
-		return ERR_PTR(-ENOMEM);
+		return ERR_PTR(-EMSGSIZE);

 	local_bh_disable();
 	skb = napi_get_frags(&tfile->napi);
-3
drivers/net/usb/cdc_ncm.c
···
 	 * USB_CDC_NOTIFY_NETWORK_CONNECTION notification shall be
 	 * sent by device after USB_CDC_NOTIFY_SPEED_CHANGE.
 	 */
-	netif_info(dev, link, dev->net,
-		   "network connection: %sconnected\n",
-		   !!event->wValue ? "" : "dis");
 	usbnet_link_change(dev, !!event->wValue, 0);
 	break;
+1
drivers/net/usb/qmi_wwan.c
···
 	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0125)},	/* Quectel EC25, EC20 R2.0 Mini PCIe */
 	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0306)},	/* Quectel EP06/EG06/EM06 */
 	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0512)},	/* Quectel EG12/EM12 */
+	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0620)},	/* Quectel EM160R-GL */
 	{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0800)},	/* Quectel RM500Q-GL */

 	/* 3. Combined interface devices matching on interface number */
+7 -5
drivers/net/virtio_net.c
···

 	get_online_cpus();
 	err = _virtnet_set_queues(vi, queue_pairs);
-	if (!err) {
-		netif_set_real_num_tx_queues(dev, queue_pairs);
-		netif_set_real_num_rx_queues(dev, queue_pairs);
-
-		virtnet_set_affinity(vi);
+	if (err) {
+		put_online_cpus();
+		goto err;
 	}
+	virtnet_set_affinity(vi);
 	put_online_cpus();

+	netif_set_real_num_tx_queues(dev, queue_pairs);
+	netif_set_real_num_rx_queues(dev, queue_pairs);
+err:
 	return err;
 }

+7
drivers/net/wan/hdlc_ppp.c
···
 	unsigned long flags;

 	spin_lock_irqsave(&ppp->lock, flags);
+	/* mod_timer could be called after we entered this function but
+	 * before we got the lock.
+	 */
+	if (timer_pending(&proto->timer)) {
+		spin_unlock_irqrestore(&ppp->lock, flags);
+		return;
+	}
 	switch (proto->state) {
 	case STOPPING:
 	case REQ_SENT:
+1 -1
drivers/net/wireless/ath/ath11k/core.c
···
 	ath11k_hif_ce_irq_disable(ab);

 	ret = ath11k_hif_suspend(ab);
-	if (!ret) {
+	if (ret) {
 		ath11k_warn(ab, "failed to suspend hif: %d\n", ret);
 		return ret;
 	}
+7 -3
drivers/net/wireless/ath/ath11k/dp_rx.c
···
 {
 	u8 channel_num;
 	u32 center_freq;
+	struct ieee80211_channel *channel;

 	rx_status->freq = 0;
 	rx_status->rate_idx = 0;
···
 		rx_status->band = NL80211_BAND_5GHZ;
 	} else {
 		spin_lock_bh(&ar->data_lock);
-		rx_status->band = ar->rx_channel->band;
-		channel_num =
-			ieee80211_frequency_to_channel(ar->rx_channel->center_freq);
+		channel = ar->rx_channel;
+		if (channel) {
+			rx_status->band = channel->band;
+			channel_num =
+				ieee80211_frequency_to_channel(channel->center_freq);
+		}
 		spin_unlock_bh(&ar->data_lock);
 		ath11k_dbg_dump(ar->ab, ATH11K_DBG_DATA, NULL, "rx_desc: ",
 				rx_desc, sizeof(struct hal_rx_desc));
+6 -2
drivers/net/wireless/ath/ath11k/mac.c
···
 	}

 	if (ab->hw_params.vdev_start_delay &&
+	    !arvif->is_started &&
 	    arvif->vdev_type != WMI_VDEV_TYPE_AP) {
 		ret = ath11k_start_vdev_delay(ar->hw, vif);
 		if (ret) {
···
 	/* for QCA6390 bss peer must be created before vdev_start */
 	if (ab->hw_params.vdev_start_delay &&
 	    arvif->vdev_type != WMI_VDEV_TYPE_AP &&
-	    arvif->vdev_type != WMI_VDEV_TYPE_MONITOR) {
+	    arvif->vdev_type != WMI_VDEV_TYPE_MONITOR &&
+	    !ath11k_peer_find_by_vdev_id(ab, arvif->vdev_id)) {
 		memcpy(&arvif->chanctx, ctx, sizeof(*ctx));
 		ret = 0;
 		goto out;
···
 		goto out;
 	}

-	if (ab->hw_params.vdev_start_delay) {
+	if (ab->hw_params.vdev_start_delay &&
+	    (arvif->vdev_type == WMI_VDEV_TYPE_AP ||
+	     arvif->vdev_type == WMI_VDEV_TYPE_MONITOR)) {
 		param.vdev_id = arvif->vdev_id;
 		param.peer_type = WMI_PEER_TYPE_DEFAULT;
 		param.peer_addr = ar->mac_addr;
+40 -4
drivers/net/wireless/ath/ath11k/pci.c
···
 				      PCIE_QSERDES_COM_SYSCLK_EN_SEL_REG,
 				      PCIE_QSERDES_COM_SYSCLK_EN_SEL_VAL,
 				      PCIE_QSERDES_COM_SYSCLK_EN_SEL_MSK);
-	if (!ret) {
+	if (ret) {
 		ath11k_warn(ab, "failed to set sysclk: %d\n", ret);
 		return ret;
 	}
···
 				      PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG1_REG,
 				      PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG1_VAL,
 				      PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG_MSK);
-	if (!ret) {
+	if (ret) {
 		ath11k_warn(ab, "failed to set dtct config1 error: %d\n", ret);
 		return ret;
 	}
···
 				      PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG2_REG,
 				      PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG2_VAL,
 				      PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG_MSK);
-	if (!ret) {
+	if (ret) {
 		ath11k_warn(ab, "failed to set dtct config2: %d\n", ret);
 		return ret;
 	}
···
 				      PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG4_REG,
 				      PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG4_VAL,
 				      PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG_MSK);
-	if (!ret) {
+	if (ret) {
 		ath11k_warn(ab, "failed to set dtct config4: %d\n", ret);
 		return ret;
 	}
···
 	pci_disable_device(pci_dev);
 }

+static void ath11k_pci_aspm_disable(struct ath11k_pci *ab_pci)
+{
+	struct ath11k_base *ab = ab_pci->ab;
+
+	pcie_capability_read_word(ab_pci->pdev, PCI_EXP_LNKCTL,
+				  &ab_pci->link_ctl);
+
+	ath11k_dbg(ab, ATH11K_DBG_PCI, "pci link_ctl 0x%04x L0s %d L1 %d\n",
+		   ab_pci->link_ctl,
+		   u16_get_bits(ab_pci->link_ctl, PCI_EXP_LNKCTL_ASPM_L0S),
+		   u16_get_bits(ab_pci->link_ctl, PCI_EXP_LNKCTL_ASPM_L1));
+
+	/* disable L0s and L1 */
+	pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL,
+				   ab_pci->link_ctl & ~PCI_EXP_LNKCTL_ASPMC);
+
+	set_bit(ATH11K_PCI_ASPM_RESTORE, &ab_pci->flags);
+}
+
+static void ath11k_pci_aspm_restore(struct ath11k_pci *ab_pci)
+{
+	if (test_and_clear_bit(ATH11K_PCI_ASPM_RESTORE, &ab_pci->flags))
+		pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL,
+					   ab_pci->link_ctl);
+}
+
 static int ath11k_pci_power_up(struct ath11k_base *ab)
 {
 	struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);
···
 	ab_pci->register_window = 0;
 	clear_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags);
 	ath11k_pci_sw_reset(ab_pci->ab, true);
+
+	/* Disable ASPM during firmware download due to problems switching
+	 * to AMSS state.
+	 */
+	ath11k_pci_aspm_disable(ab_pci);

 	ret = ath11k_mhi_start(ab_pci);
 	if (ret) {
···
 static void ath11k_pci_power_down(struct ath11k_base *ab)
 {
 	struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);
+
+	/* restore aspm in case firmware bootup fails */
+	ath11k_pci_aspm_restore(ab_pci);

 	ath11k_pci_force_wake(ab_pci->ab);
 	ath11k_mhi_stop(ab_pci);
···
 	struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);

 	set_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags);
+
+	ath11k_pci_aspm_restore(ab_pci);

 	ath11k_pci_ce_irqs_enable(ab);
 	ath11k_ce_rx_post_buf(ab);
+2
drivers/net/wireless/ath/ath11k/pci.h
···
 enum ath11k_pci_flags {
 	ATH11K_PCI_FLAG_INIT_DONE,
 	ATH11K_PCI_FLAG_IS_MSI_64,
+	ATH11K_PCI_ASPM_RESTORE,
 };

 struct ath11k_pci {
···

 	/* enum ath11k_pci_flags */
 	unsigned long flags;
+	u16 link_ctl;
 };

 static inline struct ath11k_pci *ath11k_pci_priv(struct ath11k_base *ab)
+17
drivers/net/wireless/ath/ath11k/peer.c
···
 	return NULL;
 }

+struct ath11k_peer *ath11k_peer_find_by_vdev_id(struct ath11k_base *ab,
+						int vdev_id)
+{
+	struct ath11k_peer *peer;
+
+	spin_lock_bh(&ab->base_lock);
+
+	list_for_each_entry(peer, &ab->peers, list) {
+		if (vdev_id == peer->vdev_id) {
+			spin_unlock_bh(&ab->base_lock);
+			return peer;
+		}
+	}
+	spin_unlock_bh(&ab->base_lock);
+	return NULL;
+}
+
 void ath11k_peer_unmap_event(struct ath11k_base *ab, u16 peer_id)
 {
 	struct ath11k_peer *peer;
+2
drivers/net/wireless/ath/ath11k/peer.h
···
 		       struct ieee80211_sta *sta, struct peer_create_params *param);
 int ath11k_wait_for_peer_delete_done(struct ath11k *ar, u32 vdev_id,
 				     const u8 *addr);
+struct ath11k_peer *ath11k_peer_find_by_vdev_id(struct ath11k_base *ab,
+						int vdev_id);

 #endif /* _PEER_H_ */
+22 -2
drivers/net/wireless/ath/ath11k/qmi.c
···
 	struct qmi_wlanfw_respond_mem_resp_msg_v01 resp;
 	struct qmi_txn txn = {};
 	int ret = 0, i;
+	bool delayed;

 	req = kzalloc(sizeof(*req), GFP_KERNEL);
 	if (!req)
···
 	 * failure to FW and FW will then request mulitple blocks of small
 	 * chunk size memory.
 	 */
-	if (!ab->bus_params.fixed_mem_region && ab->qmi.mem_seg_count <= 2) {
+	if (!ab->bus_params.fixed_mem_region && ab->qmi.target_mem_delayed) {
+		delayed = true;
 		ath11k_dbg(ab, ATH11K_DBG_QMI, "qmi delays mem_request %d\n",
 			   ab->qmi.mem_seg_count);
 		memset(req, 0, sizeof(*req));
 	} else {
+		delayed = false;
 		req->mem_seg_len = ab->qmi.mem_seg_count;

 		for (i = 0; i < req->mem_seg_len ; i++) {
···
 	}

 	if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
+		/* the error response is expected when
+		 * target_mem_delayed is true.
+		 */
+		if (delayed && resp.resp.error == 0)
+			goto out;
+
 		ath11k_warn(ab, "Respond mem req failed, result: %d, err: %d\n",
 			    resp.resp.result, resp.resp.error);
 		ret = -EINVAL;
···
 	int i;
 	struct target_mem_chunk *chunk;

+	ab->qmi.target_mem_delayed = false;
+
 	for (i = 0; i < ab->qmi.mem_seg_count; i++) {
 		chunk = &ab->qmi.target_mem[i];
 		chunk->vaddr = dma_alloc_coherent(ab->dev,
···
 						  &chunk->paddr,
 						  GFP_KERNEL);
 		if (!chunk->vaddr) {
+			if (ab->qmi.mem_seg_count <= 2) {
+				ath11k_dbg(ab, ATH11K_DBG_QMI,
+					   "qmi dma allocation failed (%d B type %u), will try later with small size\n",
+					   chunk->size,
+					   chunk->type);
+				ath11k_qmi_free_target_mem_chunk(ab);
+				ab->qmi.target_mem_delayed = true;
+				return 0;
+			}
 			ath11k_err(ab, "failed to alloc memory, size: 0x%x, type: %u\n",
 				   chunk->size,
 				   chunk->type);
···
 				    ret);
 			return;
 		}
-	} else if (msg->mem_seg_len > 2) {
+	} else {
 		ret = ath11k_qmi_alloc_target_mem_chunk(ab);
 		if (ret) {
 			ath11k_warn(ab, "qmi failed to alloc target memory: %d\n",
+1
drivers/net/wireless/ath/ath11k/qmi.h
···
 	struct target_mem_chunk target_mem[ATH11K_QMI_WLANFW_MAX_NUM_MEM_SEG_V01];
 	u32 mem_seg_count;
 	u32 target_mem_mode;
+	bool target_mem_delayed;
 	u8 cal_done;
 	struct target_info target;
 	struct m3_mem_region m3_mem;
+3
drivers/net/wireless/ath/ath11k/wmi.c
···
 	len = sizeof(*cmd);

 	skb = ath11k_wmi_alloc_skb(wmi_ab, len);
+	if (!skb)
+		return -ENOMEM;
+
 	cmd = (struct wmi_pdev_set_hw_mode_cmd_param *)skb->data;

 	cmd->tlv_header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_PDEV_SET_HW_MODE_CMD) |
+2 -2
drivers/net/wireless/mediatek/mt76/mt7915/init.c
···
 		.types = BIT(NL80211_IFTYPE_ADHOC)
 	}, {
 		.max = 16,
-		.types = BIT(NL80211_IFTYPE_AP) |
+		.types = BIT(NL80211_IFTYPE_AP)
 #ifdef CONFIG_MAC80211_MESH
-			 BIT(NL80211_IFTYPE_MESH_POINT)
+			 | BIT(NL80211_IFTYPE_MESH_POINT)
 #endif
 	}, {
 		.max = MT7915_MAX_INTERFACES,
+7 -12
drivers/net/wireless/mediatek/mt76/sdio.c
···

 static int mt76s_process_tx_queue(struct mt76_dev *dev, struct mt76_queue *q)
 {
-	bool wake, mcu = q == dev->q_mcu[MT_MCUQ_WM];
 	struct mt76_queue_entry entry;
 	int nframes = 0;
+	bool mcu;

+	if (!q)
+		return 0;
+
+	mcu = q == dev->q_mcu[MT_MCUQ_WM];
 	while (q->queued > 0) {
 		if (!q->entry[q->tail].done)
 			break;
···
 		nframes++;
 	}

-	wake = q->stopped && q->queued < q->ndesc - 8;
-	if (wake)
-		q->stopped = false;
-
 	if (!q->queued)
 		wake_up(&dev->tx_wait);

-	if (mcu)
-		goto out;
+	if (!mcu)
+		mt76_txq_schedule(&dev->phy, q->qid);

-	mt76_txq_schedule(&dev->phy, q->qid);
-
-	if (wake)
-		ieee80211_wake_queue(dev->hw, q->qid);
-out:
 	return nframes;
 }

+2 -7
drivers/net/wireless/mediatek/mt76/usb.c
···
 	struct mt76_dev *dev = container_of(usb, struct mt76_dev, usb);
 	struct mt76_queue_entry entry;
 	struct mt76_queue *q;
-	bool wake;
 	int i;

 	for (i = 0; i < IEEE80211_NUM_ACS; i++) {
 		q = dev->phy.q_tx[i];
+		if (!q)
+			continue;

 		while (q->queued > 0) {
 			if (!q->entry[q->tail].done)
···
 			mt76_queue_tx_complete(dev, q, &entry);
 		}

-		wake = q->stopped && q->queued < q->ndesc - 8;
-		if (wake)
-			q->stopped = false;
-
 		if (!q->queued)
 			wake_up(&dev->tx_wait);

···
 		if (dev->drv->tx_status_data &&
 		    !test_and_set_bit(MT76_READING_STATS, &dev->phy.state))
 			queue_work(dev->wq, &dev->usb.stat_work);
-		if (wake)
-			ieee80211_wake_queue(dev->hw, i);
 	}
 }

+5 -3
drivers/net/wireless/realtek/rtlwifi/core.c
···

 	rtl_dbg(rtlpriv, COMP_ERR, DBG_LOUD,
 		"Firmware callback routine entered!\n");
-	complete(&rtlpriv->firmware_loading_complete);
 	if (!firmware) {
 		if (rtlpriv->cfg->alt_fw_name) {
 			err = request_firmware(&firmware,
···
 		}
 		pr_err("Selected firmware is not available\n");
 		rtlpriv->max_fw_size = 0;
-		return;
+		goto exit;
 	}
 found_alt:
 	if (firmware->size > rtlpriv->max_fw_size) {
 		pr_err("Firmware is too big!\n");
 		release_firmware(firmware);
-		return;
+		goto exit;
 	}
 	if (!is_wow) {
 		memcpy(rtlpriv->rtlhal.pfirmware, firmware->data,
···
 		rtlpriv->rtlhal.wowlan_fwsize = firmware->size;
 	}
 	release_firmware(firmware);
+
+exit:
+	complete(&rtlpriv->firmware_loading_complete);
 }

 void rtl_fw_cb(const struct firmware *firmware, void *context)
+3 -3
drivers/vhost/net.c
···
 	size_t len, total_len = 0;
 	int err;
 	struct vhost_net_ubuf_ref *ubufs;
+	struct ubuf_info *ubuf;
 	bool zcopy_used;
 	int sent_pkts = 0;

···

 		/* use msg_control to pass vhost zerocopy ubuf info to skb */
 		if (zcopy_used) {
-			struct ubuf_info *ubuf;
 			ubuf = nvq->ubuf_info + nvq->upend_idx;
-
 			vq->heads[nvq->upend_idx].id = cpu_to_vhost32(vq, head);
 			vq->heads[nvq->upend_idx].len = VHOST_DMA_IN_PROGRESS;
 			ubuf->callback = vhost_zerocopy_callback;
···
 		err = sock->ops->sendmsg(sock, &msg, len);
 		if (unlikely(err < 0)) {
 			if (zcopy_used) {
-				vhost_net_ubuf_put(ubufs);
+				if (vq->heads[ubuf->desc].len == VHOST_DMA_IN_PROGRESS)
+					vhost_net_ubuf_put(ubufs);
 				nvq->upend_idx = ((unsigned)nvq->upend_idx - 1)
 					% UIO_MAXIOV;
 			}
+3 -1
include/net/red.h
···
 	v->qcount	= -1;
 }

-static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog)
+static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog, u8 Scell_log)
 {
 	if (fls(qth_min) + Wlog > 32)
 		return false;
 	if (fls(qth_max) + Wlog > 32)
 		return false;
+	if (Scell_log >= 32)
 		return false;
 	if (qth_max < qth_min)
 		return false;
-4
include/net/xdp_sock.h
···

 	struct xsk_queue *tx ____cacheline_aligned_in_smp;
 	struct list_head tx_list;
-	/* Mutual exclusion of NAPI TX thread and sendmsg error paths
-	 * in the SKB destructor callback.
-	 */
-	spinlock_t tx_completion_lock;
 	/* Protects generic receive. */
 	spinlock_t rx_lock;

+5
include/net/xsk_buff_pool.h
···
 	bool dma_need_sync;
 	bool unaligned;
 	void *addrs;
+	/* Mutual exclusion of the completion ring in the SKB mode. Two cases to protect:
+	 * NAPI TX thread and sendmsg error paths in the SKB destructor callback and when
+	 * sockets share a single cq when the same netdev and queue id is shared.
+	 */
+	spinlock_t cq_lock;
 	struct xdp_buff_xsk *free_heads[];
 };

+3
include/uapi/linux/netfilter/nf_tables.h
···
  * @NFT_SET_EVAL: set can be updated from the evaluation path
  * @NFT_SET_OBJECT: set contains stateful objects
  * @NFT_SET_CONCAT: set contains a concatenation
+ * @NFT_SET_EXPR: set contains expressions
  */
 enum nft_set_flags {
 	NFT_SET_ANONYMOUS	= 0x1,
···
 	NFT_SET_EVAL		= 0x20,
 	NFT_SET_OBJECT		= 0x40,
 	NFT_SET_CONCAT		= 0x80,
+	NFT_SET_EXPR		= 0x100,
 };

 /**
···

 enum nft_dynset_flags {
 	NFT_DYNSET_F_INV	= (1 << 0),
+	NFT_DYNSET_F_EXPR	= (1 << 1),
 };

 /**
+1 -1
include/uapi/linux/ppp-ioctl.h
···
 #define PPPIOCGCHAN	_IOR('t', 55, int)	/* get ppp channel number */
 #define PPPIOCGL2TPSTATS _IOR('t', 54, struct pppol2tp_ioc_stats)
 #define PPPIOCBRIDGECHAN _IOW('t', 53, int)	/* bridge one channel to another */
-#define PPPIOCUNBRIDGECHAN _IO('t', 54)	/* unbridge channel */
+#define PPPIOCUNBRIDGECHAN _IO('t', 52)	/* unbridge channel */

 #define SIOCGPPPSTATS   (SIOCDEVPRIVATE + 0)
 #define SIOCGPPPVER     (SIOCDEVPRIVATE + 1)	/* NEVER change this!! */
+1
kernel/bpf/hashtab.c
···
 			lockdep_set_class(&htab->buckets[i].lock,
 					  &htab->lockdep_key);
 		}
+		cond_resched();
 	}
 }

-1
kernel/bpf/syscall.c
···
 #include <linux/fs.h>
 #include <linux/license.h>
 #include <linux/filter.h>
-#include <linux/version.h>
 #include <linux/kernel.h>
 #include <linux/idr.h>
 #include <linux/cred.h>
+8 -8
kernel/bpf/task_iter.c
···
 	if (!task) {
 		++*tid;
 		goto retry;
-	} else if (skip_if_dup_files && task->tgid != task->pid &&
+	} else if (skip_if_dup_files && !thread_group_leader(task) &&
 		   task->files == task->group_leader->files) {
 		put_task_struct(task);
 		task = NULL;
···
 		curr_task = info->task;
 		curr_fd = info->fd;
 	} else {
-		curr_task = task_seq_get_next(ns, &curr_tid, true);
-		if (!curr_task) {
-			info->task = NULL;
-			return NULL;
-		}
+		curr_task = task_seq_get_next(ns, &curr_tid, true);
+		if (!curr_task) {
+			info->task = NULL;
+			info->tid = curr_tid;
+			return NULL;
+		}

-		/* set info->task and info->tid */
-		info->task = curr_task;
+		/* set info->task and info->tid */
 		if (curr_tid == info->tid) {
 			curr_fd = info->fd;
 		} else {
+2 -4
net/core/neighbour.c
···
 void pneigh_enqueue(struct neigh_table *tbl, struct neigh_parms *p,
 		    struct sk_buff *skb)
 {
-	unsigned long now = jiffies;
-
-	unsigned long sched_next = now + (prandom_u32() %
-					  NEIGH_VAR(p, PROXY_DELAY));
+	unsigned long sched_next = jiffies +
+			prandom_u32_max(NEIGH_VAR(p, PROXY_DELAY));

 	if (tbl->proxy_queue.qlen > NEIGH_VAR(p, PROXY_QLEN)) {
 		kfree_skb(skb);
+53 -12
net/core/net-sysfs.c
···
 static ssize_t xps_cpus_show(struct netdev_queue *queue,
 			     char *buf)
 {
+	int cpu, len, ret, num_tc = 1, tc = 0;
 	struct net_device *dev = queue->dev;
-	int cpu, len, num_tc = 1, tc = 0;
 	struct xps_dev_maps *dev_maps;
 	cpumask_var_t mask;
 	unsigned long index;
···

 	index = get_netdev_queue_index(queue);

+	if (!rtnl_trylock())
+		return restart_syscall();
+
 	if (dev->num_tc) {
 		/* Do not allow XPS on subordinate device directly */
 		num_tc = dev->num_tc;
-		if (num_tc < 0)
-			return -EINVAL;
+		if (num_tc < 0) {
+			ret = -EINVAL;
+			goto err_rtnl_unlock;
+		}

 		/* If queue belongs to subordinate dev use its map */
 		dev = netdev_get_tx_queue(dev, index)->sb_dev ? : dev;

 		tc = netdev_txq_to_tc(dev, index);
-		if (tc < 0)
-			return -EINVAL;
+		if (tc < 0) {
+			ret = -EINVAL;
+			goto err_rtnl_unlock;
+		}
 	}

-	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
-		return -ENOMEM;
+	if (!zalloc_cpumask_var(&mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto err_rtnl_unlock;
+	}

 	rcu_read_lock();
 	dev_maps = rcu_dereference(dev->xps_cpus_map);
···
 	}
 	rcu_read_unlock();

+	rtnl_unlock();
+
 	len = snprintf(buf, PAGE_SIZE, "%*pb\n", cpumask_pr_args(mask));
 	free_cpumask_var(mask);
 	return len < PAGE_SIZE ? len : -EINVAL;
+
+err_rtnl_unlock:
+	rtnl_unlock();
+	return ret;
 }

 static ssize_t xps_cpus_store(struct netdev_queue *queue,
···
 		return err;
 	}

+	if (!rtnl_trylock()) {
+		free_cpumask_var(mask);
+		return restart_syscall();
+	}
+
 	err = netif_set_xps_queue(dev, mask, index);
+	rtnl_unlock();

 	free_cpumask_var(mask);

···

 static ssize_t xps_rxqs_show(struct netdev_queue *queue, char *buf)
 {
+	int j, len, ret, num_tc = 1, tc = 0;
 	struct net_device *dev = queue->dev;
 	struct xps_dev_maps *dev_maps;
 	unsigned long *mask, index;
-	int j, len, num_tc = 1, tc = 0;

 	index = get_netdev_queue_index(queue);

+	if (!rtnl_trylock())
+		return restart_syscall();
+
 	if (dev->num_tc) {
 		num_tc = dev->num_tc;
 		tc = netdev_txq_to_tc(dev, index);
-		if (tc < 0)
-			return -EINVAL;
+		if (tc < 0) {
+			ret = -EINVAL;
+			goto err_rtnl_unlock;
+		}
 	}
 	mask = bitmap_zalloc(dev->num_rx_queues, GFP_KERNEL);
-	if (!mask)
-		return -ENOMEM;
+	if (!mask) {
+		ret = -ENOMEM;
+		goto err_rtnl_unlock;
+	}

 	rcu_read_lock();
 	dev_maps = rcu_dereference(dev->xps_rxqs_map);
···
 out_no_maps:
 	rcu_read_unlock();

+	rtnl_unlock();
+
 	len = bitmap_print_to_pagebuf(false, buf, mask, dev->num_rx_queues);
 	bitmap_free(mask);

 	return len < PAGE_SIZE ? len : -EINVAL;
+
+err_rtnl_unlock:
+	rtnl_unlock();
+	return ret;
 }

 static ssize_t xps_rxqs_store(struct netdev_queue *queue, const char *buf,
···
 		return err;
 	}

+	if (!rtnl_trylock()) {
+		bitmap_free(mask);
+		return restart_syscall();
+	}
+
 	cpus_read_lock();
 	err = __netif_set_xps_queue(dev, mask, index, true);
 	cpus_read_unlock();
+
+	rtnl_unlock();

 	bitmap_free(mask);
 	return err ? : len;
+2
net/dcb/dcbnl.c
···
 	fn = &reply_funcs[dcb->cmd];
 	if (!fn->cb)
 		return -EOPNOTSUPP;
+	if (fn->type != nlh->nlmsg_type)
+		return -EPERM;

 	if (!tb[DCB_ATTR_IFNAME])
 		return -EINVAL;
+1 -1
net/ipv4/fib_frontend.c
···
 		.flowi4_iif = LOOPBACK_IFINDEX,
 		.flowi4_oif = l3mdev_master_ifindex_rcu(dev),
 		.daddr = ip_hdr(skb)->saddr,
-		.flowi4_tos = RT_TOS(ip_hdr(skb)->tos),
+		.flowi4_tos = ip_hdr(skb)->tos & IPTOS_RT_MASK,
 		.flowi4_scope = scope,
 		.flowi4_mark = vmark ? skb->mark : 0,
 	};
+1 -1
net/ipv4/gre_demux.c
···
 	 * to 0 and sets the configured key in the
 	 * inner erspan header field
 	 */
-	if (greh->protocol == htons(ETH_P_ERSPAN) ||
+	if ((greh->protocol == htons(ETH_P_ERSPAN) && hdr_len != 4) ||
 	    greh->protocol == htons(ETH_P_ERSPAN2)) {
 		struct erspan_base_hdr *ershdr;

+1 -1
net/ipv4/netfilter/arp_tables.c
···
 	xt_compat_lock(NFPROTO_ARP);
 	t = xt_find_table_lock(net, NFPROTO_ARP, get.name);
 	if (!IS_ERR(t)) {
-		const struct xt_table_info *private = t->private;
+		const struct xt_table_info *private = xt_table_get_private_protected(t);
 		struct xt_table_info info;

 		ret = compat_table_info(private, &info);
+1 -1
net/ipv4/netfilter/ip_tables.c
···
 	xt_compat_lock(AF_INET);
 	t = xt_find_table_lock(net, AF_INET, get.name);
 	if (!IS_ERR(t)) {
-		const struct xt_table_info *private = t->private;
+		const struct xt_table_info *private = xt_table_get_private_protected(t);
 		struct xt_table_info info;
 		ret = compat_table_info(private, &info);
 		if (!ret && get.size == info.size)
+1 -1
net/ipv6/netfilter/ip6_tables.c
···
 	xt_compat_lock(AF_INET6);
 	t = xt_find_table_lock(net, AF_INET6, get.name);
 	if (!IS_ERR(t)) {
-		const struct xt_table_info *private = t->private;
+		const struct xt_table_info *private = xt_table_get_private_protected(t);
 		struct xt_table_info info;
 		ret = compat_table_info(private, &info);
 		if (!ret && get.size == info.size)
+1
net/lapb/lapb_iface.c
···
 		break;
 	}

+	lapb_put(lapb);
 	return NOTIFY_DONE;
 }

+4 -1
net/mptcp/protocol.c
···
 	struct mptcp_sock *msk = mptcp_sk(sk);

 	WARN_ON_ONCE(msk->wmem_reserved);
+	if (WARN_ON_ONCE(amount < 0))
+		amount = 0;
+
 	if (amount <= sk->sk_forward_alloc)
 		goto reserve;

···
 	if (msg->msg_flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL))
 		return -EOPNOTSUPP;

-	mptcp_lock_sock(sk, __mptcp_wmem_reserve(sk, len));
+	mptcp_lock_sock(sk, __mptcp_wmem_reserve(sk, min_t(size_t, 1 << 20, len)));

 	timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);

+1 -1
net/ncsi/ncsi-rsp.c
···
 	int payload, i, ret;

 	/* Find the NCSI device */
-	nd = ncsi_find_dev(dev);
+	nd = ncsi_find_dev(orig_dev);
 	ndp = nd ? TO_NCSI_DEV_PRIV(nd) : NULL;
 	if (!ndp)
 		return -ENODEV;
+18 -24
net/netfilter/ipset/ip_set_hash_gen.h
···
 	return hsize * sizeof(struct hbucket *) + sizeof(struct htable);
 }

-/* Compute htable_bits from the user input parameter hashsize */
-static u8
-htable_bits(u32 hashsize)
-{
-	/* Assume that hashsize == 2^htable_bits */
-	u8 bits = fls(hashsize - 1);
-
-	if (jhash_size(bits) != hashsize)
-		/* Round up to the first 2^n value */
-		bits = fls(hashsize);
-
-	return bits;
-}
-
 #ifdef IP_SET_HASH_WITH_NETS
 #if IPSET_NET_COUNT > 1
 #define __CIDR(cidr, i)		(cidr[i])
···
 	struct htype *h = set->data;
 	struct htable *t, *orig;
 	u8 htable_bits;
-	size_t dsize = set->dsize;
+	size_t hsize, dsize = set->dsize;
 #ifdef IP_SET_HASH_WITH_NETS
 	u8 flags;
 	struct mtype_elem *tmp;
···
 retry:
 	ret = 0;
 	htable_bits++;
-	if (!htable_bits) {
-		/* In case we have plenty of memory :-) */
-		pr_warn("Cannot increase the hashsize of set %s further\n",
-			set->name);
-		ret = -IPSET_ERR_HASH_FULL;
-		goto out;
-	}
-	t = ip_set_alloc(htable_size(htable_bits));
+	if (!htable_bits)
+		goto hbwarn;
+	hsize = htable_size(htable_bits);
+	if (!hsize)
+		goto hbwarn;
+	t = ip_set_alloc(hsize);
 	if (!t) {
 		ret = -ENOMEM;
 		goto out;
···
 	mtype_ahash_destroy(set, t, false);
 	if (ret == -EAGAIN)
 		goto retry;
+	goto out;
+
+hbwarn:
+	/* In case we have plenty of memory :-) */
+	pr_warn("Cannot increase the hashsize of set %s further\n", set->name);
+	ret = -IPSET_ERR_HASH_FULL;
 	goto out;
 }

···
 	if (!h)
 		return -ENOMEM;

-	hbits = htable_bits(hashsize);
+	/* Compute htable_bits from the user input parameter hashsize.
+	 * Assume that hashsize == 2^htable_bits,
+	 * otherwise round up to the first 2^n value.
+	 */
+	hbits = fls(hashsize - 1);
 	hsize = htable_size(hbits);
 	if (hsize == 0) {
 		kfree(h);
+7 -3
net/netfilter/nf_tables_api.c
···
 	if (flags & ~(NFT_SET_ANONYMOUS | NFT_SET_CONSTANT |
 		      NFT_SET_INTERVAL | NFT_SET_TIMEOUT |
 		      NFT_SET_MAP | NFT_SET_EVAL |
-		      NFT_SET_OBJECT | NFT_SET_CONCAT))
+		      NFT_SET_OBJECT | NFT_SET_CONCAT | NFT_SET_EXPR))
 		return -EOPNOTSUPP;
 	/* Only one of these operations is supported */
 	if ((flags & (NFT_SET_MAP | NFT_SET_OBJECT)) ==
···
 		struct nlattr *tmp;
 		int left;

+		if (!(flags & NFT_SET_EXPR)) {
+			err = -EINVAL;
+			goto err_set_alloc_name;
+		}
 		i = 0;
 		nla_for_each_nested(tmp, nla[NFTA_SET_EXPRESSIONS], left) {
 			if (i == NFT_SET_EXPR_MAX) {
···
 	return 0;

 err_expr:
-	for (k = i - 1; k >= 0; k++)
-		nft_expr_destroy(ctx, expr_array[i]);
+	for (k = i - 1; k >= 0; k--)
+		nft_expr_destroy(ctx, expr_array[k]);

 	return -ENOMEM;
 }
+10 -5
net/netfilter/nft_dynset.c
···
 	enum nft_registers		sreg_key:8;
 	enum nft_registers		sreg_data:8;
 	bool				invert;
+	bool				expr;
 	u8				num_exprs;
 	u64				timeout;
 	struct nft_expr			*expr_array[NFT_SET_EXPR_MAX];
···

 	if (tb[NFTA_DYNSET_FLAGS]) {
 		u32 flags = ntohl(nla_get_be32(tb[NFTA_DYNSET_FLAGS]));
-
-		if (flags & ~NFT_DYNSET_F_INV)
-			return -EINVAL;
+		if (flags & ~(NFT_DYNSET_F_INV | NFT_DYNSET_F_EXPR))
+			return -EOPNOTSUPP;
 		if (flags & NFT_DYNSET_F_INV)
 			priv->invert = true;
+		if (flags & NFT_DYNSET_F_EXPR)
+			priv->expr = true;
 	}

 	set = nft_set_lookup_global(ctx->net, ctx->table,
···
 	timeout = 0;
 	if (tb[NFTA_DYNSET_TIMEOUT] != NULL) {
 		if (!(set->flags & NFT_SET_TIMEOUT))
-			return -EINVAL;
+			return -EOPNOTSUPP;

 		err = nf_msecs_to_jiffies64(tb[NFTA_DYNSET_TIMEOUT], &timeout);
 		if (err)
···

 	if (tb[NFTA_DYNSET_SREG_DATA] != NULL) {
 		if (!(set->flags & NFT_SET_MAP))
-			return -EINVAL;
+			return -EOPNOTSUPP;
 		if (set->dtype == NFT_DATA_VERDICT)
 			return -EOPNOTSUPP;

···
 		struct nft_expr *dynset_expr;
 		struct nlattr *tmp;
 		int left;
+
+		if (!priv->expr)
+			return -EINVAL;

 		i = 0;
 		nla_for_each_nested(tmp, tb[NFTA_DYNSET_EXPRESSIONS], left) {
net/netfilter/xt_RATEEST.c | +3
@@ -115,6 +115,9 @@
 	} cfg;
 	int ret;
 
+	if (strnlen(info->name, sizeof(est->name)) >= sizeof(est->name))
+		return -ENAMETOOLONG;
+
 	net_get_random_once(&jhash_rnd, sizeof(jhash_rnd));
 
 	mutex_lock(&xn->hash_lock);
net/packet/af_packet.c | +3 -1
@@ -4595,7 +4595,9 @@
 static int packet_seq_show(struct seq_file *seq, void *v)
 {
 	if (v == SEQ_START_TOKEN)
-		seq_puts(seq, "sk       RefCnt Type Proto  Iface R Rmem   User   Inode\n");
+		seq_printf(seq,
+			   "%*sRefCnt Type Proto  Iface R Rmem   User   Inode\n",
+			   IS_ENABLED(CONFIG_64BIT) ? -17 : -9, "sk");
 	else {
 		struct sock *s = sk_entry(v);
 		const struct packet_sock *po = pkt_sk(s);
net/sched/sch_choke.c | +1 -1
@@ -362,7 +362,7 @@
 
 	ctl = nla_data(tb[TCA_CHOKE_PARMS]);
 
-	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog))
+	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log))
 		return -EINVAL;
 
 	if (ctl->limit > CHOKE_MAX_QUEUE)
net/sched/sch_gred.c | +1 -1
@@ -480,7 +480,7 @@
 	struct gred_sched *table = qdisc_priv(sch);
 	struct gred_sched_data *q = table->tab[dp];
 
-	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog)) {
+	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log)) {
 		NL_SET_ERR_MSG_MOD(extack, "invalid RED parameters");
 		return -EINVAL;
 	}
net/sched/sch_red.c | +1 -1
@@ -250,7 +250,7 @@
 	max_P = tb[TCA_RED_MAX_P] ? nla_get_u32(tb[TCA_RED_MAX_P]) : 0;
 
 	ctl = nla_data(tb[TCA_RED_PARMS]);
-	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog))
+	if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log))
 		return -EINVAL;
 
 	err = red_get_flags(ctl->flags, TC_RED_HISTORIC_FLAGS,
net/sched/sch_sfq.c | +1 -1
@@ -647,7 +647,7 @@
 	}
 
 	if (ctl_v1 && !red_check_params(ctl_v1->qth_min, ctl_v1->qth_max,
-					ctl_v1->Wlog))
+					ctl_v1->Wlog, ctl_v1->Scell_log))
 		return -EINVAL;
 	if (ctl_v1 && ctl_v1->qth_min) {
 		p = kmalloc(sizeof(*p), GFP_KERNEL);
net/sched/sch_taprio.c | +4 -3
@@ -1605,8 +1605,9 @@
 
 	hrtimer_cancel(&q->advance_timer);
 	if (q->qdiscs) {
-		for (i = 0; i < dev->num_tx_queues && q->qdiscs[i]; i++)
-			qdisc_reset(q->qdiscs[i]);
+		for (i = 0; i < dev->num_tx_queues; i++)
+			if (q->qdiscs[i])
+				qdisc_reset(q->qdiscs[i]);
 	}
 	sch->qstats.backlog = 0;
 	sch->q.qlen = 0;
@@ -1627,7 +1626,7 @@
 	taprio_disable_offload(dev, q, NULL);
 
 	if (q->qdiscs) {
-		for (i = 0; i < dev->num_tx_queues && q->qdiscs[i]; i++)
+		for (i = 0; i < dev->num_tx_queues; i++)
 			qdisc_put(q->qdiscs[i]);
 
 		kfree(q->qdiscs);
net/xdp/xsk.c | +13 -3
@@ -423,9 +423,9 @@
 	struct xdp_sock *xs = xdp_sk(skb->sk);
 	unsigned long flags;
 
-	spin_lock_irqsave(&xs->tx_completion_lock, flags);
+	spin_lock_irqsave(&xs->pool->cq_lock, flags);
 	xskq_prod_submit_addr(xs->pool->cq, addr);
-	spin_unlock_irqrestore(&xs->tx_completion_lock, flags);
+	spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
 
 	sock_wfree(skb);
 }
@@ -437,6 +437,7 @@
 	bool sent_frame = false;
 	struct xdp_desc desc;
 	struct sk_buff *skb;
+	unsigned long flags;
 	int err = 0;
 
 	mutex_lock(&xs->mutex);
@@ -469,10 +468,13 @@
 		 * if there is space in it. This avoids having to implement
 		 * any buffering in the Tx path.
 		 */
+		spin_lock_irqsave(&xs->pool->cq_lock, flags);
 		if (unlikely(err) || xskq_prod_reserve(xs->pool->cq)) {
+			spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
 			kfree_skb(skb);
 			goto out;
 		}
+		spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
 
 		skb->dev = xs->dev;
 		skb->priority = sk->sk_priority;
@@ -487,6 +483,9 @@
 		if (err == NETDEV_TX_BUSY) {
 			/* Tell user-space to retry the send */
 			skb->destructor = sock_wfree;
+			spin_lock_irqsave(&xs->pool->cq_lock, flags);
+			xskq_prod_cancel(xs->pool->cq);
+			spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
 			/* Free skb without triggering the perf drop trace */
 			consume_skb(skb);
 			err = -EAGAIN;
@@ -884,6 +877,10 @@
 			goto out_unlock;
 		}
 	}
+
+	/* FQ and CQ are now owned by the buffer pool and cleaned up with it. */
+	xs->fq_tmp = NULL;
+	xs->cq_tmp = NULL;
 
 	xs->dev = dev;
 	xs->zc = xs->umem->zc;
@@ -1310,7 +1299,6 @@
 	xs->state = XSK_READY;
 	mutex_init(&xs->mutex);
 	spin_lock_init(&xs->rx_lock);
-	spin_lock_init(&xs->tx_completion_lock);
 
 	INIT_LIST_HEAD(&xs->map_list);
 	spin_lock_init(&xs->map_list_lock);
net/xdp/xsk_buff_pool.c | +1 -2
@@ -71,12 +71,11 @@
 	INIT_LIST_HEAD(&pool->free_list);
 	INIT_LIST_HEAD(&pool->xsk_tx_list);
 	spin_lock_init(&pool->xsk_tx_list_lock);
+	spin_lock_init(&pool->cq_lock);
 	refcount_set(&pool->users, 1);
 
 	pool->fq = xs->fq_tmp;
 	pool->cq = xs->cq_tmp;
-	xs->fq_tmp = NULL;
-	xs->cq_tmp = NULL;
 
 	for (i = 0; i < pool->free_heads_cnt; i++) {
 		xskb = &pool->heads[i];
net/xdp/xsk_queue.h | +5
@@ -334,6 +334,11 @@
 	return xskq_prod_nb_free(q, 1) ? false : true;
 }
 
+static inline void xskq_prod_cancel(struct xsk_queue *q)
+{
+	q->cached_prod--;
+}
+
 static inline int xskq_prod_reserve(struct xsk_queue *q)
 {
 	if (xskq_prod_is_full(q))
tools/testing/selftests/bpf/Makefile | +3
@@ -121,6 +121,9 @@
 		     /sys/kernel/btf/vmlinux				\
 		     /boot/vmlinux-$(shell uname -r)
 VMLINUX_BTF ?= $(abspath $(firstword $(wildcard $(VMLINUX_BTF_PATHS))))
+ifeq ($(VMLINUX_BTF),)
+$(error Cannot find a vmlinux for VMLINUX_BTF at any of "$(VMLINUX_BTF_PATHS)")
+endif
 
 # Define simple and short `make test_progs`, `make test_sysctl`, etc targets
 # to build individual tests.
tools/testing/selftests/bpf/test_maps.c | +42 -6
@@ -1312,22 +1312,58 @@
 #define DO_UPDATE 1
 #define DO_DELETE 0
 
+#define MAP_RETRIES 20
+
+static int map_update_retriable(int map_fd, const void *key, const void *value,
+				int flags, int attempts)
+{
+	while (bpf_map_update_elem(map_fd, key, value, flags)) {
+		if (!attempts || (errno != EAGAIN && errno != EBUSY))
+			return -errno;
+
+		usleep(1);
+		attempts--;
+	}
+
+	return 0;
+}
+
+static int map_delete_retriable(int map_fd, const void *key, int attempts)
+{
+	while (bpf_map_delete_elem(map_fd, key)) {
+		if (!attempts || (errno != EAGAIN && errno != EBUSY))
+			return -errno;
+
+		usleep(1);
+		attempts--;
+	}
+
+	return 0;
+}
+
 static void test_update_delete(unsigned int fn, void *data)
 {
 	int do_update = ((int *)data)[1];
 	int fd = ((int *)data)[0];
-	int i, key, value;
+	int i, key, value, err;
 
 	for (i = fn; i < MAP_SIZE; i += TASKS) {
 		key = value = i;
 
 		if (do_update) {
-			assert(bpf_map_update_elem(fd, &key, &value,
-						   BPF_NOEXIST) == 0);
-			assert(bpf_map_update_elem(fd, &key, &value,
-						   BPF_EXIST) == 0);
+			err = map_update_retriable(fd, &key, &value, BPF_NOEXIST, MAP_RETRIES);
+			if (err)
+				printf("error %d %d\n", err, errno);
+			assert(err == 0);
+			err = map_update_retriable(fd, &key, &value, BPF_EXIST, MAP_RETRIES);
+			if (err)
+				printf("error %d %d\n", err, errno);
+			assert(err == 0);
 		} else {
-			assert(bpf_map_delete_elem(fd, &key) == 0);
+			err = map_delete_retriable(fd, &key, MAP_RETRIES);
+			if (err)
+				printf("error %d %d\n", err, errno);
+			assert(err == 0);
 		}
 	}
 }
tools/testing/selftests/bpf/xdpxceiver.c | +2 -2
@@ -715,7 +715,7 @@
 		int payload = *((uint32_t *)(pkt_buf[iter]->payload + PKT_HDR_SIZE));
 
 		if (payload == EOT) {
-			ksft_print_msg("End-of-tranmission frame received\n");
+			ksft_print_msg("End-of-transmission frame received\n");
 			fprintf(stdout, "---------------------------------------\n");
 			break;
 		}
@@ -747,7 +747,7 @@
 		}
 
 		if (payloadseqnum == EOT) {
-			ksft_print_msg("End-of-tranmission frame received: PASS\n");
+			ksft_print_msg("End-of-transmission frame received: PASS\n");
 			sigvar = 1;
 			break;
 		}
tools/testing/selftests/drivers/net/mlxsw/qos_pfc.sh | +1 -1
@@ -230,7 +230,7 @@
 	__mlnx_qos -i $swp4 --pfc=0,1,0,0,0,0,0,0 >/dev/null
 	# PG0 will get autoconfigured to Xoff, give PG1 arbitrarily 100K, which
 	# is (-2*MTU) about 80K of delay provision.
-	__mlnx_qos -i $swp3 --buffer_size=0,$_100KB,0,0,0,0,0,0 >/dev/null
+	__mlnx_qos -i $swp4 --buffer_size=0,$_100KB,0,0,0,0,0,0 >/dev/null
 
 	# bridges
 	# -------