Linux kernel mirror (for testing)
git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
kernel os linux

Merge 5.4-rc6 into usb-next

We need the USB fixes in here to build on top of.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+3958 -2199
+6 -1
Documentation/arm64/silicon-errata.rst
···
 | ARM            | MMU-500         | #841119,826419  | N/A                         |
 +----------------+-----------------+-----------------+-----------------------------+
 +----------------+-----------------+-----------------+-----------------------------+
+| Broadcom       | Brahma-B53      | N/A             | ARM64_ERRATUM_845719        |
++----------------+-----------------+-----------------+-----------------------------+
+| Broadcom       | Brahma-B53      | N/A             | ARM64_ERRATUM_843419        |
++----------------+-----------------+-----------------+-----------------------------+
++----------------+-----------------+-----------------+-----------------------------+
 | Cavium         | ThunderX ITS    | #22375,24313    | CAVIUM_ERRATUM_22375        |
 +----------------+-----------------+-----------------+-----------------------------+
 | Cavium         | ThunderX ITS    | #23144          | CAVIUM_ERRATUM_23144        |
···
 +----------------+-----------------+-----------------+-----------------------------+
 | Qualcomm Tech. | Kryo/Falkor v1  | E1003           | QCOM_FALKOR_ERRATUM_1003    |
 +----------------+-----------------+-----------------+-----------------------------+
-| Qualcomm Tech. | Falkor v1       | E1009           | QCOM_FALKOR_ERRATUM_1009    |
+| Qualcomm Tech. | Kryo/Falkor v1  | E1009           | QCOM_FALKOR_ERRATUM_1009    |
 +----------------+-----------------+-----------------+-----------------------------+
 | Qualcomm Tech. | QDF2400 ITS     | E0065           | QCOM_QDF2400_ERRATUM_0065   |
 +----------------+-----------------+-----------------+-----------------------------+
+7 -7
Documentation/networking/device_drivers/intel/e100.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-==============================================================
-Linux* Base Driver for the Intel(R) PRO/100 Family of Adapters
-==============================================================
+=============================================================
+Linux Base Driver for the Intel(R) PRO/100 Family of Adapters
+=============================================================
 
 June 1, 2018
···
 In This Release
 ===============
 
-This file describes the Linux* Base Driver for the Intel(R) PRO/100 Family of
+This file describes the Linux Base Driver for the Intel(R) PRO/100 Family of
 Adapters. This driver includes support for Itanium(R)2-based systems.
 
 For questions related to hardware requirements, refer to the documentation
···
 The latest release of ethtool can be found from
 https://www.kernel.org/pub/software/network/ethtool/
 
-Enabling Wake on LAN* (WoL)
----------------------------
-WoL is provided through the ethtool* utility. For instructions on
+Enabling Wake on LAN (WoL)
+--------------------------
+WoL is provided through the ethtool utility. For instructions on
 enabling WoL with ethtool, refer to the ethtool man page. WoL will be
 enabled on the system during the next shut down or reboot. For this
 driver version, in order to enable WoL, the e100 driver must be loaded
+6 -6
Documentation/networking/device_drivers/intel/e1000.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-===========================================================
-Linux* Base Driver for Intel(R) Ethernet Network Connection
-===========================================================
+==========================================================
+Linux Base Driver for Intel(R) Ethernet Network Connection
+==========================================================
 
 Intel Gigabit Linux driver.
 Copyright(c) 1999 - 2013 Intel Corporation.
···
 The latest release of ethtool can be found from
 https://www.kernel.org/pub/software/network/ethtool/
 
-Enabling Wake on LAN* (WoL)
----------------------------
+Enabling Wake on LAN (WoL)
+--------------------------
 
-WoL is configured through the ethtool* utility.
+WoL is configured through the ethtool utility.
 
 WoL will be enabled on the system during the next shut down or reboot.
 For this driver version, in order to enable WoL, the e1000 driver must be
+7 -7
Documentation/networking/device_drivers/intel/e1000e.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-======================================================
-Linux* Driver for Intel(R) Ethernet Network Connection
-======================================================
+=====================================================
+Linux Driver for Intel(R) Ethernet Network Connection
+=====================================================
 
 Intel Gigabit Linux driver.
 Copyright(c) 2008-2018 Intel Corporation.
···
 manually set devices for 1 Gbps and higher.
 
 Speed, duplex, and autonegotiation advertising are configured through the
-ethtool* utility.
+ethtool utility.
 
 Caution: Only experienced network administrators should force speed and duplex
 or change autonegotiation advertising manually. The settings at the switch must
···
 operate only in full duplex and only at their native speed.
 
 
-Enabling Wake on LAN* (WoL)
----------------------------
-WoL is configured through the ethtool* utility.
+Enabling Wake on LAN (WoL)
+--------------------------
+WoL is configured through the ethtool utility.
 
 WoL will be enabled on the system during the next shut down or reboot. For
 this driver version, in order to enable WoL, the e1000e driver must be loaded
+5 -5
Documentation/networking/device_drivers/intel/fm10k.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-==============================================================
-Linux* Base Driver for Intel(R) Ethernet Multi-host Controller
-==============================================================
+=============================================================
+Linux Base Driver for Intel(R) Ethernet Multi-host Controller
+=============================================================
 
 August 20, 2018
 Copyright(c) 2015-2018 Intel Corporation.
···
 Known Issues/Troubleshooting
 ============================
 
-Enabling SR-IOV in a 64-bit Microsoft* Windows Server* 2012/R2 guest OS under Linux KVM
----------------------------------------------------------------------------------------
+Enabling SR-IOV in a 64-bit Microsoft Windows Server 2012/R2 guest OS under Linux KVM
+-------------------------------------------------------------------------------------
 KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM. This
 includes traditional PCIe devices, as well as SR-IOV-capable devices based on
 the Intel Ethernet Controller XL710.
+4 -4
Documentation/networking/device_drivers/intel/i40e.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-==================================================================
-Linux* Base Driver for the Intel(R) Ethernet Controller 700 Series
-==================================================================
+=================================================================
+Linux Base Driver for the Intel(R) Ethernet Controller 700 Series
+=================================================================
 
 Intel 40 Gigabit Linux driver.
 Copyright(c) 1999-2018 Intel Corporation.
···
 Network Adapter XXV710 based devices.
 
 Speed, duplex, and autonegotiation advertising are configured through the
-ethtool* utility.
+ethtool utility.
 
 Caution: Only experienced network administrators should force speed and duplex
 or change autonegotiation advertising manually. The settings at the switch must
+4 -4
Documentation/networking/device_drivers/intel/iavf.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-==================================================================
-Linux* Base Driver for Intel(R) Ethernet Adaptive Virtual Function
-==================================================================
+=================================================================
+Linux Base Driver for Intel(R) Ethernet Adaptive Virtual Function
+=================================================================
 
 Intel Ethernet Adaptive Virtual Function Linux driver.
 Copyright(c) 2013-2018 Intel Corporation.
···
 Overview
 ========
 
-This file describes the iavf Linux* Base Driver. This driver was formerly
+This file describes the iavf Linux Base Driver. This driver was formerly
 called i40evf.
 
 The iavf driver supports the below mentioned virtual function devices and
+3 -3
Documentation/networking/device_drivers/intel/ice.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-===================================================================
-Linux* Base Driver for the Intel(R) Ethernet Connection E800 Series
-===================================================================
+==================================================================
+Linux Base Driver for the Intel(R) Ethernet Connection E800 Series
+==================================================================
 
 Intel ice Linux driver.
 Copyright(c) 2018 Intel Corporation.
+6 -6
Documentation/networking/device_drivers/intel/igb.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-===========================================================
-Linux* Base Driver for Intel(R) Ethernet Network Connection
-===========================================================
+==========================================================
+Linux Base Driver for Intel(R) Ethernet Network Connection
+==========================================================
 
 Intel Gigabit Linux driver.
 Copyright(c) 1999-2018 Intel Corporation.
···
 https://www.kernel.org/pub/software/network/ethtool/
 
 
-Enabling Wake on LAN* (WoL)
----------------------------
-WoL is configured through the ethtool* utility.
+Enabling Wake on LAN (WoL)
+--------------------------
+WoL is configured through the ethtool utility.
 
 WoL will be enabled on the system during the next shut down or reboot. For
 this driver version, in order to enable WoL, the igb driver must be loaded
+3 -3
Documentation/networking/device_drivers/intel/igbvf.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-============================================================
-Linux* Base Virtual Function Driver for Intel(R) 1G Ethernet
-============================================================
+===========================================================
+Linux Base Virtual Function Driver for Intel(R) 1G Ethernet
+===========================================================
 
 Intel Gigabit Virtual Function Linux driver.
 Copyright(c) 1999-2018 Intel Corporation.
+5 -5
Documentation/networking/device_drivers/intel/ixgbe.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-=============================================================================
-Linux* Base Driver for the Intel(R) Ethernet 10 Gigabit PCI Express Adapters
-=============================================================================
+===========================================================================
+Linux Base Driver for the Intel(R) Ethernet 10 Gigabit PCI Express Adapters
+===========================================================================
 
 Intel 10 Gigabit Linux driver.
 Copyright(c) 1999-2018 Intel Corporation.
···
 Known Issues/Troubleshooting
 ============================
 
-Enabling SR-IOV in a 64-bit Microsoft* Windows Server* 2012/R2 guest OS
------------------------------------------------------------------------
+Enabling SR-IOV in a 64-bit Microsoft Windows Server 2012/R2 guest OS
+---------------------------------------------------------------------
 Linux KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM.
 This includes traditional PCIe devices, as well as SR-IOV-capable devices based
 on the Intel Ethernet Controller XL710.
+3 -3
Documentation/networking/device_drivers/intel/ixgbevf.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-=============================================================
-Linux* Base Virtual Function Driver for Intel(R) 10G Ethernet
-=============================================================
+============================================================
+Linux Base Virtual Function Driver for Intel(R) 10G Ethernet
+============================================================
 
 Intel 10 Gigabit Virtual Function Linux driver.
 Copyright(c) 1999-2018 Intel Corporation.
+3 -3
Documentation/networking/device_drivers/pensando/ionic.rst
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-==========================================================
-Linux* Driver for the Pensando(R) Ethernet adapter family
-==========================================================
+========================================================
+Linux Driver for the Pensando(R) Ethernet adapter family
+========================================================
 
 Pensando Linux Ethernet driver.
 Copyright(c) 2019 Pensando Systems, Inc
+7 -4
Documentation/networking/ip-sysctl.txt
···
 
 somaxconn - INTEGER
 	Limit of socket listen() backlog, known in userspace as SOMAXCONN.
-	Defaults to 128. See also tcp_max_syn_backlog for additional tuning
-	for TCP sockets.
+	Defaults to 4096. (Was 128 before linux-5.4)
+	See also tcp_max_syn_backlog for additional tuning for TCP sockets.
 
 tcp_abort_on_overflow - BOOLEAN
 	If listening service is too slow to accept new connections,
···
 	up to ~64K of unswappable memory.
 
 tcp_max_syn_backlog - INTEGER
-	Maximal number of remembered connection requests, which have not
-	received an acknowledgment from connecting client.
+	Maximal number of remembered connection requests (SYN_RECV),
+	which have not received an acknowledgment from connecting client.
+	This is a per-listener limit.
 	The minimal value is 128 for low memory machines, and it will
 	increase in proportion to the memory of machine.
 	If server suffers from overload, try increasing this number.
+	Remember to also check /proc/sys/net/core/somaxconn
+	A SYN_RECV request socket consumes about 304 bytes of memory.
 
 tcp_max_tw_buckets - INTEGER
 	Maximal number of timewait sockets held by system simultaneously.
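The retuned defaults above change what an unmodified listen() call gets: the backlog argument is silently capped at net.core.somaxconn (4096 since this release, 128 before), while tcp_max_syn_backlog bounds the per-listener SYN_RECV queue. A minimal userspace C sketch of that interaction, not part of this commit; the port number and the /proc read are illustrative only and error checks are elided:

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(8080),	/* arbitrary demo port */
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};
	long somaxconn = 0;
	FILE *f = fopen("/proc/sys/net/core/somaxconn", "r");

	if (f) {
		if (fscanf(f, "%ld", &somaxconn) != 1)
			somaxconn = -1;
		fclose(f);
	}
	bind(fd, (struct sockaddr *)&addr, sizeof(addr));
	/* Asking for more than somaxconn succeeds; the kernel clamps it. */
	listen(fd, 65535);
	printf("requested backlog 65535, effective cap %ld\n", somaxconn);
	close(fd);
	return 0;
}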
+3 -4
MAINTAINERS
···
 NETWORKING [TLS]
 M:	Boris Pismenny <borisp@mellanox.com>
 M:	Aviad Yehezkel <aviadye@mellanox.com>
-M:	Dave Watson <davejwatson@fb.com>
 M:	John Fastabend <john.fastabend@gmail.com>
 M:	Daniel Borkmann <daniel@iogearbox.net>
 M:	Jakub Kicinski <jakub.kicinski@netronome.com>
···
 
 RISC-V ARCHITECTURE
 M:	Paul Walmsley <paul.walmsley@sifive.com>
-M:	Palmer Dabbelt <palmer@sifive.com>
+M:	Palmer Dabbelt <palmer@dabbelt.com>
 M:	Albert Ou <aou@eecs.berkeley.edu>
 L:	linux-riscv@lists.infradead.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git
···
 F:	drivers/media/mmc/siano/
 
 SIFIVE DRIVERS
-M:	Palmer Dabbelt <palmer@sifive.com>
+M:	Palmer Dabbelt <palmer@dabbelt.com>
 M:	Paul Walmsley <paul.walmsley@sifive.com>
 L:	linux-riscv@lists.infradead.org
 T:	git git://github.com/sifive/riscv-linux.git
···
 
 SIFIVE FU540 SYSTEM-ON-CHIP
 M:	Paul Walmsley <paul.walmsley@sifive.com>
-M:	Palmer Dabbelt <palmer@sifive.com>
+M:	Palmer Dabbelt <palmer@dabbelt.com>
 L:	linux-riscv@lists.infradead.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/pjw/sifive.git
 S:	Supported
+1 -1
Makefile
···
 VERSION = 5
 PATCHLEVEL = 4
 SUBLEVEL = 0
-EXTRAVERSION = -rc5
+EXTRAVERSION = -rc6
 NAME = Kleptomaniac Octopus
 
 # *DOCUMENTATION*
+23
arch/arc/boot/dts/hsdk.dts
···
 		clock-frequency = <33333333>;
 	};
 
+	reg_5v0: regulator-5v0 {
+		compatible = "regulator-fixed";
+
+		regulator-name = "5v0-supply";
+		regulator-min-microvolt = <5000000>;
+		regulator-max-microvolt = <5000000>;
+	};
+
 	cpu_intc: cpu-interrupt-controller {
 		compatible = "snps,archs-intc";
 		interrupt-controller;
···
 			clocks = <&input_clk>;
 			cs-gpios = <&creg_gpio 0 GPIO_ACTIVE_LOW>,
 				   <&creg_gpio 1 GPIO_ACTIVE_LOW>;
+
+			spi-flash@0 {
+				compatible = "sst26wf016b", "jedec,spi-nor";
+				reg = <0>;
+				#address-cells = <1>;
+				#size-cells = <1>;
+				spi-max-frequency = <4000000>;
+			};
+
+			adc@1 {
+				compatible = "ti,adc108s102";
+				reg = <1>;
+				vref-supply = <&reg_5v0>;
+				spi-max-frequency = <1000000>;
+			};
 		};
 
 		creg_gpio: gpio@14b0 {
+6
arch/arc/configs/hsdk_defconfig
···
 CONFIG_DEVTMPFS=y
 # CONFIG_STANDALONE is not set
 # CONFIG_PREVENT_FIRMWARE_BUILD is not set
+CONFIG_MTD=y
+CONFIG_MTD_SPI_NOR=y
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_NETDEVICES=y
···
 CONFIG_GPIO_DWAPB=y
 CONFIG_GPIO_SNPS_CREG=y
 # CONFIG_HWMON is not set
+CONFIG_REGULATOR=y
+CONFIG_REGULATOR_FIXED_VOLTAGE=y
 CONFIG_DRM=y
 # CONFIG_DRM_FBDEV_EMULATION is not set
 CONFIG_DRM_UDL=y
···
 CONFIG_MMC_DW=y
 CONFIG_DMADEVICES=y
 CONFIG_DW_AXI_DMAC=y
+CONFIG_IIO=y
+CONFIG_TI_ADC108S102=y
 CONFIG_EXT3_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
+2 -2
arch/arc/kernel/perf_event.c
···
 	/* loop thru all available h/w condition indexes */
 	for (i = 0; i < cc_bcr.c; i++) {
 		write_aux_reg(ARC_REG_CC_INDEX, i);
-		cc_name.indiv.word0 = read_aux_reg(ARC_REG_CC_NAME0);
-		cc_name.indiv.word1 = read_aux_reg(ARC_REG_CC_NAME1);
+		cc_name.indiv.word0 = le32_to_cpu(read_aux_reg(ARC_REG_CC_NAME0));
+		cc_name.indiv.word1 = le32_to_cpu(read_aux_reg(ARC_REG_CC_NAME1));
 
 		arc_pmu_map_hw_event(i, cc_name.str);
 		arc_pmu_add_raw_event_attr(i, cc_name.str);
+2
arch/arm64/include/asm/cputype.h
···
 #define CAVIUM_CPU_PART_THUNDERX_83XX	0x0A3
 #define CAVIUM_CPU_PART_THUNDERX2	0x0AF
 
+#define BRCM_CPU_PART_BRAHMA_B53	0x100
 #define BRCM_CPU_PART_VULCAN		0x516
 
 #define QCOM_CPU_PART_FALKOR_V1		0x800
···
 #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
 #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
 #define MIDR_CAVIUM_THUNDERX2 MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX2)
+#define MIDR_BRAHMA_B53 MIDR_CPU_MODEL(ARM_CPU_IMP_BRCM, BRCM_CPU_PART_BRAHMA_B53)
 #define MIDR_BRCM_VULCAN MIDR_CPU_MODEL(ARM_CPU_IMP_BRCM, BRCM_CPU_PART_VULCAN)
 #define MIDR_QCOM_FALKOR_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR_V1)
 #define MIDR_QCOM_FALKOR MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR)
+8 -7
arch/arm64/include/asm/pgtable-prot.h
···
 #define PROT_DEFAULT		(_PROT_DEFAULT | PTE_MAYBE_NG)
 #define PROT_SECT_DEFAULT	(_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
 
-#define PROT_DEVICE_nGnRnE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
-#define PROT_DEVICE_nGnRE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE))
-#define PROT_NORMAL_NC		(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_NC))
-#define PROT_NORMAL_WT		(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_WT))
-#define PROT_NORMAL		(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL))
+#define PROT_DEVICE_nGnRnE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
+#define PROT_DEVICE_nGnRE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE))
+#define PROT_NORMAL_NC		(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_NC))
+#define PROT_NORMAL_WT		(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_WT))
+#define PROT_NORMAL		(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL))
 
 #define PROT_SECT_DEVICE_nGnRE	(PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_DEVICE_nGnRE))
 #define PROT_SECT_NORMAL	(PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
···
 #define PAGE_S2_DEVICE		__pgprot(_PROT_DEFAULT | PAGE_S2_MEMATTR(DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_S2_XN)
 
 #define PAGE_NONE		__pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
-#define PAGE_SHARED		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
-#define PAGE_SHARED_EXEC	__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_WRITE)
+/* shared+writable pages are clean by default, hence PTE_RDONLY|PTE_WRITE */
+#define PAGE_SHARED		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
+#define PAGE_SHARED_EXEC	__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_WRITE)
 #define PAGE_READONLY		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
 #define PAGE_READONLY_EXEC	__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN)
 #define PAGE_EXECONLY		__pgprot(_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PTE_PXN)
+48 -11
arch/arm64/kernel/cpu_errata.c
···
 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+	MIDR_ALL_VERSIONS(MIDR_BRAHMA_B53),
 	{},
 };
···
 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+	MIDR_ALL_VERSIONS(MIDR_BRAHMA_B53),
 	{ /* sentinel */ }
 };
···
 #endif
 
 #ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI
-
-static const struct midr_range arm64_repeat_tlbi_cpus[] = {
+static const struct arm64_cpu_capabilities arm64_repeat_tlbi_list[] = {
 #ifdef CONFIG_QCOM_FALKOR_ERRATUM_1009
-	MIDR_RANGE(MIDR_QCOM_FALKOR_V1, 0, 0, 0, 0),
+	{
+		ERRATA_MIDR_REV(MIDR_QCOM_FALKOR_V1, 0, 0)
+	},
+	{
+		.midr_range.model = MIDR_QCOM_KRYO,
+		.matches = is_kryo_midr,
+	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_1286807
-	MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 0),
+	{
+		ERRATA_MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 0),
+	},
 #endif
 	{},
 };
-
 #endif
 
 #ifdef CONFIG_CAVIUM_ERRATUM_27456
···
 };
 #endif
 
+#ifdef CONFIG_ARM64_ERRATUM_845719
+static const struct midr_range erratum_845719_list[] = {
+	/* Cortex-A53 r0p[01234] */
+	MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 4),
+	/* Brahma-B53 r0p[0] */
+	MIDR_REV(MIDR_BRAHMA_B53, 0, 0),
+	{},
+};
+#endif
+
+#ifdef CONFIG_ARM64_ERRATUM_843419
+static const struct arm64_cpu_capabilities erratum_843419_list[] = {
+	{
+		/* Cortex-A53 r0p[01234] */
+		.matches = is_affected_midr_range,
+		ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 4),
+		MIDR_FIXED(0x4, BIT(8)),
+	},
+	{
+		/* Brahma-B53 r0p[0] */
+		.matches = is_affected_midr_range,
+		ERRATA_MIDR_REV(MIDR_BRAHMA_B53, 0, 0),
+	},
+	{},
+};
+#endif
+
 const struct arm64_cpu_capabilities arm64_errata[] = {
 #ifdef CONFIG_ARM64_WORKAROUND_CLEAN_CACHE
 	{
···
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_843419
 	{
-		/* Cortex-A53 r0p[01234] */
 		.desc = "ARM erratum 843419",
 		.capability = ARM64_WORKAROUND_843419,
-		ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 4),
-		MIDR_FIXED(0x4, BIT(8)),
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.matches = cpucap_multi_entry_cap_matches,
+		.match_list = erratum_843419_list,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_845719
 	{
-		/* Cortex-A53 r0p[01234] */
 		.desc = "ARM erratum 845719",
 		.capability = ARM64_WORKAROUND_845719,
-		ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 4),
+		ERRATA_MIDR_RANGE_LIST(erratum_845719_list),
 	},
 #endif
 #ifdef CONFIG_CAVIUM_ERRATUM_23154
···
 	{
 		.desc = "Qualcomm Technologies Falkor/Kryo erratum 1003",
 		.capability = ARM64_WORKAROUND_QCOM_FALKOR_E1003,
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.matches = cpucap_multi_entry_cap_matches,
 		.match_list = qcom_erratum_1003_list,
 	},
···
 	{
 		.desc = "Qualcomm erratum 1009, ARM erratum 1286807",
 		.capability = ARM64_WORKAROUND_REPEAT_TLBI,
-		ERRATA_MIDR_RANGE_LIST(arm64_repeat_tlbi_cpus),
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.matches = cpucap_multi_entry_cap_matches,
+		.match_list = arm64_repeat_tlbi_list,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_858921
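The erratum lists added above all share one shape: an array of MIDR ranges closed by an all-zero sentinel entry that a generic matcher walks. A hedged userspace rendition of that pattern follows; the struct and function names are invented for illustration and the range check is simplified to a single revision number, with only the part numbers 0xd03 (Cortex-A53) and 0x100 (Brahma-B53) taken from the diff:

#include <stdbool.h>
#include <stdio.h>

struct midr_range_sketch {
	unsigned int model;	/* 0 terminates the list */
	unsigned int rev_min;
	unsigned int rev_max;
};

/* Walk the list until the all-zero sentinel; first hit wins. */
static bool midr_matches_list(unsigned int model, unsigned int rev,
			      const struct midr_range_sketch *list)
{
	for (; list->model; list++)
		if (model == list->model &&
		    rev >= list->rev_min && rev <= list->rev_max)
			return true;
	return false;
}

int main(void)
{
	/* Cortex-A53 r0p0..r0p4 and Brahma-B53 r0p0, as in erratum 845719. */
	static const struct midr_range_sketch erratum_845719[] = {
		{ 0xd03, 0, 4 },
		{ 0x100, 0, 0 },
		{}	/* sentinel */
	};

	printf("A53 r0p2 affected: %d\n",
	       midr_matches_list(0xd03, 2, erratum_845719));
	return 0;
}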
+1 -1
arch/parisc/kernel/entry.S
···
 	copy	%rp, %r26
 	LDREG	-FTRACE_FRAME_SIZE-PT_SZ_ALGN(%sp), %r25
 	ldo	-8(%r25), %r25
-	copy	%r3, %arg2
+	ldo	-FTRACE_FRAME_SIZE(%r1), %arg2
 	b,l	ftrace_function_trampoline, %rp
 	copy	%r1, %arg3	/* struct pt_regs */
 
+1
arch/powerpc/include/asm/book3s/32/kup.h
···
 
 static inline void kuap_update_sr(u32 sr, u32 addr, u32 end)
 {
+	addr &= 0xf0000000;	/* align addr to start of segment */
 	barrier();	/* make sure thread.kuap is updated before playing with SRs */
 	while (addr < end) {
 		mtsrin(sr, addr);
+3
arch/powerpc/include/asm/elf.h
···
 	ARCH_DLINFO_CACHE_GEOMETRY;					\
 } while (0)
 
+/* Relocate the kernel image to @final_address */
+void relocate(unsigned long final_address);
+
 #endif /* _ASM_POWERPC_ELF_H */
+13
arch/powerpc/kernel/prom_init.c
···
 	/* Switch to secure mode. */
 	prom_printf("Switching to secure mode.\n");
 
+	/*
+	 * The ultravisor will do an integrity check of the kernel image but we
+	 * relocated it so the check will fail. Restore the original image by
+	 * relocating it back to the kernel virtual base address.
+	 */
+	if (IS_ENABLED(CONFIG_RELOCATABLE))
+		relocate(KERNELBASE);
+
 	ret = enter_secure_mode(kbase, fdt);
+
+	/* Relocate the kernel again. */
+	if (IS_ENABLED(CONFIG_RELOCATABLE))
+		relocate(kbase);
+
 	if (ret != U_SUCCESS) {
 		prom_printf("Returned %d from switching to secure mode.\n", ret);
 		prom_rtas_os_term("Switch to secure mode failed.\n");
+2 -1
arch/powerpc/kernel/prom_init_check.sh
···
 __secondary_hold_acknowledge __secondary_hold_spinloop __start
 logo_linux_clut224 btext_prepare_BAT
 reloc_got2 kernstart_addr memstart_addr linux_banner _stext
-__prom_init_toc_start __prom_init_toc_end btext_setup_display TOC."
+__prom_init_toc_start __prom_init_toc_end btext_setup_display TOC.
+relocate"
 
 NM="$1"
 OBJ="$2"
+1 -1
arch/powerpc/platforms/powernv/eeh-powernv.c
···
 {
 	struct pci_dn *pdn = pci_get_pdn(pdev);
 
-	if (eeh_has_flag(EEH_FORCE_DISABLED))
+	if (!pdn || eeh_has_flag(EEH_FORCE_DISABLED))
 		return;
 
 	dev_dbg(&pdev->dev, "EEH: Setting up device\n");
+37 -16
arch/powerpc/platforms/powernv/smp.c
···
 	return 0;
 }
 
+static void pnv_flush_interrupts(void)
+{
+	if (cpu_has_feature(CPU_FTR_ARCH_300)) {
+		if (xive_enabled())
+			xive_flush_interrupt();
+		else
+			icp_opal_flush_interrupt();
+	} else {
+		icp_native_flush_interrupt();
+	}
+}
+
 static void pnv_smp_cpu_kill_self(void)
 {
+	unsigned long srr1, unexpected_mask, wmask;
 	unsigned int cpu;
-	unsigned long srr1, wmask;
 	u64 lpcr_val;
 
 	/* Standard hot unplug procedure */
-	/*
-	 * This hard disables local interurpts, ensuring we have no lazy
-	 * irqs pending.
-	 */
-	WARN_ON(irqs_disabled());
-	hard_irq_disable();
-	WARN_ON(lazy_irq_pending());
 
 	idle_task_exit();
 	current->active_mm = NULL; /* for sanity */
···
 	wmask = SRR1_WAKEMASK;
 	if (cpu_has_feature(CPU_FTR_ARCH_207S))
 		wmask = SRR1_WAKEMASK_P8;
+
+	/*
+	 * This turns the irq soft-disabled state we're called with, into a
+	 * hard-disabled state with pending irq_happened interrupts cleared.
+	 *
+	 * PACA_IRQ_DEC   - Decrementer should be ignored.
+	 * PACA_IRQ_HMI   - Can be ignored, processing is done in real mode.
+	 * PACA_IRQ_DBELL, EE, PMI - Unexpected.
+	 */
+	hard_irq_disable();
+	if (generic_check_cpu_restart(cpu))
+		goto out;
+
+	unexpected_mask = ~(PACA_IRQ_DEC | PACA_IRQ_HMI | PACA_IRQ_HARD_DIS);
+	if (local_paca->irq_happened & unexpected_mask) {
+		if (local_paca->irq_happened & PACA_IRQ_EE)
+			pnv_flush_interrupts();
+		DBG("CPU%d Unexpected exit while offline irq_happened=%lx!\n",
+				cpu, local_paca->irq_happened);
+	}
+	local_paca->irq_happened = PACA_IRQ_HARD_DIS;
 
 	/*
 	 * We don't want to take decrementer interrupts while we are
···
 
 		srr1 = pnv_cpu_offline(cpu);
 
+		WARN_ON_ONCE(!irqs_disabled());
 		WARN_ON(lazy_irq_pending());
 
 		/*
···
 		 */
 		if (((srr1 & wmask) == SRR1_WAKEEE) ||
 		    ((srr1 & wmask) == SRR1_WAKEHVI)) {
-			if (cpu_has_feature(CPU_FTR_ARCH_300)) {
-				if (xive_enabled())
-					xive_flush_interrupt();
-				else
-					icp_opal_flush_interrupt();
-			} else
-				icp_native_flush_interrupt();
+			pnv_flush_interrupts();
 		} else if ((srr1 & wmask) == SRR1_WAKEHDBELL) {
 			unsigned long msg = PPC_DBELL_TYPE(PPC_DBELL_SERVER);
 			asm volatile(PPC_MSGCLR(%0) : : "r" (msg));
···
 	 */
 	lpcr_val = mfspr(SPRN_LPCR) | (u64)LPCR_PECE1;
 	pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
-
+out:
 	DBG("CPU%d coming online...\n", cpu);
 }
+7
arch/riscv/include/asm/io.h
···
 
 #include <linux/types.h>
 #include <asm/mmiowb.h>
+#include <asm/pgtable.h>
 
 extern void __iomem *ioremap(phys_addr_t offset, unsigned long size);
 
···
 #define readq(c)	({ u64 __v; __io_br(); __v = readq_cpu(c); __io_ar(__v); __v; })
 #define writeq(v,c)	({ __io_bw(); writeq_cpu((v),(c)); __io_aw(); })
 #endif
+
+/*
+ * I/O port access constants.
+ */
+#define IO_SPACE_LIMIT		(PCI_IO_SIZE - 1)
+#define PCI_IOBASE		((void __iomem *)PCI_IO_START)
 
 /*
  * Emulation routines for the port-mapped IO space used by some PCI drivers.
+3
arch/riscv/include/asm/irq.h
···
 #ifndef _ASM_RISCV_IRQ_H
 #define _ASM_RISCV_IRQ_H
 
+#include <linux/interrupt.h>
+#include <linux/linkage.h>
+
 #define NR_IRQS         0
 
 void riscv_timer_interrupt(void);
+6 -1
arch/riscv/include/asm/pgtable.h
···
 #define _ASM_RISCV_PGTABLE_H
 
 #include <linux/mmzone.h>
+#include <linux/sizes.h>
 
 #include <asm/pgtable-bits.h>
 
···
 #define VMALLOC_SIZE     (KERN_VIRT_SIZE >> 1)
 #define VMALLOC_END      (PAGE_OFFSET - 1)
 #define VMALLOC_START    (PAGE_OFFSET - VMALLOC_SIZE)
+#define PCI_IO_SIZE      SZ_16M
 
 /*
  * Roughly size the vmemmap space to be large enough to fit enough
···
 
 #define vmemmap		((struct page *)VMEMMAP_START)
 
-#define FIXADDR_TOP      (VMEMMAP_START)
+#define PCI_IO_END       VMEMMAP_START
+#define PCI_IO_START     (PCI_IO_END - PCI_IO_SIZE)
+#define FIXADDR_TOP      PCI_IO_START
+
 #ifdef CONFIG_64BIT
 #define FIXADDR_SIZE     PMD_SIZE
 #else
+1
arch/riscv/include/asm/switch_to.h
···
 #ifndef _ASM_RISCV_SWITCH_TO_H
 #define _ASM_RISCV_SWITCH_TO_H
 
+#include <linux/sched/task_stack.h>
 #include <asm/processor.h>
 #include <asm/ptrace.h>
 #include <asm/csr.h>
+1
arch/riscv/kernel/cpufeature.c
···
 #include <asm/processor.h>
 #include <asm/hwcap.h>
 #include <asm/smp.h>
+#include <asm/switch_to.h>
 
 unsigned long elf_hwcap __read_mostly;
 #ifdef CONFIG_FPU
+21
arch/riscv/kernel/head.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 SiFive, Inc.
+ */
+#ifndef __ASM_HEAD_H
+#define __ASM_HEAD_H
+
+#include <linux/linkage.h>
+#include <linux/init.h>
+
+extern atomic_t hart_lottery;
+
+asmlinkage void do_page_fault(struct pt_regs *regs);
+asmlinkage void __init setup_vm(uintptr_t dtb_pa);
+
+extern void *__cpu_up_stack_pointer[];
+extern void *__cpu_up_task_pointer[];
+
+void __init parse_dtb(void);
+
+#endif /* __ASM_HEAD_H */
+1 -1
arch/riscv/kernel/irq.c
···
 	return 0;
 }
 
-asmlinkage void __irq_entry do_IRQ(struct pt_regs *regs)
+asmlinkage __visible void __irq_entry do_IRQ(struct pt_regs *regs)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
 
+1
arch/riscv/kernel/module-sections.c
···
 #include <linux/elf.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/moduleloader.h>
 
 unsigned long module_emit_got_entry(struct module *mod, unsigned long val)
 {
+2
arch/riscv/kernel/process.c
···
  * Copyright (C) 2017 SiFive
  */
 
+#include <linux/cpu.h>
 #include <linux/kernel.h>
 #include <linux/sched.h>
 #include <linux/sched/task_stack.h>
···
 #include <asm/csr.h>
 #include <asm/string.h>
 #include <asm/switch_to.h>
+#include <asm/thread_info.h>
 
 extern asmlinkage void ret_from_fork(void);
 extern asmlinkage void ret_from_kernel_thread(void);
+2 -2
arch/riscv/kernel/ptrace.c
···
  * Allows PTRACE_SYSCALL to work. These are called from entry.S in
  * {handle,ret_from}_syscall.
  */
-void do_syscall_trace_enter(struct pt_regs *regs)
+__visible void do_syscall_trace_enter(struct pt_regs *regs)
 {
 	if (test_thread_flag(TIF_SYSCALL_TRACE))
 		if (tracehook_report_syscall_entry(regs))
···
 	audit_syscall_entry(regs->a7, regs->a0, regs->a1, regs->a2, regs->a3);
 }
 
-void do_syscall_trace_exit(struct pt_regs *regs)
+__visible void do_syscall_trace_exit(struct pt_regs *regs)
 {
 	audit_syscall_exit(regs);
 
+1
arch/riscv/kernel/reset.c
···
  */
 
 #include <linux/reboot.h>
+#include <linux/pm.h>
 #include <asm/sbi.h>
 
 static void default_power_off(void)
+2
arch/riscv/kernel/setup.c
···
 #include <asm/tlbflush.h>
 #include <asm/thread_info.h>
 
+#include "head.h"
+
 #ifdef CONFIG_DUMMY_CONSOLE
 struct screen_info screen_info = {
 	.orig_video_lines	= 30,
+4 -4
arch/riscv/kernel/signal.c
···
 
 #ifdef CONFIG_FPU
 static long restore_fp_state(struct pt_regs *regs,
-			     union __riscv_fp_state *sc_fpregs)
+			     union __riscv_fp_state __user *sc_fpregs)
 {
 	long err;
 	struct __riscv_d_ext_state __user *state = &sc_fpregs->d;
···
 }
 
 static long save_fp_state(struct pt_regs *regs,
-			  union __riscv_fp_state *sc_fpregs)
+			  union __riscv_fp_state __user *sc_fpregs)
 {
 	long err;
 	struct __riscv_d_ext_state __user *state = &sc_fpregs->d;
···
  * notification of userspace execution resumption
  * - triggered by the _TIF_WORK_MASK flags
  */
-asmlinkage void do_notify_resume(struct pt_regs *regs,
-	unsigned long thread_info_flags)
+asmlinkage __visible void do_notify_resume(struct pt_regs *regs,
+					   unsigned long thread_info_flags)
 {
 	/* Handle pending signal delivery */
 	if (thread_info_flags & _TIF_SIGPENDING)
+2
arch/riscv/kernel/smp.c
···
  * Copyright (C) 2017 SiFive
  */
 
+#include <linux/cpu.h>
 #include <linux/interrupt.h>
+#include <linux/profile.h>
 #include <linux/smp.h>
 #include <linux/sched.h>
 #include <linux/seq_file.h>
+4 -1
arch/riscv/kernel/smpboot.c
···
 #include <asm/tlbflush.h>
 #include <asm/sections.h>
 #include <asm/sbi.h>
+#include <asm/smp.h>
+
+#include "head.h"
 
 void *__cpu_up_stack_pointer[NR_CPUS];
 void *__cpu_up_task_pointer[NR_CPUS];
···
 /*
  * C entry point for a secondary processor.
  */
-asmlinkage void __init smp_callin(void)
+asmlinkage __visible void __init smp_callin(void)
 {
 	struct mm_struct *mm = &init_mm;
 
+1
arch/riscv/kernel/syscall_table.c
···
 #include <linux/syscalls.h>
 #include <asm-generic/syscalls.h>
 #include <asm/vdso.h>
+#include <asm/syscall.h>
 
 #undef __SYSCALL
 #define __SYSCALL(nr, call)	[nr] = (call),
+1
arch/riscv/kernel/time.c
···
 #include <linux/clocksource.h>
 #include <linux/delay.h>
 #include <asm/sbi.h>
+#include <asm/processor.h>
 
 unsigned long riscv_timebase;
 EXPORT_SYMBOL_GPL(riscv_timebase);
+3 -2
arch/riscv/kernel/traps.c
···
  * Copyright (C) 2012 Regents of the University of California
  */
 
+#include <linux/cpu.h>
 #include <linux/kernel.h>
 #include <linux/init.h>
 #include <linux/sched.h>
···
 }
 
 #define DO_ERROR_INFO(name, signo, code, str)				\
-asmlinkage void name(struct pt_regs *regs)				\
+asmlinkage __visible void name(struct pt_regs *regs)			\
 {									\
 	do_trap_error(regs, signo, code, regs->sepc, "Oops - " str);	\
 }
···
 	return (((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32) ? 4UL : 2UL);
 }
 
-asmlinkage void do_trap_break(struct pt_regs *regs)
+asmlinkage __visible void do_trap_break(struct pt_regs *regs)
 {
 	if (user_mode(regs))
 		force_sig_fault(SIGTRAP, TRAP_BRKPT, (void __user *)regs->sepc);
+2 -1
arch/riscv/kernel/vdso.c
···
  * Copyright (C) 2015 Regents of the University of California
  */
 
+#include <linux/elf.h>
 #include <linux/mm.h>
 #include <linux/slab.h>
 #include <linux/binfmts.h>
···
 	struct vdso_data	data;
 	u8			page[PAGE_SIZE];
 } vdso_data_store __page_aligned_data;
-struct vdso_data *vdso_data = &vdso_data_store.data;
+static struct vdso_data *vdso_data = &vdso_data_store.data;
 
 static int __init vdso_init(void)
 {
+1
arch/riscv/mm/context.c
···
 #include <linux/mm.h>
 #include <asm/tlbflush.h>
 #include <asm/cacheflush.h>
+#include <asm/mmu_context.h>
 
 /*
  * When necessary, performs a deferred icache flush for the given MM context,
+2
arch/riscv/mm/fault.c
···
 #include <asm/ptrace.h>
 #include <asm/tlbflush.h>
 
+#include "../kernel/head.h"
+
 /*
  * This routine handles page faults.  It determines the address and the
  * problem, and then passes it off to one of the appropriate routines.
+3 -2
arch/riscv/mm/init.c
···
 #include <asm/pgtable.h>
 #include <asm/io.h>
 
+#include "../kernel/head.h"
+
 unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
 							__page_aligned_bss;
 EXPORT_SYMBOL(empty_zero_page);
···
  */
 
 #ifndef __riscv_cmodel_medany
-#error "setup_vm() is called from head.S before relocate so it should "
-	"not use absolute addressing."
+#error "setup_vm() is called from head.S before relocate so it should not use absolute addressing."
 #endif
 
 asmlinkage void __init setup_vm(uintptr_t dtb_pa)
+1 -1
arch/riscv/mm/sifive_l2_cache.c
···
 	return IRQ_HANDLED;
 }
 
-int __init sifive_l2_init(void)
+static int __init sifive_l2_init(void)
 {
 	struct device_node *np;
 	struct resource res;
+1
arch/s390/include/asm/unwind.h
···
 	struct task_struct *task;
 	struct pt_regs *regs;
 	unsigned long sp, ip;
+	bool reuse_sp;
 	int graph_idx;
 	bool reliable;
 	bool error;
+22 -7
arch/s390/kernel/idle.c
···
 static ssize_t show_idle_time(struct device *dev,
 				struct device_attribute *attr, char *buf)
 {
+	unsigned long long now, idle_time, idle_enter, idle_exit, in_idle;
 	struct s390_idle_data *idle = &per_cpu(s390_idle, dev->id);
-	unsigned long long now, idle_time, idle_enter, idle_exit;
 	unsigned int seq;
 
 	do {
-		now = get_tod_clock();
 		seq = read_seqcount_begin(&idle->seqcount);
 		idle_time = READ_ONCE(idle->idle_time);
 		idle_enter = READ_ONCE(idle->clock_idle_enter);
 		idle_exit = READ_ONCE(idle->clock_idle_exit);
 	} while (read_seqcount_retry(&idle->seqcount, seq));
-	idle_time += idle_enter ? ((idle_exit ? : now) - idle_enter) : 0;
+	in_idle = 0;
+	now = get_tod_clock();
+	if (idle_enter) {
+		if (idle_exit) {
+			in_idle = idle_exit - idle_enter;
+		} else if (now > idle_enter) {
+			in_idle = now - idle_enter;
+		}
+	}
+	idle_time += in_idle;
 	return sprintf(buf, "%llu\n", idle_time >> 12);
 }
 DEVICE_ATTR(idle_time_us, 0444, show_idle_time, NULL);
···
 u64 arch_cpu_idle_time(int cpu)
 {
 	struct s390_idle_data *idle = &per_cpu(s390_idle, cpu);
-	unsigned long long now, idle_enter, idle_exit;
+	unsigned long long now, idle_enter, idle_exit, in_idle;
 	unsigned int seq;
 
 	do {
-		now = get_tod_clock();
 		seq = read_seqcount_begin(&idle->seqcount);
 		idle_enter = READ_ONCE(idle->clock_idle_enter);
 		idle_exit = READ_ONCE(idle->clock_idle_exit);
 	} while (read_seqcount_retry(&idle->seqcount, seq));
-
-	return cputime_to_nsecs(idle_enter ? ((idle_exit ?: now) - idle_enter) : 0);
+	in_idle = 0;
+	now = get_tod_clock();
+	if (idle_enter) {
+		if (idle_exit) {
+			in_idle = idle_exit - idle_enter;
+		} else if (now > idle_enter) {
+			in_idle = now - idle_enter;
+		}
+	}
+	return cputime_to_nsecs(in_idle);
 }
 
 void arch_cpu_idle_enter(void)
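The s390 fix above hinges on sampling the clock only after the seqcount loop has produced a consistent snapshot; sampling inside the loop lets `now` predate `clock_idle_enter` and the unsigned subtraction wraps. A self-contained sketch of that reader pattern, using C11 atomics as a stand-in for the kernel's seqcount API; all names and values are illustrative:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static atomic_uint seq;			/* even = stable, odd = writer active */
static uint64_t idle_enter, idle_exit;	/* published by the writer side */

static uint64_t read_clock(void)	/* stand-in for get_tod_clock() */
{
	return 1000;
}

static uint64_t current_idle(void)
{
	uint64_t enter, exit_, now, in_idle = 0;
	unsigned int s;

	do {	/* retry until the snapshot is consistent */
		s = atomic_load(&seq);
		enter = idle_enter;
		exit_ = idle_exit;
	} while ((s & 1) || atomic_load(&seq) != s);

	/* Sample the clock only after the snapshot, as in the fix. */
	now = read_clock();
	if (enter) {
		if (exit_)
			in_idle = exit_ - enter;
		else if (now > enter)
			in_idle = now - enter;
	}
	return in_idle;
}

int main(void)
{
	idle_enter = 600;	/* CPU went idle at t=600 and is still idle */
	printf("idle so far: %llu\n", (unsigned long long)current_idle());
	return 0;
}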
+13 -5
arch/s390/kernel/unwind_bc.c
···
 
 	regs = state->regs;
 	if (unlikely(regs)) {
-		sp = READ_ONCE_NOCHECK(regs->gprs[15]);
-		if (unlikely(outside_of_stack(state, sp))) {
-			if (!update_stack_info(state, sp))
-				goto out_err;
+		if (state->reuse_sp) {
+			sp = state->sp;
+			state->reuse_sp = false;
+		} else {
+			sp = READ_ONCE_NOCHECK(regs->gprs[15]);
+			if (unlikely(outside_of_stack(state, sp))) {
+				if (!update_stack_info(state, sp))
+					goto out_err;
+			}
 		}
 		sf = (struct stack_frame *) sp;
 		ip = READ_ONCE_NOCHECK(sf->gprs[8]);
···
 {
 	struct stack_info *info = &state->stack_info;
 	unsigned long *mask = &state->stack_mask;
+	bool reliable, reuse_sp;
 	struct stack_frame *sf;
 	unsigned long ip;
-	bool reliable;
 
 	memset(state, 0, sizeof(*state));
 	state->task = task;
···
 	if (regs) {
 		ip = READ_ONCE_NOCHECK(regs->psw.addr);
 		reliable = true;
+		reuse_sp = true;
 	} else {
 		sf = (struct stack_frame *) sp;
 		ip = READ_ONCE_NOCHECK(sf->gprs[8]);
 		reliable = false;
+		reuse_sp = false;
 	}
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
···
 	state->sp = sp;
 	state->ip = ip;
 	state->reliable = reliable;
+	state->reuse_sp = reuse_sp;
 }
 EXPORT_SYMBOL_GPL(__unwind_start);
+6 -6
arch/s390/mm/cmm.c
···
 	}
 
 	if (write) {
-		len = *lenp;
-		if (copy_from_user(buf, buffer,
-				   len > sizeof(buf) ? sizeof(buf) : len))
+		len = min(*lenp, sizeof(buf));
+		if (copy_from_user(buf, buffer, len))
 			return -EFAULT;
-		buf[sizeof(buf) - 1] = '\0';
+		buf[len - 1] = '\0';
 		cmm_skip_blanks(buf, &p);
 		nr = simple_strtoul(p, &p, 0);
 		cmm_skip_blanks(p, &p);
 		seconds = simple_strtoul(p, &p, 0);
 		cmm_set_timeout(nr, seconds);
+		*ppos += *lenp;
 	} else {
 		len = sprintf(buf, "%ld %ld\n",
 			      cmm_timeout_pages, cmm_timeout_seconds);
···
 			len = *lenp;
 		if (copy_to_user(buffer, buf, len))
 			return -EFAULT;
+		*lenp = len;
+		*ppos += len;
 	}
-	*lenp = len;
-	*ppos += len;
 	return 0;
 }
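The cmm.c change above is the generic safe pattern for a bounded write handler: clamp the copy length to both the caller's length and the local buffer, then NUL-terminate at the end of what was actually copied rather than at the end of the buffer. A small userspace sketch of just that clamping; handle_write() and the buffer size are invented for illustration, with memcpy() standing in for copy_from_user():

#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define min(a, b) ((a) < (b) ? (a) : (b))

static void handle_write(const char *user_data, size_t user_len)
{
	char buf[64];
	size_t len = min(user_len, sizeof(buf));

	if (!len)
		return;
	memcpy(buf, user_data, len);	/* copy_from_user() stand-in */
	buf[len - 1] = '\0';		/* terminate what we copied */
	printf("parsed: %s\n", buf);
}

int main(void)
{
	handle_write("42 30\n", 6);
	return 0;
}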
+6 -2
arch/um/drivers/ubd_kern.c
···
 
 	spin_unlock_irq(&ubd_dev->lock);
 
-	if (ret < 0)
-		blk_mq_requeue_request(req, true);
+	if (ret < 0) {
+		if (ret == -ENOMEM)
+			res = BLK_STS_RESOURCE;
+		else
+			res = BLK_STS_DEV_RESOURCE;
+	}
 
 	return res;
 }
+3 -1
arch/x86/boot/compressed/eboot.c
···
 #include <asm/e820/types.h>
 #include <asm/setup.h>
 #include <asm/desc.h>
+#include <asm/boot.h>
 
 #include "../string.h"
 #include "eboot.h"
···
 	status = efi_relocate_kernel(sys_table, &bzimage_addr,
 				     hdr->init_size, hdr->init_size,
 				     hdr->pref_address,
-				     hdr->kernel_alignment);
+				     hdr->kernel_alignment,
+				     LOAD_PHYSICAL_ADDR);
 	if (status != EFI_SUCCESS) {
 		efi_printk(sys_table, "efi_relocate_kernel() failed!\n");
 		goto fail;
+5 -3
arch/x86/events/amd/ibs.c
···
 			  struct hw_perf_event *hwc, u64 config)
 {
 	config &= ~perf_ibs->cnt_mask;
-	wrmsrl(hwc->config_base, config);
+	if (boot_cpu_data.x86 == 0x10)
+		wrmsrl(hwc->config_base, config);
 	config &= ~perf_ibs->enable_mask;
 	wrmsrl(hwc->config_base, config);
 }
···
 	},
 	.msr			= MSR_AMD64_IBSOPCTL,
 	.config_mask		= IBS_OP_CONFIG_MASK,
-	.cnt_mask		= IBS_OP_MAX_CNT,
+	.cnt_mask		= IBS_OP_MAX_CNT | IBS_OP_CUR_CNT |
+				  IBS_OP_CUR_CNT_RAND,
 	.enable_mask		= IBS_OP_ENABLE,
 	.valid_mask		= IBS_OP_VAL,
 	.max_period		= IBS_OP_MAX_CNT << 4,
···
 	if (event->attr.sample_type & PERF_SAMPLE_RAW)
 		offset_max = perf_ibs->offset_max;
 	else if (check_rip)
-		offset_max = 2;
+		offset_max = 3;
 	else
 		offset_max = 1;
 	do {
+38 -6
arch/x86/events/intel/uncore.c
···
 	local64_set(&event->hw.prev_count, uncore_read_counter(box, event));
 	uncore_enable_event(box, event);
 
-	if (box->n_active == 1) {
-		uncore_enable_box(box);
+	if (box->n_active == 1)
 		uncore_pmu_start_hrtimer(box);
-	}
 }
 
 void uncore_pmu_event_stop(struct perf_event *event, int flags)
···
 		WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
 		hwc->state |= PERF_HES_STOPPED;
 
-		if (box->n_active == 0) {
-			uncore_disable_box(box);
+		if (box->n_active == 0)
 			uncore_pmu_cancel_hrtimer(box);
-		}
 	}
 
 	if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
···
 	return ret;
 }
 
+static void uncore_pmu_enable(struct pmu *pmu)
+{
+	struct intel_uncore_pmu *uncore_pmu;
+	struct intel_uncore_box *box;
+
+	uncore_pmu = container_of(pmu, struct intel_uncore_pmu, pmu);
+	if (!uncore_pmu)
+		return;
+
+	box = uncore_pmu_to_box(uncore_pmu, smp_processor_id());
+	if (!box)
+		return;
+
+	if (uncore_pmu->type->ops->enable_box)
+		uncore_pmu->type->ops->enable_box(box);
+}
+
+static void uncore_pmu_disable(struct pmu *pmu)
+{
+	struct intel_uncore_pmu *uncore_pmu;
+	struct intel_uncore_box *box;
+
+	uncore_pmu = container_of(pmu, struct intel_uncore_pmu, pmu);
+	if (!uncore_pmu)
+		return;
+
+	box = uncore_pmu_to_box(uncore_pmu, smp_processor_id());
+	if (!box)
+		return;
+
+	if (uncore_pmu->type->ops->disable_box)
+		uncore_pmu->type->ops->disable_box(box);
+}
+
 static ssize_t uncore_get_attr_cpumask(struct device *dev,
 				struct device_attribute *attr, char *buf)
 {
···
 		pmu->pmu = (struct pmu) {
 			.attr_groups	= pmu->type->attr_groups,
 			.task_ctx_nr	= perf_invalid_context,
+			.pmu_enable	= uncore_pmu_enable,
+			.pmu_disable	= uncore_pmu_disable,
 			.event_init	= uncore_pmu_event_init,
 			.add		= uncore_pmu_event_add,
 			.del		= uncore_pmu_event_del,
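uncore_pmu_enable() above recovers its intel_uncore_pmu from the embedded struct pmu via container_of(), the standard way a kernel callback gets back to its enclosing object. A minimal userspace rendition of that pattern; the _sketch names are invented, and the kernel's macro additionally type-checks the member:

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct pmu_sketch { int dummy; };

struct uncore_pmu_sketch {
	const char *name;
	struct pmu_sketch pmu;	/* embedded generic object */
};

int main(void)
{
	struct uncore_pmu_sketch u = { .name = "uncore_cbox_0" };
	struct pmu_sketch *p = &u.pmu;	/* callbacks only receive this */
	struct uncore_pmu_sketch *back =
		container_of(p, struct uncore_pmu_sketch, pmu);

	printf("%s\n", back->name);	/* prints uncore_cbox_0 */
	return 0;
}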
-12
arch/x86/events/intel/uncore.h
···
 	return -EINVAL;
 }
 
-static inline void uncore_disable_box(struct intel_uncore_box *box)
-{
-	if (box->pmu->type->ops->disable_box)
-		box->pmu->type->ops->disable_box(box);
-}
-
-static inline void uncore_enable_box(struct intel_uncore_box *box)
-{
-	if (box->pmu->type->ops->enable_box)
-		box->pmu->type->ops->enable_box(box);
-}
-
 static inline void uncore_disable_event(struct intel_uncore_box *box,
 				struct perf_event *event)
 {
+8 -2
arch/x86/kvm/svm.c
···
 static void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 {
 	vcpu->arch.efer = efer;
-	if (!npt_enabled && !(efer & EFER_LMA))
-		efer &= ~EFER_LME;
+
+	if (!npt_enabled) {
+		/* Shadow paging assumes NX to be available.  */
+		efer |= EFER_NX;
+
+		if (!(efer & EFER_LMA))
+			efer &= ~EFER_LME;
+	}
 
 	to_svm(vcpu)->vmcb->save.efer = efer | EFER_SVME;
 	mark_dirty(to_svm(vcpu)->vmcb, VMCB_CR);
+3 -11
arch/x86/kvm/vmx/vmx.c
···
 	u64 guest_efer = vmx->vcpu.arch.efer;
 	u64 ignore_bits = 0;
 
-	if (!enable_ept) {
-		/*
-		 * NX is needed to handle CR0.WP=1, CR4.SMEP=1.  Testing
-		 * host CPUID is more efficient than testing guest CPUID
-		 * or CR4.  Host SMEP is anyway a requirement for guest SMEP.
-		 */
-		if (boot_cpu_has(X86_FEATURE_SMEP))
-			guest_efer |= EFER_NX;
-		else if (!(guest_efer & EFER_NX))
-			ignore_bits |= EFER_NX;
-	}
+	/* Shadow paging assumes NX to be available.  */
+	if (!enable_ept)
+		guest_efer |= EFER_NX;
 
 	/*
 	 * LMA and LME handled by hardware; SCE meaningless outside long mode.
+2 -2
block/blk-iocost.c
···
 		goto einval;
 	}
 
-	spin_lock_irq(&iocg->ioc->lock);
+	spin_lock(&iocg->ioc->lock);
 	iocg->cfg_weight = v;
 	weight_updated(iocg);
-	spin_unlock_irq(&iocg->ioc->lock);
+	spin_unlock(&iocg->ioc->lock);
 
 	blkg_conf_finish(&ctx);
 	return nbytes;
+21 -13
drivers/acpi/processor_perflib.c
···
 
 void acpi_processor_ppc_init(struct cpufreq_policy *policy)
 {
-	int cpu = policy->cpu;
-	struct acpi_processor *pr = per_cpu(processors, cpu);
-	int ret;
+	unsigned int cpu;
 
-	if (!pr)
-		return;
+	for_each_cpu(cpu, policy->related_cpus) {
+		struct acpi_processor *pr = per_cpu(processors, cpu);
+		int ret;
 
-	ret = freq_qos_add_request(&policy->constraints, &pr->perflib_req,
-				   FREQ_QOS_MAX, INT_MAX);
-	if (ret < 0)
-		pr_err("Failed to add freq constraint for CPU%d (%d)\n", cpu,
-		       ret);
+		if (!pr)
+			continue;
+
+		ret = freq_qos_add_request(&policy->constraints,
+					   &pr->perflib_req,
+					   FREQ_QOS_MAX, INT_MAX);
+		if (ret < 0)
+			pr_err("Failed to add freq constraint for CPU%d (%d)\n",
+			       cpu, ret);
+	}
 }
 
 void acpi_processor_ppc_exit(struct cpufreq_policy *policy)
 {
-	struct acpi_processor *pr = per_cpu(processors, policy->cpu);
+	unsigned int cpu;
 
-	if (pr)
-		freq_qos_remove_request(&pr->perflib_req);
+	for_each_cpu(cpu, policy->related_cpus) {
+		struct acpi_processor *pr = per_cpu(processors, cpu);
+
+		if (pr)
+			freq_qos_remove_request(&pr->perflib_req);
+	}
 }
 
 static int acpi_processor_get_performance_control(struct acpi_processor *pr)
+21 -13
drivers/acpi/processor_thermal.c
···
 
 void acpi_thermal_cpufreq_init(struct cpufreq_policy *policy)
 {
-	int cpu = policy->cpu;
-	struct acpi_processor *pr = per_cpu(processors, cpu);
-	int ret;
+	unsigned int cpu;
 
-	if (!pr)
-		return;
+	for_each_cpu(cpu, policy->related_cpus) {
+		struct acpi_processor *pr = per_cpu(processors, cpu);
+		int ret;
 
-	ret = freq_qos_add_request(&policy->constraints, &pr->thermal_req,
-				   FREQ_QOS_MAX, INT_MAX);
-	if (ret < 0)
-		pr_err("Failed to add freq constraint for CPU%d (%d)\n", cpu,
-		       ret);
+		if (!pr)
+			continue;
+
+		ret = freq_qos_add_request(&policy->constraints,
+					   &pr->thermal_req,
+					   FREQ_QOS_MAX, INT_MAX);
+		if (ret < 0)
+			pr_err("Failed to add freq constraint for CPU%d (%d)\n",
+			       cpu, ret);
+	}
 }
 
 void acpi_thermal_cpufreq_exit(struct cpufreq_policy *policy)
 {
-	struct acpi_processor *pr = per_cpu(processors, policy->cpu);
+	unsigned int cpu;
 
-	if (pr)
-		freq_qos_remove_request(&pr->thermal_req);
+	for_each_cpu(cpu, policy->related_cpus) {
+		struct acpi_processor *pr = per_cpu(processors, policy->cpu);
+
+		if (pr)
+			freq_qos_remove_request(&pr->thermal_req);
+	}
 }
 #else /* ! CONFIG_CPU_FREQ */
 static int cpufreq_get_max_state(unsigned int cpu)
+1 -1
drivers/crypto/chelsio/chtls/chtls_cm.c
···
 	tp->write_seq = snd_isn;
 	tp->snd_nxt = snd_isn;
 	tp->snd_una = snd_isn;
-	inet_sk(sk)->inet_id = tp->write_seq ^ jiffies;
+	inet_sk(sk)->inet_id = prandom_u32();
 	assign_rxopt(sk, opt);
 
 	if (tp->rcv_wnd > (RCV_BUFSIZ_M << 10))
+1 -1
drivers/crypto/chelsio/chtls/chtls_io.c
···
 		return peekmsg(sk, msg, len, nonblock, flags);
 
 	if (sk_can_busy_loop(sk) &&
-	    skb_queue_empty(&sk->sk_receive_queue) &&
+	    skb_queue_empty_lockless(&sk->sk_receive_queue) &&
 	    sk->sk_state == TCP_ESTABLISHED)
 		sk_busy_loop(sk, nonblock);
 
+8
drivers/dma/imx-sdma.c
···
 	if (!sdma->script_number)
 		sdma->script_number = SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V1;
 
+	if (sdma->script_number > sizeof(struct sdma_script_start_addrs)
+				  / sizeof(s32)) {
+		dev_err(sdma->dev,
+			"SDMA script number %d not match with firmware.\n",
+			sdma->script_number);
+		return;
+	}
+
 	for (i = 0; i < sdma->script_number; i++)
 		if (addr_arr[i] > 0)
 			saddr_arr[i] = addr_arr[i];
+19
drivers/dma/qcom/bam_dma.c
···
 
 	/* remove all transactions, including active transaction */
 	spin_lock_irqsave(&bchan->vc.lock, flag);
+	/*
+	 * If we have transactions queued, then some might be committed to the
+	 * hardware in the desc fifo. The only way to reset the desc fifo is
+	 * to do a hardware reset (either by pipe or the entire block).
+	 * bam_chan_init_hw() will trigger a pipe reset, and also reinit the
+	 * pipe. If the pipe is left disabled (default state after pipe reset)
+	 * and is accessed by a connected hardware engine, a fatal error in
+	 * the BAM will occur. There is a small window where this could happen
+	 * with bam_chan_init_hw(), but it is assumed that the caller has
+	 * stopped activity on any attached hardware engine. Make sure to do
+	 * this first so that the BAM hardware doesn't cause memory corruption
+	 * by accessing freed resources.
+	 */
+	if (!list_empty(&bchan->desc_list)) {
+		async_desc = list_first_entry(&bchan->desc_list,
+					      struct bam_async_desc, desc_node);
+		bam_chan_init_hw(bchan, async_desc->dir);
+	}
+
 	list_for_each_entry_safe(async_desc, tmp,
 				 &bchan->desc_list, desc_node) {
 		list_add(&async_desc->vd.node, &bchan->vc.desc_issued);
+25 -2
drivers/dma/sprd-dma.c
··· 134 134 #define SPRD_DMA_SRC_TRSF_STEP_OFFSET 0 135 135 #define SPRD_DMA_TRSF_STEP_MASK GENMASK(15, 0) 136 136 137 + /* SPRD DMA_SRC_BLK_STEP register definition */ 138 + #define SPRD_DMA_LLIST_HIGH_MASK GENMASK(31, 28) 139 + #define SPRD_DMA_LLIST_HIGH_SHIFT 28 140 + 137 141 /* define DMA channel mode & trigger mode mask */ 138 142 #define SPRD_DMA_CHN_MODE_MASK GENMASK(7, 0) 139 143 #define SPRD_DMA_TRG_MODE_MASK GENMASK(7, 0) ··· 212 208 struct sprd_dma_chn channels[0]; 213 209 }; 214 210 211 + static void sprd_dma_free_desc(struct virt_dma_desc *vd); 215 212 static bool sprd_dma_filter_fn(struct dma_chan *chan, void *param); 216 213 static struct of_dma_filter_info sprd_dma_info = { 217 214 .filter_fn = sprd_dma_filter_fn, ··· 614 609 static void sprd_dma_free_chan_resources(struct dma_chan *chan) 615 610 { 616 611 struct sprd_dma_chn *schan = to_sprd_dma_chan(chan); 612 + struct virt_dma_desc *cur_vd = NULL; 617 613 unsigned long flags; 618 614 619 615 spin_lock_irqsave(&schan->vc.lock, flags); 616 + if (schan->cur_desc) 617 + cur_vd = &schan->cur_desc->vd; 618 + 620 619 sprd_dma_stop(schan); 621 620 spin_unlock_irqrestore(&schan->vc.lock, flags); 621 + 622 + if (cur_vd) 623 + sprd_dma_free_desc(cur_vd); 622 624 623 625 vchan_free_chan_resources(&schan->vc); 624 626 pm_runtime_put(chan->device->dev); ··· 729 717 u32 int_mode = flags & SPRD_DMA_INT_MASK; 730 718 int src_datawidth, dst_datawidth, src_step, dst_step; 731 719 u32 temp, fix_mode = 0, fix_en = 0; 720 + phys_addr_t llist_ptr; 732 721 733 722 if (dir == DMA_MEM_TO_DEV) { 734 723 src_step = sprd_dma_get_step(slave_cfg->src_addr_width); ··· 827 814 * Set the link-list pointer point to next link-list 828 815 * configuration's physical address. 829 816 */ 830 - hw->llist_ptr = schan->linklist.phy_addr + temp; 817 + llist_ptr = schan->linklist.phy_addr + temp; 818 + hw->llist_ptr = lower_32_bits(llist_ptr); 819 + hw->src_blk_step = (upper_32_bits(llist_ptr) << SPRD_DMA_LLIST_HIGH_SHIFT) & 820 + SPRD_DMA_LLIST_HIGH_MASK; 831 821 } else { 832 822 hw->llist_ptr = 0; 823 + hw->src_blk_step = 0; 833 824 } 834 825 835 826 hw->frg_step = 0; 836 - hw->src_blk_step = 0; 837 827 hw->des_blk_step = 0; 838 828 return 0; 839 829 } ··· 1039 1023 static int sprd_dma_terminate_all(struct dma_chan *chan) 1040 1024 { 1041 1025 struct sprd_dma_chn *schan = to_sprd_dma_chan(chan); 1026 + struct virt_dma_desc *cur_vd = NULL; 1042 1027 unsigned long flags; 1043 1028 LIST_HEAD(head); 1044 1029 1045 1030 spin_lock_irqsave(&schan->vc.lock, flags); 1031 + if (schan->cur_desc) 1032 + cur_vd = &schan->cur_desc->vd; 1033 + 1046 1034 sprd_dma_stop(schan); 1047 1035 1048 1036 vchan_get_all_descriptors(&schan->vc, &head); 1049 1037 spin_unlock_irqrestore(&schan->vc.lock, flags); 1038 + 1039 + if (cur_vd) 1040 + sprd_dma_free_desc(cur_vd); 1050 1041 1051 1042 vchan_dma_desc_free_list(&schan->vc, &head); 1052 1043 return 0;
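The link-list fix packs a wider-than-32-bit physical address into two registers: the low 32 bits go to LLIST_PTR and the next 4 bits land in bits 31:28 of SRC_BLK_STEP. A condensed sketch of that split (the helper itself is illustrative; the field and mask names come from the hunk):

static void sprd_dma_set_llist_ptr(struct sprd_dma_chn_hw *hw,
				   phys_addr_t addr)
{
	hw->llist_ptr = lower_32_bits(addr);
	hw->src_blk_step = (upper_32_bits(addr) << SPRD_DMA_LLIST_HIGH_SHIFT) &
			   SPRD_DMA_LLIST_HIGH_MASK;
}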
+7
drivers/dma/tegra210-adma.c
··· 40 40 #define ADMA_CH_CONFIG_MAX_BURST_SIZE 16 41 41 #define ADMA_CH_CONFIG_WEIGHT_FOR_WRR(val) ((val) & 0xf) 42 42 #define ADMA_CH_CONFIG_MAX_BUFS 8 43 + #define TEGRA186_ADMA_CH_CONFIG_OUTSTANDING_REQS(reqs) (reqs << 4) 43 44 44 45 #define ADMA_CH_FIFO_CTRL 0x2c 45 46 #define TEGRA210_ADMA_CH_FIFO_CTRL_TXSIZE(val) (((val) & 0xf) << 8) ··· 78 77 * @ch_req_tx_shift: Register offset for AHUB transmit channel select. 79 78 * @ch_req_rx_shift: Register offset for AHUB receive channel select. 80 79 * @ch_base_offset: Register offset of DMA channel registers. 80 + * @has_outstanding_reqs: If DMA channel can have outstanding requests. 81 81 * @ch_fifo_ctrl: Default value for channel FIFO CTRL register. 82 82 * @ch_req_mask: Mask for Tx or Rx channel select. 83 83 * @ch_req_max: Maximum number of Tx or Rx channels available. ··· 97 95 unsigned int ch_req_max; 98 96 unsigned int ch_reg_size; 99 97 unsigned int nr_channels; 98 + bool has_outstanding_reqs; 100 99 }; 101 100 102 101 /* ··· 597 594 ADMA_CH_CTRL_FLOWCTRL_EN; 598 595 ch_regs->config |= cdata->adma_get_burst_config(burst_size); 599 596 ch_regs->config |= ADMA_CH_CONFIG_WEIGHT_FOR_WRR(1); 597 + if (cdata->has_outstanding_reqs) 598 + ch_regs->config |= TEGRA186_ADMA_CH_CONFIG_OUTSTANDING_REQS(8); 600 599 ch_regs->fifo_ctrl = cdata->ch_fifo_ctrl; 601 600 ch_regs->tc = desc->period_len & ADMA_CH_TC_COUNT_MASK; 602 601 ··· 783 778 .ch_req_tx_shift = 28, 784 779 .ch_req_rx_shift = 24, 785 780 .ch_base_offset = 0, 781 + .has_outstanding_reqs = false, 786 782 .ch_fifo_ctrl = TEGRA210_FIFO_CTRL_DEFAULT, 787 783 .ch_req_mask = 0xf, 788 784 .ch_req_max = 10, ··· 798 792 .ch_req_tx_shift = 27, 799 793 .ch_req_rx_shift = 22, 800 794 .ch_base_offset = 0x10000, 795 + .has_outstanding_reqs = true, 801 796 .ch_fifo_ctrl = TEGRA186_FIFO_CTRL_DEFAULT, 802 797 .ch_req_mask = 0x1f, 803 798 .ch_req_max = 20,
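has_outstanding_reqs is per-SoC chip data, so only Tegra186 gets the extra CONFIG bits while Tegra210 is left untouched. Such chip data is normally selected through OF match data; a sketch under the assumption that the driver's compatibles and chip-data symbols look like this (they are not shown in the hunk):

static const struct of_device_id tegra_adma_of_match[] = {
	{ .compatible = "nvidia,tegra210-adma", .data = &tegra210_chip_data },
	{ .compatible = "nvidia,tegra186-adma", .data = &tegra186_chip_data },
	{ },
};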
+20 -1
drivers/dma/ti/cppi41.c
··· 586 586 enum dma_transfer_direction dir, unsigned long tx_flags, void *context) 587 587 { 588 588 struct cppi41_channel *c = to_cpp41_chan(chan); 589 + struct dma_async_tx_descriptor *txd = NULL; 590 + struct cppi41_dd *cdd = c->cdd; 589 591 struct cppi41_desc *d; 590 592 struct scatterlist *sg; 591 593 unsigned int i; 594 + int error; 595 + 596 + error = pm_runtime_get(cdd->ddev.dev); 597 + if (error < 0) { 598 + pm_runtime_put_noidle(cdd->ddev.dev); 599 + 600 + return NULL; 601 + } 602 + 603 + if (cdd->is_suspended) 604 + goto err_out_not_ready; 592 605 593 606 d = c->desc; 594 607 for_each_sg(sgl, sg, sg_len, i) { ··· 624 611 d++; 625 612 } 626 613 627 - return &c->txd; 614 + txd = &c->txd; 615 + 616 + err_out_not_ready: 617 + pm_runtime_mark_last_busy(cdd->ddev.dev); 618 + pm_runtime_put_autosuspend(cdd->ddev.dev); 619 + 620 + return txd; 628 621 } 629 622 630 623 static void cppi41_compute_td_desc(struct cppi41_desc *d)
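The error handling follows the standard runtime-PM rule: a failed pm_runtime_get() has still raised the usage count, so the failure path must drop it with pm_runtime_put_noidle(). Distilled into a standalone sketch (function name illustrative; the is_suspended check in the hunk exists because pm_runtime_get() resumes asynchronously):

static int runtime_guarded_op(struct device *dev)
{
	int error = pm_runtime_get(dev);

	if (error < 0) {
		pm_runtime_put_noidle(dev);	/* undo the usage count */
		return error;
	}

	/* ... touch the device ... */

	pm_runtime_mark_last_busy(dev);
	pm_runtime_put_autosuspend(dev);
	return 0;
}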
+9 -1
drivers/dma/xilinx/xilinx_dma.c
··· 68 68 #define XILINX_DMA_DMACR_CIRC_EN BIT(1) 69 69 #define XILINX_DMA_DMACR_RUNSTOP BIT(0) 70 70 #define XILINX_DMA_DMACR_FSYNCSRC_MASK GENMASK(6, 5) 71 + #define XILINX_DMA_DMACR_DELAY_MASK GENMASK(31, 24) 72 + #define XILINX_DMA_DMACR_FRAME_COUNT_MASK GENMASK(23, 16) 73 + #define XILINX_DMA_DMACR_MASTER_MASK GENMASK(11, 8) 71 74 72 75 #define XILINX_DMA_REG_DMASR 0x0004 73 76 #define XILINX_DMA_DMASR_EOL_LATE_ERR BIT(15) ··· 1357 1354 node); 1358 1355 hw = &segment->hw; 1359 1356 1360 - xilinx_write(chan, XILINX_DMA_REG_SRCDSTADDR, hw->buf_addr); 1357 + xilinx_write(chan, XILINX_DMA_REG_SRCDSTADDR, 1358 + xilinx_prep_dma_addr_t(hw->buf_addr)); 1361 1359 1362 1360 /* Start the transfer */ 1363 1361 dma_ctrl_write(chan, XILINX_DMA_REG_BTT, ··· 2121 2117 chan->config.gen_lock = cfg->gen_lock; 2122 2118 chan->config.master = cfg->master; 2123 2119 2120 + dmacr &= ~XILINX_DMA_DMACR_GENLOCK_EN; 2124 2121 if (cfg->gen_lock && chan->genlock) { 2125 2122 dmacr |= XILINX_DMA_DMACR_GENLOCK_EN; 2123 + dmacr &= ~XILINX_DMA_DMACR_MASTER_MASK; 2126 2124 dmacr |= cfg->master << XILINX_DMA_DMACR_MASTER_SHIFT; 2127 2125 } 2128 2126 ··· 2140 2134 chan->config.delay = cfg->delay; 2141 2135 2142 2136 if (cfg->coalesc <= XILINX_DMA_DMACR_FRAME_COUNT_MAX) { 2137 + dmacr &= ~XILINX_DMA_DMACR_FRAME_COUNT_MASK; 2143 2138 dmacr |= cfg->coalesc << XILINX_DMA_DMACR_FRAME_COUNT_SHIFT; 2144 2139 chan->config.coalesc = cfg->coalesc; 2145 2140 } 2146 2141 2147 2142 if (cfg->delay <= XILINX_DMA_DMACR_DELAY_MAX) { 2143 + dmacr &= ~XILINX_DMA_DMACR_DELAY_MASK; 2148 2144 dmacr |= cfg->delay << XILINX_DMA_DMACR_DELAY_SHIFT; 2149 2145 chan->config.delay = cfg->delay; 2150 2146 }
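All three DMACR changes enforce the same read-modify-write rule: clear a multi-bit field before OR-ing in its new value, otherwise bits from the previous configuration survive. As a reusable helper (illustrative, not part of the driver):

static inline u32 reg_set_field(u32 reg, u32 mask, u32 shift, u32 val)
{
	reg &= ~mask;
	return reg | ((val << shift) & mask);
}

/* e.g. dmacr = reg_set_field(dmacr, XILINX_DMA_DMACR_DELAY_MASK,
 *                            XILINX_DMA_DMACR_DELAY_SHIFT, cfg->delay); */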
+1
drivers/firmware/efi/Kconfig
··· 182 182 183 183 config EFI_RCI2_TABLE 184 184 bool "EFI Runtime Configuration Interface Table Version 2 Support" 185 + depends on X86 || COMPILE_TEST 185 186 help 186 187 Displays the content of the Runtime Configuration Interface 187 188 Table version 2 on Dell EMC PowerEdge systems as a binary
+1 -1
drivers/firmware/efi/efi.c
··· 554 554 sizeof(*seed) + size); 555 555 if (seed != NULL) { 556 556 pr_notice("seeding entropy pool\n"); 557 - add_device_randomness(seed->bits, seed->size); 557 + add_bootloader_randomness(seed->bits, seed->size); 558 558 early_memunmap(seed, sizeof(*seed) + size); 559 559 } else { 560 560 pr_err("Could not map UEFI random seed!\n");
+1
drivers/firmware/efi/libstub/Makefile
··· 52 52 53 53 lib-$(CONFIG_ARM) += arm32-stub.o 54 54 lib-$(CONFIG_ARM64) += arm64-stub.o 55 + CFLAGS_arm32-stub.o := -DTEXT_OFFSET=$(TEXT_OFFSET) 55 56 CFLAGS_arm64-stub.o := -DTEXT_OFFSET=$(TEXT_OFFSET) 56 57 57 58 #
+13 -3
drivers/firmware/efi/libstub/arm32-stub.c
··· 195 195 unsigned long dram_base, 196 196 efi_loaded_image_t *image) 197 197 { 198 + unsigned long kernel_base; 198 199 efi_status_t status; 199 200 200 201 /* ··· 205 204 * loaded. These assumptions are made by the decompressor, 206 205 * before any memory map is available. 207 206 */ 208 - dram_base = round_up(dram_base, SZ_128M); 207 + kernel_base = round_up(dram_base, SZ_128M); 209 208 210 - status = reserve_kernel_base(sys_table, dram_base, reserve_addr, 209 + /* 210 + * Note that some platforms (notably, the Raspberry Pi 2) put 211 + * spin-tables and other pieces of firmware at the base of RAM, 212 + * abusing the fact that the window of TEXT_OFFSET bytes at the 213 + * base of the kernel image is only partially used at the moment. 214 + * (Up to 5 pages are used for the swapper page tables) 215 + */ 216 + kernel_base += TEXT_OFFSET - 5 * PAGE_SIZE; 217 + 218 + status = reserve_kernel_base(sys_table, kernel_base, reserve_addr, 211 219 reserve_size); 212 220 if (status != EFI_SUCCESS) { 213 221 pr_efi_err(sys_table, "Unable to allocate memory for uncompressed kernel.\n"); ··· 230 220 *image_size = image->image_size; 231 221 status = efi_relocate_kernel(sys_table, image_addr, *image_size, 232 222 *image_size, 233 - dram_base + MAX_UNCOMP_KERNEL_SIZE, 0); 223 + kernel_base + MAX_UNCOMP_KERNEL_SIZE, 0, 0); 234 224 if (status != EFI_SUCCESS) { 235 225 pr_efi_err(sys_table, "Failed to relocate kernel.\n"); 236 226 efi_free(sys_table, *reserve_size, *reserve_addr);
+10 -14
drivers/firmware/efi/libstub/efi-stub-helper.c
··· 260 260 } 261 261 262 262 /* 263 - * Allocate at the lowest possible address. 263 + * Allocate at the lowest possible address that is not below 'min'. 264 264 */ 265 - efi_status_t efi_low_alloc(efi_system_table_t *sys_table_arg, 266 - unsigned long size, unsigned long align, 267 - unsigned long *addr) 265 + efi_status_t efi_low_alloc_above(efi_system_table_t *sys_table_arg, 266 + unsigned long size, unsigned long align, 267 + unsigned long *addr, unsigned long min) 268 268 { 269 269 unsigned long map_size, desc_size, buff_size; 270 270 efi_memory_desc_t *map; ··· 311 311 start = desc->phys_addr; 312 312 end = start + desc->num_pages * EFI_PAGE_SIZE; 313 313 314 - /* 315 - * Don't allocate at 0x0. It will confuse code that 316 - * checks pointers against NULL. Skip the first 8 317 - * bytes so we start at a nice even number. 318 - */ 319 - if (start == 0x0) 320 - start += 8; 314 + if (start < min) 315 + start = min; 321 316 322 317 start = round_up(start, align); 323 318 if ((start + size) > end) ··· 693 698 unsigned long image_size, 694 699 unsigned long alloc_size, 695 700 unsigned long preferred_addr, 696 - unsigned long alignment) 701 + unsigned long alignment, 702 + unsigned long min_addr) 697 703 { 698 704 unsigned long cur_image_addr; 699 705 unsigned long new_addr = 0; ··· 727 731 * possible. 728 732 */ 729 733 if (status != EFI_SUCCESS) { 730 - status = efi_low_alloc(sys_table_arg, alloc_size, alignment, 731 - &new_addr); 734 + status = efi_low_alloc_above(sys_table_arg, alloc_size, 735 + alignment, &new_addr, min_addr); 732 736 } 733 737 if (status != EFI_SUCCESS) { 734 738 pr_efi_err(sys_table_arg, "Failed to allocate usable memory for kernel.\n");
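With the min parameter in place, the historical efi_low_alloc() behaviour of skipping address zero (so a successful allocation can never look like a NULL pointer) can be preserved as a thin wrapper; an assumed form, not shown in this hunk:

static inline efi_status_t
efi_low_alloc(efi_system_table_t *sys_table_arg, unsigned long size,
	      unsigned long align, unsigned long *addr)
{
	/* min = 8: keep the old "never hand out 0x0" guarantee */
	return efi_low_alloc_above(sys_table_arg, size, align, addr, 0x8);
}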
+8
drivers/firmware/efi/test/efi_test.c
··· 14 14 #include <linux/init.h> 15 15 #include <linux/proc_fs.h> 16 16 #include <linux/efi.h> 17 + #include <linux/security.h> 17 18 #include <linux/slab.h> 18 19 #include <linux/uaccess.h> 19 20 ··· 718 717 719 718 static int efi_test_open(struct inode *inode, struct file *file) 720 719 { 720 + int ret = security_locked_down(LOCKDOWN_EFI_TEST); 721 + 722 + if (ret) 723 + return ret; 724 + 725 + if (!capable(CAP_SYS_ADMIN)) 726 + return -EACCES; 721 727 /* 722 728 * nothing special to do here 723 729 * We do accept multiple open files at the same time as we
+1
drivers/firmware/efi/tpm.c
··· 88 88 89 89 if (tbl_size < 0) { 90 90 pr_err(FW_BUG "Failed to parse event in TPM Final Events Log\n"); 91 + ret = -EINVAL; 91 92 goto out_calc; 92 93 } 93 94
+3 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
··· 218 218 struct amdgpu_ring *ring = to_amdgpu_ring(sched_job->sched); 219 219 struct dma_fence *fence = NULL, *finished; 220 220 struct amdgpu_job *job; 221 - int r; 221 + int r = 0; 222 222 223 223 job = to_amdgpu_job(sched_job); 224 224 finished = &job->base.s_fence->finished; ··· 243 243 job->fence = dma_fence_get(fence); 244 244 245 245 amdgpu_job_free_resources(job); 246 + 247 + fence = r ? ERR_PTR(r) : fence; 246 248 return fence; 247 249 } 248 250
+3 -3
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
··· 93 93 { 94 94 SOC15_REG_GOLDEN_VALUE(GC, 0, mmCB_HW_CONTROL_4, 0xffffffff, 0x00400014), 95 95 SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_CPF_CLK_CTRL, 0xfcff8fff, 0xf8000100), 96 - SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SPI_CLK_CTRL, 0xc0000000, 0xc0000100), 96 + SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SPI_CLK_CTRL, 0xcd000000, 0x0d000100), 97 97 SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SQ_CLK_CTRL, 0x60000ff0, 0x60000100), 98 98 SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SQG_CLK_CTRL, 0x40000000, 0x40000100), 99 99 SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_VGT_CLK_CTRL, 0xffff8fff, 0xffff8100), ··· 140 140 SOC15_REG_GOLDEN_VALUE(GC, 0, mmCB_HW_CONTROL_4, 0xffffffff, 0x003c0014), 141 141 SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_GS_NGG_CLK_CTRL, 0xffff8fff, 0xffff8100), 142 142 SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_IA_CLK_CTRL, 0xffff0fff, 0xffff0100), 143 - SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SPI_CLK_CTRL, 0xc0000000, 0xc0000100), 143 + SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SPI_CLK_CTRL, 0xcd000000, 0x0d000100), 144 144 SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SQ_CLK_CTRL, 0xf8ff0fff, 0x60000100), 145 145 SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SQG_CLK_CTRL, 0x40000ff0, 0x40000100), 146 146 SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_VGT_CLK_CTRL, 0xffff8fff, 0xffff8100), ··· 179 179 SOC15_REG_GOLDEN_VALUE(GC, 0, mmCB_HW_CONTROL_4, 0x003e001f, 0x003c0014), 180 180 SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_GS_NGG_CLK_CTRL, 0xffff8fff, 0xffff8100), 181 181 SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_IA_CLK_CTRL, 0xffff0fff, 0xffff0100), 182 - SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SPI_CLK_CTRL, 0xff7f0fff, 0xc0000100), 182 + SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SPI_CLK_CTRL, 0xff7f0fff, 0x0d000100), 183 183 SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SQ_CLK_CTRL, 0xffffcfff, 0x60000100), 184 184 SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_SQG_CLK_CTRL, 0xffff0fff, 0x40000100), 185 185 SOC15_REG_GOLDEN_VALUE(GC, 0, mmCGTT_VGT_CLK_CTRL, 0xffff8fff, 0xffff8100),
+9
drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.c
··· 151 151 WREG32_SOC15(GC, 0, mmGCVM_L2_CNTL2, tmp); 152 152 153 153 tmp = mmGCVM_L2_CNTL3_DEFAULT; 154 + if (adev->gmc.translate_further) { 155 + tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL3, BANK_SELECT, 12); 156 + tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL3, 157 + L2_CACHE_BIGK_FRAGMENT_SIZE, 9); 158 + } else { 159 + tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL3, BANK_SELECT, 9); 160 + tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL3, 161 + L2_CACHE_BIGK_FRAGMENT_SIZE, 6); 162 + } 154 163 WREG32_SOC15(GC, 0, mmGCVM_L2_CNTL3, tmp); 155 164 156 165 tmp = mmGCVM_L2_CNTL4_DEFAULT;
+1
drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
··· 309 309 310 310 job->vm_pd_addr = amdgpu_gmc_pd_addr(adev->gart.bo); 311 311 job->vm_needs_flush = true; 312 + job->ibs->ptr[job->ibs->length_dw++] = ring->funcs->nop; 312 313 amdgpu_ring_pad_ib(ring, &job->ibs[0]); 313 314 r = amdgpu_job_submit(job, &adev->mman.entity, 314 315 AMDGPU_FENCE_OWNER_UNDEFINED, &fence);
+9
drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.c
··· 137 137 WREG32_SOC15(MMHUB, 0, mmMMVM_L2_CNTL2, tmp); 138 138 139 139 tmp = mmMMVM_L2_CNTL3_DEFAULT; 140 + if (adev->gmc.translate_further) { 141 + tmp = REG_SET_FIELD(tmp, MMVM_L2_CNTL3, BANK_SELECT, 12); 142 + tmp = REG_SET_FIELD(tmp, MMVM_L2_CNTL3, 143 + L2_CACHE_BIGK_FRAGMENT_SIZE, 9); 144 + } else { 145 + tmp = REG_SET_FIELD(tmp, MMVM_L2_CNTL3, BANK_SELECT, 9); 146 + tmp = REG_SET_FIELD(tmp, MMVM_L2_CNTL3, 147 + L2_CACHE_BIGK_FRAGMENT_SIZE, 6); 148 + } 140 149 WREG32_SOC15(MMHUB, 0, mmMMVM_L2_CNTL3, tmp); 141 150 142 151 tmp = mmMMVM_L2_CNTL4_DEFAULT;
+1
drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
··· 254 254 SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC0_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000), 255 255 SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000), 256 256 SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0), 257 + SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x00000000) 257 258 }; 258 259 259 260 static u32 sdma_v4_0_get_reg_offset(struct amdgpu_device *adev,
+12 -7
drivers/gpu/drm/amd/display/dc/calcs/Makefile
··· 24 24 # It calculates Bandwidth and Watermarks values for HW programming 25 25 # 26 26 27 - ifneq ($(call cc-option, -mpreferred-stack-boundary=4),) 28 - cc_stack_align := -mpreferred-stack-boundary=4 29 - else ifneq ($(call cc-option, -mstack-alignment=16),) 30 - cc_stack_align := -mstack-alignment=16 27 + calcs_ccflags := -mhard-float -msse 28 + 29 + ifdef CONFIG_CC_IS_GCC 30 + ifeq ($(call cc-ifversion, -lt, 0701, y), y) 31 + IS_OLD_GCC = 1 32 + endif 31 33 endif 32 34 33 - calcs_ccflags := -mhard-float -msse $(cc_stack_align) 34 - 35 - ifdef CONFIG_CC_IS_CLANG 35 + ifdef IS_OLD_GCC 36 + # Stack alignment mismatch, proceed with caution. 37 + # GCC < 7.1 cannot compile code using `double` and -mpreferred-stack-boundary=3 38 + # (8B stack alignment). 39 + calcs_ccflags += -mpreferred-stack-boundary=4 40 + else 36 41 calcs_ccflags += -msse2 37 42 endif 38 43
+4
drivers/gpu/drm/amd/display/dc/core/dc.c
··· 580 580 #ifdef CONFIG_DRM_AMD_DC_DCN2_0 581 581 // Allocate memory for the vm_helper 582 582 dc->vm_helper = kzalloc(sizeof(struct vm_helper), GFP_KERNEL); 583 + if (!dc->vm_helper) { 584 + dm_error("%s: failed to create dc->vm_helper\n", __func__); 585 + goto fail; 586 + } 583 587 584 588 #endif 585 589 memcpy(&dc->bb_overrides, &init_params->bb_overrides, sizeof(dc->bb_overrides));
+9
drivers/gpu/drm/amd/display/dc/core/dc_link.c
··· 2767 2767 CONTROLLER_DP_TEST_PATTERN_VIDEOMODE, 2768 2768 COLOR_DEPTH_UNDEFINED); 2769 2769 2770 + /* This second call is needed to reconfigure the DIG 2771 + * as a workaround for the incorrect value being applied 2772 + * from transmitter control. 2773 + */ 2774 + if (!dc_is_virtual_signal(pipe_ctx->stream->signal)) 2775 + stream->link->link_enc->funcs->setup( 2776 + stream->link->link_enc, 2777 + pipe_ctx->stream->signal); 2778 + 2770 2779 #ifdef CONFIG_DRM_AMD_DC_DSC_SUPPORT 2771 2780 if (pipe_ctx->stream->timing.flags.DSC) { 2772 2781 if (dc_is_dp_signal(pipe_ctx->stream->signal) ||
+6
drivers/gpu/drm/amd/display/dc/core/dc_resource.c
··· 404 404 if (stream1->view_format != stream2->view_format) 405 405 return false; 406 406 407 + if (stream1->ignore_msa_timing_param || stream2->ignore_msa_timing_param) 408 + return false; 409 + 407 410 return true; 408 411 } 409 412 static bool is_dp_and_hdmi_sharable( ··· 1541 1538 { 1542 1539 1543 1540 if (!are_stream_backends_same(old_stream, stream)) 1541 + return false; 1542 + 1543 + if (old_stream->ignore_msa_timing_param != stream->ignore_msa_timing_param) 1544 1544 return false; 1545 1545 1546 1546 return true;
+8 -14
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
··· 393 393 rgb_resulted[hw_points - 1].green = output_tf->tf_pts.green[start_index]; 394 394 rgb_resulted[hw_points - 1].blue = output_tf->tf_pts.blue[start_index]; 395 395 396 + rgb_resulted[hw_points].red = rgb_resulted[hw_points - 1].red; 397 + rgb_resulted[hw_points].green = rgb_resulted[hw_points - 1].green; 398 + rgb_resulted[hw_points].blue = rgb_resulted[hw_points - 1].blue; 399 + 396 400 // All 3 color channels have same x 397 401 corner_points[0].red.x = dc_fixpt_pow(dc_fixpt_from_int(2), 398 402 dc_fixpt_from_int(region_start)); ··· 468 464 469 465 i = 1; 470 466 while (i != hw_points + 1) { 471 - if (dc_fixpt_lt(rgb_plus_1->red, rgb->red)) 472 - rgb_plus_1->red = rgb->red; 473 - if (dc_fixpt_lt(rgb_plus_1->green, rgb->green)) 474 - rgb_plus_1->green = rgb->green; 475 - if (dc_fixpt_lt(rgb_plus_1->blue, rgb->blue)) 476 - rgb_plus_1->blue = rgb->blue; 477 - 478 467 rgb->delta_red = dc_fixpt_sub(rgb_plus_1->red, rgb->red); 479 468 rgb->delta_green = dc_fixpt_sub(rgb_plus_1->green, rgb->green); 480 469 rgb->delta_blue = dc_fixpt_sub(rgb_plus_1->blue, rgb->blue); ··· 559 562 rgb_resulted[hw_points - 1].green = output_tf->tf_pts.green[start_index]; 560 563 rgb_resulted[hw_points - 1].blue = output_tf->tf_pts.blue[start_index]; 561 564 565 + rgb_resulted[hw_points].red = rgb_resulted[hw_points - 1].red; 566 + rgb_resulted[hw_points].green = rgb_resulted[hw_points - 1].green; 567 + rgb_resulted[hw_points].blue = rgb_resulted[hw_points - 1].blue; 568 + 562 569 corner_points[0].red.x = dc_fixpt_pow(dc_fixpt_from_int(2), 563 570 dc_fixpt_from_int(region_start)); 564 571 corner_points[0].green.x = corner_points[0].red.x; ··· 625 624 626 625 i = 1; 627 626 while (i != hw_points + 1) { 628 - if (dc_fixpt_lt(rgb_plus_1->red, rgb->red)) 629 - rgb_plus_1->red = rgb->red; 630 - if (dc_fixpt_lt(rgb_plus_1->green, rgb->green)) 631 - rgb_plus_1->green = rgb->green; 632 - if (dc_fixpt_lt(rgb_plus_1->blue, rgb->blue)) 633 - rgb_plus_1->blue = rgb->blue; 634 - 635 627 rgb->delta_red = dc_fixpt_sub(rgb_plus_1->red, rgb->red); 636 628 rgb->delta_green = dc_fixpt_sub(rgb_plus_1->green, rgb->green); 637 629 rgb->delta_blue = dc_fixpt_sub(rgb_plus_1->blue, rgb->blue);
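Both hunks replace per-iteration clamping with a sentinel: the last curve point is duplicated one slot past the end, so the delta loop can always read index i + 1 and the final delta collapses to zero. Schematically (names illustrative, dc_fixpt_sub as in the driver; pts must have room for n + 1 entries):

static void compute_deltas(struct fixed31_32 *pts,
			   struct fixed31_32 *delta, unsigned int n)
{
	unsigned int i;

	pts[n] = pts[n - 1];	/* sentinel one past the end */
	for (i = 0; i < n; i++)
		delta[i] = dc_fixpt_sub(pts[i + 1], pts[i]);
}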
+12 -7
drivers/gpu/drm/amd/display/dc/dcn20/Makefile
··· 10 10 DCN20 += dcn20_dsc.o 11 11 endif 12 12 13 - ifneq ($(call cc-option, -mpreferred-stack-boundary=4),) 14 - cc_stack_align := -mpreferred-stack-boundary=4 15 - else ifneq ($(call cc-option, -mstack-alignment=16),) 16 - cc_stack_align := -mstack-alignment=16 13 + CFLAGS_$(AMDDALPATH)/dc/dcn20/dcn20_resource.o := -mhard-float -msse 14 + 15 + ifdef CONFIG_CC_IS_GCC 16 + ifeq ($(call cc-ifversion, -lt, 0701, y), y) 17 + IS_OLD_GCC = 1 18 + endif 17 19 endif 18 20 19 - CFLAGS_$(AMDDALPATH)/dc/dcn20/dcn20_resource.o := -mhard-float -msse $(cc_stack_align) 20 - 21 - ifdef CONFIG_CC_IS_CLANG 21 + ifdef IS_OLD_GCC 22 + # Stack alignment mismatch, proceed with caution. 23 + # GCC < 7.1 cannot compile code using `double` and -mpreferred-stack-boundary=3 24 + # (8B stack alignment). 25 + CFLAGS_$(AMDDALPATH)/dc/dcn20/dcn20_resource.o += -mpreferred-stack-boundary=4 26 + else 22 27 CFLAGS_$(AMDDALPATH)/dc/dcn20/dcn20_resource.o += -msse2 23 28 endif 24 29
+1 -1
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
··· 814 814 .num_audio = 6, 815 815 .num_stream_encoder = 5, 816 816 .num_pll = 5, 817 - .num_dwb = 0, 817 + .num_dwb = 1, 818 818 .num_ddc = 5, 819 819 }; 820 820
+12 -7
drivers/gpu/drm/amd/display/dc/dcn21/Makefile
··· 3 3 4 4 DCN21 = dcn21_hubp.o dcn21_hubbub.o dcn21_resource.o 5 5 6 - ifneq ($(call cc-option, -mpreferred-stack-boundary=4),) 7 - cc_stack_align := -mpreferred-stack-boundary=4 8 - else ifneq ($(call cc-option, -mstack-alignment=16),) 9 - cc_stack_align := -mstack-alignment=16 6 + CFLAGS_$(AMDDALPATH)/dc/dcn21/dcn21_resource.o := -mhard-float -msse 7 + 8 + ifdef CONFIG_CC_IS_GCC 9 + ifeq ($(call cc-ifversion, -lt, 0701, y), y) 10 + IS_OLD_GCC = 1 11 + endif 10 12 endif 11 13 12 - CFLAGS_$(AMDDALPATH)/dc/dcn21/dcn21_resource.o := -mhard-float -msse $(cc_stack_align) 13 - 14 - ifdef CONFIG_CC_IS_CLANG 14 + ifdef IS_OLD_GCC 15 + # Stack alignment mismatch, proceed with caution. 16 + # GCC < 7.1 cannot compile code using `double` and -mpreferred-stack-boundary=3 17 + # (8B stack alignment). 18 + CFLAGS_$(AMDDALPATH)/dc/dcn21/dcn21_resource.o += -mpreferred-stack-boundary=4 19 + else 15 20 CFLAGS_$(AMDDALPATH)/dc/dcn21/dcn21_resource.o += -msse2 16 21 endif 17 22
+12 -7
drivers/gpu/drm/amd/display/dc/dml/Makefile
··· 24 24 # It provides the general basic services required by other DAL 25 25 # subcomponents. 26 26 27 - ifneq ($(call cc-option, -mpreferred-stack-boundary=4),) 28 - cc_stack_align := -mpreferred-stack-boundary=4 29 - else ifneq ($(call cc-option, -mstack-alignment=16),) 30 - cc_stack_align := -mstack-alignment=16 27 + dml_ccflags := -mhard-float -msse 28 + 29 + ifdef CONFIG_CC_IS_GCC 30 + ifeq ($(call cc-ifversion, -lt, 0701, y), y) 31 + IS_OLD_GCC = 1 32 + endif 31 33 endif 32 34 33 - dml_ccflags := -mhard-float -msse $(cc_stack_align) 34 - 35 - ifdef CONFIG_CC_IS_CLANG 35 + ifdef IS_OLD_GCC 36 + # Stack alignment mismatch, proceed with caution. 37 + # GCC < 7.1 cannot compile code using `double` and -mpreferred-stack-boundary=3 38 + # (8B stack alignment). 39 + dml_ccflags += -mpreferred-stack-boundary=4 40 + else 36 41 dml_ccflags += -msse2 37 42 endif 38 43
+2 -1
drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
··· 2577 2577 mode_lib->vba.MinActiveDRAMClockChangeMargin 2578 2578 + mode_lib->vba.DRAMClockChangeLatency; 2579 2579 2580 - if (mode_lib->vba.MinActiveDRAMClockChangeMargin > 0) { 2580 + if (mode_lib->vba.MinActiveDRAMClockChangeMargin > 50) { 2581 + mode_lib->vba.DRAMClockChangeWatermark += 25; 2581 2582 mode_lib->vba.DRAMClockChangeSupport[0][0] = dm_dram_clock_change_vactive; 2582 2583 } else { 2583 2584 if (mode_lib->vba.SynchronizedVBlank || mode_lib->vba.NumberOfActivePlanes == 1) {
+12 -7
drivers/gpu/drm/amd/display/dc/dsc/Makefile
··· 1 1 # 2 2 # Makefile for the 'dsc' sub-component of DAL. 3 3 4 - ifneq ($(call cc-option, -mpreferred-stack-boundary=4),) 5 - cc_stack_align := -mpreferred-stack-boundary=4 6 - else ifneq ($(call cc-option, -mstack-alignment=16),) 7 - cc_stack_align := -mstack-alignment=16 4 + dsc_ccflags := -mhard-float -msse 5 + 6 + ifdef CONFIG_CC_IS_GCC 7 + ifeq ($(call cc-ifversion, -lt, 0701, y), y) 8 + IS_OLD_GCC = 1 9 + endif 8 10 endif 9 11 10 - dsc_ccflags := -mhard-float -msse $(cc_stack_align) 11 - 12 - ifdef CONFIG_CC_IS_CLANG 12 + ifdef IS_OLD_GCC 13 + # Stack alignment mismatch, proceed with caution. 14 + # GCC < 7.1 cannot compile code using `double` and -mpreferred-stack-boundary=3 15 + # (8B stack alignment). 16 + dsc_ccflags += -mpreferred-stack-boundary=4 17 + else 13 18 dsc_ccflags += -msse2 14 19 endif 15 20
+1 -3
drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
··· 5098 5098 5099 5099 if (type == PP_OD_EDIT_SCLK_VDDC_TABLE) { 5100 5100 podn_vdd_dep = &data->odn_dpm_table.vdd_dep_on_sclk; 5101 - for (i = 0; i < podn_vdd_dep->count - 1; i++) 5102 - od_vddc_lookup_table->entries[i].us_vdd = podn_vdd_dep->entries[i].vddc; 5103 - if (od_vddc_lookup_table->entries[i].us_vdd < podn_vdd_dep->entries[i].vddc) 5101 + for (i = 0; i < podn_vdd_dep->count; i++) 5104 5102 od_vddc_lookup_table->entries[i].us_vdd = podn_vdd_dep->entries[i].vddc; 5105 5103 } else if (type == PP_OD_EDIT_MCLK_VDDC_TABLE) { 5106 5104 podn_vdd_dep = &data->odn_dpm_table.vdd_dep_on_mclk;
+2 -2
drivers/gpu/drm/etnaviv/etnaviv_dump.c
··· 180 180 etnaviv_cmdbuf_get_va(&submit->cmdbuf, 181 181 &gpu->mmu_context->cmdbuf_mapping)); 182 182 183 + mutex_unlock(&gpu->mmu_context->lock); 184 + 183 185 /* Reserve space for the bomap */ 184 186 if (n_bomap_pages) { 185 187 bomap_start = bomap = iter.data; ··· 222 220 etnaviv_core_dump_header(&iter, ETDUMP_BUF_BO, iter.data + 223 221 obj->base.size); 224 222 } 225 - 226 - mutex_unlock(&gpu->mmu_context->lock); 227 223 228 224 etnaviv_core_dump_header(&iter, ETDUMP_BUF_END, iter.data); 229 225
+4 -2
drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c
··· 155 155 156 156 memcpy(buf, v2_context->mtlb_cpu, SZ_4K); 157 157 buf += SZ_4K; 158 - for (i = 0; i < MMUv2_MAX_STLB_ENTRIES; i++, buf += SZ_4K) 159 - if (v2_context->mtlb_cpu[i] & MMUv2_PTE_PRESENT) 158 + for (i = 0; i < MMUv2_MAX_STLB_ENTRIES; i++) 159 + if (v2_context->mtlb_cpu[i] & MMUv2_PTE_PRESENT) { 160 160 memcpy(buf, v2_context->stlb_cpu[i], SZ_4K); 161 + buf += SZ_4K; 162 + } 161 163 } 162 164 163 165 static void etnaviv_iommuv2_restore_nonsec(struct etnaviv_gpu *gpu,
+14 -3
drivers/gpu/drm/etnaviv/etnaviv_mmu.c
··· 328 328 329 329 ret = etnaviv_cmdbuf_suballoc_map(suballoc, ctx, &ctx->cmdbuf_mapping, 330 330 global->memory_base); 331 - if (ret) { 332 - global->ops->free(ctx); 333 - return NULL; 331 + if (ret) 332 + goto out_free; 333 + 334 + if (global->version == ETNAVIV_IOMMU_V1 && 335 + ctx->cmdbuf_mapping.iova > 0x80000000) { 336 + dev_err(global->dev, 337 + "command buffer outside valid memory window\n"); 338 + goto out_unmap; 334 339 } 335 340 336 341 return ctx; 342 + 343 + out_unmap: 344 + etnaviv_cmdbuf_suballoc_unmap(ctx, &ctx->cmdbuf_mapping); 345 + out_free: 346 + global->ops->free(ctx); 347 + return NULL; 337 348 } 338 349 339 350 void etnaviv_iommu_restore(struct etnaviv_gpu *gpu,
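The rewritten error path is the usual goto unwind ladder: each failure jumps to the label that undoes exactly what has succeeded so far, in reverse order. In schematic form (every name here is illustrative):

static struct ctx *ctx_create(void)
{
	struct ctx *c = ctx_alloc();

	if (!c)
		return NULL;
	if (ctx_map(c))
		goto out_free;
	if (!ctx_mapping_valid(c))	/* e.g. iova above the HW window */
		goto out_unmap;
	return c;

out_unmap:
	ctx_unmap(c);
out_free:
	ctx_free(c);
	return NULL;
}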
+6 -5
drivers/gpu/drm/i915/display/intel_display.c
··· 9315 9315 static void lpt_init_pch_refclk(struct drm_i915_private *dev_priv) 9316 9316 { 9317 9317 struct intel_encoder *encoder; 9318 - bool pch_ssc_in_use = false; 9319 9318 bool has_fdi = false; 9320 9319 9321 9320 for_each_intel_encoder(&dev_priv->drm, encoder) { ··· 9342 9343 * clock hierarchy. That would also allow us to do 9343 9344 * clock bending finally. 9344 9345 */ 9346 + dev_priv->pch_ssc_use = 0; 9347 + 9345 9348 if (spll_uses_pch_ssc(dev_priv)) { 9346 9349 DRM_DEBUG_KMS("SPLL using PCH SSC\n"); 9347 - pch_ssc_in_use = true; 9350 + dev_priv->pch_ssc_use |= BIT(DPLL_ID_SPLL); 9348 9351 } 9349 9352 9350 9353 if (wrpll_uses_pch_ssc(dev_priv, DPLL_ID_WRPLL1)) { 9351 9354 DRM_DEBUG_KMS("WRPLL1 using PCH SSC\n"); 9352 - pch_ssc_in_use = true; 9355 + dev_priv->pch_ssc_use |= BIT(DPLL_ID_WRPLL1); 9353 9356 } 9354 9357 9355 9358 if (wrpll_uses_pch_ssc(dev_priv, DPLL_ID_WRPLL2)) { 9356 9359 DRM_DEBUG_KMS("WRPLL2 using PCH SSC\n"); 9357 - pch_ssc_in_use = true; 9360 + dev_priv->pch_ssc_use |= BIT(DPLL_ID_WRPLL2); 9358 9361 } 9359 9362 9360 - if (pch_ssc_in_use) 9363 + if (dev_priv->pch_ssc_use) 9361 9364 return; 9362 9365 9363 9366 if (has_fdi) {
+15
drivers/gpu/drm/i915/display/intel_dpll_mgr.c
··· 525 525 val = I915_READ(WRPLL_CTL(id)); 526 526 I915_WRITE(WRPLL_CTL(id), val & ~WRPLL_PLL_ENABLE); 527 527 POSTING_READ(WRPLL_CTL(id)); 528 + 529 + /* 530 + * Try to set up the PCH reference clock once all DPLLs 531 + * that depend on it have been shut down. 532 + */ 533 + if (dev_priv->pch_ssc_use & BIT(id)) 534 + intel_init_pch_refclk(dev_priv); 528 535 } 529 536 530 537 static void hsw_ddi_spll_disable(struct drm_i915_private *dev_priv, 531 538 struct intel_shared_dpll *pll) 532 539 { 540 + enum intel_dpll_id id = pll->info->id; 533 541 u32 val; 534 542 535 543 val = I915_READ(SPLL_CTL); 536 544 I915_WRITE(SPLL_CTL, val & ~SPLL_PLL_ENABLE); 537 545 POSTING_READ(SPLL_CTL); 546 + 547 + /* 548 + * Try to set up the PCH reference clock once all DPLLs 549 + * that depend on it have been shut down. 550 + */ 551 + if (dev_priv->pch_ssc_use & BIT(id)) 552 + intel_init_pch_refclk(dev_priv); 538 553 } 539 554 540 555 static bool hsw_ddi_wrpll_get_hw_state(struct drm_i915_private *dev_priv,
+2 -2
drivers/gpu/drm/i915/display/intel_dpll_mgr.h
··· 147 147 */ 148 148 DPLL_ID_ICL_MGPLL4 = 6, 149 149 /** 150 - * @DPLL_ID_TGL_TCPLL5: TGL TC PLL port 5 (TC5) 150 + * @DPLL_ID_TGL_MGPLL5: TGL TC PLL port 5 (TC5) 151 151 */ 152 152 DPLL_ID_TGL_MGPLL5 = 7, 153 153 /** 154 - * @DPLL_ID_TGL_TCPLL6: TGL TC PLL port 6 (TC6) 154 + * @DPLL_ID_TGL_MGPLL6: TGL TC PLL port 6 (TC6) 155 155 */ 156 156 DPLL_ID_TGL_MGPLL6 = 8, 157 157 };
+2
drivers/gpu/drm/i915/i915_drv.h
··· 1723 1723 struct work_struct idle_work; 1724 1724 } gem; 1725 1725 1726 + u8 pch_ssc_use; 1727 + 1726 1728 /* For i945gm vblank irq vs. C3 workaround */ 1727 1729 struct { 1728 1730 struct work_struct work;
+1 -1
drivers/gpu/drm/panfrost/panfrost_drv.c
··· 556 556 return 0; 557 557 558 558 err_out2: 559 + pm_runtime_disable(pfdev->dev); 559 560 panfrost_devfreq_fini(pfdev); 560 561 err_out1: 561 562 panfrost_device_fini(pfdev); 562 563 err_out0: 563 - pm_runtime_disable(pfdev->dev); 564 564 drm_dev_put(ddev); 565 565 return err; 566 566 }
+8 -7
drivers/gpu/drm/panfrost/panfrost_mmu.c
··· 224 224 return SZ_2M; 225 225 } 226 226 227 - void panfrost_mmu_flush_range(struct panfrost_device *pfdev, 228 - struct panfrost_mmu *mmu, 229 - u64 iova, size_t size) 227 + static void panfrost_mmu_flush_range(struct panfrost_device *pfdev, 228 + struct panfrost_mmu *mmu, 229 + u64 iova, size_t size) 230 230 { 231 231 if (mmu->as < 0) 232 232 return; ··· 406 406 spin_lock(&pfdev->as_lock); 407 407 list_for_each_entry(mmu, &pfdev->as_lru_list, list) { 408 408 if (as == mmu->as) 409 - break; 409 + goto found_mmu; 410 410 } 411 - if (as != mmu->as) 412 - goto out; 411 + goto out; 413 412 413 + found_mmu: 414 414 priv = container_of(mmu, struct panfrost_file_priv, mmu); 415 415 416 416 spin_lock(&priv->mm_lock); ··· 432 432 433 433 #define NUM_FAULT_PAGES (SZ_2M / PAGE_SIZE) 434 434 435 - int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, u64 addr) 435 + static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, 436 + u64 addr) 436 437 { 437 438 int ret, i; 438 439 struct panfrost_gem_object *bo;
+1
drivers/gpu/drm/panfrost/panfrost_perfcnt.c
··· 16 16 #include "panfrost_issues.h" 17 17 #include "panfrost_job.h" 18 18 #include "panfrost_mmu.h" 19 + #include "panfrost_perfcnt.h" 19 20 #include "panfrost_regs.h" 20 21 21 22 #define COUNTERS_PER_BLOCK 64
+14
drivers/gpu/drm/radeon/radeon_drv.c
··· 379 379 static void 380 380 radeon_pci_shutdown(struct pci_dev *pdev) 381 381 { 382 + #ifdef CONFIG_PPC64 383 + struct drm_device *ddev = pci_get_drvdata(pdev); 384 + #endif 385 + 382 386 /* if we are running in a VM, make sure the device 383 387 * is torn down properly on reboot/shutdown 384 388 */ 385 389 if (radeon_device_is_virtual()) 386 390 radeon_pci_remove(pdev); 391 + 392 + #ifdef CONFIG_PPC64 393 + /* Some adapters need to be suspended before a 394 + * shutdown occurs in order to prevent an error 395 + * during kexec. 396 + * Make this power-specific because it breaks 397 + * some non-power boards. 398 + */ 399 + radeon_suspend_kms(ddev, true, true, false); 400 + #endif 387 401 } 388 402 389 403 static int radeon_pmops_suspend(struct device *dev)
+16 -3
drivers/gpu/drm/scheduler/sched_main.c
··· 479 479 struct drm_sched_job *s_job, *tmp; 480 480 uint64_t guilty_context; 481 481 bool found_guilty = false; 482 + struct dma_fence *fence; 482 483 483 484 list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) { 484 485 struct drm_sched_fence *s_fence = s_job->s_fence; ··· 493 492 dma_fence_set_error(&s_fence->finished, -ECANCELED); 494 493 495 494 dma_fence_put(s_job->s_fence->parent); 496 - s_job->s_fence->parent = sched->ops->run_job(s_job); 495 + fence = sched->ops->run_job(s_job); 496 + 497 + if (IS_ERR_OR_NULL(fence)) { 498 + s_job->s_fence->parent = NULL; 499 + dma_fence_set_error(&s_fence->finished, PTR_ERR(fence)); 500 + } else { 501 + s_job->s_fence->parent = fence; 502 + } 503 + 504 + 497 505 } 498 506 } 499 507 EXPORT_SYMBOL(drm_sched_resubmit_jobs); ··· 730 720 fence = sched->ops->run_job(sched_job); 731 721 drm_sched_fence_scheduled(s_fence); 732 722 733 - if (fence) { 723 + if (!IS_ERR_OR_NULL(fence)) { 734 724 s_fence->parent = dma_fence_get(fence); 735 725 r = dma_fence_add_callback(fence, &sched_job->cb, 736 726 drm_sched_process_job); ··· 740 730 DRM_ERROR("fence add callback failed (%d)\n", 741 731 r); 742 732 dma_fence_put(fence); 743 - } else 733 + } else { 734 + 735 + dma_fence_set_error(&s_fence->finished, PTR_ERR(fence)); 744 736 drm_sched_process_job(NULL, &sched_job->cb); 737 + } 745 738 746 739 wake_up(&sched->job_scheduled); 747 740 }
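Together with the amdgpu change above, this settles the backend contract: ->run_job() may return a scheduled fence, NULL, or an ERR_PTR, and the scheduler forwards errors through the finished fence rather than dereferencing them. A conforming backend, sketched with illustrative names:

static struct dma_fence *example_run_job(struct drm_sched_job *sched_job)
{
	struct dma_fence *fence = NULL;
	int r = example_submit_to_ring(sched_job, &fence);

	/* NULL fence (nothing to wait on) stays NULL; errors become
	 * ERR_PTR so drm_sched can set them on the finished fence. */
	return r ? ERR_PTR(r) : fence;
}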
+4 -1
drivers/gpu/drm/v3d/v3d_gem.c
··· 557 557 558 558 if (args->bcl_start != args->bcl_end) { 559 559 bin = kcalloc(1, sizeof(*bin), GFP_KERNEL); 560 - if (!bin) 560 + if (!bin) { 561 + v3d_job_put(&render->base); 561 562 return -ENOMEM; 563 + } 562 564 563 565 ret = v3d_job_init(v3d, file_priv, &bin->base, 564 566 v3d_job_free, args->in_sync_bcl); 565 567 if (ret) { 566 568 v3d_job_put(&render->base); 569 + kfree(bin); 567 570 return ret; 568 571 } 569 572
+9 -2
drivers/hid/hid-axff.c
··· 63 63 { 64 64 struct axff_device *axff; 65 65 struct hid_report *report; 66 - struct hid_input *hidinput = list_first_entry(&hid->inputs, struct hid_input, list); 66 + struct hid_input *hidinput; 67 67 struct list_head *report_list =&hid->report_enum[HID_OUTPUT_REPORT].report_list; 68 - struct input_dev *dev = hidinput->input; 68 + struct input_dev *dev; 69 69 int field_count = 0; 70 70 int i, j; 71 71 int error; 72 + 73 + if (list_empty(&hid->inputs)) { 74 + hid_err(hid, "no inputs found\n"); 75 + return -ENODEV; 76 + } 77 + hidinput = list_first_entry(&hid->inputs, struct hid_input, list); 78 + dev = hidinput->input; 72 79 73 80 if (list_empty(report_list)) { 74 81 hid_err(hid, "no output reports found\n");
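The same guard recurs, with only naming differences, in the hid-dr, hid-emsff, hid-gaff, hid-holtekff, hid-lg2ff/lg3ff/lg4ff/lgff, hid-logitech-hidpp, hid-microsoft, hid-sony, hid-tmff and hid-zpff hunks below: a device with a malformed report descriptor can reach these init paths with an empty hid->inputs list. Distilled (helper name illustrative):

static int ff_first_input(struct hid_device *hid, struct input_dev **dev)
{
	struct hid_input *hidinput;

	if (list_empty(&hid->inputs)) {
		hid_err(hid, "no inputs found\n");
		return -ENODEV;
	}
	hidinput = list_first_entry(&hid->inputs, struct hid_input, list);
	*dev = hidinput->input;
	return 0;
}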
+5 -2
drivers/hid/hid-core.c
··· 1139 1139 __u8 *start; 1140 1140 __u8 *buf; 1141 1141 __u8 *end; 1142 + __u8 *next; 1142 1143 int ret; 1143 1144 static int (*dispatch_type[])(struct hid_parser *parser, 1144 1145 struct hid_item *item) = { ··· 1193 1192 device->collection_size = HID_DEFAULT_NUM_COLLECTIONS; 1194 1193 1195 1194 ret = -EINVAL; 1196 - while ((start = fetch_item(start, end, &item)) != NULL) { 1195 + while ((next = fetch_item(start, end, &item)) != NULL) { 1196 + start = next; 1197 1197 1198 1198 if (item.format != HID_ITEM_FORMAT_SHORT) { 1199 1199 hid_err(device, "unexpected long global item\n"); ··· 1232 1230 } 1233 1231 } 1234 1232 1235 - hid_err(device, "item fetching failed at offset %d\n", (int)(end - start)); 1233 + hid_err(device, "item fetching failed at offset %u/%u\n", 1234 + size - (unsigned int)(end - start), size); 1236 1235 err: 1237 1236 kfree(parser->collection_stack); 1238 1237 alloc_err:
+9 -3
drivers/hid/hid-dr.c
··· 75 75 { 76 76 struct drff_device *drff; 77 77 struct hid_report *report; 78 - struct hid_input *hidinput = list_first_entry(&hid->inputs, 79 - struct hid_input, list); 78 + struct hid_input *hidinput; 80 79 struct list_head *report_list = 81 80 &hid->report_enum[HID_OUTPUT_REPORT].report_list; 82 - struct input_dev *dev = hidinput->input; 81 + struct input_dev *dev; 83 82 int error; 83 + 84 + if (list_empty(&hid->inputs)) { 85 + hid_err(hid, "no inputs found\n"); 86 + return -ENODEV; 87 + } 88 + hidinput = list_first_entry(&hid->inputs, struct hid_input, list); 89 + dev = hidinput->input; 84 90 85 91 if (list_empty(report_list)) { 86 92 hid_err(hid, "no output reports found\n");
+9 -3
drivers/hid/hid-emsff.c
··· 47 47 { 48 48 struct emsff_device *emsff; 49 49 struct hid_report *report; 50 - struct hid_input *hidinput = list_first_entry(&hid->inputs, 51 - struct hid_input, list); 50 + struct hid_input *hidinput; 52 51 struct list_head *report_list = 53 52 &hid->report_enum[HID_OUTPUT_REPORT].report_list; 54 - struct input_dev *dev = hidinput->input; 53 + struct input_dev *dev; 55 54 int error; 55 + 56 + if (list_empty(&hid->inputs)) { 57 + hid_err(hid, "no inputs found\n"); 58 + return -ENODEV; 59 + } 60 + hidinput = list_first_entry(&hid->inputs, struct hid_input, list); 61 + dev = hidinput->input; 56 62 57 63 if (list_empty(report_list)) { 58 64 hid_err(hid, "no output reports found\n");
+9 -3
drivers/hid/hid-gaff.c
··· 64 64 { 65 65 struct gaff_device *gaff; 66 66 struct hid_report *report; 67 - struct hid_input *hidinput = list_entry(hid->inputs.next, 68 - struct hid_input, list); 67 + struct hid_input *hidinput; 69 68 struct list_head *report_list = 70 69 &hid->report_enum[HID_OUTPUT_REPORT].report_list; 71 70 struct list_head *report_ptr = report_list; 72 - struct input_dev *dev = hidinput->input; 71 + struct input_dev *dev; 73 72 int error; 73 + 74 + if (list_empty(&hid->inputs)) { 75 + hid_err(hid, "no inputs found\n"); 76 + return -ENODEV; 77 + } 78 + hidinput = list_entry(hid->inputs.next, struct hid_input, list); 79 + dev = hidinput->input; 74 80 75 81 if (list_empty(report_list)) { 76 82 hid_err(hid, "no output reports found\n");
+4
drivers/hid/hid-google-hammer.c
··· 470 470 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 471 471 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_HAMMER) }, 472 472 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 473 + USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MAGNEMITE) }, 474 + { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 475 + USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MASTERBALL) }, 476 + { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 473 477 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_STAFF) }, 474 478 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 475 479 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_WAND) },
+9 -3
drivers/hid/hid-holtekff.c
··· 124 124 { 125 125 struct holtekff_device *holtekff; 126 126 struct hid_report *report; 127 - struct hid_input *hidinput = list_entry(hid->inputs.next, 128 - struct hid_input, list); 127 + struct hid_input *hidinput; 129 128 struct list_head *report_list = 130 129 &hid->report_enum[HID_OUTPUT_REPORT].report_list; 131 - struct input_dev *dev = hidinput->input; 130 + struct input_dev *dev; 132 131 int error; 132 + 133 + if (list_empty(&hid->inputs)) { 134 + hid_err(hid, "no inputs found\n"); 135 + return -ENODEV; 136 + } 137 + hidinput = list_entry(hid->inputs.next, struct hid_input, list); 138 + dev = hidinput->input; 133 139 134 140 if (list_empty(report_list)) { 135 141 hid_err(hid, "no output report found\n");
+2
drivers/hid/hid-ids.h
··· 476 476 #define USB_DEVICE_ID_GOOGLE_STAFF 0x502b 477 477 #define USB_DEVICE_ID_GOOGLE_WAND 0x502d 478 478 #define USB_DEVICE_ID_GOOGLE_WHISKERS 0x5030 479 + #define USB_DEVICE_ID_GOOGLE_MASTERBALL 0x503c 480 + #define USB_DEVICE_ID_GOOGLE_MAGNEMITE 0x503d 479 481 480 482 #define USB_VENDOR_ID_GOTOP 0x08f2 481 483 #define USB_DEVICE_ID_SUPER_Q2 0x007f
+9 -3
drivers/hid/hid-lg2ff.c
··· 50 50 { 51 51 struct lg2ff_device *lg2ff; 52 52 struct hid_report *report; 53 - struct hid_input *hidinput = list_entry(hid->inputs.next, 54 - struct hid_input, list); 55 - struct input_dev *dev = hidinput->input; 53 + struct hid_input *hidinput; 54 + struct input_dev *dev; 56 55 int error; 56 + 57 + if (list_empty(&hid->inputs)) { 58 + hid_err(hid, "no inputs found\n"); 59 + return -ENODEV; 60 + } 61 + hidinput = list_entry(hid->inputs.next, struct hid_input, list); 62 + dev = hidinput->input; 57 63 58 64 /* Check that the report looks ok */ 59 65 report = hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 7);
+9 -2
drivers/hid/hid-lg3ff.c
··· 117 117 118 118 int lg3ff_init(struct hid_device *hid) 119 119 { 120 - struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list); 121 - struct input_dev *dev = hidinput->input; 120 + struct hid_input *hidinput; 121 + struct input_dev *dev; 122 122 const signed short *ff_bits = ff3_joystick_ac; 123 123 int error; 124 124 int i; 125 + 126 + if (list_empty(&hid->inputs)) { 127 + hid_err(hid, "no inputs found\n"); 128 + return -ENODEV; 129 + } 130 + hidinput = list_entry(hid->inputs.next, struct hid_input, list); 131 + dev = hidinput->input; 125 132 126 133 /* Check that the report looks ok */ 127 134 if (!hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 35))
+9 -2
drivers/hid/hid-lg4ff.c
··· 1253 1253 1254 1254 int lg4ff_init(struct hid_device *hid) 1255 1255 { 1256 - struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list); 1257 - struct input_dev *dev = hidinput->input; 1256 + struct hid_input *hidinput; 1257 + struct input_dev *dev; 1258 1258 struct list_head *report_list = &hid->report_enum[HID_OUTPUT_REPORT].report_list; 1259 1259 struct hid_report *report = list_entry(report_list->next, struct hid_report, list); 1260 1260 const struct usb_device_descriptor *udesc = &(hid_to_usb_dev(hid)->descriptor); ··· 1265 1265 int error, i, j; 1266 1266 int mmode_ret, mmode_idx = -1; 1267 1267 u16 real_product_id; 1268 + 1269 + if (list_empty(&hid->inputs)) { 1270 + hid_err(hid, "no inputs found\n"); 1271 + return -ENODEV; 1272 + } 1273 + hidinput = list_entry(hid->inputs.next, struct hid_input, list); 1274 + dev = hidinput->input; 1268 1275 1269 1276 /* Check that the report looks ok */ 1270 1277 if (!hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 7))
+9 -2
drivers/hid/hid-lgff.c
··· 115 115 116 116 int lgff_init(struct hid_device* hid) 117 117 { 118 - struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list); 119 - struct input_dev *dev = hidinput->input; 118 + struct hid_input *hidinput; 119 + struct input_dev *dev; 120 120 const signed short *ff_bits = ff_joystick; 121 121 int error; 122 122 int i; 123 + 124 + if (list_empty(&hid->inputs)) { 125 + hid_err(hid, "no inputs found\n"); 126 + return -ENODEV; 127 + } 128 + hidinput = list_entry(hid->inputs.next, struct hid_input, list); 129 + dev = hidinput->input; 123 130 124 131 /* Check that the report looks ok */ 125 132 if (!hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 7))
+149 -117
drivers/hid/hid-logitech-hidpp.c
··· 1669 1669 1670 1670 #define HIDPP_FF_EFFECTID_NONE -1 1671 1671 #define HIDPP_FF_EFFECTID_AUTOCENTER -2 1672 + #define HIDPP_AUTOCENTER_PARAMS_LENGTH 18 1672 1673 1673 1674 #define HIDPP_FF_MAX_PARAMS 20 1674 1675 #define HIDPP_FF_RESERVED_SLOTS 1 ··· 2010 2009 static void hidpp_ff_set_autocenter(struct input_dev *dev, u16 magnitude) 2011 2010 { 2012 2011 struct hidpp_ff_private_data *data = dev->ff->private; 2013 - u8 params[18]; 2012 + u8 params[HIDPP_AUTOCENTER_PARAMS_LENGTH]; 2014 2013 2015 2014 dbg_hid("Setting autocenter to %d.\n", magnitude); 2016 2015 ··· 2078 2077 static void hidpp_ff_destroy(struct ff_device *ff) 2079 2078 { 2080 2079 struct hidpp_ff_private_data *data = ff->private; 2080 + struct hid_device *hid = data->hidpp->hid_dev; 2081 2081 2082 + hid_info(hid, "Unloading HID++ force feedback.\n"); 2083 + 2084 + device_remove_file(&hid->dev, &dev_attr_range); 2085 + destroy_workqueue(data->wq); 2082 2086 kfree(data->effect_ids); 2083 2087 } 2084 2088 2085 - static int hidpp_ff_init(struct hidpp_device *hidpp, u8 feature_index) 2089 + static int hidpp_ff_init(struct hidpp_device *hidpp, 2090 + struct hidpp_ff_private_data *data) 2086 2091 { 2087 2092 struct hid_device *hid = hidpp->hid_dev; 2088 - struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list); 2089 - struct input_dev *dev = hidinput->input; 2093 + struct hid_input *hidinput; 2094 + struct input_dev *dev; 2090 2095 const struct usb_device_descriptor *udesc = &(hid_to_usb_dev(hid)->descriptor); 2091 2096 const u16 bcdDevice = le16_to_cpu(udesc->bcdDevice); 2092 2097 struct ff_device *ff; 2093 - struct hidpp_report response; 2094 - struct hidpp_ff_private_data *data; 2095 - int error, j, num_slots; 2098 + int error, j, num_slots = data->num_effects; 2096 2099 u8 version; 2100 + 2101 + if (list_empty(&hid->inputs)) { 2102 + hid_err(hid, "no inputs found\n"); 2103 + return -ENODEV; 2104 + } 2105 + hidinput = list_entry(hid->inputs.next, struct hid_input, list); 2106 + dev = hidinput->input; 2097 2107 2098 2108 if (!dev) { 2099 2109 hid_err(hid, "Struct input_dev not set!\n"); ··· 2121 2109 for (j = 0; hidpp_ff_effects_v2[j] >= 0; j++) 2122 2110 set_bit(hidpp_ff_effects_v2[j], dev->ffbit); 2123 2111 2124 - /* Read number of slots available in device */ 2125 - error = hidpp_send_fap_command_sync(hidpp, feature_index, 2126 - HIDPP_FF_GET_INFO, NULL, 0, &response); 2127 - if (error) { 2128 - if (error < 0) 2129 - return error; 2130 - hid_err(hidpp->hid_dev, "%s: received protocol error 0x%02x\n", 2131 - __func__, error); 2132 - return -EPROTO; 2133 - } 2134 - 2135 - num_slots = response.fap.params[0] - HIDPP_FF_RESERVED_SLOTS; 2136 - 2137 2112 error = input_ff_create(dev, num_slots); 2138 2113 2139 2114 if (error) { 2140 2115 hid_err(dev, "Failed to create FF device!\n"); 2141 2116 return error; 2142 2117 } 2143 - 2144 - data = kzalloc(sizeof(*data), GFP_KERNEL); 2118 + /* 2119 + * Create a copy of passed data, so we can transfer memory 2120 + * ownership to FF core 2121 + */ 2122 + data = kmemdup(data, sizeof(*data), GFP_KERNEL); 2145 2123 if (!data) 2146 2124 return -ENOMEM; 2147 2125 data->effect_ids = kcalloc(num_slots, sizeof(int), GFP_KERNEL); ··· 2147 2145 } 2148 2146 2149 2147 data->hidpp = hidpp; 2150 - data->feature_index = feature_index; 2151 2148 data->version = version; 2152 - data->slot_autocenter = 0; 2153 - data->num_effects = num_slots; 2154 2149 for (j = 0; j < num_slots; j++) 2155 2150 data->effect_ids[j] = -1; 2156 2151 ··· 2161 2162 ff->set_autocenter = 
hidpp_ff_set_autocenter; 2162 2163 ff->destroy = hidpp_ff_destroy; 2163 2164 2164 - 2165 - /* reset all forces */ 2166 - error = hidpp_send_fap_command_sync(hidpp, feature_index, 2167 - HIDPP_FF_RESET_ALL, NULL, 0, &response); 2168 - 2169 - /* Read current Range */ 2170 - error = hidpp_send_fap_command_sync(hidpp, feature_index, 2171 - HIDPP_FF_GET_APERTURE, NULL, 0, &response); 2172 - if (error) 2173 - hid_warn(hidpp->hid_dev, "Failed to read range from device!\n"); 2174 - data->range = error ? 900 : get_unaligned_be16(&response.fap.params[0]); 2175 - 2176 2165 /* Create sysfs interface */ 2177 2166 error = device_create_file(&(hidpp->hid_dev->dev), &dev_attr_range); 2178 2167 if (error) 2179 2168 hid_warn(hidpp->hid_dev, "Unable to create sysfs interface for \"range\", errno %d!\n", error); 2180 2169 2181 - /* Read the current gain values */ 2182 - error = hidpp_send_fap_command_sync(hidpp, feature_index, 2183 - HIDPP_FF_GET_GLOBAL_GAINS, NULL, 0, &response); 2184 - if (error) 2185 - hid_warn(hidpp->hid_dev, "Failed to read gain values from device!\n"); 2186 - data->gain = error ? 0xffff : get_unaligned_be16(&response.fap.params[0]); 2187 - /* ignore boost value at response.fap.params[2] */ 2188 - 2189 2170 /* init the hardware command queue */ 2190 2171 atomic_set(&data->workqueue_size, 0); 2191 - 2192 - /* initialize with zero autocenter to get wheel in usable state */ 2193 - hidpp_ff_set_autocenter(dev, 0); 2194 2172 2195 2173 hid_info(hid, "Force feedback support loaded (firmware release %d).\n", 2196 2174 version); 2197 2175 2198 2176 return 0; 2199 2177 } 2200 - 2201 - static int hidpp_ff_deinit(struct hid_device *hid) 2202 - { 2203 - struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list); 2204 - struct input_dev *dev = hidinput->input; 2205 - struct hidpp_ff_private_data *data; 2206 - 2207 - if (!dev) { 2208 - hid_err(hid, "Struct input_dev not found!\n"); 2209 - return -EINVAL; 2210 - } 2211 - 2212 - hid_info(hid, "Unloading HID++ force feedback.\n"); 2213 - data = dev->ff->private; 2214 - if (!data) { 2215 - hid_err(hid, "Private data not found!\n"); 2216 - return -EINVAL; 2217 - } 2218 - 2219 - destroy_workqueue(data->wq); 2220 - device_remove_file(&hid->dev, &dev_attr_range); 2221 - 2222 - return 0; 2223 - } 2224 - 2225 2178 2226 2179 /* ************************************************************************** */ 2227 2180 /* */ ··· 2676 2725 2677 2726 #define HIDPP_PAGE_G920_FORCE_FEEDBACK 0x8123 2678 2727 2679 - static int g920_get_config(struct hidpp_device *hidpp) 2728 + static int g920_ff_set_autocenter(struct hidpp_device *hidpp, 2729 + struct hidpp_ff_private_data *data) 2680 2730 { 2681 - u8 feature_type; 2682 - u8 feature_index; 2731 + struct hidpp_report response; 2732 + u8 params[HIDPP_AUTOCENTER_PARAMS_LENGTH] = { 2733 + [1] = HIDPP_FF_EFFECT_SPRING | HIDPP_FF_EFFECT_AUTOSTART, 2734 + }; 2683 2735 int ret; 2736 + 2737 + /* initialize with zero autocenter to get wheel in usable state */ 2738 + 2739 + dbg_hid("Setting autocenter to 0.\n"); 2740 + ret = hidpp_send_fap_command_sync(hidpp, data->feature_index, 2741 + HIDPP_FF_DOWNLOAD_EFFECT, 2742 + params, ARRAY_SIZE(params), 2743 + &response); 2744 + if (ret) 2745 + hid_warn(hidpp->hid_dev, "Failed to autocenter device!\n"); 2746 + else 2747 + data->slot_autocenter = response.fap.params[0]; 2748 + 2749 + return ret; 2750 + } 2751 + 2752 + static int g920_get_config(struct hidpp_device *hidpp, 2753 + struct hidpp_ff_private_data *data) 2754 + { 2755 + struct hidpp_report response; 2756 + u8 
feature_type; 2757 + int ret; 2758 + 2759 + memset(data, 0, sizeof(*data)); 2684 2760 2685 2761 /* Find feature and store for later use */ 2686 2762 ret = hidpp_root_get_feature(hidpp, HIDPP_PAGE_G920_FORCE_FEEDBACK, 2687 - &feature_index, &feature_type); 2763 + &data->feature_index, &feature_type); 2688 2764 if (ret) 2689 2765 return ret; 2690 2766 2691 - ret = hidpp_ff_init(hidpp, feature_index); 2692 - if (ret) 2693 - hid_warn(hidpp->hid_dev, "Unable to initialize force feedback support, errno %d\n", 2694 - ret); 2767 + /* Read number of slots available in device */ 2768 + ret = hidpp_send_fap_command_sync(hidpp, data->feature_index, 2769 + HIDPP_FF_GET_INFO, 2770 + NULL, 0, 2771 + &response); 2772 + if (ret) { 2773 + if (ret < 0) 2774 + return ret; 2775 + hid_err(hidpp->hid_dev, 2776 + "%s: received protocol error 0x%02x\n", __func__, ret); 2777 + return -EPROTO; 2778 + } 2695 2779 2696 - return 0; 2780 + data->num_effects = response.fap.params[0] - HIDPP_FF_RESERVED_SLOTS; 2781 + 2782 + /* reset all forces */ 2783 + ret = hidpp_send_fap_command_sync(hidpp, data->feature_index, 2784 + HIDPP_FF_RESET_ALL, 2785 + NULL, 0, 2786 + &response); 2787 + if (ret) 2788 + hid_warn(hidpp->hid_dev, "Failed to reset all forces!\n"); 2789 + 2790 + ret = hidpp_send_fap_command_sync(hidpp, data->feature_index, 2791 + HIDPP_FF_GET_APERTURE, 2792 + NULL, 0, 2793 + &response); 2794 + if (ret) { 2795 + hid_warn(hidpp->hid_dev, 2796 + "Failed to read range from device!\n"); 2797 + } 2798 + data->range = ret ? 2799 + 900 : get_unaligned_be16(&response.fap.params[0]); 2800 + 2801 + /* Read the current gain values */ 2802 + ret = hidpp_send_fap_command_sync(hidpp, data->feature_index, 2803 + HIDPP_FF_GET_GLOBAL_GAINS, 2804 + NULL, 0, 2805 + &response); 2806 + if (ret) 2807 + hid_warn(hidpp->hid_dev, 2808 + "Failed to read gain values from device!\n"); 2809 + data->gain = ret ? 
2810 + 0xffff : get_unaligned_be16(&response.fap.params[0]); 2811 + 2812 + /* ignore boost value at response.fap.params[2] */ 2813 + 2814 + return g920_ff_set_autocenter(hidpp, data); 2697 2815 } 2698 2816 2699 2817 /* -------------------------------------------------------------------------- */ ··· 3478 3458 return report->field[0]->report_count + 1; 3479 3459 } 3480 3460 3481 - static bool hidpp_validate_report(struct hid_device *hdev, int id, 3482 - int expected_length, bool optional) 3483 - { 3484 - int report_length; 3485 - 3486 - if (id >= HID_MAX_IDS || id < 0) { 3487 - hid_err(hdev, "invalid HID report id %u\n", id); 3488 - return false; 3489 - } 3490 - 3491 - report_length = hidpp_get_report_length(hdev, id); 3492 - if (!report_length) 3493 - return optional; 3494 - 3495 - if (report_length < expected_length) { 3496 - hid_warn(hdev, "not enough values in hidpp report %d\n", id); 3497 - return false; 3498 - } 3499 - 3500 - return true; 3501 - } 3502 - 3503 3461 static bool hidpp_validate_device(struct hid_device *hdev) 3504 3462 { 3505 - return hidpp_validate_report(hdev, REPORT_ID_HIDPP_SHORT, 3506 - HIDPP_REPORT_SHORT_LENGTH, false) && 3507 - hidpp_validate_report(hdev, REPORT_ID_HIDPP_LONG, 3508 - HIDPP_REPORT_LONG_LENGTH, true); 3463 + struct hidpp_device *hidpp = hid_get_drvdata(hdev); 3464 + int id, report_length, supported_reports = 0; 3465 + 3466 + id = REPORT_ID_HIDPP_SHORT; 3467 + report_length = hidpp_get_report_length(hdev, id); 3468 + if (report_length) { 3469 + if (report_length < HIDPP_REPORT_SHORT_LENGTH) 3470 + goto bad_device; 3471 + 3472 + supported_reports++; 3473 + } 3474 + 3475 + id = REPORT_ID_HIDPP_LONG; 3476 + report_length = hidpp_get_report_length(hdev, id); 3477 + if (report_length) { 3478 + if (report_length < HIDPP_REPORT_LONG_LENGTH) 3479 + goto bad_device; 3480 + 3481 + supported_reports++; 3482 + } 3483 + 3484 + id = REPORT_ID_HIDPP_VERY_LONG; 3485 + report_length = hidpp_get_report_length(hdev, id); 3486 + if (report_length) { 3487 + if (report_length < HIDPP_REPORT_LONG_LENGTH || 3488 + report_length > HIDPP_REPORT_VERY_LONG_MAX_LENGTH) 3489 + goto bad_device; 3490 + 3491 + supported_reports++; 3492 + hidpp->very_long_report_length = report_length; 3493 + } 3494 + 3495 + return supported_reports; 3496 + 3497 + bad_device: 3498 + hid_warn(hdev, "not enough values in hidpp report %d\n", id); 3499 + return false; 3509 3500 } 3510 3501 3511 3502 static bool hidpp_application_equals(struct hid_device *hdev, ··· 3536 3505 int ret; 3537 3506 bool connected; 3538 3507 unsigned int connect_mask = HID_CONNECT_DEFAULT; 3508 + struct hidpp_ff_private_data data; 3539 3509 3540 3510 /* report_fixup needs drvdata to be set before we call hid_parse */ 3541 3511 hidpp = devm_kzalloc(&hdev->dev, sizeof(*hidpp), GFP_KERNEL); ··· 3562 3530 devm_kfree(&hdev->dev, hidpp); 3563 3531 return hid_hw_start(hdev, HID_CONNECT_DEFAULT); 3564 3532 } 3565 - 3566 - hidpp->very_long_report_length = 3567 - hidpp_get_report_length(hdev, REPORT_ID_HIDPP_VERY_LONG); 3568 - if (hidpp->very_long_report_length > HIDPP_REPORT_VERY_LONG_MAX_LENGTH) 3569 - hidpp->very_long_report_length = HIDPP_REPORT_VERY_LONG_MAX_LENGTH; 3570 3533 3571 3534 if (id->group == HID_GROUP_LOGITECH_DJ_DEVICE) 3572 3535 hidpp->quirks |= HIDPP_QUIRK_UNIFYING; ··· 3641 3614 if (ret) 3642 3615 goto hid_hw_init_fail; 3643 3616 } else if (connected && (hidpp->quirks & HIDPP_QUIRK_CLASS_G920)) { 3644 - ret = g920_get_config(hidpp); 3617 + ret = g920_get_config(hidpp, &data); 3645 3618 if (ret) 3646 3619 goto 
hid_hw_init_fail; 3647 3620 } ··· 3661 3634 if (ret) { 3662 3635 hid_err(hdev, "%s:hid_hw_start returned error\n", __func__); 3663 3636 goto hid_hw_start_fail; 3637 + } 3638 + 3639 + if (hidpp->quirks & HIDPP_QUIRK_CLASS_G920) { 3640 + ret = hidpp_ff_init(hidpp, &data); 3641 + if (ret) 3642 + hid_warn(hidpp->hid_dev, 3643 + "Unable to initialize force feedback support, errno %d\n", 3644 + ret); 3664 3645 } 3665 3646 3666 3647 return ret; ··· 3692 3657 return hid_hw_stop(hdev); 3693 3658 3694 3659 sysfs_remove_group(&hdev->dev.kobj, &ps_attribute_group); 3695 - 3696 - if (hidpp->quirks & HIDPP_QUIRK_CLASS_G920) 3697 - hidpp_ff_deinit(hdev); 3698 3660 3699 3661 hid_hw_stop(hdev); 3700 3662 cancel_work_sync(&hidpp->work);
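The reworked g920_get_config() above parses each HID++ response parameter as a big-endian 16-bit value and substitutes a default (900 degrees of range, 0xffff gain) whenever a command fails. A minimal userspace sketch of that parse-with-fallback shape, with get_unaligned_be16() reimplemented and the response layout invented for illustration:

#include <stdint.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's get_unaligned_be16() */
static uint16_t get_unaligned_be16(const uint8_t *p)
{
	return (uint16_t)((p[0] << 8) | p[1]);
}

/* Invented response layout: params[0..1] carry the big-endian range */
struct ff_response {
	uint8_t params[4];
};

int main(void)
{
	struct ff_response response = { .params = { 0x03, 0x84, 0x00, 0x00 } };
	int ret = 0;	/* pretend the HID++ command succeeded */
	unsigned int range;

	/* Same fallback shape as the driver: default to 900 degrees on error */
	range = ret ? 900 : get_unaligned_be16(&response.params[0]);
	printf("wheel range: %u degrees\n", range);	/* 0x0384 == 900 */
	return 0;
}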
+9 -3
drivers/hid/hid-microsoft.c
··· 328 328 329 329 static int ms_init_ff(struct hid_device *hdev) 330 330 { 331 - struct hid_input *hidinput = list_entry(hdev->inputs.next, 332 - struct hid_input, list); 333 - struct input_dev *input_dev = hidinput->input; 331 + struct hid_input *hidinput; 332 + struct input_dev *input_dev; 334 333 struct ms_data *ms = hid_get_drvdata(hdev); 334 + 335 + if (list_empty(&hdev->inputs)) { 336 + hid_err(hdev, "no inputs found\n"); 337 + return -ENODEV; 338 + } 339 + hidinput = list_entry(hdev->inputs.next, struct hid_input, list); 340 + input_dev = hidinput->input; 335 341 336 342 if (!(ms->quirks & MS_QUIRK_FF)) 337 343 return 0;
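This fix, and the identical ones in hid-sony.c, hid-tmff.c and hid-zpff.c below, guard list_entry() with list_empty(): on an empty kernel list, list_entry(head->next, ...) returns the list head itself cast to the entry type, not a valid hid_input. A self-contained sketch of the pattern, using minimal stand-ins for the kernel list primitives:

#include <stddef.h>
#include <stdio.h>

/* Minimal stand-ins for the kernel's circular list primitives */
struct list_head { struct list_head *next, *prev; };

#define list_entry(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static int list_empty(const struct list_head *head)
{
	return head->next == head;
}

struct hid_input { int id; struct list_head list; };

int main(void)
{
	struct list_head inputs = { &inputs, &inputs };	/* empty list */

	if (list_empty(&inputs)) {
		/* Without this check, list_entry(inputs.next, ...) would hand
		 * back the list head itself, cast to struct hid_input: the
		 * bogus pointer the HID fixes above defend against. */
		fprintf(stderr, "no inputs found\n");
		return 1;
	}
	printf("first input: %d\n",
	       list_entry(inputs.next, struct hid_input, list)->id);
	return 0;
}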
+2 -2
drivers/hid/hid-prodikeys.c
··· 516 516 MY PICTURES => KEY_WORDPROCESSOR 517 517 MY MUSIC=> KEY_SPREADSHEET 518 518 */ 519 - unsigned int keys[] = { 519 + static const unsigned int keys[] = { 520 520 KEY_FN, 521 521 KEY_MESSENGER, KEY_CALENDAR, 522 522 KEY_ADDRESSBOOK, KEY_DOCUMENTS, ··· 532 532 0 533 533 }; 534 534 535 - unsigned int *pkeys = &keys[0]; 535 + const unsigned int *pkeys = &keys[0]; 536 536 unsigned short i; 537 537 538 538 if (pm->ifnum != 1) /* only set up ONCE for interace 1 */
+9 -3
drivers/hid/hid-sony.c
··· 2254 2254 2255 2255 static int sony_init_ff(struct sony_sc *sc) 2256 2256 { 2257 - struct hid_input *hidinput = list_entry(sc->hdev->inputs.next, 2258 - struct hid_input, list); 2259 - struct input_dev *input_dev = hidinput->input; 2257 + struct hid_input *hidinput; 2258 + struct input_dev *input_dev; 2259 + 2260 + if (list_empty(&sc->hdev->inputs)) { 2261 + hid_err(sc->hdev, "no inputs found\n"); 2262 + return -ENODEV; 2263 + } 2264 + hidinput = list_entry(sc->hdev->inputs.next, struct hid_input, list); 2265 + input_dev = hidinput->input; 2260 2266 2261 2267 input_set_capability(input_dev, EV_FF, FF_RUMBLE); 2262 2268 return input_ff_create_memless(input_dev, NULL, sony_play_effect);
+9 -3
drivers/hid/hid-tmff.c
··· 124 124 struct tmff_device *tmff; 125 125 struct hid_report *report; 126 126 struct list_head *report_list; 127 - struct hid_input *hidinput = list_entry(hid->inputs.next, 128 - struct hid_input, list); 129 - struct input_dev *input_dev = hidinput->input; 127 + struct hid_input *hidinput; 128 + struct input_dev *input_dev; 130 129 int error; 131 130 int i; 131 + 132 + if (list_empty(&hid->inputs)) { 133 + hid_err(hid, "no inputs found\n"); 134 + return -ENODEV; 135 + } 136 + hidinput = list_entry(hid->inputs.next, struct hid_input, list); 137 + input_dev = hidinput->input; 132 138 133 139 tmff = kzalloc(sizeof(struct tmff_device), GFP_KERNEL); 134 140 if (!tmff)
+9 -3
drivers/hid/hid-zpff.c
··· 54 54 { 55 55 struct zpff_device *zpff; 56 56 struct hid_report *report; 57 - struct hid_input *hidinput = list_entry(hid->inputs.next, 58 - struct hid_input, list); 59 - struct input_dev *dev = hidinput->input; 57 + struct hid_input *hidinput; 58 + struct input_dev *dev; 60 59 int i, error; 60 + 61 + if (list_empty(&hid->inputs)) { 62 + hid_err(hid, "no inputs found\n"); 63 + return -ENODEV; 64 + } 65 + hidinput = list_entry(hid->inputs.next, struct hid_input, list); 66 + dev = hidinput->input; 61 67 62 68 for (i = 0; i < 4; i++) { 63 69 report = hid_validate_values(hid, HID_OUTPUT_REPORT, 0, i, 1);
+7 -111
drivers/hid/i2c-hid/i2c-hid-core.c
··· 26 26 #include <linux/delay.h> 27 27 #include <linux/slab.h> 28 28 #include <linux/pm.h> 29 - #include <linux/pm_runtime.h> 30 29 #include <linux/device.h> 31 30 #include <linux/wait.h> 32 31 #include <linux/err.h> ··· 47 48 /* quirks to control the device */ 48 49 #define I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV BIT(0) 49 50 #define I2C_HID_QUIRK_NO_IRQ_AFTER_RESET BIT(1) 50 - #define I2C_HID_QUIRK_NO_RUNTIME_PM BIT(2) 51 - #define I2C_HID_QUIRK_DELAY_AFTER_SLEEP BIT(3) 52 51 #define I2C_HID_QUIRK_BOGUS_IRQ BIT(4) 53 52 54 53 /* flags */ ··· 169 172 { USB_VENDOR_ID_WEIDA, HID_ANY_ID, 170 173 I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV }, 171 174 { I2C_VENDOR_ID_HANTICK, I2C_PRODUCT_ID_HANTICK_5288, 172 - I2C_HID_QUIRK_NO_IRQ_AFTER_RESET | 173 - I2C_HID_QUIRK_NO_RUNTIME_PM }, 174 - { I2C_VENDOR_ID_RAYDIUM, I2C_PRODUCT_ID_RAYDIUM_4B33, 175 - I2C_HID_QUIRK_DELAY_AFTER_SLEEP }, 176 - { USB_VENDOR_ID_LG, I2C_DEVICE_ID_LG_8001, 177 - I2C_HID_QUIRK_NO_RUNTIME_PM }, 178 - { I2C_VENDOR_ID_GOODIX, I2C_DEVICE_ID_GOODIX_01F0, 179 - I2C_HID_QUIRK_NO_RUNTIME_PM }, 175 + I2C_HID_QUIRK_NO_IRQ_AFTER_RESET }, 180 176 { USB_VENDOR_ID_ELAN, HID_ANY_ID, 181 177 I2C_HID_QUIRK_BOGUS_IRQ }, 182 178 { 0, 0 } ··· 387 397 { 388 398 struct i2c_hid *ihid = i2c_get_clientdata(client); 389 399 int ret; 390 - unsigned long now, delay; 391 400 392 401 i2c_hid_dbg(ihid, "%s\n", __func__); 393 402 ··· 404 415 goto set_pwr_exit; 405 416 } 406 417 407 - if (ihid->quirks & I2C_HID_QUIRK_DELAY_AFTER_SLEEP && 408 - power_state == I2C_HID_PWR_ON) { 409 - now = jiffies; 410 - if (time_after(ihid->sleep_delay, now)) { 411 - delay = jiffies_to_usecs(ihid->sleep_delay - now); 412 - usleep_range(delay, delay + 1); 413 - } 414 - } 415 - 416 418 ret = __i2c_hid_command(client, &hid_set_power_cmd, power_state, 417 419 0, NULL, 0, NULL, 0); 418 - 419 - if (ihid->quirks & I2C_HID_QUIRK_DELAY_AFTER_SLEEP && 420 - power_state == I2C_HID_PWR_SLEEP) 421 - ihid->sleep_delay = jiffies + msecs_to_jiffies(20); 422 420 423 421 if (ret) 424 422 dev_err(&client->dev, "failed to change power setting.\n"); ··· 767 791 { 768 792 struct i2c_client *client = hid->driver_data; 769 793 struct i2c_hid *ihid = i2c_get_clientdata(client); 770 - int ret = 0; 771 - 772 - ret = pm_runtime_get_sync(&client->dev); 773 - if (ret < 0) 774 - return ret; 775 794 776 795 set_bit(I2C_HID_STARTED, &ihid->flags); 777 796 return 0; ··· 778 807 struct i2c_hid *ihid = i2c_get_clientdata(client); 779 808 780 809 clear_bit(I2C_HID_STARTED, &ihid->flags); 781 - 782 - /* Save some power */ 783 - pm_runtime_put(&client->dev); 784 - } 785 - 786 - static int i2c_hid_power(struct hid_device *hid, int lvl) 787 - { 788 - struct i2c_client *client = hid->driver_data; 789 - struct i2c_hid *ihid = i2c_get_clientdata(client); 790 - 791 - i2c_hid_dbg(ihid, "%s lvl:%d\n", __func__, lvl); 792 - 793 - switch (lvl) { 794 - case PM_HINT_FULLON: 795 - pm_runtime_get_sync(&client->dev); 796 - break; 797 - case PM_HINT_NORMAL: 798 - pm_runtime_put(&client->dev); 799 - break; 800 - } 801 - return 0; 802 810 } 803 811 804 812 struct hid_ll_driver i2c_hid_ll_driver = { ··· 786 836 .stop = i2c_hid_stop, 787 837 .open = i2c_hid_open, 788 838 .close = i2c_hid_close, 789 - .power = i2c_hid_power, 790 839 .output_report = i2c_hid_output_report, 791 840 .raw_request = i2c_hid_raw_request, 792 841 }; ··· 1053 1104 1054 1105 i2c_hid_acpi_fix_up_power(&client->dev); 1055 1106 1056 - pm_runtime_get_noresume(&client->dev); 1057 - pm_runtime_set_active(&client->dev); 1058 - pm_runtime_enable(&client->dev); 1059 1107 
device_enable_async_suspend(&client->dev); 1060 1108 1061 1109 /* Make sure there is something at this address */ ··· 1060 1114 if (ret < 0) { 1061 1115 dev_dbg(&client->dev, "nothing at this address: %d\n", ret); 1062 1116 ret = -ENXIO; 1063 - goto err_pm; 1117 + goto err_regulator; 1064 1118 } 1065 1119 1066 1120 ret = i2c_hid_fetch_hid_descriptor(ihid); 1067 1121 if (ret < 0) 1068 - goto err_pm; 1122 + goto err_regulator; 1069 1123 1070 1124 ret = i2c_hid_init_irq(client); 1071 1125 if (ret < 0) 1072 - goto err_pm; 1126 + goto err_regulator; 1073 1127 1074 1128 hid = hid_allocate_device(); 1075 1129 if (IS_ERR(hid)) { ··· 1100 1154 goto err_mem_free; 1101 1155 } 1102 1156 1103 - if (!(ihid->quirks & I2C_HID_QUIRK_NO_RUNTIME_PM)) 1104 - pm_runtime_put(&client->dev); 1105 - 1106 1157 return 0; 1107 1158 1108 1159 err_mem_free: ··· 1107 1164 1108 1165 err_irq: 1109 1166 free_irq(client->irq, ihid); 1110 - 1111 - err_pm: 1112 - pm_runtime_put_noidle(&client->dev); 1113 - pm_runtime_disable(&client->dev); 1114 1167 1115 1168 err_regulator: 1116 1169 regulator_bulk_disable(ARRAY_SIZE(ihid->pdata.supplies), ··· 1119 1180 { 1120 1181 struct i2c_hid *ihid = i2c_get_clientdata(client); 1121 1182 struct hid_device *hid; 1122 - 1123 - if (!(ihid->quirks & I2C_HID_QUIRK_NO_RUNTIME_PM)) 1124 - pm_runtime_get_sync(&client->dev); 1125 - pm_runtime_disable(&client->dev); 1126 - pm_runtime_set_suspended(&client->dev); 1127 - pm_runtime_put_noidle(&client->dev); 1128 1183 1129 1184 hid = ihid->hid; 1130 1185 hid_destroy_device(hid); ··· 1152 1219 int wake_status; 1153 1220 1154 1221 if (hid->driver && hid->driver->suspend) { 1155 - /* 1156 - * Wake up the device so that IO issues in 1157 - * HID driver's suspend code can succeed. 1158 - */ 1159 - ret = pm_runtime_resume(dev); 1160 - if (ret < 0) 1161 - return ret; 1162 - 1163 1222 ret = hid->driver->suspend(hid, PMSG_SUSPEND); 1164 1223 if (ret < 0) 1165 1224 return ret; 1166 1225 } 1167 1226 1168 - if (!pm_runtime_suspended(dev)) { 1169 - /* Save some power */ 1170 - i2c_hid_set_power(client, I2C_HID_PWR_SLEEP); 1227 + /* Save some power */ 1228 + i2c_hid_set_power(client, I2C_HID_PWR_SLEEP); 1171 1229 1172 - disable_irq(client->irq); 1173 - } 1230 + disable_irq(client->irq); 1174 1231 1175 1232 if (device_may_wakeup(&client->dev)) { 1176 1233 wake_status = enable_irq_wake(client->irq); ··· 1202 1279 wake_status); 1203 1280 } 1204 1281 1205 - /* We'll resume to full power */ 1206 - pm_runtime_disable(dev); 1207 - pm_runtime_set_active(dev); 1208 - pm_runtime_enable(dev); 1209 - 1210 1282 enable_irq(client->irq); 1211 1283 1212 1284 /* Instead of resetting device, simply powers the device on. 
This ··· 1222 1304 } 1223 1305 #endif 1224 1306 1225 - #ifdef CONFIG_PM 1226 - static int i2c_hid_runtime_suspend(struct device *dev) 1227 - { 1228 - struct i2c_client *client = to_i2c_client(dev); 1229 - 1230 - i2c_hid_set_power(client, I2C_HID_PWR_SLEEP); 1231 - disable_irq(client->irq); 1232 - return 0; 1233 - } 1234 - 1235 - static int i2c_hid_runtime_resume(struct device *dev) 1236 - { 1237 - struct i2c_client *client = to_i2c_client(dev); 1238 - 1239 - enable_irq(client->irq); 1240 - i2c_hid_set_power(client, I2C_HID_PWR_ON); 1241 - return 0; 1242 - } 1243 - #endif 1244 - 1245 1307 static const struct dev_pm_ops i2c_hid_pm = { 1246 1308 SET_SYSTEM_SLEEP_PM_OPS(i2c_hid_suspend, i2c_hid_resume) 1247 - SET_RUNTIME_PM_OPS(i2c_hid_runtime_suspend, i2c_hid_runtime_resume, 1248 - NULL) 1249 1309 }; 1250 1310 1251 1311 static const struct i2c_device_id i2c_hid_id_table[] = {
+19
drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
··· 323 323 .driver_data = (void *)&sipodev_desc 324 324 }, 325 325 { 326 + /* 327 + * There are at least 2 Primebook C11B versions, the older 328 + * version has a product-name of "Primebook C11B", and a 329 + * bios version / release / firmware revision of: 330 + * V2.1.2 / 05/03/2018 / 18.2 331 + * The new version has "PRIMEBOOK C11B" as product-name and a 332 + * bios version / release / firmware revision of: 333 + * CFALKSW05_BIOS_V1.1.2 / 11/19/2018 / 19.2 334 + * Only the older version needs this quirk, note the newer 335 + * version will not match as it has a different product-name. 336 + */ 337 + .ident = "Trekstor Primebook C11B", 338 + .matches = { 339 + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TREKSTOR"), 340 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Primebook C11B"), 341 + }, 342 + .driver_data = (void *)&sipodev_desc 343 + }, 344 + { 326 345 .ident = "Direkt-Tek DTLAPY116-2", 327 346 .matches = { 328 347 DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Direkt-Tek"),
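The comment explains how the two Primebook C11B revisions are told apart purely by DMI strings. A rough userspace model of that exact-match table walk (the field names and second entry come from the surrounding context; the matching helper itself is invented):

#include <stdio.h>
#include <string.h>

/* Every field listed for an entry must match exactly, mirroring
 * DMI_EXACT_MATCH, so the newer "PRIMEBOOK C11B" boards fall through. */
struct dmi_match { const char *sys_vendor, *product_name; };

static const struct dmi_match quirk_table[] = {
	{ "TREKSTOR", "Primebook C11B" },	/* older revision only */
	{ "Direkt-Tek", "DTLAPY116-2" },
};

static int needs_quirk(const char *vendor, const char *product)
{
	for (size_t i = 0; i < sizeof(quirk_table) / sizeof(quirk_table[0]); i++)
		if (!strcmp(vendor, quirk_table[i].sys_vendor) &&
		    !strcmp(product, quirk_table[i].product_name))
			return 1;
	return 0;
}

int main(void)
{
	printf("%d\n", needs_quirk("TREKSTOR", "Primebook C11B"));	/* 1 */
	printf("%d\n", needs_quirk("TREKSTOR", "PRIMEBOOK C11B"));	/* 0 */
	return 0;
}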
+1 -1
drivers/hid/intel-ish-hid/ishtp/client-buffers.c
··· 84 84 return 0; 85 85 out: 86 86 dev_err(&cl->device->dev, "error in allocating Tx pool\n"); 87 - ishtp_cl_free_rx_ring(cl); 87 + ishtp_cl_free_tx_ring(cl); 88 88 return -ENOMEM; 89 89 } 90 90
+1 -1
drivers/hwmon/ina3221.c
··· 170 170 171 171 /* Polling the CVRF bit to make sure read data is ready */ 172 172 return regmap_field_read_poll_timeout(ina->fields[F_CVRF], 173 - cvrf, cvrf, wait, 100000); 173 + cvrf, cvrf, wait, wait * 2); 174 174 } 175 175 176 176 static int ina3221_read_value(struct ina3221_data *ina, unsigned int reg,
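The ina3221 change derives the poll timeout from the per-sample conversion wait (wait * 2) instead of a blanket 100 ms. A toy model of the regmap_field_read_poll_timeout() contract this relies on, with hypothetical names:

#include <stdio.h>
#include <unistd.h>

/* Rough model of regmap_field_read_poll_timeout(field, val, cond,
 * sleep_us, timeout_us): re-check every sleep_us, give up after
 * timeout_us of accumulated sleeping. */
static int poll_until(int (*ready)(void), unsigned int sleep_us,
		      unsigned int timeout_us)
{
	unsigned int slept = 0;

	while (!ready()) {
		if (slept >= timeout_us)
			return -1;	/* the kernel returns -ETIMEDOUT */
		usleep(sleep_us);
		slept += sleep_us;
	}
	return 0;
}

static int conversion_done(void)
{
	static int polls;

	return ++polls > 1;	/* "CVRF" goes high on the second check */
}

int main(void)
{
	unsigned int wait = 2000;	/* per-sample conversion time, in us */

	/* wait * 2 leaves room for one full conversion plus slack, instead
	 * of the old fixed 100 ms ceiling */
	puts(poll_until(conversion_done, wait, wait * 2) ? "timed out" : "ready");
	return 0;
}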
+12 -3
drivers/hwmon/nct7904.c
··· 82 82 #define FANCTL1_FMR_REG 0x00 /* Bank 3; 1 reg per channel */ 83 83 #define FANCTL1_OUT_REG 0x10 /* Bank 3; 1 reg per channel */ 84 84 85 + #define VOLT_MONITOR_MODE 0x0 86 + #define THERMAL_DIODE_MODE 0x1 87 + #define THERMISTOR_MODE 0x3 88 + 85 89 #define ENABLE_TSI BIT(1) 86 90 87 91 static const unsigned short normal_i2c[] = { ··· 939 935 for (i = 0; i < 4; i++) { 940 936 val = (ret >> (i * 2)) & 0x03; 941 937 bit = (1 << i); 942 - if (val == 0) { 938 + if (val == VOLT_MONITOR_MODE) { 943 939 data->tcpu_mask &= ~bit; 940 + } else if (val == THERMAL_DIODE_MODE && i < 2) { 941 + data->temp_mode |= bit; 942 + data->vsen_mask &= ~(0x06 << (i * 2)); 943 + } else if (val == THERMISTOR_MODE) { 944 + data->vsen_mask &= ~(0x02 << (i * 2)); 944 945 } else { 945 - if (val == 0x1 || val == 0x2) 946 - data->temp_mode |= bit; 946 + /* Reserved */ 947 + data->tcpu_mask &= ~bit; 947 948 data->vsen_mask &= ~(0x06 << (i * 2)); 948 949 } 949 950 }
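Replacing the raw 0x0/0x1/0x3 comparisons with named modes also makes the new reserved-value branch (0x2) explicit: it disables both the CPU-temperature and voltage sensors for that channel. The decode itself is a plain 2-bit field walk, sketched standalone below with an invented register value:

#include <stdio.h>

#define VOLT_MONITOR_MODE  0x0
#define THERMAL_DIODE_MODE 0x1
#define THERMISTOR_MODE    0x3

int main(void)
{
	unsigned int reg = 0xb1;	/* invented packed mode register */

	/* Each channel occupies two bits, as in the nct7904 probe loop */
	for (int i = 0; i < 4; i++) {
		unsigned int val = (reg >> (i * 2)) & 0x03;

		switch (val) {
		case VOLT_MONITOR_MODE:
			printf("ch%d: voltage monitor\n", i);
			break;
		case THERMAL_DIODE_MODE:
			printf("ch%d: thermal diode%s\n", i,
			       i < 2 ? "" : " (only channels 0-1 support this)");
			break;
		case THERMISTOR_MODE:
			printf("ch%d: thermistor\n", i);
			break;
		default:	/* 0x2 is reserved: expose neither sensor */
			printf("ch%d: reserved\n", i);
		}
	}
	return 0;
}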
+1
drivers/infiniband/core/core_priv.h
··· 199 199 int ib_sa_init(void); 200 200 void ib_sa_cleanup(void); 201 201 202 + void rdma_nl_init(void); 202 203 void rdma_nl_exit(void); 203 204 204 205 int ib_nl_handle_resolve_resp(struct sk_buff *skb,
+2
drivers/infiniband/core/device.c
··· 2716 2716 goto err_comp_unbound; 2717 2717 } 2718 2718 2719 + rdma_nl_init(); 2720 + 2719 2721 ret = addr_init(); 2720 2722 if (ret) { 2721 2723 pr_warn("Could't init IB address resolution\n");
+29 -23
drivers/infiniband/core/iwcm.c
··· 372 372 static void destroy_cm_id(struct iw_cm_id *cm_id) 373 373 { 374 374 struct iwcm_id_private *cm_id_priv; 375 + struct ib_qp *qp; 375 376 unsigned long flags; 376 377 377 378 cm_id_priv = container_of(cm_id, struct iwcm_id_private, id); ··· 390 389 set_bit(IWCM_F_DROP_EVENTS, &cm_id_priv->flags); 391 390 392 391 spin_lock_irqsave(&cm_id_priv->lock, flags); 392 + qp = cm_id_priv->qp; 393 + cm_id_priv->qp = NULL; 394 + 393 395 switch (cm_id_priv->state) { 394 396 case IW_CM_STATE_LISTEN: 395 397 cm_id_priv->state = IW_CM_STATE_DESTROYING; ··· 405 401 cm_id_priv->state = IW_CM_STATE_DESTROYING; 406 402 spin_unlock_irqrestore(&cm_id_priv->lock, flags); 407 403 /* Abrupt close of the connection */ 408 - (void)iwcm_modify_qp_err(cm_id_priv->qp); 404 + (void)iwcm_modify_qp_err(qp); 409 405 spin_lock_irqsave(&cm_id_priv->lock, flags); 410 406 break; 411 407 case IW_CM_STATE_IDLE: ··· 430 426 BUG(); 431 427 break; 432 428 } 433 - if (cm_id_priv->qp) { 434 - cm_id_priv->id.device->ops.iw_rem_ref(cm_id_priv->qp); 435 - cm_id_priv->qp = NULL; 436 - } 437 429 spin_unlock_irqrestore(&cm_id_priv->lock, flags); 430 + if (qp) 431 + cm_id_priv->id.device->ops.iw_rem_ref(qp); 438 432 439 433 if (cm_id->mapped) { 440 434 iwpm_remove_mapinfo(&cm_id->local_addr, &cm_id->m_local_addr); ··· 673 671 BUG_ON(cm_id_priv->state != IW_CM_STATE_CONN_RECV); 674 672 cm_id_priv->state = IW_CM_STATE_IDLE; 675 673 spin_lock_irqsave(&cm_id_priv->lock, flags); 676 - if (cm_id_priv->qp) { 677 - cm_id->device->ops.iw_rem_ref(qp); 678 - cm_id_priv->qp = NULL; 679 - } 674 + qp = cm_id_priv->qp; 675 + cm_id_priv->qp = NULL; 680 676 spin_unlock_irqrestore(&cm_id_priv->lock, flags); 677 + if (qp) 678 + cm_id->device->ops.iw_rem_ref(qp); 681 679 clear_bit(IWCM_F_CONNECT_WAIT, &cm_id_priv->flags); 682 680 wake_up_all(&cm_id_priv->connect_wait); 683 681 } ··· 698 696 struct iwcm_id_private *cm_id_priv; 699 697 int ret; 700 698 unsigned long flags; 701 - struct ib_qp *qp; 699 + struct ib_qp *qp = NULL; 702 700 703 701 cm_id_priv = container_of(cm_id, struct iwcm_id_private, id); 704 702 ··· 732 730 return 0; /* success */ 733 731 734 732 spin_lock_irqsave(&cm_id_priv->lock, flags); 735 - if (cm_id_priv->qp) { 736 - cm_id->device->ops.iw_rem_ref(qp); 737 - cm_id_priv->qp = NULL; 738 - } 733 + qp = cm_id_priv->qp; 734 + cm_id_priv->qp = NULL; 739 735 cm_id_priv->state = IW_CM_STATE_IDLE; 740 736 err: 741 737 spin_unlock_irqrestore(&cm_id_priv->lock, flags); 738 + if (qp) 739 + cm_id->device->ops.iw_rem_ref(qp); 742 740 clear_bit(IWCM_F_CONNECT_WAIT, &cm_id_priv->flags); 743 741 wake_up_all(&cm_id_priv->connect_wait); 744 742 return ret; ··· 880 878 static int cm_conn_rep_handler(struct iwcm_id_private *cm_id_priv, 881 879 struct iw_cm_event *iw_event) 882 880 { 881 + struct ib_qp *qp = NULL; 883 882 unsigned long flags; 884 883 int ret; 885 884 ··· 899 896 cm_id_priv->state = IW_CM_STATE_ESTABLISHED; 900 897 } else { 901 898 /* REJECTED or RESET */ 902 - cm_id_priv->id.device->ops.iw_rem_ref(cm_id_priv->qp); 899 + qp = cm_id_priv->qp; 903 900 cm_id_priv->qp = NULL; 904 901 cm_id_priv->state = IW_CM_STATE_IDLE; 905 902 } 906 903 spin_unlock_irqrestore(&cm_id_priv->lock, flags); 904 + if (qp) 905 + cm_id_priv->id.device->ops.iw_rem_ref(qp); 907 906 ret = cm_id_priv->id.cm_handler(&cm_id_priv->id, iw_event); 908 907 909 908 if (iw_event->private_data_len) ··· 947 942 static int cm_close_handler(struct iwcm_id_private *cm_id_priv, 948 943 struct iw_cm_event *iw_event) 949 944 { 945 + struct ib_qp *qp; 950 946 unsigned long flags; 951 
- int ret = 0; 947 + int ret = 0, notify_event = 0; 952 948 spin_lock_irqsave(&cm_id_priv->lock, flags); 949 + qp = cm_id_priv->qp; 950 + cm_id_priv->qp = NULL; 953 951 954 - if (cm_id_priv->qp) { 955 - cm_id_priv->id.device->ops.iw_rem_ref(cm_id_priv->qp); 956 - cm_id_priv->qp = NULL; 957 - } 958 952 switch (cm_id_priv->state) { 959 953 case IW_CM_STATE_ESTABLISHED: 960 954 case IW_CM_STATE_CLOSING: 961 955 cm_id_priv->state = IW_CM_STATE_IDLE; 962 - spin_unlock_irqrestore(&cm_id_priv->lock, flags); 963 - ret = cm_id_priv->id.cm_handler(&cm_id_priv->id, iw_event); 964 - spin_lock_irqsave(&cm_id_priv->lock, flags); 956 + notify_event = 1; 965 957 break; 966 958 case IW_CM_STATE_DESTROYING: 967 959 break; ··· 967 965 } 968 966 spin_unlock_irqrestore(&cm_id_priv->lock, flags); 969 967 968 + if (qp) 969 + cm_id_priv->id.device->ops.iw_rem_ref(qp); 970 + if (notify_event) 971 + ret = cm_id_priv->id.cm_handler(&cm_id_priv->id, iw_event); 970 972 return ret; 971 973 } 972 974
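The recurring shape of the iwcm.c change: snapshot and clear cm_id_priv->qp while holding the spinlock, then call iw_rem_ref() only after unlocking, so the final release never runs in atomic context. A toy pthread version of that snapshot-under-lock pattern (all names invented; build with cc -pthread):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct conn {
	pthread_mutex_t lock;
	int *qp;	/* stands in for struct ib_qp * */
};

static void put_qp(int *qp)
{
	printf("releasing qp %d outside the lock\n", *qp);
	free(qp);	/* may sleep in the kernel analogue */
}

static void close_conn(struct conn *c)
{
	int *qp;

	pthread_mutex_lock(&c->lock);
	qp = c->qp;	/* snapshot and clear under the lock */
	c->qp = NULL;
	pthread_mutex_unlock(&c->lock);

	if (qp)
		put_qp(qp);
}

int main(void)
{
	struct conn c = { PTHREAD_MUTEX_INITIALIZER, NULL };

	c.qp = malloc(sizeof(*c.qp));
	if (!c.qp)
		return 1;
	*c.qp = 42;
	close_conn(&c);
	close_conn(&c);	/* second close sees NULL and does nothing */
	return 0;
}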
+53 -54
drivers/infiniband/core/netlink.c
··· 42 42 #include <linux/module.h> 43 43 #include "core_priv.h" 44 44 45 - static DEFINE_MUTEX(rdma_nl_mutex); 46 45 static struct { 47 - const struct rdma_nl_cbs *cb_table; 46 + const struct rdma_nl_cbs *cb_table; 47 + /* Synchronizes between ongoing netlink commands and netlink client 48 + * unregistration. 49 + */ 50 + struct rw_semaphore sem; 48 51 } rdma_nl_types[RDMA_NL_NUM_CLIENTS]; 49 52 50 53 bool rdma_nl_chk_listeners(unsigned int group) ··· 78 75 return (op < max_num_ops[type]) ? true : false; 79 76 } 80 77 81 - static bool 82 - is_nl_valid(const struct sk_buff *skb, unsigned int type, unsigned int op) 78 + static const struct rdma_nl_cbs * 79 + get_cb_table(const struct sk_buff *skb, unsigned int type, unsigned int op) 83 80 { 84 81 const struct rdma_nl_cbs *cb_table; 85 - 86 - if (!is_nl_msg_valid(type, op)) 87 - return false; 88 82 89 83 /* 90 84 * Currently only NLDEV client is supporting netlink commands in 91 85 * non init_net net namespace. 92 86 */ 93 87 if (sock_net(skb->sk) != &init_net && type != RDMA_NL_NLDEV) 94 - return false; 88 + return NULL; 95 89 96 - if (!rdma_nl_types[type].cb_table) { 97 - mutex_unlock(&rdma_nl_mutex); 90 + cb_table = READ_ONCE(rdma_nl_types[type].cb_table); 91 + if (!cb_table) { 92 + /* 93 + * Didn't get valid reference of the table, attempt module 94 + * load once. 95 + */ 96 + up_read(&rdma_nl_types[type].sem); 97 + 98 98 request_module("rdma-netlink-subsys-%d", type); 99 - mutex_lock(&rdma_nl_mutex); 99 + 100 + down_read(&rdma_nl_types[type].sem); 101 + cb_table = READ_ONCE(rdma_nl_types[type].cb_table); 100 102 } 101 - 102 - cb_table = rdma_nl_types[type].cb_table; 103 - 104 103 if (!cb_table || (!cb_table[op].dump && !cb_table[op].doit)) 105 - return false; 106 - return true; 104 + return NULL; 105 + return cb_table; 107 106 } 108 107 109 108 void rdma_nl_register(unsigned int index, 110 109 const struct rdma_nl_cbs cb_table[]) 111 110 { 112 - mutex_lock(&rdma_nl_mutex); 113 - if (!is_nl_msg_valid(index, 0)) { 114 - /* 115 - * All clients are not interesting in success/failure of 116 - * this call. They want to see the print to error log and 117 - * continue their initialization. Print warning for them, 118 - * because it is programmer's error to be here. 
119 - */ 120 - mutex_unlock(&rdma_nl_mutex); 121 - WARN(true, 122 - "The not-valid %u index was supplied to RDMA netlink\n", 123 - index); 111 + if (WARN_ON(!is_nl_msg_valid(index, 0)) || 112 + WARN_ON(READ_ONCE(rdma_nl_types[index].cb_table))) 124 113 return; 125 - } 126 114 127 - if (rdma_nl_types[index].cb_table) { 128 - mutex_unlock(&rdma_nl_mutex); 129 - WARN(true, 130 - "The %u index is already registered in RDMA netlink\n", 131 - index); 132 - return; 133 - } 134 - 135 - rdma_nl_types[index].cb_table = cb_table; 136 - mutex_unlock(&rdma_nl_mutex); 115 + /* Pairs with the READ_ONCE in is_nl_valid() */ 116 + smp_store_release(&rdma_nl_types[index].cb_table, cb_table); 137 117 } 138 118 EXPORT_SYMBOL(rdma_nl_register); 139 119 140 120 void rdma_nl_unregister(unsigned int index) 141 121 { 142 - mutex_lock(&rdma_nl_mutex); 122 + down_write(&rdma_nl_types[index].sem); 143 123 rdma_nl_types[index].cb_table = NULL; 144 - mutex_unlock(&rdma_nl_mutex); 124 + up_write(&rdma_nl_types[index].sem); 145 125 } 146 126 EXPORT_SYMBOL(rdma_nl_unregister); 147 127 ··· 156 170 unsigned int index = RDMA_NL_GET_CLIENT(type); 157 171 unsigned int op = RDMA_NL_GET_OP(type); 158 172 const struct rdma_nl_cbs *cb_table; 173 + int err = -EINVAL; 159 174 160 - if (!is_nl_valid(skb, index, op)) 175 + if (!is_nl_msg_valid(index, op)) 161 176 return -EINVAL; 162 177 163 - cb_table = rdma_nl_types[index].cb_table; 178 + down_read(&rdma_nl_types[index].sem); 179 + cb_table = get_cb_table(skb, index, op); 180 + if (!cb_table) 181 + goto done; 164 182 165 183 if ((cb_table[op].flags & RDMA_NL_ADMIN_PERM) && 166 - !netlink_capable(skb, CAP_NET_ADMIN)) 167 - return -EPERM; 184 + !netlink_capable(skb, CAP_NET_ADMIN)) { 185 + err = -EPERM; 186 + goto done; 187 + } 168 188 169 189 /* 170 190 * LS responses overload the 0x100 (NLM_F_ROOT) flag. Don't ··· 178 186 */ 179 187 if (index == RDMA_NL_LS) { 180 188 if (cb_table[op].doit) 181 - return cb_table[op].doit(skb, nlh, extack); 182 - return -EINVAL; 189 + err = cb_table[op].doit(skb, nlh, extack); 190 + goto done; 183 191 } 184 192 /* FIXME: Convert IWCM to properly handle doit callbacks */ 185 193 if ((nlh->nlmsg_flags & NLM_F_DUMP) || index == RDMA_NL_IWCM) { ··· 187 195 .dump = cb_table[op].dump, 188 196 }; 189 197 if (c.dump) 190 - return netlink_dump_start(skb->sk, skb, nlh, &c); 191 - return -EINVAL; 198 + err = netlink_dump_start(skb->sk, skb, nlh, &c); 199 + goto done; 192 200 } 193 201 194 202 if (cb_table[op].doit) 195 - return cb_table[op].doit(skb, nlh, extack); 196 - 197 - return 0; 203 + err = cb_table[op].doit(skb, nlh, extack); 204 + done: 205 + up_read(&rdma_nl_types[index].sem); 206 + return err; 198 207 } 199 208 200 209 /* ··· 256 263 257 264 static void rdma_nl_rcv(struct sk_buff *skb) 258 265 { 259 - mutex_lock(&rdma_nl_mutex); 260 266 rdma_nl_rcv_skb(skb, &rdma_nl_rcv_msg); 261 - mutex_unlock(&rdma_nl_mutex); 262 267 } 263 268 264 269 int rdma_nl_unicast(struct net *net, struct sk_buff *skb, u32 pid) ··· 287 296 return nlmsg_multicast(rnet->nl_sock, skb, 0, group, flags); 288 297 } 289 298 EXPORT_SYMBOL(rdma_nl_multicast); 299 + 300 + void rdma_nl_init(void) 301 + { 302 + int idx; 303 + 304 + for (idx = 0; idx < RDMA_NL_NUM_CLIENTS; idx++) 305 + init_rwsem(&rdma_nl_types[idx].sem); 306 + } 290 307 291 308 void rdma_nl_exit(void) 292 309 {
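The netlink.c rework trades one global mutex for a per-client rw_semaphore: commands run under the read side, unregistration takes the write side, and a missing table triggers request_module() with the lock dropped, followed by a single re-check. A compact pthread sketch of that lookup-retry shape, not the kernel implementation (build with cc -pthread):

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t sem = PTHREAD_RWLOCK_INITIALIZER;
static const char *cb_table;	/* NULL until the "module" registers it */

static void request_module(void)
{
	pthread_rwlock_wrlock(&sem);	/* registration excludes readers */
	cb_table = "nldev-ops";
	pthread_rwlock_unlock(&sem);
}

/* Returns with the read lock held, like the table use in rdma_nl_rcv_msg():
 * holding it is what lets unregistration wait out in-flight commands. */
static const char *get_cb_table_locked(void)
{
	const char *t;

	pthread_rwlock_rdlock(&sem);
	t = cb_table;
	if (!t) {
		/* Never block on a module load while holding the lock */
		pthread_rwlock_unlock(&sem);
		request_module();
		pthread_rwlock_rdlock(&sem);
		t = cb_table;
	}
	return t;
}

int main(void)
{
	const char *t = get_cb_table_locked();

	printf("table: %s\n", t ? t : "(none)");
	pthread_rwlock_unlock(&sem);	/* done: unregister may proceed */
	return 0;
}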
+1 -1
drivers/infiniband/core/nldev.c
··· 778 778 container_of(res, struct rdma_counter, res); 779 779 780 780 if (port && port != counter->port) 781 - return 0; 781 + return -EAGAIN; 782 782 783 783 /* Dump it even query failed */ 784 784 rdma_counter_query_stats(counter);
+1 -1
drivers/infiniband/core/uverbs.h
··· 98 98 99 99 struct ib_uverbs_device { 100 100 atomic_t refcount; 101 - int num_comp_vectors; 101 + u32 num_comp_vectors; 102 102 struct completion comp; 103 103 struct device dev; 104 104 /* First group for device attributes, NULL terminated array */
+5 -4
drivers/infiniband/core/verbs.c
··· 662 662 void *context) 663 663 { 664 664 struct find_gid_index_context *ctx = context; 665 + u16 vlan_id = 0xffff; 666 + int ret; 665 667 666 668 if (ctx->gid_type != gid_attr->gid_type) 667 669 return false; 668 670 669 - if ((!!(ctx->vlan_id != 0xffff) == !is_vlan_dev(gid_attr->ndev)) || 670 - (is_vlan_dev(gid_attr->ndev) && 671 - vlan_dev_vlan_id(gid_attr->ndev) != ctx->vlan_id)) 671 + ret = rdma_read_gid_l2_fields(gid_attr, &vlan_id, NULL); 672 + if (ret) 672 673 return false; 673 674 674 - return true; 675 + return ctx->vlan_id == vlan_id; 675 676 } 676 677 677 678 static const struct ib_gid_attr *
+14 -16
drivers/infiniband/hw/cxgb4/cm.c
··· 495 495 496 496 ep = *((struct c4iw_ep **)(skb->cb + 2 * sizeof(void *))); 497 497 release_ep_resources(ep); 498 - kfree_skb(skb); 499 498 return 0; 500 499 } 501 500 ··· 505 506 ep = *((struct c4iw_ep **)(skb->cb + 2 * sizeof(void *))); 506 507 c4iw_put_ep(&ep->parent_ep->com); 507 508 release_ep_resources(ep); 508 - kfree_skb(skb); 509 509 return 0; 510 510 } 511 511 ··· 2422 2424 enum chip_type adapter_type = ep->com.dev->rdev.lldi.adapter_type; 2423 2425 2424 2426 pr_debug("ep %p tid %u\n", ep, ep->hwtid); 2425 - 2426 - skb_get(skb); 2427 - rpl = cplhdr(skb); 2428 - if (!is_t4(adapter_type)) { 2429 - skb_trim(skb, roundup(sizeof(*rpl5), 16)); 2430 - rpl5 = (void *)rpl; 2431 - INIT_TP_WR(rpl5, ep->hwtid); 2432 - } else { 2433 - skb_trim(skb, sizeof(*rpl)); 2434 - INIT_TP_WR(rpl, ep->hwtid); 2435 - } 2436 - OPCODE_TID(rpl) = cpu_to_be32(MK_OPCODE_TID(CPL_PASS_ACCEPT_RPL, 2437 - ep->hwtid)); 2438 - 2439 2427 cxgb_best_mtu(ep->com.dev->rdev.lldi.mtus, ep->mtu, &mtu_idx, 2440 2428 enable_tcp_timestamps && req->tcpopt.tstamp, 2441 2429 (ep->com.remote_addr.ss_family == AF_INET) ? 0 : 1); ··· 2467 2483 if (tcph->ece && tcph->cwr) 2468 2484 opt2 |= CCTRL_ECN_V(1); 2469 2485 } 2486 + 2487 + skb_get(skb); 2488 + rpl = cplhdr(skb); 2489 + if (!is_t4(adapter_type)) { 2490 + skb_trim(skb, roundup(sizeof(*rpl5), 16)); 2491 + rpl5 = (void *)rpl; 2492 + INIT_TP_WR(rpl5, ep->hwtid); 2493 + } else { 2494 + skb_trim(skb, sizeof(*rpl)); 2495 + INIT_TP_WR(rpl, ep->hwtid); 2496 + } 2497 + OPCODE_TID(rpl) = cpu_to_be32(MK_OPCODE_TID(CPL_PASS_ACCEPT_RPL, 2498 + ep->hwtid)); 2499 + 2470 2500 if (CHELSIO_CHIP_VERSION(adapter_type) > CHELSIO_T4) { 2471 2501 u32 isn = (prandom_u32() & ~7UL) - 1; 2472 2502 opt2 |= T5_OPT_2_VALID_F;
+3 -2
drivers/infiniband/hw/hfi1/sdma.c
··· 65 65 #define SDMA_DESCQ_CNT 2048 66 66 #define SDMA_DESC_INTR 64 67 67 #define INVALID_TAIL 0xffff 68 + #define SDMA_PAD max_t(size_t, MAX_16B_PADDING, sizeof(u32)) 68 69 69 70 static uint sdma_descq_cnt = SDMA_DESCQ_CNT; 70 71 module_param(sdma_descq_cnt, uint, S_IRUGO); ··· 1297 1296 struct sdma_engine *sde; 1298 1297 1299 1298 if (dd->sdma_pad_dma) { 1300 - dma_free_coherent(&dd->pcidev->dev, 4, 1299 + dma_free_coherent(&dd->pcidev->dev, SDMA_PAD, 1301 1300 (void *)dd->sdma_pad_dma, 1302 1301 dd->sdma_pad_phys); 1303 1302 dd->sdma_pad_dma = NULL; ··· 1492 1491 } 1493 1492 1494 1493 /* Allocate memory for pad */ 1495 - dd->sdma_pad_dma = dma_alloc_coherent(&dd->pcidev->dev, sizeof(u32), 1494 + dd->sdma_pad_dma = dma_alloc_coherent(&dd->pcidev->dev, SDMA_PAD, 1496 1495 &dd->sdma_pad_phys, GFP_KERNEL); 1497 1496 if (!dd->sdma_pad_dma) { 1498 1497 dd_dev_err(dd, "failed to allocate SendDMA pad memory\n");
-5
drivers/infiniband/hw/hfi1/tid_rdma.c
··· 2736 2736 diff = cmp_psn(psn, 2737 2737 flow->flow_state.r_next_psn); 2738 2738 if (diff > 0) { 2739 - if (!(qp->r_flags & RVT_R_RDMAR_SEQ)) 2740 - restart_tid_rdma_read_req(rcd, 2741 - qp, 2742 - wqe); 2743 - 2744 2739 /* Drop the packet.*/ 2745 2740 goto s_unlock; 2746 2741 } else if (diff < 0) {
+4 -6
drivers/infiniband/hw/hfi1/verbs.c
··· 147 147 /* Length of buffer to create verbs txreq cache name */ 148 148 #define TXREQ_NAME_LEN 24 149 149 150 - /* 16B trailing buffer */ 151 - static const u8 trail_buf[MAX_16B_PADDING]; 152 - 153 150 static uint wss_threshold = 80; 154 151 module_param(wss_threshold, uint, S_IRUGO); 155 152 MODULE_PARM_DESC(wss_threshold, "Percentage (1-100) of LLC to use as a threshold for a cacheless copy"); ··· 817 820 818 821 /* add icrc, lt byte, and padding to flit */ 819 822 if (extra_bytes) 820 - ret = sdma_txadd_kvaddr(sde->dd, &tx->txreq, 821 - (void *)trail_buf, extra_bytes); 823 + ret = sdma_txadd_daddr(sde->dd, &tx->txreq, 824 + sde->dd->sdma_pad_phys, extra_bytes); 822 825 823 826 bail_txadd: 824 827 return ret; ··· 1086 1089 } 1087 1090 /* add icrc, lt byte, and padding to flit */ 1088 1091 if (extra_bytes) 1089 - seg_pio_copy_mid(pbuf, trail_buf, extra_bytes); 1092 + seg_pio_copy_mid(pbuf, ppd->dd->sdma_pad_dma, 1093 + extra_bytes); 1090 1094 1091 1095 seg_pio_copy_end(pbuf); 1092 1096 }
+3 -3
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 5389 5389 return; 5390 5390 } 5391 5391 5392 - if (eq->buf_list) 5393 - dma_free_coherent(hr_dev->dev, buf_chk_sz, 5394 - eq->buf_list->buf, eq->buf_list->map); 5392 + dma_free_coherent(hr_dev->dev, buf_chk_sz, eq->buf_list->buf, 5393 + eq->buf_list->map); 5394 + kfree(eq->buf_list); 5395 5395 } 5396 5396 5397 5397 static void hns_roce_config_eqc(struct hns_roce_dev *hr_dev,
+2 -2
drivers/infiniband/hw/mlx5/mr.c
··· 1967 1967 int err; 1968 1968 1969 1969 if (IS_ENABLED(CONFIG_INFINIBAND_ON_DEMAND_PAGING)) { 1970 - xa_erase(&dev->mdev->priv.mkey_table, 1971 - mlx5_base_mkey(mmw->mmkey.key)); 1970 + xa_erase_irq(&dev->mdev->priv.mkey_table, 1971 + mlx5_base_mkey(mmw->mmkey.key)); 1972 1972 /* 1973 1973 * pagefault_single_data_segment() may be accessing mmw under 1974 1974 * SRCU if the user bound an ODP MR to this MW.
+5 -3
drivers/infiniband/hw/mlx5/qp.c
··· 3249 3249 } 3250 3250 3251 3251 /* Only remove the old rate after new rate was set */ 3252 - if ((old_rl.rate && 3253 - !mlx5_rl_are_equal(&old_rl, &new_rl)) || 3254 - (new_state != MLX5_SQC_STATE_RDY)) 3252 + if ((old_rl.rate && !mlx5_rl_are_equal(&old_rl, &new_rl)) || 3253 + (new_state != MLX5_SQC_STATE_RDY)) { 3255 3254 mlx5_rl_remove_rate(dev, &old_rl); 3255 + if (new_state != MLX5_SQC_STATE_RDY) 3256 + memset(&new_rl, 0, sizeof(new_rl)); 3257 + } 3256 3258 3257 3259 ibqp->rl = new_rl; 3258 3260 sq->state = new_state;
+1 -1
drivers/infiniband/hw/qedr/main.c
··· 76 76 struct qedr_dev *qedr = get_qedr_dev(ibdev); 77 77 u32 fw_ver = (u32)qedr->attr.fw_ver; 78 78 79 - snprintf(str, IB_FW_VERSION_NAME_MAX, "%d. %d. %d. %d", 79 + snprintf(str, IB_FW_VERSION_NAME_MAX, "%d.%d.%d.%d", 80 80 (fw_ver >> 24) & 0xFF, (fw_ver >> 16) & 0xFF, 81 81 (fw_ver >> 8) & 0xFF, fw_ver & 0xFF); 82 82 }
+2
drivers/infiniband/sw/siw/siw_qp.c
··· 1312 1312 void siw_free_qp(struct kref *ref) 1313 1313 { 1314 1314 struct siw_qp *found, *qp = container_of(ref, struct siw_qp, ref); 1315 + struct siw_base_qp *siw_base_qp = to_siw_base_qp(qp->ib_qp); 1315 1316 struct siw_device *sdev = qp->sdev; 1316 1317 unsigned long flags; 1317 1318 ··· 1335 1334 atomic_dec(&sdev->num_qp); 1336 1335 siw_dbg_qp(qp, "free QP\n"); 1337 1336 kfree_rcu(qp, rcu); 1337 + kfree(siw_base_qp); 1338 1338 }
-2
drivers/infiniband/sw/siw/siw_verbs.c
··· 604 604 int siw_destroy_qp(struct ib_qp *base_qp, struct ib_udata *udata) 605 605 { 606 606 struct siw_qp *qp = to_siw_qp(base_qp); 607 - struct siw_base_qp *siw_base_qp = to_siw_base_qp(base_qp); 608 607 struct siw_ucontext *uctx = 609 608 rdma_udata_to_drv_context(udata, struct siw_ucontext, 610 609 base_ucontext); ··· 640 641 qp->scq = qp->rcq = NULL; 641 642 642 643 siw_qp_put(qp); 643 - kfree(siw_base_qp); 644 644 645 645 return 0; 646 646 }
+13
drivers/iommu/amd_iommu_quirks.c
··· 74 74 .driver_data = (void *)&ivrs_ioapic_quirks[DELL_LATITUDE_5495], 75 75 }, 76 76 { 77 + /* 78 + * Acer Aspire A315-41 requires the very same workaround as 79 + * Dell Latitude 5495 80 + */ 81 + .callback = ivrs_ioapic_quirk_cb, 82 + .ident = "Acer Aspire A315-41", 83 + .matches = { 84 + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), 85 + DMI_MATCH(DMI_PRODUCT_NAME, "Aspire A315-41"), 86 + }, 87 + .driver_data = (void *)&ivrs_ioapic_quirks[DELL_LATITUDE_5495], 88 + }, 89 + { 77 90 .callback = ivrs_ioapic_quirk_cb, 78 91 .ident = "Lenovo ideapad 330S-15ARR", 79 92 .matches = {
+1 -1
drivers/iommu/intel-iommu.c
··· 2794 2794 struct device_domain_info *info; 2795 2795 2796 2796 info = dev->archdata.iommu; 2797 - if (info && info != DUMMY_DEVICE_DOMAIN_INFO) 2797 + if (info && info != DUMMY_DEVICE_DOMAIN_INFO && info != DEFER_DEVICE_DOMAIN_INFO) 2798 2798 return (info->domain == si_domain); 2799 2799 2800 2800 return 0;
+1 -3
drivers/iommu/ipmmu-vmsa.c
··· 1105 1105 /* Root devices have mandatory IRQs */ 1106 1106 if (ipmmu_is_root(mmu)) { 1107 1107 irq = platform_get_irq(pdev, 0); 1108 - if (irq < 0) { 1109 - dev_err(&pdev->dev, "no IRQ found\n"); 1108 + if (irq < 0) 1110 1109 return irq; 1111 - } 1112 1110 1113 1111 ret = devm_request_irq(&pdev->dev, irq, ipmmu_irq, 0, 1114 1112 dev_name(&pdev->dev), mmu);
+1 -1
drivers/isdn/capi/capi.c
··· 744 744 745 745 poll_wait(file, &(cdev->recvwait), wait); 746 746 mask = EPOLLOUT | EPOLLWRNORM; 747 - if (!skb_queue_empty(&cdev->recvqueue)) 747 + if (!skb_queue_empty_lockless(&cdev->recvqueue)) 748 748 mask |= EPOLLIN | EPOLLRDNORM; 749 749 return mask; 750 750 }
+1 -1
drivers/net/bonding/bond_alb.c
··· 952 952 struct bond_vlan_tag *tags; 953 953 954 954 if (is_vlan_dev(upper) && 955 - bond->nest_level == vlan_get_encap_level(upper) - 1) { 955 + bond->dev->lower_level == upper->lower_level - 1) { 956 956 if (upper->addr_assign_type == NET_ADDR_STOLEN) { 957 957 alb_send_lp_vid(slave, mac_addr, 958 958 vlan_dev_vlan_proto(upper),
+9 -19
drivers/net/bonding/bond_main.c
··· 1733 1733 goto err_upper_unlink; 1734 1734 } 1735 1735 1736 - bond->nest_level = dev_get_nest_level(bond_dev) + 1; 1737 - 1738 1736 /* If the mode uses primary, then the following is handled by 1739 1737 * bond_change_active_slave(). 1740 1738 */ ··· 1814 1816 slave_disable_netpoll(new_slave); 1815 1817 1816 1818 err_close: 1817 - slave_dev->priv_flags &= ~IFF_BONDING; 1819 + if (!netif_is_bond_master(slave_dev)) 1820 + slave_dev->priv_flags &= ~IFF_BONDING; 1818 1821 dev_close(slave_dev); 1819 1822 1820 1823 err_restore_mac: ··· 1955 1956 if (!bond_has_slaves(bond)) { 1956 1957 bond_set_carrier(bond); 1957 1958 eth_hw_addr_random(bond_dev); 1958 - bond->nest_level = SINGLE_DEPTH_NESTING; 1959 - } else { 1960 - bond->nest_level = dev_get_nest_level(bond_dev) + 1; 1961 1959 } 1962 1960 1963 1961 unblock_netpoll_tx(); ··· 2013 2017 else 2014 2018 dev_set_mtu(slave_dev, slave->original_mtu); 2015 2019 2016 - slave_dev->priv_flags &= ~IFF_BONDING; 2020 + if (!netif_is_bond_master(slave_dev)) 2021 + slave_dev->priv_flags &= ~IFF_BONDING; 2017 2022 2018 2023 bond_free_slave(slave); 2019 2024 ··· 3439 3442 } 3440 3443 } 3441 3444 3442 - static int bond_get_nest_level(struct net_device *bond_dev) 3443 - { 3444 - struct bonding *bond = netdev_priv(bond_dev); 3445 - 3446 - return bond->nest_level; 3447 - } 3448 - 3449 3445 static void bond_get_stats(struct net_device *bond_dev, 3450 3446 struct rtnl_link_stats64 *stats) 3451 3447 { ··· 3447 3457 struct list_head *iter; 3448 3458 struct slave *slave; 3449 3459 3450 - spin_lock_nested(&bond->stats_lock, bond_get_nest_level(bond_dev)); 3460 + spin_lock(&bond->stats_lock); 3451 3461 memcpy(stats, &bond->bond_stats, sizeof(*stats)); 3452 3462 3453 3463 rcu_read_lock(); ··· 4258 4268 .ndo_neigh_setup = bond_neigh_setup, 4259 4269 .ndo_vlan_rx_add_vid = bond_vlan_rx_add_vid, 4260 4270 .ndo_vlan_rx_kill_vid = bond_vlan_rx_kill_vid, 4261 - .ndo_get_lock_subclass = bond_get_nest_level, 4262 4271 #ifdef CONFIG_NET_POLL_CONTROLLER 4263 4272 .ndo_netpoll_setup = bond_netpoll_setup, 4264 4273 .ndo_netpoll_cleanup = bond_netpoll_cleanup, ··· 4285 4296 struct bonding *bond = netdev_priv(bond_dev); 4286 4297 4287 4298 spin_lock_init(&bond->mode_lock); 4288 - spin_lock_init(&bond->stats_lock); 4289 4299 bond->params = bonding_defaults; 4290 4300 4291 4301 /* Initialize pointers */ ··· 4353 4365 4354 4366 list_del(&bond->bond_list); 4355 4367 4368 + lockdep_unregister_key(&bond->stats_lock_key); 4356 4369 bond_debug_unregister(bond); 4357 4370 } 4358 4371 ··· 4757 4768 if (!bond->wq) 4758 4769 return -ENOMEM; 4759 4770 4760 - bond->nest_level = SINGLE_DEPTH_NESTING; 4761 - netdev_lockdep_set_classes(bond_dev); 4771 + spin_lock_init(&bond->stats_lock); 4772 + lockdep_register_key(&bond->stats_lock_key); 4773 + lockdep_set_class(&bond->stats_lock, &bond->stats_lock_key); 4762 4774 4763 4775 list_add_tail(&bond->bond_list, &bn->dev_list); 4764 4776
+21 -15
drivers/net/dsa/bcm_sf2.c
··· 37 37 unsigned int i; 38 38 u32 reg, offset; 39 39 40 - if (priv->type == BCM7445_DEVICE_ID) 41 - offset = CORE_STS_OVERRIDE_IMP; 42 - else 43 - offset = CORE_STS_OVERRIDE_IMP2; 44 - 45 40 /* Enable the port memories */ 46 41 reg = core_readl(priv, CORE_MEM_PSM_VDD_CTRL); 47 42 reg &= ~P_TXQ_PSM_VDD(port); 48 43 core_writel(priv, reg, CORE_MEM_PSM_VDD_CTRL); 49 - 50 - /* Enable Broadcast, Multicast, Unicast forwarding to IMP port */ 51 - reg = core_readl(priv, CORE_IMP_CTL); 52 - reg |= (RX_BCST_EN | RX_MCST_EN | RX_UCST_EN); 53 - reg &= ~(RX_DIS | TX_DIS); 54 - core_writel(priv, reg, CORE_IMP_CTL); 55 44 56 45 /* Enable forwarding */ 57 46 core_writel(priv, SW_FWDG_EN, CORE_SWMODE); ··· 60 71 61 72 b53_brcm_hdr_setup(ds, port); 62 73 63 - /* Force link status for IMP port */ 64 - reg = core_readl(priv, offset); 65 - reg |= (MII_SW_OR | LINK_STS); 66 - core_writel(priv, reg, offset); 74 + if (port == 8) { 75 + if (priv->type == BCM7445_DEVICE_ID) 76 + offset = CORE_STS_OVERRIDE_IMP; 77 + else 78 + offset = CORE_STS_OVERRIDE_IMP2; 79 + 80 + /* Force link status for IMP port */ 81 + reg = core_readl(priv, offset); 82 + reg |= (MII_SW_OR | LINK_STS); 83 + core_writel(priv, reg, offset); 84 + 85 + /* Enable Broadcast, Multicast, Unicast forwarding to IMP port */ 86 + reg = core_readl(priv, CORE_IMP_CTL); 87 + reg |= (RX_BCST_EN | RX_MCST_EN | RX_UCST_EN); 88 + reg &= ~(RX_DIS | TX_DIS); 89 + core_writel(priv, reg, CORE_IMP_CTL); 90 + } else { 91 + reg = core_readl(priv, CORE_G_PCTL_PORT(port)); 92 + reg &= ~(RX_DIS | TX_DIS); 93 + core_writel(priv, reg, CORE_G_PCTL_PORT(port)); 94 + } 67 95 } 68 96 69 97 static void bcm_sf2_gphy_enable_set(struct dsa_switch *ds, bool enable)
+2 -2
drivers/net/dsa/sja1105/Kconfig
··· 26 26 27 27 config NET_DSA_SJA1105_TAS 28 28 bool "Support for the Time-Aware Scheduler on NXP SJA1105" 29 - depends on NET_DSA_SJA1105 30 - depends on NET_SCH_TAPRIO 29 + depends on NET_DSA_SJA1105 && NET_SCH_TAPRIO 30 + depends on NET_SCH_TAPRIO=y || NET_DSA_SJA1105=m 31 31 help 32 32 This enables support for the TTEthernet-based egress scheduling 33 33 engine in the SJA1105 DSA driver, which is controlled using a
+3
drivers/net/ethernet/arc/emac_rockchip.c
··· 256 256 if (priv->regulator) 257 257 regulator_disable(priv->regulator); 258 258 259 + if (priv->soc_data->need_div_macclk) 260 + clk_disable_unprepare(priv->macclk); 261 + 259 262 free_netdev(ndev); 260 263 return err; 261 264 }
+4 -6
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 10382 10382 { 10383 10383 bnxt_unmap_bars(bp, bp->pdev); 10384 10384 pci_release_regions(bp->pdev); 10385 - pci_disable_device(bp->pdev); 10385 + if (pci_is_enabled(bp->pdev)) 10386 + pci_disable_device(bp->pdev); 10386 10387 } 10387 10388 10388 10389 static void bnxt_init_dflt_coal(struct bnxt *bp) ··· 10670 10669 bp->fw_reset_state = BNXT_FW_RESET_STATE_RESET_FW; 10671 10670 } 10672 10671 /* fall through */ 10673 - case BNXT_FW_RESET_STATE_RESET_FW: { 10674 - u32 wait_dsecs = bp->fw_health->post_reset_wait_dsecs; 10675 - 10672 + case BNXT_FW_RESET_STATE_RESET_FW: 10676 10673 bnxt_reset_all(bp); 10677 10674 bp->fw_reset_state = BNXT_FW_RESET_STATE_ENABLE_DEV; 10678 - bnxt_queue_fw_reset_work(bp, wait_dsecs * HZ / 10); 10675 + bnxt_queue_fw_reset_work(bp, bp->fw_reset_min_dsecs * HZ / 10); 10679 10676 return; 10680 - } 10681 10677 case BNXT_FW_RESET_STATE_ENABLE_DEV: 10682 10678 if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state) && 10683 10679 bp->fw_health) {
+67 -45
drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
··· 29 29 val = bnxt_fw_health_readl(bp, BNXT_FW_HEALTH_REG); 30 30 health_status = val & 0xffff; 31 31 32 - if (health_status == BNXT_FW_STATUS_HEALTHY) { 33 - rc = devlink_fmsg_string_pair_put(fmsg, "FW status", 34 - "Healthy;"); 35 - if (rc) 36 - return rc; 37 - } else if (health_status < BNXT_FW_STATUS_HEALTHY) { 38 - rc = devlink_fmsg_string_pair_put(fmsg, "FW status", 39 - "Not yet completed initialization;"); 32 + if (health_status < BNXT_FW_STATUS_HEALTHY) { 33 + rc = devlink_fmsg_string_pair_put(fmsg, "Description", 34 + "Not yet completed initialization"); 40 35 if (rc) 41 36 return rc; 42 37 } else if (health_status > BNXT_FW_STATUS_HEALTHY) { 43 - rc = devlink_fmsg_string_pair_put(fmsg, "FW status", 44 - "Encountered fatal error and cannot recover;"); 38 + rc = devlink_fmsg_string_pair_put(fmsg, "Description", 39 + "Encountered fatal error and cannot recover"); 45 40 if (rc) 46 41 return rc; 47 42 } 48 43 49 44 if (val >> 16) { 50 - rc = devlink_fmsg_u32_pair_put(fmsg, "Error", val >> 16); 45 + rc = devlink_fmsg_u32_pair_put(fmsg, "Error code", val >> 16); 51 46 if (rc) 52 47 return rc; 53 48 } ··· 210 215 211 216 static const struct bnxt_dl_nvm_param nvm_params[] = { 212 217 {DEVLINK_PARAM_GENERIC_ID_ENABLE_SRIOV, NVM_OFF_ENABLE_SRIOV, 213 - BNXT_NVM_SHARED_CFG, 1}, 218 + BNXT_NVM_SHARED_CFG, 1, 1}, 214 219 {DEVLINK_PARAM_GENERIC_ID_IGNORE_ARI, NVM_OFF_IGNORE_ARI, 215 - BNXT_NVM_SHARED_CFG, 1}, 220 + BNXT_NVM_SHARED_CFG, 1, 1}, 216 221 {DEVLINK_PARAM_GENERIC_ID_MSIX_VEC_PER_PF_MAX, 217 - NVM_OFF_MSIX_VEC_PER_PF_MAX, BNXT_NVM_SHARED_CFG, 10}, 222 + NVM_OFF_MSIX_VEC_PER_PF_MAX, BNXT_NVM_SHARED_CFG, 10, 4}, 218 223 {DEVLINK_PARAM_GENERIC_ID_MSIX_VEC_PER_PF_MIN, 219 - NVM_OFF_MSIX_VEC_PER_PF_MIN, BNXT_NVM_SHARED_CFG, 7}, 224 + NVM_OFF_MSIX_VEC_PER_PF_MIN, BNXT_NVM_SHARED_CFG, 7, 4}, 220 225 {BNXT_DEVLINK_PARAM_ID_GRE_VER_CHECK, NVM_OFF_DIS_GRE_VER_CHECK, 221 - BNXT_NVM_SHARED_CFG, 1}, 226 + BNXT_NVM_SHARED_CFG, 1, 1}, 222 227 }; 228 + 229 + union bnxt_nvm_data { 230 + u8 val8; 231 + __le32 val32; 232 + }; 233 + 234 + static void bnxt_copy_to_nvm_data(union bnxt_nvm_data *dst, 235 + union devlink_param_value *src, 236 + int nvm_num_bits, int dl_num_bytes) 237 + { 238 + u32 val32 = 0; 239 + 240 + if (nvm_num_bits == 1) { 241 + dst->val8 = src->vbool; 242 + return; 243 + } 244 + if (dl_num_bytes == 4) 245 + val32 = src->vu32; 246 + else if (dl_num_bytes == 2) 247 + val32 = (u32)src->vu16; 248 + else if (dl_num_bytes == 1) 249 + val32 = (u32)src->vu8; 250 + dst->val32 = cpu_to_le32(val32); 251 + } 252 + 253 + static void bnxt_copy_from_nvm_data(union devlink_param_value *dst, 254 + union bnxt_nvm_data *src, 255 + int nvm_num_bits, int dl_num_bytes) 256 + { 257 + u32 val32; 258 + 259 + if (nvm_num_bits == 1) { 260 + dst->vbool = src->val8; 261 + return; 262 + } 263 + val32 = le32_to_cpu(src->val32); 264 + if (dl_num_bytes == 4) 265 + dst->vu32 = val32; 266 + else if (dl_num_bytes == 2) 267 + dst->vu16 = (u16)val32; 268 + else if (dl_num_bytes == 1) 269 + dst->vu8 = (u8)val32; 270 + } 223 271 224 272 static int bnxt_hwrm_nvm_req(struct bnxt *bp, u32 param_id, void *msg, 225 273 int msg_len, union devlink_param_value *val) 226 274 { 227 275 struct hwrm_nvm_get_variable_input *req = msg; 228 - void *data_addr = NULL, *buf = NULL; 229 276 struct bnxt_dl_nvm_param nvm_param; 230 - int bytesize, idx = 0, rc, i; 277 + union bnxt_nvm_data *data; 231 278 dma_addr_t data_dma_addr; 279 + int idx = 0, rc, i; 232 280 233 281 /* Get/Set NVM CFG parameter is supported only on PFs */ 234 282 if 
(BNXT_VF(bp)) ··· 292 254 else if (nvm_param.dir_type == BNXT_NVM_FUNC_CFG) 293 255 idx = bp->pf.fw_fid - BNXT_FIRST_PF_FID; 294 256 295 - bytesize = roundup(nvm_param.num_bits, BITS_PER_BYTE) / BITS_PER_BYTE; 296 - switch (bytesize) { 297 - case 1: 298 - if (nvm_param.num_bits == 1) 299 - buf = &val->vbool; 300 - else 301 - buf = &val->vu8; 302 - break; 303 - case 2: 304 - buf = &val->vu16; 305 - break; 306 - case 4: 307 - buf = &val->vu32; 308 - break; 309 - default: 310 - return -EFAULT; 311 - } 312 - 313 - data_addr = dma_alloc_coherent(&bp->pdev->dev, bytesize, 314 - &data_dma_addr, GFP_KERNEL); 315 - if (!data_addr) 257 + data = dma_alloc_coherent(&bp->pdev->dev, sizeof(*data), 258 + &data_dma_addr, GFP_KERNEL); 259 + if (!data) 316 260 return -ENOMEM; 317 261 318 262 req->dest_data_addr = cpu_to_le64(data_dma_addr); 319 - req->data_len = cpu_to_le16(nvm_param.num_bits); 263 + req->data_len = cpu_to_le16(nvm_param.nvm_num_bits); 320 264 req->option_num = cpu_to_le16(nvm_param.offset); 321 265 req->index_0 = cpu_to_le16(idx); 322 266 if (idx) 323 267 req->dimensions = cpu_to_le16(1); 324 268 325 269 if (req->req_type == cpu_to_le16(HWRM_NVM_SET_VARIABLE)) { 326 - memcpy(data_addr, buf, bytesize); 270 + bnxt_copy_to_nvm_data(data, val, nvm_param.nvm_num_bits, 271 + nvm_param.dl_num_bytes); 327 272 rc = hwrm_send_message(bp, msg, msg_len, HWRM_CMD_TIMEOUT); 328 273 } else { 329 274 rc = hwrm_send_message_silent(bp, msg, msg_len, 330 275 HWRM_CMD_TIMEOUT); 276 + if (!rc) 277 + bnxt_copy_from_nvm_data(val, data, 278 + nvm_param.nvm_num_bits, 279 + nvm_param.dl_num_bytes); 331 280 } 332 - if (!rc && req->req_type == cpu_to_le16(HWRM_NVM_GET_VARIABLE)) 333 - memcpy(buf, data_addr, bytesize); 334 - 335 - dma_free_coherent(&bp->pdev->dev, bytesize, data_addr, data_dma_addr); 281 + dma_free_coherent(&bp->pdev->dev, sizeof(*data), data, data_dma_addr); 336 282 if (rc == -EACCES) 337 283 netdev_err(bp->dev, "PF does not have admin privileges to modify NVM config\n"); 338 284 return rc;
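The new bnxt copy helpers exist because the old code memcpy'd host-order bytes sized by the NVM bit width, which breaks once the devlink byte width and the NVM field width disagree (hence the added dl_num_bytes column in nvm_params). A userspace sketch of the to-wire direction, assuming a glibc/Linux toolchain for htole32():

#include <endian.h>	/* htole32(): assumes a glibc/Linux toolchain */
#include <stdint.h>
#include <stdio.h>

/* The firmware buffer holds either a raw byte (1-bit params) or a
 * little-endian 32-bit slot; devlink supplies natural-width host values. */
union nvm_data {
	uint8_t val8;
	uint32_t val32;	/* little-endian on the wire */
};

static void copy_to_nvm(union nvm_data *dst, uint32_t src,
			int nvm_num_bits, int dl_num_bytes)
{
	if (nvm_num_bits == 1) {
		dst->val8 = (uint8_t)src;
		return;
	}
	/* 1-, 2- and 4-byte devlink values all widen into one LE32 slot */
	dst->val32 = htole32(dl_num_bytes == 4 ? src :
			     dl_num_bytes == 2 ? (uint16_t)src : (uint8_t)src);
}

int main(void)
{
	union nvm_data d = { 0 };
	const uint8_t *p = (const uint8_t *)&d.val32;

	copy_to_nvm(&d, 0x1234, 10, 4);	/* shaped like MSIX_VEC_PER_PF_MAX */
	printf("wire bytes: %02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);
	return 0;
}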
+2 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.h
··· 52 52 u16 id; 53 53 u16 offset; 54 54 u16 dir_type; 55 - u16 num_bits; 55 + u16 nvm_num_bits; 56 + u8 dl_num_bytes; 56 57 }; 57 58 58 59 void bnxt_devlink_health_report(struct bnxt *bp, unsigned long event);
+16 -12
drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
··· 695 695 lld->write_cmpl_support = adap->params.write_cmpl_support; 696 696 } 697 697 698 - static void uld_attach(struct adapter *adap, unsigned int uld) 698 + static int uld_attach(struct adapter *adap, unsigned int uld) 699 699 { 700 - void *handle; 701 700 struct cxgb4_lld_info lli; 701 + void *handle; 702 702 703 703 uld_init(adap, &lli); 704 704 uld_queue_init(adap, uld, &lli); ··· 708 708 dev_warn(adap->pdev_dev, 709 709 "could not attach to the %s driver, error %ld\n", 710 710 adap->uld[uld].name, PTR_ERR(handle)); 711 - return; 711 + return PTR_ERR(handle); 712 712 } 713 713 714 714 adap->uld[uld].handle = handle; ··· 716 716 717 717 if (adap->flags & CXGB4_FULL_INIT_DONE) 718 718 adap->uld[uld].state_change(handle, CXGB4_STATE_UP); 719 + 720 + return 0; 719 721 } 720 722 721 - /** 722 - * cxgb4_register_uld - register an upper-layer driver 723 - * @type: the ULD type 724 - * @p: the ULD methods 723 + /* cxgb4_register_uld - register an upper-layer driver 724 + * @type: the ULD type 725 + * @p: the ULD methods 725 726 * 726 - * Registers an upper-layer driver with this driver and notifies the ULD 727 - * about any presently available devices that support its type. Returns 728 - * %-EBUSY if a ULD of the same type is already registered. 727 + * Registers an upper-layer driver with this driver and notifies the ULD 728 + * about any presently available devices that support its type. 729 729 */ 730 730 void cxgb4_register_uld(enum cxgb4_uld type, 731 731 const struct cxgb4_uld_info *p) 732 732 { 733 - int ret = 0; 734 733 struct adapter *adap; 734 + int ret = 0; 735 735 736 736 if (type >= CXGB4_ULD_MAX) 737 737 return; ··· 763 763 if (ret) 764 764 goto free_irq; 765 765 adap->uld[type] = *p; 766 - uld_attach(adap, type); 766 + ret = uld_attach(adap, type); 767 + if (ret) 768 + goto free_txq; 767 769 continue; 770 + free_txq: 771 + release_sge_txq_uld(adap, type); 768 772 free_irq: 769 773 if (adap->flags & CXGB4_FULL_INIT_DONE) 770 774 quiesce_rx_uld(adap, type);
+2 -6
drivers/net/ethernet/chelsio/cxgb4/sge.c
··· 3791 3791 * write the CIDX Updates into the Status Page at the end of the 3792 3792 * TX Queue. 3793 3793 */ 3794 - c.autoequiqe_to_viid = htonl((dbqt 3795 - ? FW_EQ_ETH_CMD_AUTOEQUIQE_F 3796 - : FW_EQ_ETH_CMD_AUTOEQUEQE_F) | 3794 + c.autoequiqe_to_viid = htonl(FW_EQ_ETH_CMD_AUTOEQUEQE_F | 3797 3795 FW_EQ_ETH_CMD_VIID_V(pi->viid)); 3798 3796 3799 3797 c.fetchszm_to_iqid = 3800 - htonl(FW_EQ_ETH_CMD_HOSTFCMODE_V(dbqt 3801 - ? HOSTFCMODE_INGRESS_QUEUE_X 3802 - : HOSTFCMODE_STATUS_PAGE_X) | 3798 + htonl(FW_EQ_ETH_CMD_HOSTFCMODE_V(HOSTFCMODE_STATUS_PAGE_X) | 3803 3799 FW_EQ_ETH_CMD_PCIECHN_V(pi->tx_chan) | 3804 3800 FW_EQ_ETH_CMD_FETCHRO_F | FW_EQ_ETH_CMD_IQID_V(iqid)); 3805 3801
+1 -1
drivers/net/ethernet/cortina/gemini.h
··· 1 - // SPDX-License-Identifier: GPL-2.0 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* Register definitions for Gemini GMAC Ethernet device driver 3 3 * 4 4 * Copyright (C) 2006 Storlink, Corp.
+12 -13
drivers/net/ethernet/faraday/ftgmac100.c
··· 727 727 */ 728 728 nfrags = skb_shinfo(skb)->nr_frags; 729 729 730 + /* Setup HW checksumming */ 731 + csum_vlan = 0; 732 + if (skb->ip_summed == CHECKSUM_PARTIAL && 733 + !ftgmac100_prep_tx_csum(skb, &csum_vlan)) 734 + goto drop; 735 + 736 + /* Add VLAN tag */ 737 + if (skb_vlan_tag_present(skb)) { 738 + csum_vlan |= FTGMAC100_TXDES1_INS_VLANTAG; 739 + csum_vlan |= skb_vlan_tag_get(skb) & 0xffff; 740 + } 741 + 730 742 /* Get header len */ 731 743 len = skb_headlen(skb); 732 744 ··· 765 753 if (nfrags == 0) 766 754 f_ctl_stat |= FTGMAC100_TXDES0_LTS; 767 755 txdes->txdes3 = cpu_to_le32(map); 768 - 769 - /* Setup HW checksumming */ 770 - csum_vlan = 0; 771 - if (skb->ip_summed == CHECKSUM_PARTIAL && 772 - !ftgmac100_prep_tx_csum(skb, &csum_vlan)) 773 - goto drop; 774 - 775 - /* Add VLAN tag */ 776 - if (skb_vlan_tag_present(skb)) { 777 - csum_vlan |= FTGMAC100_TXDES1_INS_VLANTAG; 778 - csum_vlan |= skb_vlan_tag_get(skb) & 0xffff; 779 - } 780 - 781 756 txdes->txdes1 = cpu_to_le32(csum_vlan); 782 757 783 758 /* Next descriptor */
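The ftgmac100 reorder follows a general driver rule: run every check that may drop the packet (checksum prep, VLAN tag setup) before the first step that needs unwinding, here the DMA mapping of the skb head. A trivial illustration of that validate-before-acquire shape, with invented names:

#include <stdio.h>
#include <stdlib.h>

static int prep_csum(int valid)
{
	return valid;	/* stands in for ftgmac100_prep_tx_csum() */
}

static int xmit(int csum_valid)
{
	void *mapping;

	/* Checks first: nothing to unwind if the packet is dropped here */
	if (!prep_csum(csum_valid)) {
		fprintf(stderr, "dropped before mapping\n");
		return -1;
	}

	mapping = malloc(64);	/* stands in for dma_map_single() */
	if (!mapping)
		return -1;
	printf("mapped and queued\n");
	free(mapping);		/* normally undone on TX completion */
	return 0;
}

int main(void)
{
	xmit(0);
	xmit(1);
	return 0;
}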
+1 -1
drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.h
··· 1 - // SPDX-License-Identifier: GPL-2.0 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* 3 3 * Copyright 2018 NXP 4 4 */
+1 -1
drivers/net/ethernet/freescale/dpaa2/dprtc-cmd.h
··· 1 - // SPDX-License-Identifier: GPL-2.0 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* 3 3 * Copyright 2013-2016 Freescale Semiconductor Inc. 4 4 * Copyright 2016-2018 NXP
+1 -1
drivers/net/ethernet/freescale/dpaa2/dprtc.h
··· 1 - // SPDX-License-Identifier: GPL-2.0 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* 3 3 * Copyright 2013-2016 Freescale Semiconductor Inc. 4 4 * Copyright 2016-2018 NXP
+1 -1
drivers/net/ethernet/freescale/fec_main.c
··· 3558 3558 3559 3559 for (i = 0; i < irq_cnt; i++) { 3560 3560 snprintf(irq_name, sizeof(irq_name), "int%d", i); 3561 - irq = platform_get_irq_byname(pdev, irq_name); 3561 + irq = platform_get_irq_byname_optional(pdev, irq_name); 3562 3562 if (irq < 0) 3563 3563 irq = platform_get_irq(pdev, i); 3564 3564 if (irq < 0) {
+2 -2
drivers/net/ethernet/freescale/fec_ptp.c
··· 600 600 601 601 INIT_DELAYED_WORK(&fep->time_keep, fec_time_keep); 602 602 603 - irq = platform_get_irq_byname(pdev, "pps"); 603 + irq = platform_get_irq_byname_optional(pdev, "pps"); 604 604 if (irq < 0) 605 - irq = platform_get_irq(pdev, irq_idx); 605 + irq = platform_get_irq_optional(pdev, irq_idx); 606 606 /* Failure to get an irq is not fatal, 607 607 * only the PTP_CLOCK_PPS clock events should stop 608 608 */
+2
drivers/net/ethernet/google/gve/gve_rx.c
··· 289 289 290 290 len = be16_to_cpu(rx_desc->len) - GVE_RX_PAD; 291 291 page_info = &rx->data.page_info[idx]; 292 + dma_sync_single_for_cpu(&priv->pdev->dev, rx->data.qpl->page_buses[idx], 293 + PAGE_SIZE, DMA_FROM_DEVICE); 292 294 293 295 /* gvnic can only receive into registered segments. If the buffer 294 296 * can't be recycled, our only choice is to copy the data out of
+22 -2
drivers/net/ethernet/google/gve/gve_tx.c
··· 390 390 seg_desc->seg.seg_addr = cpu_to_be64(addr); 391 391 } 392 392 393 - static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb) 393 + static void gve_dma_sync_for_device(struct device *dev, dma_addr_t *page_buses, 394 + u64 iov_offset, u64 iov_len) 395 + { 396 + dma_addr_t dma; 397 + u64 addr; 398 + 399 + for (addr = iov_offset; addr < iov_offset + iov_len; 400 + addr += PAGE_SIZE) { 401 + dma = page_buses[addr / PAGE_SIZE]; 402 + dma_sync_single_for_device(dev, dma, PAGE_SIZE, DMA_TO_DEVICE); 403 + } 404 + } 405 + 406 + static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb, 407 + struct device *dev) 394 408 { 395 409 int pad_bytes, hlen, hdr_nfrags, payload_nfrags, l4_hdr_offset; 396 410 union gve_tx_desc *pkt_desc, *seg_desc; ··· 446 432 skb_copy_bits(skb, 0, 447 433 tx->tx_fifo.base + info->iov[hdr_nfrags - 1].iov_offset, 448 434 hlen); 435 + gve_dma_sync_for_device(dev, tx->tx_fifo.qpl->page_buses, 436 + info->iov[hdr_nfrags - 1].iov_offset, 437 + info->iov[hdr_nfrags - 1].iov_len); 449 438 copy_offset = hlen; 450 439 451 440 for (i = payload_iov; i < payload_nfrags + payload_iov; i++) { ··· 462 445 skb_copy_bits(skb, copy_offset, 463 446 tx->tx_fifo.base + info->iov[i].iov_offset, 464 447 info->iov[i].iov_len); 448 + gve_dma_sync_for_device(dev, tx->tx_fifo.qpl->page_buses, 449 + info->iov[i].iov_offset, 450 + info->iov[i].iov_len); 465 451 copy_offset += info->iov[i].iov_len; 466 452 } 467 453 ··· 493 473 gve_tx_put_doorbell(priv, tx->q_resources, tx->req); 494 474 return NETDEV_TX_BUSY; 495 475 } 496 - nsegs = gve_tx_add_skb(tx, skb); 476 + nsegs = gve_tx_add_skb(tx, skb, &priv->pdev->dev); 497 477 498 478 netdev_tx_sent_queue(tx->netdev_txq, skb->len); 499 479 skb_tx_timestamp(skb);
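gve_dma_sync_for_device() visits every page a FIFO iov touches and syncs that page's bus mapping before the device reads it; gve_rx.c above gains the mirror-image sync for the CPU. A standalone model of the page-stride arithmetic (bus addresses invented):

#include <stdio.h>

#define PAGE_SIZE 4096ULL

static void sync_range(const unsigned long long *page_buses,
		       unsigned long long iov_offset,
		       unsigned long long iov_len)
{
	unsigned long long addr;

	/* One sync per page index touched by [iov_offset, iov_offset + iov_len) */
	for (addr = iov_offset; addr < iov_offset + iov_len; addr += PAGE_SIZE)
		printf("sync page %llu (bus 0x%llx)\n", addr / PAGE_SIZE,
		       page_buses[addr / PAGE_SIZE]);
}

int main(void)
{
	unsigned long long buses[4] = { 0x1000, 0x2000, 0x3000, 0x4000 };

	/* A 6000-byte copy at offset 100 touches pages 0 and 1 */
	sync_range(buses, 100, 6000);
	return 0;
}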
+9 -7
drivers/net/ethernet/hisilicon/hip04_eth.c
··· 237 237 dma_addr_t rx_phys[RX_DESC_NUM]; 238 238 unsigned int rx_head; 239 239 unsigned int rx_buf_size; 240 + unsigned int rx_cnt_remaining; 240 241 241 242 struct device_node *phy_node; 242 243 struct phy_device *phy; ··· 576 575 struct hip04_priv *priv = container_of(napi, struct hip04_priv, napi); 577 576 struct net_device *ndev = priv->ndev; 578 577 struct net_device_stats *stats = &ndev->stats; 579 - unsigned int cnt = hip04_recv_cnt(priv); 580 578 struct rx_desc *desc; 581 579 struct sk_buff *skb; 582 580 unsigned char *buf; ··· 588 588 589 589 /* clean up tx descriptors */ 590 590 tx_remaining = hip04_tx_reclaim(ndev, false); 591 - 592 - while (cnt && !last) { 591 + priv->rx_cnt_remaining += hip04_recv_cnt(priv); 592 + while (priv->rx_cnt_remaining && !last) { 593 593 buf = priv->rx_buf[priv->rx_head]; 594 594 skb = build_skb(buf, priv->rx_buf_size); 595 595 if (unlikely(!skb)) { ··· 635 635 hip04_set_recv_desc(priv, phys); 636 636 637 637 priv->rx_head = RX_NEXT(priv->rx_head); 638 - if (rx >= budget) 638 + if (rx >= budget) { 639 + --priv->rx_cnt_remaining; 639 640 goto done; 641 + } 640 642 641 - if (--cnt == 0) 642 - cnt = hip04_recv_cnt(priv); 643 + if (--priv->rx_cnt_remaining == 0) 644 + priv->rx_cnt_remaining += hip04_recv_cnt(priv); 643 645 } 644 646 645 647 if (!(priv->reg_inten & RCV_INT)) { ··· 726 724 int i; 727 725 728 726 priv->rx_head = 0; 727 + priv->rx_cnt_remaining = 0; 729 728 priv->tx_head = 0; 730 729 priv->tx_tail = 0; 731 730 hip04_reset_ppe(priv); ··· 1041 1038 1042 1039 hip04_free_ring(ndev, d); 1043 1040 unregister_netdev(ndev); 1044 - free_irq(ndev->irq, ndev); 1045 1041 of_node_put(priv->phy_node); 1046 1042 cancel_work_sync(&priv->tx_timeout_task); 1047 1043 free_netdev(ndev);
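The hip04 fix keeps unconsumed frame counts in the new rx_cnt_remaining field across poll rounds, because reading the hardware counter clears it; the old code re-read it after exhausting the budget and silently lost the remainder. A toy budget loop showing the carry-over:

#include <stdio.h>

static int pending = 25;	/* frames the "hardware" has queued */

/* Reading the hardware counter clears it, so every read must be consumed */
static int recv_cnt(void)
{
	int n = pending;

	pending = 0;
	return n;
}

int main(void)
{
	int rx_cnt_remaining = 0;
	int round;

	for (round = 1; round <= 3; round++) {
		int budget = 10, rx = 0;

		rx_cnt_remaining += recv_cnt();
		while (rx_cnt_remaining && rx < budget) {
			rx++;	/* "process" one frame */
			rx_cnt_remaining--;
		}
		printf("round %d: processed %d, carried over %d\n",
		       round, rx, rx_cnt_remaining);
	}
	return 0;
}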
+3 -4
drivers/net/ethernet/intel/e1000/e1000_ethtool.c
··· 607 607 for (i = 0; i < adapter->num_rx_queues; i++) 608 608 rxdr[i].count = rxdr->count; 609 609 610 + err = 0; 610 611 if (netif_running(adapter->netdev)) { 611 612 /* Try to get new resources before deleting old */ 612 613 err = e1000_setup_all_rx_resources(adapter); ··· 628 627 adapter->rx_ring = rxdr; 629 628 adapter->tx_ring = txdr; 630 629 err = e1000_up(adapter); 631 - if (err) 632 - goto err_setup; 633 630 } 634 631 kfree(tx_old); 635 632 kfree(rx_old); 636 633 637 634 clear_bit(__E1000_RESETTING, &adapter->flags); 638 - return 0; 635 + return err; 636 + 639 637 err_setup_tx: 640 638 e1000_free_all_rx_resources(adapter); 641 639 err_setup_rx: ··· 646 646 err_alloc_tx: 647 647 if (netif_running(adapter->netdev)) 648 648 e1000_up(adapter); 649 - err_setup: 650 649 clear_bit(__E1000_RESETTING, &adapter->flags); 651 650 return err; 652 651 }
-5
drivers/net/ethernet/intel/i40e/i40e_xsk.c
··· 157 157 err = i40e_queue_pair_enable(vsi, qid); 158 158 if (err) 159 159 return err; 160 - 161 - /* Kick start the NAPI context so that receiving will start */ 162 - err = i40e_xsk_wakeup(vsi->netdev, qid, XDP_WAKEUP_RX); 163 - if (err) 164 - return err; 165 160 } 166 161 167 162 return 0;
+1 -1
drivers/net/ethernet/intel/igb/e1000_82575.c
··· 466 466 ? igb_setup_copper_link_82575 467 467 : igb_setup_serdes_link_82575; 468 468 469 - if (mac->type == e1000_82580) { 469 + if (mac->type == e1000_82580 || mac->type == e1000_i350) { 470 470 switch (hw->device_id) { 471 471 /* feature not supported on these id's */ 472 472 case E1000_DEV_ID_DH89XXCC_SGMII:
+5 -3
drivers/net/ethernet/intel/igb/igb_main.c
··· 753 753 struct net_device *netdev = igb->netdev; 754 754 hw->hw_addr = NULL; 755 755 netdev_err(netdev, "PCIe link lost\n"); 756 - WARN(1, "igb: Failed to read reg 0x%x!\n", reg); 756 + WARN(pci_device_is_present(igb->pdev), 757 + "igb: Failed to read reg 0x%x!\n", reg); 757 758 } 758 759 759 760 return value; ··· 2065 2064 if ((hw->phy.media_type == e1000_media_type_copper) && 2066 2065 (!(connsw & E1000_CONNSW_AUTOSENSE_EN))) { 2067 2066 swap_now = true; 2068 - } else if (!(connsw & E1000_CONNSW_SERDESD)) { 2067 + } else if ((hw->phy.media_type != e1000_media_type_copper) && 2068 + !(connsw & E1000_CONNSW_SERDESD)) { 2069 2069 /* copper signal takes time to appear */ 2070 2070 if (adapter->copper_tries < 4) { 2071 2071 adapter->copper_tries++; ··· 2372 2370 adapter->ei.get_invariants(hw); 2373 2371 adapter->flags &= ~IGB_FLAG_MEDIA_RESET; 2374 2372 } 2375 - if ((mac->type == e1000_82575) && 2373 + if ((mac->type == e1000_82575 || mac->type == e1000_i350) && 2376 2374 (adapter->flags & IGB_FLAG_MAS_ENABLE)) { 2377 2375 igb_enable_mas(adapter); 2378 2376 }
+2 -1
drivers/net/ethernet/intel/igc/igc_main.c
··· 4047 4047 hw->hw_addr = NULL; 4048 4048 netif_device_detach(netdev); 4049 4049 netdev_err(netdev, "PCIe link lost, device now detached\n"); 4050 - WARN(1, "igc: Failed to read reg 0x%x!\n", reg); 4050 + WARN(pci_device_is_present(igc->pdev), 4051 + "igc: Failed to read reg 0x%x!\n", reg); 4051 4052 } 4052 4053 4053 4054 return value;
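The igb and igc hunks make the same call: a PCIe register that reads back as all-ones only deserves a WARN backtrace while the device is still on the bus; after surprise removal that value is expected. The shared pattern, assuming the pci_dev and mapped BAR are at hand:

#include <linux/pci.h>
#include <linux/io.h>

static u32 read_reg_or_detect_removal(struct pci_dev *pdev,
                                      u8 __iomem *hw_addr, u32 reg)
{
        u32 value = readl(hw_addr + reg);

        if (value == ~0U)
                /* all-ones after hot-unplug is normal, not a bug */
                WARN(pci_device_is_present(pdev),
                     "Failed to read reg 0x%x!\n", reg);
        return value;
}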
-1
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 4310 4310 if (test_bit(__IXGBE_RX_FCOE, &rx_ring->state)) 4311 4311 set_bit(__IXGBE_RX_3K_BUFFER, &rx_ring->state); 4312 4312 4313 - clear_bit(__IXGBE_RX_BUILD_SKB_ENABLED, &rx_ring->state); 4314 4313 if (adapter->flags2 & IXGBE_FLAG2_RX_LEGACY) 4315 4314 continue; 4316 4315
+20 -12
drivers/net/ethernet/marvell/mvneta_bm.h
··· 160 160 (bm_pool->id << MVNETA_BM_POOL_ACCESS_OFFS)); 161 161 } 162 162 #else 163 - void mvneta_bm_pool_destroy(struct mvneta_bm *priv, 164 - struct mvneta_bm_pool *bm_pool, u8 port_map) {} 165 - void mvneta_bm_bufs_free(struct mvneta_bm *priv, struct mvneta_bm_pool *bm_pool, 166 - u8 port_map) {} 167 - int mvneta_bm_construct(struct hwbm_pool *hwbm_pool, void *buf) { return 0; } 168 - int mvneta_bm_pool_refill(struct mvneta_bm *priv, 169 - struct mvneta_bm_pool *bm_pool) {return 0; } 170 - struct mvneta_bm_pool *mvneta_bm_pool_use(struct mvneta_bm *priv, u8 pool_id, 171 - enum mvneta_bm_type type, u8 port_id, 172 - int pkt_size) { return NULL; } 163 + static inline void mvneta_bm_pool_destroy(struct mvneta_bm *priv, 164 + struct mvneta_bm_pool *bm_pool, 165 + u8 port_map) {} 166 + static inline void mvneta_bm_bufs_free(struct mvneta_bm *priv, 167 + struct mvneta_bm_pool *bm_pool, 168 + u8 port_map) {} 169 + static inline int mvneta_bm_construct(struct hwbm_pool *hwbm_pool, void *buf) 170 + { return 0; } 171 + static inline int mvneta_bm_pool_refill(struct mvneta_bm *priv, 172 + struct mvneta_bm_pool *bm_pool) 173 + { return 0; } 174 + static inline struct mvneta_bm_pool *mvneta_bm_pool_use(struct mvneta_bm *priv, 175 + u8 pool_id, 176 + enum mvneta_bm_type type, 177 + u8 port_id, 178 + int pkt_size) 179 + { return NULL; } 173 180 174 181 static inline void mvneta_bm_pool_put_bp(struct mvneta_bm *priv, 175 182 struct mvneta_bm_pool *bm_pool, ··· 185 178 static inline u32 mvneta_bm_pool_get_bp(struct mvneta_bm *priv, 186 179 struct mvneta_bm_pool *bm_pool) 187 180 { return 0; } 188 - struct mvneta_bm *mvneta_bm_get(struct device_node *node) { return NULL; } 189 - void mvneta_bm_put(struct mvneta_bm *priv) {} 181 + static inline struct mvneta_bm *mvneta_bm_get(struct device_node *node) 182 + { return NULL; } 183 + static inline void mvneta_bm_put(struct mvneta_bm *priv) {} 190 184 #endif /* CONFIG_MVNETA_BM */ 191 185 #endif
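The mvneta_bm.h change is the canonical fix for a classic header bug: when CONFIG_MVNETA_BM is off, the fallback definitions must be static inline, otherwise every .c file that includes the header emits its own external definition and the link fails with duplicate symbols. The idiom in miniature (names illustrative):

#ifdef CONFIG_FOO
int foo_start(struct foo_dev *dev);     /* real implementation in foo.c */
#else
/* static inline: no out-of-line symbol, safe for multiple includers */
static inline int foo_start(struct foo_dev *dev) { return 0; }
#endif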
+26 -16
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
··· 471 471 priv->mfunc.master.res_tracker.res_alloc[RES_MPT].quota[pf]; 472 472 } 473 473 474 - static int get_max_gauranteed_vfs_counter(struct mlx4_dev *dev) 474 + static int 475 + mlx4_calc_res_counter_guaranteed(struct mlx4_dev *dev, 476 + struct resource_allocator *res_alloc, 477 + int vf) 475 478 { 476 - /* reduce the sink counter */ 477 - return (dev->caps.max_counters - 1 - 478 - (MLX4_PF_COUNTERS_PER_PORT * MLX4_MAX_PORTS)) 479 - / MLX4_MAX_PORTS; 479 + struct mlx4_active_ports actv_ports; 480 + int ports, counters_guaranteed; 481 + 482 + /* For master, only allocate according to the number of phys ports */ 483 + if (vf == mlx4_master_func_num(dev)) 484 + return MLX4_PF_COUNTERS_PER_PORT * dev->caps.num_ports; 485 + 486 + /* calculate real number of ports for the VF */ 487 + actv_ports = mlx4_get_active_ports(dev, vf); 488 + ports = bitmap_weight(actv_ports.ports, dev->caps.num_ports); 489 + counters_guaranteed = ports * MLX4_VF_COUNTERS_PER_PORT; 490 + 491 + /* If we do not have enough counters for this VF, do not 492 + * allocate any for it. '-1' to reduce the sink counter. 493 + */ 494 + if ((res_alloc->res_reserved + counters_guaranteed) > 495 + (dev->caps.max_counters - 1)) 496 + return 0; 497 + 498 + return counters_guaranteed; 480 499 } 481 500 482 501 int mlx4_init_resource_tracker(struct mlx4_dev *dev) ··· 503 484 struct mlx4_priv *priv = mlx4_priv(dev); 504 485 int i, j; 505 486 int t; 506 - int max_vfs_guarantee_counter = get_max_gauranteed_vfs_counter(dev); 507 487 508 488 priv->mfunc.master.res_tracker.slave_list = 509 489 kcalloc(dev->num_slaves, sizeof(struct slave_list), ··· 621 603 break; 622 604 case RES_COUNTER: 623 605 res_alloc->quota[t] = dev->caps.max_counters; 624 - if (t == mlx4_master_func_num(dev)) 625 - res_alloc->guaranteed[t] = 626 - MLX4_PF_COUNTERS_PER_PORT * 627 - MLX4_MAX_PORTS; 628 - else if (t <= max_vfs_guarantee_counter) 629 - res_alloc->guaranteed[t] = 630 - MLX4_VF_COUNTERS_PER_PORT * 631 - MLX4_MAX_PORTS; 632 - else 633 - res_alloc->guaranteed[t] = 0; 606 + res_alloc->guaranteed[t] = 607 + mlx4_calc_res_counter_guaranteed(dev, res_alloc, t); 634 608 break; 635 609 default: 636 610 break;
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 345 345 u8 num_wqebbs; 346 346 u8 num_dma; 347 347 #ifdef CONFIG_MLX5_EN_TLS 348 - skb_frag_t *resync_dump_frag; 348 + struct page *resync_dump_frag_page; 349 349 #endif 350 350 }; 351 351 ··· 410 410 struct device *pdev; 411 411 __be32 mkey_be; 412 412 unsigned long state; 413 + unsigned int hw_mtu; 413 414 struct hwtstamp_config *tstamp; 414 415 struct mlx5_clock *clock; 415 416
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/en/hv_vhca_stats.c
··· 141 141 "Failed to create hv vhca stats agent, err = %ld\n", 142 142 PTR_ERR(agent)); 143 143 144 - kfree(priv->stats_agent.buf); 144 + kvfree(priv->stats_agent.buf); 145 145 return IS_ERR_OR_NULL(agent); 146 146 } 147 147 ··· 157 157 return; 158 158 159 159 mlx5_hv_vhca_agent_destroy(priv->stats_agent.agent); 160 - kfree(priv->stats_agent.buf); 160 + kvfree(priv->stats_agent.buf); 161 161 }
+9 -3
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
··· 97 97 if (ret) 98 98 return ret; 99 99 100 - if (mlx5_lag_is_multipath(mdev) && rt->rt_gw_family != AF_INET) 100 + if (mlx5_lag_is_multipath(mdev) && rt->rt_gw_family != AF_INET) { 101 + ip_rt_put(rt); 101 102 return -ENETUNREACH; 103 + } 102 104 #else 103 105 return -EOPNOTSUPP; 104 106 #endif 105 107 106 108 ret = get_route_and_out_devs(priv, rt->dst.dev, route_dev, out_dev); 107 - if (ret < 0) 109 + if (ret < 0) { 110 + ip_rt_put(rt); 108 111 return ret; 112 + } 109 113 110 114 if (!(*out_ttl)) 111 115 *out_ttl = ip4_dst_hoplimit(&rt->dst); ··· 153 149 *out_ttl = ip6_dst_hoplimit(dst); 154 150 155 151 ret = get_route_and_out_devs(priv, dst->dev, route_dev, out_dev); 156 - if (ret < 0) 152 + if (ret < 0) { 153 + dst_release(dst); 157 154 return ret; 155 + } 158 156 #else 159 157 return -EOPNOTSUPP; 160 158 #endif
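Both tc_tun hunks plug the same leak class: once the IPv4/IPv6 route lookup has returned a reference-counted entry, every early exit must release it with ip_rt_put()/dst_release(). A hedged sketch of the IPv4 shape:

#include <net/route.h>

static int use_route_checked(struct net *net, struct flowi4 *fl4)
{
        struct rtable *rt = ip_route_output_key(net, fl4);

        if (IS_ERR(rt))
                return PTR_ERR(rt);

        if (rt->rt_gw_family != AF_INET) {      /* illustrative bail-out */
                ip_rt_put(rt);                  /* the fix: drop the ref */
                return -ENETUNREACH;
        }

        /* ... use rt, then release it on the success path too ... */
        ip_rt_put(rt);
        return 0;
}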
+6 -7
drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
··· 15 15 #else 16 16 /* TLS offload requires additional stop_room for: 17 17 * - a resync SKB. 18 - * kTLS offload requires additional stop_room for: 19 - * - static params WQE, 20 - * - progress params WQE, and 21 - * - resync DUMP per frag. 18 + * kTLS offload requires fixed additional stop_room for: 19 + * - a static params WQE, and a progress params WQE. 20 + * The additional MTU-depending room for the resync DUMP WQEs 21 + * will be calculated and added in runtime. 22 22 */ 23 23 #define MLX5E_SQ_TLS_ROOM \ 24 24 (MLX5_SEND_WQE_MAX_WQEBBS + \ 25 - MLX5E_KTLS_STATIC_WQEBBS + MLX5E_KTLS_PROGRESS_WQEBBS + \ 26 - MAX_SKB_FRAGS * MLX5E_KTLS_MAX_DUMP_WQEBBS) 25 + MLX5E_KTLS_STATIC_WQEBBS + MLX5E_KTLS_PROGRESS_WQEBBS) 27 26 #endif 28 27 29 28 #define INL_HDR_START_SZ (sizeof(((struct mlx5_wqe_eth_seg *)NULL)->inline_hdr.start)) ··· 91 92 92 93 /* fill sq frag edge with nops to avoid wqe wrapping two pages */ 93 94 for (; wi < edge_wi; wi++) { 94 - wi->skb = NULL; 95 + memset(wi, 0, sizeof(*wi)); 95 96 wi->num_wqebbs = 1; 96 97 mlx5e_post_nop(wq, sq->sqn, &sq->pc); 97 98 }
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
··· 38 38 return -ENOMEM; 39 39 40 40 tx_priv->expected_seq = start_offload_tcp_sn; 41 - tx_priv->crypto_info = crypto_info; 41 + tx_priv->crypto_info = *(struct tls12_crypto_info_aes_gcm_128 *)crypto_info; 42 42 mlx5e_set_ktls_tx_priv_ctx(tls_ctx, tx_priv); 43 43 44 44 /* tc and underlay_qpn values are not in use for tls tis */
+25 -4
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
··· 21 21 MLX5_ST_SZ_BYTES(tls_progress_params)) 22 22 #define MLX5E_KTLS_PROGRESS_WQEBBS \ 23 23 (DIV_ROUND_UP(MLX5E_KTLS_PROGRESS_WQE_SZ, MLX5_SEND_WQE_BB)) 24 - #define MLX5E_KTLS_MAX_DUMP_WQEBBS 2 24 + 25 + struct mlx5e_dump_wqe { 26 + struct mlx5_wqe_ctrl_seg ctrl; 27 + struct mlx5_wqe_data_seg data; 28 + }; 29 + 30 + #define MLX5E_KTLS_DUMP_WQEBBS \ 31 + (DIV_ROUND_UP(sizeof(struct mlx5e_dump_wqe), MLX5_SEND_WQE_BB)) 25 32 26 33 enum { 27 34 MLX5E_TLS_PROGRESS_PARAMS_AUTH_STATE_NO_OFFLOAD = 0, ··· 44 37 45 38 struct mlx5e_ktls_offload_context_tx { 46 39 struct tls_offload_context_tx *tx_ctx; 47 - struct tls_crypto_info *crypto_info; 40 + struct tls12_crypto_info_aes_gcm_128 crypto_info; 48 41 u32 expected_seq; 49 42 u32 tisn; 50 43 u32 key_id; ··· 93 86 struct mlx5e_tx_wqe **wqe, u16 *pi); 94 87 void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq, 95 88 struct mlx5e_tx_wqe_info *wi, 96 - struct mlx5e_sq_dma *dma); 97 - 89 + u32 *dma_fifo_cc); 90 + static inline u8 91 + mlx5e_ktls_dumps_num_wqebbs(struct mlx5e_txqsq *sq, unsigned int nfrags, 92 + unsigned int sync_len) 93 + { 94 + /* Given the MTU and sync_len, calculates an upper bound for the 95 + * number of WQEBBs needed for the TX resync DUMP WQEs of a record. 96 + */ 97 + return MLX5E_KTLS_DUMP_WQEBBS * 98 + (nfrags + DIV_ROUND_UP(sync_len, sq->hw_mtu)); 99 + } 98 100 #else 99 101 100 102 static inline void mlx5e_ktls_build_netdev(struct mlx5e_priv *priv) 101 103 { 102 104 } 105 + 106 + static inline void 107 + mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq, 108 + struct mlx5e_tx_wqe_info *wi, 109 + u32 *dma_fifo_cc) {} 103 110 104 111 #endif 105 112
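mlx5e_ktls_dumps_num_wqebbs() trades the old fixed MLX5E_KTLS_MAX_DUMP_WQEBBS cap for an MTU-aware bound: resync data is re-sent in DUMP WQEs of at most hw_mtu bytes each, and every frag boundary can add one extra, partially filled WQE. A worked example with assumed numbers (not taken from the patch):

/* Assume sync_len = 16384 bytes, nfrags = 4, hw_mtu = 1500, and
 * MLX5E_KTLS_DUMP_WQEBBS = 1:
 *
 *   DIV_ROUND_UP(16384, 1500) = 11 MTU-sized chunks
 *   bound = 1 * (4 + 11)      = 15 WQEBBs reserved for DUMP WQEs
 */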
+113 -75
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
··· 24 24 static void 25 25 fill_static_params_ctx(void *ctx, struct mlx5e_ktls_offload_context_tx *priv_tx) 26 26 { 27 - struct tls_crypto_info *crypto_info = priv_tx->crypto_info; 28 - struct tls12_crypto_info_aes_gcm_128 *info; 27 + struct tls12_crypto_info_aes_gcm_128 *info = &priv_tx->crypto_info; 29 28 char *initial_rn, *gcm_iv; 30 29 u16 salt_sz, rec_seq_sz; 31 30 char *salt, *rec_seq; 32 31 u8 tls_version; 33 32 34 - if (WARN_ON(crypto_info->cipher_type != TLS_CIPHER_AES_GCM_128)) 35 - return; 36 - 37 - info = (struct tls12_crypto_info_aes_gcm_128 *)crypto_info; 38 33 EXTRACT_INFO_FIELDS; 39 34 40 35 gcm_iv = MLX5_ADDR_OF(tls_static_params, ctx, gcm_iv); ··· 103 108 } 104 109 105 110 static void tx_fill_wi(struct mlx5e_txqsq *sq, 106 - u16 pi, u8 num_wqebbs, 107 - skb_frag_t *resync_dump_frag, 108 - u32 num_bytes) 111 + u16 pi, u8 num_wqebbs, u32 num_bytes, 112 + struct page *page) 109 113 { 110 114 struct mlx5e_tx_wqe_info *wi = &sq->db.wqe_info[pi]; 111 115 112 - wi->skb = NULL; 113 - wi->num_wqebbs = num_wqebbs; 114 - wi->resync_dump_frag = resync_dump_frag; 115 - wi->num_bytes = num_bytes; 116 + memset(wi, 0, sizeof(*wi)); 117 + wi->num_wqebbs = num_wqebbs; 118 + wi->num_bytes = num_bytes; 119 + wi->resync_dump_frag_page = page; 116 120 } 117 121 118 122 void mlx5e_ktls_tx_offload_set_pending(struct mlx5e_ktls_offload_context_tx *priv_tx) ··· 139 145 140 146 umr_wqe = mlx5e_sq_fetch_wqe(sq, MLX5E_KTLS_STATIC_UMR_WQE_SZ, &pi); 141 147 build_static_params(umr_wqe, sq->pc, sq->sqn, priv_tx, fence); 142 - tx_fill_wi(sq, pi, MLX5E_KTLS_STATIC_WQEBBS, NULL, 0); 148 + tx_fill_wi(sq, pi, MLX5E_KTLS_STATIC_WQEBBS, 0, NULL); 143 149 sq->pc += MLX5E_KTLS_STATIC_WQEBBS; 144 150 } 145 151 ··· 153 159 154 160 wqe = mlx5e_sq_fetch_wqe(sq, MLX5E_KTLS_PROGRESS_WQE_SZ, &pi); 155 161 build_progress_params(wqe, sq->pc, sq->sqn, priv_tx, fence); 156 - tx_fill_wi(sq, pi, MLX5E_KTLS_PROGRESS_WQEBBS, NULL, 0); 162 + tx_fill_wi(sq, pi, MLX5E_KTLS_PROGRESS_WQEBBS, 0, NULL); 157 163 sq->pc += MLX5E_KTLS_PROGRESS_WQEBBS; 158 164 } 159 165 ··· 163 169 bool skip_static_post, bool fence_first_post) 164 170 { 165 171 bool progress_fence = skip_static_post || !fence_first_post; 172 + struct mlx5_wq_cyc *wq = &sq->wq; 173 + u16 contig_wqebbs_room, pi; 174 + 175 + pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc); 176 + contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi); 177 + if (unlikely(contig_wqebbs_room < 178 + MLX5E_KTLS_STATIC_WQEBBS + MLX5E_KTLS_PROGRESS_WQEBBS)) 179 + mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room); 166 180 167 181 if (!skip_static_post) 168 182 post_static_params(sq, priv_tx, fence_first_post); ··· 182 180 u64 rcd_sn; 183 181 s32 sync_len; 184 182 int nr_frags; 185 - skb_frag_t *frags[MAX_SKB_FRAGS]; 183 + skb_frag_t frags[MAX_SKB_FRAGS]; 186 184 }; 187 185 188 - static bool tx_sync_info_get(struct mlx5e_ktls_offload_context_tx *priv_tx, 189 - u32 tcp_seq, struct tx_sync_info *info) 186 + enum mlx5e_ktls_sync_retval { 187 + MLX5E_KTLS_SYNC_DONE, 188 + MLX5E_KTLS_SYNC_FAIL, 189 + MLX5E_KTLS_SYNC_SKIP_NO_DATA, 190 + }; 191 + 192 + static enum mlx5e_ktls_sync_retval 193 + tx_sync_info_get(struct mlx5e_ktls_offload_context_tx *priv_tx, 194 + u32 tcp_seq, struct tx_sync_info *info) 190 195 { 191 196 struct tls_offload_context_tx *tx_ctx = priv_tx->tx_ctx; 197 + enum mlx5e_ktls_sync_retval ret = MLX5E_KTLS_SYNC_DONE; 192 198 struct tls_record_info *record; 193 199 int remaining, i = 0; 194 200 unsigned long flags; 195 - bool ret = true; 196 201 197 202 spin_lock_irqsave(&tx_ctx->lock, flags); 198 203 record = tls_get_record(tx_ctx, tcp_seq, &info->rcd_sn); 199 204 200 205 if (unlikely(!record)) { 201 - ret = false; 206 + ret = MLX5E_KTLS_SYNC_FAIL; 202 207 goto out; 203 208 } 204 209 205 210 if (unlikely(tcp_seq < tls_record_start_seq(record))) { 206 - if (!tls_record_is_start_marker(record)) 207 - ret = false; 211 + ret = tls_record_is_start_marker(record) ? 212 + MLX5E_KTLS_SYNC_SKIP_NO_DATA : MLX5E_KTLS_SYNC_FAIL; 208 213 goto out; 209 214 } 210 215 ··· 220 211 while (remaining > 0) { 221 212 skb_frag_t *frag = &record->frags[i]; 222 213 223 - __skb_frag_ref(frag); 214 + get_page(skb_frag_page(frag)); 224 215 remaining -= skb_frag_size(frag); 225 - info->frags[i++] = frag; 216 + info->frags[i++] = *frag; 226 217 } 227 218 /* reduce the part which will be sent with the original SKB */ 228 219 if (remaining < 0) 229 - skb_frag_size_add(info->frags[i - 1], remaining); 220 + skb_frag_size_add(&info->frags[i - 1], remaining); 230 221 info->nr_frags = i; 231 222 out: 232 223 spin_unlock_irqrestore(&tx_ctx->lock, flags); ··· 238 229 struct mlx5e_ktls_offload_context_tx *priv_tx, 239 230 u64 rcd_sn) 240 231 { 241 - struct tls_crypto_info *crypto_info = priv_tx->crypto_info; 242 - struct tls12_crypto_info_aes_gcm_128 *info; 232 + struct tls12_crypto_info_aes_gcm_128 *info = &priv_tx->crypto_info; 243 233 __be64 rn_be = cpu_to_be64(rcd_sn); 244 234 bool skip_static_post; 245 235 u16 rec_seq_sz; 246 236 char *rec_seq; 247 237 248 - if (WARN_ON(crypto_info->cipher_type != TLS_CIPHER_AES_GCM_128)) 249 - return; 250 - 251 - info = (struct tls12_crypto_info_aes_gcm_128 *)crypto_info; 252 238 rec_seq = info->rec_seq; 253 239 rec_seq_sz = sizeof(info->rec_seq); 254 240 ··· 254 250 mlx5e_ktls_tx_post_param_wqes(sq, priv_tx, skip_static_post, true); 255 251 } 256 252 257 - struct mlx5e_dump_wqe { 258 - struct mlx5_wqe_ctrl_seg ctrl; 259 - struct mlx5_wqe_data_seg data; 260 - }; 261 - 262 253 static int 263 254 tx_post_resync_dump(struct mlx5e_txqsq *sq, skb_frag_t *frag, u32 tisn, bool first) 264 255 { ··· 261 262 struct mlx5_wqe_data_seg *dseg; 262 263 struct mlx5e_dump_wqe *wqe; 263 264 dma_addr_t dma_addr = 0; 264 - u8 num_wqebbs; 265 265 u16 ds_cnt; 266 266 int fsz; 267 267 u16 pi; ··· 268 270 wqe = mlx5e_sq_fetch_wqe(sq, sizeof(*wqe), &pi); 269 271 270 272 ds_cnt = sizeof(*wqe) / MLX5_SEND_WQE_DS; 271 - num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS); 272 273 273 274 cseg = &wqe->ctrl; 274 275 dseg = &wqe->data; ··· 288 291 dseg->byte_count = cpu_to_be32(fsz); 289 292 mlx5e_dma_push(sq, dma_addr, fsz, MLX5E_DMA_MAP_PAGE); 290 293 291 - tx_fill_wi(sq, pi, num_wqebbs, frag, fsz); 292 - sq->pc += num_wqebbs; 293 - 294 - WARN(num_wqebbs > MLX5E_KTLS_MAX_DUMP_WQEBBS, 295 - "unexpected DUMP num_wqebbs, %d > %d", 296 - num_wqebbs, MLX5E_KTLS_MAX_DUMP_WQEBBS); 294 + tx_fill_wi(sq, pi, MLX5E_KTLS_DUMP_WQEBBS, fsz, skb_frag_page(frag)); 295 + sq->pc += MLX5E_KTLS_DUMP_WQEBBS; 297 296 298 297 return 0; 299 298 } 300 299 301 300 void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq, 302 301 struct mlx5e_tx_wqe_info *wi, 303 - struct mlx5e_sq_dma *dma) 302 + u32 *dma_fifo_cc) 304 303 { 305 - struct mlx5e_sq_stats *stats = sq->stats; 304 + struct mlx5e_sq_stats *stats; 305 + struct mlx5e_sq_dma *dma; 306 + 307 + if (!wi->resync_dump_frag_page) 308 + return; 309 + 310 + dma = mlx5e_dma_get(sq, (*dma_fifo_cc)++); 311 + stats = sq->stats; 306 312 307 313 mlx5e_tx_dma_unmap(sq->pdev, dma); 308 - __skb_frag_unref(wi->resync_dump_frag); 314 + put_page(wi->resync_dump_frag_page);
309 315 stats->tls_dump_packets++; 310 316 stats->tls_dump_bytes += wi->num_bytes; 311 317 } ··· 318 318 struct mlx5_wq_cyc *wq = &sq->wq; 319 319 u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc); 320 320 321 - tx_fill_wi(sq, pi, 1, NULL, 0); 321 + tx_fill_wi(sq, pi, 1, 0, NULL); 322 322 323 323 mlx5e_post_nop_fence(wq, sq->sqn, &sq->pc); 324 324 } 325 325 326 - static struct sk_buff * 326 + static enum mlx5e_ktls_sync_retval 327 327 mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx, 328 328 struct mlx5e_txqsq *sq, 329 - struct sk_buff *skb, 329 + int datalen, 330 330 u32 seq) 331 331 { 332 332 struct mlx5e_sq_stats *stats = sq->stats; 333 333 struct mlx5_wq_cyc *wq = &sq->wq; 334 + enum mlx5e_ktls_sync_retval ret; 334 335 struct tx_sync_info info = {}; 335 336 u16 contig_wqebbs_room, pi; 336 337 u8 num_wqebbs; 337 - int i; 338 + int i = 0; 338 339 339 - if (!tx_sync_info_get(priv_tx, seq, &info)) { 340 + ret = tx_sync_info_get(priv_tx, seq, &info); 341 + if (unlikely(ret != MLX5E_KTLS_SYNC_DONE)) { 342 + if (ret == MLX5E_KTLS_SYNC_SKIP_NO_DATA) { 343 + stats->tls_skip_no_sync_data++; 344 + return MLX5E_KTLS_SYNC_SKIP_NO_DATA; 345 + } 340 346 /* We might get here if a retransmission reaches the driver 341 347 * after the relevant record is acked. 342 348 * It should be safe to drop the packet in this case ··· 352 346 } 353 347 354 348 if (unlikely(info.sync_len < 0)) { 355 - u32 payload; 356 - int headln; 357 - 358 - headln = skb_transport_offset(skb) + tcp_hdrlen(skb); 359 - payload = skb->len - headln; 360 - if (likely(payload <= -info.sync_len)) 361 - return skb; 349 + if (likely(datalen <= -info.sync_len)) 350 + return MLX5E_KTLS_SYNC_DONE; 362 351 363 352 stats->tls_drop_bypass_req++; 364 353 goto err_out; ··· 361 360 362 361 stats->tls_ooo++; 363 362 364 - num_wqebbs = MLX5E_KTLS_STATIC_WQEBBS + MLX5E_KTLS_PROGRESS_WQEBBS + 365 - (info.nr_frags ? info.nr_frags * MLX5E_KTLS_MAX_DUMP_WQEBBS : 1); 363 + tx_post_resync_params(sq, priv_tx, info.rcd_sn); 364 + 365 + /* If no dump WQE was sent, we need to have a fence NOP WQE before the 366 + * actual data xmit. 367 + */ 368 + if (!info.nr_frags) { 369 + tx_post_fence_nop(sq); 370 + return MLX5E_KTLS_SYNC_DONE; 371 + } 372 + 373 + num_wqebbs = mlx5e_ktls_dumps_num_wqebbs(sq, info.nr_frags, info.sync_len); 366 374 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc); 367 375 contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi); 376 + 368 377 if (unlikely(contig_wqebbs_room < num_wqebbs)) 369 378 mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room); 370 379 371 380 tx_post_resync_params(sq, priv_tx, info.rcd_sn); 372 381 373 - for (i = 0; i < info.nr_frags; i++) 374 - if (tx_post_resync_dump(sq, info.frags[i], priv_tx->tisn, !i)) 375 - goto err_out; 382 + for (; i < info.nr_frags; i++) { 383 + unsigned int orig_fsz, frag_offset = 0, n = 0; 384 + skb_frag_t *f = &info.frags[i]; 376 385 377 - /* If no dump WQE was sent, we need to have a fence NOP WQE before the 378 - * actual data xmit.
379 - */ 380 - if (!info.nr_frags) 381 - tx_post_fence_nop(sq); 386 + orig_fsz = skb_frag_size(f); 382 387 383 - return skb; 388 + do { 389 + bool fence = !(i || frag_offset); 390 + unsigned int fsz; 391 + 392 + n++; 393 + fsz = min_t(unsigned int, sq->hw_mtu, orig_fsz - frag_offset); 394 + skb_frag_size_set(f, fsz); 395 + if (tx_post_resync_dump(sq, f, priv_tx->tisn, fence)) { 396 + page_ref_add(skb_frag_page(f), n - 1); 397 + goto err_out; 398 + } 399 + 400 + skb_frag_off_add(f, fsz); 401 + frag_offset += fsz; 402 + } while (frag_offset < orig_fsz); 403 + 404 + page_ref_add(skb_frag_page(f), n - 1); 405 + } 406 + 407 + return MLX5E_KTLS_SYNC_DONE; 384 408 385 409 err_out: 386 - dev_kfree_skb_any(skb); 387 - return NULL; 410 + for (; i < info.nr_frags; i++) 411 + /* The put_page() here undoes the page ref obtained in tx_sync_info_get(). 412 + * Page refs obtained for the DUMP WQEs above (by page_ref_add) will be 413 + * released only upon their completions (or in mlx5e_free_txqsq_descs, 414 + * if channel closes). 415 + */ 416 + put_page(skb_frag_page(&info.frags[i])); 417 + 418 + return MLX5E_KTLS_SYNC_FAIL; 388 419 } 389 420 390 421 struct sk_buff *mlx5e_ktls_handle_tx_skb(struct net_device *netdev, ··· 452 419 453 420 seq = ntohl(tcp_hdr(skb)->seq); 454 421 if (unlikely(priv_tx->expected_seq != seq)) { 455 - skb = mlx5e_ktls_tx_handle_ooo(priv_tx, sq, skb, seq); 456 - if (unlikely(!skb)) 422 + enum mlx5e_ktls_sync_retval ret = 423 + mlx5e_ktls_tx_handle_ooo(priv_tx, sq, datalen, seq); 424 + 425 + if (likely(ret == MLX5E_KTLS_SYNC_DONE)) 426 + *wqe = mlx5e_sq_fetch_wqe(sq, sizeof(**wqe), pi); 427 + else if (ret == MLX5E_KTLS_SYNC_FAIL) 428 + goto err_out; 429 + else /* ret == MLX5E_KTLS_SYNC_SKIP_NO_DATA */ 457 430 goto out; 458 - *wqe = mlx5e_sq_fetch_wqe(sq, sizeof(**wqe), pi); 459 431 } 460 432 461 433 priv_tx->expected_seq = seq + datalen;
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
··· 1021 1021 { 1022 1022 #define MLX5E_MIN_PTYS_EXT_LINK_MODE_BIT ETHTOOL_LINK_MODE_50000baseKR_Full_BIT 1023 1023 int size = __ETHTOOL_LINK_MODE_MASK_NBITS - MLX5E_MIN_PTYS_EXT_LINK_MODE_BIT; 1024 - __ETHTOOL_DECLARE_LINK_MODE_MASK(modes); 1024 + __ETHTOOL_DECLARE_LINK_MODE_MASK(modes) = {0,}; 1025 1025 1026 1026 bitmap_set(modes, MLX5E_MIN_PTYS_EXT_LINK_MODE_BIT, size); 1027 1027 return bitmap_intersects(modes, adver, __ETHTOOL_LINK_MODE_MASK_NBITS);
+11 -2
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 1128 1128 sq->txq_ix = txq_ix; 1129 1129 sq->uar_map = mdev->mlx5e_res.bfreg.map; 1130 1130 sq->min_inline_mode = params->tx_min_inline_mode; 1131 + sq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu); 1131 1132 sq->stats = &c->priv->channel_stats[c->ix].sq[tc]; 1132 1133 sq->stop_room = MLX5E_SQ_STOP_ROOM; 1133 1134 INIT_WORK(&sq->recover_work, mlx5e_tx_err_cqe_work); ··· 1136 1135 set_bit(MLX5E_SQ_STATE_VLAN_NEED_L2_INLINE, &sq->state); 1137 1136 if (MLX5_IPSEC_DEV(c->priv->mdev)) 1138 1137 set_bit(MLX5E_SQ_STATE_IPSEC, &sq->state); 1138 + #ifdef CONFIG_MLX5_EN_TLS 1139 1139 if (mlx5_accel_is_tls_device(c->priv->mdev)) { 1140 1140 set_bit(MLX5E_SQ_STATE_TLS, &sq->state); 1141 - sq->stop_room += MLX5E_SQ_TLS_ROOM; 1141 + sq->stop_room += MLX5E_SQ_TLS_ROOM + 1142 + mlx5e_ktls_dumps_num_wqebbs(sq, MAX_SKB_FRAGS, 1143 + TLS_MAX_PAYLOAD_SIZE); 1142 1144 } 1145 + #endif 1143 1146 1144 1147 param->wq.db_numa_node = cpu_to_node(c->cpu); 1145 1148 err = mlx5_wq_cyc_create(mdev, &param->wq, sqc_wq, wq, &sq->wq_ctrl); ··· 1354 1349 /* last doorbell out, godspeed .. */ 1355 1350 if (mlx5e_wqc_has_room_for(wq, sq->cc, sq->pc, 1)) { 1356 1351 u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc); 1352 + struct mlx5e_tx_wqe_info *wi; 1357 1353 struct mlx5e_tx_wqe *nop; 1358 1354 1359 - sq->db.wqe_info[pi].skb = NULL; 1355 + wi = &sq->db.wqe_info[pi]; 1356 + 1357 + memset(wi, 0, sizeof(*wi)); 1358 + wi->num_wqebbs = 1; 1360 1359 nop = mlx5e_post_nop(wq, sq->sqn, &sq->pc); 1361 1360 mlx5e_notify_hw(wq, sq->pc, sq->uar_map, &nop->ctrl); 1362 1361 }
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 611 611 612 612 mutex_lock(&esw->offloads.encap_tbl_lock); 613 613 encap_connected = !!(e->flags & MLX5_ENCAP_ENTRY_VALID); 614 - if (e->compl_result || (encap_connected == neigh_connected && 615 - ether_addr_equal(e->h_dest, ha))) 614 + if (e->compl_result < 0 || (encap_connected == neigh_connected && 615 + ether_addr_equal(e->h_dest, ha))) 616 616 goto unlock; 617 617 618 618 mlx5e_take_all_encap_flows(e, &flow_list);
+4 -1
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 1386 1386 if (unlikely(!test_bit(MLX5E_RQ_STATE_ENABLED, &rq->state))) 1387 1387 return 0; 1388 1388 1389 - if (rq->cqd.left) 1389 + if (rq->cqd.left) { 1390 1390 work_done += mlx5e_decompress_cqes_cont(rq, cqwq, 0, budget); 1391 + if (rq->cqd.left || work_done >= budget) 1392 + goto out; 1393 + } 1391 1394 1392 1395 cqe = mlx5_cqwq_get_cqe(cqwq); 1393 1396 if (!cqe) {
+3 -12
drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
··· 35 35 #include <linux/udp.h> 36 36 #include <net/udp.h> 37 37 #include "en.h" 38 + #include "en/port.h" 38 39 39 40 enum { 40 41 MLX5E_ST_LINK_STATE, ··· 81 80 82 81 static int mlx5e_test_link_speed(struct mlx5e_priv *priv) 83 82 { 84 - u32 out[MLX5_ST_SZ_DW(ptys_reg)]; 85 - u32 eth_proto_oper; 86 - int i; 83 + u32 speed; 87 84 88 85 if (!netif_carrier_ok(priv->netdev)) 89 86 return 1; 90 87 91 - if (mlx5_query_port_ptys(priv->mdev, out, sizeof(out), MLX5_PTYS_EN, 1)) 92 - return 1; 93 - 94 - eth_proto_oper = MLX5_GET(ptys_reg, out, eth_proto_oper); 95 - for (i = 0; i < MLX5E_LINK_MODES_NUMBER; i++) { 96 - if (eth_proto_oper & MLX5E_PROT_MASK(i)) 97 - return 0; 98 - } 99 - return 1; 88 + return mlx5e_port_linkspeed(priv->mdev, &speed); 100 89 } 101 90 102 91 struct mlx5ehdr {
+12 -8
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
··· 52 52 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_encrypted_bytes) }, 53 53 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_ctx) }, 54 54 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_ooo) }, 55 - { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_resync_bytes) }, 56 - { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_drop_no_sync_data) }, 57 - { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_drop_bypass_req) }, 58 55 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_packets) }, 59 56 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_bytes) }, 57 + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_resync_bytes) }, 58 + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_skip_no_sync_data) }, 59 + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_drop_no_sync_data) }, 60 + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_drop_bypass_req) }, 60 61 #endif 61 62 62 63 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_lro_packets) }, ··· 289 288 s->tx_tls_encrypted_bytes += sq_stats->tls_encrypted_bytes; 290 289 s->tx_tls_ctx += sq_stats->tls_ctx; 291 290 s->tx_tls_ooo += sq_stats->tls_ooo; 292 - s->tx_tls_resync_bytes += sq_stats->tls_resync_bytes; 293 - s->tx_tls_drop_no_sync_data += sq_stats->tls_drop_no_sync_data; 294 - s->tx_tls_drop_bypass_req += sq_stats->tls_drop_bypass_req; 295 291 s->tx_tls_dump_bytes += sq_stats->tls_dump_bytes; 296 292 s->tx_tls_dump_packets += sq_stats->tls_dump_packets; 293 + s->tx_tls_resync_bytes += sq_stats->tls_resync_bytes; 294 + s->tx_tls_skip_no_sync_data += sq_stats->tls_skip_no_sync_data; 295 + s->tx_tls_drop_no_sync_data += sq_stats->tls_drop_no_sync_data; 296 + s->tx_tls_drop_bypass_req += sq_stats->tls_drop_bypass_req; 297 297 #endif 298 298 s->tx_cqes += sq_stats->cqes; 299 299 } ··· 1474 1472 { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_bytes) }, 1475 1473 { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_ctx) }, 1476 1474 { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_ooo) }, 1477 - { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_drop_no_sync_data) }, 1478 - { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_drop_bypass_req) }, 1479 1475 { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_dump_packets) }, 1480 1476 { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_dump_bytes) }, 1477 + { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_resync_bytes) }, 1478 + { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_skip_no_sync_data) }, 1479 + { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_drop_no_sync_data) }, 1480 + { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_drop_bypass_req) }, 1481 1481 #endif 1482 1482 { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, csum_none) }, 1483 1483 { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, stopped) },
+8 -6
drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
··· 129 129 u64 tx_tls_encrypted_bytes; 130 130 u64 tx_tls_ctx; 131 131 u64 tx_tls_ooo; 132 - u64 tx_tls_resync_bytes; 133 - u64 tx_tls_drop_no_sync_data; 134 - u64 tx_tls_drop_bypass_req; 135 132 u64 tx_tls_dump_packets; 136 133 u64 tx_tls_dump_bytes; 134 + u64 tx_tls_resync_bytes; 135 + u64 tx_tls_skip_no_sync_data; 136 + u64 tx_tls_drop_no_sync_data; 137 + u64 tx_tls_drop_bypass_req; 137 138 #endif 138 139 139 140 u64 rx_xsk_packets; ··· 274 273 u64 tls_encrypted_bytes; 275 274 u64 tls_ctx; 276 275 u64 tls_ooo; 277 - u64 tls_resync_bytes; 278 - u64 tls_drop_no_sync_data; 279 - u64 tls_drop_bypass_req; 280 276 u64 tls_dump_packets; 281 277 u64 tls_dump_bytes; 278 + u64 tls_resync_bytes; 279 + u64 tls_skip_no_sync_data; 280 + u64 tls_drop_no_sync_data; 281 + u64 tls_drop_bypass_req; 282 282 #endif 283 283 /* less likely accessed in data path */ 284 284 u64 csum_none;
+28 -8
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 1278 1278 mlx5_eswitch_del_vlan_action(esw, attr); 1279 1279 1280 1280 for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) 1281 - if (attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP) 1281 + if (attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP) { 1282 1282 mlx5e_detach_encap(priv, flow, out_index); 1283 + kfree(attr->parse_attr->tun_info[out_index]); 1284 + } 1283 1285 kvfree(attr->parse_attr); 1284 1286 1285 1287 if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) ··· 1561 1559 mlx5_packet_reformat_dealloc(priv->mdev, e->pkt_reformat); 1562 1560 } 1563 1561 1562 + kfree(e->tun_info); 1564 1563 kfree(e->encap_header); 1565 1564 kfree_rcu(e, rcu); 1566 1565 } ··· 2975 2972 return NULL; 2976 2973 } 2977 2974 2975 + static struct ip_tunnel_info *dup_tun_info(const struct ip_tunnel_info *tun_info) 2976 + { 2977 + size_t tun_size = sizeof(*tun_info) + tun_info->options_len; 2978 + 2979 + return kmemdup(tun_info, tun_size, GFP_KERNEL); 2980 + } 2981 + 2978 2982 static int mlx5e_attach_encap(struct mlx5e_priv *priv, 2979 2983 struct mlx5e_tc_flow *flow, 2980 2984 struct net_device *mirred_dev, ··· 3038 3028 refcount_set(&e->refcnt, 1); 3039 3029 init_completion(&e->res_ready); 3040 3030 3031 + tun_info = dup_tun_info(tun_info); 3032 + if (!tun_info) { 3033 + err = -ENOMEM; 3034 + goto out_err_init; 3035 + } 3041 3036 e->tun_info = tun_info; 3042 3037 err = mlx5e_tc_tun_init_encap_attr(mirred_dev, priv, e, extack); 3043 - if (err) { 3044 - kfree(e); 3045 - e = NULL; 3046 - goto out_err; 3047 - } 3038 + if (err) 3039 + goto out_err_init; 3048 3040 3049 3041 INIT_LIST_HEAD(&e->flows); 3050 3042 hash_add_rcu(esw->offloads.encap_tbl, &e->encap_hlist, hash_key); ··· 3086 3074 mutex_unlock(&esw->offloads.encap_tbl_lock); 3087 3075 if (e) 3088 3076 mlx5e_encap_put(priv, e); 3077 + return err; 3078 + 3079 + out_err_init: 3080 + mutex_unlock(&esw->offloads.encap_tbl_lock); 3081 + kfree(tun_info); 3082 + kfree(e); 3089 3083 return err; 3090 3084 } 3091 3085 ··· 3178 3160 struct mlx5_esw_flow_attr *attr, 3179 3161 u32 *action) 3180 3162 { 3181 - int nest_level = vlan_get_encap_level(attr->parse_attr->filter_dev); 3163 + int nest_level = attr->parse_attr->filter_dev->lower_level; 3182 3164 struct flow_action_entry vlan_act = { 3183 3165 .id = FLOW_ACTION_VLAN_POP, 3184 3166 }; ··· 3313 3295 } else if (encap) { 3314 3296 parse_attr->mirred_ifindex[attr->out_count] = 3315 3297 out_dev->ifindex; 3316 - parse_attr->tun_info[attr->out_count] = info; 3298 + parse_attr->tun_info[attr->out_count] = dup_tun_info(info); 3299 + if (!parse_attr->tun_info[attr->out_count]) 3300 + return -ENOMEM; 3317 3301 encap = false; 3318 3302 attr->dests[attr->out_count].flags |= 3319 3303 MLX5_ESW_DEST_ENCAP;
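The en_tc changes settle ownership of tunnel metadata by copying it: both the encap entry and parse_attr now hold kmemdup()'d ip_tunnel_info (including the options tail), which is what makes the new kfree() calls in the teardown paths above unconditional and safe. The helper is small enough to restate:

static struct ip_tunnel_info *dup_tun_info(const struct ip_tunnel_info *tun_info)
{
        /* the options blob is a flexible tail right behind the struct */
        size_t tun_size = sizeof(*tun_info) + tun_info->options_len;

        return kmemdup(tun_info, tun_size, GFP_KERNEL);
}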
+20 -15
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
··· 403 403 static void mlx5e_dump_error_cqe(struct mlx5e_txqsq *sq, 404 404 struct mlx5_err_cqe *err_cqe) 405 405 { 406 - u32 ci = mlx5_cqwq_get_ci(&sq->cq.wq); 406 + struct mlx5_cqwq *wq = &sq->cq.wq; 407 + u32 ci; 408 + 409 + ci = mlx5_cqwq_ctr2ix(wq, wq->cc - 1); 407 410 408 411 netdev_err(sq->channel->netdev, 409 412 "Error cqe on cqn 0x%x, ci 0x%x, sqn 0x%x, opcode 0x%x, syndrome 0x%x, vendor syndrome 0x%x\n", ··· 482 479 skb = wi->skb; 483 480 484 481 if (unlikely(!skb)) { 485 - #ifdef CONFIG_MLX5_EN_TLS 486 - if (wi->resync_dump_frag) { 487 - struct mlx5e_sq_dma *dma = 488 - mlx5e_dma_get(sq, dma_fifo_cc++); 489 - 490 - mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, dma); 491 - } 492 - #endif 482 + mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, &dma_fifo_cc); 493 483 sqcc += wi->num_wqebbs; 494 484 continue; 495 485 } ··· 538 542 { 539 543 struct mlx5e_tx_wqe_info *wi; 540 544 struct sk_buff *skb; 545 + u32 dma_fifo_cc; 546 + u16 sqcc; 541 547 u16 ci; 542 548 int i; 543 549 544 - while (sq->cc != sq->pc) { 545 - ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sq->cc); 550 + sqcc = sq->cc; 551 + dma_fifo_cc = sq->dma_fifo_cc; 552 + 553 + while (sqcc != sq->pc) { 554 + ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sqcc); 546 555 wi = &sq->db.wqe_info[ci]; 547 556 skb = wi->skb; 548 557 549 - if (!skb) { /* nop */ 550 - sq->cc++; 558 + if (!skb) { 559 + mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, &dma_fifo_cc); 560 + sqcc += wi->num_wqebbs; 551 561 continue; 552 562 } 553 563 554 564 for (i = 0; i < wi->num_dma; i++) { 555 565 struct mlx5e_sq_dma *dma = 556 - mlx5e_dma_get(sq, sq->dma_fifo_cc++); 566 + mlx5e_dma_get(sq, dma_fifo_cc++); 557 567 558 568 mlx5e_tx_dma_unmap(sq->pdev, dma); 559 569 } 560 570 561 571 dev_kfree_skb_any(skb); 562 - sq->cc += wi->num_wqebbs; 572 + sqcc += wi->num_wqebbs; 563 573 } 574 + 575 + sq->dma_fifo_cc = dma_fifo_cc; 576 + sq->cc = sqcc; 564 577 } 565 578 566 579 #ifdef CONFIG_MLX5_CORE_IPOIB
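Both en_tx.c cleanup paths now follow one discipline: snapshot the consumer cursors into locals, walk entry by entry advancing by each entry's true num_wqebbs (NOPs and kTLS DUMP completions included, via the skb == NULL branch), and publish the cursors once at the end. A skeleton, with free_one_wqe_info() as an illustrative stand-in for the per-entry unmap/free work:

u32 dma_fifo_cc = sq->dma_fifo_cc;
u16 sqcc = sq->cc;

while (sqcc != sq->pc) {
        u16 ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sqcc);
        struct mlx5e_tx_wqe_info *wi = &sq->db.wqe_info[ci];

        free_one_wqe_info(sq, wi, &dma_fifo_cc);
        sqcc += wi->num_wqebbs;         /* never assume one WQEBB per entry */
}

sq->dma_fifo_cc = dma_fifo_cc;          /* publish once, after the walk */
sq->cc = sqcc;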
-1
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 285 285 286 286 mlx5_eswitch_set_rule_source_port(esw, spec, attr); 287 287 288 - spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS; 289 288 if (attr->outer_match_level != MLX5_MATCH_NONE) 290 289 spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS; 291 290
+16 -6
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
··· 177 177 memset(&src->vlan[1], 0, sizeof(src->vlan[1])); 178 178 } 179 179 180 + static bool mlx5_eswitch_offload_is_uplink_port(const struct mlx5_eswitch *esw, 181 + const struct mlx5_flow_spec *spec) 182 + { 183 + u32 port_mask, port_value; 184 + 185 + if (MLX5_CAP_ESW_FLOWTABLE(esw->dev, flow_source)) 186 + return spec->flow_context.flow_source == MLX5_VPORT_UPLINK; 187 + 188 + port_mask = MLX5_GET(fte_match_param, spec->match_criteria, 189 + misc_parameters.source_port); 190 + port_value = MLX5_GET(fte_match_param, spec->match_value, 191 + misc_parameters.source_port); 192 + return (port_mask & port_value & 0xffff) == MLX5_VPORT_UPLINK; 193 + } 194 + 180 195 bool 181 196 mlx5_eswitch_termtbl_required(struct mlx5_eswitch *esw, 182 197 struct mlx5_flow_act *flow_act, 183 198 struct mlx5_flow_spec *spec) 184 199 { 185 - u32 port_mask = MLX5_GET(fte_match_param, spec->match_criteria, 186 - misc_parameters.source_port); 187 - u32 port_value = MLX5_GET(fte_match_param, spec->match_value, 188 - misc_parameters.source_port); 189 - 190 200 if (!MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, termination_table)) 191 201 return false; 192 202 193 203 /* push vlan on RX */ 194 204 return (flow_act->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH) && 195 - ((port_mask & port_value) == MLX5_VPORT_UPLINK); 205 + mlx5_eswitch_offload_is_uplink_port(esw, spec); 196 206 } 197 207 198 208 struct mlx5_flow_handle *
+3 -1
drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c
··· 464 464 } 465 465 466 466 err = mlx5_vector2eqn(mdev, smp_processor_id(), &eqn, &irqn); 467 - if (err) 467 + if (err) { 468 + kvfree(in); 468 469 goto err_cqwq; 470 + } 469 471 470 472 cqc = MLX5_ADDR_OF(create_cq_in, in, cq_context); 471 473 MLX5_SET(cqc, cqc, log_cq_size, ilog2(cq_size));
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
··· 507 507 MLX5_SET(dest_format_struct, in_dests, 508 508 destination_eswitch_owner_vhca_id, 509 509 dst->dest_attr.vport.vhca_id); 510 - if (extended_dest) { 510 + if (extended_dest && 511 + dst->dest_attr.vport.pkt_reformat) { 511 512 MLX5_SET(dest_format_struct, in_dests, 512 513 packet_reformat, 513 514 !!(dst->dest_attr.vport.flags &
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/health.c
··· 572 572 return -ENOMEM; 573 573 err = mlx5_crdump_collect(dev, cr_data); 574 574 if (err) 575 - return err; 575 + goto free_data; 576 576 577 577 if (priv_ctx) { 578 578 struct mlx5_fw_reporter_ctx *fw_reporter_ctx = priv_ctx;
+2 -2
drivers/net/ethernet/mellanox/mlxsw/core.c
··· 1186 1186 if (err) 1187 1187 goto err_thermal_init; 1188 1188 1189 - if (mlxsw_driver->params_register && !reload) 1189 + if (mlxsw_driver->params_register) 1190 1190 devlink_params_publish(devlink); 1191 1191 1192 1192 return 0; ··· 1259 1259 return; 1260 1260 } 1261 1261 1262 - if (mlxsw_core->driver->params_unregister && !reload) 1262 + if (mlxsw_core->driver->params_unregister) 1263 1263 devlink_params_unpublish(devlink); 1264 1264 mlxsw_thermal_fini(mlxsw_core->thermal); 1265 1265 mlxsw_hwmon_fini(mlxsw_core->hwmon);
+9 -2
drivers/net/ethernet/mscc/ocelot.c
··· 261 261 port->pvid = vid; 262 262 263 263 /* Untagged egress vlan clasification */ 264 - if (untagged) 264 + if (untagged && port->vid != vid) { 265 + if (port->vid) { 266 + dev_err(ocelot->dev, 267 + "Port already has a native VLAN: %d\n", 268 + port->vid); 269 + return -EBUSY; 270 + } 265 271 port->vid = vid; 272 + } 266 273 267 274 ocelot_vlan_port_apply(ocelot, port); 268 275 ··· 941 934 static int ocelot_vlan_rx_add_vid(struct net_device *dev, __be16 proto, 942 935 u16 vid) 943 936 { 944 - return ocelot_vlan_vid_add(dev, vid, false, true); 937 + return ocelot_vlan_vid_add(dev, vid, false, false); 945 938 } 946 939 947 940 static int ocelot_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
-18
drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
··· 299 299 nfp_port_free(repr->port); 300 300 } 301 301 302 - static struct lock_class_key nfp_repr_netdev_xmit_lock_key; 303 - static struct lock_class_key nfp_repr_netdev_addr_lock_key; 304 - 305 - static void nfp_repr_set_lockdep_class_one(struct net_device *dev, 306 - struct netdev_queue *txq, 307 - void *_unused) 308 - { 309 - lockdep_set_class(&txq->_xmit_lock, &nfp_repr_netdev_xmit_lock_key); 310 - } 311 - 312 - static void nfp_repr_set_lockdep_class(struct net_device *dev) 313 - { 314 - lockdep_set_class(&dev->addr_list_lock, &nfp_repr_netdev_addr_lock_key); 315 - netdev_for_each_tx_queue(dev, nfp_repr_set_lockdep_class_one, NULL); 316 - } 317 - 318 302 int nfp_repr_init(struct nfp_app *app, struct net_device *netdev, 319 303 u32 cmsg_port_id, struct nfp_port *port, 320 304 struct net_device *pf_netdev) ··· 307 323 struct nfp_net *nn = netdev_priv(pf_netdev); 308 324 u32 repr_cap = nn->tlv_caps.repr_cap; 309 325 int err; 310 - 311 - nfp_repr_set_lockdep_class(netdev); 312 326 313 327 repr->port = port; 314 328 repr->dst = metadata_dst_alloc(0, METADATA_HW_PORT_MUX, GFP_KERNEL);
+2
drivers/net/ethernet/pensando/ionic/ionic_lif.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* Copyright(c) 2017 - 2019 Pensando Systems, Inc */ 3 3 4 + #include <linux/printk.h> 5 + #include <linux/dynamic_debug.h> 4 6 #include <linux/netdevice.h> 5 7 #include <linux/etherdevice.h> 6 8 #include <linux/rtnetlink.h>
+2
drivers/net/ethernet/pensando/ionic/ionic_main.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* Copyright(c) 2017 - 2019 Pensando Systems, Inc */ 3 3 4 + #include <linux/printk.h> 5 + #include <linux/dynamic_debug.h> 4 6 #include <linux/module.h> 5 7 #include <linux/netdevice.h> 6 8 #include <linux/utsname.h>
+21 -6
drivers/net/ethernet/qlogic/qed/qed_main.c
··· 67 67 #define QED_ROCE_QPS (8192) 68 68 #define QED_ROCE_DPIS (8) 69 69 #define QED_RDMA_SRQS QED_ROCE_QPS 70 - #define QED_NVM_CFG_SET_FLAGS 0xE 71 - #define QED_NVM_CFG_SET_PF_FLAGS 0x1E 72 70 #define QED_NVM_CFG_GET_FLAGS 0xA 73 71 #define QED_NVM_CFG_GET_PF_FLAGS 0x1A 72 + #define QED_NVM_CFG_MAX_ATTRS 50 74 73 75 74 static char version[] = 76 75 "QLogic FastLinQ 4xxxx Core Module qed " DRV_MODULE_VERSION "\n"; ··· 2254 2255 { 2255 2256 struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev); 2256 2257 u8 entity_id, len, buf[32]; 2258 + bool need_nvm_init = true; 2257 2259 struct qed_ptt *ptt; 2258 2260 u16 cfg_id, count; 2259 2261 int rc = 0, i; ··· 2271 2271 2272 2272 DP_VERBOSE(cdev, NETIF_MSG_DRV, 2273 2273 "Read config ids: num_attrs = %0d\n", count); 2274 - /* NVM CFG ID attributes */ 2275 - for (i = 0; i < count; i++) { 2274 + /* NVM CFG ID attributes. Start loop index from 1 to avoid additional 2275 + * arithmetic operations in the implementation. 2276 + */ 2277 + for (i = 1; i <= count; i++) { 2276 2278 cfg_id = *((u16 *)*data); 2277 2279 *data += 2; 2278 2280 entity_id = **data; ··· 2284 2282 memcpy(buf, *data, len); 2285 2283 *data += len; 2286 2284 2287 - flags = entity_id ? QED_NVM_CFG_SET_PF_FLAGS : 2288 - QED_NVM_CFG_SET_FLAGS; 2285 + flags = 0; 2286 + if (need_nvm_init) { 2287 + flags |= QED_NVM_CFG_OPTION_INIT; 2288 + need_nvm_init = false; 2289 + } 2290 + 2291 + /* Commit to flash and free the resources */ 2292 + if (!(i % QED_NVM_CFG_MAX_ATTRS) || i == count) { 2293 + flags |= QED_NVM_CFG_OPTION_COMMIT | 2294 + QED_NVM_CFG_OPTION_FREE; 2295 + need_nvm_init = true; 2296 + } 2297 + 2298 + if (entity_id) 2299 + flags |= QED_NVM_CFG_OPTION_ENTITY_SEL; 2289 2300 2290 2301 DP_VERBOSE(cdev, NETIF_MSG_DRV, 2291 2302 "cfg_id = %d entity = %d len = %d\n", cfg_id,
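The qed rework composes the NVM flag word per attribute instead of using the two removed constants: the first attribute after each commit carries QED_NVM_CFG_OPTION_INIT, every QED_NVM_CFG_MAX_ATTRS-th (or final) attribute adds COMMIT | FREE so at most 50 attributes are staged per flash transaction, and per-entity attributes add ENTITY_SEL. The 1-based loop index keeps the modulo test arithmetic-free:

bool need_nvm_init = true;
int i;

for (i = 1; i <= count; i++) {
        u32 flags = 0;

        if (need_nvm_init) {
                flags |= QED_NVM_CFG_OPTION_INIT;
                need_nvm_init = false;
        }
        /* commit to flash every QED_NVM_CFG_MAX_ATTRS attrs and on the last */
        if (!(i % QED_NVM_CFG_MAX_ATTRS) || i == count) {
                flags |= QED_NVM_CFG_OPTION_COMMIT | QED_NVM_CFG_OPTION_FREE;
                need_nvm_init = true;
        }
        if (entity_id)
                flags |= QED_NVM_CFG_OPTION_ENTITY_SEL;

        /* ... parse the attribute and hand it to the MFW with 'flags' ... */
}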
+1 -1
drivers/net/ethernet/qlogic/qed/qed_sriov.c
··· 2005 2005 (qed_iov_validate_active_txq(p_hwfn, vf))) { 2006 2006 vf->b_malicious = true; 2007 2007 DP_NOTICE(p_hwfn, 2008 - "VF [%02x] - considered malicious; Unable to stop RX/TX queuess\n", 2008 + "VF [%02x] - considered malicious; Unable to stop RX/TX queues\n", 2009 2009 vf->abs_vf_id); 2010 2010 status = PFVF_STATUS_MALICIOUS; 2011 2011 goto out;
+4
drivers/net/ethernet/realtek/r8169_main.c
··· 1029 1029 { 1030 1030 int value; 1031 1031 1032 + /* Work around issue with chip reporting wrong PHY ID */ 1033 + if (reg == MII_PHYSID2) 1034 + return 0xc912; 1035 + 1032 1036 r8168dp_2_mdio_start(tp); 1033 1037 1034 1038 value = r8169_mdio_read(tp, reg);
+1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 2995 2995 } else { 2996 2996 stmmac_set_desc_addr(priv, first, des); 2997 2997 tmp_pay_len = pay_len; 2998 + des += proto_hdr_len; 2998 2999 } 2999 3000 3000 3001 stmmac_tso_allocator(priv, des, tmp_pay_len, (nfrags == 0), queue);
+14 -1
drivers/net/fjes/fjes_main.c
··· 1237 1237 adapter->open_guard = false; 1238 1238 1239 1239 adapter->txrx_wq = alloc_workqueue(DRV_NAME "/txrx", WQ_MEM_RECLAIM, 0); 1240 + if (unlikely(!adapter->txrx_wq)) { 1241 + err = -ENOMEM; 1242 + goto err_free_netdev; 1243 + } 1244 + 1240 1245 adapter->control_wq = alloc_workqueue(DRV_NAME "/control", 1241 1246 WQ_MEM_RECLAIM, 0); 1247 + if (unlikely(!adapter->control_wq)) { 1248 + err = -ENOMEM; 1249 + goto err_free_txrx_wq; 1250 + } 1242 1251 1243 1252 INIT_WORK(&adapter->tx_stall_task, fjes_tx_stall_task); 1244 1253 INIT_WORK(&adapter->raise_intr_rxdata_task, ··· 1264 1255 hw->hw_res.irq = platform_get_irq(plat_dev, 0); 1265 1256 err = fjes_hw_init(&adapter->hw); 1266 1257 if (err) 1267 - goto err_free_netdev; 1258 + goto err_free_control_wq; 1268 1259 1269 1260 /* setup MAC address (02:00:00:00:00:[epid])*/ 1270 1261 netdev->dev_addr[0] = 2; ··· 1286 1277 1287 1278 err_hw_exit: 1288 1279 fjes_hw_exit(&adapter->hw); 1280 + err_free_control_wq: 1281 + destroy_workqueue(adapter->control_wq); 1282 + err_free_txrx_wq: 1283 + destroy_workqueue(adapter->txrx_wq); 1289 1284 err_free_netdev: 1290 1285 free_netdev(netdev); 1291 1286 err_out:
-22
drivers/net/hamradio/bpqether.c
··· 107 107 108 108 static LIST_HEAD(bpq_devices); 109 109 110 - /* 111 - * bpqether network devices are paired with ethernet devices below them, so 112 - * form a special "super class" of normal ethernet devices; split their locks 113 - * off into a separate class since they always nest. 114 - */ 115 - static struct lock_class_key bpq_netdev_xmit_lock_key; 116 - static struct lock_class_key bpq_netdev_addr_lock_key; 117 - 118 - static void bpq_set_lockdep_class_one(struct net_device *dev, 119 - struct netdev_queue *txq, 120 - void *_unused) 121 - { 122 - lockdep_set_class(&txq->_xmit_lock, &bpq_netdev_xmit_lock_key); 123 - } 124 - 125 - static void bpq_set_lockdep_class(struct net_device *dev) 126 - { 127 - lockdep_set_class(&dev->addr_list_lock, &bpq_netdev_addr_lock_key); 128 - netdev_for_each_tx_queue(dev, bpq_set_lockdep_class_one, NULL); 129 - } 130 - 131 110 /* ------------------------------------------------------------------------ */ 132 111 133 112 ··· 477 498 err = register_netdevice(ndev); 478 499 if (err) 479 500 goto error; 480 - bpq_set_lockdep_class(ndev); 481 501 482 502 /* List protected by RTNL */ 483 503 list_add_rcu(&bpq->bpq_list, &bpq_devices);
+11 -4
drivers/net/hyperv/netvsc_drv.c
··· 982 982 if (netif_running(ndev)) { 983 983 ret = rndis_filter_open(nvdev); 984 984 if (ret) 985 - return ret; 985 + goto err; 986 986 987 987 rdev = nvdev->extension; 988 988 if (!rdev->link_state) ··· 990 990 } 991 991 992 992 return 0; 993 + 994 + err: 995 + netif_device_detach(ndev); 996 + 997 + rndis_filter_device_remove(hdev, nvdev); 998 + 999 + return ret; 993 1000 } 994 1001 995 1002 static int netvsc_set_channels(struct net_device *net, ··· 1814 1807 1815 1808 ret = rndis_filter_set_offload_params(ndev, nvdev, &offloads); 1816 1809 1817 - if (ret) 1810 + if (ret) { 1818 1811 features ^= NETIF_F_LRO; 1812 + ndev->features = features; 1813 + } 1819 1814 1820 1815 syncvf: 1821 1816 if (!vf_netdev) ··· 2343 2334 NETIF_F_HIGHDMA | NETIF_F_HW_VLAN_CTAG_TX | 2344 2335 NETIF_F_HW_VLAN_CTAG_RX; 2345 2336 net->vlan_features = net->features; 2346 - 2347 - netdev_lockdep_set_classes(net); 2348 2337 2349 2338 /* MTU range: 68 - 1500 or 65521 */ 2350 2339 net->min_mtu = NETVSC_MTU_MIN;
-2
drivers/net/ipvlan/ipvlan_main.c
··· 131 131 dev->gso_max_segs = phy_dev->gso_max_segs; 132 132 dev->hard_header_len = phy_dev->hard_header_len; 133 133 134 - netdev_lockdep_set_classes(dev); 135 - 136 134 ipvlan->pcpu_stats = netdev_alloc_pcpu_stats(struct ipvl_pcpu_stats); 137 135 if (!ipvlan->pcpu_stats) 138 136 return -ENOMEM;
-18
drivers/net/macsec.c
··· 267 267 struct pcpu_secy_stats __percpu *stats; 268 268 struct list_head secys; 269 269 struct gro_cells gro_cells; 270 - unsigned int nest_level; 271 270 }; 272 271 273 272 /** ··· 2749 2750 2750 2751 #define MACSEC_FEATURES \ 2751 2752 (NETIF_F_SG | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST) 2752 - static struct lock_class_key macsec_netdev_addr_lock_key; 2753 2753 2754 2754 static int macsec_dev_init(struct net_device *dev) 2755 2755 { ··· 2956 2958 return macsec_priv(dev)->real_dev->ifindex; 2957 2959 } 2958 2960 2959 - static int macsec_get_nest_level(struct net_device *dev) 2960 - { 2961 - return macsec_priv(dev)->nest_level; 2962 - } 2963 - 2964 2961 static const struct net_device_ops macsec_netdev_ops = { 2965 2962 .ndo_init = macsec_dev_init, 2966 2963 .ndo_uninit = macsec_dev_uninit, ··· 2969 2976 .ndo_start_xmit = macsec_start_xmit, 2970 2977 .ndo_get_stats64 = macsec_get_stats64, 2971 2978 .ndo_get_iflink = macsec_get_iflink, 2972 - .ndo_get_lock_subclass = macsec_get_nest_level, 2973 2979 }; 2974 2980 2975 2981 static const struct device_type macsec_type = { ··· 2993 3001 static void macsec_free_netdev(struct net_device *dev) 2994 3002 { 2995 3003 struct macsec_dev *macsec = macsec_priv(dev); 2996 - struct net_device *real_dev = macsec->real_dev; 2997 3004 2998 3005 free_percpu(macsec->stats); 2999 3006 free_percpu(macsec->secy.tx_sc.stats); 3000 3007 3001 - dev_put(real_dev); 3002 3008 } 3003 3009 3004 3010 static void macsec_setup(struct net_device *dev) ··· 3250 3260 err = register_netdevice(dev); 3251 3261 if (err < 0) 3252 3262 return err; 3253 - 3254 - dev_hold(real_dev); 3255 - 3256 - macsec->nest_level = dev_get_nest_level(real_dev) + 1; 3257 - netdev_lockdep_set_classes(dev); 3258 - lockdep_set_class_and_subclass(&dev->addr_list_lock, 3259 - &macsec_netdev_addr_lock_key, 3260 - macsec_get_nest_level(dev)); 3261 3263 3262 3264 err = netdev_upper_dev_link(real_dev, dev, extack); 3263 3265 if (err < 0)
-19
drivers/net/macvlan.c
··· 852 852 * "super class" of normal network devices; split their locks off into a 853 853 * separate class since they always nest. 854 854 */ 855 - static struct lock_class_key macvlan_netdev_addr_lock_key; 856 - 857 855 #define ALWAYS_ON_OFFLOADS \ 858 856 (NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_GSO_SOFTWARE | \ 859 857 NETIF_F_GSO_ROBUST | NETIF_F_GSO_ENCAP_ALL) ··· 866 868 867 869 #define MACVLAN_STATE_MASK \ 868 870 ((1<<__LINK_STATE_NOCARRIER) | (1<<__LINK_STATE_DORMANT)) 869 - 870 - static int macvlan_get_nest_level(struct net_device *dev) 871 - { 872 - return ((struct macvlan_dev *)netdev_priv(dev))->nest_level; 873 - } 874 - 875 - static void macvlan_set_lockdep_class(struct net_device *dev) 876 - { 877 - netdev_lockdep_set_classes(dev); 878 - lockdep_set_class_and_subclass(&dev->addr_list_lock, 879 - &macvlan_netdev_addr_lock_key, 880 - macvlan_get_nest_level(dev)); 881 - } 882 871 883 872 static int macvlan_init(struct net_device *dev) 884 873 { ··· 884 899 dev->gso_max_size = lowerdev->gso_max_size; 885 900 dev->gso_max_segs = lowerdev->gso_max_segs; 886 901 dev->hard_header_len = lowerdev->hard_header_len; 887 - 888 - macvlan_set_lockdep_class(dev); 889 902 890 903 vlan->pcpu_stats = netdev_alloc_pcpu_stats(struct vlan_pcpu_stats); 891 904 if (!vlan->pcpu_stats) ··· 1144 1161 .ndo_fdb_add = macvlan_fdb_add, 1145 1162 .ndo_fdb_del = macvlan_fdb_del, 1146 1163 .ndo_fdb_dump = ndo_dflt_fdb_dump, 1147 - .ndo_get_lock_subclass = macvlan_get_nest_level, 1148 1164 #ifdef CONFIG_NET_POLL_CONTROLLER 1149 1165 .ndo_poll_controller = macvlan_dev_poll_controller, 1150 1166 .ndo_netpoll_setup = macvlan_dev_netpoll_setup, ··· 1427 1445 vlan->dev = dev; 1428 1446 vlan->port = port; 1429 1447 vlan->set_features = MACVLAN_FEATURES; 1430 - vlan->nest_level = dev_get_nest_level(lowerdev) + 1; 1431 1448 1432 1449 vlan->mode = MACVLAN_MODE_VEPA; 1433 1450 if (data && data[IFLA_MACVLAN_MODE])
+5
drivers/net/netdevsim/dev.c
··· 806 806 { 807 807 struct nsim_dev_port *nsim_dev_port, *tmp; 808 808 809 + mutex_lock(&nsim_dev->port_list_lock); 809 810 list_for_each_entry_safe(nsim_dev_port, tmp, 810 811 &nsim_dev->port_list, list) 811 812 __nsim_dev_port_del(nsim_dev_port); 813 + mutex_unlock(&nsim_dev->port_list_lock); 812 814 } 813 815 814 816 int nsim_dev_probe(struct nsim_bus_dev *nsim_bus_dev) ··· 824 822 return PTR_ERR(nsim_dev); 825 823 dev_set_drvdata(&nsim_bus_dev->dev, nsim_dev); 826 824 825 + mutex_lock(&nsim_dev->port_list_lock); 827 826 for (i = 0; i < nsim_bus_dev->port_count; i++) { 828 827 err = __nsim_dev_port_add(nsim_dev, i); 829 828 if (err) 830 829 goto err_port_del_all; 831 830 } 831 + mutex_unlock(&nsim_dev->port_list_lock); 832 832 return 0; 833 833 834 834 err_port_del_all: 835 + mutex_unlock(&nsim_dev->port_list_lock); 835 836 nsim_dev_port_del_all(nsim_dev); 836 837 nsim_dev_destroy(nsim_dev); 837 838 return err;
+16
drivers/net/phy/phylink.c
··· 87 87 phylink_printk(KERN_WARNING, pl, fmt, ##__VA_ARGS__) 88 88 #define phylink_info(pl, fmt, ...) \ 89 89 phylink_printk(KERN_INFO, pl, fmt, ##__VA_ARGS__) 90 + #if defined(CONFIG_DYNAMIC_DEBUG) 90 91 #define phylink_dbg(pl, fmt, ...) \ 92 + do { \ 93 + if ((pl)->config->type == PHYLINK_NETDEV) \ 94 + netdev_dbg((pl)->netdev, fmt, ##__VA_ARGS__); \ 95 + else if ((pl)->config->type == PHYLINK_DEV) \ 96 + dev_dbg((pl)->dev, fmt, ##__VA_ARGS__); \ 97 + } while (0) 98 + #elif defined(DEBUG) 99 + #define phylink_dbg(pl, fmt, ...) \ 91 100 phylink_printk(KERN_DEBUG, pl, fmt, ##__VA_ARGS__) 101 + #else 102 + #define phylink_dbg(pl, fmt, ...) \ 103 + ({ \ 104 + if (0) \ 105 + phylink_printk(KERN_DEBUG, pl, fmt, ##__VA_ARGS__); \ 106 + }) 107 + #endif 92 108 93 109 /** 94 110 * phylink_set_port_modes() - set the port type modes in the ethtool mask
+1
drivers/net/phy/smsc.c
··· 327 327 .name = "SMSC LAN8740", 328 328 329 329 /* PHY_BASIC_FEATURES */ 330 + .flags = PHY_RST_AFTER_CLK_EN, 330 331 331 332 .probe = smsc_phy_probe, 332 333
-2
drivers/net/ppp/ppp_generic.c
··· 1324 1324 { 1325 1325 struct ppp *ppp; 1326 1326 1327 - netdev_lockdep_set_classes(dev); 1328 - 1329 1327 ppp = netdev_priv(dev); 1330 1328 /* Let the netdevice take a reference on the ppp file. This ensures 1331 1329 * that ppp_destroy_interface() won't run before the device gets
+12 -4
drivers/net/team/team.c
··· 1615 1615 int err; 1616 1616 1617 1617 team->dev = dev; 1618 - mutex_init(&team->lock); 1619 1618 team_set_no_mode(team); 1620 1619 1621 1620 team->pcpu_stats = netdev_alloc_pcpu_stats(struct team_pcpu_stats); ··· 1641 1642 goto err_options_register; 1642 1643 netif_carrier_off(dev); 1643 1644 1644 - netdev_lockdep_set_classes(dev); 1645 + lockdep_register_key(&team->team_lock_key); 1646 + __mutex_init(&team->lock, "team->team_lock_key", &team->team_lock_key); 1645 1647 1646 1648 return 0; 1647 1649 ··· 1673 1673 team_queue_override_fini(team); 1674 1674 mutex_unlock(&team->lock); 1675 1675 netdev_change_features(dev); 1676 + lockdep_unregister_key(&team->team_lock_key); 1676 1677 } 1677 1678 1678 1679 static void team_destructor(struct net_device *dev) ··· 1977 1976 err = team_port_del(team, port_dev); 1978 1977 mutex_unlock(&team->lock); 1979 1978 1980 - if (!err) 1981 - netdev_change_features(dev); 1979 + if (err) 1980 + return err; 1981 + 1982 + if (netif_is_team_master(port_dev)) { 1983 + lockdep_unregister_key(&team->team_lock_key); 1984 + lockdep_register_key(&team->team_lock_key); 1985 + lockdep_set_class(&team->lock, &team->team_lock_key); 1986 + } 1987 + netdev_change_features(dev); 1982 1988 1983 1989 return err; 1984 1990 }
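team.c swaps the shared netdev lock classes for a per-instance lockdep key, registered at device setup and re-registered when a port that is itself a team master is removed, so stacking one team above another no longer produces false lockdep recursion reports. The idiom, reduced to its core (type names illustrative):

#include <linux/lockdep.h>
#include <linux/mutex.h>

struct stacked_obj {
        struct mutex lock;
        struct lock_class_key lock_key;         /* one lock class per instance */
};

static void stacked_obj_init(struct stacked_obj *o)
{
        lockdep_register_key(&o->lock_key);
        __mutex_init(&o->lock, "stacked_obj->lock", &o->lock_key);
}

static void stacked_obj_fini(struct stacked_obj *o)
{
        mutex_destroy(&o->lock);
        lockdep_unregister_key(&o->lock_key);
}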
+7
drivers/net/usb/cdc_ether.c
··· 787 787 .driver_info = 0, 788 788 }, 789 789 790 + /* ThinkPad USB-C Dock Gen 2 (based on Realtek RTL8153) */ 791 + { 792 + USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0xa387, USB_CLASS_COMM, 793 + USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 794 + .driver_info = 0, 795 + }, 796 + 790 797 /* NVIDIA Tegra USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */ 791 798 { 792 799 USB_DEVICE_AND_INTERFACE_INFO(NVIDIA_VENDOR_ID, 0x09ff, USB_CLASS_COMM,
+4 -1
drivers/net/usb/lan78xx.c
··· 1264 1264 netif_dbg(dev, link, dev->net, "PHY INTR: 0x%08x\n", intdata); 1265 1265 lan78xx_defer_kevent(dev, EVENT_LINK_RESET); 1266 1266 1267 - if (dev->domain_data.phyirq > 0) 1267 + if (dev->domain_data.phyirq > 0) { 1268 + local_irq_disable(); 1268 1269 generic_handle_irq(dev->domain_data.phyirq); 1270 + local_irq_enable(); 1271 + } 1269 1272 } else 1270 1273 netdev_warn(dev->net, 1271 1274 "unexpected interrupt: 0x%08x\n", intdata);
+1
drivers/net/usb/r8152.c
··· 5755 5755 {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205)}, 5756 5756 {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x720c)}, 5757 5757 {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7214)}, 5758 + {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0xa387)}, 5758 5759 {REALTEK_USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041)}, 5759 5760 {REALTEK_USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff)}, 5760 5761 {REALTEK_USB_DEVICE(VENDOR_ID_TPLINK, 0x0601)},
-1
drivers/net/vrf.c
··· 865 865 866 866 /* similarly, oper state is irrelevant; set to up to avoid confusion */ 867 867 dev->operstate = IF_OPER_UP; 868 - netdev_lockdep_set_classes(dev); 869 868 return 0; 870 869 871 870 out_rth:
+50 -12
drivers/net/vxlan.c
··· 2487 2487 vni = tunnel_id_to_key32(info->key.tun_id); 2488 2488 ifindex = 0; 2489 2489 dst_cache = &info->dst_cache; 2490 - if (info->options_len && 2491 - info->key.tun_flags & TUNNEL_VXLAN_OPT) 2490 + if (info->key.tun_flags & TUNNEL_VXLAN_OPT) { 2491 + if (info->options_len < sizeof(*md)) 2492 + goto drop; 2492 2493 md = ip_tunnel_info_opts(info); 2494 + } 2493 2495 ttl = info->key.ttl; 2494 2496 tos = info->key.tos; 2495 2497 label = info->key.label; ··· 3568 3566 { 3569 3567 struct vxlan_net *vn = net_generic(net, vxlan_net_id); 3570 3568 struct vxlan_dev *vxlan = netdev_priv(dev); 3569 + struct net_device *remote_dev = NULL; 3571 3570 struct vxlan_fdb *f = NULL; 3572 3571 bool unregister = false; 3572 + struct vxlan_rdst *dst; 3573 3573 int err; 3574 3574 3575 + dst = &vxlan->default_dst; 3575 3576 err = vxlan_dev_configure(net, dev, conf, false, extack); 3576 3577 if (err) 3577 3578 return err; ··· 3582 3577 dev->ethtool_ops = &vxlan_ethtool_ops; 3583 3578 3584 3579 /* create an fdb entry for a valid default destination */ 3585 - if (!vxlan_addr_any(&vxlan->default_dst.remote_ip)) { 3580 + if (!vxlan_addr_any(&dst->remote_ip)) { 3586 3581 err = vxlan_fdb_create(vxlan, all_zeros_mac, 3587 - &vxlan->default_dst.remote_ip, 3582 + &dst->remote_ip, 3588 3583 NUD_REACHABLE | NUD_PERMANENT, 3589 3584 vxlan->cfg.dst_port, 3590 - vxlan->default_dst.remote_vni, 3591 - vxlan->default_dst.remote_vni, 3592 - vxlan->default_dst.remote_ifindex, 3585 + dst->remote_vni, 3586 + dst->remote_vni, 3587 + dst->remote_ifindex, 3593 3588 NTF_SELF, &f); 3594 3589 if (err) 3595 3590 return err; ··· 3600 3595 goto errout; 3601 3596 unregister = true; 3602 3597 3598 + if (dst->remote_ifindex) { 3599 + remote_dev = __dev_get_by_index(net, dst->remote_ifindex); 3600 + if (!remote_dev) 3601 + goto errout; 3602 + 3603 + err = netdev_upper_dev_link(remote_dev, dev, extack); 3604 + if (err) 3605 + goto errout; 3606 + } 3607 + 3603 3608 err = rtnl_configure_link(dev, NULL); 3604 3609 if (err) 3605 - goto errout; 3610 + goto unlink; 3606 3611 3607 3612 if (f) { 3608 - vxlan_fdb_insert(vxlan, all_zeros_mac, 3609 - vxlan->default_dst.remote_vni, f); 3613 + vxlan_fdb_insert(vxlan, all_zeros_mac, dst->remote_vni, f); 3610 3614 3611 3615 /* notify default fdb entry */ 3612 3616 err = vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f), 3613 3617 RTM_NEWNEIGH, true, extack); 3614 3618 if (err) { 3615 3619 vxlan_fdb_destroy(vxlan, f, false, false); 3620 + if (remote_dev) 3621 + netdev_upper_dev_unlink(remote_dev, dev); 3616 3622 goto unregister; 3617 3623 } 3618 3624 } 3619 3625 3620 3626 list_add(&vxlan->next, &vn->vxlan_list); 3627 + if (remote_dev) 3628 + dst->remote_dev = remote_dev; 3621 3629 return 0; 3622 - 3630 + unlink: 3631 + if (remote_dev) 3632 + netdev_upper_dev_unlink(remote_dev, dev); 3623 3633 errout: 3624 3634 /* unregister_netdevice() destroys the default FDB entry with deletion 3625 3635 * notification. 
But the addition notification was not sent yet, so ··· 3952 3932 struct netlink_ext_ack *extack) 3953 3933 { 3954 3934 struct vxlan_dev *vxlan = netdev_priv(dev); 3955 - struct vxlan_rdst *dst = &vxlan->default_dst; 3956 3935 struct net_device *lowerdev; 3957 3936 struct vxlan_config conf; 3937 + struct vxlan_rdst *dst; 3958 3938 int err; 3959 3939 3940 + dst = &vxlan->default_dst; 3960 3941 err = vxlan_nl2conf(tb, data, dev, &conf, true, extack); 3961 3942 if (err) 3962 3943 return err; 3963 3944 3964 3945 err = vxlan_config_validate(vxlan->net, &conf, &lowerdev, 3965 3946 vxlan, extack); 3947 + if (err) 3948 + return err; 3949 + 3950 + if (dst->remote_dev == lowerdev) 3951 + lowerdev = NULL; 3952 + 3953 + err = netdev_adjacent_change_prepare(dst->remote_dev, lowerdev, dev, 3954 + extack); 3966 3955 if (err) 3967 3956 return err; 3968 3957 ··· 3991 3962 NTF_SELF, true, extack); 3992 3963 if (err) { 3993 3964 spin_unlock_bh(&vxlan->hash_lock[hash_index]); 3965 + netdev_adjacent_change_abort(dst->remote_dev, 3966 + lowerdev, dev); 3994 3967 return err; 3995 3968 } 3996 3969 } ··· 4010 3979 if (conf.age_interval != vxlan->cfg.age_interval) 4011 3980 mod_timer(&vxlan->age_timer, jiffies); 4012 3981 3982 + netdev_adjacent_change_commit(dst->remote_dev, lowerdev, dev); 3983 + if (lowerdev && lowerdev != dst->remote_dev) { 3984 + dst->remote_dev = lowerdev; 3985 + netdev_update_lockdep_key(lowerdev); 3986 + } 4013 3987 vxlan_config_apply(dev, &conf, lowerdev, vxlan->net, true); 4014 3988 return 0; 4015 3989 } ··· 4027 3991 4028 3992 list_del(&vxlan->next); 4029 3993 unregister_netdevice_queue(dev, head); 3994 + if (vxlan->default_dst.remote_dev) 3995 + netdev_upper_dev_unlink(vxlan->default_dst.remote_dev, dev); 4030 3996 } 4031 3997 4032 3998 static size_t vxlan_get_size(const struct net_device *dev)
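The vxlan changelink path above relies on the three-phase netdev_adjacent_change_{prepare,commit,abort}() helpers: prepare validates and preallocates the adjacency to the new lower device while the old link is still intact, commit makes the swap once the rest of the reconfiguration has succeeded, and abort rolls everything back on an intermediate failure. A condensed kernel-style sketch of the calling pattern (my_reconfigure() is a hypothetical stand-in for the driver-specific step):

#include <linux/netdevice.h>

static int swap_lower(struct net_device *dev, struct net_device *old_low,
                      struct net_device *new_low,
                      struct netlink_ext_ack *extack)
{
        int err;

        /* phase 1: validate and preallocate, old adjacency still live */
        err = netdev_adjacent_change_prepare(old_low, new_low, dev, extack);
        if (err)
                return err;

        err = my_reconfigure(dev); /* hypothetical driver-specific work */
        if (err) {
                /* failure: restore the old adjacency */
                netdev_adjacent_change_abort(old_low, new_low, dev);
                return err;
        }

        /* phase 2: commit the new lower device */
        netdev_adjacent_change_commit(old_low, new_low, dev);
        return 0;
}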
+1 -1
drivers/net/wimax/i2400m/op-rfkill.c
··· 127 127 "%d\n", result); 128 128 result = 0; 129 129 error_cmd: 130 - kfree(cmd); 131 130 kfree_skb(ack_skb); 132 131 error_msg_to_dev: 133 132 error_alloc: 134 133 d_fnend(4, dev, "(wimax_dev %p state %d) = %d\n", 135 134 wimax_dev, state, result); 135 + kfree(cmd); 136 136 return result; 137 137 } 138 138
+20 -2
drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
··· 520 520 } __packed; 521 521 522 522 /** 523 - * struct iwl_scan_config 523 + * struct iwl_scan_config_v1 524 524 * @flags: enum scan_config_flags 525 525 * @tx_chains: valid_tx antenna - ANT_* definitions 526 526 * @rx_chains: valid_rx antenna - ANT_* definitions ··· 552 552 #define SCAN_LB_LMAC_IDX 0 553 553 #define SCAN_HB_LMAC_IDX 1 554 554 555 - struct iwl_scan_config { 555 + struct iwl_scan_config_v2 { 556 556 __le32 flags; 557 557 __le32 tx_chains; 558 558 __le32 rx_chains; ··· 564 564 u8 bcast_sta_id; 565 565 u8 channel_flags; 566 566 u8 channel_array[]; 567 + } __packed; /* SCAN_CONFIG_DB_CMD_API_S_2 */ 568 + 569 + /** 570 + * struct iwl_scan_config 571 + * @enable_cam_mode: whether to enable CAM mode. 572 + * @enable_promiscouos_mode: whether to enable promiscuous mode 573 + * @bcast_sta_id: the index of the station in the fw 574 + * @reserved: reserved 575 + * @tx_chains: valid_tx antenna - ANT_* definitions 576 + * @rx_chains: valid_rx antenna - ANT_* definitions 577 + */ 578 + struct iwl_scan_config { 579 + u8 enable_cam_mode; 580 + u8 enable_promiscouos_mode; 581 + u8 bcast_sta_id; 582 + u8 reserved; 583 + __le32 tx_chains; 584 + __le32 rx_chains; 567 585 } __packed; /* SCAN_CONFIG_DB_CMD_API_S_3 */ 568 586 569 587 /**
+3
drivers/net/wireless/intel/iwlwifi/fw/file.h
··· 288 288 * STA_CONTEXT_DOT11AX_API_S 289 289 * @IWL_UCODE_TLV_CAPA_SAR_TABLE_VER: This ucode supports different sar 290 290 * version tables. 291 + * @IWL_UCODE_TLV_API_REDUCED_SCAN_CONFIG: This ucode supports v3 of 292 + * SCAN_CONFIG_DB_CMD_API_S. 291 293 * 292 294 * @NUM_IWL_UCODE_TLV_API: number of bits used 293 295 */ ··· 323 321 IWL_UCODE_TLV_API_WOWLAN_TCP_SYN_WAKE = (__force iwl_ucode_tlv_api_t)53, 324 322 IWL_UCODE_TLV_API_FTM_RTT_ACCURACY = (__force iwl_ucode_tlv_api_t)54, 325 323 IWL_UCODE_TLV_API_SAR_TABLE_VER = (__force iwl_ucode_tlv_api_t)55, 324 + IWL_UCODE_TLV_API_REDUCED_SCAN_CONFIG = (__force iwl_ucode_tlv_api_t)56, 326 325 IWL_UCODE_TLV_API_ADWELL_HB_DEF_N_AP = (__force iwl_ucode_tlv_api_t)57, 327 326 IWL_UCODE_TLV_API_SCAN_EXT_CHAN_VER = (__force iwl_ucode_tlv_api_t)58, 328 327
+1
drivers/net/wireless/intel/iwlwifi/iwl-csr.h
··· 279 279 * Indicates MAC is entering a power-saving sleep power-down. 280 280 * Not a good time to access device-internal resources. 281 281 */ 282 + #define CSR_GP_CNTRL_REG_FLAG_INIT_DONE (0x00000004) 282 283 #define CSR_GP_CNTRL_REG_FLAG_GOING_TO_SLEEP (0x00000010) 283 284 #define CSR_GP_CNTRL_REG_FLAG_XTAL_ON (0x00000400) 284 285
+5
drivers/net/wireless/intel/iwlwifi/iwl-prph.h
··· 449 449 #define PERSISTENCE_BIT BIT(12) 450 450 #define PREG_WFPM_ACCESS BIT(12) 451 451 452 + #define HPM_HIPM_GEN_CFG 0xA03458 453 + #define HPM_HIPM_GEN_CFG_CR_PG_EN BIT(0) 454 + #define HPM_HIPM_GEN_CFG_CR_SLP_EN BIT(1) 455 + #define HPM_HIPM_GEN_CFG_CR_FORCE_ACTIVE BIT(10) 456 + 452 457 #define UREG_DOORBELL_TO_ISR6 0xA05C04 453 458 #define UREG_DOORBELL_TO_ISR6_NMI_BIT BIT(0) 454 459 #define UREG_DOORBELL_TO_ISR6_SUSPEND BIT(18)
+6
drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
··· 1405 1405 IWL_UCODE_TLV_API_SCAN_EXT_CHAN_VER); 1406 1406 } 1407 1407 1408 + static inline bool iwl_mvm_is_reduced_config_scan_supported(struct iwl_mvm *mvm) 1409 + { 1410 + return fw_has_api(&mvm->fw->ucode_capa, 1411 + IWL_UCODE_TLV_API_REDUCED_SCAN_CONFIG); 1412 + } 1413 + 1408 1414 static inline bool iwl_mvm_has_new_rx_stats_api(struct iwl_mvm *mvm) 1409 1415 { 1410 1416 return fw_has_api(&mvm->fw->ucode_capa,
+32 -8
drivers/net/wireless/intel/iwlwifi/mvm/scan.c
··· 1137 1137 iwl_mvm_fill_channels(mvm, cfg->channel_array, max_channels); 1138 1138 } 1139 1139 1140 - static void iwl_mvm_fill_scan_config(struct iwl_mvm *mvm, void *config, 1141 - u32 flags, u8 channel_flags, 1142 - u32 max_channels) 1140 + static void iwl_mvm_fill_scan_config_v2(struct iwl_mvm *mvm, void *config, 1141 + u32 flags, u8 channel_flags, 1142 + u32 max_channels) 1143 1143 { 1144 - struct iwl_scan_config *cfg = config; 1144 + struct iwl_scan_config_v2 *cfg = config; 1145 1145 1146 1146 cfg->flags = cpu_to_le32(flags); 1147 1147 cfg->tx_chains = cpu_to_le32(iwl_mvm_get_valid_tx_ant(mvm)); ··· 1185 1185 iwl_mvm_fill_channels(mvm, cfg->channel_array, max_channels); 1186 1186 } 1187 1187 1188 - int iwl_mvm_config_scan(struct iwl_mvm *mvm) 1188 + static int iwl_mvm_legacy_config_scan(struct iwl_mvm *mvm) 1189 1189 { 1190 1190 void *cfg; 1191 1191 int ret, cmd_size; ··· 1217 1217 } 1218 1218 1219 1219 if (iwl_mvm_cdb_scan_api(mvm)) 1220 - cmd_size = sizeof(struct iwl_scan_config); 1220 + cmd_size = sizeof(struct iwl_scan_config_v2); 1221 1221 else 1222 1222 cmd_size = sizeof(struct iwl_scan_config_v1); 1223 1223 cmd_size += num_channels; ··· 1254 1254 flags |= (iwl_mvm_is_scan_fragmented(hb_type)) ? 1255 1255 SCAN_CONFIG_FLAG_SET_LMAC2_FRAGMENTED : 1256 1256 SCAN_CONFIG_FLAG_CLEAR_LMAC2_FRAGMENTED; 1257 - iwl_mvm_fill_scan_config(mvm, cfg, flags, channel_flags, 1258 - num_channels); 1257 + iwl_mvm_fill_scan_config_v2(mvm, cfg, flags, channel_flags, 1258 + num_channels); 1259 1259 } else { 1260 1260 iwl_mvm_fill_scan_config_v1(mvm, cfg, flags, channel_flags, 1261 1261 num_channels); ··· 1275 1275 1276 1276 kfree(cfg); 1277 1277 return ret; 1278 + } 1279 + 1280 + int iwl_mvm_config_scan(struct iwl_mvm *mvm) 1281 + { 1282 + struct iwl_scan_config cfg; 1283 + struct iwl_host_cmd cmd = { 1284 + .id = iwl_cmd_id(SCAN_CFG_CMD, IWL_ALWAYS_LONG_GROUP, 0), 1285 + .len[0] = sizeof(cfg), 1286 + .data[0] = &cfg, 1287 + .dataflags[0] = IWL_HCMD_DFL_NOCOPY, 1288 + }; 1289 + 1290 + if (!iwl_mvm_is_reduced_config_scan_supported(mvm)) 1291 + return iwl_mvm_legacy_config_scan(mvm); 1292 + 1293 + memset(&cfg, 0, sizeof(cfg)); 1294 + 1295 + cfg.bcast_sta_id = mvm->aux_sta.sta_id; 1296 + cfg.tx_chains = cpu_to_le32(iwl_mvm_get_valid_tx_ant(mvm)); 1297 + cfg.rx_chains = cpu_to_le32(iwl_mvm_scan_rx_ant(mvm)); 1298 + 1299 + IWL_DEBUG_SCAN(mvm, "Sending UMAC scan config\n"); 1300 + 1301 + return iwl_mvm_send_cmd(mvm, &cmd); 1278 1302 } 1279 1303 1280 1304 static int iwl_mvm_scan_uid_by_status(struct iwl_mvm *mvm, int status)
+84 -58
drivers/net/wireless/intel/iwlwifi/mvm/sta.c
··· 1482 1482 mvm_sta->sta_id, i); 1483 1483 txq_id = iwl_mvm_tvqm_enable_txq(mvm, mvm_sta->sta_id, 1484 1484 i, wdg); 1485 + /* 1486 + * on failures, just set it to IWL_MVM_INVALID_QUEUE 1487 + * to try again later, we have no other good way of 1488 + * failing here 1489 + */ 1490 + if (txq_id < 0) 1491 + txq_id = IWL_MVM_INVALID_QUEUE; 1485 1492 tid_data->txq_id = txq_id; 1486 1493 1487 1494 /* ··· 1957 1950 sta->sta_id = IWL_MVM_INVALID_STA; 1958 1951 } 1959 1952 1960 - static void iwl_mvm_enable_aux_snif_queue(struct iwl_mvm *mvm, u16 *queue, 1953 + static void iwl_mvm_enable_aux_snif_queue(struct iwl_mvm *mvm, u16 queue, 1961 1954 u8 sta_id, u8 fifo) 1962 1955 { 1963 1956 unsigned int wdg_timeout = iwlmvm_mod_params.tfd_q_hang_detect ? 1964 1957 mvm->trans->trans_cfg->base_params->wd_timeout : 1965 1958 IWL_WATCHDOG_DISABLED; 1959 + struct iwl_trans_txq_scd_cfg cfg = { 1960 + .fifo = fifo, 1961 + .sta_id = sta_id, 1962 + .tid = IWL_MAX_TID_COUNT, 1963 + .aggregate = false, 1964 + .frame_limit = IWL_FRAME_LIMIT, 1965 + }; 1966 1966 1967 - if (iwl_mvm_has_new_tx_api(mvm)) { 1968 - int tvqm_queue = 1969 - iwl_mvm_tvqm_enable_txq(mvm, sta_id, 1970 - IWL_MAX_TID_COUNT, 1971 - wdg_timeout); 1972 - *queue = tvqm_queue; 1973 - } else { 1974 - struct iwl_trans_txq_scd_cfg cfg = { 1975 - .fifo = fifo, 1976 - .sta_id = sta_id, 1977 - .tid = IWL_MAX_TID_COUNT, 1978 - .aggregate = false, 1979 - .frame_limit = IWL_FRAME_LIMIT, 1980 - }; 1967 + WARN_ON(iwl_mvm_has_new_tx_api(mvm)); 1981 1968 1982 - iwl_mvm_enable_txq(mvm, NULL, *queue, 0, &cfg, wdg_timeout); 1969 + iwl_mvm_enable_txq(mvm, NULL, queue, 0, &cfg, wdg_timeout); 1970 + } 1971 + 1972 + static int iwl_mvm_enable_aux_snif_queue_tvqm(struct iwl_mvm *mvm, u8 sta_id) 1973 + { 1974 + unsigned int wdg_timeout = iwlmvm_mod_params.tfd_q_hang_detect ? 
1975 + mvm->trans->trans_cfg->base_params->wd_timeout : 1976 + IWL_WATCHDOG_DISABLED; 1977 + 1978 + WARN_ON(!iwl_mvm_has_new_tx_api(mvm)); 1979 + 1980 + return iwl_mvm_tvqm_enable_txq(mvm, sta_id, IWL_MAX_TID_COUNT, 1981 + wdg_timeout); 1982 + } 1983 + 1984 + static int iwl_mvm_add_int_sta_with_queue(struct iwl_mvm *mvm, int macidx, 1985 + int maccolor, 1986 + struct iwl_mvm_int_sta *sta, 1987 + u16 *queue, int fifo) 1988 + { 1989 + int ret; 1990 + 1991 + /* Map queue to fifo - needs to happen before adding station */ 1992 + if (!iwl_mvm_has_new_tx_api(mvm)) 1993 + iwl_mvm_enable_aux_snif_queue(mvm, *queue, sta->sta_id, fifo); 1994 + 1995 + ret = iwl_mvm_add_int_sta_common(mvm, sta, NULL, macidx, maccolor); 1996 + if (ret) { 1997 + if (!iwl_mvm_has_new_tx_api(mvm)) 1998 + iwl_mvm_disable_txq(mvm, NULL, *queue, 1999 + IWL_MAX_TID_COUNT, 0); 2000 + return ret; 1983 2001 } 2002 + 2003 + /* 2004 + * For 22000 firmware and on we cannot add queue to a station unknown 2005 + * to firmware so enable queue here - after the station was added 2006 + */ 2007 + if (iwl_mvm_has_new_tx_api(mvm)) { 2008 + int txq; 2009 + 2010 + txq = iwl_mvm_enable_aux_snif_queue_tvqm(mvm, sta->sta_id); 2011 + if (txq < 0) { 2012 + iwl_mvm_rm_sta_common(mvm, sta->sta_id); 2013 + return txq; 2014 + } 2015 + 2016 + *queue = txq; 2017 + } 2018 + 2019 + return 0; 1984 2020 } 1985 2021 1986 2022 int iwl_mvm_add_aux_sta(struct iwl_mvm *mvm) ··· 2039 1989 if (ret) 2040 1990 return ret; 2041 1991 2042 - /* Map Aux queue to fifo - needs to happen before adding Aux station */ 2043 - if (!iwl_mvm_has_new_tx_api(mvm)) 2044 - iwl_mvm_enable_aux_snif_queue(mvm, &mvm->aux_queue, 2045 - mvm->aux_sta.sta_id, 2046 - IWL_MVM_TX_FIFO_MCAST); 2047 - 2048 - ret = iwl_mvm_add_int_sta_common(mvm, &mvm->aux_sta, NULL, 2049 - MAC_INDEX_AUX, 0); 1992 + ret = iwl_mvm_add_int_sta_with_queue(mvm, MAC_INDEX_AUX, 0, 1993 + &mvm->aux_sta, &mvm->aux_queue, 1994 + IWL_MVM_TX_FIFO_MCAST); 2050 1995 if (ret) { 2051 1996 iwl_mvm_dealloc_int_sta(mvm, &mvm->aux_sta); 2052 1997 return ret; 2053 1998 } 2054 - 2055 - /* 2056 - * For 22000 firmware and on we cannot add queue to a station unknown 2057 - * to firmware so enable queue here - after the station was added 2058 - */ 2059 - if (iwl_mvm_has_new_tx_api(mvm)) 2060 - iwl_mvm_enable_aux_snif_queue(mvm, &mvm->aux_queue, 2061 - mvm->aux_sta.sta_id, 2062 - IWL_MVM_TX_FIFO_MCAST); 2063 1999 2064 2000 return 0; 2065 2001 } ··· 2053 2017 int iwl_mvm_add_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif) 2054 2018 { 2055 2019 struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); 2056 - int ret; 2057 2020 2058 2021 lockdep_assert_held(&mvm->mutex); 2059 2022 2060 - /* Map snif queue to fifo - must happen before adding snif station */ 2061 - if (!iwl_mvm_has_new_tx_api(mvm)) 2062 - iwl_mvm_enable_aux_snif_queue(mvm, &mvm->snif_queue, 2063 - mvm->snif_sta.sta_id, 2023 + return iwl_mvm_add_int_sta_with_queue(mvm, mvmvif->id, mvmvif->color, 2024 + &mvm->snif_sta, &mvm->snif_queue, 2064 2025 IWL_MVM_TX_FIFO_BE); 2065 - 2066 - ret = iwl_mvm_add_int_sta_common(mvm, &mvm->snif_sta, vif->addr, 2067 - mvmvif->id, 0); 2068 - if (ret) 2069 - return ret; 2070 - 2071 - /* 2072 - * For 22000 firmware and on we cannot add queue to a station unknown 2073 - * to firmware so enable queue here - after the station was added 2074 - */ 2075 - if (iwl_mvm_has_new_tx_api(mvm)) 2076 - iwl_mvm_enable_aux_snif_queue(mvm, &mvm->snif_queue, 2077 - mvm->snif_sta.sta_id, 2078 - IWL_MVM_TX_FIFO_BE); 2079 - 2080 - return 0; 2081 2026 } 
2082 2027 2083 2028 int iwl_mvm_rm_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif) ··· 2150 2133 queue = iwl_mvm_tvqm_enable_txq(mvm, bsta->sta_id, 2151 2134 IWL_MAX_TID_COUNT, 2152 2135 wdg_timeout); 2136 + if (queue < 0) { 2137 + iwl_mvm_rm_sta_common(mvm, bsta->sta_id); 2138 + return queue; 2139 + } 2153 2140 2154 2141 if (vif->type == NL80211_IFTYPE_AP || 2155 2142 vif->type == NL80211_IFTYPE_ADHOC) ··· 2328 2307 } 2329 2308 ret = iwl_mvm_add_int_sta_common(mvm, msta, maddr, 2330 2309 mvmvif->id, mvmvif->color); 2331 - if (ret) { 2332 - iwl_mvm_dealloc_int_sta(mvm, msta); 2333 - return ret; 2334 - } 2310 + if (ret) 2311 + goto err; 2335 2312 2336 2313 /* 2337 2314 * Enable cab queue after the ADD_STA command is sent. ··· 2342 2323 int queue = iwl_mvm_tvqm_enable_txq(mvm, msta->sta_id, 2343 2324 0, 2344 2325 timeout); 2326 + if (queue < 0) { 2327 + ret = queue; 2328 + goto err; 2329 + } 2345 2330 mvmvif->cab_queue = queue; 2346 2331 } else if (!fw_has_api(&mvm->fw->ucode_capa, 2347 2332 IWL_UCODE_TLV_API_STA_TYPE)) ··· 2353 2330 timeout); 2354 2331 2355 2332 return 0; 2333 + err: 2334 + iwl_mvm_dealloc_int_sta(mvm, msta); 2335 + return ret; 2356 2336 } 2357 2337 2358 2338 static int __iwl_mvm_remove_sta_key(struct iwl_mvm *mvm, u8 sta_id,
+63 -66
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 573 573 {IWL_PCI_DEVICE(0x2526, 0x0034, iwl9560_2ac_cfg)}, 574 574 {IWL_PCI_DEVICE(0x2526, 0x0038, iwl9560_2ac_160_cfg)}, 575 575 {IWL_PCI_DEVICE(0x2526, 0x003C, iwl9560_2ac_160_cfg)}, 576 - {IWL_PCI_DEVICE(0x2526, 0x0060, iwl9460_2ac_cfg)}, 577 - {IWL_PCI_DEVICE(0x2526, 0x0064, iwl9460_2ac_cfg)}, 578 - {IWL_PCI_DEVICE(0x2526, 0x00A0, iwl9460_2ac_cfg)}, 579 - {IWL_PCI_DEVICE(0x2526, 0x00A4, iwl9460_2ac_cfg)}, 576 + {IWL_PCI_DEVICE(0x2526, 0x0060, iwl9461_2ac_cfg_soc)}, 577 + {IWL_PCI_DEVICE(0x2526, 0x0064, iwl9461_2ac_cfg_soc)}, 578 + {IWL_PCI_DEVICE(0x2526, 0x00A0, iwl9462_2ac_cfg_soc)}, 579 + {IWL_PCI_DEVICE(0x2526, 0x00A4, iwl9462_2ac_cfg_soc)}, 580 580 {IWL_PCI_DEVICE(0x2526, 0x0210, iwl9260_2ac_cfg)}, 581 581 {IWL_PCI_DEVICE(0x2526, 0x0214, iwl9260_2ac_cfg)}, 582 582 {IWL_PCI_DEVICE(0x2526, 0x0230, iwl9560_2ac_cfg)}, 583 583 {IWL_PCI_DEVICE(0x2526, 0x0234, iwl9560_2ac_cfg)}, 584 584 {IWL_PCI_DEVICE(0x2526, 0x0238, iwl9560_2ac_cfg)}, 585 585 {IWL_PCI_DEVICE(0x2526, 0x023C, iwl9560_2ac_cfg)}, 586 - {IWL_PCI_DEVICE(0x2526, 0x0260, iwl9460_2ac_cfg)}, 586 + {IWL_PCI_DEVICE(0x2526, 0x0260, iwl9461_2ac_cfg_soc)}, 587 587 {IWL_PCI_DEVICE(0x2526, 0x0264, iwl9461_2ac_cfg_soc)}, 588 - {IWL_PCI_DEVICE(0x2526, 0x02A0, iwl9460_2ac_cfg)}, 589 - {IWL_PCI_DEVICE(0x2526, 0x02A4, iwl9460_2ac_cfg)}, 588 + {IWL_PCI_DEVICE(0x2526, 0x02A0, iwl9462_2ac_cfg_soc)}, 589 + {IWL_PCI_DEVICE(0x2526, 0x02A4, iwl9462_2ac_cfg_soc)}, 590 590 {IWL_PCI_DEVICE(0x2526, 0x1010, iwl9260_2ac_cfg)}, 591 591 {IWL_PCI_DEVICE(0x2526, 0x1030, iwl9560_2ac_cfg)}, 592 592 {IWL_PCI_DEVICE(0x2526, 0x1210, iwl9260_2ac_cfg)}, ··· 603 603 {IWL_PCI_DEVICE(0x2526, 0x401C, iwl9260_2ac_160_cfg)}, 604 604 {IWL_PCI_DEVICE(0x2526, 0x4030, iwl9560_2ac_160_cfg)}, 605 605 {IWL_PCI_DEVICE(0x2526, 0x4034, iwl9560_2ac_160_cfg_soc)}, 606 - {IWL_PCI_DEVICE(0x2526, 0x40A4, iwl9460_2ac_cfg)}, 606 + {IWL_PCI_DEVICE(0x2526, 0x40A4, iwl9462_2ac_cfg_soc)}, 607 607 {IWL_PCI_DEVICE(0x2526, 0x4234, iwl9560_2ac_cfg_soc)}, 608 608 {IWL_PCI_DEVICE(0x2526, 0x42A4, iwl9462_2ac_cfg_soc)}, 609 609 {IWL_PCI_DEVICE(0x2526, 0x6010, iwl9260_2ac_160_cfg)}, ··· 618 618 {IWL_PCI_DEVICE(0x271B, 0x0210, iwl9160_2ac_cfg)}, 619 619 {IWL_PCI_DEVICE(0x271B, 0x0214, iwl9260_2ac_cfg)}, 620 620 {IWL_PCI_DEVICE(0x271C, 0x0214, iwl9260_2ac_cfg)}, 621 - {IWL_PCI_DEVICE(0x2720, 0x0034, iwl9560_2ac_160_cfg)}, 622 - {IWL_PCI_DEVICE(0x2720, 0x0038, iwl9560_2ac_160_cfg)}, 623 - {IWL_PCI_DEVICE(0x2720, 0x003C, iwl9560_2ac_160_cfg)}, 624 - {IWL_PCI_DEVICE(0x2720, 0x0060, iwl9461_2ac_cfg_soc)}, 625 - {IWL_PCI_DEVICE(0x2720, 0x0064, iwl9461_2ac_cfg_soc)}, 626 - {IWL_PCI_DEVICE(0x2720, 0x00A0, iwl9462_2ac_cfg_soc)}, 627 - {IWL_PCI_DEVICE(0x2720, 0x00A4, iwl9462_2ac_cfg_soc)}, 628 - {IWL_PCI_DEVICE(0x2720, 0x0230, iwl9560_2ac_cfg)}, 629 - {IWL_PCI_DEVICE(0x2720, 0x0234, iwl9560_2ac_cfg)}, 630 - {IWL_PCI_DEVICE(0x2720, 0x0238, iwl9560_2ac_cfg)}, 631 - {IWL_PCI_DEVICE(0x2720, 0x023C, iwl9560_2ac_cfg)}, 632 - {IWL_PCI_DEVICE(0x2720, 0x0260, iwl9461_2ac_cfg_soc)}, 633 - {IWL_PCI_DEVICE(0x2720, 0x0264, iwl9461_2ac_cfg_soc)}, 634 - {IWL_PCI_DEVICE(0x2720, 0x02A0, iwl9462_2ac_cfg_soc)}, 635 - {IWL_PCI_DEVICE(0x2720, 0x02A4, iwl9462_2ac_cfg_soc)}, 636 - {IWL_PCI_DEVICE(0x2720, 0x1010, iwl9260_2ac_cfg)}, 637 - {IWL_PCI_DEVICE(0x2720, 0x1030, iwl9560_2ac_cfg_soc)}, 638 - {IWL_PCI_DEVICE(0x2720, 0x1210, iwl9260_2ac_cfg)}, 639 - {IWL_PCI_DEVICE(0x2720, 0x1551, iwl9560_killer_s_2ac_cfg_soc)}, 640 - {IWL_PCI_DEVICE(0x2720, 0x1552, iwl9560_killer_2ac_cfg_soc)}, 641 - {IWL_PCI_DEVICE(0x2720, 0x2030, 
iwl9560_2ac_160_cfg_soc)}, 642 - {IWL_PCI_DEVICE(0x2720, 0x2034, iwl9560_2ac_160_cfg_soc)}, 643 - {IWL_PCI_DEVICE(0x2720, 0x4030, iwl9560_2ac_160_cfg)}, 644 - {IWL_PCI_DEVICE(0x2720, 0x4034, iwl9560_2ac_160_cfg_soc)}, 645 - {IWL_PCI_DEVICE(0x2720, 0x40A4, iwl9462_2ac_cfg_soc)}, 646 - {IWL_PCI_DEVICE(0x2720, 0x4234, iwl9560_2ac_cfg_soc)}, 647 - {IWL_PCI_DEVICE(0x2720, 0x42A4, iwl9462_2ac_cfg_soc)}, 648 621 649 - {IWL_PCI_DEVICE(0x30DC, 0x0030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)}, 650 - {IWL_PCI_DEVICE(0x30DC, 0x0034, iwl9560_2ac_cfg_qu_b0_jf_b0)}, 651 - {IWL_PCI_DEVICE(0x30DC, 0x0038, iwl9560_2ac_160_cfg_qu_b0_jf_b0)}, 652 - {IWL_PCI_DEVICE(0x30DC, 0x003C, iwl9560_2ac_160_cfg_qu_b0_jf_b0)}, 653 - {IWL_PCI_DEVICE(0x30DC, 0x0060, iwl9461_2ac_cfg_qu_b0_jf_b0)}, 654 - {IWL_PCI_DEVICE(0x30DC, 0x0064, iwl9461_2ac_cfg_qu_b0_jf_b0)}, 655 - {IWL_PCI_DEVICE(0x30DC, 0x00A0, iwl9462_2ac_cfg_qu_b0_jf_b0)}, 656 - {IWL_PCI_DEVICE(0x30DC, 0x00A4, iwl9462_2ac_cfg_qu_b0_jf_b0)}, 657 - {IWL_PCI_DEVICE(0x30DC, 0x0230, iwl9560_2ac_cfg_qu_b0_jf_b0)}, 658 - {IWL_PCI_DEVICE(0x30DC, 0x0234, iwl9560_2ac_cfg_qu_b0_jf_b0)}, 659 - {IWL_PCI_DEVICE(0x30DC, 0x0238, iwl9560_2ac_cfg_qu_b0_jf_b0)}, 660 - {IWL_PCI_DEVICE(0x30DC, 0x023C, iwl9560_2ac_cfg_qu_b0_jf_b0)}, 661 - {IWL_PCI_DEVICE(0x30DC, 0x0260, iwl9461_2ac_cfg_qu_b0_jf_b0)}, 662 - {IWL_PCI_DEVICE(0x30DC, 0x0264, iwl9461_2ac_cfg_qu_b0_jf_b0)}, 663 - {IWL_PCI_DEVICE(0x30DC, 0x02A0, iwl9462_2ac_cfg_qu_b0_jf_b0)}, 664 - {IWL_PCI_DEVICE(0x30DC, 0x02A4, iwl9462_2ac_cfg_qu_b0_jf_b0)}, 665 - {IWL_PCI_DEVICE(0x30DC, 0x1030, iwl9560_2ac_cfg_qu_b0_jf_b0)}, 666 - {IWL_PCI_DEVICE(0x30DC, 0x1551, killer1550s_2ac_cfg_qu_b0_jf_b0)}, 667 - {IWL_PCI_DEVICE(0x30DC, 0x1552, killer1550i_2ac_cfg_qu_b0_jf_b0)}, 668 - {IWL_PCI_DEVICE(0x30DC, 0x2030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)}, 669 - {IWL_PCI_DEVICE(0x30DC, 0x2034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)}, 670 - {IWL_PCI_DEVICE(0x30DC, 0x4030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)}, 671 - {IWL_PCI_DEVICE(0x30DC, 0x4034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)}, 672 - {IWL_PCI_DEVICE(0x30DC, 0x40A4, iwl9462_2ac_cfg_qu_b0_jf_b0)}, 673 - {IWL_PCI_DEVICE(0x30DC, 0x4234, iwl9560_2ac_cfg_qu_b0_jf_b0)}, 674 - {IWL_PCI_DEVICE(0x30DC, 0x42A4, iwl9462_2ac_cfg_qu_b0_jf_b0)}, 622 + {IWL_PCI_DEVICE(0x2720, 0x0034, iwl9560_2ac_cfg_qu_b0_jf_b0)}, 623 + {IWL_PCI_DEVICE(0x2720, 0x0038, iwl9560_2ac_160_cfg_qu_b0_jf_b0)}, 624 + {IWL_PCI_DEVICE(0x2720, 0x003C, iwl9560_2ac_160_cfg_qu_b0_jf_b0)}, 625 + {IWL_PCI_DEVICE(0x2720, 0x0060, iwl9461_2ac_cfg_qu_b0_jf_b0)}, 626 + {IWL_PCI_DEVICE(0x2720, 0x0064, iwl9461_2ac_cfg_qu_b0_jf_b0)}, 627 + {IWL_PCI_DEVICE(0x2720, 0x00A0, iwl9462_2ac_cfg_qu_b0_jf_b0)}, 628 + {IWL_PCI_DEVICE(0x2720, 0x00A4, iwl9462_2ac_cfg_qu_b0_jf_b0)}, 629 + {IWL_PCI_DEVICE(0x2720, 0x0230, iwl9560_2ac_cfg_qu_b0_jf_b0)}, 630 + {IWL_PCI_DEVICE(0x2720, 0x0234, iwl9560_2ac_cfg_qu_b0_jf_b0)}, 631 + {IWL_PCI_DEVICE(0x2720, 0x0238, iwl9560_2ac_cfg_qu_b0_jf_b0)}, 632 + {IWL_PCI_DEVICE(0x2720, 0x023C, iwl9560_2ac_cfg_qu_b0_jf_b0)}, 633 + {IWL_PCI_DEVICE(0x2720, 0x0260, iwl9461_2ac_cfg_qu_b0_jf_b0)}, 634 + {IWL_PCI_DEVICE(0x2720, 0x0264, iwl9461_2ac_cfg_qu_b0_jf_b0)}, 635 + {IWL_PCI_DEVICE(0x2720, 0x02A0, iwl9462_2ac_cfg_qu_b0_jf_b0)}, 636 + {IWL_PCI_DEVICE(0x2720, 0x02A4, iwl9462_2ac_cfg_qu_b0_jf_b0)}, 637 + {IWL_PCI_DEVICE(0x2720, 0x1030, iwl9560_2ac_cfg_qu_b0_jf_b0)}, 638 + {IWL_PCI_DEVICE(0x2720, 0x1551, killer1550s_2ac_cfg_qu_b0_jf_b0)}, 639 + {IWL_PCI_DEVICE(0x2720, 0x1552, killer1550i_2ac_cfg_qu_b0_jf_b0)}, 640 + {IWL_PCI_DEVICE(0x2720, 0x2030, 
iwl9560_2ac_160_cfg_qu_b0_jf_b0)}, 641 + {IWL_PCI_DEVICE(0x2720, 0x2034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)}, 642 + {IWL_PCI_DEVICE(0x2720, 0x4030, iwl9560_2ac_160_cfg_qu_b0_jf_b0)}, 643 + {IWL_PCI_DEVICE(0x2720, 0x4034, iwl9560_2ac_160_cfg_qu_b0_jf_b0)}, 644 + {IWL_PCI_DEVICE(0x2720, 0x40A4, iwl9462_2ac_cfg_qu_b0_jf_b0)}, 645 + {IWL_PCI_DEVICE(0x2720, 0x4234, iwl9560_2ac_cfg_qu_b0_jf_b0)}, 646 + {IWL_PCI_DEVICE(0x2720, 0x42A4, iwl9462_2ac_cfg_qu_b0_jf_b0)}, 647 + 648 + {IWL_PCI_DEVICE(0x30DC, 0x0030, iwl9560_2ac_160_cfg_soc)}, 649 + {IWL_PCI_DEVICE(0x30DC, 0x0034, iwl9560_2ac_cfg_soc)}, 650 + {IWL_PCI_DEVICE(0x30DC, 0x0038, iwl9560_2ac_160_cfg_soc)}, 651 + {IWL_PCI_DEVICE(0x30DC, 0x003C, iwl9560_2ac_160_cfg_soc)}, 652 + {IWL_PCI_DEVICE(0x30DC, 0x0060, iwl9460_2ac_cfg_soc)}, 653 + {IWL_PCI_DEVICE(0x30DC, 0x0064, iwl9461_2ac_cfg_soc)}, 654 + {IWL_PCI_DEVICE(0x30DC, 0x00A0, iwl9462_2ac_cfg_soc)}, 655 + {IWL_PCI_DEVICE(0x30DC, 0x00A4, iwl9462_2ac_cfg_soc)}, 656 + {IWL_PCI_DEVICE(0x30DC, 0x0230, iwl9560_2ac_cfg_soc)}, 657 + {IWL_PCI_DEVICE(0x30DC, 0x0234, iwl9560_2ac_cfg_soc)}, 658 + {IWL_PCI_DEVICE(0x30DC, 0x0238, iwl9560_2ac_cfg_soc)}, 659 + {IWL_PCI_DEVICE(0x30DC, 0x023C, iwl9560_2ac_cfg_soc)}, 660 + {IWL_PCI_DEVICE(0x30DC, 0x0260, iwl9461_2ac_cfg_soc)}, 661 + {IWL_PCI_DEVICE(0x30DC, 0x0264, iwl9461_2ac_cfg_soc)}, 662 + {IWL_PCI_DEVICE(0x30DC, 0x02A0, iwl9462_2ac_cfg_soc)}, 663 + {IWL_PCI_DEVICE(0x30DC, 0x02A4, iwl9462_2ac_cfg_soc)}, 664 + {IWL_PCI_DEVICE(0x30DC, 0x1010, iwl9260_2ac_cfg)}, 665 + {IWL_PCI_DEVICE(0x30DC, 0x1030, iwl9560_2ac_cfg_soc)}, 666 + {IWL_PCI_DEVICE(0x30DC, 0x1210, iwl9260_2ac_cfg)}, 667 + {IWL_PCI_DEVICE(0x30DC, 0x1551, iwl9560_killer_s_2ac_cfg_soc)}, 668 + {IWL_PCI_DEVICE(0x30DC, 0x1552, iwl9560_killer_2ac_cfg_soc)}, 669 + {IWL_PCI_DEVICE(0x30DC, 0x2030, iwl9560_2ac_160_cfg_soc)}, 670 + {IWL_PCI_DEVICE(0x30DC, 0x2034, iwl9560_2ac_160_cfg_soc)}, 671 + {IWL_PCI_DEVICE(0x30DC, 0x4030, iwl9560_2ac_160_cfg_soc)}, 672 + {IWL_PCI_DEVICE(0x30DC, 0x4034, iwl9560_2ac_160_cfg_soc)}, 673 + {IWL_PCI_DEVICE(0x30DC, 0x40A4, iwl9462_2ac_cfg_soc)}, 674 + {IWL_PCI_DEVICE(0x30DC, 0x4234, iwl9560_2ac_cfg_soc)}, 675 + {IWL_PCI_DEVICE(0x30DC, 0x42A4, iwl9462_2ac_cfg_soc)}, 675 676 676 677 {IWL_PCI_DEVICE(0x31DC, 0x0030, iwl9560_2ac_160_cfg_shared_clk)}, 677 678 {IWL_PCI_DEVICE(0x31DC, 0x0034, iwl9560_2ac_cfg_shared_clk)}, ··· 1068 1067 } 1069 1068 } else if (CSR_HW_RF_ID_TYPE_CHIP_ID(iwl_trans->hw_rf_id) == 1070 1069 CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR) && 1071 - ((cfg != &iwl_ax200_cfg_cc && 1072 - cfg != &killer1650x_2ax_cfg && 1073 - cfg != &killer1650w_2ax_cfg && 1074 - cfg != &iwl_ax201_cfg_quz_hr) || 1075 - iwl_trans->hw_rev == CSR_HW_REV_TYPE_QNJ_B0)) { 1070 + iwl_trans->hw_rev == CSR_HW_REV_TYPE_QNJ_B0) { 1076 1071 u32 hw_status; 1077 1072 1078 1073 hw_status = iwl_read_prph(iwl_trans, UMAG_GEN_HW_STATUS);
+25
drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
··· 57 57 #include "internal.h" 58 58 #include "fw/dbg.h" 59 59 60 + static int iwl_pcie_gen2_force_power_gating(struct iwl_trans *trans) 61 + { 62 + iwl_set_bits_prph(trans, HPM_HIPM_GEN_CFG, 63 + HPM_HIPM_GEN_CFG_CR_FORCE_ACTIVE); 64 + udelay(20); 65 + iwl_set_bits_prph(trans, HPM_HIPM_GEN_CFG, 66 + HPM_HIPM_GEN_CFG_CR_PG_EN | 67 + HPM_HIPM_GEN_CFG_CR_SLP_EN); 68 + udelay(20); 69 + iwl_clear_bits_prph(trans, HPM_HIPM_GEN_CFG, 70 + HPM_HIPM_GEN_CFG_CR_FORCE_ACTIVE); 71 + 72 + iwl_trans_sw_reset(trans); 73 + iwl_clear_bit(trans, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_INIT_DONE); 74 + 75 + return 0; 76 + } 77 + 60 78 /* 61 79 * Start up NIC's basic functionality after it has been reset 62 80 * (e.g. after platform boot, or shutdown via iwl_pcie_apm_stop()) ··· 109 91 CSR_HW_IF_CONFIG_REG_BIT_HAP_WAKE_L1A); 110 92 111 93 iwl_pcie_apm_config(trans); 94 + 95 + if (trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000 && 96 + trans->cfg->integrated) { 97 + ret = iwl_pcie_gen2_force_power_gating(trans); 98 + if (ret) 99 + return ret; 100 + } 112 101 113 102 ret = iwl_finish_nic_init(trans, trans->trans_cfg); 114 103 if (ret)
-25
drivers/net/wireless/intersil/hostap/hostap_hw.c
··· 3041 3041 } 3042 3042 } 3043 3043 3044 - 3045 - /* 3046 - * HostAP uses two layers of net devices, where the inner 3047 - * layer gets called all the time from the outer layer. 3048 - * This is a natural nesting, which needs a split lock type. 3049 - */ 3050 - static struct lock_class_key hostap_netdev_xmit_lock_key; 3051 - static struct lock_class_key hostap_netdev_addr_lock_key; 3052 - 3053 - static void prism2_set_lockdep_class_one(struct net_device *dev, 3054 - struct netdev_queue *txq, 3055 - void *_unused) 3056 - { 3057 - lockdep_set_class(&txq->_xmit_lock, 3058 - &hostap_netdev_xmit_lock_key); 3059 - } 3060 - 3061 - static void prism2_set_lockdep_class(struct net_device *dev) 3062 - { 3063 - lockdep_set_class(&dev->addr_list_lock, 3064 - &hostap_netdev_addr_lock_key); 3065 - netdev_for_each_tx_queue(dev, prism2_set_lockdep_class_one, NULL); 3066 - } 3067 - 3068 3044 static struct net_device * 3069 3045 prism2_init_local_data(struct prism2_helper_functions *funcs, int card_idx, 3070 3046 struct device *sdev) ··· 3199 3223 if (ret >= 0) 3200 3224 ret = register_netdevice(dev); 3201 3225 3202 - prism2_set_lockdep_class(dev); 3203 3226 rtnl_unlock(); 3204 3227 if (ret < 0) { 3205 3228 printk(KERN_WARNING "%s: register netdevice failed!\n",
+2
drivers/net/wireless/mediatek/mt76/Makefile
··· 8 8 mmio.o util.o trace.o dma.o mac80211.o debugfs.o eeprom.o \ 9 9 tx.o agg-rx.o mcu.o 10 10 11 + mt76-$(CONFIG_PCI) += pci.o 12 + 11 13 mt76-usb-y := usb.o usb_trace.o 12 14 13 15 CFLAGS_trace.o := -I$(src)
+4 -2
drivers/net/wireless/mediatek/mt76/dma.c
··· 53 53 u32 ctrl; 54 54 int i, idx = -1; 55 55 56 - if (txwi) 56 + if (txwi) { 57 57 q->entry[q->head].txwi = DMA_DUMMY_DATA; 58 + q->entry[q->head].skip_buf0 = true; 59 + } 58 60 59 61 for (i = 0; i < nbufs; i += 2, buf += 2) { 60 62 u32 buf0 = buf[0].addr, buf1 = 0; ··· 99 97 __le32 __ctrl = READ_ONCE(q->desc[idx].ctrl); 100 98 u32 ctrl = le32_to_cpu(__ctrl); 101 99 102 - if (!e->txwi || !e->skb) { 100 + if (!e->skip_buf0) { 103 101 __le32 addr = READ_ONCE(q->desc[idx].buf0); 104 102 u32 len = FIELD_GET(MT_DMA_CTL_SD_LEN0, ctrl); 105 103
+4 -2
drivers/net/wireless/mediatek/mt76/mt76.h
··· 93 93 struct urb *urb; 94 94 }; 95 95 enum mt76_txq_id qid; 96 - bool schedule; 97 - bool done; 96 + bool skip_buf0:1; 97 + bool schedule:1; 98 + bool done:1; 98 99 }; 99 100 100 101 struct mt76_queue_regs { ··· 579 578 #define mt76_poll_msec(dev, ...) __mt76_poll_msec(&((dev)->mt76), __VA_ARGS__) 580 579 581 580 void mt76_mmio_init(struct mt76_dev *dev, void __iomem *regs); 581 + void mt76_pci_disable_aspm(struct pci_dev *pdev); 582 582 583 583 static inline u16 mt76_chip(struct mt76_dev *dev) 584 584 {
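Converting the bools in mt76_queue_entry to one-bit bitfields lets the new skip_buf0 flag be added without growing the structure: the three flags typically share a single byte instead of taking a byte each. A runnable sketch of the size difference (field names mirror the struct, but the surrounding members are omitted):

#include <stdio.h>
#include <stdbool.h>

struct plain_flags {
        bool skip_buf0;
        bool schedule;
        bool done;
};

struct packed_flags {
        bool skip_buf0:1;
        bool schedule:1;
        bool done:1; /* all three share one byte of storage */
};

int main(void)
{
        printf("plain: %zu bytes, bitfields: %zu bytes\n",
               sizeof(struct plain_flags), sizeof(struct packed_flags));
        return 0;
}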
+2
drivers/net/wireless/mediatek/mt76/mt76x2/pci.c
··· 81 81 /* RG_SSUSB_CDR_BR_PE1D = 0x3 */ 82 82 mt76_rmw_field(dev, 0x15c58, 0x3 << 6, 0x3); 83 83 84 + mt76_pci_disable_aspm(pdev); 85 + 84 86 return 0; 85 87 86 88 error:
+46
drivers/net/wireless/mediatek/mt76/pci.c
··· 1 + // SPDX-License-Identifier: ISC 2 + /* 3 + * Copyright (C) 2019 Lorenzo Bianconi <lorenzo@kernel.org> 4 + */ 5 + 6 + #include <linux/pci.h> 7 + 8 + void mt76_pci_disable_aspm(struct pci_dev *pdev) 9 + { 10 + struct pci_dev *parent = pdev->bus->self; 11 + u16 aspm_conf, parent_aspm_conf = 0; 12 + 13 + pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &aspm_conf); 14 + aspm_conf &= PCI_EXP_LNKCTL_ASPMC; 15 + if (parent) { 16 + pcie_capability_read_word(parent, PCI_EXP_LNKCTL, 17 + &parent_aspm_conf); 18 + parent_aspm_conf &= PCI_EXP_LNKCTL_ASPMC; 19 + } 20 + 21 + if (!aspm_conf && (!parent || !parent_aspm_conf)) { 22 + /* aspm already disabled */ 23 + return; 24 + } 25 + 26 + dev_info(&pdev->dev, "disabling ASPM %s %s\n", 27 + (aspm_conf & PCI_EXP_LNKCTL_ASPM_L0S) ? "L0s" : "", 28 + (aspm_conf & PCI_EXP_LNKCTL_ASPM_L1) ? "L1" : ""); 29 + 30 + if (IS_ENABLED(CONFIG_PCIEASPM)) { 31 + int err; 32 + 33 + err = pci_disable_link_state(pdev, aspm_conf); 34 + if (!err) 35 + return; 36 + } 37 + 38 + /* both device and parent should have the same ASPM setting. 39 + * disable ASPM in downstream component first and then upstream. 40 + */ 41 + pcie_capability_clear_word(pdev, PCI_EXP_LNKCTL, aspm_conf); 42 + if (parent) 43 + pcie_capability_clear_word(parent, PCI_EXP_LNKCTL, 44 + aspm_conf); 45 + } 46 + EXPORT_SYMBOL_GPL(mt76_pci_disable_aspm);
+2 -1
drivers/net/wireless/realtek/rtlwifi/pci.c
··· 822 822 hdr = rtl_get_hdr(skb); 823 823 fc = rtl_get_fc(skb); 824 824 825 - if (!stats.crc && !stats.hwerror) { 825 + if (!stats.crc && !stats.hwerror && (skb->len > FCS_LEN)) { 826 826 memcpy(IEEE80211_SKB_RXCB(skb), &rx_status, 827 827 sizeof(rx_status)); 828 828 ··· 859 859 _rtl_pci_rx_to_mac80211(hw, skb, rx_status); 860 860 } 861 861 } else { 862 + /* drop packets with errors or those too short */ 862 863 dev_kfree_skb_any(skb); 863 864 } 864 865 new_trx_end:
+6
drivers/net/wireless/realtek/rtlwifi/ps.c
··· 754 754 return; 755 755 } else { 756 756 noa_num = (noa_len - 2) / 13; 757 + if (noa_num > P2P_MAX_NOA_NUM) 758 + noa_num = P2P_MAX_NOA_NUM; 759 + 757 760 } 758 761 noa_index = ie[3]; 759 762 if (rtlpriv->psc.p2p_ps_info.p2p_ps_mode == ··· 851 848 return; 852 849 } else { 853 850 noa_num = (noa_len - 2) / 13; 851 + if (noa_num > P2P_MAX_NOA_NUM) 852 + noa_num = P2P_MAX_NOA_NUM; 853 + 854 854 } 855 855 noa_index = ie[3]; 856 856 if (rtlpriv->psc.p2p_ps_info.p2p_ps_mode ==
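The added bounds check stops a malformed P2P NoA attribute from indexing past the fixed-size P2P_MAX_NOA_NUM arrays in the power-save state. The same clamp could equally be written with the kernel's min_t() helper, e.g. (illustrative one-liner, assuming noa_num keeps its existing type):

        noa_num = min_t(typeof(noa_num), (noa_len - 2) / 13, P2P_MAX_NOA_NUM);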
+52 -2
drivers/net/wireless/virt_wifi.c
··· 548 548 priv->is_connected = false; 549 549 priv->is_up = false; 550 550 INIT_DELAYED_WORK(&priv->connect, virt_wifi_connect_complete); 551 + __module_get(THIS_MODULE); 551 552 552 553 return 0; 553 554 unregister_netdev: ··· 579 578 netdev_upper_dev_unlink(priv->lowerdev, dev); 580 579 581 580 unregister_netdevice_queue(dev, head); 581 + module_put(THIS_MODULE); 582 582 583 583 /* Deleting the wiphy is handled in the module destructor. */ 584 584 } ··· 592 590 .priv_size = sizeof(struct virt_wifi_netdev_priv), 593 591 }; 594 592 593 + static bool netif_is_virt_wifi_dev(const struct net_device *dev) 594 + { 595 + return rcu_access_pointer(dev->rx_handler) == virt_wifi_rx_handler; 596 + } 597 + 598 + static int virt_wifi_event(struct notifier_block *this, unsigned long event, 599 + void *ptr) 600 + { 601 + struct net_device *lower_dev = netdev_notifier_info_to_dev(ptr); 602 + struct virt_wifi_netdev_priv *priv; 603 + struct net_device *upper_dev; 604 + LIST_HEAD(list_kill); 605 + 606 + if (!netif_is_virt_wifi_dev(lower_dev)) 607 + return NOTIFY_DONE; 608 + 609 + switch (event) { 610 + case NETDEV_UNREGISTER: 611 + priv = rtnl_dereference(lower_dev->rx_handler_data); 612 + if (!priv) 613 + return NOTIFY_DONE; 614 + 615 + upper_dev = priv->upperdev; 616 + 617 + upper_dev->rtnl_link_ops->dellink(upper_dev, &list_kill); 618 + unregister_netdevice_many(&list_kill); 619 + break; 620 + } 621 + 622 + return NOTIFY_DONE; 623 + } 624 + 625 + static struct notifier_block virt_wifi_notifier = { 626 + .notifier_call = virt_wifi_event, 627 + }; 628 + 595 629 /* Acquires and releases the rtnl lock. */ 596 630 static int __init virt_wifi_init_module(void) 597 631 { ··· 636 598 /* Guaranteed to be locally-administered and not multicast. */ 637 599 eth_random_addr(fake_router_bssid); 638 600 601 + err = register_netdevice_notifier(&virt_wifi_notifier); 602 + if (err) 603 + return err; 604 + 605 + err = -ENOMEM; 639 606 common_wiphy = virt_wifi_make_wiphy(); 640 607 if (!common_wiphy) 641 - return -ENOMEM; 608 + goto notifier; 642 609 643 610 err = rtnl_link_register(&virt_wifi_link_ops); 644 611 if (err) 645 - virt_wifi_destroy_wiphy(common_wiphy); 612 + goto destroy_wiphy; 646 613 614 + return 0; 615 + 616 + destroy_wiphy: 617 + virt_wifi_destroy_wiphy(common_wiphy); 618 + notifier: 619 + unregister_netdevice_notifier(&virt_wifi_notifier); 647 620 return err; 648 621 } ··· 664 615 /* Will delete any devices that depend on the wiphy. */ 665 616 rtnl_link_unregister(&virt_wifi_link_ops); 666 617 virt_wifi_destroy_wiphy(common_wiphy); 618 + unregister_netdevice_notifier(&virt_wifi_notifier); 667 619 } 668 620 669 621 module_init(virt_wifi_init_module);
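The new notifier closes a teardown hole: if the real lower device is unregistered while a virt_wifi device is still stacked on it, the NETDEV_UNREGISTER handler now tears down the upper device too, recognizing its own uppers by the installed rx_handler pointer, which doubles as an ownership tag. The __module_get()/module_put() pair additionally pins the module while any such device exists. A stripped-down kernel-style sketch of the notifier pattern (names are illustrative):

#include <linux/netdevice.h>
#include <linux/notifier.h>

static int my_netdev_event(struct notifier_block *nb, unsigned long event,
                           void *ptr)
{
        struct net_device *dev = netdev_notifier_info_to_dev(ptr);

        if (event == NETDEV_UNREGISTER)
                pr_info("%s is going away\n", dev->name);

        return NOTIFY_DONE;
}

static struct notifier_block my_notifier = {
        .notifier_call = my_netdev_event,
};

/* module init:  register_netdevice_notifier(&my_notifier);
 * module exit:  unregister_netdevice_notifier(&my_notifier); */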
+4 -5
drivers/nvme/host/multipath.c
··· 522 522 return 0; 523 523 } 524 524 525 - static int nvme_read_ana_log(struct nvme_ctrl *ctrl, bool groups_only) 525 + static int nvme_read_ana_log(struct nvme_ctrl *ctrl) 526 526 { 527 527 u32 nr_change_groups = 0; 528 528 int error; 529 529 530 530 mutex_lock(&ctrl->ana_lock); 531 - error = nvme_get_log(ctrl, NVME_NSID_ALL, NVME_LOG_ANA, 532 - groups_only ? NVME_ANA_LOG_RGO : 0, 531 + error = nvme_get_log(ctrl, NVME_NSID_ALL, NVME_LOG_ANA, 0, 533 532 ctrl->ana_log_buf, ctrl->ana_log_size, 0); 534 533 if (error) { 535 534 dev_warn(ctrl->device, "Failed to get ANA log: %d\n", error); ··· 564 565 { 565 566 struct nvme_ctrl *ctrl = container_of(work, struct nvme_ctrl, ana_work); 566 567 567 - nvme_read_ana_log(ctrl, false); 568 + nvme_read_ana_log(ctrl); 568 569 } 569 570 570 571 static void nvme_anatt_timeout(struct timer_list *t) ··· 714 715 goto out; 715 716 } 716 717 717 - error = nvme_read_ana_log(ctrl, true); 718 + error = nvme_read_ana_log(ctrl); 718 719 if (error) 719 720 goto out_free_ana_log_buf; 720 721 return 0;
+1 -1
drivers/nvme/host/tcp.c
··· 2219 2219 struct nvme_tcp_queue *queue = hctx->driver_data; 2220 2220 struct sock *sk = queue->sock->sk; 2221 2221 2222 - if (sk_can_busy_loop(sk) && skb_queue_empty(&sk->sk_receive_queue)) 2222 + if (sk_can_busy_loop(sk) && skb_queue_empty_lockless(&sk->sk_receive_queue)) 2223 2223 sk_busy_loop(sk, true); 2224 2224 nvme_tcp_try_recv(queue); 2225 2225 return queue->nr_cqe;
+1 -8
drivers/pwm/core.c
··· 472 472 if (err) 473 473 return err; 474 474 475 - /* 476 - * .apply might have to round some values in *state, if possible 477 - * read the actually implemented value back. 478 - */ 479 - if (chip->ops->get_state) 480 - chip->ops->get_state(chip, pwm, &pwm->state); 481 - else 482 - pwm->state = *state; 475 + pwm->state = *state; 483 476 } else { 484 477 /* 485 478 * FIXME: restore the initial state in case of error.
+2 -2
drivers/scsi/lpfc/lpfc_nportdisc.c
··· 851 851 852 852 if (!(vport->fc_flag & FC_PT2PT)) { 853 853 /* Check config parameter use-adisc or FCP-2 */ 854 - if ((vport->cfg_use_adisc && (vport->fc_flag & FC_RSCN_MODE)) || 854 + if (vport->cfg_use_adisc && ((vport->fc_flag & FC_RSCN_MODE) || 855 855 ((ndlp->nlp_fcp_info & NLP_FCP_2_DEVICE) && 856 - (ndlp->nlp_type & NLP_FCP_TARGET))) { 856 + (ndlp->nlp_type & NLP_FCP_TARGET)))) { 857 857 spin_lock_irq(shost->host_lock); 858 858 ndlp->nlp_flag |= NLP_NPR_ADISC; 859 859 spin_unlock_irq(shost->host_lock);
+1 -1
drivers/scsi/lpfc/lpfc_sli.c
··· 7866 7866 if (sli4_hba->hdwq) { 7867 7867 for (eqidx = 0; eqidx < phba->cfg_irq_chann; eqidx++) { 7868 7868 eq = phba->sli4_hba.hba_eq_hdl[eqidx].eq; 7869 - if (eq->queue_id == sli4_hba->mbx_cq->assoc_qid) { 7869 + if (eq && eq->queue_id == sli4_hba->mbx_cq->assoc_qid) { 7870 7870 fpeq = eq; 7871 7871 break; 7872 7872 }
+3 -4
drivers/scsi/qla2xxx/qla_attr.c
··· 440 440 valid = 0; 441 441 if (ha->optrom_size == OPTROM_SIZE_2300 && start == 0) 442 442 valid = 1; 443 - else if (start == (ha->flt_region_boot * 4) || 444 - start == (ha->flt_region_fw * 4)) 445 - valid = 1; 446 443 else if (IS_QLA24XX_TYPE(ha) || IS_QLA25XX(ha)) 447 444 valid = 1; 448 445 if (!valid) { ··· 486 489 "Writing flash region -- 0x%x/0x%x.\n", 487 490 ha->optrom_region_start, ha->optrom_region_size); 488 491 489 - ha->isp_ops->write_optrom(vha, ha->optrom_buffer, 492 + rval = ha->isp_ops->write_optrom(vha, ha->optrom_buffer, 490 493 ha->optrom_region_start, ha->optrom_region_size); 494 + if (rval) 495 + rval = -EIO; 491 496 break; 492 497 default: 493 498 rval = -EINVAL;
+3 -3
drivers/scsi/qla2xxx/qla_bsg.c
··· 253 253 srb_t *sp; 254 254 const char *type; 255 255 int req_sg_cnt, rsp_sg_cnt; 256 - int rval = (DRIVER_ERROR << 16); 256 + int rval = (DID_ERROR << 16); 257 257 uint16_t nextlid = 0; 258 258 259 259 if (bsg_request->msgcode == FC_BSG_RPT_ELS) { ··· 432 432 struct Scsi_Host *host = fc_bsg_to_shost(bsg_job); 433 433 scsi_qla_host_t *vha = shost_priv(host); 434 434 struct qla_hw_data *ha = vha->hw; 435 - int rval = (DRIVER_ERROR << 16); 435 + int rval = (DID_ERROR << 16); 436 436 int req_sg_cnt, rsp_sg_cnt; 437 437 uint16_t loop_id; 438 438 struct fc_port *fcport; ··· 1950 1950 struct Scsi_Host *host = fc_bsg_to_shost(bsg_job); 1951 1951 scsi_qla_host_t *vha = shost_priv(host); 1952 1952 struct qla_hw_data *ha = vha->hw; 1953 - int rval = (DRIVER_ERROR << 16); 1953 + int rval = (DID_ERROR << 16); 1954 1954 struct qla_mt_iocb_rqst_fx00 *piocb_rqst; 1955 1955 srb_t *sp; 1956 1956 int req_sg_cnt = 0, rsp_sg_cnt = 0;
+2 -1
drivers/scsi/qla2xxx/qla_mbx.c
··· 702 702 mcp->mb[2] = LSW(risc_addr); 703 703 mcp->mb[3] = 0; 704 704 mcp->mb[4] = 0; 705 + mcp->mb[11] = 0; 705 706 ha->flags.using_lr_setting = 0; 706 707 if (IS_QLA25XX(ha) || IS_QLA81XX(ha) || IS_QLA83XX(ha) || 707 708 IS_QLA27XX(ha) || IS_QLA28XX(ha)) { ··· 747 746 if (ha->flags.exchoffld_enabled) 748 747 mcp->mb[4] |= ENABLE_EXCHANGE_OFFLD; 749 748 750 - mcp->out_mb |= MBX_4|MBX_3|MBX_2|MBX_1; 749 + mcp->out_mb |= MBX_4 | MBX_3 | MBX_2 | MBX_1 | MBX_11; 751 750 mcp->in_mb |= MBX_3 | MBX_2 | MBX_1; 752 751 } else { 753 752 mcp->mb[1] = LSW(risc_addr);
+4
drivers/scsi/qla2xxx/qla_os.c
··· 3535 3535 qla2x00_try_to_stop_firmware(vha); 3536 3536 } 3537 3537 3538 + /* Disable timer */ 3539 + if (vha->timer_active) 3540 + qla2x00_stop_timer(vha); 3541 + 3538 3542 /* Turn adapter off line */ 3539 3543 vha->flags.online = 0; 3540 3544
+2 -1
drivers/scsi/sd.c
··· 1166 1166 sector_t lba = sectors_to_logical(sdp, blk_rq_pos(rq)); 1167 1167 sector_t threshold; 1168 1168 unsigned int nr_blocks = sectors_to_logical(sdp, blk_rq_sectors(rq)); 1169 - bool dif, dix; 1170 1169 unsigned int mask = logical_to_sectors(sdp, 1) - 1; 1171 1170 bool write = rq_data_dir(rq) == WRITE; 1172 1171 unsigned char protect, fua; 1173 1172 blk_status_t ret; 1173 + unsigned int dif; 1174 + bool dix; 1174 1175 1175 1176 ret = scsi_init_io(cmd); 1176 1177 if (ret != BLK_STS_OK)
+4
drivers/scsi/ufs/ufs_bsg.c
··· 98 98 99 99 bsg_reply->reply_payload_rcv_len = 0; 100 100 101 + pm_runtime_get_sync(hba->dev); 102 + 101 103 msgcode = bsg_request->msgcode; 102 104 switch (msgcode) { 103 105 case UPIU_TRANSACTION_QUERY_REQ: ··· 136 134 137 135 break; 138 136 } 137 + 138 + pm_runtime_put_sync(hba->dev); 139 139 140 140 if (!desc_buff) 141 141 goto out;
+2 -1
drivers/target/iscsi/cxgbit/cxgbit_cm.c
··· 1831 1831 1832 1832 while (credits) { 1833 1833 struct sk_buff *p = cxgbit_sock_peek_wr(csk); 1834 - const u32 csum = (__force u32)p->csum; 1834 + u32 csum; 1835 1835 1836 1836 if (unlikely(!p)) { 1837 1837 pr_err("csk 0x%p,%u, cr %u,%u+%u, empty.\n", ··· 1840 1840 break; 1841 1841 } 1842 1842 1843 + csum = (__force u32)p->csum; 1843 1844 if (unlikely(credits < csum)) { 1844 1845 pr_warn("csk 0x%p,%u, cr %u,%u+%u, < %u.\n", 1845 1846 csk, csk->tid,
+28 -9
drivers/usb/cdns3/gadget.c
··· 2329 2329 writel(USB_CONF_CLK2OFFDS | USB_CONF_L1DS, &regs->usb_conf); 2330 2330 2331 2331 cdns3_configure_dmult(priv_dev, NULL); 2332 - 2333 - cdns3_gadget_pullup(&priv_dev->gadget, 1); 2334 2332 } 2335 2333 2336 2334 /** ··· 2343 2345 { 2344 2346 struct cdns3_device *priv_dev = gadget_to_cdns3_device(gadget); 2345 2347 unsigned long flags; 2348 + enum usb_device_speed max_speed = driver->max_speed; 2346 2349 2347 2350 spin_lock_irqsave(&priv_dev->lock, flags); 2348 2351 priv_dev->gadget_driver = driver; 2352 + 2353 + /* limit speed if necessary */ 2354 + max_speed = min(driver->max_speed, gadget->max_speed); 2355 + 2356 + switch (max_speed) { 2357 + case USB_SPEED_FULL: 2358 + writel(USB_CONF_SFORCE_FS, &priv_dev->regs->usb_conf); 2359 + writel(USB_CONF_USB3DIS, &priv_dev->regs->usb_conf); 2360 + break; 2361 + case USB_SPEED_HIGH: 2362 + writel(USB_CONF_USB3DIS, &priv_dev->regs->usb_conf); 2363 + break; 2364 + case USB_SPEED_SUPER: 2365 + break; 2366 + default: 2367 + dev_err(priv_dev->dev, 2368 + "invalid maximum_speed parameter %d\n", 2369 + max_speed); 2370 + /* fall through */ 2371 + case USB_SPEED_UNKNOWN: 2372 + /* default to superspeed */ 2373 + max_speed = USB_SPEED_SUPER; 2374 + break; 2375 + } 2376 + 2349 2377 cdns3_gadget_config(priv_dev); 2350 2378 spin_unlock_irqrestore(&priv_dev->lock, flags); 2351 2379 return 0; ··· 2405 2381 writel(EP_CMD_EPRST, &priv_dev->regs->ep_cmd); 2406 2382 readl_poll_timeout_atomic(&priv_dev->regs->ep_cmd, val, 2407 2383 !(val & EP_CMD_EPRST), 1, 100); 2384 + 2385 + priv_ep->flags &= ~EP_CLAIMED; 2408 2386 } 2409 2387 2410 2388 /* disable interrupt for device */ ··· 2601 2575 /* Check the maximum_speed parameter */ 2602 2576 switch (max_speed) { 2603 2577 case USB_SPEED_FULL: 2604 - writel(USB_CONF_SFORCE_FS, &priv_dev->regs->usb_conf); 2605 - writel(USB_CONF_USB3DIS, &priv_dev->regs->usb_conf); 2606 - break; 2607 2578 case USB_SPEED_HIGH: 2608 - writel(USB_CONF_USB3DIS, &priv_dev->regs->usb_conf); 2609 - break; 2610 2579 case USB_SPEED_SUPER: 2611 2580 break; 2612 2581 default: ··· 2733 2712 2734 2713 /* disable interrupt for device */ 2735 2714 writel(0, &priv_dev->regs->usb_ien); 2736 - 2737 - cdns3_gadget_pullup(&priv_dev->gadget, 0); 2738 2715 2739 2716 return 0; 2740 2717 }
-1
drivers/usb/cdns3/host-export.h
··· 12 12 #ifdef CONFIG_USB_CDNS3_HOST 13 13 14 14 int cdns3_host_init(struct cdns3 *cdns); 15 - void cdns3_host_exit(struct cdns3 *cdns); 16 15 17 16 #else 18 17
+1
drivers/usb/cdns3/host.c
··· 12 12 #include <linux/platform_device.h> 13 13 #include "core.h" 14 14 #include "drd.h" 15 + #include "host-export.h" 15 16 16 17 static int __cdns3_host_init(struct cdns3 *cdns) 17 18 {
+5
drivers/usb/core/config.c
··· 348 348 349 349 /* Validate the wMaxPacketSize field */ 350 350 maxp = usb_endpoint_maxp(&endpoint->desc); 351 + if (maxp == 0) { 352 + dev_warn(ddev, "config %d interface %d altsetting %d endpoint 0x%X has wMaxPacketSize 0, skipping\n", 353 + cfgno, inum, asnum, d->bEndpointAddress); 354 + goto skip_to_next_endpoint_or_interface_descriptor; 355 + } 351 356 352 357 /* Find the highest legal maxpacket size for this endpoint */ 353 358 i = 0; /* additional transactions per microframe */
+1
drivers/usb/dwc3/Kconfig
··· 102 102 depends on ARCH_MESON || COMPILE_TEST 103 103 default USB_DWC3 104 104 select USB_ROLE_SWITCH 105 + select REGMAP_MMIO 105 106 help 106 107 Support USB2/3 functionality in Amlogic G12A platforms. 107 108 Say 'Y' or 'M' if you have one such device.
+1 -2
drivers/usb/dwc3/core.c
··· 312 312 313 313 reg = dwc3_readl(dwc->regs, DWC3_GFLADJ); 314 314 dft = reg & DWC3_GFLADJ_30MHZ_MASK; 315 - if (!dev_WARN_ONCE(dwc->dev, dft == dwc->fladj, 316 - "request value same as default, ignoring\n")) { 315 + if (dft != dwc->fladj) { 317 316 reg &= ~DWC3_GFLADJ_30MHZ_MASK; 318 317 reg |= DWC3_GFLADJ_30MHZ_SDBND_SEL | dwc->fladj; 319 318 dwc3_writel(dwc->regs, DWC3_GFLADJ, reg);
+1 -1
drivers/usb/dwc3/dwc3-pci.c
··· 258 258 259 259 ret = platform_device_add_properties(dwc->dwc3, p); 260 260 if (ret < 0) 261 - return ret; 261 + goto err; 262 262 263 263 ret = dwc3_pci_quirks(dwc); 264 264 if (ret)
+6
drivers/usb/dwc3/gadget.c
··· 707 707 708 708 dwc3_gadget_giveback(dep, req, -ESHUTDOWN); 709 709 } 710 + 711 + while (!list_empty(&dep->cancelled_list)) { 712 + req = next_request(&dep->cancelled_list); 713 + 714 + dwc3_gadget_giveback(dep, req, -ESHUTDOWN); 715 + } 710 716 } 711 717 712 718 /**
+4
drivers/usb/gadget/composite.c
··· 2170 2170 usb_ep_dequeue(cdev->gadget->ep0, cdev->os_desc_req); 2171 2171 2172 2172 kfree(cdev->os_desc_req->buf); 2173 + cdev->os_desc_req->buf = NULL; 2173 2174 usb_ep_free_request(cdev->gadget->ep0, cdev->os_desc_req); 2175 + cdev->os_desc_req = NULL; 2174 2176 } 2175 2177 if (cdev->req) { 2176 2178 if (cdev->setup_pending) 2177 2179 usb_ep_dequeue(cdev->gadget->ep0, cdev->req); 2178 2180 2179 2181 kfree(cdev->req->buf); 2182 + cdev->req->buf = NULL; 2180 2183 usb_ep_free_request(cdev->gadget->ep0, cdev->req); 2184 + cdev->req = NULL; 2181 2185 } 2182 2186 cdev->next_string_id = 0; 2183 2187 device_remove_file(&cdev->gadget->dev, &dev_attr_suspended);
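Clearing the request pointers and buffers right after freeing them makes composite_dev_cleanup() safe to enter more than once (for instance a failed bind followed by unbind): a second pass sees NULL and skips the free instead of double-freeing. The idiom in miniature, as a runnable userspace sketch:

#include <stdlib.h>

struct ctx {
        char *buf;
};

static void ctx_cleanup(struct ctx *c)
{
        free(c->buf);
        c->buf = NULL; /* a repeated cleanup call is now harmless */
}

int main(void)
{
        struct ctx c = { .buf = malloc(16) };

        ctx_cleanup(&c);
        ctx_cleanup(&c); /* safe: free(NULL) is a no-op */
        return 0;
}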
+105 -5
drivers/usb/gadget/configfs.c
··· 61 61 bool use_os_desc; 62 62 char b_vendor_code; 63 63 char qw_sign[OS_STRING_QW_SIGN_LEN]; 64 + spinlock_t spinlock; 65 + bool unbind; 64 66 }; 65 67 66 68 static inline struct gadget_info *to_gadget_info(struct config_item *item) ··· 1246 1244 int ret; 1247 1245 1248 1246 /* the gi->lock is hold by the caller */ 1247 + gi->unbind = 0; 1249 1248 cdev->gadget = gadget; 1250 1249 set_gadget_data(gadget, cdev); 1251 1250 ret = composite_dev_prepare(composite, cdev); ··· 1379 1376 { 1380 1377 struct usb_composite_dev *cdev; 1381 1378 struct gadget_info *gi; 1379 + unsigned long flags; 1382 1380 1383 1381 /* the gi->lock is hold by the caller */ 1384 1382 1385 1383 cdev = get_gadget_data(gadget); 1386 1384 gi = container_of(cdev, struct gadget_info, cdev); 1385 + spin_lock_irqsave(&gi->spinlock, flags); 1386 + gi->unbind = 1; 1387 + spin_unlock_irqrestore(&gi->spinlock, flags); 1387 1388 1388 1389 kfree(otg_desc[0]); 1389 1390 otg_desc[0] = NULL; 1390 1391 purge_configs_funcs(gi); 1391 1392 composite_dev_cleanup(cdev); 1392 1393 usb_ep_autoconfig_reset(cdev->gadget); 1394 + spin_lock_irqsave(&gi->spinlock, flags); 1393 1395 cdev->gadget = NULL; 1394 1396 set_gadget_data(gadget, NULL); 1397 + spin_unlock_irqrestore(&gi->spinlock, flags); 1398 + } 1399 + 1400 + static int configfs_composite_setup(struct usb_gadget *gadget, 1401 + const struct usb_ctrlrequest *ctrl) 1402 + { 1403 + struct usb_composite_dev *cdev; 1404 + struct gadget_info *gi; 1405 + unsigned long flags; 1406 + int ret; 1407 + 1408 + cdev = get_gadget_data(gadget); 1409 + if (!cdev) 1410 + return 0; 1411 + 1412 + gi = container_of(cdev, struct gadget_info, cdev); 1413 + spin_lock_irqsave(&gi->spinlock, flags); 1414 + cdev = get_gadget_data(gadget); 1415 + if (!cdev || gi->unbind) { 1416 + spin_unlock_irqrestore(&gi->spinlock, flags); 1417 + return 0; 1418 + } 1419 + 1420 + ret = composite_setup(gadget, ctrl); 1421 + spin_unlock_irqrestore(&gi->spinlock, flags); 1422 + return ret; 1423 + } 1424 + 1425 + static void configfs_composite_disconnect(struct usb_gadget *gadget) 1426 + { 1427 + struct usb_composite_dev *cdev; 1428 + struct gadget_info *gi; 1429 + unsigned long flags; 1430 + 1431 + cdev = get_gadget_data(gadget); 1432 + if (!cdev) 1433 + return; 1434 + 1435 + gi = container_of(cdev, struct gadget_info, cdev); 1436 + spin_lock_irqsave(&gi->spinlock, flags); 1437 + cdev = get_gadget_data(gadget); 1438 + if (!cdev || gi->unbind) { 1439 + spin_unlock_irqrestore(&gi->spinlock, flags); 1440 + return; 1441 + } 1442 + 1443 + composite_disconnect(gadget); 1444 + spin_unlock_irqrestore(&gi->spinlock, flags); 1445 + } 1446 + 1447 + static void configfs_composite_suspend(struct usb_gadget *gadget) 1448 + { 1449 + struct usb_composite_dev *cdev; 1450 + struct gadget_info *gi; 1451 + unsigned long flags; 1452 + 1453 + cdev = get_gadget_data(gadget); 1454 + if (!cdev) 1455 + return; 1456 + 1457 + gi = container_of(cdev, struct gadget_info, cdev); 1458 + spin_lock_irqsave(&gi->spinlock, flags); 1459 + cdev = get_gadget_data(gadget); 1460 + if (!cdev || gi->unbind) { 1461 + spin_unlock_irqrestore(&gi->spinlock, flags); 1462 + return; 1463 + } 1464 + 1465 + composite_suspend(gadget); 1466 + spin_unlock_irqrestore(&gi->spinlock, flags); 1467 + } 1468 + 1469 + static void configfs_composite_resume(struct usb_gadget *gadget) 1470 + { 1471 + struct usb_composite_dev *cdev; 1472 + struct gadget_info *gi; 1473 + unsigned long flags; 1474 + 1475 + cdev = get_gadget_data(gadget); 1476 + if (!cdev) 1477 + return; 1478 + 1479 + gi = 
container_of(cdev, struct gadget_info, cdev); 1480 + spin_lock_irqsave(&gi->spinlock, flags); 1481 + cdev = get_gadget_data(gadget); 1482 + if (!cdev || gi->unbind) { 1483 + spin_unlock_irqrestore(&gi->spinlock, flags); 1484 + return; 1485 + } 1486 + 1487 + composite_resume(gadget); 1488 + spin_unlock_irqrestore(&gi->spinlock, flags); 1395 1489 } 1396 1490 1397 1491 static const struct usb_gadget_driver configfs_driver_template = { 1398 1492 .bind = configfs_composite_bind, 1399 1493 .unbind = configfs_composite_unbind, 1400 1494 1401 - .setup = composite_setup, 1402 - .reset = composite_disconnect, 1403 - .disconnect = composite_disconnect, 1495 + .setup = configfs_composite_setup, 1496 + .reset = configfs_composite_disconnect, 1497 + .disconnect = configfs_composite_disconnect, 1404 1498 1405 - .suspend = composite_suspend, 1406 - .resume = composite_resume, 1499 + .suspend = configfs_composite_suspend, 1500 + .resume = configfs_composite_resume, 1407 1501 1408 1502 .max_speed = USB_SPEED_SUPER, 1409 1503 .driver = {
+4 -2
drivers/usb/gadget/udc/atmel_usba_udc.c
··· 449 449 next_fifo_transaction(ep, req); 450 450 if (req->last_transaction) { 451 451 usba_ep_writel(ep, CTL_DIS, USBA_TX_PK_RDY); 452 - usba_ep_writel(ep, CTL_ENB, USBA_TX_COMPLETE); 452 + if (ep_is_control(ep)) 453 + usba_ep_writel(ep, CTL_ENB, USBA_TX_COMPLETE); 453 454 } else { 454 - usba_ep_writel(ep, CTL_DIS, USBA_TX_COMPLETE); 455 + if (ep_is_control(ep)) 456 + usba_ep_writel(ep, CTL_DIS, USBA_TX_COMPLETE); 455 457 usba_ep_writel(ep, CTL_ENB, USBA_TX_PK_RDY); 456 458 } 457 459 }
+11
drivers/usb/gadget/udc/core.c
··· 98 98 if (ep->enabled) 99 99 goto out; 100 100 101 + /* UDC drivers can't handle endpoints with maxpacket size 0 */ 102 + if (usb_endpoint_maxp(ep->desc) == 0) { 103 + /* 104 + * We should log an error message here, but we can't call 105 + * dev_err() because there's no way to find the gadget 106 + * given only ep. 107 + */ 108 + ret = -EINVAL; 109 + goto out; 110 + } 111 + 101 112 ret = ep->ops->enable(ep, ep->desc); 102 113 if (ret) 103 114 goto out;
+1 -1
drivers/usb/gadget/udc/fsl_udc_core.c
··· 2576 2576 dma_pool_destroy(udc_controller->td_pool); 2577 2577 free_irq(udc_controller->irq, udc_controller); 2578 2578 iounmap(dr_regs); 2579 - if (pdata->operating_mode == FSL_USB2_DR_DEVICE) 2579 + if (res && (pdata->operating_mode == FSL_USB2_DR_DEVICE)) 2580 2580 release_mem_region(res->start, resource_size(res)); 2581 2581 2582 2582 /* free udc --wait for the release() finished */
+6 -5
drivers/usb/gadget/udc/renesas_usb3.c
··· 1544 1544 static bool usb3_std_req_set_address(struct renesas_usb3 *usb3, 1545 1545 struct usb_ctrlrequest *ctrl) 1546 1546 { 1547 - if (ctrl->wValue >= 128) 1547 + if (le16_to_cpu(ctrl->wValue) >= 128) 1548 1548 return true; /* stall */ 1549 1549 1550 - usb3_set_device_address(usb3, ctrl->wValue); 1550 + usb3_set_device_address(usb3, le16_to_cpu(ctrl->wValue)); 1551 1551 usb3_set_p0_con_for_no_data(usb3); 1552 1552 1553 1553 return false; ··· 1582 1582 struct renesas_usb3_ep *usb3_ep; 1583 1583 int num; 1584 1584 u16 status = 0; 1585 + __le16 tx_data; 1585 1586 1586 1587 switch (ctrl->bRequestType & USB_RECIP_MASK) { 1587 1588 case USB_RECIP_DEVICE: ··· 1605 1604 } 1606 1605 1607 1606 if (!stall) { 1608 - status = cpu_to_le16(status); 1607 + tx_data = cpu_to_le16(status); 1609 1608 dev_dbg(usb3_to_dev(usb3), "get_status: req = %p\n", 1610 1609 usb_req_to_usb3_req(usb3->ep0_req)); 1611 - usb3_pipe0_internal_xfer(usb3, &status, sizeof(status), 1610 + usb3_pipe0_internal_xfer(usb3, &tx_data, sizeof(tx_data), 1612 1611 usb3_pipe0_get_status_completion); 1613 1612 } 1614 1613 ··· 1773 1772 static bool usb3_std_req_set_configuration(struct renesas_usb3 *usb3, 1774 1773 struct usb_ctrlrequest *ctrl) 1775 1774 { 1776 - if (ctrl->wValue > 0) 1775 + if (le16_to_cpu(ctrl->wValue) > 0) 1777 1776 usb3_set_bit(usb3, USB_COM_CON_CONF, USB3_USB_COM_CON); 1778 1777 else 1779 1778 usb3_clear_bit(usb3, USB_COM_CON_CONF, USB3_USB_COM_CON);
+12 -12
drivers/usb/host/xhci-debugfs.c
··· 202 202 trb = &seg->trbs[i]; 203 203 dma = seg->dma + i * sizeof(*trb); 204 204 seq_printf(s, "%pad: %s\n", &dma, 205 - xhci_decode_trb(trb->generic.field[0], 206 - trb->generic.field[1], 207 - trb->generic.field[2], 208 - trb->generic.field[3])); 205 + xhci_decode_trb(le32_to_cpu(trb->generic.field[0]), 206 + le32_to_cpu(trb->generic.field[1]), 207 + le32_to_cpu(trb->generic.field[2]), 208 + le32_to_cpu(trb->generic.field[3]))); 209 209 } 210 210 } 211 211 ··· 263 263 xhci = hcd_to_xhci(bus_to_hcd(dev->udev->bus)); 264 264 slot_ctx = xhci_get_slot_ctx(xhci, dev->out_ctx); 265 265 seq_printf(s, "%pad: %s\n", &dev->out_ctx->dma, 266 - xhci_decode_slot_context(slot_ctx->dev_info, 267 - slot_ctx->dev_info2, 268 - slot_ctx->tt_info, 269 - slot_ctx->dev_state)); 266 + xhci_decode_slot_context(le32_to_cpu(slot_ctx->dev_info), 267 + le32_to_cpu(slot_ctx->dev_info2), 268 + le32_to_cpu(slot_ctx->tt_info), 269 + le32_to_cpu(slot_ctx->dev_state))); 270 270 271 271 return 0; 272 272 } ··· 286 286 ep_ctx = xhci_get_ep_ctx(xhci, dev->out_ctx, dci); 287 287 dma = dev->out_ctx->dma + dci * CTX_SIZE(xhci->hcc_params); 288 288 seq_printf(s, "%pad: %s\n", &dma, 289 - xhci_decode_ep_context(ep_ctx->ep_info, 290 - ep_ctx->ep_info2, 291 - ep_ctx->deq, 292 - ep_ctx->tx_info)); 289 + xhci_decode_ep_context(le32_to_cpu(ep_ctx->ep_info), 290 + le32_to_cpu(ep_ctx->ep_info2), 291 + le64_to_cpu(ep_ctx->deq), 292 + le32_to_cpu(ep_ctx->tx_info))); 293 293 } 294 294 295 295 return 0;
+2
drivers/usb/host/xhci-ring.c
··· 3330 3330 if (xhci_urb_suitable_for_idt(urb)) { 3331 3331 memcpy(&send_addr, urb->transfer_buffer, 3332 3332 trb_buff_len); 3333 + le64_to_cpus(&send_addr); 3333 3334 field |= TRB_IDT; 3334 3335 } 3335 3336 } ··· 3476 3475 if (xhci_urb_suitable_for_idt(urb)) { 3477 3476 memcpy(&addr, urb->transfer_buffer, 3478 3477 urb->transfer_buffer_length); 3478 + le64_to_cpus(&addr); 3479 3479 field |= TRB_IDT; 3480 3480 } else { 3481 3481 addr = (u64) urb->transfer_dma;
+45 -9
drivers/usb/host/xhci.c
··· 3071 3071 } 3072 3072 } 3073 3073 3074 + static void xhci_endpoint_disable(struct usb_hcd *hcd, 3075 + struct usb_host_endpoint *host_ep) 3076 + { 3077 + struct xhci_hcd *xhci; 3078 + struct xhci_virt_device *vdev; 3079 + struct xhci_virt_ep *ep; 3080 + struct usb_device *udev; 3081 + unsigned long flags; 3082 + unsigned int ep_index; 3083 + 3084 + xhci = hcd_to_xhci(hcd); 3085 + rescan: 3086 + spin_lock_irqsave(&xhci->lock, flags); 3087 + 3088 + udev = (struct usb_device *)host_ep->hcpriv; 3089 + if (!udev || !udev->slot_id) 3090 + goto done; 3091 + 3092 + vdev = xhci->devs[udev->slot_id]; 3093 + if (!vdev) 3094 + goto done; 3095 + 3096 + ep_index = xhci_get_endpoint_index(&host_ep->desc); 3097 + ep = &vdev->eps[ep_index]; 3098 + if (!ep) 3099 + goto done; 3100 + 3101 + /* wait for hub_tt_work to finish clearing hub TT */ 3102 + if (ep->ep_state & EP_CLEARING_TT) { 3103 + spin_unlock_irqrestore(&xhci->lock, flags); 3104 + schedule_timeout_uninterruptible(1); 3105 + goto rescan; 3106 + } 3107 + 3108 + if (ep->ep_state) 3109 + xhci_dbg(xhci, "endpoint disable with ep_state 0x%x\n", 3110 + ep->ep_state); 3111 + done: 3112 + host_ep->hcpriv = NULL; 3113 + spin_unlock_irqrestore(&xhci->lock, flags); 3114 + } 3115 + 3074 3116 /* 3075 3117 * Called after usb core issues a clear halt control message. 3076 3118 * The host side of the halt should already be cleared by a reset endpoint ··· 5280 5238 unsigned int ep_index; 5281 5239 unsigned long flags; 5282 5240 5283 - /* 5284 - * udev might be NULL if tt buffer is cleared during a failed device 5285 - * enumeration due to a halted control endpoint. Usb core might 5286 - * have allocated a new udev for the next enumeration attempt. 5287 - */ 5288 - 5289 5241 xhci = hcd_to_xhci(hcd); 5242 + 5243 + spin_lock_irqsave(&xhci->lock, flags); 5290 5244 udev = (struct usb_device *)ep->hcpriv; 5291 - if (!udev) 5292 - return; 5293 5245 slot_id = udev->slot_id; 5294 5246 ep_index = xhci_get_endpoint_index(&ep->desc); 5295 5247 5296 - spin_lock_irqsave(&xhci->lock, flags); 5297 5248 xhci->devs[slot_id]->eps[ep_index].ep_state &= ~EP_CLEARING_TT; 5298 5249 xhci_ring_doorbell_for_active_rings(xhci, slot_id, ep_index); 5299 5250 spin_unlock_irqrestore(&xhci->lock, flags); ··· 5323 5288 .free_streams = xhci_free_streams, 5324 5289 .add_endpoint = xhci_add_endpoint, 5325 5290 .drop_endpoint = xhci_drop_endpoint, 5291 + .endpoint_disable = xhci_endpoint_disable, 5326 5292 .endpoint_reset = xhci_endpoint_reset, 5327 5293 .check_bandwidth = xhci_check_bandwidth, 5328 5294 .reset_bandwidth = xhci_reset_bandwidth,
+7 -6
drivers/usb/misc/ldusb.c
··· 487 487 } 488 488 bytes_to_read = min(count, *actual_buffer); 489 489 if (bytes_to_read < *actual_buffer) 490 - dev_warn(&dev->intf->dev, "Read buffer overflow, %zd bytes dropped\n", 490 + dev_warn(&dev->intf->dev, "Read buffer overflow, %zu bytes dropped\n", 491 491 *actual_buffer-bytes_to_read); 492 492 493 493 /* copy one interrupt_in_buffer from ring_buffer into userspace */ ··· 495 495 retval = -EFAULT; 496 496 goto unlock_exit; 497 497 } 498 - dev->ring_tail = (dev->ring_tail+1) % ring_buffer_size; 499 - 500 498 retval = bytes_to_read; 501 499 502 500 spin_lock_irq(&dev->rbsl); 501 + dev->ring_tail = (dev->ring_tail + 1) % ring_buffer_size; 502 + 503 503 if (dev->buffer_overflow) { 504 504 dev->buffer_overflow = 0; 505 505 spin_unlock_irq(&dev->rbsl); ··· 562 562 /* write the data into interrupt_out_buffer from userspace */ 563 563 bytes_to_write = min(count, write_buffer_size*dev->interrupt_out_endpoint_size); 564 564 if (bytes_to_write < count) 565 - dev_warn(&dev->intf->dev, "Write buffer overflow, %zd bytes dropped\n", count-bytes_to_write); 566 - dev_dbg(&dev->intf->dev, "%s: count = %zd, bytes_to_write = %zd\n", 565 + dev_warn(&dev->intf->dev, "Write buffer overflow, %zu bytes dropped\n", 566 + count - bytes_to_write); 567 + dev_dbg(&dev->intf->dev, "%s: count = %zu, bytes_to_write = %zu\n", 567 568 __func__, count, bytes_to_write); 568 569 569 570 if (copy_from_user(dev->interrupt_out_buffer, buffer, bytes_to_write)) { ··· 581 580 1 << 8, 0, 582 581 dev->interrupt_out_buffer, 583 582 bytes_to_write, 584 - USB_CTRL_SET_TIMEOUT * HZ); 583 + USB_CTRL_SET_TIMEOUT); 585 584 if (retval < 0) 586 585 dev_err(&dev->intf->dev, 587 586 "Couldn't submit HID_REQ_SET_REPORT %d\n",
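Two of the ldusb.c fixes are pure unit corrections: %zu is the printk specifier for size_t, and usb_control_msg() takes its timeout in milliseconds, so the old USB_CTRL_SET_TIMEOUT * HZ waited HZ times too long. A hedged sketch of the corrected call shape (variable names illustrative, request constants as in the hunk):

retval = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
                         0x09,                  /* HID_REQ_SET_REPORT */
                         USB_TYPE_CLASS | USB_RECIP_INTERFACE,
                         1 << 8, 0, buf, count,
                         USB_CTRL_SET_TIMEOUT); /* milliseconds, no HZ */

On a HZ=1000 kernel the old value turned the intended 5 second timeout into roughly 83 minutes.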
+1
drivers/usb/mtu3/mtu3_core.c
··· 16 16 #include <linux/platform_device.h> 17 17 18 18 #include "mtu3.h" 19 + #include "mtu3_dr.h" 19 20 #include "mtu3_debug.h" 20 21 #include "mtu3_trace.h" 21 22
+1 -1
drivers/usb/renesas_usbhs/mod_gadget.c
··· 265 265 case USB_DEVICE_TEST_MODE: 266 266 usbhsg_recip_handler_std_control_done(priv, uep, ctrl); 267 267 udelay(100); 268 - usbhs_sys_set_test_mode(priv, le16_to_cpu(ctrl->wIndex >> 8)); 268 + usbhs_sys_set_test_mode(priv, le16_to_cpu(ctrl->wIndex) >> 8); 269 269 break; 270 270 default: 271 271 usbhsg_recip_handler_std_control_done(priv, uep, ctrl);
+10 -3
drivers/usb/serial/whiteheat.c
··· 559 559 560 560 command_port = port->serial->port[COMMAND_PORT]; 561 561 command_info = usb_get_serial_port_data(command_port); 562 + 563 + if (command_port->bulk_out_size < datasize + 1) 564 + return -EIO; 565 + 562 566 mutex_lock(&command_info->mutex); 563 567 command_info->command_finished = false; 564 568 ··· 636 632 struct device *dev = &port->dev; 637 633 struct whiteheat_port_settings port_settings; 638 634 unsigned int cflag = tty->termios.c_cflag; 635 + speed_t baud; 639 636 640 637 port_settings.port = port->port_number + 1; 641 638 ··· 697 692 dev_dbg(dev, "%s - XON = %2x, XOFF = %2x\n", __func__, port_settings.xon, port_settings.xoff); 698 693 699 694 /* get the baud rate wanted */ 700 - port_settings.baud = tty_get_baud_rate(tty); 701 - dev_dbg(dev, "%s - baud rate = %d\n", __func__, port_settings.baud); 695 + baud = tty_get_baud_rate(tty); 696 + port_settings.baud = cpu_to_le32(baud); 697 + dev_dbg(dev, "%s - baud rate = %u\n", __func__, baud); 702 698 703 699 /* fixme: should set validated settings */ 704 - tty_encode_baud_rate(tty, port_settings.baud, port_settings.baud); 700 + tty_encode_baud_rate(tty, baud, baud); 701 + 705 702 /* handle any settings that aren't specified in the tty structure */ 706 703 port_settings.lloop = 0; 707 704
+1 -1
drivers/usb/serial/whiteheat.h
··· 87 87 88 88 struct whiteheat_port_settings { 89 89 __u8 port; /* port number (1 to N) */ 90 - __u32 baud; /* any value 7 - 460800, firmware calculates 90 + __le32 baud; /* any value 7 - 460800, firmware calculates 91 91 best fit; arrives little endian */ 92 92 __u8 bits; /* 5, 6, 7, or 8 */ 93 93 __u8 stop; /* 1 or 2, default 1 (2 = 1.5 if bits = 5) */
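The whiteheat pair of hunks is the standard recipe for on-wire structures: declare the field __le32 in the header so sparse (make C=1) flags any unconverted assignment, keep a host-order speed_t for local use, and convert exactly once at the boundary. In isolation (names illustrative):

struct sketch_wire_settings {
        __le32 baud;                    /* little endian on the wire */
};

static void sketch_fill(struct sketch_wire_settings *ws, speed_t baud)
{
        ws->baud = cpu_to_le32(baud);   /* single conversion point */
        /* keep host-order 'baud' for logging and tty_encode_baud_rate() */
}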
-10
drivers/usb/storage/scsiglue.c
··· 68 68 static int slave_alloc (struct scsi_device *sdev) 69 69 { 70 70 struct us_data *us = host_to_us(sdev->host); 71 - int maxp; 72 71 73 72 /* 74 73 * Set the INQUIRY transfer length to 36. We don't use any of ··· 75 76 * less than 36 bytes. 76 77 */ 77 78 sdev->inquiry_len = 36; 78 - 79 - /* 80 - * USB has unusual scatter-gather requirements: the length of each 81 - * scatterlist element except the last must be divisible by the 82 - * Bulk maxpacket value. Fortunately this value is always a 83 - * power of 2. Inform the block layer about this requirement. 84 - */ 85 - maxp = usb_maxpacket(us->pusb_dev, us->recv_bulk_pipe, 0); 86 - blk_queue_virt_boundary(sdev->request_queue, maxp - 1); 87 79 88 80 /* 89 81 * Some host controllers may have alignment requirements.
-20
drivers/usb/storage/uas.c
··· 789 789 { 790 790 struct uas_dev_info *devinfo = 791 791 (struct uas_dev_info *)sdev->host->hostdata; 792 - int maxp; 793 792 794 793 sdev->hostdata = devinfo; 795 - 796 - /* 797 - * We have two requirements here. We must satisfy the requirements 798 - * of the physical HC and the demands of the protocol, as we 799 - * definitely want no additional memory allocation in this path 800 - * ruling out using bounce buffers. 801 - * 802 - * For a transmission on USB to continue we must never send 803 - * a package that is smaller than maxpacket. Hence the length of each 804 - * scatterlist element except the last must be divisible by the 805 - * Bulk maxpacket value. 806 - * If the HC does not ensure that through SG, 807 - * the upper layer must do that. We must assume nothing 808 - * about the capabilities off the HC, so we use the most 809 - * pessimistic requirement. 810 - */ 811 - 812 - maxp = usb_maxpacket(devinfo->udev, devinfo->data_in_pipe, 0); 813 - blk_queue_virt_boundary(sdev->request_queue, maxp - 1); 814 794 815 795 /* 816 796 * The protocol has no requirements on alignment in the strict sense.
+3
drivers/usb/usbip/vhci_tx.c
··· 147 147 } 148 148 149 149 kfree(iov); 150 + /* This is only for isochronous case */ 150 151 kfree(iso_buffer); 152 + iso_buffer = NULL; 153 + 151 154 usbip_dbg_vhci_tx("send txdata\n"); 152 155 153 156 total_size += txsize;
+7 -1
drivers/vhost/vringh.c
··· 852 852 return 0; 853 853 } 854 854 855 + static inline int kern_xfer(void *dst, void *src, size_t len) 856 + { 857 + memcpy(dst, src, len); 858 + return 0; 859 + } 860 + 855 861 /** 856 862 * vringh_init_kern - initialize a vringh for a kernelspace vring. 857 863 * @vrh: the vringh to initialize. ··· 964 958 ssize_t vringh_iov_push_kern(struct vringh_kiov *wiov, 965 959 const void *src, size_t len) 966 960 { 967 - return vringh_iov_xfer(wiov, (void *)src, len, xfer_kern); 961 + return vringh_iov_xfer(wiov, (void *)src, len, kern_xfer); 968 962 } 969 963 EXPORT_SYMBOL(vringh_iov_push_kern); 970 964
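The vringh.c fix hinges on the callback contract: vringh_iov_xfer() always invokes xfer(iov_base, caller_ptr, len), so the pull and push directions need mirror-image helpers even though both bodies are a single memcpy. Schematically (a reading of the hunk, not new API):

/*
 *   pull: xfer_kern(src, dst, len) -> memcpy(dst, src, len)  (iov -> buffer)
 *   push: kern_xfer(dst, src, len) -> memcpy(dst, src, len)  (buffer -> iov)
 *
 * The first parameter is always bound to the iov element; only its role
 * (source vs. destination) differs, which is what reusing xfer_kern for
 * vringh_iov_push_kern() got wrong.
 */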
+3 -4
drivers/virtio/virtio_ring.c
··· 1499 1499 * counter first before updating event flags. 1500 1500 */ 1501 1501 virtio_wmb(vq->weak_barriers); 1502 - } else { 1503 - used_idx = vq->last_used_idx; 1504 - wrap_counter = vq->packed.used_wrap_counter; 1505 1502 } 1506 1503 1507 1504 if (vq->packed.event_flags_shadow == VRING_PACKED_EVENT_FLAG_DISABLE) { ··· 1515 1518 */ 1516 1519 virtio_mb(vq->weak_barriers); 1517 1520 1518 - if (is_used_desc_packed(vq, used_idx, wrap_counter)) { 1521 + if (is_used_desc_packed(vq, 1522 + vq->last_used_idx, 1523 + vq->packed.used_wrap_counter)) { 1519 1524 END_USE(vq); 1520 1525 return false; 1521 1526 }
+2 -1
fs/cifs/smb2ops.c
··· 4084 4084 4085 4085 kfree(dw->ppages); 4086 4086 cifs_small_buf_release(dw->buf); 4087 + kfree(dw); 4087 4088 } 4088 4089 4089 4090 ··· 4158 4157 dw->server = server; 4159 4158 dw->ppages = pages; 4160 4159 dw->len = len; 4161 - queue_work(cifsiod_wq, &dw->decrypt); 4160 + queue_work(decrypt_wq, &dw->decrypt); 4162 4161 *num_mids = 0; /* worker thread takes care of finding mid */ 4163 4162 return -1; 4164 4163 }
+2 -1
fs/fuse/Makefile
··· 5 5 6 6 obj-$(CONFIG_FUSE_FS) += fuse.o 7 7 obj-$(CONFIG_CUSE) += cuse.o 8 - obj-$(CONFIG_VIRTIO_FS) += virtio_fs.o 8 + obj-$(CONFIG_VIRTIO_FS) += virtiofs.o 9 9 10 10 fuse-objs := dev.o dir.o file.o inode.o control.o xattr.o acl.o readdir.o 11 + virtiofs-y += virtio_fs.o
+3 -1
fs/fuse/dev.c
··· 276 276 void fuse_request_end(struct fuse_conn *fc, struct fuse_req *req) 277 277 { 278 278 struct fuse_iqueue *fiq = &fc->iq; 279 - bool async = req->args->end; 279 + bool async; 280 280 281 281 if (test_and_set_bit(FR_FINISHED, &req->flags)) 282 282 goto put_request; 283 + 284 + async = req->args->end; 283 285 /* 284 286 * test_and_set_bit() implies smp_mb() between bit 285 287 * changing and below intr_entry check. Pairs with
+15 -1
fs/fuse/dir.c
··· 405 405 else 406 406 fuse_invalidate_entry_cache(entry); 407 407 408 - fuse_advise_use_readdirplus(dir); 408 + if (inode) 409 + fuse_advise_use_readdirplus(dir); 409 410 return newent; 410 411 411 412 out_iput: ··· 1520 1519 if (WARN_ON(!S_ISREG(inode->i_mode))) 1521 1520 return -EIO; 1522 1521 is_truncate = true; 1522 + } 1523 + 1524 + /* Flush dirty data/metadata before non-truncate SETATTR */ 1525 + if (is_wb && S_ISREG(inode->i_mode) && 1526 + attr->ia_valid & 1527 + (ATTR_MODE | ATTR_UID | ATTR_GID | ATTR_MTIME_SET | 1528 + ATTR_TIMES_SET)) { 1529 + err = write_inode_now(inode, true); 1530 + if (err) 1531 + return err; 1532 + 1533 + fuse_set_nowrite(inode); 1534 + fuse_release_nowrite(inode); 1523 1535 } 1524 1536 1525 1537 if (is_truncate) {
+8 -6
fs/fuse/file.c
··· 217 217 { 218 218 struct fuse_conn *fc = get_fuse_conn(inode); 219 219 int err; 220 - bool lock_inode = (file->f_flags & O_TRUNC) && 220 + bool is_wb_truncate = (file->f_flags & O_TRUNC) && 221 221 fc->atomic_o_trunc && 222 222 fc->writeback_cache; 223 223 ··· 225 225 if (err) 226 226 return err; 227 227 228 - if (lock_inode) 228 + if (is_wb_truncate) { 229 229 inode_lock(inode); 230 + fuse_set_nowrite(inode); 231 + } 230 232 231 233 err = fuse_do_open(fc, get_node_id(inode), file, isdir); 232 234 233 235 if (!err) 234 236 fuse_finish_open(inode, file); 235 237 236 - if (lock_inode) 238 + if (is_wb_truncate) { 239 + fuse_release_nowrite(inode); 237 240 inode_unlock(inode); 241 + } 238 242 239 243 return err; 240 244 } ··· 2001 1997 2002 1998 if (!data->ff) { 2003 1999 err = -EIO; 2004 - data->ff = fuse_write_file_get(fc, get_fuse_inode(inode)); 2000 + data->ff = fuse_write_file_get(fc, fi); 2005 2001 if (!data->ff) 2006 2002 goto out_unlock; 2007 2003 } ··· 2046 2042 * under writeback, so we can release the page lock. 2047 2043 */ 2048 2044 if (data->wpa == NULL) { 2049 - struct fuse_inode *fi = get_fuse_inode(inode); 2050 - 2051 2045 err = -ENOMEM; 2052 2046 wpa = fuse_writepage_args_alloc(); 2053 2047 if (!wpa) {
+4
fs/fuse/fuse_i.h
··· 479 479 bool destroy:1; 480 480 bool no_control:1; 481 481 bool no_force_umount:1; 482 + bool no_mount_options:1; 482 483 unsigned int max_read; 483 484 unsigned int blksize; 484 485 const char *subtype; ··· 713 712 714 713 /** Do not allow MNT_FORCE umount */ 715 714 unsigned int no_force_umount:1; 715 + 716 + /* Do not show mount options */ 717 + unsigned int no_mount_options:1; 716 718 717 719 /** The number of requests waiting for completion */ 718 720 atomic_t num_waiting;
+4
fs/fuse/inode.c
··· 558 558 struct super_block *sb = root->d_sb; 559 559 struct fuse_conn *fc = get_fuse_conn_super(sb); 560 560 561 + if (fc->no_mount_options) 562 + return 0; 563 + 561 564 seq_printf(m, ",user_id=%u", from_kuid_munged(fc->user_ns, fc->user_id)); 562 565 seq_printf(m, ",group_id=%u", from_kgid_munged(fc->user_ns, fc->group_id)); 563 566 if (fc->default_permissions) ··· 1183 1180 fc->destroy = ctx->destroy; 1184 1181 fc->no_control = ctx->no_control; 1185 1182 fc->no_force_umount = ctx->no_force_umount; 1183 + fc->no_mount_options = ctx->no_mount_options; 1186 1184 1187 1185 err = -ENOMEM; 1188 1186 root = fuse_get_root_inode(sb, ctx->rootmode);
+113 -56
fs/fuse/virtio_fs.c
··· 30 30 struct virtqueue *vq; /* protected by ->lock */ 31 31 struct work_struct done_work; 32 32 struct list_head queued_reqs; 33 + struct list_head end_reqs; /* End these requests */ 33 34 struct delayed_work dispatch_work; 34 35 struct fuse_dev *fud; 35 36 bool connected; ··· 55 54 struct list_head list; 56 55 }; 57 56 + 57 + static int virtio_fs_enqueue_req(struct virtio_fs_vq *fsvq, 58 + struct fuse_req *req, bool in_flight); 59 + 58 60 static inline struct virtio_fs_vq *vq_to_fsvq(struct virtqueue *vq) 59 61 { 60 62 struct virtio_fs *fs = vq->vdev->priv; ··· 68 64 static inline struct fuse_pqueue *vq_to_fpq(struct virtqueue *vq) 69 65 { 70 66 return &vq_to_fsvq(vq)->fud->pq; 67 + } 68 + 69 + /* Should be called with fsvq->lock held. */ 70 + static inline void inc_in_flight_req(struct virtio_fs_vq *fsvq) 71 + { 72 + fsvq->in_flight++; 73 + } 74 + 75 + /* Should be called with fsvq->lock held. */ 76 + static inline void dec_in_flight_req(struct virtio_fs_vq *fsvq) 77 + { 78 + WARN_ON(fsvq->in_flight <= 0); 79 + fsvq->in_flight--; 71 80 } 72 81 73 82 static void release_virtio_fs_obj(struct kref *ref) ··· 126 109 flush_delayed_work(&fsvq->dispatch_work); 127 110 } 128 111 129 - static inline void drain_hiprio_queued_reqs(struct virtio_fs_vq *fsvq) 130 - { 131 - struct virtio_fs_forget *forget; 132 - 133 - spin_lock(&fsvq->lock); 134 - while (1) { 135 - forget = list_first_entry_or_null(&fsvq->queued_reqs, 136 - struct virtio_fs_forget, list); 137 - if (!forget) 138 - break; 139 - list_del(&forget->list); 140 - kfree(forget); 141 - } 142 - spin_unlock(&fsvq->lock); 143 - } 144 - 145 112 static void virtio_fs_drain_all_queues(struct virtio_fs *fs) 146 113 { 147 114 struct virtio_fs_vq *fsvq; ··· 133 132 134 133 for (i = 0; i < fs->nvqs; i++) { 135 134 fsvq = &fs->vqs[i]; 136 - if (i == VQ_HIPRIO) 137 - drain_hiprio_queued_reqs(fsvq); 138 - 139 135 virtio_fs_drain_queue(fsvq); 140 136 } 141 137 } ··· 251 253 252 254 while ((req = virtqueue_get_buf(vq, &len)) != NULL) { 253 255 kfree(req); 254 - fsvq->in_flight--; 256 + dec_in_flight_req(fsvq); 255 257 } 256 258 } while (!virtqueue_enable_cb(vq) && likely(!virtqueue_is_broken(vq))); 257 259 spin_unlock(&fsvq->lock); 258 260 } 259 261 260 - static void virtio_fs_dummy_dispatch_work(struct work_struct *work) 262 + static void virtio_fs_request_dispatch_work(struct work_struct *work) 261 263 { 264 + struct fuse_req *req; 265 + struct virtio_fs_vq *fsvq = container_of(work, struct virtio_fs_vq, 266 + dispatch_work.work); 267 + struct fuse_conn *fc = fsvq->fud->fc; 268 + int ret; 269 + 270 + pr_debug("virtio-fs: worker %s called.\n", __func__); 271 + while (1) { 272 + spin_lock(&fsvq->lock); 273 + req = list_first_entry_or_null(&fsvq->end_reqs, struct fuse_req, 274 + list); 275 + if (!req) { 276 + spin_unlock(&fsvq->lock); 277 + break; 278 + } 279 + 280 + list_del_init(&req->list); 281 + spin_unlock(&fsvq->lock); 282 + fuse_request_end(fc, req); 283 + } 284 + 285 + /* Dispatch pending requests */ 286 + while (1) { 287 + spin_lock(&fsvq->lock); 288 + req = list_first_entry_or_null(&fsvq->queued_reqs, 289 + struct fuse_req, list); 290 + if (!req) { 291 + spin_unlock(&fsvq->lock); 292 + return; 293 + } 294 + list_del_init(&req->list); 295 + spin_unlock(&fsvq->lock); 296 + 297 + ret = virtio_fs_enqueue_req(fsvq, req, true); 298 + if (ret < 0) { 299 + if (ret == -ENOMEM || ret == -ENOSPC) { 300 + spin_lock(&fsvq->lock); 301 + list_add_tail(&req->list, &fsvq->queued_reqs); 302 + schedule_delayed_work(&fsvq->dispatch_work, 303 + msecs_to_jiffies(1));
304 + spin_unlock(&fsvq->lock); 305 + return; 306 + } 307 + req->out.h.error = ret; 308 + spin_lock(&fsvq->lock); 309 + dec_in_flight_req(fsvq); 310 + spin_unlock(&fsvq->lock); 311 + pr_err("virtio-fs: virtio_fs_enqueue_req() failed %d\n", 312 + ret); 313 + fuse_request_end(fc, req); 314 + } 315 + } 262 316 } 263 317 264 318 static void virtio_fs_hiprio_dispatch_work(struct work_struct *work) ··· 336 286 337 287 list_del(&forget->list); 338 288 if (!fsvq->connected) { 289 + dec_in_flight_req(fsvq); 339 290 spin_unlock(&fsvq->lock); 340 291 kfree(forget); 341 292 continue; ··· 358 307 } else { 359 308 pr_debug("virtio-fs: Could not queue FORGET: err=%d. Dropping it.\n", 360 309 ret); 310 + dec_in_flight_req(fsvq); 361 311 kfree(forget); 362 312 } 363 313 spin_unlock(&fsvq->lock); 364 314 return; 365 315 } 366 316 367 - fsvq->in_flight++; 368 317 notify = virtqueue_kick_prepare(vq); 369 318 spin_unlock(&fsvq->lock); ··· 503 452 504 453 fuse_request_end(fc, req); 505 454 spin_lock(&fsvq->lock); 506 - fsvq->in_flight--; 455 + dec_in_flight_req(fsvq); 507 456 spin_unlock(&fsvq->lock); 508 457 } 509 458 } ··· 553 502 names[VQ_HIPRIO] = fs->vqs[VQ_HIPRIO].name; 554 503 INIT_WORK(&fs->vqs[VQ_HIPRIO].done_work, virtio_fs_hiprio_done_work); 555 504 INIT_LIST_HEAD(&fs->vqs[VQ_HIPRIO].queued_reqs); 505 + INIT_LIST_HEAD(&fs->vqs[VQ_HIPRIO].end_reqs); 556 506 INIT_DELAYED_WORK(&fs->vqs[VQ_HIPRIO].dispatch_work, 557 507 virtio_fs_hiprio_dispatch_work); 558 508 spin_lock_init(&fs->vqs[VQ_HIPRIO].lock); ··· 563 511 spin_lock_init(&fs->vqs[i].lock); 564 512 INIT_WORK(&fs->vqs[i].done_work, virtio_fs_requests_done_work); 565 513 INIT_DELAYED_WORK(&fs->vqs[i].dispatch_work, 566 - virtio_fs_dummy_dispatch_work); 514 + virtio_fs_request_dispatch_work); 567 515 INIT_LIST_HEAD(&fs->vqs[i].queued_reqs); 516 + INIT_LIST_HEAD(&fs->vqs[i].end_reqs); 568 517 snprintf(fs->vqs[i].name, sizeof(fs->vqs[i].name), 569 518 "requests.%u", i - VQ_REQUEST); 570 519 callbacks[i] = virtio_fs_vq_done; ··· 761 708 list_add_tail(&forget->list, &fsvq->queued_reqs); 762 709 schedule_delayed_work(&fsvq->dispatch_work, 763 710 msecs_to_jiffies(1)); 711 + inc_in_flight_req(fsvq); 764 712 } else { 765 713 pr_debug("virtio-fs: Could not queue FORGET: err=%d. Dropping it.\n", 766 714 ret); ··· 771 717 goto out; 772 718 } 773 719 774 - fsvq->in_flight++; 720 + inc_in_flight_req(fsvq); 775 721 notify = virtqueue_kick_prepare(vq); 776 722 777 723 spin_unlock(&fsvq->lock); ··· 873 819 874 820 /* Add a request to a virtqueue and kick the device */ 875 821 static int virtio_fs_enqueue_req(struct virtio_fs_vq *fsvq, 876 - struct fuse_req *req) 822 + struct fuse_req *req, bool in_flight) 877 823 { 878 824 /* requests need at least 4 elements */ 879 825 struct scatterlist *stack_sgs[6]; ··· 889 835 unsigned int i; 890 836 int ret; 891 837 bool notify; 838 + struct fuse_pqueue *fpq; 892 839 893 840 /* Does the sglist fit on the stack? */ 894 841 total_sgs = sg_count_fuse_req(req); ··· 944 889 goto out; 945 890 } 946 891 947 - fsvq->in_flight++; 892 + /* Request successfully sent. */
893 + fpq = &fsvq->fud->pq; 894 + spin_lock(&fpq->lock); 895 + list_add_tail(&req->list, fpq->processing); 896 + spin_unlock(&fpq->lock); 897 + set_bit(FR_SENT, &req->flags); 898 + /* matches barrier in request_wait_answer() */ 899 + smp_mb__after_atomic(); 900 + 901 + if (!in_flight) 902 + inc_in_flight_req(fsvq); 948 903 notify = virtqueue_kick_prepare(vq); 949 904 950 905 spin_unlock(&fsvq->lock); ··· 980 915 { 981 916 unsigned int queue_id = VQ_REQUEST; /* TODO multiqueue */ 982 917 struct virtio_fs *fs; 983 - struct fuse_conn *fc; 984 918 struct fuse_req *req; 985 - struct fuse_pqueue *fpq; 919 + struct virtio_fs_vq *fsvq; 986 920 int ret; 987 921 988 922 WARN_ON(list_empty(&fiq->pending)); ··· 992 928 spin_unlock(&fiq->lock); 993 929 994 930 fs = fiq->priv; 995 - fc = fs->vqs[queue_id].fud->fc; 996 931 997 932 pr_debug("%s: opcode %u unique %#llx nodeid %#llx in.len %u out.len %u\n", 998 933 __func__, req->in.h.opcode, req->in.h.unique, 999 934 req->in.h.nodeid, req->in.h.len, 1000 935 fuse_len_args(req->args->out_numargs, req->args->out_args)); 1001 936 1002 - fpq = &fs->vqs[queue_id].fud->pq; 1003 - spin_lock(&fpq->lock); 1004 - if (!fpq->connected) { 1005 - spin_unlock(&fpq->lock); 1006 - req->out.h.error = -ENODEV; 1007 - pr_err("virtio-fs: %s disconnected\n", __func__); 1008 - fuse_request_end(fc, req); 1009 - return; 1010 - } 1011 - list_add_tail(&req->list, fpq->processing); 1012 - spin_unlock(&fpq->lock); 1013 - set_bit(FR_SENT, &req->flags); 1014 - /* matches barrier in request_wait_answer() */ 1015 - smp_mb__after_atomic(); 1016 - 1017 - retry: 1018 - ret = virtio_fs_enqueue_req(&fs->vqs[queue_id], req); 937 + fsvq = &fs->vqs[queue_id]; 938 + ret = virtio_fs_enqueue_req(fsvq, req, false); 1019 939 if (ret < 0) { 1020 940 if (ret == -ENOMEM || ret == -ENOSPC) { 1021 - /* Virtqueue full. Retry submission */ 1022 - /* TODO use completion instead of timeout */ 1023 - usleep_range(20, 30); 1024 - goto retry; 941 + /* 942 + * Virtqueue full. Retry submission from worker 943 + * context as we might be holding fc->bg_lock. 944 + */ 945 + spin_lock(&fsvq->lock); 946 + list_add_tail(&req->list, &fsvq->queued_reqs); 947 + inc_in_flight_req(fsvq); 948 + schedule_delayed_work(&fsvq->dispatch_work, 949 + msecs_to_jiffies(1)); 950 + spin_unlock(&fsvq->lock); 951 + return; 1025 952 } 1026 953 req->out.h.error = ret; 1027 954 pr_err("virtio-fs: virtio_fs_enqueue_req() failed %d\n", ret); 1028 - spin_lock(&fpq->lock); 1029 - clear_bit(FR_SENT, &req->flags); 1030 - list_del_init(&req->list); 1031 - spin_unlock(&fpq->lock); 1032 - fuse_request_end(fc, req); 955 + 956 + /* Can't end request in submission context. Use a worker */ 957 + spin_lock(&fsvq->lock); 958 + list_add_tail(&req->list, &fsvq->end_reqs); 959 + schedule_delayed_work(&fsvq->dispatch_work, 0); 960 + spin_unlock(&fsvq->lock); 1033 961 return; 1034 962 } 1035 963 } ··· 1048 992 .destroy = true, 1049 993 .no_control = true, 1050 994 .no_force_umount = true, 995 + .no_mount_options = true, 1051 996 }; 1052 997 1053 998 mutex_lock(&virtio_fs_mutex);
+13 -7
fs/gfs2/ops_fstype.c
··· 1540 1540 { 1541 1541 struct gfs2_args *args; 1542 1542 1543 - args = kzalloc(sizeof(*args), GFP_KERNEL); 1543 + args = kmalloc(sizeof(*args), GFP_KERNEL); 1544 1544 if (args == NULL) 1545 1545 return -ENOMEM; 1546 1546 1547 - args->ar_quota = GFS2_QUOTA_DEFAULT; 1548 - args->ar_data = GFS2_DATA_DEFAULT; 1549 - args->ar_commit = 30; 1550 - args->ar_statfs_quantum = 30; 1551 - args->ar_quota_quantum = 60; 1552 - args->ar_errors = GFS2_ERRORS_DEFAULT; 1547 + if (fc->purpose == FS_CONTEXT_FOR_RECONFIGURE) { 1548 + struct gfs2_sbd *sdp = fc->root->d_sb->s_fs_info; 1553 1549 1550 + *args = sdp->sd_args; 1551 + } else { 1552 + memset(args, 0, sizeof(*args)); 1553 + args->ar_quota = GFS2_QUOTA_DEFAULT; 1554 + args->ar_data = GFS2_DATA_DEFAULT; 1555 + args->ar_commit = 30; 1556 + args->ar_statfs_quantum = 30; 1557 + args->ar_quota_quantum = 60; 1558 + args->ar_errors = GFS2_ERRORS_DEFAULT; 1559 + } 1554 1560 fc->fs_private = args; 1555 1561 fc->ops = &gfs2_context_ops; 1556 1562 return 0;
+10 -4
fs/io_uring.c
··· 1124 1124 1125 1125 kiocb->ki_flags |= IOCB_HIPRI; 1126 1126 kiocb->ki_complete = io_complete_rw_iopoll; 1127 + req->result = 0; 1127 1128 } else { 1128 1129 if (kiocb->ki_flags & IOCB_HIPRI) 1129 1130 return -EINVAL; ··· 2414 2413 if (ret) { 2415 2414 if (ret != -EIOCBQUEUED) { 2416 2415 io_free_req(req); 2416 + __io_free_req(shadow); 2417 2417 io_cqring_add_event(ctx, s->sqe->user_data, ret); 2418 2418 return 0; 2419 2419 } ··· 3830 3828 if (ret) 3831 3829 goto err; 3832 3830 3833 - ret = io_uring_get_fd(ctx); 3834 - if (ret < 0) 3835 - goto err; 3836 - 3837 3831 memset(&p->sq_off, 0, sizeof(p->sq_off)); 3838 3832 p->sq_off.head = offsetof(struct io_rings, sq.head); 3839 3833 p->sq_off.tail = offsetof(struct io_rings, sq.tail); ··· 3846 3848 p->cq_off.ring_entries = offsetof(struct io_rings, cq_ring_entries); 3847 3849 p->cq_off.overflow = offsetof(struct io_rings, cq_overflow); 3848 3850 p->cq_off.cqes = offsetof(struct io_rings, cqes); 3851 + 3852 + /* 3853 + * Install ring fd as the very last thing, so we don't risk someone 3854 + * having closed it before we finish setup 3855 + */ 3856 + ret = io_uring_get_fd(ctx); 3857 + if (ret < 0) 3858 + goto err; 3849 3859 3850 3860 p->features = IORING_FEAT_SINGLE_MMAP; 3851 3861 return ret;
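The io_uring_setup() reordering applies a general rule: installing a file descriptor publishes the object to userspace immediately, so it must be the very last step, after everything the ->release() path may touch is initialized. A generic sketch of the pattern (all names hypothetical, not io_uring's actual helpers):

static int sketch_setup(void)
{
        struct sketch_ctx *ctx;
        int fd;

        ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
        if (!ctx)
                return -ENOMEM;

        sketch_fill_offsets(ctx);       /* complete ALL setup first */

        /* Publish last: once the fd exists, a concurrent close() may
         * invoke ->release() at any moment.
         */
        fd = anon_inode_getfd("[sketch]", &sketch_fops, ctx,
                              O_RDWR | O_CLOEXEC);
        if (fd < 0)
                kfree(ctx);
        return fd;
}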
+11 -1
fs/nfs/delegation.c
··· 53 53 return false; 54 54 } 55 55 56 + struct nfs_delegation *nfs4_get_valid_delegation(const struct inode *inode) 57 + { 58 + struct nfs_delegation *delegation; 59 + 60 + delegation = rcu_dereference(NFS_I(inode)->delegation); 61 + if (nfs4_is_valid_delegation(delegation, 0)) 62 + return delegation; 63 + return NULL; 64 + } 65 + 56 66 static int 57 67 nfs4_do_check_delegation(struct inode *inode, fmode_t flags, bool mark) 58 68 { ··· 1191 1181 if (delegation != NULL && 1192 1182 nfs4_stateid_match_other(dst, &delegation->stateid)) { 1193 1183 dst->seqid = delegation->stateid.seqid; 1194 - return ret; 1184 + ret = true; 1195 1185 } 1196 1186 rcu_read_unlock(); 1197 1187 out:
+1
fs/nfs/delegation.h
··· 68 68 bool nfs4_copy_delegation_stateid(struct inode *inode, fmode_t flags, nfs4_stateid *dst, const struct cred **cred); 69 69 bool nfs4_refresh_delegation_stateid(nfs4_stateid *dst, struct inode *inode); 70 70 71 + struct nfs_delegation *nfs4_get_valid_delegation(const struct inode *inode); 71 72 void nfs_mark_delegation_referenced(struct nfs_delegation *delegation); 72 73 int nfs4_have_delegation(struct inode *inode, fmode_t flags); 73 74 int nfs4_check_delegation(struct inode *inode, fmode_t flags);
+2 -5
fs/nfs/nfs4proc.c
··· 1440 1440 return 0; 1441 1441 if ((delegation->type & fmode) != fmode) 1442 1442 return 0; 1443 - if (test_bit(NFS_DELEGATION_RETURNING, &delegation->flags)) 1444 - return 0; 1445 1443 switch (claim) { 1446 1444 case NFS4_OPEN_CLAIM_NULL: 1447 1445 case NFS4_OPEN_CLAIM_FH: ··· 1808 1810 static struct nfs4_state *nfs4_try_open_cached(struct nfs4_opendata *opendata) 1809 1811 { 1810 1812 struct nfs4_state *state = opendata->state; 1811 - struct nfs_inode *nfsi = NFS_I(state->inode); 1812 1813 struct nfs_delegation *delegation; 1813 1814 int open_mode = opendata->o_arg.open_flags; 1814 1815 fmode_t fmode = opendata->o_arg.fmode; ··· 1824 1827 } 1825 1828 spin_unlock(&state->owner->so_lock); 1826 1829 rcu_read_lock(); 1827 - delegation = rcu_dereference(nfsi->delegation); 1830 + delegation = nfs4_get_valid_delegation(state->inode); 1828 1831 if (!can_open_delegated(delegation, fmode, claim)) { 1829 1832 rcu_read_unlock(); 1830 1833 break; ··· 2368 2371 data->o_arg.open_flags, claim)) 2369 2372 goto out_no_action; 2370 2373 rcu_read_lock(); 2371 - delegation = rcu_dereference(NFS_I(data->state->inode)->delegation); 2374 + delegation = nfs4_get_valid_delegation(data->state->inode); 2372 2375 if (can_open_delegated(delegation, data->o_arg.fmode, claim)) 2373 2376 goto unlock_no_action; 2374 2377 rcu_read_unlock();
+6
include/linux/dynamic_debug.h
··· 204 204 do { if (0) printk(KERN_DEBUG pr_fmt(fmt), ##__VA_ARGS__); } while (0) 205 205 #define dynamic_dev_dbg(dev, fmt, ...) \ 206 206 do { if (0) dev_printk(KERN_DEBUG, dev, fmt, ##__VA_ARGS__); } while (0) 207 + #define dynamic_hex_dump(prefix_str, prefix_type, rowsize, \ 208 + groupsize, buf, len, ascii) \ 209 + do { if (0) \ 210 + print_hex_dump(KERN_DEBUG, prefix_str, prefix_type, \ 211 + rowsize, groupsize, buf, len, ascii); \ 212 + } while (0) 207 213 #endif 208 214 209 215 #endif
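The new dynamic_hex_dump() stub reuses the do { if (0) ... } while (0) idiom of the surrounding stubs: the arguments remain visible to the compiler, so format strings stay type-checked and no "set but not used" warnings appear, yet the optimizer eliminates the call entirely. The idiom in isolation (macro name hypothetical):

/* Compiles to nothing, but keeps every argument used and type-checked: */
#define sketch_dbg(fmt, ...) \
        do { if (0) printk(KERN_DEBUG fmt, ##__VA_ARGS__); } while (0)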
+16 -2
include/linux/efi.h
··· 1579 1579 efi_status_t efi_get_memory_map(efi_system_table_t *sys_table_arg, 1580 1580 struct efi_boot_memmap *map); 1581 1581 1582 + efi_status_t efi_low_alloc_above(efi_system_table_t *sys_table_arg, 1583 + unsigned long size, unsigned long align, 1584 + unsigned long *addr, unsigned long min); 1585 + 1586 + static inline 1582 1587 efi_status_t efi_low_alloc(efi_system_table_t *sys_table_arg, 1583 1588 unsigned long size, unsigned long align, 1584 - unsigned long *addr); 1589 + unsigned long *addr) 1590 + { 1591 + /* 1592 + * Don't allocate at 0x0. It will confuse code that 1593 + * checks pointers against NULL. Skip the first 8 1594 + * bytes so we start at a nice even number. 1595 + */ 1596 + return efi_low_alloc_above(sys_table_arg, size, align, addr, 0x8); 1597 + } 1585 1598 1586 1599 efi_status_t efi_high_alloc(efi_system_table_t *sys_table_arg, 1587 1600 unsigned long size, unsigned long align, ··· 1605 1592 unsigned long image_size, 1606 1593 unsigned long alloc_size, 1607 1594 unsigned long preferred_addr, 1608 - unsigned long alignment); 1595 + unsigned long alignment, 1596 + unsigned long min_addr); 1609 1597 1610 1598 efi_status_t handle_cmdline_files(efi_system_table_t *sys_table_arg, 1611 1599 efi_loaded_image_t *image,
-1
include/linux/filter.h
··· 1099 1099 1100 1100 #endif /* CONFIG_BPF_JIT */ 1101 1101 1102 - void bpf_prog_kallsyms_del_subprogs(struct bpf_prog *fp); 1103 1102 void bpf_prog_kallsyms_del_all(struct bpf_prog *fp); 1104 1103 1105 1104 #define BPF_ANC BIT(15)
+23
include/linux/gfp.h
··· 325 325 return !!(gfp_flags & __GFP_DIRECT_RECLAIM); 326 326 } 327 327 328 + /** 329 + * gfpflags_normal_context - is gfp_flags a normal sleepable context? 330 + * @gfp_flags: gfp_flags to test 331 + * 332 + * Test whether @gfp_flags indicates that the allocation is from the 333 + * %current context and allowed to sleep. 334 + * 335 + * An allocation being allowed to block doesn't mean it owns the %current 336 + * context. When direct reclaim path tries to allocate memory, the 337 + * allocation context is nested inside whatever %current was doing at the 338 + * time of the original allocation. The nested allocation may be allowed 339 + * to block but modifying anything %current owns can corrupt the outer 340 + * context's expectations. 341 + * 342 + * %true result from this function indicates that the allocation context 343 + * can sleep and use anything that's associated with %current. 344 + */ 345 + static inline bool gfpflags_normal_context(const gfp_t gfp_flags) 346 + { 347 + return (gfp_flags & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC)) == 348 + __GFP_DIRECT_RECLAIM; 349 + } 350 + 328 351 #ifdef CONFIG_HIGHMEM 329 352 #define OPT_ZONE_HIGHMEM ZONE_HIGHMEM 330 353 #else
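Working the new predicate through common flag combinations makes the intent concrete (illustrative, derived directly from the definition above):

/*
 * gfpflags_normal_context(GFP_KERNEL)
 *      __GFP_DIRECT_RECLAIM set, __GFP_MEMALLOC clear  -> true
 * gfpflags_normal_context(GFP_ATOMIC)
 *      __GFP_DIRECT_RECLAIM clear                      -> false
 * gfpflags_normal_context(GFP_KERNEL | __GFP_MEMALLOC)
 *      blockable, but flagged as reclaim context       -> false
 */

The include/net/sock.h hunk later in this series switches sk_page_frag() to exactly this helper.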
-1
include/linux/if_macvlan.h
··· 29 29 netdev_features_t set_features; 30 30 enum macvlan_mode mode; 31 31 u16 flags; 32 - int nest_level; 33 32 unsigned int macaddr_count; 34 33 #ifdef CONFIG_NET_POLL_CONTROLLER 35 34 struct netpoll *netpoll;
+1
include/linux/if_team.h
··· 223 223 atomic_t count_pending; 224 224 struct delayed_work dw; 225 225 } mcast_rejoin; 226 + struct lock_class_key team_lock_key; 226 227 long mode_priv[TEAM_MODE_PRIV_LONGS]; 227 228 }; 228 229
-11
include/linux/if_vlan.h
··· 182 182 #ifdef CONFIG_NET_POLL_CONTROLLER 183 183 struct netpoll *netpoll; 184 184 #endif 185 - unsigned int nest_level; 186 185 }; 187 186 188 187 static inline struct vlan_dev_priv *vlan_dev_priv(const struct net_device *dev) ··· 220 221 221 222 extern bool vlan_uses_dev(const struct net_device *dev); 222 223 223 - static inline int vlan_get_encap_level(struct net_device *dev) 224 - { 225 - BUG_ON(!is_vlan_dev(dev)); 226 - return vlan_dev_priv(dev)->nest_level; 227 - } 228 224 #else 229 225 static inline struct net_device * 230 226 __vlan_find_dev_deep_rcu(struct net_device *real_dev, ··· 288 294 static inline bool vlan_uses_dev(const struct net_device *dev) 289 295 { 290 296 return false; 291 - } 292 - static inline int vlan_get_encap_level(struct net_device *dev) 293 - { 294 - BUG(); 295 - return 0; 296 297 } 297 298 #endif 298 299
+1 -2
include/linux/mlx5/mlx5_ifc.h
··· 1545 1545 }; 1546 1546 1547 1547 union mlx5_ifc_dest_format_struct_flow_counter_list_auto_bits { 1548 - struct mlx5_ifc_dest_format_struct_bits dest_format_struct; 1548 + struct mlx5_ifc_extended_dest_format_bits extended_dest_format; 1549 1549 struct mlx5_ifc_flow_counter_list_bits flow_counter_list; 1550 - u8 reserved_at_0[0x40]; 1551 1550 }; 1552 1551 1553 1552 struct mlx5_ifc_fte_match_param_bits {
+27 -34
include/linux/netdevice.h
··· 925 925 struct devlink; 926 926 struct tlsdev_ops; 927 927 928 + 928 929 /* 929 930 * This structure defines the management hooks for network devices. 930 931 * The following hooks can be defined; unless noted otherwise, they are ··· 1422 1421 void (*ndo_dfwd_del_station)(struct net_device *pdev, 1423 1422 void *priv); 1424 1423 1425 - int (*ndo_get_lock_subclass)(struct net_device *dev); 1426 1424 int (*ndo_set_tx_maxrate)(struct net_device *dev, 1427 1425 int queue_index, 1428 1426 u32 maxrate); ··· 1649 1649 * @perm_addr: Permanent hw address 1650 1650 * @addr_assign_type: Hw address assignment type 1651 1651 * @addr_len: Hardware address length 1652 + * @upper_level: Maximum depth level of upper devices. 1653 + * @lower_level: Maximum depth level of lower devices. 1652 1654 * @neigh_priv_len: Used in neigh_alloc() 1653 1655 * @dev_id: Used to differentiate devices that share 1654 1656 * the same link layer address ··· 1760 1758 * @phydev: Physical device may attach itself 1761 1759 * for hardware timestamping 1762 1760 * @sfp_bus: attached &struct sfp_bus structure. 1763 - * 1764 - * @qdisc_tx_busylock: lockdep class annotating Qdisc->busylock spinlock 1765 - * @qdisc_running_key: lockdep class annotating Qdisc->running seqcount 1761 + * @qdisc_tx_busylock_key: lockdep class annotating Qdisc->busylock 1762 spinlock 1763 + * @qdisc_running_key: lockdep class annotating Qdisc->running seqcount 1764 + * @qdisc_xmit_lock_key: lockdep class annotating 1765 + * netdev_queue->_xmit_lock spinlock 1766 + * @addr_list_lock_key: lockdep class annotating 1767 + * net_device->addr_list_lock spinlock 1766 1768 * 1767 1769 * @proto_down: protocol port state information can be sent to the 1768 1770 * switch driver and used to set the phys state of the ··· 1881 1875 unsigned char perm_addr[MAX_ADDR_LEN]; 1882 1876 unsigned char addr_assign_type; 1883 1877 unsigned char addr_len; 1878 + unsigned char upper_level; 1879 + unsigned char lower_level; 1884 1880 unsigned short neigh_priv_len; 1885 1881 unsigned short dev_id; 1886 1882 unsigned short dev_port; ··· 2053 2045 #endif 2054 2046 struct phy_device *phydev; 2055 2047 struct sfp_bus *sfp_bus; 2056 - struct lock_class_key *qdisc_tx_busylock; 2057 - struct lock_class_key *qdisc_running_key; 2048 + struct lock_class_key qdisc_tx_busylock_key; 2049 + struct lock_class_key qdisc_running_key; 2050 + struct lock_class_key qdisc_xmit_lock_key; 2051 + struct lock_class_key addr_list_lock_key; 2058 2052 bool proto_down; 2059 2053 unsigned wol_enabled:1; 2060 2054 }; ··· 2132 2122 2133 2123 for (i = 0; i < dev->num_tx_queues; i++) 2134 2124 f(dev, &dev->_tx[i], arg); 2135 - } 2136 - 2137 - #define netdev_lockdep_set_classes(dev) \ 2138 - { \ 2139 - static struct lock_class_key qdisc_tx_busylock_key; \ 2140 - static struct lock_class_key qdisc_running_key; \ 2141 - static struct lock_class_key qdisc_xmit_lock_key; \ 2142 - static struct lock_class_key dev_addr_list_lock_key; \ 2143 - unsigned int i; \ 2144 - \ 2145 - (dev)->qdisc_tx_busylock = &qdisc_tx_busylock_key; \ 2146 - (dev)->qdisc_running_key = &qdisc_running_key; \ 2147 - lockdep_set_class(&(dev)->addr_list_lock, \ 2148 - &dev_addr_list_lock_key); \ 2149 - for (i = 0; i < (dev)->num_tx_queues; i++) \ 2150 - lockdep_set_class(&(dev)->_tx[i]._xmit_lock, \ 2151 - &qdisc_xmit_lock_key); \ 2152 2125 } 2153 2126 2154 2127 u16 netdev_pick_tx(struct net_device *dev, struct sk_buff *skb, ··· 3132 3139 } 3133 3140 3134 3141 void netif_tx_stop_all_queues(struct net_device *dev); 3142 + void netdev_update_lockdep_key(struct net_device *dev); 3135 3143
3136 3144 static inline bool netif_tx_queue_stopped(const struct netdev_queue *dev_queue) 3137 3145 { ··· 4050 4056 spin_lock(&dev->addr_list_lock); 4051 4057 } 4052 4058 4053 - static inline void netif_addr_lock_nested(struct net_device *dev) 4054 - { 4055 - int subclass = SINGLE_DEPTH_NESTING; 4056 - 4057 - if (dev->netdev_ops->ndo_get_lock_subclass) 4058 - subclass = dev->netdev_ops->ndo_get_lock_subclass(dev); 4059 - 4060 - spin_lock_nested(&dev->addr_list_lock, subclass); 4061 - } 4062 - 4063 4059 static inline void netif_addr_lock_bh(struct net_device *dev) 4064 4060 { 4065 4061 spin_lock_bh(&dev->addr_list_lock); ··· 4313 4329 struct netlink_ext_ack *extack); 4314 4330 void netdev_upper_dev_unlink(struct net_device *dev, 4315 4331 struct net_device *upper_dev); 4332 + int netdev_adjacent_change_prepare(struct net_device *old_dev, 4333 + struct net_device *new_dev, 4334 + struct net_device *dev, 4335 + struct netlink_ext_ack *extack); 4336 + void netdev_adjacent_change_commit(struct net_device *old_dev, 4337 + struct net_device *new_dev, 4338 + struct net_device *dev); 4339 + void netdev_adjacent_change_abort(struct net_device *old_dev, 4340 + struct net_device *new_dev, 4341 + struct net_device *dev); 4316 4342 void netdev_adjacent_rename_links(struct net_device *dev, char *oldname); 4317 4343 void *netdev_lower_dev_get_private(struct net_device *dev, 4318 4344 struct net_device *lower_dev); ··· 4334 4340 extern u8 netdev_rss_key[NETDEV_RSS_KEY_LEN] __read_mostly; 4335 4341 void netdev_rss_key_fill(void *buffer, size_t len); 4336 4342 4337 - int dev_get_nest_level(struct net_device *dev); 4338 4343 int skb_checksum_help(struct sk_buff *skb); 4339 4344 int skb_crc32c_csum_help(struct sk_buff *skb); 4340 4345 int skb_csum_hwoffload_help(struct sk_buff *skb,
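The netdevice.h rework replaces the one-static-key-per-driver macro with lock_class_key instances embedded in each net_device, so stacked devices (a vlan on a bond on a NIC) get distinct lockdep classes without the old nest-level bookkeeping. A sketch of how such per-device keys are wired up (illustrative, not the full patch; the field names are from the hunk above):

static void sketch_register_lockdep(struct net_device *dev)
{
        /* A dynamically registered key gives this device its own
         * lockdep class, distinct from every other device's.
         */
        lockdep_register_key(&dev->addr_list_lock_key);
        lockdep_set_class(&dev->addr_list_lock, &dev->addr_list_lock_key);
}

The new netdev_update_lockdep_key() declared above exists to refresh such keys when a device's place in a stack changes.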
+1 -1
include/linux/perf_event.h
··· 292 292 * -EBUSY -- @event is for this PMU but PMU temporarily unavailable 293 293 * -EINVAL -- @event is for this PMU but @event is not valid 294 294 * -EOPNOTSUPP -- @event is for this PMU, @event is valid, but not supported 295 - * -EACCESS -- @event is for this PMU, @event is valid, but no privilidges 295 + * -EACCES -- @event is for this PMU, @event is valid, but no privileges 296 296 * 297 297 * 0 -- @event is for this PMU and valid 298 298 *
+3
include/linux/platform_data/dma-imx-sdma.h
··· 51 51 /* End of v2 array */ 52 52 s32 zcanfd_2_mcu_addr; 53 53 s32 zqspi_2_mcu_addr; 54 + s32 mcu_2_ecspi_addr; 54 55 /* End of v3 array */ 56 + s32 mcu_2_zqspi_addr; 57 + /* End of v4 array */ 55 58 }; 56 59 57 60 /**
+1
include/linux/security.h
··· 105 105 LOCKDOWN_NONE, 106 106 LOCKDOWN_MODULE_SIGNATURE, 107 107 LOCKDOWN_DEV_MEM, 108 + LOCKDOWN_EFI_TEST, 108 109 LOCKDOWN_KEXEC, 109 110 LOCKDOWN_HIBERNATION, 110 111 LOCKDOWN_PCI_ACCESS,
+26 -10
include/linux/skbuff.h
··· 1354 1354 return skb->hash; 1355 1355 } 1356 1356 1357 - __u32 skb_get_hash_perturb(const struct sk_buff *skb, u32 perturb); 1357 + __u32 skb_get_hash_perturb(const struct sk_buff *skb, 1358 + const siphash_key_t *perturb); 1358 1359 1359 1360 static inline __u32 skb_get_hash_raw(const struct sk_buff *skb) 1360 1361 { ··· 1494 1493 { 1495 1494 return list->next == (const struct sk_buff *) list; 1496 1495 } 1496 + 1497 + /** 1498 + * skb_queue_empty_lockless - check if a queue is empty 1499 + * @list: queue head 1500 + * 1501 + * Returns true if the queue is empty, false otherwise. 1502 + * This variant can be used in lockless contexts. 1503 + */ 1504 + static inline bool skb_queue_empty_lockless(const struct sk_buff_head *list) 1505 + { 1506 + return READ_ONCE(list->next) == (const struct sk_buff *) list; 1507 + } 1508 + 1497 1509 1498 1510 /** 1499 1511 * skb_queue_is_last - check if skb is the last entry in the queue ··· 1861 1847 struct sk_buff *prev, struct sk_buff *next, 1862 1848 struct sk_buff_head *list) 1863 1849 { 1864 - newsk->next = next; 1865 - newsk->prev = prev; 1866 - next->prev = prev->next = newsk; 1850 + /* see skb_queue_empty_lockless() for the opposite READ_ONCE() */ 1851 + WRITE_ONCE(newsk->next, next); 1852 + WRITE_ONCE(newsk->prev, prev); 1853 + WRITE_ONCE(next->prev, newsk); 1854 + WRITE_ONCE(prev->next, newsk); 1867 1855 list->qlen++; 1868 1856 } 1869 1857 ··· 1876 1860 struct sk_buff *first = list->next; 1877 1861 struct sk_buff *last = list->prev; 1878 1862 1879 - first->prev = prev; 1880 - prev->next = first; 1863 + WRITE_ONCE(first->prev, prev); 1864 + WRITE_ONCE(prev->next, first); 1881 1865 1882 - last->next = next; 1883 - next->prev = last; 1866 + WRITE_ONCE(last->next, next); 1867 + WRITE_ONCE(next->prev, last); 1884 1868 } 1885 1869 1886 1870 /** ··· 2021 2005 next = skb->next; 2022 2006 prev = skb->prev; 2023 2007 skb->next = skb->prev = NULL; 2024 - next->prev = prev; 2025 - prev->next = next; 2008 + WRITE_ONCE(next->prev, prev); 2009 + WRITE_ONCE(prev->next, next); 2026 2010 } 2027 2011 2028 2012 /**
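The skbuff.h hunks pair WRITE_ONCE() stores in the insert/unlink primitives with a READ_ONCE() load in skb_queue_empty_lockless(), so a reader may test emptiness without taking the queue lock. A sketch of the consumer idiom this enables (hypothetical poll helper):

static __poll_t sketch_poll_mask(struct sock *sk)
{
        __poll_t mask = 0;

        /* No queue lock held: the READ_ONCE() inside
         * skb_queue_empty_lockless() pairs with the WRITE_ONCE()
         * stores above, so the pointer load is neither torn nor
         * hoisted out of a loop by the compiler.
         */
        if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
                mask |= EPOLLIN | EPOLLRDNORM;
        return mask;
}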
+1 -1
include/linux/socket.h
··· 263 263 #define PF_MAX AF_MAX 264 264 265 265 /* Maximum queue length specifiable by listen. */ 266 - #define SOMAXCONN 128 266 + #define SOMAXCONN 4096 267 267 268 268 /* Flags we can use with send/ and recv. 269 269 Added those for 1003.1g not all are supported yet
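SOMAXCONN seeds the net.core.somaxconn sysctl, which caps the backlog passed to listen(); at the old default of 128, busy servers silently dropped pending connections unless the admin raised the sysctl. Typical userspace usage is unchanged and simply asks for the maximum:

/* Userspace sketch: the kernel clamps the backlog to
 * net.core.somaxconn, which now defaults to 4096.
 */
if (listen(sockfd, SOMAXCONN) < 0)
        perror("listen");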
+5
include/linux/sunrpc/bc_xprt.h
··· 64 64 return 0; 65 65 } 66 66 67 + static inline void xprt_destroy_backchannel(struct rpc_xprt *xprt, 68 + unsigned int max_reqs) 69 + { 70 + } 71 + 67 72 static inline bool svc_is_backchannel(const struct svc_rqst *rqstp) 68 73 { 69 74 return false;
-1
include/linux/virtio_vsock.h
··· 48 48 49 49 struct virtio_vsock_pkt { 50 50 struct virtio_vsock_hdr hdr; 51 - struct work_struct work; 52 51 struct list_head list; 53 52 /* socket refcnt not held, only use for cancellation */ 54 53 struct vsock_sock *vsk;
+1 -1
include/net/bonding.h
··· 203 203 struct slave __rcu *primary_slave; 204 204 struct bond_up_slave __rcu *slave_arr; /* Array of usable slaves */ 205 205 bool force_primary; 206 - u32 nest_level; 207 206 s32 slave_cnt; /* never change this value outside the attach/detach wrappers */ 208 207 int (*recv_probe)(const struct sk_buff *, struct bonding *, 209 208 struct slave *); ··· 238 239 struct dentry *debug_dir; 239 240 #endif /* CONFIG_DEBUG_FS */ 240 241 struct rtnl_link_stats64 bond_stats; 242 + struct lock_class_key stats_lock_key; 241 243 }; 242 244 243 245 #define bond_slave_get_rcu(dev) \
+3 -3
include/net/busy_poll.h
··· 122 122 static inline void sk_mark_napi_id(struct sock *sk, const struct sk_buff *skb) 123 123 { 124 124 #ifdef CONFIG_NET_RX_BUSY_POLL 125 - sk->sk_napi_id = skb->napi_id; 125 + WRITE_ONCE(sk->sk_napi_id, skb->napi_id); 126 126 #endif 127 127 sk_rx_queue_set(sk, skb); 128 128 } ··· 132 132 const struct sk_buff *skb) 133 133 { 134 134 #ifdef CONFIG_NET_RX_BUSY_POLL 135 - if (!sk->sk_napi_id) 136 - sk->sk_napi_id = skb->napi_id; 135 + if (!READ_ONCE(sk->sk_napi_id)) 136 + WRITE_ONCE(sk->sk_napi_id, skb->napi_id); 137 137 #endif 138 138 } 139 139
+2 -1
include/net/flow_dissector.h
··· 4 4 5 5 #include <linux/types.h> 6 6 #include <linux/in6.h> 7 + #include <linux/siphash.h> 7 8 #include <uapi/linux/if_ether.h> 8 9 9 10 /** ··· 277 276 struct flow_keys { 278 277 struct flow_dissector_key_control control; 279 278 #define FLOW_KEYS_HASH_START_FIELD basic 280 - struct flow_dissector_key_basic basic; 279 + struct flow_dissector_key_basic basic __aligned(SIPHASH_ALIGNMENT); 281 280 struct flow_dissector_key_tags tags; 282 281 struct flow_dissector_key_vlan vlan; 283 282 struct flow_dissector_key_vlan cvlan;
+1 -1
include/net/fq.h
··· 69 69 struct list_head backlogs; 70 70 spinlock_t lock; 71 71 u32 flows_cnt; 72 - u32 perturbation; 72 + siphash_key_t perturbation; 73 73 u32 limit; 74 74 u32 memory_limit; 75 75 u32 memory_usage;
+2 -2
include/net/fq_impl.h
··· 108 108 109 109 static u32 fq_flow_idx(struct fq *fq, struct sk_buff *skb) 110 110 { 111 - u32 hash = skb_get_hash_perturb(skb, fq->perturbation); 111 + u32 hash = skb_get_hash_perturb(skb, &fq->perturbation); 112 112 113 113 return reciprocal_scale(hash, fq->flows_cnt); 114 114 } ··· 308 308 INIT_LIST_HEAD(&fq->backlogs); 309 309 spin_lock_init(&fq->lock); 310 310 fq->flows_cnt = max_t(u32, flows_cnt, 1); 311 - fq->perturbation = prandom_u32(); 311 + get_random_bytes(&fq->perturbation, sizeof(fq->perturbation)); 312 312 fq->quantum = 300; 313 313 fq->limit = 8192; 314 314 fq->memory_limit = 16 << 20; /* 16 MBytes */
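The fq/flow-dissector hunks replace a 32-bit random xor "perturbation" with a full 128-bit siphash key, turning the flow-to-queue mapping into a keyed PRF that off-path attackers cannot brute-force. The pattern, reduced to a sketch ('keys' and 'flows_cnt' stand in for the real fields):

siphash_key_t perturb;
u32 idx;

get_random_bytes(&perturb, sizeof(perturb));    /* once, at init */
idx = reciprocal_scale((u32)siphash(&keys, sizeof(keys), &perturb),
                       flows_cnt);              /* keyed, not guessable */

The __aligned(SIPHASH_ALIGNMENT) annotation added to struct flow_keys exists because the fast siphash() path assumes suitably aligned input.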
+7 -3
include/net/hwbm.h
··· 21 21 int hwbm_pool_refill(struct hwbm_pool *bm_pool, gfp_t gfp); 22 22 int hwbm_pool_add(struct hwbm_pool *bm_pool, unsigned int buf_num); 23 23 #else 24 - void hwbm_buf_free(struct hwbm_pool *bm_pool, void *buf) {} 25 - int hwbm_pool_refill(struct hwbm_pool *bm_pool, gfp_t gfp) { return 0; } 26 - int hwbm_pool_add(struct hwbm_pool *bm_pool, unsigned int buf_num) 24 + static inline void hwbm_buf_free(struct hwbm_pool *bm_pool, void *buf) {} 25 + 26 + static inline int hwbm_pool_refill(struct hwbm_pool *bm_pool, gfp_t gfp) 27 + { return 0; } 28 + 29 + static inline int hwbm_pool_add(struct hwbm_pool *bm_pool, 30 + unsigned int buf_num) 27 31 { return 0; } 28 32 #endif /* CONFIG_HWBM */ 29 33 #endif /* _HWBM_H */
+2 -2
include/net/ip.h
··· 185 185 } 186 186 187 187 struct ip_frag_state { 188 - struct iphdr *iph; 188 + bool DF; 189 189 unsigned int hlen; 190 190 unsigned int ll_rs; 191 191 unsigned int mtu; ··· 196 196 }; 197 197 198 198 void ip_frag_init(struct sk_buff *skb, unsigned int hlen, unsigned int ll_rs, 199 - unsigned int mtu, struct ip_frag_state *state); 199 + unsigned int mtu, bool DF, struct ip_frag_state *state); 200 200 struct sk_buff *ip_frag_next(struct sk_buff *skb, 201 201 struct ip_frag_state *state); 202 202
+1
include/net/ip_vs.h
··· 889 889 struct delayed_work defense_work; /* Work handler */ 890 890 int drop_rate; 891 891 int drop_counter; 892 + int old_secure_tcp; 892 893 atomic_t dropentry; 893 894 /* locks in ctl.c */ 894 895 spinlock_t dropentry_lock; /* drop entry handling */
+1 -1
include/net/net_namespace.h
··· 342 342 #define __net_initconst __initconst 343 343 #endif 344 344 345 - int peernet2id_alloc(struct net *net, struct net *peer); 345 + int peernet2id_alloc(struct net *net, struct net *peer, gfp_t gfp); 346 346 int peernet2id(struct net *net, struct net *peer); 347 347 bool peernet_has_id(struct net *net, struct net *peer); 348 348 struct net *get_net_ns_by_id(struct net *net, int id);
+10 -5
include/net/sock.h
··· 954 954 { 955 955 int cpu = raw_smp_processor_id(); 956 956 957 - if (unlikely(sk->sk_incoming_cpu != cpu)) 958 - sk->sk_incoming_cpu = cpu; 957 + if (unlikely(READ_ONCE(sk->sk_incoming_cpu) != cpu)) 958 + WRITE_ONCE(sk->sk_incoming_cpu, cpu); 959 959 } 960 960 961 961 static inline void sock_rps_record_flow_hash(__u32 hash) ··· 2242 2242 * sk_page_frag - return an appropriate page_frag 2243 2243 * @sk: socket 2244 2244 * 2245 - * If socket allocation mode allows current thread to sleep, it means its 2246 - * safe to use the per task page_frag instead of the per socket one. 2245 + * Use the per task page_frag instead of the per socket one for 2246 + * optimization when we know that we're in the normal context and owns 2247 + * everything that's associated with %current. 2248 + * 2249 + * gfpflags_allow_blocking() isn't enough here as direct reclaim may nest 2250 + * inside other socket operations and end up recursing into sk_page_frag() 2251 + * while it's already in use. 2247 2252 */ 2248 2253 static inline struct page_frag *sk_page_frag(struct sock *sk) 2249 2254 { 2250 - if (gfpflags_allow_blocking(sk->sk_allocation)) 2255 + if (gfpflags_normal_context(sk->sk_allocation)) 2251 2256 return &current->task_frag; 2252 2257 2253 2258 return &sk->sk_frag;
+1
include/net/vxlan.h
··· 197 197 u8 offloaded:1; 198 198 __be32 remote_vni; 199 199 u32 remote_ifindex; 200 + struct net_device *remote_dev; 200 201 struct list_head list; 201 202 struct rcu_head rcu; 202 203 struct dst_cache dst_cache;
+1 -1
include/rdma/ib_verbs.h
··· 366 366 367 367 struct ib_cq_init_attr { 368 368 unsigned int cqe; 369 - int comp_vector; 369 + u32 comp_vector; 370 370 u32 flags; 371 371 }; 372 372
+37
include/uapi/linux/fuse.h
··· 38 38 * 39 39 * Protocol changelog: 40 40 * 41 + * 7.1: 42 + * - add the following messages: 43 + * FUSE_SETATTR, FUSE_SYMLINK, FUSE_MKNOD, FUSE_MKDIR, FUSE_UNLINK, 44 + * FUSE_RMDIR, FUSE_RENAME, FUSE_LINK, FUSE_OPEN, FUSE_READ, FUSE_WRITE, 45 + * FUSE_RELEASE, FUSE_FSYNC, FUSE_FLUSH, FUSE_SETXATTR, FUSE_GETXATTR, 46 + * FUSE_LISTXATTR, FUSE_REMOVEXATTR, FUSE_OPENDIR, FUSE_READDIR, 47 + * FUSE_RELEASEDIR 48 + * - add padding to messages to accommodate 32-bit servers on 64-bit kernels 49 + * 50 + * 7.2: 51 + * - add FOPEN_DIRECT_IO and FOPEN_KEEP_CACHE flags 52 + * - add FUSE_FSYNCDIR message 53 + * 54 + * 7.3: 55 + * - add FUSE_ACCESS message 56 + * - add FUSE_CREATE message 57 + * - add filehandle to fuse_setattr_in 58 + * 59 + * 7.4: 60 + * - add frsize to fuse_kstatfs 61 + * - clean up request size limit checking 62 + * 63 + * 7.5: 64 + * - add flags and max_write to fuse_init_out 65 + * 66 + * 7.6: 67 + * - add max_readahead to fuse_init_in and fuse_init_out 68 + * 69 + * 7.7: 70 + * - add FUSE_INTERRUPT message 71 + * - add POSIX file lock support 72 + * 73 + * 7.8: 74 + * - add lock_owner and flags fields to fuse_release_in 75 + * - add FUSE_BMAP message 76 + * - add FUSE_DESTROY message 77 + * 41 78 * 7.9: 42 79 * - new fuse_getattr_in input argument of GETATTR 43 80 * - add lk_flags in fuse_lk_in
+1 -1
kernel/bpf/core.c
··· 502 502 return WARN_ON_ONCE(bpf_adj_branches(prog, off, off + cnt, off, false)); 503 503 } 504 504 505 - void bpf_prog_kallsyms_del_subprogs(struct bpf_prog *fp) 505 + static void bpf_prog_kallsyms_del_subprogs(struct bpf_prog *fp) 506 506 { 507 507 int i; 508 508
+32 -1
kernel/bpf/devmap.c
··· 128 128 129 129 if (!dtab->n_buckets) /* Overflow check */ 130 130 return -EINVAL; 131 - cost += sizeof(struct hlist_head) * dtab->n_buckets; 131 + cost += (u64) sizeof(struct hlist_head) * dtab->n_buckets; 132 132 } 133 133 134 134 /* if map size is larger than memlock limit, reject it */ ··· 719 719 .map_check_btf = map_check_no_btf, 720 720 }; 721 721 722 + static void dev_map_hash_remove_netdev(struct bpf_dtab *dtab, 723 + struct net_device *netdev) 724 + { 725 + unsigned long flags; 726 + u32 i; 727 + 728 + spin_lock_irqsave(&dtab->index_lock, flags); 729 + for (i = 0; i < dtab->n_buckets; i++) { 730 + struct bpf_dtab_netdev *dev; 731 + struct hlist_head *head; 732 + struct hlist_node *next; 733 + 734 + head = dev_map_index_hash(dtab, i); 735 + 736 + hlist_for_each_entry_safe(dev, next, head, index_hlist) { 737 + if (netdev != dev->dev) 738 + continue; 739 + 740 + dtab->items--; 741 + hlist_del_rcu(&dev->index_hlist); 742 + call_rcu(&dev->rcu, __dev_map_entry_free); 743 + } 744 + } 745 + spin_unlock_irqrestore(&dtab->index_lock, flags); 746 + } 747 + 722 748 static int dev_map_notification(struct notifier_block *notifier, 723 749 ulong event, void *ptr) 724 750 { ··· 761 735 */ 762 736 rcu_read_lock(); 763 737 list_for_each_entry_rcu(dtab, &dev_map_list, list) { 738 + if (dtab->map.map_type == BPF_MAP_TYPE_DEVMAP_HASH) { 739 + dev_map_hash_remove_netdev(dtab, netdev); 740 + continue; 741 + } 742 + 764 743 for (i = 0; i < dtab->map.max_entries; i++) { 765 744 struct bpf_dtab_netdev *dev, *odev; 766 745
+20 -11
kernel/bpf/syscall.c
··· 1326 1326 { 1327 1327 struct bpf_prog_aux *aux = container_of(rcu, struct bpf_prog_aux, rcu); 1328 1328 1329 + kvfree(aux->func_info); 1329 1330 free_used_maps(aux); 1330 1331 bpf_prog_uncharge_memlock(aux->prog); 1331 1332 security_bpf_prog_free(aux); 1332 1333 bpf_prog_free(aux->prog); 1334 + } 1335 + 1336 + static void __bpf_prog_put_noref(struct bpf_prog *prog, bool deferred) 1337 + { 1338 + bpf_prog_kallsyms_del_all(prog); 1339 + btf_put(prog->aux->btf); 1340 + bpf_prog_free_linfo(prog); 1341 + 1342 + if (deferred) 1343 + call_rcu(&prog->aux->rcu, __bpf_prog_put_rcu); 1344 + else 1345 + __bpf_prog_put_rcu(&prog->aux->rcu); 1333 1346 } 1334 1347 1335 1348 static void __bpf_prog_put(struct bpf_prog *prog, bool do_idr_lock) ··· 1351 1338 perf_event_bpf_event(prog, PERF_BPF_EVENT_PROG_UNLOAD, 0); 1352 1339 /* bpf_prog_free_id() must be called first */ 1353 1340 bpf_prog_free_id(prog, do_idr_lock); 1354 - bpf_prog_kallsyms_del_all(prog); 1355 - btf_put(prog->aux->btf); 1356 - kvfree(prog->aux->func_info); 1357 - bpf_prog_free_linfo(prog); 1358 - 1359 - call_rcu(&prog->aux->rcu, __bpf_prog_put_rcu); 1341 + __bpf_prog_put_noref(prog, true); 1360 1342 } 1361 1343 } 1362 1344 ··· 1749 1741 return err; 1750 1742 1751 1743 free_used_maps: 1752 - bpf_prog_free_linfo(prog); 1753 - kvfree(prog->aux->func_info); 1754 - btf_put(prog->aux->btf); 1755 - bpf_prog_kallsyms_del_subprogs(prog); 1756 - free_used_maps(prog->aux); 1744 + /* In case we have subprogs, we need to wait for a grace 1745 + * period before we can tear down JIT memory since symbols 1746 + * are already exposed under kallsyms. 1747 + */ 1748 + __bpf_prog_put_noref(prog, prog->aux->func_cnt); 1749 + return err; 1757 1750 free_prog: 1758 1751 bpf_prog_uncharge_memlock(prog); 1759 1752 free_prog_sec:
+2 -1
kernel/cgroup/cpuset.c
··· 798 798 cpumask_subset(cp->cpus_allowed, top_cpuset.effective_cpus)) 799 799 continue; 800 800 801 - if (is_sched_load_balance(cp)) 801 + if (is_sched_load_balance(cp) && 802 + !cpumask_empty(cp->effective_cpus)) 802 803 csa[csn++] = cp; 803 804 804 805 /* skip @cp's subtree if not a partition root */
+1 -1
kernel/events/core.c
··· 10635 10635 10636 10636 attr->size = size; 10637 10637 10638 - if (attr->__reserved_1) 10638 + if (attr->__reserved_1 || attr->__reserved_2) 10639 10639 return -EINVAL; 10640 10640 10641 10641 if (attr->sample_type & ~(PERF_SAMPLE_MAX-1))
+9 -2
kernel/sched/topology.c
··· 1948 1948 static int 1949 1949 build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *attr) 1950 1950 { 1951 - enum s_alloc alloc_state; 1951 + enum s_alloc alloc_state = sa_none; 1952 1952 struct sched_domain *sd; 1953 1953 struct s_data d; 1954 1954 struct rq *rq = NULL; 1955 1955 int i, ret = -ENOMEM; 1956 1956 struct sched_domain_topology_level *tl_asym; 1957 1957 bool has_asym = false; 1958 + 1959 + if (WARN_ON(cpumask_empty(cpu_map))) 1960 + goto error; 1958 1961 1959 1962 alloc_state = __visit_domain_allocation_hell(&d, cpu_map); 1960 1963 if (alloc_state != sa_rootdomain) ··· 2029 2026 rcu_read_unlock(); 2030 2027 2031 2028 if (has_asym) 2032 - static_branch_enable_cpuslocked(&sched_asym_cpucapacity); 2029 + static_branch_inc_cpuslocked(&sched_asym_cpucapacity); 2033 2030 2034 2031 if (rq && sched_debug_enabled) { 2035 2032 pr_info("root domain span: %*pbl (max cpu_capacity = %lu)\n", ··· 2124 2121 */ 2125 2122 static void detach_destroy_domains(const struct cpumask *cpu_map) 2126 2123 { 2124 + unsigned int cpu = cpumask_any(cpu_map); 2127 2125 int i; 2126 + 2127 + if (rcu_access_pointer(per_cpu(sd_asym_cpucapacity, cpu))) 2128 + static_branch_dec_cpuslocked(&sched_asym_cpucapacity); 2128 2129 2129 2130 rcu_read_lock(); 2130 2131 for_each_cpu(i, cpu_map)
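Replacing static_branch_enable_cpuslocked() with the inc/dec pair turns sched_asym_cpucapacity into a reference count: each root domain built with asymmetric CPU capacities takes a reference, and detach_destroy_domains() drops one, so tearing down one such domain no longer disables the key for the others. A userspace analog of the counting semantics (names hypothetical):

    #include <assert.h>
    #include <stdio.h>

    static int asym_key;  /* stands in for the static key's enable count */

    static void build_domains_with_asym(void)   { asym_key++; }  /* static_branch_inc */
    static void destroy_domains_with_asym(void)                  /* static_branch_dec */
    {
        assert(asym_key > 0);
        asym_key--;
    }
    static int asym_enabled(void) { return asym_key > 0; }

    int main(void)
    {
        build_domains_with_asym();    /* first asymmetric root domain */
        build_domains_with_asym();    /* second one */
        destroy_domains_with_asym();  /* destroying one must not disable */
        printf("enabled: %d\n", asym_enabled());  /* still 1 */
        destroy_domains_with_asym();
        printf("enabled: %d\n", asym_enabled());  /* now 0 */
        return 0;
    }
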
-1
net/8021q/vlan.c
··· 172 172 if (err < 0) 173 173 goto out_uninit_mvrp; 174 174 175 - vlan->nest_level = dev_get_nest_level(real_dev) + 1; 176 175 err = register_netdevice(dev); 177 176 if (err < 0) 178 177 goto out_uninit_mvrp;
-33
net/8021q/vlan_dev.c
··· 489 489 dev_uc_sync(vlan_dev_priv(vlan_dev)->real_dev, vlan_dev); 490 490 } 491 491 492 - /* 493 - * vlan network devices have devices nesting below it, and are a special 494 - * "super class" of normal network devices; split their locks off into a 495 - * separate class since they always nest. 496 - */ 497 - static struct lock_class_key vlan_netdev_xmit_lock_key; 498 - static struct lock_class_key vlan_netdev_addr_lock_key; 499 - 500 - static void vlan_dev_set_lockdep_one(struct net_device *dev, 501 - struct netdev_queue *txq, 502 - void *_subclass) 503 - { 504 - lockdep_set_class_and_subclass(&txq->_xmit_lock, 505 - &vlan_netdev_xmit_lock_key, 506 - *(int *)_subclass); 507 - } 508 - 509 - static void vlan_dev_set_lockdep_class(struct net_device *dev, int subclass) 510 - { 511 - lockdep_set_class_and_subclass(&dev->addr_list_lock, 512 - &vlan_netdev_addr_lock_key, 513 - subclass); 514 - netdev_for_each_tx_queue(dev, vlan_dev_set_lockdep_one, &subclass); 515 - } 516 - 517 - static int vlan_dev_get_lock_subclass(struct net_device *dev) 518 - { 519 - return vlan_dev_priv(dev)->nest_level; 520 - } 521 - 522 492 static const struct header_ops vlan_header_ops = { 523 493 .create = vlan_dev_hard_header, 524 494 .parse = eth_header_parse, ··· 578 608 dev->netdev_ops = &vlan_netdev_ops; 579 609 580 610 SET_NETDEV_DEVTYPE(dev, &vlan_type); 581 - 582 - vlan_dev_set_lockdep_class(dev, vlan_dev_get_lock_subclass(dev)); 583 611 584 612 vlan->vlan_pcpu_stats = netdev_alloc_pcpu_stats(struct vlan_pcpu_stats); 585 613 if (!vlan->vlan_pcpu_stats) ··· 780 812 .ndo_netpoll_cleanup = vlan_dev_netpoll_cleanup, 781 813 #endif 782 814 .ndo_fix_features = vlan_dev_fix_features, 783 - .ndo_get_lock_subclass = vlan_dev_get_lock_subclass, 784 815 .ndo_get_iflink = vlan_dev_get_iflink, 785 816 }; 786 817
+1 -1
net/atm/common.c
··· 668 668 mask |= EPOLLHUP; 669 669 670 670 /* readable? */ 671 - if (!skb_queue_empty(&sk->sk_receive_queue)) 671 + if (!skb_queue_empty_lockless(&sk->sk_receive_queue)) 672 672 mask |= EPOLLIN | EPOLLRDNORM; 673 673 674 674 /* writable? */
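The skb_queue_empty_lockless() conversion here repeats across this series (af_bluetooth.c, caif_socket.c, datagram.c, af_decnet.c, sock.c and tcp.c below): poll() inspects the receive queue without holding the queue lock, so the head pointer must be loaded with READ_ONCE() to prevent the compiler from tearing or caching the read while writers run concurrently. A compilable sketch of the two variants, with struct sk_buff_head reduced to a bare list head:

    #include <stdbool.h>
    #include <stdio.h>

    #define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

    struct list_head { struct list_head *next, *prev; };

    /* locked variant: caller holds the queue lock */
    static bool queue_empty(const struct list_head *h)
    {
        return h->next == h;
    }

    /* lockless variant for poll(): single, untorn load of the head */
    static bool queue_empty_lockless(const struct list_head *h)
    {
        return READ_ONCE(h->next) == h;
    }

    int main(void)
    {
        struct list_head q = { &q, &q };  /* empty: head points at itself */

        printf("%d %d\n", queue_empty(&q), queue_empty_lockless(&q));
        return 0;
    }
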
+52 -9
net/batman-adv/bat_iv_ogm.c
··· 22 22 #include <linux/kernel.h> 23 23 #include <linux/kref.h> 24 24 #include <linux/list.h> 25 + #include <linux/lockdep.h> 26 + #include <linux/mutex.h> 25 27 #include <linux/netdevice.h> 26 28 #include <linux/netlink.h> 27 29 #include <linux/pkt_sched.h> ··· 195 193 unsigned char *ogm_buff; 196 194 u32 random_seqno; 197 195 196 + mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex); 197 + 198 198 /* randomize initial seqno to avoid collision */ 199 199 get_random_bytes(&random_seqno, sizeof(random_seqno)); 200 200 atomic_set(&hard_iface->bat_iv.ogm_seqno, random_seqno); 201 201 202 202 hard_iface->bat_iv.ogm_buff_len = BATADV_OGM_HLEN; 203 203 ogm_buff = kmalloc(hard_iface->bat_iv.ogm_buff_len, GFP_ATOMIC); 204 - if (!ogm_buff) 204 + if (!ogm_buff) { 205 + mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex); 205 206 return -ENOMEM; 207 + } 206 208 207 209 hard_iface->bat_iv.ogm_buff = ogm_buff; 208 210 ··· 218 212 batadv_ogm_packet->reserved = 0; 219 213 batadv_ogm_packet->tq = BATADV_TQ_MAX_VALUE; 220 214 215 + mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex); 216 + 221 217 return 0; 222 218 } 223 219 224 220 static void batadv_iv_ogm_iface_disable(struct batadv_hard_iface *hard_iface) 225 221 { 222 + mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex); 223 + 226 224 kfree(hard_iface->bat_iv.ogm_buff); 227 225 hard_iface->bat_iv.ogm_buff = NULL; 226 + 227 + mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex); 228 228 } 229 229 230 230 static void batadv_iv_ogm_iface_update_mac(struct batadv_hard_iface *hard_iface) 231 231 { 232 232 struct batadv_ogm_packet *batadv_ogm_packet; 233 - unsigned char *ogm_buff = hard_iface->bat_iv.ogm_buff; 233 + void *ogm_buff; 234 234 235 - batadv_ogm_packet = (struct batadv_ogm_packet *)ogm_buff; 235 + mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex); 236 + 237 + ogm_buff = hard_iface->bat_iv.ogm_buff; 238 + if (!ogm_buff) 239 + goto unlock; 240 + 241 + batadv_ogm_packet = ogm_buff; 236 242 ether_addr_copy(batadv_ogm_packet->orig, 237 243 hard_iface->net_dev->dev_addr); 238 244 ether_addr_copy(batadv_ogm_packet->prev_sender, 239 245 hard_iface->net_dev->dev_addr); 246 + 247 + unlock: 248 + mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex); 240 249 } 241 250 242 251 static void 243 252 batadv_iv_ogm_primary_iface_set(struct batadv_hard_iface *hard_iface) 244 253 { 245 254 struct batadv_ogm_packet *batadv_ogm_packet; 246 - unsigned char *ogm_buff = hard_iface->bat_iv.ogm_buff; 255 + void *ogm_buff; 247 256 248 - batadv_ogm_packet = (struct batadv_ogm_packet *)ogm_buff; 257 + mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex); 258 + 259 + ogm_buff = hard_iface->bat_iv.ogm_buff; 260 + if (!ogm_buff) 261 + goto unlock; 262 + 263 + batadv_ogm_packet = ogm_buff; 249 264 batadv_ogm_packet->ttl = BATADV_TTL; 265 + 266 + unlock: 267 + mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex); 250 268 } 251 269 252 270 /* when do we schedule our own ogm to be sent */ ··· 772 742 } 773 743 } 774 744 775 - static void batadv_iv_ogm_schedule(struct batadv_hard_iface *hard_iface) 745 + /** 746 + * batadv_iv_ogm_schedule_buff() - schedule submission of hardif ogm buffer 747 + * @hard_iface: interface whose ogm buffer should be transmitted 748 + */ 749 + static void batadv_iv_ogm_schedule_buff(struct batadv_hard_iface *hard_iface) 776 750 { 777 751 struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface); 778 752 unsigned char **ogm_buff = &hard_iface->bat_iv.ogm_buff; ··· 787 753 u16 tvlv_len = 0; 788 754 unsigned long send_time; 789 755 790 - if (hard_iface->if_status == BATADV_IF_NOT_IN_USE || 791 - hard_iface->if_status == BATADV_IF_TO_BE_REMOVED) 792 - return; 756 + lockdep_assert_held(&hard_iface->bat_iv.ogm_buff_mutex); 793 757 794 758 /* the interface gets activated here to avoid race conditions between 795 759 * the moment of activating the interface in ··· 853 821 out: 854 822 if (primary_if) 855 823 batadv_hardif_put(primary_if); 824 + } 825 + 826 + static void batadv_iv_ogm_schedule(struct batadv_hard_iface *hard_iface) 827 + { 828 + if (hard_iface->if_status == BATADV_IF_NOT_IN_USE || 829 + hard_iface->if_status == BATADV_IF_TO_BE_REMOVED) 830 + return; 831 + 832 + mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex); 833 + batadv_iv_ogm_schedule_buff(hard_iface); 834 + mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex); 856 835 } 857 836 858 837 /**
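The B.A.T.M.A.N. IV hunk establishes one locking rule: every reader and writer of the (ogm_buff, ogm_buff_len) pair takes ogm_buff_mutex, and readers re-check the pointer under the lock because iface_disable() can free it meanwhile. A userspace sketch of the same discipline, assuming pthread stand-ins for the kernel mutex:

    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>

    static pthread_mutex_t ogm_buff_mutex = PTHREAD_MUTEX_INITIALIZER;
    static unsigned char *ogm_buff;
    static size_t ogm_buff_len;

    static int iface_enable(size_t len)   /* cf. batadv_iv_ogm_iface_enable() */
    {
        int err = 0;

        pthread_mutex_lock(&ogm_buff_mutex);
        ogm_buff = calloc(1, len);
        if (ogm_buff)
            ogm_buff_len = len;
        else
            err = -1;
        pthread_mutex_unlock(&ogm_buff_mutex);
        return err;
    }

    static void iface_disable(void)       /* cf. batadv_iv_ogm_iface_disable() */
    {
        pthread_mutex_lock(&ogm_buff_mutex);
        free(ogm_buff);
        ogm_buff = NULL;
        ogm_buff_len = 0;
        pthread_mutex_unlock(&ogm_buff_mutex);
    }

    static void iface_update_mac(const unsigned char *addr, size_t len)
    {
        pthread_mutex_lock(&ogm_buff_mutex);
        if (ogm_buff && len <= ogm_buff_len)  /* buffer may be gone already */
            memcpy(ogm_buff, addr, len);
        pthread_mutex_unlock(&ogm_buff_mutex);
    }

    int main(void)
    {
        unsigned char mac[6] = { 0x02, 0, 0, 0, 0, 1 };

        if (iface_enable(64) == 0) {
            iface_update_mac(mac, sizeof(mac));
            iface_disable();
        }
        iface_update_mac(mac, sizeof(mac));  /* safe no-op after disable */
        return 0;
    }
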
+33 -8
net/batman-adv/bat_v_ogm.c
··· 18 18 #include <linux/kref.h> 19 19 #include <linux/list.h> 20 20 #include <linux/lockdep.h> 21 + #include <linux/mutex.h> 21 22 #include <linux/netdevice.h> 22 23 #include <linux/random.h> 23 24 #include <linux/rculist.h> ··· 257 256 } 258 257 259 258 /** 260 - * batadv_v_ogm_send() - periodic worker broadcasting the own OGM 261 - * @work: work queue item 259 + * batadv_v_ogm_send_softif() - periodic worker broadcasting the own OGM 260 + * @bat_priv: the bat priv with all the soft interface information 262 261 */ 263 - static void batadv_v_ogm_send(struct work_struct *work) 262 + static void batadv_v_ogm_send_softif(struct batadv_priv *bat_priv) 264 263 { 265 264 struct batadv_hard_iface *hard_iface; 266 - struct batadv_priv_bat_v *bat_v; 267 - struct batadv_priv *bat_priv; 268 265 struct batadv_ogm2_packet *ogm_packet; 269 266 struct sk_buff *skb, *skb_tmp; 270 267 unsigned char *ogm_buff; ··· 270 271 u16 tvlv_len = 0; 271 272 int ret; 272 273 273 - bat_v = container_of(work, struct batadv_priv_bat_v, ogm_wq.work); 274 - bat_priv = container_of(bat_v, struct batadv_priv, bat_v); 274 + lockdep_assert_held(&bat_priv->bat_v.ogm_buff_mutex); 275 275 276 276 if (atomic_read(&bat_priv->mesh_state) == BATADV_MESH_DEACTIVATING) 277 277 goto out; ··· 362 364 } 363 365 364 366 /** 367 + * batadv_v_ogm_send() - periodic worker broadcasting the own OGM 368 + * @work: work queue item 369 + */ 370 + static void batadv_v_ogm_send(struct work_struct *work) 371 + { 372 + struct batadv_priv_bat_v *bat_v; 373 + struct batadv_priv *bat_priv; 374 + 375 + bat_v = container_of(work, struct batadv_priv_bat_v, ogm_wq.work); 376 + bat_priv = container_of(bat_v, struct batadv_priv, bat_v); 377 + 378 + mutex_lock(&bat_priv->bat_v.ogm_buff_mutex); 379 + batadv_v_ogm_send_softif(bat_priv); 380 + mutex_unlock(&bat_priv->bat_v.ogm_buff_mutex); 381 + } 382 + 383 + /** 365 384 * batadv_v_ogm_aggr_work() - OGM queue periodic task per interface 366 385 * @work: work queue item 367 386 * ··· 439 424 struct batadv_priv *bat_priv = netdev_priv(primary_iface->soft_iface); 440 425 struct batadv_ogm2_packet *ogm_packet; 441 426 427 + mutex_lock(&bat_priv->bat_v.ogm_buff_mutex); 442 428 if (!bat_priv->bat_v.ogm_buff) 443 - return; 429 + goto unlock; 444 430 445 431 ogm_packet = (struct batadv_ogm2_packet *)bat_priv->bat_v.ogm_buff; 446 432 ether_addr_copy(ogm_packet->orig, primary_iface->net_dev->dev_addr); 433 + 434 + unlock: 435 + mutex_unlock(&bat_priv->bat_v.ogm_buff_mutex); 447 436 } 448 437 449 438 /** ··· 1069 1050 atomic_set(&bat_priv->bat_v.ogm_seqno, random_seqno); 1070 1051 INIT_DELAYED_WORK(&bat_priv->bat_v.ogm_wq, batadv_v_ogm_send); 1071 1052 1053 + mutex_init(&bat_priv->bat_v.ogm_buff_mutex); 1054 + 1072 1055 return 0; 1073 1056 } 1074 1057 ··· 1082 1061 { 1083 1062 cancel_delayed_work_sync(&bat_priv->bat_v.ogm_wq); 1084 1063 1064 + mutex_lock(&bat_priv->bat_v.ogm_buff_mutex); 1065 + 1085 1066 kfree(bat_priv->bat_v.ogm_buff); 1086 1067 bat_priv->bat_v.ogm_buff = NULL; 1087 1068 bat_priv->bat_v.ogm_buff_len = 0; 1069 + 1070 + mutex_unlock(&bat_priv->bat_v.ogm_buff_mutex); 1088 1071 }
+2
net/batman-adv/hard-interface.c
··· 18 18 #include <linux/kref.h> 19 19 #include <linux/limits.h> 20 20 #include <linux/list.h> 21 + #include <linux/mutex.h> 21 22 #include <linux/netdevice.h> 22 23 #include <linux/printk.h> 23 24 #include <linux/rculist.h> ··· 930 929 INIT_LIST_HEAD(&hard_iface->list); 931 930 INIT_HLIST_HEAD(&hard_iface->neigh_list); 932 931 932 + mutex_init(&hard_iface->bat_iv.ogm_buff_mutex); 933 933 spin_lock_init(&hard_iface->neigh_list_lock); 934 934 kref_init(&hard_iface->refcount); 935 935
-32
net/batman-adv/soft-interface.c
··· 740 740 return 0; 741 741 } 742 742 743 - /* batman-adv network devices have devices nesting below it and are a special 744 - * "super class" of normal network devices; split their locks off into a 745 - * separate class since they always nest. 746 - */ 747 - static struct lock_class_key batadv_netdev_xmit_lock_key; 748 - static struct lock_class_key batadv_netdev_addr_lock_key; 749 - 750 - /** 751 - * batadv_set_lockdep_class_one() - Set lockdep class for a single tx queue 752 - * @dev: device which owns the tx queue 753 - * @txq: tx queue to modify 754 - * @_unused: always NULL 755 - */ 756 - static void batadv_set_lockdep_class_one(struct net_device *dev, 757 - struct netdev_queue *txq, 758 - void *_unused) 759 - { 760 - lockdep_set_class(&txq->_xmit_lock, &batadv_netdev_xmit_lock_key); 761 - } 762 - 763 - /** 764 - * batadv_set_lockdep_class() - Set txq and addr_list lockdep class 765 - * @dev: network device to modify 766 - */ 767 - static void batadv_set_lockdep_class(struct net_device *dev) 768 - { 769 - lockdep_set_class(&dev->addr_list_lock, &batadv_netdev_addr_lock_key); 770 - netdev_for_each_tx_queue(dev, batadv_set_lockdep_class_one, NULL); 771 - } 772 - 773 743 /** 774 744 * batadv_softif_init_late() - late stage initialization of soft interface 775 745 * @dev: registered network device to modify ··· 752 782 u32 random_seqno; 753 783 int ret; 754 784 size_t cnt_len = sizeof(u64) * BATADV_CNT_NUM; 755 - 756 - batadv_set_lockdep_class(dev); 757 785 758 786 bat_priv = netdev_priv(dev); 759 787 bat_priv->soft_iface = dev;
+7
net/batman-adv/types.h
··· 17 17 #include <linux/if.h> 18 18 #include <linux/if_ether.h> 19 19 #include <linux/kref.h> 20 + #include <linux/mutex.h> 20 21 #include <linux/netdevice.h> 21 22 #include <linux/netlink.h> 22 23 #include <linux/sched.h> /* for linux/wait.h */ ··· 82 81 83 82 /** @ogm_seqno: OGM sequence number - used to identify each OGM */ 84 83 atomic_t ogm_seqno; 84 + 85 + /** @ogm_buff_mutex: lock protecting ogm_buff and ogm_buff_len */ 86 + struct mutex ogm_buff_mutex; 85 87 }; 86 88 87 89 /** ··· 1542 1538 1543 1539 /** @ogm_seqno: OGM sequence number - used to identify each OGM */ 1544 1540 atomic_t ogm_seqno; 1541 + 1542 + /** @ogm_buff_mutex: lock protecting ogm_buff and ogm_buff_len */ 1543 + struct mutex ogm_buff_mutex; 1545 1544 1546 1545 /** @ogm_wq: workqueue used to schedule OGM transmissions */ 1547 1546 struct delayed_work ogm_wq;
-8
net/bluetooth/6lowpan.c
··· 571 571 return err < 0 ? NET_XMIT_DROP : err; 572 572 } 573 573 574 - static int bt_dev_init(struct net_device *dev) 575 - { 576 - netdev_lockdep_set_classes(dev); 577 - 578 - return 0; 579 - } 580 - 581 574 static const struct net_device_ops netdev_ops = { 582 - .ndo_init = bt_dev_init, 583 575 .ndo_start_xmit = bt_xmit, 584 576 }; 585 577
+2 -2
net/bluetooth/af_bluetooth.c
··· 460 460 if (sk->sk_state == BT_LISTEN) 461 461 return bt_accept_poll(sk); 462 462 463 - if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue)) 463 + if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue)) 464 464 mask |= EPOLLERR | 465 465 (sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0); 466 466 ··· 470 470 if (sk->sk_shutdown == SHUTDOWN_MASK) 471 471 mask |= EPOLLHUP; 472 472 473 - if (!skb_queue_empty(&sk->sk_receive_queue)) 473 + if (!skb_queue_empty_lockless(&sk->sk_receive_queue)) 474 474 mask |= EPOLLIN | EPOLLRDNORM; 475 475 476 476 if (sk->sk_state == BT_CLOSED)
-8
net/bridge/br_device.c
··· 24 24 const struct nf_br_ops __rcu *nf_br_ops __read_mostly; 25 25 EXPORT_SYMBOL_GPL(nf_br_ops); 26 26 27 - static struct lock_class_key bridge_netdev_addr_lock_key; 28 - 29 27 /* net device transmit always called with BH disabled */ 30 28 netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev) 31 29 { ··· 106 108 return NETDEV_TX_OK; 107 109 } 108 110 109 - static void br_set_lockdep_class(struct net_device *dev) 110 - { 111 - lockdep_set_class(&dev->addr_list_lock, &bridge_netdev_addr_lock_key); 112 - } 113 - 114 111 static int br_dev_init(struct net_device *dev) 115 112 { 116 113 struct net_bridge *br = netdev_priv(dev); ··· 143 150 br_mdb_hash_fini(br); 144 151 br_fdb_hash_fini(br); 145 152 } 146 - br_set_lockdep_class(dev); 147 153 148 154 return err; 149 155 }
+1 -1
net/bridge/netfilter/nf_conntrack_bridge.c
··· 95 95 * This may also be a clone skbuff, we could preserve the geometry for 96 96 * the copies but probably not worth the effort. 97 97 */ 98 - ip_frag_init(skb, hlen, ll_rs, frag_max_size, &state); 98 + ip_frag_init(skb, hlen, ll_rs, frag_max_size, false, &state); 99 99 100 100 while (state.left > 0) { 101 101 struct sk_buff *skb2;
+1 -1
net/caif/caif_socket.c
··· 953 953 mask |= EPOLLRDHUP; 954 954 955 955 /* readable? */ 956 - if (!skb_queue_empty(&sk->sk_receive_queue) || 956 + if (!skb_queue_empty_lockless(&sk->sk_receive_queue) || 957 957 (sk->sk_shutdown & RCV_SHUTDOWN)) 958 958 mask |= EPOLLIN | EPOLLRDNORM; 959 959
+4 -4
net/core/datagram.c
··· 97 97 if (error) 98 98 goto out_err; 99 99 100 - if (sk->sk_receive_queue.prev != skb) 100 + if (READ_ONCE(sk->sk_receive_queue.prev) != skb) 101 101 goto out; 102 102 103 103 /* Socket shut down? */ ··· 278 278 break; 279 279 280 280 sk_busy_loop(sk, flags & MSG_DONTWAIT); 281 - } while (sk->sk_receive_queue.prev != *last); 281 + } while (READ_ONCE(sk->sk_receive_queue.prev) != *last); 282 282 283 283 error = -EAGAIN; 284 284 ··· 767 767 mask = 0; 768 768 769 769 /* exceptional events? */ 770 - if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue)) 770 + if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue)) 771 771 mask |= EPOLLERR | 772 772 (sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0); 773 773 ··· 777 777 mask |= EPOLLHUP; 778 778 779 779 /* readable? */ 780 - if (!skb_queue_empty(&sk->sk_receive_queue)) 780 + if (!skb_queue_empty_lockless(&sk->sk_receive_queue)) 781 781 mask |= EPOLLIN | EPOLLRDNORM; 782 782 783 783 /* Connection-based need to check for termination and startup */
+467 -156
net/core/dev.c
··· 146 146 #include "net-sysfs.h" 147 147 148 148 #define MAX_GRO_SKBS 8 149 + #define MAX_NEST_DEV 8 149 150 150 151 /* This should be increased if a protocol with a bigger head is added. */ 151 152 #define GRO_MAX_HEAD (MAX_HEADER + 128) ··· 276 275 277 276 DEFINE_PER_CPU_ALIGNED(struct softnet_data, softnet_data); 278 277 EXPORT_PER_CPU_SYMBOL(softnet_data); 279 - 280 - #ifdef CONFIG_LOCKDEP 281 - /* 282 - * register_netdevice() inits txq->_xmit_lock and sets lockdep class 283 - * according to dev->type 284 - */ 285 - static const unsigned short netdev_lock_type[] = { 286 - ARPHRD_NETROM, ARPHRD_ETHER, ARPHRD_EETHER, ARPHRD_AX25, 287 - ARPHRD_PRONET, ARPHRD_CHAOS, ARPHRD_IEEE802, ARPHRD_ARCNET, 288 - ARPHRD_APPLETLK, ARPHRD_DLCI, ARPHRD_ATM, ARPHRD_METRICOM, 289 - ARPHRD_IEEE1394, ARPHRD_EUI64, ARPHRD_INFINIBAND, ARPHRD_SLIP, 290 - ARPHRD_CSLIP, ARPHRD_SLIP6, ARPHRD_CSLIP6, ARPHRD_RSRVD, 291 - ARPHRD_ADAPT, ARPHRD_ROSE, ARPHRD_X25, ARPHRD_HWX25, 292 - ARPHRD_PPP, ARPHRD_CISCO, ARPHRD_LAPB, ARPHRD_DDCMP, 293 - ARPHRD_RAWHDLC, ARPHRD_TUNNEL, ARPHRD_TUNNEL6, ARPHRD_FRAD, 294 - ARPHRD_SKIP, ARPHRD_LOOPBACK, ARPHRD_LOCALTLK, ARPHRD_FDDI, 295 - ARPHRD_BIF, ARPHRD_SIT, ARPHRD_IPDDP, ARPHRD_IPGRE, 296 - ARPHRD_PIMREG, ARPHRD_HIPPI, ARPHRD_ASH, ARPHRD_ECONET, 297 - ARPHRD_IRDA, ARPHRD_FCPP, ARPHRD_FCAL, ARPHRD_FCPL, 298 - ARPHRD_FCFABRIC, ARPHRD_IEEE80211, ARPHRD_IEEE80211_PRISM, 299 - ARPHRD_IEEE80211_RADIOTAP, ARPHRD_PHONET, ARPHRD_PHONET_PIPE, 300 - ARPHRD_IEEE802154, ARPHRD_VOID, ARPHRD_NONE}; 301 - 302 - static const char *const netdev_lock_name[] = { 303 - "_xmit_NETROM", "_xmit_ETHER", "_xmit_EETHER", "_xmit_AX25", 304 - "_xmit_PRONET", "_xmit_CHAOS", "_xmit_IEEE802", "_xmit_ARCNET", 305 - "_xmit_APPLETLK", "_xmit_DLCI", "_xmit_ATM", "_xmit_METRICOM", 306 - "_xmit_IEEE1394", "_xmit_EUI64", "_xmit_INFINIBAND", "_xmit_SLIP", 307 - "_xmit_CSLIP", "_xmit_SLIP6", "_xmit_CSLIP6", "_xmit_RSRVD", 308 - "_xmit_ADAPT", "_xmit_ROSE", "_xmit_X25", "_xmit_HWX25", 309 - "_xmit_PPP", "_xmit_CISCO", "_xmit_LAPB", "_xmit_DDCMP", 310 - "_xmit_RAWHDLC", "_xmit_TUNNEL", "_xmit_TUNNEL6", "_xmit_FRAD", 311 - "_xmit_SKIP", "_xmit_LOOPBACK", "_xmit_LOCALTLK", "_xmit_FDDI", 312 - "_xmit_BIF", "_xmit_SIT", "_xmit_IPDDP", "_xmit_IPGRE", 313 - "_xmit_PIMREG", "_xmit_HIPPI", "_xmit_ASH", "_xmit_ECONET", 314 - "_xmit_IRDA", "_xmit_FCPP", "_xmit_FCAL", "_xmit_FCPL", 315 - "_xmit_FCFABRIC", "_xmit_IEEE80211", "_xmit_IEEE80211_PRISM", 316 - "_xmit_IEEE80211_RADIOTAP", "_xmit_PHONET", "_xmit_PHONET_PIPE", 317 - "_xmit_IEEE802154", "_xmit_VOID", "_xmit_NONE"}; 318 - 319 - static struct lock_class_key netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)]; 320 - static struct lock_class_key netdev_addr_lock_key[ARRAY_SIZE(netdev_lock_type)]; 321 - 322 - static inline unsigned short netdev_lock_pos(unsigned short dev_type) 323 - { 324 - int i; 325 - 326 - for (i = 0; i < ARRAY_SIZE(netdev_lock_type); i++) 327 - if (netdev_lock_type[i] == dev_type) 328 - return i; 329 - /* the last key is used by default */ 330 - return ARRAY_SIZE(netdev_lock_type) - 1; 331 - } 332 - 333 - static inline void netdev_set_xmit_lockdep_class(spinlock_t *lock, 334 - unsigned short dev_type) 335 - { 336 - int i; 337 - 338 - i = netdev_lock_pos(dev_type); 339 - lockdep_set_class_and_name(lock, &netdev_xmit_lock_key[i], 340 - netdev_lock_name[i]); 341 - } 342 - 343 - static inline void netdev_set_addr_lockdep_class(struct net_device *dev) 344 - { 345 - int i; 346 - 347 - i = netdev_lock_pos(dev->type); 348 - 
lockdep_set_class_and_name(&dev->addr_list_lock, 349 - &netdev_addr_lock_key[i], 350 - netdev_lock_name[i]); 351 - } 352 - #else 353 - static inline void netdev_set_xmit_lockdep_class(spinlock_t *lock, 354 - unsigned short dev_type) 355 - { 356 - } 357 - static inline void netdev_set_addr_lockdep_class(struct net_device *dev) 358 - { 359 - } 360 - #endif 361 278 362 279 /******************************************************************************* 363 280 * ··· 6408 6489 /* upper master flag, there can only be one master device per list */ 6409 6490 bool master; 6410 6491 6492 + /* lookup ignore flag */ 6493 + bool ignore; 6494 + 6411 6495 /* counter for the number of times this device was added to us */ 6412 6496 u16 ref_nr; 6413 6497 ··· 6433 6511 return NULL; 6434 6512 } 6435 6513 6436 - static int __netdev_has_upper_dev(struct net_device *upper_dev, void *data) 6514 + static int ____netdev_has_upper_dev(struct net_device *upper_dev, void *data) 6437 6515 { 6438 6516 struct net_device *dev = data; 6439 6517 ··· 6454 6532 { 6455 6533 ASSERT_RTNL(); 6456 6534 6457 - return netdev_walk_all_upper_dev_rcu(dev, __netdev_has_upper_dev, 6535 + return netdev_walk_all_upper_dev_rcu(dev, ____netdev_has_upper_dev, 6458 6536 upper_dev); 6459 6537 } 6460 6538 EXPORT_SYMBOL(netdev_has_upper_dev); ··· 6472 6550 bool netdev_has_upper_dev_all_rcu(struct net_device *dev, 6473 6551 struct net_device *upper_dev) 6474 6552 { 6475 - return !!netdev_walk_all_upper_dev_rcu(dev, __netdev_has_upper_dev, 6553 + return !!netdev_walk_all_upper_dev_rcu(dev, ____netdev_has_upper_dev, 6476 6554 upper_dev); 6477 6555 } 6478 6556 EXPORT_SYMBOL(netdev_has_upper_dev_all_rcu); ··· 6515 6593 return NULL; 6516 6594 } 6517 6595 EXPORT_SYMBOL(netdev_master_upper_dev_get); 6596 + 6597 + static struct net_device *__netdev_master_upper_dev_get(struct net_device *dev) 6598 + { 6599 + struct netdev_adjacent *upper; 6600 + 6601 + ASSERT_RTNL(); 6602 + 6603 + if (list_empty(&dev->adj_list.upper)) 6604 + return NULL; 6605 + 6606 + upper = list_first_entry(&dev->adj_list.upper, 6607 + struct netdev_adjacent, list); 6608 + if (likely(upper->master) && !upper->ignore) 6609 + return upper->dev; 6610 + return NULL; 6611 + } 6518 6612 6519 6613 /** 6520 6614 * netdev_has_any_lower_dev - Check if device is linked to some device ··· 6582 6644 } 6583 6645 EXPORT_SYMBOL(netdev_upper_get_next_dev_rcu); 6584 6646 6647 + static struct net_device *__netdev_next_upper_dev(struct net_device *dev, 6648 + struct list_head **iter, 6649 + bool *ignore) 6650 + { 6651 + struct netdev_adjacent *upper; 6652 + 6653 + upper = list_entry((*iter)->next, struct netdev_adjacent, list); 6654 + 6655 + if (&upper->list == &dev->adj_list.upper) 6656 + return NULL; 6657 + 6658 + *iter = &upper->list; 6659 + *ignore = upper->ignore; 6660 + 6661 + return upper->dev; 6662 + } 6663 + 6585 6664 static struct net_device *netdev_next_upper_dev_rcu(struct net_device *dev, 6586 6665 struct list_head **iter) 6587 6666 { ··· 6616 6661 return upper->dev; 6617 6662 } 6618 6663 6664 + static int __netdev_walk_all_upper_dev(struct net_device *dev, 6665 + int (*fn)(struct net_device *dev, 6666 + void *data), 6667 + void *data) 6668 + { 6669 + struct net_device *udev, *next, *now, *dev_stack[MAX_NEST_DEV + 1]; 6670 + struct list_head *niter, *iter, *iter_stack[MAX_NEST_DEV + 1]; 6671 + int ret, cur = 0; 6672 + bool ignore; 6673 + 6674 + now = dev; 6675 + iter = &dev->adj_list.upper; 6676 + 6677 + while (1) { 6678 + if (now != dev) { 6679 + ret = fn(now, data); 6680 + if (ret) 6681 + 
return ret; 6682 + } 6683 + 6684 + next = NULL; 6685 + while (1) { 6686 + udev = __netdev_next_upper_dev(now, &iter, &ignore); 6687 + if (!udev) 6688 + break; 6689 + if (ignore) 6690 + continue; 6691 + 6692 + next = udev; 6693 + niter = &udev->adj_list.upper; 6694 + dev_stack[cur] = now; 6695 + iter_stack[cur++] = iter; 6696 + break; 6697 + } 6698 + 6699 + if (!next) { 6700 + if (!cur) 6701 + return 0; 6702 + next = dev_stack[--cur]; 6703 + niter = iter_stack[cur]; 6704 + } 6705 + 6706 + now = next; 6707 + iter = niter; 6708 + } 6709 + 6710 + return 0; 6711 + } 6712 + 6619 6713 int netdev_walk_all_upper_dev_rcu(struct net_device *dev, 6620 6714 int (*fn)(struct net_device *dev, 6621 6715 void *data), 6622 6716 void *data) 6623 6717 { 6624 - struct net_device *udev; 6625 - struct list_head *iter; 6626 - int ret; 6718 + struct net_device *udev, *next, *now, *dev_stack[MAX_NEST_DEV + 1]; 6719 + struct list_head *niter, *iter, *iter_stack[MAX_NEST_DEV + 1]; 6720 + int ret, cur = 0; 6627 6721 6628 - for (iter = &dev->adj_list.upper, 6629 - udev = netdev_next_upper_dev_rcu(dev, &iter); 6630 - udev; 6631 - udev = netdev_next_upper_dev_rcu(dev, &iter)) { 6632 - /* first is the upper device itself */ 6633 - ret = fn(udev, data); 6634 - if (ret) 6635 - return ret; 6722 + now = dev; 6723 + iter = &dev->adj_list.upper; 6636 6724 6637 - /* then look at all of its upper devices */ 6638 - ret = netdev_walk_all_upper_dev_rcu(udev, fn, data); 6639 - if (ret) 6640 - return ret; 6725 + while (1) { 6726 + if (now != dev) { 6727 + ret = fn(now, data); 6728 + if (ret) 6729 + return ret; 6730 + } 6731 + 6732 + next = NULL; 6733 + while (1) { 6734 + udev = netdev_next_upper_dev_rcu(now, &iter); 6735 + if (!udev) 6736 + break; 6737 + 6738 + next = udev; 6739 + niter = &udev->adj_list.upper; 6740 + dev_stack[cur] = now; 6741 + iter_stack[cur++] = iter; 6742 + break; 6743 + } 6744 + 6745 + if (!next) { 6746 + if (!cur) 6747 + return 0; 6748 + next = dev_stack[--cur]; 6749 + niter = iter_stack[cur]; 6750 + } 6751 + 6752 + now = next; 6753 + iter = niter; 6641 6754 } 6642 6755 6643 6756 return 0; 6644 6757 } 6645 6758 EXPORT_SYMBOL_GPL(netdev_walk_all_upper_dev_rcu); 6759 + 6760 + static bool __netdev_has_upper_dev(struct net_device *dev, 6761 + struct net_device *upper_dev) 6762 + { 6763 + ASSERT_RTNL(); 6764 + 6765 + return __netdev_walk_all_upper_dev(dev, ____netdev_has_upper_dev, 6766 + upper_dev); 6767 + } 6646 6768 6647 6769 /** 6648 6770 * netdev_lower_get_next_private - Get the next ->private from the ··· 6817 6785 return lower->dev; 6818 6786 } 6819 6787 6788 + static struct net_device *__netdev_next_lower_dev(struct net_device *dev, 6789 + struct list_head **iter, 6790 + bool *ignore) 6791 + { 6792 + struct netdev_adjacent *lower; 6793 + 6794 + lower = list_entry((*iter)->next, struct netdev_adjacent, list); 6795 + 6796 + if (&lower->list == &dev->adj_list.lower) 6797 + return NULL; 6798 + 6799 + *iter = &lower->list; 6800 + *ignore = lower->ignore; 6801 + 6802 + return lower->dev; 6803 + } 6804 + 6820 6805 int netdev_walk_all_lower_dev(struct net_device *dev, 6821 6806 int (*fn)(struct net_device *dev, 6822 6807 void *data), 6823 6808 void *data) 6824 6809 { 6825 - struct net_device *ldev; 6826 - struct list_head *iter; 6827 - int ret; 6810 + struct net_device *ldev, *next, *now, *dev_stack[MAX_NEST_DEV + 1]; 6811 + struct list_head *niter, *iter, *iter_stack[MAX_NEST_DEV + 1]; 6812 + int ret, cur = 0; 6828 6813 6829 - for (iter = &dev->adj_list.lower, 6830 - ldev = netdev_next_lower_dev(dev, &iter); 6831 - 
ldev; 6832 - ldev = netdev_next_lower_dev(dev, &iter)) { 6833 - /* first is the lower device itself */ 6834 - ret = fn(ldev, data); 6835 - if (ret) 6836 - return ret; 6814 + now = dev; 6815 + iter = &dev->adj_list.lower; 6837 6816 6838 - /* then look at all of its lower devices */ 6839 - ret = netdev_walk_all_lower_dev(ldev, fn, data); 6840 - if (ret) 6841 - return ret; 6817 + while (1) { 6818 + if (now != dev) { 6819 + ret = fn(now, data); 6820 + if (ret) 6821 + return ret; 6822 + } 6823 + 6824 + next = NULL; 6825 + while (1) { 6826 + ldev = netdev_next_lower_dev(now, &iter); 6827 + if (!ldev) 6828 + break; 6829 + 6830 + next = ldev; 6831 + niter = &ldev->adj_list.lower; 6832 + dev_stack[cur] = now; 6833 + iter_stack[cur++] = iter; 6834 + break; 6835 + } 6836 + 6837 + if (!next) { 6838 + if (!cur) 6839 + return 0; 6840 + next = dev_stack[--cur]; 6841 + niter = iter_stack[cur]; 6842 + } 6843 + 6844 + now = next; 6845 + iter = niter; 6842 6846 } 6843 6847 6844 6848 return 0; 6845 6849 } 6846 6850 EXPORT_SYMBOL_GPL(netdev_walk_all_lower_dev); 6851 + 6852 + static int __netdev_walk_all_lower_dev(struct net_device *dev, 6853 + int (*fn)(struct net_device *dev, 6854 + void *data), 6855 + void *data) 6856 + { 6857 + struct net_device *ldev, *next, *now, *dev_stack[MAX_NEST_DEV + 1]; 6858 + struct list_head *niter, *iter, *iter_stack[MAX_NEST_DEV + 1]; 6859 + int ret, cur = 0; 6860 + bool ignore; 6861 + 6862 + now = dev; 6863 + iter = &dev->adj_list.lower; 6864 + 6865 + while (1) { 6866 + if (now != dev) { 6867 + ret = fn(now, data); 6868 + if (ret) 6869 + return ret; 6870 + } 6871 + 6872 + next = NULL; 6873 + while (1) { 6874 + ldev = __netdev_next_lower_dev(now, &iter, &ignore); 6875 + if (!ldev) 6876 + break; 6877 + if (ignore) 6878 + continue; 6879 + 6880 + next = ldev; 6881 + niter = &ldev->adj_list.lower; 6882 + dev_stack[cur] = now; 6883 + iter_stack[cur++] = iter; 6884 + break; 6885 + } 6886 + 6887 + if (!next) { 6888 + if (!cur) 6889 + return 0; 6890 + next = dev_stack[--cur]; 6891 + niter = iter_stack[cur]; 6892 + } 6893 + 6894 + now = next; 6895 + iter = niter; 6896 + } 6897 + 6898 + return 0; 6899 + } 6847 6900 6848 6901 static struct net_device *netdev_next_lower_dev_rcu(struct net_device *dev, 6849 6902 struct list_head **iter) ··· 6944 6827 return lower->dev; 6945 6828 } 6946 6829 6830 + static u8 __netdev_upper_depth(struct net_device *dev) 6831 + { 6832 + struct net_device *udev; 6833 + struct list_head *iter; 6834 + u8 max_depth = 0; 6835 + bool ignore; 6836 + 6837 + for (iter = &dev->adj_list.upper, 6838 + udev = __netdev_next_upper_dev(dev, &iter, &ignore); 6839 + udev; 6840 + udev = __netdev_next_upper_dev(dev, &iter, &ignore)) { 6841 + if (ignore) 6842 + continue; 6843 + if (max_depth < udev->upper_level) 6844 + max_depth = udev->upper_level; 6845 + } 6846 + 6847 + return max_depth; 6848 + } 6849 + 6850 + static u8 __netdev_lower_depth(struct net_device *dev) 6851 + { 6852 + struct net_device *ldev; 6853 + struct list_head *iter; 6854 + u8 max_depth = 0; 6855 + bool ignore; 6856 + 6857 + for (iter = &dev->adj_list.lower, 6858 + ldev = __netdev_next_lower_dev(dev, &iter, &ignore); 6859 + ldev; 6860 + ldev = __netdev_next_lower_dev(dev, &iter, &ignore)) { 6861 + if (ignore) 6862 + continue; 6863 + if (max_depth < ldev->lower_level) 6864 + max_depth = ldev->lower_level; 6865 + } 6866 + 6867 + return max_depth; 6868 + } 6869 + 6870 + static int __netdev_update_upper_level(struct net_device *dev, void *data) 6871 + { 6872 + dev->upper_level = __netdev_upper_depth(dev) + 1; 6873 + 
return 0; 6874 + } 6875 + 6876 + static int __netdev_update_lower_level(struct net_device *dev, void *data) 6877 + { 6878 + dev->lower_level = __netdev_lower_depth(dev) + 1; 6879 + return 0; 6880 + } 6881 + 6947 6882 int netdev_walk_all_lower_dev_rcu(struct net_device *dev, 6948 6883 int (*fn)(struct net_device *dev, 6949 6884 void *data), 6950 6885 void *data) 6951 6886 { 6952 - struct net_device *ldev; 6953 - struct list_head *iter; 6954 - int ret; 6887 + struct net_device *ldev, *next, *now, *dev_stack[MAX_NEST_DEV + 1]; 6888 + struct list_head *niter, *iter, *iter_stack[MAX_NEST_DEV + 1]; 6889 + int ret, cur = 0; 6955 6890 6956 - for (iter = &dev->adj_list.lower, 6957 - ldev = netdev_next_lower_dev_rcu(dev, &iter); 6958 - ldev; 6959 - ldev = netdev_next_lower_dev_rcu(dev, &iter)) { 6960 - /* first is the lower device itself */ 6961 - ret = fn(ldev, data); 6962 - if (ret) 6963 - return ret; 6891 + now = dev; 6892 + iter = &dev->adj_list.lower; 6964 6893 6965 - /* then look at all of its lower devices */ 6966 - ret = netdev_walk_all_lower_dev_rcu(ldev, fn, data); 6967 - if (ret) 6968 - return ret; 6894 + while (1) { 6895 + if (now != dev) { 6896 + ret = fn(now, data); 6897 + if (ret) 6898 + return ret; 6899 + } 6900 + 6901 + next = NULL; 6902 + while (1) { 6903 + ldev = netdev_next_lower_dev_rcu(now, &iter); 6904 + if (!ldev) 6905 + break; 6906 + 6907 + next = ldev; 6908 + niter = &ldev->adj_list.lower; 6909 + dev_stack[cur] = now; 6910 + iter_stack[cur++] = iter; 6911 + break; 6912 + } 6913 + 6914 + if (!next) { 6915 + if (!cur) 6916 + return 0; 6917 + next = dev_stack[--cur]; 6918 + niter = iter_stack[cur]; 6919 + } 6920 + 6921 + now = next; 6922 + iter = niter; 6969 6923 } 6970 6924 6971 6925 return 0; ··· 7140 6952 adj->master = master; 7141 6953 adj->ref_nr = 1; 7142 6954 adj->private = private; 6955 + adj->ignore = false; 7143 6956 dev_hold(adj_dev); 7144 6957 7145 6958 pr_debug("Insert adjacency: dev %s adj_dev %s adj->ref_nr %d; dev_hold on %s\n", ··· 7291 7102 return -EBUSY; 7292 7103 7293 7104 /* To prevent loops, check if dev is not upper device to upper_dev. */ 7294 - if (netdev_has_upper_dev(upper_dev, dev)) 7105 + if (__netdev_has_upper_dev(upper_dev, dev)) 7295 7106 return -EBUSY; 7296 7107 7108 + if ((dev->lower_level + upper_dev->upper_level) > MAX_NEST_DEV) 7109 + return -EMLINK; 7110 + 7297 7111 if (!master) { 7298 - if (netdev_has_upper_dev(dev, upper_dev)) 7112 + if (__netdev_has_upper_dev(dev, upper_dev)) 7299 7113 return -EEXIST; 7300 7114 } else { 7301 - master_dev = netdev_master_upper_dev_get(dev); 7115 + master_dev = __netdev_master_upper_dev_get(dev); 7302 7116 if (master_dev) 7303 7117 return master_dev == upper_dev ? 
-EEXIST : -EBUSY; 7304 7118 } ··· 7322 7130 ret = notifier_to_errno(ret); 7323 7131 if (ret) 7324 7132 goto rollback; 7133 + 7134 + __netdev_update_upper_level(dev, NULL); 7135 + __netdev_walk_all_lower_dev(dev, __netdev_update_upper_level, NULL); 7136 + 7137 + __netdev_update_lower_level(upper_dev, NULL); 7138 + __netdev_walk_all_upper_dev(upper_dev, __netdev_update_lower_level, 7139 + NULL); 7325 7140 7326 7141 return 0; 7327 7142 ··· 7412 7213 7413 7214 call_netdevice_notifiers_info(NETDEV_CHANGEUPPER, 7414 7215 &changeupper_info.info); 7216 + 7217 + __netdev_update_upper_level(dev, NULL); 7218 + __netdev_walk_all_lower_dev(dev, __netdev_update_upper_level, NULL); 7219 + 7220 + __netdev_update_lower_level(upper_dev, NULL); 7221 + __netdev_walk_all_upper_dev(upper_dev, __netdev_update_lower_level, 7222 + NULL); 7415 7223 } 7416 7224 EXPORT_SYMBOL(netdev_upper_dev_unlink); 7225 + 7226 + static void __netdev_adjacent_dev_set(struct net_device *upper_dev, 7227 + struct net_device *lower_dev, 7228 + bool val) 7229 + { 7230 + struct netdev_adjacent *adj; 7231 + 7232 + adj = __netdev_find_adj(lower_dev, &upper_dev->adj_list.lower); 7233 + if (adj) 7234 + adj->ignore = val; 7235 + 7236 + adj = __netdev_find_adj(upper_dev, &lower_dev->adj_list.upper); 7237 + if (adj) 7238 + adj->ignore = val; 7239 + } 7240 + 7241 + static void netdev_adjacent_dev_disable(struct net_device *upper_dev, 7242 + struct net_device *lower_dev) 7243 + { 7244 + __netdev_adjacent_dev_set(upper_dev, lower_dev, true); 7245 + } 7246 + 7247 + static void netdev_adjacent_dev_enable(struct net_device *upper_dev, 7248 + struct net_device *lower_dev) 7249 + { 7250 + __netdev_adjacent_dev_set(upper_dev, lower_dev, false); 7251 + } 7252 + 7253 + int netdev_adjacent_change_prepare(struct net_device *old_dev, 7254 + struct net_device *new_dev, 7255 + struct net_device *dev, 7256 + struct netlink_ext_ack *extack) 7257 + { 7258 + int err; 7259 + 7260 + if (!new_dev) 7261 + return 0; 7262 + 7263 + if (old_dev && new_dev != old_dev) 7264 + netdev_adjacent_dev_disable(dev, old_dev); 7265 + 7266 + err = netdev_upper_dev_link(new_dev, dev, extack); 7267 + if (err) { 7268 + if (old_dev && new_dev != old_dev) 7269 + netdev_adjacent_dev_enable(dev, old_dev); 7270 + return err; 7271 + } 7272 + 7273 + return 0; 7274 + } 7275 + EXPORT_SYMBOL(netdev_adjacent_change_prepare); 7276 + 7277 + void netdev_adjacent_change_commit(struct net_device *old_dev, 7278 + struct net_device *new_dev, 7279 + struct net_device *dev) 7280 + { 7281 + if (!new_dev || !old_dev) 7282 + return; 7283 + 7284 + if (new_dev == old_dev) 7285 + return; 7286 + 7287 + netdev_adjacent_dev_enable(dev, old_dev); 7288 + netdev_upper_dev_unlink(old_dev, dev); 7289 + } 7290 + EXPORT_SYMBOL(netdev_adjacent_change_commit); 7291 + 7292 + void netdev_adjacent_change_abort(struct net_device *old_dev, 7293 + struct net_device *new_dev, 7294 + struct net_device *dev) 7295 + { 7296 + if (!new_dev) 7297 + return; 7298 + 7299 + if (old_dev && new_dev != old_dev) 7300 + netdev_adjacent_dev_enable(dev, old_dev); 7301 + 7302 + netdev_upper_dev_unlink(new_dev, dev); 7303 + } 7304 + EXPORT_SYMBOL(netdev_adjacent_change_abort); 7417 7305 7418 7306 /** 7419 7307 * netdev_bonding_info_change - Dispatch event about slave change ··· 7614 7328 } 7615 7329 EXPORT_SYMBOL(netdev_lower_dev_get_private); 7616 7330 7617 - 7618 - int dev_get_nest_level(struct net_device *dev) 7619 - { 7620 - struct net_device *lower = NULL; 7621 - struct list_head *iter; 7622 - int max_nest = -1; 7623 - int nest; 7624 - 7625 - 
ASSERT_RTNL(); 7626 - 7627 - netdev_for_each_lower_dev(dev, lower, iter) { 7628 - nest = dev_get_nest_level(lower); 7629 - if (max_nest < nest) 7630 - max_nest = nest; 7631 - } 7632 - 7633 - return max_nest + 1; 7634 - } 7635 - EXPORT_SYMBOL(dev_get_nest_level); 7636 7331 7637 7332 /** 7638 7333 * netdev_lower_change - Dispatch event about lower device state change ··· 8421 8154 return -EINVAL; 8422 8155 } 8423 8156 8424 - if (prog->aux->id == prog_id) { 8157 + /* prog->aux->id may be 0 for orphaned device-bound progs */ 8158 + if (prog->aux->id && prog->aux->id == prog_id) { 8425 8159 bpf_prog_put(prog); 8426 8160 return 0; 8427 8161 } ··· 8887 8619 { 8888 8620 /* Initialize queue lock */ 8889 8621 spin_lock_init(&queue->_xmit_lock); 8890 - netdev_set_xmit_lockdep_class(&queue->_xmit_lock, dev->type); 8622 + lockdep_set_class(&queue->_xmit_lock, &dev->qdisc_xmit_lock_key); 8891 8623 queue->xmit_lock_owner = -1; 8892 8624 netdev_queue_numa_node_write(queue, NUMA_NO_NODE); 8893 8625 queue->dev = dev; ··· 8934 8666 } 8935 8667 EXPORT_SYMBOL(netif_tx_stop_all_queues); 8936 8668 8669 + static void netdev_register_lockdep_key(struct net_device *dev) 8670 + { 8671 + lockdep_register_key(&dev->qdisc_tx_busylock_key); 8672 + lockdep_register_key(&dev->qdisc_running_key); 8673 + lockdep_register_key(&dev->qdisc_xmit_lock_key); 8674 + lockdep_register_key(&dev->addr_list_lock_key); 8675 + } 8676 + 8677 + static void netdev_unregister_lockdep_key(struct net_device *dev) 8678 + { 8679 + lockdep_unregister_key(&dev->qdisc_tx_busylock_key); 8680 + lockdep_unregister_key(&dev->qdisc_running_key); 8681 + lockdep_unregister_key(&dev->qdisc_xmit_lock_key); 8682 + lockdep_unregister_key(&dev->addr_list_lock_key); 8683 + } 8684 + 8685 + void netdev_update_lockdep_key(struct net_device *dev) 8686 + { 8687 + struct netdev_queue *queue; 8688 + int i; 8689 + 8690 + lockdep_unregister_key(&dev->qdisc_xmit_lock_key); 8691 + lockdep_unregister_key(&dev->addr_list_lock_key); 8692 + 8693 + lockdep_register_key(&dev->qdisc_xmit_lock_key); 8694 + lockdep_register_key(&dev->addr_list_lock_key); 8695 + 8696 + lockdep_set_class(&dev->addr_list_lock, &dev->addr_list_lock_key); 8697 + for (i = 0; i < dev->num_tx_queues; i++) { 8698 + queue = netdev_get_tx_queue(dev, i); 8699 + 8700 + lockdep_set_class(&queue->_xmit_lock, 8701 + &dev->qdisc_xmit_lock_key); 8702 + } 8703 + } 8704 + EXPORT_SYMBOL(netdev_update_lockdep_key); 8705 + 8937 8706 /** 8938 8707 * register_netdevice - register a network device 8939 8708 * @dev: device to register ··· 9005 8700 BUG_ON(!net); 9006 8701 9007 8702 spin_lock_init(&dev->addr_list_lock); 9008 - netdev_set_addr_lockdep_class(dev); 8703 + lockdep_set_class(&dev->addr_list_lock, &dev->addr_list_lock_key); 9009 8704 9010 8705 ret = dev_get_valid_name(net, dev, dev->name); 9011 8706 if (ret < 0) ··· 9515 9210 9516 9211 dev_net_set(dev, &init_net); 9517 9212 9213 + netdev_register_lockdep_key(dev); 9214 + 9518 9215 dev->gso_max_size = GSO_MAX_SIZE; 9519 9216 dev->gso_max_segs = GSO_MAX_SEGS; 9217 + dev->upper_level = 1; 9218 + dev->lower_level = 1; 9520 9219 9521 9220 INIT_LIST_HEAD(&dev->napi_list); 9522 9221 INIT_LIST_HEAD(&dev->unreg_list); ··· 9600 9291 9601 9292 free_percpu(dev->pcpu_refcnt); 9602 9293 dev->pcpu_refcnt = NULL; 9294 + 9295 + netdev_unregister_lockdep_key(dev); 9603 9296 9604 9297 /* Compatibility with error handling in drivers */ 9605 9298 if (dev->reg_state == NETREG_UNINITIALIZED) { ··· 9771 9460 call_netdevice_notifiers(NETDEV_UNREGISTER, dev); 9772 9461 rcu_barrier(); 9773 
9462 9774 - new_nsid = peernet2id_alloc(dev_net(dev), net); 9463 + new_nsid = peernet2id_alloc(dev_net(dev), net, GFP_KERNEL); 9775 9464 /* If there is an ifindex conflict assign a new one */ 9776 9465 if (__dev_get_by_index(net, dev->ifindex)) 9777 9466 new_ifindex = dev_new_index(net);
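All four walker rewrites in this hunk share one shape: the old self-recursion becomes a loop over an explicit (node, cursor) stack bounded by MAX_NEST_DEV, which the new -EMLINK check in __netdev_upper_dev_link() guarantees is deep enough. A userspace sketch of that shape with a hypothetical adjacency layout; note that, as in the kernel code, fn() fires again for a node when the walk backtracks through it, so callbacks must tolerate repeats:

    #include <stdio.h>

    #define MAX_NEST_DEV 8

    struct node {
        const char *name;
        struct node *upper[4];  /* NULL-terminated uppers; hypothetical */
    };

    static int walk_all_upper(struct node *dev,
                              int (*fn)(struct node *, void *), void *data)
    {
        struct node *stack[MAX_NEST_DEV + 1];
        int cursor_stack[MAX_NEST_DEV + 1];
        struct node *now = dev, *next;
        int cursor = 0, cur = 0, ret;

        while (1) {
            if (now != dev) {
                ret = fn(now, data);
                if (ret)
                    return ret;
            }

            next = NULL;
            if (now->upper[cursor]) {      /* descend one level */
                next = now->upper[cursor++];
                stack[cur] = now;          /* remember where to resume */
                cursor_stack[cur++] = cursor;
            }

            if (!next) {                   /* level exhausted: backtrack */
                if (!cur)
                    return 0;
                now = stack[--cur];
                cursor = cursor_stack[cur];
                continue;
            }

            now = next;
            cursor = 0;
        }
    }

    static int print_name(struct node *n, void *data)
    {
        (void)data;
        printf("visit %s\n", n->name);
        return 0;
    }

    int main(void)
    {
        struct node vlan0 = { "vlan0", { NULL } };
        struct node br0   = { "br0",   { &vlan0, NULL } };
        struct node bond0 = { "bond0", { &br0, NULL } };

        return walk_all_upper(&bond0, print_name, NULL);
    }
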
+6 -6
net/core/dev_addr_lists.c
··· 637 637 if (to->addr_len != from->addr_len) 638 638 return -EINVAL; 639 639 640 - netif_addr_lock_nested(to); 640 + netif_addr_lock(to); 641 641 err = __hw_addr_sync(&to->uc, &from->uc, to->addr_len); 642 642 if (!err) 643 643 __dev_set_rx_mode(to); ··· 667 667 if (to->addr_len != from->addr_len) 668 668 return -EINVAL; 669 669 670 - netif_addr_lock_nested(to); 670 + netif_addr_lock(to); 671 671 err = __hw_addr_sync_multiple(&to->uc, &from->uc, to->addr_len); 672 672 if (!err) 673 673 __dev_set_rx_mode(to); ··· 691 691 return; 692 692 693 693 netif_addr_lock_bh(from); 694 - netif_addr_lock_nested(to); 694 + netif_addr_lock(to); 695 695 __hw_addr_unsync(&to->uc, &from->uc, to->addr_len); 696 696 __dev_set_rx_mode(to); 697 697 netif_addr_unlock(to); ··· 858 858 if (to->addr_len != from->addr_len) 859 859 return -EINVAL; 860 860 861 - netif_addr_lock_nested(to); 861 + netif_addr_lock(to); 862 862 err = __hw_addr_sync(&to->mc, &from->mc, to->addr_len); 863 863 if (!err) 864 864 __dev_set_rx_mode(to); ··· 888 888 if (to->addr_len != from->addr_len) 889 889 return -EINVAL; 890 890 891 - netif_addr_lock_nested(to); 891 + netif_addr_lock(to); 892 892 err = __hw_addr_sync_multiple(&to->mc, &from->mc, to->addr_len); 893 893 if (!err) 894 894 __dev_set_rx_mode(to); ··· 912 912 return; 913 913 914 914 netif_addr_lock_bh(from); 915 - netif_addr_lock_nested(to); 915 + netif_addr_lock(to); 916 916 __hw_addr_unsync(&to->mc, &from->mc, to->addr_len); 917 917 __dev_set_rx_mode(to); 918 918 netif_addr_unlock(to);
+3 -1
net/core/ethtool.c
··· 1396 1396 1397 1397 static int ethtool_get_wol(struct net_device *dev, char __user *useraddr) 1398 1398 { 1399 - struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL }; 1399 + struct ethtool_wolinfo wol; 1400 1400 1401 1401 if (!dev->ethtool_ops->get_wol) 1402 1402 return -EOPNOTSUPP; 1403 1403 1404 + memset(&wol, 0, sizeof(struct ethtool_wolinfo)); 1405 + wol.cmd = ETHTOOL_GWOL; 1404 1406 dev->ethtool_ops->get_wol(dev, &wol); 1405 1407 1406 1408 if (copy_to_user(useraddr, &wol, sizeof(wol)))
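The ethtool change looks cosmetic but is an infoleak fix: a designated initializer must zero every named member, yet it may leave padding bytes undefined, and those stack bytes were then copied to userspace; memset() zeroes the whole object, padding included. A sketch with a hypothetical padded struct:

    #include <stdio.h>
    #include <string.h>

    struct wol_like {            /* hypothetical layout with internal padding */
        unsigned char cmd;       /* 7 padding bytes likely follow on LP64 */
        unsigned long supported;
    };

    int main(void)
    {
        struct wol_like a = { .cmd = 5 };  /* padding may stay undefined */
        struct wol_like b;

        memset(&b, 0, sizeof(b));          /* padding guaranteed zero */
        b.cmd = 5;

        /* copying 'a' out wholesale may leak stale stack bytes through
         * its padding; copying 'b' cannot */
        printf("%u %u\n", a.cmd, b.cmd);
        return 0;
    }
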
+16 -22
net/core/flow_dissector.c
··· 1350 1350 } 1351 1351 EXPORT_SYMBOL(__skb_flow_dissect); 1352 1352 1353 - static u32 hashrnd __read_mostly; 1353 + static siphash_key_t hashrnd __read_mostly; 1354 1354 static __always_inline void __flow_hash_secret_init(void) 1355 1355 { 1356 1356 net_get_random_once(&hashrnd, sizeof(hashrnd)); 1357 1357 } 1358 1358 1359 - static __always_inline u32 __flow_hash_words(const u32 *words, u32 length, 1360 - u32 keyval) 1359 + static const void *flow_keys_hash_start(const struct flow_keys *flow) 1361 1360 { 1362 - return jhash2(words, length, keyval); 1363 - } 1364 - 1365 - static inline const u32 *flow_keys_hash_start(const struct flow_keys *flow) 1366 - { 1367 - const void *p = flow; 1368 - 1369 - BUILD_BUG_ON(FLOW_KEYS_HASH_OFFSET % sizeof(u32)); 1370 - return (const u32 *)(p + FLOW_KEYS_HASH_OFFSET); 1361 + BUILD_BUG_ON(FLOW_KEYS_HASH_OFFSET % SIPHASH_ALIGNMENT); 1362 + return &flow->FLOW_KEYS_HASH_START_FIELD; 1371 1363 } 1372 1364 1373 1365 static inline size_t flow_keys_hash_length(const struct flow_keys *flow) 1374 1366 { 1375 1367 size_t diff = FLOW_KEYS_HASH_OFFSET + sizeof(flow->addrs); 1376 - BUILD_BUG_ON((sizeof(*flow) - FLOW_KEYS_HASH_OFFSET) % sizeof(u32)); 1377 1368 BUILD_BUG_ON(offsetof(typeof(*flow), addrs) != 1378 1369 sizeof(*flow) - sizeof(flow->addrs)); 1379 1370 ··· 1379 1388 diff -= sizeof(flow->addrs.tipckey); 1380 1389 break; 1381 1390 } 1382 - return (sizeof(*flow) - diff) / sizeof(u32); 1391 + return sizeof(*flow) - diff; 1383 1392 } 1384 1393 1385 1394 __be32 flow_get_u32_src(const struct flow_keys *flow) ··· 1445 1454 } 1446 1455 } 1447 1456 1448 - static inline u32 __flow_hash_from_keys(struct flow_keys *keys, u32 keyval) 1457 + static inline u32 __flow_hash_from_keys(struct flow_keys *keys, 1458 + const siphash_key_t *keyval) 1449 1459 { 1450 1460 u32 hash; 1451 1461 1452 1462 __flow_hash_consistentify(keys); 1453 1463 1454 - hash = __flow_hash_words(flow_keys_hash_start(keys), 1455 - flow_keys_hash_length(keys), keyval); 1464 + hash = siphash(flow_keys_hash_start(keys), 1465 + flow_keys_hash_length(keys), keyval); 1456 1466 if (!hash) 1457 1467 hash = 1; 1458 1468 ··· 1463 1471 u32 flow_hash_from_keys(struct flow_keys *keys) 1464 1472 { 1465 1473 __flow_hash_secret_init(); 1466 - return __flow_hash_from_keys(keys, hashrnd); 1474 + return __flow_hash_from_keys(keys, &hashrnd); 1467 1475 } 1468 1476 EXPORT_SYMBOL(flow_hash_from_keys); 1469 1477 1470 1478 static inline u32 ___skb_get_hash(const struct sk_buff *skb, 1471 - struct flow_keys *keys, u32 keyval) 1479 + struct flow_keys *keys, 1480 + const siphash_key_t *keyval) 1472 1481 { 1473 1482 skb_flow_dissect_flow_keys(skb, keys, 1474 1483 FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL); ··· 1517 1524 &keys, NULL, 0, 0, 0, 1518 1525 FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL); 1519 1526 1520 - return __flow_hash_from_keys(&keys, hashrnd); 1527 + return __flow_hash_from_keys(&keys, &hashrnd); 1521 1528 } 1522 1529 EXPORT_SYMBOL_GPL(__skb_get_hash_symmetric); 1523 1530 ··· 1537 1544 1538 1545 __flow_hash_secret_init(); 1539 1546 1540 - hash = ___skb_get_hash(skb, &keys, hashrnd); 1547 + hash = ___skb_get_hash(skb, &keys, &hashrnd); 1541 1548 1542 1549 __skb_set_sw_hash(skb, hash, flow_keys_have_l4(&keys)); 1543 1550 } 1544 1551 EXPORT_SYMBOL(__skb_get_hash); 1545 1552 1546 - __u32 skb_get_hash_perturb(const struct sk_buff *skb, u32 perturb) 1553 + __u32 skb_get_hash_perturb(const struct sk_buff *skb, 1554 + const siphash_key_t *perturb) 1547 1555 { 1548 1556 struct flow_keys keys; 1549 1557
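The flow-dissector hunk swaps the unkeyed jhash2() for the keyed siphash(), so an observer of hash-dependent behaviour can no longer reconstruct the seed; note the length argument also switches from 32-bit words to bytes. A kernel-context sketch of the before and after (not a standalone program; signatures per <linux/jhash.h> and <linux/siphash.h>):

    #include <linux/siphash.h>

    static siphash_key_t hashrnd __read_mostly;  /* 128-bit secret key */

    static u32 flow_hash(const void *start, size_t len_bytes)
    {
        /* old: unkeyed mix, length counted in u32 words
         *   return jhash2(start, len_bytes / sizeof(u32), seed);
         */

        /* new: keyed PRF, length counted in bytes, truncated to 32 bits
         * exactly as __flow_hash_from_keys() does */
        return (u32)siphash(start, len_bytes, &hashrnd);
    }
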
+6 -1
net/core/lwt_bpf.c
··· 88 88 int err = -EINVAL; 89 89 90 90 if (skb->protocol == htons(ETH_P_IP)) { 91 + struct net_device *dev = skb_dst(skb)->dev; 91 92 struct iphdr *iph = ip_hdr(skb); 92 93 94 + dev_hold(dev); 95 + skb_dst_drop(skb); 93 96 err = ip_route_input_noref(skb, iph->daddr, iph->saddr, 94 - iph->tos, skb_dst(skb)->dev); 97 + iph->tos, dev); 98 + dev_put(dev); 95 99 } else if (skb->protocol == htons(ETH_P_IPV6)) { 100 + skb_dst_drop(skb); 96 101 err = ipv6_stub->ipv6_route_input(skb); 97 102 } else { 98 103 err = -EAFNOSUPPORT;
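The lwt_bpf fix is a lifetime reordering: the reroute wants the route lookup to install a fresh dst, so the old one is dropped first, but that old dst holds the reference keeping the input device alive, so the device pointer is fetched and held across the window. A userspace analog of the discipline, with stand-in types:

    #include <stdio.h>
    #include <stdlib.h>

    struct dev { int refcnt; };
    struct dst { struct dev *dev; };  /* dst owns a reference on dev */

    static void dev_hold(struct dev *d) { d->refcnt++; }
    static void dev_put(struct dev *d)  { d->refcnt--; }

    static void dst_drop(struct dst **dstp)
    {
        dev_put((*dstp)->dev);  /* dst's own device reference goes away */
        free(*dstp);
        *dstp = NULL;           /* (*dstp)->dev must not be used after this */
    }

    static int reroute(struct dst **dstp)
    {
        struct dev *dev = (*dstp)->dev;  /* 1. fetch the field first */

        dev_hold(dev);                   /* 2. pin it across the drop */
        dst_drop(dstp);
        printf("routing via dev, refcnt=%d\n", dev->refcnt);
        dev_put(dev);                    /* 3. release our pin */
        return 0;
    }

    int main(void)
    {
        struct dev eth = { .refcnt = 1 };  /* reference held by the dst */
        struct dst *dst = malloc(sizeof(*dst));

        if (!dst)
            return 1;
        dst->dev = &eth;
        return reroute(&dst);
    }
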
+10 -8
net/core/net_namespace.c
··· 246 246 } 247 247 248 248 static void rtnl_net_notifyid(struct net *net, int cmd, int id, u32 portid, 249 - struct nlmsghdr *nlh); 249 + struct nlmsghdr *nlh, gfp_t gfp); 250 250 /* This function returns the id of a peer netns. If no id is assigned, one will 251 251 * be allocated and returned. 252 252 */ 253 - int peernet2id_alloc(struct net *net, struct net *peer) 253 + int peernet2id_alloc(struct net *net, struct net *peer, gfp_t gfp) 254 254 { 255 255 bool alloc = false, alive = false; 256 256 int id; ··· 269 269 id = __peernet2id_alloc(net, peer, &alloc); 270 270 spin_unlock_bh(&net->nsid_lock); 271 271 if (alloc && id >= 0) 272 - rtnl_net_notifyid(net, RTM_NEWNSID, id, 0, NULL); 272 + rtnl_net_notifyid(net, RTM_NEWNSID, id, 0, NULL, gfp); 273 273 if (alive) 274 274 put_net(peer); 275 275 return id; ··· 479 479 480 480 if (rv < 0) { 481 481 put_userns: 482 + key_remove_domain(net->key_domain); 482 483 put_user_ns(user_ns); 483 484 net_drop_ns(net); 484 485 dec_ucounts: ··· 534 533 idr_remove(&tmp->netns_ids, id); 535 534 spin_unlock_bh(&tmp->nsid_lock); 536 535 if (id >= 0) 537 - rtnl_net_notifyid(tmp, RTM_DELNSID, id, 0, NULL); 536 + rtnl_net_notifyid(tmp, RTM_DELNSID, id, 0, NULL, 537 + GFP_KERNEL); 538 538 if (tmp == last) 539 539 break; 540 540 } ··· 768 766 spin_unlock_bh(&net->nsid_lock); 769 767 if (err >= 0) { 770 768 rtnl_net_notifyid(net, RTM_NEWNSID, err, NETLINK_CB(skb).portid, 771 - nlh); 769 + nlh, GFP_KERNEL); 772 770 err = 0; 773 771 } else if (err == -ENOSPC && nsid >= 0) { 774 772 err = -EEXIST; ··· 1056 1054 } 1057 1055 1058 1056 static void rtnl_net_notifyid(struct net *net, int cmd, int id, u32 portid, 1059 - struct nlmsghdr *nlh) 1057 + struct nlmsghdr *nlh, gfp_t gfp) 1060 1058 { 1061 1059 struct net_fill_args fillargs = { 1062 1060 .portid = portid, ··· 1067 1065 struct sk_buff *msg; 1068 1066 int err = -ENOMEM; 1069 1067 1070 - msg = nlmsg_new(rtnl_net_get_size(), GFP_KERNEL); 1068 + msg = nlmsg_new(rtnl_net_get_size(), gfp); 1071 1069 if (!msg) 1072 1070 goto out; 1073 1071 ··· 1075 1073 if (err < 0) 1076 1074 goto err_out; 1077 1075 1078 - rtnl_notify(msg, net, portid, RTNLGRP_NSID, nlh, 0); 1076 + rtnl_notify(msg, net, portid, RTNLGRP_NSID, nlh, gfp); 1079 1077 return; 1080 1078 1081 1079 err_out:
+9 -8
net/core/rtnetlink.c
··· 1523 1523 1524 1524 static int rtnl_fill_link_netnsid(struct sk_buff *skb, 1525 1525 const struct net_device *dev, 1526 - struct net *src_net) 1526 + struct net *src_net, gfp_t gfp) 1527 1527 { 1528 1528 bool put_iflink = false; 1529 1529 ··· 1531 1531 struct net *link_net = dev->rtnl_link_ops->get_link_net(dev); 1532 1532 1533 1533 if (!net_eq(dev_net(dev), link_net)) { 1534 - int id = peernet2id_alloc(src_net, link_net); 1534 + int id = peernet2id_alloc(src_net, link_net, gfp); 1535 1535 1536 1536 if (nla_put_s32(skb, IFLA_LINK_NETNSID, id)) 1537 1537 return -EMSGSIZE; ··· 1589 1589 int type, u32 pid, u32 seq, u32 change, 1590 1590 unsigned int flags, u32 ext_filter_mask, 1591 1591 u32 event, int *new_nsid, int new_ifindex, 1592 - int tgt_netnsid) 1592 + int tgt_netnsid, gfp_t gfp) 1593 1593 { 1594 1594 struct ifinfomsg *ifm; 1595 1595 struct nlmsghdr *nlh; ··· 1681 1681 goto nla_put_failure; 1682 1682 } 1683 1683 1684 - if (rtnl_fill_link_netnsid(skb, dev, src_net)) 1684 + if (rtnl_fill_link_netnsid(skb, dev, src_net, gfp)) 1685 1685 goto nla_put_failure; 1686 1686 1687 1687 if (new_nsid && ··· 2001 2001 NETLINK_CB(cb->skb).portid, 2002 2002 nlh->nlmsg_seq, 0, flags, 2003 2003 ext_filter_mask, 0, NULL, 0, 2004 - netnsid); 2004 + netnsid, GFP_KERNEL); 2005 2005 2006 2006 if (err < 0) { 2007 2007 if (likely(skb->len)) ··· 2355 2355 err = ops->ndo_del_slave(upper_dev, dev); 2356 2356 if (err) 2357 2357 return err; 2358 + netdev_update_lockdep_key(dev); 2358 2359 } else { 2359 2360 return -EOPNOTSUPP; 2360 2361 } ··· 3360 3359 err = rtnl_fill_ifinfo(nskb, dev, net, 3361 3360 RTM_NEWLINK, NETLINK_CB(skb).portid, 3362 3361 nlh->nlmsg_seq, 0, 0, ext_filter_mask, 3363 - 0, NULL, 0, netnsid); 3362 + 0, NULL, 0, netnsid, GFP_KERNEL); 3364 3363 if (err < 0) { 3365 3364 /* -EMSGSIZE implies BUG in if_nlmsg_size */ 3366 3365 WARN_ON(err == -EMSGSIZE); ··· 3472 3471 3473 3472 err = rtnl_fill_ifinfo(skb, dev, dev_net(dev), 3474 3473 type, 0, 0, change, 0, 0, event, 3475 - new_nsid, new_ifindex, -1); 3474 + new_nsid, new_ifindex, -1, flags); 3476 3475 if (err < 0) { 3477 3476 /* -EMSGSIZE implies BUG in if_nlmsg_size() */ 3478 3477 WARN_ON(err == -EMSGSIZE); ··· 3917 3916 ndm = nlmsg_data(nlh); 3918 3917 if (ndm->ndm_pad1 || ndm->ndm_pad2 || ndm->ndm_state || 3919 3918 ndm->ndm_flags || ndm->ndm_type) { 3920 - NL_SET_ERR_MSG(extack, "Invalid values in header for fbd dump request"); 3919 + NL_SET_ERR_MSG(extack, "Invalid values in header for fdb dump request"); 3921 3920 return -EINVAL; 3922 3921 } 3923 3922
+3 -3
net/core/sock.c
··· 1127 1127 break; 1128 1128 } 1129 1129 case SO_INCOMING_CPU: 1130 - sk->sk_incoming_cpu = val; 1130 + WRITE_ONCE(sk->sk_incoming_cpu, val); 1131 1131 break; 1132 1132 1133 1133 case SO_CNX_ADVICE: ··· 1476 1476 break; 1477 1477 1478 1478 case SO_INCOMING_CPU: 1479 - v.val = sk->sk_incoming_cpu; 1479 + v.val = READ_ONCE(sk->sk_incoming_cpu); 1480 1480 break; 1481 1481 1482 1482 case SO_MEMINFO: ··· 3600 3600 { 3601 3601 struct sock *sk = p; 3602 3602 3603 - return !skb_queue_empty(&sk->sk_receive_queue) || 3603 + return !skb_queue_empty_lockless(&sk->sk_receive_queue) || 3604 3604 sk_busy_loop_timeout(sk, start_time); 3605 3605 } 3606 3606 EXPORT_SYMBOL(sk_busy_loop_end);
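sk_incoming_cpu is stored from setsockopt() and loaded locklessly from the lookup scoring path (see the inet_hashtables.c hunk below), so both sides get annotated: WRITE_ONCE() on the store, READ_ONCE() on the load, making the intentional data race explicit to the compiler and to KCSAN. A minimal sketch of the pairing:

    #define READ_ONCE(x)     (*(const volatile __typeof__(x) *)&(x))
    #define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

    static int incoming_cpu;  /* stands in for sk->sk_incoming_cpu */

    /* setsockopt(SO_INCOMING_CPU) path: lockless readers exist, so the
     * store must be a single untorn write */
    static void set_incoming_cpu(int cpu)
    {
        WRITE_ONCE(incoming_cpu, cpu);
    }

    /* lookup scoring path: runs without the socket lock */
    static int score_cpu(int this_cpu)
    {
        return READ_ONCE(incoming_cpu) == this_cpu;
    }

    int main(void)
    {
        set_incoming_cpu(3);
        return !score_cpu(3);
    }
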
+1 -1
net/dccp/ipv4.c
··· 117 117 inet->inet_daddr, 118 118 inet->inet_sport, 119 119 inet->inet_dport); 120 - inet->inet_id = dp->dccps_iss ^ jiffies; 120 + inet->inet_id = prandom_u32(); 121 121 122 122 err = dccp_connect(sk); 123 123 rt = NULL;
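The same one-line change recurs in net/ipv4/datagram.c and net/ipv4/tcp_ipv4.c below: seeding inet_id from jiffies (optionally XOR'd with a sequence number) makes the IP ID counter guessable off-path, so it is drawn from the kernel PRNG instead. A userspace stand-in, assuming getrandom(2) in place of prandom_u32():

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/random.h>

    /* stands in for: inet->inet_id = prandom_u32(); */
    static uint16_t pick_inet_id(void)
    {
        uint32_t r = 0;

        if (getrandom(&r, sizeof(r), 0) < 0)
            r = 1;  /* sketch-only fallback; the kernel call cannot fail */
        return (uint16_t)r;
    }

    int main(void)
    {
        printf("inet_id = %u\n", pick_inet_id());
        return 0;
    }
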
+1 -1
net/decnet/af_decnet.c
··· 1205 1205 struct dn_scp *scp = DN_SK(sk); 1206 1206 __poll_t mask = datagram_poll(file, sock, wait); 1207 1207 1208 - if (!skb_queue_empty(&scp->other_receive_queue)) 1208 + if (!skb_queue_empty_lockless(&scp->other_receive_queue)) 1209 1209 mask |= EPOLLRDBAND; 1210 1210 1211 1211 return mask;
-5
net/dsa/master.c
··· 310 310 rtnl_unlock(); 311 311 } 312 312 313 - static struct lock_class_key dsa_master_addr_list_lock_key; 314 - 315 313 int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp) 316 314 { 317 315 int ret; ··· 323 325 wmb(); 324 326 325 327 dev->dsa_ptr = cpu_dp; 326 - lockdep_set_class(&dev->addr_list_lock, 327 - &dsa_master_addr_list_lock_key); 328 - 329 328 ret = dsa_master_ethtool_setup(dev); 330 329 if (ret) 331 330 return ret;
-12
net/dsa/slave.c
··· 1341 1341 return ret; 1342 1342 } 1343 1343 1344 - static struct lock_class_key dsa_slave_netdev_xmit_lock_key; 1345 - static void dsa_slave_set_lockdep_class_one(struct net_device *dev, 1346 - struct netdev_queue *txq, 1347 - void *_unused) 1348 - { 1349 - lockdep_set_class(&txq->_xmit_lock, 1350 - &dsa_slave_netdev_xmit_lock_key); 1351 - } 1352 - 1353 1344 int dsa_slave_suspend(struct net_device *slave_dev) 1354 1345 { 1355 1346 struct dsa_port *dp = dsa_slave_to_port(slave_dev); ··· 1423 1432 slave_dev->min_mtu = 0; 1424 1433 slave_dev->max_mtu = ETH_MAX_MTU; 1425 1434 SET_NETDEV_DEVTYPE(slave_dev, &dsa_type); 1426 - 1427 - netdev_for_each_tx_queue(slave_dev, dsa_slave_set_lockdep_class_one, 1428 - NULL); 1429 1435 1430 1436 SET_NETDEV_DEV(slave_dev, port->ds->dev); 1431 1437 slave_dev->dev.of_node = port->dn;
-8
net/ieee802154/6lowpan/core.c
··· 58 58 .create = lowpan_header_create, 59 59 }; 60 60 61 - static int lowpan_dev_init(struct net_device *ldev) 62 - { 63 - netdev_lockdep_set_classes(ldev); 64 - 65 - return 0; 66 - } 67 - 68 61 static int lowpan_open(struct net_device *dev) 69 62 { 70 63 if (!open_count) ··· 89 96 } 90 97 91 98 static const struct net_device_ops lowpan_netdev_ops = { 92 - .ndo_init = lowpan_dev_init, 93 99 .ndo_start_xmit = lowpan_xmit, 94 100 .ndo_open = lowpan_open, 95 101 .ndo_stop = lowpan_stop,
+1 -1
net/ipv4/datagram.c
··· 73 73 reuseport_has_conns(sk, true); 74 74 sk->sk_state = TCP_ESTABLISHED; 75 75 sk_set_txhash(sk); 76 - inet->inet_id = jiffies; 76 + inet->inet_id = prandom_u32(); 77 77 78 78 sk_dst_set(sk, &rt->dst); 79 79 err = 0;
+1 -1
net/ipv4/fib_frontend.c
··· 1148 1148 if (!(dev->flags & IFF_UP) || 1149 1149 ifa->ifa_flags & (IFA_F_SECONDARY | IFA_F_NOPREFIXROUTE) || 1150 1150 ipv4_is_zeronet(prefix) || 1151 - prefix == ifa->ifa_local || ifa->ifa_prefixlen == 32) 1151 + (prefix == ifa->ifa_local && ifa->ifa_prefixlen == 32)) 1152 1152 return; 1153 1153 1154 1154 /* add the new */
+1 -1
net/ipv4/inet_hashtables.c
··· 240 240 return -1; 241 241 242 242 score = sk->sk_family == PF_INET ? 2 : 1; 243 - if (sk->sk_incoming_cpu == raw_smp_processor_id()) 243 + if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id()) 244 244 score++; 245 245 } 246 246 return score;
+2 -2
net/ipv4/ip_gre.c
··· 509 509 key = &tun_info->key; 510 510 if (!(tun_info->key.tun_flags & TUNNEL_ERSPAN_OPT)) 511 511 goto err_free_skb; 512 - md = ip_tunnel_info_opts(tun_info); 513 - if (!md) 512 + if (tun_info->options_len < sizeof(*md)) 514 513 goto err_free_skb; 514 + md = ip_tunnel_info_opts(tun_info); 515 515 516 516 /* ERSPAN has fixed 8 byte GRE header */ 517 517 version = md->version;
+6 -5
net/ipv4/ip_output.c
··· 645 645 EXPORT_SYMBOL(ip_fraglist_prepare); 646 646 647 647 void ip_frag_init(struct sk_buff *skb, unsigned int hlen, 648 - unsigned int ll_rs, unsigned int mtu, 648 + unsigned int ll_rs, unsigned int mtu, bool DF, 649 649 struct ip_frag_state *state) 650 650 { 651 651 struct iphdr *iph = ip_hdr(skb); 652 652 653 + state->DF = DF; 653 654 state->hlen = hlen; 654 655 state->ll_rs = ll_rs; 655 656 state->mtu = mtu; ··· 668 667 { 669 668 /* Copy the flags to each fragment. */ 670 669 IPCB(to)->flags = IPCB(from)->flags; 671 - 672 - if (IPCB(from)->flags & IPSKB_FRAG_PMTU) 673 - state->iph->frag_off |= htons(IP_DF); 674 670 675 671 /* ANK: dirty, but effective trick. Upgrade options only if 676 672 * the segment to be fragmented was THE FIRST (otherwise, ··· 736 738 */ 737 739 iph = ip_hdr(skb2); 738 740 iph->frag_off = htons((state->offset >> 3)); 741 + if (state->DF) 742 + iph->frag_off |= htons(IP_DF); 739 743 740 744 /* 741 745 * Added AC : If we are fragmenting a fragment that's not the ··· 883 883 * Fragment the datagram. 884 884 */ 885 885 886 - ip_frag_init(skb, hlen, ll_rs, mtu, &state); 886 + ip_frag_init(skb, hlen, ll_rs, mtu, IPCB(skb)->flags & IPSKB_FRAG_PMTU, 887 + &state); 887 888 888 889 /* 889 890 * Keep copying data until we run out.
+2 -2
net/ipv4/tcp.c
··· 584 584 } 585 585 /* This barrier is coupled with smp_wmb() in tcp_reset() */ 586 586 smp_rmb(); 587 - if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue)) 587 + if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue)) 588 588 mask |= EPOLLERR; 589 589 590 590 return mask; ··· 1964 1964 if (unlikely(flags & MSG_ERRQUEUE)) 1965 1965 return inet_recv_error(sk, msg, len, addr_len); 1966 1966 1967 - if (sk_can_busy_loop(sk) && skb_queue_empty(&sk->sk_receive_queue) && 1967 + if (sk_can_busy_loop(sk) && skb_queue_empty_lockless(&sk->sk_receive_queue) && 1968 1968 (sk->sk_state == TCP_ESTABLISHED)) 1969 1969 sk_busy_loop(sk, nonblock); 1970 1970
+3 -3
net/ipv4/tcp_ipv4.c
··· 303 303 inet->inet_daddr); 304 304 } 305 305 306 - inet->inet_id = tp->write_seq ^ jiffies; 306 + inet->inet_id = prandom_u32(); 307 307 308 308 if (tcp_fastopen_defer_connect(sk, &err)) 309 309 return err; ··· 1450 1450 inet_csk(newsk)->icsk_ext_hdr_len = 0; 1451 1451 if (inet_opt) 1452 1452 inet_csk(newsk)->icsk_ext_hdr_len = inet_opt->opt.optlen; 1453 - newinet->inet_id = newtp->write_seq ^ jiffies; 1453 + newinet->inet_id = prandom_u32(); 1454 1454 1455 1455 if (!dst) { 1456 1456 dst = inet_csk_route_child_sock(sk, newsk, req); ··· 2681 2681 net->ipv4.tcp_death_row.sysctl_max_tw_buckets = cnt / 2; 2682 2682 net->ipv4.tcp_death_row.hashinfo = &tcp_hashinfo; 2683 2683 2684 - net->ipv4.sysctl_max_syn_backlog = max(128, cnt / 256); 2684 + net->ipv4.sysctl_max_syn_backlog = max(128, cnt / 128); 2685 2685 net->ipv4.sysctl_tcp_sack = 1; 2686 2686 net->ipv4.sysctl_tcp_window_scaling = 1; 2687 2687 net->ipv4.sysctl_tcp_timestamps = 1;
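Editor's note: here, and in the dccp, ipv4 datagram and sctp hunks elsewhere in this section, the per-socket IP ID is now seeded from prandom_u32() instead of values derived from the sequence number and jiffies; the old seeds were predictable enough for an off-path observer to probe connection state. Sketch:

	/* inet_id is a u16, so the assignment truncates the random
	 * word, exactly as the patches above do.
	 */
	static void inet_sk_init_ip_id(struct inet_sock *inet)
	{
		inet->inet_id = prandom_u32();
	}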
+20 -9
net/ipv4/udp.c
··· 388 388 return -1; 389 389 score += 4; 390 390 391 - if (sk->sk_incoming_cpu == raw_smp_processor_id()) 391 + if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id()) 392 392 score++; 393 393 return score; 394 394 } ··· 1316 1316 scratch->_tsize_state |= UDP_SKB_IS_STATELESS; 1317 1317 } 1318 1318 1319 + static void udp_skb_csum_unnecessary_set(struct sk_buff *skb) 1320 + { 1321 + /* We come here after udp_lib_checksum_complete() returned 0. 1322 + * This means that __skb_checksum_complete() might have 1323 + * set skb->csum_valid to 1. 1324 + * On 64bit platforms, we can set csum_unnecessary 1325 + * to true, but only if the skb is not shared. 1326 + */ 1327 + #if BITS_PER_LONG == 64 1328 + if (!skb_shared(skb)) 1329 + udp_skb_scratch(skb)->csum_unnecessary = true; 1330 + #endif 1331 + } 1332 + 1319 1333 static int udp_skb_truesize(struct sk_buff *skb) 1320 1334 { 1321 1335 return udp_skb_scratch(skb)->_tsize_state & ~UDP_SKB_IS_STATELESS; ··· 1564 1550 *total += skb->truesize; 1565 1551 kfree_skb(skb); 1566 1552 } else { 1567 - /* the csum related bits could be changed, refresh 1568 - * the scratch area 1569 - */ 1570 - udp_set_dev_scratch(skb); 1553 + udp_skb_csum_unnecessary_set(skb); 1571 1554 break; 1572 1555 } 1573 1556 } ··· 1588 1577 1589 1578 spin_lock_bh(&rcvq->lock); 1590 1579 skb = __first_packet_length(sk, rcvq, &total); 1591 - if (!skb && !skb_queue_empty(sk_queue)) { 1580 + if (!skb && !skb_queue_empty_lockless(sk_queue)) { 1592 1581 spin_lock(&sk_queue->lock); 1593 1582 skb_queue_splice_tail_init(sk_queue, rcvq); 1594 1583 spin_unlock(&sk_queue->lock); ··· 1661 1650 return skb; 1662 1651 } 1663 1652 1664 - if (skb_queue_empty(sk_queue)) { 1653 + if (skb_queue_empty_lockless(sk_queue)) { 1665 1654 spin_unlock_bh(&queue->lock); 1666 1655 goto busy_check; 1667 1656 } ··· 1687 1676 break; 1688 1677 1689 1678 sk_busy_loop(sk, flags & MSG_DONTWAIT); 1690 - } while (!skb_queue_empty(sk_queue)); 1679 + } while (!skb_queue_empty_lockless(sk_queue)); 1691 1680 1692 1681 /* sk_queue is empty, reader_queue may contain peeked packets */ 1693 1682 } while (timeo && ··· 2723 2712 __poll_t mask = datagram_poll(file, sock, wait); 2724 2713 struct sock *sk = sock->sk; 2725 2714 2726 - if (!skb_queue_empty(&udp_sk(sk)->reader_queue)) 2715 + if (!skb_queue_empty_lockless(&udp_sk(sk)->reader_queue)) 2727 2716 mask |= EPOLLIN | EPOLLRDNORM; 2728 2717 2729 2718 /* Check for false positives due to checksum errors */
+1
net/ipv6/addrconf_core.c
··· 7 7 #include <linux/export.h> 8 8 #include <net/ipv6.h> 9 9 #include <net/ipv6_stubs.h> 10 + #include <net/addrconf.h> 10 11 #include <net/ip.h> 11 12 12 13 /* if ipv6 module registers this function is used by xfrm to force all
+1 -1
net/ipv6/inet6_hashtables.c
··· 105 105 return -1; 106 106 107 107 score = 1; 108 - if (sk->sk_incoming_cpu == raw_smp_processor_id()) 108 + if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id()) 109 109 score++; 110 110 } 111 111 return score;
+2 -2
net/ipv6/ip6_gre.c
··· 980 980 dsfield = key->tos; 981 981 if (!(tun_info->key.tun_flags & TUNNEL_ERSPAN_OPT)) 982 982 goto tx_err; 983 - md = ip_tunnel_info_opts(tun_info); 984 - if (!md) 983 + if (tun_info->options_len < sizeof(*md)) 985 984 goto tx_err; 985 + md = ip_tunnel_info_opts(tun_info); 986 986 987 987 tun_id = tunnel_id_to_key32(key->tun_id); 988 988 if (md->version == 1) {
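Editor's note: in both GRE hunks (ipv4 above and ipv6 here) the NULL check on ip_tunnel_info_opts() was dead code, since that helper just returns a pointer into tun_info and can never be NULL. The precondition that actually matters is that enough option bytes were collected to hold struct erspan_metadata, hence the options_len comparison. An illustrative helper capturing the corrected check (erspan_md is not a real kernel function):

	static struct erspan_metadata *erspan_md(struct ip_tunnel_info *info)
	{
		if (info->options_len < sizeof(struct erspan_metadata))
			return NULL;	/* too short for ERSPAN options */
		return ip_tunnel_info_opts(info);
	}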
+1 -1
net/ipv6/udp.c
··· 135 135 return -1; 136 136 score++; 137 137 138 - if (sk->sk_incoming_cpu == raw_smp_processor_id()) 138 + if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id()) 139 139 score++; 140 140 141 141 return score;
-1
net/l2tp/l2tp_eth.c
··· 56 56 { 57 57 eth_hw_addr_random(dev); 58 58 eth_broadcast_addr(dev->broadcast); 59 - netdev_lockdep_set_classes(dev); 60 59 61 60 return 0; 62 61 }
+10 -2
net/netfilter/ipvs/ip_vs_app.c
··· 193 193 194 194 mutex_lock(&__ip_vs_app_mutex); 195 195 196 + /* increase the module use count */ 197 + if (!ip_vs_use_count_inc()) { 198 + err = -ENOENT; 199 + goto out_unlock; 200 + } 201 + 196 202 list_for_each_entry(a, &ipvs->app_list, a_list) { 197 203 if (!strcmp(app->name, a->name)) { 198 204 err = -EEXIST; 205 + /* decrease the module use count */ 206 + ip_vs_use_count_dec(); 199 207 goto out_unlock; 200 208 } 201 209 } 202 210 a = kmemdup(app, sizeof(*app), GFP_KERNEL); 203 211 if (!a) { 204 212 err = -ENOMEM; 213 + /* decrease the module use count */ 214 + ip_vs_use_count_dec(); 205 215 goto out_unlock; 206 216 } 207 217 INIT_LIST_HEAD(&a->incs_list); 208 218 list_add(&a->a_list, &ipvs->app_list); 209 - /* increase the module use count */ 210 - ip_vs_use_count_inc(); 211 219 212 220 out_unlock: 213 221 mutex_unlock(&__ip_vs_app_mutex);
+11 -18
net/netfilter/ipvs/ip_vs_ctl.c
··· 93 93 static void update_defense_level(struct netns_ipvs *ipvs) 94 94 { 95 95 struct sysinfo i; 96 - static int old_secure_tcp = 0; 97 96 int availmem; 98 97 int nomem; 99 98 int to_change = -1; ··· 173 174 spin_lock(&ipvs->securetcp_lock); 174 175 switch (ipvs->sysctl_secure_tcp) { 175 176 case 0: 176 - if (old_secure_tcp >= 2) 177 + if (ipvs->old_secure_tcp >= 2) 177 178 to_change = 0; 178 179 break; 179 180 case 1: 180 181 if (nomem) { 181 - if (old_secure_tcp < 2) 182 + if (ipvs->old_secure_tcp < 2) 182 183 to_change = 1; 183 184 ipvs->sysctl_secure_tcp = 2; 184 185 } else { 185 - if (old_secure_tcp >= 2) 186 + if (ipvs->old_secure_tcp >= 2) 186 187 to_change = 0; 187 188 } 188 189 break; 189 190 case 2: 190 191 if (nomem) { 191 - if (old_secure_tcp < 2) 192 + if (ipvs->old_secure_tcp < 2) 192 193 to_change = 1; 193 194 } else { 194 - if (old_secure_tcp >= 2) 195 + if (ipvs->old_secure_tcp >= 2) 195 196 to_change = 0; 196 197 ipvs->sysctl_secure_tcp = 1; 197 198 } 198 199 break; 199 200 case 3: 200 - if (old_secure_tcp < 2) 201 + if (ipvs->old_secure_tcp < 2) 201 202 to_change = 1; 202 203 break; 203 204 } 204 - old_secure_tcp = ipvs->sysctl_secure_tcp; 205 + ipvs->old_secure_tcp = ipvs->sysctl_secure_tcp; 205 206 if (to_change >= 0) 206 207 ip_vs_protocol_timeout_change(ipvs, 207 208 ipvs->sysctl_secure_tcp > 1); ··· 1274 1275 struct ip_vs_service *svc = NULL; 1275 1276 1276 1277 /* increase the module use count */ 1277 - ip_vs_use_count_inc(); 1278 + if (!ip_vs_use_count_inc()) 1279 + return -ENOPROTOOPT; 1278 1280 1279 1281 /* Lookup the scheduler by 'u->sched_name' */ 1280 1282 if (strcmp(u->sched_name, "none")) { ··· 2435 2435 if (copy_from_user(arg, user, len) != 0) 2436 2436 return -EFAULT; 2437 2437 2438 - /* increase the module use count */ 2439 - ip_vs_use_count_inc(); 2440 - 2441 2438 /* Handle daemons since they have another lock */ 2442 2439 if (cmd == IP_VS_SO_SET_STARTDAEMON || 2443 2440 cmd == IP_VS_SO_SET_STOPDAEMON) { ··· 2447 2450 ret = -EINVAL; 2448 2451 if (strscpy(cfg.mcast_ifn, dm->mcast_ifn, 2449 2452 sizeof(cfg.mcast_ifn)) <= 0) 2450 - goto out_dec; 2453 + return ret; 2451 2454 cfg.syncid = dm->syncid; 2452 2455 ret = start_sync_thread(ipvs, &cfg, dm->state); 2453 2456 } else { 2454 2457 ret = stop_sync_thread(ipvs, dm->state); 2455 2458 } 2456 - goto out_dec; 2459 + return ret; 2457 2460 } 2458 2461 2459 2462 mutex_lock(&__ip_vs_mutex); ··· 2548 2551 2549 2552 out_unlock: 2550 2553 mutex_unlock(&__ip_vs_mutex); 2551 - out_dec: 2552 - /* decrease the module use count */ 2553 - ip_vs_use_count_dec(); 2554 - 2555 2554 return ret; 2556 2555 } 2557 2556
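Editor's note: across the IPVS hunks (app, ctl, pe, sched, sync), ip_vs_use_count_inc(), a wrapper around try_module_get(), can fail once module unload has begun, so every caller now checks the return value and drops the reference on its own error paths rather than taking it unconditionally. Sketch of the calling convention, with a stand-in for the registration work:

	static int do_registration(void);	/* stand-in, not real ipvs code */

	static int ipvs_register_object(void)
	{
		int err;

		if (!ip_vs_use_count_inc())	/* try_module_get() failed */
			return -ENOENT;

		err = do_registration();
		if (err)
			ip_vs_use_count_dec();	/* undo on any failure */
		return err;
	}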
+2 -1
net/netfilter/ipvs/ip_vs_pe.c
··· 68 68 struct ip_vs_pe *tmp; 69 69 70 70 /* increase the module use count */ 71 - ip_vs_use_count_inc(); 71 + if (!ip_vs_use_count_inc()) 72 + return -ENOENT; 72 73 73 74 mutex_lock(&ip_vs_pe_mutex); 74 75 /* Make sure that the pe with this name doesn't exist
+2 -1
net/netfilter/ipvs/ip_vs_sched.c
··· 179 179 } 180 180 181 181 /* increase the module use count */ 182 - ip_vs_use_count_inc(); 182 + if (!ip_vs_use_count_inc()) 183 + return -ENOENT; 183 184 184 185 mutex_lock(&ip_vs_sched_mutex); 185 186
+10 -3
net/netfilter/ipvs/ip_vs_sync.c
··· 1762 1762 IP_VS_DBG(7, "Each ip_vs_sync_conn entry needs %zd bytes\n", 1763 1763 sizeof(struct ip_vs_sync_conn_v0)); 1764 1764 1765 + /* increase the module use count */ 1766 + if (!ip_vs_use_count_inc()) 1767 + return -ENOPROTOOPT; 1768 + 1765 1769 /* Do not hold one mutex and then to block on another */ 1766 1770 for (;;) { 1767 1771 rtnl_lock(); ··· 1896 1892 mutex_unlock(&ipvs->sync_mutex); 1897 1893 rtnl_unlock(); 1898 1894 1899 - /* increase the module use count */ 1900 - ip_vs_use_count_inc(); 1901 - 1902 1895 return 0; 1903 1896 1904 1897 out: ··· 1925 1924 } 1926 1925 kfree(ti); 1927 1926 } 1927 + 1928 + /* decrease the module use count */ 1929 + ip_vs_use_count_dec(); 1928 1930 return result; 1929 1931 1930 1932 out_early: 1931 1933 mutex_unlock(&ipvs->sync_mutex); 1932 1934 rtnl_unlock(); 1935 + 1936 + /* decrease the module use count */ 1937 + ip_vs_use_count_dec(); 1933 1938 return result; 1934 1939 } 1935 1940
+2 -1
net/netfilter/nf_flow_table_core.c
··· 202 202 { 203 203 int err; 204 204 205 + flow->timeout = (u32)jiffies + NF_FLOW_TIMEOUT; 206 + 205 207 err = rhashtable_insert_fast(&flow_table->rhashtable, 206 208 &flow->tuplehash[0].node, 207 209 nf_flow_offload_rhash_params); ··· 220 218 return err; 221 219 } 222 220 223 - flow->timeout = (u32)jiffies + NF_FLOW_TIMEOUT; 224 221 return 0; 225 222 } 226 223 EXPORT_SYMBOL_GPL(flow_offload_add);
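Editor's note: moving the timeout assignment above rhashtable_insert_fast() matters because the entry becomes visible to lockless lookups the instant the insert succeeds, so every field a reader may consult must be valid before publication. A reduced sketch of the initialize-then-publish discipline (the real flow_offload_add() inserts both tuple directions):

	static int flow_publish(struct nf_flowtable *ft, struct flow_offload *flow)
	{
		/* init everything readers use, then publish */
		flow->timeout = (u32)jiffies + NF_FLOW_TIMEOUT;

		return rhashtable_insert_fast(&ft->rhashtable,
					      &flow->tuplehash[0].node,
					      nf_flow_offload_rhash_params);
	}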
+1 -1
net/netfilter/nf_tables_offload.c
··· 347 347 348 348 policy = nft_trans_chain_policy(trans); 349 349 err = nft_flow_offload_chain(trans->ctx.chain, &policy, 350 - FLOW_BLOCK_BIND); 350 + FLOW_BLOCK_UNBIND); 351 351 break; 352 352 case NFT_MSG_NEWRULE: 353 353 if (!(trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD))
+38
net/netfilter/nft_payload.c
··· 161 161 162 162 switch (priv->offset) { 163 163 case offsetof(struct ethhdr, h_source): 164 + if (priv->len != ETH_ALEN) 165 + return -EOPNOTSUPP; 166 + 164 167 NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_ETH_ADDRS, eth_addrs, 165 168 src, ETH_ALEN, reg); 166 169 break; 167 170 case offsetof(struct ethhdr, h_dest): 171 + if (priv->len != ETH_ALEN) 172 + return -EOPNOTSUPP; 173 + 168 174 NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_ETH_ADDRS, eth_addrs, 169 175 dst, ETH_ALEN, reg); 170 176 break; 177 + default: 178 + return -EOPNOTSUPP; 171 179 } 172 180 173 181 return 0; ··· 189 181 190 182 switch (priv->offset) { 191 183 case offsetof(struct iphdr, saddr): 184 + if (priv->len != sizeof(struct in_addr)) 185 + return -EOPNOTSUPP; 186 + 192 187 NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_IPV4_ADDRS, ipv4, src, 193 188 sizeof(struct in_addr), reg); 194 189 break; 195 190 case offsetof(struct iphdr, daddr): 191 + if (priv->len != sizeof(struct in_addr)) 192 + return -EOPNOTSUPP; 193 + 196 194 NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_IPV4_ADDRS, ipv4, dst, 197 195 sizeof(struct in_addr), reg); 198 196 break; 199 197 case offsetof(struct iphdr, protocol): 198 + if (priv->len != sizeof(__u8)) 199 + return -EOPNOTSUPP; 200 + 200 201 NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_BASIC, basic, ip_proto, 201 202 sizeof(__u8), reg); 202 203 nft_offload_set_dependency(ctx, NFT_OFFLOAD_DEP_TRANSPORT); ··· 225 208 226 209 switch (priv->offset) { 227 210 case offsetof(struct ipv6hdr, saddr): 211 + if (priv->len != sizeof(struct in6_addr)) 212 + return -EOPNOTSUPP; 213 + 228 214 NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_IPV6_ADDRS, ipv6, src, 229 215 sizeof(struct in6_addr), reg); 230 216 break; 231 217 case offsetof(struct ipv6hdr, daddr): 218 + if (priv->len != sizeof(struct in6_addr)) 219 + return -EOPNOTSUPP; 220 + 232 221 NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_IPV6_ADDRS, ipv6, dst, 233 222 sizeof(struct in6_addr), reg); 234 223 break; 235 224 case offsetof(struct ipv6hdr, nexthdr): 225 + if (priv->len != sizeof(__u8)) 226 + return -EOPNOTSUPP; 227 + 236 228 NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_BASIC, basic, ip_proto, 237 229 sizeof(__u8), reg); 238 230 nft_offload_set_dependency(ctx, NFT_OFFLOAD_DEP_TRANSPORT); ··· 281 255 282 256 switch (priv->offset) { 283 257 case offsetof(struct tcphdr, source): 258 + if (priv->len != sizeof(__be16)) 259 + return -EOPNOTSUPP; 260 + 284 261 NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_PORTS, tp, src, 285 262 sizeof(__be16), reg); 286 263 break; 287 264 case offsetof(struct tcphdr, dest): 265 + if (priv->len != sizeof(__be16)) 266 + return -EOPNOTSUPP; 267 + 288 268 NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_PORTS, tp, dst, 289 269 sizeof(__be16), reg); 290 270 break; ··· 309 277 310 278 switch (priv->offset) { 311 279 case offsetof(struct udphdr, source): 280 + if (priv->len != sizeof(__be16)) 281 + return -EOPNOTSUPP; 282 + 312 283 NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_PORTS, tp, src, 313 284 sizeof(__be16), reg); 314 285 break; 315 286 case offsetof(struct udphdr, dest): 287 + if (priv->len != sizeof(__be16)) 288 + return -EOPNOTSUPP; 289 + 316 290 NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_PORTS, tp, dst, 317 291 sizeof(__be16), reg); 318 292 break;
-23
net/netrom/af_netrom.c
··· 64 64 static const struct proto_ops nr_proto_ops; 65 65 66 66 /* 67 - * NETROM network devices are virtual network devices encapsulating NETROM 68 - * frames into AX.25 which will be sent through an AX.25 device, so form a 69 - * special "super class" of normal net devices; split their locks off into a 70 - * separate class since they always nest. 71 - */ 72 - static struct lock_class_key nr_netdev_xmit_lock_key; 73 - static struct lock_class_key nr_netdev_addr_lock_key; 74 - 75 - static void nr_set_lockdep_one(struct net_device *dev, 76 - struct netdev_queue *txq, 77 - void *_unused) 78 - { 79 - lockdep_set_class(&txq->_xmit_lock, &nr_netdev_xmit_lock_key); 80 - } 81 - 82 - static void nr_set_lockdep_key(struct net_device *dev) 83 - { 84 - lockdep_set_class(&dev->addr_list_lock, &nr_netdev_addr_lock_key); 85 - netdev_for_each_tx_queue(dev, nr_set_lockdep_one, NULL); 86 - } 87 - 88 - /* 89 67 * Socket removal during an interrupt is now safe. 90 68 */ 91 69 static void nr_remove_socket(struct sock *sk) ··· 1392 1414 free_netdev(dev); 1393 1415 goto fail; 1394 1416 } 1395 - nr_set_lockdep_key(dev); 1396 1417 dev_nr[i] = dev; 1397 1418 } 1398 1419
+2 -2
net/nfc/llcp_sock.c
··· 554 554 if (sk->sk_state == LLCP_LISTEN) 555 555 return llcp_accept_poll(sk); 556 556 557 - if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue)) 557 + if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue)) 558 558 mask |= EPOLLERR | 559 559 (sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0); 560 560 561 - if (!skb_queue_empty(&sk->sk_receive_queue)) 561 + if (!skb_queue_empty_lockless(&sk->sk_receive_queue)) 562 562 mask |= EPOLLIN | EPOLLRDNORM; 563 563 564 564 if (sk->sk_state == LLCP_CLOSED)
+11 -9
net/openvswitch/datapath.c
··· 1881 1881 /* Called with ovs_mutex or RCU read lock. */ 1882 1882 static int ovs_vport_cmd_fill_info(struct vport *vport, struct sk_buff *skb, 1883 1883 struct net *net, u32 portid, u32 seq, 1884 - u32 flags, u8 cmd) 1884 + u32 flags, u8 cmd, gfp_t gfp) 1885 1885 { 1886 1886 struct ovs_header *ovs_header; 1887 1887 struct ovs_vport_stats vport_stats; ··· 1902 1902 goto nla_put_failure; 1903 1903 1904 1904 if (!net_eq(net, dev_net(vport->dev))) { 1905 - int id = peernet2id_alloc(net, dev_net(vport->dev)); 1905 + int id = peernet2id_alloc(net, dev_net(vport->dev), gfp); 1906 1906 1907 1907 if (nla_put_s32(skb, OVS_VPORT_ATTR_NETNSID, id)) 1908 1908 goto nla_put_failure; ··· 1943 1943 struct sk_buff *skb; 1944 1944 int retval; 1945 1945 1946 - skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC); 1946 + skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); 1947 1947 if (!skb) 1948 1948 return ERR_PTR(-ENOMEM); 1949 1949 1950 - retval = ovs_vport_cmd_fill_info(vport, skb, net, portid, seq, 0, cmd); 1950 + retval = ovs_vport_cmd_fill_info(vport, skb, net, portid, seq, 0, cmd, 1951 + GFP_KERNEL); 1951 1952 BUG_ON(retval < 0); 1952 1953 1953 1954 return skb; ··· 2090 2089 2091 2090 err = ovs_vport_cmd_fill_info(vport, reply, genl_info_net(info), 2092 2091 info->snd_portid, info->snd_seq, 0, 2093 - OVS_VPORT_CMD_NEW); 2092 + OVS_VPORT_CMD_NEW, GFP_KERNEL); 2094 2093 2095 2094 new_headroom = netdev_get_fwd_headroom(vport->dev); 2096 2095 ··· 2151 2150 2152 2151 err = ovs_vport_cmd_fill_info(vport, reply, genl_info_net(info), 2153 2152 info->snd_portid, info->snd_seq, 0, 2154 - OVS_VPORT_CMD_SET); 2153 + OVS_VPORT_CMD_SET, GFP_KERNEL); 2155 2154 BUG_ON(err < 0); 2156 2155 2157 2156 ovs_unlock(); ··· 2191 2190 2192 2191 err = ovs_vport_cmd_fill_info(vport, reply, genl_info_net(info), 2193 2192 info->snd_portid, info->snd_seq, 0, 2194 - OVS_VPORT_CMD_DEL); 2193 + OVS_VPORT_CMD_DEL, GFP_KERNEL); 2195 2194 BUG_ON(err < 0); 2196 2195 2197 2196 /* the vport deletion may trigger dp headroom update */ ··· 2238 2237 goto exit_unlock_free; 2239 2238 err = ovs_vport_cmd_fill_info(vport, reply, genl_info_net(info), 2240 2239 info->snd_portid, info->snd_seq, 0, 2241 - OVS_VPORT_CMD_GET); 2240 + OVS_VPORT_CMD_GET, GFP_ATOMIC); 2242 2241 BUG_ON(err < 0); 2243 2242 rcu_read_unlock(); 2244 2243 ··· 2274 2273 NETLINK_CB(cb->skb).portid, 2275 2274 cb->nlh->nlmsg_seq, 2276 2275 NLM_F_MULTI, 2277 - OVS_VPORT_CMD_GET) < 0) 2276 + OVS_VPORT_CMD_GET, 2277 + GFP_ATOMIC) < 0) 2278 2278 goto out; 2279 2279 2280 2280 j++;
+4 -7
net/openvswitch/vport-internal_dev.c
··· 137 137 netdev->priv_flags |= IFF_LIVE_ADDR_CHANGE | IFF_OPENVSWITCH | 138 138 IFF_NO_QUEUE; 139 139 netdev->needs_free_netdev = true; 140 - netdev->priv_destructor = internal_dev_destructor; 140 + netdev->priv_destructor = NULL; 141 141 netdev->ethtool_ops = &internal_dev_ethtool_ops; 142 142 netdev->rtnl_link_ops = &internal_dev_link_ops; 143 143 ··· 159 159 struct internal_dev *internal_dev; 160 160 struct net_device *dev; 161 161 int err; 162 - bool free_vport = true; 163 162 164 163 vport = ovs_vport_alloc(0, &ovs_internal_vport_ops, parms); 165 164 if (IS_ERR(vport)) { ··· 189 190 190 191 rtnl_lock(); 191 192 err = register_netdevice(vport->dev); 192 - if (err) { 193 - free_vport = false; 193 + if (err) 194 194 goto error_unlock; 195 - } 195 + vport->dev->priv_destructor = internal_dev_destructor; 196 196 197 197 dev_set_promiscuity(vport->dev, 1); 198 198 rtnl_unlock(); ··· 205 207 error_free_netdev: 206 208 free_netdev(dev); 207 209 error_free_vport: 208 - if (free_vport) 209 - ovs_vport_free(vport); 210 + ovs_vport_free(vport); 210 211 error: 211 212 return ERR_PTR(err); 212 213 }
+2 -2
net/phonet/socket.c
··· 338 338 339 339 if (sk->sk_state == TCP_CLOSE) 340 340 return EPOLLERR; 341 - if (!skb_queue_empty(&sk->sk_receive_queue)) 341 + if (!skb_queue_empty_lockless(&sk->sk_receive_queue)) 342 342 mask |= EPOLLIN | EPOLLRDNORM; 343 - if (!skb_queue_empty(&pn->ctrlreq_queue)) 343 + if (!skb_queue_empty_lockless(&pn->ctrlreq_queue)) 344 344 mask |= EPOLLPRI; 345 345 if (!mask && sk->sk_state == TCP_CLOSE_WAIT) 346 346 return EPOLLHUP;
-23
net/rose/af_rose.c
··· 65 65 ax25_address rose_callsign; 66 66 67 67 /* 68 - * ROSE network devices are virtual network devices encapsulating ROSE 69 - * frames into AX.25 which will be sent through an AX.25 device, so form a 70 - * special "super class" of normal net devices; split their locks off into a 71 - * separate class since they always nest. 72 - */ 73 - static struct lock_class_key rose_netdev_xmit_lock_key; 74 - static struct lock_class_key rose_netdev_addr_lock_key; 75 - 76 - static void rose_set_lockdep_one(struct net_device *dev, 77 - struct netdev_queue *txq, 78 - void *_unused) 79 - { 80 - lockdep_set_class(&txq->_xmit_lock, &rose_netdev_xmit_lock_key); 81 - } 82 - 83 - static void rose_set_lockdep_key(struct net_device *dev) 84 - { 85 - lockdep_set_class(&dev->addr_list_lock, &rose_netdev_addr_lock_key); 86 - netdev_for_each_tx_queue(dev, rose_set_lockdep_one, NULL); 87 - } 88 - 89 - /* 90 68 * Convert a ROSE address into text. 91 69 */ 92 70 char *rose2asc(char *buf, const rose_address *addr) ··· 1511 1533 free_netdev(dev); 1512 1534 goto fail; 1513 1535 } 1514 - rose_set_lockdep_key(dev); 1515 1536 dev_rose[i] = dev; 1516 1537 } 1517 1538
+1
net/rxrpc/ar-internal.h
··· 601 601 int debug_id; /* debug ID for printks */ 602 602 unsigned short rx_pkt_offset; /* Current recvmsg packet offset */ 603 603 unsigned short rx_pkt_len; /* Current recvmsg packet len */ 604 + bool rx_pkt_last; /* Current recvmsg packet is last */ 604 605 605 606 /* Rx/Tx circular buffer, depending on phase. 606 607 *
+13 -5
net/rxrpc/recvmsg.c
··· 267 267 */ 268 268 static int rxrpc_locate_data(struct rxrpc_call *call, struct sk_buff *skb, 269 269 u8 *_annotation, 270 - unsigned int *_offset, unsigned int *_len) 270 + unsigned int *_offset, unsigned int *_len, 271 + bool *_last) 271 272 { 272 273 struct rxrpc_skb_priv *sp = rxrpc_skb(skb); 273 274 unsigned int offset = sizeof(struct rxrpc_wire_header); 274 275 unsigned int len; 276 + bool last = false; 275 277 int ret; 276 278 u8 annotation = *_annotation; 277 279 u8 subpacket = annotation & RXRPC_RX_ANNO_SUBPACKET; ··· 283 281 len = skb->len - offset; 284 282 if (subpacket < sp->nr_subpackets - 1) 285 283 len = RXRPC_JUMBO_DATALEN; 284 + else if (sp->rx_flags & RXRPC_SKB_INCL_LAST) 285 + last = true; 286 286 287 287 if (!(annotation & RXRPC_RX_ANNO_VERIFIED)) { 288 288 ret = rxrpc_verify_packet(call, skb, annotation, offset, len); ··· 295 291 296 292 *_offset = offset; 297 293 *_len = len; 294 + *_last = last; 298 295 call->security->locate_data(call, skb, _offset, _len); 299 296 return 0; 300 297 } ··· 314 309 rxrpc_serial_t serial; 315 310 rxrpc_seq_t hard_ack, top, seq; 316 311 size_t remain; 317 - bool last; 312 + bool rx_pkt_last; 318 313 unsigned int rx_pkt_offset, rx_pkt_len; 319 314 int ix, copy, ret = -EAGAIN, ret2; 320 315 ··· 324 319 325 320 rx_pkt_offset = call->rx_pkt_offset; 326 321 rx_pkt_len = call->rx_pkt_len; 322 + rx_pkt_last = call->rx_pkt_last; 327 323 328 324 if (call->state >= RXRPC_CALL_SERVER_ACK_REQUEST) { 329 325 seq = call->rx_hard_ack; ··· 335 329 /* Barriers against rxrpc_input_data(). */ 336 330 hard_ack = call->rx_hard_ack; 337 331 seq = hard_ack + 1; 332 + 338 333 while (top = smp_load_acquire(&call->rx_top), 339 334 before_eq(seq, top) 340 335 ) { ··· 363 356 if (rx_pkt_offset == 0) { 364 357 ret2 = rxrpc_locate_data(call, skb, 365 358 &call->rxtx_annotations[ix], 366 - &rx_pkt_offset, &rx_pkt_len); 359 + &rx_pkt_offset, &rx_pkt_len, 360 + &rx_pkt_last); 367 361 trace_rxrpc_recvmsg(call, rxrpc_recvmsg_next, seq, 368 362 rx_pkt_offset, rx_pkt_len, ret2); 369 363 if (ret2 < 0) { ··· 404 396 } 405 397 406 398 /* The whole packet has been transferred. */ 407 - last = sp->hdr.flags & RXRPC_LAST_PACKET; 408 399 if (!(flags & MSG_PEEK)) 409 400 rxrpc_rotate_rx_window(call); 410 401 rx_pkt_offset = 0; 411 402 rx_pkt_len = 0; 412 403 413 - if (last) { 404 + if (rx_pkt_last) { 414 405 ASSERTCMP(seq, ==, READ_ONCE(call->rx_top)); 415 406 ret = 1; 416 407 goto out; ··· 422 415 if (!(flags & MSG_PEEK)) { 423 416 call->rx_pkt_offset = rx_pkt_offset; 424 417 call->rx_pkt_len = rx_pkt_len; 418 + call->rx_pkt_last = rx_pkt_last; 425 419 } 426 420 done: 427 421 trace_rxrpc_recvmsg(call, rxrpc_recvmsg_data_return, seq,
+6 -2
net/sched/cls_bpf.c
··· 162 162 cls_bpf.name = obj->bpf_name; 163 163 cls_bpf.exts_integrated = obj->exts_integrated; 164 164 165 - if (oldprog) 165 + if (oldprog && prog) 166 166 err = tc_setup_cb_replace(block, tp, TC_SETUP_CLSBPF, &cls_bpf, 167 167 skip_sw, &oldprog->gen_flags, 168 168 &oldprog->in_hw_count, 169 169 &prog->gen_flags, &prog->in_hw_count, 170 170 true); 171 - else 171 + else if (prog) 172 172 err = tc_setup_cb_add(block, tp, TC_SETUP_CLSBPF, &cls_bpf, 173 173 skip_sw, &prog->gen_flags, 174 174 &prog->in_hw_count, true); 175 + else 176 + err = tc_setup_cb_destroy(block, tp, TC_SETUP_CLSBPF, &cls_bpf, 177 + skip_sw, &oldprog->gen_flags, 178 + &oldprog->in_hw_count, true); 175 179 176 180 if (prog && err) { 177 181 cls_bpf_offload_cmd(tp, oldprog, prog, extack);
+8 -11
net/sched/sch_generic.c
··· 799 799 };
800 800 EXPORT_SYMBOL(pfifo_fast_ops);
801 801 
802 - static struct lock_class_key qdisc_tx_busylock;
803 - static struct lock_class_key qdisc_running_key;
804 - 
805 802 struct Qdisc *qdisc_alloc(struct netdev_queue *dev_queue,
806 803 const struct Qdisc_ops *ops,
807 804 struct netlink_ext_ack *extack)
··· 851 854 }
852 855 
853 856 spin_lock_init(&sch->busylock);
854 - lockdep_set_class(&sch->busylock,
855 - dev->qdisc_tx_busylock ?: &qdisc_tx_busylock);
856 - 
857 857 /* seqlock has the same scope of busylock, for NOLOCK qdisc */
858 858 spin_lock_init(&sch->seqlock);
859 - lockdep_set_class(&sch->busylock,
860 - dev->qdisc_tx_busylock ?: &qdisc_tx_busylock);
861 - 
862 859 seqcount_init(&sch->running);
863 - lockdep_set_class(&sch->running,
864 - dev->qdisc_running_key ?: &qdisc_running_key);
865 860 
866 861 sch->ops = ops;
867 862 sch->flags = ops->static_flags;
··· 863 874 sch->empty = true;
864 875 dev_hold(dev);
865 876 refcount_set(&sch->refcnt, 1);
877 + 
878 + if (sch != &noop_qdisc) {
879 + lockdep_set_class(&sch->busylock, &dev->qdisc_tx_busylock_key);
880 + lockdep_set_class(&sch->seqlock, &dev->qdisc_tx_busylock_key);
881 + lockdep_set_class(&sch->running, &dev->qdisc_running_key);
882 + }
866 883 
867 884 return sch;
868 885 errout1:
··· 1038 1043 
1039 1044 if (dev->priv_flags & IFF_NO_QUEUE)
1040 1045 ops = &noqueue_qdisc_ops;
1046 + else if (dev->type == ARPHRD_CAN)
1047 + ops = &pfifo_fast_ops;
1041 1048 
1042 1049 qdisc = qdisc_create_dflt(dev_queue, ops, TC_H_ROOT, NULL);
1043 1050 if (!qdisc) {
+4 -4
net/sched/sch_hhf.c
··· 5 5 * Copyright (C) 2013 Nandita Dukkipati <nanditad@google.com> 6 6 */ 7 7 8 - #include <linux/jhash.h> 9 8 #include <linux/jiffies.h> 10 9 #include <linux/module.h> 11 10 #include <linux/skbuff.h> 12 11 #include <linux/vmalloc.h> 12 + #include <linux/siphash.h> 13 13 #include <net/pkt_sched.h> 14 14 #include <net/sock.h> 15 15 ··· 126 126 127 127 struct hhf_sched_data { 128 128 struct wdrr_bucket buckets[WDRR_BUCKET_CNT]; 129 - u32 perturbation; /* hash perturbation */ 129 + siphash_key_t perturbation; /* hash perturbation */ 130 130 u32 quantum; /* psched_mtu(qdisc_dev(sch)); */ 131 131 u32 drop_overlimit; /* number of times max qdisc packet 132 132 * limit was hit ··· 264 264 } 265 265 266 266 /* Get hashed flow-id of the skb. */ 267 - hash = skb_get_hash_perturb(skb, q->perturbation); 267 + hash = skb_get_hash_perturb(skb, &q->perturbation); 268 268 269 269 /* Check if this packet belongs to an already established HH flow. */ 270 270 flow_pos = hash & HHF_BIT_MASK; ··· 582 582 583 583 sch->limit = 1000; 584 584 q->quantum = psched_mtu(qdisc_dev(sch)); 585 - q->perturbation = prandom_u32(); 585 + get_random_bytes(&q->perturbation, sizeof(q->perturbation)); 586 586 INIT_LIST_HEAD(&q->new_buckets); 587 587 INIT_LIST_HEAD(&q->old_buckets); 588 588
+7 -6
net/sched/sch_sfb.c
··· 18 18 #include <linux/errno.h> 19 19 #include <linux/skbuff.h> 20 20 #include <linux/random.h> 21 - #include <linux/jhash.h> 21 + #include <linux/siphash.h> 22 22 #include <net/ip.h> 23 23 #include <net/pkt_sched.h> 24 24 #include <net/pkt_cls.h> ··· 45 45 * (Section 4.4 of SFB reference : moving hash functions) 46 46 */ 47 47 struct sfb_bins { 48 - u32 perturbation; /* jhash perturbation */ 48 + siphash_key_t perturbation; /* siphash key */ 49 49 struct sfb_bucket bins[SFB_LEVELS][SFB_NUMBUCKETS]; 50 50 }; 51 51 ··· 217 217 218 218 static void sfb_init_perturbation(u32 slot, struct sfb_sched_data *q) 219 219 { 220 - q->bins[slot].perturbation = prandom_u32(); 220 + get_random_bytes(&q->bins[slot].perturbation, 221 + sizeof(q->bins[slot].perturbation)); 221 222 } 222 223 223 224 static void sfb_swap_slot(struct sfb_sched_data *q) ··· 315 314 /* If using external classifiers, get result and record it. */ 316 315 if (!sfb_classify(skb, fl, &ret, &salt)) 317 316 goto other_drop; 318 - sfbhash = jhash_1word(salt, q->bins[slot].perturbation); 317 + sfbhash = siphash_1u32(salt, &q->bins[slot].perturbation); 319 318 } else { 320 - sfbhash = skb_get_hash_perturb(skb, q->bins[slot].perturbation); 319 + sfbhash = skb_get_hash_perturb(skb, &q->bins[slot].perturbation); 321 320 } 322 321 323 322 ··· 353 352 /* Inelastic flow */ 354 353 if (q->double_buffering) { 355 354 sfbhash = skb_get_hash_perturb(skb, 356 - q->bins[slot].perturbation); 355 + &q->bins[slot].perturbation); 357 356 if (!sfbhash) 358 357 sfbhash = 1; 359 358 sfb_skb_cb(skb)->hashes[slot] = sfbhash;
+8 -6
net/sched/sch_sfq.c
··· 14 14 #include <linux/errno.h> 15 15 #include <linux/init.h> 16 16 #include <linux/skbuff.h> 17 - #include <linux/jhash.h> 17 + #include <linux/siphash.h> 18 18 #include <linux/slab.h> 19 19 #include <linux/vmalloc.h> 20 20 #include <net/netlink.h> ··· 117 117 u8 headdrop; 118 118 u8 maxdepth; /* limit of packets per flow */ 119 119 120 - u32 perturbation; 120 + siphash_key_t perturbation; 121 121 u8 cur_depth; /* depth of longest slot */ 122 122 u8 flags; 123 123 unsigned short scaled_quantum; /* SFQ_ALLOT_SIZE(quantum) */ ··· 157 157 static unsigned int sfq_hash(const struct sfq_sched_data *q, 158 158 const struct sk_buff *skb) 159 159 { 160 - return skb_get_hash_perturb(skb, q->perturbation) & (q->divisor - 1); 160 + return skb_get_hash_perturb(skb, &q->perturbation) & (q->divisor - 1); 161 161 } 162 162 163 163 static unsigned int sfq_classify(struct sk_buff *skb, struct Qdisc *sch, ··· 607 607 struct sfq_sched_data *q = from_timer(q, t, perturb_timer); 608 608 struct Qdisc *sch = q->sch; 609 609 spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch)); 610 + siphash_key_t nkey; 610 611 612 + get_random_bytes(&nkey, sizeof(nkey)); 611 613 spin_lock(root_lock); 612 - q->perturbation = prandom_u32(); 614 + q->perturbation = nkey; 613 615 if (!q->filter_list && q->tail) 614 616 sfq_rehash(sch); 615 617 spin_unlock(root_lock); ··· 690 688 del_timer(&q->perturb_timer); 691 689 if (q->perturb_period) { 692 690 mod_timer(&q->perturb_timer, jiffies + q->perturb_period); 693 - q->perturbation = prandom_u32(); 691 + get_random_bytes(&q->perturbation, sizeof(q->perturbation)); 694 692 } 695 693 sch_tree_unlock(sch); 696 694 kfree(p); ··· 747 745 q->quantum = psched_mtu(qdisc_dev(sch)); 748 746 q->scaled_quantum = SFQ_ALLOT_SIZE(q->quantum); 749 747 q->perturb_period = 0; 750 - q->perturbation = prandom_u32(); 748 + get_random_bytes(&q->perturbation, sizeof(q->perturbation)); 751 749 752 750 if (opt) { 753 751 int err = sfq_change(sch, opt);
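Editor's note: sch_hhf, sch_sfb and sch_sfq all replace a 32-bit jhash perturbation with a 128-bit siphash_key_t. A u32 seed can be brute-forced by observing flow placement, while SipHash carries enough keyed state to make that infeasible; skb_get_hash_perturb() changes accordingly to take a key pointer. The common shape of the change, as a sketch (the key lives per qdisc in the real code):

	static siphash_key_t perturbation;

	static void perturbation_init(void)
	{
		get_random_bytes(&perturbation, sizeof(perturbation));
	}

	static u32 flow_hash(u32 salt)
	{
		/* siphash_1u32() returns u64; the qdiscs keep the low
		 * 32 bits, as the conversions above do.
		 */
		return siphash_1u32(salt, &perturbation);
	}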
+1 -1
net/sched/sch_taprio.c
··· 1152 1152 * offload state (PENDING, ACTIVE, INACTIVE) so it can be visible in dump(). 1153 1153 * This is left as TODO. 1154 1154 */ 1155 - void taprio_offload_config_changed(struct taprio_sched *q) 1155 + static void taprio_offload_config_changed(struct taprio_sched *q) 1156 1156 { 1157 1157 struct sched_gate_list *oper, *admin; 1158 1158
+4 -4
net/sctp/socket.c
··· 8476 8476 mask = 0; 8477 8477 8478 8478 /* Is there any exceptional events? */ 8479 - if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue)) 8479 + if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue)) 8480 8480 mask |= EPOLLERR | 8481 8481 (sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0); 8482 8482 if (sk->sk_shutdown & RCV_SHUTDOWN) ··· 8485 8485 mask |= EPOLLHUP; 8486 8486 8487 8487 /* Is it readable? Reconsider this code with TCP-style support. */ 8488 - if (!skb_queue_empty(&sk->sk_receive_queue)) 8488 + if (!skb_queue_empty_lockless(&sk->sk_receive_queue)) 8489 8489 mask |= EPOLLIN | EPOLLRDNORM; 8490 8490 8491 8491 /* The association is either gone or not ready. */ ··· 8871 8871 if (sk_can_busy_loop(sk)) { 8872 8872 sk_busy_loop(sk, noblock); 8873 8873 8874 - if (!skb_queue_empty(&sk->sk_receive_queue)) 8874 + if (!skb_queue_empty_lockless(&sk->sk_receive_queue)) 8875 8875 continue; 8876 8876 } 8877 8877 ··· 9306 9306 newinet->inet_rcv_saddr = inet->inet_rcv_saddr; 9307 9307 newinet->inet_dport = htons(asoc->peer.port); 9308 9308 newinet->pmtudisc = inet->pmtudisc; 9309 - newinet->inet_id = asoc->next_tsn ^ jiffies; 9309 + newinet->inet_id = prandom_u32(); 9310 9310 9311 9311 newinet->uc_ttl = inet->uc_ttl; 9312 9312 newinet->mc_loop = 1;
+10 -3
net/smc/af_smc.c
··· 123 123 }; 124 124 EXPORT_SYMBOL_GPL(smc_proto6); 125 125 126 + static void smc_restore_fallback_changes(struct smc_sock *smc) 127 + { 128 + smc->clcsock->file->private_data = smc->sk.sk_socket; 129 + smc->clcsock->file = NULL; 130 + } 131 + 126 132 static int __smc_release(struct smc_sock *smc) 127 133 { 128 134 struct sock *sk = &smc->sk; ··· 147 141 } 148 142 sk->sk_state = SMC_CLOSED; 149 143 sk->sk_state_change(sk); 144 + smc_restore_fallback_changes(smc); 150 145 } 151 146 152 147 sk->sk_prot->unhash(sk); ··· 707 700 int smc_type; 708 701 int rc = 0; 709 702 710 - sock_hold(&smc->sk); /* sock put in passive closing */ 711 - 712 703 if (smc->use_fallback) 713 704 return smc_connect_fallback(smc, smc->fallback_rsn); 714 705 ··· 851 846 rc = kernel_connect(smc->clcsock, addr, alen, flags); 852 847 if (rc && rc != -EINPROGRESS) 853 848 goto out; 849 + 850 + sock_hold(&smc->sk); /* sock put in passive closing */ 854 851 if (flags & O_NONBLOCK) { 855 852 if (schedule_work(&smc->connect_work)) 856 853 smc->connect_nonblock = 1; ··· 1298 1291 /* check if RDMA is available */ 1299 1292 if (!ism_supported) { /* SMC_TYPE_R or SMC_TYPE_B */ 1300 1293 /* prepare RDMA check */ 1301 - memset(&ini, 0, sizeof(ini)); 1302 1294 ini.is_smcd = false; 1295 + ini.ism_dev = NULL; 1303 1296 ini.ib_lcl = &pclc->lcl; 1304 1297 rc = smc_find_rdma_device(new_smc, &ini); 1305 1298 if (rc) {
+1 -1
net/smc/smc_core.c
··· 561 561 } 562 562 563 563 rtnl_lock(); 564 - nest_lvl = dev_get_nest_level(ndev); 564 + nest_lvl = ndev->lower_level; 565 565 for (i = 0; i < nest_lvl; i++) { 566 566 struct list_head *lower = &ndev->adj_list.lower; 567 567
+1 -1
net/smc/smc_pnet.c
··· 718 718 int i, nest_lvl; 719 719 720 720 rtnl_lock(); 721 - nest_lvl = dev_get_nest_level(ndev); 721 + nest_lvl = ndev->lower_level; 722 722 for (i = 0; i < nest_lvl; i++) { 723 723 struct list_head *lower = &ndev->adj_list.lower; 724 724
+4 -3
net/sunrpc/backchannel_rqst.c
··· 220 220 goto out; 221 221 222 222 spin_lock_bh(&xprt->bc_pa_lock); 223 - xprt->bc_alloc_max -= max_reqs; 223 + xprt->bc_alloc_max -= min(max_reqs, xprt->bc_alloc_max); 224 224 list_for_each_entry_safe(req, tmp, &xprt->bc_pa_list, rq_bc_pa_list) { 225 225 dprintk("RPC: req=%p\n", req); 226 226 list_del(&req->rq_bc_pa_list); ··· 307 307 */ 308 308 dprintk("RPC: Last session removed req=%p\n", req); 309 309 xprt_free_allocation(req); 310 - return; 311 310 } 311 + xprt_put(xprt); 312 312 } 313 313 314 314 /* ··· 339 339 spin_unlock(&xprt->bc_pa_lock); 340 340 if (new) { 341 341 if (req != new) 342 - xprt_free_bc_rqst(new); 342 + xprt_free_allocation(new); 343 343 break; 344 344 } else if (req) 345 345 break; ··· 368 368 set_bit(RPC_BC_PA_IN_USE, &req->rq_bc_pa_state); 369 369 370 370 dprintk("RPC: add callback request to list\n"); 371 + xprt_get(xprt); 371 372 spin_lock(&bc_serv->sv_cb_lock); 372 373 list_add(&req->rq_bc_list, &bc_serv->sv_cb_list); 373 374 wake_up(&bc_serv->sv_cb_waitq);
+5
net/sunrpc/xprt.c
··· 1943 1943 rpc_destroy_wait_queue(&xprt->backlog); 1944 1944 kfree(xprt->servername); 1945 1945 /* 1946 + * Destroy any existing back channel 1947 + */ 1948 + xprt_destroy_backchannel(xprt, UINT_MAX); 1949 + 1950 + /* 1946 1951 * Tear down transport state and free the rpc_xprt 1947 1952 */ 1948 1953 xprt->ops->destroy(xprt);
+2
net/sunrpc/xprtrdma/backchannel.c
··· 163 163 spin_lock(&xprt->bc_pa_lock); 164 164 list_add_tail(&rqst->rq_bc_pa_list, &xprt->bc_pa_list); 165 165 spin_unlock(&xprt->bc_pa_lock); 166 + xprt_put(xprt); 166 167 } 167 168 168 169 static struct rpc_rqst *rpcrdma_bc_rqst_get(struct rpcrdma_xprt *r_xprt) ··· 260 259 261 260 /* Queue rqst for ULP's callback service */ 262 261 bc_serv = xprt->bc_serv; 262 + xprt_get(xprt); 263 263 spin_lock(&bc_serv->sv_cb_lock); 264 264 list_add(&rqst->rq_bc_list, &bc_serv->sv_cb_list); 265 265 spin_unlock(&bc_serv->sv_cb_lock);
+2 -2
net/tipc/socket.c
··· 740 740 /* fall through */ 741 741 case TIPC_LISTEN: 742 742 case TIPC_CONNECTING: 743 - if (!skb_queue_empty(&sk->sk_receive_queue)) 743 + if (!skb_queue_empty_lockless(&sk->sk_receive_queue)) 744 744 revents |= EPOLLIN | EPOLLRDNORM; 745 745 break; 746 746 case TIPC_OPEN: ··· 748 748 revents |= EPOLLOUT; 749 749 if (!tipc_sk_type_connectionless(sk)) 750 750 break; 751 - if (skb_queue_empty(&sk->sk_receive_queue)) 751 + if (skb_queue_empty_lockless(&sk->sk_receive_queue)) 752 752 break; 753 753 revents |= EPOLLIN | EPOLLRDNORM; 754 754 break;
+3 -3
net/unix/af_unix.c
··· 2599 2599 mask |= EPOLLRDHUP | EPOLLIN | EPOLLRDNORM; 2600 2600 2601 2601 /* readable? */ 2602 - if (!skb_queue_empty(&sk->sk_receive_queue)) 2602 + if (!skb_queue_empty_lockless(&sk->sk_receive_queue)) 2603 2603 mask |= EPOLLIN | EPOLLRDNORM; 2604 2604 2605 2605 /* Connection-based need to check for termination and startup */ ··· 2628 2628 mask = 0; 2629 2629 2630 2630 /* exceptional events? */ 2631 - if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue)) 2631 + if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue)) 2632 2632 mask |= EPOLLERR | 2633 2633 (sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0); 2634 2634 ··· 2638 2638 mask |= EPOLLHUP; 2639 2639 2640 2640 /* readable? */ 2641 - if (!skb_queue_empty(&sk->sk_receive_queue)) 2641 + if (!skb_queue_empty_lockless(&sk->sk_receive_queue)) 2642 2642 mask |= EPOLLIN | EPOLLRDNORM; 2643 2643 2644 2644 /* Connection-based need to check for termination and startup */
+1 -1
net/vmw_vsock/af_vsock.c
··· 870 870 * the queue and write as long as the socket isn't shutdown for 871 871 * sending. 872 872 */ 873 - if (!skb_queue_empty(&sk->sk_receive_queue) || 873 + if (!skb_queue_empty_lockless(&sk->sk_receive_queue) || 874 874 (sk->sk_shutdown & RCV_SHUTDOWN)) { 875 875 mask |= EPOLLIN | EPOLLRDNORM; 876 876 }
+5
net/wireless/chan.c
··· 204 204 return false; 205 205 } 206 206 207 + /* channel 14 is only for IEEE 802.11b */ 208 + if (chandef->center_freq1 == 2484 && 209 + chandef->width != NL80211_CHAN_WIDTH_20_NOHT) 210 + return false; 211 + 207 212 if (cfg80211_chandef_is_edmg(chandef) && 208 213 !cfg80211_edmg_chandef_valid(chandef)) 209 214 return false;
+1 -1
net/wireless/nl80211.c
··· 393 393 [NL80211_ATTR_MNTR_FLAGS] = { /* NLA_NESTED can't be empty */ }, 394 394 [NL80211_ATTR_MESH_ID] = { .type = NLA_BINARY, 395 395 .len = IEEE80211_MAX_MESH_ID_LEN }, 396 - [NL80211_ATTR_MPATH_NEXT_HOP] = { .type = NLA_U32 }, 396 + [NL80211_ATTR_MPATH_NEXT_HOP] = NLA_POLICY_ETH_ADDR_COMPAT, 397 397 398 398 [NL80211_ATTR_REG_ALPHA2] = { .type = NLA_STRING, .len = 2 }, 399 399 [NL80211_ATTR_REG_RULES] = { .type = NLA_NESTED },
+2 -1
net/wireless/util.c
··· 1559 1559 } 1560 1560 1561 1561 if (freq == 2484) { 1562 - if (chandef->width > NL80211_CHAN_WIDTH_40) 1562 + /* channel 14 is only for IEEE 802.11b */ 1563 + if (chandef->width != NL80211_CHAN_WIDTH_20_NOHT) 1563 1564 return false; 1564 1565 1565 1566 *op_class = 82; /* channel 14 */
+6
net/xdp/xdp_umem.c
··· 27 27 { 28 28 unsigned long flags; 29 29 30 + if (!xs->tx) 31 + return; 32 + 30 33 spin_lock_irqsave(&umem->xsk_list_lock, flags); 31 34 list_add_rcu(&xs->list, &umem->xsk_list); 32 35 spin_unlock_irqrestore(&umem->xsk_list_lock, flags); ··· 38 35 void xdp_del_sk_umem(struct xdp_umem *umem, struct xdp_sock *xs) 39 36 { 40 37 unsigned long flags; 38 + 39 + if (!xs->tx) 40 + return; 41 41 42 42 spin_lock_irqsave(&umem->xsk_list_lock, flags); 43 43 list_del_rcu(&xs->list);
+1
security/lockdown/lockdown.c
··· 20 20 [LOCKDOWN_NONE] = "none", 21 21 [LOCKDOWN_MODULE_SIGNATURE] = "unsigned module loading", 22 22 [LOCKDOWN_DEV_MEM] = "/dev/mem,kmem,port", 23 + [LOCKDOWN_EFI_TEST] = "/dev/efi_test access", 23 24 [LOCKDOWN_KEXEC] = "kexec of unsigned images", 24 25 [LOCKDOWN_HIBERNATION] = "hibernation", 25 26 [LOCKDOWN_PCI_ACCESS] = "direct PCI access",
+17 -7
sound/core/timer.c
··· 226 226 return 0; 227 227 } 228 228 229 - static int snd_timer_close_locked(struct snd_timer_instance *timeri); 229 + static int snd_timer_close_locked(struct snd_timer_instance *timeri, 230 + struct device **card_devp_to_put); 230 231 231 232 /* 232 233 * open a timer instance ··· 239 238 { 240 239 struct snd_timer *timer; 241 240 struct snd_timer_instance *timeri = NULL; 241 + struct device *card_dev_to_put = NULL; 242 242 int err; 243 243 244 244 mutex_lock(&register_mutex); ··· 263 261 list_add_tail(&timeri->open_list, &snd_timer_slave_list); 264 262 err = snd_timer_check_slave(timeri); 265 263 if (err < 0) { 266 - snd_timer_close_locked(timeri); 264 + snd_timer_close_locked(timeri, &card_dev_to_put); 267 265 timeri = NULL; 268 266 } 269 267 goto unlock; ··· 315 313 timeri = NULL; 316 314 317 315 if (timer->card) 318 - put_device(&timer->card->card_dev); 316 + card_dev_to_put = &timer->card->card_dev; 319 317 module_put(timer->module); 320 318 goto unlock; 321 319 } ··· 325 323 timer->num_instances++; 326 324 err = snd_timer_check_master(timeri); 327 325 if (err < 0) { 328 - snd_timer_close_locked(timeri); 326 + snd_timer_close_locked(timeri, &card_dev_to_put); 329 327 timeri = NULL; 330 328 } 331 329 332 330 unlock: 333 331 mutex_unlock(&register_mutex); 332 + /* put_device() is called after unlock for avoiding deadlock */ 333 + if (card_dev_to_put) 334 + put_device(card_dev_to_put); 334 335 *ti = timeri; 335 336 return err; 336 337 } ··· 343 338 * close a timer instance 344 339 * call this with register_mutex down. 345 340 */ 346 - static int snd_timer_close_locked(struct snd_timer_instance *timeri) 341 + static int snd_timer_close_locked(struct snd_timer_instance *timeri, 342 + struct device **card_devp_to_put) 347 343 { 348 344 struct snd_timer *timer = timeri->timer; 349 345 struct snd_timer_instance *slave, *tmp; ··· 401 395 timer->hw.close(timer); 402 396 /* release a card refcount for safe disconnection */ 403 397 if (timer->card) 404 - put_device(&timer->card->card_dev); 398 + *card_devp_to_put = &timer->card->card_dev; 405 399 module_put(timer->module); 406 400 } 407 401 ··· 413 407 */ 414 408 int snd_timer_close(struct snd_timer_instance *timeri) 415 409 { 410 + struct device *card_dev_to_put = NULL; 416 411 int err; 417 412 418 413 if (snd_BUG_ON(!timeri)) 419 414 return -ENXIO; 420 415 421 416 mutex_lock(&register_mutex); 422 - err = snd_timer_close_locked(timeri); 417 + err = snd_timer_close_locked(timeri, &card_dev_to_put); 423 418 mutex_unlock(&register_mutex); 419 + /* put_device() is called after unlock for avoiding deadlock */ 420 + if (card_dev_to_put) 421 + put_device(card_dev_to_put); 424 422 return err; 425 423 } 426 424 EXPORT_SYMBOL(snd_timer_close);
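Editor's note: the timer rework collects the card device pointer while register_mutex is held and defers the put_device() until after the unlock, because the final put can run the card's release path, which may itself need the mutex. Distilled to its essentials, the new snd_timer_close() shape is:

	static int timer_instance_close(struct snd_timer_instance *timeri)
	{
		struct device *dev_to_put = NULL;
		int err;

		mutex_lock(&register_mutex);
		err = snd_timer_close_locked(timeri, &dev_to_put);
		mutex_unlock(&register_mutex);

		if (dev_to_put)			/* final put with no locks held */
			put_device(dev_to_put);
		return err;
	}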
+1 -2
sound/firewire/bebob/bebob_stream.c
··· 252 252 return err; 253 253 } 254 254 255 - static unsigned int 256 - map_data_channels(struct snd_bebob *bebob, struct amdtp_stream *s) 255 + static int map_data_channels(struct snd_bebob *bebob, struct amdtp_stream *s) 257 256 { 258 257 unsigned int sec, sections, ch, channels; 259 258 unsigned int pcm, midi, location;
-2
sound/hda/hdac_controller.c
··· 447 447 list_for_each_entry(azx_dev, &bus->stream_list, list) 448 448 snd_hdac_stream_updateb(azx_dev, SD_CTL, SD_INT_MASK, 0); 449 449 450 - synchronize_irq(bus->irq); 451 - 452 450 /* disable SIE for all streams */ 453 451 snd_hdac_chip_writeb(bus, INTCTL, 0); 454 452
+1 -1
sound/pci/hda/hda_intel.c
··· 1348 1348 } 1349 1349 1350 1350 if (bus->chip_init) { 1351 - azx_stop_chip(chip); 1352 1351 azx_clear_irq_pending(chip); 1353 1352 azx_stop_all_streams(chip); 1353 + azx_stop_chip(chip); 1354 1354 } 1355 1355 1356 1356 if (bus->irq >= 0)
+6 -4
sound/pci/hda/patch_hdmi.c
··· 145 145 struct snd_array pins; /* struct hdmi_spec_per_pin */ 146 146 struct hdmi_pcm pcm_rec[16]; 147 147 struct mutex pcm_lock; 148 + struct mutex bind_lock; /* for audio component binding */ 148 149 /* pcm_bitmap means which pcms have been assigned to pins*/ 149 150 unsigned long pcm_bitmap; 150 151 int pcm_used; /* counter of pcm_rec[] */ ··· 2259 2258 struct hdmi_spec *spec = codec->spec; 2260 2259 int pin_idx; 2261 2260 2262 - mutex_lock(&spec->pcm_lock); 2261 + mutex_lock(&spec->bind_lock); 2263 2262 spec->use_jack_detect = !codec->jackpoll_interval; 2264 2263 for (pin_idx = 0; pin_idx < spec->num_pins; pin_idx++) { 2265 2264 struct hdmi_spec_per_pin *per_pin = get_pin(spec, pin_idx); ··· 2276 2275 snd_hda_jack_detect_enable_callback(codec, pin_nid, 2277 2276 jack_callback); 2278 2277 } 2279 - mutex_unlock(&spec->pcm_lock); 2278 + mutex_unlock(&spec->bind_lock); 2280 2279 return 0; 2281 2280 } 2282 2281 ··· 2383 2382 spec->ops = generic_standard_hdmi_ops; 2384 2383 spec->dev_num = 1; /* initialize to 1 */ 2385 2384 mutex_init(&spec->pcm_lock); 2385 + mutex_init(&spec->bind_lock); 2386 2386 snd_hdac_register_chmap_ops(&codec->core, &spec->chmap); 2387 2387 2388 2388 spec->chmap.ops.get_chmap = hdmi_get_chmap; ··· 2453 2451 int i; 2454 2452 2455 2453 spec = container_of(acomp->audio_ops, struct hdmi_spec, drm_audio_ops); 2456 - mutex_lock(&spec->pcm_lock); 2454 + mutex_lock(&spec->bind_lock); 2457 2455 spec->use_acomp_notifier = use_acomp; 2458 2456 spec->codec->relaxed_resume = use_acomp; 2459 2457 /* reprogram each jack detection logic depending on the notifier */ ··· 2463 2461 get_pin(spec, i)->pin_nid, 2464 2462 use_acomp); 2465 2463 } 2466 - mutex_unlock(&spec->pcm_lock); 2464 + mutex_unlock(&spec->bind_lock); 2467 2465 } 2468 2466 2469 2467 /* enable / disable the notifier via master bind / unbind */
+11
sound/pci/hda/patch_realtek.c
··· 409 409 case 0x10ec0672: 410 410 alc_update_coef_idx(codec, 0xd, 0, 1<<14); /* EAPD Ctrl */ 411 411 break; 412 + case 0x10ec0623: 413 + alc_update_coef_idx(codec, 0x19, 1<<13, 0); 414 + break; 412 415 case 0x10ec0668: 413 416 alc_update_coef_idx(codec, 0x7, 3<<13, 0); 414 417 break; ··· 2923 2920 ALC269_TYPE_ALC225, 2924 2921 ALC269_TYPE_ALC294, 2925 2922 ALC269_TYPE_ALC300, 2923 + ALC269_TYPE_ALC623, 2926 2924 ALC269_TYPE_ALC700, 2927 2925 }; 2928 2926 ··· 2959 2955 case ALC269_TYPE_ALC225: 2960 2956 case ALC269_TYPE_ALC294: 2961 2957 case ALC269_TYPE_ALC300: 2958 + case ALC269_TYPE_ALC623: 2962 2959 case ALC269_TYPE_ALC700: 2963 2960 ssids = alc269_ssids; 2964 2961 break; ··· 7221 7216 SND_PCI_QUIRK(0x17aa, 0x312f, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 7222 7217 SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 7223 7218 SND_PCI_QUIRK(0x17aa, 0x3151, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC), 7219 + SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC), 7220 + SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC), 7224 7221 SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), 7225 7222 SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC), 7226 7223 SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI), ··· 8023 8016 case 0x10ec0300: 8024 8017 spec->codec_variant = ALC269_TYPE_ALC300; 8025 8018 spec->gen.mixer_nid = 0; /* no loopback on ALC300 */ 8019 + break; 8020 + case 0x10ec0623: 8021 + spec->codec_variant = ALC269_TYPE_ALC623; 8026 8022 break; 8027 8023 case 0x10ec0700: 8028 8024 case 0x10ec0701: ··· 9228 9218 HDA_CODEC_ENTRY(0x10ec0298, "ALC298", patch_alc269), 9229 9219 HDA_CODEC_ENTRY(0x10ec0299, "ALC299", patch_alc269), 9230 9220 HDA_CODEC_ENTRY(0x10ec0300, "ALC300", patch_alc269), 9221 + HDA_CODEC_ENTRY(0x10ec0623, "ALC623", patch_alc269), 9231 9222 HDA_CODEC_REV_ENTRY(0x10ec0861, 0x100340, "ALC660", patch_alc861), 9232 9223 HDA_CODEC_ENTRY(0x10ec0660, "ALC660-VD", patch_alc861vd), 9233 9224 HDA_CODEC_ENTRY(0x10ec0861, "ALC861", patch_alc861),
+1
sound/usb/quirks.c
··· 1657 1657 case 0x23ba: /* Playback Designs */ 1658 1658 case 0x25ce: /* Mytek devices */ 1659 1659 case 0x278b: /* Rotel? */ 1660 + case 0x292b: /* Gustard/Ess based devices */ 1660 1661 case 0x2ab6: /* T+A devices */ 1661 1662 case 0x3842: /* EVGA */ 1662 1663 case 0xc502: /* HiBy devices */
+5
tools/testing/selftests/bpf/test_offload.py
··· 22 22 import pprint 23 23 import random 24 24 import re 25 + import stat 25 26 import string 26 27 import struct 27 28 import subprocess ··· 312 311 for f in out.split(): 313 312 if f == "ports": 314 313 continue 314 + 315 315 p = os.path.join(path, f) 316 + if not os.stat(p).st_mode & stat.S_IRUSR: 317 + continue 318 + 316 319 if os.path.isfile(p): 317 320 _, out = cmd('cat %s/%s' % (path, f)) 318 321 dfs[f] = out.strip()
+1 -1
tools/testing/selftests/bpf/test_tc_edt.sh
··· 59 59 60 60 # start the listener 61 61 ip netns exec ${NS_DST} bash -c \ 62 - "nc -4 -l -s ${IP_DST} -p 9000 >/dev/null &" 62 + "nc -4 -l -p 9000 >/dev/null &" 63 63 declare -i NC_PID=$! 64 64 sleep 1 65 65
+21
tools/testing/selftests/net/fib_tests.sh
··· 1438 1438 fi 1439 1439 log_test $rc 0 "Prefix route with metric on link up" 1440 1440 1441 + # explicitly check for metric changes on edge scenarios 1442 + run_cmd "$IP addr flush dev dummy2" 1443 + run_cmd "$IP addr add dev dummy2 172.16.104.0/24 metric 259" 1444 + run_cmd "$IP addr change dev dummy2 172.16.104.0/24 metric 260" 1445 + rc=$? 1446 + if [ $rc -eq 0 ]; then 1447 + check_route "172.16.104.0/24 dev dummy2 proto kernel scope link src 172.16.104.0 metric 260" 1448 + rc=$? 1449 + fi 1450 + log_test $rc 0 "Modify metric of .0/24 address" 1451 + 1452 + run_cmd "$IP addr flush dev dummy2" 1453 + run_cmd "$IP addr add dev dummy2 172.16.104.1/32 peer 172.16.104.2 metric 260" 1454 + run_cmd "$IP addr change dev dummy2 172.16.104.1/32 peer 172.16.104.2 metric 261" 1455 + rc=$? 1456 + if [ $rc -eq 0 ]; then 1457 + check_route "172.16.104.2 dev dummy2 proto kernel scope link src 172.16.104.1 metric 261" 1458 + rc=$? 1459 + fi 1460 + log_test $rc 0 "Modify metric of address with peer route" 1461 + 1441 1462 $IP li del dummy1 1442 1463 $IP li del dummy2 1443 1464 cleanup
tools/testing/selftests/net/l2tp.sh
+2 -1
tools/testing/selftests/net/reuseport_dualstack.c
··· 129 129 { 130 130 struct epoll_event ev; 131 131 int epfd, i, test_fd; 132 - uint16_t test_family; 132 + int test_family; 133 133 socklen_t len; 134 134 135 135 epfd = epoll_create(1); ··· 146 146 send_from_v4(proto); 147 147 148 148 test_fd = receive_once(epfd, proto); 149 + len = sizeof(test_family); 149 150 if (getsockopt(test_fd, SOL_SOCKET, SO_DOMAIN, &test_family, &len)) 150 151 error(1, errno, "failed to read socket domain"); 151 152 if (test_family != AF_INET)
+4 -2
tools/usb/usbip/libsrc/usbip_device_driver.c
··· 69 69 FILE *fd = NULL; 70 70 struct udev_device *plat; 71 71 const char *speed; 72 - int ret = 0; 72 + size_t ret; 73 73 74 74 plat = udev_device_get_parent(sdev); 75 75 path = udev_device_get_syspath(plat); ··· 79 79 if (!fd) 80 80 return -1; 81 81 ret = fread((char *) &descr, sizeof(descr), 1, fd); 82 - if (ret < 0) 82 + if (ret != 1) { 83 + err("Cannot read vudc device descr file: %s", strerror(errno)); 83 84 goto err; 85 + } 84 86 fclose(fd); 85 87 86 88 copy_descr_attr(dev, &descr, bDeviceClass);
+26 -22
virt/kvm/kvm_main.c
··· 627 627 628 628 static struct kvm *kvm_create_vm(unsigned long type) 629 629 { 630 - int r, i; 631 630 struct kvm *kvm = kvm_arch_alloc_vm(); 631 + int r = -ENOMEM; 632 + int i; 632 633 633 634 if (!kvm) 634 635 return ERR_PTR(-ENOMEM); ··· 641 640 mutex_init(&kvm->lock); 642 641 mutex_init(&kvm->irq_lock); 643 642 mutex_init(&kvm->slots_lock); 644 - refcount_set(&kvm->users_count, 1); 645 643 INIT_LIST_HEAD(&kvm->devices); 646 644 645 + BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX); 646 + 647 + for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { 648 + struct kvm_memslots *slots = kvm_alloc_memslots(); 649 + 650 + if (!slots) 651 + goto out_err_no_arch_destroy_vm; 652 + /* Generations must be different for each address space. */ 653 + slots->generation = i; 654 + rcu_assign_pointer(kvm->memslots[i], slots); 655 + } 656 + 657 + for (i = 0; i < KVM_NR_BUSES; i++) { 658 + rcu_assign_pointer(kvm->buses[i], 659 + kzalloc(sizeof(struct kvm_io_bus), GFP_KERNEL_ACCOUNT)); 660 + if (!kvm->buses[i]) 661 + goto out_err_no_arch_destroy_vm; 662 + } 663 + 664 + refcount_set(&kvm->users_count, 1); 647 665 r = kvm_arch_init_vm(kvm, type); 648 666 if (r) 649 - goto out_err_no_disable; 667 + goto out_err_no_arch_destroy_vm; 650 668 651 669 r = hardware_enable_all(); 652 670 if (r) ··· 675 655 INIT_HLIST_HEAD(&kvm->irq_ack_notifier_list); 676 656 #endif 677 657 678 - BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX); 679 - 680 - r = -ENOMEM; 681 - for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { 682 - struct kvm_memslots *slots = kvm_alloc_memslots(); 683 - if (!slots) 684 - goto out_err_no_srcu; 685 - /* Generations must be different for each address space. */ 686 - slots->generation = i; 687 - rcu_assign_pointer(kvm->memslots[i], slots); 688 - } 689 - 690 658 if (init_srcu_struct(&kvm->srcu)) 691 659 goto out_err_no_srcu; 692 660 if (init_srcu_struct(&kvm->irq_srcu)) 693 661 goto out_err_no_irq_srcu; 694 - for (i = 0; i < KVM_NR_BUSES; i++) { 695 - rcu_assign_pointer(kvm->buses[i], 696 - kzalloc(sizeof(struct kvm_io_bus), GFP_KERNEL_ACCOUNT)); 697 - if (!kvm->buses[i]) 698 - goto out_err; 699 - } 700 662 701 663 r = kvm_init_mmu_notifier(kvm); 702 664 if (r) ··· 699 697 out_err_no_srcu: 700 698 hardware_disable_all(); 701 699 out_err_no_disable: 702 - refcount_set(&kvm->users_count, 0); 700 + kvm_arch_destroy_vm(kvm); 701 + WARN_ON_ONCE(!refcount_dec_and_test(&kvm->users_count)); 702 + out_err_no_arch_destroy_vm: 703 703 for (i = 0; i < KVM_NR_BUSES; i++) 704 704 kfree(kvm_get_bus(kvm, i)); 705 705 for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
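Editor's note: the kvm_create_vm() rework reorders setup so that users_count is set to 1 only once everything kvm_destroy_vm() would dereference already exists; the error path can then run kvm_arch_destroy_vm() and a plain refcount_dec_and_test() instead of faking a zero refcount. The general rule, sketched with hypothetical helpers (foo_alloc_tables/foo_free_tables are stand-ins):

	struct foo {
		refcount_t users;
		/* ... */
	};

	static int foo_alloc_tables(struct foo *f);	/* hypothetical */
	static void foo_free_tables(struct foo *f);	/* hypothetical,
							 * tolerates partial init */

	static struct foo *foo_create(void)
	{
		struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);
		int err;

		if (!f)
			return ERR_PTR(-ENOMEM);

		err = foo_alloc_tables(f);
		if (err)
			goto out_free;

		/* publish the reference only when teardown is fully safe */
		refcount_set(&f->users, 1);
		return f;

	out_free:
		foo_free_tables(f);
		kfree(f);
		return ERR_PTR(err);
	}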